Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.
Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua
2018-02-01
Multi-source interior computed tomography (CT) has great potential to provide ultra-fast, organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises image quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of requiring no hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically and used to calculate X-ray scattering signals in both the forward direction and the cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experiments that were designed to emulate the image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images converged rapidly toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the regions of interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, respectively, and the contrast-to-noise ratio at those ROIs increased by up to 44.3% and up to 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.
Impact of reconstruction parameters on quantitative I-131 SPECT
NASA Astrophysics Data System (ADS)
van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.
2016-07-01
Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high-energy photons, however, render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods for these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs and (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast-to-noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction. The quantification error relative to a dose-calibrator-derived measurement was found to be <1%, -26% and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since the factor may be patient dependent. Monte Carlo based scatter correction including accurately simulated CDR modelling is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.
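As a concrete illustration of the triple energy window (TEW) estimate referred to above, the sketch below applies the standard TEW trapezoid formula to hypothetical per-pixel counts; the window widths and count values are illustrative assumptions, not the acquisition settings used in the study.

```python
import numpy as np

def tew_scatter_estimate(counts_lower, counts_upper, w_lower, w_upper, w_peak):
    """Standard triple-energy-window (TEW) scatter estimate.

    The scatter inside the photopeak window is approximated by the area of a
    trapezoid whose sides are the count densities (counts per keV) measured in
    two narrow windows flanking the photopeak.
    """
    density_lower = counts_lower / w_lower   # counts per keV, lower sub-window
    density_upper = counts_upper / w_upper   # counts per keV, upper sub-window
    return 0.5 * (density_lower + density_upper) * w_peak

# Hypothetical per-pixel window counts for an I-131 (364 keV) acquisition:
photopeak = np.array([1200.0, 950.0, 430.0])
lower_win = np.array([180.0, 140.0, 60.0])
upper_win = np.array([90.0, 70.0, 35.0])

scatter = tew_scatter_estimate(lower_win, upper_win, w_lower=6.0, w_upper=6.0, w_peak=58.0)
primary = np.clip(photopeak - scatter, 0.0, None)   # scatter-corrected counts
print(primary)
```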
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, Peng; Hutton, Brian F.; Holstensson, Maria
2015-12-15
Purpose: The energy spectrum for a cadmium zinc telluride (CZT) detector has a low energy tail due to incomplete charge collection and intercrystal scattering. Due to these solid-state detector effects, scatter would be overestimated if the conventional triple-energy window (TEW) method is used for scatter and crosstalk corrections in CZT-based imaging systems. The objective of this work is to develop a scatter and crosstalk correction method for 99mTc/123I dual-radionuclide imaging for a CZT-based dedicated cardiac SPECT system with pinhole collimators (GE Discovery NM 530c/570c). Methods: A tailing model was developed to account for the low energy tail effects of the CZT detector. The parameters of the model were obtained using 99mTc and 123I point source measurements. A scatter model was defined to characterize the relationship between down-scatter and self-scatter projections. The parameters for this model were obtained from Monte Carlo simulation using SIMIND. The tailing and scatter models were further incorporated into a projection count model, and the primary and self-scatter projections of each radionuclide were determined with a maximum likelihood expectation maximization (MLEM) iterative estimation approach. The extracted scatter and crosstalk projections were then incorporated into MLEM image reconstruction as an additive term in forward projection to obtain scatter- and crosstalk-corrected images. The proposed method was validated using Monte Carlo simulation, a line source experiment, anthropomorphic torso phantom studies, and patient studies. The performance of the proposed method was also compared to that obtained with the conventional TEW method. Results: Monte Carlo simulations and the line source experiment demonstrated that the TEW method overestimated scatter, while the proposed method provided more accurate scatter estimation by considering the low energy tail effect. In the phantom study, improved defect contrasts were observed with both correction methods compared to no correction, especially for the images of 99mTc in dual-radionuclide imaging where there is heavy contamination from 123I. In this case, the nontransmural defect contrast was improved from 0.39 to 0.47 with the TEW method and to 0.51 with the proposed method, and the transmural defect contrast was improved from 0.62 to 0.74 with the TEW method and to 0.73 with the proposed method. In the patient study, the proposed method provided higher myocardium-to-blood pool contrast than the TEW method. Similar to the phantom experiment, the improvement was most substantial for the images of 99mTc in dual-radionuclide imaging. In this case, the myocardium-to-blood pool ratio was improved from 7.0 to 38.3 with the TEW method and to 63.6 with the proposed method. Compared to the TEW method, the proposed method also provided higher count levels in the reconstructed images in both phantom and patient studies, indicating reduced overestimation of scatter. Using the proposed method, consistent reconstruction results were obtained for both single-radionuclide data with scatter correction and dual-radionuclide data with scatter and crosstalk corrections, in both phantom and human studies.
Conclusions: The authors demonstrate that the TEW method leads to overestimation of scatter and crosstalk for the CZT-based imaging system, while the proposed scatter and crosstalk correction method can provide more accurate self-scatter and down-scatter estimations for quantitative single-radionuclide and dual-radionuclide imaging.
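The additive-term idea described above (a fixed scatter/crosstalk estimate added to the forward projection inside MLEM) can be sketched generically as below; the dense system matrix, count data, and scatter values are hypothetical and stand in for the pinhole SPECT system model.

```python
import numpy as np

def mlem_with_additive_scatter(A, y, s, n_iter=50, eps=1e-12):
    """MLEM reconstruction with a fixed additive scatter/crosstalk estimate.

    A : (n_bins, n_voxels) system matrix
    y : (n_bins,) measured projection counts
    s : (n_bins,) estimated scatter + crosstalk counts, added in the forward model
    """
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0) + eps                # sensitivity image A^T 1
    for _ in range(n_iter):
        fwd = A @ x + s + eps                 # forward projection includes scatter
        x *= (A.T @ (y / fwd)) / sens         # multiplicative MLEM update
    return x

# Tiny hypothetical example: 4 detector bins, 3 voxels.
A = np.array([[1.0, 0.5, 0.0],
              [0.2, 1.0, 0.3],
              [0.0, 0.6, 1.0],
              [0.4, 0.4, 0.4]])
x_true = np.array([2.0, 1.0, 3.0])
s = np.full(4, 0.3)                           # hypothetical scatter estimate
y = np.random.poisson(A @ x_true + s).astype(float)
print(mlem_with_additive_scatter(A, y, s))
```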
Topographic correction realization based on the CBERS-02B image
NASA Astrophysics Data System (ADS)
Qin, Hui-ping; Yi, Wei-ning; Fang, Yong-hua
2011-08-01
The rugged topography of mountain terrain distorts retrievals, so that identical land-cover types show different surface spectra. To improve the accuracy of studies of topographic surface characteristics, many researchers have focused on topographic correction. Topographic correction methods can be statistical-empirical or physical models, among which methods based on digital elevation model (DEM) data are the most popular. Restricted by spatial resolution, previous models mostly corrected topographic effects on Landsat TM images, whose 30-meter resolution matches DEMs that are easily obtained from the internet or derived from digital maps. Topographic correction has also been applied to high-spatial-resolution images such as QuickBird and Ikonos, but there is little comparable research on CBERS-02B imagery. In this study, mountainous terrain in Liaoning was taken as the study area. The 15-meter original digital elevation model was interpolated step by step to 2.36 meters. The C correction, SCS+C correction, Minnaert correction and Ekstrand-r correction were applied to remove the topographic effect, and the corrected results were compared. Scatter diagrams between the image digital number and the cosine of the solar incidence angle with respect to the surface normal were produced, and the mean value, standard deviation, slope of the scatter diagram, and a separation factor were calculated. The analysis shows that shadows are weakened in the corrected images compared with the originals, the three-dimensional relief effect is removed, and the absolute slope of the fitted lines in the scatter diagrams is reduced. The Minnaert correction gave the most effective result. These findings demonstrate that these established correction methods can be successfully adapted to CBERS-02B images, and that DEM data can be interpolated step by step to approximately the required spatial resolution when high-resolution elevation data are hard to obtain.
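For reference, the C correction and Minnaert correction named above follow simple closed forms; the sketch below (with illustrative digital numbers and per-pixel illumination cosines derived from a DEM) shows one common way to fit and apply them, and is not the exact workflow of the study.

```python
import numpy as np

def c_correction(band, cos_i, cos_sz):
    """C-correction: regress DN against cos(i), then rescale each pixel.

    band   : 2-D array of digital numbers
    cos_i  : cosine of the local solar incidence angle (from the DEM)
    cos_sz : cosine of the solar zenith angle (scalar)
    """
    m, b = np.polyfit(cos_i.ravel(), band.ravel(), 1)   # DN = m*cos(i) + b
    c = b / m
    return band * (cos_sz + c) / (cos_i + c)

def minnaert_correction(band, cos_i, cos_sz):
    """Minnaert correction with a globally fitted k exponent (approximate)."""
    valid = (cos_i > 0) & (band > 0)
    x = np.log(cos_i[valid] / cos_sz)
    y = np.log(band[valid])
    k, _ = np.polyfit(x, y, 1)                           # slope estimates k
    return band * (cos_sz / np.clip(cos_i, 1e-3, None)) ** k

# Hypothetical 2x2 scene: DN values and per-pixel cos(i) from a DEM.
band = np.array([[80.0, 120.0], [60.0, 140.0]])
cos_i = np.array([[0.4, 0.8], [0.3, 0.9]])
print(c_correction(band, cos_i, cos_sz=0.85))
print(minnaert_correction(band, cos_i, cos_sz=0.85))
```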
Evaluation of simulation-based scatter correction for 3-D PET cardiac imaging
NASA Astrophysics Data System (ADS)
Watson, C. C.; Newport, D.; Casey, M. E.; deKemp, R. A.; Beanlands, R. S.; Schmand, M.
1997-02-01
Quantitative imaging of the human thorax poses one of the most difficult challenges for three-dimensional (3-D) (septaless) positron emission tomography (PET), due to the strong attenuation of the annihilation radiation and the large contribution of scattered photons to the data. In [18F]fluorodeoxyglucose (FDG) studies of the heart with the patient's arms in the field of view, the contribution of scattered events can exceed 50% of the total detected coincidences. Accurate correction for this scatter component is necessary for meaningful quantitative image analysis and tracer kinetic modeling. For this reason, the authors have implemented a single-scatter simulation technique for scatter correction in positron volume imaging. Here, they describe this algorithm and present scatter correction results from human and chest phantom studies.
Library based x-ray scatter correction for dedicated cone beam breast CT
Shi, Linxi; Karellas, Andrew; Zhu, Lei
2016-01-01
Purpose: The image quality of dedicated cone beam breast CT (CBBCT) is limited by substantial scatter contamination, resulting in cupping artifacts and contrast loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose a library-based software approach to suppress scatter on CBBCT images with high efficiency, accuracy, and reliability. Methods: The authors precompute a scatter library on simplified breast models of different sizes using the Geant4-based Monte Carlo (MC) toolkit. The breast is approximated as a semiellipsoid with a homogeneous glandular/adipose tissue mixture. For scatter correction on real clinical data, the authors estimate the breast size from a first-pass breast CT reconstruction and then select the corresponding scatter distribution from the library. The selected scatter distribution from the simplified breast models is spatially translated to match the projection data from the clinical scan and is subtracted from the measured projection for effective scatter correction. The method performance was evaluated using 15 sets of patient data, with a wide range of breast sizes representing about 95% of the general population. Spatial nonuniformity (SNU) and contrast to signal deviation ratio (CDR) were used as metrics for evaluation. Results: Since the time-consuming MC simulation for library generation is precomputed, the authors' method efficiently corrects for scatter with minimal processing time. Furthermore, the authors find that a scatter library on a simple breast model with only one input parameter, i.e., the breast diameter, sufficiently guarantees improvements in SNU and CDR. For the 15 clinical datasets, the authors' method reduces the average SNU from 7.14% to 2.47% in coronal views and from 10.14% to 3.02% in sagittal views. On average, the CDR is improved by a factor of 1.49 in coronal views and 2.12 in sagittal views. Conclusions: The library-based scatter correction does not require an increase in radiation dose or hardware modifications, and it improves on existing methods in implementation simplicity and computational efficiency. As demonstrated through patient studies, the authors' approach is effective and stable, and is therefore clinically attractive for CBBCT imaging. PMID:27487870
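A minimal sketch of the library look-up and subtraction step described above is given below; the dictionary of Monte Carlo scatter maps keyed by breast diameter is a placeholder, and the spatial-translation/alignment step of the real method is omitted.

```python
import numpy as np

def correct_with_library(projection, breast_diameter_cm, scatter_library):
    """Pick the closest precomputed scatter map by breast diameter and subtract it.

    scatter_library : dict mapping diameter (cm) -> 2-D scatter estimate on the
                      same grid as the projection (alignment omitted here).
    """
    diameters = np.array(sorted(scatter_library))
    nearest = diameters[np.argmin(np.abs(diameters - breast_diameter_cm))]
    scatter = scatter_library[nearest]
    return np.clip(projection - scatter, 0.0, None)

# Hypothetical library of Monte Carlo scatter maps for two breast sizes.
library = {10.0: np.full((4, 4), 5.0), 14.0: np.full((4, 4), 9.0)}
proj = np.full((4, 4), 50.0)
print(correct_with_library(proj, breast_diameter_cm=13.2, scatter_library=library))
```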
Meng, Bowen; Lee, Ho; Xing, Lei; Fahimian, Benjamin P.
2013-01-01
Purpose: X-ray scatter results in a significant degradation of image quality in computed tomography (CT), representing a major limitation in cone-beam CT (CBCT) and large field-of-view diagnostic scanners. In this work, a novel scatter estimation and correction technique is proposed that utilizes peripheral detection of scatter during the patient scan to simultaneously acquire image and patient-specific scatter information in a single scan, in conjunction with a proposed compressed sensing scatter recovery technique to reconstruct and correct for the patient-specific scatter in the projection space. Methods: The method consists of the detection of patient scatter at the edges of the field of view (FOV) followed by measurement-based compressed sensing recovery of the scatter throughout the projection space. In the prototype implementation, the kV x-ray source of the Varian TrueBeam OBI system was blocked at the edges of the projection FOV, and the image detector in the corresponding blocked region was used for scatter detection. The design enables acquisition of projection data in the unblocked central region of the FOV and of scatter data in the blocked boundary regions. For the initial scatter estimation in the central FOV, a prior consisting of a hybrid scatter model that combines a scatter interpolation method and a scatter convolution model is estimated using the acquired scatter distribution in the boundary region. With the hybrid scatter estimation model, compressed sensing optimization is performed to generate the scatter map by penalizing the L1 norm of the discrete cosine transform of the scatter signal. The estimated scatter is subtracted from the projection data by soft-tuning, and the scatter-corrected CBCT volume is obtained by the conventional Feldkamp-Davis-Kress algorithm. Experimental studies using image quality and anthropomorphic phantoms on a Varian TrueBeam system were carried out to evaluate the performance of the proposed scheme. Results: The scatter shading artifacts were markedly suppressed in the reconstructed images using the proposed method. On the Catphan 504 phantom, the proposed method reduced the CT number error to 13 Hounsfield units, 10% of that without scatter correction, and increased the image contrast by a factor of 2 in high-contrast regions. On the anthropomorphic phantom, the spatial nonuniformity decreased from 10.8% to 6.8% after correction. Conclusions: A novel scatter correction method, enabling unobstructed acquisition of the high-frequency image data and concurrent detection of the patient-specific low-frequency scatter data at the edges of the FOV, is proposed and validated in this work. Rather than obstructing the central portion of the FOV, which in blocker-based techniques degrades and limits the image reconstruction, compressed sensing is used to solve for the scatter from its detection at the periphery of the FOV, enabling the highest-quality reconstruction in the central region and robust patient-specific scatter correction. PMID:23298098
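The compressed-sensing step described above (penalizing the L1 norm of the discrete cosine transform of the scatter signal, constrained by scatter measured at the FOV edges) can be sketched with a simple ISTA loop; the solver, mask geometry, and omission of the hybrid prior are assumptions of this sketch, not the authors' implementation.

```python
import numpy as np
from scipy.fft import dctn, idctn

def recover_scatter(measured, mask, lam=0.05, n_iter=200, step=1.0):
    """Recover a full scatter map from edge measurements by penalising the L1
    norm of its 2-D DCT (ISTA iterations).

    measured : 2-D array holding scatter values where mask is True (0 elsewhere)
    mask     : boolean array, True at the blocked boundary pixels
    """
    s = measured.copy()
    for _ in range(n_iter):
        grad = np.where(mask, s - measured, 0.0)                  # data fidelity
        z = dctn(s - step * grad, norm="ortho")                   # to DCT domain
        z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
        s = idctn(z, norm="ortho")
    return s

# Hypothetical smooth scatter field sampled only at the left/right image edges.
x = np.linspace(0, 1, 64)
true_scatter = 10 + 5 * np.outer(np.sin(np.pi * x), np.cos(np.pi * x / 2))
mask = np.zeros_like(true_scatter, dtype=bool)
mask[:, :4] = mask[:, -4:] = True
est = recover_scatter(np.where(mask, true_scatter, 0.0), mask)
print(float(np.abs(est[mask] - true_scatter[mask]).mean()))
```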
NOTE: Acceleration of Monte Carlo-based scatter compensation for cardiac SPECT
NASA Astrophysics Data System (ADS)
Sohlberg, A.; Watabe, H.; Iida, H.
2008-07-01
Single photon emission computed tomography (SPECT) images are degraded by photon scatter, making scatter compensation essential for accurate reconstruction. Reconstruction-based scatter compensation with Monte Carlo (MC) modelling of scatter shows promise for accurate scatter correction, but it is normally hampered by long computation times. The aim of this work was to accelerate the MC-based scatter compensation using coarse-grid and intermittent scatter modelling. The acceleration methods were compared to an un-accelerated implementation using MC-simulated projection data of the mathematical cardiac torso (MCAT) phantom modelling 99mTc uptake and clinical myocardial perfusion studies. The results showed that, when combined, the acceleration methods reduced the reconstruction time for 10 ordered subset expectation maximization (OS-EM) iterations from 56 to 11 min without a significant reduction in image quality, indicating that coarse-grid and intermittent scatter modelling are suitable for MC-based scatter compensation in cardiac SPECT.
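A generic sketch of the intermittent-scatter-modelling idea follows: the expensive Monte Carlo scatter estimate is refreshed only every few iterations and held fixed in between. The tiny system matrix and the crude stand-in "simulator" below are hypothetical, not the MC model used in the study.

```python
import numpy as np

def osem_intermittent_scatter(A, y, simulate_scatter, n_iter=10, scatter_every=3):
    """EM-style loop (single subset shown for brevity) in which the costly
    Monte Carlo scatter model is re-run only every `scatter_every` iterations
    and held fixed in between.
    """
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0) + 1e-12
    scatter = np.zeros(A.shape[0])
    for it in range(n_iter):
        if it % scatter_every == 0:
            scatter = simulate_scatter(x)          # expensive MC step, done rarely
        fwd = A @ x + scatter + 1e-12
        x *= (A.T @ (y / fwd)) / sens
    return x

# Tiny hypothetical system with a crude "simulator": 10% of the primary forward projection.
A = np.array([[1.0, 0.3], [0.2, 1.0], [0.5, 0.5]])
y = np.array([12.0, 9.0, 10.0])
print(osem_intermittent_scatter(A, y, simulate_scatter=lambda x: 0.1 * (A @ x)))
```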
NASA Astrophysics Data System (ADS)
Xie, Shi-Peng; Luo, Li-Min
2012-06-01
The authors propose a combined scatter reduction and correction method to improve image quality in cone beam computed tomography (CBCT). The scatter kernel superposition (SKS) method has been used in previous studies; this work differs in that a scatter detecting blocker (SDB) is placed between the X-ray source and the tested object to model a self-adaptive scatter kernel. The study first evaluates the scatter kernel parameters using the SDB, and then isolates the scatter distribution based on the SKS. Image quality can be improved by removing the scatter distribution. The results show that the method can effectively reduce scatter artifacts and increase image quality: our approach increases the image contrast and reduces the magnitude of cupping. The accuracy of the SKS technique is significantly improved in our method by using a self-adaptive scatter kernel. This method is computationally efficient, easy to implement, and provides scatter correction using a single scan acquisition.
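A drastically simplified scatter-kernel-superposition sketch is shown below: each primary pixel is treated as a scatter source spread by a broad kernel. The constant amplitude and fixed Gaussian kernel are illustrative stand-ins for the self-adaptive kernel that the method above fits from the scatter-detecting blocker.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sks_scatter_estimate(primary, amplitude=0.15, sigma_px=40.0):
    """Simplified scatter-kernel-superposition estimate: the scatter map is the
    primary image convolved with a broad Gaussian kernel and scaled by a
    constant amplitude. Both parameters are hypothetical constants here."""
    return amplitude * gaussian_filter(primary, sigma=sigma_px, mode="nearest")

# Hypothetical open-field projection with an attenuating object in the centre.
proj = np.full((128, 128), 1000.0)
proj[32:96, 32:96] = 300.0
scatter = sks_scatter_estimate(proj)
corrected = np.clip(proj - scatter, 0.0, None)
print(float(scatter.max()), float(corrected.min()))
```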
NASA Astrophysics Data System (ADS)
Park, K.-R.; Kim, K.-h.; Kwak, S.; Svensson, J.; Lee, J.; Ghim, Y.-c.
2017-11-01
A feasibility study of direct spectral measurements of Thomson-scattered photons for fusion-grade plasmas is performed, based on a forward model of the KSTAR Thomson scattering system. Expected spectra in the forward model are calculated based on the Selden function including the relativistic polarization correction. Noise in the signal is modeled with photon noise and Gaussian electrical noise. Electron temperature and density are inferred using Bayesian probability theory. Based on the bias error, full width at half maximum and entropy of the posterior distributions, spectral measurements are found to be feasible. Comparisons between spectrometer-based and polychromator-based Thomson scattering systems are performed with varying quantum efficiency and electrical noise levels.
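To illustrate the inference workflow only (forward spectral model plus Bayesian estimation of Te and ne), the sketch below uses a simplified Gaussian spectral shape as a stand-in for the Selden function and a lumped Gaussian noise model; all scalings and grids are hypothetical.

```python
import numpy as np

def thomson_spectrum(wl_nm, Te_eV, ne, wl0_nm=1064.0):
    """Very simplified (non-relativistic, Gaussian) stand-in for the Selden
    spectral shape: Doppler width grows as sqrt(Te), amplitude scales with ne.
    Used only to illustrate the inference workflow, not as a physics model."""
    width = 2.0 * np.sqrt(Te_eV / 1000.0)            # nm, illustrative scaling
    return ne * np.exp(-0.5 * ((wl_nm - wl0_nm) / width) ** 2)

def grid_posterior(wl, counts, Te_grid, ne_grid, sigma_noise):
    """Brute-force posterior over (Te, ne) with a Gaussian likelihood (photon
    and electrical noise lumped into sigma_noise) and flat priors."""
    logp = np.empty((len(Te_grid), len(ne_grid)))
    for i, Te in enumerate(Te_grid):
        for j, ne in enumerate(ne_grid):
            resid = counts - thomson_spectrum(wl, Te, ne)
            logp[i, j] = -0.5 * np.sum((resid / sigma_noise) ** 2)
    p = np.exp(logp - logp.max())
    return p / p.sum()

# Hypothetical noisy spectrum from a 2 keV plasma with unit density (a.u.).
wl = np.linspace(1050, 1078, 30)
rng = np.random.default_rng(0)
data = thomson_spectrum(wl, 2000.0, 1.0) + rng.normal(0, 0.05, wl.size)
Te_grid, ne_grid = np.linspace(500, 4000, 40), np.linspace(0.5, 1.5, 40)
post = grid_posterior(wl, data, Te_grid, ne_grid, sigma_noise=0.05)
iTe, ine = np.unravel_index(post.argmax(), post.shape)
print(Te_grid[iTe], ne_grid[ine])   # posterior-mode estimates of Te (eV) and ne
```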
A model-based scatter artifacts correction for cone beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Wei; Zhu, Jun; Wang, Luyao
2016-04-15
Purpose: Due to the increased axial coverage of multislice computed tomography (CT) and the introduction of flat detectors, the size of x-ray illumination fields has grown dramatically, causing an increase in scatter radiation. For CT imaging, scatter is a significant issue that introduces shading artifacts, streaks, as well as reduced contrast and Hounsfield Unit (HU) accuracy. The purpose of this work is to provide a fast and accurate scatter artifact correction algorithm for cone beam CT (CBCT) imaging. Methods: The method starts with an estimation of coarse scatter profiles for a set of CBCT data in either the image domain or the projection domain. A denoising algorithm designed specifically for Poisson signals is then applied to derive the final scatter distribution. Qualitative and quantitative evaluations using thorax and abdomen phantoms with Monte Carlo (MC) simulations, experimental Catphan phantom data, and in vivo human data acquired for a clinical image guided radiation therapy were performed. Scatter correction in both the projection domain and the image domain was conducted, and the influences of the segmentation method, mismatched attenuation coefficients, and spectrum model, as well as parameter selection, were also investigated. Results: Results show that the proposed algorithm can significantly reduce scatter artifacts and recover the correct HU in either the projection domain or the image domain. For the MC thorax phantom study, four-component segmentation yields the best results, while the results of three-component segmentation are still acceptable. The parameters (iteration number K and weight β) affect the accuracy of the scatter correction, and the results improve as K and β increase. It was found that variations in attenuation coefficient accuracy only slightly impact the performance of the proposed processing. For the Catphan phantom data, the mean value over all pixels in the residual image is reduced from −21.8 to −0.2 HU and 0.7 HU for the projection domain and image domain, respectively. The contrast of the in vivo human images is greatly improved after correction. Conclusions: The software-based technique has a number of advantages, such as high computational efficiency and accuracy, and the capability of performing scatter correction without modifying the clinical workflow (i.e., no extra scan/measurement data are needed) or modifying the imaging hardware. When implemented practically, this should improve the accuracy of CBCT image quantitation and significantly impact CBCT-based interventional procedures and adaptive radiation therapy.
Shidahara, Miho; Watabe, Hiroshi; Kim, Kyeong Min; Kato, Takashi; Kawatsu, Shoji; Kato, Rikio; Yoshimura, Kumiko; Iida, Hidehiro; Ito, Kengo
2005-10-01
An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I^AC_μb with Chang's attenuation correction factor. The scatter component image is estimated by convolving I^AC_μb with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine.
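The IBSC estimate described above (convolution of the attenuation-corrected image with a scatter function, weighted by a scatter fraction) can be sketched as follows; the Gaussian kernel and uniform scatter fraction are placeholders for the measured scatter function and fraction map.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ibsc_correct(image_ac, scatter_fraction, sigma_px=6.0):
    """Image-based scatter correction sketch: convolve the attenuation-corrected
    image with a broad scatter function, weight it by a scatter-fraction value,
    and subtract the result."""
    scatter_component = scatter_fraction * gaussian_filter(image_ac, sigma=sigma_px)
    return image_ac - scatter_component

# Hypothetical reconstructed slice (counts) and a uniform 30% scatter fraction.
img = np.zeros((64, 64))
img[20:44, 20:44] = 100.0
print(ibsc_correct(img, scatter_fraction=0.3).max())
```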
NASA Astrophysics Data System (ADS)
Meyer, Michael; Kalender, Willi A.; Kyriakou, Yiannis
2010-01-01
Scattered radiation is a major source of artifacts in flat detector computed tomography (FDCT) due to the increased irradiated volumes. We propose a fast projection-based algorithm for correction of scatter artifacts. The presented algorithm combines a convolution method for determining the spatial distribution of the scatter intensity with an object-size-dependent scaling of the scatter intensity distributions using a priori information generated by Monte Carlo simulations. A projection-based (PBSE) and an image-based (IBSE) strategy for size estimation of the scanned object are presented. Both strategies provide good correction and comparable results; the faster PBSE strategy is recommended. Even with such a fast and simple algorithm, which in the PBSE variant does not rely on reconstructed volumes or scatter measurements, it is possible to provide a reasonable scatter correction even for truncated scans. For both simulations and measurements, scatter artifacts were significantly reduced and the algorithm showed stable behavior in the z-direction. For simulated voxelized head, hip and thorax phantoms, a figure of merit Q of 0.82, 0.76 and 0.77 was reached, respectively (Q = 0 for uncorrected, Q = 1 for ideal). For a water phantom with 15 cm diameter, for example, a cupping reduction from 10.8% down to 2.1% was achieved. The performance of the correction method has limitations in the case of measurements using non-ideal detectors, intensity calibration, etc. An iterative approach to overcome most of these limitations was proposed. This approach is based on root finding of a cupping metric and may be useful for other scatter correction methods as well. By this optimization, cupping of the measured water phantom was further reduced down to 0.9%. The algorithm was evaluated on a commercial system including truncated and non-homogeneous clinically relevant objects.
Scatter characterization and correction for simultaneous multiple small-animal PET imaging.
Prasad, Rameshwar; Zaidi, Habib
2014-04-01
The rapid growth and usage of small-animal positron emission tomography (PET) in molecular imaging research has led to increased demand on PET scanner time. One potential solution to increase throughput is to scan multiple rodents simultaneously. However, this is achieved at the expense of deteriorated image quality and loss of quantitative accuracy owing to enhanced effects of photon attenuation and Compton scattering. The purpose of this work is, first, to characterize the magnitude and spatial distribution of the scatter component in small-animal PET imaging when scanning single and multiple rodents simultaneously and, second, to assess the relevance and evaluate the performance of scatter correction under similar conditions. The LabPET™-8 scanner was modelled as realistically as possible using the Geant4 Application for Tomographic Emission (GATE) Monte Carlo simulation platform. Monte Carlo simulations allow the separation of unscattered and scattered coincidences and as such enable detailed assessment of the scatter component and its origin. Simple shape-based and more realistic voxel-based phantoms were used to simulate single and multiple PET imaging studies. The modelled scatter component using the single-scatter simulation technique was compared to Monte Carlo simulation results. PET images were also corrected for attenuation, and the combined effect of attenuation and scatter on single and multiple small-animal PET imaging was evaluated in terms of image quality and quantitative accuracy. A good agreement was observed between calculated and Monte Carlo simulated scatter profiles for single- and multiple-subject imaging. In the LabPET™-8 scanner, the detector covering material (kovar) contributed the largest amount of scatter events, while the scatter contribution due to lead shielding is negligible. The out-of-field-of-view (FOV) scatter fraction (SF) is 1.70, 0.76, and 0.11% for lower energy thresholds of 250, 350, and 400 keV, respectively. The increase in SF ranged between 25 and 64% when imaging multiple subjects (three to five) of different size simultaneously in comparison to imaging a single subject. The spill-over ratio (SOR) increases with increasing number of subjects in the FOV. Scatter correction improved the SOR for both water and air cold compartments of single and multiple imaging studies. The recovery coefficients for different body parts of the mouse whole-body and rat whole-body anatomical models were improved for multiple imaging studies following scatter correction. The magnitude and spatial distribution of the scatter component in small-animal PET imaging of single and multiple subjects simultaneously were characterized, and its impact was evaluated in different situations. Scatter correction improves PET image quality and quantitative accuracy for single rat and simultaneous multiple mice and rat imaging studies, whereas its impact is insignificant in single mouse imaging.
NASA Astrophysics Data System (ADS)
Dumouchel, Tyler; Thorn, Stephanie; Kordos, Myra; DaSilva, Jean; Beanlands, Rob S. B.; deKemp, Robert A.
2012-07-01
Quantification in cardiac mouse positron emission tomography (PET) imaging is limited by the imaging spatial resolution. Spillover of left ventricle (LV) myocardial activity into adjacent organs results in partial volume (PV) losses, leading to underestimation of myocardial activity. A PV correction method was developed to restore the accuracy of the activity distribution for FDG mouse imaging. The PV correction model was based on convolving an LV image estimate with a 3D point spread function. The LV model was described regionally by a five-parameter profile including myocardial, background and blood activities, which were separated into three compartments by the endocardial radius and myocardium wall thickness. The PV correction was tested with digital simulations and a physical 3D mouse LV phantom. In vivo cardiac FDG mouse PET imaging was also performed. Following imaging, the mice were sacrificed and the tracer biodistribution in the LV and liver tissue was measured using a gamma counter. The PV correction algorithm improved recovery from 50% of the true value to within 5% of it for the simulated and measured phantom data, and improved image uniformity by 5-13%. The PV correction algorithm improved the mean myocardial LV recovery from 0.56 (0.54) to 1.13 (1.10) without (with) scatter and attenuation corrections. The mean image uniformity was improved from 26% (26%) to 17% (16%) without (with) scatter and attenuation corrections applied. Scatter and attenuation corrections were not observed to significantly impact PV-corrected myocardial recovery or image uniformity. The image-based PV correction algorithm can increase the accuracy of PET image activity and improve the uniformity of the activity distribution in normal mice. The algorithm may be applied using different tracers, in transgenic models that affect myocardial uptake, or in different species, provided there is sufficient image quality and similar contrast between the myocardium and surrounding structures.
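A one-dimensional sketch of the five-parameter LV profile model convolved with a point spread function is given below; it illustrates how partial volume depresses the apparent wall activity. The geometry, activities and PSF width are hypothetical, and the least-squares fit of the five parameters is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def lv_profile(r, a_blood, a_myo, a_bkg, r_endo, wall):
    """Piecewise radial activity model: blood pool, myocardial wall, background."""
    return np.where(r < r_endo, a_blood,
                    np.where(r < r_endo + wall, a_myo, a_bkg))

def blurred_profile(r, params, fwhm_mm):
    """Model profile convolved with a Gaussian point-spread function (a 1-D
    stand-in for the 3-D convolution described above); uniform grid assumed."""
    sigma_px = fwhm_mm / 2.355 / (r[1] - r[0])
    return gaussian_filter1d(lv_profile(r, *params), sigma_px)

# Hypothetical mouse LV geometry (mm) and activities (arbitrary units).
r = np.linspace(0, 6, 121)
measured_like = blurred_profile(r, (0.2, 1.0, 0.1, 1.5, 1.0), fwhm_mm=1.4)
print(float(measured_like.max()))   # blurred peak falls below the true wall value of 1.0
```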
[Atmospheric correction of HJ-1 CCD data for water imagery based on dark object model].
Zhou, Li-Guo; Ma, Wei-Chun; Gu, Wan-Hua; Huai, Hong-Yan
2011-08-01
The CCD multi-band data of HJ-1A have great potential for inland water quality monitoring, but accurate atmospheric correction is a prerequisite for their application. In this paper, a dark-pixel-based method for retrieving water-leaving radiance is proposed. Besides Rayleigh scattering, aerosol scattering is important to atmospheric correction; inland lakes are typically case II waters, and the water-leaving radiance cannot be assumed to be zero. Synchronous MODIS shortwave infrared data were therefore used to obtain the aerosol parameters, and, exploiting the characteristic that aerosol scattering is relatively stable at 560 nm, the water-leaving radiance for each visible and near-infrared band was retrieved and normalized; the remote-sensing reflectance of the water was then computed. The results show that the atmospheric correction method based on the imagery itself is more effective for the retrieval of water parameters from HJ-1A CCD data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, L; Zhu, L; Vedantham, S
2016-06-15
Purpose: The image quality of dedicated cone-beam breast CT (CBBCT) is fundamentally limited by substantial x-ray scatter contamination, resulting in cupping artifacts and contrast loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose to suppress x-ray scatter in CBBCT images using a deterministic forward projection model. Method: We first use the 1st-pass FDK-reconstructed CBBCT images to segment fibroglandular and adipose tissue. Attenuation coefficients are assigned to the two tissues based on the x-ray spectrum used for image acquisition, and the segmented volume is forward projected to simulate scatter-free primary projections. We estimate the scatter by subtracting the simulated primary projection from the measured projection, and the resultant scatter map is further refined by a Fourier-domain fitting algorithm after discarding untrusted scatter information. The final scatter estimate is subtracted from the measured projection for effective scatter correction. In our implementation, the proposed scatter correction takes 0.5 seconds for each projection. The method was evaluated using the overall image spatial non-uniformity (SNU) metric and the contrast-to-noise ratio (CNR) with 5 clinical datasets of BI-RADS 4/5 subjects. Results: For the 5 clinical datasets, our method reduced the SNU from 7.79% to 1.68% in the coronal view and from 6.71% to 3.20% in the sagittal view. The average CNR is improved by a factor of 1.38 in the coronal view and 1.26 in the sagittal view. Conclusion: The proposed scatter correction approach requires no additional scans or prior images and uses a deterministic model for efficient calculation. Evaluation with clinical datasets demonstrates the feasibility and stability of the method. These features are attractive for clinical CBBCT and make our method distinct from other approaches. Supported partly by NIH R21EB019597, R21CA134128 and R01CA195512. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
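The estimation step described above can be sketched as "measured minus simulated primary, then keep the smooth part"; in the sketch below a Gaussian low-pass stands in for the Fourier-domain fitting, and the simulated primary is assumed to come from forward-projecting the segmented two-tissue model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scatter_from_forward_model(measured_proj, simulated_primary, sigma_px=25.0):
    """Estimate scatter as the measured projection minus a simulated scatter-free
    primary, keep only its smooth (low-frequency) part, and subtract it from the
    measurement."""
    raw_scatter = measured_proj - simulated_primary
    smooth_scatter = gaussian_filter(raw_scatter, sigma=sigma_px)
    return np.clip(measured_proj - smooth_scatter, 0.0, None)

# Hypothetical measured projection and forward-projected two-tissue primary.
measured = np.full((128, 128), 120.0)
primary = np.full((128, 128), 100.0)
print(scatter_from_forward_model(measured, primary).mean())
```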
Ahn, Jae-Hyun; Park, Young-Je; Kim, Wonkook; Lee, Boram
2016-12-26
An estimation of the aerosol multiple-scattering reflectance is an important part of the atmospheric correction procedure in satellite ocean color data processing. Most commonly, two near-infrared (NIR) bands are used to estimate the aerosol optical properties and thereby the effects of aerosols. Previously, the operational Geostationary Ocean Color Imager (GOCI) atmospheric correction scheme relied on a single-scattering reflectance ratio (SSE), developed for the processing of Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data, to determine the appropriate aerosol models and their aerosol optical thicknesses. The scheme computes the reflectance contributions (weighting factors) of candidate aerosol models in the single-scattering domain and then spectrally extrapolates the single-scattering aerosol reflectance from the NIR to the visible (VIS) bands using the SSE. However, it directly applies the weighting factor to all wavelengths in the multiple-scattering domain, although the multiple-scattering aerosol reflectance has a non-linear relationship with the single-scattering reflectance and the inter-band relationship of multiple-scattering aerosol reflectances is also non-linear. To avoid these issues, we propose an alternative scheme for estimating the aerosol reflectance that uses the spectral relationships in the aerosol multiple-scattering reflectance between different wavelengths (called SRAMS). The process directly calculates the multiple-scattering reflectance contributions in the NIR with no residual errors for the selected aerosol models. It then spectrally extrapolates the reflectance contribution from the NIR to the visible bands for each selected model using the SRAMS. To assess the performance of the algorithm regarding errors in the water reflectance at the surface, or remote-sensing reflectance retrieval, we compared the SRAMS atmospheric correction results with the SSE atmospheric correction using both simulations and in situ match-ups with the GOCI data. From simulations, the mean errors for bands from 412 to 555 nm were 5.2% for the SRAMS scheme and 11.5% for the SSE scheme in case-I waters. From in situ match-ups, the mean errors were 16.5% for the SRAMS scheme and 17.6% for the SSE scheme in both case-I and case-II waters. Although we applied the SRAMS algorithm to GOCI, it can be applied to other ocean color sensors that have two NIR wavelengths.
SU-F-T-142: An Analytical Model to Correct the Aperture Scattered Dose in Clinical Proton Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, B; Liu, S; Zhang, T
2016-06-15
Purpose: Apertures or collimators are used to laterally shape proton beams in double scattering (DS) delivery and to sharpen the penumbra in pencil beam (PB) delivery. However, aperture-scattered dose is not included in the current dose calculations of treatment planning systems (TPS). The purpose of this study is to provide a method to correct for the aperture-scattered dose based on an analytical model. Methods: A DS beam with a non-divergent aperture was delivered using a single-room proton machine. Dose profiles were measured with an ion chamber scanning in water and a 2-D ion chamber matrix with solid-water buildup at various depths. The measured doses were considered as the sum of the non-contaminated dose and the aperture-scattered dose. The non-contaminated dose was calculated by the TPS and subtracted from the measured dose. Aperture-scattered dose was modeled as a 1D Gaussian distribution. For 2-D fields, to calculate the scattered dose from all edges of the aperture, a weighted sum of the distances from the calculation point to the aperture edges was used in the model. The gamma index was calculated between the measured and calculated dose with and without scatter correction. Results: For a beam with a range of 23 cm and an aperture size of 20 cm, the contribution of the scatter horn was ∼8% of the total dose at 4 cm depth and diminished to 0 at 15 cm depth. The amplitude of the scattered dose decreased linearly with increasing depth. The 1D gamma index (2%/2 mm) passing rate between the calculated and measured profiles increased from 63% to 98% at 4 cm depth and from 83% to 98% at 13 cm depth. The 2D gamma index (2%/2 mm) passing rate at 4 cm depth improved from 78% to 94%. Conclusion: Using this simple analytical method, the discrepancy between the measured and calculated dose was significantly reduced.
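A sketch of an aperture-scatter model of the kind described above follows: a Gaussian fall-off with distance from the aperture edge whose amplitude decreases linearly with depth. All parameter values are illustrative, not the fitted values from the measurements.

```python
import numpy as np

def aperture_scatter_dose(dist_to_edge_cm, depth_cm, a0=0.08,
                          depth_max_cm=15.0, sigma_cm=1.0):
    """Relative aperture-scatter dose: Gaussian in distance from the projected
    aperture edge, with an amplitude that falls linearly with depth and
    vanishes beyond depth_max_cm. All parameters are hypothetical."""
    amp = a0 * np.clip(1.0 - depth_cm / depth_max_cm, 0.0, None)
    return amp * np.exp(-0.5 * (dist_to_edge_cm / sigma_cm) ** 2)

# Relative scatter dose (fraction of open-field dose) across a profile at 4 cm depth.
distances = np.linspace(0.0, 5.0, 6)       # cm from the projected aperture edge
print(aperture_scatter_dose(distances, depth_cm=4.0))
```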
Teuho, Jarmo; Saunavaara, Virva; Tolvanen, Tuula; Tuokkola, Terhi; Karlsson, Antti; Tuisku, Jouni; Teräs, Mika
2017-10-01
In PET, corrections for photon scatter and attenuation are essential for visual and quantitative consistency. MR attenuation correction (MRAC) is generally conducted by image segmentation and assignment of discrete attenuation coefficients, which offer limited accuracy compared with CT attenuation correction. Potential inaccuracies in MRAC may affect scatter correction, because the attenuation image (μ-map) is used in single scatter simulation (SSS) to calculate the scatter estimate. We assessed the impact of MRAC to scatter correction using 2 scatter-correction techniques and 3 μ-maps for MRAC. Methods: The tail-fitted SSS (TF-SSS) and a Monte Carlo-based single scatter simulation (MC-SSS) algorithm implementations on the Philips Ingenuity TF PET/MR were used with 1 CT-based and 2 MR-based μ-maps. Data from 7 subjects were used in the clinical evaluation, and a phantom study using an anatomic brain phantom was conducted. Scatter-correction sinograms were evaluated for each scatter correction method and μ-map. Absolute image quantification was investigated with the phantom data. Quantitative assessment of PET images was performed by volume-of-interest and ratio image analysis. Results: MRAC did not result in large differences in scatter algorithm performance, especially with TF-SSS. Scatter sinograms and scatter fractions did not reveal large differences regardless of the μ-map used. TF-SSS showed slightly higher absolute quantification. The differences in volume-of-interest analysis between TF-SSS and MC-SSS were 3% at maximum in the phantom and 4% in the patient study. Both algorithms showed excellent correlation with each other with no visual differences between PET images. MC-SSS showed a slight dependency on the μ-map used, with a difference of 2% on average and 4% at maximum when a μ-map without bone was used. Conclusion: The effect of different MR-based μ-maps on the performance of scatter correction was minimal in non-time-of-flight 18F-FDG PET/MR brain imaging. The SSS algorithm was not affected significantly by MRAC. The performance of the MC-SSS algorithm is comparable but not superior to TF-SSS, warranting further investigations of algorithm optimization and performance with different radiotracers and time-of-flight imaging. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
Investigation on Beam-Blocker-Based Scatter Correction Method for Improving CT Number Accuracy
NASA Astrophysics Data System (ADS)
Lee, Hoyeon; Min, Jonghwan; Lee, Taewon; Pua, Rizza; Sabir, Sohail; Yoon, Kown-Ha; Kim, Hokyung; Cho, Seungryong
2017-03-01
Cone-beam computed tomography (CBCT) is gaining widespread use in various medical and industrial applications but suffers from a substantially larger amount of scatter than conventional diagnostic CT, resulting in relatively poor image quality. Various methods that can reduce and/or correct for the scatter in CBCT have therefore been developed. Scatter correction methods that use a beam-blocker are considered direct, measurement-based approaches providing accurate scatter estimation from the data in the shadows of the beam-blocker. To the best of our knowledge, there has been no report on the significance of the scatter from the beam-blocker itself in such correction methods. In this paper, we identified the scatter from the beam-blocker that is detected in object-free projection data, investigated its influence on the accuracy of reconstructed CBCT images, and developed a scatter correction scheme that accounts for this scatter as well as the scatter from the scanned object.
Evaluation of attenuation and scatter correction requirements in small animal PET and SPECT imaging
NASA Astrophysics Data System (ADS)
Konik, Arda Bekir
Positron emission tomography (PET) and single-photon emission computed tomography (SPECT) are two nuclear emission-imaging modalities that rely on the detection of high-energy photons emitted from radiotracers administered to the subject. The majority of these photons are attenuated (absorbed or scattered) in the body, resulting in count losses or deviations from true detection, which in turn degrade the accuracy of images. In clinical emission tomography, sophisticated correction methods are often required, employing additional x-ray CT or radionuclide transmission scans. Having proven their potential in both clinical and research areas, both PET and SPECT are being adapted for small animal imaging. However, despite the growing interest in small animal emission tomography, little scientific information exists about the accuracy of these correction methods on smaller size objects, and what level of correction is required. The purpose of this work is to determine the role of attenuation and scatter corrections as a function of object size through simulations. The simulations were performed using Interactive Data Language (IDL) and a Monte Carlo based package, the Geant4 application for emission tomography (GATE). In the IDL simulations, PET and SPECT data acquisition were modeled in the presence of attenuation. A mathematical emission and attenuation phantom approximating a thorax slice and slices from real PET/CT data were scaled to 5 different sizes (i.e., human, dog, rabbit, rat and mouse). The simulated emission data collected from these objects were reconstructed. The reconstructed images, with and without attenuation correction, were compared to the ideal (i.e., non-attenuated) reconstruction. Next, using GATE, scatter fraction values (the ratio of the scatter counts to the total counts) of PET and SPECT scanners were measured for various sizes of NEMA (cylindrical phantoms representing small animals and humans), MOBY (realistic mouse/rat model) and XCAT (realistic human model) digital phantoms. In addition, PET projection files for different sizes of MOBY phantoms were reconstructed under 6 different conditions including attenuation and scatter corrections. Selected regions were analyzed for these different reconstruction conditions and object sizes. Finally, real mouse data from the real version of the same small animal PET scanner we modeled in our simulations were analyzed for similar reconstruction conditions. Both our IDL and GATE simulations showed that, for small animal PET and SPECT, even the smallest size objects (~2 cm diameter) showed ~15% error when both attenuation and scatter were not corrected. However, a simple attenuation correction using a uniform attenuation map and an object boundary obtained from emission data significantly reduces this error in non-lung regions (~1% for the smallest size and ~6% for the largest size). In lungs, emission values were overestimated when only attenuation correction was performed. In addition, we did not observe any significant improvement between the use of a uniform or the actual attenuation map (e.g., only ~0.5% for the largest size in PET studies). The scatter correction was not significant for smaller size objects, but became increasingly important for larger size objects. These results suggest that for all mouse sizes and most rat sizes, uniform attenuation correction can be performed using emission data only. For smaller sizes up to ~4 cm, scatter correction is not required even in lung regions. For larger sizes, if accurate quantitation is needed, an additional transmission scan may be required to estimate an accurate attenuation map for both attenuation and scatter corrections.
Scatter measurement and correction method for cone-beam CT based on single grating scan
NASA Astrophysics Data System (ADS)
Huang, Kuidong; Shi, Wenlong; Wang, Xinyu; Dong, Yin; Chang, Taoqi; Zhang, Hua; Zhang, Dinghua
2017-06-01
In cone-beam computed tomography (CBCT) systems based on flat-panel detector imaging, the presence of scatter significantly reduces the quality of the reconstructed slices. Based on the concept of collimation, this paper presents a scatter measurement and correction method based on a single grating scan. First, according to the characteristics of CBCT imaging, the scan method using a single grating and the design requirements of the grating are analyzed and determined. Second, by analyzing the composition of object projection images and object-and-grating projection images, a processing method for the scatter image at a single projection angle is proposed. In addition, to avoid additional scans, this paper proposes an angle interpolation method for scatter images to reduce scan cost. Finally, the experimental results show that the scatter images obtained by this method are accurate and reliable, and the effect of scatter correction is obvious. When the additional object-and-grating projection images are collected and interpolated at intervals of 30 deg, the scatter correction error of the slices can still be controlled within 3%.
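The angle-interpolation idea above can be sketched as a simple periodic linear interpolation between scatter maps measured at sparse grating angles; the maps and 30-degree spacing below are hypothetical.

```python
import numpy as np

def interpolate_scatter(angle_deg, measured_angles_deg, scatter_maps):
    """Linearly interpolate a scatter map at an arbitrary projection angle from
    maps measured at sparse angles, wrapping around 360 degrees.
    scatter_maps : dict mapping angle (deg) -> 2-D scatter image."""
    angles = np.asarray(sorted(measured_angles_deg), dtype=float)
    a = angle_deg % 360.0
    idx = np.searchsorted(angles, a)
    lo = angles[idx - 1] if idx > 0 else angles[-1] - 360.0
    hi = angles[idx] if idx < len(angles) else angles[0] + 360.0
    w = (a - lo) / (hi - lo)
    return (1 - w) * scatter_maps[lo % 360.0] + w * scatter_maps[hi % 360.0]

# Hypothetical scatter maps measured every 30 degrees.
maps = {float(a): np.full((4, 4), float(a)) for a in range(0, 360, 30)}
print(interpolate_scatter(45.0, list(maps), maps)[0, 0])   # -> 45.0
```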
NASA Astrophysics Data System (ADS)
He, Xiao Dong
This thesis studies light scattering processes off rough surfaces. Analytic models for reflection, transmission and subsurface scattering of light are developed. The results are applicable to realistic image generation in computer graphics. The investigation focuses on the basic issue of how light is scattered locally by general surfaces that are neither diffuse nor specular; physical optics is employed to account for diffraction and interference, which play a crucial role in the scattering of light for most surfaces. The thesis presents: (1) a new reflectance model; (2) a new transmittance model; (3) a new subsurface scattering model. All of these models are physically based, depend only on physical parameters, apply to a wide range of materials and surface finishes and, more importantly, provide a smooth transition from diffuse-like to specular reflection as the wavelength and incidence angle are increased or the surface roughness is decreased. The reflectance and transmittance models are based on the Kirchhoff theory, and the subsurface scattering model is based on energy transport theory. They are valid only for surfaces with shallow slopes. The thesis shows that predicted reflectance distributions given by the reflectance model compare favorably with experiment. The thesis also investigates and implements fast ways of computing the reflectance and transmittance models. Furthermore, the thesis demonstrates that a high level of realism in image generation can be achieved due to the physically correct treatment of the scattering processes by the reflectance model.
NASA Astrophysics Data System (ADS)
Busi, Matteo; Olsen, Ulrik L.; Knudsen, Erik B.; Frisvad, Jeppe R.; Kehres, Jan; Dreier, Erik S.; Khalil, Mohamad; Haldrup, Kristoffer
2018-03-01
Spectral computed tomography is an emerging imaging method that makes use of recently developed energy-discriminating photon-counting detectors (PCDs). This technique enables measurements in isolated high-energy ranges, in which the dominant interaction between the x-rays and the sample is incoherent scattering. The scattered radiation causes a loss of contrast in the results, and its correction has proven to be a complex problem due to its dependence on energy, material composition, and geometry. Monte Carlo simulations can utilize a physical model to estimate the scattering contribution to the signal, at the cost of high computational time. We present a fast Monte Carlo simulation tool, based on McXtrace, to predict the energy-resolved radiation scattered and absorbed by objects of complex shapes. We validate the tool through measurements using a CdTe single PCD (Multix ME-100) and use it for scatter correction in a simulation of a spectral CT. We found the correction to account for up to 7% relative amplification in the reconstructed linear attenuation. It is a useful tool for x-ray CT to obtain a more accurate material discrimination, especially in the high-energy range, where incoherent scattering interactions become prevalent (>50 keV).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kyungsang; Ye, Jong Chul, E-mail: jong.ye@kaist.ac.kr; Lee, Taewon
2015-09-15
Purpose: In digital breast tomosynthesis (DBT), scatter correction is highly desirable, as it improves image quality at low doses. Because the DBT detector panel is typically stationary during the source rotation, antiscatter grids are not generally compatible with DBT; thus, a software-based scatter correction is required. This work proposes a fully iterative scatter correction method that uses a novel fast Monte Carlo simulation (MCS) with a tissue-composition ratio estimation technique for DBT imaging. Methods: To apply MCS to scatter estimation, the material composition in each voxel should be known. To overcome the lack of prior accurate knowledge of tissue composition for DBT, a tissue-composition ratio is estimated based on the observation that the breast tissues are principally composed of adipose and glandular tissues. Using this approximation, the composition ratio can be estimated from the reconstructed attenuation coefficients, and the scatter distribution can then be estimated by MCS using the composition ratio. The scatter estimation and image reconstruction procedures can be performed iteratively until an acceptable accuracy is achieved. For practical use, (i) the authors have implemented a fast MCS using a graphics processing unit (GPU), (ii) the MCS is simplified to transport only x-rays in the energy range of 10–50 keV, modeling Rayleigh and Compton scattering and the photoelectric effect using the tissue-composition ratio of adipose and glandular tissues, and (iii) downsampling is used because the scatter distribution varies rather smoothly. Results: The authors have demonstrated that the proposed method can accurately estimate the scatter distribution, and that the contrast-to-noise ratio of the final reconstructed image is significantly improved. The authors validated the performance of the MCS by changing the tissue thickness, composition ratio, and x-ray energy. The authors confirmed that the tissue-composition ratio estimation was quite accurate under a variety of conditions. Our GPU-based fast MCS implementation took approximately 3 s to generate each angular projection for a 6 cm thick breast, which is believed to make this process acceptable for clinical applications. In addition, the clinical preferences of three radiologists were evaluated; the preference for the proposed method compared to the preference for the convolution-based method was statistically meaningful (p < 0.05, McNemar test). Conclusions: The proposed fully iterative scatter correction method and the GPU-based fast MCS using tissue-composition ratio estimation successfully improved the image quality within a reasonable computational time, which may potentially increase the clinical utility of DBT.
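The tissue-composition ratio estimation described above reduces, per voxel, to a linear interpolation between the attenuation coefficients of adipose and glandular tissue; a sketch with illustrative attenuation values follows.

```python
import numpy as np

def glandular_fraction(mu_recon, mu_adipose, mu_glandular):
    """Estimate the per-voxel glandular fraction from reconstructed linear
    attenuation coefficients, assuming each voxel is a two-component
    adipose/glandular mixture; the result can drive the scatter simulation."""
    f = (mu_recon - mu_adipose) / (mu_glandular - mu_adipose)
    return np.clip(f, 0.0, 1.0)

# Hypothetical reconstructed attenuation values (1/cm) at some effective energy.
mu = np.array([0.23, 0.27, 0.31])
print(glandular_fraction(mu, mu_adipose=0.22, mu_glandular=0.32))
```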
Liu, Xin
2014-01-01
This study describes a deterministic method for simulating the first-order scattering in a medical computed tomography scanner. The method was developed based on a physics model of x-ray photon interactions with matter and a ray tracing technique. The results from simulated scattering were compared with those from an actual scatter measurement. Two phantoms with homogeneous and heterogeneous material distributions were used in the scattering simulation and measurement. It was found that the simulated scatter profile was in agreement with the measurement result, with an average difference of 25% or less. Finally, tomographic images with artifacts caused by scatter were corrected based on the simulated scatter profiles. The image quality improved significantly.
Parameterization of the Van Hove dynamic self-scattering law Ss(Q,omega)
NASA Astrophysics Data System (ADS)
Zetterstrom, P.
In this paper we present, for the first time, a model of the Van Hove dynamic scattering law S_ME(Q, omega) based on the maximum entropy principle. The model is intended for use in the calculation of inelastic corrections to neutron diffraction data. The model is constrained by the first and second frequency moments and by detailed balance, but can be extended to an arbitrary number of frequency moments. The second moment can be varied through an effective temperature to account for the kinetic energy of the atoms. The results are compared with a diffusion model of the scattering law. Finally, some calculations of the inelastic self-scattering for a time-of-flight diffractometer are presented. From these we show that the inelastic self-scattering is very sensitive to the details of the dynamic scattering law.
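As a hedged illustration of the maximum-entropy construction (the general ansatz only, not the specific parameterization of the paper), constraining the scattering law by its first few frequency moments leads to an exponential-of-polynomial form, with detailed balance imposed through the usual thermal factor:

```latex
% Maximum-entropy form constrained by the first few frequency moments;
% the Lagrange multipliers \lambda_n(Q) are fixed by the moment constraints.
S_{\mathrm{ME}}(Q,\omega) \;=\; \exp\!\Big(-\sum_{n=0}^{N}\lambda_n(Q)\,\omega^{n}\Big),
\qquad
\int \omega^{m}\, S_{\mathrm{ME}}(Q,\omega)\,\mathrm{d}\omega \;=\; \langle\omega^{m}\rangle_Q,
\quad m = 0,1,\dots,N.

% Detailed balance relates energy gain and loss:
S_{\mathrm{ME}}(Q,-\omega) \;=\; e^{-\hbar\omega/k_{\mathrm{B}}T}\, S_{\mathrm{ME}}(Q,\omega).
```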
NASA Astrophysics Data System (ADS)
Yang, Kai; Burkett, George, Jr.; Boone, John M.
2014-11-01
The purpose of this research was to develop a method to correct the cupping artifact caused by x-ray scattering and to achieve consistent Hounsfield Unit (HU) values of breast tissues for a dedicated breast CT (bCT) system. The use of a beam passing array (BPA) composed of parallel holes has been previously proposed for scatter correction in various imaging applications. In this study, we first verified the efficacy and accuracy of using a BPA to measure the scatter signal on a cone-beam bCT system. A systematic scatter correction approach was then developed by modeling the scatter-to-primary ratio (SPR) in projection images acquired with and without the BPA. To quantitatively evaluate the improved accuracy of HU values, different breast tissue-equivalent phantoms were scanned and radially averaged HU profiles through reconstructed planes were evaluated. The dependency of the correction method on object size and number of projections was studied. A simplified application of the proposed method on five clinical patient scans was performed to demonstrate efficacy. For the typical 10-18 cm breast diameters seen in the bCT application, the proposed method can effectively correct for the cupping artifact and reduce the variation of HU values of breast equivalent material from 150 to 40 HU. The measured HU values of 100% glandular tissue, 50/50 glandular/adipose tissue, and 100% adipose tissue were approximately 46, -35, and -94, respectively. It was found that only six BPA projections were necessary to accurately implement this method, and the additional dose requirement is less than 1% of the exam dose. The proposed method can effectively correct for the cupping artifact caused by x-ray scattering and retain consistent HU values of breast tissues.
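A minimal sketch of the SPR-based correction step, assuming an SPR estimate is already available for each projection pixel (the function name and numbers are illustrative, not from the study):

```python
import numpy as np

def correct_projection(measured, spr):
    """Recover the primary signal from a scatter-contaminated projection.

    measured : ndarray, measured signal = primary + scatter
    spr      : ndarray, estimated scatter-to-primary ratio per pixel

    With S = SPR * P and M = P + S, the primary is P = M / (1 + SPR).
    """
    return measured / (1.0 + spr)

# Example: a projection with a smoothly varying scatter-to-primary ratio.
measured = np.array([1000.0, 950.0, 900.0])
spr = np.array([0.40, 0.55, 0.70])
print(correct_projection(measured, spr))   # estimated primary signal
```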
NASA Astrophysics Data System (ADS)
Gouveia, Diego; Baars, Holger; Seifert, Patric; Wandinger, Ulla; Barbosa, Henrique; Barja, Boris; Artaxo, Paulo; Lopes, Fabio; Landulfo, Eduardo; Ansmann, Albert
2018-04-01
Lidar measurements of cirrus clouds are highly influenced by multiple scattering (MS). We therefore developed an iterative approach to correct elastic backscatter lidar signals for multiple scattering to obtain best estimates of single-scattering cloud optical depth and lidar ratio as well as of the ice crystal effective radius. The approach is based on the exploration of the effect of MS on the molecular backscatter signal returned from above cloud top.
Coastal Zone Color Scanner atmospheric correction algorithm - Multiple scattering effects
NASA Technical Reports Server (NTRS)
Gordon, Howard R.; Castano, Diego J.
1987-01-01
Errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm are analyzed. The analysis is based on radiative transfer computations in model atmospheres, in which the aerosols and molecules are distributed vertically in an exponential manner, with most of the aerosol scattering located below the molecular scattering. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, making it possible to determine the errors along typical CZCS scan lines. Information provided by the analysis makes it possible to judge the efficacy of the current algorithm with the current sensor and to estimate the impact of the algorithm-induced errors on a variety of applications.
Biophotonics of skin: method for correction of deep Raman spectra distorted by elastic scattering
NASA Astrophysics Data System (ADS)
Roig, Blandine; Koenig, Anne; Perraut, François; Piot, Olivier; Gobinet, Cyril; Manfait, Michel; Dinten, Jean-Marc
2015-03-01
Confocal Raman microspectroscopy allows non-invasive, in-depth molecular and conformational characterization of biological tissues. Unfortunately, spectral distortions occur due to elastic scattering. Our objective is to correct the attenuation of in-depth Raman peak intensity by accounting for this phenomenon, thus enabling quantitative diagnosis. For this purpose, we developed PDMS phantoms mimicking skin optical properties, used as tools for instrument calibration and validation of the data processing method. An optical system based on a fiber bundle had previously been developed for in vivo skin characterization with Diffuse Reflectance Spectroscopy (DRS). Applied to our phantoms, this technique allowed their optical properties to be checked: the targeted values were retrieved. Raman microspectroscopy was performed using a commercial confocal microscope. Depth profiles were constructed from the integrated intensity of specific PDMS Raman vibrations. Acquired on monolayer phantoms, they display a decay that increases with the scattering coefficient. Furthermore, when acquiring Raman spectra on multilayered phantoms, the signal attenuation through each single layer depends directly on its own scattering property. Therefore, determining the optical properties of any biological sample, for example with DRS, is crucial to properly correct Raman depth profiles. A model, inspired by S.L. Jacques's expression for confocal reflectance microscopy and modified in some respects, is proposed and tested to fit the depth profiles obtained on the phantoms as a function of the reduced scattering coefficient. Consequently, once the optical properties of a biological sample are known, the intensity of deep Raman spectra distorted by elastic scattering can be corrected with our reliable model, thus permitting quantitative studies for characterization or diagnosis.
Reichardt, J; Hess, M; Macke, A
2000-04-20
Multiple-scattering correction factors for cirrus particle extinction coefficients measured with Raman and high spectral resolution lidars are calculated with a radiative-transfer model. Cirrus particle-ensemble phase functions are computed from single-crystal phase functions derived in a geometrical-optics approximation. Seven crystal types are considered. In cirrus clouds with height-independent particle extinction coefficients the general pattern of the multiple-scattering parameters has a steep onset at cloud base with values of 0.5-0.7 followed by a gradual and monotonic decrease to 0.1-0.2 at cloud top. The larger the scattering particles are, the more gradual is the rate of decrease. Multiple-scattering parameters of complex crystals and of imperfect hexagonal columns and plates can be well approximated by those of projected-area equivalent ice spheres, whereas perfect hexagonal crystals show values as much as 70% higher than those of spheres. The dependencies of the multiple-scattering parameters on the cirrus particle spectrum, base height, and geometric depth, and on the lidar parameters (laser wavelength and receiver field of view), are discussed, and a set of multiple-scattering parameter profiles for the correction of extinction measurements in homogeneous cirrus is provided.
An empirical model for polarized and cross-polarized scattering from a vegetation layer
NASA Technical Reports Server (NTRS)
Liu, H. L.; Fung, A. K.
1988-01-01
An empirical model for scattering from a vegetation layer above an irregular ground surface is developed in terms of the first-order solution for like-polarized scattering and the second-order solution for cross-polarized scattering. The effects of multiple scattering within the layer and at the surface-volume boundary are compensated by using a correction factor based on the matrix doubling method. The major feature of this model is that all parameters in the model are physical parameters of the vegetation medium; there are no regression parameters. Comparisons of this empirical model with the theoretical matrix-doubling method and with radar measurements indicate good agreement in polarization and angular trends for ka up to 4, where k is the wave number and a is the disk radius. The computational time is shortened by a factor of 8 relative to the theoretical model calculation.
Optimization-based scatter estimation using primary modulation for computed tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yi; Ma, Jingchen; Zhao, Jun, E-mail: junzhao
Purpose: Scatter reduces the image quality in computed tomography (CT), but scatter correction remains a challenge. A previously proposed primary modulation method simultaneously obtains the primary and scatter in a single scan. However, separating the scatter and primary in primary modulation is challenging because it is an underdetermined problem. In this study, an optimization-based scatter estimation (OSE) algorithm is proposed to estimate and correct scatter. Methods: In the concept of primary modulation, the primary is modulated, but the scatter remains smooth by inserting a modulator between the x-ray source and the object. In the proposed algorithm, an objective function is designed for separating the scatter and primary. Prior knowledge is incorporated in the optimization-based framework to improve the accuracy of the estimation: (1) the primary is always positive; (2) the primary is locally smooth and the scatter is smooth; (3) the location of penumbra can be determined; and (4) the scatter-contaminated data provide knowledge about which part is smooth. Results: The simulation study shows that the edge-preserving weighting in OSE improves the estimation accuracy near the object boundary. The simulation study also demonstrates that OSE outperforms the two existing primary modulation algorithms for most regions of interest in terms of the CT number accuracy and noise. The proposed method was tested on a clinical cone beam CT, demonstrating that OSE corrects the scatter even when the modulator is not accurately registered. Conclusions: The proposed OSE algorithm improves the robustness and accuracy in scatter estimation and correction. This method is promising for scatter correction of various kinds of x-ray imaging modalities, such as x-ray radiography, cone beam CT, and the fourth-generation CT.
Fast GPU-based Monte Carlo code for SPECT/CT reconstructions generates improved 177Lu images.
Rydén, T; Heydorn Lagerlöf, J; Hemmingsson, J; Marin, I; Svensson, J; Båth, M; Gjertsson, P; Bernhardt, P
2018-01-04
Full Monte Carlo (MC)-based SPECT reconstructions have a strong potential for correcting for image degrading factors, but the reconstruction times are long. The objective of this study was to develop a highly parallel Monte Carlo code for fast, ordered subset expectation maximization (OSEM) reconstructions of SPECT/CT images. The MC code was written in the Compute Unified Device Architecture language for a computer with four graphics processing units (GPUs) (GeForce GTX Titan X, Nvidia, USA). This enabled simulations of parallel photon emissions from the voxel matrix (128³ or 256³). Each computed tomography (CT) number was converted to attenuation coefficients for photoabsorption, coherent scattering, and incoherent scattering. For photon scattering, the deflection angle was determined by the differential scattering cross sections. An angular response function was developed and used to model the accepted angles for photon interaction with the crystal, and a detector scattering kernel was used for modeling the photon scattering in the detector. Predefined energy and spatial resolution kernels for the crystal were used. The MC code was implemented in the OSEM reconstruction of clinical and phantom 177Lu SPECT/CT images. The Jaszczak image quality phantom was used to evaluate the performance of the MC reconstruction in comparison with attenuation-corrected (AC) OSEM reconstructions and attenuation-corrected OSEM reconstructions with resolution recovery corrections (RRC). The performance of the MC code was 3200 million photons/s. The required number of photons emitted per voxel to obtain a sufficiently low noise level in the simulated image was 200 for a 128³ voxel matrix. With this number of emitted photons/voxel, the MC-based OSEM reconstruction with ten subsets was performed within 20 s/iteration. The images converged after around six iterations. Therefore, the reconstruction time was around 3 min. The activity recovery for the spheres in the Jaszczak phantom was clearly improved with MC-based OSEM reconstruction, e.g., the activity recovery was 88% for the largest sphere, while it was 66% for AC-OSEM and 79% for RRC-OSEM. The GPU-based MC code generated an MC-based SPECT/CT reconstruction within a few minutes, and reconstructed patient images of 177Lu-DOTATATE treatments revealed clearly improved resolution and contrast.
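A hedged sketch of the kind of CT-number-to-attenuation mapping such a code relies on, using a simple piecewise-linear calibration; the breakpoints and coefficients below are placeholders rather than the values used in the study:

```python
import numpy as np

def hu_to_mu(hu, mu_water, mu_bone, hu_bone=1000.0):
    """Convert CT numbers (HU) to linear attenuation coefficients at the
    SPECT photon energy with a piecewise-linear model:
      HU <= 0 : scale between air (mu = 0) and water
      HU  > 0 : scale between water and a bone-like reference tissue
    """
    hu = np.asarray(hu, dtype=float)
    mu = np.where(
        hu <= 0.0,
        mu_water * (1.0 + hu / 1000.0),                   # air-water segment
        mu_water + (mu_bone - mu_water) * hu / hu_bone,   # water-bone segment
    )
    return np.clip(mu, 0.0, None)

# Example; the attenuation coefficients (1/cm) are illustrative placeholders.
print(hu_to_mu([-1000, 0, 500, 1000], mu_water=0.136, mu_bone=0.248))
```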
Atmospheric correction for inland water based on Gordon model
NASA Astrophysics Data System (ADS)
Li, Yunmei; Wang, Haijun; Huang, Jiazhu
2008-04-01
Remote sensing is widely used in water quality monitoring because it can capture radiation information over a large area simultaneously. However, more than 80% of the radiance detected by sensors at the top of the atmosphere is contributed by the atmosphere rather than the water body. The water-leaving radiance is strongly confounded by atmospheric molecular and aerosol scattering and absorption, and even a small error in estimating the atmospheric contribution can introduce large errors in water quality evaluation. To retrieve water composition accurately, the water and atmospheric signals must first be separated. In this paper, we studied atmospheric correction methods for inland waters such as Taihu Lake. A Landsat-5 TM image was corrected based on the Gordon atmospheric correction model, and two kinds of data were used to calculate Rayleigh scattering, aerosol scattering, and radiative transmittance above Taihu Lake: synchronous meteorological data and a synchronous MODIS image. The influences of ozone and whitecaps were also corrected. Finally, remote sensing reflectance was retrieved from the TM image, and the performance of the two methods was analyzed using in situ measured water surface spectra. The results indicate that the measured and estimated remote sensing reflectances were close for both methods. The method using synchronous meteorological data was more accurate than the method using the MODIS image, with a bias close to the error criterion accepted for inland water quality inversion. This shows that the method is suitable for atmospheric correction of TM images over Taihu Lake.
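A hedged sketch of a Gordon-type single-band correction of the kind described above, subtracting Rayleigh and aerosol path radiance and dividing by the diffuse transmittance to obtain the water-leaving signal; the symbols and numbers are illustrative, not taken from the study:

```python
def remote_sensing_reflectance(L_toa, L_rayleigh, L_aerosol, t_diffuse, Ed_surface):
    """Gordon-style atmospheric correction for a single band.

    L_toa       : total radiance measured at the top of the atmosphere
    L_rayleigh  : modeled Rayleigh (molecular) path radiance
    L_aerosol   : modeled aerosol path radiance
    t_diffuse   : diffuse transmittance along the viewing path
    Ed_surface  : downwelling irradiance just above the water surface

    Returns the remote sensing reflectance Rrs = Lw / Ed.
    """
    L_water = (L_toa - L_rayleigh - L_aerosol) / t_diffuse   # water-leaving radiance
    return L_water / Ed_surface

# Example with made-up radiances (W m^-2 sr^-1 um^-1) and irradiance.
print(remote_sensing_reflectance(80.0, 55.0, 15.0, 0.85, 1400.0))
```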
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferguson, S; Ahmad, S; Chen, Y
2016-06-15
Purpose: To commission and investigate the accuracy of an output (cGy/MU) prediction model for a compact passively scattered proton therapy system. Methods: A previously published output prediction model (Sahoo et al, Med Phys, 35, 5088–5097, 2008) was commissioned for our Mevion S250 proton therapy system. This model is a correction-based model that multiplies correction factors (d/MU_wnc = ROF × SOBPF × RSF × SOBPOCF × OCR × FSF × ISF). These factors accounted for changes in output due to options (12 large, 5 deep, and 7 small), modulation width M, range R, off-center, off-axis, field-size, and off-isocenter effects. In this study, the model was modified to ROF × SOBPF × RSF × OCR × FSF × ISF-OCF × GACF by merging SOBPOCF and ISF for simplicity and introducing a gantry angle correction factor (GACF). To commission the model, outputs at over 1,000 data points were taken at the time of the system commissioning. The output was predicted by interpolation (1D for SOBPF, FSF, and GACF; 2D for RSF and OCR) with an inverse-square calculation (ISF-OCR). The outputs of 273 combinations of R and M covering all 24 options were measured to test the model. To minimize fluence perturbation, scattered dose from the range compensator and patient was not considered. The percent differences between the predicted (P) and measured (M) outputs were calculated to test the prediction accuracy ([P−M]/M × 100%). Results: GACF was required because of up to 3.5% output variation dependence on the gantry angle. A 2D interpolation was required for OCR because the dose distribution was not radially symmetric, especially for the deep options. The average percent differences were −0.03±0.98% (mean±SD) and the differences of all the measurements fell within ±3%. Conclusion: It is concluded that the model can be clinically used for the compact passively scattered proton therapy system. However, great care should be taken when the field size is less than 5×5 cm², where a direct output measurement is required due to substantial output changes caused by irregular block shapes.
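A hedged sketch of how such a correction-factor model might be evaluated: each factor is interpolated from commissioning tables and the results are multiplied, with the off-isocenter dependence handled by an inverse-square term. The factor names follow the abstract, while the table values and source-axis distance are placeholders:

```python
import numpy as np

def predicted_output(rof, sobpf, rsf, ocr, fsf, gacf, sad_cm=230.0, iso_offset_cm=0.0):
    """Predict output (cGy/MU) as a product of correction factors.

    rof, sobpf, rsf, ocr, fsf, gacf : interpolated correction factors
    sad_cm        : nominal source-axis distance (placeholder value)
    iso_offset_cm : shift of the measurement point along the beam axis
    """
    inverse_square = (sad_cm / (sad_cm + iso_offset_cm)) ** 2
    return rof * sobpf * rsf * ocr * fsf * gacf * inverse_square

# Example: 1D interpolation of the SOBP factor over modulation width.
mod_widths = np.array([4.0, 6.0, 8.0, 10.0])       # cm, commissioning grid
sobp_table = np.array([1.02, 1.00, 0.98, 0.96])    # placeholder factors
sobpf = np.interp(7.0, mod_widths, sobp_table)     # factor at M = 7 cm
print(predicted_output(1.0, sobpf, 0.99, 1.00, 1.00, 1.01, iso_offset_cm=2.0))
```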
Guide-star-based computational adaptive optics for broadband interferometric tomography
Adie, Steven G.; Shemonski, Nathan D.; Graf, Benedikt W.; Ahmad, Adeel; Scott Carney, P.; Boppart, Stephen A.
2012-01-01
We present a method for the numerical correction of optical aberrations based on indirect sensing of the scattered wavefront from point-like scatterers (“guide stars”) within a three-dimensional broadband interferometric tomogram. This method enables the correction of high-order monochromatic and chromatic aberrations utilizing guide stars that are revealed after numerical compensation of defocus and low-order aberrations of the optical system. Guide-star-based aberration correction in a silicone phantom with sparse sub-resolution-sized scatterers demonstrates improvement of resolution and signal-to-noise ratio over a large isotome. Results in highly scattering muscle tissue showed improved resolution of fine structure over an extended volume. Guide-star-based computational adaptive optics expands upon the use of image metrics for numerically optimizing the aberration correction in broadband interferometric tomography, and is analogous to phase-conjugation and time-reversal methods for focusing in turbid media. PMID:23284179
NASA Astrophysics Data System (ADS)
Qiu, Xiang; Dai, Ming; Yin, Chuan-li
2017-09-01
Unmanned aerial vehicle (UAV) remote imaging is affected by bad weather, and the acquired images suffer from low contrast, complex texture, and blurring. In this paper, we propose a blind deconvolution model based on multiple scattering atmosphere point spread function (APSF) estimation to recover the remote sensing image. Based on Narasimhan's analytical theory, a new multiple scattering restoration model is established from the improved dichromatic model. The APSF blur kernel is then estimated using L0-norm sparse priors on the gradient and dark channel, and the original clear image is recovered by Wiener filtering using the fast Fourier transform. Compared with other state-of-the-art methods, the proposed method correctly estimates the blur kernel, effectively removes atmospheric degradation, preserves image detail, and improves the quality evaluation indices.
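A minimal sketch of the Wiener-filtering step once a blur-kernel (APSF) estimate is available; the Gaussian kernel and noise-to-signal constant below are stand-ins, not the APSF model of the paper:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Recover a latent image from a blurred observation by Wiener filtering.

    blurred : 2D ndarray, degraded image
    psf     : 2D ndarray, estimated point spread function (same shape as the
              image and already shifted so its center sits at index [0, 0])
    nsr     : scalar noise-to-signal power ratio (regularization constant)
    """
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)      # Wiener filter in frequency domain
    return np.real(np.fft.ifft2(W * G))

# Example: blur a test image with a small Gaussian PSF, then deconvolve it.
y, x = np.mgrid[-64:64, -64:64]
img = ((np.abs(x) < 20) & (np.abs(y) < 20)).astype(float)
psf = np.exp(-(x**2 + y**2) / (2 * 3.0**2))
psf = np.fft.ifftshift(psf / psf.sum())
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
print(wiener_deconvolve(blurred, psf).shape)
```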
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, J; Sisniega, A; Zbijewski, W
Purpose: To design a dedicated x-ray cone-beam CT (CBCT) system suitable for deployment at the point of care and offering reliable detection of acute intracranial hemorrhage (ICH), traumatic brain injury (TBI), stroke, and other head and neck injuries. Methods: A comprehensive task-based image quality model was developed to guide system design and optimization of a prototype head scanner suitable for imaging of acute TBI and ICH. Previously reported models were expanded to include the effects of x-ray scatter correction necessary for detection of low contrast ICH and the contribution of bit depth (digitization noise) to imaging performance. A task-based detectability index provided the objective function for optimization of system geometry, x-ray source, detector type, anti-scatter grid, and technique at 10–25 mGy dose. Optimal characteristics were experimentally validated using a custom head phantom with 50 HU contrast ICH inserts imaged on a CBCT imaging bench allowing variation of system geometry, focal spot size, detector, grid selection, and x-ray technique. Results: The model guided selection of system geometry with a nominal source-detector distance of 1100 mm and optimal magnification of 1.50. A focal spot size of ∼0.6 mm was sufficient for spatial resolution requirements in ICH detection. Imaging at 90 kVp yielded the best tradeoff between noise and contrast. The model provided quantitation of tradeoffs between flat-panel and CMOS detectors with respect to electronic noise, field of view, and readout speed required for imaging of ICH. An anti-scatter grid was shown to provide modest benefit in conjunction with post-acquisition scatter correction. Images of the head phantom demonstrate visualization of millimeter-scale simulated ICH. Conclusions: Performance consistent with acute TBI and ICH detection is feasible with model-based system design and robust artifact correction in a dedicated head CBCT system. Further improvements can be achieved with incorporation of model-based iterative reconstruction techniques also within the scope of the task-based optimization framework. David Foos and Xiaohui Wang are employees of Carestream Health.
Analytically based photon scatter modeling for a multipinhole cardiac SPECT camera.
Pourmoghaddas, Amir; Wells, R Glenn
2016-11-01
Dedicated cardiac SPECT scanners have improved performance over standard gamma cameras, allowing reductions in acquisition times and/or injected activity. One approach to improving performance has been to use pinhole collimators, but this can cause position-dependent variations in attenuation, sensitivity, and spatial resolution. CT attenuation correction (AC) and an accurate system model can compensate for many of these effects; however, scatter correction (SC) remains an outstanding issue. In addition, in cameras using cadmium-zinc-telluride-based detectors, a large portion of unscattered photons is detected with reduced energy (low-energy tail). Consequently, application of energy-based SC approaches in these cameras leads to a higher increase in noise than with standard cameras due to the subtraction of true counts detected in the low-energy tail. Model-based approaches with parallel-hole collimator systems accurately calculate scatter based on the physics of photon interactions in the patient and camera and generate lower-noise estimates of scatter than energy-based SC. In this study, the accuracy of a model-based SC method was assessed using physical phantom studies on the GE-Discovery NM530c and its performance was compared to a dual energy window (DEW)-SC method. The analytical photon distribution (APD) method was used to calculate the distribution of probabilities that emitted photons will scatter in the surrounding scattering medium and be subsequently detected. APD scatter calculations for 99mTc-SPECT (140 ± 14 keV) were validated with point-source measurements and 15 anthropomorphic cardiac-torso phantom experiments with varying levels of extra-cardiac activity causing scatter inside the heart. The activity inserted into the myocardial compartment of the phantom was first measured using a dose calibrator. CT images were acquired on an Infinia Hawkeye (GE Healthcare) SPECT/CT and coregistered with emission data for AC. For comparison, DEW scatter projections (120 ± 6 keV) were also extracted from the acquired list-mode SPECT data. Either APD or DEW scatter projections were subtracted from corresponding 140 keV measured projections and then reconstructed with AC (APD-SC and DEW-SC). Quantitative accuracy of the activity measured in the heart for the APD-SC and DEW-SC images was assessed against dose calibrator measurements. The difference between modeled and acquired projections was measured as the root-mean-squared error (RMSE). APD-modeled projections for a clinical cardiac study were also evaluated. APD-modeled projections showed good agreement with SPECT measurements and had reduced noise compared to DEW scatter estimates. APD-SC reduced mean error in activity measurement compared to DEW-SC in images and the reduction was statistically significant where the scatter fraction (SF) was large (mean SF = 28.5%, T-test p = 0.007). APD-SC reduced measurement uncertainties as well; however, the difference was not found to be statistically significant (F-test p > 0.5). RMSE comparisons showed that elevated levels of scatter did not significantly contribute to a change in RMSE (p > 0.2). Model-based APD scatter estimation is feasible for dedicated cardiac SPECT scanners with pinhole collimators. APD-SC images performed better than DEW-SC images and improved the accuracy of activity measurement in high-scatter scenarios.
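For reference, the dual energy window estimate used for comparison typically follows the classic Jaszczak-style scaling sketched below; the window widths match those quoted in the abstract, while the function name and count values are illustrative:

```python
import numpy as np

def dew_scatter_estimate(counts_scatter_win, width_scatter_win, width_photopeak, k=0.5):
    """Dual energy window (DEW) scatter estimate for the photopeak window.

    counts_scatter_win : projection counts in the lower scatter window
    width_scatter_win  : width of the scatter window (keV)
    width_photopeak    : width of the photopeak window (keV)
    k                  : empirical scaling factor (0.5 in the classic method)
    """
    return k * counts_scatter_win * (width_photopeak / width_scatter_win)

# Example: 12 keV scatter window and 28 keV photopeak window for 99mTc.
counts = np.array([40.0, 55.0, 62.0])
print(dew_scatter_estimate(counts, width_scatter_win=12.0, width_photopeak=28.0))
```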
Modeling boundary measurements of scattered light using the corrected diffusion approximation
Lehtikangas, Ossi; Tarvainen, Tanja; Kim, Arnold D.
2012-01-01
We study the modeling and simulation of steady-state measurements of light scattered by a turbid medium taken at the boundary. In particular, we implement the recently introduced corrected diffusion approximation in two spatial dimensions to model these boundary measurements. This implementation uses expansions in plane wave solutions to compute boundary conditions and the additive boundary layer correction, and a finite element method to solve the diffusion equation. We show that this corrected diffusion approximation models boundary measurements substantially better than the standard diffusion approximation in comparison to numerical solutions of the radiative transport equation. PMID:22435102
Asymmetric dark matter and CP violating scatterings in a UV complete model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baldes, Iason; Bell, Nicole F.; Millar, Alexander J.
We explore possible asymmetric dark matter models using CP violating scatterings to generate an asymmetry. In particular, we introduce a new model, based on DM fields coupling to the SM Higgs and lepton doublets, a neutrino portal, and explore its UV completions. We study the CP violation and asymmetry formation of this model, to demonstrate that it is capable of producing the correct abundance of dark matter and the observed matter-antimatter asymmetry. Crucial to achieving this is the introduction of interactions which violate CP with a T² dependence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xi; Mou, Xuanqin; Nishikawa, Robert M.
Purpose: Small calcifications are often the earliest and the main indicator of breast cancer. Dual-energy digital mammography (DEDM) has been considered a promising technique to improve the detectability of calcifications since it can be used to suppress the contrast between adipose and glandular tissues of the breast. X-ray scatter leads to erroneous calculations of the DEDM image. Although the pinhole-array interpolation method can estimate scattered radiation, it requires extra exposures to measure the scatter and apply the correction. The purpose of this work is to design an algorithmic method for scatter correction in DEDM without extra exposures. Methods: In this paper, a scatter correction method for DEDM was developed based on the knowledge that scattered radiation has small spatial variation and that the majority of pixels in a mammogram are noncalcification pixels. The scatter fraction was estimated in the DEDM calculation and the measured scatter fraction was used to remove scatter from the image. The scatter correction method was implemented on a commercial full-field digital mammography system with a breast tissue equivalent phantom and a calcification phantom. The authors also implemented the pinhole-array interpolation scatter correction method on the system. Phantom results for both methods are presented and discussed. The authors compared the background DE calcification signals and the contrast-to-noise ratio (CNR) of calcifications in the three DE calcification images: the image without scatter correction, the image with scatter correction using the pinhole-array interpolation method, and the image with scatter correction using the authors' algorithmic method. Results: The authors' results show that the resultant background DE calcification signal can be reduced. The root-mean-square of the background DE calcification signal of 1962 μm with scatter-uncorrected data was reduced to 194 μm after scatter correction using the authors' algorithmic method. The range of background DE calcification signals using scatter-uncorrected data was reduced by 58% with the scatter-corrected data from the algorithmic method. With the scatter-correction algorithm and denoising, the minimum visible calcification size can be reduced from 380 to 280 μm. Conclusions: When applying the proposed algorithmic scatter correction to images, the resultant background DE calcification signals can be reduced and the CNR of calcifications can be improved. This method has similar or even better performance than the pinhole-array interpolation method in scatter correction for DEDM; moreover, this method is convenient and requires no extra exposure to the patient. Although the proposed scatter correction method is effective, it is validated by a 5-cm-thick phantom with calcifications and homogeneous background. The method should be tested on structured backgrounds to more accurately gauge effectiveness.
Implementation of GPU accelerated SPECT reconstruction with Monte Carlo-based scatter correction.
Bexelius, Tobias; Sohlberg, Antti
2018-06-01
Statistical SPECT reconstruction can be very time-consuming, especially when compensations for collimator and detector response, attenuation, and scatter are included in the reconstruction. This work proposes an accelerated SPECT reconstruction algorithm based on graphics processing unit (GPU) processing. An ordered subset expectation maximization (OSEM) algorithm with CT-based attenuation modelling, depth-dependent Gaussian convolution-based collimator-detector response modelling, and Monte Carlo-based scatter compensation was implemented using OpenCL. The OpenCL implementation was compared against the existing multi-threaded OSEM implementation running on a central processing unit (CPU) in terms of scatter-to-primary ratios, standardized uptake values (SUVs), and processing speed using mathematical phantoms and clinical multi-bed bone SPECT/CT studies. The difference in scatter-to-primary ratios, visual appearance, and SUVs between the GPU and CPU implementations was minor. On the other hand, at its best, the GPU implementation was 24 times faster than the multi-threaded CPU version on a normal 128 × 128 matrix size 3-bed bone SPECT/CT data set when compensations for collimator and detector response, attenuation, and scatter were included. GPU SPECT reconstructions show great promise as an everyday clinical reconstruction tool.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, A; Paysan, P; Brehm, M
2016-06-15
Purpose: To improve CBCT image quality for image-guided radiotherapy by applying advanced reconstruction algorithms to overcome scatter, noise, and artifact limitations. Methods: CBCT is used extensively for patient setup in radiotherapy. However, image quality generally falls short of diagnostic CT, limiting soft-tissue-based positioning and potential applications such as adaptive radiotherapy. The conventional TrueBeam CBCT reconstructor uses a basic scatter correction and FDK reconstruction, resulting in residual scatter artifacts, suboptimal image noise characteristics, and other artifacts like cone-beam artifacts. We have developed an advanced scatter correction that uses a finite-element solver (AcurosCTS) to model the behavior of photons as they pass (and scatter) through the object. Furthermore, iterative reconstruction is applied to the scatter-corrected projections, enforcing data consistency with statistical weighting and applying an edge-preserving image regularizer to reduce image noise. The combined algorithms have been implemented on a GPU. CBCT projections from clinically operating TrueBeam systems have been used to compare image quality between the conventional and improved reconstruction methods. Planning CT images of the same patients have also been compared. Results: The advanced scatter correction removes shading and inhomogeneity artifacts, reducing the scatter artifact from 99.5 HU to 13.7 HU in a typical pelvis case. Iterative reconstruction provides further benefit by reducing image noise and eliminating streak artifacts, thereby improving soft-tissue visualization. In a clinical head and pelvis CBCT, the noise was reduced by 43% and 48%, respectively, with no change in spatial resolution (assessed visually). Additional benefits include reduction of cone-beam artifacts and reduction of metal artifacts due to intrinsic downweighting of corrupted rays. Conclusion: The combination of an advanced scatter correction with iterative reconstruction substantially improves CBCT image quality. It is anticipated that clinically acceptable reconstruction times will result from a multi-GPU implementation (the algorithms are under active development and not yet commercially available). All authors are employees of and (may) own stock of Varian Medical Systems.
TH-A-18C-04: Ultrafast Cone-Beam CT Scatter Correction with GPU-Based Monte Carlo Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Y; Southern Medical University, Guangzhou; Bai, T
2014-06-15
Purpose: Scatter artifacts severely degrade image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to automatically finish the whole process, including both scatter correction and reconstruction, within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using raw projection data; 2) rigid registration of the planning CT to the FDK results; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using the GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in simulation to further reduce computation time. A novel denoising algorithm is used to eliminate MC scatter noise caused by low photon numbers. The method is validated on head-and-neck cases with simulated and clinical data. Results: We have studied the impacts of photon histories and volume down-sampling factors on the accuracy of scatter estimation. Fourier analysis was conducted to show that scatter images calculated at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated case with a resolution of 512×512×100, we simulated 10M photons per angle. The total computation time is 23.77 seconds on an Nvidia GTX Titan GPU. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region of interest is reduced from 75.9 to 19.0 HU. Similar results were found for a real patient case. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. The whole process of scatter correction and reconstruction is accomplished within 30 seconds. This study is supported in part by NIH (1R01CA154747-01), The Core Technology Research in Strategic Emerging Industry, Guangdong, China (2011A081402003)
Characterization and correction of cupping effect artefacts in cone beam CT
Hunter, AK; McDavid, WD
2012-01-01
Objective The purpose of this study was to demonstrate and correct the cupping effect artefact that occurs owing to the presence of beam hardening and scatter radiation during image acquisition in cone beam CT (CBCT). Methods A uniform aluminium cylinder (6061) was used to demonstrate the cupping effect artefact on the Planmeca Promax 3D CBCT unit (Planmeca OY, Helsinki, Finland). The cupping effect was studied using a line profile plot of the grey level values using ImageJ software (National Institutes of Health, Bethesda, MD). A hardware-based correction method using copper pre-filtration was used to address this artefact caused by beam hardening and a software-based subtraction algorithm was used to address scatter contamination. Results The hardware-based correction used to address the effects of beam hardening suppressed the cupping effect artefact but did not eliminate it. The software-based correction used to address the effects of scatter resulted in elimination of the cupping effect artefact. Conclusion Compensating for the presence of beam hardening and scatter radiation improves grey level uniformity in CBCT. PMID:22378754
Absorption and scattering of light by nonspherical particles. [in atmosphere
NASA Technical Reports Server (NTRS)
Bohren, C. F.
1986-01-01
Using the example of the polarization of scattered light, it is shown that the scattering matrices for identical, randomly oriented particles and for spherical particles are unequal. The spherical assumptions of Mie theory are therefore inconsistent with the random shapes and sizes of atmospheric particulates. The implications for corrections made to extinction measurements of forward-scattered light are discussed. Several analytical methods are examined as potential bases for developing more accurate models, including Rayleigh theory, Fraunhofer diffraction theory, anomalous diffraction theory, Rayleigh-Gans theory, the separation of variables technique, the Purcell-Pennypacker method, the T-matrix method, and finite difference calculations.
[Spectral scatter correction of coal samples based on quasi-linear local weighted method].
Lei, Meng; Li, Ming; Ma, Xiao-Ping; Miao, Yan-Zi; Wang, Jian-Sheng
2014-07-01
The present paper puts forward a new spectral correction method based on quasi-linear expressions and local weighted functions. The first stage of the method is to examine three quasi-linear expressions, namely quadratic, cubic, and growth-curve expressions, as replacements for the original linear expression in the MSC method. The local weighted function is then constructed by introducing four kernel functions: Gaussian, Epanechnikov, Biweight, and Triweight. After adding this function to the basic estimation equation, the dependency between the original and ideal spectra is described more accurately at each wavelength point. Furthermore, two analytical models were established, based on PLS and on a PCA-BP neural network, to estimate the accuracy of the corrected spectra. Finally, the optimal correction mode was determined from the analytical results for different combinations of quasi-linear expression and local weighted function. Spectra of the same coal sample exhibit different noise levels when the sample is prepared at different particle sizes. To validate the effectiveness of this method, the experiment analyzed the correction results of three spectral data sets with particle sizes of 0.2, 1 and 3 mm. The results show that the proposed method can eliminate the scattering influence and enhance the spectral peak information. This approach provides a more efficient way to significantly enhance the correlation between corrected spectra and coal quality parameters, and to substantially improve the accuracy and stability of the analytical model.
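For context, conventional MSC fits each spectrum linearly against a reference spectrum (usually the mean) and removes the fitted offset and slope; the quasi-linear variants described above replace this linear fit and add local weighting. A hedged sketch of the plain linear version only:

```python
import numpy as np

def msc(spectra, reference=None):
    """Standard multiplicative scatter correction (MSC).

    spectra   : 2D ndarray (n_samples, n_wavelengths)
    reference : 1D reference spectrum; defaults to the mean spectrum

    Each spectrum x is modeled as x = a + b * reference; the corrected
    spectrum is (x - a) / b.
    """
    spectra = np.asarray(spectra, dtype=float)
    if reference is None:
        reference = spectra.mean(axis=0)
    corrected = np.empty_like(spectra)
    for i, x in enumerate(spectra):
        b, a = np.polyfit(reference, x, deg=1)   # slope, intercept
        corrected[i] = (x - a) / b
    return corrected

# Example: two copies of a reference spectrum with different scatter effects.
ref = np.sin(np.linspace(0, 3, 50)) + 2.0
data = np.vstack([1.2 * ref + 0.3, 0.8 * ref - 0.1])
print(np.allclose(msc(data, reference=ref), ref))   # True
```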
Lee, Ho; Fahimian, Benjamin P; Xing, Lei
2017-03-21
This paper proposes a binary moving-blocker (BMB)-based technique for scatter correction in cone-beam computed tomography (CBCT). In concept, a beam blocker consisting of lead strips, mounted in front of the x-ray tube, moves rapidly in and out of the beam during a single gantry rotation. The projections are acquired in alternating phases of blocked and unblocked cone beams, where the blocked phase results in a stripe pattern in the width direction. To derive the scatter map from the blocked projections, 1D B-Spline interpolation/extrapolation is applied by using the detected information in the shaded regions. The scatter map of the unblocked projections is corrected by averaging two scatter maps that correspond to their adjacent blocked projections. The scatter-corrected projections are obtained by subtracting the corresponding scatter maps from the projection data and are utilized to generate the CBCT image by a compressed-sensing (CS)-based iterative reconstruction algorithm. Catphan504 and pelvis phantoms were used to evaluate the method's performance. The proposed BMB-based technique provided an effective method to enhance the image quality by suppressing scatter-induced artifacts, such as ring artifacts around the bowtie area. Compared to CBCT without a blocker, the spatial nonuniformity was reduced from 9.1% to 3.1%. The root-mean-square error of the CT numbers in the regions of interest (ROIs) was reduced from 30.2 HU to 3.8 HU. In addition to high resolution, comparable to that of the benchmark image, the CS-based reconstruction also led to a better contrast-to-noise ratio in seven ROIs. The proposed technique enables complete scatter-corrected CBCT imaging with width-truncated projections and allows reducing the acquisition time to approximately half. This work may have significant implications for image-guided or adaptive radiation therapy, where CBCT is often used.
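A hedged sketch of the interpolation idea behind the blocked-projection scatter estimate: a smooth 1D spline is fitted along each detector row through the shaded (scatter-only) columns and evaluated everywhere. This uses SciPy's generic smoothing spline rather than the exact B-spline routine of the paper, and the stripe spacing and signal model are made up:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def scatter_map_from_blocked(projection, shaded_cols, smoothing=None):
    """Estimate a full scatter map from a blocked projection.

    projection  : 2D ndarray (rows x cols) acquired with the blocker in place
    shaded_cols : indices of detector columns in the blocker shadow, where the
                  detected signal is approximately scatter only
    smoothing   : optional spline smoothing factor (0 for exact interpolation)
    """
    n_rows, n_cols = projection.shape
    all_cols = np.arange(n_cols)
    scatter = np.empty_like(projection, dtype=float)
    for r in range(n_rows):
        spline = UnivariateSpline(shaded_cols, projection[r, shaded_cols],
                                  k=3, s=smoothing)
        scatter[r] = spline(all_cols)            # interpolate/extrapolate
    return scatter

# Example: smooth synthetic scatter sampled in stripe shadows every 16 columns.
cols = np.arange(256)
true_scatter = 50 + 20 * np.exp(-((cols - 128) / 80.0) ** 2)
proj = np.tile(true_scatter, (4, 1))
est = scatter_map_from_blocked(proj, shaded_cols=cols[::16], smoothing=0.0)
print(np.abs(est - proj).max())   # residual is small because scatter is smooth
```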
Mentrup, Detlef; Jockel, Sascha; Menser, Bernd; Neitzel, Ulrich
2016-06-01
The aim of this work was to experimentally compare the contrast improvement factors (CIFs) of a newly developed software-based scatter correction to the CIFs achieved by an antiscatter grid. To this end, three aluminium discs were placed in the lung, the retrocardial and the abdominal areas of a thorax phantom, and digital radiographs of the phantom were acquired both with and without a stationary grid. The contrast generated by the discs was measured in both images, and the CIFs achieved by grid usage were determined for each disc. Additionally, the non-grid images were processed with a scatter correction software. The contrasts generated by the discs were determined in the scatter-corrected images, and the corresponding CIFs were calculated. The CIFs obtained with the grid and with the software were in good agreement. In conclusion, the experiment demonstrates quantitatively that software-based scatter correction allows restoring the image contrast of a non-grid image in a manner comparable with an antiscatter grid.
Infrared weak corrections to strongly interacting gauge boson scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciafaloni, Paolo; Urbano, Alfredo
2010-04-15
We evaluate the impact of electroweak corrections of infrared origin on strongly interacting longitudinal gauge boson scattering, calculating all-order resummed expressions at the double log level. As a working example, we consider the standard model with a heavy Higgs. At energies typical of forthcoming experiments (LHC, International Linear Collider, Compact Linear Collider), the corrections are in the 10%-40% range, with the relative sign depending on the initial state considered and on whether or not additional gauge boson emission is included. We conclude that the effect of radiative electroweak corrections should be included in the analysis of longitudinal gauge boson scattering.
NASA Astrophysics Data System (ADS)
Gillen, Rebecca; Firbank, Michael J.; Lloyd, Jim; O'Brien, John T.
2015-09-01
This study investigated whether the appearance and diagnostic accuracy of HMPAO brain perfusion SPECT images could be improved by using CT-based attenuation and scatter correction compared with the uniform attenuation correction method. A cohort of subjects who were clinically categorized as Alzheimer's Disease (n=38), Dementia with Lewy Bodies (n=29) or healthy normal controls (n=30) underwent SPECT imaging with Tc-99m HMPAO and a separate CT scan. The SPECT images were processed using: (a) a correction map derived from the subject's CT scan, (b) the Chang uniform approximation for correction, or (c) no attenuation correction. Images were visually inspected. The ratios between key regions of interest known to be affected or spared in each condition were calculated for each correction method, and the differences between these ratios were evaluated. The images produced using the different corrections were noted to be visually different. However, ROI analysis found similar statistically significant differences between control and dementia groups and between AD and DLB groups regardless of the correction map used. We did not identify an improvement in diagnostic accuracy in images which were corrected using CT-based attenuation and scatter correction, compared with those corrected using a uniform correction map.
Intermediate energy proton-deuteron elastic scattering
NASA Technical Reports Server (NTRS)
Wilson, J. W.
1973-01-01
A fully symmetrized multiple scattering series is considered for the description of proton-deuteron elastic scattering. An off-shell continuation of the experimentally known two-body amplitudes that retains the exchange symmetries required for the calculation is presented. The one-boson-exchange terms of the two-body amplitudes are evaluated exactly in this off-shell prescription. The first two terms of the multiple scattering series are calculated explicitly, whereas multiple scattering effects are obtained as minimum variance estimates from the 146-MeV data of Postma and Wilson. The multiple scattering corrections indeed consist of low-order partial waves, as suggested by Sloan based on model studies with separable interactions. The Hamada-Johnston wave function is shown to be consistent with the data for internucleon distances greater than about 0.84 fm.
NASA Astrophysics Data System (ADS)
Pahlevaninezhad, H.; Lee, A. M. D.; Hyun, C.; Lam, S.; MacAulay, C.; Lane, P. M.
2013-03-01
In this paper, we conduct a phantom study for modeling the autofluorescence (AF) properties of tissue. A combined optical coherence tomography (OCT) and AF imaging system is proposed to measure the strength of the AF signal as a function of the scattering layer thickness and concentration. The combined AF-OCT system is capable of estimating the AF loss due to scattering in the epithelium using the thickness and scattering concentration calculated from the co-registered OCT images. We define a correction factor to account for scattering losses in the epithelium and calculate a scattering-corrected AF signal. We believe the scattering-corrected AF will reduce the diagnostic false-positive rate in the early detection of airway lesions caused by confounding factors such as increased epithelial thickness and inflammation.
Konevskikh, Tatiana; Ponossov, Arkadi; Blümel, Reinhold; Lukacs, Rozalia; Kohler, Achim
2015-06-21
The appearance of fringes in the infrared spectroscopy of thin films seriously hinders the interpretation of chemical bands because fringes change the relative peak heights of chemical spectral bands. Thus, for the correct interpretation of chemical absorption bands, physical properties need to be separated from chemical characteristics. In the paper at hand we revisit the theory of the scattering of infrared radiation at thin absorbing films. Although, in general, scattering and absorption are connected by a complex refractive index, we show that for the scattering of infrared radiation at thin biological films, fringes and chemical absorbance can in good approximation be treated as additive. We further introduce a model-based pre-processing technique for separating fringes from chemical absorbance by extended multiplicative signal correction (EMSC). The technique is validated by simulated and experimental FTIR spectra. It is further shown that EMSC, as opposed to other suggested filtering methods for the removal of fringes, does not remove information related to chemical absorption.
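A hedged sketch of an EMSC-style fit in which sine/cosine terms with assumed fringe frequencies are added to the usual reference-plus-baseline model, so that fringes and chemical absorbance can be separated additively; the design-matrix terms and the example spectrum are illustrative, not the exact model of the paper:

```python
import numpy as np

def emsc_with_fringes(spectrum, reference, wavenumbers, fringe_freqs):
    """EMSC-style separation of chemical absorbance and additive fringes.

    The spectrum is modeled as
        x(k) = a + b*m(k) + c*k + d*k^2 + sum_j [p_j sin(f_j k) + q_j cos(f_j k)]
    where m is the reference (chemical) spectrum, the polynomial terms absorb a
    smooth baseline, and the sine/cosine pairs absorb fringes with assumed or
    pre-estimated frequencies f_j. The corrected spectrum is
    (x - all non-chemical terms) / b.
    """
    k = np.asarray(wavenumbers, dtype=float)
    ref = np.asarray(reference, dtype=float)
    cols = [np.ones_like(k), ref, k, k**2]
    for f in fringe_freqs:
        cols += [np.sin(f * k), np.cos(f * k)]
    design = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(design, spectrum, rcond=None)
    b = coef[1]
    non_chemical = design @ coef - b * ref
    return (spectrum - non_chemical) / b

# Example: a reference band plus a synthetic fringe and a constant offset.
k = np.linspace(1000, 1800, 400)
ref = np.exp(-((k - 1650) / 30.0) ** 2)
x = 0.8 * ref + 0.05 + 0.03 * np.sin(0.05 * k)
corrected = emsc_with_fringes(x, ref, k, fringe_freqs=[0.05])
print(np.abs(corrected - ref).max() < 1e-3)   # True: fringe and offset removed
```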
SU-E-I-07: An Improved Technique for Scatter Correction in PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, S; Wang, Y; Lue, K
2014-06-01
Purpose: In positron emission tomography (PET), the single scatter simulation (SSS) algorithm is widely used for scatter estimation in clinical scans. However, bias usually occurs at the essential step of scaling the computed SSS distribution to real scatter amounts by employing the scatter-only projection tail. The bias can be amplified when the scatter-only projection tail is too small, resulting in incorrect scatter correction. To this end, we propose a novel scatter calibration technique to accurately estimate the amount of scatter using a pre-determined scatter fraction (SF) function instead of relying on scatter-only tail information. Methods: As the SF depends on the radioactivity distribution and the attenuating material of the patient, an accurate theoretical relation cannot be devised. Instead, we constructed an empirical transformation function between SFs and average attenuation coefficients based on a series of phantom studies with different sizes and materials. From the average attenuation coefficient, the predicted SFs were calculated using the empirical transformation function. Hence, the real scatter amount can be obtained by scaling the SSS distribution with the predicted SFs. The simulation was conducted using SimSET. The Siemens Biograph™ 6 PET scanner was modeled in this study. The Software for Tomographic Image Reconstruction (STIR) was employed to estimate the scatter and reconstruct images. The EEC phantom was adopted to evaluate the performance of our proposed technique. Results: The scatter-corrected image of our method demonstrated improved image contrast over that of SSS. For the reconstructed images of our technique and of SSS, the normalized standard deviations were 0.053 and 0.182, respectively; the root mean squared errors were 11.852 and 13.767, respectively. Conclusion: We have proposed an alternative method to calibrate SSS (C-SSS) to the absolute scatter amounts using SF. This method can avoid the bias caused by insufficient tail information and therefore improve the accuracy of scatter estimation.
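A minimal sketch of the calibration idea: instead of fitting the scatter-only tails, the unscaled SSS distribution is rescaled so that its total matches the scatter fraction predicted from the average attenuation coefficient. Names and numbers are illustrative, not from the abstract:

```python
import numpy as np

def scale_sss_by_sf(sss_estimate, measured_projection, predicted_sf):
    """Scale an unscaled SSS scatter distribution so that its total matches a
    pre-determined scatter fraction, instead of fitting scatter-only tails.

    sss_estimate        : unscaled single-scatter-simulation sinogram
    measured_projection : measured (trues + scatter) sinogram
    predicted_sf        : scatter fraction predicted from the average
                          attenuation coefficient (empirical calibration)
    """
    target_scatter_counts = predicted_sf * measured_projection.sum()
    scale = target_scatter_counts / sss_estimate.sum()
    return scale * sss_estimate

# Example with made-up sinograms.
sss = np.random.rand(64, 64)
meas = 5000 + 1000 * np.random.rand(64, 64)
scaled = scale_sss_by_sf(sss, meas, predicted_sf=0.35)
print(scaled.sum() / meas.sum())   # 0.35 by construction
```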
Coherent beam control through inhomogeneous media in multi-photon microscopy
NASA Astrophysics Data System (ADS)
Paudel, Hari Prasad
Multi-photon fluorescence microscopy has become a primary tool for high-resolution deep tissue imaging because of its sensitivity to ballistic excitation photons in comparison to scattered excitation photons. The imaging depth of multi-photon microscopes in tissue imaging is limited primarily by background fluorescence that is generated by scattered light due to the random fluctuations in refractive index inside the media, and by reduced intensity in the ballistic focal volume due to aberrations within the tissue and at its interface. We built two multi-photon adaptive optics (AO) correction systems, one for combating scattering and aberration problems, and another for compensating interface aberrations. For scattering correction a MEMS segmented deformable mirror (SDM) was inserted at a plane conjugate to the objective back-pupil plane. The SDM can pre-compensate for light scattering by coherent combination of the scattered light to make an apparent focus even at depths where negligible ballistic light remains (i.e. the ballistic limit). This problem was approached by investigating the spatial and temporal focusing characteristics of a broad-band light source through strongly scattering media. A new model was developed for coherent focus enhancement through or inside strongly scattering media based on the initial speckle contrast. A layer of fluorescent beads under a mouse skull was imaged using an iterative coherent beam control method in the prototype two-photon microscope to demonstrate the technique. We also adapted an AO correction system to an existing three-photon microscope in a collaborator's lab at Cornell University. In the second AO correction approach a continuous deformable mirror (CDM) is placed at a plane conjugate to the plane of an interface aberration. We demonstrated that this "Conjugate AO" technique yields a large field-of-view (FOV) advantage in comparison to Pupil AO. Further, we showed that the extended FOV in conjugate AO is maintained over a relatively large axial misalignment of the conjugate planes of the CDM and the aberrating interface. This dissertation advances the field of microscopy by providing new models and techniques for imaging deeply within strongly scattering tissue, and by describing new adaptive optics approaches to extending the imaging FOV in the presence of sample aberrations.
NASA Astrophysics Data System (ADS)
Bootsma, Gregory J.
X-ray scatter in cone-beam computed tomography (CBCT) is known to reduce image quality by introducing image artifacts, reducing contrast, and limiting computed tomography (CT) number accuracy. The extent of the effect of x-ray scatter on CBCT image quality is determined by the shape and magnitude of the scatter distribution in the projections. A method to allay the effects of scatter is imperative to enable application of CBCT to solve a wider domain of clinical problems. The work contained herein proposes such a method. A characterization of the scatter distribution through the use of a validated Monte Carlo (MC) model is carried out. The effects of imaging parameters and compensators on the scatter distribution are investigated. The spectral frequency components of the scatter distribution in CBCT projection sets are analyzed using Fourier analysis and found to reside predominately in the low frequency domain. The exact frequency extents of the scatter distribution are explored for different imaging configurations and patient geometries. Based on the Fourier analysis it is hypothesized the scatter distribution can be represented by a finite sum of sine and cosine functions. The fitting of MC scatter distribution estimates enables the reduction of the MC computation time by diminishing the number of photon tracks required by over three orders of magnitude. The fitting method is incorporated into a novel scatter correction method using an algorithm that simultaneously combines multiple MC scatter simulations. Running concurrent MC simulations while simultaneously fitting the results allows for the physical accuracy and flexibility of MC methods to be maintained while enhancing the overall efficiency. CBCT projection set scatter estimates, using the algorithm, are computed on the order of 1--2 minutes instead of hours or days. Resulting scatter corrected reconstructions show a reduction in artifacts and improvement in tissue contrast and voxel value accuracy.
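A hedged illustration of the idea that a noisy, low-photon Monte Carlo scatter projection can be represented by a small number of low-frequency sine and cosine terms. The sketch below performs an equivalent least-squares fit by truncating the 2D Fourier spectrum; the cutoff frequency and synthetic data are made up, and this is not the concurrent-simulation algorithm itself:

```python
import numpy as np

def fit_low_frequency_scatter(noisy_scatter, n_freq=3):
    """Fit a noisy Monte Carlo scatter projection with a truncated 2D Fourier
    series (a finite sum of low-frequency sines and cosines), exploiting the
    fact that the scatter distribution is dominated by low spatial frequencies.
    """
    F = np.fft.fft2(noisy_scatter)
    mask = np.zeros_like(F)
    # Keep only the lowest +/- n_freq frequencies in each direction.
    mask[:n_freq + 1, :n_freq + 1] = 1
    mask[:n_freq + 1, -n_freq:] = 1
    mask[-n_freq:, :n_freq + 1] = 1
    mask[-n_freq:, -n_freq:] = 1
    return np.real(np.fft.ifft2(F * mask))

# Example: smooth scatter "truth" plus noise from a low number of photon tracks.
y, x = np.mgrid[0:128, 0:128] / 128.0
truth = 100 * np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.2)
noisy = truth + np.random.normal(scale=15.0, size=truth.shape)
smooth = fit_low_frequency_scatter(noisy, n_freq=4)
print(np.abs(smooth - truth).mean() < np.abs(noisy - truth).mean())  # usually True
```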
WE-AB-207A-07: A Planning CT-Guided Scatter Artifact Correction Method for CBCT Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, X; Liu, T; Dong, X
Purpose: Cone beam computed tomography (CBCT) imaging is in increasing demand for high-performance image-guided radiotherapy such as online tumor delineation and dose calculation. However, current CBCT imaging has severe scatter artifacts, and its clinical application is therefore limited to patient setup based mainly on bony structures. This study's purpose is to develop a CBCT artifact correction method. Methods: The proposed scatter correction method utilizes the planning CT to improve CBCT image quality. First, an image registration is used to match the planning CT with the CBCT to reduce the geometric difference between the two images. Then, the planning CT-based prior information is entered into a Bayesian deconvolution framework to iteratively perform a scatter artifact correction for the CBCT images. This technique was evaluated using Catphan phantoms with multiple inserts. Contrast-to-noise ratios (CNR), signal-to-noise ratios (SNR), and the image spatial nonuniformity (ISN) in selected volumes of interest (VOIs) were calculated to assess the proposed correction method. Results: After scatter correction, the CNR increased by factors of 1.96, 3.22, 3.20, 3.46, 3.44, 1.97 and 1.65, and the SNR increased by factors of 1.05, 2.09, 1.71, 3.95, 2.52, 1.54 and 1.84 for the Air, PMP, LDPE, Polystyrene, Acrylic, Delrin and Teflon inserts, respectively. The ISN decreased from 21.1% to 4.7% in the corrected images. All values of CNR, SNR and ISN in the corrected CBCT images were much closer to those in the planning CT images. The results demonstrated that the proposed method reduces the relevant artifacts and recovers CT numbers. Conclusion: We have developed a novel CBCT artifact correction method based on the planning CT image and demonstrated that the proposed CT-guided correction method can significantly reduce scatter artifacts and improve image quality. This method has great potential to correct CBCT images, allowing their use in adaptive radiotherapy.
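For reference, a minimal sketch of the evaluation metrics named in the abstract (CNR, SNR, and image spatial nonuniformity) is given below. The exact definitions used by the authors are not stated, so these are common textbook forms and should be treated as assumptions.

```python
import numpy as np

def cnr(insert_voi, background_voi):
    """Contrast-to-noise ratio of an insert VOI against a background VOI."""
    return abs(insert_voi.mean() - background_voi.mean()) / background_voi.std()

def snr(insert_voi):
    """Signal-to-noise ratio within a VOI."""
    return insert_voi.mean() / insert_voi.std()

def isn(voi_means):
    """Image spatial nonuniformity (%) across several background VOI means."""
    voi_means = np.asarray(voi_means, dtype=float)
    return 100.0 * (voi_means.max() - voi_means.min()) / voi_means.mean()
```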
Radiative-Transfer Modeling of Spectra of Densely Packed Particulate Media
NASA Astrophysics Data System (ADS)
Ito, G.; Mishchenko, M. I.; Glotch, T. D.
2017-12-01
Remote sensing measurements over a wide range of wavelengths from both ground- and space-based platforms have provided a wealth of data regarding the surfaces and atmospheres of various solar system bodies. With proper interpretation, important properties, such as composition and particle size, can be inferred. However, proper interpretation of such datasets can often be difficult, especially for densely packed particulate media with particle sizes on the order of the wavelength of light being used for remote sensing. Radiative transfer theory has often been applied to the study of densely packed particulate media like planetary regoliths and snow, but with difficulty, and here we continue to investigate radiative transfer modeling of spectra of densely packed media. We use the superposition T-matrix method to compute scattering properties of clusters of particles and capture the near-field effects important for dense packing. Then, the scattering parameters from the T-matrix computations are modified with the static structure factor correction, accounting for the dense packing of the clusters themselves. Using these corrected scattering parameters, reflectance (or emissivity via Kirchhoff's Law) is computed with the invariant imbedding solution to the radiative transfer equation. For this work we modeled the emissivity spectrum of the 3.3 µm particle size fraction of enstatite, representing some common mineralogical and particle size components of regoliths, at mid-infrared wavelengths (5-50 µm). The modeled spectrum from the T-matrix method with the static structure factor correction using moderate packing densities (filling factors of 0.1-0.2) produced better fits to the laboratory measurement of the corresponding spectrum than the spectrum modeled by the equivalent method without the static structure factor correction. Future work will test the combination of the superposition T-matrix method and static structure factor correction for larger particle sizes and polydisperse clusters in search of the most effective modeling of spectra of densely packed particulate media.
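As a hedged illustration of the kind of dense-packing correction referred to above, the sketch below evaluates the hard-sphere static structure factor in the Percus-Yevick approximation at a given filling factor. This is a standard textbook form, not the authors' implementation; the packing fraction eta and sphere diameter sigma are free parameters.

```python
import numpy as np

def percus_yevick_S(q, eta, sigma=1.0):
    """S(q) for hard spheres of diameter sigma at packing fraction eta (Percus-Yevick)."""
    rho = 6.0 * eta / (np.pi * sigma**3)                # number density
    alpha = (1 + 2 * eta) ** 2 / (1 - eta) ** 4
    beta = -6 * eta * (1 + eta / 2) ** 2 / (1 - eta) ** 4
    gamma = eta * alpha / 2.0
    r = np.linspace(1e-6, sigma, 2000)
    c_r = -(alpha + beta * (r / sigma) + gamma * (r / sigma) ** 3)  # PY direct correlation, r < sigma
    S = np.empty_like(q, dtype=float)
    for i, qi in enumerate(q):
        c_q = 4 * np.pi * np.trapz(c_r * r * np.sin(qi * r) / qi, r)  # 3D Fourier transform of c(r)
        S[i] = 1.0 / (1.0 - rho * c_q)
    return S

q = np.linspace(0.1, 20.0, 200)
S_moderate = percus_yevick_S(q, eta=0.2)   # a moderate filling factor, as in the abstract
```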
Data-driven sensitivity inference for Thomson scattering electron density measurement systems.
Fujii, Keisuke; Yamada, Ichihiro; Hasuo, Masahiro
2017-01-01
We developed a method to infer the calibration parameters of multichannel measurement systems, such as channel-to-channel variations of sensitivity and noise amplitude, from experimental data. We regard such uncertainties of the calibration parameters as dependent noise. The statistical properties of the dependent noise and those of the latent functions were modeled and implemented in a Gaussian process kernel. Based on their statistical difference, both parameters were inferred from the data. We applied this method to the Thomson scattering electron density measurement system for the Large Helical Device plasma, which is equipped with 141 spatial channels. Based on 210 sets of experimental data, we evaluated the correction factor of the sensitivity and noise amplitude for each channel. The correction factor varies by ≈10%, and the random noise amplitude is ≈2%, i.e., the measurement accuracy increases by a factor of 5 after this sensitivity correction. An improvement in the certainty of the inferred spatial derivatives was also demonstrated.
Stochastic analysis of surface roughness models in quantum wires
NASA Astrophysics Data System (ADS)
Nedjalkov, Mihail; Ellinghaus, Paul; Weinbub, Josef; Sadi, Toufik; Asenov, Asen; Dimov, Ivan; Selberherr, Siegfried
2018-07-01
We present a signed-particle computational approach for the Wigner transport model and use it to analyze the electron state dynamics in quantum wires, focusing on the effect of surface roughness. Usually, surface roughness is treated as a scattering mechanism, accounted for by the Fermi Golden Rule, which relies on approximations like statistical averaging and, in the case of quantum wires, incorporates quantum corrections based on the mode space approach. We provide a novel computational approach to enable a physical analysis of these assumptions in terms of phase space and particles. We utilize the signed-particle model of Wigner evolution, which, besides providing a full quantum description of the electron dynamics, enables intuitive insights into the processes of tunneling that govern the physical evolution. It is shown that the basic assumptions of the quantum-corrected scattering model correspond to the quantum behavior of the electron system. Of particular importance is the distribution of the density: due to quantum confinement, electrons are kept away from the walls, which is in contrast to the classical scattering model. Further quantum effects are retardation of the electron dynamics and quantum reflection. Far from equilibrium, the assumption of homogeneous conditions along the wire breaks down even in the case of ideal wire walls.
Improved scatter correction using adaptive scatter kernel superposition
NASA Astrophysics Data System (ADS)
Sun, M.; Star-Lack, J. M.
2010-11-01
Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach, whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS) requiring spatial domain convolutions and (2) fast adaptive scatter kernel superposition (fASKS) where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
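To make the kernel-superposition idea concrete, the sketch below applies a stationary (non-adaptive) scatter kernel by FFT convolution and subtracts the estimate in a small fixed-point loop. The adaptive thickness-dependent kernels of ASKS/fASKS are omitted; the kernel width, amplitude, and iteration count are placeholder assumptions, not values from the paper.

```python
import numpy as np

def sks_correct(projection, kernel_sigma_px=40.0, amplitude=0.15, n_iter=3):
    """Estimate and subtract scatter via a broad Gaussian kernel convolved with the primary estimate."""
    ny, nx = projection.shape
    fy = np.fft.fftfreq(ny) * ny                  # pixel coordinates already in FFT order
    fx = np.fft.fftfreq(nx) * nx
    Y, X = np.meshgrid(fy, fx, indexing="ij")
    kernel = np.exp(-(X**2 + Y**2) / (2 * kernel_sigma_px**2))
    kernel *= amplitude / kernel.sum()            # kernel integral sets the scatter fraction
    K = np.fft.fft2(kernel)
    primary = projection.copy()
    for _ in range(n_iter):                       # fixed-point update: primary = measured - kernel * primary
        scatter = np.real(np.fft.ifft2(np.fft.fft2(primary) * K))
        primary = projection - scatter
    return primary, scatter
```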
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andersen, A; Casares-Magaz, O; Elstroem, U
Purpose: Cone-beam CT (CBCT) imaging may enable image- and dose-guided proton therapy, but is challenged by image artefacts. The aim of this study was to demonstrate the general applicability of a previously developed a priori scatter correction algorithm to allow CBCT-based proton dose calculations. Methods: The a priori scatter correction algorithm used a plan CT (pCT) and raw cone-beam projections acquired with the Varian On-Board Imager. The projections were initially corrected for bow-tie filtering and beam hardening and subsequently reconstructed using the Feldkamp-Davis-Kress algorithm (rawCBCT). The rawCBCTs were intensity normalised before rigid and deformable registrations were applied to map the pCTs onto the rawCBCTs. The resulting images were forward projected onto the same angles as the raw CB projections. The two projection sets were subtracted from each other; the difference was Gaussian and median filtered, subtracted from the raw projections, and finally reconstructed into the scatter-corrected CBCTs. For evaluation, water equivalent path length (WEPL) maps (from anterior to posterior) were calculated on different reconstructions of three data sets (CB projections and pCT) of three parts of an Alderson phantom. Finally, single beam spot scanning proton plans (0-360 deg gantry angle in steps of 5 deg; using PyTRiP) treating a 5 cm central spherical target in the pCT were re-calculated on scatter-corrected CBCTs with identical targets. Results: The scatter-corrected CBCTs resulted in sub-mm mean WEPL differences relative to the rigid registration of the pCT for all three data sets. These differences were considerably smaller than what was achieved with the regular Varian CBCT reconstruction algorithm (1-9 mm mean WEPL differences). Target coverage in the re-calculated plans was generally improved using the scatter-corrected CBCTs compared to the Varian CBCT reconstruction. Conclusion: We have demonstrated the general applicability of a priori CBCT scatter correction, potentially opening the way for CBCT-based image/dose-guided proton therapy, including adaptive strategies. Research agreement with Varian Medical Systems, not connected to the present project.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ouyang, L; Lee, H; Wang, J
2014-06-01
Purpose: To evaluate a moving-blocker-based approach in estimating and correcting megavoltage (MV) and kilovoltage (kV) scatter contamination in kV cone-beam computed tomography (CBCT) acquired during volumetric modulated arc therapy (VMAT). Methods: XML code was generated to enable concurrent CBCT acquisition and VMAT delivery in Varian TrueBeam developer mode. A physical attenuator (i.e., "blocker") consisting of equally spaced lead strips (3.2 mm strip width and 3.2 mm gap in between) was mounted between the x-ray source and patient at a source-to-blocker distance of 232 mm. The blocker was simulated to move back and forth along the gantry rotation axis during the CBCT acquisition. Both MV and kV scatter signals were estimated simultaneously from the blocked regions of the imaging panel and interpolated into the unblocked regions. Scatter-corrected CBCT was then reconstructed from unblocked projections after scatter subtraction, using an iterative image reconstruction algorithm based on constrained optimization. Experimental studies were performed on a Catphan 600 phantom and an anthropomorphic pelvis phantom to demonstrate the feasibility of using a moving blocker for MV-kV scatter correction. Results: MV scatter greatly degrades CBCT image quality by increasing CT number inaccuracy and decreasing image contrast, in addition to the shading artifacts caused by kV scatter. The artifacts were substantially reduced in the moving-blocker-corrected CBCT images in both the Catphan and pelvis phantoms. Quantitatively, the CT number error in selected regions of interest was reduced from 377 in the kV-MV contaminated CBCT image to 38 for the Catphan phantom. Conclusions: The moving-blocker-based strategy can successfully correct MV and kV scatter simultaneously in CBCT projection data acquired with concurrent VMAT delivery. This work was supported in part by a grant from the Cancer Prevention and Research Institute of Texas (RP130109) and a grant from the American Cancer Society (RSG-13-326-01-CCE).
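A hedged sketch of the blocked-strip idea follows: detector columns behind the lead strips see (almost) only scatter, so the scatter field is sampled there and interpolated across the unblocked columns before subtraction. The strip geometry and array names below are illustrative, not taken from the paper.

```python
import numpy as np

def blocker_scatter_correct(projection, blocked_cols):
    """Estimate scatter from blocked detector columns and subtract it from the open columns."""
    ny, nx = projection.shape
    cols = np.arange(nx)
    open_cols = np.setdiff1d(cols, blocked_cols)
    scatter = np.empty_like(projection, dtype=float)
    for row in range(ny):
        # Sample scatter behind the strips, interpolate into the open regions of this detector row.
        scatter[row] = np.interp(cols, blocked_cols, projection[row, blocked_cols])
    corrected = projection.astype(float).copy()
    corrected[:, open_cols] -= scatter[:, open_cols]
    return np.clip(corrected, 0, None), scatter
```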
Method for measuring multiple scattering corrections between liquid scintillators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verbeke, J. M.; Glenn, A. M.; Keefer, G. J.
2016-04-11
In this study, a time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons that scatter multiple times. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.
NASA Technical Reports Server (NTRS)
Flesia, C.; Schwendimann, P.
1992-01-01
The contribution of multiple scattering to the lidar signal depends on the optical depth tau. Therefore, the lidar analysis, based on the assumption that multiple scattering can be neglected, is limited to cases characterized by low values of the optical depth (tau less than or equal to 0.1) and hence excludes scattering from most clouds. Moreover, all inversion methods relating the lidar signal to number densities and particle sizes must be modified, since multiple scattering affects the direct analysis. The essential requirements of a realistic model for lidar measurements that includes multiple scattering and can be applied to practical situations are as follows. (1) Required is not merely a correction term or a rough approximation describing the results of a particular experiment, but a general theory of multiple scattering tying together the relevant physical parameters we seek to measure. (2) Required is an analytical generalization of the lidar equation which can be applied in the case of a realistic aerosol. A purely analytical formulation is important in order to avoid the convergence and stability problems which, in the case of a numerical approach, are due to the large number of events that have to be taken into account in the presence of large optical depth and/or strong experimental noise.
Alterations to the relativistic Love-Franey model and their application to inelastic scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeile, J.R.
The fictitious axial-vector and tensor mesons for the real part of the relativistic Love-Franey interaction are removed. In an attempt to make up for this loss, derivative couplings are used for the π and ρ mesons. Such derivative couplings require the introduction of axial-vector and tensor contact term corrections. Meson parameters are then fit to free nucleon-nucleon scattering data. The resulting fits are comparable to those of the relativistic Love-Franey model provided that the contact term corrections are included and the fits are weighted over the physically significant quantity of twice the tensor minus the axial-vector Lorentz invariants. Failure to include contact term corrections leads to poor fits at higher energies. The off-shell behavior of this model is then examined by looking at several applications from inelastic proton-nucleus scattering.
A model of primary and scattered photon fluence for mammographic x-ray image quantification
NASA Astrophysics Data System (ADS)
Tromans, Christopher E.; Cocker, Mary R.; Brady, Michael, Sir
2012-10-01
We present an efficient method to calculate the primary and scattered x-ray photon fluence components of a mammographic image. This can be used for a range of clinically important purposes, including estimation of breast density, personalized image display, and quantitative mammogram analysis. The method is based on models of the x-ray tube, the digital detector, and a novel ray tracer which models the diverging beam emanating from the focal spot. The tube model includes consideration of the anode heel effect, and empirical corrections for wear and manufacturing tolerances. The detector model is empirical, being based on a family of transfer functions that cover the range of beam qualities and compressed breast thicknesses which are encountered clinically. The scatter estimation utilizes optimal information sampling and interpolation (to yield a clinically usable computation time) of scatter calculated using fundamental physics relations. A scatter kernel arising around each primary ray is calculated, and these are summed by superposition to form the scatter image. Beam quality, spatial position in the field (in particular that arising at the air boundary due to the depletion of scatter contribution from the surroundings), and the possible presence of a grid are considered, as is tissue composition using an iterative refinement procedure. We present numerous validation results that use a purpose-designed tissue-equivalent step wedge phantom. The average differences between actual acquisitions and modelled pixel intensities observed across the adipose-to-fibroglandular attenuation range vary between 5% and 7% depending on beam quality, and, for a single beam quality, are 2.09% and 3.36% with and without a grid, respectively.
Elastic electron scattering from the DNA bases cytosine and thymine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colyer, C. J.; Bellm, S. M.; Lohmann, B.
2011-10-15
Cross-section data for electron scattering from biologically relevant molecules are important for the modeling of energy deposition in living tissue. Relative elastic differential cross sections have been measured for cytosine and thymine using the crossed-beam method. These measurements have been performed for six discrete electron energies between 60 and 500 eV and for detection angles between 15 deg. and 130 deg. Calculations have been performed via the screen-corrected additivity rule method and are in good agreement with the present experiment.
Binzoni, T; Leung, T S; Rüfenacht, D; Delpy, D T
2006-01-21
Based on quasi-elastic scattering theory (and random walk on a lattice approach), a model of laser-Doppler flowmetry (LDF) has been derived which can be applied to measurements in large tissue volumes (e.g. when the interoptode distance is >30 mm). The model holds for a semi-infinite medium and takes into account the transport-corrected scattering coefficient and the absorption coefficient of the tissue, and the scattering coefficient of the red blood cells. The model holds for anisotropic scattering and for multiple scattering of the photons by the moving scatterers of finite size. In particular, it has also been possible to take into account the simultaneous presence of both Brownian and pure translational movements. An analytical and simplified version of the model has also been derived and its validity investigated, for the case of measurements in human skeletal muscle tissue. It is shown that at large optode spacing it is possible to use the simplified model, taking into account only a 'mean' light pathlength, to predict the blood flow related parameters. It is also demonstrated that the 'classical' blood volume parameter, derived from LDF instruments, may not represent the actual blood volume variations when the investigated tissue volume is large. The simplified model does not need knowledge of the tissue optical parameters and thus should allow the development of very simple and cost-effective LDF hardware.
Estimation of Soil Moisture with L-band Multi-polarization Radar
NASA Technical Reports Server (NTRS)
Shi, J.; Chen, K. S.; Kim, Chung-Li Y.; Van Zyl, J. J.; Njoku, E.; Sun, G.; O'Neill, P.; Jackson, T.; Entekhabi, D.
2004-01-01
Through analyses of the model-simulated database, we developed a technique to estimate surface soil moisture under the HYDROS radar sensor configuration (L-band, multi-polarization, 40 deg incidence). This technique includes two steps. First, it decomposes the total backscattering signals into two components - the surface scattering components (the bare-surface backscattering signals attenuated by the overlying vegetation layer) and the sum of the direct volume scattering components and surface-volume interaction components at different polarizations. From the model-simulated database, our decomposition technique works quite well in estimating the surface scattering components, with RMSEs of 0.12, 0.25, and 0.55 dB for VV, HH, and VH polarizations, respectively. Then, we use the decomposed surface backscattering signals to estimate the soil moisture and the combined surface roughness and vegetation attenuation correction factors with all three polarizations.
Identifying the theory of dark matter with direct detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gluscevic, Vera; Gresham, Moira I.; McDermott, Samuel D.
2015-12-01
Identifying the true theory of dark matter depends crucially on accurately characterizing interactions of dark matter (DM) with other species. In the context of DM direct detection, we present a study of the prospects for correctly identifying the low-energy effective DM-nucleus scattering operators connected to UV-complete models of DM-quark interactions. We take a census of plausible UV-complete interaction models with different low-energy leading-order DM-nuclear responses. For each model (corresponding to different spin-, momentum-, and velocity-dependent responses), we create a large number of realizations of recoil-energy spectra, and use Bayesian methods to investigate the probability that experiments will be able to select the correct scattering model within a broad set of competing scattering hypotheses. We conclude that agnostic analysis of a strong signal (such as Generation-2 would see if cross sections are just below the current limits) seen on xenon and germanium experiments is likely to correctly identify momentum dependence of the dominant response, ruling out models with either 'heavy' or 'light' mediators, and enabling downselection of allowed models. However, a unique determination of the correct UV completion will critically depend on the availability of measurements from a wider variety of nuclear targets, including iodine or fluorine. We investigate how model-selection prospects depend on the energy window available for the analysis. In addition, we discuss accuracy of the DM particle mass determination under a wide variety of scattering models, and investigate impact of the specific types of particle-physics uncertainties on prospects for model selection.
Analysis of position-dependent Compton scatter in scintimammography with mild compression
NASA Astrophysics Data System (ADS)
Williams, M. B.; Narayanan, D.; More, M. J.; Goodale, P. J.; Majewski, S.; Kieper, D. A.
2003-10-01
In breast scintigraphy using 99mTc-sestamibi, the relatively low radiotracer uptake in the breast compared to that in other organs such as the heart results in a large fraction of the detected events being Compton scattered gamma-rays. In this study, our goal was to determine whether generalized conclusions regarding scatter-to-primary ratios at various locations within the breast image are possible, and if so, to use them to make explicit scatter corrections to the breast scintigrams. Energy spectra were obtained from patient scans for contiguous regions of interest (ROIs) centered left to right within the image of the breast, and extending from the chest wall edge of the image to the anterior edge. An anthropomorphic torso phantom with fillable internal organs and a compressed-shape breast containing water only was used to obtain realistic position-dependent scatter-only spectra. For each ROI, the measured patient energy spectrum was fitted with a linear combination of the scatter-only spectrum from the anthropomorphic phantom and the scatter-free spectrum from a point source. We found that although there is a very strong dependence on location within the breast of the scatter-to-primary ratio, the spectra are well modeled by a linear combination of position-dependent scatter-only spectra and a position-independent scatter-free spectrum, resulting in a set of position-dependent correction factors. These correction factors can be used along with measured emission spectra from a given breast to correct for the Compton scatter in the scintigrams. However, the large variation among patients in the magnitude of the position-dependent scatter makes the success of universal correction approaches unlikely.
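The spectral decomposition described above can be illustrated with a short sketch: each ROI spectrum is fitted as a linear combination of a position-dependent scatter-only spectrum and a scatter-free (point-source) spectrum. Array names are placeholders; non-negative least squares is used here simply to keep the fitted weights physical.

```python
import numpy as np
from scipy.optimize import nnls

def scatter_primary_split(roi_spectrum, scatter_only, scatter_free):
    """Fit roi_spectrum = w_s * scatter_only + w_p * scatter_free and return the scatter-to-primary ratio."""
    A = np.column_stack([scatter_only, scatter_free])
    weights, _ = nnls(A, roi_spectrum)            # weights = [w_scatter, w_primary]
    scatter_to_primary = (weights[0] * scatter_only.sum()) / (weights[1] * scatter_free.sum())
    return weights, scatter_to_primary
```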
Ultrasound scatter in heterogeneous 3D microstructures: Parameters affecting multiple scattering
NASA Astrophysics Data System (ADS)
Engle, B. J.; Roberts, R. A.; Grandin, R. J.
2018-04-01
This paper reports on a computational study of ultrasound propagation in heterogeneous metal microstructures. Random spatial fluctuations in elastic properties over a range of length scales relative to ultrasound wavelength can give rise to scatter-induced attenuation, backscatter noise, and phase front aberration. It is of interest to quantify the dependence of these phenomena on the microstructure parameters, for the purpose of quantifying deleterious consequences on flaw detectability, and for the purpose of material characterization. Valuable tools for estimation of microstructure parameters (e.g. grain size) through analysis of ultrasound backscatter have been developed based on approximate weak-scattering models. While useful, it is understood that these tools display inherent inaccuracy when multiple scattering phenomena significantly contribute to the measurement. It is the goal of this work to supplement weak scattering model predictions with corrections derived through application of an exact computational scattering model to explicitly prescribed microstructures. The scattering problem is formulated as a volume integral equation (VIE) displaying a convolutional Green-function-derived kernel. The VIE is solved iteratively employing FFT-based convolution. Realizations of random microstructures are specified on the micron scale using statistical property descriptions (e.g. grain size and orientation distributions), which are then spatially filtered to provide rigorously equivalent scattering media on a length scale relevant to ultrasound propagation. Scattering responses from ensembles of media representations are averaged to obtain mean and variance of quantities such as attenuation and backscatter noise levels, as a function of microstructure descriptors. The computational approach will be summarized, and examples of application will be presented.
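A schematic sketch of an iterative solve of a convolution-type volume integral equation with an FFT-based kernel, in the spirit of the approach described above, is given below. The Green's-function kernel and contrast map here are scalar toy placeholders; the real elastodynamic kernel is tensorial and far more involved.

```python
import numpy as np

def vie_fixed_point(u_inc, contrast, green_kernel, n_iter=50, relax=0.5):
    """Relaxed fixed-point iteration of u = u_inc + G * (contrast * u), with * a circular convolution."""
    # green_kernel is assumed centered in the array; shift it so its peak sits at index (0, 0).
    G = np.fft.fft2(np.fft.ifftshift(green_kernel))
    u = u_inc.astype(complex).copy()
    for _ in range(n_iter):
        scattered = np.fft.ifft2(np.fft.fft2(contrast * u) * G)
        u = (1 - relax) * u + relax * (u_inc + scattered)   # under-relaxation aids convergence
    return u
```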
Improved scatter correction with factor analysis for planar and SPECT imaging
NASA Astrophysics Data System (ADS)
Knoll, Peter; Rahmim, Arman; Gültekin, Selma; Šámal, Martin; Ljungberg, Michael; Mirzaei, Siroos; Segars, Paul; Szczupak, Boguslaw
2017-09-01
Quantitative nuclear medicine imaging is an increasingly important frontier. In order to achieve quantitative imaging, various interactions of photons with matter have to be modeled and compensated. Although correction for photon attenuation has been addressed by including x-ray CT scans (accurate), correction for Compton scatter remains an open issue. The inclusion of scattered photons within the energy window used for planar or SPECT data acquisition decreases the contrast of the image. While a number of methods for scatter correction have been proposed in the past, in this work, we propose and assess a novel, user-independent framework applying factor analysis (FA). Extensive Monte Carlo simulations for planar and tomographic imaging were performed using the SIMIND software. Furthermore, planar acquisition of two Petri dishes filled with 99mTc solutions and a Jaszczak phantom study (Data Spectrum Corporation, Durham, NC, USA) using a dual head gamma camera were performed. In order to use FA for scatter correction, we subdivided the applied energy window into a number of sub-windows, serving as input data. FA results in two factor images (photo-peak, scatter) and two corresponding factor curves (energy spectra). Planar and tomographic Jaszczak phantom gamma camera measurements were recorded. The tomographic data (simulations and measurements) were processed for each angular position resulting in a photo-peak and a scatter data set. The reconstructed transaxial slices of the Jaszczak phantom were quantified using an ImageJ plugin. The data obtained by FA showed good agreement with the energy spectra, photo-peak, and scatter images obtained in all Monte Carlo simulated data sets. For comparison, the standard dual-energy window (DEW) approach was additionally applied for scatter correction. FA in comparison with the DEW method results in significant improvements in image accuracy for both planar and tomographic data sets. FA can be used as a user-independent approach for scatter correction in nuclear medicine.
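To give a flavor of the energy sub-window decomposition described above, the sketch below separates a photopeak factor from a scatter factor. Here non-negative matrix factorization stands in for the factor-analysis step used in the paper; window boundaries and array shapes are assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

def split_photopeak_scatter(subwindow_counts):
    """subwindow_counts: (n_pixels, n_subwindows) counts per pixel per energy sub-window."""
    model = NMF(n_components=2, init="nndsvda", max_iter=500)
    factor_images = model.fit_transform(subwindow_counts)   # (n_pixels, 2) factor images
    factor_spectra = model.components_                       # (2, n_subwindows) factor energy curves
    # Identify the photopeak factor as the one whose spectrum peaks in the highest-energy sub-window.
    peak_idx = int(np.argmax(np.argmax(factor_spectra, axis=1)))
    return factor_images[:, peak_idx], factor_images[:, 1 - peak_idx]
```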
Miwa, Kenta; Umeda, Takuro; Murata, Taisuke; Wagatsuma, Kei; Miyaji, Noriaki; Terauchi, Takashi; Koizumi, Mitsuru; Sasaki, Masayuki
2016-02-01
Overcorrection of scatter caused by patient motion during whole-body PET/computed tomography (CT) imaging can induce the appearance of photopenic artifacts in the PET images. The present study aimed to quantify the accuracy of scatter limitation correction (SLC) for eliminating photopenic artifacts. This study analyzed photopenic artifacts in (18)F-fluorodeoxyglucose ((18)F-FDG) PET/CT images acquired from 12 patients and from a National Electrical Manufacturers Association phantom with two peripheral plastic bottles that simulated the human body and arms, respectively. The phantom comprised a sphere (diameter, 10 or 37 mm) containing fluorine-18 solutions with target-to-background ratios of 2, 4, and 8. The plastic bottles were moved 10 cm posteriorly between CT and PET acquisitions. All PET data were reconstructed using model-based scatter correction (SC), no scatter correction (NSC), and SLC, and the presence or absence of artifacts on the PET images was visually evaluated. The SC and SLC images were also semiquantitatively evaluated using standardized uptake values (SUVs). Photopenic artifacts were not recognizable in any NSC or SLC image from all 12 patients in the clinical study. The SUVmax values of mismatched SLC PET/CT images were almost equal to those of matched SC and SLC PET/CT images. Applying NSC and SLC substantially eliminated the photopenic artifacts on SC PET images in the phantom study. SLC improved the activity concentration of the sphere for all target-to-background ratios. The highest %errors of the 10- and 37-mm spheres were 93.3 and 58.3%, respectively, for mismatched SC, and 73.2 and 22.0%, respectively, for mismatched SLC. Photopenic artifacts caused by SC error induced by CT and PET image misalignment were corrected using SLC, indicating that this method is useful and practical for clinical qualitative and quantitative PET/CT assessment.
NASA Astrophysics Data System (ADS)
Narita, Y.; Iida, H.; Ebert, S.; Nakamura, T.
1997-12-01
Two independent scatter correction techniques, transmission dependent convolution subtraction (TDCS) and the triple-energy window (TEW) method, were evaluated in terms of quantitative accuracy and noise properties using Monte Carlo simulation (EGS4). Emission projections (primary, scatter and scatter plus primary) were simulated for three numerical phantoms for 201Tl. Data were reconstructed with the ordered-subset EM algorithm including attenuation correction based on noiseless transmission data. The accuracy of the TDCS and TEW scatter corrections was assessed by comparison with the simulated true primary data. The uniform cylindrical phantom simulation demonstrated better quantitative accuracy with TDCS than with TEW (-2.0% vs. 16.7%) and better S/N (6.48 vs. 5.05). A uniform ring myocardial phantom simulation demonstrated better homogeneity with TDCS than TEW in the myocardium; i.e., anterior-to-posterior wall count ratios were 0.99 and 0.76 with TDCS and TEW, respectively. For the MCAT phantom, TDCS provided good visual and quantitative agreement with the simulated true primary image without noticeably increasing the noise after scatter correction. Overall, TDCS proved to be more accurate and less noisy than TEW, facilitating quantitative assessment of physiological functions with SPECT.
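For reference, the classic triple-energy-window (TEW) estimate compared against above approximates the scatter in the main window by a trapezoid spanned by the counts in two narrow flanking windows. The sketch below uses example window widths, not those of the study.

```python
import numpy as np

def tew_scatter(counts_lower, counts_upper, w_main=20.0, w_sub=3.0):
    """Per-pixel TEW scatter estimate in the photopeak window (counts given as arrays)."""
    return (counts_lower / w_sub + counts_upper / w_sub) * w_main / 2.0

def tew_correct(counts_main, counts_lower, counts_upper, w_main=20.0, w_sub=3.0):
    """Subtract the TEW scatter estimate from the photopeak counts, clipping at zero."""
    scatter = tew_scatter(counts_lower, counts_upper, w_main, w_sub)
    return np.clip(counts_main - scatter, 0, None)
```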
Radiative corrections to elastic proton-electron scattering measured in coincidence
NASA Astrophysics Data System (ADS)
Gakh, G. I.; Konchatnij, M. I.; Merenkov, N. P.; Tomasi-Gustafsson, E.
2017-05-01
The differential cross section for elastic scattering of protons on electrons at rest is calculated, taking into account the QED radiative corrections to the leptonic part of interaction. These model-independent radiative corrections arise due to emission of the virtual and real soft and hard photons as well as to vacuum polarization. We analyze an experimental setup when both the final particles are recorded in coincidence and their energies are determined within some uncertainties. The kinematics, the cross section, and the radiative corrections are calculated and numerical results are presented.
Correction of scatter in megavoltage cone-beam CT
NASA Astrophysics Data System (ADS)
Spies, L.; Ebert, M.; Groh, B. A.; Hesse, B. M.; Bortfeld, T.
2001-03-01
The role of scatter in a cone-beam computed tomography system using the therapeutic beam of a medical linear accelerator and a commercial electronic portal imaging device (EPID) is investigated. A scatter correction method is presented which is based on a superposition of Monte Carlo generated scatter kernels. The kernels are adapted to both the spectral response of the EPID and the dimensions of the phantom being scanned. The method is part of a calibration procedure which converts the measured transmission data acquired for each projection angle into water-equivalent thicknesses. Tomographic reconstruction of the projections then yields an estimate of the electron density distribution of the phantom. It is found that scatter produces cupping artefacts in the reconstructed tomograms. Furthermore, reconstructed electron densities deviate greatly (by about 30%) from their expected values. The scatter correction method removes the cupping artefacts and decreases the deviations from 30% down to about 8%.
NASA Astrophysics Data System (ADS)
Crum, Dax M.; Valsaraj, Amithraj; David, John K.; Register, Leonard F.; Banerjee, Sanjay K.
2016-12-01
Particle-based ensemble semi-classical Monte Carlo (MC) methods employ quantum corrections (QCs) to address quantum confinement and degenerate carrier populations to model tomorrow's ultra-scaled metal-oxide-semiconductor field-effect transistors. Here, we present the most complete treatment of quantum confinement and carrier degeneracy effects in a three-dimensional (3D) MC device simulator to date, and illustrate their significance through simulation of n-channel Si and III-V FinFETs. Original contributions include our treatment of far-from-equilibrium degenerate statistics and QC-based modeling of surface-roughness scattering, as well as considering quantum-confined phonon and ionized-impurity scattering in 3D. Typical MC simulations approximate degenerate carrier populations as Fermi distributions to model the Pauli-blocking (PB) of scattering to occupied final states. To allow for increasingly far-from-equilibrium non-Fermi carrier distributions in ultra-scaled and III-V devices, we instead generate the final-state occupation probabilities used for PB by sampling the local carrier populations as a function of energy and energy valley. This process is aided by the use of fractional carriers or sub-carriers, which minimizes classical carrier-carrier scattering intrinsically incompatible with degenerate statistics. Quantum-confinement effects are addressed through quantum-correction potentials (QCPs) generated from coupled Schrödinger-Poisson solvers, as commonly done. However, we use these valley- and orientation-dependent QCPs not just to redistribute carriers in real space, or even among energy valleys, but also to calculate confinement-dependent phonon, ionized-impurity, and surface-roughness scattering rates. FinFET simulations are used to illustrate the contributions of each of these QCs. Collectively, these quantum effects can substantially reduce and even eliminate otherwise expected benefits of considered In0.53Ga0.47As FinFETs over otherwise identical Si FinFETs despite higher thermal velocities in In0.53Ga0.47As. It also may be possible to extend these basic uses of QCPs, however calculated, to still more computationally efficient drift-diffusion and hydrodynamic simulations, and the basic concepts even to compact device modeling.
Analytic Scattering and Refraction Models for Exoplanet Transit Spectra
NASA Astrophysics Data System (ADS)
Robinson, Tyler D.; Fortney, Jonathan J.; Hubbard, William B.
2017-12-01
Observations of exoplanet transit spectra are essential to understanding the physics and chemistry of distant worlds. The effects of opacity sources and many physical processes combine to set the shape of a transit spectrum. Two such key processes—refraction and cloud and/or haze forward-scattering—have seen substantial recent study. However, models of these processes are typically complex, which prevents their incorporation into observational analyses and standard transit spectrum tools. In this work, we develop analytic expressions that allow for the efficient parameterization of forward-scattering and refraction effects in transit spectra. We derive an effective slant optical depth that includes a correction for forward-scattered light, and present an analytic form of this correction. We validate our correction against a full-physics transit spectrum model that includes scattering, and we explore the extent to which the omission of forward-scattering effects may bias models. Also, we verify a common analytic expression for the location of a refractive boundary, which we express in terms of the maximum pressure probed in a transit spectrum. This expression is designed to be easily incorporated into existing tools, and we discuss how the detection of a refractive boundary could help indicate the background atmospheric composition by constraining the bulk refractivity of the atmosphere. Finally, we show that opacity from Rayleigh scattering and collision-induced absorption will outweigh the effects of refraction for Jupiter-like atmospheres whose equilibrium temperatures are above 400-500 K.
Spectral peculiarities of electromagnetic wave scattering by Veselago's cylinders
NASA Astrophysics Data System (ADS)
Sukhov, S. V.; Shevyakhov, N. S.
2006-03-01
The results are presented of spectral calculations of the extinction cross-section for scattering of E- and H-polarized electromagnetic waves by cylinders made of Veselago material. The inadequacy of previously developed scattering models is demonstrated. It is shown that a correct description of the scattering requires separate consideration of both the electric and magnetic subsystems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, L; Zhu, L; Vedantham, S
Purpose: Scatter contamination is detrimental to image quality in dedicated cone-beam breast CT (CBBCT), resulting in cupping artifacts and loss of contrast in reconstructed images. Such effects impede visualization of breast lesions and the quantitative accuracy. Previously, we proposed a library-based software approach to suppress scatter on CBBCT images. In this work, we quantify the efficacy and stability of this approach using datasets from 15 human subjects. Methods: A pre-computed scatter library is generated using Monte Carlo simulations for semi-ellipsoid breast models and a homogeneous fibroglandular/adipose tissue mixture encompassing the range reported in the literature. Projection datasets from 15 human subjects that cover the 95th percentile of breast dimensions and fibroglandular volume fraction were included in the analysis. Our investigations indicate that it is sufficient to consider the breast dimensions alone; variation in fibroglandular fraction does not significantly affect the scatter-to-primary ratio. The breast diameter is measured from a first-pass reconstruction; the appropriate scatter distribution is selected from the library; and it is deformed by considering the discrepancy in total projection intensity between the clinical dataset and the simulated semi-ellipsoidal breast. The deformed scatter distribution is subtracted from the measured projections for scatter correction. Spatial non-uniformity (SNU) and contrast-to-noise ratio (CNR) were used as quantitative metrics to evaluate the results. Results: On the 15 patient cases, our method reduced the overall image spatial non-uniformity (SNU) from 7.14%±2.94% (mean ± standard deviation) to 2.47%±0.68% in the coronal view and from 10.14%±4.1% to 3.02%±1.26% in the sagittal view. The average contrast-to-noise ratio (CNR) improved by a factor of 1.49±0.40 in the coronal view and by 2.12±1.54 in the sagittal view. Conclusion: We demonstrate the robustness and effectiveness of a library-based scatter correction method using patient datasets with large variability in breast dimensions and composition. The high computational efficiency and simplicity of implementation make this attractive for clinical use. Supported partly by NIH R21EB019597, R21CA134128 and R01CA195512. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Interference detection and correction applied to incoherent-scatter radar power spectrum measurement
NASA Technical Reports Server (NTRS)
Ying, W. P.; Mathews, J. D.; Rastogi, P. K.
1986-01-01
A median-filter-based interference detection and correction technique is evaluated, and its application to the Arecibo incoherent scatter radar D-region ionospheric power spectrum is discussed. The method can be extended to other kinds of data provided the statistics involved in the process remain valid.
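A simple sketch in the spirit of the abstract is given below: samples that deviate from a running median by more than a few robust standard deviations are flagged as interference and replaced by the median. The window length and threshold are assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import medfilt

def despike(spectrum, window=9, n_sigma=5.0):
    """Flag and replace interference spikes in a 1D power spectrum using a median baseline."""
    spectrum = np.asarray(spectrum, dtype=float)
    baseline = medfilt(spectrum, kernel_size=window)
    residual = spectrum - baseline
    mad = np.median(np.abs(residual - np.median(residual)))
    robust_sigma = 1.4826 * mad                      # MAD-to-sigma conversion for Gaussian noise
    bad = np.abs(residual) > n_sigma * robust_sigma
    cleaned = spectrum.copy()
    cleaned[bad] = baseline[bad]
    return cleaned, bad
```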
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gearhart, A; Peterson, T; Johnson, L
2015-06-15
Purpose: To evaluate the impact of the exceptional energy resolution of germanium detectors for preclinical SPECT in comparison to conventional detectors. Methods: A cylindrical water phantom was created in GATE with a spherical Tc-99m source in the center. Sixty-four projections over 360 degrees using a pinhole collimator were simulated. The same phantom was simulated using air instead of water to establish the true reconstructed voxel intensity without attenuation. Attenuation correction based on the Chang method was performed on MLEM reconstructed images from the water phantom to determine a quantitative measure of the effectiveness of the attenuation correction. Similarly, a NEMA phantom was simulated, and the effectiveness of the attenuation correction was evaluated. Both simulations were carried out using both NaI detectors with an energy resolution of 10% FWHM and Ge detectors with an energy resolution of 1%. Results: Analysis shows that attenuation correction without scatter correction using germanium detectors can reconstruct a small spherical source to within 3.5%. Scatter analysis showed that for standard-sized objects in a preclinical scanner, a NaI detector has a scatter-to-primary ratio between 7% and 12.5%, compared with between 0.8% and 1.5% for a Ge detector. Preliminary results from line profiles through the NEMA phantom suggest that applying attenuation correction without scatter correction provides acceptable results for the Ge detectors but overestimates the phantom activity using NaI detectors. Due to the decreased scatter, we believe that the spillover ratio for the air and water cylinders in the NEMA phantom will be lower using germanium detectors compared to NaI detectors. Conclusion: This work indicates that the superior energy resolution of germanium detectors allows fewer scattered photons to be included within the energy window compared to traditional SPECT detectors. This may allow for quantitative SPECT without implementing scatter correction, reducing uncertainties introduced by scatter correction algorithms. Funding provided by NIH/NIBIB grant R01EB013677; Todd Peterson, Ph.D., has had a research contract with PHDs Co., Knoxville, TN.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghrayeb, S. Z.; Ouisloumen, M.; Ougouag, A. M.
2012-07-01
A multi-group formulation for the exact neutron elastic scattering kernel is developed. This formulation is intended for implementation into a lattice physics code. The correct accounting for the crystal lattice effects influences the estimated values for the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. A computer program has been written to test the formulation for various nuclides. Results of the multi-group code have been verified against the correct analytic scattering kernel. In both cases neutrons were started at various energies and temperatures and the corresponding scattering kernels were tallied. (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hawke, J.; Scannell, R.; Maslov, M.
2013-10-15
This work isolated the cause of the observed discrepancy between the electron temperature (Te) measurements before and after the JET Core LIDAR Thomson Scattering (TS) diagnostic was upgraded. In the upgrade process, stray light filters positioned just before the detectors were removed from the system. Modelling showed that the shift imposed on the stray light filters' transmission functions, due to variations in the incidence angles of the collected photons, impacted plasma measurements. To correct for this identified source of error, correction factors were developed using ray tracing models for the calibration and operational states of the diagnostic. The application of these correction factors resulted in an increase in the observed Te, resulting in the partial if not complete removal of the observed discrepancy in the measured Te between the JET core LIDAR TS diagnostic, the High Resolution Thomson Scattering diagnostic, and the Electron Cyclotron Emission diagnostics.
Scattering analysis of LOFAR pulsar observations
NASA Astrophysics Data System (ADS)
Geyer, M.; Karastergiou, A.; Kondratiev, V. I.; Zagkouris, K.; Kramer, M.; Stappers, B. W.; Grießmeier, J.-M.; Hessels, J. W. T.; Michilli, D.; Pilia, M.; Sobey, C.
2017-09-01
We measure the effects of interstellar scattering on average pulse profiles from 13 radio pulsars with simple pulse shapes. We use data from the LOFAR High Band Antennas, at frequencies between 110 and 190 MHz. We apply a forward fitting technique and simultaneously determine the intrinsic pulse shape, assuming single Gaussian component profiles. We find that the scattering timescale τ, associated with scattering by a single thin screen, has a power-law dependence on frequency, τ ∝ ν^(-α), with indices ranging from α = 1.50 to 4.0, despite the simplest theoretical models predicting α = 4.0 or 4.4. Modelling the screen as an isotropic or extremely anisotropic scatterer, we find that anisotropic scattering fits lead to larger power-law indices, often in better agreement with theoretically expected values. We compare the scattering models based on the inferred, frequency-dependent parameters of the intrinsic pulse, and the resulting correction to the dispersion measure (DM). We highlight the cases in which fits of extreme anisotropic scattering are appealing, while stressing that the data do not strictly favour either model for any of the 13 pulsars. The pulsars show anomalous scattering properties that are consistent with finite scattering screens and/or anisotropy, but these data alone do not provide the means for an unambiguous characterization of the screens. We revisit the empirical τ versus DM relation and consider how our results support a frequency dependence of α. Very long baseline interferometry, and observations of the scattering and scintillation properties of these sources at higher frequencies, will provide further evidence.
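A minimal worked example of extracting the spectral index α from scattering timescales measured at several frequencies, assuming τ ∝ ν^(-α), is sketched below. The frequencies and timescales are synthetic placeholders, not measurements from the paper.

```python
import numpy as np

def fit_scattering_index(freq_mhz, tau_ms):
    """Fit log10(tau) vs log10(freq) with a straight line; alpha is the negative slope."""
    slope, intercept = np.polyfit(np.log10(freq_mhz), np.log10(tau_ms), 1)
    return -slope, 10.0 ** intercept            # (alpha, tau extrapolated to 1 MHz)

freqs = np.array([110.0, 130.0, 150.0, 170.0, 190.0])   # MHz, spanning the LOFAR HBA band
taus = 20.0 * (freqs / 150.0) ** -3.6                    # synthetic data with alpha = 3.6
alpha, _ = fit_scattering_index(freqs, taus)
```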
Chiral symmetry constraints on resonant amplitudes
NASA Astrophysics Data System (ADS)
Bruns, Peter C.; Mai, Maxim
2018-03-01
We discuss the impact of chiral symmetry constraints on the quark-mass dependence of meson resonance pole positions, which are encoded in non-perturbative parametrizations of meson scattering amplitudes. Model-independent conditions on such parametrizations are derived, which are shown to guarantee the correct functional form of the leading quark-mass corrections to the resonance pole positions. Some model amplitudes for ππ scattering, widely used for the determination of ρ and σ resonance properties from results of lattice simulations, are tested explicitly with respect to these conditions.
Vibronic coupling simulations for linear and nonlinear optical processes: Simulation results
NASA Astrophysics Data System (ADS)
Silverstein, Daniel W.; Jensen, Lasse
2012-02-01
A vibronic coupling model based on a time-dependent wavepacket approach is applied to simulate linear optical processes, such as one-photon absorbance and resonance Raman scattering, and nonlinear optical processes, such as two-photon absorbance and resonance hyper-Raman scattering, for a series of small molecules. Simulations employing both the long-range corrected approach in density functional theory and coupled cluster theory are compared and also examined against available experimental data. Although many of the small molecules are prone to anharmonicity in their potential energy surfaces, the harmonic approach performs adequately. A detailed discussion of the non-Condon effects is illustrated by the molecules presented in this work. Linear and nonlinear Raman scattering simulations allow for the quantification of interference between the Franck-Condon and Herzberg-Teller terms for different molecules.
Solar Cycle Variability and Grand Minima Induced by Joy's Law Scatter
NASA Astrophysics Data System (ADS)
Karak, Bidya Binay; Miesch, Mark S.
2017-08-01
The strength of the solar cycle varies from one cycle to another in an irregular manner; the extreme example of this irregularity is the Maunder minimum, when the Sun produced only a few spots for several years. We explore the cause of these variabilities using a 3D Babcock-Leighton dynamo model. In this model, bipolar magnetic regions (BMRs) are produced based on the toroidal flux at the base of the convection zone, with flux, tilt angle, and time of emergence all obtained from their observed distributions. The dynamo growth is limited by tilt quenching. The randomness in BMR emergence makes the poloidal field unequal from cycle to cycle and eventually causes unequal solar cycles. When the observed fluctuations of BMR tilts around Joy's law, i.e., a standard deviation of 15 degrees, are considered, our model produces a variation in the solar cycle comparable to the observed solar cycle variability. The tilt scatter also causes occasional Maunder-like grand minima, although the observed scatter does not reproduce the correct statistics of grand minima. However, when we double the tilt scatter, we find grand minima consistent with observations. Importantly, our dynamo model can operate even during grand minima with only a few BMRs, without requiring any additional alpha effect.
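The stochastic ingredient discussed above can be illustrated with a toy sketch that draws BMR tilt angles around Joy's law with Gaussian scatter. The Joy's-law amplitude (roughly 32 degrees) and the 15-degree standard deviation are illustrative values of the kind reported in the literature, not parameters taken from this model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_tilt(latitude_deg, scatter_deg=15.0, joy_amplitude_deg=32.0):
    """Tilt of one bipolar magnetic region: Joy's-law mean plus Gaussian scatter."""
    mean_tilt = joy_amplitude_deg * np.sin(np.radians(latitude_deg))
    return mean_tilt + rng.normal(0.0, scatter_deg)

tilts = np.array([sample_tilt(15.0) for _ in range(1000)])
# With larger scatter_deg, more BMRs emerge with "wrong" tilts, weakening the polar field
# build-up and, in the full dynamo model, occasionally driving a grand-minimum-like state.
```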
Cropper, Paul M; Hansen, Jaron C; Eatough, Delbert J
2013-09-01
The U.S. Environmental Protection Agency (EPA) has proposed a new secondary standard based on visibility in urban areas. The proposed standard will be based on light extinction, calculated from 24-hr averaged measurements. It would be desirable to base the standard on a shorter averaging time to better represent human perception of visibility. This could be accomplished either by an estimation of extinction from semicontinuous particulate matter (PM) data or by direct measurement of scattering and absorption. To this end, we have compared 1-hr measurements of fine plus coarse particulate scattering using a nephelometer along with an estimate of absorption from aethalometer measurements. The study took place in Lindon, UT, during February and March 2012. The nephelometer measurements were corrected for coarse particle scattering and compared to Filter Dynamic Measurement System (FDMS) tapered element oscillating microbalance monitor (TEOM) PM2.5 measurements. The two measurements agreed with a mass scattering coefficient of 3.3 +/- 0.3 m2/g at relative humidity below 80%. However, at higher humidity, the nephelometer gave higher scattering results due to water absorbed by ammonium nitrate and ammonium sulfate in the particles. This particle-associated water is not measured by the FDMS TEOM. The FDMS TEOM data could be corrected for this difference using appropriate IMPROVE protocols if the particle composition is known. However, a better approach may be to use a particle measurement system that allows for semicontinuous measurements but also measures particle-bound water. Data are presented from a 2003 study in Rubidoux, CA, showing how this could be accomplished using a Grimm model 1100 aerosol spectrometer or a comparable instrument.
Investigation of the halo-artifact in 68Ga-PSMA-11-PET/MRI.
Heußer, Thorsten; Mann, Philipp; Rank, Christopher M; Schäfer, Martin; Dimitrakopoulou-Strauss, Antonia; Schlemmer, Heinz-Peter; Hadaschik, Boris A; Kopka, Klaus; Bachert, Peter; Kachelrieß, Marc; Freitag, Martin T
2017-01-01
Combined positron emission tomography (PET) and magnetic resonance imaging (MRI) targeting the prostate-specific membrane antigen (PSMA) with a 68Ga-labelled PSMA-analog (68Ga-PSMA-11) is discussed as a promising diagnostic method for patients with suspicion or history of prostate cancer. One potential drawback of this method is severe photopenic (halo-) artifacts surrounding the bladder and the kidneys in the scatter-corrected PET images, which have been reported to occur frequently in clinical practice. The goal of this work was to investigate the occurrence and impact of these artifacts and, secondly, to evaluate variants of the standard scatter correction method with regard to halo-artifact suppression. Experiments using a dedicated pelvis phantom were conducted to investigate whether the halo-artifact is modality-, tracer-, and/or concentration-dependent. Furthermore, 31 patients with a history of prostate cancer were selected from an ongoing 68Ga-PSMA-11-PET/MRI study. For each patient, PET raw data were reconstructed employing six different variants of PET scatter correction: absolute scatter scaling, relative scatter scaling, and relative scatter scaling combined with prompt gamma correction, each of which was combined with a maximum scatter fraction (MaxSF) of MaxSF = 75% or MaxSF = 40%. Evaluation of the reconstructed images with regard to halo-artifact suppression was performed both quantitatively using statistical analysis and qualitatively by two independent readers. The phantom experiments did not reveal any modality-dependency (PET/MRI vs. PET/CT) or tracer-dependency (68Ga vs. 18F-FDG). Patient- and phantom-based data indicated that halo-artifacts derive from high organ-to-background activity ratios (OBR) between bladder/kidneys and surrounding soft tissue, with a positive correlation between OBR and halo size. Comparing different variants of scatter correction, reducing the maximum scatter fraction from the default value MaxSF = 75% to MaxSF = 40% was found to efficiently suppress halo-artifacts in both phantom and patient data. In 1 of 31 patients, reducing the maximum scatter fraction provided new PET-based information changing the patient's diagnosis. Halo-artifacts are particularly observed for 68Ga-PSMA-11-PET/MRI due to 1) the biodistribution of the PSMA-11-tracer resulting in large OBRs for bladder and kidneys and 2) inaccurate scatter correction methods currently used in clinical routine, which tend to overestimate the scatter contribution. If not compensated for, 68Ga-PSMA-11 uptake pathologies may be masked by halo-artifacts leading to false-negative diagnoses. Reducing the maximum scatter fraction was found to efficiently suppress halo-artifacts.
Theoretical interpretation of the Venus 1.05-micron CO2 band and the Venus 0.8189-micron H2O line.
NASA Technical Reports Server (NTRS)
Regas, J. L.; Giver, L. P.; Boese, R. W.; Miller, J. H.
1972-01-01
The synthetic-spectrum technique was used in the analysis. The synthetic spectra were constructed with a model which takes into account both isotropic scattering and the inhomogeneity in the Venus atmosphere. The Potter-Hansen correction factor was used to correct for anisotropic scattering. The synthetic spectra obtained are, therefore, the first which contain all the essential physics of line formation. The results confirm Potter's conclusion that the Venus cloud tops resemble terrestrial cirrus or stratus clouds in their scattering properties.
A combined surface/volume scattering retracking algorithm for ice sheet satellite altimetry
NASA Technical Reports Server (NTRS)
Davis, Curt H.
1992-01-01
An algorithm that is based upon a combined surface-volume scattering model is developed. It can be used to retrack individual altimeter waveforms over ice sheets. An iterative least-squares procedure is used to fit the combined model to the return waveforms. The retracking algorithm comprises two distinct sections. The first generates initial model parameter estimates from a filtered altimeter waveform. The second uses the initial estimates, the theoretical model, and the waveform data to generate corrected parameter estimates. This retracking algorithm can be used to assess the accuracy of elevations produced from current retracking algorithms when subsurface volume scattering is present. This is extremely important so that repeated altimeter elevation measurements can be used to accurately detect changes in the mass balance of the ice sheets. By analyzing the distribution of the model parameters over large portions of the ice sheet, regional and seasonal variations in the near-surface properties of the snowpack can be quantified.
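As a rough illustration of the retracking step described above, the sketch below fits a simplified parametric waveform (an error-function leading edge with an exponential trailing decay) to an altimeter return by iterative least squares, with initial estimates taken from a filtered waveform. The functional form, parameter names, and numerical settings are illustrative assumptions and are not the combined surface/volume scattering model of the paper.

# Minimal sketch of waveform retracking by iterative least squares.
# The functional form below is an illustrative surface-plus-tail model,
# NOT the exact combined surface/volume scattering model of the paper.
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

def waveform_model(gate, t0, amp, sigma, decay):
    """Simplified return: error-function leading edge times exponential decay."""
    leading = 0.5 * (1.0 + erf((gate - t0) / (np.sqrt(2.0) * sigma)))
    trailing = np.exp(-np.clip(gate - t0, 0.0, None) * decay)
    return amp * leading * trailing

def retrack(gates, waveform):
    # Step 1: crude initial estimates from a smoothed waveform.
    smooth = np.convolve(waveform, np.ones(5) / 5.0, mode="same")
    amp0 = smooth.max()
    t0_0 = gates[np.argmax(smooth >= 0.5 * amp0)]   # half-power gate
    p0 = [t0_0, amp0, 2.0, 0.01]
    # Step 2: refine by a least-squares fit of the model to the raw waveform.
    popt, _ = curve_fit(waveform_model, gates, waveform, p0=p0, maxfev=5000)
    return dict(zip(["t0", "amp", "sigma", "decay"], popt))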
Dittrich, Birger; Wandtke, Claudia M; Meents, Alke; Pröpper, Kevin; Mondal, Kartik Chandra; Samuel, Prinson P; Amin Sk, Nurul; Singh, Amit Pratap; Roesky, Herbert W; Sidhu, Navdeep
2015-02-02
Single-crystal X-ray diffraction (XRD) is often considered the gold standard in analytical chemistry, as it allows element identification as well as determination of atom connectivity and the solid-state structure of completely unknown samples. Element assignment is based on the number of electrons of an atom, so that a distinction of neighboring heavier elements in the periodic table by XRD is often difficult. A computationally efficient procedure for aspherical-atom least-squares refinement of conventional diffraction data of organometallic compounds is proposed. The iterative procedure is conceptually similar to Hirshfeld-atom refinement (Acta Crystallogr. Sect. A 2008, 64, 383-393; IUCrJ 2014, 1, 61-79), but it relies on tabulated invariom scattering factors (Acta Crystallogr. Sect. B 2013, 69, 91-104) and the Hansen/Coppens multipole model; disordered structures can be handled as well. Five linear-coordinate 3d metal complexes, for which the wrong element is found if standard independent-atom model scattering factors are relied upon, are studied, and it is shown that only aspherical-atom scattering factors allow a reliable assignment. The influence of anomalous dispersion in identifying the correct element is investigated and discussed. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Operational atmospheric correction of AVHRR visible and infrared data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vermote, E.; El Saleous, N.; Roger, J.C.
1995-12-31
The satellite level radiance is affected by the presence of the atmosphere between the sensor and the target. The ozone and water vapor absorption bands affect the signal recorded by the AVHRR visible and near infrared channels respectively. The Rayleigh scattering mainly affects the visible channel and is more pronounced when dealing with small sun elevations and large view angles. The aerosol scattering affects both channels and is certainly the most challenging term for atmospheric correction because of the spatial and temporal variability of both the type and amount of particles in the atmosphere. This paper presents the equation of the satellite signal, the scheme to retrieve atmospheric properties and corrections applied to AVHRR observations. The operational process uses TOMS data and a digital elevation model to correct for ozone absorption and Rayleigh scattering. The water vapor content is evaluated using the split-window technique that is validated over ocean using 1988 SSM/I data. The aerosol amount retrieval over ocean is achieved in channels 1 and 2 and compared to sun photometer observations to check consistency of the radiative transfer model and the sensor calibration. Over land, the method developed uses reflectance at 3.75 microns to deduce target reflectance in channel 1 and retrieve aerosol optical thickness that can be extrapolated in channel 2. The method to invert the reflectance at 3.75 microns is based on MODTRAN simulations and is validated by comparison to measurements performed during FIFE 87. Finally, aerosol optical thickness retrieved over Brazil and Eastern US is compared to sun photometer measurements.
NASA Astrophysics Data System (ADS)
Zhai, Peng-Wang; Hu, Yongxiang; Josset, Damien B.; Trepte, Charles R.; Lucker, Patricia L.; Lin, Bing
2012-06-01
We have developed a Vector Radiative Transfer (VRT) code for coupled atmosphere and ocean systems based on the successive order of scattering (SOS) method. In order to achieve efficiency and maintain accuracy, the scattering matrix is expanded in terms of the Wigner d functions and the delta-fit or delta-M technique is used to truncate the commonly present large forward scattering peak. To further improve the accuracy of the SOS code, we have implemented the analytical first order scattering treatment using the exact scattering matrix of the medium in the SOS code. The expansion and truncation techniques are kept for higher order scattering. The exact first order scattering correction was originally published by Nakajima and Tanaka. A new contribution of this work is to account for the exact secondary light scattering caused by the light reflected by and transmitted through the rough air-sea interface.
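The delta-M truncation mentioned above can be summarized in a few lines. The sketch below applies the standard delta-M scaling to a Legendre expansion of the phase function (the truncation fraction is taken from the first discarded coefficient, and the optical depth and single-scattering albedo are rescaled accordingly); it assumes the common beta_l = (2l+1)*chi_l coefficient convention and is not the paper's full vector SOS implementation.

# Standard delta-M scaling for a strongly forward-peaked phase function.
# beta: Legendre coefficients beta_l (l = 0..L) with beta_0 = 1, i.e. the
# convention beta_l = (2l+1) * chi_l. Illustrative only.
import numpy as np

def delta_m_scale(beta, omega, tau, n_terms):
    """Truncate the expansion at n_terms and rescale tau and omega."""
    f = beta[n_terms] / (2 * n_terms + 1)            # truncation fraction
    ell = np.arange(n_terms)
    beta_scaled = (beta[:n_terms] - f * (2 * ell + 1)) / (1.0 - f)
    tau_scaled = (1.0 - omega * f) * tau             # scaled optical depth
    omega_scaled = omega * (1.0 - f) / (1.0 - omega * f)
    return beta_scaled, omega_scaled, tau_scaled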
A new approach to correct for absorbing aerosols in OMI UV
NASA Astrophysics Data System (ADS)
Arola, A.; Kazadzis, S.; Lindfors, A.; Krotkov, N.; Kujanpää, J.; Tamminen, J.; Bais, A.; di Sarra, A.; Villaplana, J. M.; Brogniez, C.; Siani, A. M.; Janouch, M.; Weihs, P.; Webb, A.; Koskela, T.; Kouremeti, N.; Meloni, D.; Buchard, V.; Auriol, F.; Ialongo, I.; Staneck, M.; Simic, S.; Smedley, A.; Kinne, S.
2009-11-01
Several validation studies of surface UV irradiance based on the Ozone Monitoring Instrument (OMI) satellite data have shown a high correlation with ground-based measurements but a positive bias in many locations. The main part of the bias can be attributed to the boundary layer aerosol absorption that is not accounted for in the current satellite UV algorithms. To correct for this shortfall, a post-correction procedure was applied, based on global climatological fields of aerosol absorption optical depth. These fields were obtained by using global aerosol optical depth and aerosol single scattering albedo data assembled by combining global aerosol model data and ground-based aerosol measurements from AERONET. The resulting improvements in the satellite-based surface UV irradiance were evaluated by comparing satellite and ground-based spectral irradiances at various European UV monitoring sites. The results generally showed a significant reduction in bias of 5-20%, lower variability, and an unchanged, high correlation coefficient.
Chavez, P.S.
1988-01-01
Digital analysis of remotely sensed data has become an important component of many earth-science studies. These data are often processed through a set of preprocessing or "clean-up" routines that includes a correction for atmospheric scattering, often called haze. Various methods to correct or remove the additive haze component have been developed, including the widely used dark-object subtraction technique. A problem with most of these methods is that the haze values for each spectral band are selected independently. This can create problems because atmospheric scattering is highly wavelength-dependent in the visible part of the electromagnetic spectrum and the scattering values are correlated with each other. Therefore, multispectral data such as from the Landsat Thematic Mapper and Multispectral Scanner must be corrected with haze values that are spectral band dependent. An improved dark-object subtraction technique is demonstrated that allows the user to select a relative atmospheric scattering model to predict the haze values for all the spectral bands from a selected starting band haze value. The improved method normalizes the predicted haze values for the different gain and offset parameters used by the imaging system. Examples of haze value differences between the old and improved methods for Thematic Mapper Bands 1, 2, 3, 4, 5, and 7 are 40.0, 13.0, 12.0, 8.0, 5.0, and 2.0 vs. 40.0, 13.2, 8.9, 4.9, 16.7, and 3.3, respectively, using a relative scattering model of a clear atmosphere. In one Landsat multispectral scanner image the haze value differences for Bands 4, 5, 6, and 7 were 30.0, 50.0, 50.0, and 40.0 for the old method vs. 30.0, 34.4, 43.6, and 6.4 for the new method using a relative scattering model of a hazy atmosphere. © 1988.
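A hedged sketch of the band-dependent haze prediction described above: a single starting-band haze value is converted to radiance, scaled across bands with a relative scattering model (a power law in wavelength), and converted back to digital numbers with each band's gain and offset. All wavelengths, gains, offsets, and the exponent below are placeholders, not the paper's calibration constants.

# Hedged sketch of the improved dark-object subtraction idea: predict haze DN
# for every band from one starting-band haze value using a relative scattering
# model, then normalize for each band's gain and offset. Numbers are illustrative.
import numpy as np

def predict_haze(start_band, start_haze_dn, wavelengths_um, gains, offsets,
                 exponent=-4.0):
    """Return predicted haze DN per band for a chosen relative scattering model.

    exponent=-4 approximates a very clear (Rayleigh-like) atmosphere; hazier
    atmospheres use smaller magnitudes (e.g. -2 to -0.5).
    """
    wavelengths_um = np.asarray(wavelengths_um, dtype=float)
    # Convert the starting haze DN to at-sensor radiance units.
    start_rad = (start_haze_dn - offsets[start_band]) / gains[start_band]
    # Scale the radiance between bands with the relative scattering model.
    rel = (wavelengths_um / wavelengths_um[start_band]) ** exponent
    haze_rad = start_rad * rel
    # Convert back to DN with each band's own gain and offset.
    return haze_rad * np.asarray(gains) + np.asarray(offsets)

# Example with placeholder band centers and radiometric constants:
bands_um = [0.485, 0.56, 0.66, 0.83, 1.65, 2.215]
haze = predict_haze(start_band=0, start_haze_dn=40.0, wavelengths_um=bands_um,
                    gains=[1.0] * 6, offsets=[0.0] * 6)
# haze[b] would then be subtracted from band b before further analysis.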
CORRECTING FOR INTERSTELLAR SCATTERING DELAY IN HIGH-PRECISION PULSAR TIMING: SIMULATION RESULTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palliyaguru, Nipuni; McLaughlin, Maura; Stinebring, Daniel
2015-12-20
Light travel time changes due to gravitational waves (GWs) may be detected within the next decade through precision timing of millisecond pulsars. Removal of frequency-dependent interstellar medium (ISM) delays due to dispersion and scattering is a key issue in the detection process. Current timing algorithms routinely correct pulse times of arrival (TOAs) for time-variable delays due to cold plasma dispersion. However, none of the major pulsar timing groups correct for delays due to scattering from multi-path propagation in the ISM. Scattering introduces a frequency-dependent phase change in the signal that results in pulse broadening and arrival time delays. Any method to correct the TOA for interstellar propagation effects must be based on multi-frequency measurements that can effectively separate dispersion and scattering delay terms from frequency-independent perturbations such as those due to a GW. Cyclic spectroscopy, first described in an astronomical context by Demorest (2011), is a potentially powerful tool to assist in this multi-frequency decomposition. As a step toward a more comprehensive ISM propagation delay correction, we demonstrate through a simulation that we can accurately recover impulse response functions (IRFs), such as those that would be introduced by multi-path scattering, with a realistic signal-to-noise ratio (S/N). We demonstrate that timing precision is improved when scatter-corrected TOAs are used, under the assumptions of a high S/N and highly scattered signal. We also show that the effect of pulse-to-pulse “jitter” is not a serious problem for IRF reconstruction, at least for jitter levels comparable to those observed in several bright pulsars.
Effect of Multiple Scattering on the Compton Recoil Current Generated in an EMP, Revisited
Farmer, William A.; Friedman, Alex
2015-06-18
Multiple scattering has historically been treated in EMP modeling through the obliquity factor. The validity of this approach is examined here. A simplified model problem, which correctly captures cyclotron motion, Doppler shifting due to the electron motion, and multiple scattering, is first considered. The simplified problem is solved three ways: the obliquity factor, Monte-Carlo, and Fokker-Planck finite-difference. Because of the Doppler effect, skewness occurs in the distribution. It is demonstrated that the obliquity factor does not correctly capture this skewness, but the Monte-Carlo and Fokker-Planck finite-difference approaches do. Here, the obliquity factor and Fokker-Planck finite-difference approaches are then compared in a fuller treatment, which includes the initial Klein-Nishina distribution of the electrons, and the momentum dependence of both drag and scattering. It is found that, in general, the obliquity factor is adequate for most situations. However, as the gamma energy increases and the Klein-Nishina becomes more peaked in the forward direction, skewness in the distribution causes greater disagreement between the obliquity factor and a more accurate model of multiple scattering.
A new method for spatial structure detection of complex inner cavities based on 3D γ-photon imaging
NASA Astrophysics Data System (ADS)
Xiao, Hui; Zhao, Min; Liu, Jiantang; Liu, Jiao; Chen, Hao
2018-05-01
This paper presents a new three-dimensional (3D) imaging method for detecting the spatial structure of a complex inner cavity based on positron annihilation and γ-photon detection. This method first labels a carrier solution with a certain radionuclide and injects it into the inner cavity, where positrons are generated. Subsequently, γ-photons are released from positron annihilation, and the γ-photon detector ring is used for recording the γ-photons. Finally, the two-dimensional (2D) image slices of the inner cavity are constructed by the ordered-subset expectation maximization scheme and the 2D image slices are merged into the 3D image of the inner cavity. To eliminate the artifact in the reconstructed image due to the scattered γ-photons, a novel angle-traversal model is proposed for γ-photon single-scattering correction, in which the path of the single scattered γ-photon is analyzed from a spatial geometry perspective. Two experiments are conducted to verify the effectiveness of the proposed correction model and the advantage of the proposed testing method in detecting the spatial structure of the inner cavity, including the distribution of a gas-liquid multi-phase mixture inside the inner cavity. The above two experiments indicate the potential of the proposed method as a new tool for accurately delineating the inner structures of complex industrial parts.
Scattering of Acoustic Waves from Ocean Boundaries
2015-09-30
of buried mines and improve SONAR performance in shallow water. OBJECTIVES 1) Determination of the correct physical model of acoustic propagation... acoustic parameters in the ocean. APPROACH 1) Finite Element Modeling for Range Dependent Waveguides: Finite element modeling is applied to a... roughness measurements for reverberation modeling. GLISTEN data provide insight into the role of biology on acoustic propagation and scattering
Wangerin, Kristen A; Baratto, Lucia; Khalighi, Mohammad Mehdi; Hope, Thomas A; Gulaka, Praveen K; Deller, Timothy W; Iagaru, Andrei H
2018-06-06
Gallium-68-labeled radiopharmaceuticals pose a challenge for scatter estimation because their targeted nature can produce high contrast in regions such as the kidneys and bladder. Even small errors in the scatter estimate can result in washout artifacts. Administration of diuretics can reduce these artifacts, but they may result in adverse events. Here, we investigated the ability of algorithmic modifications to mitigate washout artifacts and eliminate the need for diuretics or other interventions. The model-based scatter algorithm was modified to account for PET/MRI scanner geometry and challenges of non-FDG tracers. Fifty-three clinical 68Ga-RM2 and 68Ga-PSMA-11 whole-body images were reconstructed using the baseline scatter algorithm. For comparison, reconstruction was also processed with modified sampling in the single-scatter estimation and with an offset in the scatter tail-scaling process. None of the patients received furosemide to attempt to decrease the accumulation of radiopharmaceuticals in the bladder. The images were scored independently by three blinded reviewers using the 5-point Likert scale. The scatter algorithm improvements significantly decreased or completely eliminated the washout artifacts. When comparing the baseline and most improved algorithm, the image quality increased and image artifacts were reduced for both 68Ga-RM2 and 68Ga-PSMA-11 in the kidneys and bladder regions. Image reconstruction with the improved scatter correction algorithm mitigated washout artifacts and recovered diagnostic image quality in 68Ga PET, indicating that the use of diuretics may be avoided.
An efficient Monte Carlo-based algorithm for scatter correction in keV cone-beam CT
NASA Astrophysics Data System (ADS)
Poludniowski, G.; Evans, P. M.; Hansen, V. N.; Webb, S.
2009-06-01
A new method is proposed for scatter-correction of cone-beam CT images. A coarse reconstruction is used in initial iteration steps. Modelling of the x-ray tube spectra and detector response are included in the algorithm. Photon diffusion inside the imaging subject is calculated using the Monte Carlo method. Photon scoring at the detector is calculated using forced detection to a fixed set of node points. The scatter profiles are then obtained by linear interpolation. The algorithm is referred to as the coarse reconstruction and fixed detection (CRFD) technique. Scatter predictions are quantitatively validated against a widely used general-purpose Monte Carlo code: BEAMnrc/EGSnrc (NRCC, Canada). Agreement is excellent. The CRFD algorithm was applied to projection data acquired with a Synergy XVI CBCT unit (Elekta Limited, Crawley, UK), using RANDO and Catphan phantoms (The Phantom Laboratory, Salem NY, USA). The algorithm was shown to be effective in removing scatter-induced artefacts from CBCT images, and took as little as 2 min on a desktop PC. Image uniformity was greatly improved as was CT-number accuracy in reconstructions. This latter improvement was less marked where the expected CT-number of a material was very different to the background material in which it was embedded.
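One step of the algorithm, the interpolation of scatter scored at a sparse grid of detector node points up to full detector resolution before subtraction, can be sketched as follows. The node layout and the Monte Carlo scoring itself are outside this sketch, and the function names are assumptions rather than the authors' code.

# Sketch of upsampling a coarse-node scatter estimate: scatter is scored only at
# a sparse grid of detector nodes, then linearly interpolated to full detector
# resolution and subtracted from the raw projection. Illustrative only.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def upsample_scatter(node_u, node_v, scatter_at_nodes, det_shape):
    """Linearly interpolate scatter scored at node points to the full detector."""
    interp = RegularGridInterpolator((node_v, node_u), scatter_at_nodes,
                                     bounds_error=False, fill_value=None)
    vv, uu = np.meshgrid(np.arange(det_shape[0]), np.arange(det_shape[1]),
                         indexing="ij")
    return interp(np.stack([vv, uu], axis=-1))

def correct_projection(raw_projection, scatter_full):
    # Subtract the scatter estimate, keeping intensities physical (non-negative).
    return np.clip(raw_projection - scatter_full, a_min=1e-6, a_max=None)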
Data consistency-driven scatter kernel optimization for x-ray cone-beam CT
NASA Astrophysics Data System (ADS)
Kim, Changhwan; Park, Miran; Sung, Younghun; Lee, Jaehak; Choi, Jiyoung; Cho, Seungryong
2015-08-01
Accurate and efficient scatter correction is essential for acquisition of high-quality x-ray cone-beam CT (CBCT) images for various applications. This study was conducted to demonstrate the feasibility of using the data consistency condition (DCC) as a criterion for scatter kernel optimization in scatter deconvolution methods in CBCT. Because data consistency in the mid-plane of CBCT is primarily challenged by scatter, we utilized data consistency to confirm the degree of scatter correction and to steer the update in iterative kernel optimization. By means of the parallel-beam DCC via fan-parallel rebinning, we iteratively optimized the scatter kernel parameters, using a particle swarm optimization algorithm for its computational efficiency and excellent convergence. The proposed method was validated by a simulation study using the XCAT numerical phantom and also by experimental studies using the ACS head phantom and the pelvic part of the Rando phantom. The results showed that the proposed method can effectively improve the accuracy of deconvolution-based scatter correction. Quantitative assessments of image quality parameters such as contrast and structural similarity (SSIM) revealed that the optimally selected scatter kernel improves the contrast of scatter-free images by up to 99.5%, 94.4%, and 84.4%, and the SSIM in an XCAT study, an ACS head phantom study, and a pelvis phantom study by up to 96.7%, 90.5%, and 87.8%, respectively. The proposed method can achieve accurate and efficient scatter correction from a single cone-beam scan without the need for any auxiliary hardware or additional experimentation.
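To make the optimization loop concrete, the sketch below tunes a two-parameter scatter kernel (amplitude and width of a Gaussian low-pass model) with a small particle swarm that minimizes a data-consistency surrogate. The kernel form, the consistency metric, and the swarm settings are illustrative placeholders rather than the paper's actual parallel-beam DCC formulation.

# Illustrative kernel optimization steered by a data-consistency surrogate.
import numpy as np
from scipy.ndimage import gaussian_filter

def scatter_estimate(projection, amplitude, width):
    """Model scatter as a scaled, broad low-pass version of the projection."""
    return amplitude * gaussian_filter(projection, sigma=width)

def consistency_error(corrected_projections):
    """Placeholder DCC surrogate: total-count mismatch between opposing views."""
    n = corrected_projections.shape[0]
    sums = corrected_projections.reshape(n, -1).sum(axis=1)
    return float(np.mean((sums - sums[::-1]) ** 2))

def optimize_kernel(projections, n_particles=10, n_iter=30, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform([0.0, 1.0], [0.5, 30.0], size=(n_particles, 2))  # (amp, width)
    vel = np.zeros_like(pos)
    pbest, pbest_err = pos.copy(), np.full(n_particles, np.inf)
    gbest, gbest_err = pos[0].copy(), np.inf
    for _ in range(n_iter):
        for i, (amp, width) in enumerate(pos):
            corrected = np.stack([p - scatter_estimate(p, amp, width)
                                  for p in projections])
            err = consistency_error(corrected)
            if err < pbest_err[i]:
                pbest[i], pbest_err[i] = pos[i].copy(), err
            if err < gbest_err:
                gbest, gbest_err = pos[i].copy(), err
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, [0.0, 1.0], [0.5, 30.0])
    return gbest  # optimized (amplitude, width)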
Norton, G V; Novarini, J C
2007-06-01
Ultrasonic imaging in medical applications involves propagation and scattering of acoustic waves within and by biological tissues that are intrinsically dispersive. Analytical approaches for modeling propagation and scattering in inhomogeneous media are difficult and often require extremely simplifying approximations in order to achieve a solution. To avoid such approximations, the direct numerical solution of the wave equation via the method of finite differences offers the most direct tool, which takes into account diffraction and refraction. It also allows for detailed modeling of the real anatomic structure and combination/layering of tissues. In all cases the correct inclusion of the dispersive properties of the tissues can make the difference in the interpretation of the results. However, the inclusion of dispersion directly in the time domain proved until recently to be an elusive problem. In order to model the transient signal a convolution operator that takes into account the dispersive characteristics of the medium is introduced to the linear wave equation. To test the ability of this operator to handle scattering from localized scatterers, in this work, two-dimensional numerical modeling of scattering from an infinite cylinder with physical properties associated with biological tissue is calculated. The numerical solutions are compared with the exact solution synthesized from the frequency domain for a variety of tissues having distinct dispersive properties. It is shown that in all cases, the use of the convolutional propagation operator leads to the correct solution for the scattered field.
Theoretical model of x-ray scattering as a dense matter probe.
Gregori, G; Glenzer, S H; Rozmus, W; Lee, R W; Landen, O L
2003-02-01
We present analytical expressions for the dynamic structure factor, or form factor S(k,omega), which is the quantity describing the x-ray cross section from a dense plasma or a simple liquid. Our results, based on the random phase approximation for the treatment of the charged-particle coupling, can be applied to describe scattering from either weakly coupled classical plasmas or degenerate electron liquids. Our form factor correctly reproduces the Compton energy down-shift and the known Fermi-Dirac electron velocity distribution for S(k,omega) in the case of a cold degenerate plasma. The usual concept of the scattering parameter is also reinterpreted for the degenerate case in order to include the effect of Thomas-Fermi screening. The results shown in this work can be applied to interpreting x-ray scattering in warm dense plasmas occurring in inertial confinement fusion experiments or for the modeling of solid density matter found in the interior of planets.
Vertical spatial coherence model for a transient signal forward-scattered from the sea surface
Yoerger, E.J.; McDaniel, S.T.
1996-01-01
The treatment of acoustic energy forward scattered from the sea surface, which is modeled as a random communications scatter channel, is the basis for developing an expression for the time-dependent coherence function across a vertical receiving array. The derivation of this model uses linear filter theory applied to the Fresnel-corrected Kirchhoff approximation in obtaining an equation for the covariance function for the forward-scattered problem. The resulting formulation is used to study the dependence of the covariance on experimental and environmental factors. The modeled coherence functions are then formed for various geometrical and environmental parameters and compared to experimental data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, Nathan L.; Blunden, Peter G.; Melnitchouk, Wally
2015-12-08
We examine the interference γZ box corrections to parity-violating elastic electron-proton scattering in the light of the recent observation of quark-hadron duality in parity-violating deep-inelastic scattering from the deuteron, and the approximate isospin independence of duality in the electromagnetic nucleon structure functions down to Q^2 ≈ 1 GeV^2. Assuming that a similar behavior also holds for the γZ proton structure functions, we find that duality constrains the γZ box correction to the proton's weak charge to be Re □_{γZ}^V = (5.4 ± 0.4) × 10^{-3} at the kinematics of the Qweak experiment. Within the same model we also provide estimates of the γZ corrections for future parity-violating experiments, such as MOLLER at Jefferson Lab and MESA at Mainz.
A study on scattering correction for γ-photon 3D imaging test method
NASA Astrophysics Data System (ADS)
Xiao, Hui; Zhao, Min; Liu, Jiantang; Chen, Hao
2018-03-01
A pair of 511 keV γ-photons is generated during a positron annihilation, and their directions differ by 180°. The moving path and energy information can be utilized to form a 3D imaging test method in the industrial domain. However, scattered γ-photons are the major factor limiting the imaging precision of the test method. This study proposes a γ-photon single-scattering correction method based on spatial geometry. The method first determines the possible scattering points when a scattered γ-photon pair hits the detector pair. The range of scattering angles is then calculated according to the energy window. Finally, the number of scattered γ-photons is estimated from the attenuation of the total scattered γ-photons along their moving paths. The corrected γ-photons are obtained by deducting the scattered γ-photons from the original ones. Two experiments are conducted to verify the effectiveness of the proposed scattering correction method. The results show that the proposed scattering correction method can efficiently correct for scattered γ-photons and improve the test accuracy.
NASA Technical Reports Server (NTRS)
Wen, Guoyong; Marshak, Alexander; Varnai, Tamas; Levy, Robert
2016-01-01
A transition zone exists between cloudy skies and clear sky, in which clouds scatter solar radiation into clear-sky regions. From a satellite perspective, it appears that clouds enhance the radiation nearby. We seek a simple method to estimate this enhancement, since it is computationally expensive to account for all three-dimensional (3-D) scattering processes. In previous studies, we developed a simple two-layer model (2LM) that estimated the radiation scattered via cloud-molecular interactions. Here we have developed a new model to account for cloud-surface interaction (CSI). We test the models by comparing to calculations provided by full 3-D radiative transfer simulations of realistic cloud scenes. For these scenes, the Moderate Resolution Imaging Spectroradiometer (MODIS)-like radiance fields were computed from the Spherical Harmonic Discrete Ordinate Method (SHDOM), based on a large number of cumulus fields simulated by the University of California, Los Angeles (UCLA) large eddy simulation (LES) model. We find that the original 2LM model that estimates cloud-air molecule interactions accounts for 64% of the total reflectance enhancement and the new model (2LM+CSI) that also includes cloud-surface interactions accounts for nearly 80%. We discuss the possibility of accounting for cloud-aerosol radiative interactions in 3-D cloud-induced reflectance enhancement, which may explain the remaining 20% of the enhancement. Because these are simple models, these corrections can be applied to global satellite observations (e.g., MODIS) and help to reduce biases in aerosol and other clear-sky retrievals.
Acoustic classification of zooplankton
NASA Astrophysics Data System (ADS)
Martin Traykovski, Linda V.
1998-11-01
Work on the forward problem in zooplankton bioacoustics has resulted in the identification of three categories of acoustic scatterers: elastic-shelled (e.g. pteropods), fluid-like (e.g. euphausiids), and gas-bearing (e.g. siphonophores). The relationship between backscattered energy and animal biomass has been shown to vary by a factor of ~19,000 across these categories, so that to make accurate estimates of zooplankton biomass from acoustic backscatter measurements of the ocean, the acoustic characteristics of the species of interest must be well-understood. This thesis describes the development of both feature based and model based classification techniques to invert broadband acoustic echoes from individual zooplankton for scatterer type, as well as for particular parameters such as animal orientation. The feature based Empirical Orthogonal Function Classifier (EOFC) discriminates scatterer types by identifying characteristic modes of variability in the echo spectra, exploiting only the inherent characteristic structure of the acoustic signatures. The model based Model Parameterisation Classifier (MPC) classifies based on correlation of observed echo spectra with simplified parameterisations of theoretical scattering models for the three classes. The Covariance Mean Variance Classifiers (CMVC) are a set of advanced model based techniques which exploit the full complexity of the theoretical models by searching the entire physical model parameter space without employing simplifying parameterisations. Three different CMVC algorithms were developed: the Integrated Score Classifier (ISC), the Pairwise Score Classifier (PSC) and the Bayesian Probability Classifier (BPC); these classifiers assign observations to a class based on similarities in covariance, mean, and variance, while accounting for model ambiguity and validity. These feature based and model based inversion techniques were successfully applied to several thousand echoes acquired from broadband (~350 kHz-750 kHz) insonifications of live zooplankton collected on Georges Bank and the Gulf of Maine to determine scatterer class. CMVC techniques were also applied to echoes from fluid-like zooplankton (Antarctic krill) to invert for angle of orientation using generic and animal-specific theoretical and empirical models. Application of these inversion techniques in situ will allow correct apportionment of backscattered energy to animal biomass, significantly improving estimates of zooplankton biomass based on acoustic surveys. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
NASA Astrophysics Data System (ADS)
Phillips, C. B.; Valenti, M.
2009-12-01
Jupiter's moon Europa likely possesses an ocean of liquid water beneath its icy surface, but estimates of the thickness of the surface ice shell vary from a few kilometers to tens of kilometers. Color images of Europa reveal the existence of a reddish, non-ice component associated with a variety of geological features. The composition and origin of this material is uncertain, as is its relationship to Europa's various landforms. Published analyses of Galileo Near Infrared Mapping Spectrometer (NIMS) observations indicate the presence of highly hydrated sulfate compounds. This non-ice material may also bear biosignatures or other signs of biotic material. Additional spectral information from the Galileo Solid State Imager (SSI) could further elucidate the nature of the surface deposits, particularly when combined with information from the NIMS. However, little effort has been focused on this approach because proper calibration of the color image data is challenging, requiring both skill and patience to process the data and incorporate the appropriate scattered light correction. We are currently working to properly calibrate the color SSI data. The most important and most difficult issue to address in the analysis of multispectral SSI data entails using thorough calibrations and a correction for scattered light. Early in the Galileo mission, studies of the Galileo SSI data for the moon revealed discrepancies of up to 10% in relative reflectance between images containing scattered light and images corrected for scattered light. Scattered light adds a wavelength-dependent low-intensity brightness factor to pixels across an image. For example, a large bright geological feature located just outside the field of view of an image will scatter extra light onto neighboring pixels within the field of view. Scattered light can be seen as a dim halo surrounding an image that includes a bright limb, and can also come from light scattered inside the camera by dirt, edges, and the interfaces of lenses. Because of the wavelength dependence of this effect, a scattered light correction must be performed on any SSI multispectral dataset before quantitative spectral analysis can be done. The process involves using a point-spread function for each filter that helps determine the amount of scattered light expected for a given pixel based on its location and the model attenuation factor for that pixel. To remove scattered light for a particular image taken through a particular filter, the Fourier transform of the attenuation function, which is the point spread function for that filter, is convolved with the Fourier transform of the image at the same wavelength. The result is then filtered for noise in the frequency domain, and then transformed back to the spatial domain. This results in a version of the original image that would have been taken without the scattered light contribution. We will report on our initial results from this calibration.
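For orientation, the sketch below removes a scattered-light contribution by frequency-domain deconvolution with a per-filter point spread function, using a Wiener-style regularization as a stand-in for the noise filtering mentioned above. It is a generic illustration under these assumptions, not the team's actual calibration pipeline.

# Minimal sketch of scattered-light removal by frequency-domain deconvolution
# with the filter's point spread function (PSF). The Wiener-style regularization
# is an assumption standing in for the noise filtering described in the abstract.
import numpy as np

def remove_scattered_light(image, psf, noise_reg=1e-3):
    """Deconvolve a 2-D image with a (smaller) PSF array in the Fourier domain."""
    psf_pad = np.zeros_like(image, dtype=float)
    py, px = psf.shape
    psf_pad[:py, :px] = psf / psf.sum()          # normalize and zero-pad the PSF
    psf_pad = np.roll(psf_pad, (-(py // 2), -(px // 2)), axis=(0, 1))  # center it
    img_f = np.fft.fft2(image)
    psf_f = np.fft.fft2(psf_pad)
    # Wiener-like inverse filter: damps frequencies where the PSF response is weak.
    restored_f = img_f * np.conj(psf_f) / (np.abs(psf_f) ** 2 + noise_reg)
    return np.real(np.fft.ifft2(restored_f))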
Modelling the physics in iterative reconstruction for transmission computed tomography
Nuyts, Johan; De Man, Bruno; Fessler, Jeffrey A.; Zbijewski, Wojciech; Beekman, Freek J.
2013-01-01
There is an increasing interest in iterative reconstruction (IR) as a key tool to improve quality and increase applicability of X-ray CT imaging. IR has the ability to significantly reduce patient dose, it provides the flexibility to reconstruct images from arbitrary X-ray system geometries, and it allows detailed models of photon transport and detection physics to be included, to accurately correct for a wide variety of image degrading effects. This paper reviews discretisation issues and modelling of finite spatial resolution, Compton scatter in the scanned object, data noise and the energy spectrum. Widespread implementation of IR with highly accurate model-based correction, however, still requires significant effort. In addition, new hardware will provide new opportunities and challenges to improve CT with new modelling. PMID:23739261
Simulation of inverse Compton scattering and its implications on the scattered linewidth
NASA Astrophysics Data System (ADS)
Ranjan, N.; Terzić, B.; Krafft, G. A.; Petrillo, V.; Drebot, I.; Serafini, L.
2018-03-01
Rising interest in inverse Compton sources has increased the need for efficient models that properly quantify the behavior of scattered radiation given a set of interaction parameters. The current state-of-the-art simulations rely on Monte Carlo-based methods, which, while properly expressing scattering behavior in high-probability regions of the produced spectra, may not correctly simulate such behavior in low-probability regions (e.g. tails of spectra). Moreover, sampling may take an inordinate amount of time for the desired accuracy to be achieved. In this paper, we present an analytic derivation of the expression describing the scattered radiation linewidth and propose a model to describe the effects of horizontal and vertical emittance on the properties of the scattered radiation. We also present an improved version of the code initially reported in Krafft et al. [Phys. Rev. Accel. Beams 19, 121302 (2016), 10.1103/PhysRevAccelBeams.19.121302], that can perform the same simulations as those present in cain and give accurate results in low-probability regions by integrating over the emissions of the electrons. Finally, we use these codes to carry out simulations that closely verify the behavior predicted by the analytically derived scaling law.
Unified connected theory of few-body reaction mechanisms in N-body scattering theory
NASA Technical Reports Server (NTRS)
Polyzou, W. N.; Redish, E. F.
1978-01-01
A unified treatment of different reaction mechanisms in nonrelativistic N-body scattering is presented. The theory is based on connected kernel integral equations that are expected to become compact for reasonable constraints on the potentials. The operators T±^{ab}(A) are approximate transition operators that describe the scattering proceeding through an arbitrary reaction mechanism A. These operators are uniquely determined by a connected kernel equation and satisfy an optical theorem consistent with the choice of reaction mechanism. Connected kernel equations relating T±^{ab}(A) to the full T±^{ab} allow correction of the approximate solutions for any ignored process to any order. This theory gives a unified treatment of all few-body reaction mechanisms with the same dynamic simplicity of a model calculation, but can include complicated reaction mechanisms involving overlapping configurations where it is difficult to formulate models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baldes, Iason; Bell, Nicole F.; Millar, Alexander J.
We explore possible asymmetric dark matter models using CP violating scatterings to generate an asymmetry. In particular, we introduce a new model based on DM fields coupling to the SM Higgs and lepton doublets through a neutrino portal, and explore its UV completions. We study the CP violation and asymmetry formation of this model, to demonstrate that it is capable of producing the correct abundance of dark matter and the observed matter-antimatter asymmetry. Crucial to achieving this is the introduction of interactions which violate CP with a T^2 dependence.
Quasi-elastic nuclear scattering at high energies
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Townsend, Lawrence W.; Wilson, John W.
1992-01-01
The quasi-elastic scattering of two nuclei is considered in the high-energy optical model. Energy loss and momentum transfer spectra for projectile ions are evaluated in terms of an inelastic multiple-scattering series corresponding to multiple knockout of target nucleons. The leading-order correction to the coherent projectile approximation is evaluated. Calculations are compared with experiments.
NASA Technical Reports Server (NTRS)
Khandelwal, Govind S.; Khan, Ferdous
1989-01-01
An optical model description of energy and momentum transfer in relativistic heavy-ion collisions, based upon composite particle multiple scattering theory, is presented. Transverse and longitudinal momentum transfers to the projectile are shown to arise from the real and absorptive part of the optical potential, respectively. Comparisons of fragment momentum distribution observables with experiments are made and trends outlined based on our knowledge of the underlying nucleon-nucleon interaction. Corrections to the above calculations are discussed. Finally, use of the model as a tool for estimating collision impact parameters is indicated.
NASA Astrophysics Data System (ADS)
Liu, Chunlei; Ding, Wenrui; Li, Hongguang; Li, Jiankun
2017-09-01
Haze removal is a nontrivial task for medium-altitude unmanned aerial vehicle (UAV) image processing because of the effects of light absorption and scattering. The challenges are attributed mainly to image distortion and detail blur during the long-distance and large-scale imaging process. In our work, a metadata-assisted nonuniform atmospheric scattering model is proposed to deal with the aforementioned problems of medium-altitude UAV imaging. First, to better describe the real atmosphere, we propose a nonuniform atmospheric scattering model according to the aerosol distribution, which directly benefits the image distortion correction. Second, considering the characteristics of long-distance imaging, we calculate the depth map, which is an essential clue to the modeling, on the basis of UAV metadata information. An accurate depth map reduces the color distortion compared with the depth of field obtained by other existing methods based on priors or assumptions. Furthermore, we use an adaptive median filter to address the problem of fuzzy details caused by a global airlight value. Experimental results on both real flight and synthetic images demonstrate that our proposed method outperforms four other existing haze removal methods.
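For reference, the generic backbone that depth-assisted dehazing methods build on is the atmospheric scattering model I = J*t + A*(1 - t) with t = exp(-beta*depth); the sketch below inverts it for the scene radiance J. The constant scattering coefficient and single airlight value are simplifying assumptions here, whereas the paper estimates a nonuniform model and handles the airlight adaptively.

# Generic dehazing backbone: recover scene radiance J from hazy image I via
#   I = J * t + A * (1 - t),  t = exp(-beta * depth).
# beta and the airlight A are assumed known; both are placeholders here.
import numpy as np

def dehaze(image, depth_map, airlight, beta=0.8, t_min=0.1):
    """image: HxWx3 float array in [0,1]; depth_map: HxW (e.g. from UAV metadata)."""
    t = np.exp(-beta * depth_map)                # transmission from depth
    t = np.clip(t, t_min, 1.0)[..., np.newaxis]  # avoid division blow-up
    radiance = (image - airlight) / t + airlight
    return np.clip(radiance, 0.0, 1.0)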
NASA Astrophysics Data System (ADS)
Qattan, I. A.
2017-06-01
I present a prediction of the e± elastic scattering cross-section ratio, Re+e-, as determined using a new parametrization of the two-photon exchange (TPE) corrections to the electron-proton elastic scattering cross section σR. The extracted ratio is compared to several previous phenomenological extractions, TPE hadronic calculations, and direct measurements from the comparison of electron and positron scattering. The TPE corrections and the ratio Re+e- show a clear change of sign at low Q2, which is necessary to explain the high-Q2 form factor discrepancy while being consistent with the known Q2→0 limit. While my predictions are generally in good agreement with previous extractions, TPE hadronic calculations, and existing world data, including the recent two measurements from the CLAS and VEPP-3 Novosibirsk experiments, they are larger than the new OLYMPUS measurements at larger Q2 values.
NASA Technical Reports Server (NTRS)
Mertens, C. J.; Xu, X.; Fernandez, J. R.; Bilitza, D.; Russell, J. M., III; Mlynczak, M. G.
2009-01-01
Auroral infrared emission observed from the TIMED/SABER broadband 4.3 micron channel is used to develop an empirical geomagnetic storm correction to the International Reference Ionosphere (IRI) E-region electron densities. The observation-based proxy used to develop the storm model is SABER-derived NO+(v) 4.3 micron volume emission rates (VER). A correction factor is defined as the ratio of storm-time NO+(v) 4.3 micron VER to a quiet-time climatological averaged NO+(v) 4.3 micron VER, which is linearly fit to available geomagnetic activity indices. The initial version of the E-region storm model, called STORM-E, is most applicable within the auroral oval region. The STORM-E predictions of E-region electron densities are compared to incoherent scatter radar electron density measurements during the Halloween 2003 storm events. Future STORM-E updates will extend the model outside the auroral oval.
NASA Astrophysics Data System (ADS)
Wang, Chao; Xiao, Jun; Luo, Xiaobing
2016-10-01
The neutron inelastic scattering cross section of 115In has been measured by the activation technique at neutron energies of 2.95, 3.94, and 5.24 MeV, with the neutron capture cross section of 197Au as an internal standard. The effects of multiple scattering and flux attenuation were corrected using the Monte Carlo code GEANT4. Based on the experimental values, the 115In neutron inelastic scattering cross sections were theoretically calculated between 1 and 15 MeV with the TALYS code; the theoretical results of this study are in reasonable agreement with the available experimental results.
Robust scatter correction method for cone-beam CT using an interlacing-slit plate
NASA Astrophysics Data System (ADS)
Huang, Kui-Dong; Xu, Zhe; Zhang, Ding-Hua; Zhang, Hua; Shi, Wen-Long
2016-06-01
Cone-beam computed tomography (CBCT) has been widely used in medical imaging and industrial nondestructive testing, but the presence of scattered radiation causes significant reduction of image quality. In this article, a robust scatter correction method for CBCT using an interlacing-slit plate (ISP) is carried out for convenient practice. Firstly, a Gaussian filtering method is proposed to compensate for the missing data of the inner scatter image, which simultaneously avoids too-large values of the calculated inner scatter and smooths the inner scatter field. Secondly, an interlacing-slit scan without detector gain correction is carried out to enhance the practicality and convenience of the scatter correction method. Finally, a denoising step for the scatter-corrected projection images is added to the process flow to control the noise amplification. The experimental results show that the improved method can not only make the scatter correction more robust and convenient, but also achieve good quality in the scatter-corrected slice images. Supported by National Science and Technology Major Project of the Ministry of Industry and Information Technology of China (2012ZX04007021), Aeronautical Science Fund of China (2014ZE53059), and Fundamental Research Funds for Central Universities of China (3102014KYJD022)
Trans-dimensional joint inversion of seabed scattering and reflection data.
Steininger, Gavin; Dettmer, Jan; Dosso, Stan E; Holland, Charles W
2013-03-01
This paper examines joint inversion of acoustic scattering and reflection data to resolve seabed interface roughness parameters (spectral strength, exponent, and cutoff) and geoacoustic profiles. Trans-dimensional (trans-D) Bayesian sampling is applied with both the number of sediment layers and the order (zeroth or first) of auto-regressive parameters in the error model treated as unknowns. A prior distribution that allows fluid sediment layers over an elastic basement in a trans-D inversion is derived and implemented. Three cases are considered: Scattering-only inversion, joint scattering and reflection inversion, and joint inversion with the trans-D auto-regressive error model. Including reflection data improves the resolution of scattering and geoacoustic parameters. The trans-D auto-regressive model further improves scattering resolution and correctly differentiates between strongly and weakly correlated residual errors.
Brookes, Emre; Vachette, Patrice; Rocco, Mattia; Pérez, Javier
2016-01-01
Size-exclusion chromatography coupled with SAXS (small-angle X-ray scattering), often performed using a flow-through capillary, should allow direct collection of monodisperse sample data. However, capillary fouling issues and non-baseline-resolved peaks can hamper its efficacy. The UltraScan solution modeler (US-SOMO) HPLC-SAXS (high-performance liquid chromatography coupled with SAXS) module provides a comprehensive framework to analyze such data, starting with a simple linear baseline correction and symmetrical Gaussian decomposition tools [Brookes, Pérez, Cardinali, Profumo, Vachette & Rocco (2013). J. Appl. Cryst. 46, 1823–1833]. In addition to several new features, substantial improvements to both routines have now been implemented, comprising the evaluation of outcomes by advanced statistical tools. The novel integral baseline-correction procedure is based on the more sound assumption that the effect of capillary fouling on scattering increases monotonically with the intensity scattered by the material within the X-ray beam. Overlapping peaks, often skewed because of sample interaction with the column matrix, can now be accurately decomposed using non-symmetrical modified Gaussian functions. As an example, the case of a polydisperse solution of aldolase is analyzed: from heavily convoluted peaks, individual SAXS profiles of tetramers, octamers and dodecamers are extracted and reliably modeled. PMID:27738419
Characterization of Scattered X-Ray Photons in Dental Cone-Beam Computed Tomography.
Yang, Ching-Ching
2016-01-01
Scatter is a very important artifact-causing factor in dental cone-beam CT (CBCT) and has a major influence on the detectability of details within images. This work aimed to improve the image quality of dental CBCT through scatter correction. Scatter was estimated in the projection domain from the low-frequency component of the difference between the raw CBCT projection and the projection obtained by extrapolating the model fitted to the raw projections acquired with 2 different sizes of axial field-of-view (FOV). The function for curve fitting was optimized by using Monte Carlo simulation. To validate the proposed method, an anthropomorphic phantom and a water-filled cylindrical phantom with rod inserts simulating different tissue materials were scanned using 120 kVp, 5 mA and a 9-second scanning time covering an axial FOV of 4 cm and 13 cm. The detectability of the CT image was evaluated by calculating the contrast-to-noise ratio (CNR). Beam hardening and cupping artifacts were observed in CBCT images without scatter correction, especially in those acquired with the 13 cm FOV. These artifacts were reduced in CBCT images corrected by the proposed method, demonstrating its efficacy for scatter correction. After scatter correction, the image quality of CBCT was improved in terms of target detectability, which was quantified as the CNR for rod inserts in the cylindrical phantom. Hopefully the calculations performed in this work can provide a route to a high level of diagnostic image quality for CBCT imaging of oral and maxillofacial structures whilst keeping patient dose as low as reasonably achievable, which may ultimately make the CBCT scan a reliable and safe tool in clinical practice.
Stopping power of dense plasmas: The collisional method and limitations of the dielectric formalism.
Clauser, C F; Arista, N R
2018-02-01
We present a study of the stopping power of plasmas using two main approaches: the collisional (scattering theory) and the dielectric formalisms. In the former case, we use a semiclassical method based on quantum scattering theory. In the latter case, we use the full description given by the extension of the Lindhard dielectric function for plasmas of all degeneracies. We compare these two theories and show that the dielectric formalism has limitations when it is used for slow heavy ions or atoms in dense plasmas. We present a study of these limitations and show the regimes where the dielectric formalism can be used, with appropriate corrections to include the usual quantum and classical limits. On the other hand, the semiclassical method shows the correct behavior for all plasma conditions and projectile velocity and charge. We consider different models for the ion charge distributions, including bare and dressed ions as well as neutral atoms.
NASA Astrophysics Data System (ADS)
Bi, Yiming; Tang, Liang; Shan, Peng; Xie, Qiong; Hu, Yong; Peng, Silong; Tan, Jie; Li, Changwen
2014-08-01
Interference such as baseline drift and light scattering can degrade the model predictability in multivariate analysis of near-infrared (NIR) spectra. Usually the interference can be represented by an additive and a multiplicative factor. In order to eliminate these interferences, correction parameters need to be estimated from the spectra. However, the spectra are often a mixture of physical light scattering effects and chemical light absorbance effects, making parameter estimation difficult. Herein, a novel algorithm was proposed to automatically find a spectral region in which the chemical absorbance of interest and the noise are both low, that is, an interference-dominant region (IDR). Based on the definition of the IDR, a two-step method was proposed to find the optimal IDR and the corresponding correction parameters estimated from it. Finally, the correction was applied to the full spectral range using the previously obtained parameters, for the calibration set and the test set, respectively. The method can be applied to multi-target systems, with one IDR suitable for all targeted analytes. Tested on two benchmark data sets of near-infrared spectra, the proposed method provided considerable improvement compared with full-spectrum estimation methods and was comparable with other state-of-the-art methods.
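A minimal sketch, assuming a simple additive-plus-multiplicative interference model: the correction parameters are estimated by ordinary least squares over a hypothetical interference-dominant region and then applied to the full spectrum. The wavelength grid, spectra and IDR bounds below are illustrative and not those of the published algorithm.

    import numpy as np

    rng = np.random.default_rng(0)
    wl = np.linspace(1100.0, 2500.0, 700)                  # nm, illustrative grid
    reference = 0.3 + 0.2 * np.sin(wl / 300.0)             # reference (mean) spectrum
    analyte = 0.15 * np.exp(-((wl - 2100.0) / 40.0) ** 2)  # analyte band of interest
    sample = 0.05 + 1.2 * (reference + analyte)            # additive + multiplicative interference
    sample += rng.normal(0.0, 1e-3, wl.size)

    # Hypothetical interference-dominant region (IDR): low analyte absorbance.
    idr = (wl > 1150.0) & (wl < 1450.0)

    # Estimate the additive (a) and multiplicative (b) parameters in the IDR only:
    # within the IDR the sample is modeled as  sample ~= a + b * reference.
    X = np.column_stack([np.ones(idr.sum()), reference[idr]])
    a, b = np.linalg.lstsq(X, sample[idr], rcond=None)[0]

    # Apply the correction over the full spectral range.
    corrected = (sample - a) / b
    print("a = %.3f, b = %.3f" % (a, b))   # close to the simulated 0.05 and 1.2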
A modified TEW approach to scatter correction for In-111 and Tc-99m dual-isotope small-animal SPECT.
Prior, Paul; Timmins, Rachel; Petryk, Julia; Strydhorst, Jared; Duan, Yin; Wei, Lihui; Glenn Wells, R
2016-10-01
In dual-isotope (Tc-99m/In-111) small-animal single-photon emission computed tomography (SPECT), quantitative accuracy of Tc-99m activity measurements is degraded due to the detection of Compton-scattered photons in the Tc-99m photopeak window, which originate from the In-111 emissions (cross talk) and from the Tc-99m emission (self-scatter). The standard triple-energy window (TEW) estimates the total scatter (self-scatter and cross talk) using one scatter window on either side of the Tc-99m photopeak window, but the estimate is biased due to the presence of unscattered photons in the scatter windows. The authors present a modified TEW method to correct for total scatter that compensates for this bias and evaluate the method in phantoms and in vivo. The number of unscattered Tc-99m and In-111 photons present in each scatter-window projection is estimated based on the number of photons detected in the photopeak of each isotope, using the isotope-dependent energy resolution of the detector. The camera-head-specific energy resolutions for the 140 keV Tc-99m and 171 keV In-111 emissions were determined experimentally by separately sampling the energy spectra of each isotope. Each sampled spectrum was fit with a Linear + Gaussian function. The fitted Gaussian functions were integrated across each energy window to determine the proportion of unscattered photons from each emission detected in the scatter windows. The method was first tested and compared to the standard TEW in phantoms containing Tc-99m:In-111 activity ratios between 0.15 and 6.90. True activities were determined using a dose calibrator, and SPECT activities were estimated from CT-attenuation-corrected images with and without scatter-correction. The method was then tested in vivo in six rats using In-111-liposome and Tc-99m-tetrofosmin to generate cross talk in the area of the myocardium. The myocardium was manually segmented using the SPECT and CT images, and partial-volume correction was performed using a template-based approach. The rat heart was counted in a well-counter to determine the true activity. In the phantoms without correction for Compton-scatter, Tc-99m activity quantification errors as high as 85% were observed. The standard TEW method quantified Tc-99m activity with an average accuracy of -9.0% ± 0.7%, while the modified TEW was accurate within 5% of truth in phantoms with Tc-99m:In-111 activity ratios ≥0.52. Without scatter-correction, In-111 activity was quantified with an average accuracy of 4.1%, and there was no dependence of accuracy on the activity ratio. In rat myocardia, uncorrected images were overestimated by an average of 23% ± 5%, and the standard TEW had an accuracy of -13.8% ± 1.6%, while the modified TEW yielded an accuracy of -4.0% ± 1.6%. Cross talk and self-scatter were shown to produce quantification errors in phantoms as well as in vivo. The standard TEW provided inaccurate results due to the inclusion of unscattered photons in the scatter windows. The modified TEW improved the scatter estimate and reduced the quantification errors in phantoms and in vivo.
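The following sketch illustrates the idea under stated assumptions (it is not the authors' implementation): the standard TEW trapezoid estimate is formed from the flanking-window counts, while the modified estimate first removes the unscattered counts expected to leak into those windows, obtained by integrating a Gaussian photopeak of known energy resolution over the window limits. All window settings and count values are illustrative.

    import numpy as np
    from scipy.stats import norm

    def window_fraction(e_lo, e_hi, e_peak, fwhm):
        # Fraction of a Gaussian photopeak (energy resolution given as FWHM)
        # that falls inside the energy window [e_lo, e_hi].
        sigma = fwhm / 2.355
        return norm.cdf(e_hi, e_peak, sigma) - norm.cdf(e_lo, e_peak, sigma)

    def tew_scatter(c_low, c_up, w_low, w_up, w_peak):
        # Standard TEW: trapezoidal scatter estimate in the photopeak window
        # built from the count densities in the two flanking windows.
        return (c_low / w_low + c_up / w_up) * w_peak / 2.0

    # Illustrative settings (not from the study): Tc-99m photopeak 126-154 keV,
    # 4 keV flanking windows, ~10% FWHM energy resolution at 140 keV.
    fwhm = 14.0
    peak_counts = 1.0e5              # counts in the photopeak window
    c_low, c_up = 9000.0, 4000.0     # counts in the lower/upper flanking windows

    # Modified TEW (sketch): remove the unscattered photons leaking into the
    # flanking windows, predicted from the photopeak counts and the Gaussian
    # energy response, before forming the trapezoid.
    in_peak = window_fraction(126.0, 154.0, 140.0, fwhm)
    leak_low = peak_counts * window_fraction(122.0, 126.0, 140.0, fwhm) / in_peak
    leak_up = peak_counts * window_fraction(154.0, 158.0, 140.0, fwhm) / in_peak

    standard = tew_scatter(c_low, c_up, 4.0, 4.0, 28.0)
    modified = tew_scatter(max(c_low - leak_low, 0.0), max(c_up - leak_up, 0.0), 4.0, 4.0, 28.0)
    print("standard TEW:", standard, " modified TEW:", modified)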
Improving satellite retrievals of NO2 in biomass burning regions
NASA Astrophysics Data System (ADS)
Bousserez, N.; Martin, R. V.; Lamsal, L. N.; Mao, J.; Cohen, R. C.; Anderson, B. E.
2010-12-01
The quality of space-based nitrogen dioxide (NO2) retrievals from solar backscatter depends on a priori knowledge of the NO2 profile shape as well as the effects of atmospheric scattering. These effects are characterized by the air mass factor (AMF) calculation. Calculation of the AMF combines a radiative transfer calculation together with a priori information about aerosols and about NO2 profiles (shape factors), which are usually taken from a chemical transport model. In this work we assess the impact of biomass burning emissions on the AMF using the LIDORT radiative transfer model and a GEOS-Chem simulation based on a daily fire emissions inventory (FLAMBE). We evaluate the GEOS-Chem aerosol optical properties and NO2 shape factors using in situ data from the ARCTAS summer 2008 (North America) and DABEX winter 2006 (western Africa) experiments. Sensitivity studies are conducted to assess the impact of biomass burning on the aerosols and the NO2 shape factors used in the AMF calculation. The mean aerosol correction over boreal fires is negligible (+3%), in contrast with a large reduction (-18%) over African savanna fires. The change in sign and magnitude over boreal forest and savanna fires appears to be driven by the shielding effects that arise from the greater biomass burning aerosol optical thickness (AOT) above the African biomass burning NO2. In agreement with previous work, the single scattering albedo (SSA) also affects the aerosol correction. We further investigated the effect of clouds on the aerosol correction. For a fixed AOT, the aerosol correction can increase from 20% to 50% when cloud fraction increases from 0 to 30%. Over both boreal and savanna fires, the greatest impact on the AMF is from the fire-induced change in the NO2 profile (shape factor correction), which decreases the AMF by 38% over the boreal fires and by 62% over the savanna fires. Combining the aerosol and shape factor corrections together results in small differences compared to the shape factor correction alone (without the aerosol correction), indicating that a shape factor-only correction is a good approximation of the total AMF correction associated with fire emissions. We use this result to define a measurement-based correction of the AMF based on the relationship between the slant column variability and the variability of the shape factor in the lower troposphere. This method may be generalized to other types of emission sources.
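A minimal sketch of the AMF calculation underlying the shape factor correction, assuming the usual formulation in which the AMF is the profile-weighted average of layer scattering weights (box air mass factors); the layer values are invented for illustration.

    import numpy as np

    def air_mass_factor(scattering_weights, partial_columns):
        # AMF = sum_z w(z) x(z) / sum_z x(z): scattering weights (box AMFs from a
        # radiative transfer model such as LIDORT) averaged with the a priori NO2
        # profile (shape factor) as weights.
        w = np.asarray(scattering_weights, dtype=float)
        x = np.asarray(partial_columns, dtype=float)
        return np.sum(w * x) / np.sum(x)

    # Illustrative 5-layer example: fire emissions concentrate NO2 near the
    # surface, where sensitivity is lowest, so the AMF drops and the retrieved
    # vertical column increases.
    weights = np.array([0.3, 0.5, 0.8, 1.0, 1.1])        # surface -> upper troposphere
    background = np.array([1.0, 1.0, 1.0, 1.0, 1.0])
    fire = np.array([5.0, 3.0, 1.0, 0.5, 0.2])
    print(air_mass_factor(weights, background))   # larger AMF
    print(air_mass_factor(weights, fire))         # smaller AMF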
NASA Astrophysics Data System (ADS)
Sahu, Sanjay Kumar; Shanmugam, Palanisamy
2018-02-01
Scattering by water molecules and particulate matter determines the path and distance of photon propagation in the underwater medium. Consequently, the photon scattering angle (given by the scattering phase function) needs to be considered, in addition to the extinction coefficient of the aquatic medium governed by the absorption and scattering coefficients, when characterizing the channel for an underwater wireless optical communication (UWOC) system. This study focuses on analyzing the received signal power and impulse response of the UWOC channel based on Monte-Carlo simulations for different water types, link distances, link geometries and transceiver parameters. A newly developed scattering phase function (referred to as the SS phase function), which, like the Petzold phase function, represents real water types accurately, is considered for quantification of the channel characteristics along with the effects of the absorption and scattering coefficients. A comparison between the results simulated using various phase function models and the experimental measurements of Petzold revealed that the SS phase function model predicts values closely matching the actual values of the Petzold phase function, which further establishes the importance of using a correct scattering phase function model while estimating the channel capacity of a UWOC system in terms of the received power and channel impulse response. Results further demonstrate a great advantage of considering the nonzero probability of receiving scattered photons when estimating channel capacity rather than considering the reception of only ballistic photons as in Beer's Law, which severely underestimates the received power and affects the range of communication, especially in scattering water columns. The received power computed with the Monte-Carlo method for different receiver aperture sizes and fields of view in different water types is further analyzed and discussed. These results are essential for evaluating the underwater link budget and constructing different system and design parameters for a UWOC system.
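For comparison with the Monte-Carlo treatment, a minimal sketch of the ballistic-only (Beer's law) received-power estimate mentioned above; the water coefficients, link geometry and simple geometric-loss model are illustrative assumptions, not values or formulas from the study.

    import numpy as np

    def ballistic_received_power(p_tx, a, b, distance, aperture_diameter, beam_divergence):
        # Ballistic (unscattered-photon) link budget: Beer-Lambert attenuation by
        # the extinction coefficient c = a + b plus simple geometric spreading of a
        # diverging beam onto a circular receiver aperture.
        c = a + b                                       # extinction (1/m)
        beam_radius = distance * np.tan(beam_divergence / 2.0)
        geometric_loss = min((aperture_diameter / 2.0) ** 2 / beam_radius ** 2, 1.0)
        return p_tx * np.exp(-c * distance) * geometric_loss

    # Illustrative coastal-water coefficients (1/m), 10 m link, 5 cm aperture.
    print(ballistic_received_power(p_tx=1.0, a=0.179, b=0.219, distance=10.0,
                                   aperture_diameter=0.05,
                                   beam_divergence=np.deg2rad(1.0)))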
Xu, Yuan; Bai, Ti; Yan, Hao; Ouyang, Luo; Pompos, Arnold; Wang, Jing; Zhou, Linghong; Jiang, Steve B.; Jia, Xun
2015-01-01
Cone-beam CT (CBCT) has become the standard image guidance tool for patient setup in image-guided radiation therapy. However, due to its large illumination field, scattered photons severely degrade its image quality. While kernel-based scatter correction methods have been used routinely in the clinic, it is still desirable to develop Monte Carlo (MC) simulation-based methods due to their accuracy. However, the high computational burden of the MC method has prevented routine clinical application. This paper reports our recent development of a practical method of MC-based scatter estimation and removal for CBCT. In contrast with conventional MC approaches that estimate scatter signals using a scatter-contaminated CBCT image, our method used a planning CT image for MC simulation, which has the advantages of accurate image intensity and absence of image truncation. In our method, the planning CT was first rigidly registered with the CBCT. Scatter signals were then estimated via MC simulation. After scatter signals were removed from the raw CBCT projections, a corrected CBCT image was reconstructed. The entire workflow was implemented on a GPU platform for high computational efficiency. Strategies such as projection denoising, CT image downsampling, and interpolation along the angular direction were employed to further enhance the calculation speed. We studied the impact of key parameters in the workflow on the resulting accuracy and efficiency, based on which the optimal parameter values were determined. Our method was evaluated in numerical simulation, phantom, and real patient cases. In the simulation cases, our method reduced mean HU errors from 44 HU to 3 HU and from 78 HU to 9 HU in the full-fan and the half-fan cases, respectively. In both the phantom and the patient cases, image artifacts caused by scatter, such as ring artifacts around the bowtie area, were reduced. With all the techniques employed, we achieved computation time of less than 30 sec including the time for both the scatter estimation and CBCT reconstruction steps. The efficacy of our method and its high computational efficiency make our method attractive for clinical use. PMID:25860299
Simulation-based artifact correction (SBAC) for metrological computed tomography
NASA Astrophysics Data System (ADS)
Maier, Joscha; Leinweber, Carsten; Sawall, Stefan; Stoschus, Henning; Ballach, Frederic; Müller, Tobias; Hammer, Michael; Christoph, Ralf; Kachelrieß, Marc
2017-06-01
Computed tomography (CT) is a valuable tool for the metrological assessment of industrial components. However, the application of CT to the investigation of highly attenuating objects or multi-material components is often restricted by the presence of CT artifacts caused by beam hardening, x-ray scatter, off-focal radiation, partial volume effects or the cone-beam reconstruction itself. In order to overcome this limitation, this paper proposes an approach to calculate a correction term that compensates for the contribution of artifacts and thus enables an appropriate assessment of these components using CT. To this end, we make use of computer simulations of the CT measurement process. Based on an appropriate model of the object, e.g. an initial reconstruction or a CAD model, two simulations are carried out. One simulation considers all physical effects that cause artifacts using dedicated analytic methods as well as Monte Carlo-based models. The other one represents an ideal CT measurement i.e. a measurement in parallel beam geometry with a monochromatic, point-like x-ray source and no x-ray scattering. Thus, the difference between these simulations is an estimate for the present artifacts and can be used to correct the acquired projection data or the corresponding CT reconstruction, respectively. The performance of the proposed approach is evaluated using simulated as well as measured data of single and multi-material components. Our approach yields CT reconstructions that are nearly free of artifacts and thereby clearly outperforms commonly used artifact reduction algorithms in terms of image quality. A comparison against tactile reference measurements demonstrates the ability of the proposed approach to increase the accuracy of the metrological assessment significantly.
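A minimal sketch of the correction principle, assuming projection-domain subtraction: the difference between the realistic and the ideal simulation estimates the artifact contribution, which is removed from the measured data. The toy arrays are for illustration only.

    import numpy as np

    def sbac_correct(measured, sim_realistic, sim_ideal):
        # Simulation-based artifact correction (sketch): the difference between a
        # realistic simulation (polychromatic spectrum, scatter, extended focal
        # spot, ...) and an ideal one (monochromatic, scatter-free, point source,
        # parallel beam) estimates the artifact contribution, which is then
        # subtracted from the measured projection data.
        return measured - (sim_realistic - sim_ideal)

    # Tiny synthetic example: one projection with a constant artifact offset of
    # 0.05 that the realistic simulation reproduces.
    ideal = np.array([[1.0, 2.0], [2.0, 1.0]])
    measured = ideal + 0.05
    realistic = ideal + 0.05
    print(sbac_correct(measured, realistic, ideal))   # recovers the ideal projections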
NASA Astrophysics Data System (ADS)
Mustak, S.
2013-09-01
The correction of atmospheric effects is essential because visible bands of shorter wavelength are highly affected by atmospheric scattering, especially Rayleigh scattering. The objective of this paper is to find out the haze values present in all spectral bands and to correct them for urban analysis. In this paper, the Improved Dark Object Subtraction method of P. Chavez (1988) is applied for the correction of atmospheric haze in a Resourcesat-1 LISS-4 multispectral satellite image. Dark Object Subtraction is a very simple image-based method of atmospheric haze correction which assumes that there are at least a few pixels within an image which should be black; such black reflectance is termed the dark object, typically clear water bodies and shadows, whose DN values are zero (0) or close to zero in the image. The Simple Dark Object Subtraction method is a first-order atmospheric correction, whereas the Improved Dark Object Subtraction method corrects the haze in terms of atmospheric scattering and path radiance based on a power law describing the relative scattering effect of the atmosphere. The haze values extracted using the Simple Dark Object Subtraction method for the green band (Band 2), red band (Band 3) and NIR band (Band 4) are 40, 34 and 18, whereas the haze values extracted using the Improved Dark Object Subtraction method are 40, 18.02 and 11.80 for the aforesaid bands. It is concluded that the haze values extracted by the Improved Dark Object Subtraction method provide more realistic results than the Simple Dark Object Subtraction method.
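A loose sketch of the power-law idea behind the improved method, under the assumption that the haze identified in a starting band is propagated to the other bands with a relative scattering model of the form lambda to the power of minus n; the band-centre wavelengths and exponent below are illustrative and do not reproduce the reported LISS-4 values.

    import numpy as np

    def improved_dos_haze(start_haze, start_wavelength, wavelengths, exponent):
        # Chavez-style improved dark-object subtraction (sketch): the haze value
        # identified in a starting band is scaled to the other bands with a
        # relative scattering model of the form lambda ** (-exponent), instead of
        # subtracting an independently chosen dark-object DN per band.
        rel = (np.asarray(wavelengths, dtype=float) / start_wavelength) ** (-exponent)
        return start_haze * rel

    # Illustrative band-centre wavelengths (micrometres) for green, red and NIR
    # bands; the exponent encodes the atmospheric condition (larger for clearer
    # atmospheres).  Values are for illustration only.
    print(improved_dos_haze(start_haze=40.0, start_wavelength=0.56,
                            wavelengths=[0.56, 0.65, 0.81], exponent=2.0))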
Calibration of AIS Data Using Ground-based Spectral Reflectance Measurements
NASA Technical Reports Server (NTRS)
Conel, J. E.
1985-01-01
Present methods of correcting airborne imaging spectrometer (AIS) data for instrumental and atmospheric effects include the flat- or curved-field correction and a deviation-from-the-average adjustment performed on a line-by-line basis throughout the image. Both methods eliminate the atmospheric absorptions, but remove the possibility of studying the atmosphere for its own sake, or of using the atmospheric information present as a possible basis for theoretical modeling. The method discussed here relies on use of ground-based measurements of the surface spectral reflectance in comparison with scanner data to fix in a least-squares sense parameters in a simplified model of the atmosphere on a wavelength-by-wavelength basis. The model parameters (for optically thin conditions) are interpretable in terms of optical depth and scattering phase function, and thus, in principle, provide an approximate description of the atmosphere as a homogeneous body intervening between the sensor and the ground.
Transient radiative transfer in a scattering slab considering polarization.
Yi, Hongliang; Ben, Xun; Tan, Heping
2013-11-04
The transient and polarization characteristics must be considered for a complete and correct description of short-pulse laser transfer in a scattering medium. A Monte Carlo (MC) method combined with a time shift and superposition principle is developed to simulate transient vector (polarized) radiative transfer in a scattering medium. The transient vector radiative transfer matrix (TVRTM) is defined to describe the transient polarization behavior of a short-pulse laser propagating in the scattering medium. According to the definition of reflectivity, a new criterion of reflection at a Fresnel surface is presented. In order to improve the computational efficiency and accuracy, a time shift and superposition principle is applied to the MC model for transient vector radiative transfer. The results for transient scalar radiative transfer and steady-state vector radiative transfer are compared with those in the published literature, respectively, and an excellent agreement between them is observed, which validates the correctness of the present model. Finally, transient radiative transfer is simulated considering the polarization effect of a short-pulse laser in a scattering medium, and the distributions of the Stokes vector in angular and temporal space are presented.
Laser-plasma interactions in magnetized environment
NASA Astrophysics Data System (ADS)
Shi, Yuan; Qin, Hong; Fisch, Nathaniel J.
2018-05-01
Propagation and scattering of lasers present new phenomena and applications when the plasma medium becomes strongly magnetized. With mega-Gauss magnetic fields, scattering of optical lasers already becomes manifestly anisotropic. Special angles exist where coherent laser scattering is either enhanced or suppressed, as we demonstrate using a cold-fluid model. Consequently, by aiming laser beams at special angles, one may be able to optimize laser-plasma coupling in magnetized implosion experiments. In addition, magnetized scattering can be exploited to improve the performance of plasma-based laser pulse amplifiers. Using the magnetic field as an extra control variable, it is possible to produce optical pulses of higher intensity, as well as compress UV and soft x-ray pulses beyond the reach of other methods. In even stronger giga-Gauss magnetic fields, laser-plasma interaction enters a relativistic-quantum regime. Using quantum electrodynamics, we compute a modified wave dispersion relation, which enables correct interpretation of Faraday rotation measurements of strong magnetic fields.
Image Reconstruction for a Partially Collimated Whole Body PET Scanner
Alessio, Adam M.; Schmitz, Ruth E.; MacDonald, Lawrence R.; Wollenweber, Scott D.; Stearns, Charles W.; Ross, Steven G.; Ganin, Alex; Lewellen, Thomas K.; Kinahan, Paul E.
2008-01-01
Partially collimated PET systems have less collimation than conventional 2-D systems and have been shown to offer count rate improvements over 2-D and 3-D systems. Despite this potential, previous efforts have not established image-based improvements with partial collimation and have not customized the reconstruction method for partially collimated data. This work presents an image reconstruction method tailored for partially collimated data. Simulated and measured sensitivity patterns are presented and provide a basis for modification of a fully 3-D reconstruction technique. The proposed method uses a measured normalization correction term to account for the unique sensitivity to true events. This work also proposes a modified scatter correction based on simulated data. Measured image quality data supports the use of the normalization correction term for true events, and suggests that the modified scatter correction is unnecessary. PMID:19096731
Elastic/Inelastic Measurement Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yates, Steven; Hicks, Sally; Vanhoy, Jeffrey
2016-03-01
The work scope involves the measurement of neutron scattering from natural sodium (23Na) and two isotopes of iron, 56Fe and 54Fe. Angular distributions, i.e., differential cross sections, of the scattered neutrons will be measured for 5 to 10 incident neutron energies per year. The work of the first year concentrates on 23Na, while the enriched iron samples are procured. Differential neutron scattering cross sections provide information to guide nuclear reaction model calculations in the low-energy (few MeV) fast-neutron region. This region lies just above the isolated resonance region, which in general is well studied; however, model calculations are difficult in this region because overlapping resonance structure is evident and direct nuclear reactions are becoming important. The standard optical model treatment exhibits good predictive ability for the wide-region average cross sections but cannot treat the overlapping resonance features. In addition, models that do predict the direct reaction component must be guided by measurements to describe correctly the strength of the direct component, e.g., β2 must be known to describe the direct component of the scattering to the first excited state. Measurements of the elastic scattering differential cross sections guide the optical model calculations, while inelastic differential cross sections provide the crucial information for correctly describing the direct component. Activities occurring during the performance period are described.
TH-A-18C-09: Ultra-Fast Monte Carlo Simulation for Cone Beam CT Imaging of Brain Trauma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sisniega, A; Zbijewski, W; Stayman, J
Purpose: Application of cone-beam CT (CBCT) to low-contrast soft tissue imaging, such as in detection of traumatic brain injury, is challenged by high levels of scatter. A fast, accurate scatter correction method based on Monte Carlo (MC) estimation is developed for application in high-quality CBCT imaging of acute brain injury. Methods: The correction involves MC scatter estimation executed on an NVIDIA GTX 780 GPU (MC-GPU), with baseline simulation speed of ~1e7 photons/sec. MC-GPU is accelerated by a novel, GPU-optimized implementation of variance reduction (VR) techniques (forced detection and photon splitting). The number of simulated tracks and projections is reduced for additional speed-up. Residual noise is removed and the missing scatter projections are estimated via kernel smoothing (KS) in the projection plane and across gantry angles. The method is assessed using CBCT images of a head phantom presenting a realistic simulation of fresh intracranial hemorrhage (100 kVp, 180 mAs, 720 projections, source-detector distance 700 mm, source-axis distance 480 mm). Results: For a fixed run-time of ~1 sec/projection, GPU-optimized VR reduces the noise in MC-GPU scatter estimates by a factor of 4. For scatter correction, MC-GPU with VR is executed with 4-fold angular downsampling and 1e5 photons/projection, yielding 3.5 minute run-time per scan, and de-noised with optimized KS. Corrected CBCT images demonstrate uniformity improvement of 18 HU and contrast improvement of 26 HU compared to no correction, and a 52% increase in contrast-to-noise ratio in simulated hemorrhage compared to “oracle” constant fraction correction. Conclusion: Acceleration of MC-GPU achieved through GPU-optimized variance reduction and kernel smoothing yields an efficient (<5 min/scan) and accurate scatter correction that does not rely on additional hardware or simplifying assumptions about the scatter distribution. The method is undergoing implementation in a novel CBCT dedicated to brain trauma imaging at the point of care in sports and military applications. Research grant from Carestream Health. JY is an employee of Carestream Health.
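A minimal sketch, under simplifying assumptions, of the kernel-smoothing and angular-interpolation step used to denoise sparse Monte Carlo scatter estimates; the smoothing widths, array sizes and noise level are invented for the example.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from scipy.interpolate import interp1d

    def denoise_and_upsample_scatter(scatter_sparse, angles_sparse, angles_full,
                                     sigma_uv=8.0, sigma_angle=1.0):
        # scatter_sparse: noisy MC scatter estimates of shape (n_sparse, nu, nv),
        # simulated with few photons and only at a subset of gantry angles.
        # 1) Gaussian kernel smoothing in the detector plane and across angles
        #    removes MC noise (scatter is a smooth, low-frequency signal).
        smoothed = gaussian_filter(scatter_sparse,
                                   sigma=(sigma_angle, sigma_uv, sigma_uv))
        # 2) Linear interpolation across gantry angle fills the skipped projections.
        f = interp1d(angles_sparse, smoothed, axis=0, kind="linear",
                     fill_value="extrapolate")
        return f(angles_full)

    # Tiny synthetic example: 180 projections, scatter simulated at every 4th angle.
    rng = np.random.default_rng(0)
    angles_full = np.arange(180.0)
    angles_sparse = angles_full[::4]
    noisy = 1.0 + rng.normal(0.0, 0.3, (angles_sparse.size, 32, 32))
    full = denoise_and_upsample_scatter(noisy, angles_sparse, angles_full)
    print(full.shape, float(full.mean()))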
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Ho; Xing Lei; Lee, Rena
2012-05-15
Purpose: X-ray scatter reaching the detector degrades the quality of cone-beam computed tomography (CBCT) and represents a problem in volumetric image-guided and adaptive radiation therapy. Several methods using a beam blocker for the estimation and subtraction of scatter have been proposed. However, due to missing information resulting from the obstruction of the blocker, such methods require dual scanning or a dynamically moving blocker to obtain a complete volumetric image. Here, we propose a half beam blocker-based approach, in conjunction with a total variation (TV) regularized Feldkamp-Davis-Kress (FDK) algorithm, to correct scatter-induced artifacts by simultaneously acquiring image and scatter information from a single-rotation CBCT scan. Methods: A half beam blocker, comprising lead strips, is used to simultaneously acquire image data on one side of the projection data and scatter data on the other half side. One-dimensional cubic B-spline interpolation/extrapolation is applied to derive patient-specific scatter information by using the scatter distributions on the strips. The estimated scatter is subtracted from the projection image acquired at the opposite view. With scatter-corrected projections where this subtraction is completed, the FDK algorithm based on a cosine weighting function is performed to reconstruct the CBCT volume. To suppress the noise in the reconstructed CBCT images produced by geometric errors between two opposed projections and interpolated scatter information, total variation regularization is applied by a minimization using a steepest gradient descent optimization method. Experimental studies using Catphan504 and anthropomorphic phantoms were carried out to evaluate the performance of the proposed scheme. Results: The scatter-induced shading artifacts were markedly suppressed in CBCT using the proposed scheme. Compared with CBCT without a blocker, the nonuniformity value was reduced from 39.3% to 3.1%. The root mean square error relative to values inside the regions of interest selected from a benchmark scatter-free image was reduced from 50 to 11.3. The TV regularization also led to a better contrast-to-noise ratio. Conclusions: An asymmetric half beam blocker-based FDK acquisition and reconstruction technique has been established. The proposed scheme enables simultaneous detection of patient-specific scatter and complete volumetric CBCT reconstruction without additional requirements such as prior images, dual scans, or moving strips.
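A one-dimensional sketch of the spline interpolation step, assuming the scatter samples under the strips are smooth enough to interpolate across the detector; the strip positions and scatter values are invented for the example.

    import numpy as np
    from scipy.interpolate import CubicSpline

    def estimate_scatter_profile(strip_positions, strip_scatter, detector_rows):
        # 1-D sketch: the scatter signal sampled in the shadow of the lead strips
        # is interpolated (and extrapolated) with a cubic spline to the full
        # detector, exploiting the smooth, low-frequency nature of scatter.
        spline = CubicSpline(strip_positions, strip_scatter, extrapolate=True)
        return spline(detector_rows)

    rows = np.arange(256)
    strips = np.array([16, 48, 80, 112, 144, 176, 208, 240], dtype=float)  # strip centres
    scatter_at_strips = 0.2 + 0.05 * np.sin(strips / 60.0)                 # synthetic samples
    scatter_full = estimate_scatter_profile(strips, scatter_at_strips, rows)

    # In the scheme described above, the estimate obtained from the blocked half
    # would then be subtracted from the projection acquired at the opposite
    # (unblocked) view before FDK reconstruction.
    print(scatter_full.shape)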
Modeling and design of a cone-beam CT head scanner using task-based imaging performance optimization
NASA Astrophysics Data System (ADS)
Xu, J.; Sisniega, A.; Zbijewski, W.; Dang, H.; Stayman, J. W.; Wang, X.; Foos, D. H.; Aygun, N.; Koliatsos, V. E.; Siewerdsen, J. H.
2016-04-01
Detection of acute intracranial hemorrhage (ICH) is important for diagnosis and treatment of traumatic brain injury, stroke, postoperative bleeding, and other head and neck injuries. This paper details the design and development of a cone-beam CT (CBCT) system developed specifically for the detection of low-contrast ICH in a form suitable for application at the point of care. Recognizing such a low-contrast imaging task to be a major challenge in CBCT, the system design began with a rigorous analysis of task-based detectability including critical aspects of system geometry, hardware configuration, and artifact correction. The imaging performance model described the three-dimensional (3D) noise-equivalent quanta using a cascaded systems model that included the effects of scatter, scatter correction, hardware considerations of complementary metal-oxide semiconductor (CMOS) and flat-panel detectors (FPDs), and digitization bit depth. The performance was analyzed with respect to a low-contrast (40-80 HU), medium-frequency task representing acute ICH detection. The task-based detectability index was computed using a non-prewhitening observer model. The optimization was performed with respect to four major design considerations: (1) system geometry (including source-to-detector distance (SDD) and source-to-axis distance (SAD)); (2) factors related to the x-ray source (including focal spot size, kVp, dose, and tube power); (3) scatter correction and selection of an antiscatter grid; and (4) x-ray detector configuration (including pixel size, additive electronics noise, field of view (FOV), and frame rate, including both CMOS and a-Si:H FPDs). Optimal design choices were also considered with respect to practical constraints and available hardware components. The model was verified in comparison to measurements on a CBCT imaging bench as a function of the numerous design parameters mentioned above. An extended geometry (SAD = 750 mm, SDD = 1100 mm) was found to be advantageous in terms of patient dose (20 mGy) and scatter reduction, while a more isocentric configuration (SAD = 550 mm, SDD = 1000 mm) was found to give a more compact and mechanically favorable configuration with minor tradeoff in detectability. An x-ray source with a 0.6 mm focal spot size provided the best compromise between spatial resolution requirements and x-ray tube power. Use of a modest anti-scatter grid (8:1 GR) at a 20 mGy dose provided slight improvement (~5-10%) in the detectability index, but the benefit was lost at reduced dose. The potential advantages of CMOS detectors over FPDs were quantified, showing that both detectors provided sufficient spatial resolution for ICH detection, while the former provided a potentially superior low-dose performance, and the latter provided the requisite FOV for volumetric imaging in a centered-detector geometry. Task-based imaging performance modeling provides an important starting point for CBCT system design, especially for the challenging task of ICH detection, which is somewhat beyond the capabilities of existing CBCT platforms. The model identifies important tradeoffs in system geometry and hardware configuration, and it supports the development of a dedicated CBCT system for point-of-care application. A prototype suitable for clinical studies is in development based on this analysis.
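A simplified one-dimensional sketch of a non-prewhitening detectability index of the kind used to drive such an optimization, assuming the standard NPW form with a task function, MTF and NPS defined on a common frequency grid; all inputs are made up, and the model in the paper is a full cascaded-systems, three-dimensional formulation.

    import numpy as np

    def detectability_npw(task, mtf, nps, df):
        # Non-prewhitening observer model (1-D sketch):
        #   d'^2 = [ sum |W|^2 MTF^2 df ]^2 / sum |W|^2 MTF^2 NPS df
        # where W is the task function (difference of the object spectra).
        num = (np.sum(np.abs(task) ** 2 * mtf ** 2) * df) ** 2
        den = np.sum(np.abs(task) ** 2 * mtf ** 2 * nps) * df
        return np.sqrt(num / den)

    # Made-up inputs: a medium-frequency task, a Gaussian-like MTF and an NPS with
    # correlated (quantum) and flat (electronic) components.
    f = np.linspace(0.01, 2.0, 400)            # cycles/mm
    df = f[1] - f[0]
    mtf = np.exp(-(f / 1.2) ** 2)
    nps = 1e-6 * mtf ** 2 + 2e-7
    task = np.exp(-((f - 0.4) / 0.2) ** 2)
    print("d' =", detectability_npw(task, mtf, nps, df))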
NASA Technical Reports Server (NTRS)
Walker, Eric L.
2005-01-01
Wind tunnel experiments will continue to be a primary source of validation data for many types of mathematical and computational models in the aerospace industry. The increased emphasis on accuracy of data acquired from these facilities requires understanding of the uncertainty of not only the measurement data but also any correction applied to the data. One of the largest and most critical corrections made to these data is due to wall interference. In an effort to understand the accuracy and suitability of these corrections, a statistical validation process for wall interference correction methods has been developed. This process is based on the use of independent cases which, after correction, are expected to produce the same result. Comparison of these independent cases with respect to the uncertainty in the correction process establishes a domain of applicability based on the capability of the method to provide reasonable corrections with respect to customer accuracy requirements. The statistical validation method was applied to the version of the Transonic Wall Interference Correction System (TWICS) recently implemented in the National Transonic Facility at NASA Langley Research Center. The TWICS code generates corrections for solid and slotted wall interference in the model pitch plane based on boundary pressure measurements. Before validation could be performed on this method, it was necessary to calibrate the ventilated wall boundary condition parameters. Discrimination comparisons are used to determine the most representative of three linear boundary condition models which have historically been used to represent longitudinally slotted test section walls. Of the three linear boundary condition models implemented for ventilated walls, the general slotted wall model was the most representative of the data. The TWICS code using the calibrated general slotted wall model was found to be valid to within the process uncertainty for test section Mach numbers less than or equal to 0.60. The scatter among the mean corrected results of the bodies of revolution validation cases was within one count of drag on a typical transport aircraft configuration for Mach numbers at or below 0.80 and two counts of drag for Mach numbers at or below 0.90.
A library least-squares approach for scatter correction in gamma-ray tomography
NASA Astrophysics Data System (ADS)
Meric, Ilker; Anton Johansen, Geir; Valgueiro Malta Moreira, Icaro
2015-03-01
Scattered radiation is known to lead to distortion in reconstructed images in Computed Tomography (CT). The effects of scattered radiation are especially more pronounced in non-scanning, multiple source systems which are preferred for flow imaging where the instantaneous density distribution of the flow components is of interest. In this work, a new method based on a library least-squares (LLS) approach is proposed as a means of estimating the scatter contribution and correcting for this. The validity of the proposed method is tested using the 85-channel industrial gamma-ray tomograph previously developed at the University of Bergen (UoB). The results presented here confirm that the LLS approach can effectively estimate the amounts of transmission and scatter components in any given detector in the UoB gamma-ray tomography system.
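A minimal sketch of the library least-squares idea, assuming pre-characterized transmission and scatter library spectra and a non-negative least-squares fit per detector; the spectra below are synthetic and the decomposition into exactly two components is an illustrative simplification.

    import numpy as np
    from scipy.optimize import nnls

    def lls_decompose(measured_spectrum, library_spectra):
        # Library least-squares (sketch): express the measured energy spectrum in a
        # detector as a non-negative combination of pre-characterised library
        # spectra, here a pure-transmission and a pure-scatter component.
        A = np.column_stack(library_spectra)
        coeffs, residual = nnls(A, measured_spectrum)
        return coeffs, residual

    # Synthetic 64-bin spectra: a transmission peak and a broad scatter continuum.
    bins = np.arange(64)
    transmission = np.exp(-0.5 * ((bins - 50) / 2.0) ** 2)
    scatter = np.exp(-bins / 20.0)
    measured = 0.7 * transmission + 0.3 * scatter
    coeffs, _ = lls_decompose(measured, [transmission, scatter])
    print("transmission and scatter weights:", coeffs)   # close to 0.7 and 0.3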
Neutron Angular Scatter Effects in 3DHZETRN: Quasi-Elastic
NASA Technical Reports Server (NTRS)
Wilson, John W.; Werneth, Charles M.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.
2017-01-01
The current 3DHZETRN code has a detailed three dimensional (3D) treatment of neutron transport based on a forward/isotropic assumption and has been compared to Monte Carlo (MC) simulation codes in various geometries. In most cases, it has been found that 3DHZETRN agrees with the MC codes to the extent they agree with each other. However, a recent study of neutron leakage from finite geometries revealed that further improvements to the 3DHZETRN formalism are needed. In the present report, angular scattering corrections to the neutron fluence are provided in an attempt to improve fluence estimates from a uniform sphere. It is found that further developments in the nuclear production models are required to fully evaluate the impact of transport model updates. A model for the quasi-elastic neutron production spectra is therefore developed and implemented into 3DHZETRN.
Assessment of the Subgrid-Scale Models at Low and High Reynolds Numbers
NASA Technical Reports Server (NTRS)
Horiuti, K.
1996-01-01
Accurate SGS models must be capable of correctly representing the energy transfer between GS and SGS. Recent direct assessment of the energy transfer carried out using direct numerical simulation (DNS) data for wall-bounded flows revealed that the energy exchange is not unidirectional. Although GS kinetic energy is transferred to the SGS (forward scatter, F-scatter) on average, SGS energy is also transferred to the GS. The latter energy exchange (backward scatter, B-scatter) is very significant, i.e., the local energy exchange can be backward nearly as often as forward and the local rate of B-scatter is considerably higher than the net rate of energy dissipation.
Mbaye, Moussa; Diaw, Pape Abdoulaye; Gaye-Saye, Diabou; Le Jeune, Bernard; Cavalin, Goulven; Denis, Lydie; Aaron, Jean-Jacques; Delmas, Roger; Giamarchi, Philippe
2018-03-05
Permanent online monitoring of water supply pollution by hydrocarbons is needed for various industrial plants, to serve as an alert when thresholds are exceeded. Fluorescence spectroscopy is a suitable technique for this purpose due to its sensitivity and moderate cost. However, fluorescence measurements can be disturbed by the presence of suspended organic matter, which induces beam scattering and absorption, leading to an underestimation of hydrocarbon content. To overcome this problem, we propose an original technique of fluorescence spectra correction, based on a measure of the excitation beam scattering caused by suspended organic matter on the left side of the Rayleigh scattering spectral line. This correction allowed us to obtain a statistically validated estimate of the naphthalene content (used as representative of the polyaromatic hydrocarbon contamination), regardless of the amount of suspended organic matter in the sample. Moreover, it thus becomes possible, based on this correction, to estimate the amount of suspended organic matter. By this approach, the online warning system remains operational even when suspended organic matter is present in the water supply.
Development of PET projection data correction algorithm
NASA Astrophysics Data System (ADS)
Bazhanov, P. V.; Kotina, E. D.
2017-12-01
Positron emission tomography is a modern nuclear medicine method used to examine metabolism and the function of internal organs. This method allows diseases to be diagnosed at their early stages. Mathematical algorithms are widely used not only for image reconstruction but also for PET data correction. In this paper, the implementation of random-coincidence and scatter correction algorithms is considered, as well as an algorithm for modeling PET projection data acquisition used to verify the corrections.
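A minimal sketch of two standard corrections of this kind, assuming the singles-based randoms estimate R = 2*tau*S_i*S_j per line of response; the rates and the scatter value are illustrative, and this is not necessarily the algorithm implemented by the authors.

    def randoms_rate_from_singles(singles_i, singles_j, tau):
        # Expected random-coincidence rate on the line of response between
        # detectors i and j: R_ij = 2 * tau * S_i * S_j, with tau the coincidence
        # timing window (s) and S_i, S_j the singles rates (counts/s).
        return 2.0 * tau * singles_i * singles_j

    def trues_estimate(prompts, randoms, scatter):
        # Trues per line of response once randoms and scatter estimates are removed.
        return prompts - randoms - scatter

    # Illustrative values: 50 kcps singles on both detectors, 6 ns timing window.
    r = randoms_rate_from_singles(5.0e4, 5.0e4, 6.0e-9)
    print("randoms rate (cps):", r)                              # 30 cps
    print("trues:", trues_estimate(prompts=200.0, randoms=r, scatter=40.0))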
The beam stop array method to measure object scatter in digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Lee, Haeng-hwa; Kim, Ye-seul; Park, Hye-Suk; Kim, Hee-Joung; Choi, Jae-Gu; Choi, Young-Wook
2014-03-01
Scattered radiation is inevitably generated in the object. The distribution of the scattered radiation is influenced by object thickness, field size, object-to-detector distance, and primary energy. One approach to measuring scatter intensities involves measuring the signal detected under the shadow of the lead discs of a beam-stop array (BSA). The scatter measured by the BSA includes not only the scattered radiation within the object (object scatter) but also scatter from external sources. The components of the external scatter source include the X-ray tube, detector, collimator, x-ray filter, and the BSA itself. Excluding the background scattered radiation allows the approach to be applied to different scanner geometries through simple parameter adjustments, without prior knowledge of the scanned object. In this study, a method using the BSA to differentiate scatter generated in the phantom (object scatter) from the external background was used. Furthermore, this method was applied to the BSA algorithm to correct the object scatter. In order to confirm the background scattered radiation, we obtained the scatter profiles and scatter fraction (SF) profiles in the directions perpendicular to the chest wall edge (CWE) with and without scattering material. The scatter profiles with and without the scattering material were similar in the region between 127 mm and 228 mm from the chest wall. This result indicated that the scatter measured by the BSA included background scatter. Moreover, the BSA algorithm with the proposed method could correct the object scatter, because the total radiation profiles after object scatter correction corresponded to the original image in the region between 127 mm and 228 mm from the chest wall. As a result, the BSA method to measure object scatter could be used to remove background scatter. This method can be applied to different scanner geometries after background scatter correction. In conclusion, the BSA algorithm with the proposed method is effective at correcting object scatter.
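A minimal sketch of the background-subtraction and scatter-fraction arithmetic described above; the count values are invented for the example.

    def object_scatter(scatter_with_object, scatter_without_object):
        # The signal under a beam-stop disc with the phantom in place contains both
        # object scatter and external (tube/detector/collimator/filter/BSA) scatter;
        # the external part, measured without scattering material, is removed.
        return scatter_with_object - scatter_without_object

    def scatter_fraction(scatter, total):
        # SF = S / (S + P), with total = S + P the signal in an unblocked region.
        return scatter / total

    # Illustrative pixel values only.
    s_obj = object_scatter(scatter_with_object=120.0, scatter_without_object=35.0)
    print("object scatter:", s_obj)
    print("scatter fraction:", scatter_fraction(s_obj, total=900.0))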
Active and Passive 3D Vector Radiative Transfer with Preferentially-Aligned Ice Particles
NASA Astrophysics Data System (ADS)
Adams, I. S.; Munchak, S. J.; Pelissier, C.; Kuo, K. S.; Heymsfield, G. M.
2017-12-01
To support the observation of clouds and precipitation using combinations of radars and radiometers, a forward model capable of representing diverse sensing geometries for active and passive instruments is necessary for correctly interpreting and consistently combining multi-sensor measurements from ground-based, airborne, and spaceborne platforms. As such, the Atmospheric Radiative Transfer Simulator (ARTS) uses Monte Carlo integration to produce radar reflectivities and radiometric brightness temperatures for three-dimensional cloud and precipitation input fields. This radiative transfer framework is capable of efficiently sampling Gaussian antenna beams and fully accounting for multiple scattering. By relying on common ray-tracing tools, gaseous absorption models, and scattering properties, the model reproduces accurate and consistent radar and radiometer observables. While such a framework is an important component for simulating remote sensing observables, the key driver for self-consistent radiative transfer calculations of clouds and precipitation is scattering data. Research over the past decade has demonstrated that spheroidal models of frozen hydrometeors cannot accurately reproduce all necessary scattering properties at all desired frequencies. The discrete dipole approximation offers flexibility in calculating scattering for arbitrary particle geometries, but at great computational expense. When considering scattering for certain pristine ice particles, the Extended Boundary Condition Method, or T-Matrix, is much more computationally efficient; however, convergence for T-Matrix calculations fails at large size parameters and high aspect ratios. To address these deficiencies, we implemented the Invariant Imbedding T-Matrix Method (IITM). A brief overview of ARTS and IITM will be given, including details for handling preferentially-aligned hydrometeors. Examples highlighting the performance of the model for simulating space-based and airborne measurements will be offered, and some case studies showing the response to particle type and orientation will be presented. Simulations of polarized radar (Z, LDR, ZDR) and radiometer (Stokes I and Q) quantities will be used to demonstrate the capabilities of the model.
Iterative atmospheric correction scheme and the polarization color of alpine snow
NASA Astrophysics Data System (ADS)
Ottaviani, Matteo; Cairns, Brian; Ferrare, Rich; Rogers, Raymond
2012-07-01
Characterization of the Earth's surface is crucial to remote sensing, both to map geomorphological features and because subtracting this signal is essential during retrievals of the atmospheric constituents located between the surface and the sensor. Current operational algorithms model the surface total reflectance through a weighted linear combination of a few geometry-dependent kernels, each devised to describe a particular scattering mechanism. The information content of these measurements is overwhelmed by that of instruments with polarization capabilities: proposed models in this case are based on the Fresnel reflectance of an isotropic distribution of facets. Because of its remarkable lack of spectral contrast, the polarized reflectance of land surfaces in the shortwave infrared spectral region, where atmospheric scattering is minimal, can be used to model the surface also at shorter wavelengths, where aerosol retrievals are attempted based on well-established scattering theories.In radiative transfer simulations, straightforward separation of the surface and atmospheric contributions is not possible without approximations because of the coupling introduced by multiple reflections. Within a general inversion framework, the problem can be eliminated by linearizing the radiative transfer calculation, and making the Jacobian (i.e., the derivative expressing the sensitivity of the reflectance with respect to model parameters) available at output. We present a general methodology based on a Gauss-Newton iterative search, which automates this procedure and eliminates de facto the need of an ad hoc atmospheric correction.In this case study we analyze the color variations in the polarized reflectance measured by the NASA Goddard Institute of Space Studies Research Scanning Polarimeter during a survey of late-season snowfields in the High Sierra. This insofar unique dataset presents challenges linked to the rugged topography associated with the alpine environment and a likely high water content due to melting. The analysis benefits from ancillary information provided by the NASA Langley High Spectral Resolution Lidar deployed on the same aircraft.The results obtained from the iterative scheme are contrasted against the surface polarized reflectance obtained ignoring multiple reflections, via the simplistic subtraction of the atmospheric scattering contribution. Finally, the retrieved reflectance is modeled after the scattering properties of a dense collection of ice crystals at the surface. Confirming that the polarized reflectance of snow is spectrally flat would allow to extend the techniques already in use for polarimetric retrievals of aerosol properties over land to the large portion of snow-covered pixels plaguing orbital and suborbital observations.
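A generic sketch of the Gauss-Newton update used by such an inversion, with a toy two-parameter forward model standing in for the linearized radiative transfer calculation and its Jacobian; nothing here is specific to the Research Scanning Polarimeter retrieval code.

    import numpy as np

    def gauss_newton(forward, jacobian, y, x0, n_iter=15):
        # Generic Gauss-Newton iteration:
        #   x_{k+1} = x_k + (J^T J)^{-1} J^T (y - F(x_k))
        # where F is the forward model (here a toy function standing in for the
        # radiative transfer calculation) and J its Jacobian.
        x = np.array(x0, dtype=float)
        for _ in range(n_iter):
            r = y - forward(x)
            J = jacobian(x)
            x = x + np.linalg.solve(J.T @ J, J.T @ r)
        return x

    # Toy two-parameter forward model and its analytic Jacobian.
    t = np.linspace(0.0, 1.0, 20)
    forward = lambda x: x[0] * np.exp(-x[1] * t)
    jacobian = lambda x: np.column_stack([np.exp(-x[1] * t),
                                          -x[0] * t * np.exp(-x[1] * t)])
    y = forward(np.array([2.0, 3.0]))
    print(gauss_newton(forward, jacobian, y, x0=[1.5, 2.5]))   # converges to ~[2, 3]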
Application of modern radiative transfer tools to model laboratory quartz emissivity
NASA Astrophysics Data System (ADS)
Pitman, Karly M.; Wolff, Michael J.; Clayton, Geoffrey C.
2005-08-01
Planetary remote sensing of regolith surfaces requires use of theoretical models for interpretation of constituent grain physical properties. In this work, we review and critically evaluate past efforts to strengthen numerical radiative transfer (RT) models with comparison to a trusted set of nadir incidence laboratory quartz emissivity spectra. By first establishing a baseline statistical metric to rate successful model-laboratory emissivity spectral fits, we assess the efficacy of hybrid computational solutions (Mie theory + numerically exact RT algorithm) to calculate theoretical emissivity values for micron-sized α-quartz particles in the thermal infrared (2000-200 cm-1) wave number range. We show that Mie theory, a widely used but poor approximation to irregular grain shape, fails to produce the single scattering albedo and asymmetry parameter needed to arrive at the desired laboratory emissivity values. Through simple numerical experiments, we show that corrections to single scattering albedo and asymmetry parameter values generated via Mie theory become more necessary with increasing grain size. We directly compare the performance of diffraction subtraction and static structure factor corrections to the single scattering albedo, asymmetry parameter, and emissivity for dense packing of grains. Through these sensitivity studies, we provide evidence that, assuming RT methods work well given sufficiently well-quantified inputs, assumptions about the scatterer itself constitute the most crucial aspect of modeling emissivity values.
NASA Astrophysics Data System (ADS)
Ye, Huping; Li, Junsheng; Zhu, Jianhua; Shen, Qian; Li, Tongji; Zhang, Fangfang; Yue, Huanyin; Zhang, Bing; Liao, Xiaohan
2017-10-01
The absorption coefficient of water is an important bio-optical parameter for water optics and water color remote sensing. However, scattering correction is essential to obtain accurate in situ absorption coefficient values using the nine-wavelength absorption and attenuation meter AC9. Establishing the correction in Case 2 water always fails when the correction assumes zero absorption in the near-infrared (NIR) region, as it underestimates the absorption coefficient in the red region, which affects processes such as semi-analytical remote sensing inversion. In this study, the scattering contribution was evaluated by an exponential fitting approach using AC9 measurements at seven wavelengths (412, 440, 488, 510, 532, 555, and 715 nm), and the corresponding scattering correction was applied. The correction was applied to representative in situ data of moderately turbid coastal water, highly turbid coastal water, eutrophic inland water, and turbid inland water. The results suggest that the absorption levels in the red and NIR regions are significantly higher than those obtained using standard scattering error correction procedures. Knowledge of the deviation between this method and the commonly used scattering correction methods will facilitate the evaluation of its effect on satellite remote sensing of water constituents and on general optical research using different scattering-correction methods.
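For context, a minimal sketch of the conventional proportional scattering correction that forces the absorption at the NIR reference wavelength to zero (the assumption this study moves away from); the measured values below are invented for the example.

    import numpy as np

    def proportional_scatter_correction(a_m, c_m, wavelengths, ref_wl=715.0):
        # Conventional AC9 correction: assume the measured absorption at the NIR
        # reference wavelength is entirely scattering error and remove a term
        # proportional to the scattering coefficient b = c - a at each wavelength.
        i = int(np.argmin(np.abs(np.asarray(wavelengths) - ref_wl)))
        b = c_m - a_m
        return a_m - a_m[i] * b / b[i]

    wl = np.array([412.0, 440.0, 488.0, 510.0, 532.0, 555.0, 715.0])
    a_m = np.array([0.95, 0.80, 0.55, 0.48, 0.42, 0.38, 0.12])   # measured absorption (1/m)
    c_m = np.array([2.40, 2.20, 1.90, 1.80, 1.70, 1.65, 1.20])   # measured attenuation (1/m)
    print(proportional_scatter_correction(a_m, c_m, wl))          # forces a(715) to zero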
Model-Based Normalization of a Fractional-Crystal Collimator for Small-Animal PET Imaging
Li, Yusheng; Matej, Samuel; Karp, Joel S.; Metzler, Scott D.
2017-01-01
Previously, we proposed to use a coincidence collimator to achieve fractional-crystal resolution in PET imaging. We have designed and fabricated a collimator prototype for a small-animal PET scanner, A-PET. To compensate for imperfections in the fabricated collimator prototype, collimator normalization, as well as scanner normalization, is required to reconstruct quantitative and artifact-free images. In this study, we develop a normalization method for the collimator prototype based on the A-PET normalization using a uniform cylinder phantom. We performed data acquisition without the collimator for scanner normalization first, and then with the collimator from eight different rotation views for collimator normalization. After a reconstruction without correction, we extracted the cylinder parameters from which we generated expected emission sinograms. Single scatter simulation was used to generate the scattered sinograms. We used the least-squares method to generate the normalization coefficient for each LOR based on measured, expected and scattered sinograms. The scanner and collimator normalization coefficients were factorized by performing two normalizations separately. The normalization methods were also verified using experimental data acquired from A-PET with and without the collimator. In summary, we developed a model-based collimator normalization that can significantly reduce variance and produce collimator normalization with adequate statistical quality within feasible scan time. PMID:29270539
Atmospheric correction for remote sensing image based on multi-spectral information
NASA Astrophysics Data System (ADS)
Wang, Yu; He, Hongyan; Tan, Wei; Qi, Wenwen
2018-03-01
The light collected by remote sensors in space must transit through the Earth's atmosphere. All satellite images are affected at some level by lightwave scattering and absorption from aerosols, water vapor and particulates in the atmosphere. To generate high-quality scientific data, atmospheric correction is required to remove atmospheric effects and to convert digital number (DN) values to surface reflectance (SR). Every optical satellite in orbit observes the Earth through the same atmosphere, but each satellite image is affected differently because atmospheric conditions are constantly changing. The detailed physics-based radiative transfer model 6SV requires a large amount of key ancillary information about the atmospheric conditions at the acquisition time. This paper investigates the simultaneous retrieval of atmospheric radiation parameters from the multi-spectral information itself, in order to improve the estimates of surface reflectance through physics-based atmospheric correction. Ancillary information on the aerosol optical depth (AOD) and total water vapor (TWV), derived from the multi-spectral information based on specific spectral properties, was used as input to the 6SV model. The experiments were carried out on Sentinel-2 images; the Multispectral Instrument (MSI) on Sentinel-2 records 13 spectral bands covering a wide range of wavelengths from 440 nm up to 2200 nm. The results suggest that per-pixel atmospheric correction through the 6SV model, integrating AOD and TWV derived from multi-spectral information, is better suited for accurate analysis of satellite images and quantitative remote sensing applications.
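The per-pixel correction itself can be expressed compactly once the radiative transfer model has been run: 6SV-style output coefficients turn at-sensor radiance into surface reflectance. A minimal sketch is given below; the coefficient values are placeholders, not real 6SV output for any Sentinel-2 scene.

```python
# Hedged sketch: apply 6SV-style correction coefficients (xa, xb, xc), driven by
# the per-pixel AOD and water vapor retrievals, to at-sensor radiance.
import numpy as np

def surface_reflectance(l_toa, xa, xb, xc):
    """Standard 6S inversion: y = xa*L - xb; SR = y / (1 + xc*y)."""
    y = xa * l_toa - xb
    return y / (1.0 + xc * y)

# illustrative per-pixel application for one band (placeholder values)
l_toa = np.array([[80.0, 95.0], [102.0, 88.0]])   # at-sensor radiance, W m^-2 sr^-1 um^-1
sr = surface_reflectance(l_toa, xa=0.0032, xb=0.095, xc=0.185)
```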
Generalized model screening potentials for Fermi-Dirac plasmas
NASA Astrophysics Data System (ADS)
Akbari-Moghanjoughi, M.
2016-04-01
In this paper, some properties of relativistically degenerate quantum plasmas, such as static ion screening, the structure factor, and the Thomson scattering cross-section, are studied in the framework of linearized quantum hydrodynamic theory with the newly proposed kinetic γ-correction to the Bohm term in the low-frequency limit. It is found that the correction has a significant effect on the properties of quantum plasmas in all density regimes, ranging from solid density up to that of white dwarf stars. It is also found that the Shukla-Eliasson attractive force exists up to a few times the density of metals, and the ionic correlations are apparent in the radial distribution function signature. Simplified statically screened attractive and repulsive potentials are presented for zero-temperature Fermi-Dirac plasmas, valid for a wide range of quantum plasma number-density and atomic number values. Moreover, it is observed that crystallization of white dwarfs beyond a critical core number-density persists with this new kinetic correction, but it is shifted to a much higher number-density value of n0 ≃ 1.94 × 10^37 cm^-3 (1.77 × 10^10 g cm^-3), which is nearly four orders of magnitude less than the nuclear density. It is found that maximal Thomson scattering with the γ-corrected structure factor is a remarkable property of white dwarf stars. However, with the new γ-correction, the maximal scattering shifts to the spectral region between hard X-rays and low-energy gamma rays. White dwarfs composed of higher atomic-number ions are observed to Thomson-scatter maximally at slightly longer wavelengths, i.e., they maximally scatter slightly lower-energy photons in the presence of the correction.
NASA Astrophysics Data System (ADS)
Perkins, Timothy; Adler-Golden, Steven; Matthew, Michael; Berk, Alexander; Anderson, Gail; Gardner, James; Felde, Gerald
2005-10-01
Atmospheric Correction Algorithms (ACAs) are used in applications of remotely sensed Hyperspectral and Multispectral Imagery (HSI/MSI) to correct for atmospheric effects on measurements acquired by air and space-borne systems. The Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) algorithm is a forward-model based ACA created for HSI and MSI instruments which operate in the visible through shortwave infrared (Vis-SWIR) spectral regime. Designed as a general-purpose, physics-based code for inverting at-sensor radiance measurements into surface reflectance, FLAASH provides a collection of spectral analysis and atmospheric retrieval methods including: a per-pixel vertical water vapor column estimate, determination of aerosol optical depth, estimation of scattering for compensation of adjacency effects, detection/characterization of clouds, and smoothing of spectral structure resulting from an imperfect atmospheric correction. To further improve the accuracy of the atmospheric correction process, FLAASH will also detect and compensate for sensor-introduced artifacts such as optical smile and wavelength mis-calibration. FLAASH relies on the MODTRAN™ radiative transfer (RT) code as the physical basis behind its mathematical formulation, and has been developed in parallel with upgrades to MODTRAN in order to take advantage of the latest improvements in speed and accuracy. For example, the rapid, high fidelity multiple scattering (MS) option available in MODTRAN4 can greatly improve the accuracy of atmospheric retrievals over the 2-stream approximation. In this paper, advanced features available in FLAASH are described, including the principles and methods used to derive atmospheric parameters from HSI and MSI data. Results are presented from processing of Hyperion, AVIRIS, and LANDSAT data.
[Development of a Striatal and Skull Phantom for Quantitative 123I-FP-CIT SPECT].
Ishiguro, Masanobu; Uno, Masaki; Miyazaki, Takuma; Kataoka, Yumi; Toyama, Hiroshi; Ichihara, Takashi
123I-labelled N-(3-fluoropropyl)-2β-carbomethoxy-3β-(4-iodophenyl) nortropane (123I-FP-CIT) single photon emission computed tomography (SPECT) images are used for the differential diagnosis of disorders such as Parkinson's disease (PD). The specific binding ratio (SBR) is affected by scattering and attenuation in SPECT imaging, and skull density changes with gender and age. It is therefore necessary to clarify and correct the influence of the skull using a phantom that simulates it. The purpose of this study was to develop phantoms that allow evaluation of scattering and attenuation correction. Skull phantoms were prepared based on measurements of the average computed tomography (CT) value and average skull thickness of 12 males and 16 females. 123I-FP-CIT SPECT imaging of a striatal phantom was performed with these skull phantoms, reproducing normal and PD accumulation patterns. SPECT images were reconstructed with scattering and attenuation correction. SBR with partial-volume-effect correction (SBRact) and conventional SBR (SBRBolt) were measured and compared. The striatal and skull phantoms with 123I-FP-CIT were able to reproduce both normal accumulation and the PD disease state, and further reproduced the influence of skull density on SPECT imaging. The error rate relative to the true SBR was much smaller for SBRact than for SBRBolt. The effect of changing skull density on SBR could be corrected by scattering and attenuation correction in 123I-FP-CIT SPECT imaging. The combination of the triple energy window method and CT-based attenuation correction would be the best correction method for SBRact.
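A minimal sketch of the triple energy window (TEW) scatter estimate and a generic specific binding ratio is shown below; window widths, counts, and VOI means are illustrative, and the study's exact SBRact and SBRBolt definitions are not reproduced.

```python
# Hedged sketch: TEW scatter estimate in the photopeak window, followed by a
# generic SBR computed from striatal and reference VOI means.
def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_main):
    """Scatter counts in the main window estimated from two narrow sub-windows."""
    return (c_lower / w_lower + c_upper / w_upper) * w_main / 2.0

def specific_binding_ratio(striatal_mean, reference_mean):
    """Generic SBR: specific (striatal minus reference) counts over reference counts."""
    return (striatal_mean - reference_mean) / reference_mean

c_main, c_low, c_up = 1.2e5, 9.0e3, 4.0e3                                  # counts in main / sub-windows (made up)
scatter = tew_scatter(c_low, c_up, w_lower=7.0, w_upper=7.0, w_main=38.0)  # keV widths (assumed)
primary = c_main - scatter                                                 # scatter-corrected photopeak counts
sbr = specific_binding_ratio(striatal_mean=45.0, reference_mean=12.0)
```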
SU-E-I-08: Investigation of Deconvolution Methods for Blocker-Based CBCT Scatter Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, C; Jin, M; Ouyang, L
2015-06-15
Purpose: To investigate whether deconvolution methods can improve the scatter estimation under different blurring and noise conditions for blocker-based scatter correction methods for cone-beam X-ray computed tomography (CBCT). Methods: An "ideal" projection image with scatter was first simulated for blocker-based CBCT data acquisition by assuming no blurring effect and no noise. The ideal image was then convolved with long-tail point spread functions (PSF) of different widths to mimic the blurring effect from the finite focal spot and detector response. Different levels of noise were also added. Three deconvolution methods, (1) inverse filtering, (2) Wiener, and (3) Richardson-Lucy, were used to recover the scatter signal in the blocked region. The root mean square error (RMSE) of the estimated scatter serves as a quantitative measure of the performance of the different methods under different blurring and noise conditions. Results: Due to the blurring effect, the scatter signal in the blocked region is contaminated by the primary signal from the unblocked region. The direct use of the signal in the blocked region to estimate scatter (the "direct method") leads to large RMSE values, which increase with increasing PSF width and increasing noise. Inverse filtering is very sensitive to noise and practically useless. The Wiener and Richardson-Lucy deconvolution methods significantly improve scatter estimation compared to the direct method. For a typical medium PSF and medium noise condition, both methods (~20 RMSE) achieve a 4-fold improvement over the direct method (~80 RMSE). The Wiener method deals better with large noise and Richardson-Lucy works better for wide PSFs. Conclusion: We investigated several deconvolution methods to recover the scatter signal in the blocked region for blocker-based scatter correction in CBCT. Our simulation results demonstrate that Wiener and Richardson-Lucy deconvolution can significantly improve the scatter estimation compared to the direct method.
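For concreteness, a minimal Richardson-Lucy deconvolution of a blocker-strip projection blurred by a long-tail PSF is sketched below; the PSF shape, noise level, and image sizes are illustrative, not the simulation settings of this study.

```python
# Hedged sketch: Richardson-Lucy deconvolution to recover the signal behind a
# blocker strip from a projection blurred by a long-tail PSF.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, num_iter=30):
    """Iteratively estimate the unblurred image from a blurred, noisy one."""
    estimate = np.full_like(blurred, blurred.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(num_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

x = np.arange(-32, 33)
xx, yy = np.meshgrid(x, x)
psf = 1.0 / (1.0 + (xx**2 + yy**2) / 5.0**2)        # synthetic long-tail kernel
psf /= psf.sum()

true_proj = np.ones((128, 128))
true_proj[:, 60:68] = 0.1                           # blocked strip (scatter-only signal)
measured = fftconvolve(true_proj, psf, mode="same")
measured = np.random.poisson(measured * 1e4) / 1e4  # Poisson-like noise
recovered = richardson_lucy(measured, psf, num_iter=30)
```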
A curvature-corrected Kirchhoff formulation for radar sea-return from the near vertical
NASA Technical Reports Server (NTRS)
Jackson, F. C.
1974-01-01
A new theoretical treatment of the problem of electromagnetic wave scattering from a randomly rough surface is given. A high frequency correction to the Kirchhoff approximation is derived from a field integral equation for a perfectly conducting surface. The correction, which accounts for the effect of local surface curvature, is seen to be identical to an asymptotic form found by Fock (1945) for diffraction by a paraboloid. The corrected boundary values are substituted into the far-field Stratton-Chu integral, and average backscattered powers are computed assuming the scattering surface is a homogeneous Gaussian process. Preliminary calculations for a K^-4 ocean wave spectrum indicate a reasonable modelling of polarization effects near the vertical, theta 45 deg. Correspondence with the results of small perturbation theory is shown.
Atmospheric Correction Algorithm for Hyperspectral Imagery
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. J. Pollina
1999-09-01
In December 1997, the US Department of Energy (DOE) established a Center of Excellence (Hyperspectral-Multispectral Algorithm Research Center, HyMARC) for promoting the research and development of algorithms to exploit spectral imagery. This center is located at the DOE Remote Sensing Laboratory in Las Vegas, Nevada, and is operated for the DOE by Bechtel Nevada. This paper presents the results to date of a research project begun at the center during 1998 to investigate the correction of hyperspectral data for atmospheric aerosols. Results of a project conducted by the Rochester Institute of Technology to define, implement, and test procedures for absolute calibration and correction of hyperspectral data to absolute units of high spectral resolution imagery will be presented. Hybrid techniques for atmospheric correction using image or spectral scene data coupled through radiative propagation models will be specifically addressed. Results of this effort to analyze HYDICE sensor data will be included. Preliminary results based on studying the performance of standard routines, such as Atmospheric Pre-corrected Differential Absorption and Nonlinear Least Squares Spectral Fit, in retrieving reflectance spectra show overall reflectance retrieval errors of approximately one to two reflectance units in the 0.4- to 2.5-micron-wavelength region (outside of the absorption features). These results are based on HYDICE sensor data collected from the Southern Great Plains Atmospheric Radiation Measurement site during overflights conducted in July of 1997. Results of an upgrade made in the model-based atmospheric correction techniques, which take advantage of updates made to the moderate resolution atmospheric transmittance model (MODTRAN 4.0) software, will also be presented. Data will be shown to demonstrate how the reflectance retrieval in the shorter wavelengths of the blue-green region will be improved because of enhanced modeling of multiple scattering effects.
A Phenomenological Determination of the Pion-Nucleon Scattering Lengths from Pionic Hydrogen
NASA Astrophysics Data System (ADS)
Ericson, T. E. O.; Loiseau, B.; Wycech, S.
A model-independent expression for the electromagnetic corrections to a phenomenological hadronic pion-nucleon (πN) scattering length $a^h$, extracted from pionic hydrogen, is obtained. In a non-relativistic approach and using an extended charge distribution, these corrections are derived up to terms of order $\alpha^2 \log\alpha$ in the limit of a short-range hadronic interaction. We infer $a^h_{\pi^- p} = 0.0870(5)\, m_\pi^{-1}$, which through the GMO relation gives for the πNN coupling $g^2_{\pi^\pm pn}/(4\pi) = 14.04(17)$.
NASA Astrophysics Data System (ADS)
Touch, M.; Clark, D. P.; Barber, W.; Badea, C. T.
2016-04-01
Spectral CT using a photon-counting x-ray detector (PCXD) can potentially increase the accuracy of measuring tissue composition. However, PCXD spectral measurements suffer from distortion due to charge sharing, pulse pileup, and K-escape energy loss. This study proposes two novel artificial neural network (ANN)-based algorithms: one to model and compensate for the distortion, and another to correct for the distortion directly. The ANN-based distortion model was obtained by training on a set of projections from a calibration scan to learn the distortion. The ANN distortion model was then applied in the forward statistical model to compensate for distortion during projection decomposition. An ANN was also used to learn to correct distortions directly in the projections. The resulting corrected projections were used for reconstructing the image, denoising via joint bilateral filtration, and decomposition into three material basis functions: Compton scattering, the photoelectric effect, and iodine. The ANN-based distortion model proved to be more robust to noise and worked better than an imperfect parametric distortion model. In the presence of noise, the mean relative errors in iodine concentration estimation were 11.82% (ANN distortion model) and 16.72% (parametric model). With distortion correction, the mean relative error in iodine concentration estimation was improved by 50% over direct decomposition from distorted data. With our joint bilateral filtration, the resulting material image quality and iodine detectability, as defined by the contrast-to-noise ratio, were greatly enhanced, allowing iodine concentrations as low as 2 mg/ml to be detected. Future work will be dedicated to experimental evaluation of our ANN-based methods using 3D-printed phantoms.
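A minimal sketch of the direct-correction idea, using a small neural network trained on paired distorted and ideal calibration spectra, is shown below; the network, data shapes, and toy distortion are illustrative assumptions, not the authors' architecture or training data.

```python
# Hedged sketch: learn a distorted -> ideal mapping for per-pixel energy spectra.
import numpy as np
from sklearn.neural_network import MLPRegressor

n_bins = 64                                        # detector energy bins (assumed)
rng = np.random.default_rng(0)

ideal = rng.uniform(0.0, 1.0, size=(5000, n_bins))                  # placeholder calibration spectra
blur = np.eye(n_bins) + 0.05 * rng.standard_normal((n_bins, n_bins))
distorted = ideal @ blur + 0.01 * rng.standard_normal(ideal.shape)  # toy spectral distortion

net = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500)
net.fit(distorted, ideal)                          # train the correction network

corrected = net.predict(distorted[:10])            # apply to new distorted projections
```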
McCaffrey, J P; Mainegra-Hing, E; Kawrakow, I; Shortt, K R; Rogers, D W O
2004-06-21
The basic equation for establishing a 60Co air-kerma standard based on a cavity ionization chamber includes a wall correction term that corrects for the attenuation and scatter of photons in the chamber wall. For over a decade, the validity of the wall correction terms determined by extrapolation methods (K_w K_cep) has been strongly challenged by Monte Carlo (MC) calculation methods (K_wall). Using the linear extrapolation method with experimental data, K_w K_cep was determined in this study for three different styles of primary-standard-grade graphite ionization chamber: cylindrical, spherical and plane-parallel. For measurements taken with the same 60Co source, the air-kerma rates for these three chambers, determined using extrapolated K_w K_cep values, differed by up to 2%. The MC code 'EGSnrc' was used to calculate the values of K_wall for these three chambers. Use of the calculated K_wall values gave air-kerma rates that agreed within 0.3%. The accuracy of this code was affirmed by its reliability in modelling the complex structure of the response curve obtained by rotation of the non-rotationally symmetric plane-parallel chamber. These results demonstrate that the linear extrapolation technique leads to errors in the determination of air kerma.
Dispersive approach to two-photon exchange in elastic electron-proton scattering
Blunden, P. G.; Melnitchouk, W.
2017-06-14
We examine the two-photon exchange corrections to elastic electron-nucleon scattering within a dispersive approach, including contributions from both nucleon and Δ intermediate states. The dispersive analysis avoids off-shell uncertainties inherent in traditional approaches based on direct evaluation of loop diagrams, and guarantees the correct unitary behavior in the high energy limit. Using empirical information on the electromagnetic nucleon elastic and NΔ transition form factors, we compute the two-photon exchange corrections both algebraically and numerically. Finally, the results are compared with recent measurements of e⁺p to e⁻p cross-section ratios from the CLAS, VEPP-3 and OLYMPUS experiments.
Uribe-Patarroyo, Néstor; Bouma, Brett E.
2015-01-01
We present a new technique for the correction of nonuniform rotation distortion in catheter-based optical coherence tomography (OCT), based on the statistics of speckle between A-lines using intensity-based dynamic light scattering. This technique does not rely on tissue features and can be performed on single frames of data, thereby enabling real-time image correction. We demonstrate its suitability in a gastrointestinal balloon-catheter OCT system, determining the actual rotational speed with high temporal resolution, and present corrected cross-sectional and en face views showing significant enhancement of image quality. PMID:26625040
Extracting the σ-term from low-energy pion-nucleon scattering
NASA Astrophysics Data System (ADS)
Ruiz de Elvira, Jacobo; Hoferichter, Martin; Kubis, Bastian; Meißner, Ulf-G.
2018-02-01
We present an extraction of the pion-nucleon (πN) scattering lengths from low-energy πN scattering, by fitting a representation based on Roy-Steiner equations to the low-energy database. We show that the resulting values confirm the scattering-length determination from pionic atoms, and discuss in detail the stability of the fit results with regard to electromagnetic corrections and experimental normalization uncertainties. Our results provide further evidence for a large πN σ-term, $\sigma_{\pi N} = 58(5)$ MeV, in agreement with, albeit less precise than, the determination from pionic atoms.
Künzel, R; Herdade, S B; Costa, P R; Terini, R A; Levenhagen, R S
2006-04-21
In this study, scattered x-ray distributions were produced by irradiating a tissue equivalent phantom under clinical mammographic conditions by using Mo/Mo, Mo/Rh and W/Rh anode/filter combinations, for 25 and 30 kV tube voltages. Energy spectra of the scattered x-rays have been measured with a Cd(0.9)Zn(0.1)Te (CZT) detector for scattering angles between 30 degrees and 165 degrees . Measurement and correction processes have been evaluated through the comparison between the values of the half-value layer (HVL) and air kerma calculated from the corrected spectra and measured with an ionization chamber in a nonclinical x-ray system with a W/Mo anode/filter combination. The shape of the corrected x-ray spectra measured in the nonclinical system was also compared with those calculated using semi-empirical models published in the literature. Scattered x-ray spectra measured in the clinical x-ray system have been characterized through the calculation of HVL and mean photon energy. Values of the air kerma, ambient dose equivalent and effective dose have been evaluated through the corrected x-ray spectra. Mean conversion coefficients relating the air kerma to the ambient dose equivalent and to the effective dose from the scattered beams for Mo/Mo, Mo/Rh and W/Rh anode/filter combinations were also evaluated. Results show that for the scattered radiation beams the ambient dose equivalent provides an overestimate of the effective dose by a factor of about 5 in the mammography energy range. These results can be used in the control of the dose limits around a clinical unit and in the calculation of more realistic protective shielding barriers in mammography.
Evaluation of a scattering correction method for high energy tomography
NASA Astrophysics Data System (ADS)
Tisseur, David; Bhatia, Navnina; Estre, Nicolas; Berge, Léonie; Eck, Daniel; Payan, Emmanuel
2018-01-01
One of the main drawbacks of Cone Beam Computed Tomography (CBCT) is the contribution of photons scattered by the object and the detector. Scattered photons are deflected from their original path after their interaction with the object. This additional contribution of scattered photons results in increased measured intensities, since the scattered intensity simply adds to the transmitted intensity. The effect is seen as an overestimation of the measured intensity, and thus an underestimation of absorption, which produces artifacts such as cupping, shading and streaks in the reconstructed images. Moreover, the scattered radiation biases quantitative tomographic reconstruction (for example, atomic number and volumic mass measurement with the dual-energy technique). The effect can be significant and difficult to manage in the MeV energy range with large objects, due to the higher Scatter to Primary Ratio (SPR). Additionally, incident high-energy photons scattered by the Compton effect are more forward directed and hence more likely to reach the detector. For the MeV energy range, the contribution of photons produced by pair production and the Bremsstrahlung process also becomes important. We propose an evaluation of a scattering correction technique based on the method named Scatter Kernel Superposition (SKS). The algorithm uses a continuously thickness-adapted kernels method. The analytical parameterizations of the scatter kernels are derived in terms of material thickness, to form continuously thickness-adapted kernel maps used to correct the projections. This approach has proved to be efficient in producing better sampling of the kernels with respect to the object thickness. The technique is applicable over a wide range of imaging conditions and, since no extra hardware is required, it is particularly attractive in cases where experimental complexities must be avoided. This approach has been previously tested successfully in the energy range of 100 keV - 6 MeV. In this paper, the kernels are simulated using MCNP in order to take into account both photon and electron processes in the scattered-radiation contribution. We present scatter correction results on a large object scanned with a 9 MeV linear accelerator.
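A minimal sketch of the scatter kernel superposition idea with thickness-adapted kernels is given below; the Gaussian kernel shape and the thickness-to-kernel parameterization are illustrative placeholders for the MCNP-derived kernels used in the paper.

```python
# Hedged sketch: split the projection into thickness groups, convolve each group
# with its own scatter kernel, and sum the contributions (SKS-style estimate).
import numpy as np
from scipy.signal import fftconvolve

def scatter_kernel(amplitude, sigma, size=64):
    x = np.arange(size) - size // 2
    xx, yy = np.meshgrid(x, x)
    return amplitude * np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))

def sks_scatter_estimate(primary, thickness, thickness_edges):
    """Sum of per-thickness-group convolutions of the primary projection."""
    scatter = np.zeros_like(primary)
    for t_lo, t_hi in zip(thickness_edges[:-1], thickness_edges[1:]):
        mask = (thickness >= t_lo) & (thickness < t_hi)
        t_mid = 0.5 * (t_lo + t_hi)
        # hypothetical parameterization: amplitude and width grow with thickness
        kernel = scatter_kernel(amplitude=0.02 * t_mid, sigma=5.0 + 0.5 * t_mid)
        scatter += fftconvolve(primary * mask, kernel, mode="same")
    return scatter

# usage: corrected = measured - sks_scatter_estimate(measured, thickness_map, edges)
```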
NASA Astrophysics Data System (ADS)
Joshi, Aditya; Lindsey, Brooks D.; Dayton, Paul A.; Pinton, Gianmarco; Muller, Marie
2017-05-01
Ultrasound contrast agents (UCA), such as microbubbles, enhance the scattering properties of blood, which is otherwise hypoechoic. The multiple scattering interactions of the acoustic field with UCA are poorly understood due to the complexity of the multiple scattering theories and the nonlinear microbubble response. The majority of bubble models describe the behavior of UCA as single, isolated microbubbles suspended in an infinite medium. Multiple scattering models such as the independent scattering approximation can approximate phase velocity and attenuation for low scatterer volume fractions. However, all current models and simulation approaches only describe multiple scattering and nonlinear bubble dynamics separately. Here we present an approach that combines two existing models: (1) a full-wave model that describes nonlinear propagation and scattering interactions in a heterogeneous attenuating medium and (2) a Paul-Sarkar model that describes the nonlinear interactions between an acoustic field and microbubbles. These two models were solved numerically and combined with an iterative approach. The convergence of this combined model was explored in silico for 0.5 × 10^6 microbubbles ml^-1, and 1% and 2% bubble concentrations by volume. The backscattering predicted by our modeling approach was verified experimentally with water tank measurements performed with a 128-element linear array transducer. An excellent agreement in terms of the fundamental and harmonic acoustic fields is shown. Additionally, our model correctly predicts the phase velocity and attenuation measured using through transmission and predicted by the independent scattering approximation.
D'estanque, Emmanuel; Hedon, Christophe; Lattuca, Benoît; Bourdon, Aurélie; Benkiran, Meriem; Verd, Aurélie; Roubille, François; Mariano-Goulart, Denis
2017-08-01
Dual-isotope 201Tl/123I-MIBG SPECT can assess trigger zones (dysfunctions in the autonomic nervous system located in areas of viable myocardium) that are substrates for ventricular arrhythmias after STEMI. This study evaluated the necessity of delayed acquisition and scatter correction for dual-isotope 201Tl/123I-MIBG SPECT studies with a CZT camera to identify trigger zones after revascularization in patients with STEMI in routine clinical settings. Sixty-nine patients were prospectively enrolled after revascularization to undergo 201Tl/123I-MIBG SPECT using a CZT camera (Discovery NM 530c, GE). The first acquisition was a single thallium study (before MIBG administration); the second and the third were early and late dual-isotope studies. We compared the scatter-uncorrected and scatter-corrected (TEW method) thallium studies with the results of magnetic resonance imaging or transthoracic echography (reference standard) for the diagnosis of myocardial necrosis. Summed rest scores (SRS) were significantly higher in the delayed MIBG studies than in the early MIBG studies. SRS and necrosis surface were significantly higher in the delayed thallium studies with scatter correction than without scatter correction, leading to fewer trigger zone diagnoses for the scatter-corrected studies. Compared with the scatter-uncorrected studies, the late thallium scatter-corrected studies provided the best diagnostic values for myocardial necrosis assessment. Delayed acquisitions and scatter-corrected dual-isotope 201Tl/123I-MIBG SPECT acquisitions provide an improved evaluation of trigger zones in routine clinical settings after revascularization for STEMI.
A single-scattering correction for the seismo-acoustic parabolic equation.
Collins, Michael D
2012-04-01
An efficient single-scattering correction that does not require iterations is derived and tested for the seismo-acoustic parabolic equation. The approach is applicable to problems involving gradual range dependence in a waveguide with fluid and solid layers, including the key case of a sloping fluid-solid interface. The single-scattering correction is asymptotically equivalent to a special case of a single-scattering correction for problems that only have solid layers [Küsel et al., J. Acoust. Soc. Am. 121, 808-813 (2007)]. The single-scattering correction has a simple interpretation (conservation of interface conditions in an average sense) that facilitated its generalization to problems involving fluid layers. Promising results are obtained for problems in which the ocean bottom interface has a small slope.
Total cross sections for electron scattering by 1-propanol at impact energies in the range 40-500 eV
NASA Astrophysics Data System (ADS)
da Silva, D. G. M.; Gomes, M.; Ghosh, S.; Silva, I. F. L.; Pires, W. A. D.; Jones, D. B.; Blanco, F.; Garcia, G.; Buckman, S. J.; Brunger, M. J.; Lopes, M. C. A.
2017-11-01
Absolute total cross section (TCS) measurements for electron scattering from 1-propanol molecules are reported for impact energies from 40 to 500 eV. These measurements were obtained using a new apparatus developed at Juiz de Fora Federal University—Brazil, which is based on the measurement of the attenuation of a collimated electron beam through a gas cell containing the molecules to be studied at a given pressure. Besides these experimental measurements, we have also calculated TCS using the Independent-Atom Model with Screening Corrected Additivity Rule and Interference (IAM-SCAR+I) approach with the level of agreement between them being typically found to be very good.
Brunner, Stephen; Nett, Brian E; Tolakanahalli, Ranjini; Chen, Guang-Hong
2011-02-21
X-ray scatter is a significant problem in cone-beam computed tomography when thicker objects and larger cone angles are used, as scattered radiation can lead to reduced contrast and CT number inaccuracy. Advances have been made in x-ray computed tomography (CT) by incorporating a high quality prior image into the image reconstruction process. In this paper, we extend this idea to correct scatter-induced shading artifacts in cone-beam CT image-guided radiation therapy. Specifically, this paper presents a new scatter correction algorithm which uses a prior image with low scatter artifacts to reduce shading artifacts in cone-beam CT images acquired under conditions of high scatter. The proposed correction algorithm begins with an empirical hypothesis that the target image can be written as a weighted summation of a series of basis images that are generated by raising the raw cone-beam projection data to different powers and then reconstructing with the standard filtered backprojection algorithm. The weight for each basis image is calculated by minimizing the difference between the target image and the prior image. The performance of the scatter correction algorithm is qualitatively and quantitatively evaluated through phantom studies using a Varian 2100 EX System with an on-board imager. Results show that the proposed scatter correction algorithm using a prior image with low scatter artifacts can substantially mitigate scatter-induced shading artifacts in both full-fan and half-fan modes.
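A minimal sketch of the weight-fitting step is given below: basis images are reconstructed from powered projections and their weights are obtained by least squares against the low-scatter prior; `fbp_reconstruct` is a placeholder for any standard filtered backprojection routine, and the power set is illustrative.

```python
# Hedged sketch: fit the weights of powered-projection basis images to a prior image.
import numpy as np

def fit_basis_weights(projections, prior_image, powers, fbp_reconstruct, mask=None):
    basis = [fbp_reconstruct(projections ** p) for p in powers]       # basis images
    A = np.stack([b.ravel() for b in basis], axis=1)                  # (n_voxels, n_basis)
    y = prior_image.ravel()
    if mask is not None:                                              # restrict the fit to a region
        A, y = A[mask.ravel()], y[mask.ravel()]
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    corrected = sum(wi * bi for wi, bi in zip(w, basis))              # weighted summation
    return corrected, w

# usage: corrected, weights = fit_basis_weights(proj, prior, [0.6, 0.8, 1.0, 1.2], my_fbp)
```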
Variability of adjacency effects in sky reflectance measurements.
Groetsch, Philipp M M; Gege, Peter; Simis, Stefan G H; Eleveld, Marieke A; Peters, Steef W M
2017-09-01
Sky reflectance R_sky(λ) is used to correct in situ reflectance measurements in the remote detection of water color. We analyzed the directional and spectral variability in R_sky(λ) due to adjacency effects against an atmospheric radiance model. The analysis is based on one year of semi-continuous R_sky(λ) observations that were recorded in two azimuth directions. Adjacency effects contributed to the dependence of R_sky(λ) on season and viewing angle, predominantly in the near-infrared (NIR). For our test area, adjacency effects spectrally resembled a generic vegetation spectrum. The adjacency effect was weakly dependent on the magnitude of Rayleigh- and aerosol-scattered radiance. The reflectance differed between viewing directions by 5.4±6.3% for adjacency effects and by 21.0±19.8% for Rayleigh- and aerosol-scattered R_sky(λ) in the NIR. The conditions under which in situ water reflectance observations require dedicated correction for adjacency effects are discussed. We provide an open source implementation of our method to aid identification of such conditions.
Re-evaluation of heat flow data near Parkfield, CA: Evidence for a weak San Andreas Fault
Fulton, P.M.; Saffer, D.M.; Harris, Reid N.; Bekins, B.A.
2004-01-01
Improved interpretations of the strength of the San Andreas Fault near Parkfield, CA based on thermal data require quantification of processes causing significant scatter and uncertainty in existing heat flow data. These effects include topographic refraction, heat advection by topographically-driven groundwater flow, and uncertainty in thermal conductivity. Here, we re-evaluate the heat flow data in this area by correcting for full 3-D terrain effects. We then investigate the potential role of groundwater flow in redistributing fault-generated heat, using numerical models of coupled heat and fluid flow for a wide range of hydrologic scenarios. We find that a large degree of the scatter in the data can be accounted for by 3-D terrain effects, and that for plausible groundwater flow scenarios frictional heat generated along a strong fault is unlikely to be redistributed by topographically-driven groundwater flow in a manner consistent with the 3-D corrected data. Copyright 2004 by the American Geophysical Union.
Electroweak radiative corrections to neutrino scattering at NuTeV
NASA Astrophysics Data System (ADS)
Park, Kwangwoo; Baur, Ulrich; Wackeroth, Doreen
2007-04-01
The W boson mass extracted by the NuTeV collaboration from the ratios of neutral- and charged-current neutrino and anti-neutrino cross sections differs from direct measurements performed at LEP2 and the Fermilab Tevatron by about 3σ. Several possible sources for the observed difference have been discussed in the literature, including new physics beyond the Standard Model (SM). However, in order to pin down the cause of this discrepancy and to interpret the result as a deviation from the SM, it is important to include the complete electroweak one-loop corrections when extracting the W boson mass from neutrino scattering cross sections. We will present results of a Monte Carlo program for νN (ν̄N) scattering including the complete electroweak O(α) corrections, which will be used to study the effects of these corrections on the extracted values of the electroweak parameters. We will briefly introduce some of the newly developed computational tools for generating Feynman diagrams and corresponding analytic expressions for one-loop matrix elements.
Stereo-tomography in triangulated models
NASA Astrophysics Data System (ADS)
Yang, Kai; Shao, Wei-Dong; Xing, Feng-yuan; Xiong, Kai
2018-04-01
Stereo-tomography is a distinctive tomographic method. It is capable of estimating the scatterer position, the local dip of the scatterer and the background velocity simultaneously. Building a geologically consistent velocity model is always appealing for applied and earthquake seismologists. Differing from previous work that incorporated various regularization techniques into the cost function of stereo-tomography, we consider extending stereo-tomography to a triangulated model to be the most straightforward way to achieve this goal. In this paper, we provide all the Fréchet derivatives of the stereo-tomographic data components with respect to the model components for a slowness-squared triangulated model (or sloth model) in 2D Cartesian coordinates, based on ray perturbation theory for interfaces. A sloth model representation is sparser than a conventional B-spline model representation. A sparser model representation leads to a smaller stereo-tomographic (Fréchet) matrix, a higher-accuracy solution when solving the linear equations, a faster convergence rate and a lower requirement on the amount of data. Moreover, a quantitative representation of the interface strengthens the relationships among different model components, which makes cross regularizations among these model components, such as node coordinates, scatterer coordinates and scattering angles, more straightforward and easier to implement. The sensitivity analysis, the model resolution matrix analysis and a series of synthetic data examples demonstrate the correctness of the Fréchet derivatives, the applicability of the regularization terms and the robustness of stereo-tomography in a triangulated model. This provides a solid theoretical foundation for real applications in the future.
A quantitative reconstruction software suite for SPECT imaging
NASA Astrophysics Data System (ADS)
Namías, Mauro; Jeraj, Robert
2017-11-01
Quantitative Single Photon Emission Computed Tomography (SPECT) imaging allows for measurement of activity concentrations of a given radiotracer in vivo. Although SPECT has usually been perceived as non-quantitative by the medical community, the introduction of accurate CT-based attenuation correction and scatter correction in hybrid SPECT/CT scanners has enabled SPECT systems to be as quantitative as Positron Emission Tomography (PET) systems. We implemented a software suite to reconstruct quantitative SPECT images from hybrid or dedicated SPECT systems with a separate CT scanner. Attenuation, scatter and collimator response corrections were included in an Ordered Subset Expectation Maximization (OSEM) algorithm. A novel scatter fraction estimation technique was introduced. The SPECT/CT system was calibrated with a cylindrical phantom, and quantitative accuracy was assessed with an anthropomorphic phantom and a NEMA/IEC image quality phantom. Accurate activity measurements were achieved at the organ level. This software suite helps increase the quantitative accuracy of SPECT scanners.
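A minimal sketch of an OSEM update with an additive scatter term is given below; `forward` and `backward` stand in for the system projector and backprojector (including attenuation and collimator response), and all names are illustrative rather than the suite's actual API.

```python
# Hedged sketch: OSEM with an additive scatter estimate in the forward model.
import numpy as np

def osem(measured, scatter, subsets, forward, backward, image_shape, n_iter=5):
    image = np.ones(image_shape)
    for _ in range(n_iter):
        for sub in subsets:
            expected = forward(image, sub) + scatter[sub]      # additive scatter model
            ratio = measured[sub] / np.maximum(expected, 1e-12)
            sens = backward(np.ones_like(ratio), sub)          # subset sensitivity image
            image *= backward(ratio, sub) / np.maximum(sens, 1e-12)
    return image
```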
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, X; Zhang, Z; Xie, Y
Purpose: X-ray scatter photons result in significant image quality degradation of cone-beam CT (CBCT). Measurement-based algorithms using a beam blocker directly acquire scatter samples and achieve significant improvement in the quality of the CBCT image. Among existing algorithms, the single-scan, stationary beam blocker we proposed previously is promising due to its simplicity and practicability. Although demonstrated to be effective on a tabletop system, the blocker fails to estimate the scatter distribution on a clinical CBCT system, mainly due to gantry wobble. In addition, the uniformly distributed blocker strips in our previous design result in primary data loss in the CBCT system and lead to image artifacts due to data insufficiency. Methods: We investigate the motion behavior of the beam blocker in each projection and design an optimized non-uniform blocker strip distribution which accounts for the data insufficiency issue. An accurate scatter estimation is then achieved from the wobble modeling. The blocker wobble curve is estimated using threshold-based segmentation algorithms in each projection. In the blocker design optimization, the quality of the final image is quantified using the number of voxels with primary data loss, and the mesh adaptive direct search algorithm is applied to minimize the objective function. Scatter-corrected CT images are obtained using the optimized blocker. Results: The proposed method is evaluated using a Catphan 504 phantom and a head patient. On the Catphan 504, our approach reduces the average CT number error from 115 Hounsfield units (HU) to 11 HU in the selected regions of interest, and improves the image contrast by a factor of 1.45 in the high-contrast regions. On the head patient, the CT number error is reduced from 97 HU to 6 HU in the soft tissue region and image spatial non-uniformity is decreased from 27% to 5% after correction. Conclusion: The proposed optimized blocker design is practical and attractive for CBCT-guided radiation therapy. This work is supported by grants from the Guangdong Innovative Research Team Program of China (Grant No. 2011S013), National 863 Programs of China (Grant Nos. 2012AA02A604 and 2015AA043203), and the National High-tech R&D Program for Young Scientists by the Ministry of Science and Technology of China (Grant No. 2015AA020917).
Quark cluster model for deep-inelastic lepton-deuteron scattering
NASA Astrophysics Data System (ADS)
Yen, G.; Vary, J. P.; Harindranath, A.; Pirner, H. J.
1990-10-01
We evaluate the contribution of quasifree nucleon knockout and of inelastic lepton-nucleon scattering in inclusive electron-deuteron reactions at large momentum transfer. We examine the degree of quantitative agreement with deuteron wave functions from the Reid soft-core and Bonn realistic nucleon-nucleon interactions. For the range of data available there is strong sensitivity to the tensor correlations, which are distinctively different in these two deuteron models. At this stage of the analysis the Reid soft-core wave function provides a reasonable description of the data while the Bonn wave function does not. We then include a six-quark cluster component whose relative contribution is based on an overlap criterion and obtain a good description of all the data with both interactions. The critical separation at which overlap occurs (formation of six-quark clusters) is taken to be 1.0 fm, and the six-quark cluster probability is 4.7% for Reid and 5.4% for Bonn. As a consequence, the quark cluster model with either the Reid or Bonn wave function describes the SLAC inclusive electron-deuteron scattering data equally well. We then show how additional data would be decisive in resolving which model is ultimately more correct.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burger, D.E.
1979-11-01
The extraction of morphological parameters from biological cells by analysis of light-scatter patterns is described. A light-scattering measurement system has been designed and constructed that allows one to visually examine and photographically record biological cells or cell models and measure the light-scatter pattern of an individual cell or cell model. Using a laser or conventional illumination, the imaging system consists of a modified microscope with a 35 mm camera attached to record the cell image or light-scatter pattern. Models of biological cells were fabricated. The dynamic range and angular distributions of light scattered from these models were compared to calculated distributions. Spectrum analysis techniques applied to the light-scatter data give the sought-after morphological cell parameters. These results compared favorably to the shape parameters of the fabricated cell models, confirming the mathematical model procedure. For nucleated biological material, correct nuclear and cell eccentricity as well as the nuclear and cytoplasmic diameters were determined. A method for comparing the flow equivalent of nuclear and cytoplasmic size to the actual dimensions is shown. This light-scattering experiment provides baseline information for automated cytology. In its present application, it involves correlating average size as measured in flow cytology to the actual dimensions determined from this technique. (ERB)
Probing Supersymmetry with Neutral Current Scattering Experiments
NASA Astrophysics Data System (ADS)
Kurylov, A.; Ramsey-Musolf, M. J.; Su, S.
2004-02-01
We compute the supersymmetric contributions to the weak charges of the electron (Q_W^e) and the proton (Q_W^p) in the framework of the Minimal Supersymmetric Standard Model. We also consider the ratios of neutral-current to charged-current cross sections, R_ν and R_ν̄, in ν (ν̄)-nucleus deep inelastic scattering, and compare the supersymmetric corrections with the deviations of these quantities from the Standard Model predictions implied by the recent NuTeV measurement.
NASA Astrophysics Data System (ADS)
Bezur, L.; Marshall, J.; Ottaway, J. M.
A square-wave wavelength modulation system, based on a rotating quartz chopper with four quadrants of different thicknesses, has been developed and evaluated as a method for automatic background correction in carbon furnace atomic emission spectrometry. Accurate background correction is achieved for the residual black body radiation (Rayleigh scatter) from the tube wall and Mie scatter from particles generated by a sample matrix and formed by condensation of atoms in the optical path. Intensity modulation caused by overlap at the edges of the quartz plates and by the divergence of the optical beam at the position of the modulation chopper has been investigated and is likely to be small.
Heat-Flux Measurements in Laser-Produced Plasmas Using Thomson Scattering from Electron Plasma Waves
NASA Astrophysics Data System (ADS)
Henchen, R. J.; Goncharov, V. N.; Cao, D.; Katz, J.; Froula, D. H.; Rozmus, W.
2017-10-01
An experiment was designed to measure heat flux in coronal plasmas using collective Thomson scattering. Adjustments to the electron distribution function resulting from heat flux affect the shape of the collective Thomson scattering features through wave-particle resonance. The amplitude of the Spitzer-Härm electron distribution function correction term (f_1) was varied to match the data and determines the value of the heat flux. Independent measurements of temperature and density obtained from Thomson scattering were used to infer the classical heat flux (q = -κ∇T_e). Time-resolved Thomson-scattering data were obtained at five locations in the corona along the target normal in a blowoff plasma formed from a planar Al target with 1.5 kJ of 351-nm laser light in a 2-ns square pulse. The flux measured through the Thomson-scattering spectra is a factor of 5 less than the κ∇T_e measurements. The lack of collisions of heat-carrying electrons suggests a nonlocal model is needed to accurately describe the heat flux. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.
NASA Astrophysics Data System (ADS)
Keen, David A.; Keeble, Dean S.; Bennett, Thomas D.
2018-04-01
The structure of fully hydrated grossular, or katoite, contains an unusual arrangement of four O-H bonds within each O4 tetrahedron. Neutron and X-ray total scattering from a powdered deuterated sample have been measured to investigate the local arrangement of this O4D4 cluster. The O-D bond length determined directly from the pair distribution function is 0.954 Å, although the Rietveld-refined distance between the average O and D positions was slightly smaller. Reverse Monte Carlo refinement of supercell models against the total scattering data shows that, other than the consequences of this correctly determined O-D bond length, there is little to suggest that the O4D4 structure is locally significantly different from that expected based on the average structure determined solely from Bragg diffraction.
A Q-Band Free-Space Characterization of Carbon Nanotube Composites
Hassan, Ahmed M.; Garboczi, Edward J.
2016-01-01
We present a free-space measurement technique for non-destructive non-contact electrical and dielectric characterization of nano-carbon composites in the Q-band frequency range of 30 GHz to 50 GHz. The experimental system and error correction model accurately reconstruct the conductivity of composite materials that are either thicker than the wave penetration depth, and therefore exhibit negligible microwave transmission (less than −40 dB), or thinner than the wave penetration depth and, therefore, exhibit significant microwave transmission. This error correction model implements a fixed wave propagation distance between antennas and corrects the complex scattering parameters of the specimen from two references, an air slab having geometrical propagation length equal to that of the specimen under test, and a metallic conductor, such as an aluminum plate. Experimental results were validated by reconstructing the relative dielectric permittivity of known dielectric materials and then used to determine the conductivity of nano-carbon composite laminates. This error correction model can simplify routine characterization of thin conducting laminates to just one measurement of scattering parameters, making the method attractive for research, development, and for quality control in the manufacturing environment. PMID:28057959
Are Planetary Regolith Particles Back Scattering? Response to a Paper by M. Mishchenko
NASA Technical Reports Server (NTRS)
Hapke, Bruce
1996-01-01
In a recent paper Mishchenko asserts that soil particles are strongly forward scattering, whereas particles on the surfaces of objects in the solar system have been inferred to be back scattering. Mishchenko suggests that this apparent discrepancy is an artifact caused by using an approximate light scattering model to analyse the data, and that planetary regolith particles are actually strong forward scatterers. The purpose of the present paper is to point out the errors in Mishchenko's paper and to show from both theoretical arguments and experimental data that inhomogeneous composite particles which are large compared to the wavelength of visible light, such as rock fragments and agglutinates, can be strongly back scattering and are the fundamental scatterers in media composed of them. Such particles appear to be abundant in planetary regoliths and can account for the back scattering character of the surfaces of many bodies in the solar system. If the range of phase angles covered by a data set is insufficient, serious errors in retrieving the particle scattering properties can result whether an exact or approximate scattering model is used. However, if the data set includes both large and small phase angles, approximate regolith scattering models can correctly retrieve the sign of the particle scattering asymmetry.
NASA Technical Reports Server (NTRS)
Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.
2012-01-01
A numerical accuracy analysis of the radiative transfer equation (RTE) solution based on separation of the diffuse light field into anisotropic and smooth parts is presented. The analysis uses three different algorithms based on the discrete ordinate method (DOM). Two methods, DOMAS and DOM2+, that do not use truncation of the phase function, are compared against the TMS method. DOMAS and DOM2+ use the Small-Angle Modification of the RTE and the single scattering term, respectively, as the anisotropic part. The TMS method uses the Delta-M method for truncation of the phase function along with the single scattering correction. For reference, a standard discrete ordinate method, DOM, is also included in the analysis. The results obtained for cases with high scattering anisotropy show that at a low number of streams (16, 32) only DOMAS provides an accurate solution in the aureole area. Outside of the aureole, the convergence and accuracy of DOMAS and TMS are found to be approximately similar: DOMAS was more accurate in cases with coarse aerosol and liquid water cloud models, except at low optical depth, while TMS showed better results in the case of ice cloud.
GEO-LEO reflectance band inter-comparison with BRDF and atmospheric scattering corrections
NASA Astrophysics Data System (ADS)
Chang, Tiejun; Xiong, Xiaoxiong Jack; Keller, Graziela; Wu, Xiangqian
2017-09-01
The inter-comparison of the reflective solar bands between an instrument onboard a geostationary orbit satellite and one onboard a low Earth orbit satellite is very helpful for assessing their calibration consistency. GOES-R was launched on November 19, 2016 and Himawari 8 was launched on October 7, 2014. Unlike the previous GOES instruments, the Advanced Baseline Imager on GOES-16 (GOES-R became GOES-16 after November 29 when it reached orbit) and the Advanced Himawari Imager (AHI) on Himawari 8 have onboard calibrators for the reflective solar bands. The assessment of calibration is important for enhancing their product quality. MODIS and VIIRS, with their stringent calibration requirements and excellent on-orbit calibration performance, provide good references. The simultaneous nadir overpass (SNO) and ray-matching methods are widely used inter-comparison methods for reflective solar bands. In this work, the inter-comparisons are performed over a pseudo-invariant target. The use of a stable and uniform calibration site provides a comparison at an appropriate reflectance level, accurate adjustment for band spectral coverage differences, reduction of the impact of pixel mismatching, and consistency of the BRDF and atmospheric corrections. The site in this work is a desert site in Australia (latitude 29.0° S; longitude 139.8° E). Due to the differences in solar and view angles, two corrections are applied to obtain comparable measurements. The first is the atmospheric scattering correction. The satellite sensor measurements are top-of-atmosphere reflectances; the scattering, especially Rayleigh scattering, should be removed so that the ground reflectance can be derived. Secondly, the angle differences magnify the BRDF effect, so the ground reflectance should be corrected to obtain comparable measurements. The atmospheric correction is performed using a vector version of the Second Simulation of a Satellite Signal in the Solar Spectrum model, and the BRDF correction is performed using a semi-empirical model. AHI band 1 (0.47 μm) shows good agreement with VIIRS band M3, with a difference of 0.15%. AHI band 5 (1.69 μm) shows the largest difference in comparison with VIIRS M10.
Retrieval of background surface reflectance with BRD components from pre-running BRDF
NASA Astrophysics Data System (ADS)
Choi, Sungwon; Lee, Kyeong-Sang; Jin, Donghyun; Lee, Darae; Han, Kyung-Soo
2016-10-01
Many countries launch satellites to observe the Earth's surface. As the importance of surface remote sensing increases, surface reflectance has become a core parameter of the land surface climate. However, observing surface reflectance from satellites has weaknesses, such as limited temporal resolution and dependence on view and solar angles. The bidirectional effects of surface reflectance introduce noise into the time series, and this noise can lead to errors when determining surface reflectance. To correct the bidirectional error of surface reflectance, a correction model that normalizes the sensor data is necessary. The Bidirectional Reflectance Distribution Function (BRDF) is a method that improves accuracy by correcting the scattering components (isotropic, geometric, and volumetric scattering), and it was used in this study. We apply two steps to retrieve the Background Surface Reflectance (BSR). The first step is retrieving the Bidirectional Reflectance Distribution (BRD) coefficients: before retrieving the BSR, a pre-running BRDF is performed, applying the BRDF to observed surface reflectance from SPOT/VEGETATION (VGT-S1) and angular data to obtain the BRD coefficients for the scattering terms. In the second step, the BRDF model is applied in the opposite direction, with the BRD coefficients and the angular data, to retrieve the BSR. As a result, the BSR has reflectance very similar to that of VGT-S1, and the BSR reflectance levels are reasonable: the highest BSR reflectance does not exceed 0.4 in the blue channel, 0.45 in the red channel, and 0.55 in the NIR channel. For validation, we compared the reflectance of clear-sky pixels identified from the SPOT/VGT status map. Comparing the BSR with VGT-S1, the bias ranges from 0.0116 to 0.0158 and the RMSE from 0.0459 to 0.0545; these are very reasonable results, confirming that the BSR is similar to VGT-S1. A weakness of this study is the presence of missing pixels in the BSR, which were observed too few times to retrieve the BRD components. If the missing pixels are filled, the BSR will be better suited to retrieving surface products with higher accuracy, and it can then serve as useful input for products derived from surface reflectance, such as cloud masking and aerosol retrieval.
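A minimal sketch of the pre-running BRDF step as a linear kernel-model fit is given below; the kernel values (e.g. RossThick and LiSparse) are assumed to be precomputed from the viewing and solar geometry, and all names are illustrative.

```python
# Hedged sketch: fit kernel-driven BRD coefficients by least squares, then
# forward-model the reflectance at a reference geometry to obtain a normalized
# (BSR-like) reflectance.
import numpy as np

def fit_brd_coefficients(reflectance, k_vol, k_geo):
    """Solve R = f_iso + f_vol * K_vol + f_geo * K_geo for the coefficients."""
    A = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
    coeffs, *_ = np.linalg.lstsq(A, reflectance, rcond=None)
    return coeffs                                   # (f_iso, f_vol, f_geo)

def modeled_reflectance(coeffs, k_vol_ref, k_geo_ref):
    """Reflectance predicted at a reference (e.g. nadir) geometry."""
    f_iso, f_vol, f_geo = coeffs
    return f_iso + f_vol * k_vol_ref + f_geo * k_geo_ref

# usage with a time series of observations for one pixel:
# coeffs = fit_brd_coefficients(rho_obs, k_vol_obs, k_geo_obs)
# bsr = modeled_reflectance(coeffs, k_vol_nadir, k_geo_nadir)
```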
The effect of precipitation on measuring sea surface salinity from space
NASA Astrophysics Data System (ADS)
Jin, Xuchen; Pan, Delu; He, Xianqiang; Wang, Difeng; Zhu, Qiankun; Gong, Fang
2017-10-01
The sea surface salinity (SSS) can be measured from space by using L-band (1.4 GHz) microwave radiometers. The L-band has been chosen for its sensitivity of brightness temperature to changes in salinity. However, SSS remote sensing is still challenging due to the low sensitivity of brightness temperature to SSS variation: for vertical polarization, the sensitivity is about 0.4 to 0.8 K/psu depending on incidence angle and sea surface temperature; for horizontal polarization, it is about 0.2 to 0.6 K/psu. This means that radiometric measurements with an accuracy better than 1 K are required even for the best sensitivity of brightness temperature to SSS. Therefore, in order to retrieve SSS, the measured brightness temperature at the top of the atmosphere (TOA) needs to be corrected for many sources of error. One main geophysical source of error comes from the atmosphere. Currently, the atmospheric effect at L-band is usually corrected with an absorption and emission model, which estimates the radiation absorbed and emitted by the atmosphere. However, the radiation scattered by precipitation is neglected in absorption and emission models, and this can be significant under heavy precipitation. In this paper, a vector radiative transfer model for a coupled atmosphere and ocean system with a rough surface is developed to simulate the brightness temperature at the TOA under different precipitation conditions. The model is based on the adding-doubling method and includes oceanic emission and reflection as well as atmospheric absorption and scattering. For the ocean with a rough surface, an empirical emission model established by Gabarro and the isotropic Cox-Munk wave model with a shadowing effect are used to simulate the emission and reflection of the sea surface. The atmospheric attenuation is divided into two parts: for the rain layer, a Marshall-Palmer drop size distribution is used and the scattering properties of the hydrometeors are calculated with Mie theory (the scattering hydrometeors are assumed to be spherical); for the other atmospheric layers, which are assumed to be clear sky, Liebe's millimeter wave propagation model (MPM93) is used to calculate the absorption coefficients of oxygen, water vapor, and cloud droplets. To simulate the change of brightness temperature caused by different rain rates (0-50 mm/h), we assume a 26-layer precipitation structure corresponding to NCEP FNL data. Our radiative transfer simulations show that the brightness temperature at the TOA can be influenced significantly by heavy precipitation. The results indicate that the atmospheric attenuation at L-band at an incidence angle of 42.5° introduces a positive bias, and when the rain rate rises to 50 mm/h, the brightness temperature increases are close to 0.6 K and 0.8 K for horizontally and vertically polarized brightness temperatures, respectively. Thus, in the case of heavy precipitation, the current absorption and emission model is not accurate enough to correct the atmospheric effect, and a radiative transfer model which considers the effect of radiation scattering should be used.
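A minimal sketch of the Marshall-Palmer drop size distribution assumed for the rain layer is given below, using the standard parameter values N0 = 8000 m^-3 mm^-1 and Lambda = 4.1 R^-0.21 mm^-1; the size binning and rain rate are illustrative.

```python
# Hedged sketch: Marshall-Palmer raindrop size distribution N(D) = N0*exp(-Lambda*D).
import numpy as np

def marshall_palmer(diameters_mm, rain_rate_mm_per_h):
    n0 = 8000.0                                    # m^-3 mm^-1
    lam = 4.1 * rain_rate_mm_per_h ** (-0.21)      # mm^-1
    return n0 * np.exp(-lam * diameters_mm)        # drop concentration per size, m^-3 mm^-1

d = np.linspace(0.1, 6.0, 60)                      # drop diameters in mm
n_d = marshall_palmer(d, 10.0)                     # for a 10 mm/h rain rate
```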
NASA Technical Reports Server (NTRS)
Wang, Qinglin; Gogineni, S. P.
1991-01-01
A numerical procedure is presented for estimating the true scattering coefficient, σ⁰, from measurements made using wide-beam antennas. The use of wide-beam antennas results in an inaccurate estimate of σ⁰ if the narrow-beam approximation is used in the retrieval process. To reduce this error, a correction procedure was proposed that estimates the error resulting from the narrow-beam approximation and uses it to obtain a more accurate estimate of σ⁰. An exponential model was assumed to account for the variation of σ⁰ with incidence angle, and the model parameters are estimated from measured data. Based on the model and knowledge of the antenna pattern, the procedure calculates the error due to the narrow-beam approximation. The procedure is shown to provide a significant improvement in the estimation of σ⁰ obtained with wide-beam antennas, and it is also shown to be insensitive to the assumed σ⁰ model.
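A hedged sketch of the kind of procedure the abstract describes: σ⁰ is assumed to follow an exponential dependence on incidence angle, the model parameters are fitted from measurements by a log-linear least-squares fit, and the fitted model is averaged over an assumed Gaussian two-way antenna pattern to quantify the narrow-beam-approximation error. The Gaussian pattern and all numerical choices are illustrative, not the authors' antenna model.

```python
import numpy as np

def fit_exponential_sigma0(theta_rad, sigma0_linear):
    """Fit sigma0(theta) = A * exp(-B * theta) by a log-linear least-squares fit."""
    slope, intercept = np.polyfit(theta_rad, np.log(sigma0_linear), 1)
    return np.exp(intercept), -slope   # A, B

def beam_weighted_sigma0(A, B, theta0_rad, beamwidth_rad, n=501):
    """Average the fitted model over an assumed Gaussian two-way antenna pattern
    centred on theta0; the ratio to the point value sigma0(theta0) quantifies the
    error introduced by the narrow-beam approximation."""
    theta = np.linspace(theta0_rad - 2 * beamwidth_rad,
                        theta0_rad + 2 * beamwidth_rad, n)
    gain = np.exp(-4 * np.log(2) * ((theta - theta0_rad) / beamwidth_rad) ** 2)
    model = A * np.exp(-B * theta)
    return np.trapz(gain * model, theta) / np.trapz(gain, theta)
```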
NASA Technical Reports Server (NTRS)
Martin, D. L.; Perry, M. J.
1994-01-01
Water-leaving radiances and phytoplankton pigment concentrations are calculated from coastal zone color scanner (CZCS) radiance measurements by removing atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. The single greatest source of error in CZCS atmospheric correction algorithms is the assumption that these Rayleigh and aerosol radiances are separable. Multiple-scattering interactions between Rayleigh and aerosol components cause systematic errors in calculated aerosol radiances, and the magnitude of these errors depends on aerosol type and optical depth and on satellite viewing geometry. A technique was developed which extends the results of previous radiative transfer modeling by Gordon and Castano to predict the magnitude of these systematic errors for simulated CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere. The simulated image mathematically duplicates the exact satellite, Sun, and pixel locations of an actual CZCS image. Errors in the aerosol radiance at 443 nm are calculated for a range of aerosol optical depths. When pixels in the simulated image exceed an error threshold, the corresponding pixels in the actual CZCS image are flagged and excluded from further analysis or from use in image compositing or compilation of pigment concentration databases. Studies based on time series analyses or compositing of CZCS imagery which do not address Rayleigh-aerosol multiple scattering should be interpreted cautiously, since the fundamental assumption used in their atmospheric correction algorithm is flawed.
Correction of WindScat Scatterometric Measurements by Combining with AMSR Radiometric Data
NASA Technical Reports Server (NTRS)
Song, S.; Moore, R. K.
1996-01-01
The Seawinds scatterometer on the advanced Earth observing satellite-2 (ADEOS-2) will determine surface wind vectors by measuring the radar cross section. Multiple measurements will be made at different points in a wind-vector cell. When dense clouds and rain are present, the signal will be attenuated, thereby giving erroneous results for the wind. This report describes algorithms that use the Advanced Microwave Scanning Radiometer (AMSR) on ADEOS-2 to correct for the attenuation. One can determine attenuation from a radiometer measurement based on the excess brightness temperature measured, that is, the difference between the total measured brightness temperature and the contribution from surface emission. A major problem that the algorithm must address is determining the surface contribution. Two basic approaches were developed for this, one using the scattering coefficient measured along with the brightness temperature, and the other using the brightness temperature alone. For both methods, best results will occur if the wind from the preceding wind-vector cell can be used as an input to the algorithm. In the method based on the scattering coefficient, the wind direction from the preceding cell is needed; in the method using brightness temperature alone, the wind speed from the preceding cell is needed. If neither is available, the algorithm can still work, but the corrections will be less accurate. Both correction methods require iterative solutions. Simulations show that the algorithms make significant improvements in the measured scattering coefficient and thus in the retrieved wind vector. For stratiform rains, the errors without correction can be quite large, so the correction makes a major improvement. For systems of separated convective cells, the initial error is smaller and the correction, although about the same percentage, has a smaller effect.
HMI Data Corrected for Stray Light Now Available
NASA Astrophysics Data System (ADS)
Norton, A. A.; Duvall, T. L.; Schou, J.; Cheung, M. C. M.; Scherrer, P. H.
2016-10-01
The form of the point spread function (PSF) derived for HMI is an Airy function convolved with a Lorentzian. The parameters are constrained by ground-based testing of the instrument conducted prior to launch (Wachter et al., 2012), by full-disk data used to evaluate the off-limb behavior of the scattered light, and by data obtained during the Venus transit. The PSF correction has been programmed in both C and CUDA C and runs within the JSOC environment using either a CPU or a GPU. A single full-disk intensity image can be deconvolved in less than one second. The PSF is described in more detail in Couvidat et al. (2016) and has already been used by Hathaway et al. (2015) to forward-model solar-convection spectra, by Krucker et al. (2015) to investigate footpoints of off-limb solar flares, and by Whitney, Criscuoli and Norton (2016) to examine the relations between intensity contrast and magnetic field strengths. In this presentation, we highlight the changes to umbral darkness, granulation contrast, and plage field strengths that result from stray light correction. A twenty-four hour period of scattered-light-corrected HMI data from 2010.08.03, including the isolated sunspot NOAA 11092, is now publicly available. Requests for additional time periods of interest are welcome and will be processed by the HMI team.
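A minimal stand-in for the deconvolution step, using a frequency-domain Wiener filter and a simplified PSF with a narrow core plus Lorentzian scattered-light wings. The actual HMI PSF (an Airy function convolved with a Lorentzian) and the JSOC C/CUDA implementation are not reproduced here, and all parameter values are illustrative.

```python
import numpy as np

def wiener_deconvolve(image, psf, noise_to_signal=1e-3):
    """Frequency-domain Wiener deconvolution: a simple stand-in for the
    PSF correction described in the abstract."""
    H = np.fft.rfft2(np.fft.ifftshift(psf), s=image.shape)
    G = np.fft.rfft2(image)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
    return np.fft.irfft2(G * W, s=image.shape)

def toy_psf(shape, core_sigma=1.5, wing_gamma=6.0, wing_frac=0.1):
    """Illustrative PSF: narrow Gaussian core plus Lorentzian scattered-light
    wings (the real HMI PSF is an Airy function convolved with a Lorentzian)."""
    y, x = np.indices(shape)
    r2 = (y - shape[0] // 2) ** 2 + (x - shape[1] // 2) ** 2
    core = np.exp(-0.5 * r2 / core_sigma ** 2)
    wings = 1.0 / (1.0 + r2 / wing_gamma ** 2)
    psf = (1 - wing_frac) * core / core.sum() + wing_frac * wings / wings.sum()
    return psf / psf.sum()

# Usage on a synthetic full-disk-like image.
img = np.random.rand(256, 256)
restored = wiener_deconvolve(img, toy_psf((256, 256)))
```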
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pourmoghaddas, Amir, E-mail: apour@ottawaheart.ca; Wells, R. Glenn
Purpose: Recently, there has been increased interest in dedicated cardiac single photon emission computed tomography (SPECT) scanners with pinhole collimation and improved detector technology due to their improved count sensitivity and resolution over traditional parallel-hole cameras. With traditional cameras, energy-based approaches are often used in the clinic for scatter compensation because they are fast and easily implemented. Some of the cardiac cameras use cadmium-zinc-telluride (CZT) detectors, which can complicate the use of energy-based scatter correction (SC) due to the low-energy tail: an increased number of unscattered photons detected with reduced energy. Modified energy-based scatter correction methods can be implemented, but their level of accuracy is unclear. In this study, the authors validated by physical phantom experiments the quantitative accuracy and reproducibility of easily implemented correction techniques applied to Tc-99m myocardial imaging with a CZT-detector-based gamma camera with multiple heads, each with a single-pinhole collimator. Methods: Activity in the cardiac compartment of an Anthropomorphic Torso phantom (Data Spectrum Corporation) was measured through 15 Tc-99m SPECT acquisitions. The ratio of activity concentrations in the organ compartments resembled a clinical Tc-99m-sestamibi scan and was kept consistent across all experiments (1.2:1 heart to liver and 1.5:1 heart to lung). Two background activity levels were considered: no activity (cold) and an activity concentration 1/10th of the heart (hot). A plastic "lesion" was placed inside the septal wall of the myocardial insert to simulate the presence of a region without tracer uptake, and contrast in this lesion was calculated for all images. The true net activity in each compartment was measured with a dose calibrator (CRC-25R, Capintec, Inc.). A 10 min SPECT image was acquired using a dedicated cardiac camera with CZT detectors (Discovery NM530c, GE Healthcare), followed by a CT scan for attenuation correction (AC). For each experiment, separate images were created with no corrections (NC), with AC, with attenuation and dual-energy window (DEW) scatter correction (ACSC), with attenuation and partial volume correction (PVC) applied (ACPVC), and with attenuation, scatter, and PVC applied (ACSCPVC). The DEW SC method used was modified to account for the presence of the low-energy tail. Results: T-tests showed that the mean error in absolute activity measurement was reduced significantly for AC and ACSC compared to NC for both (hot and cold) datasets (p < 0.001) and that ACSC, ACPVC, and ACSCPVC show significant reductions in mean differences compared to AC (p ≤ 0.001) without increasing the uncertainty (p > 0.4). The effect of SC and PVC was significant in reducing errors over AC in both datasets (p < 0.001 and p < 0.01, respectively), resulting in a mean error of 5% ± 4%. Conclusions: Quantitative measurements of cardiac Tc-99m activity are achievable using attenuation and scatter corrections with the authors' dedicated cardiac SPECT camera. Partial volume corrections offer improvements in measurement accuracy in AC images and in ACSC images with elevated background activity; however, these improvements are not significant in ACSC images with low background activity.
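For reference, a sketch of a conventional dual-energy window (DEW) scatter estimate of the kind the study modifies: counts in a lower-energy scatter window are scaled by the window-width ratio and a factor k, then subtracted from the photopeak window. The window limits and k below are illustrative, and the modification for the CZT low-energy tail described in the abstract is not included.

```python
import numpy as np

def dew_scatter_estimate(photopeak_proj, scatter_proj,
                         photopeak_width_keV, scatter_width_keV, k=0.5):
    """Conventional dual-energy window (DEW) scatter estimate.

    Counts in a lower-energy scatter window are scaled by the ratio of window
    widths and a factor k, then subtracted from the photopeak-window projection.
    """
    scatter_in_peak = k * scatter_proj * (photopeak_width_keV / scatter_width_keV)
    corrected = np.clip(photopeak_proj - scatter_in_peak, 0, None)
    return corrected, scatter_in_peak

# Illustrative Tc-99m windows: 126-154 keV photopeak, 114-126 keV scatter window.
pp = np.random.poisson(100.0, size=(64, 64)).astype(float)
sc = np.random.poisson(30.0, size=(64, 64)).astype(float)
corrected, _ = dew_scatter_estimate(pp, sc, photopeak_width_keV=28.0,
                                    scatter_width_keV=12.0, k=0.5)
```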
Positron scattering from pyridine
NASA Astrophysics Data System (ADS)
Stevens, D.; Babij, T. J.; Machacek, J. R.; Buckman, S. J.; Brunger, M. J.; White, R. D.; García, G.; Blanco, F.; Ellis-Gibbings, L.; Sullivan, J. P.
2018-04-01
We present a range of cross-section measurements for the low-energy scattering of positrons from pyridine, for incident positron energies below 20 eV, together with calculations of positron scattering from pyridine using the independent atom model with the screening-corrected additivity rule including interference effects, with dipole rotational excitations accounted for using the Born approximation. Comparisons are made between the experimental measurements and theoretical calculations. For the positronium formation cross section, we also compare with results from a recent empirical model. In general, quite good agreement is seen between the calculations and measurements, although some discrepancies remain that may require further investigation. It is hoped that the present study will stimulate the development of ab initio theoretical methods to be applied to this important scattering system.
Using phase for radar scatterer classification
NASA Astrophysics Data System (ADS)
Moore, Linda J.; Rigling, Brian D.; Penno, Robert P.; Zelnio, Edmund G.
2017-04-01
Traditional synthetic aperture radar (SAR) systems tend to discard the phase information of formed complex radar imagery prior to automatic target recognition (ATR). This practice has historically been driven by available hardware storage, processing capabilities, and data link capacity. Recent advances in high performance computing (HPC) have enabled extremely dense storage and processing solutions, so previous motives for discarding radar phase information in ATR applications have been mitigated. First, we characterize the value of phase in one-dimensional (1-D) radar range profiles with respect to the ability to correctly estimate target features that are currently employed in ATR algorithms for target discrimination. These features correspond to physical characteristics of targets through radio frequency (RF) scattering phenomenology. Physics-based electromagnetic scattering models developed from the geometrical theory of diffraction are utilized for the information analysis presented here. Information is quantified by the error of target parameter estimates from noisy radar signals when phase is either retained or discarded. Operating conditions (OCs) of signal-to-noise ratio (SNR) and bandwidth are considered. Second, we investigate the value of phase in 1-D radar returns with respect to the ability to correctly classify canonical targets. Classification performance is evaluated via logistic regression for three targets (sphere, plate, tophat). Phase information is demonstrated to improve radar target classification rates, particularly at low SNRs and low bandwidths.
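A toy illustration of the classification comparison described above: logistic regression applied to synthetic complex returns, with magnitude-only features versus magnitude-plus-phase features. The data generator, feature choices, and class definitions are invented stand-ins, not the paper's scattering models or canonical targets.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def synth_returns(n, phase_offset, snr_db=5.0):
    """Synthetic 1-D complex returns for one target class: a unit scatterer
    whose phase encodes class identity, plus complex Gaussian noise."""
    signal = np.exp(1j * (phase_offset + 0.1 * rng.standard_normal(n)))
    noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    return signal + noise * 10 ** (-snr_db / 20)

x = np.concatenate([synth_returns(500, 0.0), synth_returns(500, np.pi / 2)])
y = np.repeat([0, 1], 500)

for name, feats in [("magnitude only", np.abs(x)[:, None]),
                    ("magnitude + phase", np.column_stack([np.abs(x), np.angle(x)]))]:
    Xtr, Xte, ytr, yte = train_test_split(feats, y, test_size=0.3, random_state=0)
    acc = LogisticRegression().fit(Xtr, ytr).score(Xte, yte)
    print(f"{name}: accuracy = {acc:.2f}")
```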
Reverberant acoustic energy in auditoria that comprise systems of coupled rooms
NASA Astrophysics Data System (ADS)
Summers, Jason E.
2003-11-01
A frequency-dependent model for reverberant energy in coupled rooms is developed and compared with measurements for a 1:10 scale model and for Bass Hall, Ft. Worth, TX. At high frequencies, prior statistical-acoustics models are improved by geometrical-acoustics corrections for decay within sub-rooms and for energy transfer between sub-rooms. Comparisons of computational geometrical-acoustics predictions based on beam-axis tracing with scale-model measurements indicate errors resulting from a tail correction that assumes constant quadratic growth of the reflection density; using ray tracing in the late part corrects this error. For mid-frequencies, the models are modified to account for wave effects at coupling apertures by including power transmission coefficients. Similarly, statistical-acoustics models are improved through more accurate estimates of power transmission. Scale-model measurements are in accord with the predicted behavior. The edge-diffraction model is adapted to study transmission through apertures; multiple-order scattering is shown, theoretically and experimentally, to be inaccurate due to neglect of slope diffraction. At low frequencies, perturbation models qualitatively explain the scale-model measurements. Measurements confirm the relation of coupling strength to the unperturbed pressure distribution on the coupling surfaces. Measurements in Bass Hall exhibit effects of the coupled stage house. High-frequency predictions of the statistical-acoustics and geometrical-acoustics models and predictions for the coupling apertures all agree with measurements.
Iteration of ultrasound aberration correction methods
NASA Astrophysics Data System (ADS)
Maasoey, Svein-Erik; Angelsen, Bjoern; Varslot, Trond
2004-05-01
Aberration in ultrasound medical imaging is usually modeled by time-delay and amplitude variations concentrated on the transmitting/receiving array. This filter process is here denoted a TDA filter. The TDA filter is an approximation to the physical aberration process, which occurs over an extended part of the human body wall. Estimation of the TDA filter, and performing correction on transmit and receive, has proven difficult, and it has yet to be shown that this method works adequately for severe aberration. Estimation of the TDA filter can be iterated by retransmitting a corrected signal and re-estimating until a convergence criterion is fulfilled (adaptive imaging). Two methods for estimating time-delay and amplitude variations in receive signals from random scatterers have been developed. One method correlates each element signal with a reference signal. The other method uses eigenvalue decomposition of the receive cross-spectrum matrix, based upon a receive energy-maximizing criterion. Iteration of aberration correction with a TDA filter has been investigated in simulations to study its convergence properties. Aberration was generated by a weak and a strong human body-wall model, both emulating the human abdominal wall. Results after iteration improve aberration correction substantially, and both estimation methods converge, even for the case of strong aberration.
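A minimal sketch of the first estimation approach mentioned above (correlating each element signal with a reference to estimate per-element delays and amplitudes). Using the across-array mean as the reference and the simple amplitude definition below are illustrative assumptions, not the authors' exact estimator.

```python
import numpy as np

def estimate_tda(element_signals, fs, reference=None):
    """Estimate per-element time delays and amplitude factors by correlating
    each element signal with a reference (here the across-array mean, i.e. a
    simple beamsum, as an illustrative choice).

    element_signals : array of shape (n_elements, n_samples)
    fs              : sampling frequency in Hz
    Returns (delays_s, amplitudes).
    """
    if reference is None:
        reference = element_signals.mean(axis=0)
    n = element_signals.shape[1]
    lags = np.arange(-n + 1, n)
    delays, amps = [], []
    for sig in element_signals:
        xcorr = np.correlate(sig, reference, mode="full")
        delays.append(lags[np.argmax(xcorr)] / fs)
        amps.append(np.sqrt(np.sum(sig ** 2) / np.sum(reference ** 2)))
    return np.array(delays), np.array(amps)
```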
Further Examination of a Simplified Model for Positronium-Helium Scattering
NASA Technical Reports Server (NTRS)
DiRienzi, J.; Drachman, Richard J.
2012-01-01
While carrying out investigations on Ps-He scattering we realized that it would be possible to improve the results of a previous work on zero-energy scattering of ortho-positronium by helium atoms. The previous work used a model to account for exchange and also attempted to include the effect of short-range Coulomb interactions in the close-coupling approximation. The 3 terms that were then included did not produce a well-converged result but served to give some justification to the model. Now we improve the calculation by using a simple variational wave function, and derive a much better value of the scattering length. The new result is compared with other computed values, and when an approximate correction due to the van der Waals potential is included the total is consistent with an earlier conjecture.
Improved determination of particulate absorption from combined filter pad and PSICAM measurements.
Lefering, Ina; Röttgers, Rüdiger; Weeks, Rebecca; Connor, Derek; Utschig, Christian; Heymann, Kerstin; McKee, David
2016-10-31
Filter pad light absorption measurements are subject to two major sources of experimental uncertainty: the so-called pathlength amplification factor, β, and scattering offsets, o, for which previous null-correction approaches are limited by recent observations of non-zero absorption in the near infrared (NIR). A new filter pad absorption correction method is presented here which uses linear regression against point-source integrating cavity absorption meter (PSICAM) absorption data to simultaneously resolve both β and the scattering offset. The PSICAM has previously been shown to provide accurate absorption data, even in highly scattering waters. Comparisons of PSICAM and filter pad particulate absorption data reveal linear relationships that vary on a sample by sample basis. This regression approach provides significantly improved agreement with PSICAM data (3.2% RMS%E) than previously published filter pad absorption corrections. Results show that direct transmittance (T-method) filter pad absorption measurements perform effectively at the same level as more complex geometrical configurations based on integrating cavity measurements (IS-method and QFT-ICAM) because the linear regression correction compensates for the sensitivity to scattering errors in the T-method. This approach produces accurate filter pad particulate absorption data for wavelengths in the blue/UV and in the NIR where sensitivity issues with PSICAM measurements limit performance. The combination of the filter pad absorption and PSICAM is therefore recommended for generating full spectral, best quality particulate absorption data as it enables correction of multiple errors sources across both measurements.
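A hedged sketch of the regression idea: if the filter-pad signal is modeled as beta times the PSICAM absorption plus a scattering offset, a single linear fit over wavelength recovers both quantities and yields a corrected spectrum. The exact regressed quantities and any weighting follow the published method and are not reproduced here; the toy data are illustrative.

```python
import numpy as np

def correct_filter_pad(a_filterpad, a_psicam):
    """Regress filter-pad absorption against PSICAM absorption over wavelength
    to recover the pathlength amplification factor beta and scattering offset o,
    then return the corrected filter-pad spectrum (a_fp - o) / beta."""
    beta, offset = np.polyfit(a_psicam, a_filterpad, 1)
    return (a_filterpad - offset) / beta, beta, offset

# Toy example: synthetic spectra with beta = 4.1 and a 0.02 m^-1 scattering offset.
wavelength = np.linspace(400, 750, 176)
a_true = 0.5 * np.exp(-0.012 * (wavelength - 400))          # "PSICAM" absorption
a_fp = 4.1 * a_true + 0.02 + np.random.normal(0, 0.005, a_true.size)
a_corrected, beta, offset = correct_filter_pad(a_fp, a_true)
```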
Calibration, Projection, and Final Image Products of MESSENGER's Mercury Dual Imaging System
NASA Astrophysics Data System (ADS)
Denevi, Brett W.; Chabot, Nancy L.; Murchie, Scott L.; Becker, Kris J.; Blewett, David T.; Domingue, Deborah L.; Ernst, Carolyn M.; Hash, Christopher D.; Hawkins, S. Edward; Keller, Mary R.; Laslo, Nori R.; Nair, Hari; Robinson, Mark S.; Seelos, Frank P.; Stephens, Grant K.; Turner, F. Scott; Solomon, Sean C.
2018-02-01
We present an overview of the operations, calibration, geodetic control, photometric standardization, and processing of images from the Mercury Dual Imaging System (MDIS) acquired during the orbital phase of the MESSENGER spacecraft's mission at Mercury (18 March 2011-30 April 2015). We also provide a summary of all of the MDIS products that are available in NASA's Planetary Data System (PDS). Updates to the radiometric calibration included slight modification of the frame-transfer smear correction, updates to the flat fields of some wide-angle camera (WAC) filters, a new model for the temperature dependence of narrow-angle camera (NAC) and WAC sensitivity, and an empirical correction for temporal changes in WAC responsivity. Further, efforts to characterize scattered light in the WAC system are described, along with a mosaic-dependent correction for scattered light that was derived for two regional mosaics. Updates to the geometric calibration focused on the focal lengths and distortions of the NAC and all WAC filters, NAC-WAC alignment, and calibration of the MDIS pivot angle and base. Additionally, two control networks were derived so that the majority of MDIS images can be co-registered with sub-pixel accuracy; the larger of the two control networks was also used to create a global digital elevation model. Finally, we describe the image processing and photometric standardization parameters used in the creation of the MDIS advanced products in the PDS, which include seven large-scale mosaics, numerous targeted local mosaics, and a set of digital elevation models ranging in scale from local to global.
Extending 3D Near-Cloud Corrections from Shorter to Longer Wavelengths
NASA Technical Reports Server (NTRS)
Marshak, Alexander; Evans, K. Frank; Varnai, Tamas; Guoyong, Wen
2014-01-01
Satellite observations have shown a positive correlation between cloud amount and aerosol optical thickness (AOT) that can be explained by the humidification of aerosols near clouds, and/or by cloud contamination by sub-pixel size clouds and the cloud adjacency effect. The last effect may substantially increase reflected radiation in cloud-free columns, leading to overestimates in the retrieved AOT. For clear-sky areas near boundary layer clouds the main contribution to the enhancement of clear sky reflectance at shorter wavelengths comes from the radiation scattered into clear areas by clouds and then scattered to the sensor by air molecules. Because of the wavelength dependence of air molecule scattering, this process leads to a larger reflectance increase at shorter wavelengths, and can be corrected using a simple two-layer model. However, correcting only for molecular scattering skews spectral properties of the retrieved AOT. Kassianov and Ovtchinnikov proposed a technique that uses spectral reflectance ratios to retrieve AOT in the vicinity of clouds; they assumed that the cloud adjacency effect influences the spectral ratio between reflectances at two wavelengths less than it influences the reflectances themselves. This paper combines the two approaches: It assumes that the 3D correction for the shortest wavelength is known with some uncertainties, and then it estimates the 3D correction for longer wavelengths using a modified ratio method. The new approach is tested with 3D radiances simulated for 26 cumulus fields from Large-Eddy Simulations, supplemented with 40 aerosol profiles. The results showed that (i) for a variety of cumulus cloud scenes and aerosol profiles over ocean the 3D correction due to cloud adjacency effect can be extended from shorter to longer wavelengths and (ii) the 3D corrections for longer wavelengths are not very sensitive to unbiased random uncertainties in the 3D corrections at shorter wavelengths.
Experimental testing of scattering polarization models
NASA Astrophysics Data System (ADS)
Li, Wenxian; Casini, Roberto; Tomczyk, Steven; Landi Degl'Innocenti, Egidio; Marsell, Brandan
2018-06-01
We realized a laboratory experiment to study the polarization of the Na I doublet at 589.3 nm in the presence of a magnetic field. The purpose of the experiment is to test the theory of scattering polarization for illumination conditions typical of astrophysical plasmas. This work was stimulated by solar observations of the Na I doublet that have proven particularly challenging to reproduce with current models of polarized line formation, even casting doubts on our very understanding of the physics of scattering polarization on the Sun. The experiment has confirmed the fundamental correctness of the current theory, and demonstrated that the "enigmatic" polarization of those observations is exclusively of solar origin.
PARTICLE SCATTERING OFF OF RIGHT-HANDED DISPERSIVE WAVES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schreiner, C.; Kilian, P.; Spanier, F., E-mail: cschreiner@astro.uni-wuerzburg.de
Resonant scattering of fast particles off low frequency plasma waves is a major process determining transport characteristics of energetic particles in the heliosphere and contributing to their acceleration. Usually, only Alfvén waves are considered for this process, although dispersive waves are also present throughout the heliosphere. We investigate resonant interaction of energetic electrons with dispersive, right-handed waves. For the interaction of particles and a single wave a variable transformation into the rest frame of the wave can be performed. Here, well-established analytic models derived in the framework of magnetostatic quasi-linear theory can be used as a reference to validate simulation results. However, this approach fails as soon as several dispersive waves are involved. Based on analytic solutions modeling the scattering amplitude in the magnetostatic limit, we present an approach to modify these equations for use in the plasma frame. Thereby we aim at a description of particle scattering in the presence of several waves. A particle-in-cell code is employed to study wave-particle scattering on a micro-physically correct level and to test the modified model equations. We investigate the interactions of electrons at different energies (from 1 keV to 1 MeV) and right-handed waves with various amplitudes. Differences between model and simulation arise in the case of high amplitudes or several waves. Analyzing the trajectories of single particles we find no microscopic diffusion in the case of a single plasma wave, although a broadening of the particle distribution can be observed.
Low dose scatter correction for digital chest tomosynthesis
NASA Astrophysics Data System (ADS)
Inscoe, Christina R.; Wu, Gongting; Shan, Jing; Lee, Yueh Z.; Zhou, Otto; Lu, Jianping
2015-03-01
Digital chest tomosynthesis (DCT) provides superior image quality and depth information for thoracic imaging at relatively low dose, though the presence of strong photon scatter degrades the image quality. In most chest radiography, anti-scatter grids are used. However, the grid also blocks a large fraction of the primary beam photons, requiring a significantly higher imaging dose for patients. Previously, we proposed an efficient low-dose scatter correction technique using a primary beam sampling apparatus. We implemented the technique in stationary digital breast tomosynthesis and found the method to be efficient in correcting patient-specific scatter with only a 3% increase in dose. In this paper we report a feasibility study of applying the same technique to chest tomosynthesis. The investigation was performed using phantom and cadaver subjects. The method involves an initial tomosynthesis scan of the object. A lead plate with an array of holes, or primary sampling apparatus (PSA), was placed above the object, and a second tomosynthesis scan was performed to measure the primary (scatter-free) transmission. The PSA data were used with the full-field projections to compute the scatter, which was then interpolated to full-field scatter maps unique to each projection angle. Full-field projection images were scatter corrected prior to reconstruction. Projections and reconstruction slices were evaluated, and the correction method was found to be effective at improving image quality and practical for clinical implementation.
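A sketch of the projection-domain step described above, assuming the sparse scatter samples are obtained by subtracting the PSA-measured primary from the full-field projection at the hole locations and are then interpolated to a full-field scatter map. The interpolation scheme and any scaling between the two scans are illustrative choices, not the authors' exact processing chain.

```python
import numpy as np
from scipy.interpolate import griddata

def scatter_correct_projection(full_proj, primary_at_holes, hole_rows, hole_cols):
    """Estimate a full-field scatter map from sparse primary samples.

    full_proj        : full-field projection (primary + scatter), 2-D array
    primary_at_holes : scatter-free transmission measured through the PSA holes
    hole_rows, hole_cols : pixel coordinates of the hole centres
    """
    # Sparse scatter samples: (primary + scatter) - primary at each hole.
    scatter_samples = full_proj[hole_rows, hole_cols] - primary_at_holes
    rows, cols = np.indices(full_proj.shape)
    scatter_map = griddata(
        points=np.column_stack([hole_rows, hole_cols]),
        values=scatter_samples,
        xi=(rows, cols),
        method="cubic",
    )
    # Fill pixels outside the convex hull of the holes with nearest-neighbour values.
    nearest = griddata(np.column_stack([hole_rows, hole_cols]),
                       scatter_samples, (rows, cols), method="nearest")
    scatter_map = np.where(np.isnan(scatter_map), nearest, scatter_map)
    return full_proj - scatter_map, scatter_map
```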
Simulation of hole-mobility in doped relaxed and strained Ge layers
NASA Astrophysics Data System (ADS)
Watling, Jeremy R.; Riddet, Craig; Chan, Morgan Kah H.; Asenov, Asen
2010-11-01
As silicon-based metal-oxide-semiconductor field-effect transistors (MOSFETs) are reaching the limits of their performance with scaling, alternative channel materials are being considered to maintain performance in future complementary metal-oxide-semiconductor technology generations. Thus there is renewed interest in employing Ge as a channel material in p-MOSFETs, due to the significant improvement in hole mobility as compared to Si. Here we employ full-band Monte Carlo simulation to study hole transport properties in Ge. We present mobility and velocity-field characteristics for different transport directions in p-doped relaxed and strained Ge layers. The simulations are based on a method for overcoming the potentially large dynamic range of scattering rates, which results from the long-range nature of the unscreened Coulombic interaction. Our model for ionized impurity scattering includes the effects of dynamic Lindhard screening, coupled with phase-shift and multi-ion corrections, along with plasmon scattering. We show that all these effects play a role in determining hole carrier transport in doped Ge layers and cannot be neglected.
Field theoretic approach to roughness corrections
NASA Astrophysics Data System (ADS)
Wu, Hua Yao; Schaden, Martin
2012-02-01
We develop a systematic field theoretic description of roughness corrections to the Casimir free energy of a massless scalar field in the presence of parallel plates with mean separation a. Roughness is modeled by specifying a generating functional for correlation functions of the height profile; the two-point correlation function is characterized by its variance, σ², and correlation length, ℓ. We obtain the partition function of a massless scalar quantum field interacting with the height profile of the surface via a δ-function potential. The partition function is given by a holographic reduction of this model to three coupled scalar fields on a two-dimensional plane. The original three-dimensional space with a flat parallel plate at a distance a from the rough plate is encoded in the nonlocal propagators of the surface fields on its boundary. Feynman rules for this equivalent 2+1-dimensional model are derived and its counterterms constructed. The two-loop contribution to the free energy of this model gives the leading roughness correction. The effective separation, a_eff, to a rough plate is measured to a plane that is displaced a distance ρ ∝ σ²/ℓ from the mean of its profile. This definition of the separation eliminates corrections to the free energy of order 1/a⁴ and results in unitary scattering matrices. We obtain an effective low-energy model in the limit ℓ ≪ a. It determines the scattering matrix and equivalent planar scattering surface of a very rough plate in terms of the single length scale ρ. The Casimir force on a rough plate is found to always weaken with decreasing correlation length ℓ. The two-loop approximation to the free energy interpolates between the free energy of the effective low-energy model and that of the proximity force approximation, with the force on a very rough plate with σ ≳ 0.5ℓ being weaker than on a planar Dirichlet surface at any separation.
NASA Astrophysics Data System (ADS)
Ma, L. X.; Tan, J. Y.; Zhao, J. M.; Wang, F. Q.; Wang, C. A.; Wang, Y. Y.
2017-07-01
Due to dependent scattering and absorption effects, the radiative transfer equation (RTE) may not be suitable for dealing with radiative transfer in dense discrete random media. This paper continues previous research on multiple and dependent scattering in densely packed discrete particle systems and puts emphasis on the effects of the particle complex refractive index. The Mueller matrix elements of the scattering system with different complex refractive indexes are obtained by both an electromagnetic method and a radiative transfer method. The Maxwell equations are directly solved based on the superposition T-matrix method, while the RTE is solved by the Monte Carlo method combined with the hard sphere model in the Percus-Yevick approximation (HSPYA) to consider the dependent scattering effects. The results show that for densely packed discrete random media composed of particles of medium size parameter (6.964 in this study), the demarcation line between independent and dependent scattering is closely connected with the particle complex refractive index. As the particle volume fraction increases to a certain value, densely packed discrete particles with a higher refractive index contrast between the particles and the host medium and a higher particle absorption index are more likely to show strong dependent-scattering characteristics. Due to the failure of the extended Rayleigh-Debye scattering condition, the HSPYA has only a weak effect on the dependent scattering correction at large phase shift parameters.
NASA Astrophysics Data System (ADS)
Manfred, K.; Adler, G. A.; Erdesz, F.; Franchin, A.; Lamb, K. D.; Schwarz, J. P.; Wagner, N.; Washenfelder, R. A.; Womack, C.; Murphy, D. M.
2017-12-01
Particle morphology has important implications for light scattering and radiative transfer, but can be difficult to measure. Biomass burning and other important aerosol sources can generate a mixture of both spherical and non-spherical particle morphologies, and it is necessary to represent these populations correctly in models. We describe a laser imaging nephelometer that measures the unpolarized scattering phase function of bulk aerosol at 375 and 405 nm using a wide-angle lens and CCD. We deployed this instrument at the Missoula Fire Sciences Laboratory to measure biomass burning aerosol morphology from controlled fires during the recent FIREX intensive laboratory study. The total integrated scattering signal agreed with that determined by a cavity ring-down photoacoustic spectrometer system and a traditional integrating nephelometer within instrument uncertainties. We compared measured scattering phase functions at 405 nm to theoretical models for spherical (Mie) and fractal (Rayleigh-Debye-Gans) particle morphologies based on the size distribution reported by an optical particle counter. We show that particle morphology can vary dramatically for different fuel types, and present results for two representative fires (pine tree vs. arid shrub). We find that Mie theory is inadequate to describe the actual behavior of realistic aerosols from biomass burning in some situations. This study demonstrates the capabilities of the laser imaging nephelometer instrument to provide real-time, in situ information about dominant particle morphology that is vital for accurate radiative transfer calculations.
Combined Henyey-Greenstein and Rayleigh phase function.
Liu, Quanhua; Weng, Fuzhong
2006-10-01
The phase function is an important parameter that affects the distribution of scattered radiation. In Rayleigh scattering, a scatterer is approximated by a dipole, and its phase function is analytically related to the scattering angle. For the Henyey-Greenstein (HG) approximation, the phase function preserves only the correct asymmetry factor (i.e., the first moment), which is essential for anisotropic scattering. When the HG function is applied to small particles, it produces a significant error in radiance. In addition, the HG function applies only to intensity radiative transfer. We develop a combined HG and Rayleigh (HG-Rayleigh) phase function. The HG phase function plays the role of a modulator, extending the application of the Rayleigh phase function to scattering with small asymmetry. The HG-Rayleigh phase function guarantees the correct asymmetry factor and is valid for polarized radiative transfer. It approaches the Rayleigh phase function for small particles. Thus the HG-Rayleigh phase function has wider applications for both intensity and polarimetric radiative transfer. For the microwave radiative transfer modeling in this study, the largest errors in the brightness temperature calculations for weak asymmetry scattering are generally below 0.02 K when using the HG-Rayleigh phase function. The errors can be much larger, in the 1-3 K range, if the Rayleigh and HG functions are applied separately.
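To make the construction concrete, the sketch below implements the HG and Rayleigh phase functions and a combined form in which the HG factor modulates the Rayleigh angular shape (a Cornette-Shanks-type expression that reduces to the Rayleigh phase function as g approaches 0 and integrates to unity). Whether this matches the exact normalization of the published HG-Rayleigh function is an assumption, not a claim about the paper.

```python
import numpy as np

def hg_phase(cos_theta, g):
    """Henyey-Greenstein phase function, normalized over 4*pi solid angle."""
    return (1.0 - g**2) / (1.0 + g**2 - 2.0 * g * cos_theta) ** 1.5

def rayleigh_phase(cos_theta):
    """Rayleigh (dipole) phase function."""
    return 0.75 * (1.0 + cos_theta**2)

def hg_rayleigh_phase(cos_theta, g):
    """Combined HG-Rayleigh form (Cornette-Shanks-type): the HG factor modulates
    the Rayleigh angular shape; the prefactor preserves normalization and the
    function reduces to the Rayleigh phase function as g -> 0."""
    return (1.5 * (1.0 - g**2) / (2.0 + g**2)
            * (1.0 + cos_theta**2)
            / (1.0 + g**2 - 2.0 * g * cos_theta) ** 1.5)

# Quick check: (1/2) * integral over mu in [-1, 1] should be ~1 for each g.
mu = np.linspace(-1.0, 1.0, 20001)
for g in (0.0, 0.3, 0.7):
    norm = 0.5 * np.trapz(hg_rayleigh_phase(mu, g), mu)
    print(f"g={g}: normalization = {norm:.4f}")
```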
Large Electroweak Corrections to Vector-Boson Scattering at the Large Hadron Collider.
Biedermann, Benedikt; Denner, Ansgar; Pellen, Mathieu
2017-06-30
For the first time full next-to-leading-order electroweak corrections to off-shell vector-boson scattering are presented. The computation features the complete matrix elements, including all nonresonant and off-shell contributions, to the electroweak process pp→μ^{+}ν_{μ}e^{+}ν_{e}jj and is fully differential. We find surprisingly large corrections, reaching -16% for the fiducial cross section, as an intrinsic feature of the vector-boson-scattering processes. We elucidate the origin of these large electroweak corrections upon using the double-pole approximation and the effective vector-boson approximation along with leading-logarithmic corrections.
Effects of varying soil moisture contents and vegetation canopies on microwave emissions
NASA Technical Reports Server (NTRS)
Burke, H.-H. K.; Schmugge, T. J.
1982-01-01
Results of NASA airborne passive microwave scans of bare and vegetated fields for comparison with ground truth tests are discussed, and a model for the scattering of microwave radiation by vegetation is detailed. On-board radiometers obtained data at 21, 2.8, and 1.67 cm during three passes over each of 46 fields, 28 of which were bare and the others planted with wheat or alfalfa. Ground-based sampling included moisture in five layers down to 15 cm in addition to soil temperature. The relationships among brightness temperature and soil moisture, as well as surface roughness and the vegetation canopy, were examined. A model was developed for the dielectric coefficient and volume scattering of a vegetation medium. L- to C-band data were found useful for retrieving soil information directly. A surface moisture content of 5-35% yielded an emissivity of 0.9-0.7. The data agreed well with a combined multilayer radiative transfer model with a simple roughness correction.
Reversal of photon-scattering errors in atomic qubits.
Akerman, N; Kotler, S; Glickman, Y; Ozeri, R
2012-09-07
Spontaneous photon scattering by an atomic qubit is a notable example of environment-induced error and is a fundamental limit to the fidelity of quantum operations. In the scattering process, the qubit loses its distinctive and coherent character owing to its entanglement with the photon. Using a single trapped ion, we show that by utilizing the information carried by the photon, we are able to coherently reverse this process and correct for the scattering error. We further used quantum process tomography to characterize the photon-scattering error and its correction scheme and demonstrate a correction fidelity greater than 85% whenever a photon was measured.
NASA Astrophysics Data System (ADS)
Fakhri, G. El; Kijewski, M. F.; Maksud, P.; Moore, S. C.
2003-06-01
Compton scatter, lead X-rays, and high-energy contamination are major factors affecting image quality in Ga-67 imaging. Scattered photons detected in one photopeak window include photons exiting the patient at energies within the photopeak, as well as higher-energy photons which have interacted in the collimator and crystal and lost energy. Furthermore, lead X-rays can be detected in the main energy photopeak (93 keV). We have previously developed two energy-based methods, based on artificial neural networks (ANN) and on a generalized spectral (GS) approach, to compensate for scatter, high-energy contamination, and lead X-rays in Ga-67 imaging. For comparison, we also considered the projections that would be acquired in the clinic using the optimal energy windows (WIN) we have reported previously for tumor detection and estimation tasks for the 93, 185, and 300 keV photopeaks. The aim of the present study is to evaluate, under realistic conditions, the impact of these phenomena and their compensation on tumor detection and estimation tasks in Ga-67 imaging. ANN and GS were compared on the basis of the performance of a three-channel Hotelling observer (CHO) in detecting the presence of a spherical tumor of unknown size embedded in an anatomic background, as well as on the basis of estimation of tumor activity. Projection datasets of spherical tumors ranging from 2 to 6 cm in diameter, located at several sites in an anthropomorphic torso phantom, were simulated using a Monte Carlo program that modeled all photon interactions in the patient as well as in the collimator and the detector for all decays between 91 and 888 keV. One hundred realistic noise realizations were generated from each very-low-noise simulated projection dataset. The presence of scatter degraded both CHO signal-to-noise ratio (SNR) and estimation accuracy. On average, the presence of scatter led to a 12% reduction in CHO SNR. Correcting for scatter further diminished CHO SNR, but to a lesser extent with ANN (5% reduction) than with GS (12%). Both scatter corrections improved performance in activity estimation. ANN yielded better precision (1.8% relative standard deviation) than did GS (4%) but greater average bias (5.1% with ANN, 3.6% with GS).
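A brief sketch of how a channelized Hotelling observer SNR can be computed from channel outputs of signal-present and signal-absent image ensembles. The channel design and the data below are illustrative stand-ins, not the three channels or phantom data of the study.

```python
import numpy as np

def cho_snr(channel_out_signal, channel_out_background):
    """Channelized Hotelling observer SNR.

    channel_out_* : arrays of shape (n_images, n_channels) holding channel
    outputs for signal-present and signal-absent (background) images.
    SNR^2 = dv^T K^-1 dv, with dv the mean channel-output difference and K the
    average of the two class covariance matrices.
    """
    dv = channel_out_signal.mean(axis=0) - channel_out_background.mean(axis=0)
    K = 0.5 * (np.cov(channel_out_signal, rowvar=False)
               + np.cov(channel_out_background, rowvar=False))
    w = np.linalg.solve(K, dv)          # Hotelling template in channel space
    return float(np.sqrt(dv @ w))

# Toy usage with three channels and 100 noise realizations per class.
rng = np.random.default_rng(0)
bkg = rng.normal(0.0, 1.0, size=(100, 3))
sig = rng.normal(0.3, 1.0, size=(100, 3))   # small mean shift from the tumor
print(f"CHO SNR = {cho_snr(sig, bkg):.2f}")
```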
Scatter correction using a primary modulator on a clinical angiography C-arm CT system.
Bier, Bastian; Berger, Martin; Maier, Andreas; Kachelrieß, Marc; Ritschl, Ludwig; Müller, Kerstin; Choi, Jang-Hwan; Fahrig, Rebecca
2017-09-01
Cone beam computed tomography (CBCT) suffers from a large amount of scatter, resulting in severe scatter artifacts in the reconstructions. Recently, a new scatter correction approach, called improved primary modulator scatter estimation (iPMSE), was introduced. That approach utilizes a primary modulator that is inserted between the X-ray source and the object. This modulation enables estimation of the scatter in the projection domain by optimizing an objective function with respect to the scatter estimate. Up to now the approach has not been implemented on a clinical angiography C-arm CT system. In our work, the iPMSE method is transferred to a clinical C-arm CBCT. Additional processing steps are added in order to compensate for the C-arm scanner motion and the automatic X-ray tube current modulation. These challenges were overcome by establishing a reference modulator database and a block-matching algorithm. Experiments with phantom and experimental in vivo data were performed to evaluate the method. We show that scatter correction using primary modulation is possible on a clinical C-arm CBCT. Scatter artifacts in the reconstructions are reduced with the newly extended method. Compared to a scan with a narrow collimation, our approach showed superior results with an improvement of the contrast and the contrast-to-noise ratio for the phantom experiments. In vivo data are evaluated by comparing the results with a scan with a narrow collimation and with a constant scatter correction approach. Scatter correction using primary modulation is possible on a clinical CBCT by compensating for the scanner motion and the tube current modulation. Scatter artifacts could be reduced in the reconstructions of phantom scans and in experimental in vivo data. © 2017 American Association of Physicists in Medicine.
High-fidelity artifact correction for cone-beam CT imaging of the brain
NASA Astrophysics Data System (ADS)
Sisniega, A.; Zbijewski, W.; Xu, J.; Dang, H.; Stayman, J. W.; Yorkston, J.; Aygun, N.; Koliatsos, V.; Siewerdsen, J. H.
2015-02-01
CT is the frontline imaging modality for diagnosis of acute traumatic brain injury (TBI), involving the detection of fresh blood in the brain (contrast of 30-50 HU, detail size down to 1 mm) in a non-contrast-enhanced exam. A dedicated point-of-care imaging system based on cone-beam CT (CBCT) could benefit early detection of TBI and improve direction to appropriate therapy. However, flat-panel detector (FPD) CBCT is challenged by artifacts that degrade contrast resolution and limit application in soft-tissue imaging. We present and evaluate a fairly comprehensive framework for artifact correction to enable soft-tissue brain imaging with FPD CBCT. The framework includes a fast Monte Carlo (MC)-based scatter estimation method complemented by corrections for detector lag, veiling glare, and beam hardening. The fast MC scatter estimation combines GPU acceleration, variance reduction, and simulation with a low number of photon histories and reduced number of projection angles (sparse MC) augmented by kernel de-noising to yield a runtime of ~4 min per scan. Scatter correction is combined with two-pass beam hardening correction. Detector lag correction is based on temporal deconvolution of the measured lag response function. The effects of detector veiling glare are reduced by deconvolution of the glare response function representing the long range tails of the detector point-spread function. The performance of the correction framework is quantified in experiments using a realistic head phantom on a testbench for FPD CBCT. Uncorrected reconstructions were non-diagnostic for soft-tissue imaging tasks in the brain. After processing with the artifact correction framework, image uniformity was substantially improved, and artifacts were reduced to a level that enabled visualization of ~3 mm simulated bleeds throughout the brain. Non-uniformity (cupping) was reduced by a factor of 5, and contrast of simulated bleeds was improved from ~7 to 49.7 HU, in good agreement with the nominal blood contrast of 50 HU. Although noise was amplified by the corrections, the contrast-to-noise ratio (CNR) of simulated bleeds was improved by nearly a factor of 3.5 (CNR = 0.54 without corrections and 1.91 after correction). The resulting image quality motivates further development and translation of the FPD-CBCT system for imaging of acute TBI.
Some photometric techniques for atmosphereless solar system bodies.
Lumme, K; Peltoniemi, J; Irvine, W M
1990-01-01
We discuss various photometric techniques and their absolute scales in relation to the information that can be derived from the relevant data. We also outline a new scattering model for atmosphereless bodies in the solar system and show how it fits Mariner 10 surface photometry of the planet Mercury. It is shown how important the correct scattering law is when deriving topography by photoclinometry.
Scattering of Acoustic Waves from Ocean Boundaries
2014-09-30
of buried mines and improve SONAR performance in shallow water. OBJECTIVES: 1) Determination of the correct physical model of acoustic propagation... Nicholas Chotiros, particularly for theoretical development of bulk acoustic/sediment modeling and laser roughness measurements. REFERENCES: C... PUBLICATIONS: 1. M. Isakson and N. Chotiros, Finite Element Modeling of Acoustic
NASA Astrophysics Data System (ADS)
Dang, H.; Stayman, J. W.; Sisniega, A.; Xu, J.; Zbijewski, W.; Wang, X.; Foos, D. H.; Aygun, N.; Koliatsos, V. E.; Siewerdsen, J. H.
2015-08-01
Non-contrast CT reliably detects fresh blood in the brain and is the current front-line imaging modality for intracranial hemorrhage such as that occurring in acute traumatic brain injury (contrast ~40-80 HU, size > 1 mm). We are developing flat-panel detector (FPD) cone-beam CT (CBCT) to facilitate such diagnosis in a low-cost, mobile platform suitable for point-of-care deployment. Such a system may offer benefits in the ICU, urgent care/concussion clinic, ambulance, and sports and military theatres. However, current FPD-CBCT systems face significant challenges that confound low-contrast, soft-tissue imaging. Artifact correction can overcome major sources of bias in FPD-CBCT but imparts noise amplification in filtered backprojection (FBP). Model-based reconstruction improves soft-tissue image quality compared to FBP by leveraging a high-fidelity forward model and image regularization. In this work, we develop a novel penalized weighted least-squares (PWLS) image reconstruction method with a noise model that includes accurate modeling of the noise characteristics associated with the two dominant artifact corrections (scatter and beam-hardening) in CBCT and utilizes modified weights to compensate for noise amplification imparted by each correction. Experiments included real data acquired on a FPD-CBCT test-bench and an anthropomorphic head phantom emulating intra-parenchymal hemorrhage. The proposed PWLS method demonstrated superior noise-resolution tradeoffs in comparison to FBP and PWLS with conventional weights (viz. at matched 0.50 mm spatial resolution, CNR = 11.9 compared to CNR = 5.6 and CNR = 9.9, respectively) and substantially reduced image noise especially in challenging regions such as skull base. The results support the hypothesis that with high-fidelity artifact correction and statistical reconstruction using an accurate post-artifact-correction noise model, FPD-CBCT can achieve image quality allowing reliable detection of intracranial hemorrhage.
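A toy numpy sketch of the PWLS idea: gradient descent on a weighted quadratic data term plus a quadratic roughness penalty, with per-measurement statistical weights that can be lowered where artifact corrections amplify noise. The system matrix, regularizer, and weights below are stand-ins, not the paper's forward model or modified-weight scheme.

```python
import numpy as np

def pwls_reconstruct(A, y, weights, beta=0.1, n_iter=500, step=None):
    """Penalized weighted least squares: minimize
        0.5 * (A x - y)^T W (A x - y) + beta * ||D x||^2
    with W = diag(weights) and D a simple first-difference roughness operator.
    Down-weighting measurements whose noise was amplified by scatter or
    beam-hardening correction is expressed through `weights`."""
    m, n = A.shape
    D = np.diff(np.eye(n), axis=0)              # 1-D roughness operator
    W = np.diag(weights)
    H = A.T @ W @ A + 2.0 * beta * D.T @ D      # Hessian of the objective
    if step is None:
        step = 1.0 / np.linalg.norm(H, 2)       # safe gradient step size
    x = np.zeros(n)
    for _ in range(n_iter):
        grad = A.T @ (weights * (A @ x - y)) + 2.0 * beta * (D.T @ (D @ x))
        x -= step * grad
    return x

# Toy usage: a small random system with non-uniform statistical weights.
rng = np.random.default_rng(0)
A = rng.normal(size=(80, 40))
x_true = np.convolve(rng.normal(size=40), np.ones(5) / 5, mode="same")
y = A @ x_true + rng.normal(scale=0.05, size=80)
w = rng.uniform(0.5, 1.0, size=80)              # lower weight = noisier ray
x_hat = pwls_reconstruct(A, y, w, beta=0.5)
```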
On the Compton scattering redistribution function in plasma
NASA Astrophysics Data System (ADS)
Madej, J.; Różańska, A.; Majczyna, A.; Należyty, M.
2017-08-01
Compton scattering is the dominant opacity source in hot neutron stars, accretion discs around black holes, and hot coronae. We collect here a set of numerical expressions of the Compton scattering redistribution functions (RFs) for unpolarized radiation, which are more exact than the widely used Kompaneets equation. The principal aim of this paper is the presentation of the RF by Guilbert, corrected for the computational errors in the original paper. This corrected RF was used in a series of papers on model atmosphere computations of hot neutron stars. We have also organized four existing algorithms for the RF computations into a unified form ready to use in radiative transfer and model atmosphere codes. The exact method by Nagirner & Poutanen was numerically compared to all other algorithms in a very wide spectral range from hard X-rays to radio waves. Sample computations of the Compton scattering RFs in thermal plasma were done for temperatures corresponding to the atmospheres of bursting neutron stars and the hot intergalactic medium. Our formulae are also useful to study the Compton scattering of unpolarized microwave background radiation in hot intracluster gas and the Sunyaev-Zeldovich effect. We conclude that the formulae by Guilbert and the exact quantum mechanical formulae yield practically the same RFs for gas temperatures relevant to the atmospheres of X-ray bursting neutron stars, T ≤ 10^8 K.
Schoen, K; Snow, W M; Kaiser, H; Werner, S A
2005-01-01
The neutron index of refraction is generally derived theoretically in the Fermi approximation. However, the Fermi approximation neglects the effects of the binding of the nuclei of a material as well as multiple scattering. Calculations by Nowak introduced correction terms to the neutron index of refraction that are quadratic in the scattering length and of order 10^-3 fm for hydrogen and deuterium. These correction terms produce a small shift in the final value for the coherent scattering length of H2 in a recent neutron interferometry experiment.
Statistical estimation of ultrasonic propagation path parameters for aberration correction.
Waag, Robert C; Astheimer, Jeffrey P
2005-05-01
Parameters in a linear filter model for ultrasonic propagation are found using statistical estimation. The model uses an inhomogeneous-medium Green's function that is decomposed into a homogeneous-transmission term and a path-dependent aberration term. Power and cross-power spectra of random-medium scattering are estimated over the frequency band of the transmit-receive system by using closely situated scattering volumes. The frequency-domain magnitude of the aberration is obtained from a normalization of the power spectrum. The corresponding phase is reconstructed from cross-power spectra of subaperture signals at adjacent receive positions by a recursion. The subapertures constrain the receive sensitivity pattern to eliminate measurement system phase contributions. The recursion uses a Laplacian-based algorithm to obtain phase from phase differences. Pulse-echo waveforms were acquired from a point reflector and a tissue-like scattering phantom through a tissue-mimicking aberration path from neighboring volumes having essentially the same aberration path. Propagation path aberration parameters calculated from the measurements of random scattering through the aberration phantom agree with corresponding parameters calculated for the same aberrator and array position by using echoes from the point reflector. The results indicate the approach describes, in addition to time shifts, waveform amplitude and shape changes produced by propagation through distributed aberration under realistic conditions.
Tang, Bin; Wei, Biao; Wu, De-Cao; Mi, De-Ling; Zhao, Jing-Xiao; Feng, Peng; Jiang, Shang-Hai; Mao, Ben-Jiang
2014-11-01
Eliminating the effect of turbidity is a key technical problem in the direct spectroscopic detection of COD. Detection of key water quality parameters by UV-visible spectroscopy depends on an accurate and effective analytical model, and turbidity is an important parameter affecting that model. In this paper, formazine turbidity solutions and standard solutions of potassium hydrogen phthalate were selected to study the effect of turbidity on UV-visible absorption spectroscopy for COD detection. At the characteristic wavelengths of 245, 300, 360, and 560 nm, the variation of absorbance with turbidity was fitted by least-squares curves, and the dependence of absorbance on turbidity was analyzed. The results show that, in the ultraviolet range of 240 to 380 nm, because the particles causing the turbidity interact with the organic compounds, the effect of turbidity on the ultraviolet spectra of water samples is relatively complicated; in the visible region of 380 to 780 nm, the effect of turbidity on the spectrum weakens as the wavelength increases. On this basis, the multiplicative scatter correction method was studied for calibrating water sample spectra affected by turbidity, and this method can correct the spectral distortion caused by turbidity. Comparison of the spectra before and after treatment shows that the baseline shifts caused by turbidity at the affected wavelengths are effectively corrected, while the spectral features in the ultraviolet region are preserved. Multiplicative scatter correction was then applied to the three selected UV-visible absorption spectra; the experimental results show that, while preserving the characteristics of the UV-visible absorption spectra of the water samples, the method improves the signal-to-noise ratio of spectroscopic COD detection and provides an effective data pre-processing scheme for establishing accurate chemometric measurement methods.
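For reference, the generic multiplicative scatter correction algorithm: each spectrum is regressed against a reference spectrum (commonly the ensemble mean) to obtain an additive offset a and a multiplicative factor b, and is then corrected as (x - a)/b. This is the standard MSC formulation and not necessarily the exact variant applied in the paper.

```python
import numpy as np

def multiplicative_scatter_correction(spectra, reference=None):
    """Standard MSC: regress each spectrum against a reference spectrum
    (default: the ensemble mean) to get an additive offset a and a
    multiplicative factor b, then return (x - a) / b.

    spectra : array of shape (n_samples, n_wavelengths)
    """
    spectra = np.asarray(spectra, dtype=float)
    if reference is None:
        reference = spectra.mean(axis=0)
    corrected = np.empty_like(spectra)
    for i, x in enumerate(spectra):
        b, a = np.polyfit(reference, x, 1)   # model: x ~ a + b * reference
        corrected[i] = (x - a) / b
    return corrected
```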
García-Garduño, Olivia A; Rodríguez-Ávila, Manuel A; Lárraga-Gutiérrez, José M
2018-01-01
Silicon-diode-based detectors are commonly used for the dosimetry of small radiotherapy beams due to their relatively small volumes and high sensitivity to ionizing radiation. Nevertheless, silicon-diode-based detectors tend to over-respond in small fields because of their high density relative to water. For that reason, detector-specific beam correction factors have been recommended not only to correct the total scatter factors but also to correct the tissue-maximum and off-axis ratios. However, the application of these correction factors to in-depth and off-axis locations has not been studied. The goal of this work is to address the impact of the correction factors on the calculated dose distribution in static non-conventional photon beams (specifically, in stereotactic radiosurgery with circular collimators). To achieve this goal, the total scatter factors, tissue-maximum ratios, and off-axis ratios were measured with a stereotactic field diode for 4.0-, 10.0-, and 20.0-mm circular collimators. The irradiation was performed with a Novalis® linear accelerator using a 6-MV photon beam. The detector-specific correction factors were calculated and applied to the experimental dosimetry data for in-depth and off-axis locations. The corrected and uncorrected dosimetry data were used to commission a treatment planning system for radiosurgery planning. Various plans were calculated with simulated lesions using the uncorrected and corrected dosimetry. The resulting dose calculations were compared using the gamma index test with several criteria. The results of this work present important conclusions for the use of detector-specific beam correction factors in a treatment planning system. The use of the correction factors for total scatter factors has an important impact on monitor unit calculation. On the contrary, the use of the correction factors for tissue-maximum and off-axis ratios does not have an important impact on the dose distribution calculated by the treatment planning system. This conclusion is only valid for the combination of treatment planning system, detector, and correction factors used in this work; however, the technique can be applied to other treatment planning systems, detectors, and correction factors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, J; Park, Y; Sharp, G
Purpose: To establish a method to evaluate the dosimetric impact of anatomic changes in head and neck patients during proton therapy by using scatter-corrected cone-beam CT (CBCT) images. Methods: The water equivalent path length (WEPL) was calculated to the distal edge of PTV contours by using tomographic images available for six head and neck patients who received photon therapy. The proton range variation was measured by calculating the difference between the distal WEPLs calculated with the planning CT and weekly treatment CBCT images. By performing an automatic rigid registration, a six degrees-of-freedom (DOF) correction was made to the CBCT images to account for the patient setup uncertainty. For accurate WEPL calculations, an existing CBCT scatter correction algorithm, whose performance was already proven for phantom images, was calibrated for head and neck patient images. Specifically, two different image similarity measures, mutual information (MI) and mean square error (MSE), were tested for the deformable image registration (DIR) in the CBCT scatter correction algorithm. Results: The impact of weight loss was reflected in the distal WEPL differences, with the automatic rigid registration reducing the influence of patient setup uncertainty on the WEPL calculation results. The WEPL difference averaged over the distal area was 2.9 ± 2.9 mm across all fractions of the six patients, and its maximum, mostly found at the last available fraction, was 6.2 ± 3.4 mm. The MSE-based DIR successfully registered each treatment CBCT image to the planning CT image. On the other hand, the MI-based DIR deformed the skin voxels in the planning CT image to the immobilization mask in the treatment CBCT image, most of which was cropped out of the planning CT image. Conclusion: The dosimetric impact of anatomic changes was evaluated by calculating the distal WEPL difference with the existing scatter-correction algorithm appropriately calibrated. Jihun Kim, Yang-Kyun Park, Gregory Sharp, and Brian Winey have received grant support from the NCI Federal Share of program income earned by Massachusetts General Hospital on C06 CA059267, Proton Therapy Research and Treatment Center.
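For illustration, a simplified water-equivalent path length (WEPL) calculation along a single ray is sketched below, assuming a relative stopping power (RSP) volume derived from the planning CT or scatter-corrected CBCT is available as a callable; the uniform step size, the ray parameterization and the names are assumptions rather than the authors' implementation.

```python
import numpy as np

def wepl_along_ray(rsp_volume, start, direction, stop_depth, step_mm=1.0):
    """Accumulate water-equivalent path length along a ray.

    rsp_volume : callable (x, y, z) -> relative stopping power at that point
    start      : ray entry point (mm)
    direction  : beam direction vector (need not be normalized)
    stop_depth : geometric distance (mm) from start to the distal point
                 (e.g. the distal edge of the PTV contour)
    """
    direction = np.asarray(direction, float)
    direction /= np.linalg.norm(direction)
    start = np.asarray(start, float)
    n_steps = int(np.ceil(stop_depth / step_mm))
    wepl = 0.0
    for k in range(n_steps):
        point = start + (k + 0.5) * step_mm * direction
        wepl += rsp_volume(*point) * step_mm  # RSP times geometric step = water-equivalent step
    return wepl

# the range variation reported above would then be the difference of two such sums,
# one evaluated on the planning CT and one on the scatter-corrected CBCT
```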
Electron kinetic effects on optical diagnostics in fusion plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mirnov, V. V.; Den Hartog, D. J.; Duff, J.
At anticipated high electron temperatures in ITER, the effects of electron thermal motion on Thomson scattering (TS), toroidal interferometer/polarimeter (TIP) and poloidal polarimeter (PoPola) diagnostics will be significant and must be accurately treated. We calculate electron thermal corrections to the interferometric phase and polarization state of an EM wave propagating along tangential and poloidal chords (Faraday and Cotton-Mouton polarimetry) and perform an analysis of the degree of polarization for incoherent TS. The precision of the previous lowest-order model, linear in τ = T_e/m_e c², may be insufficient; we present a more precise model with τ²-order corrections to satisfy the high accuracy required for ITER TIP and PoPola diagnostics. The linear model is extended from Maxwellian to a more general class of anisotropic electron distributions that allows us to take into account distortions caused by equilibrium current, ECRH and RF current drive effects. The classical problem of the degree of polarization of incoherent Thomson scattered radiation is solved analytically exactly, without any approximations, for the full range of incident polarizations, scattering angles, and electron thermal motion from non-relativistic to ultra-relativistic. The results are discussed in the context of the possible use of the polarization properties of Thomson scattered light as a method of T_e measurement relevant to ITER operational scenarios.
Improvements in mode-based waveform modeling and application to Eurasian velocity structure
NASA Astrophysics Data System (ADS)
Panning, M. P.; Marone, F.; Kim, A.; Capdeville, Y.; Cupillard, P.; Gung, Y.; Romanowicz, B.
2006-12-01
We introduce several recent improvements to mode-based 3D and asymptotic waveform modeling and examine how to integrate them with numerical approaches for an improved model of upper-mantle structure under eastern Eurasia. The first step in our approach is to create a large-scale starting model including shear anisotropy using Nonlinear Asymptotic Coupling Theory (NACT; Li and Romanowicz, 1995), which models the 2D sensitivity of the waveform to the great-circle path between source and receiver. We have recently improved this approach by implementing new crustal corrections which include a non-linear correction for the difference between the average structure of several large regions from the global model with further linear corrections to account for the local structure along the path between source and receiver (Marone and Romanowicz, 2006; Panning and Romanowicz, 2006). This model is further refined using a 3D implementation of Born scattering (Capdeville, 2005). We have made several recent improvements to this method, in particular introducing the ability to represent perturbations to discontinuities. While the approach treats all sensitivity as linear perturbations to the waveform, we have also experimented with a non-linear modification analogous to that used in the development of NACT. This allows us to treat large accumulated phase delays determined from a path-average approximation non-linearly, while still using the full 3D sensitivity of the Born approximation. Further refinement of shallow regions of the model is obtained using broadband forward finite-difference waveform modeling. We are also integrating a regional Spectral Element Method code into our tomographic modeling, allowing us to move beyond many assumptions inherent in the analytic mode-based approaches, while still taking advantage of their computational efficiency. Illustrations of the effects of these increasingly sophisticated steps will be presented.
Space-based retrieval of NO2 over biomass burning regions: quantifying and reducing uncertainties
NASA Astrophysics Data System (ADS)
Bousserez, N.
2014-10-01
The accuracy of space-based nitrogen dioxide (NO2) retrievals from solar backscatter radiances critically depends on a priori knowledge of the vertical profiles of NO2 and aerosol optical properties. This information is used to calculate an air mass factor (AMF), which accounts for atmospheric scattering and is used to convert the measured line-of-sight "slant" columns into vertical columns. In this study we investigate the impact of biomass burning emissions on the AMF in order to quantify NO2 retrieval errors in the Ozone Monitoring Instrument (OMI) products over these sources. Sensitivity analyses are conducted using the Linearized Discrete Ordinate Radiative Transfer (LIDORT) model. The NO2 and aerosol profiles are obtained from a 3-D chemistry-transport model (GEOS-Chem), which uses the Fire Locating and Monitoring of Burning Emissions (FLAMBE) daily biomass burning emission inventory. Aircraft in situ data collected during two field campaigns, the Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) and the Dust and Biomass-burning Experiment (DABEX), are used to evaluate the modeled aerosol optical properties and NO2 profiles over Canadian boreal fires and West African savanna fires, respectively. Over both domains, the effect of biomass burning emissions on the AMF through the modified NO2 shape factor can be as high as -60%. A sensitivity analysis also revealed that the effect of aerosol and shape factor perturbations on the AMF is very sensitive to surface reflectance and clouds. As an illustration, the aerosol correction can range from -20 to +100% for different surface reflectances, while the shape factor correction varies from -70 to -20%. Although previous studies have shown that in clear-sky conditions the effect of aerosols on the AMF was in part implicitly accounted for by the modified cloud parameters, here it is suggested that when clouds are present above a surface layer of scattering aerosols, an explicit aerosol correction would be beneficial to the NO2 retrieval. Finally, a new method that uses slant column information to correct for shape-factor-related AMF error over NOx emission sources is proposed, with possible application to near-real-time OMI retrievals.
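The conversion from slant to vertical columns described above rests on the standard air mass factor formulation; a minimal sketch of that calculation is given below, with the discretized layer form and the variable names being illustrative assumptions.

```python
import numpy as np

def air_mass_factor(scattering_weights, partial_columns):
    """AMF = sum_z w(z) * x(z) / sum_z x(z).

    scattering_weights : per-layer sensitivities from a radiative transfer model
                         (e.g. LIDORT), dimensionless
    partial_columns    : a priori NO2 partial columns per layer (the shape factor
                         up to normalization), e.g. from GEOS-Chem
    """
    w = np.asarray(scattering_weights, float)
    x = np.asarray(partial_columns, float)
    return np.sum(w * x) / np.sum(x)

def vertical_column(slant_column, amf):
    # the measured slant column is converted to a vertical column with the AMF
    return slant_column / amf
```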
Measurement of event shape variables in deep inelastic e p scattering
NASA Astrophysics Data System (ADS)
Adloff, C.; Aid, S.; Anderson, M.; Andreev, V.; Andrieu, B.; Arkadov, V.; Arndt, C.; Ayyaz, I.; Babaev, A.; Bähr, J.; Bán, J.; Baranov, P.; Barrelet, E.; Barschke, R.; Bartel, W.; Bassler, U.; Beck, H. P.; Beck, M.; Behrend, H.-J.; Belousov, A.; Berger, Ch.; Bernardi, G.; Bertrand-Coremans, G.; Beyer, R.; Biddulph, P.; Bizot, J. C.; Borras, K.; Botterweck, F.; Boudry, V.; Bourov, S.; Braemer, A.; Braunschweig, W.; Brisson, V.; Brown, D. P.; Brückner, W.; Bruel, P.; Bruncko, D.; Brune, C.; Bürger, J.; Büsser, F. W.; Buniatian, A.; Burke, S.; Buschhorn, G.; Calvet, D.; Campbell, A. J.; Carli, T.; Charlet, M.; Clarke, D.; Clerbaux, B.; Cocks, S.; Contreras, J. G.; Cormack, C.; Coughlan, J. A.; Cousinou, M.-C.; Cox, B. E.; Cozzika, G.; Cussans, D. G.; Cvach, J.; Dagoret, S.; Dainton, J. B.; Dau, W. D.; Daum, K.; David, M.; de Roeck, A.; de Wolf, E. A.; Delcourt, B.; Dirkmann, M.; Dixon, P.; Dlugosz, W.; Dollfus, C.; Donovan, K. T.; Dowell, J. D.; Dreis, H. B.; Droutskoi, A.; Ebert, J.; Ebert, T. R.; Eckerlin, G.; Efremenko, V.; Egli, S.; Eichler, R.; Eisele, F.; Eisenhandler, E.; Elsen, E.; Erdmann, M.; Fahr, A. B.; Favart, L.; Fedotov, A.; Felst, R.; Feltesse, J.; Ferencei, J.; Ferrarotto, F.; Flamm, K.; Fleischer, M.; Flieser, M.; Flügge, G.; Fomenko, A.; Formánek, J.; Foster, J. M.; Franke, G.; Gabathuler, E.; Gabathuler, K.; Gaede, F.; Garvey, J.; Gayler, J.; Gebauer, M.; Gerhards, R.; Glazov, A.; Goerlich, L.; Gogitidze, N.; Goldberg, M.; Gonzalez-Pineiro, B.; Gorelov, I.; Grab, C.; Grässler, H.; Greenshaw, T.; Griffiths, R. K.; Grindhammer, G.; Gruber, A.; Gruber, C.; Hadig, T.; Haidt, D.; Hajduk, L.; Haller, T.; Hampel, M.; Haynes, W. J.; Heinemann, B.; Heinzelmann, G.; Henderson, R. C. W.; Hengstmann, S.; Henschel, H.; Herynek, I.; Hess, M. F.; Hewitt, K.; Hiller, K. H.; Hilton, C. D.; Hladký, J.; Höppner, M.; Hoffmann, D.; Holtom, T.; Horisberger, R.; Hudgson, V. L.; Hütte, M.; Ibbotson, M.; İşsever, Ç.; Itterbeck, H.; Jacquet, M.; Jaffre, M.; Janoth, J.; Jansen, D. M.; Jönsson, L.; Johnson, D. P.; Jung, H.; Kalmus, P. I. P.; Kander, M.; Kant, D.; Kathage, U.; Katzy, J.; Kaufmann, H. H.; Kaufmann, O.; Kausch, M.; Kazarian, S.; Kenyon, I. R.; Kermiche, S.; Keuker, C.; Kiesling, C.; Klein, M.; Kleinwort, C.; Knies, G.; Köhler, T.; Köhne, J. H.; Kolanoski, H.; Kolya, S. D.; Korbel, V.; Kostka, P.; Kotelnikov, S. K.; Krämerkämper, T.; Krasny, M. W.; Krehbiel, H.; Krücker, D.; Küpper, A.; Küster, H.; Kuhlen, M.; Kurča, T.; Laforge, B.; Landon, M. P. J.; Lange, W.; Langenegger, U.; Lebedev, A.; Lehner, F.; Lemaitre, V.; Levonian, S.; Lindstroem, M.; Linsel, F.; Lipinski, J.; List, B.; Lobo, G.; Lopez, G. C.; Lubimov, V.; Lüke, D.; Lytkin, L.; Magnussen, N.; Mahlke-Krüger, H.; Malinovski, E.; Maraček, R.; Marage, P.; Marks, J.; Marshall, R.; Martens, J.; Martin, G.; Martin, R.; Martyn, H.-U.; Martyniak, J.; Mavroidis, T.; Maxfield, S. J.; McMahon, S. J.; Mehta, A.; Meier, K.; Merkel, P.; Metlica, F.; Meyer, A.; Meyer, A.; Meyer, H.; Meyer, J.; Meyer, P.-O.; Migliori, A.; Mikocki, S.; Milstead, D.; Moeck, J.; Moreau, F.; Morris, J. V.; Mroczko, E.; Müller, D.; Müller, K.; Murín, P.; Nagovizin, V.; Nahnhauer, R.; Naroska, B.; Naumann, Th.; Négri, I.; Newman, P. R.; Newton, D.; Nguyen, H. K.; Nicholls, T. C.; Niebergall, F.; Niebuhr, C.; Niedzballa, Ch.; Niggli, H.; Nowak, G.; Nunnemann, T.; Oberlack, H.; Olsson, J. E.; Ozerov, D.; Palmen, P.; Panaro, E.; Panitch, A.; Pascaud, C.; Passaggio, S.; Patel, G. D.; Pawletta, H.; Peppel, E.; Perez, E.; Phillips, J. 
P.; Pieuchot, A.; Pitzl, D.; Pöschl, R.; Pope, G.; Povh, B.; Rabbertz, K.; Reimer, P.; Rick, H.; Reiss, S.; Rizvi, E.; Robmann, P.; Roosen, R.; Rosenbauer, K.; Rostovtsev, A.; Rouse, F.; Royon, C.; Rüter, K.; Rusakov, S.; Rybicki, K.; Sankey, D. P. C.; Schacht, P.; Schiek, S.; Schleif, S.; Schleper, P.; von Schlippe, W.; Schmidt, D.; Schmidt, G.; Schoeffel, L.; Schöning, A.; Schröder, V.; Schuhmann, E.; Schwab, B.; Sefkow, F.; Semenov, A.; Shekelyan, V.; Sheviakov, I.; Shtarkov, L. N.; Siegmon, G.; Siewert, U.; Sirois, Y.; Skillicorn, I. O.; Sloan, T.; Smirnov, P.; Smith, M.; Solochenko, V.; Soloviev, Y.; Specka, A.; Spiekermann, J.; Spielman, S.; Spitzer, H.; Squinabol, F.; Steffen, P.; Steinberg, R.; Steinhart, J.; Stella, B.; Stellberger, A.; Stiewe, J.; Stößlein, U.; Stolze, K.; Straumann, U.; Struczinski, W.; Sutton, J. P.; Tapprogge, S.; Taševský, M.; Tchernyshov, V.; Tchetchelnitski, S.; Theissen, J.; Thompson, G.; Thompson, P. D.; Tobien, N.; Todenhagen, R.; Truöl, P.; Tsipolitis, G.; Turnau, J.; Tzamariudaki, E.; Uelkes, P.; Usik, A.; Valkár, S.; Valkárová, A.; Vallée, C.; van Esch, P.; van Mechelen, P.; Vandenplas, D.; Vazdik, Y.; Verrecchia, P.; Villet, G.; Wacker, K.; Wagener, A.; Wagener, M.; Wallny, R.; Walter, T.; Waugh, B.; Weber, G.; Weber, M.; Wegener, D.; Wegner, A.; Wengler, T.; Werner, M.; West, L. R.; Wiesand, S.; Wilksen, T.; Willard, S.; Winde, M.; Winter, G.-G.; Wittek, C.; Wobisch, M.; Wollatz, H.; Wünsch, E.; ŽáČek, J.; Zarbock, D.; Zhang, Z.; Zhokin, A.; Zini, P.; Zomer, F.; Zsembery, J.; Zurnedden, M.
1997-02-01
Deep inelastic e p scattering data, taken with the H1 detector at HERA, are used to study the event shape variables thrust, jet broadening and jet mass in the current hemisphere of the Breit frame over a large range of momentum transfers Q between 7 GeV and 100 GeV. The data are compared with results from e+e- experiments. Using second order QCD calculations and an approach relating hadronisation effects to power corrections, an analysis of the Q dependences of the means of the event shape parameters is presented, from which both the power corrections and the strong coupling constant are determined without any assumption on fragmentation models. The power corrections of all event shape variables investigated follow a 1/Q behaviour and can be described by a common parameter α0.
The atmospheric correction algorithm for HY-1B/COCTS
NASA Astrophysics Data System (ADS)
He, Xianqiang; Bai, Yan; Pan, Delu; Zhu, Qiankun
2008-10-01
China launched its second ocean color satellite, HY-1B, on 11 April 2007, carrying two remote sensors. The Chinese Ocean Color and Temperature Scanner (COCTS) is the main sensor on HY-1B; it has not only eight visible and near-infrared wavelength bands similar to those of SeaWiFS, but also two additional thermal infrared bands to measure the sea surface temperature. Therefore, COCTS has broad application potential, such as fishery resource protection and development, coastal monitoring and management, and marine pollution monitoring. Atmospheric correction is the key to quantitative ocean color remote sensing. In this paper, the operational atmospheric correction algorithm for HY-1B/COCTS has been developed. Firstly, based on the vector radiative transfer numerical model of the coupled ocean-atmosphere system (PCOART), the exact Rayleigh scattering look-up table (LUT), aerosol scattering LUT and atmosphere diffuse transmission LUT for HY-1B/COCTS have been generated. Secondly, using the generated LUTs, the operational atmospheric correction algorithm for HY-1B/COCTS has been developed. The algorithm has been validated using the simulated spectral data generated by PCOART, and the result shows that the error of the water-leaving reflectance retrieved by this algorithm is less than 0.0005, which meets the requirement for exact atmospheric correction in ocean color remote sensing. Finally, the algorithm has been applied to the HY-1B/COCTS remote sensing data; the retrieved water-leaving radiances are consistent with the Aqua/MODIS results, and the corresponding ocean color remote sensing products have been generated, including the chlorophyll concentration and total suspended particle matter concentration.
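As a rough illustration of how the look-up tables enter the correction, the following sketch implements the basic single-band ocean-colour atmospheric correction (TOA reflectance minus the Rayleigh and aerosol terms, divided by the diffuse transmittance); the LUT terms are passed in as precomputed numbers and all names are assumptions, not the HY-1B/COCTS operational code.

```python
def water_leaving_reflectance(rho_toa, rho_rayleigh, rho_aerosol, t_diffuse):
    """Basic ocean-colour atmospheric correction for one band.

    rho_toa      : top-of-atmosphere reflectance measured by the sensor
    rho_rayleigh : Rayleigh scattering reflectance (from a LUT)
    rho_aerosol  : aerosol (plus coupling) reflectance (from a LUT)
    t_diffuse    : diffuse transmittance of the atmosphere (from a LUT)
    """
    return (rho_toa - rho_rayleigh - rho_aerosol) / t_diffuse

# a retrieval error below 0.0005 in water-leaving reflectance, as reported above,
# requires the three LUT terms to come from a full radiative transfer code
```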
NASA Astrophysics Data System (ADS)
Dang, H.; Stayman, J. W.; Sisniega, A.; Xu, J.; Zbijewski, W.; Yorkston, J.; Aygun, N.; Koliatsos, V.; Siewerdsen, J. H.
2015-03-01
Traumatic brain injury (TBI) is a major cause of death and disability. The current front-line imaging modality for TBI detection is CT, which reliably detects intracranial hemorrhage (fresh blood contrast 30-50 HU, size down to 1 mm) in non-contrast-enhanced exams. Compared to CT, flat-panel detector (FPD) cone-beam CT (CBCT) systems offer lower cost, greater portability, and smaller footprint suitable for point-of-care deployment. We are developing FPD-CBCT to facilitate TBI detection at the point-of-care such as in emergency, ambulance, sports, and military applications. However, current FPD-CBCT systems generally face challenges in low-contrast, soft-tissue imaging. Model-based reconstruction can improve image quality in soft-tissue imaging compared to conventional filtered back-projection (FBP) by leveraging a high-fidelity forward model and sophisticated regularization. In FPD-CBCT TBI imaging, measurement noise characteristics undergo substantial change following artifact correction, resulting in non-negligible noise amplification. In this work, we extend penalized weighted least-squares (PWLS) image reconstruction to include the two dominant artifact corrections (scatter and beam hardening) in FPD-CBCT TBI imaging by correctly modeling the variance change following each correction. Experiments were performed on a CBCT test-bench using an anthropomorphic phantom emulating intra-parenchymal hemorrhage in acute TBI, and the proposed method demonstrated an improvement in blood-brain contrast-to-noise ratio (CNR = 14.2) compared to FBP (CNR = 9.6) and PWLS using conventional weights (CNR = 11.6) at fixed spatial resolution (1 mm edge-spread width at the target contrast). The results support the hypothesis that FPD-CBCT can fulfill the image quality requirements for reliable TBI detection, using high-fidelity artifact correction and statistical reconstruction with accurate post-artifact-correction noise models.
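A schematic of how statistical weights can track the variance change introduced by scatter subtraction and subsequent corrections is sketched below, assuming Poisson counting statistics plus additive electronic noise; the specific variance model, the first-order propagation through a beam-hardening correction, and the function names are assumptions, not the authors' exact formulation.

```python
import numpy as np

def pwls_weights(y, scatter_est, y0, var_electronic=0.0, bh_derivative=None):
    """Per-ray statistical weights after artifact corrections.

    y              : measured detector counts
    scatter_est    : estimated scatter to be subtracted (treated as deterministic)
    y0             : unattenuated (air) counts
    var_electronic : additive electronic noise variance
    bh_derivative  : optional callable giving d(corrected line integral)/d(raw line
                     integral) for the beam-hardening correction
    """
    y = np.asarray(y, float)
    y_corr = np.clip(y - scatter_est, 1e-6, None)        # scatter-corrected counts
    var_y = y + var_electronic                            # Poisson + electronic variance
    line_integral = -np.log(y_corr / y0)
    var_l = var_y / y_corr**2                             # variance after the log transform
    if bh_derivative is not None:
        var_l = var_l * bh_derivative(line_integral)**2   # first-order propagation
    return 1.0 / var_l                                    # PWLS weight = inverse variance
```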
NASA Technical Reports Server (NTRS)
Clancy, R. T.; Lee, S. W.
1991-01-01
An analysis of emission-phase-function (EPF) observations from the Viking Orbiter Infrared Thermal Mapper (IRTM) yields a wide variety of results regarding dust and cloud scattering in the Mars atmosphere and atmospheric-corrected albedos for the surface of Mars. A multiple scattering radiative transfer model incorporating a bidirectional phase function for the surface and atmospheric scattering by dust and clouds is used to derive surface albedos and dust and ice optical properties and optical depths for these various conditions on Mars.
A phenomenological π-p scattering length from pionic hydrogen
NASA Astrophysics Data System (ADS)
Ericson, T. E. O.; Loiseau, B.; Wycech, S.
2004-07-01
We derive a closed, model-independent expression for the electromagnetic correction factor to a phenomenological hadronic scattering length a_h extracted from a hydrogenic atom. It is obtained in a non-relativistic approach and in the limit of a short-ranged hadronic interaction, to terms of order α² log α, using an extended charge distribution. A hadronic πN scattering length a^h_{π⁻p} = 0.0870(5) m_π⁻¹ is deduced, leading to a πNN coupling constant from the GMO relation g_c²/(4π) = 14.04(17).
NASA Astrophysics Data System (ADS)
Shivaei, Irene; Reddy, Naveen A.; Shapley, Alice E.; Kriek, Mariska; Siana, Brian; Mobasher, Bahram; Coil, Alison L.; Freeman, William R.; Sanders, Ryan; Price, Sedona H.; de Groot, Laura; Azadi, Mojegan
2015-12-01
We present results on the star formation rate (SFR) versus stellar mass (M*) relation (i.e., the “main sequence”) among star-forming galaxies at 1.37 ≤ z ≤ 2.61 using the MOSFIRE Deep Evolution Field (MOSDEF) survey. Based on a sample of 261 galaxies with Hα and Hβ spectroscopy, we have estimated robust dust-corrected instantaneous SFRs over a large range in M* (~10^9.5-10^11.5 M⊙). We find a correlation between log(SFR(Hα)) and log(M*) with a slope of 0.65 ± 0.08 (0.58 ± 0.10) at 1.4 < z < 2.6 (2.1 < z < 2.6). We find that different assumptions for the dust correction, such as using the color excess of the stellar continuum to correct the nebular lines, sample selection biases against red star-forming galaxies, and not accounting for Balmer absorption, can yield steeper slopes of the log(SFR)-log(M*) relation. Our sample is immune from these biases as it is rest-frame optically selected, Hα and Hβ are corrected for Balmer absorption, and the Hα luminosity is dust corrected using the nebular color excess computed from the Balmer decrement. The scatter of the log(SFR(Hα))-log(M*) relation, after accounting for the measurement uncertainties, is 0.31 dex at 2.1 < z < 2.6, which is 0.05 dex larger than the scatter in log(SFR(UV))-log(M*). Based on comparisons to a simulated SFR-M* relation with some intrinsic scatter, we argue that in the absence of direct measurements of galaxy-to-galaxy variations in the attenuation/extinction curves and the initial mass function, one cannot use the difference in the scatter of the SFR(Hα)- and SFR(UV)-M* relations to constrain the stochasticity of star formation in high-redshift galaxies.
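For readers unfamiliar with the dust-correction step, the sketch below applies a standard Balmer-decrement correction to an Hα flux and converts it to an SFR; the attenuation-curve coefficients, the intrinsic Hα/Hβ ratio of 2.86 and the Kennicutt-type calibration constant are illustrative assumptions that depend on the adopted curve and IMF, not necessarily the values used in the survey.

```python
import numpy as np

def dust_corrected_sfr_halpha(f_ha, f_hb, lum_dist_cm,
                              k_ha=2.53, k_hb=3.61, intrinsic_ratio=2.86,
                              sfr_per_erg_s=7.9e-42):
    """Dust-correct an Halpha flux with the Balmer decrement and convert to an SFR.

    f_ha, f_hb    : observed, Balmer-absorption-corrected Halpha and Hbeta fluxes
    lum_dist_cm   : luminosity distance in cm
    k_ha, k_hb    : attenuation curve evaluated at Halpha/Hbeta (illustrative values)
    sfr_per_erg_s : SFR calibration constant (Kennicutt-type, IMF dependent)
    """
    # nebular colour excess from the Balmer decrement
    ebv = 2.5 / (k_hb - k_ha) * np.log10((f_ha / f_hb) / intrinsic_ratio)
    ebv = max(ebv, 0.0)                      # no negative attenuation
    a_ha = k_ha * ebv                        # attenuation at Halpha in magnitudes
    f_ha_corr = f_ha * 10 ** (0.4 * a_ha)
    l_ha = 4.0 * np.pi * lum_dist_cm**2 * f_ha_corr
    return sfr_per_erg_s * l_ha              # SFR in Msun / yr
```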
Exact Time-Dependent Exchange-Correlation Potential in Electron Scattering Processes
NASA Astrophysics Data System (ADS)
Suzuki, Yasumitsu; Lacombe, Lionel; Watanabe, Kazuyuki; Maitra, Neepa T.
2017-12-01
We identify peak and valley structures in the exact exchange-correlation potential of time-dependent density functional theory that are crucial for time-resolved electron scattering in a model one-dimensional system. These structures are completely missed by adiabatic approximations that, consequently, significantly underestimate the scattering probability. A recently proposed nonadiabatic approximation is shown to correctly capture the approach of the electron to the target when the initial Kohn-Sham state is chosen judiciously, and it is more accurate than standard adiabatic functionals but ultimately fails to accurately capture reflection. These results may explain the underestimation of scattering probabilities in some recent studies on molecules and surfaces.
Liu, Xinming; Shaw, Chris C; Wang, Tianpeng; Chen, Lingyun; Altunbas, Mustafa C; Kappadath, S Cheenu
2006-02-28
We developed and investigated a scanning sampled measurement (SSM) technique for scatter measurement and correction in cone beam breast CT imaging. A cylindrical polypropylene phantom (water equivalent) was mounted on a rotating table in a stationary-gantry experimental cone beam breast CT imaging system. A 2-D array of lead beads, with the beads spaced about 1 cm apart and slightly tilted vertically, was placed between the object and the x-ray source. A series of projection images were acquired as the phantom was rotated 1 degree per projection view and the lead bead array was shifted vertically from one projection view to the next. A series of lead bars were also placed at the phantom edge to produce better scatter estimation across the phantom edges. Image signals in the lead bead/bar shadows were used to obtain sampled scatter measurements, which were then interpolated to form an estimated scatter distribution across the projection images. The image data behind the lead bead/bar shadows were restored by interpolating image data from two adjacent projection views to form beam-block-free projection images. The estimated scatter distribution was then subtracted from the corresponding restored projection image to obtain the scatter-removed projection images. Our preliminary experiment has demonstrated that it is feasible to implement the SSM technique for scatter estimation and correction in cone beam breast CT imaging. Scatter correction was successfully performed on all projection images using the scatter distribution interpolated from SSM and the restored projection image data. The resultant scatter-corrected projection image data resulted in elevated CT numbers and largely reduced cupping effects.
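A minimal sketch of the scatter-estimation step is shown below: the signal sampled in the beam-blocker shadows is interpolated across the detector and subtracted from the (restored) projection. The use of scipy's griddata, the cubic/nearest interpolation choices and the variable names are assumptions for illustration; the restoration of the primary signal behind the shadows from adjacent views is not included.

```python
import numpy as np
from scipy.interpolate import griddata

def scatter_correct_projection(projection, shadow_rows, shadow_cols):
    """Estimate and subtract the scatter distribution from one projection.

    projection              : 2-D projection image (counts)
    shadow_rows, shadow_cols: pixel coordinates of the lead-bead/bar shadow centres,
                              where the detected signal is, to a good approximation,
                              scatter only
    """
    samples = projection[shadow_rows, shadow_cols]
    rows, cols = np.mgrid[0:projection.shape[0], 0:projection.shape[1]]
    # smooth, low-frequency scatter estimate from the sparse samples
    scatter = griddata((shadow_rows, shadow_cols), samples,
                       (rows, cols), method='cubic')
    # fall back to nearest-neighbour values near the detector edges
    nearest = griddata((shadow_rows, shadow_cols), samples,
                       (rows, cols), method='nearest')
    scatter = np.where(np.isnan(scatter), nearest, scatter)
    return projection - scatter
```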
Energy flow and charged particle spectra in deep inelastic scattering at HERA
NASA Astrophysics Data System (ADS)
Abt, I.; Ahmed, T.; Andreev, V.; Aid, S.; Andrieu, B.; Appuhn, R.-D.; Arpagaus, M.; Babaev, A.; Bärwolff, H.; Bán, J.; Baranov, P.; Barrelet, E.; Bartel, W.; Bassler, U.; Beck, H. P.; Behrend, H.-J.; Belousov, A.; Berger, Ch.; Bergstein, H.; Bernardi, G.; Bernet, R.; Bertrand-Coremans, G.; Besançon, M.; Biddulph, P.; Binder, E.; Bizot, J. C.; Blobel, V.; Borras, K.; Bosetti, P. C.; Boudry, V.; Bourdarios, C.; Braemer, A.; Brasse, F.; Braun, U.; Braunschweig, W.; Brisson, V.; Bruncko, D.; Büngener, L.; Bürger, J.; Büsser, F. W.; Buniatian, A.; Burke, S.; Buschhorn, G.; Campbell, A. J.; Carli, T.; Charles, F.; Chyla, J.; Clarke, D.; Clegg, A. B.; Colombo, M.; Coughlan, J. A.; Courau, A.; Coutures, Ch.; Cozzika, G.; Criegee, L.; Cvach, J.; Dagoret, S.; Dainton, J. B.; Danilov, M.; Dann, A. W. E.; Dau, W. D.; David, M.; Deffur, E.; Delcourt, B.; Del Buono, L.; Devel, M.; de Roeck, A.; di Nezza, P.; Dingus, P.; Dollfus, C.; Dowell, J. D.; Dreis, H. B.; Drescher, A.; Duboc, J.; Düllmann, D.; Dünger, O.; Duhm, H.; Ebbinghaus, R.; Eberle, M.; Ebert, J.; Ebert, T. R.; Eckerlin, G.; Efremenko, V.; Egli, S.; Ehrlichmann, H.; Eichenberger, S.; Eichler, R.; Eisele, F.; Eisenhandler, E.; Ellis, N. N.; Ellison, R. J.; Elsen, E.; Erdmann, M.; Evrard, E.; Favart, L.; Fedotov, A.; Feeken, D.; Felst, R.; Feltesse, J.; Fensome, I. F.; Ferencei, J.; Ferrarotto, F.; Flamm, K.; Flauger, W.; Fleischer, M.; Flieser, M.; Flügge, G.; Fomenko, A.; Fominykh, B.; Forbush, M.; Formánek, J.; Foster, J. M.; Franke, G.; Fretwurst, E.; Fuhrmann, P.; Gabathuler, E.; Gamerdinger, K.; Garvey, J.; Gayler, J.; Gebauer, M.; Gellrich, A.; Gennis, M.; Genzel, H.; Gerhards, R.; Godfrey, L.; Goerlach, U.; Goerlich, L.; Gogitidze, N.; Goldberg, M.; Goldner, D.; Goodall, A. M.; Gorelov, I.; Goritchev, P.; Grab, C.; Grässler, H.; Grässler, R.; Greenshaw, T.; Greif, H.; Grindhammer, G.; Gruber, A.; Gruber, C.; Haack, J.; Haidt, D.; Hajduk, L.; Hamon, O.; Hampel, M.; Hanlon, E. M.; Hapke, M.; Harjes, J.; Haydar, R.; Haynes, W. J.; Heatherington, J.; Hedberg, V.; Heinzelmann, G.; Henderson, R. C. W.; Henschel, H.; Herma, R.; Herynek, I.; Hildesheim, W.; Hill, P.; Hilton, C. D.; Hladký, J.; Hoeger, K. C.; Höppner, M.; Huet, Ph.; Hufnagel, H.; Huot, N.; Ibbotson, M.; Itterbeck, H.; Jabiol, M.-A.; Jacholkowska, A.; Jacobsson, C.; Jaffre, M.; Jansen, T.; Jönsson, L.; Johannsen, K.; Johnson, D. P.; Johnson, L.; Jung, H.; Kalmus, P. I. P.; Kant, D.; Kazarian, S.; Kaschowitz, R.; Kasselmann, P.; Kathage, U.; Kaufmann, H. H.; Kenyon, I. R.; Kermiche, S.; Keuker, C.; Kiesling, C.; Klein, M.; Kleinwort, C.; Knies, G.; Ko, W.; Köhler, T.; Kolanoski, H.; Kole, F.; Kolya, S. D.; Korbel, V.; Korn, M.; Kostka, P.; Kotelnikov, S. K.; Krasny, M. W.; Krücker, D.; Krüger, U.; Kubenka, J. P.; Küster, H.; Kuhlen, M.; Kurča, T.; Kurzhöfer, J.; Kuznik, B.; Lacour, D.; Lamarche, F.; Lander, R.; Landon, M. P. J.; Lange, W.; Langkau, R.; Lanius, P.; Laporte, J. F.; Lebedev, A.; Leuschner, A.; Leverenz, C.; Levonian, S.; Lewin, D.; Ley, Ch.; Lindner, A.; Lindström, G.; Linsel, F.; Lipinski, J.; Loch, P.; Lohmander, H.; Lopez, G. C.; Lüers, D.; Lüke, D.; Magnussen, N.; Malinovski, E.; Mani, S.; Marage, P.; Marks, J.; Marshall, R.; Martens, J.; Martin, R.; Martyn, H.-U.; Martyniak, J.; Masson, S.; Mavroidis, A.; Maxfield, S. J.; McMahon, S. J.; Mehta, A.; Meier, K.; Mercer, D.; Merz, T.; Meyer, C. A.; Meyer, H.; Meyer, J.; Mikocki, S.; Monnier, E.; Moreau, F.; Moreels, J.; Morris, J. V.; Müller, K.; Murín, P.; Murray, S. 
A.; Nagovizin, V.; Naroska, B.; Naumann, Th.; Newman, P. R.; Newton, D.; Neyret, D.; Nguyen, H. K.; Niebergall, F.; Niebuhr, C.; Nisius, R.; Nowak, G.; Noyes, G. W.; Nyberg, M.; Oberlack, H.; Obrock, U.; Olsson, J. E.; Orenstein, S.; Ould-Saada, F.; Pascaud, C.; Patel, G. D.; Peppel, E.; Peters, S.; Phillips, H. T.; Phillips, J. P.; Pichler, Ch.; Pilgram, W.; Pitzl, D.; Prell, S.; Prosi, R.; Rädel, G.; Raupach, F.; Rauschnabel, K.; Reimer, P.; Reinshagen, S.; Ribarics, P.; Riech, V.; Riedlberger, J.; Riess, S.; Rietz, M.; Robertson, S. M.; Robmann, P.; Roosen, R.; Rosenbauer, K.; Rostovtsev, A.; Royon, C.; Rudowicz, M.; Ruffer, M.; Rusakov, S.; Rybicki, K.; Sahlmann, N.; Sanchez, E.; Sankey, D. P. C.; Savitsky, M.; Schacht, P.; Schleper, P.; von Schlippe, W.; Schmidt, C.; Schmidt, D.; Schmitz, W.; Schöning, A.; Schröder, V.; Schuhmann, E.; Schulz, M.; Schwab, B.; Schwind, A.; Scobel, W.; Seehausen, U.; Sell, R.; Semenov, A.; Shekelyan, V.; Sheviakov, I.; Shooshtari, H.; Shtarkov, L. N.; Siegmon, G.; Siewert, U.; Sirois, Y.; Skillicorn, I. O.; Smirnov, P.; Smith, J. R.; Soloviev, Y.; Spitzer, H.; Steenbock, M.; Steffen, P.; Steinberg, R.; Stella, B.; Stephens, K.; Stier, J.; Stösslein, U.; Strachota, J.; Straumann, U.; Struczinski, W.; Sutton, J. P.; Taylor, R. E.; Tchernyshov, V.; Thiebaux, C.; Thompson, G.; Tichomirov, I.; Truöl, P.; Turnau, J.; Tutas, J.; Urban, L.; Usik, A.; Valkar, S.; Valkarova, A.; Vallée, C.; van Esch, P.; Vartapetian, A.; Vazdik, Y.; Vecko, M.; Verrecchia, P.; Vick, R.; Villet, G.; Vogel, E.; Wacker, K.; Walker, I. W.; Walther, A.; Weber, G.; Wegener, D.; Wegener, A.; Wellisch, H. P.; West, L. R.; Willard, S.; Winde, M.; Winter, G.-G.; Wolff, Th.; Womersley, L. A.; Wright, A. E.; Wulff, N.; Yiou, T. P.; Žáček, J.; Zeitnitz, C.; Ziaeepour, H.; Zimmer, M.; Zimmermann, W.; Zomer, F.
1994-09-01
Global properties of the hadronic final state in deep inelastic scattering events at HERA are investigated. The data are corrected for detector effects and are compared directly with QCD phenomenology. Energy flows in both the laboratory frame and the hadronic centre of mass system and energy-energy correlations in the laboratory frame are presented. Comparing various QCD models, the colour dipole model provides the only satisfactory description of the data. In the hadronic centre of mass system the momentum components of charged particles longitudinal and transverse to the virtual boson direction are measured and compared with lower energy lepton-nucleon scattering data as well as with e+e- data from LEP.
Neutron crosstalk between liquid scintillators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verbeke, J. M.; Prasad, M. K.; Snyderman, N. J.
2015-05-01
We propose a method to quantify the fractions of neutrons scattering between liquid scintillators. Using a spontaneous fission source, this method can be utilized to quickly characterize an array of liquid scintillators in terms of crosstalk. The point model theory due to Feynman is corrected to account for these multiple scatterings. Using spectral information measured by the liquid scintillators, fractions of multiple scattering can be estimated, and mass reconstruction of fissile materials under investigation can be improved. Monte Carlo simulations of mono-energetic neutron sources were performed to estimate neutron crosstalk. A californium source in an array of liquid scintillators was modeled to illustrate the improvement of the mass reconstruction.
NASA Technical Reports Server (NTRS)
Gould, R. J.
1979-01-01
Higher-order electromagnetic processes involving particles at ultrahigh energies are discussed, with particular attention given to Compton scattering with the emission of an additional photon (double Compton scattering). Double Compton scattering may have significance in the interaction of a high-energy electron with the cosmic blackbody photon gas. At high energies the cross section for double Compton scattering is large, though this effect is largely canceled by the effects of radiative corrections to ordinary Compton scattering. A similar cancellation takes place for radiative pair production and the associated radiative corrections to the radiationless process. This cancellation is related to the well-known cancellation of the infrared divergence in electrodynamics.
Bio-Optics of the Chesapeake Bay from Measurements and Radiative Transfer Calculations
NASA Technical Reports Server (NTRS)
Tzortziou, Maria; Herman, Jay R.; Gallegos, Charles L.; Neale, Patrick J.; Subramaniam, Ajit; Harding, Lawrence W., Jr.; Ahmad, Ziauddin
2005-01-01
We combined detailed bio-optical measurements and radiative transfer (RT) modeling to perform an optical closure experiment for optically complex and biologically productive Chesapeake Bay waters. We used this experiment to evaluate certain assumptions commonly used when modeling bio-optical processes, and to investigate the relative importance of several optical characteristics needed to accurately model and interpret remote sensing ocean-color observations in these Case 2 waters. Direct measurements were made of the magnitude, variability, and spectral characteristics of backscattering and absorption that are critical for accurate parameterizations in satellite bio-optical algorithms and underwater RT simulations. We found that the ratio of backscattering to total scattering in the mid-mesohaline Chesapeake Bay varied considerably depending on particulate loading, distance from land, and mixing processes, and had an average value of 0.0128 at 530 nm. Incorporating information on the magnitude, variability, and spectral characteristics of particulate backscattering into the RT model, rather than using a volume scattering function commonly assumed for turbid waters, was critical to obtaining agreement between RT calculations and measured radiometric quantities. In situ measurements of absorption coefficients need to be corrected for systematic overestimation due to scattering errors, and this correction commonly employs the assumption that absorption by particulate matter at near infrared wavelengths is zero.
Subleading Regge limit from a soft anomalous dimension
NASA Astrophysics Data System (ADS)
Brüser, Robin; Caron-Huot, Simon; Henn, Johannes M.
2018-04-01
Wilson lines capture important features of scattering amplitudes, for example soft effects relevant for infrared divergences, and the Regge limit. Beyond the leading power approximation, corrections to the eikonal picture have to be taken into account. In this paper, we study such corrections in a model of massive scattering amplitudes in N=4 super Yang-Mills, in the planar limit, where the mass is generated through a Higgs mechanism. Using known three-loop analytic expressions for the scattering amplitude, we find that the first power suppressed term has a very simple form, equal to a single power law. We propose that its exponent is governed by the anomalous dimension of a Wilson loop with a scalar inserted at the cusp, and we provide perturbative evidence for this proposal. We also analyze other limits of the amplitude and conjecture an exact formula for a total cross-section at high energies.
Robust incremental compensation of the light attenuation with depth in 3D fluorescence microscopy.
Kervrann, C; Legland, D; Pardini, L
2004-06-01
Fluorescent signal intensities from confocal laser scanning microscopes (CLSM) suffer from several distortions inherent to the method. Namely, layers which lie deeper within the specimen are relatively dark due to absorption and scattering of both excitation and fluorescent light, photobleaching and/or other factors. Because of these effects, a quantitative analysis of images is not always possible without correction. Under certain assumptions, the decay of intensities can be estimated and used for a partial depth intensity correction. In this paper we propose an original robust incremental method for compensating the attenuation of intensity signals. Most previous correction methods are more or less empirical and based on fitting a decreasing parametric function to the section mean intensity curve computed by summing all pixel values in each section. The fitted curve is then used for the calculation of correction factors for each section and a new compensated sections series is computed. However, these methods do not perfectly correct the images. Hence, the algorithm we propose for the automatic correction of intensities relies on robust estimation, which automatically ignores pixels where measurements deviate from the decay model. It is based on techniques adopted from the computer vision literature for image motion estimation. The resulting algorithm is used to correct volumes acquired in CLSM. An implementation of such a restoration filter is discussed and examples of successful restorations are given.
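As a simplified analogue of the decay-model fitting discussed above, the sketch below fits an exponential attenuation model to the per-section mean intensities with a robust (soft-L1) loss and rescales each section accordingly; it is a stand-in for, not a reproduction of, the incremental pixel-wise method proposed in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def depth_attenuation_correction(stack):
    """Fit I(z) = I0 * exp(-a * z) robustly to section means and rescale each section.

    stack : 3-D array (z, y, x) of CLSM intensities
    """
    z = np.arange(stack.shape[0], dtype=float)
    means = stack.reshape(stack.shape[0], -1).mean(axis=1)

    def residuals(p):
        i0, a = p
        return i0 * np.exp(-a * z) - means

    # soft_l1 loss down-weights sections that deviate strongly from the decay model
    fit = least_squares(residuals, x0=[means[0], 0.01], loss='soft_l1')
    i0, a = fit.x
    gain = np.exp(a * z)                       # per-section correction factors
    return stack * gain[:, None, None], (i0, a)
```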
Survey of background scattering from materials found in small-angle neutron scattering.
Barker, J G; Mildner, D F R
2015-08-01
Measurements and calculations of beam attenuation and background scattering for common materials placed in a neutron beam are presented over the temperature range of 300-700 K. Time-of-flight (TOF) measurements have also been made, to determine the fraction of the background that is either inelastic or quasi-elastic scattering as measured with a 3He detector. Other background sources considered include double Bragg diffraction from windows or samples, scattering from gases, and phonon scattering from solids. Background from the residual air in detector vacuum vessels and scattering from the 3He detector dome are presented. The thickness dependence of the multiple scattering correction for forward scattering from water is calculated. Inelastic phonon background scattering at small angles for crystalline solids is both modeled and compared with measurements. Methods of maximizing the signal-to-noise ratio by material selection, choice of sample thickness and wavelength, removal of inelastic background by TOF or Be filters, and removal of spin-flip scattering with polarized beam analysis are discussed.
Kaneta, Tomohiro; Kurihara, Hideyuki; Hakamatsuka, Takashi; Ito, Hiroshi; Maruoka, Shin; Fukuda, Hiroshi; Takahashi, Shoki; Yamada, Shogo
2004-12-01
123I-15-(p-iodophenyl)-3-(R,S)-methylpentadecanoic acid (BMIPP) and 99mTc-tetrofosmin (TET) are widely used for evaluation of myocardial fatty acid metabolism and perfusion, respectively. ECG-gated TET SPECT is also used for evaluation of myocardial wall motion. These tests are often performed on the same day to minimize both the time required and inconvenience to patients and medical staff. However, as 123I and 99mTc have similar emission energies (159 keV and 140 keV, respectively), it is necessary to consider not only scattered photons, but also primary photons of each radionuclide detected in the wrong window (cross-talk). In this study, we developed and evaluated the effectiveness of a new scatter and cross-talk correction imaging protocol. Fourteen patients with ischemic heart disease or heart failure (8 men and 6 women with a mean age of 69.4 yr, ranging from 45 to 94 yr) were enrolled in this study. In the routine one-day acquisition protocol, BMIPP SPECT was performed in the morning, with TET SPECT performed 4 h later. An additional SPECT was performed just before injection of TET with the energy window for 99mTc. These data correspond to the scatter and cross-talk factor of the next TET SPECT. The correction was performed by subtraction of the scatter and cross-talk factor from TET SPECT. Data are presented as means +/- S.E. Statistical analyses were performed using Wilcoxon's matched-pairs signed-ranks test, and p < 0.05 was considered significant. The percentage of scatter and cross-talk relative to the corrected total count was 26.0 +/- 5.3%. EDV and ESV after correction were significantly greater than those before correction (p = 0.019 and 0.016, respectively). After correction, EF was smaller than that before correction, but the difference was not significant. Perfusion scores (17 segments per heart) were significantly lower after as compared with those before correction (p < 0.001). Scatter and cross-talk correction revealed significant differences in EDV, ESV, and perfusion scores. These observations indicate that scatter and cross-talk correction is required for one-day acquisition of 123I-BMIPP and 99mTc-tetrofosmin SPECT.
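The correction described amounts to a projection-by-projection subtraction of the pre-injection acquisition (which contains only 123I photons detected in the 99mTc window) from the TET data; a minimal sketch follows, in which the optional decay scaling between the two scans is an added assumption rather than part of the reported protocol.

```python
import numpy as np

IODINE_123_HALFLIFE_MIN = 13.2 * 60.0   # ~13.2 h

def crosstalk_corrected_tet(tet_projections, crosstalk_projections, minutes_between=0.0):
    """Subtract the measured 123I scatter/cross-talk from the 99mTc (TET) projections.

    crosstalk_projections : acquired in the 99mTc energy window just before TET
                            injection, so they contain only 123I photons
    minutes_between       : delay between the cross-talk scan and the TET scan;
                            scaling for 123I decay over that delay is an assumption
                            added here for illustration
    """
    decay = 0.5 ** (minutes_between / IODINE_123_HALFLIFE_MIN)
    corrected = (np.asarray(tet_projections, float)
                 - decay * np.asarray(crosstalk_projections, float))
    return np.clip(corrected, 0.0, None)   # no negative counts
```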
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costa, Romarly F. da; Centro de Ciências Naturais e Humanas, Universidade Federal do ABC, 09210-580 Santo André, São Paulo; Oliveira, Eliane M. de
2015-03-14
We report theoretical and experimental total cross sections for electron scattering by phenol (C6H5OH). The experimental data were obtained with an apparatus based in Madrid and the calculated cross sections with two different methodologies, the independent atom method with screening corrected additivity rule (IAM-SCAR), and the Schwinger multichannel method with pseudopotentials (SMCPP). The SMCPP method in the N_open-channel coupling scheme, at the static-exchange-plus-polarization approximation, is employed to calculate the scattering amplitudes at impact energies ranging from 5.0 eV to 50 eV. We discuss the multichannel coupling effects in the calculated cross sections, in particular how the number of excited states included in the open-channel space impacts upon the convergence of the elastic cross sections at higher collision energies. The IAM-SCAR approach was also used to obtain the elastic differential cross sections (DCSs) and for correcting the experimental total cross sections for the so-called forward angle scattering effect. We found a very good agreement between our SMCPP theoretical differential, integral, and momentum transfer cross sections and experimental data for benzene (a molecule differing from phenol by replacing a hydrogen atom in benzene with a hydroxyl group). Although some discrepancies were found for lower energies, the agreement between the SMCPP data and the DCSs obtained with the IAM-SCAR method improves, as expected, as the impact energy increases. We also have a good agreement among the present SMCPP calculated total cross section (which includes elastic, 32 inelastic electronic excitation processes and ionization contributions, the latter estimated with the binary-encounter-Bethe model), the IAM-SCAR total cross section, and the experimental data when the latter is corrected for the forward angle scattering effect [Fuss et al., Phys. Rev. A 88, 042702 (2013)].
Improving Lidar Turbulence Estimates for Wind Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, Jennifer F.; Clifton, Andrew; Churchfield, Matthew J.
2016-10-06
Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. This presentation primarily focuses on the physics-based corrections, which include corrections for instrument noise, volume averaging, and variance contamination. As different factors affect TI under different stability conditions, the combination of physical corrections applied in L-TERRA changes depending on the atmospheric stability during each 10-minute time period. This stability-dependent version of L-TERRA performed well at both sites, reducing TI error and bringing lidar TI estimates closer to estimates from instruments on towers. However, there is still scatter evident in the lidar TI estimates, indicating that there are physics that are not being captured in the current version of L-TERRA. Two options are discussed for modeling the remainder of the TI error physics in L-TERRA: machine learning and lidar simulations. Lidar simulations appear to be a better approach, as they can help improve understanding of atmospheric effects on TI error and do not require a large training data set.
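One of the physics-based corrections mentioned above, removal of instrument noise from the measured wind-speed variance, can be written compactly as below; the noise-variance input and the names are assumptions, not the L-TERRA implementation.

```python
import numpy as np

def noise_corrected_ti(wind_speeds, noise_variance):
    """Remove instrument noise variance from a 10-minute turbulence intensity estimate.

    wind_speeds    : samples of horizontal wind speed in the averaging period (m/s)
    noise_variance : estimated variance contributed by instrument noise (m^2/s^2)
    """
    u = np.asarray(wind_speeds, float)
    mean_u = u.mean()
    var_u = u.var(ddof=1)
    var_corr = max(var_u - noise_variance, 0.0)   # noise adds in quadrature
    return np.sqrt(var_corr) / mean_u             # TI is sigma_u / mean_u
```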
Inversion Schemes to Retrieve Atmospheric and Oceanic Parameters from SeaWiFS Data
NASA Technical Reports Server (NTRS)
Frouin, Robert; Deschamps, Pierre-Yves
1997-01-01
Firstly, we have analyzed atmospheric transmittance and sky radiance data collected at the Scripps Institution of Oceanography pier, La Jolla, during the winters of 1993 and 1994. Aerosol optical thickness at 870 nm was generally low in La Jolla, with most values below 0.1 after correction for stratospheric aerosols. For such low optical thickness, variability in aerosol scattering properties cannot be determined, and a mean background model, specified regionally with a stable stratospheric component, may be sufficient for ocean color remote sensing from space. For optical thicknesses above 0.1, two modes of variability, characterized by Angstrom exponents of 1.2 and 0.5 and corresponding to Tropospheric and Maritime models, respectively, were identified in the measurements. The aerosol models selected for ocean color remote sensing allowed one to fit, within measurement inaccuracies, the derived values of the Angstrom exponent and the 'pseudo' phase function (the product of single scattering albedo and phase function), key atmospheric correction parameters. Importantly, the 'pseudo' phase function can be derived from measurements of the Angstrom exponent. Shipborne sun photometer measurements at the time of satellite overpass are usually sufficient to verify the atmospheric correction for ocean color.
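The Angstrom exponent used to separate the two aerosol modes follows directly from the optical thickness measured at two wavelengths; a small sketch (with made-up example values) is given below.

```python
import numpy as np

def angstrom_exponent(tau1, lambda1_nm, tau2, lambda2_nm):
    """Angstrom exponent alpha from aerosol optical thickness at two wavelengths:
    tau(lambda) ~ lambda^(-alpha)  =>  alpha = -ln(tau1/tau2) / ln(lambda1/lambda2)."""
    return -np.log(tau1 / tau2) / np.log(lambda1_nm / lambda2_nm)

# an exponent near 1.2 would point to the Tropospheric-like mode and ~0.5 to the
# Maritime-like mode discussed above (example values, not measurements)
print(angstrom_exponent(0.12, 500.0, 0.09, 870.0))
```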
Yan, Hao; Mou, Xuanqin; Tang, Shaojie; Xu, Qiong; Zankl, Maria
2010-11-07
Scatter correction is an open problem in x-ray cone beam (CB) CT. The measurement of scatter intensity with a moving beam stop array (BSA) is a promising technique that offers a low patient dose and accurate scatter measurement. However, when restoring the blocked primary fluence behind the BSA, spatial interpolation cannot restore the high-frequency part well, causing streaks in the reconstructed image. To address this problem, we derive a projection correlation (PC) to utilize the redundancy (over-determined information) in neighbouring CB views. PC indicates that the main high-frequency information is contained in neighbouring angular projections, rather than in the current projection itself, which provides a guiding principle for high-frequency information restoration. On this basis, we present the projection correlation based view interpolation (PC-VI) algorithm and validate that it outperforms spatial interpolation alone. The PC-VI based moving BSA method is then developed: PC-VI is employed instead of spatial interpolation, and new moving modes are designed, which greatly improve the reliability and practicability of the moving BSA method. Evaluation is performed on a high-resolution voxel-based human phantom, realistically simulating the entire procedure of scatter measurement with a moving BSA by analytical ray-tracing plus Monte Carlo simulation with EGSnrc. With the proposed method, we obtain visually artefact-free images approaching the ideal correction. Compared with the spatial interpolation based method, the relative mean square error is reduced by a factor of 6.05-15.94 for different slices. PC-VI effectively exploits the redundancy in CB data and therefore has further potential in CBCT studies.
Qualitative and quantitative processing of side-scan sonar data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dwan, F.S.; Anderson, A.L.; Hilde, T.W.C.
1990-06-01
Modern side-scan sonar systems allow vast areas of seafloor to be rapidly imaged and quantitatively mapped in detail. The application of remote sensing image processing techniques can be used to correct for various distortions inherent in raw sonography. Corrections are possible for water column, slant-range, aspect ratio, speckle and striping noise, multiple returns, power drop-off, and for georeferencing. The final products reveal seafloor features and patterns that are geometrically correct, georeferenced, and have improved signal/noise ratio. These products can be merged with other georeferenced data bases for further database management and information extraction. In order to compare data collected by different systems from a common area and to ground truth measurements and geoacoustic models, quantitative correction must be made for calibrated sonar system and bathymetry effects. Such data inversion must account for system source level, beam pattern, time-varying gain, processing gain, transmission loss, absorption, insonified area, and grazing angle effects. Seafloor classification can then be performed on the calculated back-scattering strength using Lambert's Law and regression analysis. Examples are given using both approaches: image analysis and inversion of data based on the sonar equation.
NASA Astrophysics Data System (ADS)
Mohammadi, S. M.; Tavakoli-Anbaran, H.; Zeinali, H. Z.
2017-02-01
The parallel-plate free-air ionization chamber termed FAC-IR-300 was designed at the Atomic Energy Organization of Iran, AEOI. This chamber is used for low- and medium-energy X-ray dosimetry at the primary standard level. In order to evaluate the air kerma, correction factors such as the electron-loss correction factor (ke) and the photon scattering correction factor (ksc) are needed. The ke factor corrects for the charge lost from the collecting volume and the ksc factor corrects for photons scattered into the collecting volume. In this work ke and ksc were estimated by Monte Carlo simulation. These correction factors are calculated for mono-energetic photons. As a result of the simulation data, the ke and ksc values for the FAC-IR-300 ionization chamber are 1.0704 and 0.9982, respectively.
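As an illustration of where such simulated factors enter the measurement, the sketch below folds ke and ksc into a simplified free-air-chamber air-kerma evaluation; the W/e value and the omission of the other standard correction factors (air attenuation, humidity, recombination, etc.) are simplifying assumptions, not the full AEOI procedure.

```python
def air_kerma(charge_C, air_mass_kg, w_over_e=33.97, corrections=(1.0704, 0.9982)):
    """Air kerma from a free-air ionization chamber measurement (simplified).

    charge_C    : collected charge (C)
    air_mass_kg : mass of air in the collecting volume (kg)
    w_over_e    : mean energy per ion pair divided by the electron charge (J/C)
    corrections : multiplicative factors such as k_e (electron loss) and k_sc
                  (photon scatter) from the Monte Carlo simulation above; other
                  factors are omitted in this sketch
    """
    k = 1.0
    for c in corrections:
        k *= c
    return (charge_C / air_mass_kg) * w_over_e * k
```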
NASA Astrophysics Data System (ADS)
Talebpour, Zahra; Tavallaie, Roya; Ahmadi, Seyyed Hamid; Abdollahpour, Assem
2010-09-01
In this study, a new method for the simultaneous determination of penicillin G salts in a pharmaceutical mixture via FT-IR spectroscopy combined with chemometrics was investigated. The mixture of penicillin G salts is a complex system due to the similar analytical characteristics of its components. Partial least squares (PLS) and radial basis function-partial least squares (RBF-PLS) were used to develop the linear and nonlinear relations between spectra and components, respectively. The orthogonal signal correction (OSC) preprocessing method was used to correct for unwanted information, such as spectral overlapping and scattering effects. In order to compare the influence of OSC on the PLS and RBF-PLS models, the optimal linear (PLS) and nonlinear (RBF-PLS) models based on conventional and OSC-preprocessed spectra were established and compared. The obtained results demonstrated that OSC clearly enhanced the performance of both the RBF-PLS and PLS calibration models. Also, in the case of a nonlinear relation between spectra and components, the OSC-RBF-PLS model gave more satisfactory results than the OSC-PLS model, which indicated that OSC was helpful in removing extrinsic deviations from linearity without eliminating nonlinear information related to the components. The chemometric models were tested on an external dataset and finally applied to the analysis of a commercial injection product of penicillin G salts.
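A minimal, simplified stand-in for the preprocessing and calibration chain is sketched below: one orthogonal-signal component is removed by direct orthogonalization against the concentrations, and a PLS model is then fitted with scikit-learn. The single-component OSC variant, the component counts and the names are assumptions, not the algorithm settings used in the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def osc_one_component(X, y):
    """Remove one orthogonal-signal component from X (direct-orthogonalization sketch).

    The leading principal-component score of X is orthogonalized against y and the
    corresponding variation is subtracted from X, so the removed part carries no
    information about the analyte concentrations.
    """
    X = np.asarray(X, float)
    Xc = X - X.mean(axis=0)
    yc = np.asarray(y, float).ravel()
    yc = yc - yc.mean()
    # leading principal-component score of the spectra
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    t = Xc @ vt[0]
    # make the score orthogonal to the concentrations
    t_orth = t - yc * (yc @ t) / (yc @ yc)
    p = Xc.T @ t_orth / (t_orth @ t_orth)
    return X - np.outer(t_orth, p)

# calibration on OSC-filtered spectra (component numbers are illustrative):
# X: (n_samples, n_wavenumbers) FT-IR spectra, Y: (n_samples, n_salts) concentrations
# X_osc = osc_one_component(X, Y[:, 0])
# pls = PLSRegression(n_components=4).fit(X_osc, Y)
# Y_pred = pls.predict(X_osc)
```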
NASA Astrophysics Data System (ADS)
Wang, L. L. W.; Perles, L. A.; Archambault, L.; Sahoo, N.; Mirkovic, D.; Beddar, S.
2012-12-01
Plastic scintillation detectors (PSDs) have many advantages over other detectors in small field dosimetry due to their high spatial resolution, excellent water equivalence and instantaneous readout. However, in proton beams, the PSDs undergo a quenching effect which significantly reduces the signal level when the detector is close to the Bragg peak, where the linear energy transfer (LET) for protons is very high. This study measures the quenching correction factor (QCF) for a PSD in clinical passive-scattering proton beams and investigates the feasibility of using PSDs in depth-dose measurements in proton beams. A polystyrene-based PSD (BCF-12, ϕ0.5 mm × 4 mm) was used to measure the depth-dose curves in a water phantom for monoenergetic unmodulated proton beams of nominal energies 100, 180 and 250 MeV. A Markus plane-parallel ion chamber was also used to get the dose distributions for the same proton beams. From these results, the QCF as a function of depth was derived for these proton beams. Next, the LET depth distributions for these proton beams were calculated by using the MCNPX Monte Carlo code, based on the experimentally validated nozzle models for these passive-scattering proton beams. Then the relationship between the QCF and the proton LET could be derived as an empirical formula. Finally, the obtained empirical formula was applied to the PSD measurements to get the corrected depth-dose curves and they were compared to the ion chamber measurements. A linear relationship between the QCF and LET, i.e. Birks' formula, was obtained for the proton beams studied. The result is in agreement with the literature. The PSD measurements after the quenching corrections agree with ion chamber measurements within 5%. PSDs are good dosimeters for proton beam measurement if the quenching effect is corrected appropriately.
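A hedged sketch of how such a quenching correction could be applied to PSD readings is given below, using a Birks-type dependence on LET; the Birks parameter and overall scale are illustrative placeholders that would in practice be obtained by fitting PSD readings against ion-chamber doses, as done in the work above.

```python
import numpy as np

def quenching_corrected_dose(psd_signal, let_keV_per_um, kB=0.009, scale=1.0):
    """Correct a plastic-scintillator reading for ionization quenching.

    Birks-type model: the measured light per unit dose drops as 1/(1 + kB * LET),
    so the dose is recovered as signal * (1 + kB * LET).
    kB and scale are illustrative values, to be obtained from a fit against
    ion-chamber measurements.
    """
    qcf = 1.0 + kB * np.asarray(let_keV_per_um, float)   # QCF linear in LET
    return scale * np.asarray(psd_signal, float) * qcf
```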
Old, L.; Wojtak, R.; Mamon, G. A.; ...
2015-03-26
Our paper is the second in a series in which we perform an extensive comparison of various galaxy-based cluster mass estimation techniques that utilize the positions, velocities and colours of galaxies. Our aim is to quantify the scatter, systematic bias and completeness of cluster masses derived from a diverse set of 25 galaxy-based methods using two contrasting mock galaxy catalogues based on a sophisticated halo occupation model and a semi-analytic model. Analysing 968 clusters, we find a wide range in the rms errors in log M200c delivered by the different methods (0.18-1.08 dex, i.e. a factor of ~1.5-12), with abundance-matching and richness methods providing the best results, irrespective of the input model assumptions. In addition, certain methods produce a significant number of catastrophic cases where the mass is under- or overestimated by a factor greater than 10. Given the steeply falling high-mass end of the cluster mass function, we recommend that richness- or abundance-matching-based methods be used in conjunction with these methods as a sanity check for studies selecting high-mass clusters. We also see a stronger correlation of the recovered to input number of galaxies for both catalogues in comparison with the group/cluster mass; however, this does not guarantee that the correct member galaxies are being selected. Finally, we did not observe significantly higher scatter for either mock galaxy catalogue. These results have implications for cosmological analyses that utilize the masses, richnesses, or abundances of clusters, which have different uncertainties when different methods are used.
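As a quick illustration of the quoted error range, converting an rms scatter in log10 mass (dex) to a multiplicative factor is simple arithmetic; the snippet below is illustrative only and uses no data from the study.

```python
# Convert an rms scatter quoted in dex (i.e. in log10 of the mass) to a multiplicative factor.
for dex in (0.18, 1.08):
    print(f"{dex:.2f} dex corresponds to a factor of {10**dex:.1f}")
# 0.18 dex -> ~1.5x, 1.08 dex -> ~12x, matching the range quoted above.
```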
Spatial frequency spectrum of the x-ray scatter distribution in CBCT projections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bootsma, G. J.; Verhaegen, F.; Department of Oncology, Medical Physics Unit, McGill University, Montreal, Quebec H3G 1A4
2013-11-15
Purpose: X-ray scatter is a source of significant image quality loss in cone-beam computed tomography (CBCT). The use of Monte Carlo (MC) simulations separating primary and scattered photons has allowed the structure and nature of the scatter distribution in CBCT to become better elucidated. This work seeks to quantify the structure and determine a suitable basis function for the scatter distribution by examining its spectral components using Fourier analysis. Methods: The scatter distribution projection data were simulated using a CBCT MC model based on the EGSnrc code. CBCT projection data, with separated primary and scatter signal, were generated for a 30.6 cm diameter water cylinder [single angle projection with varying axis-to-detector distance (ADD) and bowtie filters] and two anthropomorphic phantoms (head and pelvis, 360 projections sampled every 1°, with and without a compensator). The Fourier transform of the resulting scatter distributions was computed and analyzed both qualitatively and quantitatively. A novel metric called the scatter frequency width (SFW) is introduced to determine the scatter distribution's frequency content. The frequency content results are used to determine a set of basis functions, consisting of low-frequency sine and cosine functions, to fit and denoise the scatter distribution generated from MC simulations using a reduced number of photons and projections. The signal recovery is implemented using Fourier filtering (low-pass Butterworth filter) and interpolation. Estimates of the scatter distribution are used to correct and reconstruct simulated projections. Results: The spatial and angular frequencies are contained within a maximum frequency of 0.1 cm⁻¹ and 7/(2π) rad⁻¹ for the imaging scenarios examined, with these values varying depending on the object and imaging setup (e.g., ADD and compensator). These data indicate that spatial and angular sampling every 5 cm and π/7 rad (∼25°) can be used to properly capture the scatter distribution, with reduced sampling possible depending on the imaging scenario. Using a low-pass Butterworth filter, tuned with the SFW values, to denoise the scatter projection data generated from MC simulations using 10⁶ photons resulted in an error reduction of greater than 85% for estimating the scatter in single and multiple projections. Analysis showed that the use of a compensator helped reduce the error in estimating the scatter distribution from limited photon simulations by more than 37% when compared to the case without a compensator for the head and pelvis phantoms. Reconstructions of simulated head phantom projections corrected by the filtered and interpolated scatter estimates showed improvements in overall image quality. Conclusions: The spatial frequency content of the scatter distribution in CBCT is found to be contained within the low-frequency domain. The frequency content is modulated by both object and imaging parameters (ADD and compensator). The low-frequency nature of the scatter distribution allows for a limited set of sine and cosine basis functions to accurately represent the scatter signal in the presence of noise and reduced data sampling, decreasing MC-based scatter estimation time. Compensator-induced modulation of the scatter distribution reduces the frequency content and improves the fitting results.
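To make the denoising step concrete, the following is a minimal sketch of low-pass Butterworth filtering of a noisy Monte Carlo scatter profile, assuming a cutoff near the reported 0.1 cm⁻¹ maximum spatial frequency; the pixel pitch, filter order, and toy scatter shape are assumptions, not values from the paper.

```python
"""Minimal sketch (not the authors' code): denoise a noisy Monte Carlo scatter
profile with a low-pass Butterworth filter whose cutoff reflects the reported
maximum spatial frequency of ~0.1 cm^-1. Pixel pitch and filter order are
illustrative assumptions."""
import numpy as np
from scipy.signal import butter, filtfilt

pixel_pitch_cm = 0.1                      # assumed detector sampling (cm)
fs = 1.0 / pixel_pitch_cm                 # spatial sampling frequency (cycles/cm)
cutoff = 0.1                              # cycles/cm, from the frequency analysis above

x = np.linspace(0, 40, 400)               # detector coordinate (cm)
scatter_true = 1e3 * np.exp(-((x - 20) / 15) ** 2)   # smooth, low-frequency scatter
scatter_noisy = np.random.poisson(scatter_true)      # few-photon MC estimate

b, a = butter(4, cutoff, btype="low", fs=fs)         # 4th-order low-pass filter
scatter_smooth = filtfilt(b, a, scatter_noisy)       # zero-phase filtering

rel_err = np.mean(np.abs(scatter_smooth - scatter_true)) / scatter_true.max()
print(f"mean |error| after filtering: {rel_err:.3%} of peak scatter")
```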
Measurement of hadronic azimuthal distributions in deep inelastic muon proton scattering
NASA Astrophysics Data System (ADS)
Aubert, J. J.; Bassompierre, G.; Becks, K. H.; Benchouk, C.; Best, C.; Böhm, E.; de Bouard, X.; Brasse, F. W.; Broll, C.; Brown, S.; Carr, J.; Clifft, R. W.; Cobb, J. H.; Coignet, G.; Combley, F.; Court, G. R.; D'Agostini, G.; Dau, W. D.; Davies, J. K.; Déclais, Y.; Dobinson, R. W.; Dosselli, U.; Drees, J.; Edwards, A.; Edwards, M.; Favier, J.; Ferrero, M. I.; Flauger, W.; Forsbach, H.; Gabathuler, E.; Gamet, R.; Gayler, J.; Gerhardt, V.; Gössling, C.; Gregory, P.; Haas, J.; Hamacher, K.; Hayman, P.; Henckes, M.; Korbel, V.; Landgraf, U.; Leenen, M.; Maire, M.; Minssieux, H.; Mohr, W.; Montgomery, H. E.; Moser, K.; Mount, R. P.; Nagy, E.; Nassalski, J.; Norton, P. R.; McNicholas, J.; Osborne, A. M.; Pavel, N.; Payre, P.; Peroni, C.; Pessard, H.; Pietrzyk, U.; Rith, K.; Schneegans, M.; Schneider, A.; Sloan, T.; Stier, H. E.; Stockhausen, W.; Thénard, J. M.; Thompson, J. C.; Urban, L.; Villers, M.; Wahlen, H.; Whalley, M.; Williams, D.; Williams, W. S. C.; Williamson, J.; Wimpenny, S. J.; European Muon Collaboration
1983-10-01
Results on moments of the azimuthal angle ϕ of final state hadrons from 120 GeV and 280 GeV μp scattering are presented. A ϕ asymmetry is observed and its W², Q², z and p_T dependences are compared with model calculations which include intrinsic transverse momentum and first-order QCD corrections. These studies indicate that the observed asymmetry is mainly due to the intrinsic transverse momentum k_T.
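For reference, the azimuthal moments discussed above follow the standard definition of a weighted average over the hadron azimuthal distribution; the expression below is the textbook form, not reproduced from the paper.

```latex
\langle \cos\phi \rangle \;=\;
  \frac{\int_0^{2\pi} \cos\phi \,\frac{dN}{d\phi}\, d\phi}
       {\int_0^{2\pi} \frac{dN}{d\phi}\, d\phi},
\qquad \text{and similarly for } \langle \cos 2\phi \rangle \text{ and } \langle \sin\phi \rangle .
```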
Mirnov, V V; Brower, D L; Den Hartog, D J; Ding, W X; Duff, J; Parke, E
2014-11-01
At anticipated high electron temperatures in ITER, the effects of electron thermal motion on Thomson scattering (TS), toroidal interferometer/polarimeter (TIP), and poloidal polarimeter (PoPola) diagnostics will be significant and must be accurately treated. The precision of the previous lowest-order model, linear in τ = T_e/(m_e c²), may be insufficient; we present a more precise model with τ²-order corrections to satisfy the high accuracy required for the ITER TIP and PoPola diagnostics. The linear model is extended from Maxwellian to a more general class of anisotropic electron distributions that allows us to take into account distortions caused by equilibrium current, ECRH, and RF current drive effects. The classical problem of the degree of polarization of incoherent Thomson scattered radiation is solved analytically and exactly, without any approximations, for the full range of incident polarizations, scattering angles, and electron thermal motion from non-relativistic to ultra-relativistic. The results are discussed in the context of the possible use of the polarization properties of Thomson scattered light as a method of T_e measurement relevant to ITER operational scenarios.
Radiance and polarization of multiple scattered light from haze and clouds.
Kattawar, G W; Plass, G N
1968-08-01
The radiance and polarization of multiple scattered light is calculated from the Stokes' vectors by a Monte Carlo method. The exact scattering matrix for a typical haze and for a cloud whose spherical drops have an average radius of 12 μm is calculated from the Mie theory. The Stokes' vector is transformed in a collision by this scattering matrix and the rotation matrix. The two angles that define the photon direction after scattering are chosen by a random process that correctly simulates the actual distribution functions for both angles. The Monte Carlo results for Rayleigh scattering compare favorably with well-known tabulated results. Curves are given of the reflected and transmitted radiances and polarizations for both the haze and cloud models and for several solar angles, optical thicknesses, and surface albedos. The dependence on these various parameters is discussed.
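The collision update described above can be sketched as a pair of matrix multiplications applied to the Stokes vector; the example below uses the Rayleigh scattering matrix for brevity rather than the Mie matrices of the haze and cloud models, and assumes the scattering and rotation angles have already been sampled.

```python
"""Sketch of the Stokes-vector update at one collision in a polarized Monte Carlo
code, using the Rayleigh scattering matrix for brevity; the study above uses Mie
scattering matrices for haze and cloud droplets instead. Angles are assumed to
have been sampled already from the appropriate distributions."""
import numpy as np

def rotation_matrix(psi):
    """Rotate the Stokes vector (I, Q, U, V) into the scattering plane."""
    c, s = np.cos(2 * psi), np.sin(2 * psi)
    return np.array([[1, 0, 0, 0],
                     [0, c, s, 0],
                     [0, -s, c, 0],
                     [0, 0, 0, 1.0]])

def rayleigh_matrix(theta):
    """Rayleigh scattering (Mueller) matrix for scattering angle theta."""
    c = np.cos(theta)
    return 0.75 * np.array([[1 + c**2, c**2 - 1, 0, 0],
                            [c**2 - 1, 1 + c**2, 0, 0],
                            [0, 0, 2 * c, 0],
                            [0, 0, 0, 2 * c]])

stokes = np.array([1.0, 0.0, 0.0, 0.0])          # unpolarized incident photon
theta, psi = np.deg2rad(60), np.deg2rad(30)      # previously sampled scattering/rotation angles
stokes_out = rayleigh_matrix(theta) @ rotation_matrix(psi) @ stokes
stokes_out /= stokes_out[0]                      # renormalize to unit intensity
print(stokes_out)
```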
Positronium collisions with molecular nitrogen
NASA Astrophysics Data System (ADS)
Wilde, R. S.; Fabrikant, I. I.
2018-05-01
For many atomic and molecular targets positronium (Ps) scattering looks very similar to electron scattering if total scattering cross sections are plotted as functions of the projectile velocity. Recently this similarity was observed for the resonant scattering by the N2 molecule. For correct treatment of Ps-molecule scattering, incorporation of the exchange interaction and short-range correlations is of paramount importance. In the present work we have used a free-electron-gas model to describe these interactions in collisions of Ps with the N2 molecule. The results agree reasonably well with the experiment, but the position of the resonance is somewhat shifted towards lower energies, probably due to the fixed-nuclei approximation employed in the calculations. The partial-wave analysis of the resonant peak shows that its composition is more complex than in the case of e⁻-N2 scattering.
Environmental and Genetic Factors Explain Differences in Intraocular Scattering.
Benito, Antonio; Hervella, Lucía; Tabernero, Juan; Pennos, Alexandros; Ginis, Harilaos; Sánchez-Romera, Juan F; Ordoñana, Juan R; Ruiz-Sánchez, Marcos; Marín, José M; Artal, Pablo
2016-01-01
To study the relative impact of genetic and environmental factors on the variability of intraocular scattering within a classical twin study. A total of 64 twin pairs, 32 monozygotic (MZ) (mean age: 54.9 ± 6.3 years) and 32 dizygotic (DZ) (mean age: 56.4 ± 7.0 years), were measured after a complete ophthalmologic exam had been performed to exclude all ocular pathologies that increase intraocular scatter, such as cataracts. Intraocular scattering was evaluated by using two different techniques based on estimation of the straylight parameter log(S): a compact optical instrument based on the principle of optical integration and a psychophysical measurement. Intraclass correlation coefficients (ICC) were used as descriptive statistics of twin resemblance, and genetic models were fitted to estimate heritability. No statistically significant difference was found between the MZ and DZ groups for age (P = 0.203), best-corrected visual acuity (P = 0.626), cataract gradation (P = 0.701), sex (P = 0.941), optical log(S) (P = 0.386), or psychophysical log(S) (P = 0.568), with only a minor difference in equivalent sphere (P = 0.008). Intraclass correlation coefficients between siblings were similar for scatter parameters: 0.676 in MZ and 0.471 in DZ twins for optical log(S); 0.533 in MZ twins and 0.475 in DZ twins for psychophysical log(S). For equivalent sphere, ICCs were 0.767 in MZ and 0.228 in DZ twins. Conservative estimates of heritability for the measured scattering parameters were 0.39 and 0.20, respectively. Correlations of intraocular scatter (straylight) parameters in the groups of identical and nonidentical twins were similar. Heritability estimates were of limited magnitude, suggesting that both genetic and environmental factors determine the variance of ocular straylight in healthy middle-aged adults.
Modeling of polychromatic attenuation using computed tomography reconstructed images
NASA Technical Reports Server (NTRS)
Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.
1999-01-01
This paper presents a procedure for estimating an accurate model of the CT imaging process including spectral effects. As raw projection data are typically unavailable to the end-user, we adopt a post-processing approach that utilizes the reconstructed images themselves. This approach includes errors from x-ray scatter and the nonidealities of the built-in soft tissue correction into the beam characteristics, which is crucial to beam hardening correction algorithms that are designed to be applied directly to CT reconstructed images. We formulate this approach as a quadratic programming problem and propose two different methods, dimension reduction and regularization, to overcome ill conditioning in the model. For the regularization method we use a statistical procedure, Cross Validation, to select the regularization parameter. We have constructed step-wedge phantoms to estimate the effective beam spectrum of a GE CT-I scanner. Using the derived spectrum, we computed the attenuation ratios for the wedge phantoms and found that the worst case modeling error is less than 3% of the corresponding attenuation ratio. We have also built two test (hybrid) phantoms to evaluate the effective spectrum. Based on these test phantoms, we have shown that the effective beam spectrum provides an accurate model for the CT imaging process. Last, we used a simple beam hardening correction experiment to demonstrate the effectiveness of the estimated beam profile for removing beam hardening artifacts. We hope that this estimation procedure will encourage more independent research on beam hardening corrections and will lead to the development of application-specific beam hardening correction algorithms.
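A simplified sketch of the spectrum-estimation idea follows. The paper formulates the problem as a quadratic program with dimension reduction or regularization selected by cross-validation; the code below instead shows a Tikhonov-regularized nonnegative least-squares fit of spectral weights to step-wedge transmission data, with toy attenuation coefficients and thicknesses standing in for real calibration data.

```python
"""Simplified sketch of spectrum estimation from step-wedge transmission data.
The paper formulates a quadratic program with regularization/dimension reduction;
here a Tikhonov-regularized nonnegative least-squares fit on a linear transmission
model illustrates the idea. mu(E) values and wedge thicknesses are placeholders."""
import numpy as np
from scipy.optimize import nnls

energies = np.linspace(20, 120, 26)               # keV, assumed spectral bins
thicknesses = np.linspace(0.5, 10.0, 20)          # cm, assumed wedge steps
mu = 0.2 + 8.0 * (30.0 / energies) ** 3           # toy attenuation coefficients (1/cm)

w_true = np.exp(-0.5 * ((energies - 70) / 20) ** 2)
w_true /= w_true.sum()                            # "true" normalized spectrum for the demo
A = np.exp(-np.outer(thicknesses, mu))            # model: T_i = sum_E w(E) exp(-mu(E) t_i)
t_meas = A @ w_true + 1e-4 * np.random.randn(len(thicknesses))

lam = 1e-3                                        # regularization weight (the paper tunes this by cross-validation)
A_aug = np.vstack([A, np.sqrt(lam) * np.eye(len(energies))])
b_aug = np.concatenate([t_meas, np.zeros(len(energies))])
w_est, _ = nnls(A_aug, b_aug)                     # nonnegative, regularized spectrum estimate
w_est /= w_est.sum()
print("max bin error:", np.abs(w_est - w_true).max())
```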
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Zhang
GIXSGUI is a MATLAB toolbox that offers both a graphical user interface and script-based access to visualize and process grazing-incidence X-ray scattering data from nanostructures on surfaces and in thin films. It provides routine surface scattering data reduction methods such as geometric correction, one-dimensional intensity linecuts, two-dimensional intensity reshaping, etc. Three-dimensional indexing is also implemented to determine the space group and lattice parameters of buried organized nanoscopic structures in supported thin films.
New approach to CT pixel-based photon dose calculations in heterogeneous media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, J.W.; Henkelman, R.M.
The effects of small cavities on dose in water and the dose in a homogeneous nonunit density medium illustrate that inhomogeneities do not act independently in photon dose perturbation, and serve as two constraints which should be satisfied by approximate methods of computed tomography (CT) pixel-based dose calculations. Current methods at best satisfy only one of the two constraints and show inadequacies in some intermediate geometries. We have developed an approximate method that satisfies both of these constraints and treats much of the synergistic effect of multiple inhomogeneities correctly. The method calculates primary and first-scatter doses by first-order ray tracing, with the first-scatter contribution augmented by a component of second scatter that behaves like first scatter. Multiple-scatter dose perturbation values extracted from small cavity experiments are used in a function which approximates the small residual multiple-scatter dose. For a wide range of geometries tested, our method agrees very well with measurements. The average deviation is less than 2%, with a maximum of 3%. In comparison, calculations based on existing methods can have errors larger than 10%.
Analysis of corrections to the eikonal approximation
NASA Astrophysics Data System (ADS)
Hebborn, C.; Capel, P.
2017-11-01
Various corrections to the eikonal approximation are studied for two- and three-body nuclear collisions with the goal of extending the range of validity of this approximation to beam energies of 10 MeV/nucleon. Wallace's correction does not significantly improve the elastic-scattering cross sections obtained with the usual eikonal approximation. On the contrary, a semiclassical approximation that substitutes the impact parameter with a complex distance of closest approach computed with the projectile-target optical potential efficiently corrects the eikonal approximation. This opens the possibility of analyzing data measured down to 10 MeV/nucleon within eikonal-like reaction models.
Yang, Defu; Chen, Xueli; Peng, Zhen; Wang, Xiaorui; Ripoll, Jorge; Wang, Jing; Liang, Jimin
2013-01-01
Modeling light propagation in the whole body is essential and necessary for optical imaging. However, non-scattering, low-scattering and high absorption regions commonly exist in biological tissues, which lead to inaccuracy of the existing light transport models. In this paper, a novel hybrid light transport model that couples the simplified spherical harmonics approximation (SPN) with the radiosity theory (HSRM) was presented, to accurately describe light transport in turbid media with non-scattering, low-scattering and high absorption heterogeneities. In the model, the radiosity theory was used to characterize the light transport in non-scattering regions and the SPN was employed to handle the scattering problems, including subsets of low-scattering and high absorption. A Neumann source constructed by the light transport in the non-scattering region and formed at the interface between the non-scattering and scattering regions was superposed into the original light source, to couple the SPN with the radiosity theory. The accuracy and effectiveness of the HSRM was first verified with both regular and digital mouse model based simulations and a physical phantom based experiment. The feasibility and applicability of the HSRM was then investigated by a broad range of optical properties. Lastly, the influence of depth of the light source on the model was also discussed. Primary results showed that the proposed model provided high performance for light transport in turbid media with non-scattering, low-scattering and high absorption heterogeneities. PMID:24156077
Wang, Diancheng; Pan, Kai; Subedi, Ramesh R.; ...
2013-08-22
We report on parity-violating asymmetries in the nucleon resonance region measured using 5-6 GeV longitudinally polarized electrons scattering off an unpolarized deuterium target. These results are the first parity-violating asymmetry data in the resonance region beyond the Δ(1232), and provide a verification of quark-hadron duality in the nucleon electroweak γZ interference structure functions at the (10-15)% level. The results are of particular interest to models relevant for calculating the γZ box-diagram corrections to elastic parity-violating electron scattering measurements.
Probing the Interstellar Dust towards the Galactic Centre using X-ray Dust Scattering Halos
NASA Astrophysics Data System (ADS)
Jin, C.; Ponti, G.; Haberl, F.; Smith, R.
2017-10-01
Dust scattering creates an X-ray halo that contains abundant information about the interstellar dust along the source's line-of-sight (LOS), and is most prominent when the LOS column density N_H is high. In this talk, I will present results from our latest study of a bright dust scattering halo around an eclipsing X-ray binary located 1.45 arcmin away from Sgr A*, namely AX J1745.6-2901. This study is based on a large set of XMM-Newton and Chandra observations, and is so far the best dust scattering halo study of an X-ray transient in the Galactic centre (GC). I will show that the foreground dust of AX J1745.6-2901 can be decomposed into two major thick dust layers. One layer contains (66-81)% of the total LOS dust and is several kpc away from the source, and so is most likely to reside in the Galactic disc. The other layer is local to the source. I will also show that the dust scattering halo can cause the source spectrum to depend severely on the source extraction region. Such spectral bias can be corrected by our new Xspec model, which is likely to be applicable to Sgr A* and other GC sources as well.
NASA Astrophysics Data System (ADS)
Custo, Anna; Wells, William M., III; Barnett, Alex H.; Hillman, Elizabeth M. C.; Boas, David A.
2006-07-01
An efficient computation of the time-dependent forward solution for photon transport in a head model is a key capability for performing accurate inversion for functional diffuse optical imaging of the brain. The diffusion approximation to photon transport is much faster to simulate than the physically correct radiative transport equation (RTE); however, it is commonly assumed that scattering lengths must be much smaller than all system dimensions and all absorption lengths for the approximation to be accurate. Neither of these conditions is satisfied in the cerebrospinal fluid (CSF). Since line-of-sight distances in the CSF are small, of the order of a few millimeters, we explore the idea that the CSF scattering coefficient may be modeled by any value from zero up to the order of the typical inverse line-of-sight distance, or approximately 0.3 mm⁻¹, without significantly altering the calculated detector signals or the partial path lengths relevant for functional measurements. We demonstrate this in detail by using a Monte Carlo simulation of the RTE in a three-dimensional head model based on clinical magnetic resonance imaging data, with realistic optode geometries. Our findings lead us to expect that the diffusion approximation will be valid even in the presence of the CSF, with consequences for faster solution of the inverse problem.
Rapid scatter estimation for CBCT using the Boltzmann transport equation
NASA Astrophysics Data System (ADS)
Sun, Mingshan; Maslowski, Alex; Davis, Ian; Wareing, Todd; Failla, Gregory; Star-Lack, Josh
2014-03-01
Scatter in cone-beam computed tomography (CBCT) is a significant problem that degrades image contrast, uniformity and CT number accuracy. One means of estimating and correcting for detected scatter is through an iterative deconvolution process known as scatter kernel superposition (SKS). While the SKS approach is efficient, clinically significant errors on the order of 2-4% (20-40 HU) still remain. We have previously shown that the kernel method can be improved by perturbing the kernel parameters based on reference data provided by limited Monte Carlo simulations of a first-pass reconstruction. In this work, we replace the Monte Carlo modeling with a deterministic Boltzmann solver (AcurosCTS) to generate the reference scatter data in a dramatically reduced time. In addition, the algorithm is improved so that instead of adjusting kernel parameters, we directly perturb the SKS scatter estimates. Studies were conducted on simulated data and on a large pelvis phantom scanned on a tabletop system. The new method reduced average reconstruction errors (relative to a reference scan) from 2.5% to 1.8%, and significantly improved visualization of low-contrast objects. In total, 24 projections were simulated with an AcurosCTS execution time of 22 sec/projection using an 8-core computer. We have ported AcurosCTS to the GPU, and current run-times are approximately 4 sec/projection using two GPUs running in parallel.
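The SKS idea referenced above can be sketched as a fixed-point iteration in which the scatter estimate is the convolution of the current primary estimate with a kernel and is subtracted from the measured signal; the one-dimensional example below uses a toy Gaussian kernel and projection, and does not model the AcurosCTS-based perturbation step.

```python
"""Schematic of scatter kernel superposition (SKS) as an iterative deconvolution,
in 1-D for brevity. The Gaussian kernel, its amplitude, and the projection are toy
placeholders; the Boltzmann-solver perturbation described above is not modeled."""
import numpy as np

x = np.linspace(-20, 20, 401)                                  # detector coordinate (cm)
primary_true = np.exp(-0.2 * np.maximum(0, 15 - np.abs(x)))    # toy primary transmission
kernel = 0.05 * np.exp(-0.5 * (x / 8.0) ** 2)                  # toy scatter kernel (shape assumed)
kernel /= kernel.sum() / 0.3                                   # assumed scatter-to-primary gain ~0.3

scatter_true = np.convolve(primary_true, kernel, mode="same")
measured = primary_true + scatter_true                         # total detected signal

primary_est = measured.copy()
for _ in range(5):                                             # fixed-point iteration
    scatter_est = np.convolve(primary_est, kernel, mode="same")
    primary_est = measured - scatter_est                       # subtract current scatter estimate

err = np.max(np.abs(primary_est - primary_true)) / primary_true.max()
print(f"max residual primary error after 5 iterations: {err:.2%}")
```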
NASA Astrophysics Data System (ADS)
Mobberley, Sean David
Accurate, cross-scanner assessment of in-vivo air density used to quantitatively assess the amount and distribution of emphysema in COPD subjects has remained elusive. Hounsfield units (HU) within tracheal air can be considerably more positive than -1000 HU. With the advent of new dual-source scanners which employ dedicated scatter correction techniques, it is of interest to evaluate how quantitative measures of lung density compare between dual-source and single-source scan modes. This study has sought to characterize in-vivo and phantom-based air metrics using dual-energy computed tomography technology, where the nature of the technology has required adjustments to scatter correction. Anesthetized ovine (N=6) and swine (N=13; more human-like rib cage shape) subjects, a lung phantom and a thoracic phantom were studied using a dual-source MDCT scanner (Siemens Definition Flash). Multiple dual-source dual-energy (DSDE) and single-source (SS) scans taken at different energy levels and scan settings were acquired for direct quantitative comparison. Density histograms were evaluated for the lung, tracheal, water and blood segments. Image data were obtained at 80, 100, 120, and 140 kVp in the SS mode (B35f kernel) and at 80, 100, 140, and 140-Sn (tin filtered) kVp in the DSDE mode (B35f and D30f kernels), in addition to variations in dose, rotation time, and pitch. To minimize the effect of cross-scatter, the phantom scans in the DSDE mode were obtained by reducing the tube current of one of the tubes to its minimum (near zero) value. When using image data obtained in the DSDE mode, the median HU values in the tracheal regions of all animals and the phantom were consistently closer to -1000 HU regardless of reconstruction kernel (chapters 3 and 4). Similarly, HU values of water and blood were consistently closer to their nominal values of 0 HU and 55 HU, respectively. When using image data obtained in the SS mode, the air CT numbers demonstrated a consistent positive shift of up to 35 HU with respect to the nominal -1000 HU value. In vivo data demonstrated considerable variability in tracheal air HU values with SS mode scanning, influenced by local anatomy, while tracheal air was more consistent with DSDE imaging. Scatter effects in the lung parenchyma differed from adjacent tracheal measures. In summary, the data suggest that enhanced scatter correction serves to provide more accurate CT lung density measures sought to quantitatively assess the presence and distribution of emphysema in COPD subjects. The data further suggest that CT images acquired without adequate scatter correction cannot be corrected by linear algorithms, given the variability in tracheal air HU values and the independent scatter effects on lung parenchyma.
An investigation of light transport through scattering bodies with non-scattering regions.
Firbank, M; Arridge, S R; Schweiger, M; Delpy, D T
1996-04-01
Near-infra-red (NIR) spectroscopy is increasingly being used for monitoring cerebral oxygenation and haemodynamics. One current concern is the effect of the clear cerebrospinal fluid upon the distribution of light in the head. There are difficulties in modelling clear layers in scattering systems. The Monte Carlo model should handle clear regions accurately, but is too slow to be used for realistic geometries. The diffusion equation can be solved quickly for realistic geometries, but is only valid in scattering regions. In this paper we describe experiments carried out on a solid slab phantom to investigate the effect of clear regions. The experimental results were compared with the different models of light propagation. We found that the presence of a clear layer had a significant effect upon the light distribution, which was modelled correctly by Monte Carlo techniques, but not by diffusion theory. A novel approach to calculating the light transport was developed, using diffusion theory to analyze the scattering regions combined with a radiosity approach to analyze the propagation through the clear region. Results from this approach were found to agree with both the Monte Carlo and experimental data.
Scattering of charge and spin excitations and equilibration of a one-dimensional Wigner crystal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matveev, K. A.; Andreev, A. V.; Klironomos, A. D.
2014-07-01
We study scattering of charge and spin excitations in a system of interacting electrons in one dimension. At low densities, electrons form a one-dimensional Wigner crystal. To a first approximation, the charge excitations are the phonons in the Wigner crystal, and the spin excitations are described by the Heisenberg model with nearest-neighbor exchange coupling. This model is integrable and thus incapable of describing some important phenomena, such as scattering of excitations off each other and the resulting equilibration of the system. We obtain the leading corrections to this model, including charge-spin coupling and the next-nearest-neighbor exchange in the spin subsystem. We apply the results to the problem of equilibration of the one-dimensional Wigner crystal and find that the leading contribution to the equilibration rate arises from scattering of spin excitations off each other. We discuss the implications of our results for the conductance of quantum wires at low electron densities.
SU-D-206-07: CBCT Scatter Correction Based On Rotating Collimator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, G; Feng, Z; Yin, Y
2016-06-15
Purpose: Scatter correction in cone-beam computed tomography (CBCT) has a clear effect on the removal of image noise and the cupping artifact and on the increase of image contrast. Several methods using a beam blocker for the estimation and subtraction of scatter have been proposed. However, mechanical inconvenience and a propensity for residual artifacts have limited the further evolution of basic and clinical research. Here, we propose a rotating collimator-based approach, in conjunction with reconstruction based on a discrete Radon transform and Tchebichef moments algorithm, to correct scatter-induced artifacts. Methods: A rotating collimator, comprising round tungsten alloy strips, was mounted on a linear actuator. The rotating collimator is divided equally into six portions. The strip spacing is uniform within each portion but staggered between portions. A step motor connected to the rotating collimator drove the blocker around the X-ray source during the CBCT acquisition. CBCT reconstruction based on a discrete Radon transform and Tchebichef moments algorithm was performed. Experimental studies using a water phantom and the Catphan504 phantom were carried out to evaluate the performance of the proposed scheme. Results: The proposed algorithm was tested on both Monte Carlo simulations and actual experiments with the Catphan504 phantom. From the simulation results, the mean square error of the reconstruction error decreases from 16% to 1.18%, the cupping (τcup) from 14.005% to 0.66%, and the peak signal-to-noise ratio increases from 16.9594 to 31.45. From the actual experiments, the induced visual artifacts are significantly reduced. Conclusion: We conducted an experiment on a CBCT imaging system with a rotating collimator to develop and optimize an X-ray scatter control and reduction technique. The proposed method is attractive in applications where high CBCT image quality is critical, for example, dose calculation in adaptive radiation therapy. We want to thank Dr. Lei Xing and Dr. Yong Yang of the Stanford University School of Medicine for this work. This work was jointly supported by NSFC (61471226), the Natural Science Foundation for Distinguished Young Scholars of Shandong Province (JQ201516), and the China Postdoctoral Science Foundation (2015T80739, 2014M551949).
Extraction of the proton radius from electron-proton scattering data
Lee, Gabriel; Arrington, John R.; Hill, Richard J.
2015-07-27
We perform a new analysis of electron-proton scattering data to determine the proton electric and magnetic radii, enforcing model-independent constraints from form factor analyticity. A wide-ranging study of possible systematic effects is performed. An improved analysis is developed that rebins data taken at identical kinematic settings and avoids a scaling assumption of systematic errors with statistical errors. Employing standard models for radiative corrections, our improved analysis of the 2010 Mainz A1 Collaboration data yields a proton electric radius r_E = 0.895(20) fm and magnetic radius r_M = 0.776(38) fm. A similar analysis applied to world data (excluding Mainz data) implies r_E = 0.916(24) fm and r_M = 0.914(35) fm. The Mainz and world values of the charge radius are consistent, and a simple combination yields a value r_E = 0.904(15) fm that is 4σ larger than the CREMA Collaboration muonic hydrogen determination. The Mainz and world values of the magnetic radius differ by 2.7σ, and a simple average yields r_M = 0.851(26) fm. As a result, the circumstances under which published muonic hydrogen and electron scattering data could be reconciled are discussed, including a possible deficiency in the standard radiative correction model which requires further analysis.
Accurate Modeling of Dark-Field Scattering Spectra of Plasmonic Nanostructures.
Jiang, Liyong; Yin, Tingting; Dong, Zhaogang; Liao, Mingyi; Tan, Shawn J; Goh, Xiao Ming; Allioux, David; Hu, Hailong; Li, Xiangyin; Yang, Joel K W; Shen, Zexiang
2015-10-27
Dark-field microscopy is a widely used tool for measuring the optical resonance of plasmonic nanostructures. However, current numerical simulations of dark-field scattering spectra are typically carried out with plane-wave illumination, either at normal incidence or at an oblique angle from one direction. In actual experiments, light is focused onto the sample through an annular ring within a range of glancing angles. In this paper, we present a theoretical model capable of accurately simulating the dark-field light source with an annular ring. Simulations correctly reproduce a counterintuitive blue shift in the scattering spectra from gold nanodisks with diameters beyond 140 nm. We believe that our proposed simulation method can potentially be applied as a general tool for simulating the dark-field scattering spectra of plasmonic nanostructures as well as other dielectric nanostructures with sizes beyond the quasi-static limit.
NASA Astrophysics Data System (ADS)
Carrea, Dario; Abellan, Antonio; Humair, Florian; Matasci, Battista; Derron, Marc-Henri; Jaboyedoff, Michel
2016-03-01
Ground-based LiDAR has been traditionally used for surveying purposes via 3D point clouds. In addition to XYZ coordinates, an intensity value is also recorded by LiDAR devices. The intensity of the backscattered signal can be a significant source of information for various applications in geosciences. Previous attempts to account for the scattering of the laser signal usually model the surface as a perfect diffuse reflector. Nevertheless, experience on natural outcrops shows that rock surfaces do not behave as perfect diffuse reflectors. The geometry (or relief) of the scanned surfaces plays a major role in the recorded intensity values. Our study proposes a new terrestrial LiDAR intensity correction, which takes into consideration the range, the incidence angle and the geometry of the scanned surfaces. The proposed correction equation combines the classical radar equation for LiDAR with the bidirectional reflectance distribution function of the Oren-Nayar model. It is based on the idea that the surface geometry can be modelled by a relief of multiple micro-facets. This model is constrained by only one tuning parameter: the standard deviation of the slope angle distribution (σslope) of the micro-facets. Firstly, a series of tests was carried out in laboratory conditions on a 2 m² board covered by black/white matte paper (a perfect diffuse reflector) and scanned at different ranges and incidence angles. Secondly, other tests were carried out on rock blocks of different lithologies and surface conditions. Those tests demonstrated that the non-perfect diffuse reflectance of rock surfaces can be practically handled by the proposed correction method. Finally, the intensity correction method was applied to a real case study, with two scans of the carbonate rock outcrop of the Dents-du-Midi (Swiss Alps), to improve lithological identification for geological mapping purposes. After correction, the intensity values are proportional to the intrinsic material reflectance and are independent of range, incidence angle and scanned surface geometry. The corrected intensity values significantly improve the material differentiation.
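A hedged sketch of such a correction is given below: the range and incidence-angle dependence of the radar equation is removed, with the perfect-diffuse cosine term replaced by an Oren-Nayar factor for a rough surface in monostatic geometry. The calibration constants are dropped and the numerical values are illustrative; σ_slope is the single tuning parameter mentioned above.

```python
"""Sketch of an intensity correction in the spirit of the method above: range and
incidence-angle effects are removed with the LiDAR radar equation, and the
perfect-diffuse cos(theta) term is replaced by an Oren-Nayar factor for a rough
surface (monostatic geometry, so incident and viewing directions coincide).
Calibration constants are dropped; sigma_slope is the single tuning parameter."""
import numpy as np

def oren_nayar_factor(theta, sigma_slope):
    """Relative reflected radiance from a rough diffuse surface at incidence theta (rad)."""
    s2 = sigma_slope ** 2
    A = 1.0 - 0.5 * s2 / (s2 + 0.33)
    B = 0.45 * s2 / (s2 + 0.09)
    return np.cos(theta) * (A + B * np.sin(theta) * np.tan(theta))

def corrected_intensity(raw_intensity, range_m, theta, sigma_slope, ref_range=10.0):
    """Remove range and rough-surface angular effects from a raw intensity value."""
    range_term = (range_m / ref_range) ** 2            # radar-equation 1/R^2 falloff
    return raw_intensity * range_term / oren_nayar_factor(theta, sigma_slope)

# Example: the same material scanned at two geometries should give similar corrected values.
print(corrected_intensity(0.800, 10.0, np.deg2rad(10), sigma_slope=0.3))
print(corrected_intensity(0.089, 25.0, np.deg2rad(60), sigma_slope=0.3))
```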
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bootsma, G. J., E-mail: Gregory.Bootsma@rmp.uhn.on.ca; Verhaegen, F.; Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec H3G 1A4
2015-01-15
Purpose: X-ray scatter is a significant impediment to image quality improvements in cone-beam CT (CBCT). The authors present and demonstrate a novel scatter correction algorithm using a scatter estimation method that simultaneously combines multiple Monte Carlo (MC) CBCT simulations through the use of a concurrently evaluated fitting function, referred to as concurrent MC fitting (CMCF). Methods: The CMCF method uses concurrently run MC CBCT scatter projection simulations that are a subset of the projection angles used in the projection set, P, to be corrected. The scattered photons reaching the detector in each MC simulation are simultaneously aggregated by an algorithm which computes the scatter detector response, S_MC. S_MC is fit to a function, S_F, and if the fit of S_F is within a specified goodness of fit (GOF), the simulations are terminated. The fit, S_F, is then used to interpolate the scatter distribution over all pixel locations for every projection angle in the set P. The CMCF algorithm was tested using a frequency-limited sum of sines and cosines as the fitting function on both simulated and measured data. The simulated data consisted of an anthropomorphic head and a pelvis phantom created from CT data, simulated with and without the use of a compensator. The measured data were a pelvis scan of a phantom and patient taken on an Elekta Synergy platform. The simulated data were used to evaluate various GOF metrics as well as determine a suitable fitness value. The simulated data were also used to quantitatively evaluate the image quality improvements provided by the CMCF method. A qualitative analysis was performed on the measured data by comparing the CMCF scatter corrected reconstruction to the original uncorrected and constant-scatter-corrected reconstructions, as well as a reconstruction created using a set of projections taken with a small cone angle. Results: Pearson's correlation, r, proved to be a suitable GOF metric with strong correlation with the actual error of the scatter fit, S_F. Fitting the scatter distribution to a limited sum of sine and cosine functions using a low-pass filtered fast Fourier transform provided a computationally efficient and accurate fit. The CMCF algorithm reduces the number of photon histories required by over four orders of magnitude. The simulated experiments showed that using a compensator reduced the computational time by a factor between 1.5 and 1.75. The scatter estimates for the simulated and measured data were computed in 35-93 s and 114-122 s, respectively, using 16 Intel Xeon cores (3.0 GHz). The CMCF scatter correction improved the contrast-to-noise ratio by 10%-50% and reduced the reconstruction error to under 3% for the simulated phantoms. Conclusions: The novel CMCF algorithm significantly reduces the computation time required to estimate the scatter distribution by reducing the statistical noise in the MC scatter estimate and limiting the number of projection angles that must be simulated. Using the scatter estimate provided by the CMCF algorithm to correct both simulated and real projection data showed improved reconstruction image quality.
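The stopping rule described above can be sketched as follows: Monte Carlo scatter counts are accumulated in batches, a smooth low-frequency fit is recomputed from the running total, and the simulation stops once Pearson's r between the fit and the accumulated signal exceeds a threshold. The one-dimensional example below uses an assumed batch model, mode count, and threshold rather than the paper's values.

```python
"""Schematic of a CMCF-style stopping rule in 1-D: scatter counts are accumulated
in batches, a low-frequency Fourier fit S_F is recomputed from the running total,
and simulation stops once Pearson's r between S_F and the accumulated signal
exceeds a chosen goodness-of-fit threshold. Batch model, number of retained
modes, and threshold are assumptions, not the paper's values."""
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 256)
scatter_rate = 5.0 + 3.0 * np.sin(x) + 1.0 * np.cos(2 * x)     # smooth "true" scatter per batch

def low_frequency_fit(signal, n_modes=4):
    """Keep only the lowest Fourier modes (a sum of sines/cosines) of the signal."""
    spectrum = np.fft.rfft(signal)
    spectrum[n_modes:] = 0.0
    return np.fft.irfft(spectrum, n=signal.size)

accumulated = np.zeros_like(x)
for batch in range(1, 101):
    accumulated += rng.poisson(scatter_rate)        # one more batch of MC photon counts
    fit = low_frequency_fit(accumulated)
    r = np.corrcoef(fit, accumulated)[0, 1]         # Pearson correlation as the GOF metric
    if r > 0.99:                                    # assumed goodness-of-fit threshold
        print(f"converged after {batch} batches, r = {r:.4f}")
        break
```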
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quirk, Thomas, J., IV
2004-08-01
The Integrated TIGER Series (ITS) is a software package that solves coupled electron-photon transport problems. ITS performs analog photon tracking for energies between 1 keV and 1 GeV. Unlike its deterministic counterpart, the Monte Carlo calculations of ITS do not require a memory-intensive meshing of phase space; however, its solutions carry statistical variations. Reducing these variations is heavily dependent on runtime. Monte Carlo simulations must therefore be both physically accurate and computationally efficient. Compton scattering is the dominant photon interaction above 100 keV and below 5-10 MeV, with higher cutoffs occurring in lighter atoms. In its current model of Compton scattering, ITS corrects the differential Klein-Nishina cross section (which assumes a stationary, free electron) with the incoherent scattering function, a function dependent on both the momentum transfer and the atomic number of the scattering medium. While this technique accounts for binding effects on the scattering angle, it excludes the Doppler broadening the Compton line undergoes because of the momentum distribution in each bound state. To correct for these effects, Ribberfors' relativistic impulse approximation (IA) will be employed to create scattering cross sections differential in both energy and angle for each element. Using the parameterizations suggested by Brusa et al., scattered photon energies and angles can be accurately sampled at high efficiency with minimal physical data. Two-body kinematics then dictates the electron's scattered direction and energy. Finally, the atomic ionization is relaxed via Auger emission or fluorescence. Future work will extend these improvements in incoherent scattering to compounds and to adjoint calculations.
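A minimal sketch of the binding-corrected angular sampling mentioned above follows: the free-electron Klein-Nishina distribution is weighted by an incoherent scattering function S(q, Z) and sampled by rejection. The S(q, Z) used here is a crude placeholder rather than the ITS tabulation, and Doppler broadening via the relativistic impulse approximation is not implemented.

```python
"""Sketch of binding-corrected Compton angle sampling: the free-electron
Klein-Nishina distribution is weighted by an incoherent scattering function
S(q, Z) and sampled by rejection. The S(q, Z) form below is a crude placeholder
(not the ITS tabulation), and Doppler broadening via the relativistic impulse
approximation is not implemented."""
import numpy as np

ELECTRON_REST_KEV = 511.0

def klein_nishina(cos_theta, e_kev):
    """Unnormalized Klein-Nishina angular distribution for photon energy e_kev."""
    alpha = e_kev / ELECTRON_REST_KEV
    eps = 1.0 / (1.0 + alpha * (1.0 - cos_theta))    # E'/E for scattering angle theta
    return eps**2 * (eps + 1.0 / eps - (1.0 - cos_theta**2))

def incoherent_scattering_function(cos_theta, e_kev, z):
    """Placeholder S(q, Z)/Z: rises from 0 at zero momentum transfer toward 1 at large q."""
    q = e_kev / 12.4 * np.sqrt((1.0 - cos_theta) / 2.0)   # ~ sin(theta/2)/lambda in 1/angstrom
    return 1.0 - np.exp(-q / (0.15 * z ** (2.0 / 3.0)))   # crude, illustrative shape only

def sample_angle(e_kev, z, rng):
    """Rejection-sample cos(theta) from Klein-Nishina x incoherent scattering function."""
    grid = np.linspace(-1.0, 1.0, 2001)
    fmax = np.max(klein_nishina(grid, e_kev) * incoherent_scattering_function(grid, e_kev, z))
    while True:
        c = rng.uniform(-1.0, 1.0)
        f = klein_nishina(c, e_kev) * incoherent_scattering_function(c, e_kev, z)
        if rng.uniform(0.0, fmax) < f:
            return c

rng = np.random.default_rng(1)
samples = [sample_angle(200.0, z=13, rng=rng) for _ in range(5)]   # 200 keV photons in aluminum
print(np.round(samples, 3))
```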
NASA Astrophysics Data System (ADS)
Fishkin, Joshua B.; So, Peter T. C.; Cerussi, Albert E.; Gratton, Enrico; Fantini, Sergio; Franceschini, Maria Angela
1995-03-01
We have measured the optical absorption and scattering coefficient spectra of a multiple-scattering medium (i.e., a biological tissue-simulating phantom comprising a lipid colloid) containing methemoglobin by using frequency-domain techniques. The methemoglobin absorption spectrum determined in the multiple-scattering medium is in excellent agreement with a corrected methemoglobin absorption spectrum obtained from a steady-state spectrophotometer measurement of the optical density of a minimally scattering medium. The determination of the corrected methemoglobin absorption spectrum takes into account the scattering from impurities in the methemoglobin solution containing no lipid colloid. Frequency-domain techniques allow for the separation of the absorbing from the scattering properties of multiple-scattering media, and these techniques thus provide an absolute
NASA Astrophysics Data System (ADS)
Kurata, Tomohiro; Oda, Shigeto; Kawahira, Hiroshi; Haneishi, Hideaki
2016-12-01
We have previously proposed an estimation method for intravascular oxygen saturation (SO_2) from images obtained by sidestream dark-field (SDF) imaging (we call it SDF oximetry) and investigated its fundamental characteristics by Monte Carlo simulation. In this paper, we propose a correction method for scattering by the tissue and perform experiments with turbid phantoms, as well as Monte Carlo simulation experiments, to investigate the influence of tissue scattering on SDF imaging. In the estimation method, we used modified extinction coefficients of hemoglobin, called average extinction coefficients (AECs), to correct for the influence of the bandwidth of the illumination sources, the imaging camera characteristics, and the tissue scattering. We estimate the scattering coefficient of the tissue from the maximum slope of the pixel value profile along a line perpendicular to the blood vessel running direction in an SDF image, and correct the AECs using the scattering coefficient. To evaluate the proposed method, we developed a trial SDF probe to obtain three-band images by switching multicolor light-emitting diodes and imaged turbid phantoms comprised of agar powder, fat emulsion, and bovine blood-filled glass tubes. As a result, we found that an increase in scattering by the phantom body brought about a decrease in the AECs. The experimental results showed that the use of suitable values for the AECs led to more accurate SO_2 estimation. We also confirmed the validity of the proposed correction method in improving the accuracy of the SO_2 estimation.
A polarimetric scattering database for non-spherical ice particles at microwave wavelengths
NASA Astrophysics Data System (ADS)
Lu, Yinghui; Jiang, Zhiyuan; Aydin, Kultegin; Verlinde, Johannes; Clothiaux, Eugene E.; Botta, Giovanni
2016-10-01
The atmospheric science community has entered a period in which electromagnetic scattering properties at microwave frequencies of realistically constructed ice particles are necessary for making progress on a number of fronts. One front includes retrieval of ice-particle properties and signatures from ground-based, airborne, and satellite-based radar and radiometer observations. Another front is evaluation of model microphysics by application of forward operators to their outputs and comparison to observations during case study periods. Yet a third front is data assimilation, where again forward operators are applied to databases of ice-particle scattering properties and the results compared to observations, with their differences leading to corrections of the model state. Over the past decade investigators have developed databases of ice-particle scattering properties at microwave frequencies and made them openly available. Motivated by and complementing these earlier efforts, a database containing polarimetric single-scattering properties of various types of ice particles at millimeter to centimeter wavelengths is presented. While the database presented here contains only single-scattering properties of ice particles in a fixed orientation, ice-particle scattering properties are computed for many different directions of the radiation incident on them. These results are useful for understanding the dependence of ice-particle scattering properties on ice-particle orientation with respect to the incident radiation. For ice particles that are small compared to the wavelength, the number of incident directions of the radiation is sufficient to compute reasonable estimates of their (randomly) orientation-averaged scattering properties. This database is complementary to earlier ones in that it contains complete (polarimetric) scattering property information for each ice particle - 44 plates, 30 columns, 405 branched planar crystals, 660 aggregates, and 640 conical graupel - and direction of incident radiation but is limited to four frequencies (X-, Ku-, Ka-, and W-bands), does not include temperature dependencies of the single-scattering properties, and does not include scattering properties averaged over randomly oriented ice particles. Rules for constructing the morphologies of ice particles from one database to the next often differ; consequently, analyses that incorporate all of the different databases will contain the most variability, while illuminating important differences between them. Publication of this database is in support of future analyses of this nature and comes with the hope that doing so helps contribute to the development of a database standard for ice-particle scattering properties, like the NetCDF (Network Common Data Form) CF (Climate and Forecast) or NetCDF CF/Radial metadata conventions.
EM Bias-Correction for Ice Thickness and Surface Roughness Retrievals over Rough Deformed Sea Ice
NASA Astrophysics Data System (ADS)
Li, L.; Gaiser, P. W.; Allard, R.; Posey, P. G.; Hebert, D. A.; Richter-Menge, J.; Polashenski, C. M.
2016-12-01
Very rough ridged sea ice accounts for a significant percentage of the total ice area and an even larger percentage of the total volume. The commonly used radar altimeter surface detection techniques are empirical in nature and work well only over level/smooth sea ice. Rough sea ice surfaces can modify the return waveforms, resulting in significant electromagnetic (EM) bias in the estimated surface elevations, and thus large errors in the ice thickness retrievals. To understand and quantify such sea ice surface roughness effects, a combined EM rough surface and volume scattering model was developed to simulate radar returns from the rough sea ice 'layer cake' structure. A waveform matching technique was also developed to fit observed waveforms to a physically-based waveform model and subsequently correct the roughness-induced EM bias in the estimated freeboard. This new EM Bias Corrected (EMBC) algorithm was able to better retrieve surface elevations and to estimate the surface roughness parameter simultaneously. In situ data from multi-instrument airborne and ground campaigns were used to validate the ice thickness and surface roughness retrievals. For the surface roughness retrievals, we applied this EMBC algorithm to coincident LiDAR/radar measurements collected during a CryoSat-2 under-flight by the NASA IceBridge missions. Results show that not only does the waveform model fit the measured radar waveform very well, but the roughness parameters derived independently from the LiDAR and radar data also agree very well for both level and deformed sea ice. For sea ice thickness retrievals, validation based on in situ data from the coordinated CRREL/NRL field campaign demonstrates that the physically-based EMBC algorithm performs fundamentally better than the empirical algorithm over very rough deformed sea ice, suggesting that sea ice surface roughness effects can be modeled and corrected based solely on the radar return waveforms.
NASA Astrophysics Data System (ADS)
Honeyager, Ryan
High frequency microwave instruments are increasingly used to observe ice clouds and snow. These instruments are significantly more sensitive than conventional precipitation radar. This is ideal for analyzing ice-bearing clouds, for ice particles are tenuously distributed and have effective densities that are far less than liquid water. However, at shorter wavelengths, the electromagnetic response of ice particles is no longer solely dependent on particle mass. The shape of the ice particles also plays a significant role. Thus, in order to understand the observations of high frequency microwave radars and radiometers, it is essential to model the scattering properties of snowflakes correctly. Several research groups have proposed detailed models of snow aggregation. These particle models are coupled with computer codes that determine the particles' electromagnetic properties. However, there is a discrepancy between the particle model outputs and the requirements of the electromagnetic models. Snowflakes have countless variations in structure, but we also know that physically similar snowflakes scatter light in much the same manner. Structurally exact electromagnetic models, such as the discrete dipole approximation (DDA), require a high degree of structural resolution. Such methods are slow, spending considerable time processing redundant (i.e. useless) information. Conversely, when using techniques that incorporate too little structural information, the resultant radiative properties are not physically realistic. Then, we ask the question, what features are most important in determining scattering? This dissertation develops a general technique that can quickly parameterize the important structural aspects that determine the scattering of many diverse snowflake morphologies. A Voronoi bounding neighbor algorithm is first employed to decompose aggregates into well-defined interior and surface regions. The sensitivity of scattering to interior randomization is then examined. The loss of interior structure is found to have a negligible impact on scattering cross sections, and backscatter is lowered by approximately five percent. This establishes that detailed knowledge of interior structure is not necessary when modeling scattering behavior, and it also provides support for using an effective medium approximation to describe the interiors of snow aggregates. The Voronoi diagram-based technique enables the almost trivial determination of the effective density of this medium. A bounding neighbor algorithm is then used to establish a greatly improved approximation of scattering by equivalent spheroids. This algorithm is then used to posit a Voronoi diagram-based definition of effective density approach, which is used in concert with the T-matrix method to determine single-scattering cross sections. The resulting backscatters are found to reasonably match those of the DDA over frequencies from 10.65 to 183.31 GHz and particle sizes from a few hundred micrometers to nine millimeters in length. Integrated error in backscatter versus DDA is found to be within 25% at 94 GHz. Errors in scattering cross-sections and asymmetry parameters are likewise small. The observed cross-sectional errors are much smaller than the differences observed among different particle models. This represents a significant improvement over established techniques, and it demonstrates that the radiative properties of dense aggregate snowflakes may be adequately represented by equal-mass homogeneous spheroids. 
The present results can be used to supplement retrieval algorithms used by CloudSat, EarthCARE, Galileo, GPM and SWACR radars. The ability to predict the full range of scattering properties is potentially also useful for other particle regimes where a compact particle approximation is applicable.
Climatology analysis of cirrus cloud in ARM site: South Great Plain
NASA Astrophysics Data System (ADS)
Olayinka, K.
2017-12-01
Cirrus clouds play an important role in the atmospheric energy balance and hence in the earth's climate system. The properties of optically thin clouds can be determined from measurements of the transmission of the direct solar beam. The accuracy of cloud optical properties determined in this way is compromised by contamination of the direct transmission by light that is scattered into the sensor's field of view. With the forward scattering correction method developed by Min et al. (2004), the accuracy of thin cloud retrievals from the MFRSR has been improved. Our results show that over 30% of the cirrus clouds present in the atmosphere have optical depths between 1 and 2. In this study, we perform statistical studies of cirrus cloud properties based on multi-year cirrus cloud measurements from the MFRSR at the ARM Southern Great Plains (SGP) site, chosen for its relatively easy accessibility, wide variability of climate cloud types and surface flux properties, and large seasonal variation in temperature and specific humidity. Through these statistical studies, temporal and spatial variations of cirrus clouds are investigated. Since the presence of cirrus clouds increases the effect of greenhouse gases, we will retrieve the aerosol optical depth in all the cirrus cloud regions using a radiative transfer model for atmospheric correction. We calculate thin-cloud optical depth (COD) and aerosol optical depth (AOD) using a radiative transfer model algorithm, e.g., MODTRAN (MODerate resolution atmospheric TRANsmission).
Re-derived overclosure bound for the inert doublet model
NASA Astrophysics Data System (ADS)
Biondini, S.; Laine, M.
2017-08-01
We apply a formalism accounting for thermal effects (such as modified Sommerfeld effect; Salpeter correction; decohering scatterings; dissociation of bound states) to one of the simplest WIMP-like dark matter models, associated with an "inert" Higgs doublet. A broad temperature range T ∼ M/20…M/10⁴ is considered, stressing the importance and less-understood nature of late annihilation stages. Even though only weak interactions play a role, we find that resummed real and virtual corrections increase the tree-level overclosure bound by 1…18%, depending on quartic couplings and mass splittings.
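For orientation only, the overclosure bound descends from the textbook freeze-out estimate relating relic abundance to the thermally averaged annihilation cross section; the snippet below uses that standard order-of-magnitude relation, not the resummed thermal calculation of the paper.

```python
# Back-of-the-envelope context only: the standard freeze-out estimate
# Omega h^2 ~ 3e-27 cm^3 s^-1 / <sigma v>, which underlies "overclosure"
# bounds (Omega h^2 <= 0.12). This is not the paper's resummed thermal
# calculation; Sommerfeld, Salpeter and bound-state effects are ignored.
def omega_h2(sigma_v_cm3_per_s):
    return 3e-27 / sigma_v_cm3_per_s

def overcloses(sigma_v_cm3_per_s, omega_h2_obs=0.12):
    return omega_h2(sigma_v_cm3_per_s) > omega_h2_obs

print(omega_h2(3e-26))    # ~0.1, roughly the observed dark matter abundance
print(overcloses(1e-26))  # True: annihilation too weak, universe overclosed
```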
NASA Astrophysics Data System (ADS)
Johnson, Jeffrey R.; Grundy, William M.; Lemmon, Mark T.; Bell, James F.; Deen, R. G.
2015-03-01
The Panoramic Camera (Pancam) on the Mars Exploration Rovers Spirit and Opportunity acquired visible/near-infrared (432-1009 nm) multispectral observations of soils and rocks under varying viewing and illumination geometries. Data retrieved from these images were modeled using radiative transfer theory to study the microphysical and surface scattering nature of materials at both sites. Nearly 57,000 individual measurements from 1900 images were collected of rock and soil units identified by their color and morphologic properties over a wide range of phase angles (0-150°). Images were acquired between Sols 500 and 1525 in the Columbia Hills and regions around Home Plate in Gusev Crater and in the plains and craters between Erebus and Victoria Craters in Meridiani Planum. Corrections for diffuse skylight incorporated sky models based on observations of atmospheric opacity throughout the mission. Disparity maps created from Pancam stereo images allowed estimates of local facet orientations. For Spirit, soils at lower elevations near Home Plate were modeled with lower single scattering albedo (w) values than those on the summit of Husband Hill, but otherwise soils exhibited similar scattering properties to previous Gusev soils. Dark ripple sands at the El Dorado dunes were among the most forward-scattering materials modeled. Silica-rich soils and nodules near Home Plate were analyzed for the first time, and exhibited increased forward scattering behavior with increasing wavelength, consistent with microporosity inferred from previous high resolution images and thermal infrared spectroscopy. For Opportunity, the opposition effect width parameter for sandstone outcrop rocks was modeled for the first time, and demonstrated average values consistent with surfaces of intermediate porosity and/or grain size distribution between those modeled for spherule-rich soils and darker, clast-poor soils. Soils outside a wind streak emanating from the northern rim of Victoria Crater exhibited w values ∼16% higher than soils inside the streak. Overall, w values and scattering properties for outcrop rocks, spherule-rich soils, and rover tracks were similar to previous Meridiani Planum analyses, emphasizing the homogeneity of these materials across nearly 12 km of rover odometry.
Limitations on near-surface correction for multicomponent offset VSP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macbeth, C.; Li, X.Y.; Horne, S.
1994-12-31
Multicomponent data are degraded due to near-surface scattering and non-ideal or unexpected source behavior. These effects cannot be neglected when interpreting relative wavefield attributes derived from compressional and shear waves. They confuse analyses based on standard scalar procedures and a prima facie interpretation of the vector wavefield properties. Here, the authors highlight two unique polar matrix decompositions for near-surface correction in offset VSPs, consider their inherent mathematical constraints, and examine how these impact subsurface interpretation. The first method is applied to a four-component subset of six-component field data from a configuration of three concentric rings and walkaway source positions forming offset VSPs in the Cymric field, California. The correction appears successful in automatically converting the wavefield into its ideal form, and the qS1 polarizations scatter around N15°E, in agreement with the layer stripping of Winterstein and Meadows (1991).
Simulating the influence of scatter and beam hardening in dimensional computed tomography
NASA Astrophysics Data System (ADS)
Lifton, J. J.; Carmignato, S.
2017-10-01
Cone-beam x-ray computed tomography (XCT) is a radiographic scanning technique that allows the non-destructive dimensional measurement of an object’s internal and external features. XCT measurements are influenced by a number of different factors that are poorly understood. This work investigates how non-linear x-ray attenuation caused by beam hardening and scatter influences XCT-based dimensional measurements through the use of simulated data. For the measurement task considered, both scatter and beam hardening are found to influence dimensional measurements when evaluated using the ISO50 surface determination method. On the other hand, only beam hardening is found to influence dimensional measurements when evaluated using an advanced surface determination method. Based on the results presented, recommendations on the use of beam hardening and scatter correction for dimensional XCT are given.
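As a reference point for the ISO50 surface determination mentioned above, a minimal sketch is given below; it applies a single global threshold only, whereas real dimensional-CT software refines the surface locally.

```python
# Minimal ISO50 sketch: the surface threshold is taken halfway between the
# background and material peaks of the gray-value histogram. The synthetic
# volume is a placeholder, not simulated CT data.
import numpy as np

def iso50_threshold(volume):
    hist, edges = np.histogram(volume.ravel(), bins=256)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mid = len(centers) // 2                          # crude split between air and material peaks
    background_peak = centers[:mid][np.argmax(hist[:mid])]
    material_peak = centers[mid:][np.argmax(hist[mid:])]
    return 0.5 * (background_peak + material_peak)

# Synthetic two-material gray-value distribution with noise
rng = np.random.default_rng(1)
volume = np.concatenate([rng.normal(100, 10, 50_000), rng.normal(900, 30, 50_000)])
print(iso50_threshold(volume))                       # close to (100 + 900) / 2 = 500
```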
Modeling of high‐frequency seismic‐wave scattering and propagation using radiative transfer theory
Zeng, Yuehua
2017-01-01
This is a study of the nonisotropic scattering process based on radiative transfer theory and its application to the observation of the M 4.3 aftershock recording of the 2008 Wells earthquake sequence in Nevada. Given a wide range of recording distances from 29 to 320 km, the data provide a unique opportunity to discriminate scattering models based on their distance‐dependent behaviors. First, we develop a stable numerical procedure to simulate nonisotropic scattering waves based on the 3D nonisotropic scattering theory proposed by Sato (1995). By applying the simulation method to the inversion of M 4.3 Wells aftershock recordings, we find that a nonisotropic scattering model, dominated by forward scattering, provides the best fit to the observed high‐frequency direct S waves and S‐wave coda velocity envelopes. The scattering process is governed by a Gaussian autocorrelation function, suggesting a Gaussian random heterogeneous structure for the Nevada crust. The model successfully explains the common decay of seismic coda independent of source–station locations as a result of energy leaking from multiple strong forward scattering, instead of backscattering governed by the diffusion solution at large lapse times. The model also explains the pulse‐broadening effect in the high‐frequency direct and early arriving S waves, as other studies have found, and could be very important to applications of high‐frequency wave simulation in which scattering has a strong effect. We also find that regardless of its physical implications, the isotropic scattering model provides the same effective scattering coefficient and intrinsic attenuation estimates as the forward scattering model, suggesting that the isotropic scattering model is still a viable tool for the study of seismic scattering and intrinsic attenuation coefficients in the Earth.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shivaei, Irene; Reddy, Naveen A.; Siana, Brian
2015-12-20
We present results on the star formation rate (SFR) versus stellar mass (M*) relation (i.e., the “main sequence”) among star-forming galaxies at 1.37 ≤ z ≤ 2.61 using the MOSFIRE Deep Evolution Field (MOSDEF) survey. Based on a sample of 261 galaxies with Hα and Hβ spectroscopy, we have estimated robust dust-corrected instantaneous SFRs over a large range in M* (∼10^9.5–10^11.5 M⊙). We find a correlation between log(SFR(Hα)) and log(M*) with a slope of 0.65 ± 0.08 (0.58 ± 0.10) at 1.4 < z < 2.6 (2.1 < z < 2.6). We find that different assumptions for the dust correction, such as using the color excess of the stellar continuum to correct the nebular lines, sample selection biases against red star-forming galaxies, and not accounting for Balmer absorption, can yield steeper slopes of the log(SFR)–log(M*) relation. Our sample is immune from these biases as it is rest-frame optically selected, Hα and Hβ are corrected for Balmer absorption, and the Hα luminosity is dust corrected using the nebular color excess computed from the Balmer decrement. The scatter of the log(SFR(Hα))–log(M*) relation, after accounting for the measurement uncertainties, is 0.31 dex at 2.1 < z < 2.6, which is 0.05 dex larger than the scatter in log(SFR(UV))–log(M*). Based on comparisons to a simulated SFR–M* relation with some intrinsic scatter, we argue that in the absence of direct measurements of galaxy-to-galaxy variations in the attenuation/extinction curves and the initial mass function, one cannot use the difference in the scatter of the SFR(Hα)– and SFR(UV)–M* relations to constrain the stochasticity of star formation in high-redshift galaxies.
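The Balmer-decrement dust correction mentioned above follows a standard recipe; the sketch below assumes an intrinsic Hα/Hβ ratio of 2.86 and Calzetti-curve coefficients, which may differ from the survey's exact calibration choices.

```python
# Generic Balmer-decrement dust correction (not MOSDEF's exact pipeline):
# nebular color excess from the observed Halpha/Hbeta ratio, assuming an
# intrinsic ratio of 2.86 and Calzetti-curve values k(Hbeta)=3.61, k(Halpha)=2.53.
import numpy as np

K_HA, K_HB, INTRINSIC = 2.53, 3.61, 2.86

def nebular_ebv(f_halpha, f_hbeta):
    return 2.5 / (K_HB - K_HA) * np.log10((f_halpha / f_hbeta) / INTRINSIC)

def dust_corrected_halpha(f_halpha, f_hbeta):
    ebv = max(nebular_ebv(f_halpha, f_hbeta), 0.0)   # no negative corrections
    return f_halpha * 10 ** (0.4 * K_HA * ebv)

print(dust_corrected_halpha(4.0, 1.0))  # observed decrement of 4.0 -> corrected Halpha flux
```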
Experimental and theoretical electron-scattering cross-section data for dichloromethane
NASA Astrophysics Data System (ADS)
Krupa, K.; Lange, E.; Blanco, F.; Barbosa, A. S.; Pastega, D. F.; Sanchez, S. d'A.; Bettega, M. H. F.; García, G.; Limão-Vieira, P.; Ferreira da Silva, F.
2018-04-01
We report on a combination of experimental and theoretical investigations into the elastic differential cross sections (DCSs) and integral cross sections for electron interactions with dichloromethane, CH2Cl2, for incident electron energies in the 7.0-30 eV range. Elastic electron-scattering cross-section calculations have been performed within the framework of the Schwinger multichannel method implemented with pseudopotentials (SMCPP), and the independent-atom model with screening-corrected additivity rule including interference-effects correction (IAM-SCAR+I). The present elastic DCSs have been found to agree reasonably well with the results of IAM-SCAR+I calculations above 20 eV and also with the SMC calculations below 30 eV. Although some discrepancies were found at 7 eV, the agreement between the two theoretical methodologies is remarkable as the electron-impact energy increases. Calculated elastic DCSs are also reported up to 10000 eV for scattering angles from 0° to 180°, together with total cross sections, within the IAM-SCAR+I framework.
NASA Astrophysics Data System (ADS)
Bravo, Jaime J.; Davis, Scott C.; Roberts, David W.; Paulsen, Keith D.; Kanick, Stephen C.
2016-06-01
Quantification of multiple fluorescence markers during neurosurgery has the potential to provide complementary contrast mechanisms between normal and malignant tissues, and one potential combination involves fluorescein sodium (FS) and aminolevulinic acid-induced protoporphyrin IX (PpIX). We focus on the interpretation of reflectance spectra containing contributions from elastically scattered (reflected) photons as well as fluorescence emissions from a strong fluorophore (i.e., FS). A model-based approach to extract μa and μs′ in the presence of FS emission is validated in optical phantoms constructed with Intralipid (1% to 2% lipid) and whole blood (1% to 3% volume fraction), over a wide range of FS concentrations (0 to 1000 μg/ml). The results show that modeling reflectance as a combination of elastically scattered light and attenuation-corrected FS-based emission yielded more accurate tissue parameter estimates when compared with a nonmodified reflectance model, with reduced maximum errors for blood volume (22% versus 90%), microvascular saturation (21% versus 100%), and μs′ (13% versus 207%). Additionally, quantitative PpIX fluorescence sampled in the same phantom as FS showed significant differences depending on the reflectance model used to estimate optical properties (i.e., maximum error 29% versus 86%). These data represent a first step toward using quantitative optical spectroscopy to guide surgeries through simultaneous assessment of FS and PpIX.
ERIC Educational Resources Information Center
Young, Andrew T.
1982-01-01
The correct usage of such terminology as "Rayleigh scattering," "Rayleigh lines," "Raman lines," and "Tyndall scattering" is resolved during an historical excursion through the physics of light-scattering by gas molecules. (Author/JN)
Modeling and image reconstruction in spectrally resolved bioluminescence tomography
NASA Astrophysics Data System (ADS)
Dehghani, Hamid; Pogue, Brian W.; Davis, Scott C.; Patterson, Michael S.
2007-02-01
Recent interest in modeling and reconstruction algorithms for Bioluminescence Tomography (BLT) has increased and led to the general consensus that non-spectrally resolved intensity-based BLT results in a non-unique problem. However, the light emitted from, for example, firefly luciferase is widely distributed over the band of wavelengths from 500 nm to 650 nm and above, with the dominant fraction emitted from tissue being above 550 nm. This paper demonstrates the development of an algorithm used for multi-wavelength 3D spectrally resolved BLT image reconstruction in a mouse model. It is shown that using single-view data, bioluminescence sources up to 15 mm deep can be successfully recovered given correct information about the underlying tissue absorption and scatter.
Total electron scattering cross section from pyridine molecules in the energy range 10-1000 eV
NASA Astrophysics Data System (ADS)
Dubuis, A. Traoré; Costa, F.; da Silva, F. Ferreira; Limão-Vieira, P.; Oller, J. C.; Blanco, F.; García, G.
2018-05-01
We report experimental total electron scattering cross sections (TCSs) from pyridine (C5H5N) for incident electron energies between 10 and 1000 eV, with experimental uncertainties within 5-10%, as measured with a double electrostatic analyser apparatus. The experimental results are compared with our theoretical calculations performed within the independent atom model complemented with a screening-corrected additivity rule (IAM-SCAR) procedure, which has been updated by including interference effects. A good level of agreement is found between both data sources within the experimental uncertainties. The present TCS results for the electron impact energies under study contribute, together with other scattering data available in the literature, to achieving a consistent set of cross-section data for modelling purposes.
Simple wavefront correction framework for two-photon microscopy of in-vivo brain
Galwaduge, P. T.; Kim, S. H.; Grosberg, L. E.; Hillman, E. M. C.
2015-01-01
We present an easily implemented wavefront correction scheme that has been specifically designed for in-vivo brain imaging. The system can be implemented with a single liquid crystal spatial light modulator (LCSLM), which makes it compatible with existing patterned illumination setups, and provides measurable signal improvements even after a few seconds of optimization. The optimization scheme is signal-based and does not require exogenous guide-stars, repeated image acquisition or beam constraint. The unconstrained beam approach allows the use of Zernike functions for aberration correction and Hadamard functions for scattering correction. Low order corrections performed in mouse brain were found to be valid up to hundreds of microns away from the correction location. PMID:26309763
Coupled-channel model for K̄N scattering in the resonant region
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fernández-Ramírez, Cesar; Danilkin, Igor V.; Manley, D. Mark
2016-02-18
Here, we present a unitary multichannel model for K̄N scattering in the resonance region that fulfills unitarity. It has the correct analytical properties for the amplitudes once they are extended to the complex-s plane, and the partial waves have the right threshold behavior. In order to determine the parameters of the model, we have fitted single-energy partial waves up to J = 7/2 and up to 2.15 GeV of energy in the center-of-mass reference frame, obtaining the poles of the Λ* and Σ* resonances, which are compared to previous analyses. Furthermore, we provide the most comprehensive picture of the S = -1 hyperon spectrum to date. Here, important differences are found between the available analyses, making the gathering of further experimental information on K̄N scattering mandatory to make progress in the assessment of the hyperon spectrum.
Li, Jiang; Bifano, Thomas G.; Mertz, Jerome
2016-01-01
We describe a wavefront sensor strategy for the implementation of adaptive optics (AO) in microscope applications involving thick, scattering media. The strategy is based on the exploitation of multiple scattering to provide oblique back illumination of the wavefront-sensor focal plane, enabling a simple and direct measurement of the flux-density tilt angles caused by aberrations at this plane. Advantages of the sensor are that it provides a large measurement field of view (FOV) while requiring no guide star, making it particularly adapted to a type of AO called conjugate AO, which provides a large correction FOV in cases when sample-induced aberrations arise from a single dominant plane (e.g., the sample surface). We apply conjugate AO here to widefield (i.e., nonscanning) fluorescence microscopy for the first time and demonstrate dynamic wavefront correction in a closed-loop implementation. PMID:27653793
Scatter correction for x-ray conebeam CT using one-dimensional primary modulation
NASA Astrophysics Data System (ADS)
Zhu, Lei; Gao, Hewei; Bennett, N. Robert; Xing, Lei; Fahrig, Rebecca
2009-02-01
Recently, we developed an efficient scatter correction method for x-ray imaging using primary modulation. A two-dimensional (2D) primary modulator with spatially variant attenuating materials is inserted between the x-ray source and the object to separate primary and scatter signals in the Fourier domain. Due to the high modulation frequency in both directions, the 2D primary modulator has a strong scatter correction capability for objects with arbitrary geometries. However, signal processing on the modulated projection data requires knowledge of the modulator position and attenuation. In practical systems, mainly due to system gantry vibration, beam hardening effects and the ramp-filtering in the reconstruction, the insertion of the 2D primary modulator results in artifacts such as rings in the CT images, if no post-processing is applied. In this work, we eliminate the source of artifacts in the primary modulation method by using a one-dimensional (1D) modulator. The modulator is aligned parallel to the ramp-filtering direction to avoid error magnification, while sufficient primary modulation is still achieved for scatter correction on a quasi-cylindrical object, such as a human body. The scatter correction algorithm is also greatly simplified for convenience and stability in practical implementations. The method is evaluated on a clinical CBCT system using the Catphan 600 phantom. The result shows effective scatter suppression without introducing additional artifacts. In the selected regions of interest, the reconstruction error is reduced from 187.2 HU to 10.0 HU if the proposed method is used.
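A highly simplified, assumed version of the primary-modulation idea is sketched below: an idealized sinusoidal 1D modulator, noiseless data, and no calibration of modulator position or attenuation, unlike the published method.

```python
# Idealized 1D sketch of primary/scatter separation by primary modulation.
# Assumptions: a sinusoidal modulator varying between transmission t and 1,
# noiseless data, and primary/scatter band-limited well below the modulation
# frequency. The actual method uses calibrated attenuating strips.
import numpy as np
from scipy.ndimage import gaussian_filter1d

n, f0, t = 2048, 0.05, 0.7                  # samples, modulation frequency, transmission
x = np.arange(n)
primary = 1.0 + 0.5 * np.exp(-((x - 1000) / 300.0) ** 2)   # slowly varying primary
scatter = 0.4 + 0.2 * np.sin(2 * np.pi * x / n)            # slowly varying scatter
modulator = (1 + t) / 2 + (1 - t) / 2 * np.cos(2 * np.pi * f0 * x)
measured = primary * modulator + scatter                    # modulated projection

def lowpass(signal, sigma=60):
    return gaussian_filter1d(signal, sigma)                 # removes the modulation band

primary_est = 4.0 / (1 - t) * lowpass(measured * np.cos(2 * np.pi * f0 * x))
scatter_est = measured - primary_est * modulator

print(np.max(np.abs(primary_est - primary)))                # small residual away from edges
print(np.max(np.abs(scatter_est - scatter)))
```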
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidson, Eva E.; Martin, William R.
2017-05-26
Current Monte Carlo codes use one of three models: (1) the asymptotic scattering model, (2) the free gas scattering model, or (3) the S(α,β) model, depending on the neutron energy and the specific Monte Carlo code. This thesis addresses the consequences of using the free gas scattering model, which assumes that the neutron interacts with atoms in thermal motion in a monatomic gas in thermal equilibrium at material temperature, T. Most importantly, the free gas model assumes the scattering cross section is constant over the neutron energy range, which is usually a good approximation for light nuclei, but not for heavy nuclei, where the scattering cross section may have several resonances in the epithermal region. Several researchers in the field have shown that the exact resonance scattering model is temperature-dependent, and neglecting the resonances in the lower epithermal range can under-predict resonance absorption due to the upscattering phenomenon mentioned above, leading to an over-prediction of keff by several hundred pcm. Existing methods to address this issue involve changing the neutron weights or implementing an extra rejection scheme in the free gas sampling scheme, and these all involve performing the collision analysis in the center-of-mass frame, followed by a conversion back to the laboratory frame to continue the random walk of the neutron. The goal of this work was to develop a sampling methodology that (1) accounted for the energy-dependent scattering cross sections in the collision analysis and (2) was performed in the laboratory frame, avoiding the conversion to the center-of-mass frame. The energy dependence of the scattering cross section was modeled with even-ordered polynomials (2nd and 4th order) to approximate the scattering cross section in Blackshaw's equations for the moments of the differential scattering PDFs. These moments were used to sample the outgoing neutron speed and angle in the laboratory frame on-the-fly during the random walk of the neutron. Results for criticality studies on fuel pin and fuel assembly calculations using methods developed in this dissertation showed very close comparison to results using the reference Doppler-broadened rejection correction (DBRC) scheme.
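One ingredient of the approach, fitting the scattering cross section over a local energy window with even-ordered polynomials, is easy to sketch; the cross-section samples below are synthetic placeholders, not evaluated nuclear data.

```python
# Sketch of one ingredient of the method: least-squares fit of a scattering
# cross section over a local energy window using only even powers of energy
# (constant, E^2, E^4), as described in the abstract. Data are synthetic.
import numpy as np

def fit_even_poly(energy, sigma_s):
    """Return coefficients (c0, c2, c4) of sigma(E) ~ c0 + c2*E^2 + c4*E^4."""
    A = np.column_stack([np.ones_like(energy), energy ** 2, energy ** 4])
    coeffs, *_ = np.linalg.lstsq(A, sigma_s, rcond=None)
    return coeffs

energy = np.linspace(6.0, 7.5, 40)                    # eV, a window near a resonance wing
sigma = 11.0 + 0.8 * energy ** 2 - 0.03 * energy ** 4  # synthetic cross section [barns]
c0, c2, c4 = fit_even_poly(energy, sigma)
print(c0, c2, c4)                                      # recovers 11.0, 0.8, -0.03
```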
DOE Office of Scientific and Technical Information (OSTI.GOV)
Min, Jonghwan; Pua, Rizza; Cho, Seungryong, E-mail: scho@kaist.ac.kr
Purpose: A beam-blocker composed of multiple strips is a useful gadget for scatter correction and/or for dose reduction in cone-beam CT (CBCT). However, the use of such a beam-blocker would yield cone-beam data that can be challenging for accurate image reconstruction from a single scan in the filtered-backprojection framework. The focus of the work was to develop an analytic image reconstruction method for CBCT that can be directly applied to partially blocked cone-beam data in conjunction with the scatter correction. Methods: The authors developed a rebinned backprojection-filtration (BPF) algorithm for reconstructing images from the partially blocked cone-beam data in a circular scan. The authors also proposed a beam-blocking geometry considering data redundancy such that an efficient scatter estimate can be acquired and sufficient data for BPF image reconstruction can be secured at the same time from a single scan without using any blocker motion. Additionally, a scatter correction method and noise reduction scheme have been developed. The authors have performed both simulation and experimental studies to validate the rebinned BPF algorithm for image reconstruction from partially blocked cone-beam data. Quantitative evaluations of the reconstructed image quality were performed in the experimental studies. Results: The simulation study revealed that the developed reconstruction algorithm successfully reconstructs the images from the partial cone-beam data. In the experimental study, the proposed method effectively corrected for the scatter in each projection and reconstructed scatter-corrected images from a single scan. Reduction of cupping artifacts and an enhancement of the image contrast have been demonstrated. The image contrast has increased by a factor of about 2, and the image accuracy in terms of root-mean-square-error with respect to the fan-beam CT image has increased by more than 30%. Conclusions: The authors have successfully demonstrated that the proposed scanning method and image reconstruction algorithm can effectively estimate the scatter in cone-beam projections and produce tomographic images of nearly scatter-free quality. The authors believe that the proposed method would provide a fast and efficient CBCT scanning option to various applications, particularly including head-and-neck scans.
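The scatter-estimation side of such beam-blocker schemes rests on a simple observation: detector pixels shadowed by the strips record (nearly) scatter only, and scatter varies smoothly, so it can be interpolated across the open regions. Below is a 1D sketch with made-up profiles and strip layout, not the authors' geometry.

```python
# Minimal 1D sketch of beam-blocker scatter estimation: shadowed pixels see
# (nearly) scatter only, and scatter in the open pixels is estimated by smooth
# interpolation between strips, then subtracted. All profiles are illustrative.
import numpy as np

n = 1024
u = np.arange(n)
scatter = 0.30 + 0.10 * np.sin(2 * np.pi * u / n)          # smooth, low-frequency
primary = 1.0 + 0.6 * np.exp(-((u - 500) / 120.0) ** 2)

strip_centers = np.arange(64, n, 128)                       # blocked detector columns
blocked = np.zeros(n, dtype=bool)
for c in strip_centers:
    blocked[c - 4:c + 5] = True

measured = np.where(blocked, scatter, primary + scatter)    # ideal, noiseless model

# interpolate scatter sampled under the strips onto all detector columns
samples = np.array([measured[c - 4:c + 5].mean() for c in strip_centers])
scatter_est = np.interp(u, strip_centers, samples)
primary_est = np.clip(measured - scatter_est, 0, None)

print(np.max(np.abs(scatter_est - scatter)))                # interpolation error stays small
```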
Flux or speed? Examining speckle contrast imaging of vascular flows
Kazmi, S. M. Shams; Faraji, Ehssan; Davis, Mitchell A.; Huang, Yu-Yen; Zhang, Xiaojing J.; Dunn, Andrew K.
2015-01-01
Speckle contrast imaging enables rapid mapping of relative blood flow distributions using camera detection of back-scattered laser light. However, speckle derived flow measures deviate from direct measurements of erythrocyte speeds by 47 ± 15% (n = 13 mice) in vessels of various calibers. Alternatively, deviations with estimates of volumetric flux are on average 91 ± 43%. We highlight and attempt to alleviate this discrepancy by accounting for the effects of multiple dynamic scattering with speckle imaging of microfluidic channels of varying sizes and then with red blood cell (RBC) tracking correlated speckle imaging of vascular flows in the cerebral cortex. By revisiting the governing dynamic light scattering models, we test the ability to predict the degree of multiple dynamic scattering across vessels in order to correct for the observed discrepancies between relative RBC speeds and multi-exposure speckle imaging estimates of inverse correlation times. The analysis reveals that traditional speckle contrast imagery of vascular flows is neither a measure of volumetric flux nor particle speed, but rather the product of speed and vessel diameter. The corrected speckle estimates of the relative RBC speeds have an average 10 ± 3% deviation in vivo with those obtained from RBC tracking. PMID:26203384
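For readers unfamiliar with the raw quantity involved, the local speckle contrast K = σ/⟨I⟩ computed in a small sliding window is the starting point of such analyses; the multi-exposure model fitting described above is not shown in this minimal sketch.

```python
# First step of speckle contrast imaging: local speckle contrast K = std/mean
# over a sliding window of the raw speckle image. The multi-exposure fitting
# for inverse correlation times described in the paper is not implemented here.
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(image, window=7):
    img = image.astype(float)
    mean = uniform_filter(img, window)
    mean_sq = uniform_filter(img ** 2, window)
    var = np.clip(mean_sq - mean ** 2, 0, None)
    return np.sqrt(var) / mean

# Fully developed static speckle should give K close to 1
rng = np.random.default_rng(2)
static_speckle = rng.exponential(scale=100.0, size=(256, 256))
print(speckle_contrast(static_speckle).mean())
```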
Power corrections to the universal heavy WIMP-nucleon cross section
NASA Astrophysics Data System (ADS)
Chen, Chien-Yi; Hill, Richard J.; Solon, Mikhail P.; Wijangco, Alexander M.
2018-06-01
WIMP-nucleon scattering is analyzed at order 1/M in Heavy WIMP Effective Theory. The 1/M power corrections, where M ≫ mW is the WIMP mass, distinguish between different underlying UV models with the same universal limit and their impact on direct detection rates can be enhanced relative to naive expectations due to generic amplitude-level cancellations at leading order. The necessary one- and two-loop matching calculations onto the low-energy effective theory for WIMP interactions with Standard Model quarks and gluons are performed for the case of an electroweak SU(2) triplet WIMP, considering both the cases of elementary fermions and composite scalars. The low-velocity WIMP-nucleon scattering cross section is evaluated and compared with current experimental limits and projected future sensitivities. Our results provide the most robust prediction for electroweak triplet Majorana fermion dark matter direct detection rates; for this case, a cancellation between two sources of power corrections yields a small total 1/M correction, and a total cross section close to the universal limit for M ≳ few × 100 GeV. For the SU(2) composite scalar, the 1/M corrections introduce dependence on underlying strong dynamics. Using a leading chiral logarithm evaluation, the total 1/M correction has a larger magnitude and uncertainty than in the fermionic case, with a sign that further suppresses the total cross section. These examples provide definite targets for future direct detection experiments and motivate large scale detectors capable of probing to the neutrino floor in the TeV mass regime.
NASA Astrophysics Data System (ADS)
Schumacher, David; Sharma, Ravi; Grager, Jan-Carl; Schrapp, Michael
2018-07-01
Photon counting detectors (PCDs) offer new possibilities for x-ray micro computed tomography (CT) in the field of non-destructive testing. For large and/or dense objects with high atomic numbers, scattered radiation and beam hardening severely degrade image quality. This work shows that an energy-discriminating PCD based on CdTe makes it possible to address these problems by intrinsically reducing the influence of both scattering and beam hardening. Based on 2D radiographic measurements, it is shown that energy thresholding reduces the influence of scattered radiation for a PCD compared with a conventional energy-integrating detector (EID). To demonstrate the capabilities of a PCD in reducing beam hardening, cupping artefacts are analyzed quantitatively. The PCD results show that the higher the energy threshold is set, the weaker the cupping effect. Since numerous beam hardening correction algorithms exist, the PCD results are also compared with EID results corrected by common techniques; nevertheless, the highest energy thresholds yield lower cupping artefacts than any of the applied correction algorithms. As an example of a potential industrial CT application, a turbine blade is investigated by CT. The inner structure of the turbine blade allows the image quality of the PCD and EID to be compared in terms of absolute contrast, as well as normalized signal-to-noise and contrast-to-noise ratios. While the absolute contrast can be improved by raising the energy thresholds of the PCD, it is found that, owing to lower counting statistics, the normalized contrast-to-noise ratio could not be improved compared with the EID. These results might be reversed if pre-filtering of the x-ray spectrum is omitted, allowing more low-energy photons to reach the detectors. Despite still being at an early stage of technological development, PCDs already improve CT image quality compared with conventional detectors in terms of scatter and beam hardening reduction.
Correction of Rayleigh Scattering Effects in Cloud Optical Thickness Retrievals
NASA Technical Reports Server (NTRS)
Wang, Meng-Hua; King, Michael D.
1997-01-01
We present results that demonstrate the effects of Rayleigh scattering on the retrieval of cloud optical thickness at a visible wavelength (0.66 μm). The sensor-measured radiance at a visible wavelength (0.66 μm) is usually used to infer remotely the cloud optical thickness from aircraft or satellite instruments. For example, we find that without removing Rayleigh scattering effects, errors in the retrieved cloud optical thickness for a thin water cloud layer (τ = 2.0) range from 15 to 60%, depending on solar zenith angle and viewing geometry. For an optically thick cloud (τ = 10), on the other hand, errors can range from 10 to 60% for large solar zenith angles (θ0 ≥ 60°) because of enhanced Rayleigh scattering. It is therefore particularly important to correct for Rayleigh scattering contributions to the reflected signal from a cloud layer both (1) for the case of thin clouds and (2) for large solar zenith angles and all clouds. On the basis of the single scattering approximation, we propose an iterative method for effectively removing Rayleigh scattering contributions from the measured radiance signal in cloud optical thickness retrievals. The proposed correction algorithm works very well and can easily be incorporated into any cloud retrieval algorithm. The Rayleigh correction method is applicable to cloud at any pressure, provided that the cloud top pressure is known to within ±100 hPa. With the Rayleigh correction the errors in retrieved cloud optical thickness are usually reduced to within 3%. In cases of both thin cloud layers and thick clouds with large solar zenith angles, the errors are usually reduced by a factor of about 2 to over 10. The Rayleigh correction algorithm has been tested with simulations for realistic cloud optical and microphysical properties with different solar and viewing geometries. We apply the Rayleigh correction algorithm to the cloud optical thickness retrievals from experimental data obtained during the Atlantic Stratocumulus Transition Experiment (ASTEX) conducted near the Azores in June 1992 and compare these results to corresponding retrievals obtained using 0.88 μm. These results provide an example of the Rayleigh scattering effects on thin clouds and further test the Rayleigh correction scheme. Using a nonabsorbing near-infrared wavelength (0.88 μm) in retrieving cloud optical thickness is only applicable over oceans, however, since most land surfaces are highly reflective at 0.88 μm. Hence successful global retrievals of cloud optical thickness should remove Rayleigh scattering effects when using reflectance measurements at 0.66 μm.
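The single-scattering approximation behind such a correction can be sketched as follows; this simplified, assumed version neglects polarization and Rayleigh-cloud interactions, which the iterative scheme treats more carefully, and the geometry and optical-depth values are examples only.

```python
# Simplified single-scattering Rayleigh correction: subtract the reflectance
# contributed by the Rayleigh layer above the cloud from the measured 0.66-um
# reflectance before the cloud retrieval. Polarization and Rayleigh-cloud
# interactions are neglected; all numerical values are illustrative.
import numpy as np

def rayleigh_phase(cos_theta):
    return 0.75 * (1.0 + cos_theta ** 2)

def rayleigh_reflectance(tau_r, mu0, mu, dphi):
    # scattering angle for reflected light
    cos_sca = -mu0 * mu + np.sqrt((1 - mu0 ** 2) * (1 - mu ** 2)) * np.cos(dphi)
    return tau_r * rayleigh_phase(cos_sca) / (4.0 * mu0 * mu)

mu0, mu, dphi = np.cos(np.radians(60.0)), np.cos(np.radians(30.0)), np.radians(90.0)
tau_r_above_cloud = 0.05          # Rayleigh optical depth above cloud top at 0.66 um
measured = 0.35                   # measured top-of-atmosphere reflectance (example)
corrected = measured - rayleigh_reflectance(tau_r_above_cloud, mu0, mu, dphi)
print(corrected)
```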
NASA Astrophysics Data System (ADS)
Wozniak, Kaitlin T.; Germer, Thomas A.; Butler, Sam C.; Brooks, Daniel R.; Huxlin, Krystel R.; Ellis, Jonathan D.
2018-02-01
We present measurements of light scatter induced by a new ultrafast laser technique being developed for laser refractive correction in transparent ophthalmic materials such as cornea, contact lenses, and/or intraocular lenses. In this new technique, called intra-tissue refractive index shaping (IRIS), a 405 nm femtosecond laser is focused and scanned below the corneal surface, inducing a spatially-varying refractive index change that corrects vision errors. In contrast with traditional laser correction techniques, such as laser in-situ keratomileusis (LASIK) or photorefractive keratectomy (PRK), IRIS does not operate via photoablation, but rather changes the refractive index of transparent materials such as cornea and hydrogels. A concern with any laser eye correction technique is additional scatter induced by the process, which can adversely affect vision, especially at night. The goal of this investigation is to identify sources of scatter induced by IRIS and to mitigate possible effects on visual performance in ophthalmic applications. Preliminary light scattering measurements on patterns written into hydrogel showed four sources of scatter, differentiated by distinct behaviors: (1) scattering from scanned lines; (2) scattering from stitching errors, resulting from adjacent scanning fields not being aligned to one another; (3) diffraction from Fresnel zone discontinuities; and (4) long-period variations in the scans that created distinct diffraction peaks, likely due to inconsistent line spacing in the writing instrument. By knowing the nature of these different scattering errors, it will now be possible to modify and optimize the design of IRIS structures to mitigate potential deficits in visual performance in human clinical trials.
Is nucleon spin structure inconsistent with the constituent quark model?
NASA Astrophysics Data System (ADS)
Qing, Di; Chen, Xiang-Song; Wang, Fan
1998-12-01
Proton spin structure discovered in polarized deep inelastic scattering is shown to be consistent with the valence-sea quark mixing constituent quark model. The relativistic correction and quark-antiquark pair creation (annihilation) terms inherently involved in the quark axial vector current suppress the quark spin contribution to the proton spin. The relativistic quark orbital angular momentum provides compensating terms to keep the proton spin 1/2 untouched. The tensor charge of the proton is predicted to have a similar but smaller suppression. An explanation is given of why baryon magnetic moments can be parametrized by the naive quark model spin content as well as the spin structure discovered in polarized deep inelastic scattering.
Rayleigh, Compton and K-shell radiative resonant Raman scattering in 83Bi for 88.034 keV γ-rays
NASA Astrophysics Data System (ADS)
Kumar, Sanjeev; Sharma, Veena; Mehta, D.; Singh, Nirmal
2007-11-01
The Rayleigh, Compton and K-shell radiative resonant Raman scattering cross-sections for the 88.034 keV γ-rays have been measured in the 83Bi (K-shell binding energy = 90.526 keV) element. The measurements have been performed at 130° scattering angle using reflection-mode geometrical arrangement involving the 109Cd radioisotope as photon source and an LEGe detector. Computer simulations were exercised to determine distributions of the incident and emission angles, which were further used in evaluation of the absorption corrections for the incident and emitted photons in the target. The measured cross-sections for the Rayleigh scattering are compared with the modified form-factors (MFs) corrected for the anomalous-scattering factors (ASFs) and the S-matrix calculations; and those for the Compton scattering are compared with the Klein-Nishina cross-sections corrected for the non-relativistic Hartree-Fock incoherent scattering function S(x, Z). The ratios of the measured KL2, KL3, KM and KN2,3 radiative resonant Raman scattering cross-sections are found to be in general agreement with those of the corresponding measured fluorescence transition probabilities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prior, P; Timmins, R; Wells, R G
Dual isotope SPECT allows simultaneous measurement of two different tracers in vivo. With In-111 (emission energies of 171 keV and 245 keV) and Tc-99m (140 keV), quantification of Tc-99m is degraded by cross talk from In-111 photons that scatter and are detected at an energy corresponding to Tc-99m. TEW uses counts recorded in two narrow windows surrounding the Tc-99m primary window to estimate scatter. Iterative TEW corrects for the bias introduced into the TEW estimate by unscattered counts detected in the scatter windows. The contamination in the scatter windows is iteratively estimated and subtracted as a fraction of the scatter-corrected primary window counts. The iterative TEW approach was validated with a small-animal SPECT/CT camera using a 2.5 mL plastic container holding thoroughly mixed Tc-99m/In-111 activity fractions of 0.15, 0.28, 0.52, 0.99, 2.47 and 6.90. Dose calibrator measurements were the gold standard. Uncorrected for scatter, the Tc-99m activity was over-estimated by as much as 80%. Unmodified TEW underestimated the Tc-99m activity by 13%. With iterative TEW corrections applied in projection space, the Tc-99m activity was estimated within 5% of truth across all activity fractions above 0.15. This is an improvement over the non-iterative TEW, which could not sufficiently correct for scatter in the 0.15 and 0.28 phantoms.
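A schematic version of the iterative TEW idea is sketched below; the window widths, counts, and spill-over fractions are illustrative assumptions (in practice the spill-over of unscattered counts into the scatter windows would come from calibration or simulation), and the exact published scheme may differ in detail.

```python
# Sketch of iterative TEW scatter correction for the Tc-99m photopeak window.
# k_low/k_high are the assumed fractions of true (unscattered) photopeak counts
# that spill into the lower/upper scatter windows. All numbers are illustrative.
def tew_scatter(c_low, c_high, w_low, w_high, w_peak):
    return (c_low / w_low + c_high / w_high) * w_peak / 2.0

def iterative_tew(c_peak, c_low, c_high, w_low, w_high, w_peak,
                  k_low, k_high, n_iter=5):
    primary = c_peak
    for _ in range(n_iter):
        # remove estimated unscattered spill-over from the scatter windows
        s_low = max(c_low - k_low * primary, 0.0)
        s_high = max(c_high - k_high * primary, 0.0)
        scatter = tew_scatter(s_low, s_high, w_low, w_high, w_peak)
        primary = max(c_peak - scatter, 0.0)
    return primary, scatter

primary, scatter = iterative_tew(c_peak=10000, c_low=300, c_high=330,
                                 w_low=3.0, w_high=3.0, w_peak=28.0,
                                 k_low=0.02, k_high=0.03)
print(primary, scatter)
```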
Johnson, J. R.; Grundy, W.M.; Lemmon, M.T.; Bell, J.F.; Johnson, M.J.; Deen, R.; Arvidson, R. E.; Farrand, W. H.; Guinness, E.; Hayes, A.G.; Herkenhoff, K. E.; Seelos, F.; Soderblom, J.; Squyres, S.
2006-01-01
The Panoramic Camera (Pancam) on the Mars Exploration Rover Opportunity acquired visible/near-infrared multispectral observations of soils and rocks under varying viewing and illumination geometries that were modeled using radiative transfer theory to improve interpretations of the microphysical and surface scattering nature of materials in Meridiani Planum. Nearly 25,000 individual measurements were collected of rock and soil units identified by their color and morphologic properties over a wide range of phase angles (0-150°) at Eagle crater, in the surrounding plains, in Endurance crater, and in the plains between Endurance and Erebus craters through Sol 492. Corrections for diffuse skylight incorporated sky models based on observations of atmospheric opacity throughout the mission. Disparity maps created from Pancam stereo images allowed inclusion of local facet orientation estimates. Outcrop rocks overall exhibited the highest single scattering albedos (∼0.9 at 753 nm), and most spherule-rich soils exhibited the lowest (∼0.6 at 753 nm). Macroscopic roughness among outcrop rocks varied but was typically larger than spherule-rich soils. Data sets with sufficient phase angle coverage (resulting in well-constrained Hapke parameters) suggested that models using single-term and two-term Henyey-Greenstein phase functions exhibit a dominantly broad backscattering trend for most undisturbed spherule-rich soils. Rover tracks and other compressed soils exhibited forward scattering, while outcrop rocks were intermediate in their scattering behaviors. Some phase functions exhibited wavelength-dependent trends that may result from variations in thin deposits of airfall dust that occurred during the mission. Copyright 2006 by the American Geophysical Union.
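For reference, one common form of the single-term and two-term Henyey-Greenstein phase functions used in Hapke-style photometric modeling is sketched below; sign conventions vary between papers, so this is a generic illustration rather than the exact parameterization used in the study.

```python
# One common form of the Henyey-Greenstein phase functions used in Hapke-style
# modeling, with g the phase angle in radians (g = 0 is exact backscatter).
# Conventions differ between papers; treat parameters b, c as generic examples.
import numpy as np

def hg_one_term(g, b):
    """Single-term HG; b > 0 produces a backscattering lobe in this convention."""
    return (1 - b ** 2) / (1 - 2 * b * np.cos(g) + b ** 2) ** 1.5

def hg_two_term(g, b, c):
    """Two-term HG: weight c mixes backward and forward lobes of width b."""
    back = (1 - b ** 2) / (1 - 2 * b * np.cos(g) + b ** 2) ** 1.5
    forw = (1 - b ** 2) / (1 + 2 * b * np.cos(g) + b ** 2) ** 1.5
    return 0.5 * (1 + c) * back + 0.5 * (1 - c) * forw

g = np.radians(np.linspace(0, 150, 6))      # phase angles spanning the Pancam range
print(hg_two_term(g, b=0.3, c=0.5))         # mildly backscattering example
```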
Resonance treatment using pin-based pointwise energy slowing-down method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Sooyoung, E-mail: csy0321@unist.ac.kr; Lee, Changho, E-mail: clee@anl.gov; Lee, Deokjung, E-mail: deokjung@unist.ac.kr
A new resonance self-shielding method using a pointwise energy solution has been developed to overcome the drawbacks of the equivalence theory. The equivalence theory uses a crude resonance scattering source approximation, and assumes a spatially constant scattering source distribution inside a fuel pellet. These two assumptions cause a significant error, in that they overestimate the multi-group effective cross sections, especially for ²³⁸U. The new resonance self-shielding method solves pointwise energy slowing-down equations with a sub-divided fuel rod. The method adopts a shadowing effect correction factor and fictitious moderator material to model a realistic pointwise energy solution. The slowing-down solution is used to generate the multi-group cross sections. With various light water reactor problems, it was demonstrated that the new resonance self-shielding method significantly improved accuracy in the reactor parameter calculation with no compromise in computation time, compared to the equivalence theory.
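The end use of such a pointwise solution is the flux-weighted collapse to multi-group cross sections; a minimal sketch with a synthetic resonance and a crude flux shape (not the method's actual slowing-down solution) follows.

```python
# Flux-weighted multi-group collapse of a pointwise cross section:
# sigma_g = int(sigma*phi dE) / int(phi dE) over the group. The cross section
# and flux below are synthetic placeholders, not a real slowing-down solution.
import numpy as np

def collapse(energy, sigma, flux, e_lo, e_hi):
    mask = (energy >= e_lo) & (energy <= e_hi)
    e, s, p = energy[mask], sigma[mask], flux[mask]
    return np.trapz(s * p, e) / np.trapz(p, e)

energy = np.linspace(5.0, 10.0, 2000)                           # eV
sigma = 10.0 + 500.0 / (1.0 + ((energy - 6.67) / 0.03) ** 2)    # toy resonance near 6.67 eV
flux = 1.0 / (energy * np.maximum(sigma, 1.0))                  # crude flux depression at the resonance
print(collapse(energy, sigma, flux, 6.0, 7.0))
```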
Safrani, Avner; Abdulhalim, Ibrahim
2011-06-20
Longitudinal spatial coherence (LSC) is determined by the spatial frequency content of an optical beam. The use of lenses with a high numerical aperture (NA) in full-field optical coherence tomography and a narrowband light source makes the LSC length much shorter than the temporal coherence length, hence suggesting that high-resolution 3D images of biological and multilayered samples can be obtained based on the low LSC. A simplified model is derived, supported by experimental results, which describes the expected interference output signal of multilayered samples when high-NA lenses are used together with a narrowband light source. An expression for the correction factor for the layer thickness determination is found valid for high-NA objectives. Additionally, the method was applied to a strongly scattering layer, demonstrating the potential of this method for high-resolution imaging of scattering media.
NASA Technical Reports Server (NTRS)
Sorensen, Ira J.
1998-01-01
The Thermal Radiation Group, a laboratory in the department of Mechanical Engineering at Virginia Polytechnic Institute and State University, is currently working towards the development of a new technology for cavity-based radiometers. The radiometer consists of a 256-element linear-array thermopile detector mounted on the wall of a mirrored wedge-shaped cavity. The objective of this research is to provide analytical and experimental characterization of the proposed radiometer. A dynamic end-to-end opto-electrothermal model is developed to simulate the performance of the radiometer. Experimental results for prototype thermopile detectors are included. Also presented is the concept of the discrete Green's function to characterize the optical scattering of radiant energy in the cavity, along with a data-processing algorithm to correct for the scattering. Finally, a parametric study of the sensitivity of the discrete Green's function to uncertainties in the surface properties of the cavity is presented.
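If the cavity's scattering is characterized by a discrete Green's function, i.e. a matrix relating the incident distribution to the detector readings, the correction reduces to solving a linear system; the 4x4 matrix below is a made-up example, not the characterized cavity.

```python
# Sketch of a discrete Green's function scatter correction: if G[i, j] gives the
# response of detector element i to radiant energy incident on element j
# (including cavity scattering), the corrected distribution solves G x = y.
# The matrix and incident vector are made-up examples.
import numpy as np

G = np.array([[0.90, 0.05, 0.03, 0.02],
              [0.05, 0.88, 0.05, 0.02],
              [0.02, 0.05, 0.88, 0.05],
              [0.02, 0.03, 0.05, 0.90]])

incident_true = np.array([1.0, 0.2, 0.0, 0.0])   # what actually entered the cavity
measured = G @ incident_true                      # what the thermopile elements report
recovered = np.linalg.solve(G, measured)          # scatter-corrected estimate
print(recovered)                                  # matches incident_true
```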
Collision dynamics of H+ + N2 at low energies based on time-dependent density-functional theory
NASA Astrophysics Data System (ADS)
Yu, W.; Zhang, Y.; Zhang, F. S.; Hutton, R.; Zou, Y.; Gao, C.-Z.; Wei, B.
2018-02-01
Using time-dependent density-functional theory at the level of local density approximation augmented by a self-interaction correction and coupled non-adiabatically to molecular dynamics, we study, from a theoretical perspective, scattering dynamics of the proton in collisions with the N2 molecule at 30 eV. Nine different collision configurations are employed to analyze the proton energy loss spectra, electron depletion, scattering angles and self-interaction effects. Our results agree qualitatively with the experimental data and previous theoretical calculations. The discrepancies are ascribed to the limitation of the theoretical models in use. We find that self-interaction effects can significantly influence the electron capture and the excited diatomic vibrational motion, which is consistent with other calculations. In addition, it is found that the molecular structure can be readily retrieved from the proton energy loss spectra due to a significant momentum transfer in head-on collisions.
NASA Astrophysics Data System (ADS)
Braun, Frank; Schalk, Robert; Heintz, Annabell; Feike, Patrick; Firmowski, Sebastian; Beuermann, Thomas; Methner, Frank-Jürgen; Kränzlin, Bettina; Gretz, Norbert; Rädle, Matthias
2017-07-01
In this report, a quantitative nicotinamide adenine dinucleotide hydrate (NADH) fluorescence measurement algorithm in a liquid tissue phantom using a fiber-optic needle probe is presented. To determine the absolute concentrations of NADH in this phantom, the fluorescence emission spectra at 465 nm were corrected using diffuse reflectance spectroscopy between 600 nm and 940 nm. The patented autoclavable Nitinol needle probe enables the acquisition of multispectral backscattering measurements of ultraviolet, visible, near-infrared and fluorescence spectra. As a phantom, a suspension of calcium carbonate (Calcilit) and water with physiological NADH concentrations between 0 mmol l-1 and 2.0 mmol l-1 was used to mimic human tissue. The light scattering characteristics were adjusted to match the backscattering attributes of human skin by modifying the concentration of Calcilit. To correct the scattering effects caused by the matrices of the samples, an algorithm based on the backscattered remission spectrum was employed to compensate for the influence of multiple scattering on the optical pathway through the dispersed phase. The monitored backscattered visible light was used to correct the fluorescence spectra and thereby to determine the true NADH concentrations at unknown Calcilit concentrations. Despite the simplicity of the presented algorithm, the root-mean-square error of prediction (RMSEP) was 0.093 mmol l-1.
Plane-dependent ML scatter scaling: 3D extension of the 2D simulated single scatter (SSS) estimate.
Rezaei, Ahmadreza; Salvo, Koen; Vahle, Thomas; Panin, Vladimir; Casey, Michael; Boada, Fernando; Defrise, Michel; Nuyts, Johan
2017-07-24
Scatter correction is typically done using a simulation of the single scatter, which is then scaled to account for multiple scatters and other possible model mismatches. This scaling factor is determined by fitting the simulated scatter sinogram to the measured sinogram, using only counts measured along LORs that do not intersect the patient body, i.e. 'scatter-tails'. Extending previous work, we propose to scale the scatter with a plane dependent factor, which is determined as an additional unknown in the maximum likelihood (ML) reconstructions, using counts in the entire sinogram rather than only the 'scatter-tails'. The ML-scaled scatter estimates are validated using a Monte-Carlo simulation of a NEMA-like phantom, a phantom scan with typical contrast ratios of a ⁶⁸Ga-PSMA scan, and 23 whole-body ¹⁸F-FDG patient scans. On average, we observe a 12.2% change in the total amount of tracer activity of the MLEM reconstructions of our whole-body patient database when the proposed ML scatter scales are used. Furthermore, reconstructions using the ML-scaled scatter estimates are found to eliminate the typical 'halo' artifacts that are often observed in the vicinity of high focal uptake regions.
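For comparison, the conventional 'scatter-tails' scaling that the plane-dependent ML approach replaces amounts to a single least-squares factor fitted on LORs outside the body; below is a sketch with placeholder sinograms.

```python
# Conventional tail-fitting baseline (the approach the paper extends): a single
# factor chosen so the simulated single-scatter sinogram matches the measured
# counts on LORs that miss the patient. Sinograms below are placeholders.
import numpy as np

def tail_fit_scale(measured, simulated_scatter, tail_mask):
    num = np.sum(measured[tail_mask] * simulated_scatter[tail_mask])
    den = np.sum(simulated_scatter[tail_mask] ** 2)
    return num / den

rng = np.random.default_rng(3)
sss = rng.uniform(0.5, 1.5, size=(96, 192))            # simulated single-scatter sinogram
measured = 1.7 * sss + rng.normal(0.0, 0.05, size=sss.shape)
tails = np.zeros_like(sss, dtype=bool)
tails[:, :20] = True                                    # LORs outside the body (assumed)
tails[:, -20:] = True
print(tail_fit_scale(measured, sss, tails))             # close to the true factor of 1.7
```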
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mirnov, V. V.; Hartog, D. J. Den; Duff, J.
2014-11-15
At anticipated high electron temperatures in ITER, the effects of electron thermal motion on Thomson scattering (TS), toroidal interferometer/polarimeter (TIP), and poloidal polarimeter (PoPola) diagnostics will be significant and must be accurately treated. The precision of the previous lowest order linear in τ = T_e/(m_e c²) model may be insufficient; we present a more precise model with τ²-order corrections to satisfy the high accuracy required for ITER TIP and PoPola diagnostics. The linear model is extended from Maxwellian to a more general class of anisotropic electron distributions that allows us to take into account distortions caused by equilibrium current, ECRH, and RF current drive effects. The classical problem of the degree of polarization of incoherent Thomson scattered radiation is solved analytically exactly without any approximations for the full range of incident polarizations, scattering angles, and electron thermal motion from non-relativistic to ultra-relativistic. The results are discussed in the context of the possible use of the polarization properties of Thomson scattered light as a method of T_e measurement relevant to ITER operational scenarios.
NASA Astrophysics Data System (ADS)
Sunar, Ulas; Rohrbach, Daniel; Morgan, Janet; Zeitouni, Natalie
2013-03-01
Photodynamic Therapy (PDT) has proven to be an effective treatment option for nonmelanoma skin cancers. The ability to quantify the concentration of drug in the treated area is crucial for effective treatment planning as well as predicting outcomes. We utilized spatial frequency domain imaging to quantify the concentration of protoporphyrin IX (PpIX) in phantoms and in vivo. We corrected the fluorescence for the effects of native tissue absorption and scattering. First we quantified the absorption and scattering of the tissue non-invasively. Then, we corrected the raw fluorescence signal by compensating for the optical properties to obtain the absolute drug concentration. After phantom experiments, we used a basal cell carcinoma (BCC) model in Gli mice to determine optical properties and drug concentration in vivo prior to PDT.
Multiple scattering corrections to the Beer-Lambert law. 2: Detector with a variable field of view.
Zardecki, A; Tam, W G
1982-07-01
The multiple scattering corrections to the Beer-Lambert law in the case of a detector with a variable field of view are analyzed. We introduce transmission functions relating the received radiant power to reference power levels relevant to two different experimental situations. In the first case, the transmission function relates the received power to a reference power level appropriate to a nonattenuating medium. In the second case, the reference power level is established by bringing the receiver to the close-up position with respect to the source. To examine the effect of the variation of the detector field of view the behavior of the gain factor is studied. Numerical results modeling the laser beam propagation in fog, cloud, and rain are presented.
NASA Astrophysics Data System (ADS)
Rosenberg, Phil; Dean, Angela; Williams, Paul; Dorsey, James; Minikin, Andreas; Pickering, Martyn; Petzold, Andreas
2013-04-01
Optical Particle Counters (OPCs) are the de facto standard for in-situ measurements of airborne aerosol size distributions and small cloud particles over a wide size range. This is particularly the case on airborne platforms, where fast response is important. OPCs measure scattered light from individual particles and generally bin particles according to the measured peak amount of light scattered (the OPC's response). Most manufacturers provide a table along with their instrument which indicates the particle diameters that represent the edges of each bin. It is important to correct the particle size reported by OPCs for the refractive index of the particles being measured, which is often not the same as for those used during calibration. However, the OPC's response is not a monotonic function of particle diameter, and obvious problems occur when refractive index corrections are attempted and multiple diameters correspond to the same OPC response. Here we recommend that OPCs be calibrated in terms of particle scattering cross section, as this is a monotonic (usually linear) function of an OPC's response. We present a method for converting a bin's boundaries in terms of scattering cross section into a bin centre and bin width in terms of diameter for any aerosol species for which the scattering properties are known. The relationship between diameter and scattering cross section can be arbitrarily complex and does not need to be monotonic; it can be based on Mie-Lorenz theory or any other scattering theory. Software has been provided on the Sourceforge open source repository for scientific users to implement such methods in their own measurement and calibration routines. As a case study, data are presented from a Passive Cavity Aerosol Spectrometer Probe (PCASP) and a Cloud Droplet Probe (CDP) calibrated using polystyrene latex spheres and glass beads before being deployed as part of the Fennec project to measure airborne dust in the inaccessible regions of the Sahara.
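A sketch of the conversion step is given below; it assumes a scattering-cross-section versus diameter curve for the target aerosol (which in practice would come from Mie-Lorenz theory at the OPC's wavelength and collection angles), uses a placeholder power-law curve, and assumes a monotonic relationship, whereas the full method also treats non-monotonic curves.

```python
# Sketch of converting OPC bin boundaries from scattering cross section back to
# diameter for a target aerosol. The power-law curve is a stand-in for a real
# Mie-Lorenz calculation, and the interpolation assumes a monotonic relationship.
import numpy as np

diam_um = np.linspace(0.1, 3.0, 500)
csca = 1e-13 * diam_um ** 2.5                  # placeholder cross-section curve [m^2]

bin_edges_csca = np.array([2e-14, 8e-14, 3e-13, 1e-12])    # calibrated bin boundaries
bin_edges_diam = np.interp(bin_edges_csca, csca, diam_um)   # map back to diameter

centres = 0.5 * (bin_edges_diam[:-1] + bin_edges_diam[1:])
widths = np.diff(bin_edges_diam)
print(bin_edges_diam, centres, widths)
```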
NASA Astrophysics Data System (ADS)
Konik, Arda; Madsen, Mark T.; Sunderland, John J.
2012-10-01
In human emission tomography, combined PET/CT and SPECT/CT cameras provide accurate attenuation maps for sophisticated scatter and attenuation corrections. Having proven their potential, these scanners are being adapted for small animal imaging using similar correction approaches. However, attenuation and scatter effects in small animal imaging are substantially less than in human imaging. Hence, the value of sophisticated corrections is not obvious for small animal imaging considering the additional cost and complexity of these methods. In this study, using the GATE Monte Carlo package, we simulated the Inveon small animal SPECT (single pinhole collimator) scanner to find the scatter fractions of various sizes of the NEMA-mouse (diameter: 2-5.5 cm, length: 7 cm), NEMA-rat (diameter: 3-5.5 cm, length: 15 cm) and MOBY (diameter: 2.1-5.5 cm, length: 3.5-9.1 cm) phantoms. The simulations were performed for three radionuclides commonly used in small animal SPECT studies: 99mTc (140 keV), 111In (171 keV 90% and 245 keV 94%) and 125I (effective 27.5 keV). For the MOBY phantoms, the total Compton scatter fractions ranged (over the range of phantom sizes) from 4-10% for 99mTc (126-154 keV), 7-16% for 111In (154-188 keV), 3-7% for 111In (220-270 keV) and 17-30% for 125I (15-45 keV), including the scatter contributions from the tungsten collimator, lead shield and air (inside and outside the camera heads). For the NEMA-rat phantoms, the scatter fractions ranged from 10-15% (99mTc), 17-23% (111In: 154-188 keV), 8-12% (111In: 220-270 keV) and 32-40% (125I). Our results suggest that energy window methods based solely on emission data are sufficient for all mouse and most rat studies for 99mTc and 111In. However, more sophisticated methods may be needed for 125I.
Scattering of dark particles with light mediators
NASA Astrophysics Data System (ADS)
Soper, Davison E.; Spannowsky, Michael; Wallace, Chris J.; Tait, Tim M. P.
2014-12-01
We present a treatment of the high energy scattering of dark Dirac fermions from nuclei, mediated by the exchange of a light vector boson. The dark fermions are produced by proton-nucleus interactions in a fixed target and, after traversing shielding that screens out strongly interacting products, appear similarly to neutrino neutral current scattering in a detector. Using the Fermilab experiment E613 as an example, we place limits on a secluded dark matter scenario. Visible scattering in the detector includes both the familiar regime of large momentum transfer to the nucleus (Q2) described by deeply inelastic scattering, as well as small Q2 kinematics described by the exchanged vector mediator fluctuating into a quark-antiquark pair whose interaction with the nucleus is described by a saturation model. We find that the improved description of the low Q2 scattering leads to important corrections, resulting in more robust constraints in a regime where a description entirely in terms of deeply inelastic scattering cannot be trusted.
Performance of SMARTer at Very Low Scattering Vector q-Range Revealed by Monodisperse Nanoparticles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Putra, E. Giri Rachman; Ikram, A.; Bharoto
2008-03-17
A monodisperse nanoparticle sample of polystyrene has been employed to determine the performance of the 36 meter small-angle neutron scattering (SANS) BATAN spectrometer (SMARTer) at the Neutron Scattering Laboratory (NSL), Serpong, Indonesia, in a very low scattering vector q-range. A detector position 18 m from the sample position, a beam stopper of 50 mm in diameter, a neutron wavelength of 5.66 Å, as well as an 18 m-long collimator were set up to reach the very low scattering vector q-range of SMARTer. A polydisperse smeared-spherical particle model was applied to fit the corrected small-angle scattering data of the monodisperse polystyrene nanoparticle sample. A mean particle radius of 610 Å, a volume fraction of 0.0026, and a polydispersity of 0.1 were obtained from the fits. The experimental results from SMARTer are comparable to those from SANS-J (JAEA, Japan), showing that SMARTer is able to reach scattering vectors as low as 0.002 Å^-1.
Estimation of scattering object characteristics for image reconstruction using a nonzero background.
Jin, Jing; Astheimer, Jeffrey; Waag, Robert
2010-06-01
Two methods are described to estimate the boundary of a 2-D penetrable object and the average sound speed in the object. One method is for circular objects centered in the coordinate system of the scattering observation. This method uses an orthogonal function expansion for the scattering. The other method is for noncircular, essentially convex objects. This method uses cross correlation to obtain time differences that determine a family of parabolas whose envelope is the boundary of the object. A curve-fitting method and a phase-based method are described to estimate and correct the offset of an uncentered radial or elliptical object. A method based on the extinction theorem is described to estimate absorption in the object. The methods are applied to calculated scattering from a circular object with an offset and to measured scattering from an offset noncircular object. The results show that the estimated boundaries, sound speeds, and absorption slopes agree very well with independently measured or true values when the assumptions of the methods are reasonably satisfied.
Higher Order Heavy Quark Corrections to Deep-Inelastic Scattering
NASA Astrophysics Data System (ADS)
Blümlein, Johannes; DeFreitas, Abilio; Schneider, Carsten
2015-04-01
The 3-loop heavy flavor corrections to deep-inelastic scattering are essential for consistent next-to-next-to-leading order QCD analyses. We report on the present status of the calculation of these corrections at large virtualities Q2. We also describe a series of mathematical, computer-algebraic and combinatorial methods and special function spaces, needed to perform these calculations. Finally, we briefly discuss the status of measuring αs (MZ), the charm quark mass mc, and the parton distribution functions at next-to-next-to-leading order from the world precision data on deep-inelastic scattering.
NASA Astrophysics Data System (ADS)
Hishiyama, N.; Hoshino, M.; Blanco, F.; García, G.; Tanaka, H.
2017-12-01
We report absolute elastic differential cross sections (DCSs) for electron collisions with phosphorus trifluoride, PF3, molecules (e- + PF3) in the impact energy range of 2.0-200 eV and over a scattering angle range of 10°-150°. Measured angular distributions of scattered electron intensities were normalized by reference to the elastic DCSs of He. Corresponding integral and momentum-transfer cross sections were derived by extrapolating the angular range from 0° to 180° with the help of a modified phase-shift analysis. In addition, owing to the large dipole moment of the molecule, a dipole-Born correction for the forward scattering angles was also applied. As part of this study, independent atom model calculations in combination with the screening corrected additivity rule were also performed for elastic and inelastic (electronic excitation plus ionization) scattering using a complex optical potential method. Rotational excitation cross sections were estimated with a dipole-Born approximation procedure. Vibrational excitations are not considered in this calculation. The theoretical data, at the differential and integral levels, were found to agree reasonably with the present experimental results. Furthermore, we explore the systematics of the elastic DCSs for the four-atom trifluoride molecules XF3 (X = B, N, and P) and the central P atom in PF3, showing that, owing to the comparatively small effect of the F atoms, the present angular distributions of elastic DCSs are essentially dominated by the characteristics of the central P atom at lower impact energies. Finally, these quantitative results for e- - PF3 collisions were compiled together with the previous data available in the literature in order to obtain a cross section dataset for modeling purposes. To describe such a considerable amount of data comprehensively, we first discuss, in this paper, the vibrationally elastic scattering processes, whereas vibrational and electronic excitation will be the subject of our following paper devoted to inelastic collisions.
Scatter correction method for x-ray CT using primary modulation: Phantom studies
Gao, Hewei; Fahrig, Rebecca; Bennett, N. Robert; Sun, Mingshan; Star-Lack, Josh; Zhu, Lei
2010-01-01
Purpose: Scatter correction is a major challenge in x-ray imaging using large area detectors. Recently, the authors proposed a promising scatter correction method for x-ray computed tomography (CT) using primary modulation. Proof of concept was previously illustrated by Monte Carlo simulations and physical experiments on a small phantom with a simple geometry. In this work, the authors provide a quantitative evaluation of the primary modulation technique and demonstrate its performance in applications where scatter correction is more challenging. Methods: The authors first analyze the potential errors of the estimated scatter in the primary modulation method. On two tabletop CT systems, the method is investigated using three phantoms: a Catphan©600 phantom, an anthropomorphic chest phantom, and the Catphan©600 phantom with two annuli. Two different primary modulators are also designed to show the impact of the modulator parameters on the scatter correction efficiency. The first is an aluminum modulator with a weak modulation and a low modulation frequency, and the second is a copper modulator with a strong modulation and a high modulation frequency. Results: On the Catphan©600 phantom in the first study, the method reduces the error of the CT number in the selected regions of interest (ROIs) from 371.4 to 21.9 Hounsfield units (HU); the contrast to noise ratio also increases from 10.9 to 19.2. On the anthropomorphic chest phantom in the second study, which represents a more difficult case due to the high scatter signals and object heterogeneity, the method reduces the error of the CT number from 327 to 19 HU in the selected ROIs and from 31.4% to 5.7% on the overall average. The third study investigates the impact of object size on the efficiency of the method. The scatter-to-primary ratio estimation error on the Catphan©600 phantom without any annulus (20 cm in diameter) is at the level of 0.04; it rises to 0.07 and 0.1 on the phantom with an elliptical annulus (30 cm in the minor axis and 38 cm in the major axis) and with a circular annulus (38 cm in diameter), respectively. Conclusions: In the three phantom studies, good scatter correction performance of the proposed method has been demonstrated using both image comparisons and quantitative analysis. The theory and experiments demonstrate that a strong primary modulation that possesses a low transmission factor and a high modulation frequency is preferred for high scatter correction accuracy. PMID:20229902
Imaging characteristics of scintimammography using parallel-hole and pinhole collimators
NASA Astrophysics Data System (ADS)
Tsui, B. M. W.; Wessell, D. E.; Zhao, X. D.; Wang, W. T.; Lewis, D. P.; Frey, E. C.
1998-08-01
The purpose of the study is to investigate the imaging characteristics of scintimammography (SM) using parallel-hole (PR) and pinhole (PN) collimators in a clinical setting. Experimental data were acquired from a phantom that models the breast with small lesions using a low energy high resolution (LEHR) PR and a PN collimator. At close distances, the PN collimator provides better spatial resolution and higher detection efficiency than the PR collimator, at the expense of a smaller field-of-view (FOV). Detection of small breast lesions can be further enhanced by noise smoothing, field uniformity correction, scatter subtraction and resolution recovery filtering. Monte Carlo (MC) simulation data were generated from the 3D MCAT phantom that realistically models the Tc-99m sestamibi uptake and attenuation distributions in an average female patient. For both PR and PN collimation, the scatter to primary ratio (S/P) decreases from the base of the breast to the nipple and is higher in the left than right breast due to scatter of photons from the heart. Results from the study add to understanding of the imaging characteristics of SM using PR and PN collimators and assist in the design of data acquisition and image processing methods to enhance the detection of breast lesions using SM.
NASA Astrophysics Data System (ADS)
Mao, Zhiyi; Shan, Ruifeng; Wang, Jiajun; Cai, Wensheng; Shao, Xueguang
2014-07-01
Polyphenols in plant samples have been extensively studied because phenolic compounds are ubiquitous in plants and can be used as antioxidants in promoting human health. A method for rapid determination of three phenolic compounds (chlorogenic acid, scopoletin and rutin) in plant samples using near-infrared diffuse reflectance spectroscopy (NIRDRS) is studied in this work. Partial least squares (PLS) regression was used for building the calibration models, and the effects of spectral preprocessing and variable selection on the models were investigated to optimize them. The results show that individual spectral preprocessing or variable selection has little or no influence on the models, but the combination of the techniques can significantly improve them. The combination of continuous wavelet transform (CWT) for removing the variant background, multiplicative scatter correction (MSC) for correcting the scattering effect and randomization test (RT) for selecting the informative variables was found to be the best way of building the optimal models. For validation of the models, the polyphenol contents in an independent sample set were predicted. The correlation coefficients between the predicted values and the contents determined by high performance liquid chromatography (HPLC) analysis are as high as 0.964, 0.948 and 0.934 for chlorogenic acid, scopoletin and rutin, respectively.
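A compact sketch of the multiplicative scatter correction (MSC) step named above, followed by a PLS calibration with scikit-learn. The wavelet background removal and randomization-test variable selection are omitted, and the spectra and reference values are synthetic placeholders, not the chlorogenic acid/scopoletin/rutin data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def msc(spectra, reference=None):
    """Multiplicative scatter correction: regress each spectrum against a
    reference (here the mean spectrum) and remove the offset and slope."""
    ref = spectra.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(spectra)
    for i, spec in enumerate(spectra):
        slope, offset = np.polyfit(ref, spec, 1)   # spec ~ offset + slope*ref
        corrected[i] = (spec - offset) / slope
    return corrected

# Illustrative data: rows = samples, columns = NIR wavelengths; y stands in
# for a compound content determined by HPLC (both arrays are placeholders).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200)).cumsum(axis=1)      # smooth synthetic spectra
y = X[:, 50] * 0.01 + rng.normal(scale=0.1, size=40)

X_msc = msc(X)
pls = PLSRegression(n_components=5).fit(X_msc, y)
print("R^2 on calibration set:", round(pls.score(X_msc, y), 3))
```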
Three-dimensional surface profile intensity correction for spatially modulated imaging
NASA Astrophysics Data System (ADS)
Gioux, Sylvain; Mazhar, Amaan; Cuccia, David J.; Durkin, Anthony J.; Tromberg, Bruce J.; Frangioni, John V.
2009-05-01
We describe a noncontact profile correction technique for quantitative, wide-field optical measurement of tissue absorption (μa) and reduced scattering (μs') coefficients, based on geometric correction of the sample's Lambertian (diffuse) reflectance intensity. Because the projection of structured light onto an object is the basis for both phase-shifting profilometry and modulated imaging, we were able to develop a single instrument capable of performing both techniques. In so doing, the surface of the three-dimensional object could be acquired and used to extract the object's optical properties. The optical properties of flat polydimethylsiloxane (silicone) phantoms with homogenous tissue-like optical properties were extracted, with and without profilometry correction, after vertical translation and tilting of the phantoms at various angles. Objects having a complex shape, including a hemispheric silicone phantom and human fingers, were acquired and similarly processed, with vascular constriction of a finger being readily detectable through changes in its optical properties. Using profilometry correction, the accuracy of extracted absorption and reduced scattering coefficients improved from two- to ten-fold for surfaces having height variations as much as 3 cm and tilt angles as high as 40 deg. These data lay the foundation for employing structured light for quantitative imaging during surgery.
NASA Astrophysics Data System (ADS)
Duan, Xueyang
The objective of this dissertation is to develop forward scattering models for active microwave remote sensing of natural features represented by layered media with rough interfaces. In particular, soil profiles are considered, for which a model of electromagnetic scattering from multilayer rough surfaces with or without buried random media is constructed. Starting from a single rough surface, radar scattering is modeled using the stabilized extended boundary condition method (SEBCM). This method solves the long-standing instability issue of the classical EBCM, and gives three-dimensional full wave solutions over large ranges of surface roughnesses with higher computational efficiency than pure numerical solutions, e.g., method of moments (MoM). Based on this single surface solution, multilayer rough surface scattering is modeled using the scattering matrix approach and the model is used for a comprehensive sensitivity analysis of the total ground scattering as a function of layer separation, subsurface statistics, and sublayer dielectric properties. The buried inhomogeneities such as rocks and vegetation roots are considered for the first time in the forward scattering model. Radar scattering from buried random media is modeled by the aggregate transition matrix using either the recursive transition matrix approach for spherical or short-length cylindrical scatterers, or the generalized iterative extended boundary condition method we developed for long cylinders or root-like cylindrical clusters. These approaches take the field interactions among scatterers into account with high computational efficiency. The aggregate transition matrix is transformed to a scattering matrix for the full solution to the layered-medium problem. This step is based on the near-to-far field transformation of the numerical plane wave expansion of the spherical harmonics and the multipole expansion of plane waves. This transformation consolidates volume scattering from the buried random medium with the scattering from layered structure in general. Combined with scattering from multilayer rough surfaces, scattering contributions from subsurfaces and vegetation roots can be then simulated. Solutions of both the rough surface scattering and random media scattering are validated numerically, experimentally, or both. The experimental validations have been carried out using a laboratory-based transmit-receive system for scattering from random media and a new bistatic tower-mounted radar system for field-based surface scattering measurements.
Multiple scattering corrections to the Beer-Lambert law. 1: Open detector.
Tam, W G; Zardecki, A
1982-07-01
Multiple scattering corrections to the Beer-Lambert law are analyzed by means of a rigorous small-angle solution to the radiative transfer equation. Transmission functions for predicting the received radiant power (a directly measured quantity, in contrast to the spectral radiance in the Beer-Lambert law) are derived. Numerical algorithms and results relating to the multiple scattering effects for laser propagation in fog, cloud, and rain are presented.
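For orientation, the single-scattering Beer-Lambert transmission that such corrections modify can be written, in a generic notation not copied from the paper, as

```latex
L(z) \;=\; L_0 \exp\!\left(-\int_0^{z}\sigma_{\mathrm{ext}}(z')\,\mathrm{d}z'\right) \;=\; L_0\, e^{-\tau},
```

where tau is the optical depth along the path. An open detector with a finite field of view also collects light scattered into small forward angles, so the received power exceeds the value implied by e^{-tau}; the small-angle radiative-transfer solution expresses this excess as a transmission function of the optical depth and the receiver geometry.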
Including Delbrück scattering in GEANT4
NASA Astrophysics Data System (ADS)
Omer, Mohamed; Hajima, Ryoichi
2017-08-01
Elastic scattering of γ-rays is a significant interaction among γ-ray interactions with matter. Therefore, the planning of experiments involving measurements of γ-rays using Monte Carlo simulations usually includes elastic scattering. However, current simulation tools do not provide a complete picture of elastic scattering. The majority of these tools assume Rayleigh scattering is the primary contributor to elastic scattering and neglect other elastic scattering processes, such as nuclear Thomson and Delbrück scattering. Here, we develop a tabulation-based method to simulate elastic scattering in one of the most common open-source Monte Carlo simulation toolkits, GEANT4. We collectively include three processes, Rayleigh scattering, nuclear Thomson scattering, and Delbrück scattering. Our simulation more appropriately uses differential cross sections based on the second-order scattering matrix instead of current data, which are based on the form factor approximation. Moreover, the superposition of these processes is carefully taken into account emphasizing the complex nature of the scattering amplitudes. The simulation covers an energy range of 0.01 MeV ≤ E ≤ 3 MeV and all elements with atomic numbers of 1 ≤ Z ≤ 99. In addition, we validated our simulation by comparing the differential cross sections measured in earlier experiments with those extracted from the simulations. We find that the simulations are in good agreement with the experimental measurements. Differences between the experiments and the simulations are 21% for uranium, 24% for lead, 3% for tantalum, and 8% for cerium at 2.754 MeV. Coulomb corrections to the Delbrück amplitudes may account for the relatively large differences that appear at higher Z values.
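A tabulated differential cross section of this kind can be sampled with standard inverse-CDF (inverse transform) sampling. The sketch below assumes a table of dσ/dΩ versus θ for one energy and element; the table values are placeholders and this is illustrative rather than the GEANT4 implementation.

```python
import numpy as np

def sample_theta(theta_deg, dsigma_domega, rng, n=1):
    """Sample polar scattering angles from a tabulated differential cross
    section using inverse transform sampling.

    The table gives dsigma/dOmega on a grid of theta; the probability density
    in theta carries the solid-angle factor sin(theta).
    """
    theta = np.radians(theta_deg)
    pdf = dsigma_domega * np.sin(theta)
    cdf = np.cumsum(pdf)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])      # normalise to [0, 1]
    u = rng.random(n)
    return np.degrees(np.interp(u, cdf, theta))

# Illustrative table: a forward-peaked elastic distribution (placeholder, not
# the second-order-S-matrix data used in the simulation described above).
grid = np.linspace(1.0, 179.0, 179)
table = 1.0 / (1.0 - 0.95 * np.cos(np.radians(grid)))**2

rng = np.random.default_rng(1)
print(sample_theta(grid, table, rng, n=5))
```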
X-ray coherent scattering tomography of textured material (Conference Presentation)
NASA Astrophysics Data System (ADS)
Zhu, Zheyuan; Pang, Shuo
2017-05-01
Small-angle X-ray scattering (SAXS) measures the signature of angular-dependent coherently scattered X-rays, which contains richer information on material composition and structure than conventional absorption-based computed tomography. SAXS image reconstruction of a 2- or 3-dimensional object based on computed tomography, termed coherent scattering computed tomography (CSCT), enables the detection of spatially resolved, material-specific isotropic scattering signatures inside an extended object, and provides improved contrast for medical diagnosis, security screening, and material characterization applications. However, traditional CSCT methods assume that materials are fine powders or amorphous and possess isotropic scattering profiles, which is not generally true for all materials. Anisotropic scatter cannot be captured using the conventional CSCT method and results in reconstruction errors. To obtain correct information from the sample, we designed a new imaging strategy that incorporates an extra degree of detector motion into X-ray scattering tomography for the detection of anisotropically scattered photons from a series of two-dimensional intensity measurements. Using a table-top, narrow-band X-ray source and a panel detector, we demonstrate the anisotropic scattering profile captured from an extended object and the reconstruction of a three-dimensional object. For materials possessing a well-organized crystalline structure with certain symmetry, the scatter texture is more predictable. We will also discuss compressive schemes and the implementation of data acquisition to improve the collection efficiency and accelerate the imaging process.
Soos, Miroslav; Lattuada, Marco; Sefcik, Jan
2009-11-12
In this work we studied the effect of intracluster multiple-light scattering on the scattering properties of a population of fractal aggregates. To do so, experimental data of diffusion-limited aggregation for three polystyrene latexes with similar surface properties but different primary particle diameters (equal to 118, 420, and 810 nm) were obtained by static light scattering and by means of a spectrophotometer. In parallel, a population balance equation (PBE) model, which takes into account the effect of intracluster multiple-light scattering by solving the T-matrix and the mean-field version of T-matrix, was formulated and validated against time evolution of the root mean radius of gyration,
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blunden, P. G.; Melnitchouk, W.
We examine the two-photon exchange corrections to elastic electron-nucleon scattering within a dispersive approach, including contributions from both nucleon and Δ intermediate states. The dispersive analysis avoids off-shell uncertainties inherent in traditional approaches based on direct evaluation of loop diagrams, and guarantees the correct unitary behavior in the high energy limit. Using empirical information on the electromagnetic nucleon elastic and NΔ transition form factors, we compute the two-photon exchange corrections both algebraically and numerically. Finally, results are compared with recent measurements of e⁺p to e⁻p cross section ratios from the CLAS, VEPP-3 and OLYMPUS experiments.
Transport coefficients of Dirac ferromagnet: Effects of vertex corrections
NASA Astrophysics Data System (ADS)
Fujimoto, Junji
2018-03-01
As a strongly spin-orbit-coupled metallic model with ferromagnetism, we have considered an extended Stoner model to the relativistic regime, named Dirac ferromagnet in three dimensions. In a previous paper [J. Fujimoto and H. Kohno, Phys. Rev. B 90, 214418 (2014), 10.1103/PhysRevB.90.214418], we studied the transport properties giving rise to the anisotropic magnetoresistance (AMR) and the anomalous Hall effect (AHE) with the impurity potential being taken into account only as the self-energy. The effects of the vertex corrections (VCs) to AMR and AHE are reported in this paper. AMR is found not to change quantitatively when the VCs are considered, although the transport lifetime is different from the one-electron lifetime and the charge current includes additional contributions from the correlation with spin currents. The side-jump and the skew-scattering contributions to AHE are also calculated. The skew-scattering contribution is dominant in the clean case as can be seen in the spin Hall effect in the nonmagnetic Dirac electron system.
Studies of porous anodic alumina using spin echo scattering angle measurement
NASA Astrophysics Data System (ADS)
Stonaha, Paul
The properties of the neutron make it a useful probe for scattering experiments. We have developed a method, dubbed SESAME, in which specially designed magnetic fields encode the scattering signal of a neutron beam into the beam's average Larmor phase. A geometry is presented that delivers the correct Larmor phase (to first order), and it is shown that reasonable variations of the geometry do not significantly affect the net Larmor phase. The solenoids are designed using an analytic approximation. Comparison of this approximate function with finite element calculations and Hall probe measurements confirms its validity, allowing for fast computation of the magnetic fields. The coils were built and tested in-house on the NBL-4 instrument, a polarized neutron reflectometer whose construction is another major portion of this work. Neutron scattering experiments using the solenoids are presented, and the scattering signal from porous anodic alumina is investigated in detail. A model using the Born Approximation is developed and compared against the scattering measurements. Using the model, we define the necessary degree of alignment of such samples in a SESAME measurement, and we show how the signal retrieved using SESAME is sensitive to the range of detectable momentum transfer.
NASA Astrophysics Data System (ADS)
Kamikubo, Takashi; Ohnishi, Takayuki; Hara, Shigehiro; Anze, Hirohito; Hattori, Yoshiaki; Tamamushi, Shuichi; Bai, Shufeng; Wang, Jen-Shiang; Howell, Rafael; Chen, George; Li, Jiangwei; Tao, Jun; Wiley, Jim; Kurosawa, Terunobu; Saito, Yasuko; Takigawa, Tadahiro
2010-09-01
In electron beam writing on EUV masks, it has been reported that CD linearity does not show the simple signatures observed with conventional COG (Cr on Glass) masks, because the deviations are caused by electrons scattered from the EUV mask itself, which comprises stacked heavy metals and thick multi-layers. Mask Process Correction (MPC) is well suited to resolving this issue: every pattern is reshaped in MPC, so the number of shots does not increase and the writing time is kept within a reasonable range. In this paper, MPC is extended to modeling for the correction of CD linearity errors on EUV masks. Its effectiveness is verified with simulations and experiments through an actual writing test.
Correction for reflected sky radiance in low-altitude coastal hyperspectral images.
Kim, Minsu; Park, Joong Yong; Kopilevich, Yuri; Tuell, Grady; Philpot, William
2013-11-10
Low-altitude coastal hyperspectral imagery is sensitive to reflections of sky radiance at the water surface. Even in the absence of sun glint, and for a calm water surface, the wide range of viewing angles may result in pronounced, low-frequency variations of the reflected sky radiance across the scan line depending on the solar position. The variation in reflected sky radiance can be obscured by strong high-spatial-frequency sun glint and, at high altitude, by path radiance. However, at low altitudes, the low-spatial-frequency sky radiance effect is frequently significant and is not removed effectively by the typical corrections for sun glint. The reflected sky radiance from the water surface observed by a low-altitude sensor can be modeled, to first approximation, as the sum of multiple-scattered Rayleigh path radiance and the single-scattered direct-solar-beam radiance from aerosol in the lower atmosphere. The path radiance from zenith to the half field of view (FOV) of a typical airborne spectroradiometer has relatively little variation, and its reflected radiance at the detector array results in a flat baseline. Therefore the along-track variation is mostly contributed by the forward single-scattered solar-beam radiance. The scattered solar-beam radiances arrive at the water surface with different incident angles. Thus the reflected radiance received at the detector array corresponds to a certain scattering angle, and its variation is most effectively parameterized using the downward scattering angle (DSA) of the solar beam. Computation of the DSA must account for the roll, pitch, and heading of the platform and the viewing geometry of the sensor, along with the solar ephemeris. Once the DSA image is calculated, the near-infrared (NIR) radiance from selected water scan lines is compared, and a relationship between DSA and NIR radiance is derived. We then apply the relationship to the entire DSA image to create an NIR reference image. Using the NIR reference image and an atmospheric spectral reflectance look-up table, the low-spatial-frequency variation of the water surface-reflected atmospheric contribution is removed.
Wang, Jinyu; Léger, Jean-François; Binding, Jonas; Boccara, A. Claude; Gigan, Sylvain; Bourdieu, Laurent
2012-01-01
Aberrations limit the resolution, signal intensity and achievable imaging depth in microscopy. Coherence-gated wavefront sensing (CGWS) allows the fast measurement of aberrations in scattering samples and therefore the implementation of adaptive corrections. However, CGWS has been demonstrated so far only in weakly scattering samples. We designed a new CGWS scheme based on a Linnik interferometer and a SLED light source, which is able to compensate dispersion automatically and can be implemented on any microscope. In the highly scattering rat brain tissue, where multiply scattered photons falling within the temporal gate of the CGWS can no longer be neglected, we have measured known defocus and spherical aberrations up to a depth of 400 µm. PMID:23082292
Resonant Inverse Compton Scattering Spectra from Highly Magnetized Neutron Stars
NASA Astrophysics Data System (ADS)
Wadiasingh, Zorawar; Baring, Matthew G.; Gonthier, Peter L.; Harding, Alice K.
2018-02-01
Hard, nonthermal, persistent pulsed X-ray emission extending between 10 and ∼150 keV has been observed in nearly 10 magnetars. For inner-magnetospheric models of such emission, resonant inverse Compton scattering of soft thermal photons by ultrarelativistic charges is the most efficient production mechanism. We present angle-dependent upscattering spectra and pulsed intensity maps for uncooled, relativistic electrons injected in inner regions of magnetar magnetospheres, calculated using collisional integrals over field loops. Our computations employ a new formulation of the QED Compton scattering cross section in strong magnetic fields that is physically correct for treating important spin-dependent effects in the cyclotron resonance, thereby producing correct photon spectra. The spectral cutoff energies are sensitive to the choices of observer viewing geometry, electron Lorentz factor, and scattering kinematics. We find that electrons with energies ≲15 MeV will emit most of their radiation below 250 keV, consistent with inferred turnovers for magnetar hard X-ray tails. More energetic electrons still emit mostly below 1 MeV, except for viewing perspectives sampling field-line tangents. Pulse profiles may be singly or doubly peaked dependent on viewing geometry, emission locale, and observed energy band. Magnetic pair production and photon splitting will attenuate spectra to hard X-ray energies, suppressing signals in the Fermi-LAT band. The resonant Compton spectra are strongly polarized, suggesting that hard X-ray polarimetry instruments such as X-Calibur, or a future Compton telescope, can prove central to constraining model geometry and physics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rana, R; Bednarek, D; Rudin, S
2015-06-15
Purpose: Anti-scatter grid-line artifacts are more prominent for high-resolution x-ray detectors since the fraction of a pixel blocked by the grid septa is large. Direct logarithmic subtraction of the artifact pattern is limited by residual scattered radiation, and we investigate an iterative method for scatter correction. Methods: A stationary Smit-Röntgen anti-scatter grid was used with a high resolution Dexela 1207 CMOS X-ray detector (75 µm pixel size) to image an artery block (Nuclear Associates, Model 76-705) placed within a uniform head-equivalent phantom as the scattering source. The image of the phantom was divided by a flat-field image obtained without scatter but with the grid to eliminate grid-line artifacts. Constant scatter values were subtracted from the phantom image before dividing by the averaged flat-field-with-grid image. The standard deviation of pixel values for a fixed region of the resultant images with different subtracted scatter values provided a measure of the remaining grid-line artifacts. Results: A plot of the standard deviation of image pixel values versus the subtracted scatter value shows that the image structure noise reaches a minimum before rising again as the scatter value is increased. This minimum corresponds to a minimization of the grid-line artifacts, as demonstrated in line profile plots obtained through each of the images perpendicular to the grid lines. Artifact-free images of the artery block were obtained with the optimal scatter value obtained by this iterative approach. Conclusion: Residual scatter subtraction can provide improved grid-line artifact elimination when using the flat-field-with-grid "subtraction" technique. The standard deviation of image pixel values can be used to determine the optimal scatter value to subtract to minimize grid-line artifacts with high resolution x-ray imaging detectors. This study was supported by NIH Grant R01EB002873 and an equipment grant from Toshiba Medical Systems Corp.
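A minimal sketch of the scatter-value search described above: trial constants are subtracted, the result is divided by the with-grid flat field, and the constant that minimises the pixel standard deviation in a region crossing the grid lines is kept. The synthetic arrays, grid pattern, and ROI are placeholders, not the actual Dexela images.

```python
import numpy as np

def find_residual_scatter(image, flat_with_grid, roi, scatter_values):
    """Return the trial scatter constant that minimises grid-line artifacts.

    image           : raw phantom image containing grid lines and scatter
    flat_with_grid  : scatter-free flat-field image acquired with the grid
    roi             : tuple of slices selecting a region crossing grid lines
    scatter_values  : iterable of candidate constant scatter levels
    """
    best_s, best_std = None, np.inf
    for s in scatter_values:
        corrected = (image - s) / flat_with_grid
        std = corrected[roi].std()        # grid-line artifact metric
        if std < best_std:
            best_s, best_std = s, std
    return best_s, best_std

# Illustrative use with synthetic arrays.
rng = np.random.default_rng(2)
flat = 1.0 + 0.2 * (np.arange(256) % 4 == 0)       # crude grid pattern
flat = np.tile(flat, (256, 1))
truth = 500.0
img = truth * flat + 80.0 + rng.normal(scale=2.0, size=flat.shape)  # 80 = scatter
roi = (slice(64, 192), slice(64, 192))
print(find_residual_scatter(img, flat, roi, np.arange(0, 160, 5)))
```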
Kassianov, Evgueni; Berg, Larry; Pekour, Mikhail; ...
2018-06-12
We examine the performance of our approach for calculating the total scattering coefficient of both non-absorbing and absorbing aerosol at ambient conditions from aircraft data. Our extended examination involves airborne in situ data collected by the U.S. Department of Energy's (DOE) Gulf Stream 1 aircraft during winter over Cape Cod and the western North Atlantic Ocean as part of the Two-Column Aerosol Project (TCAP). The particle population represented by the winter dataset, in contrast with its summer counterpart, contains more hygroscopic particles and particles with an enhanced ability to absorb sunlight due to the larger fraction of black carbon. Moreover, the winter observations are characterized by more frequent clouds and a larger fraction of super-micron particles. We calculate the model total scattering coefficient at ambient conditions using size spectra measured by optical particle counters (OPCs) and the ambient complex refractive index (RI) estimated from measured chemical composition and relative humidity (RH). We demonstrate that reasonable agreement (~20% on average) between the observed and calculated scattering can be obtained under subsaturated ambient conditions (RH < 80%) by applying both screening for clouds and chemical composition data for the RI-based correction of the OPC-derived size spectra.
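The underlying integration of a binned size distribution into a total scattering coefficient can be sketched as below. The per-bin scattering efficiencies Q_sca are assumed to come from a separate Mie calculation at the ambient refractive index, and all numerical values are illustrative.

```python
import numpy as np

def total_scattering_coefficient(diameters_um, dN_cm3, q_sca):
    """Total scattering coefficient (Mm^-1) from a binned size distribution.

    diameters_um : bin mid-point diameters in micrometres
    dN_cm3       : number concentration per bin in cm^-3
    q_sca        : dimensionless scattering efficiency per bin for the
                   ambient refractive index (from any Mie routine)
    """
    d_m = diameters_um * 1e-6                        # to metres
    n_m3 = dN_cm3 * 1e6                              # to m^-3
    cross_section = q_sca * np.pi * d_m**2 / 4.0     # geometric area * Q_sca
    b_sca_per_m = np.sum(n_m3 * cross_section)       # m^-1
    return b_sca_per_m * 1e6                         # convert to Mm^-1

# Illustrative bins: 0.1-2 um particles with made-up concentrations and Q_sca.
d = np.array([0.15, 0.3, 0.6, 1.2, 2.0])
n = np.array([800.0, 300.0, 60.0, 8.0, 1.0])
q = np.array([0.3, 1.5, 2.8, 2.3, 2.1])
print(f"b_sca ~ {total_scattering_coefficient(d, n, q):.1f} Mm^-1")
```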
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tong, Dudu; Yang, Sichun; Lu, Lanyuan
2016-06-20
Structure modelling via small-angle X-ray scattering (SAXS) data generally requires intensive computations of scattering intensity from any given biomolecular structure, where the accurate evaluation of SAXS profiles using coarse-grained (CG) methods is vital to improve computational efficiency. To date, most CG SAXS computing methods have been based on a single-bead-per-residue approximation but have neglected structural correlations between amino acids. To improve the accuracy of scattering calculations, accurate CG form factors of amino acids are now derived using a rigorous optimization strategy, termed electron-density matching (EDM), to best fit electron-density distributions of protein structures. This EDM method is compared with and tested against other CG SAXS computing methods, and the resulting CG SAXS profiles from EDM agree better with all-atom theoretical SAXS data. By including the protein hydration shell represented by explicit CG water molecules and the correction of protein excluded volume, the developed CG form factors also reproduce the selected experimental SAXS profiles with very small deviations. Taken together, these EDM-derived CG form factors present an accurate and efficient computational approach for SAXS computing, especially when higher molecular details (represented by the q range of the SAXS data) become necessary for effective structure modelling.
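For orientation, most CG SAXS calculations ultimately rest on the Debye formula. The sketch below uses q-independent bead form factors as a simplification (real CG form factors, such as the EDM-derived ones, vary with q), and the coordinates are placeholders.

```python
import numpy as np

def debye_intensity(q_values, coords, form_factors):
    """SAXS intensity via the Debye formula for a coarse-grained bead model.

    I(q) = sum_ij f_i f_j sin(q r_ij) / (q r_ij)
    coords:        (N, 3) bead positions in angstroms
    form_factors:  (N,) q-independent bead form factors (a simplification)
    """
    diff = coords[:, None, :] - coords[None, :, :]
    r = np.sqrt((diff**2).sum(axis=-1))              # pairwise distances
    ff = np.outer(form_factors, form_factors)
    intensity = []
    for q in q_values:
        x = q * r
        sinc = np.where(x > 1e-12, np.sin(x) / np.maximum(x, 1e-12), 1.0)
        intensity.append((ff * sinc).sum())
    return np.array(intensity)

# Illustrative 3-bead "molecule" with unit form factors.
xyz = np.array([[0.0, 0.0, 0.0], [3.8, 0.0, 0.0], [7.6, 0.0, 0.0]])
f = np.ones(3)
q = np.linspace(0.01, 0.5, 5)                        # inverse angstroms
print(debye_intensity(q, xyz, f))
```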
Translucent Radiosity: Efficiently Combining Diffuse Inter-Reflection and Subsurface Scattering.
Sheng, Yu; Shi, Yulong; Wang, Lili; Narasimhan, Srinivasa G
2014-07-01
It is hard to efficiently model the light transport in scenes with translucent objects for interactive applications. The inter-reflection between objects and their environments and the subsurface scattering through the materials intertwine to produce visual effects like color bleeding, light glows, and soft shading. Monte-Carlo based approaches have demonstrated impressive results but are computationally expensive, and faster approaches model either only inter-reflection or only subsurface scattering. In this paper, we present a simple analytic model that combines diffuse inter-reflection and isotropic subsurface scattering. Our approach extends the classical work in radiosity by including a subsurface scattering matrix that operates in conjunction with the traditional form factor matrix. This subsurface scattering matrix can be constructed using analytic, measurement-based or simulation-based models and can capture both homogeneous and heterogeneous translucencies. Using a fast iterative solution to radiosity, we demonstrate scene relighting and dynamically varying object translucencies at near interactive rates.
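One plausible way to arrange such a system is sketched below as a Jacobi-style iteration in which a subsurface scattering matrix redistributes gathered irradiance before re-emission. The matrix shapes, update order, and toy scene are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def translucent_radiosity(emission, reflectance, form_factors, subsurface,
                          n_iters=100):
    """Iteratively solve an extended radiosity system.

    Classical radiosity:  B = E + diag(rho) F B
    Here the gathered irradiance F B is additionally redistributed by a
    subsurface scattering matrix S before being reflected/re-emitted.
    """
    B = emission.copy()
    rho = np.diag(reflectance)
    for _ in range(n_iters):                  # Jacobi-style iteration
        gathered = form_factors @ B           # light arriving at patches
        B = emission + rho @ (subsurface @ gathered)
    return B

# Tiny illustrative scene: 3 patches, weak coupling, mild translucency.
E = np.array([1.0, 0.0, 0.0])                 # only patch 0 emits
rho = np.array([0.5, 0.6, 0.4])
F = np.array([[0.0, 0.2, 0.1],
              [0.2, 0.0, 0.3],
              [0.1, 0.3, 0.0]])
S = np.array([[0.8, 0.1, 0.1],                # rows: where arriving light
              [0.1, 0.8, 0.1],                # leaks to before re-emission
              [0.1, 0.1, 0.8]])
print(translucent_radiosity(E, rho, F, S))
```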
Titan's Surface Composition from Cassini VIMS Solar Occultation Observations
NASA Astrophysics Data System (ADS)
McCord, Thomas; Hayne, Paul; Sotin, Christophe
2013-04-01
Titan's surface is obscured by a thick absorbing and scattering atmosphere, allowing direct observation of the surface within only a few spectral windows in the near-infrared, complicating efforts to identify and map geologically important materials using remote sensing IR spectroscopy. We therefore investigate the atmosphere's infrared transmission with direct measurements using Titan's occultation of the Sun as well as Titan's reflectance measured at differing illumination and observation angles observed by Cassini's Visual and Infrared Mapping Spectrometer (VIMS). We use two important spectral windows: the 2.7-2.8-µm "double window" and the broad 5-µm window. By estimating atmospheric attenuation within these windows, we seek an empirical correction factor that can be applied to VIMS measurements to estimate the true surface reflectance and map inferred compositional variations. Applying the empirical corrections, we correct the VIMS data for the viewing geometry-dependent atmospheric effects to derive the 5-µm reflectance and the 2.8/2.7-µm reflectance ratio. We then compare the corrected reflectances to compounds proposed to exist on Titan's surface. We propose a simple correction to VIMS Titan data to account for atmospheric attenuation and diffuse scattering in the 5-µm and 2.7-2.8-µm windows, generally applicable for airmass < 3.0. The narrow 2.75-µm absorption feature, dividing the window into two sub-windows, present in all on-planet measurements is not present in the occultation data, and its strength is reduced at the cloud tops, suggesting the responsible molecule is concentrated in the lower troposphere or on the surface. Our empirical correction to Titan's surface reflectance yields properties shifted closer to water ice for the majority of the low-to-mid latitude area covered by VIMS measurements. Four compositional units are defined and mapped on Titan's surface based on the positions of data clusters in 5-µm vs. 2.8/2.7-µm scatter plots; a simple ternary mixture of H2O, hydrocarbons and CO2 might explain the reflectance properties of these surface units. The vast equatorial "dune seas" are compositionally very homogeneous, perhaps suggesting transport and mixing of particles over very large distances and/or a very consistent formation process and source material. The compositional branch characterizing Tui Regio and Hotei Regio is consistent with a mixture of typical Titan hydrocarbons and CO2, or possibly methane/ethane; the concentration mechanism proposed is something similar to a terrestrial playa lake evaporite deposit, based on the fact that river channels are known to feed into at least Hotei Regio.
Deformation Measurement In The Hayward Fault Zone Using Partially Correlated Persistent Scatterers
NASA Astrophysics Data System (ADS)
Lien, J.; Zebker, H. A.
2013-12-01
Interferometric synthetic aperture radar (InSAR) is an effective tool for measuring temporal changes in the Earth's surface. By combining SAR phase data collected at varying times and orbit geometries, with InSAR we can produce high accuracy, wide coverage images of crustal deformation fields. Changes in the radar imaging geometry, scatterer positions, or scattering behavior between radar passes cause the measured radar return to differ, leading to a decorrelation phase term that obscures the deformation signal and prevents the use of large baseline data. Here we present a new physically based method of modeling decorrelation from the subset of pixels with the highest intrinsic signal-to-noise ratio, the so-called persistent scatterers (PS). This more complete formulation, which includes both phase and amplitude scintillations, better describes the scattering behavior of partially correlated PS pixels and leads to a more reliable selection algorithm. The new method identifies PS pixels using maximum likelihood signal-to-clutter ratio (SCR) estimation based on the joint interferometric stack phase-amplitude distribution. Our PS selection method is unique in that it considers both phase and amplitude; accounts for correlation between all possible pairs of interferometric observations; and models the effect of spatial and temporal baselines on the stack. We use the resulting maximum likelihood SCR estimate as a criterion for PS selection. We implement the partially correlated persistent scatterer technique to analyze a stack of C-band European Remote Sensing (ERS-1/2) interferometric radar data imaging the Hayward Fault Zone from 1995 to 2000. We show that our technique achieves a better trade-off between PS pixel selection accuracy and network density compared to other PS identification methods, particularly in areas of natural terrain. We then present deformation measurements obtained by the selected PS network. Our results demonstrate that the partially correlated persistent scatterer technique can attain accurate deformation measurements even in areas that suffer decorrelation due to natural terrain. The accuracy of phase unwrapping and subsequent deformation estimation on the spatially sparse PS network depends on both pixel selection accuracy and the density of the network. We find that many additional pixels can be added to the PS list if we are able to correctly identify and add those in which the scattering mechanism exhibits partial, rather than complete, correlation across all radar scenes.
Conjugate adaptive optics with remote focusing in multiphoton microscopy
NASA Astrophysics Data System (ADS)
Tao, Xiaodong; Lam, Tuwin; Zhu, Bingzhao; Li, Qinggele; Reinig, Marc R.; Kubby, Joel
2018-02-01
The small correction volume for conventional wavefront shaping methods limits their application in biological imaging through scattering media. In this paper, we take advantage of conjugate adaptive optics (CAO) and remote focusing (CAORF) to achieve three-dimensional (3D) scanning through a scattering layer with a single correction. Our results show that the proposed system can provide 10 times wider axial field of view compared with a conventional conjugate AO system when 16,384 segments are used on a spatial light modulator. We demonstrate two-photon imaging with CAORF through mouse skull. The fluorescent microspheres embedded under the scattering layers can be clearly observed after applying the correction.
Ion mobilities in diatomic gases: measurement versus prediction with non-specular scattering models.
Larriba, Carlos; Hogan, Christopher J
2013-05-16
Ion/electrical mobility measurements of nanoparticles and polyatomic ions are typically linked to particle/ion physical properties through either application of the Stokes-Millikan relationship or comparison to mobilities predicted from polyatomic models, which assume that gas molecules scatter specularly and elastically from rigid structural models. However, there is a discrepancy between these approaches; when specular, elastic scattering models (i.e., elastic-hard-sphere scattering, EHSS) are applied to polyatomic models of nanometer-scale ions with finite-sized impinging gas molecules, predictions are in substantial disagreement with the Stokes-Millikan equation. To rectify this discrepancy, we developed and tested a new approach for mobility calculations using polyatomic models in which non-specular (diffuse) and inelastic gas-molecule scattering is considered. Two distinct semiempirical models of gas-molecule scattering from particle surfaces were considered. In the first, which has been traditionally invoked in the study of aerosol nanoparticles, 91% of collisions are diffuse and thermally accommodating, and 9% are specular and elastic. In the second, all collisions are considered to be diffuse and accommodating, but the average speed of the gas molecules reemitted from a particle surface is 8% lower than the mean thermal speed at the particle temperature. Both scattering models attempt to mimic exchange between translational, vibrational, and rotational modes of energy during collision, as would be expected during collision between a nonmonoatomic gas molecule and a nonfrozen particle surface. The mobility calculation procedure was applied considering both hard-sphere potentials between gas molecules and the atoms within a particle and the long-range ion-induced dipole (polarization) potential. Predictions were compared to previous measurements in air near room temperature of multiply charged poly(ethylene glycol) (PEG) ions, which range in morphology from compact to highly linear, and singly charged tetraalkylammonium cations. It was found that both non-specular, inelastic scattering rules lead to excellent agreement between predictions and experimental mobility measurements (within 5% of each other) and that polarization potentials must be considered to make correct predictions for high-mobility particles/ions. Conversely, traditional specular, elastic scattering models were found to substantially overestimate the mobilities of both types of ions.
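A sketch of the first scattering rule described above (91% diffuse and thermally accommodating, 9% specular and elastic) applied to a single gas-molecule collision with a locally flat surface element. The thermal-speed handling is simplified and all numerical values are illustrative.

```python
import numpy as np

def post_collision_velocity(v_in, normal, rng, v_thermal,
                            diffuse_fraction=0.91):
    """Return the gas-molecule velocity after hitting a surface element.

    With probability diffuse_fraction the molecule is re-emitted diffusely
    (cosine-weighted direction about the unit surface normal, thermal speed);
    otherwise it reflects specularly and elastically.
    """
    if rng.random() < diffuse_fraction:
        # Cosine-weighted hemisphere sampling about the normal.
        u1, u2 = rng.random(2)
        phi = 2.0 * np.pi * u2
        sin_t = np.sqrt(u1)
        cos_t = np.sqrt(1.0 - u1)
        # Build an orthonormal basis (t1, t2, normal).
        t1 = np.cross(normal, [1.0, 0.0, 0.0])
        if np.linalg.norm(t1) < 1e-8:
            t1 = np.cross(normal, [0.0, 1.0, 0.0])
        t1 /= np.linalg.norm(t1)
        t2 = np.cross(normal, t1)
        direction = (sin_t * np.cos(phi) * t1 + sin_t * np.sin(phi) * t2
                     + cos_t * normal)
        return v_thermal * direction
    # Specular, elastic reflection: flip the normal component.
    return v_in - 2.0 * np.dot(v_in, normal) * normal

rng = np.random.default_rng(3)
n = np.array([0.0, 0.0, 1.0])
v = np.array([200.0, 0.0, -300.0])    # incoming velocity, m/s
print(post_collision_velocity(v, n, rng, v_thermal=450.0))
```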
Detailed validation of the bidirectional effect in various Case I and Case II waters.
Gleason, Arthur C R; Voss, Kenneth J; Gordon, Howard R; Twardowski, Michael; Sullivan, James; Trees, Charles; Weidemann, Alan; Berthon, Jean-François; Clark, Dennis; Lee, Zhong-Ping
2012-03-26
Simulated bidirectional reflectance distribution functions (BRDF) were compared with measurements made just beneath the water's surface. In Case I water, the set of simulations that varied the particle scattering phase function depending on chlorophyll concentration agreed more closely with the data than other models. In Case II water, however, the simulations using fixed phase functions agreed well with the data and were nearly indistinguishable from each other, on average. The results suggest that BRDF corrections in Case II water are feasible using single, average, particle scattering phase functions, but that the existing approach using variable particle scattering phase functions is still warranted in Case I water.
Interplay of threshold resummation and hadron mass corrections in deep inelastic processes
Accardi, Alberto; Anderle, Daniele P.; Ringer, Felix
2015-02-01
We discuss hadron mass corrections and threshold resummation for deep-inelastic scattering $\ell N \to \ell' X$ and semi-inclusive annihilation $e^+ e^- \to h X$ processes, and provide a prescription for how to consistently combine these two corrections while respecting all kinematic thresholds. We find an interesting interplay between threshold resummation and target mass corrections for deep-inelastic scattering at large values of Bjorken $x_B$. In semi-inclusive annihilation, on the contrary, the two considered corrections are relevant in different kinematic regions and do not affect each other. A detailed analysis is nonetheless of interest in the light of recent high-precision data from BaBar and Belle on pion and kaon production, with which we compare our calculations. For both deep-inelastic scattering and single-inclusive annihilation, the size of the combined corrections compared to the precision of world data is shown to be large. Therefore, we conclude that these theoretical corrections are relevant for global QCD fits in order to extract precise parton distributions at large Bjorken $x_B$, and fragmentation functions over the whole kinematic range.
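One standard ingredient of target mass corrections, quoted here for orientation rather than taken from the paper itself, is the replacement of Bjorken $x_B$ by the Nachtmann variable

```latex
\xi \;=\; \frac{2\, x_B}{1 + \sqrt{1 + 4\, x_B^{2} M^{2}/Q^{2}}}\,, \qquad \xi \to x_B \ \ \text{as} \ \ M^{2}/Q^{2} \to 0,
```

where $M$ is the nucleon mass; the corrections grow precisely in the large-$x_B$, moderate-$Q^2$ region where threshold resummation also becomes important.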
Total cross section of furfural by electron impact: Experiment and theory.
Traoré Dubuis, A; Verkhovtsev, A; Ellis-Gibbings, L; Krupa, K; Blanco, F; Jones, D B; Brunger, M J; García, G
2017-08-07
We present experimental total cross sections for electron scattering from furfural in the energy range from 10 to 1000 eV, as measured using a double electrostatic analyzer gas cell electron transmission experiment. These results are compared to theoretical data for furfural, as well as to experimental and theoretical values for the structurally similar molecules furan and tetrahydrofuran. The measured total cross section is in agreement with the theoretical results obtained by means of the independent-atom model with screening corrected additivity rule including interference method. In the region of higher electron energies, from 500 eV to 10 keV, the total electron scattering cross section is also estimated using a semi-empirical model based on the number of electrons and dipole polarizabilities of the molecular targets. Together with the recently measured differential and integral cross sections, and the furfural energy-loss spectra, the present total cross section data nearly complete the data set that is required for numerical simulation of low-energy electron processes in furfural, covering the range of projectile energies from a few electron volts up to 10 keV.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, Bin; Li, Yongbao; Liu, Bo
Purpose: The CyberKnife system is initially equipped with fixed circular cones for stereotactic radiosurgery. Two dose calculation algorithms, Ray-Tracing and Monte Carlo, are available in the supplied treatment planning system. A multileaf collimator system was recently introduced in the latest generation of the system, making arbitrarily shaped treatment fields possible. The purpose of this study is to develop a model-based dose calculation algorithm to better handle the lateral scatter in irregularly shaped small fields for the CyberKnife system. Methods: A pencil beam dose calculation algorithm widely used in linac-based treatment planning systems was modified. The kernel parameters and intensity profile were systematically determined by fitting to the commissioning data. The model was tuned using only a subset of measured data (4 out of 12 cones) and applied to all fixed circular cones for evaluation. The root mean square (RMS) of the difference between the measured and calculated tissue-phantom ratios (TPRs) and off-center ratios (OCRs) was compared. Three cone size correction techniques were developed to better fit the OCRs in the penumbra region, and these were further evaluated against the output factors (OFs). The pencil beam model was further validated against measurement data on the variable dodecagon-shaped Iris collimators and a half-beam blocked field. Comparison with Ray-Tracing and Monte Carlo methods was also performed on a lung SBRT case. Results: The RMS between the measured and calculated TPRs is 0.7% averaged over all cones, with the descending region at 0.5%. The RMSs of the OCR in the infield and outfield regions are both 0.5%. The distance to agreement (DTA) in the OCR penumbra region is 0.2 mm. All three cone size correction models achieve the same improvement in OCR agreement, with the effective source shift model (SSM) preferred due to its ability to predict more accurately the OF variation with the source-to-axis distance (SAD). In the noncircular field validation, the pencil beam results agreed well with the film measurements for both the Iris collimators and the half-beam blocked field, and fared much better than the Ray-Tracing calculation. Conclusions: The authors have developed a pencil beam dose calculation model for the CyberKnife system. The dose calculation accuracy is better than that of the standard linac-based system because the model parameters and geometry correction factors were specifically tuned to the CyberKnife system. The model handles lateral scatter better and has the potential to be used for irregularly shaped fields. Comprehensive validations on MLC-equipped systems are necessary for its clinical implementation. It is fast enough to be used during plan optimization.
NASA Astrophysics Data System (ADS)
Zieliński, P.; More, M.; Cochon, E.; Lefebvre, J.
1996-03-01
The molecule of benzil (diphenylethanedione, C14H10O2) has been approximated by a system of rigid segments to model the lowest-frequency part of its vibrational spectrum. The interactions of internal degrees of freedom have been described with the use of phenomenological force constants. The structure of the trigonal (P3121) phase has then been modelled by means of a temperature-dependent atom-atom potential based on thermal motions of atoms. The potential gives a correct account of the softening of an E-symmetry, zone-center mode which underlies the phase transition to the low-temperature monoclinic phase (P21). The low-frequency modes at the zone center, supposed until now to be difference overtones, have been shown to result from a coupling between internal and external degrees of freedom. A low-frequency soft mode at the point M of the zone border has been found, which explains the behavior of observed peaks in diffuse x-ray scattering experiments. The values and the temperature evolution of the effective elastic constants calculated within the model are in very good agreement with ultrasonic and Brillouin scattering data. The model has been shown to be insufficient for describing the dielectric and piezoelectric properties of benzil.
NASA Astrophysics Data System (ADS)
Bellovary, Jillian M.; Holley-Bockelmann, Kelly; Gultekin, Kayhan; Christensen, Charlotte; Governato, Fabio
2015-01-01
The relation between central black hole mass and stellar spheroid velocity dispersion (the M-Sigma relation) is one of the best-known correlations linking black holes and their host galaxies. However, there is a large amount of scatter at the low-mass end, indicating that the processes that relate black holes to lower-mass hosts are not straightforward. Some of this scatter can be explained by inclination effects; contamination from disk stars along the line of sight can artificially boost velocity dispersion measurements by 30%. Using state-of-the-art simulations, we have developed a correction factor for inclination effects based on purely observational quantities. We present the results of applying these factors to observed samples of galaxies and discuss the effects on the M-Sigma relation.
NASA Astrophysics Data System (ADS)
Teng, Shiwen; Hu, Hanfeng; Liu, Chao; Hu, Fangchao; Wang, Zhenhui; Yin, Yan
2018-07-01
The dual-polarization Doppler weather radar plays an important role in precipitation estimation and weather monitoring. For radar applications, the retrieval of precipitation microphysical characteristics is of great importance and requires assumed scattering properties of raindrops. This study numerically investigates the scattering properties of raindrops and considers the capability of numerical models for raindrop scattering simulations. Besides the widely used spherical and oblate spheroid models, a non-spheroidal model based on realistic raindrop geometries with a flattened base and a smoothly rounded top is also considered. To study the effects of scattering simulations on radar applications, the polarization radar parameters are modeled based on the scattering properties calculated by different scattering methods (i.e. the extended boundary condition T-matrix (EBCM) method and the discrete dipole approximation (DDA)) and given size distributions, and compared with observations of a C-band dual-polarization radar. Note that, when the spatial resolution of the DDA simulation is sufficiently high, the DDA results can be very close to those of the EBCM. Most simulated radar variables, except the copolar correlation coefficient, match closely with radar observations, and the results based on the different non-spheroidal models considered in this study show little difference. The comparison indicates that, even for the C-band radar, the effects of raindrop shape and canting angle on scattering properties are relatively minor owing to relatively small size parameters. However, although a more realistic particle geometry may represent raindrop shape better, given the relatively time-consuming and complex scattering simulations required for such particles, the oblate spheroid model with an appropriate axis-ratio variation is recommended for polarization radar applications.
Fully relativistic form factor for Thomson scattering.
Palastro, J P; Ross, J S; Pollock, B; Divol, L; Froula, D H; Glenzer, S H
2010-03-01
We derive a fully relativistic form factor for Thomson scattering in unmagnetized plasmas valid to all orders in the normalized electron velocity, β = v/c. The form factor is compared to a previously derived expression in which only the lowest-order corrections in β are included [J. Sheffield (Academic Press, New York, 1975)]. The β-expansion approach is sufficient for electrostatic waves with small phase velocities, such as ion-acoustic waves, but for electron-plasma waves the phase velocities can be near luminal. At high phase velocities, the electron motion acquires relativistic corrections including the effective electron mass, the relative motion of the electrons and the electromagnetic wave, and polarization rotation. These relativistic corrections alter the scattered emission of thermal plasma waves, which manifests as changes in both the peak power and the width of the observed Thomson-scattered spectra.
Airborne Aerosol in Situ Measurements during TCAP: A Closure Study of Total Scattering
Kassianov, Evgueni I.; Berg, Larry K.; Pekour, Mikhail S.; ...
2015-07-31
We present here a framework for calculating the total scattering of both non-absorbing and absorbing aerosol at ambient conditions from aircraft data. The synergistically employed aircraft data involve aerosol microphysical, chemical, and optical components and ambient relative humidity measurements. Our framework is developed emphasizing the explicit use of the complementary chemical composition data for estimating the complex refractive index (RI) of particles, and thus obtaining improved ambient size spectra derived from Optical Particle Counter (OPC) measurements. The feasibility of our framework for improved calculations of total aerosol scattering is demonstrated for different ambient conditions with a wide range of relative humidities (from 5 to 80%) using three types of data collected by the U.S. Department of Energy (DOE) G-1 aircraft during the recent Two-Column Aerosol Project (TCAP). Namely, these three types of data are: (1) size distributions measured by an Ultra High Sensitivity Aerosol Spectrometer (UHSAS; 0.06-1 µm), a Passive Cavity Aerosol Spectrometer (PCASP; 0.1-3 µm) and a Cloud and Aerosol Spectrometer (CAS; 0.6->10 µm), (2) chemical composition data measured by an Aerosol Mass Spectrometer (AMS; 0.06-0.6 µm) and a Single Particle Soot Photometer (SP2; 0.06-0.6 µm), and (3) the dry total scattering coefficient measured by a TSI integrating nephelometer at three wavelengths (0.45, 0.55, 0.7 µm) and the scattering enhancement factor measured with a humidification system at three RHs (near 45%, 65% and 90%) at a single wavelength (0.525 µm). We demonstrate that good agreement (~10% on average) between the observed and calculated scattering at these three wavelengths can be obtained using the best available chemical composition data for the RI-based correction of the OPC-derived size spectra. We also demonstrate that ignoring the RI-based correction or using non-representative RI values can cause a substantial underestimation (~40% on average) or overestimation (~35% on average) of the calculated total scattering, respectively.
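As a minimal sketch of the refractive-index step in such a closure framework, the snippet below applies a simple volume-weighted mixing rule to chemical composition data; the component list, densities and refractive indices are illustrative values, not those used in the TCAP analysis.

import numpy as np

# mass loading (ug/m3), density (g/cm3), complex refractive index (all assumed)
components = {
    "ammonium_sulfate": (2.0, 1.77, 1.53 + 0.000j),
    "organics":         (3.0, 1.40, 1.55 + 0.001j),
    "black_carbon":     (0.2, 1.80, 1.95 + 0.790j),
    "water":            (1.5, 1.00, 1.33 + 0.000j),
}

volume = np.array([m / rho for m, rho, _ in components.values()])
ri = np.array([n for _, _, n in components.values()])
volume_fraction = volume / volume.sum()
ri_effective = np.sum(volume_fraction * ri)   # volume-weighted mixing rule
print("effective complex refractive index:", np.round(ri_effective, 3))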
Diaphragm correction factors for the FAC-IR-300 free-air ionization chamber.
Mohammadi, Seyed Mostafa; Tavakoli-Anbaran, Hossein
2018-02-01
A free-air ionization chamber FAC-IR-300, designed by the Atomic Energy Organization of Iran, is used as the primary Iranian national standard for the photon air kerma. For accurate air kerma measurements, the contribution from the scattered photons to the total energy released in the collecting volume must be eliminated. One of the sources of scattered photons is the chamber's diaphragm. In this paper, the diaphragm scattering correction factor, k dia , and the diaphragm transmission correction factor, k tr , were introduced. These factors represent corrections to the measured charge (or current) for the photons scattered from the diaphragm surface and the photons penetrated through the diaphragm volume, respectively. The k dia and k tr values were estimated by Monte Carlo simulations. The simulations were performed for the mono-energetic photons in the energy range of 20 - 300keV. According to the simulation results, in this energy range, the k dia values vary between 0.9997 and 0.9948, and k tr values decrease from 1.0000 to 0.9965. The corrections grow in significance with increasing energy of the primary photons. Copyright © 2017 Elsevier Ltd. All rights reserved.
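A minimal sketch of how such multiplicative correction factors could be applied to a measured charge is given below; the energy grid and intermediate values are hypothetical placeholders, with only the 20 keV and 300 keV endpoints taken from the ranges quoted above.

import numpy as np

energy_kev = np.array([20.0, 100.0, 300.0])      # tabulated photon energies (keV); 100 keV value is hypothetical
k_dia      = np.array([0.9997, 0.998, 0.9948])   # diaphragm scattering correction
k_tr       = np.array([1.0000, 0.999, 0.9965])   # diaphragm transmission correction

def corrected_charge(q_measured, e_kev):
    # Apply both multiplicative corrections at the interpolated photon energy.
    kd = np.interp(e_kev, energy_kev, k_dia)
    kt = np.interp(e_kev, energy_kev, k_tr)
    return q_measured * kd * kt

print(corrected_charge(1.0e-9, 150.0))            # corrected charge in coulombs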
Plane-dependent ML scatter scaling: 3D extension of the 2D simulated single scatter (SSS) estimate
NASA Astrophysics Data System (ADS)
Rezaei, Ahmadreza; Salvo, Koen; Vahle, Thomas; Panin, Vladimir; Casey, Michael; Boada, Fernando; Defrise, Michel; Nuyts, Johan
2017-08-01
Scatter correction is typically done using a simulation of the single scatter, which is then scaled to account for multiple scatters and other possible model mismatches. This scaling factor is determined by fitting the simulated scatter sinogram to the measured sinogram, using only counts measured along LORs that do not intersect the patient body, i.e. ‘scatter-tails’. Extending previous work, we propose to scale the scatter with a plane dependent factor, which is determined as an additional unknown in the maximum likelihood (ML) reconstructions, using counts in the entire sinogram rather than only the ‘scatter-tails’. The ML-scaled scatter estimates are validated using a Monte-Carlo simulation of a NEMA-like phantom, a phantom scan with typical contrast ratios of a 68Ga-PSMA scan, and 23 whole-body 18F-FDG patient scans. On average, we observe a 12.2% change in the total amount of tracer activity of the MLEM reconstructions of our whole-body patient database when the proposed ML scatter scales are used. Furthermore, reconstructions using the ML-scaled scatter estimates are found to eliminate the typical ‘halo’ artifacts that are often observed in the vicinity of high focal uptake regions.
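For contrast with the ML approach, the snippet below sketches the conventional 'scatter-tails' scaling that it replaces: each sinogram plane's simulated single-scatter estimate is scaled to match the measured counts along LORs outside the body. Array shapes, the tail mask and the synthetic data are assumptions for illustration.

import numpy as np

def tail_fit_scale(measured, sss, tail_mask):
    # One scale factor per sinogram plane (axis 0), fitted only in the 'scatter-tails'.
    num = (measured * tail_mask).sum(axis=(1, 2))
    den = (sss * tail_mask).sum(axis=(1, 2))
    return num / np.maximum(den, 1e-12)

planes, angles, bins = 4, 8, 16
rng = np.random.default_rng(0)
measured = rng.poisson(5.0, size=(planes, angles, bins)).astype(float)
sss = np.full((planes, angles, bins), 2.0)        # simulated single-scatter estimate
tails = np.zeros((planes, angles, bins), dtype=bool)
tails[..., :3] = True                             # outermost radial bins assumed to lie
tails[..., -3:] = True                            # outside the body on both sides
scaled_scatter = tail_fit_scale(measured, sss, tails)[:, None, None] * sss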
Lu, Hongwei; Zhang, Chenxi; Sun, Ying; Hao, Zhidong; Wang, Chunfang; Tian, Jiajia
2015-08-01
Predicting the termination of paroxysmal atrial fibrillation (AF) may provide a signal for deciding whether timely intervention in the AF is needed. We propose a novel RdR scatter plot of RR intervals. The abscissa of the RdR scatter plot is the RR interval and the ordinate is the difference between successive RR intervals; the plot therefore combines RR-interval information with successive-difference information and captures more heart rate variability (HRV) information. RdR scatter plot analysis of one-minute RR-interval segments, for 50 segments with non-terminating AF and immediately terminating AF, showed that the points for non-terminating AF were more scattered than those for immediately terminating AF. By dividing the RdR scatter plot into a uniform grid and counting the number of non-empty cells, non-terminating and immediately terminating AF segments were differentiated. Using 49 RR intervals, 17 of 20 learning-set segments and 20 of 30 test-set segments were correctly detected; using 66 RR intervals, 16 of 18 learning-set segments and 20 of 28 test-set segments were correctly detected. The results demonstrate that during the last minute before the termination of paroxysmal AF, the variance of the RR intervals and of the differences between neighboring RR intervals becomes smaller. The termination of paroxysmal AF can thus be predicted from the RdR scatter plot, although the prediction accuracy should be further improved.
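A minimal sketch of the grid-count feature described above: RR intervals are plotted against their successive differences, the plane is partitioned into a uniform grid, and the non-empty cells are counted. The cell size and the decision threshold are illustrative assumptions, not the values used in the study.

import numpy as np

def rdr_nonempty_cells(rr_ms, cell_ms=25.0):
    # Abscissa: RR interval; ordinate: difference between successive RR intervals.
    rr = np.asarray(rr_ms, dtype=float)
    x = rr[:-1]
    y = np.diff(rr)
    ix = np.floor(x / cell_ms).astype(int)
    iy = np.floor(y / cell_ms).astype(int)
    return len(set(zip(ix, iy)))                  # number of occupied grid cells

rr_segment = 600 + 40 * np.random.default_rng(1).standard_normal(49)   # synthetic RR series (ms)
n_cells = rdr_nonempty_cells(rr_segment)
likely_terminating = n_cells < 12                 # hypothetical threshold: fewer occupied cells
print(n_cells, likely_terminating)                # suggest reduced variability before termination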
NASA Astrophysics Data System (ADS)
Porter, J. M.; Jeffries, J. B.; Hanson, R. K.
2009-09-01
A novel three-wavelength mid-infrared laser-based absorption/extinction diagnostic has been developed for simultaneous measurement of temperature and vapor-phase mole fraction in an evaporating hydrocarbon fuel aerosol (vapor and liquid droplets). The measurement technique was demonstrated for an n-decane aerosol with D50 ≈ 3 μm in steady and shock-heated flows with a measurement bandwidth of 125 kHz. Laser wavelengths were selected from FTIR measurements of the C-H stretching band of vapor and liquid n-decane near 3.4 μm (3000 cm-1), and from modeled light scattering from droplets. Measurements were made for vapor mole fractions below 2.3 percent with errors less than 10 percent, and simultaneous temperature measurements over the range 300 K < T < 900 K were made with errors less than 3 percent. The measurement technique is designed to provide accurate values of temperature and vapor mole fraction in evaporating polydispersed aerosols with small mean diameters (D50 < 10 μm), where near-infrared laser-based scattering corrections are prone to error.
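The snippet below sketches the generic two-wavelength ratio-thermometry idea that underlies such a diagnostic under Beer-Lambert absorption: the absorbance ratio depends only on temperature, so temperature is found by inversion and the mole fraction follows from either absorbance. The cross-section functions and numbers are hypothetical placeholders, not n-decane spectroscopic data, and the third-wavelength scattering correction is omitted.

import numpy as np
from scipy.optimize import brentq

def sigma1(T):
    # Assumed absorption cross sections (m^2 per molecule); purely illustrative.
    return 1.0e-24 * np.exp(-300.0 / T)

def sigma2(T):
    return 1.0e-24 * np.exp(-900.0 / T)

def invert(a1, a2, path_m, n_total):
    # Temperature from the absorbance ratio, then mole fraction from Beer-Lambert.
    ratio = a1 / a2
    T = brentq(lambda t: sigma1(t) / sigma2(t) - ratio, 250.0, 1500.0)
    x = a1 / (sigma1(T) * n_total * path_m)
    return T, x

T, x = invert(a1=0.020, a2=0.012, path_m=0.10, n_total=2.4e25)   # n_total in molecules/m^3
print(f"T = {T:.0f} K, vapor mole fraction = {x:.3f}")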
Demonstration of a novel technique to measure two-photon exchange effects in elastic e±p scattering
Moteabbed, Maryam; Niroula, Megh; Raue, Brian A.; ...
2013-08-30
The discrepancy between proton electromagnetic form factors extracted using unpolarized and polarized scattering data is believed to be a consequence of two-photon exchange (TPE) effects. However, the calculations of TPE corrections have significant model dependence, and there is limited direct experimental evidence for such corrections. The TPE contributions depend on the sign of the lepton charge in e±p scattering, but the luminosities of secondary positron beams limited past measurements at large scattering angles, where the TPE effects are believed to be most significant. We present the results of a new experimental technique for making direct e±p comparisons, which has the potential to make precise measurements over a broad range in Q2 and scattering angles. We use the Jefferson Laboratory electron beam and the Hall B photon tagger to generate a clean but untagged photon beam. The photon beam impinges on a converter foil to generate a mixed beam of electrons, positrons, and photons. A chicane is used to separate and recombine the electron and positron beams while the photon beam is stopped by a photon blocker. This provides a combined electron and positron beam, with energies from 0.5 to 3.2 GeV, which impinges on a liquid hydrogen target. The large acceptance CLAS detector is used to identify and reconstruct elastic scattering events, determining both the initial lepton energy and the sign of the scattered lepton. The data were collected in two days with a primary electron beam energy of only 3.3 GeV, limiting the data from this run to smaller values of Q2 and scattering angle. Nonetheless, this measurement yields a data sample for e±p with statistics comparable to those of the best previous measurements. We have shown that we can cleanly identify elastic scattering events and correct for the difference in acceptance for electron and positron scattering. Because we ran with only one polarity for the chicane, we are unable to study the difference between the incoming electron and positron beams. This systematic effect leads to the largest uncertainty in the final ratio of positron to electron scattering: R = 1.027 ± 0.005 ± 0.05 for <Q2> = 0.206 GeV2 and 0.830 ≤ ε ≤ 0.943. We have demonstrated that the tertiary e± beam generated using this technique provides the opportunity for dramatically improved comparisons of e±p scattering, covering a significant range in both Q2 and scattering angle. Combining data with different chicane polarities will allow for detailed studies of the difference between the incoming e+ and e- beams.
Demonstration of a novel technique to measure two-photon exchange effects in elastic e±p scattering
NASA Astrophysics Data System (ADS)
Moteabbed, M.; Niroula, M.; Raue, B. A.; Weinstein, L. B.; Adikaram, D.; Arrington, J.; Brooks, W. K.; Lachniet, J.; Rimal, Dipak; Ungaro, M.; Afanasev, A.; Adhikari, K. P.; Aghasyan, M.; Amaryan, M. J.; Anefalos Pereira, S.; Avakian, H.; Ball, J.; Baltzell, N. A.; Battaglieri, M.; Batourine, V.; Bedlinskiy, I.; Bennett, R. P.; Biselli, A. S.; Bono, J.; Boiarinov, S.; Briscoe, W. J.; Burkert, V. D.; Carman, D. S.; Celentano, A.; Chandavar, S.; Cole, P. L.; Collins, P.; Contalbrigo, M.; Cortes, O.; Crede, V.; D'Angelo, A.; Dashyan, N.; De Vita, R.; De Sanctis, E.; Deur, A.; Djalali, C.; Doughty, D.; Dupre, R.; Egiyan, H.; Fassi, L. El; Eugenio, P.; Fedotov, G.; Fegan, S.; Fersch, R.; Fleming, J. A.; Gevorgyan, N.; Gilfoyle, G. P.; Giovanetti, K. L.; Girod, F. X.; Goetz, J. T.; Gohn, W.; Golovatch, E.; Gothe, R. W.; Griffioen, K. A.; Guidal, M.; Guler, N.; Guo, L.; Hafidi, K.; Hakobyan, H.; Hanretty, C.; Harrison, N.; Heddle, D.; Hicks, K.; Ho, D.; Holtrop, M.; Hyde, C. E.; Ilieva, Y.; Ireland, D. G.; Ishkhanov, B. S.; Isupov, E. L.; Jo, H. S.; Joo, K.; Keller, D.; Khandaker, M.; Kim, A.; Klein, F. J.; Koirala, S.; Kubarovsky, A.; Kubarovsky, V.; Kuhn, S. E.; Kuleshov, S. V.; Lewis, S.; Lu, H. Y.; MacCormick, M.; MacGregor, I. J. D.; Martinez, D.; Mayer, M.; McKinnon, B.; Mineeva, T.; Mirazita, M.; Mokeev, V.; Montgomery, R. A.; Moriya, K.; Moutarde, H.; Munevar, E.; Munoz Camacho, C.; Nadel-Turonski, P.; Nasseripour, R.; Niccolai, S.; Niculescu, G.; Niculescu, I.; Osipenko, M.; Ostrovidov, A. I.; Pappalardo, L. L.; Paremuzyan, R.; Park, K.; Park, S.; Phelps, E.; Phillips, J. J.; Pisano, S.; Pogorelko, O.; Pozdniakov, S.; Price, J. W.; Procureur, S.; Protopopescu, D.; Puckett, A. J. R.; Ripani, M.; Rosner, G.; Rossi, P.; Sabatié, F.; Saini, M. S.; Salgado, C.; Schott, D.; Schumacher, R. A.; Seder, E.; Seraydaryan, H.; Sharabian, Y. G.; Smith, E. S.; Smith, G. D.; Sober, D. I.; Sokhan, D.; Stepanyan, S.; Strauch, S.; Tang, W.; Taylor, C. E.; Tian, Ye; Tkachenko, S.; Voskanyan, H.; Voutier, E.; Walford, N. K.; Wood, M. H.; Zachariou, N.; Zana, L.; Zhang, J.; Zhao, Z. W.; Zonta, I.
2013-08-01
Background: The discrepancy between proton electromagnetic form factors extracted using unpolarized and polarized scattering data is believed to be a consequence of two-photon exchange (TPE) effects. However, the calculations of TPE corrections have significant model dependence, and there is limited direct experimental evidence for such corrections. Purpose: The TPE contributions depend on the sign of the lepton charge in e±p scattering, but the luminosities of secondary positron beams limited past measurements at large scattering angles, where the TPE effects are believed to be most significant. We present the results of a new experimental technique for making direct e±p comparisons, which has the potential to make precise measurements over a broad range in Q2 and scattering angles. Methods: We use the Jefferson Laboratory electron beam and the Hall B photon tagger to generate a clean but untagged photon beam. The photon beam impinges on a converter foil to generate a mixed beam of electrons, positrons, and photons. A chicane is used to separate and recombine the electron and positron beams while the photon beam is stopped by a photon blocker. This provides a combined electron and positron beam, with energies from 0.5 to 3.2 GeV, which impinges on a liquid hydrogen target. The large acceptance CLAS detector is used to identify and reconstruct elastic scattering events, determining both the initial lepton energy and the sign of the scattered lepton. Results: The data were collected in two days with a primary electron beam energy of only 3.3 GeV, limiting the data from this run to smaller values of Q2 and scattering angle. Nonetheless, this measurement yields a data sample for e±p with statistics comparable to those of the best previous measurements. We have shown that we can cleanly identify elastic scattering events and correct for the difference in acceptance for electron and positron scattering. Because we ran with only one polarity for the chicane, we are unable to study the difference between the incoming electron and positron beams. This systematic effect leads to the largest uncertainty in the final ratio of positron to electron scattering: R = 1.027 ± 0.005 ± 0.05 for <Q2> = 0.206 GeV2 and 0.830 ≤ ε ≤ 0.943.
NASA Astrophysics Data System (ADS)
Engström, J. E.; Leck, C.
2011-08-01
The presented filter-based optical method for determination of soot (light-absorbing carbon or Black Carbon, BC) can be implemented in the field under primitive conditions and at low cost. This enables researchers with limited economic means to perform monitoring at remote locations, especially in Asia, where it is much needed. One concern when applying filter-based optical measurements of BC is that they suffer from systematic errors due to the light scattering of non-absorbing particles co-deposited on the filter, such as inorganic salts and mineral dust. In addition to an optical correction for the non-absorbing material, this study provides a protocol for correction of light scattering based on the chemical quantification of the material, which is a novelty. A newly designed photometer was implemented to measure light transmission on particle-accumulating filters, which includes an additional sensor recording backscattered light. The choice of polycarbonate membrane filters avoided high chemical blank values and reduced errors associated with the length of the light path through the filter. Two protocols for corrections were applied to aerosol samples collected at the Maldives Climate Observatory Hanimaadhoo during episodes with either continentally influenced air from the Indian/Arabian subcontinents (winter season) or pristine air from the Southern Indian Ocean (summer monsoon). The two ways of correction (optical and chemical) lowered the particle light absorption of BC by 63 and 61%, respectively, for data from the Arabian Sea sourced group, resulting in median BC absorption coefficients of 4.2 and 3.5 Mm-1. Corresponding values for the South Indian Ocean data were 69 and 97% (0.38 and 0.02 Mm-1). A comparison with other studies in the area indicated an overestimation of their BC levels by up to two orders of magnitude. This raises the necessity for chemical correction protocols in optical filter-based determinations of BC before even the sign of the radiative forcing based on their effects can be assessed.
NASA Technical Reports Server (NTRS)
1990-01-01
Various papers on remote sensing (RS) for the nineties are presented. The general topics addressed include: subsurface methods, radar scattering, oceanography, microwave models, atmospheric correction, passive microwave systems, RS in tropical forests, moderate resolution land analysis, SAR geometry and SNR improvement, image analysis, inversion and signal processing for geoscience, surface scattering, rain measurements, sensor calibration, wind measurements, terrestrial ecology, agriculture, geometric registration, subsurface sediment geology, radar modulation mechanisms, radar ocean scattering, SAR calibration, airborne radar systems, water vapor retrieval, forest ecosystem dynamics, land analysis, multisensor data fusion. Also considered are: geologic RS, RS sensor optical measurements, RS of snow, temperature retrieval, vegetation structure, global change, artificial intelligence, SAR processing techniques, geologic RS field experiment, stochastic modeling, topography and Digital Elevation model, SAR ocean waves, spaceborne lidar and optical, sea ice field measurements, millimeter waves, advanced spectroscopy, spatial analysis and data compression, SAR polarimetry techniques. Also discussed are: plant canopy modeling, optical RS techniques, optical and IR oceanography, soil moisture, sea ice back scattering, lightning cloud measurements, spatial textural analysis, SAR systems and techniques, active microwave sensing, lidar and optical, radar scatterometry, RS of estuaries, vegetation modeling, RS systems, EOS/SAR Alaska, applications for developing countries, SAR speckle and texture.
Post-PRK corneal scatter measurements with a scanning confocal slit photon counter
NASA Astrophysics Data System (ADS)
Taboada, John; Gaines, David; Perez, Mary A.; Waller, Steve G.; Ivan, Douglas J.; Baldwin, J. Bruce; LoRusso, Frank; Tutt, Ronald C.; Perez, Jose; Tredici, Thomas; Johnson, Dan A.
2000-06-01
Increased corneal light scatter or 'haze' has been associated with excimer laser photorefractive surgery of the cornea. The increased scatter can affect visual performance; however, topical steroid treatment post surgery substantially reduces the post PRK scatter. For the treatment and monitoring of the scattering characteristics of the cornea, various methods have been developed to objectively measure the magnitude of the scatter. These methods generally can measure scatter associated with clinically observable levels of haze. For patients with moderate to low PRK corrections receiving steroid treatment, measurement becomes fairly difficult as the haze clinical rating is non observable. The goal of this development was to realize an objective, non-invasive physical measurement that could produce a significant reading for any level including the background present in a normal cornea. As back-scatter is the only readily accessible observable, the instrument is based on this measurement. To achieve this end required the use of a confocal method to bias out the background light that would normally confound conventional methods. A number of subjects with nominal refractive errors in an Air Force study have undergone PRK surgery. A measurable increase in corneal scatter has been observed in these subjects whereas clinical ratings of the haze were noted as level zero. Other favorable aspects of this back-scatter based instrument include an optical capability to perform what is equivalent to an optical A-scan of the anterior chamber. Lens scatter can also be measured.
Correcting the Relative Bias of Light Obscuration and Flow Imaging Particle Counters.
Ripple, Dean C; Hu, Zhishang
2016-03-01
Industry and regulatory bodies desire more accurate methods for counting and characterizing particles. Measurements of proteinaceous-particle concentrations by light obscuration and flow imaging can differ by factors of ten or more. We propose methods to correct the diameters reported by light obscuration and flow imaging instruments. For light obscuration, diameters were rescaled based on characterization of the refractive index of typical particles and a light scattering model for the extinction efficiency factor. The light obscuration models are applicable for either homogeneous materials (e.g., silicone oil) or for chemically homogeneous, but spatially non-uniform aggregates (e.g., protein aggregates). For flow imaging, the method relied on calibration of the instrument with silica beads suspended in water-glycerol mixtures. These methods were applied to a silicone-oil droplet suspension and four particle suspensions containing particles produced from heat stressed and agitated human serum albumin, agitated polyclonal immunoglobulin, and abraded ethylene tetrafluoroethylene polymer. All suspensions were measured by two flow imaging and one light obscuration apparatus. Prior to correction, results from the three instruments disagreed by a factor ranging from 3.1 to 48 in particle concentration over the size range from 2 to 20 μm. Bias corrections reduced the disagreement from an average factor of 14 down to an average factor of 1.5. The methods presented show promise in reducing the relative bias between light obscuration and flow imaging.
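To make the extinction-efficiency reasoning behind the light obscuration correction concrete, the sketch below evaluates van de Hulst's anomalous diffraction approximation for non-absorbing spheres and compares calibration-bead-like and protein-like particles of equal geometric size; the refractive indices and wavelength are illustrative assumptions, not the values characterized in the paper.

import numpy as np

def q_ext_ada(diameter_um, m_relative, wavelength_um=0.589):
    # Extinction efficiency of a non-absorbing sphere (anomalous diffraction approximation).
    x = np.pi * diameter_um / wavelength_um            # size parameter
    rho = 2.0 * x * (m_relative - 1.0)                 # phase-shift parameter
    return 2.0 - 4.0 * np.sin(rho) / rho + 4.0 * (1.0 - np.cos(rho)) / rho**2

d = np.array([2.0, 5.0, 10.0, 20.0])                   # geometric diameters (um)
q_beads = q_ext_ada(d, m_relative=1.59 / 1.33)         # polystyrene-like calibration beads in water
q_protein = q_ext_ada(d, m_relative=1.41 / 1.33)       # assumed protein aggregate in water
print(np.round(q_protein / q_beads, 2))                # unequal efficiencies drive the sizing bias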
Numerical time-domain electromagnetics based on finite-difference and convolution
NASA Astrophysics Data System (ADS)
Lin, Yuanqu
Time-domain methods possess a number of advantages over their frequency-domain counterparts for the solution of wideband, nonlinear, and time-varying electromagnetic scattering and radiation phenomena. Time domain integral equation (TDIE)-based methods, which incorporate the beneficial properties of the integral equation method, are thus well suited for solving broadband scattering problems for homogeneous scatterers. Widespread adoption of TDIE solvers has been retarded relative to other techniques by their inefficiency, inaccuracy and instability. Moreover, two-dimensional (2D) problems are especially problematic, because 2D Green's functions have infinite temporal support, exacerbating these difficulties. This thesis proposes a finite difference delay modeling (FDDM) scheme for the solution of the integral equations of 2D transient electromagnetic scattering problems. The method discretizes the integral equations temporally using first- and second-order finite differences to map Laplace-domain equations into the Z domain before transforming to the discrete time domain. The resulting procedure is unconditionally stable because of the nature of the Laplace- to Z-domain mapping. The first FDDM method developed in this thesis uses second-order Lagrange basis functions with Galerkin's method for spatial discretization. The second application of the FDDM method discretizes the space using a locally corrected Nystrom method, which accelerates the precomputation phase and achieves high-order accuracy. The Fast Fourier Transform (FFT) is applied to accelerate the marching-on-time process in both methods. While FDDM methods demonstrate impressive accuracy and stability in solving wideband scattering problems for homogeneous scatterers, they still have limitations in analyzing interactions between several inhomogeneous scatterers. Therefore, this thesis devises a multi-region finite-difference time-domain (MR-FDTD) scheme based on domain-optimal Green's functions for solving sparsely populated problems. The scheme uses a discrete Green's function (DGF) on the FDTD lattice to truncate the local subregions, and thus reduces reflection error on the local boundary. A continuous Green's function (CGF) is implemented to pass the influence of external fields into each FDTD region, which mitigates the numerical dispersion and anisotropy of standard FDTD. Numerical results illustrate the accuracy and stability of the proposed techniques.
Lückerath, R; Woyde, M; Meier, W; Stricker, W; Schnell, U; Magel, H C; Görres, J; Spliethoff, H; Maier, H
1995-06-20
Mobile coherent anti-Stokes Raman-scattering equipment was applied for single-shot temperature measurements in a pilot-scale furnace with a thermal power of 300 kW, fueled with either natural gas or coal dust. Average temperatures deduced from N(2) coherent anti-Stokes Raman-scattering spectra were compared with thermocouple readings for identical flame conditions. There were evident differences between the results of both techniques, mainly in the case of the natural-gas flame. For the coal-dust flame, a strong influence of an incoherent and a coherent background, which led to remarkable changes in the spectral shape of the N(2)Q-branch spectra, was observed. Therefore an algorithm had to be developed to correct the coal-dust flame spectra before evaluation. The measured temperature profiles at two different planes in the furnace were compared with model calculations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghrayeb, Shadi Z.; Ougouag, Abderrafi M.; Ouisloumen, Mohamed
2014-01-01
A multi-group formulation for the exact neutron elastic scattering kernel is developed. It incorporates the neutron up-scattering effects stemming from the thermal motion of lattice atoms and accounts for them within the resulting effective nuclear cross-section data. The effects pertain essentially to resonant scattering off heavy nuclei. The formulation, implemented into a standalone code, produces effective nuclear scattering data that are then supplied directly to the DRAGON lattice physics code, where the effects on Doppler reactivity and neutron flux are demonstrated. Correct accounting for the crystal lattice effects influences the estimated values for the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. The results show changes of up to -10% in the Doppler temperature feedback coefficients for UOX and MOX LWR fuels compared to the corresponding values derived using the traditional asymptotic elastic scattering kernel. This paper also summarizes the work done on this topic to date.
Transmittance and scattering during wound healing after refractive surgery
NASA Astrophysics Data System (ADS)
Mar, Santiago; Martinez-Garcia, C.; Blanco, J. T.; Torres, R. M.; Gonzalez, V. R.; Najera, S.; Rodriguez, G.; Merayo, J. M.
2004-10-01
Photorefractive keratectomy (PRK) and laser in situ keratomileusis (LASIK) are techniques frequently performed to correct ametropia. The two methods have been compared with respect to their healing, but there has been no comparison of transmittance and light scattering during this process. Scattering during corneal wound healing is due to three parameters: cellular size, cellular density, and the size of the scar. An increase in the angular width of scattering implies a decrease in contrast sensitivity. During wound healing, keratocyte activation is induced and these cells differentiate into fibroblasts and myofibroblasts. Hens were operated on using the PRK and LASIK techniques. The animals used in this experiment were euthanized, and their corneas were immediately removed and placed carefully into a corneal camera support. All optical measurements were made with a scatterometer constructed in our laboratory. Scattering measurements are correlated with the transmittance: the smaller the transmittance, the larger the scattering. The aim of this work is to provide experimental data on corneal transparency and scattering, in order to supply data that allow a more complete model of corneal transparency to be generated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leon, Stephanie M., E-mail: Stephanie.Leon@uth.tmc.edu; Wagner, Louis K.; Brateman, Libby F.
2014-11-01
Purpose: Monte Carlo simulations were performed with the goal of verifying previously published physical measurements characterizing scatter as a function of apparent thickness. A secondary goal was to provide a way of determining what effect tissue glandularity might have on the scatter characteristics of breast tissue. The overall reason for characterizing mammography scatter in this research is the application of these data to an image processing-based scatter-correction program. Methods: MCNPX was used to simulate scatter from an infinitesimal pencil beam using typical mammography geometries and techniques. The spreading of the pencil beam was characterized by two parameters: mean radial extent (MRE) and scatter fraction (SF). The SF and MRE were found as functions of target, filter, tube potential, phantom thickness, and the presence or absence of a grid. The SF was determined by separating scatter and primary by the angle of incidence on the detector, then finding the ratio of the measured scatter to the total number of detected events. The accuracy of the MRE was determined by placing ring-shaped tallies around the impulse and fitting those data to the point-spread function (PSF) equation using the value for MRE derived from the physical measurements. The goodness-of-fit was determined for each data set as a means of assessing the accuracy of the physical MRE data. The effect of breast glandularity on the SF, MRE, and apparent tissue thickness was also considered for a limited number of techniques. Results: The agreement between the physical measurements and the results of the Monte Carlo simulations was assessed. With a grid, the SFs ranged from 0.065 to 0.089, with absolute differences between the measured and simulated SFs averaging 0.02. Without a grid, the range was 0.28-0.51, with absolute differences averaging -0.01. The goodness-of-fit values comparing the Monte Carlo data to the PSF from the physical measurements ranged from 0.96 to 1.00 with a grid and 0.65 to 0.86 without a grid. Analysis of the data suggested that the nongrid data could be better described by a biexponential function than the single exponential used here. The simulations assessing the effect of breast composition on SF and MRE showed only a slight impact on these quantities. When compared to a mix of 50% glandular/50% adipose tissue, the impact of substituting adipose or glandular breast compositions on the apparent thickness of the tissue was about 5%. Conclusions: The findings show agreement between the physical measurements published previously and the Monte Carlo simulations presented here; the resulting data can therefore be used more confidently for an application such as image processing-based scatter correction. The findings also suggest that breast composition does not have a major impact on the scatter characteristics of breast tissue. Application of the scatter data to the development of a scatter-correction software program can be simplified by ignoring the variations in density among breast tissues.
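A first-order sketch of how the two parameters above can drive an image-based correction: a unit-area, radially symmetric single-exponential kernel with the given mean radial extent is scaled by the scatter fraction and the resulting scatter estimate is subtracted. The kernel form, parameter values and single-pass subtraction are illustrative assumptions, not the authors' correction software.

import numpy as np
from scipy.signal import fftconvolve

def scatter_correct(image, pixel_mm, sf, mre_mm):
    # Unit-area exponential kernel with the given mean radial extent, scaled by SF.
    half = int(4 * mre_mm / pixel_mm)
    ax = (np.arange(2 * half + 1) - half) * pixel_mm
    r = np.hypot(*np.meshgrid(ax, ax))
    kernel = np.exp(-r / mre_mm)
    kernel /= kernel.sum()
    scatter = sf * fftconvolve(image, kernel, mode="same")
    return image - scatter                             # first-order primary estimate

projection = np.full((256, 256), 1000.0)               # flat synthetic projection
primary = scatter_correct(projection, pixel_mm=0.2, sf=0.08, mre_mm=8.0)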
Experimental testing of four correction algorithms for the forward scattering spectrometer probe
NASA Technical Reports Server (NTRS)
Hovenac, Edward A.; Oldenburg, John R.; Lock, James A.
1992-01-01
Three number density correction algorithms and one size distribution correction algorithm for the Forward Scattering Spectrometer Probe (FSSP) were compared with data taken by the Phase Doppler Particle Analyzer (PDPA) and an optical number density measuring instrument (NDMI). Of the three number density correction algorithms, the one that compared best to the PDPA and NDMI data was the algorithm developed by Baumgardner, Strapp, and Dye (1985). The algorithm that corrects sizing errors in the FSSP that was developed by Lock and Hovenac (1989) was shown to be within 25 percent of the Phase Doppler measurements at number densities as high as 3000/cc.
Anisotropic reflectance from turbid media. I. Theory.
Neuman, Magnus; Edström, Per
2010-05-01
It is shown that the intensity of light reflected from plane-parallel turbid media is anisotropic in all situations encountered in practice. The anisotropy, in the form of higher intensity at large polar angles, increases when the amount of near-surface bulk scattering is increased, which dominates in optically thin and highly absorbing media. The only situation with isotropic intensity is when a non-absorbing infinitely thick medium is illuminated diffusely. This is the only case where the Kubelka-Munk model gives exact results and there exists an exact translation between Kubelka-Munk and general radiative transfer. This also means that a bulk scattering perfect diffusor does not exist. Angle-resolved models are thus crucial for a correct understanding of light scattering in turbid media. The results are derived using simulations and analytical calculations. It is also shown that there exists an optimal angle for directional detection that minimizes the error introduced when using the Kubelka-Munk model to interpret reflectance measurements with diffuse illumination.
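For reference, the quantity the paper contrasts with angle-resolved radiative transfer is the standard Kubelka-Munk two-flux reflectance of an infinitely thick layer, computed below; it carries no angular information.

import numpy as np

def km_reflectance_inf(k_over_s):
    # Kubelka-Munk: R_inf = 1 + K/S - sqrt((K/S)^2 + 2*K/S)
    a = np.asarray(k_over_s, dtype=float)
    return 1.0 + a - np.sqrt(a * a + 2.0 * a)

for ks in (0.0, 0.01, 0.1, 1.0):
    print(ks, km_reflectance_inf(ks))                  # R_inf -> 1 as absorption vanishes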
3D Tomographic SAR Imaging in Densely Vegetated Mountainous Rural Areas in China and Sweden
NASA Astrophysics Data System (ADS)
Feng, L.; Muller, J. P., , Prof
2017-12-01
3D SAR Tomography (TomoSAR) and 4D SAR Differential Tomography (Diff-TomoSAR) exploit multi-baseline SAR data stacks as an important innovation of SAR interferometry, to unscramble complex scenes with multiple scatterers mapped into the same SAR cell. In addition to 3-D shape reconstruction and deformation monitoring in complex urban/infrastructure areas and recent cryospheric ice investigations, emerging tomographic remote sensing applications include forest applications, e.g. tree height and biomass estimation, sub-canopy topographic mapping, and even search, rescue and surveillance. However, these scenes are characterized by temporal decorrelation of scatterers and by orbital, tropospheric and ionospheric phase distortion, and there is an open issue regarding possible height blurring and accuracy losses for TomoSAR applications, particularly in densely vegetated mountainous rural areas. It is therefore important to develop solutions for temporal decorrelation and for orbital, tropospheric and ionospheric phase distortion. We report here on 3D imaging (especially in vertical layers) over densely vegetated mountainous rural areas using 3-D SAR imaging (SAR tomography) derived from data stacks of X-band COSMO-SkyMed Spotlight and L-band ALOS-1 PALSAR data over Dujiangyan Dam, Sichuan, China, and L- and P-band airborne SAR data (BioSAR 2008 - ESA) in the Krycklan river catchment, northern Sweden. The new TanDEM-X 12 m DEM is used first to assist co-registration of all the data stacks over China. Then, atmospheric correction is assessed using weather model data such as ERA-I, MERRA, MERRA-2 and WRF; linear phase-topography correction and MODIS spectrometer correction will be compared, and ionospheric correction methods are discussed to remove tropospheric and ionospheric delay. The new TomoSAR method with the TanDEM-X 12 m DEM is then described to obtain the number of scatterers inside each pixel, the scattering amplitude and phase of each scatterer, and finally to extract tomograms (imaging), their 3D positions and motion parameters (deformation). A progress report will be given on these different aspects. This work is partially supported by the CSC and the UCL MAPS Dean prize through a PhD studentship at UCL-MSSL.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azuri, Asaf; Pollak, Eli, E-mail: eli.pollak@weizmann.ac.il
2015-07-07
In-plane two and three dimensional diffraction patterns are computed for the vertical scattering of an Ar atom from a frozen LiF(100) surface. Suitable collimation of the incoming wavepacket serves to reveal the quantum mechanical diffraction. The interaction potential is based on a fit to an ab initio potential calculated using density functional theory with dispersion corrections. Due to the potential coupling found between the two horizontal surface directions, there are noticeable differences between the quantum angular distributions computed for two and three dimensional scattering. The quantum results are compared to analogous classical Wigner computations on the same surface and with the same conditions. The classical dynamics largely provides the envelope for the quantum diffractive scattering. The classical results also show that the corrugation along the [110] direction of the surface is smaller than along the [100] direction, in qualitative agreement with experimental observations of unimodal and bimodal scattering for the [110] and [100] directions, respectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Brian B.; Kirkegaard, Marie C.; Miskowiec, Andrew J.
Uranyl fluoride (UO 2F 2) is a hygroscopic powder with two main structural phases: an anhydrous crystal and a partially hydrated crystal of the same R¯3m symmetry. The formally closed-shell electron structure of anhydrous UO 2F 2 is amenable to density functional theory calculations. We use density functional perturbation theory (DFPT) to calculate the vibrational frequencies of the anhydrous crystal structure and employ complementary inelastic neutron scattering and temperature-dependent Raman scattering to validate those frequencies. As a model closed-shell actinide, we investigated the effect of LDA, GGA, and non-local vdW functionals as well as the spherically-averaged Hubbard +U correction on vibrational frequencies, electronic structure, and geometry of anhydrous UO 2F 2. A particular choice of U eff = 5.5 eV yields the correct U-Oyl bond distance and vibrational frequencies for the characteristic Eg and A1g modes that are within the resolution of experiment. Inelastic neutron scattering and Raman scattering suggest a degree of water coupling to the lattice vibrations in the more experimentally accessible partially hydrated UO 2F 2 system, with the symmetric O-U-O stretching vibration shifted approximately 47 cm -1 lower in energy compared to the anhydrous structure. Evidence of water interaction with the uranyl ion is present from a two-peak decomposition of the uranyl stretching vibration in the Raman spectra and anion hydrogen stretching vibrations in the inelastic neutron scattering spectra. A first-order dehydration phase transition temperature is definitively identified to be 125 °C using temperature-dependent Raman scattering.
Vibrational Properties of Anhydrous and Partially Hydrated Uranyl Fluoride
Anderson, Brian B.; Kirkegaard, Marie C.; Miskowiec, Andrew J.; ...
2017-01-01
Uranyl fluoride (UO 2F 2) is a hygroscopic powder with two main structural phases: an anhydrous crystal and a partially hydrated crystal of the same R¯3m symmetry. The formally closed-shell electron structure of anhydrous UO 2F 2 is amenable to density functional theory calculations. We use density functional perturbation theory (DFPT) to calculate the vibrational frequencies of the anhydrous crystal structure and employ complementary inelastic neutron scattering and temperature-dependent Raman scattering to validate those frequencies. As a model closed-shell actinide, we investigated the effect of LDA, GGA, and non-local vdW functionals as well as the spherically-averaged Hubbard +U correction on vibrational frequencies, electronic structure, and geometry of anhydrous UO 2F 2. A particular choice of U eff = 5.5 eV yields the correct U-Oyl bond distance and vibrational frequencies for the characteristic Eg and A1g modes that are within the resolution of experiment. Inelastic neutron scattering and Raman scattering suggest a degree of water coupling to the lattice vibrations in the more experimentally accessible partially hydrated UO 2F 2 system, with the symmetric O-U-O stretching vibration shifted approximately 47 cm -1 lower in energy compared to the anhydrous structure. Evidence of water interaction with the uranyl ion is present from a two-peak decomposition of the uranyl stretching vibration in the Raman spectra and anion hydrogen stretching vibrations in the inelastic neutron scattering spectra. A first-order dehydration phase transition temperature is definitively identified to be 125 °C using temperature-dependent Raman scattering.
Cornejo-Aragón, Luz G; Santos-Cuevas, Clara L; Ocampo-García, Blanca E; Chairez-Oria, Isaac; Diaz-Nieto, Lorenza; García-Quiroz, Janice
2017-01-01
The aim of this study was to develop a semi-automatic image processing algorithm (AIPA) based on the simultaneous information provided by X-ray and radioisotopic images to determine the biokinetic models of Tc-99m radiopharmaceuticals from quantification of image radiation activity in murine models. These radioisotopic images were obtained with a CCD (charge-coupled device) camera coupled to an ultrathin phosphor screen in a preclinical multimodal imaging system (Xtreme, Bruker). The AIPA consisted of different image processing methods for background, scattering and attenuation correction in the activity quantification. A set of parametric identification algorithms was used to obtain the biokinetic models that characterize the interaction between different tissues and the radiopharmaceuticals considered in the study. The set of biokinetic models corresponded to the Tc-99m biodistribution observed in different ex vivo studies, which confirms the contribution of the semi-automatic image processing technique developed in this study.
NASA Astrophysics Data System (ADS)
Moise Famien, Adjoua; Janicot, Serge; Delfin Ochou, Abe; Vrac, Mathieu; Defrance, Dimitri; Sultan, Benjamin; Noël, Thomas
2018-03-01
The objective of this paper is to present a new dataset of bias-corrected CMIP5 global climate model (GCM) daily data over Africa. This dataset was obtained using the cumulative distribution function transform (CDF-t) method, a method that has been applied to several regions and contexts but never to Africa. Here CDF-t has been applied over the period 1950-2099, combining Historical runs and climate change scenarios, for six variables: precipitation, mean near-surface air temperature, near-surface maximum air temperature, near-surface minimum air temperature, surface downwelling shortwave radiation, and wind speed, which are critical variables for agricultural purposes. WFDEI has been used as the reference dataset to correct the GCMs. Evaluation of the results over West Africa has been carried out on a list of priority user-based metrics that were discussed and selected with stakeholders, including simulated yield using a crop model of maize growth. These bias-corrected GCM data have been compared with another available dataset of bias-corrected GCMs that used the WATCH Forcing Data as the reference dataset. The impact of the WFD, WFDEI, and EWEMBI reference datasets has also been examined in detail. It is shown that CDF-t is very effective at removing the biases and reducing the large inter-GCM scatter. Differences with other bias-corrected GCM data are mainly due to the differences among the reference datasets. This is particularly true for surface downwelling shortwave radiation, which has a significant impact on simulated maize yields. Projections of future yields over West Africa are quite different, depending on the bias-correction method used. However, all these projections show a similar relative decreasing trend over the 21st century.
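As a simplified illustration of the family of CDF-based corrections to which CDF-t belongs, the snippet below performs empirical quantile mapping on synthetic daily data; unlike CDF-t, this toy version does not account for the change in the model CDF between the calibration and projection periods.

import numpy as np

def quantile_map(model_hist, ref_hist, model_proj):
    # Map projected model values through F_ref^{-1}(F_model_hist(x)).
    quantiles = np.linspace(0.0, 1.0, 101)
    model_q = np.quantile(model_hist, quantiles)
    ref_q = np.quantile(ref_hist, quantiles)
    p = np.interp(model_proj, model_q, quantiles)      # probability under the model's historical CDF
    return np.interp(p, quantiles, ref_q)              # corresponding reference-distribution value

rng = np.random.default_rng(0)
ref_hist = rng.gamma(2.0, 3.0, 5000)                   # "observed" calibration-period data (synthetic)
model_hist = rng.gamma(2.0, 4.5, 5000)                 # biased GCM, calibration period (synthetic)
model_proj = rng.gamma(2.0, 5.0, 5000)                 # biased GCM, projection period (synthetic)
corrected = quantile_map(model_hist, ref_hist, model_proj)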
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ouyang, L; Yan, H; Jia, X
2014-06-01
Purpose: A moving-blocker-based strategy has shown promising results for scatter correction in cone-beam computed tomography (CBCT). Different parameters of the system design affect its performance in scatter estimation and image reconstruction accuracy. The goal of this work is to optimize the geometric design of the moving blocker system. Methods: In the moving blocker system, a blocker consisting of lead strips is inserted between the x-ray source and the imaged object and moves back and forth along the rotation axis during CBCT acquisition. A CT image of an anthropomorphic pelvic phantom was used in the simulation study. Scatter signal was simulated by Monte Carlo calculation with various combinations of the lead strip width and the gap between neighboring lead strips, ranging from 4 mm to 80 mm (projected at the detector plane). Scatter signal in the unblocked region was estimated by cubic B-spline interpolation from the blocked region. Scatter estimation accuracy was quantified as relative root mean squared error by comparing the interpolated scatter to the Monte Carlo simulated scatter. CBCT was reconstructed by total variation minimization from the unblocked region, under various combinations of the lead strip width and gap. Reconstruction accuracy in each condition was quantified by the CT number error compared to a CBCT reconstructed from unblocked full projection data. Results: Scatter estimation error varied from 0.5% to 2.6% as the lead strip width and the gap varied from 4 mm to 80 mm. CT number error in the reconstructed CBCT images varied from 12 to 44. The highest reconstruction accuracy is achieved when the blocker lead strip width is 8 mm and the gap is 48 mm. Conclusions: Accurate scatter estimation can be achieved over a large range of combinations of lead strip width and gap. However, image reconstruction accuracy is greatly affected by the geometric design of the blocker.
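A minimal sketch of the interpolation step described above: scatter sampled behind the lead strips is interpolated across the unblocked detector columns with a cubic B-spline. The strip geometry matches the best configuration reported (8 mm strips, 48 mm gaps), while the synthetic scatter profile is an illustrative assumption.

import numpy as np
from scipy.interpolate import make_interp_spline

detector_u = np.arange(0.0, 400.0, 1.0)                # detector column position (mm)
true_scatter = 80.0 * np.exp(-((detector_u - 200.0) / 150.0) ** 2)   # synthetic scatter profile

strip_width, gap = 8.0, 48.0                           # projected blocker geometry (mm)
blocked = (detector_u % (strip_width + gap)) < strip_width           # columns behind lead strips

spline = make_interp_spline(detector_u[blocked], true_scatter[blocked], k=3)
scatter_estimate = spline(detector_u)                  # estimated scatter in the unblocked region

err = scatter_estimate[~blocked] - true_scatter[~blocked]
print(f"relative RMSE: {np.sqrt(np.mean(err**2)) / true_scatter.max():.3%}")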
Synchronous atmospheric radiation correction of GF-2 satellite multispectral image
NASA Astrophysics Data System (ADS)
Bian, Fuqiang; Fan, Dongdong; Zhang, Yan; Wang, Dandan
2018-02-01
GF-2 remote sensing products have been widely used in many fields for their high-quality information, which provides technical support for macroeconomic decisions. Atmospheric correction is a necessary part of the preprocessing of quantitative high-resolution remote sensing data; it eliminates the signal interference in the radiation path caused by atmospheric scattering and absorption, and converts apparent reflectance into the real reflectance of the surface targets. To address the problem that current research lacks atmospheric data synchronized and spatially matched with the surface observation image, this study uses MODIS Level 1B synchronous data to represent the synchronous atmospheric conditions, implements the aerosol retrieval and atmospheric correction process, and generates a lookup table for the remote sensing image based on the 6S (Second Simulation of a Satellite Signal in the Solar Spectrum) radiative transfer model to correct the atmospheric effects in multispectral images from the GF-2 satellite PMS-1 payload. Based on the correction results, this paper analyzes the pixel histograms of the reflectance in the four spectral bands of PMS-1 and evaluates the correction results for the different bands. A comparison experiment was then conducted on the same GF-2 image based on QUAC. The average NDVI was computed for different targets, and the NDVI obtained from the two correction results was compared. The influence of adopting synchronous atmospheric data was discussed. The study shows that the synchronous atmospheric parameters significantly improve the quantitative application of GF-2 remote sensing data.
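One common way of applying 6S-derived lookup-table coefficients to convert at-sensor radiance to surface reflectance is sketched below (the xa, xb, xc coefficients that 6S reports for atmospheric correction); the coefficient and radiance values are placeholders, not those retrieved for the GF-2 scene.

import numpy as np

def surface_reflectance(radiance, xa, xb, xc):
    # rho = y / (1 + xc * y), with y = xa * L - xb (per-band 6S correction coefficients).
    y = xa * radiance - xb
    return y / (1.0 + xc * y)

# Hypothetical per-band coefficients and at-sensor radiances (W m-2 sr-1 um-1).
red = surface_reflectance(60.0, xa=0.0032, xb=0.11, xc=0.16)
nir = surface_reflectance(80.0, xa=0.0030, xb=0.09, xc=0.14)
ndvi = (nir - red) / (nir + red)                      # NDVI from the corrected bands
print(f"red = {red:.3f}, nir = {nir:.3f}, NDVI = {ndvi:.2f}")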
Effect of void shape in Czochralski-Si wafers on the intensity of laser-scattering
NASA Astrophysics Data System (ADS)
Takahashi, J.; Kawakami, K.; Nakai, K.
2001-06-01
The shape effect of anisotropic-shaped microvoid defects in Czochralski-grown silicon wafers on the intensity of laser scattering has been investigated. The size and shape of the defects were examined by means of transmission electron microscopy. Octahedral voids in conventional (nitrogen-undoped) wafers showed an almost isotropic scattering property under the incident condition of a p-polarization beam. On the other hand, parallelepiped-plate-shaped voids in nitrogen-doped wafers showed an anisotropic scattering property on both p- and s-polarized components of scattered light, depending strongly on the incident laser direction. The measured results were explained not by scattering calculation using Born approximation but by calculation based on Rayleigh scattering. It was found that the s component is explained by an inclination of a dipole moment induced on a defect from the scattering plane. Furthermore, using numerical electromagnetic analysis it was shown that the asymmetric behavior of the s component on the parallelepiped-plate voids is ascribed to the parallelepiped shape effect. These results suggest that correction of the scattering intensity is necessary to evaluate the size and volume of anisotropic-shaped defects from the scattered intensity.
Local blur analysis and phase error correction method for fringe projection profilometry systems.
Rao, Li; Da, Feipeng
2018-05-20
We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of local blur phenomenon. Local blur caused by global light transport such as camera defocus, projector defocus, and subsurface scattering will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate the phase errors. For defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and local frontal plane assumption, experiments show that the proposed method can effectively alleviate the system errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.
Magota, Keiichi; Shiga, Tohru; Asano, Yukari; Shinyama, Daiki; Ye, Jinghan; Perkins, Amy E; Maniawski, Piotr J; Toyonaga, Takuya; Kobayashi, Kentaro; Hirata, Kenji; Katoh, Chietsugu; Hattori, Naoya; Tamaki, Nagara
2017-12-01
In 3-dimensional PET/CT imaging of the brain with 15O-gas inhalation, high radioactivity in the face mask creates cold artifacts and affects the quantitative accuracy when scatter is corrected by conventional methods (e.g., single-scatter simulation [SSS] with tail-fitting scaling [TFS-SSS]). Here we examined the validity of a newly developed scatter-correction method that combines SSS with a scaling factor calculated by Monte Carlo simulation (MCS-SSS). Methods: We performed phantom experiments and patient studies. In the phantom experiments, a plastic bottle simulating a face mask was attached to a cylindric phantom simulating the brain. The cylindric phantom was filled with 18F-FDG solution (3.8-7.0 kBq/mL). The bottle was filled with nonradioactive air or various levels of 18F-FDG (0-170 kBq/mL). Images were corrected either by TFS-SSS or MCS-SSS using the CT data of the bottle filled with nonradioactive air. We compared the image activity concentration in the cylindric phantom with the true activity concentration. We also performed 15O-gas brain PET based on the steady-state method on patients with cerebrovascular disease to obtain quantitative images of cerebral blood flow and oxygen metabolism. Results: In the phantom experiments, a cold artifact was observed immediately next to the bottle on TFS-SSS images, where the image activity concentrations in the cylindric phantom were underestimated by 18%, 36%, and 70% at bottle radioactivity levels of 2.4, 5.1, and 9.7 kBq/mL, respectively. At higher bottle radioactivity, the image activity concentrations in the cylindric phantom were underestimated by more than 98%. For MCS-SSS, in contrast, the error was within 5% at each bottle radioactivity level, although the images showed slight high-activity artifacts around the bottle when the bottle contained very high radioactivity. In the patient imaging with 15O2 and C15O2 inhalation, cold artifacts were observed on TFS-SSS images, whereas no artifacts were observed on any of the MCS-SSS images. Conclusion: MCS-SSS accurately corrected the scatter in 15O-gas brain PET when the 3-dimensional acquisition mode was used, preventing the generation of cold artifacts, which were observed immediately next to a face mask on TFS-SSS images. The MCS-SSS method will contribute to accurate quantitative assessments. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
NASA Astrophysics Data System (ADS)
Slot Thing, Rune; Bernchou, Uffe; Mainegra-Hing, Ernesto; Hansen, Olfred; Brink, Carsten
2016-08-01
A comprehensive artefact correction method for clinical cone beam CT (CBCT) images acquired for image guided radiation therapy (IGRT) on a commercial system is presented. The method is demonstrated to reduce artefacts and recover CT-like Hounsfield units (HU) in reconstructed CBCT images of five lung cancer patients. Projection-image-based artefact corrections of image lag, detector scatter, body scatter and beam hardening are described and applied to CBCT images of five lung cancer patients. Image quality is evaluated through the visual appearance of the reconstructed images, HU correspondence with the planning CT images, and total volume HU error. Artefacts are reduced and CT-like HUs are recovered in the artefact-corrected CBCT images. Visual inspection confirms that artefacts are indeed suppressed by the proposed method, and the HU root mean square difference between reconstructed CBCTs and the reference CT images is reduced by 31% when using the artefact corrections compared to the standard clinical CBCT reconstruction. A versatile artefact correction method for clinical CBCT images acquired for IGRT has been developed. HU values are recovered in the corrected CBCT images. The proposed method relies on post processing of clinical projection images and does not require patient specific optimisation. It is thus a powerful tool for image quality improvement of large numbers of CBCT images.
Event-Based Processing of Neutron Scattering Data
Peterson, Peter F.; Campbell, Stuart I.; Reuter, Michael A.; ...
2015-09-16
Many of the world's time-of-flight spallation neutron sources are migrating to the recording of individual neutron events. This provides new opportunities in data processing, not least the ability to filter the events by correlating them with logs of the sample environment and other ancillary equipment. This paper describes techniques for processing neutron scattering data acquired in event mode that preserve event information all the way to the final spectrum, including any necessary corrections or normalizations. This results in smaller final errors, while significantly reducing processing time and memory requirements in typical experiments. Results obtained with traditional histogramming techniques are shown for comparison.
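As a hedged illustration of the event-filtering idea mentioned above (a sketch, not the facility software), the snippet below keeps only the events recorded while a sample-environment log was within a chosen window; event_times, log_times and log_values are hypothetical arrays on a common time base, with log_times assumed to be monotonically increasing.

    import numpy as np

    def filter_events(event_times, log_times, log_values, lo, hi):
        # Interpolate the slowly varying log onto each event timestamp,
        # then keep events whose interpolated value lies in [lo, hi].
        at_event = np.interp(event_times, log_times, log_values)
        mask = (at_event >= lo) & (at_event <= hi)
        return event_times[mask], mask

    # kept_times, mask = filter_events(event_times, log_times, log_values, 295.0, 305.0)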
Collision Models for Particle Orbit Code on SSX
NASA Astrophysics Data System (ADS)
Fisher, M. W.; Dandurand, D.; Gray, T.; Brown, M. R.; Lukin, V. S.
2011-10-01
Coulomb collision models are being developed and incorporated into the Hamiltonian particle pushing code (PPC) for applications to the Swarthmore Spheromak eXperiment (SSX). A Monte Carlo model based on that of Takizuka and Abe [JCP 25, 205 (1977)] performs binary collisions between test particles and thermal plasma field particles randomly drawn from a stationary Maxwellian distribution. A field-based electrostatic fluctuation model scatters particles from a spatially uniform random distribution of positive and negative spherical potentials generated throughout the plasma volume. The number, radii, and amplitude of these potentials are chosen to mimic the correct particle diffusion statistics without the use of random particle draws or collision frequencies. An electromagnetic fluctuating field model will be presented, if available. These numerical collision models will be benchmarked against known analytical solutions, including beam diffusion rates and Spitzer resistivity, as well as each other. The resulting collisional particle orbit models will be used to simulate particle collection with electrostatic probes in the SSX wind tunnel, as well as particle confinement in typical SSX fields. This work has been supported by US DOE, NSF and ONR.
NASA Astrophysics Data System (ADS)
Fioretti, Valentina; Mineo, Teresa; Bulgarelli, Andrea; Dondero, Paolo; Ivanchenko, Vladimir; Lei, Fan; Lotti, Simone; Macculi, Claudio; Mantero, Alfonso
2017-12-01
Low energy protons (< 300 keV) can enter the field of view of X-ray telescopes, scatter on their mirror surfaces at small incident angles, and deposit energy on the detector. This phenomenon can cause intense background flares at the focal plane, decreasing the mission observing time (e.g., for the XMM-Newton mission) or, in the most extreme cases, damaging the X-ray detector. A correct model of the physics responsible for grazing-angle scattering is mandatory to evaluate the impact of such events on the performance (e.g., observation time, sensitivity) of future X-ray telescopes such as the ESA ATHENA mission. The Remizovich model describes particles reflected by solids at glancing angles in terms of the Boltzmann transport equation, using the diffuse approximation and the model of continuous slowing down in energy. For the first time this solution, in the approximation of no energy losses, is implemented, verified, and qualitatively validated on top of the Geant4 release 10.2, with the possibility to add a constant energy loss to each interaction. The implementation is verified by comparing the simulated proton distribution to both the theoretical probability distribution and independent ray-tracing simulations. Both the new scattering physics and the Coulomb scattering already built into the official Geant4 distribution are used to reproduce the latest experimental results on grazing-angle proton scattering. At 250 keV, multiple scattering predicts large proton scattering angles and is not consistent with the observations. Among the tested models, single scattering seems to reproduce the scattering efficiency best at the three energies, but the energy loss obtained at small scattering angles is significantly lower than the experimental values. In general, the energy losses obtained in the experiment are higher than those obtained in the simulation. The experimental data are not completely representative of the soft proton scattering experienced by current X-ray telescopes because of the lack of measurements at low energies (< 200 keV) and small reflection angles, so we are not able to identify any of the tested models as the one that can certainly reproduce the scattering behavior of low energy protons expected for the ATHENA mission. We can, however, discard multiple scattering as the model able to reproduce soft proton funnelling, and affirm that Coulomb single scattering can represent, until further measurements at lower energies are available, the best approximation of the proton scattered angular distribution at the exit of X-ray optics.
Stochastic Sampling in the IMF of Galactic Open Clusters
NASA Astrophysics Data System (ADS)
Kay, Christina; Hancock, M.; Canalizo, G.; Smith, B. J.; Giroux, M. L.
2010-01-01
We sought observational evidence of the effects of stochastic sampling of the initial mass function by investigating the integrated colors of a sample of Galactic open clusters. In particular we looked for scatter in the integrated (V-K) color, as previous research resulted in little scatter in the (U-B) and (B-V) colors. Combining data from WEBDA and 2MASS we determined three different colors for 287 open clusters. Of these clusters, 39 have minimum uncertainties in age and formed a standard set. A plot of the (V-K) color versus age showed much more scatter than the (U-B) versus age. We also divided the sample into two groups based on a lowest luminosity limit which is a function of age and V magnitude. We expected the group of clusters fainter than this limit to show more scatter than the brighter group. Assuming the published ages, we compared the reddening corrected observed colors to those predicted by Starburst99. The presence of stochastic sampling should increase scatter in the distribution of the differences between observed and model colors of the fainter group relative to the brighter group. However, we found that K-S tests cannot rule out that the distributions of color difference for the brighter and fainter sets come from the same parent distribution. This indistinguishability may result from uncertainties in the parameters used to define the groups. This result constrains the size of the effects of stochastic sampling of the initial mass function.
Computing the scatter component of mammographic images.
Highnam, R P; Brady, J M; Shepstone, B J
1994-01-01
The authors build upon a technical report (Tech. Report OUEL 2009/93, Engng. Sci., Oxford Uni., Oxford, UK, 1993) in which they proposed a model of the mammographic imaging process for which scattered radiation is a key degrading factor. Here, the authors propose a way of estimating the scatter component of the signal at any pixel within a mammographic image, and they use this estimate for model-based image enhancement. The first step is to extend the authors' previous model to divide breast tissue into "interesting" (fibrous/glandular/cancerous) tissue and fat. The scatter model is then based on the idea that the amount of scattered radiation reaching a point is related to the energy imparted to the surrounding neighbourhood. This complex relationship is approximated using published empirical data, and it varies with the size of the breast being imaged. The approximation is further complicated by needing to take account of extra-focal radiation and breast edge effects. The approximation takes the form of a weighting mask which is convolved with the total signal (primary and scatter) to give a value which is input to a "scatter function", approximated using three reference cases, and which returns a scatter estimate. Given a scatter estimate, the more important primary component can be calculated and used to create an image recognizable by a radiologist. The images resulting from this process are clearly enhanced, and model verification tests based on an estimate of the thickness of interesting tissue present proved to be very successful. A good scatter model opens the way for further processing to remove the effects of other degrading factors, such as beam hardening.
NASA Astrophysics Data System (ADS)
Gurrala, Praveen; Downs, Andrew; Chen, Kun; Song, Jiming; Roberts, Ron
2018-04-01
Full wave scattering models for ultrasonic waves are necessary for the accurate prediction of voltage signals received from complex defects/flaws in practical nondestructive evaluation (NDE) measurements. We propose the high-order Nyström method accelerated by the multilevel fast multipole algorithm (MLFMA) as an improvement to the state-of-the-art full-wave scattering models that are based on boundary integral equations. We present numerical results demonstrating improvements in simulation time and memory requirement. Particularly, we demonstrate the need for higher order geometry and field approximation in modeling NDE measurements. Also, we illustrate the importance of full-wave scattering models using experimental pulse-echo data from a spherical inclusion in a solid, which cannot be modeled accurately by approximation-based scattering models such as the Kirchhoff approximation.
Intrinsic to extrinsic phonon lifetime transition in a GaAs-AlAs superlattice.
Hofmann, F; Garg, J; Maznev, A A; Jandl, A; Bulsara, M; Fitzgerald, E A; Chen, G; Nelson, K A
2013-07-24
We have measured the lifetimes of two zone-center longitudinal acoustic phonon modes, at 320 and 640 GHz, in a 14 nm GaAs/2 nm AlAs superlattice structure. By comparing measurements at 296 and 79 K we separate the intrinsic contribution to phonon lifetime determined by phonon-phonon scattering from the extrinsic contribution due to defects and interface roughness. At 296 K, the 320 GHz phonon lifetime has approximately equal contributions from intrinsic and extrinsic scattering, whilst at 640 GHz it is dominated by extrinsic effects. These measurements are compared with intrinsic and extrinsic scattering rates in the superlattice obtained from first-principles lattice dynamics calculations. The calculated room-temperature intrinsic lifetime of longitudinal phonons at 320 GHz is in agreement with the experimentally measured value of 0.9 ns. The model correctly predicts the transition from predominantly intrinsic to predominantly extrinsic scattering; however the predicted transition occurs at higher frequencies. Our analysis indicates that the 'interfacial atomic disorder' model is not entirely adequate and that the observed frequency dependence of the extrinsic scattering rate is likely to be determined by a finite correlation length of interface roughness.
NASA Astrophysics Data System (ADS)
Hu, Yingtian; Liu, Chao; Wang, Xiaoping; Zhao, Dongdong
2018-06-01
At present, general scatter handling methods are unsatisfactory when scatter and fluorescence seriously overlap in the excitation-emission matrix. In this study, an adaptive method for scatter handling of fluorescence data is proposed. First, the Raman scatter was corrected by subtracting the baseline of deionized water, which was collected in each experiment to adapt to intensity fluctuations. Then, the degree of spectral overlap between Rayleigh scatter and fluorescence was classified into three categories based on the distance between the spectral peaks. The corresponding algorithms, including setting to zero and fitting on one or both sides, were applied after evaluating the degree of overlap for each individual emission spectrum. The proposed method minimizes the number of fitting and interpolation processes, which reduces complexity, saves time, avoids overfitting, and, most importantly, preserves the authenticity of the data. Furthermore, the effectiveness of this procedure for subsequent PARAFAC analysis was assessed and compared to Delaunay interpolation in experiments with four typical organic chemicals and real water samples. Using this method, we conducted long-term monitoring of tap water and of river water near a dyeing and printing plant. This method can be used to improve adaptability and accuracy in the scatter handling of fluorescence data.
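A minimal sketch of the two scatter-handling steps described above is given below; it assumes an excitation-by-emission matrix eem indexed as [excitation, emission], uses illustrative names, and replaces the paper's overlap-dependent choice among zeroing and one- or two-sided fitting with simple zeroing.

    import numpy as np

    def subtract_water_blank(eem_sample, eem_blank):
        # Raman scatter correction: subtract the deionized-water baseline
        # collected in the same run (adapts to intensity fluctuations).
        return eem_sample - eem_blank

    def zero_rayleigh(eem, em_wl, ex_wl, half_width=10.0):
        # Crude Rayleigh handling: zero emission points within +/- half_width nm
        # of each excitation wavelength (first-order Rayleigh line only).
        out = eem.copy()
        for i, ex in enumerate(ex_wl):
            out[i, np.abs(em_wl - ex) <= half_width] = 0.0
        return out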
A Hydrodynamic Theory for Spatially Inhomogeneous Semiconductor Lasers. 2; Numerical Results
NASA Technical Reports Server (NTRS)
Li, Jianzhong; Ning, C. Z.; Biegel, Bryan A. (Technical Monitor)
2001-01-01
We present numerical results of the diffusion coefficients (DCs) in the coupled diffusion model derived in the preceding paper for a semiconductor quantum well. These include self and mutual DCs in the general two-component case, as well as density- and temperature-related DCs under the single-component approximation. The results are analyzed from the viewpoint of free Fermi gas theory with many-body effects incorporated. We discuss in detail the dependence of these DCs on densities and temperatures in order to identify different roles played by the free carrier contributions including carrier statistics and carrier-LO phonon scattering, and many-body corrections including bandgap renormalization and electron-hole (e-h) scattering. In the general two-component case, it is found that the self- and mutual- diffusion coefficients are determined mainly by the free carrier contributions, but with significant many-body corrections near the critical density. Carrier-LO phonon scattering is dominant at low density, but e-h scattering becomes important in determining their density dependence above the critical electron density. In the single-component case, it is found that many-body effects suppress the density coefficients but enhance the temperature coefficients. The modification is of the order of 10% and reaches a maximum of over 20% for the density coefficients. Overall, temperature elevation enhances the diffusive capability or DCs of carriers linearly, and such an enhancement grows with density. Finally, the complete dataset of various DCs as functions of carrier densities and temperatures provides necessary ingredients for future applications of the model to various spatially inhomogeneous optoelectronic devices.
NASA Astrophysics Data System (ADS)
Thelen, J.-C.; Havemann, S.; Taylor, J. P.
2012-06-01
Here, we present a new prototype algorithm for the simultaneous retrieval of atmospheric profiles (temperature, humidity, ozone and aerosol) and the surface reflectance from hyperspectral radiance measurements obtained from air- or space-borne hyperspectral imagers such as the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) or Hyperion on board Earth Observing-1. The new scheme proposed here consists of a fast radiative transfer code, based on empirical orthogonal functions (EOFs), in conjunction with a 1D-Var retrieval scheme. The inclusion of an 'exact' scattering code based on spherical harmonics allows for an accurate treatment of Rayleigh scattering and scattering by aerosols, water droplets and ice crystals, thus making it possible to also retrieve cloud and aerosol optical properties, although here we concentrate on non-cloudy scenes. We successfully tested this new approach using two hyperspectral images taken by AVIRIS, a whiskbroom imaging spectrometer operated by the NASA Jet Propulsion Laboratory.
NASA Astrophysics Data System (ADS)
Devito, R. P.; Khoa, Dao T.; Austin, Sam M.; Berg, U. E. P.; Loc, Bui Minh
2012-02-01
Background: Analysis of data involving nuclei far from stability often requires the optical potential (OP) for neutron scattering. Because neutron data are seldom available, whereas proton scattering data are more abundant, it is useful to have estimates of the difference of the neutron and proton optical potentials. This information is contained in the isospin dependence of the nucleon OP. Here we attempt to provide it for the nucleon-208Pb system. Purpose: The goal of this paper is to obtain accurate n+208Pb scattering data and use it, together with existing p+208Pb and 208Pb(p,n)208Bi(IAS) data, to obtain an accurate estimate of the isospin dependence of the nucleon OP at energies in the 30-60-MeV range. Method: Cross sections for n+208Pb scattering were measured at 30.4 and 40.0 MeV, with a typical relative (normalization) accuracy of 2-4% (3%). An angular range of 15° to 130° was covered using the beam-swinger time-of-flight system at Michigan State University. These data were analyzed by a consistent optical-model study of the neutron data and of elastic p+208Pb scattering at 45 and 54 MeV. These results were combined with a coupled-channel analysis of the 208Pb(p,n) reaction at 45 MeV, exciting the 0+ isobaric analog state (IAS) in 208Bi. Results: The new data and analysis give an accurate estimate of the isospin impurity of the nucleon-208Pb OP at 30.4 MeV caused by the Coulomb correction to the proton OP. The corrections to the real proton OP given by the CH89 global systematics were found to be only a few percent, whereas for the imaginary potential it was greater than 20% at the nuclear surface. On the basis of the analysis of the measured elastic n+208Pb data at 40 MeV, a Coulomb correction of similar strength and shape was also predicted for the p+208Pb OP at energies around 54 MeV. Conclusions: Accurate neutron scattering data can be used in combination with proton scattering data and (p,n) charge exchange data leading to the IAS to obtain reliable estimates of the isospin impurity of the nucleon OP.
Nonlinear scattering of ultrashort laser pulses on two-level system
NASA Astrophysics Data System (ADS)
Astapenko, Valery A.; Sakhno, Sergey V.
2015-05-01
This presentation is devoted to the theoretical investigation of nonlinear scattering of ultrashort electromagnetic pulses (USP) by a two-level quantum system. We consider the scattering of several types of USP, namely the so-called corrected Gaussian pulse (CGP) and the cosine wavelet pulse. Such pulses have no constant component in their spectrum, in contrast with the traditional Gaussian pulse. It should be noted that the presence of a constant component, in the limit of ultrashort pulse durations, leads to unphysical results. The main purpose of the present work is to investigate the change of the pulse temporal shape after scattering as a function of the initial phase at different distances from the target. Numerical calculations are based on the solution of the Bloch equations and on an expression for the scattered field strength via the dipole moment of the two-level system driven by the incident USP. In our calculations we also account for the influence of the refractive index of air on the electric field strength in the pulse after scattering.
The Weak Charge of the Proton. A Search For Physics Beyond the Standard Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacEwan, Scott J.
2015-05-01
The Qweak experiment, which completed running in May of 2012 at Jefferson Laboratory, has measured the parity-violating asymmetry in elastic electron-proton scattering at four-momentum transfer Q^2 = 0.025 (GeV/c)^2 in order to provide the first direct measurement of the proton's weak charge, Q_W^p. The Standard Model makes firm predictions for the weak charge; deviations from the predicted value would provide strong evidence of new physics beyond the Standard Model. Using an 89% polarized electron beam at 145 μA scattering from a 34.4 cm long liquid hydrogen target, scattered electrons were detected using an array of eight fused-silica detectors placed symmetric about the beam axis. The parity-violating asymmetry was then measured by reversing the helicity of the incoming electrons and measuring the normalized difference in rate seen in the detectors. The low Q^2 enables a theoretically clean measurement; the higher-order hadronic corrections are constrained using previous parity-violating electron scattering world data. The experimental method will be discussed, with recent results constituting 4% of our total data and projections of our proposed uncertainties on the full data set.
Inverse Compton Scattering in Mildly Relativistic Plasma
NASA Technical Reports Server (NTRS)
Molnar, S. M.; Birkinshaw, M.
1998-01-01
We investigated the effect of inverse Compton scattering in mildly relativistic static and moving plasmas with low optical depth using Monte Carlo simulations, and calculated the Sunyaev-Zel'dovich effect in the cosmic background radiation. Our semi-analytic method is based on a separation of photon diffusion in frequency and real space. We use Monte Carlo simulation to derive the intensity and frequency of the scattered photons for a monochromatic incoming radiation. The outgoing spectrum is determined by integrating over the spectrum of the incoming radiation using the intensity to determine the correct weight. This method makes it possible to study the emerging radiation as a function of frequency and direction. As a first application we have studied the effects of finite optical depth and gas infall on the Sunyaev-Zel'dovich effect (not possible with the extended Kompaneets equation) and discuss the parameter range in which the Boltzmann equation and its expansions can be used. For high temperature clusters (k_B T_e ≳ 15 keV) relativistic corrections based on a fifth order expansion of the extended Kompaneets equation seriously underestimate the Sunyaev-Zel'dovich effect at high frequencies. The contribution from plasma infall is less important for reasonable velocities. We give a convenient analytical expression for the dependence of the cross-over frequency on temperature, optical depth, and gas infall speed. Optical depth effects are often more important than relativistic corrections, and should be taken into account for high-precision work, but are smaller than the typical kinematic effect from cluster radial velocities.
Analysis of Self-Associating Proteins by Singular Value Decomposition of Solution Scattering Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williamson, Tim E.; Craig, Bruce A.; Kondrashkina, Elena
2008-07-08
We describe a method by which a single experiment can reveal both association model (pathway and constants) and low-resolution structures of a self-associating system. Small-angle scattering data are collected from solutions at a range of concentrations. These scattering data curves are mass-weighted linear combinations of the scattering from each oligomer. Singular value decomposition of the data yields a set of basis vectors from which the scattering curve for each oligomer is reconstructed using coefficients that depend on the association model. A search identifies the association pathway and constants that provide the best agreement between reconstructed and observed data. Using simulated data with realistic noise, our method finds the correct pathway and association constants. Depending on the simulation parameters, reconstructed curves for each oligomer differ from the ideal by 0.05-0.99% in median absolute relative deviation. The reconstructed scattering curves are fundamental to further analysis, including interatomic distance distribution calculation and low-resolution ab initio shape reconstruction of each oligomer in solution. This method can be applied to x-ray or neutron scattering data from small angles to moderate (or higher) resolution. Data can be taken under physiological conditions, or particular conditions (e.g., temperature) can be varied to extract fundamental association parameters (ΔH_ass, S_ass).
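The SVD step can be sketched as follows under stated assumptions: data is an (n_q, n_concentrations) matrix of background-subtracted scattering curves, and the association-model search that turns singular vectors into per-oligomer curves is not reproduced.

    import numpy as np

    def scattering_basis(data, n_components=2):
        # Singular value decomposition of the concentration series.
        U, s, Vt = np.linalg.svd(data, full_matrices=False)
        basis = U[:, :n_components]                              # basis curves in q
        weights = s[:n_components, None] * Vt[:n_components, :]  # per-concentration coefficients
        reconstructed = basis @ weights
        return basis, weights, reconstructed

    # basis, weights, recon = scattering_basis(data, n_components=2)
    # relative_misfit = np.linalg.norm(data - recon) / np.linalg.norm(data)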
Non-cancellation of electroweak logarithms in high-energy scattering
Manohar, Aneesh V.; Shotwell, Brian; Bauer, Christian W.; ...
2015-01-01
We study electroweak Sudakov corrections in high energy scattering, and the cancellation between real and virtual Sudakov corrections. Numerical results are given for the case of heavy quark production by gluon collisions involving the rates gg → tt̄, bb̄, tb̄W, tt̄Z, bb̄Z, tt̄H, bb̄H. Gauge boson virtual corrections are related to real transverse gauge boson emission, and Higgs virtual corrections to Higgs and longitudinal gauge boson emission. At the LHC, electroweak corrections become important in the TeV regime. At the proposed 100 TeV collider, electroweak interactions enter a new regime, where the corrections are very large and need to be resummed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huesemann, Michael H.; Crowe, Braden J.; Waller, Peter
2015-12-11
Here, a microalgae biomass growth model was developed for screening novel strains for their potential to exhibit high biomass productivities under nutrient-replete conditions in outdoor ponds subjected to fluctuating light intensities and water temperatures. Growth is modeled by first estimating the light attenuation by biomass according to a scatter-corrected Beer-Lambert Law, and then calculating the specific growth rate in discretized culture volume slices that receive declining light intensities due to attenuation. The model requires the following experimentally determined strain-specific input parameters: specific growth rate as a function of light intensity and temperature, biomass loss rate in the dark as a function of temperature and average light intensity during the preceding light period, and the scatter-corrected biomass light absorption coefficient. The model was successful in predicting the growth performance and biomass productivity of three different microalgae species (Chlorella sorokiniana, Nannochloropsis salina, and Picochlorum sp.) in raceway pond cultures (batch and semi-continuous) subjected to diurnal sunlight intensity and water temperature variations. Model predictions were moderately sensitive to minor deviations in input parameters. To increase the predictive power of this and other microalgae biomass growth models, a better understanding of the effects of mixing-induced rapid light dark cycles on photo-inhibition and short-term biomass losses due to dark respiration in the aphotic zone of the pond is needed.
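A minimal sketch of the core light-attenuation and growth step is given below; it is not the authors' code, the function and variable names are illustrative, and the dark biomass-loss term is omitted. The culture is split into depth slices, light is attenuated with a scatter-corrected Beer-Lambert law, and a lab-derived function mu_of_I_T(I, T) supplies the specific growth rate in each slice.

    import numpy as np

    def pond_growth_rate(I0, biomass, depth, ka, mu_of_I_T, temperature, n_slices=50):
        # Biomass-averaged specific growth rate (1/h) for one time step.
        # I0: surface irradiance; biomass: g/L; depth: m;
        # ka: scatter-corrected biomass light absorption coefficient (m2/g).
        z = (np.arange(n_slices) + 0.5) * depth / n_slices   # slice mid-depths (m)
        I_z = I0 * np.exp(-ka * biomass * 1000.0 * z)        # g/L converted to g/m3
        mu_z = np.array([mu_of_I_T(I, temperature) for I in I_z])
        return float(mu_z.mean())

    # new_biomass = biomass * np.exp(pond_growth_rate(...) * dt_hours)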
Scattering property based contextual PolSAR speckle filter
NASA Astrophysics Data System (ADS)
Mullissa, Adugna G.; Tolpekin, Valentyn; Stein, Alfred
2017-12-01
Reliability of the scattering model based polarimetric SAR (PolSAR) speckle filter depends upon the accurate decomposition and classification of the scattering mechanisms. This paper presents an improved scattering property based contextual speckle filter based upon an iterative classification of the scattering mechanisms. It applies a Cloude-Pottier eigenvalue-eigenvector decomposition and a fuzzy H/α classification to determine the scattering mechanisms on a pre-estimate of the coherency matrix. The H/α classification identifies pixels with homogeneous scattering properties. A coarse pixel selection rule groups pixels that are either single bounce, double bounce or volume scatterers. A fine pixel selection rule is applied to pixels within each canonical scattering mechanism. We filter the PolSAR data and depending on the type of image scene (urban or rural) use either the coarse or fine pixel selection rule. Iterative refinement of the Wishart H/α classification reduces the speckle in the PolSAR data. Effectiveness of this new filter is demonstrated by using both simulated and real PolSAR data. It is compared with the refined Lee filter, the scattering model based filter and the non-local means filter. The study concludes that the proposed filter compares favorably with other polarimetric speckle filters in preserving polarimetric information, point scatterers and subtle features in PolSAR data.
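For readers unfamiliar with the Cloude-Pottier quantities used above, the sketch below computes the entropy H and mean alpha angle from one pixel's 3x3 Hermitian coherency matrix T (pre-estimated, e.g., by local averaging); the coarse/fine pixel-grouping thresholds of the proposed filter are not shown and the implementation is illustrative only.

    import numpy as np

    def h_alpha(T):
        # Eigen-decomposition of the coherency matrix (ascending eigenvalues).
        vals, vecs = np.linalg.eigh(T)
        vals = np.clip(vals.real, 0.0, None)
        p = vals / vals.sum()                         # pseudo-probabilities
        with np.errstate(divide="ignore", invalid="ignore"):
            terms = np.where(p > 0, -p * np.log(p) / np.log(3.0), 0.0)
        H = float(terms.sum())                        # entropy in [0, 1]
        alpha_i = np.degrees(np.arccos(np.abs(vecs[0, :])))
        alpha = float((p * alpha_i).sum())            # mean alpha angle (degrees)
        return H, alpha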
NASA Technical Reports Server (NTRS)
Miller, Mark A.; Reynolds, R. M.; Bartholomew, Mary Jane
2001-01-01
The aerosol scattering component of the total radiance measured at the detectors of ocean color satellites is determined with atmospheric correction algorithms. These algorithms are based on aerosol optical thickness measurements made in two channels that lie in the near-infrared portion of the electromagnetic spectrum. The aerosol properties in the near-infrared region are used because there is no significant contribution to the satellite-measured radiance from the underlying ocean surface in that spectral region. In the visible wavelength bands, the spectrum of radiation scattered from the turbid atmosphere is convolved with the spectrum of radiation scattered from the surface layers of the ocean. The radiance contribution made by aerosols in the visible bands is determined from the near-infrared measurements through the use of aerosol models and radiation transfer codes. Selection of appropriate aerosol models from the near-infrared measurements is a fundamental challenge. There are several challenges with respect to the development, improvement, and evaluation of satellite ocean-color atmospheric correction algorithms. A common thread among these challenges is the lack of over-ocean aerosol data. Until recently, one of the most important limitations has been the lack of techniques and instruments to make aerosol measurements at sea. There has been steady progress in this area over the past five years, and there are several new and promising devices and techniques for data collection. The development of new instruments and the collection of more aerosol data from over the world's oceans have brought the realization that aerosol measurements that can be directly compared with aerosol measurements from ocean color satellite measurements are difficult to obtain. There are two problems that limit these types of comparisons: the cloudiness of the atmosphere over the world's oceans and the limitations of the techniques and instruments used to collect aerosol data from ships. To address the latter, we have developed a new type of shipboard sun photometer.
Towards Improved Radiative Transfer Simulations of Hyperspectral Measurements for Cloudy Atmospheres
NASA Astrophysics Data System (ADS)
Natraj, V.; Li, C.; Aumann, H. H.; Yung, Y. L.
2016-12-01
Usage of hyperspectral measurements in the infrared for weather forecasting requires radiative transfer (RT) models that can accurately compute radiances given the atmospheric state. On the other hand, it is necessary for the RT models to be fast enough to meet operational processing requirements. Until recently, this has proven to be a very hard challenge. In the last decade, however, significant progress has been made in this regard, due to computer speed increases and improved and optimized RT models. This presentation will introduce a new technique, based on principal component analysis (PCA) of the inherent optical properties (such as profiles of trace gas absorption and single scattering albedo), to perform fast and accurate hyperspectral RT calculations in clear or cloudy atmospheres. PCA is a technique to compress data while capturing most of the variability in the data. By performing PCA on the optical properties, we limit the number of computationally expensive multiple scattering RT calculations to the PCA-reduced data set, and develop a series of PC-based correction factors to obtain the hyperspectral radiances. This technique has been shown to deliver accuracies of 0.1% or better with respect to brute-force, line-by-line (LBL) models such as LBLRTM and DISORT, but is orders of magnitude faster than the LBL models. We will compare the performance of this method against other models on a large atmospheric state data set (7377 profiles) that includes a wide range of thermodynamic and cloud profiles, along with viewing geometry and surface emissivity information. © 2016. All rights reserved.
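A hedged sketch of the PCA compression step is shown below; optical_props is assumed to be an (n_wavelengths, n_features) matrix of inherent optical properties (e.g., layer optical depths and single scattering albedos per spectral point), and the multiple-scattering calculations run on the reduced set, as well as the PC-based correction factors, are not reproduced.

    import numpy as np

    def pca_compress(optical_props, n_eofs=4):
        # Empirical orthogonal functions of the optical-property anomalies.
        mean = optical_props.mean(axis=0)
        anomalies = optical_props - mean
        U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
        eofs = Vt[:n_eofs]                 # leading principal components
        scores = anomalies @ eofs.T        # projection of each spectral point
        return mean, eofs, scores

    # mean, eofs, scores = pca_compress(optical_props)
    # approx = mean + scores @ eofs        # PCA-reconstructed optical properties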
Quadratic electroweak corrections for polarized Moller scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
A. Aleksejevs, S. Barkanova, Y. Kolomensky, E. Kuraev, V. Zykunov
2012-01-01
The paper discusses the two-loop (NNLO) electroweak radiative corrections to the parity violating electron-electron scattering asymmetry induced by squaring one-loop diagrams. The calculations are relevant for the ultra-precise 11 GeV MOLLER experiment planned at Jefferson Laboratory and experiments at high-energy future electron colliders. The imaginary parts of the amplitudes are taken into consideration consistently in both the infrared-finite and divergent terms. The size of the obtained partial correction is significant, which indicates a need for a complete study of the two-loop electroweak radiative corrections in order to meet the precision goals of future experiments.
Global optical model potential for A=3 projectiles
NASA Astrophysics Data System (ADS)
Pang, D. Y.; Roussel-Chomaz, P.; Savajols, H.; Varner, R. L.; Wolski, R.
2009-02-01
A global optical model potential (GDP08) for 3He projectiles has been obtained by simultaneously fitting the elastic scattering data of 3He from targets of 40 ≤ A_T ≤ 209 at incident energies of 30 ≤ E_inc ≤ 217 MeV. Uncertainties and correlation coefficients between the global potential parameters were obtained by using the bootstrap statistical method. GDP08 was found to satisfactorily account for the elastic scattering of 3H as well, which makes it a global optical potential for the A=3 nuclei. Optical model calculations using the GDP08 global potential are compared with the experimental angular distributions of differential cross sections for 3He-nucleus and 3H-nucleus scattering from different targets of 6 ≤ A_T ≤ 232 at incident energies of 4 ≤ E_inc ≤ 450 MeV. The optical potential for the doubly-magic nucleus 40Ca, the low-energy correction to the real potential for nuclei with 58 ≲ A_T ≲ 120 at E_inc < 30 MeV, the comparison with double-folding model calculations and the CH89 potential, and the spin-orbit potential parameters are discussed.
Nguyen, Hieu Cong; Jung, Jaehoon; Lee, Jungbin; Choi, Sung-Uk; Hong, Suk-Young; Heo, Joon
2015-07-31
The reflectance of the Earth's surface is significantly influenced by atmospheric conditions such as water vapor content and aerosols. In particular, the absorption and scattering effects become stronger when the target features are non-bright objects, such as aqueous or vegetated areas. For any remote-sensing approach, atmospheric correction is thus required to minimize those effects and to convert digital number (DN) values to surface reflectance. The main aim of this study was to test the three most popular atmospheric correction models, namely (1) Dark Object Subtraction (DOS); (2) Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) and (3) the Second Simulation of Satellite Signal in the Solar Spectrum (6S), and compare them with Top-of-Atmosphere (TOA) reflectance. Using the k-Nearest Neighbor (kNN) algorithm, a series of experiments was conducted for above-ground forest biomass (AGB) estimation in the Gongju and Sejong region of South Korea, in order to check the effectiveness of atmospheric correction methods for Landsat ETM+. Overall, in the forest biomass estimation, the 6S model showed the best RMSEs, followed by FLAASH, DOS and TOA. In addition, a significant improvement in RMSE with 6S was found for images acquired when the study site had higher total water vapor and temperature levels. Moreover, we also tested the sensitivity of the atmospheric correction methods to each of the Landsat ETM+ bands. The results confirmed that 6S dominates the other methods, especially in the infrared wavelengths covering the pivotal bands for forest applications. Finally, we suggest that the 6S model, integrating water vapor and aerosol optical depth derived from MODIS products, is better suited for AGB estimation based on optical remote-sensing data, especially when using satellite images acquired in the summer during full canopy development.
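A minimal sketch of the kNN biomass-estimation step is shown below, assuming scikit-learn is available; reflectance (an (n_plots, n_bands) matrix of corrected band values at field plots) and agb (measured above-ground biomass) are hypothetical names, and the study's actual sampling design is not reproduced.

    import numpy as np
    from sklearn.model_selection import cross_val_predict
    from sklearn.neighbors import KNeighborsRegressor

    def knn_agb_rmse(reflectance, agb, k=5):
        # Cross-validated kNN prediction of biomass from band reflectances.
        model = KNeighborsRegressor(n_neighbors=k)
        predicted = cross_val_predict(model, reflectance, agb, cv=10)
        return float(np.sqrt(np.mean((predicted - agb) ** 2)))

    # Comparing correction methods amounts to repeating this with the
    # reflectance matrices produced by TOA, DOS, FLAASH and 6S.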
NASA Astrophysics Data System (ADS)
Oelze, Michael L.; O'Brien, William D.
2004-11-01
Backscattered rf signals used to construct conventional ultrasound B-mode images contain frequency-dependent information that can be examined through the backscattered power spectrum. The backscattered power spectrum is found by taking the magnitude squared of the Fourier transform of a gated time segment corresponding to a region in the scattering volume. When a time segment is gated, the edges of the gated regions change the frequency content of the backscattered power spectrum due to truncation of the waveform. Tapered windows, like the Hanning window, and longer gate lengths reduce the relative contribution of the gate-edge effects. A new gate-edge correction factor was developed that partially accounted for the edge effects. The gate-edge correction factor gave more accurate estimates of scatterer properties at small gate lengths compared to conventional windowing functions. The gate-edge correction factor gave estimates of scatterer properties within 5% of actual values at very small gate lengths (less than 5 spatial pulse lengths) in both simulations and from measurements on glass-bead phantoms. While the gate-edge correction factor gave higher accuracy of estimates at smaller gate lengths, the precision of estimates was not improved at small gate lengths over conventional windowing functions.
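The gated power spectrum that the correction factor operates on can be sketched as follows; the paper's gate-edge correction factor itself is not reproduced, and rf, fs, start and length are hypothetical names for the RF trace, sampling rate and gate position.

    import numpy as np

    def gated_power_spectrum(rf, fs, start, length, window="hann"):
        # Magnitude-squared FFT of a gated RF segment, optionally tapered
        # with a Hanning window to reduce gate-edge effects.
        segment = np.asarray(rf[start:start + length], dtype=float)
        if window == "hann":
            segment = segment * np.hanning(len(segment))
        spectrum = np.fft.rfft(segment)
        freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
        return freqs, np.abs(spectrum) ** 2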
NASA Technical Reports Server (NTRS)
Karam, Mostafa A.; Amar, Faouzi; Fung, Adrian K.
1993-01-01
The Wave Scattering Research Center at the University of Texas at Arlington has developed a scattering model for forest or vegetation, based on the theory of electromagnetic-wave scattering in random media. The model generalizes the assumptions imposed by earlier models, and compares well with measurements from several forest canopies. This paper gives a description of the model. It also indicates how the model elements are integrated to obtain the scattering characteristics of different forest canopies. The scattering characteristics may be displayed in the form of polarimetric signatures, represented by like- and cross-polarized scattering coefficients, for an elliptically-polarized wave, or in the form of signal-distribution curves. Results illustrating both types of scattering characteristics are given.
Quantitation of tumor uptake with molecular breast imaging.
Bache, Steven T; Kappadath, S Cheenu
2017-09-01
We developed scatter and attenuation-correction techniques for quantifying images obtained with Molecular Breast Imaging (MBI) systems. To investigate scatter correction, energy spectra of a 99mTc point source were acquired with 0-7-cm-thick acrylic to simulate scatter between the detector heads. The system-specific scatter correction factor, k, was calculated as a function of thickness using a dual energy window technique. To investigate attenuation correction, a 7-cm-thick rectangular phantom containing 99mTc-water simulating breast tissue and fillable spheres simulating tumors was imaged. Six spheres 10-27 mm in diameter were imaged with sphere-to-background ratios (SBRs) of 3.5, 2.6, and 1.7 and located at depths of 0.5, 1.5, and 2.5 cm from the center of the water bath for 54 unique tumor scenarios (3 SBRs × 6 sphere sizes × 3 depths). Phantom images were also acquired in-air under scatter- and attenuation-free conditions, which provided ground truth counts. To estimate true counts, T, from each tumor, the geometric mean (GM) of the counts within a prescribed region of interest (ROI) from the two projection images was calculated as T = √(C1·C2·e^(μt))·F, where C1 and C2 are the counts within the square ROI circumscribing each sphere on detectors 1 and 2, μ is the linear attenuation coefficient of water, t is the detector separation, and the factor F accounts for background activity. Four unique definitions of F (standard GM, background-subtraction GM, MIRD Primer 16 GM, and a novel "volumetric GM") were investigated. Error in T was calculated as the percentage difference with respect to in-air. Quantitative accuracy using the different GM definitions was calculated as a function of SBR, depth, and sphere size. Sensitivity of quantitative accuracy to ROI size was investigated. We developed an MBI simulation to investigate the robustness of our corrections for various ellipsoidal tumor shapes and detector separations. Scatter correction factor k varied slightly (0.80-0.95) over a compressed breast thickness range of 6-9 cm. Corrected energy spectra recovered the general characteristics of scatter-free spectra. Quantitatively, photopeak counts were recovered to <10% compared to in-air conditions after scatter correction. After GM attenuation correction, mean errors (95% confidence interval, CI) for all 54 imaging scenarios were 149% (-154% to +455%), -14.0% (-38.4% to +10.4%), 16.8% (-14.7% to +48.2%), and 2.0% (-14.3% to +18.3%) for the standard GM, background-subtraction GM, MIRD 16 GM, and volumetric GM, respectively. Volumetric GM was less sensitive to SBR and sphere size, while all GM methods were insensitive to sphere depth. Simulation results showed that the volumetric GM method produced a mean error within 5% over all compressed breast thicknesses (3-14 cm), and that the use of an estimated radius for nonspherical tumors increases the 95% CI to at most ±23%, compared with ±16% for spherical tumors. Using the DEW scatter- and our volumetric GM attenuation-correction methodology yielded accurate estimates of tumor counts in MBI over various tumor sizes, shapes, depths, background uptakes, and compressed breast thicknesses. Accurate tumor uptake can be converted to radiotracer uptake concentration, allowing three patient-specific metrics to be calculated for quantifying absolute uptake and relative uptake change for assessment of treatment response. © 2017 American Association of Physicists in Medicine.
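The two corrections can be sketched as follows under stated assumptions: the dual-energy-window (DEW) scatter estimate is written with a window-width scaling that may differ in detail from the paper's k, the background factor F defaults to 1, and the volumetric GM definition is not reproduced; all names are illustrative.

    import numpy as np

    def dew_scatter_correct(c_photopeak, c_scatter_window, k, w_photo, w_scatter):
        # Subtract the scatter estimated from a secondary energy window.
        return c_photopeak - k * c_scatter_window * (w_photo / w_scatter)

    def geometric_mean_counts(c1, c2, mu_water, t, F=1.0):
        # Conjugate-view geometric mean: T = sqrt(C1 * C2 * exp(mu * t)) * F,
        # with mu the linear attenuation coefficient of water and t the
        # detector separation.
        return np.sqrt(c1 * c2 * np.exp(mu_water * t)) * F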
NASA Astrophysics Data System (ADS)
Bonifacio, P.; Caffau, E.; Ludwig, H.-G.; Steffen, M.; Castelli, F.; Gallagher, A. J.; Kučinskas, A.; Prakapavičius, D.; Cayrel, R.; Freytag, B.; Plez, B.; Homeier, D.
2018-03-01
Context. The atmospheres of cool stars are temporally and spatially inhomogeneous due to the effects of convection. The influence of this inhomogeneity, referred to as granulation, on colours has never been investigated over a large range of effective temperatures and gravities. Aim. We aim to study, in a quantitative way, the impact of granulation on colours. Methods: We use the CIFIST (Cosmological Impact of the FIrst Stars) grid of CO5BOLD (COnservative COde for the COmputation of COmpressible COnvection in a BOx of L Dimensions, L = 2, 3) hydrodynamical models to compute emerging fluxes. These in turn are used to compute theoretical colours in the UBV RI, 2MASS, HIPPARCOS, Gaia and SDSS systems. Every CO5BOLD model has a corresponding one dimensional (1D) plane-parallel LHD (Lagrangian HydroDynamics) model computed for the same atmospheric parameters, which we used to define a "3D correction" that can be applied to colours computed from fluxes computed from any 1D model atmosphere code. As an example, we illustrate these corrections applied to colours computed from ATLAS models. Results: The 3D corrections on colours are generally small, of the order of a few hundredths of a magnitude, yet they are far from negligible. We find that ignoring granulation effects can lead to underestimation of Teff by up to 200 K and overestimation of gravity by up to 0.5 dex, when using colours as diagnostics. We have identified a major shortcoming in how scattering is treated in the current version of the CIFIST grid, which could lead to offsets of the order 0.01 mag, especially for colours involving blue and UV bands. We have investigated the Gaia and HIPPARCOS photometric systems and found that the (G - Hp), (BP - RP) diagram is immune to the effects of granulation. In addition, we point to the potential of the RVS photometry as a metallicity diagnostic. Conclusions: Our investigation shows that the effects of granulation should not be neglected if one wants to use colours as diagnostics of the stellar parameters of F, G, K stars. A limitation is that scattering is treated as true absorption in our current computations, thus our 3D corrections are likely an upper limit to the true effect. We are already computing the next generation of the CIFIST grid, using an approximate treatment of scattering. The appendix tables are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/611/A68
Re-evaluation of model-based light-scattering spectroscopy for tissue spectroscopy
Lau, Condon; Šćepanović, Obrad; Mirkovic, Jelena; McGee, Sasha; Yu, Chung-Chieh; Fulghum, Stephen; Wallace, Michael; Tunnell, James; Bechtel, Kate; Feld, Michael
2009-01-01
Model-based light scattering spectroscopy (LSS) seemed a promising technique for in-vivo diagnosis of dysplasia in multiple organs. In the studies, the residual spectrum, the difference between the observed and modeled diffuse reflectance spectra, was attributed to single elastic light scattering from epithelial nuclei, and diagnostic information due to nuclear changes was extracted from it. We show that this picture is incorrect. The actual single scattering signal arising from epithelial nuclei is much smaller than the previously computed residual spectrum, and does not have the wavelength dependence characteristic of Mie scattering. Rather, the residual spectrum largely arises from assuming a uniform hemoglobin distribution. In fact, hemoglobin is packaged in blood vessels, which alters the reflectance. When we include vessel packaging, which accounts for an inhomogeneous hemoglobin distribution, in the diffuse reflectance model, the reflectance is modeled more accurately, greatly reducing the amplitude of the residual spectrum. These findings are verified via numerical estimates based on light propagation and Mie theory, tissue phantom experiments, and analysis of published data measured from Barrett’s esophagus. In future studies, vessel packaging should be included in the model of diffuse reflectance and use of model-based LSS should be discontinued. PMID:19405760
NASA Astrophysics Data System (ADS)
Hashimoto, G. L.; Roos-Serote, M.; Sugita, S.
2004-11-01
We evaluate the spatial variation of venusian surface emissivity at a near-infrared wavelength using multispectral images obtained by the Near-Infrared Mapping Spectrometer (NIMS) on board the Galileo spacecraft. Galileo made a close flyby of Venus in February 1990. During this flyby, NIMS observed the nightside of Venus in 17 spectral channels, which include the well-known spectral windows at 1.18, 1.74, and 2.3 μm. The surface emissivity is evaluated at 1.18 μm, at which thermal radiation emitted from the planetary surface can be detected. To analyze the NIMS observations, synthetic spectra have been generated by means of a line-by-line radiative transfer program which includes both scattering and absorption. We used the discrete ordinate method to calculate the spectra of a vertically inhomogeneous plane-parallel atmosphere. Gas opacity is calculated based on the method of Pollack et al. (1993), though binary absorption coefficients for continuum opacity are adjusted to achieve an acceptable fit to the NIMS data. We used Mie scattering theory and a cloud model developed by Pollack et al. (1993) to determine the single scattering albedo and scattering phase function of the cloud particles. The vertical temperature profile of the Venus International Reference Atmosphere (VIRA) is used in all our calculations. The analysis proceeds as follows. We first make a correction for emission angle. Then, the modulation of the emission by cloud opacities is removed using simultaneously measured 1.74 and 2.3 μm radiances. The resulting images are correlated with the topographic map of Magellan. To search for variations in surface emissivity, these cloud-corrected images are divided by synthetic radiance maps created from the Magellan data. This work has been supported by The 21st Century COE Program of Origin and Evolution of Planetary Systems of the Ministry of Education, Culture, Sports, Science and Technology (MEXT).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitaker, Katherine E.; Van Dokkum, Pieter G.; Brammer, Gabriel
2010-08-20
With a complete, mass-selected sample of quiescent galaxies from the NEWFIRM Medium-Band Survey, we study the stellar populations of the oldest and most massive galaxies (>10^11 M_sun) to high redshift. The sample includes 570 quiescent galaxies selected based on their extinction-corrected U - V colors out to z = 2.2, with accurate photometric redshifts, σ_z/(1 + z) ≈ 2%, and rest-frame colors, σ_(U-V) ≈ 0.06 mag. We measure an increase in the intrinsic scatter of the rest-frame U - V colors of quiescent galaxies with redshift. This scatter in color arises from the spread in ages of the quiescent galaxies, where we see both relatively quiescent red, old galaxies and quiescent blue, younger galaxies toward higher redshift. The trends between color and age are consistent with the observed composite rest-frame spectral energy distributions (SEDs) of these galaxies. The composite SEDs of the reddest and bluest quiescent galaxies are fundamentally different, with remarkably well-defined 4000 Å and Balmer breaks, respectively. Some of the quiescent galaxies may be up to four times older than the average age and up to the age of the universe, if the assumption of solar metallicity is correct. By matching the scatter predicted by models that include growth of the red sequence by the transformation of blue galaxies to the observed intrinsic scatter, the data indicate that most early-type galaxies formed their stars at high redshift with a burst of star formation prior to migrating to the red sequence. The observed U - V color evolution with redshift is weaker than passive evolution predicts; possible mechanisms to slow the color evolution include increasing amounts of dust in quiescent galaxies toward higher redshift, red mergers at z ≲ 1, and a frosting of relatively young stars from star formation at later times.
Ainslie, Michael A; Leighton, Timothy G
2009-11-01
The scattering cross-section σ_s of a gas bubble of equilibrium radius R_0 in liquid can be written in the form σ_s = 4πR_0² / [(ω_1²/ω² - 1)² + δ²], where ω is the excitation frequency, ω_1 is the resonance frequency, and δ is a frequency-dependent dimensionless damping coefficient. A persistent discrepancy in the frequency dependence of the contribution to δ from radiation damping, denoted δ_rad, is identified and resolved, as follows. Wildt's [Physics of Sound in the Sea (Washington, DC, 1946), Chap. 28] pioneering derivation predicts a linear dependence of δ_rad on frequency, a result which Medwin [Ultrasonics 15, 7-13 (1977)] reproduces using a different method. Weston [Underwater Acoustics, NATO Advanced Study Institute Series Vol. II, 55-88 (1967)], using ostensibly the same method as Wildt, predicts the opposite relationship, i.e., that δ_rad is inversely proportional to frequency. Weston's version of the derivation of the scattering cross-section is shown here to be the correct one, thus resolving the discrepancy. Further, a correction to Weston's model is derived that amounts to a shift in the resonance frequency. A new, corrected, expression for the extinction cross-section is also derived. The magnitudes of the corrections are illustrated using examples from oceanography, volcanology, planetary acoustics, neutron spallation, and biomedical ultrasound. The corrections become significant when the bulk modulus of the gas is not negligible relative to that of the surrounding liquid.
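Evaluating the quoted cross-section expression is straightforward once the resonance frequency and damping coefficient are supplied by a bubble model (they are frequency dependent and not computed here); the sketch below uses illustrative names only.

    import numpy as np

    def scattering_cross_section(R0, omega, omega1, delta):
        # sigma_s = 4*pi*R0^2 / [(omega1^2/omega^2 - 1)^2 + delta^2]
        return 4.0 * np.pi * R0**2 / ((omega1**2 / omega**2 - 1.0)**2 + delta**2)

    # Well above resonance the bracket tends to 1 + delta^2, so a bubble of
    # radius R0 scatters with a cross-section of order 4*pi*R0^2.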
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao Hewei; Fahrig, Rebecca; Bennett, N. Robert
Purpose: Scatter correction is a major challenge in x-ray imaging using large area detectors. Recently, the authors proposed a promising scatter correction method for x-ray computed tomography (CT) using primary modulation. Proof of concept was previously illustrated by Monte Carlo simulations and physical experiments on a small phantom with a simple geometry. In this work, the authors provide a quantitative evaluation of the primary modulation technique and demonstrate its performance in applications where scatter correction is more challenging. Methods: The authors first analyze the potential errors of the estimated scatter in the primary modulation method. On two tabletop CT systems, the method is investigated using three phantoms: a Catphan©600 phantom, an anthropomorphic chest phantom, and the Catphan©600 phantom with two annuli. Two different primary modulators are also designed to show the impact of the modulator parameters on the scatter correction efficiency. The first is an aluminum modulator with a weak modulation and a low modulation frequency, and the second is a copper modulator with a strong modulation and a high modulation frequency. Results: On the Catphan©600 phantom in the first study, the method reduces the error of the CT number in the selected regions of interest (ROIs) from 371.4 to 21.9 Hounsfield units (HU); the contrast-to-noise ratio also increases from 10.9 to 19.2. On the anthropomorphic chest phantom in the second study, which represents a more difficult case due to the high scatter signals and object heterogeneity, the method reduces the error of the CT number from 327 to 19 HU in the selected ROIs and from 31.4% to 5.7% on the overall average. The third study investigates the impact of object size on the efficiency of the method. The scatter-to-primary ratio estimation error on the Catphan©600 phantom without any annulus (20 cm in diameter) is at the level of 0.04; it rises to 0.07 and 0.1 on the phantom with an elliptical annulus (30 cm in the minor axis and 38 cm in the major axis) and with a circular annulus (38 cm in diameter), respectively. Conclusions: In the three phantom studies, good scatter correction performance of the proposed method has been demonstrated using both image comparisons and quantitative analysis. The theory and experiments demonstrate that a strong primary modulation that possesses a low transmission factor and a high modulation frequency is preferred for high scatter correction accuracy.
Hwang, Dusun; Yoon, Dong-Jin; Kwon, Il-Bum; Seo, Dae-Cheol; Chung, Youngjoo
2010-05-10
A novel method for auto-correction of a fiber optic distributed temperature sensor using anti-Stokes Raman back-scattering and its reflected signal is presented. This method processes two parts of the measured signal: one part is the normally back-scattered anti-Stokes signal, and the other is its reflected signal, which eliminates not only the effect of local losses due to micro-bending or damage on the fiber but also the differential attenuation. Because beams of the same wavelength are used to cancel out local variations in the transmission medium, there is inherently no differential attenuation. The auto-correction concept was verified by bending experiments at different bending points. (c) 2010 Optical Society of America.
Miller, Joseph D; Roy, Sukesh; Slipchenko, Mikhail N; Gord, James R; Meyer, Terrence R
2011-08-01
High-repetition-rate, single-laser-shot measurements are important for the investigation of unsteady flows where temperature and species concentrations can vary significantly. Here, we demonstrate single-shot, pure-rotational, hybrid femtosecond/picosecond coherent anti-Stokes Raman scattering (fs/ps RCARS) thermometry based on a kHz-rate fs laser source. Interferences that can affect nanosecond (ns) and ps CARS, such as nonresonant background and collisional dephasing, are eliminated by selecting an appropriate time delay between the 100-fs pump/Stokes pulses and the pulse-shaped 8.4-ps probe. A time- and frequency-domain theoretical model is introduced to account for rotational-level dependent collisional dephasing and indicates that the optimal probe-pulse time delay is 13.5 ps to 30 ps. This time delay allows for uncorrected best-fit N2-RCARS temperature measurements with ~1% accuracy. Hence, the hybrid fs/ps RCARS approach can be performed with kHz-rate laser sources while avoiding corrections that can be difficult to predict in unsteady flows.
Aerosol scattering and absorption modulation transfer function
NASA Astrophysics Data System (ADS)
Sadot, Dan; Kopeika, Norman S.
1993-08-01
Recent experimental measurements of the overall atmospheric modulation transfer function (MTF) indicate a significant difference between the turbulence and overall atmospheric MTFs, except often at midday when turbulence is strong. We suggest here a physical explanation for those results, which essentially relates to what we call a practical instrumentation-based atmospheric aerosol MTF, a modification of the classical aerosol MTF theory. It is shown that system field-of-view and dynamic range strongly affect aerosol and overall atmospheric MTFs. It is often necessary to choose between MTF and SNR depending upon dynamic range requirements. Also, a new approach regarding aerosol absorption is presented. It is shown that aerosol-absorbed irradiance is spatial-frequency dependent and enhances the degradation in image quality arising from received scattered light. This is most relevant for thermal imaging. An analytically corrected model for the aerosol MTF, relevant for imaging, is presented. An important conclusion is that the aerosol MTF is often the dominant part of the actual overall atmospheric MTF across the entire optical spectral region.
NASA Astrophysics Data System (ADS)
Chang, Qin; Li, Xiao-Nan; Sun, Jun-Feng; Yang, Yue-Ling
2016-10-01
In this paper, the contributions of weak annihilation and hard spectator scattering in B → ρK*, K*K̄*, φK*, ρρ and φφ decays are investigated within the framework of quantum chromodynamics factorization. Using the experimental data available, we perform χ² analyses of end-point parameters in four cases based on the topology-dependent and polarization-dependent parameterization schemes. The fitted results indicate that: (i) in the topology-dependent scheme, the relation (ρ_A^i, φ_A^i) …
NASA Astrophysics Data System (ADS)
Schulz-Hildebrandt, H.; Münter, Michael; Ahrens, M.; Spahr, H.; Hillmann, D.; König, P.; Hüttmann, G.
2018-03-01
Optical coherence tomography (OCT) images scattering tissues with 5 to 15 μm resolution. This is usually not sufficient for a distinction of cellular and subcellular structures. Increasing axial and lateral resolution and compensating artifacts caused by dispersion and aberrations are required to achieve cellular and subcellular resolution. This includes defocus, which limits the usable depth of field at high lateral resolution. OCT gives access to the phase of the scattered light, and hence correction of dispersion and aberrations is possible by numerical algorithms. Here we present a unified dispersion/aberration correction which is based on a polynomial parameterization of the phase error and an optimization of the image quality using Shannon's entropy. For validation, a supercontinuum light source and a custom-made spectrometer with 400 nm bandwidth were combined with a high-NA microscope objective in a setup for tissue and small-animal imaging. Using this setup and computational corrections, volumetric imaging at 1.5 μm resolution is possible. Cellular and near-cellular resolution is demonstrated in porcine cornea and the Drosophila larva when computational correction of dispersion and aberrations is used. Due to the excellent correction of the microscope objective used, defocus was the main contribution to the aberrations. In addition, higher-order aberrations caused by the sample itself were successfully corrected. Dispersion and aberrations are closely related artifacts in microscopic OCT imaging. Hence they can be corrected in the same way by optimization of the image quality. In this way, microscopic resolution is readily achieved in OCT imaging of static biological tissues.
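The abstract describes parameterizing the phase error as a polynomial and optimizing image quality via Shannon's entropy; the sketch below illustrates that general idea in Python. The specific polynomial terms, the Fourier-plane application of the phase, and the Nelder-Mead optimizer are assumptions for illustration, not the authors' implementation.

import numpy as np
from scipy.optimize import minimize

def shannon_entropy(img):
    # Entropy of the normalized intensity; sharper (better corrected) images have lower entropy
    p = np.abs(img)**2
    p = p / p.sum()
    return -np.sum(p * np.log(p + 1e-12))

def apply_phase_error(field, coeffs):
    # Apply a low-order polynomial phase (defocus- and spherical-like terms, illustrative only)
    # in the spatial-frequency plane of a complex en-face OCT field
    F = np.fft.fftshift(np.fft.fft2(field))
    ny, nx = field.shape
    fy, fx = np.meshgrid(np.linspace(-1, 1, ny), np.linspace(-1, 1, nx), indexing="ij")
    r2 = fx**2 + fy**2
    phase = coeffs[0] * r2 + coeffs[1] * r2**2
    return np.fft.ifft2(np.fft.ifftshift(F * np.exp(-1j * phase)))

def correct(field):
    # Search for the polynomial coefficients that minimize image entropy
    cost = lambda c: shannon_entropy(apply_phase_error(field, c))
    res = minimize(cost, x0=np.zeros(2), method="Nelder-Mead")
    return apply_phase_error(field, res.x), res.x

# usage (hypothetical): corrected_field, coeffs = correct(complex_enface_field)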
Lamare, F; Le Maitre, A; Dawood, M; Schäfers, K P; Fernandez, P; Rimoldi, O E; Visvikis, D
2014-07-01
Cardiac imaging suffers from both respiratory and cardiac motion. One of the proposed solutions involves double gated acquisitions. Although such an approach may lead to both respiratory and cardiac motion compensation, there are issues associated with (a) the combination of data from cardiac and respiratory motion bins, and (b) poor statistical quality images as a result of using only part of the acquired data. The main objective of this work was to evaluate different schemes of combining binned data in order to identify the best strategy to reconstruct motion-free cardiac images from dual gated positron emission tomography (PET) acquisitions. A digital phantom study as well as seven human studies were used in this evaluation. PET data were acquired in list mode (LM). A real-time position management system and an electrocardiogram device were used to provide the respiratory and cardiac motion triggers registered within the LM file. Acquired data were subsequently binned considering four and six cardiac gates, or the diastole only, in combination with eight respiratory amplitude gates. PET images were corrected for attenuation, but neither randoms nor scatter corrections were included. Reconstructed images from each of the bins considered above were subsequently used in combination with an affine or an elastic registration algorithm to derive transformation parameters allowing the combination of all acquired data in a particular position in the cardiac and respiratory cycles. Images were assessed in terms of signal-to-noise ratio (SNR), contrast, image profile, coefficient-of-variation (COV), and relative difference of the recovered activity concentration. Regardless of the considered motion compensation strategy, the nonrigid motion model performed better than the affine model, leading to higher SNR and contrast combined with a lower COV. Nevertheless, when compensating for respiration only, no statistically significant differences were observed in the performance of the two motion models considered. Superior image SNR and contrast were seen using the affine respiratory motion model in combination with the diastole cardiac bin in comparison to the use of the whole cardiac cycle. In contrast, when simultaneously correcting for cardiac beating and respiration, the elastic respiratory motion model outperformed the affine model. In this context, four cardiac bins associated with eight respiratory amplitude bins seemed to be adequate. Considering the compensation of respiratory motion effects only, both affine and elastic based approaches led to an accurate resizing and positioning of the myocardium. The use of the diastolic phase combined with an affine model based respiratory motion correction may therefore be a simple approach leading to significant quality improvements in cardiac PET imaging. However, the best performance was obtained with the combined correction for both cardiac and respiratory movements, considering all the dual-gated bins independently through the use of an elastic model based motion compensation.
Kim, K B; Shanyfelt, L M; Hahn, D W
2006-01-01
Dense-medium scattering is explored in the context of providing a quantitative measurement of turbidity, with specific application to corneal haze. A multiple-wavelength scattering technique is proposed to make use of two-color scattering response ratios, thereby providing a means for data normalization. A combination of measurements and simulations is reported to assess this technique, including light-scattering experiments for a range of polystyrene suspensions. Monte Carlo (MC) simulations were performed using a multiple-scattering algorithm based on full Mie scattering theory. The simulations were in excellent agreement with the polystyrene suspension experiments, thereby validating the MC model. The MC model was then used to simulate multiwavelength scattering in a corneal tissue model. Overall, the proposed multiwavelength scattering technique appears to be a feasible approach to quantify dense-medium scattering such as the manifestation of corneal haze, although more complex modeling of keratocyte scattering, and animal studies, are necessary.
An empirical correction for moderate multiple scattering in super-heterodyne light scattering.
Botin, Denis; Mapa, Ludmila Marotta; Schweinfurth, Holger; Sieber, Bastian; Wittenberg, Christopher; Palberg, Thomas
2017-05-28
Frequency domain super-heterodyne laser light scattering is utilized in a low angle integral measurement configuration to determine flow and diffusion in charged sphere suspensions showing moderate to strong multiple scattering. We introduce an empirical correction to subtract the multiple scattering background and isolate the singly scattered light. We demonstrate the excellent feasibility of this simple approach for turbid suspensions of transmittance T ≥ 0.4. We study the particle concentration dependence of the electro-kinetic mobility in low salt aqueous suspension over an extended concentration regime and observe a maximum at intermediate concentrations. We further use our scheme for measurements of the self-diffusion coefficients in the fluid samples in the absence or presence of shear, as well as in polycrystalline samples during crystallization and coarsening. We discuss the scope and limits of our approach as well as possible future applications.
Use of the Wigner representation in scattering problems
NASA Technical Reports Server (NTRS)
Bemler, E. A.
1975-01-01
The basic equations of quantum scattering were translated into the Wigner representation, putting quantum mechanics in the form of a stochastic process in phase space, with real-valued probability distributions and source functions. The interpretative picture associated with this representation is developed and stressed, and results used in applications published elsewhere are derived. The form of the integral equation for scattering, as well as its multiple scattering expansion in this representation, is derived. Quantum corrections to classical propagators are briefly discussed. The basic approximation used in the Monte-Carlo method is derived in a fashion which allows for future refinement and which includes bound state production. Finally, as a simple illustration of some of the formalism, scattering by a bound two-body system is treated. Simple expressions for single and double scattering contributions to total and differential cross-sections, as well as for all necessary shadow corrections, are obtained.
NASA Technical Reports Server (NTRS)
Abbott, Mark R.
1996-01-01
Our first activity is based on delivery of code to Bob Evans (University of Miami) for integration and eventual delivery to the MODIS Science Data Support Team. As we noted in our previous semi-annual report, coding required the development and analysis of an end-to-end model of fluorescence line height (FLH) errors and sensitivity. This model is described in a paper in press in Remote Sensing of the Environment. Once the code was delivered to Miami, we continue to use this error analysis to evaluate proposed changes in MODIS sensor specifications and performance. Simply evaluating such changes on a band by band basis may obscure the true impacts of changes in sensor performance that are manifested in the complete algorithm. This is especially true with FLH that is sensitive to band placement and width. The error model will be used by Howard Gordon (Miami) to evaluate the effects of absorbing aerosols on the FLH algorithm performance. Presently, FLH relies only on simple corrections for atmospheric effects (viewing geometry, Rayleigh scattering) without correcting for aerosols. Our analysis suggests that aerosols should have a small impact relative to changes in the quantum yield of fluorescence in phytoplankton. However, the effect of absorbing aerosol is a new process and will be evaluated by Gordon.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, William Scott
This seminar presentation describes amplitude models and yield estimations that look at the data in order to inform legislation. The following points were brought forth in the summary: global models that will predict three-component amplitudes (R-T-Z) were produced; Q models match regional geology; corrected source spectra can be used for discrimination and yield estimation; three-component data increase coverage and reduce scatter in source spectral estimates; three-component efforts must include distance-dependent effects; a community effort on instrument calibration is needed.
Satellite estimation of surface spectral ultraviolet irradiance using OMI data in East Asia
NASA Astrophysics Data System (ADS)
Lee, H.; Kim, J.; Jeong, U.
2017-12-01
Because of its strong influence on human health and ecosystems, continuous monitoring of surface ultraviolet (UV) irradiance is important. The amount of UVA (320-400 nm) and UVB (290-320 nm) radiation at the Earth's surface depends on the extent of Rayleigh scattering by atmospheric gas molecules, radiative absorption by ozone, radiative scattering by clouds, and both absorption and scattering by airborne aerosols. Careful treatment of these factors is therefore essential when estimating UV irradiance. The UV index (UVI) is a simple parameter that expresses the strength of surface UV irradiance and has therefore been widely used for UV monitoring. In this study, we estimate surface UV irradiance over East Asia using realistic inputs based on OMI total ozone and reflectivity, and then validate the estimates against UV irradiance from World Ozone and Ultraviolet Radiation Data Centre (WOUDC) data. In this work, we also develop our own retrieval algorithm for better estimation of surface irradiance. We use the Vector Linearized Discrete Ordinate Radiative Transfer (VLIDORT) model version 2.6 for the UV irradiance calculation. The inputs to the VLIDORT radiative transfer calculations are the total ozone column (TOMS V7 climatology), the surface albedo (Herman and Celarier, 1997) and the cloud optical depth. Based on these, the UV irradiance is calculated using a look-up table (LUT) approach. To correct for absorbing aerosols, the UV irradiance algorithm incorporates climatological aerosol information (Arola et al., 2009). In further work, we will carry out a comprehensive uncertainty analysis based on the LUT and all input parameters.
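A minimal sketch of the look-up-table step described above: surface UV irradiance precomputed offline (with a radiative transfer model such as VLIDORT) on a grid of total ozone column and cloud optical depth, then interpolated for given inputs. The grid axes, placeholder values and function names are illustrative assumptions only.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical LUT axes: total ozone column [DU] and cloud optical depth
ozone_axis = np.array([200., 250., 300., 350., 400.])
cod_axis   = np.array([0., 1., 2., 5., 10., 20.])

# uv_lut[i, j]: surface UV irradiance precomputed offline with a radiative
# transfer model for ozone_axis[i], cod_axis[j]; random numbers used here as placeholders.
uv_lut = np.random.rand(len(ozone_axis), len(cod_axis))

interp = RegularGridInterpolator((ozone_axis, cod_axis), uv_lut)

def surface_uv(total_ozone_du, cloud_optical_depth):
    # Estimate surface UV irradiance by interpolating the precomputed LUT
    return interp([[total_ozone_du, cloud_optical_depth]]).item()

print(surface_uv(285.0, 3.5))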
Lunar single-scattering, porosity, and surface-roughness properties with SMART-1/AMIE
NASA Astrophysics Data System (ADS)
Parviainen, H.; Muinonen, K.; Näränen, J.; Josset, J.-L.; Beauvivre, S.; Pinet, P.; Chevrel, S.; Koschny, D.; Grieger, B.; Foing, B.
2009-04-01
We analyze the single-scattering albedo and phase function, local surface roughness and regolith porosity, and the coherent backscattering, single scattering, and shadowing contributions to the opposition effect for specific lunar mare regions imaged by the SMART-1/AMIE camera. We account for shadowing due to surface roughness and mutual shadowing among the regolith particles with ray-tracing computations for densely-packed particulate media with a fractional-Brownian-motion interface with free space. The shadowing modeling allows us to derive the hundred-micron-scale volume-element scattering phase function for the lunar mare regolith. We explain the volume-element phase function by a coherent-backscattering model, where the single scatterers are the submicron-to-micron-scale particle inhomogeneities and/or the smallest particles on the lunar surface. We express the single-scatterer phase function as a sum of three Henyey-Greenstein terms, accounting for increased backward scattering in both narrow and wide angular ranges. The Moon exhibits an opposition effect, that is, a nonlinear increase of disk-integrated brightness with decreasing solar phase angle, the angle between the Sun and the observer as seen from the object. Recently, the coherent-backscattering mechanism (CBM) has been introduced to explain the opposition effect. CBM is a multiple-scattering interference mechanism, where reciprocal waves propagating through the same scatterers in opposite directions always interfere constructively in the backward-scattering direction but with varying interference characteristics in other directions. In addition to CBM, mutual shadowing among regolith particles (SMp) and rough-surface shadowing (SMr) affect the behavior of the observed lunar surface brightness. In order to accrue knowledge on the volume-element and, ultimately, single-scattering properties of the lunar regolith, both SMp and SMr need to be accurately accounted for. We included four different lunar mare regions in our study. Each of these regions covers several hundreds of square kilometers of lunar surface. When selecting the regions, we required that they had been imaged by AMIE across a wide range of phase angles, including the opposition geometry. The phase-angle range covered is 0-109°, with incidence and emergence angles (ι and ε) ranging within 7-87° and 0-53°, respectively. The pixel scale varies from 288 m down to 29 m. Biases and dark currents were subtracted from the images in the usual way, followed by a flat-field correction. New dark-current reduction procedures have recently been derived from in-flight measurements to replace the ground-calibration images. The clear filter was chosen for the present study as it provides the largest field of view and is currently the best-calibrated channel. Off-nadir-pointing observations allowed for the extensive phase-angle coverage. In total, 220 images are used for the present study. The photometric data points were extracted as follows. First, on average, 50 sample areas of 10 × 10 pixels were chosen by hand from each image. Second, the surface normal and the angles ι, ε, and α were computed for each pixel in each sample area using the NASA/NAIF SPICE software toolkit with the latest and corrected SMART-1/AMIE SPICE kernels. Finally, the illumination angles and the observed intensity were averaged over each sample area. In total, the images used in the study resulted in approximately 11000 photometric sample points for the four mare regions.
We make use of fractional-Brownian-motion surfaces in modeling the interface between free space and regolith and a size distribution of spherical particles in modeling the particulate medium. We extract the effects of the stochastic geometry from the lunar photometry and, simultaneously, obtain the volume-element scattering phase function of the lunar regolith locations studied. The volume-element phase function allows us to constrain the physical properties of the regolith particles. Based on the present theoretical modeling of the lunar photometry from SMART-1/AMIE, we conclude that most of the lunar mare opposition effect is caused by coherent backscattering and single scattering within volume elements comparable to lunar particle sizes, with only a small contribution from shadowing effects. We thus suggest that the lunar single scatterers exhibit intensity enhancement towards the backward scattering direction in resemblance to the scattering characteristics experimentally measured and theoretically computed for realistic small particles. Further interpretations of the lunar volume-element phase function will be the subject of future research.
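A minimal sketch of a phase function written as a sum of three Henyey-Greenstein terms, as described above; the weights and asymmetry parameters below are placeholders, not the fitted lunar values (here theta is the scattering angle, so negative g gives enhanced backward scattering).

import numpy as np

def henyey_greenstein(theta, g):
    # Single Henyey-Greenstein term, normalized over the sphere (theta = scattering angle, rad)
    return (1.0 - g**2) / (1.0 + g**2 - 2.0 * g * np.cos(theta))**1.5 / (4.0 * np.pi)

def three_term_hg(theta, weights, gs):
    # Weighted sum of three HG terms; weights should sum to 1
    return sum(w * henyey_greenstein(theta, g) for w, g in zip(weights, gs))

# Placeholder parameters: narrow and wide backward-scattering terms plus a forward term
weights = (0.5, 0.3, 0.2)
gs      = (-0.8, -0.3, 0.6)

theta = np.linspace(0.0, np.pi, 181)
p = three_term_hg(theta, weights, gs)
print(p[0], p[-1])   # forward vs backward values of the composite phase function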
A weak-scattering model for turbine-tone haystacking
NASA Astrophysics Data System (ADS)
McAlpine, A.; Powles, C. J.; Tester, B. J.
2013-08-01
Noise and emissions are critical technical issues in the development of aircraft engines. This necessitates the development of accurate models to predict the noise radiated from aero-engines. Turbine tones radiated from the exhaust nozzle of a turbofan engine propagate through turbulent jet shear layers, which causes scattering of sound. In the far field, measurements of the tones may exhibit spectral broadening, where, owing to scattering, the tones are no longer narrow-band peaks in the spectrum. This effect is known colloquially as 'haystacking'. In this article a comprehensive analytical model to predict spectral broadening for a tone radiated through a circular jet, for an observer in the far field, is presented. This model extends previous work by the authors which considered the prediction of spectral broadening at far-field observer locations outside the cone of silence. The modelling uses high-frequency asymptotic methods and a weak-scattering assumption. A realistic shear-layer velocity profile and turbulence characteristics are included in the model. The mathematical formulation which details the spectral broadening, or haystacking, of a single-frequency, single-azimuthal-order turbine tone is outlined. In order to validate the model, predictions are compared with experimental results, albeit only at a polar angle equal to 90°. A range of source frequencies from 4 to 20 kHz, and jet velocities from 20 to 60 m s-1, are examined for validation purposes. The model correctly predicts how the spectral broadening is affected when the source frequency and jet velocity are varied.
Online quantitative analysis of multispectral images of human body tissues
NASA Astrophysics Data System (ADS)
Lisenko, S. A.
2013-08-01
A method is developed for online monitoring of structural and morphological parameters of biological tissues (haemoglobin concentration, degree of blood oxygenation, average diameter of capillaries and the parameter characterising the average size of tissue scatterers), which involves multispectral tissue imaging, image normalisation to one of its spectral layers and determination of unknown parameters based on their stable regression relation with the spectral characteristics of the normalised image. Regression is obtained by simulating numerically the diffuse reflectance spectrum of the tissue by the Monte Carlo method at a wide variation of model parameters. The correctness of the model calculations is confirmed by the good agreement with the experimental data. The error of the method is estimated under conditions of general variability of structural and morphological parameters of the tissue. The method developed is compared with the traditional methods of interpretation of multispectral images of biological tissues, based on the solution of the inverse problem for each pixel of the image in the approximation of different analytical models.
Laplace Transform Based Radiative Transfer Studies
NASA Astrophysics Data System (ADS)
Hu, Y.; Lin, B.; Ng, T.; Yang, P.; Wiscombe, W.; Herath, J.; Duffy, D.
2006-12-01
Multiple scattering is the major uncertainty for data analysis of space-based lidar measurements. Until now, accurate quantitative lidar data analysis has been limited to very thin objects that are dominated by single scattering, where photons from the laser beam scatter only a single time with particles in the atmosphere before reaching the receiver, and a simple linear relationship between physical properties and the lidar signal exists. In reality, multiple scattering is always a factor in space-based lidar measurements, and it dominates space-based lidar returns from clouds, dust aerosols, vegetation canopy and phytoplankton. While multiple-scattering returns are clear signals, the lack of a fast-enough lidar multiple-scattering computation tool forces us to treat the signal as unwanted "noise" and use simple multiple-scattering correction schemes to remove it. Such treatments waste the multiple-scattering signals and may cause orders-of-magnitude errors in retrieved physical properties. Thus the lack of fast and accurate time-dependent radiative transfer tools significantly limits lidar remote sensing capabilities. Analyzing lidar multiple-scattering signals requires fast and accurate time-dependent radiative transfer computations. Currently, multiple scattering is handled with Monte Carlo simulations. Monte Carlo simulations take minutes to hours and are too slow for interactive satellite data analysis processes; they can only be used to help system/algorithm design and error assessment. We present an innovative physics approach to solve the time-dependent radiative transfer problem. The technique utilizes FPGA-based reconfigurable computing hardware. The approach is as follows. 1. Physics solution: perform a Laplace transform on the time and spatial dimensions and a Fourier transform on the viewing azimuth dimension, converting the solution of the radiative transfer differential equation into a fast matrix inversion problem. The majority of the radiative transfer computation goes to matrix inversion processes, FFTs and inverse Laplace transforms. 2. Hardware solution: perform the well-defined matrix inversions, FFTs and Laplace transforms on highly parallel, reconfigurable computing hardware. This physics-based computational tool leads to accurate quantitative analysis of space-based lidar signals and improves the data quality of current lidar missions such as CALIPSO. This presentation will introduce the basic idea of this approach, preliminary results based on SRC's FPGA-based Mapstation, and how we may apply it to CALIPSO data analysis.
Gutiérrez, Salvador; Tardaguila, Javier; Fernández-Novales, Juan; Diago, María P
2015-01-01
The identification of different grapevine varieties, currently addressed using visual ampelometry, DNA analysis and, very recently, hyperspectral analysis under laboratory conditions, is an issue of great importance in the wine industry. This work presents support vector machine and artificial neural network modelling for grapevine varietal classification from in-field leaf spectroscopy. Modelling was attempted at two scales: site-specific and global. Spectral measurements were obtained in the near-infrared (NIR) spectral range between 1600 and 2400 nm under field conditions in a non-destructive way using a portable spectrophotometer. For the site-specific approach, spectra were collected from the adaxial side of 400 individual leaves of 20 grapevine (Vitis vinifera L.) varieties one week after veraison. For the global model, two additional sets of spectra were collected one week before harvest from two different vineyards in another vintage, each consisting of 48 measurements from individual leaves of six varieties. Several combinations of spectral scatter correction and smoothing filtering were studied. For the training of the models, support vector machines and artificial neural networks were employed using the pre-processed spectra as input and the varieties as the classes of the models. The results from the pre-processing study showed that scatter correction had no influence on performance. A second-derivative Savitzky-Golay filter with a window size of 5 yielded the best outcomes. For the site-specific model, with 20 classes, the best classifiers achieved an overall score of 87.25% correctly classified samples. These results were compared under the same conditions with a model trained using partial least squares discriminant analysis, which showed worse performance in every case. For the global model, a 6-class dataset involving samples from three different vineyards, two years and leaves monitored at post-veraison and at harvest was also built, reaching 77.08% correctly classified samples. The outcomes obtained demonstrate the capability of a reliable method for fast, in-field, non-destructive grapevine varietal classification that could be very useful in viticulture and the wine industry, either globally or site-specifically.
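A hedged sketch of the preprocessing and classification pipeline described above (second-derivative Savitzky-Golay filtering of the NIR spectra followed by a support vector machine); the polynomial order, kernel choice and the random placeholder data are assumptions, not the paper's settings.

import numpy as np
from scipy.signal import savgol_filter
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def preprocess(spectra):
    # Second-derivative Savitzky-Golay filtering, window size 5 (polyorder is an assumption)
    return savgol_filter(spectra, window_length=5, polyorder=3, deriv=2, axis=1)

# X: (n_leaves, n_wavelengths) NIR spectra, y: variety labels -- random placeholders here
X = np.random.rand(400, 200)
y = np.random.randint(0, 20, size=400)

X_train, X_test, y_train, y_test = train_test_split(preprocess(X), y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))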
Fluorescence-based observations provide useful, sensitive information concerning the nature and distribution of colored dissolved organic matter (CDOM) in coastal and freshwater environments. The excitation-emission matrix (EEM) technique has become widely used for evaluating sou...
Sanzolone, R.F.
1986-01-01
An inductively coupled plasma atomic fluorescence spectrometric method is described for the determination of six elements in a variety of geological materials. Sixteen reference materials are analysed by this technique to demonstrate its use in geochemical exploration. Samples are decomposed with nitric, hydrofluoric and hydrochloric acids, and the residue dissolved in hydrochloric acid and diluted to volume. The elements are determined in two groups based on compatibility of instrument operating conditions and consideration of crustal abundance levels. Cadmium, Cu, Pb and Zn are determined as a group in the 50-ml sample solution under one set of instrument conditions with the use of scatter correction. Limitations of the scatter correction technique used with the fluorescence instrument are discussed. Iron and Mn are determined together using another set of instrumental conditions on a 1-50 dilution of the sample solution without the use of scatter correction. The ranges of concentration (??g g-1) of these elements in the sample that can be determined are: Cd, 0.3-500; Cu, 0.4-500; Fe, 85-250 000; Mn, 45-100 000; Pb, 5-10 000; and Zn, 0.4-300. The precision of the method is usually less than 5% relative standard deviation (RSD) over a wide concentration range and acceptable accuracy is shown by the agreement between values obtained and those recommended for the reference materials.
Scatter and veiling glare corrections for quantitative digital subtraction angiography
NASA Astrophysics Data System (ADS)
Ersahin, Atila; Molloi, Sabee Y.; Qian, Yao-Jin
1994-05-01
In order to quantitate anatomical and physiological parameters such as vessel dimensions and volumetric blood flow, it is necessary to make corrections for scatter and veiling glare (SVG), which are the major sources of nonlinearities in videodensitometric digital subtraction angiography (DSA). A convolution filtering technique has been investigated to estimate the SVG distribution in DSA images without the need to sample the SVG for each patient. This technique utilizes exposure parameters and image gray levels to estimate the SVG intensity by predicting the total thickness for every pixel in the image. Corrections were also made for the variation of the SVG fraction with beam energy and field size. To test its ability to estimate SVG intensity, the correction technique was applied to images of a Lucite step phantom, an anthropomorphic chest phantom, a head phantom, and animal models at different thicknesses, projections, and beam energies. The root-mean-square (rms) percentage error of these estimates was obtained by comparison with direct SVG measurements made behind a lead strip. The average rms percentage errors in the SVG estimate for the 25 phantom studies and for the 17 animal studies were 6.22% and 7.96%, respectively. These results indicate that the SVG intensity can be estimated for a wide range of thicknesses, projections, and beam energies.
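A simplified sketch of the general convolution-filtering idea (not the authors' exact kernel or calibration): the SVG field is estimated as a broadly blurred copy of the measured image scaled by an SVG fraction, which in the paper depends on exposure parameters, estimated thickness, beam energy and field size. The Gaussian kernel and numeric values below are placeholders.

import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_svg(image, svg_fraction, kernel_sigma_px):
    # Estimate the scatter/veiling-glare field as a scaled, broadly blurred copy of the image
    return svg_fraction * gaussian_filter(image, sigma=kernel_sigma_px)

def correct_svg(image, svg_fraction, kernel_sigma_px):
    # Subtract the estimated SVG before videodensitometric analysis
    return image - estimate_svg(image, svg_fraction, kernel_sigma_px)

# Placeholder values: in practice svg_fraction and kernel width would be set from
# exposure parameters and the estimated object thickness, as described in the abstract.
corrected = correct_svg(np.random.rand(512, 512), svg_fraction=0.3, kernel_sigma_px=40)
print(corrected.mean())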
Determination of total phenolic compounds in compost by infrared spectroscopy.
Cascant, M M; Sisouane, M; Tahiri, S; Krati, M El; Cervera, M L; Garrigues, S; de la Guardia, M
2016-06-01
Middle- and near-infrared (MIR and NIR) spectroscopy were applied to determine the total phenolic compounds (TPC) content in compost samples based on models built using partial least squares (PLS) regression. Multiplicative scatter correction, standard normal variate and first derivative were employed as spectral pretreatments, and the number of latent variables was optimized by leave-one-out cross-validation. The performance of the PLS-ATR-MIR and PLS-DR-NIR models was evaluated according to the root mean square errors of cross-validation and prediction (RMSECV and RMSEP), the coefficient of determination for prediction (Rpred^2) and the residual predictive deviation (RPD); RPD values of 5.83 and 8.26 were obtained for MIR and NIR, respectively. Copyright © 2016 Elsevier B.V. All rights reserved.
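A sketch of a textbook multiplicative scatter correction (MSC), one of the pretreatments named above; this is the standard mean-reference formulation, not necessarily the exact implementation used in the paper.

import numpy as np

def msc(spectra, reference=None):
    # Multiplicative scatter correction: regress each spectrum on a reference
    # (here the mean spectrum) and remove the additive and multiplicative effects.
    ref = spectra.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(spectra, dtype=float)
    for i, s in enumerate(spectra):
        slope, intercept = np.polyfit(ref, s, 1)
        corrected[i] = (s - intercept) / slope
    return corrected

# spectra: (n_samples, n_wavenumbers) MIR or NIR spectra -- random placeholders here
spectra = np.random.rand(30, 600)
spectra_msc = msc(spectra)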
Reflectance and fluorescence spectroscopies in photodynamic therapy
NASA Astrophysics Data System (ADS)
Finlay, Jarod C.
In vivo fluorescence spectroscopy during photodynamic therapy (PDT) has the potential to provide information on the distribution and degradation of sensitizers, the formation of fluorescent photoproducts and changes in tissue autofluorescence induced by photodynamic treatment. Reflectance spectroscopy allows quantification of light absorption and scattering in tissue. We present the results of several related studies of fluorescence and reflectance spectroscopy and their applications to photodynamic dosimetry. First, we develop and test an empirical method for the correction of the distortions imposed on fluorescence spectra by absorption and scattering in turbid media. We characterize the irradiance dependence of the in vivo photobleaching of three sensitizers, protoporphyrin IX (PpIX), Photofrin and mTHPC, in a rat skin model. The photobleaching and photoproduct formation of PpIX exhibit irradiance dependence consistent with singlet oxygen (1O2)-mediated bleaching. The bleaching of mTHPC occurs in two phases, only one of which is consistent with a 1O2-mediated mechanism. Photofrin's bleaching is independent of irradiance, although its photoproduct formation is not. This can be explained by a mixed-mechanism bleaching model. Second, we develop an algorithm for the determination of tissue optical properties using diffuse reflectance spectra measured at a single source-detector separation and demonstrate the recovery of the hemoglobin oxygen dissociation curve from tissue-simulating phantoms containing human erythrocytes. This method is then used to investigate the heterogeneity of oxygenation response in murine tumors induced by carbogen inhalation. We find that while the response varies among animals and within each tumor, the majority of tumors exhibit an increase in blood oxygenation during carbogen breathing. We present a forward-adjoint model of fluorescence propagation that uses the optical property information acquired from reflectance spectroscopy to obtain the undistorted fluorescence spectrum over a wide range of optical properties. Finally, we investigate the ability of the forward-adjoint theory to extract undistorted fluorescence and optical property information simultaneously from a single measured fluorescence spectrum. This method can recover the hemoglobin oxygen dissociation curve in tissue-simulating phantoms with an accuracy comparable to that of reflectance-based methods while correcting distortions in the fluorescence over a wide range of absorption and scattering coefficients.
A Backscattering Enhanced Microwave Canopy Scattering Model Based On MIMICS
NASA Astrophysics Data System (ADS)
Shen, X.; Hong, Y.; Qin, Q.; Chen, S.; Grout, T.
2010-12-01
For modeling microwave scattering of vegetated areas, several microwave canopy scattering models, based on the vectorized radiative transfer equation (VRT) that use different solving techniques, have been proposed in the past three decades. As an iterative solution of VRT at low orders, the Michigan Microwave Canopy Scattering Model (MIMICS) gives an analytical expression for calculating scattering as long as the volume scattering is not too strong. The most important usage of such models is to predict scattering in the backscattering direction. Unfortunately, the simplified assumption of MIMICS is that the scattering between the ground and trunk layers only includes the specular reflection. As a result, MIMICS includes a dominant coherent term which vanishes in the backscattering direction because this term contains a delta function factor of zero in this direction. This assumption needs reconsideration for accurately calculating the backscattering. In the framework of MIMICS, any incoherent terms that involve surface scattering factors must at least undergo surface scattering twice and volume scattering once. Therefore, these incoherent terms are usually very weak. On the other hand, due to the phenomenon of backscattering enhancement, the surface scattering in the backscattering direction is very strong compared to most other directions. Considering the facts discussed above, it is reasonable to add a surface backscattering term to the last equation of the boundary conditions of MIMICS. More terms appear in the final result including a backscattering coherent term which enhances the backscattering. The modified model is compared with the original MIMICS (version 1.0) using JPL/AIRSAR data from NASA Campaign Soil Moisture Experimental 2003 (SMEX03) and Washita92. Significant improvement is observed.
NASA Astrophysics Data System (ADS)
Otarola, Angel; Neichel, Benoit; Wang, Lianqi; Boyer, Corinne; Ellerbroek, Brent; Rigaut, François
2013-12-01
Large aperture ground-based telescopes require Adaptive Optics (AO) to correct for the distortions induced by atmospheric turbulence and achieve diffraction limited imaging quality. These AO systems rely on Natural and Laser Guide Stars (NGS and LGS) to provide the information required to measure the wavefront from the astronomical sources under observation. In particular one such LGS method consists in creating an artificial star by means of fluorescence of the sodium atoms at the altitude of the Earth's mesosphere. This is achieved by propagating one or more lasers, at the wavelength of the Na D2a resonance, from the telescope up to the mesosphere. Lasers can be launched from either behind the secondary mirror or from the perimeter of the main aperture. The so-called central- and side-launch systems, respectively. The central-launch system, while helpful to reduce the LGS spot elongation, introduces the so-called "fratricide" effect. This consists of an increase in the photon-noise in the AO Wave Front Sensors (WFS) sub-apertures, with photons that are the result of laser photons back-scattering from atmospheric molecules (Rayleigh scattering) and atmospheric aerosols (dust and/or cirrus clouds ice particles). This affects the performance of the algorithms intended to compute the LGS centroids and subsequently compute and correct the turbulence-induced wavefront distortions. In the frame of the Thirty Meter Telescope (TMT) project and using actual LGS WFS data obtained with the Gemini Multi-Conjugate Adaptive Optics System (Gemini MCAO a.k.a. GeMS), we show results from an analysis of the temporal variability of the observed fratricide effect, as well as comparison of the absolute magnitude of fratricide photon-flux level with simulations using models that account for molecular (Rayleigh) scattering and photons backscattered from cirrus clouds.
Models for electromagnetic scattering from the sea at extremely low grazing angles
NASA Astrophysics Data System (ADS)
Wetzel, Lewis B.
1987-12-01
The present state of understanding in the field of low-grazing-angle sea scatter is reviewed and extended. The important concept of shadowing is approached from the point of view of diffraction theory, and limits in wind speed and radar frequency are found for the application of shadowing theories based on geometrical optics. The implications of shadowing function based on illumination thresholding are shown to compare favorably with a variety of experimental results. Scattering from the exposed surface peaks is treated by a composite-surface Bragg model, and by wedge models using both physical optics and the method of equivalent currents. Curiously, the scattering levels predicted by these widely different approximations are all in fairly good agreement with experimental values for moderately low grazing angles (about 5 deg), with the physical optics wedge model being superior at 1 deg. A new scattering feature, the slosh, is introduced, with scattering behavior that resembles the temporal and polarization dependence of observed low angle returns from calm water. The plume model of scattering from breaking waves (from earlier work) is discussed as a source of high-intensity Sea Spikes. It is emphasized that the prediction of low angle scattering from the sea will require considerably more information about the shape, size, and distribution of the actual scattering features.
NASA Astrophysics Data System (ADS)
Preissler, Natalie; Bierwagen, Oliver; Ramu, Ashok T.; Speck, James S.
2013-08-01
A comprehensive study of the room-temperature electrical and electrothermal transport of single-crystalline indium oxide (In2O3) and indium tin oxide (ITO) films over a wide range of electron concentrations is reported. We measured the room-temperature Hall mobility μ_H and Seebeck coefficient S of unintentionally doped and Sn-doped high-quality, plasma-assisted molecular-beam-epitaxy-grown In2O3 for volume Hall electron concentrations n_H from 7×10^16 cm^-3 (unintentionally doped) to 1×10^21 cm^-3 (highly Sn-doped, ITO). The resulting empirical S(n_H) relation can be directly used in other In2O3 samples to estimate the volume electron concentration from simple Seebeck coefficient measurements. The mobility and Seebeck coefficient were modeled by a numerical solution of the Boltzmann transport equation. Ionized impurity scattering and polar optical phonon scattering were found to be the dominant scattering mechanisms. Acoustic phonon scattering was found to be negligible. Fitting the temperature-dependent mobility above room temperature of an In2O3 film with high mobility allowed us to find the effective Debye temperature (Θ_D = 700 K) and number of phonon modes (NOPML = 1.33) that best describe the polar optical phonon scattering. The modeling also yielded the Hall scattering factor r_H as a function of electron concentration, which is not negligible (r_H ≈ 1.4) at nondegenerate electron concentrations. Fitting the Hall-scattering-factor corrected concentration-dependent Seebeck coefficient S(n) for nondegenerate samples to the numerical solution of the Boltzmann transport equation and to widely used, simplified equations allowed us to extract an effective electron mass of m* = (0.30 ± 0.03) m_e (with free electron mass m_e). The modeled mobility and Seebeck coefficient based on polar optical phonon and ionized impurity scattering describe the experimental results very accurately up to electron concentrations of 10^19 cm^-3, and qualitatively explain a mobility plateau or local maximum around 10^20 cm^-3. Ionized impurity scattering with doubly charged donors best describes the mobility in our unintentionally doped films, consistent with oxygen vacancies as unintentional shallow donors, whereas singly charged donors best describe our Sn-doped films. Our modeling yields a (phonon-limited) maximum theoretical drift mobility and Hall mobility of μ = 190 cm^2/Vs and μ_H = 270 cm^2/Vs, respectively. Simplified equations for the Seebeck coefficient describe the measured values in the nondegenerate regime using a Seebeck scattering parameter of r = -0.55 (which is consistent with the determined Debye temperature), and provide an estimate of the Seebeck coefficient at lower electron concentrations. The simplified equations fail to describe the Seebeck coefficient around the Mott transition (n_Mott = 5.5×10^18 cm^-3) from nondegenerate to degenerate electron concentrations, whereas the numerical modeling accurately describes this region.
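For the "widely used, simplified equations" mentioned above, a common textbook form of the nondegenerate n-type Seebeck coefficient is S = -(k/e)[(r + 5/2) + ln(N_C/n)], with N_C the effective conduction-band density of states. The sketch below evaluates it with the paper's m* = 0.30 m_e and r = -0.55; it is an assumption that this is the exact expression the authors used, and the electron concentration is illustrative.

import numpy as np
from scipy.constants import k, e, m_e, h

def seebeck_nondegenerate(n_cm3, m_eff=0.30 * m_e, r=-0.55, T=300.0):
    # Textbook nondegenerate n-type Seebeck coefficient S = -(k/e)[(r + 5/2) + ln(Nc/n)]
    n = n_cm3 * 1e6                                             # cm^-3 -> m^-3
    Nc = 2.0 * (2.0 * np.pi * m_eff * k * T / h**2) ** 1.5      # effective DOS [m^-3]
    return -(k / e) * ((r + 2.5) + np.log(Nc / n))              # [V/K]

print(seebeck_nondegenerate(1e17) * 1e6, "uV/K")   # illustrative electron concentration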
Spillover Compensation in the Presence of Respiratory Motion Embedded in SPECT Perfusion Data
NASA Astrophysics Data System (ADS)
Pretorius, P. Hendrik; King, Michael A.
2008-02-01
Spillover from adjacent significant accumulations of extra-cardiac activity decreases the diagnostic accuracy of SPECT perfusion imaging, especially in the inferior/septal cardiac region. One method of compensating for the spillover at some location outside of a structure is to estimate it as the counts blurred into this location when a template (3D model) of the structure undergoes simulated imaging followed by reconstruction. The objective of this study was to determine what impact uncorrected respiratory motion has on such spillover compensation of extra-cardiac activity in the right coronary artery (RCA) territory, and whether it is possible to use manual segmentation to define the extra-cardiac activity template(s) used in spillover correction. Two separate MCAT phantoms (128^3 matrices) were simulated to represent the source and attenuation distributions of patients with and without respiratory motion. For each phantom the heart was modeled: 1) with a normal perfusion pattern and 2) with an RCA defect equal to 50% of the normal myocardium count level. After Monte Carlo simulation of 64×64×120 projections with appropriate noise, data were reconstructed using the rescaled block iterative (RBI) algorithm with 30 subsets and 5 iterations, with compensation for attenuation, scatter and resolution. A 3D Gaussian post-filter with a sigma of 0.476 cm was used to suppress noise. Manual segmentation of the liver in filtered emission slices was used to create 3D binary templates. The true liver distribution (with and without respiratory motion included) was also used as binary templates. These templates were projected using a ray-driven projector simulating the imaging system with the exclusion of Compton scatter, and reconstructed using the same protocol as for the emission data, excluding scatter compensation. Reconstructed templates were scaled using reconstructed emission count levels from the liver, and the spillover was subtracted outside the template. It was evident from the polar maps that the manually segmented template reconstructions were unable to remove all the spillover originating in the liver from the inferior wall. This was especially noticeable when a perfusion defect was present. Templates based on the true liver distribution appreciably improved spillover correction. Thus the emerging combined SPECT/CT technology may play a vital role in identifying and segmenting extra-cardiac structures more reliably, thereby facilitating spillover correction. This study also indicates that compensation for respiratory motion might play an important role in spillover compensation.
Standardizing Type Ia supernovae optical brightness using near-infrared rebrightening time
NASA Astrophysics Data System (ADS)
Shariff, H.; Dhawan, S.; Jiao, X.; Leibundgut, B.; Trotta, R.; van Dyk, D. A.
2016-12-01
Accurate standardization of Type Ia supernovae (SNIa) is instrumental to the usage of SNIa as distance indicators. We analyse a homogeneous sample of 22 low-z SNIa, observed by the Carnegie Supernova Project in the optical and near-infrared (NIR). We study the time of the second peak in the J band, t2, as an alternative standardization parameter of SNIa peak optical brightness, as measured by the standard SALT2 parameter mB. We use BAHAMAS, a Bayesian hierarchical model for SNIa cosmology, to estimate the residual scatter in the Hubble diagram. We find that in the absence of a colour correction, t2 is a better standardization parameter compared to stretch: t2 has a 1σ posterior interval for the Hubble residual scatter of σΔμ = {0.250, 0.257} mag, compared to σΔμ = {0.280, 0.287} mag when stretch (x1) alone is used. We demonstrate that when employed together with a colour correction, t2 and stretch lead to similar residual scatter. Using colour, stretch and t2 jointly as standardization parameters does not result in any further reduction in scatter, suggesting that t2 carries redundant information with respect to stretch and colour. With a much larger SNIa NIR sample at higher redshift in the future, t2 could be a useful quantity to perform robustness checks of the standardization procedure.
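For context, a hedged sketch of the conventional optical standardization against which t2 is compared (a Tripp-style relation mu = mB - M + alpha*x1 - beta*c) and the resulting Hubble-residual scatter; the BAHAMAS analysis in the paper is a full Bayesian hierarchical model, and the coefficients and data below are placeholders only.

import numpy as np

def hubble_residual_scatter(mB, x1, c, mu_cosmo, M=-19.3, alpha=0.14, beta=3.1):
    # Tripp-style standardization mu = mB - M + alpha*x1 - beta*c and residual standard deviation
    mu = mB - M + alpha * x1 - beta * c
    return np.std(mu - mu_cosmo, ddof=1)

# Placeholder arrays standing in for a low-z sample (peak magnitudes mB, stretch x1,
# colour c, and cosmological distance moduli mu_cosmo at the SN redshifts).
rng = np.random.default_rng(0)
mB, x1, c = rng.normal(15, 1, 22), rng.normal(0, 1, 22), rng.normal(0, 0.1, 22)
mu_cosmo = mB + 19.3 + rng.normal(0, 0.15, 22)
print(hubble_residual_scatter(mB, x1, c, mu_cosmo))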
Scanning in situ Spectroscopy platform for imaging surgical breast tissue specimens
Krishnaswamy, Venkataramanan; Laughney, Ashley M.; Wells, Wendy A.; Paulsen, Keith D.; Pogue, Brian W.
2013-01-01
A non-contact localized spectroscopic imaging platform has been developed and optimized to scan 1 × 1 cm square regions of surgically resected breast tissue specimens with ~150-micron resolution. A color-corrected, image-space telecentric scanning design maintained a consistent sampling geometry and uniform spot size across the entire imaging field. Theoretical modeling in ZEMAX allowed estimation of the spot size, which is equal at both the center and extreme positions of the field with ~5% variation across the designed waveband, indicating excellent color correction. The spot sizes at the center and an extreme field position were also measured experimentally using the standard knife-edge technique and were found to be within ~8% of the theoretical predictions. Highly localized sampling offered inherent insensitivity to variations in background absorption, allowing direct imaging of local scattering parameters, which was validated using a matrix of varying concentrations of Intralipid and blood in phantoms. Four representative, pathologically distinct lumpectomy tissue specimens were imaged, capturing natural variations in tissue scattering response within a given pathology. Variations as high as 60% were observed in the average reflectance and relative scattering power images, which must be taken into account for robust classification performance. Despite this variation, the preliminary data indicate discernible scatter power contrast between the benign and malignant groups, but reliable discrimination of pathologies within these groups would require investigation into additional contrast mechanisms. PMID:23389199
On the convergence and accuracy of the FDTD method for nanoplasmonics.
Lesina, Antonino Calà; Vaccari, Alessandro; Berini, Pierre; Ramunno, Lora
2015-04-20
Use of the Finite-Difference Time-Domain (FDTD) method to model nanoplasmonic structures continues to rise - more than 2700 papers were published in 2014 on FDTD simulations of surface plasmons. However, a comprehensive study on the convergence and accuracy of the method for nanoplasmonic structures has yet to be reported. Although the method may be well-established in other areas of electromagnetics, the peculiarities of nanoplasmonic problems are such that a targeted study on convergence and accuracy is required. The availability of a high-performance computing system (a massively parallel IBM Blue Gene/Q) allows us to do this for the first time. We consider gold and silver at optical wavelengths along with three "standard" nanoplasmonic structures: a metal sphere, a metal dipole antenna and a metal bowtie antenna - for the first structure, comparisons with the analytical extinction, scattering, and absorption coefficients based on Mie theory are possible. We consider different ways to set up the simulation domain, we vary the mesh size to very small dimensions, we compare the simple Drude model with the Drude model augmented with a two-critical-points correction, we compare single-precision to double-precision arithmetic, and we compare two staircase meshing techniques, per-component and uniform. We find that the Drude model with the two-critical-points correction (at least) must be used in general. Double-precision arithmetic is needed to avoid round-off errors if highly converged results are sought. Per-component meshing increases the accuracy when complex geometries are modeled, but the uniform mesh works better for structures completely fillable by the Yee cell (e.g., rectangular structures). Generally, a mesh size of 0.25 nm is required to achieve convergence of results to ∼ 1%. We determine how to optimally set up the simulation domain, and in so doing we find that performing scattering calculations within the near field does not necessarily produce large errors but does reduce the computational resources required.
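A sketch of a Drude model augmented with two critical-point terms, the kind of dispersion model referred to above (in the form popularized by Vial and co-workers); the parameter values below are placeholders, not fitted values for gold or silver from the paper.

import numpy as np

def drude_2cp(omega, eps_inf, omega_D, gamma, cps):
    # eps(omega) = eps_inf - omega_D^2/(omega^2 + 1j*gamma*omega)
    #              + sum_i A_i*Omega_i*[exp(1j*phi_i)/(Omega_i - omega - 1j*Gamma_i)
    #                                   + exp(-1j*phi_i)/(Omega_i + omega + 1j*Gamma_i)]
    eps = eps_inf - omega_D**2 / (omega**2 + 1j * gamma * omega)
    for A, Omega, phi, Gamma in cps:
        eps += A * Omega * (np.exp(1j * phi) / (Omega - omega - 1j * Gamma)
                            + np.exp(-1j * phi) / (Omega + omega + 1j * Gamma))
    return eps

# Placeholder parameters (rad/s); real fits for Au/Ag should be taken from the literature.
omega = 2 * np.pi * 3e8 / 600e-9          # angular frequency at 600 nm wavelength
cps = [(0.5, 4.0e15, -0.8, 1.0e15), (1.0, 6.0e15, -0.9, 2.0e15)]
print(drude_2cp(omega, eps_inf=1.1, omega_D=1.3e16, gamma=1.1e14, cps=cps))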
Poirier, B; Ville, J M; Maury, C; Kateb, D
2009-09-01
An analytical three-dimensional bicylindrical model is developed to take into account the effects of the saddle-shaped interface area between an n-Herschel-Quincke tube system and the main duct. Results for the scattering matrix of this system deduced from the model are compared, in the plane-wave frequency domain, against experimental and numerical data and against a one-dimensional model with and without tube length correction. The comparisons are performed with a two-Herschel-Quincke tube configuration having the same diameter as the main duct. In spite of strong assumptions on the acoustic continuity conditions at the interfaces, the model is shown to improve the prediction of the nonperiodic amplitude variations and the frequency localization of the minima of the transmission and reflection coefficients with respect to the one-dimensional model with length correction and a three-dimensional model.
Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H
2016-05-01
The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to estimate simultaneously animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate, first, the abundance and, second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and by employing a nonlinear inversion involving a scattering model-based kernel.
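A minimal sketch, assuming Python/SciPy and a deliberately simplified per-animal backscatter model, of how such a model-based nonlinear inversion could jointly fit abundance, size and tilt from multi-frequency volume backscatter. The function sigma_bs below is a toy stand-in, not the sophisticated scattering model referred to in the abstract, and the frequencies and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def sigma_bs(f_khz, length_mm, tilt_deg):
    """Toy per-animal backscattering cross-section (illustrative only):
    a smooth function of frequency, body length and mean tilt angle."""
    ka = 2 * np.pi * f_khz * 1e3 / 1500.0 * (length_mm * 1e-3)
    return 1e-8 * ka**4 / (1.0 + ka**2) * np.cos(np.radians(tilt_deg))**2

def residuals(params, f_khz, sv_meas):
    n_density, length_mm, tilt_deg = params
    sv_model = n_density * sigma_bs(f_khz, length_mm, tilt_deg)
    return np.log(sv_model) - np.log(sv_meas)      # fit in log space

f_khz = np.array([38.0, 70.0, 120.0, 200.0])       # survey frequencies (kHz)
sv_meas = 500.0 * sigma_bs(f_khz, 30.0, 10.0)      # synthetic "measured" volume backscatter
fit = least_squares(residuals, x0=[100.0, 20.0, 0.0],
                    bounds=([1e-3, 5.0, -45.0], [1e5, 60.0, 45.0]),
                    args=(f_khz, sv_meas))
n_density, length_mm, tilt_deg = fit.x             # jointly estimated parameters
```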
Modeling of Mastcam Visible/Near-Infrared Spectrophotometric Observations at Yellowknife Bay
NASA Astrophysics Data System (ADS)
Johnson, J. R.; Bell, J. F., III; Hayes, A.; Liang, W.; Lemmon, M. T.; Grundy, W. M.; Deen, R. G.
2017-12-01
The Mastcam M-34 imaging system on the Curiosity rover has acquired multispectral images (445, 527, 751, 1012 nm) at multiple times of day at several locations along the traverse, sampling a variety of terrain types [1-4]. The light scattering properties of rocks and soils can be examined quantitatively using radiative transfer models with data extracted from these images [5-7], with the goal of providing information useful for understanding microphysical processes, atmospheric models, and orbital observations. Navcam stereo images were also acquired to compute surface normals and local incidence and emission angles. These can be combined with sky models to correct for diffuse reflectance on individual surface facets prior to photometric modeling. Here we model data sets acquired on Sols 171-184 while the rover was parked at the John Klein drill site in Yellowknife Bay [2]. Regions of interest were extracted from M-34 images of soils, rocks with variable dust cover, and rover tracks to provide data sets with sufficient phase angle coverage to allow Hapke radiative transfer modeling of each unit. Preliminary model results obtained without the atmospheric correction showed that rover tracks exhibited 1-term Henyey-Greenstein (HG) asymmetry parameter values consistent with more forward-scattering surfaces compared to rocks. The 2-term HG scattering parameters (b and c) suggested that soils and dusty rocks were more backscattering than less dusty rocks, consistent with results from the MER Spirit site [5]. Preliminary single-scattering albedo values for soils varied from 0.38 (445 nm) to 0.87 (1012 nm); less dusty rocks varied from 0.59 (445 nm) to 0.81 (1012 nm). Macroscopic roughness values were larger for less dusty rocks (17-22°). Opposition effect width (h) values implied higher porosity (or less uniform grain size distributions) in rocks than in soils. Model results presented at the meeting will incorporate sky models. Future work will include additional Mastcam data sets. [1] Johnson, J., et al., LPSC, 44, abstract #1374, 2013; [2] Johnson, J., et al., 8th Int. Conf. Mars, #1073, 2014; [3] Johnson, J., et al., LPSC, #1424, 2015; [4] Johnson, J., et al., AGU, #P43B-2125, 2015; [5] Johnson, J., et al., JGR, 111, E02S14, 2006; [6] Johnson, J., et al., JGR, 111, E12S16, 2006; [7] Johnson, J., et al., Icarus, 248, 25-71, 2015.
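For reference, one common form of the two-term Henyey-Greenstein phase function used in Hapke-style photometric modeling is sketched below in Python. Sign conventions for the asymmetry parameter b and partition parameter c vary between authors, so this is an illustrative convention rather than necessarily the one used in [5-7], and the parameter values are placeholders.

```python
import numpy as np

def hg2(phase_angle_deg, b, c):
    """Two-term Henyey-Greenstein phase function (one common convention;
    the signs attached to b and c differ between authors)."""
    cosg = np.cos(np.radians(phase_angle_deg))
    back = (1 - b**2) / (1 - 2 * b * cosg + b**2) ** 1.5   # backscattering lobe
    fwd = (1 - b**2) / (1 + 2 * b * cosg + b**2) ** 1.5    # forward-scattering lobe
    return 0.5 * (1 + c) * back + 0.5 * (1 - c) * fwd

phase_angles = np.linspace(0.0, 150.0, 151)
p_soil = hg2(phase_angles, b=0.3, c=0.6)     # placeholder: more backscattering unit
p_track = hg2(phase_angles, b=0.3, c=-0.4)   # placeholder: more forward-scattering unit
```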
An analytically solvable three-body break-up model problem in hyperspherical coordinates
NASA Astrophysics Data System (ADS)
Ancarani, L. U.; Gasaneo, G.; Mitnik, D. M.
2012-10-01
An analytically solvable S-wave model for three-particle break-up processes is presented. The scattering process is represented by a non-homogeneous Coulombic Schrödinger equation in which the driven term is given by a Coulomb-like interaction multiplied by the product of a continuum wave function and a bound state in the particles' coordinates. The closed-form solution is derived in hyperspherical coordinates, leading to an analytic expression for the associated scattering transition amplitude. The proposed scattering model contains most of the difficulties encountered in real three-body scattering problems, e.g., non-separability in the electrons' spherical coordinates and Coulombic asymptotic behavior. Since the coupling of the coordinates is completely different, the model provides an alternative test to that given by the Temkin-Poet model. The knowledge of the analytic solution provides an interesting benchmark to test numerical methods dealing with the double continuum, in particular in the asymptotic regions. A hyperspherical Sturmian approach recently developed for three-body collisional problems is used to reproduce the analytical results to high accuracy. In addition, we generalize the model, generating an approximate wave function that possesses the correct radial asymptotic behavior corresponding to an S-wave three-body Coulomb problem. The model allows us to explore the typical structure of the solution of a three-body driven equation, to identify three regions (the driven, the Coulombic and the asymptotic), and to analyze how far one has to go to extract the transition amplitude.
A model for pion-pion scattering in large-N QCD
NASA Astrophysics Data System (ADS)
Veneziano, G.; Yankielowicz, S.; Onofri, E.
2017-04-01
Following up on recent work by Caron-Huot et al., we consider a generalization of the old Lovelace-Shapiro model as a toy model for ππ scattering satisfying (most of) the properties expected to hold in ('t Hooft's) large-N limit of massless QCD. In particular, the model has asymptotically linear and parallel Regge trajectories at positive t, a positive leading Regge intercept α₀ < 1, and an effective bending of the trajectories in the negative-t region producing a fixed branch point at J = 0 for t < t₀ < 0. Fixed (physical) angle scattering can be tuned to match the power-like behavior (including logarithmic corrections) predicted by perturbative QCD: A(s, t) ∼ s^(-β) log(s)^(-γ) F(θ). Tree-level unitarity (i.e. positivity of residues for all values of s and J) imposes strong constraints on the allowed region in the α₀-β-γ parameter space, which nicely includes a physically interesting region around α₀ = 0.5, β = 2 and γ = 3. The full consistency of the model would require an extension to multi-pion processes, a program we do not undertake in this paper.
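For orientation, the classic Lovelace-Shapiro amplitude that the paper generalizes can be written in one common normalization as shown below; this is quoted as background, not as the paper's generalized amplitude, and the normalization constant β_LS is a placeholder symbol.

```latex
% Classic Lovelace-Shapiro term (one channel pairing; the full amplitude sums
% the (s,t), (s,u) and (t,u) pairings), with linear, parallel trajectories:
\[
  A(s,t) \;=\; \beta_{\rm LS}\,
  \frac{\Gamma\!\bigl(1-\alpha(s)\bigr)\,\Gamma\!\bigl(1-\alpha(t)\bigr)}
       {\Gamma\!\bigl(1-\alpha(s)-\alpha(t)\bigr)},
  \qquad
  \alpha(x) = \alpha_0 + \alpha' x .
\]
% The generalized model bends the trajectories at negative t so that fixed-angle
% scattering reproduces A(s,t) ~ s^{-\beta} (\log s)^{-\gamma} F(\theta).
```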
Propagation of Gaussian wave packets in complex media and application to fracture characterization
NASA Astrophysics Data System (ADS)
Ding, Yinshuai; Zheng, Yingcai; Zhou, Hua-Wei; Howell, Michael; Hu, Hao; Zhang, Yu
2017-08-01
Knowledge of the subsurface fracture networks is critical in probing the tectonic stress states and flow of fluids in reservoirs containing fractures. We propose to characterize fractures using scattered seismic data, based on the theory of local plane-wave multiple scattering in a fractured medium. We construct a localized directional wave packet using point sources on the surface and propagate it toward the targeted subsurface fractures. The wave packet behaves as a local plane wave when interacting with the fractures. The interaction produces multiple scattering of the wave packet that eventually travels up to the surface receivers. The propagation direction and amplitude of the multiply scattered wave can be used to characterize fracture density, orientation and compliance. Two key aspects in this characterization process are the spatial localization and directionality of the wave packet. Here we first show the physical behaviour of a new localized wave, known as the Gaussian Wave Packet (GWP), by examining its analytical solution originally formulated for a homogeneous medium. We then use a numerical finite-difference time-domain (FDTD) method to study its propagation behaviour in heterogeneous media. We find that a GWP can remain localized and directional in space even over a large propagation distance in heterogeneous media. We then propose a method to decompose the recorded seismic wavefield into GWPs based on the reverse-time concept. This method enables us to create virtual seismic records from field shot gathers, as if the source were an incident GWP. Finally, we demonstrate the feasibility of using GWPs for fracture characterization using three numerical examples. For a medium containing fractures, we can reliably invert for the local parameters of multiple fracture sets. Differing from conventional seismic imaging methods such as migration, our fracture characterization method is less sensitive to errors in the background velocity model. For a layered medium containing fractures, our method can correctly recover the fracture density even with an inaccurate velocity model.
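A minimal sketch, in Python with made-up grid and medium parameters, of the kind of spatially localized, directional wavefield the abstract describes. The GWP in the paper is an analytical solution with a complex-valued envelope; the real-valued carrier-times-Gaussian field below only illustrates the two key properties of localization and directionality.

```python
import numpy as np

# 2-D grid (metres)
nx, nz, dx = 400, 400, 5.0
x = np.arange(nx) * dx
z = np.arange(nz) * dx
X, Z = np.meshgrid(x, z, indexing="ij")

# Packet centre, width, carrier frequency/velocity and propagation direction (placeholders)
x0, z0, sigma = 1000.0, 500.0, 60.0
freq, vel = 20.0, 2000.0
theta = np.radians(30.0)

k = 2.0 * np.pi * freq / vel
phase = k * ((X - x0) * np.sin(theta) + (Z - z0) * np.cos(theta))
envelope = np.exp(-((X - x0) ** 2 + (Z - z0) ** 2) / (2.0 * sigma ** 2))
wave_packet = envelope * np.cos(phase)   # could be injected as an FDTD initial condition
```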
NASA Technical Reports Server (NTRS)
Zhang, S. Nan; Zhang, Xiaoling; Wu, Xuebing; Yao, Yangsen; Sun, Xuejun; Xu, Haiguang; Cui, Wei; Chen, Wan; Harmon, B. A.; Robinson, C. R.
1999-01-01
The results of spectral modeling of the data for a series of RXTE observations and four ASCA observations of GRO J1655-40 are presented. The thermal Comptonization model is used instead of the power-law model for the hard component of the two-component continuum spectra. The previously reported dramatic variations of the apparent inner disk radius of GRO J1655-40 during its outburst may be due to inverse Compton scattering in the hot corona. A procedure is developed for making the radiative transfer correction to the fitting parameters from RXTE data, and a more stable inner disk radius is obtained. A practical procedure for determining the color correction (hardening) factor from observational data is proposed and applied to the four ASCA observations of GRO J1655-40. We found that the color correction factor may vary significantly between different observations and that the corrected physical inner disk radius remains reasonably stable over a large range of luminosity and spectral states.
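The color (hardening) correction enters through a relation of the standard multicolor-disc form shown below; this is quoted for orientation under the usual assumptions, and is not a statement of the paper's full procedure, which also includes a radiative transfer correction.

```latex
\[
  T_{\rm col} = f_{\rm col}\, T_{\rm eff},
  \qquad
  R_{\rm in} \;\simeq\; f_{\rm col}^{2}\; r_{\rm in,\,app} ,
\]
```

Since the fitted disc flux constrains the combination $r^{2}T^{4}$, an apparent radius obtained with the color temperature must be rescaled by $f_{\rm col}^{2}$ to recover the physical inner radius, which is why a variable hardening factor translates directly into apparent radius variations.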
NASA Astrophysics Data System (ADS)
Zidane, Shems
This study is based on data acquired with an airborne multi-altitude sensor in July 2004 during a non-standard atmospheric event in the region of Saint-Jean-sur-Richelieu, Quebec. By non-standard atmospheric event we mean an aerosol atmosphere that does not obey the typical monotonic, scale-height variation employed in virtually all atmospheric correction codes. The surfaces imaged during this field campaign included a wide variety of targets: agricultural land, water bodies, urban areas and forests. The multi-altitude approach employed in this campaign allowed us to better understand the altitude-dependent influence of the atmosphere over the array of ground targets and thus to better characterize the perturbation induced by a non-standard (smoke) plume. The transformation of the apparent radiance at 3 different altitudes into apparent reflectance and the insertion of the plume optics into an atmospheric correction model permitted an atmospheric correction of the apparent reflectance at the two higher altitudes. The results showed consistency with the apparent validation reflectances derived from the lowest-altitude radiances. This effectively confirmed the accuracy of our non-standard atmospheric correction approach. The test was particularly relevant at the highest altitude of 3.17 km: the apparent reflectances at this altitude were above most of the plume and therefore represented a good test of our ability to adequately correct for the influence of the perturbation. Standard atmospheric disturbances are obviously taken into account in most atmospheric correction models, but these are based on monotonically decreasing aerosol variations with increasing altitude. When the atmospheric radiation is affected by a plume or a local, non-standard pollution event, one must adapt the existing models to the radiative transfer constraints of the local perturbation and to the reality of the measurable parameters available for ingestion into the model. The main inputs of this study were those normally used in an atmospheric correction: apparent at-sensor radiance and the aerosol optical depth (AOD) acquired using ground-based sun photometry. The procedure we employed made use of a standard atmospheric correction code (CAM5S, for Canadian Modified 5S, which derives from the 5S radiative transfer model in the visible and near infrared); however, we also used other parameters and data to adapt to and correctly model the special atmospheric situation which affected the multi-altitude images acquired during the St. Jean field campaign. We then developed a modeling protocol for these atmospheric perturbations in which auxiliary data were employed to complement our main data set. This allowed for the development of a robust and simple methodology adapted to this atmospheric situation. The auxiliary data, i.e. meteorological data, LIDAR profiles, various satellite images and sun photometer retrievals of the scattering phase function, were sufficient to accurately model the observed plume in terms of an unusual vertical distribution. This distribution was transformed into an aerosol optical depth profile that replaced the standard aerosol optical depth profile employed in the CAM5S atmospheric correction model. Based on this model, a comparison between the apparent ground reflectances obtained after atmospheric correction and validation values of R*(0) obtained from the lowest-altitude data showed that the error between the two was less than 0.01 rms.
This correction was shown to be a significantly better estimate of the surface reflectance than that obtained using the standard atmospheric correction model. Significant differences were nevertheless observed in the non-standard solution: these were mainly caused by the difficulties brought about by the acquisition conditions, by disparities attributable to inconsistencies in the co-sampling / co-registration of different targets from three different altitudes, and possibly by modeling and/or calibration errors. There is accordingly room for improvement in our approach to dealing with such conditions. The modeling and forecasting of such a disturbance is explicitly described in this document; our goal in so doing is to permit the establishment of a better protocol for the acquisition of more suitable supporting data. The originality of this study stems from a new approach for incorporating a plume structure into an operational atmospheric correction model and then demonstrating that this approach is a significant improvement over one that ignores the perturbations in the vertical profile while employing the correct overall AOD. The profile model we employed was simple and robust but captured sufficient plume detail to achieve significant improvements in atmospheric correction accuracy. The overall process of addressing all the problems encountered in the analysis of our aerosol perturbation helped us to build an appropriate methodology for characterizing such events based on freely distributed data accessible to the scientific community. This makes our study adaptable and exportable to other types of non-standard atmospheric events. Keywords: non-standard atmospheric perturbation, multi-altitude apparent radiances, smoke plume, Gaussian plume modelling, radiance fit, AOD, CASI
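The radiance-to-apparent-reflectance transformation mentioned above follows the usual conversion shown in the sketch below (Python). This is the generic formula, not the internals of CAM5S, and the variable names and default Earth-Sun distance are placeholders.

```python
import numpy as np

def apparent_reflectance(L_sensor, E0, sun_zenith_deg, d_au=1.0):
    """Convert at-sensor radiance to apparent (top-of-sensor) reflectance.

    L_sensor       : at-sensor radiance (W m^-2 sr^-1 um^-1)
    E0             : exoatmospheric solar irradiance in the band (W m^-2 um^-1)
    sun_zenith_deg : solar zenith angle (degrees)
    d_au           : Earth-Sun distance (astronomical units)
    """
    mu_s = np.cos(np.radians(sun_zenith_deg))
    return np.pi * L_sensor * d_au**2 / (E0 * mu_s)
```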
Fortmann, Carsten; Wierling, August; Röpke, Gerd
2010-02-01
The dynamic structure factor, which determines the Thomson scattering spectrum, is calculated via an extended Mermin approach. It incorporates the dynamical collision frequency as well as the local-field correction factor. This allows a systematic study of the impact of electron-ion collisions as well as electron-electron correlations due to degeneracy and short-range interaction on the characteristics of the Thomson scattering signal. As an example, the plasmon dispersion and damping width are calculated for a two-component plasma in which the electron subsystem is completely degenerate. Strong deviations of the plasmon resonance position due to the electron-electron correlations are observed at increasing Brueckner parameter r_s. These results are of paramount importance for the interpretation of collective Thomson scattering spectra, as the determination of the free electron density from the plasmon resonance position requires a precise theory of the plasmon dispersion. Implications of different approximations for the electron-electron correlation, i.e., different forms of the one-component local-field correction, are discussed.
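For orientation, one standard form of the (unextended) Mermin dielectric function with collision frequency ν is given below; the extended version used in the paper additionally folds in a dynamical ν(ω) and a local-field correction, so this block is background rather than the paper's working expression.

```latex
\[
  \varepsilon^{\rm M}(k,\omega) \;=\; 1 +
  \frac{\left(1 + i\nu/\omega\right)\left[\varepsilon^{\rm RPA}(k,\omega+i\nu) - 1\right]}
       {1 + \dfrac{i\nu}{\omega}\,
        \dfrac{\varepsilon^{\rm RPA}(k,\omega+i\nu) - 1}{\varepsilon^{\rm RPA}(k,0) - 1}} ,
\]
```

The Thomson scattering spectrum then follows from the fluctuation-dissipation theorem, $S(k,\omega) \propto \operatorname{Im}\!\left[-1/\varepsilon(k,\omega)\right]\big/\left(1-e^{-\hbar\omega/k_{B}T_{e}}\right)$, which is why an accurate dielectric function translates directly into an accurate plasmon resonance position.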
NASA Astrophysics Data System (ADS)
Voronovich, A. G.; Zavorotny, V. U.
2001-07-01
A small-slope approximation (SSA) is used for numerical calculations of the radar backscattering cross section of the ocean surface for both Ku- and C-bands at various wind speeds and incidence angles. Both the lowest order of the SSA and the one that includes the next-order correction to it are considered. The calculations were made by assuming the surface-height spectrum of Elfouhaily et al. for fully developed seas. The empirical scattering models CMOD2-I3 and SASS-II are used for comparison. Theoretical calculations are in good overall agreement with the experimental data represented by the empirical models, with the exception of HH-polarization in the upwind direction. It was assumed that steep breaking waves are responsible for this effect, and the probability density function of large slopes was calculated based on this assumption. The logarithm of this function in the upwind direction can be approximated by a linear combination of wind speed and the appropriate slope. The resulting backscattering cross section for upwind, downwind and cross-wind directions, for winds ranging between 5 and 15 m s^-1, and for both polarizations in both wave bands agrees with the experimental results to within 1-2 dB.
Atmospheric scattering corrections to solar radiometry
NASA Technical Reports Server (NTRS)
Box, M. A.; Deepak, A.
1979-01-01
Whenever a solar radiometer is used to measure direct solar radiation, some diffuse sky radiation invariably enters the detector's field of view along with the direct beam. Therefore, the atmospheric optical depth obtained by the use of Bouguer's transmission law (also called the Beer-Lambert law), which is valid only for direct radiation, needs to be corrected by taking account of the scattered radiation. This paper discusses the correction factors needed to account for the diffuse (i.e., singly and multiply scattered) radiation and the algorithms developed for retrieving aerosol size distribution from such measurements. For a radiometer with a small field of view (half-cone angle of less than 5 deg) and relatively clear skies (optical depths less than 0.4), it is shown that the total diffuse contribution represents approximately 1% of the total intensity.
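A minimal sketch (Python) of how such a correction enters the Bouguer/Beer-Lambert retrieval. The 1% default diffuse fraction echoes the magnitude quoted above but is purely illustrative; in practice the correction factor depends on the field of view, the optical depth and the aerosol phase function, which is the subject of the paper.

```python
import numpy as np

def optical_depth(V, V0, airmass, diffuse_fraction=0.01):
    """Optical depth from a sun-photometer signal via the Bouguer (Beer-Lambert) law.

    V, V0            : measured signal and extraterrestrial calibration constant
    airmass          : relative optical air mass along the slant path
    diffuse_fraction : estimated fraction of the signal due to scattered sky light
                       entering the field of view (illustrative default)
    """
    V_direct = V * (1.0 - diffuse_fraction)      # remove the diffuse contribution
    return -np.log(V_direct / V0) / airmass      # direct-beam transmission law
```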
Imaging model for the scintillator and its application to digital radiography image enhancement.
Wang, Qian; Zhu, Yining; Li, Hongwei
2015-12-28
Digital Radiography (DR) images obtained by an OCD (optical coupling detector) based micro-CT system usually suffer from low contrast. In this paper, a mathematical model is proposed to describe the image formation process in the scintillator. By solving the corresponding inverse problem, the quality of DR images is improved, i.e., higher contrast and spatial resolution are obtained. By analyzing the radiative transfer process of visible light in the scintillator, scattering is identified as the main factor leading to low contrast. Moreover, the associated blurring effect is also considered and described by a point spread function (PSF). Based on these physical processes, the scintillator imaging model is then established. When solving the inverse problem, a pre-correction of the x-ray intensity, a dark-channel-prior-based haze removal technique, and an effective blind deblurring approach are employed. Experiments on a variety of DR images show that the proposed approach can dramatically improve the contrast of DR images and effectively remove the blur. Compared with traditional contrast enhancement methods, such as CLAHE, our method preserves the relative absorption values well.
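As an illustration of the dark-channel-prior step, the sketch below is a grayscale adaptation in Python; the patch size, weighting and single-channel airlight estimate are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze_dark_channel(img, patch=15, omega=0.95, t_min=0.1):
    """Grayscale dark-channel-prior haze removal (illustrative adaptation of the
    colour-image formulation to single-channel DR images)."""
    dark = minimum_filter(img, size=patch)                 # dark channel (local minimum)
    bright = dark >= np.percentile(dark, 99.9)             # haziest pixels
    A = np.percentile(img[bright], 99)                     # "airlight" (scatter level)
    t = 1.0 - omega * dark / A                             # transmission estimate
    return (img - A) / np.maximum(t, t_min) + A            # recover the scatter-reduced image
```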
NASA Astrophysics Data System (ADS)
Itoh, Naoki; Kawana, Youhei; Nozawa, Satoshi; Kohyama, Yasuharu
2001-10-01
We extend the formalism for the calculation of the relativistic corrections to the Sunyaev-Zel'dovich effect for clusters of galaxies and include the multiple scattering effects in the isotropic approximation. We present the results of the calculations by the Fokker-Planck expansion method as well as by the direct numerical integration of the collision term of the Boltzmann equation. The multiple scattering contribution is found to be very small compared with the single scattering contribution. For high-temperature galaxy clusters with k_B T_e ~ 15 keV, the ratio of the two contributions is -0.2 per cent in the Wien region. In the Rayleigh-Jeans region the ratio is -0.03 per cent. Therefore the multiple scattering contribution is safely neglected for the observed galaxy clusters.
NASA Astrophysics Data System (ADS)
Wiseman, S. M.; Arvidson, R. E.; Wolff, M. J.; Smith, M. D.; Seelos, F. P.; Morgan, F.; Murchie, S. L.; Mustard, J. F.; Morris, R. V.; Humm, D.; McGuire, P. C.
2016-05-01
The empirical 'volcano-scan' atmospheric correction is widely applied to martian near infrared CRISM and OMEGA spectra between ∼1000 and ∼2600 nm to remove prominent atmospheric gas absorptions with minimal computational investment. This correction method employs division by a scaled empirically-derived atmospheric transmission spectrum that is generated from observations of the martian surface in which different path lengths through the atmosphere were measured and transmission calculated using the Beer-Lambert Law. Identifying and characterizing both artifacts and residual atmospheric features left by the volcano-scan correction is important for robust interpretation of CRISM and OMEGA volcano-scan corrected spectra. In order to identify and determine the cause of spectral artifacts introduced by the volcano-scan correction, we simulated this correction using a multiple scattering radiative transfer algorithm (DISORT). Simulated transmission spectra that are similar to actual CRISM- and OMEGA-derived transmission spectra were generated from modeled Olympus Mons base and summit spectra. Results from the simulations were used to investigate the validity of assumptions inherent in the volcano-scan correction and to identify artifacts introduced by this method of atmospheric correction. We found that the most prominent artifact, a bowl-shaped feature centered near 2000 nm, is caused by the inaccurate assumption that absorption coefficients of CO2 in the martian atmosphere are independent of column density. In addition, spectral albedo and slope are modified by atmospheric aerosols. Residual atmospheric contributions that are caused by variable amounts of dust aerosols, ice aerosols, and water vapor are characterized by the analysis of CRISM volcano-scan corrected spectra from the same location acquired at different times under variable atmospheric conditions.
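A schematic version of such a division-based correction is sketched below in Python. The single CO2 band, the continuum wavelengths and the Beer-Lambert band-depth scaling are simplifications of the actual CRISM/OMEGA processing and are illustrative only; the point is to show where the column-density-independence assumption enters.

```python
import numpy as np

def volcano_scan_correct(spectrum, transmission, wavelengths_nm,
                         band_nm=2007.0, continuum_nm=(1980.0, 2060.0)):
    """Divide by the atmospheric transmission spectrum raised to a power chosen
    so that the CO2 band near 2000 nm vanishes from the corrected spectrum."""
    i_band = np.argmin(np.abs(wavelengths_nm - band_nm))
    i_c1 = np.argmin(np.abs(wavelengths_nm - continuum_nm[0]))
    i_c2 = np.argmin(np.abs(wavelengths_nm - continuum_nm[1]))
    cont_s = 0.5 * (spectrum[i_c1] + spectrum[i_c2])
    cont_t = 0.5 * (transmission[i_c1] + transmission[i_c2])
    # Beer-Lambert-style exponent that matches the observed band depth; this step
    # assumes the absorption scales with path length independently of column density,
    # which is the assumption identified above as the source of the 2000 nm artifact.
    beta = np.log(spectrum[i_band] / cont_s) / np.log(transmission[i_band] / cont_t)
    return spectrum / transmission ** beta
```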
SU-G-TeP1-08: LINAC Head Geometry Modeling for Cyber Knife System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, B; Li, Y; Liu, B
Purpose: Knowledge of the LINAC head geometry is critical for model-based dose calculation algorithms. However, the geometries are difficult to measure precisely. The purpose of this study is to develop LINAC head models for the Cyber Knife system (CKS). Methods: For CKS, the commissioning data were measured in water at 800 mm SAD. The measured full width at half maximum (FWHM) for each cone was found to be greater than the nominal value; this was further confirmed by additional film measurements in air. Diameter correction, cone shift and source shift models (DCM, CSM and SSM) are proposed to account for the differences. In DCM, a cone-specific correction is applied. For CSM and SSM, a single shift is applied to the cone or source physical position. All three models were validated with an in-house developed pencil beam dose calculation algorithm, and further evaluated by the collimator scatter factor (Sc) correction. Results: The mean square error (MSE) between the nominal diameter and the FWHM derived from the commissioning data and from the in-air measurement is 0.54 mm and 0.44 mm, respectively, with the discrepancy increasing with cone size. The optimal shift for CSM and SSM is found to be 9 mm upward and 18 mm downward, respectively. The MSE in FWHM is reduced to 0.04 mm for DCM and 0.14 mm for CSM (SSM). Both DCM and CSM result in the same set of Sc values. Combining all cones at SAD 600-1000 mm, the average deviation from 1 in Sc of DCM (CSM) and SSM is 2.6% and 2.2%, respectively, and is reduced to 0.9% and 0.7% for cones with diameter greater than 15 mm. Conclusion: We developed three geometrical models for CKS. All models can handle the discrepancy between vendor specifications and commissioning data, and SSM has the best performance for the Sc correction. The study also validated that a point source can be used in CKS dose calculation algorithms.
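The three geometric models differ only in where the correction is applied, which can be seen from a similar-triangles projection of the cone aperture. The Python sketch below illustrates this; the sign conventions for the shifts and any nominal source-to-cone distance are assumptions for illustration, not vendor or study values.

```python
def projected_diameter(cone_diam_mm, scd_mm, sad_mm,
                       cone_shift_mm=0.0, source_shift_mm=0.0):
    """Projected field diameter at distance `sad_mm` from the source, by similar triangles.

    cone_diam_mm    : physical (or DCM-corrected) cone aperture diameter
    scd_mm          : nominal source-to-cone distance (assumed value, for illustration)
    cone_shift_mm   : shift of the cone plane along the beam axis (CSM)
    source_shift_mm : shift of the effective point source along the beam axis (SSM)
    """
    eff_scd = (scd_mm + cone_shift_mm) - source_shift_mm   # effective source-to-cone distance
    eff_sad = sad_mm - source_shift_mm                     # effective source-to-plane distance
    return cone_diam_mm * eff_sad / eff_scd

# Example: how a cone-plane or source shift rescales the projected field size at 800 mm.
nominal = projected_diameter(25.0, 400.0, 800.0)
shifted = projected_diameter(25.0, 400.0, 800.0, cone_shift_mm=9.0)
```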
NASA Astrophysics Data System (ADS)
Mangla, Rohit; Kumar, Shashi; Nandy, Subrata
2016-05-01
SAR and LiDAR remote sensing have already shown the potential of active sensors for forest parameter retrieval. A SAR sensor in its fully polarimetric mode has the advantage of retrieving the scattering properties of different components of the forest structure, and LiDAR has the capability to measure structural information with very high accuracy. This study focused on retrieval of forest aboveground biomass (AGB) using Terrestrial Laser Scanner (TLS) based point clouds and the scattering properties of forest vegetation obtained from decomposition modelling of RISAT-1 fully polarimetric SAR data. TLS data were acquired for 14 plots of the Timli forest range, Uttarakhand, India. The forest area is dominated by Sal trees, and random sampling with a plot size of 0.1 ha (31.62 m × 31.62 m) was adopted for TLS and field data collection. RISAT-1 data were processed to retrieve SAR-based variables, and TLS point cloud based 3D imaging was performed to retrieve LiDAR-based variables. Surface scattering, double-bounce scattering, volume scattering, helix scattering and wire scattering were the SAR-based variables retrieved from polarimetric decomposition. Tree heights and stem diameters were used as LiDAR-based variables, retrieved using single-tree vertical height and least-squares circle fit methods, respectively. All the variables obtained for the forest plots were used as inputs to a machine-learning-based random forest regression model, which was developed in this study for forest AGB estimation. The modelled forest AGB showed reliable accuracy (RMSE = 27.68 t/ha), and a good coefficient of determination (0.63) was obtained through the linear regression between modelled AGB and field-estimated AGB. The sensitivity analysis showed that the model was most sensitive to the major contributing variables (stem diameter and volume scattering), and these variables were measured from two different remote sensing techniques. This study strongly recommends the integration of SAR and LiDAR data for forest AGB estimation.
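A sketch of the kind of random forest regression described above, using Python/scikit-learn. The feature matrix, plot count and AGB values below are synthetic placeholders, and leave-one-out cross-validation is one reasonable choice for 14 plots rather than necessarily the study's validation scheme.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Hypothetical plot-level feature table: five SAR decomposition powers plus two
# TLS-derived structure metrics (columns are placeholders, values are synthetic).
rng = np.random.default_rng(0)
X = rng.random((14, 7))            # [surface, double-bounce, volume, helix, wire,
                                   #  mean tree height, mean stem diameter]
y = 50.0 + 300.0 * rng.random(14)  # field-estimated AGB (t/ha), synthetic

model = RandomForestRegressor(n_estimators=500, random_state=0)
y_pred = cross_val_predict(model, X, y, cv=LeaveOneOut())   # per-plot prediction
rmse = np.sqrt(np.mean((y_pred - y) ** 2))                  # plot-level RMSE

model.fit(X, y)
importances = model.feature_importances_   # sensitivity of the model to each variable
```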
Generalization of the Hartree-Fock approach to collision processes
NASA Astrophysics Data System (ADS)
Hahn, Yukap
1997-06-01
The conventional Hartree and Hartree-Fock approaches for bound states are generalized to treat atomic collision processes. All the single-particle orbitals, for both bound and scattering states, are determined simultaneously by requiring full self-consistency. This generalization is achieved by introducing two Ansätze: (a) the weak asymptotic boundary condition, which maintains the correct scattering energy and target orbitals with the correct number of nodes, and (b) square-integrable amputated scattering functions to generate self-consistent field (SCF) potentials for the target orbitals. The exact initial target and final-state asymptotic wave functions are not required and thus need not be specified a priori, as they are determined simultaneously by the SCF iterations. To check the asymptotic behavior of the solution, the theory is applied to elastic electron-hydrogen scattering at low energies. The solution is found to be stable, and the weak asymptotic condition is sufficient to produce the correct scattering amplitudes. The SCF potential for the target orbital shows strong penetration by the projectile electron during the collision, but the exchange term tends to restore the original form. Potential applications of this extension are discussed, including the treatment of ionization and shake-off processes.
Spectral estimation for characterization of acoustic aberration.
Varslot, Trond; Angelsen, Bjørn; Waag, Robert C
2004-07-01
Spectral estimation based on acoustic backscatter from a motionless stochastic medium is described for characterization of aberration in ultrasonic imaging. The underlying assumptions for the estimation are: the correlation length of the medium is short compared to the length of the transmitted acoustic pulse, an isoplanatic region of sufficient size exists around the focal point, and the backscatter can be modeled as an ergodic stochastic process. The motivation for this work is ultrasonic imaging with aberration correction. Measurements were performed using a two-dimensional array system with 80 x 80 transducer elements and an element pitch of 0.6 mm. The f-number for the measurements was 1.2 and the center frequency was 3.0 MHz with a 53% bandwidth. The relative phase of the aberration was extracted from estimated cross spectra using a robust least-mean-square-error method based on an orthogonal expansion of the phase differences of neighboring waveforms as a function of frequency. Estimates of cross-spectrum phase from measurements of random scattering through a tissue-mimicking aberrator have confidence bands approximately +/- 5 degrees wide. Both phase and magnitude are in good agreement with a reference characterization obtained from a point scatterer.
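A simplified sketch in Python of the cross-spectral phase estimation step. The paper fits an orthogonal expansion of the phase differences versus frequency with a robust least-mean-square-error method, whereas this sketch simply averages neighbour cross spectra over an assumed frequency band; the band limits and array layout are placeholders.

```python
import numpy as np

def neighbor_phase_profile(rf, fs, band=(2.2e6, 3.8e6)):
    """Relative aberration phase across elements from neighbour cross spectra.

    rf   : array of shape (n_elements, n_samples), backscattered RF per element
    fs   : sampling frequency (Hz)
    band : frequency band (Hz) over which the cross spectra are averaged
    """
    spec = np.fft.rfft(rf, axis=1)
    freqs = np.fft.rfftfreq(rf.shape[1], d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    cross = spec[1:, in_band] * np.conj(spec[:-1, in_band])   # neighbour cross spectra
    dphi = np.angle(np.mean(cross, axis=1))                    # mean phase difference per pair
    return np.concatenate([[0.0], np.cumsum(dphi)])            # relative phase profile
```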
NASA Technical Reports Server (NTRS)
Joseph, Alicia T.; O'Neil, P. E.; vanderVelde, R.; Gish, T.
2008-01-01
A methodology is presented to correct backscatter (sigma(sup 0)) observations for the effect of vegetation. The proposed methodology is based on the concept that the ratio of the surface scattering to the total amount of scattering (sigma(sup 0)(sub soil)/sigma(sup 0)) is affected only by the vegetation and can be described as a function of the vegetation water content. Backscatter observations sigma(sup 0)(sub soil) from the soil are not influenced by vegetation. Under bare soil conditions, (sigma(sup 0)(sub soil)/sigma(sup 0)) equals 1. Under low to moderate biomass and soil moisture conditions, vegetation affects the observed sigma(sup 0) through absorption of the surface scattering and through direct scattering by the vegetation itself. Therefore, the contribution of the surface scattering is smaller than the observed total amount of scattering and decreases as the biomass increases. For dense canopies, scattering interactions between the soil surface and vegetation elements (e.g. leaves and stems) also become significant. Because these higher-order scattering mechanisms are influenced by the soil surface, an increase in (sigma(sup 0)(sub soil)/sigma(sup 0)) may be observed as the biomass increases under densely vegetated conditions. This methodology is applied within the framework of a time-series-based approach for the retrieval of soil moisture. The data set used for this investigation was collected during a campaign conducted at USDA's Optimizing Production Inputs for Economic and Environmental Enhancement (OPE-3) experimental site in Beltsville, Maryland (USA). This campaign took place during the corn growth cycle from May 10th to October 2nd, 2002. In this period the corn crops reached a vegetation water content of 5.1 kg m(exp -2) at peak biomass, and soil moisture varied between 0.00 and 0.26 cubic cm/cubic cm. One of the deployed microwave instruments was a multi-frequency (C-band (4.75 GHz) and L-band (1.6 GHz)) quad-polarized (HH, HV, VV, VH) radar mounted on a 20 meter long boom. In the OPE-3 field campaign, radar observations were collected once a week at nominal times of 8 am, 10 am, 12 noon and 2 pm. During each data run the radar acquired sixty independent measurements within an azimuth of 120 degrees from a boom height of 12.2 m and at three different incidence angles (15, 35, and 55 degrees). The sixty observations were averaged to provide one backscatter value for the study area, and its accuracy is estimated to be ±1.0 dB. For this investigation the C-band observations have been used. Application of the proposed methodology to the selected data set showed a well-defined relationship between (sigma(sup 0)(sub soil)/sigma(sup 0)) and the vegetation water content. It is found that this relationship can be described with two experimentally determined parameters, which depend on the sensing configuration (e.g. incidence angle and polarization). Through application of the proposed vegetation correction methodology and the obtained parameterizations, the soil moisture retrieval accuracy within the framework of a time-series-based approach is improved from 0.033 to 0.032 cubic cm/cubic cm, from 0.049 to 0.033 cubic cm/cubic cm and from 0.079 to 0.047 cubic cm/cubic cm for incidence angles of 15, 35 and 55 degrees, respectively. The improvement in soil moisture retrieval due to vegetation correction is greater at larger incidence angles (due to the increased path length and larger vegetation effects on the surface signal at larger angles).
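A minimal Python sketch of how such a two-parameter ratio could be used to peel the vegetation effect off the observed backscatter. The abstract does not specify the authors' functional form, so the exponential decay below is only one simple choice consistent with "two experimentally determined parameters", and the parameter symbols a and b are placeholders.

```python
import numpy as np

def soil_fraction(vwc, a, b):
    """Ratio sigma0_soil / sigma0 as a two-parameter function of vegetation water
    content (kg m^-2). An exponential decay is used purely for illustration."""
    return a * np.exp(-b * vwc)

def soil_backscatter_db(sigma0_total_db, vwc, a, b):
    """Vegetation-corrected soil backscatter (dB) from the observed total backscatter."""
    ratio = soil_fraction(vwc, a, b)
    return sigma0_total_db + 10.0 * np.log10(ratio)

# Example: correct an observed -12 dB C-band value at 2.5 kg m^-2 of vegetation water.
sigma0_soil = soil_backscatter_db(-12.0, vwc=2.5, a=1.0, b=0.3)
```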
On the far-field computation of acoustic radiation forces.
Martin, P A
2017-10-01
It is known that the steady acoustic radiation force on a scatterer due to incident time-harmonic waves can be calculated by evaluating certain integrals of velocity potentials over a sphere surrounding the scatterer. The goal is to evaluate these integrals using far-field approximations and appropriate limits. Previous derivations are corrected, clarified, and generalized. Similar corrections are made to textbook derivations of optical theorems.