Sample records for scatter correction based

  1. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method.

    PubMed

    Shidahara, Miho; Watabe, Hiroshi; Kim, Kyeong Min; Kato, Takashi; Kawatsu, Shoji; Kato, Rikio; Yoshimura, Kumiko; Iida, Hidehiro; Ito, Kengo

    2005-10-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I(μb)AC with Chang's attenuation correction factor. The scatter component image is estimated by convolving I(μb)AC with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine.
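
    The convolve-then-subtract idea behind the IBSC method can be sketched in a few lines. The snippet below is an illustrative simplification, not the authors' implementation: a Gaussian stands in for the scatter function, and the scatter fraction is a single number rather than the image-based scatter fraction function used in the paper.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def ibsc_correct(img_ac, kernel_sigma=2.0, scatter_fraction=0.3):
        """Illustrative image-based scatter correction.

        img_ac           : attenuation-corrected SPECT image (2D or 3D array)
        kernel_sigma     : width of the stand-in Gaussian scatter function (voxels)
        scatter_fraction : assumed global scatter fraction
        """
        # Estimate the scatter component by blurring the image and scaling it.
        scatter_estimate = scatter_fraction * gaussian_filter(img_ac, kernel_sigma)
        # Subtract the estimate; negative voxels are clipped to zero.
        return np.clip(img_ac - scatter_estimate, 0.0, None)
    ```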

  2. Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.

    PubMed

    Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua

    2018-02-01

    Multi-source interior computed tomography (CT) has great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises image quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of requiring no hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically and was used to calculate X-ray scattering signals in both the forward direction and the cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experiments that were designed to emulate image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images converged quickly toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error in the regions of interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, respectively, and the contrast-to-noise ratio in those ROIs increased by up to 44.3% and up to 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.

  3. Quantitative Evaluation of 2 Scatter-Correction Techniques for 18F-FDG Brain PET/MRI in Regard to MR-Based Attenuation Correction.

    PubMed

    Teuho, Jarmo; Saunavaara, Virva; Tolvanen, Tuula; Tuokkola, Terhi; Karlsson, Antti; Tuisku, Jouni; Teräs, Mika

    2017-10-01

    In PET, corrections for photon scatter and attenuation are essential for visual and quantitative consistency. MR attenuation correction (MRAC) is generally conducted by image segmentation and assignment of discrete attenuation coefficients, which offer limited accuracy compared with CT attenuation correction. Potential inaccuracies in MRAC may affect scatter correction, because the attenuation image (μ-map) is used in single scatter simulation (SSS) to calculate the scatter estimate. We assessed the impact of MRAC on scatter correction using 2 scatter-correction techniques and 3 μ-maps for MRAC. Methods: The tail-fitted SSS (TF-SSS) and a Monte Carlo-based single scatter simulation (MC-SSS) algorithm implementation on the Philips Ingenuity TF PET/MR were used with 1 CT-based and 2 MR-based μ-maps. Data from 7 subjects were used in the clinical evaluation, and a phantom study using an anatomic brain phantom was conducted. Scatter-correction sinograms were evaluated for each scatter-correction method and μ-map. Absolute image quantification was investigated with the phantom data. Quantitative assessment of PET images was performed by volume-of-interest and ratio image analysis. Results: MRAC did not result in large differences in scatter algorithm performance, especially with TF-SSS. Scatter sinograms and scatter fractions did not reveal large differences regardless of the μ-map used. TF-SSS showed slightly higher absolute quantification. The differences in volume-of-interest analysis between TF-SSS and MC-SSS were 3% at maximum in the phantom and 4% in the patient study. Both algorithms showed excellent correlation with each other, with no visual differences between PET images. MC-SSS showed a slight dependency on the μ-map used, with a difference of 2% on average and 4% at maximum when a μ-map without bone was used. Conclusion: The effect of different MR-based μ-maps on the performance of scatter correction was minimal in non-time-of-flight 18F-FDG PET/MR brain imaging. The SSS algorithm was not affected significantly by MRAC. The performance of the MC-SSS algorithm is comparable but not superior to TF-SSS, warranting further investigations of algorithm optimization and performance with different radiotracers and time-of-flight imaging.

  4. Investigation on Beam-Blocker-Based Scatter Correction Method for Improving CT Number Accuracy

    NASA Astrophysics Data System (ADS)

    Lee, Hoyeon; Min, Jonghwan; Lee, Taewon; Pua, Rizza; Sabir, Sohail; Yoon, Kown-Ha; Kim, Hokyung; Cho, Seungryong

    2017-03-01

    Cone-beam computed tomography (CBCT) is gaining widespread use in various medical and industrial applications but suffers from a substantially larger amount of scatter than conventional diagnostic CT, resulting in relatively poor image quality. Various methods that can reduce and/or correct for the scatter in CBCT have therefore been developed. A scatter correction method that uses a beam-blocker has been considered a direct measurement-based approach, providing accurate scatter estimation from the data in the shadows of the beam-blocker. To the best of our knowledge, there has been no report on the significance of the scatter from the beam-blocker itself in such correction methods. In this paper, we identified the scatter from the beam-blocker that is detected in object-free projection data, investigated its influence on the accuracy of reconstructed CBCT images, and developed a scatter correction scheme that accounts for this scatter as well as the scatter from the scanned object.

  5. Scatter measurement and correction method for cone-beam CT based on single grating scan

    NASA Astrophysics Data System (ADS)

    Huang, Kuidong; Shi, Wenlong; Wang, Xinyu; Dong, Yin; Chang, Taoqi; Zhang, Hua; Zhang, Dinghua

    2017-06-01

    In cone-beam computed tomography (CBCT) systems based on flat-panel detector imaging, the presence of scatter significantly reduces the quality of the reconstructed slices. Based on the concept of collimation, this paper presents a scatter measurement and correction method based on a single grating scan. First, according to the characteristics of CBCT imaging, the scan method using a single grating and the design requirements of the grating are analyzed and determined. Second, by analyzing the composition of the object projection images and the object-and-grating projection images, a processing method for the scatter image at a single projection angle is proposed. In addition, to avoid an additional scan, this paper proposes an angle-interpolation method for scatter images to reduce the scan cost. Finally, the experimental results show that the scatter images obtained by this method are accurate and reliable, and the effect of scatter correction is obvious. When the additional object-and-grating projection images are collected and interpolated at intervals of 30 deg, the scatter correction error of the slices can still be controlled within 3%.
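
    The angle-interpolation step described above can be illustrated with a short sketch: scatter images measured with the grating at sparse angles (e.g., every 30 deg) are interpolated pixel-by-pixel onto the full set of projection angles. The function below is a minimal stand-in using linear interpolation; it ignores the circular nature of the angle axis and the actual grating processing.

    ```python
    import numpy as np

    def interpolate_scatter(sparse_angles_deg, sparse_scatter, all_angles_deg):
        """Interpolate scatter images measured at sparse angles to all angles.

        sparse_angles_deg : (K,) increasing angles at which scatter was measured
        sparse_scatter    : (K, H, W) scatter images at those angles
        all_angles_deg    : (N,) angles of the full scan
        """
        sparse_scatter = np.asarray(sparse_scatter, dtype=float)
        K, H, W = sparse_scatter.shape
        flat = sparse_scatter.reshape(K, -1)
        out = np.empty((len(all_angles_deg), H * W))
        for j in range(H * W):  # 1D interpolation over angle, per detector pixel
            out[:, j] = np.interp(all_angles_deg, sparse_angles_deg, flat[:, j])
        return out.reshape(len(all_angles_deg), H, W)
    ```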

  6. Optimization-based scatter estimation using primary modulation for computed tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yi; Ma, Jingchen; Zhao, Jun, E-mail: junzhao

    Purpose: Scatter reduces the image quality in computed tomography (CT), but scatter correction remains a challenge. A previously proposed primary modulation method simultaneously obtains the primary and scatter in a single scan. However, separating the scatter and primary in primary modulation is challenging because it is an underdetermined problem. In this study, an optimization-based scatter estimation (OSE) algorithm is proposed to estimate and correct scatter. Methods: In the concept of primary modulation, the primary is modulated, but the scatter remains smooth by inserting a modulator between the x-ray source and the object. In the proposed algorithm, an objective function is designed for separating the scatter and primary. Prior knowledge is incorporated in the optimization-based framework to improve the accuracy of the estimation: (1) the primary is always positive; (2) the primary is locally smooth and the scatter is smooth; (3) the location of the penumbra can be determined; and (4) the scatter-contaminated data provide knowledge about which part is smooth. Results: The simulation study shows that the edge-preserving weighting in OSE improves the estimation accuracy near the object boundary. The simulation study also demonstrates that OSE outperforms the two existing primary modulation algorithms for most regions of interest in terms of CT number accuracy and noise. The proposed method was tested on a clinical cone beam CT, demonstrating that OSE corrects the scatter even when the modulator is not accurately registered. Conclusions: The proposed OSE algorithm improves the robustness and accuracy of scatter estimation and correction. This method is promising for scatter correction of various kinds of x-ray imaging modalities, such as x-ray radiography, cone beam CT, and fourth-generation CT.
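
    To make the optimization idea concrete, the toy below separates a 1D measured profile into a smooth scatter component and a non-negative primary using two of the priors listed above (primary positivity and scatter smoothness). It is deliberately simplified, does not model the primary modulation itself, and the weak anchoring term is an assumption of this sketch rather than part of the published objective.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def estimate_scatter_1d(measured, smooth_weight=10.0):
        """Toy smooth-scatter estimation from measured = primary + scatter."""
        m = np.asarray(measured, dtype=float)

        def objective(s):
            roughness = np.sum(np.diff(s, 2) ** 2)      # scatter should be smooth
            anchor = 1e-3 * np.sum((s - 0.5 * m) ** 2)  # weak prior (sketch only)
            return smooth_weight * roughness + anchor

        bounds = [(0.0, mi) for mi in m]                # keeps primary = m - s >= 0
        res = minimize(objective, x0=0.3 * m, bounds=bounds, method="L-BFGS-B")
        return res.x                                    # estimated scatter profile
    ```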

  7. Impact of reconstruction parameters on quantitative I-131 SPECT

    NASA Astrophysics Data System (ADS)

    van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.

    2016-07-01

    Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high-energy photons, however, render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods for these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs and (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast-to-noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction. The quantification error relative to a dose-calibrator-derived measurement was found to be <1%, -26% and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on the quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since the factor may be patient dependent. Monte Carlo based scatter correction including accurately simulated CDR modelling is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.
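
    The TEW estimate referred to here and in several other records is a simple trapezoid formula; the sketch below shows that formula together with the optional weighting factor k mentioned above. The window widths are placeholders, not the acquisition settings of this study.

    ```python
    def tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_peak):
        """Triple-energy-window scatter estimate for the photopeak window.

        c_lower, c_upper : counts in the narrow windows below/above the photopeak
        w_lower, w_upper : widths (keV) of those windows
        w_peak           : width (keV) of the photopeak window
        """
        # Trapezoid approximation of the scatter lying under the photopeak.
        return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

    def tew_primary(c_peak, scatter, k=1.0):
        """Primary counts after subtracting the (optionally weighted) TEW estimate."""
        return max(c_peak - k * scatter, 0.0)
    ```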

  8. Guide-star-based computational adaptive optics for broadband interferometric tomography

    PubMed Central

    Adie, Steven G.; Shemonski, Nathan D.; Graf, Benedikt W.; Ahmad, Adeel; Scott Carney, P.; Boppart, Stephen A.

    2012-01-01

    We present a method for the numerical correction of optical aberrations based on indirect sensing of the scattered wavefront from point-like scatterers (“guide stars”) within a three-dimensional broadband interferometric tomogram. This method enables the correction of high-order monochromatic and chromatic aberrations utilizing guide stars that are revealed after numerical compensation of defocus and low-order aberrations of the optical system. Guide-star-based aberration correction in a silicone phantom with sparse sub-resolution-sized scatterers demonstrates improvement of resolution and signal-to-noise ratio over a large isotome. Results in highly scattering muscle tissue showed improved resolution of fine structure over an extended volume. Guide-star-based computational adaptive optics expands upon the use of image metrics for numerically optimizing the aberration correction in broadband interferometric tomography, and is analogous to phase-conjugation and time-reversal methods for focusing in turbid media. PMID:23284179

  9. Evaluation of simulation-based scatter correction for 3-D PET cardiac imaging

    NASA Astrophysics Data System (ADS)

    Watson, C. C.; Newport, D.; Casey, M. E.; deKemp, R. A.; Beanlands, R. S.; Schmand, M.

    1997-02-01

    Quantitative imaging of the human thorax poses one of the most difficult challenges for three-dimensional (3-D) (septaless) positron emission tomography (PET), due to the strong attenuation of the annihilation radiation and the large contribution of scattered photons to the data. In [18F]fluorodeoxyglucose (FDG) studies of the heart with the patient's arms in the field of view, the contribution of scattered events can exceed 50% of the total detected coincidences. Accurate correction for this scatter component is necessary for meaningful quantitative image analysis and tracer kinetic modeling. For this reason, the authors have implemented a single-scatter simulation technique for scatter correction in positron volume imaging. Here, they describe this algorithm and present scatter correction results from human and chest phantom studies.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xi; Mou, Xuanqin; Nishikawa, Robert M.

    Purpose: Small calcifications are often the earliest and the main indicator of breast cancer. Dual-energy digital mammography (DEDM) has been considered a promising technique for improving the detectability of calcifications since it can be used to suppress the contrast between adipose and glandular tissues of the breast. X-ray scatter leads to erroneous calculations of the DEDM image. Although the pinhole-array interpolation method can estimate scattered radiation, it requires extra exposures to measure the scatter and apply the correction. The purpose of this work is to design an algorithmic method for scatter correction in DEDM without extra exposures. Methods: In this paper, a scatter correction method for DEDM was developed based on the knowledge that scattered radiation has small spatial variation and that the majority of pixels in a mammogram are noncalcification pixels. The scatter fraction was estimated in the DEDM calculation and the measured scatter fraction was used to remove scatter from the image. The scatter correction method was implemented on a commercial full-field digital mammography system with a breast-tissue-equivalent phantom and a calcification phantom. The authors also implemented the pinhole-array interpolation scatter correction method on the system. Phantom results for both methods are presented and discussed. The authors compared the background DE calcification signals and the contrast-to-noise ratio (CNR) of calcifications in the three DE calcification images: the image without scatter correction, the image with scatter correction using the pinhole-array interpolation method, and the image with scatter correction using the authors' algorithmic method. Results: The authors' results show that the resultant background DE calcification signal can be reduced. The root-mean-square of the background DE calcification signal of 1962 μm with scatter-uncorrected data was reduced to 194 μm after scatter correction using the authors' algorithmic method. The range of background DE calcification signals obtained with scatter-uncorrected data was reduced by 58% when using scatter-corrected data from the algorithmic method. With the scatter-correction algorithm and denoising, the minimum visible calcification size can be reduced from 380 to 280 μm. Conclusions: When applying the proposed algorithmic scatter correction to images, the resultant background DE calcification signals can be reduced and the CNR of calcifications can be improved. This method has similar or even better performance than the pinhole-array interpolation method in scatter correction for DEDM; moreover, this method is convenient and requires no extra exposure to the patient. Although the proposed scatter correction method is effective, it was validated only with a 5-cm-thick phantom with calcifications and a homogeneous background. The method should be tested on structured backgrounds to more accurately gauge its effectiveness.

  11. TH-A-18C-04: Ultrafast Cone-Beam CT Scatter Correction with GPU-Based Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Y; Southern Medical University, Guangzhou; Bai, T

    2014-06-15

    Purpose: Scatter artifacts severely degrade the image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to automatically finish the whole process, including both scatter correction and reconstruction, within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using raw projection data; 2) rigid registration of the planning CT to the FDK result; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using a GPU to accelerate the MC photon simulations, we also use a small number of photons and a down-sampled CT image in the simulation to further reduce computation time. A novel denoising algorithm is used to eliminate MC scatter noise caused by the low photon numbers. The method is validated on head-and-neck cases with simulated and clinical data. Results: We studied the impact of photon histories and volume down-sampling factors on the accuracy of scatter estimation. A Fourier analysis was conducted to show that scatter images calculated at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated case with a resolution of 512×512×100, we simulated 10M photons per angle. The total computation time was 23.77 seconds on an Nvidia GTX Titan GPU. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Similar results were found for a real patient case. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. The whole process of scatter correction and reconstruction is accomplished within 30 seconds. This study is supported in part by NIH (1R01CA154747-01) and the Core Technology Research in Strategic Emerging Industry, Guangdong, China (2011A081402003).
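
    Two ingredients of this pipeline lend themselves to a compact illustration: because scatter varies slowly across the detector, a Monte Carlo scatter projection computed with few photons can be heavily low-pass filtered before being subtracted from the raw projection. The sketch below uses a plain Gaussian filter as a stand-in for the paper's dedicated denoising algorithm.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def denoise_mc_scatter(noisy_scatter, sigma=8.0):
        """Smooth a low-photon-count Monte Carlo scatter projection."""
        return gaussian_filter(np.asarray(noisy_scatter, dtype=float), sigma)

    def subtract_scatter(raw_projection, scatter_estimate, floor=1.0):
        """Remove the scatter estimate from a raw projection, keeping counts positive."""
        return np.maximum(raw_projection - scatter_estimate, floor)
    ```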

  12. Characterization and correction of cupping effect artefacts in cone beam CT

    PubMed Central

    Hunter, AK; McDavid, WD

    2012-01-01

    Objective The purpose of this study was to demonstrate and correct the cupping effect artefact that occurs owing to the presence of beam hardening and scatter radiation during image acquisition in cone beam CT (CBCT). Methods A uniform aluminium cylinder (6061) was used to demonstrate the cupping effect artefact on the Planmeca Promax 3D CBCT unit (Planmeca OY, Helsinki, Finland). The cupping effect was studied using a line profile plot of the grey level values using ImageJ software (National Institutes of Health, Bethesda, MD). A hardware-based correction method using copper pre-filtration was used to address this artefact caused by beam hardening and a software-based subtraction algorithm was used to address scatter contamination. Results The hardware-based correction used to address the effects of beam hardening suppressed the cupping effect artefact but did not eliminate it. The software-based correction used to address the effects of scatter resulted in elimination of the cupping effect artefact. Conclusion Compensating for the presence of beam hardening and scatter radiation improves grey level uniformity in CBCT. PMID:22378754

  13. A fast and pragmatic approach for scatter correction in flat-detector CT using elliptic modeling and iterative optimization

    NASA Astrophysics Data System (ADS)

    Meyer, Michael; Kalender, Willi A.; Kyriakou, Yiannis

    2010-01-01

    Scattered radiation is a major source of artifacts in flat detector computed tomography (FDCT) due to the increased irradiated volumes. We propose a fast projection-based algorithm for correction of scatter artifacts. The presented algorithm combines a convolution method to determine the spatial distribution of the scatter intensity distribution with an object-size-dependent scaling of the scatter intensity distributions using a priori information generated by Monte Carlo simulations. A projection-based (PBSE) and an image-based (IBSE) strategy for size estimation of the scanned object are presented. Both strategies provide good correction and comparable results; the faster PBSE strategy is recommended. Even with such a fast and simple algorithm that in the PBSE variant does not rely on reconstructed volumes or scatter measurements, it is possible to provide a reasonable scatter correction even for truncated scans. For both simulations and measurements, scatter artifacts were significantly reduced and the algorithm showed stable behavior in the z-direction. For simulated voxelized head, hip and thorax phantoms, a figure of merit Q of 0.82, 0.76 and 0.77 was reached, respectively (Q = 0 for uncorrected, Q = 1 for ideal). For a water phantom with 15 cm diameter, for example, a cupping reduction from 10.8% down to 2.1% was achieved. The performance of the correction method has limitations in the case of measurements using non-ideal detectors, intensity calibration, etc. An iterative approach to overcome most of these limitations was proposed. This approach is based on root finding of a cupping metric and may be useful for other scatter correction methods as well. By this optimization, cupping of the measured water phantom was further reduced down to 0.9%. The algorithm was evaluated on a commercial system including truncated and non-homogeneous clinically relevant objects.
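
    The core of such a convolution-based correction, and the cupping metric used to drive the iterative refinement, can be sketched as follows. The Gaussian kernel and the single scaling factor are stand-ins for the Monte Carlo derived scatter kernels and the object-size-dependent scaling described above.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def convolution_scatter_estimate(projection, kernel_sigma=50.0, size_scaling=0.2):
        """Estimate the scatter in one projection as a scaled, heavily blurred copy."""
        return size_scaling * gaussian_filter(np.asarray(projection, dtype=float), kernel_sigma)

    def cupping_metric(recon_slice, center_mask, edge_mask):
        """Relative edge-to-center difference of a homogeneous phantom slice;
        zero for a perfectly flat (cupping-free) reconstruction."""
        center = recon_slice[center_mask].mean()
        edge = recon_slice[edge_mask].mean()
        return (edge - center) / edge
    ```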

  14. Binary moving-blocker-based scatter correction in cone-beam computed tomography with width-truncated projections: proof of concept.

    PubMed

    Lee, Ho; Fahimian, Benjamin P; Xing, Lei

    2017-03-21

    This paper proposes a binary moving-blocker (BMB)-based technique for scatter correction in cone-beam computed tomography (CBCT). In concept, a beam blocker consisting of lead strips, mounted in front of the x-ray tube, moves rapidly in and out of the beam during a single gantry rotation. The projections are acquired in alternating phases of blocked and unblocked cone beams, where the blocked phase results in a stripe pattern in the width direction. To derive the scatter map from the blocked projections, 1D B-Spline interpolation/extrapolation is applied by using the detected information in the shaded regions. The scatter map of the unblocked projections is corrected by averaging two scatter maps that correspond to their adjacent blocked projections. The scatter-corrected projections are obtained by subtracting the corresponding scatter maps from the projection data and are utilized to generate the CBCT image by a compressed-sensing (CS)-based iterative reconstruction algorithm. Catphan504 and pelvis phantoms were used to evaluate the method's performance. The proposed BMB-based technique provided an effective method to enhance the image quality by suppressing scatter-induced artifacts, such as ring artifacts around the bowtie area. Compared to CBCT without a blocker, the spatial nonuniformity was reduced from 9.1% to 3.1%. The root-mean-square error of the CT numbers in the regions of interest (ROIs) was reduced from 30.2 HU to 3.8 HU. In addition to high resolution, comparable to that of the benchmark image, the CS-based reconstruction also led to a better contrast-to-noise ratio in seven ROIs. The proposed technique enables complete scatter-corrected CBCT imaging with width-truncated projections and allows reducing the acquisition time to approximately half. This work may have significant implications for image-guided or adaptive radiation therapy, where CBCT is often used.
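
    The interpolation step of this blocker-based approach can be illustrated with a short sketch: the scatter measured in the blocker shadows is fitted column-by-column with a 1D B-spline and evaluated over the whole detector. The row/column orientation and the choice of scipy's make_interp_spline are assumptions of this sketch, not details taken from the paper.

    ```python
    import numpy as np
    from scipy.interpolate import make_interp_spline

    def scatter_from_blocked_rows(projection, blocked_rows, k=3):
        """Estimate a full scatter map from the signal measured in blocker shadows.

        projection   : (H, W) blocked projection
        blocked_rows : indices of detector rows lying in the blocker shadows
        """
        H, W = projection.shape
        rows = np.asarray(sorted(blocked_rows))
        scatter = np.empty((H, W), dtype=float)
        for col in range(W):
            spline = make_interp_spline(rows, projection[rows, col], k=k)
            scatter[:, col] = spline(np.arange(H))  # interpolates and extrapolates
        return scatter
    ```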

  15. Binary moving-blocker-based scatter correction in cone-beam computed tomography with width-truncated projections: proof of concept

    NASA Astrophysics Data System (ADS)

    Lee, Ho; Fahimian, Benjamin P.; Xing, Lei

    2017-03-01

    This paper proposes a binary moving-blocker (BMB)-based technique for scatter correction in cone-beam computed tomography (CBCT). In concept, a beam blocker consisting of lead strips, mounted in front of the x-ray tube, moves rapidly in and out of the beam during a single gantry rotation. The projections are acquired in alternating phases of blocked and unblocked cone beams, where the blocked phase results in a stripe pattern in the width direction. To derive the scatter map from the blocked projections, 1D B-Spline interpolation/extrapolation is applied by using the detected information in the shaded regions. The scatter map of the unblocked projections is corrected by averaging two scatter maps that correspond to their adjacent blocked projections. The scatter-corrected projections are obtained by subtracting the corresponding scatter maps from the projection data and are utilized to generate the CBCT image by a compressed-sensing (CS)-based iterative reconstruction algorithm. Catphan504 and pelvis phantoms were used to evaluate the method’s performance. The proposed BMB-based technique provided an effective method to enhance the image quality by suppressing scatter-induced artifacts, such as ring artifacts around the bowtie area. Compared to CBCT without a blocker, the spatial nonuniformity was reduced from 9.1% to 3.1%. The root-mean-square error of the CT numbers in the regions of interest (ROIs) was reduced from 30.2 HU to 3.8 HU. In addition to high resolution, comparable to that of the benchmark image, the CS-based reconstruction also led to a better contrast-to-noise ratio in seven ROIs. The proposed technique enables complete scatter-corrected CBCT imaging with width-truncated projections and allows reducing the acquisition time to approximately half. This work may have significant implications for image-guided or adaptive radiation therapy, where CBCT is often used.

  16. ITERATIVE SCATTER CORRECTION FOR GRID-LESS BEDSIDE CHEST RADIOGRAPHY: PERFORMANCE FOR A CHEST PHANTOM.

    PubMed

    Mentrup, Detlef; Jockel, Sascha; Menser, Bernd; Neitzel, Ulrich

    2016-06-01

    The aim of this work was to experimentally compare the contrast improvement factors (CIFs) of a newly developed software-based scatter correction to the CIFs achieved by an antiscatter grid. To this end, three aluminium discs were placed in the lung, the retrocardial and the abdominal areas of a thorax phantom, and digital radiographs of the phantom were acquired both with and without a stationary grid. The contrast generated by the discs was measured in both images, and the CIFs achieved by grid usage were determined for each disc. Additionally, the non-grid images were processed with a scatter correction software. The contrasts generated by the discs were determined in the scatter-corrected images, and the corresponding CIFs were calculated. The CIFs obtained with the grid and with the software were in good agreement. In conclusion, the experiment demonstrates quantitatively that software-based scatter correction allows restoring the image contrast of a non-grid image in a manner comparable with an antiscatter grid.

  17. Single-scan patient-specific scatter correction in computed tomography using peripheral detection of scatter and compressed sensing scatter retrieval

    PubMed Central

    Meng, Bowen; Lee, Ho; Xing, Lei; Fahimian, Benjamin P.

    2013-01-01

    Purpose: X-ray scatter results in a significant degradation of image quality in computed tomography (CT), representing a major limitation in cone-beam CT (CBCT) and large field-of-view diagnostic scanners. In this work, a novel scatter estimation and correction technique is proposed that utilizes peripheral detection of scatter during the patient scan to simultaneously acquire image and patient-specific scatter information in a single scan, in conjunction with a proposed compressed sensing scatter recovery technique to reconstruct and correct for the patient-specific scatter in the projection space. Methods: The method consists of the detection of patient scatter at the edges of the field of view (FOV) followed by measurement-based compressed sensing recovery of the scatter throughout the projection space. In the prototype implementation, the kV x-ray source of the Varian TrueBeam OBI system was blocked at the edges of the projection FOV, and the image detector in the corresponding blocked region was used for scatter detection. The design enables acquisition of image projection data in the unblocked central region of the FOV and of scatter data in the blocked boundary regions. For the initial scatter estimation on the central FOV, a prior consisting of a hybrid scatter model that combines a scatter interpolation method and a scatter convolution model is estimated using the acquired scatter distribution on the boundary region. With the hybrid scatter estimation model, compressed sensing optimization is performed to generate the scatter map by penalizing the L1 norm of the discrete cosine transform of the scatter signal. The estimated scatter is subtracted from the projection data by soft-tuning, and the scatter-corrected CBCT volume is obtained by the conventional Feldkamp-Davis-Kress algorithm. Experimental studies using image quality and anthropomorphic phantoms on a Varian TrueBeam system were carried out to evaluate the performance of the proposed scheme. Results: The scatter shading artifacts were markedly suppressed in the reconstructed images using the proposed method. On the Catphan©504 phantom, the proposed method reduced the CT number error to 13 Hounsfield units, 10% of that without scatter correction, and increased the image contrast by a factor of 2 in high-contrast regions. On the anthropomorphic phantom, the spatial nonuniformity decreased from 10.8% to 6.8% after correction. Conclusions: A novel scatter correction method, enabling unobstructed acquisition of the high-frequency image data and concurrent detection of the patient-specific low-frequency scatter data at the edges of the FOV, is proposed and validated in this work. Relative to blocker-based techniques, rather than obstructing the central portion of the FOV, which degrades and limits the image reconstruction, compressed sensing is used to solve for the scatter from its detection at the periphery of the FOV, enabling the highest-quality reconstruction in the central region and robust patient-specific scatter correction. PMID:23298098
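
    The compressed-sensing step, recovering a full scatter map from measurements available only at the blocked edges of the FOV by penalizing the L1 norm of its DCT, can be sketched with a basic iterative soft-thresholding loop. The step size, penalty weight, and plain ISTA iteration are assumptions of this sketch, not the authors' solver.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def cs_scatter_recovery(measured, mask, lam=0.1, n_iter=200):
        """Recover a smooth scatter map sampled only where mask is True.

        Approximately solves  min_x 0.5*||mask*(x - measured)||^2 + lam*||DCT(x)||_1
        with iterative soft thresholding; since the DCT is orthonormal, the
        proximal step is a simple shrinkage of the DCT coefficients.
        """
        x = np.zeros_like(measured, dtype=float)
        for _ in range(n_iter):
            x = x - np.where(mask, x - measured, 0.0)    # gradient step on data term
            coeffs = dctn(x, norm="ortho")
            coeffs = np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam, 0.0)
            x = idctn(coeffs, norm="ortho")              # sparsity-promoting prox step
        return x
    ```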

  18. CT-based attenuation and scatter correction compared with uniform attenuation correction in brain perfusion SPECT imaging for dementia

    NASA Astrophysics Data System (ADS)

    Gillen, Rebecca; Firbank, Michael J.; Lloyd, Jim; O'Brien, John T.

    2015-09-01

    This study investigated whether the appearance and diagnostic accuracy of HMPAO brain perfusion SPECT images could be improved by using CT-based attenuation and scatter correction compared with the uniform attenuation correction method. A cohort of subjects who were clinically categorized as having Alzheimer’s disease (n=38) or dementia with Lewy bodies (n=29), or as healthy normal controls (n=30), underwent SPECT imaging with Tc-99m HMPAO and a separate CT scan. The SPECT images were processed using: (a) a correction map derived from the subject’s CT scan, (b) the Chang uniform approximation for correction or (c) no attenuation correction. Images were visually inspected. The ratios between key regions of interest known to be affected or spared in each condition were calculated for each correction method, and the differences between these ratios were evaluated. The images produced using the different corrections were noted to be visually different. However, ROI analysis found similar statistically significant differences between control and dementia groups and between AD and DLB groups regardless of the correction map used. We did not identify an improvement in diagnostic accuracy in images which were corrected using CT-based attenuation and scatter correction, compared with those corrected using a uniform correction map.

  19. Scatter and crosstalk corrections for 99mTc/123I dual-radionuclide imaging using a CZT SPECT system with pinhole collimators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, Peng; Hutton, Brian F.; Holstensson, Maria

    2015-12-15

    Purpose: The energy spectrum for a cadmium zinc telluride (CZT) detector has a low energy tail due to incomplete charge collection and intercrystal scattering. Due to these solid-state detector effects, scatter would be overestimated if the conventional triple-energy window (TEW) method is used for scatter and crosstalk corrections in CZT-based imaging systems. The objective of this work is to develop a scatter and crosstalk correction method for 99mTc/123I dual-radionuclide imaging for a CZT-based dedicated cardiac SPECT system with pinhole collimators (GE Discovery NM 530c/570c). Methods: A tailing model was developed to account for the low energy tail effects of the CZT detector. The parameters of the model were obtained using 99mTc and 123I point source measurements. A scatter model was defined to characterize the relationship between down-scatter and self-scatter projections. The parameters for this model were obtained from Monte Carlo simulation using SIMIND. The tailing and scatter models were further incorporated into a projection count model, and the primary and self-scatter projections of each radionuclide were determined with a maximum likelihood expectation maximization (MLEM) iterative estimation approach. The extracted scatter and crosstalk projections were then incorporated into MLEM image reconstruction as an additive term in the forward projection to obtain scatter- and crosstalk-corrected images. The proposed method was validated using Monte Carlo simulation, a line source experiment, anthropomorphic torso phantom studies, and patient studies. The performance of the proposed method was also compared to that obtained with the conventional TEW method. Results: Monte Carlo simulations and the line source experiment demonstrated that the TEW method overestimated scatter while the proposed method provided more accurate scatter estimation by considering the low energy tail effect. In the phantom study, improved defect contrasts were observed with both correction methods compared to no correction, especially for the images of 99mTc in dual-radionuclide imaging where there is heavy contamination from 123I. In this case, the nontransmural defect contrast was improved from 0.39 to 0.47 with the TEW method and to 0.51 with the proposed method, and the transmural defect contrast was improved from 0.62 to 0.74 with the TEW method and to 0.73 with the proposed method. In the patient study, the proposed method provided higher myocardium-to-blood pool contrast than the TEW method. Similar to the phantom experiment, the improvement was most substantial for the images of 99mTc in dual-radionuclide imaging. In this case, the myocardium-to-blood pool ratio was improved from 7.0 to 38.3 with the TEW method and to 63.6 with the proposed method. Compared to the TEW method, the proposed method also provided higher count levels in the reconstructed images in both phantom and patient studies, indicating reduced overestimation of scatter. Using the proposed method, consistent reconstruction results were obtained for both single-radionuclide data with scatter correction and dual-radionuclide data with scatter and crosstalk corrections, in both phantom and human studies. Conclusions: The authors demonstrate that the TEW method leads to overestimation of scatter and crosstalk for the CZT-based imaging system, while the proposed scatter and crosstalk correction method provides more accurate self-scatter and down-scatter estimations for quantitative single-radionuclide and dual-radionuclide imaging.
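
    The final reconstruction step, including the estimated scatter and crosstalk as an additive term in the MLEM forward projection, has a standard form that can be sketched as below. The dense system matrix A is a placeholder for the actual pinhole-SPECT projector.

    ```python
    import numpy as np

    def mlem_with_additive_scatter(A, y, scatter, n_iter=50):
        """MLEM update with an additive scatter/crosstalk term in the forward model.

        A       : (n_bins, n_voxels) system matrix (placeholder for the real projector)
        y       : measured projection counts, length n_bins
        scatter : estimated scatter/crosstalk counts per projection bin
        """
        x = np.ones(A.shape[1])
        sensitivity = A.T @ np.ones(A.shape[0])
        for _ in range(n_iter):
            forward = A @ x + scatter                  # additive scatter term
            ratio = y / np.maximum(forward, 1e-12)
            x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)
        return x
    ```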

  20. Scatter correction for cone-beam computed tomography using self-adaptive scatter kernel superposition

    NASA Astrophysics Data System (ADS)

    Xie, Shi-Peng; Luo, Li-Min

    2012-06-01

    The authors propose a combined scatter reduction and correction method to improve image quality in cone beam computed tomography (CBCT). The scatter kernel superposition (SKS) method has been used occasionally in previous studies. However, the present method differs in that a scatter detecting blocker (SDB) is used between the X-ray source and the tested object to model a self-adaptive scatter kernel. This study first evaluates the scatter kernel parameters using the SDB, and then isolates the scatter distribution based on the SKS. Image quality can be improved by removing the scatter distribution. The results show that the method can effectively reduce scatter artifacts and increase image quality. Our approach increases the image contrast and reduces the magnitude of cupping. The accuracy of the SKS technique can be significantly improved in our method by using a self-adaptive scatter kernel. This method is computationally efficient, easy to implement, and provides scatter correction using a single scan acquisition.

  1. Library based x-ray scatter correction for dedicated cone beam breast CT

    PubMed Central

    Shi, Linxi; Karellas, Andrew; Zhu, Lei

    2016-01-01

    Purpose: The image quality of dedicated cone beam breast CT (CBBCT) is limited by substantial scatter contamination, resulting in cupping artifacts and contrast loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose a library-based software approach to suppress scatter on CBBCT images with high efficiency, accuracy, and reliability. Methods: The authors precompute a scatter library on simplified breast models of different sizes using the GEANT4-based Monte Carlo (MC) toolkit. The breast is approximated as a semiellipsoid with a homogeneous glandular/adipose tissue mixture. For scatter correction on real clinical data, the authors estimate the breast size from a first-pass breast CT reconstruction and then select the corresponding scatter distribution from the library. The selected scatter distribution from the simplified breast models is spatially translated to match the projection data from the clinical scan and is subtracted from the measured projection for effective scatter correction. The method performance was evaluated using 15 sets of patient data, with a wide range of breast sizes representing about 95% of the general population. Spatial nonuniformity (SNU) and contrast to signal deviation ratio (CDR) were used as metrics for evaluation. Results: Since the time-consuming MC simulation for library generation is precomputed, the authors’ method efficiently corrects for scatter with minimal processing time. Furthermore, the authors find that a scatter library on a simple breast model with only one input parameter, i.e., the breast diameter, sufficiently guarantees improvements in SNU and CDR. For the 15 clinical datasets, the authors’ method reduces the average SNU from 7.14% to 2.47% in coronal views and from 10.14% to 3.02% in sagittal views. On average, the CDR is improved by a factor of 1.49 in coronal views and 2.12 in sagittal views. Conclusions: The library-based scatter correction does not require an increase in radiation dose or hardware modifications, and it improves on existing methods in implementation simplicity and computational efficiency. As demonstrated through patient studies, the authors’ approach is effective and stable, and is therefore clinically attractive for CBBCT imaging. PMID:27487870
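
    The library lookup itself is straightforward; the sketch below selects the precomputed scatter map whose breast-model diameter is closest to the estimated one and subtracts it from the measured projection. The dictionary layout is an assumption for illustration, and the spatial-translation step described above is omitted.

    ```python
    import numpy as np

    def library_scatter_correct(projection, breast_diameter_cm, scatter_library):
        """Correct one projection using a precomputed scatter library.

        scatter_library : dict mapping model diameter (cm) -> scatter map (H, W)
        """
        closest = min(scatter_library, key=lambda d: abs(d - breast_diameter_cm))
        corrected = np.maximum(projection - scatter_library[closest], 0.0)
        return corrected, closest
    ```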

  2. WE-AB-207A-07: A Planning CT-Guided Scatter Artifact Correction Method for CBCT Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, X; Liu, T; Dong, X

    Purpose: Cone beam computed tomography (CBCT) imaging is in increasing demand for high-performance image-guided radiotherapy such as online tumor delineation and dose calculation. However, current CBCT imaging has severe scatter artifacts, and its clinical application is therefore limited to patient setup based mainly on bony structures. This study’s purpose is to develop a CBCT artifact correction method. Methods: The proposed scatter correction method utilizes the planning CT to improve CBCT image quality. First, an image registration is used to match the planning CT with the CBCT to reduce the geometric difference between the two images. Then, the planning CT-based prior information is entered into a Bayesian deconvolution framework to iteratively perform a scatter artifact correction for the CBCT images. This technique was evaluated using Catphan phantoms with multiple inserts. Contrast-to-noise ratios (CNR), signal-to-noise ratios (SNR), and the image spatial nonuniformity (ISN) in selected volumes of interest (VOIs) were calculated to assess the proposed correction method. Results: After scatter correction, the CNR increased by a factor of 1.96, 3.22, 3.20, 3.46, 3.44, 1.97 and 1.65, and the SNR increased by a factor of 1.05, 2.09, 1.71, 3.95, 2.52, 1.54 and 1.84 for the Air, PMP, LDPE, Polystyrene, Acrylic, Delrin and Teflon inserts, respectively. The ISN decreased from 21.1% to 4.7% in the corrected images. All values of CNR, SNR and ISN in the corrected CBCT image were much closer to those in the planning CT images. The results demonstrated that the proposed method reduces the relevant artifacts and recovers CT numbers. Conclusion: We have developed a novel CBCT artifact correction method based on the CT image, and demonstrated that the proposed CT-guided correction method can significantly reduce scatter artifacts and improve image quality. This method has great potential to correct CBCT images, allowing their use in adaptive radiotherapy.

  3. A model-based scatter artifacts correction for cone beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Wei; Zhu, Jun; Wang, Luyao

    2016-04-15

    Purpose: Due to the increased axial coverage of multislice computed tomography (CT) and the introduction of flat detectors, the size of x-ray illumination fields has grown dramatically, causing an increase in scatter radiation. For CT imaging, scatter is a significant issue that introduces shading artifacts, streaks, and reduced contrast and Hounsfield unit (HU) accuracy. The purpose of this work is to provide a fast and accurate scatter artifact correction algorithm for cone beam CT (CBCT) imaging. Methods: The method starts with an estimation of coarse scatter profiles for a set of CBCT data in either the image domain or the projection domain. A denoising algorithm designed specifically for Poisson signals is then applied to derive the final scatter distribution. Qualitative and quantitative evaluations using thorax and abdomen phantoms with Monte Carlo (MC) simulations, experimental Catphan phantom data, and in vivo human data acquired for clinical image-guided radiation therapy were performed. Scatter correction in both the projection domain and the image domain was conducted, and the influences of the segmentation method, mismatched attenuation coefficients, and spectrum model, as well as parameter selection, were also investigated. Results: Results show that the proposed algorithm can significantly reduce scatter artifacts and recover the correct HU in either the projection domain or the image domain. For the MC thorax phantom study, four-component segmentation yields the best results, while the results of three-component segmentation are still acceptable. The parameters (iteration number K and weight β) affect the accuracy of the scatter correction, and the results improve as K and β increase. It was found that variations in attenuation coefficient accuracy only slightly impact the performance of the proposed processing. For the Catphan phantom data, the mean value over all pixels in the residual image is reduced from −21.8 to −0.2 HU and 0.7 HU for the projection domain and image domain, respectively. The contrast of the in vivo human images is greatly improved after correction. Conclusions: The software-based technique has a number of advantages, such as high computational efficiency and accuracy, and the capability of performing scatter correction without modifying the clinical workflow (i.e., no extra scan/measurement data are needed) or the imaging hardware. When implemented practically, this should improve the accuracy of CBCT image quantitation and significantly impact CBCT-based interventional procedures and adaptive radiation therapy.

  4. SU-F-J-198: A Cross-Platform Adaptation of An a Priori Scatter Correction Algorithm for Cone-Beam Projections to Enable Image- and Dose-Guided Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andersen, A; Casares-Magaz, O; Elstroem, U

    Purpose: Cone-beam CT (CBCT) imaging may enable image- and dose-guided proton therapy, but is challenged by image artefacts. The aim of this study was to demonstrate the general applicability of a previously developed a priori scatter correction algorithm to allow CBCT-based proton dose calculations. Methods: The a priori scatter correction algorithm used a planning CT (pCT) and raw cone-beam projections acquired with the Varian On-Board Imager. The projections were initially corrected for bow-tie filtering and beam hardening and subsequently reconstructed using the Feldkamp-Davis-Kress algorithm (rawCBCT). The rawCBCTs were intensity normalised before a rigid and a deformable registration were applied to align the pCTs to the rawCBCTs. The resulting images were forward projected onto the same angles as the raw CB projections. The two projection sets were subtracted from each other, Gaussian and median filtered, then subtracted from the raw projections, and finally reconstructed to give the scatter-corrected CBCTs. For evaluation, water equivalent path length (WEPL) maps (from anterior to posterior) were calculated on different reconstructions of three data sets (CB projections and pCT) of three parts of an Alderson phantom. Finally, single beam spot scanning proton plans (0–360 deg gantry angle in steps of 5 deg; using PyTRiP) treating a 5 cm central spherical target in the pCT were re-calculated on scatter-corrected CBCTs with identical targets. Results: The scatter-corrected CBCTs resulted in sub-mm mean WEPL differences relative to the rigid registration of the pCT for all three data sets. These differences were considerably smaller than what was achieved with the regular Varian CBCT reconstruction algorithm (1–9 mm mean WEPL differences). Target coverage in the re-calculated plans was generally improved using the scatter-corrected CBCTs compared to the Varian CBCT reconstruction. Conclusion: We have demonstrated the general applicability of a priori CBCT scatter correction, potentially opening the way for CBCT-based image- and dose-guided proton therapy, including adaptive strategies. Research agreement with Varian Medical Systems, not connected to the present project.
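
    The central subtraction-and-filtering step of this a priori correction can be sketched per projection as follows; the filter widths are placeholders, and the registration and forward projection are assumed to have been done already.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, median_filter

    def a_priori_scatter_correct(raw_proj, pct_forward_proj, sigma=10.0, med_size=5):
        """Scatter-correct one cone-beam projection using the forward-projected pCT.

        The raw projection contains scatter while the forward projection of the
        registered planning CT does not, so their (filtered) difference serves as
        the scatter estimate to subtract.
        """
        difference = raw_proj - pct_forward_proj
        scatter = gaussian_filter(median_filter(difference, size=med_size), sigma)
        return raw_proj - scatter
    ```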

  5. SU-E-J-10: A Moving-Blocker-Based Strategy for Simultaneous Megavoltage and Kilovoltage Scatter Correction in Cone-Beam Computed Tomography Image Acquired During Volumetric Modulated Arc Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ouyang, L; Lee, H; Wang, J

    2014-06-01

    Purpose: To evaluate a moving-blocker-based approach for estimating and correcting megavoltage (MV) and kilovoltage (kV) scatter contamination in kV cone-beam computed tomography (CBCT) acquired during volumetric modulated arc therapy (VMAT). Methods: XML code was generated to enable concurrent CBCT acquisition and VMAT delivery in Varian TrueBeam developer mode. A physical attenuator (i.e., "blocker") consisting of equally spaced lead strips (3.2 mm strip width and 3.2 mm gap in between) was mounted between the x-ray source and patient at a source-to-blocker distance of 232 mm. The blocker was simulated to be moving back and forth along the gantry rotation axis during the CBCT acquisition. Both MV and kV scatter signals were estimated simultaneously from the blocked regions of the imaging panel and interpolated into the unblocked regions. Scatter-corrected CBCT was then reconstructed from the unblocked projections after scatter subtraction, using an iterative image reconstruction algorithm based on constrained optimization. Experimental studies were performed on a Catphan 600 phantom and an anthropomorphic pelvis phantom to demonstrate the feasibility of using a moving blocker for MV-kV scatter correction. Results: MV scatter greatly degrades the CBCT image quality by increasing CT number inaccuracy and decreasing image contrast, in addition to the shading artifacts caused by kV scatter. The artifacts were substantially reduced in the moving-blocker-corrected CBCT images for both the Catphan and pelvis phantoms. Quantitatively, the CT number error in selected regions of interest was reduced from 377 in the kV-MV contaminated CBCT image to 38 for the Catphan phantom. Conclusions: The moving-blocker-based strategy can successfully correct MV and kV scatter simultaneously in CBCT projection data acquired with concurrent VMAT delivery. This work was supported in part by a grant from the Cancer Prevention and Research Institute of Texas (RP130109) and a grant from the American Cancer Society (RSG-13-326-01-CCE).

  6. Fully iterative scatter corrected digital breast tomosynthesis using GPU-based fast Monte Carlo simulation and composition ratio update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kyungsang; Ye, Jong Chul, E-mail: jong.ye@kaist.ac.kr; Lee, Taewon

    2015-09-15

    Purpose: In digital breast tomosynthesis (DBT), scatter correction is highly desirable, as it improves image quality at low doses. Because the DBT detector panel is typically stationary during the source rotation, antiscatter grids are not generally compatible with DBT; thus, a software-based scatter correction is required. This work proposes a fully iterative scatter correction method that uses a novel fast Monte Carlo simulation (MCS) with a tissue-composition ratio estimation technique for DBT imaging. Methods: To apply MCS to scatter estimation, the material composition in each voxel should be known. To overcome the lack of prior accurate knowledge of tissue composition for DBT, a tissue-composition ratio is estimated based on the observation that breast tissues are principally composed of adipose and glandular tissue. Using this approximation, the composition ratio can be estimated from the reconstructed attenuation coefficients, and the scatter distribution can then be estimated by MCS using the composition ratio. The scatter estimation and image reconstruction procedures can be performed iteratively until an acceptable accuracy is achieved. For practical use, (i) the authors have implemented a fast MCS using a graphics processing unit (GPU), (ii) the MCS is simplified to transport only x-rays in the energy range of 10–50 keV, modeling Rayleigh and Compton scattering and the photoelectric effect using the tissue-composition ratio of adipose and glandular tissues, and (iii) downsampling is used because the scatter distribution varies rather smoothly. Results: The authors have demonstrated that the proposed method can accurately estimate the scatter distribution, and that the contrast-to-noise ratio of the final reconstructed image is significantly improved. The authors validated the performance of the MCS by changing the tissue thickness, composition ratio, and x-ray energy. The authors confirmed that the tissue-composition ratio estimation was quite accurate under a variety of conditions. Our GPU-based fast MCS implementation took approximately 3 s to generate each angular projection for a 6 cm thick breast, which is believed to make this process acceptable for clinical applications. In addition, the clinical preferences of three radiologists were evaluated; the preference for the proposed method over the convolution-based method was statistically significant (p < 0.05, McNemar test). Conclusions: The proposed fully iterative scatter correction method and the GPU-based fast MCS using tissue-composition ratio estimation successfully improved the image quality within a reasonable computational time, which may potentially increase the clinical utility of DBT.
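
    The composition-ratio update amounts to expressing each reconstructed attenuation coefficient as a mixture of the adipose and glandular values; a minimal version of that step is sketched below, with the reference coefficients left as inputs.

    ```python
    import numpy as np

    def glandular_fraction(mu, mu_adipose, mu_glandular):
        """Per-voxel glandular fraction assuming an adipose/glandular two-tissue mixture.

        mu           : reconstructed attenuation coefficients (array)
        mu_adipose   : reference attenuation coefficient of adipose tissue
        mu_glandular : reference attenuation coefficient of glandular tissue
        """
        fraction = (mu - mu_adipose) / (mu_glandular - mu_adipose)
        return np.clip(fraction, 0.0, 1.0)  # adipose-only -> 0, glandular-only -> 1
    ```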

  7. Library based x-ray scatter correction for dedicated cone beam breast CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Linxi; Zhu, Lei, E-mail: leizhu@gatech.edu

    Purpose: The image quality of dedicated cone beam breast CT (CBBCT) is limited by substantial scatter contamination, resulting in cupping artifacts and contrast loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose a library-based software approach to suppress scatter on CBBCT images with high efficiency, accuracy, and reliability. Methods: The authors precompute a scatter library on simplified breast models of different sizes using the GEANT4-based Monte Carlo (MC) toolkit. The breast is approximated as a semiellipsoid with a homogeneous glandular/adipose tissue mixture. For scatter correction on real clinical data, the authors estimate the breast size from a first-pass breast CT reconstruction and then select the corresponding scatter distribution from the library. The selected scatter distribution from the simplified breast models is spatially translated to match the projection data from the clinical scan and is subtracted from the measured projection for effective scatter correction. The method performance was evaluated using 15 sets of patient data, with a wide range of breast sizes representing about 95% of the general population. Spatial nonuniformity (SNU) and contrast to signal deviation ratio (CDR) were used as metrics for evaluation. Results: Since the time-consuming MC simulation for library generation is precomputed, the authors’ method efficiently corrects for scatter with minimal processing time. Furthermore, the authors find that a scatter library on a simple breast model with only one input parameter, i.e., the breast diameter, sufficiently guarantees improvements in SNU and CDR. For the 15 clinical datasets, the authors’ method reduces the average SNU from 7.14% to 2.47% in coronal views and from 10.14% to 3.02% in sagittal views. On average, the CDR is improved by a factor of 1.49 in coronal views and 2.12 in sagittal views. Conclusions: The library-based scatter correction does not require an increase in radiation dose or hardware modifications, and it improves on existing methods in implementation simplicity and computational efficiency. As demonstrated through patient studies, the authors’ approach is effective and stable, and is therefore clinically attractive for CBBCT imaging.

  8. Monte Carlo evaluation of accuracy and noise properties of two scatter correction methods for 201Tl cardiac SPECT

    NASA Astrophysics Data System (ADS)

    Narita, Y.; Iida, H.; Ebert, S.; Nakamura, T.

    1997-12-01

    Two independent scatter correction techniques, transmission dependent convolution subtraction (TDCS) and the triple-energy window (TEW) method, were evaluated in terms of quantitative accuracy and noise properties using Monte Carlo simulation (EGS4). Emission projections (primary, scatter, and scatter plus primary) were simulated for three numerical phantoms for 201Tl. Data were reconstructed with an ordered-subset EM algorithm including attenuation correction based on noise-less transmission data. The accuracy of the TDCS and TEW scatter corrections was assessed by comparison with the simulated true primary data. The uniform cylindrical phantom simulation demonstrated better quantitative accuracy with TDCS than with TEW (-2.0% vs. 16.7%) and better S/N (6.48 vs. 5.05). A uniform ring myocardial phantom simulation demonstrated better homogeneity in the myocardium with TDCS than with TEW; i.e., anterior-to-posterior wall count ratios were 0.99 and 0.76 with TDCS and TEW, respectively. For the MCAT phantom, TDCS provided good visual and quantitative agreement with the simulated true primary image without noticeably increasing the noise after scatter correction. Overall, TDCS proved to be more accurate and less noisy than TEW, facilitating quantitative assessment of physiological functions with SPECT.
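    For reference, the TEW estimate used as the comparison method above can be written as a simple trapezoidal combination of the two narrow side windows; the sketch below (Python) shows that formula with purely illustrative counts and window widths.

```python
def tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_peak):
    """Triple-energy window scatter estimate for the photopeak window.

    c_lower, c_upper : counts in the lower/upper scatter windows
    w_lower, w_upper : widths (keV) of the scatter windows
    w_peak           : width (keV) of the photopeak window
    """
    return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

# Illustrative numbers only (not from the study):
peak_counts = 10000.0
scatter = tew_scatter_estimate(c_lower=600.0, c_upper=120.0,
                               w_lower=3.0, w_upper=3.0, w_peak=20.0)
print(scatter, peak_counts - scatter)  # estimated scatter and primary counts
```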

  9. Correction of scatter in megavoltage cone-beam CT

    NASA Astrophysics Data System (ADS)

    Spies, L.; Ebert, M.; Groh, B. A.; Hesse, B. M.; Bortfeld, T.

    2001-03-01

    The role of scatter in a cone-beam computed tomography system using the therapeutic beam of a medical linear accelerator and a commercial electronic portal imaging device (EPID) is investigated. A scatter correction method is presented which is based on a superposition of Monte Carlo generated scatter kernels. The kernels are adapted to both the spectral response of the EPID and the dimensions of the phantom being scanned. The method is part of a calibration procedure which converts the measured transmission data acquired for each projection angle into water-equivalent thicknesses. Tomographic reconstruction of the projections then yields an estimate of the electron density distribution of the phantom. It is found that scatter produces cupping artefacts in the reconstructed tomograms. Furthermore, reconstructed electron densities deviate greatly (by about 30%) from their expected values. The scatter correction method removes the cupping artefacts and decreases the deviations from 30% down to about 8%.

  10. [Atmospheric correction of HJ-1 CCD data for water imagery based on dark object model].

    PubMed

    Zhou, Li-Guo; Ma, Wei-Chun; Gu, Wan-Hua; Huai, Hong-Yan

    2011-08-01

    The CCD multi-band data of HJ-1A have great potential for inland water quality monitoring, but accurate atmospheric correction is a prerequisite for this application. In this paper, a dark-pixel-based method for retrieving water-leaving radiance is proposed. Besides Rayleigh scattering, aerosol scattering is important for atmospheric correction; inland lakes are generally case II waters, for which the water-leaving radiance is not zero. Synchronous MODIS shortwave infrared data were therefore used to obtain the aerosol parameters, and, exploiting the characteristic that aerosol scattering is relatively stable at 560 nm, the water-leaving radiance for each visible and near-infrared band was retrieved and normalized; the remote-sensing reflectance of the water was then computed. The results show that this image-based atmospheric correction method is more effective for the retrieval of water parameters from HJ-1A CCD data.

  11. Interference detection and correction applied to incoherent-scatter radar power spectrum measurement

    NASA Technical Reports Server (NTRS)

    Ying, W. P.; Mathews, J. D.; Rastogi, P. K.

    1986-01-01

    A median-filter-based interference detection and correction technique is evaluated, and its application to Arecibo incoherent-scatter radar D-region ionospheric power spectra is discussed. The method can be extended to other kinds of data provided the underlying statistical assumptions remain valid.

  12. SU-C-201-02: Quantitative Small-Animal SPECT Without Scatter Correction Using High-Purity Germanium Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gearhart, A; Peterson, T; Johnson, L

    2015-06-15

    Purpose: To evaluate the impact of the exceptional energy resolution of germanium detectors for preclinical SPECT in comparison to conventional detectors. Methods: A cylindrical water phantom was created in GATE with a spherical Tc-99m source in the center. Sixty-four projections over 360 degrees using a pinhole collimator were simulated. The same phantom was simulated using air instead of water to establish the true reconstructed voxel intensity without attenuation. Attenuation correction based on the Chang method was performed on MLEM reconstructed images from the water phantom to determine a quantitative measure of the effectiveness of the attenuation correction. Similarly, a NEMA phantom was simulated, and the effectiveness of the attenuation correction was evaluated. Both simulations were carried out using both NaI detectors with an energy resolution of 10% FWHM and Ge detectors with an energy resolution of 1%. Results: Analysis shows that attenuation correction without scatter correction using germanium detectors can reconstruct a small spherical source to within 3.5%. Scatter analysis showed that for standard sized objects in a preclinical scanner, a NaI detector has a scatter-to-primary ratio between 7% and 12.5% compared to between 0.8% and 1.5% for a Ge detector. Preliminary results from line profiles through the NEMA phantom suggest that applying attenuation correction without scatter correction provides acceptable results for the Ge detectors but overestimates the phantom activity using NaI detectors. Due to the decreased scatter, we believe that the spillover ratio for the air and water cylinders in the NEMA phantom will be lower using germanium detectors compared to NaI detectors. Conclusion: This work indicates that the superior energy resolution of germanium detectors allows fewer scattered photons to be included within the energy window compared to traditional SPECT detectors. This may allow for quantitative SPECT without implementing scatter correction, reducing uncertainties introduced by scatter correction algorithms. Funding provided by NIH/NIBIB grant R01EB013677; Todd Peterson, Ph.D., has had a research contract with PHDs Co., Knoxville, TN.

  13. Topographic correction realization based on the CBERS-02B image

    NASA Astrophysics Data System (ADS)

    Qin, Hui-ping; Yi, Wei-ning; Fang, Yong-hua

    2011-08-01

    The rugged topography of mountainous terrain distorts the retrieved spectral signatures of otherwise identical land-cover types. To improve the accuracy of studies of topographic surface characteristics, many researchers have focused on topographic correction. Topographic correction methods can be statistical-empirical or physical models, among which methods based on digital elevation model (DEM) data are the most popular. Restricted by spatial resolution, previous work mostly corrected topographic effects on Landsat TM imagery, whose 30 m resolution DEM can be easily obtained from the internet or derived from digital maps. Some researchers have also performed topographic correction on high-spatial-resolution images such as QuickBird and IKONOS, but there is little comparable research on the topographic correction of CBERS-02B imagery. In this study, mountainous terrain in Liaoning was taken as the study area. The 15 m original digital elevation model was interpolated step by step to a 2.36 m grid. The C correction, SCS+C correction, Minnaert correction, and Ekstrand correction were applied to correct the topographic effect, and the corrected results were compared. For the images corrected with each method, scatter diagrams of image digital number against the cosine of the solar incidence angle with respect to the surface normal were produced, and the mean value, standard deviation, slope of the scatter diagram, and a separation factor were computed. The analysis shows that shadows are weaker in the corrected images than in the original images and that the three-dimensional effect is removed. The absolute slope of the fitted lines in the scatter diagrams is reduced, with the Minnaert correction giving the most effective result. These findings demonstrate that the established correction methods can be successfully adapted to CBERS-02B images, and that DEM data can be interpolated step by step to approximately the required spatial resolution when high-resolution elevation data are hard to obtain.
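    As an illustration of one of the methods named above, a minimal C-correction step can be sketched as follows (Python): the empirical parameter c is taken from a linear fit of the digital numbers against the cosine of the solar incidence angle, and the correction rescales each pixel toward the flat-terrain illumination. The data here are synthetic.

```python
import numpy as np

def c_correction(dn, cos_i, cos_sz):
    """C topographic correction.

    dn     : band digital numbers (or radiances) per pixel
    cos_i  : cosine of the solar incidence angle w.r.t. the surface normal
    cos_sz : cosine of the solar zenith angle (flat-terrain reference)
    """
    m, b = np.polyfit(cos_i.ravel(), dn.ravel(), 1)  # fit DN = m*cos(i) + b
    c = b / m
    return dn * (cos_sz + c) / (cos_i + c)

rng = np.random.default_rng(0)
cos_i = rng.uniform(0.2, 1.0, size=1000)
dn = 120.0 * cos_i + 15.0 + rng.normal(0.0, 2.0, size=cos_i.size)
corrected = c_correction(dn, cos_i, cos_sz=0.85)
print(dn.std(), corrected.std())  # topographic modulation should shrink after correction
```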

  14. Scatter characterization and correction for simultaneous multiple small-animal PET imaging.

    PubMed

    Prasad, Rameshwar; Zaidi, Habib

    2014-04-01

    The rapid growth and usage of small-animal positron emission tomography (PET) in molecular imaging research has led to increased demand on PET scanner time. One potential solution to increase throughput is to scan multiple rodents simultaneously. However, this is achieved at the expense of deterioration of image quality and loss of quantitative accuracy owing to enhanced effects of photon attenuation and Compton scattering. The purpose of this work is, first, to characterize the magnitude and spatial distribution of the scatter component in small-animal PET imaging when scanning single and multiple rodents simultaneously and, second, to assess the relevance and evaluate the performance of scatter correction under similar conditions. The LabPET™-8 scanner was modelled as realistically as possible using the Geant4 Application for Tomographic Emission (GATE) Monte Carlo simulation platform. Monte Carlo simulations allow the separation of unscattered and scattered coincidences and as such enable detailed assessment of the scatter component and its origin. Simple shape-based and more realistic voxel-based phantoms were used to simulate single and multiple PET imaging studies. The modelled scatter component using the single-scatter simulation technique was compared to Monte Carlo simulation results. PET images were also corrected for attenuation, and the combined effect of attenuation and scatter on single and multiple small-animal PET imaging was evaluated in terms of image quality and quantitative accuracy. A good agreement was observed between calculated and Monte Carlo simulated scatter profiles for single- and multiple-subject imaging. In the LabPET™-8 scanner, the detector covering material (kovar) contributed the maximum amount of scatter events, while the scatter contribution due to lead shielding is negligible. The out-of-field-of-view (FOV) scatter fraction (SF) is 1.70, 0.76, and 0.11% for lower energy thresholds of 250, 350, and 400 keV, respectively. The increase in SF ranged between 25 and 64% when imaging multiple subjects (three to five) of different size simultaneously in comparison to imaging a single subject. The spill-over ratio (SOR) increases with the number of subjects in the FOV. Scatter correction improved the SOR for both water and air cold compartments of single and multiple imaging studies. The recovery coefficients for different body parts of the mouse whole-body and rat whole-body anatomical models were improved for multiple imaging studies following scatter correction. The magnitude and spatial distribution of the scatter component in small-animal PET imaging of single and multiple subjects simultaneously were characterized, and its impact was evaluated in different situations. Scatter correction improves PET image quality and quantitative accuracy for single rat and simultaneous multiple mice and rat imaging studies, whereas its impact is insignificant in single mouse imaging.

  15. Investigation of the halo-artifact in 68Ga-PSMA-11-PET/MRI.

    PubMed

    Heußer, Thorsten; Mann, Philipp; Rank, Christopher M; Schäfer, Martin; Dimitrakopoulou-Strauss, Antonia; Schlemmer, Heinz-Peter; Hadaschik, Boris A; Kopka, Klaus; Bachert, Peter; Kachelrieß, Marc; Freitag, Martin T

    2017-01-01

    Combined positron emission tomography (PET) and magnetic resonance imaging (MRI) targeting the prostate-specific membrane antigen (PSMA) with a 68Ga-labelled PSMA-analog (68Ga-PSMA-11) is discussed as a promising diagnostic method for patients with suspicion or history of prostate cancer. One potential drawback of this method is severe photopenic (halo-) artifacts surrounding the bladder and the kidneys in the scatter-corrected PET images, which have been reported to occur frequently in clinical practice. The goal of this work was to investigate the occurrence and impact of these artifacts and, secondly, to evaluate variants of the standard scatter correction method with regard to halo-artifact suppression. Experiments using a dedicated pelvis phantom were conducted to investigate whether the halo-artifact is modality-, tracer-, and/or concentration-dependent. Furthermore, 31 patients with a history of prostate cancer were selected from an ongoing 68Ga-PSMA-11-PET/MRI study. For each patient, PET raw data were reconstructed employing six different variants of PET scatter correction: absolute scatter scaling, relative scatter scaling, and relative scatter scaling combined with prompt gamma correction, each of which was combined with a maximum scatter fraction (MaxSF) of MaxSF = 75% or MaxSF = 40%. Evaluation of the reconstructed images with regard to halo-artifact suppression was performed both quantitatively using statistical analysis and qualitatively by two independent readers. The phantom experiments did not reveal any modality-dependency (PET/MRI vs. PET/CT) or tracer-dependency (68Ga vs. 18F-FDG). Patient- and phantom-based data indicated that halo-artifacts derive from high organ-to-background activity ratios (OBR) between bladder/kidneys and surrounding soft tissue, with a positive correlation between OBR and halo size. Comparing different variants of scatter correction, reducing the maximum scatter fraction from the default value MaxSF = 75% to MaxSF = 40% was found to efficiently suppress halo-artifacts in both phantom and patient data. In 1 of 31 patients, reducing the maximum scatter fraction provided new PET-based information changing the patient's diagnosis. Halo-artifacts are particularly observed for 68Ga-PSMA-11-PET/MRI due to 1) the biodistribution of the PSMA-11-tracer resulting in large OBRs for bladder and kidneys and 2) inaccurate scatter correction methods currently used in clinical routine, which tend to overestimate the scatter contribution. If not compensated for, 68Ga-PSMA-11 uptake pathologies may be masked by halo-artifacts leading to false-negative diagnoses. Reducing the maximum scatter fraction was found to efficiently suppress halo-artifacts.

  16. Investigation of the halo-artifact in 68Ga-PSMA-11-PET/MRI

    PubMed Central

    Rank, Christopher M.; Schäfer, Martin; Dimitrakopoulou-Strauss, Antonia; Schlemmer, Heinz-Peter; Hadaschik, Boris A.; Kopka, Klaus; Bachert, Peter; Kachelrieß, Marc

    2017-01-01

    Objectives Combined positron emission tomography (PET) and magnetic resonance imaging (MRI) targeting the prostate-specific membrane antigen (PSMA) with a 68Ga-labelled PSMA-analog (68Ga-PSMA-11) is discussed as a promising diagnostic method for patients with suspicion or history of prostate cancer. One potential drawback of this method is severe photopenic (halo-) artifacts surrounding the bladder and the kidneys in the scatter-corrected PET images, which have been reported to occur frequently in clinical practice. The goal of this work was to investigate the occurrence and impact of these artifacts and, secondly, to evaluate variants of the standard scatter correction method with regard to halo-artifact suppression. Methods Experiments using a dedicated pelvis phantom were conducted to investigate whether the halo-artifact is modality-, tracer-, and/or concentration-dependent. Furthermore, 31 patients with a history of prostate cancer were selected from an ongoing 68Ga-PSMA-11-PET/MRI study. For each patient, PET raw data were reconstructed employing six different variants of PET scatter correction: absolute scatter scaling, relative scatter scaling, and relative scatter scaling combined with prompt gamma correction, each of which was combined with a maximum scatter fraction (MaxSF) of MaxSF = 75% or MaxSF = 40%. Evaluation of the reconstructed images with regard to halo-artifact suppression was performed both quantitatively using statistical analysis and qualitatively by two independent readers. Results The phantom experiments did not reveal any modality-dependency (PET/MRI vs. PET/CT) or tracer-dependency (68Ga vs. 18F-FDG). Patient- and phantom-based data indicated that halo-artifacts derive from high organ-to-background activity ratios (OBR) between bladder/kidneys and surrounding soft tissue, with a positive correlation between OBR and halo size. Comparing different variants of scatter correction, reducing the maximum scatter fraction from the default value MaxSF = 75% to MaxSF = 40% was found to efficiently suppress halo-artifacts in both phantom and patient data. In 1 of 31 patients, reducing the maximum scatter fraction provided new PET-based information changing the patient’s diagnosis. Conclusion Halo-artifacts are particularly observed for 68Ga-PSMA-11-PET/MRI due to 1) the biodistribution of the PSMA-11-tracer resulting in large OBRs for bladder and kidneys and 2) inaccurate scatter correction methods currently used in clinical routine, which tend to overestimate the scatter contribution. If not compensated for, 68Ga-PSMA-11 uptake pathologies may be masked by halo-artifacts leading to false-negative diagnoses. Reducing the maximum scatter fraction was found to efficiently suppress halo-artifacts. PMID:28817656

  17. Exact first order scattering correction for vector radiative transfer in coupled atmosphere and ocean systems

    NASA Astrophysics Data System (ADS)

    Zhai, Peng-Wang; Hu, Yongxiang; Josset, Damien B.; Trepte, Charles R.; Lucker, Patricia L.; Lin, Bing

    2012-06-01

    We have developed a Vector Radiative Transfer (VRT) code for coupled atmosphere and ocean systems based on the successive order of scattering (SOS) method. In order to achieve efficiency and maintain accuracy, the scattering matrix is expanded in terms of the Wigner d functions, and the delta-fit or delta-M technique is used to truncate the commonly present large forward scattering peak. To further improve the accuracy of the SOS code, we have implemented an analytical first order scattering treatment using the exact scattering matrix of the medium in the SOS code. The expansion and truncation techniques are kept for higher order scattering. The exact first order scattering correction was originally published by Nakajima and Tanaka [1]. A new contribution of this work is to account for the exact secondary light scattering caused by the light reflected by and transmitted through the rough air-sea interface.

  18. A breast-specific, negligible-dose scatter correction technique for dedicated cone-beam breast CT: a physics-based approach to improve Hounsfield Unit accuracy

    NASA Astrophysics Data System (ADS)

    Yang, Kai; Burkett, George, Jr.; Boone, John M.

    2014-11-01

    The purpose of this research was to develop a method to correct the cupping artifact caused by x-ray scattering and to achieve consistent Hounsfield Unit (HU) values of breast tissues for a dedicated breast CT (bCT) system. The use of a beam passing array (BPA) composed of parallel holes has been previously proposed for scatter correction in various imaging applications. In this study, we first verified the efficacy and accuracy of using BPA to measure the scatter signal on a cone-beam bCT system. A systematic scatter correction approach was then developed by modeling the scatter-to-primary ratio (SPR) in projection images acquired with and without BPA. To quantitatively evaluate the improved accuracy of HU values, different breast tissue-equivalent phantoms were scanned and radially averaged HU profiles through reconstructed planes were evaluated. The dependency of the correction method on object size and number of projections was studied. A simplified application of the proposed method on five clinical patient scans was performed to demonstrate efficacy. For the typical 10-18 cm breast diameters seen in the bCT application, the proposed method can effectively correct for the cupping artifact and reduce the variation of HU values of breast-equivalent material from 150 to 40 HU. The measured HU values of 100% glandular tissue, 50/50 glandular/adipose tissue, and 100% adipose tissue were approximately 46, -35, and -94, respectively. It was found that only six BPA projections were necessary to accurately implement this method, and the additional dose requirement is less than 1% of the exam dose. The proposed method can effectively correct for the cupping artifact caused by x-ray scattering and retain consistent HU values of breast tissues.
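    The projection-domain use of a modeled scatter-to-primary ratio, as in the approach above, can be reduced to a one-line correction; the sketch below (Python) assumes the measured signal is primary multiplied by (1 + SPR), with a made-up uniform SPR map rather than the authors' model.

```python
import numpy as np

def correct_with_spr(measured_projection, spr_map):
    """Recover the primary signal assuming measured = primary * (1 + SPR)."""
    return measured_projection / (1.0 + spr_map)

projection = np.full((128, 128), 2.0)   # toy measured projection
spr = np.full((128, 128), 0.6)          # hypothetical scatter-to-primary ratio map
primary = correct_with_spr(projection, spr)
print(primary[0, 0])  # 2.0 / 1.6 = 1.25
```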

  19. CORRECTING FOR INTERSTELLAR SCATTERING DELAY IN HIGH-PRECISION PULSAR TIMING: SIMULATION RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palliyaguru, Nipuni; McLaughlin, Maura; Stinebring, Daniel

    2015-12-20

    Light travel time changes due to gravitational waves (GWs) may be detected within the next decade through precision timing of millisecond pulsars. Removal of frequency-dependent interstellar medium (ISM) delays due to dispersion and scattering is a key issue in the detection process. Current timing algorithms routinely correct pulse times of arrival (TOAs) for time-variable delays due to cold plasma dispersion. However, none of the major pulsar timing groups correct for delays due to scattering from multi-path propagation in the ISM. Scattering introduces a frequency-dependent phase change in the signal that results in pulse broadening and arrival time delays. Any method to correct the TOA for interstellar propagation effects must be based on multi-frequency measurements that can effectively separate dispersion and scattering delay terms from frequency-independent perturbations such as those due to a GW. Cyclic spectroscopy, first described in an astronomical context by Demorest (2011), is a potentially powerful tool to assist in this multi-frequency decomposition. As a step toward a more comprehensive ISM propagation delay correction, we demonstrate through a simulation that we can accurately recover impulse response functions (IRFs), such as those that would be introduced by multi-path scattering, with a realistic signal-to-noise ratio (S/N). We demonstrate that timing precision is improved when scatter-corrected TOAs are used, under the assumptions of a high S/N and highly scattered signal. We also show that the effect of pulse-to-pulse “jitter” is not a serious problem for IRF reconstruction, at least for jitter levels comparable to those observed in several bright pulsars.

  20. Simulation tools for scattering corrections in spectrally resolved x-ray computed tomography using McXtrace

    NASA Astrophysics Data System (ADS)

    Busi, Matteo; Olsen, Ulrik L.; Knudsen, Erik B.; Frisvad, Jeppe R.; Kehres, Jan; Dreier, Erik S.; Khalil, Mohamad; Haldrup, Kristoffer

    2018-03-01

    Spectral computed tomography is an emerging imaging method that involves using recently developed energy-discriminating photon-counting detectors (PCDs). This technique enables measurements at isolated high-energy ranges, in which the dominant interaction between the x-rays and the sample is incoherent scattering. The scattered radiation causes a loss of contrast in the results, and its correction has proven to be a complex problem due to its dependence on energy, material composition, and geometry. Monte Carlo simulations can utilize a physical model to estimate the scattering contribution to the signal, at the cost of high computational time. We present a fast Monte Carlo simulation tool, based on McXtrace, to predict the energy-resolved radiation being scattered and absorbed by objects of complex shapes. We validate the tool through measurements using a CdTe single PCD (Multix ME-100) and use it for scattering correction in a simulation of a spectral CT. We found the correction to account for up to 7% relative amplification in the reconstructed linear attenuation. It is a useful tool for x-ray CT to obtain a more accurate material discrimination, especially in the high-energy range, where incoherent scattering becomes the prevailing interaction (>50 keV).

  1. Data consistency-driven scatter kernel optimization for x-ray cone-beam CT

    NASA Astrophysics Data System (ADS)

    Kim, Changhwan; Park, Miran; Sung, Younghun; Lee, Jaehak; Choi, Jiyoung; Cho, Seungryong

    2015-08-01

    Accurate and efficient scatter correction is essential for acquisition of high-quality x-ray cone-beam CT (CBCT) images for various applications. This study was conducted to demonstrate the feasibility of using the data consistency condition (DCC) as a criterion for scatter kernel optimization in scatter deconvolution methods in CBCT. Because data consistency in the mid-plane of CBCT is primarily challenged by scatter, we utilized data consistency to confirm the degree of scatter correction and to steer the update in iterative kernel optimization. By means of the parallel-beam DCC via fan-parallel rebinning, we iteratively optimized the scatter kernel parameters, using a particle swarm optimization algorithm for its computational efficiency and excellent convergence. The proposed method was validated by a simulation study using the XCAT numerical phantom and also by experimental studies using the ACS head phantom and the pelvic part of the Rando phantom. The results showed that the proposed method can effectively improve the accuracy of deconvolution-based scatter correction. Quantitative assessments of image quality parameters such as contrast and structural similarity (SSIM) revealed that the optimally selected scatter kernel restores the contrast to up to 99.5%, 94.4%, and 84.4%, and the SSIM to up to 96.7%, 90.5%, and 87.8%, of the scatter-free values in the XCAT study, the ACS head phantom study, and the pelvis phantom study, respectively. The proposed method can achieve accurate and efficient scatter correction from a single cone-beam scan without the need for any auxiliary hardware or additional experimentation.
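    A structural sketch of kernel-based scatter deconvolution with parameter tuning is given below (Python). The scatter is modeled as a broad, scaled blur of the projection, and a generic optimizer adjusts the kernel amplitude and width; the cost function here is only a stand-in for the data consistency condition, and scipy's Nelder-Mead is used in place of the particle swarm optimizer described in the record.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import minimize

def estimate_scatter(projection, amplitude, sigma):
    """Model scatter as a broad, low-amplitude blurred copy of the projection."""
    return amplitude * gaussian_filter(projection, sigma)

def cost(params, projection):
    """Stand-in consistency cost: penalize negative corrected values and large kernels."""
    amplitude, sigma = params
    if amplitude < 0.0 or sigma <= 0.0:
        return np.inf
    corrected = projection - estimate_scatter(projection, amplitude, sigma)
    return float(np.sum(np.clip(-corrected, 0.0, None) ** 2) + 1e-3 * amplitude)

proj = np.abs(np.random.default_rng(1).normal(1.0, 0.1, size=(64, 64)))
result = minimize(cost, x0=[0.3, 5.0], args=(proj,), method="Nelder-Mead")
print(result.x)  # optimized (amplitude, sigma) of the scatter kernel
```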

  2. A study on scattering correction for γ-photon 3D imaging test method

    NASA Astrophysics Data System (ADS)

    Xiao, Hui; Zhao, Min; Liu, Jiantang; Chen, Hao

    2018-03-01

    A pair of 511 keV γ-photons is generated during a positron annihilation, and their directions differ by 180°. The path and energy information of these photons can be utilized to form a 3D imaging test method in the industrial domain. However, scattered γ-photons are the major factor limiting the imaging precision of the test method. This study proposes a single-scattering correction method for γ-photons from the perspective of spatial geometry. The method first determines the possible scattering points when a scattered γ-photon pair hits a detector pair. The range of scattering angles can then be calculated according to the energy window. Finally, the number of scattered γ-photons is obtained from the attenuation of the total scattered γ-photons along the path. The corrected γ-photon counts are obtained by deducting the scattered γ-photons from the original ones. Two experiments were conducted to verify the effectiveness of the proposed scattering correction method. The results show that the proposed method can efficiently correct for scattered γ-photons and improve the test accuracy.

  3. NOTE: Acceleration of Monte Carlo-based scatter compensation for cardiac SPECT

    NASA Astrophysics Data System (ADS)

    Sohlberg, A.; Watabe, H.; Iida, H.

    2008-07-01

    Single photon emission computed tomography (SPECT) images are degraded by photon scatter, making scatter compensation essential for accurate reconstruction. Reconstruction-based scatter compensation with Monte Carlo (MC) modelling of scatter shows promise for accurate scatter correction, but it is normally hampered by long computation times. The aim of this work was to accelerate MC-based scatter compensation using coarse-grid and intermittent scatter modelling. The acceleration methods were compared to the un-accelerated implementation using MC-simulated projection data of the mathematical cardiac torso (MCAT) phantom modelling 99mTc uptake and clinical myocardial perfusion studies. The results showed that, when combined, the acceleration methods reduced the reconstruction time for 10 ordered-subset expectation maximization (OS-EM) iterations from 56 to 11 min without a significant reduction in image quality, indicating that coarse-grid and intermittent scatter modelling are suitable for MC-based scatter compensation in cardiac SPECT.

  4. Prediction of e± elastic scattering cross-section ratio based on phenomenological two-photon exchange corrections

    NASA Astrophysics Data System (ADS)

    Qattan, I. A.

    2017-06-01

    I present a prediction of the e± elastic scattering cross-section ratio, Re+e-, as determined using a new parametrization of the two-photon exchange (TPE) corrections to the electron-proton elastic scattering cross section σR. The extracted ratio is compared to several previous phenomenological extractions, TPE hadronic calculations, and direct measurements from the comparison of electron and positron scattering. The TPE corrections and the ratio Re+e- show a clear change of sign at low Q2, which is necessary to explain the high-Q2 form factor discrepancy while being consistent with the known Q2→0 limit. While my predictions are generally in good agreement with previous extractions, TPE hadronic calculations, and existing world data, including the recent two measurements from the CLAS and VEPP-3 Novosibirsk experiments, they are larger than the new OLYMPUS measurements at larger Q2 values.

  5. WE-DE-207B-12: Scatter Correction for Dedicated Cone Beam Breast CT Based On a Forward Projection Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, L; Zhu, L; Vedantham, S

    2016-06-15

    Purpose: The image quality of dedicated cone-beam breast CT (CBBCT) is fundamentally limited by substantial x-ray scatter contamination, resulting in cupping artifacts and contrast loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose to suppress x-ray scatter in CBBCT images using a deterministic forward projection model. Method: We first use the 1st-pass FDK-reconstructed CBBCT images to segment fibroglandular and adipose tissue. Attenuation coefficients are assigned to the two tissues based on the x-ray spectrum used for image acquisition, and the segmented volume is forward projected to simulate scatter-free primary projections. We estimate the scatter by subtracting the simulated primary projection from the measured projection, and the resultant scatter map is then further refined by a Fourier-domain fitting algorithm after discarding untrusted scatter information. The final scatter estimate is subtracted from the measured projection for effective scatter correction. In our implementation, the proposed scatter correction takes 0.5 seconds for each projection. The method was evaluated using the overall image spatial non-uniformity (SNU) metric and the contrast-to-noise ratio (CNR) with 5 clinical datasets of BI-RADS 4/5 subjects. Results: For the 5 clinical datasets, our method reduced the SNU from 7.79% to 1.68% in coronal view and from 6.71% to 3.20% in sagittal view. The average CNR is improved by a factor of 1.38 in coronal view and 1.26 in sagittal view. Conclusion: The proposed scatter correction approach requires no additional scans or prior images and uses a deterministic model for efficient calculation. Evaluation with clinical datasets demonstrates the feasibility and stability of the method. These features are attractive for clinical CBBCT and make our method distinct from other approaches. Supported partly by NIH R21EB019597, R21CA134128 and R01CA195512. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
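    Reduced to a single parallel-beam view, the forward-projection idea described above can be sketched as follows (Python): threshold the first-pass reconstruction into two tissues, assign attenuation coefficients, forward project to obtain a scatter-free primary estimate, and treat the smoothed remainder of the measurement as scatter. The coefficients, threshold, and geometry are illustrative only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

MU_ADIPOSE, MU_GLANDULAR = 0.05, 0.09   # assumed attenuation coefficients (1/mm)
VOXEL_MM = 0.25

def simulate_primary(recon_slice, i0=1.0):
    """Forward project a segmented slice along one parallel-beam direction."""
    mu = np.where(recon_slice > 0.07, MU_GLANDULAR, MU_ADIPOSE)
    mu[recon_slice <= 0.01] = 0.0                 # air outside the breast
    line_integrals = mu.sum(axis=0) * VOXEL_MM
    return i0 * np.exp(-line_integrals)

def correct_projection(measured, recon_slice, smooth_sigma=8.0):
    primary_sim = simulate_primary(recon_slice)
    scatter = gaussian_filter(measured - primary_sim, smooth_sigma)  # low-frequency remainder
    return measured - np.clip(scatter, 0.0, None)

recon = np.full((200, 200), 0.06)
recon[80:120, 80:120] = 0.09                      # a denser fibroglandular block
measured = simulate_primary(recon) + 0.05         # add a flat hypothetical scatter level
print(correct_projection(measured, recon).mean())
```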

  6. Experimental measurements with Monte Carlo corrections and theoretical calculations of neutron inelastic scattering cross section of 115In

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Xiao, Jun; Luo, Xiaobing

    2016-10-01

    The neutron inelastic scattering cross section of 115In has been measured by the activation technique at neutron energies of 2.95, 3.94, and 5.24 MeV, with the neutron capture cross section of 197Au as an internal standard. The effects of multiple scattering and flux attenuation were corrected using the Monte Carlo code GEANT4. Based on the experimental values, the 115In neutron inelastic scattering cross sections were calculated theoretically between 1 and 15 MeV with the TALYS software code; the theoretical results of this study are in reasonable agreement with the available experimental results.

  7. Robust scatter correction method for cone-beam CT using an interlacing-slit plate

    NASA Astrophysics Data System (ADS)

    Huang, Kui-Dong; Xu, Zhe; Zhang, Ding-Hua; Zhang, Hua; Shi, Wen-Long

    2016-06-01

    Cone-beam computed tomography (CBCT) has been widely used in medical imaging and industrial nondestructive testing, but the presence of scattered radiation causes a significant reduction in image quality. In this article, a robust scatter correction method for CBCT using an interlacing-slit plate (ISP) is developed for convenient practical use. Firstly, a Gaussian filtering method is proposed to compensate for the missing data of the inner scatter image, while simultaneously avoiding excessively large calculated inner scatter values and smoothing the inner scatter field. Secondly, an interlacing-slit scan without detector gain correction is carried out to enhance the practicality and convenience of the scatter correction method. Finally, a denoising step for the scatter-corrected projection images is added to the processing flow to control noise amplification. The experimental results show that the improved method not only makes the scatter correction more robust and convenient, but also achieves good quality in the scatter-corrected slice images. Supported by National Science and Technology Major Project of the Ministry of Industry and Information Technology of China (2012ZX04007021), Aeronautical Science Fund of China (2014ZE53059), and Fundamental Research Funds for Central Universities of China (3102014KYJD022)

  8. WE-AB-207A-08: BEST IN PHYSICS (IMAGING): Advanced Scatter Correction and Iterative Reconstruction for Improved Cone-Beam CT Imaging On the TrueBeam Radiotherapy Machine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, A; Paysan, P; Brehm, M

    2016-06-15

    Purpose: To improve CBCT image quality for image-guided radiotherapy by applying advanced reconstruction algorithms to overcome scatter, noise, and artifact limitations. Methods: CBCT is used extensively for patient setup in radiotherapy. However, image quality generally falls short of diagnostic CT, limiting soft-tissue based positioning and potential applications such as adaptive radiotherapy. The conventional TrueBeam CBCT reconstructor uses a basic scatter correction and FDK reconstruction, resulting in residual scatter artifacts, suboptimal image noise characteristics, and other artifacts such as cone-beam artifacts. We have developed an advanced scatter correction that uses a finite-element solver (AcurosCTS) to model the behavior of photons as they pass (and scatter) through the object. Furthermore, iterative reconstruction is applied to the scatter-corrected projections, enforcing data consistency with statistical weighting and applying an edge-preserving image regularizer to reduce image noise. The combined algorithms have been implemented on a GPU. CBCT projections from clinically operating TrueBeam systems have been used to compare image quality between the conventional and improved reconstruction methods. Planning CT images of the same patients have also been compared. Results: The advanced scatter correction removes shading and inhomogeneity artifacts, reducing the scatter artifact from 99.5 HU to 13.7 HU in a typical pelvis case. Iterative reconstruction provides further benefit by reducing image noise and eliminating streak artifacts, thereby improving soft-tissue visualization. In a clinical head and pelvis CBCT, the noise was reduced by 43% and 48%, respectively, with no change in spatial resolution (assessed visually). Additional benefits include reduction of cone-beam artifacts and reduction of metal artifacts due to intrinsic downweighting of corrupted rays. Conclusion: The combination of an advanced scatter correction with iterative reconstruction substantially improves CBCT image quality. It is anticipated that clinically acceptable reconstruction times will result from a multi-GPU implementation (the algorithms are under active development and not yet commercially available). All authors are employees of and (may) own stock of Varian Medical Systems.

  9. A modified TEW approach to scatter correction for In-111 and Tc-99m dual-isotope small-animal SPECT.

    PubMed

    Prior, Paul; Timmins, Rachel; Petryk, Julia; Strydhorst, Jared; Duan, Yin; Wei, Lihui; Glenn Wells, R

    2016-10-01

    In dual-isotope (Tc-99m/In-111) small-animal single-photon emission computed tomography (SPECT), quantitative accuracy of Tc-99m activity measurements is degraded due to the detection of Compton-scattered photons in the Tc-99m photopeak window, which originate from the In-111 emissions (cross talk) and from the Tc-99m emission (self-scatter). The standard triple-energy window (TEW) estimates the total scatter (self-scatter and cross talk) using one scatter window on either side of the Tc-99m photopeak window, but the estimate is biased due to the presence of unscattered photons in the scatter windows. The authors present a modified TEW method to correct for total scatter that compensates for this bias and evaluate the method in phantoms and in vivo. The number of unscattered Tc-99m and In-111 photons present in each scatter-window projection is estimated based on the number of photons detected in the photopeak of each isotope, using the isotope-dependent energy resolution of the detector. The camera-head-specific energy resolutions for the 140 keV Tc-99m and 171 keV In-111 emissions were determined experimentally by separately sampling the energy spectra of each isotope. Each sampled spectrum was fit with a Linear + Gaussian function. The fitted Gaussian functions were integrated across each energy window to determine the proportion of unscattered photons from each emission detected in the scatter windows. The method was first tested and compared to the standard TEW in phantoms containing Tc-99m:In-111 activity ratios between 0.15 and 6.90. True activities were determined using a dose calibrator, and SPECT activities were estimated from CT-attenuation-corrected images with and without scatter-correction. The method was then tested in vivo in six rats using In-111-liposome and Tc-99m-tetrofosmin to generate cross talk in the area of the myocardium. The myocardium was manually segmented using the SPECT and CT images, and partial-volume correction was performed using a template-based approach. The rat heart was counted in a well-counter to determine the true activity. In the phantoms without correction for Compton-scatter, Tc-99m activity quantification errors as high as 85% were observed. The standard TEW method quantified Tc-99m activity with an average accuracy of -9.0% ± 0.7%, while the modified TEW was accurate within 5% of truth in phantoms with Tc-99m:In-111 activity ratios ≥0.52. Without scatter-correction, In-111 activity was quantified with an average accuracy of 4.1%, and there was no dependence of accuracy on the activity ratio. In rat myocardia, uncorrected images were overestimated by an average of 23% ± 5%, and the standard TEW had an accuracy of -13.8% ± 1.6%, while the modified TEW yielded an accuracy of -4.0% ± 1.6%. Cross talk and self-scatter were shown to produce quantification errors in phantoms as well as in vivo. The standard TEW provided inaccurate results due to the inclusion of unscattered photons in the scatter windows. The modified TEW improved the scatter estimate and reduced the quantification errors in phantoms and in vivo.
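    The spectral-fitting step behind the modified TEW above can be illustrated with a short sketch (Python): a Linear + Gaussian model is fitted to a single-isotope energy spectrum, and the fitted Gaussian is integrated over a scatter window to estimate how many unscattered photopeak photons fall into that window. The spectrum, window edges, and parameters below are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def linear_plus_gaussian(e, a, b, amp, mu, sigma):
    return a * e + b + amp * np.exp(-0.5 * ((e - mu) / sigma) ** 2)

def gaussian_counts_in_window(amp, mu, sigma, lo, hi):
    """Analytic integral of the fitted Gaussian between energies lo and hi."""
    z = lambda x: (x - mu) / (sigma * np.sqrt(2.0))
    return amp * sigma * np.sqrt(2.0 * np.pi) * 0.5 * (erf(z(hi)) - erf(z(lo)))

# Synthetic Tc-99m-like spectrum: linear background plus a 140 keV photopeak
energy = np.linspace(100.0, 180.0, 161)
model = linear_plus_gaussian(energy, -0.5, 120.0, 900.0, 140.0, 6.0)
counts = np.random.default_rng(2).poisson(np.clip(model, 1.0, None)).astype(float)

popt, _ = curve_fit(linear_plus_gaussian, energy, counts,
                    p0=[0.0, 100.0, 800.0, 140.0, 5.0])
a, b, amp, mu, sigma = popt
# Unscattered photons expected in a hypothetical 3 keV lower scatter window (126-129 keV)
print(gaussian_counts_in_window(amp, mu, sigma, 126.0, 129.0))
```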

  10. A Practical Cone-beam CT Scatter Correction Method with Optimized Monte Carlo Simulations for Image-Guided Radiation Therapy

    PubMed Central

    Xu, Yuan; Bai, Ti; Yan, Hao; Ouyang, Luo; Pompos, Arnold; Wang, Jing; Zhou, Linghong; Jiang, Steve B.; Jia, Xun

    2015-01-01

    Cone-beam CT (CBCT) has become the standard image guidance tool for patient setup in image-guided radiation therapy. However, due to its large illumination field, scattered photons severely degrade its image quality. While kernel-based scatter correction methods have been used routinely in the clinic, it is still desirable to develop Monte Carlo (MC) simulation-based methods due to their accuracy. However, the high computational burden of the MC method has prevented routine clinical application. This paper reports our recent development of a practical method of MC-based scatter estimation and removal for CBCT. In contrast with conventional MC approaches that estimate scatter signals using a scatter-contaminated CBCT image, our method used a planning CT image for MC simulation, which has the advantages of accurate image intensity and absence of image truncation. In our method, the planning CT was first rigidly registered with the CBCT. Scatter signals were then estimated via MC simulation. After scatter signals were removed from the raw CBCT projections, a corrected CBCT image was reconstructed. The entire workflow was implemented on a GPU platform for high computational efficiency. Strategies such as projection denoising, CT image downsampling, and interpolation along the angular direction were employed to further enhance the calculation speed. We studied the impact of key parameters in the workflow on the resulting accuracy and efficiency, based on which the optimal parameter values were determined. Our method was evaluated in numerical simulation, phantom, and real patient cases. In the simulation cases, our method reduced mean HU errors from 44 HU to 3 HU and from 78 HU to 9 HU in the full-fan and the half-fan cases, respectively. In both the phantom and the patient cases, image artifacts caused by scatter, such as ring artifacts around the bowtie area, were reduced. With all the techniques employed, we achieved computation time of less than 30 sec including the time for both the scatter estimation and CBCT reconstruction steps. The efficacy of our method and its high computational efficiency make our method attractive for clinical use. PMID:25860299

  11. Correction of Atmospheric Haze in RESOURCESAT-1 LISS-4 MX Data for Urban Analysis: AN Improved Dark Object Subtraction Approach

    NASA Astrophysics Data System (ADS)

    Mustak, S.

    2013-09-01

    The correction of atmospheric effects is essential because the shorter-wavelength visible bands are strongly affected by atmospheric scattering, especially Rayleigh scattering. The objectives of this paper are to determine the haze values present in all spectral bands and to correct for them for urban analysis. The Improved Dark Object Subtraction method of P. Chavez (1988) is applied for the correction of atmospheric haze in Resourcesat-1 LISS-4 multispectral satellite imagery. Dark Object Subtraction is a very simple image-based atmospheric haze correction which assumes that at least a few pixels within an image should be black (near-zero reflectance); such dark objects are clear water bodies and shadows whose DN values are zero or close to zero in the image. The Simple Dark Object Subtraction method is a first-order atmospheric correction, whereas the Improved Dark Object Subtraction method corrects the haze in terms of atmospheric scattering and path radiance based on a power law describing the relative scattering effect of the atmosphere. The haze values extracted using the Simple Dark Object Subtraction method for the green band (Band 2), red band (Band 3), and NIR band (Band 4) are 40, 34, and 18, whereas the haze values extracted using the Improved Dark Object Subtraction method are 40, 18.02, and 11.80 for the same bands. It is concluded that the haze values extracted by the Improved Dark Object Subtraction method provide more realistic results than the Simple Dark Object Subtraction method.
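    The power-law haze prediction underlying the improved method can be sketched in a few lines (Python): a starting haze value measured in one band is propagated to the other bands with a relative scattering model proportional to lambda to the power -n. The band centers, exponent, and starting haze below are illustrative, not the values used in the paper.

```python
import numpy as np

def predict_haze(start_index, start_haze_dn, band_centers_um, n=2.0):
    """Scale a starting-band haze DN to other bands with a lambda**(-n) model."""
    lam = np.asarray(band_centers_um, dtype=float)
    relative = (lam[start_index] / lam) ** n   # relative scattering w.r.t. start band
    return start_haze_dn * relative

band_centers = [0.555, 0.650, 0.825]           # assumed green/red/NIR centres (micrometres)
haze = predict_haze(start_index=0, start_haze_dn=40.0, band_centers_um=band_centers)
print(np.round(haze, 2))                       # predicted per-band haze DN values

# The correction itself is then a per-band subtraction: DN_corrected = DN - haze[band]
```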

  12. Lidar inelastic multiple-scattering parameters of cirrus particle ensembles determined with geometrical-optics crystal phase functions.

    PubMed

    Reichardt, J; Hess, M; Macke, A

    2000-04-20

    Multiple-scattering correction factors for cirrus particle extinction coefficients measured with Raman and high spectral resolution lidars are calculated with a radiative-transfer model. Cirrus particle-ensemble phase functions are computed from single-crystal phase functions derived in a geometrical-optics approximation. Seven crystal types are considered. In cirrus clouds with height-independent particle extinction coefficients the general pattern of the multiple-scattering parameters has a steep onset at cloud base with values of 0.5-0.7 followed by a gradual and monotonic decrease to 0.1-0.2 at cloud top. The larger the scattering particles are, the more gradual is the rate of decrease. Multiple-scattering parameters of complex crystals and of imperfect hexagonal columns and plates can be well approximated by those of projected-area equivalent ice spheres, whereas perfect hexagonal crystals show values as much as 70% higher than those of spheres. The dependencies of the multiple-scattering parameters on cirrus particle spectrum, base height, and geometric depth and on the lidar parameters laser wavelength and receiver field of view, are discussed, and a set of multiple-scattering parameter profiles for the correction of extinction measurements in homogeneous cirrus is provided.

  13. Image Reconstruction for a Partially Collimated Whole Body PET Scanner

    PubMed Central

    Alessio, Adam M.; Schmitz, Ruth E.; MacDonald, Lawrence R.; Wollenweber, Scott D.; Stearns, Charles W.; Ross, Steven G.; Ganin, Alex; Lewellen, Thomas K.; Kinahan, Paul E.

    2008-01-01

    Partially collimated PET systems have less collimation than conventional 2-D systems and have been shown to offer count rate improvements over 2-D and 3-D systems. Despite this potential, previous efforts have not established image-based improvements with partial collimation and have not customized the reconstruction method for partially collimated data. This work presents an image reconstruction method tailored for partially collimated data. Simulated and measured sensitivity patterns are presented and provide a basis for modification of a fully 3-D reconstruction technique. The proposed method uses a measured normalization correction term to account for the unique sensitivity to true events. This work also proposes a modified scatter correction based on simulated data. Measured image quality data supports the use of the normalization correction term for true events, and suggests that the modified scatter correction is unnecessary. PMID:19096731

  14. Image Reconstruction for a Partially Collimated Whole Body PET Scanner.

    PubMed

    Alessio, Adam M; Schmitz, Ruth E; Macdonald, Lawrence R; Wollenweber, Scott D; Stearns, Charles W; Ross, Steven G; Ganin, Alex; Lewellen, Thomas K; Kinahan, Paul E

    2008-06-01

    Partially collimated PET systems have less collimation than conventional 2-D systems and have been shown to offer count rate improvements over 2-D and 3-D systems. Despite this potential, previous efforts have not established image-based improvements with partial collimation and have not customized the reconstruction method for partially collimated data. This work presents an image reconstruction method tailored for partially collimated data. Simulated and measured sensitivity patterns are presented and provide a basis for modification of a fully 3-D reconstruction technique. The proposed method uses a measured normalization correction term to account for the unique sensitivity to true events. This work also proposes a modified scatter correction based on simulated data. Measured image quality data supports the use of the normalization correction term for true events, and suggests that the modified scatter correction is unnecessary.

  15. TH-A-18C-09: Ultra-Fast Monte Carlo Simulation for Cone Beam CT Imaging of Brain Trauma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sisniega, A; Zbijewski, W; Stayman, J

    Purpose: Application of cone-beam CT (CBCT) to low-contrast soft tissue imaging, such as in detection of traumatic brain injury, is challenged by high levels of scatter. A fast, accurate scatter correction method based on Monte Carlo (MC) estimation is developed for application in high-quality CBCT imaging of acute brain injury. Methods: The correction involves MC scatter estimation executed on an NVIDIA GTX 780 GPU (MC-GPU), with baseline simulation speed of ~1e7 photons/sec. MC-GPU is accelerated by a novel, GPU-optimized implementation of variance reduction (VR) techniques (forced detection and photon splitting). The number of simulated tracks and projections is reduced for additional speed-up. Residual noise is removed and the missing scatter projections are estimated via kernel smoothing (KS) in the projection plane and across gantry angles. The method is assessed using CBCT images of a head phantom presenting a realistic simulation of fresh intracranial hemorrhage (100 kVp, 180 mAs, 720 projections, source-detector distance 700 mm, source-axis distance 480 mm). Results: For a fixed run-time of ~1 sec/projection, GPU-optimized VR reduces the noise in MC-GPU scatter estimates by a factor of 4. For scatter correction, MC-GPU with VR is executed with 4-fold angular downsampling and 1e5 photons/projection, yielding 3.5 minute run-time per scan, and de-noised with optimized KS. Corrected CBCT images demonstrate uniformity improvement of 18 HU and contrast improvement of 26 HU compared to no correction, and a 52% increase in contrast-to-noise ratio in simulated hemorrhage compared to “oracle” constant fraction correction. Conclusion: Acceleration of MC-GPU achieved through GPU-optimized variance reduction and kernel smoothing yields an efficient (<5 min/scan) and accurate scatter correction that does not rely on additional hardware or simplifying assumptions about the scatter distribution. The method is undergoing implementation in a novel CBCT dedicated to brain trauma imaging at the point of care in sports and military applications. Research grant from Carestream Health. JY is an employee of Carestream Health.

  16. Scatter correction in cone-beam CT via a half beam blocker technique allowing simultaneous acquisition of scatter and image information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Ho; Xing Lei; Lee, Rena

    2012-05-15

    Purpose: X-ray scatter incident on the detector degrades the quality of cone-beam computed tomography (CBCT) and represents a problem in volumetric image-guided and adaptive radiation therapy. Several methods using a beam blocker for the estimation and subtraction of scatter have been proposed. However, due to missing information resulting from the obstruction of the blocker, such methods require dual scanning or a dynamically moving blocker to obtain a complete volumetric image. Here, we propose a half beam blocker-based approach, in conjunction with a total variation (TV) regularized Feldkamp-Davis-Kress (FDK) algorithm, to correct scatter-induced artifacts by simultaneously acquiring image and scatter information from a single-rotation CBCT scan. Methods: A half beam blocker, comprising lead strips, is used to simultaneously acquire image data on one half of the projection and scatter data on the other half. One-dimensional cubic B-spline interpolation/extrapolation is applied to derive patient-specific scatter information from the scatter distributions on the strips. The estimated scatter is subtracted from the projection image acquired at the opposite view. With the scatter-corrected projections obtained after this subtraction, the FDK algorithm based on a cosine weighting function is performed to reconstruct the CBCT volume. To suppress the noise in the reconstructed CBCT images produced by geometric errors between two opposed projections and by the interpolated scatter information, total variation regularization is applied through a minimization using a steepest gradient descent optimization method. Experimental studies using Catphan504 and anthropomorphic phantoms were carried out to evaluate the performance of the proposed scheme. Results: The scatter-induced shading artifacts were markedly suppressed in CBCT using the proposed scheme. Compared with CBCT without a blocker, the nonuniformity value was reduced from 39.3% to 3.1%. The root mean square error relative to values inside the regions of interest selected from a benchmark scatter-free image was reduced from 50 to 11.3. The TV regularization also led to a better contrast-to-noise ratio. Conclusions: An asymmetric half beam blocker-based FDK acquisition and reconstruction technique has been established. The proposed scheme enables simultaneous detection of patient-specific scatter and complete volumetric CBCT reconstruction without additional requirements such as prior images, dual scans, or moving strips.
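    A toy version of the strip-sampled scatter interpolation and opposite-view subtraction described above is sketched below (Python), for a single detector row: scatter sampled at the strip positions on the blocked half is interpolated across the row with a cubic spline and subtracted from the row measured at the view 180 degrees away. Geometry and values are placeholders.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def estimate_scatter_profile(strip_positions, strip_scatter, detector_u):
    """Interpolate/extrapolate strip-sampled scatter across a detector row."""
    spline = CubicSpline(strip_positions, strip_scatter, extrapolate=True)
    return spline(detector_u)

detector_u = np.arange(256)                        # detector column index
strip_positions = np.arange(8, 128, 16)            # strip centres on the blocked half
strip_scatter = 0.040 + 5e-5 * strip_positions     # hypothetical sampled scatter values

scatter_row = estimate_scatter_profile(strip_positions, strip_scatter, detector_u)
opposite_view_row = np.full(256, 0.30)             # measured row at the opposite view
corrected_row = opposite_view_row - scatter_row
print(corrected_row[:4])
```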

  17. Fast estimation of first-order scattering in a medical x-ray computed tomography scanner using a ray-tracing technique.

    PubMed

    Liu, Xin

    2014-01-01

    This study describes a deterministic method for simulating the first-order scattering in a medical computed tomography scanner. The method was developed based on a physics model of x-ray photon interactions with matter and a ray tracing technique. The results from simulated scattering were compared to the ones from an actual scattering measurement. Two phantoms with homogeneous and heterogeneous material distributions were used in the scattering simulation and measurement. It was found that the simulated scatter profile was in agreement with the measurement result, with an average difference of 25% or less. Finally, tomographic images with artifacts caused by scatter were corrected based on the simulated scatter profiles. The image quality improved significantly.

  18. Simple aerosol correction technique based on the spectral relationships of the aerosol multiple-scattering reflectances for atmospheric correction over the oceans.

    PubMed

    Ahn, Jae-Hyun; Park, Young-Je; Kim, Wonkook; Lee, Boram

    2016-12-26

    An estimation of the aerosol multiple-scattering reflectance is an important part of the atmospheric correction procedure in satellite ocean color data processing. Most commonly, two near-infrared (NIR) bands are used to estimate the aerosol optical properties and hence the effects of aerosols. Previously, the operational Geostationary Ocean Color Imager (GOCI) atmospheric correction scheme relied on a single-scattering reflectance ratio (SSE), which was developed for the processing of Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data, to determine the appropriate aerosol models and their aerosol optical thicknesses. The scheme computes the reflectance contributions (weighting factors) of candidate aerosol models in the single-scattering domain and then spectrally extrapolates the single-scattering aerosol reflectance from the NIR to the visible (VIS) bands using the SSE. However, it directly applies the weight value to all wavelengths in the multiple-scattering domain, although the multiple-scattering aerosol reflectance has a non-linear relationship with the single-scattering reflectance and the inter-band relationship of multiple-scattering aerosol reflectances is also non-linear. To avoid these issues, we propose an alternative scheme for estimating the aerosol reflectance that uses the spectral relationships in the aerosol multiple-scattering reflectance between different wavelengths (called SRAMS). The process directly calculates the multiple-scattering reflectance contributions in the NIR with no residual errors for the selected aerosol models. It then spectrally extrapolates the reflectance contribution from the NIR to the visible bands for each selected model using the SRAMS. To assess the performance of the algorithm with regard to errors in the retrieved water reflectance at the surface, or remote-sensing reflectance, we compared the SRAMS atmospheric correction results with the SSE atmospheric correction using both simulations and in situ match-ups with GOCI data. In the simulations, the mean errors for bands from 412 to 555 nm were 5.2% for the SRAMS scheme and 11.5% for the SSE scheme in case-I waters. For the in situ match-ups, the mean errors were 16.5% for the SRAMS scheme and 17.6% for the SSE scheme in both case-I and case-II waters. Although we applied the SRAMS algorithm to GOCI, it can be applied to other ocean color sensors that have two NIR wavelengths.

  19. A library least-squares approach for scatter correction in gamma-ray tomography

    NASA Astrophysics Data System (ADS)

    Meric, Ilker; Anton Johansen, Geir; Valgueiro Malta Moreira, Icaro

    2015-03-01

    Scattered radiation is known to lead to distortion in reconstructed images in computed tomography (CT). The effects of scattered radiation are especially pronounced in non-scanning, multiple-source systems, which are preferred for flow imaging where the instantaneous density distribution of the flow components is of interest. In this work, a new method based on a library least-squares (LLS) approach is proposed as a means of estimating and correcting for the scatter contribution. The validity of the proposed method is tested using the 85-channel industrial gamma-ray tomograph previously developed at the University of Bergen (UoB). The results presented here confirm that the LLS approach can effectively estimate the amounts of the transmission and scatter components in any given detector of the UoB gamma-ray tomography system.
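
    As an illustration of the library least-squares idea (not the authors' implementation), the sketch below decomposes a synthetic detector spectrum into pre-computed "transmission" and "scatter" library components by ordinary least squares; the Gaussian library shapes and weights are invented for the example.

```python
# Minimal sketch of a library least-squares (LLS) decomposition: a measured detector
# spectrum is modeled as a linear combination of pre-computed library spectra
# (here a "transmission" and a "scatter" component). The library shapes below are
# synthetic Gaussians chosen only for illustration.
import numpy as np

energies = np.linspace(0, 800, 256)                                 # keV bins
lib_transmission = np.exp(-0.5 * ((energies - 662) / 15.0) ** 2)    # photopeak-like shape
lib_scatter = np.exp(-0.5 * ((energies - 350) / 120.0) ** 2)        # broad Compton-like shape
library = np.column_stack([lib_transmission, lib_scatter])

# Synthetic "measured" spectrum: 0.7 parts transmission + 1.8 parts scatter + noise
rng = np.random.default_rng(0)
measured = library @ np.array([0.7, 1.8]) + rng.normal(0, 0.02, energies.size)

# Least-squares estimate of the component weights
weights, *_ = np.linalg.lstsq(library, measured, rcond=None)
print("estimated transmission/scatter weights:", weights)

# The scatter contribution can then be subtracted from the measured spectrum
scatter_free = measured - library[:, 1] * weights[1]
```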

  20. A three-dimensional model-based partial volume correction strategy for gated cardiac mouse PET imaging

    NASA Astrophysics Data System (ADS)

    Dumouchel, Tyler; Thorn, Stephanie; Kordos, Myra; DaSilva, Jean; Beanlands, Rob S. B.; deKemp, Robert A.

    2012-07-01

    Quantification in cardiac mouse positron emission tomography (PET) imaging is limited by the imaging spatial resolution. Spillover of left ventricle (LV) myocardial activity into adjacent organs results in partial volume (PV) losses, leading to underestimation of myocardial activity. A PV correction method was developed to restore the accuracy of the activity distribution for FDG mouse imaging. The PV correction model was based on convolving an LV image estimate with a 3D point spread function. The LV model was described regionally by a five-parameter profile including myocardial, background and blood activities, which were separated into three compartments by the endocardial radius and the myocardial wall thickness. The PV correction was tested with digital simulations and a physical 3D mouse LV phantom. In vivo cardiac FDG mouse PET imaging was also performed. Following imaging, the mice were sacrificed and the tracer biodistribution in the LV and liver tissue was measured using a gamma counter. The PV correction algorithm improved recovery from 50% to within 5% of the truth for the simulated and measured phantom data, and improved image uniformity by 5-13%. The PV correction algorithm improved the mean myocardial LV recovery from 0.56 (0.54) to 1.13 (1.10) without (with) scatter and attenuation corrections. The mean image uniformity was improved from 26% (26%) to 17% (16%) without (with) scatter and attenuation corrections applied. Scatter and attenuation corrections were not observed to significantly impact PV-corrected myocardial recovery or image uniformity. The image-based PV correction algorithm can increase the accuracy of PET image activity and improve the uniformity of the activity distribution in normal mice. The algorithm may be applied using different tracers, in transgenic models that affect myocardial uptake, or in different species, provided there is sufficient image quality and similar contrast between the myocardium and surrounding structures.
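
    A hedged 1D sketch of this kind of model fit is given below: a piecewise-constant blood/myocardium/background radial profile blurred by a Gaussian PSF (written in closed form with error functions) is fit to a noisy synthetic profile, and the fitted, un-blurred myocardial value serves as the partial-volume-corrected estimate. The parameter values, PSF width and 1D geometry are assumptions for illustration, not those of the cited study.

```python
# 1D sketch of a blurred compartment-model fit for partial volume correction.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

r = np.linspace(0.0, 20.0, 400)          # radial position [mm]
PSF_SIGMA_MM = 1.5                       # assumed reconstructed spatial resolution

def step(x):
    """Unit step blurred by the Gaussian PSF (cumulative Gaussian)."""
    return 0.5 * (1.0 + erf(x / (np.sqrt(2.0) * PSF_SIGMA_MM)))

def lv_profile(r, blood, myo, bkg, r_endo, wall):
    """Five-parameter radial LV model (blood / myocardium / background) blurred by the PSF."""
    return (blood
            + (myo - blood) * step(r - r_endo)
            + (bkg - myo) * step(r - (r_endo + wall)))

# Synthetic "measured" profile generated from a known truth, plus noise
truth = (0.2, 1.0, 0.1, 8.0, 2.0)        # blood, myo, bkg, endocardial radius, wall [mm]
rng = np.random.default_rng(1)
measured = lv_profile(r, *truth) + rng.normal(0.0, 0.02, r.size)

# Fitting the blurred model returns the un-blurred myocardial value,
# i.e. the partial-volume-corrected activity estimate
popt, _ = curve_fit(lv_profile, r, measured, p0=(0.1, 0.7, 0.1, 7.0, 1.5))
print("PV-corrected myocardial activity ~ %.3f (truth 1.0)" % popt[1])
```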

  1. Feasibility study of direct spectra measurements for Thomson scattered signals for KSTAR fusion-grade plasmas

    NASA Astrophysics Data System (ADS)

    Park, K.-R.; Kim, K.-h.; Kwak, S.; Svensson, J.; Lee, J.; Ghim, Y.-c.

    2017-11-01

    A feasibility study of direct spectral measurements of Thomson-scattered photons for fusion-grade plasmas is performed based on a forward model of the KSTAR Thomson scattering system. Expected spectra in the forward model are calculated based on the Selden function including the relativistic polarization correction. Noise in the signal is modeled with photon noise and Gaussian electrical noise. Electron temperature and density are inferred using Bayesian probability theory. Based on the bias error, full width at half maximum, and entropy of the posterior distributions, spectral measurements are found to be feasible. Comparisons between spectrometer-based and polychromator-based Thomson scattering systems are performed with varying quantum efficiency and electrical noise levels.

  2. Development of online automatic detector of hydrocarbons and suspended organic matter by simultaneous acquisition of fluorescence and scattering.

    PubMed

    Mbaye, Moussa; Diaw, Pape Abdoulaye; Gaye-Saye, Diabou; Le Jeune, Bernard; Cavalin, Goulven; Denis, Lydie; Aaron, Jean-Jacques; Delmas, Roger; Giamarchi, Philippe

    2018-03-05

    Permanent online monitoring of water supply pollution by hydrocarbons is needed for various industrial plants, to serve as an alert when thresholds are exceeded. Fluorescence spectroscopy is a suitable technique for this purpose due to its sensitivity and moderate cost. However, fluorescence measurements can be disturbed by the presence of suspended organic matter, which induces beam scattering and absorption, leading to an underestimation of hydrocarbon content. To overcome this problem, we propose an original technique of fluorescence spectra correction, based on a measurement of the excitation beam scattering caused by suspended organic matter on the left side of the Rayleigh scattering spectral line. This correction allowed us to obtain a statistically validated estimate of the naphthalene content (used as representative of polyaromatic hydrocarbon contamination), regardless of the amount of suspended organic matter in the sample. Moreover, based on this correction, it also becomes possible to estimate the amount of suspended organic matter. With this approach, the online warning system remains operational even when suspended organic matter is present in the water supply.

  3. The beam stop array method to measure object scatter in digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Lee, Haeng-hwa; Kim, Ye-seul; Park, Hye-Suk; Kim, Hee-Joung; Choi, Jae-Gu; Choi, Young-Wook

    2014-03-01

    Scattered radiation is inevitably generated in the object. The distribution of the scattered radiation is influenced by object thickness, field size, object-to-detector distance, and primary energy. One way to measure scatter intensities is to measure the signal detected under the shadow of the lead discs of a beam-stop array (BSA). The scatter measured by the BSA includes not only the radiation scattered within the object (object scatter), but also scatter from external sources, which include the X-ray tube, detector, collimator, x-ray filter, and the BSA itself. Once this background scatter is excluded, the method can be applied to different scanner geometries by simple parameter adjustments, without prior knowledge of the scanned object. In this study, a method using the BSA to differentiate scatter generated in the phantom (object scatter) from the external background was used. Furthermore, this method was applied to the BSA algorithm to correct the object scatter. In order to confirm the background scattered radiation, we obtained the scatter profiles and scatter fraction (SF) profiles in the direction perpendicular to the chest wall edge (CWE) with and without scattering material. The scatter profiles with and without the scattering material were similar in the region between 127 mm and 228 mm from the chest wall. This result indicated that the scatter measured by the BSA included background scatter. Moreover, the BSA algorithm with the proposed method could correct the object scatter, because the total radiation profiles after object scatter correction corresponded to those of the original image in the region between 127 mm and 228 mm from the chest wall. As a result, the BSA method to measure object scatter could be used to remove background scatter, and the method can be applied to different scanner geometries after background scatter correction. In conclusion, the BSA algorithm with the proposed method is effective for correcting object scatter.
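
    The sketch below illustrates the beam-stop-array principle described above under synthetic assumptions (disc layout, smooth Gaussian primary and scatter fields, uniform external background): the signal behind the discs minus the background measured without the object gives object-scatter samples, which are interpolated to a full-field map and subtracted.

```python
# Sketch of the beam-stop-array (BSA) idea: sample the signal behind the lead discs,
# subtract the background scatter measured without the object, interpolate the
# remaining (object) scatter over the full field and remove it from the projection.
import numpy as np
from scipy.interpolate import griddata

ny, nx = 256, 256
yy, xx = np.mgrid[0:ny, 0:nx]

# Synthetic projection = smooth primary + smooth object scatter + a background
# scatter term that is present even without the object (external sources).
primary = 1000.0 * np.exp(-((xx - 128) ** 2 + (yy - 128) ** 2) / (2 * 90.0 ** 2))
object_scatter = 120.0 * np.exp(-((xx - 128) ** 2) / (2 * 150.0 ** 2))
background_scatter = 30.0 * np.ones((ny, nx))
with_object = primary + object_scatter + background_scatter
without_object = background_scatter                      # BSA scan without scattering material

# Beam-stop positions; behind the discs the primary is blocked, so only scatter remains
stops = [(y, x) for y in range(32, 256, 48) for x in range(32, 256, 48)]
pts = np.array(stops)
behind_stops = object_scatter + background_scatter
samples = behind_stops[pts[:, 0], pts[:, 1]] - without_object[pts[:, 0], pts[:, 1]]

# Interpolate to a full-field object-scatter map and correct the projection
scatter_map = griddata(pts, samples, (yy, xx), method="cubic", fill_value=samples.mean())
corrected = with_object - without_object - scatter_map   # estimate of the primary signal
print("max residual vs true primary:", np.abs(corrected - primary).max())
```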

  4. Improvement of scattering correction for in situ coastal and inland water absorption measurement using exponential fitting approach

    NASA Astrophysics Data System (ADS)

    Ye, Huping; Li, Junsheng; Zhu, Jianhua; Shen, Qian; Li, Tongji; Zhang, Fangfang; Yue, Huanyin; Zhang, Bing; Liao, Xiaohan

    2017-10-01

    The absorption coefficient of water is an important bio-optical parameter for water optics and water color remote sensing. However, scattering correction is essential to obtain accurate in situ absorption coefficient values with the nine-wavelength absorption and attenuation meter AC9. The commonly used correction fails in Case 2 waters because it assumes zero absorption in the near-infrared (NIR) region and consequently underestimates the absorption coefficient in the red region, which affects processes such as semi-analytical remote sensing inversion. In this study, the scattering contribution was evaluated by an exponential fitting approach using AC9 measurements at seven wavelengths (412, 440, 488, 510, 532, 555, and 715 nm), and the corresponding scattering correction was applied. The correction was applied to representative in situ data from moderately turbid coastal water, highly turbid coastal water, eutrophic inland water, and turbid inland water. The results suggest that the absorption levels in the red and NIR regions are significantly higher than those obtained using standard scattering error correction procedures. Knowledge of the deviation between this method and the commonly used scattering correction methods will facilitate the evaluation of its effect on satellite remote sensing of water constituents and on general optical research using different scattering correction methods.
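
    Purely as a schematic of the exponential-fitting step (not the published correction procedure), the snippet below fits an exponential spectral model to scattering-error estimates at a few AC9 bands and evaluates it at all bands so it can be subtracted from the measured absorption; all coefficients, band values and the synthetic spectra are made up.

```python
# Schematic of exponential fitting of a scattering-error term over AC9 bands.
import numpy as np
from scipy.optimize import curve_fit

ac9_bands = np.array([412., 440., 488., 510., 532., 555., 650., 676., 715.])   # nm

def exp_model(lam, amp, slope):
    """Assumed exponential spectral shape of the scattering error."""
    return amp * np.exp(-slope * (lam - 412.0))

# Scattering-error estimates at the seven bands used for fitting (synthetic numbers)
fit_lam = np.array([412., 440., 488., 510., 532., 555., 715.])
fit_err = exp_model(fit_lam, 0.10, 0.005) + np.random.default_rng(2).normal(0, 0.002, fit_lam.size)

popt, _ = curve_fit(exp_model, fit_lam, fit_err, p0=(0.05, 0.01))
scatter_correction = exp_model(ac9_bands, *popt)          # evaluated at all AC9 bands

# Synthetic measured absorption = flat "true" absorption + the same error shape
measured_absorption = np.full(ac9_bands.size, 0.3) + exp_model(ac9_bands, 0.10, 0.005)
corrected_absorption = measured_absorption - scatter_correction
print(np.round(corrected_absorption, 3))                  # ~0.3 at every band
```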

  5. Evaluation of attenuation and scatter correction requirements in small animal PET and SPECT imaging

    NASA Astrophysics Data System (ADS)

    Konik, Arda Bekir

    Positron emission tomography (PET) and single photon emission tomography (SPECT) are two nuclear emission-imaging modalities that rely on the detection of high-energy photons emitted from radiotracers administered to the subject. The majority of these photons are attenuated (absorbed or scattered) in the body, resulting in count losses or deviations from true detection, which in turn degrades the accuracy of images. In clinical emission tomography, sophisticated correction methods are often required, employing additional x-ray CT or radionuclide transmission scans. Having proven their potential in both clinical and research areas, both PET and SPECT are being adapted for small animal imaging. However, despite the growing interest in small animal emission tomography, little scientific information exists about the accuracy of these correction methods for smaller objects, or about what level of correction is required. The purpose of this work is to determine the role of attenuation and scatter corrections as a function of object size through simulations. The simulations were performed using Interactive Data Language (IDL) and a Monte Carlo based package, the Geant4 application for emission tomography (GATE). In the IDL simulations, PET and SPECT data acquisition were modeled in the presence of attenuation. A mathematical emission and attenuation phantom approximating a thorax slice, as well as slices from real PET/CT data, were scaled to five different sizes (i.e., human, dog, rabbit, rat and mouse). The simulated emission data collected from these objects were reconstructed. The reconstructed images, with and without attenuation correction, were compared to the ideal (i.e., non-attenuated) reconstruction. Next, using GATE, scatter fraction values (the ratio of the scatter counts to the total counts) of PET and SPECT scanners were measured for various sizes of NEMA (cylindrical phantoms representing small animals and humans), MOBY (realistic mouse/rat model) and XCAT (realistic human model) digital phantoms. In addition, PET projection files for different sizes of MOBY phantoms were reconstructed under six different conditions including attenuation and scatter corrections. Selected regions were analyzed for these different reconstruction conditions and object sizes. Finally, real mouse data from the physical counterpart of the same small animal PET scanner modeled in our simulations were analyzed for similar reconstruction conditions. Both the IDL and GATE simulations showed that, for small animal PET and SPECT, even the smallest objects (~2 cm diameter) showed ~15% error when neither attenuation nor scatter was corrected. However, a simple attenuation correction using a uniform attenuation map and an object boundary obtained from the emission data significantly reduces this error in non-lung regions (~1% for the smallest size and ~6% for the largest size). In the lungs, emission values were overestimated when only attenuation correction was performed. In addition, we did not observe any significant improvement from using the actual attenuation map instead of a uniform one (e.g., only ~0.5% for the largest size in the PET studies). The scatter correction was not significant for smaller objects, but became increasingly important for larger objects. These results suggest that for all mouse sizes and most rat sizes, uniform attenuation correction can be performed using emission data only. For smaller sizes up to ~4 cm, scatter correction is not required even in lung regions. For larger sizes, if accurate quantification is needed, an additional transmission scan may be required to estimate an accurate attenuation map for both attenuation and scatter corrections.

  6. SU-F-T-142: An Analytical Model to Correct the Aperture Scattered Dose in Clinical Proton Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, B; Liu, S; Zhang, T

    2016-06-15

    Purpose: Apertures or collimators are used to laterally shape proton beams in double scattering (DS) delivery and to sharpen the penumbra in pencil beam (PB) delivery. However, aperture-scattered dose is not included in the current dose calculations of the treatment planning system (TPS). The purpose of this study is to provide a method to correct the aperture-scattered dose based on an analytical model. Methods: A DS beam with a non-divergent aperture was delivered using a single-room proton machine. Dose profiles were measured with an ion chamber scanning in water and a 2-D ion chamber matrix with solid-water buildup at various depths. The measured doses were considered as the sum of the non-contaminated dose and the aperture-scattered dose. The non-contaminated dose was calculated by the TPS and subtracted from the measured dose. The aperture-scattered dose was modeled as a 1D Gaussian distribution. For 2-D fields, to calculate the scattered dose from all the edges of the aperture, a sum weighted by the distance from the calculation point to the aperture edge was used in the model. The gamma index was calculated between the measured and calculated dose with and without scatter correction. Results: For a beam with a range of 23 cm and an aperture size of 20 cm, the contribution of the scatter horn was ~8% of the total dose at 4 cm depth and diminished to 0 at 15 cm depth. The amplitude of the scattered dose decreased linearly with increasing depth. The 1D gamma index (2%/2 mm) between the calculated and measured profiles increased from 63% to 98% at 4 cm depth and from 83% to 98% at 13 cm depth. The 2D gamma index (2%/2 mm) at 4 cm depth improved from 78% to 94%. Conclusion: Using this simple analytical method, the discrepancy between the measured and calculated dose was significantly reduced.
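
    A sketch of this kind of analytical model is shown below: a 1D Gaussian scatter horn anchored at the aperture edges, with an amplitude that falls off linearly with depth, added to an idealized TPS lateral profile. The sigma and the exact amplitude/depth parameterization are assumptions loosely guided by the numbers quoted in the abstract, not the authors' model.

```python
# Sketch of a 1D Gaussian aperture-scatter model added to an idealized TPS profile.
import numpy as np

def aperture_scatter_profile(x_cm, depth_cm, edge_cm=10.0, sigma_cm=0.8,
                             amp_at_4cm=0.08, vanish_depth_cm=15.0):
    """Fractional scatter dose vs. lateral position x for a given depth."""
    # Linear fall-off of the scatter amplitude with depth (assumed parameterization)
    amp = amp_at_4cm * max(0.0, (vanish_depth_cm - depth_cm) / (vanish_depth_cm - 4.0))
    # Gaussian horns centered on both edges of a 20 cm aperture (at +/-10 cm)
    return amp * (np.exp(-0.5 * ((x_cm - edge_cm) / sigma_cm) ** 2) +
                  np.exp(-0.5 * ((x_cm + edge_cm) / sigma_cm) ** 2))

x = np.linspace(-15, 15, 301)
tps_dose = np.where(np.abs(x) <= 10.0, 1.0, 0.0)          # idealized TPS lateral profile
for depth in (4.0, 10.0, 15.0):
    corrected = tps_dose + aperture_scatter_profile(x, depth)
    print(f"depth {depth:4.1f} cm: peak scatter horn = {corrected.max() - 1.0:.3f}")
```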

  7. [Development of a Striatal and Skull Phantom for Quantitative 123I-FP-CIT SPECT].

    PubMed

    Ishiguro, Masanobu; Uno, Masaki; Miyazaki, Takuma; Kataoka, Yumi; Toyama, Hiroshi; Ichihara, Takashi

    123I-labelled N-(3-fluoropropyl)-2β-carbomethoxy-3β-(4-iodophenyl)nortropane (123I-FP-CIT) single photon emission computerized tomography (SPECT) images are used for the differential diagnosis of disorders such as Parkinson's disease (PD). The specific binding ratio (SBR) is affected by scattering and attenuation in SPECT imaging, because gender and age lead to changes in skull density. It is therefore necessary to clarify and correct the influence of the skull using a phantom that simulates it. The purpose of this study was to develop phantoms that allow evaluation of scattering and attenuation correction. Skull phantoms were prepared based on measurements of the average computed tomography (CT) value and average skull thickness of 12 males and 16 females. 123I-FP-CIT SPECT imaging of a striatal phantom was performed with these skull phantoms, reproducing normal and PD accumulation patterns. SPECT images were reconstructed with scattering and attenuation correction. The SBR corrected for the partial volume effect (SBRact) and the conventional SBR (SBRBolt) were measured and compared. The striatal and skull phantoms with 123I-FP-CIT were able to reproduce the normal accumulation pattern and the disease state of PD, and also reproduced the influence of skull density on SPECT imaging. Relative to the true SBR, the error of SBRact was much smaller than that of SBRBolt. The effect of skull density changes on the SBR could be corrected by scattering and attenuation correction in 123I-FP-CIT SPECT imaging. The combination of the triple energy window method and the CT-based attenuation correction method would be the best correction method for SBRact.

  8. SU-E-I-08: Investigation of Deconvolution Methods for Blocker-Based CBCT Scatter Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, C; Jin, M; Ouyang, L

    2015-06-15

    Purpose: To investigate whether deconvolution methods can improve scatter estimation under different blurring and noise conditions for blocker-based scatter correction methods in cone-beam X-ray computed tomography (CBCT). Methods: An “ideal” projection image with scatter was first simulated for blocker-based CBCT data acquisition by assuming no blurring effect and no noise. The ideal image was then convolved with long-tail point spread functions (PSF) of different widths to mimic the blurring effect from the finite focal spot and detector response. Different levels of noise were also added. Three deconvolution methods, 1) inverse filtering, 2) Wiener, and 3) Richardson-Lucy, were used to recover the scatter signal in the blocked region. The root mean square error (RMSE) of the estimated scatter serves as a quantitative measure of the performance of the different methods under different blurring and noise conditions. Results: Due to the blurring effect, the scatter signal in the blocked region is contaminated by the primary signal in the unblocked region. The direct use of the signal in the blocked region to estimate scatter (“direct method”) leads to large RMSE values, which increase with the width of the PSF and with the noise level. Inverse filtering is very sensitive to noise and practically useless. The Wiener and Richardson-Lucy deconvolution methods significantly improve scatter estimation compared to the direct method. For a typical medium PSF and medium noise condition, both methods (~20 RMSE) achieve a 4-fold improvement over the direct method (~80 RMSE). The Wiener method deals better with large noise and Richardson-Lucy works better for a wide PSF. Conclusion: We investigated several deconvolution methods to recover the scatter signal in the blocked region for blocker-based scatter correction in CBCT. Our simulation results demonstrate that Wiener and Richardson-Lucy deconvolution can significantly improve the scatter estimation compared to the direct method.
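
    To illustrate one of the compared methods, the sketch below applies a hand-written Richardson-Lucy deconvolution to a 1D profile blurred by a long-tail PSF and compares the scatter estimate in the blocked strips with the direct (blurred) readout; the PSF, strip layout and noise level are illustrative assumptions, not the simulation settings of the cited work.

```python
# Sketch of recovering a blurred blocked-region signal with Richardson-Lucy deconvolution.
import numpy as np

def long_tail_psf(n=61, core_sigma=1.5, tail_scale=12.0, tail_frac=0.2):
    """Gaussian core plus an exponential tail, normalized to unit sum."""
    x = np.arange(n) - n // 2
    core = np.exp(-0.5 * (x / core_sigma) ** 2)
    tail = np.exp(-np.abs(x) / tail_scale)
    psf = (1 - tail_frac) * core / core.sum() + tail_frac * tail / tail.sum()
    return psf / psf.sum()

def richardson_lucy_1d(observed, psf, iterations=50):
    """Standard multiplicative Richardson-Lucy update for a 1D signal."""
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-8)
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# "Ideal" profile: low scatter level behind the blockers, high primary+scatter elsewhere
ideal = np.full(400, 800.0)
for start in range(20, 400, 80):
    ideal[start:start + 12] = 90.0                        # blocked (scatter-only) strips

psf = long_tail_psf()
rng = np.random.default_rng(3)
observed = np.convolve(ideal, psf, mode="same") + rng.normal(0, 5.0, ideal.size)

recovered = richardson_lucy_1d(observed, psf)
blocked = np.zeros(400, bool)
for start in range(20, 400, 80):
    blocked[start + 4:start + 8] = True                   # centers of the blocked strips
print("scatter estimate: direct =", observed[blocked].mean().round(1),
      " deconvolved =", recovered[blocked].mean().round(1), " truth = 90.0")
```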

  9. Coastal Zone Color Scanner atmospheric correction algorithm - Multiple scattering effects

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.; Castano, Diego J.

    1987-01-01

    Errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm are analyzed. The analysis is based on radiative transfer computations in model atmospheres, in which the aerosols and molecules are distributed vertically in an exponential manner, with most of the aerosol scattering located below the molecular scattering. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, making it possible to determine the errors along typical CZCS scan lines. Information provided by the analysis makes it possible to judge the efficacy of the current algorithm with the current sensor and to estimate the impact of the algorithm-induced errors on a variety of applications.

  10. Fast GPU-based Monte Carlo code for SPECT/CT reconstructions generates improved 177Lu images.

    PubMed

    Rydén, T; Heydorn Lagerlöf, J; Hemmingsson, J; Marin, I; Svensson, J; Båth, M; Gjertsson, P; Bernhardt, P

    2018-01-04

    Full Monte Carlo (MC)-based SPECT reconstructions have a strong potential for correcting image-degrading factors, but the reconstruction times are long. The objective of this study was to develop a highly parallel Monte Carlo code for fast, ordered subset expectation maximization (OSEM) reconstructions of SPECT/CT images. The MC code was written in the Compute Unified Device Architecture language for a computer with four graphics processing units (GPUs) (GeForce GTX Titan X, Nvidia, USA). This enabled simulations of parallel photon emissions from the voxel matrix (128³ or 256³). Each computed tomography (CT) number was converted to attenuation coefficients for photoabsorption, coherent scattering, and incoherent scattering. For photon scattering, the deflection angle was determined by the differential scattering cross sections. An angular response function was developed and used to model the accepted angles for photon interaction with the crystal, and a detector scattering kernel was used to model photon scattering in the detector. Predefined energy and spatial resolution kernels for the crystal were used. The MC code was implemented in the OSEM reconstruction of clinical and phantom 177Lu SPECT/CT images. The Jaszczak image quality phantom was used to evaluate the performance of the MC reconstruction in comparison with attenuation-corrected (AC) OSEM reconstructions and attenuation-corrected OSEM reconstructions with resolution recovery correction (RRC). The performance of the MC code was 3200 million photons/s. The required number of photons emitted per voxel to obtain a sufficiently low noise level in the simulated image was 200 for a 128³ voxel matrix. With this number of emitted photons per voxel, the MC-based OSEM reconstruction with ten subsets was performed within 20 s/iteration. The images converged after around six iterations; the reconstruction time was therefore around 3 min. The activity recovery for the spheres in the Jaszczak phantom was clearly improved with MC-based OSEM reconstruction; e.g., the activity recovery was 88% for the largest sphere, while it was 66% for AC-OSEM and 79% for RRC-OSEM. The GPU-based MC code generated an MC-based SPECT/CT reconstruction within a few minutes, and reconstructed patient images of 177Lu-DOTATATE treatments revealed clearly improved resolution and contrast.

  11. Dispersive approach to two-photon exchange in elastic electron-proton scattering

    DOE PAGES

    Blunden, P. G.; Melnitchouk, W.

    2017-06-14

    We examine the two-photon exchange corrections to elastic electron-nucleon scattering within a dispersive approach, including contributions from both nucleon and Δ intermediate states. The dispersive analysis avoids off-shell uncertainties inherent in traditional approaches based on direct evaluation of loop diagrams, and guarantees the correct unitary behavior in the high energy limit. Using empirical information on the electromagnetic nucleon elastic and NΔ transition form factors, we compute the two-photon exchange corrections both algebraically and numerically. Finally, results are compared with recent measurements of e⁺p to e⁻p cross section ratios from the CLAS, VEPP-3 and OLYMPUS experiments.

  12. Rotational distortion correction in endoscopic optical coherence tomography based on speckle decorrelation

    PubMed Central

    Uribe-Patarroyo, Néstor; Bouma, Brett E.

    2015-01-01

    We present a new technique for the correction of nonuniform rotation distortion in catheter-based optical coherence tomography (OCT), based on the statistics of speckle between A-lines using intensity-based dynamic light scattering. This technique does not rely on tissue features and can be performed on single frames of data, thereby enabling real-time image correction. We demonstrate its suitability in a gastrointestinal balloon-catheter OCT system, determining the actual rotational speed with high temporal resolution, and present corrected cross-sectional and en face views showing significant enhancement of image quality. PMID:26625040

  13. Extracting the σ-term from low-energy pion-nucleon scattering

    NASA Astrophysics Data System (ADS)

    Ruiz de Elvira, Jacobo; Hoferichter, Martin; Kubis, Bastian; Meißner, Ulf-G.

    2018-02-01

    We present an extraction of the pion-nucleon (πN) scattering lengths from low-energy πN scattering, by fitting a representation based on Roy-Steiner equations to the low-energy database. We show that the resulting values confirm the scattering-length determination from pionic atoms, and discuss the stability of the fit results regarding electromagnetic corrections and experimental normalization uncertainties in detail. Our results provide further evidence for a large πN σ-term, σ_{πN} = 58(5) MeV, in agreement with, albeit less precise than, the determination from pionic atoms.

  14. Evaluation of a scattering correction method for high energy tomography

    NASA Astrophysics Data System (ADS)

    Tisseur, David; Bhatia, Navnina; Estre, Nicolas; Berge, Léonie; Eck, Daniel; Payan, Emmanuel

    2018-01-01

    One of the main drawbacks of cone beam computed tomography (CBCT) is the contribution of photons scattered by the object and the detector. Scattered photons are deflected from their original path after interacting with the object. This additional contribution of scattered photons results in increased measured intensities, since the scattered intensity simply adds to the transmitted intensity. The effect is seen as an overestimation of the measured intensity, corresponding to an underestimation of absorption, and results in artifacts such as cupping, shading, and streaks in the reconstructed images. Moreover, the scattered radiation biases quantitative tomographic reconstruction (for example, atomic number and mass density measurement with the dual-energy technique). The effect can be significant and difficult to correct in the MeV energy range with large objects, due to the higher scatter-to-primary ratio (SPR). Additionally, incident high-energy photons that undergo Compton scattering are more forward directed and hence more likely to reach the detector. For the MeV energy range, the contribution of photons produced by pair production and bremsstrahlung processes also becomes important. We propose an evaluation of a scattering correction technique based on the Scatter Kernel Superposition (SKS) method. The algorithm uses continuously thickness-adapted kernels: analytical parameterizations of the scatter kernels are derived in terms of material thickness, forming continuously thickness-adapted kernel maps that are used to correct the projections. This approach has proved efficient in producing better sampling of the kernels with respect to object thickness, and it is applicable over a wide range of imaging conditions. Moreover, since no extra hardware is required, the approach is particularly advantageous in cases where experimental complexity must be avoided. It has previously been tested successfully in the energy range of 100 keV - 6 MeV. In this paper, the kernels are simulated using MCNP in order to take into account both photon and electron processes contributing to the scattered radiation. We present scatter correction results for a large object scanned with a 9 MeV linear accelerator.
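
    The snippet below is a much-simplified 1D sketch of the scatter-kernel-superposition mechanics: pixels are binned by object thickness, each bin is convolved with a thickness-dependent kernel, and the contributions are summed and subtracted. The toy kernel parameterization and attenuation model are invented for illustration; real kernels would come from Monte Carlo (e.g. MCNP) simulations as described above.

```python
# Simplified 1D sketch of scatter kernel superposition (SKS) with thickness-adapted kernels.
import numpy as np

def kernel_for_thickness(t_cm, n=101):
    """Toy thickness-adapted kernel: amplitude and width grow with thickness."""
    x = np.arange(n) - n // 2
    amplitude = 0.02 * t_cm                 # scatter-to-primary grows with thickness (assumed)
    width = 5.0 + 0.8 * t_cm
    k = np.exp(-np.abs(x) / width)
    return amplitude * k / k.sum()

def sks_scatter_estimate(signal, thickness_cm, n_bins=8):
    """Superpose thickness-binned convolutions to estimate the scatter profile."""
    scatter = np.zeros_like(signal)
    edges = np.linspace(thickness_cm.min(), thickness_cm.max() + 1e-6, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (thickness_cm >= lo) & (thickness_cm < hi)
        if not mask.any():
            continue
        kernel = kernel_for_thickness(0.5 * (lo + hi))
        scatter += np.convolve(signal * mask, kernel, mode="same")
    return scatter

# Synthetic projection of an object whose thickness varies across the detector
detector = np.arange(512)
thickness = 20.0 * np.exp(-((detector - 256) / 150.0) ** 2)     # cm
primary = 1e4 * np.exp(-0.05 * thickness)                       # simple attenuation model

true_scatter = sks_scatter_estimate(primary, thickness)
measured = primary + true_scatter
estimated = sks_scatter_estimate(measured, thickness)           # in practice primary is unknown
corrected = measured - estimated
print("residual relative to true primary: %.1f%%"
      % (100 * np.abs(corrected - primary).max() / primary.max()))
```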

  15. SU-E-I-07: An Improved Technique for Scatter Correction in PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, S; Wang, Y; Lue, K

    2014-06-01

    Purpose: In positron emission tomography (PET), the single scatter simulation (SSS) algorithm is widely used for scatter estimation in clinical scans. However, bias usually occurs at the essential step of scaling the computed SSS distribution to the real scatter amount by employing the scatter-only projection tail. The bias can be amplified when the scatter-only projection tail is too small, resulting in incorrect scatter correction. To this end, we propose a novel scatter calibration technique to accurately estimate the amount of scatter using a pre-determined scatter fraction (SF) function instead of the scatter-only tail information. Methods: As the SF depends on the radioactivity distribution and the attenuating material of the patient, an accurate theoretical relation cannot be devised. Instead, we constructed an empirical transformation function between SFs and average attenuation coefficients based on a series of phantom studies with different sizes and materials. From the average attenuation coefficient, the predicted SF was calculated using the empirical transformation function. Hence, the real scatter amount can be obtained by scaling the SSS distribution with the predicted SF. The simulation was conducted using SimSET. The Siemens Biograph™ 6 PET scanner was modeled in this study. The Software for Tomographic Image Reconstruction (STIR) was employed to estimate the scatter and reconstruct images. The EEC phantom was adopted to evaluate the performance of the proposed technique. Results: The scatter-corrected image of our method demonstrated improved image contrast over that of SSS. For our technique and SSS, the normalized standard deviations of the reconstructed images were 0.053 and 0.182, respectively, and the root mean squared errors were 11.852 and 13.767, respectively. Conclusion: We have proposed an alternative method to calibrate SSS (C-SSS) to the absolute scatter amounts using the SF. This method can avoid the bias caused by insufficient tail information and therefore improve the accuracy of scatter estimation.
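
    As a rough illustration of the calibration idea (with a placeholder empirical SF function, not the authors' calibration), the sketch below scales a unit-normalized SSS-like scatter shape so that its integral matches the scatter fraction predicted from an average attenuation coefficient, instead of scaling it to the projection tails.

```python
# Sketch of scaling an SSS-like scatter shape to a predicted scatter fraction (SF).
import numpy as np

def predicted_sf(mean_mu_cm):
    """Hypothetical empirical SF vs. mean attenuation coefficient relation (placeholder)."""
    return np.clip(0.1 + 2.5 * (mean_mu_cm - 0.08), 0.05, 0.6)

def scale_sss(prompt_sinogram, sss_shape, mean_mu_cm):
    """Scale the (unit-normalized) SSS shape so its total matches the predicted scatter amount."""
    scatter_total = predicted_sf(mean_mu_cm) * prompt_sinogram.sum()
    return sss_shape / sss_shape.sum() * scatter_total

# Toy 1D "sinogram": trues plus a broad scatter background
bins = np.arange(300)
trues = 5e4 * np.exp(-0.5 * ((bins - 150) / 30.0) ** 2)
scatter = 2e3 * np.exp(-0.5 * ((bins - 150) / 120.0) ** 2)
prompts = trues + scatter

sss_shape = np.exp(-0.5 * ((bins - 150) / 115.0) ** 2)   # SSS provides the shape, not the scale
scaled = scale_sss(prompts, sss_shape, mean_mu_cm=0.088)
print("true SF = %.3f, calibrated-estimate SF = %.3f"
      % (scatter.sum() / prompts.sum(), scaled.sum() / prompts.sum()))
```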

  16. Optimization of a simultaneous dual-isotope 201Tl/123I-MIBG myocardial SPECT imaging protocol with a CZT camera for trigger zone assessment after myocardial infarction for routine clinical settings: Are delayed acquisition and scatter correction necessary?

    PubMed

    D'estanque, Emmanuel; Hedon, Christophe; Lattuca, Benoît; Bourdon, Aurélie; Benkiran, Meriem; Verd, Aurélie; Roubille, François; Mariano-Goulart, Denis

    2017-08-01

    Dual-isotope 201Tl/123I-MIBG SPECT can assess trigger zones (dysfunctions of the autonomic nervous system located in areas of viable myocardium) that are a substrate for ventricular arrhythmias after STEMI. This study evaluated the necessity of delayed acquisition and scatter correction for dual-isotope 201Tl/123I-MIBG SPECT studies with a CZT camera to identify trigger zones after revascularization in patients with STEMI in routine clinical settings. Sixty-nine patients were prospectively enrolled after revascularization to undergo 201Tl/123I-MIBG SPECT using a CZT camera (Discovery NM 530c, GE). The first acquisition was a single thallium study (before MIBG administration); the second and third were early and late dual-isotope studies. We compared the scatter-uncorrected and scatter-corrected (TEW method) thallium studies with the results of magnetic resonance imaging or transthoracic echography (reference standard) to diagnose myocardial necrosis. Summed rest scores (SRS) were significantly higher in the delayed MIBG studies than in the early MIBG studies. SRS and necrosis surface were significantly higher in the delayed thallium studies with scatter correction than without, leading to fewer trigger zone diagnoses in the scatter-corrected studies. Compared with the scatter-uncorrected studies, the late thallium scatter-corrected studies provided the best diagnostic value for myocardial necrosis assessment. Delayed acquisitions and scatter-corrected dual-isotope 201Tl/123I-MIBG SPECT acquisitions provide an improved evaluation of trigger zones in routine clinical settings after revascularization for STEMI.

  17. A single-scattering correction for the seismo-acoustic parabolic equation.

    PubMed

    Collins, Michael D

    2012-04-01

    An efficient single-scattering correction that does not require iterations is derived and tested for the seismo-acoustic parabolic equation. The approach is applicable to problems involving gradual range dependence in a waveguide with fluid and solid layers, including the key case of a sloping fluid-solid interface. The single-scattering correction is asymptotically equivalent to a special case of a single-scattering correction for problems that only have solid layers [Küsel et al., J. Acoust. Soc. Am. 121, 808-813 (2007)]. The single-scattering correction has a simple interpretation (conservation of interface conditions in an average sense) that facilitated its generalization to problems involving fluid layers. Promising results are obtained for problems in which the ocean bottom interface has a small slope.

  18. Prior image constrained scatter correction in cone-beam computed tomography image-guided radiation therapy.

    PubMed

    Brunner, Stephen; Nett, Brian E; Tolakanahalli, Ranjini; Chen, Guang-Hong

    2011-02-21

    X-ray scatter is a significant problem in cone-beam computed tomography when thicker objects and larger cone angles are used, as scattered radiation can lead to reduced contrast and CT number inaccuracy. Advances have been made in x-ray computed tomography (CT) by incorporating a high quality prior image into the image reconstruction process. In this paper, we extend this idea to correct scatter-induced shading artifacts in cone-beam CT image-guided radiation therapy. Specifically, this paper presents a new scatter correction algorithm which uses a prior image with low scatter artifacts to reduce shading artifacts in cone-beam CT images acquired under conditions of high scatter. The proposed correction algorithm begins with the empirical hypothesis that the target image can be written as a weighted summation of a series of basis images, generated by raising the raw cone-beam projection data to different powers and then reconstructing with the standard filtered backprojection algorithm. The weight for each basis image is calculated by minimizing the difference between the target image and the prior image. The performance of the scatter correction algorithm is qualitatively and quantitatively evaluated through phantom studies using a Varian 2100 EX System with an on-board imager. Results show that the proposed scatter correction algorithm using a prior image with low scatter artifacts can substantially mitigate scatter-induced shading artifacts in both full-fan and half-fan modes.
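
    The core numerical step, solving for the basis-image weights against the prior, can be sketched as a linear least-squares problem. In the snippet below the filtered-backprojection reconstructions of the power-transformed projections are replaced by synthetic stand-in images, so only the weight fit itself is demonstrated; shapes, powers and the synthetic images are illustrative assumptions.

```python
# Sketch of the weight-fitting step: find weights w so that (basis @ w) best matches the prior.
import numpy as np

rng = np.random.default_rng(4)
npix = 64 * 64
prior = rng.random(npix)                                  # stand-in for the low-scatter prior image

powers = (1.0, 1.2, 1.4)                                  # exponents applied to the projections
# Stand-ins for FBP reconstructions of projections**p; in the real algorithm each
# column would be a full reconstruction of the modified projection data.
basis = np.column_stack([prior ** p + 0.05 * rng.standard_normal(npix) for p in powers])

# Least-squares weights minimizing || basis @ w - prior ||^2
weights, *_ = np.linalg.lstsq(basis, prior, rcond=None)
corrected = (basis @ weights).reshape(64, 64)             # scatter-corrected target image
print("weights:", np.round(weights, 3),
      " rms difference to prior:", np.round(np.sqrt(np.mean((basis @ weights - prior) ** 2)), 4))
```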

  19. A quantitative reconstruction software suite for SPECT imaging

    NASA Astrophysics Data System (ADS)

    Namías, Mauro; Jeraj, Robert

    2017-11-01

    Quantitative Single Photon Emission Computed Tomography (SPECT) imaging allows measurement of the activity concentration of a given radiotracer in vivo. Although SPECT has usually been perceived as non-quantitative by the medical community, the introduction of accurate CT-based attenuation correction and scatter correction in hybrid SPECT/CT scanners has enabled SPECT systems to be as quantitative as Positron Emission Tomography (PET) systems. We implemented a software suite to reconstruct quantitative SPECT images from hybrid or dedicated SPECT systems with a separate CT scanner. Attenuation, scatter and collimator response corrections were included in an Ordered Subset Expectation Maximization (OSEM) algorithm. A novel scatter fraction estimation technique was introduced. The SPECT/CT system was calibrated with a cylindrical phantom, and quantitative accuracy was assessed with an anthropomorphic phantom and a NEMA/IEC image quality phantom. Accurate activity measurements were achieved at the organ level. This software suite helps increase the quantitative accuracy of SPECT scanners.

  20. Physics and Computational Methods for X-ray Scatter Estimation and Correction in Cone-Beam Computed Tomography

    NASA Astrophysics Data System (ADS)

    Bootsma, Gregory J.

    X-ray scatter in cone-beam computed tomography (CBCT) is known to reduce image quality by introducing image artifacts, reducing contrast, and limiting computed tomography (CT) number accuracy. The extent of the effect of x-ray scatter on CBCT image quality is determined by the shape and magnitude of the scatter distribution in the projections. A method to mitigate the effects of scatter is imperative to enable application of CBCT to a wider domain of clinical problems. The work contained herein proposes such a method. A characterization of the scatter distribution through the use of a validated Monte Carlo (MC) model is carried out. The effects of imaging parameters and compensators on the scatter distribution are investigated. The spectral frequency components of the scatter distribution in CBCT projection sets are analyzed using Fourier analysis and found to reside predominantly in the low-frequency domain. The exact frequency extents of the scatter distribution are explored for different imaging configurations and patient geometries. Based on the Fourier analysis, it is hypothesized that the scatter distribution can be represented by a finite sum of sine and cosine functions. Fitting the MC scatter distribution estimates reduces the MC computation time by diminishing the number of photon tracks required by over three orders of magnitude. The fitting method is incorporated into a novel scatter correction method using an algorithm that simultaneously combines multiple MC scatter simulations. Running concurrent MC simulations while simultaneously fitting the results allows the physical accuracy and flexibility of MC methods to be maintained while enhancing the overall efficiency. CBCT projection set scatter estimates using the algorithm are computed on the order of 1-2 minutes instead of hours or days. The resulting scatter-corrected reconstructions show a reduction in artifacts and improvement in tissue contrast and voxel value accuracy.
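
    The low-frequency fitting idea can be sketched as follows: a noisy, low-count Monte Carlo scatter profile is fit with a constant plus a few Fourier harmonics by least squares, recovering a smooth estimate from far fewer photon histories. The number of harmonics and the synthetic scatter shape are illustrative assumptions.

```python
# Sketch of fitting a noisy MC scatter profile with a small sum of sine/cosine terms.
import numpy as np

n = 512
u = np.linspace(0.0, 1.0, n)                                      # detector coordinate
true_scatter = 40 + 25 * np.exp(-0.5 * ((u - 0.5) / 0.22) ** 2)   # smooth, low-frequency

rng = np.random.default_rng(5)
mc_estimate = rng.poisson(true_scatter).astype(float)             # noisy, low-count MC result

# Design matrix: constant term plus the first few Fourier harmonics
n_harmonics = 4
columns = [np.ones(n)]
for k in range(1, n_harmonics + 1):
    columns += [np.sin(2 * np.pi * k * u), np.cos(2 * np.pi * k * u)]
A = np.column_stack(columns)

coeffs, *_ = np.linalg.lstsq(A, mc_estimate, rcond=None)
fitted = A @ coeffs

print("rms error raw MC   :", np.sqrt(np.mean((mc_estimate - true_scatter) ** 2)).round(2))
print("rms error after fit:", np.sqrt(np.mean((fitted - true_scatter) ** 2)).round(2))
```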

  1. Application of a multiple scattering model to estimate optical depth, lidar ratio and ice crystal effective radius of cirrus clouds observed with lidar.

    NASA Astrophysics Data System (ADS)

    Gouveia, Diego; Baars, Holger; Seifert, Patric; Wandinger, Ulla; Barbosa, Henrique; Barja, Boris; Artaxo, Paulo; Lopes, Fabio; Landulfo, Eduardo; Ansmann, Albert

    2018-04-01

    Lidar measurements of cirrus clouds are highly influenced by multiple scattering (MS). We therefore developed an iterative approach to correct elastic backscatter lidar signals for multiple scattering to obtain best estimates of single-scattering cloud optical depth and lidar ratio as well as of the ice crystal effective radius. The approach is based on the exploration of the effect of MS on the molecular backscatter signal returned from above cloud top.

  2. A square-wave wavelength modulation system for automatic background correction in carbon furnace atomic emission spectrometry

    NASA Astrophysics Data System (ADS)

    Bezur, L.; Marshall, J.; Ottaway, J. M.

    A square-wave wavelength modulation system, based on a rotating quartz chopper with four quadrants of different thicknesses, has been developed and evaluated as a method for automatic background correction in carbon furnace atomic emission spectrometry. Accurate background correction is achieved for the residual black body radiation (Rayleigh scatter) from the tube wall and Mie scatter from particles generated by a sample matrix and formed by condensation of atoms in the optical path. Intensity modulation caused by overlap at the edges of the quartz plates and by the divergence of the optical beam at the position of the modulation chopper has been investigated and is likely to be small.

  3. Biophotonics of skin: method for correction of deep Raman spectra distorted by elastic scattering

    NASA Astrophysics Data System (ADS)

    Roig, Blandine; Koenig, Anne; Perraut, François; Piot, Olivier; Gobinet, Cyril; Manfait, Michel; Dinten, Jean-Marc

    2015-03-01

    Confocal Raman microspectroscopy allows non-invasive, in-depth molecular and conformational characterization of biological tissues. Unfortunately, spectral distortions occur due to elastic scattering. Our objective is to correct the attenuation of in-depth Raman peak intensities by accounting for this phenomenon, thus enabling quantitative diagnosis. For this purpose, we developed PDMS phantoms mimicking skin optical properties, used as tools for instrument calibration and for validation of the data processing method. An optical system based on a fiber bundle had previously been developed for in vivo skin characterization with Diffuse Reflectance Spectroscopy (DRS). Applied to our phantoms, this technique allows their optical properties to be checked: the targeted values were retrieved. Raman microspectroscopy was performed using a commercial confocal microscope. Depth profiles were constructed from the integrated intensity of specific PDMS Raman vibrations. Acquired on monolayer phantoms, they display a decline that increases with the scattering coefficient. Furthermore, when acquiring Raman spectra on multilayered phantoms, the signal attenuation through each single layer depends directly on that layer's scattering properties. Therefore, determining the optical properties of a biological sample, for example with DRS, is crucial to properly correct Raman depth profiles. A model, inspired by S.L. Jacques's expression for confocal reflectance microscopy and modified at some points, is proposed and tested to fit the depth profiles obtained on the phantoms as a function of the reduced scattering coefficient. Consequently, once the optical properties of a biological sample are known, the intensity of deep Raman spectra distorted by elastic scattering can be corrected with this model, thus permitting quantitative studies for characterization or diagnosis.

  4. Correction of WindScat Scatterometric Measurements by Combining with AMSR Radiometric Data

    NASA Technical Reports Server (NTRS)

    Song, S.; Moore, R. K.

    1996-01-01

    The SeaWinds scatterometer on the Advanced Earth Observing Satellite-2 (ADEOS-2) will determine surface wind vectors by measuring the radar cross section. Multiple measurements will be made at different points in a wind-vector cell. When dense clouds and rain are present, the signal will be attenuated, thereby giving erroneous results for the wind. This report describes algorithms that use the Advanced Microwave Scanning Radiometer (AMSR) on ADEOS-2 to correct for the attenuation. One can determine attenuation from a radiometer measurement based on the excess brightness temperature measured, i.e., the difference between the total measured brightness temperature and the contribution from surface emission. A major problem that the algorithm must address is determining the surface contribution. Two basic approaches were developed for this: one using the scattering coefficient measured along with the brightness temperature, and the other using the brightness temperature alone. For both methods, best results are obtained if the wind from the preceding wind-vector cell can be used as an input to the algorithm. In the method based on the scattering coefficient, the wind direction from the preceding cell is needed; in the method using the brightness temperature alone, the wind speed from the preceding cell is needed. If neither is available, the algorithm can still work, but the corrections will be less accurate. Both correction methods require iterative solutions. Simulations show that the algorithms make significant improvements in the measured scattering coefficient and thus in the retrieved wind vector. For stratiform rains, the errors without correction can be quite large, so the correction makes a major improvement. For systems of separated convective cells, the initial error is smaller and the correction, although about the same percentage, has a smaller effect.

  5. Quantitatively accurate activity measurements with a dedicated cardiac SPECT camera: Physical phantom experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pourmoghaddas, Amir, E-mail: apour@ottawaheart.ca; Wells, R. Glenn

    Purpose: Recently, there has been increased interest in dedicated cardiac single photon emission computed tomography (SPECT) scanners with pinhole collimation and improved detector technology due to their improved count sensitivity and resolution over traditional parallel-hole cameras. With traditional cameras, energy-based approaches are often used in the clinic for scatter compensation because they are fast and easily implemented. Some of the cardiac cameras use cadmium-zinc-telluride (CZT) detectors which can complicate the use of energy-based scatter correction (SC) due to the low-energy tail: an increased number of unscattered photons detected with reduced energy. Modified energy-based scatter correction methods can be implemented, but their level of accuracy is unclear. In this study, the authors validated by physical phantom experiments the quantitative accuracy and reproducibility of easily implemented correction techniques applied to 99mTc myocardial imaging with a CZT-detector-based gamma camera with multiple heads, each with a single-pinhole collimator. Methods: Activity in the cardiac compartment of an Anthropomorphic Torso phantom (Data Spectrum Corporation) was measured through 15 99mTc-SPECT acquisitions. The ratio of activity concentrations in organ compartments resembled a clinical 99mTc-sestamibi scan and was kept consistent across all experiments (1.2:1 heart to liver and 1.5:1 heart to lung). Two background activity levels were considered: no activity (cold) and an activity concentration 1/10th of the heart (hot). A plastic “lesion” was placed inside of the septal wall of the myocardial insert to simulate the presence of a region without tracer uptake and contrast in this lesion was calculated for all images. The true net activity in each compartment was measured with a dose calibrator (CRC-25R, Capintec, Inc.). A 10 min SPECT image was acquired using a dedicated cardiac camera with CZT detectors (Discovery NM530c, GE Healthcare), followed by a CT scan for attenuation correction (AC). For each experiment, separate images were created including reconstruction with no corrections (NC), with AC, with attenuation and dual-energy window (DEW) scatter correction (ACSC), with attenuation and partial volume correction (PVC) applied (ACPVC), and with attenuation, scatter, and PVC applied (ACSCPVC). The DEW SC method used was modified to account for the presence of the low-energy tail. Results: T-tests showed that the mean error in absolute activity measurement was reduced significantly for AC and ACSC compared to NC for both (hot and cold) datasets (p < 0.001) and that ACSC, ACPVC, and ACSCPVC show significant reductions in mean differences compared to AC (p ≤ 0.001) without increasing the uncertainty (p > 0.4). The effect of SC and PVC was significant in reducing errors over AC in both datasets (p < 0.001 and p < 0.01, respectively), resulting in a mean error of 5% ± 4%. Conclusions: Quantitative measurements of cardiac 99mTc activity are achievable using attenuation and scatter corrections, with the authors’ dedicated cardiac SPECT camera. Partial volume corrections offer improvements in measurement accuracy in AC images and ACSC images with elevated background activity; however, these improvements are not significant in ACSC images with low background activity.

  6. Atmospheric correction for inland water based on Gordon model

    NASA Astrophysics Data System (ADS)

    Li, Yunmei; Wang, Haijun; Huang, Jiazhu

    2008-04-01

    Remote sensing is widely used in water quality monitoring since it can acquire radiation information over a large area at the same time. However, more than 80% of the radiance detected by sensors at the top of the atmosphere is contributed by the atmosphere rather than directly by the water body. Water radiance information is strongly confounded by atmospheric molecular and aerosol scattering and absorption, and a slight bias in the evaluation of the atmospheric influence can introduce large errors in water quality estimation. To invert water composition accurately, the water and atmospheric contributions must first be separated. In this paper, we studied atmospheric correction methods for inland waters such as Taihu Lake. A Landsat-5 TM image was corrected based on the Gordon atmospheric correction model, and two kinds of data were used to calculate Rayleigh scattering, aerosol scattering, and radiative transmission above Taihu Lake; the influence of ozone and whitecaps was also corrected. One kind of data was synchronous meteorological data, and the other was a synchronous MODIS image. Finally, the remote sensing reflectance was retrieved from the TM image. The effect of the different methods was analyzed using in situ measured water surface spectra. The results indicate that the measured and estimated remote sensing reflectances were close for both methods. The method using synchronous meteorological data is more accurate than the one using the MODIS image, and its bias is close to the error criterion accepted for inland water quality inversion, showing that this method is suitable for atmospheric correction of TM imagery over Taihu Lake.

  7. Improved determination of particulate absorption from combined filter pad and PSICAM measurements.

    PubMed

    Lefering, Ina; Röttgers, Rüdiger; Weeks, Rebecca; Connor, Derek; Utschig, Christian; Heymann, Kerstin; McKee, David

    2016-10-31

    Filter pad light absorption measurements are subject to two major sources of experimental uncertainty: the so-called pathlength amplification factor, β, and scattering offsets, o, for which previous null-correction approaches are limited by recent observations of non-zero absorption in the near infrared (NIR). A new filter pad absorption correction method is presented here which uses linear regression against point-source integrating cavity absorption meter (PSICAM) absorption data to simultaneously resolve both β and the scattering offset. The PSICAM has previously been shown to provide accurate absorption data, even in highly scattering waters. Comparisons of PSICAM and filter pad particulate absorption data reveal linear relationships that vary on a sample by sample basis. This regression approach provides significantly improved agreement with PSICAM data (3.2% RMS%E) than previously published filter pad absorption corrections. Results show that direct transmittance (T-method) filter pad absorption measurements perform effectively at the same level as more complex geometrical configurations based on integrating cavity measurements (IS-method and QFT-ICAM) because the linear regression correction compensates for the sensitivity to scattering errors in the T-method. This approach produces accurate filter pad particulate absorption data for wavelengths in the blue/UV and in the NIR where sensitivity issues with PSICAM measurements limit performance. The combination of the filter pad absorption and PSICAM is therefore recommended for generating full spectral, best quality particulate absorption data as it enables correction of multiple errors sources across both measurements.
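
    A minimal sketch of the regression step is shown below, under the simplifying assumption that the raw filter-pad spectrum equals the PSICAM spectrum scaled by a single amplification factor plus a flat scattering offset; the synthetic spectra and coefficients are invented for illustration. Regressing one spectrum against the other returns the two correction parameters, and the corrected spectrum is (raw - offset) / slope.

```python
# Sketch of regressing a filter-pad spectrum against a PSICAM spectrum to obtain
# the pathlength amplification factor (slope) and scattering offset (intercept).
import numpy as np

wavelengths = np.arange(400, 751, 10)                      # nm
rng = np.random.default_rng(6)

# PSICAM absorption: taken here as the reference ("true") particulate absorption
a_psicam = 0.5 * np.exp(-(wavelengths - 400) / 120.0) + 0.02

# Raw filter-pad measurement: amplified pathlength plus a flat scattering offset + noise
beta_true, offset_true = 2.4, 0.06
a_filterpad = beta_true * a_psicam + offset_true + rng.normal(0, 0.005, wavelengths.size)

# Linear regression: a_filterpad = beta * a_psicam + offset
beta, offset = np.polyfit(a_psicam, a_filterpad, 1)
a_corrected = (a_filterpad - offset) / beta

print("fitted beta = %.2f, offset = %.3f" % (beta, offset))
print("RMS%%E vs PSICAM: %.1f%%"
      % (100 * np.sqrt(np.mean(((a_corrected - a_psicam) / a_psicam) ** 2))))
```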

  8. WE-DE-207B-10: Library-Based X-Ray Scatter Correction for Dedicated Cone-Beam Breast CT: Clinical Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, L; Zhu, L; Vedantham, S

    Purpose: Scatter contamination is detrimental to image quality in dedicated cone-beam breast CT (CBBCT), resulting in cupping artifacts and loss of contrast in reconstructed images. Such effects impede visualization of breast lesions and quantitative accuracy. Previously, we proposed a library-based software approach to suppress scatter in CBBCT images. In this work, we quantify the efficacy and stability of this approach using datasets from 15 human subjects. Methods: A pre-computed scatter library is generated using Monte Carlo simulations for semi-ellipsoid breast models and a homogeneous fibroglandular/adipose tissue mixture encompassing the range reported in the literature. Projection datasets from 15 human subjects that cover the 95th percentile of breast dimensions and fibroglandular volume fraction were included in the analysis. Our investigations indicate that it is sufficient to consider the breast dimensions alone, and that variation in fibroglandular fraction does not significantly affect the scatter-to-primary ratio. The breast diameter is measured from a first-pass reconstruction; the appropriate scatter distribution is selected from the library and deformed by considering the discrepancy in total projection intensity between the clinical dataset and the simulated semi-ellipsoidal breast. The deformed scatter distribution is subtracted from the measured projections for scatter correction. Spatial non-uniformity (SNU) and contrast-to-noise ratio (CNR) were used as quantitative metrics to evaluate the results. Results: For the 15 patient cases, our method reduced the overall image spatial non-uniformity (SNU) from 7.14% ± 2.94% (mean ± standard deviation) to 2.47% ± 0.68% in the coronal view and from 10.14% ± 4.1% to 3.02% ± 1.26% in the sagittal view. The average contrast-to-noise ratio (CNR) improved by a factor of 1.49 ± 0.40 in the coronal view and by 2.12 ± 1.54 in the sagittal view. Conclusion: We demonstrate the robustness and effectiveness of a library-based scatter correction method using patient datasets with large variability in breast dimensions and composition. The high computational efficiency and simplicity of implementation make this approach attractive for clinical use. Supported in part by NIH R21EB019597, R21CA134128 and R01CA195512. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

  9. Low dose scatter correction for digital chest tomosynthesis

    NASA Astrophysics Data System (ADS)

    Inscoe, Christina R.; Wu, Gongting; Shan, Jing; Lee, Yueh Z.; Zhou, Otto; Lu, Jianping

    2015-03-01

    Digital chest tomosynthesis (DCT) provides superior image quality and depth information for thoracic imaging at relatively low dose, though the presence of strong photon scatter degrades the image quality. In most chest radiography, anti-scatter grids are used. However, the grid also blocks a large fraction of the primary beam photons, requiring a significantly higher imaging dose for patients. Previously, we proposed an efficient low-dose scatter correction technique using a primary beam sampling apparatus. We implemented the technique in stationary digital breast tomosynthesis and found the method to be efficient in correcting patient-specific scatter with only a 3% increase in dose. In this paper, we report a feasibility study of applying the same technique to chest tomosynthesis. This investigation was performed utilizing phantom and cadaver subjects. The method involves an initial tomosynthesis scan of the object. A lead plate with an array of holes, or primary sampling apparatus (PSA), was placed above the object, and a second tomosynthesis scan was performed to measure the primary (scatter-free) transmission. The PSA data were used with the full-field projections to compute the scatter, which was then interpolated to full-field scatter maps unique to each projection angle. Full-field projection images were scatter corrected prior to reconstruction. Projections and reconstructed slices were evaluated, and the correction method was found to be effective at improving image quality and practical for clinical implementation.
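
    The sketch below illustrates the PSA principle in 1D with synthetic profiles: primary samples measured through the holes are subtracted from the full-field scan at the hole positions, the resulting scatter samples are interpolated to a full scatter profile, and that profile is removed from the projection. Hole spacing, signal shapes and noise levels are assumptions for illustration.

```python
# Sketch of primary-sampling-apparatus (PSA) scatter correction for a 1D profile.
import numpy as np

n = 1024
x = np.arange(n)
primary = 900.0 * np.exp(-0.5 * ((x - 512) / 260.0) ** 2) + 100.0
scatter = 250.0 * np.exp(-0.5 * ((x - 512) / 420.0) ** 2)

rng = np.random.default_rng(7)
full_field = primary + scatter + rng.normal(0, 4.0, n)         # regular tomosynthesis scan
holes = np.arange(32, n, 64)                                   # PSA hole positions
psa_primary = primary[holes] + rng.normal(0, 4.0, holes.size)  # low-dose scan through the holes

scatter_samples = full_field[holes] - psa_primary              # scatter at the hole positions
scatter_map = np.interp(x, holes, scatter_samples)             # full-field scatter estimate

corrected = full_field - scatter_map
print("mean |error| before: %.1f  after: %.1f"
      % (np.mean(np.abs(full_field - primary)), np.mean(np.abs(corrected - primary))))
```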

  10. Improved scatter correction with factor analysis for planar and SPECT imaging

    NASA Astrophysics Data System (ADS)

    Knoll, Peter; Rahmim, Arman; Gültekin, Selma; Šámal, Martin; Ljungberg, Michael; Mirzaei, Siroos; Segars, Paul; Szczupak, Boguslaw

    2017-09-01

    Quantitative nuclear medicine imaging is an increasingly important frontier. In order to achieve quantitative imaging, various interactions of photons with matter have to be modeled and compensated for. Although correction for photon attenuation has been addressed by including x-ray CT scans, accurate correction for Compton scatter remains an open issue. The inclusion of scattered photons within the energy window used for planar or SPECT data acquisition decreases the contrast of the image. While a number of methods for scatter correction have been proposed in the past, in this work we propose and assess a novel, user-independent framework applying factor analysis (FA). Extensive Monte Carlo simulations for planar and tomographic imaging were performed using the SIMIND software. Furthermore, planar acquisition of two Petri dishes filled with 99mTc solutions and a Jaszczak phantom study (Data Spectrum Corporation, Durham, NC, USA) using a dual-head gamma camera were performed. In order to use FA for scatter correction, we subdivided the applied energy window into a number of sub-windows, serving as input data. FA results in two factor images (photo-peak, scatter) and two corresponding factor curves (energy spectra). Planar and tomographic Jaszczak phantom gamma camera measurements were recorded. The tomographic data (simulations and measurements) were processed for each angular position, resulting in a photo-peak and a scatter data set. The reconstructed transaxial slices of the Jaszczak phantom were quantified using an ImageJ plugin. The data obtained by FA showed good agreement with the energy spectra, photo-peak, and scatter images obtained in all Monte Carlo simulated data sets. For comparison, the standard dual-energy window (DEW) approach was additionally applied for scatter correction. In comparison with the DEW method, FA results in significant improvements in image accuracy for both planar and tomographic data sets. FA can be used as a user-independent approach for scatter correction in nuclear medicine.
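
    As an illustration of decomposing energy sub-window data into a photo-peak and a scatter component, the sketch below uses a two-component non-negative matrix factorization as a stand-in for the factor analysis described in the abstract; the input layout and the ordering heuristic are assumptions:

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    def split_photopeak_scatter(subwindow_images):
        """Illustrative two-factor decomposition of energy sub-window data.

        subwindow_images : array of shape (n_subwindows, ny, nx); each slice is
                           the image acquired in one energy sub-window, with
                           sub-windows assumed ordered from low to high energy.
        Returns (photopeak_image, scatter_image, factor_spectra).
        """
        n_e, ny, nx = subwindow_images.shape
        data = subwindow_images.reshape(n_e, ny * nx)  # rows = sub-windows, cols = pixels

        # Two non-negative factors; NMF stands in for the factor-analysis step.
        model = NMF(n_components=2, init="nndsvda", max_iter=500)
        spectra = model.fit_transform(data)            # (n_subwindows, 2) factor curves
        images = model.components_.reshape(2, ny, nx)  # two factor images

        # Heuristic: the factor whose curve peaks in the higher sub-windows is
        # taken as the photo-peak; the other as scatter.
        order = np.argsort(np.argmax(spectra, axis=0))
        scatter_idx, peak_idx = order[0], order[1]
        return images[peak_idx], images[scatter_idx], spectra[:, [peak_idx, scatter_idx]]
    ```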

  11. Evaluation of scatter limitation correction: a new method of correcting photopenic artifacts caused by patient motion during whole-body PET/CT imaging.

    PubMed

    Miwa, Kenta; Umeda, Takuro; Murata, Taisuke; Wagatsuma, Kei; Miyaji, Noriaki; Terauchi, Takashi; Koizumi, Mitsuru; Sasaki, Masayuki

    2016-02-01

    Overcorrection of scatter caused by patient motion during whole-body PET/computed tomography (CT) imaging can induce the appearance of photopenic artifacts in the PET images. The present study aimed to quantify the accuracy of scatter limitation correction (SLC) for eliminating photopenic artifacts. This study analyzed photopenic artifacts in (18)F-fluorodeoxyglucose ((18)F-FDG) PET/CT images acquired from 12 patients and from a National Electrical Manufacturers Association phantom with two peripheral plastic bottles that simulated the human body and arms, respectively. The phantom comprised a sphere (diameter, 10 or 37 mm) containing fluorine-18 solutions with target-to-background ratios of 2, 4, and 8. The plastic bottles were moved 10 cm posteriorly between CT and PET acquisitions. All PET data were reconstructed using model-based scatter correction (SC), no scatter correction (NSC), and SLC, and the presence or absence of artifacts on the PET images was visually evaluated. The SC and SLC images were also semiquantitatively evaluated using standardized uptake values (SUVs). Photopenic artifacts were not recognizable in any NSC or SLC image from the 12 patients in the clinical study. The SUVmax values of mismatched SLC PET/CT images were almost equal to those of matched SC and SLC PET/CT images. Applying NSC and SLC substantially eliminated the photopenic artifacts on SC PET images in the phantom study. SLC improved the activity concentration of the sphere for all target-to-background ratios. The highest %errors of the 10- and 37-mm spheres were 93.3 and 58.3%, respectively, for mismatched SC, and 73.2 and 22.0%, respectively, for mismatched SLC. Photopenic artifacts caused by SC error induced by CT and PET image misalignment were corrected using SLC, indicating that this method is useful and practical for clinical qualitative and quantitative PET/CT assessment.

  12. Large Electroweak Corrections to Vector-Boson Scattering at the Large Hadron Collider.

    PubMed

    Biedermann, Benedikt; Denner, Ansgar; Pellen, Mathieu

    2017-06-30

    For the first time full next-to-leading-order electroweak corrections to off-shell vector-boson scattering are presented. The computation features the complete matrix elements, including all nonresonant and off-shell contributions, to the electroweak process pp→μ^{+}ν_{μ}e^{+}ν_{e}jj and is fully differential. We find surprisingly large corrections, reaching -16% for the fiducial cross section, as an intrinsic feature of the vector-boson-scattering processes. We elucidate the origin of these large electroweak corrections upon using the double-pole approximation and the effective vector-boson approximation along with leading-logarithmic corrections.

  13. Reversal of photon-scattering errors in atomic qubits.

    PubMed

    Akerman, N; Kotler, S; Glickman, Y; Ozeri, R

    2012-09-07

    Spontaneous photon scattering by an atomic qubit is a notable example of environment-induced error and is a fundamental limit to the fidelity of quantum operations. In the scattering process, the qubit loses its distinctive and coherent character owing to its entanglement with the photon. Using a single trapped ion, we show that by utilizing the information carried by the photon, we are able to coherently reverse this process and correct for the scattering error. We further used quantum process tomography to characterize the photon-scattering error and its correction scheme and demonstrate a correction fidelity greater than 85% whenever a photon was measured.

  14. Analysis of position-dependent Compton scatter in scintimammography with mild compression

    NASA Astrophysics Data System (ADS)

    Williams, M. B.; Narayanan, D.; More, M. J.; Goodale, P. J.; Majewski, S.; Kieper, D. A.

    2003-10-01

    In breast scintigraphy using 99mTc-sestamibi, the relatively low radiotracer uptake in the breast compared to that in other organs such as the heart results in a large fraction of the detected events being Compton-scattered gamma rays. In this study, our goal was to determine whether generalized conclusions regarding scatter-to-primary ratios at various locations within the breast image are possible, and if so, to use them to make explicit scatter corrections to the breast scintigrams. Energy spectra were obtained from patient scans for contiguous regions of interest (ROIs) centered left to right within the image of the breast, and extending from the chest wall edge of the image to the anterior edge. An anthropomorphic torso phantom with fillable internal organs and a compressed-shape breast containing water only was used to obtain realistic position-dependent scatter-only spectra. For each ROI, the measured patient energy spectrum was fitted with a linear combination of the scatter-only spectrum from the anthropomorphic phantom and the scatter-free spectrum from a point source. We found that although the scatter-to-primary ratio depends strongly on location within the breast, the spectra are well modeled by a linear combination of position-dependent scatter-only spectra and a position-independent scatter-free spectrum, resulting in a set of position-dependent correction factors. These correction factors can be used along with measured emission spectra from a given breast to correct for the Compton scatter in the scintigrams. However, the large variation among patients in the magnitude of the position-dependent scatter makes the success of universal correction approaches unlikely.
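
    A sketch of the per-ROI spectral fit described above: the measured spectrum is modeled as a linear combination of a scatter-only spectrum and a scatter-free spectrum by ordinary least squares (the spectra are assumed to share the same energy binning; names are illustrative):

    ```python
    import numpy as np

    def fit_scatter_fraction(measured, scatter_only, scatter_free):
        """Fit measured = a*scatter_only + b*scatter_free in the least-squares sense.

        All three inputs are 1-D energy spectra on identical energy bins.
        Returns (a, b, scatter_to_primary_in_window).
        """
        A = np.column_stack([scatter_only, scatter_free])
        coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)
        a, b = coeffs

        # Scatter-to-primary ratio integrated over the acquisition energy window.
        spr = (a * scatter_only.sum()) / (b * scatter_free.sum())
        return a, b, spr
    ```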

  15. Scatter correction using a primary modulator on a clinical angiography C-arm CT system.

    PubMed

    Bier, Bastian; Berger, Martin; Maier, Andreas; Kachelrieß, Marc; Ritschl, Ludwig; Müller, Kerstin; Choi, Jang-Hwan; Fahrig, Rebecca

    2017-09-01

    Cone beam computed tomography (CBCT) suffers from a large amount of scatter, resulting in severe scatter artifacts in the reconstructions. Recently, a new scatter correction approach, called improved primary modulator scatter estimation (iPMSE), was introduced. That approach utilizes a primary modulator that is inserted between the X-ray source and the object. This modulation enables estimation of the scatter in the projection domain by optimizing an objective function with respect to the scatter estimate. Up to now the approach has not been implemented on a clinical angiography C-arm CT system. In our work, the iPMSE method is transferred to a clinical C-arm CBCT. Additional processing steps are added in order to compensate for the C-arm scanner motion and the automatic X-ray tube current modulation. These challenges were overcome by establishing a reference modulator database and a block-matching algorithm. Experiments with phantom and experimental in vivo data were performed to evaluate the method. We show that scatter correction using primary modulation is possible on a clinical C-arm CBCT. Scatter artifacts in the reconstructions are reduced with the newly extended method. Compared to a scan with a narrow collimation, our approach showed superior results with an improvement of the contrast and the contrast-to-noise ratio for the phantom experiments. In vivo data are evaluated by comparing the results with a scan with a narrow collimation and with a constant scatter correction approach. Scatter correction using primary modulation is possible on a clinical CBCT by compensating for the scanner motion and the tube current modulation. Scatter artifacts could be reduced in the reconstructions of phantom scans and in experimental in vivo data. © 2017 American Association of Physicists in Medicine.

  16. High-fidelity artifact correction for cone-beam CT imaging of the brain

    NASA Astrophysics Data System (ADS)

    Sisniega, A.; Zbijewski, W.; Xu, J.; Dang, H.; Stayman, J. W.; Yorkston, J.; Aygun, N.; Koliatsos, V.; Siewerdsen, J. H.

    2015-02-01

    CT is the frontline imaging modality for diagnosis of acute traumatic brain injury (TBI), involving the detection of fresh blood in the brain (contrast of 30-50 HU, detail size down to 1 mm) in a non-contrast-enhanced exam. A dedicated point-of-care imaging system based on cone-beam CT (CBCT) could benefit early detection of TBI and improve direction to appropriate therapy. However, flat-panel detector (FPD) CBCT is challenged by artifacts that degrade contrast resolution and limit application in soft-tissue imaging. We present and evaluate a fairly comprehensive framework for artifact correction to enable soft-tissue brain imaging with FPD CBCT. The framework includes a fast Monte Carlo (MC)-based scatter estimation method complemented by corrections for detector lag, veiling glare, and beam hardening. The fast MC scatter estimation combines GPU acceleration, variance reduction, and simulation with a low number of photon histories and reduced number of projection angles (sparse MC) augmented by kernel de-noising to yield a runtime of ~4 min per scan. Scatter correction is combined with two-pass beam hardening correction. Detector lag correction is based on temporal deconvolution of the measured lag response function. The effects of detector veiling glare are reduced by deconvolution of the glare response function representing the long range tails of the detector point-spread function. The performance of the correction framework is quantified in experiments using a realistic head phantom on a testbench for FPD CBCT. Uncorrected reconstructions were non-diagnostic for soft-tissue imaging tasks in the brain. After processing with the artifact correction framework, image uniformity was substantially improved, and artifacts were reduced to a level that enabled visualization of ~3 mm simulated bleeds throughout the brain. Non-uniformity (cupping) was reduced by a factor of 5, and contrast of simulated bleeds was improved from ~7 to 49.7 HU, in good agreement with the nominal blood contrast of 50 HU. Although noise was amplified by the corrections, the contrast-to-noise ratio (CNR) of simulated bleeds was improved by nearly a factor of 3.5 (CNR = 0.54 without corrections and 1.91 after correction). The resulting image quality motivates further development and translation of the FPD-CBCT system for imaging of acute TBI.

  17. Virtual Excitation and Multiple Scattering Correction Terms to the Neutron Index of Refraction for Hydrogen.

    PubMed

    Schoen, K; Snow, W M; Kaiser, H; Werner, S A

    2005-01-01

    The neutron index of refraction is generally derived theoretically in the Fermi approximation. However, the Fermi approximation neglects the effects of the binding of the nuclei of a material as well as multiple scattering. Calculations by Nowak introduced correction terms to the neutron index of refraction that are quadratic in the scattering length and of order 10^-3 fm for hydrogen and deuterium. These correction terms produce a small shift in the final value for the coherent scattering length of H2 in a recent neutron interferometry experiment.

  18. Detector-specific correction factors in radiosurgery beams and their impact on dose distribution calculations.

    PubMed

    García-Garduño, Olivia A; Rodríguez-Ávila, Manuel A; Lárraga-Gutiérrez, José M

    2018-01-01

    Silicon-diode-based detectors are commonly used for the dosimetry of small radiotherapy beams due to their relatively small volumes and high sensitivity to ionizing radiation. Nevertheless, silicon-diode-based detectors tend to over-respond in small fields because of their high density relative to water. For that reason, detector-specific beam correction factors have been recommended not only to correct the total scatter factors but also to correct the tissue-maximum and off-axis ratios. However, the application of these correction factors to in-depth and off-axis locations has not been studied. The goal of this work is to address the impact of the correction factors on the calculated dose distribution in static non-conventional photon beams (specifically, in stereotactic radiosurgery with circular collimators). To achieve this goal, the total scatter factors, tissue-maximum ratios, and off-axis ratios were measured with a stereotactic field diode for 4.0-, 10.0-, and 20.0-mm circular collimators. The irradiation was performed with a Novalis® linear accelerator using a 6-MV photon beam. The detector-specific correction factors were calculated and applied to the experimental dosimetry data for in-depth and off-axis locations. The corrected and uncorrected dosimetry data were used to commission a treatment planning system for radiosurgery planning. Various plans were calculated with simulated lesions using the uncorrected and corrected dosimetry. The resulting dose calculations were compared using the gamma index test with several criteria. The results of this work present important conclusions for the use of detector-specific beam correction factors in a treatment planning system. The use of these factors for total scatter factors has an important impact on monitor unit calculation. On the contrary, the use of these factors for tissue-maximum and off-axis ratios does not have an important impact on the dose distribution calculated by the treatment planning system. This conclusion is only valid for the combination of treatment planning system, detector, and correction factors used in this work; however, this technique can be applied to other treatment planning systems, detectors, and correction factors.

  19. MO-FG-CAMPUS-JeP1-05: Water Equivalent Path Length Calculations Using Scatter-Corrected Head and Neck CBCT Images to Evaluate Patients for Adaptive Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, J; Park, Y; Sharp, G

    Purpose: To establish a method to evaluate the dosimetric impact of anatomic changes in head and neck patients during proton therapy by using scatter-corrected cone-beam CT (CBCT) images. Methods: The water equivalent path length (WEPL) was calculated to the distal edge of PTV contours by using tomographic images available for six head and neck patients who received photon therapy. The proton range variation was measured by calculating the difference between the distal WEPLs calculated with the planning CT and weekly treatment CBCT images. By performing an automatic rigid registration, a six degree-of-freedom (DOF) correction was made to the CBCT images to account for the patient setup uncertainty. For accurate WEPL calculations, an existing CBCT scatter correction algorithm, whose performance was already proven for phantom images, was calibrated for head and neck patient images. Specifically, two different image similarity measures, mutual information (MI) and mean square error (MSE), were tested for the deformable image registration (DIR) in the CBCT scatter correction algorithm. Results: The impact of weight loss was reflected in the distal WEPL differences, with the automatic rigid registration reducing the influence of patient setup uncertainty on the WEPL calculation results. The WEPL difference averaged over the distal area was 2.9 ± 2.9 mm across all fractions of the six patients, and its maximum, mostly found at the last available fraction, was 6.2 ± 3.4 mm. The MSE-based DIR successfully registered each treatment CBCT image to the planning CT image. On the other hand, the MI-based DIR deformed the skin voxels in the planning CT image to the immobilization mask in the treatment CBCT image, most of which was cropped out of the planning CT image. Conclusion: The dosimetric impact of anatomic changes was evaluated by calculating the distal WEPL difference with the existing scatter-correction algorithm appropriately calibrated. Jihun Kim, Yang-Kyun Park, Gregory Sharp, and Brian Winey have received grant support from the NCI Federal Share of program income earned by Massachusetts General Hospital on C06 CA059267, Proton Therapy Research and Treatment Center.
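
    For reference, a minimal sketch of a WEPL computation along a single ray, assuming a relative-stopping-power volume has already been derived from the scatter-corrected CT/CBCT via a calibration curve and that voxels are isotropic 1 mm (the names and sampling step are illustrative):

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def wepl_along_ray(rsp_volume, start_vox, direction, distal_depth_mm, step_mm=1.0):
        """Water-equivalent path length from a beam entry point to a distal depth.

        rsp_volume      : 3-D array of relative stopping powers (assumed pre-computed)
        start_vox       : (z, y, x) entry point in voxel coordinates
        direction       : unit vector in voxel coordinates (isotropic 1 mm voxels assumed)
        distal_depth_mm : geometric distance to the distal edge of the PTV along the ray
        """
        n_steps = int(distal_depth_mm / step_mm)
        ts = np.arange(n_steps) * step_mm
        coords = np.array(start_vox)[:, None] + np.array(direction)[:, None] * ts

        # Sample RSP along the ray with trilinear interpolation and integrate.
        rsp_samples = map_coordinates(rsp_volume, coords, order=1, mode="nearest")
        return float(np.sum(rsp_samples) * step_mm)
    ```

    The range variation reported above would then correspond to the difference of this quantity evaluated on the planning CT and on each weekly scatter-corrected CBCT.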

  20. The MOSDEF Survey: Dissecting the Star Formation Rate versus Stellar Mass Relation Using Hα and Hβ Emission Lines at z ∼ 2

    NASA Astrophysics Data System (ADS)

    Shivaei, Irene; Reddy, Naveen A.; Shapley, Alice E.; Kriek, Mariska; Siana, Brian; Mobasher, Bahram; Coil, Alison L.; Freeman, William R.; Sanders, Ryan; Price, Sedona H.; de Groot, Laura; Azadi, Mojegan

    2015-12-01

    We present results on the star formation rate (SFR) versus stellar mass (M*) relation (i.e., the “main sequence”) among star-forming galaxies at 1.37 ≤ z ≤ 2.61 using the MOSFIRE Deep Evolution Field (MOSDEF) survey. Based on a sample of 261 galaxies with Hα and Hβ spectroscopy, we have estimated robust dust-corrected instantaneous SFRs over a large range in M* (~10^9.5-10^11.5 M⊙). We find a correlation between log(SFR(Hα)) and log(M*) with a slope of 0.65 ± 0.08 (0.58 ± 0.10) at 1.4 < z < 2.6 (2.1 < z < 2.6). We find that different assumptions for the dust correction, such as using the color excess of the stellar continuum to correct the nebular lines, sample selection biases against red star-forming galaxies, and not accounting for Balmer absorption, can yield steeper slopes of the log(SFR)-log(M*) relation. Our sample is immune from these biases as it is rest-frame optically selected, Hα and Hβ are corrected for Balmer absorption, and the Hα luminosity is dust corrected using the nebular color excess computed from the Balmer decrement. The scatter of the log(SFR(Hα))-log(M*) relation, after accounting for the measurement uncertainties, is 0.31 dex at 2.1 < z < 2.6, which is 0.05 dex larger than the scatter in log(SFR(UV))-log(M*). Based on comparisons to a simulated SFR-M* relation with some intrinsic scatter, we argue that in the absence of direct measurements of galaxy-to-galaxy variations in the attenuation/extinction curves and the initial mass function, one cannot use the difference in the scatter of the SFR(Hα)- and SFR(UV)-M* relations to constrain the stochasticity of star formation in high-redshift galaxies.
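
    The Balmer-decrement dust correction mentioned above follows a standard recipe; the sketch below assumes a Cardelli-like attenuation curve with k(Hα) ≈ 2.53 and k(Hβ) ≈ 3.61 and an intrinsic Case B ratio of 2.86 (these numbers are assumptions for illustration, not values taken from the paper):

    ```python
    import numpy as np

    # Attenuation-curve values at Halpha and Hbeta (Cardelli-like; assumed here).
    K_HA, K_HB = 2.53, 3.61
    INTRINSIC_RATIO = 2.86  # Case B Halpha/Hbeta

    def dust_corrected_halpha(f_halpha, f_hbeta):
        """Correct an observed Halpha flux for dust using the Balmer decrement."""
        ebv = 2.5 / (K_HB - K_HA) * np.log10((f_halpha / f_hbeta) / INTRINSIC_RATIO)
        ebv = max(ebv, 0.0)        # clip unphysical negative reddening
        a_halpha = K_HA * ebv      # magnitudes of attenuation at Halpha
        return f_halpha * 10 ** (0.4 * a_halpha)
    ```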

  1. Absorption and scattering of light by nonspherical particles. [in atmosphere

    NASA Technical Reports Server (NTRS)

    Bohren, C. F.

    1986-01-01

    Using the example of the polarization of scattered light, it is shown that the scattering matrices for identical, randomly oriented nonspherical particles and for spherical particles are unequal. The spherical assumptions of Mie theory are therefore inconsistent with the random shapes and sizes of atmospheric particulates. The implications for corrections made to extinction measurements for forward-scattered light are discussed. Several analytical methods are examined as potential bases for developing more accurate models, including Rayleigh theory, Fraunhofer diffraction theory, anomalous diffraction theory, Rayleigh-Gans theory, the separation of variables technique, the Purcell-Pennypacker method, the T-matrix method, and finite difference calculations.

  2. An Accurate Scatter Measurement and Correction Technique for Cone Beam Breast CT Imaging Using Scanning Sampled Measurement (SSM) Technique.

    PubMed

    Liu, Xinming; Shaw, Chris C; Wang, Tianpeng; Chen, Lingyun; Altunbas, Mustafa C; Kappadath, S Cheenu

    2006-02-28

    We developed and investigated a scanning sampled measurement (SSM) technique for scatter measurement and correction in cone beam breast CT imaging. A cylindrical polypropylene phantom (water equivalent) was mounted on a rotating table in a stationary gantry experimental cone beam breast CT imaging system. A 2-D array of lead beads, spaced ~1 cm apart and slightly tilted vertically, was placed between the object and the x-ray source. A series of projection images were acquired as the phantom was rotated 1 degree per projection view and the lead bead array was shifted vertically from one projection view to the next. A series of lead bars were also placed at the phantom edge to produce better scatter estimation across the phantom edges. Image signals in the lead bead/bar shadows were used to obtain sampled scatter measurements, which were then interpolated to form an estimated scatter distribution across the projection images. The image data behind the lead bead/bar shadows were restored by interpolating image data from two adjacent projection views to form beam-block-free projection images. The estimated scatter distribution was then subtracted from the corresponding restored projection image to obtain the scatter-removed projection images. Our preliminary experiment has demonstrated that it is feasible to implement the SSM technique for scatter estimation and correction in cone beam breast CT imaging. Scatter correction was successfully performed on all projection images using the scatter distribution interpolated from the SSM and the restored projection image data. The resulting scatter-corrected projection data yielded elevated CT numbers and largely reduced cupping effects.
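
    A sketch of the restoration step described above, assuming the bead/bar shadow mask is known for each view and that the two angular neighbours are unshadowed at those pixels (a simple average stands in for the interpolation used in the paper):

    ```python
    import numpy as np

    def restore_blocked_pixels(prev_proj, curr_proj, next_proj, shadow_mask):
        """Fill pixels behind the moving lead-bead/bar shadows in the current view.

        Because the bead array shifts between views, a pixel shadowed in the
        current projection is normally unshadowed in its angular neighbours, so
        its signal is approximated by averaging those neighbours.
        """
        restored = curr_proj.copy()
        restored[shadow_mask] = 0.5 * (prev_proj[shadow_mask] + next_proj[shadow_mask])
        return restored
    ```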

  3. Analytical multiple scattering correction to the Mie theory: Application to the analysis of the lidar signal

    NASA Technical Reports Server (NTRS)

    Flesia, C.; Schwendimann, P.

    1992-01-01

    The contribution of multiple scattering to the lidar signal depends on the optical depth tau. Therefore, the lidar analysis based on the assumption that multiple scattering can be neglected is limited to cases characterized by low values of the optical depth (tau less than or equal to 0.1), and hence it excludes scattering from most clouds. Moreover, all inversion methods relating the lidar signal to number densities and particle size must be modified, since multiple scattering affects the direct analysis. The essential requirements of a realistic model for lidar measurements, one which includes multiple scattering and can be applied to practical situations, are as follows. (1) What is required is not only a correction term or a rough approximation describing the results of a particular experiment, but a general theory of multiple scattering tying together the relevant physical parameters we seek to measure. (2) An analytical generalization of the lidar equation that can be applied in the case of a realistic aerosol is required. A purely analytical formulation is important in order to avoid the convergence and stability problems which, in a numerical approach, arise from the large number of events that have to be taken into account in the presence of large optical depth and/or strong experimental noise.

  4. Soft-photon emission effects and radiative corrections for electromagnetic processes at very high energies

    NASA Technical Reports Server (NTRS)

    Gould, R. J.

    1979-01-01

    Higher-order electromagnetic processes involving particles at ultrahigh energies are discussed, with particular attention given to Compton scattering with the emission of an additional photon (double Compton scattering). Double Compton scattering may have significance in the interaction of a high-energy electron with the cosmic blackbody photon gas. At high energies the cross section for double Compton scattering is large, though this effect is largely canceled by the effects of radiative corrections to ordinary Compton scattering. A similar cancellation takes place for radiative pair production and the associated radiative corrections to the radiationless process. This cancellation is related to the well-known cancellation of the infrared divergence in electrodynamics.

  5. Correction of autofluorescence intensity for epithelial scattering by optical coherence tomography: a phantom study

    NASA Astrophysics Data System (ADS)

    Pahlevaninezhad, H.; Lee, A. M. D.; Hyun, C.; Lam, S.; MacAulay, C.; Lane, P. M.

    2013-03-01

    In this paper, we conduct a phantom study for modeling the autofluorescence (AF) properties of tissue. A combined optical coherence tomography (OCT) and AF imaging system is proposed to measure the strength of the AF signal in terms of the scattering layer thickness and concentration. The combined AF-OCT system is capable of estimating the AF loss due to scattering in the epithelium using the thickness and scattering concentration calculated from the co-registered OCT images. We define a correction factor to account for scattering losses in the epithelium and calculate a scattering-corrected AF signal. We believe the scattering-corrected AF will reduce the diagnostic false-positive rate in the early detection of airway lesions due to confounding factors such as increased epithelial thickness and inflammation.

  6. Scatter and cross-talk correction for one-day acquisition of 123I-BMIPP and 99mtc-tetrofosmin myocardial SPECT.

    PubMed

    Kaneta, Tomohiro; Kurihara, Hideyuki; Hakamatsuka, Takashi; Ito, Hiroshi; Maruoka, Shin; Fukuda, Hiroshi; Takahashi, Shoki; Yamada, Shogo

    2004-12-01

    123I-15-(p-iodophenyl)-3-(R,S)-methylpentadecanoic acid (BMIPP) and 99mTc-tetrofosmin (TET) are widely used for evaluation of myocardial fatty acid metabolism and perfusion, respectively. ECG-gated TET SPECT is also used for evaluation of myocardial wall motion. These tests are often performed on the same day to minimize both the time required and inconvenience to patients and medical staff. However, as 123I and 99mTc have similar emission energies (159 keV and 140 keV, respectively), it is necessary to consider not only scattered photons, but also primary photons of each radionuclide detected in the wrong window (cross-talk). In this study, we developed and evaluated the effectiveness of a new scatter and cross-talk correction imaging protocol. Fourteen patients with ischemic heart disease or heart failure (8 men and 6 women with a mean age of 69.4 yr, ranging from 45 to 94 yr) were enrolled in this study. In the routine one-day acquisition protocol, BMIPP SPECT was performed in the morning, with TET SPECT performed 4 h later. An additional SPECT was performed just before injection of TET with the energy window for 99mTc. These data correspond to the scatter and cross-talk factor of the next TET SPECT. The correction was performed by subtraction of the scatter and cross-talk factor from TET SPECT. Data are presented as means +/- S.E. Statistical analyses were performed using Wilcoxon's matched-pairs signed-ranks test, and p < 0.05 was considered significant. The percentage of scatter and cross-talk relative to the corrected total count was 26.0 +/- 5.3%. EDV and ESV after correction were significantly greater than those before correction (p = 0.019 and 0.016, respectively). After correction, EF was smaller than that before correction, but the difference was not significant. Perfusion scores (17 segments per heart) were significantly lower after as compared with those before correction (p < 0.001). Scatter and cross-talk correction revealed significant differences in EDV, ESV, and perfusion scores. These observations indicate that scatter and cross-talk correction is required for one-day acquisition of 123I-BMIPP and 99mTc-tetrofosmin SPECT.
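
    A minimal sketch of the subtraction described above, assuming the pre-injection acquisition and the TET acquisition are stored as matched projection arrays and that 123I decay between the two acquisitions is negligible (both are simplifying assumptions):

    ```python
    import numpy as np

    def crosstalk_corrected(tet_window, pre_injection):
        """Subtract the 123I scatter and cross-talk measured in the 99mTc window
        just before tetrofosmin injection from the subsequent TET acquisition,
        projection by projection (negative results are clipped to zero)."""
        return np.clip(tet_window - pre_injection, 0, None)
    ```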

  7. Intermediate energy proton-deuteron elastic scattering

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.

    1973-01-01

    A fully symmetrized multiple scattering series is considered for the description of proton-deuteron elastic scattering. An off-shell continuation of the experimentally known two-body amplitudes that retains the exchange symmetries required for the calculation is presented. The one-boson-exchange terms of the two-body amplitudes are evaluated exactly in this off-shell prescription. The first two terms of the multiple scattering series are calculated explicitly, whereas multiple scattering effects are obtained as minimum variance estimates from the 146-MeV data of Postma and Wilson. The multiple scattering corrections indeed consist of low-order partial waves, as suggested by Sloan based on model studies with separable interactions. The Hamada-Johnston wave function is shown to be consistent with the data for internucleon distances greater than about 0.84 fm.

  8. A Scattered Light Correction to Color Images Taken of Europa by the Galileo Spacecraft: Initial Results

    NASA Astrophysics Data System (ADS)

    Phillips, C. B.; Valenti, M.

    2009-12-01

    Jupiter's moon Europa likely possesses an ocean of liquid water beneath its icy surface, but estimates of the thickness of the surface ice shell vary from a few kilometers to tens of kilometers. Color images of Europa reveal the existence of a reddish, non-ice component associated with a variety of geological features. The composition and origin of this material is uncertain, as is its relationship to Europa's various landforms. Published analyses of Galileo Near Infrared Mapping Spectrometer (NIMS) observations indicate the presence of highly hydrated sulfate compounds. This non-ice material may also bear biosignatures or other signs of biotic material. Additional spectral information from the Galileo Solid State Imager (SSI) could further elucidate the nature of the surface deposits, particularly when combined with information from the NIMS. However, little effort has been focused on this approach because proper calibration of the color image data is challenging, requiring both skill and patience to process the data and incorporate the appropriate scattered light correction. We are currently working to properly calibrate the color SSI data. The most important and most difficult issue to address in the analysis of multispectral SSI data entails using thorough calibrations and a correction for scattered light. Early in the Galileo mission, studies of the Galileo SSI data for the moon revealed discrepancies of up to 10% in relative reflectance between images containing scattered light and images corrected for scattered light. Scattered light adds a wavelength-dependent low-intensity brightness factor to pixels across an image. For example, a large bright geological feature located just outside the field of view of an image will scatter extra light onto neighboring pixels within the field of view. Scattered light can be seen as a dim halo surrounding an image that includes a bright limb, and can also come from light scattered inside the camera by dirt, edges, and the interfaces of lenses. Because of the wavelength dependence of this effect, a scattered light correction must be performed on any SSI multispectral dataset before quantitative spectral analysis can be done. The process involves using a point-spread function for each filter that helps determine the amount of scattered light expected for a given pixel based on its location and the model attenuation factor for that pixel. To remove scattered light for a particular image taken through a particular filter, the Fourier transform of the attenuation function, which is the point spread function for that filter, is convolved with the Fourier transform of the image at the same wavelength. The result is then filtered for noise in the frequency domain, and then transformed back to the spatial domain. This results in a version of the original image that would have been taken without the scattered light contribution. We will report on our initial results from this calibration.
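
    As an illustration of the deconvolution procedure sketched in this abstract, the following performs a regularized (Wiener-style) removal of a scattered-light point-spread function in the Fourier domain; the PSF input and the regularization constant are assumptions, and this is not the Galileo SSI calibration pipeline itself:

    ```python
    import numpy as np

    def remove_scattered_light(image, psf, noise_reg=1e-3):
        """Remove a scattered-light contribution from one filter image by
        regularized deconvolution with that filter's point-spread function.

        psf       : filter-specific PSF, same shape as the image, centred,
                    normalized to unit sum (assumed measured separately)
        noise_reg : Wiener-style regularizer standing in for the frequency-domain
                    noise filtering mentioned in the abstract
        """
        IMG = np.fft.fft2(image)
        OTF = np.fft.fft2(np.fft.ifftshift(psf))
        est = IMG * np.conj(OTF) / (np.abs(OTF) ** 2 + noise_reg)
        return np.fft.ifft2(est).real
    ```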

  9. Data-driven sensitivity inference for Thomson scattering electron density measurement systems.

    PubMed

    Fujii, Keisuke; Yamada, Ichihiro; Hasuo, Masahiro

    2017-01-01

    We developed a method to infer the calibration parameters of multichannel measurement systems, such as channel-to-channel variations of sensitivity and noise amplitude, from experimental data. We regard such uncertainties of the calibration parameters as dependent noise. The statistical properties of the dependent noise and those of the latent functions were modeled and implemented in the Gaussian process kernel. Based on their statistical difference, both parameters were inferred from the data. We applied this method to the Thomson scattering electron density measurement system for the Large Helical Device plasma, which is equipped with 141 spatial channels. Based on 210 sets of experimental data, we evaluated the correction factor of the sensitivity and the noise amplitude for each channel. The correction factor varies by ≈10%, and the random noise amplitude is ≈2%, i.e., the measurement accuracy increases by a factor of 5 after this sensitivity correction. An improvement in the certainty of the spatial-derivative inference was also demonstrated.

  10. Coherent beam control through inhomogeneous media in multi-photon microscopy

    NASA Astrophysics Data System (ADS)

    Paudel, Hari Prasad

    Multi-photon fluorescence microscopy has become a primary tool for high-resolution deep tissue imaging because of its sensitivity to ballistic excitation photons in comparison to scattered excitation photons. The imaging depth of multi-photon microscopes in tissue imaging is limited primarily by background fluorescence that is generated by scattered light due to the random fluctuations in refractive index inside the media, and by reduced intensity in the ballistic focal volume due to aberrations within the tissue and at its interface. We built two multi-photon adaptive optics (AO) correction systems, one for combating scattering and aberration problems, and another for compensating interface aberrations. For scattering correction, a MEMS segmented deformable mirror (SDM) was inserted at a plane conjugate to the objective back-pupil plane. The SDM can pre-compensate for light scattering by coherent combination of the scattered light to make an apparent focus even at depths where negligible ballistic light remains (i.e., the ballistic limit). This problem was approached by investigating the spatial and temporal focusing characteristics of a broad-band light source through strongly scattering media. A new model was developed for coherent focus enhancement through or inside strongly scattering media based on the initial speckle contrast. A layer of fluorescent beads under a mouse skull was imaged using an iterative coherent beam control method in the prototype two-photon microscope to demonstrate the technique. We also adapted an AO correction system to an existing three-photon microscope in a collaborator's lab at Cornell University. In the second AO correction approach, a continuous deformable mirror (CDM) is placed at a plane conjugate to the plane of an interface aberration. We demonstrated that this "Conjugate AO" technique yields a large field-of-view (FOV) advantage in comparison to Pupil AO. Further, we showed that the extended FOV in conjugate AO is maintained over a relatively large axial misalignment of the conjugate planes of the CDM and the aberrating interface. This dissertation advances the field of microscopy by providing new models and techniques for imaging deeply within strongly scattering tissue, and by describing new adaptive optics approaches to extending the imaging FOV in the presence of sample aberrations.

  11. [Spectral scatter correction of coal samples based on quasi-linear local weighted method].

    PubMed

    Lei, Meng; Li, Ming; Ma, Xiao-Ping; Miao, Yan-Zi; Wang, Jian-Sheng

    2014-07-01

    The present paper puts forth a new spectral correction method based on quasi-linear expressions and locally weighted functions. The first stage of the method is to select three quasi-linear expressions (quadratic, cubic, and growth-curve) to replace the original linear expression in the MSC method. A local weighting function is then constructed from one of four kernel functions: Gaussian, Epanechnikov, biweight, and triweight. After this function is added to the basic estimation equation, the dependency between the original and ideal spectra is described more accurately at each wavelength point. Furthermore, two analytical models were established, based on PLS and on a PCA-BP neural network, to estimate the accuracy of the corrected spectra. Finally, the optimal correction mode was determined from the analytical results for different combinations of quasi-linear expression and weighting kernel. Spectra of the same coal sample exhibit different scattering noise when the sample is prepared at different particle sizes. To validate the effectiveness of the method, correction results were analyzed for three spectral data sets with particle sizes of 0.2, 1, and 3 mm. The results show that the proposed method can eliminate the scattering influence and enhance the information in the spectral peaks. The method significantly enhances the correlation between the corrected spectra and coal quality parameters, and substantially improves the accuracy and stability of the analytical model.
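
    One plausible reading of the correction described above is a quadratic ("quasi-linear") MSC regression against a reference spectrum with a Gaussian weighting of wavelength points; the kernel choice, its centring, and the bandwidth below are assumptions made for illustration:

    ```python
    import numpy as np

    def quasi_linear_msc(spectra, ref=None, bandwidth=50.0):
        """Quadratic MSC with a Gaussian weighting of wavelength points.

        spectra   : (n_samples, n_wavelengths) raw spectra
        ref       : reference spectrum (defaults to the mean spectrum)
        bandwidth : Gaussian kernel width in wavelength-index units (free parameter)
        """
        if ref is None:
            ref = spectra.mean(axis=0)
        n_wl = spectra.shape[1]
        centre = 0.5 * (n_wl - 1)
        w = np.exp(-0.5 * ((np.arange(n_wl) - centre) / bandwidth) ** 2)
        sw = np.sqrt(w)

        # Weighted least squares against the design matrix [1, ref, ref^2].
        A = np.column_stack([np.ones(n_wl), ref, ref ** 2]) * sw[:, None]
        corrected = np.empty_like(spectra, dtype=float)
        for i, x in enumerate(spectra):
            a, b, c = np.linalg.lstsq(A, x * sw, rcond=None)[0]
            # Remove the fitted offset and curvature, then rescale to the reference.
            corrected[i] = (x - a - c * ref ** 2) / b
        return corrected
    ```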

  12. Projection correlation based view interpolation for cone beam CT: primary fluence restoration in scatter measurement with a moving beam stop array.

    PubMed

    Yan, Hao; Mou, Xuanqin; Tang, Shaojie; Xu, Qiong; Zankl, Maria

    2010-11-07

    Scatter correction is an open problem in x-ray cone beam (CB) CT. The measurement of scatter intensity with a moving beam stop array (BSA) is a promising technique that offers a low patient dose and accurate scatter measurement. However, when restoring the blocked primary fluence behind the BSA, spatial interpolation cannot adequately restore the high-frequency part, causing streaks in the reconstructed image. To address this problem, we derive a projection correlation (PC) to exploit the redundancy (over-determined information) in neighbouring CB views. PC indicates that the main high-frequency information is contained in neighbouring angular projections, instead of the current projection itself, which provides a guiding principle for high-frequency information restoration. On this basis, we present the projection correlation based view interpolation (PC-VI) algorithm and validate that it outperforms spatial interpolation alone. The PC-VI based moving BSA method is developed. In this method, PC-VI is employed instead of spatial interpolation, and new moving modes are designed, which greatly improve the performance of the moving BSA method in terms of reliability and practicability. The method is evaluated on a high-resolution voxel-based human phantom that realistically includes the entire procedure of scatter measurement with a moving BSA, simulated by analytical ray tracing plus Monte Carlo simulation with EGSnrc. With the proposed method, we get visually artefact-free images approaching the ideal correction. Compared with the spatial interpolation based method, the relative mean square error is reduced by a factor of 6.05-15.94 for different slices. PC-VI effectively exploits CB redundancy and therefore has further potential in CBCT studies.

  13. Implementation of GPU accelerated SPECT reconstruction with Monte Carlo-based scatter correction.

    PubMed

    Bexelius, Tobias; Sohlberg, Antti

    2018-06-01

    Statistical SPECT reconstruction can be very time-consuming, especially when compensations for collimator and detector response, attenuation, and scatter are included in the reconstruction. This work proposes an accelerated SPECT reconstruction algorithm based on graphics processing unit (GPU) processing. An ordered subset expectation maximization (OSEM) algorithm with CT-based attenuation modelling, depth-dependent Gaussian convolution-based collimator-detector response modelling, and Monte Carlo-based scatter compensation was implemented using OpenCL. The OpenCL implementation was compared against the existing multi-threaded OSEM implementation running on a central processing unit (CPU) in terms of scatter-to-primary ratios, standardized uptake values (SUVs), and processing speed using mathematical phantoms and clinical multi-bed bone SPECT/CT studies. The difference in scatter-to-primary ratios, visual appearance, and SUVs between GPU and CPU implementations was minor. On the other hand, the GPU implementation was up to 24 times faster than the multi-threaded CPU version on a typical 128 × 128 matrix, 3-bed bone SPECT/CT data set when compensations for collimator and detector response, attenuation, and scatter were included. GPU SPECT reconstructions show great promise as an everyday clinical reconstruction tool.
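
    For context, a generic OSEM update with an additive scatter term in the forward model looks like the sketch below; the forward/backward projector callables are hypothetical placeholders for the attenuation and collimator-detector response modelling described in the abstract, not the authors' OpenCL code:

    ```python
    import numpy as np

    def osem_update(x, projections, subsets, forward, backward, scatter):
        """One OSEM iteration with an additive (e.g. Monte Carlo) scatter estimate.

        x           : current activity image
        projections : measured projection data
        subsets     : list of index arrays selecting projection bins per subset
        forward     : callable(image, subset)  -> modelled projections for the subset
        backward    : callable(values, subset) -> back-projection into image space
        scatter     : scatter estimate in projection space
        """
        for s in subsets:
            expected = forward(x, s) + scatter[s]              # A_s x + scatter
            ratio = projections[s] / np.maximum(expected, 1e-12)
            sensitivity = backward(np.ones_like(ratio), s)     # A_s^T 1
            x = x * backward(ratio, s) / np.maximum(sensitivity, 1e-12)
        return x
    ```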

  14. Infrared weak corrections to strongly interacting gauge boson scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciafaloni, Paolo; Urbano, Alfredo

    2010-04-15

    We evaluate the impact of electroweak corrections of infrared origin on strongly interacting longitudinal gauge boson scattering, calculating all-order resummed expressions at the double log level. As a working example, we consider the standard model with a heavy Higgs. At energies typical of forthcoming experiments (LHC, International Linear Collider, Compact Linear Collider), the corrections are in the 10%-40% range, with the relative sign depending on the initial state considered and on whether or not additional gauge boson emission is included. We conclude that the effect of radiative electroweak corrections should be included in the analysis of longitudinal gauge boson scattering.

  15. Investigation of electron-loss and photon scattering correction factors for FAC-IR-300 ionization chamber

    NASA Astrophysics Data System (ADS)

    Mohammadi, S. M.; Tavakoli-Anbaran, H.; Zeinali, H. Z.

    2017-02-01

    The parallel-plate free-air ionization chamber termed FAC-IR-300 was designed at the Atomic Energy Organization of Iran, AEOI. This chamber is used for low- and medium-energy X-ray dosimetry at the primary standard level. In order to evaluate the air kerma, some correction factors such as the electron-loss correction factor (ke) and the photon scattering correction factor (ksc) are needed. The ke factor corrects for charge loss from the collecting volume, and the ksc factor corrects for photons scattered into the collecting volume. In this work, ke and ksc were estimated by Monte Carlo simulation. These correction factors were calculated for mono-energetic photons. Based on the simulation data, the ke and ksc values for the FAC-IR-300 ionization chamber are 1.0704 and 0.9982, respectively.

  16. Measurement of light scattering in an urban area with a nephelometer and PM2.5 FDMS TEOM monitor: accounting for the effect of water.

    PubMed

    Cropper, Paul M; Hansen, Jaron C; Eatough, Delbert J

    2013-09-01

    The U.S. Environmental Protection Agency (EPA) has proposed a new secondary standard based on visibility in urban areas. The proposed standard will be based on light extinction, calculated from 24-hr averaged measurements. It would be desirable to base the standard on a shorter averaging time to better represent human perception of visibility. This could be accomplished by either an estimation of extinction from semicontinuous particulate matter (PM) data or direct measurement of scattering and absorption. To this end, we compared 1-hr measurements of fine plus coarse particulate scattering using a nephelometer along with an estimate of absorption from aethalometer measurements. The study took place in Lindon, UT, during February and March 2012. The nephelometer measurements were corrected for coarse particle scattering and compared to the Filter Dynamic Measurement System (FDMS) tapered element oscillating microbalance monitor (TEOM) PM2.5 measurements. The two measurements agreed with a mass scattering coefficient of 3.3 +/- 0.3 m2/g at relative humidity below 80%. However, at higher humidity, the nephelometer gave higher scattering results due to water absorbed by ammonium nitrate and ammonium sulfate in the particles. This particle-associated water is not measured by the FDMS TEOM. The FDMS TEOM data could be corrected for this difference using appropriate IMPROVE protocols if the particle composition is known. However, a better approach may be to use a particle measurement system that allows for semicontinuous measurements but also measures particle-bound water. Data are presented from a 2003 study in Rubidoux, CA, showing how this could be accomplished using a Grimm model 1100 aerosol spectrometer or comparable instrument.

  17. GIXSGUI: a MATLAB toolbox for grazing-incidence X-ray scattering data visualization and reduction, and indexing of buried three-dimensional periodic nanostructured films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Zhang

    GIXSGUI is a MATLAB toolbox that offers both a graphical user interface and script-based access to visualize and process grazing-incidence X-ray scattering data from nanostructures on surfaces and in thin films. It provides routine surface scattering data reduction methods such as geometric correction, one-dimensional intensity linecut, two-dimensional intensity reshaping, etc. Three-dimensional indexing is also implemented to determine the space group and lattice parameters of buried organized nanoscopic structures in supported thin films.

  18. New approach to CT pixel-based photon dose calculations in heterogeneous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, J.W.; Henkelman, R.M.

    The effects of small cavities on dose in water and the dose in a homogeneous nonunit density medium illustrate that inhomogeneities do not act independently in photon dose perturbation, and serve as two constraints which should be satisfied by approximate methods of computed tomography (CT) pixel-based dose calculations. Current methods at best satisfy only one of the two constraints and show inadequacies in some intermediate geometries. We have developed an approximate method that satisfies both these constraints and treats much of the synergistic effect of multiple inhomogeneities correctly. The method calculates primary and first-scatter doses by first-order ray tracing with the first-scatter contribution augmented by a component of second scatter that behaves like first scatter. Multiple-scatter dose perturbation values extracted from small cavity experiments are used in a function which approximates the small residual multiple-scatter dose. For a wide range of geometries tested, our method agrees very well with measurements. The average deviation is less than 2% with a maximum of 3%. In comparison, calculations based on existing methods can have errors larger than 10%.

  19. Quantitative assessment of scatter correction techniques incorporated in next generation dual-source computed tomography

    NASA Astrophysics Data System (ADS)

    Mobberley, Sean David

    Accurate, cross-scanner assessment of in-vivo air density used to quantitatively assess the amount and distribution of emphysema in COPD subjects has remained elusive. Hounsfield units (HU) within tracheal air can be considerably more positive than -1000 HU. With the advent of new dual-source scanners which employ dedicated scatter correction techniques, it is of interest to evaluate how the quantitative measures of lung density compare between dual-source and single-source scan modes. This study has sought to characterize in-vivo and phantom-based air metrics using dual-energy computed tomography technology where the nature of the technology has required adjustments to scatter correction. Anesthetized ovine (N=6), swine (N=13; more human-like rib cage shape), a lung phantom, and a thoracic phantom were studied using a dual-source MDCT scanner (Siemens Definition Flash). Multiple dual-source dual-energy (DSDE) and single-source (SS) scans taken at different energy levels and scan settings were acquired for direct quantitative comparison. Density histograms were evaluated for the lung, tracheal, water and blood segments. Image data were obtained at 80, 100, 120, and 140 kVp in the SS mode (B35f kernel) and at 80, 100, 140, and 140-Sn (tin filtered) kVp in the DSDE mode (B35f and D30f kernels), in addition to variations in dose, rotation time, and pitch. To minimize the effect of cross-scatter, the phantom scans in the DSDE mode were obtained by reducing the tube current of one of the tubes to its minimum (near zero) value. When using image data obtained in the DSDE mode, the median HU values in the tracheal regions of all animals and the phantom were consistently closer to -1000 HU regardless of reconstruction kernel (chapters 3 and 4). Similarly, HU values of water and blood were consistently closer to their nominal values of 0 HU and 55 HU, respectively. When using image data obtained in the SS mode, the air CT numbers demonstrated a consistent positive shift of up to 35 HU with respect to the nominal -1000 HU value. In vivo data demonstrated considerable variability in tracheal air HU values, influenced by local anatomy, with SS mode scanning, while tracheal air was more consistent with DSDE imaging. Scatter effects in the lung parenchyma differed from adjacent tracheal measures. In summary, the data suggest that enhanced scatter correction serves to provide more accurate CT lung density measures for quantitatively assessing the presence and distribution of emphysema in COPD subjects. The data further suggest that CT images, acquired without adequate scatter correction, cannot be corrected by linear algorithms given the variability in tracheal air HU values and the independent scatter effects on lung parenchyma.

  20. SU-D-206-07: CBCT Scatter Correction Based On Rotating Collimator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, G; Feng, Z; Yin, Y

    2016-06-15

    Purpose: Scatter correction in cone-beam computed tomography (CBCT) has an obvious effect on the removal of image noise and cupping artifacts and on the increase of image contrast. Several methods using a beam blocker for the estimation and subtraction of scatter have been proposed. However, the inconvenience of the mechanics and a propensity for residual artifacts have limited the further evolution of basic and clinical research. Here, we propose a rotating collimator-based approach, in conjunction with reconstruction based on a discrete Radon transform and Tchebichef moments algorithm, to correct scatter-induced artifacts. Methods: A rotating collimator, comprising round tungsten alloy strips, was mounted on a linear actuator. The rotating collimator is divided equally into six portions. The round strips are evenly spaced within each portion but staggered between portions. A step motor connected to the rotating collimator drove the blocker around the x-ray source during the CBCT acquisition. CBCT reconstruction based on a discrete Radon transform and Tchebichef moments algorithm is then performed. Experimental studies using a water phantom and a Catphan504 phantom were carried out to evaluate the performance of the proposed scheme. Results: The proposed algorithm was tested on both Monte Carlo simulations and actual experiments with the Catphan504 phantom. From the simulation results, the mean square error of the reconstruction decreases from 16% to 1.18%, the cupping (τcup) from 14.005% to 0.66%, and the peak signal-to-noise ratio increases from 16.9594 to 31.45. In the actual experiments, the induced visual artifacts are significantly reduced. Conclusion: We conducted experiments on a CBCT imaging system with a rotating collimator to develop and optimize an x-ray scatter control and reduction technique. The proposed method is attractive in applications where a high CBCT image quality is critical, for example, dose calculation in adaptive radiation therapy. We want to thank Dr. Lei Xing and Dr. Yong Yang in the Stanford University School of Medicine for this work. This work was jointly supported by NSFC (61471226), Natural Science Foundation for Distinguished Young Scholars of Shandong Province (JQ201516), and China Postdoctoral Science Foundation (2015T80739, 2014M551949).

  1. Efficient scatter distribution estimation and correction in CBCT using concurrent Monte Carlo fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bootsma, G. J., E-mail: Gregory.Bootsma@rmp.uhn.on.ca; Verhaegen, F.; Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec H3G 1A4

    2015-01-15

    Purpose: X-ray scatter is a significant impediment to image quality improvements in cone-beam CT (CBCT). The authors present and demonstrate a novel scatter correction algorithm using a scatter estimation method that simultaneously combines multiple Monte Carlo (MC) CBCT simulations through the use of a concurrently evaluated fitting function, referred to as concurrent MC fitting (CMCF). Methods: The CMCF method uses concurrently run MC CBCT scatter projection simulations that are a subset of the projection angles used in the projection set, P, to be corrected. The scattered photons reaching the detector in each MC simulation are simultaneously aggregated by an algorithm which computes the scatter detector response, S_MC. S_MC is fit to a function, S_F, and if the fit of S_F is within a specified goodness of fit (GOF), the simulations are terminated. The fit, S_F, is then used to interpolate the scatter distribution over all pixel locations for every projection angle in the set P. The CMCF algorithm was tested using a frequency-limited sum of sines and cosines as the fitting function on both simulated and measured data. The simulated data consisted of an anthropomorphic head and a pelvis phantom created from CT data, simulated with and without the use of a compensator. The measured data were a pelvis scan of a phantom and patient taken on an Elekta Synergy platform. The simulated data were used to evaluate various GOF metrics as well as determine a suitable fitness value. The simulated data were also used to quantitatively evaluate the image quality improvements provided by the CMCF method. A qualitative analysis was performed on the measured data by comparing the CMCF scatter-corrected reconstruction to the original uncorrected reconstruction and to a reconstruction corrected with a constant scatter estimate, as well as to a reconstruction created using a set of projections taken with a small cone angle. Results: Pearson's correlation, r, proved to be a suitable GOF metric with strong correlation with the actual error of the scatter fit, S_F. Fitting the scatter distribution to a limited sum of sine and cosine functions using a low-pass filtered fast Fourier transform provided a computationally efficient and accurate fit. The CMCF algorithm reduces the number of photon histories required by over four orders of magnitude. The simulated experiments showed that using a compensator reduced the computational time by a factor between 1.5 and 1.75. The scatter estimates for the simulated and measured data were computed between 35–93 s and 114–122 s, respectively, using 16 Intel Xeon cores (3.0 GHz). The CMCF scatter correction improved the contrast-to-noise ratio by 10%–50% and reduced the reconstruction error to under 3% for the simulated phantoms. Conclusions: The novel CMCF algorithm significantly reduces the computation time required to estimate the scatter distribution by reducing the statistical noise in the MC scatter estimate and limiting the number of projection angles that must be simulated. Using the scatter estimate provided by the CMCF algorithm to correct both simulated and real projection data showed improved reconstruction image quality.
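
    A sketch of the band-limited fit and the goodness-of-fit check described above: the running MC scatter aggregate is smoothed by keeping only the lowest spatial frequencies of its FFT, and simulation stops once the Pearson correlation between the aggregate and its fit exceeds a threshold (the fraction of retained frequencies and the threshold value are illustrative, not the paper's settings):

    ```python
    import numpy as np

    def fit_scatter_lowpass(s_mc, keep_fraction=0.05):
        """Fit a noisy MC scatter estimate with a band-limited sum of sines and
        cosines by zeroing all but the lowest spatial frequencies of its 2-D FFT."""
        F = np.fft.fft2(s_mc)
        ny, nx = s_mc.shape
        ky = np.fft.fftfreq(ny)
        kx = np.fft.fftfreq(nx)
        mask = (np.abs(ky)[:, None] <= 0.5 * keep_fraction) & \
               (np.abs(kx)[None, :] <= 0.5 * keep_fraction)
        return np.fft.ifft2(F * mask).real

    def converged(s_mc, s_fit, r_threshold=0.95):
        """Goodness-of-fit check: Pearson correlation between the running MC
        aggregate and its smooth fit; simulations stop once it is high enough."""
        r = np.corrcoef(s_mc.ravel(), s_fit.ravel())[0, 1]
        return r >= r_threshold
    ```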

  2. Frequency-domain method for measuring spectral properties in multiple-scattering media: methemoglobin absorption spectrum in a tissuelike phantom

    NASA Astrophysics Data System (ADS)

    Fishkin, Joshua B.; So, Peter T. C.; Cerussi, Albert E.; Gratton, Enrico; Fantini, Sergio; Franceschini, Maria Angela

    1995-03-01

    We have measured the optical absorption and scattering coefficient spectra of a multiple-scattering medium (i.e., a biological tissue-simulating phantom comprising a lipid colloid) containing methemoglobin by using frequency-domain techniques. The methemoglobin absorption spectrum determined in the multiple-scattering medium is in excellent agreement with a corrected methemoglobin absorption spectrum obtained from a steady-state spectrophotometer measurement of the optical density of a minimally scattering medium. The determination of the corrected methemoglobin absorption spectrum takes into account the scattering from impurities in the methemoglobin solution containing no lipid colloid. Frequency-domain techniques allow for the separation of the absorbing from the scattering properties of multiple-scattering media, and these techniques thus provide an absolute

  3. Correction method for influence of tissue scattering for sidestream dark-field oximetry using multicolor LEDs

    NASA Astrophysics Data System (ADS)

    Kurata, Tomohiro; Oda, Shigeto; Kawahira, Hiroshi; Haneishi, Hideaki

    2016-12-01

    We previously proposed a method for estimating intravascular oxygen saturation (SO_2) from images obtained by sidestream dark-field (SDF) imaging (we call it SDF oximetry) and investigated its fundamental characteristics by Monte Carlo simulation. In this paper, we propose a correction method for tissue scattering and perform experiments with turbid phantoms, as well as Monte Carlo simulations, to investigate the influence of tissue scattering on SDF imaging. The estimation method uses modified extinction coefficients of hemoglobin, called average extinction coefficients (AECs), to correct for the influence of the illumination source bandwidth, the imaging camera characteristics, and tissue scattering. We estimate the tissue scattering coefficient from the maximum slope of the pixel-value profile along a line perpendicular to the blood vessel in an SDF image and correct the AECs using this scattering coefficient. To evaluate the proposed method, we developed a trial SDF probe that obtains three-band images by switching multicolor light-emitting diodes and imaged turbid phantoms composed of agar powder, fat emulsion, and bovine-blood-filled glass tubes. We found that increased scattering by the phantom body decreased the AECs, and that using suitable AEC values led to more accurate SO_2 estimation, confirming the validity of the proposed correction method.
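    The correction step can be pictured with a heavily simplified sketch: estimate a scattering coefficient from the steepest edge of a vessel profile and rescale the AECs accordingly. The linear calibration constants and the form of the AEC adjustment below are hypothetical placeholders, not values or formulas from the paper.

```python
# Illustrative sketch only: estimate the tissue scattering coefficient from the maximum
# slope of an intensity profile across a vessel, then adjust the average extinction
# coefficients (AECs). SLOPE_TO_MUS and AEC_SENSITIVITY are hypothetical placeholders.
import numpy as np

SLOPE_TO_MUS = 50.0        # hypothetical mapping: max |slope| -> mu_s' (1/mm)
AEC_SENSITIVITY = 0.02     # hypothetical fractional AEC change per unit mu_s'

def estimate_mus_from_profile(profile):
    """Estimate a scattering coefficient from the steepest edge of a
    pixel-value profile taken perpendicular to a vessel."""
    return SLOPE_TO_MUS * np.max(np.abs(np.gradient(np.asarray(profile, float))))

def corrected_aec(aec_nominal, mus):
    """Scale a nominal AEC by a (hypothetical) linear scattering correction."""
    return aec_nominal * (1.0 + AEC_SENSITIVITY * mus)

profile = [0.80, 0.78, 0.60, 0.35, 0.30, 0.33, 0.58, 0.77, 0.80]  # across a vessel
mus = estimate_mus_from_profile(profile)
print("estimated mu_s':", round(mus, 2), "-> corrected AEC:", round(corrected_aec(0.45, mus), 3))
```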

  4. Limitations on near-surface correction for multicomponent offset VSP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macbeth, C.; Li, X.Y.; Horne, S.

    1994-12-31

    Multicomponent data are degraded by near-surface scattering and by non-ideal or unexpected source behavior. These effects cannot be neglected when interpreting relative wavefield attributes derived from compressional and shear waves; they confuse analyses based on standard scalar procedures and a prima facie interpretation of the vector wavefield properties. Here, the authors highlight two unique polar matrix decompositions for near-surface correction in offset VSPs and consider their inherent mathematical constraints and how these impact subsurface interpretation. The first method is applied to a four-component subset of six-component field data from a configuration of three concentric rings and walkaway source positions forming offset VSPs in the Cymric field, California. The correction appears successful in automatically converting the wavefield into its ideal form, and the qS1 polarizations scatter around N15{degree}E, in agreement with the layer stripping of Winterstein and Meadows (1991).

  5. Simulating the influence of scatter and beam hardening in dimensional computed tomography

    NASA Astrophysics Data System (ADS)

    Lifton, J. J.; Carmignato, S.

    2017-10-01

    Cone-beam x-ray computed tomography (XCT) is a radiographic scanning technique that allows the non-destructive dimensional measurement of an object’s internal and external features. XCT measurements are influenced by a number of different factors that are poorly understood. This work investigates how non-linear x-ray attenuation caused by beam hardening and scatter influences XCT-based dimensional measurements through the use of simulated data. For the measurement task considered, both scatter and beam hardening are found to influence dimensional measurements when evaluated using the ISO50 surface determination method. On the other hand, only beam hardening is found to influence dimensional measurements when evaluated using an advanced surface determination method. Based on the results presented, recommendations on the use of beam hardening and scatter correction for dimensional XCT are given.
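    For reference, the ISO50 surface determination mentioned above places the surface threshold halfway between the background and material gray-value peaks of the volume histogram. A minimal sketch, assuming a simple bimodal histogram; the synthetic volume is an illustrative assumption.

```python
# Sketch of the ISO50 surface-determination rule: the surface threshold is placed at the
# midpoint between the background and material gray-value peaks of the histogram.
import numpy as np

def iso50_threshold(volume, bins=256):
    """Return the ISO50 gray value: the midpoint between the two dominant
    histogram peaks (assumes the peaks lie on opposite sides of the gray-value midpoint)."""
    hist, edges = np.histogram(volume.ravel(), bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mid = bins // 2
    background_peak = centers[np.argmax(hist[:mid])]
    material_peak = centers[mid + np.argmax(hist[mid:])]
    return 0.5 * (background_peak + material_peak)

# Toy volume: air (~0.0) and material (~1.0) gray values with noise.
rng = np.random.default_rng(0)
vol = np.concatenate([rng.normal(0.0, 0.05, 50000), rng.normal(1.0, 0.05, 50000)])
print("ISO50 threshold:", round(iso50_threshold(vol), 3))   # ~0.5
```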

  6. Clinical Evaluation of 68Ga-PSMA-II and 68Ga-RM2 PET Images Reconstructed With an Improved Scatter Correction Algorithm.

    PubMed

    Wangerin, Kristen A; Baratto, Lucia; Khalighi, Mohammad Mehdi; Hope, Thomas A; Gulaka, Praveen K; Deller, Timothy W; Iagaru, Andrei H

    2018-06-06

    Gallium-68-labeled radiopharmaceuticals pose a challenge for scatter estimation because their targeted nature can produce very high contrast in the kidneys and bladder. Even small errors in the scatter estimate can result in washout artifacts around these regions. Administration of diuretics can reduce the artifacts, but diuretics may cause adverse events. Here, we investigated the ability of algorithmic modifications to mitigate washout artifacts and eliminate the need for diuretics or other interventions. The model-based scatter algorithm was modified to account for the PET/MRI scanner geometry and the challenges of non-FDG tracers. Fifty-three clinical 68Ga-RM2 and 68Ga-PSMA-11 whole-body images were reconstructed using the baseline scatter algorithm. For comparison, reconstructions were also performed with modified sampling in the single-scatter estimation and with an offset in the scatter tail-scaling process. None of the patients received furosemide to decrease the accumulation of radiopharmaceuticals in the bladder. The images were scored independently by three blinded reviewers using a 5-point Likert scale. The scatter algorithm improvements significantly decreased or completely eliminated the washout artifacts. Comparing the baseline and most improved algorithms, image quality increased and image artifacts were reduced for both 68Ga-RM2 and 68Ga-PSMA-11 in the kidney and bladder regions. Image reconstruction with the improved scatter correction algorithm mitigated washout artifacts and recovered diagnostic image quality in 68Ga PET, indicating that the use of diuretics may be avoided.

  7. THE MOSDEF SURVEY: DISSECTING THE STAR FORMATION RATE VERSUS STELLAR MASS RELATION USING Hα AND Hβ EMISSION LINES AT z ∼ 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shivaei, Irene; Reddy, Naveen A.; Siana, Brian

    2015-12-20

    We present results on the star formation rate (SFR) versus stellar mass (M{sub *}) relation (i.e., the “main sequence”) among star-forming galaxies at 1.37 ≤ z ≤ 2.61 using the MOSFIRE Deep Evolution Field (MOSDEF) survey. Based on a sample of 261 galaxies with Hα and Hβ spectroscopy, we have estimated robust dust-corrected instantaneous SFRs over a large range in M{sub *} (∼10{sup 9.5}–10{sup 11.5} M{sub ⊙}). We find a correlation between log(SFR(Hα)) and log(M{sub *}) with a slope of 0.65 ± 0.08 (0.58 ± 0.10) at 1.4 < z < 2.6 (2.1 < z < 2.6). We find that different assumptions for the dust correction, such as using the color excess of the stellar continuum to correct the nebular lines, sample selection biases against red star-forming galaxies, and not accounting for Balmer absorption, can yield steeper slopes of the log(SFR)–log(M{sub *}) relation. Our sample is immune from these biases as it is rest-frame optically selected, Hα and Hβ are corrected for Balmer absorption, and the Hα luminosity is dust corrected using the nebular color excess computed from the Balmer decrement. The scatter of the log(SFR(Hα))–log(M{sub *}) relation, after accounting for the measurement uncertainties, is 0.31 dex at 2.1 < z < 2.6, which is 0.05 dex larger than the scatter in log(SFR(UV))–log(M{sub *}). Based on comparisons to a simulated SFR–M{sub *} relation with some intrinsic scatter, we argue that in the absence of direct measurements of galaxy-to-galaxy variations in the attenuation/extinction curves and the initial mass function, one cannot use the difference in the scatter of the SFR(Hα)– and SFR(UV)–M{sub *} relations to constrain the stochasticity of star formation in high-redshift galaxies.

  8. Rayleigh Scattering.

    ERIC Educational Resources Information Center

    Young, Andrew T.

    1982-01-01

    The correct usage of such terminology as "Rayleigh scattering," "Rayleigh lines," "Raman lines," and "Tyndall scattering" is resolved during an historical excursion through the physics of light scattering by gas molecules. (Author/JN)

  9. Parameterization of the Van Hove dynamic self-scattering law Ss(Q,omega)

    NASA Astrophysics Data System (ADS)

    Zetterstrom, P.

    In this paper we present, for the first time, a model of the Van Hove dynamic scattering law S_ME(Q, omega) based on the maximum entropy principle. The model is intended for use in calculating inelastic corrections to neutron diffraction data. It is constrained by the first and second frequency moments and by detailed balance, but can be expanded to an arbitrary number of frequency moments. The second moment can be varied through an effective temperature to account for the kinetic energy of the atoms. The results are compared with a diffusion model of the scattering law. Finally, some calculations of the inelastic self-scattering for a time-of-flight diffractometer are presented, which show that the inelastic self-scattering is very sensitive to the details of the dynamic scattering law.

  10. Simple wavefront correction framework for two-photon microscopy of in-vivo brain

    PubMed Central

    Galwaduge, P. T.; Kim, S. H.; Grosberg, L. E.; Hillman, E. M. C.

    2015-01-01

    We present an easily implemented wavefront correction scheme that has been specifically designed for in-vivo brain imaging. The system can be implemented with a single liquid crystal spatial light modulator (LCSLM), which makes it compatible with existing patterned illumination setups, and provides measurable signal improvements even after a few seconds of optimization. The optimization scheme is signal-based and does not require exogenous guide-stars, repeated image acquisition or beam constraint. The unconstrained beam approach allows the use of Zernike functions for aberration correction and Hadamard functions for scattering correction. Low order corrections performed in mouse brain were found to be valid up to hundreds of microns away from the correction location. PMID:26309763
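    The signal-based, guide-star-free optimization can be sketched as a coordinate-wise search over modal coefficients (Zernike or Hadamard), keeping whichever amplitude maximizes the measured signal. This is a conceptual sketch, not the authors' implementation; `measure_signal` stands in for the SLM update and camera readout.

```python
# Minimal sketch of signal-based, guide-star-free wavefront optimization: sweep the
# amplitude of one correction mode at a time and keep the amplitude that maximizes
# the measured signal. The mode count and amplitude grid are illustrative assumptions.
import numpy as np

def optimize_modes(measure_signal, n_modes=10, amplitudes=np.linspace(-1, 1, 9)):
    """Coordinate-wise search over modal coefficients; returns the best coefficients."""
    coeffs = np.zeros(n_modes)
    for m in range(n_modes):
        best_amp, best_sig = 0.0, -np.inf
        for a in amplitudes:
            trial = coeffs.copy()
            trial[m] = a
            sig = measure_signal(trial)   # apply pattern on the SLM, read mean intensity
            if sig > best_sig:
                best_amp, best_sig = a, sig
        coeffs[m] = best_amp
    return coeffs

# Toy stand-in for the microscope: signal peaks when coefficients match a hidden aberration.
hidden = np.random.default_rng(1).uniform(-0.8, 0.8, 10)
signal = lambda c: np.exp(-np.sum((c - hidden) ** 2))
print("recovered coefficients (first 3):", np.round(optimize_modes(signal)[:3], 2))
```

    Each trial costs one image acquisition, which is why a usable correction can appear after only a few seconds of optimization.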

  11. Widefield fluorescence microscopy with sensor-based conjugate adaptive optics using oblique back illumination

    PubMed Central

    Li, Jiang; Bifano, Thomas G.; Mertz, Jerome

    2016-01-01

    We describe a wavefront sensor strategy for the implementation of adaptive optics (AO) in microscope applications involving thick, scattering media. The strategy is based on the exploitation of multiple scattering to provide oblique back illumination of the wavefront-sensor focal plane, enabling a simple and direct measurement of the flux-density tilt angles caused by aberrations at this plane. Advantages of the sensor are that it provides a large measurement field of view (FOV) while requiring no guide star, making it particularly adapted to a type of AO called conjugate AO, which provides a large correction FOV in cases when sample-induced aberrations arise from a single dominant plane (e.g., the sample surface). We apply conjugate AO here to widefield (i.e., nonscanning) fluorescence microscopy for the first time and demonstrate dynamic wavefront correction in a closed-loop implementation. PMID:27653793

  12. Scatter correction for x-ray conebeam CT using one-dimensional primary modulation

    NASA Astrophysics Data System (ADS)

    Zhu, Lei; Gao, Hewei; Bennett, N. Robert; Xing, Lei; Fahrig, Rebecca

    2009-02-01

    Recently, we developed an efficient scatter correction method for x-ray imaging using primary modulation. A two-dimensional (2D) primary modulator with spatially variant attenuating materials is inserted between the x-ray source and the object to separate primary and scatter signals in the Fourier domain. Due to the high modulation frequency in both directions, the 2D primary modulator has a strong scatter correction capability for objects with arbitrary geometries. However, signal processing on the modulated projection data requires knowledge of the modulator position and attenuation. In practical systems, mainly due to system gantry vibration, beam hardening effects and the ramp-filtering in the reconstruction, the insertion of the 2D primary modulator results in artifacts such as rings in the CT images, if no post-processing is applied. In this work, we eliminate the source of artifacts in the primary modulation method by using a one-dimensional (1D) modulator. The modulator is aligned parallel to the ramp-filtering direction to avoid error magnification, while sufficient primary modulation is still achieved for scatter correction on a quasi-cylindrical object, such as a human body. The scatter correction algorithm is also greatly simplified for convenience and stability in practical implementations. The method is evaluated on a clinical CBCT system using the Catphan© 600 phantom. The result shows effective scatter suppression without introducing additional artifacts. In the selected regions of interest, the reconstruction error is reduced from 187.2 HU to 10.0 HU when the proposed method is used.
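    The separation principle behind primary modulation can be shown in one dimension: the modulator pushes a copy of the primary to a high spatial frequency while scatter stays low frequency, so demodulating the measured signal recovers the (low-frequency) primary and subtraction yields the scatter estimate. This is a conceptual sketch under idealized assumptions (a perfect sinusoidal modulator of known depth, smooth scatter, no beam hardening), not the algorithm of the paper.

```python
# Conceptual 1-D sketch of primary/scatter separation by primary modulation.
# Assumptions: ideal sinusoidal modulator of known depth, smooth scatter profile.
import numpy as np

def lowpass(sig, cutoff_frac=0.05):
    """Keep only Fourier components below cutoff_frac of the sampling frequency."""
    F = np.fft.rfft(sig)
    f = np.fft.rfftfreq(sig.size)
    F[f > cutoff_frac] = 0.0
    return np.fft.irfft(F, n=sig.size)

n = 1024
x = np.arange(n)
primary = 1000 * np.exp(-((x - n / 2) / 250.0) ** 2)      # smooth primary profile
scatter = 200 + 100 * np.sin(2 * np.pi * x / n)           # smooth scatter profile
a, f_mod = 0.4, 0.25                                      # modulation depth and frequency
modulator = 1 + a * np.cos(2 * np.pi * f_mod * x)
measured = primary * modulator + scatter                  # detector signal with modulator in

# Demodulate: multiplying by the carrier and low-pass filtering isolates (a/2)*primary.
primary_est = lowpass(measured * np.cos(2 * np.pi * f_mod * x)) * 2.0 / a
scatter_est = lowpass(measured) - primary_est
print("scatter RMS error:", round(np.sqrt(np.mean((scatter_est - scatter) ** 2)), 1))
```

    Aligning the modulation with the ramp-filtering direction, as the paper does, is what prevents the residual high-frequency terms from being magnified during reconstruction.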

  13. Improved scatter correction using adaptive scatter kernel superposition

    NASA Astrophysics Data System (ADS)

    Sun, M.; Star-Lack, J. M.

    2010-11-01

    Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS) requiring spatial domain convolutions and (2) fast adaptive scatter kernel superposition (fASKS) where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS, were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
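    The underlying scatter-kernel-superposition operation is a convolution of a scatter-amplitude term with a broad kernel; fASKS performs it in Fourier space. The sketch below is a strongly simplified, stationary version with a single Gaussian kernel and a constant amplitude; the adaptive, thickness-dependent kernels are the paper's contribution and are not reproduced here.

```python
# Simplified, stationary scatter-kernel-superposition sketch (not the ASKS/fASKS code):
# scatter is modeled as an amplitude-scaled projection convolved with a broad kernel,
# with the convolution done as a product in Fourier space. Kernel width and amplitude
# below are illustrative assumptions.
import numpy as np

def gaussian_kernel(shape, sigma_px):
    y = np.fft.fftfreq(shape[0]) * shape[0]
    x = np.fft.fftfreq(shape[1]) * shape[1]
    k = np.exp(-(y[:, None] ** 2 + x[None, :] ** 2) / (2 * sigma_px ** 2))
    return k / k.sum()

def sks_scatter_estimate(projection, sigma_px=40.0, amplitude=0.3):
    """Estimate scatter as (amplitude * projection) convolved with a broad Gaussian kernel."""
    kernel = gaussian_kernel(projection.shape, sigma_px)
    return np.real(np.fft.ifft2(np.fft.fft2(amplitude * projection) * np.fft.fft2(kernel)))

proj = np.ones((256, 256)) * 500.0
proj[96:160, 96:160] = 2000.0                     # a bright region behind thin tissue
corrected = proj - sks_scatter_estimate(proj)     # subtract the scatter estimate
print("mean scatter-to-primary ratio:", round(sks_scatter_estimate(proj).mean() / proj.mean(), 2))
```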

  14. Analytic image reconstruction from partial data for a single-scan cone-beam CT with scatter correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Min, Jonghwan; Pua, Rizza; Cho, Seungryong, E-mail: scho@kaist.ac.kr

    Purpose: A beam-blocker composed of multiple strips is a useful gadget for scatter correction and/or for dose reduction in cone-beam CT (CBCT). However, the use of such a beam-blocker would yield cone-beam data that can be challenging for accurate image reconstruction from a single scan in the filtered-backprojection framework. The focus of the work was to develop an analytic image reconstruction method for CBCT that can be directly applied to partially blocked cone-beam data in conjunction with the scatter correction. Methods: The authors developed a rebinned backprojection-filtration (BPF) algorithm for reconstructing images from the partially blocked cone-beam data in a circular scan. The authors also proposed a beam-blocking geometry considering data redundancy such that an efficient scatter estimate can be acquired and sufficient data for BPF image reconstruction can be secured at the same time from a single scan without using any blocker motion. Additionally, a scatter correction method and a noise reduction scheme were developed. The authors have performed both simulation and experimental studies to validate the rebinned BPF algorithm for image reconstruction from partially blocked cone-beam data. Quantitative evaluations of the reconstructed image quality were performed in the experimental studies. Results: The simulation study revealed that the developed reconstruction algorithm successfully reconstructs the images from the partial cone-beam data. In the experimental study, the proposed method effectively corrected for the scatter in each projection and reconstructed scatter-corrected images from a single scan. Reduction of cupping artifacts and an enhancement of the image contrast have been demonstrated. The image contrast has increased by a factor of about 2, and the image accuracy in terms of root-mean-square error with respect to the fan-beam CT image has increased by more than 30%. Conclusions: The authors have successfully demonstrated that the proposed scanning method and image reconstruction algorithm can effectively estimate the scatter in cone-beam projections and produce tomographic images of nearly scatter-free quality. The authors believe that the proposed method would provide a fast and efficient CBCT scanning option for various applications, particularly head-and-neck scans.

  15. Scatter and beam hardening reduction in industrial computed tomography using photon counting detectors

    NASA Astrophysics Data System (ADS)

    Schumacher, David; Sharma, Ravi; Grager, Jan-Carl; Schrapp, Michael

    2018-07-01

    Photon counting detectors (PCDs) offer new possibilities for x-ray micro computed tomography (CT) in the field of non-destructive testing. For large and/or dense objects with high atomic numbers, scattered radiation and beam hardening severely degrade image quality. This work shows that an energy-discriminating PCD based on CdTe addresses both problems by intrinsically reducing the influence of scattering and of beam hardening. Based on 2D radiographic measurements, it is shown that energy thresholding reduces the influence of scattered radiation for a PCD compared with a conventional energy-integrating detector (EID). To demonstrate the capability of a PCD to reduce beam hardening, cupping artefacts are analyzed quantitatively. The PCD results show that the higher the energy threshold is set, the weaker the cupping effect becomes. Since numerous beam hardening correction algorithms exist, the PCD results are also compared with EID results corrected by common techniques; even so, the highest energy thresholds yield lower cupping artefacts than any of the applied correction algorithms. As an example of a potential industrial CT application, a turbine blade is investigated by CT. The inner structure of the turbine blade allows the image quality of PCD and EID to be compared in terms of absolute contrast as well as normalized signal-to-noise and contrast-to-noise ratios. While the absolute contrast can be improved by raising the energy thresholds of the PCD, the normalized contrast-to-noise ratio could not be improved compared with the EID because of the lower counting statistics. These results might be reversed if pre-filtering of the x-ray spectrum were omitted, allowing more low-energy photons to reach the detectors. Although still at an early stage of technological development, PCDs already improve CT image quality compared with conventional detectors in terms of scatter and beam hardening reduction.

  16. Correction of Rayleigh Scattering Effects in Cloud Optical Thickness Retrievals

    NASA Technical Reports Server (NTRS)

    Wang, Meng-Hua; King, Michael D.

    1997-01-01

    We present results that demonstrate the effects of Rayleigh scattering on the retrieval of cloud optical thickness at a visible wavelength (0.66 µm). The sensor-measured radiance at this wavelength is usually used to infer the cloud optical thickness remotely from aircraft or satellite instruments. For example, we find that without removing Rayleigh scattering effects, errors in the retrieved cloud optical thickness for a thin water cloud layer (τ = 2.0) range from 15 to 60%, depending on solar zenith angle and viewing geometry. For an optically thick cloud (τ = 10), on the other hand, errors can range from 10 to 60% for large solar zenith angles (≥60 deg) because of enhanced Rayleigh scattering. It is therefore particularly important to correct for Rayleigh scattering contributions to the reflected signal from a cloud layer (1) for thin clouds and (2) for large solar zenith angles and all clouds. On the basis of the single-scattering approximation, we propose an iterative method for effectively removing Rayleigh scattering contributions from the measured radiance signal in cloud optical thickness retrievals. The proposed correction algorithm works very well and can easily be incorporated into any cloud retrieval algorithm. The Rayleigh correction method is applicable to a cloud at any pressure, provided that the cloud top pressure is known to within +/- 100 hPa. With the Rayleigh correction, the errors in retrieved cloud optical thickness are usually reduced to within 3%. In cases of both thin cloud layers and thick clouds with large solar zenith angles, the errors are usually reduced by a factor of about 2 to over 10. The Rayleigh correction algorithm has been tested with simulations for realistic cloud optical and microphysical properties with different solar and viewing geometries. We apply the Rayleigh correction algorithm to the cloud optical thickness retrievals from experimental data obtained during the Atlantic Stratocumulus Transition Experiment (ASTEX), conducted near the Azores in June 1992, and compare these results to corresponding retrievals obtained using 0.88 µm. These results provide an example of the Rayleigh scattering effects on thin clouds and further test the Rayleigh correction scheme. Using a non-absorbing near-infrared wavelength (0.88 µm) for retrieving cloud optical thickness is only applicable over oceans, however, since most land surfaces are highly reflective at 0.88 µm.

  17. Scattering properties of ultrafast laser-induced refractive index shaping lenticular structures in hydrogels

    NASA Astrophysics Data System (ADS)

    Wozniak, Kaitlin T.; Germer, Thomas A.; Butler, Sam C.; Brooks, Daniel R.; Huxlin, Krystel R.; Ellis, Jonathan D.

    2018-02-01

    We present measurements of light scatter induced by a new ultrafast laser technique being developed for laser refractive correction in transparent ophthalmic materials such as cornea, contact lenses, and/or intraocular lenses. In this new technique, called intra-tissue refractive index shaping (IRIS), a 405 nm femtosecond laser is focused and scanned below the corneal surface, inducing a spatially-varying refractive index change that corrects vision errors. In contrast with traditional laser correction techniques, such as laser in-situ keratomileusis (LASIK) or photorefractive keratectomy (PRK), IRIS does not operate via photoablation, but rather changes the refractive index of transparent materials such as cornea and hydrogels. A concern with any laser eye correction technique is additional scatter induced by the process, which can adversely affect vision, especially at night. The goal of this investigation is to identify sources of scatter induced by IRIS and to mitigate possible effects on visual performance in ophthalmic applications. Preliminary light scattering measurements on patterns written into hydrogel showed four sources of scatter, differentiated by distinct behaviors: (1) scattering from scanned lines; (2) scattering from stitching errors, resulting from adjacent scanning fields not being aligned to one another; (3) diffraction from Fresnel zone discontinuities; and (4) long-period variations in the scans that created distinct diffraction peaks, likely due to inconsistent line spacing in the writing instrument. By knowing the nature of these different scattering errors, it will now be possible to modify and optimize the design of IRIS structures to mitigate potential deficits in visual performance in human clinical trials.

  18. Rayleigh, Compton and K-shell radiative resonant Raman scattering in 83Bi for 88.034 keV γ-rays

    NASA Astrophysics Data System (ADS)

    Kumar, Sanjeev; Sharma, Veena; Mehta, D.; Singh, Nirmal

    2007-11-01

    The Rayleigh, Compton and K-shell radiative resonant Raman scattering cross-sections for 88.034 keV γ-rays have been measured in 83Bi (K-shell binding energy = 90.526 keV). The measurements were performed at a 130° scattering angle using a reflection-mode geometrical arrangement with the 109Cd radioisotope as photon source and an LEGe detector. Computer simulations were performed to determine the distributions of the incident and emission angles, which were then used to evaluate the absorption corrections for the incident and emitted photons in the target. The measured cross-sections for Rayleigh scattering are compared with the modified form-factors (MFs) corrected for the anomalous-scattering factors (ASFs) and with S-matrix calculations; those for Compton scattering are compared with the Klein-Nishina cross-sections corrected for the non-relativistic Hartree-Fock incoherent scattering function S(x, Z). The ratios of the measured KL2, KL3, KM and KN2,3 radiative resonant Raman scattering cross-sections are found to be in general agreement with those of the corresponding measured fluorescence transition probabilities.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prior, P; Timmins, R; Wells, R G

    Dual isotope SPECT allows simultaneous measurement of two different tracers in vivo. With In111 (emission energies of 171 keV and 245 keV) and Tc99m (140 keV), quantification of Tc99m is degraded by cross talk from the In111 photons that scatter and are detected at an energy corresponding to Tc99m. TEW uses counts recorded in two narrow windows surrounding the Tc99m primary window to estimate scatter. Iterative TEW corrects for the bias introduced into the TEW estimate resulting from un-scattered counts detected in the scatter windows. The contamination in the scatter windows is iteratively estimated and subtracted as a fraction of the scatter-corrected primary window counts. The iterative TEW approach was validated with a small-animal SPECT/CT camera using a 2.5 mL plastic container holding thoroughly mixed Tc99m/In111 activity fractions of 0.15, 0.28, 0.52, 0.99, 2.47 and 6.90. Dose calibrator measurements were the gold standard. Uncorrected for scatter, the Tc99m activity was over-estimated by as much as 80%. Unmodified TEW underestimated the Tc99m activity by 13%. With iterative TEW corrections applied in projection space, the Tc99m activity was estimated within 5% of truth across all activity fractions above 0.15. This is an improvement over the non-iterative TEW, which could not sufficiently correct for scatter in the 0.15 and 0.28 phantoms.
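    The iterative TEW idea can be sketched for a single pixel: estimate scatter with the usual trapezoidal TEW formula, subtract an assumed fraction of the scatter-corrected primary counts from each narrow scatter window, and repeat. The window widths and spill fraction below are illustrative assumptions, not the calibration used in the study.

```python
# Sketch of iterative TEW for one pixel. The spill_fraction (unscattered counts per
# scatter window, as a fraction of the primary counts) and the window widths are
# assumed values for illustration.
def tew_scatter(c_low, c_high, w_scatter, w_peak):
    """Standard triple-energy-window trapezoidal scatter estimate."""
    return (c_low / w_scatter + c_high / w_scatter) * w_peak / 2.0

def iterative_tew(c_peak, c_low, c_high, w_scatter=4.0, w_peak=28.0,
                  spill_fraction=0.02, n_iter=5):
    """Return the scatter-corrected primary counts in the photopeak window."""
    scatter = tew_scatter(c_low, c_high, w_scatter, w_peak)
    for _ in range(n_iter):
        primary = max(c_peak - scatter, 0.0)
        low = max(c_low - spill_fraction * primary, 0.0)    # remove spilled primary
        high = max(c_high - spill_fraction * primary, 0.0)
        scatter = tew_scatter(low, high, w_scatter, w_peak)
    return max(c_peak - scatter, 0.0)

# Toy pixel: ~1000 true primary and ~400 scatter counts in the photopeak window, with a
# little spilled primary in each scatter window; the estimate converges near 1000.
print("estimated primary:", round(iterative_tew(c_peak=1400.0, c_low=90.0, c_high=64.0), 1))
```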

  20. NADH-fluorescence scattering correction for absolute concentration determination in a liquid tissue phantom using a novel multispectral magnetic-resonance-imaging-compatible needle probe

    NASA Astrophysics Data System (ADS)

    Braun, Frank; Schalk, Robert; Heintz, Annabell; Feike, Patrick; Firmowski, Sebastian; Beuermann, Thomas; Methner, Frank-Jürgen; Kränzlin, Bettina; Gretz, Norbert; Rädle, Matthias

    2017-07-01

    In this report, a quantitative nicotinamide adenine dinucleotide hydrate (NADH) fluorescence measurement algorithm in a liquid tissue phantom using a fiber-optic needle probe is presented. To determine the absolute concentrations of NADH in this phantom, the fluorescence emission spectra at 465 nm were corrected using diffuse reflectance spectroscopy between 600 nm and 940 nm. The patented autoclavable Nitinol needle probe enables the acquisition of multispectral backscattering measurements of ultraviolet, visible, near-infrared and fluorescence spectra. As a phantom, a suspension of calcium carbonate (Calcilit) and water with physiological NADH concentrations between 0 mmol l-1 and 2.0 mmol l-1 was used to mimic human tissue. The light scattering characteristics were adjusted to match the backscattering attributes of human skin by modifying the concentration of Calcilit. To correct for the scattering effects caused by the matrices of the samples, an algorithm based on the backscattered remission spectrum was employed to compensate for the influence of multiple scattering on the optical pathway through the dispersed phase. The monitored backscattered visible light was used to correct the fluorescence spectra and thereby to determine the true NADH concentrations at unknown Calcilit concentrations. Despite the simplicity of the presented algorithm, the root-mean-square error of prediction (RMSEP) was 0.093 mmol l-1.

  1. Does Your Optical Particle Counter Measure What You Think it Does? Calibration and Refractive Index Correction Methods.

    NASA Astrophysics Data System (ADS)

    Rosenberg, Phil; Dean, Angela; Williams, Paul; Dorsey, James; Minikin, Andreas; Pickering, Martyn; Petzold, Andreas

    2013-04-01

    Optical Particle Counters (OPCs) are the de-facto standard for in-situ measurements of airborne aerosol size distributions and small cloud particles over a wide size range. This is particularly the case on airborne platforms where fast response is important. OPCs measure scattered light from individual particles and generally bin particles according to the measured peak amount of light scattered (the OPC's response). Most manufacturers provide a table with their instrument that indicates the particle diameters representing the edges of each bin. It is important to correct the particle size reported by OPCs for the refractive index of the particles being measured, which is often not the same as for those used during calibration. However, the OPC's response is not a monotonic function of particle diameter, and problems occur when refractive index corrections are attempted and multiple diameters correspond to the same OPC response. Here we recommend that OPCs be calibrated in terms of particle scattering cross section, as this is a monotonic (usually linear) function of an OPC's response. We present a method for converting a bin's boundaries in terms of scattering cross section into a bin centre and bin width in terms of diameter for any aerosol species for which the scattering properties are known. The relationship between diameter and scattering cross section can be arbitrarily complex and does not need to be monotonic; it can be based on Mie-Lorenz theory or any other scattering theory. Software has been provided on the Sourceforge open source repository for scientific users to implement such methods in their own measurement and calibration routines. As a case study, data are presented from a Passive Cavity Aerosol Spectrometer Probe (PCASP) and a Cloud Droplet Probe (CDP), calibrated using polystyrene latex spheres and glass beads before being deployed as part of the Fennec project to measure airborne dust in the inaccessible regions of the Sahara.
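    The recommended calibration workflow, expressing bin boundaries as scattering cross sections and converting them to diameters only once the particle's optical properties are known, can be sketched as follows. The stand-in cross-section function and the simple centre/width definition are assumptions for illustration; the authors' software on Sourceforge implements the full method.

```python
# Sketch: convert an OPC bin defined by two scattering-cross-section boundaries into a
# diameter bin centre and width, given a user-supplied cross-section model (e.g. a Mie
# calculation). The fake_mie function and centre/width definitions are illustrative.
import numpy as np

def diameter_bin_from_cross_section(sigma_lo, sigma_hi, cross_section,
                                    d_grid=np.linspace(0.1, 10.0, 20000)):
    """Collect all diameters whose cross section falls inside [sigma_lo, sigma_hi);
    this also handles non-monotonic response curves."""
    sigma = cross_section(d_grid)
    d_in = d_grid[(sigma >= sigma_lo) & (sigma < sigma_hi)]
    if d_in.size == 0:
        return None
    return d_in.mean(), d_in.max() - d_in.min()   # (bin centre, bin width) in diameter

# Hypothetical stand-in for a Mie calculation (cross section vs diameter, arbitrary units).
fake_mie = lambda d: d ** 2 * (1 + 0.2 * np.sin(3 * d))

print(diameter_bin_from_cross_section(4.0, 9.0, fake_mie))
```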

  2. GATE Simulations of Small Animal SPECT for Determination of Scatter Fraction as a Function of Object Size

    NASA Astrophysics Data System (ADS)

    Konik, Arda; Madsen, Mark T.; Sunderland, John J.

    2012-10-01

    In human emission tomography, combined PET/CT and SPECT/CT cameras provide accurate attenuation maps for sophisticated scatter and attenuation corrections. Having proven their potential, these scanners are being adapted for small animal imaging using similar correction approaches. However, attenuation and scatter effects in small animal imaging are substantially less than in human imaging. Hence, the value of sophisticated corrections is not obvious for small animal imaging considering the additional cost and complexity of these methods. In this study, using the GATE Monte Carlo package, we simulated the Inveon small animal SPECT (single pinhole collimator) scanner to find the scatter fractions of various sizes of the NEMA-mouse (diameter: 2-5.5 cm, length: 7 cm), NEMA-rat (diameter: 3-5.5 cm, length: 15 cm) and MOBY (diameter: 2.1-5.5 cm, length: 3.5-9.1 cm) phantoms. The simulations were performed for three radionuclides commonly used in small animal SPECT studies: 99mTc (140 keV), 111In (171 keV 90% and 245 keV 94%) and 125I (effective 27.5 keV). For the MOBY phantoms, the total Compton scatter fractions ranged (over the range of phantom sizes) from 4-10% for 99mTc (126-154 keV), 7-16% for 111In (154-188 keV), 3-7% for 111In (220-270 keV) and 17-30% for 125I (15-45 keV), including the scatter contributions from the tungsten collimator, lead shield and air (inside and outside the camera heads). For the NEMA-rat phantoms, the scatter fractions ranged from 10-15% (99mTc), 17-23% (111In: 154-188 keV), 8-12% (111In: 220-270 keV) and 32-40% (125I). Our results suggest that energy window methods based solely on emission data are sufficient for all mouse and most rat studies for 99mTc and 111In. However, more sophisticated methods may be needed for 125I.

  3. Analytically based photon scatter modeling for a multipinhole cardiac SPECT camera.

    PubMed

    Pourmoghaddas, Amir; Wells, R Glenn

    2016-11-01

    Dedicated cardiac SPECT scanners have improved performance over standard gamma cameras allowing reductions in acquisition times and/or injected activity. One approach to improving performance has been to use pinhole collimators, but this can cause position-dependent variations in attenuation, sensitivity, and spatial resolution. CT attenuation correction (AC) and an accurate system model can compensate for many of these effects; however, scatter correction (SC) remains an outstanding issue. In addition, in cameras using cadmium-zinc-telluride-based detectors, a large portion of unscattered photons is detected with reduced energy (low-energy tail). Consequently, application of energy-based SC approaches in these cameras leads to a higher increase in noise than with standard cameras due to the subtraction of true counts detected in the low-energy tail. Model-based approaches with parallel-hole collimator systems accurately calculate scatter based on the physics of photon interactions in the patient and camera and generate lower-noise estimates of scatter than energy-based SC. In this study, the accuracy of a model-based SC method was assessed using physical phantom studies on the GE-Discovery NM530c and its performance was compared to a dual energy window (DEW)-SC method. The analytical photon distribution (APD) method was used to calculate the distribution of probabilities that emitted photons will scatter in the surrounding scattering medium and be subsequently detected. APD scatter calculations for 99mTc-SPECT (140 ± 14 keV) were validated with point-source measurements and with 15 anthropomorphic cardiac-torso phantom experiments with varying levels of extra-cardiac activity causing scatter inside the heart. The activity inserted into the myocardial compartment of the phantom was first measured using a dose calibrator. CT images were acquired on an Infinia Hawkeye (GE Healthcare) SPECT/CT and coregistered with emission data for AC. For comparison, DEW scatter projections (120 ± 6 keV) were also extracted from the acquired list-mode SPECT data. Either APD or DEW scatter projections were subtracted from corresponding 140 keV measured projections and then reconstructed with AC (APD-SC and DEW-SC). Quantitative accuracy of the activity measured in the heart for the APD-SC and DEW-SC images was assessed against dose calibrator measurements. The difference between modeled and acquired projections was measured as the root-mean-squared-error (RMSE). APD-modeled projections for a clinical cardiac study were also evaluated. APD-modeled projections showed good agreement with SPECT measurements and had reduced noise compared to DEW scatter estimates. APD-SC reduced the mean error in activity measurement compared to DEW-SC, and the reduction was statistically significant where the scatter fraction (SF) was large (mean SF = 28.5%, T-test p = 0.007). APD-SC reduced measurement uncertainties as well; however, the difference was not found to be statistically significant (F-test p > 0.5). RMSE comparisons showed that elevated levels of scatter did not significantly contribute to a change in RMSE (p > 0.2). Model-based APD scatter estimation is feasible for dedicated cardiac SPECT scanners with pinhole collimators. APD-SC images performed better than DEW-SC images and improved the accuracy of activity measurement in high-scatter scenarios.
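    The DEW comparison method used above reduces to a per-pixel subtraction of scaled lower-window counts; the APD model-based calculation itself is too involved for a short example. A minimal sketch, with an assumed scaling factor k:

```python
# Sketch of the dual-energy-window (DEW) scatter correction used as the comparison
# method: scatter in the photopeak window is approximated as a scaled copy of the
# counts in a lower, scatter-dominated window. The scale factor k is an assumed value.
import numpy as np

def dew_corrected_projection(peak_proj, lower_proj, k=0.5):
    """Subtract k * (lower-window counts) from the photopeak projection,
    clipping negatives that arise from noise."""
    return np.clip(peak_proj - k * lower_proj, 0.0, None)

peak = np.array([[1200.0, 900.0], [800.0, 1500.0]])    # photopeak-window counts
lower = np.array([[400.0, 260.0], [220.0, 520.0]])     # lower scatter-window counts
print(dew_corrected_projection(peak, lower))
```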

  4. Estimation of scattering object characteristics for image reconstruction using a nonzero background.

    PubMed

    Jin, Jing; Astheimer, Jeffrey; Waag, Robert

    2010-06-01

    Two methods are described to estimate the boundary of a 2-D penetrable object and the average sound speed in the object. One method is for circular objects centered in the coordinate system of the scattering observation. This method uses an orthogonal function expansion for the scattering. The other method is for noncircular, essentially convex objects. This method uses cross correlation to obtain time differences that determine a family of parabolas whose envelope is the boundary of the object. A curve-fitting method and a phase-based method are described to estimate and correct the offset of an uncentered radial or elliptical object. A method based on the extinction theorem is described to estimate absorption in the object. The methods are applied to calculated scattering from a circular object with an offset and to measured scattering from an offset noncircular object. The results show that the estimated boundaries, sound speeds, and absorption slopes agree very well with independently measured or true values when the assumptions of the methods are reasonably satisfied.

  5. Higher Order Heavy Quark Corrections to Deep-Inelastic Scattering

    NASA Astrophysics Data System (ADS)

    Blümlein, Johannes; DeFreitas, Abilio; Schneider, Carsten

    2015-04-01

    The 3-loop heavy flavor corrections to deep-inelastic scattering are essential for consistent next-to-next-to-leading order QCD analyses. We report on the present status of the calculation of these corrections at large virtualities Q2. We also describe a series of mathematical, computer-algebraic and combinatorial methods and special function spaces, needed to perform these calculations. Finally, we briefly discuss the status of measuring αs (MZ), the charm quark mass mc, and the parton distribution functions at next-to-next-to-leading order from the world precision data on deep-inelastic scattering.

  6. Scatter correction method for x-ray CT using primary modulation: Phantom studies

    PubMed Central

    Gao, Hewei; Fahrig, Rebecca; Bennett, N. Robert; Sun, Mingshan; Star-Lack, Josh; Zhu, Lei

    2010-01-01

    Purpose: Scatter correction is a major challenge in x-ray imaging using large area detectors. Recently, the authors proposed a promising scatter correction method for x-ray computed tomography (CT) using primary modulation. Proof of concept was previously illustrated by Monte Carlo simulations and physical experiments on a small phantom with a simple geometry. In this work, the authors provide a quantitative evaluation of the primary modulation technique and demonstrate its performance in applications where scatter correction is more challenging. Methods: The authors first analyze the potential errors of the estimated scatter in the primary modulation method. On two tabletop CT systems, the method is investigated using three phantoms: a Catphan©600 phantom, an anthropomorphic chest phantom, and the Catphan©600 phantom with two annuli. Two different primary modulators are also designed to show the impact of the modulator parameters on the scatter correction efficiency. The first is an aluminum modulator with a weak modulation and a low modulation frequency, and the second is a copper modulator with a strong modulation and a high modulation frequency. Results: On the Catphan©600 phantom in the first study, the method reduces the error of the CT number in the selected regions of interest (ROIs) from 371.4 to 21.9 Hounsfield units (HU); the contrast-to-noise ratio also increases from 10.9 to 19.2. On the anthropomorphic chest phantom in the second study, which represents a more difficult case due to the high scatter signals and object heterogeneity, the method reduces the error of the CT number from 327 to 19 HU in the selected ROIs and from 31.4% to 5.7% on the overall average. The third study investigates the impact of object size on the efficiency of our method. The scatter-to-primary ratio estimation error on the Catphan©600 phantom without any annulus (20 cm in diameter) is at the level of 0.04; it rises to 0.07 and 0.1 on the phantom with an elliptical annulus (30 cm in the minor axis and 38 cm in the major axis) and with a circular annulus (38 cm in diameter). Conclusions: In the three phantom studies, good scatter correction performance of the proposed method has been demonstrated using both image comparisons and quantitative analysis. The theory and experiments demonstrate that a strong primary modulation that possesses a low transmission factor and a high modulation frequency is preferred for high scatter correction accuracy. PMID:20229902

  7. An efficient Monte Carlo-based algorithm for scatter correction in keV cone-beam CT

    NASA Astrophysics Data System (ADS)

    Poludniowski, G.; Evans, P. M.; Hansen, V. N.; Webb, S.

    2009-06-01

    A new method is proposed for scatter-correction of cone-beam CT images. A coarse reconstruction is used in initial iteration steps. Modelling of the x-ray tube spectra and detector response are included in the algorithm. Photon diffusion inside the imaging subject is calculated using the Monte Carlo method. Photon scoring at the detector is calculated using forced detection to a fixed set of node points. The scatter profiles are then obtained by linear interpolation. The algorithm is referred to as the coarse reconstruction and fixed detection (CRFD) technique. Scatter predictions are quantitatively validated against a widely used general-purpose Monte Carlo code: BEAMnrc/EGSnrc (NRCC, Canada). Agreement is excellent. The CRFD algorithm was applied to projection data acquired with a Synergy XVI CBCT unit (Elekta Limited, Crawley, UK), using RANDO and Catphan phantoms (The Phantom Laboratory, Salem NY, USA). The algorithm was shown to be effective in removing scatter-induced artefacts from CBCT images, and took as little as 2 min on a desktop PC. Image uniformity was greatly improved as was CT-number accuracy in reconstructions. This latter improvement was less marked where the expected CT-number of a material was very different to the background material in which it was embedded.

  8. Three-dimensional surface profile intensity correction for spatially modulated imaging

    NASA Astrophysics Data System (ADS)

    Gioux, Sylvain; Mazhar, Amaan; Cuccia, David J.; Durkin, Anthony J.; Tromberg, Bruce J.; Frangioni, John V.

    2009-05-01

    We describe a noncontact profile correction technique for quantitative, wide-field optical measurement of tissue absorption (μa) and reduced scattering (μs') coefficients, based on geometric correction of the sample's Lambertian (diffuse) reflectance intensity. Because the projection of structured light onto an object is the basis for both phase-shifting profilometry and modulated imaging, we were able to develop a single instrument capable of performing both techniques. In so doing, the surface of the three-dimensional object could be acquired and used to extract the object's optical properties. The optical properties of flat polydimethylsiloxane (silicone) phantoms with homogenous tissue-like optical properties were extracted, with and without profilometry correction, after vertical translation and tilting of the phantoms at various angles. Objects having a complex shape, including a hemispheric silicone phantom and human fingers, were acquired and similarly processed, with vascular constriction of a finger being readily detectable through changes in its optical properties. Using profilometry correction, the accuracy of extracted absorption and reduced scattering coefficients improved from two- to ten-fold for surfaces having height variations as much as 3 cm and tilt angles as high as 40 deg. These data lay the foundation for employing structured light for quantitative imaging during surgery.

  9. UAV remote sensing atmospheric degradation image restoration based on multiple scattering APSF estimation

    NASA Astrophysics Data System (ADS)

    Qiu, Xiang; Dai, Ming; Yin, Chuan-li

    2017-09-01

    Unmanned aerial vehicle (UAV) remote sensing imaging is affected by bad weather, and the acquired images suffer from low contrast, complex texture, and blurring. In this paper, we propose a blind deconvolution model based on multiple-scattering atmosphere point spread function (APSF) estimation to recover the remote sensing image. Following Narasimhan's analytical theory, a new multiple-scattering restoration model is established based on the improved dichromatic model. The APSF blur kernel is then estimated using L0-norm sparse priors on the gradient and dark channel, and the original clear image is recovered by Wiener filtering using the fast Fourier transform. Compared with other state-of-the-art methods, the proposed method correctly estimates the blur kernel, effectively removes atmospheric degradation, preserves image detail, and improves the quality evaluation indexes.
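    The final restoration step, Wiener filtering with the estimated APSF, is standard and can be sketched directly; the blur-kernel estimation with L0 sparse priors is not reproduced. The Gaussian stand-in kernel and noise-to-signal ratio below are illustrative assumptions.

```python
# Wiener-filter restoration sketch: once a blur kernel (APSF) has been estimated, the
# clear image is recovered in the Fourier domain. Kernel and nsr are assumptions.
import numpy as np

def pad_to(kernel, shape):
    """Embed the small kernel in a zero array of the image size, centred so that
    np.fft.ifftshift puts its peak at index (0, 0) for circular convolution."""
    out = np.zeros(shape)
    ky, kx = kernel.shape
    y0, x0 = shape[0] // 2 - ky // 2, shape[1] // 2 - kx // 2
    out[y0:y0 + ky, x0:x0 + kx] = kernel
    return out

def wiener_deconvolve(blurred, kernel, nsr=1e-2):
    """Wiener filter: F_hat = conj(H) / (|H|^2 + nsr) * G."""
    H = np.fft.fft2(np.fft.ifftshift(pad_to(kernel, blurred.shape)))
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + nsr) * G))

# Toy demonstration: blur an image with a Gaussian stand-in for the APSF, then restore it.
y, x = np.mgrid[-7:8, -7:8]
apsf = np.exp(-(x ** 2 + y ** 2) / (2 * 2.0 ** 2))
apsf /= apsf.sum()
img = np.zeros((128, 128))
img[40:88, 60:68] = 1.0
H = np.fft.fft2(np.fft.ifftshift(pad_to(apsf, img.shape)))
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))
restored = wiener_deconvolve(blurred, apsf)
print("RMS error after restoration:", round(float(np.sqrt(np.mean((restored - img) ** 2))), 4))
```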

  10. Multiple scattering corrections to the Beer-Lambert law. 1: Open detector.

    PubMed

    Tam, W G; Zardecki, A

    1982-07-01

    Multiple scattering corrections to the Beer-Lambert law are analyzed by means of a rigorous small-angle solution to the radiative transfer equation. Transmission functions are derived for predicting the received radiant power, a directly measured quantity, in contrast to the spectral radiance appearing in the Beer-Lambert law. Numerical algorithms and results relating to the multiple scattering effects for laser propagation in fog, cloud, and rain are presented.
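    For context, the uncorrected Beer-Lambert baseline that these transmission functions improve upon is simply an exponential in the extinction-path product; the multiple-scattering correction terms from the small-angle radiative-transfer solution are not reproduced here.

```python
# The single-scattering Beer-Lambert baseline: transmitted fraction over a homogeneous
# path. The example numbers are illustrative.
import numpy as np

def beer_lambert_transmission(extinction_per_km, path_km):
    """T = exp(-extinction * path); received power = emitted power * T."""
    return np.exp(-extinction_per_km * path_km)

print(beer_lambert_transmission(extinction_per_km=3.0, path_km=0.5))   # ~0.223
```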

  11. Including Delbrück scattering in GEANT4

    NASA Astrophysics Data System (ADS)

    Omer, Mohamed; Hajima, Ryoichi

    2017-08-01

    Elastic scattering is a significant component of γ-ray interactions with matter, so Monte Carlo planning of experiments involving γ-ray measurements usually includes it. However, current simulation tools do not provide a complete picture of elastic scattering: most assume Rayleigh scattering is the primary contributor and neglect other elastic processes, such as nuclear Thomson and Delbrück scattering. Here, we develop a tabulation-based method to simulate elastic scattering in one of the most common open-source Monte Carlo simulation toolkits, GEANT4. We collectively include three processes: Rayleigh scattering, nuclear Thomson scattering, and Delbrück scattering. Our simulation uses differential cross sections based on the second-order scattering matrix rather than the current data, which are based on the form-factor approximation. Moreover, the superposition of these processes is carefully taken into account, reflecting the complex nature of the scattering amplitudes. The simulation covers an energy range of 0.01 MeV ≤ E ≤ 3 MeV and all elements with atomic numbers 1 ≤ Z ≤ 99. We validated the simulation by comparing differential cross sections measured in earlier experiments with those extracted from the simulations and find good agreement: differences between experiment and simulation at 2.754 MeV are 21% for uranium, 24% for lead, 3% for tantalum, and 8% for cerium. Coulomb corrections to the Delbrück amplitudes may account for the relatively large differences at higher Z.
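    A tabulation-based elastic-scattering model of this kind ultimately samples the scattering angle from tabulated differential cross sections. A generic sketch (not GEANT4 code, and with a made-up cross-section table) using inverse-transform sampling:

```python
# Generic tabulation-based sampling sketch: build p(theta) ~ dsigma/dOmega * sin(theta)
# from a table, integrate to a CDF, and invert it. The tabulated values are made up.
import numpy as np

def sample_angle(theta_deg, dsigma_domega, n_samples=5, rng=np.random.default_rng(2)):
    """Inverse-transform sampling of the scattering angle from a tabulated cross section."""
    theta = np.radians(theta_deg)
    pdf = dsigma_domega * np.sin(theta)
    cdf = np.cumsum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(theta))   # trapezoid rule
    cdf = np.concatenate([[0.0], cdf]) / cdf[-1]
    return np.degrees(np.interp(rng.random(n_samples), cdf, theta))

theta_table = np.linspace(1.0, 179.0, 90)                 # degrees
dsigma_table = 1.0 / (1.0 + (theta_table / 30.0) ** 2)    # illustrative, forward-peaked
print(np.round(sample_angle(theta_table, dsigma_table), 1))
```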

  12. X-ray coherent scattering tomography of textured material (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Zhu, Zheyuan; Pang, Shuo

    2017-05-01

    Small-angle X-ray scattering (SAXS) measures the signature of angle-dependent coherently scattered X-rays, which contains richer information on material composition and structure than conventional absorption-based computed tomography. SAXS-based image reconstruction of a two- or three-dimensional object using computed tomography, termed coherent scattering computed tomography (CSCT), enables the detection of a spatially resolved, material-specific isotropic scattering signature inside an extended object and provides improved contrast for medical diagnosis, security screening, and material characterization applications. However, traditional CSCT methods assume that materials are fine powders or amorphous and possess isotropic scattering profiles, which is not true for all materials. Anisotropic scatterers cannot be captured using conventional CSCT methods and result in reconstruction errors. To obtain correct information from the sample, we designed a new imaging strategy that incorporates an extra degree of detector motion into X-ray scattering tomography for the detection of anisotropically scattered photons from a series of two-dimensional intensity measurements. Using a table-top, narrow-band X-ray source and a panel detector, we demonstrate the anisotropic scattering profile captured from an extended object and the reconstruction of a three-dimensional object. For materials possessing a well-organized crystalline structure with certain symmetry, the scatter texture is more predictable. We also discuss compressive schemes and data acquisition implementations to improve collection efficiency and accelerate the imaging process.

  13. Method for measuring multiple scattering corrections between liquid scintillators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verbeke, J. M.; Glenn, A. M.; Keefer, G. J.

    2016-04-11

    In this study, a time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source, for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons multiple scattering. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.

  14. WE-EF-207-03: Design and Optimization of a CBCT Head Scanner for Detection of Acute Intracranial Hemorrhage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, J; Sisniega, A; Zbijewski, W

    Purpose: To design a dedicated x-ray cone-beam CT (CBCT) system suitable for deployment at the point-of-care and offering reliable detection of acute intracranial hemorrhage (ICH), traumatic brain injury (TBI), stroke, and other head and neck injuries. Methods: A comprehensive task-based image quality model was developed to guide system design and optimization of a prototype head scanner suitable for imaging of acute TBI and ICH. Previously reported models were expanded to include the effects of x-ray scatter correction necessary for detection of low contrast ICH and the contribution of bit depth (digitization noise) to imaging performance. Task-based detectability index provided the objective function for optimization of system geometry, x-ray source, detector type, anti-scatter grid, and technique at 10–25 mGy dose. Optimal characteristics were experimentally validated using a custom head phantom with 50 HU contrast ICH inserts imaged on a CBCT imaging bench allowing variation of system geometry, focal spot size, detector, grid selection, and x-ray technique. Results: The model guided selection of system geometry with a nominal source-detector distance of 1100 mm and optimal magnification of 1.50. Focal spot size ∼0.6 mm was sufficient for spatial resolution requirements in ICH detection. Imaging at 90 kVp yielded the best tradeoff between noise and contrast. The model provided quantitation of tradeoffs between flat-panel and CMOS detectors with respect to electronic noise, field of view, and readout speed required for imaging of ICH. An anti-scatter grid was shown to provide modest benefit in conjunction with post-acquisition scatter correction. Images of the head phantom demonstrate visualization of millimeter-scale simulated ICH. Conclusions: Performance consistent with acute TBI and ICH detection is feasible with model-based system design and robust artifact correction in a dedicated head CBCT system. Further improvements can be achieved with incorporation of model-based iterative reconstruction techniques also within the scope of the task-based optimization framework. David Foos and Xiaohui Wang are employees of Carestream Health.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blunden, P. G.; Melnitchouk, W.

    We examine the two-photon exchange corrections to elastic electron-nucleon scattering within a dispersive approach, including contributions from both nucleon and Δ intermediate states. The dispersive analysis avoids off-shell uncertainties inherent in traditional approaches based on direct evaluation of loop diagrams, and guarantees the correct unitary behavior in the high energy limit. Using empirical information on the electromagnetic nucleon elastic and NΔ transition form factors, we compute the two-photon exchange corrections both algebraically and numerically. Finally, results are compared with recent measurements of e+p to e-p cross section ratios from the CLAS, VEPP-3 and OLYMPUS experiments.

  16. Fringes in FTIR spectroscopy revisited: understanding and modelling fringes in infrared spectroscopy of thin films.

    PubMed

    Konevskikh, Tatiana; Ponossov, Arkadi; Blümel, Reinhold; Lukacs, Rozalia; Kohler, Achim

    2015-06-21

    The appearance of fringes in the infrared spectroscopy of thin films seriously hinders the interpretation of chemical bands because fringes change the relative peak heights of chemical spectral bands. Thus, for the correct interpretation of chemical absorption bands, physical properties need to be separated from chemical characteristics. In the paper at hand we revisit the theory of the scattering of infrared radiation at thin absorbing films. Although, in general, scattering and absorption are connected by a complex refractive index, we show that for the scattering of infrared radiation at thin biological films, fringes and chemical absorbance can in good approximation be treated as additive. We further introduce a model-based pre-processing technique for separating fringes from chemical absorbance by extended multiplicative signal correction (EMSC). The technique is validated by simulated and experimental FTIR spectra. It is further shown that EMSC, as opposed to other suggested filtering methods for the removal of fringes, does not remove information related to chemical absorption.
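    The additive treatment of fringes and chemical absorbance suggests a simple EMSC-style regression: fit the measured spectrum to a reference spectrum plus a low-order baseline and sine/cosine fringe terms, then keep only the chemical part. The fringe period and the exact set of model terms below are assumptions for illustration, not the authors' published model.

```python
# EMSC-style separation sketch (not the authors' exact model): regress the measured
# spectrum onto a reference spectrum, a polynomial baseline, and sine/cosine fringe
# terms of an assumed fringe period, then keep only the chemical contribution.
import numpy as np

def emsc_fringe_correction(wavenumber, measured, reference, fringe_period):
    """Least-squares separation: measured ~ g*reference + baseline + fringes."""
    t = (wavenumber - wavenumber.mean()) / np.ptp(wavenumber)
    phase = 2 * np.pi * wavenumber / fringe_period
    design = np.column_stack([reference, np.ones_like(t), t, t ** 2,
                              np.sin(phase), np.cos(phase)])
    coef, *_ = np.linalg.lstsq(design, measured, rcond=None)
    g = coef[0]
    chemical = measured - design[:, 1:] @ coef[1:]     # strip baseline + fringes
    return chemical / g                                # rescale to the reference level

nu = np.linspace(1000.0, 1800.0, 800)                  # wavenumbers (1/cm)
reference = np.exp(-0.5 * ((nu - 1450) / 40.0) ** 2)   # one chemical band
fringes = 0.05 * np.sin(2 * np.pi * nu / 110.0)        # assumed 110 1/cm fringe period
measured = 0.8 * reference + 0.02 + fringes
corrected = emsc_fringe_correction(nu, measured, reference, fringe_period=110.0)
print("max residual vs reference:", round(float(np.max(np.abs(corrected - reference))), 4))
```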

  17. Physically-Based Models for the Reflection, Transmission and Subsurface Scattering of Light by Smooth and Rough Surfaces, with Applications to Realistic Image Synthesis

    NASA Astrophysics Data System (ADS)

    He, Xiao Dong

    This thesis studies light scattering processes off rough surfaces. Analytic models for reflection, transmission and subsurface scattering of light are developed. The results are applicable to realistic image generation in computer graphics. The investigation focuses on the basic issue of how light is scattered locally by general surfaces which are neither diffuse nor specular; physical optics is employed to account for diffraction and interference, which play a crucial role in the scattering of light for most surfaces. The thesis presents: (1) A new reflectance model; (2) A new transmittance model; (3) A new subsurface scattering model. All of these models are physically based, depend only on physical parameters, apply to a wide range of materials and surface finishes and, more importantly, provide a smooth transition from diffuse-like to specular reflection as the wavelength and incidence angle are increased or the surface roughness is decreased. The reflectance and transmittance models are based on the Kirchhoff theory and the subsurface scattering model is based on energy transport theory. They are valid only for surfaces with shallow slopes. The thesis shows that predicted reflectance distributions given by the reflectance model compare favorably with experiment. The thesis also investigates and implements fast ways of computing the reflectance and transmittance models. Furthermore, the thesis demonstrates that a high level of realism in image generation can be achieved due to the physically correct treatment of the scattering processes by the reflectance model.

  18. Measuring aberrations in the rat brain by coherence-gated wavefront sensing using a Linnik interferometer

    PubMed Central

    Wang, Jinyu; Léger, Jean-François; Binding, Jonas; Boccara, A. Claude; Gigan, Sylvain; Bourdieu, Laurent

    2012-01-01

    Aberrations limit the resolution, signal intensity and achievable imaging depth in microscopy. Coherence-gated wavefront sensing (CGWS) allows the fast measurement of aberrations in scattering samples and therefore the implementation of adaptive corrections. However, CGWS has been demonstrated so far only in weakly scattering samples. We designed a new CGWS scheme based on a Linnik interferometer and a SLED light source, which is able to compensate dispersion automatically and can be implemented on any microscope. In the highly scattering rat brain tissue, where multiply scattered photons falling within the temporal gate of the CGWS can no longer be neglected, we have measured known defocus and spherical aberrations up to a depth of 400 µm. PMID:23082292

  19. Measuring aberrations in the rat brain by coherence-gated wavefront sensing using a Linnik interferometer.

    PubMed

    Wang, Jinyu; Léger, Jean-François; Binding, Jonas; Boccara, A Claude; Gigan, Sylvain; Bourdieu, Laurent

    2012-10-01

    Aberrations limit the resolution, signal intensity and achievable imaging depth in microscopy. Coherence-gated wavefront sensing (CGWS) allows the fast measurement of aberrations in scattering samples and therefore the implementation of adaptive corrections. However, CGWS has been demonstrated so far only in weakly scattering samples. We designed a new CGWS scheme based on a Linnik interferometer and a SLED light source, which is able to compensate dispersion automatically and can be implemented on any microscope. In the highly scattering rat brain tissue, where multiply scattered photons falling within the temporal gate of the CGWS can no longer be neglected, we have measured known defocus and spherical aberrations up to a depth of 400 µm.

  20. Characterization of Scattered X-Ray Photons in Dental Cone-Beam Computed Tomography.

    PubMed

    Yang, Ching-Ching

    2016-01-01

    Scatter is a major cause of artifacts in dental cone-beam CT (CBCT) and strongly influences the detectability of details within images. This work aimed to improve the image quality of dental CBCT through scatter correction. Scatter was estimated in the projection domain from the low-frequency component of the difference between the raw CBCT projection and the projection obtained by extrapolating the model fitted to the raw projections acquired with 2 different sizes of axial field-of-view (FOV). The function for curve fitting was optimized by using Monte Carlo simulation. To validate the proposed method, an anthropomorphic phantom and a water-filled cylindrical phantom with rod inserts simulating different tissue materials were scanned using 120 kVp, 5 mA and a 9-second scanning time covering axial FOVs of 4 cm and 13 cm. The detectability of the CT image was evaluated by calculating the contrast-to-noise ratio (CNR). Beam hardening and cupping artifacts were observed in CBCT images without scatter correction, especially in those acquired with the 13 cm FOV. These artifacts were reduced in CBCT images corrected by the proposed method, demonstrating its efficacy for scatter correction. After scatter correction, the image quality of CBCT was improved in terms of target detectability, which was quantified as the CNR for rod inserts in the cylindrical phantom. The calculations performed in this work can provide a route to a high level of diagnostic image quality for CBCT imaging of oral and maxillofacial structures while keeping patient dose as low as reasonably achievable, which may ultimately make CBCT scanning a reliable and safe tool in clinical practice.
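
    A heavily simplified sketch of the projection-domain idea described above is shown here: the low-frequency component of the difference between a raw projection and a scatter-poor model projection (in the paper, extrapolated from a fit to the two-FOV acquisitions, with the fitting function optimized by Monte Carlo simulation) is taken as the scatter estimate and subtracted. The Gaussian low-pass filter and its width are stand-in assumptions, not the authors' fitted model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scatter_corrected_projection(proj_raw, proj_model, sigma_px=20.0):
    """Estimate scatter as the low-frequency part of (raw - model) and
    subtract it from the raw projection. proj_model stands in for the
    scatter-poor projection extrapolated from the small-FOV fit."""
    scatter = gaussian_filter(proj_raw - proj_model, sigma=sigma_px)
    scatter = np.clip(scatter, 0.0, None)           # scatter is non-negative
    return np.clip(proj_raw - scatter, 1e-6, None)  # keep corrected signal positive
```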

  1. Titan's Surface Composition from Cassini VIMS Solar Occultation Observations

    NASA Astrophysics Data System (ADS)

    McCord, Thomas; Hayne, Paul; Sotin, Christophe

    2013-04-01

    Titan's surface is obscured by a thick absorbing and scattering atmosphere, allowing direct observation of the surface within only a few spectral windows in the near-infrared and complicating efforts to identify and map geologically important materials using remote sensing IR spectroscopy. We therefore investigate the atmosphere's infrared transmission with direct measurements using Titan's occultation of the Sun as well as Titan's reflectance measured at differing illumination and observation angles observed by Cassini's Visual and Infrared Mapping Spectrometer (VIMS). We use two important spectral windows: the 2.7-2.8-µm "double window" and the broad 5-µm window. By estimating atmospheric attenuation within these windows, we seek an empirical correction factor that can be applied to VIMS measurements to estimate the true surface reflectance and map inferred compositional variations. Applying the empirical corrections, we correct the VIMS data for the viewing geometry-dependent atmospheric effects to derive the 5-µm reflectance and the 2.8/2.7-µm reflectance ratio. We then compare the corrected reflectances to compounds proposed to exist on Titan's surface. We propose a simple correction to VIMS Titan data to account for atmospheric attenuation and diffuse scattering in the 5-µm and 2.7-2.8-µm windows, generally applicable for airmass < 3.0. The narrow 2.75-µm absorption feature, dividing the window into two sub-windows, present in all on-planet measurements is not present in the occultation data, and its strength is reduced at the cloud tops, suggesting the responsible molecule is concentrated in the lower troposphere or on the surface. Our empirical correction to Titan's surface reflectance yields properties shifted closer to water ice for the majority of the low-to-mid latitude area covered by VIMS measurements. Four compositional units are defined and mapped on Titan's surface based on the positions of data clusters in 5-µm vs. 2.8/2.7-µm scatter plots; a simple ternary mixture of H2O, hydrocarbons and CO2 might explain the reflectance properties of these surface units. The vast equatorial "dune seas" are compositionally very homogeneous, perhaps suggesting transport and mixing of particles over very large distances and/or a very consistent formation process and source material. The compositional branch characterizing Tui Regio and Hotei Regio is consistent with a mixture of typical Titan hydrocarbons and CO2, or possibly methane/ethane; the proposed concentration mechanism is something similar to a terrestrial playa lake evaporite deposit, based on the fact that river channels are known to feed into at least Hotei Regio.

  2. Conjugate adaptive optics with remote focusing in multiphoton microscopy

    NASA Astrophysics Data System (ADS)

    Tao, Xiaodong; Lam, Tuwin; Zhu, Bingzhao; Li, Qinggele; Reinig, Marc R.; Kubby, Joel

    2018-02-01

    The small correction volume for conventional wavefront shaping methods limits their application in biological imaging through scattering media. In this paper, we take advantage of conjugate adaptive optics (CAO) and remote focusing (CAORF) to achieve three-dimensional (3D) scanning through a scattering layer with a single correction. Our results show that the proposed system can provide 10 times wider axial field of view compared with a conventional conjugate AO system when 16,384 segments are used on a spatial light modulator. We demonstrate two-photon imaging with CAORF through mouse skull. The fluorescent microspheres embedded under the scattering layers can be clearly observed after applying the correction.

  3. Vibronic coupling simulations for linear and nonlinear optical processes: Simulation results

    NASA Astrophysics Data System (ADS)

    Silverstein, Daniel W.; Jensen, Lasse

    2012-02-01

    A vibronic coupling model based on a time-dependent wavepacket approach is applied to simulate linear optical processes, such as one-photon absorbance and resonance Raman scattering, and nonlinear optical processes, such as two-photon absorbance and resonance hyper-Raman scattering, for a series of small molecules. Simulations employing both the long-range corrected approach in density functional theory and coupled cluster theory are compared and also examined against available experimental data. Although many of the small molecules are prone to anharmonicity in their potential energy surfaces, the harmonic approach performs adequately. Non-Condon effects are discussed in detail for the molecules presented in this work. Linear and nonlinear Raman scattering simulations allow for the quantification of interference between the Franck-Condon and Herzberg-Teller terms for different molecules.

  4. Interplay of threshold resummation and hadron mass corrections in deep inelastic processes

    DOE PAGES

    Accardi, Alberto; Anderle, Daniele P.; Ringer, Felix

    2015-02-01

    We discuss hadron mass corrections and threshold resummation for deep-inelastic scattering ℓN → ℓ′X and semi-inclusive annihilation e⁺e⁻ → hX processes, and provide a prescription for consistently combining these two corrections while respecting all kinematic thresholds. We find an interesting interplay between threshold resummation and target mass corrections for deep-inelastic scattering at large values of Bjorken x_B. In semi-inclusive annihilation, on the contrary, the two considered corrections are relevant in different kinematic regions and do not affect each other. A detailed analysis is nonetheless of interest in the light of recent high precision data from BaBar and Belle on pion and kaon production, with which we compare our calculations. For both deep-inelastic scattering and single-inclusive annihilation, the size of the combined corrections compared to the precision of world data is shown to be large. Therefore, we conclude that these theoretical corrections are relevant for global QCD fits in order to extract precise parton distributions at large Bjorken x_B, and fragmentation functions over the whole kinematic range.

  5. Correcting Velocity Dispersion Measurements for Inclination and Implications for the M-Sigma Relation

    NASA Astrophysics Data System (ADS)

    Bellovary, Jillian M.; Holley-Bockelmann, Kelly; Gultekin, Kayhan; Christensen, Charlotte; Governato, Fabio

    2015-01-01

    The relation between central black hole mass and stellar spheroid velocity dispersion (the M-Sigma relation) is one of the best-known correlations linking black holes and their host galaxies. However, there is a large amount of scatter at the low-mass end, indicating that the processes that relate black holes to lower-mass hosts are not straightforward. Some of this scatter can be explained by inclination effects; contamination from disk stars along the line of sight can artificially boost velocity dispersion measurements by 30%. Using state-of-the-art simulations, we have developed a correction factor for inclination effects based on purely observational quantities. We present the results of applying these factors to observed samples of galaxies and discuss the effects on the M-Sigma relation.

  6. Elastic electron scattering from the DNA bases cytosine and thymine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colyer, C. J.; Bellm, S. M.; Lohmann, B.

    2011-10-15

    Cross-section data for electron scattering from biologically relevant molecules are important for the modeling of energy deposition in living tissue. Relative elastic differential cross sections have been measured for cytosine and thymine using the crossed-beam method. These measurements have been performed for six discrete electron energies between 60 and 500 eV and for detection angles between 15° and 130°. Calculations have been performed via the screening-corrected additivity rule method and are in good agreement with the present experiment.

  7. Multigroup computation of the temperature-dependent Resonance Scattering Model (RSM) and its implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghrayeb, S. Z.; Ouisloumen, M.; Ougouag, A. M.

    2012-07-01

    A multi-group formulation for the exact neutron elastic scattering kernel is developed. This formulation is intended for implementation into a lattice physics code. The correct accounting for the crystal lattice effects influences the estimated values for the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. A computer program has been written to test the formulation for various nuclides. Results of the multi-group code have been verified against the correct analytic scattering kernel. In both cases neutrons were started at various energies and temperatures and the corresponding scattering kernels were tallied. (authors)

  8. Fully relativistic form factor for Thomson scattering.

    PubMed

    Palastro, J P; Ross, J S; Pollock, B; Divol, L; Froula, D H; Glenzer, S H

    2010-03-01

    We derive a fully relativistic form factor for Thomson scattering in unmagnetized plasmas valid to all orders in the normalized electron velocity β̄ = v̄/c. The form factor is compared to a previously derived expression in which only the lowest-order corrections in the electron velocity β̄ are included [J. Sheffield (Academic Press, New York, 1975)]. The β̄-expansion approach is sufficient for electrostatic waves with small phase velocities such as ion-acoustic waves, but for electron-plasma waves the phase velocities can be near luminal. At high phase velocities, the electron motion acquires relativistic corrections including the effective electron mass, relative motion of the electrons and electromagnetic wave, and polarization rotation. These relativistic corrections alter the scattered emission of thermal plasma waves, which manifest as changes in both the peak power and width of the observed Thomson-scattered spectra.

  9. Airborne Aerosol in Situ Measurements during TCAP: A Closure Study of Total Scattering

    DOE PAGES

    Kassianov, Evgueni I.; Berg, Larry K.; Pekour, Mikhail S.; ...

    2015-07-31

    We present here a framework for calculating the total scattering of both non-absorbing and absorbing aerosol at ambient conditions from aircraft data. The synergistically employed aircraft data involve aerosol microphysical, chemical, and optical components and ambient relative humidity measurements. Our framework is developed emphasizing the explicit use of the complementary chemical composition data for estimating the complex refractive index (RI) of particles, and thus obtaining improved ambient size spectra derived from Optical Particle Counter (OPC) measurements. The feasibility of our framework for improved calculations of total aerosol scattering is demonstrated for different ambient conditions with a wide range of relative humidities (from 5 to 80%) using three types of data collected by the U.S. Department of Energy (DOE) G-1 aircraft during the recent Two-Column Aerosol Project (TCAP). Namely, these three types of data are: (1) size distributions measured by an Ultra High Sensitivity Aerosol Spectrometer (UHSAS; 0.06-1 µm), a Passive Cavity Aerosol Spectrometer (PCASP; 0.1-3 µm) and a Cloud and Aerosol Spectrometer (CAS; 0.6 to >10 µm), (2) chemical composition data measured by an Aerosol Mass Spectrometer (AMS; 0.06-0.6 µm) and a Single Particle Soot Photometer (SP2; 0.06-0.6 µm), and (3) the dry total scattering coefficient measured by a TSI integrating nephelometer at three wavelengths (0.45, 0.55, 0.7 µm) and the scattering enhancement factor measured with a humidification system at three RHs (near 45%, 65% and 90%) at a single wavelength (0.525 µm). We demonstrate that good agreement (~10% on average) between the observed and calculated scattering at these three wavelengths can be obtained using the best available chemical composition data for the RI-based correction of the OPC-derived size spectra. We also demonstrate that ignoring the RI-based correction and using non-representative RI values can cause a substantial underestimation (~40% on average) and overestimation (~35% on average) of the calculated total scattering, respectively.

  10. Diaphragm correction factors for the FAC-IR-300 free-air ionization chamber.

    PubMed

    Mohammadi, Seyed Mostafa; Tavakoli-Anbaran, Hossein

    2018-02-01

    A free-air ionization chamber, FAC-IR-300, designed by the Atomic Energy Organization of Iran, is used as the primary Iranian national standard for the photon air kerma. For accurate air kerma measurements, the contribution from the scattered photons to the total energy released in the collecting volume must be eliminated. One of the sources of scattered photons is the chamber's diaphragm. In this paper, the diaphragm scattering correction factor, k_dia, and the diaphragm transmission correction factor, k_tr, were introduced. These factors represent corrections to the measured charge (or current) for the photons scattered from the diaphragm surface and the photons penetrating through the diaphragm volume, respectively. The k_dia and k_tr values were estimated by Monte Carlo simulations. The simulations were performed for mono-energetic photons in the energy range of 20-300 keV. According to the simulation results, in this energy range, the k_dia values vary between 0.9997 and 0.9948, and the k_tr values decrease from 1.0000 to 0.9965. The corrections grow in significance with increasing energy of the primary photons. Copyright © 2017 Elsevier Ltd. All rights reserved.
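
    As a small worked example of how these factors enter a measurement, the sketch below multiplies a measured charge by k_dia and k_tr interpolated between the two reported endpoint values. The linear interpolation over 20-300 keV is a crude stand-in for the full Monte Carlo tables, and a real standards-laboratory evaluation would also weight the factors over the beam spectrum.

```python
import numpy as np

# Endpoint values quoted in the abstract; intermediate behaviour is assumed linear here.
energy_kev = np.array([20.0, 300.0])
k_dia = np.array([0.9997, 0.9948])   # diaphragm-scatter correction
k_tr  = np.array([1.0000, 0.9965])   # diaphragm-transmission correction

def corrected_charge(q_measured, e_kev):
    """Apply both diaphragm correction factors to a measured charge at a
    given photon energy (keV), interpolating between the tabulated endpoints."""
    return q_measured * np.interp(e_kev, energy_kev, k_dia) \
                      * np.interp(e_kev, energy_kev, k_tr)

print(corrected_charge(1.0, 150.0))   # corrected charge for a nominal 1.0 unit reading
```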

  11. A new approach to correct for absorbing aerosols in OMI UV

    NASA Astrophysics Data System (ADS)

    Arola, A.; Kazadzis, S.; Lindfors, A.; Krotkov, N.; Kujanpää, J.; Tamminen, J.; Bais, A.; di Sarra, A.; Villaplana, J. M.; Brogniez, C.; Siani, A. M.; Janouch, M.; Weihs, P.; Webb, A.; Koskela, T.; Kouremeti, N.; Meloni, D.; Buchard, V.; Auriol, F.; Ialongo, I.; Staneck, M.; Simic, S.; Smedley, A.; Kinne, S.

    2009-11-01

    Several validation studies of surface UV irradiance based on the Ozone Monitoring Instrument (OMI) satellite data have shown a high correlation with ground-based measurements but a positive bias in many locations. The main part of the bias can be attributed to the boundary layer aerosol absorption that is not accounted for in the current satellite UV algorithms. To correct for this shortfall, a post-correction procedure was applied, based on global climatological fields of aerosol absorption optical depth. These fields were obtained by using global aerosol optical depth and aerosol single scattering albedo data assembled by combining global aerosol model data and ground-based aerosol measurements from AERONET. The resulting improvements in the satellite-based surface UV irradiance were evaluated by comparing satellite and ground-based spectral irradiances at various European UV monitoring sites. The results generally showed a significantly reduced bias by 5-20%, a lower variability, and an unchanged, high correlation coefficient.

  12. [Primary Study on Predicting the Termination of Paroxysmal Atrial Fibrillation Based on a Novel RdR RR Intervals Scatter Plot].

    PubMed

    Lu, Hongwei; Zhang, Chenxi; Sun, Ying; Hao, Zhidong; Wang, Chunfang; Tian, Jiajia

    2015-08-01

    Predicting the termination of paroxysmal atrial fibrillation (AF) may provide a signal for deciding whether timely intervention is needed. We propose a novel RdR RR-intervals scatter plot in this study. The abscissa of the RdR scatter plot was set to the RR intervals and the ordinate to the difference between successive RR intervals. The RdR scatter plot thus combines information on the RR intervals and on the differences between successive RR intervals, capturing more heart rate variability (HRV) information. By RdR scatter plot analysis of one minute of RR intervals for 50 segments with non-terminating AF and immediately terminating AF, it was found that the points in the RdR scatter plot of non-terminating AF were more scattered than those of immediately terminating AF. By dividing the RdR scatter plot into uniform grids and counting the number of non-empty grids, non-terminating AF and immediately terminating AF segments were differentiated. Using 49 RR intervals, 17 of 20 segments in the learning set and 20 of 30 segments in the test set were correctly detected. Using 66 RR intervals, 16 of 18 segments in the learning set and 20 of 28 segments in the test set were correctly detected. The results demonstrated that during the last minute before the termination of paroxysmal AF, the variance of the RR intervals and of the differences between neighboring RR intervals became smaller. The termination of paroxysmal AF could thus be predicted using the RdR scatter plot, although the prediction accuracy should be further improved.
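
    The grid-counting step described above can be sketched in a few lines: build the RdR point cloud from an RR series, bin it into uniform cells, and count the non-empty cells. The 25 ms cell size and the synthetic RR sequences below are assumptions for illustration; the abstract does not specify the grid dimensions.

```python
import numpy as np

def rdr_nonempty_cells(rr_ms, cell_ms=25.0):
    """Count non-empty cells of the RdR scatter plot (RR interval on the
    abscissa, difference to the next RR interval on the ordinate). A
    sparser plot (fewer non-empty cells) indicates lower variability."""
    rr = np.asarray(rr_ms, dtype=float)
    x = rr[:-1]                    # RR interval
    y = np.diff(rr)                # successive RR-interval difference
    cells = {(int(xi // cell_ms), int(yi // cell_ms)) for xi, yi in zip(x, y)}
    return len(cells)

# Hypothetical example: an irregular rhythm versus a nearly regular one
rng = np.random.default_rng(0)
print(rdr_nonempty_cells(600 + 120 * rng.standard_normal(66)))  # many cells
print(rdr_nonempty_cells(600 + 15 * rng.standard_normal(66)))   # few cells
```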

  13. Radiative corrections to elastic proton-electron scattering measured in coincidence

    NASA Astrophysics Data System (ADS)

    Gakh, G. I.; Konchatnij, M. I.; Merenkov, N. P.; Tomasi-Gustafsson, E.

    2017-05-01

    The differential cross section for elastic scattering of protons on electrons at rest is calculated, taking into account the QED radiative corrections to the leptonic part of the interaction. These model-independent radiative corrections arise due to emission of virtual and real soft and hard photons as well as to vacuum polarization. We analyze an experimental setup in which both final particles are recorded in coincidence and their energies are determined within some uncertainties. The kinematics, the cross section, and the radiative corrections are calculated and numerical results are presented.

  14. Reducing uncertainties associated with filter-based optical measurements of light absorbing carbon particles with chemical information

    NASA Astrophysics Data System (ADS)

    Engström, J. E.; Leck, C.

    2011-08-01

    The presented filter-based optical method for determination of soot (light-absorbing carbon or black carbon, BC) can be implemented in the field under primitive conditions and at low cost. This enables researchers with limited economic means to perform monitoring at remote locations, especially in Asia where it is much needed. One concern when applying filter-based optical measurements of BC is that they suffer from systematic errors due to the light scattering of non-absorbing particles co-deposited on the filter, such as inorganic salts and mineral dust. In addition to an optical correction for the non-absorbing material, this study provides a protocol for correcting light scattering based on chemical quantification of the material, which is a novelty. A newly designed photometer was implemented to measure light transmission on particle-accumulating filters, which includes an additional sensor recording backscattered light. The choice of polycarbonate membrane filters avoided high chemical blank values and reduced errors associated with the length of the light path through the filter. Two correction protocols were applied to aerosol samples collected at the Maldives Climate Observatory Hanimaadhoo during episodes with either continentally influenced air from the Indian/Arabian subcontinents (winter season) or pristine air from the Southern Indian Ocean (summer monsoon). The two ways of correction (optical and chemical) lowered the particle light absorption of BC by 63 and 61%, respectively, for data from the Arabian Sea sourced group, resulting in median BC absorption coefficients of 4.2 and 3.5 Mm⁻¹. Corresponding values for the South Indian Ocean data were 69 and 97% (0.38 and 0.02 Mm⁻¹). A comparison with other studies in the area indicated an overestimation of their BC levels, by up to two orders of magnitude. This raises the necessity for chemical correction protocols in optical filter-based determinations of BC, before even the sign of the radiative forcing based on their effects can be assessed.

  15. Post-PRK corneal scatter measurements with a scanning confocal slit photon counter

    NASA Astrophysics Data System (ADS)

    Taboada, John; Gaines, David; Perez, Mary A.; Waller, Steve G.; Ivan, Douglas J.; Baldwin, J. Bruce; LoRusso, Frank; Tutt, Ronald C.; Perez, Jose; Tredici, Thomas; Johnson, Dan A.

    2000-06-01

    Increased corneal light scatter or 'haze' has been associated with excimer laser photorefractive surgery of the cornea. The increased scatter can affect visual performance; however, topical steroid treatment post surgery substantially reduces the post-PRK scatter. For the treatment and monitoring of the scattering characteristics of the cornea, various methods have been developed to objectively measure the magnitude of the scatter. These methods generally can measure scatter associated with clinically observable levels of haze. For patients with moderate to low PRK corrections receiving steroid treatment, measurement becomes fairly difficult as the clinical haze rating is not observable. The goal of this development was to realize an objective, non-invasive physical measurement that could produce a significant reading for any level, including the background present in a normal cornea. As back-scatter is the only readily accessible observable, the instrument is based on this measurement. Achieving this end required the use of a confocal method to bias out the background light that would normally confound conventional methods. A number of subjects with nominal refractive errors in an Air Force study have undergone PRK surgery. A measurable increase in corneal scatter was observed in these subjects even though clinical ratings of the haze were noted as level zero. Other favorable aspects of this back-scatter based instrument include an optical capability to perform what is equivalent to an optical A-scan of the anterior chamber. Lens scatter can also be measured.

  16. SU-F-T-143: Implementation of a Correction-Based Output Model for a Compact Passively Scattered Proton Therapy System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferguson, S; Ahmad, S; Chen, Y

    2016-06-15

    Purpose: To commission and investigate the accuracy of an output (cGy/MU) prediction model for a compact passively scattered proton therapy system. Methods: A previously published output prediction model (Sahoo et al, Med Phys, 35, 5088-5097, 2008) was commissioned for our Mevion S250 proton therapy system. This correction-based model multiplies correction factors (d/MU_wnc = ROF × SOBPF × RSF × SOBPOCF × OCR × FSF × ISF). These factors account for changes in output due to options (12 large, 5 deep, and 7 small), modulation width M, range R, off-center, off-axis, field-size, and off-isocenter effects. In this study, the model was modified to ROF × SOBPF × RSF × OCR × FSF × ISF-OCF × GACF by merging SOBPOCF and ISF into ISF-OCF for simplicity and introducing a gantry angle correction factor (GACF). To commission the model, over 1,000 output data points were taken at the time of system commissioning. The output was predicted by interpolation (1D for SOBPF, FSF, and GACF; 2D for RSF and OCR) with an inverse-square calculation (ISF-OCF). The outputs of 273 combinations of R and M covering all 24 options were measured to test the model. To minimize fluence perturbation, scattered dose from the range compensator and patient was not considered. The percent differences between the predicted (P) and measured (M) outputs were calculated to test the prediction accuracy ([P−M]/M × 100%). Results: A GACF was required because of up to 3.5% output variation with gantry angle. A 2D interpolation was required for OCR because the dose distribution was not radially symmetric, especially for the deep options. The average percent difference was −0.03±0.98% (mean±SD) and the differences of all the measurements fell within ±3%. Conclusion: It is concluded that the model can be used clinically for the compact passively scattered proton therapy system. However, great care should be taken when the field size is less than 5×5 cm², where a direct output measurement is required due to the substantial output change caused by irregular block shapes.
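
    A stripped-down sketch of the correction-factor product is given below, keeping only a reference output factor, the SOBP factor (1D interpolation in M), the range-shifter factor (2D interpolation in R and M), and an inverse-square term, and omitting OCR, FSF, and GACF for brevity. All table values are invented for illustration; commissioning measurements would replace them.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical lookup tables (illustrative values only)
mod_widths = np.array([2.0, 6.0, 10.0])           # modulation width M (cm)
sobpf_tab  = np.array([1.08, 1.00, 0.94])         # SOBP factor vs. M
ranges     = np.array([5.0, 15.0, 25.0])          # range R (cm)
rsf_tab    = np.array([[1.02, 1.00, 0.97],        # range-shifter factor vs. (R, M)
                       [1.01, 1.00, 0.98],
                       [1.00, 1.00, 0.99]])
rsf_interp = RegularGridInterpolator((ranges, mod_widths), rsf_tab)

def predicted_output(rof, R, M, sad_ref=230.0, sad=230.0):
    """cGy/MU as a product of correction factors: reference output factor,
    interpolated SOBP and range-shifter factors, and an inverse-square
    term for off-isocenter setups."""
    sobpf = np.interp(M, mod_widths, sobpf_tab)   # 1D interpolation
    rsf = rsf_interp([[R, M]])[0]                 # 2D interpolation
    isf = (sad_ref / sad) ** 2                    # inverse-square factor
    return rof * sobpf * rsf * isf

print(predicted_output(rof=1.0, R=15.0, M=6.0, sad=235.0))
```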

  17. Characterization of scatter in digital mammography from use of Monte Carlo simulations and comparison to physical measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leon, Stephanie M., E-mail: Stephanie.Leon@uth.tmc.edu; Wagner, Louis K.; Brateman, Libby F.

    2014-11-01

    Purpose: Monte Carlo simulations were performed with the goal of verifying previously published physical measurements characterizing scatter as a function of apparent thickness. A secondary goal was to provide a way of determining what effect tissue glandularity might have on the scatter characteristics of breast tissue. The overall reason for characterizing mammography scatter in this research is the application of these data to an image processing-based scatter-correction program. Methods: MCNPX was used to simulate scatter from an infinitesimal pencil beam using typical mammography geometries and techniques. The spreading of the pencil beam was characterized by two parameters: mean radial extent (MRE) and scatter fraction (SF). The SF and MRE were found as functions of target, filter, tube potential, phantom thickness, and the presence or absence of a grid. The SF was determined by separating scatter and primary by the angle of incidence on the detector, then finding the ratio of the measured scatter to the total number of detected events. The accuracy of the MRE was determined by placing ring-shaped tallies around the impulse and fitting those data to the point-spread function (PSF) equation using the value for MRE derived from the physical measurements. The goodness-of-fit was determined for each data set as a means of assessing the accuracy of the physical MRE data. The effect of breast glandularity on the SF, MRE, and apparent tissue thickness was also considered for a limited number of techniques. Results: The agreement between the physical measurements and the results of the Monte Carlo simulations was assessed. With a grid, the SFs ranged from 0.065 to 0.089, with absolute differences between the measured and simulated SFs averaging 0.02. Without a grid, the range was 0.28–0.51, with absolute differences averaging −0.01. The goodness-of-fit values comparing the Monte Carlo data to the PSF from the physical measurements ranged from 0.96 to 1.00 with a grid and 0.65 to 0.86 without a grid. Analysis of the data suggested that the nongrid data could be better described by a biexponential function than the single exponential used here. The simulations assessing the effect of breast composition on SF and MRE showed only a slight impact on these quantities. When compared to a mix of 50% glandular/50% adipose tissue, the impact of substituting adipose or glandular breast compositions on the apparent thickness of the tissue was about 5%. Conclusions: The findings show agreement between the physical measurements published previously and the Monte Carlo simulations presented here; the resulting data can therefore be used more confidently for an application such as image processing-based scatter correction. The findings also suggest that breast composition does not have a major impact on the scatter characteristics of breast tissue. Application of the scatter data to the development of a scatter-correction software program can be simplified by ignoring the variations in density among breast tissues.

  18. Experimental testing of four correction algorithms for the forward scattering spectrometer probe

    NASA Technical Reports Server (NTRS)

    Hovenac, Edward A.; Oldenburg, John R.; Lock, James A.

    1992-01-01

    Three number density correction algorithms and one size distribution correction algorithm for the Forward Scattering Spectrometer Probe (FSSP) were compared with data taken by the Phase Doppler Particle Analyzer (PDPA) and an optical number density measuring instrument (NDMI). Of the three number density correction algorithms, the one that compared best to the PDPA and NDMI data was the algorithm developed by Baumgardner, Strapp, and Dye (1985). The algorithm that corrects sizing errors in the FSSP that was developed by Lock and Hovenac (1989) was shown to be within 25 percent of the Phase Doppler measurements at number densities as high as 3000/cc.

  19. An empirical model for polarized and cross-polarized scattering from a vegetation layer

    NASA Technical Reports Server (NTRS)

    Liu, H. L.; Fung, A. K.

    1988-01-01

    An empirical model for scattering from a vegetation layer above an irregular ground surface is developed in terms of the first-order solution for like-polarized scattering and the second-order solution for cross-polarized scattering. The effects of multiple scattering within the layer and at the surface-volume boundary are compensated by using a correction factor based on the matrix doubling method. The major feature of this model is that all parameters in the model are physical parameters of the vegetation medium. There are no regression parameters. Comparisons of this empirical model with the theoretical matrix-doubling method and radar measurements indicate good agreement in polarization, angular trends, and ka up to 4, where k is the wave number and a is the disk radius. The computational time is shortened by a factor of 8, relative to the theoretical model calculation.

  20. Quantum dynamical simulation of the scattering of Ar from a frozen LiF(100) surface based on a first principles interaction potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azuri, Asaf; Pollak, Eli, E-mail: eli.pollak@weizmann.ac.il

    2015-07-07

    In-plane two- and three-dimensional diffraction patterns are computed for the vertical scattering of an Ar atom from a frozen LiF(100) surface. Suitable collimation of the incoming wavepacket serves to reveal the quantum mechanical diffraction. The interaction potential is based on a fit to an ab initio potential calculated using density functional theory with dispersion corrections. Due to the potential coupling found between the two horizontal surface directions, there are noticeable differences between the quantum angular distributions computed for two- and three-dimensional scattering. The quantum results are compared to analogous classical Wigner computations on the same surface and with the same conditions. The classical dynamics largely provides the envelope for the quantum diffractive scattering. The classical results also show that the corrugation along the [110] direction of the surface is smaller than along the [100] direction, in qualitative agreement with experimental observations of unimodal and bimodal scattering for the [110] and [100] directions, respectively.

  1. SU-D-12A-07: Optimization of a Moving Blocker System for Cone-Beam Computed Tomography Scatter Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ouyang, L; Yan, H; Jia, X

    2014-06-01

    Purpose: A moving blocker based strategy has shown promising results for scatter correction in cone-beam computed tomography (CBCT). Different parameters of the system design affect its performance in scatter estimation and image reconstruction accuracy. The goal of this work is to optimize the geometric design of the moving blocker system. Methods: In the moving blocker system, a blocker consisting of lead strips is inserted between the x-ray source and the imaging object and moves back and forth along the rotation axis during CBCT acquisition. A CT image of an anthropomorphic pelvic phantom was used in the simulation study. The scatter signal was simulated by Monte Carlo calculation with various combinations of the lead strip width and the gap between neighboring lead strips, ranging from 4 mm to 80 mm (projected at the detector plane). The scatter signal in the unblocked region was estimated by cubic B-spline interpolation from the blocked region. Scatter estimation accuracy was quantified as the relative root mean squared error by comparing the interpolated scatter to the Monte Carlo simulated scatter. CBCT was reconstructed by total variation minimization from the unblocked region, under various combinations of the lead strip width and gap. Reconstruction accuracy in each condition was quantified by the CT number error relative to a CBCT reconstructed from unblocked full projection data. Results: The scatter estimation error varied from 0.5% to 2.6% as the lead strip width and the gap varied from 4 mm to 80 mm. The CT number error in the reconstructed CBCT images varied from 12 to 44. The highest reconstruction accuracy was achieved with a blocker lead strip width of 8 mm and a gap of 48 mm. Conclusions: Accurate scatter estimation can be achieved over a large range of combinations of lead strip width and gap. However, image reconstruction accuracy is greatly affected by the geometric design of the blocker.
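
    The interpolation step can be illustrated with a one-row sketch: scatter sampled in the blocked strips is interpolated across the unblocked gaps with a cubic spline, a simplified stand-in for the cubic B-spline interpolation used above. The strip width, gap, and synthetic scatter profile are arbitrary illustrative choices.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def interpolate_scatter_row(u, measured, blocked_mask):
    """Estimate scatter across one detector row: blocked columns sample the
    scatter directly, and a cubic spline through those samples fills in the
    unblocked columns."""
    spline = CubicSpline(u[blocked_mask], measured[blocked_mask], extrapolate=True)
    return spline(u)

# Hypothetical row: 8 mm strips every 56 mm (projected), smooth scatter field
u = np.arange(0.0, 400.0, 1.0)                     # detector column position (mm)
true_scatter = 50.0 + 20.0 * np.sin(u / 120.0)
blocked = (u % 56.0) < 8.0
estimate = interpolate_scatter_row(u, true_scatter, blocked)
print(np.max(np.abs(estimate - true_scatter)))     # small interpolation error
```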

  2. Effect of void shape in Czochralski-Si wafers on the intensity of laser-scattering

    NASA Astrophysics Data System (ADS)

    Takahashi, J.; Kawakami, K.; Nakai, K.

    2001-06-01

    The effect of the shape of anisotropic microvoid defects in Czochralski-grown silicon wafers on the intensity of laser scattering has been investigated. The size and shape of the defects were examined by means of transmission electron microscopy. Octahedral voids in conventional (nitrogen-undoped) wafers showed an almost isotropic scattering property under the incident condition of a p-polarization beam. On the other hand, parallelepiped-plate-shaped voids in nitrogen-doped wafers showed an anisotropic scattering property for both p- and s-polarized components of scattered light, depending strongly on the incident laser direction. The measured results were explained not by a scattering calculation using the Born approximation but by a calculation based on Rayleigh scattering. It was found that the s component is explained by an inclination of the dipole moment induced on a defect with respect to the scattering plane. Furthermore, numerical electromagnetic analysis showed that the asymmetric behavior of the s component for the parallelepiped-plate voids is ascribed to the parallelepiped shape effect. These results suggest that correction of the scattering intensity is necessary to evaluate the size and volume of anisotropic-shaped defects from the scattered intensity.

  3. Local blur analysis and phase error correction method for fringe projection profilometry systems.

    PubMed

    Rao, Li; Da, Feipeng

    2018-05-20

    We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of local blur phenomenon. Local blur caused by global light transport such as camera defocus, projector defocus, and subsurface scattering will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate the phase errors. For defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and local frontal plane assumption, experiments show that the proposed method can effectively alleviate the system errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.

  4. Scatter Correction with Combined Single-Scatter Simulation and Monte Carlo Simulation Scaling Improved the Visual Artifacts and Quantification in 3-Dimensional Brain PET/CT Imaging with 15O-Gas Inhalation.

    PubMed

    Magota, Keiichi; Shiga, Tohru; Asano, Yukari; Shinyama, Daiki; Ye, Jinghan; Perkins, Amy E; Maniawski, Piotr J; Toyonaga, Takuya; Kobayashi, Kentaro; Hirata, Kenji; Katoh, Chietsugu; Hattori, Naoya; Tamaki, Nagara

    2017-12-01

    In 3-dimensional PET/CT imaging of the brain with 15O-gas inhalation, high radioactivity in the face mask creates cold artifacts and affects the quantitative accuracy when scatter is corrected by conventional methods (e.g., single-scatter simulation [SSS] with tail-fitting scaling [TFS-SSS]). Here we examined the validity of a newly developed scatter-correction method that combines SSS with a scaling factor calculated by Monte Carlo simulation (MCS-SSS). Methods: We performed phantom experiments and patient studies. In the phantom experiments, a plastic bottle simulating a face mask was attached to a cylindric phantom simulating the brain. The cylindric phantom was filled with 18F-FDG solution (3.8-7.0 kBq/mL). The bottle was filled with nonradioactive air or various levels of 18F-FDG (0-170 kBq/mL). Images were corrected either by TFS-SSS or MCS-SSS using the CT data of the bottle filled with nonradioactive air. We compared the image activity concentration in the cylindric phantom with the true activity concentration. We also performed 15O-gas brain PET based on the steady-state method on patients with cerebrovascular disease to obtain quantitative images of cerebral blood flow and oxygen metabolism. Results: In the phantom experiments, a cold artifact was observed immediately next to the bottle on TFS-SSS images, where the image activity concentrations in the cylindric phantom were underestimated by 18%, 36%, and 70% at bottle radioactivity levels of 2.4, 5.1, and 9.7 kBq/mL, respectively. At higher bottle radioactivity, the image activity concentrations in the cylindric phantom were underestimated by more than 98%. For MCS-SSS, in contrast, the error was within 5% at each bottle radioactivity level, although the images showed slight high-activity artifacts around the bottle when the bottle contained very high radioactivity. In the patient imaging with 15O2 and C15O2 inhalation, cold artifacts were observed on TFS-SSS images, whereas no artifacts were observed on any of the MCS-SSS images. Conclusion: MCS-SSS accurately corrected the scatter in 15O-gas brain PET acquired in 3-dimensional mode, preventing the generation of the cold artifacts observed immediately next to a face mask on TFS-SSS images. The MCS-SSS method will contribute to accurate quantitative assessments. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  5. Hounsfield unit recovery in clinical cone beam CT images of the thorax acquired for image guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Slot Thing, Rune; Bernchou, Uffe; Mainegra-Hing, Ernesto; Hansen, Olfred; Brink, Carsten

    2016-08-01

    A comprehensive artefact correction method for clinical cone beam CT (CBCT) images acquired for image guided radiation therapy (IGRT) on a commercial system is presented. The method is demonstrated to reduce artefacts and recover CT-like Hounsfield units (HU) in reconstructed CBCT images of five lung cancer patients. Projection image based artefact corrections of image lag, detector scatter, body scatter and beam hardening are described and applied to CBCT images of five lung cancer patients. Image quality is evaluated through the visual appearance of the reconstructed images, HU correspondence with the planning CT images, and total volume HU error. Artefacts are reduced and CT-like HUs are recovered in the artefact-corrected CBCT images. Visual inspection confirms that artefacts are indeed suppressed by the proposed method, and the HU root mean square difference between reconstructed CBCTs and the reference CT images is reduced by 31% when using the artefact corrections compared to the standard clinical CBCT reconstruction. A versatile artefact correction method for clinical CBCT images acquired for IGRT has been developed. HU values are recovered in the corrected CBCT images. The proposed method relies on post processing of clinical projection images, and does not require patient specific optimisation. It is thus a powerful tool for image quality improvement of large numbers of CBCT images.

  6. Event-Based Processing of Neutron Scattering Data

    DOE PAGES

    Peterson, Peter F.; Campbell, Stuart I.; Reuter, Michael A.; ...

    2015-09-16

    Many of the world's time-of-flight spallation neutron sources are migrating to the recording of individual neutron events. This provides new opportunities in data processing, not the least of which is filtering the events by correlating them with logs of the sample environment and other ancillary equipment. This paper describes techniques for processing neutron scattering data acquired in event mode that preserve event information all the way to the final spectrum, including any necessary corrections or normalizations. This results in smaller final errors, while significantly reducing processing time and memory requirements in typical experiments. Results with traditional histogramming techniques are shown for comparison.
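
    A conceptual sketch of such event filtering against a sample-environment log is shown below in plain NumPy (not the actual data-reduction framework): each event is assigned the most recent log reading and kept only if that reading lies in a requested band. All names and numbers are illustrative.

```python
import numpy as np

def filter_events(event_times, log_times, log_values, lo, hi):
    """Keep neutron events recorded while a sample-environment log (e.g.
    temperature) was inside [lo, hi]; the log value at each event time is
    taken from the most recent log entry (step interpolation)."""
    idx = np.searchsorted(log_times, event_times, side="right") - 1
    idx = np.clip(idx, 0, len(log_values) - 1)
    keep = (log_values[idx] >= lo) & (log_values[idx] <= hi)
    return event_times[keep]

# Hypothetical event list and temperature log
events = np.sort(np.random.default_rng(1).uniform(0.0, 100.0, size=10000))
log_t = np.arange(0.0, 101.0, 10.0)
log_T = np.array([290, 292, 295, 300, 305, 310, 308, 303, 298, 294, 291], dtype=float)
print(len(filter_events(events, log_t, log_T, 295.0, 305.0)))
```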

  7. Adaptive handling of Rayleigh and Raman scatter of fluorescence data based on evaluation of the degree of spectral overlap

    NASA Astrophysics Data System (ADS)

    Hu, Yingtian; Liu, Chao; Wang, Xiaoping; Zhao, Dongdong

    2018-06-01

    At present, general scatter handling methods are unsatisfactory when scatter and fluorescence seriously overlap in the excitation-emission matrix. In this study, an adaptive method for scatter handling of fluorescence data is proposed. First, the Raman scatter was corrected by subtracting the baseline of deionized water, which was collected in each experiment to adapt to intensity fluctuations. Then, the degree of spectral overlap between Rayleigh scatter and fluorescence was classified into three categories based on the distance between the spectral peaks. The corresponding algorithms, including setting to zero and fitting on a single side or on both sides, were applied after evaluating the degree of overlap for each individual emission spectrum. The proposed method minimizes the number of fitting and interpolation processes, which reduces complexity, saves time, avoids overfitting, and most importantly preserves the authenticity of the data. Furthermore, the effectiveness of this procedure for subsequent PARAFAC analysis was assessed and compared to Delaunay interpolation by conducting experiments with four typical organic chemicals and real water samples. Using this method, we conducted long-term monitoring of tap water and river water near a dyeing and printing plant. This method can improve adaptability and accuracy in the scatter handling of fluorescence data.
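
    A simplified per-emission-scan version of this logic is sketched below: the deionized-water blank removes Raman scatter, and the Rayleigh band at the excitation wavelength is either zeroed or bridged depending on how close the fluorescence peak lies to it. The band half-width, the overlap threshold, and the use of linear bridging in place of the one-sided or two-sided fitting described above are simplifying assumptions.

```python
import numpy as np

def handle_scatter_scan(em_wl, intensity, ex_wl, water_blank,
                        band_hw=12.0, overlap_thresh=30.0):
    """Correct one emission scan: subtract the water blank (Raman), then
    zero the Rayleigh band if the fluorescence peak is far from it, or
    bridge the band by linear interpolation if they overlap."""
    corrected = np.asarray(intensity, dtype=float) - water_blank
    in_band = np.abs(em_wl - ex_wl) <= band_hw
    peak_wl = em_wl[np.argmax(np.where(in_band, -np.inf, corrected))]
    if abs(peak_wl - ex_wl) > overlap_thresh:
        corrected[in_band] = 0.0          # scatter and fluorescence well separated
    else:
        corrected[in_band] = np.interp(em_wl[in_band],
                                       em_wl[~in_band], corrected[~in_band])
    return corrected
```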

  8. Atmospheric correction of short-wave hyperspectral imagery using a fast, full-scattering 1DVar retrieval scheme

    NASA Astrophysics Data System (ADS)

    Thelen, J.-C.; Havemann, S.; Taylor, J. P.

    2012-06-01

    Here, we present a new prototype algorithm for the simultaneous retrieval of atmospheric profiles (temperature, humidity, ozone and aerosol) and the surface reflectance from hyperspectral radiance measurements obtained from air/space-borne hyperspectral imagers such as the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) or Hyperion on board Earth Observing-1. The new scheme, proposed here, consists of a fast radiative transfer code, based on empirical orthogonal functions (EOFs), in conjunction with a 1D-Var retrieval scheme. The inclusion of an 'exact' scattering code based on spherical harmonics allows for an accurate treatment of Rayleigh scattering and scattering by aerosols, water droplets and ice crystals, thus making it possible to also retrieve cloud and aerosol optical properties, although here we concentrate on non-cloudy scenes. We successfully tested this new approach using two hyperspectral images taken by AVIRIS, a whiskbroom imaging spectrometer operated by the NASA Jet Propulsion Laboratory.
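
    The 1D-Var part of such a scheme can be written compactly; the sketch below minimizes the standard variational cost function with a toy linear forward model standing in for the fast EOF-based radiative transfer code. The matrices, dimensions, and optimizer choice are placeholders, not the scheme's actual configuration.

```python
import numpy as np
from scipy.optimize import minimize

def onedvar_retrieve(y_obs, x_background, B, R, forward_model):
    """Minimize J(x) = (x - xb)^T B^-1 (x - xb) + (y - H(x))^T R^-1 (y - H(x)),
    where H is the (fast) forward radiative transfer model; any EOF
    compression of the radiances would live inside forward_model."""
    B_inv, R_inv = np.linalg.inv(B), np.linalg.inv(R)

    def cost(x):
        dx = x - x_background
        dy = y_obs - forward_model(x)
        return dx @ B_inv @ dx + dy @ R_inv @ dy

    return minimize(cost, x_background, method="Nelder-Mead").x

# Toy linear forward model: 2 state variables, 3 channels
H = np.array([[1.0, 0.5], [0.2, 1.0], [0.3, 0.3]])
truth = np.array([1.2, 0.8])
xb = np.array([1.0, 1.0])
x_ret = onedvar_retrieve(H @ truth, xb, B=0.5 * np.eye(2), R=0.01 * np.eye(3),
                         forward_model=lambda x: H @ x)
print(x_ret)
```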

  9. Interference correction by extracting the information of interference dominant regions: Application to near-infrared spectra

    NASA Astrophysics Data System (ADS)

    Bi, Yiming; Tang, Liang; Shan, Peng; Xie, Qiong; Hu, Yong; Peng, Silong; Tan, Jie; Li, Changwen

    2014-08-01

    Interference such as baseline drift and light scattering can degrade model predictability in multivariate analysis of near-infrared (NIR) spectra. Usually the interference can be represented by an additive and a multiplicative factor. In order to eliminate these interferences, correction parameters need to be estimated from the spectra. However, the spectra are often a mixture of physical light-scattering effects and chemical light-absorbance effects, making parameter estimation difficult. Herein, a novel algorithm is proposed to automatically find a spectral region in which the chemical absorbance of interest and the noise are low, that is, an interference-dominant region (IDR). Based on the definition of the IDR, a two-step method is proposed to find the optimal IDR and the corresponding correction parameters estimated from it. Finally, the correction is applied to the full spectral range using the previously obtained parameters for the calibration set and test set, respectively. The method can be applied to multi-target systems, with one IDR suitable for all targeted analytes. Tested on two benchmark data sets of near-infrared spectra, the proposed method provided considerable improvement compared with full-spectrum estimation methods and was comparable with other state-of-the-art methods.
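
    The correction step itself reduces to estimating one additive and one multiplicative parameter inside the IDR and applying them to the full spectral range. A minimal MSC-style sketch is shown below, assuming the IDR mask and a reference spectrum (e.g. the calibration-set mean) are already available; the two-step IDR search described above is not reproduced.

```python
import numpy as np

def idr_correct(spectrum, reference, idr_mask):
    """Estimate the additive (a) and multiplicative (b) interference
    parameters by regressing the spectrum on the reference inside the
    interference-dominant region, then correct the whole spectrum."""
    A = np.column_stack([np.ones(int(idr_mask.sum())), reference[idr_mask]])
    (a, b), *_ = np.linalg.lstsq(A, spectrum[idr_mask], rcond=None)
    return (spectrum - a) / b
```

    Because the IDR carries little analyte signal, the fitted a and b mainly capture baseline drift and scattering amplification, so subtracting a and dividing by b leaves the chemical absorbance largely intact.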

  10. Nonlinear scattering of ultrashort laser pulses on two-level system

    NASA Astrophysics Data System (ADS)

    Astapenko, Valery A.; Sakhno, Sergey V.

    2015-05-01

    This work is devoted to the theoretical investigation of the nonlinear scattering of ultrashort electromagnetic pulses (USP) by a two-level quantum system. We consider the scattering of several types of USP, namely the so-called corrected Gaussian pulse (CGP) and the cosine wavelet pulse. Such pulses have no constant component in their spectrum, in contrast with the traditional Gaussian pulse. It should be noted that the presence of a constant component in the limit of ultrashort pulse durations leads to unphysical results. The main purpose of the present work is the investigation of the change of the pulse temporal shape after scattering as a function of the initial phase at different distances from the target. Numerical calculations are based on the solution of the Bloch equations and an expression for the scattered field strength via the dipole moment of the two-level system driven by the incident USP. In our calculations we also account for the influence of the refractive index of air on the electric field strength of the scattered pulse.

  11. Inverse Compton Scattering in Mildly Relativistic Plasma

    NASA Technical Reports Server (NTRS)

    Molnar, S. M.; Birkinshaw, M.

    1998-01-01

    We investigated the effect of inverse Compton scattering in mildly relativistic static and moving plasmas with low optical depth using Monte Carlo simulations, and calculated the Sunyaev-Zel'dovich effect in the cosmic background radiation. Our semi-analytic method is based on a separation of photon diffusion in frequency and real space. We use Monte Carlo simulation to derive the intensity and frequency of the scattered photons for a monochromatic incoming radiation. The outgoing spectrum is determined by integrating over the spectrum of the incoming radiation using the intensity to determine the correct weight. This method makes it possible to study the emerging radiation as a function of frequency and direction. As a first application we have studied the effects of finite optical depth and gas infall on the Sunyaev-Zel'dovich effect (not possible with the extended Kompaneets equation) and discuss the parameter range in which the Boltzmann equation and its expansions can be used. For high temperature clusters (k_B T_e ≳ 15 keV) relativistic corrections based on a fifth order expansion of the extended Kompaneets equation seriously underestimate the Sunyaev-Zel'dovich effect at high frequencies. The contribution from plasma infall is less important for reasonable velocities. We give a convenient analytical expression for the dependence of the cross-over frequency on temperature, optical depth, and gas infall speed. Optical depth effects are often more important than relativistic corrections, and should be taken into account for high-precision work, but are smaller than the typical kinematic effect from cluster radial velocities.

  12. Non-cancellation of electroweak logarithms in high-energy scattering

    DOE PAGES

    Manohar, Aneesh V.; Shotwell, Brian; Bauer, Christian W.; ...

    2015-01-01

    We study electroweak Sudakov corrections in high energy scattering, and the cancellation between real and virtual Sudakov corrections. Numerical results are given for the case of heavy quark production by gluon collisions involving the rates gg → tt̄, bb̄, tb̄W, tt̄Z, bb̄Z, tt̄H, bb̄H. Gauge boson virtual corrections are related to real transverse gauge boson emission, and Higgs virtual corrections to Higgs and longitudinal gauge boson emission. At the LHC, electroweak corrections become important in the TeV regime. At the proposed 100 TeV collider, electroweak interactions enter a new regime, where the corrections are very large and need to be resummed.

  13. SU-F-J-211: Scatter Correction for Clinical Cone-Beam CT System Using An Optimized Stationary Beam Blocker with a Single Scan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, X; Zhang, Z; Xie, Y

    Purpose: X-ray scatter photons result in significant image quality degradation of cone-beam CT (CBCT). Measurement-based algorithms using a beam blocker directly acquire scatter samples and achieve significant improvement in the quality of the CBCT image. Among existing algorithms, the single-scan stationary beam blocker proposed previously is promising due to its simplicity and practicability. Although demonstrated effectively on a tabletop system, the blocker fails to estimate the scatter distribution on a clinical CBCT system, mainly due to gantry wobble. In addition, the uniformly distributed blocker strips in our previous design result in primary data loss in the CBCT system and lead to image artifacts due to data insufficiency. Methods: We investigate the motion behavior of the beam blocker in each projection and design an optimized non-uniform blocker strip distribution which accounts for the data insufficiency issue. An accurate scatter estimation is then achieved from the wobble modeling. The blocker wobble curve is estimated using threshold-based segmentation algorithms in each projection. In the blocker design optimization, the quality of the final image is quantified using the number of primary-data-loss voxels, and the mesh adaptive direct search algorithm is applied to minimize the objective function. Scatter-corrected CT images are obtained using the optimized blocker. Results: The proposed method is evaluated using a Catphan 504 phantom and a head patient. On the Catphan 504, our approach reduces the average CT number error from 115 Hounsfield units (HU) to 11 HU in the selected regions of interest, and improves the image contrast by a factor of 1.45 in the high-contrast regions. On the head patient, the CT number error is reduced from 97 HU to 6 HU in the soft tissue region and image spatial non-uniformity is decreased from 27% to 5% after correction. Conclusion: The proposed optimized blocker design is practical and attractive for CBCT guided radiation therapy. This work is supported by grants from the Guangdong Innovative Research Team Program of China (Grant No. 2011S013), the National 863 Programs of China (Grant Nos. 2012AA02A604 and 2015AA043203), and the National High-tech R&D Program for Young Scientists by the Ministry of Science and Technology of China (Grant No. 2015AA020917).

  14. Modeling boundary measurements of scattered light using the corrected diffusion approximation

    PubMed Central

    Lehtikangas, Ossi; Tarvainen, Tanja; Kim, Arnold D.

    2012-01-01

    We study the modeling and simulation of steady-state measurements of light scattered by a turbid medium taken at the boundary. In particular, we implement the recently introduced corrected diffusion approximation in two spatial dimensions to model these boundary measurements. This implementation uses expansions in plane wave solutions to compute boundary conditions and the additive boundary layer correction, and a finite element method to solve the diffusion equation. We show that this corrected diffusion approximation models boundary measurements substantially better than the standard diffusion approximation in comparison to numerical solutions of the radiative transport equation. PMID:22435102

  15. Quadratic electroweak corrections for polarized Moller scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    A. Aleksejevs, S. Barkanova, Y. Kolomensky, E. Kuraev, V. Zykunov

    2012-01-01

    The paper discusses the two-loop (NNLO) electroweak radiative corrections to the parity violating electron-electron scattering asymmetry induced by squaring one-loop diagrams. The calculations are relevant for the ultra-precise 11 GeV MOLLER experiment planned at Jefferson Laboratory and experiments at high-energy future electron colliders. The imaginary parts of the amplitudes are taken into consideration consistently in both the infrared-finite and divergent terms. The size of the obtained partial correction is significant, which indicates a need for a complete study of the two-loop electroweak radiative corrections in order to meet the precision goals of future experiments.

  16. Improved scatterer property estimates from ultrasound backscatter for small gate lengths using a gate-edge correction factor

    NASA Astrophysics Data System (ADS)

    Oelze, Michael L.; O'Brien, William D.

    2004-11-01

    Backscattered rf signals used to construct conventional ultrasound B-mode images contain frequency-dependent information that can be examined through the backscattered power spectrum. The backscattered power spectrum is found by taking the magnitude squared of the Fourier transform of a gated time segment corresponding to a region in the scattering volume. When a time segment is gated, the edges of the gated regions change the frequency content of the backscattered power spectrum due to truncation of the waveform. Tapered windows, like the Hanning window, and longer gate lengths reduce the relative contribution of the gate-edge effects. A new gate-edge correction factor was developed that partially accounts for the edge effects. The gate-edge correction factor gave more accurate estimates of scatterer properties at small gate lengths compared to conventional windowing functions, yielding estimates within 5% of actual values at very small gate lengths (less than 5 spatial pulse lengths) in both simulations and measurements on glass-bead phantoms. While the gate-edge correction factor gave higher accuracy of estimates at smaller gate lengths, the precision of estimates was not improved at small gate lengths over conventional windowing functions.
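
    The conventional windowed power spectrum that the correction factor is benchmarked against can be computed as below; the gate-edge correction factor itself is not reproduced here (its exact form is in the paper), and the toy RF trace is only an assumption for demonstration.

        import numpy as np

        def gated_power_spectrum(rf, start, gate_len, fs, taper='hann'):
            """Magnitude-squared FFT of a gated RF segment; a tapered window
            (e.g. Hanning) reduces, but does not remove, gate-edge effects."""
            seg = rf[start:start + gate_len].astype(float)
            if taper == 'hann':
                seg = seg * np.hanning(gate_len)
            freqs = np.fft.rfftfreq(gate_len, d=1.0 / fs)
            return freqs, np.abs(np.fft.rfft(seg)) ** 2

        # Compare a rectangular gate with a Hanning-tapered gate on a toy trace
        fs = 50e6
        t = np.arange(0, 20e-6, 1.0 / fs)
        rf = np.random.randn(t.size) * np.sin(2 * np.pi * 5e6 * t)
        f_rect, p_rect = gated_power_spectrum(rf, 100, 256, fs, taper=None)
        f_hann, p_hann = gated_power_spectrum(rf, 100, 256, fs, taper='hann')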

  17. Generalized model screening potentials for Fermi-Dirac plasmas

    NASA Astrophysics Data System (ADS)

    Akbari-Moghanjoughi, M.

    2016-04-01

    In this paper, some properties of relativistically degenerate quantum plasmas, such as static ion screening, the structure factor, and the Thomson scattering cross-section, are studied in the framework of linearized quantum hydrodynamic theory with the newly proposed kinetic γ-correction to the Bohm term in the low-frequency limit. It is found that the correction has a significant effect on the properties of quantum plasmas in all density regimes, ranging from solid density up to that of white dwarf stars. It is also found that the Shukla-Eliasson attractive force exists up to a few times the density of metals, and the ionic correlations are apparent in the radial distribution function signature. Simplified statically screened attractive and repulsive potentials are presented for zero-temperature Fermi-Dirac plasmas, valid for a wide range of quantum plasma number-density and atomic number values. Moreover, it is observed that crystallization of white dwarfs beyond a critical core number-density persists with this new kinetic correction, but it is shifted to a much higher number-density value of n0 ≃ 1.94 × 10^37 cm^-3 (1.77 × 10^10 g cm^-3), which is nearly four orders of magnitude less than the nuclear density. It is found that maximal Thomson scattering with the γ-corrected structure factor is a remarkable property of white dwarf stars. However, with the new γ-correction, the maximal scattering shifts to the spectral region between hard X-rays and low-energy gamma-rays. White dwarfs composed of higher atomic-number ions are observed to Thomson-scatter maximally at slightly higher wavelengths, i.e., they maximally scatter slightly lower-energy photons in the presence of the correction.

  18. Quark-hadron duality constraints on γZ box corrections to parity-violating elastic scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, Nathan L.; Blunden, Peter G.; Melnitchouk, Wally

    2015-12-08

    We examine the interference γZ box corrections to parity-violating elastic electron-proton scattering in the light of the recent observation of quark-hadron duality in parity-violating deep-inelastic scattering from the deuteron, and the approximate isospin independence of duality in the electromagnetic nucleon structure functions down to Q² ≈ 1 GeV². Assuming that a similar behavior also holds for the γZ proton structure functions, we find that duality constrains the γZ box correction to the proton's weak charge to be Re □_γZ^V = (5.4 ± 0.4) × 10^-3 at the kinematics of the Qweak experiment. Within the same model we also provide estimates of the γZ corrections for future parity-violating experiments, such as MOLLER at Jefferson Lab and MESA at Mainz.

  19. Quantitation of tumor uptake with molecular breast imaging.

    PubMed

    Bache, Steven T; Kappadath, S Cheenu

    2017-09-01

    We developed scatter- and attenuation-correction techniques for quantifying images obtained with Molecular Breast Imaging (MBI) systems. To investigate scatter correction, energy spectra of a 99mTc point source were acquired with 0-7-cm-thick acrylic to simulate scatter between the detector heads. The system-specific scatter correction factor, k, was calculated as a function of thickness using a dual-energy-window (DEW) technique. To investigate attenuation correction, a 7-cm-thick rectangular phantom containing 99mTc-water simulating breast tissue and fillable spheres simulating tumors was imaged. Six spheres 10-27 mm in diameter were imaged with sphere-to-background ratios (SBRs) of 3.5, 2.6, and 1.7 and located at depths of 0.5, 1.5, and 2.5 cm from the center of the water bath for 54 unique tumor scenarios (3 SBRs × 6 sphere sizes × 3 depths). Phantom images were also acquired in-air under scatter- and attenuation-free conditions, which provided ground truth counts. To estimate the true counts, T, from each tumor, the geometric mean (GM) of the counts within a prescribed region of interest (ROI) from the two projection images was calculated as T = √(C1·C2)·e^(μt/2)·F, where C1 and C2 are the counts within the square ROI circumscribing each sphere on detectors 1 and 2, μ is the linear attenuation coefficient of water, t is the detector separation, and the factor F accounts for background activity. Four unique definitions of F were investigated: standard GM, background-subtraction GM, MIRD Primer 16 GM, and a novel "volumetric GM". The error in T was calculated as the percentage difference with respect to in-air. Quantitative accuracy using the different GM definitions was calculated as a function of SBR, depth, and sphere size. The sensitivity of quantitative accuracy to ROI size was investigated. We developed an MBI simulation to investigate the robustness of our corrections for various ellipsoidal tumor shapes and detector separations. The scatter correction factor k varied slightly (0.80-0.95) over a compressed breast thickness range of 6-9 cm. Corrected energy spectra recovered the general characteristics of scatter-free spectra. Quantitatively, photopeak counts were recovered to <10% compared to in-air conditions after scatter correction. After GM attenuation correction, mean errors (95% confidence interval, CI) for all 54 imaging scenarios were 149% (-154% to +455%), -14.0% (-38.4% to +10.4%), 16.8% (-14.7% to +48.2%), and 2.0% (-14.3% to +18.3%) for the standard GM, background-subtraction GM, MIRD 16 GM, and volumetric GM, respectively. The volumetric GM was less sensitive to SBR and sphere size, while all GM methods were insensitive to sphere depth. Simulation results showed that the volumetric GM method produced a mean error within 5% over all compressed breast thicknesses (3-14 cm), and that the use of an estimated radius for nonspherical tumors increases the 95% CI to at most ±23%, compared with ±16% for spherical tumors. Using the DEW scatter-correction and our volumetric GM attenuation-correction methodology yielded accurate estimates of tumor counts in MBI over various tumor sizes, shapes, depths, background uptake levels, and compressed breast thicknesses. Accurate tumor uptake can be converted to radiotracer uptake concentration, allowing three patient-specific metrics to be calculated for quantifying absolute uptake and relative uptake change for assessment of treatment response. © 2017 American Association of Physicists in Medicine.
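
    A minimal sketch of the two correction steps described above, using the formula as written in this record (DEW subtraction of the scatter-window counts, then the conjugate-view geometric mean with an e^(μt/2) attenuation term); the numerical values are illustrative only, and F stands in for whichever background-correction definition is chosen.

        import numpy as np

        def dew_scatter_correct(photopeak_counts, scatter_window_counts, k):
            """Dual-energy-window correction: remove k times the counts measured
            in the lower (scatter) energy window from the photopeak counts."""
            return photopeak_counts - k * scatter_window_counts

        def geometric_mean_counts(C1, C2, mu, t, F=1.0):
            """Conjugate-view estimate T = sqrt(C1*C2) * exp(mu*t/2) * F, with
            mu the linear attenuation coefficient of water (cm^-1), t the
            detector separation (cm) and F a background-correction factor."""
            return np.sqrt(C1 * C2) * np.exp(mu * t / 2.0) * F

        # Illustrative numbers (not from the study)
        C1 = dew_scatter_correct(12500.0, 4000.0, k=0.9)
        C2 = dew_scatter_correct(11800.0, 3900.0, k=0.9)
        T = geometric_mean_counts(C1, C2, mu=0.153, t=7.0)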

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao Hewei; Fahrig, Rebecca; Bennett, N. Robert

    Purpose: Scatter correction is a major challenge in x-ray imaging using large area detectors. Recently, the authors proposed a promising scatter correction method for x-ray computed tomography (CT) using primary modulation. Proof of concept was previously illustrated by Monte Carlo simulations and physical experiments on a small phantom with a simple geometry. In this work, the authors provide a quantitative evaluation of the primary modulation technique and demonstrate its performance in applications where scatter correction is more challenging. Methods: The authors first analyze the potential errors of the estimated scatter in the primary modulation method. On two tabletop CT systems, the method is investigated using three phantoms: a Catphan©600 phantom, an anthropomorphic chest phantom, and the Catphan©600 phantom with two annuli. Two different primary modulators are also designed to show the impact of the modulator parameters on the scatter correction efficiency. The first is an aluminum modulator with a weak modulation and a low modulation frequency, and the second is a copper modulator with a strong modulation and a high modulation frequency. Results: On the Catphan©600 phantom in the first study, the method reduces the error of the CT number in the selected regions of interest (ROIs) from 371.4 to 21.9 Hounsfield units (HU); the contrast to noise ratio also increases from 10.9 to 19.2. On the anthropomorphic chest phantom in the second study, which represents a more difficult case due to the high scatter signals and object heterogeneity, the method reduces the error of the CT number from 327 to 19 HU in the selected ROIs and from 31.4% to 5.7% on the overall average. The third study investigates the impact of object size on the efficiency of our method. The scatter-to-primary ratio estimation error on the Catphan©600 phantom without any annulus (20 cm in diameter) is at the level of 0.04; it rises to 0.07 and 0.1 on the phantom with an elliptical annulus (30 cm in the minor axis and 38 cm in the major axis) and with a circular annulus (38 cm in diameter). Conclusions: In the three phantom studies, good scatter correction performance of the proposed method has been demonstrated using both image comparisons and quantitative analysis. The theory and experiments demonstrate that a strong primary modulation that possesses a low transmission factor and a high modulation frequency is preferred for high scatter correction accuracy.

  1. Retrieval of background surface reflectance with BRD components from pre-running BRDF

    NASA Astrophysics Data System (ADS)

    Choi, Sungwon; Lee, Kyeong-Sang; Jin, Donghyun; Lee, Darae; Han, Kyung-Soo

    2016-10-01

    Many countries are launching satellites to observe the Earth's surface. As the importance of surface remote sensing grows, surface reflectance has become a core parameter of the land climate system. However, observing surface reflectance from satellites has weaknesses, such as limited temporal resolution and sensitivity to view and solar angles. The bidirectional effects of surface reflectance introduce noise into the time series, and this noise can lead to errors when determining surface reflectance. Correcting the bidirectional error therefore requires a model that normalizes the sensor data; the Bidirectional Reflectance Distribution Function (BRDF) provides a more accurate way to account for the scattering components (isotropic, geometric, and volumetric scattering). In this study we applied a BRDF model in two steps to retrieve the Background Surface Reflectance (BSR). The first step retrieves the Bidirectional Reflectance Distribution (BRD) coefficients: in a pre-running BRDF, observed surface reflectance from SPOT/VEGETATION (VGT-S1) and angular data are used to obtain the BRD coefficients needed to model the scattering components. In the second step, the BRDF is applied in the opposite direction with the BRD coefficients and the angular data to retrieve the BSR. As a result, the BSR is very similar to the VGT-S1 reflectance and the retrieved values are reasonable: the highest BSR reflectance does not exceed 0.4 in the blue channel, 0.45 in the red channel, and 0.55 in the NIR channel. For validation, we compared the reflectance of clear-sky pixels identified from the SPOT/VGT status map. Comparing BSR with VGT-S1 gives a bias of 0.0116 to 0.0158 and an RMSE of 0.0459 to 0.0545; these results are reasonable and confirm that the BSR is consistent with VGT-S1. A weakness of this study is the presence of missing pixels in the BSR, where too few observations were available to retrieve the BRD components. If these missing pixels are filled and the accuracy is further improved, the BSR can be a useful input for retrieving surface products that depend on surface reflectance, such as cloud masking and aerosol retrieval.
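
    The two-step retrieval described above can be sketched with a linear kernel-driven BRDF model fitted by least squares; the specific kernels (e.g. Ross-Thick volumetric and Li-Sparse geometric) and the reference geometry are assumptions for illustration, since the abstract does not name them.

        import numpy as np

        def fit_brd_coefficients(reflectance, k_vol, k_geo):
            """Step 1 (pre-running BRDF): least-squares fit of
            R = f_iso + f_vol * K_vol + f_geo * K_geo to multi-angular
            observations of one pixel/channel."""
            A = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
            coeffs, *_ = np.linalg.lstsq(A, reflectance, rcond=None)
            return coeffs  # (f_iso, f_vol, f_geo)

        def background_surface_reflectance(coeffs, k_vol_ref, k_geo_ref):
            """Step 2: run the fitted model 'in the opposite direction' to
            predict reflectance at a reference (normalized) geometry."""
            f_iso, f_vol, f_geo = coeffs
            return f_iso + f_vol * k_vol_ref + f_geo * k_geo_ref

        # Toy multi-angular sample for one pixel (kernel values precomputed
        # from the sun/view geometry by the caller)
        k_vol = np.array([-0.02, 0.05, 0.11, 0.18])
        k_geo = np.array([-1.10, -0.95, -0.80, -0.60])
        refl = np.array([0.21, 0.23, 0.26, 0.30])
        bsr = background_surface_reflectance(
            fit_brd_coefficients(refl, k_vol, k_geo), k_vol_ref=0.0, k_geo_ref=-1.0)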

  2. Novel auto-correction method in a fiber-optic distributed-temperature sensor using reflected anti-Stokes Raman scattering.

    PubMed

    Hwang, Dusun; Yoon, Dong-Jin; Kwon, Il-Bum; Seo, Dae-Cheol; Chung, Youngjoo

    2010-05-10

    A novel method for the auto-correction of a fiber-optic distributed temperature sensor using anti-Stokes Raman backscattering and its reflected signal is presented. The method processes two parts of the measured signal: the normally backscattered anti-Stokes signal and the reflected signal, which eliminates not only the effect of local losses due to micro-bending or damage along the fiber but also the differential attenuation. Because beams of the same wavelength are used to cancel out local variations in the transmission medium, there is inherently no differential attenuation. The auto-correction concept was verified by bending experiments at different bending points. (c) 2010 Optical Society of America.

  3. Fits of weak annihilation and hard spectator scattering corrections in B_{u,d} → VV decays

    NASA Astrophysics Data System (ADS)

    Chang, Qin; Li, Xiao-Nan; Sun, Jun-Feng; Yang, Yue-Ling

    2016-10-01

    In this paper, the contributions of weak annihilation and hard spectator scattering in B → ρK*, K*K̄*, φK*, ρρ and φφ decays are investigated within the framework of quantum chromodynamics factorization. Using the available experimental data, we perform χ² analyses of the end-point parameters in four cases based on the topology-dependent and polarization-dependent parameterization schemes. The fitted results indicate that: (i) in the topology-dependent scheme, the relation (ρ_A^i, φ_A^i) ...

  4. Asymmetric dark matter and CP violating scatterings in a UV complete model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baldes, Iason; Bell, Nicole F.; Millar, Alexander J.

    We explore possible asymmetric dark matter models using CP violating scatterings to generate an asymmetry. In particular, we introduce a new model based on dark matter fields coupling to the Standard Model Higgs and lepton doublets through a neutrino portal, and explore its UV completions. We study the CP violation and asymmetry formation of this model to demonstrate that it is capable of producing the correct abundance of dark matter and the observed matter-antimatter asymmetry. Crucial to achieving this is the introduction of interactions which violate CP with a T^2 dependence.

  5. Coherence and diffraction limited resolution in microscopic OCT by a unified approach for the correction of dispersion and aberrations

    NASA Astrophysics Data System (ADS)

    Schulz-Hildebrandt, H.; Münter, Michael; Ahrens, M.; Spahr, H.; Hillmann, D.; König, P.; Hüttmann, G.

    2018-03-01

    Optical coherence tomography (OCT) images scattering tissues with 5 to 15 μm resolution. This is usually not sufficient for distinguishing cellular and subcellular structures. Achieving cellular and subcellular resolution requires increasing the axial and lateral resolution and compensating artifacts caused by dispersion and aberrations, including defocus, which limits the usable depth of field at high lateral resolution. OCT gives access to the phase of the scattered light, and hence correction of dispersion and aberrations is possible by numerical algorithms. Here we present a unified dispersion/aberration correction which is based on a polynomial parameterization of the phase error and an optimization of the image quality using Shannon's entropy. For validation, a supercontinuum light source and a custom-made spectrometer with 400 nm bandwidth were combined with a high-NA microscope objective in a setup for tissue and small animal imaging. Using this setup and computational corrections, volumetric imaging at 1.5 μm resolution is possible. Cellular and near-cellular resolution is demonstrated in porcine cornea and in the Drosophila larva when computational correction of dispersion and aberrations is used. Owing to the excellent correction of the microscope objective used, defocus was the main contribution to the aberrations. In addition, higher-order aberrations caused by the sample itself were successfully corrected. Dispersion and aberrations are closely related artifacts in microscopic OCT imaging and can therefore be corrected in the same way by optimizing the image quality. In this way, microscopic resolution is readily achieved in OCT imaging of static biological tissues.
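
    The optimization loop described above (polynomial phase parameterization, Shannon-entropy image metric) can be sketched as follows; the choice of basis functions, the Nelder-Mead optimizer and the purely Fourier-domain correction are simplifying assumptions rather than the authors' exact implementation.

        import numpy as np
        from scipy.optimize import minimize

        def apply_phase_correction(field, coeffs, basis):
            """Apply exp(-i * phase) in the spatial-frequency domain, with the
            phase error parameterized as sum_k coeffs[k] * basis[k]."""
            phase = np.tensordot(coeffs, basis, axes=1)
            return np.fft.ifft2(np.fft.fft2(field) * np.exp(-1j * phase))

        def shannon_entropy(image):
            """Entropy of the normalized intensity; sharper images score lower."""
            p = np.abs(image) ** 2
            p = p / p.sum()
            p = p[p > 0]
            return float(-np.sum(p * np.log(p)))

        def correct_aberrations(field, basis):
            """Search for the polynomial phase coefficients (e.g. defocus and
            dispersion orders) that minimize the image entropy."""
            cost = lambda c: shannon_entropy(apply_phase_correction(field, c, basis))
            res = minimize(cost, x0=np.zeros(len(basis)), method='Nelder-Mead')
            return apply_phase_correction(field, res.x, basis), res.x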

  6. Retrieval of atmospheric properties from hyper and multispectral imagery with the FLAASH atmospheric correction algorithm

    NASA Astrophysics Data System (ADS)

    Perkins, Timothy; Adler-Golden, Steven; Matthew, Michael; Berk, Alexander; Anderson, Gail; Gardner, James; Felde, Gerald

    2005-10-01

    Atmospheric Correction Algorithms (ACAs) are used in applications of remotely sensed Hyperspectral and Multispectral Imagery (HSI/MSI) to correct for atmospheric effects on measurements acquired by air and space-borne systems. The Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) algorithm is a forward-model based ACA created for HSI and MSI instruments which operate in the visible through shortwave infrared (Vis-SWIR) spectral regime. Designed as a general-purpose, physics-based code for inverting at-sensor radiance measurements into surface reflectance, FLAASH provides a collection of spectral analysis and atmospheric retrieval methods including: a per-pixel vertical water vapor column estimate, determination of aerosol optical depth, estimation of scattering for compensation of adjacency effects, detection/characterization of clouds, and smoothing of spectral structure resulting from an imperfect atmospheric correction. To further improve the accuracy of the atmospheric correction process, FLAASH will also detect and compensate for sensor-introduced artifacts such as optical smile and wavelength mis-calibration. FLAASH relies on the MODTRANTM radiative transfer (RT) code as the physical basis behind its mathematical formulation, and has been developed in parallel with upgrades to MODTRAN in order to take advantage of the latest improvements in speed and accuracy. For example, the rapid, high fidelity multiple scattering (MS) option available in MODTRAN4 can greatly improve the accuracy of atmospheric retrievals over the 2-stream approximation. In this paper, advanced features available in FLAASH are described, including the principles and methods used to derive atmospheric parameters from HSI and MSI data. Results are presented from processing of Hyperion, AVIRIS, and LANDSAT data.

  7. An empirical correction for moderate multiple scattering in super-heterodyne light scattering.

    PubMed

    Botin, Denis; Mapa, Ludmila Marotta; Schweinfurth, Holger; Sieber, Bastian; Wittenberg, Christopher; Palberg, Thomas

    2017-05-28

    Frequency domain super-heterodyne laser light scattering is utilized in a low angle integral measurement configuration to determine flow and diffusion in charged sphere suspensions showing moderate to strong multiple scattering. We introduce an empirical correction to subtract the multiple scattering background and isolate the singly scattered light. We demonstrate the excellent feasibility of this simple approach for turbid suspensions of transmittance T ≥ 0.4. We study the particle concentration dependence of the electro-kinetic mobility in low salt aqueous suspension over an extended concentration regime and observe a maximum at intermediate concentrations. We further use our scheme for measurements of the self-diffusion coefficients in the fluid samples in the absence or presence of shear, as well as in polycrystalline samples during crystallization and coarsening. We discuss the scope and limits of our approach as well as possible future applications.

  8. Use of the Wigner representation in scattering problems

    NASA Technical Reports Server (NTRS)

    Bemler, E. A.

    1975-01-01

    The basic equations of quantum scattering were translated into the Wigner representation, putting quantum mechanics in the form of a stochastic process in phase space, with real-valued probability distributions and source functions. The interpretative picture associated with this representation is developed and stressed, and results used in applications published elsewhere are derived. The form of the integral equation for scattering, as well as its multiple scattering expansion in this representation, is derived. Quantum corrections to classical propagators are briefly discussed. The basic approximation used in the Monte-Carlo method is derived in a fashion which allows for future refinement and which includes bound state production. Finally, as a simple illustration of some of the formalism, scattering by a bound two-body system is treated. Simple expressions for single and double scattering contributions to total and differential cross-sections, as well as for all necessary shadow corrections, are obtained.

  9. Ambient dose equivalent and effective dose from scattered x-ray spectra in mammography for Mo/Mo, Mo/Rh and W/Rh anode/filter combinations.

    PubMed

    Künzel, R; Herdade, S B; Costa, P R; Terini, R A; Levenhagen, R S

    2006-04-21

    In this study, scattered x-ray distributions were produced by irradiating a tissue equivalent phantom under clinical mammographic conditions by using Mo/Mo, Mo/Rh and W/Rh anode/filter combinations, for 25 and 30 kV tube voltages. Energy spectra of the scattered x-rays have been measured with a Cd(0.9)Zn(0.1)Te (CZT) detector for scattering angles between 30 degrees and 165 degrees . Measurement and correction processes have been evaluated through the comparison between the values of the half-value layer (HVL) and air kerma calculated from the corrected spectra and measured with an ionization chamber in a nonclinical x-ray system with a W/Mo anode/filter combination. The shape of the corrected x-ray spectra measured in the nonclinical system was also compared with those calculated using semi-empirical models published in the literature. Scattered x-ray spectra measured in the clinical x-ray system have been characterized through the calculation of HVL and mean photon energy. Values of the air kerma, ambient dose equivalent and effective dose have been evaluated through the corrected x-ray spectra. Mean conversion coefficients relating the air kerma to the ambient dose equivalent and to the effective dose from the scattered beams for Mo/Mo, Mo/Rh and W/Rh anode/filter combinations were also evaluated. Results show that for the scattered radiation beams the ambient dose equivalent provides an overestimate of the effective dose by a factor of about 5 in the mammography energy range. These results can be used in the control of the dose limits around a clinical unit and in the calculation of more realistic protective shielding barriers in mammography.
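
    For reference, the spectrum-derived quantities mentioned above (mean photon energy and half-value layer) can be computed along the following lines; approximating air kerma by energy fluence and requiring the caller to supply aluminium attenuation coefficients are simplifications, not the procedure used in the paper.

        import numpy as np

        def mean_energy(E, fluence):
            """Fluence-weighted mean photon energy of a measured spectrum."""
            return np.trapz(E * fluence, E) / np.trapz(fluence, E)

        def half_value_layer(E, fluence, mu_al_per_mm, step=0.001, max_mm=10.0):
            """Thickness of Al (mm) halving the energy fluence of the spectrum;
            mu_al_per_mm gives the linear attenuation coefficient of Al at E."""
            k0 = np.trapz(E * fluence, E)
            x = 0.0
            while x < max_mm:
                k = np.trapz(E * fluence * np.exp(-mu_al_per_mm * x), E)
                if k <= 0.5 * k0:
                    return x
                x += step
            return float('nan')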

  10. Operational atmospheric correction of AVHRR visible and infrared data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vermote, E.; El Saleous, N.; Roger, J.C.

    1995-12-31

    The satellite level radiance is affected by the presence of the atmosphere between the sensor and the target. The ozone and water vapor absorption bands affect the signal recorded by the AVHRR visible and near infrared channels, respectively. Rayleigh scattering mainly affects the visible channel and is more pronounced for small sun elevations and large view angles. Aerosol scattering affects both channels and is certainly the most challenging term for atmospheric correction because of the spatial and temporal variability of both the type and amount of particles in the atmosphere. This paper presents the equation of the satellite signal, the scheme to retrieve atmospheric properties, and the corrections applied to AVHRR observations. The operational process uses TOMS data and a digital elevation model to correct for ozone absorption and Rayleigh scattering. The water vapor content is evaluated using the split-window technique, which is validated over ocean using 1988 SSM/I data. The aerosol amount retrieval over ocean is achieved in channels 1 and 2 and compared to sun photometer observations to check the consistency of the radiative transfer model and the sensor calibration. Over land, the method developed uses reflectance at 3.75 microns to deduce the target reflectance in channel 1 and retrieve the aerosol optical thickness, which can be extrapolated to channel 2. The method to invert the reflectance at 3.75 microns is based on MODTRAN simulations and is validated by comparison to measurements performed during FIFE 87. Finally, aerosol optical thickness retrieved over Brazil and the Eastern US is compared to sun photometer measurements.

  11. Theoretical interpretation of the Venus 1.05-micron CO2 band and the Venus 0.8189-micron H2O line.

    NASA Technical Reports Server (NTRS)

    Regas, J. L.; Giver, L. P.; Boese, R. W.; Miller, J. H.

    1972-01-01

    The synthetic-spectrum technique was used in the analysis. The synthetic spectra were constructed with a model which takes into account both isotropic scattering and the inhomogeneity in the Venus atmosphere. The Potter-Hansen correction factor was used to correct for anisotropic scattering. The synthetic spectra obtained are, therefore, the first which contain all the essential physics of line formation. The results confirm Potter's conclusion that the Venus cloud tops resemble terrestrial cirrus or stratus clouds in their scattering properties.

  12. Laplace Transform Based Radiative Transfer Studies

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Lin, B.; Ng, T.; Yang, P.; Wiscombe, W.; Herath, J.; Duffy, D.

    2006-12-01

    Multiple scattering is the major uncertainty for data analysis of space-based lidar measurements. Until now, accurate quantitative lidar data analysis has been limited to very thin objects that are dominated by single scattering, where photons from the laser beam scatter only a single time with particles in the atmosphere before reaching the receiver, and a simple linear relationship between physical properties and lidar signal exists. In reality, multiple scattering is always a factor in space-based lidar measurement, and it dominates space-based lidar returns from clouds, dust aerosols, vegetation canopy and phytoplankton. While multiple scattering returns are clear signals, the lack of a fast-enough lidar multiple scattering computation tool forces us to treat the signal as unwanted "noise" and use simple multiple scattering correction schemes to remove it. Such multiple scattering treatments waste the multiple scattering signals and may cause orders of magnitude errors in retrieved physical properties. Thus the lack of fast and accurate time-dependent radiative transfer tools significantly limits lidar remote sensing capabilities. Analyzing lidar multiple scattering signals requires fast and accurate time-dependent radiative transfer computations. Currently, multiple scattering is done with Monte Carlo simulations. Monte Carlo simulations take minutes to hours and are too slow for interactive satellite data analysis processes and can only be used to help system/algorithm design and error assessment. We present an innovative physics approach to solve the time-dependent radiative transfer problem. The technique utilizes FPGA-based reconfigurable computing hardware. The approach is as follows: 1. Physics solution: Perform a Laplace transform on the time and spatial dimensions and a Fourier transform on the viewing azimuth dimension, and convert the solution of the radiative transfer differential equation into a fast matrix inversion problem. The majority of the radiative transfer computation goes to matrix inversion processes, FFT, and inverse Laplace transforms. 2. Hardware solution: Perform the well-defined matrix inversion, FFT and Laplace transforms on highly parallel, reconfigurable computing hardware. This physics-based computational tool leads to accurate quantitative analysis of space-based lidar signals and improves data quality of current lidar missions such as CALIPSO. This presentation will introduce the basic idea of this approach, preliminary results based on SRC's FPGA-based Mapstation, and how we may apply it to CALIPSO data analysis.

  13. DISSOLVED ORGANIC FLUOROPHORES IN SOUTHEASTERN US COASTAL WATERS: CORRECTION METHOD FOR ELIMINATING RAYLEIGH AND RAMAN SCATTERING PEAKS IN EXCITATION-EMISSION MATRICES

    EPA Science Inventory

    Fluorescence-based observations provide useful, sensitive information concerning the nature and distribution of colored dissolved organic matter (CDOM) in coastal and freshwater environments. The excitation-emission matrix (EEM) technique has become widely used for evaluating sou...

  14. Inductively coupled plasma atomic fluorescence spectrometric determination of cadmium, copper, iron, lead, manganese and zinc

    USGS Publications Warehouse

    Sanzolone, R.F.

    1986-01-01

    An inductively coupled plasma atomic fluorescence spectrometric method is described for the determination of six elements in a variety of geological materials. Sixteen reference materials are analysed by this technique to demonstrate its use in geochemical exploration. Samples are decomposed with nitric, hydrofluoric and hydrochloric acids, and the residue dissolved in hydrochloric acid and diluted to volume. The elements are determined in two groups based on compatibility of instrument operating conditions and consideration of crustal abundance levels. Cadmium, Cu, Pb and Zn are determined as a group in the 50-ml sample solution under one set of instrument conditions with the use of scatter correction. Limitations of the scatter correction technique used with the fluorescence instrument are discussed. Iron and Mn are determined together using another set of instrumental conditions on a 1-50 dilution of the sample solution without the use of scatter correction. The ranges of concentration (??g g-1) of these elements in the sample that can be determined are: Cd, 0.3-500; Cu, 0.4-500; Fe, 85-250 000; Mn, 45-100 000; Pb, 5-10 000; and Zn, 0.4-300. The precision of the method is usually less than 5% relative standard deviation (RSD) over a wide concentration range and acceptable accuracy is shown by the agreement between values obtained and those recommended for the reference materials.

  15. Improving satellite retrievals of NO2 in biomass burning regions

    NASA Astrophysics Data System (ADS)

    Bousserez, N.; Martin, R. V.; Lamsal, L. N.; Mao, J.; Cohen, R. C.; Anderson, B. E.

    2010-12-01

    The quality of space-based nitrogen dioxide (NO2) retrievals from solar backscatter depends on a priori knowledge of the NO2 profile shape as well as the effects of atmospheric scattering. These effects are characterized by the air mass factor (AMF) calculation. Calculation of the AMF combines a radiative transfer calculation with a priori information about aerosols and about NO2 profiles (shape factors), which are usually taken from a chemical transport model. In this work we assess the impact of biomass burning emissions on the AMF using the LIDORT radiative transfer model and a GEOS-Chem simulation based on a daily fire emissions inventory (FLAMBE). We evaluate the GEOS-Chem aerosol optical properties and NO2 shape factors using in situ data from the ARCTAS summer 2008 (North America) and DABEX winter 2006 (western Africa) experiments. Sensitivity studies are conducted to assess the impact of biomass burning on the aerosols and the NO2 shape factors used in the AMF calculation. The mean aerosol correction over boreal fires is negligible (+3%), in contrast with a large reduction (-18%) over African savanna fires. The change in sign and magnitude between boreal forest and savanna fires appears to be driven by the shielding effects that arise from the greater biomass burning aerosol optical thickness (AOT) above the African biomass burning NO2. In agreement with previous work, the single scattering albedo (SSA) also affects the aerosol correction. We further investigated the effect of clouds on the aerosol correction. For a fixed AOT, the aerosol correction can increase from 20% to 50% when the cloud fraction increases from 0 to 30%. Over both boreal and savanna fires, the greatest impact on the AMF is from the fire-induced change in the NO2 profile (shape factor correction), which decreases the AMF by 38% over the boreal fires and by 62% over the savanna fires. Combining the aerosol and shape factor corrections results in small differences compared to the shape factor correction alone (without the aerosol correction), indicating that a shape-factor-only correction is a good approximation of the total AMF correction associated with fire emissions. We use this result to define a measurement-based correction of the AMF based on the relationship between the slant column variability and the variability of the shape factor in the lower troposphere. This method may be generalized to other types of emission sources.
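
    The AMF sensitivity discussed above follows from the usual formulation of the air mass factor as a shape-factor-weighted integral of scattering weights; the profiles and weights below are toy assumptions used only to show how concentrating NO2 near the surface (as fires do) lowers the AMF.

        import numpy as np

        def air_mass_factor(scattering_weights, shape_factor, dz):
            """AMF = integral(w * S) / integral(S) on a common altitude grid."""
            return (np.trapz(scattering_weights * shape_factor, dx=dz) /
                    np.trapz(shape_factor, dx=dz))

        z = np.arange(0.0, 10e3, 100.0)        # altitude grid [m]
        w = 0.8 * np.exp(z / 8.0e3)            # toy scattering weights
        S_background = np.exp(-z / 1.5e3)      # background NO2 shape factor
        S_fire = np.exp(-z / 0.6e3)            # fire case: NO2 nearer the surface

        amf_bg = air_mass_factor(w, S_background, 100.0)
        amf_fire = air_mass_factor(w, S_fire, 100.0)   # smaller than amf_bg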

  16. Estimation of Soil Moisture with L-band Multi-polarization Radar

    NASA Technical Reports Server (NTRS)

    Shi, J.; Chen, K. S.; Kim, Chung-Li Y.; Van Zyl, J. J.; Njoku, E.; Sun, G.; O'Neill, P.; Jackson, T.; Entekhabi, D.

    2004-01-01

    Through analyses of the model-simulated database, we developed a technique to estimate surface soil moisture under the HYDROS radar sensor configuration (L-band, multiple polarizations, and 40° incidence). This technique includes two steps. First, it decomposes the total backscattering signals into two components: the surface scattering components (the bare surface backscattering signals attenuated by the overlying vegetation layer) and the sum of the direct volume scattering components and surface-volume interaction components at different polarizations. On the model-simulated database, our decomposition technique works quite well in estimating the surface scattering components, with RMSEs of 0.12, 0.25, and 0.55 dB for VV, HH, and VH polarizations, respectively. Then, we use the decomposed surface backscattering signals to estimate the soil moisture and the combined surface roughness and vegetation attenuation correction factors with all three polarizations.

  17. Spectral peculiarities of electromagnetic wave scattering by Veselago's cylinders

    NASA Astrophysics Data System (ADS)

    Sukhov, S. V.; Shevyakhov, N. S.

    2006-03-01

    The results are presented of spectral calculations of the extinction cross-section for the scattering of E- and H-polarized electromagnetic waves by cylinders made of Veselago material. The inadequacy of previously developed scattering models is demonstrated. It is shown that a correct description of the scattering requires separate consideration of both the electric and magnetic subsystems.

  18. Spectral peculiarities of electromagnetic wave scattered by Veselago's cylinders

    NASA Astrophysics Data System (ADS)

    Sukhov, S. V.; Shevyakhov, N. S.

    2005-09-01

    The results are presented of spectral calculations of the extinction cross-section for the scattering of E- and H-polarized electromagnetic waves by cylinders made of Veselago material. The inadequacy of previously developed scattering models is demonstrated. It is shown that a correct description of the scattering requires separate consideration of both the electric and magnetic subsystems.

  19. Influence of local-field corrections on Thomson scattering in collision-dominated two-component plasmas.

    PubMed

    Fortmann, Carsten; Wierling, August; Röpke, Gerd

    2010-02-01

    The dynamic structure factor, which determines the Thomson scattering spectrum, is calculated via an extended Mermin approach. It incorporates the dynamical collision frequency as well as the local-field correction factor. This allows a systematic study of the impact of electron-ion collisions, as well as of electron-electron correlations due to degeneracy and short-range interaction, on the characteristics of the Thomson scattering signal. As such, the plasmon dispersion and damping width are calculated for a two-component plasma in which the electron subsystem is completely degenerate. Strong deviations of the plasmon resonance position due to the electron-electron correlations are observed at increasing Brueckner parameters r(s). These results are of paramount importance for the interpretation of collective Thomson scattering spectra, as the determination of the free electron density from the plasmon resonance position requires a precise theory of the plasmon dispersion. Implications of different approximations for the electron-electron correlation, i.e., different forms of the one-component local-field correction, are discussed.

  20. Atmospheric scattering corrections to solar radiometry

    NASA Technical Reports Server (NTRS)

    Box, M. A.; Deepak, A.

    1979-01-01

    Whenever a solar radiometer is used to measure direct solar radiation, some diffuse sky radiation invariably enters the detector's field of view along with the direct beam. Therefore, the atmospheric optical depth obtained by the use of Bouguer's transmission law (also called the Beer-Lambert law), which is valid only for direct radiation, needs to be corrected to take account of the scattered radiation. This paper discusses the correction factors needed to account for the diffuse (i.e., singly and multiply scattered) radiation and the algorithms developed for retrieving aerosol size distributions from such measurements. For a radiometer with a small field of view (half-cone angle of less than 5 deg) and relatively clear skies (optical depths less than 0.4), it is shown that the total diffuse contribution represents approximately 1% of the total intensity.
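
    A minimal sketch of the correction being discussed: retrieve the optical depth with Bouguer's law after removing an estimated diffuse fraction from the measured signal (here the roughly 1% figure quoted above, supplied by the caller); the voltages and airmass are illustrative.

        import numpy as np

        def optical_depth_bouguer(V, V0, airmass):
            """Beer-Lambert/Bouguer retrieval: tau = ln(V0/V) / m (direct beam only)."""
            return np.log(V0 / V) / airmass

        def optical_depth_corrected(V, V0, airmass, diffuse_fraction):
            """Remove the estimated diffuse contribution entering the field of
            view before applying Bouguer's law."""
            return np.log(V0 / (V * (1.0 - diffuse_fraction))) / airmass

        tau_raw = optical_depth_bouguer(0.50, 1.20, airmass=2.0)
        tau_cor = optical_depth_corrected(0.50, 1.20, airmass=2.0, diffuse_fraction=0.01)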

  1. Stopping power of dense plasmas: The collisional method and limitations of the dielectric formalism.

    PubMed

    Clauser, C F; Arista, N R

    2018-02-01

    We present a study of the stopping power of plasmas using two main approaches: the collisional (scattering theory) and the dielectric formalisms. In the former case, we use a semiclassical method based on quantum scattering theory. In the latter case, we use the full description given by the extension of the Lindhard dielectric function for plasmas of all degeneracies. We compare these two theories and show that the dielectric formalism has limitations when it is used for slow heavy ions or atoms in dense plasmas. We present a study of these limitations and show the regimes where the dielectric formalism can be used, with appropriate corrections to include the usual quantum and classical limits. On the other hand, the semiclassical method shows the correct behavior for all plasma conditions and projectile velocity and charge. We consider different models for the ion charge distributions, including bare and dressed ions as well as neutral atoms.

  2. Ultrasound scatter in heterogeneous 3D microstructures: Parameters affecting multiple scattering

    NASA Astrophysics Data System (ADS)

    Engle, B. J.; Roberts, R. A.; Grandin, R. J.

    2018-04-01

    This paper reports on a computational study of ultrasound propagation in heterogeneous metal microstructures. Random spatial fluctuations in elastic properties over a range of length scales relative to the ultrasound wavelength can give rise to scatter-induced attenuation, backscatter noise, and phase front aberration. It is of interest to quantify the dependence of these phenomena on the microstructure parameters, for the purpose of quantifying deleterious consequences on flaw detectability, and for the purpose of material characterization. Valuable tools for the estimation of microstructure parameters (e.g. grain size) through analysis of ultrasound backscatter have been developed based on approximate weak-scattering models. While useful, it is understood that these tools display inherent inaccuracy when multiple scattering phenomena significantly contribute to the measurement. It is the goal of this work to supplement weak scattering model predictions with corrections derived through application of an exact computational scattering model to explicitly prescribed microstructures. The scattering problem is formulated as a volume integral equation (VIE) displaying a convolutional Green-function-derived kernel. The VIE is solved iteratively employing FFT-based convolution. Realizations of random microstructures are specified on the micron scale using statistical property descriptions (e.g. grain size and orientation distributions), which are then spatially filtered to provide rigorously equivalent scattering media on a length scale relevant to ultrasound propagation. Scattering responses from ensembles of media representations are averaged to obtain the mean and variance of quantities such as attenuation and backscatter noise levels, as a function of microstructure descriptors. The computational approach will be summarized, and examples of application will be presented.
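
    A generic scalar sketch of solving a convolutional volume integral equation with FFT-based convolution is given below; a simple Born-series fixed-point iteration is used, which converges only for sufficiently weak contrast, and the Green's-function kernel and contrast field are placeholders for the elastic-wave quantities used in the study.

        import numpy as np

        def solve_vie_fixed_point(u_inc, greens_kernel, contrast, n_iter=100, tol=1e-8):
            """Iterate u = u_inc + G * (V u), with * a spatial convolution
            evaluated via FFTs (periodic boundaries assumed)."""
            G_hat = np.fft.fftn(greens_kernel)
            u = u_inc.astype(complex).copy()
            for _ in range(n_iter):
                scattered = np.fft.ifftn(G_hat * np.fft.fftn(contrast * u))
                u_new = u_inc + scattered
                if np.max(np.abs(u_new - u)) < tol * np.max(np.abs(u_new)):
                    return u_new
                u = u_new
            return u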

  3. Relativistic corrections to the multiple scattering effect on the Sunyaev-Zel'dovich effect in the isotropic approximation

    NASA Astrophysics Data System (ADS)

    Itoh, Naoki; Kawana, Youhei; Nozawa, Satoshi; Kohyama, Yasuharu

    2001-10-01

    We extend the formalism for the calculation of the relativistic corrections to the Sunyaev-Zel'dovich effect for clusters of galaxies and include the multiple scattering effects in the isotropic approximation. We present the results of the calculations by the Fokker-Planck expansion method as well as by direct numerical integration of the collision term of the Boltzmann equation. The multiple scattering contribution is found to be very small compared with the single scattering contribution. For high-temperature galaxy clusters with kBTe ~ 15 keV, the ratio of the two contributions is -0.2 per cent in the Wien region. In the Rayleigh-Jeans region the ratio is -0.03 per cent. Therefore the multiple scattering contribution can safely be neglected for the observed galaxy clusters.

  4. Generalization of the Hartree-Fock approach to collision processes

    NASA Astrophysics Data System (ADS)

    Hahn, Yukap

    1997-06-01

    The conventional Hartree and Hartree-Fock approaches for bound states are generalized to treat atomic collision processes. All the single-particle orbitals, for both bound and scattering states, are determined simultaneously by requiring full self-consistency. This generalization is achieved by introducing two Ansätze: (a) the weak asymptotic boundary condition, which maintains the correct scattering energy and target orbitals with the correct number of nodes, and (b) square-integrable amputated scattering functions to generate self-consistent field (SCF) potentials for the target orbitals. The exact initial target and final-state asymptotic wave functions are not required and thus need not be specified a priori, as they are determined simultaneously by the SCF iterations. To check the asymptotic behavior of the solution, the theory is applied to elastic electron-hydrogen scattering at low energies. The solution is found to be stable, and the weak asymptotic condition is sufficient to produce the correct scattering amplitudes. The SCF potential for the target orbital shows strong penetration by the projectile electron during the collision, but the exchange term tends to restore the original form. Potential applications of this extension are discussed, including the treatment of ionization and shake-off processes.

  5. A Vegetation Correction Methodology for Time Series Based Soil Moisture Retrieval From C-band Radar Observations

    NASA Technical Reports Server (NTRS)

    Joseph, Alicia T.; O'Neil, P. E.; vanderVelde, R.; Gish, T.

    2008-01-01

    A methodology is presented to correct backscatter (sigma(sup 0)) observations for the effect of vegetation. The proposed methodology is based on the concept that the ratio of the surface scattering over the total amount of scattering (sigma(sup 0)(sub soil)/sigma(sup 0)) is affected only by the vegetation and can be described as a function of the vegetation water content; the soil backscatter sigma(sup 0)(sub soil) itself is not influenced by vegetation. Under bare soil conditions (sigma(sup 0)(sub soil)/sigma(sup 0)) equals 1. Under low to moderate biomass and soil moisture conditions, vegetation affects the observed sigma(sup 0) through absorption of the surface scattering and the contribution of direct scattering by the vegetation itself. Therefore, the contribution of the surface scattering is smaller than the observed total amount of scattering and decreases as the biomass increases. For dense canopies, scattering interactions between the soil surface and vegetation elements (e.g. leaves and stems) also become significant. Because these higher order scattering mechanisms are influenced by the soil surface, an increase in (sigma(sup 0)(sub soil)/sigma(sup 0)) may be observed as the biomass increases under densely vegetated conditions. This methodology is applied within the framework of a time series based approach for the retrieval of soil moisture. The data set used for this investigation was collected during a campaign conducted at USDA's Optimizing Production Inputs for Economic and Environmental Enhancement (OPE-3) experimental site in Beltsville, Maryland (USA). This campaign took place during the corn growth cycle from May 10th to October 2nd, 2002. In this period the corn crops reached a vegetation water content of 5.1 kg m(exp -2) at peak biomass and a soil moisture range varying between 0.00 and 0.26 cubic cm/cubic cm. One of the deployed microwave instruments was a multi-frequency (C-band (4.75 GHz) and L-band (1.6 GHz)) quad-polarized (HH, HV, VV, VH) radar mounted on a 20-meter-long boom. In the OPE-3 field campaign, radar observations were collected once a week at nominal times of 8 am, 10 am, 12 noon and 2 pm. During each data run the radar acquired sixty independent measurements within an azimuth of 120 degrees from a boom height of 12.2 m and at three different incidence angles (15, 35, and 55 degrees). The sixty observations were averaged to provide one backscatter value for the study area, and its accuracy is estimated to be ±1.0 dB. For this investigation the C-band observations have been used. Application of the proposed methodology to the selected data set showed a well-defined relationship between (sigma(sup 0)(sub soil)/sigma(sup 0)) and the vegetation water content. It is found that this relationship can be described with two experimentally determined parameters, which depend on the sensing configuration (e.g. incidence angle and polarization). Through application of the proposed vegetation correction methodology and the obtained parameterizations, the soil moisture retrieval accuracy within the framework of a time series based approach is improved from 0.033 to 0.032 cubic cm/cubic cm, from 0.049 to 0.033 cubic cm/cubic cm and from 0.079 to 0.047 cubic cm/cubic cm for incidence angles of 15, 35 and 55 degrees, respectively. The improvement in soil moisture retrieval due to vegetation correction is greater at larger incidence angles, due to the increased path length and larger vegetation effects on the surface signal at those angles.
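
    The abstract states that the ratio sigma(sup 0)(sub soil)/sigma(sup 0) follows a two-parameter function of vegetation water content but does not give its form; the sketch below assumes a water-cloud-like exponential shape and made-up observations purely to show how such parameters could be fitted and then used to recover the surface backscatter.

        import numpy as np
        from scipy.optimize import curve_fit

        def surface_fraction(vwc, a, b):
            """Assumed two-parameter form for sigma0_soil / sigma0 versus
            vegetation water content (kg m^-2); equals 1 for bare soil."""
            return a * np.exp(-b * vwc) + (1.0 - a)

        # Hypothetical observations of the ratio at known VWC values
        vwc_obs = np.array([0.0, 1.0, 2.5, 4.0, 5.1])
        ratio_obs = np.array([1.00, 0.72, 0.55, 0.49, 0.52])
        (a, b), _ = curve_fit(surface_fraction, vwc_obs, ratio_obs, p0=(0.6, 0.8))

        # Recover the surface backscatter from an observed total backscatter
        sigma0_total = 10 ** (-12.0 / 10.0)               # -12 dB, linear units
        sigma0_soil = sigma0_total * surface_fraction(3.0, a, b)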

  6. On the far-field computation of acoustic radiation forces.

    PubMed

    Martin, P A

    2017-10-01

    It is known that the steady acoustic radiation force on a scatterer due to incident time-harmonic waves can be calculated by evaluating certain integrals of velocity potentials over a sphere surrounding the scatterer. The goal is to evaluate these integrals using far-field approximations and appropriate limits. Previous derivations are corrected, clarified, and generalized. Similar corrections are made to textbook derivations of optical theorems.

  7. Corrections on energy spectrum and scatterings for fast neutron radiography at NECTAR facility

    NASA Astrophysics Data System (ADS)

    Liu, Shu-Quan; Bücherl, Thomas; Li, Hang; Zou, Yu-Bin; Lu, Yuan-Rong; Guo, Zhi-Yu

    2013-11-01

    Distortions caused by the neutron spectrum and scattered neutrons are major problems in fast neutron radiography and should be considered in order to improve image quality. This paper puts emphasis on the removal of these image distortions and deviations for fast neutron radiography performed at the NECTAR facility of the research reactor FRM-II at Technische Universität München (TUM), Germany. The NECTAR energy spectrum is analyzed and established to correct the influence of the neutron spectrum, and the Point Scattered Function (PScF) simulated by the Monte-Carlo program MCNPX is used to evaluate scattering effects from the object and improve image quality. The analysis results demonstrate the effectiveness of the above two corrections.

  8. SU-D-206-04: Iterative CBCT Scatter Shading Correction Without Prior Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, Y; Wu, P; Mao, T

    2016-06-15

    Purpose: To estimate and remove the scatter contamination in the acquired projections of cone-beam CT (CBCT), in order to suppress the shading artifacts and improve the image quality without prior information. Methods: The uncorrected CBCT images containing shading artifacts are reconstructed by applying the standard FDK algorithm to the CBCT raw projections. The uncorrected image is then segmented to generate an initial template image. To estimate the scatter signal, the differences are calculated by subtracting the simulated projections of the template image from the raw projections. Since scatter signals are dominantly continuous and low-frequency in the projection domain, they are estimated by low-pass filtering the difference signals and subtracted from the raw CBCT projections to achieve the scatter correction. Finally, the corrected CBCT image is reconstructed from the corrected projection data. Since an accurate template image is not readily segmented from the uncorrected CBCT image, the proposed scheme is iterated until the produced template no longer changes. Results: The proposed scheme is evaluated on the Catphan©600 phantom data and CBCT images acquired from a pelvis patient. The results show that shading artifacts have been effectively suppressed by the proposed method. Using multi-detector CT (MDCT) images as reference, quantitative analysis is performed to measure the quality of the corrected images. Compared to images without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 50 HU and increases the spatial uniformity. Conclusion: An iterative strategy that does not rely on prior information is proposed in this work to remove the shading artifacts due to scatter contamination in the projection domain. The method is evaluated in phantom and patient studies and the results show that the image quality is remarkably improved. The proposed method is efficient and practical for addressing the poor image quality of CBCT images. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001) and the National High-tech R&D Program for Young Scientists by the Ministry of Science and Technology of China (Grant No. 2015AA020917).
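
    The iteration described in this record can be outlined as below; fdk_reconstruct, forward_project and segment_template are placeholders for the user's own reconstruction, projector and segmentation routines (no specific toolkit is implied), and the Gaussian low-pass filter width is an assumption.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def shading_correction(raw_proj, fdk_reconstruct, forward_project,
                               segment_template, sigma=20.0, n_iter=5):
            """Iterative scatter/shading correction without prior images.
            raw_proj has shape (n_views, rows, cols)."""
            proj = raw_proj.astype(float).copy()
            for _ in range(n_iter):
                image = fdk_reconstruct(proj)            # current reconstruction
                template = segment_template(image)       # piecewise-constant template
                diff = raw_proj - forward_project(template)
                # scatter is smooth and low-frequency within each projection
                scatter = gaussian_filter(diff, sigma=(0, sigma, sigma))
                proj = np.clip(raw_proj - scatter, 0.0, None)
            return fdk_reconstruct(proj)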

  9. Laser-plasma interactions in magnetized environment

    NASA Astrophysics Data System (ADS)

    Shi, Yuan; Qin, Hong; Fisch, Nathaniel J.

    2018-05-01

    Propagation and scattering of lasers present new phenomena and applications when the plasma medium becomes strongly magnetized. With mega-Gauss magnetic fields, scattering of optical lasers already becomes manifestly anisotropic. Special angles exist where coherent laser scattering is either enhanced or suppressed, as we demonstrate using a cold-fluid model. Consequently, by aiming laser beams at special angles, one may be able to optimize laser-plasma coupling in magnetized implosion experiments. In addition, magnetized scattering can be exploited to improve the performance of plasma-based laser pulse amplifiers. Using the magnetic field as an extra control variable, it is possible to produce optical pulses of higher intensity, as well as compress UV and soft x-ray pulses beyond the reach of other methods. In even stronger giga-Gauss magnetic fields, laser-plasma interaction enters a relativistic-quantum regime. Using quantum electrodynamics, we compute a modified wave dispersion relation, which enables correct interpretation of Faraday rotation measurements of strong magnetic fields.

  10. Bistatic scattering from a cone frustum

    NASA Technical Reports Server (NTRS)

    Ebihara, W.; Marhefka, R. J.

    1986-01-01

    The bistatic scattering from a perfectly conducting cone frustum is investigated using the Geometrical Theory of Diffraction (GTD). The first-order GTD edge-diffraction solution has been extended by correcting for its failure in the specular region off the curved surface and in the rim-caustic regions of the endcaps. The corrections are accomplished by the use of transition functions which are developed and introduced into the diffraction coefficients. Theoretical results are verified in the principal plane by comparison with the moment method solution and experimental measurements. The resulting solution for the scattered fields is accurate, easy to apply, and fast to compute.

  11. Single-Inclusive Jet Production In Electron-Nucleon Collisions Through Next-To-Next-To-Leading Order In Perturbative QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abelof, Gabriel; Boughezal, Radja; Liu, Xiaohui

    2016-10-17

    We compute the O(α_s^2) perturbative corrections to inclusive jet production in electron-nucleon collisions. This process is of particular interest to the physics program of a future Electron Ion Collider (EIC). We include all relevant partonic processes, including deep-inelastic scattering contributions, photon-initiated corrections, and parton-parton scattering terms that first appear at this order. Upon integration over the final-state hadronic phase space we validate our results for the deep-inelastic corrections against the known next-to-next-to-leading order (NNLO) structure functions. Our calculation uses the N-jettiness subtraction scheme for performing higher-order computations, and allows for a completely differential description of the deep-inelastic scattering process. We describe the application of this method to inclusive jet production in detail, and present phenomenological results for the proposed EIC. The NNLO corrections have a non-trivial dependence on the jet kinematics and arise from an intricate interplay between all contributing partonic channels.

  12. Correction of the spectral calibration of the Joint European Torus core light detecting and ranging Thomson scattering diagnostic using ray tracing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hawke, J.; Scannell, R.; Maslov, M.

    2013-10-15

    This work isolated the cause of the observed discrepancy between the electron temperature (T_e) measurements before and after the JET Core LIDAR Thomson Scattering (TS) diagnostic was upgraded. In the upgrade process, stray light filters positioned just before the detectors were removed from the system. Modelling showed that the shift imposed on the stray light filters' transmission functions due to the variations in the incidence angles of the collected photons impacted plasma measurements. To correct for this identified source of error, correction factors were developed using ray tracing models for the calibration and operational states of the diagnostic. The application of these correction factors resulted in an increase in the observed T_e and the partial, if not complete, removal of the observed discrepancy in the measured T_e between the JET core LIDAR TS diagnostic, High Resolution Thomson Scattering, and the Electron Cyclotron Emission diagnostics.

  13. A new method for spatial structure detection of complex inner cavities based on 3D γ-photon imaging

    NASA Astrophysics Data System (ADS)

    Xiao, Hui; Zhao, Min; Liu, Jiantang; Liu, Jiao; Chen, Hao

    2018-05-01

    This paper presents a new three-dimensional (3D) imaging method for detecting the spatial structure of a complex inner cavity based on positron annihilation and γ-photon detection. This method first labels a carrier solution with a radionuclide and injects it into the inner cavity, where positrons are generated. Subsequently, γ-photons are released from positron annihilation, and a γ-photon detector ring is used to record them. Finally, two-dimensional (2D) image slices of the inner cavity are constructed by the ordered-subset expectation maximization scheme, and the 2D slices are merged into a 3D image of the inner cavity. To eliminate artifacts in the reconstructed image due to scattered γ-photons, a novel angle-traversal model is proposed for γ-photon single-scattering correction, in which the path of the singly scattered γ-photon is analyzed from a spatial geometry perspective. Two experiments are conducted to verify the effectiveness of the proposed correction model and the advantage of the proposed testing method in detecting the spatial structure of the inner cavity, including the distribution of a gas-liquid multi-phase mixture inside the cavity. The two experiments indicate the potential of the proposed method as a new tool for accurately delineating the inner structures of complex industrial parts.

  14. Evidence for using Monte Carlo calculated wall attenuation and scatter correction factors for three styles of graphite-walled ion chamber.

    PubMed

    McCaffrey, J P; Mainegra-Hing, E; Kawrakow, I; Shortt, K R; Rogers, D W O

    2004-06-21

    The basic equation for establishing a 60Co air-kerma standard based on a cavity ionization chamber includes a wall correction term that corrects for the attenuation and scatter of photons in the chamber wall. For over a decade, the validity of the wall correction terms determined by extrapolation methods (K(w)K(cep)) has been strongly challenged by Monte Carlo (MC) calculation methods (K(wall)). Using the linear extrapolation method with experimental data, K(w)K(cep) was determined in this study for three different styles of primary-standard-grade graphite ionization chamber: cylindrical, spherical and plane-parallel. For measurements taken with the same 60Co source, the air-kerma rates for these three chambers, determined using extrapolated K(w)K(cep) values, differed by up to 2%. The MC code 'EGSnrc' was used to calculate the values of K(wall) for these three chambers. Use of the calculated K(wall) values gave air-kerma rates that agreed within 0.3%. The accuracy of this code was affirmed by its reliability in modelling the complex structure of the response curve obtained by rotation of the non-rotationally symmetric plane-parallel chamber. These results demonstrate that the linear extrapolation technique leads to errors in the determination of air-kerma.

  15. Septal penetration correction in I-131 imaging following thyroid cancer treatment

    NASA Astrophysics Data System (ADS)

    Barrack, Fiona; Scuffham, James; McQuaid, Sarah

    2018-04-01

    Whole body gamma camera images acquired after I-131 treatment for thyroid cancer can suffer from collimator septal penetration artefacts because of the high energy of the gamma photons. This results in the appearance of ‘spoke’ artefacts, emanating from regions of high activity concentration, caused by the non-isotropic attenuation of the collimator. Deconvolution has the potential to reduce such artefacts, by taking into account the non-Gaussian point-spread-function (PSF) of the system. A Richardson–Lucy deconvolution algorithm, with and without prior scatter-correction, was tested as a method of reducing septal penetration in planar gamma camera images. Phantom images (hot spheres within a warm background) were acquired and deconvolution using a measured PSF was applied. The results were evaluated through region-of-interest and line profile analysis to determine the success of artefact reduction and the optimal number of deconvolution iterations and damping parameter (λ). Without scatter-correction, the optimal results were obtained with 15 iterations and λ = 0.01, with the counts in the spokes reduced to 20% of the original value, indicating a substantial decrease in their prominence. When a triple-energy-window scatter-correction was applied prior to deconvolution, the optimal results were obtained with six iterations and λ = 0.02, which reduced the spoke counts to 3% of the original value. The prior application of scatter-correction therefore produced the best results, with a marked change in the appearance of the images. The optimal settings were then applied to six patient datasets to demonstrate their utility in the clinical setting. In all datasets, spoke artefacts were substantially reduced after the application of scatter-correction and deconvolution, with the mean spoke count being reduced to 10% of the original value. This indicates that deconvolution is a promising technique for septal penetration artefact reduction that could potentially improve the diagnostic accuracy of I-131 imaging. Novelty and significance: This work has demonstrated that scatter correction combined with deconvolution can substantially reduce the appearance of septal penetration artefacts in I-131 phantom and patient gamma camera planar images, enabling improved visualisation of the I-131 distribution. Deconvolution with a symmetric PSF has previously been used to reduce artefacts in gamma camera images; however, this work details the novel use of an asymmetric PSF to remove the angularly dependent septal penetration artefacts.
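    The core of the approach above is a Richardson-Lucy iteration with a measured, possibly asymmetric PSF. The sketch below illustrates the idea; the damping step is a simplified stand-in for the damping parameter λ reported above (the authors' exact damped variant is not specified), and a TEW scatter estimate would be subtracted from `image` beforehand:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=15, damping=0.01):
    """Richardson-Lucy deconvolution of a 2D planar image with a measured 2D PSF."""
    estimate = np.full_like(image, image.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]                      # flipped PSF for the back-projection step
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same") + 1e-12
        ratio = image / blurred
        update = fftconvolve(ratio, psf_mirror, mode="same")
        # Crude damping: ignore multiplicative updates that deviate from 1 by less than
        # the threshold (an assumed, simplified stand-in for the paper's lambda).
        update = np.where(np.abs(update - 1.0) < damping, 1.0, update)
        estimate *= update
    return estimate
```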

  16. Surface areas of fractally rough particles studied by scattering

    NASA Astrophysics Data System (ADS)

    Hurd, Alan J.; Schaefer, Dale W.; Smith, Douglas M.; Ross, Steven B.; Le Méhauté, Alain; Spooner, Steven

    1989-05-01

    The small-angle scattering from fractally rough surfaces has the potential to give information on the surface area at a given resolution. By use of quantitative neutron and x-ray scattering, a direct comparison of surface areas of fractally rough powders was made between scattering and adsorption techniques. This study supports a recently proposed correction to the theory for scattering from fractal surfaces. In addition, the scattering data provide an independent calibration of molecular adsorbate areas.

  17. Accuracy of Rhenium-188 SPECT/CT activity quantification for applications in radionuclide therapy using clinical reconstruction methods.

    PubMed

    Esquinas, Pedro L; Uribe, Carlos F; Gonzalez, M; Rodríguez-Rodríguez, Cristina; Häfeli, Urs O; Celler, Anna

    2017-07-20

    The main applications of 188Re in radionuclide therapies include trans-arterial liver radioembolization and palliation of painful bone metastases. In order to optimize 188Re therapies, the accurate determination of the radiation dose delivered to tumors and organs at risk is required. Single photon emission computed tomography (SPECT) can be used to perform such dosimetry calculations. However, the accuracy of dosimetry estimates strongly depends on the accuracy of activity quantification in 188Re images. In this study, we performed a series of phantom experiments aiming to investigate the accuracy of activity quantification for 188Re SPECT using high-energy and medium-energy collimators. Objects of different shapes and sizes were scanned in Air, non-radioactive water (Cold-water) and water with activity (Hot-water). The ordered subset expectation maximization algorithm with clinically available corrections (CT-based attenuation, triple-energy window (TEW) scatter, and resolution recovery) was used. For high activities, dead-time corrections were applied. The accuracy of activity quantification was evaluated using the ratio of the reconstructed activity in each object to the object's true activity. Each object's activity was determined with three segmentation methods: a 1% fixed threshold (for cold background), a 40% fixed threshold and a CT-based segmentation. Additionally, the activity recovered in the entire phantom, as well as the average activity concentration of the phantom background, were compared to their true values. Finally, Monte-Carlo simulations of a commercial γ-camera were performed to investigate the accuracy of the TEW method. Good quantification accuracy (errors <10%) was achieved for the entire phantom, the hot-background activity concentration and for objects in cold background segmented with a 1% threshold. However, the accuracy of activity quantification for objects segmented with the 40% threshold or CT-based methods decreased (errors >15%), mostly due to partial-volume effects. The Monte-Carlo simulations confirmed that TEW scatter correction applied to 188Re, although practical, yields only approximate estimates of the true scatter.
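    The TEW correction evaluated above follows the standard trapezoidal estimate built from two narrow sub-windows adjacent to the photopeak. A minimal sketch (the array and window names are illustrative, not tied to any camera software):

```python
import numpy as np

def tew_scatter_estimate(lower, upper, w_lower, w_upper, w_peak):
    """Triple-energy-window (TEW) scatter estimate for the photopeak window.

    lower, upper : counts (per projection bin) in the narrow sub-windows just below
                   and above the photopeak window
    w_*          : energy widths (keV) of the respective windows
    """
    scatter = (lower / w_lower + upper / w_upper) * w_peak / 2.0
    return np.clip(scatter, 0, None)   # the scatter estimate cannot be negative

# Primary counts are then approximated as the photopeak counts minus the TEW estimate:
# primary = np.clip(peak_counts - tew_scatter_estimate(lower, upper, 6.0, 6.0, 20.0), 0, None)
```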

  18. The effects of compensation for scatter, lead X-rays, and high-energy contamination on tumor detectability and activity estimation in Ga-67 imaging

    NASA Astrophysics Data System (ADS)

    Fakhri, G. El; Kijewski, M. F.; Maksud, P.; Moore, S. C.

    2003-06-01

    Compton scatter, lead X-rays, and high-energy contamination are major factors affecting image quality in Ga-67 imaging. Scattered photons detected in one photopeak window include photons exiting the patient at energies within the photopeak, as well as higher energy photons which have interacted in the collimator and crystal and lost energy. Furthermore, lead X-rays can be detected in the main energy photopeak (93 keV). We have previously developed two energy-based methods, based on artificial neural networks (ANN) and on a generalized spectral (GS) approach to compensate for scatter, high-energy contamination, and lead X-rays in Ga-67 imaging. For comparison, we considered also the projections that would be acquired in the clinic using the optimal energy windows (WIN) we have reported previously for tumor detection and estimation tasks for the 93, 185, and 300 keV photopeaks. The aim of the present study is to evaluate under realistic conditions the impact of these phenomena and their compensation on tumor detection and estimation tasks in Ga-67 imaging. ANN and GS were compared on the basis of performance of a three-channel Hotelling observer (CHO), in detecting the presence of a spherical tumor of unknown size embedded in an anatomic background as well as on the basis of estimation of tumor activity. Projection datasets of spherical tumors ranging from 2 to 6 cm in diameter, located at several sites in an anthropomorphic torso phantom, were simulated using a Monte Carlo program that modeled all photon interactions in the patient as well as in the collimator and the detector for all decays between 91 and 888 keV. One hundred realistic noise realizations were generated from each very-low-noise simulated projection dataset. The presence of scatter degraded both CHO signal-to-noise ratio (SNR) and estimation accuracy. On average, the presence of scatter led to a 12% reduction in CHO SNR. Correcting for scatter further diminished CHO SNR but to a lesser extent with ANN (5% reduction) than with GS (12%). Both scatter corrections improved performance in activity estimation. ANN yielded better precision (1.8% relative standard deviation) than did GS (4%) but greater average bias (5.1% with ANN, 3.6% with GS).
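    For the detection task above, observer performance is summarized by the Hotelling SNR computed from channel outputs. A generic sketch of that computation (the channel templates and data layout are left to the reader; this is not the study's specific three-channel implementation):

```python
import numpy as np

def channel_outputs(images, channels):
    """Project images (n_images x n_pixels) onto channel templates (n_channels x n_pixels)."""
    return images @ channels.T

def hotelling_snr(v_signal, v_background):
    """Hotelling observer SNR from channel outputs of signal-present / signal-absent images."""
    dv = v_signal.mean(axis=0) - v_background.mean(axis=0)            # mean channel difference
    k = 0.5 * (np.cov(v_signal, rowvar=False) + np.cov(v_background, rowvar=False))
    return float(np.sqrt(dv @ np.linalg.solve(k, dv)))                # sqrt(dv' K^-1 dv)
```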

  19. Corrections for the geometric distortion of the tube detectors on SANS instruments at ORNL

    DOE PAGES

    He, Lilin; Do, Changwoo; Qian, Shuo; ...

    2014-11-25

    The small-angle neutron scattering instruments at the Oak Ridge National Laboratory's High Flux Isotope Reactor had their area detectors upgraded from the large, single-volume crossed-wire detectors originally installed to staggered arrays of linear position-sensitive detectors (LPSDs). The specific geometry of the LPSD array requires that the approaches to data reduction traditionally employed be modified. Here, two methods for correcting the geometric distortion produced by the LPSD array are presented and compared. The first method applies a correction derived from a detector sensitivity measurement performed using the same configuration in which the samples are measured. In the second method, a solid angle correction is derived that can be applied to data collected in any instrument configuration during the data reduction process, in conjunction with a detector sensitivity measurement collected at a sufficiently long camera length where the geometric distortions are negligible. Both methods produce consistent results and yield a maximum deviation of corrected data from isotropic scattering samples of less than 5% for scattering angles up to a maximum of 35°. The results are broadly applicable to any SANS instrument employing LPSD array detectors, which will be increasingly common as instruments having higher incident flux are constructed at various neutron scattering facilities around the world.

  20. Spatial frequency spectrum of the x-ray scatter distribution in CBCT projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bootsma, G. J.; Verhaegen, F.; Department of Oncology, Medical Physics Unit, McGill University, Montreal, Quebec H3G 1A4

    2013-11-15

    Purpose: X-ray scatter is a source of significant image quality loss in cone-beam computed tomography (CBCT). The use of Monte Carlo (MC) simulations separating primary and scattered photons has allowed the structure and nature of the scatter distribution in CBCT to become better elucidated. This work seeks to quantify the structure and determine a suitable basis function for the scatter distribution by examining its spectral components using Fourier analysis. Methods: The scatter distribution projection data were simulated using a CBCT MC model based on the EGSnrc code. CBCT projection data, with separated primary and scatter signal, were generated for a 30.6 cm diameter water cylinder [single angle projection with varying axis-to-detector distance (ADD) and bowtie filters] and two anthropomorphic phantoms (head and pelvis, 360 projections sampled every 1°, with and without a compensator). The Fourier transform of the resulting scatter distributions was computed and analyzed both qualitatively and quantitatively. A novel metric called the scatter frequency width (SFW) is introduced to determine the scatter distribution's frequency content. The frequency content results are used to determine a set of basis functions, consisting of low-frequency sine and cosine functions, to fit and denoise the scatter distribution generated from MC simulations using a reduced number of photons and projections. The signal recovery is implemented using Fourier filtering (low-pass Butterworth filter) and interpolation. Estimates of the scatter distribution are used to correct and reconstruct simulated projections. Results: The spatial and angular frequencies are contained within a maximum frequency of 0.1 cm^-1 and 7/(2π) rad^-1 for the imaging scenarios examined, with these values varying depending on the object and imaging setup (e.g., ADD and compensator). These data indicate that spatial and angular sampling every 5 cm and π/7 rad (∼25°) can be used to properly capture the scatter distribution, with reduced sampling possible depending on the imaging scenario. Using a low-pass Butterworth filter, tuned with the SFW values, to denoise the scatter projection data generated from MC simulations using 10^6 photons resulted in an error reduction of greater than 85% for estimating scatter in single and multiple projections. Analysis showed that the use of a compensator helped reduce the error in estimating the scatter distribution from limited photon simulations by more than 37% when compared to the case without a compensator for the head and pelvis phantoms. Reconstructions of simulated head phantom projections corrected by the filtered and interpolated scatter estimates showed improvements in overall image quality. Conclusions: The spatial frequency content of the scatter distribution in CBCT is found to be contained within the low frequency domain. The frequency content is modulated both by object and imaging parameters (ADD and compensator). The low-frequency nature of the scatter distribution allows a limited set of sine and cosine basis functions to be used to accurately represent the scatter signal in the presence of noise and reduced data sampling, decreasing MC-based scatter estimation time. Compensator-induced modulation of the scatter distribution reduces the frequency content and improves the fitting results.
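    A minimal frequency-domain version of the low-pass Butterworth denoising step described above; the 0.1 cm^-1 cutoff mirrors the SFW value quoted, but the exact filter design used in the paper may differ:

```python
import numpy as np

def butterworth_lowpass(scatter_proj, cutoff=0.1, order=4, pixel_pitch=1.0):
    """Denoise a noisy MC scatter projection with a 2D low-pass Butterworth filter.

    cutoff is in cycles per unit length (e.g. cm^-1); pixel_pitch is in the same length unit.
    """
    ny, nx = scatter_proj.shape
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    response = 1.0 / (1.0 + (radius / cutoff) ** (2 * order))   # Butterworth magnitude response
    return np.fft.ifft2(np.fft.fft2(scatter_proj) * response).real
```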

  1. Extending 3D Near-Cloud Corrections from Shorter to Longer Wavelengths

    NASA Technical Reports Server (NTRS)

    Marshak, Alexander; Evans, K. Frank; Varnai, Tamas; Guoyong, Wen

    2014-01-01

    Satellite observations have shown a positive correlation between cloud amount and aerosol optical thickness (AOT) that can be explained by the humidification of aerosols near clouds, and/or by cloud contamination by sub-pixel size clouds and the cloud adjacency effect. The last effect may substantially increase reflected radiation in cloud-free columns, leading to overestimates in the retrieved AOT. For clear-sky areas near boundary layer clouds the main contribution to the enhancement of clear sky reflectance at shorter wavelengths comes from the radiation scattered into clear areas by clouds and then scattered to the sensor by air molecules. Because of the wavelength dependence of air molecule scattering, this process leads to a larger reflectance increase at shorter wavelengths, and can be corrected using a simple two-layer model. However, correcting only for molecular scattering skews spectral properties of the retrieved AOT. Kassianov and Ovtchinnikov proposed a technique that uses spectral reflectance ratios to retrieve AOT in the vicinity of clouds; they assumed that the cloud adjacency effect influences the spectral ratio between reflectances at two wavelengths less than it influences the reflectances themselves. This paper combines the two approaches: It assumes that the 3D correction for the shortest wavelength is known with some uncertainties, and then it estimates the 3D correction for longer wavelengths using a modified ratio method. The new approach is tested with 3D radiances simulated for 26 cumulus fields from Large-Eddy Simulations, supplemented with 40 aerosol profiles. The results showed that (i) for a variety of cumulus cloud scenes and aerosol profiles over ocean the 3D correction due to cloud adjacency effect can be extended from shorter to longer wavelengths and (ii) the 3D corrections for longer wavelengths are not very sensitive to unbiased random uncertainties in the 3D corrections at shorter wavelengths.

  2. Acquisition setting optimization and quantitative imaging for 124I studies with the Inveon microPET-CT system.

    PubMed

    Anizan, Nadège; Carlier, Thomas; Hindorf, Cecilia; Barbet, Jacques; Bardiès, Manuel

    2012-02-13

    Noninvasive multimodality imaging is essential for preclinical evaluation of the biodistribution and pharmacokinetics of radionuclide therapy and for monitoring tumor response. Imaging with nonstandard positron-emission tomography [PET] isotopes such as 124I is promising in that context but requires accurate activity quantification. The decay scheme of 124I implies an optimization of both acquisition settings and correction processing. The PET scanner investigated in this study was the Inveon PET/CT system dedicated to small animal imaging. The noise equivalent count rate [NECR], the scatter fraction [SF], and the gamma-prompt fraction [GF] were used to determine the best acquisition parameters for mouse- and rat-sized phantoms filled with 124I. An image-quality phantom as specified by the National Electrical Manufacturers Association NU 4-2008 protocol was acquired and reconstructed with two-dimensional filtered back projection, 2D ordered-subset expectation maximization [2DOSEM], and 3DOSEM with maximum a posteriori [3DOSEM/MAP] algorithms, with and without attenuation correction, scatter correction, and gamma-prompt correction (weighted uniform distribution subtraction). Optimal energy windows were established for the rat phantom (390 to 550 keV) and the mouse phantom (400 to 590 keV) by combining the NECR, SF, and GF results. The coincidence time window had no significant impact regarding the NECR curve variation. Activity concentration of 124I measured in the uniform region of an image-quality phantom was underestimated by 9.9% for the 3DOSEM/MAP algorithm with attenuation and scatter corrections, and by 23% with the gamma-prompt correction. Attenuation, scatter, and gamma-prompt corrections decreased the residual signal in the cold insert. The optimal energy windows were chosen with the NECR, SF, and GF evaluation. Nevertheless, an image quality and an activity quantification assessment were required to establish the most suitable reconstruction algorithm and corrections for 124I small animal imaging.

  3. Radiative-Transfer Modeling of Spectra of Densely Packed Particulate Media

    NASA Astrophysics Data System (ADS)

    Ito, G.; Mishchenko, M. I.; Glotch, T. D.

    2017-12-01

    Remote sensing measurements over a wide range of wavelengths from both ground- and space-based platforms have provided a wealth of data regarding the surfaces and atmospheres of various solar system bodies. With proper interpretations, important properties, such as composition and particle size, can be inferred. However, proper interpretation of such datasets can often be difficult, especially for densely packed particulate media with particle sizes on the order of the wavelength of light being used for remote sensing. Radiative transfer theory has often been applied to the study of densely packed particulate media like planetary regoliths and snow, but with difficulty, and here we continue to investigate radiative transfer modeling of spectra of densely packed media. We use the superposition T-matrix method to compute scattering properties of clusters of particles and capture the near-field effects important for dense packing. Then, the scattering parameters from the T-matrix computations are modified with the static structure factor correction, accounting for the dense packing of the clusters themselves. Using these corrected scattering parameters, reflectance (or emissivity via Kirchhoff's Law) is computed with the invariant imbedding solution to the radiative transfer equation. For this work we modeled the emissivity spectrum of the 3.3 µm particle size fraction of enstatite, representing some common mineralogical and particle size components of regoliths, in the mid-infrared wavelengths (5 - 50 µm). The modeled spectrum from the T-matrix method with static structure factor correction using moderate packing densities (filling factors of 0.1 - 0.2) produced better fits to the laboratory measurement of the corresponding spectrum than the spectrum modeled by the equivalent method without static structure factor correction. Future work will test the method of the superposition T-matrix and static structure factor correction combination for larger particle sizes and polydisperse clusters in search of the most effective modeling of spectra of densely packed particulate media.

  4. An improved dark-object subtraction technique for atmospheric scattering correction of multispectral data

    USGS Publications Warehouse

    Chavez, P.S.

    1988-01-01

    Digital analysis of remotely sensed data has become an important component of many earth-science studies. These data are often processed through a set of preprocessing or "clean-up" routines that includes a correction for atmospheric scattering, often called haze. Various methods to correct or remove the additive haze component have been developed, including the widely used dark-object subtraction technique. A problem with most of these methods is that the haze values for each spectral band are selected independently. This can create problems because atmospheric scattering is highly wavelength-dependent in the visible part of the electromagnetic spectrum and the scattering values are correlated with each other. Therefore, multispectral data such as from the Landsat Thematic Mapper and Multispectral Scanner must be corrected with haze values that are spectral band dependent. An improved dark-object subtraction technique is demonstrated that allows the user to select a relative atmospheric scattering model to predict the haze values for all the spectral bands from a selected starting band haze value. The improved method normalizes the predicted haze values for the different gain and offset parameters used by the imaging system. Examples of haze value differences between the old and improved methods for Thematic Mapper Bands 1, 2, 3, 4, 5, and 7 are 40.0, 13.0, 12.0, 8.0, 5.0, and 2.0 vs. 40.0, 13.2, 8.9, 4.9, 16.7, and 3.3, respectively, using a relative scattering model of a clear atmosphere. In one Landsat multispectral scanner image the haze value differences for Bands 4, 5, 6, and 7 were 30.0, 50.0, 50.0, and 40.0 for the old method vs. 30.0, 34.4, 43.6, and 6.4 for the new method using a relative scattering model of a hazy atmosphere. © 1988.
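    A hedged sketch of how a starting-band haze value can be propagated to the other bands with a relative scattering model and per-band gain/offset normalization, in the spirit of the improved method described above. The wavelengths are approximate TM band centres, the wavelength-exponent model is illustrative, and the calibration convention (radiance = gain × DN + offset) may differ from a given processing system:

```python
import numpy as np

# Approximate Landsat TM band-centre wavelengths in micrometres (illustrative values).
wavelengths = {"TM1": 0.485, "TM2": 0.560, "TM3": 0.660,
               "TM4": 0.830, "TM5": 1.650, "TM7": 2.215}

def predict_haze_dn(start_band, start_haze_dn, gains, offsets, exponent=-4.0):
    """Predict per-band haze DN from one starting-band haze DN using a relative
    scattering model of the form scattering ~ wavelength**exponent (a strongly
    negative exponent for a very clear atmosphere, smaller magnitude for hazier ones)."""
    start_rad = gains[start_band] * start_haze_dn + offsets[start_band]   # DN -> radiance
    haze_dn = {}
    for band, wl in wavelengths.items():
        rel = (wl / wavelengths[start_band]) ** exponent     # relative scattering factor
        rad = start_rad * rel                                # predicted haze radiance
        haze_dn[band] = (rad - offsets[band]) / gains[band]  # back to DN with band gain/offset
    return haze_dn

# Each band is then corrected by subtracting its predicted haze DN:
# corrected_band = np.clip(dn_band - haze_dn[band], 0, None)
```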

  5. Automated detection of esophageal dysplasia in in vivo optical coherence tomography images of the human esophagus

    NASA Astrophysics Data System (ADS)

    Kassinopoulos, Michalis; Dong, Jing; Tearney, Guillermo J.; Pitris, Costas

    2018-02-01

    Catheter-based Optical Coherence Tomography (OCT) devices allow real-time and comprehensive imaging of the human esophagus. Hence, they provide the potential to overcome some of the limitations of endoscopy and biopsy, allowing earlier diagnosis and better prognosis for esophageal adenocarcinoma patients. However, the large number of images produced during every scan makes manual evaluation of the data exceedingly difficult. In this study, we propose a fully automated tissue characterization algorithm, capable of discriminating normal tissue from Barrett's Esophagus (BE) and dysplasia through entire three-dimensional (3D) data sets, acquired in vivo. The method is based on both the estimation of the scatterer size of the esophageal epithelial cells, using the bandwidth of the correlation of the derivative (COD) method, and intensity-based characteristics. The COD method can effectively estimate the scatterer size of the esophageal epithelial cells in good agreement with the literature. As expected, both the mean scatterer size and its standard deviation increase with increasing severity of disease (i.e. from normal to BE to dysplasia). The differences in the distribution of scatterer size for each tissue type are statistically significant, with a p-value < 0.0001. However, the scatterer size by itself cannot be used to accurately classify the various tissues. With the addition of intensity-based statistics, the correct classification rates for all three tissue types range from 83 to 100% depending on the lesion size.

  6. Fine-tuning satellite-based rainfall estimates

    NASA Astrophysics Data System (ADS)

    Harsa, Hastuadi; Buono, Agus; Hidayat, Rahmat; Achyar, Jaumil; Noviati, Sri; Kurniawan, Roni; Praja, Alfan S.

    2018-05-01

    Rainfall datasets are available from various sources, including satellite estimates and ground observation. Ground observation locations are sparsely scattered. Therefore, the use of satellite estimates is advantageous, because satellite estimates can provide data in places where ground observations are not present. However, in general, satellite estimate data contain bias, since they are products of algorithms that transform the sensor response into rainfall values. Another cause may come from the number of ground observations used by the algorithms as the reference in determining the rainfall values. This paper describes the application of a bias correction method that modifies the satellite-based dataset by adding a number of ground observation locations that had not been used before by the algorithm. The bias correction was performed by applying a Quantile Mapping procedure between ground observation data and satellite estimate data. Since Quantile Mapping requires the mean and standard deviation of both the reference data and the data being corrected, an Inverse Distance Weighting scheme was applied beforehand to the mean and standard deviation of the observation data in order to provide a spatial field of these statistics, which were originally available only at scattered locations. This made it possible to provide a reference data point at the same location as each satellite estimate. The results show that the new dataset has a statistically better representation of the rainfall values recorded by the ground observations than the previous dataset.
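    Because the procedure above needs only the mean and standard deviation of the reference and satellite data, a parametric (moment-matching) quantile mapping plus a simple IDW spreading of the station statistics can serve as a sketch; this is an illustration consistent with the description, not the authors' exact implementation:

```python
import numpy as np

def gaussian_quantile_mapping(sat, mu_sat, sd_sat, mu_obs, sd_obs):
    """Bias-correct satellite rainfall by matching the first two moments
    (a parametric quantile mapping built from means and standard deviations)."""
    sat = np.asarray(sat, dtype=float)
    corrected = mu_obs + (sat - mu_sat) * (sd_obs / sd_sat)
    return np.clip(corrected, 0.0, None)            # rainfall cannot be negative

def idw_interpolate(xy_obs, values, xy_target, power=2.0):
    """Inverse Distance Weighting of station statistics (mean or std) onto target points."""
    xy_obs = np.asarray(xy_obs, float)
    xy_target = np.asarray(xy_target, float)
    d = np.linalg.norm(xy_target[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-6) ** power          # avoid division by zero at stations
    return (w @ np.asarray(values, float)) / w.sum(axis=1)
```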

  7. Scattering of Femtosecond Laser Pulses on the Negative Hydrogen Ion

    NASA Astrophysics Data System (ADS)

    Astapenko, V. A.; Moroz, N. N.

    2018-05-01

    Elastic scattering of ultrashort laser pulses (USLPs) on the negative hydrogen ion is considered. Results of calculations of the USLP scattering probability are presented and analyzed, as functions of the problem parameters, for pulses of two types: a corrected Gaussian pulse and a wavelet pulse without a carrier frequency.

  8. Interleaved segment correction achieves higher improvement factors in using genetic algorithm to optimize light focusing through scattering media

    NASA Astrophysics Data System (ADS)

    Li, Runze; Peng, Tong; Liang, Yansheng; Yang, Yanlong; Yao, Baoli; Yu, Xianghua; Min, Junwei; Lei, Ming; Yan, Shaohui; Zhang, Chunmin; Ye, Tong

    2017-10-01

    Focusing and imaging through scattering media have been proved possible with high-resolution wavefront shaping. A completely scrambled scattering field can be corrected by applying a correction phase mask on a phase-only spatial light modulator (SLM), thereby improving the focusing quality. The correction phase is often found by global searching algorithms, among which the Genetic Algorithm (GA) stands out for its parallel optimization process and high performance in noisy environments. However, the convergence of GA slows down gradually with the progression of optimization, causing the improvement factor of the optimization to eventually reach a plateau. In this report, we propose an interleaved segment correction (ISC) method that can significantly boost the improvement factor with the same number of iterations compared with the conventional all-segment correction method. In the ISC method, all the phase segments are divided into a number of interleaved groups; GA optimization procedures are performed individually and sequentially on each group of segments. The final correction phase mask is formed by applying the correction phases of all interleaved groups together on the SLM. The ISC method has proved significantly useful in practice because of its ability to achieve better improvement factors when noise is present in the system. We have also demonstrated that the imaging quality improves as better correction phases are found and applied on the SLM. Additionally, the ISC method lowers the demand on the dynamic range of detection devices. The proposed method holds potential in applications such as high-resolution imaging in deep tissue.
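    A toy sketch of the interleaved-segment idea: a random complex transmission vector stands in for the scattering medium, and a stripped-down GA (selection plus mutation, no crossover) optimizes one interleaved group of SLM segments at a time. All model details here are illustrative assumptions, not the authors' experimental parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scattering model: the focal field is the sum of segment phasors weighted by a
# random complex transmission vector (a stand-in for the real scattering medium).
n_segments = 64
t = rng.standard_normal(n_segments) + 1j * rng.standard_normal(n_segments)

def focus_intensity(phases):
    return np.abs(np.sum(t * np.exp(1j * phases))) ** 2

def with_group(phases, idx, values):
    out = phases.copy()
    out[idx] = values
    return out

def ga_optimize_group(phases, group_idx, pop=30, generations=40, mutate=0.1):
    """Optimize only the segments in group_idx while all other phases stay frozen."""
    population = rng.uniform(0, 2 * np.pi, size=(pop, group_idx.size))
    for _ in range(generations):
        fitness = np.array([focus_intensity(with_group(phases, group_idx, p)) for p in population])
        parents = population[np.argsort(fitness)[::-1][: pop // 2]]       # keep the fittest half
        children = parents[rng.integers(0, len(parents), pop - len(parents))].copy()
        children += mutate * rng.standard_normal(children.shape)          # Gaussian mutation
        population = np.vstack([parents, children]) % (2 * np.pi)
    best = max(population, key=lambda p: focus_intensity(with_group(phases, group_idx, p)))
    return with_group(phases, group_idx, best)

# Interleaved segment correction: split the segments into interleaved groups and
# optimize each group in turn; the final mask combines the phases of all groups.
phases = np.zeros(n_segments)
n_groups = 4
for g in range(n_groups):
    phases = ga_optimize_group(phases, np.arange(g, n_segments, n_groups))

print("improvement factor vs. flat phase:",
      focus_intensity(phases) / focus_intensity(np.zeros(n_segments)))
```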

  9. Charm-Quark Production in Deep-Inelastic Neutrino Scattering at Next-to-Next-to-Leading Order in QCD.

    PubMed

    Berger, Edmond L; Gao, Jun; Li, Chong Sheng; Liu, Ze Long; Zhu, Hua Xing

    2016-05-27

    We present a fully differential next-to-next-to-leading order calculation of charm-quark production in charged-current deep-inelastic scattering, with full charm-quark mass dependence. The next-to-next-to-leading order corrections in perturbative quantum chromodynamics are found to be comparable in size to the next-to-leading order corrections in certain kinematic regions. We compare our predictions with data on dimuon production in (anti)neutrino scattering from a heavy nucleus. Our results can be used to improve the extraction of the parton distribution function of a strange quark in the nucleon.

  10. Effects of atmospheric aerosols on scattering reflected visible light from earth resource features

    NASA Technical Reports Server (NTRS)

    Noll, K. E.; Tschantz, B. A.; Davis, W. T.

    1972-01-01

    The vertical variations in atmospheric light attenuation under ambient conditions were identified, and a method through which aerial photographs of earth features might be corrected to yield quantitative information about the actual features was provided. A theoretical equation was developed based on the Bouguer-Lambert extinction law and basic photographic theory.
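    For reference, the Bouguer-Lambert extinction law that the derivation above builds on can be written as follows (a standard statement of the law, not the paper's specific working equation):

```latex
% Attenuation of intensity along a slant atmospheric path of length L,
% with wavelength-dependent extinction coefficient k_ext:
I(\lambda) = I_0(\lambda)\, e^{-\int_0^L k_{\mathrm{ext}}(\lambda,\, s)\, \mathrm{d}s}
```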

  11. Markerless attenuation correction for carotid MRI surface receiver coils in combined PET/MR imaging

    NASA Astrophysics Data System (ADS)

    Eldib, Mootaz; Bini, Jason; Robson, Philip M.; Calcagno, Claudia; Faul, David D.; Tsoumpas, Charalampos; Fayad, Zahi A.

    2015-06-01

    The purpose of the study was to evaluate the effect of attenuation of MR coils on quantitative carotid PET/MR exams. Additionally, an automated attenuation correction method for flexible carotid MR coils was developed and evaluated. The attenuation of the carotid coil was measured by imaging a uniform water phantom injected with 37 MBq of 18F-FDG in a combined PET/MR scanner for 24 min with and without the coil. In the same session, an ultra-short echo time (UTE) image of the coil on top of the phantom was acquired. Using a combination of rigid and non-rigid registration, a CT-based attenuation map was registered to the UTE image of the coil for attenuation and scatter correction. After phantom validation, the effect of the carotid coil attenuation and the attenuation correction method were evaluated in five subjects. Phantom studies indicated that the overall loss of PET counts due to the coil was 6.3% with local region-of-interest (ROI) errors reaching up to 18.8%. Our registration method to correct for attenuation from the coil decreased the global error and local error (ROI) to 0.8% and 3.8%, respectively. The proposed registration method accurately captured the location and shape of the coil with a maximum spatial error of 2.6 mm. Quantitative analysis in human studies correlated with the phantom findings, but was dependent on the size of the ROI used in the analysis. MR coils result in significant error in PET quantification and thus attenuation correction is needed. The proposed strategy provides an operator-free method for attenuation and scatter correction for a flexible MRI carotid surface coil for routine clinical use.

  12. Correction of nonuniform attenuation and image fusion in SPECT imaging by means of separate X-ray CT.

    PubMed

    Kashiwagi, Toru; Yutani, Kenji; Fukuchi, Minoru; Naruse, Hitoshi; Iwasaki, Tadaaki; Yokozuka, Koichi; Inoue, Shinichi; Kondo, Shoji

    2002-06-01

    Improvements in image quality and quantitative measurement, and the addition of detailed anatomical structures, are important topics for single-photon emission tomography (SPECT). The goal of this study was to develop a practical system enabling both nonuniform attenuation correction and image fusion of SPECT images by means of high-performance X-ray computed tomography (CT). A SPECT system and a helical X-ray CT system were placed next to each other and linked with Ethernet. To avoid positional differences between the SPECT and X-ray CT studies, identical flat patient tables were used for both scans; body distortion was minimized with laser beams from the upper and lateral directions to detect the position of the skin surface. For the raw SPECT projection data, a scatter correction was performed with the triple energy window method. Image fusion of the X-ray CT and SPECT images was performed automatically by auto-registration of fiducial markers attached to the skin surface. After registration of the X-ray CT and SPECT images, an X-ray CT-derived attenuation map was created with the calibration curve for 99mTc. The SPECT images were then reconstructed with scatter and attenuation correction by means of a maximum likelihood expectation maximization algorithm. This system was evaluated in torso and cylindrical phantoms and in 4 patients referred for myocardial SPECT imaging with Tc-99m tetrofosmin. In the torso phantom study, the SPECT and X-ray CT images overlapped exactly on the computer display. After scatter and attenuation correction, the artifactual activity reduction in the inferior wall of the myocardium improved. Conversely, the increased activity around the torso surface and the lungs was reduced. In the abdomen, the liver activity, which should be uniform, recovered its uniformity after scatter and attenuation correction. The clinical study also showed good overlap of the cardiac and skin surface outlines on the fused SPECT and X-ray CT images. The effectiveness of the scatter and attenuation correction process was similar to that observed in the phantom study. Because the total time required for computer processing was less than 10 minutes, this method of attenuation correction and image fusion for SPECT images is expected to become popular in clinical practice.
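    The reconstruction step above uses a maximum likelihood expectation maximization (ML-EM) update, with attenuation entering through the CT-derived map. A generic sketch of the update rule (not the clinical implementation) is:

```python
import numpy as np

def mlem(projections, system_matrix, n_iter=20):
    """Maximum-likelihood expectation-maximization reconstruction.

    projections   : measured (scatter-corrected) counts, shape (n_bins,)
    system_matrix : A[i, j] = probability that activity in voxel j is detected in bin i;
                    attenuation correction enters by building A from the CT-derived mu-map.
    """
    a = np.asarray(system_matrix, float)
    y = np.asarray(projections, float)
    sensitivity = a.sum(axis=0) + 1e-12              # per-voxel sensitivity (sum over bins)
    x = np.ones(a.shape[1])                          # uniform initial estimate
    for _ in range(n_iter):
        expected = a @ x + 1e-12
        x *= (a.T @ (y / expected)) / sensitivity    # multiplicative ML-EM update
    return x
```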

  13. A simple method for correcting spatially resolved solar intensity oscillation observations for variations in scattered light

    NASA Technical Reports Server (NTRS)

    Jefferies, S. M.; Duvall, T. L., Jr.

    1991-01-01

    A measurement of the intensity distribution in an image of the solar disk will be corrupted by a spatial redistribution of the light that is caused by the earth's atmosphere and the observing instrument. A simple correction method is introduced here that is applicable for solar p-mode intensity observations obtained over a period of time in which there is a significant change in the scattering component of the point spread function. The method circumvents the problems incurred with an accurate determination of the spatial point spread function and its subsequent deconvolution from the observations. The method only corrects the spherical harmonic coefficients that represent the spatial frequencies present in the image and does not correct the image itself.

  14. Alterations to the relativistic Love-Franey model and their application to inelastic scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeile, J.R.

    The fictitious axial-vector and tensor mesons for the real part of the relativistic Love-Franey interaction are removed. In an attempt to make up for this loss, derivative couplings are used for the π and ρ mesons. Such derivative couplings require the introduction of axial-vector and tensor contact term corrections. Meson parameters are then fit to free nucleon-nucleon scattering data. The resulting fits are comparable to those of the relativistic Love-Franey model provided that the contact term corrections are included and the fits are weighted over the physically significant quantity of twice the tensor minus the axial-vector Lorentz invariants. Failure to include contact term corrections leads to poor fits at higher energies. The off-shell behavior of this model is then examined by looking at several applications from inelastic proton-nucleus scattering.

  15. Methods for modeling non-equilibrium degenerate statistics and quantum-confined scattering in 3D ensemble Monte Carlo transport simulations

    NASA Astrophysics Data System (ADS)

    Crum, Dax M.; Valsaraj, Amithraj; David, John K.; Register, Leonard F.; Banerjee, Sanjay K.

    2016-12-01

    Particle-based ensemble semi-classical Monte Carlo (MC) methods employ quantum corrections (QCs) to address quantum confinement and degenerate carrier populations to model tomorrow's ultra-scaled metal-oxide-semiconductor field-effect transistors. Here, we present the most complete treatment of quantum confinement and carrier degeneracy effects in a three-dimensional (3D) MC device simulator to date, and illustrate their significance through simulation of n-channel Si and III-V FinFETs. Original contributions include our treatment of far-from-equilibrium degenerate statistics and QC-based modeling of surface-roughness scattering, as well as considering quantum-confined phonon and ionized-impurity scattering in 3D. Typical MC simulations approximate degenerate carrier populations as Fermi distributions to model the Pauli-blocking (PB) of scattering to occupied final states. To allow for increasingly far-from-equilibrium non-Fermi carrier distributions in ultra-scaled and III-V devices, we instead generate the final-state occupation probabilities used for PB by sampling the local carrier populations as function of energy and energy valley. This process is aided by the use of fractional carriers or sub-carriers, which minimizes classical carrier-carrier scattering intrinsically incompatible with degenerate statistics. Quantum-confinement effects are addressed through quantum-correction potentials (QCPs) generated from coupled Schrödinger-Poisson solvers, as commonly done. However, we use these valley- and orientation-dependent QCPs not just to redistribute carriers in real space, or even among energy valleys, but also to calculate confinement-dependent phonon, ionized-impurity, and surface-roughness scattering rates. FinFET simulations are used to illustrate the contributions of each of these QCs. Collectively, these quantum effects can substantially reduce and even eliminate otherwise expected benefits of the considered In0.53Ga0.47As FinFETs over otherwise identical Si FinFETs despite higher thermal velocities in In0.53Ga0.47As. It also may be possible to extend these basic uses of QCPs, however calculated, to still more computationally efficient drift-diffusion and hydrodynamic simulations, and the basic concepts even to compact device modeling.

  16. Electroweak radiative corrections to neutrino scattering at NuTeV

    NASA Astrophysics Data System (ADS)

    Park, Kwangwoo; Baur, Ulrich; Wackeroth, Doreen

    2007-04-01

    The W boson mass extracted by the NuTeV collaboration from the ratios of neutral- and charged-current neutrino and anti-neutrino cross sections differs from direct measurements performed at LEP2 and the Fermilab Tevatron by about 3σ. Several possible sources for the observed difference have been discussed in the literature, including new physics beyond the Standard Model (SM). However, in order to be able to pin down the cause of this discrepancy and to interpret this result as a deviation from the SM, it is important to include the complete electroweak one-loop corrections when extracting the W boson mass from neutrino scattering cross sections. We will present results of a Monte Carlo program for νN (ν̄N) scattering including the complete electroweak O(α) corrections, which will be used to study the effects of these corrections on the extracted values of the electroweak parameters. We will briefly introduce some of the newly developed computational tools for generating Feynman diagrams and corresponding analytic expressions for one-loop matrix elements.

  17. Pupil-segmentation-based adaptive optics for microscopy

    NASA Astrophysics Data System (ADS)

    Ji, Na; Milkie, Daniel E.; Betzig, Eric

    2011-03-01

    Inhomogeneous optical properties of biological samples make it difficult to obtain diffraction-limited resolution in depth. Correcting the sample-induced optical aberrations requires adaptive optics (AO). However, the direct wavefront-sensing approach commonly used in astronomy is not suitable for most biological samples because of their strong scattering of light. We developed an image-based AO approach that is insensitive to sample scattering. By comparing images of the sample taken with different segments of the pupil illuminated, the local tilt in the wavefront is measured from the image shift. The aberrated wavefront is then obtained either by measuring the local phase directly using interference or with phase reconstruction algorithms similar to those used in astronomical AO. We implemented this pupil-segmentation-based approach in a two-photon fluorescence microscope and demonstrated that diffraction-limited resolution can be recovered from nonbiological and biological samples.
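    The tilt-measurement step described above amounts to registering each single-segment image against a reference and converting the shift to a local wavefront slope. A sketch using a standard phase-correlation registration; the `shift_to_tilt` calibration factor is a placeholder that depends on the optical layout:

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def measure_segment_tilts(reference_img, segment_imgs, shift_to_tilt=1.0):
    """Estimate the local wavefront tilt for each illuminated pupil segment from image shift.

    reference_img : image taken with a reference (e.g. full) pupil
    segment_imgs  : list of images, each taken with a single pupil segment illuminated
    shift_to_tilt : calibration factor converting pixel shift to wavefront slope
                    (depends on magnification and focal length; placeholder value here)
    """
    tilts = []
    for img in segment_imgs:
        shift, _error, _phase = phase_cross_correlation(reference_img, img, upsample_factor=10)
        tilts.append(shift_to_tilt * np.asarray(shift))   # (ty, tx) local slope estimate
    return np.array(tilts)

# The per-segment tilts can then be stitched into a wavefront by phase reconstruction
# (e.g. least-squares integration of the slope field), as in astronomical AO.
```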

  18. Dual-energy fluorescent x-ray computed tomography system with a pinhole design: Use of K-edge discontinuity for scatter correction

    PubMed Central

    Sasaya, Tenta; Sunaguchi, Naoki; Thet-Lwin, Thet-; Hyodo, Kazuyuki; Zeniya, Tsutomu; Takeda, Tohoru; Yuasa, Tetsuya

    2017-01-01

    We propose a pinhole-based fluorescent x-ray computed tomography (p-FXCT) system with a 2-D detector and volumetric beam that can suppress the quality deterioration caused by scatter components. In the corresponding p-FXCT technique, projections are acquired at individual incident energies just above and below the K-edge of the imaged trace element; then, reconstruction is performed based on the two sets of projections using a maximum likelihood expectation maximization algorithm that incorporates the scatter components. We constructed a p-FXCT imaging system and performed a preliminary experiment using a physical phantom and an I imaging agent. The proposed dual-energy p-FXCT improved the contrast-to-noise ratio by a factor of more than 2.5 compared to that attainable using mono-energetic p-FXCT for a 0.3 mg/ml I solution. We also imaged an excised rat’s liver infused with a Ba contrast agent to demonstrate the feasibility of imaging a biological sample. PMID:28272496

  19. Effect of scattering on coherent anti-Stokes Raman scattering (CARS) signals.

    PubMed

    Ranasinghesagara, Janaka C; De Vito, Giuseppe; Piazza, Vincenzo; Potma, Eric O; Venugopalan, Vasan

    2017-04-17

    We develop a computational framework to examine the factors responsible for scattering-induced distortions of coherent anti-Stokes Raman scattering (CARS) signals in turbid samples. We apply the Huygens-Fresnel wave-based electric field superposition (HF-WEFS) method combined with the radiating dipole approximation to compute the effects of scattering-induced distortions of focal excitation fields on the far-field CARS signal. We analyze the effect of spherical scatterers, placed in the vicinity of the focal volume, on the CARS signal emitted by different objects (2μm diameter solid sphere, 2μm diameter myelin cylinder and 2μm diameter myelin tube). We find that distortions in the CARS signals arise not only from attenuation of the focal field but also from scattering-induced changes in the spatial phase that modifies the angular distribution of the CARS emission. Our simulations further show that CARS signal attenuation can be minimized by using a high numerical aperture condenser. Moreover, unlike the CARS intensity image, CARS images formed by taking the ratio of CARS signals obtained using x- and y-polarized input fields is relatively insensitive to the effects of spherical scatterers. Our computational framework provides a mechanistic approach to characterizing scattering-induced distortions in coherent imaging of turbid media and may inspire bottom-up approaches for adaptive optical methods for image correction.

  20. Effect of scattering on coherent anti-Stokes Raman scattering (CARS) signals

    PubMed Central

    Ranasinghesagara, Janaka C.; De Vito, Giuseppe; Piazza, Vincenzo; Potma, Eric O.; Venugopalan, Vasan

    2017-01-01

    We develop a computational framework to examine the factors responsible for scattering-induced distortions of coherent anti-Stokes Raman scattering (CARS) signals in turbid samples. We apply the Huygens-Fresnel wave-based electric field superposition (HF-WEFS) method combined with the radiating dipole approximation to compute the effects of scattering-induced distortions of focal excitation fields on the far-field CARS signal. We analyze the effect of spherical scatterers, placed in the vicinity of the focal volume, on the CARS signal emitted by different objects (2μm diameter solid sphere, 2μm diameter myelin cylinder and 2μm diameter myelin tube). We find that distortions in the CARS signals arise not only from attenuation of the focal field but also from scattering-induced changes in the spatial phase that modifies the angular distribution of the CARS emission. Our simulations further show that CARS signal attenuation can be minimized by using a high numerical aperture condenser. Moreover, unlike the CARS intensity image, CARS images formed by taking the ratio of CARS signals obtained using x- and y-polarized input fields is relatively insensitive to the effects of spherical scatterers. Our computational framework provides a mechanistic approach to characterizing scattering-induced distortions in coherent imaging of turbid media and may inspire bottom-up approaches for adaptive optical methods for image correction. PMID:28437941

  1. WE-AB-207A-09: Optimization of the Design of a Moving Blocker for Cone-Beam CT Scatter Correction: Experimental Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, X; Ouyang, L; Jia, X

    Purpose: A moving-blocker-based strategy has shown promising results for scatter correction in cone-beam computed tomography (CBCT). Different geometry designs and moving speeds of the blocker affect its performance in image reconstruction accuracy. The goal of this work is to optimize the geometric design and moving speed of the moving blocker system through experimental evaluations. Methods: An Elekta Synergy XVI system and an anthropomorphic pelvis phantom CIRS 801-P were used for our experiment. A blocker consisting of lead strips was inserted between the x-ray source and the phantom, moving back and forth along the rotation axis, to measure the scatter signal. According to our Monte Carlo simulation results, three blockers were used, all with the same lead strip width of 3.2 mm and with gaps between neighboring lead strips of 3.2, 6.4 and 9.6 mm. For each blocker, three moving speeds were evaluated: 10, 20 and 30 pixels per projection (on the detector plane). The scatter signal in the unblocked region was estimated by cubic B-spline based interpolation from the blocked region. The CBCT image was reconstructed by a total variation (TV) based algebraic iterative reconstruction (ART) algorithm from the partially blocked projection data. Reconstruction accuracy in each condition is quantified as the CT number error of regions of interest (ROIs) by comparing to a CBCT image reconstructed from analytically simulated unblocked and scatter-free projection data. Results: The highest reconstruction accuracy is achieved when the blocker strip width is 3.2 mm, the gap between neighboring lead strips is 9.6 mm and the moving speed is 20 pixels per projection. The RMSE of the CT number of the ROIs can be reduced from 436 to 27. Conclusions: Image reconstruction accuracy is greatly affected by the geometric design of the blocker. The moving speed does not have a very strong effect on the reconstruction result if it is over 20 pixels per projection.
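    The scatter-estimation step above interpolates the signal measured under the lead strips across the unblocked region. A sketch using a cubic spline along the blocker-motion axis as a stand-in for the cubic B-spline interpolation described (it assumes several blocked strips cross each detector column):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def interpolate_scatter(projection, blocked_mask):
    """Estimate scatter everywhere in a projection from the strip-blocked samples.

    projection   : 2D projection (rows x cols); in the blocked rows the detected
                   signal is assumed to be scatter only
    blocked_mask : boolean 2D array, True where the lead strips shadow the detector
    """
    rows = np.arange(projection.shape[0])
    scatter = np.empty_like(projection, dtype=float)
    for col in range(projection.shape[1]):
        blocked_rows = rows[blocked_mask[:, col]]
        # cubic-spline interpolation along the blocker-motion axis
        spline = CubicSpline(blocked_rows, projection[blocked_mask[:, col], col])
        scatter[:, col] = np.clip(spline(rows), 0, None)
    return scatter

# Subtracting this estimate from the unblocked (primary + scatter) data gives the
# scatter-corrected projections that feed the TV-regularized iterative reconstruction.
```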

  2. Analytic Scattering and Refraction Models for Exoplanet Transit Spectra

    NASA Astrophysics Data System (ADS)

    Robinson, Tyler D.; Fortney, Jonathan J.; Hubbard, William B.

    2017-12-01

    Observations of exoplanet transit spectra are essential to understanding the physics and chemistry of distant worlds. The effects of opacity sources and many physical processes combine to set the shape of a transit spectrum. Two such key processes—refraction and cloud and/or haze forward-scattering—have seen substantial recent study. However, models of these processes are typically complex, which prevents their incorporation into observational analyses and standard transit spectrum tools. In this work, we develop analytic expressions that allow for the efficient parameterization of forward-scattering and refraction effects in transit spectra. We derive an effective slant optical depth that includes a correction for forward-scattered light, and present an analytic form of this correction. We validate our correction against a full-physics transit spectrum model that includes scattering, and we explore the extent to which the omission of forward-scattering effects may bias models. Also, we verify a common analytic expression for the location of a refractive boundary, which we express in terms of the maximum pressure probed in a transit spectrum. This expression is designed to be easily incorporated into existing tools, and we discuss how the detection of a refractive boundary could help indicate the background atmospheric composition by constraining the bulk refractivity of the atmosphere. Finally, we show that opacity from Rayleigh scattering and collision-induced absorption will outweigh the effects of refraction for Jupiter-like atmospheres whose equilibrium temperatures are above 400-500 K.

  3. HMI Data Corrected for Stray Light Now Available

    NASA Astrophysics Data System (ADS)

    Norton, A. A.; Duvall, T. L.; Schou, J.; Cheung, M. C. M.; Scherrer, P. H.

    2016-10-01

    The form of the point spread function (PSF) derived for HMI is an Airy function convolved with a Lorentzian. The parameters are constrained by observational ground-based testing of the instrument conducted prior to launch (Wachter et al., 2012), by full-disk data used to evaluate the off-limb behavior of the scattered light, as well as by data obtained during the Venus transit. The PSF correction has been programmed in both C and CUDA C and runs within the JSOC environment using either a CPU or GPU. A single full-disk intensity image can be deconvolved in less than one second. The PSF is described in more detail in Couvidat et al. (2016) and has already been used by Hathaway et al. (2015) to forward-model solar-convection spectra, by Krucker et al. (2015) to investigate footpoints of off-limb solar flares and by Whitney, Criscuoli and Norton (2016) to examine the relations between intensity contrast and magnetic field strengths. In this presentation, we highlight the changes to umbral darkness, granulation contrast and plage field strengths that result from stray light correction. A twenty-four-hour period of scattered-light-corrected HMI data from 2010.08.03, including the isolated sunspot NOAA 11092, is currently available to anyone. Requests for additional time periods of interest are welcome and will be processed by the HMI team.

  4. Testing the Perey effect

    DOE PAGES

    Titus, L. J.; Nunes, Filomena M.

    2014-03-12

    Here, the effects of non-local potentials have historically been approximately included by applying a correction factor to the solution of the corresponding equation for the local equivalent interaction. This is usually referred to as the Perey correction factor. In this work we investigate the validity of the Perey correction factor for single-channel bound and scattering states, as well as in transfer (p, d) cross sections. Method: We solve the scattering and bound state equations for non-local interactions of the Perey-Buck type, through an iterative method. Using the distorted wave Born approximation, we construct the T-matrix for (p,d) on 17O, 41Ca, 49Ca, 127Sn, 133Sn, and 209Pb at 20 and 50 MeV. As a result, we found that for bound states, the Perey corrected wave function resulting from the local equation agreed well with that from the non-local equation in the interior region, but discrepancies were found in the surface and peripheral regions. Overall, the Perey correction factor was adequate for scattering states, with the exception of a few partial waves corresponding to the grazing impact parameters. These differences proved to be important for transfer reactions. In conclusion, the Perey correction factor does offer an improvement over taking a direct local equivalent solution. However, if the desired accuracy is to be better than 10%, the exact solution of the non-local equation should be pursued.
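    For orientation, the correction factor discussed in this record is usually written in the Perey-Buck form shown below. The abstract does not quote the expression, so this is the textbook form with its standard conventions, not necessarily the exact notation of the paper.

```latex
% Perey correction factor relating local-equivalent and non-local wave functions
\psi_{\mathrm{NL}}(r) \;\approx\; F(r)\,\psi_{\mathrm{LE}}(r),
\qquad
F(r) \;=\; \left[\,1 \;-\; \frac{\mu\,\beta^{2}}{2\hbar^{2}}\,U_{\mathrm{LE}}(r)\right]^{-1/2},
```

    where ψ_LE and U_LE are the local-equivalent wave function and potential, μ is the reduced mass, and β is the Perey-Buck nonlocality range (typically about 0.85 fm).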

  5. A New Method for Atmospheric Correction of MRO/CRISM Data.

    NASA Astrophysics Data System (ADS)

    Noe Dobrea, Eldar Z.; Dressing, C.; Wolff, M. J.

    2009-09-01

    The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) aboard the Mars Reconnaissance Orbiter (MRO) collects hyperspectral images from 0.362 to 3.92 μm at 6.55 nanometers/channel, and at a spatial resolution of 20 m/pixel. The 1-2.6 μm spectral range is often used to identify and map the distribution of hydrous minerals using mineralogically diagnostic bands at 1.4 μm, 1.9 μm, and in the 2-2.5 μm region. Atmospheric correction of the 2-μm CO2 band typically employs the same methodology applied to OMEGA data (Mustard et al., Nature 454, 2008): an atmospheric opacity spectrum, obtained from the ratio of spectra from the base to spectra from the peak of Olympus Mons, is rescaled for each spectrum in the observation to fit the 2-μm CO2 band, and is subsequently used to correct the data. Three important aspects are not considered in this correction: 1) absorptions due to water vapor are improperly accounted for, 2) the band center of each channel shifts slightly with time, and 3) multiple scattering due to atmospheric aerosols is not considered. The second issue results in misregistration of the sharp CO2 features in the 2-μm triplet, and hence poor atmospheric correction. This makes it necessary to ratio all spectra to the spectrum of a spectrally "bland" region in each observation in order to distinguish features near 1.9 μm. Here, we present an improved atmospheric correction method, which uses emission phase function (EPF) observations to correct for molecular opacity, and a discrete ordinate radiative transfer algorithm (DISORT - Stamnes et al., Appl. Opt. 27, 1988) to correct for the effects of multiple scattering. This method results in a significant improvement in the correction of the 2-μm CO2 band, allowing us to forgo the use of spectral ratios that affect the spectral shape and preclude the derivation of reflectance values in the data.
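    The conventional scaling correction that this record improves upon can be sketched as follows: the transmission spectrum from the Olympus Mons ratio is raised to a per-spectrum power chosen so that the 2-μm CO2 band vanishes, and the observation is divided by the scaled transmission. This is an illustrative sketch of that baseline method only (not the EPF/DISORT approach proposed here); wavelength indices and helper names are assumptions.

```python
# Sketch of the conventional CO2-band scaling correction (baseline method).
import numpy as np

def volcano_scan_correct(i_obs, t_atm, in_band, continuum):
    """i_obs, t_atm: 1D spectra on a common wavelength grid.
    in_band, continuum: index arrays for the 2-um CO2 band and nearby continuum."""
    def band_depth(spec):
        return 1.0 - spec[in_band].mean() / spec[continuum].mean()

    betas = np.linspace(0.1, 3.0, 300)                      # candidate scaling powers
    depths = [abs(band_depth(i_obs / t_atm**b)) for b in betas]
    beta = betas[int(np.argmin(depths))]                    # power that flattens the band
    return i_obs / t_atm**beta
```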

  6. A method to incorporate leakage and head scatter corrections into a tomotherapy inverse treatment planning algorithm

    NASA Astrophysics Data System (ADS)

    Holmes, Timothy W.

    2001-01-01

    A detailed tomotherapy inverse treatment planning method is described which incorporates leakage and head scatter corrections during each iteration of the optimization process, allowing these effects to be directly accounted for in the optimized dose distribution. It is shown that the conventional inverse planning method for optimizing incident intensity can be extended to include a 'concurrent' leaf sequencing operation from which the leakage and head scatter corrections are determined. The method is demonstrated using the steepest-descent optimization technique with a constant step size and a least-squares error objective. The method was implemented in the MATLAB scientific programming environment and its feasibility demonstrated for 2D test cases simulating treatment delivery using a single coplanar rotation. The results indicate that this modification does not significantly affect convergence of the intensity optimization method when exposure times of individual leaves are stratified to a large number of levels (>100) during leaf sequencing. In general, the addition of aperture-dependent corrections, especially 'head scatter', reduces the incident fluence in local regions of the modulated fan beam, resulting in increased exposure times for individual collimator leaves. These local variations can result in a 5% or greater local variation in the optimized dose distribution compared to the uncorrected case. The overall efficiency of the modified intensity optimization algorithm is comparable to that of the original unmodified case.
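    The optimization scheme named above (steepest descent, constant step size, least-squares dose objective, with a correction applied every iteration) can be summarized by the toy sketch below. The dose matrix, prescription vector and the `correct_fluence` callable standing in for the concurrent leaf-sequencing correction are all placeholders, not the paper's implementation.

```python
# Toy sketch of steepest descent with a per-iteration fluence correction.
import numpy as np

def optimize_intensity(D, d, correct_fluence, step=1e-3, n_iter=200):
    """D: (n_voxels, n_beamlets) dose-deposition matrix; d: prescribed dose.
    correct_fluence: callable applying leakage/head-scatter corrections to w."""
    w = np.zeros(D.shape[1])                     # incident beamlet intensities
    for _ in range(n_iter):
        w = correct_fluence(w)                   # concurrent leaf-sequencing correction
        grad = D.T @ (D @ w - d)                 # gradient of 0.5 * ||D w - d||^2
        w = np.clip(w - step * grad, 0.0, None)  # constant step, non-negative fluence
    return w
```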

  7. Variability of adjacency effects in sky reflectance measurements.

    PubMed

    Groetsch, Philipp M M; Gege, Peter; Simis, Stefan G H; Eleveld, Marieke A; Peters, Steef W M

    2017-09-01

    Sky reflectance Rsky(λ) is used to correct in situ reflectance measurements in the remote detection of water color. We analyzed the directional and spectral variability in Rsky(λ) due to adjacency effects against an atmospheric radiance model. The analysis is based on one year of semi-continuous Rsky(λ) observations that were recorded in two azimuth directions. Adjacency effects contributed to the dependence of Rsky(λ) on season and viewing angle, predominantly in the near-infrared (NIR). For our test area, adjacency effects spectrally resembled a generic vegetation spectrum. The adjacency effect was weakly dependent on the magnitude of Rayleigh- and aerosol-scattered radiance. The reflectance differed between viewing directions by 5.4±6.3% for adjacency effects and by 21.0±19.8% for Rayleigh- and aerosol-scattered Rsky(λ) in the NIR. We discuss under which conditions in situ water reflectance observations require dedicated correction for adjacency effects. We provide an open source implementation of our method to aid identification of such conditions.

  8. Re-evaluation of heat flow data near Parkfield, CA: Evidence for a weak San Andreas Fault

    USGS Publications Warehouse

    Fulton, P.M.; Saffer, D.M.; Harris, Reid N.; Bekins, B.A.

    2004-01-01

    Improved interpretations of the strength of the San Andreas Fault near Parkfield, CA based on thermal data require quantification of processes causing significant scatter and uncertainty in existing heat flow data. These effects include topographic refraction, heat advection by topographically-driven groundwater flow, and uncertainty in thermal conductivity. Here, we re-evaluate the heat flow data in this area by correcting for full 3-D terrain effects. We then investigate the potential role of groundwater flow in redistributing fault-generated heat, using numerical models of coupled heat and fluid flow for a wide range of hydrologic scenarios. We find that a large degree of the scatter in the data can be accounted for by 3-D terrain effects, and that for plausible groundwater flow scenarios frictional heat generated along a strong fault is unlikely to be redistributed by topographically-driven groundwater flow in a manner consistent with the 3-D corrected data. Copyright 2004 by the American Geophysical Union.

  9. Simulation of inverse Compton scattering and its implications on the scattered linewidth

    NASA Astrophysics Data System (ADS)

    Ranjan, N.; Terzić, B.; Krafft, G. A.; Petrillo, V.; Drebot, I.; Serafini, L.

    2018-03-01

    Rising interest in inverse Compton sources has increased the need for efficient models that properly quantify the behavior of scattered radiation given a set of interaction parameters. The current state-of-the-art simulations rely on Monte Carlo-based methods, which, while properly expressing scattering behavior in high-probability regions of the produced spectra, may not correctly simulate such behavior in low-probability regions (e.g. tails of spectra). Moreover, sampling may take an inordinate amount of time for the desired accuracy to be achieved. In this paper, we present an analytic derivation of the expression describing the scattered radiation linewidth and propose a model to describe the effects of horizontal and vertical emittance on the properties of the scattered radiation. We also present an improved version of the code initially reported in Krafft et al. [Phys. Rev. Accel. Beams 19, 121302 (2016), 10.1103/PhysRevAccelBeams.19.121302], that can perform the same simulations as those present in cain and give accurate results in low-probability regions by integrating over the emissions of the electrons. Finally, we use these codes to carry out simulations that closely verify the behavior predicted by the analytically derived scaling law.

  10. Simulation of inverse Compton scattering and its implications on the scattered linewidth

    DOE PAGES

    Ranjan, N.; Terzić, B.; Krafft, G. A.; ...

    2018-03-06

    Rising interest in inverse Compton sources has increased the need for efficient models that properly quantify the behavior of scattered radiation given a set of interaction parameters. The current state-of-the-art simulations rely on Monte Carlo-based methods, which, while properly expressing scattering behavior in high-probability regions of the produced spectra, may not correctly simulate such behavior in low-probability regions (e.g. tails of spectra). Moreover, sampling may take an inordinate amount of time for the desired accuracy to be achieved. In this article, we present an analytic derivation of the expression describing the scattered radiation linewidth and propose a model to describe the effects of horizontal and vertical emittance on the properties of the scattered radiation. We also present an improved version of the code initially reported in Krafft et al. [Phys. Rev. Accel. Beams 19, 121302 (2016)], that can perform the same simulations as those present in cain and give accurate results in low-probability regions by integrating over the emissions of the electrons. Finally, we use these codes to carry out simulations that closely verify the behavior predicted by the analytically derived scaling law.

  11. Heat-Flux Measurements in Laser-Produced Plasmas Using Thomson Scattering from Electron Plasma Waves

    NASA Astrophysics Data System (ADS)

    Henchen, R. J.; Goncharov, V. N.; Cao, D.; Katz, J.; Froula, D. H.; Rozmus, W.

    2017-10-01

    An experiment was designed to measure heat flux in coronal plasmas using collective Thomson scattering. Adjustments to the electron distribution function resulting from heat flux affect the shape of the collective Thomson scattering features through wave-particle resonance. The amplitude of the Spitzer-Härm electron distribution function correction term (f1) was varied to match the data and determines the value of the heat flux. Independent measurements of temperature and density obtained from Thomson scattering were used to infer the classical heat flux (q = - κ∇Te) . Time-resolved Thomson-scattering data were obtained at five locations in the corona along the target normal in a blowoff plasma formed from a planar Al target with 1.5 kJ of 351-nm laser light in a 2-ns square pulse. The flux measured through the Thomson-scattering spectra is a factor of 5 less than the κ∇Te measurements. The lack of collisions of heat-carrying electrons suggests a nonlocal model is needed to accurately describe the heat flux. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.

  12. Spectral structure of laser light scattering revisited: bandwidths of nonresonant scattering lidars.

    PubMed

    She, C Y

    2001-09-20

    It is well known that scattering lidars, i.e., Mie, aerosol-wind, Rayleigh, high-spectral-resolution, molecular-wind, rotational Raman, and vibrational Raman lidars, are workhorses for probing atmospheric properties, including the backscatter ratio, aerosol extinction coefficient, temperature, pressure, density, and winds. The spectral structure of molecular scattering (strength and bandwidth) and its constituent spectra associated with Rayleigh and vibrational Raman scattering are reviewed. Revisiting the correct name by distinguishing Cabannes scattering from Rayleigh scattering, and sharpening the definition of each scattering component in the Rayleigh scattering spectrum, the review allows a systematic, logical, and useful comparison in strength and bandwidth between each scattering component and in receiver bandwidths (for both nighttime and daytime operation) between the various scattering lidars for atmospheric sensing.

  13. Correction of beam-beam effects in luminosity measurement in the forward region at CLIC

    NASA Astrophysics Data System (ADS)

    Lukić, S.; Božović-Jelisavčić, I.; Pandurović, M.; Smiljanić, I.

    2013-05-01

    Procedures for correcting the beam-beam effects in luminosity measurements at CLIC at 3 TeV center-of-mass energy are described and tested using Monte Carlo simulations. The angular counting loss due to the combined Beamstrahlung and initial-state radiation effects is corrected based on the reconstructed velocity of the collision frame of the Bhabha scattering. The distortion of the luminosity spectrum due to the initial-state radiation is corrected by deconvolution. At the end, the counting bias due to the finite calorimeter energy resolution is numerically corrected. To test the procedures, BHLUMI Bhabha event generator, and Guinea-Pig beam-beam simulation were used to generate the outgoing momenta of Bhabha particles in the bunch collisions at CLIC. The systematic effects of the beam-beam interaction on the luminosity measurement are corrected with precision of 1.4 permille in the upper 5% of the energy, and 2.7 permille in the range between 80 and 90% of the nominal center-of-mass energy.

  14. Implementation of an Analytical Raman Scattering Correction for Satellite Ocean-Color Processing

    NASA Technical Reports Server (NTRS)

    McKinna, Lachlan I. W.; Werdell, P. Jeremy; Proctor, Christopher W.

    2016-01-01

    Raman scattering of photons by seawater molecules is an inelastic scattering process. This effect can contribute significantly to the water-leaving radiance signal observed by space-borne ocean-color spectroradiometers. If not accounted for during ocean-color processing, Raman scattering can cause biases in derived inherent optical properties (IOPs). Here we describe a Raman scattering correction (RSC) algorithm that has been integrated within NASA's standard ocean-color processing software. We tested the RSC with NASA's Generalized Inherent Optical Properties algorithm (GIOP). A comparison between derived IOPs and in situ data revealed that the magnitude of the derived backscattering coefficient and the phytoplankton absorption coefficient were reduced when the RSC was applied, whilst the absorption coefficient of colored dissolved and detrital matter remained unchanged. Importantly, our results show that the RSC did not degrade the retrieval skill of the GIOP. In addition, a time-series study of oligotrophic waters near Bermuda showed that the RSC did not introduce unwanted temporal trends or artifacts into derived IOPs.

  15. Contrast enhanced imaging with a stationary digital breast tomosynthesis system

    NASA Astrophysics Data System (ADS)

    Puett, Connor; Calliste, Jabari; Wu, Gongting; Inscoe, Christina R.; Lee, Yueh Z.; Zhou, Otto; Lu, Jianping

    2017-03-01

    Digital breast tomosynthesis (DBT) captures some depth information and thereby improves the conspicuity of breast lesions, compared to standard mammography. Using contrast during DBT may also help distinguish malignant from benign sites. However, adequate visualization of the low iodine signal requires a subtraction step to remove background signal and increase lesion contrast. Additionally, attention to factors that limit contrast, including scatter, noise, and artifact, are important during the image acquisition and post-acquisition processing steps. Stationary DBT (sDBT) is an emerging technology that offers a higher spatial and temporal resolution than conventional DBT. This phantom-based study explored contrast-enhanced sDBT (CE sDBT) across a range of clinically-appropriate iodine concentrations, lesion sizes, and breast thicknesses. The protocol included an effective scatter correction method and an iterative reconstruction technique that is unique to the sDBT system. The study demonstrated the ability of this CE sDBT system to collect projection images adequate for both temporal subtraction (TS) and dual-energy subtraction (DES). Additionally, the reconstruction approach preserved the improved contrast-to-noise ratio (CNR) achieved in the subtraction step. Finally, scatter correction increased the iodine signal and CNR of iodine-containing regions in projection views and reconstructed image slices during both TS and DES. These findings support the ongoing study of sDBT as a potentially useful tool for contrast-enhanced breast imaging and also highlight the significant effect that scatter has on image quality during DBT.

  16. Testing near-infrared spectrophotometry using a liquid neonatal head phantom

    NASA Astrophysics Data System (ADS)

    Wolf, Martin; Baenziger, Oskar; Keel, Matthias; Dietz, Vera; von Siebenthal, Kurt; Bucher, Hans U.

    1998-12-01

    We constructed a liquid phantom, which mimics the neonatal head, for testing near-infrared spectrophotometry instruments. It consists of a spherical, 3.5 mm thick layer of silicone rubber simulating skin and bone, which acts as a container for a liquid solution of Intralipid™, 60 μmol/l haemoglobin and yeast. The Intralipid™ concentration was varied to test the influence of scattering on the haemoglobin concentrations and tissue oxygenation determined by the Critikon 2020. The solution was oxygenated using pure oxygen and then deoxygenated by the yeast. For the instrument's algorithm, we found with increasing scattering (0.5%, 1%, 1.5% and 2% Intralipid™ concentration) an increasing offset added to the oxy- (56.7, 90.8, 112.5, 145.2 μmol/l, respectively) and deoxyhaemoglobin (25.4, 44.3, 58.5, 65.9 μmol/l) concentrations, causing a decreasing range (41.3, 31.3, 25.0, 22.2%) of the tissue oxygen saturation reading. However, concentration changes were quantified correctly independently of the scattering level. For another algorithm, based on the analytical solution, the offsets were smaller: oxyhaemoglobin 12.2, 34.0, 53.2, 88.8 μmol/l and deoxyhaemoglobin 1.6, 11.2, 22.2, 28.1 μmol/l. The range of the tissue oxygen saturation reading was higher: 71.3, 55.5, 45.7, 39.4%. However, concentration changes were not quantified correctly and depended on scattering. This study demonstrates the need to develop algorithms that take the anatomical structures into consideration.

  17. An improved target velocity sampling algorithm for free gas elastic scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Walsh, Jonathan A.

    We present an improved algorithm for sampling the target velocity when simulating elastic scattering in a Monte Carlo neutron transport code that correctly accounts for the energy dependence of the scattering cross section. The algorithm samples the relative velocity directly, thereby avoiding a potentially inefficient rejection step based on the ratio of cross sections. Here, we have shown that this algorithm requires only one rejection step, whereas other methods of similar accuracy require two rejection steps. The method was verified against stochastic and deterministic reference results for upscattering percentages in 238U. Simulations of a light water reactor pin cell problem demonstrate that using this algorithm results in a 3% or less penalty in performance when compared with an approximate method that is used in most production Monte Carlo codes.
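    For context, the sketch below shows the conventional constant-cross-section free-gas target velocity sampler used in production Monte Carlo codes, whose single rejection step on the relative speed is the baseline the improved algorithm builds on; the improved relative-velocity sampler itself is not spelled out in the abstract, so it is not reproduced here. Variable conventions (reduced speeds) are assumptions.

```python
# Conventional free-gas target velocity sampling (constant cross section).
import math
import random

def _xi():
    """Uniform random number on (0, 1], safe to pass to log()."""
    return 1.0 - random.random()

def sample_target(y):
    """y: reduced neutron speed, y = sqrt(A) * v_n / v_thermal.
    Returns (x, mu): reduced target speed and neutron-target direction cosine."""
    while True:
        if random.random() < 1.0 / (1.0 + 0.5 * math.sqrt(math.pi) * y):
            # pdf proportional to x^3 * exp(-x^2)
            x = math.sqrt(-math.log(_xi() * _xi()))
        else:
            # Maxwellian speed distribution, pdf proportional to x^2 * exp(-x^2)
            c = math.cos(0.5 * math.pi * random.random())
            x = math.sqrt(-math.log(_xi()) - math.log(_xi()) * c * c)
        mu = 2.0 * random.random() - 1.0                    # isotropic target direction
        v_rel = math.sqrt(y * y + x * x - 2.0 * x * y * mu)
        if random.random() < v_rel / (x + y):               # single rejection step
            return x, mu
```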

  18. An improved target velocity sampling algorithm for free gas elastic scattering

    DOE PAGES

    Romano, Paul K.; Walsh, Jonathan A.

    2018-02-03

    We present an improved algorithm for sampling the target velocity when simulating elastic scattering in a Monte Carlo neutron transport code that correctly accounts for the energy dependence of the scattering cross section. The algorithm samples the relative velocity directly, thereby avoiding a potentially inefficient rejection step based on the ratio of cross sections. Here, we have shown that this algorithm requires only one rejection step, whereas other methods of similar accuracy require two rejection steps. The method was verified against stochastic and deterministic reference results for upscattering percentages in 238U. Simulations of a light water reactor pin cell problem demonstrate that using this algorithm results in a 3% or less penalty in performance when compared with an approximate method that is used in most production Monte Carlo codes.

  19. Modifications Of Discrete Ordinate Method For Computations With High Scattering Anisotropy: Comparative Analysis

    NASA Technical Reports Server (NTRS)

    Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.

    2012-01-01

    A numerical accuracy analysis of the radiative transfer equation (RTE) solution based on separation of the diffuse light field into anisotropic and smooth parts is presented. The analysis uses three different algorithms based on the discrete ordinate method (DOM). Two methods, DOMAS and DOM2+, that do not use truncation of the phase function, are compared against the TMS method. DOMAS and DOM2+ use the Small-Angle Modification of the RTE and the single scattering term, respectively, as the anisotropic part. The TMS method uses the Delta-M method for truncation of the phase function along with the single scattering correction. For reference, a standard discrete ordinate method, DOM, is also included in the analysis. The obtained results for cases with high scattering anisotropy show that at a low number of streams (16, 32) only DOMAS provides an accurate solution in the aureole area. Outside of the aureole, the convergence and accuracy of DOMAS and TMS are found to be approximately similar: DOMAS was found more accurate in cases with coarse aerosol and liquid water cloud models, except at low optical depth, while TMS showed better results in the case of ice cloud.

  20. Solar Cycle Variability and Grand Minima Induced by Joy's Law Scatter

    NASA Astrophysics Data System (ADS)

    Karak, Bidya Binay; Miesch, Mark S.

    2017-08-01

    The strength of the solar cycle varies from one cycle to another in an irregular manner, and the extreme example of this irregularity is the Maunder minimum, when the Sun produced only a few spots for several years. We explore the cause of these variabilities using a 3D Babcock-Leighton dynamo. In this model, based on the toroidal flux at the base of the convection zone, bipolar magnetic regions (BMRs) are produced with flux, tilt angle, and time of emergence all obtained from their observed distributions. The dynamo growth is limited by tilt quenching. The randomness in the BMR emergence makes the poloidal field unequal from cycle to cycle and eventually causes unequal solar cycles. When observed fluctuations of BMR tilts around Joy's law, i.e., a standard deviation of 15 degrees, are considered, our model produces a variation in the solar cycle comparable to the observed solar cycle variability. Tilt scatter also causes occasional Maunder-like grand minima, although the observed scatter does not reproduce the correct statistics of grand minima. However, when we double the tilt scatter, we find grand minima consistent with observations. Importantly, our dynamo model can operate even during grand minima with only a few BMRs, without requiring any additional alpha effect.
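    The tilt-scatter ingredient described above amounts to drawing each BMR tilt as a Joy's-law mean plus Gaussian noise with a 15-degree standard deviation. The snippet below is only a toy illustration of that draw; the Joy's-law amplitude used here is an assumed value, not taken from the paper.

```python
# Toy draw of BMR tilt angles: Joy's law plus observed Gaussian scatter.
import numpy as np

rng = np.random.default_rng(0)

def bmr_tilt(latitude_deg, sigma_deg=15.0, joy_amplitude_deg=35.0):
    """Tilt angle (deg) for a BMR emerging at the given latitude."""
    mean_tilt = joy_amplitude_deg * np.sin(np.radians(latitude_deg))  # Joy's law
    return mean_tilt + rng.normal(0.0, sigma_deg)                     # tilt scatter

tilts = [bmr_tilt(15.0) for _ in range(10)]   # e.g. a batch of emergences at 15 deg
```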

  1. A curvature-corrected Kirchhoff formulation for radar sea-return from the near vertical

    NASA Technical Reports Server (NTRS)

    Jackson, F. C.

    1974-01-01

    A new theoretical treatment of the problem of electromagnetic wave scattering from a randomly rough surface is given. A high-frequency correction to the Kirchhoff approximation is derived from a field integral equation for a perfectly conducting surface. The correction, which accounts for the effect of local surface curvature, is seen to be identical with an asymptotic form found by Fock (1945) for diffraction by a paraboloid. The corrected boundary values are substituted into the far-field Stratton-Chu integral, and average backscattered powers are computed assuming the scattering surface is a homogeneous Gaussian process. Preliminary calculations for a K^(-4) ocean wave spectrum indicate a reasonable modelling of polarization effects near the vertical (theta out to 45 deg). Correspondence with the results of small perturbation theory is shown.

  2. Theoretical model of x-ray scattering as a dense matter probe.

    PubMed

    Gregori, G; Glenzer, S H; Rozmus, W; Lee, R W; Landen, O L

    2003-02-01

    We present analytical expressions for the dynamic structure factor, or form factor S(k,omega), which is the quantity describing the x-ray cross section from a dense plasma or a simple liquid. Our results, based on the random phase approximation for the treatment of the charged-particle coupling, can be applied to describe scattering from either weakly coupled classical plasmas or degenerate electron liquids. Our form factor correctly reproduces the Compton energy down-shift and the known Fermi-Dirac electron velocity distribution for S(k,omega) in the case of a cold degenerate plasma. The usual concept of the scattering parameter is also reinterpreted for the degenerate case in order to include the effect of the Thomas-Fermi screening. The results shown in this work can be applied to interpreting x-ray scattering in warm dense plasmas occurring in inertial confinement fusion experiments or for the modeling of solid density matter found in the interior of planets.

  3. An initial analysis of short- and medium-range correlations in potential non-Pt catalysts in CoNx

    NASA Astrophysics Data System (ADS)

    Peterson, Joe

    2009-10-01

    A potential show stopper for the development of fuel cells for the commercial automotive industry is the design of low-cost catalysts. The best catalysts are based on platinum, which is a rare and expensive noble metal. Our group has been involved in the characterization of potential materials for non-Pt catalysts. In this presentation, I will present some preliminary neutron scattering data from a nanocrystalline powder sample of CoNx. It is apparent that the diffraction data cannot be analyzed with standard Rietveld refinement, and we have to invoke pair distribution function (PDF) analysis. The PDF provides insight into short-range correlations, as it measures the probabilities of short- and mid-range interatomic distances in a material. The analysis reveals a strong incoherent scattering response, which is indicative of the presence of hydrogen in the sample. After correcting for the incoherent scattering, one obtains the normalized scattering function S(Q), whose Fourier transform yields the PDF.

  4. An initial analysis of short- and medium-range correlations in potential non-Pt catalysts in CoNx

    NASA Astrophysics Data System (ADS)

    Peterson, Joe

    2010-03-01

    A potential show stopper for the development of fuel cells for the commercial automotive industry is the design of low-cost catalysts. The best catalysts are based on platinum, which is a rare and expensive noble metal. Our group has been involved in the characterization of potential materials for non-Pt catalysts. In this presentation, I will present some preliminary neutron scattering data from a nanocrystalline powder sample of CoNx. It is apparent that the diffraction data cannot be analyzed with standard Rietveld refinement, and we have to invoke pair distribution function (PDF) analysis. The PDF provides insight into short-range correlations, as it measures the probabilities of short- and mid-range interatomic distances in a material. The analysis reveals a strong incoherent scattering response, which is indicative of the presence of hydrogen in the sample. After correcting for the incoherent scattering, one obtains the normalized scattering function S(Q), whose Fourier transform yields the PDF.
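    The final step mentioned above, going from the normalized scattering function to the PDF, is conventionally the sine Fourier transform below (standard PDF-analysis convention, not quoted from the abstract).

```latex
% Reduced pair distribution function from the normalized scattering function S(Q)
G(r) \;=\; \frac{2}{\pi}\int_{0}^{Q_{\max}} Q\,\bigl[S(Q)-1\bigr]\,\sin(Qr)\,\mathrm{d}Q
```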

  5. Broadband true time delay for microwave signal processing, using slow light based on stimulated Brillouin scattering in optical fibers.

    PubMed

    Chin, Sanghoon; Thévenaz, Luc; Sancho, Juan; Sales, Salvador; Capmany, José; Berger, Perrine; Bourderionnet, Jérôme; Dolfi, Daniel

    2010-10-11

    We experimentally demonstrate a novel technique to process broadband microwave signals, using all-optically tunable true time delay in optical fibers. The configuration to achieve true time delay consists of two main stages: a photonic RF phase shifter and slow light based on stimulated Brillouin scattering in fibers. Dispersion properties of the fibers are controlled separately at the optical carrier frequency and in the vicinity of the microwave signal bandwidth. In this way, the time delay induced within the signal bandwidth can be manipulated to act correctly as a true time delay, with proper phase compensation introduced to the optical carrier. We fully analyzed the generated true time delay as a promising solution to feed phased-array antennas for radar systems and to develop dynamically reconfigurable microwave photonic filters.

  6. 4D cone-beam computed tomography (CBCT) using a moving blocker for simultaneous radiation dose reduction and scatter correction

    NASA Astrophysics Data System (ADS)

    Zhao, Cong; Zhong, Yuncheng; Duan, Xinhui; Zhang, You; Huang, Xiaokun; Wang, Jing; Jin, Mingwu

    2018-06-01

    Four-dimensional (4D) x-ray cone-beam computed tomography (CBCT) is important for precise radiation therapy for lung cancer. Due to the repeated use and 4D acquisition over a course of radiotherapy, the radiation dose becomes a concern. Meanwhile, scatter contamination in CBCT deteriorates image quality for treatment tasks. In this work, we propose the use of a moving blocker (MB) during the 4D CBCT acquisition (‘4D MB’), combined with motion-compensated reconstruction, to address these two issues simultaneously. In 4D MB CBCT, the moving blocker reduces the x-ray flux passing through the patient and collects the scatter information in the blocked region at the same time. The scatter signal is estimated from the blocked region for correction. Even though the number of projection views and the projection data in each view are not complete for conventional reconstruction, 4D reconstruction with a total-variation (TV) constraint and a motion-compensated temporal constraint can utilize both spatial gradient sparsity and temporal correlations among different phases to overcome the missing data problem. Feasibility simulation studies using the 4D NCAT phantom showed that 4D MB with motion-compensated reconstruction and a 1/3 imaging dose reduction could produce satisfactory images and achieve a 37% improvement in the structural similarity (SSIM) index and a 55% improvement in root mean square error (RMSE), compared to 4D reconstruction at the regular imaging dose without scatter correction. For the same 4D MB data, 4D reconstruction outperformed 3D TV reconstruction by 28% on SSIM and 34% on RMSE. A study of synthetic patient data also demonstrated the potential of 4D MB to reduce the radiation dose by 1/3 without compromising image quality. This work paves the way for more comprehensive studies to investigate the dose reduction limit offered by this novel 4D MB method using physical phantom experiments and real patient data, based on clinically relevant metrics.
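    Schematically, the reconstruction described above can be written as the minimization below, with a data-fidelity term over the unblocked rays, a spatial TV term, and a motion-compensated temporal term; the weights and the exact form of the temporal constraint are assumptions for illustration, not the paper's formulation.

```latex
% Schematic 4D MB objective: x_k = phase-k image, A_k = system matrix for the
% unblocked rays of phase k, p_k = scatter-corrected projections, D_k = the
% deformation mapping a reference phase to phase k.
\min_{\{x_k\}}\;\sum_{k}\bigl\lVert A_k x_k - p_k \bigr\rVert_2^{2}
\;+\;\lambda_{\mathrm{TV}}\sum_{k}\mathrm{TV}(x_k)
\;+\;\lambda_{t}\sum_{k}\bigl\lVert x_k - \mathcal{D}_k(x_{\mathrm{ref}})\bigr\rVert_2^{2}
```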

  7. 4D cone-beam computed tomography (CBCT) using a moving blocker for simultaneous radiation dose reduction and scatter correction.

    PubMed

    Zhao, Cong; Zhong, Yuncheng; Duan, Xinhui; Zhang, You; Huang, Xiaokun; Wang, Jing; Jin, Mingwu

    2018-05-03

    Four-dimensional (4D) X-ray cone-beam computed tomography (CBCT) is important for precise radiation therapy for lung cancer. Due to the repeated use and 4D acquisition over a course of radiotherapy, the radiation dose becomes a concern. Meanwhile, scatter contamination in CBCT deteriorates image quality for treatment tasks. In this work, we propose using a moving blocker (MB) during the 4D CBCT acquisition ("4D MB"), combined with motion-compensated reconstruction, to address these two issues simultaneously. In 4D MB CBCT, the moving blocker reduces the X-ray flux passing through the patient and collects the scatter information in the blocked region at the same time. The scatter signal is estimated from the blocked region for correction. Even though the number of projection views and the projection data in each view are not complete for conventional reconstruction, 4D reconstruction with a total-variation (TV) constraint and a motion-compensated temporal constraint can utilize both spatial gradient sparsity and temporal correlations among different phases to overcome the missing data problem. Feasibility simulation studies using the 4D NCAT phantom showed that 4D MB with motion-compensated reconstruction and a 1/3 imaging dose reduction could produce satisfactory images and achieve a 37% improvement in the structural similarity (SSIM) index and a 55% improvement in root mean square error (RMSE), compared to 4D reconstruction at the regular imaging dose without scatter correction. For the same 4D MB data, 4D reconstruction outperformed 3D TV reconstruction by 28% on SSIM and 34% on RMSE. A study of synthetic patient data also demonstrated the potential of 4D MB to reduce the radiation dose by 1/3 without compromising image quality. This work paves the way for more comprehensive studies to investigate the dose reduction limit offered by this novel 4D MB method using physical phantom experiments and real patient data, based on clinically relevant metrics. © 2018 Institute of Physics and Engineering in Medicine.

  8. Retrieval of the scattering and microphysical properties of aerosols from ground-based optical measurements including polarization. I. Method.

    PubMed

    Vermeulen, A; Devaux, C; Herman, M

    2000-11-20

    A method has been developed for retrieving the scattering and microphysical properties of atmospheric aerosol from measurements of solar transmission, aureole, and angular distribution of the scattered and polarized sky light in the solar principal plane. Numerical simulations of measurements have been used to investigate the feasibility of the method and to test the algorithm's performance. It is shown that the absorption and scattering properties of an aerosol, i.e., the single-scattering albedo, the phase function, and the polarization for single scattering of incident unpolarized light, can be obtained by use of radiative transfer calculations to correct the values of scattered radiance and polarized radiance for multiple scattering, Rayleigh scattering, and the influence of ground. The method requires only measurement of the aerosol's optical thickness and an estimate of the ground's reflectance and does not need any specific assumption about properties of the aerosol. The accuracy of the retrieved phase function and polarization of the aerosols is examined at near-infrared wavelengths (e.g., 0.870 mum). The aerosol's microphysical properties (size distribution and complex refractive index) are derived in a second step. The real part of the refractive index is a strong function of the polarization, whereas the imaginary part is strongly dependent on the sky's radiance and the retrieved single-scattering albedo. It is demonstrated that inclusion of polarization data yields the real part of the refractive index.

  9. Scattering analysis of LOFAR pulsar observations

    NASA Astrophysics Data System (ADS)

    Geyer, M.; Karastergiou, A.; Kondratiev, V. I.; Zagkouris, K.; Kramer, M.; Stappers, B. W.; Grießmeier, J.-M.; Hessels, J. W. T.; Michilli, D.; Pilia, M.; Sobey, C.

    2017-09-01

    We measure the effects of interstellar scattering on average pulse profiles from 13 radio pulsars with simple pulse shapes. We use data from the LOFAR High Band Antennas, at frequencies between 110 and 190 MHz. We apply a forward fitting technique, and simultaneously determine the intrinsic pulse shape, assuming single Gaussian component profiles. We find that the scattering timescale τ, associated with scattering by a single thin screen, has a power-law dependence on frequency, τ ∝ ν^(-α), with indices ranging from α = 1.50 to 4.0, despite the simplest theoretical models predicting α = 4.0 or 4.4. Modelling the screen as an isotropic or extremely anisotropic scatterer, we find that anisotropic scattering fits lead to larger power-law indices, often in better agreement with theoretically expected values. We compare the scattering models based on the inferred, frequency-dependent parameters of the intrinsic pulse, and the resulting correction to the dispersion measure (DM). We highlight the cases in which fits of extreme anisotropic scattering are appealing, while stressing that the data do not strictly favour either model for any of the 13 pulsars. The pulsars show anomalous scattering properties that are consistent with finite scattering screens and/or anisotropy, but these data alone do not provide the means for an unambiguous characterization of the screens. We revisit the empirical τ versus DM relation and consider how our results support a frequency dependence of α. Very long baseline interferometry, and observations of the scattering and scintillation properties of these sources at higher frequencies, will provide further evidence.
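    The power-law index quoted above can be estimated, in the simplest case, by a straight-line fit in log-log space; the sketch below shows that reduced version only (the paper itself uses a forward-fitting technique on the full profiles).

```python
# Simple log-log fit of tau proportional to nu**(-alpha).
import numpy as np

def fit_scattering_index(freqs_mhz, taus_ms):
    """Return (alpha, tau0) from measured frequencies and scattering timescales."""
    slope, intercept = np.polyfit(np.log10(freqs_mhz), np.log10(taus_ms), 1)
    return -slope, 10.0 ** intercept   # tau(nu) ~ tau0 * nu**(-alpha)

# Usage: alpha, tau0 = fit_scattering_index(freqs, taus) with measured arrays.
```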

  10. Aspherical-atom modeling of coordination compounds by single-crystal X-ray diffraction allows the correct metal atom to be identified.

    PubMed

    Dittrich, Birger; Wandtke, Claudia M; Meents, Alke; Pröpper, Kevin; Mondal, Kartik Chandra; Samuel, Prinson P; Amin Sk, Nurul; Singh, Amit Pratap; Roesky, Herbert W; Sidhu, Navdeep

    2015-02-02

    Single-crystal X-ray diffraction (XRD) is often considered the gold standard in analytical chemistry, as it allows element identification as well as determination of atom connectivity and the solid-state structure of completely unknown samples. Element assignment is based on the number of electrons of an atom, so that a distinction of neighboring heavier elements in the periodic table by XRD is often difficult. A computationally efficient procedure for aspherical-atom least-squares refinement of conventional diffraction data of organometallic compounds is proposed. The iterative procedure is conceptually similar to Hirshfeld-atom refinement (Acta Crystallogr. Sect. A 2008, 64, 383-393; IUCrJ 2014, 1, 61-79), but it relies on tabulated invariom scattering factors (Acta Crystallogr. Sect. B 2013, 69, 91-104) and the Hansen/Coppens multipole model; disordered structures can be handled as well. Five linear-coordinate 3d metal complexes, for which the wrong element is found if standard independent-atom model scattering factors are relied upon, are studied, and it is shown that only aspherical-atom scattering factors allow a reliable assignment. The influence of anomalous dispersion in identifying the correct element is investigated and discussed. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Stochastic analysis of surface roughness models in quantum wires

    NASA Astrophysics Data System (ADS)

    Nedjalkov, Mihail; Ellinghaus, Paul; Weinbub, Josef; Sadi, Toufik; Asenov, Asen; Dimov, Ivan; Selberherr, Siegfried

    2018-07-01

    We present a signed particle computational approach for the Wigner transport model and use it to analyze the electron state dynamics in quantum wires focusing on the effect of surface roughness. Usually surface roughness is considered as a scattering model, accounted for by the Fermi Golden Rule, which relies on approximations like statistical averaging and in the case of quantum wires incorporates quantum corrections based on the mode space approach. We provide a novel computational approach to enable physical analysis of these assumptions in terms of phase space and particles. Utilized is the signed particles model of Wigner evolution, which, besides providing a full quantum description of the electron dynamics, enables intuitive insights into the processes of tunneling, which govern the physical evolution. It is shown that the basic assumptions of the quantum-corrected scattering model correspond to the quantum behavior of the electron system. Of particular importance is the distribution of the density: Due to the quantum confinement, electrons are kept away from the walls, which is in contrast to the classical scattering model. Further quantum effects are retardation of the electron dynamics and quantum reflection. Far from equilibrium the assumption of homogeneous conditions along the wire breaks even in the case of ideal wire walls.

  12. Using phase for radar scatterer classification

    NASA Astrophysics Data System (ADS)

    Moore, Linda J.; Rigling, Brian D.; Penno, Robert P.; Zelnio, Edmund G.

    2017-04-01

    Traditional synthetic aperture radar (SAR) systems tend to discard the phase information of formed complex radar imagery prior to automatic target recognition (ATR). This practice has historically been driven by available hardware storage, processing capabilities, and data link capacity. Recent advances in high performance computing (HPC) have enabled extremely dense storage and processing solutions. Therefore, previous motives for discarding radar phase information in ATR applications have been mitigated. First, we characterize the value of phase in one-dimensional (1-D) radar range profiles with respect to the ability to correctly estimate target features, which are currently employed in ATR algorithms for target discrimination. These features correspond to physical characteristics of targets through radio frequency (RF) scattering phenomenology. Physics-based electromagnetic scattering models developed from the geometrical theory of diffraction are utilized for the information analysis presented here. Information is quantified by the error of target parameter estimates from noisy radar signals when phase is either retained or discarded. Operating conditions (OCs) of signal-to-noise ratio (SNR) and bandwidth are considered. Second, we investigate the value of phase in 1-D radar returns with respect to the ability to correctly classify canonical targets. Classification performance is evaluated via logistic regression for three targets (sphere, plate, tophat). Phase information is demonstrated to improve radar target classification rates, particularly at low SNRs and low bandwidths.

  13. US-SOMO HPLC-SAXS module: dealing with capillary fouling and extraction of pure component patterns from poorly resolved SEC-SAXS data

    PubMed Central

    Brookes, Emre; Vachette, Patrice; Rocco, Mattia; Pérez, Javier

    2016-01-01

    Size-exclusion chromatography coupled with SAXS (small-angle X-ray scattering), often performed using a flow-through capillary, should allow direct collection of monodisperse sample data. However, capillary fouling issues and non-baseline-resolved peaks can hamper its efficacy. The UltraScan solution modeler (US-SOMO) HPLC-SAXS (high-performance liquid chromatography coupled with SAXS) module provides a comprehensive framework to analyze such data, starting with a simple linear baseline correction and symmetrical Gaussian decomposition tools [Brookes, Pérez, Cardinali, Profumo, Vachette & Rocco (2013). J. Appl. Cryst. 46, 1823–1833]. In addition to several new features, substantial improvements to both routines have now been implemented, comprising the evaluation of outcomes by advanced statistical tools. The novel integral baseline-correction procedure is based on the more sound assumption that the effect of capillary fouling on scattering increases monotonically with the intensity scattered by the material within the X-ray beam. Overlapping peaks, often skewed because of sample interaction with the column matrix, can now be accurately decomposed using non-symmetrical modified Gaussian functions. As an example, the case of a polydisperse solution of aldolase is analyzed: from heavily convoluted peaks, individual SAXS profiles of tetramers, octamers and dodecamers are extracted and reliably modeled. PMID:27738419

  14. Unsupervised Classification of PolSAR Data Using a Scattering Similarity Measure Derived From a Geodesic Distance

    NASA Astrophysics Data System (ADS)

    Ratha, Debanshu; Bhattacharya, Avik; Frery, Alejandro C.

    2018-01-01

    In this letter, we propose a novel technique for obtaining scattering components from Polarimetric Synthetic Aperture Radar (PolSAR) data using the geodesic distance on the unit sphere. This geodesic distance is obtained between an elementary target and the observed Kennaugh matrix, and it is further utilized to compute a similarity measure between scattering mechanisms. The normalized similarity measure for each elementary target is then modulated with the total scattering power (Span). This measure is used to categorize pixels into three categories, i.e. odd-bounce, double-bounce and volume, depending on which of the above scattering mechanisms dominates. Then the maximum likelihood classifier of [J.-S. Lee, M. R. Grunes, E. Pottier, and L. Ferro-Famil, Unsupervised terrain classification preserving polarimetric scattering characteristics, IEEE Trans. Geosci. Remote Sens., vol. 42, no. 4, pp. 722-731, April 2004] based on the complex Wishart distribution is iteratively used for each category. Dominant scattering mechanisms are thus preserved in this classification scheme. We show results for L-band AIRSAR and ALOS-2 datasets acquired over San Francisco and Mumbai, respectively. The scattering mechanisms are better preserved using the proposed methodology than in the unsupervised classification results using the Freeman-Durden scattering powers on an orientation angle (OA) corrected PolSAR image. Furthermore, (1) the scattering similarity is a completely non-negative quantity, unlike the negative powers that might occur in the double-bounce and odd-bounce scattering components under the Freeman-Durden decomposition (FDD), and (2) the methodology can be extended to more canonical targets as well as to bistatic scattering.
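    One common way to realize the geodesic distance described above is as the arc length between unit-Frobenius-norm Kennaugh matrices on the unit sphere; the sketch below uses that definition as a working assumption, so the normalization and the similarity construction may differ in detail from the paper.

```python
# Sketch: geodesic distance and similarity between 4x4 real Kennaugh matrices.
import numpy as np

def geodesic_distance(K1, K2):
    inner = np.sum(K1 * K2) / (np.linalg.norm(K1) * np.linalg.norm(K2))
    return (2.0 / np.pi) * np.arccos(np.clip(inner, -1.0, 1.0))   # in [0, 1]

def scattering_similarity(K_obs, K_elem):
    return 1.0 - geodesic_distance(K_obs, K_elem)   # 1 means identical mechanisms
```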

  15. [Evaluation of cross-calibration of (123)I-MIBG H/M ratio, with the IDW scatter correction method, on different gamma camera systems].

    PubMed

    Kittaka, Daisuke; Takase, Tadashi; Akiyama, Masayuki; Nakazawa, Yasuo; Shinozuka, Akira; Shirai, Muneaki

    2011-01-01

    The (123)I-MIBG heart-to-mediastinum activity ratio (H/M) is commonly used as an indicator of relative myocardial (123)I-MIBG uptake. H/M ratios reflect myocardial sympathetic nerve function and are therefore a useful parameter to assess regional myocardial sympathetic denervation in various cardiac diseases. However, H/M ratio values differ by site, gamma camera system, position and size of the region of interest (ROI), and collimator. In addition to these factors, the 529 keV scatter component may also affect the (123)I-MIBG H/M ratio. In this study, we examined whether the H/M ratio correlates between two different gamma camera systems and sought an H/M ratio calculation formula. Moreover, we assessed the feasibility of the (123)I Dual Window (IDW) method, which is a scatter correction method, and compared H/M ratios with and without the IDW method. The H/M ratio displayed a good correlation between the two gamma camera systems. Additionally, we were able to create a new H/M calculation formula. These results indicate that the IDW method is a useful scatter correction method for calculating (123)I-MIBG H/M ratios.
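    In outline, the quantity of interest is a ratio of mean ROI counts after a window-based scatter subtraction. The sketch below is generic: the actual IDW window settings and scaling factor are not given in the abstract, so the width-ratio factor here is a stand-in.

```python
# Generic dual-window scatter subtraction followed by an H/M ratio.
import numpy as np

def scatter_corrected_counts(main_window, sub_window, width_ratio):
    """Primary-count estimate: main-window image minus scaled sub-window image."""
    return np.maximum(main_window - width_ratio * sub_window, 0.0)

def h_over_m(counts, heart_roi, mediastinum_roi):
    """Mean-count ratio between heart and mediastinum ROIs (boolean masks)."""
    return counts[heart_roi].mean() / counts[mediastinum_roi].mean()
```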

  16. Reciprocal space mapping and single-crystal scattering rods.

    PubMed

    Smilgies, Detlef M; Blasini, Daniel R; Hotta, Shu; Yanagi, Hisao

    2005-11-01

    Reciprocal space mapping using a linear gas detector in combination with a matching Soller collimator has been applied to map scattering rods of well oriented organic microcrystals grown on a solid surface. Formulae are provided to correct image distortions in angular space and to determine the required oscillation range, in order to measure properly integrated scattering intensities.

  17. Quasi-elastic nuclear scattering at high energies

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Townsend, Lawrence W.; Wilson, John W.

    1992-01-01

    The quasi-elastic scattering of two nuclei is considered in the high-energy optical model. Energy loss and momentum transfer spectra for projectile ions are evaluated in terms of an inelastic multiple-scattering series corresponding to multiple knockout of target nucleons. The leading-order correction to the coherent projectile approximation is evaluated. Calculations are compared with experiments.

  18. Anomalous Rayleigh scattering with dilute concentrations of elements of biological importance

    NASA Astrophysics Data System (ADS)

    Hugtenburg, Richard P.; Bradley, David A.

    2004-01-01

    The anomalous scattering factor (ASF) correction to the relativistic form-factor approximation for Rayleigh scattering is examined in support of its utilization in radiographic imaging. ASF-corrected total cross-section data have been generated on a low-resolution grid for the Monte Carlo code EGS4 for the biologically important elements K, Ca, Mn, Fe, Cu and Zn. Points in the fixed energy grid used by EGS4, as well as 8 other points in the vicinity of the K-edge, have been chosen to achieve an uncertainty in the ASF component of 20% according to the Thomas-Reiche-Kuhn sum rule and an energy resolution of 20 eV. Such data are useful for analysis of imaging with a quasi-monoenergetic source. Corrections to the sampled distribution of outgoing photons due to the ASF are given, and new total cross-section data, including that of the photoelectric effect, have been computed using the Slater exchange self-consistent potential with the Latter tail. A measurement of Rayleigh scattering in a dilute aqueous solution of manganese (II) was performed, this system enabling determination of the absolute cross-section, although background subtraction was necessary to remove Kβ fluorescence and resonant Raman scattering occurring within several hundred eV of the edge. Measurements confirm the presence of below-edge bound-bound structure and variation in the structure due to the ionic state that are not currently included in tabulations.

  19. Chiral symmetry constraints on resonant amplitudes

    NASA Astrophysics Data System (ADS)

    Bruns, Peter C.; Mai, Maxim

    2018-03-01

    We discuss the impact of chiral symmetry constraints on the quark-mass dependence of meson resonance pole positions, which are encoded in non-perturbative parametrizations of meson scattering amplitudes. Model-independent conditions on such parametrizations are derived, which are shown to guarantee the correct functional form of the leading quark-mass corrections to the resonance pole positions. Some model amplitudes for ππ scattering, widely used for the determination of ρ and σ resonance properties from results of lattice simulations, are tested explicitly with respect to these conditions.

  20. Absolutely and uniformly convergent iterative approach to inverse scattering with an infinite radius of convergence

    DOEpatents

    Kouri, Donald J [Houston, TX; Vijay, Amrendra [Houston, TX; Zhang, Haiyan [Houston, TX; Zhang, Jingfeng [Houston, TX; Hoffman, David K [Ames, IA

    2007-05-01

    A method and system for solving the inverse acoustic scattering problem using an iterative approach with consideration of half-off-shell transition matrix element (near-field) information, in which the Volterra inverse series correctly predicts the first two moments of the interaction, while the Fredholm inverse series is correct only for the first moment, and the Volterra approach provides a method for exactly obtaining interactions that can be written as a sum of delta functions.

  1. Holographic corrections to meson scattering amplitudes

    NASA Astrophysics Data System (ADS)

    Armoni, Adi; Ireson, Edwin

    2017-06-01

    We compute meson scattering amplitudes using the holographic duality between confining gauge theories and string theory, in order to consider holographic corrections to the Veneziano amplitude and associated higher-point functions. The generic nature of such computations is explained, thanks to the well-understood nature of confining string backgrounds, and two different examples of the calculation in given backgrounds are used to illustrate the details. The effect we discover, whilst only qualitative, is re-obtainable in many such examples, in four-point but also higher point amplitudes.

  2. A model of primary and scattered photon fluence for mammographic x-ray image quantification

    NASA Astrophysics Data System (ADS)

    Tromans, Christopher E.; Cocker, Mary R.; Brady, Michael, Sir

    2012-10-01

    We present an efficient method to calculate the primary and scattered x-ray photon fluence components of a mammographic image. This can be used for a range of clinically important purposes, including estimation of breast density, personalized image display, and quantitative mammogram analysis. The method is based on models of the x-ray tube, the digital detector, and a novel ray tracer which models the diverging beam emanating from the focal spot. The tube model includes consideration of the anode heel effect and empirical corrections for wear and manufacturing tolerances. The detector model is empirical, being based on a family of transfer functions that cover the range of beam qualities and compressed breast thicknesses encountered clinically. The scatter estimation utilizes optimal information sampling and interpolation (to yield a clinically usable computation time) of scatter calculated using fundamental physics relations. A scatter kernel arising around each primary ray is calculated, and these are summed by superposition to form the scatter image. Beam quality, spatial position in the field (in particular the behavior at the air boundary due to the depletion of scatter contributions from the surroundings), and the possible presence of a grid are considered, as is tissue composition, using an iterative refinement procedure. We present numerous validation results that use a purpose-designed tissue-equivalent step wedge phantom. The average differences between actual acquisitions and modelled pixel intensities observed across the adipose-to-fibroglandular attenuation range vary between 5% and 7%, depending on beam quality, and for a single beam quality are 2.09% and 3.36%, respectively, with and without a grid.
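    The kernel-superposition step described above reduces, in its simplest shift-invariant form, to a 2D convolution of the primary fluence with a scatter kernel. The sketch below shows only that simplified form; in the paper the kernels vary with beam quality, position and the presence of a grid, which is not captured here.

```python
# Simplified scatter-by-superposition: convolve primary fluence with one kernel.
import numpy as np
from scipy.signal import fftconvolve

def scatter_image(primary, kernel):
    """primary: 2D primary-fluence image; kernel: 2D scatter kernel (sums to the
    scatter-to-primary fraction)."""
    return fftconvolve(primary, kernel, mode="same")

def total_image(primary, kernel):
    return primary + scatter_image(primary, kernel)
```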

  3. Dependent scattering and absorption by densely packed discrete spherical particles: Effects of complex refractive index

    NASA Astrophysics Data System (ADS)

    Ma, L. X.; Tan, J. Y.; Zhao, J. M.; Wang, F. Q.; Wang, C. A.; Wang, Y. Y.

    2017-07-01

    Due to dependent scattering and absorption effects, the radiative transfer equation (RTE) may not be suitable for dealing with radiative transfer in dense discrete random media. This paper continues previous research on multiple and dependent scattering in densely packed discrete particle systems, and puts emphasis on the effects of the particle complex refractive index. The Mueller matrix elements of the scattering system with different complex refractive indexes are obtained by both an electromagnetic method and a radiative transfer method. The Maxwell equations are directly solved based on the superposition T-matrix method, while the RTE is solved by the Monte Carlo method combined with the hard sphere model in the Percus-Yevick approximation (HSPYA) to account for dependent scattering effects. The results show that for densely packed discrete random media composed of medium size parameter particles (6.964 in this study), the demarcation line between independent and dependent scattering has remarkable connections with the particle complex refractive index. As the particle volume fraction increases to a certain value, densely packed discrete particles with a higher refractive-index contrast between the particles and the host medium and a higher particle absorption index are more likely to show stronger dependent-scattering characteristics. Due to the failure of the extended Rayleigh-Debye scattering condition, the HSPYA has a weak effect on the dependent scattering correction at large phase shift parameters.

  4. Compact Polarimetry in a Low Frequency Spaceborne Context

    NASA Technical Reports Server (NTRS)

    Truong-Loi, M-L.; Freeman, A.; Dubois-Fernandez, P.; Pottier, E.

    2011-01-01

    Compact polarimetry has been shown to be an interesting alternative mode to full polarimetry when global coverage and revisit time are key issues. It consists of transmitting a single polarization while receiving two. Several critical points have been identified, one being the Faraday rotation (FR) correction and the other the calibration. When a low-frequency electromagnetic wave travels through the ionosphere, it undergoes a rotation of the polarization plane about the radar line of sight if it is linearly polarized, and a simple phase shift if it is circularly polarized. In a low-frequency radar, the only possible choice for the transmit polarization is therefore circular, in order to guarantee that the scattering element on the ground is illuminated with a constant polarization independently of the ionosphere state. This allows meaningful time-series analysis and interferometry, as long as the Faraday rotation effect on the return path is corrected. In full-polarimetric (FP) mode, two techniques allow the FR to be estimated: the Freeman method using linearly polarized data, and the Bickel and Bates theory based on the transformation of the measured scattering matrix to a circular basis. In CP mode, an alternative procedure is presented which relies on the scattering properties of bare surfaces. These bare surfaces are selected by the conformity coefficient, which is invariant with FR. This coefficient is compared to other published classifications to show its potential in distinguishing three different scattering types: surface, double-bounce and volume. The performance of the bare-surface selection and FR estimation is evaluated on PALSAR and airborne data. Once the bare surfaces are selected and the Faraday angle estimated over them, the correction can be applied over the whole scene. The algorithm is compared with both FP techniques. In the last part of the paper, the calibration of a CP system from the point of view of classical matrix transformation methods in polarimetry is proposed.
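
    The effect of Faraday rotation on a linear-basis scattering matrix is commonly modelled as M = R(Omega) S R(Omega), one rotation per pass through the ionosphere. The sketch below recovers Omega by searching for the de-rotation angle that restores reciprocity (Shv = Svh); it is a generic illustration under that model, not the Freeman or Bickel and Bates estimator discussed above, and it assumes |Omega| < 45 degrees to avoid the usual 90-degree ambiguity.

    import numpy as np

    def rot(omega):
        """One-way Faraday rotation matrix (applied once per pass)."""
        c, s = np.cos(omega), np.sin(omega)
        return np.array([[c, s], [-s, c]])

    def apply_faraday(S, omega):
        """Measured linear-basis matrix: M = R(omega) @ S @ R(omega)."""
        return rot(omega) @ S @ rot(omega)

    def estimate_faraday(M, candidates=np.linspace(-np.pi / 4, np.pi / 4, 1801)):
        """De-rotation angle that best restores reciprocity (Shv = Svh)."""
        def residual(w):
            Sd = rot(-w) @ M @ rot(-w)
            return abs(Sd[0, 1] - Sd[1, 0])
        return min(candidates, key=residual)

    # toy reciprocal target (surface-like, weak cross-pol) rotated by 10 degrees
    S_true = np.array([[1.0 + 0.2j, 0.05], [0.05, 0.8 - 0.1j]])
    M = apply_faraday(S_true, np.deg2rad(10.0))
    print(np.rad2deg(estimate_faraday(M)))   # ~10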

  5. Electron collisions with phenol: Total, integral, differential, and momentum transfer cross sections and the role of multichannel coupling effects on the elastic channel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costa, Romarly F. da; Centro de Ciências Naturais e Humanas, Universidade Federal do ABC, 09210-580 Santo André, São Paulo; Oliveira, Eliane M. de

    2015-03-14

    We report theoretical and experimental total cross sections for electron scattering by phenol (C6H5OH). The experimental data were obtained with an apparatus based in Madrid and the calculated cross sections with two different methodologies, the independent atom method with screening corrected additivity rule (IAM-SCAR), and the Schwinger multichannel method with pseudopotentials (SMCPP). The SMCPP method in the N_open-channel coupling scheme, at the static-exchange-plus-polarization approximation, is employed to calculate the scattering amplitudes at impact energies ranging from 5.0 eV to 50 eV. We discuss the multichannel coupling effects in the calculated cross sections, in particular how the number of excited states included in the open-channel space impacts upon the convergence of the elastic cross sections at higher collision energies. The IAM-SCAR approach was also used to obtain the elastic differential cross sections (DCSs) and for correcting the experimental total cross sections for the so-called forward angle scattering effect. We found a very good agreement between our SMCPP theoretical differential, integral, and momentum transfer cross sections and experimental data for benzene (a molecule differing from phenol by replacing a hydrogen atom in benzene with a hydroxyl group). Although some discrepancies were found for lower energies, the agreement between the SMCPP data and the DCSs obtained with the IAM-SCAR method improves, as expected, as the impact energy increases. We also have a good agreement among the present SMCPP calculated total cross section (which includes elastic, 32 inelastic electronic excitation processes and ionization contributions, the latter estimated with the binary-encounter-Bethe model), the IAM-SCAR total cross section, and the experimental data when the latter is corrected for the forward angle scattering effect [Fuss et al., Phys. Rev. A 88, 042702 (2013)].

  6. Fast analytical scatter estimation using graphics processing units.

    PubMed

    Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris

    2015-01-01

    To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first-order scatter in cone-beam image reconstruction improves the contrast-to-noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter; with further acceleration and a method to account for multiple scatter, it may be useful for practical scatter correction schemes.
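
    The first-order Compton estimate above hinges on weighting each source-voxel-detector path by the Klein-Nishina differential cross section. A minimal sketch of that ingredient (not the GPU ray tracer itself) follows; the 120 keV example energy is illustrative.

    import numpy as np

    R_E = 2.8179403262e-15  # classical electron radius in metres
    MEC2_KEV = 511.0        # electron rest energy in keV

    def klein_nishina(energy_kev, theta):
        """Klein-Nishina differential cross section dsigma/dOmega (m^2/sr)
        for photons of the given energy scattering through angle theta."""
        k = energy_kev / MEC2_KEV
        ratio = 1.0 / (1.0 + k * (1.0 - np.cos(theta)))   # E'/E after scattering
        return 0.5 * R_E**2 * ratio**2 * (ratio + 1.0 / ratio - np.sin(theta)**2)

    # relative angular weighting used when accumulating first-order scatter
    theta = np.linspace(0.0, np.pi, 181)
    weights = klein_nishina(120.0, theta)                 # 120 keV photon
    # normalise to a probability density over solid angle
    weights /= np.trapz(2.0 * np.pi * np.sin(theta) * weights, theta)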

  7. Atmospheric monitoring in MAGIC and data corrections

    NASA Astrophysics Data System (ADS)

    Fruck, Christian; Gaug, Markus

    2015-03-01

    A method for analyzing returns of a custom-made "micro"-LIDAR system, operated alongside the two MAGIC telescopes, is presented. This method allows the transmission through the atmospheric boundary layer, as well as through thin cloud layers, to be calculated. This is achieved by applying exponential fits to regions of the back-scattering signal that are dominated by Rayleigh scattering. Making this real-time transmission information available in the MAGIC data stream allows atmospheric corrections to be applied later in the analysis. Such corrections allow the effective observation time of MAGIC to be extended by including data taken under adverse atmospheric conditions. In the future they will help reduce the systematic uncertainties of energy and flux.
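
    A minimal sketch of the fitting idea: in altitude windows dominated by Rayleigh scattering, the log of the range-corrected return is close to linear, and extrapolating the fits from below and above a cloud to a common altitude gives its two-way transmission. The synthetic profile and window choices are illustrative only, not MAGIC data.

    import numpy as np

    def layer_transmission(altitude_km, range_corrected_signal, clear_below, clear_above):
        """Fit ln(signal) ~ a + b*z in two Rayleigh-dominated altitude windows and
        read off the two-way transmission of whatever lies between them."""
        def fit(window):
            lo, hi = window
            sel = (altitude_km >= lo) & (altitude_km <= hi)
            b, a = np.polyfit(altitude_km[sel], np.log(range_corrected_signal[sel]), 1)
            return a, b
        a_lo, b_lo = fit(clear_below)
        a_hi, b_hi = fit(clear_above)
        # extrapolate both fits to the middle of the gap; their ratio is the
        # two-way transmission of the layer sitting inside the gap
        z_mid = 0.5 * (clear_below[1] + clear_above[0])
        return np.exp((a_hi + b_hi * z_mid) - (a_lo + b_lo * z_mid))

    # synthetic profile: clean exponential decay with a thin cloud at 4-5 km
    z = np.linspace(1.0, 10.0, 400)
    signal = np.exp(-0.12 * z)
    signal[z > 4.5] *= 0.7          # 30% two-way loss in the cloud
    print(layer_transmission(z, signal, clear_below=(1.5, 4.0), clear_above=(5.5, 9.5)))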

  8. The atmospheric correction algorithm for HY-1B/COCTS

    NASA Astrophysics Data System (ADS)

    He, Xianqiang; Bai, Yan; Pan, Delu; Zhu, Qiankun

    2008-10-01

    China launched its second ocean color satellite, HY-1B, on 11 April 2007, carrying two remote sensors. The Chinese Ocean Color and Temperature Scanner (COCTS) is the main sensor on HY-1B; it has not only eight visible and near-infrared wavelength bands similar to SeaWiFS, but also two additional thermal infrared bands to measure the sea surface temperature. COCTS therefore has broad application potential, including fishery resource protection and development, coastal monitoring and management, and marine pollution monitoring. Atmospheric correction is key to quantitative ocean color remote sensing. In this paper, the operational atmospheric correction algorithm for HY-1B/COCTS has been developed. Firstly, based on the vector radiative transfer numerical model of the coupled ocean-atmosphere system (PCOART), the exact Rayleigh scattering look-up table (LUT), aerosol scattering LUT and atmospheric diffuse transmission LUT for HY-1B/COCTS have been generated. Secondly, using the generated LUTs, the operational atmospheric correction algorithm for HY-1B/COCTS has been developed. The algorithm has been validated using simulated spectral data generated by PCOART, and the results show that the error of the water-leaving reflectance retrieved by this algorithm is less than 0.0005, which meets the accuracy requirement of atmospheric correction for ocean color remote sensing. Finally, the algorithm has been applied to HY-1B/COCTS remote sensing data; the retrieved water-leaving radiances are consistent with the Aqua/MODIS results, and the corresponding ocean color remote sensing products, including chlorophyll concentration and total suspended particulate matter concentration, have been generated.
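
    The Rayleigh step of a LUT-driven correction can be sketched as interpolation of a precomputed reflectance table at the observation geometry followed by subtraction from the top-of-atmosphere reflectance. The table values and the two-axis grid below are placeholders, not PCOART output; a real LUT would also carry relative azimuth, wavelength and pressure axes, and the aerosol and diffuse-transmittance terms are not handled here.

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # placeholder Rayleigh-reflectance LUT on a (solar zenith, view zenith) grid
    sza_grid = np.array([0.0, 20.0, 40.0, 60.0])   # degrees
    vza_grid = np.array([0.0, 15.0, 30.0, 45.0])
    rho_rayleigh_lut = np.array([
        [0.020, 0.021, 0.024, 0.029],
        [0.021, 0.022, 0.025, 0.030],
        [0.024, 0.025, 0.028, 0.034],
        [0.030, 0.031, 0.035, 0.042],
    ])
    rayleigh = RegularGridInterpolator((sza_grid, vza_grid), rho_rayleigh_lut)

    def correct_rayleigh(rho_toa, sza, vza):
        """Subtract the interpolated Rayleigh reflectance from the TOA reflectance;
        the remainder feeds the aerosol selection and transmittance steps."""
        return rho_toa - rayleigh((sza, vza))

    print(correct_rayleigh(rho_toa=0.065, sza=35.0, vza=22.0))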

  9. Density-functional calculations of transport properties in the nondegenerate limit and the role of electron-electron scattering

    DOE PAGES

    Desjarlais, Michael P.; Scullard, Christian R.; Benedict, Lorin X.; ...

    2017-03-13

    We compute electrical and thermal conductivities of hydrogen plasmas in the non-degenerate regime using Kohn-Sham Density Functional Theory (DFT) and an application of the Kubo-Greenwood response formula, and demonstrate that for thermal conductivity, the mean-field treatment of the electron-electron (e-e) interaction therein is insufficient to reproduce the weak-coupling limit obtained by plasma kinetic theories. An explicit e-e scattering correction to the DFT is posited by appealing to Matthiessen's Rule and the results of our computations of conductivities with the quantum Lenard-Balescu (QLB) equation. Further motivation of our correction is provided by an argument arising from the Zubarev quantum kinetic theory approach. Significant emphasis is placed on our efforts to produce properly converged results for plasma transport using Kohn-Sham DFT, so that an accurate assessment of the importance and efficacy of our e-e scattering corrections to the thermal conductivity can be made.

  10. Dose and scatter characteristics of a novel cone beam CT system for musculoskeletal extremities

    NASA Astrophysics Data System (ADS)

    Zbijewski, W.; Sisniega, A.; Vaquero, J. J.; Muhit, A.; Packard, N.; Senn, R.; Yang, D.; Yorkston, J.; Carrino, J. A.; Siewerdsen, J. H.

    2012-03-01

    A novel cone-beam CT (CBCT) system has been developed with promising capabilities for musculoskeletal imaging (e.g., weight-bearing extremities and combined radiographic / volumetric imaging). The prototype system demonstrates diagnostic-quality imaging performance, while the compact geometry and short scan orbit raise new considerations for scatter management and dose characterization that challenge conventional methods. The compact geometry leads to elevated, heterogeneous x-ray scatter distributions - even for small anatomical sites (e.g., knee or wrist), and the short scan orbit results in a non-uniform dose distribution. These complex dose and scatter distributions were investigated via experimental measurements and GPU-accelerated Monte Carlo (MC) simulation. The combination provided a powerful basis for characterizing dose distributions in patient-specific anatomy, investigating the benefits of an antiscatter grid, and examining distinct contributions of coherent and incoherent scatter in artifact correction. Measurements with a 16 cm CTDI phantom show that the dose from the short-scan orbit (0.09 mGy/mAs at isocenter) varies from 0.16 to 0.05 mGy/mAs at various locations on the periphery (all obtained at 80 kVp). MC estimation agreed with dose measurements within 10-15%. Dose distribution in patient-specific anatomy was computed with MC, confirming such heterogeneity and highlighting the elevated energy deposition in bone (factor of ~5-10) compared to soft tissue. Scatter-to-primary ratio (SPR) up to ~1.5-2 was evident in some regions of the knee. A 10:1 antiscatter grid was found earlier to result in significant improvement in soft-tissue imaging performance without increase in dose. The results of MC simulations elucidated the mechanism behind scatter reduction in the presence of a grid. A ~3-fold reduction in average SPR was found in the MC simulations; however, a linear grid was found to impart additional heterogeneity in the scatter distribution, mainly due to the increase in the contribution of coherent scatter with increased spatial variation. Scatter correction using MC-generated scatter distributions demonstrated significant reduction of cupping and streak artifacts. Physical experimentation combined with GPU-accelerated MC simulation provided a sophisticated, yet practical approach for identifying low-dose acquisition techniques, optimizing scatter correction methods, and evaluating patient-specific dose.

  11. Combined Henyey-Greenstein and Rayleigh phase function.

    PubMed

    Liu, Quanhua; Weng, Fuzhong

    2006-10-01

    The phase function is an important parameter that affects the distribution of scattered radiation. In Rayleigh scattering, a scatterer is approximated by a dipole, and its phase function is analytically related to the scattering angle. In the Henyey-Greenstein (HG) approximation, the phase function preserves only the correct asymmetry factor (i.e., the first moment), which is essential for anisotropic scattering. When the HG function is applied to small particles, it produces a significant error in radiance. In addition, the HG function applies only to intensity radiative transfer. We develop a combined HG and Rayleigh (HG-Rayleigh) phase function. The HG phase function plays the role of a modulator, extending the application of the Rayleigh phase function to small-asymmetry scattering. The HG-Rayleigh phase function guarantees the correct asymmetry factor and is valid for polarized radiative transfer. It approaches the Rayleigh phase function for small particles. Thus the HG-Rayleigh phase function has wider applications for both intensity and polarimetric radiative transfer. For the microwave radiative transfer modeling in this study, the largest errors in the brightness temperature calculations for weakly asymmetric scattering are generally below 0.02 K when the HG-Rayleigh phase function is used. The errors can be much larger, in the 1-3 K range, if the Rayleigh and HG functions are applied separately.
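
    A commonly quoted closed form for a combined HG-Rayleigh phase function multiplies the Rayleigh angular factor by an HG-like denominator and renormalizes, reducing to the Rayleigh phase function at g = 0. The sketch below uses that form; whether it coincides in every detail with the modulation derived in the paper is an assumption.

    import numpy as np

    def hg_rayleigh_phase(mu, g):
        """Combined Henyey-Greenstein / Rayleigh phase function P(cos theta),
        normalised so that the average of P over the sphere is 1.
        mu = cos(scattering angle), g = asymmetry parameter (|g| < 1)."""
        norm = 3.0 * (1.0 - g**2) / (2.0 * (2.0 + g**2))
        return norm * (1.0 + mu**2) / (1.0 + g**2 - 2.0 * g * mu)**1.5

    # sanity checks: reduces to the Rayleigh phase function at g = 0 and stays
    # normalised for weakly asymmetric scattering
    mu = np.linspace(-1.0, 1.0, 100001)
    for g in (0.0, 0.1, 0.3):
        print(g, 0.5 * np.trapz(hg_rayleigh_phase(mu, g), mu))   # ~1.0 each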

  12. Evaluation of various energy windows at different radionuclides for scatter and attenuation correction in nuclear medicine.

    PubMed

    Asgari, Afrouz; Ashoor, Mansour; Sohrabpour, Mostafa; Shokrani, Parvaneh; Rezaei, Ali

    2015-05-01

    Improving the signal-to-noise ratio (SNR) and image quality by various methods is very important for detecting abnormalities in body organs. Scatter and attenuation of photons by the organs lead to errors in radiopharmaceutical estimation as well as degradation of images. The choice of a suitable energy window and radionuclide plays a key role in nuclear medicine; the aim is the lowest scatter fraction together with a nearly constant linear attenuation coefficient as a function of phantom thickness. The symmetrical window (SW), asymmetric window (ASW), high window (WH) and low window (WL) were compared for the Tc-99m and Sm-153 radionuclides with a solid water slab phantom (RW3) and a Teflon bone phantom. Matlab software and the Monte Carlo N-Particle (MCNP4C) code were used to simulate these methods and to obtain FWHM and full width at tenth maximum (FWTM) values from line spread functions (LSFs). The experimental data were obtained from an Orbiter Scintron gamma camera. Based on the results of the simulations as well as the experimental work, WH and ASW showed the lowest scatter fraction and a constant linear attenuation coefficient as a function of phantom thickness. WH and ASW were the optimal windows in nuclear medicine imaging for Tc-99m in the RW3 phantom and Sm-153 in the Teflon bone phantom. Attenuation correction was performed for the WH and ASW optimal windows and for these radionuclides using a filtered back projection algorithm. The simulations and experiments show very good agreement between experimental and simulated data, as well as between theoretical values and simulated data, with differences nominally less than 7.07% for Tc-99m and less than 8.00% for Sm-153. Corrected counts were not affected by the thickness of the scattering material. The simulated line spread functions (LSFs) for Sm-153 and Tc-99m in the phantoms, based on the four windows and the TEW method, indicated that the FWHM and FWTM values were approximately the same for the TEW method, WH and ASW, but the sensitivity at the optimal window was higher. Suitable determination of the energy window width on the energy spectrum can be useful in optimal design to improve efficiency and contrast. It is found that WH is preferred to ASW, and ASW is preferred to SW.
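
    For reference, the TEW estimate that the windows above are compared against is a trapezoidal interpolation between two narrow flanking windows; a minimal sketch follows, with illustrative window widths for a Tc-99m style acquisition (20% photopeak window at 140 keV, 3 keV flanking windows) rather than the settings used in the study.

    def tew_scatter_estimate(counts_lower, counts_upper, width_lower_kev,
                             width_upper_kev, width_main_kev):
        """Triple-energy-window scatter estimate inside the main photopeak window:
        the trapezoidal area spanned by the two narrow flanking windows."""
        return (counts_lower / width_lower_kev +
                counts_upper / width_upper_kev) * width_main_kev / 2.0

    def tew_corrected_counts(counts_main, counts_lower, counts_upper,
                             width_lower_kev=3.0, width_upper_kev=3.0,
                             width_main_kev=28.0):
        """Primary-count estimate after subtracting the TEW scatter estimate."""
        scatter = tew_scatter_estimate(counts_lower, counts_upper,
                                       width_lower_kev, width_upper_kev, width_main_kev)
        return max(counts_main - scatter, 0.0)

    print(tew_corrected_counts(counts_main=10500.0, counts_lower=600.0, counts_upper=90.0))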

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, S; Meredith, R; Azure, M

    Purpose: To support the phase I trial for toxicity, biodistribution and pharmacokinetics of intra-peritoneal (IP) 212Pb-TCMC-trastuzumab in patients with HER-2 expressing malignancy. A whole body gamma camera imaging method was developed for estimating the amount of 212Pb-TCMC-trastuzumab left in the peritoneal cavity. Methods: 212Pb decays to 212Bi via beta emission. 212Bi emits an alpha particle at an average of 6.1 MeV. The 238.6 keV gamma ray, with a 43.6% yield, can be exploited for imaging. The initial phantom was made of saline bags containing 212Pb. Images were collected for 238.6 keV with a medium energy general purpose collimator. There are other high energy gamma emissions (e.g. 511 keV, 8%; 583 keV, 31%) that penetrate the septa of the collimator and contribute scatter into the 238.6 keV window. An upper scatter window was used for scatter correction for these high energy gammas. Results: A small source containing 212Pb can be easily visualized. Scatter correction on images of a small 212Pb source resulted in a ∼50% reduction in the full width at tenth maximum (FWTM), while the change in full width at half maximum (FWHM) was <10%. For photopeak images, substantial scatter around the phantom source extended to >5 cm outside; scatter correction improved image contrast by removing this scatter around the sources. Patient imaging in the 1st cohort (n=3) showed little redistribution of 212Pb-TCMC-trastuzumab out of the peritoneal cavity. Compared to the early post-treatment images, the 18-hour post-injection images illustrated the shift to a more uniform anterior/posterior abdominal distribution and the loss of intensity due to radioactive decay. Conclusion: Use of a medium energy collimator, a 15% width of the 238.6 keV photopeak, and a 7.5% upper scatter window is adequate for quantification of 212Pb radioactivity inside the peritoneal cavity for alpha radioimmunotherapy of ovarian cancer. Research Support: AREVA Med, NIH 1UL1RR025777-01.

  14. Determination of the quenching correction factors for plastic scintillation detectors in therapeutic high-energy proton beams

    PubMed Central

    Wang, L L W; Perles, L A; Archambault, L; Sahoo, N; Mirkovic, D; Beddar, S

    2013-01-01

    Plastic scintillation detectors (PSDs) have many advantages over other detectors in small field dosimetry due to their high spatial resolution, excellent water equivalence and instantaneous readout. However, in proton beams, PSDs undergo a quenching effect which significantly reduces the signal level when the detector is close to the Bragg peak, where the linear energy transfer (LET) for protons is very high. This study measures the quenching correction factor (QCF) for a PSD in clinical passive-scattering proton beams and investigates the feasibility of using PSDs in depth-dose measurements in proton beams. A polystyrene-based PSD (BCF-12, ϕ0.5 mm × 4 mm) was used to measure the depth-dose curves in a water phantom for monoenergetic unmodulated proton beams of nominal energies 100, 180 and 250 MeV. A Markus plane-parallel ion chamber was also used to get the dose distributions for the same proton beams. From these results, the QCF as a function of depth was derived for these proton beams. Next, the LET depth distributions for these proton beams were calculated using the MCNPX Monte Carlo code, based on the experimentally validated nozzle models for these passive-scattering proton beams. The relationship between the QCF and the proton LET could then be derived as an empirical formula. Finally, the obtained empirical formula was applied to the PSD measurements to get the corrected depth-dose curves, which were compared to the ion chamber measurements. A linear relationship between the QCF and LET, i.e. Birks' formula, was obtained for the proton beams studied. The result is in agreement with the literature. The PSD measurements after the quenching corrections agree with ion chamber measurements within 5%. PSDs are good dosimeters for proton beam measurement if the quenching effect is corrected appropriately. PMID:23128412

  15. Determination of the quenching correction factors for plastic scintillation detectors in therapeutic high-energy proton beams

    NASA Astrophysics Data System (ADS)

    Wang, L. L. W.; Perles, L. A.; Archambault, L.; Sahoo, N.; Mirkovic, D.; Beddar, S.

    2012-12-01

    Plastic scintillation detectors (PSDs) have many advantages over other detectors in small field dosimetry due to their high spatial resolution, excellent water equivalence and instantaneous readout. However, in proton beams, the PSDs undergo a quenching effect which significantly reduces the signal level when the detector is close to the Bragg peak, where the linear energy transfer (LET) for protons is very high. This study measures the quenching correction factor (QCF) for a PSD in clinical passive-scattering proton beams and investigates the feasibility of using PSDs in depth-dose measurements in proton beams. A polystyrene-based PSD (BCF-12, ϕ0.5 mm × 4 mm) was used to measure the depth-dose curves in a water phantom for monoenergetic unmodulated proton beams of nominal energies 100, 180 and 250 MeV. A Markus plane-parallel ion chamber was also used to get the dose distributions for the same proton beams. From these results, the QCF as a function of depth was derived for these proton beams. Next, the LET depth distributions for these proton beams were calculated by using the MCNPX Monte Carlo code, based on the experimentally validated nozzle models for these passive-scattering proton beams. Then the relationship between the QCF and the proton LET could be derived as an empirical formula. Finally, the obtained empirical formula was applied to the PSD measurements to get the corrected depth-dose curves and they were compared to the ion chamber measurements. A linear relationship between the QCF and LET, i.e. Birks' formula, was obtained for the proton beams studied. The result is in agreement with the literature. The PSD measurements after the quenching corrections agree with ion chamber measurements within 5%. PSDs are good dosimeters for proton beam measurement if the quenching effect is corrected appropriately.
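
    Applying a Birks-type, LET-linear quenching correction to a measured PSD depth-dose curve can be sketched as follows; the kB coefficient and the LET and signal values are placeholders, not the fitted values from this study.

    import numpy as np

    def quenching_correction_factor(let_kev_per_um, kb_um_per_kev=0.01):
        """Birks-type correction: the measured light is reduced roughly as
        1/(1 + kB*LET), so the corrective factor applied to the PSD signal is
        (1 + kB*LET). kB here is a placeholder, not the fitted coefficient."""
        return 1.0 + kb_um_per_kev * np.asarray(let_kev_per_um)

    def corrected_depth_dose(psd_signal, let_profile_kev_per_um, kb_um_per_kev=0.01):
        """Multiply the raw PSD depth-dose curve by the LET-dependent QCF."""
        return psd_signal * quenching_correction_factor(let_profile_kev_per_um, kb_um_per_kev)

    # toy proton depth profile: LET rises toward the Bragg peak, quenching the signal
    let = np.array([0.5, 0.6, 0.8, 1.2, 2.5, 6.0])       # keV/um, illustrative
    raw = np.array([1.00, 1.02, 1.06, 1.15, 1.60, 2.40])  # relative PSD reading
    print(corrected_depth_dose(raw, let))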

  16. Evaluation of atmospheric correction algorithms for processing SeaWiFS data

    NASA Astrophysics Data System (ADS)

    Ransibrahmanakul, Varis; Stumpf, Richard; Ramachandran, Sathyadev; Hughes, Kent

    2005-08-01

    To enable production of the best chlorophyll products from SeaWiFS data, NOAA (Coastwatch and NOS) evaluated various atmospheric correction algorithms by comparing the satellite-derived water reflectance for each algorithm with in situ data. Gordon and Wang (1994) introduced a method to correct for Rayleigh and aerosol scattering in the atmosphere so that water reflectance may be derived from the radiance measured at the top of the atmosphere. However, since the correction assumed near-infrared scattering to be negligible - an invalid assumption in coastal waters - the method overestimates the atmospheric contribution and consequently underestimates water reflectance for the lower wavelength bands on extrapolation. Several improved methods to estimate the near-infrared correction exist: Siegel et al. (2000); Ruddick et al. (2000); Stumpf et al. (2002) and Stumpf et al. (2003), where an absorbing aerosol correction is also applied along with an additional 1.01% calibration adjustment for the 412 nm band. The evaluation shows that the near-infrared correction developed by Stumpf et al. (2003) results in an overall minimum error for U.S. waters. As of July 2004, NASA (SEADAS) has selected this as the default method for the atmospheric correction used to produce chlorophyll products.

  17. A theoretical study on the impact of particle scattering on the channel characteristics of underwater optical communication system

    NASA Astrophysics Data System (ADS)

    Sahu, Sanjay Kumar; Shanmugam, Palanisamy

    2018-02-01

    Scattering by water molecules and particulate matter determines the path and distance of photon propagation in an underwater medium. Consequently, the photon scattering angle (given by the scattering phase function) must be considered, in addition to the extinction coefficient of the aquatic medium governed by the absorption and scattering coefficients, when characterizing the channel of an underwater wireless optical communication (UWOC) system. This study focuses on analyzing the received signal power and impulse response of the UWOC channel based on Monte Carlo simulations for different water types, link distances, link geometries and transceiver parameters. A newly developed scattering phase function (referred to as the SS phase function), which, like the Petzold phase function, represents real water types more accurately, is considered for quantification of the channel characteristics along with the effects of the absorption and scattering coefficients. A comparison between the results simulated using various phase function models and the experimental measurements of Petzold revealed that the SS phase function model predicts values closely matching the actual values of Petzold's phase function, which further establishes the importance of using a correct scattering phase function model when estimating the channel capacity of a UWOC system in terms of the received power and channel impulse response. The results further demonstrate the great advantage of considering the nonzero probability of receiving scattered photons when estimating channel capacity, rather than counting only ballistic photons as in Beer's law, which severely underestimates the received power and limits the range of communication, especially in scattering water columns. The received power computed with the Monte Carlo method, considering different receiver aperture sizes and fields of view in different water types, is further analyzed and discussed. These results are essential for evaluating the underwater link budget and for constructing different system and design parameters for a UWOC system.
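
    The Beer's-law baseline that the study argues against counts only ballistic photons, so the received power is a single exponential in the total extinction over the link; a short sketch with illustrative coefficients (not values from the paper) follows.

    import numpy as np

    def ballistic_received_power(p_tx_watts, link_m, absorption_per_m, scattering_per_m):
        """Beer's-law estimate: only unscattered (ballistic) photons are counted,
        so every scattering event is treated as a loss. This underestimates the
        received power when the receiver also collects forward-scattered light."""
        extinction = absorption_per_m + scattering_per_m
        return p_tx_watts * np.exp(-extinction * link_m)

    # illustrative order-of-magnitude coefficients for clearer ocean water at 532 nm
    print(ballistic_received_power(p_tx_watts=1.0, link_m=20.0,
                                   absorption_per_m=0.05, scattering_per_m=0.10))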

  18. Measurement of neutrino flux from neutrino-electron elastic scattering

    DOE PAGES

    Park, J.; Aliaga, L.; Altinok, O.; ...

    2016-06-10

    Muon-neutrino elastic scattering on electrons is an observable neutrino process whose cross section is precisely known. Consequently, a measurement of this process in an accelerator-based νμ beam can improve the knowledge of the absolute neutrino flux impinging upon the detector; typically this knowledge is limited to ~10% due to uncertainties in hadron production and focusing. We isolated a sample of 135 ± 17 neutrino-electron elastic scattering candidates in the segmented scintillator detector of MINERvA, after subtracting backgrounds and correcting for efficiency. We show how this sample can be used to reduce the total uncertainty on the NuMI νμ flux from 9% to 6%. Finally, our measurement provides a flux constraint that is useful to other experiments using the NuMI beam, and this technique is applicable to future neutrino beams operating at multi-GeV energies.

  19. Unified connected theory of few-body reaction mechanisms in N-body scattering theory

    NASA Technical Reports Server (NTRS)

    Polyzou, W. N.; Redish, E. F.

    1978-01-01

    A unified treatment of different reaction mechanisms in nonrelativistic N-body scattering is presented. The theory is based on connected kernel integral equations that are expected to become compact for reasonable constraints on the potentials. The operators T±^ab(A) are approximate transition operators that describe the scattering proceeding through an arbitrary reaction mechanism A. These operators are uniquely determined by a connected kernel equation and satisfy an optical theorem consistent with the choice of reaction mechanism. Connected kernel equations relating T±^ab(A) to the full T±^ab allow correction of the approximate solutions for any ignored process to any order. This theory gives a unified treatment of all few-body reaction mechanisms with the same dynamic simplicity of a model calculation, but can include complicated reaction mechanisms involving overlapping configurations where it is difficult to formulate models.

  20. Measurement of neutrino flux from neutrino-electron elastic scattering

    NASA Astrophysics Data System (ADS)

    Park, J.; Aliaga, L.; Altinok, O.; Bellantoni, L.; Bercellie, A.; Betancourt, M.; Bodek, A.; Bravar, A.; Budd, H.; Cai, T.; Carneiro, M. F.; Christy, M. E.; Chvojka, J.; da Motta, H.; Dytman, S. A.; Díaz, G. A.; Eberly, B.; Felix, J.; Fields, L.; Fine, R.; Gago, A. M.; Galindo, R.; Ghosh, A.; Golan, T.; Gran, R.; Harris, D. A.; Higuera, A.; Kleykamp, J.; Kordosky, M.; Le, T.; Maher, E.; Manly, S.; Mann, W. A.; Marshall, C. M.; Martinez Caicedo, D. A.; McFarland, K. S.; McGivern, C. L.; McGowan, A. M.; Messerly, B.; Miller, J.; Mislivec, A.; Morfín, J. G.; Mousseau, J.; Naples, D.; Nelson, J. K.; Norrick, A.; Nuruzzaman; Osta, J.; Paolone, V.; Patrick, C. E.; Perdue, G. N.; Rakotondravohitra, L.; Ramirez, M. A.; Ray, H.; Ren, L.; Rimal, D.; Rodrigues, P. A.; Ruterbories, D.; Schellman, H.; Solano Salinas, C. J.; Tagg, N.; Tice, B. G.; Valencia, E.; Walton, T.; Wolcott, J.; Wospakrik, M.; Zavala, G.; Zhang, D.; Miner ν A Collaboration

    2016-06-01

    Muon-neutrino elastic scattering on electrons is an observable neutrino process whose cross section is precisely known. Consequently a measurement of this process in an accelerator-based νμ beam can improve the knowledge of the absolute neutrino flux impinging upon the detector; typically this knowledge is limited to ˜10 % due to uncertainties in hadron production and focusing. We have isolated a sample of 135 ±17 neutrino-electron elastic scattering candidates in the segmented scintillator detector of MINERvA, after subtracting backgrounds and correcting for efficiency. We show how this sample can be used to reduce the total uncertainty on the NuMI νμ flux from 9% to 6%. Our measurement provides a flux constraint that is useful to other experiments using the NuMI beam, and this technique is applicable to future neutrino beams operating at multi-GeV energies.

  1. Neutron and X-ray total scattering study of hydrogen disorder in fully hydrated hydrogrossular, Ca3Al2(O4H4)3

    NASA Astrophysics Data System (ADS)

    Keen, David A.; Keeble, Dean S.; Bennett, Thomas D.

    2018-04-01

    The structure of fully hydrated grossular, or katoite, contains an unusual arrangement of four O-H bonds within each O4 tetrahedron. Neutron and X-ray total scattering from a powdered deuterated sample have been measured to investigate the local arrangement of this O4D4 cluster. The O-D bond length determined directly from the pair distribution function is 0.954 Å, although the Rietveld-refined distance between the average O and D positions was slightly smaller. Reverse Monte Carlo refinement of supercell models against the total scattering data shows that, other than the consequences of this correctly determined O-D bond length, there is little to suggest that the O4D4 structure is locally significantly different from that expected based on the average structure determined solely from Bragg diffraction.

  2. A Potential Cyclotron Resonant Scattering Feature in the Ultraluminous X-Ray Source Pulsar NGC 300 ULX1 Seen by NuSTAR and XMM-Newton

    NASA Astrophysics Data System (ADS)

    Walton, D. J.; Bachetti, M.; Fürst, F.; Barret, D.; Brightman, M.; Fabian, A. C.; Grefenstette, B. W.; Harrison, F. A.; Heida, M.; Kennea, J.; Kosec, P.; Lau, R. M.; Madsen, K. K.; Middleton, M. J.; Pinto, C.; Steiner, J. F.; Webb, N.

    2018-04-01

    Based on phase-resolved broadband spectroscopy using XMM-Newton and NuSTAR, we report on a potential cyclotron resonant scattering feature (CRSF) at E ∼ 13 keV in the pulsed spectrum of the recently discovered ultraluminous X-ray source (ULX) pulsar NGC 300 ULX1. If this interpretation is correct, the implied magnetic field of the central neutron star is B ∼ 10^12 G (assuming scattering by electrons), similar to that estimated from the observed spin-up of the star, and also similar to known Galactic X-ray pulsars. We discuss the implications of this result for the connection between NGC 300 ULX1 and the other known ULX pulsars, particularly in light of the recent discovery of a likely proton cyclotron line in another ULX, M51 ULX-8.

  3. Total cross sections for electron scattering by 1-propanol at impact energies in the range 40-500 eV

    NASA Astrophysics Data System (ADS)

    da Silva, D. G. M.; Gomes, M.; Ghosh, S.; Silva, I. F. L.; Pires, W. A. D.; Jones, D. B.; Blanco, F.; Garcia, G.; Buckman, S. J.; Brunger, M. J.; Lopes, M. C. A.

    2017-11-01

    Absolute total cross section (TCS) measurements for electron scattering from 1-propanol molecules are reported for impact energies from 40 to 500 eV. These measurements were obtained using a new apparatus developed at Juiz de Fora Federal University, Brazil, which is based on measuring the attenuation of a collimated electron beam through a gas cell containing the molecules under study at a given pressure. In addition to these experimental measurements, we have also calculated TCSs using the Independent-Atom Model with Screening Corrected Additivity Rule and Interference (IAM-SCAR+I) approach, with typically very good agreement found between experiment and calculation.

  4. A phenomenological study of photon production in low energy neutrino nucleon scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jenkins, James P; Goldman, Terry J

    2009-01-01

    Low energy photon production is an important background to many current and future precision neutrino experiments. We present a phenomenological study of t-channel radiative corrections to neutral current neutrino-nucleus scattering. After introducing the relevant processes and phenomenological coupling constants, we explore the derived energy and angular distributions as well as total cross-section predictions along with their estimated uncertainties. This is supplemented throughout with comments on possible experimental signatures and implications. We conclude with a general discussion of the analysis in the context of complementary methodologies. This is based on a talk presented at the DPF 2009 meeting in Detroit, MI.

  5. Ground-based determination of atmospheric radiance for correction of ERTS-1 data

    NASA Technical Reports Server (NTRS)

    Peacock, K.

    1974-01-01

    A technique is described for estimating the atmospheric radiance observed by a downward sensor (ERTS) using ground-based measurements. A formula is obtained for the sky radiance at the time of the ERTS overpass from the radiometric measurement of the sky radiance made at a particular solar zenith angle and air mass. A graph illustrates ground-based sky radiance measurements as a function of the scattering angle for a range of solar air masses. Typical values for sky radiance at a solar zenith angle of 48 degrees are given.

  6. Transient radiative transfer in a scattering slab considering polarization.

    PubMed

    Yi, Hongliang; Ben, Xun; Tan, Heping

    2013-11-04

    Both transient behavior and polarization must be considered for a complete and correct description of short-pulse laser transfer in a scattering medium. A Monte Carlo (MC) method combined with a time-shift and superposition principle is developed to simulate transient vector (polarized) radiative transfer in a scattering medium. The transient vector radiative transfer matrix (TVRTM) is defined to describe the transient polarization behavior of a short-pulse laser propagating in the scattering medium. According to the definition of reflectivity, a new criterion for reflection at a Fresnel surface is presented. In order to improve computational efficiency and accuracy, a time-shift and superposition principle is applied to the MC model for transient vector radiative transfer. The results for transient scalar radiative transfer and steady-state vector radiative transfer are compared with those in the published literature, and excellent agreement is observed, which validates the correctness of the present model. Finally, transient radiative transfer is simulated considering the polarization effect of a short-pulse laser in a scattering medium, and the distributions of the Stokes vector in angular and temporal space are presented.

  7. Effect of Multiple Scattering on the Compton Recoil Current Generated in an EMP, Revisited

    DOE PAGES

    Farmer, William A.; Friedman, Alex

    2015-06-18

    Multiple scattering has historically been treated in EMP modeling through the obliquity factor. The validity of this approach is examined here. A simplified model problem, which correctly captures cyclotron motion, Doppler shifting due to the electron motion, and multiple scattering is first considered. The simplified problem is solved three ways: the obliquity factor, Monte-Carlo, and Fokker-Planck finite-difference. Because of the Doppler effect, skewness occurs in the distribution. It is demonstrated that the obliquity factor does not correctly capture this skewness, but the Monte-Carlo and Fokker-Planck finite-difference approaches do. Here, the obliquity factor and Fokker-Planck finite-difference approaches are then compared in a fuller treatment, which includes the initial Klein-Nishina distribution of the electrons, and the momentum dependence of both drag and scattering. It is found that, in general, the obliquity factor is adequate for most situations. However, as the gamma energy increases and the Klein-Nishina becomes more peaked in the forward direction, skewness in the distribution causes greater disagreement between the obliquity factor and a more accurate model of multiple scattering.

  8. Comparison of different Aethalometer correction schemes and a reference multi-wavelength absorption technique for ambient aerosol data

    NASA Astrophysics Data System (ADS)

    Saturno, Jorge; Pöhlker, Christopher; Massabò, Dario; Brito, Joel; Carbone, Samara; Cheng, Yafang; Chi, Xuguang; Ditas, Florian; Hrabě de Angelis, Isabella; Morán-Zuloaga, Daniel; Pöhlker, Mira L.; Rizzo, Luciana V.; Walter, David; Wang, Qiaoqiao; Artaxo, Paulo; Prati, Paolo; Andreae, Meinrat O.

    2017-08-01

    Deriving absorption coefficients from Aethalometer attenuation data requires different corrections to compensate for artifacts related to filter-loading effects, scattering by filter fibers, and scattering by aerosol particles. In this study, two different correction schemes were applied to seven-wavelength Aethalometer data, using multi-angle absorption photometer (MAAP) data as a reference absorption measurement at 637 nm. The compensation algorithms were compared to five-wavelength offline absorption measurements obtained with a multi-wavelength absorbance analyzer (MWAA), which serves as a multiple-wavelength reference measurement. The online measurements took place in the Amazon rainforest, from the wet-to-dry transition season to the dry season (June-September 2014). The mean absorption coefficient (at 637 nm) during this period was 1.8 ± 2.1 Mm-1, with a maximum of 15.9 Mm-1. Under these conditions, the filter-loading compensation was negligible. One of the correction schemes was found to artificially increase the short-wavelength absorption coefficients. It was found that accounting for the aerosol optical properties in the scattering compensation significantly affects the absorption Ångström exponent (åABS) retrievals. Proper Aethalometer data compensation schemes are crucial to retrieve the correct åABS, which is commonly implemented in brown carbon contribution calculations. Additionally, we found that the wavelength dependence of uncompensated Aethalometer attenuation data significantly correlates with the åABS retrieved from offline MWAA measurements.
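
    The absorption Ångström exponent discussed above is the power-law slope of the absorption coefficient with wavelength; a minimal two-wavelength sketch with illustrative numbers follows.

    import numpy as np

    def absorption_angstrom_exponent(b_abs_1, wavelength_1_nm, b_abs_2, wavelength_2_nm):
        """Power-law slope of the absorption coefficient between two wavelengths,
        assuming b_abs ~ lambda**(-aae)."""
        return -np.log(b_abs_1 / b_abs_2) / np.log(wavelength_1_nm / wavelength_2_nm)

    # illustrative pair: compensated absorption coefficients at 470 and 660 nm
    print(absorption_angstrom_exponent(3.1, 470.0, 2.0, 660.0))   # ~1.3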

  9. WE-AB-207A-02: John’s Equation Based Consistency Condition and Incomplete Projection Restoration Upon Circular Orbit CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, J; Qi, H; Wu, S

    Purpose: In transmitted X-ray tomography imaging, projections are sometimes incomplete due to a variety of reasons, such as geometry inaccuracy, defective detector cells, etc. To address this issue, we have derived a direct consistency condition based on John's Equation, and proposed a method to effectively restore incomplete projections based on this consistency condition. Methods: Through parameter substitutions, we have derived a direct consistency condition equation from John's equation, in which the left side is only the projection derivative with respect to view and the right side contains projection derivatives with respect to the other geometrical parameters. Based on this consistency condition, a projection restoration method is proposed, which includes five steps: 1) Forward projecting the reconstructed image and using linear interpolation to estimate the incomplete projections as the initial result; 2) Performing a Fourier transform on the projections; 3) Restoring the incomplete frequency data using the consistency condition equation; 4) Performing an inverse Fourier transform; 5) Repeating steps 2)-4) until our criterion for terminating the iteration is met. Results: A beam-blocking-based scatter correction case and a bad-pixel correction case were used to demonstrate the efficacy and robustness of our restoration method. The mean absolute error (MAE), signal-to-noise ratio (SNR) and mean square error (MSE) were employed as evaluation metrics for the reconstructed images. For the scatter correction case, the MAE is reduced from 63.3% to 71.7% with 4 iterations. Compared with the existing Patch's method, the MAE of our method is further reduced by 8.72%. For the bad-pixel case, the SNR of the reconstructed image by our method is increased from 13.49% to 21.48%, with the MSE being decreased by 45.95%, compared with the linear interpolation method. Conclusion: Our studies have demonstrated that our restoration method based on the new consistency condition can effectively restore incomplete projections, especially their high frequency components.
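
    A schematic of the five-step loop follows, with the John's-equation consistency step left as a caller-supplied function since the exact expression is not reproduced here; the array layout and names are assumptions.

    import numpy as np

    def restore_projections(projections, bad_mask, apply_consistency, n_iter=4):
        """Iteratively restore flagged projection samples (sketch of the loop above).

        projections       : 2D array (views x detector bins), with flagged entries
                            pre-filled by forward projection / linear interpolation
        bad_mask          : boolean array, True where data are missing or corrupted
        apply_consistency : callable enforcing the John's-equation-based consistency
                            condition on the Fourier-domain data (not reproduced here)
        """
        restored = projections.copy()
        for _ in range(n_iter):
            spectrum = np.fft.fft2(restored)              # step 2: Fourier transform
            spectrum = apply_consistency(spectrum)        # step 3: consistency condition
            updated = np.real(np.fft.ifft2(spectrum))     # step 4: inverse transform
            # keep measured samples untouched; replace only the flagged ones
            restored[bad_mask] = updated[bad_mask]
        return restored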

  10. Method of measuring blood oxygenation based on spectroscopy of diffusely scattered light

    NASA Astrophysics Data System (ADS)

    Kleshnin, M. S.; Orlova, A. G.; Kirillin, M. Yu.; Golubyatnikov, G. Yu.; Turchin, I. V.

    2017-05-01

    A new approach to the measurement of blood oxygenation is developed and implemented, based on an original two-step algorithm that reconstructs the relative concentrations of biological chromophores (haemoglobin, water, lipids) from the measured spectra of diffusely scattered light at different distances from the radiation source. Numerical experiments and validation of the proposed approach using a biological phantom have shown the high accuracy of the reconstruction of the optical properties of the object in question, as well as the possibility of correct calculation of haemoglobin oxygenation in the presence of additive noise without calibration of the measuring device. The results of experimental studies in animals agree with previously published results obtained by other research groups and demonstrate the possibility of applying the developed method to the monitoring of blood oxygenation in tumour tissues.
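
    Once absorption has been reconstructed at several wavelengths, the chromophore step reduces to a linear least-squares unmixing against known extinction spectra, with oxygenation given by HbO2/(HbO2 + Hb); the extinction values below are placeholders, and the two-step optical reconstruction itself is not reproduced.

    import numpy as np

    # placeholder extinction coefficients (rows: wavelengths, columns: HbO2, Hb, water)
    extinction = np.array([
        [0.30, 1.10, 0.02],
        [0.45, 0.80, 0.03],
        [0.80, 0.70, 0.05],
        [1.00, 0.75, 0.10],
    ])

    def oxygenation_from_absorption(mu_a):
        """Least-squares unmixing of measured absorption into chromophore
        concentrations, followed by StO2 = HbO2 / (HbO2 + Hb)."""
        conc, *_ = np.linalg.lstsq(extinction, mu_a, rcond=None)
        hbo2, hb, _water = conc
        return hbo2 / (hbo2 + hb)

    # synthetic test: 70% oxygenation plus a little water signal
    true_conc = np.array([0.7, 0.3, 1.0])
    print(oxygenation_from_absorption(extinction @ true_conc))   # ~0.7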

  11. Impact on dose and image quality of a software-based scatter correction in mammography.

    PubMed

    Monserrat, Teresa; Prieto, Elena; Barbés, Benigno; Pina, Luis; Elizalde, Arlette; Fernández, Belén

    2018-06-01

    Background: In 2014, Siemens developed a new software-based scatter correction (Progressive Reconstruction Intelligently Minimizing Exposure [PRIME]), enabling grid-less digital mammography. Purpose: To compare doses and image quality between PRIME (grid-less) and standard (with anti-scatter grid) modes. Material and Methods: Contrast-to-noise ratio (CNR) was measured for various polymethylmethacrylate (PMMA) thicknesses, and the dose values provided by the mammography unit were recorded. CDMAM phantom images were acquired for various PMMA thicknesses and the inverse Image Quality Figure (IQFinv) was calculated. Values of incident entrance surface air kerma (ESAK) and average glandular dose (AGD) were obtained from the DICOM header for a total of 1088 pairs of clinical cases. Two experienced radiologists subjectively compared the image quality of a total of 149 pairs of clinical cases. Results: CNR values were higher and doses were lower in PRIME mode for all thicknesses. IQFinv values in PRIME mode were lower for all thicknesses except for 40 mm of PMMA equivalent, for which IQFinv was slightly greater in PRIME mode. A mean reduction of 10% in ESAK and 12% in AGD was obtained in PRIME mode with respect to standard mode. The clinical image quality of PRIME and standard acquisitions was similar in most of the cases (84% for the first radiologist and 67% for the second one). Conclusion: The use of PRIME software reduces, on average, the radiation dose to the breast without affecting image quality. This reduction is greater for thinner and denser breasts.

  12. Electroweak radiative corrections for polarized Moller scattering at the future 11 GeV JLab experiment

    DOE PAGES

    Aleksejevs, Aleksandrs; Barkanova, Svetlana; Ilyichev, Alexander; ...

    2010-11-19

    We perform updated and detailed calculations of the complete NLO set of electroweak radiative corrections to parity-violating e⁻e⁻ → e⁻e⁻(γ) scattering asymmetries at energies relevant for the ultra-precise Moller experiment coming soon at JLab. Our numerical results are presented for a range of experimental cuts, and the relative importance of various contributions is analyzed. In addition, we provide very compact expressions, analytically free from non-physical parameters, and show them to be valid for fast yet accurate estimations.

  13. Hadron mass corrections in semi-inclusive deep-inelastic scattering

    DOE PAGES

    Guerrero Teran, Juan Vicente; Ethier, James J.; Accardi, Alberto; ...

    2015-09-24

    Spin-dependent cross sections for semi-inclusive lepton-nucleon scattering are derived in the framework of collinear factorization, including the effects of the masses of the target and of the produced hadron at finite Q². At leading order the cross sections factorize into products of parton distribution and fragmentation functions evaluated in terms of new, mass-dependent scaling variables. The size of the hadron mass corrections is estimated at kinematics relevant for current and future experiments, and the implications for the extraction of parton distributions from semi-inclusive measurements are discussed.

  14. A Phenomenological Determination of the Pion-Nucleon Scattering Lengths from Pionic Hydrogen

    NASA Astrophysics Data System (ADS)

    Ericson, T. E. O.; Loiseau, B.; Wycech, S.

    A model-independent expression for the electromagnetic corrections to the phenomenological hadronic pion-nucleon (πN) scattering length a_h, extracted from pionic hydrogen, is obtained. In a non-relativistic approach and using an extended charge distribution, these corrections are derived up to terms of order α² log α in the limit of a short-range hadronic interaction. We infer a_h(π⁻p) = 0.0870(5) m_π⁻¹, which, through the GMO relation, gives the πNN coupling g²(π±pn)/(4π) = 14.04(17).

  15. On the role of the frozen surface approximation in small wave-height perturbation theory for moving surfaces

    NASA Astrophysics Data System (ADS)

    Keiffer, Richard; Novarini, Jorge; Scharstein, Robert

    2002-11-01

    In the standard development of the small wave-height approximation (SWHA) perturbation theory for scattering from moving rough surfaces [e.g., E. Y. Harper and F. M. Labianca, J. Acoust. Soc. Am. 58, 349-364 (1975)], the necessity for any sort of frozen surface approximation is avoided by the replacement of the rough boundary by a flat (and static) boundary. In this paper, this seemingly fortuitous byproduct of the small wave-height approximation is examined and found to fail to fully agree with an analysis based on the kinematics of the problem. Specifically, the first-order correction term from the standard perturbation approach predicts a scattered amplitude that depends on the source frequency, whereas the kinematics of the problem point to a scattered amplitude that depends on the scattered frequency. It is shown that a perturbation approach in which an explicit frozen surface approximation is made before the SWHA is invoked predicts (first-order) scattered amplitudes that are in agreement with the kinematic analysis. [Work supported by ONR/NRL (PE 61153N-32) and by grants of computer time from the DoD HPC Shared Resource Center at Stennis Space Center, MS.]

  16. Simulation of hole-mobility in doped relaxed and strained Ge layers

    NASA Astrophysics Data System (ADS)

    Watling, Jeremy R.; Riddet, Craig; Chan, Morgan Kah H.; Asenov, Asen

    2010-11-01

    As silicon-based metal-oxide-semiconductor field-effect transistors (MOSFETs) are reaching the limits of their performance with scaling, alternative channel materials are being considered to maintain performance in future complementary metal-oxide-semiconductor technology generations. Thus there is renewed interest in employing Ge as a channel material in p-MOSFETs, due to the significant improvement in hole mobility as compared to Si. Here we employ full-band Monte Carlo simulation to study hole transport properties in Ge. We present mobility and velocity-field characteristics for different transport directions in p-doped relaxed and strained Ge layers. The simulations are based on a method for overcoming the potentially large dynamic range of scattering rates, which results from the long-range nature of the unscreened Coulombic interaction. Our model for ionized impurity scattering includes the effects of dynamic Lindhard screening, coupled with phase-shift and multi-ion corrections, along with plasmon scattering. We show that all these effects play a role in determining hole carrier transport in doped Ge layers and cannot be neglected.

  17. Highly Enhanced Raman Scattering on Carbonized Polymer Films.

    PubMed

    Yoon, Jong-Chul; Hwang, Jongha; Thiyagarajan, Pradheep; Ruoff, Rodney S; Jang, Ji-Hyun

    2017-06-28

    We have discovered that a carbonized polymer film is a reliable and durable carbon-based substrate for carbon enhanced Raman scattering (CERS). Commercially available SU8 was spin coated and carbonized (c-SU8) to yield a film optimized to have a favorable Fermi level position for efficient charge transfer, which results in a significant Raman scattering enhancement under mild measurement conditions. A highly sensitive CERS response (detection limit of 10^-8 M) that was uniform over a large area was achieved on a patterned c-SU8 film, and the Raman signal intensity has remained constant for 2 years. This approach works not only for the CMOS-compatible c-SU8 film but for any carbonized film with the correct composition and Fermi level, as demonstrated with carbonized PVA (poly(vinyl alcohol)) and carbonized PVP (polyvinylpyrrolidone) films. Our study expands the rather narrow range of Raman-active material platforms to include robust carbon-based films readily obtained from polymer precursors. As it uses broadly applicable and cheap polymers, it could offer great advantages in the development of practical devices for chemical/bio analysis and sensors.

  18. Absorption and scattering coefficient dependence of laser-Doppler flowmetry models for large tissue volumes.

    PubMed

    Binzoni, T; Leung, T S; Rüfenacht, D; Delpy, D T

    2006-01-21

    Based on quasi-elastic scattering theory (and random walk on a lattice approach), a model of laser-Doppler flowmetry (LDF) has been derived which can be applied to measurements in large tissue volumes (e.g. when the interoptode distance is >30 mm). The model holds for a semi-infinite medium and takes into account the transport-corrected scattering coefficient and the absorption coefficient of the tissue, and the scattering coefficient of the red blood cells. The model holds for anisotropic scattering and for multiple scattering of the photons by the moving scatterers of finite size. In particular, it has also been possible to take into account the simultaneous presence of both Brownian and pure translational movements. An analytical and simplified version of the model has also been derived and its validity investigated, for the case of measurements in human skeletal muscle tissue. It is shown that at large optode spacing it is possible to use the simplified model, taking into account only a 'mean' light pathlength, to predict the blood flow related parameters. It is also demonstrated that the 'classical' blood volume parameter, derived from LDF instruments, may not represent the actual blood volume variations when the investigated tissue volume is large. The simplified model does not need knowledge of the tissue optical parameters and thus should allow the development of very simple and cost-effective LDF hardware.

  19. N3LO corrections to jet production in deep inelastic scattering using the Projection-to-Born method

    NASA Astrophysics Data System (ADS)

    Currie, J.; Gehrmann, T.; Glover, E. W. N.; Huss, A.; Niehues, J.; Vogt, A.

    2018-05-01

    Computations of higher-order QCD corrections for processes with exclusive final states require a subtraction method for real-radiation contributions. We present the first generalisation of a subtraction method to third-order (N3LO) QCD corrections. The Projection-to-Born method is used to combine inclusive N3LO coefficient functions with an exclusive second-order (NNLO) calculation for a final state with an extra jet. The input requirements, advantages, and potential applications of the method are discussed, and validations at lower orders are performed. As a test case, we compute the N3LO corrections to kinematical distributions and production rates for single-jet production in deep inelastic scattering in the laboratory frame, and compare them with data from the ZEUS experiment at HERA. The corrections are small in the central rapidity region, where they stabilize the predictions to the sub-percent level. The corrections increase substantially towards forward rapidity, where large logarithmic effects are expected, thereby yielding an improved description of the data in this region.

  20. GEO-LEO reflectance band inter-comparison with BRDF and atmospheric scattering corrections

    NASA Astrophysics Data System (ADS)

    Chang, Tiejun; Xiong, Xiaoxiong Jack; Keller, Graziela; Wu, Xiangqian

    2017-09-01

    The inter-comparison of the reflective solar bands between instruments onboard a geostationary-orbit satellite and a low-Earth-orbit satellite is very helpful for assessing their calibration consistency. GOES-R was launched on November 19, 2016 and Himawari 8 was launched on October 7, 2014. Unlike the previous GOES instruments, the Advanced Baseline Imager on GOES-16 (GOES-R became GOES-16 after November 29, when it reached orbit) and the Advanced Himawari Imager (AHI) on Himawari 8 have onboard calibrators for the reflective solar bands. The assessment of calibration is important for their product quality enhancement. MODIS and VIIRS, with their stringent calibration requirements and excellent on-orbit calibration performance, provide good references. The simultaneous nadir overpass (SNO) and ray-matching are widely used inter-comparison methods for reflective solar bands. In this work, the inter-comparisons are performed over a pseudo-invariant target. The use of a stable and uniform calibration site provides comparisons at an appropriate reflectance level, accurate adjustment for band spectral coverage differences, a reduced impact of pixel mismatching, and consistent BRDF and atmospheric corrections. The site in this work is a desert site in Australia (latitude 29.0° S, longitude 139.8° E). Due to the difference in solar and view angles, two corrections are applied to make the measurements comparable. The first is the atmospheric scattering correction. The satellite sensor measurements are top-of-atmosphere reflectances; the scattering, especially Rayleigh scattering, should be removed so that the ground reflectance can be derived. Secondly, the angle differences magnify the BRDF effect, so the ground reflectance should be corrected to make the measurements comparable. The atmospheric correction is performed using a vector version of the Second Simulation of a Satellite Signal in the Solar Spectrum model, and the BRDF correction is performed using a semi-empirical model. AHI band 1 (0.47 μm) shows good agreement with VIIRS band M3, with a difference of 0.15%. AHI band 5 (1.69 μm) shows the largest difference in comparison with VIIRS band M10.

  1. Simulation-based artifact correction (SBAC) for metrological computed tomography

    NASA Astrophysics Data System (ADS)

    Maier, Joscha; Leinweber, Carsten; Sawall, Stefan; Stoschus, Henning; Ballach, Frederic; Müller, Tobias; Hammer, Michael; Christoph, Ralf; Kachelrieß, Marc

    2017-06-01

    Computed tomography (CT) is a valuable tool for the metrological assessment of industrial components. However, the application of CT to the investigation of highly attenuating objects or multi-material components is often restricted by the presence of CT artifacts caused by beam hardening, x-ray scatter, off-focal radiation, partial volume effects or the cone-beam reconstruction itself. In order to overcome this limitation, this paper proposes an approach to calculate a correction term that compensates for the contribution of artifacts and thus enables an appropriate assessment of these components using CT. To this end, we make use of computer simulations of the CT measurement process. Based on an appropriate model of the object, e.g. an initial reconstruction or a CAD model, two simulations are carried out. One simulation considers all physical effects that cause artifacts, using dedicated analytic methods as well as Monte Carlo-based models. The other one represents an ideal CT measurement, i.e. a measurement in parallel beam geometry with a monochromatic, point-like x-ray source and no x-ray scattering. Thus, the difference between these simulations is an estimate of the present artifacts and can be used to correct the acquired projection data or the corresponding CT reconstruction, respectively. The performance of the proposed approach is evaluated using simulated as well as measured data of single and multi-material components. Our approach yields CT reconstructions that are nearly free of artifacts and thereby clearly outperforms commonly used artifact reduction algorithms in terms of image quality. A comparison against tactile reference measurements demonstrates the ability of the proposed approach to increase the accuracy of the metrological assessment significantly.
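
    A minimal sketch of the correction step described above, with hypothetical simulate_full and simulate_ideal callables standing in for the analytic/Monte Carlo simulation of all artifact-causing effects and for the ideal parallel-beam, monochromatic, scatter-free simulation; the difference of the two simulated projection sets serves as the artifact estimate that is subtracted from the measured data.

```python
import numpy as np

def sbac_correct(measured_proj, model, simulate_full, simulate_ideal):
    """Simulation-based artifact correction, minimal sketch.

    measured_proj : ndarray of measured projections, e.g. shape (n_views, n_rows, n_cols)
    model         : object description (e.g. an initial reconstruction or a CAD model)
    simulate_full : hypothetical callable returning projections of `model` that include
                    beam hardening, x-ray scatter, off-focal radiation, etc.
    simulate_ideal: hypothetical callable returning ideal projections of `model`
                    (parallel beam, monochromatic point source, no scatter)
    """
    p_full = np.asarray(simulate_full(model))
    p_ideal = np.asarray(simulate_ideal(model))
    artifact_estimate = p_full - p_ideal       # everything the ideal measurement lacks
    return measured_proj - artifact_estimate   # corrected projection data
```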

  2. Development of PET projection data correction algorithm

    NASA Astrophysics Data System (ADS)

    Bazhanov, P. V.; Kotina, E. D.

    2017-12-01

    Positron emission tomography (PET) is a modern nuclear medicine method used to examine metabolism and the function of internal organs. It allows diseases to be diagnosed at an early stage. Mathematical algorithms are widely used not only for image reconstruction but also for PET data correction. In this paper, implementations of random-coincidence and scatter correction algorithms are considered, as well as an algorithm for modeling PET projection data acquisition that is used to verify the corrections.

  3. Dual-energy digital mammography for calcification imaging: scatter and nonuniformity corrections.

    PubMed

    Kappadath, S Cheenu; Shaw, Chris C

    2005-11-01

    Mammographic images of small calcifications, which are often the earliest signs of breast cancer, can be obscured by overlapping fibroglandular tissue. We have developed and implemented a dual-energy digital mammography (DEDM) technique for calcification imaging under full-field imaging conditions using a commercially available aSi:H/CsI:Tl flat-panel based digital mammography system. The low- and high-energy images were combined using a nonlinear mapping function to cancel the tissue structures and generate the dual-energy (DE) calcification images. The total entrance-skin exposure and mean-glandular dose from the low- and high-energy images were constrained so that they were similar to screening-examination levels. To evaluate the DE calcification image, we designed a phantom using calcium carbonate crystals to simulate calcifications of various sizes (212-425 microm) overlaid with breast-tissue-equivalent material 5 cm thick with a continuously varying glandular-tissue ratio from 0% to 100%. We report on the effects of scatter radiation and nonuniformity in x-ray intensity and detector response on the DE calcification images. The nonuniformity was corrected by normalizing the low- and high-energy images with full-field reference images. Correction of scatter in the low- and high-energy images significantly reduced the background signal in the DE calcification image. Under the current implementation of DEDM, utilizing the mammography system and dose level tested, calcifications in the 300-355 microm size range were clearly visible in DE calcification images. Calcification threshold sizes decreased to the 250-280 microm size range when the visibility criteria were lowered to barely visible. Calcifications smaller than approximately 250 microm were usually not visible. The visibility of calcifications with our DEDM imaging technique was limited by quantum noise, not system noise.
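
    A minimal sketch of the two corrections described above, under stated assumptions: hypothetical arrays hold the low- and high-energy images, their full-field reference images, and estimated scatter maps; the weighted log subtraction stands in for the authors' nonlinear tissue-cancellation mapping, and w_tissue is a hypothetical cancellation weight.

```python
import numpy as np

def flat_field(image, reference):
    """Remove spatial non-uniformity by dividing by a full-field reference image."""
    return image / np.clip(reference, 1e-6, None)

def de_calcification(low, high, low_ref, high_ref, scatter_low, scatter_high, w_tissue):
    """Combine scatter-corrected, flat-fielded low/high-energy images into a
    dual-energy calcification image via a weighted log subtraction; w_tissue is
    a hypothetical weight chosen so that glandular/adipose contrast cancels."""
    low_c = flat_field(low - scatter_low, low_ref)
    high_c = flat_field(high - scatter_high, high_ref)
    return (np.log(np.clip(high_c, 1e-6, None))
            - w_tissue * np.log(np.clip(low_c, 1e-6, None)))
```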

  4. Dual-energy digital mammography for calcification imaging: Scatter and nonuniformity corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kappadath, S. Cheenu; Shaw, Chris C.

    Mammographic images of small calcifications, which are often the earliest signs of breast cancer, can be obscured by overlapping fibroglandular tissue. We have developed and implemented a dual-energy digital mammography (DEDM) technique for calcification imaging under full-field imaging conditions using a commercially available aSi:H/CsI:Tl flat-panel based digital mammography system. The low- and high-energy images were combined using a nonlinear mapping function to cancel the tissue structures and generate the dual-energy (DE) calcification images. The total entrance-skin exposure and mean-glandular dose from the low- and high-energy images were constrained so that they were similar to screening-examination levels. To evaluate the DE calcification image, we designed a phantom using calcium carbonate crystals to simulate calcifications of various sizes (212-425 µm) overlaid with breast-tissue-equivalent material 5 cm thick with a continuously varying glandular-tissue ratio from 0% to 100%. We report on the effects of scatter radiation and nonuniformity in x-ray intensity and detector response on the DE calcification images. The nonuniformity was corrected by normalizing the low- and high-energy images with full-field reference images. Correction of scatter in the low- and high-energy images significantly reduced the background signal in the DE calcification image. Under the current implementation of DEDM, utilizing the mammography system and dose level tested, calcifications in the 300-355 µm size range were clearly visible in DE calcification images. Calcification threshold sizes decreased to the 250-280 µm size range when the visibility criteria were lowered to barely visible. Calcifications smaller than approximately 250 µm were usually not visible. The visibility of calcifications with our DEDM imaging technique was limited by quantum noise, not system noise.

  5. Robust incremental compensation of the light attenuation with depth in 3D fluorescence microscopy.

    PubMed

    Kervrann, C; Legland, D; Pardini, L

    2004-06-01

    Fluorescent signal intensities from confocal laser scanning microscopes (CLSM) suffer from several distortions inherent to the method. Namely, layers which lie deeper within the specimen are relatively dark due to absorption and scattering of both excitation and fluorescent light, photobleaching and/or other factors. Because of these effects, a quantitative analysis of images is not always possible without correction. Under certain assumptions, the decay of intensities can be estimated and used for a partial depth intensity correction. In this paper we propose an original robust incremental method for compensating the attenuation of intensity signals. Most previous correction methods are more or less empirical and based on fitting a decreasing parametric function to the section mean intensity curve computed by summing all pixel values in each section. The fitted curve is then used for the calculation of correction factors for each section and a new compensated sections series is computed. However, these methods do not perfectly correct the images. Hence, the algorithm we propose for the automatic correction of intensities relies on robust estimation, which automatically ignores pixels where measurements deviate from the decay model. It is based on techniques adopted from the computer vision literature for image motion estimation. The resulting algorithm is used to correct volumes acquired in CLSM. An implementation of such a restoration filter is discussed and examples of successful restorations are given.
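
    A minimal sketch of the robust idea described above (not the authors' exact estimator): an exponential decay is fitted to the per-section mean intensities by iteratively reweighted least squares, so that sections deviating from the decay model are down-weighted, and the fitted decay is then used to compensate the stack. The weighting scheme and decay model are assumptions made for illustration.

```python
import numpy as np

def robust_decay_fit(section_means, n_iter=20, c=2.0):
    """Robustly fit I(z) = I0 * exp(-a * z) to per-section mean intensities."""
    z = np.arange(len(section_means), dtype=float)
    y = np.log(np.clip(section_means, 1e-9, None))
    w = np.ones_like(y)
    for _ in range(n_iter):
        # linear fit of log-intensity vs depth, with robust weights on both sides
        A = np.vstack([np.ones_like(z), -z]).T
        coef, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
        log_i0, a = coef
        r = y - (log_i0 - a * z)                    # residuals to the decay model
        s = 1.4826 * np.median(np.abs(r)) + 1e-12   # robust scale estimate (MAD)
        w = 1.0 / (1.0 + (r / (c * s)) ** 2)        # Cauchy-type down-weighting
    return np.exp(log_i0), a

def compensate(stack, a):
    """Multiply each section of a (z, y, x) stack by exp(a*z) to undo the decay."""
    z = np.arange(stack.shape[0], dtype=float)
    return stack * np.exp(a * z)[:, None, None]
```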

  6. Environmental and Genetic Factors Explain Differences in Intraocular Scattering.

    PubMed

    Benito, Antonio; Hervella, Lucía; Tabernero, Juan; Pennos, Alexandros; Ginis, Harilaos; Sánchez-Romera, Juan F; Ordoñana, Juan R; Ruiz-Sánchez, Marcos; Marín, José M; Artal, Pablo

    2016-01-01

    To study the relative impact of genetic and environmental factors on the variability of intraocular scattering within a classical twin study. A total of 64 twin pairs, 32 monozygotic (MZ) (mean age: 54.9 ± 6.3 years) and 32 dizygotic (DZ) (mean age: 56.4 ± 7.0 years), were measured after a complete ophthalmologic exam had been performed to exclude ocular pathologies, such as cataracts, that increase intraocular scatter. Intraocular scattering was evaluated by using two different techniques based on estimation of the straylight parameter log(S): a compact optical instrument based on the principle of optical integration and a psychophysical measurement. Intraclass correlation coefficients (ICC) were used as descriptive statistics of twin resemblance, and genetic models were fitted to estimate heritability. No statistically significant difference was found for MZ and DZ groups for age (P = 0.203), best-corrected visual acuity (P = 0.626), cataract gradation (P = 0.701), sex (P = 0.941), optical log(S) (P = 0.386), or psychophysical log(S) (P = 0.568), with only a minor difference in equivalent sphere (P = 0.008). Intraclass correlation coefficients between siblings were similar for scatter parameters: 0.676 in MZ and 0.471 in DZ twins for optical log(S); 0.533 in MZ twins and 0.475 in DZ twins for psychophysical log(S). For equivalent sphere, ICCs were 0.767 in MZ and 0.228 in DZ twins. Conservative estimates of heritability for the measured scattering parameters were 0.39 and 0.20, respectively. Correlations of intraocular scatter (straylight) parameters in the groups of identical and nonidentical twins were similar. Heritability estimates were of limited magnitude, suggesting that genetic and environmental factors determine the variance of ocular straylight in healthy middle-aged adults.

  7. Scattered image artifacts from cone beam computed tomography and its clinical potential in bone mineral density estimation.

    PubMed

    Ko, Hoon; Jeong, Kwanmoon; Lee, Chang-Hoon; Jun, Hong Young; Jeong, Changwon; Lee, Myeung Su; Nam, Yunyoung; Yoon, Kwon-Ha; Lee, Jinseok

    2016-01-01

    Image artifacts affect the quality of medical images and may obscure anatomic structure and pathology. Numerous methods for suppression and correction of scattered image artifacts have been suggested in the past three decades. In this paper, we assessed the feasibility of using information on scattered artifacts for estimation of bone mineral density (BMD) without dual-energy X-ray absorptiometry (DXA) or quantitative computed tomographic imaging (QCT). To investigate the relationship between scattered image artifacts and BMD, we first used a forearm phantom and cone-beam computed tomography. In the phantom, we considered two regions of interest, bone-equivalent solid material containing 50 mg HA per cm3 and water, to represent low- and high-density trabecular bone, respectively. We compared the scattered image artifacts in the high-density material with those in the low-density material. The technique was then applied to osteoporosis patients and healthy subjects to assess its feasibility for BMD estimation. The high-density material produced a greater number of scattered image artifacts than the low-density material. Moreover, the radius and ulna of healthy subjects produced a greater number of scattered image artifacts than those from osteoporosis patients. Although other parameters, such as bone thickness and X-ray incidence, should be considered, our technique facilitated BMD estimation directly without DXA or QCT. We believe that BMD estimation based on assessment of scattered image artifacts may benefit the prevention, early treatment and management of osteoporosis.

  8. Comparison of the GHSSmooth and the Rayleigh-Rice surface scatter theories

    NASA Astrophysics Data System (ADS)

    Harvey, James E.; Pfisterer, Richard N.

    2016-09-01

    The scalar-based GHSSmooth surface scatter theory results in an expression for the BRDF in terms of the surface PSD that is very similar to that provided by the rigorous Rayleigh-Rice (RR) vector perturbation theory. However it contains correction factors for two extreme situations not shared by the RR theory: (i) large incident or scattered angles that result in some portion of the scattered radiance distribution falling outside of the unit circle in direction cosine space, and (ii) the situation where the relevant rms surface roughness, σrel, is less than the total intrinsic rms roughness of the scattering surface. Also, the RR obliquity factor has been discovered to be an approximation of the more general GHSSmooth obliquity factor due to a little-known (or long-forgotten) implicit assumption in the RR theory that the surface autocovariance length is longer than the wavelength of the scattered radiation. This assumption allowed retaining only quadratic terms and lower in the series expansion for the cosine function, and results in reducing the validity of RR predictions for scattering angles greater than 60°. This inaccurate obliquity factor in the RR theory is also the cause of a complementary unrealistic "hook" at the high spatial frequency end of the predicted surface PSD when performing the inverse scattering problem. Furthermore, if we empirically substitute the polarization reflectance, Q, from the RR expression for the scalar reflectance, R, in the GHSSmooth expression, it inherits all of the polarization capabilities of the rigorous RR vector perturbation theory.
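
    For orientation, a commonly quoted textbook form of the smooth-surface Rayleigh-Rice BRDF in terms of the surface PSD is sketched below (stated here as background, not quoted from this record); Q denotes the polarization-dependent reflectance factor, θi and θs the incident and scattering polar angles, φs the scattering azimuth, and (fx, fy) the surface spatial frequencies sampled by the geometry. The GHSSmooth expression discussed above has the same structure but replaces Q by a scalar reflectance R and uses a more general obliquity factor.

```latex
\mathrm{BRDF}(\theta_i,\theta_s,\phi_s) \;=\;
\frac{16\pi^{2}}{\lambda^{4}}\,\cos\theta_i\,\cos\theta_s\,Q\,
\mathrm{PSD}(f_x,f_y),
\qquad
f_x=\frac{\sin\theta_s\cos\phi_s-\sin\theta_i}{\lambda},\quad
f_y=\frac{\sin\theta_s\sin\phi_s}{\lambda}
```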

  9. Model-Based Normalization of a Fractional-Crystal Collimator for Small-Animal PET Imaging

    PubMed Central

    Li, Yusheng; Matej, Samuel; Karp, Joel S.; Metzler, Scott D.

    2017-01-01

    Previously, we proposed to use a coincidence collimator to achieve fractional-crystal resolution in PET imaging. We have designed and fabricated a collimator prototype for a small-animal PET scanner, A-PET. To compensate for imperfections in the fabricated collimator prototype, collimator normalization, as well as scanner normalization, is required to reconstruct quantitative and artifact-free images. In this study, we develop a normalization method for the collimator prototype based on the A-PET normalization using a uniform cylinder phantom. We performed data acquisition without the collimator for scanner normalization first, and then with the collimator from eight different rotation views for collimator normalization. After a reconstruction without correction, we extracted the cylinder parameters from which we generated expected emission sinograms. Single scatter simulation was used to generate the scattered sinograms. We used the least-squares method to generate the normalization coefficient for each LOR based on measured, expected and scattered sinograms. The scanner and collimator normalization coefficients were factorized by performing two normalizations separately. The normalization methods were also verified using experimental data acquired from A-PET with and without the collimator. In summary, we developed a model-based collimator normalization that can significantly reduce variance and produce collimator normalization with adequate statistical quality within feasible scan time. PMID:29270539

  10. Model-Based Normalization of a Fractional-Crystal Collimator for Small-Animal PET Imaging.

    PubMed

    Li, Yusheng; Matej, Samuel; Karp, Joel S; Metzler, Scott D

    2017-05-01

    Previously, we proposed to use a coincidence collimator to achieve fractional-crystal resolution in PET imaging. We have designed and fabricated a collimator prototype for a small-animal PET scanner, A-PET. To compensate for imperfections in the fabricated collimator prototype, collimator normalization, as well as scanner normalization, is required to reconstruct quantitative and artifact-free images. In this study, we develop a normalization method for the collimator prototype based on the A-PET normalization using a uniform cylinder phantom. We performed data acquisition without the collimator for scanner normalization first, and then with the collimator from eight different rotation views for collimator normalization. After a reconstruction without correction, we extracted the cylinder parameters from which we generated expected emission sinograms. Single scatter simulation was used to generate the scattered sinograms. We used the least-squares method to generate the normalization coefficient for each LOR based on measured, expected and scattered sinograms. The scanner and collimator normalization coefficients were factorized by performing two normalizations separately. The normalization methods were also verified using experimental data acquired from A-PET with and without the collimator. In summary, we developed a model-based collimator normalization that can significantly reduce variance and produce collimator normalization with adequate statistical quality within feasible scan time.
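
    A minimal sketch of the per-LOR least-squares step described above, assuming hypothetical measured, expected, and scattered sinogram arrays stacked over the rotation views; for each LOR the coefficient minimizes the squared mismatch between the scaled measured data and the expected-plus-scatter model.

```python
import numpy as np

def lor_normalization(measured, expected, scattered):
    """Per-LOR normalization coefficients by least squares (minimal sketch).

    measured, expected, scattered : arrays of shape (n_views, n_lors) holding the
    measured sinogram, the expected sinogram from the fitted cylinder, and the
    single-scatter-simulated sinogram for each rotation view.

    For each LOR, choose n minimizing sum_v [n * measured_v - (expected_v + scattered_v)]^2,
    which has the closed-form solution below.
    """
    target = expected + scattered
    num = np.sum(measured * target, axis=0)
    den = np.sum(measured ** 2, axis=0)
    return np.where(den > 0, num / np.maximum(den, 1e-12), 0.0)
```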

  11. Development of a microbial high-throughput screening instrument based on elastic light scatter patterns

    NASA Astrophysics Data System (ADS)

    Bae, Euiwon; Patsekin, Valery; Rajwa, Bartek; Bhunia, Arun K.; Holdman, Cheryl; Davisson, V. Jo; Hirleman, E. Daniel; Robinson, J. Paul

    2012-04-01

    A microbial high-throughput screening (HTS) system was developed that enabled high-speed combinatorial studies directly on bacterial colonies. The system consists of a forward scatterometer for elastic light scatter (ELS) detection, a plate transporter for sample handling, and a robotic incubator for automatic incubation. To minimize the ELS pattern-capturing time, a new calibration plate and correction algorithms were both designed, which dramatically reduced correction steps during acquisition of the circularly symmetric ELS patterns. Integration of three different control software programs was implemented, and the performance of the system was demonstrated with single-species detection for library generation and with time-resolved measurements for understanding the correlation between ELS patterns and colony growth, using Escherichia coli and Listeria. An in-house colony-tracking module enabled researchers to easily follow the time-dependent variation of the ELS from an identical colony, which enabled further analysis in other biochemical experiments. The microbial HTS system provided an average scan time of 4.9 s per colony and the capability of automatically collecting more than 4000 ELS patterns within a 7-h time span.

  12. Identifying the theory of dark matter with direct detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gluscevic, Vera; Gresham, Moira I.; McDermott, Samuel D.

    2015-12-01

    Identifying the true theory of dark matter depends crucially on accurately characterizing interactions of dark matter (DM) with other species. In the context of DM direct detection, we present a study of the prospects for correctly identifying the low-energy effective DM-nucleus scattering operators connected to UV-complete models of DM-quark interactions. We take a census of plausible UV-complete interaction models with different low-energy leading-order DM-nuclear responses. For each model (corresponding to different spin-, momentum-, and velocity-dependent responses), we create a large number of realizations of recoil-energy spectra, and use Bayesian methods to investigate the probability that experiments will be able to select the correct scattering model within a broad set of competing scattering hypotheses. We conclude that agnostic analysis of a strong signal (such as Generation-2 would see if cross sections are just below the current limits) seen on xenon and germanium experiments is likely to correctly identify momentum dependence of the dominant response, ruling out models with either 'heavy' or 'light' mediators, and enabling downselection of allowed models. However, a unique determination of the correct UV completion will critically depend on the availability of measurements from a wider variety of nuclear targets, including iodine or fluorine. We investigate how model-selection prospects depend on the energy window available for the analysis. In addition, we discuss accuracy of the DM particle mass determination under a wide variety of scattering models, and investigate impact of the specific types of particle-physics uncertainties on prospects for model selection.

  13. Identifying the theory of dark matter with direct detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gluscevic, Vera; Gresham, Moira I.; McDermott, Samuel D.

    2015-12-29

    Identifying the true theory of dark matter depends crucially on accurately characterizing interactions of dark matter (DM) with other species. In the context of DM direct detection, we present a study of the prospects for correctly identifying the low-energy effective DM-nucleus scattering operators connected to UV-complete models of DM-quark interactions. We take a census of plausible UV-complete interaction models with different low-energy leading-order DM-nuclear responses. For each model (corresponding to different spin-, momentum-, and velocity-dependent responses), we create a large number of realizations of recoil-energy spectra, and use Bayesian methods to investigate the probability that experiments will be able to select the correct scattering model within a broad set of competing scattering hypotheses. We conclude that agnostic analysis of a strong signal (such as Generation-2 would see if cross sections are just below the current limits) seen on xenon and germanium experiments is likely to correctly identify momentum dependence of the dominant response, ruling out models with either "heavy" or "light" mediators, and enabling downselection of allowed models. However, a unique determination of the correct UV completion will critically depend on the availability of measurements from a wider variety of nuclear targets, including iodine or fluorine. We investigate how model-selection prospects depend on the energy window available for the analysis. In addition, we discuss accuracy of the DM particle mass determination under a wide variety of scattering models, and investigate impact of the specific types of particle-physics uncertainties on prospects for model selection.

  14. Establishment of a Photon Data Section of the BNL National Nuclear Data Center: A preliminary proposal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanson, A.L.; Pearlstein, S.

    1992-05-01

    It is proposed to establish a Photon Data Section (PDS) of the BNL National Nuclear Data Center (NNDC). This would be a total program encompassing both photon-atom and photon-nucleus interactions. By utilizing the existing NNDC data base management expertise and on-line access capabilities, the implementation of photon interaction data activities within the existing NNDC nuclear structure and nuclear-reaction activities can reestablish a viable photon interaction data program at minimum cost. By taking advantage of the on-line capabilities, the x-ray users' community will have access to a dynamic, state-of-the-art data base of interaction information. The proposed information base would include data that presently are scattered throughout the literature usually in tabulated form. It is expected that the data bases would include at least the most precise data available in photoelectric cross sections, atomic form factors and incoherent scattering functions, anomalous scattering factors, oscillator strengths and oscillator densities, fluorescence yields, Auger electron yields, etc. It could also include information not presently available in tabulations or in existing data bases such as EXAFS (extended x-ray absorption fine structure) reference spectra, chemical bonding induced shifts in the photoelectric absorption edge, matrix corrections, x-ray Raman, and x-ray resonant Raman cross sections. The data base will also include the best estimates of the accuracy of the interaction data as it exists in the data base. It is proposed that the PDS would support computer programs written for calculating scattering cross sections for given solid angles, sample geometries, and polarization of incident x-rays, for calculating Compton profiles, and for analyzing data as in EXAFS and x-ray fluorescence.

  15. Establishment of a Photon Data Section of the BNL National Nuclear Data Center: A preliminary proposal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanson, A.L.; Pearlstein, S.

    1992-05-01

    It is proposed to establish a Photon Data Section (PDS) of the BNL National Nuclear Data Center (NNDC). This would be a total program encompassing both photon-atom and photon-nucleus interactions. By utilizing the existing NNDC data base management expertise and on-line access capabilities, the implementation of photon interaction data activities within the existing NNDC nuclear structure and nuclear-reaction activities can reestablish a viable photon interaction data program at minimum cost. By taking advantage of the on-line capabilities, the x-ray users' community will have access to a dynamic, state-of-the-art data base of interaction information. The proposed information base would include data that presently are scattered throughout the literature usually in tabulated form. It is expected that the data bases would include at least the most precise data available in photoelectric cross sections, atomic form factors and incoherent scattering functions, anomalous scattering factors, oscillator strengths and oscillator densities, fluorescence yields, Auger electron yields, etc. It could also include information not presently available in tabulations or in existing data bases such as EXAFS (extended x-ray absorption fine structure) reference spectra, chemical bonding induced shifts in the photoelectric absorption edge, matrix corrections, x-ray Raman, and x-ray resonant Raman cross sections. The data base will also include the best estimates of the accuracy of the interaction data as it exists in the data base. It is proposed that the PDS would support computer programs written for calculating scattering cross sections for given solid angles, sample geometries, and polarization of incident x-rays, for calculating Compton profiles, and for analyzing data as in EXAFS and x-ray fluorescence.

  16. Assessment of polarization effect on aerosol retrievals from MODIS

    NASA Astrophysics Data System (ADS)

    Korkin, S.; Lyapustin, A.

    2010-12-01

    Light polarization affects the total intensity of scattered radiation. In this work, we compare aerosol retrievals performed by code MAIAC [1] with and without taking polarization into account. The MAIAC retrievals are based on the look-up tables (LUT). For this work, MAIAC was run using two different LUTs, the first one generated using the scalar code SHARM [2], and the second one generated with the vector code Modified Vector Discrete Ordinates Method (MVDOM). MVDOM is a new code suitable for computations with highly anisotropic phase functions, including cirrus clouds and snow [3]. To this end, the solution of the vector radiative transfer equation (VRTE) is represented as a sum of anisotropic and regular components. The anisotropic component is evaluated in the Small Angle Modification of the Spherical Harmonics Method (MSH) [4]. The MSH is formulated in the frame of reference of the solar beam where z-axis lies along the solar beam direction. In this case, the MSH solution for anisotropic part is nearly symmetric in azimuth, and is computed analytically. In scalar case, this solution coincides with the Goudsmit-Saunderson small-angle approximation [5]. To correct for an analytical separation of the anisotropic part of the signal, the transfer equation for the regular part contains a correction source function term [6]. Several examples of polarization impact on aerosol retrievals over different surface types will be presented. 1. Lyapustin A., Wang Y., Laszlo I., Kahn R., Korkin S., Remer L., Levy R., and Reid J. S. Multi-Angle Implementation of Atmospheric Correction (MAIAC): Part 2. Aerosol Algorithm. J. Geophys. Res., submitted (2010). 2. Lyapustin A., Muldashev T., Wang Y. Code SHARM: fast and accurate radiative transfer over spatially variable anisotropic surfaces. In: Light Scattering Reviews 5. Chichester: Springer, 205 - 247 (2010). 3. Budak, V.P., Korkin S.V. On the solution of a vectorial radiative transfer equation in an arbitrary three-dimensional turbid medium with anisotropic scattering. JQSRT, 109, 220-234 (2008). 4. Budak V.P., Sarmin S.E. Solution of radiative transfer equation by the method of spherical harmonics in the small angle modification. Atmospheric and Oceanic Optics, 3, 898-903 (1990). 5. Goudsmit S., Saunderson J.L. Multiple scattering of electrons. Phys. Rev., 57, 24-29 (1940). 6. Budak V.P, Klyuykov D.A., Korkin S.V. Convergence acceleration of radiative transfer equation solution at strongly anisotropic scattering. In: Light Scattering Reviews 5. Chichester: Springer, 147 - 204 (2010).

  17. Magnetic Field Effects on the Fluctuation Corrections to the Sound Attenuation in Liquid ^3He

    NASA Astrophysics Data System (ADS)

    Zhao, Erhai; Sauls, James A.

    2002-03-01

    We investigated the effect of a magnetic field on the excess sound attenuation due to order parameter fluctuations in bulk liquid ^3He and liquid ^3He in aerogel for temperatures just above the corresponding superfluid transition temperatures. The fluctuation corrections to the acoustic attenuation are sensitive to magnetic field pairbreaking, aerogel scattering as well as the spin correlations of fluctuating pairs. Calculations of the corrections to the zero sound velocity, δc0, and attenuation, δα0, are carried out in the ladder approximation for the singular part of the quasiparticle-quasiparticle scattering amplitude (V. Samalam and J. W. Serene, Phys. Rev. Lett. 41, 497 (1978)) as a function of frequency, temperature, impurity scattering and magnetic field strength. The magnetic field suppresses the fluctuation contributions to the attenuation of zero sound. With increasing magnetic field the temperature dependence of δα0(t) crosses over from δα0(t) ∝ √t to δα0(t) ∝ t, where t = T/Tc − 1 is the reduced temperature.

  18. Author Correction: Induced unconventional superconductivity on the surface states of Bi2Te3 topological insulator.

    PubMed

    Charpentier, Sophie; Galletti, Luca; Kunakova, Gunta; Arpaia, Riccardo; Song, Yuxin; Baghdadi, Reza; Wang, Shu Min; Kalaboukhov, Alexei; Olsson, Eva; Tafuri, Francesco; Golubev, Dmitry; Linder, Jacob; Bauch, Thilo; Lombardi, Floriana

    2018-01-30

    The original version of this Article contained an error in Fig. 6b. In the top scattering process, while the positioning of both arrows was correct, the colours were switched: the first arrow was red and the second arrow was blue, rather than the correct order of blue then red.

  19. The integration of improved Monte Carlo compton scattering algorithms into the Integrated TIGER Series.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quirk, Thomas, J., IV

    2004-08-01

    The Integrated TIGER Series (ITS) is a software package that solves coupled electron-photon transport problems. ITS performs analog photon tracking for energies between 1 keV and 1 GeV. Unlike its deterministic counterpart, the Monte Carlo calculations of ITS do not require a memory-intensive meshing of phase space; however, its solutions carry statistical variations. Reducing these variations is heavily dependent on runtime. Monte Carlo simulations must therefore be both physically accurate and computationally efficient. Compton scattering is the dominant photon interaction above 100 keV and below 5-10 MeV, with higher cutoffs occurring in lighter atoms. In its current model of Compton scattering, ITS corrects the differential Klein-Nishina cross sections (which assume a stationary, free electron) with the incoherent scattering function, a function dependent on both the momentum transfer and the atomic number of the scattering medium. While this technique accounts for binding effects on the scattering angle, it excludes the Doppler broadening the Compton line undergoes because of the momentum distribution in each bound state. To correct for these effects, Ribberfors' relativistic impulse approximation (IA) will be employed to create scattering cross sections differential in both energy and angle for each element. Using the parameterizations suggested by Brusa et al., scattered photon energies and angles can be accurately sampled at high efficiency with minimal physical data. Two-body kinematics then dictates the electron's scattered direction and energy. Finally, the atomic ionization is relaxed via Auger emission or fluorescence. Future work will extend these improvements in incoherent scattering to compounds and to adjoint calculations.
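
    As an illustration of the analog Compton sampling being refined here, a minimal rejection sampler for the free-electron Klein-Nishina angular distribution is sketched below; a production code such as ITS would additionally weight the acceptance by the incoherent scattering function S(q, Z) and Doppler-broaden the scattered energy via the impulse approximation, both of which are omitted in this sketch.

```python
import numpy as np

def sample_klein_nishina(alpha, rng=None):
    """Sample cos(theta) for a Compton-scattered photon from the free-electron
    Klein-Nishina distribution by rejection sampling (simple, not optimized).

    alpha : incident photon energy in electron rest-mass units, E / (m_e c^2)
    Returns (cos_theta, scattered_energy_in_mc2_units).
    """
    rng = np.random.default_rng() if rng is None else rng
    while True:
        cos_t = rng.uniform(-1.0, 1.0)
        eps = 1.0 / (1.0 + alpha * (1.0 - cos_t))          # E'/E at this angle
        kn = eps ** 2 * (eps + 1.0 / eps - (1.0 - cos_t ** 2))  # proportional to dsigma/dOmega
        # kn never exceeds 2 (its value at cos_t = 1), so 2 is a valid envelope
        if rng.uniform(0.0, 2.0) < kn:
            return cos_t, eps * alpha
```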

  20. XUV and x-ray elastic scattering of attosecond electromagnetic pulses on atoms

    NASA Astrophysics Data System (ADS)

    Rosmej, F. B.; Astapenko, V. A.; Lisitsa, V. S.

    2017-12-01

    Elastic scattering of electromagnetic pulses on atoms in the XUV and soft x-ray ranges is considered for ultra-short pulses. The inclusion of the retardation term, non-dipole interaction and an efficient scattering tensor approximation allowed the scattering probability to be studied as a function of the pulse duration for different carrier frequencies. Numerical calculations carried out for Mg, Al and Fe atoms demonstrate that the scattering probability is a highly nonlinear function of the pulse duration and has extrema for pulse carrier frequencies in the vicinity of the resonance-like features of the polarization charge spectrum. Closed expressions for the non-dipole correction and the angular dependence of the scattered radiation are obtained.

  1. Assessing the measurement of aerosol single scattering albedo by Cavity Attenuated Phase-Shift Single Scattering Monitor (CAPS PMssa)

    NASA Astrophysics Data System (ADS)

    Perim de Faria, Julia; Bundke, Ulrich; Onasch, Timothy B.; Freedman, Andrew; Petzold, Andreas

    2016-04-01

    The necessity to quantify the direct impact of aerosol particles on climate forcing is already well known; assessing this impact requires continuous and systematic measurements of the aerosol optical properties. Two of the main parameters that need to be accurately measured are the aerosol optical depth and single scattering albedo (SSA, defined as the ratio of particulate scattering to extinction). The measurement of single scattering albedo commonly involves the measurement of two optical parameters, the scattering and the absorption coefficients. Although there are well established technologies to measure both of these parameters, the use of two separate instruments with different principles and uncertainties represents potential sources of significant errors and biases. Based on the recently developed cavity attenuated phase shift particle extinction monitor (CAPS PMex), the CAPS PMssa instrument combines the CAPS technology to measure particle extinction with an integrating sphere capable of simultaneously measuring the scattering coefficient of the same sample. The scattering channel is calibrated to the extinction channel, such that the accuracy of the single scattering albedo measurement is only a function of the accuracy of the extinction measurement and the nephelometer truncation losses. This gives the instrument an accurate and direct measurement of the single scattering albedo. In this study, we assess the measurements of both the extinction and scattering channels of the CAPS PMssa through intercomparisons with Mie theory, as a fundamental comparison, and with proven technologies, such as integrating nephelometers and filter-based absorption monitors. For comparison, we use two nephelometers, a TSI 3563 and an Aurora 4000, and two measurements of the absorption coefficient, using a Particulate Soot Absorption Photometer (PSAP) and a Multi Angle Absorption Photometer (MAAP). We also assess the indirect absorption coefficient measurement from the CAPS PMssa (calculated as the difference between the measured extinction and scattering). The study was carried out in the laboratory with controlled particle generation systems. We used both light absorbing aerosols (Regal 400R pigment black from Cabot Corp. and colloidal graphite - Aquadag - from Agar Scientific) and purely scattering aerosols (ammonium sulphate and polystyrene latex spheres), covering single scattering albedo values from approximately 0.4 to 1.0. A new truncation angle correction for the CAPS PMssa integrating sphere is proposed.
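
    A minimal sketch of how the single scattering albedo and the absorption-by-difference follow from the two channels described above; cal_factor and trunc_corr are hypothetical placeholders for the scattering-channel calibration against the extinction channel and for the truncation-loss correction.

```python
def ssa_and_absorption(b_ext, b_scat_raw, cal_factor, trunc_corr=1.0):
    """Single scattering albedo and absorption-by-difference (minimal sketch).

    b_ext      : extinction coefficient from the CAPS channel (Mm^-1)
    b_scat_raw : raw integrating-sphere scattering signal (Mm^-1 equivalent)
    cal_factor : hypothetical scattering-channel calibration against the
                 extinction channel (determined with purely scattering aerosol)
    trunc_corr : hypothetical correction for truncation losses
    """
    b_scat = b_scat_raw * cal_factor * trunc_corr
    ssa = b_scat / b_ext
    b_abs = b_ext - b_scat   # absorption obtained by difference
    return ssa, b_abs
```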

  2. Expanding Lorentz and spectrum corrections to large volumes of reciprocal space for single-crystal time-of-flight neutron diffraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michels-Clark, Tara M.; Savici, Andrei T.; Lynch, Vickie E.

    Evidence is mounting that potentially exploitable properties of technologically and chemically interesting crystalline materials are often attributable to local structure effects, which can be observed as modulated diffuse scattering (mDS) next to Bragg diffraction (BD). BD forms a regular sparse grid of intense discrete points in reciprocal space. Traditionally, the intensity of each Bragg peak is extracted by integration of each individual reflection first, followed by application of the required corrections. In contrast, mDS is weak and covers expansive volumes of reciprocal space close to, or between, Bragg reflections. For a representative measurement of the diffuse scattering, multiple sample orientations are generally required, where many points in reciprocal space are measured multiple times and the resulting data are combined. The common post-integration data reduction method is not optimal with regard to counting statistics. A general and inclusive data processing method is needed. In this contribution, a comprehensive data analysis approach is introduced to correct and merge the full volume of scattering data in a single step, while correctly accounting for the statistical weight of the individual measurements. Lastly, development of this new approach required the exploration of a data treatment and correction protocol that includes the entire collected reciprocal space volume, using neutron time-of-flight or wavelength-resolved data collected at TOPAZ at the Spallation Neutron Source at Oak Ridge National Laboratory.

  3. Expanding Lorentz and spectrum corrections to large volumes of reciprocal space for single-crystal time-of-flight neutron diffraction

    DOE PAGES

    Michels-Clark, Tara M.; Savici, Andrei T.; Lynch, Vickie E.; ...

    2016-03-01

    Evidence is mounting that potentially exploitable properties of technologically and chemically interesting crystalline materials are often attributable to local structure effects, which can be observed as modulated diffuse scattering (mDS) next to Bragg diffraction (BD). BD forms a regular sparse grid of intense discrete points in reciprocal space. Traditionally, the intensity of each Bragg peak is extracted by integration of each individual reflection first, followed by application of the required corrections. In contrast, mDS is weak and covers expansive volumes of reciprocal space close to, or between, Bragg reflections. For a representative measurement of the diffuse scattering, multiple sample orientations are generally required, where many points in reciprocal space are measured multiple times and the resulting data are combined. The common post-integration data reduction method is not optimal with regard to counting statistics. A general and inclusive data processing method is needed. In this contribution, a comprehensive data analysis approach is introduced to correct and merge the full volume of scattering data in a single step, while correctly accounting for the statistical weight of the individual measurements. Lastly, development of this new approach required the exploration of a data treatment and correction protocol that includes the entire collected reciprocal space volume, using neutron time-of-flight or wavelength-resolved data collected at TOPAZ at the Spallation Neutron Source at Oak Ridge National Laboratory.
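
    A minimal sketch, under stated assumptions, of the single-step merging idea: events from all sample orientations are binned into one reciprocal-space grid, counts and correction weights are accumulated separately per voxel, and the division is performed only once at the end. The per-event weights are assumed (hypothetically) to carry the Lorentz, incident-spectrum and efficiency corrections; this is an illustration of the statistical-weighting principle, not the TOPAZ reduction code.

```python
import numpy as np

def merge_reciprocal_space(events_q, events_weight, grid_shape, q_min, q_max):
    """Accumulate events from all orientations into one reciprocal-space volume.

    events_q      : (n_events, 3) array of Q-vectors from every orientation
    events_weight : (n_events,) per-event correction weight (Lorentz, spectrum, ...)
    """
    counts = np.zeros(grid_shape)
    weights = np.zeros(grid_shape)
    # map each event Q-vector to a voxel index
    idx = ((events_q - q_min) / (q_max - q_min) * np.array(grid_shape)).astype(int)
    idx = np.clip(idx, 0, np.array(grid_shape) - 1)
    for (i, j, k), w in zip(idx, events_weight):
        counts[i, j, k] += 1.0
        weights[i, j, k] += w
    # normalize once, after all orientations are combined
    with np.errstate(divide="ignore", invalid="ignore"):
        intensity = np.where(weights > 0, counts / weights, 0.0)
    return intensity
```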

  4. Heavy-quark production in massless quark scattering at two loops in QCD

    NASA Astrophysics Data System (ADS)

    Czakon, M.; Mitov, A.; Moch, S.

    2007-07-01

    We present the two-loop virtual QCD corrections to the production of heavy quarks in the quark-anti-quark annihilation channel in the limit when all kinematical invariants are large compared to the mass of the heavy quark. Our result is exact up to terms suppressed by powers of the heavy-quark mass. The derivation is based on a simple relation between massless and massive scattering amplitudes in gauge theories proposed recently by two of the authors as well as a direct calculation of the massive amplitude at two loops. The results presented here form an important part of the next-to-next-to-leading order QCD contributions to heavy-quark production in hadron-hadron collisions.

  5. Forward multiple scattering corrections as function of detector field of view

    NASA Astrophysics Data System (ADS)

    Zardecki, A.; Deepak, A.

    1983-06-01

    The theoretical formulations are given for an approximate method based on the solution of the radiative transfer equation in the small-angle approximation. The method is approximate in the sense that an additional approximation is made beyond the small-angle approximation. Numerical results were obtained for multiple scattering effects as functions of the detector field of view, as well as the size of the detector's aperture, for three different values of the optical depth (τ = 1.0, 4.0 and 10.0). Three cases of aperture size were considered, namely equal to, smaller than, or larger than the laser beam diameter. The contrast between the on-axis intensity and the received power for these three cases is clearly evident.

  6. CORRECTING FOR INTERPLANETARY SCATTERING IN VELOCITY DISPERSION ANALYSIS OF SOLAR ENERGETIC PARTICLES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laitinen, T.; Dalla, S.; Huttunen-Heikinmaa, K.

    2015-06-10

    To understand the origin of Solar Energetic Particles (SEPs), we must study their injection time relative to other solar eruption manifestations. Traditionally the injection time is determined using the Velocity Dispersion Analysis (VDA) where a linear fit of the observed event onset times at 1 AU to the inverse velocities of SEPs is used to derive the injection time and path length of the first-arriving particles. VDA does not, however, take into account that the particles that produce a statistically observable onset at 1 AU have scattered in the interplanetary space. We use Monte Carlo test particle simulations of energetic protons to study the effect of particle scattering on the observable SEP event onset above pre-event background, and consequently on VDA results. We find that the VDA results are sensitive to the properties of the pre-event and event particle spectra as well as SEP injection and scattering parameters. In particular, a VDA-obtained path length that is close to the nominal Parker spiral length does not imply that the VDA injection time is correct. We study the delay to the observed onset caused by scattering of the particles and derive a simple estimate for the delay time by using the rate of intensity increase at the SEP onset as a parameter. We apply the correction to a magnetically well-connected SEP event of 2000 June 10, and show it to improve both the path length and injection time estimates, while also increasing the error limits to better reflect the inherent uncertainties of VDA.
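
    A minimal sketch of the standard VDA fit described above (before any scattering correction is applied): the observed onset times are regressed linearly against the inverse particle speeds, the intercept giving the injection time and the slope the apparent path length.

```python
import numpy as np

def velocity_dispersion_analysis(onset_times_s, speeds_km_s):
    """Standard VDA fit: t_onset = t_inj + L / v (minimal sketch).

    onset_times_s : observed onset times per energy channel, in seconds
    speeds_km_s   : particle speeds for those channels, in km/s
    Returns (injection_time_s, path_length_au).
    """
    inv_v = 1.0 / np.asarray(speeds_km_s, dtype=float)   # s/km
    t = np.asarray(onset_times_s, dtype=float)
    slope, intercept = np.polyfit(inv_v, t, 1)           # slope in km, intercept in s
    path_length_au = slope / 1.496e8                     # convert km to AU
    return intercept, path_length_au
```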

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek

    Standard computational methods used to take account of the Pauli exclusion principle in Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron-electron (e-e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study transport properties of degenerate electrons in graphene with e-e interactions. This required adapting the treatment of e-e scattering to the case of a linear band dispersion relation. Hence, this part of the simulation algorithm is described in detail.
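
    A minimal sketch of the generic Pauli-blocking acceptance test for a proposed e-e scattering event in an ensemble Monte Carlo step; it illustrates the exclusion-principle rejection idea only and is not the reduced-cost algorithm proposed in this record. The occupation numbers of the proposed final states are assumed to be available from the current ensemble.

```python
import numpy as np

def accept_ee_scattering(f_final_1, f_final_2, rng=None):
    """Accept a proposed electron-electron scattering event with probability
    (1 - f_1') * (1 - f_2'), where f_1' and f_2' are the current occupation
    numbers of the two proposed final states (Pauli blocking)."""
    rng = np.random.default_rng() if rng is None else rng
    p_accept = max(0.0, 1.0 - f_final_1) * max(0.0, 1.0 - f_final_2)
    return rng.random() < p_accept
```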

  8. Measurements of Nascent Soot Using a Cavity Attenuated Phase Shift (CAPS)-based Single Scattering Albedo Monitor

    NASA Astrophysics Data System (ADS)

    Freedman, A.; Onasch, T. B.; Renbaum-Wollf, L.; Lambe, A. T.; Davidovits, P.; Kebabian, P. L.

    2015-12-01

    Accurate, as compared to precise, measurement of aerosol absorption has always posed a significant problem for the particle radiative properties community. Filter-based instruments do not actually measure absorption but rather light transmission through the filter; absorption must be derived from this data using multiple corrections. The potential for matrix-induced effects is also great for organic-laden aerosols. The introduction of true in situ measurement instruments using photoacoustic or photothermal interferometric techniques represents a significant advance in the state-of-the-art. However, measurement artifacts caused by changes in humidity still represent a significant hurdle as does the lack of a good calibration standard at most measurement wavelengths. And, in the absence of any particle-based absorption standard, there is no way to demonstrate any real level of accuracy. We, along with others, have proposed that under the circumstance of low single scattering albedo (SSA), absorption is best determined by difference using measurement of total extinction and scattering. We discuss a robust, compact, field deployable instrument (the CAPS PMssa) that simultaneously measures airborne particle light extinction and scattering coefficients and thus the single scattering albedo (SSA) on the same sample volume. The extinction measurement is based on cavity attenuated phase shift (CAPS) techniques as employed in the CAPS PMex particle extinction monitor; scattering is measured using integrating nephelometry by incorporating a Lambertian integrating sphere within the sample cell. The scattering measurement is calibrated using the extinction measurement of non-absorbing particles. For small particles and low SSA, absorption can be measured with an accuracy of 6-8% at absorption levels as low as a few Mm-1. We present new results of the measurement of the mass absorption coefficient (MAC) of soot generated by an inverted methane diffusion flame at 630 nm. A value of 6.60 ±0.2 m2 g-1 was determined where the uncertainty refers to the precision of the measurement. The overall accuracy of the measurement, traceable to the properties of polystyrene latex particles, is estimated to be better than ±10%.

  9. [A practical procedure to improve the accuracy of radiochromic film dosimetry: an integration of a uniformity correction method and a red/blue correction method].

    PubMed

    Uehara, Ryuzo; Tachibana, Hidenobu; Ito, Yasushi; Yoshino, Shinichi; Matsubayashi, Fumiyasu; Sato, Tomoharu

    2013-06-01

    It has been reported that light scattering can worsen the accuracy of dose distribution measurements made with radiochromic film. The purpose of this study was to investigate the accuracy of two different films, EDR2 and EBT2, as film dosimetry tools. The effectiveness of a correction method for the non-uniformity caused by the EBT2 film and by light scattering was also evaluated. In addition, the efficacy of this correction method integrated with the red/blue correction method was assessed. EDR2 and EBT2 films were read using a flatbed charge-coupled device scanner (EPSON 10000G). Dose differences on the axis perpendicular to the scanner lamp movement axis were within 1% with EDR2, but exceeded 3% (maximum: +8%) with EBT2. The non-uniformity correction method, after a single film exposure, was applied to the readout of the films, and a corrected dose distribution was subsequently created. The correction method showed more than 10% better pass ratios in the dose difference evaluation than when it was not applied. The red/blue correction method resulted in a 5% improvement compared with the standard procedure that employs the red color only. The correction method with EBT2 proved able to rapidly correct non-uniformity, and has potential for routine clinical IMRT dose verification if the accuracy of EBT2 is required to be similar to that of EDR2. The use of the red/blue correction method may improve the accuracy, but it should be used carefully, with an understanding of the characteristics of EBT2 for both the red-only and the red/blue procedures.

  10. Energy-angle correlation correction algorithm for monochromatic computed tomography based on Thomson scattering X-ray source

    NASA Astrophysics Data System (ADS)

    Chi, Zhijun; Du, Yingchao; Huang, Wenhui; Tang, Chuanxiang

    2017-12-01

    The necessity for compact and relatively low cost x-ray sources with monochromaticity, continuous tunability of x-ray energy, high spatial coherence, straightforward polarization control, and high brightness has led to the rapid development of Thomson scattering x-ray sources. To meet the requirement of in-situ monochromatic computed tomography (CT) for large-scale and/or high-attenuation materials based on this type of x-ray source, there is an increasing demand for effective algorithms to correct the energy-angle correlation. In this paper, we take advantage of the parametrization of the x-ray attenuation coefficient to resolve this problem. The linear attenuation coefficient of a material can be decomposed into a linear combination of the energy-dependent photoelectric and Compton cross-sections in the keV energy regime without K-edge discontinuities, and the line integrals of the decomposition coefficients of the above two parts can be determined by performing two spectrally different measurements. After that, the line integral of the linear attenuation coefficient of an imaging object at a certain interested energy can be derived through the above parametrization formula, and monochromatic CT can be reconstructed at this energy using traditional reconstruction methods, e.g., filtered back projection or algebraic reconstruction technique. Not only can monochromatic CT be realized, but also the distributions of the effective atomic number and electron density of the imaging object can be retrieved at the expense of dual-energy CT scan. Simulation results validate our proposal and will be shown in this paper. Our results will further expand the scope of application for Thomson scattering x-ray sources.
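
    A minimal sketch of the two-measurement decomposition described above, assuming (for simplicity) two effectively monoenergetic line-integral sinograms p1 and p2 acquired at energies e1 and e2; the photoelectric energy dependence is approximated as E^-3 and the Compton part by the Klein-Nishina cross-section, and the recovered coefficient line integrals are recombined at the requested energy before a standard reconstruction (e.g. filtered back projection). The names and the monoenergetic simplification are assumptions made for illustration, not the authors' full energy-angle correlation correction.

```python
import numpy as np

def monochromatic_sinogram(p1, p2, e1, e2, e_out):
    """Synthesize a monochromatic line-integral sinogram at energy e_out (keV)
    from two spectrally different measurements p1 (at e1) and p2 (at e2),
    using mu(E) = a_pe * f_pe(E) + a_c * f_kn(E)."""
    def f_pe(e):
        # approximate photoelectric energy dependence
        return e ** -3.0
    def f_kn(e):
        # Klein-Nishina total cross-section per electron (up to a constant)
        a = e / 510.999
        return ((1 + a) / a**2 * (2 * (1 + a) / (1 + 2 * a) - np.log(1 + 2 * a) / a)
                + np.log(1 + 2 * a) / (2 * a) - (1 + 3 * a) / (1 + 2 * a) ** 2)
    # solve, ray by ray, the 2x2 system for the photoelectric and Compton line integrals
    m = np.array([[f_pe(e1), f_kn(e1)],
                  [f_pe(e2), f_kn(e2)]])
    a_pe, a_c = np.linalg.solve(m, np.stack([np.ravel(p1), np.ravel(p2)]))
    p_out = a_pe * f_pe(e_out) + a_c * f_kn(e_out)
    return p_out.reshape(np.shape(p1))
```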

  11. Nuclear fragmentation energy and momentum transfer distributions in relativistic heavy-ion collisions

    NASA Technical Reports Server (NTRS)

    Khandelwal, Govind S.; Khan, Ferdous

    1989-01-01

    An optical model description of energy and momentum transfer in relativistic heavy-ion collisions, based upon composite particle multiple scattering theory, is presented. Transverse and longitudinal momentum transfers to the projectile are shown to arise from the real and absorptive part of the optical potential, respectively. Comparisons of fragment momentum distribution observables with experiments are made and trends outlined based on our knowledge of the underlying nucleon-nucleon interaction. Corrections to the above calculations are discussed. Finally, use of the model as a tool for estimating collision impact parameters is indicated.

  12. Role of Minerogenic Particles in Light Scattering in Lakes and a River in Central New York

    DTIC Science & Technology

    2007-09-10

    [Abstract garbled in source; recoverable fragments: corrections were applied for differences among the ten samples and for pure-water absorption and attenuation; morphometric characterization of individual minerogenic particles by SAX is based on a "rotating chord" algorithm; the particles are characterized both compositionally and morphometrically to establish their origin (allochthonous versus autochthonous).]

  13. Demonstration of a novel technique to measure two-photon exchange effects in elastic e±p scattering

    DOE PAGES

    Moteabbed, Maryam; Niroula, Megh; Raue, Brian A.; ...

    2013-08-30

    The discrepancy between proton electromagnetic form factors extracted using unpolarized and polarized scattering data is believed to be a consequence of two-photon exchange (TPE) effects. However, the calculations of TPE corrections have significant model dependence, and there is limited direct experimental evidence for such corrections. The TPE contributions depend on the sign of the lepton charge in e±p scattering, but the luminosities of secondary positron beams limited past measurements at large scattering angles, where the TPE effects are believed to be most significant. We present the results of a new experimental technique for making direct e±p comparisons, which has the potential to make precise measurements over a broad range in Q2 and scattering angles. We use the Jefferson Laboratory electron beam and the Hall B photon tagger to generate a clean but untagged photon beam. The photon beam impinges on a converter foil to generate a mixed beam of electrons, positrons, and photons. A chicane is used to separate and recombine the electron and positron beams while the photon beam is stopped by a photon blocker. This provides a combined electron and positron beam, with energies from 0.5 to 3.2 GeV, which impinges on a liquid hydrogen target. The large acceptance CLAS detector is used to identify and reconstruct elastic scattering events, determining both the initial lepton energy and the sign of the scattered lepton. The data were collected in two days with a primary electron beam energy of only 3.3 GeV, limiting the data from this run to smaller values of Q2 and scattering angle. Nonetheless, this measurement yields a data sample for e±p with statistics comparable to those of the best previous measurements. We have shown that we can cleanly identify elastic scattering events and correct for the difference in acceptance for electron and positron scattering. Because we ran with only one polarity for the chicane, we are unable to study the difference between the incoming electron and positron beams. This systematic effect leads to the largest uncertainty in the final ratio of positron to electron scattering: R=1.027±0.005±0.05 for ⟨Q2⟩=0.206 GeV2 and 0.830 ≤ ε ≤ 0.943. We have demonstrated that the tertiary e± beam generated using this technique provides the opportunity for dramatically improved comparisons of e±p scattering, covering a significant range in both Q2 and scattering angle. Combining data with different chicane polarities will allow for detailed studies of the difference between the incoming e+ and e- beams.

  14. Demonstration of a novel technique to measure two-photon exchange effects in elastic e±p scattering

    NASA Astrophysics Data System (ADS)

    Moteabbed, M.; Niroula, M.; Raue, B. A.; Weinstein, L. B.; Adikaram, D.; Arrington, J.; Brooks, W. K.; Lachniet, J.; Rimal, Dipak; Ungaro, M.; Afanasev, A.; Adhikari, K. P.; Aghasyan, M.; Amaryan, M. J.; Anefalos Pereira, S.; Avakian, H.; Ball, J.; Baltzell, N. A.; Battaglieri, M.; Batourine, V.; Bedlinskiy, I.; Bennett, R. P.; Biselli, A. S.; Bono, J.; Boiarinov, S.; Briscoe, W. J.; Burkert, V. D.; Carman, D. S.; Celentano, A.; Chandavar, S.; Cole, P. L.; Collins, P.; Contalbrigo, M.; Cortes, O.; Crede, V.; D'Angelo, A.; Dashyan, N.; De Vita, R.; De Sanctis, E.; Deur, A.; Djalali, C.; Doughty, D.; Dupre, R.; Egiyan, H.; Fassi, L. El; Eugenio, P.; Fedotov, G.; Fegan, S.; Fersch, R.; Fleming, J. A.; Gevorgyan, N.; Gilfoyle, G. P.; Giovanetti, K. L.; Girod, F. X.; Goetz, J. T.; Gohn, W.; Golovatch, E.; Gothe, R. W.; Griffioen, K. A.; Guidal, M.; Guler, N.; Guo, L.; Hafidi, K.; Hakobyan, H.; Hanretty, C.; Harrison, N.; Heddle, D.; Hicks, K.; Ho, D.; Holtrop, M.; Hyde, C. E.; Ilieva, Y.; Ireland, D. G.; Ishkhanov, B. S.; Isupov, E. L.; Jo, H. S.; Joo, K.; Keller, D.; Khandaker, M.; Kim, A.; Klein, F. J.; Koirala, S.; Kubarovsky, A.; Kubarovsky, V.; Kuhn, S. E.; Kuleshov, S. V.; Lewis, S.; Lu, H. Y.; MacCormick, M.; MacGregor, I. J. D.; Martinez, D.; Mayer, M.; McKinnon, B.; Mineeva, T.; Mirazita, M.; Mokeev, V.; Montgomery, R. A.; Moriya, K.; Moutarde, H.; Munevar, E.; Munoz Camacho, C.; Nadel-Turonski, P.; Nasseripour, R.; Niccolai, S.; Niculescu, G.; Niculescu, I.; Osipenko, M.; Ostrovidov, A. I.; Pappalardo, L. L.; Paremuzyan, R.; Park, K.; Park, S.; Phelps, E.; Phillips, J. J.; Pisano, S.; Pogorelko, O.; Pozdniakov, S.; Price, J. W.; Procureur, S.; Protopopescu, D.; Puckett, A. J. R.; Ripani, M.; Rosner, G.; Rossi, P.; Sabatié, F.; Saini, M. S.; Salgado, C.; Schott, D.; Schumacher, R. A.; Seder, E.; Seraydaryan, H.; Sharabian, Y. G.; Smith, E. S.; Smith, G. D.; Sober, D. I.; Sokhan, D.; Stepanyan, S.; Strauch, S.; Tang, W.; Taylor, C. E.; Tian, Ye; Tkachenko, S.; Voskanyan, H.; Voutier, E.; Walford, N. K.; Wood, M. H.; Zachariou, N.; Zana, L.; Zhang, J.; Zhao, Z. W.; Zonta, I.

    2013-08-01

    Background: The discrepancy between proton electromagnetic form factors extracted using unpolarized and polarized scattering data is believed to be a consequence of two-photon exchange (TPE) effects. However, the calculations of TPE corrections have significant model dependence, and there is limited direct experimental evidence for such corrections. Purpose: The TPE contributions depend on the sign of the lepton charge in e±p scattering, but the luminosities of secondary positron beams limited past measurements at large scattering angles, where the TPE effects are believed to be most significant. We present the results of a new experimental technique for making direct e±p comparisons, which has the potential to make precise measurements over a broad range in Q² and scattering angles. Methods: We use the Jefferson Laboratory electron beam and the Hall B photon tagger to generate a clean but untagged photon beam. The photon beam impinges on a converter foil to generate a mixed beam of electrons, positrons, and photons. A chicane is used to separate and recombine the electron and positron beams while the photon beam is stopped by a photon blocker. This provides a combined electron and positron beam, with energies from 0.5 to 3.2 GeV, which impinges on a liquid hydrogen target. The large-acceptance CLAS detector is used to identify and reconstruct elastic scattering events, determining both the initial lepton energy and the sign of the scattered lepton. Results: The data were collected in two days with a primary electron beam energy of only 3.3 GeV, limiting the data from this run to smaller values of Q² and scattering angle. Nonetheless, this measurement yields a data sample for e±p with statistics comparable to those of the best previous measurements. We have shown that we can cleanly identify elastic scattering events and correct for the difference in acceptance for electron and positron scattering. Because we ran with only one polarity for the chicane, we are unable to study the difference between the incoming electron and positron beams. This systematic effect leads to the largest uncertainty in the final ratio of positron to electron scattering: R = 1.027 ± 0.005 ± 0.05 for ⟨Q²⟩ = 0.206 GeV² and 0.830 ≤ ε ≤ 0.943. Conclusions: We have demonstrated that the tertiary e± beam generated using this technique provides the opportunity for dramatically improved comparisons of e±p scattering, covering a significant range in both Q² and scattering angle. Combining data with different chicane polarities will allow for detailed studies of the difference between the incoming e+ and e- beams.

  15. Rapid scatter estimation for CBCT using the Boltzmann transport equation

    NASA Astrophysics Data System (ADS)

    Sun, Mingshan; Maslowski, Alex; Davis, Ian; Wareing, Todd; Failla, Gregory; Star-Lack, Josh

    2014-03-01

    Scatter in cone-beam computed tomography (CBCT) is a significant problem that degrades image contrast, uniformity and CT number accuracy. One means of estimating and correcting for detected scatter is through an iterative deconvolution process known as scatter kernel superposition (SKS). While the SKS approach is efficient, clinically significant errors on the order of 2-4% (20-40 HU) still remain. We have previously shown that the kernel method can be improved by perturbing the kernel parameters based on reference data provided by limited Monte Carlo simulations of a first-pass reconstruction. In this work, we replace the Monte Carlo modeling with a deterministic Boltzmann solver (AcurosCTS) to generate the reference scatter data in a dramatically reduced time. In addition, the algorithm is improved so that instead of adjusting kernel parameters, we directly perturb the SKS scatter estimates. Studies were conducted on simulated data and on a large pelvis phantom scanned on a tabletop system. The new method reduced average reconstruction errors (relative to a reference scan) from 2.5% to 1.8%, and significantly improved visualization of low-contrast objects. In total, 24 projections were simulated with an AcurosCTS execution time of 22 sec/projection using an 8-core computer. We have ported AcurosCTS to the GPU, and current run-times are approximately 4 sec/projection using two GPUs running in parallel.
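
    The perturbation idea above can be illustrated with a minimal sketch (Python, assuming numpy and scipy are available): a kernel-based scatter estimate is rescaled so that it agrees, on average, with a sparse reference estimate such as one from a deterministic Boltzmann solve. The Gaussian kernel, the scatter fraction and the global rescaling are illustrative assumptions, not the published SKS kernels.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def kernel_scatter_estimate(projection, kernel_sigma=30.0, scatter_fraction=0.15):
            # Stand-in for a scatter-kernel-superposition estimate: blur the
            # projection and scale it by an assumed scatter fraction.
            return scatter_fraction * gaussian_filter(projection, kernel_sigma)

        def perturbed_scatter_estimate(projection, reference_scatter):
            # Rescale the kernel-based estimate so its mean matches a reference
            # scatter estimate (e.g. from a deterministic Boltzmann solve).
            estimate = kernel_scatter_estimate(projection)
            scale = reference_scatter.mean() / max(estimate.mean(), 1e-12)
            return scale * estimate

        # Toy usage on a synthetic projection.
        proj = np.random.default_rng(0).uniform(0.5, 1.0, size=(256, 256))
        reference = 1.1 * kernel_scatter_estimate(proj)   # pretend Boltzmann result
        corrected = proj - perturbed_scatter_estimate(proj, reference)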

  16. Theory of thermal conductivity in the disordered electron liquid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwiete, G., E-mail: schwiete@uni-mainz.de; Finkel’stein, A. M.

    2016-03-15

    We study thermal conductivity in the disordered two-dimensional electron liquid in the presence of long-range Coulomb interactions. We describe a microscopic analysis of the problem using the partition function defined on the Keldysh contour as a starting point. We extend the renormalization group (RG) analysis developed for thermal transport in the disordered Fermi liquid and include scattering processes induced by the long-range Coulomb interaction in the sub-temperature energy range. For the thermal conductivity, unlike for the electrical conductivity, these scattering processes yield a logarithmic correction that may compete with the RG corrections. The interest in this correction arises from the fact that it violates the Wiedemann–Franz law. We checked that the sub-temperature correction to the thermal conductivity is not modified either by the inclusion of Fermi liquid interaction amplitudes or as a result of the RG flow. We therefore expect that the answer obtained for this correction is final. We use the theory to describe thermal transport on the metallic side of the metal–insulator transition in Si MOSFETs.

  17. ARGOS: the laser guide star system for the LBT

    NASA Astrophysics Data System (ADS)

    Rabien, S.; Ageorges, N.; Barl, L.; Beckmann, U.; Blümchen, T.; Bonaglia, M.; Borelli, J. L.; Brynnel, J.; Busoni, L.; Carbonaro, L.; Davies, R.; Deysenroth, M.; Durney, O.; Elberich, M.; Esposito, S.; Gasho, V.; Gässler, W.; Gemperlein, H.; Genzel, R.; Green, R.; Haug, M.; Hart, M. L.; Hubbard, P.; Kanneganti, S.; Masciadri, E.; Noenickx, J.; Orban de Xivry, G.; Peter, D.; Quirrenbach, A.; Rademacher, M.; Rix, H. W.; Salinari, P.; Schwab, C.; Storm, J.; Strüder, L.; Thiel, M.; Weigelt, G.; Ziegleder, J.

    2010-07-01

    ARGOS is the Laser Guide Star adaptive optics system for the Large Binocular Telescope. Aiming for a wide-field adaptive optics correction, ARGOS will equip both sides of the LBT with a multi-laser-beacon system and corresponding wavefront sensors, driving LBT's adaptive secondary mirrors. Utilizing high-power pulsed green lasers, the artificial beacons are generated via Rayleigh scattering in the Earth's atmosphere. ARGOS will project a set of three guide stars above each of LBT's mirrors in a wide constellation. The returning scattered light, sensitive in particular to the turbulence close to the ground, is detected in a gated wavefront sensor system. Measuring and correcting the ground layers of the optical distortions enables ARGOS to achieve a correction over a very wide field of view. Taking advantage of this wide-field correction, the science that can be done with the multi-object spectrographs LUCIFER will be boosted by higher spatial resolution and strongly enhanced flux for spectroscopy. Apart from the wide-field correction ARGOS delivers in its ground-layer mode, we foresee diffraction-limited operation with a hybrid sodium-laser/Rayleigh-beacon combination.

  18. Qualitative and quantitative processing of side-scan sonar data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dwan, F.S.; Anderson, A.L.; Hilde, T.W.C.

    1990-06-01

    Modern side-scan sonar systems allow vast areas of seafloor to be rapidly imaged and quantitatively mapped in detail. The application of remote sensing image processing techniques can be used to correct for various distortions inherent in raw sonography. Corrections are possible for water column, slant-range, aspect ratio, speckle and striping noise, multiple returns, power drop-off, and for georeferencing. The final products reveal seafloor features and patterns that are geometrically correct, georeferenced, and have improved signal/noise ratio. These products can be merged with other georeferenced databases for further database management and information extraction. In order to compare data collected by different systems from a common area, and to compare them to ground-truth measurements and geoacoustic models, quantitative corrections must be made for calibrated sonar system and bathymetry effects. Such data inversion must account for system source level, beam pattern, time-varying gain, processing gain, transmission loss, absorption, insonified area, and grazing angle effects. Seafloor classification can then be performed on the calculated backscattering strength using Lambert's law and regression analysis. Examples are given using both approaches: image analysis and inversion of data based on the sonar equation.

  19. WE-AB-204-10: Evaluation of a Novel Dedicated Breast PET System (Mammi-PET)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Z; Swanson, T; O’Connor, M

    2015-06-15

    Purpose: To evaluate the performance characteristics of a novel dedicated breast PET system (Mammi-PET, Oncovision). The system has 2 detector rings giving an axial/transaxial field of view of 8/17 cm. Each ring consists of 12 monolithic LYSO modules coupled to PSPMTs. Methods: Uniformity, sensitivity, energy and spatial resolution were measured according to NEMA standards. Count rate performance was investigated using a source of F-18 (1384 µCi) decayed over 5 half-lives. A prototype PET phantom was imaged for 20 min to evaluate image quality, recovery coefficients and partial volume effects. Under an IRB-approved protocol, 11 patients who had just undergone whole-body PET/CT exams were imaged prone with the breast pendant for 5-10 minutes/breast. Image quality was assessed with and without scatter/attenuation correction and using different reconstruction algorithms. Results: Integral/differential uniformity were 9.8%/6.0%, respectively. System sensitivity was 2.3% on axis, and 2.2% and 2.8% at 3.8 cm and 7.8 cm off-axis. Mean energy resolution of all modules was 23.3%. Spatial resolution (FWHM) was 1.82 mm and 2.90 mm on axis and 5.8 cm off axis. Three cylinders (14 mm diameter) in the PET phantom were filled with activity concentration ratios of 4:1, 3:1, and 2:1 relative to the background. Measured cylinder-to-background ratios were 2.6, 1.8 and 1.5 (without corrections) and 3.6, 2.3 and 1.5 (with attenuation/scatter correction). Five cylinders (14, 10, 6, 4 and 2 mm diameter), each with an activity ratio of 4:1, were measured and showed recovery coefficients of 1, 0.66, 0.45, 0.18 and 0.18 (without corrections), and 1, 0.53, 0.30, 0.13 and 0 (with attenuation/scatter correction). Optimal phantom image quality was obtained with the 3D MLEM algorithm, >20 iterations and without attenuation/scatter correction. Conclusion: The MAMMI system demonstrated good performance characteristics. Further work is needed to determine the optimal reconstruction parameters for qualitative and quantitative applications.

  20. Theory of bright-field scanning transmission electron microscopy for tomography

    NASA Astrophysics Data System (ADS)

    Levine, Zachary H.

    2005-02-01

    Radiation transport theory is applied to electron microscopy of samples composed of one or more materials. The theory, originally due to Goudsmit and Saunderson, assumes only elastic scattering and an amorphous medium dominated by atomic interactions. For samples composed of a single material, the theory yields reasonable parameter-free agreement with experimental data taken from the literature for the multiple scattering of 300-keV electrons through aluminum foils up to 25 μm thick. For thin films, the theory gives a validity condition for Beer's law. For thick films, a variant of Molière's theory [V. G. Molière, Z. Naturforschg. 3a, 78 (1948)] of multiple scattering leads to a form for the bright-field signal for foils in the multiple-scattering regime. The signal varies as [t ln(e^(1-2γ) t/τ)]^(-1), where t is the path length of the beam, τ is the mean free path for elastic scattering, and γ is Euler's constant. The Goudsmit-Saunderson solution interpolates numerically between these two limits. For samples with multiple materials, elemental sensitivity is developed through the angular dependence of the scattering. From the elastic scattering cross sections of the first 92 elements, a singular-value decomposition of a vector space spanned by the elastic scattering cross sections minus a delta function shows that there is a dominant common mode, with composition-dependent corrections of about 2%. A mathematically correct reconstruction procedure beyond 2% accuracy requires the acquisition of the bright-field signal as a function of the scattering angle. Tomographic reconstructions are carried out for three singular vectors of a sample problem with four elements Cr, Cu, Zr, and Te. The three reconstructions are presented jointly as a color image; all four elements are clearly identifiable throughout the image.
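
    The thick-film scaling quoted above can be evaluated directly; the short sketch below (Python/numpy) is only a numerical illustration of that limiting form, with the path length and mean free path expressed in arbitrary consistent units.

        import numpy as np

        def bright_field_signal(t, tau):
            # Thick-film limit quoted above: signal ~ [t * ln(e^(1-2*gamma) * t / tau)]^(-1),
            # valid in the multiple-scattering regime (t >> tau).
            gamma = np.euler_gamma
            return 1.0 / (t * np.log(np.exp(1.0 - 2.0 * gamma) * t / tau))

        t = np.linspace(5.0, 50.0, 10)   # beam path length in units of the mean free path
        print(bright_field_signal(t, tau=1.0))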

  1. Optical artefact characterization and correction in volumetric scintillation dosimetry

    PubMed Central

    Robertson, Daniel; Hui, Cheukkai; Archambault, Louis; Mohan, Radhe; Beddar, Sam

    2014-01-01

    The goals of this study were (1) to characterize the optical artefacts affecting measurement accuracy in a volumetric liquid scintillation detector, and (2) to develop methods to correct for these artefacts. The optical artefacts addressed were photon scattering, refraction, camera perspective, vignetting, lens distortion, the lens point spread function, stray radiation, and noise in the camera. These artefacts were evaluated by theoretical and experimental means, and specific correction strategies were developed for each artefact. The effectiveness of the correction methods was evaluated by comparing raw and corrected images of the scintillation light from proton pencil beams against validated Monte Carlo calculations. Blurring due to the lens and refraction at the scintillator tank-air interface were found to have the largest effect on the measured light distribution, and lens aberrations and vignetting were important primarily at the image edges. Photon scatter in the scintillator was not found to be a significant source of artefacts. The correction methods effectively mitigated the artefacts, increasing the average gamma analysis pass rate from 66% to 98% for gamma criteria of 2% dose difference and 2 mm distance to agreement. We conclude that optical artefacts cause clinically meaningful errors in the measured light distribution, and we have demonstrated effective strategies for correcting these optical artefacts. PMID:24321820

  2. Modeling ultrasonic transient scattering from biological tissues including their dispersive properties directly in the time domain.

    PubMed

    Norton, G V; Novarini, J C

    2007-06-01

    Ultrasonic imaging in medical applications involves propagation and scattering of acoustic waves within and by biological tissues that are intrinsically dispersive. Analytical approaches for modeling propagation and scattering in inhomogeneous media are difficult and often require extreme simplifying approximations in order to achieve a solution. To avoid such approximations, the direct numerical solution of the wave equation via the method of finite differences offers the most direct tool, which takes into account diffraction and refraction. It also allows for detailed modeling of the real anatomic structure and combination/layering of tissues. In all cases, the correct inclusion of the dispersive properties of the tissues can make the difference in the interpretation of the results. However, the inclusion of dispersion directly in the time domain proved until recently to be an elusive problem. In order to model the transient signal, a convolution operator that takes into account the dispersive characteristics of the medium is introduced into the linear wave equation. To test the ability of this operator to handle scattering from localized scatterers, two-dimensional numerical modeling of scattering from an infinite cylinder with physical properties associated with biological tissue is carried out in this work. The numerical solutions are compared with the exact solution synthesized from the frequency domain for a variety of tissues having distinct dispersive properties. It is shown that in all cases, the use of the convolutional propagation operator leads to the correct solution for the scattered field.

  3. Correction for reflected sky radiance in low-altitude coastal hyperspectral images.

    PubMed

    Kim, Minsu; Park, Joong Yong; Kopilevich, Yuri; Tuell, Grady; Philpot, William

    2013-11-10

    Low-altitude coastal hyperspectral imagery is sensitive to reflections of sky radiance at the water surface. Even in the absence of sun glint, and for a calm water surface, the wide range of viewing angles may result in pronounced, low-frequency variations of the reflected sky radiance across the scan line, depending on the solar position. The variation in reflected sky radiance can be obscured by strong high-spatial-frequency sun glint and, at high altitude, by path radiance. However, at low altitudes, the low-spatial-frequency sky radiance effect is frequently significant and is not removed effectively by the typical corrections for sun glint. The reflected sky radiance from the water surface observed by a low-altitude sensor can be modeled, to a first approximation, as the sum of the multiply scattered Rayleigh path radiance and the direct solar-beam radiance singly scattered by aerosol in the lower atmosphere. The path radiance from zenith to the half field of view (FOV) of a typical airborne spectroradiometer varies relatively little, and its reflected radiance at the detector array results in a flat baseline. Therefore, the along-track variation is contributed mostly by the forward singly scattered solar-beam radiance. The scattered solar-beam radiances arrive at the water surface with different incident angles. Thus, the reflected radiance received at the detector array corresponds to a certain scattering angle, and its variation is most effectively parameterized using the downward scattering angle (DSA) of the solar beam. Computation of the DSA must account for the roll, pitch, and heading of the platform and the viewing geometry of the sensor, along with the solar ephemeris. Once the DSA image is calculated, the near-infrared (NIR) radiances from selected water scan lines are compared, and a relationship between DSA and NIR radiance is derived. We then apply the relationship to the entire DSA image to create an NIR reference image. Using the NIR reference image and an atmospheric spectral reflectance look-up table, the low-spatial-frequency variation of the water-surface-reflected atmospheric contribution is removed.
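
    A minimal sketch of the DSA-based step described above is given below (Python/numpy); it covers only the empirical DSA-to-NIR fit and the construction of the NIR reference image, and the polynomial degree and function names are assumptions made for illustration.

        import numpy as np

        def fit_dsa_to_nir(dsa_samples, nir_samples, degree=2):
            # Fit a smooth relationship between the downward scattering angle (DSA)
            # and NIR radiance, using pixels from selected water scan lines.
            return np.polynomial.Polynomial.fit(dsa_samples, nir_samples, degree)

        def nir_reference_image(dsa_image, dsa_to_nir_model):
            # Evaluate the fitted relationship over the whole DSA image to obtain
            # the low-frequency reflected-sky NIR reference.
            return dsa_to_nir_model(dsa_image)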

  4. SU-F-I-13: Correction Factor Computations for the NIST Ritz Free Air Chamber for Medium-Energy X Rays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergstrom, P

    Purpose: The National Institute of Standards and Technology (NIST) uses 3 free-air chambers to establish primary standards for radiation dosimetry at x-ray energies. For medium-energy x rays, the Ritz free-air chamber is the main measurement device. In order to convert the charge or current collected by the chamber to the radiation quantities air kerma or air kerma rate, a number of correction factors specific to the chamber must be applied. Methods: We used the Monte Carlo codes EGSnrc and PENELOPE. Results: Among these correction factors are the diaphragm correction (which accounts for interactions of photons from the x-ray source in the beam-defining diaphragm of the chamber), the scatter correction (which accounts for the effects of photons scattered out of the primary beam), the electron-loss correction (which accounts for electrons that only partially expend their energy in the collection region), the fluorescence correction (which accounts for ionization due to reabsorption of fluorescence photons), and the bremsstrahlung correction (which accounts for the reabsorption of bremsstrahlung photons). We have computed monoenergetic corrections for the NIST Ritz chamber for the 1 cm, 3 cm and 7 cm collection plates. Conclusion: We find good agreement with others' results for the 7 cm plate. The data used to obtain these correction factors will be used to establish air kerma and its uncertainty in the standard NIST x-ray beams.
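
    As a schematic of how such factors enter the measurement, the sketch below (Python) combines a set of multiplicative correction factors with the charge-to-kerma conversion K = (Q/m)(W/e)·Πk_i; the W/e value is the standard dry-air constant, and treating all of the listed effects as simple multiplicative factors is an assumption made purely for illustration.

        from math import prod

        def air_kerma(charge_coulomb, air_mass_kg, correction_factors, w_over_e=33.97):
            # Schematic free-air-chamber conversion: K = (Q / m) * (W/e) * prod(k_i),
            # where the k_i stand for the diaphragm, scatter, electron-loss,
            # fluorescence and bremsstrahlung corrections discussed above
            # (values supplied by the user).
            return (charge_coulomb / air_mass_kg) * w_over_e * prod(correction_factors)

        # Example with placeholder correction factors close to unity.
        print(air_kerma(1.0e-9, 1.0e-5, [1.001, 0.995, 1.002, 0.999, 1.000]))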

  5. Survey of background scattering from materials found in small-angle neutron scattering.

    PubMed

    Barker, J G; Mildner, D F R

    2015-08-01

    Measurements and calculations of beam attenuation and background scattering for common materials placed in a neutron beam are presented over the temperature range of 300-700 K. Time-of-flight (TOF) measurements have also been made, to determine the fraction of the background that is either inelastic or quasi-elastic scattering as measured with a 3He detector. Other background sources considered include double Bragg diffraction from windows or samples, scattering from gases, and phonon scattering from solids. Background from the residual air in detector vacuum vessels and scattering from the 3He detector dome are presented. The thickness dependence of the multiple scattering correction for forward scattering from water is calculated. Inelastic phonon background scattering at small angles for crystalline solids is both modeled and compared with measurements. Methods of maximizing the signal-to-noise ratio by material selection, choice of sample thickness and wavelength, removal of inelastic background by TOF or Be filters, and removal of spin-flip scattering with polarized beam analysis are discussed.

  6. Survey of background scattering from materials found in small-angle neutron scattering

    PubMed Central

    Barker, J. G.; Mildner, D. F. R.

    2015-01-01

    Measurements and calculations of beam attenuation and background scattering for common materials placed in a neutron beam are presented over the temperature range of 300–700 K. Time-of-flight (TOF) measurements have also been made, to determine the fraction of the background that is either inelastic or quasi-elastic scattering as measured with a 3He detector. Other background sources considered include double Bragg diffraction from windows or samples, scattering from gases, and phonon scattering from solids. Background from the residual air in detector vacuum vessels and scattering from the 3He detector dome are presented. The thickness dependence of the multiple scattering correction for forward scattering from water is calculated. Inelastic phonon background scattering at small angles for crystalline solids is both modeled and compared with measurements. Methods of maximizing the signal-to-noise ratio by material selection, choice of sample thickness and wavelength, removal of inelastic background by TOF or Be filters, and removal of spin-flip scattering with polarized beam analysis are discussed. PMID:26306088

  7. Probing the Interstellar Dust towards the Galactic Centre using X-ray Dust Scattering Halos

    NASA Astrophysics Data System (ADS)

    Jin, C.; Ponti, G.; Haberl, F.; Smith, R.

    2017-10-01

    Dust scattering creates an X-ray halo that contains abundant information about the interstellar dust along the source's line of sight (LOS), and is most prominent when the LOS column density N_H is high. In this talk, I will present results from our latest study of a bright dust scattering halo around an eclipsing X-ray binary located 1.45 arcmin away from Sgr A*, namely AX J1745.6-2901. This study is based on a large set of XMM-Newton and Chandra observations, and is so far the best dust scattering halo study of an X-ray transient in the Galactic centre (GC). I will show that the foreground dust of AX J1745.6-2901 can be decomposed into two major thick dust layers. One layer contains (66-81)% of the total LOS dust and is several kpc away from the source, and so is most likely to reside in the Galactic disc. The other layer is local to the source. I will also show that the dust scattering halo can cause the source spectrum to depend severely on the source extraction region. Such spectral bias can be corrected by our new Xspec model, which is likely to be applicable to Sgr A* and other GC sources as well.

  8. Parallelized Monte Carlo software to efficiently simulate the light propagation in arbitrarily shaped objects and aligned scattering media.

    PubMed

    Zoller, Christian Johannes; Hohmann, Ansgar; Foschum, Florian; Geiger, Simeon; Geiger, Martin; Ertl, Thomas Peter; Kienle, Alwin

    2018-06-01

    A GPU-based Monte Carlo software (MCtet) was developed to calculate the light propagation in arbitrarily shaped objects, like a human tooth, represented by a tetrahedral mesh. A unique feature of MCtet is a concept to realize different kinds of light sources illuminating the complex-shaped surface of an object, for which no preprocessing step is needed. With this concept, it is also possible to consider photons leaving a turbid medium and reentering it in the case of a concave object. The correct implementation was shown by comparison with five other Monte Carlo software packages. A hundredfold acceleration compared with CPU-based programs was found. MCtet can simulate anisotropic light propagation, e.g., by accounting for scattering at cylindrical structures. The important influence of anisotropic light propagation, caused, e.g., by the tubules in human dentin, is shown for the transmission spectrum through a tooth. It was found that the sensitivity of transmission spectra to a change in the oxygen saturation inside the pulp is much larger if the tubules are considered. Another "light guiding" effect, based on a combination of low scattering and a high refractive index in enamel, is described. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  9. A combined surface/volume scattering retracking algorithm for ice sheet satellite altimetry

    NASA Technical Reports Server (NTRS)

    Davis, Curt H.

    1992-01-01

    An algorithm based upon a combined surface-volume scattering model is developed that can be used to retrack individual altimeter waveforms over ice sheets. An iterative least-squares procedure is used to fit the combined model to the return waveforms. The retracking algorithm comprises two distinct sections. The first generates initial model parameter estimates from a filtered altimeter waveform. The second uses the initial estimates, the theoretical model, and the waveform data to generate corrected parameter estimates. This retracking algorithm can be used to assess the accuracy of elevations produced by current retracking algorithms when subsurface volume scattering is present. This is extremely important so that repeated altimeter elevation measurements can be used to accurately detect changes in the mass balance of the ice sheets. By analyzing the distribution of the model parameters over large portions of the ice sheet, regional and seasonal variations in the near-surface properties of the snowpack can be quantified.
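
    The two-stage retracking structure (initial estimates, then an iterative least-squares refinement) can be sketched as follows in Python with scipy; the waveform model used here is a simplified surrogate (an error-function leading edge with an exponential trailing decay), not the combined surface-volume scattering model of the paper.

        import numpy as np
        from scipy.optimize import least_squares
        from scipy.special import erf

        def surrogate_waveform(gate, amplitude, t0, rise, decay):
            # Simplified stand-in return: error-function leading edge times an
            # exponential trailing edge.
            lead = 0.5 * (1.0 + erf((gate - t0) / rise))
            trail = np.exp(-np.clip(gate - t0, 0.0, None) / decay)
            return amplitude * lead * trail

        def retrack_waveform(gate, waveform, initial_params):
            # Stage two: iterative least-squares fit of the model to one waveform.
            # `initial_params` would come from the filtered-waveform first stage.
            residual = lambda p: surrogate_waveform(gate, *p) - waveform
            return least_squares(residual, initial_params).x

        gate = np.arange(64.0)
        truth = surrogate_waveform(gate, 1.0, 20.0, 2.0, 25.0)
        params = retrack_waveform(gate, truth, initial_params=[0.8, 18.0, 3.0, 20.0])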

  10. Semimicroscopic analysis of 6Li+28Si elastic scattering at 76 to 318 MeV

    NASA Astrophysics Data System (ADS)

    Hassanain, M. A.; Anwar, M.; Behairy, Kassem O.

    2018-04-01

    Using the α-cluster structure of the colliding nuclei, the elastic scattering of 6Li+28Si at energies from 76 to 318 MeV has been investigated with the real folding cluster approach. The results of the cluster analysis are compared with those obtained with the CDM3Y6 effective density- and energy-dependent nucleon-nucleon (NN) interaction based upon G-matrix elements of the M3Y-Paris potential. A Woods-Saxon (WS) form was used for the imaginary potential. For all energies and derived potentials, the diffraction region was well reproduced, except at Elab = 135 and 154 MeV at large angles. These results suggest that the addition of a surface (DWS) imaginary potential term to the volume imaginary potential is essential for a correct description of the refractive structure of the 6Li elastic scattering distribution at these energies. The energy dependence of the total reaction cross sections and of the real and imaginary volume integrals is also discussed.
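
    For reference, the Woods-Saxon form mentioned for the imaginary potential is written out below as a short Python function; the parameter names and example values are illustrative only.

        import numpy as np

        def woods_saxon(r, depth, radius, diffuseness):
            # Woods-Saxon form factor used for the volume imaginary potential:
            # W(r) = -W0 / (1 + exp((r - R) / a)).
            return -depth / (1.0 + np.exp((r - radius) / diffuseness))

        r = np.linspace(0.0, 12.0, 121)                               # fm
        w = woods_saxon(r, depth=20.0, radius=5.0, diffuseness=0.7)   # MeV, illustrative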

  11. Experimental comparison between performance of the PM and LPM methods in computed radiography

    NASA Astrophysics Data System (ADS)

    Kermani, Aboutaleb; Feghhi, Seyed Amir Hossein; Rokrok, Behrouz

    2018-07-01

    Scatter degrades image quality and reduces the information content available for quantitative measurements when projections are created with ionizing radiation. Therefore, a variety of methods have been applied for scatter reduction and for correction of its undesirable effects. As new approaches, the ordinary and localized primary modulation methods have already been used individually, through experiments and simulations, in medical and industrial computed tomography, respectively. The aim of this study is to evaluate the capabilities and limitations of these methods in comparison with each other. To this end, the ordinary primary modulation method has been implemented in computed radiography for the first time, and the potential of both methods has been assessed in thickness measurement as well as in determination of the scatter-to-primary signal ratio. The comparison results, based on experimental outputs obtained using aluminum specimens and continuous X-ray spectra, are to the benefit of the localized primary modulation method because of improved accuracy and higher performance, especially at the edges.

  12. WE-G-204-06: Grid-Line Artifact Minimization for High Resolution Detectors Using Iterative Residual Scatter Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rana, R; Bednarek, D; Rudin, S

    2015-06-15

    Purpose: Anti-scatter grid-line artifacts are more prominent for high-resolution x-ray detectors since the fraction of a pixel blocked by the grid septa is large. Direct logarithmic subtraction of the artifact pattern is limited by residual scattered radiation, and we investigate an iterative method for scatter correction. Methods: A stationary Smit-Röntgen anti-scatter grid was used with a high-resolution Dexela 1207 CMOS X-ray detector (75 µm pixel size) to image an artery block (Nuclear Associates, Model 76-705) placed within a uniform head-equivalent phantom as the scattering source. The image of the phantom was divided by a flat-field image obtained without scatter but with the grid to eliminate grid-line artifacts. Constant scatter values were subtracted from the phantom image before dividing by the averaged flat-field-with-grid image. The standard deviation of pixel values for a fixed region of the resultant images with different subtracted scatter values provided a measure of the remaining grid-line artifacts. Results: A plot of the standard deviation of image pixel values versus the subtracted scatter value shows that the image structure noise reaches a minimum before going up again as the scatter value is increased. This minimum corresponds to a minimization of the grid-line artifacts, as demonstrated in line profile plots obtained through each of the images perpendicular to the grid lines. Artifact-free images of the artery block were obtained with the optimal scatter value found by this iterative approach. Conclusion: Residual scatter subtraction can provide improved grid-line artifact elimination when using the flat-field-with-grid "subtraction" technique. The standard deviation of image pixel values can be used to determine the optimal scatter value to subtract to obtain a minimization of grid-line artifacts with high-resolution x-ray imaging detectors. This study was supported by NIH Grant R01EB002873 and an equipment grant from Toshiba Medical Systems Corp.
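
    The iterative search described above reduces to a one-dimensional scan over candidate scatter values; a minimal sketch (Python/numpy, with assumed array names) is:

        import numpy as np

        def optimal_residual_scatter(phantom_image, flat_with_grid, roi, scatter_values):
            # For each candidate constant scatter value s, form (phantom - s) / flat
            # and measure the pixel standard deviation inside the ROI; the value
            # minimizing it gives the best suppression of residual grid-line artifacts.
            best_value, best_std = None, np.inf
            for s in scatter_values:
                ratio = (phantom_image - s) / flat_with_grid
                spread = float(ratio[roi].std())
                if spread < best_std:
                    best_value, best_std = s, spread
            return best_value, best_std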

  13. Forward scattering effects on muon imaging

    NASA Astrophysics Data System (ADS)

    Gómez, H.; Gibert, D.; Goy, C.; Jourde, K.; Karyotakis, Y.; Katsanevas, S.; Marteau, J.; Rosas-Carbajal, M.; Tonazzo, A.

    2017-12-01

    Muon imaging is one of the most promising non-invasive techniques for density structure scanning, especially for large objects reaching the kilometre scale. It already has interesting applications in different fields, such as geophysics and nuclear safety, and has been proposed for others, such as engineering and archaeology. One of the approaches of this technique is based on the well-known radiography principle, reconstructing the incident direction of the detected muons after they cross the studied object. In this case, muons detected after a previous forward scattering on the object surface represent an irreducible background noise, leading to a bias on the measurement and consequently on the reconstruction of the object's mean density. Therefore, a prior characterization of this effect provides valuable information with which to correct the obtained results. Although the muon scattering process has already been described theoretically, a general study of this process has been carried out based on Monte Carlo simulations, resulting in a versatile tool to evaluate this effect for different object geometries and compositions. As an example, these simulations have been used to evaluate the impact of forward-scattered muons on two different applications of muon imaging, archaeology and volcanology, revealing a significant impact in the latter case. The general way in which the tools have been developed will allow equivalent studies to be made in the future for other muon imaging applications following the same procedure.

  14. Quantum mechanical generalized phase-shift approach to atom-surface scattering: a Feshbach projection approach to dealing with closed channel effects.

    PubMed

    Maji, Kaushik; Kouri, Donald J

    2011-03-28

    We have developed a new method for solving quantum dynamical scattering problems, using the time-independent Schrödinger equation (TISE), based on a novel method to generalize a "one-way" quantum mechanical wave equation, impose correct boundary conditions, and eliminate exponentially growing closed channel solutions. The approach is readily parallelized to achieve approximate N² scaling, where N is the number of coupled equations. The full two-way nature of the TISE is included while propagating the wave function in the scattering variable and the full S-matrix is obtained. The new algorithm is based on a "Modified Cayley" operator splitting approach, generalizing earlier work where the method was applied to the time-dependent Schrödinger equation. All scattering variable propagation approaches to solving the TISE involve solving a Helmholtz-type equation, and for more than one degree of freedom, these are notoriously ill-behaved, due to the unavoidable presence of exponentially growing contributions to the numerical solution. Traditionally, the method used to eliminate exponential growth has posed a major obstacle to the full parallelization of such propagation algorithms. We stabilize by using the Feshbach projection operator technique to remove all the nonphysical exponentially growing closed channels, while retaining all of the propagating open channel components, as well as exponentially decaying closed channel components.

  15. Measurement of event shape variables in deep inelastic e p scattering

    NASA Astrophysics Data System (ADS)

    Adloff, C.; Aid, S.; Anderson, M.; Andreev, V.; Andrieu, B.; Arkadov, V.; Arndt, C.; Ayyaz, I.; Babaev, A.; Bähr, J.; Bán, J.; Baranov, P.; Barrelet, E.; Barschke, R.; Bartel, W.; Bassler, U.; Beck, H. P.; Beck, M.; Behrend, H.-J.; Belousov, A.; Berger, Ch.; Bernardi, G.; Bertrand-Coremans, G.; Beyer, R.; Biddulph, P.; Bizot, J. C.; Borras, K.; Botterweck, F.; Boudry, V.; Bourov, S.; Braemer, A.; Braunschweig, W.; Brisson, V.; Brown, D. P.; Brückner, W.; Bruel, P.; Bruncko, D.; Brune, C.; Bürger, J.; Büsser, F. W.; Buniatian, A.; Burke, S.; Buschhorn, G.; Calvet, D.; Campbell, A. J.; Carli, T.; Charlet, M.; Clarke, D.; Clerbaux, B.; Cocks, S.; Contreras, J. G.; Cormack, C.; Coughlan, J. A.; Cousinou, M.-C.; Cox, B. E.; Cozzika, G.; Cussans, D. G.; Cvach, J.; Dagoret, S.; Dainton, J. B.; Dau, W. D.; Daum, K.; David, M.; de Roeck, A.; de Wolf, E. A.; Delcourt, B.; Dirkmann, M.; Dixon, P.; Dlugosz, W.; Dollfus, C.; Donovan, K. T.; Dowell, J. D.; Dreis, H. B.; Droutskoi, A.; Ebert, J.; Ebert, T. R.; Eckerlin, G.; Efremenko, V.; Egli, S.; Eichler, R.; Eisele, F.; Eisenhandler, E.; Elsen, E.; Erdmann, M.; Fahr, A. B.; Favart, L.; Fedotov, A.; Felst, R.; Feltesse, J.; Ferencei, J.; Ferrarotto, F.; Flamm, K.; Fleischer, M.; Flieser, M.; Flügge, G.; Fomenko, A.; Formánek, J.; Foster, J. M.; Franke, G.; Gabathuler, E.; Gabathuler, K.; Gaede, F.; Garvey, J.; Gayler, J.; Gebauer, M.; Gerhards, R.; Glazov, A.; Goerlich, L.; Gogitidze, N.; Goldberg, M.; Gonzalez-Pineiro, B.; Gorelov, I.; Grab, C.; Grässler, H.; Greenshaw, T.; Griffiths, R. K.; Grindhammer, G.; Gruber, A.; Gruber, C.; Hadig, T.; Haidt, D.; Hajduk, L.; Haller, T.; Hampel, M.; Haynes, W. J.; Heinemann, B.; Heinzelmann, G.; Henderson, R. C. W.; Hengstmann, S.; Henschel, H.; Herynek, I.; Hess, M. F.; Hewitt, K.; Hiller, K. H.; Hilton, C. D.; Hladký, J.; Höppner, M.; Hoffmann, D.; Holtom, T.; Horisberger, R.; Hudgson, V. L.; Hütte, M.; Ibbotson, M.; İşsever, Ç.; Itterbeck, H.; Jacquet, M.; Jaffre, M.; Janoth, J.; Jansen, D. M.; Jönsson, L.; Johnson, D. P.; Jung, H.; Kalmus, P. I. P.; Kander, M.; Kant, D.; Kathage, U.; Katzy, J.; Kaufmann, H. H.; Kaufmann, O.; Kausch, M.; Kazarian, S.; Kenyon, I. R.; Kermiche, S.; Keuker, C.; Kiesling, C.; Klein, M.; Kleinwort, C.; Knies, G.; Köhler, T.; Köhne, J. H.; Kolanoski, H.; Kolya, S. D.; Korbel, V.; Kostka, P.; Kotelnikov, S. K.; Krämerkämper, T.; Krasny, M. W.; Krehbiel, H.; Krücker, D.; Küpper, A.; Küster, H.; Kuhlen, M.; Kurča, T.; Laforge, B.; Landon, M. P. J.; Lange, W.; Langenegger, U.; Lebedev, A.; Lehner, F.; Lemaitre, V.; Levonian, S.; Lindstroem, M.; Linsel, F.; Lipinski, J.; List, B.; Lobo, G.; Lopez, G. C.; Lubimov, V.; Lüke, D.; Lytkin, L.; Magnussen, N.; Mahlke-Krüger, H.; Malinovski, E.; Maraček, R.; Marage, P.; Marks, J.; Marshall, R.; Martens, J.; Martin, G.; Martin, R.; Martyn, H.-U.; Martyniak, J.; Mavroidis, T.; Maxfield, S. J.; McMahon, S. J.; Mehta, A.; Meier, K.; Merkel, P.; Metlica, F.; Meyer, A.; Meyer, A.; Meyer, H.; Meyer, J.; Meyer, P.-O.; Migliori, A.; Mikocki, S.; Milstead, D.; Moeck, J.; Moreau, F.; Morris, J. V.; Mroczko, E.; Müller, D.; Müller, K.; Murín, P.; Nagovizin, V.; Nahnhauer, R.; Naroska, B.; Naumann, Th.; Négri, I.; Newman, P. R.; Newton, D.; Nguyen, H. K.; Nicholls, T. C.; Niebergall, F.; Niebuhr, C.; Niedzballa, Ch.; Niggli, H.; Nowak, G.; Nunnemann, T.; Oberlack, H.; Olsson, J. E.; Ozerov, D.; Palmen, P.; Panaro, E.; Panitch, A.; Pascaud, C.; Passaggio, S.; Patel, G. D.; Pawletta, H.; Peppel, E.; Perez, E.; Phillips, J. 
P.; Pieuchot, A.; Pitzl, D.; Pöschl, R.; Pope, G.; Povh, B.; Rabbertz, K.; Reimer, P.; Rick, H.; Reiss, S.; Rizvi, E.; Robmann, P.; Roosen, R.; Rosenbauer, K.; Rostovtsev, A.; Rouse, F.; Royon, C.; Rüter, K.; Rusakov, S.; Rybicki, K.; Sankey, D. P. C.; Schacht, P.; Schiek, S.; Schleif, S.; Schleper, P.; von Schlippe, W.; Schmidt, D.; Schmidt, G.; Schoeffel, L.; Schöning, A.; Schröder, V.; Schuhmann, E.; Schwab, B.; Sefkow, F.; Semenov, A.; Shekelyan, V.; Sheviakov, I.; Shtarkov, L. N.; Siegmon, G.; Siewert, U.; Sirois, Y.; Skillicorn, I. O.; Sloan, T.; Smirnov, P.; Smith, M.; Solochenko, V.; Soloviev, Y.; Specka, A.; Spiekermann, J.; Spielman, S.; Spitzer, H.; Squinabol, F.; Steffen, P.; Steinberg, R.; Steinhart, J.; Stella, B.; Stellberger, A.; Stiewe, J.; Stößlein, U.; Stolze, K.; Straumann, U.; Struczinski, W.; Sutton, J. P.; Tapprogge, S.; Taševský, M.; Tchernyshov, V.; Tchetchelnitski, S.; Theissen, J.; Thompson, G.; Thompson, P. D.; Tobien, N.; Todenhagen, R.; Truöl, P.; Tsipolitis, G.; Turnau, J.; Tzamariudaki, E.; Uelkes, P.; Usik, A.; Valkár, S.; Valkárová, A.; Vallée, C.; van Esch, P.; van Mechelen, P.; Vandenplas, D.; Vazdik, Y.; Verrecchia, P.; Villet, G.; Wacker, K.; Wagener, A.; Wagener, M.; Wallny, R.; Walter, T.; Waugh, B.; Weber, G.; Weber, M.; Wegener, D.; Wegner, A.; Wengler, T.; Werner, M.; West, L. R.; Wiesand, S.; Wilksen, T.; Willard, S.; Winde, M.; Winter, G.-G.; Wittek, C.; Wobisch, M.; Wollatz, H.; Wünsch, E.; ŽáČek, J.; Zarbock, D.; Zhang, Z.; Zhokin, A.; Zini, P.; Zomer, F.; Zsembery, J.; Zurnedden, M.

    1997-02-01

    Deep inelastic e p scattering data, taken with the H1 detector at HERA, are used to study the event shape variables thrust, jet broadening and jet mass in the current hemisphere of the Breit frame over a large range of momentum transfers Q between 7 GeV and 100 GeV. The data are compared with results from e+e- experiments. Using second-order QCD calculations and an approach to relate hadronisation effects to power corrections, an analysis of the Q dependences of the means of the event shape parameters is presented, from which both the power corrections and the strong coupling constant are determined without any assumption on fragmentation models. The power corrections of all event shape variables investigated follow a 1/Q behaviour and can be described by a common parameter α₀.
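
    Schematically, the Q dependence of a mean event shape is fitted with a perturbative piece plus a 1/Q power correction; the sketch below (Python/scipy) uses a crude one-loop stand-in for the perturbative part and synthetic data generated from the model itself, purely to show the shape of such a fit, not the analysis actually performed by H1.

        import numpy as np
        from scipy.optimize import curve_fit

        def mean_event_shape(Q, pert_coeff, power_coeff):
            # Toy parameterization: perturbative contribution proportional to a crude
            # one-loop alpha_s(Q) (Lambda ~ 0.2 GeV assumed), plus a 1/Q power correction.
            alpha_s = 1.0 / np.log(Q / 0.2)
            return pert_coeff * alpha_s + power_coeff / Q

        rng = np.random.default_rng(1)
        Q = np.linspace(8.0, 100.0, 12)                  # GeV
        synthetic = mean_event_shape(Q, 1.0, 0.5) + 0.002 * rng.standard_normal(Q.size)
        (pert_coeff, power_coeff), _ = curve_fit(mean_event_shape, Q, synthetic, p0=[1.0, 0.5])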

  16. Role of oceanic air bubbles in atmospheric correction of ocean color imagery.

    PubMed

    Yan, Banghua; Chen, Bingquan; Stamnes, Knut

    2002-04-20

    Ocean color is the radiance that emanates from the ocean because of scattering by chlorophyll pigments and particles of organic and inorganic origin. Air bubbles in the ocean also scatter light and thus contribute to the water-leaving radiance. This additional water-leaving radiance that is due to oceanic air bubbles could violate the black pixel assumption at near-infrared wavelengths and be attributed to chlorophyll in the visible. Hence, the accuracy of the atmospheric correction required for the retrieval of ocean color from satellite measurements is impaired. A comprehensive radiative transfer code for the coupled atmosphere--ocean system is employed to assess the effect of oceanic air bubbles on atmospheric correction of ocean color imagery. This effect is found to depend on the wavelength-dependent optical properties of oceanic air bubbles as well as atmospheric aerosols.

  17. Role of oceanic air bubbles in atmospheric correction of ocean color imagery

    NASA Astrophysics Data System (ADS)

    Yan, Banghua; Chen, Bingquan; Stamnes, Knut

    2002-04-01

    Ocean color is the radiance that emanates from the ocean because of scattering by chlorophyll pigments and particles of organic and inorganic origin. Air bubbles in the ocean also scatter light and thus contribute to the water-leaving radiance. This additional water-leaving radiance that is due to oceanic air bubbles could violate the black pixel assumption at near-infrared wavelengths and be attributed to chlorophyll in the visible. Hence, the accuracy of the atmospheric correction required for the retrieval of ocean color from satellite measurements is impaired. A comprehensive radiative transfer code for the coupled atmosphere-ocean system is employed to assess the effect of oceanic air bubbles on atmospheric correction of ocean color imagery. This effect is found to depend on the wavelength-dependent optical properties of oceanic air bubbles as well as atmospheric aerosols.

  18. Mask patterning process using the negative tone chemically amplified resist TOK OEBR-CAN024

    NASA Astrophysics Data System (ADS)

    Irmscher, Mathias; Beyer, Dirk; Butschke, Joerg; Hudek, Peter; Koepernik, Corinna; Plumhoff, Jason; Rausa, Emmanuel; Sato, Mitsuru; Voehringer, Peter

    2004-08-01

    Optimized process parameters using the TOK OEBR-CAN024 resist for high chrome load patterning have been determined. A tight linearity tolerance for opaque and clear features, independent of the local pattern density, was the goal of our process integration work. For this purpose, we evaluated a new correction method taking into account electron scattering and process influences. The method is based on matching the measured pattern geometry by iterative back-simulation using multiple Gaussian and/or exponential functions. The obtained control function acts as input for the proximity correction software PROXECCO. Approaches with different pattern oversizing and two Cr thicknesses were carried out, and the results are reported. Isolated opaque and clear lines could be realized in a very tight linearity range. The increasing line width of small dense lines, induced by the etching process, could be corrected only partially.
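
    The multiple-Gaussian description of electron scattering mentioned above is commonly written as a two-Gaussian proximity function; a minimal sketch (Python/numpy, with illustrative parameter values) is given below. Additional Gaussian or exponential terms, as used in the matching procedure above, can be added in the same way.

        import numpy as np

        def proximity_psf(r, alpha, beta, eta):
            # Two-Gaussian proximity function: a narrow forward-scattering term of
            # width alpha plus a broad backscattering term of width beta, weighted
            # by eta; normalized so the weights sum to one.
            forward = np.exp(-(r / alpha) ** 2) / (np.pi * alpha ** 2)
            backward = eta * np.exp(-(r / beta) ** 2) / (np.pi * beta ** 2)
            return (forward + backward) / (1.0 + eta)

        r = np.linspace(0.0, 10.0, 200)                          # micrometres
        psf = proximity_psf(r, alpha=0.05, beta=3.0, eta=0.7)    # illustrative values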

  19. Miniature optical planar camera based on a wide-angle metasurface doublet corrected for monochromatic aberrations

    PubMed Central

    Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei

    2016-01-01

    Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision. PMID:27892454

  20. Development of a Geomagnetic Storm Correction to the International Reference Ionosphere E-Region Electron Densities Using TIMED/SABER Observations

    NASA Technical Reports Server (NTRS)

    Mertens, C. J.; Xu, X.; Fernandez, J. R.; Bilitza, D.; Russell, J. M., III; Mlynczak, M. G.

    2009-01-01

    Auroral infrared emission observed from the TIMED/SABER broadband 4.3 micron channel is used to develop an empirical geomagnetic storm correction to the International Reference Ionosphere (IRI) E-region electron densities. The observation-based proxy used to develop the storm model is the SABER-derived NO+(v) 4.3 micron volume emission rate (VER). A correction factor is defined as the ratio of the storm-time NO+(v) 4.3 micron VER to a quiet-time climatological averaged NO+(v) 4.3 micron VER, which is linearly fit to available geomagnetic activity indices. The initial version of the E-region storm model, called STORM-E, is most applicable within the auroral oval region. The STORM-E predictions of E-region electron densities are compared to incoherent scatter radar electron density measurements during the Halloween 2003 storm events. Future STORM-E updates will extend the model outside the auroral oval.
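
    A minimal sketch of the empirical step described above is given below (Python/numpy): the VER-ratio correction factor is fitted linearly against a geomagnetic index and then applied to quiet-time E-region densities. Using the ap index as the driver and applying the factor as a direct scaling of the IRI densities are assumptions made here for illustration.

        import numpy as np

        def fit_storm_factor(activity_index, ver_ratio):
            # Linear fit of the correction factor (storm-time VER / quiet-time VER)
            # against a geomagnetic activity index.
            slope, intercept = np.polyfit(activity_index, ver_ratio, 1)
            return slope, intercept

        def storm_corrected_density(ne_quiet, activity_now, slope, intercept):
            # Apply the fitted factor to quiet-time E-region electron densities
            # (direct scaling assumed here for illustration).
            factor = max(slope * activity_now + intercept, 0.0)
            return ne_quiet * factor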

  1. Miniature optical planar camera based on a wide-angle metasurface doublet corrected for monochromatic aberrations

    NASA Astrophysics Data System (ADS)

    Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei

    2016-11-01

    Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.

  2. Ground based measurements on reflectance towards validating atmospheric correction algorithms on IRS-P6 AWiFS data

    NASA Astrophysics Data System (ADS)

    Rani Sharma, Anu; Kharol, Shailesh Kumar; Kvs, Badarinath; Roy, P. S.

    In Earth observation, the atmosphere has a non-negligible influence on the visible and infrared radiation that is strong enough to modify the reflected electromagnetic signal and the at-target reflectance. Scattering of solar irradiance by atmospheric molecules and aerosols generates path radiance, which increases the apparent surface reflectance over dark surfaces, while absorption by aerosols and other molecules in the atmosphere causes loss of brightness in the scene as recorded by the satellite sensor. In order to derive precise surface reflectance from satellite image data, it is indispensable to apply an atmospheric correction, which serves to remove the effects of molecular and aerosol scattering. In the present study, we have applied a fast atmospheric correction algorithm to IRS-P6 AWiFS satellite data that can effectively retrieve surface reflectance under different atmospheric and surface conditions. The algorithm is based on MODIS climatology products and simplified use of the Second Simulation of the Satellite Signal in the Solar Spectrum (6S) radiative transfer code, which is used to generate look-up tables (LUTs). The algorithm requires information on aerosol optical depth for correcting the satellite dataset. The proposed method is simple and easy to implement for estimating surface reflectance from the at-sensor recorded signal on a per-pixel basis. The atmospheric correction algorithm has been tested for different IRS-P6 AWiFS false color composites (FCCs) covering the ICRISAT Farm, Patancheru, Hyderabad, India, under varying atmospheric conditions. Ground measurements of surface reflectance representing different land use/land cover, i.e., red soil, chickpea crop, groundnut crop and pigeon pea crop, were conducted to validate the algorithm, and a very good match was found between ground-measured surface reflectance and atmospherically corrected reflectance for all spectral bands. Further, we aggregated all datasets together and compared the retrieved AWiFS reflectance with the aggregated ground measurements, which showed a very good correlation of 0.96 in all four spectral bands (i.e. green, red, NIR and SWIR). In order to quantify the accuracy of the proposed method in the estimation of surface reflectance, the root mean square error (RMSE) associated with the proposed method was evaluated. The analysis of the ground-measured versus retrieved AWiFS reflectance yielded small RMSE values for all four spectral bands. EOS Terra/Aqua MODIS-derived AOD exhibited a very good correlation of 0.92, and the data sets provide an effective means for carrying out atmospheric corrections in an operational way. Keywords: Atmospheric correction, 6S code, MODIS, Spectroradiometer, Sun-Photometer
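
    The LUT-based correction step amounts to the standard Lambertian inversion of the top-of-atmosphere reflectance; a minimal sketch (Python) is given below, with the path reflectance, transmittances and spherical albedo assumed to have been interpolated from 6S look-up tables for the scene's aerosol optical depth and geometry.

        def surface_reflectance(rho_toa, rho_path, t_down, t_up, spherical_albedo):
            # Standard Lambertian atmospheric-correction inversion:
            #   y = (rho_TOA - rho_path) / (T_down * T_up)
            #   rho_surface = y / (1 + S * y)
            y = (rho_toa - rho_path) / (t_down * t_up)
            return y / (1.0 + spherical_albedo * y)

        # Example with placeholder LUT values for one pixel and one band.
        print(surface_reflectance(0.12, 0.03, 0.85, 0.90, 0.10))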

  3. Spaceborne lidar for cloud monitoring

    NASA Astrophysics Data System (ADS)

    Werner, Christian; Krichbaumer, W.; Matvienko, Gennadii G.

    1994-12-01

    Results of laser cloud top measurements taken from space in 1982 (called PANTHER) are presented. Three sequences of land, water, and cloud data are selected. A comparison with airborne lidar data shows similarities. When the single-scattering lidar equation is used for these spaceborne lidar measurements, one can misinterpret the data if one does not correct for multiple scattering.
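
    For reference, the single-scattering lidar equation mentioned above can be written as P(z) = C β(z) exp(-2 τ(z)) / z², with τ(z) the one-way optical depth; the short sketch below (Python/numpy) evaluates it on a range profile with illustrative, constant backscatter and extinction coefficients.

        import numpy as np

        def single_scatter_return(z, beta, alpha, calibration=1.0):
            # Single-scattering lidar equation: P(z) = C * beta(z) * exp(-2 * tau(z)) / z^2.
            # Multiple scattering, which matters for cloud returns seen from space,
            # is deliberately neglected here.
            dz = np.gradient(z)
            tau = np.cumsum(alpha * dz)
            return calibration * beta * np.exp(-2.0 * tau) / z ** 2

        z = np.linspace(500.0, 12000.0, 200)   # range in metres (> 0)
        beta = np.full_like(z, 1.0e-6)         # backscatter coefficient, illustrative
        alpha = np.full_like(z, 1.0e-4)        # extinction coefficient, illustrative
        signal = single_scatter_return(z, beta, alpha)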

  4. Modeling and design of a cone-beam CT head scanner using task-based imaging performance optimization

    NASA Astrophysics Data System (ADS)

    Xu, J.; Sisniega, A.; Zbijewski, W.; Dang, H.; Stayman, J. W.; Wang, X.; Foos, D. H.; Aygun, N.; Koliatsos, V. E.; Siewerdsen, J. H.

    2016-04-01

    Detection of acute intracranial hemorrhage (ICH) is important for diagnosis and treatment of traumatic brain injury, stroke, postoperative bleeding, and other head and neck injuries. This paper details the design and development of a cone-beam CT (CBCT) system developed specifically for the detection of low-contrast ICH in a form suitable for application at the point of care. Recognizing such a low-contrast imaging task to be a major challenge in CBCT, the system design began with a rigorous analysis of task-based detectability including critical aspects of system geometry, hardware configuration, and artifact correction. The imaging performance model described the three-dimensional (3D) noise-equivalent quanta using a cascaded systems model that included the effects of scatter, scatter correction, hardware considerations of complementary metal-oxide semiconductor (CMOS) and flat-panel detectors (FPDs), and digitization bit depth. The performance was analyzed with respect to a low-contrast (40-80 HU), medium-frequency task representing acute ICH detection. The task-based detectability index was computed using a non-prewhitening observer model. The optimization was performed with respect to four major design considerations: (1) system geometry (including source-to-detector distance (SDD) and source-to-axis distance (SAD)); (2) factors related to the x-ray source (including focal spot size, kVp, dose, and tube power); (3) scatter correction and selection of an antiscatter grid; and (4) x-ray detector configuration (including pixel size, additive electronics noise, field of view (FOV), and frame rate, including both CMOS and a-Si:H FPDs). Optimal design choices were also considered with respect to practical constraints and available hardware components. The model was verified in comparison to measurements on a CBCT imaging bench as a function of the numerous design parameters mentioned above. An extended geometry (SAD  =  750 mm, SDD  =  1100 mm) was found to be advantageous in terms of patient dose (20 mGy) and scatter reduction, while a more isocentric configuration (SAD  =  550 mm, SDD  =  1000 mm) was found to give a more compact and mechanically favorable configuration with minor tradeoff in detectability. An x-ray source with a 0.6 mm focal spot size provided the best compromise between spatial resolution requirements and x-ray tube power. Use of a modest anti-scatter grid (8:1 GR) at a 20 mGy dose provided slight improvement (~5-10%) in the detectability index, but the benefit was lost at reduced dose. The potential advantages of CMOS detectors over FPDs were quantified, showing that both detectors provided sufficient spatial resolution for ICH detection, while the former provided a potentially superior low-dose performance, and the latter provided the requisite FOV for volumetric imaging in a centered-detector geometry. Task-based imaging performance modeling provides an important starting point for CBCT system design, especially for the challenging task of ICH detection, which is somewhat beyond the capabilities of existing CBCT platforms. The model identifies important tradeoffs in system geometry and hardware configuration, and it supports the development of a dedicated CBCT system for point-of-care application. A prototype suitable for clinical studies is in development based on this analysis.
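
    The detectability index used above can be sketched in a simplified, one-dimensional radial form (Python/numpy); the full model integrates cascaded noise terms over three-dimensional spatial frequencies, so the snippet below only illustrates the non-prewhitening figure of merit d'^2 = [∫(MTF·W)^2 df]^2 / ∫ NPS·(MTF·W)^2 df under that simplification, with toy transfer functions.

        import numpy as np

        def d_prime_npw(freq, mtf, nps, w_task):
            # Non-prewhitening observer detectability index in a simplified 1-D form:
            # d'^2 = [ sum (MTF*W)^2 df ]^2 / sum NPS * (MTF*W)^2 df.
            df = freq[1] - freq[0]
            signal = (mtf * w_task) ** 2
            numerator = (signal.sum() * df) ** 2
            denominator = (nps * signal).sum() * df
            return np.sqrt(numerator / denominator)

        freq = np.linspace(0.01, 2.0, 200)            # cycles/mm, illustrative
        mtf = np.exp(-freq / 1.0)                     # toy system MTF
        nps = 1.0e-6 * (0.2 + np.exp(-freq / 0.8))    # toy noise-power spectrum
        w_task = np.exp(-freq / 0.3)                  # toy low/medium-frequency task
        print(d_prime_npw(freq, mtf, nps, w_task))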

  5. Subleading Regge limit from a soft anomalous dimension

    NASA Astrophysics Data System (ADS)

    Brüser, Robin; Caron-Huot, Simon; Henn, Johannes M.

    2018-04-01

    Wilson lines capture important features of scattering amplitudes, for example soft effects relevant for infrared divergences, and the Regge limit. Beyond the leading power approximation, corrections to the eikonal picture have to be taken into account. In this paper, we study such corrections in a model of massive scattering amplitudes in N=4 super Yang-Mills, in the planar limit, where the mass is generated through a Higgs mechanism. Using known three-loop analytic expressions for the scattering amplitude, we find that the first power suppressed term has a very simple form, equal to a single power law. We propose that its exponent is governed by the anomalous dimension of a Wilson loop with a scalar inserted at the cusp, and we provide perturbative evidence for this proposal. We also analyze other limits of the amplitude and conjecture an exact formula for a total cross-section at high energies.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeylikovich, I.; Xu, M., E-mail: mxu@fairfield.edu

    The phase of multiply scattered light has recently attracted considerable interest. Coherent backscattering is a striking phenomenon of multiple scattered light in which the coherence of light survives multiple scattering in a random medium and is observable in the direction space as an enhancement of the intensity of backscattered light within a cone around the retroreflection direction. Reciprocity also leads to enhancement of backscattering light in the spatial space. The random medium behaves as a reciprocity mirror which robustly converts a diverging incident beam into a converging backscattering one focusing at a conjugate spot in space. Here we first analyze theoretically this coherent backscattering mirror (CBM) phenomenon and then demonstrate the capability of CBM compensating and correcting both static and dynamic phase distortions occurring along the optical path. CBM may offer novel approaches for high speed dynamic phase corrections in optical systems and find applications in sensing and navigation.

  7. Modified Monte Carlo method for study of electron transport in degenerate electron gas in the presence of electron-electron interactions, application to graphene

    NASA Astrophysics Data System (ADS)

    Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek

    2017-07-01

    Standard computational methods used to incorporate the Pauli exclusion principle into Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron-electron (e-e) scattering in MC simulations. Simulations with significantly reduced computational cost recover the correct values of the electron distribution function. The proposed algorithm is applied to study the transport properties of degenerate electrons in graphene with e-e interactions. This required adapting the treatment of e-e scattering to the case of a linear band dispersion relation, and this part of the simulation algorithm is therefore described in detail.
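
    A minimal sketch of the kind of Pauli-blocking rejection step on which such modified MC schemes rely is given below; the occupation function and random-number generator are illustrative placeholders, not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(seed=1)

        def pauli_accept(f_final):
            # A tentative post-scattering state with occupation f_final (0..1)
            # is accepted with probability 1 - f_final; otherwise the event is
            # treated as self-scattering and the electron state is left unchanged.
            return rng.random() < (1.0 - f_final)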

  8. Dynamic coherent backscattering mirror

    NASA Astrophysics Data System (ADS)

    Zeylikovich, I.; Xu, M.

    2016-02-01

    The phase of multiply scattered light has recently attracted considerable interest. Coherent backscattering is a striking phenomenon of multiple scattered light in which the coherence of light survives multiple scattering in a random medium and is observable in the direction space as an enhancement of the intensity of backscattered light within a cone around the retroreflection direction. Reciprocity also leads to enhancement of backscattering light in the spatial space. The random medium behaves as a reciprocity mirror which robustly converts a diverging incident beam into a converging backscattering one focusing at a conjugate spot in space. Here we first analyze theoretically this coherent backscattering mirror (CBM) phenomenon and then demonstrate the capability of CBM compensating and correcting both static and dynamic phase distortions occurring along the optical path. CBM may offer novel approaches for high speed dynamic phase corrections in optical systems and find applications in sensing and navigation.

  9. Computer image processing: Geologic applications

    NASA Technical Reports Server (NTRS)

    Abrams, M. J.

    1978-01-01

    Computer image processing of digital data was performed to support several geological studies. The specific goals were to: (1) relate the mineral content to the spectral reflectance of certain geologic materials, (2) determine the influence of environmental factors, such as atmosphere and vegetation, and (3) improve image processing techniques. For detection of spectral differences related to mineralogy, the technique of band ratioing was found to be the most useful. The influence of atmospheric scattering and methods to correct for the scattering were also studied. Two techniques were used to correct for atmospheric effects: (1) dark object subtraction and (2) normalization using ground spectral measurements. Of the two, the first technique proved to be the more successful for removing the effects of atmospheric scattering. A digital mosaic was produced from two side-lapping LANDSAT frames. The advantages were that the same enhancement algorithm can be applied to both frames, and there is no seam where the two images are joined.
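
    A minimal sketch of the dark-object subtraction step mentioned above is shown below; the percentile used to pick the "dark object" value is an assumption made for illustration only.

        import numpy as np

        def dark_object_subtraction(band):
            # Estimate the additive path-radiance (haze) term from the darkest
            # pixels of the band and subtract it, clipping negatives to zero.
            dark_value = np.percentile(band, 0.1)
            return np.clip(band - dark_value, 0.0, None)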

  10. The impact of vibrational Raman scattering of air on DOAS measurements of atmospheric trace gases

    NASA Astrophysics Data System (ADS)

    Lampel, J.; Frieß, U.; Platt, U.

    2015-09-01

    In remote sensing applications, such as differential optical absorption spectroscopy (DOAS), atmospheric scattering processes need to be considered. After inelastic scattering on N2 and O2 molecules, the scattered photons appear as additional intensity at a different wavelength, effectively leading to "filling-in" of both solar Fraunhofer lines and absorptions of atmospheric constituents, if the inelastic scattering happens after the absorption. Measured spectra in passive DOAS applications are typically corrected for rotational Raman scattering (RRS), also called the Ring effect, which represents the main contribution to inelastic scattering. Inelastic scattering can also occur in liquid water, and its influence on DOAS measurements has been observed over clear ocean water. In contrast to that, vibrational Raman scattering (VRS) of N2 and O2 has often been thought to be negligible, but it also contributes. Consequences of VRS are red-shifted Fraunhofer structures in scattered light spectra and filling-in of Fraunhofer lines, in addition to RRS. At 393 nm, the spectral shift is 25 and 40 nm for VRS of O2 and N2, respectively. We describe how to calculate VRS correction spectra according to the Ring spectrum. We use the VRS correction spectra in the spectral range of 420-440 nm to determine the relative magnitude of the cross-sections of VRS of O2 and N2 and RRS of air. The effect of VRS is shown for the first time in spectral evaluations of Multi-Axis DOAS data from the SOPRAN M91 campaign and the MAD-CAT MAX-DOAS intercomparison campaign. In agreement with calculated scattering cross-sections, the measurements show that the observed VRS(N2) cross-section at 393 nm amounts to 2.3 ± 0.4% of the RRS cross-section at 433 nm under tropospheric conditions. The contribution of VRS(O2) is also found to be in agreement with calculated scattering cross-sections. It is concluded that this phenomenon has to be included in the spectral evaluation of weak absorbers as it reduces the measurement error significantly and can cause apparent differential optical depth of up to 3 × 10⁻⁴. Its influence on the spectral retrieval of IO, glyoxal, water vapour and NO2 in the blue wavelength range is evaluated for M91. For measurements with a large Ring signal a significant and systematic bias of NO2 dSCDs (differential slant column densities) up to (-3.8 ± 0.4) × 10¹⁴ molec cm⁻² is observed if this effect is not considered. The effect is typically negligible for DOAS fits with an RMS (root mean square) larger than 4 × 10⁻⁴.

  11. Noninvasive OCT imaging of the retinal morphology and microvasculature based on the combination of the phase and amplitude method

    NASA Astrophysics Data System (ADS)

    Qin, Lin; Fan, Shanhui; Zhou, Chuanqing

    2017-04-01

    To implement optical coherence tomography (OCT) angiography on a low-scanning-speed OCT system, we developed a joint phase and amplitude method to generate 3-D angiograms by analysing the frequency distribution of signals from non-moving and moving scatterers and separating the signals from tissue and blood flow with a dynamically set high-pass filter. This approach first compensates for the sample motion between adjacent A-lines. Then, according to the corrected phase information, we used a histogram method to determine the bulk non-moving tissue phases dynamically, which set the cut-off frequency of a high-pass filter, and separated the moving and non-moving scatterers using this high-pass filter. The reconstructed image can visualize the flow of moving scatterers and enables volumetric flow mapping combined with the corrected phase information. Furthermore, retinal and choroidal blood vessels can be simultaneously obtained by separating the B-scan into retinal and choroidal parts using a simple segmentation algorithm along the RPE. After compensation of the axial displacements between neighbouring images, the three-dimensional vasculature of ocular vessels has been visualized. Experiments were performed to demonstrate the effectiveness of the proposed method for 3-D vasculature imaging of the human retina and choroid. The results revealed depth-resolved vasculatures in the retina and choroid, suggesting that our approach can be used for noninvasive and three-dimensional angiography with a low-speed clinical OCT, and it has great potential for clinical application.

  12. Minimizing systematic errors from atmospheric multiple scattering and satellite viewing geometry in coastal zone color scanner level IIA imagery

    NASA Technical Reports Server (NTRS)

    Martin, D. L.; Perry, M. J.

    1994-01-01

    Water-leaving radiances and phytoplankton pigment concentrations are calculated from coastal zone color scanner (CZCS) radiance measurements by removing atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. The single greatest source of error in CZCS atmospheric correction algorithms is the assumption that these Rayleigh and aerosol radiances are separable. Multiple-scattering interactions between Rayleigh and aerosol components cause systematic errors in calculated aerosol radiances, and the magnitude of these errors is dependent on aerosol type and optical depth and on satellite viewing geometry. A technique was developed which extends the results of previous radiative transfer modeling by Gordon and Castano to predict the magnitude of these systematic errors for simulated CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere. The simulated image mathematically duplicates the exact satellite, Sun, and pixel locations of an actual CZCS image. Errors in the aerosol radiance at 443 nm are calculated for a range of aerosol optical depths. When pixels in the simulated image exceed an error threshold, the corresponding pixels in the actual CZCS image are flagged and excluded from further analysis or from use in image compositing or compilation of pigment concentration databases. Studies based on time series analyses or compositing of CZCS imagery which do not address Rayleigh-aerosol multiple scattering should be interpreted cautiously, since the fundamental assumption used in their atmospheric correction algorithm is flawed.

  13. Molar mass characterization of sodium carboxymethyl cellulose by SEC-MALLS.

    PubMed

    Shakun, Maryia; Maier, Helena; Heinze, Thomas; Kilz, Peter; Radke, Wolfgang

    2013-06-05

    Two series of sodium carboxymethyl celluloses (NaCMCs) derived from microcrystalline cellulose (Avicel samples) and cotton linters (BWL samples) with average degrees of substitution (DS) ranging from DS=0.45 to DS=1.55 were characterized by size exclusion chromatography with multi-angle laser light scattering detection (SEC-MALLS) in 100 mmol/L aqueous ammonium acetate (NH4OAc) as a vaporizable eluent system. The use of vaporizable NH4OAc allows future application of the eluent system in two-dimensional separations employing evaporative light scattering detection (ELSD). The losses of samples during filtration and during the chromatographic experiment were determined. The scaling exponent a_s of the conformation relation (not reproduced here) was approximately 0.61, showing that NaCMCs exhibit an expanded coil conformation in solution. No systematic dependence of a_s on DS was observed. The dependence of molar mass on SEC elution volume for samples of different DS can be described well by a common calibration curve, which is advantageous because it allows the determination of molar masses of unknown samples with the same calibration curve, irrespective of the DS of the NaCMC sample. Since no commercial NaCMC standards are available, correction factors were determined that allow a pullulan-based calibration curve to be converted into a NaCMC calibration using the broad-calibration approach. The weight-average molar masses derived using the calibration curve established in this way agree closely with those determined by light scattering, confirming the accuracy of the determined correction factors.

  14. Correction factors for the NMi free-air ionization chamber for medium-energy x-rays calculated with the Monte Carlo method.

    PubMed

    Grimbergen, T W; van Dijk, E; de Vries, W

    1998-11-01

    A new method is described for the determination of x-ray quality dependent correction factors for free-air ionization chambers. The method is based on weighting correction factors for mono-energetic photons, which are calculated using the Monte Carlo method, with measured air kerma spectra. With this method, correction factors for electron loss, scatter inside the chamber and transmission through the diaphragm and front wall have been calculated for the NMi free-air chamber for medium-energy x-rays for a wide range of x-ray qualities in use at NMi. The newly obtained correction factors were compared with the values in use at present, which are based on interpolation of experimental data for a specific set of x-ray qualities. For x-ray qualities which are similar to this specific set, the agreement between the correction factors determined with the new method and those based on the experimental data is better than 0.1%, except for heavily filtered x-rays generated at 250 kV. For x-ray qualities dissimilar to the specific set, differences up to 0.4% exist, which can be explained by uncertainties in the interpolation procedure of the experimental data. Since the new method does not depend on experimental data for a specific set of x-ray qualities, the new method allows for a more flexible use of the free-air chamber as a primary standard for air kerma for any x-ray quality in the medium-energy x-ray range.
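
    The weighting of mono-energetic correction factors with a measured air-kerma spectrum can be sketched as a simple discrete sum, as below; the variable names are illustrative and not those of the NMi analysis.

        import numpy as np

        def spectrum_weighted_correction(kerma_spectrum, k_mono):
            # kerma_spectrum: measured air-kerma spectrum per energy bin
            # k_mono: Monte Carlo correction factor for each energy bin
            w = np.asarray(kerma_spectrum, dtype=float)
            w /= w.sum()                      # normalize spectral weights
            return float(np.sum(w * np.asarray(k_mono)))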

  15. Shading correction for cone-beam CT in radiotherapy: validation of dose calculation accuracy using clinical images

    NASA Astrophysics Data System (ADS)

    Marchant, T. E.; Joshi, K. D.; Moore, C. J.

    2017-03-01

    Cone-beam CT (CBCT) images are routinely acquired to verify patient position in radiotherapy (RT), but are typically not calibrated in Hounsfield Units (HU) and feature non-uniformity due to X-ray scatter and detector persistence effects. This prevents direct use of CBCT for re-calculation of RT delivered dose. We previously developed a prior-image based correction method to restore HU values and improve uniformity of CBCT images. Here we validate the accuracy with which corrected CBCT can be used for dosimetric assessment of RT delivery, using CBCT images and RT plans for 45 patients including pelvis, lung and head sites. Dose distributions were calculated based on each patient's original RT plan and using CBCT image values for tissue heterogeneity correction. Clinically relevant dose metrics were calculated (e.g. median and minimum target dose, maximum organ at risk dose). Accuracy of CBCT based dose metrics was determined using an "override ratio" method where the ratio of the dose metric to that calculated on a bulk-density assigned version of the image is assumed to be constant for each patient, allowing comparison to "gold standard" CT. For pelvis and head images the proportion of dose errors >2% was reduced from 40% to 1.3% after applying shading correction. For lung images the proportion of dose errors >3% was reduced from 66% to 2.2%. Application of shading correction to CBCT images greatly improves their utility for dosimetric assessment of RT delivery, allowing high confidence that CBCT dose calculations are accurate within 2-3%.
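
    A rough sketch of the "override ratio" consistency check described above, under the stated assumption that the ratio of a dose metric to its bulk-density-overridden counterpart is patient-specific but image-independent; the function and variable names are hypothetical.

        def override_ratio(metric_hetero, metric_bulk):
            # Ratio of a dose metric computed with heterogeneity correction to the
            # same metric computed on a bulk-density-assigned image.
            return metric_hetero / metric_bulk

        def cbct_vs_ct_agreement(cbct_hetero, cbct_bulk, ct_hetero, ct_bulk):
            # If the override ratio is image-independent, this value should be ~1,
            # allowing CBCT dose metrics to be benchmarked against the planning CT.
            return override_ratio(cbct_hetero, cbct_bulk) / override_ratio(ct_hetero, ct_bulk)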

  16. Atmospheric correction for remote sensing image based on multi-spectral information

    NASA Astrophysics Data System (ADS)

    Wang, Yu; He, Hongyan; Tan, Wei; Qi, Wenwen

    2018-03-01

    The light collected by spaceborne remote sensors must transit the Earth's atmosphere. All satellite images are affected at some level by lightwave scattering and absorption from aerosols, water vapor and particulates in the atmosphere. For generating high-quality scientific data, atmospheric correction is required to remove atmospheric effects and to convert digital number (DN) values to surface reflectance (SR). Every optical satellite in orbit observes the earth through the same atmosphere, but each satellite image is impacted differently because atmospheric conditions are constantly changing. The detailed physics-based radiative transfer model 6SV requires substantial key ancillary information about the atmospheric conditions at the acquisition time. This paper investigates the simultaneous retrieval of atmospheric radiation parameters from the multi-spectral information itself, in order to improve the estimates of surface reflectance through physics-based atmospheric correction. Ancillary information on the aerosol optical depth (AOD) and total water vapor (TWV), derived from the multi-spectral information on the basis of specific spectral properties, was used for the 6SV model. The experiments were carried out on images from Sentinel-2, which carries a Multispectral Instrument (MSI) recording in 13 spectral bands, covering a wide range of wavelengths from 440 up to 2200 nm. The results suggest that per-pixel atmospheric correction through the 6SV model, integrating AOD and TWV derived from multispectral information, is better suited for accurate analysis of satellite images and quantitative remote sensing applications.
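
    For context, radiance-to-surface-reflectance inversion with 6S/6SV-type correction coefficients is commonly written with three per-band coefficients (often called xa, xb, xc) produced by a radiative-transfer run; the sketch below assumes that convention and is not code from the paper.

        def surface_reflectance_6s(radiance, xa, xb, xc):
            # xa, xb, xc: per-band atmospheric correction coefficients from a
            # 6S/6SV run for the given geometry, AOD and water vapor.
            y = xa * radiance - xb
            return y / (1.0 + xc * y)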

  17. Assessment of the Subgrid-Scale Models at Low and High Reynolds Numbers

    NASA Technical Reports Server (NTRS)

    Horiuti, K.

    1996-01-01

    Accurate SGS models must be capable of correctly representing the energy transfer between GS and SGS. Recent direct assessment of the energy transfer carried out using direct numerical simulation (DNS) data for wall-bounded flows revealed that the energy exchange is not unidirectional. Although GS kinetic energy is transferred to the SGS (forward scatter, F-scatter) on average, SGS energy is also transferred to the GS. The latter energy exchange (backward scatter, B-scatter) is very significant, i.e., the local energy exchange can be backward nearly as often as forward and the local rate of B-scatter is considerably higher than the net rate of energy dissipation.
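
    For reference, the local GS-to-SGS energy transfer rate that distinguishes forward from backward scatter is commonly written as below (a standard LES definition added for illustration, not a formula quoted from this abstract):

        \varepsilon_{\mathrm{SGS}} = -\tau_{ij}\,\bar{S}_{ij},
        \qquad \varepsilon_{\mathrm{SGS}} > 0 \ \text{(forward scatter)},
        \qquad \varepsilon_{\mathrm{SGS}} < 0 \ \text{(backward scatter)},

    where \tau_{ij} is the SGS stress tensor and \bar{S}_{ij} is the resolved (GS) strain-rate tensor.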

  18. Shack-Hartmann wavefront-sensor-based adaptive optics system for multiphoton microscopy

    PubMed Central

    Cha, Jae Won; Ballesta, Jerome; So, Peter T.C.

    2010-01-01

    The imaging depth of two-photon excitation fluorescence microscopy is partly limited by the inhomogeneity of the refractive index in biological specimens. This inhomogeneity results in a distortion of the wavefront of the excitation light. This wavefront distortion results in image resolution degradation and lower signal level. Using an adaptive optics system consisting of a Shack-Hartmann wavefront sensor and a deformable mirror, wavefront distortion can be measured and corrected. With adaptive optics compensation, we demonstrate that the resolution and signal level can be better preserved at greater imaging depth in a variety of ex-vivo tissue specimens including mouse tongue muscle, heart muscle, and brain. However, for these highly scattering tissues, we find signal degradation due to scattering to be a more dominant factor than aberration. PMID:20799824

  19. Shack-Hartmann wavefront-sensor-based adaptive optics system for multiphoton microscopy.

    PubMed

    Cha, Jae Won; Ballesta, Jerome; So, Peter T C

    2010-01-01

    The imaging depth of two-photon excitation fluorescence microscopy is partly limited by the inhomogeneity of the refractive index in biological specimens. This inhomogeneity results in a distortion of the wavefront of the excitation light. This wavefront distortion results in image resolution degradation and lower signal level. Using an adaptive optics system consisting of a Shack-Hartmann wavefront sensor and a deformable mirror, wavefront distortion can be measured and corrected. With adaptive optics compensation, we demonstrate that the resolution and signal level can be better preserved at greater imaging depth in a variety of ex-vivo tissue specimens including mouse tongue muscle, heart muscle, and brain. However, for these highly scattering tissues, we find signal degradation due to scattering to be a more dominant factor than aberration.

  20. Empirical entropic contributions in computational docking: evaluation in APS reductase complexes.

    PubMed

    Chang, Max W; Belew, Richard K; Carroll, Kate S; Olson, Arthur J; Goodsell, David S

    2008-08-01

    The results from reiterated docking experiments may be used to evaluate an empirical vibrational entropy of binding in ligand-protein complexes. We have tested several methods for evaluating the vibrational contribution to binding of 22 nucleotide analogues to the enzyme APS reductase. These include two cluster size methods that measure the probability of finding a particular conformation, a method that estimates the extent of the local energetic well by looking at the scatter of conformations within clustered results, and an RMSD-based method that uses the overall scatter and clustering of all conformations. We have also directly characterized the local energy landscape by randomly sampling around docked conformations. The simple cluster size method shows the best performance, improving the identification of correct conformations in multiple docking experiments.

  1. Metadata-assisted nonuniform atmospheric scattering model of image haze removal for medium-altitude unmanned aerial vehicle

    NASA Astrophysics Data System (ADS)

    Liu, Chunlei; Ding, Wenrui; Li, Hongguang; Li, Jiankun

    2017-09-01

    Haze removal is a nontrivial task for medium-altitude unmanned aerial vehicle (UAV) image processing because of the effects of light absorption and scattering. The challenges are attributed mainly to image distortion and detail blur during the long-distance and large-scale imaging process. In our work, a metadata-assisted nonuniform atmospheric scattering model is proposed to deal with the aforementioned problems of medium-altitude UAVs. First, to better describe the real atmosphere, we propose a nonuniform atmospheric scattering model according to the aerosol distribution, which directly benefits the image distortion correction. Second, considering the characteristics of long-distance imaging, we calculate the depth map, which is an essential clue for the model, on the basis of UAV metadata information. An accurate depth map reduces the color distortion compared with the depth of field obtained by other existing methods based on priors or assumptions. Furthermore, we use an adaptive median filter to address the problem of fuzzy details caused by the global airlight value. Experimental results on both real flight and synthetic images demonstrate that our proposed method outperforms four other existing haze removal methods.
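
    A minimal sketch of recovering scene radiance from the standard atmospheric scattering model with a metadata-derived depth map is given below; it assumes a spatially uniform scattering coefficient, whereas the paper's model makes the atmosphere nonuniform, and all names are illustrative.

        import numpy as np

        def dehaze_with_depth(image, depth, airlight, beta=0.01):
            # Scattering model: I = J*t + A*(1 - t), with t = exp(-beta * depth).
            t = np.exp(-beta * depth)
            t = np.clip(t, 0.1, 1.0)[..., np.newaxis]   # guard against division blow-up
            return (image - airlight) / t + airlight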

  2. Iterative atmospheric correction scheme and the polarization color of alpine snow

    NASA Astrophysics Data System (ADS)

    Ottaviani, Matteo; Cairns, Brian; Ferrare, Rich; Rogers, Raymond

    2012-07-01

    Characterization of the Earth's surface is crucial to remote sensing, both to map geomorphological features and because subtracting this signal is essential during retrievals of the atmospheric constituents located between the surface and the sensor. Current operational algorithms model the surface total reflectance through a weighted linear combination of a few geometry-dependent kernels, each devised to describe a particular scattering mechanism. The information content of these measurements is overwhelmed by that of instruments with polarization capabilities: proposed models in this case are based on the Fresnel reflectance of an isotropic distribution of facets. Because of its remarkable lack of spectral contrast, the polarized reflectance of land surfaces in the shortwave infrared spectral region, where atmospheric scattering is minimal, can be used to model the surface also at shorter wavelengths, where aerosol retrievals are attempted based on well-established scattering theories. In radiative transfer simulations, straightforward separation of the surface and atmospheric contributions is not possible without approximations because of the coupling introduced by multiple reflections. Within a general inversion framework, the problem can be eliminated by linearizing the radiative transfer calculation, and making the Jacobian (i.e., the derivative expressing the sensitivity of the reflectance with respect to model parameters) available at output. We present a general methodology based on a Gauss-Newton iterative search, which automates this procedure and eliminates de facto the need for an ad hoc atmospheric correction. In this case study we analyze the color variations in the polarized reflectance measured by the NASA Goddard Institute of Space Studies Research Scanning Polarimeter during a survey of late-season snowfields in the High Sierra. This thus far unique dataset presents challenges linked to the rugged topography associated with the alpine environment and a likely high water content due to melting. The analysis benefits from ancillary information provided by the NASA Langley High Spectral Resolution Lidar deployed on the same aircraft. The results obtained from the iterative scheme are contrasted against the surface polarized reflectance obtained ignoring multiple reflections, via the simplistic subtraction of the atmospheric scattering contribution. Finally, the retrieved reflectance is modeled after the scattering properties of a dense collection of ice crystals at the surface. Confirming that the polarized reflectance of snow is spectrally flat would allow the techniques already in use for polarimetric retrievals of aerosol properties over land to be extended to the large portion of snow-covered pixels plaguing orbital and suborbital observations.
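
    The Gauss-Newton iteration underlying the scheme described above can be sketched generically as follows; the residual and Jacobian callables stand in for the forward radiative-transfer model and its linearization, and damping and convergence tests are omitted.

        import numpy as np

        def gauss_newton(residual, jacobian, x0, n_iter=20):
            # x <- x - (J^T J)^{-1} J^T r, using the Jacobian returned by the
            # linearized radiative-transfer calculation.
            x = np.asarray(x0, dtype=float)
            for _ in range(n_iter):
                r = residual(x)
                J = jacobian(x)
                x = x - np.linalg.solve(J.T @ J, J.T @ r)
            return x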

  3. Iterative Atmospheric Correction Scheme and the Polarization Color of Alpine Snow

    NASA Technical Reports Server (NTRS)

    Ottaviani, Matteo; Cairns, Brian; Ferrare, Rich; Rogers, Raymond

    2012-01-01

    Characterization of the Earth's surface is crucial to remote sensing, both to map geomorphological features and because subtracting this signal is essential during retrievals of the atmospheric constituents located between the surface and the sensor. Current operational algorithms model the surface total reflectance through a weighted linear combination of a few geometry-dependent kernels, each devised to describe a particular scattering mechanism. The information content of these measurements is overwhelmed by that of instruments with polarization capabilities: proposed models in this case are based on the Fresnel reflectance of an isotropic distribution of facets. Because of its remarkable lack of spectral contrast, the polarized reflectance of land surfaces in the shortwave infrared spectral region, where atmospheric scattering is minimal, can be used to model the surface also at shorter wavelengths, where aerosol retrievals are attempted based on well-established scattering theories. In radiative transfer simulations, straightforward separation of the surface and atmospheric contributions is not possible without approximations because of the coupling introduced by multiple reflections. Within a general inversion framework, the problem can be eliminated by linearizing the radiative transfer calculation, and making the Jacobian (i.e., the derivative expressing the sensitivity of the reflectance with respect to model parameters) available at output. We present a general methodology based on a Gauss-Newton iterative search, which automates this procedure and eliminates de facto the need for an ad hoc atmospheric correction. In this case study we analyze the color variations in the polarized reflectance measured by the NASA Goddard Institute of Space Studies Research Scanning Polarimeter during a survey of late-season snowfields in the High Sierra. This thus far unique dataset presents challenges linked to the rugged topography associated with the alpine environment and a likely high water content due to melting. The analysis benefits from ancillary information provided by the NASA Langley High Spectral Resolution Lidar deployed on the same aircraft. The results obtained from the iterative scheme are contrasted against the surface polarized reflectance obtained ignoring multiple reflections, via the simplistic subtraction of the atmospheric scattering contribution. Finally, the retrieved reflectance is modeled after the scattering properties of a dense collection of ice crystals at the surface. Confirming that the polarized reflectance of snow is spectrally flat would allow the techniques already in use for polarimetric retrievals of aerosol properties over land to be extended to the large portion of snow-covered pixels plaguing orbital and suborbital observations.

  4. SU-E-I-20: Dead Time Count Loss Compensation in SPECT/CT: Projection Versus Global Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siman, W; Kappadath, S

    Purpose: To compare projection-based versus global correction methods that compensate for deadtime count loss in SPECT/CT images. Methods: SPECT/CT images of an IEC phantom (2.3 GBq 99mTc) with ∼10% deadtime loss containing the 37 mm (uptake 3), 28 and 22 mm (uptake 6) spheres were acquired using a 2-detector SPECT/CT system with 64 projections/detector and 15 s/projection. The deadtime T_i and the true count rate N_i at each projection i were calculated using the monitor-source method. Deadtime-corrected SPECT images were reconstructed twice: (1) from projections that were individually corrected for deadtime losses; and (2) from the original projections with losses, followed by correcting the reconstructed SPECT images with a scaling factor equal to the inverse of the average fractional loss for 5 projections/detector. In both cases, the SPECT images were reconstructed using OSEM with attenuation and scatter corrections. The two SPECT datasets were assessed by comparing line profiles in the xy-plane and along the z-axis, evaluating the count recoveries, and comparing ROI statistics. Higher deadtime losses (up to 50%) were also simulated on the individually corrected projections by multiplying each projection i by exp(-a*N_i*T_i), where a is a scalar. Additionally, deadtime corrections in phantoms with different geometries and deadtime losses were also explored. The same two correction methods were carried out for all these data sets. Results: Averaging the deadtime losses over 5 projections/detector suffices to recover >99% of the lost counts in most clinical cases. The line profiles (xy-plane and z-axis) and the statistics in the ROIs drawn in the SPECT images corrected using both methods agreed within the statistical noise. The count-loss recoveries of the two methods also agree to better than 99%. Conclusion: The projection-based and the global correction yield visually indistinguishable SPECT images. The global correction based on sparse sampling of projection losses allows for accurate SPECT deadtime loss correction while keeping the study duration reasonable.
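
    The global correction described above can be sketched as a single scaling of the reconstructed image by the inverse mean fractional loss sampled over a few projections; the following is a schematic illustration, not the authors' code.

        import numpy as np

        def global_deadtime_scale(measured, true, n_sample=5):
            # measured, true: per-projection measured and deadtime-free count rates
            idx = np.linspace(0, len(measured) - 1, n_sample).astype(int)
            mean_fraction = np.mean(np.asarray(measured)[idx] / np.asarray(true)[idx])
            return 1.0 / mean_fraction      # multiply the reconstructed SPECT image by this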

  5. Effect of centerbody scattering on propeller noise

    NASA Technical Reports Server (NTRS)

    Glegg, Stewart A. L.

    1991-01-01

    This paper describes how the effect of acoustic scattering from the hub or centerbody of a propeller will affect the far-field noise levels. A simple correction to Gutin's formula for steady loading noise is given. This is a maximum for the lower harmonics but has a negligible effect on the higher frequency components that are important subjectively. The case of a blade vortex interaction is also considered, and centerbody scattering is shown to have a significant effect on the acoustic far field.

  6. Relative importance of multiple scattering by air molecules and aerosols in forming the atmospheric path radiance in the visible and near-infrared parts of the spectrum.

    PubMed

    Antoine, D; Morel, A

    1998-04-20

    Single and multiple scattering by molecules or by atmospheric aerosols only (homogeneous scattering), and heterogeneous scattering by aerosols and molecules, are recorded in Monte Carlo simulations. It is shown that heterogeneous scattering (1) always contributes significantly to the path reflectance (rho(path)), (2) is realized at the expense of homogeneous scattering, (3) decreases when aerosols are absorbing, and (4) introduces deviations in the spectral dependencies of reflectances compared with the Rayleigh exponent and the aerosol angstrom exponent. The ratio of rho(path) to the Rayleigh reflectance for an aerosol-free atmosphere is linearly related to the aerosol optical thickness. This result provides a basis for a new scheme for atmospheric correction of remotely sensed ocean color observations.

  7. Trans-dimensional joint inversion of seabed scattering and reflection data.

    PubMed

    Steininger, Gavin; Dettmer, Jan; Dosso, Stan E; Holland, Charles W

    2013-03-01

    This paper examines joint inversion of acoustic scattering and reflection data to resolve seabed interface roughness parameters (spectral strength, exponent, and cutoff) and geoacoustic profiles. Trans-dimensional (trans-D) Bayesian sampling is applied with both the number of sediment layers and the order (zeroth or first) of auto-regressive parameters in the error model treated as unknowns. A prior distribution that allows fluid sediment layers over an elastic basement in a trans-D inversion is derived and implemented. Three cases are considered: Scattering-only inversion, joint scattering and reflection inversion, and joint inversion with the trans-D auto-regressive error model. Including reflection data improves the resolution of scattering and geoacoustic parameters. The trans-D auto-regressive model further improves scattering resolution and correctly differentiates between strongly and weakly correlated residual errors.

  8. Proton and neutron electromagnetic form factors and uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Zhihong; Arrington, John; Hill, Richard J.

    We determine the nucleon electromagnetic form factors and their uncertainties from world electron scattering data. The analysis incorporates two-photon exchange corrections, constraints on the low-Q² and high-Q² behavior, and additional uncertainties to account for tensions between different data sets and uncertainties in radiative corrections.

  9. Proton and neutron electromagnetic form factors and uncertainties

    DOE PAGES

    Ye, Zhihong; Arrington, John; Hill, Richard J.; ...

    2017-12-06

    We determine the nucleon electromagnetic form factors and their uncertainties from world electron scattering data. The analysis incorporates two-photon exchange corrections, constraints on the low-Q² and high-Q² behavior, and additional uncertainties to account for tensions between different data sets and uncertainties in radiative corrections.

  10. Calibration of AIS Data Using Ground-based Spectral Reflectance Measurements

    NASA Technical Reports Server (NTRS)

    Conel, J. E.

    1985-01-01

    Present methods of correcting airborne imaging spectrometer (AIS) data for instrumental and atmospheric effects include the flat- or curved-field correction and a deviation-from-the-average adjustment performed on a line-by-line basis throughout the image. Both methods eliminate the atmospheric absorptions, but remove the possibility of studying the atmosphere for its own sake, or of using the atmospheric information present as a possible basis for theoretical modeling. The method discussed here relies on use of ground-based measurements of the surface spectral reflectance in comparison with scanner data to fix in a least-squares sense parameters in a simplified model of the atmosphere on a wavelength-by-wavelength basis. The model parameters (for optically thin conditions) are interpretable in terms of optical depth and scattering phase function, and thus, in principle, provide an approximate description of the atmosphere as a homogeneous body intervening between the sensor and the ground.

  11. Application and development of the Schwinger multichannel scattering theory and the partial differential equation theory of electron-molecule scattering

    NASA Technical Reports Server (NTRS)

    Weatherford, Charles A.

    1993-01-01

    One version of the multichannel theory for electron-target scattering based on the Schwinger variational principle, the SMC method, requires the introduction of a projection parameter. The role of the projection parameter a is investigated and it is shown that the principal-value operator in the SMC equation is Hermitian regardless of the value of a as long as it is real and nonzero. In a basis that is properly orthonormalizable, the matrix representation of this operator is also Hermitian. The use of such a basis is consistent with the Schwinger variational principle because the Lippmann-Schwinger equation automatically builds in the correct boundary conditions. Otherwise, an auxiliary condition needs to be introduced, and Takatsuka and McKoy's original value of a is one of the three possible ways to achieve Hermiticity. In all cases but one, a can be uncoupled from the Hermiticity condition and becomes a free parameter. An equation for a based on the variational stability of the scattering amplitude is derived; its solution has the interesting property that the scattering amplitude from a converged SMC calculation is independent of the choice of a even though the SMC operator itself is a-dependent. This property provides a sensitive test of the convergence of the calculation. For a static-exchange calculation, the convergence requirement depends only on the completeness of the one-electron basis, but for a general multichannel case, the a-invariance of the scattering amplitude requires both the one-electron basis and the (N+1)-electron basis to be complete. The role of a in the SMC equation and the convergence property are illustrated using two examples: e-CO elastic scattering in the static-exchange approximation, and a two-state treatment of the e-H2 X ¹Σg⁺ → b ³Σu⁺ excitation.

  12. Numerical time-domain electromagnetics based on finite-difference and convolution

    NASA Astrophysics Data System (ADS)

    Lin, Yuanqu

    Time-domain methods possess a number of advantages over their frequency-domain counterparts for the solution of wideband, nonlinear, and time-varying electromagnetic scattering and radiation phenomena. Time domain integral equation (TDIE)-based methods, which incorporate the beneficial properties of the integral equation method, are thus well suited for solving broadband scattering problems for homogeneous scatterers. Widespread adoption of TDIE solvers has been retarded relative to other techniques by their inefficiency, inaccuracy and instability. Moreover, two-dimensional (2D) problems are especially problematic, because 2D Green's functions have infinite temporal support, exacerbating these difficulties. This thesis proposes a finite difference delay modeling (FDDM) scheme for the solution of the integral equations of 2D transient electromagnetic scattering problems. The method discretizes the integral equations temporally using first- and second-order finite differences to map Laplace-domain equations into the Z domain before transforming to the discrete time domain. The resulting procedure is unconditionally stable because of the nature of the Laplace- to Z-domain mapping. The first FDDM method developed in this thesis uses second-order Lagrange basis functions with Galerkin's method for spatial discretization. The second application of the FDDM method discretizes the space using a locally-corrected Nystrom method, which accelerates the precomputation phase and achieves high order accuracy. The Fast Fourier Transform (FFT) is applied to accelerate the marching-on-time process in both methods. While FDDM methods demonstrate impressive accuracy and stability in solving wideband scattering problems for homogeneous scatterers, they still have limitations in analyzing interactions between several inhomogeneous scatterers. Therefore, this thesis devises a multi-region finite-difference time-domain (MR-FDTD) scheme based on domain-optimal Green's functions for solving sparsely-populated problems. The scheme uses a discrete Green's function (DGF) on the FDTD lattice to truncate the local subregions, and thus reduces reflection error on the local boundary. A continuous Green's function (CGF) is implemented to pass the influence of external fields into each FDTD region, which mitigates the numerical dispersion and anisotropy of standard FDTD. Numerical results will illustrate the accuracy and stability of the proposed techniques.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirillov, A. A.; Savelova, E. P., E-mail: ka98@mail.ru

    The problem of free-particle scattering on virtual wormholes is considered. It is shown that, for all types of relativistic fields, this scattering leads to the appearance of additional very heavy particles, which play the role of auxiliary fields in the invariant scheme of Pauli–Villars regularization. A nonlinear correction that describes the back reaction of particles to the vacuum distribution of virtual wormholes is also obtained.

  14. Calibration correction of an active scattering spectrometer probe to account for refractive index of stratospheric aerosols

    NASA Technical Reports Server (NTRS)

    Pueschel, R. F.; Overbeck, V. R.; Snetsinger, K. G.; Russell, P. B.; Ferry, G. V.

    1990-01-01

    The use of the active scattering spectrometer probe (ASAS-X) to measure sulfuric acid aerosols on U-2 and ER-2 research aircraft has yielded results that are at times ambiguous due to the dependence of particles' optical signatures on refractive index as well as physical dimensions. The calibration correction of the ASAS-X optical spectrometer probe for stratospheric aerosol studies is validated through an independent and simultaneous sampling of the particles with impactors; sizing and counting of particles on SEM images yields total particle areas and volumes. Upon correction of calibration in light of these data, spectrometer results averaged over four size distributions are found to agree with similarly averaged impactor results to within a few percent, indicating that the optical properties or chemical composition of the sample aerosol must be known in order to achieve accurate optical aerosol spectrometer size analysis.

  15. The refractive index in electron microscopy and the errors of its approximations.

    PubMed

    Lentzen, M

    2017-05-01

    In numerical calculations for electron diffraction often a simplified form of the electron-optical refractive index, linear in the electric potential, is used. In recent years improved calculation schemes have been proposed, aiming at higher accuracy by including higher-order terms of the electric potential. These schemes start from the relativistically corrected Schrödinger equation, and use a second simplified form, now for the refractive index squared, being linear in the electric potential. The second and higher-order corrections thus determined have, however, a large error, compared to those derived from the relativistically correct refractive index. The impact of the two simplifications on electron diffraction calculations is assessed through numerical comparison of the refractive index at high-angle Coulomb scattering and of cross-sections for a wide range of scattering angles, kinetic energies, and atomic numbers.

  16. Preliminary Analysis of the Performance of the Landsat 8/OLI Land Surface Reflectance Product

    NASA Technical Reports Server (NTRS)

    Vermote, Eric; Justice, Chris; Claverie, Martin; Franch, Belen

    2016-01-01

    The surface reflectance, i.e., satellite derived top of atmosphere (TOA) reflectance corrected for the temporally, spatially and spectrally varying scattering and absorbing effects of atmospheric gases and aerosols, is needed to monitor the land surface reliably. For this reason, the surface reflectance, and not TOA reflectance, is used to generate the greater majority of global land products, for example, from the Moderate Resolution Imaging Spectroradiometer (MODIS) and Visible Infrared Imaging Radiometer Suite (VIIRS) sensors. Even if atmospheric effects are minimized by sensor design, atmospheric effects are still challenging to correct. In particular, the strong impact of aerosols in the visible and near infrared spectral range can be difficult to correct, because they can be highly discrete in space and time (e.g., smoke plumes) and because of the complex scattering and absorbing properties of aerosols that vary spectrally and with aerosol size, shape, chemistry and density.

  17. Neutron scattering from 208Pb at 30.4 and 40.0 MeV and isospin dependence of the nucleon optical potential

    NASA Astrophysics Data System (ADS)

    Devito, R. P.; Khoa, Dao T.; Austin, Sam M.; Berg, U. E. P.; Loc, Bui Minh

    2012-02-01

    Background: Analysis of data involving nuclei far from stability often requires the optical potential (OP) for neutron scattering. Because neutron data are seldom available, whereas proton scattering data are more abundant, it is useful to have estimates of the difference of the neutron and proton optical potentials. This information is contained in the isospin dependence of the nucleon OP. Here we attempt to provide it for the nucleon-208Pb system. Purpose: The goal of this paper is to obtain accurate n+208Pb scattering data and use it, together with existing p+208Pb and 208Pb(p,n)208Bi(IAS) data, to obtain an accurate estimate of the isospin dependence of the nucleon OP at energies in the 30-60-MeV range. Method: Cross sections for n+208Pb scattering were measured at 30.4 and 40.0 MeV, with a typical relative (normalization) accuracy of 2-4% (3%). An angular range of 15° to 130° was covered using the beam-swinger time-of-flight system at Michigan State University. These data were analyzed by a consistent optical-model study of the neutron data and of elastic p+208Pb scattering at 45 and 54 MeV. These results were combined with a coupled-channel analysis of the 208Pb(p,n) reaction at 45 MeV, exciting the 0+ isobaric analog state (IAS) in 208Bi. Results: The new data and analysis give an accurate estimate of the isospin impurity of the nucleon-208Pb OP at 30.4 MeV caused by the Coulomb correction to the proton OP. The corrections to the real proton OP given by the CH89 global systematics were found to be only a few percent, whereas for the imaginary potential it was greater than 20% at the nuclear surface. On the basis of the analysis of the measured elastic n+208Pb data at 40 MeV, a Coulomb correction of similar strength and shape was also predicted for the p+208Pb OP at energies around 54 MeV. Conclusions: Accurate neutron scattering data can be used in combination with proton scattering data and (p,n) charge exchange data leading to the IAS to obtain reliable estimates of the isospin impurity of the nucleon OP.

  18. SCUSS u-BAND EMISSION AS A STAR-FORMATION-RATE INDICATOR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Zhimin; Zhou, Xu; Wu, Hong

    2017-01-20

    We present and analyze the possibility of using optical u-band luminosities to estimate star-formation rates (SFRs) of galaxies based on the data from the South Galactic Cap u band Sky Survey (SCUSS), which provides a deep u-band photometric survey covering about 5000 deg² of the South Galactic Cap. Based on two samples of normal star-forming galaxies selected by the BPT diagram, we explore the correlations between u-band, Hα, and IR luminosities by combining SCUSS data with the Sloan Digital Sky Survey and Wide-field Infrared Survey Explorer (WISE). The attenuation-corrected u-band luminosities are tightly correlated with the Balmer decrement-corrected Hα luminosities with an rms scatter of ∼0.17 dex. The IR-corrected u luminosities are derived based on the correlations between the attenuation of u-band luminosities and WISE 12 (or 22) μm luminosities, and then calibrated with the Balmer-corrected Hα luminosities. The systematic residuals of these calibrations are tested against the physical properties over the ranges covered by our sample objects. We find that the best-fitting nonlinear relations are better than the linear ones and are recommended for the measurement of SFRs. The systematic deviations mainly come from the pollution of old stellar population and the effect of dust extinction; therefore, a more detailed analysis is needed in future work.

  19. Scuss u-Band Emission as a Star-Formation-Rate Indicator

    NASA Astrophysics Data System (ADS)

    Zhou, Zhimin; Zhou, Xu; Wu, Hong; Fan, Xiao-Hui; Fan, Zhou; Jiang, Zhao-Ji; Jing, Yi-Peng; Li, Cheng; Lesser, Michael; Jiang, Lin-Hua; Ma, Jun; Nie, Jun-Dan; Shen, Shi-Yin; Wang, Jia-Li; Wu, Zhen-Yu; Zhang, Tian-Meng; Zou, Hu

    2017-01-01

    We present and analyze the possibility of using optical u-band luminosities to estimate star-formation rates (SFRs) of galaxies based on the data from the South Galactic Cap u band Sky Survey (SCUSS), which provides a deep u-band photometric survey covering about 5000 deg² of the South Galactic Cap. Based on two samples of normal star-forming galaxies selected by the BPT diagram, we explore the correlations between u-band, Hα, and IR luminosities by combining SCUSS data with the Sloan Digital Sky Survey and Wide-field Infrared Survey Explorer (WISE). The attenuation-corrected u-band luminosities are tightly correlated with the Balmer decrement-corrected Hα luminosities with an rms scatter of ˜0.17 dex. The IR-corrected u luminosities are derived based on the correlations between the attenuation of u-band luminosities and WISE 12 (or 22) μm luminosities, and then calibrated with the Balmer-corrected Hα luminosities. The systematic residuals of these calibrations are tested against the physical properties over the ranges covered by our sample objects. We find that the best-fitting nonlinear relations are better than the linear ones and are recommended for the measurement of SFRs. The systematic deviations mainly come from the pollution of old stellar population and the effect of dust extinction; therefore, a more detailed analysis is needed in future work.

  20. [Experimental research of turbidity influence on water quality monitoring of COD in UV-visible spectroscopy].

    PubMed

    Tang, Bin; Wei, Biao; Wu, De-Cao; Mi, De-Ling; Zhao, Jing-Xiao; Feng, Peng; Jiang, Shang-Hai; Mao, Ben-Jiang

    2014-11-01

    Eliminating the effect of turbidity is a key technical problem in the direct spectroscopic detection of COD. UV-visible spectroscopic detection of key water quality parameters depends on an accurate and effective analytical model, and turbidity is an important parameter that affects the modeling. In this paper, formazine turbidity solutions and a standard solution of potassium hydrogen phthalate were selected to study the effect of turbidity on UV-visible absorption spectroscopic detection of COD. At the characteristic wavelengths of 245, 300, 360 and 560 nm, the variation of absorbance with turbidity was analyzed using least-squares curve fitting. The results show that, in the ultraviolet range of 240 to 380 nm, because the particles responsible for turbidity produce compounds together with the organics, the effect of turbidity on the ultraviolet spectra of water is relatively complicated; in the visible region of 380 to 780 nm, the turbidity contribution to the spectrum weakens as the wavelength increases. On this basis, the multiplicative scatter correction method was studied for calibrating water-sample spectra affected by turbidity; this method can correct the spectral effects of turbidity in water samples. Comparison of the spectra before and after treatment shows that the baseline shifts caused by turbidity at the affected wavelengths are effectively corrected, while the features in the ultraviolet region are not diminished. Multiplicative scatter correction was then applied to the three selected UV-visible absorption spectra; the experimental results show that, while preserving the characteristics of the UV-visible absorption spectra of the water samples, the method not only improves the signal-to-noise ratio of spectroscopic COD detection but also provides an efficient data-conditioning step for establishing accurate chemometric measurement methods.
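
    The multiplicative scatter correction (MSC) step used above follows the standard chemometric recipe: regress each spectrum on a reference (usually the mean) spectrum and remove the fitted offset and slope. A minimal sketch, not the authors' code:

        import numpy as np

        def multiplicative_scatter_correction(spectra, reference=None):
            # spectra: (n_samples, n_wavelengths) absorbance matrix
            X = np.asarray(spectra, dtype=float)
            ref = X.mean(axis=0) if reference is None else np.asarray(reference, dtype=float)
            corrected = np.empty_like(X)
            for i, x in enumerate(X):
                slope, intercept = np.polyfit(ref, x, 1)   # x ~ intercept + slope * ref
                corrected[i] = (x - intercept) / slope
            return corrected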

  1. 3D Tomographic SAR Imaging in Densely Vegetated Mountainous Rural Areas in China and Sweden

    NASA Astrophysics Data System (ADS)

    Feng, L.; Muller, J. P.

    2017-12-01

    3D SAR Tomography (TomoSAR) and 4D SAR Differential Tomography (Diff-TomoSAR) exploit multi-baseline SAR data stacks to create an important new innovation of SAR Interferometry, to unscramble complex scenes with multiple scatterers mapped into the same SAR cell. In addition to this 3-D shape reconstruction and deformation solution in complex urban/infrastructure areas, and recent cryospheric ice investigations, emerging tomographic remote sensing applications include forest applications, e.g. tree height and biomass estimation, sub-canopy topographic mapping, and even search, rescue and surveillance. However, these scenes are characterized by temporal decorrelation of scatterers, orbital, tropospheric and ionospheric phase distortion and an open issue regarding possible height blurring and accuracy losses for TomoSAR applications, particularly in densely vegetated mountainous rural areas. Thus, it is important to develop solutions for temporal decorrelation and for orbital, tropospheric and ionospheric phase distortion. We report here on 3D imaging (especially in vertical layers) over densely vegetated mountainous rural areas using 3-D SAR imaging (SAR tomography) derived from data stacks of X-band COSMO-SkyMed Spotlight and L-band ALOS-1 PALSAR data over Dujiangyan Dam, Sichuan, China and L- and P-band airborne SAR data (BioSAR 2008 - ESA) in the Krycklan river catchment, Northern Sweden. The new TanDEM-X 12 m DEM is used to assist co-registration of all the data stacks over China first. Then, atmospheric correction is being assessed using weather model data such as ERA-I, MERRA, MERRA-2, WRF; linear phase-topography correction and MODIS spectrometer correction will be compared, and ionospheric correction methods are discussed to remove tropospheric and ionospheric delay. Then the new TomoSAR method with the TanDEM-X 12 m DEM is described to obtain the number of scatterers inside each pixel, the scattering amplitude and phase of each scatterer and finally to extract tomograms (imaging), their 3D positions and motion parameters (deformation). A progress report will be shown on these different aspects. This work is partially supported by the CSC and UCL MAPS Dean prize through a PhD studentship at UCL-MSSL.

  2. An analytic formula for the relativistic incoherent Thomson backscattering spectrum for a drifting bi-Maxwellian plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naito, O.

    2015-08-15

    An analytic formula has been derived for the relativistic incoherent Thomson backscattering spectrum for a drifting anisotropic plasma when the scattering vector is parallel to the drifting direction. The shape of the scattering spectrum is insensitive to the electron temperature perpendicular to the scattering vector, but its amplitude may be modulated. As a result, while the measured temperature correctly represents the electron distribution parallel to the scattering vector, the electron density may be underestimated when the perpendicular temperature is higher than the parallel temperature. Since the scattering spectrum in shorter wavelengths is greatly enhanced by the existence of drift, the diagnostics might be used to measure local electron current density in fusion plasmas.

  3. Exact Time-Dependent Exchange-Correlation Potential in Electron Scattering Processes

    NASA Astrophysics Data System (ADS)

    Suzuki, Yasumitsu; Lacombe, Lionel; Watanabe, Kazuyuki; Maitra, Neepa T.

    2017-12-01

    We identify peak and valley structures in the exact exchange-correlation potential of time-dependent density functional theory that are crucial for time-resolved electron scattering in a model one-dimensional system. These structures are completely missed by adiabatic approximations that, consequently, significantly underestimate the scattering probability. A recently proposed nonadiabatic approximation is shown to correctly capture the approach of the electron to the target when the initial Kohn-Sham state is chosen judiciously, and it is more accurate than standard adiabatic functionals but ultimately fails to accurately capture reflection. These results may explain the underestimation of scattering probabilities in some recent studies on molecules and surfaces.

  4. Vertical spatial coherence model for a transient signal forward-scattered from the sea surface

    USGS Publications Warehouse

    Yoerger, E.J.; McDaniel, S.T.

    1996-01-01

    The treatment of acoustic energy forward scattered from the sea surface, which is modeled as a random communications scatter channel, is the basis for developing an expression for the time-dependent coherence function across a vertical receiving array. The derivation of this model uses linear filter theory applied to the Fresnel-corrected Kirchhoff approximation in obtaining an equation for the covariance function for the forward-scattered problem. The resulting formulation is used to study the dependence of the covariance on experimental and environmental factors. The modeled coherence functions are then formed for various geometrical and environmental parameters and compared to experimental data.

  5. Algorithms and applications of aberration correction and American standard-based digital evaluation in surface defects evaluating system

    NASA Astrophysics Data System (ADS)

    Wu, Fan; Cao, Pin; Yang, Yongying; Li, Chen; Chai, Huiting; Zhang, Yihui; Xiong, Haoliang; Xu, Wenlin; Yan, Kai; Zhou, Lin; Liu, Dong; Bai, Jian; Shen, Yibing

    2016-11-01

    The inspection of surface defects is a significant part of optical surface quality evaluation. Based on microscopic scattering dark-field imaging together with sub-aperture scanning and stitching, the Surface Defects Evaluating System (SDES) can acquire a full-aperture image of defects on an optical element's surface and then extract geometric size and position information of the defects by image processing such as feature recognition. However, the optical distortion present in the SDES badly affects the inspection precision of surface defects. In this paper, a distortion correction algorithm based on a standard lattice pattern is proposed. Feature extraction, polynomial fitting and bilinear interpolation techniques, combined with adjacent sub-aperture stitching, are employed to correct the optical distortion of the SDES automatically and with high accuracy. Subsequently, in order to evaluate surface defects digitally against the American military standard MIL-PRF-13830B using the defect information obtained from the SDES, an American standard-based digital evaluation algorithm is proposed, which mainly includes a judgment method for surface defect concentration. The judgment method establishes a weight region for each defect and calculates the defect concentration from the overlap of weight regions. This algorithm takes full advantage of the convenience of matrix operations and has the merits of low complexity and fast execution, which make it well suited for high-efficiency inspection of surface defects. Finally, various experiments are conducted and the correctness of these algorithms is verified. At present, these algorithms are in use in the SDES.
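
    A rough sketch of the distortion-correction idea (polynomial fitting followed by bilinear resampling) is given below. This is an illustrative reconstruction, not the SDES implementation: the polynomial order, the least-squares fit of the lattice points, and the use of scipy.ndimage.map_coordinates for the bilinear interpolation are all assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def fit_distortion_model(ideal_xy, distorted_xy, order=2):
    """Fit per-axis 2D polynomials mapping ideal (x, y) -> distorted (x, y)."""
    x, y = ideal_xy[:, 0], ideal_xy[:, 1]
    # Design matrix with terms x^i * y^j for i + j <= order
    terms = [x**i * y**j for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack(terms)
    coeff_x, *_ = np.linalg.lstsq(A, distorted_xy[:, 0], rcond=None)
    coeff_y, *_ = np.linalg.lstsq(A, distorted_xy[:, 1], rcond=None)
    return coeff_x, coeff_y, order

def undistort(image, model):
    """Resample a distorted image onto an ideal grid with bilinear interpolation."""
    coeff_x, coeff_y, order = model
    rows, cols = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    x, y = cols.ravel().astype(float), rows.ravel().astype(float)
    terms = [x**i * y**j for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack(terms)
    src_x = A @ coeff_x   # where each ideal pixel comes from in the raw image
    src_y = A @ coeff_y
    # order=1 -> bilinear interpolation
    return map_coordinates(image, [src_y, src_x], order=1, mode="nearest").reshape(image.shape)

# Toy check: an identity mapping fitted from lattice points recovers the image
img = np.arange(100.0).reshape(10, 10)
pts = np.array([(x, y) for x in range(0, 10, 3) for y in range(0, 10, 3)], float)
model = fit_distortion_model(pts, pts, order=2)
print(np.allclose(undistort(img, model), img))
```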

  6. A polarimetric scattering database for non-spherical ice particles at microwave wavelengths

    NASA Astrophysics Data System (ADS)

    Lu, Yinghui; Jiang, Zhiyuan; Aydin, Kultegin; Verlinde, Johannes; Clothiaux, Eugene E.; Botta, Giovanni

    2016-10-01

    The atmospheric science community has entered a period in which electromagnetic scattering properties at microwave frequencies of realistically constructed ice particles are necessary for making progress on a number of fronts. One front includes retrieval of ice-particle properties and signatures from ground-based, airborne, and satellite-based radar and radiometer observations. Another front is evaluation of model microphysics by application of forward operators to their outputs and comparison to observations during case study periods. Yet a third front is data assimilation, where again forward operators are applied to databases of ice-particle scattering properties and the results compared to observations, with their differences leading to corrections of the model state. Over the past decade investigators have developed databases of ice-particle scattering properties at microwave frequencies and made them openly available. Motivated by and complementing these earlier efforts, a database containing polarimetric single-scattering properties of various types of ice particles at millimeter to centimeter wavelengths is presented. While the database presented here contains only single-scattering properties of ice particles in a fixed orientation, ice-particle scattering properties are computed for many different directions of the radiation incident on them. These results are useful for understanding the dependence of ice-particle scattering properties on ice-particle orientation with respect to the incident radiation. For ice particles that are small compared to the wavelength, the number of incident directions of the radiation is sufficient to compute reasonable estimates of their (randomly) orientation-averaged scattering properties. This database is complementary to earlier ones in that it contains complete (polarimetric) scattering property information for each ice particle - 44 plates, 30 columns, 405 branched planar crystals, 660 aggregates, and 640 conical graupel - and direction of incident radiation but is limited to four frequencies (X-, Ku-, Ka-, and W-bands), does not include temperature dependencies of the single-scattering properties, and does not include scattering properties averaged over randomly oriented ice particles. Rules for constructing the morphologies of ice particles from one database to the next often differ; consequently, analyses that incorporate all of the different databases will contain the most variability, while illuminating important differences between them. Publication of this database is in support of future analyses of this nature and comes with the hope that doing so helps contribute to the development of a database standard for ice-particle scattering properties, like the NetCDF (Network Common Data Form) CF (Climate and Forecast) or NetCDF CF/Radial metadata conventions.

  7. Evaluation of GLI Reflectance and Vegetation Indices With MODIS Products

    DTIC Science & Technology

    2005-07-25

    collective picture of a warming world and other changes in the climate system. Vegetation over land surfaces contains carbon that is released to the atmosphere...irradiance based on Thuiller 2002 (Thuiller et al., 2003), Lsat [W/m2/sr/µm] is the GLI observed radiance, and θs [rad] is the solar zenith angle. The GLI Project...for wavelengths longer than 2500 nm, the MODTRAN4.0 IR solar irradiance is used. GLI atmospheric correction for land is conducted for Rayleigh scattering and

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baldes, Iason; Bell, Nicole F.; Millar, Alexander J.

    We explore possible asymmetric dark matter models using CP violating scatterings to generate an asymmetry. In particular, we introduce a new model, based on DM fields coupling to the SM Higgs and lepton doublets, a neutrino portal, and explore its UV completions. We study the CP violation and asymmetry formation of this model, to demonstrate that it is capable of producing the correct abundance of dark matter and the observed matter-antimatter asymmetry. Crucial to achieving this is the introduction of interactions which violate CP with a T² dependence.

  9. Heavy-quark production in gluon fusion at two loops in QCD

    NASA Astrophysics Data System (ADS)

    Czakon, M.; Mitov, A.; Moch, S.

    2008-07-01

    We present the two-loop virtual QCD corrections to the production of heavy quarks in gluon fusion. The results are exact in the limit when all kinematical invariants are large compared to the mass of the heavy quark up to terms suppressed by powers of the heavy-quark mass. Our derivation uses a simple relation between massless and massive QCD scattering amplitudes as well as a direct calculation of the massive amplitude at two loops. The results presented here together with those obtained previously for quark-quark scattering form important parts of the next-to-next-to-leading order QCD corrections to heavy-quark production in hadron-hadron collisions.

  10. Comment on the modified Beer-Lambert law for scattering media.

    PubMed

    Sassaroli, Angelo; Fantini, Sergio

    2004-07-21

    We present a concise overview of the modified Beer-Lambert law, which has been extensively used in the literature of near-infrared spectroscopy (NIRS) of scattering media. In particular, we discuss one form of the modified Beer-Lambert law that is commonly found in the literature and that is not strictly correct. However, this incorrect form of the modified Beer-Lambert law still leads to the correct expression for the changes in the continuous wave optical signal associated with changes in the absorption coefficient of the investigated medium. Here we propose a notation for the modified Beer-Lambert law that keeps the typical form commonly found in the literature without introducing any incorrect assumptions.
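
    For context, the differential form of the modified Beer-Lambert law that is typically used in CW-NIRS writes the change in optical density at wavelength λ as ΔOD(λ) ≈ Σ_i ε_i(λ) Δc_i · DPF(λ) · d, with d the source-detector separation and DPF the differential pathlength factor. The sketch below inverts this relation at two wavelengths to recover chromophore concentration changes; it is an illustration only, and the extinction coefficients, DPF values and wavelengths are placeholder numbers rather than values from this paper.

```python
import numpy as np

def delta_concentrations(delta_od, extinction, dpf, separation_cm):
    """Invert the differential modified Beer-Lambert law at several wavelengths.

    delta_od      : (n_wavelengths,) measured changes in optical density
    extinction    : (n_wavelengths, n_chromophores) extinction coefficients
    dpf           : (n_wavelengths,) differential pathlength factors
    separation_cm : source-detector distance in cm
    Returns concentration changes for each chromophore (least-squares solution).
    """
    pathlength = dpf * separation_cm          # effective pathlength per wavelength
    A = extinction * pathlength[:, None]
    delta_c, *_ = np.linalg.lstsq(A, delta_od, rcond=None)
    return delta_c

# Placeholder example at two near-infrared wavelengths (numbers are illustrative only)
extinction = np.array([[1.4, 3.8],    # [eps_HbO2, eps_Hb] at ~760 nm, 1/(mM*cm)
                       [2.5, 1.8]])   # at ~850 nm
dpf = np.array([6.2, 5.8])
delta_od = np.array([0.012, 0.020])
print(delta_concentrations(delta_od, extinction, dpf, separation_cm=3.0))
```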

  11. Investigating Aerosol Morphology Using Scattering Phase Functions Measured with a Laser Imaging Nephelometer

    NASA Astrophysics Data System (ADS)

    Manfred, K.; Adler, G. A.; Erdesz, F.; Franchin, A.; Lamb, K. D.; Schwarz, J. P.; Wagner, N.; Washenfelder, R. A.; Womack, C.; Murphy, D. M.

    2017-12-01

    Particle morphology has important implications for light scattering and radiative transfer, but can be difficult to measure. Biomass burning and other important aerosol sources can generate a mixture of both spherical and non-spherical particle morphologies, and it is necessary to represent these populations correctly in models. We describe a laser imaging nephelometer that measures the unpolarized scattering phase function of bulk aerosol at 375 and 405 nm using a wide-angle lens and CCD. We deployed this instrument to the Missoula Fire Sciences Laboratory to measure biomass burning aerosol morphology from controlled fires during the recent FIREX intensive laboratory study. Total integrated scattering signal agreed with that determined by a cavity ring-down photoacoustic spectrometer system and a traditional integrating nephelometer within instrument uncertainties. We compared measured scattering phase functions at 405 nm to theoretical models for spherical (Mie) and fractal (Rayleigh-Debye-Gans) particle morphologies based on the size distribution reported by an optical particle counter. We show that particle morphology can vary dramatically for different fuel types, and present results for two representative fires (pine tree vs arid shrub). We find that Mie theory is inadequate to describe the actual behavior of realistic aerosols from biomass burning in some situations. This study demonstrates the capabilities of the laser imaging nephelometer instrument to provide real-time, in situ information about dominant particle morphology that is vital for accurate radiative transfer calculations.

  12. Radiance and polarization of multiple scattered light from haze and clouds.

    PubMed

    Kattawar, G W; Plass, G N

    1968-08-01

    The radiance and polarization of multiple scattered light is calculated from the Stokes' vectors by a Monte Carlo method. The exact scattering matrix for a typical haze and for a cloud whose spherical drops have an average radius of 12 μm is calculated from the Mie theory. The Stokes' vector is transformed in a collision by this scattering matrix and the rotation matrix. The two angles that define the photon direction after scattering are chosen by a random process that correctly simulates the actual distribution functions for both angles. The Monte Carlo results for Rayleigh scattering compare favorably with well known tabulated results. Curves are given of the reflected and transmitted radiances and polarizations for both the haze and cloud models and for several solar angles, optical thicknesses, and surface albedos. The dependence on these various parameters is discussed.
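
    The angle-sampling step described above can be illustrated for the much simpler unpolarized Rayleigh case. The sketch below is a toy plane-parallel Monte Carlo (it does not transform Stokes vectors or use a Mie scattering matrix as the paper does); the slab optical thickness, single-scattering albedo and rejection sampling of the phase function are assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_rayleigh_mu():
    """Sample cos(theta) from the Rayleigh phase function p(mu) ~ (1 + mu^2)."""
    while True:
        mu = rng.uniform(-1.0, 1.0)
        if rng.uniform(0.0, 1.0) < (1.0 + mu * mu) / 2.0:   # rejection step
            return mu

def reflectance_plane_parallel(tau_total=1.0, albedo=1.0, n_photons=20000):
    """Monte Carlo estimate of the fraction of photons reflected by a slab."""
    reflected = 0
    for _ in range(n_photons):
        tau, mu = 0.0, 1.0                  # mu > 0 means heading deeper into the slab
        while True:
            tau += mu * rng.exponential()   # free path in optical-depth units
            if tau < 0.0:
                reflected += 1              # escaped through the illuminated top
                break
            if tau > tau_total:
                break                       # transmitted through the bottom
            if rng.uniform() > albedo:
                break                       # absorbed
            # New direction: polar angle from the phase function, azimuth uniform
            mu_s = sample_rayleigh_mu()
            phi = rng.uniform(0.0, 2.0 * np.pi)
            sin_t = np.sqrt(max(0.0, 1.0 - mu * mu))
            mu = mu * mu_s + sin_t * np.sqrt(1.0 - mu_s * mu_s) * np.cos(phi)
    return reflected / n_photons

print(reflectance_plane_parallel())
```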

  13. Positronium collisions with molecular nitrogen

    NASA Astrophysics Data System (ADS)

    Wilde, R. S.; Fabrikant, I. I.

    2018-05-01

    For many atomic and molecular targets positronium (Ps) scattering looks very similar to electron scattering if total scattering cross sections are plotted as functions of the projectile velocity. Recently this similarity was observed for the resonant scattering by the N2 molecule. For correct treatment of Ps-molecule scattering incorporation of the exchange interaction and short-range correlations is of paramount importance. In the present work we have used a free-electron-gas model to describe these interactions in collisions of Ps with the N2 molecule. The results agree reasonably well with the experiment, but the position of the resonance is somewhat shifted towards lower energies, probably due to the fixed-nuclei approximation employed in the calculations. The partial-wave analysis of the resonant peak shows that its composition is more complex than in the case of e⁻-N2 scattering.

  14. Thomson-scattering measurements in the collective and noncollective regimes in laser produced plasmas (invited).

    PubMed

    Ross, J S; Glenzer, S H; Palastro, J P; Pollock, B B; Price, D; Tynan, G R; Froula, D H

    2010-10-01

    We present simultaneous Thomson-scattering measurements of light scattered from ion-acoustic and electron-plasma fluctuations in a N2 gas jet plasma. By varying the plasma density from 1.5×10^18 to 4.0×10^19 cm^-3 and the temperature from 100 to 600 eV, we observe the transition from the collective regime to the noncollective regime in the high-frequency Thomson-scattering spectrum. These measurements allow an accurate local measurement of fundamental plasma parameters: electron temperature, density, and ion temperature. Furthermore, experiments performed in the high densities typically found in laser produced plasmas result in scattering from electrons moving near the phase velocity of the relativistic plasma waves. Therefore, it is shown that even at low temperatures relativistic corrections to the scattered power must be included.

  15. Three-dimensional dosimetry of small megavoltage radiation fields using radiochromic gels and optical CT scanning

    NASA Astrophysics Data System (ADS)

    Babic, Steven; McNiven, Andrea; Battista, Jerry; Jordan, Kevin

    2009-04-01

    The dosimetry of small fields as used in stereotactic radiotherapy, radiosurgery and intensity-modulated radiation therapy can be challenging and inaccurate due to partial volume averaging effects and possible disruption of charged particle equilibrium. Consequently, there exists a need for an integrating, tissue equivalent dosimeter with high spatial resolution to avoid perturbing the radiation beam and artificially broadening the measured beam penumbra. In this work, radiochromic ferrous xylenol-orange (FX) and leuco crystal violet (LCV) micelle gels were used to measure relative dose factors (RDFs), percent depth dose profiles and relative lateral beam profiles of 6 MV x-ray pencil beams of diameter 28.1, 9.8 and 4.9 mm. The pencil beams were produced via stereotactic collimators mounted on a Varian 2100 EX linear accelerator. The gels were read using optical computed tomography (CT). Data sets were compared quantitatively with dosimetric measurements made with radiographic (Kodak EDR2) and radiochromic (GAFChromic® EBT) film, respectively. Using a fast cone-beam optical CT scanner (Vista™), corrections for diffusion in the FX gel data yielded RDFs that were comparable to those obtained by minimally diffusing LCV gels. Considering EBT film-measured RDF data as reference, cone-beam CT-scanned LCV gel data, corrected for scattered stray light, were found to be in agreement within 0.5% and -0.6% for the 9.8 and 4.9 mm diameter fields, respectively. The validity of the scattered stray light correction was confirmed by general agreement with RDF data obtained from the same LCV gel read out with a laser CT scanner that is less prone to the acceptance of scattered stray light. Percent depth dose profiles and lateral beam profiles were found to agree within experimental error for the FX gel (corrected for diffusion), LCV gel (corrected for scattered stray light), and EBT and EDR2 films. The results from this study reveal that a three-dimensional dosimetry method utilizing optical CT-scanned radiochromic gels allows for the acquisition of a self-consistent volumetric data set in a single exposure, with sufficient spatial resolution to accurately characterize small fields.

  16. Atmospheric correction of the ocean color observations of the medium resolution imaging spectrometer (MERIS)

    NASA Astrophysics Data System (ADS)

    Antoine, David; Morel, Andre

    1997-02-01

    An algorithm is proposed for the atmospheric correction of the ocean color observations by the MERIS instrument. The principle of the algorithm, which accounts for all multiple scattering effects, is presented. The algorithm is then tested, and its accuracy assessed in terms of errors in the retrieved marine reflectances.

  17. Iteration of ultrasound aberration correction methods

    NASA Astrophysics Data System (ADS)

    Maasoey, Svein-Erik; Angelsen, Bjoern; Varslot, Trond

    2004-05-01

    Aberration in ultrasound medical imaging is usually modeled by time-delay and amplitude variations concentrated on the transmitting/receiving array. This filter process is here denoted a TDA filter. The TDA filter is an approximation to the physical aberration process, which occurs over an extended part of the human body wall. Estimation of the TDA filter, and performing correction on transmit and receive, has proven difficult. It has yet to be shown that this method works adequately for severe aberration. Estimation of the TDA filter can be iterated by retransmitting a corrected signal and re-estimating until a convergence criterion is fulfilled (adaptive imaging). Two methods for estimating time-delay and amplitude variations in receive signals from random scatterers have been developed. One method correlates each element signal with a reference signal. The other method uses eigenvalue decomposition of the receive cross-spectrum matrix, based upon a receive energy-maximizing criterion. Simulations of iterating aberration correction with a TDA filter have been investigated to study its convergence properties. Weak and strong human-body wall models were used to generate aberration; both emulated the human abdominal wall. Results after iteration improve aberration correction substantially, and both estimation methods converge, even for the case of strong aberration.
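
    The first estimation approach mentioned above, correlating each element signal with a reference signal, can be sketched as follows. This is an illustrative example rather than the authors' algorithm: the beamsum reference, the parabolic sub-sample peak refinement and the RMS amplitude factor are assumptions.

```python
import numpy as np

def estimate_tda(element_signals, fs):
    """Estimate per-element time delays and amplitudes against a beamsum reference.

    element_signals : (n_elements, n_samples) receive signals
    fs              : sampling frequency in Hz
    Returns (delays_in_seconds, amplitude_factors).
    """
    reference = element_signals.mean(axis=0)          # simple beamsum reference
    n = element_signals.shape[1]
    lags = np.arange(-n + 1, n)
    delays, amps = [], []
    for sig in element_signals:
        xcorr = np.correlate(sig, reference, mode="full")
        k = np.argmax(xcorr)
        # Parabolic interpolation around the peak for a sub-sample delay estimate
        if 0 < k < len(xcorr) - 1:
            y0, y1, y2 = xcorr[k - 1], xcorr[k], xcorr[k + 1]
            frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
        else:
            frac = 0.0
        delays.append((lags[k] + frac) / fs)
        amps.append(np.sqrt(np.sum(sig**2) / np.sum(reference**2)))
    return np.array(delays), np.array(amps)

# Toy usage: three noisy, shifted copies of a short pulse
rng = np.random.default_rng(0)
pulse = np.sin(2 * np.pi * 5e6 * np.arange(256) / 40e6) * np.hanning(256)
signals = np.array([np.roll(pulse, d) + 0.01 * rng.standard_normal(256) for d in (0, 3, -2)])
print(estimate_tda(signals, fs=40e6)[0])
```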

  18. Characterizing the behavior of scattered radiation in multi-energy x-ray imaging

    NASA Astrophysics Data System (ADS)

    Sossin, Artur; Rebuffel, V.; Tabary, J.; Létang, J. M.; Freud, N.; Verger, L.

    2017-04-01

    Scattered radiation results in various undesirable effects in medical diagnostics, non-destructive testing (NDT) and security x-ray imaging. Despite numerous studies characterizing this phenomenon and its effects, the knowledge of its behavior in the energy domain remains limited. The present study aims at summarizing some key insights on scattered radiation originating from the inspected object. In addition, various simulations and experiments with limited collimation on both simplified and realistic phantoms were conducted in order to study scatter behavior in multi-energy x-ray imaging. Results showed that the spectrum shape of the scatter component can be considered preserved in the first approximation across the image plane for various acquisition geometries and phantoms. The variations exhibited by the scatter spectrum were below 10% for most examined cases. Furthermore, the corresponding spectrum shape proved to be also relatively invariant for different experimental angular projections of one of the examined phantoms. The observed property of scattered radiation can potentially lead to the decoupling of spatial and energy scatter components, which can in turn enable speed ups in scatter simulations and reduce the complexity of scatter correction.

  19. Improved image quality of cone beam CT scans for radiotherapy image guidance using fiber-interspaced antiscatter grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stankovic, Uros; Herk, Marcel van; Ploeger, Lennert S.

    Purpose: Medical linear accelerator mounted cone beam CT (CBCT) scanner provides useful soft tissue contrast for purposes of image guidance in radiotherapy. The presence of extensive scattered radiation has a negative effect on soft tissue visibility and uniformity of CBCT scans. Antiscatter grids (ASG) are used in the field of diagnostic radiography to mitigate the scatter. They usually do increase the contrast of the scan, but simultaneously increase the noise. Therefore, and considering other scatter mitigation mechanisms present in a CBCT scanner, the applicability of ASGs with aluminum interspacing for a wide range of imaging conditions has been inconclusive in previous studies. In recent years, grids using fiber interspacers have appeared, providing grids with higher scatter rejection while maintaining reasonable transmission of primary radiation. The purpose of this study was to evaluate the impact of one such grid on CBCT image quality. Methods: The grid used (Philips Medical Systems) had ratio of 21:1, frequency 36 lp/cm, and nominal selectivity of 11.9. It was mounted on the kV flat panel detector of an Elekta Synergy linear accelerator and tested in a phantom and a clinical study. Due to the flex of the linac and presence of gridline artifacts an angle dependent gain correction algorithm was devised to mitigate resulting artifacts. Scan reconstruction was performed using XVI4.5 augmented with in-house developed image lag correction and Hounsfield unit calibration. To determine the necessary parameters for Hounsfield unit calibration and software scatter correction parameters, the Catphan 600 (The Phantom Laboratory) phantom was used. Image quality parameters were evaluated using CIRS CBCT Image Quality and Electron Density Phantom (CIRS) in two different geometries: one modeling head and neck and other pelvic region. Phantoms were acquired with and without the grid and reconstructed with and without software correction which was adapted for the different acquisition scenarios. Parameters used in the phantom study were t_cup for nonuniformity and contrast-to-noise ratio (CNR) for soft tissue visibility. Clinical scans were evaluated in an observer study in which four experienced radiotherapy technologists rated soft tissue visibility and uniformity of scans with and without the grid. Results: The proposed angle dependent gain correction algorithm suppressed the visible ring artifacts. Grid had a beneficial impact on nonuniformity, contrast to noise ratio, and Hounsfield unit accuracy for both scanning geometries. The nonuniformity reduced by 90% for head sized object and 91% for pelvic-sized object. CNR improved compared to no corrections on average by a factor 2.8 for the head sized object, and 2.2 for the pelvic sized phantom. Grid outperformed software correction alone, but adding additional software correction to the grid was overall the best strategy. In the observer study, a significant improvement was found in both soft tissue visibility and nonuniformity of scans when the grid is used. Conclusions: The evaluated fiber-interspaced grid improved the image quality of the CBCT system for a broad range of imaging conditions. Clinical scans show significant improvement in soft tissue visibility and uniformity without the need to increase the imaging dose.

  20. Improved image quality of cone beam CT scans for radiotherapy image guidance using fiber-interspaced antiscatter grid.

    PubMed

    Stankovic, Uros; van Herk, Marcel; Ploeger, Lennert S; Sonke, Jan-Jakob

    2014-06-01

    Medical linear accelerator mounted cone beam CT (CBCT) scanner provides useful soft tissue contrast for purposes of image guidance in radiotherapy. The presence of extensive scattered radiation has a negative effect on soft tissue visibility and uniformity of CBCT scans. Antiscatter grids (ASG) are used in the field of diagnostic radiography to mitigate the scatter. They usually do increase the contrast of the scan, but simultaneously increase the noise. Therefore, and considering other scatter mitigation mechanisms present in a CBCT scanner, the applicability of ASGs with aluminum interspacing for a wide range of imaging conditions has been inconclusive in previous studies. In recent years, grids using fiber interspacers have appeared, providing grids with higher scatter rejection while maintaining reasonable transmission of primary radiation. The purpose of this study was to evaluate the impact of one such grid on CBCT image quality. The grid used (Philips Medical Systems) had ratio of 21:1, frequency 36 lp/cm, and nominal selectivity of 11.9. It was mounted on the kV flat panel detector of an Elekta Synergy linear accelerator and tested in a phantom and a clinical study. Due to the flex of the linac and presence of gridline artifacts an angle dependent gain correction algorithm was devised to mitigate resulting artifacts. Scan reconstruction was performed using XVI4.5 augmented with in-house developed image lag correction and Hounsfield unit calibration. To determine the necessary parameters for Hounsfield unit calibration and software scatter correction parameters, the Catphan 600 (The Phantom Laboratory) phantom was used. Image quality parameters were evaluated using CIRS CBCT Image Quality and Electron Density Phantom (CIRS) in two different geometries: one modeling head and neck and other pelvic region. Phantoms were acquired with and without the grid and reconstructed with and without software correction which was adapted for the different acquisition scenarios. Parameters used in the phantom study were t_cup for nonuniformity and contrast-to-noise ratio (CNR) for soft tissue visibility. Clinical scans were evaluated in an observer study in which four experienced radiotherapy technologists rated soft tissue visibility and uniformity of scans with and without the grid. The proposed angle dependent gain correction algorithm suppressed the visible ring artifacts. Grid had a beneficial impact on nonuniformity, contrast to noise ratio, and Hounsfield unit accuracy for both scanning geometries. The nonuniformity reduced by 90% for head sized object and 91% for pelvic-sized object. CNR improved compared to no corrections on average by a factor 2.8 for the head sized object, and 2.2 for the pelvic sized phantom. Grid outperformed software correction alone, but adding additional software correction to the grid was overall the best strategy. In the observer study, a significant improvement was found in both soft tissue visibility and nonuniformity of scans when the grid is used. The evaluated fiber-interspaced grid improved the image quality of the CBCT system for a broad range of imaging conditions. Clinical scans show significant improvement in soft tissue visibility and uniformity without the need to increase the imaging dose.

  1. Scattering of Acoustic Waves from Ocean Boundaries

    DTIC Science & Technology

    2015-09-30

    of buried mines and improve SONAR performance in shallow water. OBJECTIVES 1) Determination of the correct physical model of acoustic propagation... acoustic parameters in the ocean. APPROACH 1) Finite Element Modeling for Range Dependent Waveguides: Finite element modeling is applied to a...roughness measurements for reverberation modeling . GLISTEN data provide insight into the role of biology on acoustic propagation and scattering

  2. Statistical Calibration and Validation of a Homogeneous Ventilated Wall-Interference Correction Method for the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.

    2005-01-01

    Wind tunnel experiments will continue to be a primary source of validation data for many types of mathematical and computational models in the aerospace industry. The increased emphasis on accuracy of data acquired from these facilities requires understanding of the uncertainty of not only the measurement data but also any correction applied to the data. One of the largest and most critical corrections made to these data is due to wall interference. In an effort to understand the accuracy and suitability of these corrections, a statistical validation process for wall interference correction methods has been developed. This process is based on the use of independent cases which, after correction, are expected to produce the same result. Comparison of these independent cases with respect to the uncertainty in the correction process establishes a domain of applicability based on the capability of the method to provide reasonable corrections with respect to customer accuracy requirements. The statistical validation method was applied to the version of the Transonic Wall Interference Correction System (TWICS) recently implemented in the National Transonic Facility at NASA Langley Research Center. The TWICS code generates corrections for solid and slotted wall interference in the model pitch plane based on boundary pressure measurements. Before validation could be performed on this method, it was necessary to calibrate the ventilated wall boundary condition parameters. Discrimination comparisons are used to determine the most representative of three linear boundary condition models which have historically been used to represent longitudinally slotted test section walls. Of the three linear boundary condition models implemented for ventilated walls, the general slotted wall model was the most representative of the data. The TWICS code using the calibrated general slotted wall model was found to be valid to within the process uncertainty for test section Mach numbers less than or equal to 0.60. The scatter among the mean corrected results of the bodies of revolution validation cases was within one count of drag on a typical transport aircraft configuration for Mach numbers at or below 0.80 and two counts of drag for Mach numbers at or below 0.90.

  3. Atmospheric Correction Algorithm for Hyperspectral Imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. J. Pollina

    1999-09-01

    In December 1997, the US Department of Energy (DOE) established a Center of Excellence (Hyperspectral-Multispectral Algorithm Research Center, HyMARC) for promoting the research and development of algorithms to exploit spectral imagery. This center is located at the DOE Remote Sensing Laboratory in Las Vegas, Nevada, and is operated for the DOE by Bechtel Nevada. This paper presents the results to date of a research project begun at the center during 1998 to investigate the correction of hyperspectral data for atmospheric aerosols. Results of a project conducted by the Rochester Institute of Technology to define, implement, and test procedures for absolute calibration and correction of hyperspectral data to absolute units of high spectral resolution imagery will be presented. Hybrid techniques for atmospheric correction using image or spectral scene data coupled through radiative propagation models will be specifically addressed. Results of this effort to analyze HYDICE sensor data will be included. Preliminary results based on studying the performance of standard routines, such as Atmospheric Pre-corrected Differential Absorption and Nonlinear Least Squares Spectral Fit, in retrieving reflectance spectra show overall reflectance retrieval errors of approximately one to two reflectance units in the 0.4- to 2.5-micron-wavelength region (outside of the absorption features). These results are based on HYDICE sensor data collected from the Southern Great Plains Atmospheric Radiation Measurement site during overflights conducted in July of 1997. Results of an upgrade made in the model-based atmospheric correction techniques, which take advantage of updates made to the moderate resolution atmospheric transmittance model (MODTRAN 4.0) software, will also be presented. Data will be shown to demonstrate how the reflectance retrieval in the shorter wavelengths of the blue-green region will be improved because of enhanced modeling of multiple scattering effects.

  4. Flux or speed? Examining speckle contrast imaging of vascular flows

    PubMed Central

    Kazmi, S. M. Shams; Faraji, Ehssan; Davis, Mitchell A.; Huang, Yu-Yen; Zhang, Xiaojing J.; Dunn, Andrew K.

    2015-01-01

    Speckle contrast imaging enables rapid mapping of relative blood flow distributions using camera detection of back-scattered laser light. However, speckle derived flow measures deviate from direct measurements of erythrocyte speeds by 47 ± 15% (n = 13 mice) in vessels of various calibers. Alternatively, deviations with estimates of volumetric flux are on average 91 ± 43%. We highlight and attempt to alleviate this discrepancy by accounting for the effects of multiple dynamic scattering with speckle imaging of microfluidic channels of varying sizes and then with red blood cell (RBC) tracking correlated speckle imaging of vascular flows in the cerebral cortex. By revisiting the governing dynamic light scattering models, we test the ability to predict the degree of multiple dynamic scattering across vessels in order to correct for the observed discrepancies between relative RBC speeds and multi-exposure speckle imaging estimates of inverse correlation times. The analysis reveals that traditional speckle contrast imagery of vascular flows is neither a measure of volumetric flux nor particle speed, but rather the product of speed and vessel diameter. The corrected speckle estimates of the relative RBC speeds have an average 10 ± 3% deviation in vivo with those obtained from RBC tracking. PMID:26203384
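
    As background for the quantities discussed above, the basic speckle contrast K = σ/⟨I⟩ is computed over a local window, and a single-exposure model relates K to the inverse correlation time 1/τc. The sketch below uses the simple Lorentzian, fully dynamic single-exposure relation K² = (e^(−2x) − 1 + 2x)/(2x²) with x = T/τc; this is a simplification for illustration, not the multi-exposure speckle imaging model applied in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.optimize import brentq

def speckle_contrast(raw_frame, window=7):
    """Local speckle contrast K = sigma / mean over a sliding window."""
    frame = raw_frame.astype(float)
    mean = uniform_filter(frame, window)
    mean_sq = uniform_filter(frame**2, window)
    var = np.clip(mean_sq - mean**2, 0.0, None)
    return np.sqrt(var) / np.maximum(mean, 1e-12)

def k_squared(x):
    """Theoretical K^2 for exposure/correlation-time ratio x = T / tau_c."""
    return (np.exp(-2.0 * x) - 1.0 + 2.0 * x) / (2.0 * x**2)

def inverse_correlation_time(K, exposure_s):
    """Solve K^2 = model(T / tau_c) for 1/tau_c (proportional to speed)."""
    x = brentq(lambda x: k_squared(x) - K**2, 1e-6, 1e6)
    return x / exposure_s   # 1 / tau_c

# Fully developed static speckle (exponential intensity) gives K close to 1
rng = np.random.default_rng(0)
frame = rng.exponential(scale=100.0, size=(128, 128))
print(speckle_contrast(frame).mean())
print(inverse_correlation_time(K=0.3, exposure_s=5e-3))
```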

  5. Flux or speed? Examining speckle contrast imaging of vascular flows.

    PubMed

    Kazmi, S M Shams; Faraji, Ehssan; Davis, Mitchell A; Huang, Yu-Yen; Zhang, Xiaojing J; Dunn, Andrew K

    2015-07-01

    Speckle contrast imaging enables rapid mapping of relative blood flow distributions using camera detection of back-scattered laser light. However, speckle derived flow measures deviate from direct measurements of erythrocyte speeds by 47 ± 15% (n = 13 mice) in vessels of various calibers. Alternatively, deviations with estimates of volumetric flux are on average 91 ± 43%. We highlight and attempt to alleviate this discrepancy by accounting for the effects of multiple dynamic scattering with speckle imaging of microfluidic channels of varying sizes and then with red blood cell (RBC) tracking correlated speckle imaging of vascular flows in the cerebral cortex. By revisiting the governing dynamic light scattering models, we test the ability to predict the degree of multiple dynamic scattering across vessels in order to correct for the observed discrepancies between relative RBC speeds and multi-exposure speckle imaging estimates of inverse correlation times. The analysis reveals that traditional speckle contrast imagery of vascular flows is neither a measure of volumetric flux nor particle speed, but rather the product of speed and vessel diameter. The corrected speckle estimates of the relative RBC speeds have an average 10 ± 3% deviation in vivo with those obtained from RBC tracking.

  6. Further Improvement of the RITS Code for Pulsed Neutron Bragg-edge Transmission Imaging

    NASA Astrophysics Data System (ADS)

    Sato, H.; Watanabe, K.; Kiyokawa, K.; Kiyanagi, R.; Hara, K. Y.; Kamiyama, T.; Furusaka, M.; Shinohara, T.; Kiyanagi, Y.

    The RITS code is a unique and powerful tool for whole-spectrum fitting analysis of Bragg-edge transmission data. However, it has had two major problems, and we have proposed methods to overcome them. The first issue is the difference in the crystallite size values between the diffraction and the Bragg-edge analyses. We found the reason was a different definition of the crystal structure factor. It affects the crystallite size because the crystallite size is deduced from the primary extinction effect, which depends on the crystal structure factor. As a result of the algorithm change, the crystallite sizes obtained by RITS came much closer to those obtained by Rietveld analyses of diffraction data, improving from 155% to 110%. The second issue is correction for the effect of background neutrons scattered from a specimen. Through neutron transport simulation studies, we found that the background components consist of forward Bragg scattering, double backward Bragg scattering, and thermal diffuse scattering. RITS with the background correction function developed through these simulation studies could reconstruct various simulated and experimental transmission spectra well, but the refined crystalline microstructural parameters were often distorted. Finally, it was recommended to reduce the background by improving the experimental conditions.

  7. Acoustic classification of zooplankton

    NASA Astrophysics Data System (ADS)

    Martin Traykovski, Linda V.

    1998-11-01

    Work on the forward problem in zooplankton bioacoustics has resulted in the identification of three categories of acoustic scatterers: elastic-shelled (e.g. pteropods), fluid-like (e.g. euphausiids), and gas-bearing (e.g. siphonophores). The relationship between backscattered energy and animal biomass has been shown to vary by a factor of ~19,000 across these categories, so that to make accurate estimates of zooplankton biomass from acoustic backscatter measurements of the ocean, the acoustic characteristics of the species of interest must be well-understood. This thesis describes the development of both feature based and model based classification techniques to invert broadband acoustic echoes from individual zooplankton for scatterer type, as well as for particular parameters such as animal orientation. The feature based Empirical Orthogonal Function Classifier (EOFC) discriminates scatterer types by identifying characteristic modes of variability in the echo spectra, exploiting only the inherent characteristic structure of the acoustic signatures. The model based Model Parameterisation Classifier (MPC) classifies based on correlation of observed echo spectra with simplified parameterisations of theoretical scattering models for the three classes. The Covariance Mean Variance Classifiers (CMVC) are a set of advanced model based techniques which exploit the full complexity of the theoretical models by searching the entire physical model parameter space without employing simplifying parameterisations. Three different CMVC algorithms were developed: the Integrated Score Classifier (ISC), the Pairwise Score Classifier (PSC) and the Bayesian Probability Classifier (BPC); these classifiers assign observations to a class based on similarities in covariance, mean, and variance, while accounting for model ambiguity and validity. These feature based and model based inversion techniques were successfully applied to several thousand echoes acquired from broadband (~350 kHz-750 kHz) insonifications of live zooplankton collected on Georges Bank and the Gulf of Maine to determine scatterer class. CMVC techniques were also applied to echoes from fluid-like zooplankton (Antarctic krill) to invert for angle of orientation using generic and animal-specific theoretical and empirical models. Application of these inversion techniques in situ will allow correct apportionment of backscattered energy to animal biomass, significantly improving estimates of zooplankton biomass based on acoustic surveys. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
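
    The feature-based idea behind the EOFC, identifying characteristic modes of variability in the echo spectra, can be sketched with a toy subspace classifier. This is not the thesis' algorithm: the per-class SVD, the number of retained modes and the reconstruction-error decision rule are assumptions chosen only to illustrate the concept.

```python
import numpy as np

class EOFClassifier:
    """Toy feature-based classifier: one EOF subspace per scatterer class."""

    def __init__(self, n_modes=3):
        self.n_modes = n_modes
        self.models = {}

    def fit(self, spectra_by_class):
        """spectra_by_class: dict mapping class name -> (n_echoes, n_freqs) array."""
        for name, spectra in spectra_by_class.items():
            mean = spectra.mean(axis=0)
            # EOFs = right singular vectors of the mean-removed spectra
            _, _, vt = np.linalg.svd(spectra - mean, full_matrices=False)
            self.models[name] = (mean, vt[: self.n_modes])
        return self

    def predict(self, spectrum):
        """Assign the class whose EOF subspace reconstructs the echo best."""
        errors = {}
        for name, (mean, eofs) in self.models.items():
            centered = spectrum - mean
            recon = eofs.T @ (eofs @ centered)
            errors[name] = np.linalg.norm(centered - recon)
        return min(errors, key=errors.get)

# Toy usage with random "spectra" standing in for broadband echo spectra
rng = np.random.default_rng(0)
train = {"fluid-like": rng.normal(0.0, 1.0, (40, 64)),
         "elastic-shelled": rng.normal(0.5, 1.0, (40, 64))}
clf = EOFClassifier(n_modes=3).fit(train)
print(clf.predict(rng.normal(0.5, 1.0, 64)))
```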

  8. Digital adaptive optics confocal microscopy based on iterative retrieval of optical aberration from a guidestar hologram

    PubMed Central

    Liu, Changgeng; Thapa, Damber; Yao, Xincheng

    2017-01-01

    Guidestar hologram based digital adaptive optics (DAO) is one recently emerging active imaging modality. It records each complex distorted line field reflected or scattered from the sample by an off-axis digital hologram, measures the optical aberration from a separate off-axis digital guidestar hologram, and removes the optical aberration from the distorted line fields by numerical processing. In previously demonstrated DAO systems, the optical aberration was directly retrieved from the guidestar hologram by taking its Fourier transform and extracting the phase term. For the direct retrieval method (DRM), when the sample is not coincident with the guidestar focal plane, the accuracy of the optical aberration retrieved by DRM undergoes a fast decay, leading to quality deterioration of corrected images. To tackle this problem, we explore here an image metrics-based iterative method (MIM) to retrieve the optical aberration from the guidestar hologram. Using an aberrated objective lens and scattering samples, we demonstrate that MIM can improve the accuracy of the retrieved aberrations from both focused and defocused guidestar holograms, compared to DRM, to improve the robustness of the DAO. PMID:28380937

  9. In Situ Aerosol Detector

    NASA Technical Reports Server (NTRS)

    Vakhtin, Andrei; Krasnoperov, Lev

    2011-01-01

    An affordable technology designed to facilitate extensive global atmospheric aerosol measurements has been developed. This lightweight instrument is compatible with newly developed platforms such as tethered balloons, blimps, kites, and even disposable instruments such as dropsondes. This technology is based on detection of light scattered by aerosol particles where an optical layout is used to enhance the performance of the laboratory prototype instrument, which allows detection of smaller aerosol particles and improves the accuracy of aerosol particle size measurement. It has been determined that using focused illumination geometry without any apertures is advantageous over using the originally proposed collimated beam/slit geometry (that is supposed to produce uniform illumination over the beam cross-section). The illumination source is used more efficiently, which allows detection of smaller aerosol particles. Second, the obtained integral scattered light intensity measured for the particle can be corrected for the beam intensity profile inhomogeneity based on the measured beam intensity profile and measured particle location. The particle location (coordinates) in the illuminated sample volume is determined based on the information contained in the image frame. The procedure considerably improves the accuracy of determination of the aerosol particle size.

  10. Dispersive analysis of the scalar form factor of the nucleon

    NASA Astrophysics Data System (ADS)

    Hoferichter, M.; Ditsche, C.; Kubis, B.; Meißner, U.-G.

    2012-06-01

    Based on the recently proposed Roy-Steiner equations for pion-nucleon (πN) scattering [1], we derive a system of coupled integral equations for the ππ → N̄N and K̄K → N̄N S-waves. These equations take the form of a two-channel Muskhelishvili-Omnès problem, whose solution in the presence of a finite matching point is discussed. We use these results to update the dispersive analysis of the scalar form factor of the nucleon, fully including K̄K intermediate states. In particular, we determine the correction Δ_σ = σ(2M_π²) − σ_πN, which is needed for the extraction of the pion-nucleon σ term from πN scattering, as a function of pion-nucleon subthreshold parameters and the πN coupling constant.

  11. Neutron Angular Scatter Effects in 3DHZETRN: Quasi-Elastic

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Werneth, Charles M.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.

    2017-01-01

    The current 3DHZETRN code has a detailed three dimensional (3D) treatment of neutron transport based on a forward/isotropic assumption and has been compared to Monte Carlo (MC) simulation codes in various geometries. In most cases, it has been found that 3DHZETRN agrees with the MC codes to the extent they agree with each other. However, a recent study of neutron leakage from finite geometries revealed that further improvements to the 3DHZETRN formalism are needed. In the present report, angular scattering corrections to the neutron fluence are provided in an attempt to improve fluence estimates from a uniform sphere. It is found that further developments in the nuclear production models are required to fully evaluate the impact of transport model updates. A model for the quasi-elastic neutron production spectra is therefore developed and implemented into 3DHZETRN.

  12. Plane-dependent ML scatter scaling: 3D extension of the 2D simulated single scatter (SSS) estimate.

    PubMed

    Rezaei, Ahmadreza; Salvo, Koen; Vahle, Thomas; Panin, Vladimir; Casey, Michael; Boada, Fernando; Defrise, Michel; Nuyts, Johan

    2017-07-24

    Scatter correction is typically done using a simulation of the single scatter, which is then scaled to account for multiple scatters and other possible model mismatches. This scaling factor is determined by fitting the simulated scatter sinogram to the measured sinogram, using only counts measured along LORs that do not intersect the patient body, i.e. 'scatter-tails'. Extending previous work, we propose to scale the scatter with a plane dependent factor, which is determined as an additional unknown in the maximum likelihood (ML) reconstructions, using counts in the entire sinogram rather than only the 'scatter-tails'. The ML-scaled scatter estimates are validated using a Monte-Carlo simulation of a NEMA-like phantom, a phantom scan with typical contrast ratios of a 68Ga-PSMA scan, and 23 whole-body 18F-FDG patient scans. On average, we observe a 12.2% change in the total amount of tracer activity of the MLEM reconstructions of our whole-body patient database when the proposed ML scatter scales are used. Furthermore, reconstructions using the ML-scaled scatter estimates are found to eliminate the typical 'halo' artifacts that are often observed in the vicinity of high focal uptake regions.
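
    The idea of treating the scatter scale as an additional unknown in the ML reconstruction can be illustrated with a toy alternating update. This is a conceptual sketch only, not the plane-dependent method of the paper: a single global scale, a dense random system matrix and standard MLEM-style multiplicative updates (with the scaled scatter treated as one extra additive component) are assumed.

```python
import numpy as np

def mlem_with_scatter_scale(y, P, scatter, n_iter=50):
    """Jointly estimate activity and a multiplicative scatter scale by ML.

    y       : (n_bins,) measured sinogram counts
    P       : (n_bins, n_voxels) system matrix
    scatter : (n_bins,) simulated single-scatter sinogram (unscaled)
    Returns (activity_estimate, scatter_scale).
    """
    lam = np.ones(P.shape[1])          # activity image
    alpha = 1.0                        # scatter scale factor
    sens = P.sum(axis=0)               # sensitivity image
    for _ in range(n_iter):
        # MLEM update of the activity with the scaled scatter as background
        expected = P @ lam + alpha * scatter
        ratio = y / np.maximum(expected, 1e-12)
        lam *= (P.T @ ratio) / np.maximum(sens, 1e-12)
        # ML update of the scatter scale (the scatter sinogram acts as one
        # extra basis function in the same EM scheme)
        expected = P @ lam + alpha * scatter
        ratio = y / np.maximum(expected, 1e-12)
        alpha *= (scatter @ ratio) / np.maximum(scatter.sum(), 1e-12)
    return lam, alpha

# Tiny synthetic example
rng = np.random.default_rng(0)
P = rng.uniform(0.0, 1.0, size=(200, 30))
true_lam = rng.uniform(0.5, 2.0, size=30)
true_scatter = rng.uniform(0.0, 5.0, size=200)
y = rng.poisson(P @ true_lam + 0.7 * true_scatter)
lam_hat, alpha_hat = mlem_with_scatter_scale(y, P, true_scatter)
print(round(alpha_hat, 3))
```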

  13. Properties of dust and clouds in the Mars atmosphere: Analysis of Viking IRTM emission phase function sequences

    NASA Technical Reports Server (NTRS)

    Clancy, R. T.; Lee, S. W.

    1991-01-01

    An analysis of emission-phase-function (EPF) observations from the Viking Orbiter Infrared Thermal Mapper (IRTM) yields a wide variety of results regarding dust and cloud scattering in the Mars atmosphere and atmospheric-corrected albedos for the surface of Mars. A multiple scattering radiative transfer model incorporating a bidirectional phase function for the surface and atmospheric scattering by dust and clouds is used to derive surface albedos and dust and ice optical properties and optical depths for these various conditions on Mars.

  14. Partial Wave Dispersion Relations: Application to Electron-Atom Scattering

    NASA Technical Reports Server (NTRS)

    Temkin, A.; Drachman, Richard J.

    1999-01-01

    In this Letter we propose the use of partial wave dispersion relations (DR's) as the way of solving the long-standing problem of correctly incorporating exchange in a valid DR for electron-atom scattering. In particular a method is given for effectively calculating the contribution of the discontinuity and/or poles of the partial wave amplitude which occur in the negative E plane. The method is successfully tested in three cases: (i) the analytically solvable exponential potential, (ii) the Hartree potential, and (iii) the S-wave exchange approximation for electron-hydrogen scattering.

  15. Ultracold collisions between spin-orbit-coupled dipoles: General formalism and universality

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Hougaard, Christiaan R.; Mulkerin, Brendan C.; Liu, Xia-Ji

    2018-04-01

    A theoretical study of the low-energy scattering properties of two aligned identical bosonic and fermionic dipoles in the presence of isotropic spin-orbit coupling is presented. A general treatment of particles with arbitrary (pseudo)spin is given in the framework of multichannel scattering. At ultracold temperatures and away from shape resonances or closed-channel dominated resonances, the cross section can be well described within the Born approximation to within corrections due to the s-wave scattering. We compare our findings with numerical calculations and find excellent agreement.

  16. A phenomenological π-p scattering length from pionic hydrogen

    NASA Astrophysics Data System (ADS)

    Ericson, T. E. O.; Loiseau, B.; Wycech, S.

    2004-07-01

    We derive a closed, model-independent expression for the electromagnetic correction factor to a phenomenological hadronic scattering length a_h extracted from a hydrogenic atom. It is obtained in a non-relativistic approach and in the limit of a short-ranged hadronic interaction, to terms of order α² log α, using an extended charge distribution. A hadronic πN scattering length a_h(π⁻p) = 0.0870(5) m_π⁻¹ is deduced, leading to a πNN coupling constant g_c²/(4π) = 14.04(17) from the GMO relation.

  17. Testing the Two-Layer Model for Correcting Near Cloud Reflectance Enhancement Using LES SHDOM Simulated Radiances

    NASA Technical Reports Server (NTRS)

    Wen, Guoyong; Marshak, Alexander; Varnai, Tamas; Levy, Robert

    2016-01-01

    A transition zone exists between cloudy skies and clear sky, in which clouds scatter solar radiation into clear-sky regions. From a satellite perspective, it appears that clouds enhance the radiation nearby. We seek a simple method to estimate this enhancement, since it is so computationally expensive to account for all three-dimensional (3-D) scattering processes. In previous studies, we developed a simple two-layer model (2LM) that estimated the radiation scattered via cloud-molecular interactions. Here we have developed a new model to account for cloud-surface interaction (CSI). We test the models by comparing to calculations provided by full 3-D radiative transfer simulations of realistic cloud scenes. For these scenes, the Moderate Resolution Imaging Spectroradiometer (MODIS)-like radiance fields were computed from the Spherical Harmonic Discrete Ordinate Method (SHDOM), based on a large number of cumulus fields simulated by the University of California, Los Angeles (UCLA) large eddy simulation (LES) model. We find that the original 2LM model that estimates cloud-air molecule interactions accounts for 64% of the total reflectance enhancement and the new model (2LM+CSI) that also includes cloud-surface interactions accounts for nearly 80%. We discuss the possibility of accounting for cloud-aerosol radiative interactions in 3-D cloud-induced reflectance enhancement, which may explain the remaining 20% of the enhancement. Because these are simple models, these corrections can be applied to global satellite observations (e.g., MODIS) and help to reduce biases in aerosol and other clear-sky retrievals.

  18. Optical and morphological properties of Cirrus clouds determined by the high spectral resolution lidar during FIRE

    NASA Technical Reports Server (NTRS)

    Grund, Christian John; Eloranta, Edwin W.

    1990-01-01

    Cirrus clouds reflect incoming solar radiation and trap outgoing terrestrial radiation; therefore, accurate estimation of the global energy balance depends upon knowledge of the optical and physical properties of these clouds. Scattering and absorption by cirrus clouds affect measurements made by many satellite-borne and ground based remote sensors. Scattering of ambient light by the cloud, and thermal emissions from the cloud, can increase measurement background noise. Multiple scattering processes can adversely affect the divergence of optical beams propagating through these clouds. Determination of the optical thickness and the vertical and horizontal extent of cirrus clouds is necessary for the evaluation of all of these effects. Lidar can be an effective tool for investigating these properties. During the FIRE cirrus IFO in Oct. to Nov. 1986, the High Spectral Resolution Lidar (HSRL) was operated from a rooftop site on the campus of the University of Wisconsin at Madison, Wisconsin. Approximately 124 hours of fall season data were acquired under a variety of cloud optical thickness conditions. Since the IFO, the HSRL data set has been expanded by more than 63.5 hours of additional data acquired during all seasons. Measurements are presented for the range in optical thickness and backscattering phase function of the cirrus clouds, as well as contour maps of extinction-corrected backscatter cross sections indicating cloud morphology. Color-enhanced range-time indicator (RTI) images displaying a variety of cirrus clouds with approximately 30 s time resolution are presented. The importance of extinction correction in the interpretation of cloud height and structure from lidar observations of optically thick cirrus is demonstrated.

  19. Experimental validation of a multi-energy x-ray adapted scatter separation method

    NASA Astrophysics Data System (ADS)

    Sossin, A.; Rebuffel, V.; Tabary, J.; Létang, J. M.; Freud, N.; Verger, L.

    2016-12-01

    Both in radiography and computed tomography (CT), recently emerged energy-resolved x-ray photon counting detectors enable the identification and quantification of individual materials comprising the inspected object. However, the approaches used for these operations require highly accurate x-ray images. The accuracy of the images is severely compromised by the presence of scattered radiation, which leads to a loss of spatial contrast and, more importantly, a bias in radiographic material imaging and artefacts in CT. The aim of the present study was to experimentally evaluate a recently introduced partial attenuation spectral scatter separation approach (PASSSA) adapted for multi-energy imaging. For this purpose, a prototype x-ray system was used. Several radiographic acquisitions of an anthropomorphic thorax phantom were performed. Reference primary images were obtained via the beam-stop (BS) approach. The attenuation images acquired from PASSSA-corrected data showed a substantial increase in local contrast and internal structure contour visibility when compared to uncorrected images. A substantial reduction of scatter induced bias was also achieved. Quantitatively, the developed method proved to be in relatively good agreement with the BS data. The application of the proposed scatter correction technique lowered the initial normalized root-mean-square error (NRMSE) of 45% between the uncorrected total and the reference primary spectral images by a factor of 9, thus reducing it to around 5%.

  20. On the Compton scattering redistribution function in plasma

    NASA Astrophysics Data System (ADS)

    Madej, J.; Różańska, A.; Majczyna, A.; Należyty, M.

    2017-08-01

Compton scattering is the dominant opacity source in hot neutron stars, accretion discs around black holes and hot coronae. We collected here a set of numerical expressions of the Compton scattering redistribution functions (RFs) for unpolarized radiation, which are more exact than the widely used Kompaneets equation. The principal aim of this paper is the presentation of the RF by Guilbert, which is corrected for the computational errors in the original paper. This corrected RF was used in the series of papers on model atmosphere computations of hot neutron stars. We have also organized four existing algorithms for the RF computations into a unified form ready to use in radiative transfer and model atmosphere codes. The exact method by Nagirner & Poutanen was numerically compared to all other algorithms in a very wide spectral range from hard X-rays to radio waves. Sample computations of the Compton scattering RFs in thermal plasma were done for temperatures corresponding to the atmospheres of bursting neutron stars and hot intergalactic medium. Our formulae are also useful to study the Compton scattering of unpolarized microwave background radiation in hot intracluster gas and the Sunyaev-Zeldovich effect. We conclude that the formulae by Guilbert and the exact quantum mechanical formulae yield practically the same RFs for gas temperatures relevant to the atmospheres of X-ray bursting neutron stars, T ≤ 10^8 K.
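
    For context, the Kompaneets equation mentioned above, the diffusion-limit approximation that the exact redistribution functions improve upon, can be written in its standard form as follows (n is the photon occupation number, x = hν/k_BT_e, and y is the Compton y-parameter); this is quoted for reference only and is not a result of the paper.

```latex
% Kompaneets equation: diffusion approximation to Compton scattering in
% thermal plasma, with x = h\nu / k_B T_e and
% dy = (k_B T_e / m_e c^2)\, n_e \sigma_T c \, dt
\frac{\partial n}{\partial y}
  = \frac{1}{x^{2}}\,\frac{\partial}{\partial x}
    \left[ x^{4}\left( \frac{\partial n}{\partial x} + n + n^{2} \right) \right]
```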

  1. Elastic/Inelastic Measurement Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yates, Steven; Hicks, Sally; Vanhoy, Jeffrey

    2016-03-01

The work scope involves the measurement of neutron scattering from natural sodium (23Na) and two isotopes of iron, 56Fe and 54Fe. Angular distributions, i.e., differential cross sections, of the scattered neutrons will be measured for 5 to 10 incident neutron energies per year. The work of the first year concentrates on 23Na, while the enriched iron samples are procured. Differential neutron scattering cross sections provide information to guide nuclear reaction model calculations in the low-energy (few MeV) fast-neutron region. This region lies just above the isolated resonance region, which in general is well studied; however, model calculations are difficult in this region because overlapping resonance structure is evident and direct nuclear reactions are becoming important. The standard optical model treatment exhibits good predictive ability for the wide-region average cross sections but cannot treat the overlapping resonance features. In addition, models that do predict the direct reaction component must be guided by measurements to describe correctly the strength of the direct component, e.g., β2 must be known to describe the direct component of the scattering to the first excited state. Measurements of the elastic scattering differential cross sections guide the optical model calculations, while inelastic differential cross sections provide the crucial information for correctly describing the direct component. Activities occurring during the performance period are described.

  2. Incoherent-scatter computed tomography with monochromatic synchrotron x ray: feasibility of multi-CT imaging system for simultaneous measurement of fluorescent and incoherent scatter x rays

    NASA Astrophysics Data System (ADS)

    Yuasa, T.; Akiba, M.; Takeda, T.; Kazama, M.; Hoshino, A.; Watanabe, Y.; Hyodo, K.; Dilmanian, F. A.; Akatsuka, T.; Itai, Y.

    1997-10-01

We describe a new system of incoherent scatter computed tomography (ISCT) using monochromatic synchrotron X rays, and we discuss its potential for use in in vivo medical imaging. The system operates on the basis of computed tomography (CT) of the first generation. The reconstruction method for ISCT uses the least squares method with singular value decomposition. The research was carried out at the BLNE-5A bending magnet beam line of the Tristan Accumulation Ring in KEK, Japan. An acrylic cylindrical phantom of 20-mm diameter containing a cross-shaped channel was imaged. The channel was filled with a diluted iodine solution with a concentration of 200 μg I/ml. Spectra obtained with the system's high purity germanium (HPGe) detector separated the incoherent X-ray line from the other notable peaks, i.e., the iodine Kα and Kβ1 X-ray fluorescence lines and the coherent scattering peak. CT images were reconstructed from projections generated by integrating the counts in an energy window approximately 2 keV wide centered on the incoherent scattering peak. The reconstruction routine employed an X-ray attenuation correction algorithm. The resulting image showed more homogeneity than one without the attenuation correction.
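
    The reconstruction step named above (least squares with singular value decomposition) can be sketched generically as a truncated-SVD pseudo-inverse applied to the projection data. The system matrix, truncation threshold and toy data below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def reconstruct_svd(A, p, rel_cutoff=1e-3):
    """Least-squares solution of A @ x = p via a truncated SVD pseudo-inverse."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rel_cutoff * s[0]                 # drop small singular values
    s_inv = np.where(keep, 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ p))

# toy system: 200 projection measurements of a 100-pixel object
rng = np.random.default_rng(1)
A = rng.random((200, 100))
x_true = rng.random(100)
p = A @ x_true + 1e-3 * rng.standard_normal(200)
print(np.max(np.abs(reconstruct_svd(A, p) - x_true)))   # small residual
```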

  3. Systematic approach to thermal leptogenesis

    NASA Astrophysics Data System (ADS)

    Frossard, T.; Garny, M.; Hohenegger, A.; Kartavtsev, A.; Mitrouskas, D.

    2013-04-01

In this work we study thermal leptogenesis using nonequilibrium quantum field theory. Starting from fundamental equations for correlators of the quantum fields, we describe the steps necessary to obtain quantum-kinetic equations for quasiparticles. These can easily be compared to conventional results and overcome conceptual problems inherent in the canonical approach. Beyond CP-violating decays, we also include those scattering processes that are tightly related to the decays in a consistent approximation of fourth order in the Yukawa couplings. It is demonstrated explicitly how the S-matrix elements for the scattering processes in the conventional approach are related to two- and three-loop contributions to the effective action. We derive effective decay and scattering amplitudes taking medium corrections and thermal masses into account. In this context we also investigate CP-violating Higgs decay within the same formalism. From the kinetic equations we derive rate equations for the lepton asymmetry that improve on conventional ones by including quantum-statistical effects and medium corrections to the quasiparticle properties.

  4. Measuring nanometre-scale electric fields in scanning transmission electron microscopy using segmented detectors.

    PubMed

    Brown, H G; Shibata, N; Sasaki, H; Petersen, T C; Paganin, D M; Morgan, M J; Findlay, S D

    2017-11-01

Electric field mapping using segmented detectors in the scanning transmission electron microscope has recently been achieved at the nanometre scale. However, converting these results to quantitative field measurements involves assumptions whose validity is unclear for thick specimens. We consider three approaches to quantitative reconstruction of the projected electric potential using segmented detectors: a segmented detector approximation to differential phase contrast and two variants on ptychographical reconstruction. Limitations to these approaches are also studied, particularly errors arising from detector segment size, inelastic scattering, and non-periodic boundary conditions. A simple calibration experiment is described which corrects the differential phase contrast reconstruction to give reliable quantitative results despite the finite detector segment size and the effects of plasmon scattering in thick specimens. A plasmon scattering correction to the segmented detector ptychography approaches is also given. Avoiding the imposition of periodic boundary conditions on the reconstructed projected electric potential leads to more realistic reconstructions.
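
    As a rough illustration of the segmented-detector differential phase contrast signal discussed above, the sketch below forms the normalized left-right and top-bottom segment differences. Treating these as proportional to the probe deflection (and hence the projected electric field) is the thin-specimen assumption whose limits the paper examines, and the scale factor would have to come from a calibration such as the one described; everything numerical here is a placeholder.

```python
import numpy as np

def dpc_signal(seg_right, seg_left, seg_top, seg_bottom):
    """Normalized first-moment (DPC) signals from four detector segments."""
    total = seg_right + seg_left + seg_top + seg_bottom
    return (seg_right - seg_left) / total, (seg_top - seg_bottom) / total

# toy 32x32 scan: a weak horizontal field shifts intensity between segments
ny, nx = 32, 32
tilt = np.tile(np.linspace(-0.01, 0.01, nx), (ny, 1))
quarter = 0.25 * np.ones((ny, nx))
sx, sy = dpc_signal(quarter + tilt, quarter - tilt, quarter, quarter)
print(sx.min(), sx.max(), np.abs(sy).max())
```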

  5. Experimental and theoretical electron-scattering cross-section data for dichloromethane

    NASA Astrophysics Data System (ADS)

    Krupa, K.; Lange, E.; Blanco, F.; Barbosa, A. S.; Pastega, D. F.; Sanchez, S. d'A.; Bettega, M. H. F.; García, G.; Limão-Vieira, P.; Ferreira da Silva, F.

    2018-04-01

We report on a combination of experimental and theoretical investigations into the elastic differential cross sections (DCSs) and integral cross sections for electron interactions with dichloromethane, CH2Cl2, for incident electron energies in the 7.0-30 eV range. Elastic electron-scattering cross-section calculations have been performed within the framework of the Schwinger multichannel method implemented with pseudopotentials (SMCPP), and the independent-atom model with screening-corrected additivity rule including interference-effects correction (IAM-SCAR+I). The present elastic DCSs have been found to agree reasonably well with the results of IAM-SCAR+I calculations above 20 eV and also with the SMC calculations below 30 eV. Although some discrepancies were found at 7 eV, the agreement between the two theoretical methodologies is remarkable as the electron-impact energy increases. Calculated elastic DCSs are also reported up to 10000 eV for scattering angles from 0° to 180°, together with total cross sections, within the IAM-SCAR+I framework.
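
    The integral cross sections reported alongside the DCSs follow from integration over solid angle, σ = 2π ∫ DCS(θ) sin θ dθ. The sketch below shows only that generic step, with placeholder angles and an isotropic test DCS.

```python
import numpy as np

def integral_cross_section(theta_deg, dcs):
    """sigma = 2*pi * integral over theta of DCS(theta)*sin(theta)."""
    theta = np.radians(theta_deg)
    return 2.0 * np.pi * np.trapz(dcs * np.sin(theta), theta)

# check: an isotropic DCS of 1 per steradian integrates to 4*pi ~ 12.566
theta = np.linspace(0.0, 180.0, 181)
print(integral_cross_section(theta, np.ones_like(theta)))
```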

  6. SU-E-J-135: Feasibility of Using Quantitative Cone Beam CT for Proton Adaptive Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jingqian, W; Wang, Q; Zhang, X

    2015-06-15

Purpose: To investigate the feasibility of using scatter-corrected cone beam CT (CBCT) for proton adaptive planning. Methods: A phantom study was used to evaluate the CT number difference between the planning CT (pCT), quantitative CBCT (qCBCT) with scatter correction and calibrated Hounsfield units using the adaptive scatter kernel superposition (ASKS) technique, and raw CBCT (rCBCT). After confirming the CT number accuracy, prostate patients, each with a pCT and several sets of weekly CBCT, were investigated for this study. Spot-scanning proton treatment plans were independently generated on pCT, qCBCT and rCBCT. The treatment plans were then recalculated on all images. Dose-volume-histogram (DVH) parameters and gamma analysis were used to compare dose distributions. Results: The phantom study suggested that Hounsfield unit accuracy for different materials is within 20 HU for qCBCT and over 250 HU for rCBCT. For prostate patients, proton dose could be calculated accurately on qCBCT but not on rCBCT. When the original plan was recalculated on qCBCT, tumor coverage was maintained when the anatomy was consistent with pCT. However, large dose variance was observed when the patient anatomy changed. An adaptive plan using qCBCT was able to recover tumor coverage and reduce dose to normal tissue. Conclusion: It is feasible to use quantitative CBCT (qCBCT) with scatter correction and calibrated Hounsfield units for proton dose calculation and adaptive planning in proton therapy. Partly supported by Varian Medical Systems.

  7. Low-resolution mapping of the effective attenuation coefficient of the human head: a multidistance approach applied to high-density optical recordings

    PubMed Central

    Chiarelli, Antonio M.; Maclin, Edward L.; Low, Kathy A.; Fantini, Sergio; Fabiani, Monica; Gratton, Gabriele

    2017-01-01

    Abstract. Near infrared (NIR) light has been widely used for measuring changes in hemoglobin concentration in the human brain (functional NIR spectroscopy, fNIRS). fNIRS is based on the differential measurement and estimation of absorption perturbations, which, in turn, are based on correctly estimating the absolute parameters of light propagation. To do so, it is essential to accurately characterize the baseline optical properties of tissue (absorption and reduced scattering coefficients). However, because of the diffusive properties of the medium, separate determination of absorption and scattering across the head is challenging. The effective attenuation coefficient (EAC), which is proportional to the geometric mean of absorption and reduced scattering coefficients, can be estimated in a simpler fashion by multidistance light decay measurements. EAC mapping could be of interest for the scientific community because of its absolute information content, and because light propagation is governed by the EAC for source–detector distances exceeding 1 cm, which sense depths extending beyond the scalp and skull layers. Here, we report an EAC mapping procedure that can be applied to standard fNIRS recordings, yielding topographic maps with 2- to 3-cm resolution. Application to human data indicates the importance of venous sinuses in determining regional EAC variations, a factor often overlooked. PMID:28466026

  8. Low-resolution mapping of the effective attenuation coefficient of the human head: a multidistance approach applied to high-density optical recordings.

    PubMed

    Chiarelli, Antonio M; Maclin, Edward L; Low, Kathy A; Fantini, Sergio; Fabiani, Monica; Gratton, Gabriele

    2017-04-01

    Near infrared (NIR) light has been widely used for measuring changes in hemoglobin concentration in the human brain (functional NIR spectroscopy, fNIRS). fNIRS is based on the differential measurement and estimation of absorption perturbations, which, in turn, are based on correctly estimating the absolute parameters of light propagation. To do so, it is essential to accurately characterize the baseline optical properties of tissue (absorption and reduced scattering coefficients). However, because of the diffusive properties of the medium, separate determination of absorption and scattering across the head is challenging. The effective attenuation coefficient (EAC), which is proportional to the geometric mean of absorption and reduced scattering coefficients, can be estimated in a simpler fashion by multidistance light decay measurements. EAC mapping could be of interest for the scientific community because of its absolute information content, and because light propagation is governed by the EAC for source-detector distances exceeding 1 cm, which sense depths extending beyond the scalp and skull layers. Here, we report an EAC mapping procedure that can be applied to standard fNIRS recordings, yielding topographic maps with 2- to 3-cm resolution. Application to human data indicates the importance of venous sinuses in determining regional EAC variations, a factor often overlooked.
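
    A minimal sketch of the multidistance estimate described above: in the diffusion regime the detected intensity falls roughly as exp(-EAC·r)/r², so the slope of ln(r²·I) versus source-detector distance gives the EAC. The fitting model and the optical properties used in the toy data are assumptions for illustration, not the authors' processing pipeline.

```python
import numpy as np

def estimate_eac(distances_cm, intensities):
    """EAC (1/cm) from multidistance data, assuming I(r) ~ exp(-EAC*r)/r**2."""
    y = np.log(intensities * distances_cm ** 2)
    slope, _ = np.polyfit(distances_cm, y, 1)
    return -slope

# toy medium: mu_a = 0.1 /cm, mu_s' = 10 /cm -> EAC = sqrt(3*0.1*10) ~ 1.73 /cm
mu_eff = np.sqrt(3 * 0.1 * 10)
r = np.linspace(2.0, 4.0, 9)                 # cm, beyond ~1 cm as in the abstract
I = np.exp(-mu_eff * r) / r ** 2
print(estimate_eac(r, I))                    # ~1.73
```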

  9. High-resolution imaging of the large non-human primate brain using microPET: a feasibility study

    NASA Astrophysics Data System (ADS)

    Naidoo-Variawa, S.; Hey-Cunningham, A. J.; Lehnert, W.; Kench, P. L.; Kassiou, M.; Banati, R.; Meikle, S. R.

    2007-11-01

    The neuroanatomy and physiology of the baboon brain closely resembles that of the human brain and is well suited for evaluating promising new radioligands in non-human primates by PET and SPECT prior to their use in humans. These studies are commonly performed on clinical scanners with 5 mm spatial resolution at best, resulting in sub-optimal images for quantitative analysis. This study assessed the feasibility of using a microPET animal scanner to image the brains of large non-human primates, i.e. papio hamadryas (baboon) at high resolution. Factors affecting image accuracy, including scatter, attenuation and spatial resolution, were measured under conditions approximating a baboon brain and using different reconstruction strategies. Scatter fraction measured 32% at the centre of a 10 cm diameter phantom. Scatter correction increased image contrast by up to 21% but reduced the signal-to-noise ratio. Volume resolution was superior and more uniform using maximum a posteriori (MAP) reconstructed images (3.2-3.6 mm3 FWHM from centre to 4 cm offset) compared to both 3D ordered subsets expectation maximization (OSEM) (5.6-8.3 mm3) and 3D reprojection (3DRP) (5.9-9.1 mm3). A pilot 18F-2-fluoro-2-deoxy-d-glucose ([18F]FDG) scan was performed on a healthy female adult baboon. The pilot study demonstrated the ability to adequately resolve cortical and sub-cortical grey matter structures in the baboon brain and improved contrast when images were corrected for attenuation and scatter and reconstructed by MAP. We conclude that high resolution imaging of the baboon brain with microPET is feasible with appropriate choices of reconstruction strategy and corrections for degrading physical effects. Further work to develop suitable correction algorithms for high-resolution large primate imaging is warranted.

  10. Hard two-photon contribution to elastic lepton-proton scattering determined by the OLYMPUS experiment

    NASA Astrophysics Data System (ADS)

    Hasell, D. K.; OLYMPUS Collaboration

    2018-02-01

The OLYMPUS collaboration has recently made a precise measurement of the positron-proton to electron-proton elastic scattering cross section ratio, R2γ, over a wide range of the virtual photon polarization, 0.456 < ɛ < 0.978. This provides a direct measure of hard two-photon exchange in elastic lepton-proton scattering, widely thought to explain the discrepancy observed between unpolarized and polarized measurements of the proton form factor ratio, μ_p G_E^p/G_M^p. The OLYMPUS results for R2γ lie within 1% of unity over the range of momentum transfers measured and are significantly lower than theoretical calculations that can explain part of the observed discrepancy in terms of two-photon exchange at higher momentum transfers. However, the results are in reasonable agreement with predictions based on phenomenological fits to the available form factor data. The motivation for measuring R2γ will be presented, followed by a description of the OLYMPUS experiment. The importance of radiative corrections in the analysis will also be shown. We will then present the OLYMPUS results and compare them with results from two similar experiments and with theoretical calculations.

  11. Wavefront shaping to correct intraocular scattering

    NASA Astrophysics Data System (ADS)

    Artal, Pablo; Arias, Augusto; Fernández, Enrique

    2018-02-01

Cataract is a common ocular pathology that increases the amount of intraocular scattering. It degrades the quality of vision through both blur and contrast reduction of the retinal image. In this work, we propose a non-invasive method, based on wavefront shaping (WS), to minimize cataract effects. For the experimental demonstration of the method, a liquid crystal on silicon (LCoS) spatial light modulator was used for both the reproduction and the reduction of realistic cataract effects. The LCoS area was separated into two halves conjugated with the eye's pupil by a telescope with unit magnification. Thus, while phase maps inducing programmable amounts of intraocular scattering (related to cataract severity) were displayed on one half of the LCoS, test wavefronts were sequentially displayed on the other. The resulting imaging improvements were visually evaluated by subjects with no known ocular pathology viewing through the instrument. The diffracted intensity at the exit pupil is analyzed as feedback for the implemented algorithms in the search for the optimum wavefront. Numerical and experimental results of the imaging improvements are presented and discussed.
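
    One common wavefront-shaping feedback loop is a stepwise-sequential phase search, sketched below: each correction segment scans a set of phase values and keeps the one that maximizes the feedback metric. The metric, segment count and phase steps here are placeholders standing in for the paper's actual feedback (the diffracted intensity at the exit pupil) and its specific algorithms, which are not reproduced.

```python
import numpy as np

def stepwise_sequential_ws(n_segments, measure_metric, phase_steps=8):
    """Stepwise-sequential wavefront shaping with a user-supplied metric."""
    correction = np.zeros(n_segments)
    trial_phases = np.linspace(0, 2 * np.pi, phase_steps, endpoint=False)
    for seg in range(n_segments):
        scores = []
        for phi in trial_phases:
            correction[seg] = phi
            scores.append(measure_metric(correction))
        correction[seg] = trial_phases[int(np.argmax(scores))]   # keep best phase
    return correction

# toy 'cataract': the metric peaks when the correction cancels a random phase map
rng = np.random.default_rng(0)
aberration = rng.uniform(0, 2 * np.pi, 32)
metric = lambda c: np.abs(np.sum(np.exp(1j * (c - aberration)))) ** 2
best = stepwise_sequential_ws(32, metric)
print(metric(np.zeros(32)), metric(best))    # focus intensity before / after
```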

  12. Modelling the physics in iterative reconstruction for transmission computed tomography

    PubMed Central

    Nuyts, Johan; De Man, Bruno; Fessler, Jeffrey A.; Zbijewski, Wojciech; Beekman, Freek J.

    2013-01-01

There is an increasing interest in iterative reconstruction (IR) as a key tool to improve quality and increase the applicability of X-ray CT imaging. IR has the ability to significantly reduce patient dose; it provides the flexibility to reconstruct images from arbitrary X-ray system geometries and it allows the inclusion of detailed models of photon transport and detection physics to accurately correct for a wide variety of image-degrading effects. This paper reviews discretisation issues and modelling of finite spatial resolution, Compton scatter in the scanned object, data noise and the energy spectrum. Widespread implementation of IR with highly accurate model-based correction, however, still requires significant effort. In addition, new hardware will provide new opportunities and challenges to improve CT with new modelling. PMID:23739261

  13. Experimental study of the electric dipole strength in the even Mo nuclei and its deformation dependence

    NASA Astrophysics Data System (ADS)

    Erhard, M.; Junghans, A. R.; Nair, C.; Schwengner, R.; Beyer, R.; Klug, J.; Kosev, K.; Wagner, A.; Grosse, E.

    2010-03-01

    Two methods based on bremsstrahlung were applied to the stable even Mo isotopes for the experimental determination of the photon strength function covering the high excitation energy range above 4 MeV with its increasing level density. Photon scattering was used up to the neutron separation energies Sn and data up to the maximum of the isovector giant resonance (GDR) were obtained by photoactivation. After a proper correction for multistep processes the observed quasicontinuous spectra of scattered photons show a remarkably good match to the photon strengths derived from nuclear photoeffect data obtained previously by neutron detection and corrected in absolute scale by using the new activation results. The combined data form an excellent basis to derive a shape dependence of the E1 strength in the even Mo isotopes with increasing deviation from the N=50 neutron shell (i.e., with the impact of quadrupole deformation and triaxiality). The wide energy coverage of the data allows for a stringent assessment of the dipole sum rule and a test of a novel parametrization developed previously which is based on it. This parametrization for the electric dipole strength function in nuclei with A>80 deviates significantly from prescriptions generally used previously. In astrophysical network calculations it may help to quantify the role the p-process plays in cosmic nucleosynthesis. It also has impact on the accurate analysis of neutron capture data of importance for future nuclear energy systems and waste transmutation.

  14. A Closure Study of Total Scattering Using Airborne In Situ Measurements from the Winter Phase of TCAP

    DOE PAGES

    Kassianov, Evgueni; Berg, Larry; Pekour, Mikhail; ...

    2018-06-12

We examine the performance of our approach for calculating the total scattering coefficient of both non-absorbing and absorbing aerosol at ambient conditions from aircraft data. Our extended examination involves airborne in situ data collected by the U.S. Department of Energy’s (DOE) Gulf Stream 1 aircraft during winter over Cape Cod and the western North Atlantic Ocean as part of the Two-Column Aerosol Project (TCAP). The particle population represented by the winter dataset, in contrast with its summer counterpart, contains more hygroscopic particles and particles with an enhanced ability to absorb sunlight due to the larger fraction of black carbon. Moreover, the winter observations are characterized by more frequent clouds and a larger fraction of super-micron particles. We calculate model total scattering coefficient at ambient conditions using size spectra measured by optical particle counters (OPCs) and ambient complex refractive index (RI) estimated from measured chemical composition and relative humidity (RH). We demonstrate that reasonable agreement (~20% on average) between the observed and calculated scattering can be obtained under subsaturated ambient conditions (RH < 80%) by applying both screening for clouds and chemical composition data for the RI-based correction of the OPC-derived size spectra.
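
    The forward calculation described above can be sketched as an integral of a scattering efficiency over the OPC size distribution. In the real closure study the per-bin efficiencies come from Mie theory evaluated with the ambient complex refractive index; in the placeholder example below they are simply set to a constant, and all numbers are invented.

```python
import numpy as np

def scattering_coefficient(diameters_um, dNdlogD_cm3, qsca):
    """Total scattering coefficient (1/Mm) from an OPC size distribution.

    diameters_um : bin mid-point diameters [micrometres]
    dNdlogD_cm3  : dN/dlogD per bin [cm^-3]
    qsca         : per-bin scattering efficiency (e.g. from Mie theory)
    """
    d_m = diameters_um * 1e-6                      # m
    n_m3 = dNdlogD_cm3 * 1e6                       # m^-3 per logD interval
    dlogD = np.gradient(np.log10(diameters_um))
    area = np.pi * d_m ** 2 / 4.0                  # geometric cross-section [m^2]
    return np.sum(qsca * area * n_m3 * dlogD) * 1e6   # 1/m -> 1/Mm

# toy mono-modal distribution around 0.3 um with a constant efficiency of 2
D = np.logspace(-1, 1, 50)                         # 0.1-10 um
N = 100.0 * np.exp(-0.5 * (np.log(D / 0.3) / 0.5) ** 2)
print(scattering_coefficient(D, N, qsca=2.0 * np.ones_like(D)))
```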

  15. Accurate optimization of amino acid form factors for computing small-angle X-ray scattering intensity of atomistic protein structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tong, Dudu; Yang, Sichun; Lu, Lanyuan

    2016-06-20

Structure modelling via small-angle X-ray scattering (SAXS) data generally requires intensive computations of scattering intensity from any given biomolecular structure, where the accurate evaluation of SAXS profiles using coarse-grained (CG) methods is vital to improve computational efficiency. To date, most CG SAXS computing methods have been based on a single-bead-per-residue approximation but have neglected structural correlations between amino acids. To improve the accuracy of scattering calculations, accurate CG form factors of amino acids are now derived using a rigorous optimization strategy, termed electron-density matching (EDM), to best fit electron-density distributions of protein structures. This EDM method is compared with and tested against other CG SAXS computing methods, and the resulting CG SAXS profiles from EDM agree better with all-atom theoretical SAXS data. By including the protein hydration shell represented by explicit CG water molecules and the correction of protein excluded volume, the developed CG form factors also reproduce the selected experimental SAXS profiles with very small deviations. Taken together, these EDM-derived CG form factors present an accurate and efficient computational approach for SAXS computing, especially when higher molecular details (represented by the q range of the SAXS data) become necessary for effective structure modelling.
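
    Form factors of the kind optimized here enter the SAXS calculation through a Debye-type double sum over bead pairs. The sketch below shows that generic step with placeholder coordinates and constant form factors; it does not use the EDM-derived values or the hydration-shell and excluded-volume corrections.

```python
import numpy as np

def debye_saxs(coords, form_factors, q_values):
    """I(q) = sum_ij f_i(q) f_j(q) sin(q r_ij)/(q r_ij) over CG beads.

    coords       : (N, 3) bead positions [Angstrom]
    form_factors : (N, Nq) per-bead form factors evaluated at q_values
    q_values     : (Nq,) momentum transfers [1/Angstrom]
    """
    r = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    intensity = np.empty_like(q_values)
    for k, q in enumerate(q_values):
        f = form_factors[:, k]
        sinc = np.sinc(q * r / np.pi)            # sin(x)/x with x = q*r
        intensity[k] = f @ sinc @ f
    return intensity

# toy three-bead chain with constant form factors
coords = np.array([[0.0, 0.0, 0.0], [3.8, 0.0, 0.0], [7.6, 0.0, 0.0]])
q = np.linspace(0.01, 0.5, 50)
print(debye_saxs(coords, np.ones((3, q.size)), q)[:3])
```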

  16. Statistical estimation of ultrasonic propagation path parameters for aberration correction.

    PubMed

    Waag, Robert C; Astheimer, Jeffrey P

    2005-05-01

    Parameters in a linear filter model for ultrasonic propagation are found using statistical estimation. The model uses an inhomogeneous-medium Green's function that is decomposed into a homogeneous-transmission term and a path-dependent aberration term. Power and cross-power spectra of random-medium scattering are estimated over the frequency band of the transmit-receive system by using closely situated scattering volumes. The frequency-domain magnitude of the aberration is obtained from a normalization of the power spectrum. The corresponding phase is reconstructed from cross-power spectra of subaperture signals at adjacent receive positions by a recursion. The subapertures constrain the receive sensitivity pattern to eliminate measurement system phase contributions. The recursion uses a Laplacian-based algorithm to obtain phase from phase differences. Pulse-echo waveforms were acquired from a point reflector and a tissue-like scattering phantom through a tissue-mimicking aberration path from neighboring volumes having essentially the same aberration path. Propagation path aberration parameters calculated from the measurements of random scattering through the aberration phantom agree with corresponding parameters calculated for the same aberrator and array position by using echoes from the point reflector. The results indicate the approach describes, in addition to time shifts, waveform amplitude and shape changes produced by propagation through distributed aberration under realistic conditions.
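
    A one-dimensional sketch of the phase-recovery idea described above: the angle of the cross-power spectrum between signals at adjacent receive positions approximates the aberration phase difference, and the differences are then integrated. The paper's recursion uses a Laplacian-based solver over a 2-D aperture; the cumulative sum below is a 1-D stand-in, and the toy narrowband signals are assumptions.

```python
import numpy as np

def aberration_phase_from_neighbors(signals):
    """Aberration phase per element (up to a constant) from neighbour cross-spectra."""
    spectra = np.fft.rfft(signals, axis=1)
    centre = spectra.shape[1] // 2                      # band-centre bin of the toy pulse
    cross = spectra[1:, centre] * np.conj(spectra[:-1, centre])
    dphi = np.angle(cross)                              # phase differences
    return np.concatenate(([0.0], np.cumsum(dphi)))     # 1-D integration of differences

# toy test: a narrowband pulse with a known per-element phase offset
n_el, n_s = 8, 256
t = np.arange(n_s)
env = np.exp(-0.5 * ((t - 128) / 20.0) ** 2)
true_phase = np.linspace(0.0, 1.0, n_el)
sig = np.array([env * np.cos(2 * np.pi * 0.25 * t + p) for p in true_phase])
print(np.round(aberration_phase_from_neighbors(sig), 2))
print(np.round(true_phase, 2))
```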

  17. A Closure Study of Total Scattering Using Airborne In Situ Measurements from the Winter Phase of TCAP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kassianov, Evgueni; Berg, Larry; Pekour, Mikhail

We examine the performance of our approach for calculating the total scattering coefficient of both non-absorbing and absorbing aerosol at ambient conditions from aircraft data. Our extended examination involves airborne in situ data collected by the U.S. Department of Energy’s (DOE) Gulf Stream 1 aircraft during winter over Cape Cod and the western North Atlantic Ocean as part of the Two-Column Aerosol Project (TCAP). The particle population represented by the winter dataset, in contrast with its summer counterpart, contains more hygroscopic particles and particles with an enhanced ability to absorb sunlight due to the larger fraction of black carbon. Moreover, the winter observations are characterized by more frequent clouds and a larger fraction of super-micron particles. We calculate model total scattering coefficient at ambient conditions using size spectra measured by optical particle counters (OPCs) and ambient complex refractive index (RI) estimated from measured chemical composition and relative humidity (RH). We demonstrate that reasonable agreement (~20% on average) between the observed and calculated scattering can be obtained under subsaturated ambient conditions (RH < 80%) by applying both screening for clouds and chemical composition data for the RI-based correction of the OPC-derived size spectra.

  18. Disorder dependence electron phonon scattering rate of V82Pd18-xFex alloys at low temperature

    NASA Astrophysics Data System (ADS)

    Jana, R. N.; Meikap, A. K.

    2018-04-01

We have systematically investigated the disorder dependence of the electron-phonon scattering rate in three-dimensional disordered V82Pd18-xFex alloys. A minimum in the temperature-dependent resistivity curve has been observed at a low temperature T = Tm. In the temperature range 5 K ≤ T ≤ Tm the resistivity correction follows a ρ0^(5/2)T^(1/2) law. The dephasing scattering time has been calculated from an analysis of the magnetoresistivity using weak-localization theory. The electron dephasing time is dominated by electron-phonon scattering and follows an anomalous temperature (T) and disorder (ρ0) dependence of the form τ_e-ph^(-1) ∝ T^2/ρ0, where ρ0 is the impurity resistivity. The magnitude of the saturated dephasing scattering time (τ0) at zero temperature decreases with increasing disorder of the samples. Such anomalous behaviour of the dephasing scattering rate is still unresolved.
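
    A minimal curve-fitting sketch of the quoted dependence, 1/τ = 1/τ0 + A·T²/ρ0, is given below; the numerical values, units and noise level are placeholders rather than data from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def dephasing_rate(T, inv_tau0, A, rho0):
    """Model form from the abstract: 1/tau = 1/tau_0 + A*T**2/rho_0."""
    return inv_tau0 + A * T ** 2 / rho0

rho0 = 150.0                                   # impurity resistivity (placeholder units)
T = np.linspace(2.0, 10.0, 20)
rng = np.random.default_rng(2)
rate = dephasing_rate(T, 0.05, 3.0, rho0) * (1 + 0.02 * rng.standard_normal(T.size))
popt, _ = curve_fit(lambda T, a, b: dephasing_rate(T, a, b, rho0), T, rate, p0=[0.1, 1.0])
print(popt)                                    # recovered (1/tau_0, A)
```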

  19. The effect of relativistic Compton scattering on thermonuclear burn of pure deuterium fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghasemizad, A.; Nazirzadeh, M.; Khanbabaei, B.

The relativistic effects of Compton scattering on the thermonuclear burn-up of pure deuterium fuel in non-equilibrium plasma have been studied using four-temperature (4T) theory. In the limit of low electron temperatures and photon energies, nonrelativistic Compton scattering is a valid and convenient approximation, but at high energy exchange rates between electrons and photons it breaks down. The deficiencies of the nonrelativistic approximation can be overcome by using the relativistic correction in the photon kinetic equation. In this research, we have utilized 4T theory to calculate the critical burn-up parameter for pure deuterium fuel while treating Compton scattering as a relativistic phenomenon. It was shown that the critical burn-up parameter at ignition with relativistic Compton scattering is smaller than that obtained with nonrelativistic Compton scattering.

  20. Holographic corrections to the Veneziano amplitude

    NASA Astrophysics Data System (ADS)

    Armoni, Adi; Ireson, Edwin

    2017-08-01

    We propose a holographic computation of the 2 → 2 meson scattering in a curved string background, dual to a QCD-like theory. We recover the Veneziano amplitude and compute a perturbative correction due to the background curvature. The result implies a small deviation from a linear trajectory, which is a requirement of the UV regime of QCD.
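
    For reference, the Veneziano amplitude recovered at leading order, with a linear Regge trajectory α(x) = α(0) + α′x, takes the standard form below; the holographic curvature correction discussed above perturbs the linearity of this trajectory.

```latex
% Veneziano amplitude for 2 -> 2 scattering with \alpha(x) = \alpha(0) + \alpha' x
A(s,t) = \frac{\Gamma\!\left(-\alpha(s)\right)\,\Gamma\!\left(-\alpha(t)\right)}
              {\Gamma\!\left(-\alpha(s)-\alpha(t)\right)}
```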
