NASA Astrophysics Data System (ADS)
Liu, Xinming; Shaw, Chris C.; Wang, Tianpeng; Chen, Lingyun; Altunbas, Mustafa C.; Kappadath, S. Cheenu
2006-03-01
We developed and investigated a scanning sampled measurement (SSM) technique for scatter measurement and correction in cone beam breast CT imaging. A cylindrical polypropylene phantom (water equivalent) was mounted on a rotating table in a stationary-gantry experimental cone beam breast CT imaging system. A 2-D array of lead beads, spaced ~1 cm apart and slightly tilted vertically, was placed between the object and the x-ray source. A series of projection images was acquired as the phantom was rotated 1 degree per projection view and the lead bead array was shifted vertically from one view to the next. A series of lead bars was also placed at the phantom edge to improve scatter estimation across the phantom edges. Image signals in the lead bead/bar shadows were used to obtain sampled scatter measurements, which were then interpolated to form an estimated scatter distribution across the projection images. The image data behind the lead bead/bar shadows were restored by interpolating image data from the two adjacent projection views to form beam-block-free projection images. The estimated scatter distribution was then subtracted from the corresponding restored projection image to obtain scatter-removed projection images. Our preliminary experiment demonstrated that it is feasible to implement the SSM technique for scatter estimation and correction in cone beam breast CT imaging. Scatter correction was successfully performed on all projection images using the scatter distribution interpolated from SSM and the restored projection image data. The scatter-corrected projection data resulted in elevated CT numbers and greatly reduced cupping artifacts.
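The core of the SSM pipeline (sample the scatter field under the beam-block shadows, interpolate to the full detector, subtract) can be sketched as follows. This is a minimal numpy-only illustration: the grid spacing, signal levels, and the linear scatter field are hypothetical, not the experimental values.

```python
import numpy as np

def upsample_scatter(coarse, ys, xs, shape):
    """Bilinearly interpolate scatter values sampled on a regular bead grid
    (rows at ys, columns at xs) up to the full detector size."""
    H, W = shape
    xfull, yfull = np.arange(W), np.arange(H)
    rows = np.array([np.interp(xfull, xs, r) for r in coarse])      # along x
    return np.array([np.interp(yfull, ys, rows[:, j])               # along y
                     for j in range(W)]).T

# toy projection: flat primary signal plus a slowly varying scatter background
H, W = 64, 64
yy, xx = np.mgrid[0:H, 0:W]
scatter = 0.2 + 0.002 * xx + 0.001 * yy
primary = np.ones((H, W))
projection = primary + scatter

# "bead shadows" every 7 pixels give direct samples of the scatter field
ys, xs = np.arange(0, H, 7), np.arange(0, W, 7)
samples = scatter[np.ix_(ys, xs)]        # in practice: signal behind the beads

estimate = upsample_scatter(samples, ys, xs, (H, W))
corrected = projection - estimate
```

Because the toy scatter field is linear, the bilinear estimate recovers it exactly; real scatter distributions are only approximately smooth, which is why the paper also restores the shadowed pixels from adjacent views.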
Aureolegraph internal scattering correction.
DeVore, John; Villanucci, Dennis; LePage, Andrew
2012-11-20
Two methods of determining instrumental scattering for correcting aureolegraph measurements of particulate solar scattering are presented. One involves subtracting measurements made with and without an external occluding ball and the other is a modification of the Langley Plot method and involves extrapolating aureolegraph measurements collected through a large range of solar zenith angles. Examples of internal scattering correction determinations using the latter method show similar power-law dependencies on scattering, but vary by roughly a factor of 8 and suggest that changing aerosol conditions during the determinations render this method problematic. Examples of corrections of scattering profiles using the former method are presented for a range of atmospheric particulate layers from aerosols to cumulus and cirrus clouds.
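The modified Langley-plot idea above rests on the fact that the log of a directly transmitted signal is linear in airmass, so a fit extrapolated to zero airmass recovers the instrument constant without absolute calibration. A minimal sketch with synthetic data (the optical depth and signal level are assumed, and real determinations suffer from the changing-aerosol problem the abstract notes):

```python
import numpy as np

# synthetic aureolegraph signal over a range of solar zenith angles
tau_true, V0_true = 0.25, 1.0                    # assumed optical depth / constant
zenith = np.deg2rad(np.arange(30.0, 76.0, 5.0))
m = 1.0 / np.cos(zenith)                         # plane-parallel airmass
V = V0_true * np.exp(-tau_true * m)

# Langley plot: ln V is linear in airmass, so extrapolating the fit to
# m = 0 recovers the instrument constant V0 without absolute calibration
slope, intercept = np.polyfit(m, np.log(V), 1)
tau_est, V0_est = -slope, np.exp(intercept)
```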
Source distribution dependent scatter correction for PVI
Barney, J.S.; Harrop, R.; Dykstra, C.J. (School of Computing Science; TRIUMF, Vancouver, British Columbia)
1993-08-01
Source-distribution-dependent scatter correction methods that incorporate different amounts of information about the source position and material distribution have been developed and tested. The techniques use image-to-projection integral transformation incorporating varying degrees of information on the distribution of scattering material, or convolution-subtraction methods, with some information about the scattering material included in one of the convolution methods. To test the techniques, the authors apply them to data generated by Monte Carlo simulations that use geometric shapes or a voxelized density map to model the scattering material. Source position and material distribution have been found to have some effect on scatter correction. An image-to-projection method that incorporates a density map produces accurate scatter correction but is computationally expensive. Simpler methods, both image-to-projection and convolution, can also provide effective scatter correction.
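The convolution-subtraction family mentioned above models scatter as a fraction of the primary distribution blurred by a broad kernel, and refines the primary estimate by resubtraction. A 1-D sketch under assumed kernel shape and scatter fraction (both hypothetical, not the paper's fitted values):

```python
import numpy as np

def convolution_subtraction(measured, kernel, k=0.3, n_iter=3):
    """Iterative convolution-subtraction: scatter is modelled as a fraction k
    of the primary distribution blurred by a broad kernel; each iteration
    recomputes the scatter estimate from the current primary estimate."""
    K = np.fft.rfft(np.fft.ifftshift(kernel))
    primary = measured.copy()
    for _ in range(n_iter):
        scatter = k * np.fft.irfft(np.fft.rfft(primary) * K, n=primary.size)
        primary = measured - scatter
    return primary

# toy 1-D projection: narrow primary peak plus broad scatter tail
x = np.linspace(-1.0, 1.0, 256)
kernel = np.exp(-x**2 / 0.1)
kernel /= kernel.sum()                            # unit-area blur kernel
true_primary = np.exp(-x**2 / 0.02)
K = np.fft.rfft(np.fft.ifftshift(kernel))
measured = true_primary + 0.3 * np.fft.irfft(np.fft.rfft(true_primary) * K, n=256)

est_primary = convolution_subtraction(measured, kernel)
```

Each pass shrinks the error by a factor of roughly the scatter fraction, so a few iterations suffice when the fraction is well below one.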
Improved scatter correction using adaptive scatter kernel superposition
NASA Astrophysics Data System (ADS)
Sun, M.; Star-Lack, J. M.
2010-11-01
Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS) requiring spatial domain convolutions and (2) fast adaptive scatter kernel superposition (fASKS) where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS, were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
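The adaptive element of ASKS, kernels whose shape follows local object thickness, can be illustrated with a toy 1-D grouping scheme: pixels are binned by thickness and each bin is convolved with its own kernel. The amplitudes, widths, and threshold here are hypothetical placeholders, not the calibrated kernels of the paper.

```python
import numpy as np

def adaptive_sks(projection, thickness, amp=0.1, widths=(4.0, 12.0), thresh=2.0):
    """Toy 1-D adaptive scatter kernel superposition: pixels are grouped by
    local object thickness, each group is convolved with its own Gaussian
    kernel (broader for thicker regions), and the group estimates are summed."""
    n = projection.size
    x = np.arange(n) - n // 2
    scatter = np.zeros(n)
    groups = ((widths[0], thickness < thresh), (widths[1], thickness >= thresh))
    for width, mask in groups:
        kern = np.exp(-0.5 * (x / width) ** 2)
        kern /= kern.sum()                        # unit-area kernel
        grp = np.where(mask, projection, 0.0)     # this group's pixels only
        scatter += amp * np.fft.irfft(np.fft.rfft(grp) *
                                      np.fft.rfft(np.fft.ifftshift(kern)), n=n)
    return scatter

proj = np.ones(64)
thick = np.linspace(0.0, 4.0, 64)                 # toy thickness profile
est = adaptive_sks(proj, thick)
corrected = proj - est
```

Performing the group convolutions in Fourier space is what makes the fASKS variant fast; the grouping approximates a kernel that varies continuously with thickness.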
Algorithmic scatter correction in dual-energy digital mammography
Chen, Xi; Mou, Xuanqin; Nishikawa, Robert M.; Lau, Beverly A.; Chan, Suk-tak; Zhang, Lei
2013-11-15
Background DE calcification signals were reduced by 58% when the algorithmic scatter correction was applied, relative to scatter-uncorrected data. With the scatter-correction algorithm and denoising, the minimum visible calcification size was reduced from 380 to 280 μm. Conclusions: When the proposed algorithmic scatter correction is applied to images, the background DE calcification signals are reduced and the CNR of calcifications is improved. This method performs similarly to, or better than, the pinhole-array interpolation method for scatter correction in DEDM; moreover, it is convenient and requires no extra exposure to the patient. Although the proposed scatter correction method is effective, it was validated only on a 5-cm-thick phantom with calcifications and a homogeneous background. The method should be tested on structured backgrounds to gauge its effectiveness more accurately.
Accurate adiabatic correction in the hydrogen molecule
NASA Astrophysics Data System (ADS)
Pachucki, Krzysztof; Komasa, Jacek
2014-12-01
A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H₂, HD, HT, D₂, DT, and T₂ has been determined. For the ground state of H₂ the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present-day theoretical predictions for the rovibrational levels.
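The final step described above, solving the nuclear Schrödinger equation on an adiabatic-corrected potential sampled at discrete internuclear distances, can be sketched with a simple finite-difference eigensolver. The potential below is a harmonic toy (with a roughly H2-like reduced mass) used only to check the solver against the known levels E_n = (n + 1/2)ω; the paper's potential and precision are far beyond this sketch.

```python
import numpy as np

def vibrational_levels(R, V, mu):
    """Finite-difference solution of the radial nuclear Schroedinger equation
    (atomic units, Dirichlet boundaries). V would be the Born-Oppenheimer
    potential plus the adiabatic correction; any sampled potential works."""
    h = R[1] - R[0]
    n = R.size
    kinetic = (np.diag(np.full(n, 2.0))
               - np.diag(np.ones(n - 1), 1)
               - np.diag(np.ones(n - 1), -1)) / (2.0 * mu * h * h)
    return np.linalg.eigvalsh(kinetic + np.diag(V))

# sanity check against a harmonic potential, where E_n = (n + 1/2) * omega
mu, omega = 918.0, 0.02            # ~H2 reduced mass in a.u.; toy frequency
R = np.linspace(0.0, 4.0, 1000)
V = 0.5 * mu * omega**2 * (R - 2.0) ** 2
E = vibrational_levels(R, V, mu)
```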
Iterative scatter correction based on artifact assessment
NASA Astrophysics Data System (ADS)
Wiegert, Jens; Hohmann, Steffen; Bertram, Matthias
2008-03-01
In this paper we propose a novel scatter correction methodology for x-ray cone-beam CT that combines the advantages of projection-based and volume-based correction approaches. The basic idea is to use a potentially non-optimal projection-based scatter correction method and to iteratively optimize its performance by repeatedly assessing the remaining scatter-induced artifacts in intermediately reconstructed volumes. The approach exploits the fact that, owing to the flatness of the scatter background, compensation itself is most easily performed in the projection domain, while the scatter-induced artifacts are better observed in the reconstructed volume. The presented method evaluates the scatter correction efficiency after each iteration by means of a quantitative measure characterizing the amount of residual cupping, and adjusts the parameters of the projection-based scatter correction for the next iteration accordingly. The potential of this iterative approach is demonstrated using voxelized Monte Carlo scatter simulations as ground truth. Using the proposed iterative scatter correction method, remarkable performance was achieved both with simple parametric heuristic techniques and by optimizing previously published scatter estimation schemes. For the human head, scatter-induced artifacts were reduced from initially 148 HU to between 8.1 HU and 9.1 HU for the different methods studied, corresponding to an artifact reduction exceeding 93%.
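The feedback loop described above, measure residual cupping in the reconstruction, then adjust the correction strength, can be sketched in 1-D. Here a bisection on a single scalar strength parameter drives a simple edge-minus-centre cupping measure to zero; the scatter shape, strength, and metric are all toy stand-ins for the paper's reconstruction-domain assessment.

```python
import numpy as np

def cupping(profile, edge=8, half=4):
    """Residual-cupping measure: mean edge level minus mean centre level."""
    c = profile.size // 2
    return profile[:edge].mean() - profile[c - half:c + half].mean()

# toy 1-D stand-in for a reconstructed slice of a homogeneous object:
# scatter depresses the centre, producing a cupped profile
n = 128
shape = np.exp(-((np.arange(n) - n / 2) / 30.0) ** 2)   # assumed scatter shape
measured = np.ones(n) - 0.3 * shape                     # true strength is 0.3

# iterate: adjust the correction strength f until the cupping measure vanishes
f_lo, f_hi = 0.0, 1.0
for _ in range(50):
    f = 0.5 * (f_lo + f_hi)
    corrected = measured + f * shape
    if cupping(corrected) > 0.0:     # still cupped: correction too weak
        f_lo = f
    else:                            # overshoot: correction too strong
        f_hi = f
```

In the paper the "inner" correction is a full projection-based scheme and the metric is evaluated on intermediate reconstructions, but the control structure is the same.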
Addition of noise by scatter correction methods in PVI
Barney, J.S. (Div. of Nuclear Medicine); Harrop, R.; Atkins, M.S. (School of Computing Science)
1994-08-01
Effective scatter correction techniques are required to account for errors due to high scatter fraction seen in positron volume imaging (PVI). To be effective, the correction techniques must be accurate and practical, but they also must not add excessively to the statistical noise in the image. The authors have investigated the noise added by three correction methods: a convolution/subtraction method; a method that interpolates the scatter from the events outside the object; and a dual energy window method with and without smoothing of the scatter estimate. The methods were applied to data generated by Monte Carlo simulation to determine their effect on the variance of the corrected projections. The convolution and interpolation methods did not add significantly to the variance. The dual energy window subtraction method without smoothing increased the variance by a factor of more than twelve, but this factor was improved to 1.2 by smoothing the scatter estimate.
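The variance effect described above is easy to reproduce in a toy Poisson simulation: subtracting a noisy per-bin scatter estimate from a second energy window adds its variance to the corrected data, while smoothing the estimate first suppresses most of that added noise. All counts, window means, and the 0.5 scale factor below are hypothetical, and the toy numbers do not reproduce the paper's factor-of-twelve result.

```python
import numpy as np

rng = np.random.default_rng(0)
n, prompt_mean, scat_mean = 4096, 100.0, 40.0

# toy Poisson counts: photopeak window (primary + scatter) and a lower
# energy window dominated by scatter
peak = rng.poisson(prompt_mean + scat_mean, n).astype(float)
low = rng.poisson(2.0 * scat_mean, n).astype(float)
scale = 0.5                                      # assumed window scale factor

raw_est = scale * low                            # bin-by-bin scatter estimate
kernel = np.ones(33) / 33.0
smooth_est = scale * np.convolve(low, kernel, mode="same")

corr_raw = peak - raw_est
corr_smooth = peak - smooth_est                  # smoothing cuts added variance
```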
Asymmetric scatter kernels for software-based scatter correction of gridless mammography
NASA Astrophysics Data System (ADS)
Wang, Adam; Shapiro, Edward; Yoon, Sungwon; Ganguly, Arundhuti; Proano, Cesar; Colbeth, Rick; Lehto, Erkki; Star-Lack, Josh
2015-03-01
Scattered radiation remains one of the primary challenges for digital mammography, resulting in decreased image contrast and visualization of key features. While anti-scatter grids are commonly used to reduce scattered radiation in digital mammography, they are an incomplete solution that can add radiation dose, cost, and complexity. Instead, a software-based scatter correction method utilizing asymmetric scatter kernels is developed and evaluated in this work, which improves upon conventional symmetric kernels by adapting to local variations in object thickness and attenuation that result from the heterogeneous nature of breast tissue. This fast adaptive scatter kernel superposition (fASKS) method was applied to mammography by generating scatter kernels specific to the object size, x-ray energy, and system geometry of the projection data. The method was first validated with Monte Carlo simulation of a statistically-defined digital breast phantom, which was followed by initial validation on phantom studies conducted on a clinical mammography system. Results from the Monte Carlo simulation demonstrate excellent agreement between the estimated and true scatter signal, resulting in accurate scatter correction and recovery of 87% of the image contrast originally lost to scatter. Additionally, the asymmetric kernel provided more accurate scatter correction than the conventional symmetric kernel, especially at the edge of the breast. Results from the phantom studies on a clinical system further validate the ability of the asymmetric kernel correction method to accurately subtract the scatter signal and improve image quality. In conclusion, software-based scatter correction for mammography is a promising alternative to hardware-based approaches such as anti-scatter grids.
Scatter corrections for cone beam optical CT
NASA Astrophysics Data System (ADS)
Olding, Tim; Holmes, Oliver; Schreiner, L. John
2009-05-01
Cone beam optical computed tomography (OptCT) employing the VISTA scanner (Modus Medical, London, ON) has shown significant promise for fast, three-dimensional imaging of polymer gel dosimeters. One distinct challenge with this approach arises from the combination of the cone beam geometry, a diffuse light source, and the scattering polymer gel medium, all of which contribute scatter signal that perturbs the accuracy of the scanner. Beam stop array (BSA), beam pass array (BPA), and anti-scatter polarizer correction methodologies have been employed to remove scatter signal from OptCT data. These approaches are investigated through the use of well-characterized scattering phantom solutions and irradiated polymer gel dosimeters. BSA-corrected scattering solutions show good agreement in attenuation coefficient with optically absorbing dye solutions, with considerable reduction of the scatter-induced cupping artifact at high scattering concentrations. The application of BSA scatter corrections to a polymer gel dosimeter reduced the fraction of pixels failing the (3%, 3 mm) gamma criterion from 7.8% to 0.15%.
Comparison of scatter correction methods for CBCT
NASA Astrophysics Data System (ADS)
Suri, Roland E.; Virshup, Gary; Zurkirchen, Luis; Kaissl, Wolfgang
2006-03-01
In contrast to the narrow fan of clinical Computed Tomography (CT) scanners, Cone Beam scanners irradiate a much larger proportion of the object, which causes additional X-ray scattering. The most obvious scatter artefact is that the middle area of the object becomes darker than the outer area, as the density in the middle of the object is underestimated (cupping). Methods for estimating scatter were investigated that can be applied to each single projection without requiring a preliminary reconstruction. Scatter reduction by the Uniform Scatter Fraction method was implemented in the Varian CBCT software version 2.0. This scatter correction method is recommended for full fan scans using air norm. However, this method did not sufficiently correct artefacts in half fan scans and was not sufficiently robust if used in combination with a Single Norm. Therefore, a physical scatter model was developed that estimates scatter for each projection using the attenuation profile of the object. This model relied on laboratory experiments in which scatter kernels were measured for Plexiglas plates of varying thicknesses. Preliminary results suggest that this kernel model may solve the shortcomings of the Uniform Scatter Fraction model.
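A uniform-scatter-fraction correction, the simpler of the two approaches above, treats scatter as a single constant level across the detector. A minimal sketch, in which the level is taken as a fixed fraction of the mean projection signal; the fraction is hypothetical, not a Varian calibration value:

```python
import numpy as np

def uniform_scatter_fraction(projection, sf=0.4):
    """Uniform-scatter-fraction sketch: scatter is taken to be a constant
    level equal to a fixed fraction `sf` of the mean detector signal,
    and that level is subtracted everywhere."""
    return projection - sf * projection.mean()

proj = np.full(16, 2.0)
corr = uniform_scatter_fraction(proj, sf=0.25)   # removes 0.5 everywhere
```

The kernel-based model in the abstract replaces this single constant with an attenuation-dependent estimate per projection, which is why it copes better with half-fan scans.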
Model based scatter correction for cone-beam computed tomography
NASA Astrophysics Data System (ADS)
Wiegert, Jens; Bertram, Matthias; Rose, Georg; Aach, Til
2005-04-01
Scattered radiation is a major source of image degradation and nonlinearity in flat-detector-based cone-beam CT. Due to the larger irradiated volume, the amount of scattered radiation in true cone-beam geometry is considerably higher than for fan-beam CT. On the one hand this reduces the signal-to-noise ratio, since the additional scattered photons contribute only to the noise and not to the measured signal; on the other hand, cupping and streak artifacts arise in the reconstructed volume. Anti-scatter grids composed of lead lamellae and interspacing material decrease the SNR for flat-detector-based cone-beam geometry, because the beneficial scatter-attenuating effect is overcompensated by the absorption of primary radiation. Additionally, due to the high amount of scatter that still remains behind the grid, cupping and streak artifacts cannot be reduced sufficiently. Computerized scatter correction schemes are therefore essential for achieving artifact-free reconstructed images in cone-beam CT. In this work, a fast model-based scatter correction algorithm is proposed, aiming at accurately estimating the level and spatial distribution of the scattered radiation background in each projection. This allows streak and cupping artifacts due to scattering to be reduced effectively in cone-beam CT applications.
Finite volume corrections to pi pi scattering
Sato, Ikuro; Bedaque, Paulo F.; Walker-Loud, Andre
2006-01-13
Lattice QCD studies of hadron-hadron interactions are performed by computing the energy levels of the system in a finite box. The shifts in energy levels proportional to inverse powers of the volume are related to scattering parameters in a model-independent way. In addition, there are non-universal exponentially suppressed corrections that distort this relation. These terms are proportional to e^(-m_π L) and become relevant as the chiral limit is approached. In this paper we report on a one-loop chiral perturbation theory calculation of the leading exponential corrections in the case of I=2 pi pi scattering near threshold.
Accurately Detecting Students' Lies regarding Relational Aggression by Correctional Instructions
ERIC Educational Resources Information Center
Dickhauser, Oliver; Reinhard, Marc-Andre; Marksteiner, Tamara
2012-01-01
This study investigates the effect of correctional instructions when detecting lies about relational aggression. Based on models from the field of social psychology, we predict that correctional instruction will lead to a less pronounced lie bias and to more accurate lie detection. Seventy-five teachers received videotapes of students' true denial…
Quadratic electroweak corrections for polarized Moller scattering
A. Aleksejevs, S. Barkanova, Y. Kolomensky, E. Kuraev, V. Zykunov
2012-01-01
The paper discusses the two-loop (NNLO) electroweak radiative corrections to the parity violating electron-electron scattering asymmetry induced by squaring one-loop diagrams. The calculations are relevant for the ultra-precise 11 GeV MOLLER experiment planned at Jefferson Laboratory and experiments at high-energy future electron colliders. The imaginary parts of the amplitudes are taken into consideration consistently in both the infrared-finite and divergent terms. The size of the obtained partial correction is significant, which indicates a need for a complete study of the two-loop electroweak radiative corrections in order to meet the precision goals of future experiments.
Atmospheric scattering corrections to solar radiometry
NASA Technical Reports Server (NTRS)
Box, M. A.; Deepak, A.
1979-01-01
Whenever a solar radiometer is used to measure direct solar radiation, some diffuse sky radiation invariably enters the detector's field of view along with the direct beam. Therefore, the atmospheric optical depth obtained by the use of Bouguer's transmission law (also called Beer-Lambert's law), which is valid only for direct radiation, needs to be corrected by taking account of the scattered radiation. This paper discusses the correction factors needed to account for the diffuse (i.e., singly and multiply scattered) radiation and the algorithms developed for retrieving aerosol size distribution from such measurements. For a radiometer with a small field of view (half-cone angle of less than 5 deg) and relatively clear skies (optical depths less than 0.4), it is shown that the total diffuse contribution represents approximately 1% of the total intensity.
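The correction described above amounts to removing the diffuse contribution from the measured intensity before inverting Bouguer's law. A minimal sketch, using the abstract's ~1% diffuse figure as a default (the exact correction factors depend on the field of view and aerosol model):

```python
import numpy as np

def corrected_optical_depth(I_meas, I0, airmass, diffuse_fraction=0.01):
    """Bouguer/Beer-Lambert retrieval of optical depth after removing an
    assumed small diffuse contribution from the measured intensity."""
    I_direct = I_meas * (1.0 - diffuse_fraction)
    return -np.log(I_direct / I0) / airmass

# forward model: direct transmission plus ~1% diffuse contamination
tau, m, I0 = 0.3, 1.5, 1.0
I_direct = I0 * np.exp(-tau * m)
I_meas = I_direct / (1.0 - 0.01)      # diffuse light is ~1% of what is measured

tau_naive = -np.log(I_meas / I0) / m  # uncorrected Bouguer retrieval
tau_corr = corrected_optical_depth(I_meas, I0, m)
```

The uncorrected retrieval underestimates the optical depth because the extra diffuse light mimics higher transmission.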
An Accurate Temperature Correction Model for Thermocouple Hygrometers
Savage, Michael J.; Cass, Alfred; de Jager, James M.
1982-01-01
Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241
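The two-temperature model above amounts to interpolating the calibration-curve slope between the two calibration temperatures before converting a reading to water potential. A minimal sketch with hypothetical calibration numbers (the paper derives its slopes from the thermojunction radius and theoretical voltage sensitivity, which is not reproduced here):

```python
def slope_at(T, cal1, cal2):
    """Linearly interpolate the calibration-curve slope between the two
    calibration temperatures (the two-temperature model)."""
    (T1, s1), (T2, s2) = cal1, cal2
    return s1 + (s2 - s1) * (T - T1) / (T2 - T1)

def water_potential(voltage_uV, T, cal1=(15.0, 4.0), cal2=(35.0, 5.2)):
    """Convert a psychrometer reading (uV) to water potential (MPa) using
    the temperature-corrected slope; calibration pairs (T in deg C, slope
    in uV per MPa) are hypothetical."""
    return voltage_uV / slope_at(T, cal1, cal2)

s25 = slope_at(25.0, (15.0, 4.0), (35.0, 5.2))   # interpolated slope at 25 C
psi = water_potential(9.2, 25.0)
```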
Accurate Development of Thermal Neutron Scattering Cross Section Libraries
Hawari, Ayman; Dunn, Michael
2014-06-10
The objective of this project is to develop a holistic (fundamental and accurate) approach for generating thermal neutron scattering cross section libraries for a collection of important neutron moderators and reflectors. The primary components of this approach are the physical accuracy and completeness of the generated data libraries. Consequently, for the first time, thermal neutron scattering cross section data libraries will be generated that are based on accurate theoretical models, that are carefully benchmarked against experimental and computational data, and that contain complete covariance information that can be used in propagating the data uncertainties through the various components of the nuclear design and execution process. To achieve this objective, computational and experimental investigations will be performed on a carefully selected subset of materials that play a key role in all stages of the nuclear fuel cycle.
Novel scatter compensation of list-mode PET data using spatial and energy dependent corrections.
Guérin, Bastien; El Fakhri, Georges
2011-03-01
With the widespread use of positron emission tomography (PET) crystals with greatly improved energy resolution (e.g., 11.5% with LYSO as compared to 20% with BGO) and of list-mode acquisitions, the use of the energy of individual events in scatter correction schemes becomes feasible. We propose a novel scatter correction approach that incorporates the energy of individual photons in the scatter correction and reconstruction of list-mode PET data, in addition to the spatial information presently used in clinical scanners. First, we rewrite the Poisson likelihood function of list-mode PET data to include the energy distributions of primary and scatter coincidences and show that this expression yields an MLEM reconstruction algorithm containing both energy- and spatially-dependent corrections. To estimate the spatial distribution of scatter coincidences we use the single scatter simulation (SSS). Next, we derive two new formulae that allow estimation of the 2-D (coincidence) energy probability density functions (E-PDFs) of primary and scatter coincidences from the 1-D (photon) E-PDFs associated with each photon. We also describe an accurate and robust object-specific method for estimating these 1-D E-PDFs based on a decomposition of the total energy spectra detected across the scanner into primary and scattered components. Finally, we show that the energy information can be used to accurately normalize the scatter sinogram to the data. We compared the performance of this novel scatter correction, incorporating both the position and energy of detected coincidences, to that of the traditional approach modeling only the spatial distribution of scatter coincidences in 3-D Monte Carlo simulations of a medium cylindrical phantom and a large, nonuniform NCAT phantom. Incorporating the energy information in the scatter correction decreased bias in the activity distribution estimation by ~20% and ~40% in the cold regions of the large NCAT phantom at energy resolutions of 11.5% and 20% at 511 keV.
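The use of per-photon energy can be illustrated with a Bayesian weight: given the 1-D energy PDFs of primary and scattered photons and a prior primary fraction, each detected energy maps to a posterior probability of being unscattered. The Gaussian photopeak and flat scatter spectrum below are placeholder shapes (the paper decomposes the measured spectra instead), and the prior is arbitrary.

```python
import numpy as np

def primary_probability(E, p_prim, pdf_prim, pdf_scat):
    """Posterior probability that a photon of energy E (keV) is unscattered,
    given the prior primary fraction and the 1-D energy PDFs."""
    num = p_prim * pdf_prim(E)
    return num / (num + (1.0 - p_prim) * pdf_scat(E))

sigma = 511.0 * 0.115 / 2.355                    # 11.5% FWHM at 511 keV
pdf_prim = lambda E: (np.exp(-0.5 * ((E - 511.0) / sigma) ** 2)
                      / (sigma * np.sqrt(2.0 * np.pi)))
pdf_scat = lambda E: np.where((E > 300.0) & (E < 650.0), 1.0 / 350.0, 0.0)

# a photopeak event versus a clearly down-scattered one
w = primary_probability(np.array([511.0, 380.0]), 0.6, pdf_prim, pdf_scat)
```

Events near the photopeak receive weights close to one while down-scattered events are strongly suppressed, which is the information the energy-dependent MLEM corrections exploit.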
Practical correction procedures for elastic electron scattering effects in ARXPS
NASA Astrophysics Data System (ADS)
Lassen, T. S.; Tougaard, S.; Jablonski, A.
2001-06-01
Angle-resolved XPS and AES (ARXPS and ARAES) are widely used for determining the in-depth distribution of elements in the surface region of solids. It is well known that elastic electron scattering has a significant effect on the intensity as a function of emission angle and hence a great influence on the overlayer thicknesses determined by this method. Nevertheless, the procedures commonly applied in ARXPS and ARAES neglect it, because no simple, practical correction procedure has been available. Recently, however, new algorithms have been suggested. In this paper we study how well these algorithms correct for elastic scattering effects in the interpretation of ARXPS and ARAES. This is done by first calculating electron distributions by Monte Carlo simulation for well-defined overlayer/substrate systems and then applying the different algorithms. We find that an analytical formula based on a solution of the Boltzmann transport equation accounts well for elastic scattering effects; however, this procedure is computationally very slow and the underlying algorithm is complicated. Another, much simpler algorithm, proposed by Nefedov and coworkers, was also tested. Three different ways of handling the scattering parameters within this model were tried, and this algorithm also gives a good description of elastic scattering effects provided it is slightly modified to take into account the differences in the transport properties of the substrate and the overlayer. This procedure is fairly simple and is described in detail. The model gives a much more accurate description than the traditional straight-line approximation (SLA). It is also found that when attenuation lengths are used instead of inelastic mean free paths in the simple SLA formalism, the effects of elastic scattering are reasonably well accounted for. Specifically, from a systematic study of several
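The straight-line approximation mentioned above treats the substrate signal as exponentially attenuated along the straight emission path, so an overlayer thickness follows directly from the measured attenuation at each angle. A minimal sketch with assumed film thickness and attenuation length; using an attenuation length (rather than the IMFP) for the decay constant is exactly the pragmatic fix the abstract notes.

```python
import numpy as np

def overlayer_thickness(ratio, theta_deg, lam):
    """SLA inversion: the substrate signal is attenuated by
    exp(-d / (lam * cos(theta))), so d = lam * cos(theta) * ln(1 / ratio),
    where `ratio` is attenuated / unattenuated substrate intensity."""
    return lam * np.cos(np.deg2rad(theta_deg)) * np.log(1.0 / ratio)

# synthetic data: 2 nm film, attenuation length 3 nm, several emission angles
d_true, lam = 2.0, 3.0
angles = np.array([0.0, 30.0, 45.0, 60.0])
ratio = np.exp(-d_true / (lam * np.cos(np.deg2rad(angles))))
d_est = overlayer_thickness(ratio, angles, lam)
```

With real data, elastic scattering makes the apparent thickness drift with angle; the Boltzmann-based and Nefedov-type corrections discussed above remove most of that drift.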
Accurate source location from P waves scattered by surface topography
NASA Astrophysics Data System (ADS)
Wang, N.; Shen, Y.
2015-12-01
Accurate source locations of earthquakes and other seismic events are fundamental in seismology. The location accuracy is limited by several factors, including velocity models, which are often poorly known. In contrast, surface topography, the largest velocity contrast in the Earth, is often precisely mapped at the seismic wavelength (>100 m). In this study, we explore the use of P-coda waves generated by scattering at surface topography to obtain high-resolution locations of near-surface seismic events. The Pacific Northwest region is chosen as an example. The grid search method is combined with the 3D strain Green's tensor database method to improve the search efficiency as well as the quality of the hypocenter solution. The strain Green's tensor is calculated by the 3D collocated-grid finite difference method on curvilinear grids. Solutions in the search volume are then obtained based on the least-squares misfit between the 'observed' and predicted P and P-coda waves. A 95% confidence interval of the solution is also provided as an a posteriori error estimation. We find that the scattered waves are mainly due to topography in comparison with random velocity heterogeneity characterized by the von Kármán-type power spectral density function. When only P wave data are used, the 'best' solution is offset from the real source location, mostly in the vertical direction. The incorporation of P coda significantly improves solution accuracy and reduces its uncertainty. The solution remains robust with a range of random noises in data, un-modeled random velocity heterogeneities, and uncertainties in moment tensors that we tested.
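The grid search with a least-squares waveform misfit can be illustrated in one dimension (synthetic waveforms and candidate depths; a sketch, not the authors' 3-D strain Green's tensor implementation):

```python
import numpy as np

def locate_source(candidates, predicted_waveforms, observed):
    """Grid search: return the candidate whose predicted waveform minimizes
    the least-squares misfit against the observation."""
    misfits = [np.sum((pred - observed) ** 2) for pred in predicted_waveforms]
    best = int(np.argmin(misfits))
    return candidates[best], misfits

# Toy data: P waveforms predicted at three hypothetical source depths (km).
depths = [0.5, 1.0, 1.5]
t = np.linspace(0.0, 1.0, 100)
observed = np.sin(2 * np.pi * 5 * (t - 0.2))            # 'observed' record
predictions = [np.sin(2 * np.pi * 5 * (t - s)) for s in (0.1, 0.2, 0.3)]
best_depth, misfits = locate_source(depths, predictions, observed)
```

Here the middle candidate reproduces the observed arrival time exactly, so its misfit is zero and the search returns it.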
Accurate source location from waves scattered by surface topography
NASA Astrophysics Data System (ADS)
Wang, Nian; Shen, Yang; Flinders, Ashton; Zhang, Wei
2016-06-01
Accurate source locations of earthquakes and other seismic events are fundamental in seismology. The location accuracy is limited by several factors, including velocity models, which are often poorly known. In contrast, surface topography, the largest velocity contrast in the Earth, is often precisely mapped at the seismic wavelength (>100 m). In this study, we explore the use of P coda waves generated by scattering at surface topography to obtain high-resolution locations of near-surface seismic events. The Pacific Northwest region is chosen as an example to provide realistic topography. A grid search algorithm is combined with the 3-D strain Green's tensor database to improve search efficiency as well as the quality of hypocenter solutions. The strain Green's tensor is calculated using a 3-D collocated-grid finite difference method on curvilinear grids. Solutions in the search volume are obtained based on the least squares misfit between the "observed" and predicted P and P coda waves. The 95% confidence interval of the solution is provided as an a posteriori error estimation. For shallow events tested in the study, scattering is mainly due to topography in comparison with stochastic lateral velocity heterogeneity. The incorporation of P coda significantly improves solution accuracy and reduces solution uncertainty. The solution remains robust with wide ranges of random noises in data, unmodeled random velocity heterogeneities, and uncertainties in moment tensors. The method can be extended to locate pairs of sources in close proximity by differential waveforms using source-receiver reciprocity, further reducing errors caused by unmodeled velocity structures.
Quantitative fully 3D PET via model-based scatter correction
Ollinger, J.M.
1994-05-01
We have investigated the quantitative accuracy of fully 3D PET using model-based scatter correction by measuring the half-life of Ga-68 in the presence of scatter from F-18. The inner chamber of a Data Spectrum cardiac phantom was filled with 18.5 MBq of Ga-68. The outer chamber was filled with an equivalent amount of F-18. The cardiac phantom was placed in a 22x30.5 cm elliptical phantom containing anthropomorphic lung inserts filled with a water-Styrofoam mixture. Ten frames of dynamic data were collected over 13.6 hours on a Siemens-CTI 953B scanner with the septa retracted. The data were corrected using model-based scatter correction, which uses the emission images, transmission images and an accurate physical model to directly calculate the scatter distribution. Both uncorrected and corrected data were reconstructed using the Promis algorithm. The scatter correction required 4.3% of the total reconstruction time. The scatter fraction in a small volume of interest in the center of the inner chamber of the cardiac insert rose from 4.0% in the first interval to 46.4% in the last interval as the ratio of F-18 activity to Ga-68 activity rose from 1:1 to 33:1. Fitting a single exponential to the last three data points yields estimates of the half-life of Ga-68 of 77.01 minutes and 68.79 minutes for uncorrected and corrected data, respectively. Thus, scatter correction reduces the error from 13.3% to 1.2%. This suggests that model-based scatter correction is accurate in the heterogeneous attenuating medium found in the chest, making possible quantitative, fully 3D PET in the body.
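The half-life estimate above rests on a single-exponential fit; the log-linear version can be sketched as follows (synthetic noise-free counts, not the study's data):

```python
import numpy as np

def fit_half_life(times_min, activity):
    """Fit A(t) = A0 * exp(-lambda * t) by least squares in log space;
    the half-life is ln(2) / lambda."""
    slope, _ = np.polyfit(times_min, np.log(activity), 1)
    return np.log(2) / -slope

# Synthetic Ga-68 decay with a 67.7-min half-life (illustrative value).
t = np.array([0.0, 60.0, 120.0, 180.0])
a = 1000.0 * np.exp(-np.log(2) / 67.7 * t)
half_life = fit_half_life(t, a)
```

With noise-free data the log-linear fit recovers the input half-life exactly; residual scatter in real data is what biases the uncorrected estimate.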
SU-E-I-07: An Improved Technique for Scatter Correction in PET
Lin, S; Wang, Y; Lue, K; Lin, H; Chuang, K
2014-06-01
Purpose: In positron emission tomography (PET), the single scatter simulation (SSS) algorithm is widely used for scatter estimation in clinical scans. However, bias usually occurs at the essential step of scaling the computed SSS distribution to the real scatter amount by employing the scatter-only projection tail. The bias can be amplified when the scatter-only projection tail is too small, resulting in incorrect scatter correction. To this end, we propose a novel scatter calibration technique to accurately estimate the amount of scatter using a pre-determined scatter fraction (SF) function instead of the scatter-only tail information. Methods: As the SF depends on the radioactivity distribution and the attenuating material of the patient, an accurate theoretical relation cannot be devised. Instead, we constructed an empirical transformation function between SFs and average attenuation coefficients based on a series of phantom studies with different sizes and materials. From the average attenuation coefficient, the predicted SFs were calculated using the empirical transformation function. Hence, the real scatter amount can be obtained by scaling the SSS distribution with the predicted SFs. The simulation was conducted using SimSET. The Siemens Biograph™ 6 PET scanner was modeled in this study. The Software for Tomographic Image Reconstruction (STIR) was employed to estimate the scatter and reconstruct images. The EEC phantom was adopted to evaluate the performance of our proposed technique. Results: The scatter-corrected image of our method demonstrated improved image contrast over that of SSS. For our technique and SSS, the normalized standard deviations of the reconstructed images were 0.053 and 0.182, respectively; the root mean squared errors were 11.852 and 13.767, respectively. Conclusion: We have proposed an alternative method to calibrate SSS (C-SSS) to the absolute scatter amounts using SF. This method can avoid the bias caused by the insufficient
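The scaling step of the proposed C-SSS can be sketched as follows (toy sinograms; the empirical SF transformation itself is replaced here by a given predicted SF):

```python
import numpy as np

def calibrate_sss(sss_sino, prompts_sino, predicted_sf):
    """Scale the SSS scatter estimate so that its total equals
    predicted_sf * (total prompt counts), instead of fitting the
    scatter-only projection tail."""
    k = predicted_sf * prompts_sino.sum() / sss_sino.sum()
    return k * sss_sino

prompts = np.full((4, 4), 100.0)   # toy prompt sinogram (1600 counts total)
sss = np.ones((4, 4))              # toy SSS distribution, arbitrary scale
scatter = calibrate_sss(sss, prompts, predicted_sf=0.3)
```

The scaled estimate keeps the SSS spatial shape but carries the absolute level implied by the predicted scatter fraction.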
Potential of software-based scatter corrections in cone-beam volume CT
NASA Astrophysics Data System (ADS)
Bertram, Matthias; Wiegert, Jens; Rose, Georg
2005-04-01
This study deals with a systematic assessment of the potential of different schemes for computerized scatter correction in flat-detector-based cone-beam X-ray computed tomography. The analysis is based on simulated scatter of a CT image of a human head. Using a Monte Carlo cone-beam CT simulator, the spatial distribution of scattered radiation produced by this object was calculated with high accuracy for the different projected views of a circular tomographic scan. Using these data and, as a reference, a scatter-free forward projection of the phantom, the potential of different schemes for scatter correction was evaluated. In particular, the ideally achievable degree of accuracy of schemes based on estimating a constant scatter level in each projection was compared to approaches aiming at estimation of a more complex spatial shape of the scatter distribution. For each scheme, remaining cupping artifacts in the reconstructed volumetric image were quantified and analyzed. It was found that even accurate estimation of a single constant scatter level for each projection allows for comparatively accurate compensation of scatter-caused artifacts.
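The simplest of the compared schemes, a single constant scatter level per projection, can be sketched as follows (hypothetical shadow region with toy numbers standing in for the Monte Carlo scatter):

```python
import numpy as np

def constant_scatter_correct(projection, shadow_mask):
    """Estimate one constant scatter level from a region assumed to contain
    scatter only, then subtract it from the whole projection (clipped at 0)."""
    level = projection[shadow_mask].mean()
    return np.clip(projection - level, 0.0, None), level

proj = np.array([[5.0, 5.0, 105.0, 105.0],
                 [5.0, 5.0, 105.0, 105.0]])
mask = np.zeros_like(proj, dtype=bool)
mask[:, :2] = True                       # columns assumed to see scatter only
corrected, level = constant_scatter_correct(proj, mask)
```

One scalar per projection is a crude model, but, as the abstract notes, even this already removes much of the cupping if the level is estimated accurately.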
Low dose scatter correction for digital chest tomosynthesis
NASA Astrophysics Data System (ADS)
Inscoe, Christina R.; Wu, Gongting; Shan, Jing; Lee, Yueh Z.; Zhou, Otto; Lu, Jianping
2015-03-01
Digital chest tomosynthesis (DCT) provides superior image quality and depth information for thoracic imaging at relatively low dose, though the presence of strong photon scatter degrades the image quality. In most chest radiography, anti-scatter grids are used. However, the grid also blocks a large fraction of the primary beam photons, requiring a significantly higher imaging dose for patients. Previously, we proposed an efficient low-dose scatter correction technique using a primary beam sampling apparatus. We implemented the technique in stationary digital breast tomosynthesis and found the method to be efficient in correcting patient-specific scatter with only a 3% increase in dose. In this paper we report a feasibility study of applying the same technique to chest tomosynthesis. The investigation was performed using phantom and cadaver subjects. The method involves an initial tomosynthesis scan of the object. A lead plate with an array of holes, or primary sampling apparatus (PSA), was placed above the object. A second tomosynthesis scan was performed to measure the primary (scatter-free) transmission. The PSA data were used with the full-field projections to compute the scatter, which was then interpolated to full-field scatter maps unique to each projection angle. Full-field projection images were scatter corrected prior to reconstruction. Projections and reconstruction slices were evaluated, and the correction method was found to be effective at improving image quality and practical for clinical implementation.
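The interpolation of sparse PSA scatter samples to a full-field scatter map can be sketched per detector row (1-D linear interpolation on toy numbers; the actual method works on 2-D projections at each angle):

```python
import numpy as np

def scatter_map_from_samples(full_proj, primary_at_holes, hole_cols):
    """Scatter at each sampling hole = full-field signal - primary (PSA)
    signal; linearly interpolate along each row to a full scatter map."""
    n_rows, n_cols = full_proj.shape
    x = np.arange(n_cols)
    scatter = np.empty_like(full_proj)
    for r in range(n_rows):
        s = full_proj[r, hole_cols] - primary_at_holes[r]
        scatter[r] = np.interp(x, hole_cols, s)
    return scatter

full = np.array([[110.0, 120.0, 130.0, 140.0, 150.0]])
holes = np.array([0, 4])
primary = np.array([[100.0, 100.0]])   # scatter-free signal at the holes
smap = scatter_map_from_samples(full, primary, holes)
corrected = full - smap
```

Because scatter varies slowly across the detector, a coarse grid of holes is enough for the interpolated map to capture it.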
Dinelle, Katie; Cheng, Ju-Chieh; Shilov, Mikhail A.; Segars, William P.; Lidstone, Sarah C.; Blinder, Stephan; Rousset, Olivier G.; Vajihollahi, Hamid; Tsui, Benjamin M. W.; Wong, Dean F.; Sossi, Vesna
2010-01-01
With continuing improvements in spatial resolution of positron emission tomography (PET) scanners, small patient movements during PET imaging become a significant source of resolution degradation. This work develops and investigates a comprehensive formalism for accurate motion-compensated reconstruction which at the same time is very feasible in the context of high-resolution PET. In particular, this paper proposes an effective method to incorporate presence of scattered and random coincidences in the context of motion (which is similarly applicable to various other motion correction schemes). The overall reconstruction framework takes into consideration missing projection data which are not detected due to motion, and additionally, incorporates information from all detected events, including those which fall outside the field-of-view following motion correction. The proposed approach has been extensively validated using phantom experiments as well as realistic simulations of a new mathematical brain phantom developed in this work, and the results for a dynamic patient study are also presented. PMID:18672420
Solving outside-axial-field-of-view scatter correction problem in PET via digital experimentation
NASA Astrophysics Data System (ADS)
Andreyev, Andriy; Zhu, Yang-Ming; Ye, Jinghan; Song, Xiyun; Hu, Zhiqiang
2016-03-01
Unaccounted scatter from unknown outside-axial-field-of-view (outside-AFOV) activity in PET is an important degrading factor for image quality and quantitation. A resource-consuming and unpopular way to account for the outside-AFOV activity is to perform an additional PET/CT scan of adjacent regions. In this work we investigate a solution to the outside-AFOV scatter problem without performing a PET/CT scan of the adjacent regions. The main motivation for the proposed method is that the measured random-corrected prompt (RCP) sinogram in the background region surrounding the measured object contains only scattered events, originating from both inside- and outside-AFOV activity. In this method, the scatter correction simulation searches through many randomly chosen outside-AFOV activity estimates along with the known inside-AFOV activity, generating a plethora of scatter distribution sinograms. This digital experimentation iterates until a good match is found between a simulated scatter sinogram (which includes the supposed outside-AFOV activity) and the measured RCP sinogram in the background region. The combined scatter impact from inside- and outside-AFOV activity can then be used for scatter correction during the final image reconstruction phase. Preliminary results using measured phantom data indicate successful phantom length estimation with the method and, therefore, an accurate outside-AFOV scatter estimate.
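The "digital experimentation" loop can be sketched with a toy forward model (the linear dependence of the tail scatter on outside activity below is an assumption made purely for illustration):

```python
import numpy as np

def fit_outside_activity(measured_tail, simulate_tail, candidates):
    """Try candidate outside-AFOV activity levels; keep the one whose
    simulated scatter tail best matches the measured RCP tail."""
    best, best_err = None, np.inf
    for a in candidates:
        err = np.mean((simulate_tail(a) - measured_tail) ** 2)
        if err < best_err:
            best, best_err = a, err
    return best, best_err

# Toy forward model: tail scatter grows linearly with outside activity.
inside_tail = np.array([1.0, 1.2, 0.9])
def simulate_tail(a):
    return inside_tail + 0.5 * a

measured = simulate_tail(2.0)                  # ground truth: activity = 2.0
rng = np.random.default_rng(0)
candidates = rng.uniform(0.0, 4.0, 200)
est, err = fit_outside_activity(measured, simulate_tail, candidates)
```

With 200 random candidates over [0, 4], the best match lands very close to the true outside activity.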
A patient-specific scatter artifacts correction method
NASA Astrophysics Data System (ADS)
Zhao, Wei; Brunner, Stephen; Niu, Kai; Schafer, Sebastian; Royalty, Kevin; Chen, Guang-Hong
2014-03-01
This paper provides a fast and patient-specific scatter artifact correction method for cone-beam computed tomography (CBCT) used in image-guided interventional procedures. Due to the increased irradiated volume of interest in CBCT imaging, scatter radiation has increased dramatically compared to 2D imaging, leading to a degradation of image quality. In this study, we propose a scatter artifact correction strategy using an analytical convolution-based model whose free parameters are estimated from a rough estimation of scatter profiles from the acquired cone-beam projections. The method was evaluated using Monte Carlo simulations with both monochromatic and polychromatic X-ray sources. The results demonstrated that the proposed method significantly reduced the scatter-induced shading artifacts and recovered CT numbers.
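A convolution-based scatter model of this general family can be sketched as follows (toy kernel and amplitude; the paper's parameter-estimation step from measured scatter profiles is omitted):

```python
import numpy as np

def convolution_scatter(projection, kernel, amplitude):
    """Analytical convolution model (sketch): scatter is the projection
    blurred by a broad kernel and scaled by a free amplitude parameter."""
    k_rows, k_cols = kernel.shape
    pad = k_rows // 2
    padded = np.pad(projection, pad, mode="edge")
    out = np.zeros_like(projection)
    for i in range(projection.shape[0]):
        for j in range(projection.shape[1]):
            out[i, j] = np.sum(padded[i:i + k_rows, j:j + k_cols] * kernel)
    return amplitude * out

proj = np.ones((4, 4))
kern = np.full((3, 3), 1.0 / 9.0)        # normalized flat blur kernel
scat = convolution_scatter(proj, kern, amplitude=0.2)
```

For a flat projection and a normalized kernel the estimated scatter is simply the amplitude everywhere, which makes the scaling role of the free parameter explicit.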
Accurate and approximate calculations of Raman scattering in the atmosphere of Neptune
NASA Astrophysics Data System (ADS)
Sromovsky, L. A.
2005-01-01
Raman scattering by H2 in Neptune's atmosphere has significant effects on its reflectivity for λ<0.5 μm, producing baseline decreases of ˜20% in a clear atmosphere and ˜10% in a hazy atmosphere. However, few accurate Raman calculations are carried out because of their complexity and computational costs. Here we present the first radiation transfer algorithm that includes both polarization and Raman scattering and facilitates computation of spatially resolved spectra. New calculations show that Cochran and Trafton's (1978, Astrophys. J. 219, 756-762) suggestion that light reflected in the deep CH4 bands is mainly Raman scattered is not valid for current estimates of the CH4 vertical distribution, which implies only a 4% Raman contribution. Comparisons with IUE, HST, and groundbased observations confirm that high-altitude haze absorption is reducing Neptune's geometric albedo by ˜6% in the 0.22-0.26 μm range and by ˜13% in the 0.35-0.45 μm range. A sample haze model with 0.2 optical depths of 0.2-μm radius particles between 0.1 and 0.8 bars fits reasonably well, but is not a unique solution. We used accurate calculations to evaluate several approximations of Raman scattering. The Karkoschka (1994, Icarus 111, 174-192) method of applying Raman corrections to calculated spectra and removing Raman effects from observed spectra is shown to have limited applicability and to undercorrect the depths of weak CH4 absorption bands. The relatively large Q-branch contribution observed by Karkoschka is shown to be consistent with current estimates of Raman cross sections. The Wallace (1972, Astrophys. J. 176, 249-257) approximation produces geometric albedos ˜5% too low as originally proposed, but can be made much more accurate by including a scattering contribution from the vibrational transition. The original Pollack et al. (1986, Icarus 65, 442-466) approximation is inaccurate and unstable, but can be greatly improved by several simple modifications. A new
NASA Astrophysics Data System (ADS)
Kurata, Tomohiro; Oda, Shigeto; Kawahira, Hiroshi; Haneishi, Hideaki
2016-10-01
We have previously proposed an estimation method for intravascular oxygen saturation (SO_2) from images obtained by sidestream dark-field (SDF) imaging (we call it SDF oximetry) and investigated its fundamental characteristics by Monte Carlo simulation. In this paper, we propose a correction method for scattering by the tissue and performed experiments with turbid phantoms, as well as Monte Carlo simulation experiments, to investigate the influence of tissue scattering in SDF imaging. In the estimation method, we used modified extinction coefficients of hemoglobin, called average extinction coefficients (AECs), to correct for the influence of the bandwidth of the illumination sources, the imaging camera characteristics, and the tissue scattering. We estimate the scattering coefficient of the tissue from the maximum slope of the pixel-value profile along a line perpendicular to the blood vessel running direction in an SDF image and correct the AECs using the scattering coefficient. To evaluate the proposed method, we developed a trial SDF probe to obtain three-band images by switching multicolor light-emitting diodes and imaged turbid phantoms composed of agar powder, fat emulsion, and bovine-blood-filled glass tubes. As a result, we found that increased scattering by the phantom body led to a decrease in the AECs. The experimental results showed that the use of suitable values for the AECs led to more accurate SO_2 estimation. We also confirmed the validity of the proposed correction method in improving the accuracy of the SO_2 estimation.
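The role of the AECs can be illustrated with a two-band Beer-Lambert inversion (hypothetical coefficient values, not the calibrated AECs from the paper):

```python
import numpy as np

def estimate_so2(absorbances, aec_hbo2, aec_hb):
    """Two-band oximetry sketch: solve the Beer-Lambert system
    A_i = aec_HbO2_i * c_oxy + aec_Hb_i * c_deoxy for the concentrations,
    then SO2 = c_oxy / (c_oxy + c_deoxy). The coefficients stand in for AECs."""
    E = np.column_stack([aec_hbo2, aec_hb])
    c = np.linalg.solve(E, absorbances)
    return c[0] / c.sum()

# Hypothetical AECs at two illumination bands and a known test mixture.
aec_o = np.array([2.0, 1.0])
aec_d = np.array([1.0, 3.0])
conc = np.array([0.8, 0.2])                    # 80% oxygenated
absorb = np.column_stack([aec_o, aec_d]) @ conc
so2 = estimate_so2(absorb, aec_o, aec_d)
```

Because the estimate inverts the coefficient matrix, biased AECs map directly into a biased SO_2, which is why correcting the AECs for tissue scattering matters.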
Method for measuring multiple scattering corrections between liquid scintillators
Verbeke, J. M.; Glenn, A. M.; Keefer, G. J.; Wurtz, R. E.
2016-04-11
In this study, a time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source, for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons multiple scattering. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.
Method for measuring multiple scattering corrections between liquid scintillators
NASA Astrophysics Data System (ADS)
Verbeke, J. M.; Glenn, A. M.; Keefer, G. J.; Wurtz, R. E.
2016-07-01
A time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source, for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons multiple scattering. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.
Correction of Rayleigh Scattering Effects in Cloud Optical Thickness Retrievals
NASA Technical Reports Server (NTRS)
Wang, Meng-Hua; King, Michael D.
1997-01-01
We present results that demonstrate the effects of Rayleigh scattering on the retrieval of cloud optical thickness at a visible wavelength (0.66 μm). The sensor-measured radiance at a visible wavelength (0.66 μm) is usually used to infer remotely the cloud optical thickness from aircraft or satellite instruments. For example, we find that without removing Rayleigh scattering effects, errors in the retrieved cloud optical thickness for a thin water cloud layer (τ = 2.0) range from 15 to 60%, depending on solar zenith angle and viewing geometry. For an optically thick cloud (τ = 10), on the other hand, errors can range from 10 to 60% for large solar zenith angles (≥60 deg) because of enhanced Rayleigh scattering. It is therefore particularly important to correct for Rayleigh scattering contributions to the reflected signal from a cloud layer both (1) for the case of thin clouds and (2) for large solar zenith angles and all clouds. On the basis of the single scattering approximation, we propose an iterative method for effectively removing Rayleigh scattering contributions from the measured radiance signal in cloud optical thickness retrievals. The proposed correction algorithm works very well and can easily be incorporated into any cloud retrieval algorithm. The Rayleigh correction method is applicable to clouds at any pressure, provided that the cloud top pressure is known to within +/- 100 hPa. With the Rayleigh correction the errors in retrieved cloud optical thickness are usually reduced to within 3%. In cases of both thin cloud layers and thick clouds with large solar zenith angles, the errors are usually reduced by a factor of about 2 to over 10. The Rayleigh correction algorithm has been tested with simulations for realistic cloud optical and microphysical properties with different solar and viewing geometries. We apply the Rayleigh correction algorithm to the cloud optical thickness retrievals from experimental data obtained during the Atlantic
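An iterative removal of this kind can be sketched with a toy coupling model (the forward model below, including the Rayleigh-cloud coupling term, is an assumption for illustration, not the paper's radiative transfer):

```python
def remove_rayleigh(r_meas, r_ray, transmittance, coupling, n_iter=10):
    """Fixed-point iteration: start from the cloud reflectance implied by a
    simple subtraction, then refine using the current cloud estimate in the
    (assumed) Rayleigh-cloud coupling term."""
    r_cloud = 0.0
    for _ in range(n_iter):
        r_cloud = (r_meas - r_ray - coupling * r_ray * r_cloud) / transmittance
    return r_cloud

# Synthetic measurement built with the same assumed model (ground truth 0.4).
t, c, r_ray, true_cloud = 0.9, 0.3, 0.05, 0.4
r_meas = r_ray + t * true_cloud + c * r_ray * true_cloud
r_est = remove_rayleigh(r_meas, r_ray, t, c)
```

The coupling term is small, so the iteration is a strong contraction and converges to the true cloud reflectance in a few steps.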
Mie scatter corrections in single cell infrared microspectroscopy.
Konevskikh, Tatiana; Lukacs, Rozalia; Blümel, Reinhold; Ponossov, Arkadi; Kohler, Achim
2016-06-23
Strong Mie scattering signatures hamper the chemical interpretation and multivariate analysis of the infrared microscopy spectra of single cells and tissues. During recent years, several numerical Mie scatter correction algorithms for the infrared spectroscopy of single cells have been published. In the paper at hand, we critically reviewed existing algorithms for the correction of Mie scattering and suggest improvements. We developed an iterative algorithm based on Extended Multiplicative Scatter Correction (EMSC), for the retrieval of pure absorbance spectra from highly distorted infrared spectra of single cells. The new algorithm uses the van de Hulst approximation formula for the extinction efficiency employing a complex refractive index. The iterative algorithm involves the establishment of an EMSC meta-model. While existing iterative algorithms for the correction of resonant Mie scattering employ three independent parameters for establishing a meta-model, we could decrease the number of parameters from three to two independent parameters, which reduced the calculation time for the Mie scattering curves for the iterative EMSC meta-model by a factor of 10. Moreover, by employing the Hilbert transform for evaluating the Kramers-Kronig relations based on a FFT algorithm in Matlab, we further improved the speed of the algorithm by a factor of 100. For testing the algorithm we simulate distorted apparent absorbance spectra by utilizing the exact theory for the scattering of infrared light at absorbing spheres, taking into account the high numerical aperture of infrared microscopes employed for the analysis of single cells and tissues. In addition, the algorithm was applied to measured absorbance spectra of single lung cancer cells. PMID:27034998
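The van de Hulst approximation used by the algorithm has a simple closed form for a non-absorbing sphere; a sketch (the paper's version employs a complex refractive index, which is omitted here):

```python
import numpy as np

def q_ext_van_de_hulst(rho):
    """van de Hulst (anomalous diffraction) extinction efficiency for a
    non-absorbing sphere: Q = 2 - (4/rho) sin(rho) + (4/rho^2)(1 - cos(rho)),
    where rho = 2 x (m - 1) is the phase-shift parameter."""
    return 2.0 - (4.0 / rho) * np.sin(rho) + (4.0 / rho ** 2) * (1.0 - np.cos(rho))

q = q_ext_van_de_hulst(np.array([0.1, 4.0, 50.0]))
```

The expected behavior: Q vanishes for small phase shifts, overshoots above 2 near the first ripple maximum, and oscillates toward the large-particle limit of 2.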
NASA Astrophysics Data System (ADS)
Chen, Duan; Cai, Wei; Zinser, Brian; Cho, Min Hyung
2016-09-01
In this paper, we develop an accurate and efficient Nyström volume integral equation (VIE) method for the Maxwell equations for a large number of 3-D scatterers. The Cauchy Principal Values that arise from the VIE are computed accurately using a finite size exclusion volume together with explicit correction integrals consisting of removable singularities. Also, the hyper-singular integrals are computed using interpolated quadrature formulae with tensor-product quadrature nodes for cubes, spheres and cylinders, that are frequently encountered in the design of meta-materials. The resulting Nyström VIE method is shown to have high accuracy with a small number of collocation points and demonstrates p-convergence for computing the electromagnetic scattering of these objects. Numerical calculations of multiple scatterers of cubic, spherical, and cylindrical shapes validate the efficiency and accuracy of the proposed method.
NASA Astrophysics Data System (ADS)
Xie, Shi-Peng; Luo, Li-Min
2012-06-01
The authors propose a combined scatter reduction and correction method to improve image quality in cone beam computed tomography (CBCT). The scatter kernel superposition (SKS) method has been used occasionally in previous studies. However, our method differs in that a scatter detecting blocker (SDB) is used between the X-ray source and the tested object to model a self-adaptive scatter kernel. This study first evaluates the scatter kernel parameters using the SDB, and then isolates the scatter distribution based on SKS. Image quality can be improved by removing the scatter distribution. The results show that the method can effectively reduce scatter artifacts and increase image quality. Our approach increases image contrast and reduces the magnitude of cupping. The accuracy of the SKS technique is significantly improved in our method through the use of a self-adaptive scatter kernel. The method is computationally efficient, easy to implement, and provides scatter correction using a single-scan acquisition.
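The kernel-superposition idea of estimating scatter from the (unknown) primary and iterating can be sketched with a flat stand-in kernel (a toy model; the self-adaptive kernel estimated from the SDB is beyond this sketch):

```python
import numpy as np

def sks_correct(measured, kernel_weight, n_iter=20):
    """Iterate primary = measured - scatter(primary), where scatter is a
    fraction of the mean primary spread uniformly (a flat kernel stand-in)."""
    primary = measured.copy()
    for _ in range(n_iter):
        scatter = kernel_weight * primary.mean()
        primary = measured - scatter
    return primary

primary_true = np.array([[1.0, 2.0], [3.0, 4.0]])
w = 0.2
measured = primary_true + w * primary_true.mean()   # toy forward model
recovered = sks_correct(measured, w)
```

Because scatter is estimated from the current primary estimate rather than from the measurement itself, the fixed point of the iteration is the scatter-free image.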
Quantum error correction of photon-scattering errors
NASA Astrophysics Data System (ADS)
Akerman, Nitzan; Glickman, Yinnon; Kotler, Shlomi; Ozeri, Roee
2011-05-01
Photon scattering by an atomic ground-state superposition is often considered as a source of decoherence. The same process also results in atom-photon entanglement, which has been directly observed in various experiments using a single atom, ion, or diamond nitrogen-vacancy center. Here we combine these two aspects to implement a quantum error correction protocol. We encode a qubit in the two Zeeman-split ground states of a single trapped 88Sr+ ion. Photons are resonantly scattered on the S1/2 --> P1/2 transition. We study the process of single photon scattering, i.e., the excitation of the ion to the excited manifold followed by a spontaneous emission and decay. In the absence of any knowledge of the emitted photon, the ion-qubit coherence is lost. However, the joint ion-photon system still maintains coherence. We show that while scattering events where spin population is preserved (Rayleigh scattering) do not affect coherence, spin-changing (Raman) scattering events result in coherent amplitude exchange between the two qubit states. By applying a unitary spin rotation that is dependent on the detected photon polarization, we retrieve the ion-qubit initial state. We characterize this quantum error correction protocol by process tomography and demonstrate an ability to preserve ion-qubit coherence with high fidelity.
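The polarization-conditioned recovery can be caricatured with 2x2 matrices (a toy model: the Raman branch is represented by a known amplitude-exchange unitary, not the full ion-photon map):

```python
import numpy as np

def scattering_unitary(polarization):
    """Toy model: a spin-changing (Raman) event exchanges the two qubit
    amplitudes; a Rayleigh-like event leaves the qubit untouched."""
    if polarization == "raman":
        return np.array([[0, 1], [1, 0]], dtype=complex)
    return np.eye(2, dtype=complex)

def correct(state, detected_polarization):
    """Undo the scattering unitary inferred from the detected photon."""
    u = scattering_unitary(detected_polarization)
    return u.conj().T @ state

psi = np.array([0.6, 0.8], dtype=complex)
scattered = scattering_unitary("raman") @ psi   # amplitudes exchanged
recovered = correct(scattered, "raman")
```

The point of the protocol is exactly this conditioning: because the error branch is coherent and is heralded by the photon polarization, it can be reversed by a known unitary.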
CORRECTING FOR INTERSTELLAR SCATTERING DELAY IN HIGH-PRECISION PULSAR TIMING: SIMULATION RESULTS
Palliyaguru, Nipuni; McLaughlin, Maura; Stinebring, Daniel; Demorest, Paul; Jones, Glenn
2015-12-20
Light travel time changes due to gravitational waves (GWs) may be detected within the next decade through precision timing of millisecond pulsars. Removal of frequency-dependent interstellar medium (ISM) delays due to dispersion and scattering is a key issue in the detection process. Current timing algorithms routinely correct pulse times of arrival (TOAs) for time-variable delays due to cold plasma dispersion. However, none of the major pulsar timing groups correct for delays due to scattering from multi-path propagation in the ISM. Scattering introduces a frequency-dependent phase change in the signal that results in pulse broadening and arrival time delays. Any method to correct the TOA for interstellar propagation effects must be based on multi-frequency measurements that can effectively separate dispersion and scattering delay terms from frequency-independent perturbations such as those due to a GW. Cyclic spectroscopy, first described in an astronomical context by Demorest (2011), is a potentially powerful tool to assist in this multi-frequency decomposition. As a step toward a more comprehensive ISM propagation delay correction, we demonstrate through a simulation that we can accurately recover impulse response functions (IRFs), such as those that would be introduced by multi-path scattering, with a realistic signal-to-noise ratio (S/N). We demonstrate that timing precision is improved when scatter-corrected TOAs are used, under the assumptions of a high S/N and highly scattered signal. We also show that the effect of pulse-to-pulse “jitter” is not a serious problem for IRF reconstruction, at least for jitter levels comparable to those observed in several bright pulsars.
Correcting for Interstellar Scattering Delay in High-precision Pulsar Timing: Simulation Results
NASA Astrophysics Data System (ADS)
Palliyaguru, Nipuni; Stinebring, Daniel; McLaughlin, Maura; Demorest, Paul; Jones, Glenn
2015-12-01
Light travel time changes due to gravitational waves (GWs) may be detected within the next decade through precision timing of millisecond pulsars. Removal of frequency-dependent interstellar medium (ISM) delays due to dispersion and scattering is a key issue in the detection process. Current timing algorithms routinely correct pulse times of arrival (TOAs) for time-variable delays due to cold plasma dispersion. However, none of the major pulsar timing groups correct for delays due to scattering from multi-path propagation in the ISM. Scattering introduces a frequency-dependent phase change in the signal that results in pulse broadening and arrival time delays. Any method to correct the TOA for interstellar propagation effects must be based on multi-frequency measurements that can effectively separate dispersion and scattering delay terms from frequency-independent perturbations such as those due to a GW. Cyclic spectroscopy, first described in an astronomical context by Demorest (2011), is a potentially powerful tool to assist in this multi-frequency decomposition. As a step toward a more comprehensive ISM propagation delay correction, we demonstrate through a simulation that we can accurately recover impulse response functions (IRFs), such as those that would be introduced by multi-path scattering, with a realistic signal-to-noise ratio (S/N). We demonstrate that timing precision is improved when scatter-corrected TOAs are used, under the assumptions of a high S/N and highly scattered signal. We also show that the effect of pulse-to-pulse “jitter” is not a serious problem for IRF reconstruction, at least for jitter levels comparable to those observed in several bright pulsars.
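The IRF-removal step can be sketched as frequency-domain deconvolution with a known impulse response (a toy exponential scattering tail; cyclic spectroscopy itself, which recovers the IRF from data, is not reproduced here):

```python
import numpy as np

def descatter(observed, irf, eps=1e-12):
    """Wiener-style deconvolution: divide out the IRF spectrum, with a tiny
    regularizer to avoid division by zero."""
    h = np.fft.rfft(irf, n=observed.size)
    y = np.fft.rfft(observed)
    return np.fft.irfft(y * np.conj(h) / (np.abs(h) ** 2 + eps), n=observed.size)

n = 64
pulse = np.zeros(n)
pulse[10] = 1.0                                  # intrinsic pulse (a spike)
irf = np.exp(-np.arange(n) / 4.0)
irf /= irf.sum()                                 # scattering tail, unit area
observed = np.fft.irfft(np.fft.rfft(pulse) * np.fft.rfft(irf), n=n)
recovered = descatter(observed, irf)
```

Scattering delays the apparent time of arrival by smearing the pulse into a tail; dividing out the IRF restores the sharp pulse and hence the unbiased TOA.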
Fully 3D iterative scatter-corrected OSEM for HRRT PET using a GPU
NASA Astrophysics Data System (ADS)
Kim, Kyung Sang; Ye, Jong Chul
2011-08-01
Accurate scatter correction is especially important for high-resolution 3D positron emission tomography (PET) scanners such as the high-resolution research tomograph (HRRT) due to the large scatter fraction in the data. To address this problem, a fully 3D iterative scatter-corrected ordered subset expectation maximization (OSEM), in which a 3D single scatter simulation (SSS) is performed alternately with a 3D OSEM reconstruction, was recently proposed. However, due to the computational complexity of both the SSS and OSEM algorithms for a high-resolution 3D PET, it has not been widely used in practice. The main objective of this paper is, therefore, to accelerate the fully 3D iterative scatter-corrected OSEM using a graphics processing unit (GPU) and verify its performance for an HRRT. We show that to exploit the massive thread structures of the GPU, several algorithmic modifications are necessary. For the SSS implementation, a sinogram-driven approach is found to be more appropriate than a detector-driven approach, as fast linear interpolation can be performed in the sinogram domain through the use of texture memory. Furthermore, a pixel-driven backprojector and a ray-driven projector can be significantly accelerated by assigning threads to voxels and sinograms, respectively. Using Nvidia's GPU and the compute unified device architecture (CUDA), the execution time of an SSS is less than 6 s, a single iteration of OSEM with 16 subsets takes 16 s, and a single iteration of the fully 3D scatter-corrected OSEM, composed of an SSS and six iterations of OSEM, takes under 105 s for the HRRT geometry, which corresponds to acceleration factors of 125× and 141× for OSEM and SSS, respectively. The fully 3D iterative scatter-corrected OSEM algorithm is validated in simulations using the Geant4 Application for Tomographic Emission and in actual experiments using an HRRT.
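The alternating scatter-estimate/OSEM loop rests on the standard OSEM update with an additive scatter term in the forward model. A toy NumPy sketch (hypothetical dense system matrix `P` and fixed scatter estimate `s`; nothing like the paper's GPU implementation):

```python
import numpy as np

def osem_scatter(y, P, s, n_subsets=4, n_iter=2):
    """Toy OSEM with an additive scatter estimate in the forward model:
    y ~ Poisson(P @ x + s). P: (n_bins, n_vox) system matrix (illustrative)."""
    n_bins, n_vox = P.shape
    x = np.ones(n_vox)
    subsets = np.array_split(np.arange(n_bins), n_subsets)
    for _ in range(n_iter):
        for idx in subsets:          # one multiplicative update per subset
            Ps = P[idx]
            expected = Ps @ x + s[idx]
            ratio = y[idx] / np.maximum(expected, 1e-12)
            x *= (Ps.T @ ratio) / np.maximum(Ps.T @ np.ones(len(idx)), 1e-12)
    return x
```

In the iterative scheme described above, `s` would itself be refreshed by rerunning the SSS on the current reconstruction between OSEM passes.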
X-ray scatter correction in breast tomosynthesis with a precomputed scatter map library
Feng, Steve Si Jia; D’Orsi, Carl J.; Newell, Mary S.; Seidel, Rebecca L.; Patel, Bhavika; Sechopoulos, Ioannis
2014-01-01
Purpose: To develop and evaluate the impact on lesion conspicuity of a software-based x-ray scatter correction algorithm for digital breast tomosynthesis (DBT) imaging into which a precomputed library of x-ray scatter maps is incorporated. Methods: A previously developed model of compressed breast shapes undergoing mammography based on principal component analysis (PCA) was used to assemble 540 simulated breast volumes, of different shapes and sizes, undergoing DBT. A Monte Carlo (MC) simulation was used to generate the cranio-caudal (CC) view DBT x-ray scatter maps of these volumes, which were then assembled into a library. This library was incorporated into a previously developed software-based x-ray scatter correction, and the performance of this improved algorithm was evaluated with an observer study of 40 patient cases previously classified as BI-RADS® 4 or 5, evenly divided between mass and microcalcification cases. Observers were presented with both the original images and the scatter corrected (SC) images side by side and asked to indicate their preference, on a scale from −5 to +5, in terms of lesion conspicuity and quality of diagnostic features. Scores were normalized such that a negative score indicates a preference for the original images, and a positive score indicates a preference for the SC images. Results: The scatter map library removes the time-intensive MC simulation from the application of the scatter correction algorithm. While only one in four observers preferred the SC DBT images as a whole (combined mean score = 0.169 ± 0.37, p > 0.39), all observers exhibited a preference for the SC images when the lesion examined was a mass (1.06 ± 0.45, p < 0.0001). When the lesion examined consisted of microcalcification clusters, the observers exhibited a preference for the uncorrected images (−0.725 ± 0.51, p < 0.009). Conclusions: The incorporation of the x-ray scatter map library into the scatter correction algorithm improves the efficiency
Evaluation of attenuation and scatter correction requirements in small animal PET and SPECT imaging
NASA Astrophysics Data System (ADS)
Konik, Arda Bekir
) digital phantoms. In addition, PET projection files for different sizes of MOBY phantoms were reconstructed in 6 different conditions including attenuation and scatter corrections. Selected regions were analyzed for these different reconstruction conditions and object sizes. Finally, real mouse data from the real version of the same small animal PET scanner we modeled in our simulations were analyzed for similar reconstruction conditions. Both our IDL and GATE simulations showed that, for small animal PET and SPECT, even the smallest objects (~2 cm diameter) showed ~15% error when neither attenuation nor scatter was corrected. However, a simple attenuation correction using a uniform attenuation map and an object boundary obtained from emission data significantly reduces this error in non-lung regions (~1% for the smallest size and ~6% for the largest size). In lungs, emission values were overestimated when only attenuation correction was performed. In addition, we did not observe any significant difference between the use of a uniform and an actual attenuation map (e.g., only ~0.5% for the largest size in PET studies). The scatter correction was not significant for smaller objects, but became increasingly important for larger objects. These results suggest that for all mouse sizes and most rat sizes, uniform attenuation correction can be performed using emission data only. For smaller sizes up to ~4 cm, scatter correction is not required even in lung regions. For larger sizes, if accurate quantification is needed, an additional transmission scan may be required to estimate an accurate attenuation map for both attenuation and scatter corrections.
Kokhanovsky, Alexander A
2007-04-01
Analytical equations for the diffused scattered light correction factor of Sun photometers are derived and analyzed. It is shown that corrections are weakly dependent on the atmospheric optical thickness. They are influenced mostly by the size of aerosol particles encountered by sunlight on its way to a Sun photometer. In addition, the accuracy of the small-angle approximation used in the work is studied with numerical calculations based on the exact radiative transfer equation.
Bootsma, G. J.; Verhaegen, F.; Jaffray, D. A.
2015-01-15
suitable GOF metric with strong correlation with the actual error of the scatter fit, S_F. Fitting the scatter distribution to a limited sum of sine and cosine functions using a low-pass filtered fast Fourier transform provided a computationally efficient and accurate fit. The CMCF algorithm reduces the number of photon histories required by over four orders of magnitude. The simulated experiments showed that using a compensator reduced the computational time by a factor of between 1.5 and 1.75. The scatter estimates for the simulated and measured data were computed in 35–93 s and 114–122 s, respectively, using 16 Intel Xeon cores (3.0 GHz). The CMCF scatter correction improved the contrast-to-noise ratio by 10%–50% and reduced the reconstruction error to under 3% for the simulated phantoms. Conclusions: The novel CMCF algorithm significantly reduces the computation time required to estimate the scatter distribution by reducing the statistical noise in the MC scatter estimate and limiting the number of projection angles that must be simulated. Using the scatter estimate provided by the CMCF algorithm to correct both simulated and real projection data showed improved reconstruction image quality.
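The low-pass filtered Fourier fit mentioned above can be sketched in one dimension: keep only the lowest-frequency FFT coefficients of the noisy Monte Carlo scatter estimate and invert. The cutoff of four coefficients is illustrative:

```python
import numpy as np

def lowpass_fit(noisy, n_keep=4):
    """Fit a 1-D scatter profile with a limited sum of sines/cosines by
    zeroing all but the lowest-frequency real-FFT coefficients."""
    F = np.fft.rfft(noisy)
    F[n_keep:] = 0.0                      # scatter is smooth: keep low modes
    return np.fft.irfft(F, n=len(noisy))
```

Because scatter distributions are smooth, the retained modes capture the signal while most of the MC noise (spread over all frequencies) is discarded.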
Monte Carlo evaluation of accuracy and noise properties of two scatter correction methods
Narita, Y.; Eberl, S.; Nakamura, T.
1996-12-31
Two independent scatter correction techniques, transmission dependent convolution subtraction (TDCS) and the triple-energy window (TEW) method, were evaluated in terms of quantitative accuracy and noise properties using Monte Carlo simulation (EGS4). Emission projections (primary, scatter, and scatter plus primary) were simulated for 99mTc and 201Tl for numerical chest phantoms. Data were reconstructed with an ordered-subset ML-EM algorithm including attenuation correction using the transmission data. In the chest phantom simulation, TDCS provided better S/N than TEW, and better accuracy, i.e., 1.0% vs -7.2% in myocardium, and -3.7% vs -30.1% in the ventricular chamber for 99mTc with TDCS and TEW, respectively. For 201Tl, TDCS provided good visual and quantitative agreement with the simulated true primary image without noticeably increasing the noise after scatter correction. Overall, TDCS proved to be more accurate and less noisy than TEW, facilitating quantitative assessment of physiological functions with SPECT.
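The TEW estimate itself is a simple trapezoid through the counts-per-keV of the two narrow windows flanking the photopeak. A minimal sketch (the keV window widths are illustrative defaults, not from this study):

```python
def tew_scatter(c_low, c_high, w_low, w_high, w_peak):
    """Triple-energy-window scatter estimate for the photopeak window:
    area of the trapezoid spanned by the two flanking windows' count rates."""
    return (c_low / w_low + c_high / w_high) * w_peak / 2.0

def tew_primary(c_peak, c_low, c_high, w_low=3.0, w_high=3.0, w_peak=20.0):
    """Photopeak counts with the TEW scatter estimate subtracted (floored at 0)."""
    scatter = tew_scatter(c_low, c_high, w_low, w_high, w_peak)
    return max(c_peak - scatter, 0.0)
```

The noise behavior criticized above comes directly from this form: the narrow flanking windows contribute large relative Poisson fluctuations that propagate into the subtracted image.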
Corrections to the eikonal approximation for nuclear scattering at medium energies
NASA Astrophysics Data System (ADS)
Buuck, Micah; Miller, Gerald A.
2014-08-01
The upcoming Facility for Rare Isotope Beams (FRIB) at the National Superconducting Cyclotron Laboratory (NSCL) at Michigan State University has reemphasized the importance of accurate modeling of low energy nucleus-nucleus scattering. Such calculations have been simplified by using the eikonal approximation. As a high energy approximation, however, its accuracy suffers for the medium energy beams that are of current experimental interest. A prescription developed by Wallace [Phys. Rev. Lett. 27, 622 (1971), 10.1103/PhysRevLett.27.622 and Ann. Phys. (NY) 78, 190 (1973), 10.1016/0003-4916(73)90008-0] that obtains the scattering propagator as an expansion around the eikonal propagator (Glauber approach) has the potential to extend the range of validity of the approximation to lower energies. Here we examine the properties of this expansion, and calculate the first-, second-, and third-order corrections for the scattering of a spinless particle off a 40Ca nucleus, and for nuclear breakup reactions involving 11Be. We find that including these corrections extends the lower bound of the range of validity down to energies as low as about 45 MeV. At that energy the corrections provide as much as a 15% correction to certain processes.
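For orientation, the zeroth-order (Glauber) eikonal quantities that the Wallace expansion corrects can be written in their standard textbook form (these are not taken from the paper; b is the impact parameter, v the projectile velocity, and the corrections modify χ in powers of 1/k):

```latex
% Zeroth-order (Glauber) eikonal phase and scattering amplitude;
% the Wallace prescription adds corrections to \chi order by order in 1/k.
\chi_0(b) = -\frac{1}{\hbar v}\int_{-\infty}^{\infty} V\!\left(\sqrt{b^2+z^2}\right)\,dz,
\qquad
f(\mathbf{q}) = \frac{ik}{2\pi}\int d^2b\; e^{i\mathbf{q}\cdot\mathbf{b}}
\left[1 - e^{i\chi_0(b)}\right].
```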
Multiple-scattering corrections to the Beer-Lambert law
Zardecki, A.
1983-01-01
The effect of multiple scattering on the validity of the Beer-Lambert law is discussed for a wide range of particle-size parameters and optical depths. To predict the amount of received radiant power, appropriate correction terms are introduced. For particles larger than or comparable to the wavelength of radiation, the small-angle approximation is adequate, whereas for small, densely packed particles, diffusion theory is advantageously employed. These two approaches are used in the context of the problem of laser-beam propagation in a dense aerosol medium. In addition, preliminary results obtained by using a two-dimensional finite-element discrete-ordinates transport code are described. Multiple-scattering effects for laser propagation in fog, cloud, rain, and aerosol cloud are modeled.
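One crude way to see why a correction term is needed is an effective-optical-depth model: some fraction of forward-scattered light stays in the receiver field of view, so the beam attenuates more slowly than the bare Beer-Lambert law predicts. The parameters below are illustrative, not the paper's small-angle or diffusion results:

```python
import math

def received_power(p0, tau, omega0=0.9, eta=0.5):
    """Beer-Lambert with a crude multiple-scattering correction: a fraction
    eta of singly scattered light (single-scatter albedo omega0) remains in
    the receiver FOV, reducing the effective extinction (toy values)."""
    tau_eff = tau * (1.0 - eta * omega0)
    return p0 * math.exp(-tau_eff)
```

With eta = 0 this reduces to the uncorrected law p0·exp(-tau); any captured forward scatter (eta > 0) raises the received power above that baseline, which is the sign of the correction discussed above.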
Electroweak radiative corrections to polarized Møller scattering asymmetries
NASA Astrophysics Data System (ADS)
Czarnecki, Andrzej; Marciano, William J.
1996-02-01
One-loop electroweak radiative corrections to left-right parity-violating Møller scattering (e⁻e⁻ → e⁻e⁻) asymmetries are presented. They reduce the standard model (tree level) prediction by 40 ± 3%, where the main shift and uncertainty stem from hadronic vacuum polarization loops. A similar reduction also occurs for the electron-electron atomic parity-violating interaction. That effect can be attributed to an increase of sin²θ_W(q²) by 3% in running from q² = m_Z² to 0. The sensitivity of the asymmetry to "new physics" is also discussed.
Correction for patient table-induced scattered radiation in cone-beam computed tomography (CBCT)
Sun Mingshan; Nagy, Tamas; Virshup, Gary; Partain, Larry; Oelhafen, Markus; Star-Lack, Josh
2011-04-15
Purpose: In image-guided radiotherapy, an artifact typically seen in axial slices of x-ray cone-beam computed tomography (CBCT) reconstructions is a dark region or "black hole" situated below the scan isocenter. The authors trace the cause of the artifact to scattered radiation produced by radiotherapy patient tabletops and show it is linked to the use of the offset-detector acquisition mode to enlarge the imaging field-of-view. The authors present a hybrid scatter kernel superposition (SKS) algorithm to correct for scatter from both the object-of-interest and the tabletop. Methods: Monte Carlo simulations and phantom experiments were first performed to identify the source of the black hole artifact. For correction, a SKS algorithm was developed that uses separate kernels to estimate scatter from the patient tabletop and the object-of-interest. Each projection is divided into two regions, one defined by the shadow cast by the tabletop on the imager and one defined by the unshadowed region. The region not shadowed by the tabletop is processed using the recently developed fast adaptive scatter kernel superposition (fASKS) method which employs asymmetric kernels that best model scatter transport through bodylike objects. The shadowed region is convolved with a combination of slab-derived symmetric SKS kernels and asymmetric fASKS kernels. The composition of the hybrid kernels is projection-angle-dependent. To test the algorithm, pelvis phantom and in vivo data were acquired using a CBCT test stand, a Varian Acuity simulator, and a Varian On-Board Imager, all of which have similar geometries and components. Artifact intensities and Hounsfield unit (HU) accuracies in the reconstructions were assessed before and after the correction. Results: The hybrid kernel algorithm provided effective correction and produced substantially better scatter estimates than the symmetric SKS or asymmetric fASKS methods alone. HU nonuniformities in the reconstructed pelvis phantom were
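The kernel-superposition idea underlying SKS can be sketched in one dimension: each detector pixel's primary signal spawns scatter spread by a broad kernel, and the contributions are summed by convolution. This toy uses a symmetric Gaussian (illustrative amplitude and width), not the asymmetric fASKS kernels of the paper:

```python
import numpy as np

def sks_scatter_1d(primary, sigma=10.0, amp=0.05):
    """1-D sketch of scatter kernel superposition: convolve the primary
    profile with a broad, normalized Gaussian scaled to a scatter fraction."""
    u = np.arange(-50, 51, dtype=float)
    kernel = np.exp(-0.5 * (u / sigma) ** 2)
    kernel *= amp / kernel.sum()          # total scatter = amp * primary
    return np.convolve(primary, kernel, mode="same")
```

The hybrid algorithm above amounts to doing this with different kernels inside and outside the tabletop shadow, then blending the results per projection angle.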
Aleksejevs, Aleksandrs; Barkanova, Svetlana; Ilyichev, Alexander; Zykunov, Vladimir
2010-11-01
We perform updated and detailed calculations of the complete next-to-leading order set of electroweak radiative corrections to parity-violating e⁻e⁻ → e⁻e⁻(γ) scattering asymmetries at energies relevant for the ultraprecise MOLLER experiment to be performed at JLab. Our numerical results are presented for a range of experimental cuts and the relative importance of various contributions is analyzed. We also provide very compact expressions, analytically free from nonphysical parameters, and show them to be valid for fast yet accurate estimations.
NASA Technical Reports Server (NTRS)
Lock, James A.; Hovenac, Edward A.
1989-01-01
A correction algorithm for evaluating the particle size distribution measurements of atmospheric aerosols obtained with a forward-scattering spectrometer probe (FSSP) is examined. A model based on Poisson statistics is employed to calculate the average diameter and rms width of the particle size distribution. The dead time and coincidence errors in the measured number density are estimated. The model generated data are compared with a Monte Carlo simulation of the FSSP operation. It is observed that the correlation between the actual and measured size distribution is nonlinear. It is noted that the algorithm permits more accurate calculation of the average diameter and rms width of the distribution compared to uncorrected measured quantities.
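The dead-time part of such a Poisson-based correction is often written in closed form. A minimal sketch using the non-paralyzable dead-time model (which may differ from the FSSP's actual response; coincidence losses would need a separate term):

```python
def true_rate(measured_rate, dead_time):
    """Non-paralyzable dead-time correction: n = m / (1 - m * tau),
    inverting m = n / (1 + n * tau) for the true count rate n."""
    return measured_rate / (1.0 - measured_rate * dead_time)
```

The nonlinearity the abstract notes is visible here: as m·tau approaches 1, small changes in the measured rate map to large changes in the inferred true rate, so uncorrected number densities are biased low.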
NASA Astrophysics Data System (ADS)
Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.
2016-03-01
SMARTIES calculates the optical properties of oblate and prolate spheroidal particles, with capabilities and ease of use comparable to Mie theory for spheres. This suite of MATLAB codes provides a fully documented implementation of an improved T-matrix algorithm for the theoretical modelling of electromagnetic scattering by particles of spheroidal shape. Included are scripts that cover a range of scattering problems relevant to nanophotonics and plasmonics, including calculation of far-field scattering and absorption cross-sections for fixed incidence orientation, orientation-averaged cross-sections and scattering matrix, surface-field calculations as well as near-fields, wavelength-dependent near-field and far-field properties, and access to lower-level functions implementing the T-matrix calculations, including the T-matrix elements, which may be calculated more accurately than with competing codes.
NASA Astrophysics Data System (ADS)
Robinson, Andrew P.; Tipping, Jill; Cullen, David M.; Hamilton, David
2016-07-01
Accurate activity quantification is the foundation for all methods of radiation dosimetry for molecular radiotherapy (MRT). The requirements for patient-specific dosimetry using single photon emission computed tomography (SPECT) are challenging, particularly with respect to scatter correction. In this paper data from phantom studies, combined with results from a fully validated Monte Carlo (MC) SPECT camera simulation, are used to investigate the influence of the triple energy window (TEW) scatter correction on SPECT activity quantification for 177Lu MRT. Results from phantom data show that: (1) activity quantification for the total counts in the SPECT field-of-view demonstrates a significant overestimation in total activity recovery when TEW scatter correction is applied at low activities (≤ 200 MBq). (2) Applying the TEW scatter correction to activity quantification within a volume-of-interest with no background activity provides minimal benefit. (3) In the case of activity distributions with background activity, an overestimation of recovered activity of up to 30% is observed when using the TEW scatter correction. Data from MC simulation were used to perform a full analysis of the composition of events in a clinically reconstructed volume of interest. This allowed, for the first time, the separation of the relative contributions of partial volume effects (PVE) and inaccuracies in TEW scatter compensation to the observed overestimation of activity recovery. It is shown that, even with perfect partial volume compensation, TEW scatter correction can overestimate activity recovery by up to 11%. MC data are used to demonstrate that even a localized and optimized isotope-specific TEW correction cannot reflect a patient-specific activity distribution without prior knowledge of the complete activity distribution. This highlights the important role of MC simulation in SPECT activity quantification.
Jeong, Hyunjo; Zhang, Shuzeng; Li, Xiongbing; Barnard, Dan
2015-09-15
The accurate measurement of the acoustic nonlinearity parameter β for fluids or solids generally requires making corrections for diffraction effects due to the finite size geometry of transmitter and receiver. These effects are well known in linear acoustics, while those for second harmonic waves have not been well addressed and therefore not properly considered in previous studies. In this work, we explicitly define the attenuation and diffraction corrections using the multi-Gaussian beam (MGB) equations which were developed from the quasilinear solutions of the KZK equation. The effects of making these corrections are examined through the simulation of β determination in water. Diffraction corrections are found to have more significant effects than attenuation corrections, and the β values of water can be estimated experimentally with errors of less than 5% when the exact second harmonic diffraction corrections are used together with the negligible attenuation correction effects, on the basis of the linear frequency dependence between attenuation coefficients, α₂ ≃ 2α₁.
Patient-specific scatter correction for flat-panel detector-based cone-beam CT imaging
NASA Astrophysics Data System (ADS)
Zhao, Wei; Brunner, Stephen; Niu, Kai; Schafer, Sebastian; Royalty, Kevin; Chen, Guang-Hong
2015-02-01
A patient-specific scatter correction algorithm is proposed to mitigate scatter artefacts in cone-beam CT (CBCT). The approach belongs to the category of convolution-based methods in which a scatter potential function is convolved with a convolution kernel to estimate the scatter profile. A key step in this method is to determine the free parameters introduced in both scatter potential and convolution kernel using a so-called calibration process, which is to seek for the optimal parameters such that the models for both scatter potential and convolution kernel is able to optimally fit the previously known coarse estimates of scatter profiles of the image object. Both direct measurements and Monte Carlo (MC) simulations have been proposed by other investigators to achieve the aforementioned rough estimates. In the present paper, a novel method has been proposed and validated to generate the needed coarse scatter profile for parameter calibration in the convolution method. The method is based upon an image segmentation of the scatter contaminated CBCT image volume, followed by a reprojection of the segmented image volume using a given x-ray spectrum. The reprojected data is subtracted from the scatter contaminated projection data to generate a coarse estimate of the needed scatter profile used in parameter calibration. The method was qualitatively and quantitatively evaluated using numerical simulations and experimental CBCT data acquired on a clinical CBCT imaging system. Results show that the proposed algorithm can significantly reduce scatter artefacts and recover the correct CT number. Numerical simulation results show the method is patient specific, can accurately estimate the scatter, and is robust with respect to segmentation procedure. For experimental and in vivo human data, the results show the CT number can be successfully recovered and anatomical structure visibility can be significantly improved.
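The coarse scatter estimate described above amounts to a subtraction followed by smoothing: the reprojection of the segmented volume approximates the primary signal, and the low-frequency remainder of the measured projection is attributed to scatter. A 1-D sketch (the moving-average width is illustrative):

```python
import numpy as np

def coarse_scatter(measured_proj, reprojected_primary, smooth=5):
    """Coarse scatter profile: scatter-contaminated projection minus the
    reprojection of the segmented volume, smoothed because scatter is a
    low-frequency signal."""
    diff = measured_proj - reprojected_primary
    kernel = np.ones(smooth) / smooth
    return np.convolve(diff, kernel, mode="same")
```

In the full algorithm this coarse profile is then used only to calibrate the free parameters of the scatter potential and convolution kernel, not as the final scatter estimate.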
NASA Technical Reports Server (NTRS)
Pueschel, R. F.; Overbeck, V. R.; Snetsinger, K. G.; Russell, P. B.; Ferry, G. V.
1990-01-01
The use of the active scattering spectrometer probe (ASAS-X) to measure sulfuric acid aerosols on U-2 and ER-2 research aircraft has yielded results that are at times ambiguous due to the dependence of particles' optical signatures on refractive index as well as physical dimensions. The calibration correction of the ASAS-X optical spectrometer probe for stratospheric aerosol studies is validated through an independent and simultaneous sampling of the particles with impactors; sizing and counting of particles on SEM images yields total particle areas and volumes. Upon correction of calibration in light of these data, spectrometer results averaged over four size distributions are found to agree with similarly averaged impactor results to within a few percent: indicating that the optical properties or chemical composition of the sample aerosol must be known in order to achieve accurate optical aerosol spectrometer size analysis.
Flax, S W; O'Donnell, M
1988-01-01
Methods for correction of phase aberrations induced by near-field variations in the index of refraction are explored. Using signals obtained from a sampled aperture (i.e. transducer array), phase aberrations can be accurately measured with a correlation approach similar to methods used in adaptive optics and radar. However, the method presented here has no need for a beacon or an ideal point reflector to act as a source for estimating phase errors. It uses signals from random collections of scatterers to determine phase aberrations accurately. Because there is no longer a need for a beacon signal, the method is directly applicable not only to medical ultrasound imaging but also to any coherent imaging system with a sampled aperture, such as radar and sonar.
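The beacon-free delay estimate between two array elements reduces to finding the peak of the cross-correlation of their echo signals from the same random scatterers. A minimal sketch, returning the lag in samples:

```python
import numpy as np

def element_delay(sig_a, sig_b):
    """Estimate the arrival-time difference (in samples) between two array
    elements by peak cross-correlation of their echoes from random
    scatterers, as in beacon-free aberration estimation."""
    xc = np.correlate(sig_a, sig_b, mode="full")
    return int(np.argmax(xc)) - (len(sig_b) - 1)
```

Chaining such pairwise estimates across neighboring elements yields the aberration profile over the aperture, which can then be applied as per-channel focusing delays.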
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas; Hariharan, S. I.; Maccamy, R. C.
1993-01-01
We consider the solution of scattering problems for the wave equation using approximate boundary conditions at artificial boundaries. These conditions are explicitly viewed as approximations to an exact boundary condition satisfied by the solution on the unbounded domain. We study the short- and long-term behavior of the error. It is proved that, in two space dimensions, no local in time, constant coefficient boundary operator can lead to accurate results uniformly in time for the class of problems we consider. A variable coefficient operator is developed which attains better accuracy (uniformly in time) than is possible with constant coefficient approximations. The theory is illustrated by numerical examples. We also analyze the proposed boundary conditions using energy methods, leading to asymptotically correct error bounds.
One-loop Electroweak Radiative Corrections for Polarized Møller Scattering
NASA Astrophysics Data System (ADS)
Barkanova, Svetlana; Aleksejevs, Aleksandrs; Ilyichev, Alexander; Kolomensky, Yury; Zykunov, Vladimir
2011-04-01
Møller scattering measurements are a clean, powerful probe of new physics effects. However, before physics of interest can be extracted from the experimental data, radiative corrections must be taken into account very carefully. Using two different approaches, we perform updated and detailed calculations of the complete one-loop set of electroweak radiative corrections to parity violating electron-electron scattering asymmetry at low energies relevant for the ultra-precise 11 GeV MOLLER experiment planned at JLab. Although contributions from some of the self-energies and vertex diagrams calculated in the two approaches can differ significantly, our full gauge-invariant set still guarantees that the total relative weak corrections are in excellent agreement for the two methods of calculation. Our numerical results are presented for a range of experimental cuts and the relative importance of various contributions is analyzed. We also provide very compact expressions analytically free from non-physical parameters and show them to be valid for fast yet accurate estimations.
Semenov, Alexander; Babikov, Dmitri
2014-01-16
For computational treatment of rotationally inelastic scattering of molecules, we propose to use the mixed quantum/classical theory, MQCT. The old idea of treating translational motion classically, while quantum mechanics is used for rotational degrees of freedom, is developed to a new level and applied to Na + N2 collisions in a broad range of energies. Comparison with full-quantum calculations shows that MQCT accurately reproduces all, even minor, features of the energy dependence of cross sections, except scattering resonances at very low energies. The remarkable success of MQCT opens up wide opportunities for computational predictions of inelastic scattering cross sections at higher temperatures and/or for polyatomic molecules and heavier quenchers, which is computationally close to impossible within the full-quantum framework.
Monte-Carlo scatter correction for cone-beam computed tomography with limited scan field-of-view
NASA Astrophysics Data System (ADS)
Bertram, Matthias; Sattel, Timo; Hohmann, Steffen; Wiegert, Jens
2008-03-01
In flat detector cone-beam computed tomography (CBCT), scattered radiation is a major source of image degradation, making accurate a posteriori scatter correction indispensable. A potential solution to this problem is provided by computerized scatter correction based on Monte-Carlo simulations. Using this technique, the detected distributions of X-ray scatter are estimated for various viewing directions using Monte-Carlo simulations of an intermediate reconstruction. However, as a major drawback, for standard CBCT geometries and with standard size flat detectors such as those mounted on interventional C-arms, the scan field of view is too small to accommodate the human body without lateral truncations, and thus this technique cannot be readily applied. In this work, we present a novel method for constructing a model of the object in a laterally and possibly also axially extended field of view, which enables meaningful application of Monte-Carlo based scatter correction even in the case of heavy truncations. Evaluation is based on simulations of a clinical CT data set of a human abdomen, which strongly exceeds the field of view of the simulated C-arm based CBCT imaging geometry. By using the proposed methodology, almost complete removal of scatter-caused inhomogeneities is demonstrated in reconstructed images.
NASA Astrophysics Data System (ADS)
Mobberley, Sean David
Accurate, cross-scanner assessment of in-vivo air density used to quantitatively assess amount and distribution of emphysema in COPD subjects has remained elusive. Hounsfield units (HU) within tracheal air can be considerably more positive than -1000 HU. With the advent of new dual-source scanners which employ dedicated scatter correction techniques, it is of interest to evaluate how the quantitative measures of lung density compare between dual-source and single-source scan modes. This study has sought to characterize in-vivo and phantom-based air metrics using dual-energy computed tomography technology where the nature of the technology has required adjustments to scatter correction. Anesthetized ovine (N=6), swine (N=13: more human-like rib cage shape), lung phantom and a thoracic phantom were studied using a dual-source MDCT scanner (Siemens Definition Flash). Multiple dual-source dual-energy (DSDE) and single-source (SS) scans taken at different energy levels and scan settings were acquired for direct quantitative comparison. Density histograms were evaluated for the lung, tracheal, water and blood segments. Image data were obtained at 80, 100, 120, and 140 kVp in the SS mode (B35f kernel) and at 80, 100, 140, and 140-Sn (tin filtered) kVp in the DSDE mode (B35f and D30f kernels), in addition to variations in dose, rotation time, and pitch. To minimize the effect of cross-scatter, the phantom scans in the DSDE mode were obtained by reducing the tube current of one of the tubes to its minimum (near zero) value. When using image data obtained in the DSDE mode, the median HU values in the tracheal regions of all animals and the phantom were consistently closer to -1000 HU regardless of reconstruction kernel (chapters 3 and 4). Similarly, HU values of water and blood were consistently closer to their nominal values of 0 HU and 55 HU respectively. When using image data obtained in the SS mode the air CT numbers demonstrated a consistent positive shift of up to 35 HU
NASA Astrophysics Data System (ADS)
Gillen, Rebecca; Firbank, Michael J.; Lloyd, Jim; O'Brien, John T.
2015-09-01
This study investigated whether the appearance and diagnostic accuracy of HMPAO brain perfusion SPECT images could be improved by using CT-based attenuation and scatter correction compared with the uniform attenuation correction method. A cohort of subjects clinically categorized as Alzheimer's Disease (n=38), Dementia with Lewy Bodies (n=29) or healthy normal controls (n=30) underwent SPECT imaging with Tc-99m HMPAO and a separate CT scan. The SPECT images were processed using (a) a correction map derived from the subject's CT scan, (b) the Chang uniform approximation, or (c) no attenuation correction. Images were visually inspected. The ratios between key regions of interest known to be affected or spared in each condition were calculated for each correction method, and the differences between these ratios were evaluated. The images produced using the different corrections were visually distinct. However, ROI analysis found similar statistically significant differences between control and dementia groups and between AD and DLB groups regardless of the correction map used. We did not identify an improvement in diagnostic accuracy in images corrected using CT-based attenuation and scatter correction, compared with those corrected using a uniform correction map.
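The ROI-ratio analysis described above reduces, per subject, to the ratio of mean counts between an affected and a spared region, with the ratios then compared across groups. Below is a minimal Python sketch on synthetic data; the masks, uptake values, group sizes and effect size are all invented for illustration and this is not the authors' pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def roi_ratio(image, roi_a, roi_b):
    """Ratio of mean counts between two regions of interest (boolean masks)."""
    return image[roi_a].mean() / image[roi_b].mean()

# Illustrative synthetic "images": uniform uptake, with reduced uptake in
# ROI A for the patient group (a crude stand-in for hypoperfusion).
shape = (16, 16)
roi_a = np.zeros(shape, bool); roi_a[2:6, 2:6] = True      # "affected" region
roi_b = np.zeros(shape, bool); roi_b[10:14, 10:14] = True  # "spared" region

controls = [rng.normal(1.0, 0.05, shape) for _ in range(30)]
patients = [rng.normal(1.0, 0.05, shape) for _ in range(29)]
for p in patients:
    p[roi_a] *= 0.8  # simulated uptake deficit in the affected region

r_ctrl = [roi_ratio(im, roi_a, roi_b) for im in controls]
r_pat = [roi_ratio(im, roi_a, roi_b) for im in patients]
t, pval = stats.ttest_ind(r_ctrl, r_pat)
print(f"mean ratio controls {np.mean(r_ctrl):.2f}, patients {np.mean(r_pat):.2f}, p={pval:.2g}")
```

The point of the abstract is that this group-level separation survives regardless of which attenuation correction map produced the images.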
Commissioning a passive-scattering proton therapy nozzle for accurate SOBP delivery
Engelsman, M.; Lu, H.-M.; Herrup, D.; Bussiere, M.; Kooy, H. M.
2009-01-01
Proton radiotherapy centers that use passively scattered proton beams currently perform field-specific calibrations for a non-negligible fraction of treatment fields, which is time- and resource-consuming. Our improved understanding of the passive scattering mode of the IBA universal nozzle, especially of the current modulation function, allowed us to re-commission our treatment control system for accurate delivery of SOBPs of any range and modulation, and to predict the output for each of these fields. We moved away from individual field calibrations to a state where continued quality assurance of SOBP field delivery is ensured by limited system-wide measurements that require only one hour per week. This manuscript reports on a protocol for the generation of desired SOBPs and the prediction of dose output. PMID:19610306
NASA Astrophysics Data System (ADS)
Chen, Jingyi; Zebker, Howard A.; Knight, Rosemary
2015-11-01
Interferometric synthetic aperture radar (InSAR) is a radar remote sensing technique for measuring surface deformation to millimeter-level accuracy at meter-scale resolution. Obtaining accurate deformation measurements in agricultural regions is difficult because the signal is often decorrelated due to vegetation growth. We present here a new algorithm for retrieving InSAR deformation measurements over areas with severe vegetation decorrelation using adaptive phase interpolation between persistent scatterer (PS) pixels, those points at which surface scattering properties do not change much over time and thus decorrelation artifacts are minimal. We apply this algorithm to L-band ALOS interferograms acquired over the San Luis Valley, Colorado, and the Tulare Basin, California. In both areas, the pumping of groundwater for irrigation results in deformation of the land that can be detected using InSAR. We show that the PS-based algorithm can significantly reduce the artifacts due to vegetation decorrelation while preserving the deformation signature.
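The core of the PS-based approach is spatial interpolation: phase values are trusted only at sparse persistent-scatterer pixels and are interpolated across the decorrelated areas in between. A hedged Python illustration on synthetic data follows; the adaptive weighting of the actual algorithm is omitted, and the grid size, PS density and noise level are invented.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)

# Synthetic deformation phase on a 64x64 grid (radians, unwrapped).
y, x = np.mgrid[0:64, 0:64]
true_phase = 0.05 * (x + 0.5 * y)

# Persistent-scatterer pixels: a sparse subset where phase is reliable.
idx = rng.choice(64 * 64, size=300, replace=False)
ps_r, ps_c = np.unravel_index(idx, (64, 64))
ps_phase = true_phase[ps_r, ps_c] + rng.normal(0, 0.01, ps_r.size)

# Interpolate between PS pixels to replace decorrelated measurements.
linear = griddata((ps_r, ps_c), ps_phase, (y, x), method="linear")
# Pixels outside the PS convex hull fall back to the nearest PS value.
nearest = griddata((ps_r, ps_c), ps_phase, (y, x), method="nearest")
est_phase = np.where(np.isnan(linear), nearest, linear)

rmse = np.sqrt(np.mean((est_phase - true_phase) ** 2))
print(f"RMSE of interpolated phase: {rmse:.3f} rad")
```

In a real interferogram the interpolated field would then replace the decorrelated phase before further deformation analysis.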
Robust scatter correction method for cone-beam CT using an interlacing-slit plate
NASA Astrophysics Data System (ADS)
Huang, Kui-Dong; Xu, Zhe; Zhang, Ding-Hua; Zhang, Hua; Shi, Wen-Long
2016-06-01
Cone-beam computed tomography (CBCT) has been widely used in medical imaging and industrial nondestructive testing, but the presence of scattered radiation causes a significant reduction of image quality. In this article, a robust scatter correction method for CBCT using an interlacing-slit plate (ISP) is developed for convenient practical use. Firstly, a Gaussian filtering method is proposed to compensate for the missing data of the inner scatter image, while avoiding excessively large values of the calculated inner scatter and smoothing the inner scatter field. Secondly, an interlacing-slit scan without detector gain correction is carried out to enhance the practicality and convenience of the scatter correction method. Finally, a denoising step for scatter-corrected projection images is added to the processing flow to control noise amplification. The experimental results show that the improved method not only makes the scatter correction more robust and convenient, but also achieves good quality in the scatter-corrected slice images. Supported by National Science and Technology Major Project of the Ministry of Industry and Information Technology of China (2012ZX04007021), Aeronautical Science Fund of China (2014ZE53059), and Fundamental Research Funds for Central Universities of China (3102014KYJD022)
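The Gaussian-filtering step for filling missing inner-scatter data can be approximated by a normalized convolution: sparse scatter samples measured in the slit shadows are spread by a Gaussian kernel and renormalized, which both interpolates across the gaps and smooths the scatter field before subtraction. A simplified Python sketch on synthetic data follows; the sampling geometry, kernel width and all numbers are invented and this is not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic projection: flat primary plus a smooth scatter field (a.u.).
h, w = 128, 128
yy, xx = np.mgrid[0:h, 0:w]
scatter_true = 20 * np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / (2 * 50.0 ** 2))
primary = 100 * np.ones((h, w))
projection = primary + scatter_true

# Sparse scatter samples, e.g. measured behind slit shadows every 16 px.
mask = np.zeros((h, w))
mask[::16, ::16] = 1.0
samples = scatter_true * mask

# Normalized convolution: Gaussian-weighted interpolation of the samples,
# which simultaneously fills the gaps and smooths the scatter estimate.
sigma = 8.0
num = gaussian_filter(samples, sigma)
den = gaussian_filter(mask, sigma)
scatter_est = num / np.maximum(den, 1e-12)

corrected = projection - scatter_est
err = np.abs(scatter_est - scatter_true).mean()
print(f"mean |scatter estimation error|: {err:.2f}")
```

Because scatter varies slowly across the detector, even this coarse sampling recovers the field to within a small fraction of its amplitude.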
NASA Astrophysics Data System (ADS)
Brunner, Stephen; Nett, Brian E.; Tolakanahalli, Ranjini; Chen, Guang-Hong
2011-02-01
X-ray scatter is a significant problem in cone-beam computed tomography when thicker objects and larger cone angles are used, as scattered radiation can lead to reduced contrast and CT number inaccuracy. Advances have been made in x-ray computed tomography (CT) by incorporating a high quality prior image into the image reconstruction process. In this paper, we extend this idea to correct scatter-induced shading artifacts in cone-beam CT image-guided radiation therapy. Specifically, this paper presents a new scatter correction algorithm which uses a prior image with low scatter artifacts to reduce shading artifacts in cone-beam CT images acquired under conditions of high scatter. The proposed correction algorithm begins with an empirical hypothesis that the target image can be written as a weighted summation of a series of basis images that are generated by raising the raw cone-beam projection data to different powers and then reconstructing using the standard filtered backprojection algorithm. The weight for each basis image is calculated by minimizing the difference between the target image and the prior image. The performance of the scatter correction algorithm is qualitatively and quantitatively evaluated through phantom studies using a Varian 2100 EX System with an on-board imager. Results show that the proposed scatter correction algorithm using a prior image with low scatter artifacts can substantially mitigate scatter-induced shading artifacts in both full-fan and half-fan modes.
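The fitting step of the algorithm above can be sketched compactly: basis images are generated by raising the raw projection data to several powers and reconstructing each, and the basis weights are then found by least squares against the low-scatter prior. In the sketch below the reconstruction operator is replaced by the identity to keep the example self-contained (a real implementation would apply FBP, which is linear, so the fitting structure is the same); the data, exponents and prior are synthetic.

```python
import numpy as np

def reconstruct(proj):
    """Stand-in for filtered backprojection; identity for this 1D sketch."""
    return proj

# Synthetic, strictly positive "raw projection data".
n = 256
raw = 0.5 + 0.4 * np.sin(np.linspace(0, 3 * np.pi, n)) ** 2
# Synthetic low-scatter prior image (an arbitrary smooth transform of raw).
prior = reconstruct(raw) ** 1.3

# Basis images: reconstructions of the raw data raised to different powers.
powers = [0.8, 1.0, 1.2, 1.4]
basis = np.stack([reconstruct(raw ** p) for p in powers], axis=1)  # (n, K)

# Weights minimize || basis @ w - prior ||_2 (the prior-matching step).
w, *_ = np.linalg.lstsq(basis, prior, rcond=None)
target = basis @ w  # scatter-corrected target image

resid = np.linalg.norm(target - prior) / np.linalg.norm(prior)
print("weights:", np.round(w, 3), f"relative residual {resid:.2e}")
```

In the published method the prior constrains only the low-frequency shading, so the corrected image keeps the anatomy of the current scan while inheriting the prior's scatter-free grey levels.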
Scatter correction method for cone-beam CT based on interlacing-slit scan
NASA Astrophysics Data System (ADS)
Huang, Kui-Dong; Zhang, Hua; Shi, Yi-Kai; Zhang, Liang; Xu, Zhe
2014-09-01
Cone-beam computed tomography (CBCT) has the notable features of high efficiency and high precision, and is widely used in areas such as medical imaging and industrial non-destructive testing. However, the presence of ray scatter reduces the quality of CT images. By referencing the slit collimation approach, a scatter correction method for CBCT based on an interlacing-slit scan is proposed. Firstly, according to the characteristics of CBCT imaging, a scatter suppression plate with interlacing slits is designed and fabricated. The imaging of the scatter suppression plate is then analyzed, and a scatter correction calculation method based on image fusion is proposed: it splices a complete set of scatter-suppressed projection images from the interlacing-slit projections of the left and right imaging regions of the plate, and simultaneously completes the scatter correction within the flat panel detector (FPD). Finally, the overall process of scatter suppression and correction is provided. The experimental results show that this method can significantly improve the clarity of the slice images and achieve a good scatter correction.
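The image-fusion splice amounts to combining the complementary open strips of two slit exposures into one scatter-suppressed projection. A minimal Python illustration with an invented strip geometry (8-pixel horizontal strips; the real plate geometry differs):

```python
import numpy as np

# Ground-truth scatter-suppressed projection (synthetic gradient image).
h, w = 64, 64
full = np.tile(np.linspace(1.0, 2.0, w), (h, 1))

# Complementary slit masks: strips blocked in exposure A are open in B.
strip = (np.arange(h) // 8) % 2 == 0
open_a = strip[:, None] & np.ones(w, bool)
open_b = ~open_a

proj_a = np.where(open_a, full, 0.0)  # exposure with plate position A
proj_b = np.where(open_b, full, 0.0)  # exposure with plate position B

# Image fusion: splice the open strips of both exposures together.
fused = np.where(open_a, proj_a, proj_b)
print("splice exact:", np.allclose(fused, full))
```

In practice the two exposures are taken at adjacent rotation steps, so small interpolation corrections at the strip boundaries are also needed.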
Hong Xinguo; Hao Quan
2009-01-15
In this paper, we report a method of precise in situ x-ray scattering measurements on protein solutions using small stationary sample cells. Although reduction of the radiation damage induced by intense synchrotron radiation sources is indispensable for the correct interpretation of scattering data, there is still a lack of effective methods to overcome radiation-induced aggregation and extract scattering profiles free from chemical or structural damage. It is found that radiation-induced aggregation mainly begins on the surface of the sample cell and grows along the beam path; the diameter of the damaged region is comparable to the x-ray beam size. Radiation-induced aggregation can be effectively avoided by using a two-dimensional scan (2D mode), with an interval as small as 1.5 times the beam size, at low temperature (e.g., 4 °C). A radiation-sensitive protein, bovine hemoglobin, was used to test the method. A standard deviation of less than 5% in the small-angle region was observed for a series of nine spectra recorded in 2D mode, in contrast to the intensity variation seen with the conventional stationary technique, which can exceed 100%. Wide-angle x-ray scattering data were collected at a standard macromolecular diffraction station using the same data collection protocol and showed a good signal-to-noise ratio (better than the reported data on the same protein using a flow cell). The results indicate that this method is an effective approach for obtaining precise measurements of protein solution scattering.
Characterization of image quality for 3D scatter-corrected breast CT images
NASA Astrophysics Data System (ADS)
Pachon, Jan H.; Shah, Jainil; Tornai, Martin P.
2011-03-01
The goal of this study was to characterize the image quality of our dedicated, quasi-monochromatic spectrum, cone beam breast imaging system under scatter-corrected and non-scatter-corrected conditions for a variety of breast compositions. CT projections were acquired of a breast phantom containing two concentric sets of acrylic spheres that varied in size (1-8 mm) based on their polar position. The breast phantom was filled with 3 different concentrations of methanol and water, simulating a range of breast densities (0.79-1.0 g/cc); acrylic yarn was sometimes included to simulate the connective tissue of a breast. For each phantom condition, 2D scatter was measured for all projection angles. Scatter-corrected and uncorrected projections were then reconstructed with an iterative ordered subsets convex algorithm. Reconstructed image quality was characterized using SNR and contrast analysis, followed by a human observer detection task for the spheres in the different concentric rings. Results show that scatter correction effectively reduces the cupping artifact and improves image contrast and SNR. Results from the observer study indicate that there was no statistical difference in the number or sizes of lesions observed in the scatter- versus non-scatter-corrected images for all densities. Nonetheless, applying scatter correction for differing breast conditions improves overall image quality.
NASA Astrophysics Data System (ADS)
Cheng, Ju-Chieh Kevin; Rahmim, Arman; Blinder, Stephan; Camborde, Marie-Laure; Raywood, Kelvin; Sossi, Vesna
2007-04-01
We describe an ordinary Poisson list-mode expectation maximization (OP-LMEM) algorithm with a sinogram-based scatter correction method based on the single scatter simulation (SSS) technique and a random correction method based on the variance-reduced delayed-coincidence technique. We also describe a practical approximate scatter and random-estimation approach for dynamic PET studies based on a time-averaged scatter and random estimate followed by scaling according to the global numbers of true coincidences and randoms for each temporal frame. The quantitative accuracy achieved using OP-LMEM was compared to that obtained using the histogram-mode 3D ordinary Poisson ordered subset expectation maximization (3D-OP) algorithm with similar scatter and random correction methods, and they showed excellent agreement. The accuracy of the approximated scatter and random estimates was tested by comparing time activity curves (TACs) as well as the spatial scatter distribution from dynamic non-human primate studies obtained from the conventional (frame-based) approach and those obtained from the approximate approach. An excellent agreement was found, and the time required for the calculation of scatter and random estimates in the dynamic studies became much less dependent on the number of frames (we achieved a nearly four times faster performance on the scatter and random estimates by applying the proposed method). The precision of the scatter fraction was also demonstrated for the conventional and the approximate approach using phantom studies. This work was supported by the Canadian Institute of Health Research, a TRIUMF Life Science Grant, the Natural Sciences and Engineering Research Council of Canada UFA (V Sossi) and the Michael Smith Foundation for Health Research Scholarship (V Sossi).
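The approximate per-frame estimation described above is essentially a global rescaling of the time-averaged sinograms by each frame's count totals. A hedged Python sketch follows; the array sizes and count values are arbitrary and illustrative only.

```python
import numpy as np

def scale_estimates(avg_scatter, avg_randoms,
                    trues_global, randoms_global,
                    trues_frame, randoms_frame):
    """Scale time-averaged scatter/random sinograms to one temporal frame.

    The time-averaged estimates are computed once for the whole study and
    rescaled per frame by the global true/random counts, avoiding a full
    per-frame scatter simulation.
    """
    s = avg_scatter * (trues_frame / trues_global)
    r = avg_randoms * (randoms_frame / randoms_global)
    return s, r

avg_scatter = np.full((4, 8), 10.0)  # time-averaged scatter sinogram (a.u.)
avg_randoms = np.full((4, 8), 6.0)   # time-averaged randoms sinogram (a.u.)
s1, r1 = scale_estimates(avg_scatter, avg_randoms,
                         trues_global=1e6, randoms_global=4e5,
                         trues_frame=2.5e5, randoms_frame=1.2e5)
print(s1[0, 0], r1[0, 0])  # 2.5 1.8
```

This is why the cost of the dynamic-study correction becomes nearly independent of the number of frames: only the scaling, not the simulation, is repeated.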
Rescattering corrections and self-consistent metric in planckian scattering
NASA Astrophysics Data System (ADS)
Ciafaloni, M.; Colferai, D.
2014-10-01
Starting from the ACV approach to transplanckian scattering, we present a development of the reduced-action model in which the (improved) eikonal representation is able to describe particles' motion at large scattering angle and, furthermore, UV-safe (regular) rescattering solutions are found and incorporated in the metric. The resulting particles' shock-waves undergo calculable trajectory shifts and time delays during the scattering process, which turns out to be consistently described by both action and metric, up to relative order R²/b² in the expansion in the gravitational radius over the impact parameter. Some suggestions about the role and the (re)scattering properties of irregular solutions, not fully investigated here, are also presented.
Use of beam stoppers to correct random and scatter coincidence in PET: A Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Lin, Hsin-Hon; Chuang, Keh-Shih; Lu, Cheng-Chang; Ni, Yu-Ching; Jan, Meei-Ling
2013-05-01
3D acquisition in positron emission tomography (PET) produces data with improved signal-to-noise ratios compared with conventional 2D PET. However, the sensitivity increase is accompanied by an increase in the number of scattered photons and random coincidences detected. Scatter and random coincidences lead to a loss in image contrast and degrade the accuracy of quantitative analysis. This work examines the feasibility of using beam stoppers (BS) to correct scatter and random coincidences simultaneously. The origin of a non-true event does not lie on its recorded line of response (LOR). Therefore, a BS placed on the LOR that passes through the source position absorbs a particular fraction of the true events but has little effect on the scatter and random events. The subtraction of the two scanned data sets, with and without the BS, can be employed to estimate the non-true events at the LOR. Monte Carlo (MC) simulations of 3D PET on an EEC phantom and a Zubal phantom are conducted to validate the proposed approach. Both scattered and random coincidences can be estimated and corrected using the proposed method. The mean squared errors measured on the random+scatter sinogram of the phantom obtained by the proposed method are much less than those obtained using the conventional correction method (delayed-coincidence subtraction for random correction combined with single scatter simulation for scatter correction). Preliminary results indicate that the proposed method is feasible for clinical application.
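Ignoring noise, the with/without-stopper subtraction gives a closed-form estimate of the non-true counts on a blocked LOR: if the open scan records T + N and the blocked scan records (1 - f)T + N, where f is the absorbed fraction of true events, then T = (D_open - D_blocked)/f and N = D_open - T. A minimal Python sketch for a single LOR with invented, noiseless numbers:

```python
def estimate_nontrue(d_open, d_blocked, f_absorbed):
    """Estimate scatter+random (non-true) counts on a blocked LOR.

    d_open:     counts without the beam stopper, T + N
    d_blocked:  counts with the stopper in place, (1 - f)*T + N
    f_absorbed: fraction of true events absorbed by the stopper
    """
    trues = (d_open - d_blocked) / f_absorbed
    return d_open - trues

# Illustrative counts for one line of response (arbitrary values).
T, N, f = 800.0, 300.0, 0.9
d_open = T + N
d_blocked = (1 - f) * T + N
n_est = estimate_nontrue(d_open, d_blocked, f)
print(n_est)  # recovers N = 300.0
```

With Poisson noise the subtraction is applied sinogram-wide and smoothed, since the non-true distribution varies slowly across LORs.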
NASA Astrophysics Data System (ADS)
Park, Seyoun; Robinson, Adam; Quon, Harry; Kiess, Ana P.; Shen, Colette; Wong, John; Plishker, William; Shekhar, Raj; Lee, Junghoon
2016-03-01
In this paper, we propose a CT-CBCT registration method to accurately predict the tumor volume change based on daily cone-beam CTs (CBCTs) during radiotherapy. CBCT is commonly used to reduce patient setup error during radiotherapy, but its poor image quality impedes accurate monitoring of anatomical changes. Although physician's contours drawn on the planning CT can be automatically propagated to daily CBCTs by deformable image registration (DIR), artifacts in CBCT often cause undesirable errors. To improve the accuracy of the registration-based segmentation, we developed a DIR method that iteratively corrects CBCT intensities by local histogram matching. Three popular DIR algorithms (B-spline, demons, and optical flow) with the intensity correction were implemented on a graphics processing unit for efficient computation. We evaluated their performances on six head and neck (HN) cancer cases. For each case, four trained scientists manually contoured the nodal gross tumor volume (GTV) on the planning CT and every other fraction CBCTs to which the propagated GTV contours by DIR were compared. The performance was also compared with commercial image registration software based on conventional mutual information (MI), VelocityAI (Varian Medical Systems Inc.). The volume differences (mean±std in cc) between the average of the manual segmentations and automatic segmentations are 3.70±2.30 (B-spline), 1.25±1.78 (demons), 0.93±1.14 (optical flow), and 4.39±3.86 (VelocityAI). The proposed method significantly reduced the estimation error by 9% (B-spline), 38% (demons), and 51% (optical flow) over the results using VelocityAI. Although demonstrated only on HN nodal GTVs, the results imply that the proposed method can produce improved segmentation of other critical structures over conventional methods.
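The local histogram-matching intensity correction can be sketched as tile-wise quantile mapping of CBCT intensities onto the planning-CT distribution. The Python illustration below uses a plain rank-based matcher on synthetic images; the tile size, the shading model and all values are invented, and the published method applies this iteratively inside a full DIR loop.

```python
import numpy as np

rng = np.random.default_rng(4)

def match_histogram(src, ref):
    """Map src values so their empirical distribution matches ref
    (rank-based quantile mapping; src and ref need not be aligned)."""
    s_idx = np.argsort(src.ravel())
    out = np.empty(src.size)
    out[s_idx] = np.sort(ref.ravel())[
        np.linspace(0, ref.size - 1, src.size).astype(int)]
    return out.reshape(src.shape)

def local_histogram_match(cbct, ct, tile=32):
    """Tile-wise histogram matching of CBCT intensities to the planning CT."""
    out = np.empty_like(cbct)
    for r in range(0, cbct.shape[0], tile):
        for c in range(0, cbct.shape[1], tile):
            sl = (slice(r, r + tile), slice(c, c + tile))
            out[sl] = match_histogram(cbct[sl], ct[sl])
    return out

# Synthetic planning CT and a CBCT with a gain/offset "shading" artifact.
ct = rng.normal(0.0, 1.0, (64, 64))
cbct = 1.5 * ct + 20.0 + rng.normal(0, 0.1, (64, 64))
corrected = local_histogram_match(cbct, ct)
print(f"mean |corrected - ct| = {np.abs(corrected - ct).mean():.3f}")
```

Working tile by tile is what lets the correction absorb spatially varying CBCT shading rather than a single global intensity shift.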
Increasing the imaging depth through computational scattering correction (Conference Presentation)
NASA Astrophysics Data System (ADS)
Koberstein-Schwarz, Benno; Omlor, Lars; Schmitt-Manderbach, Tobias; Mappes, Timo; Ntziachristos, Vasilis
2016-03-01
Imaging depth is one of the most prominent limitations in light microscopy. The depth at which we are still able to resolve biological structures is limited by the scattering of light within the sample. We have developed an algorithm to compensate for the influence of scattering. The potential of the algorithm is demonstrated on a 3D image stack of a zebrafish embryo captured with a selective plane illumination microscope (SPIM). With our algorithm we were able to shift the depth at which scattering starts to blur the image and affect image quality by around 30 µm. The reconstruction uses only information from within the image stack, so the algorithm can be applied to image data from any SPIM system without further hardware adaptation, and there is no need for multiple scans from different views. The underlying model describes the recorded image as a convolution between the distribution of fluorophores and a point spread function that captures the blur due to scattering. Our algorithm performs a space-variant blind deconvolution on the image. To account for the increasing amount of scattering in deeper tissue, we introduce a new regularizer which models the increasing width of the point spread function, in order to improve the image quality at depth. Since the assumptions the algorithm is based on are not limited to SPIM images, the algorithm should also work on other imaging techniques that provide a 3D image volume.
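A strongly simplified, non-blind stand-in for the idea: deconvolve each depth slice with a Gaussian PSF whose width grows with depth, mimicking the increasing scatter blur deeper in the sample. The sketch below runs Richardson-Lucy iterations on a synthetic bead stack; the actual method is a space-variant blind deconvolution, and the PSF model, growth rate and data here are invented.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def richardson_lucy(obs, sigma, n_iter=30):
    """Richardson-Lucy deconvolution with a symmetric Gaussian PSF
    (for a symmetric PSF the adjoint blur equals the forward blur)."""
    est = np.full_like(obs, obs.mean())
    for _ in range(n_iter):
        blurred = gaussian_filter(est, sigma)
        est = est * gaussian_filter(obs / np.maximum(blurred, 1e-12), sigma)
    return est

def deblur_stack(stack, sigma0=0.5, growth=0.2):
    """Deblur each depth slice with a PSF width that grows with depth."""
    return np.stack([richardson_lucy(sl, sigma0 + growth * z)
                     for z, sl in enumerate(stack)])

# Synthetic stack: a point-like bead blurred more strongly at depth.
z_n, h, w = 5, 32, 32
truth = np.zeros((z_n, h, w))
truth[:, 16, 16] = 100.0
stack = np.stack([gaussian_filter(truth[z], 0.5 + 0.2 * z) for z in range(z_n)])
deblurred = deblur_stack(stack)
print("peak before/after (deepest slice):", stack[-1].max(), deblurred[-1].max())
```

The depth-dependent regularizer in the published method plays the role of the hand-set `growth` parameter here: it lets the estimated PSF widen with depth instead of fixing the schedule in advance.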
Experimental Scatter Correction Methods in Industrial X-Ray Cone-Beam CT
NASA Astrophysics Data System (ADS)
Schörner, K.; Goldammer, M.; Stephan, J.
2011-06-01
Scattered radiation presents a major source of image degradation in industrial cone-beam computed tomography systems. Scatter artifacts introduce streaks, cupping and a loss of contrast in the reconstructed CT-volumes. In order to overcome scatter artifacts, we present two complementary experimental correction methods: the beam-stop array (BSA) and an inverse technique we call beam-hole array (BHA). Both correction methods are examined in comparative measurements where it is shown that the aperture-based BHA technique has practical and scatter-reducing advantages over the BSA. The proposed BHA correction method is successfully applied to a large-scale industrial specimen whereby scatter artifacts are reduced and contrast is enhanced significantly.
NASA Technical Reports Server (NTRS)
Boughner, Robert E.
1986-01-01
A method for calculating the photodissociation rates needed for photochemical modeling of the stratosphere, which includes the effects of molecular scattering, is described. The procedure is based on Sokolov's method of averaging functional correction. The radiation model and approximations used to calculate the radiation field are examined. The approximated diffuse fields and photolysis rates are compared with exact data. It is observed that the approximate solutions differ from the exact result by 10 percent or less at altitudes above 15 km; the photolysis rates differ from the exact rates by less than 5 percent for altitudes above 10 km and all zenith angles, and by less than 1 percent for altitudes above 15 km.
Xu, Ninghan; Bai, Benfeng; Tan, Qiaofeng; Jin, Guofan
2013-09-01
Aspect ratio, width, and end-cap factor are three critical parameters defined to characterize the geometry of a metallic nanorod (NR). In our previous work [Opt. Express 21, 2987 (2013)], we reported an optical extinction spectroscopic (OES) method that can measure the aspect ratio distribution of gold NR ensembles effectively and statistically. However, the measurement accuracy was found to depend on the estimates of the width and end-cap factor of the nanorod, which unfortunately cannot be determined by the OES method itself. In this work, we improve the accuracy of the OES method by applying an auxiliary scattering measurement of the NR ensemble, which helps to estimate the mean width of the gold NRs effectively. This so-called optical extinction/scattering spectroscopic (OESS) method can rapidly characterize the aspect ratio distribution as well as the mean width of gold NR ensembles simultaneously. Comparison with transmission electron microscopy experiments shows that the OESS method has the advantage of determining two of the three critical parameters of NR ensembles (i.e., the aspect ratio and the mean width) more accurately and conveniently than the OES method.
SMARTIES: User-friendly codes for fast and accurate calculations of light scattering by spheroids
NASA Astrophysics Data System (ADS)
Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.
2016-05-01
We provide a detailed user guide for SMARTIES, a suite of MATLAB codes for the calculation of the optical properties of oblate and prolate spheroidal particles, with comparable capabilities and ease-of-use as Mie theory for spheres. SMARTIES is a MATLAB implementation of an improved T-matrix algorithm for the theoretical modelling of electromagnetic scattering by particles of spheroidal shape. The theory behind the improvements in numerical accuracy and convergence is briefly summarized, with reference to the original publications. Instructions for use, a detailed description of the code structure, its range of applicability, and guidelines for further developments by advanced users are discussed in separate sections of this user guide. The code may be useful to researchers seeking a fast, accurate and reliable tool to simulate the near-field and far-field optical properties of elongated particles, but will also appeal to other developers of light-scattering software seeking a reliable benchmark for non-spherical particles with a challenging aspect ratio and/or refractive index contrast.
Accurate assessment of mass, models and resolution by small-angle scattering
Rambo, Robert P.; Tainer, John A.
2013-01-01
Modern small-angle scattering (SAS) experiments with X-rays or neutrons provide a comprehensive, resolution-limited observation of the thermodynamic state. However, methods for evaluating mass and validating SAS-based models and resolution have been inadequate. Here, we define the volume-of-correlation, Vc: a SAS invariant derived from the scattered intensities that is specific to the structural state of the particle, yet independent of concentration and of the requirements of a compact, folded particle. We show that Vc defines a ratio, Qr, that determines the molecular mass of proteins or RNA ranging from 10 to 1,000 kDa. Furthermore, we propose a statistically robust method for assessing model-data agreement (χ²free) akin to cross-validation. Our approach prevents over-fitting of the SAS data and can be used with a newly defined metric, Rsas, for quantitative evaluation of resolution. Together, these metrics (Vc, Qr, χ²free, and Rsas) provide analytical tools for unbiased and accurate macromolecular structural characterizations in solution. PMID:23619693
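The invariant can be illustrated numerically: Vc = I(0) / ∫ q·I(q) dq, and the mass-determining ratio is Qr = Vc²/Rg. A hedged Python sketch on a synthetic Guinier-like profile follows; a real analysis integrates the full measured curve, and the final mass estimate uses an empirical power law not reproduced here.

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integration, written out for NumPy-version independence."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def volume_of_correlation(q, i_q):
    """Vc = I(0) / integral of q*I(q) dq.
    Assumes i_q[0] ~ I(0) and a q range wide enough for convergence."""
    return i_q[0] / _trapz(q * i_q, q)

rg = 20.0                            # radius of gyration (Angstrom)
q = np.linspace(1e-4, 1.0, 5000)     # scattering vector (1/Angstrom)
i_q = np.exp(-(q * rg) ** 2 / 3.0)   # Guinier-like profile with I(0) = 1

vc = volume_of_correlation(q, i_q)   # analytic value here: 2*Rg^2/3 ~ 266.7
qr = vc ** 2 / rg                    # the ratio used for mass estimation
print(f"Vc = {vc:.1f} A^2, Qr = {qr:.1f} A^3")
```

For this pure-Gaussian profile the integral has a closed form, which makes it a convenient sanity check before applying the same code to measured data.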
NASA Astrophysics Data System (ADS)
Elnasir, Selma; Shamsuddin, Siti Mariyam; Farokhi, Sajad
2015-01-01
Palm vein recognition (PVR) is a promising new biometric that has been applied successfully as a method of access control by many organizations, and has even further potential in the field of forensics. The palm vein pattern has highly discriminative features that are difficult to forge because of their subcutaneous position in the palm. Despite considerable progress, a few practical issues remain, and providing accurate palm vein readings is still an open problem in biometrics. We propose a robust and more accurate PVR method based on the combination of wavelet scattering (WS) with spectral regression kernel discriminant analysis (SRKDA). As the dimension of the WS-generated features is quite large, SRKDA is required to reduce the extracted features and enhance the discrimination. The results, based on two public databases (the PolyU Hyper Spectral Palmprint database and the PolyU Multi Spectral Palmprint database), show the high performance of the proposed scheme in comparison with state-of-the-art methods. The proposed approach scored a 99.44% identification rate and a 99.90% verification rate (equal error rate (EER) = 0.1%) for the hyperspectral database, and a 99.97% identification rate and a 99.98% verification rate (EER = 0.019%) for the multispectral database.
The accurate assessment of small-angle X-ray scattering data
Grant, Thomas D.; Luft, Joseph R.; Carter, Lester G.; Matsui, Tsutomu; Weiss, Thomas M.; Martel, Anne; Snell, Edward H.
2015-01-23
Small-angle X-ray scattering (SAXS) has grown in popularity in recent times with the advent of bright synchrotron X-ray sources, powerful computational resources and algorithms enabling the calculation of increasingly complex models. However, the lack of standardized data-quality metrics presents difficulties for the growing user community in accurately assessing the quality of experimental SAXS data. Here, a series of metrics to quantitatively describe SAXS data in an objective manner using statistical evaluations are defined. These metrics are applied to identify the effects of radiation damage, concentration dependence and interparticle interactions on SAXS data from a set of 27 previously described targets for which high-resolution structures have been determined via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. Studies show that these metrics are sufficient to characterize SAXS data quality on a small sample set with statistical rigor and sensitivity similar to or better than manual analysis. The development of data-quality analysis strategies such as these initial efforts is needed to enable the accurate and unbiased assessment of SAXS data quality.
Min, Jonghwan; Pua, Rizza; Cho, Seungryong; Kim, Insoo; Han, Bumsoo
2015-11-15
Purpose: A beam-blocker composed of multiple strips is a useful gadget for scatter correction and/or dose reduction in cone-beam CT (CBCT). However, the use of such a beam-blocker yields cone-beam data that can be challenging for accurate image reconstruction from a single scan in the filtered-backprojection framework. The focus of this work was to develop an analytic image reconstruction method for CBCT that can be applied directly to partially blocked cone-beam data in conjunction with scatter correction. Methods: The authors developed a rebinned backprojection-filtration (BPF) algorithm for reconstructing images from partially blocked cone-beam data in a circular scan. The authors also proposed a beam-blocking geometry that exploits data redundancy so that an efficient scatter estimate can be acquired and sufficient data for BPF image reconstruction can be secured at the same time from a single scan, without any blocker motion. Additionally, a scatter correction method and a noise reduction scheme were developed. The authors performed both simulation and experimental studies to validate the rebinned BPF algorithm for image reconstruction from partially blocked cone-beam data. Quantitative evaluations of the reconstructed image quality were performed in the experimental studies. Results: The simulation study revealed that the developed reconstruction algorithm successfully reconstructs images from the partial cone-beam data. In the experimental study, the proposed method effectively corrected for the scatter in each projection and reconstructed scatter-corrected images from a single scan. Reduction of cupping artifacts and enhancement of the image contrast were demonstrated. The image contrast increased by a factor of about 2, and the image accuracy in terms of root-mean-square error with respect to the fan-beam CT image improved by more than 30%. Conclusions: The authors have successfully demonstrated that the
Scatter correction in scintillation camera imaging of positron-emitting radionuclides
Ljungberg, M.; Danfelter, M.; Strand, S.E.
1996-12-31
The use of Anger scintillation cameras for positron SPECT has become of interest recently due to their use in imaging 2-¹⁸F-deoxyglucose. Due to the special crystal design (thin and wide), a significant amount of primary events will also be recorded in the Compton region of the energy spectra. Events recorded in a second Compton window (CW) can add information to the data in the photopeak window (PW), since some events are correctly positioned in the CW. However, a significant amount of scatter is also included in the CW, which needs to be corrected. This work describes a method whereby a third scatter window (SW) is used to estimate the scatter distribution in the CW and the PW. The accuracy of the estimation has been evaluated by Monte Carlo simulations in a homogeneous elliptical phantom for point and extended sources. Two examples of clinical application are also provided. Results from simulations show that essentially only scatter from the phantom is recorded between the 511 keV PW and the 340 keV CW. Scatter projection data with a constant multiplier can estimate the scatter in the CW and PW, although the scatter distribution in the SW corresponds better to the scatter distribution in the CW. The multiplier k for the CW varies significantly more with depth than it does for the PW. Clinical studies show an improvement in image quality when using scatter-corrected combined PW and CW data.
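The window-based correction described above amounts to subtracting scaled scatter-window counts from the photopeak and Compton windows before combining them. A minimal sketch follows; the counts and the multipliers k_pw and k_cw are purely illustrative, not values from the paper.

```python
import numpy as np

def scatter_window_correct(pw, cw, sw, k_pw, k_cw):
    """Estimate scatter in the photopeak window (PW) and Compton window (CW)
    by scaling the scatter-window (SW) counts, subtract it, and combine the
    two corrected windows. k_pw and k_cw play the role of the depth-dependent
    multipliers discussed in the abstract (illustrative values only)."""
    pw_corr = np.clip(pw - k_pw * sw, 0.0, None)   # clip avoids negative counts
    cw_corr = np.clip(cw - k_cw * sw, 0.0, None)
    return pw_corr + cw_corr

# Two detector pixels with toy counts in each energy window.
pw = np.array([100.0, 80.0])
cw = np.array([60.0, 50.0])
sw = np.array([40.0, 30.0])
combined = scatter_window_correct(pw, cw, sw, k_pw=0.5, k_cw=1.0)
# pixel 0: (100 - 20) + (60 - 40) = 100 ; pixel 1: (80 - 15) + (50 - 30) = 85
```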
Ouyang, L; Yan, H; Jia, X; Jiang, S; Wang, J; Zhang, H
2014-06-01
Purpose: A moving-blocker-based strategy has shown promising results for scatter correction in cone-beam computed tomography (CBCT). Different parameters of the system design affect its performance in scatter estimation and image reconstruction accuracy. The goal of this work is to optimize the geometric design of the moving blocker system. Methods: In the moving blocker system, a blocker consisting of lead strips is inserted between the x-ray source and the imaged object and moved back and forth along the rotation axis during CBCT acquisition. A CT image of an anthropomorphic pelvic phantom was used in the simulation study. Scatter signal was simulated by Monte Carlo calculation with various combinations of the lead strip width and the gap between neighboring lead strips, ranging from 4 mm to 80 mm (projected at the detector plane). Scatter signal in the unblocked region was estimated by cubic B-spline interpolation from the blocked region. Scatter estimation accuracy was quantified as relative root mean squared error by comparing the interpolated scatter to the Monte Carlo simulated scatter. CBCT was reconstructed by total variation minimization from the unblocked region, under various combinations of the lead strip width and gap. Reconstruction accuracy in each condition was quantified by CT number error relative to a CBCT reconstructed from unblocked full projection data. Results: Scatter estimation error varied from 0.5% to 2.6% as the lead strip width and the gap varied from 4 mm to 80 mm. CT number error in the reconstructed CBCT images varied from 12 to 44. The highest reconstruction accuracy is achieved when the blocker lead strip width is 8 mm and the gap is 48 mm. Conclusions: Accurate scatter estimation can be achieved over a large range of combinations of lead strip width and gap. However, image reconstruction accuracy is greatly affected by the geometric design of the blocker.
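The interpolation step can be sketched in one dimension. Here `np.interp` (linear) stands in for the cubic B-spline interpolation named in the abstract, and the strip spacing and scatter profile are synthetic; the relative RMSE metric mirrors the one used for evaluation.

```python
import numpy as np

# 1-D sketch of blocker-based scatter estimation: scatter is sampled in the
# lead-strip shadows and interpolated across the unblocked gaps.
x = np.arange(400)                                # detector column index
true_scatter = 100.0 + 40.0 * np.sin(x / 80.0)    # smooth synthetic scatter
strip_centers = np.arange(20, 400, 56)            # columns behind lead strips
estimate = np.interp(x, strip_centers, true_scatter[strip_centers])

# Relative RMSE over the region spanned by the samples.
interior = (x >= strip_centers[0]) & (x <= strip_centers[-1])
rrmse = (np.sqrt(np.mean((estimate[interior] - true_scatter[interior]) ** 2))
         / np.mean(true_scatter[interior]))
```

Because scatter varies slowly across the detector, even coarse sampling recovers it to within a few percent, which is consistent with the small estimation errors the abstract reports.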
Gerasimov, R. E.; Fadin, V. S.
2015-01-15
An analysis of the approximations used in calculations of radiative corrections to the electron-proton scattering cross section is presented. We investigate the difference between the relatively recent result of Maximon and Tjon and the result of Mo and Tsai, which was used in the analysis of experimental data. We also discuss how the proton form factor ratio depends on the way radiative corrections are taken into account.
Constrained γZ correction to parity-violating electron scattering
Hall, Nathan Luk; Blunden, Peter Gwithian; Melnitchouk, Wally; Thomas, Anthony W.; Young, Ross D.
2013-11-01
We update the calculation of γZ interference corrections to the weak charge of the proton. We show how constraints from parton distributions, together with new data on parity-violating electron scattering in the resonance region, significantly reduce the uncertainties on the corrections compared to previous estimates.
X-Ray Scatter Correction on Soft Tissue Images for Portable Cone Beam CT
Aootaphao, Sorapong; Thongvigitmanee, Saowapak S.; Rajruangrabin, Jartuwat; Thanasupsombat, Chalinee; Srivongsa, Tanapon; Thajchayapong, Pairash
2016-01-01
Soft tissue images from portable cone beam computed tomography (CBCT) scanners can be used for the diagnosis and detection of tumors, cancer, intracerebral hemorrhage, and so forth. Due to the large field of view, X-ray scattering, which is the main cause of artifacts, degrades image quality, producing cupping artifacts, CT number inaccuracy, and low contrast, especially in soft tissue images. In this work, we propose an X-ray scatter correction method for improving soft tissue images. The X-ray scatter correction scheme to estimate X-ray scatter signals is based on a deconvolution technique using the maximum likelihood estimation maximization (MLEM) method. The scatter kernels are obtained by simulating a PMMA sheet in Monte Carlo simulation (MCS) software. In the experiment, we used the QRM phantom for quantitative comparison with fan-beam CT (FBCT) data in terms of CT number values, contrast-to-noise ratio, cupping artifacts, and low-contrast detectability. Moreover, the PH3 angiography phantom was also used to mimic human soft tissues in the brain. The reconstructed images with our proposed scatter correction show significant improvement in image quality. Thus the proposed scatter correction technique has high potential to detect soft tissues in the brain. PMID:27022608
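For a convolution model, the MLEM deconvolution named in the abstract coincides with the Richardson-Lucy iteration. The sketch below is a generic 1-D version with a synthetic Gaussian kernel; it is not the authors' kernels or implementation, just the update rule.

```python
import numpy as np

def mlem_deconvolve(measured, kernel, n_iter=100):
    """Richardson-Lucy iteration (the MLEM solution for a convolution model):
    estimates a non-negative latent signal s with measured ≈ kernel ⊛ s."""
    kernel = np.asarray(kernel, float)
    kernel = kernel / kernel.sum()
    flipped = kernel[::-1]
    est = np.full(len(measured), float(np.mean(measured)))  # flat positive start
    for _ in range(n_iter):
        blur = np.convolve(est, kernel, mode="same")
        ratio = measured / np.maximum(blur, 1e-12)          # data/model ratio
        est = est * np.convolve(ratio, flipped, mode="same")
    return est

# Recover two spikes blurred by a Gaussian kernel (noise-free toy case).
signal = np.zeros(64)
signal[20], signal[40] = 1.0, 0.5
gauss = np.exp(-0.5 * ((np.arange(15) - 7) / 2.0) ** 2)
measured = np.convolve(signal, gauss / gauss.sum(), mode="same")
restored = mlem_deconvolve(measured, gauss)
```

The iteration preserves non-negativity and progressively sharpens the estimate toward the true spikes, which is why MLEM-style deconvolution is attractive for estimating smooth scatter kernels without ringing.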
Non-eikonal corrections for the scattering of spin-one particles
NASA Astrophysics Data System (ADS)
Gaber, M. W.; Wilkin, C.; Al-Khalili, J. S.
The Wallace Fourier-Bessel expansion of the scattering amplitude is generalised to the case of the scattering of a spin-one particle from a potential with a single tensor coupling as well as central and spin-orbit terms. A generating function for the eikonal-phase (quantum) corrections is evaluated in closed form. For medium-energy deuteron-nucleus scattering, the first-order correction is dominant and is shown to be significant in the interpretation of analysing power measurements. This conclusion is supported by a numerical comparison of the eikonal observables, evaluated with and without corrections, with those obtained from a numerical solution of the Schrödinger equation for d-⁵⁸Ni scattering at incident deuteron energies of 400 and 700 MeV.
Accurate mask model implementation in optical proximity correction model for 14-nm nodes and beyond
NASA Astrophysics Data System (ADS)
Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Farys, Vincent; Huguennet, Frederic; Armeanu, Ana-Maria; Bork, Ingo; Chomat, Michael; Buck, Peter; Schanen, Isabelle
2016-04-01
In a previous work, we demonstrated that the current optical proximity correction model, which assumes the mask pattern to be analogous to the designed data, is no longer valid. An extreme case of line-end shortening shows a gap of up to 10 nm (at mask level). For that reason, an accurate mask model has been calibrated for a 14-nm logic gate level, yielding a model with a total RMS of 1.38 nm at mask level. Two-dimensional structures, such as line-end shortening and corner rounding, were well predicted using scanning electron microscopy images overlaid with simulated contours. The first part of this paper is dedicated to the implementation of our improved model in the current flow. The improved model consists of a mask model capturing mask process and writing effects, and a standard optical and resist model addressing the litho exposure and development effects at wafer level. The second part focuses on results from the comparison of the two models, the new one and the regular one.
NASA Astrophysics Data System (ADS)
Schörner, K.; Goldammer, M.; Stephan, J.
2011-02-01
In industrial X-ray cone-beam computed tomography, the inspection of large-scale samples is important because of increasing demands on their quality and long-term mechanical resilience. Large-scale samples, for example made of aluminum or iron, strongly scatter X-rays. Scattered radiation leads to artifacts such as cupping, streaks, and a reduction in contrast in the reconstructed CT volume. We propose a scatter correction method based on sampling primary signals by employing a beam-hole array (BHA). In this indirect method, a scatter estimate is calculated by subtraction of the sampled primary signal from the total signal, the latter taken from an image where the BHA is absent. This technique is considered complementary to the better known beam-stop array (BSA) method. The two scatter estimation methods are compared here with respect to geometric effects, scatter-to-total ratio and practicability. Scatter estimation with the BHA method yields more accurate scatter estimates in off-centered regions, and a lower scatter-to-total ratio in critical image regions where the primary signal is very low. Scatter correction with the proposed BHA method is then applied to a ceramic specimen from power generation technologies. In the reconstructed CT volume, cupping almost completely vanishes and contrast is enhanced significantly.
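The indirect BHA estimate (scatter equals total signal minus the primary sampled through the holes, then interpolation between hole positions) can be sketched in one dimension. The primary and scatter profiles below are synthetic stand-ins.

```python
import numpy as np

# Hypothetical 1-D detector row. "total" is the image without the beam-hole
# array; "primary" is the primary-only signal measured through the holes.
x = np.arange(256)
primary = 900.0 * np.exp(-((x - 128) / 90.0) ** 2)   # synthetic primary profile
scatter = 120.0 + 0.2 * x                            # synthetic smooth scatter
total = primary + scatter

hole_idx = np.arange(8, 256, 16)                     # hole positions on the row
scatter_samples = total[hole_idx] - primary[hole_idx]  # BHA: total minus primary
scatter_est = np.interp(x, hole_idx, scatter_samples)  # fill between holes
corrected = total - scatter_est
err = float(np.max(np.abs(corrected - primary)))
```

At the hole positions the correction is exact by construction; in between, the error is bounded by how smoothly the true scatter varies, which is the premise of all sampling-based methods in this collection.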
Library-based scatter correction for dedicated cone beam breast CT: a feasibility study
NASA Astrophysics Data System (ADS)
Shi, Linxi; Vedantham, Srinivasan; Karellas, Andrew; Zhu, Lei
2016-04-01
Purpose: Scatter errors are detrimental to cone-beam breast CT (CBBCT) accuracy and obscure the visibility of calcifications and soft-tissue lesions. In this work, we propose practical yet effective scatter correction for CBBCT using a library-based method and investigate its feasibility via small-group patient studies. Method: Based on a simplified breast model with varying breast sizes, we generate a scatter library using Monte Carlo (MC) simulation. Breasts are approximated as semi-ellipsoids with a homogeneous glandular/adipose tissue mixture. On each patient CBBCT projection dataset, an initial estimate of the scatter distribution is selected from the pre-computed scatter library by measuring the corresponding breast size on raw projections and the glandular fraction on a first-pass CBBCT reconstruction. Then the selected scatter distribution is modified by estimating the spatial translation of the breast between MC simulation and the clinical scan. Scatter correction is finally performed by subtracting the estimated scatter from raw projections. Results: On two sets of clinical patient CBBCT data with different breast sizes, the proposed method effectively reduces cupping artifact and improves the image contrast by an average factor of 2, with an efficient processing time of 200 ms per cone-beam projection. Conclusion: Compared with existing scatter correction approaches on CBBCT, the proposed library-based method is clinically advantageous in that it requires no additional scans or hardware modifications. As the MC simulations are pre-computed, our method achieves a high computational efficiency on each patient dataset. The library-based method has shown great promise as a practical tool for effective scatter correction on clinical CBBCT.
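The library selection step can be sketched as a nearest-neighbour lookup keyed on breast size and glandular fraction. The library entries, their values, and the distance weighting below are invented for illustration; a real library would hold MC-simulated scatter maps per projection angle.

```python
import numpy as np

# Hypothetical pre-computed scatter library indexed by
# (breast diameter in cm, glandular fraction); values are placeholders.
library = {
    (10, 0.2): np.full((4, 4), 30.0),
    (10, 0.5): np.full((4, 4), 26.0),
    (14, 0.2): np.full((4, 4), 55.0),
    (14, 0.5): np.full((4, 4), 48.0),
}

def select_scatter(diameter_cm, glandular_fraction):
    """Nearest-neighbour lookup, standing in for the paper's selection by
    measured breast size and first-pass glandular fraction. The factor 10
    balancing the two coordinates is an arbitrary illustrative weight."""
    key = min(library, key=lambda k: (k[0] - diameter_cm) ** 2
                                     + (10.0 * (k[1] - glandular_fraction)) ** 2)
    return library[key]

# A 13.1 cm, 45% glandular breast maps to the (14, 0.5) library entry.
chosen = select_scatter(13.1, 0.45)
```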
Aleksejevs, Aleksandrs; Barkanova, Svetlana; Ilyichev, Alexander; Zykunov, Vladimir
2010-11-19
We perform updated and detailed calculations of the complete NLO set of electroweak radiative corrections to parity-violating e⁻e⁻ → e⁻e⁻(γ) scattering asymmetries at energies relevant for the ultra-precise Møller experiment coming soon at JLab. Our numerical results are presented for a range of experimental cuts, and the relative importance of various contributions is analyzed. In addition, we provide very compact analytical expressions free from non-physical parameters and show them to be valid for fast yet accurate estimations.
Mason, Philip E; Wernersson, Erik; Jungwirth, Pavel
2012-07-19
The carbonate ion plays a central role in the biochemical formation of the shells of aquatic life, which is an important path for carbon dioxide sequestration. Given the vital role of carbonate in this and other contexts, it is imperative to develop accurate models for such a high-charge-density ion. As a divalent ion, carbonate has a strong polarizing effect on surrounding water molecules. This raises the question of whether it is possible to describe such systems accurately without including polarization. It has recently been suggested that the lack of electronic polarization in nonpolarizable water models can be effectively compensated by introducing an electronic dielectric continuum, which, with respect to the forces between atoms, is equivalent to rescaling the ionic charges. Given how widely nonpolarizable models are used to model electrolyte solutions, establishing the experimental validity of this suggestion is imperative. Here, we examine a stringent test for such models: a comparison of the difference between the neutron scattering structure factors of K2CO3 and KNO3 solutions and that predicted by molecular dynamics simulations with various models of the same systems. We compare standard nonpolarizable simulations in SPC/E water to analogous simulations with effective ionic charges, as well as simulations in explicitly polarizable POL3 water (which, however, has only about half the experimental polarizability). The simulation with rescaled charges is in very good agreement with the experimental data, significantly better than the nonpolarizable simulation and even better than the explicitly polarizable POL3 model.
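The charge-rescaling equivalence mentioned above divides each ionic charge by the square root of the electronic (high-frequency) dielectric constant of water, about 1.78, so a carbonate charge of -2 e becomes roughly -1.5 e:

```python
import numpy as np

# Electronic continuum correction: in a nonpolarizable force field the
# ionic charge q is replaced by q / sqrt(eps_el), with eps_el ≈ 1.78
# the electronic dielectric constant of water.
EPS_EL = 1.78

def effective_charge(q):
    """Rescaled charge used in effective-charge (ECC) simulations."""
    return q / np.sqrt(EPS_EL)

q_carbonate = -2.0                      # formal charge of CO3^2- in e
q_eff = effective_charge(q_carbonate)   # ≈ -1.50 e
```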
A software-based x-ray scatter correction method for breast tomosynthesis
Jia Feng, Steve Si; Sechopoulos, Ioannis
2011-12-15
Purpose: To develop a software-based scatter correction method for digital breast tomosynthesis (DBT) imaging and investigate its impact on the image quality of tomosynthesis reconstructions of both phantoms and patients. Methods: A Monte Carlo (MC) simulation of x-ray scatter, with geometry matching that of the cranio-caudal (CC) view of a DBT clinical prototype, was developed using the Geant4 toolkit and used to generate maps of the scatter-to-primary ratio (SPR) of a number of homogeneous standard-shaped breasts of varying sizes. Dimension-matched SPR maps were then deformed and registered to DBT acquisition projections, allowing for the estimation of the primary x-ray signal acquired by the imaging system. Noise filtering of the estimated projections was then performed to reduce the impact of the quantum noise of the x-ray scatter. Three dimensional (3D) reconstruction was then performed using the maximum likelihood-expectation maximization (MLEM) method. This process was tested on acquisitions of a heterogeneous 50/50 adipose/glandular tomosynthesis phantom with embedded masses, fibers, and microcalcifications and on acquisitions of patients. The image quality of the reconstructions of the scatter-corrected and uncorrected projections was analyzed by studying the signal-difference-to-noise ratio (SDNR), the integral of the signal in each mass lesion (integrated mass signal, IMS), and the modulation transfer function (MTF). Results: The reconstructions of the scatter-corrected projections demonstrated superior image quality. The SDNR of masses embedded in a 5 cm thick tomosynthesis phantom improved 60%-66%, while the SDNR of the smallest mass in an 8 cm thick phantom improved by 59% (p < 0.01). The IMS of the masses in the 5 cm thick phantom also improved by 15%-29%, while the IMS of the masses in the 8 cm thick phantom improved by 26%-62% (p < 0.01). Some embedded microcalcifications in the tomosynthesis phantoms were visible only in the scatter-corrected
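A scatter correction built on SPR maps recovers the primary signal by dividing the measured projection by 1 + SPR. A minimal sketch of that division step, with toy numbers rather than the paper's Monte Carlo maps:

```python
import numpy as np

def primary_from_spr(measured, spr):
    """Estimate the primary signal from a measured projection using a
    scatter-to-primary-ratio (SPR) map: measured = primary * (1 + SPR)."""
    return np.asarray(measured, float) / (1.0 + np.asarray(spr, float))

# Two pixels: the same primary signal seen under different scatter levels.
measured = np.array([150.0, 120.0])
spr = np.array([0.5, 0.2])
primary = primary_from_spr(measured, spr)   # both recover to 100.0
```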
Park, Yang-Kyun; Sharp, Gregory C.; Phillips, Justin; Winey, Brian A.
2015-01-01
Purpose: To demonstrate the feasibility of proton dose calculation on scatter-corrected cone-beam computed tomographic (CBCT) images for the purpose of adaptive proton therapy. Methods: CBCT projection images were acquired from anthropomorphic phantoms and a prostate patient using an on-board imaging system of an Elekta Infinity linear accelerator. Two previously introduced techniques were used to correct the scattered x-rays in the raw projection images: uniform scatter correction (CBCTus) and a priori CT-based scatter correction (CBCTap). CBCT images were reconstructed using a standard FDK algorithm and a GPU-based reconstruction toolkit. Soft tissue ROI-based HU shifting was used to improve the HU accuracy of the uncorrected CBCT images and CBCTus, while no HU change was applied to the CBCTap. The degree of equivalence of the corrected CBCT images with respect to the reference CT image (CTref) was evaluated by using angular profiles of water equivalent path length (WEPL) and passively scattered proton treatment plans. The CBCTap was further evaluated in more realistic scenarios such as rectal filling and weight loss to assess the effect of mismatched prior information on the corrected images. Results: The uncorrected CBCT and CBCTus images demonstrated substantial WEPL discrepancies (7.3 ± 5.3 mm and 11.1 ± 6.6 mm, respectively) with respect to the CTref, while the CBCTap images showed substantially reduced WEPL errors (2.4 ± 2.0 mm). Similarly, the CBCTap-based treatment plans demonstrated a high pass rate (96.0% ± 2.5% with 2 mm/2% criteria) in a 3D gamma analysis. Conclusions: The a priori CT-based scatter correction technique was shown to be promising for adaptive proton therapy, as it achieved equivalent proton dose distributions and water equivalent path lengths compared to those of a reference CT in a selection of anthropomorphic phantoms. PMID:26233175
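The WEPL metric used in the evaluation above is the line integral of relative stopping power (RSP) along a ray. A sketch with an invented, purely illustrative HU-to-RSP calibration (not a clinical curve):

```python
import numpy as np

def wepl(hu_values, step_mm, hu_to_rsp):
    """Water-equivalent path length along a ray: sum of relative stopping
    power times step length. hu_to_rsp is a piecewise-linear HU→RSP
    calibration given as (hu_points, rsp_points)."""
    hu_pts, rsp_pts = hu_to_rsp
    rsp = np.interp(hu_values, hu_pts, rsp_pts)
    return float(np.sum(rsp) * step_mm)

# Illustrative calibration: air → 0, water → 1, dense bone-like → 1.6.
calib = (np.array([-1000.0, 0.0, 1000.0]), np.array([0.0, 1.0, 1.6]))
ray_hu = np.array([0.0, 0.0, 500.0, -1000.0])   # water, water, bone-like, air
w = wepl(ray_hu, step_mm=1.0, hu_to_rsp=calib)  # 1 + 1 + 1.3 + 0 = 3.3 mm
```

Comparing such ray-by-ray WEPL values between a corrected CBCT and the reference CT is what makes the metric sensitive to residual HU errors from scatter.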
A megavoltage scatter correction technique for cone-beam CT images acquired during VMAT delivery.
Boylan, C J; Marchant, T E; Stratford, J; Malik, J; Choudhury, A; Shrimali, R; Rodgers, J; Rowbottom, C G
2012-06-21
Kilovoltage cone-beam CT (kV CBCT) can be acquired during the delivery of volumetric modulated arc therapy (VMAT), in order to obtain an image of the patient during treatment. However, the quality of such CBCTs is degraded by megavoltage (MV) scatter from the treatment beam onto the imaging panel. The objective of this paper is to introduce a novel MV scatter correction method for simultaneous CBCT during VMAT, and to investigate its effectiveness when compared to other techniques. The correction requires the acquisition of a separate set of images taken during VMAT delivery, while the kV beam is off. These images, which contain only the MV scatter contribution on the imaging panel, are then used to correct the corresponding kV/MV projections. To test this method, CBCTs were taken of an image quality phantom during VMAT delivery and measurements of contrast-to-noise ratio were made. Additionally, the correction was applied to the datasets of three VMAT prostate patients, who also received simultaneous CBCTs. The clinical image quality was assessed using a validated scoring system, comparing standard CBCTs to the uncorrected simultaneous CBCTs and a variety of correction methods. Results show that the correction is able to recover some of the low- and high-contrast signal-to-noise ratio lost due to MV scatter. From the patient study, the corrected CBCT scored significantly higher than the uncorrected images in terms of the ability to identify the boundary between the prostate and surrounding soft tissue. In summary, a simple MV scatter correction method has been developed and, using both phantom and patient data, is shown to improve the image quality of simultaneous CBCTs taken during VMAT delivery.
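The correction itself is a subtraction of the MV-only contribution from each kV/MV projection. In this minimal sketch the MV scatter is taken as the mean of the kV-off frames; that averaging choice, and all the numbers, are assumptions for illustration, not necessarily the authors' pairing scheme.

```python
import numpy as np

def mv_scatter_correct(kv_mv_projection, mv_only_frames):
    """Subtract an MV-only estimate (acquired with the kV beam off during
    VMAT delivery) from a kV+MV projection; clip to keep counts physical."""
    mv_scatter = np.mean(mv_only_frames, axis=0)
    return np.clip(kv_mv_projection - mv_scatter, 0.0, None)

# 2x2 toy projection and two kV-off frames recorded at similar gantry angles.
kv_mv = np.array([[50.0, 60.0], [55.0, 65.0]])
mv_frames = np.array([[[10.0, 10.0], [12.0, 12.0]],
                      [[14.0, 10.0], [12.0, 12.0]]])
corrected = mv_scatter_correct(kv_mv, mv_frames)   # [[38., 50.], [43., 53.]]
```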
Characterization and correction for scatter in 3D PET using rebinned plane integrals
Wu, C.; Ordonez, C.E.; Chen, C.T. (Dept. of Radiology)
1994-12-01
The scatter characteristics of three-dimensional (3D) positron emission tomography (PET) in terms of the plane-integral scatter response function (SRF) are studied. To obtain the plane-integral SRF and study its properties, Monte Carlo simulations were carried out which generated coincidence events from point sources located at different positions in water-filled spheres of various sizes. In each simulation, the plane-integral SRF is obtained by rebinning the detected true and scatter events into two separate sets of plane integrals and then dividing the plane integrals of scatter events by the plane integral of true events of the plane in which the point source is located. A spherical PET scanner was assumed for these simulations. Examination of the SRF shows that the SRF in 3D PET can be modeled not by an exponential function as in the case of 2D PET, but by a Gaussian with its peak shifted away from the primary peak. Using this plane-integral SRF, a scatter correction method was developed for 3D PET that first converts an attenuation-corrected 3D PET data set into plane integrals, then obtains the scatter components in the rebinned plane integrals by integral transformation of the rebinned plane integrals with the SRF, and finally subtracts the scatter components from the rebinned plane integrals to yield the scatter-corrected plane integrals. From the scatter-corrected plane integrals, a 3D image was reconstructed by using a 3D filtered-backprojection algorithm. To test the method, a cylindrical PET scanner imaging an ellipsoid phantom with a 3-cm cold bar at the center was simulated, and 3D images of the phantom with and without scatter correction were reconstructed. Comparison of the two images shows that this method compensates reasonably well for scatter events. The advantages of the proposed method are that it treats the scatter in 3D PET in a truly 3D manner and that it is computationally efficient.
Fan, Peng; Hutton, Brian F.; Holstensson, Maria; Ljungberg, Michael; Hendrik Pretorius, P.; Prasad, Rameshwar; Liu, Chi; Ma, Tianyu; Liu, Yaqiang; Wang, Shi; Thorn, Stephanie L.; Stacy, Mitchel R.; Sinusas, Albert J.
2015-12-15
Purpose: The energy spectrum for a cadmium zinc telluride (CZT) detector has a low-energy tail due to incomplete charge collection and intercrystal scattering. Due to these solid-state detector effects, scatter would be overestimated if the conventional triple-energy window (TEW) method were used for scatter and crosstalk corrections in CZT-based imaging systems. The objective of this work is to develop a scatter and crosstalk correction method for ⁹⁹ᵐTc/¹²³I dual-radionuclide imaging for a CZT-based dedicated cardiac SPECT system with pinhole collimators (GE Discovery NM 530c/570c). Methods: A tailing model was developed to account for the low-energy-tail effects of the CZT detector. The parameters of the model were obtained using ⁹⁹ᵐTc and ¹²³I point source measurements. A scatter model was defined to characterize the relationship between down-scatter and self-scatter projections. The parameters for this model were obtained from Monte Carlo simulation using SIMIND. The tailing and scatter models were further incorporated into a projection count model, and the primary and self-scatter projections of each radionuclide were determined with a maximum-likelihood expectation-maximization (MLEM) iterative estimation approach. The extracted scatter and crosstalk projections were then incorporated into MLEM image reconstruction as an additive term in the forward projection to obtain scatter- and crosstalk-corrected images. The proposed method was validated using Monte Carlo simulation, a line source experiment, anthropomorphic torso phantom studies, and patient studies. The performance of the proposed method was also compared to that obtained with the conventional TEW method. Results: Monte Carlo simulations and the line source experiment demonstrated that the TEW method overestimated scatter, while the proposed method provided more accurate scatter estimation by considering the low-energy-tail effect. In the phantom study, improved defect contrasts were
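For reference, the conventional TEW estimate that the paper compares against approximates the scatter under the photopeak as a trapezoid spanned by two narrow side windows. The counts and window widths below are illustrative only; on a CZT detector the low-energy tail inflates the lower-window counts, which is exactly the overestimation the abstract describes.

```python
import numpy as np  # kept for consistency with the other sketches

def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_peak):
    """Triple-energy-window scatter estimate: area of the trapezoid under
    the photopeak window, built from counts in two narrow side windows."""
    return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

# Illustrative counts: 200 in a 4 keV lower window, 40 in a 4 keV upper
# window, photopeak window width 28 keV -> (50 + 10) * 14 = 840 counts.
s = tew_scatter(c_lower=200.0, c_upper=40.0, w_lower=4.0, w_upper=4.0, w_peak=28.0)
```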
NASA Astrophysics Data System (ADS)
Mahesh, C.; Prakash, Satya; Sathiyamoorthy, V.; Gairola, R. M.
2011-11-01
An Artificial Neural Network (ANN) based technique is proposed for estimating precipitation over Indian land and oceanic regions (30°S-40°N, 30°E-120°E) using the Scattering Index (SI) and Polarization Corrected Temperature (PCT) derived from Special Sensor Microwave Imager (SSM/I) measurements. The rainfall retrieval algorithm is designed to estimate rainfall using a combination of SSM/I and Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) measurements. For training the ANN, SI and PCT (which capture rain signatures well) calculated from SSM/I brightness temperatures are taken as inputs and the PR rain rate as output. SI is computed using the 19.35 GHz, 22.235 GHz and 85.5 GHz vertical channels, and PCT is computed using the 85.5 GHz vertical and horizontal channels. Once training is completed, independent data sets (not included in the training) were used to test the performance of the network. Instantaneous precipitation estimates from the independent test data sets are validated against PR surface rain rate measurements. The results are compared with precipitation estimated using power-law based (i) global and (ii) regional algorithms. Overall, the present ANN-based algorithm shows better agreement with the PR rain rate. This study is aimed at developing a more accurate operational rainfall retrieval algorithm for the Indo-French Megha-Tropiques Microwave Analysis and Detection of Rain and Atmospheric Structures (MADRAS) radiometer.
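The power-law baseline that the ANN is compared against can be fitted by linear least squares in log-log space. The data below are synthetic and the coefficients illustrative; real retrievals would fit collocated SSM/I-derived SI against PR rain rates.

```python
import numpy as np

# Illustrative power-law retrieval R = a * SI**b fitted in log-log space.
rng = np.random.default_rng(1)
si = rng.uniform(5.0, 60.0, 300)                           # scattering index (K)
rain = 0.05 * si ** 1.6 * rng.lognormal(0.0, 0.1, si.size)  # toy "truth" + noise

# Linear fit of log(rain) vs log(SI): slope = b, intercept = log(a).
b, log_a = np.polyfit(np.log(si), np.log(rain), 1)
a = float(np.exp(log_a))
predicted = a * si ** b
```

A fixed (a, b) pair is what makes such retrievals "global" or "regional"; the ANN in the abstract replaces this rigid functional form with a learned mapping from (SI, PCT) to rain rate.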
NASA Astrophysics Data System (ADS)
Camp, Charles H., Jr.; Lee, Young Jong; Cicerone, Marcus T.
2016-04-01
Coherent anti-Stokes Raman scattering (CARS) microspectroscopy has demonstrated significant potential for biological and materials imaging. To date, however, the primary mechanism of disseminating CARS spectroscopic information is through pseudocolor imagery, which explicitly neglects the vast majority of the hyperspectral data. Furthermore, current paradigms in CARS spectral processing do not lend themselves to quantitative sample-to-sample comparability. The primary limitation stems from the need to accurately measure the so-called nonresonant background (NRB) that is used to extract the chemically sensitive Raman information from the raw spectra. Measurement of the NRB on a pixel-by-pixel basis is a nontrivial task; thus, reference NRB spectra from glass or water are typically utilized, resulting in errors between the actual and estimated amplitude and phase. In this manuscript, we present a new methodology for extracting the Raman spectral features that significantly suppresses these errors through phase detrending and scaling. Classic methods of error correction, such as baseline detrending, are demonstrated to be inaccurate and to simply mask the underlying errors. The theoretical justification is presented by re-developing the theory of phase retrieval via the Kramers-Kronig relation, and we demonstrate that these results are also applicable to maximum entropy method-based phase retrieval. This new error-correction approach is experimentally applied to glycerol spectra and tissue images, demonstrating marked consistency between spectra obtained using different NRB estimates, and between spectra obtained on different instruments. Additionally, in order to facilitate implementation of these approaches, we have made many of the tools described herein available free for download.
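The core Kramers-Kronig step can be sketched as follows: up to sign convention, the Raman phase is the Hilbert transform of half the log-ratio of the CARS spectrum to the NRB estimate. This is a minimal discrete sketch; spectral-edge windowing and the phase-detrending step the paper adds are omitted:

```python
import numpy as np

def kk_phase(i_cars, i_nrb):
    """Kramers-Kronig phase retrieval for CARS (minimal sketch):
    phase ~ H{0.5 * ln(I_CARS / I_NRB)}, H = discrete Hilbert transform,
    implemented here via the analytic-signal construction in the FFT domain."""
    log_ratio = 0.5 * np.log(np.asarray(i_cars, float) /
                             np.asarray(i_nrb, float))
    n = log_ratio.size
    h = np.zeros(n)
    if n % 2 == 0:
        h[0] = h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[0] = 1.0
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(np.fft.fft(log_ratio) * h)
    return np.imag(analytic)   # Hilbert transform of the log-ratio
```

When the NRB estimate is wrong in amplitude or phase, this retrieved phase acquires a slowly varying error, which is what the paper's detrending and scaling corrections target.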
Large corrections to high-p_{T} hadron-hadron scattering in QCD
Ellis, R. K.; Furman, M. A.; Haber, H. E.; Hinchliffe, I.
1980-10-27
In this paper, we have computed the first non-trivial QCD corrections to the quark-quark scattering process which contributes to the production of hadrons at large p_{T} in hadron-hadron collisions. Using quark distribution functions defined in deep inelastic scattering and fragmentation functions defined in one particle inclusive e^{+}e^{-} annihilation, we find that the corrections are large. Finally, this implies that QCD perturbation theory may not be reliable for large-p_{T} hadron physics.
NASA Astrophysics Data System (ADS)
Yao, B. A.; Zhang, C. S.; Sheng, C. J.; Peng, Y. L.
2005-07-01
This paper is the continuation of paper [1]. In this paper we further show that the difference between the twilight flat field and the night sky exposure is mainly due to the existence of scattered light. Like Grundahl and Sorensen, we also made pinhole images with the 1.56 m reflector at the Shanghai Observatory and the 63 cm reflector of Nanjing University to show the existence of scattered light directly. Both the 1.56 m and the 63 cm reflectors have normally designed baffles. The common weakness of all standard reflector designs is therefore that the two baffles mounted in front of the primary and secondary mirrors are not enough to protect the CCD cameras from scattered light when acquiring accurate flat fields. It is of great importance to modify the primary mirror baffle of all similar reflectors in order to get more accurate flat fielding.
NASA Astrophysics Data System (ADS)
Xu, Yuan; Bai, Ti; Yan, Hao; Ouyang, Luo; Pompos, Arnold; Wang, Jing; Zhou, Linghong; Jiang, Steve B.; Jia, Xun
2015-05-01
Cone-beam CT (CBCT) has become the standard image guidance tool for patient setup in image-guided radiation therapy. However, due to its large illumination field, scattered photons severely degrade its image quality. While kernel-based scatter correction methods have been used routinely in the clinic, it is still desirable to develop Monte Carlo (MC) simulation-based methods due to their accuracy. However, the high computational burden of the MC method has prevented routine clinical application. This paper reports our recent development of a practical method of MC-based scatter estimation and removal for CBCT. In contrast with conventional MC approaches that estimate scatter signals using a scatter-contaminated CBCT image, our method used a planning CT image for MC simulation, which has the advantages of accurate image intensity and absence of image truncation. In our method, the planning CT was first rigidly registered with the CBCT. Scatter signals were then estimated via MC simulation. After scatter signals were removed from the raw CBCT projections, a corrected CBCT image was reconstructed. The entire workflow was implemented on a GPU platform for high computational efficiency. Strategies such as projection denoising, CT image downsampling, and interpolation along the angular direction were employed to further enhance the calculation speed. We studied the impact of key parameters in the workflow on the resulting accuracy and efficiency, based on which the optimal parameter values were determined. Our method was evaluated in numerical simulation, phantom, and real patient cases. In the simulation cases, our method reduced mean HU errors from 44 to 3 HU and from 78 to 9 HU in the full-fan and the half-fan cases, respectively. In both the phantom and the patient cases, image artifacts caused by scatter, such as ring artifacts around the bowtie area, were reduced. With all the techniques employed, we achieved computation time of less than 30 s including the
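The scatter subtraction and angular-interpolation steps described above can be sketched as follows. This is a minimal illustration, not the authors' code: the GPU Monte Carlo engine that produces the sparse-angle scatter estimates is outside the sketch, and all array shapes and numbers are assumptions:

```python
import numpy as np

def correct_projections(raw_proj, sparse_angles, sparse_scatter, angles):
    """Subtract MC-estimated scatter from raw CBCT projections.
    Scatter is simulated only at a sparse set of gantry angles and linearly
    interpolated along the angular direction for all other views (one of the
    speed-up strategies mentioned in the abstract).
    raw_proj: (n_views, n_pixels); sparse_scatter: (n_sparse, n_pixels)."""
    raw_proj = np.asarray(raw_proj, float)
    sparse_scatter = np.asarray(sparse_scatter, float)
    corrected = np.empty_like(raw_proj)
    for k, ang in enumerate(angles):
        # per-pixel linear interpolation between the sparse scatter angles
        s = np.array([np.interp(ang, sparse_angles, sparse_scatter[:, i])
                      for i in range(raw_proj.shape[1])])
        corrected[k] = np.clip(raw_proj[k] - s, 0.0, None)
    return corrected
```

The clipping guards against negative line integrals that noise in the raw projections can produce after subtraction.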
NASA Astrophysics Data System (ADS)
Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna, T.; Mykkeltveit, S.
2016-10-01
Declared North Korean nuclear tests in 2006, 2009, 2013, and 2016 were observed seismically at regional and teleseismic distances. Waveform similarity allows the events to be located relatively with far greater accuracy than the absolute locations can be determined from seismic data alone. There is now significant redundancy in the data given the large number of regional and teleseismic stations that have recorded multiple events, and relative location estimates can be confirmed independently by performing calculations on many mutually exclusive sets of measurements. Using a 1-dimensional global velocity model, the distances between the events estimated using teleseismic P phases are found to be approximately 25% shorter than the distances between events estimated using regional Pn phases. The 2009, 2013, and 2016 events all take place within 1 km of each other and the discrepancy between the regional and teleseismic relative location estimates is no more than about 150 m. The discrepancy is much more significant when estimating the location of the more distant 2006 event relative to the later explosions with regional and teleseismic estimates varying by many hundreds of meters. The relative location of the 2006 event is challenging given the smaller number of observing stations, the lower signal-to-noise ratio, and significant waveform dissimilarity at some regional stations. The 2006 event is however highly significant in constraining the absolute locations in the terrain at the Punggye-ri test-site in relation to observed surface infrastructure. For each seismic arrival used to estimate the relative locations, we define a slowness scaling factor which multiplies the gradient of seismic traveltime versus distance, evaluated at the source, relative to the applied 1-d velocity model. A procedure for estimating correction terms which reduce the double-difference time residual vector norms is presented together with a discussion of the associated uncertainty. The
Grimme, Stefan
2004-09-01
An empirical method to account for van der Waals interactions in practical calculations with density functional theory (termed DFT-D) is tested for a wide variety of molecular complexes. As in previous schemes, the dispersive energy is described by damped interatomic potentials of the form C6R^(-6). The use of pure, gradient-corrected density functionals (BLYP and PBE), together with the resolution-of-the-identity (RI) approximation for the Coulomb operator, allows very efficient computations for large systems. In contrast to previous work, extended AO basis sets of polarized TZV or QZV quality are employed, which reduces the basis set superposition error to a negligible extent. By using a global scaling factor for the atomic C6 coefficients, the functional dependence of the results could be strongly reduced. The "double counting" of correlation effects for strongly bound complexes is found to be insignificant if steep damping functions are employed. The method is applied to a total of 29 complexes of atoms and small molecules (Ne, CH4, NH3, H2O, CH3F, N2, F2, formic acid, ethene, and ethyne) with each other and with benzene, to benzene, naphthalene, pyrene, and coronene dimers, the naphthalene trimer, coronene·H2O, and four H-bonded and stacked DNA base pairs (AT and GC). In almost all cases, very good agreement with reliable theoretical or experimental results for binding energies and intermolecular distances is obtained. For stacked aromatic systems and the important base pairs, the DFT-D-BLYP model seems to be even superior to standard MP2 treatments that systematically overbind. The good results obtained suggest the approach as a practical tool to describe the properties of many important van der Waals systems in chemistry. Furthermore, the DFT-D data may either be used to calibrate much simpler (e.g., force-field) potentials or the optimized structures can be used as input for more accurate ab initio calculations of the interaction energies.
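The damped pairwise dispersion term has the generic form sketched below. The scaling factor s6, the damping steepness, the single van der Waals radius, and the geometric-mean C6 combination rule are illustrative placeholders for the element- and functional-specific parameters of the actual scheme:

```python
import numpy as np

def dispersion_energy(coords, c6, s6=1.0, d=20.0, r0=3.0):
    """DFT-D-style empirical dispersion (sketch):
    E = -s6 * sum_{i<j} C6_ij / R_ij^6 * f_damp(R_ij),
    with a steep Fermi-type damping f = 1 / (1 + exp(-d*(R/R0 - 1)))
    that switches the correction off at short range."""
    coords = np.asarray(coords, float)
    e = 0.0
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            r = np.linalg.norm(coords[i] - coords[j])
            c6ij = np.sqrt(c6[i] * c6[j])   # one common combination rule
            fdamp = 1.0 / (1.0 + np.exp(-d * (r / r0 - 1.0)))
            e -= s6 * c6ij / r**6 * fdamp
    return e
```

The steep damping is what the abstract credits with keeping "double counting" of correlation insignificant for strongly bound complexes: at bonding distances f_damp is nearly zero, so the correction acts only in the van der Waals regime.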
Ma, C.; Liescheski, P.B.; Bonham, R.A.
1989-12-01
In this article we describe an experimental technique to measure the total electron-impact cross section by measurement of the attenuation of an electron beam passing through a gas at constant pressure with the unwanted forward scattering contribution removed. The technique is based on the different spatial propagation properties of scattered and unscattered electrons. The correction is accomplished by measuring the electron beam attenuation dependence on both the target gas pressure (number density) and transmission length. Two extended forms of the Beer–Lambert law which approximately include the contributions for forward scattering and for forward scattering plus multiple scattering from the gas outside the electron beam were developed. It is argued that the dependence of the forward scattering on the path length through the gas is approximately independent of the model used to describe it. The proposed methods were used to determine the total cross section and forward scattering contribution from argon (Ar) with 300-eV electrons. Our results are compared with those in the literature and the predictions of theory and experiment for the forward scattering and multiple scattering contributions. In addition, Monte Carlo simulations were performed as a further test of the method.
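The baseline extraction that the extended forms build on can be sketched with the ideal Beer–Lambert law: fitting the log of the transmitted fraction against number density yields the total cross section. The numbers below are synthetic; the paper's extended forms add forward- and multiple-scattering terms on top of this:

```python
import numpy as np

# Ideal Beer-Lambert attenuation: I/I0 = exp(-n * sigma * L)
sigma_true = 3.0e-16     # total cross section, cm^2 (illustrative)
L = 10.0                 # transmission length, cm (illustrative)
n = np.array([0.5e14, 1.0e14, 2.0e14, 4.0e14])   # number densities, cm^-3
i_ratio = np.exp(-n * sigma_true * L)            # synthetic transmission data

# Linear fit of ln(I/I0) vs n: slope = -sigma * L
slope = np.polyfit(n, np.log(i_ratio), 1)[0]
sigma_fit = -slope / L
```

In the real measurement, un-rejected forward-scattered electrons make ln(I/I0) deviate from this straight line, and the density- and length-dependence of that deviation is what the extended forms exploit.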
Accurate metrology of polarization curves measured at the speckle size of visible light scattering.
Ghabbach, A; Zerrad, M; Soriano, G; Amra, C
2014-06-16
An optical procedure is presented to measure, at the speckle size and with high accuracy, the polarization degree of patterns scattered by disordered media. Whole mappings of the polarization ratio, polarimetric phase, and polarization degree are pointed out. Scattered clouds are emphasized on the Poincaré sphere, and are completed by probability density functions of the polarization degree. Special care is devoted to the accuracy of the data. The set-up provides additional signatures of scattering media.
Accurate elevation and normal moveout corrections of seismic reflection data on rugged topography
Liu, J.; Xia, J.; Chen, C.; Zhang, G.
2005-01-01
The application of the seismic reflection method is often limited in areas of complex terrain. The problem is the incorrect correction of time shifts caused by topography. To apply the normal moveout (NMO) correction to reflection data correctly, static corrections must first be applied to compensate for the time distortions of topography and the time delays from near-surface weathered layers. For environmental and engineering investigations, the weathered layers are our targets, so the static correction mainly adjusts the time shifts due to an undulating surface. In practice, seismic reflected raypaths are assumed to be almost vertical through the near-surface layers because these have much lower velocities than the layers below. This assumption is acceptable in most cases since it results in little residual error for small elevation changes and small offsets in reflection events. Although static algorithms based on choosing a floating datum related to common midpoint gathers or residual surface-consistent functions are available and effective, errors caused by the assumption of vertical raypaths often generate pseudo-indications of structures. This paper presents a comparison of applying corrections based on vertical raypaths and on biased (non-vertical) raypaths. It also provides an approach for combining elevation and NMO corrections. The advantages of the approach are demonstrated by synthetic and real-world examples of multi-coverage seismic reflection surveys on rough topography. © The Royal Society of New Zealand 2005.
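The vertical-raypath elevation static discussed above can be sketched in a few lines; the sign convention and nearest-sample shifting are simplifying assumptions for illustration:

```python
import numpy as np

def elevation_static(elevation, datum, v_near_surface):
    """Time shift (s) that moves a trace recorded at `elevation` down to a
    flat datum, assuming a vertical raypath through a near-surface layer of
    velocity v_near_surface (the assumption the paper examines)."""
    return (elevation - datum) / v_near_surface

def apply_static(trace, dt, shift_s):
    """Advance a trace by the nearest whole number of samples of the static
    shift (sub-sample interpolation is omitted in this sketch)."""
    nshift = int(round(shift_s / dt))
    return np.roll(np.asarray(trace), -nshift)
```

For a receiver 20 m above a datum over an 800 m/s weathered layer, the static is 25 ms; with non-vertical (biased) raypaths the true shift is offset-dependent, which is the residual error the paper quantifies.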
Interference detection and correction applied to incoherent-scatter radar power spectrum measurement
NASA Technical Reports Server (NTRS)
Ying, W. P.; Mathews, J. D.; Rastogi, P. K.
1986-01-01
A median-filter-based interference detection and correction technique is evaluated, and its application to the Arecibo incoherent scatter radar D-region ionospheric power spectrum is discussed. The method can be extended to other kinds of data as long as the statistics involved in the process remain valid.
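A median-filter interference detector of this general kind can be sketched as below: points that deviate from a running median by more than a few local median absolute deviations are flagged as interference and replaced. This is an illustrative single-pass filter, not the paper's exact implementation; the window length and threshold are assumptions:

```python
import numpy as np

def despike_median(spectrum, window=5, k=4.0):
    """Flag samples deviating from the local median by more than k times the
    local median absolute deviation (MAD) and replace them with that median."""
    x = np.asarray(spectrum, dtype=float)
    out = x.copy()
    half = window // 2
    for i in range(x.size):
        lo, hi = max(0, i - half), min(x.size, i + half + 1)
        med = np.median(x[lo:hi])
        mad = np.median(np.abs(x[lo:hi] - med))
        if mad == 0.0:
            mad = 1e-12          # avoid a zero threshold in flat regions
        if abs(x[i] - med) > k * mad:
            out[i] = med         # interference detected: correct the sample
    return out
```

The median/MAD pair is robust to the spike itself, which is why narrow interference lines are removed while the broad incoherent-scatter spectral shape is left untouched.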
Scatter correction for cone-beam computed tomography using moving blocker strips
NASA Astrophysics Data System (ADS)
Wang, Jing; Mao, Weihua; Solberg, Timothy
2011-03-01
One well-recognized challenge of cone-beam computed tomography (CBCT) is the presence of scatter contamination within the projection images. Scatter degrades the CBCT image quality by decreasing the contrast, introducing shading artifacts, and leading to inaccuracies in the reconstructed CT numbers. We propose a blocker-based approach to simultaneously estimate the scatter signal and reconstruct the complete volume within the field of view (FOV) from a single CBCT scan. A physical strip attenuator (i.e., a "blocker"), consisting of lead strips, is inserted between the x-ray source and the patient. The blocker moves back and forth along the z axis during the gantry rotation. The two-dimensional (2D) scatter fluence is estimated by interpolating the signal from the blocked regions. A modified Feldkamp-Davis-Kress (FDK) algorithm and an iterative reconstruction based on constrained optimization are used to reconstruct CBCT images from un-blocked projection data after the scatter signal is subtracted. An experimental study is performed to evaluate the performance of the proposed scatter correction scheme. The scatter-induced shading/cupping artifacts are substantially reduced in CBCT using the proposed strategy. In the experimental study using a CatPhan 600 phantom, CT number errors in the selected regions of interest are reduced from 256 to less than 20. The proposed method allows us to simultaneously estimate the scatter signal in projection data, reduce the imaging dose, and obtain complete volumetric information within the FOV.
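The interpolation-and-subtraction step can be sketched as follows. Behind a lead strip the detector sees (almost) only scatter, so those pixels sample the smooth 2-D scatter fluence, which is then interpolated across the open regions. Row-wise linear interpolation is a simplifying assumption; a smooth 2-D fit could be used instead:

```python
import numpy as np

def estimate_scatter(projection, blocked_mask):
    """Interpolate the scatter fluence measured under the blocker strips
    (blocked_mask == True) across the unblocked detector region, row by row.
    Each row must contain at least one blocked pixel."""
    proj = np.asarray(projection, float)
    scatter = np.empty_like(proj)
    cols = np.arange(proj.shape[1])
    for r in range(proj.shape[0]):
        known = blocked_mask[r]
        scatter[r] = np.interp(cols, cols[known], proj[r, known])
    return scatter

def correct_projection(projection, scatter):
    """Subtract the estimated scatter, clipping negatives caused by noise."""
    return np.clip(np.asarray(projection, float) - scatter, 0.0, None)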
QCD corrections to dilepton production near partonic threshold in pp scattering.
Shimizu, H.; Sterman, G.; Vogelsang, W.; Yokoya, H.
2005-10-02
We present a recent study of the QCD corrections to dilepton production near partonic threshold in transversely polarized {bar p}p scattering. We analyze the role of the higher-order perturbative QCD corrections in terms of the available fixed-order contributions as well as of all-order soft-gluon resummations for the kinematical regime of proposed experiments at GSI-FAIR. We find that the perturbative corrections are large for both unpolarized and polarized cross sections, but that the spin asymmetries are stable. The role of the far-infrared region of the momentum integral in the resummed exponent and the effect of the NNLL resummation are briefly discussed.
NASA Astrophysics Data System (ADS)
Livins, Peteris; Aton, T.; Schnatterly, S. E.
1988-09-01
Electron-energy-loss measurements of an amorphous chemical-vapor-deposited silicon nitride film and of evaporated sapphire over the broad energy range 1-200 eV are investigated. A method for removing the multiple scattering that does not require the zero-loss peak is discussed and applied, and the optical constants are obtained. An Elliot-type model applied to aluminum oxide gives a valence-exciton binding energy of 1.36±0.2 eV with a band gap of 9.8±0.2 eV. The unexpected strength of the nitrogen 2s transition in silicon nitride is noted.
Constrained gamma-Z interference corrections to parity-violating electron scattering
Hall, Nathan Luke; Blunden, Peter Gwithian; Melnitchouk, Wally; Thomas, Anthony W.; Young, Ross D.
2013-07-01
We present a comprehensive analysis of gamma-Z interference corrections to the weak charge of the proton measured in parity-violating electron scattering, including a survey of existing models and a critical analysis of their uncertainties. Constraints from parton distributions in the deep-inelastic region, together with new data on parity-violating electron scattering in the resonance region, result in significantly smaller uncertainties on the corrections compared to previous estimates. At the kinematics of the Qweak experiment, we determine the gamma-Z box correction to be Re □^V_{γZ} = (5.61 ± 0.36) × 10^{-3}. The new constraints also allow precise predictions to be made for parity-violating deep-inelastic asymmetries on the deuteron.
Band-Filling Correction Method for Accurate Adsorption Energy Calculations: A Cu/ZnO Case Study.
Hellström, Matti; Spångberg, Daniel; Hermansson, Kersti; Broqvist, Peter
2013-11-12
We present a simple method, the "band-filling correction", to calculate accurate adsorption energies (Eads) in the low coverage limit from finite-size supercell slab calculations using DFT. We show that it is necessary to use such a correction if charge transfer takes place between the adsorbate and the substrate, resulting in the substrate bands either filling up or becoming depleted. With this correction scheme, we calculate Eads of an isolated Cu atom adsorbed on the ZnO(101̅0) surface. Without the correction, the calculated Eads is highly coverage-dependent, even for surface supercells that would typically be considered very large (in the range from 1 nm × 1 nm to 2.5 nm × 2.5 nm). The correction scheme works very well for semilocal functionals, where the corrected Eads is converged within 0.01 eV for all coverages. The correction scheme also works well for hybrid functionals if a large supercell is used and the exact exchange interaction is screened. PMID:26583386
A library least-squares approach for scatter correction in gamma-ray tomography
NASA Astrophysics Data System (ADS)
Meric, Ilker; Anton Johansen, Geir; Valgueiro Malta Moreira, Icaro
2015-03-01
Scattered radiation is known to lead to distortion in reconstructed images in Computed Tomography (CT). The effects of scattered radiation are especially more pronounced in non-scanning, multiple source systems which are preferred for flow imaging where the instantaneous density distribution of the flow components is of interest. In this work, a new method based on a library least-squares (LLS) approach is proposed as a means of estimating the scatter contribution and correcting for this. The validity of the proposed method is tested using the 85-channel industrial gamma-ray tomograph previously developed at the University of Bergen (UoB). The results presented here confirm that the LLS approach can effectively estimate the amounts of transmission and scatter components in any given detector in the UoB gamma-ray tomography system.
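A library least-squares decomposition of this general kind can be sketched as fitting each measured detector spectrum as a linear combination of pre-recorded library spectra (e.g. a pure-transmission and a pure-scatter component). Real implementations typically weight the fit by counting statistics; this sketch uses plain least squares with non-negativity enforced by clipping:

```python
import numpy as np

def lls_decompose(measured, library):
    """Library least-squares (LLS) sketch: solve min ||A w - measured||_2
    where the columns of A are the library component spectra, then clip
    negative weights. Returns one weight per library component."""
    A = np.column_stack(library)
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(measured, float), rcond=None)
    return np.clip(coeffs, 0.0, None)
```

The scatter weight recovered per detector is what the correction subtracts before image reconstruction, which is the essence of using LLS for scatter estimation in a multi-source tomograph.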
NASA Astrophysics Data System (ADS)
Yang, Kai; Burkett, George, Jr.; Boone, John M.
2014-11-01
The purpose of this research was to develop a method to correct the cupping artifact caused by x-ray scattering and to achieve consistent Hounsfield Unit (HU) values of breast tissues for a dedicated breast CT (bCT) system. The use of a beam passing array (BPA) composed of parallel holes has been previously proposed for scatter correction in various imaging applications. In this study, we first verified the efficacy and accuracy of using the BPA to measure the scatter signal on a cone-beam bCT system. A systematic scatter correction approach was then developed by modeling the scatter-to-primary ratio (SPR) in projection images acquired with and without the BPA. To quantitatively evaluate the improved accuracy of HU values, different breast tissue-equivalent phantoms were scanned and radially averaged HU profiles through reconstructed planes were evaluated. The dependency of the correction method on object size and number of projections was studied. A simplified application of the proposed method on five clinical patient scans was performed to demonstrate efficacy. For the typical 10-18 cm breast diameters seen in the bCT application, the proposed method can effectively correct for the cupping artifact and reduce the variation of HU values of breast-equivalent material from 150 to 40 HU. The measured HU values of 100% glandular tissue, 50/50 glandular/adipose tissue, and 100% adipose tissue were approximately 46, -35, and -94, respectively. It was found that only six BPA projections were necessary to accurately implement this method, and the additional dose requirement is less than 1% of the exam dose. The proposed method can effectively correct for the cupping artifact caused by x-ray scattering and retain consistent HU values of breast tissues.
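Once an SPR model is available, the per-pixel correction itself is a one-liner: since the measured signal is primary plus scatter, measured = primary × (1 + SPR), so dividing by (1 + SPR) recovers the primary. A minimal sketch of that relation:

```python
def primary_from_spr(measured, spr):
    """Scatter correction from a known scatter-to-primary ratio (SPR):
    measured = primary * (1 + SPR)  =>  primary = measured / (1 + SPR)."""
    return measured / (1.0 + spr)
```

The hard part, which the abstract describes, is estimating the spatially varying SPR from the sparse BPA measurements; the division step above is common to SPR-based corrections generally.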
Surface EMG measurements during fMRI at 3T: accurate EMG recordings after artifact correction.
van Duinen, Hiske; Zijdewind, Inge; Hoogduin, Hans; Maurits, Natasha
2005-08-01
In this experiment, we have measured surface EMG of the first dorsal interosseus during predefined submaximal isometric contractions (5, 15, 30, 50, and 70% of maximal force) of the index finger simultaneously with fMRI measurements. Since we have used sparse sampling fMRI (3-s scanning; 2-s non-scanning), we were able to compare the mean amplitude of the undisturbed EMG (non-scanning) intervals with the mean amplitude of the EMG intervals during scanning, after MRI artifact correction. The agreement between the mean amplitudes of the corrected and the undisturbed EMG was excellent and the mean difference between the two amplitudes was not significantly different. Furthermore, there was no significant difference between the corrected and undisturbed amplitude at different force levels. In conclusion, we have shown that it is feasible to record surface EMG during scanning and that, after MRI artifact correction, the EMG recordings can be used to quantify isometric muscle activity, even at very low activation intensities.
Two-photon exchange corrections in elastic lepton-proton scattering at small momentum transfer
NASA Astrophysics Data System (ADS)
Tomalak, Oleksandr; Vanderhaeghen, Marc
2016-03-01
In recent years, elastic electron-proton scattering experiments, with and without polarized protons, gave strikingly different results for the ratio of the electric to magnetic proton form factors. A mysterious discrepancy ("the proton radius puzzle") has been observed between measurements of the proton charge radius in muon spectroscopy experiments versus electron spectroscopy and electron scattering. Two-photon exchange (TPE) contributions are the largest source of hadronic uncertainty in these experiments. We compare the existing models of the elastic contribution to the TPE correction in lepton-proton scattering. A subtracted dispersion relation formalism for the TPE in electron-proton scattering has been developed and tested. Its relative effect on the cross section is in the 1-2% range for low values of the momentum transfer. An alternative dispersive evaluation of the TPE correction to the hydrogen hyperfine splitting was found and applied. For the inelastic TPE contribution, the low momentum transfer expansion was studied. Together with the elastic TPE, it describes the experimental TPE fit to electron data quite well. For the forthcoming muon-proton scattering experiment (MUSE), the resulting TPE was found to be in the 0.5-1% range, which matches the planned accuracy goal.
Chavez, P.S.
1988-01-01
Digital analysis of remotely sensed data has become an important component of many earth-science studies. These data are often processed through a set of preprocessing or "clean-up" routines that includes a correction for atmospheric scattering, often called haze. Various methods to correct or remove the additive haze component have been developed, including the widely used dark-object subtraction technique. A problem with most of these methods is that the haze values for each spectral band are selected independently. This can create problems because atmospheric scattering is highly wavelength-dependent in the visible part of the electromagnetic spectrum and the scattering values are correlated with each other. Therefore, multispectral data such as from the Landsat Thematic Mapper and Multispectral Scanner must be corrected with haze values that are spectral band dependent. An improved dark-object subtraction technique is demonstrated that allows the user to select a relative atmospheric scattering model to predict the haze values for all the spectral bands from a selected starting band haze value. The improved method normalizes the predicted haze values for the different gain and offset parameters used by the imaging system. Examples of haze value differences between the old and improved methods for Thematic Mapper Bands 1, 2, 3, 4, 5, and 7 are 40.0, 13.0, 12.0, 8.0, 5.0, and 2.0 vs. 40.0, 13.2, 8.9, 4.9, 16.7, and 3.3, respectively, using a relative scattering model of a clear atmosphere. In one Landsat multispectral scanner image the haze value differences for Bands 4, 5, 6, and 7 were 30.0, 50.0, 50.0, and 40.0 for the old method vs. 30.0, 34.4, 43.6, and 6.4 for the new method using a relative scattering model of a hazy atmosphere. © 1988.
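The core of the improved technique, predicting each band's haze from one starting-band value via a relative scattering model, can be sketched as follows. The wavelength-power-law form (exponent near -4 for a very clear, Rayleigh-like atmosphere, shallower for hazy conditions) is the standard relative-scattering-model family; the per-band gain/offset normalization that the improved method also applies is omitted here, and the band wavelengths are illustrative:

```python
def predict_haze(start_haze, start_wavelength, wavelengths, exponent=-4.0):
    """Predict per-band haze DN values from a single starting-band haze
    value, assuming atmospheric scattering proportional to
    wavelength**exponent (relative scattering model)."""
    return [start_haze * (w / start_wavelength) ** exponent
            for w in wavelengths]

# TM-like band-center wavelengths in micrometers (illustrative values)
haze = predict_haze(40.0, 0.485, [0.485, 0.56, 0.66])
```

Because scattering falls off with wavelength, the predicted haze decreases monotonically toward the red bands, unlike independently picked dark-object values, which can violate that physical ordering.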
A simple scatter correction method for dual energy contrast-enhanced digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Lu, Yihuan; Lau, Beverly; Hu, Yue-Houng; Zhao, Wei; Gindi, Gene
2014-03-01
Dual-Energy Contrast-Enhanced Digital Breast Tomosynthesis (DE-CE-DBT) has the potential to deliver diagnostic information for vascularized breast pathology beyond that available from screening DBT. DE-CE-DBT involves a contrast (iodine) injection followed by low-energy (LE) and high-energy (HE) acquisitions. These undergo weighted subtraction and then a reconstruction that ideally shows only the iodinated signal. Scatter in the projection data leads to "cupping" artifacts that can reduce the visibility and quantitative accuracy of the iodinated signal. The use of filtered backprojection (FBP) reconstruction ameliorates these types of artifacts, but using FBP precludes the advantages of iterative reconstructions. This motivates an effective and clinically practical scatter correction (SC) method for the projection data. We propose a simple SC method, applied at each acquisition angle. It uses scatter-only data at the edge of the image to interpolate a scatter estimate within the breast region. The interpolation has an approximately correct spatial profile but is quantitatively inaccurate. We further correct the interpolated scatter data with the aid of easily obtainable knowledge of the SPR (scatter-to-primary ratio) at a single reference point. We validated the SC method using a CIRS breast phantom with iodine inserts. We evaluated its efficacy in terms of SDNR and iodine quantitative accuracy. We also applied our SC method to a patient DE-CE-DBT study and showed that the SC allowed detection of a previously confirmed tumor at the edge of the breast. The SC method is quick to use and may be useful in a clinical setting.
Oyeyemi, Victor B.; Krisiloff, David B.; Keith, John A.; Libisch, Florian; Pavone, Michele; Carter, Emily A.
2014-01-28
Oxygenated hydrocarbons play important roles in combustion science as renewable fuels and additives, but many details about their combustion chemistry remain poorly understood. Although many methods exist for computing accurate electronic energies of molecules at equilibrium geometries, a consistent description of entire combustion reaction potential energy surfaces (PESs) requires multireference correlated wavefunction theories. Here we use bond dissociation energies (BDEs) as a foundational metric to benchmark methods based on multireference configuration interaction (MRCI) for several classes of oxygenated compounds (alcohols, aldehydes, carboxylic acids, and methyl esters). We compare results from multireference singles and doubles configuration interaction to those utilizing a posteriori and a priori size-extensivity corrections, benchmarked against experiment and coupled cluster theory. We demonstrate that size-extensivity corrections are necessary for chemically accurate BDE predictions even in relatively small molecules and furnish examples of unphysical BDE predictions resulting from using too-small orbital active spaces. We also outline the specific challenges in using MRCI methods for carbonyl-containing compounds. The resulting complete basis set extrapolated, size-extensivity-corrected MRCI scheme produces BDEs generally accurate to within 1 kcal/mol, laying the foundation for this scheme's use on larger molecules and for more complex regions of combustion PESs.
Germer, Thomas A
2016-09-01
We consider the effect of volume diffusion on measurements of the bidirectional scattering distribution function when a finite distance is used for the solid angle defining aperture. We derive expressions for correction factors that can be used when the reduced scattering coefficients and the index of refraction are known. When these quantities are not known, the expressions can be used to guide the assessment of measurement uncertainty. We find that some measurement geometries reduce the effect of volume diffusion compared to their reciprocal geometries. PMID:27607273
Beare, Richard; Brown, Michael J. I.; Pimbblet, Kevin
2014-12-20
We describe an accurate new method for determining absolute magnitudes, and hence also K-corrections, that is simpler than most previous methods, being based on a quadratic function of just one suitably chosen observed color. The method relies on the extensive and accurate new set of 129 empirical galaxy template spectral energy distributions from Brown et al. A key advantage of our method is that we can reliably estimate random errors in computed absolute magnitudes due to galaxy diversity, photometric error and redshift error. We derive K-corrections for the five Sloan Digital Sky Survey filters and provide parameter tables for use by the astronomical community. Using the NYU Value-Added Galaxy Catalog, we compare our K-corrections with those from kcorrect. Our K-corrections produce absolute magnitudes that are generally in good agreement with kcorrect. Absolute griz magnitudes differ by less than 0.02 mag and those in the u band by ∼0.04 mag. The evolution of rest-frame colors as a function of redshift is better behaved using our method, with relatively few galaxies being assigned anomalously red colors and a tight red sequence being observed across the whole 0.0 < z < 0.5 redshift range.
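A K-correction of the form described, quadratic in one observed color with redshift-dependent coefficients, can be sketched as follows. The coefficient structure and values here are illustrative assumptions only, not the published parameter tables.

```python
def k_correction(z, color, coeffs):
    """Quadratic-in-color K-correction with redshift-dependent
    coefficients: K(z, c) = a(z) + b(z)*c + q(z)*c**2.
    `coeffs` holds three polynomial coefficient lists in z
    (lowest order first); values are illustrative only."""
    def poly(p, x):
        return sum(a * x**i for i, a in enumerate(p))
    a, b, q = (poly(p, z) for p in coeffs)
    return a + b * color + q * color**2
```

The absolute magnitude then follows from M = m - DM(z) - K(z, c), with DM the distance modulus.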
NASA Astrophysics Data System (ADS)
Zohoun, Sylvain; Agoua, Eusèbe; Degan, Gérard; Perre, Patrick
2002-08-01
This paper presents an experimental study of mass diffusion in the hygroscopic region of four temperate and three tropical wood species. To simplify interpretation of the phenomena, a dimensionless parameter called the reduced diffusivity is defined, which varies from 0 to 1. The method first determines this parameter from mass-flux measurements made with a standard device operated under controlled conditions (airtightness, allowance for dimensional variations, easy installation of the wood samples, and good stability of temperature and humidity). The reasons why this parameter must then be corrected are presented, and a correction chart (abacus) for the mass diffusivity of wood in the steady regime has been plotted. This work represents a significant step forward in the characterization of forest species.
Maltz, Jonathan S; Gangadharan, Bijumon; Bose, Supratik; Hristov, Dimitre H; Faddegon, Bruce A; Paidi, Ajay; Bani-Hashemi, Ali R
2008-12-01
Quantitative reconstruction of cone beam X-ray computed tomography (CT) datasets requires accurate modeling of scatter, beam-hardening, beam profile, and detector response. Typically, commercial imaging systems use fast empirical corrections that are designed to reduce visible artifacts due to incomplete modeling of the image formation process. In contrast, Monte Carlo (MC) methods are much more accurate but are relatively slow. Scatter kernel superposition (SKS) methods offer a balance between accuracy and computational practicality. We show how a single SKS algorithm can be employed to correct both kilovoltage (kV) energy (diagnostic) and megavoltage (MV) energy (treatment) X-ray images. Using MC models of kV and MV imaging systems, we map intensities recorded on an amorphous silicon flat panel detector to water-equivalent thicknesses (WETs). Scattergrams are derived from acquired projection images using scatter kernels indexed by the local WET values and are then iteratively refined using a scatter magnitude bounding scheme that allows the algorithm to accommodate the very high scatter-to-primary ratios encountered in kV imaging. The algorithm recovers radiological thicknesses to within 9% of the true value at both kV and MV energies. Nonuniformity in CT reconstructions of homogeneous phantoms is reduced by an average of 76% over a wide range of beam energies and phantom geometries.
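A minimal 1D sketch of the scatter kernel superposition idea with the magnitude-bounding iteration follows; the kernels, WET bins, and bound fraction are hypothetical stand-ins (the real algorithm operates on 2D projections with MC-derived kernels).

```python
import numpy as np

def sks_scatter_estimate(measured, wet, kernels, wet_bins, n_iter=10, bound=0.95):
    """Iterative scatter-kernel-superposition sketch (1D for brevity).
    `kernels[k]` is the scatter spread kernel for WET bin k; scatter is
    bounded to at most `bound` of the measured signal each iteration."""
    primary = measured.copy()
    scatter = np.zeros_like(measured)
    for _ in range(n_iter):
        scatter[:] = 0.0
        for i, p in enumerate(primary):
            k = kernels[np.digitize(wet[i], wet_bins)]
            lo = i - len(k) // 2
            for j, w in enumerate(k):
                idx = lo + j
                if 0 <= idx < len(scatter):
                    scatter[idx] += p * w  # superpose kernel scaled by primary
        scatter = np.minimum(scatter, bound * measured)  # magnitude bound
        primary = measured - scatter
    return primary, scatter
```

Each pass re-estimates scatter from the current primary estimate, so the fixed point satisfies primary = measured - S(primary), which is the self-consistency the bounding scheme keeps stable at high scatter-to-primary ratios.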
NASA Astrophysics Data System (ADS)
Kassinopoulos, Michalis; Pitris, Costas
2016-03-01
The modulations appearing on the backscattering spectrum originating from a scatterer are related to its diameter as described by Mie theory for spherical particles. Many metrics for Spectroscopic Optical Coherence Tomography (SOCT) take advantage of this observation in order to enhance the contrast of Optical Coherence Tomography (OCT) images. However, none of these metrics has achieved high accuracy when calculating the scatterer size. In this work, Mie theory was used to further investigate the relationship between the degree of modulation in the spectrum and the scatterer size. From this study, a new spectroscopic metric, the bandwidth of the Correlation of the Derivative (COD) was developed which is more robust and accurate, compared to previously reported techniques, in the estimation of scatterer size. The self-normalizing nature of the derivative and the robustness of the first minimum of the correlation as a measure of its width, offer significant advantages over other spectral analysis approaches especially for scatterer sizes above 3 μm. The feasibility of this technique was demonstrated using phantom samples containing 6, 10 and 16 μm diameter microspheres as well as images of normal and cancerous human colon. The results are very promising, suggesting that the proposed metric could be implemented in OCT spectral analysis for measuring nuclear size distribution in biological tissues. A technique providing such information would be of great clinical significance since it would allow the detection of nuclear enlargement at the earliest stages of precancerous development.
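The core idea of the COD metric, differentiate the spectrum (self-normalizing), autocorrelate, and take the lag of the first minimum as a robust width measure, can be sketched as below; the exact windowing and normalization used in the paper may differ.

```python
import numpy as np

def cod_bandwidth(spectrum):
    """Sketch of the correlation-of-the-derivative (COD) idea:
    differentiate the backscattering spectrum, autocorrelate,
    and report the lag of the first local minimum."""
    d = np.diff(np.asarray(spectrum, dtype=float))
    d -= d.mean()
    ac = np.correlate(d, d, mode="full")[len(d) - 1:]  # keep lags >= 0
    ac /= ac[0]
    for lag in range(1, len(ac) - 1):
        if ac[lag] < ac[lag - 1] and ac[lag] <= ac[lag + 1]:
            return lag
    return len(ac) - 1
```

For a spectrum modulated with period P samples, the first autocorrelation minimum falls near P/2, so the returned lag tracks the modulation frequency and hence, via Mie theory, the scatterer size.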
QED radiative corrections to low-energy Møller and Bhabha scattering
NASA Astrophysics Data System (ADS)
Epstein, Charles S.; Milner, Richard G.
2016-08-01
We present a treatment of the next-to-leading-order radiative corrections to unpolarized Møller and Bhabha scattering without resorting to ultrarelativistic approximations. We extend existing soft-photon radiative corrections with new hard-photon bremsstrahlung calculations so that the effect of photon emission is taken into account for any photon energy. This formulation is intended for application in the OLYMPUS experiment and the upcoming DarkLight experiment but is applicable to a broad range of experiments at energies where QED is a sufficient description.
Electromagnetic radiative corrections in parity-violating electron-proton scattering
Arvieux, Francois; Collin, B.; Guler, Hayko; Morlet, Marcel; Niccolai, Silvia; Ong, S.; Van de Wiele, Jacques
2006-01-01
QED radiative corrections have been calculated for leptonic and hadronic variables in parity-violating elastic ep scattering. For the first time, the calculation of the asymmetry in the elastic radiative tail is performed without the peaking-approximation assumption in the hadronic-variables configuration. A comparison with the PV-A4 data validates our approach. This method has also been used to evaluate the radiative corrections to the parity-violating asymmetry measured in the G0 experiment. The results obtained are presented here.
Koubar, Khodor; Bekaert, Virgile; Brasse, David; Laquerriere, Patrice
2015-06-01
Bone mineral density plays an important role in determining bone strength and fracture risk, so it is very important to obtain accurate bone mineral density measurements. Microcomputed tomography provides 3D information about the architectural properties of bone, but the accuracy of quantitative analysis is degraded by artefacts in the reconstructed images, mainly beam hardening artefacts such as cupping. In this paper, we introduce a new beam hardening correction method based on a post-reconstruction technique that uses off-line, experimentally determined water and bone linearization curves to account for the nonhomogeneity of the scanned animal. To evaluate the mass correction rate, a calibration line was established to convert the reconstructed linear attenuation coefficients into bone masses. The correction method was then applied to a multimaterial cylindrical phantom and to mouse skeleton images. Mass correction rates of up to 18% between uncorrected and corrected images were obtained, and a marked improvement in the calculated mouse femur mass was observed. The results were also compared with those of the simple water linearization technique, which does not take the nonhomogeneity of the object into account.
Biophotonics of skin: method for correction of deep Raman spectra distorted by elastic scattering
NASA Astrophysics Data System (ADS)
Roig, Blandine; Koenig, Anne; Perraut, François; Piot, Olivier; Gobinet, Cyril; Manfait, Michel; Dinten, Jean-Marc
2015-03-01
Confocal Raman microspectroscopy allows non-invasive, in-depth molecular and conformational characterization of biological tissues. Unfortunately, elastic scattering distorts the spectra. Our objective is to correct the resulting attenuation of in-depth Raman peak intensities, thus enabling quantitative diagnosis. For this purpose, we developed PDMS phantoms mimicking the optical properties of skin, used as tools for instrument calibration and for validating the data processing method. An optical system based on a fiber bundle had previously been developed for in vivo skin characterization with Diffuse Reflectance Spectroscopy (DRS). Applied to our phantoms, this technique confirmed their optical properties: the targeted values were retrieved. Raman microspectroscopy was performed using a commercial confocal microscope. Depth profiles were constructed from the integrated intensity of specific PDMS Raman vibrations. On monolayer phantoms, the profiles display a decline that increases with the scattering coefficient. Furthermore, when Raman spectra are acquired on multilayered phantoms, the signal attenuation through each layer depends directly on that layer's own scattering properties. Determining the optical properties of a biological sample, obtained for example with DRS, is therefore crucial for properly correcting Raman depth profiles. A model, inspired by S. L. Jacques's expression for confocal reflectance microscopy and modified on several points, is proposed and fitted to the depth profiles obtained on the phantoms as a function of the reduced scattering coefficient. Consequently, once the optical properties of a biological sample are known, the intensity of deep Raman spectra distorted by elastic scattering can be corrected with this model, permitting quantitative studies for characterization or diagnosis.
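As a simplified stand-in for the modified Jacques model (the published expression includes additional instrument-dependent terms), the attenuation of a depth profile can be undone with a round-trip exponential factor once an effective attenuation coefficient is known; `mu_eff` below is a hypothetical fitted parameter, not a value from the paper.

```python
import math

def corrected_profile(measured, depths, mu_eff):
    """Undo round-trip exponential attenuation of a Raman depth
    profile: I_true(z) ~ I_meas(z) * exp(+2 * mu_eff * z).
    `mu_eff` is an effective attenuation coefficient obtained,
    e.g., by fitting phantom depth profiles (units 1/length)."""
    return [i * math.exp(2.0 * mu_eff * z) for i, z in zip(measured, depths)]
```

The factor of 2 reflects that both the excitation and the Raman-shifted light traverse the overlying tissue.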
Corrections on energy spectrum and scatterings for fast neutron radiography at NECTAR facility
NASA Astrophysics Data System (ADS)
Liu, Shu-Quan; Bücherl, Thomas; Li, Hang; Zou, Yu-Bin; Lu, Yuan-Rong; Guo, Zhi-Yu
2013-11-01
Distortions caused by the neutron spectrum and by scattered neutrons are major problems in fast neutron radiography and must be considered to improve image quality. This paper focuses on removing these image distortions and deviations for fast neutron radiography performed at the NECTAR facility of the research reactor FRM-II at Technische Universität München (TUM), Germany. The NECTAR energy spectrum is analyzed and established to correct the influence of the neutron spectrum, and the Point Scattered Function (PScF), simulated with the Monte Carlo program MCNPX, is used to evaluate scattering effects from the object and improve image quality. The analysis demonstrates the effectiveness of both corrections.
Development of Filtered Rayleigh Scattering for Accurate Measurement of Gas Velocity
NASA Technical Reports Server (NTRS)
Miles, Richard B.; Lempert, Walter R.
1995-01-01
The overall goals of this research were to develop new diagnostic tools capable of capturing unsteady and/or time-evolving, high-speed flow phenomena. The program centers on the development of Filtered Rayleigh Scattering (FRS) for velocity, temperature, and density measurement, and on the construction of narrow-linewidth laser sources capable of producing a 'burst' of high-power pulses at MHz-order repetition rates.
Truncation correction for VOI C-arm CT using scattered radiation
NASA Astrophysics Data System (ADS)
Bier, Bastian; Maier, Andreas; Hofmann, Hannes G.; Schwemmer, Chris; Xia, Yan; Struffert, Tobias; Hornegger, Joachim
2013-03-01
In C-arm computed tomography, patient dose reduction by volume-of-interest (VOI) imaging is of increasing interest for many clinical applications. A remaining limitation of VOI imaging is the truncation artifact that appears when reconstructing a 3D volume, manifesting either as cupping towards the boundaries of the field-of-view (FOV) or as an incorrect offset in the Hounsfield values of the reconstructed voxels. In this paper, we present a new method for correcting truncation artifacts in a collimated scan. When axial or lateral collimation is applied, scattered radiation still reaches the detector and is recorded outside the FOV. If the full area of the detector is read out, we can use this scattered signal to estimate the truncated part of the object. We apply three processing steps: detection of the collimator edge, adjustment of the area outside the FOV, and interpolation across the collimator edge. Compared to heuristic truncation correction methods, we were able to reconstruct high-contrast structures such as bones outside the FOV. Inside the FOV we achieved reconstruction results similar to those of water cylinder truncation correction. These preliminary results indicate that scattered radiation outside the FOV can be used to improve image quality, and further research in this direction seems beneficial.
NASA Astrophysics Data System (ADS)
Roberts, B. M.; Dzuba, V. A.; Flambaum, V. V.; Pospelov, M.; Stadnik, Y. V.
2016-06-01
We revisit the WIMP-type dark matter scattering on electrons that results in atomic ionization and can manifest itself in a variety of existing direct-detection experiments. Unlike the WIMP-nucleon scattering, where current experiments probe typical interaction strengths much smaller than the Fermi constant, the scattering on electrons requires a much stronger interaction to be detectable, which in turn requires new light force carriers. We account for such new forces explicitly, by introducing a mediator particle with scalar or vector couplings to dark matter and to electrons. We then perform state-of-the-art numerical calculations of atomic ionization relevant to the existing experiments. Our goals are to consistently take into account the atomic physics aspect of the problem (e.g., the relativistic effects, which can be quite significant) and to scan the parameter space (the dark matter mass, the mediator mass, and the effective coupling strength) to see if there is any part of the parameter space that could potentially explain the DAMA modulation signal. While we find that the modulation fraction of all events with energy deposition above 2 keV in NaI can be quite significant, reaching ~50%, the relevant parts of the parameter space are excluded by the XENON10 and XENON100 experiments.
Compartment modeling of dynamic brain PET—The impact of scatter corrections on parameter errors
Häggström, Ida; Karlsson, Mikael; Larsson, Anne; Schmidtlein, C. Ross
2014-11-01
Purpose: The aim of this study was to investigate the effect of scatter and its correction on kinetic parameters in dynamic brain positron emission tomography (PET) tumor imaging. The 2-tissue compartment model was used, and two different reconstruction methods and two scatter correction (SC) schemes were investigated. Methods: The GATE Monte Carlo (MC) software was used to perform 2 × 15 full PET scan simulations of a voxelized head phantom with inserted tumor regions. The two sets of kinetic parameters of all tissues were chosen to represent the 2-tissue compartment model for the tracer 3'-deoxy-3'-[18F]fluorothymidine (FLT), and were denoted FLT1 and FLT2. PET data were reconstructed with both 3D filtered back-projection with reprojection (3DRP) and 3D ordered-subset expectation maximization (OSEM). Images including true coincidences with attenuation correction (AC) and true+scattered coincidences with AC, with and without one of two applied SC schemes, were reconstructed. Kinetic parameters were estimated by weighted nonlinear least squares fitting of image-derived time-activity curves. Calculated parameters were compared to the true input to the MC simulations. Results: The relative parameter biases for scatter-eliminated data were 15%, 16%, 4%, 30%, 9%, and 7% (FLT1) and 13%, 6%, 1%, 46%, 12%, and 8% (FLT2) for K1, k2, k3, k4, Va, and Ki, respectively. As expected, SC was essential for most parameters since omitting it increased biases by 10 percentage points on average. SC was not found necessary for the estimation of Ki and k3, however. There was no significant difference in parameter biases between the two investigated SC schemes or from parameter biases from scatter-eliminated PET data. Furthermore, neither 3DRP nor OSEM yielded the smallest parameter biases consistently, although there was a slight favor for 3DRP, which produced less biased k3 and Ki.
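The forward model behind such fits, the 2-tissue compartment model, can be sketched with a simple Euler integration; parameter names follow the abstract, and the net influx constant is Ki = K1*k3/(k2 + k3). The time grid, input function, and integration scheme here are illustrative, not the study's actual fitting pipeline.

```python
import numpy as np

def two_tissue_tac(t, cp, K1, k2, k3, k4, Va):
    """2-tissue compartment model by forward Euler integration.
    dC1/dt = K1*Cp - (k2+k3)*C1 + k4*C2
    dC2/dt = k3*C1 - k4*C2
    Tissue TAC = (1 - Va)*(C1 + C2) + Va*Cp, with Va the blood fraction."""
    c1 = np.zeros_like(t)
    c2 = np.zeros_like(t)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        dc1 = K1 * cp[i - 1] - (k2 + k3) * c1[i - 1] + k4 * c2[i - 1]
        dc2 = k3 * c1[i - 1] - k4 * c2[i - 1]
        c1[i] = c1[i - 1] + dt * dc1
        c2[i] = c2[i - 1] + dt * dc2
    return (1 - Va) * (c1 + c2) + Va * cp
```

Weighted nonlinear least squares then adjusts (K1, k2, k3, k4, Va) so this model matches the image-derived time-activity curve.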
NASA Astrophysics Data System (ADS)
Yang, Kai; Burkett, George, Jr.; Boone, John M.
2012-03-01
X-ray scatter is a common cause of image artifacts for cone-beam CT systems due to the expanded field of view, and it degrades the quantitative accuracy of measured Hounsfield Units (HU). Due to the strong dependence of scatter on the object being scanned, it is crucial to measure the scatter signal for each object. We propose to use a beam pass array (BPA) composed of parallel holes within a tungsten plate to measure scatter for a dedicated breast CT system. A complete study of the performance of the BPA was conducted. The goal of this study was to explore the feasibility of measuring and compensating for the scatter signal for each individual object. Different clinical study schemes were investigated, including a full rotation scan with the BPA and discrete projections acquired with the BPA followed by interpolation for a full rotation. Different sized cylindrical phantoms and a breast-shaped polyethylene phantom were used to test the robustness of the proposed method. Physically measured scatter signals were converted into scatter-to-primary ratios (SPRs) at discrete locations across the projection image, and a complete noise-free 2D SPR map was generated from these discrete measurements. SPR results were compared to Monte Carlo simulation results, and scatter-corrected CT images were quantitatively evaluated for the "cupping" artifact. With the proposed method, a reduction of up to 47 HU of "cupping" was demonstrated. In conclusion, the proposed BPA method demonstrated effective and accurate object-specific scatter correction with the main advantage of dose sparing compared to beam stop array (BSA) approaches.
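The correction step implied by the measured SPRs is straightforward: since measured = primary * (1 + SPR), dividing by (1 + SPR) recovers the primary signal. A 1D sketch follows, with the sparse BPA samples filled in by linear interpolation; the paper's actual 2D interpolation scheme is not specified here.

```python
import numpy as np

def dense_spr(sample_pos, sample_spr, n_pixels):
    """Linearly interpolate sparse BPA samples of the
    scatter-to-primary ratio (SPR) across one detector row."""
    return np.interp(np.arange(n_pixels), sample_pos, sample_spr)

def correct_with_spr(projection, spr):
    """measured = primary * (1 + SPR), so division recovers the primary."""
    return projection / (1.0 + spr)
```

Because the SPR map is smooth and noise-free by construction, this division removes the low-frequency scatter bias without amplifying quantum noise the way direct subtraction of a noisy scatter estimate would.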
X-ray scatter correction method for dedicated breast computed tomography
Sechopoulos, Ioannis
2012-05-15
Purpose: To improve image quality and accuracy in dedicated breast computed tomography (BCT) by removing the x-ray scatter signal included in the BCT projections. Methods: The previously characterized magnitude and distribution of x-ray scatter in BCT results in both cupping artifacts and reduction of contrast and accuracy in the reconstructions. In this study, an image processing method is proposed that estimates and subtracts the low-frequency x-ray scatter signal included in each BCT projection postacquisition and prereconstruction. The estimation of this signal is performed using simple additional hardware, one additional BCT projection acquisition with negligible radiation dose, and simple image processing software algorithms. The high frequency quantum noise due to the scatter signal is reduced using a noise filter postreconstruction. The dosimetric consequences and validity of the assumptions of this algorithm were determined using Monte Carlo simulations. The feasibility of this method was determined by imaging a breast phantom on a BCT clinical prototype and comparing the corrected reconstructions to the unprocessed reconstructions and to reconstructions obtained from fan-beam acquisitions as a reference standard. One-dimensional profiles of the reconstructions and objective image quality metrics were used to determine the impact of the algorithm. Results: The proposed additional acquisition results in negligible additional radiation dose to the imaged breast (~0.4% of the standard BCT acquisition). The processed phantom reconstruction showed substantially reduced cupping artifacts, increased contrast between adipose and glandular tissue equivalents, higher voxel value accuracy, and no discernible blurring of high frequency features. Conclusions: The proposed scatter correction method for dedicated breast CT is feasible and can result in highly improved image quality. Further optimization and testing, especially with patient images, is necessary to
NASA Astrophysics Data System (ADS)
Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki
2016-03-01
Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, and other fields. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurements was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes the stochastic property of SNR into account. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and
A model for the accurate computation of the lateral scattering of protons in water.
Bellinzona, E V; Ciocca, M; Embriaco, A; Ferrari, A; Fontana, A; Mairani, A; Parodi, K; Rotondi, A; Sala, P; Tessonnier, T
2016-02-21
A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted on MC data calculated with FLUKA. The model, after the convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy of the MC codes based on Molière theory, with a much shorter computing time.
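For orientation, a much simpler estimate than the full Molière treatment is the Highland parameterization of the multiple-scattering angle. The sketch below ignores energy loss along the path, which the paper's model does include, so it is only a rough stand-in; the constants are the standard Highland values, and the water radiation length is approximate.

```python
import math

M_P = 938.272       # proton rest mass [MeV]
X0_WATER = 36.08    # radiation length of water [cm] (approximate)

def highland_theta0(T_mev, thickness_cm, x0_cm=X0_WATER):
    """Highland estimate of the multiple-scattering angle (radians)
    for a proton of kinetic energy T traversing water:
    theta0 = (14.1 MeV / (beta * p * c)) * sqrt(x/X0) * (1 + (1/9) * log10(x/X0))."""
    E = T_mev + M_P                       # total energy
    pc = math.sqrt(E**2 - M_P**2)         # momentum * c [MeV]
    beta = pc / E
    t = thickness_cm / x0_cm
    return (14.1 / (beta * pc)) * math.sqrt(t) * (1.0 + (1.0 / 9.0) * math.log10(t))
```

For a 150 MeV proton in 10 cm of water this gives an angle of a few tens of milliradians, the right order of magnitude for the electromagnetic core that the full Molière model describes; the nuclear-interaction tails require the separate parametrization discussed in the abstract.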
More accurate X-ray scattering data of deeply supercooled bulk liquid water
Neuefeind, Joerg C; Benmore, Chris J; Weber, Richard; Paschek, Dietmar
2011-01-01
Deeply supercooled water droplets held containerless in an acoustic levitator are investigated with high-energy X-ray scattering. The temperature dependence of the X-ray structure function is found to be non-linear. Comparison with two popular computer models reveals that the structural changes are predicted to be too abrupt by the TIP5P model, while the rate of change predicted by TIP4P is in much better agreement with experiment. The abrupt structural changes predicted by the TIP5P model to occur in the temperature range 260-240 K, as water approaches the homogeneous nucleation limit, are unrealistic. Both models underestimate the distance between neighbouring oxygen atoms and overestimate the sharpness of the OO distance distribution, indicating that the strength of the H-bond is overestimated in these models.
Correction of radiation absorption on biological samples using Rayleigh to Compton scattering ratio
NASA Astrophysics Data System (ADS)
Pereira, Marcelo O.; Conti, Claudio de Carvalho; dos Anjos, Marcelino J.; Lopes, Ricardo T.
2012-06-01
The aim of this work was to develop a method to correct for radiation absorption (the mass attenuation coefficient curve) at low energies (E < 30 keV) in a biological matrix, based on the Rayleigh to Compton scattering ratio and the effective atomic number. For calibration, scattering measurements were performed on standard samples using radiation produced by a 241Am gamma-ray source (59.54 keV), and also on certified biological samples of milk powder, hay powder and bovine liver (NIST 1577B). In addition, six methods of effective atomic number determination described in the literature were used with the Rayleigh to Compton scattering ratio (R/C) in order to calculate the mass attenuation coefficient. The results obtained with the proposed method were compared with those obtained using the transmission method and were in good agreement, suggesting that the radiation absorption correction method presented in this paper is adequate for biological samples.
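One common power-law definition of the effective atomic number (a Mayneord-type rule; the abstract compares six such methods without naming them, so this specific choice and exponent are assumptions) can be sketched as:

```python
def z_effective(fractions, zs, m=2.94):
    """Power-law effective atomic number:
    Z_eff = (sum_i a_i * Z_i**m)**(1/m),
    with a_i the (normalized) electron-fraction weights of each
    element and m an empirical exponent, commonly taken near 2.94."""
    total = sum(f * z**m for f, z in zip(fractions, zs))
    return total ** (1.0 / m)
```

Once Z_eff is known for a sample, a calibration of the measured R/C ratio against Z_eff links the scattering measurement to the low-energy mass attenuation coefficient.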
Wide angle Compton scattering on the proton: study of power suppressed corrections
NASA Astrophysics Data System (ADS)
Kivel, N.; Vanderhaeghen, M.
2015-10-01
We study the wide angle Compton scattering process on a proton within the soft-collinear factorization (SCET) framework. The main purpose of this work is to estimate the effect due to certain power suppressed corrections. We consider all possible kinematical power corrections and also include the subleading amplitudes describing the scattering with nucleon helicity flip. Under certain assumptions we present a leading-order factorization formula for these amplitudes which includes the hard- and soft-spectator contributions. We apply the formalism and perform a phenomenological analysis of the cross section and asymmetries in the wide angle Compton scattering on a proton. We assume that in the relevant kinematical region where -t, -u > 2.5 GeV² the dominant contribution is provided by the soft-spectator mechanism. The hard coefficient functions of the corresponding SCET operators are taken in the leading-order approximation. The analysis of existing cross section data shows that the contribution of the helicity-flip amplitudes to this observable is quite small and comparable with other expected theoretical uncertainties. We also show predictions for double polarization observables for which experimental information exists.
Wen, C; Smith, David J
2016-10-01
Aberration-corrected transmission electron microscope images taken under optimum-defocus conditions or processed offline can correctly reflect the projected crystal structure with atomic resolution. However, dynamical scattering, which will seriously influence image contrast, is still unavoidable. Here, the multislice image simulation approach was used to quantify the impact of dynamical scattering on the contrast of aberration-corrected images for a 3C-SiC specimen with changes in atomic occupancy and thickness. Optimum-defocus images with different spherical aberration (CS) coefficients, and structure images restored by deconvolution processing, were studied. The results show that atomic-column positions and the atomic occupancy for SiC 'dumbbells' can be determined by analysis of image contrast profiles only below a certain thickness limit. This limit is larger for optimum-defocus and restored structure images with negative CS coefficient than those with positive CS coefficient. The image contrast of C (or Si) atomic columns with specific atomic occupancy changes differently with increasing crystal thickness. Furthermore, contrast peaks for C atomic columns overlapping with neighboring peaks of Si atomic columns with varied Si atomic occupancy, which is enhanced with increasing crystal thickness, can be neglected in restored structure images, but the effect is substantial in optimum-defocus images.
Implementation of an Analytical Raman Scattering Correction for Satellite Ocean-Color Processing
NASA Technical Reports Server (NTRS)
McKinna, Lachlan I. W.; Werdell, P. Jeremy; Proctor, Christopher W.
2016-01-01
Raman scattering of photons by seawater molecules is an inelastic scattering process. This effect can contribute significantly to the water-leaving radiance signal observed by space-borne ocean-color spectroradiometers. If not accounted for during ocean-color processing, Raman scattering can cause biases in derived inherent optical properties (IOPs). Here we describe a Raman scattering correction (RSC) algorithm that has been integrated within NASA's standard ocean-color processing software. We tested the RSC with NASA's Generalized Inherent Optical Properties algorithm (GIOP). A comparison between derived IOPs and in situ data revealed that the magnitude of the derived backscattering coefficient and the phytoplankton absorption coefficient were reduced when the RSC was applied, whilst the absorption coefficient of colored dissolved and detrital matter remained unchanged. Importantly, our results show that the RSC did not degrade the retrieval skill of the GIOP. In addition, a time-series study of oligotrophic waters near Bermuda showed that the RSC did not introduce unwanted temporal trends or artifacts into derived IOPs.
Laitinen, T.; Dalla, S.; Huttunen-Heikinmaa, K.; Valtonen, E.
2015-06-10
To understand the origin of Solar Energetic Particles (SEPs), we must study their injection time relative to other solar eruption manifestations. Traditionally, the injection time is determined using Velocity Dispersion Analysis (VDA), in which a linear fit of the observed event onset times at 1 AU to the inverse velocities of SEPs is used to derive the injection time and path length of the first-arriving particles. VDA does not, however, take into account that the particles that produce a statistically observable onset at 1 AU have scattered in interplanetary space. We use Monte Carlo test-particle simulations of energetic protons to study the effect of particle scattering on the observable SEP event onset above the pre-event background, and consequently on VDA results. We find that the VDA results are sensitive to the properties of the pre-event and event particle spectra as well as to the SEP injection and scattering parameters. In particular, a VDA-obtained path length that is close to the nominal Parker spiral length does not imply that the VDA injection time is correct. We study the delay to the observed onset caused by particle scattering and derive a simple estimate for the delay time, using the rate of intensity increase at the SEP onset as a parameter. We apply the correction to a magnetically well-connected SEP event of 2000 June 10, and show that it improves both the path length and injection time estimates, while also increasing the error limits to better reflect the inherent uncertainties of VDA.
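The linear fit at the heart of VDA can be sketched directly: with onset times modeled as t(v) = t_inj + L/v, a straight-line fit of onset time against inverse velocity gives the path length L as the slope and the injection time as the intercept. A minimal sketch on a synthetic, scatter-free event (`vda_fit` and the units are illustrative, not the authors' code):

```python
import numpy as np

def vda_fit(onset_times, velocities):
    """Velocity Dispersion Analysis: fit t_onset = t_inj + L / v.

    Returns (t_inj, L): intercept = injection time, slope = path length.
    Assumes scatter-free propagation of the first-arriving particles.
    """
    inv_v = 1.0 / np.asarray(velocities)
    L, t_inj = np.polyfit(inv_v, np.asarray(onset_times), 1)
    return t_inj, L

# synthetic event: injection at t = 100 s along a 1.2 AU path
v = np.array([0.3, 0.4, 0.5, 0.6])  # velocities in AU/s (illustrative units)
t = 100.0 + 1.2 / v                 # noiseless onset times
t_inj, L = vda_fit(t, v)
```

In a real event, scattering delays the observable onset, which is exactly the bias the paper's rate-of-increase correction addresses.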
Hadron mass corrections in semi-inclusive deep-inelastic scattering
Guerrero Teran, Juan Vicente; Ethier, James J.; Accardi, Alberto; Casper, Steven W.; Melnitchouk, Wally
2015-09-24
The spin-dependent cross sections for semi-inclusive lepton-nucleon scattering are derived in the framework of collinear factorization, including the effects of the masses of the target and produced hadron at finite Q^{2}. At leading order the cross sections factorize into products of parton distribution and fragmentation functions evaluated in terms of new, mass-dependent scaling variables. Furthermore, the size of the hadron mass corrections is estimated at kinematics relevant for current and future experiments, and the implications for the extraction of parton distributions from semi-inclusive measurements are discussed.
γZ corrections to forward-angle parity-violating ep scattering
Sibirtsev, Alex; Blunden, Peter G.; Melnitchouk, Wally; Thomas, Anthony W.
2010-07-30
We use dispersion relations to evaluate the γZ box contribution to parity-violating electron scattering in the forward limit, taking into account constraints from recent JLab data on electroproduction in the resonance region as well as high energy data from HERA. The correction to the asymmetry is found to be (1.2 ± 0.2)% at the kinematics of the JLab Qweak experiment, which is well within the limits required to achieve a 4% measurement of the weak charge of the proton.
Gakh, G. I.; Konchatnij, M. I.; Merenkov, N. P.
2012-08-15
The model-independent QED radiative corrections to polarization observables in elastic scattering of unpolarized and longitudinally polarized electron beams by a deuteron target are calculated in leptonic variables. The experimental setup in which the deuteron target is arbitrarily polarized is considered, and the procedure for applying the derived results to the vector or tensor polarization of the recoil deuteron is discussed. The calculation is based on taking all essential Feynman diagrams into account, which yields the cross section in the form of the Drell-Yan representation, and on the covariant parameterization of the deuteron polarization state. Numerical estimates of the radiative corrections are given for the case where event selection allows undetected particles (photons and electron-positron pairs) and a restriction on the lost invariant mass is used.
A scatter correction method for contrast-enhanced dual-energy digital breast tomosynthesis
Lu, Yihuan; Peng, Boyu; Lau, Beverly A.; Hu, Yue-Houng; Scaduto, David A.; Zhao, Wei; Gindi, Gene
2015-01-01
Contrast-enhanced dual energy digital breast tomosynthesis (CE-DE-DBT) is designed to image iodinated masses while suppressing breast anatomical background. Scatter is a problem, especially for high energy acquisition, in that it causes severe cupping artifact and iodine quantitation errors. We propose a patient-specific scatter correction (SC) algorithm for CE-DE-DBT. The empirical algorithm works by interpolating scatter data outside the breast shadow into an estimate within the breast shadow. The interpolated estimate is further improved by operations that use an easily obtainable (from phantoms) table of scatter-to-primary ratios (SPR): a single SPR value for each breast thickness and acquisition angle. We validated our SC algorithm for two breast-emulating phantoms by comparing SPR from our SC algorithm to that measured using a beam-passing pinhole array plate. The error in our SC-computed SPR, averaged over acquisition angle and image location, was about 5%, with slightly worse errors for thicker phantoms. The SC projection data, reconstructed using OS-SART, showed a large degree of decupping. We also observed that SC removed the dependence of iodine quantitation on phantom thickness. We applied the SC algorithm to a CE-DE-mammographic patient image with a biopsy-confirmed tumor at the breast periphery. In the image without SC, the contrast-enhanced tumor was masked by the cupping artifact. With our SC, the tumor was easily visible. An interpolation-based SC was proposed by (Siewerdsen et al., 2006) for cone-beam CT (CBCT), but our algorithm and application differ in several respects. Other relevant SC techniques include Monte-Carlo and convolution-based methods for CBCT, storage of a precomputed library of scatter maps for DBT, and patient acquisition with a beam-passing pinhole array for breast CT. Our SC algorithm can be accomplished in clinically acceptable times, requires no additional imaging hardware or extra patient dose, and is easily transportable.
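The interpolation-plus-SPR-scaling idea can be sketched as follows. This is a simplified stand-in, not the authors' implementation: `scatter_correct`, the row-wise linear interpolation, and the single scalar SPR are illustrative assumptions.

```python
import numpy as np

def scatter_correct(projection, breast_mask, target_spr):
    """Sketch of an interpolation-based scatter correction.

    Pixels outside the breast shadow are treated as scatter-only; their
    values are interpolated row by row across the shadow, and the map is
    then scaled so the scatter-to-primary ratio inside the shadow matches
    a tabulated SPR value.
    """
    scatter = np.empty_like(projection, dtype=float)
    cols = np.arange(projection.shape[1])
    for r in range(projection.shape[0]):
        outside = ~breast_mask[r]
        scatter[r] = np.interp(cols, cols[outside], projection[r, outside])
    primary = projection - scatter
    spr_now = scatter[breast_mask].sum() / max(primary[breast_mask].sum(), 1e-12)
    scatter[breast_mask] *= target_spr / spr_now
    return np.clip(projection - scatter, 0.0, None)

# toy projection: uniform scatter of 2, primary of 5 inside the shadow
proj = np.full((4, 6), 2.0)
mask = np.zeros((4, 6), dtype=bool)
mask[:, 2:4] = True
proj[mask] += 5.0
corrected = scatter_correct(proj, mask, target_spr=0.4)
```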
NASA Astrophysics Data System (ADS)
Carloni Calame, C.; Czyż, H.; Gluza, J.; Gunia, M.; Montagna, G.; Nicrosini, O.; Piccinini, F.; Riemann, T.; Worek, M.
2011-07-01
Virtual fermionic Nf = 1 and Nf = 2 contributions to Bhabha scattering are combined with realistic real corrections at next-to-next-to-leading order in QED. The virtual corrections are determined by the package bha_nnlo_hf, and the real corrections with the Monte Carlo generators BHAGEN-1PH, HELAC-PHEGAS and EKHARA. Numerical results are discussed at the energies of, and with realistic cuts used at, the Φ factory DAΦNE, at the B factories PEP-II and KEK, and at the charm/τ factory BEPC II. We compare these complete calculations with the approximate ones realized in the generator BabaYaga@NLO, used at meson factories to evaluate their luminosities. For realistic reference event selections we find agreement for the NNLO leptonic and hadronic corrections within 0.07% or better, and conclude that they are well accounted for in the generator by comparison with the present experimental accuracy.
Peng, Xiangda; Zhang, Yuebin; Chu, Huiying; Li, Yan; Zhang, Dinglin; Cao, Liaoran; Li, Guohui
2016-06-14
Classical molecular dynamics (MD) simulation of membrane proteins faces significant challenges in accurately reproducing and predicting experimental observables such as ion conductance and permeability, due to its inability to precisely describe the electronic interactions in heterogeneous systems. In this work, the free energy profiles of K⁺ and Na⁺ permeating through the gramicidin A channel are characterized by using the AMOEBA polarizable force field with a total sampling time of 1 μs. Our results indicated that by explicitly introducing the multipole terms and polarization into the electrostatic potentials, the permeation free energy barrier of K⁺ through the gA channel is considerably reduced compared to the overestimated results obtained from the fixed-charge model. Moreover, the estimated maximum conductances, without any corrections, for both K⁺ and Na⁺ passing through the gA channel are much closer to the experimental results than those from any classical MD simulations, demonstrating the power of AMOEBA in investigating membrane proteins. PMID:27171823
Kangasmaa, Tuija; Kuikka, Jyrki; Sohlberg, Antti
2012-01-01
Simultaneous Tl-201/Tc-99m dual isotope myocardial perfusion SPECT is seriously hampered by down-scatter from Tc-99m into the Tl-201 energy window. This paper presents and optimises an ordered-subsets expectation-maximisation (OS-EM) based reconstruction algorithm which corrects the down-scatter using an efficient Monte Carlo (MC) simulator. The algorithm starts by first reconstructing the Tc-99m image with attenuation, collimator response, and MC-based scatter correction. The reconstructed Tc-99m image is then used as input for an efficient MC-based down-scatter simulation of Tc-99m photons into the Tl-201 window. This down-scatter estimate is finally used in the Tl-201 reconstruction to correct the crosstalk between the two isotopes. The mathematical 4D NCAT phantom and physical cardiac phantoms were used to optimise the number of OS-EM iterations at which the scatter estimate is updated and the number of MC-simulated photons. The results showed that two scatter-update iterations and 10^5 simulated photons are enough for the Tc-99m and Tl-201 reconstructions, whereas 10^6 simulated photons are needed to generate good-quality down-scatter estimates. With these parameters, the entire Tl-201/Tc-99m dual isotope reconstruction can be accomplished in less than 3 minutes.
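The role of a down-scatter estimate in such a reconstruction can be sketched with a plain MLEM update in which the estimate enters the forward model additively, y ≈ Ax + s. This is a generic sketch under that assumption, not the paper's OS-EM/MC implementation:

```python
import numpy as np

def mlem_with_downscatter(A, y, scatter, n_iter=50):
    """MLEM with an additive (e.g. MC-simulated) down-scatter estimate s in
    the forward model: y ~ A x + s. Simplified illustrative sketch."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image
    for _ in range(n_iter):
        proj = A @ x + scatter                # forward project + add scatter
        x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens
    return x

# toy 2-pixel system: true activity [3, 4], down-scatter [1, 2]
A = np.eye(2)
x_true = np.array([3.0, 4.0])
s = np.array([1.0, 2.0])
y = A @ x_true + s
x = mlem_with_downscatter(A, y, s, n_iter=200)
```

With the scatter term in the forward model, the iteration recovers the true activity rather than attributing the crosstalk counts to the Tl-201 distribution.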
NASA Astrophysics Data System (ADS)
Ramamurthy, Senthil; D'Orsi, Carl J.; Sechopoulos, Ioannis
2016-02-01
A previously proposed x-ray scatter correction method for dedicated breast computed tomography was further developed and implemented so as to allow for initial patient testing. The method involves the acquisition of a complete second set of breast CT projections covering 360° with a perforated tungsten plate in the path of the x-ray beam. To make patient testing feasible, a wirelessly controlled electronic positioner for the tungsten plate was designed and added to a breast CT system. Other improvements to the algorithm were implemented, including automated exclusion of non-valid primary estimate points and the use of a different approximation method to estimate the full scatter signal. To evaluate the effectiveness of the algorithm, the resulting image quality was assessed with a breast phantom and with nine patient images. The improvements in the algorithm avoided the introduction of artifacts, especially at the object borders, which had been an issue in some cases in the previous implementation. Both contrast, in terms of signal difference, and signal difference-to-noise ratio were improved with the proposed method, unlike with the correction algorithm incorporated in the system, which does not recover contrast. Patient image evaluation also showed enhanced contrast, better cupping correction, and more consistent voxel values for the different tissues. The algorithm also reduces artifacts present in reconstructions of non-regularly shaped breasts. With the implemented hardware and software improvements, the proposed method can be reliably used during patient breast CT imaging, resulting in improved image quality, no introduction of artifacts, and in some cases reduction of artifacts already present. The impact of the algorithm on actual clinical performance for detection, diagnosis and other clinical tasks in breast imaging remains to be evaluated.
NASA Astrophysics Data System (ADS)
Meyer, Michael; Kalender, Willi A.; Kyriakou, Yiannis
2010-01-01
Scattered radiation is a major source of artifacts in flat detector computed tomography (FDCT) due to the increased irradiated volumes. We propose a fast projection-based algorithm for the correction of scatter artifacts. The presented algorithm combines a convolution method for determining the spatial distribution of the scatter intensity with an object-size-dependent scaling of that distribution, using a priori information generated by Monte Carlo simulations. A projection-based (PBSE) and an image-based (IBSE) strategy for size estimation of the scanned object are presented. Both strategies provide good correction and comparable results; the faster PBSE strategy is recommended. Even with such a fast and simple algorithm, which in the PBSE variant does not rely on reconstructed volumes or scatter measurements, it is possible to provide a reasonable scatter correction even for truncated scans. For both simulations and measurements, scatter artifacts were significantly reduced and the algorithm showed stable behavior in the z-direction. For simulated voxelized head, hip and thorax phantoms, a figure of merit Q of 0.82, 0.76 and 0.77 was reached, respectively (Q = 0 for uncorrected, Q = 1 for ideal). For a water phantom with 15 cm diameter, for example, cupping was reduced from 10.8% down to 2.1%. The performance of the correction method has limitations in the case of measurements using non-ideal detectors, intensity calibration, etc. An iterative approach to overcome most of these limitations is proposed; it is based on root finding of a cupping metric and may be useful for other scatter correction methods as well. By this optimization, cupping of the measured water phantom was further reduced to 0.9%. The algorithm was evaluated on a commercial system including truncated and non-homogeneous clinically relevant objects.
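The convolution part of such an algorithm can be sketched in one dimension: the scatter profile is modeled as the measured profile convolved with a broad kernel and scaled by an object-size-dependent factor. A minimal sketch in which the kernel and scale factor are illustrative placeholders for the Monte-Carlo-derived ones:

```python
import numpy as np

def convolution_scatter_correct(measured, kernel, size_scale):
    """1-D sketch of a convolution scatter model: scatter = scale * (measured
    convolved with a broad kernel); corrected = measured - scatter."""
    scatter = size_scale * np.convolve(measured, kernel, mode="same")
    return measured - scatter, scatter

# flat (uniform) profile, a small smoothing kernel, 10% scatter scaling
measured = np.ones(5)
kernel = np.array([0.25, 0.5, 0.25])
corrected, scatter = convolution_scatter_correct(measured, kernel, size_scale=0.1)
```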
TH-A-18C-04: Ultrafast Cone-Beam CT Scatter Correction with GPU-Based Monte Carlo Simulation
Xu, Y; Bai, T; Yan, H; Ouyang, L; Wang, J; Pompos, A; Jiang, S; Jia, X; Zhou, L
2014-06-15
Purpose: Scatter artifacts severely degrade the image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to finish the whole process, including both scatter correction and reconstruction, automatically within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using raw projection data; 2) rigid registration of the planning CT to the FDK results; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using the GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in simulation to further reduce computation time. A novel denoising algorithm is used to eliminate MC scatter noise caused by low photon numbers. The method is validated on head-and-neck cases with simulated and clinical data. Results: We studied the impact of the number of photon histories and the volume down-sampling factor on the accuracy of scatter estimation. A Fourier analysis showed that scatter images calculated at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated case with a resolution of 512×512×100, we simulated 10M photons per angle. The total computation time is 23.77 seconds on an Nvidia GTX Titan GPU. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Similar results were found for a real patient case. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. The whole process of scatter correction and reconstruction is accomplished within 30 seconds. This study is supported in part by NIH (1R01CA154747-01), The Core Technology Research
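Step 4 above, interpolating sparse-angle scatter estimates to all view angles, can be sketched as a periodic linear interpolation per detector pixel. This is an illustrative sketch only; the function name and array layout are assumptions:

```python
import numpy as np

def interpolate_scatter_over_angles(sparse_angles_deg, sparse_scatter, all_angles_deg):
    """Interpolate scatter maps computed at sparse view angles to every
    projection angle (linear per pixel, periodic over 360 degrees).

    sparse_scatter has shape (n_sparse_angles, n_pixels).
    """
    # append the first sample at +360 degrees so interpolation wraps around
    ang = np.concatenate([sparse_angles_deg, [sparse_angles_deg[0] + 360.0]])
    sc = np.concatenate([sparse_scatter, sparse_scatter[:1]], axis=0)
    out = np.empty((len(all_angles_deg), sc.shape[1]))
    for i, a in enumerate(np.mod(all_angles_deg, 360.0)):
        for j in range(sc.shape[1]):
            out[i, j] = np.interp(a, ang, sc[:, j])
    return out

# toy example: 2-pixel scatter maps known at 0 and 180 degrees
sparse_angles = np.array([0.0, 180.0])
sparse_scatter = np.array([[0.0, 0.0], [10.0, 20.0]])
full = interpolate_scatter_over_angles(sparse_angles, sparse_scatter,
                                       np.array([90.0, 270.0]))
```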
Noncommutative correction to Aharonov-Bohm scattering: A field theory approach
Anacleto, M.A.; Gomes, M.; Silva, A.J. da; Spehler, D.
2004-10-15
We study a noncommutative nonrelativistic theory in 2+1 dimensions of a scalar field coupled to the Chern-Simons field. In the commutative situation this model has been used to simulate the Aharonov-Bohm effect in the field theory context. We verified that, contrary to the commutative result, the inclusion of a quartic self-interaction of the scalar field is not necessary to secure the ultraviolet renormalizability of the model. However, to obtain a smooth commutative limit the presence of a quartic gauge invariant self-interaction is required. For small noncommutativity we fix the corrections to the Aharonov-Bohm scattering and prove that up to one loop the model is free from dangerous infrared/ultraviolet divergences.
Self-interaction correction in multiple scattering theory: application to transition metal oxides
Daene, Markus W; Lueders, Martin; Ernst, Arthur; Koedderitzsch, Diemo; Temmerman, Walter M; Szotek, Zdzislawa; Hergert, Wolfram
2009-01-01
We apply to transition metal monoxides the self-interaction corrected (SIC) local spin density (LSD) approximation, implemented locally in the multiple scattering theory within the Korringa-Kohn-Rostoker (KKR) band structure method. The calculated electronic structure and in particular magnetic moments and energy gaps are discussed in reference to the earlier SIC results obtained within the LMTO-ASA band structure method, involving transformations between Bloch and Wannier representations to solve the eigenvalue problem and calculate the SIC charge and potential. Since the KKR can be easily extended to treat disordered alloys, by invoking the coherent potential approximation (CPA), in this paper we compare the CPA approach and supercell calculations to study the electronic structure of NiO with cation vacancies.
NASA Astrophysics Data System (ADS)
Ryu, Y.; Kobayashi, H.; Welles, J.; Norman, J.
2011-12-01
Correct estimation of gap fraction is essential to quantify canopy architectural variables such as leaf area index and clumping index, which largely control land-atmosphere interactions. However, gap fraction measurements from optical sensors are contaminated by radiation scattered by the canopy and the ground surface. In this study, we propose a simple invertible bidirectional transmission model to remove scattering effects from gap fraction measurements. The model shows that 1) the scattering factor is highest where leaf area index is 1-2 in a non-clumped canopy, 2) the relative scattering factor (scattering factor/measured gap fraction) increases with leaf area index, 3) bright land surfaces (e.g. snow and bright soil) can contribute a significant scattering factor, and 4) the scattering factor is not marginal even under highly diffuse sky conditions. By applying the model to LAI-2200 data collected in an open savanna ecosystem, we find that the scattering factor causes significant underestimation of leaf area index (25%) and significant overestimation of clumping index (6%). The results highlight that some LAI-2000-based LAI estimates from around the world may be underestimated, particularly in highly clumped broad-leaf canopies. Fortunately, the importance of scattering can be assessed with software from LI-COR, Inc., which will incorporate the scattering model from this study in a post-processing mode after data have been collected by a LAI-2000 or LAI-2200.
Gearhart, A; Peterson, T; Johnson, L
2015-06-15
Purpose: To evaluate the impact of the exceptional energy resolution of germanium detectors for preclinical SPECT in comparison to conventional detectors. Methods: A cylindrical water phantom with a spherical Tc-99m source in the center was created in GATE. Sixty-four projections over 360 degrees using a pinhole collimator were simulated. The same phantom was simulated using air instead of water to establish the true reconstructed voxel intensity without attenuation. Attenuation correction based on the Chang method was performed on MLEM-reconstructed images from the water phantom to determine a quantitative measure of the effectiveness of the attenuation correction. Similarly, a NEMA phantom was simulated, and the effectiveness of the attenuation correction was evaluated. Both simulations were carried out using both NaI detectors with an energy resolution of 10% FWHM and Ge detectors with an energy resolution of 1%. Results: Analysis shows that attenuation correction without scatter correction using germanium detectors can reconstruct a small spherical source to within 3.5%. Scatter analysis showed that for standard-sized objects in a preclinical scanner, a NaI detector has a scatter-to-primary ratio between 7% and 12.5%, compared to between 0.8% and 1.5% for a Ge detector. Preliminary results from line profiles through the NEMA phantom suggest that applying attenuation correction without scatter correction provides acceptable results for the Ge detectors but overestimates the phantom activity using NaI detectors. Due to the decreased scatter, we believe that the spillover ratio for the air and water cylinders in the NEMA phantom will be lower using germanium detectors than NaI detectors. Conclusion: This work indicates that the superior energy resolution of germanium detectors allows fewer scattered photons to be included within the energy window compared to traditional SPECT detectors. This may allow for quantitative SPECT without implementing scatter correction.
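The first-order Chang method mentioned above assigns each voxel a correction factor equal to the reciprocal of the attenuation averaged over projection angles. A minimal sketch for a single voxel with known path lengths through a uniform attenuator (the helper is hypothetical, not the simulation code):

```python
import numpy as np

def chang_correction_factor(mu, path_lengths):
    """First-order Chang attenuation correction for one voxel: the factor is
    1 / mean_over_angles(exp(-mu * l)), where l is the path length from the
    voxel to the object boundary along each projection angle."""
    att = np.exp(-mu * np.asarray(path_lengths, dtype=float))
    return 1.0 / att.mean()

# voxel at the center of a uniform cylinder: equal path lengths all around
factor = chang_correction_factor(mu=0.1, path_lengths=[2.0, 2.0, 2.0, 2.0])
```

Applying the factor map voxel-wise to the uncorrected reconstruction yields the attenuation-corrected image.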
NASA Astrophysics Data System (ADS)
Chen, J.; Zebker, H. A.; Knight, R. J.
2015-12-01
InSAR is commonly used to measure surface deformation between different radar passes at cm-scale accuracy and m-scale resolution. However, InSAR measurements are often decorrelated due to vegetation growth, which greatly limits high-quality InSAR data coverage. Here we present an algorithm for retrieving InSAR deformation measurements over areas with significant vegetation decorrelation through adaptive interpolation between persistent scatterer (PS) pixels, those points at which surface scattering properties do not change much over time and thus decorrelation artifacts are minimal. The interpolation filter restores phase continuity in space and greatly reduces errors in phase unwrapping. We apply this algorithm to process L-band ALOS interferograms acquired over the San Luis Valley, Colorado and the Tulare Basin, California. In both areas, groundwater extraction for irrigation results in land deformation that can be detected using InSAR. We show that the PS-based algorithm reduces the artifacts from vegetation decorrelation while preserving the deformation signature. The spatial sampling resolution achieved over agricultural fields is on the order of hundreds of meters, usually sufficient for groundwater studies. The improved InSAR data further allow us to reconstruct the SBAS ground deformation time series and to transform the measured deformation to head levels using the skeletal storage coefficient and time delay constant inferred from a joint InSAR-well data analysis. The resulting InSAR-head and well-head measurements in the San Luis Valley show good agreement with primary confined aquifer pumping activities. This case study demonstrates that high-quality InSAR deformation data can be obtained over vegetation-decorrelated regions if processed correctly.
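The PS-based interpolation step can be sketched with a simple inverse-distance weighting of unwrapped PS phases onto the full grid. The paper's adaptive filter is more sophisticated, so treat this as an illustrative stand-in with assumed names and an assumed unwrapped-phase input:

```python
import numpy as np

def idw_interpolate_phase(ps_xy, ps_phase, grid_xy, power=2.0):
    """Inverse-distance-weighted interpolation of unwrapped phase from
    persistent-scatterer (PS) pixels onto arbitrary grid points."""
    # pairwise distances: (n_grid, n_ps)
    d = np.linalg.norm(grid_xy[:, None, :] - ps_xy[None, :, :], axis=-1)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    out = (w * ps_phase).sum(axis=1) / w.sum(axis=1)
    # grid points that coincide with a PS keep that PS phase exactly
    exact = d.min(axis=1) < 1e-9
    out[exact] = ps_phase[d.argmin(axis=1)[exact]]
    return out

# two PS pixels with phases 0 and 2 rad; query the midpoint and a PS location
ps_xy = np.array([[0.0, 0.0], [2.0, 0.0]])
ps_phase = np.array([0.0, 2.0])
grid = np.array([[1.0, 0.0], [0.0, 0.0]])
phase = idw_interpolate_phase(ps_xy, ps_phase, grid)
```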
NASA Astrophysics Data System (ADS)
Wang, Chao; Xiao, Jun; Luo, Xiaobing
2016-10-01
The neutron inelastic scattering cross section of 115In has been measured by the activation technique at neutron energies of 2.95, 3.94, and 5.24 MeV, with the neutron capture cross section of 197Au as an internal standard. The effects of multiple scattering and flux attenuation were corrected using the Monte Carlo code GEANT4. Based on the experimental values, the 115In neutron inelastic scattering cross sections were theoretically calculated between 1 and 15 MeV with the TALYS code; the theoretical results of this study are in reasonable agreement with the available experimental results.
Siewerdsen, J.H.; Daly, M.J.; Bakhtiar, B.
2006-01-15
X-ray scatter poses a significant limitation to image quality in cone-beam CT (CBCT), resulting in contrast reduction, image artifacts, and lack of CT number accuracy. We report the performance of a simple scatter correction method in which scatter fluence is estimated directly in each projection from pixel values near the edge of the detector behind the collimator leaves. The algorithm operates on the simple assumption that signal in the collimator shadow is attributable to x-ray scatter, and the 2D scatter fluence is estimated by interpolating between pixel values measured along the top and bottom edges of the detector behind the collimator leaves. The resulting scatter fluence estimate is subtracted from each projection to yield an estimate of the primary-only images for CBCT reconstruction. Performance was investigated in phantom experiments on an experimental CBCT benchtop, and the effect on image quality was demonstrated in patient images (head, abdomen, and pelvis sites) obtained on a preclinical system for CBCT-guided radiation therapy. The algorithm provides significant reduction in scatter artifacts without compromise in contrast-to-noise ratio (CNR). For example, in a head phantom, cupping artifact was essentially eliminated, CT number accuracy was restored to within 3%, and CNR (breast-to-water) was improved by up to 50%. Similarly in a body phantom, cupping artifact was reduced by at least a factor of 2 without loss in CNR. Patient images demonstrate significantly increased uniformity, accuracy, and contrast, with an overall improvement in image quality in all sites investigated. Qualitative evaluation illustrates that soft-tissue structures that are otherwise undetectable are clearly delineated in scatter-corrected reconstructions. Since scatter is estimated directly in each projection, the algorithm is robust with respect to system geometry, patient size and heterogeneity, patient motion, etc. Operating without prior information, analytical modeling
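The core of the method, treating detector rows behind the collimator leaves as scatter-only samples and interpolating between them across the field of view, can be sketched as follows (a simplified stand-in for the reported algorithm; the edge-row averaging is an assumption):

```python
import numpy as np

def collimator_shadow_scatter_correct(proj, n_edge=4):
    """Estimate scatter from rows in the collimator shadow at the top and
    bottom of the detector (assumed scatter-only), interpolate linearly
    between them down each column, and subtract from the projection."""
    top = proj[:n_edge].mean(axis=0)        # scatter sample at top edge
    bot = proj[-n_edge:].mean(axis=0)       # scatter sample at bottom edge
    t = np.linspace(0.0, 1.0, proj.shape[0])[:, None]
    scatter = (1 - t) * top + t * bot       # linear interpolation per column
    return np.clip(proj - scatter, 0.0, None)

# toy projection: scatter-only background of 2, primary of 5 in central rows
proj = np.full((10, 3), 2.0)
proj[4:6] += 5.0
corr = collimator_shadow_scatter_correct(proj, n_edge=4)
```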
NASA Astrophysics Data System (ADS)
Dinten, Jean-Marc; Darboux, Michel; Bordy, Thomas; Robert-Coutant, Christine; Gonon, Georges
2004-05-01
At CEA-LETI, a DEXA approach for systems using a digital 2D radiographic detector has been developed. It relies on an original X-ray scatter management method based on the combined use of an analytical model and scatter calibration data acquired through different thicknesses of Lucite slabs. Since the X-ray interaction properties of Lucite are equivalent to those of fat, the approach yields a scatter flux map representative of a 100% fat region. However, patients' soft tissues are composed of both lean and fat tissue, so the scatter map has to be refined to account for the range of fat ratios that patients can present. This refinement consists of establishing a formula relating the fat ratio to the thicknesses of low- and high-energy Lucite slabs that produce the same signal level. This proportion is then used to compute, on the basis of X-ray/matter interaction equations, correction factors to apply to the Lucite-equivalent X-ray scatter map. The influence of the fat-ratio correction has been evaluated on a digital 2D bone densitometer, with phantoms composed of a PVC step (simulating bone) and different Lucite/water thicknesses, as well as on patients. The results show that our X-ray scatter determination approach can account for variations in body composition.
NASA Astrophysics Data System (ADS)
Rinkel, Jean; Gerfault, Laurent; Estève, François; Dinten, Jean-Marc
2006-03-01
Cone-beam computed tomography (CBCT) enables three-dimensional imaging with isotropic resolution. X-ray scatter estimation is a major challenge for quantitative CBCT imaging of the thorax: the scatter level is significantly higher on cone-beam systems than on collimated fan-beam systems. The effects of this scattered radiation are cupping artifacts, streaks, and quantification inaccuracies. In this paper, an original scatter management process for tomographic projections that requires no supplementary on-line acquisitions is presented. The correction method is based on scatter calibration through off-line acquisitions, combined with an on-line analytical transformation, derived from physical equations, that adapts the calibration to the observed object. The method was evaluated on an anthropomorphic thorax phantom. First, tomographic acquisitions were performed with a flat-panel detector, and the volume reconstructed with the proposed scatter correction was compared with one obtained using a classical beam-stop method. Second, the reconstructed volume was compared with one obtained on a fan-beam system (a Philips multi-slice CT scanner). The new method agreed well with both the beam-stop approach and the multi-slice CT scanner, suppressing cupping artifacts and significantly improving quantification. Compared with the beam-stop method, it required a lower X-ray dose (reduced by a factor of 9) and shorter acquisition times.
Sun, Fang; Ella-Menye, Jean-Rene; Galvan, Daniel David; Bai, Tao; Hung, Hsiang-Chieh; Chou, Ying-Nien; Zhang, Peng; Jiang, Shaoyi; Yu, Qiuming
2015-03-24
Reliable surface-enhanced Raman scattering (SERS) based biosensing in complex media is impeded by nonspecific protein adsorption. Because of the near-field effect of SERS, it is challenging to modify SERS-active substrates with conventional nonfouling materials without introducing interference from their SERS signals. Herein, we report a stealth surface modification strategy for sensitive, specific, and accurate detection of fructose in protein solutions using SERS by forming a mixed self-assembled monolayer (SAM). The SAM consists of a short zwitterionic thiol, N,N-dimethyl-cysteamine-carboxybetaine (CBT), and a fructose probe, 4-mercaptophenylboronic acid (4-MPBA). The specifically designed and synthesized CBT not only resists protein fouling effectively but also has very weak Raman activity compared to 4-MPBA. Thus, the CBT SAM provides a stealth surface modification for SERS-active substrates. The surface compositions of the mixed SAMs were investigated using X-ray photoelectron spectroscopy (XPS) and SERS, and their nonfouling properties were studied with a surface plasmon resonance (SPR) biosensor. The mixed SAM with a surface composition of 94% CBT demonstrated very low bovine serum albumin (BSA) adsorption (∼3 ng/cm²), and only the 4-MPBA signal appeared in the SERS spectrum. Using this surface-modified SERS-active substrate, quantification of fructose over clinically relevant concentrations (0.01-1 mM) was achieved. Partial least-squares (PLS) regression analysis showed that detection sensitivity and accuracy were maintained for measurements in 1 mg/mL BSA solutions. This stealth surface modification strategy provides a novel route to impart nonfouling properties to SERS-active substrates for SERS biosensing in complex media.
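The PLS regression step named above can be illustrated with a minimal NIPALS PLS1 fit in Python; this is a generic sketch of the technique, not the authors' code, and the "spectra" and "concentrations" below are synthetic stand-ins:

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """Minimal NIPALS PLS1: X holds spectra (one row per sample),
    y the known analyte concentrations. Returns the regression
    vector b and the training means for later prediction."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        nw = np.linalg.norm(w)
        if nw < 1e-12:               # y-residual exhausted; stop early
            break
        w /= nw
        t = Xc @ w                   # scores
        tt = t @ t
        p = Xc.T @ t / tt            # X loadings
        q = (yc @ t) / tt            # y loading
        Xc = Xc - np.outer(t, p)     # deflate
        yc = yc - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    b = W @ np.linalg.solve(P.T @ W, Q)
    return b, x_mean, y_mean

def pls1_predict(X, b, x_mean, y_mean):
    return (X - x_mean) @ b + y_mean

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 6))                      # synthetic "spectra"
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0, 1.5]) + 4.0
b, xm, ym = pls1_fit(X, y, n_comp=6)
pred = pls1_predict(X, b, xm, ym)
```

In practice the number of components would be chosen by cross-validation rather than set to the full feature count as in this toy example.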
NASA Astrophysics Data System (ADS)
Sramek, Benjamin Koerner
The ability to deliver conformal dose distributions in radiation therapy through intensity modulation and the potential for tumor dose escalation to improve treatment outcome has necessitated an increase in localization accuracy of inter- and intra-fractional patient geometry. Megavoltage cone-beam CT imaging using the treatment beam and onboard electronic portal imaging device is one option currently being studied for implementation in image-guided radiation therapy. However, routine clinical use is predicated upon continued improvements in image quality and patient dose delivered during acquisition. The formal statement of hypothesis for this investigation was that the conformity of planned to delivered dose distributions in image-guided radiation therapy could be further enhanced through the application of kilovoltage scatter correction and intermediate view estimation techniques to megavoltage cone-beam CT imaging, and that normalized dose measurements could be acquired and inter-compared between multiple imaging geometries. The specific aims of this investigation were to: (1) incorporate the Feldkamp, Davis and Kress filtered backprojection algorithm into a program to reconstruct a voxelized linear attenuation coefficient dataset from a set of acquired megavoltage cone-beam CT projections, (2) characterize the effects on megavoltage cone-beam CT image quality resulting from the application of Intermediate View Interpolation and Intermediate View Reprojection techniques to limited-projection datasets, (3) incorporate the Scatter and Primary Estimation from Collimator Shadows (SPECS) algorithm into megavoltage cone-beam CT image reconstruction and determine the set of SPECS parameters which maximize image quality and quantitative accuracy, and (4) evaluate the normalized axial dose distributions received during megavoltage cone-beam CT image acquisition using radiochromic film and thermoluminescent dosimeter measurements in anthropomorphic pelvic and head and
NASA Astrophysics Data System (ADS)
Juste, B.; Miró, R.; Verdú, G.; Santos, A.
2014-06-01
This work presents a methodology to reconstruct the high-energy photon spectrum of a linac beam. The method is based on EPID scatter images generated when the incident photon beam impinges on a plastic block. The distribution of scatter radiation produced by this scattering object, placed on the external EPID surface and centered in the beam field, was measured. The scatter distribution was also simulated for a series of monoenergetic photon beams in identical geometry. Monte Carlo simulations were used to predict the scattered photons for monoenergetic photon beams at 92 different locations, in 0.5 cm increments, at 8.5 cm from the centre of the scattering material. Measurements were performed with the same geometry using a 6 MeV photon beam produced by the linear accelerator. A system of linear equations was generated to combine the polyenergetic EPID measurements with the monoenergetic simulation results, and regularization techniques were applied to solve it for the incident photon spectrum. The linear matrix system A×S=E describes the scattering interactions and their relationship to the primary spectrum: A is the monoenergetic scatter matrix determined from the Monte Carlo simulations, S is the incident photon spectrum, and E represents the scatter distribution characterized by the EPID measurement. Direct matrix inversion produces results that are not physically consistent because of errors inherent in the system; Tikhonov regularization methods were therefore applied to address the effects of these errors and to solve the system for a consistent bremsstrahlung spectrum.
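The regularized inversion of A×S=E can be sketched in a few lines; the matrix, spectrum, and damping parameter below are synthetic stand-ins for the Monte Carlo scatter matrix, the unknown spectrum, and the EPID measurement:

```python
import numpy as np

def tikhonov_solve(A, e, lam):
    """Tikhonov-regularized solution of A s = e: minimizes
    ||A s - e||^2 + lam^2 ||s||^2 via the normal equations."""
    n = A.shape[1]
    s = np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ e)
    return np.clip(s, 0.0, None)   # a photon spectrum cannot be negative

rng = np.random.default_rng(1)
A = rng.uniform(0.5, 1.5, size=(92, 10))  # 92 locations x 10 energy bins
s_true = rng.uniform(0.0, 1.0, size=10)
e = A @ s_true                            # noise-free "measurement"
s_est = tikhonov_solve(A, e, lam=1e-6)
```

With noisy data, `lam` trades fidelity against smoothness and is typically chosen by an L-curve or discrepancy criterion; the near-zero value here merely shows that the regularized solution reduces to the least-squares one for clean data.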
Modulator design for x-ray scatter correction using primary modulation: Material selection
Gao, Hewei; Zhu, Lei; Fahrig, Rebecca
2010-01-01
Purpose: An optimal material selection for primary modulator is proposed in order to minimize beam hardening of the modulator in x-ray cone-beam computed tomography (CBCT). Recently, a measurement-based scatter correction method using primary modulation has been developed and experimentally verified. In the practical implementation, beam hardening of the modulator blocker is a limiting factor because it causes inconsistency in the primary signal and therefore degrades the accuracy of scatter correction. Methods: This inconsistency can be purposely assigned to the effective transmission factor of the modulator whose variation as a function of object filtration represents the magnitude of beam hardening of the modulator. In this work, the authors show that the variation reaches a minimum when the K-edge of the modulator material is near the mean energy of the system spectrum. Accordingly, an optimal material selection can be carried out in three steps. First, estimate and evaluate the polychromatic spectrum for a given x-ray system including both source and detector; second, calculate the mean energy of the spectrum and decide the candidate materials whose K-edge energies are near the mean energy; third, select the optimal material from the candidates after considering both the magnitude of beam hardening and the physical and chemical properties. Results: A tabletop x-ray CBCT system operated at 120 kVp is used to validate the material selection method in both simulations and experiments, from which the optimal material for this x-ray system is then chosen. With the transmission factor initially being 0.905 and 0.818, simulations show that erbium provides the least amount of variation as a function of object filtrations (maximum variations are 2.2% and 4.3%, respectively, only one-third of that for copper). With different combinations of aluminum and copper filtrations (simulating a range of object thicknesses), measured overall variations are 2.5%, 1.0%, and 8
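Step 2 of this procedure, computing the spectrum's mean energy and shortlisting materials by K-edge proximity, can be sketched as follows (the K-edge table holds approximate tabulated values, and the Gaussian-shaped spectrum is purely illustrative):

```python
import numpy as np

# Approximate K-edge energies in keV for a few candidate materials.
K_EDGES = {"copper": 8.98, "molybdenum": 20.0, "silver": 25.5,
           "gadolinium": 50.2, "erbium": 57.5, "tungsten": 69.5}

def mean_energy(energies, fluence):
    """Fluence-weighted mean energy of a sampled spectrum."""
    return float(np.sum(energies * fluence) / np.sum(fluence))

def candidate_materials(energies, fluence, window=10.0):
    """Materials whose K-edge lies within `window` keV of the mean
    energy, sorted by proximity (closest first)."""
    e_mean = mean_energy(energies, fluence)
    return sorted((abs(edge - e_mean), name)
                  for name, edge in K_EDGES.items()
                  if abs(edge - e_mean) <= window)

# Illustrative detected spectrum for a 120 kVp system (mean ~60 keV).
energies = np.linspace(10.0, 110.0, 101)
fluence = np.exp(-((energies - 60.0) / 25.0) ** 2)
shortlist = candidate_materials(energies, fluence)
```

For a 120 kVp system with a mean energy near 60 keV, erbium's K-edge (~57.5 keV) sits closest, consistent with the material the authors ultimately selected; step 3 would then weigh the shortlisted candidates' physical and chemical properties.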
2015-11-01
In the article by Heuslein et al, which was published online ahead of print on September 3, 2015 (DOI: 10.1161/ATVBAHA.115.305775), a correction was needed: Brett R. Blackman was added as the penultimate author of the article. The article has been corrected for publication in the November 2015 issue. PMID:26490278
Algorithm for x-ray beam hardening and scatter correction in low-dose cone-beam CT: phantom studies
NASA Astrophysics Data System (ADS)
Liu, Wenlei; Rong, Junyan; Gao, Peng; Liao, Qimei; Lu, HongBing
2016-03-01
X-ray scatter, as well as beam hardening, poses a significant limitation to image quality in cone-beam CT (CBCT), resulting in image artifacts, contrast reduction, and a lack of CT number accuracy. Meanwhile, the x-ray radiation dose is also non-negligible. Numerous scatter and beam-hardening correction methods have been developed independently, but they are rarely combined with low-dose CT reconstruction. In this paper, we combine scatter suppression with beam-hardening correction for sparse-view CT reconstruction to improve image quality and reduce radiation. First, scatter was measured, estimated, and removed using measurement-based methods, assuming that the signal in the lead-blocker shadow is attributable solely to x-ray scatter. Second, beam hardening was modeled by estimating an equivalent attenuation coefficient at the effective energy, which was integrated into the forward projector of the algebraic reconstruction technique (ART). Finally, compressed-sensing (CS) iterative reconstruction was carried out for sparse-view CT to reduce the radiation dose. Preliminary Monte Carlo simulation experiments indicate that, with only about 25% of the conventional dose, our method reduces the magnitude of the cupping artifact by a factor of 6.1, increases contrast by a factor of 1.4, and increases the CNR by a factor of 15. The proposed method provides good reconstructions from a few projection views, effectively suppressing artifacts caused by scatter and beam hardening while reducing the radiation dose, and may thus offer a new route to low-dose CT imaging.
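One simple way to realize an equivalent-attenuation-coefficient model of beam hardening (a generic linearization sketch, not the authors' exact forward-projector integration) is to map polychromatic line integrals onto the ideal monochromatic ones at the effective energy; the two-energy spectrum below is a toy stand-in:

```python
import numpy as np

# Toy two-energy spectrum: weights and water attenuation coefficients
# (illustrative numbers in 1/cm, not measured data).
WEIGHTS = np.array([0.6, 0.4])
MU = np.array([0.30, 0.18])

def poly_projection(thickness):
    """Polychromatic line integral -log(sum_i w_i exp(-mu_i * t))."""
    t = np.asarray(thickness, dtype=float)
    return -np.log(np.sum(WEIGHTS * np.exp(-MU * t[..., None]), axis=-1))

def linearization_coeffs(mu_eff=0.25, t_max=40.0, deg=5):
    """Fit a polynomial mapping polychromatic line integrals onto the
    monochromatic ones (mu_eff * t) at the effective energy."""
    t = np.linspace(0.0, t_max, 200)
    return np.polyfit(poly_projection(t), mu_eff * t, deg)

coeffs = linearization_coeffs()
t_test = np.array([5.0, 20.0, 35.0])
corrected = np.polyval(coeffs, poly_projection(t_test))
```

After this mapping, the measured data behave like monochromatic line integrals at the effective energy, which is what an ART forward projector built from a single equivalent attenuation coefficient assumes.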
Chen, Zeng-Ping; Morris, Julian; Martin, Elaine
2006-11-15
When analyzing complex mixtures that exhibit sample-to-sample variability using spectroscopic instrumentation, variation in the optical path length, resulting from the physical variations inherent in the individual samples, produces significant multiplicative light scattering perturbations. Although a number of algorithms have been proposed to address the effect of multiplicative light scattering, each carries underlying assumptions that require additional information about the spectra to be obtained. This information is difficult to obtain in practice and frequently is not available. Thus, to remove the need for such additional information, a new algorithm, optical path-length estimation and correction (OPLEC), is proposed. The methodology is applied to two near-infrared transmittance spectral data sets (powder mixture data and wheat kernel data), and the results are compared with the extended multiplicative signal correction (EMSC) and extended inverted signal correction (EISC) algorithms. Within the study, it is concluded that the EMSC algorithm cannot be applied to the wheat kernel data set because information essential to its implementation was not available, while analysis of the powder mixture data using EISC led to incorrect conclusions and hence a calibration model with unacceptable performance. In contrast, OPLEC effectively mitigated the detrimental effects of physical light scattering and significantly improved the prediction accuracy of the calibration models for the two spectral data sets investigated, without requiring any additional information about the calibration samples.
NASA Astrophysics Data System (ADS)
Liu, Miao; Yin, Shibin; Yang, Shourui; Zhang, Zonghua
2015-10-01
Digital projectors are frequently used to generate fringe patterns in phase-calculation-based three-dimensional (3D) imaging systems. Because the projector works together with a camera in such systems, its intensity response should be linear to ensure measurement precision, especially in phase-measuring profilometry (PMP). Correction methods are therefore applied to cope with the nonlinear intensity response of the projector. These methods usually rely on the camera, with a gamma function used to compensate the nonlinear response, so the correction performance is restricted by the dynamic range of the camera. In addition, a gamma function cannot compensate a nonmonotonic intensity response. This paper proposes a gamma correction method that precisely measures the optical output directly instead of using a plate and camera. A photodiode with high dynamic range and linear response directly captures the light output of the projector. After the real gamma curve is measured precisely with the photodiode, a gray-level look-up table (LUT) is generated to correct the image to be projected. The proposed method is verified experimentally.
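For a monotonic response, the LUT-generation step can be sketched by inverse interpolation of the measured curve; this is a minimal illustration in which a simulated gamma-2.2 curve stands in for the photodiode measurement:

```python
import numpy as np

def build_correction_lut(measured, levels=256):
    """Build a gray-level LUT that pre-distorts input images so the
    projected intensity becomes linear, given the photodiode-measured
    output for each input gray level (monotonic response assumed)."""
    measured = np.asarray(measured, dtype=float)
    measured = (measured - measured.min()) / (measured.max() - measured.min())
    target = np.linspace(0.0, 1.0, levels)   # desired linear response
    # Inverse interpolation: which input level yields each target output?
    lut = np.interp(target, measured, np.arange(levels))
    return np.round(lut).astype(np.uint8)

# Simulated gamma-2.2 response standing in for the photodiode measurement.
gray = np.arange(256)
response = (gray / 255.0) ** 2.2
lut = build_correction_lut(response)
linearized = (lut / 255.0) ** 2.2            # response after correction
```

Because the LUT is built directly from the measured curve rather than from a fitted gamma function, the same table-lookup idea extends to response shapes a single gamma exponent cannot describe.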
Karton, A.; Martin, J. M. L.; Ruscic, B.; Chemistry; Weizmann Institute of Science
2007-06-01
A benchmark calculation of the atomization energy of the 'simple' organic molecule C2H6 (ethane) has been carried out by means of W4 theory. While the molecule is straightforward in terms of one-particle and n-particle basis set convergence, its large zero-point vibrational energy (and anharmonic correction thereto) and nontrivial diagonal Born-Oppenheimer correction (DBOC) represent interesting challenges. For the W4 set of molecules and C2H6, we show that DBOCs to the total atomization energy are systematically overestimated at the SCF level, and that the correlation correction converges very rapidly with the basis set. Thus, even at the CISD/cc-pVDZ level, useful correlation corrections to the DBOC are obtained. When applying such a correction, overall agreement with experiment was only marginally improved, but a more significant improvement is seen when hydrogen-containing systems are considered in isolation. We conclude that for closed-shell organic molecules, the greatest obstacles to highly accurate computational thermochemistry may not lie in the solution of the clamped-nuclei Schroedinger equation, but rather in the zero-point vibrational energy and the diagonal Born-Oppenheimer correction.
NASA Astrophysics Data System (ADS)
Yuan, Hong-Lin; Gao, Shan; Zong, Chun-Lei; Dai, Meng-Ning
2009-11-01
In this study, we employ a sectional power-law (SPL) correction that provides accurate and precise measurements of 176Lu/175Lu ratios in geological samples using multiple-collector inductively coupled plasma mass spectrometry (MC-ICP-MS). Three independent power laws were adopted based on the 176Lu/176Yb ratios of samples measured after chemical chromatography. Using isotope dilution (ID) techniques and the SPL correction method, the measured lutetium contents of United States Geological Survey rock standards (BHVO-1, BHVO-2, BCR-2, AGV-1, and G-2) agree well with the recommended values. Results obtained by conventional ICP-MS and INAA are generally higher than those obtained by ID-TIMS and ID-MC-ICP-MS; this discrepancy probably reflects oxide interference and inaccurate corrections.
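The underlying power-law mass-bias step can be sketched in its generic single-section form (the SPL method applies separately fitted laws over different 176Lu/176Yb ranges; the exponent and ratio values below are illustrative, not measured):

```python
from math import log

def power_law_beta(r_meas, r_true_ref, m_num, m_den):
    """Mass-bias exponent beta from a reference ratio of known value."""
    return log(r_true_ref / r_meas) / log(m_num / m_den)

def correct_ratio(r_meas, beta, m_num, m_den):
    """Power-law correction: R_true = R_meas * (m_num / m_den)**beta."""
    return r_meas * (m_num / m_den) ** beta

# Round-trip with an assumed exponent and approximate isotope masses.
m176, m175 = 175.9427, 174.9408
beta_true = 1.8
r_true = 0.02656                    # illustrative 176Lu/175Lu value
r_meas = r_true / (m176 / m175) ** beta_true
beta = power_law_beta(r_meas, r_true, m176, m175)
```

In the sectional scheme, `beta` would be determined independently for each interference regime and applied only to ratios measured within that section.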
NASA Astrophysics Data System (ADS)
Sun, Yuansheng; Periasamy, Ammasi
2010-03-01
Förster resonance energy transfer (FRET) microscopy is commonly used to monitor protein interactions with filter-based imaging systems, which require spectral bleedthrough (or cross-talk) correction to accurately measure energy transfer efficiency (E). The double-label (donor+acceptor) specimen is excited at the donor wavelength; the acceptor emission provides the uncorrected FRET signal, and the donor emission (the donor channel) represents the quenched donor (qD), the basis for the E calculation. Our results indicate this is not the most accurate determination of the quenched donor signal, as it fails to consider the donor spectral bleedthrough (DSBT) signals in the qD used for the E calculation; our new model addresses this, leading to a more accurate E result. This refinement improves E comparisons made with lifetime and spectral FRET imaging microscopy, as shown here using several genetic (FRET standard) constructs in which Cerulean and Venus fluorescent proteins are tethered by different amino acid linkers.
TU-F-18C-03: X-Ray Scatter Correction in Breast CT: Advances and Patient Testing
Ramamurthy, S; Sechopoulos, I
2014-06-15
Purpose: To further develop and perform patient testing of an x-ray scatter correction algorithm for dedicated breast computed tomography (BCT). Methods: A previously proposed algorithm for x-ray scatter signal reduction in BCT imaging was modified and tested with a phantom and on patients. A wireless electronic positioner system was designed and added to the BCT system to move a tungsten plate into and out of the x-ray beam. The interpolation used by the algorithm was replaced with a radial basis function-based algorithm, with automated exclusion of invalid sampled points due to patient motion or other factors. A 3D adaptive noise reduction filter was also introduced to reduce the impact of scatter quantum noise post-reconstruction. The impact of the improved algorithm on image quality was evaluated using a breast phantom and seven patient breasts, using quantitative metrics such as signal difference (SD) and signal-difference-to-noise ratio (SDNR) and qualitatively using image profiles. Results: The improvements resulted in a more robust interpolation step, with no introduction of image artifacts, especially at the imaged-object boundaries, which was an issue in the previous implementation. Qualitative evaluation of the reconstructed slices and corresponding profiles shows excellent homogeneity of both the background and the higher-density features throughout the imaged object, as well as increased accuracy in the Hounsfield unit (HU) values of the tissues. Profiles also demonstrate a substantial increase in both SD and SDNR between glandular and adipose regions compared to both the uncorrected and system-corrected images. Conclusion: The improved scatter correction algorithm can be used reliably during patient BCT acquisitions with no introduction of artifacts, resulting in substantial improvement in image quality. Its impact on actual clinical performance needs to be evaluated in the future. Research Agreement, Koning Corp., Hologic
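The radial-basis-function interpolation with exclusion of invalid samples can be sketched with a small Gaussian-RBF solver (a generic stand-in for the algorithm's interpolator; the sample grid, kernel width, and validity mask below are illustrative):

```python
import numpy as np

def rbf_scatter_surface(pts, vals, grid_shape, eps=0.02, valid=None):
    """Fit a Gaussian RBF surface to scatter samples taken in the
    tungsten-plate shadow, dropping samples flagged invalid (e.g.
    corrupted by patient motion), and evaluate on the full detector."""
    if valid is not None:
        pts, vals = pts[valid], vals[valid]
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * d2)
    # Small ridge term keeps the solve stable for clustered samples.
    w = np.linalg.solve(kernel(pts, pts) + 1e-8 * np.eye(len(pts)), vals)
    ys, xs = np.mgrid[0:grid_shape[0], 0:grid_shape[1]]
    grid = np.stack([ys.ravel(), xs.ravel()], 1).astype(float)
    return (kernel(grid, pts) @ w).reshape(grid_shape)

# Scatter samples on an 8-pixel grid of a 32x32 detector patch.
yy, xx = np.mgrid[0:32:8, 0:32:8]
pts = np.stack([yy.ravel(), xx.ravel()], 1).astype(float)
vals = 100.0 + 0.5 * pts[:, 0] + 0.25 * pts[:, 1]  # smooth "scatter"
valid = np.ones(len(pts), dtype=bool)               # all samples kept
surface = rbf_scatter_surface(pts, vals, (32, 32), valid=valid)
```

Setting entries of `valid` to False simply removes those samples from the fit, so motion-corrupted points degrade the surface gracefully instead of propagating artifacts.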
Nguyen, Hung T.; Pabit, Suzette A.; Meisburger, Steve P.; Pollack, Lois; Case, David A.
2014-12-14
A new method is introduced to compute X-ray solution scattering profiles from atomic models of macromolecules. The three-dimensional version of the Reference Interaction Site Model (RISM) from liquid-state statistical mechanics is employed to compute the solvent distribution around the solute, including both water and ions. X-ray scattering profiles are computed from this distribution together with the solute geometry. We describe an efficient procedure for performing this calculation employing a Lebedev grid for the angular averaging. The intensity profiles (which involve no adjustable parameters) match experiment and molecular dynamics simulations up to wide angle for two proteins (lysozyme and myoglobin) in water, as well as the small-angle profiles for a dozen biomolecules taken from the BioIsis.net database. The RISM model is especially well-suited for studies of nucleic acids in salt solution. Use of fiber-diffraction models for the structure of duplex DNA in solution yields close agreement with the observed scattering profiles in both the small and wide angle scattering (SAXS and WAXS) regimes. In addition, computed profiles of anomalous SAXS signals (for Rb+ and Sr2+) emphasize the ionic contribution to scattering and are in reasonable agreement with experiment. In cases where an absolute calibration of the experimental data at q = 0 is available, one can extract a count of the excess number of waters and ions; computed values depend on the closure that is assumed in the solution of the Ornstein-Zernike equations, with results from the Kovalenko-Hirata closure being closest to experiment for the cases studied here.
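For the orientational averaging step, a compact alternative to a Lebedev angular grid (useful as a cross-check) is the closed-form Debye formula; the coordinates and form factors below are arbitrary illustrative values, and the q-dependence of the form factors is ignored:

```python
import numpy as np

def debye_profile(coords, f, q):
    """Orientationally averaged intensity via the Debye formula,
    I(q) = sum_ij f_i f_j sin(q r_ij) / (q r_ij)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    ff = np.outer(f, f)
    # np.sinc(x) = sin(pi x)/(pi x), so rescale the argument by 1/pi.
    return np.array([np.sum(ff * np.sinc(qk * d / np.pi)) for qk in q])

coords = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0], [0.0, 3.0, 0.0]])
f = np.array([6.0, 8.0, 8.0])        # illustrative form factors
q = np.array([0.0, 0.5])             # scattering vector magnitudes
profile = debye_profile(coords, f, q)
```

The Debye sum scales quadratically with the number of sites, which is why grid-based angular quadrature becomes preferable once the solvent distribution, rather than a discrete atom list, carries the scattering density.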
NASA Technical Reports Server (NTRS)
Green, Sheldon; Boissoles, J.; Boulet, C.
1988-01-01
The first accurate theoretical values for off-diagonal (i.e., line-coupling) pressure-broadening cross sections are presented. Calculations were done for CO perturbed by He at thermal collision energies using an accurate ab initio potential energy surface. Converged close-coupling values, i.e., numerically exact results, were obtained for coupling to the R(0) and R(2) lines. These were used to test the coupled states (CS) and infinite order sudden (IOS) approximate scattering methods. CS was found to be of quantitative accuracy (a few percent) and has been used to obtain coupling values for lines up to R(10). IOS values are less accurate but, owing to their simplicity, may nonetheless prove useful, as has recently been demonstrated.
NASA Astrophysics Data System (ADS)
Liu, Zhangweiyi; Wang, Xiaocheng; Sun, Dongning; Dong, Yi; Hu, Weisheng
2015-08-01
We have demonstrated optical generation and distribution of a highly stable millimeter-wave signal, transferring a 300 GHz signal to two remote ends over different optical fiber links for signal stability comparison. The transmission delay variations of each fiber link caused by temperature and mechanical perturbations are compensated by a high-precision phase-correction system. The residual phase noise between the two remote-end signals is detected by dual-heterodyne phase-error transfer and reaches -46 dBc/Hz at 1 Hz offset from the carrier. The relative instability is 8×10^-17 at an averaging time of 1000 s.
NASA Astrophysics Data System (ADS)
Tranchida, Davide; Piccarolo, Stefano; Loos, Joachim; Alexeev, Alexander
2006-10-01
The Oliver and Pharr [J. Mater. Res. 7, 1564 (1992)] procedure is a widely used tool to analyze nanoindentation force curves obtained on metals or ceramics. Its application to polymers is, however, difficult, as Young's moduli are commonly overestimated, mainly because of viscoelastic effects and pileup. In this work, polymers spanning a large range of morphologies have been used to introduce a phenomenological correction factor. The factor depends on indenter geometry: sets of calibration indentations must therefore be performed on polymers with known elastic moduli to characterize each indenter.
Fortmann, Carsten; Wierling, August; Roepke, Gerd
2010-02-15
The dynamic structure factor, which determines the Thomson scattering spectrum, is calculated via an extended Mermin approach. It incorporates the dynamical collision frequency as well as the local-field correction factor. This makes it possible to study systematically the impact of electron-ion collisions, as well as of electron-electron correlations due to degeneracy and short-range interaction, on the characteristics of the Thomson scattering signal. As an application, the plasmon dispersion and damping width are calculated for a two-component plasma in which the electron subsystem is completely degenerate. Strong deviations of the plasmon resonance position due to electron-electron correlations are observed at increasing Brueckner parameter r_s. These results are of paramount importance for the interpretation of collective Thomson scattering spectra, as determining the free electron density from the plasmon resonance position requires a precise theory of the plasmon dispersion. Implications of different approximations for the electron-electron correlation, i.e., different forms of the one-component local-field correction, are discussed.
Bigdeli, T. Bernard; Lee, Donghyung; Webb, Bradley Todd; Riley, Brien P.; Vladimirov, Vladimir I.; Fanous, Ayman H.; Kendler, Kenneth S.; Bacanu, Silviu-Alin
2016-01-01
Motivation: For genetic studies, statistically significant variants explain far less trait variance than 'sub-threshold' association signals. To dimension follow-up studies, researchers need to accurately estimate 'true' effect sizes at each SNP, e.g. the true mean of odds ratios (ORs)/regression coefficients (RRs) or Z-score noncentralities. Naïve estimates of effect sizes incur winner's curse biases, which are reduced only by laborious winner's curse adjustments (WCAs). Given that Z-score estimates can be theoretically translated to other scales, we propose a simple method to compute WCA for Z-scores, i.e. their true means/noncentralities. Results: WCA of Z-scores shrinks them toward zero while, on the P-value scale, multiple testing adjustment (MTA) shrinks P-values toward one, which corresponds to a zero Z-score. Thus, WCA on the Z-score scale is a proxy for MTA on the P-value scale. Therefore, to estimate Z-score noncentralities for all SNPs in genome scans, we propose the FDR Inverse Quantile Transformation (FIQT). It (i) performs the simpler MTA of P-values using FDR and (ii) obtains noncentralities by back-transforming MTA P-values to the Z-score scale. Realistic simulations suggest that, compared to competitors, FIQT is (i) more accurate and (ii) computationally more efficient by orders of magnitude. Practical application of FIQT to the Psychiatric Genetic Consortium schizophrenia cohort predicts a non-trivial fraction of sub-threshold signals that become significant in much larger supersamples. Conclusions: FIQT is a simple, yet accurate, WCA method for Z-scores (and ORs/RRs, via simple transformations). Availability and Implementation: A 10-line R function implementation is available at https://github.com/bacanusa/FIQT. Contact: sabacanu@vcu.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27187203
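The two-step recipe, (i) FDR-adjust the P-values and (ii) back-transform the adjusted P-values to the Z-score scale, can be sketched as follows. This Python/NumPy version is an illustration of the idea only; the authors' reference implementation is the 10-line R function linked above, and the helper name `fiqt` and the explicit Benjamini-Hochberg step are our assumptions.

```python
import numpy as np
from scipy import stats

def fiqt(z):
    """FDR Inverse Quantile Transformation (sketch).
    z: array of association Z-scores, one per SNP."""
    p = 2 * stats.norm.sf(np.abs(z))               # two-sided P-values
    # Benjamini-Hochberg FDR adjustment of the P-values
    n = len(p)
    order = np.argsort(p)
    ranked = p[order] * n / (np.arange(n) + 1)
    # enforce monotonicity from the largest P-value downwards
    adj = np.minimum.accumulate(ranked[::-1])[::-1]
    p_adj = np.empty(n)
    p_adj[order] = np.clip(adj, 0.0, 1.0)
    # back-transform adjusted P-values to the Z-score scale,
    # keeping the sign of the original statistic
    return np.sign(z) * stats.norm.isf(p_adj / 2)
```

Applied to a vector of genome-wide Z-scores, the result is a shrunken-toward-zero copy of the input, which is the winner's curse adjustment described above.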
Hagen, Nils T
2008-01-01
Authorship credit for multi-authored scientific publications is routinely allocated either by issuing full publication credit repeatedly to all coauthors, or by dividing one credit equally among all coauthors. The ensuing inflationary and equalizing biases distort derived bibliometric measures of merit by systematically benefiting secondary authors at the expense of primary authors. Here I show how harmonic counting, which allocates credit according to authorship rank and the number of coauthors, provides simultaneous source-level correction for both biases as well as accommodating further decoding of byline information. I also demonstrate large and erratic effects of counting bias on the original h-index, and show how the harmonic version of the h-index provides unbiased bibliometric ranking of scientific merit while retaining the original's essential simplicity, transparency and intended fairness. Harmonic decoding of byline information resolves the conundrum of authorship credit allocation by providing a simple recipe for source-level correction of inflationary and equalizing bias. Harmonic counting could also offer unrivalled accuracy in automated assessments of scientific productivity, impact and achievement.
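Harmonic counting has a simple closed form: the i-th of N coauthors receives (1/i) divided by the N-th harmonic number, so the credits over a byline always sum to exactly one publication. A minimal sketch of this allocation rule (the function name is ours):

```python
def harmonic_credit(n_authors):
    """Harmonic authorship credit: the i-th of N coauthors receives
    (1/i) / sum_{k=1..N} (1/k), so credit decreases with byline rank
    and the per-paper total is exactly 1."""
    h = sum(1.0 / k for k in range(1, n_authors + 1))  # harmonic number
    return [(1.0 / i) / h for i in range(1, n_authors + 1)]
```

For a three-author paper this yields 6/11, 3/11, and 2/11, avoiding both the inflationary bias of full counting and the equalizing bias of fractional counting.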
Scatter correction of vessel dropout behind highly attenuating structures in 4D-DSA
NASA Astrophysics Data System (ADS)
Hermus, James; Mistretta, Charles; Szczykutowicz, Timothy P.
2015-03-01
In computed tomographic (CT) image reconstruction for four-dimensional digital subtraction angiography (4D-DSA), loss of vessel contrast has been observed behind highly attenuating anatomy, such as large contrast-filled aneurysms. Although this typically occurs only over a limited range of projection angles, the observed contrast time course can be altered. In this work we propose an algorithm to correct for highly attenuating anatomy within the fill projection data, i.e. aneurysms. The algorithm uses a 3D-SA volume to create a correction volume that is multiplied by the 4D-DSA volume in order to correct for signal dropout within the 4D-DSA volume. The algorithm was designed to correct for highly attenuating material in the fill volume only; however, with alterations to a single step of the algorithm, artifacts due to highly attenuating materials in the mask volume (i.e. dental implants) can be mitigated as well. We successfully applied our algorithm to a case of vessel dropout due to the presence of a large attenuating aneurysm. The performance was assessed visually: the affected vessel no longer dropped out on corrected 4D-DSA time frames. The correction was quantified by plotting the signal intensity along the vessel. Our analysis demonstrated that the correction does not alter vessel signal values outside of the vessel dropout region but does increase the vessel values within the dropout region, as expected. We have demonstrated that this correction algorithm acts to correct vessel dropout in areas with highly attenuating materials.
NASA Astrophysics Data System (ADS)
Mann, Steve D.; Tornai, Martin P.
2015-03-01
Solid-state cadmium zinc telluride (CZT) gamma cameras for SPECT imaging offer significantly improved energy resolution compared to traditional scintillation detectors. However, the photopeak resolution is often asymmetric due to incomplete charge collection within the detector, resulting in many photopeak events incorrectly sorted into lower energy bins ("tailing"). These misplaced events contaminate the true scatter signal, which may negatively impact scatter correction methods that rely on estimates of scatter from the spectra. Additionally, because CZT detectors are organized into arrays, each individual detector element may exhibit a different degree of tailing. Here, we present a modified dual-energy window scatter correction method for emission detection and imaging that attempts to account for position-dependent effects of incomplete charge collection in the CZT gamma camera of our dedicated breast SPECT-CT system. Point source measurements and geometric phantoms were used to estimate the impact of tailing on the scatter signal and to extract a better estimate of the ratio of scatter within two energy windows. To evaluate the method, cylindrical phantoms with and without a separate fillable chamber were scanned to determine the impact on quantification in hot, cold, and uniform background regions. Projections were reconstructed using OSEM, and the results for the traditional and modified scatter correction methods were compared. Results show that while modestly reduced quantification accuracy was observed in hot and cold regions of the multi-chamber phantoms, the modified scatter correction method yields up to 8% improved quantification accuracy with 4% less added noise than the traditional DEW method within uniform background regions.
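The traditional DEW estimate models photopeak-window scatter as a fixed multiple k of the counts in a lower energy window; the modification described here instead fits that ratio per detector element. A hedged sketch, in which allowing k to be a per-pixel array stands in for the position-dependent variant (function names are ours, not from the paper):

```python
import numpy as np

def dew_correct(photopeak, scatter_window, k):
    """Dual-energy window (DEW) scatter correction sketch:
    scatter in the photopeak window is estimated as k times the
    counts in a lower 'scatter' window and subtracted. k may be a
    scalar (classic DEW) or an array matching the detector shape
    (the position-dependent variant). Negative results are clipped."""
    return np.clip(photopeak - k * scatter_window, 0.0, None)
```

With a per-element k calibrated from point-source measurements, the same subtraction compensates for detector elements with stronger tailing.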
NASA Astrophysics Data System (ADS)
2012-09-01
The feature article "Material advantage?" on the effects of technology and rule changes on sporting performance (July pp28-30) stated that sprinters are less affected by lower oxygen levels at high altitudes because they run "aerobically". In fact, they run anaerobically. The feature about the search for the Higgs boson (August pp22-26) incorrectly gave the boson's mass as roughly 125 MeV; it is 125 GeV, as correctly stated elsewhere in the issue. The article also gave a wrong value for the intended collision energy of the Superconducting Super Collider, which was designed to collide protons with a total energy of 40 TeV.
NASA Astrophysics Data System (ADS)
Newman, A. J.; Notaros, B. M.; Bringi, V. N.; Kleinkort, C.; Huang, G. J.; Kennedy, P.; Thurai, M.
2015-12-01
We present a novel approach to remote sensing and characterization of winter precipitation and modeling of radar observables through a synergistic use of advanced in-situ instrumentation for microphysical and geometrical measurements of ice and snow particles, image processing methodology to reconstruct complex particle three-dimensional (3D) shapes, computational electromagnetics to analyze realistic precipitation scattering, and state-of-the-art polarimetric radar. Our in-situ measurement site at the Easton Valley View Airport, La Salle, Colorado, shown in the figure, consists of two advanced optical imaging disdrometers within a 2/3-scaled double fence intercomparison reference wind shield, and also includes a PLUVIO snow measuring gauge, a VAISALA weather station, and a collocated NCAR GPS advanced upper-air sounding system. Our primary radar is the CSU-CHILL radar, with a dual-offset Gregorian antenna featuring very high polarization purity and excellent side-lobe performance in any plane; the in-situ instrumentation site is conveniently located at a range of 12.92 km from the radar. A multi-angle snowflake camera (MASC) is used to capture multiple high-resolution views of an ice particle in free fall, along with its fall speed. We apply a visual hull geometrical method for reconstruction of 3D particle shapes from the images collected by the MASC, and convert these shapes into models for computational electromagnetic scattering analysis using a higher-order method of moments. A two-dimensional video disdrometer (2DVD), collocated with the MASC, provides 2D contours of a hydrometeor, along with the fall speed and other important parameters. We use the fall speed from the MASC and the 2DVD, along with state parameters measured at the Easton site, to estimate the particle mass (Böhm's method), and then the dielectric constant of particles, based on a Maxwell-Garnett formula. By calculation of the "particle-by-particle" scattering
NASA Astrophysics Data System (ADS)
Fujii, Masafumi
2014-03-01
It is shown that Mie's solution to Maxwell's equations no longer holds for the analysis of resonance of a plasmonic metal nanosphere. The conventional Mie's solution is based on the spherical Bessel and the spherical Hankel functions of an outgoing wave, whereas the permittivity of metals of a negative real part leads to a phase velocity that directs inward to the sphere, which is opposite from the direction of the energy flow as often discussed for negative-index metamaterials. This is a fundamental problem overlooked for a long time; a correction can be found from the viewpoint of a time-reversal problem involving negative permittivity media. The continuity of the field solution at the sphere surface is shown to be corrected by replacing the spherical Hankel function of an outgoing wave with that of an incoming wave, i.e., by adopting the complex conjugate of the conventional solutions. The corrected theory has been verified by the analyses of various metal nanospheres. In addition, the derivation of the scattering cross sections based on the corrected theory has elucidated that the conservation law of energy holds and that, more importantly, the conventional Mie's solution gives the same amplitude of the cross sections when they are obtained for real, not complex, frequency.
NASA Astrophysics Data System (ADS)
Williams, Robert W.; Schlücker, Sebastian; Hudson, Bruce S.
2008-01-01
A scaled quantum mechanical harmonic force field (SQMFF) corrected for anharmonicity is obtained for the 23 K L-alanine crystal structure using van der Waals corrected periodic boundary condition density functional theory (DFT) calculations with the PBE functional. Scale factors are obtained with comparisons to inelastic neutron scattering (INS), Raman, and FT-IR spectra of polycrystalline L-alanine at 15-23 K. Calculated frequencies for all 153 normal modes differ from observed frequencies with a standard deviation of 6 wavenumbers. Non-bonded external k = 0 lattice modes are included, but assignments to these modes are presently ambiguous. The extension of SQMFF methodology to lattice modes is new, as are the procedures used here for providing corrections for anharmonicity and van der Waals interactions in DFT calculations on crystals. First principles Born-Oppenheimer molecular dynamics (BOMD) calculations are performed on the L-alanine crystal structure at a series of classical temperatures ranging from 23 K to 600 K. Corrections for zero-point energy (ZPE) are estimated by finding the classical temperature that reproduces the mean square displacements (MSDs) measured from the diffraction data at 23 K. External k = 0 lattice motions are weakly coupled to bonded internal modes.
Zheng, Tianyu; Bott, Steven; Huo, Qun
2016-08-24
Gold nanoparticles (AuNPs) have found broad applications in chemical and biological sensing, catalysis, biomolecular imaging, in vitro diagnostics, cancer therapy, and many other areas. Dynamic light scattering (DLS) is an analytical tool used routinely for nanoparticle size measurement and analysis. Due to its relatively low cost and ease of operation in comparison to other more sophisticated techniques, DLS is the primary choice of instrumentation for analyzing the size and size distribution of nanoparticle suspensions. However, many DLS users are unfamiliar with the principles behind the DLS measurement and are unaware of some of the intrinsic limitations as well as the unique capabilities of this technique. The lack of sufficient understanding of DLS often leads to inappropriate experimental design and misinterpretation of the data. In this study, we performed DLS analyses on a series of citrate-stabilized AuNPs with diameters ranging from 10 to 100 nm. Our study shows that the measured hydrodynamic diameters of the AuNPs can vary significantly with concentration and incident laser power. The scattered light intensity of the AuNPs has a nearly sixth-order power-law increase with diameter, and the enormous scattered light intensity of AuNPs with diameters around or exceeding 80 nm causes a substantial multiple scattering effect in conventional DLS instruments. The effect leads to significant errors in the reported average hydrodynamic diameter of the AuNPs when the measurements are analyzed in the conventional way, without accounting for the multiple scattering. We present here some useful methods to obtain the accurate hydrodynamic size of the AuNPs using DLS. We also demonstrate and explain an extremely powerful aspect of DLS: its exceptional sensitivity in detecting gold nanoparticle aggregate formation, and the use of this unique capability for chemical and biological sensing applications. PMID:27472008
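The near sixth-power dependence of scattered intensity on diameter explains why the largest particles dominate the DLS signal. A toy calculation, assuming an exact d⁶ Rayleigh-regime scaling (the abstract states the exponent only approximately):

```python
def relative_scattered_intensity(d_nm, d_ref_nm=10.0):
    """Rayleigh-regime sketch: scattered light intensity scales
    roughly as the sixth power of particle diameter, so the return
    value is the intensity of a d_nm particle relative to a
    reference particle of diameter d_ref_nm."""
    return (d_nm / d_ref_nm) ** 6
```

Under this scaling an 80 nm AuNP scatters about 2.6×10⁵ times more light than a 10 nm one, which is why even a small fraction of large particles or aggregates can swamp the signal and trigger multiple scattering.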
2015-05-22
The Circulation Research article by Keith and Bolli ("String Theory" of c-kitpos Cardiac Cells: A New Paradigm Regarding the Nature of These Cells That May Reconcile Apparently Discrepant Results. Circ Res. 2015;116:1216-1230. doi: 10.1161/CIRCRESAHA.116.305557) states that van Berlo et al (2014) observed that large numbers of fibroblasts and adventitial cells, some smooth muscle and endothelial cells, and rare cardiomyocytes originated from c-kit positive progenitors. However, van Berlo et al reported that only occasional fibroblasts and adventitial cells derived from c-kit positive progenitors in their studies. Accordingly, the review has been corrected to indicate that van Berlo et al (2014) observed that large numbers of endothelial cells, with some smooth muscle cells and fibroblasts, and more rarely cardiomyocytes, originated from c-kit positive progenitors in their murine model. The authors apologize for this error, and the error has been noted and corrected in the online version of the article, which is available at http://circres.ahajournals.org/content/116/7/1216.full. PMID:25999426
Variance reduction techniques for fast Monte Carlo CBCT scatter correction calculations
NASA Astrophysics Data System (ADS)
Mainegra-Hing, Ernesto; Kawrakow, Iwan
2010-08-01
Several variance reduction techniques improving the efficiency of the Monte Carlo estimation of the scatter contribution to a cone beam computed tomography (CBCT) scan were implemented in egs_cbct, an EGSnrc-based application for CBCT-related calculations. The largest impact on the efficiency comes from the combined splitting and Russian Roulette techniques, which are described in detail. The fixed splitting technique is outperformed by both position-dependent importance splitting (PDIS) and region-dependent importance splitting (RDIS). The superiority of PDIS over RDIS observed for a water phantom with bone inserts is not observed when applying these techniques to a more realistic human chest phantom. A maximum efficiency improvement of several orders of magnitude over an analog calculation is obtained. A scatter calculation combining the reported efficiency gain with a smoothing algorithm is already close to practical use if a medium-sized computer cluster is available.
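Russian roulette, one half of the splitting plus Russian Roulette combination, terminates low-weight particles while preserving the expectation value of the estimator. A generic sketch of the idea, not the egs_cbct implementation:

```python
import random

def russian_roulette(weight, survival_prob):
    """Russian roulette variance-reduction sketch: a particle
    survives with probability survival_prob, in which case its
    weight is boosted by 1/survival_prob to keep the estimate
    unbiased; otherwise it is terminated (weight 0)."""
    if random.random() < survival_prob:
        return weight / survival_prob  # survivor carries extra weight
    return 0.0                         # particle terminated
```

Because E[w'] = survival_prob × (weight/survival_prob) = weight, the mean contribution is unchanged while most low-weight histories are stopped early, which is where the efficiency gain comes from.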
NASA Astrophysics Data System (ADS)
Kim, Changhwan; Park, Miran; Lee, Hoyeon; Cho, Seungryong
2016-03-01
Our earlier work has demonstrated that the data consistency condition can be used as a criterion for scatter kernel optimization in deconvolution methods in full-fan mode cone-beam CT [1]. However, this scheme cannot be directly applied to a CBCT system with an offset detector (half-fan mode) because of the transverse data truncation in the projections. In this study, we propose a modified scheme of the scatter kernel optimization method that can be used in half-fan mode cone-beam CT, and we demonstrate its feasibility. Using the first-reconstructed volume image from the half-fan projection data, we acquired full-fan projection data by forward projection synthesis. The synthesized full-fan projections were used in part to fill the truncated regions of the half-fan data. By doing so, we were able to utilize the existing data consistency-driven scatter kernel optimization method. The proposed method was validated in a simulation study using the XCAT numerical phantom and in an experimental study using the ACS head phantom.
Szidarovszky, Tamás; Császár, Attila G.
2015-01-07
The total partition functions Q(T) and their first two moments Q′(T) and Q″(T), together with the isobaric heat capacities C_p(T), are computed a priori for three major MgH isotopologues over the temperature range T = 100–3000 K using the recent highly accurate potential energy curve, spin-rotation, and non-adiabatic correction functions of Henderson et al. [J. Phys. Chem. A 117, 13373 (2013)]. Nuclear motion computations are carried out on the ground electronic state to determine the (ro)vibrational energy levels and the scattering phase shifts. The effect of resonance states is found to be significant above about 1000 K, and it increases with temperature. Even very short-lived states, due to their relatively large number, contribute significantly to Q(T) at elevated temperatures. The contribution of scattering states is around one fourth of that of resonance states but opposite in sign. Uncertainty estimates are given for the possible error sources, suggesting that all computed thermochemical properties have an accuracy better than 0.005% up to 1200 K. Between 1200 and 2500 K, the uncertainties can rise to around 0.1%, while between 2500 and 3000 K a further increase to 0.5% might be observed for Q″(T) and C_p(T), principally due to the neglect of excited electronic states. The accurate thermochemical data determined are presented in the supplementary material for the three isotopologues ²⁴MgH, ²⁵MgH, and ²⁶MgH at 1 K increments. These data, which differ significantly from older standard data, should prove useful for astronomical models incorporating thermodynamic properties of these species.
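The bound-state part of Q(T) is a direct Boltzmann sum over the computed rovibrational energy levels; the resonance and scattering-state contributions discussed above require phase-shift integrals and are omitted. A minimal sketch of the direct sum (degeneracy factors are omitted for brevity, an assumption of this illustration):

```python
import numpy as np

def partition_function(levels_cm1, T):
    """Direct-sum partition function sketch:
    Q(T) = sum_i exp(-E_i / (k_B T)), with level energies E_i given
    in cm^-1 relative to the lowest level and T in kelvin."""
    k_B = 0.6950348  # Boltzmann constant in cm^-1 per K
    E = np.asarray(levels_cm1, dtype=float)
    return float(np.sum(np.exp(-E / (k_B * T))))
```

Q′(T) and Q″(T) follow from the analogous sums weighted by E_i/(k_B T) and its square, and C_p(T) is assembled from those moments.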
Ouyang, Luo; Lee, Huichen Pam; Wang, Jing
2015-01-01
Purpose: To evaluate a moving blocker-based approach to estimating and correcting megavoltage (MV) and kilovoltage (kV) scatter contamination in kV cone-beam computed tomography (CBCT) acquired during volumetric modulated arc therapy (VMAT). Methods and materials: During the concurrent CBCT/VMAT acquisition, a physical attenuator (i.e., "blocker") consisting of equally spaced lead strips was mounted and moved constantly between the CBCT source and the patient. Both kV and MV scatter signals were estimated from the blocked region of the imaging panel and interpolated into the unblocked region. A scatter-corrected CBCT was then reconstructed from the unblocked projections after scatter subtraction, using an iterative image reconstruction algorithm based on constrained optimization. Experimental studies were performed on a Catphan® phantom and an anthropomorphic pelvis phantom to demonstrate the feasibility of using a moving blocker for kV-MV scatter correction. Results: Scatter-induced cupping artifacts were substantially reduced in the moving blocker corrected CBCT images. Quantitatively, the root mean square error of Hounsfield units (HU) in seven density inserts of the Catphan phantom was reduced from 395 to 40. Conclusions: The proposed moving blocker strategy greatly improves the image quality of CBCT acquired with concurrent VMAT by reducing the kV-MV scatter-induced HU inaccuracy and cupping artifacts. PMID:26026484
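The core of the blocker approach, interpolating the scatter measured behind the lead strips across the unblocked detector region, can be sketched per detector row. Linear interpolation is our simplifying assumption; the abstract does not specify the interpolant, and the function name is ours:

```python
import numpy as np

def estimate_scatter_row(blocked_cols, blocked_vals, n_cols):
    """Blocker-based scatter estimate for one detector row (sketch):
    the signal measured behind the lead strips (at column indices
    blocked_cols, with values blocked_vals) is linearly interpolated
    across all n_cols detector columns to form a smooth scatter
    profile, which is then subtracted from the unblocked projection."""
    return np.interp(np.arange(n_cols), blocked_cols, blocked_vals)
```

The same profile serves both kV and MV scatter here because, behind a strip, essentially everything reaching the detector is scatter.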
Fang, Changming; Li, Wun-Fan; Koster, Rik S; Klimeš, Jiří; van Blaaderen, Alfons; van Huis, Marijn A
2015-01-01
Knowledge about the intrinsic electronic properties of water is imperative for understanding the behaviour of aqueous solutions that are used throughout biology, chemistry, physics, and industry. The calculation of the electronic band gap of liquids is challenging, because the most accurate ab initio approaches can be applied only to small numbers of atoms, while large numbers of atoms are required for configurations that are representative of a liquid. Here we show that a high-accuracy value for the electronic band gap of water can be obtained by combining beyond-DFT methods and statistical time-averaging. Liquid water is simulated at 300 K using a plane-wave density functional theory molecular dynamics (PW-DFT-MD) simulation and a van der Waals density functional (optB88-vdW). After applying a self-consistent GW correction, the band gap of liquid water at 300 K is calculated as 7.3 eV, in good agreement with recent experimental observations in the literature (6.9 eV). For simulations of phase transformations and chemical reactions in water or aqueous solutions in which an accurate description of the electronic structure is required, we suggest using these advanced GW corrections in combination with the statistical analysis of quantum mechanical MD simulations.
NASA Astrophysics Data System (ADS)
Lajohn, L. A.; Pratt, R. H.
2015-05-01
There is no simple parameter that can be used to predict when the impulse approximation (IA) can yield accurate Compton scattering doubly differential cross sections (DDCS) in relativistic regimes. When Z is low, a small value of the parameter ⟨p⟩/q (where ⟨p⟩ is the average initial electron momentum and q is the momentum transfer) suffices. For small Z, the photon-electron kinematic contribution described in relativistic S-matrix (SM) theory reduces to an expression, X_rel, which is present in the relativistic impulse approximation (RIA) formula for the Compton DDCS. When Z is high, the S-matrix photon-electron kinematics no longer reduces to X_rel, and this, along with the error characterized by the magnitude of ⟨p⟩/q, contributes to the RIA error Δ. We demonstrate and illustrate in the form of contour plots that there are regimes of incident photon energy ω_i and scattering angle θ in which the two types of error at least partially cancel. Our calculations show that when θ is about 65° for uranium K-shell scattering, Δ is less than 1% over an ω_i range of 300 to 900 keV.
Weir, Alexander J; Sayer, Robin; Cheng-Xiang Wang; Parks, Stuart
2015-08-01
Medical phantoms are frequently required to verify image and signal processing systems, and are often used to support algorithm development for a wide range of imaging and blood flow assessments. A phantom with accurate scattering properties is a crucial requirement when assessing the effects of multi-path propagation channels during the development of complex signal processing techniques for transcranial Doppler (TCD) ultrasound. The simulation of physiological blood flow in a phantom with tissue and blood equivalence can be achieved using a variety of techniques. In this paper, poly(vinyl alcohol) cryogel (PVA-C) tissue-mimicking material (TMM) is evaluated in conjunction with a number of potential scattering agents. The acoustic properties of the TMMs are assessed, and an acoustic velocity of 1524 m s⁻¹, an attenuation coefficient of 0.49×10⁻⁴ f dB m⁻¹ Hz⁻¹, a characteristic impedance of 1.72×10⁶ kg m⁻² s⁻¹, and a backscatter coefficient of 1.12×10⁻²⁸ f⁴ m⁻¹ Hz⁻⁴ sr⁻¹ were achieved using 4 freeze-thaw cycles and an aluminium oxide (Al₂O₃) scattering agent. This TMM was used to make an anatomically realistic wall-less flow phantom for studying the effects of multipath propagation in TCD ultrasound.
Multiple-scattering Corrections to Measurements of the Prompt Fission Neutron Spectrum
Taddeucci, T.N.; Haight, R.C.; Lee, H.Y.; Neudecker, D.; O'Donnell, J.M.; White, M.C.; Perdue, B.A.; Devlin, M.; Fotiadis, N.; Ullmann, J.L.; Nelson, R.O.; Bredeweg, T.A.; Rising, M.E.; Sjue, S.K.; Wender, S.A.; Wu, C.Y.; Henderson, R.
2015-01-15
The Chi-Nu project, conducted jointly by LANL and LLNL, aims to measure the shape of the prompt fission neutron spectrum (PFNS) for fission of ²³⁹Pu induced by neutrons from 50 keV to 15 MeV, with accuracies of 3–5% in the outgoing energy from 50 keV to 9 MeV and 15% from 9 to 15 MeV. In order to meet this goal, detailed Monte Carlo simulations are being used to assess the importance and effect of every component in the experimental configuration. As part of this effort, we have also simulated some past PFNS measurements to identify possible sources of systematic error. We find that multiple scattering plays an important role in the target geometry, collimators, and detector response, and that past experiments probably underestimated the extent of this effect.
Slagmolen, Pieter; Hermans, Jeroen; Maes, Frederik; Budiharto, Tom; Haustermans, Karin; Heuvel, Frank van den
2010-04-15
Purpose: A robust and accurate method that allows the automatic detection of fiducial markers in MV and kV projection image pairs is proposed. The method allows automatic correction of inter- or intrafraction motion. Methods: Intratreatment MV projection images are acquired during each of five treatment beams of prostate cancer patients with four implanted fiducial markers. The projection images are first preprocessed using a series of marker-enhancing filters. 2D candidate marker locations are generated for each of the filtered projection images, and 3D candidate marker locations are reconstructed by pairing candidates in subsequent projection images. The correct marker positions are retrieved in 3D by the minimization of a cost function that combines 2D image intensity and 3D geometric or shape information for the entire marker configuration simultaneously. This optimization problem is solved using dynamic programming such that the globally optimal configuration for all markers is always found. Translational interfraction and intrafraction prostate motion and the required patient repositioning are assessed from the position of the centroid of the detected markers in different MV image pairs. The method was validated on a phantom using CT as ground truth and on clinical data sets of 16 patients using manual marker annotations as ground truth. Results: The entire setup was confirmed to be accurate to around 1 mm by the phantom measurements. The reproducibility of the manual marker selection was less than 3.5 pixels in the MV images. In patient images, markers were correctly identified in at least 99% of the cases for anterior projection images and 96% of the cases for oblique projection images. The average marker detection accuracy was 1.4 ± 1.8 pixels in the projection images. The centroid of all four reconstructed marker positions in 3D was positioned within 2 mm of the ground-truth position in 99.73% of all cases. Detecting four markers in a pair of MV images
Coulomb Corrections in Deep Inelastic Scattering and the Nuclear Dependence of R = σ_L/σ_T
NASA Astrophysics Data System (ADS)
Gaskell, David
2011-04-01
Measurements of deep inelastic structure functions from nuclei are typically performed at very high energies; hence, effects from the Coulombic acceleration or deceleration of the incident and scattered lepton due to the additional protons in a heavy nucleus are typically ignored. However, re-analysis of data taken at SLAC in experiments E140 and E139 indicates that the effect of including Coulomb corrections, while not large, is non-zero and impacts the extracted results non-trivially. In particular, there is a significant impact when these data are used to extrapolate the magnitude of the EMC effect to nuclear matter. In addition, the conclusion from E140 that there is no evidence for a nuclear dependence of R = σ_L/σ_T is thrown into question. When combined with recent data from Jefferson Lab, R_A - R_D at x = 0.5 is found to differ from zero by two σ.
Chun, Se Young
2016-03-01
PET and SPECT are important tools for providing valuable molecular information about patients to clinicians. Advances in nuclear medicine hardware technologies and statistical image reconstruction algorithms have enabled significantly improved image quality. Sequentially or simultaneously acquired anatomical images such as CT and MRI from hybrid scanners are also important ingredients for further improving the image quality of PET or SPECT. High-quality anatomical information has been used and investigated for attenuation and scatter corrections, motion compensation, and noise reduction via post-reconstruction filtering and regularization in inverse problems. In this article, we review works using anatomical information in molecular image reconstruction algorithms for better image quality, describing mathematical models, discussing sources of anatomical information for different cases, and showing some examples. PMID:26941855
Kerimov, B.K.; Safin, M.Y.
1988-01-01
Second-Born-approximation corrections to the cross section and right-left asymmetry are calculated for scattering of longitudinally polarized electrons by nuclei with arbitrary spin. Besides a purely electromagnetic contribution, the corrections contain an electroweak contribution resulting from interference between the Coulomb moments and the longitudinal and transverse dipole moments of the target nucleus. Simple expressions are obtained for the corrections by evaluating the angular parts of certain integrals in the logarithmic approximation. The behavior of the corrections is studied for the example of ¹¹B at incident-electron energies E ≲ 200 MeV.
Darby, B J; Todd, T C; Herman, M A
2013-11-01
Nematodes are abundant consumers in grassland soils, but more sensitive and specific methods of enumeration are needed to improve our understanding of how different nematode species affect, and are affected by, ecosystem processes. High-throughput amplicon sequencing is used to enumerate microbial and invertebrate communities at a high level of taxonomic resolution, but the method requires validation against traditional specimen-based morphological identifications. To investigate the consistency between these approaches, we enumerated nematodes from a 25-year field experiment using both morphological and molecular identification techniques in order to determine the long-term effects of annual burning and nitrogen enrichment on soil nematode communities. Family-level frequencies based on amplicon sequencing were not initially consistent with specimen-based counts, but correction for differences in rRNA gene copy number using a genetic algorithm improved quantitative accuracy. Multivariate analysis of corrected sequence-based abundances of nematode families was consistent with, but not identical to, analysis of specimen-based counts. In both cases, herbivores, fungivores and predator/omnivores generally were more abundant in burned than nonburned plots, while bacterivores generally were more abundant in nonburned or nitrogen-enriched plots. Discriminate analysis of sequence-based abundances identified putative indicator species representing each trophic group. We conclude that high-throughput amplicon sequencing can be a valuable method for characterizing nematode communities at high taxonomic resolution as long as rRNA gene copy number variation is accounted for and accurate sequence databases are available. PMID:24103081
Zheng, Wenjun; Tekpinar, Mustafa
2011-12-21
Small-angle x-ray scattering (SAXS) is a powerful technique widely used to explore conformational states and transitions of biomolecular assemblies in solution. For accurate model reconstruction from SAXS data, one promising approach is to flexibly fit a known high-resolution protein structure to low-resolution SAXS data by computer simulations. This is a highly challenging task due to low information content in SAXS data. To meet this challenge, we have developed what we believe to be a novel method based on a coarse-grained (one-bead-per-residue) protein representation and a modified form of the elastic network model that allows large-scale conformational changes while maintaining pseudobonds and secondary structures. Our method optimizes a pseudoenergy that combines the modified elastic-network model energy with a SAXS-fitting score and a collision energy that penalizes steric collisions. Our method uses what we consider a new implicit hydration shell model that accounts for the contribution of the hydration shell to SAXS data accurately without explicitly adding waters to the system. We have rigorously validated our method using five test cases with simulated SAXS data and three test cases with experimental SAXS data. Our method has successfully generated high-quality structural models with root mean-squared deviation of 1-3 Å from the target structures.
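A SAXS-fitting score of the kind combined into such a pseudoenergy is typically a chi-square between model and experimental intensity profiles with an analytically optimal linear scale factor. The sketch below assumes that common form; the toy profiles are synthetic, not the paper's data.

```python
import numpy as np

def saxs_chi2(i_model, i_exp, sigma):
    # Chi-square between model and experimental SAXS intensities, using the
    # scale factor c that analytically minimizes the weighted residual.
    w = 1.0 / sigma ** 2
    c = np.sum(w * i_exp * i_model) / np.sum(w * i_model ** 2)
    return float(np.sum(w * (i_exp - c * i_model) ** 2) / i_exp.size)

q = np.linspace(0.01, 0.3, 50)                 # momentum transfer grid
i_model = np.exp(-((20.0 * q) ** 2) / 3.0)     # Guinier-like toy profile
i_exp = 2.5 * i_model                          # same shape, different scale
chi2 = saxs_chi2(i_model, i_exp, 0.05 * np.ones_like(q))
# chi2 is ~0 because the shapes match exactly; only the scale differs.
```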
NASA Astrophysics Data System (ADS)
Sun, Junfeng; Chang, Qin; Hu, Xiaohui; Yang, Yueling
2015-04-01
In this paper, we investigate the contributions of hard spectator scattering and annihilation in B → PV decays within the QCD factorization framework. With the available experimental data on B → πK*, ρK, πρ and Kϕ decays, comprehensive χ² analyses of the parameters X_{A,H}^{i,f} (ρ_{A,H}^{i,f}, ϕ_{A,H}^{i,f}) are performed, where X_A^f (X_A^i) and X_H are used to parameterize the endpoint divergences of the (non)factorizable annihilation and hard spectator scattering amplitudes, respectively. Based on the χ² analyses, it is observed that: (1) the topology-dependent parameterization scheme is feasible for B → PV decays; (2) at the current accuracy of experimental measurements and theoretical evaluations, X_H = X_A^i is allowed by B → PV decays, but X_H ≠ X_A^f at 68% C.L.; (3) with the simplification X_H = X_A^i, the parameters X_A^f and X_A^i should be treated individually. These findings are very similar to those obtained from B → PP decays. Numerically, for B → PV decays, we obtain (ρ_{A,H}^i, ϕ_{A,H}^i [°]) = (2.87^{+0.66}_{-1.95}, -145^{+14}_{-21}) and (ρ_A^f, ϕ_A^f [°]) = (0.91^{+0.12}_{-0.13}, -37^{+10}_{-9}) at 68% C.L. With the best-fit values, most of the theoretical results are in good agreement with the experimental data within errors. However, significant corrections to the color-suppressed tree amplitude α₂ related to a large ρ_H result in the wrong sign for A_CP^dir(B⁻ → π⁰K*⁻) compared with the most recent BABAR data, which presents a new obstacle to solving the "ππ" and "πK" puzzles through α₂. A crosscheck with higher-precision measurements at Belle (or Belle II) and LHCb is urgently expected to confirm or refute this possible mismatch.
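The endpoint parameterization referred to here is, in the standard QCDF convention, X = (1 + ρ e^{iϕ}) ln(m_B/Λ_h). The sketch below evaluates it for the best-fit annihilation parameters quoted in the abstract; the choice Λ_h = 0.5 GeV is the conventional one and is assumed here.

```python
import cmath
import math

def endpoint_X(rho, phi_deg, m_B=5.279, Lambda_h=0.5):
    # Standard QCDF parameterization of the endpoint divergence:
    #   X = (1 + rho * exp(i*phi)) * ln(m_B / Lambda_h)
    # with rho the magnitude and phi the strong phase of the model term.
    return (1.0 + rho * cmath.exp(1j * math.radians(phi_deg))) \
        * math.log(m_B / Lambda_h)

# Best-fit (rho_A^f, phi_A^f) = (0.91, -37 degrees) from the abstract.
X_Af = endpoint_X(0.91, -37.0)
```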
NASA Astrophysics Data System (ADS)
Kim, J. H.; Kim, S. W.; Yoon, S. C.; Park, R.; Ogren, J. A.
2014-12-01
Filter-based instruments, such as the aethalometer, are widely used to measure equivalent black carbon (EBC) mass concentration and the aerosol absorption coefficient (AAC). However, many previous studies have pointed out that the AAC and its absorption Ångström exponent (AAE) are strongly affected by the multiple-scattering correction factor (C) when the AAC is retrieved from aethalometer EBC mass concentration measurements (Weingartner et al., 2003; Arnott et al., 2005; Schmid et al., 2006; Coen et al., 2010). We determined the C value using the method given in Weingartner et al. (2003) by comparing a 7-wavelength aethalometer (AE-31, Magee Scientific) with a 3-wavelength Photo-Acoustic Soot Spectrometer (PASS-3, DMT) at Gosan Climate Observatory, Korea (GCO) during the Cheju ABC Plume-Asian Monsoon Experiment (CAPMEX) campaign (August and September 2008). In this study, C was estimated to be 4.04 ± 1.68 at 532 nm, and the AAC retrieved with this value was roughly half (a difference of approximately 100%) of that retrieved with the soot-case value from Weingartner et al. (2003). We compared the AAC determined from aethalometer measurements with that from collocated Continuous Light Absorption Photometer (CLAP) measurements from January 2012 to December 2013 at GCO and found good agreement in both AAC and AAE. These results suggest that determining a site-specific C is crucial when calculating the AAC from aethalometer measurements.
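The role of C can be sketched as a simplified Weingartner-type correction, with the filter-loading term set to one for brevity. The attenuation coefficient below is illustrative; C = 4.04 is the site-specific value reported here, and ~2.14 is the widely used soot-case value from Weingartner et al. (2003).

```python
def aac_from_attenuation(b_atn, C=4.04, R=1.0):
    # Aerosol absorption coefficient from the filter attenuation coefficient:
    #   AAC = b_ATN / (C * R)
    # C: multiple-scattering correction factor; R: filter-loading correction
    # (held at 1 in this simplified sketch).
    return b_atn / (C * R)

# The same attenuation coefficient yields a roughly 2x smaller AAC with the
# site-specific C than with the soot-case value.
aac_site = aac_from_attenuation(40.4, C=4.04)
aac_soot = aac_from_attenuation(40.4, C=2.14)
```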
Hawke, J.; Scannell, R.; Maslov, M.; Migozzi, J. B.; Collaboration: JET-EFDA Contributors
2013-10-15
This work isolated the cause of the observed discrepancy between the electron temperature (T_e) measurements before and after the JET Core LIDAR Thomson Scattering (TS) diagnostic was upgraded. In the upgrade process, stray light filters positioned just before the detectors were removed from the system. Modelling showed that the shift imposed on the stray light filters' transmission functions by the variations in the incidence angles of the collected photons impacted plasma measurements. To correct for this identified source of error, correction factors were developed using ray tracing models for the calibration and operational states of the diagnostic. The application of these correction factors resulted in an increase in the observed T_e, partially if not completely removing the observed discrepancy in the measured T_e between the JET core LIDAR TS diagnostic, High Resolution Thomson Scattering, and the Electron Cyclotron Emission diagnostics.
NASA Astrophysics Data System (ADS)
Dickerson, Edward C.
Quality assurance in radiation oncology treatment planning requires independent verification of dose to be delivered to a patient through "second check" calculations for simple plans as well as planar dose fluence measurements for more complex treatments, such as intensity modulated radiation treatments (IMRT). Discrepancies between treatment planning system (TPS) and second check calculations created a need for treatment plan verification using a two dimensional diode array for Enhanced Dynamic Wedge (EDW) fields. While these measurements met clinical standards for treatment, they revealed room for improvement in the EDW model. The purpose of this study is to analyze the head scatter and jaw transmission effects of the moving jaw in EDW fields by measuring dose profiles with a two dimensional diode array in order to minimize differences between the manufacturer provided fluence table (Golden Segmented Treatment Table) and actual machine output. The jaw transmission effect reduces the dose gradient in the wedge direction due to transmission photons adding dose to the heel region of the field. The head scatter effect also reduces the gradient in the dose profile due to decreased accelerator output at increasingly smaller field sizes caused by the moving jaw. The field size continuously decreases with jaw motion, and thus the toe region of the wedge receives less dose than anticipated due to less head scatter contribution for small field sizes. The Golden Segmented Treatment Table (GSTT) does not take these factors into account since they are specific to each individual machine. Thus, these factors need to be accounted for in the TPS to accurately model the gradient of the wedge. The TPS used in this clinic uses one correction factor (transmission factor) to account for both effects since both factors reduce the dose gradient of the wedge. Dose profile measurements were made for 5x5 cm2, 10x10 cm2, and 20x20 cm2 field sizes with open fields and 10°, 15°, 20°, 25
Spencer, Robert J; Axelrod, Bradley N; Drag, Lauren L; Waldron-Perrine, Brigid; Pangilinan, Percival H; Bieliauskas, Linas A
2013-01-01
Reliable Digit Span (RDS) is a measure of effort derived from the Digit Span subtest of the Wechsler intelligence scales. Some authors have suggested that the age-corrected scaled score provides a more accurate measure of effort than RDS. This study examined the relative diagnostic accuracy of the traditional RDS, an extended RDS including the new Sequencing task from the Wechsler Adult Intelligence Scale-IV, and the age-corrected scaled score, relative to performance validity as determined by the Test of Memory Malingering. Data were collected from 138 Veterans seen in a traumatic brain injury clinic. The traditional RDS (≤ 7), revised RDS (≤ 11), and Digit Span age-corrected scaled score ( ≤ 6) had respective sensitivities of 39%, 39%, and 33%, and respective specificities of 82%, 89%, and 91%. Of these indices, revised RDS and the Digit Span age-corrected scaled score provide the most accurate measure of performance validity among the three measures.
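The sensitivity and specificity figures reported here follow directly from a 2 × 2 classification table against the Test of Memory Malingering criterion. A minimal sketch of the computation, with hypothetical counts chosen only to illustrate it (not the study's raw data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    # Sensitivity = TP / (TP + FN): proportion of invalid performers flagged.
    # Specificity = TN / (TN + FP): proportion of valid performers passed.
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical 2x2 counts for a cutoff such as revised RDS <= 11.
sens, spec = sensitivity_specificity(tp=13, fn=20, tn=94, fp=11)
```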
Häggström, I; Karlsson, M; Larsson, A; Schmidtlein, C
2014-06-15
Purpose: To investigate the effects of corrections for random and scattered coincidences on kinetic parameters in brain tumors, using ten Monte Carlo (MC) simulated dynamic FLT-PET brain scans. Methods: The GATE MC software was used to simulate ten repetitions of a 1 hour dynamic FLT-PET scan of a voxelized head phantom. The phantom comprised six normal head tissues, plus inserted regions for blood and tumor tissue. Different time-activity curves (TACs) for all eight tissue types were used in the simulation and were generated in Matlab using a 2-tissue model with preset parameter values (K1, k2, k3, k4, Va, Ki). The PET data were reconstructed into 28 frames by both ordered-subset expectation maximization (OSEM) and 3D filtered back-projection (3DFBP). Five image sets were reconstructed, all with normalization and different additional corrections (A=attenuation, R=random, S=scatter): trues (AC), trues+randoms (ARC), trues+scatters (ASC), total counts (ARSC) and total counts (AC). Corrections for randoms and scatters were based on real random and scatter sinograms that were back-projected, blurred and then forward-projected and scaled to match the real counts. Weighted non-linear least-squares fitting of TACs from the blood and tumor regions was used to obtain parameter estimates. Results: The bias was not significantly different for trues (AC), trues+randoms (ARC), trues+scatters (ASC) and total counts (ARSC) for either 3DFBP or OSEM (p<0.05). Total counts with only AC stood out, however, with an up to 160% larger bias. In general, there was no difference in bias found between 3DFBP and OSEM, except in the parameters Va and Ki. Conclusion: According to our results, the methodology of correcting the PET data for randoms and scatters performed well for the dynamic images, where frames have much lower counts compared to static images. Generally, no bias was introduced by the corrections, and their importance was emphasized since omitting them increased bias extensively.
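Per sinogram bin, the correction chain described above amounts to subtracting the scaled random and scatter estimates from the measured counts. A minimal sketch of that step (array values are illustrative; clamping negative bins to zero is a simplification of how reconstruction handles them):

```python
import numpy as np

def correct_counts(total, randoms_est, scatter_est):
    # Estimate true coincidences by subtracting the (back-projected, blurred,
    # re-projected and scaled) random and scatter estimates from the total
    # counts, clamping negative bins to zero.
    return np.maximum(total - randoms_est - scatter_est, 0.0)

total = np.array([100.0, 80.0, 5.0])           # measured prompts per bin
randoms_est = np.array([10.0, 8.0, 4.0])       # estimated randoms
scatter_est = np.array([20.0, 16.0, 3.0])      # estimated scatters
trues = correct_counts(total, randoms_est, scatter_est)
```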
NASA Astrophysics Data System (ADS)
Hernandez-Pajares, M.; Juan, J.; Sanz, J.; Aragon-Angel, A.
2007-05-01
The main focus of this presentation is to show the recent improvements in real-time GNSS ionospheric determination extending the service area of the so-called "Wide Area Real Time Kinematic" technique (WARTK), which allows centimeter-error-level navigation up to hundreds of kilometers from the nearest GNSS reference site. Real-time GNSS navigation with centimeters of error has been feasible since the nineties thanks to the so-called "Real-Time Kinematic" technique (RTK), which exactly solves the integer values of the double-differenced carrier phase ambiguities. This was possible thanks to dual-frequency carrier phase data acquired simultaneously with data from a close (less than 10-20 km) reference GNSS site, under the assumption of common atmospheric effects on the satellite signal. This technique has been improved by different authors with the consideration of a network of reference sites. However, the differential ionospheric refraction has remained the main limiting factor in extending the applicability distance relative to the reference site. In this context the authors have been developing the Wide Area RTK technique (WARTK) in different works and projects since 1998, overcoming the aforementioned limitations. In this way RTK becomes applicable with the existing sparse (Wide Area) networks of reference GPS stations, separated by hundreds of kilometers. Such networks are presently deployed in the context of other projects, such as SBAS support, over Europe and North America (EGNOS and WAAS, respectively), among other regions. In particular, WARTK is based on computing very accurate differential ionospheric corrections from a Wide Area network of permanent GNSS receivers and providing them in real time to the users. The key points addressed by the technique are an accurate real-time ionospheric modeling, combined with the corresponding geodetic model, by means of: a) A tomographic voxel model of the ionosphere
Hua, Dengxin; Uchida, Masaru; Kobayashi, Takao
2005-03-01
A Rayleigh-Mie-scattering lidar system at an eye-safe 355-nm ultraviolet wavelength that is based on a high-spectral-resolution lidar technique is demonstrated for measuring the vertical temperature profile of the troposphere. Two Rayleigh signals, which determine the atmospheric temperature, are filtered with two Fabry-Perot etalon filters. The filters are located on the same side of the wings of the Rayleigh-scattering spectrum and are optically constructed with a dual-pass optical layout. This configuration achieves a high rejection rate for Mie scattering and reasonable transmission for Rayleigh scattering. The Mie signal is detected with a third Fabry-Perot etalon filter, which is centered at the laser frequency. The filter parameters were optimized by numerical calculation; the results showed a Mie rejection of approximately -45 dB, and Rayleigh transmittance greater than 1% could be achieved for the two Rayleigh channels. A Mie correction method is demonstrated that uses an independent measure of the aerosol scattering to correct the temperature measurements that have been influenced by the aerosols and clouds. Simulations and preliminary experiments have demonstrated that the performance of the dual-pass etalon and Mie correction method is highly effective in practical applications. Simulation results have shown that the temperature errors that are due to noise are less than 1 K up to a height of 4 km for daytime measurement for 300 W m(-2) sr(-1) microm(-1) sky brightness with a lidar system that uses 200 mJ of laser energy, a 3.5-min integration time, and a 25-cm telescope.
NASA Astrophysics Data System (ADS)
Rosenberg, P. D.; Dean, A. R.; Williams, P. I.; Dorsey, J. R.; Minikin, A.; Pickering, M. A.; Petzold, A.
2012-05-01
Optical particle counters (OPCs) are used regularly for atmospheric research, measuring particle scattering cross sections to generate particle size distribution histograms. This manuscript presents two methods for calibrating OPCs with case studies based on a Passive Cavity Aerosol Spectrometer Probe (PCASP) and a Cloud Droplet Probe (CDP), both of which are operated on the Facility for Airborne Atmospheric Measurements BAe-146 research aircraft. A probability density function based method is provided for modification of the OPC bin boundaries when the scattering properties of measured particles are different to those of the calibration particles due to differences in refractive index or shape. This method provides mean diameters and widths for OPC bins based upon Mie-Lorenz theory or any other particle scattering theory, without the need for smoothing, despite the highly nonlinear and non-monotonic relationship between particle size and scattering cross section. By calibrating an OPC in terms of its scattering cross section the optical properties correction can be applied with minimal information loss, and performing correction in this manner provides traceable and transparent uncertainty propagation throughout the whole process. Analysis of multiple calibrations has shown that for the PCASP the bin centres differ by up to 30% from the manufacturer's nominal values and can change by up to approximately 20% when routine maintenance is performed. The CDP has been found to be less sensitive than the manufacturer's specification with differences in sizing of between 1.6 ± 0.8 μm and 4.7 ± 1.8 μm for one flight. Over the course of the Fennec project in the Sahara the variability of calibration was less than the calibration uncertainty in 6 out of 7 calibrations performed. As would be expected from Mie-Lorenz theory, the impact of the refractive index corrections has been found to be largest for absorbing materials and the impact on Saharan dust measurements made
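The essence of the optical-property correction described here is to keep bin boundaries fixed in scattering cross section and re-derive the corresponding diameters for the measured particles' refractive index. The toy sketch below assumes a monotone cross-section curve so that simple interpolation suffices; real Mie curves oscillate, which is exactly what the paper's probability-density method is designed to handle.

```python
import numpy as np

# Toy monotone cross-section curve sigma(d) for the new refractive index.
d_grid = np.linspace(0.1, 3.0, 300)       # diameter, micrometres
sigma_grid = 0.5 * d_grid ** 2            # hypothetical scattering cross section

def boundaries_to_diameters(sigma_bounds, d_grid, sigma_grid):
    # Invert sigma(d) by interpolation: express calibration-derived
    # cross-section bin boundaries as particle diameters.
    return np.interp(sigma_bounds, sigma_grid, d_grid)

# Three hypothetical bin boundaries expressed as cross sections.
new_bounds = boundaries_to_diameters(np.array([0.125, 0.5, 2.0]),
                                     d_grid, sigma_grid)
```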
Shibutani, Takayuki; Onoguchi, Masahisa; Funayama, Risa; Nakajima, Kenichi; Matsuo, Shinro; Yoneyama, Hiroto; Konishi, Takahiro; Kinuya, Seigo
2015-11-01
The aim of this study was to determine the optimal reconstruction parameters of the ordered subset conjugate gradient minimizer (OSCGM) for no correction (NC), attenuation correction (AC), and AC plus scatter correction (ACSC) using the IQ-SPECT (single photon emission computed tomography) system in thallium-201 myocardial perfusion SPECT. Myocardial phantom images were acquired in two patterns, with and without a defect. The myocardial images were assessed with a 5-point visual score and with quantitative evaluations of contrast, uptake, and uniformity as functions of the subset and update (subset × iteration) settings of OSCGM and the full width at half maximum (FWHM) of the Gaussian filter for the three corrections, and the optimal reconstruction parameters of OSCGM were determined for each correction. The numbers of subsets that created suitable images were 3 or 5 for NC and AC, and 2 or 3 for ACSC. The updates that created suitable images were 30 or 40 for NC, 40 or 60 for AC, and 30 for ACSC. Furthermore, the FWHMs of the Gaussian filters were 9.6 mm or 12 mm for NC and ACSC, and 7.2 mm or 9.6 mm for AC. In conclusion, the following optimal reconstruction parameters of OSCGM were determined; NC: subset 5, iteration 8 and FWHM 9.6 mm; AC: subset 5, iteration 8 and FWHM 7.2 mm; ACSC: subset 3, iteration 10 and FWHM 9.6 mm. PMID:26596202
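The "update" count used above is simply subsets times iterations, so the concluding parameter choices can be checked against the recommended update ranges:

```python
def updates(subsets, iterations):
    # OSCGM "updates" = subsets x iterations, the quantity tuned in the study.
    return subsets * iterations

# Final choices from the abstract: NC and AC use subset 5, iteration 8;
# ACSC uses subset 3, iteration 10.
nc_updates = updates(5, 8)     # 40, within the recommended 30-40 for NC
acsc_updates = updates(3, 10)  # 30, matching the recommended 30 for ACSC
```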
Bouayed, N.; Boudjema, F.
2008-01-01
We calculate the electroweak and QCD corrections to W⁻W⁺ → tt̄ and ZZ → tt̄. We also consider the interplay of these corrections with the effect of anomalous interactions that affect the massive weak bosons and the top. The results at the VV fusion level are convoluted with the help of the effective vector boson approximation to give predictions for a high energy e⁺e⁻ collider.
NASA Astrophysics Data System (ADS)
Minet, Olaf; Scheibe, Patrick; Beuthan, Jürgen; Zabarylo, Urszula
2010-02-01
State-of-the-art image processing methods offer new possibilities for diagnosing diseases using scattered light. The optical diagnosis of rheumatism is taken as an example to show that the diagnostic sensitivity can be improved using overlapped pseudo-coloured images of different wavelengths, provided that multispectral images are recorded to compensate for any motion related artefacts which occur during examination.
Graudenz, D.
1994-04-01
Jet cross sections in deeply inelastic scattering in the case of transverse photon exchange for the production of (1+1) and (2+1) jets are calculated in next-to-leading-order QCD (here the "+1" stands for the target remnant jet, which is included in the jet definition). The jet definition scheme is based on a modified JADE cluster algorithm. The calculation of the (2+1) jet cross section is described in detail. Results for the virtual corrections as well as for the real initial- and final-state corrections are given explicitly. Numerical results are stated for jet cross sections as well as for the ratio σ_{(2+1) jet}/σ_{tot} that can be expected at E665 and DESY HERA. Furthermore, the scale ambiguity of the calculated jet cross sections is studied and different parton density parametrizations are compared.
Frolov, Alexei M.; Wardlaw, David M.
2014-09-14
Mass-dependent and field shift components of the isotopic shift are determined to high accuracy for the ground 1¹S states of the light two-electron Li⁺, Be²⁺, B³⁺, and C⁴⁺ ions. To determine the field components of these isotopic shifts we apply the Racah-Rosenthal-Breit formula. We also determine the lowest-order QED corrections to the isotopic shifts for each of these two-electron ions.
NASA Astrophysics Data System (ADS)
Yuen, W. W.; Dunaway, W.
1985-06-01
A successive approximation procedure is developed to determine the scattering correction to the Beer-Lambert law in the evaluation of geometric mean transmittance in a general multi-dimensional absorbing and scattering medium. At each step of the approximation, the evaluation of an upper and lower bound of the scattering correction requires only a single integral over the volume of the scattering medium. This represents a great reduction in mathematical complexity compared to the direct numerical approach. First-order results for a two-dimensional rectangular absorbing and scattering medium are presented. They are shown to be quite accurate in the optically thin limit and useful for engineering application for media with arbitrary optical thickness. Some interesting conclusions concerning the qualitative physical behavior of the scattering correction are also generated.
NASA Astrophysics Data System (ADS)
Holstensson, M.; Erlandsson, K.; Poludniowski, G.; Ben-Haim, S.; Hutton, B. F.
2015-04-01
An advantage of semiconductor-based dedicated cardiac single photon emission computed tomography (SPECT) cameras when compared to conventional Anger cameras is superior energy resolution. This provides the potential for improved separation of the photopeaks in dual radionuclide imaging, such as the combined use of 99mTc and 123I. There is, however, the added complexity of tailing effects in the detectors that must be accounted for. In this paper we present a model-based correction algorithm which extracts the useful primary counts of 99mTc and 123I from projection data. Equations describing the in-patient scatter and tailing effects in the detectors are iteratively solved for both radionuclides simultaneously using a maximum a posteriori probability algorithm with one-step-late evaluation. Energy window-dependent parameters for the equations describing in-patient scatter are estimated using Monte Carlo simulations. Parameters for the equations describing tailing effects are estimated using virtually scatter-free experimental measurements on a dedicated cardiac SPECT camera with CdZnTe detectors. When applied to a phantom study with both 99mTc and 123I, results show that the estimated spatial distribution of events from 99mTc in the 99mTc photopeak energy window is very similar to that measured in a single 99mTc phantom study. The extracted images of primary events display increased cold lesion contrasts for both 99mTc and 123I.
NASA Astrophysics Data System (ADS)
Chang, Qin; Li, Xiao-Nan; Sun, Jun-Feng; Yang, Yue-Ling
2016-10-01
In this paper, the contributions of weak annihilation and hard spectator scattering in B → ρK*, K*K̄*, ϕK*, ρρ and ϕϕ decays are investigated within the framework of quantum chromodynamics factorization. Using the experimental data available, we perform χ² analyses of end-point parameters in four cases based on the topology-dependent and polarization-dependent parameterization schemes. The fitted results indicate that: (i) in the topology-dependent scheme, the relation (ρ_A^i, ϕ_A^i)
Schowalter, M; Müller, K; Rosenauer, A
2012-01-01
Modified atomic scattering amplitudes (MASAs), taking into account the redistribution of charge due to bonds, and the respective correction factors considering the effect of static atomic displacements were computed for the chemically sensitive 002 reflection for ternary III-V and II-VI semiconductors. MASAs were derived from computations within the density functional theory formalism. Binary eight-atom unit cells were strained according to each strain state s (thin, intermediate, thick and fully relaxed electron microscopic specimen) and each concentration (x = 0, …, 1 in 0.01 steps), where the lattice parameters for composition x in strain state s were calculated using continuum elasticity theory. The concentration dependence was derived by computing MASAs for each of these binary cells. Correction factors for static atomic displacements were computed from relaxed atom positions by generating 50 × 50 × 50 supercells using the lattice parameter of the eight-atom unit cells. Atoms were randomly distributed according to the required composition. Polynomials were fitted to the composition dependence of the MASAs and the correction factors for the different strain states. Fit parameters are given in the paper.
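The final fitting step, a polynomial fit to the composition dependence per strain state, can be sketched with NumPy. The quadratic curve below is a made-up stand-in for a real MASA composition dependence, used only to show the fit recovering known coefficients.

```python
import numpy as np

# Composition x = 0, ..., 1 in 0.01 steps, as in the paper.
x = np.linspace(0.0, 1.0, 101)

# Hypothetical MASA(x) for one strain state (not real computed values).
masa = 3.0 + 0.8 * x - 0.2 * x ** 2

# Fit a polynomial to the composition dependence; np.polyfit returns the
# highest-order coefficient first.
coeffs = np.polyfit(x, masa, deg=2)
```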
Li, Y.; Krieger, J.B.; Norman, M.R.; Iafrate, G.J.
1991-11-15
The optimized-effective-potential (OEP) method and a method developed recently by Krieger, Li, and Iafrate (KLI) are applied to band-structure calculations of noble-gas and alkali halide solids employing the self-interaction-corrected (SIC) local-spin-density (LSD) approximation for the exchange-correlation energy functional. The resulting band gaps from both calculations are found to be in fair agreement with the experimental values. The discrepancies are typically within a few percent, with results that are nearly the same as those of previously published orbital-dependent multipotential SIC calculations, whereas the LSD results underestimate the band gaps by as much as 40%. As in the LSD (and, it is believed, even for the exact Kohn-Sham potential), both the OEP and KLI predict valence-band widths which are narrower than those of experiment. In all cases, the KLI method yields essentially the same results as the OEP.
Bistatic scattering from a cone frustum
NASA Technical Reports Server (NTRS)
Ebihara, W.; Marhefka, R. J.
1986-01-01
The bistatic scattering from a perfectly conducting cone frustum is investigated using the Geometrical Theory of Diffraction (GTD). The first-order GTD edge-diffraction solution has been extended by correcting for its failure in the specular region off the curved surface and in the rim-caustic regions of the endcaps. The corrections are accomplished by the use of transition functions which are developed and introduced into the diffraction coefficients. Theoretical results are verified in the principal plane by comparison with the moment method solution and experimental measurements. The resulting solution for the scattered fields is accurate, easy to apply, and fast to compute.
Two-loop master integrals for the mixed EW-QCD virtual corrections to Drell-Yan scattering
NASA Astrophysics Data System (ADS)
Bonciani, Roberto; Di Vita, Stefano; Mastrolia, Pierpaolo; Schubert, Ulrich
2016-09-01
We present the calculation of the master integrals needed for the two-loop QCD × EW corrections to q + q̄ → l⁻ + l⁺ and q + q̄′ → l⁻ + ν̄, for massless external particles. We treat the W and Z bosons as degenerate in mass. We identify three types of diagrams, according to the presence of massive internal lines: the no-mass type, the one-mass type, and the two-mass type, where all massive propagators, when occurring, contain the same mass value. We find a basis of 49 master integrals and evaluate them with the method of differential equations. The Magnus exponential is employed to choose a set of master integrals that obeys a canonical system of differential equations. Boundary conditions are found either by matching the solutions onto simpler integrals in special kinematic configurations, or by requiring the regularity of the solution at pseudothresholds. The canonical master integrals are finally given as Taylor series around d = 4 space-time dimensions, up to order four, with coefficients given in terms of iterated integrals, respectively up to weight four.
Scattered radiation in flat-detector based cone-beam CT: analysis of voxelized patient simulations
NASA Astrophysics Data System (ADS)
Wiegert, Jens; Bertram, Matthias
2006-03-01
This paper presents a systematic assessment of scattered radiation in flat-detector based cone-beam CT. The analysis is based on simulated scatter projections of voxelized CT images of different body regions, allowing accurate quantification of scattered radiation for realistic and clinically relevant patient geometries. Using analytically computed primary projection data of high spatial resolution in combination with Monte-Carlo simulated scattered radiation, practically noise-free reference data sets are computed with and without inclusion of scatter. The impact of scatter is studied both in the projection data and in the reconstructed volume for the head, thorax, and pelvis regions. Currently available anti-scatter grid geometries do not sufficiently compensate for scatter-induced cupping and streak artifacts, so additional software-based scatter correction is required. The required accuracy of scatter compensation approaches increases with increasing patient size.
Hayashi, Hisashi; Hiraoka, Nozomu
2015-04-30
Using a third-generation synchrotron source (the BL12XU beamline at SPring-8), inelastic X-ray scattering (IXS) spectra of liquid water and liquid benzene were measured at energy losses of 1-100 eV with 0.24 eV resolution for small momentum transfers (q) of 0.23 and 0.32 au with ±0.06 au uncertainty for q. For both liquids, the IXS profiles at these values of q converged well after we corrected for multiple scattering, and these results confirmed the dipole approximation for q ≤ ∼0.3 au. Several dielectric and optical functions [including the optical oscillator strength distribution (OOS), the optical energy-loss function (OLF), the complex dielectric function, the complex index of refraction, and the reflectance] in the vacuum ultraviolet region were derived and tabulated from these small-angle (small q) IXS spectra. These new data were compared with previously obtained results, and this comparison demonstrated the strong reproducibility and accuracy of IXS spectroscopy. For both water and benzene, there was a notable similarity between the OOSs of the liquids and amorphous solids, and there was no evidence of plasmon excitation in the OLF. The static structure factor [S(q)] for q ≤ ∼0.3 au was also deduced and suggests that molecular models that include electron correlation effects can serve as a good approximation for the liquid S(q) values over the full range of q.
Cheong, Kit-Leong; Wu, Ding-Tao; Zhao, Jing; Li, Shao-Ping
2015-06-26
In this study, a rapid and accurate method for quantitative analysis of natural polysaccharides and their different fractions was developed. First, high-performance size exclusion chromatography (HPSEC) was used to separate the polysaccharides; the molecular masses of their fractions were then determined by multi-angle laser light scattering (MALLS). Finally, polysaccharides or their fractions were quantified from their refractive index detector (RID) response and a universal refractive index increment (dn/dc). The accuracy of the developed method was determined for individual and mixed polysaccharide standards, including konjac glucomannan, CM-arabinan, xyloglucan, larch arabinogalactan, oat β-glucan, dextran (410, 270, and 25 kDa), mixed xyloglucan and CM-arabinan, and mixed dextran 270 kDa and CM-arabinan; average recoveries were between 90.6% and 98.3%. The limits of detection (LOD) and quantification (LOQ) ranged from 10.68 to 20.25 μg/mL and from 42.70 to 68.85 μg/mL, respectively. Compared with the conventional phenol-sulfuric acid assay and HPSEC coupled with evaporative light scattering detection (HPSEC-ELSD), the developed HPSEC-MALLS-RID method based on a universal dn/dc is simpler, faster, and more accurate, requiring neither individual polysaccharide standards nor calibration curves. The method was also successfully applied to quantitative analysis of polysaccharides and their different fractions from three medicinal plants of the genus Panax: Panax ginseng, Panax notoginseng, and Panax quinquefolius. The results suggest that the HPSEC-MALLS-RID method based on a universal dn/dc could be used as a routine technique for the quantification of polysaccharides and their fractions in natural resources. PMID:25990349
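The dn/dc-based quantification step reduces to a proportionality between RID peak area and concentration. A minimal sketch follows; `k_ri` (the instrument response constant) and all numeric values are hypothetical, and 0.146 mL/g is only a typical dn/dc for polysaccharides in aqueous eluents, not a value from the paper:

```python
def fraction_concentration(rid_peak_area, dn_dc, k_ri):
    """Concentration of one HPSEC fraction from its RID peak area.

    The RID signal is proportional to c * (dn/dc); with a universal dn/dc,
    no per-polysaccharide calibration standard is needed. k_ri is a
    one-time instrument response constant (hypothetical here).
    """
    return rid_peak_area / (k_ri * dn_dc)


def percent_recovery(measured, known):
    """Recovery check of the kind reported (90.6-98.3% for the standards)."""
    return 100.0 * measured / known
```

With `k_ri = 2.0e5` and `dn_dc = 0.146`, a peak area of 29200 maps to a concentration of 1.0 (in the units fixed by the calibration).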
Material-specific analysis using coherent-scatter imaging.
Batchelar, Deidre L; Cunningham, Ian A
2002-08-01
Coherent-scatter computed tomography (CSCT) is a novel imaging method we are developing to produce cross-sectional images based on the low-angle (<10 degrees) scatter properties of tissue. At diagnostic energies, this scatter is primarily coherent with properties dependent upon the molecular structure of the scatterer. This facilitates the production of material-specific maps of each component in a conglomerate. Our particular goal is to obtain quantitative maps of bone-mineral content. A diagnostic x-ray source and image intensifier are used to acquire scatter patterns under first-generation CT geometry. An accurate measurement of the scatter patterns is necessary to correctly identify and quantify tissue composition. This requires corrections for exposure fluctuations, temporal lag in the intensifier, and self-attenuation within the specimen. The effect of lag is corrected using an approximate convolution method. Self-attenuation causes a cupping artifact in the CSCT images and is corrected using measurements of the transmitted primary beam. An accurate correction is required for reliable density measurements from material-specific images. The correction is shown to introduce negligible noise to the images and a theoretical expression for CSCT image SNR is confirmed by experiment. With these corrections, the scatter intensity is proportional to the number of scattering centers interrogated and quantitative measurements of each material (in g/cm3) are obtained. Results are demonstrated using both a series of poly(methyl methacrylate) (PMMA) sheets of increasing thickness (2-12 mm) and a series of 5 acrylic rods containing varying amounts of hydroxyapatite (0-0.400 g/cm3), simulating the physiological range of bone-mineral density (BMD) found in trabecular bone. The excellent agreement between known and measured BMD demonstrates the viability of CSCT as a tool for densitometry.
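A first-order version of the self-attenuation correction described above can be sketched as follows, assuming the scattered rays experience approximately the same attenuation as the transmitted primary beam (the function name and interface are illustrative, not the authors' exact implementation):

```python
import numpy as np

def attenuation_corrected_scatter(scatter, i_transmitted, i_incident):
    """Correct measured low-angle scatter for self-attenuation in the specimen.

    The transmission factor is measured from the primary beam; dividing the
    scatter signal by it makes the corrected intensity proportional to the
    number of scattering centers interrogated (first-order approximation:
    scattered rays see roughly the same path attenuation as the primary).
    """
    transmission = np.asarray(i_transmitted, float) / np.asarray(i_incident, float)
    return np.asarray(scatter, float) / transmission
```

For example, a scatter reading of 50 through a specimen transmitting half the incident primary beam corrects to 100.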
Osbahr, Inga; Krause, Joachim; Bachmann, Kai; Gutzmer, Jens
2015-10-01
Identification and accurate characterization of platinum-group minerals (PGMs) is usually a very cumbersome procedure due to their small grain size (typically below 10 µm) and inconspicuous appearance under reflected light. A novel strategy for finding PGMs and quantifying their composition was developed. It combines a mineral liberation analyzer (MLA), a point logging system, and electron probe microanalysis (EPMA). As a first step, the PGMs are identified using the MLA. Grains identified as PGMs are then marked and coordinates recorded and transferred to the EPMA. Case studies illustrate that the combination of MLA, point logging, and EPMA results in the identification of a significantly higher number of PGM grains than reflected light microscopy. Analysis of PGMs by EPMA requires considerable effort due to the often significant overlaps between the X-ray spectra of almost all platinum-group and associated elements. X-ray lines suitable for quantitative analysis need to be carefully selected. As peak overlaps cannot be avoided completely, an offline overlap correction based on weight proportions has been developed. Results obtained with the procedure proposed in this study attain acceptable totals and atomic proportions, indicating that the applied corrections are appropriate.
NASA Technical Reports Server (NTRS)
Gordon, Howard R.; Wang, Menghua
1992-01-01
The first step in the Coastal Zone Color Scanner (CZCS) atmospheric-correction algorithm is the computation of the Rayleigh-scattering (RS) contribution, L_r, to the radiance leaving the top of the atmosphere over the ocean. In the present algorithm, L_r is computed by assuming that the ocean surface is flat. Calculations of the radiance leaving an RS atmosphere overlying a rough Fresnel-reflecting ocean are presented to evaluate the radiance error caused by the flat-ocean assumption. Simulations are carried out to evaluate the error incurred when the CZCS-type algorithm is applied to a realistic ocean in which the surface is roughened by the wind. In situations where there is no direct sun glitter, it is concluded that the error induced by ignoring the Rayleigh-aerosol interaction is usually larger than that caused by ignoring the surface roughness. This suggests that, in refining algorithms for future sensors, more effort should be focused on dealing with the Rayleigh-aerosol interaction than on the roughness of the sea surface.
Ouyang, L; Lee, H; Wang, J
2014-06-01
Purpose: To evaluate a moving-blocker-based approach for estimating and correcting megavoltage (MV) and kilovoltage (kV) scatter contamination in kV cone-beam computed tomography (CBCT) acquired during volumetric modulated arc therapy (VMAT). Methods: XML code was generated to enable concurrent CBCT acquisition and VMAT delivery in Varian TrueBeam developer mode. A physical attenuator (i.e., “blocker”) consisting of equally spaced lead strips (3.2 mm strip width and 3.2 mm gap in between) was mounted between the x-ray source and patient at a source-to-blocker distance of 232 mm. The blocker was simulated to be moving back and forth along the gantry rotation axis during the CBCT acquisition. Both MV and kV scatter signals were estimated simultaneously from the blocked regions of the imaging panel and interpolated into the unblocked regions. Scatter-corrected CBCT was then reconstructed from unblocked projections after scatter subtraction, using an iterative image reconstruction algorithm based on constrained optimization. Experimental studies were performed on a Catphan 600 phantom and an anthropomorphic pelvis phantom to demonstrate the feasibility of using a moving blocker for MV-kV scatter correction. Results: MV scatter greatly degrades CBCT image quality by increasing CT number inaccuracy and decreasing image contrast, in addition to the shading artifacts caused by kV scatter. These artifacts were substantially reduced in the moving-blocker-corrected CBCT images of both the Catphan and pelvis phantoms. Quantitatively, the CT number error in selected regions of interest was reduced from 377 in the kV-MV-contaminated CBCT image to 38 for the Catphan phantom. Conclusions: The moving-blocker-based strategy can successfully correct MV and kV scatter simultaneously in CBCT projection data acquired with concurrent VMAT delivery. This work was supported in part by a grant from the Cancer Prevention and Research Institute of Texas (RP130109) and a grant from the American
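The blocked-region estimation step can be sketched for a single detector row: pixels behind the lead strips see only scatter, and scatter in the unblocked pixels is interpolated from those samples. The 1-D linear interpolation here is a simplified stand-in for the authors' 2-D estimate, and all names are illustrative:

```python
import numpy as np

def estimate_scatter_row(row, blocked):
    """Estimate scatter across one detector row from a strip blocker.

    `row` is the measured signal; `blocked` is a boolean mask marking pixels
    behind the lead strips, where only scatter reaches the detector.
    Scatter in unblocked pixels is linearly interpolated from the
    blocked-pixel samples.
    """
    x = np.arange(row.size)
    return np.interp(x, x[blocked], row[blocked])

def correct_row(row, blocked):
    """Primary estimate: measured signal minus interpolated scatter, floored at 0."""
    return np.clip(row - estimate_scatter_row(row, blocked), 0.0, None)
```

If the true scatter varies slowly (here, linearly) across the row, the interpolated estimate recovers it exactly and the unblocked pixels reduce to primary signal.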
NASA Astrophysics Data System (ADS)
Chimot, J.; Vlemmix, T.; Veefkind, J. P.; de Haan, J. F.; Levelt, P. F.
2015-08-01
The Ozone Monitoring Instrument (OMI) has provided daily global measurements of tropospheric NO2 for more than a decade. Numerous studies have drawn attention to the complexities related to measurements of tropospheric NO2 in the presence of aerosols. Fine particles affect the OMI spectral measurements and the length of the average light path followed by the photons. However, they are not explicitly taken into account in the current OMI tropospheric NO2 retrieval chain. Instead, the operational OMI O2-O2 cloud retrieval algorithm is applied both to cloudy scenes and to cloud-free scenes with aerosols present. This paper describes in detail the complex interplay between the spectral effects of aerosols, the OMI O2-O2 cloud retrieval algorithm, and the impact on the accuracy of the tropospheric NO2 retrievals through the computed Air Mass Factor (AMF) over cloud-free scenes. Collocated OMI NO2 and MODIS Aqua aerosol products are analysed over East China, an industrialized area. In addition, aerosol effects on the tropospheric NO2 AMF and the retrieval of OMI cloud parameters are simulated. Both the observation-based and the simulation-based approach demonstrate that the retrieved cloud fraction increases linearly with increasing Aerosol Optical Thickness (AOT), but the magnitude of this increase depends on the aerosol properties and surface albedo. This increase is induced by the additional scattering effects of aerosols, which enhance the scene brightness. The decreasing effective cloud pressure with increasing AOT primarily represents the absorbing effects of aerosols. The study cases show that the actual aerosol correction based on the implemented OMI cloud model results in biases between -20 and -40% for the DOMINO tropospheric NO2 product in cases of high aerosol pollution (AOT ≥ 0.6) and elevated particles. On the contrary, when aerosols are relatively close to the surface or mixed with NO2, aerosol correction based on the cloud model results in
NASA Astrophysics Data System (ADS)
Zhang, T.; Zhou, L.; Tong, S.
2015-12-01
The absolute determination of the Cu isotope ratio in NIST SRM 3114 based on a regression mass bias correction model is performed for the first time with NIST SRM 944 Ga as the calibrant. A value of 0.4471±0.0013 (2SD, n=37) for the 65Cu/63Cu ratio was obtained, with a value of +0.18±0.04‰ (2SD, n=5) for δ65Cu relative to NIST 976. The availability of the NIST SRM 3114 material, now with an absolute value of the 65Cu/63Cu ratio and a δ65Cu value relative to NIST 976, makes it suitable as a new candidate reference material for Cu isotope studies. In addition, a protocol is described for the accurate and precise determination of δ65Cu values of geological reference materials. Purification of Cu from the sample matrix was performed using the AG MP-1M Bio-Rad resin. The column recovery for geological samples was found to be 100±2% (2SD, n=15). A modified method of standard-sample bracketing with internal normalization for mass bias correction was employed by adding natural Ga to both the sample and the solution of NIST SRM 3114, which was used as the bracketing standard. The absolute value of 0.4471±0.0013 (2SD, n=37) for 65Cu/63Cu quantified in this study was used to calibrate the 69Ga/71Ga ratio in the two adjacent bracketing standards of SRM 3114; their average 69Ga/71Ga value was then used to correct the 65Cu/63Cu ratio in the sample. Measured δ65Cu values of 0.18±0.04‰ (2SD, n=20), 0.13±0.04‰ (2SD, n=9), 0.08±0.03‰ (2SD, n=6), 0.01±0.06‰ (2SD, n=4), and 0.26±0.04‰ (2SD, n=7) were obtained for the five geological reference materials BCR-2, BHVO-2, AGV-2, BIR-1a, and GSP-2, respectively, in agreement with values obtained in previous studies.
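The internal-normalization step follows the exponential mass-bias law: the fractionation factor f is measured from the Ga ratio of the bracketing standards and applied to the Cu ratio of the sample. The sketch below uses standard literature atomic masses and an illustrative 69Ga/71Ga reference value; none of the numbers are taken from this paper:

```python
import math

# Atomic masses (u) of the relevant isotopes (standard literature values).
M63, M65 = 62.9295975, 64.9277895
M69, M71 = 68.9255736, 70.9247013

def exp_law_factor(r_true, r_meas, m_num, m_den):
    """Mass-bias factor f of the exponential law: R_true = R_meas * (m_num/m_den)**f."""
    return math.log(r_true / r_meas) / math.log(m_num / m_den)

def correct_cu_ratio(r65_63_meas, ga_true, ga_meas):
    """Correct a measured 65Cu/63Cu ratio by internal normalization to Ga.

    Assumes Cu and Ga experience the same instrumental mass bias f,
    which is the core assumption of the internal-normalization method.
    """
    f = exp_law_factor(ga_true, ga_meas, M69, M71)
    return r65_63_meas * (M65 / M63) ** f
```

A round-trip check: fractionate both ratios with the same f, then verify the correction recovers the true Cu ratio.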
NASA Astrophysics Data System (ADS)
Chimot, J.; Vlemmix, T.; Veefkind, J. P.; de Haan, J. F.; Levelt, P. F.
2016-02-01
The Ozone Monitoring Instrument (OMI) has provided daily global measurements of tropospheric NO2 for more than a decade. Numerous studies have drawn attention to the complexities related to measurements of tropospheric NO2 in the presence of aerosols. Fine particles affect the OMI spectral measurements and the length of the average light path followed by the photons. However, they are not explicitly taken into account in the current operational OMI tropospheric NO2 retrieval chain (DOMINO - Derivation of OMI tropospheric NO2) product. Instead, the operational OMI O2 - O2 cloud retrieval algorithm is applied both to cloudy and to cloud-free scenes (i.e. clear sky) dominated by the presence of aerosols. This paper describes in detail the complex interplay between the spectral effects of aerosols in the satellite observation and the associated response of the OMI O2 - O2 cloud retrieval algorithm. Then, it evaluates the impact on the accuracy of the tropospheric NO2 retrievals through the computed Air Mass Factor (AMF) with a focus on cloud-free scenes. For that purpose, collocated OMI NO2 and MODIS (Moderate Resolution Imaging Spectroradiometer) Aqua aerosol products are analysed over the strongly industrialized East China area. In addition, aerosol effects on the tropospheric NO2 AMF and the retrieval of OMI cloud parameters are simulated. Both the observation-based and the simulation-based approach demonstrate that the retrieved cloud fraction increases with increasing Aerosol Optical Thickness (AOT), but the magnitude of this increase depends on the aerosol properties and surface albedo. This increase is induced by the additional scattering effects of aerosols which enhance the scene brightness. The decreasing effective cloud pressure with increasing AOT primarily represents the shielding effects of the O2 - O2 column located below the aerosol layers. The study cases show that the aerosol correction based on the implemented OMI cloud model results in biases
Approximations for photoelectron scattering
NASA Astrophysics Data System (ADS)
Fritzsche, V.
1989-04-01
The errors of several approximations in the theoretical approach of photoelectron scattering are systematically studied, in tungsten, for electron energies ranging from 10 to 1000 eV. The large inaccuracies of the plane-wave approximation (PWA) are substantially reduced by means of effective scattering amplitudes in the modified small-scattering-centre approximation (MSSCA). The reduced angular momentum expansion (RAME) is so accurate that it allows reliable calculations of multiple-scattering contributions for all the energies considered.
Evaluation of QNI corrections in porous media applications
NASA Astrophysics Data System (ADS)
Radebe, M. J.; de Beer, F. C.; Nshimirimana, R.
2011-09-01
Qualitative measurement has been the more thoroughly explored aspect of digital neutron imaging, because accurate quantitative measurement requires corrections for background scatter, material scatter, and neutron spectral effects. The Quantitative Neutron Imaging (QNI) software package resulted from efforts at the Paul Scherrer Institute, Helmholtz-Zentrum Berlin (HZB), and Necsa to correct for these effects, while the sample-detector distance (SDD) principle has previously been demonstrated as a means to eliminate the material scatter effect. This work evaluates the capability of the QNI software package to produce accurate quantitative results for specific characteristics of porous media, and its role in the nondestructive quantification of material with and without calibration. The work further complements QNI by the use of different SDDs. Studies of the effective %porosity of mortar and the attenuation coefficient of water using QNI and the SDD principle are reported.
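As a minimal illustration of the porosity end-point, assuming pores contribute no attenuation so that the effective attenuation coefficient scales with the solid fraction (a Beer-Lambert simplification, not QNI's actual correction chain):

```python
def percent_porosity(mu_sample, mu_solid):
    """Effective %porosity from attenuation coefficients.

    If voids attenuate negligibly, mu_sample ~= (1 - porosity) * mu_solid,
    so porosity ~= 1 - mu_sample / mu_solid. Valid only after background,
    scatter, and spectral effects have been corrected for.
    """
    return 100.0 * (1.0 - mu_sample / mu_solid)
```

For example, a measured effective attenuation of 0.8 cm⁻¹ against a solid-matrix value of 1.0 cm⁻¹ gives 20% porosity.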
Data consistency-driven scatter kernel optimization for x-ray cone-beam CT
NASA Astrophysics Data System (ADS)
Kim, Changhwan; Park, Miran; Sung, Younghun; Lee, Jaehak; Choi, Jiyoung; Cho, Seungryong
2015-08-01
Accurate and efficient scatter correction is essential for acquiring high-quality x-ray cone-beam CT (CBCT) images for various applications. This study was conducted to demonstrate the feasibility of using the data consistency condition (DCC) as a criterion for scatter kernel optimization in scatter deconvolution methods in CBCT. Because data consistency in the mid-plane of CBCT is primarily challenged by scatter, we utilized data consistency to confirm the degree of scatter correction and to steer the update in iterative kernel optimization. By means of the parallel-beam DCC via fan-parallel rebinning, we iteratively optimized the scatter kernel parameters, using a particle swarm optimization algorithm for its computational efficiency and excellent convergence. The proposed method was validated by a simulation study using the XCAT numerical phantom and by experimental studies using the ACS head phantom and the pelvic part of the Rando phantom. The results showed that the proposed method can effectively improve the accuracy of deconvolution-based scatter correction. Quantitative assessments of image quality parameters such as contrast and structural similarity (SSIM) revealed that the optimally selected scatter kernel improves the contrast to up to 99.5%, 94.4%, and 84.4% of that of scatter-free images, and the SSIM to up to 96.7%, 90.5%, and 87.8%, in the XCAT, ACS head phantom, and pelvis phantom studies, respectively. The proposed method can achieve accurate and efficient scatter correction from a single cone-beam scan without the need for any auxiliary hardware or additional experimentation.
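The underlying deconvolution model (scatter estimated as the measured projection convolved with a parameterized kernel, then subtracted) can be sketched as follows. The Gaussian kernel and its two parameters stand in for whatever kernel family the optimizer tunes; the DCC-driven particle swarm loop itself is omitted:

```python
import numpy as np

def gaussian_kernel(shape, sigma, amplitude):
    """Parameterized scatter kernel on the detector grid.

    `sigma` (spread) and `amplitude` (total scatter fraction) are the kind
    of parameters a swarm optimizer would tune against the DCC. The kernel
    is built in FFT (wrap-around) ordering and normalized so it sums to
    `amplitude`.
    """
    y = np.fft.fftfreq(shape[0]) * shape[0]
    x = np.fft.fftfreq(shape[1]) * shape[1]
    yy, xx = np.meshgrid(y, x, indexing="ij")
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return amplitude * k / k.sum()

def deconvolve_scatter(total, sigma, amplitude):
    """One scatter-estimation pass: S = total (*) kernel via FFT, primary = total - S."""
    K = np.fft.fft2(gaussian_kernel(total.shape, sigma, amplitude))
    scatter = np.real(np.fft.ifft2(np.fft.fft2(total) * K))
    return np.clip(total - scatter, 0.0, None)
```

For a uniform projection the convolution is exact: with `amplitude = 0.3`, 30% of the signal is estimated as scatter everywhere.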
Atmospheric correction of high resolution land surface images
NASA Technical Reports Server (NTRS)
Diner, D. J.; Martonchik, J. V.; Danielson, E. D.; Bruegge, C. J.
1989-01-01
Algorithms that correct for atmospheric-scattering effects in high-spatial-resolution land-surface images require rapid and accurate computation of the top-of-atmosphere diffuse radiance field for arbitrarily general surface reflectance distributions (which may be both heterogeneous and non-Lambertian) and atmospheric models. Algorithms based on three-dimensional radiative transfer (3DRT) theory are being developed for this purpose. The methodology used to perform the 3DRT calculations is described, it is shown how these calculations are used to perform atmospheric corrections, and the sensitivity of the retrieved surface reflectances to atmospheric structural parameters is illustrated.
Effects of scatter radiation on reconstructed images in digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Liu, Bob; Li, Xinhua
2009-02-01
We evaluated the effects of scattered radiation on reconstructed images in digital breast tomosynthesis. Projection images of a 6 cm anthropomorphic breast phantom were acquired using a Hologic prototype digital breast tomosynthesis system. Scatter intensities in the projection images were sampled with a beam stop method, and the scatter intensity at any pixel was obtained by two-dimensional fitting. Primary-only projection images were generated by subtracting the scatter contributions from the original projection images. The three-dimensional breast was first reconstructed from the original projection images, which contained contributions from both primary rays and scattered radiation, using three different reconstruction algorithms. The same breast volume was then reconstructed using the same algorithms but based on primary-only projection images. The image artifacts, pixel value difference to noise ratio (PDNR), and detected image features in these two sets of reconstructed slices were compared to evaluate the effects of scattered radiation. It was found that scattered radiation caused inaccurate reconstruction of the x-ray attenuation property of the tissue. X-ray attenuation coefficients could be significantly underestimated in regions where the scatter intensity is high, a phenomenon similar to the cupping artifacts found in computed tomography. Scatter correction is therefore important if accurate x-ray attenuation of the tissues is needed. No significant improvement in the number of detected image features was observed after scatter correction. A more sophisticated phantom dedicated to digital breast tomosynthesis may be needed for further evaluation.
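The beam-stop workflow above (sample scatter behind the stops, fit a smooth 2-D surface, subtract) can be sketched as follows. The paper does not specify its fitting function, so a low-order polynomial surface fitted by least squares is used as one plausible, illustrative choice:

```python
import numpy as np

def _design(x, y):
    """Basis for a 2-D quadratic surface: 1, x, y, xy, x^2, y^2."""
    x = np.asarray(x, float).ravel()
    y = np.asarray(y, float).ravel()
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_scatter_surface(xs, ys, samples, shape):
    """Fit scatter values sampled behind beam stops at (xs, ys) and
    evaluate the fitted surface at every pixel of a `shape` image."""
    coef, *_ = np.linalg.lstsq(_design(xs, ys), np.asarray(samples, float),
                               rcond=None)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return (_design(xx, yy) @ coef).reshape(shape)

def primary_only(projection, scatter_image):
    """Subtract the fitted scatter to form primary-only projections."""
    return np.clip(projection - scatter_image, 0.0, None)
```

When the true scatter field really is a low-order polynomial, the fit reproduces it everywhere, which is the idealized case the test below checks.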
Trinquier, Anne; Touboul, Mathieu; Walker, Richard J
2016-02-01
Determination of the (182)W/(184)W ratio to a precision of ± 5 ppm (2σ) is desirable for constraining the timing of core formation and other early planetary differentiation processes. However, WO3(-) analysis by negative thermal ionization mass spectrometry normally results in a residual correlation between the instrumental-mass-fractionation-corrected (182)W/(184)W and (183)W/(184)W ratios that is attributed to mass-dependent variability of O isotopes over the course of an analysis and between different analyses. A second-order correction using the (183)W/(184)W ratio relies on the assumption that this ratio is constant in nature. This may prove invalid, as has already been realized for other isotope systems. The present study utilizes simultaneous monitoring of the (18)O/(16)O and W isotope ratios to correct oxide interferences on a per-integration basis and thus avoid the need for a double normalization of W isotopes. After normalization of W isotope ratios to a pair of W isotopes, following the exponential law, no residual W-O isotope correlation is observed. However, there is a nonideal mass bias residual correlation between (182)W/(i)W and (183)W/(i)W with time. Without double normalization of W isotopes and on the basis of three or four duplicate analyses, the external reproducibility per session of (182)W/(184)W and (183)W/(184)W normalized to (186)W/(183)W is 5-6 ppm (2σ, 1-3 μg loads). The combined uncertainty per session is less than 4 ppm for (183)W/(184)W and less than 6 ppm for (182)W/(184)W (2σm) for loads between 3000 and 50 ng.
Mann, Steve D.; Tornai, Martin P.
2015-01-01
The objective was to characterize the changes seen from incident Monte Carlo-based scatter distributions in dedicated three-dimensional (3-D) breast single-photon emission computed tomography, with emphasis on the impact of scatter correction using the dual-energy window (DEW) method. Changes in scatter distributions with 3-D detector position were investigated for prone breast imaging with an ideal detector. Energy spectra within a high-energy scatter window measured from simulations were linearly fit, and the slope was used to characterize scatter distributions. The impact of detector position on the measured scatter fraction within various photopeak windows and the k value (ratio of scatter within the photopeak and scatter energy windows) useful for scatter correction was determined. Results indicate that application of a single k value with the DEW method in the presence of anisotropic object scatter distribution is not appropriate for trajectories including the heart and liver. The scatter spectra’s slope demonstrates a strong correlation to measured k values. Reconstructions of fixed-tilt 3-D acquisition trajectories with a single k value show quantification errors up to 5% compared to primary-only reconstructions. However, a variable-tilt trajectory provides improved sampling and minimizes quantification errors, and thus allows for a single k value to be used with the DEW method leading to more accurate quantification. PMID:26839906
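The DEW correction itself is a one-line estimate: scatter inside the photopeak window is taken as k times the counts in an adjacent scatter window, then subtracted. The sketch below assumes a single global k, which is precisely the regime the study shows is inadequate for fixed-tilt trajectories with anisotropic object scatter:

```python
import numpy as np

def dew_correct(photopeak_counts, scatter_window_counts, k):
    """Dual-energy-window scatter correction.

    primary ~= photopeak - k * scatter_window, floored at zero counts.
    `k` comes from calibration (here a single global value; the paper
    shows k varies with detector position for anisotropic scatter).
    """
    scatter = k * np.asarray(scatter_window_counts, float)
    return np.clip(np.asarray(photopeak_counts, float) - scatter, 0.0, None)
```

For example, 100 photopeak counts with 40 scatter-window counts and k = 0.5 yield an estimated 80 primary counts.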
Political Correctness--Correct?
ERIC Educational Resources Information Center
Boase, Paul H.
1993-01-01
Examines the phenomenon of political correctness, its roots and objectives, and its successes and failures in coping with the conflicts and clashes of multicultural campuses. Argues that speech codes indicate failure in academia's primary mission to civilize and educate through talk, discussion, thought, and persuasion. (SR)
NASA Astrophysics Data System (ADS)
Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.; Hünenberger, Philippe H.
2013-11-01
The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges -5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol-1) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB
NASA Astrophysics Data System (ADS)
Shah, Jainil; Pachon, Jan H.; Madhav, Priti; Tornai, Martin P.
2011-03-01
With a dedicated breast CT system using a quasi-monochromatic x-ray source and flat-panel digital detector, the 2D and 3D scatter to primary ratios (SPR) of various geometric phantoms having different densities were characterized in detail. Projections were acquired using geometric and anthropomorphic breast phantoms. Each phantom was filled with 700 ml of 5 different water-methanol concentrations to simulate effective boundary densities of breast compositions from 100% glandular (1.0 g/cm³) to 100% fat (0.79 g/cm³). Projections were acquired with and without a beam stop array. For each projection, 2D scatter was determined by cubic spline interpolating the values behind the shadow of each beam stop through the object. Scatter-corrected projections were obtained by subtracting the scatter, and the 2D SPRs were obtained as a ratio of the scatter to scatter-corrected projections. Additionally, the (un)corrected data were individually iteratively reconstructed. The (un)corrected 3D volumes were subsequently subtracted, and the 3D SPRs obtained from the ratio of the scatter volume to scatter-corrected (or primary) volume. Results show that the 2D SPR values peak in the center of the volumes and are overall highest for the simulated 100% glandular composition. Consequently, scatter-corrected reconstructions have visibly reduced cupping regardless of the phantom geometry, as well as more accurate linear attenuation coefficients. The corresponding 3D SPRs are highest in the center and decrease radially. Not surprisingly, for both 2D and 3D the measured SPR values depended on both phantom geometry and object density, with geometry dominating for 3D SPRs. Overall, these results indicate the need for scatter correction given the different geometries and breast densities that will be encountered with 3D cone beam breast CT.
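The per-projection SPR computation summarized above can be sketched as follows. This is a toy one-dimensional illustration with invented numbers: simple linear interpolation between beam-stop samples stands in for the paper's cubic-spline interpolation over a 2D beam-stop array, and `interp_scatter`/`spr_row` are hypothetical helper names, not the authors' code.

```python
def interp_scatter(stop_cols, stop_vals, n_cols):
    """Interpolate scatter samples (measured behind beam stops) across a row."""
    out = []
    for c in range(n_cols):
        if c <= stop_cols[0]:
            out.append(stop_vals[0]); continue
        if c >= stop_cols[-1]:
            out.append(stop_vals[-1]); continue
        # find the bracketing pair of beam-stop samples
        for (c0, v0), (c1, v1) in zip(zip(stop_cols, stop_vals),
                                      zip(stop_cols[1:], stop_vals[1:])):
            if c0 <= c <= c1:
                t = (c - c0) / (c1 - c0)
                out.append(v0 + t * (v1 - v0))
                break
    return out

def spr_row(total, scatter):
    """SPR per pixel: scatter divided by the scatter-corrected (primary) signal."""
    return [s / (t - s) for t, s in zip(total, scatter)]

# toy detector row: total signal with a scatter hump in the centre
total   = [100, 120, 140, 150, 140, 120, 100]
stops   = [0, 3, 6]          # columns shadowed by beam stops
samples = [20, 50, 20]       # signal measured behind the stops
scatter = interp_scatter(stops, samples, len(total))
spr     = spr_row(total, scatter)
```

Subtracting `scatter` from `total` before taking the ratio mirrors the abstract's definition of SPR as scatter over the scatter-corrected projection.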
Refining atmospheric correction for aquatic remote spectroscopy
NASA Astrophysics Data System (ADS)
Thompson, D. R.; Guild, L. S.; Negrey, K.; Kudela, R. M.; Palacios, S. L.; Gao, B. C.; Green, R. O.
2015-12-01
Remote spectroscopic investigations of aquatic ecosystems typically measure radiance at high spectral resolution and then correct these data for atmospheric effects to estimate Remote Sensing Reflectance (Rrs) at the surface. These reflectance spectra reveal phytoplankton absorption and scattering features, enabling accurate retrieval of traditional remote sensing parameters, such as chlorophyll-a, and new retrievals of additional parameters, such as phytoplankton functional type. Future missions will significantly expand coverage of these datasets with airborne campaigns (CORAL, ORCAS, and the HyspIRI Preparatory Campaign) and orbital instruments (EnMAP, HyspIRI). Remote characterization of phytoplankton can be influenced by errors in atmospheric correction due to uncertain atmospheric constituents such as aerosols. The "empirical line method" is an expedient solution that estimates a linear relationship between observed radiances and in-situ reflectance measurements. While this approach is common for terrestrial data, there are few examples involving aquatic scenes. Aquatic scenes are challenging due to the difficulty of acquiring in situ measurements from open water; with only a handful of reference spectra, the resulting corrections may not be stable. Here we present a brief overview of methods for atmospheric correction, and describe ongoing experiments on empirical line adjustment with AVIRIS overflights of Monterey Bay from the 2013-2014 HyspIRI preparatory campaign. We present new methods, based on generalized Tikhonov regularization, to improve stability and performance when few reference spectra are available. Copyright 2015 California Institute of Technology. All Rights Reserved. US Government Support Acknowledged.
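The stability problem described above (an empirical-line fit with only a handful of reference spectra) can be sketched for a single band as a Tikhonov-damped least-squares fit of gain and offset. The function name, the prior, and all numbers are illustrative assumptions, not the authors' actual regularization.

```python
def empirical_line_tikhonov(L, rrs, lam=0.1, g0=1.0, o0=0.0):
    """Fit rrs ~ g*L + o for one band, damping [g, o] toward a prior (g0, o0)."""
    n = len(L)
    # normal equations for [g, o], with lam added to the diagonal (Tikhonov)
    a11 = sum(x * x for x in L) + lam
    a12 = sum(L)
    a22 = n + lam
    b1  = sum(x * y for x, y in zip(L, rrs)) + lam * g0
    b2  = sum(rrs) + lam * o0
    det = a11 * a22 - a12 * a12
    g = (b1 * a22 - b2 * a12) / det
    o = (a11 * b2 - a12 * b1) / det
    return g, o

# only two in-situ reference spectra: the undamped fit is the exact line
g, o = empirical_line_tikhonov([10.0, 20.0], [0.02, 0.05], lam=0.0)
```

With `lam=0` this reduces to ordinary least squares; increasing `lam` pulls the solution toward the prior, which is the stabilizing behavior wanted when reference spectra are scarce.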
How flatbed scanners upset accurate film dosimetry
NASA Astrophysics Data System (ADS)
van Battum, L. J.; Huizenga, H.; Verdaasdonk, R. M.; Heukelom, S.
2016-01-01
Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% deviation for pixels in the extreme lateral position. Light polarization due to film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channel. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and therefore determination of the LSE per color channel and per dose delivered to the film.
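A per-channel LSE correction of the kind the abstract calls for can be sketched as a multiplicative factor that grows with lateral offset. The quadratic shape and the coefficients below are purely illustrative assumptions (chosen so the red channel reaches roughly the reported 14% near the lateral edge); a real correction would be measured per scanner, channel and dose.

```python
def lse_factor(x_mm, channel):
    """Relative readout change vs. lateral offset x (0 = central scan axis).
    Hypothetical quadratic model; coefficients are illustrative only."""
    k = {"red": 8e-6, "green": 6e-6, "blue": 4e-6}[channel]
    return 1.0 + k * x_mm ** 2

def correct_pixel(value, x_mm, channel):
    """Undo the lateral scan effect for one pixel value."""
    return value / lse_factor(x_mm, channel)
```

The channel-dependent coefficients mirror the paper's finding that the effect differs in magnitude between the red, green and blue channels.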
How Accurately can we Calculate Thermal Systems?
Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A
2004-04-20
I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as k_eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore, rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully this will eventually lead to improvements in both our codes and the thermal scattering models that they use. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering and that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium-fueled thermal system, i.e., our typical thermal reactors.
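The "spread" the author proposes to quantify can be summarized very simply once each code package has reported its k_eff for a benchmark. A minimal sketch, with made-up values and hypothetical code names:

```python
def keff_spread(results):
    """Summarize agreement between k_eff values from several code packages."""
    vals = list(results.values())
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / (len(vals) - 1)
    return {
        "mean": mean,
        "spread_pcm": (max(vals) - min(vals)) * 1e5,  # 1 pcm = 1e-5 in k
        "stdev_pcm": var ** 0.5 * 1e5,
    }

# illustrative numbers only, not actual benchmark results
s = keff_spread({"code_A": 1.0012, "code_B": 1.0005, "code_C": 1.0009})
```

Reporting the spread in pcm (1e-5 in k) is the conventional unit for comparing criticality results between codes.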
Electromagnetic wave scattering by Schwarzschild black holes.
Crispino, Luís C B; Dolan, Sam R; Oliveira, Ednilton S
2009-06-12
We analyze the scattering of a planar monochromatic electromagnetic wave incident upon a Schwarzschild black hole. We obtain accurate numerical results from the partial wave method for the electromagnetic scattering cross section and show that they are in excellent agreement with analytical approximations. The scattering of electromagnetic waves is compared with the scattering of scalar, spinor, and gravitational waves. We present a unified picture of the scattering of all massless fields for the first time. PMID:19658920
Modeling transmission and scatter for photon beam attenuators.
Ahnesjö, A; Weber, L; Nilsson, P
1995-11-01
The development of treatment planning methods in radiation therapy requires dose calculation methods that are both accurate and general enough to provide a dose per unit monitor setting for a broad variety of fields and beam modifiers. The purpose of this work was to develop models for calculation of scatter and transmission for photon beam attenuators such as compensating filters, wedges, and block trays. The attenuation of the beam is calculated using a spectrum of the beam and a correction factor based on attenuation measurements. Small-angle coherent scatter and electron binding effects on scattering cross sections are considered by use of a correction factor. Quality changes in beam penetrability and energy fluence to dose conversion are modeled by use of the calculated primary beam spectrum after passage through the attenuator. The beam spectra are derived by the depth dose effective method, i.e., by minimizing the difference between measured and calculated depth dose distributions, where the calculated distributions are derived by superposing data from a database for monoenergetic photons. The attenuator scatter is integrated over the area viewed from the calculation point using first-scatter theory. Calculations are simplified by replacing the energy- and angular-dependent cross-section formulas with the forward scatter constant r2(0) and a set of parametrized correction functions. The set of corrections includes functions for the Compton energy loss, scatter attenuation, and secondary bremsstrahlung production. The effect of charged particle contamination is bypassed by avoiding use of dmax for absolute dose calibrations. The results of the model are compared with scatter measurements in air for copper and lead filters and with dose to a water phantom for lead filters for 4 and 18 MV. For attenuated beams, downstream of the buildup region, the calculated results agree with measurements at the 1.5% level. The accuracy was slightly less in situations
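The spectrum-based attenuation step described above amounts to an energy-fluence-weighted average of exp(−μt) over the primary beam spectrum. A minimal sketch with an invented two-line spectrum and illustrative attenuation coefficients (not data from the paper):

```python
import math

def transmission(spectrum, mu, thickness_cm):
    """Spectrum-averaged transmission through an attenuator.
    spectrum: {E_MeV: relative energy fluence}; mu: {E_MeV: 1/cm}."""
    num = sum(w * math.exp(-mu[e] * thickness_cm) for e, w in spectrum.items())
    den = sum(spectrum.values())
    return num / den

spec = {1.0: 0.5, 3.0: 0.5}      # toy two-component spectrum
mu   = {1.0: 0.7, 3.0: 0.4}      # lower attenuation at higher energy
t = transmission(spec, mu, 2.0)
```

Because exp(−μt) is convex in μ, the spectrum-averaged transmission always exceeds the transmission computed from the mean attenuation coefficient, which is the beam-hardening effect a single effective μ cannot capture.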
Quirk, Thomas J., IV
2004-08-01
The Integrated TIGER Series (ITS) is a software package that solves coupled electron-photon transport problems. ITS performs analog photon tracking for energies between 1 keV and 1 GeV. Unlike its deterministic counterpart, the Monte Carlo calculations of ITS do not require a memory-intensive meshing of phase space; however, its solutions carry statistical variations. Reducing these variations is heavily dependent on runtime. Monte Carlo simulations must therefore be both physically accurate and computationally efficient. Compton scattering is the dominant photon interaction above 100 keV and below 5-10 MeV, with higher cutoffs occurring in lighter atoms. In its current model of Compton scattering, ITS corrects the differential Klein-Nishina cross sections (which assume a stationary, free electron) with the incoherent scattering function, a function dependent on both the momentum transfer and the atomic number of the scattering medium. While this technique accounts for binding effects on the scattering angle, it excludes the Doppler broadening the Compton line undergoes because of the momentum distribution in each bound state. To correct for these effects, Ribberfors' relativistic impulse approximation (IA) will be employed to create scattering cross sections differential in both energy and angle for each element. Using the parameterizations suggested by Brusa et al., scattered photon energies and angles can be accurately sampled at high efficiency with minimal physical data. Two-body kinematics then dictates the electron's scattered direction and energy. Finally, the atomic ionization is relaxed via Auger emission or fluorescence. Future work will extend these improvements in incoherent scattering to compounds and to adjoint calculations.
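For orientation, the free-electron Klein-Nishina baseline that the abstract's Doppler-broadened treatment improves upon can be sampled by plain rejection. This sketch is not the Brusa et al. impulse-approximation sampling (which needs per-element Compton profiles); it samples the stationary-free-electron distribution only.

```python
import math, random

def sample_kn(alpha, rng=random.random):
    """Sample eps = E'/E from the Klein-Nishina distribution by rejection,
    for incident photon energy alpha = E / (m_e c^2)."""
    eps_min = 1.0 / (1.0 + 2.0 * alpha)
    bound = eps_min + 1.0 / eps_min          # majorant of (eps + 1/eps) on [eps_min, 1]
    while True:
        eps = eps_min + (1.0 - eps_min) * rng()
        cos_t = 1.0 + 1.0 / alpha - 1.0 / (alpha * eps)   # Compton kinematics
        sin2 = 1.0 - cos_t * cos_t
        f = (eps + 1.0 / eps) * (1.0 - eps * sin2 / (1.0 + eps * eps))
        if rng() * bound <= f:
            return eps, cos_t

random.seed(1)
eps, cos_t = sample_kn(alpha=2.0)   # roughly a 1 MeV photon
```

The bracketed factor lies between 1/2 and 1, so `bound` is a valid rejection envelope; two-body kinematics then fixes the electron direction and energy from `eps` and `cos_t`, as in the abstract.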
Accurate estimation of σ⁰ using AIRSAR data
NASA Technical Reports Server (NTRS)
Holecz, Francesco; Rignot, Eric
1995-01-01
During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data as well as estimation of geophysical parameters from SAR data have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient σ⁰. In terrain with relief variations, radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of the aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and the position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of σ⁰, and precision of the estimated forest biomass.
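The scattering-area part of the correction can be sketched as a normalization by the ratio of the true local illuminated area (from the DEM-derived local incidence angle) to the area assumed for flat terrain. This is a simplified geometry with a hypothetical function name; the paper's method additionally corrects the antenna gain pattern using aircraft position and attitude, which is omitted here.

```python
import math

def sigma0_corrected_db(sigma0_raw_db, theta_flat_deg, theta_local_deg):
    """Area-normalize a backscatter value (in dB) for local incidence angle."""
    a_flat  = 1.0 / math.sin(math.radians(theta_flat_deg))
    a_local = 1.0 / math.sin(math.radians(theta_local_deg))
    # a larger true scattering area means the raw value overestimates sigma0
    return sigma0_raw_db - 10.0 * math.log10(a_local / a_flat)
```

On flat terrain (local angle equals the nominal angle) the correction vanishes; a slope facing the radar (smaller local incidence angle, larger illuminated area) pushes the corrected value down.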
Artemyev, A. V.; Mourenas, D.; Krasnoselskikh, V. V.
2015-06-15
In this paper, we study relativistic electron scattering by fast magnetosonic waves. We compare results of test particle simulations and quasi-linear theory for different spectra of waves to investigate how a fine structure of the wave emission can influence electron resonant scattering. We show that for a realistically wide distribution of wave normal angles θ (i.e., when the dispersion δθ ≥ 0.5°), relativistic electron scattering is similar for a wide wave spectrum and for a spectrum consisting of well-separated ion cyclotron harmonics. Comparisons of test particle simulations with quasi-linear theory show that for δθ > 0.5°, the quasi-linear approximation describes resonant scattering correctly for a large enough plasma frequency. For a very narrow θ distribution (when δθ ∼ 0.05°), however, the effect of a fine structure in the wave spectrum becomes important. In this case, quasi-linear theory clearly fails to describe electron scattering by fast magnetosonic waves accurately. We also study the effect of high wave amplitudes on relativistic electron scattering. For typical conditions in the Earth's radiation belts, the quasi-linear approximation cannot accurately describe electron scattering for waves with averaged amplitudes >300 pT. We discuss various applications of the obtained results for modeling electron dynamics in the radiation belts and in the Earth's magnetotail.
NASA Astrophysics Data System (ADS)
Perim de Faria, Julia; Bundke, Ulrich; Onasch, Timothy B.; Freedman, Andrew; Petzold, Andreas
2016-04-01
The necessity to quantify the direct impact of aerosol particles on climate forcing is already well known; assessing this impact requires continuous and systematic measurements of the aerosol optical properties. Two of the main parameters that need to be accurately measured are the aerosol optical depth and single scattering albedo (SSA, defined as the ratio of particulate scattering to extinction). The measurement of single scattering albedo commonly involves the measurement of two optical parameters, the scattering and the absorption coefficients. Although there are well established technologies to measure both of these parameters, the use of two separate instruments with different principles and uncertainties represents a potential source of significant errors and biases. Based on the recently developed cavity attenuated phase shift particle extinction monitor (CAPS PMex) instrument, the CAPS PMssa instrument combines the CAPS technology to measure particle extinction with an integrating sphere capable of simultaneously measuring the scattering coefficient of the same sample. The scattering channel is calibrated to the extinction channel, such that the accuracy of the single scattering albedo measurement is only a function of the accuracy of the extinction measurement and the nephelometer truncation losses. This gives the instrument an accurate and direct measurement of the single scattering albedo. In this study, we assess the measurements of both the extinction and scattering channels of the CAPS PMssa through intercomparisons with Mie theory, as a fundamental comparison, and with proven technologies, such as integrating nephelometers and filter-based absorption monitors. For comparison, we use two nephelometers, a TSI 3563 and an Aurora 4000, and two measurements of the absorption coefficient, using a Particulate Soot Absorption Photometer (PSAP) and a Multi Angle Absorption Photometer (MAAP). We also assess the indirect absorption coefficient
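The calibration idea described above can be sketched in a few lines: tie the scattering channel's gain to the extinction channel using a purely scattering (non-absorbing) calibration aerosol, then form SSA as a direct ratio within one instrument. Function names and numbers are illustrative, not the instrument's actual processing.

```python
def calibrate_scattering(raw_scat_white, ext_white):
    """Gain tying the scattering channel to the extinction channel, using a
    purely scattering aerosol for which SSA = 1 by construction."""
    return ext_white / raw_scat_white

def ssa(raw_scat, ext, gain):
    """Single scattering albedo: calibrated scattering over extinction."""
    return gain * raw_scat / ext

g = calibrate_scattering(raw_scat_white=80.0, ext_white=100.0)
w = ssa(raw_scat=40.0, ext=62.5, gain=g)
```

Because the gain cancels any independent scattering-channel calibration error, the SSA accuracy is tied to the extinction measurement (plus truncation losses), which is the design point the abstract emphasizes.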
Inelastic scattering in condensed matter with high intensity Moessbauer radiation
NASA Astrophysics Data System (ADS)
Yelon, W. B.; Schupp, G.
1991-05-01
We give a progress report for the work which has been carried out in the last three years with DOE support. A facility for high-intensity Moessbauer scattering is now fully operational at the University of Missouri Research Reactor (MURR), as is a facility at Purdue using special isotopes produced at MURR. High precision, fundamental Moessbauer effect studies have been carried out using Bragg scattering filters to suppress unwanted radiation. These have led to a Fourier transform method for describing Moessbauer effect (ME) lineshape and a direct method of fitting ME data to the convolution integral. These methods allow complete correction for source resonance self absorption and the accurate representation of interference effects that add an asymmetric component to the ME lines. We have begun applying these techniques to attenuated ME sources whose central peak has been attenuated by stationary resonant absorbers, to make a novel independent determination of interference parameters and line-shape behavior in the resonance asymptotic region. This analysis is important to both fundamental ME studies and to scattering studies for which a deconvolution is essential for extracting the correct recoilless fractions and interference parameters. A number of scattering studies have been successfully carried out, including a study of the thermal diffuse scattering in Si, which led to an analysis of the resolution function for gamma-ray scattering. Also studied were the anharmonic motion in Na metal and the charge density wave satellite reflection Debye-Waller factor in TaS2, which indicates phason rather than phonon behavior. Using a specially constructed sample cell which enables us to vary temperatures from −10 °C to 110 °C, we have begun quasielastic diffusion studies in viscous liquids; current results are summarized. Included are the temperature and Q dependence of the scattering in pentadecane and diffusion in glycerol.
Environment scattering in GADRAS.
Thoreson, Gregory G.; Mitchell, Dean J; Theisen, Lisa Anne; Harding, Lee T.
2013-09-01
Radiation transport calculations were performed to compute the angular tallies for scattered gamma-rays as a function of distance, height, and environment. Green's functions were then used to encapsulate the results in a reusable transformation function. The calculations represent the transport of photons throughout scattering surfaces that surround sources and detectors, such as the ground and walls. Utilization of these calculations in GADRAS (Gamma Detector Response and Analysis Software) enables accurate computation of environmental scattering for a variety of environments and source configurations. This capability, which agrees well with numerous experimental benchmark measurements, is now deployed with GADRAS Version 18.2 as the basis for the computation of scattered radiation.
Integral method of wall interference correction in low-speed wind tunnels
NASA Technical Reports Server (NTRS)
Zhou, Changhai
1987-01-01
The analytical solution of Poisson's equation, derived from the definition of vorticity, was applied to the calculation of interference velocities due to the presence of wind tunnel walls. This approach, called the Integral Method, allows an accurate evaluation of wall interference for separated or more complicated flows without the need for considering any features of the model. All the information necessary for obtaining the wall correction is contained in wall pressure measurements. The correction is not sensitive to normal data scatter, and the computations are fast enough for on-line data processing.
A review of the kinetic detail required for accurate predictions of normal shock waves
NASA Technical Reports Server (NTRS)
Muntz, E. P.; Erwin, Daniel A.; Pham-Van-diep, Gerald C.
1991-01-01
Several aspects of the kinetic models used in the collision phase of Monte Carlo direct simulations have been studied. Accurate molecular velocity distribution function predictions require a significantly increased number of computational cells in one maximum slope shock thickness, compared to predictions of macroscopic properties. The shape of the highly repulsive portion of the interatomic potential for argon is not well modeled by conventional interatomic potentials; this portion of the potential controls high Mach number shock thickness predictions, indicating that the specification of the energetic repulsive portion of interatomic or intermolecular potentials must be chosen with care for correct modeling of nonequilibrium flows at high temperatures. It has been shown for inverse power potentials that the assumption of variable hard sphere scattering provides accurate predictions of the macroscopic properties in shock waves, by comparison with simulations in which differential scattering is employed in the collision phase. On the other hand, velocity distribution functions are not well predicted by the variable hard sphere scattering model for softer potentials at higher Mach numbers.
NASA Astrophysics Data System (ADS)
Kedziera, Dariusz; Mentel, Łukasz; Żuchowski, Piotr S.; Knoop, Steven
2015-06-01
We have obtained accurate ab initio ⁴Σ⁺ quartet potentials for the diatomic metastable triplet helium + alkali-metal (Li, Na, K, Rb) systems, using all-electron restricted open-shell coupled cluster singles and doubles with noniterative triples corrections [CCSD(T)] calculations and accurate calculations of the long-range C6 coefficients. These potentials provide accurate ab initio quartet scattering lengths, which is possible for these many-electron systems because the small reduced masses and shallow potentials result in a small number of bound states. Our results are relevant for ultracold metastable triplet helium + alkali-metal mixture experiments.
Object shape dependent scatter simulations for PET
Barney, J.S.; Rogers, J.G.; Harrop, R.; Hoverath, H.
1991-04-01
This paper reports on the increased scatter fraction seen in positron volume imaging when compared with conventional slice positron emission tomography, which has created a need for better characterization and correction of scattered gamma rays in positron imaging. An analytical simulation of singly scattered gamma rays, and an extension to estimate multiply scattered rays, were verified using Monte Carlo simulation. The Monte Carlo simulation was itself verified using measured tomography data. The analytical simulation was used to study some cases of interest for scatter correction.
On the accurate estimation of gap fraction during daytime with digital cover photography
NASA Astrophysics Data System (ADS)
Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.
2015-12-01
Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, the intervention of subjectivity, such as determining the camera relative exposure value (REV) and the threshold in the histogram, has hindered computing accurate gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions by DCP. The novel method computes gap fraction using a single unsaturated DCP raw image, which is corrected for scattering effects by canopies, and a sky image reconstructed from the raw-format image. To test the sensitivity of the gap fraction derived with the novel method to diverse REVs, solar zenith angles and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REVs from 0 to −5. The novel method showed little variation of gap fraction across different REVs in both dense and sparse canopies across a diverse range of solar zenith angles. The perforated panel experiment, which was used to test the accuracy of the estimated gap fraction, confirmed that the novel method produced accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful for monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.
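The final step of any DCP gap-fraction retrieval reduces to classifying pixels as sky or canopy and taking the sky-pixel share. A toy sketch with an invented image: the fixed threshold here stands in for the method's automated, exposure-robust sky/canopy split applied after the scattering correction described above.

```python
def gap_fraction(image, threshold):
    """Gap fraction = sky pixels / total pixels.
    image: 2D list of brightness values; a pixel is sky if value > threshold."""
    sky = total = 0
    for row in image:
        for v in row:
            total += 1
            if v > threshold:
                sky += 1
    return sky / total

# toy 3x3 frame: bright sky pixels (250) and dark canopy pixels
img = [[250, 250, 30],
       [250, 40, 20],
       [250, 250, 250]]
gf = gap_fraction(img, threshold=128)
```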
A New Polyethylene Scattering Law Determined Using Inelastic Neutron Scattering
Lavelle, Christopher M; Liu, C; Stone, Matthew B
2013-01-01
Monte Carlo neutron transport codes such as MCNP rely on accurate data for nuclear physics cross sections to produce accurate results. At low energy, this takes the form of scattering laws based on the dynamic structure factor, S(Q,E). High density polyethylene (HDPE) is frequently employed as a neutron moderator at both high and low temperatures; however, the only cross sections available are for T = 300 K, and the evaluation has not been updated in quite some time. In this paper we describe inelastic neutron scattering measurements on HDPE at 5 and 300 K which are used to improve the scattering law for HDPE. We describe the experimental methods, review some of the past HDPE scattering laws, and compare computations using these models to the measured S(Q,E). The total cross section is compared to available data, and the treatment of the carbon secondary scatterer as a free gas is assessed. We also discuss the use of the measurement itself as a scattering law via the one-phonon approximation. We show that a scattering law computed using a more detailed model for the Generalized Density of States (GDOS) compares more favorably to this experiment, suggesting that inelastic neutron scattering can play an important role in both the development and validation of new scattering laws for Monte Carlo work.
Accurate monotone cubic interpolation
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1991-01-01
Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema, where accuracy degenerates to second order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants that preserve monotonicity as well as uniform third- and fourth-order accuracy are presented. The gain in accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
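For context, the standard monotonicity-constrained construction that the abstract improves upon (Fritsch-Carlson slope limiting for a cubic Hermite interpolant) can be sketched as follows. This is the conventional baseline, not the paper's higher-accuracy algorithm.

```python
def fritsch_carlson_slopes(x, y):
    """Node slopes for a monotone piecewise-cubic Hermite interpolant."""
    d = [(y[i + 1] - y[i]) / (x[i + 1] - x[i]) for i in range(len(x) - 1)]
    m = [d[0]]
    for i in range(1, len(d)):
        # zero slope at local extrema; averaged secants elsewhere
        m.append(0.0 if d[i - 1] * d[i] <= 0 else (d[i - 1] + d[i]) / 2.0)
    m.append(d[-1])
    for i in range(len(d)):
        if d[i] == 0.0:
            m[i] = m[i + 1] = 0.0
        else:
            a, b = m[i] / d[i], m[i + 1] / d[i]
            s = (a * a + b * b) ** 0.5
            if s > 3.0:                      # project into the monotone region
                m[i], m[i + 1] = 3.0 * a / s * d[i], 3.0 * b / s * d[i]
    return m

def eval_hermite(x, y, m, xq):
    """Evaluate the cubic Hermite interpolant at xq."""
    i = 0
    while i < len(x) - 2 and xq > x[i + 1]:
        i += 1
    h = x[i + 1] - x[i]
    t = (xq - x[i]) / h
    return ((1 + 2 * t) * (1 - t) ** 2 * y[i] + t * (1 - t) ** 2 * h * m[i]
            + t * t * (3 - 2 * t) * y[i + 1] + t * t * (t - 1) * h * m[i + 1])

x = [0.0, 1.0, 2.0, 3.0]
y = [0.0, 0.1, 0.9, 1.0]
m = fritsch_carlson_slopes(x, y)
vals = [eval_hermite(x, y, m, 3.0 * k / 100) for k in range(101)]
```

The slope clipping near the steep middle interval is exactly where this baseline drops to second-order accuracy, which is the limitation the paper's relaxed geometric framework addresses.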
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10⁶) periods of propagation with eight grid points per wavelength.
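The order-of-accuracy idea behind these schemes can be illustrated with the familiar second- and fourth-order central first-derivative stencils, checked on sin(x). This only demonstrates how formal order shows up as error decay; the paper's algorithms are much higher order and multidimensional.

```python
import math

def d1_o2(f, x, h):
    """Second-order central difference for f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def d1_o4(f, x, h):
    """Fourth-order central difference for f'(x)."""
    return (-f(x + 2 * h) + 8 * f(x + h) - 8 * f(x - h) + f(x - 2 * h)) / (12 * h)

x0, h = 0.7, 0.1
e2 = abs(d1_o2(math.sin, x0, h) - math.cos(x0))
e4 = abs(d1_o4(math.sin, x0, h) - math.cos(x0))
```

Halving h should shrink e2 by about 4x and e4 by about 16x, which is what "order of accuracy" means operationally, and why high-order schemes can stay accurate over O(10⁶) periods with few points per wavelength.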
The FLUKA Code: An Accurate Simulation Tool for Particle Therapy
Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T.; Cerutti, Francesco; Chin, Mary P. W.; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G.; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R.; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis
2016-01-01
Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field, as shown in the presented benchmarks against experimental data with both ⁴He and ¹²C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions leads to the excellent agreement of calculated depth–dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is also capable of importing radiotherapy treatment data described in the DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features. PMID:27242956
Accurate thickness measurement of graphene
NASA Astrophysics Data System (ADS)
Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.
2016-03-01
Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.
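The thickness measurement described above reduces to extracting a step height between the substrate plateau and the flake plateau in an AFM profile. A minimal two-level sketch of that extraction (the midrange split and the synthetic noise-free 0.34 nm profile are illustrative assumptions, not the paper's PeakForce protocol):

```python
import numpy as np

def step_height(profile):
    """Estimate a film's step height from an AFM line profile.

    Minimal two-level estimate: split points about the midrange value and
    take the difference of the two plateau means. Real analyses typically
    fit height histograms of whole images, but the idea is the same.
    """
    profile = np.asarray(profile, dtype=float)
    mid = 0.5 * (profile.min() + profile.max())
    substrate = profile[profile < mid].mean()
    film = profile[profile >= mid].mean()
    return film - substrate

# Synthetic profile: substrate at 0 nm, single-layer graphene at 0.34 nm
z = np.concatenate([np.zeros(50), np.full(50, 0.34)])
print(step_height(z))  # 0.34 for this noise-free profile
```

On real data, tip-surface interactions and adsorbate layers shift both plateaus, which is exactly the error source the paper's imaging-force control addresses.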
Analysis of particulates for very light elements by forward scattering of alpha particles
Wolfe, G.W.
1980-09-01
PIXE analysis is limited to elements heavier than sodium. A technique has been developed for obtaining quantitative information about the levels of the elements hydrogen through fluorine by forward scattering of 18 MeV alphas; these data may be obtained simultaneously with PIXE. Using substrate thicknesses of less than 1 mg/cm², sensitivities from 2.7 µg/cm² for hydrogen to 124 µg/cm² for carbon may be obtained after corrections, with determinations accurate to ±15% in 200-second irradiation times. Substantial corrections must be made.
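The element discrimination that forward scattering exploits can be illustrated with standard non-relativistic two-body elastic kinematics: at a fixed forward angle, the scattered alpha energy depends on the target mass, separating the light elements. The angle and energies below are illustrative, not the paper's calibration:

```python
import math

def scattered_alpha_energy(e0_mev, target_a, theta_deg, m_alpha=4.0):
    """Lab-frame energy of an alpha (mass m_alpha u) elastically scattered
    off a nucleus of mass target_a u at angle theta_deg. Returns None when
    the angle is kinematically forbidden for that target (light targets
    confine scattering to a forward cone)."""
    th = math.radians(theta_deg)
    s = m_alpha * math.sin(th)
    if target_a < s:
        return None
    root = math.sqrt(target_a**2 - s**2)
    k = ((m_alpha * math.cos(th) + root) / (m_alpha + target_a)) ** 2
    return k * e0_mev

for a, name in [(1, "H"), (12, "C"), (19, "F")]:
    print(name, round(scattered_alpha_energy(18.0, a, 10.0), 2))
# roughly: H 15.63, C 17.82, F 17.89 MeV at 10 degrees
```

Hydrogen recoils carry off the most energy, so its scattered-alpha line sits well below those of the heavier light elements; carbon and fluorine are closer together, which is consistent with the quoted drop in sensitivity toward heavier targets.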
The Surface Wave Scattering-Microwave Scanner (SWS-MS)
NASA Astrophysics Data System (ADS)
Geffrin, Jean-Michel; Chamtouri, Maha; Merchiers, Olivier; Tortel, Hervé; Litman, Amélie; Bailly, Jean-Sébastien; Lacroix, Bernard; Francoeur, Mathieu; Vaillon, Rodolphe
2016-01-01
The Surface Wave Scattering-Microwave Scanner (SWS-MS) is a device that allows the measurement of the electromagnetic fields scattered by objects totally or partially submerged in surface waves. No probe is used to illuminate the sample, nor to guide or scatter the local evanescent waves. Surface waves are generated by total internal reflection, and the amplitude and phase of the fields scattered by the samples are measured directly, in both the far-field and the near-field regions. The device's principles and their practical implementation are described in detail. The surface wave generator is assessed by measuring the spatial distribution of the electric field above the surface. Drift correction and the calibration method for far-field measurements are explained. Comparison of both far-field and near-field measurements against simulation data shows that the device provides accurate results. This work suggests that the SWS-MS can be used for producing experimental reference data, for supporting a better understanding of surface wave scattering, for assisting in the design of near-field optical or infrared systems thanks to the scale invariance rule in electrodynamics, and for performing nondestructive control of defects in materials.
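Calibration of complex-field scattering measurements is commonly done by fitting a single complex coefficient between a measured reference target and its simulated scattered field, then dividing that coefficient out of every measurement. A sketch of that generic scheme is below; the SWS-MS paper's exact drift and calibration procedure may differ:

```python
import numpy as np

def calibrate(measured, measured_ref, simulated_ref):
    """Single-coefficient complex calibration.

    Fits measured_ref ~ c * simulated_ref in the least-squares sense
    (c absorbs the unknown gain and phase of the receive chain), then
    removes c from the measurement of interest.
    """
    simulated_ref = np.asarray(simulated_ref, dtype=complex)
    measured_ref = np.asarray(measured_ref, dtype=complex)
    # np.vdot conjugates its first argument: c = (s^H m) / (s^H s)
    c = np.vdot(simulated_ref, measured_ref) / np.vdot(simulated_ref, simulated_ref)
    return np.asarray(measured, dtype=complex) / c

# Toy check: a known complex gain applied to a reference field is removed
sim_ref = np.array([1 + 0j, 2j])
gain = 2 - 1j
true_field = np.array([3 + 1j, -0.5j])
print(calibrate(gain * true_field, gain * sim_ref, sim_ref))
```

In practice the reference is a canonical scatterer (e.g., a sphere) whose field is known from simulation, which is why comparisons against simulation data serve to validate the whole chain.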
Universal quantification of elastic scattering effects in AES and XPS
NASA Astrophysics Data System (ADS)
Jablonski, Aleksander
1996-09-01
Elastic scattering of photoelectrons in a solid can be accounted for in the common formalism of XPS by introducing two correction factors, βeff and Qx. In the case of AES, only one correction factor, QA, is required. As recently shown, relatively simple analytical expressions for the correction factors can be derived from the kinetic Boltzmann equation within the so-called "transport approximation". The corrections are expressed here in terms of the ratio of the transport mean free path (TRMFP) to the inelastic mean free path (IMFP). Since the available data for the TRMFP are rather limited, it was decided to compile an extensive database of these values. They were calculated in the present work for the same elements and energies as in the IMFP tabulation published by Tanuma et al. An attempt has been made to derive a predictive formula providing the ratios of the TRMFP to the IMFP. Consequently, a very simple and accurate algorithm for calculating the correction factors βeff, Qx and QA has been developed. This algorithm can easily be generalized to multicomponent solids. The resulting values of the correction factors were found to compare very well with published values resulting from Monte Carlo calculations.
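In the transport approximation the corrections depend on the TRMFP/IMFP ratio only through the single-scattering albedo, which is why a predictive formula for that ratio suffices. A minimal sketch of the albedo, assuming the usual transport-theory definition (the paper's βeff, Qx, QA expressions themselves are not reproduced here):

```python
def albedo(imfp, trmfp):
    """Single-scattering albedo of the transport approximation.

    omega = IMFP / (IMFP + TRMFP), i.e. the probability that a given
    collision is an elastic (transport) event rather than an inelastic
    one; both mean free paths must be in the same units. The correction
    factors beta_eff, Qx and QA are then functions of omega alone.
    """
    if imfp <= 0 or trmfp <= 0:
        raise ValueError("mean free paths must be positive")
    return imfp / (imfp + trmfp)

# Equal mean free paths give omega = 0.5; a long TRMFP (weak elastic
# scattering) drives omega toward 0 and the corrections toward unity.
print(albedo(2.0, 2.0))
```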
NASA Astrophysics Data System (ADS)
Tyynelä, J.; Leinonen, J.; Westbrook, C. D.; Moisseev, D.; Nousiainen, T.
2013-02-01