Aureolegraph internal scattering correction.
DeVore, John; Villanucci, Dennis; LePage, Andrew
2012-11-20
Two methods of determining instrumental scattering for correcting aureolegraph measurements of particulate solar scattering are presented. One involves subtracting measurements made with and without an external occluding ball and the other is a modification of the Langley Plot method and involves extrapolating aureolegraph measurements collected through a large range of solar zenith angles. Examples of internal scattering correction determinations using the latter method show similar power-law dependencies on scattering, but vary by roughly a factor of 8 and suggest that changing aerosol conditions during the determinations render this method problematic. Examples of corrections of scattering profiles using the former method are presented for a range of atmospheric particulate layers from aerosols to cumulus and cirrus clouds. PMID:23207299
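The occluding-ball method amounts to a pointwise subtraction of two aureole scans: the scan made with the external ball blocking the direct solar beam records (approximately) the instrumental scattering alone. A minimal sketch, with illustrative radiance values rather than the paper's data:

```python
def correct_profile(open_scan, occluded_scan):
    """Pointwise subtraction: the occluded scan (direct sun blocked by the
    external ball) estimates the instrument's internal scattering."""
    return [u - o for u, o in zip(open_scan, occluded_scan)]

open_scan = [10.0, 6.0, 3.5, 2.0]       # aureole + internal scattering
occluded_scan = [1.2, 0.8, 0.5, 0.3]    # internal scattering only
corrected = correct_profile(open_scan, occluded_scan)
```

The Langley-style alternative instead extrapolates many scans over a range of solar zenith angles, which the abstract notes is fragile under changing aerosol conditions.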
Accurate adiabatic correction in the hydrogen molecule
Pachucki, Krzysztof; Komasa, Jacek
2014-12-14
A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H₂, HD, HT, D₂, DT, and T₂ has been determined. For the ground state of H₂ the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present-day theoretical predictions for the rovibrational levels.
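A common way to use such a tabulated correction is to interpolate it between the computed internuclear distances and add it to the Born-Oppenheimer potential before solving the nuclear Schrödinger equation. The grid and values below are made-up placeholders (the paper's 88-point table is not reproduced here):

```python
import bisect

# Hypothetical placeholder table (bohr, hartree) -- NOT the paper's values
R_grid = [0.5, 1.0, 1.4, 2.0, 4.0, 12.0]
dE_ad = [9e-4, 6e-4, 5e-4, 4e-4, 2e-4, 1e-4]

def adiabatic_correction(R):
    """Piecewise-linear interpolation of the tabulated correction."""
    i = bisect.bisect_right(R_grid, R) - 1
    i = max(0, min(i, len(R_grid) - 2))
    t = (R - R_grid[i]) / (R_grid[i + 1] - R_grid[i])
    return dE_ad[i] + t * (dE_ad[i + 1] - dE_ad[i])

# the corrected potential is then V_BO(R) + adiabatic_correction(R)
```

In practice a smoother interpolant (e.g. splines) would be used; linear interpolation keeps the sketch self-contained.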
Algorithmic scatter correction in dual-energy digital mammography
Chen, Xi; Mou, Xuanqin; Nishikawa, Robert M.; Lau, Beverly A.; Chan, Suk-tak; Zhang, Lei
2013-11-15
Background DE calcification signals obtained with scatter-uncorrected data were reduced by 58% when scatter-corrected data from the algorithmic method were used. With the scatter-correction algorithm and denoising, the minimum visible calcification size can be reduced from 380 to 280 μm. Conclusions: When the proposed algorithmic scatter correction is applied to images, the resultant background DE calcification signals can be reduced and the CNR of calcifications can be improved. This method performs similarly to, or even better than, the pinhole-array interpolation method for scatter correction in DEDM; moreover, it is convenient and requires no extra exposure to the patient. Although the proposed scatter correction method is effective, it has so far been validated only on a 5-cm-thick phantom with calcifications and a homogeneous background. The method should be tested on structured backgrounds to more accurately gauge its effectiveness.
Monte Carlo scatter correction for SPECT
NASA Astrophysics Data System (ADS)
Liu, Zemei
The goal of this dissertation is to present a quantitatively accurate and computationally fast scatter correction method that is robust and easily accessible for routine applications in SPECT imaging. A Monte Carlo based scatter estimation method is investigated and developed further. The Monte Carlo simulation program SIMIND (Simulating Medical Imaging Nuclear Detectors) was specifically developed to simulate clinical SPECT systems. The SIMIND scatter estimation (SSE) method was extended using a multithreading technique to distribute the scatter estimation task across multiple threads running concurrently on multi-core CPUs, accelerating the scatter estimation process. An analytical collimator routine that reduces noise was used during SSE. The research includes the addition to SIMIND of charge transport modeling in cadmium zinc telluride (CZT) detectors. Phenomena associated with radiation-induced charge transport, including charge trapping, charge diffusion, charge sharing between neighboring detector pixels, and uncertainties in the detection process, are addressed. Experimental measurements and simulation studies were designed for scintillation-crystal-based SPECT and CZT-based SPECT systems to verify and evaluate the expanded SSE method. Jaszczak Deluxe and Anthropomorphic Torso Phantoms (Data Spectrum Corporation, Hillsborough, NC, USA) were used for experimental measurements, and digital versions of the same phantoms were employed during simulations to mimic experimental acquisitions. This study design enabled easy comparison of experimental and simulated data. The results have consistently shown that the SSE method performed similarly to or better than the triple energy window (TEW) and effective scatter source estimation (ESSE) methods for experiments on all the clinical SPECT systems. The SSE method is shown to be a viable method for scatter estimation in routine clinical use.
Asymmetric scatter kernels for software-based scatter correction of gridless mammography
NASA Astrophysics Data System (ADS)
Wang, Adam; Shapiro, Edward; Yoon, Sungwon; Ganguly, Arundhuti; Proano, Cesar; Colbeth, Rick; Lehto, Erkki; Star-Lack, Josh
2015-03-01
Scattered radiation remains one of the primary challenges for digital mammography, resulting in decreased image contrast and visualization of key features. While anti-scatter grids are commonly used to reduce scattered radiation in digital mammography, they are an incomplete solution that can add radiation dose, cost, and complexity. Instead, a software-based scatter correction method utilizing asymmetric scatter kernels is developed and evaluated in this work, which improves upon conventional symmetric kernels by adapting to local variations in object thickness and attenuation that result from the heterogeneous nature of breast tissue. This fast adaptive scatter kernel superposition (fASKS) method was applied to mammography by generating scatter kernels specific to the object size, x-ray energy, and system geometry of the projection data. The method was first validated with Monte Carlo simulation of a statistically-defined digital breast phantom, which was followed by initial validation on phantom studies conducted on a clinical mammography system. Results from the Monte Carlo simulation demonstrate excellent agreement between the estimated and true scatter signal, resulting in accurate scatter correction and recovery of 87% of the image contrast originally lost to scatter. Additionally, the asymmetric kernel provided more accurate scatter correction than the conventional symmetric kernel, especially at the edge of the breast. Results from the phantom studies on a clinical system further validate the ability of the asymmetric kernel correction method to accurately subtract the scatter signal and improve image quality. In conclusion, software-based scatter correction for mammography is a promising alternative to hardware-based approaches such as anti-scatter grids.
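Kernel-superposition scatter estimation, in its simplest shift-invariant form, spreads each primary-signal sample over its neighbors with a fixed kernel; fASKS goes further by adapting asymmetric kernels to local thickness and attenuation. A 1-D toy version of the basic superposition step:

```python
def scatter_estimate(primary, kernel):
    """Superpose one scatter kernel per sample, scaled by the primary
    signal there (1-D, shift-invariant toy version)."""
    n, half = len(primary), len(kernel) // 2
    scatter = [0.0] * n
    for i, p in enumerate(primary):
        for j, w in enumerate(kernel):
            t = i + j - half
            if 0 <= t < n:
                scatter[t] += p * w
    return scatter

primary = [100.0, 100.0, 50.0, 50.0]
kernel = [0.1, 0.2, 0.1]                 # symmetric toy kernel
est = scatter_estimate(primary, kernel)  # subtracted from the measurement
```

In the asymmetric variant the kernel itself would be a function of the local signal, so the double loop no longer reduces to a convolution.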
Onboard Autonomous Corrections for Accurate IRF Pointing.
NASA Astrophysics Data System (ADS)
Jorgensen, J. L.; Betto, M.; Denver, T.
2002-05-01
filtered GPS updates, a world time clock, astrometric correction tables, and a attitude output transform system, that allow the ASC to deliver the spacecraft attitude relative to the Inertial Reference Frame (IRF) in realtime. This paper describes the operations of the onboard autonomy of the ASC, which in realtime removes the residuals from the attitude measurements, whereby a timely IRF attitude at arcsecond level, is delivered to the AOCS (or sent to ground). A discussion about achievable robustness and accuracy is given, and compared to inflight results from the operations of the two Advanced Stellar Compass's (ASC), which are flying in LEO onboard the German geo-potential research satellite CHAMP. The ASC's onboard CHAMP are dual head versions, i.e. each processing unit is attached to two star camera heads. The dual head configuration is primarily employed to achieve a carefree AOCS control with respect to the Sun, Moon and Earth, and to increase the attitude accuracy, but it also enables onboard estimation and removal of thermal generated biases.
Scatter corrections for cone beam optical CT
NASA Astrophysics Data System (ADS)
Olding, Tim; Holmes, Oliver; Schreiner, L. John
2009-05-01
Cone beam optical computed tomography (OptCT) employing the VISTA scanner (Modus Medical, London, ON) has been shown to have significant promise for fast, three-dimensional imaging of polymer gel dosimeters. One distinct challenge with this approach arises from the combination of the cone beam geometry, a diffuse light source, and the scattering polymer gel media, which all contribute scatter signal that perturbs the accuracy of the scanner. Beam stop array (BSA), beam pass array (BPA) and anti-scatter polarizer correction methodologies have been employed to remove scatter signal from OptCT data. These approaches are investigated through the use of well-characterized phantom scattering solutions and irradiated polymer gel dosimeters. BSA-corrected scatter solutions show good agreement in attenuation coefficient with the optically absorbing dye solutions, with considerable reduction of the scatter-induced cupping artifact at high scattering concentrations. The application of BSA scatter corrections to a polymer gel dosimeter reduced the fraction of pixels failing the (3%, 3 mm) gamma criterion from 7.8% to 0.15%.
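The BSA idea can be sketched in one dimension: detector elements shadowed by the beam stops receive scatter only, and the scatter elsewhere is interpolated between stops and subtracted. A toy sketch with illustrative values:

```python
def bsa_correct(readings, stops):
    """readings: one projection; stops: indices shadowed by beam stops,
    where the detector sees scatter only.  Scatter elsewhere is linearly
    interpolated between stops and subtracted."""
    corrected = []
    for i, r in enumerate(readings):
        left = max((s for s in stops if s <= i), default=stops[0])
        right = min((s for s in stops if s >= i), default=stops[-1])
        if left == right:
            scatter = readings[left]
        else:
            t = (i - left) / (right - left)
            scatter = readings[left] + t * (readings[right] - readings[left])
        corrected.append(r - scatter)
    return corrected

readings = [5.0, 20.0, 25.0, 30.0, 7.0]   # beam stops at indices 0 and 4
corrected = bsa_correct(readings, [0, 4])
```

A real BSA correction works on 2-D projections and interpolates over a sparse grid of stops, but the estimate-then-subtract structure is the same.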
Scattering corrections in neutron radiography using point scattered functions
NASA Astrophysics Data System (ADS)
Kardjilov, N.; de Beer, F.; Hassanein, R.; Lehmann, E.; Vontobel, P.
2005-04-01
Scattered neutrons cause distortions and blurring in neutron radiography pictures taken at small distances between the investigated object and the detector. This is one of the most significant problems in quantitative neutron radiography. The quantification of strongly scattering materials such as hydrogenous materials (water, oil, plastic, etc.) with high precision is very difficult due to the scattering effect in the radiography images. The scattering contribution in liquid test samples (H₂O, D₂O, and a special-type oil, ISOPAR L) at different distances between the samples and the detector, the so-called Point Scattered Function (PScF), was calculated with the help of the MCNP-4C Monte Carlo code. Corrections of real experimental data were performed using the calculated PScF. Some of the results as well as the correction algorithm will be presented.
Accurately Detecting Students' Lies regarding Relational Aggression by Correctional Instructions
ERIC Educational Resources Information Center
Dickhauser, Oliver; Reinhard, Marc-Andre; Marksteiner, Tamara
2012-01-01
This study investigates the effect of correctional instructions when detecting lies about relational aggression. Based on models from the field of social psychology, we predict that correctional instruction will lead to a less pronounced lie bias and to more accurate lie detection. Seventy-five teachers received videotapes of students' true denial…
Dispersion corrections to parity violating electron scattering
Gorchtein, M.; Horowitz, C. J.; Ramsey-Musolf, M. J.
2010-08-04
We consider the dispersion correction to elastic parity violating electron-proton scattering due to γZ exchange. In a recent publication, this correction was reported to be substantially larger than previous estimates. In this paper, we study the dispersion correction in greater detail. We confirm the size of the dispersion correction to be ≈6% for the QWEAK experiment designed to measure the proton weak charge. We enumerate the parameters that have to be constrained to better than 30% (relative) in order to keep the theoretical uncertainty for QWEAK under control.
Correction of sunspot intensities for scattered light
NASA Technical Reports Server (NTRS)
Mullan, D. J.
1973-01-01
Correction of sunspot intensities for scattered light usually involves fitting theoretical curves to observed aureoles (Zwaan, 1965; Staveland, 1970, 1972). In this paper we examine the inaccuracies in the determination of scattered light by this method. Earlier analyses are extended to examine uncertainties due to the choice of the expression for limb darkening. For the spread function, we consider Lorentzians and Gaussians for which analytic expressions for the aureole can be written down. Lorentzians lead to divergence and normalization difficulties, and should not be used in scattered light determinations. Gaussian functions are more suitable.
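The normalization problem raised in the abstract is easy to verify numerically: the 2-D integral of a Lorentzian wing 1/(1 + r²) grows without bound as the integration radius increases (logarithmic divergence), while a Gaussian converges. A quick midpoint-rule check:

```python
import math

def radial_integral(f, r_max, n=20000):
    """Midpoint-rule value of the 2-D integral of f(r) out to r_max,
    i.e. 2*pi * integral of f(r)*r dr."""
    dr = r_max / n
    return 2 * math.pi * sum(f((i + 0.5) * dr) * (i + 0.5) * dr * dr
                             for i in range(n))

lorentz = lambda r: 1.0 / (1.0 + r * r)
gauss = lambda r: math.exp(-r * r)

g10, g20 = radial_integral(gauss, 10.0), radial_integral(gauss, 20.0)
l10, l20 = radial_integral(lorentz, 10.0), radial_integral(lorentz, 20.0)
# the Gaussian total converges (to pi); the Lorentzian total keeps growing
```

Analytically the Lorentzian radial integral is π ln(1 + r²), which is why it cannot be normalized as a spread function.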
Quadratic electroweak corrections for polarized Møller scattering
A. Aleksejevs, S. Barkanova, Y. Kolomensky, E. Kuraev, V. Zykunov
2012-01-01
The paper discusses the two-loop (NNLO) electroweak radiative corrections to the parity violating electron-electron scattering asymmetry induced by squaring one-loop diagrams. The calculations are relevant for the ultra-precise 11 GeV MOLLER experiment planned at Jefferson Laboratory and experiments at high-energy future electron colliders. The imaginary parts of the amplitudes are taken into consideration consistently in both the infrared-finite and divergent terms. The size of the obtained partial correction is significant, which indicates a need for a complete study of the two-loop electroweak radiative corrections in order to meet the precision goals of future experiments.
Atmospheric scattering corrections to solar radiometry
NASA Technical Reports Server (NTRS)
Box, M. A.; Deepak, A.
1979-01-01
Whenever a solar radiometer is used to measure direct solar radiation, some diffuse sky radiation invariably enters the detector's field of view along with the direct beam. Therefore, the atmospheric optical depth obtained by the use of Bouguer's transmission law (also called Beer-Lambert's law), which is valid only for direct radiation, needs to be corrected by taking account of the scattered radiation. This paper discusses the correction factors needed to account for the diffuse (i.e., singly and multiply scattered) radiation and the algorithms developed for retrieving aerosol size distribution from such measurements. For a radiometer with a small field of view (half-cone angle of less than 5 deg) and relatively clear skies (optical depths less than 0.4), it is shown that the total diffuse contribution represents approximately 1% of the total intensity.
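The bias can be illustrated with Bouguer's law directly: if a diffuse contribution of about 1% (the paper's clear-sky estimate) is included in the measured signal, the retrieved optical depth is slightly underestimated. A sketch with illustrative numbers:

```python
import math

V0 = 1000.0                      # extraterrestrial signal, arbitrary units
tau_true = 0.3                   # true optical depth
m = 2.0                          # relative air mass
direct = V0 * math.exp(-m * tau_true)
diffuse = 0.01 * direct          # ~1% diffuse contribution (clear sky)

# naive Bouguer retrieval on the contaminated signal vs. the true value
tau_naive = -math.log((direct + diffuse) / V0) / m
bias = tau_true - tau_naive      # positive: the naive tau is biased low
```

The bias here is ln(1.01)/m, i.e. about 0.005 in optical depth, small but systematic, which is why a correction factor is worthwhile for precision radiometry.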
An Accurate Temperature Correction Model for Thermocouple Hygrometers
Savage, Michael J.; Cass, Alfred; de Jager, James M.
1982-01-01
Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241
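The two-temperature model amounts to interpolating the calibration slope linearly in temperature before converting thermocouple voltage to water potential. A minimal sketch; the temperatures and slope values are invented for illustration, not taken from the paper:

```python
def slope_at(T, T1, s1, T2, s2):
    """Calibration slope interpolated linearly between two
    calibration temperatures T1 and T2."""
    return s1 + (s2 - s1) * (T - T1) / (T2 - T1)

def water_potential(voltage_uV, T):
    # the 15/35 degC slopes below are invented illustration values
    return voltage_uV / slope_at(T, 15.0, 0.47, 35.0, 0.75)

psi = water_potential(6.1, 25.0)
```

The single-temperature variant replaces the second calibration point with a theoretically computed sensitivity, which the abstract reports is somewhat less accurate.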
SPECT Compton-scattering correction by analysis of energy spectra.
Koral, K F; Wang, X Q; Rogers, W L; Clinthorne, N H; Wang, X H
1988-02-01
The hypothesis that energy spectra at individual spatial locations in single photon emission computed tomographic projection images can be analyzed to separate the Compton-scattered component from the unscattered component is tested indirectly. An axially symmetric phantom consisting of a cylinder with a sphere is imaged with either the cylinder or the sphere containing 99mTc. An iterative peak-erosion algorithm and a fitting algorithm are given and employed to analyze the acquired spectra. Adequate separation into an unscattered component and a Compton-scattered component is judged on the basis of filtered-backprojection reconstruction of corrected projections. In the reconstructions, attenuation correction is based on the known geometry and the total attenuation cross section for water. An independent test of the accuracy of separation is not made. For both algorithms, reconstructed slices for the cold-sphere, hot-surround phantom have the correct shape, as confirmed by simulation results that take into account the measured dependence of system resolution on depth. For the inverse phantom, a hot sphere in a cold surround, quantitative results with the fitting algorithm are accurate, but those obtained with a particular number of iterations of the erosion algorithm are less good. (A greater number of iterations would, however, improve on the 26% error obtained with that algorithm.) These preliminary results encourage us to believe that a method for correcting for Compton-scattering in a wide variety of objects can be found, thus helping to achieve quantitative SPECT. PMID:3258023
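The fitting idea can be sketched as a linear decomposition: model each spectrum as a scatter component plus an unscattered component with known shapes and solve the 2x2 normal equations for their amplitudes. The spectral shapes below are toy vectors, not measured 99mTc spectra:

```python
def fit_two_components(spectrum, s_shape, p_shape):
    """Least-squares amplitudes (a, b) for spectrum ~ a*s_shape + b*p_shape,
    via the 2x2 normal equations."""
    sxx = sum(x * x for x in s_shape)
    spp = sum(x * x for x in p_shape)
    sxp = sum(x * y for x, y in zip(s_shape, p_shape))
    bs = sum(x * y for x, y in zip(s_shape, spectrum))
    bp = sum(x * y for x, y in zip(p_shape, spectrum))
    det = sxx * spp - sxp * sxp
    return (bs * spp - bp * sxp) / det, (bp * sxx - bs * sxp) / det

scatter_shape = [1.0, 2.0, 1.0, 0.0]    # toy broad scatter component
primary_shape = [0.0, 1.0, 3.0, 2.0]    # toy photopeak component
measured = [2.0, 4.5, 3.5, 1.0]         # = 2*scatter + 0.5*primary
a, b = fit_two_components(measured, scatter_shape, primary_shape)
```

The peak-erosion algorithm in the paper is iterative rather than a one-shot fit, but both aim at the same scatter/primary split per pixel.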
Accurate Development of Thermal Neutron Scattering Cross Section Libraries
Hawari, Ayman; Dunn, Michael
2014-06-10
The objective of this project is to develop a holistic (fundamental and accurate) approach for generating thermal neutron scattering cross section libraries for a collection of important neutron moderators and reflectors. The primary components of this approach are the physical accuracy and completeness of the generated data libraries. Consequently, for the first time, thermal neutron scattering cross section data libraries will be generated that are based on accurate theoretical models, that are carefully benchmarked against experimental and computational data, and that contain complete covariance information that can be used in propagating the data uncertainties through the various components of the nuclear design and execution process. To achieve this objective, computational and experimental investigations will be performed on a carefully selected subset of materials that play a key role in all stages of the nuclear fuel cycle.
NASA Astrophysics Data System (ADS)
Jo, Byung-Du; Lee, Young-Jin; Kim, Dae-Hong; Jeon, Pil-Hyun; Kim, Hee-Joung
2014-03-01
In conventional digital radiography (DR) using a dual energy subtraction technique, a significant fraction of the detected photons are scattered within the body, resulting in the scatter component. Scattered radiation can significantly deteriorate image quality in diagnostic X-ray imaging systems. Various methods of scatter correction, including both measurement and non-measurement-based methods have been proposed in the past. Both methods can reduce scatter artifacts in images. However, non-measurement-based methods require a homogeneous object and have insufficient scatter component correction. Therefore, we employed a measurement-based method to correct for the scatter component of inhomogeneous objects from dual energy DR (DEDR) images. We performed a simulation study using a Monte Carlo simulation with a primary modulator, which is a measurement-based method for the DEDR system. The primary modulator, which has a checkerboard pattern, was used to modulate primary radiation. Cylindrical phantoms of variable size were used to quantify imaging performance. For scatter estimation, we used Discrete Fourier Transform filtering. The primary modulation method was evaluated using a cylindrical phantom in the DEDR system. The scatter components were accurately removed using a primary modulator. When the results acquired with scatter correction and without correction were compared, the average contrast-to-noise ratio (CNR) with the correction was 1.35 times higher than that obtained without correction, and the average root mean square error (RMSE) with the correction was 38.00% better than that without correction. In the subtraction study, the average CNR with correction was 2.04 (aluminum subtraction) and 1.38 (polymethyl methacrylate (PMMA) subtraction) times higher than that obtained without the correction. The analysis demonstrated the accuracy of scatter correction and the improvement of image quality using a primary modulator and showed the feasibility of
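The core of primary modulation can be illustrated with a single modulated/unmodulated sample pair: the scatter term is common to both, so two linear equations recover primary and scatter separately (the actual method works in the Fourier domain on a full checkerboard pattern). A toy sketch, assuming scatter varies slowly across the pair:

```python
def demodulate_pair(m_mod, m_open, a):
    """Solve m_mod = a*p + s and m_open = p + s for primary p and
    scatter s, assuming scatter is equal at the two adjacent samples."""
    p = (m_open - m_mod) / (1.0 - a)
    return p, m_open - p

# modulator transmission a = 0.5; true primary 10, true scatter 4
p, s = demodulate_pair(9.0, 14.0, 0.5)
```

The Fourier-filtering formulation used in the paper exploits the same fact globally: the checkerboard shifts the primary to high spatial frequencies, leaving the low-frequency band dominated by scatter.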
Monte Carlo-based down-scatter correction of SPECT attenuation maps.
Bokulić, Tomislav; Vastenhouw, Brendan; de Jong, Hugo W A M; van Dongen, Alice J; van Rijk, Peter P; Beekman, Freek J
2004-08-01
Combined acquisition of transmission and emission data in single-photon emission computed tomography (SPECT) can be used for correction of non-uniform photon attenuation. However, down-scatter from a higher energy isotope (e.g. 99mTc) contaminates lower energy transmission data (e.g. 153Gd, 100 keV), resulting in underestimation of reconstructed attenuation coefficients. Window-based corrections are often not very accurate and increase noise in attenuation maps. We have developed a new correction scheme. It uses accurate scatter modelling to avoid noise amplification and does not require additional energy windows. The correction works as follows: Initially, an approximate attenuation map is reconstructed using down-scatter contaminated transmission data (step 1). An emission map is reconstructed based on the contaminated attenuation map (step 2). Based on this approximate 99mTc reconstruction and attenuation map, down-scatter in the 153Gd window is simulated using accelerated Monte Carlo simulation (step 3). This down-scatter estimate is used during reconstruction of a corrected attenuation map (step 4). Based on the corrected attenuation map, an improved 99mTc image is reconstructed (step 5). Steps 3-5 are repeated to incrementally improve the down-scatter estimate. The Monte Carlo simulator provides accurate down-scatter estimation with significantly less noise than down-scatter estimates acquired in an additional window. Errors in the reconstructed attenuation coefficients are reduced from ca. 40% to less than 5%. Furthermore, artefacts in 99mTc emission reconstructions are almost completely removed. These results are better than for window-based correction, both in simulation experiments and in physical phantom experiments. Monte Carlo down-scatter simulation in concert with statistical reconstruction provides accurate down-scatter correction of attenuation maps. PMID:15034678
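The five-step scheme can be sketched as a loop in which the down-scatter estimate is refined against successively better attenuation and emission maps. All reconstruction and simulation components below are placeholder callables, not SPECT code:

```python
# Schematic of the five-step correction loop described in the abstract.
def correct_attenuation_map(transmission, emission_data, n_iter,
                            reconstruct_mu, reconstruct_em,
                            simulate_downscatter):
    mu = reconstruct_mu(transmission, downscatter=None)    # step 1
    em = reconstruct_em(emission_data, mu)                 # step 2
    for _ in range(n_iter):
        ds = simulate_downscatter(em, mu)                  # step 3
        mu = reconstruct_mu(transmission, downscatter=ds)  # step 4
        em = reconstruct_em(emission_data, mu)             # step 5
    return mu, em

# toy stand-ins: scalar "maps"; the transmission is contaminated by a
# constant down-scatter of 2.0 units that the simulator recovers exactly
mu0, em0 = correct_attenuation_map(
    transmission=10.0, emission_data=5.0, n_iter=2,
    reconstruct_mu=lambda t, downscatter: t - (downscatter or 0.0),
    reconstruct_em=lambda e, mu: e,
    simulate_downscatter=lambda em, mu: 2.0,
)
```

In the paper the simulator is an accelerated Monte Carlo code and the reconstructions are statistical, but the fixed-point structure of the iteration is as above.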
Using BRDFs for accurate albedo calculations and adjacency effect corrections
Borel, C.C.; Gerstl, S.A.W.
1996-09-01
In this paper the authors discuss two uses of BRDFs in remote sensing: (1) in determining the clear sky top of the atmosphere (TOA) albedo, (2) in quantifying the effect of the BRDF on the adjacency point-spread function and on atmospheric corrections. The TOA spectral albedo is an important parameter retrieved by the Multi-angle Imaging Spectro-Radiometer (MISR). Its accuracy depends mainly on how well one can model the surface BRDF for many different situations. The authors present results from an algorithm which matches several semi-empirical functions to the nine MISR measured BRFs that are then numerically integrated to yield the clear sky TOA spectral albedo in four spectral channels. They show that absolute accuracies in the albedo of better than 1% are possible for the visible and better than 2% in the near infrared channels. Using a simplified extensive radiosity model, the authors show that the shape of the adjacency point-spread function (PSF) depends on the underlying surface BRDFs. The adjacency point-spread function at a given offset (x,y) from the center pixel is given by the integral of transmission-weighted products of BRDF and scattering phase function along the line of sight.
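The albedo step described above is a hemispheric integral of the fitted BRDF weighted by the cosine of the outgoing zenith angle. A numeric sanity check with a Lambertian BRDF rho/pi, which must integrate to rho:

```python
import math

def hemispheric_albedo(brdf, n_theta=120, n_phi=120):
    """Directional-hemispherical albedo: midpoint-rule integral of
    brdf(theta, phi) * cos(theta) over the outgoing hemisphere."""
    dth = (math.pi / 2) / n_theta
    dph = (2 * math.pi) / n_phi
    total = 0.0
    for i in range(n_theta):
        th = (i + 0.5) * dth
        w = math.cos(th) * math.sin(th) * dth * dph
        for j in range(n_phi):
            total += brdf(th, (j + 0.5) * dph) * w
    return total

rho = 0.3
albedo = hemispheric_albedo(lambda th, ph: rho / math.pi)  # Lambertian
```

MISR-style processing replaces the Lambertian stand-in with semi-empirical BRDF functions fitted to the nine measured angles, then performs the same integration.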
Correction to Molière's formula for multiple scattering
NASA Astrophysics Data System (ADS)
Lee, R. N.; Milstein, A. I.
2009-06-01
The semiclassical correction to Molière’s formula for multiple scattering is derived. The consideration is based on the scattering amplitude obtained with the first semiclassical correction taken into account for an arbitrary localized but not spherically symmetric potential. Unlike the leading term, the correction to Molière’s formula contains the target density n and thickness L not only in the combination nL (areal density). Therefore, this correction can be referred to as the bulk density correction. It turns out that the bulk density correction is small even for high density. This result explains the wide range of applicability of Molière’s formula.
A spectrally accurate algorithm for electromagnetic scattering in three dimensions
NASA Astrophysics Data System (ADS)
Ganesh, M.; Hawkins, S.
2006-09-01
In this work we develop, implement and analyze a high-order spectrally accurate algorithm for computation of the echo area, and monostatic and bistatic radar cross-section (RCS) of a three dimensional perfectly conducting obstacle through simulation of the time-harmonic electromagnetic waves scattered by the conductor. Our scheme is based on a modified boundary integral formulation (of the Maxwell equations) that is tolerant to basis functions that are not tangential on the conductor surface. We test our algorithm with extensive computational experiments using a variety of three dimensional perfect conductors described in spherical coordinates, including benchmark radar targets such as the metallic NASA almond and ogive. The monostatic RCS measurements for non-convex conductors require hundreds of incident waves (boundary conditions). We demonstrate that the monostatic RCS of small (to medium) sized conductors can be computed using over one thousand incident waves within a few minutes (to a few hours) of CPU time. We compare our results with those obtained using method of moments based industrial standard three dimensional electromagnetic codes CARLOS, CICERO, FE-IE, FERM, and FISC. Finally, we prove the spectrally accurate convergence of our algorithm for computing the surface current, far-field, and RCS values of a class of conductors described globally in spherical coordinates.
Novel scatter compensation of list-mode PET data using spatial and energy dependent corrections
Guérin, Bastien
2011-01-01
With the widespread use of PET crystals with greatly improved energy resolution (e.g., 11.5% with LYSO as compared to 20% with BGO) and of list-mode acquisitions, the use of the energy of individual events in scatter correction schemes becomes feasible. We propose a novel scatter correction approach that incorporates the energy of individual photons in the scatter correction and reconstruction of list-mode PET data in addition to the spatial information presently used in clinical scanners. First, we rewrite the Poisson likelihood function of list-mode PET data including the energy distributions of primary and scatter coincidences and show that this expression yields an MLEM reconstruction algorithm containing both energy and spatial dependent corrections. To estimate the spatial distribution of scatter coincidences we use the single scatter simulation (SSS). Next, we derive two new formulae which allow estimation of the 2D (coincidences) energy probability density functions (E-PDF) of primary and scatter coincidences from the 1D (photons) E-PDFs associated with each photon. We also describe an accurate and robust object-specific method for estimating these 1D E-PDFs based on a decomposition of the total energy spectra detected across the scanner into primary and scattered components. Finally, we show that the energy information can be used to accurately normalize the scatter sinogram to the data. We compared the performance of this novel scatter correction incorporating both the position and energy of detected coincidences to that of the traditional approach modeling only the spatial distribution of scatter coincidences in 3D Monte Carlo simulations of a medium cylindrical phantom and a large, nonuniform NCAT phantom. Incorporating the energy information in the scatter correction decreased bias in the activity distribution estimation by ~20% and ~40% in the cold regions of the large NCAT phantom at energy resolutions of 11.5% and 20% at 511 keV, respectively, compared to when
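One way to picture an energy-dependent correction of this kind is as a per-coincidence weight: given 1-D energy PDFs for primary and scattered photons and a prior scatter fraction, Bayes' rule gives the posterior probability that a coincidence is primary. This sketch makes the crude assumption that both photons of a scattered coincidence follow the scatter PDF, which is simpler than the paper's 2-D coincidence E-PDFs:

```python
def primary_weight(e1, e2, f_p, f_s, prior_scatter=0.3):
    """Posterior probability that a coincidence with photon energies
    (e1, e2) is primary, under the simplification that scattered
    coincidences draw both photons from the scatter PDF."""
    p = (1.0 - prior_scatter) * f_p(e1) * f_p(e2)
    s = prior_scatter * f_s(e1) * f_s(e2)
    return p / (p + s)

# crude step-function stand-ins for the 1-D energy PDFs (energies in keV)
f_p = lambda e: 1.0 if e >= 480 else 0.05
f_s = lambda e: 0.2 if e >= 480 else 1.0

w_peak = primary_weight(511, 508, f_p, f_s)   # both photons near 511 keV
w_low = primary_weight(440, 430, f_p, f_s)    # both photons down-scattered
```

Weights of this form, folded into the list-mode likelihood, are how the energy term modifies the MLEM update alongside the usual spatial scatter model.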
Practical correction procedures for elastic electron scattering effects in ARXPS
NASA Astrophysics Data System (ADS)
Lassen, T. S.; Tougaard, S.; Jablonski, A.
2001-06-01
Angle-resolved XPS and AES (ARXPS and ARAES) are widely used for determination of the in-depth distribution of elements in the surface region of solids. It is well known that elastic electron scattering has a significant effect on the intensity as a function of emission angle and that this has a great influence on the overlayer thicknesses determined by this method. However, the procedures commonly applied in ARXPS and ARAES neglect this, because no simple and practical correction procedure has been available. Recently, however, new algorithms have been suggested. In this paper, we study the efficiency of these algorithms in correcting for elastic scattering effects in the interpretation of ARXPS and ARAES. This is done by first calculating electron distributions by Monte Carlo simulations for well-defined overlayer/substrate systems and then applying the different algorithms. We have found that an analytical formula based on a solution of the Boltzmann transport equation accounts well for elastic scattering effects. However, this procedure is computationally very slow and the underlying algorithm is complicated. Another, much simpler algorithm, proposed by Nefedov and coworkers, was also tested. Three different ways of handling the scattering parameters within this model were tested, and it was found that this algorithm also gives a good description of elastic scattering effects provided that it is slightly modified to take into account the differences in the transport properties of the substrate and the overlayer. This procedure is fairly simple and is described in detail. The model gives a much more accurate description compared to the traditional straight-line approximation (SLA). However, it is also found that when attenuation lengths instead of inelastic mean free paths are used in the simple SLA formalism, the effects of elastic scattering are also reasonably well accounted for. Specifically, from a systematic study of several
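For context, the straight-line approximation the abstract refers to reads I/I0 = exp(-d / (L cos θ)) for the substrate signal under an overlayer of thickness d; using an attenuation length L rather than the inelastic mean free path partly absorbs elastic-scattering effects. A minimal round-trip sketch (numbers illustrative):

```python
import math

def apparent_thickness(ratio, L, theta_deg):
    """Invert I/I0 = exp(-d / (L cos theta)) for the overlayer thickness d."""
    return -L * math.cos(math.radians(theta_deg)) * math.log(ratio)

d_true, L = 2.0, 3.0                      # nm, illustrative values
ratio = math.exp(-d_true / (L * math.cos(math.radians(45.0))))
d_est = apparent_thickness(ratio, L, 45.0)
```

In real ARXPS data, elastic scattering makes the apparent d drift with emission angle; the corrections discussed in the paper aim to remove exactly that drift.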
Accurate source location from P waves scattered by surface topography
NASA Astrophysics Data System (ADS)
Wang, N.; Shen, Y.
2015-12-01
Accurate source locations of earthquakes and other seismic events are fundamental in seismology. The location accuracy is limited by several factors, including velocity models, which are often poorly known. In contrast, surface topography, the largest velocity contrast in the Earth, is often precisely mapped at the seismic wavelength (> 100 m). In this study, we explore the use of P-coda waves generated by scattering at surface topography to obtain high-resolution locations of near-surface seismic events. The Pacific Northwest region is chosen as an example. The grid search method is combined with the 3D strain Green's tensor database method to improve the search efficiency as well as the quality of hypocenter solutions. The strain Green's tensor is calculated by the 3D collocated-grid finite difference method on curvilinear grids. Solutions in the search volume are then obtained based on the least-square misfit between the 'observed' and predicted P and P-coda waves. A 95% confidence interval of the solution is also provided as an a posteriori error estimation. We find that the scattered waves are mainly due to topography in comparison with random velocity heterogeneity characterized by the von Kármán-type power spectral density function. When only P wave data are used, the 'best' solution is offset from the real source location, mostly in the vertical direction. The incorporation of P coda significantly improves solution accuracy and reduces its uncertainty. The solution remains robust with a range of random noises in data, un-modeled random velocity heterogeneities, and uncertainties in moment tensors that we tested.
Accurate source location from waves scattered by surface topography
NASA Astrophysics Data System (ADS)
Wang, Nian; Shen, Yang; Flinders, Ashton; Zhang, Wei
2016-06-01
Accurate source locations of earthquakes and other seismic events are fundamental in seismology. The location accuracy is limited by several factors, including velocity models, which are often poorly known. In contrast, surface topography, the largest velocity contrast in the Earth, is often precisely mapped at the seismic wavelength (>100 m). In this study, we explore the use of P coda waves generated by scattering at surface topography to obtain high-resolution locations of near-surface seismic events. The Pacific Northwest region is chosen as an example to provide realistic topography. A grid search algorithm is combined with the 3-D strain Green's tensor database to improve search efficiency as well as the quality of hypocenter solutions. The strain Green's tensor is calculated using a 3-D collocated-grid finite difference method on curvilinear grids. Solutions in the search volume are obtained based on the least squares misfit between the "observed" and predicted P and P coda waves. The 95% confidence interval of the solution is provided as an a posteriori error estimation. For shallow events tested in the study, scattering is mainly due to topography in comparison with stochastic lateral velocity heterogeneity. The incorporation of P coda significantly improves solution accuracy and reduces solution uncertainty. The solution remains robust with wide ranges of random noises in data, unmodeled random velocity heterogeneities, and uncertainties in moment tensors. The method can be extended to locate pairs of sources in close proximity by differential waveforms using source-receiver reciprocity, further reducing errors caused by unmodeled velocity structures.
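The core of the location scheme above is a grid search that picks the trial hypocenter minimizing the least-squares misfit between observed and predicted P and P-coda waveforms. A minimal sketch follows; the toy damped-sinusoid "waveforms" and all names (`locate_source`, `synth`) are illustrative stand-ins, since the paper's predictions come from a 3-D strain Green's tensor database, not from closed-form signals:

```python
import numpy as np

def locate_source(candidates, predicted, observed):
    """Grid search: return the candidate hypocenter whose predicted
    waveform best matches the observed one in a least-squares sense.

    candidates : list of (x, y, z) trial locations
    predicted  : dict mapping candidate index -> synthetic waveform (1-D array)
    observed   : 1-D array, the recorded P + P-coda waveform
    """
    misfits = np.array([np.sum((predicted[i] - observed) ** 2)
                        for i in range(len(candidates))])
    best = int(np.argmin(misfits))
    return candidates[best], misfits

# Toy example: three trial depths, "waveforms" as damped sinusoids whose
# frequency stands in for the depth-dependent waveform change.
t = np.linspace(0.0, 1.0, 200)
def synth(depth_km):
    return np.exp(-3.0 * t) * np.sin(2 * np.pi * (5.0 + depth_km) * t)

cands = [(0, 0, 1.0), (0, 0, 2.0), (0, 0, 3.0)]
preds = {i: synth(c[2]) for i, c in enumerate(cands)}
obs = synth(2.0)  # "observed" event at 2 km depth
best, mis = locate_source(cands, preds, obs)
```

In the paper, the misfit surface over the whole search volume also yields the 95% confidence interval; here only the minimizer is returned.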
NASA Astrophysics Data System (ADS)
Jo, Byung-Du; Lee, Young-Jin; Kim, Dae-Hong; Kim, Hee-Joung
2014-08-01
In conventional digital radiography (DR) using a dual energy subtraction technique, a significant fraction of the detected photons are scattered within the body, making up the scatter component. Scattered radiation can significantly deteriorate image quality in diagnostic X-ray imaging systems. Various methods of scatter correction, including both measurement- and non-measurement-based methods, have been proposed in the past. Both methods can reduce scatter artifacts in images. However, non-measurement-based methods require a homogeneous object and correct the scatter component insufficiently. Therefore, we employed a measurement-based method to correct for the scatter component of inhomogeneous objects in dual energy DR (DEDR) images. We performed a simulation study using a Monte Carlo simulation with a primary modulator, which is a measurement-based method, for the DEDR system. The primary modulator, which has a checkerboard pattern, was used to modulate the primary radiation. Cylindrical phantoms of variable size were used to quantify the imaging performance. For scatter estimates, we used discrete Fourier transform filtering, e.g., Gaussian low- and high-pass filters with a cut-off frequency. The primary modulation method was evaluated using a cylindrical phantom in the DEDR system. The scatter components were accurately removed using a primary modulator. When the results acquired with scatter correction and without scatter correction were compared, the average contrast-to-noise ratio (CNR) with the correction was 1.35 times higher than that obtained without the correction, and the average root mean square error (RMSE) with the correction was 38.00% better than that without the correction. In the subtraction study, the average CNR with the correction was 2.04 (aluminum subtraction) and 1.38 (polymethyl methacrylate (PMMA) subtraction) times higher than that obtained without the correction. The analysis demonstrated the accuracy of the scatter correction and the
Quantitative fully 3D PET via model-based scatter correction
Ollinger, J.M.
1994-05-01
We have investigated the quantitative accuracy of fully 3D PET using model-based scatter correction by measuring the half-life of Ga-68 in the presence of scatter from F-18. The inner chamber of a Data Spectrum cardiac phantom was filled with 18.5 MBq of Ga-68. The outer chamber was filled with an equivalent amount of F-18. The cardiac phantom was placed in a 22x30.5 cm elliptical phantom containing anthropomorphic lung inserts filled with a water-Styrofoam mixture. Ten frames of dynamic data were collected over 13.6 hours on Siemens-CTI 953B scanner with the septa retracted. The data were corrected using model-based scatter correction, which uses the emission images, transmission images and an accurate physical model to directly calculate the scatter distribution. Both uncorrected and corrected data were reconstructed using the Promis algorithm. The scatter correction required 4.3% of the total reconstruction time. The scatter fraction in a small volume of interest in the center of the inner chamber of the cardiac insert rose from 4.0% in the first interval to 46.4% in the last interval as the ratio of F-18 activity to Ga-68 activity rose from 1:1 to 33:1. Fitting a single exponential to the last three data points yields estimates of the half-life of Ga-68 of 77.01 minutes and 68.79 minutes for uncorrected and corrected data respectively. Thus, scatter correction reduces the error from 13.3% to 1.2%. This suggests that model-based scatter correction is accurate in the heterogeneous attenuating medium found in the chest, making possible quantitative, fully 3D PET in the body.
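The half-life check above reduces to fitting a single exponential to the late frames. A minimal log-linear least-squares sketch (synthetic counts, not the paper's data; the accepted Ga-68 half-life of about 67.71 min is used to generate them):

```python
import numpy as np

def half_life_from_counts(times_min, counts):
    """Least-squares fit of ln(counts) = ln(A) - lam * t;
    returns the half-life ln(2)/lam in the same time units."""
    slope, intercept = np.polyfit(times_min, np.log(counts), 1)
    return np.log(2) / -slope

# Synthetic noiseless Ga-68 decay, three late frames (minutes)
t = np.array([600.0, 700.0, 800.0])
lam = np.log(2) / 67.71
c = 1e6 * np.exp(-lam * t)
print(round(half_life_from_counts(t, c), 2))  # prints 67.71
```

On real data, residual scatter biases the fit exactly as in the abstract: uncorrected F-18 contamination inflates the apparent counts at late times and distorts the recovered half-life.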
Accurate scatter compensation using neural networks in radionuclide imaging
Ogawa, Koichi; Nishizaki, N. (Dept. of Electrical Engineering)
1993-08-01
The paper presents a new method to estimate primary photons using an artificial neural network in radionuclide imaging. The neural network for {sup 99m}Tc had three layers, i.e., one input layer with five units, one hidden layer with five units, and one output layer with two units. As input values to the input units, the authors used count ratios, which were the ratios of the counts acquired by narrow windows to the total count acquired by a broad window with the energy range from 125 to 154 keV. The outputs were a scatter count ratio and a primary count ratio. Using the primary count ratio and the total count, they calculated the primary count of the pixel directly. The neural network was trained with a back-propagation algorithm using true energy spectra calculated by a Monte Carlo method. The simulation showed that an accurate estimation of primary photons was accomplished within an error ratio of 5% for primary photons.
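The 5-5-2 network described above is small enough to sketch directly. Below is an illustrative forward pass only, with random (untrained) weights; in the paper the weights are learned by back-propagation from Monte Carlo energy spectra, and the names here are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass of the 5-5-2 network: five count-ratio inputs,
    one sigmoid hidden layer with five units, two sigmoid outputs
    (scatter count ratio, primary count ratio)."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    h = sig(W1 @ x + b1)
    return sig(W2 @ h + b2)

# Untrained illustrative weights
W1, b1 = rng.normal(size=(5, 5)), np.zeros(5)
W2, b2 = rng.normal(size=(2, 5)), np.zeros(2)

narrow_counts = np.array([120.0, 180.0, 400.0, 260.0, 90.0])
total = narrow_counts.sum() + 150.0   # broad-window total (125-154 keV)
x = narrow_counts / total             # count-ratio inputs
out = mlp_forward(x, W1, b1, W2, b2)
primary_estimate = out[1] * total     # primary count of the pixel
```

The last line mirrors the abstract's point: the network outputs ratios, and multiplying the primary count ratio by the broad-window total gives the primary count per pixel directly.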
SU-E-I-07: An Improved Technique for Scatter Correction in PET
Lin, S; Wang, Y; Lue, K; Lin, H; Chuang, K
2014-06-01
Purpose: In positron emission tomography (PET), the single scatter simulation (SSS) algorithm is widely used for scatter estimation in clinical scans. However, bias usually occurs at the essential step of scaling the computed SSS distribution to real scatter amounts by employing the scatter-only projection tail. The bias can be amplified when the scatter-only projection tail is too small, resulting in incorrect scatter correction. To this end, we propose a novel scatter calibration technique to accurately estimate the amount of scatter using a pre-determined scatter fraction (SF) function instead of the scatter-only tail information. Methods: As the SF depends on the radioactivity distribution and the attenuating material of the patient, an accurate theoretical relation cannot be devised. Instead, we constructed an empirical transformation function between SFs and average attenuation coefficients based on a series of phantom studies with different sizes and materials. From the average attenuation coefficient, the predicted SFs were calculated using the empirical transformation function. Hence, the real scatter amount can be obtained by scaling the SSS distribution with the predicted SFs. The simulation was conducted using SimSET. The Siemens Biograph™ 6 PET scanner was modeled in this study. The Software for Tomographic Image Reconstruction (STIR) was employed to estimate the scatter and reconstruct images. The EEC phantom was adopted to evaluate the performance of our proposed technique. Results: The scatter-corrected image of our method demonstrated improved image contrast over that of SSS. For our technique and SSS of the reconstructed images, the normalized standard deviations were 0.053 and 0.182, respectively; the root mean squared errors were 11.852 and 13.767, respectively. Conclusion: We have proposed an alternative method to calibrate SSS (C-SSS) to the absolute scatter amounts using SF. This method can avoid the bias caused by the insufficient
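The calibration step itself is simple once the SF is predicted: rescale the SSS shape so that total scatter equals SF times total prompts. A minimal sketch (toy numbers; the function name and the SF value are illustrative, and in the method the SF comes from the empirical attenuation-to-SF transformation):

```python
import numpy as np

def scale_sss(sss, prompts, scatter_fraction):
    """Scale the SSS distribution so that total scatter equals
    SF * total prompt counts (the C-SSS calibration idea)."""
    target = scatter_fraction * prompts.sum()
    return sss * (target / sss.sum())

sss = np.array([1.0, 2.0, 3.0, 2.0, 1.0])      # shape from single scatter simulation
prompts = np.array([10.0, 30.0, 50.0, 30.0, 10.0])  # measured prompt sinogram bins
corrected_scatter = scale_sss(sss, prompts, 0.35)   # predicted SF = 0.35 (assumed)
```

Note that only the global scale changes: the spatial distribution of scatter is still taken from the SSS model, which is what makes the method robust when the scatter-only tail is too small to scale against.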
Scatter analysis and correction for ultrafast X-ray tomography.
Wagner, Michael; Barthel, Frank; Zalucky, Johannes; Bieberle, Martina; Hampel, Uwe
2015-06-13
Ultrafast X-ray computed tomography (CT) is an imaging technique with high potential for the investigation of the hydrodynamics in multiphase flows. For correct determination of the phase distribution of such flows, a high accuracy of the reconstructed image data is essential. In X-ray CT, radiation scatter may cause disturbing artefacts. As the scattering is not considered in standard reconstruction algorithms, additional methods are necessary to correct the detector readings or to prevent the detection of scattered photons. In this paper, we present an analysis of the scattering background for the ultrafast X-ray CT imaging system ROFEX at the Helmholtz-Zentrum Dresden-Rossendorf and propose a correction technique based on collimation and deterministic simulation of first-order scattering. PMID:25939622
Lorentz violation correction to the Aharonov-Bohm scattering
NASA Astrophysics Data System (ADS)
Anacleto, M. A.
2015-10-01
In this paper, using a (2+1)-dimensional field theory approach, we study the Aharonov-Bohm (AB) scattering with Lorentz symmetry breaking. We obtain the modified scattering amplitude for the AB effect due to the small Lorentz violation correction in the breaking parameter and prove that up to one loop the model is free from ultraviolet divergences.
Some radiative corrections to neutrino scattering: Neutral currents
Jenkins, James P.; Goldman, T.
2009-09-01
With the advent of high precision neutrino scattering experiments comes the need for improved radiative corrections. We present a phenomenological analysis of some contributions to the production of photons in neutrino neutral-current scattering that are relevant to experiments approaching the 1% level of accuracy.
Low dose scatter correction for digital chest tomosynthesis
NASA Astrophysics Data System (ADS)
Inscoe, Christina R.; Wu, Gongting; Shan, Jing; Lee, Yueh Z.; Zhou, Otto; Lu, Jianping
2015-03-01
Digital chest tomosynthesis (DCT) provides superior image quality and depth information for thoracic imaging at relatively low dose, though the presence of strong photon scatter degrades the image quality. In most chest radiography, anti-scatter grids are used. However, the grid also blocks a large fraction of the primary beam photons, requiring a significantly higher imaging dose for patients. Previously, we proposed an efficient low dose scatter correction technique using a primary beam sampling apparatus. We implemented the technique in stationary digital breast tomosynthesis and found the method to be efficient in correcting patient-specific scatter with only a 3% increase in dose. In this paper we report a feasibility study of applying the same technique to chest tomosynthesis. This investigation was performed utilizing phantom and cadaver subjects. The method involves an initial tomosynthesis scan of the object. A lead plate with an array of holes, or primary sampling apparatus (PSA), was placed above the object. A second tomosynthesis scan was performed to measure the primary (scatter-free) transmission. The PSA data were used with the full-field projections to compute the scatter, which was then interpolated to full-field scatter maps unique to each projection angle. Full-field projection images were scatter corrected prior to reconstruction. Projections and reconstruction slices were evaluated, and the correction method was found to be effective at improving image quality and practical for clinical implementation.
Thickness-dependent scatter correction algorithm for digital mammography
NASA Astrophysics Data System (ADS)
Gonzalez Trotter, Dinko E.; Tkaczyk, J. Eric; Kaufhold, John; Claus, Bernhard E. H.; Eberhard, Jeffrey W.
2002-05-01
We have implemented a scatter-correction algorithm (SCA) for digital mammography based on an iterative restoration filter. The scatter contribution to the image is modeled by an additive component that is proportional to the filtered unattenuated x-ray photon signal and dependent on the characteristics of the imaged object. The SCA's result is closer to the scatter-free signal than when a scatter grid is used. Presently, the SCA shows improved contrast-to-noise performance relative to the scatter grid for a breast thickness up to 3.6 cm, with potential for better performance up to 6 cm. We investigated the efficacy of our scatter-correction method on a series of x-ray images of anthropomorphic breast phantoms with maximum thicknesses ranging from 3.0 cm to 6.0 cm. A comparison of the scatter-corrected images with the scatter-free signal acquired using a slit collimator shows average deviations of 3 percent or less, even in the edge region of the phantoms. These results indicate that the SCA is superior to a scatter grid for 2D quantitative mammography applications, and may enable 3D quantitative applications in X-ray tomosynthesis.
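An iterative restoration filter of the kind described above can be sketched with a simple additive model: detected = primary + SPR * LP(primary), where LP is a smoothing filter standing in for the scatter kernel and SPR a scatter-to-primary weight. All names and the box-filter kernel are assumptions for illustration, not the SCA's actual filter:

```python
import numpy as np

def deconvolve_scatter(detected, spr, kernel_lp, n_iter=20):
    """Fixed-point iteration inverting d = p + spr * LP(p):
    p_{k+1} = d - spr * LP(p_k). Converges when spr * ||LP|| < 1."""
    p = detected.copy()
    for _ in range(n_iter):
        p = detected - spr * kernel_lp(p)
    return p

def box_lp(x, w=15):
    """Moving-average low-pass stand-in for the scatter kernel."""
    return np.convolve(x, np.ones(w) / w, mode="same")

primary_true = np.ones(200)
primary_true[80:120] = 0.3                       # low-transmission "lesion"
detected = primary_true + 0.4 * box_lp(primary_true)
primary_rec = deconvolve_scatter(detected, 0.4, box_lp)
```

Because the box filter is non-expansive and the SPR of 0.4 is below 1, the iteration is a contraction and the recovered primary converges geometrically to the true signal.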
Solving outside-axial-field-of-view scatter correction problem in PET via digital experimentation
NASA Astrophysics Data System (ADS)
Andreyev, Andriy; Zhu, Yang-Ming; Ye, Jinghan; Song, Xiyun; Hu, Zhiqiang
2016-03-01
Unaccounted scatter impact from unknown outside-axial-field-of-view (outside-AFOV) activity in PET is an important degrading factor for image quality and quantitation. A resource-consuming and unpopular way to account for the outside-AFOV activity is to perform an additional PET/CT scan of adjacent regions. In this work we investigate a solution to the outside-AFOV scatter problem that does not require a PET/CT scan of the adjacent regions. The main motivation for the proposed method is that the measured random-corrected prompt (RCP) sinogram in the background region surrounding the measured object contains only scattered events, originating from both inside- and outside-AFOV activity. In this method, the scatter correction simulation searches through many randomly chosen outside-AFOV activity estimates along with the known inside-AFOV activity, generating a plethora of scatter distribution sinograms. This digital experimentation iterates until a good match is found between a simulated scatter sinogram (which includes the supposed outside-AFOV activity) and the measured RCP sinogram in the background region. The combined scatter impact from inside- and outside-AFOV activity can then be used for scatter correction during the final image reconstruction phase. Preliminary results using measured phantom data indicate a successful phantom length estimate with the method and, therefore, an accurate outside-AFOV scatter estimate.
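The digital experimentation loop above amounts to a randomized search over outside-AFOV activity levels, scored against the scatter-only RCP tail. A 1-D toy sketch (a single scalar activity level and fabricated tail shapes; the real search is over full activity distributions and simulated scatter sinograms):

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_outside_activity(rcp_tail, inside_scatter_tail, outside_template_tail,
                         n_trials=1000):
    """Randomly sample outside-AFOV activity levels; keep the one whose
    simulated scatter best matches the measured randoms-corrected-prompt
    (RCP) tail, which contains scattered events only."""
    best_a, best_err = None, np.inf
    for _ in range(n_trials):
        a = rng.uniform(0.0, 10.0)
        err = np.sum((inside_scatter_tail + a * outside_template_tail
                      - rcp_tail) ** 2)
        if err < best_err:
            best_a, best_err = a, err
    return best_a

tail = np.linspace(1.0, 0.2, 30)       # hypothetical tail-bin profile
inside = 0.6 * tail                    # scatter from known inside-AFOV activity
outside = 0.1 * tail                   # scatter template per unit outside activity
measured = inside + 3.0 * outside      # "true" outside activity level is 3
a_hat = fit_outside_activity(measured, inside, outside)
```

Once the best-matching outside activity is found, the combined inside-plus-outside scatter estimate is what enters the final reconstruction, as the abstract describes.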
Proximity corrected accurate in-die registration metrology
NASA Astrophysics Data System (ADS)
Daneshpanah, M.; Laske, F.; Wagner, M.; Roeth, K.-D.; Czerkas, S.; Yamaguchi, H.; Fujii, N.; Yoshikawa, S.; Kanno, K.; Takamizawa, H.
2014-07-01
193nm immersion lithography is the mainstream production technology for the 20nm and 14nm logic nodes. Multi-patterning of an increasing number of critical layers puts extreme pressure on wafer intra-field overlay, to which mask registration error is a major contributor [1]. The International Technology Roadmap for Semiconductors (ITRS [2]) requests a registration error below 4 nm for each mask of a multi-patterning set forming one layer on the wafer. For mask metrology at the 20nm and 14nm logic nodes, maintaining a precision-to-tolerance (P/T) ratio below 0.25 will be very challenging. Full characterization of mask registration errors in the active area of the die will become mandatory. It is well-known that differences in pattern density and asymmetries in the immediate neighborhood of a feature give rise to apparent shifts in position when measured by optical metrology systems, so-called optical proximity effects. These effects can easily be similar in magnitude to real mask placement errors, and uncorrected can result in mis-qualification of the mask. Metrology results from KLA-Tencor's next generation mask metrology system are reported, applying a model-based algorithm [3] which includes corrections for proximity errors. The proximity corrected, model-based measurements are compared to standard measurements and a methodology presented that verifies the correction performance of the new algorithm.
Coastal Zone Color Scanner atmospheric correction algorithm: multiple scattering effects.
Gordon, H R; Castaño, D J
1987-06-01
An analysis of the errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm is presented in detail. This was prompted by the observations of others that significant errors would be encountered if the present algorithm were applied to a hypothetical instrument possessing higher radiometric sensitivity than the present CZCS. This study provides CZCS users with sufficient information with which to judge the efficacy of the current algorithm with the current sensor and enables them to estimate the impact of the algorithm-induced errors on their applications in a variety of situations. The greatest source of error is the assumption that the molecular and aerosol contributions to the total radiance observed at the sensor can be computed separately. This leads to the requirement that a value epsilon'(lambda,lambda(0)) for the atmospheric correction parameter, which bears little resemblance to its theoretically meaningful counterpart, must usually be employed in the algorithm to obtain an accurate atmospheric correction. The behavior of epsilon'(lambda,lambda(0)) with the aerosol optical thickness and aerosol phase function is thoroughly investigated through realistic modeling of radiative transfer in a stratified atmosphere over a Fresnel reflecting ocean. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, allowing elucidation of the errors along typical CZCS scan lines; this is important since, in the normal application of the algorithm, it is assumed that the same value of epsilon' can be used for an entire CZCS scene or at least for a reasonably large subscene. Two types of variation of epsilon' are found in models for which it would be constant in the single scattering approximation: (1) variation with scan angle in scenes in which a relatively large portion of the aerosol scattering phase function would be examined
NASA Astrophysics Data System (ADS)
Chen, Duan; Cai, Wei; Zinser, Brian; Cho, Min Hyung
2016-09-01
In this paper, we develop an accurate and efficient Nyström volume integral equation (VIE) method for the Maxwell equations for a large number of 3-D scatterers. The Cauchy Principal Values that arise from the VIE are computed accurately using a finite size exclusion volume together with explicit correction integrals consisting of removable singularities. Also, the hyper-singular integrals are computed using interpolated quadrature formulae with tensor-product quadrature nodes for cubes, spheres and cylinders, that are frequently encountered in the design of meta-materials. The resulting Nyström VIE method is shown to have high accuracy with a small number of collocation points and demonstrates p-convergence for computing the electromagnetic scattering of these objects. Numerical calculations of multiple scatterers of cubic, spherical, and cylindrical shapes validate the efficiency and accuracy of the proposed method.
X-ray scatter correction for cone-beam CT using moving blocker array
NASA Astrophysics Data System (ADS)
Zhu, Lei; Strobel, Norbert; Fahrig, Rebecca
2005-04-01
Scatter correction is an active research topic in cone beam computed tomography (CBCT) because CBCT (especially flat-panel detector (FPD) based) systems have large scatter-to-primary ratios. Scatter produces artifact and contrast reduction, and is difficult to model accurately. Direct measurement using a beam blocker array provides accurate scatter estimates. However, since the blocker array also blocks primary radiation, imaging requires a second (or subsequent) scan without the blocker array in place. This approach is inefficient in terms of scanning time and patient dose. To combine accurate scatter estimation and reconstruction into one single scan, a new approach based on an array of moving blockers has been developed. The blocker array moves from projection to projection, such that every detector pixel is not consecutively blocked during the data acquisition, and the missing primary data in the blocker shadows are estimated by interpolation. Using different blocker array trajectories, the algorithm has been evaluated through software phantom studies using Monte Carlo simulations and image processing techniques. Results show that this approach is able to greatly reduce the effect of scatter in the reconstruction. By properly choosing blocker distance and primary data interpolation method, the mean square error of the reconstructed image decreases from 32.3% to 1.13%, and the induced visual artifacts are significantly reduced when a raster-scanning blocker array trajectory is used. Further analysis also shows that artifact arises mostly due to inaccurate scatter estimates, rather than due to interpolation of the primary data.
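The per-projection correction described above reduces to: read scatter directly in the blocker shadows, interpolate it across the detector, and subtract. A 1-D sketch (fabricated primary and scatter profiles; real projections are 2-D, and the blocked primary samples are recovered from neighboring projections as the blocker moves):

```python
import numpy as np

def correct_projection(measured, blocked_idx, x_all):
    """Scatter in blocker shadows is measured directly; interpolate it
    to every detector pixel and subtract to recover the primary."""
    scatter_samples = measured[blocked_idx]          # shadow pixels see scatter only
    scatter_full = np.interp(x_all, x_all[blocked_idx], scatter_samples)
    return measured - scatter_full

x = np.arange(100, dtype=float)
primary = np.exp(-((x - 50) / 20.0) ** 2)
scatter = 0.2 + 0.001 * x                            # slowly varying scatter
blocked = np.arange(0, 100, 10)                      # blocker shadow positions
measured = primary + scatter
measured[blocked] = scatter[blocked]                 # primary is blocked there
corrected = correct_projection(measured, blocked, x)
# corrected equals the primary everywhere except the shadow pixels, whose
# missing primary would come from other projection angles in the real method
```

Because scatter varies slowly across the detector, even a sparse shadow grid with simple linear interpolation recovers it accurately, which is the premise of the moving-blocker design.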
Method for measuring multiple scattering corrections between liquid scintillators
Verbeke, J. M.; Glenn, A. M.; Keefer, G. J.; Wurtz, R. E.
2016-04-11
In this study, a time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source, for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons multiple scattering. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.
Method for measuring multiple scattering corrections between liquid scintillators
NASA Astrophysics Data System (ADS)
Verbeke, J. M.; Glenn, A. M.; Keefer, G. J.; Wurtz, R. E.
2016-07-01
A time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source, for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons multiple scattering. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.
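The time-of-flight idea above can be sketched as a kinematic gate: a detector-to-detector coincidence is a crosstalk (multiple-scattering) candidate when its time lag lies between the photon flight time and the flight time of the slowest detectable neutron. This is an illustrative classifier only, with assumed names and a non-relativistic speed formula, not the authors' analysis code:

```python
import numpy as np

C_CM_PER_NS = 29.9792458   # speed of light
MN_MEV = 939.565           # neutron rest mass energy

def neutron_speed(e_mev):
    """Non-relativistic neutron speed in cm/ns for kinetic energy in MeV."""
    return C_CM_PER_NS * np.sqrt(2.0 * e_mev / MN_MEV)

def is_crosstalk_candidate(dt_ns, d_cm, e_thresh_mev):
    """A coincidence is kinematically consistent with one neutron
    scattering from detector A into detector B if the time difference
    lies between the photon flight time and the flight time of the
    slowest neutron above the detection threshold."""
    t_gamma = d_cm / C_CM_PER_NS
    t_slowest = d_cm / neutron_speed(e_thresh_mev)
    return t_gamma < dt_ns <= t_slowest

# Detectors 50 cm apart, 1 MeV threshold: a ~24 ns lag matches a
# few-MeV neutron, while a ~1 ns lag can only be a gamma coincidence.
flagged = is_crosstalk_candidate(24.0, 50.0, 1.0)
```

Raising the energy threshold narrows the accepted time window, which is why the abstract characterizes crosstalk as a function of neutron energy threshold.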
Hadron mass corrections in semi-inclusive deep inelastic scattering
A. Accardi, T. Hobbs, W. Melnitchouk
2009-11-01
We derive mass corrections for semi-inclusive deep inelastic scattering of leptons from nucleons using a collinear factorization framework which incorporates the initial state mass of the target nucleon and the final state mass of the produced hadron $h$. The hadron mass correction is made by introducing a generalized, finite-$Q^2$ scaling variable $\\zeta_h$ for the hadron fragmentation function, which approaches the usual energy fraction $z_h = E_h/\\nu$ at large $Q^2$.
Correction of Rayleigh Scattering Effects in Cloud Optical Thickness Retrievals
NASA Technical Reports Server (NTRS)
Wang, Meng-Hua; King, Michael D.
1997-01-01
We present results that demonstrate the effects of Rayleigh scattering on the retrieval of cloud optical thickness at a visible wavelength (0.66 µm). The sensor-measured radiance at a visible wavelength (0.66 µm) is usually used to infer remotely the cloud optical thickness from aircraft or satellite instruments. For example, we find that without removing Rayleigh scattering effects, errors in the retrieved cloud optical thickness for a thin water cloud layer (τ = 2.0) range from 15 to 60%, depending on solar zenith angle and viewing geometry. For an optically thick cloud (τ = 10), on the other hand, errors can range from 10 to 60% for large solar zenith angles (≥ 60 deg) because of enhanced Rayleigh scattering. It is therefore particularly important to correct for Rayleigh scattering contributions to the reflected signal from a cloud layer both (1) for the case of thin clouds and (2) for large solar zenith angles and all clouds. On the basis of the single scattering approximation, we propose an iterative method for effectively removing Rayleigh scattering contributions from the measured radiance signal in cloud optical thickness retrievals. The proposed correction algorithm works very well and can easily be incorporated into any cloud retrieval algorithm. The Rayleigh correction method is applicable to clouds at any pressure, provided that the cloud top pressure is known to within +/- 100 hPa. With the Rayleigh correction the errors in retrieved cloud optical thickness are usually reduced to within 3%. In cases of both thin cloud layers and thick clouds with large solar zenith angles, the errors are usually reduced by a factor of about 2 to over 10. The Rayleigh correction algorithm has been tested with simulations for realistic cloud optical and microphysical properties with different solar and viewing geometries. We apply the Rayleigh correction algorithm to the cloud optical thickness retrievals from experimental data obtained during the Atlantic
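The iterative removal scheme can be illustrated with a toy retrieval: estimate the optical thickness ignoring Rayleigh, subtract a Rayleigh contribution that depends on the current estimate, and re-invert until convergence. The reflectance relations below are deliberately simplified stand-ins (a two-stream-like formula), not the paper's radiative transfer model:

```python
import numpy as np

def cloud_reflectance(tau):
    """Toy two-stream-like cloud reflectance."""
    return tau / (tau + 2.0)

def invert_reflectance(r):
    """Inverse of the toy reflectance relation."""
    return 2.0 * r / (1.0 - r)

def retrieve_tau(r_meas, rayleigh, n_iter=10):
    """Iteratively remove a Rayleigh contribution that itself depends on
    the current optical-thickness estimate."""
    tau = invert_reflectance(r_meas)       # first guess: ignore Rayleigh
    for _ in range(n_iter):
        r_ray = rayleigh / (1.0 + tau)     # toy: thick clouds mask the Rayleigh signal
        tau = invert_reflectance(r_meas - r_ray)
    return tau

tau_true = 2.0
r_meas = cloud_reflectance(tau_true) + 0.05 / (1.0 + tau_true)
tau_hat = retrieve_tau(r_meas, 0.05)
```

As in the abstract, a single non-iterative inversion overestimates the thin-cloud optical thickness, and a few iterations are enough because the Rayleigh term is a small perturbation of the cloud signal.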
Mie scatter corrections in single cell infrared microspectroscopy.
Konevskikh, Tatiana; Lukacs, Rozalia; Blümel, Reinhold; Ponossov, Arkadi; Kohler, Achim
2016-06-23
Strong Mie scattering signatures hamper the chemical interpretation and multivariate analysis of the infrared microscopy spectra of single cells and tissues. During recent years, several numerical Mie scatter correction algorithms for the infrared spectroscopy of single cells have been published. In the paper at hand, we critically reviewed existing algorithms for the correction of Mie scattering and suggest improvements. We developed an iterative algorithm based on Extended Multiplicative Scatter Correction (EMSC), for the retrieval of pure absorbance spectra from highly distorted infrared spectra of single cells. The new algorithm uses the van de Hulst approximation formula for the extinction efficiency employing a complex refractive index. The iterative algorithm involves the establishment of an EMSC meta-model. While existing iterative algorithms for the correction of resonant Mie scattering employ three independent parameters for establishing a meta-model, we could decrease the number of parameters from three to two independent parameters, which reduced the calculation time for the Mie scattering curves for the iterative EMSC meta-model by a factor of 10. Moreover, by employing the Hilbert transform for evaluating the Kramers-Kronig relations based on a FFT algorithm in Matlab, we further improved the speed of the algorithm by a factor of 100. For testing the algorithm we simulate distorted apparent absorbance spectra by utilizing the exact theory for the scattering of infrared light at absorbing spheres, taking into account the high numerical aperture of infrared microscopes employed for the analysis of single cells and tissues. In addition, the algorithm was applied to measured absorbance spectra of single lung cancer cells. PMID:27034998
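The extinction-efficiency curve at the heart of the correction can be sketched with the van de Hulst anomalous-diffraction approximation. Note the algorithm in the paper uses the formula with a complex refractive index; this sketch uses only the real-index form to show the characteristic oscillation of Q_ext that distorts single-cell spectra:

```python
import numpy as np

def q_ext_vdh(rho):
    """van de Hulst anomalous-diffraction approximation to the extinction
    efficiency of a sphere; rho = 2 x (n - 1), with size parameter x and
    (real) refractive index n."""
    return 2.0 - (4.0 / rho) * np.sin(rho) + (4.0 / rho ** 2) * (1.0 - np.cos(rho))

rho = np.linspace(0.1, 20.0, 500)
q = q_ext_vdh(rho)
# Q_ext rises from ~0, peaks above 3 near rho ~ 4, then oscillates around 2;
# these broad "wiggles" are the baseline distortions EMSC must model and remove.
```

In the EMSC meta-model, a family of such curves over plausible (x, n) ranges is compressed by principal components, and the measured spectrum is regressed onto those components plus a reference spectrum to recover the pure absorbance.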
Quantum error correction of photon-scattering errors
NASA Astrophysics Data System (ADS)
Akerman, Nitzan; Glickman, Yinnon; Kotler, Shlomi; Ozeri, Roee
2011-05-01
Photon scattering by an atomic ground-state superposition is often considered a source of decoherence. The same process also results in atom-photon entanglement, which has been directly observed in various experiments using a single atom, ion, or diamond nitrogen-vacancy center. Here we combine these two aspects to implement a quantum error correction protocol. We encode a qubit in the two Zeeman-split ground states of a single trapped 88Sr+ ion. Photons are resonantly scattered on the S1/2 -> P1/2 transition. We study the process of single photon scattering, i.e., the excitation of the ion to the excited manifold followed by a spontaneous emission and decay. In the absence of any knowledge of the emitted photon, the ion-qubit coherence is lost. However, the joint ion-photon system still maintains coherence. We show that while scattering events in which the spin population is preserved (Rayleigh scattering) do not affect coherence, spin-changing (Raman) scattering events result in coherent amplitude exchange between the two qubit states. By applying a unitary spin rotation that depends on the detected photon polarization, we retrieve the initial state of the ion qubit. We characterize this quantum error correction protocol by process tomography and demonstrate an ability to preserve ion-qubit coherence with high fidelity.
Radiative corrections to real and virtual muon Compton scattering revisited
NASA Astrophysics Data System (ADS)
Kaiser, N.
2010-06-01
We calculate in closed analytical form the one-photon loop radiative corrections to muon Compton scattering μγ→μγ. Ultraviolet and infrared divergences are both treated in dimensional regularization. Infrared finiteness of the (virtual) radiative corrections is achieved (in the standard way) by including soft photon radiation below an energy cut-off λ. We find that the anomalous magnetic moment α/2π provides only a very small portion of the full radiative corrections. Furthermore, we extend our calculation of radiative corrections to the muon-nucleus bremsstrahlung process (or virtual muon Compton scattering μγ* → μγ). These results are particularly relevant for analyzing the COMPASS experiment at CERN in which muon-nucleus bremsstrahlung serves to calibrate the Primakoff scattering of high-energy pions off a heavy nucleus with the aim of measuring the pion electric and magnetic polarizabilities. We find agreement with an earlier calculation of these radiative corrections based on a different method.
Fully 3D iterative scatter-corrected OSEM for HRRT PET using a GPU.
Kim, Kyung Sang; Ye, Jong Chul
2011-08-01
Accurate scatter correction is especially important for high-resolution 3D positron emission tomography (PET) systems such as the high-resolution research tomograph (HRRT) due to the large scatter fraction in the data. To address this problem, a fully 3D iterative scatter-corrected ordered subset expectation maximization (OSEM), in which a 3D single scatter simulation (SSS) is alternated with a 3D OSEM reconstruction, was recently proposed. However, due to the computational complexity of both the SSS and OSEM algorithms for a high-resolution 3D PET, it has not been widely used in practice. The main objective of this paper is, therefore, to accelerate the fully 3D iterative scatter-corrected OSEM using a graphics processing unit (GPU) and verify its performance for an HRRT. We show that to exploit the massive thread structures of the GPU, several algorithmic modifications are necessary. For the SSS implementation, a sinogram-driven approach is found to be more appropriate than a detector-driven approach, as fast linear interpolation can be performed in the sinogram domain through the use of texture memory. Furthermore, a pixel-driven backprojector and a ray-driven projector can be significantly accelerated by assigning threads to voxels and sinograms, respectively. Using Nvidia's GPU and compute unified device architecture (CUDA), the execution time of an SSS is less than 6 s, a single iteration of OSEM with 16 subsets takes 16 s, and a single iteration of the fully 3D scatter-corrected OSEM, composed of an SSS and six iterations of OSEM, takes under 105 s for the HRRT geometry, which corresponds to acceleration factors of 125× and 141× for OSEM and SSS, respectively. The fully 3D iterative scatter-corrected OSEM algorithm is validated in simulations using Geant4 application for tomographic emission and in actual experiments using an HRRT. PMID:21772080
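The OSEM update at the core of the reconstruction above has a compact form. As a hedged illustration (a NumPy toy with an assumed random system matrix, not the paper's GPU/CUDA implementation), it can be sketched as:

```python
import numpy as np

def osem(A, y, n_iter=50, n_subsets=4, eps=1e-12):
    """Minimal ordered-subset EM sketch for nonnegative data y ~ A @ x."""
    m, n = A.shape
    x = np.ones(n)                                   # flat initial estimate
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:                          # one sub-iteration per subset
            As = A[idx]
            fwd = np.maximum(As @ x, eps)            # forward projection
            x *= (As.T @ (y[idx] / fwd)) / np.maximum(As.sum(axis=0), eps)
    return x

rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, (16, 3))                   # assumed toy system matrix
x_true = np.array([1.0, 2.0, 3.0])
x_rec = osem(A, A @ x_true)                          # noiseless sinogram
```

The inner subset loop is the part the paper accelerates, assigning GPU threads to voxels for the backprojection and to sinogram bins for the forward projection.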
Correcting for Interstellar Scattering Delay in High-precision Pulsar Timing: Simulation Results
NASA Astrophysics Data System (ADS)
Palliyaguru, Nipuni; Stinebring, Daniel; McLaughlin, Maura; Demorest, Paul; Jones, Glenn
2015-12-01
Light travel time changes due to gravitational waves (GWs) may be detected within the next decade through precision timing of millisecond pulsars. Removal of frequency-dependent interstellar medium (ISM) delays due to dispersion and scattering is a key issue in the detection process. Current timing algorithms routinely correct pulse times of arrival (TOAs) for time-variable delays due to cold plasma dispersion. However, none of the major pulsar timing groups correct for delays due to scattering from multi-path propagation in the ISM. Scattering introduces a frequency-dependent phase change in the signal that results in pulse broadening and arrival time delays. Any method to correct the TOA for interstellar propagation effects must be based on multi-frequency measurements that can effectively separate dispersion and scattering delay terms from frequency-independent perturbations such as those due to a GW. Cyclic spectroscopy, first described in an astronomical context by Demorest (2011), is a potentially powerful tool to assist in this multi-frequency decomposition. As a step toward a more comprehensive ISM propagation delay correction, we demonstrate through a simulation that we can accurately recover impulse response functions (IRFs), such as those that would be introduced by multi-path scattering, with a realistic signal-to-noise ratio (S/N). We demonstrate that timing precision is improved when scatter-corrected TOAs are used, under the assumptions of a high S/N and highly scattered signal. We also show that the effect of pulse-to-pulse "jitter" is not a serious problem for IRF reconstruction, at least for jitter levels comparable to those observed in several bright pulsars.
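The multi-frequency decomposition described above can be illustrated with a toy least-squares fit: a frequency-independent (GW-like) offset, a dispersive ν⁻² term, and a scattering term modeled here, as a simplifying assumption, with a fixed ν⁻⁴ scaling (real scattering delays follow the ISM's scattering law rather than one exact power). All numbers are synthetic:

```python
import numpy as np

nu = np.linspace(0.7, 3.0, 12)              # observing frequencies in GHz (assumed)
t0, a, b = 5.0, 4.15, 1.2                   # offset, dispersive, scattering terms (us)
toa = t0 + a * nu**-2 + b * nu**-4          # noiseless simulated TOAs

# Least-squares separation of the three frequency dependencies.
X = np.column_stack([np.ones_like(nu), nu**-2, nu**-4])
coef, *_ = np.linalg.lstsq(X, toa, rcond=None)
```

With noiseless data the fit recovers all three terms exactly; with realistic TOA noise the ν⁻² and ν⁻⁴ columns are correlated, which is why wide frequency coverage matters for the separation.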
Radiative corrections to polarization observables in electron-proton scattering
NASA Astrophysics Data System (ADS)
Borisyuk, Dmitry; Kobushkin, Alexander
2014-08-01
We consider radiative corrections to polarization observables in elastic electron-proton scattering, in particular, for the polarization transfer measurements of the proton form factor ratio μGE/GM. The corrections are of two types: two-photon exchange (TPE) and bremsstrahlung (BS); in the present work we pay special attention to the latter. Assuming a small missing energy or missing mass cutoff, the correction can be represented in a model-independent form, with both electron and proton radiation taken into account. Numerical calculations show that the contribution of the proton radiation is not negligible. Overall, at high Q2 and energies, the total correction to μGE/GM grows, but is dominated by TPE. At low energies both TPE and BS may be significant; the latter amounts to ∼0.01 for some reasonable cut-off choices.
Lowest order QED radiative corrections to longitudinally polarized Moeller scattering
Ilyichev, A.; Zykunov, V.
2005-08-01
The total lowest-order electromagnetic radiative corrections to the observables in Moeller scattering of longitudinally polarized electrons have been calculated. The final expressions, obtained by the covariant method for the infrared divergence cancellation, are free from any unphysical cut-off parameters. Since the calculation is carried out within the ultrarelativistic approximation, our result has a compact form that is convenient for computing. Based on these expressions, the FORTRAN code MERA has been developed. Using this code, a detailed numerical analysis performed under SLAC (E-158) and JLab kinematic conditions has shown that the radiative corrections are significant and rather sensitive to the value of the missing-mass (inelasticity) cuts.
NLO QCD corrections to graviton induced deep inelastic scattering
NASA Astrophysics Data System (ADS)
Stirling, W. J.; Vryonidou, E.
2011-06-01
We consider Next-to-Leading-Order QCD corrections to ADD graviton exchange relevant for Deep Inelastic Scattering experiments. We calculate the relevant NLO structure functions by calculating the virtual and real corrections for a set of graviton interaction diagrams, demonstrating the expected cancellation of the UV and IR divergences. We compare the NLO and LO results at the centre-of-mass energy relevant to HERA experiments as well as for the proposed higher energy lepton-proton collider, LHeC, which has a higher fundamental scale reach.
X-ray scatter correction in breast tomosynthesis with a precomputed scatter map library
Feng, Steve Si Jia; D’Orsi, Carl J.; Newell, Mary S.; Seidel, Rebecca L.; Patel, Bhavika; Sechopoulos, Ioannis
2014-01-01
Purpose: To develop and evaluate the impact on lesion conspicuity of a software-based x-ray scatter correction algorithm for digital breast tomosynthesis (DBT) imaging into which a precomputed library of x-ray scatter maps is incorporated. Methods: A previously developed model of compressed breast shapes undergoing mammography based on principal component analysis (PCA) was used to assemble 540 simulated breast volumes, of different shapes and sizes, undergoing DBT. A Monte Carlo (MC) simulation was used to generate the cranio-caudal (CC) view DBT x-ray scatter maps of these volumes, which were then assembled into a library. This library was incorporated into a previously developed software-based x-ray scatter correction, and the performance of this improved algorithm was evaluated with an observer study of 40 patient cases previously classified as BI-RADS® 4 or 5, evenly divided between mass and microcalcification cases. Observers were presented with both the original images and the scatter corrected (SC) images side by side and asked to indicate their preference, on a scale from −5 to +5, in terms of lesion conspicuity and quality of diagnostic features. Scores were normalized such that a negative score indicates a preference for the original images, and a positive score indicates a preference for the SC images. Results: The scatter map library removes the time-intensive MC simulation from the application of the scatter correction algorithm. While only one in four observers preferred the SC DBT images as a whole (combined mean score = 0.169 ± 0.37, p > 0.39), all observers exhibited a preference for the SC images when the lesion examined was a mass (1.06 ± 0.45, p < 0.0001). When the lesion examined consisted of microcalcification clusters, the observers exhibited a preference for the uncorrected images (−0.725 ± 0.51, p < 0.009). Conclusions: The incorporation of the x-ray scatter map library into the scatter correction algorithm improves the efficiency
Bootsma, G. J.; Verhaegen, F.; Jaffray, D. A.
2015-01-15
suitable GOF metric with strong correlation with the actual error of the scatter fit, S{sub F}. Fitting the scatter distribution to a limited sum of sine and cosine functions using a low-pass filtered fast Fourier transform provided a computationally efficient and accurate fit. The CMCF algorithm reduces the number of photon histories required by over four orders of magnitude. The simulated experiments showed that using a compensator reduced the computational time by a factor between 1.5 and 1.75. The scatter estimates for the simulated and measured data were computed between 35–93 s and 114–122 s, respectively, using 16 Intel Xeon cores (3.0 GHz). The CMCF scatter correction improved the contrast-to-noise ratio by 10%–50% and reduced the reconstruction error to under 3% for the simulated phantoms. Conclusions: The novel CMCF algorithm significantly reduces the computation time required to estimate the scatter distribution by reducing the statistical noise in the MC scatter estimate and limiting the number of projection angles that must be simulated. Using the scatter estimate provided by the CMCF algorithm to correct both simulated and real projection data showed improved reconstruction image quality.
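The low-pass Fourier fit described above can be sketched in a few lines of NumPy (the profile, noise level, and mode cut-off below are illustrative assumptions, not the CMCF parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
u = np.arange(256) / 256.0                            # uniform detector grid
scatter_true = 0.6 + 0.3 * np.cos(2 * np.pi * u) + 0.1 * np.sin(4 * np.pi * u)
noisy = scatter_true + rng.normal(0.0, 0.05, u.size)  # assumed MC noise level

F = np.fft.rfft(noisy)
F[8:] = 0.0                                           # keep modes 0..7 (assumed cut-off)
smooth = np.fft.irfft(F, n=u.size)                    # low-pass Fourier fit

rms_noisy = np.sqrt(np.mean((noisy - scatter_true) ** 2))
rms_fit = np.sqrt(np.mean((smooth - scatter_true) ** 2))
```

Because the retained modes form an orthogonal basis, zeroing the high-frequency bins is equivalent to a least-squares fit to a limited sum of sines and cosines, which is what makes the approach both fast and noise-suppressing.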
Monte Carlo evaluation of accuracy and noise properties of two scatter correction methods
Narita, Y.; Eberl, S.; Nakamura, T.
1996-12-31
Two independent scatter correction techniques, transmission dependent convolution subtraction (TDCS) and triple-energy window (TEW) method, were evaluated in terms of quantitative accuracy and noise properties using Monte Carlo simulation (EGS4). Emission projections (primary, scatter and scatter plus primary) were simulated for {sup 99m}Tc and {sup 201}Tl for numerical chest phantoms. Data were reconstructed with ordered-subset ML-EM algorithm including attenuation correction using the transmission data. In the chest phantom simulation, TDCS provided better S/N than TEW, and better accuracy, i.e., 1.0% vs -7.2% in myocardium, and -3.7% vs -30.1% in the ventricular chamber for {sup 99m}Tc with TDCS and TEW, respectively. For {sup 201}Tl, TDCS provided good visual and quantitative agreement with simulated true primary image without noticeably increasing the noise after scatter correction. Overall TDCS proved to be more accurate and less noisy than TEW, facilitating quantitative assessment of physiological functions with SPECT.
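For reference, the TEW estimator used in the comparison has a simple closed form: the scatter inside the photopeak window is approximated by a trapezoid built from two narrow side windows. A minimal sketch, with assumed counts and window widths:

```python
def tew_scatter_estimate(c_low, c_up, c_peak, w_low, w_up, w_peak):
    """Trapezoidal TEW estimate of scatter counts inside the photopeak window."""
    scatter = (c_low / w_low + c_up / w_up) * w_peak / 2.0
    primary = max(c_peak - scatter, 0.0)     # clip to avoid negative counts
    return scatter, primary

# Assumed numbers: 2-keV side windows around a 20-keV photopeak window.
s, p = tew_scatter_estimate(c_low=120.0, c_up=40.0, c_peak=5000.0,
                            w_low=2.0, w_up=2.0, w_peak=20.0)
```

The side windows contain few counts, which is the source of the extra noise in TEW noted above; TDCS instead estimates scatter from the (well-populated) photopeak data via the transmission measurement.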
Correction of optical absorption and scattering variations in laser speckle rheology measurements
Hajjarian, Zeinab; Nadkarni, Seemantini K.
2014-01-01
Laser Speckle Rheology (LSR) is an optical technique to evaluate the viscoelastic properties by analyzing the temporal fluctuations of backscattered speckle patterns. Variations of optical absorption and reduced scattering coefficients further modulate speckle fluctuations, posing a critical challenge for quantitative evaluation of viscoelasticity. We compare and contrast two different approaches applicable for correcting and isolating the collective influence of absorption and scattering, to accurately measure mechanical properties. Our results indicate that the numerical approach of Monte-Carlo ray tracing (MCRT) reliably compensates for any arbitrary optical variations. When scattering dominates absorption, yet absorption is non-negligible, diffusing wave spectroscopy (DWS) formalisms perform similar to MCRT, superseding other analytical compensation approaches such as Telegrapher equation. The computational convenience of DWS greatly simplifies the extraction of viscoelastic properties from LSR measurements in a number of chemical, industrial, and biomedical applications. PMID:24663983
Multiple-scattering corrections to the Beer-Lambert law
Zardecki, A.
1983-01-01
The effect of multiple scattering on the validity of the Beer-Lambert law is discussed for a wide range of particle-size parameters and optical depths. To predict the amount of received radiant power, appropriate correction terms are introduced. For particles larger than or comparable to the wavelength of radiation, the small-angle approximation is adequate; whereas for small densely packed particles, the diffusion theory is advantageously employed. These two approaches are used in the context of the problem of laser-beam propagation in a dense aerosol medium. In addition, preliminary results obtained by using a two-dimensional finite-element discrete-ordinates transport code are described. Multiple-scattering effects for laser propagation in fog, cloud, rain, and aerosol cloud are modeled.
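As a minimal numeric sketch of why such corrections matter: when a fraction of the scattered light is still collected in the near-forward direction, the medium is effectively more transparent than the uncorrected Beer-Lambert law predicts. The single effective-extinction form below is a similarity-type approximation, not the paper's regime-specific corrections, and `f` is an assumed value:

```python
import math

def transmitted(p0, tau, f=0.0):
    """Beer-Lambert with an effective optical depth (1 - f) * tau, where f is
    the assumed fraction of scattered light still collected near-forward."""
    return p0 * math.exp(-(1.0 - f) * tau)

p_bl = transmitted(1.0, 2.0)         # uncorrected Beer-Lambert
p_ms = transmitted(1.0, 2.0, f=0.3)  # forward scattering raises received power
```

For large particles, f grows with the strongly peaked forward lobe (the small-angle regime in the abstract); for small, densely packed particles this simple form breaks down and a diffusion treatment is needed instead.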
NASA Technical Reports Server (NTRS)
Jefferies, S. M.; Duvall, T. L., Jr.
1991-01-01
A measurement of the intensity distribution in an image of the solar disk will be corrupted by a spatial redistribution of the light that is caused by the earth's atmosphere and the observing instrument. A simple correction method is introduced here that is applicable for solar p-mode intensity observations obtained over a period of time in which there is a significant change in the scattering component of the point spread function. The method circumvents the problems incurred with an accurate determination of the spatial point spread function and its subsequent deconvolution from the observations. The method only corrects the spherical harmonic coefficients that represent the spatial frequencies present in the image and does not correct the image itself.
Jeong, Hyunjo; Zhang, Shuzeng; Li, Xiongbing; Barnard, Dan
2015-09-15
The accurate measurement of acoustic nonlinearity parameter β for fluids or solids generally requires making corrections for diffraction effects due to finite size geometry of transmitter and receiver. These effects are well known in linear acoustics, while those for second harmonic waves have not been well addressed and therefore not properly considered in previous studies. In this work, we explicitly define the attenuation and diffraction corrections using the multi-Gaussian beam (MGB) equations which were developed from the quasilinear solutions of the KZK equation. The effects of making these corrections are examined through the simulation of β determination in water. Diffraction corrections are found to have more significant effects than attenuation corrections, and the β values of water can be estimated experimentally with less than 5% errors when the exact second harmonic diffraction corrections are used together with the negligible attenuation correction effects on the basis of linear frequency dependence between attenuation coefficients, α{sub 2} ≃ 2α{sub 1}.
NASA Astrophysics Data System (ADS)
Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.
2016-03-01
SMARTIES calculates the optical properties of oblate and prolate spheroidal particles, with comparable capabilities and ease-of-use as Mie theory for spheres. This suite of MATLAB codes provides a fully documented implementation of an improved T-matrix algorithm for the theoretical modelling of electromagnetic scattering by particles of spheroidal shape. Included are scripts that cover a range of scattering problems relevant to nanophotonics and plasmonics, including calculation of far-field scattering and absorption cross-sections for fixed incidence orientation, orientation-averaged cross-sections and scattering matrix, surface-field calculations as well as near-fields, wavelength-dependent near-field and far-field properties, and access to lower-level functions implementing the T-matrix calculations, including the T-matrix elements which may be calculated more accurately than with competing codes.
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas; Hariharan, S. I.; Maccamy, R. C.
1993-01-01
We consider the solution of scattering problems for the wave equation using approximate boundary conditions at artificial boundaries. These conditions are explicitly viewed as approximations to an exact boundary condition satisfied by the solution on the unbounded domain. We study the short and long term behavior of the error. It is proved that, in two space dimensions, no local in time, constant coefficient boundary operator can lead to accurate results uniformly in time for the class of problems we consider. A variable coefficient operator is developed which attains better accuracy (uniformly in time) than is possible with constant coefficient approximations. The theory is illustrated by numerical examples. We also analyze the proposed boundary conditions using energy methods, leading to asymptotically correct error bounds.
NASA Astrophysics Data System (ADS)
Robinson, Andrew P.; Tipping, Jill; Cullen, David M.; Hamilton, David
2016-07-01
Accurate activity quantification is the foundation for all methods of radiation dosimetry for molecular radiotherapy (MRT). The requirements for patient-specific dosimetry using single photon emission computed tomography (SPECT) are challenging, particularly with respect to scatter correction. In this paper data from phantom studies, combined with results from a fully validated Monte Carlo (MC) SPECT camera simulation, are used to investigate the influence of the triple energy window (TEW) scatter correction on SPECT activity quantification for {sup 177}Lu MRT. Results from phantom data show that: (1) activity quantification for the total counts in the SPECT field-of-view demonstrates a significant overestimation in total activity recovery when TEW scatter correction is applied at low activities (⩽200 MBq). (2) Applying the TEW scatter correction to activity quantification within a volume-of-interest with no background activity provides minimal benefit. (3) In the case of activity distributions with background activity, an overestimation of recovered activity of up to 30% is observed when using the TEW scatter correction. Data from MC simulation were used to perform a full analysis of the composition of events in a clinically reconstructed volume of interest. This allowed, for the first time, the separation of the relative contributions of partial volume effects (PVE) and inaccuracies in TEW scatter compensation to the observed overestimation of activity recovery. It is shown that, even with perfect partial volume compensation, TEW scatter correction can overestimate activity recovery by up to 11%. MC data are used to demonstrate that even a localized and optimized isotope-specific TEW correction cannot reflect a patient-specific activity distribution without prior knowledge of the complete activity distribution. This highlights the important role of MC simulation in SPECT activity quantification.
Single-scan scatter correction for cone-beam CT using a stationary beam blocker: a preliminary study
NASA Astrophysics Data System (ADS)
Niu, Tianye; Zhu, Lei
2011-03-01
The performance of cone-beam CT (CBCT) is greatly limited by scatter artifacts. The existing measurement-based methods have promising advantages as a standard scatter correction solution, except that they currently require multiple scans or moving the beam blocker during data acquisition to compensate for the missing primary data. These approaches are therefore impractical in clinical applications. In this work, we propose a new measurement-based scatter correction method to achieve accurate reconstruction with one single scan and a stationary beam blocker, two seemingly incompatible features which enable simple and effective scatter correction without an increase of scan time or patient dose. Based on CT reconstruction theory, we distribute the blocked areas over one projection where primary signals are considered to be redundant in a full scan. The CT image quality is not degraded even with primary loss. Scatter is accurately estimated by interpolation and scatter-corrected CT images are obtained using an FDK-based reconstruction. In a Monte Carlo simulation study, we first optimize the beam blocker geometry using projections on the Shepp-Logan phantom and then carry out a complete simulation of a CBCT scan on a water phantom. With the scatter-to-primary ratio around 1.0, our method reduces the CT number error from 293 to 2.9 Hounsfield units (HU) around the phantom center. The proposed approach is further evaluated on a CBCT tabletop system. On the Catphan©600 phantom, the reconstruction error is reduced from 202 to 10 HU in the selected region of interest after the proposed correction.
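The interpolation step can be sketched as follows (detector size, blocker layout, and the primary/scatter profiles are illustrative assumptions, not the optimized geometry of the paper): pixels behind the blocker strips record scatter only, and a smooth interpolation of those samples is subtracted from the open-field measurement:

```python
import numpy as np

u = np.arange(256, dtype=float)                   # detector coordinate
primary = np.exp(-((u - 128.0) / 60.0) ** 2)      # assumed primary profile
scatter = 0.2 + 0.1 * np.sin(np.pi * u / 256.0)   # slowly varying scatter
raw = primary + scatter                           # open-field measurement

blocked = np.arange(8, 256, 32)                   # pixels behind blocker strips
scatter_only = scatter[blocked]                   # blocker removes the primary
scatter_est = np.interp(u, blocked, scatter_only) # interpolate across detector
corrected = raw - scatter_est                     # scatter-corrected projection
```

The interpolation works because scatter is spatially low-frequency; the primary lost behind the strips is what the paper recovers from the redundant rays of a full scan.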
NASA Technical Reports Server (NTRS)
Pueschel, R. F.; Overbeck, V. R.; Snetsinger, K. G.; Russell, P. B.; Ferry, G. V.
1990-01-01
The use of the active scattering aerosol spectrometer probe (ASAS-X) to measure sulfuric acid aerosols on U-2 and ER-2 research aircraft has yielded results that are at times ambiguous due to the dependence of particles' optical signatures on refractive index as well as physical dimensions. The calibration correction of the ASAS-X optical spectrometer probe for stratospheric aerosol studies is validated through an independent and simultaneous sampling of the particles with impactors; sizing and counting of particles on SEM images yields total particle areas and volumes. Upon correction of calibration in light of these data, spectrometer results averaged over four size distributions are found to agree with similarly averaged impactor results to within a few percent, indicating that the optical properties or chemical composition of the sample aerosol must be known in order to achieve accurate optical aerosol spectrometer size analysis.
Patient-specific scatter correction for flat-panel detector-based cone-beam CT imaging
NASA Astrophysics Data System (ADS)
Zhao, Wei; Brunner, Stephen; Niu, Kai; Schafer, Sebastian; Royalty, Kevin; Chen, Guang-Hong
2015-02-01
A patient-specific scatter correction algorithm is proposed to mitigate scatter artefacts in cone-beam CT (CBCT). The approach belongs to the category of convolution-based methods, in which a scatter potential function is convolved with a convolution kernel to estimate the scatter profile. A key step in this method is to determine the free parameters introduced in both the scatter potential and the convolution kernel using a so-called calibration process, which seeks the optimal parameters such that the models for both the scatter potential and the convolution kernel optimally fit previously known coarse estimates of the scatter profiles of the image object. Both direct measurements and Monte Carlo (MC) simulations have been proposed by other investigators to achieve the aforementioned rough estimates. In the present paper, a novel method has been proposed and validated to generate the needed coarse scatter profile for parameter calibration in the convolution method. The method is based upon an image segmentation of the scatter-contaminated CBCT image volume, followed by a reprojection of the segmented image volume using a given x-ray spectrum. The reprojected data are subtracted from the scatter-contaminated projection data to generate a coarse estimate of the needed scatter profile used in parameter calibration. The method was qualitatively and quantitatively evaluated using numerical simulations and experimental CBCT data acquired on a clinical CBCT imaging system. Results show that the proposed algorithm can significantly reduce scatter artefacts and recover the correct CT number. Numerical simulation results show the method is patient specific, can accurately estimate the scatter, and is robust with respect to the segmentation procedure. For experimental and in vivo human data, the results show the CT number can be successfully recovered and anatomical structure visibility can be significantly improved.
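The convolution model itself is compact; below is a sketch with an assumed Gaussian kernel and a linear scatter potential, stand-ins for the calibrated forms in the paper:

```python
import numpy as np

def scatter_estimate(projection, amplitude=0.15, width=25.0):
    """Convolve a linear scatter potential with a normalized Gaussian kernel.
    amplitude and width are assumed, uncalibrated stand-in parameters."""
    k = np.arange(-64.0, 65.0)
    kernel = np.exp(-(k / width) ** 2)
    kernel /= kernel.sum()                       # unit-area smoothing kernel
    return np.convolve(amplitude * projection, kernel, mode="same")

proj = np.ones(256)                 # flat projection for illustration
s = scatter_estimate(proj)          # ~0.15 in the interior, lower at the edges
```

Calibration then amounts to adjusting parameters like `amplitude` and `width` until this estimate matches the coarse scatter profile obtained from the segmentation-reprojection step.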
One-loop Electroweak Radiative Corrections for Polarized Møller Scattering
NASA Astrophysics Data System (ADS)
Barkanova, Svetlana; Aleksejevs, Aleksandrs; Ilyichev, Alexander; Kolomensky, Yury; Zykunov, Vladimir
2011-04-01
Møller scattering measurements are a clean, powerful probe of new physics effects. However, before physics of interest can be extracted from the experimental data, radiative corrections must be taken into account very carefully. Using two different approaches, we perform updated and detailed calculations of the complete one-loop set of electroweak radiative corrections to parity violating electron-electron scattering asymmetry at low energies relevant for the ultra-precise 11 GeV MOLLER experiment planned at JLab. Although contributions from some of the self-energies and vertex diagrams calculated in the two approaches can differ significantly, our full gauge-invariant set still guarantees that the total relative weak corrections are in excellent agreement for the two methods of calculation. Our numerical results are presented for a range of experimental cuts and the relative importance of various contributions is analyzed. We also provide very compact expressions analytically free from non-physical parameters and show them to be valid for fast yet accurate estimations.
A Cavity Corrected 3D-RISM Functional for Accurate Solvation Free Energies.
Truchon, Jean-François; Pettitt, B Montgomery; Labute, Paul
2014-03-11
We show that an Ng bridge function modified version of the three-dimensional reference interaction site model (3D-RISM-NgB) solvation free energy method can accurately predict the hydration free energy (HFE) of a set of 504 organic molecules. To achieve this, a single unique constant parameter was adjusted to the computed HFE of single atom Lennard-Jones solutes. It is shown that 3D-RISM is relatively accurate at predicting the electrostatic component of the HFE without correction but requires a modification of the nonpolar contribution that originates in the formation of the cavity created by the solute in water. We use a free energy functional with the Ng scaling of the direct correlation function [Ng, K. C. J. Chem. Phys. 1974, 61, 2680]. This produces a rapid, reliable small molecule HFE calculation for applications in drug design. PMID:24634616
NASA Astrophysics Data System (ADS)
Mobberley, Sean David
Accurate, cross-scanner assessment of in-vivo air density used to quantitatively assess the amount and distribution of emphysema in COPD subjects has remained elusive. Hounsfield units (HU) within tracheal air can be considerably more positive than -1000 HU. With the advent of new dual-source scanners which employ dedicated scatter correction techniques, it is of interest to evaluate how the quantitative measures of lung density compare between dual-source and single-source scan modes. This study has sought to characterize in-vivo and phantom-based air metrics using dual-energy computed tomography technology where the nature of the technology has required adjustments to scatter correction. Anesthetized ovine (N=6), swine (N=13: more human-like rib cage shape), a lung phantom and a thoracic phantom were studied using a dual-source MDCT scanner (Siemens Definition Flash). Multiple dual-source dual-energy (DSDE) and single-source (SS) scans taken at different energy levels and scan settings were acquired for direct quantitative comparison. Density histograms were evaluated for the lung, tracheal, water and blood segments. Image data were obtained at 80, 100, 120, and 140 kVp in the SS mode (B35f kernel) and at 80, 100, 140, and 140-Sn (tin filtered) kVp in the DSDE mode (B35f and D30f kernels), in addition to variations in dose, rotation time, and pitch. To minimize the effect of cross-scatter, the phantom scans in the DSDE mode were obtained by reducing the tube current of one of the tubes to its minimum (near zero) value. When using image data obtained in the DSDE mode, the median HU values in the tracheal regions of all animals and the phantom were consistently closer to -1000 HU regardless of reconstruction kernel (chapters 3 and 4). Similarly, HU values of water and blood were consistently closer to their nominal values of 0 HU and 55 HU respectively. When using image data obtained in the SS mode the air CT numbers demonstrated a consistent positive shift of up to 35 HU
Fullerton, G D; Keener, C R; Cameron, I L
1994-12-01
The authors describe empirical corrections to ideally dilute expressions for freezing point depression of aqueous solutions to arrive at new expressions accurate up to three molal concentration. The method assumes non-ideality is due primarily to solute/solvent interactions, such that the correct free water mass Mwc is the mass of water in solution Mw minus I·M(s), where M(s) is the mass of solute and I an empirical solute/solvent interaction coefficient. The interaction coefficient is easily derived from the constant in the linear regression fit to the experimental plot of Mw/M(s) as a function of 1/ΔT (inverse freezing point depression). The I-value, when substituted into the new thermodynamic expressions derived from the assumption of equivalent activity of water in solution and ice, provides accurate predictions of freezing point depression (±0.05 °C) up to 2.5 molal concentration for all the test molecules evaluated: glucose, sucrose, glycerol and ethylene glycol. The concentration limit is the approximate monolayer water coverage limit for the solutes, which suggests that direct solute/solute interactions are negligible below this limit. This is contrary to the view of many authors due to the common practice of including hydration forces (a soft potential added to the hard-core atomic potential) in the interaction potential between solute particles. When this is recognized, the two viewpoints are in fundamental agreement. PMID:7699200
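The regression step described above can be sketched numerically. Applying the ideally dilute law to the corrected free water mass, ΔT = Kf·M(s)/(M_molar·Mwc) with Mwc = Mw − I·M(s) rearranges to Mw/M(s) = Kf/(M_molar·ΔT) + I, so I is simply the intercept of the linear fit. The values below (glucose-like molar mass, assumed I) are illustrative, not the paper's data:

```python
import numpy as np

# Model: Mw/Ms = (Kf / M_molar) * (1/dT) + I, so the interaction
# coefficient I is the intercept of Mw/Ms regressed against 1/dT.
Kf = 1.86          # cryoscopic constant of water, K kg/mol
M_molar = 0.180    # molar mass of glucose, kg/mol
I_true = 0.3       # assumed interaction coefficient (synthetic)

inv_dT = np.linspace(0.2, 2.0, 20)           # 1/deltaT values, 1/K
ratio = (Kf / M_molar) * inv_dT + I_true     # synthetic Mw/Ms data

slope, intercept = np.polyfit(inv_dT, ratio, 1)
print(round(intercept, 3))   # -> 0.3, the recovered I-value
```

With real data the fit residuals, rather than being exactly zero, indicate how well the single-coefficient correction captures the non-ideality.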
Aleksejevs, Aleksandrs; Barkanova, Svetlana; Ilyichev, Alexander; Zykunov, Vladimir
2010-11-01
We perform updated and detailed calculations of the complete NLO set of electroweak radiative corrections to parity-violating e- e- --> e- e- (gamma) scattering asymmetries at energies relevant for the ultra-precise Møller experiment planned at JLab. Our numerical results are presented for a range of experimental cuts, and the relative importance of various contributions is analyzed. We also provide very compact analytical expressions free from non-physical parameters and show them to be valid for fast yet accurate estimates.
NASA Astrophysics Data System (ADS)
Singh, Malkiat; Bettenhausen, Michael H.
2011-08-01
Faraday rotation changes the polarization plane of linearly polarized microwaves which propagate through the ionosphere. To correct for ionospheric polarization error, it is necessary to have electron density profiles on a global scale that represent the ionosphere in real time. We use raytracing through the combined models of ionospheric conductivity and electron density (ICED), Bent, and Gallagher models (the RIBG model) to specify the ionospheric conditions, ingesting GPS data from observing stations that are as close as possible to the observation time and location of the space system for which the corrections are required. To accurately calculate Faraday rotation corrections, we also utilize the raytrace utility of the RIBG model instead of the usual shell-model assumption for the ionosphere. We use WindSat data, which exhibit a wide range of raypath orientations and a high data rate of observations, to provide a realistic data set for analysis. The standard single-shell models at 350 and 400 km are studied along with a new three-shell model and compared with the raytrace method for computation time and accuracy. We have compared the Faraday results obtained with climatological (International Reference Ionosphere and RIBG) and physics-based (Global Assimilation of Ionospheric Measurements) ionospheric models. We also study the impact of limitations in the availability of GPS data on the accuracy of the Faraday rotation calculations.
NASA Astrophysics Data System (ADS)
Gillen, Rebecca; Firbank, Michael J.; Lloyd, Jim; O'Brien, John T.
2015-09-01
This study investigated whether the appearance and diagnostic accuracy of HMPAO brain perfusion SPECT images could be improved by using CT-based attenuation and scatter correction compared with the uniform attenuation correction method. A cohort of subjects who were clinically categorized as Alzheimer’s Disease (n=38), Dementia with Lewy Bodies (n=29) or healthy normal controls (n=30) underwent SPECT imaging with Tc-99m HMPAO and a separate CT scan. The SPECT images were processed using: (a) a correction map derived from the subject’s CT scan, (b) the Chang uniform approximation for correction, or (c) no attenuation correction. Images were visually inspected. The ratios between key regions of interest known to be affected or spared in each condition were calculated for each correction method, and the differences between these ratios were evaluated. The images produced using the different corrections were noted to be visually different. However, ROI analysis found similar statistically significant differences between control and dementia groups and between AD and DLB groups regardless of the correction map used. We did not identify an improvement in diagnostic accuracy in images corrected using CT-based attenuation and scatter correction, compared with those corrected using a uniform correction map.
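For reference, the Chang uniform approximation in (b) amounts to dividing each reconstructed voxel by the mean attenuation survival factor over all directions through an assumed uniform attenuator. A minimal sketch for a circular attenuator (illustrative geometry and μ, not the study's parameters):

```python
import numpy as np

def chang_factor(point, radius, mu, n_angles=64):
    """First-order Chang correction factor at a point inside a
    uniform circular attenuator (mu in 1/cm, lengths in cm)."""
    x, y = point
    thetas = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    dx, dy = np.cos(thetas), np.sin(thetas)
    # Distance t to the boundary along each direction: solve
    # (x + t*dx)^2 + (y + t*dy)^2 = radius^2 for the positive root.
    b = x * dx + y * dy
    t = -b + np.sqrt(b * b - (x * x + y * y - radius**2))
    return 1.0 / np.mean(np.exp(-mu * t))

# At the centre of a 10 cm radius disc with mu = 0.12/cm every path
# is 10 cm long, so the factor reduces to exp(mu * R).
print(round(chang_factor((0.0, 0.0), 10.0, 0.12), 2))   # -> 3.32
```

A CT-derived correction map replaces the single uniform μ with measured, spatially varying attenuation, which is the comparison the study makes.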
NASA Astrophysics Data System (ADS)
Chen, Jingyi; Zebker, Howard A.; Knight, Rosemary
2015-11-01
Interferometric synthetic aperture radar (InSAR) is a radar remote sensing technique for measuring surface deformation to millimeter-level accuracy at meter-scale resolution. Obtaining accurate deformation measurements in agricultural regions is difficult because the signal is often decorrelated due to vegetation growth. We present here a new algorithm for retrieving InSAR deformation measurements over areas with severe vegetation decorrelation using adaptive phase interpolation between persistent scatterer (PS) pixels, those points at which surface scattering properties do not change much over time and thus decorrelation artifacts are minimal. We apply this algorithm to L-band ALOS interferograms acquired over the San Luis Valley, Colorado, and the Tulare Basin, California. In both areas, the pumping of groundwater for irrigation results in deformation of the land that can be detected using InSAR. We show that the PS-based algorithm can significantly reduce the artifacts due to vegetation decorrelation while preserving the deformation signature.
Accurate solution of the proton-hydrogen three-body scattering problem
NASA Astrophysics Data System (ADS)
Abdurakhmanov, I. B.; Kadyrov, A. S.; Bray, I.
2016-02-01
An accurate solution to the fundamental three-body problem of proton-hydrogen scattering including direct scattering and ionization, electron capture and electron capture into the continuum (ECC) is presented. The problem has been addressed using a quantum-mechanical two-center convergent close-coupling approach. At each energy the internal consistency of the solution is demonstrated with the help of single-center calculations, with both approaches converging independently to the same electron-loss cross section. This is the sum of the electron capture, ECC and direct ionization cross sections, which are only obtainable separately in the solution of the problem using the two-center expansion. Agreement with experiment for the electron-capture cross section is excellent. However, for the ionization cross sections some discrepancy exists. Given the demonstrated internal consistency we remain confident in the provided theoretical solution.
Commissioning a passive-scattering proton therapy nozzle for accurate SOBP delivery
Engelsman, M.; Lu, H.-M.; Herrup, D.; Bussiere, M.; Kooy, H. M.
2009-01-01
Proton radiotherapy centers that currently use passively scattered proton beams perform field-specific calibrations for a non-negligible fraction of treatment fields, which is time- and resource-consuming. Our improved understanding of the passive scattering mode of the IBA universal nozzle, especially of the current modulation function, allowed us to re-commission our treatment control system for accurate delivery of SOBPs of any range and modulation, and to predict the output for each of these fields. We moved away from individual field calibrations to a state where continued quality assurance of SOBP field delivery is ensured by limited system-wide measurements that require only one hour per week. This manuscript reports on a protocol for generation of desired SOBPs and prediction of dose output. PMID:19610306
Hong Xinguo; Hao Quan
2009-01-15
In this paper, we report a method of precise in situ x-ray scattering measurements on protein solutions using small stationary sample cells. Although reduction in the radiation damage induced by intense synchrotron radiation sources is indispensable for the correct interpretation of scattering data, there is still a lack of effective methods to overcome radiation-induced aggregation and extract scattering profiles free from chemical or structural damage. It is found that radiation-induced aggregation mainly begins on the surface of the sample cell and grows along the beam path; the diameter of the damaged region is comparable to the x-ray beam size. Radiation-induced aggregation can be effectively avoided by using a two-dimensional scan (2D mode), with an interval as small as 1.5 times the beam size, at low temperature (e.g., 4 °C). A radiation-sensitive protein, bovine hemoglobin, was used to test the method. A standard deviation of less than 5% in the small angle region was observed from a series of nine spectra recorded in 2D mode, in contrast to the intensity variation seen using the conventional stationary technique, which can exceed 100%. Wide-angle x-ray scattering data were collected at a standard macromolecular diffraction station using the same data collection protocol and showed a good signal/noise ratio (better than the reported data on the same protein using a flow cell). The results indicate that this method is an effective approach for obtaining precise measurements of protein solution scattering.
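The 2D-mode geometry is straightforward to generate: exposure positions on a grid spaced 1.5 beam widths apart, so each spectrum samples fresh, undamaged solution. A rough sketch with an assumed 0.2 mm beam, producing a series of nine positions like the one described (the function name and dimensions are hypothetical):

```python
def scan_positions(n_x, n_y, beam_mm, spacing_factor=1.5):
    """Grid of exposure positions spaced spacing_factor * beam size
    apart, so successive exposures avoid previously irradiated,
    aggregation-prone regions of the cell."""
    step = spacing_factor * beam_mm
    return [(i * step, j * step) for j in range(n_y) for i in range(n_x)]

# Nine exposures in a row with an assumed 0.2 mm beam: neighbouring
# spots are 0.3 mm apart, each in fresh solution.
positions = scan_positions(9, 1, 0.2)
print(len(positions))   # -> 9
```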
Robust scatter correction method for cone-beam CT using an interlacing-slit plate
NASA Astrophysics Data System (ADS)
Huang, Kui-Dong; Xu, Zhe; Zhang, Ding-Hua; Zhang, Hua; Shi, Wen-Long
2016-06-01
Cone-beam computed tomography (CBCT) has been widely used in medical imaging and industrial nondestructive testing, but the presence of scattered radiation causes significant reduction of image quality. In this article, a robust scatter correction method for CBCT using an interlacing-slit plate (ISP) is developed for convenient practical use. Firstly, a Gaussian filtering method is proposed to compensate for the missing data of the inner scatter image, which simultaneously avoids excessively large calculated inner scatter values and smooths the inner scatter field. Secondly, an interlacing-slit scan without detector gain correction is carried out to enhance the practicality and convenience of the scatter correction method. Finally, a denoising step for scatter-corrected projection images is added to the process flow to control noise amplification. The experimental results show that the improved method not only makes the scatter correction more robust and convenient, but also achieves good quality in the scatter-corrected slice images. Supported by National Science and Technology Major Project of the Ministry of Industry and Information Technology of China (2012ZX04007021), Aeronautical Science Fund of China (2014ZE53059), and Fundamental Research Funds for Central Universities of China (3102014KYJD022)
Scatter correction method for cone-beam CT based on interlacing-slit scan
NASA Astrophysics Data System (ADS)
Huang, Kui-Dong; Zhang, Hua; Shi, Yi-Kai; Zhang, Liang; Xu, Zhe
2014-09-01
Cone-beam computed tomography (CBCT) has the notable features of high efficiency and high precision, and is widely used in areas such as medical imaging and industrial non-destructive testing. However, the presence of scattered radiation reduces the quality of CT images. Drawing on the slit collimation approach, a scatter correction method for CBCT based on an interlacing-slit scan is proposed. Firstly, according to the characteristics of CBCT imaging, a scatter suppression plate with interlacing slits is designed and fabricated. Then the imaging of the scatter suppression plate is analyzed, and a scatter correction calculation method for CBCT based on image fusion is proposed: it splices a complete set of scatter-suppressed projection images from the interlacing-slit projection images of the left and right imaging regions of the scatter suppression plate, and simultaneously completes the scatter correction within the flat-panel detector (FPD). Finally, the overall process of scatter suppression and correction is provided. The experimental results show that this method can significantly improve the clarity of the slice images and achieve a good scatter correction.
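The image-fusion splicing can be illustrated with a toy example, assuming (simplistically) that the two scans expose complementary alternating slit bands; real projections, band geometry, and detector gain behaviour would of course differ:

```python
import numpy as np

# Two stand-in projections: each scan exposes only half of the
# alternating slit bands, and the fused projection takes each band
# from the scan in which it was exposed.
rows, cols, band = 8, 6, 2
left_scan = np.full((rows, cols), 100.0)    # synthetic projection data
right_scan = np.full((rows, cols), 200.0)

band_index = (np.arange(rows) // band) % 2   # alternating band labels
use_left = (band_index == 0)[:, None]        # broadcast over columns
fused = np.where(use_left, left_scan, right_scan)
print(fused[:, 0])   # -> [100. 100. 200. 200. 100. 100. 200. 200.]
```

The real method must additionally estimate and subtract the scatter signal measured behind the opaque regions before splicing.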
NASA Astrophysics Data System (ADS)
Park, Seyoun; Robinson, Adam; Quon, Harry; Kiess, Ana P.; Shen, Colette; Wong, John; Plishker, William; Shekhar, Raj; Lee, Junghoon
2016-03-01
In this paper, we propose a CT-CBCT registration method to accurately predict the tumor volume change based on daily cone-beam CTs (CBCTs) during radiotherapy. CBCT is commonly used to reduce patient setup error during radiotherapy, but its poor image quality impedes accurate monitoring of anatomical changes. Although physician's contours drawn on the planning CT can be automatically propagated to daily CBCTs by deformable image registration (DIR), artifacts in CBCT often cause undesirable errors. To improve the accuracy of the registration-based segmentation, we developed a DIR method that iteratively corrects CBCT intensities by local histogram matching. Three popular DIR algorithms (B-spline, demons, and optical flow) with the intensity correction were implemented on a graphics processing unit for efficient computation. We evaluated their performances on six head and neck (HN) cancer cases. For each case, four trained scientists manually contoured the nodal gross tumor volume (GTV) on the planning CT and every other fraction CBCTs, to which the propagated GTV contours by DIR were compared. The performance was also compared with commercial image registration software based on conventional mutual information (MI), VelocityAI (Varian Medical Systems Inc.). The volume differences (mean±std in cc) between the average of the manual segmentations and the automatic segmentations are 3.70±2.30 (B-spline), 1.25±1.78 (demons), 0.93±1.14 (optical flow), and 4.39±3.86 (VelocityAI). The proposed method significantly reduced the estimation error by 9% (B-spline), 38% (demons), and 51% (optical flow) over the results using VelocityAI. Although demonstrated only on HN nodal GTVs, the results imply that the proposed method can produce improved segmentation of other critical structures over conventional methods.
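The intensity-correction idea can be sketched with a global histogram match between a CBCT slice and the planning CT. The paper applies the matching locally and iteratively inside the DIR loop; this standalone version with synthetic arrays only shows the CDF-matching mechanics:

```python
import numpy as np

def histogram_match(source, reference):
    """Map source intensities onto the reference histogram by
    matching empirical CDFs (a global version of the correction;
    the paper applies it in local neighbourhoods)."""
    s_vals, s_idx, s_counts = np.unique(source.ravel(),
                                        return_inverse=True,
                                        return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    matched = np.interp(s_cdf, r_cdf, r_vals)   # CDF -> reference value
    return matched[s_idx].reshape(source.shape)

rng = np.random.default_rng(0)
cbct = rng.normal(50.0, 30.0, (64, 64))    # synthetic "CBCT" slice
ct = rng.normal(0.0, 10.0, (64, 64))       # synthetic "CT" slice
corrected = histogram_match(cbct, ct)
print(corrected.shape)   # -> (64, 64)
```

After matching, the corrected slice follows the CT intensity distribution, which removes the gross intensity mismatch that otherwise misleads intensity-based DIR.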
Characterization of image quality for 3D scatter-corrected breast CT images
NASA Astrophysics Data System (ADS)
Pachon, Jan H.; Shah, Jainil; Tornai, Martin P.
2011-03-01
The goal of this study was to characterize the image quality of our dedicated, quasi-monochromatic spectrum, cone beam breast imaging system under scatter corrected and non-scatter corrected conditions for a variety of breast compositions. CT projections were acquired of a breast phantom containing two concentric sets of acrylic spheres that varied in size (1-8 mm) based on their polar position. The breast phantom was filled with 3 different concentrations of methanol and water, simulating a range of breast densities (0.79-1.0 g/cc); acrylic yarn was sometimes included to simulate connective tissue of a breast. For each phantom condition, 2D scatter was measured for all projection angles. Scatter-corrected and uncorrected projections were then reconstructed with an iterative ordered subsets convex algorithm. Reconstructed image quality was characterized using SNR and contrast analysis, followed by a human observer detection task for the spheres in the different concentric rings. Results show that scatter correction effectively reduces the cupping artifact and improves image contrast and SNR. Results from the observer study indicate that there was no statistical difference in the number or sizes of lesions observed in the scatter versus non-scatter corrected images for all densities. Nonetheless, applying scatter correction for differing breast conditions improves overall image quality.
Complete calculation of electroweak corrections for polarized Møller scattering at high energies
NASA Astrophysics Data System (ADS)
Zykunov, V. A.
2009-09-01
A complete calculation of electroweak radiative corrections to observables of polarized Møller scattering at high energies was performed. This calculation took explicitly into account contributions caused by hard bremsstrahlung. A FORTRAN code that permitted including radiative corrections to high-energy Møller scattering under arbitrary electron-detection conditions was written. It was shown that the electroweak corrections caused by hard bremsstrahlung were rather strongly dependent on the choice of experimental cuts and changed substantially the polarization asymmetry in the region of high energies and over a broad interval of scattering angles.
Rescattering corrections and self-consistent metric in planckian scattering
NASA Astrophysics Data System (ADS)
Ciafaloni, M.; Colferai, D.
2014-10-01
Starting from the ACV approach to transplanckian scattering, we present a development of the reduced-action model in which the (improved) eikonal representation is able to describe particles' motion at large scattering angle and, furthermore, UV-safe (regular) rescattering solutions are found and incorporated in the metric. The resulting particles' shock-waves undergo calculable trajectory shifts and time delays during the scattering process, which turns out to be consistently described by both action and metric, up to relative order R^2/b^2 in the expansion in the gravitational radius over the impact parameter. Some suggestions about the role and the (re)scattering properties of irregular solutions, not fully investigated here, are also presented.
NASA Astrophysics Data System (ADS)
Cheng, Ju-Chieh Kevin; Rahmim, Arman; Blinder, Stephan; Camborde, Marie-Laure; Raywood, Kelvin; Sossi, Vesna
2007-04-01
We describe an ordinary Poisson list-mode expectation maximization (OP-LMEM) algorithm with a sinogram-based scatter correction method based on the single scatter simulation (SSS) technique and a random correction method based on the variance-reduced delayed-coincidence technique. We also describe a practical approximate scatter and random-estimation approach for dynamic PET studies based on a time-averaged scatter and random estimate followed by scaling according to the global numbers of true coincidences and randoms for each temporal frame. The quantitative accuracy achieved using OP-LMEM was compared to that obtained using the histogram-mode 3D ordinary Poisson ordered subset expectation maximization (3D-OP) algorithm with similar scatter and random correction methods, and they showed excellent agreement. The accuracy of the approximated scatter and random estimates was tested by comparing time activity curves (TACs) as well as the spatial scatter distribution from dynamic non-human primate studies obtained from the conventional (frame-based) approach and those obtained from the approximate approach. An excellent agreement was found, and the time required for the calculation of scatter and random estimates in the dynamic studies became much less dependent on the number of frames (we achieved a nearly four times faster performance on the scatter and random estimates by applying the proposed method). The precision of the scatter fraction was also demonstrated for the conventional and the approximate approach using phantom studies. This work was supported by the Canadian Institute of Health Research, a TRIUMF Life Science Grant, the Natural Sciences and Engineering Research Council of Canada UFA (V Sossi) and the Michael Smith Foundation for Health Research Scholarship (V Sossi).
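The approximate dynamic estimation described above reduces to one time-averaged scatter (or random) estimate plus per-frame scaling by global counts. A toy sketch with synthetic numbers, not the study's data:

```python
import numpy as np

# One time-averaged scatter sinogram is computed once, then scaled
# for each temporal frame by that frame's global true-coincidence
# count relative to the time-averaged count.
avg_scatter = np.ones((4, 4)) * 10.0          # time-averaged estimate
frame_trues = np.array([1e6, 2e6, 4e6])       # global trues per frame
avg_trues = frame_trues.mean()

frame_scatter = [avg_scatter * (t / avg_trues) for t in frame_trues]
print([float(s.mean()) for s in frame_scatter])
```

This is why the cost becomes nearly independent of the number of frames: the expensive single-scatter simulation runs once, and each frame only pays for a multiplication.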
NASA Technical Reports Server (NTRS)
Boughner, Robert E.
1986-01-01
A method for calculating the photodissociation rates needed for photochemical modeling of the stratosphere, which includes the effects of molecular scattering, is described. The procedure is based on Sokolov's method of averaging functional correction. The radiation model and approximations used to calculate the radiation field are examined. The approximated diffuse fields and photolysis rates are compared with exact data. It is observed that the approximate solutions differ from the exact result by 10 percent or less at altitudes above 15 km; the photolysis rates differ from the exact rates by less than 5 percent for altitudes above 10 km and all zenith angles, and by less than 1 percent for altitudes above 15 km.
Use of beam stoppers to correct random and scatter coincidence in PET: A Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Lin, Hsin-Hon; Chuang, Keh-Shih; Lu, Cheng-Chang; Ni, Yu-Ching; Jan, Meei-Ling
2013-05-01
3D acquisition in positron emission tomography (PET) produces data with improved signal-to-noise ratios compared with conventional 2D PET. However, the sensitivity increase is accompanied by an increase in the number of scattered photons and random coincidences detected. Scatter and random coincidences lead to a loss in image contrast and degrade the accuracy of quantitative analysis. This work examines the feasibility of using beam stoppers (BS) for correcting scatter and random coincidence simultaneously. The photons of a non-true event do not originate on the line of response (LOR) along which they are recorded. Therefore, a BS placed on an LOR that passes through the source position absorbs a particular fraction of the true events but has little effect on the scatter and random events. The subtraction of the two scanned data sets, with and without the BS, can be employed to estimate the non-true events at that LOR. Monte Carlo (MC) simulations of 3D PET on an EEC phantom and a Zubal phantom are conducted to validate the proposed approach. Both scattered and random coincidences can be estimated and corrected using the proposed method. The mean squared errors measured on the random+scatter sinogram of the phantom obtained by the proposed method are much less than those obtained using the conventional correction method (delayed-coincidence subtraction for random correction combined with single scatter simulation for scatter correction). Preliminary results indicate that the proposed method is feasible for clinical application.
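The subtraction logic for a single blocked LOR can be sketched with assumed counts; f, the fraction of true events absorbed by the stopper, would in practice need to be calibrated:

```python
# Along a blocked LOR the beam stopper absorbs a fraction f of the
# true coincidences but leaves scatter and randoms essentially
# unchanged. All counts below are assumed, for illustration only.
f = 0.9                                  # assumed absorption fraction
true, scatter, random = 500.0, 120.0, 80.0

no_bs = true + scatter + random                 # scan without stopper
with_bs = (1.0 - f) * true + scatter + random   # scan with stopper

true_est = (no_bs - with_bs) / f         # recovered true events
nontrue_est = no_bs - true_est           # scatter + randoms estimate
print(round(true_est, 6), round(nontrue_est, 6))   # -> 500.0 200.0
```

The appeal of the method is that one subtraction estimates scatter and randoms together, instead of requiring two separate correction chains.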
SMARTIES: User-friendly codes for fast and accurate calculations of light scattering by spheroids
NASA Astrophysics Data System (ADS)
Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.
2016-05-01
We provide a detailed user guide for SMARTIES, a suite of MATLAB codes for the calculation of the optical properties of oblate and prolate spheroidal particles, with capabilities and ease of use comparable to Mie theory for spheres. SMARTIES is a MATLAB implementation of an improved T-matrix algorithm for the theoretical modelling of electromagnetic scattering by particles of spheroidal shape. The theory behind the improvements in numerical accuracy and convergence is briefly summarized, with reference to the original publications. Instructions for use, a detailed description of the code structure and its range of applicability, as well as guidelines for further developments by advanced users are discussed in separate sections of this user guide. The code may be useful to researchers seeking a fast, accurate and reliable tool to simulate the near-field and far-field optical properties of elongated particles, but will also appeal to other developers of light-scattering software seeking a reliable benchmark for non-spherical particles with a challenging aspect ratio and/or refractive index contrast.
A single-scattering correction for the seismo-acoustic parabolic equation.
Collins, Michael D
2012-04-01
An efficient single-scattering correction that does not require iterations is derived and tested for the seismo-acoustic parabolic equation. The approach is applicable to problems involving gradual range dependence in a waveguide with fluid and solid layers, including the key case of a sloping fluid-solid interface. The single-scattering correction is asymptotically equivalent to a special case of a single-scattering correction for problems that only have solid layers [Küsel et al., J. Acoust. Soc. Am. 121, 808-813 (2007)]. The single-scattering correction has a simple interpretation (conservation of interface conditions in an average sense) that facilitated its generalization to problems involving fluid layers. Promising results are obtained for problems in which the ocean bottom interface has a small slope. PMID:22501044
Increasing the imaging depth through computational scattering correction (Conference Presentation)
NASA Astrophysics Data System (ADS)
Koberstein-Schwarz, Benno; Omlor, Lars; Schmitt-Manderbach, Tobias; Mappes, Timo; Ntziachristos, Vasilis
2016-03-01
Imaging depth is one of the most prominent limitations in light microscopy. The depth at which we are still able to resolve biological structures is limited by the scattering of light within the sample. We have developed an algorithm to compensate for the influence of scattering. The potential of the algorithm is demonstrated on a 3D image stack of a zebrafish embryo captured with a selective plane illumination microscope (SPIM). With our algorithm we were able to shift the depth at which scattering starts to blur the image and affect image quality by around 30 µm. For the reconstruction the algorithm uses only information from within the image stack, so it can be applied to the image data from any SPIM system without further hardware adaptation. There is also no need for multiple scans from different views to perform the reconstruction. The underlying model estimates the recorded image as a convolution between the distribution of fluorophores and a point spread function, which describes the blur due to scattering. Our algorithm performs a space-variant blind deconvolution on the image. To account for the increasing amount of scattering in deeper tissue, we introduce a new regularizer which models the increasing width of the point spread function in order to improve the image quality in the depth of the sample. Since the assumptions the algorithm is based on are not limited to SPIM images, the algorithm should also be able to work on other imaging techniques which provide a 3D image volume.
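The depth-dependent PSF model underlying the regularizer can be illustrated with a toy forward model in which a Gaussian PSF widens linearly with depth. The parameters and the linear-growth form are assumptions for illustration; the paper's blind deconvolution inverts such a model rather than applying it:

```python
import numpy as np

def gaussian_kernel(sigma, radius=8):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def depth_varying_blur(stack, sigma0=0.5, growth=0.05):
    """Toy forward model: each depth slice of a (z, y, x) stack is
    blurred along x by a Gaussian whose width grows linearly with
    depth, mimicking increasing scatter in deeper tissue."""
    out = np.empty_like(stack)
    for z, sl in enumerate(stack):
        k = gaussian_kernel(sigma0 + growth * z)
        out[z] = np.apply_along_axis(
            lambda row: np.convolve(row, k, mode="same"), 1, sl)
    return out

stack = np.zeros((10, 1, 33))
stack[:, 0, 16] = 1.0                     # point source at every depth
blurred = depth_varying_blur(stack)
print(blurred[0, 0].max() > blurred[9, 0].max())   # -> True
```

The deeper copies of the same point source come out wider and dimmer, which is exactly the depth dependence the space-variant regularizer encodes.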
The accurate assessment of small-angle X-ray scattering data
Grant, Thomas D.; Luft, Joseph R.; Carter, Lester G.; Matsui, Tsutomu; Weiss, Thomas M.; Martel, Anne; Snell, Edward H.
2015-01-01
A set of quantitative techniques is suggested for assessing SAXS data quality. These are applied in the form of a script, SAXStats, to a test set of 27 proteins, showing that these techniques are more sensitive than manual assessment of data quality. Small-angle X-ray scattering (SAXS) has grown in popularity in recent times with the advent of bright synchrotron X-ray sources, powerful computational resources and algorithms enabling the calculation of increasingly complex models. However, the lack of standardized data-quality metrics presents difficulties for the growing user community in accurately assessing the quality of experimental SAXS data. Here, a series of metrics to quantitatively describe SAXS data in an objective manner using statistical evaluations are defined. These metrics are applied to identify the effects of radiation damage, concentration dependence and interparticle interactions on SAXS data from a set of 27 previously described targets for which high-resolution structures have been determined via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. The studies show that these metrics are sufficient to characterize SAXS data quality on a small sample set with statistical rigor and sensitivity similar to or better than manual analysis. The development of data-quality analysis strategies such as these initial efforts is needed to enable the accurate and unbiased assessment of SAXS data quality.
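One of the simplest statistical evaluations of this kind, a reduced chi-square test between successive exposures of the same sample, can be sketched as follows. This is a generic illustration in the spirit of the paper, not the SAXStats implementation, and the data are synthetic:

```python
import numpy as np

def frame_similarity(i1, i2, sigma1, sigma2):
    """Reduced chi-square between two exposures: values near 1 mean
    the frames agree within counting noise; much larger values flag
    systematic change such as radiation damage."""
    chi2 = np.sum((i1 - i2) ** 2 / (sigma1 ** 2 + sigma2 ** 2))
    return chi2 / i1.size

rng = np.random.default_rng(1)
q = np.linspace(0.01, 0.3, 200)
profile = 1.0 / (1.0 + (40 * q) ** 2)            # toy scattering curve
sig = 0.01 * np.ones_like(q)
frame1 = profile + rng.normal(0, 0.01, q.size)
frame2 = profile + rng.normal(0, 0.01, q.size)         # undamaged
frame3 = profile * 1.3 + rng.normal(0, 0.01, q.size)   # "damaged"
print(frame_similarity(frame1, frame2, sig, sig) < 2.0)   # -> True
print(frame_similarity(frame1, frame3, sig, sig) > 5.0)   # -> True
```

Automating such tests frame by frame is what makes the assessment objective rather than a matter of visually inspecting overlaid curves.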
NASA Astrophysics Data System (ADS)
Elnasir, Selma; Shamsuddin, Siti Mariyam; Farokhi, Sajad
2015-01-01
Palm vein recognition (PVR) is a promising new biometric that has been applied successfully as a method of access control by many organizations, and which has even further potential in the field of forensics. The palm vein pattern has highly discriminative features that are difficult to forge because of its subcutaneous position in the palm. Despite considerable progress, and setting aside a few practical issues, providing accurate palm vein readings has remained an unsolved problem in biometrics. We propose a robust and more accurate PVR method based on the combination of wavelet scattering (WS) with spectral regression kernel discriminant analysis (SRKDA). As the dimension of the WS-generated features is quite large, SRKDA is required to reduce the extracted features and enhance the discrimination. The results, based on two public databases (the PolyU Hyper Spectral Palmprint database and the PolyU Multi Spectral Palmprint database), show the high performance of the proposed scheme in comparison with state-of-the-art methods. The proposed approach scored a 99.44% identification rate and a 99.90% verification rate [equal error rate (EER) = 0.1%] for the hyperspectral database, and a 99.97% identification rate and a 99.98% verification rate (EER = 0.019%) for the multispectral database.
The accurate assessment of small-angle X-ray scattering data
Grant, Thomas D.; Luft, Joseph R.; Carter, Lester G.; Matsui, Tsutomu; Weiss, Thomas M.; Martel, Anne; Snell, Edward H.
2015-01-01
Small-angle X-ray scattering (SAXS) has grown in popularity in recent times with the advent of bright synchrotron X-ray sources, powerful computational resources and algorithms enabling the calculation of increasingly complex models. However, the lack of standardized data-quality metrics presents difficulties for the growing user community in accurately assessing the quality of experimental SAXS data. Here, a series of metrics to quantitatively describe SAXS data in an objective manner using statistical evaluations are defined. These metrics are applied to identify the effects of radiation damage, concentration dependence and interparticle interactions on SAXS data from a set of 27 previously described targets for which high-resolution structures have been determined via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. The studies show that these metrics are sufficient to characterize SAXS data quality on a small sample set with statistical rigor and sensitivity similar to or better than manual analysis. The development of data-quality analysis strategies such as these initial efforts is needed to enable the accurate and unbiased assessment of SAXS data quality. PMID:25615859
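A sketch of one such statistical evaluation, offered as an illustration rather than the authors' actual metric set: a reduced chi-square comparison of successive exposure frames can flag radiation damage. The Guinier-like profile, the noise level, and the function name below are invented for the example.

```python
import numpy as np

def frame_chi2(i1, i2, sigma):
    """Reduced chi-square between two scattering profiles; values near 1
    indicate agreement within counting noise, large values suggest
    radiation damage or another systematic change."""
    return float(np.mean(((i1 - i2) / sigma) ** 2))

# synthetic Guinier-like profile I(q) = exp(-q^2 * Rg^2 / 3)
q = np.linspace(0.01, 0.5, 100)
frame1 = np.exp(-q**2 * 30.0)
sigma = np.full_like(q, 0.01)          # assumed per-point uncertainty

undamaged = frame_chi2(frame1, frame1.copy(), sigma)  # identical frames
damaged = frame_chi2(frame1, 1.05 * frame1, sigma)    # 5% systematic drift
```

In practice such a statistic would be computed between the first and each subsequent exposure, and a rising trend taken as a damage indicator.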
Ruehrnschopf and, Ernst-Peter; Klingenbeck, Klaus
2011-09-15
The main components of scatter correction procedures are scatter estimation and a scatter compensation algorithm. This paper completes a previous paper in which a general framework for scatter compensation was presented under the prerequisite that a scatter estimation method is already available. In the current paper, the authors give a systematic review of the variety of scatter estimation approaches. Scatter estimation methods are based on measurements, mathematical-physical models, or combinations of both. For completeness they present an overview of measurement-based methods, but the main topic is the theoretically more demanding models, such as analytical, Monte Carlo, and hybrid models. Further classifications are 3D image-based and 2D projection-based approaches. The authors present a system-theoretic framework, which allows one to proceed top-down from a general 3D formulation, by successive approximations, to efficient 2D approaches. A widely useful method is the beam-scatter-kernel superposition approach. Together with the review of standard methods, the authors discuss their limitations and how to take into account the issues of object dependency, spatial variance, deformation of scatter kernels, and external and internal absorbers. Open questions for further investigation are indicated. Finally, the authors comment on some special issues and applications, such as the bow-tie filter, offset detectors, truncated data, and dual-source CT.
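The beam-scatter-kernel superposition approach mentioned above can be sketched in a few lines: the scatter estimate is the projection convolved with a broad kernel whose integral equals an assumed scatter-to-primary ratio. The Gaussian kernel shape, its width, and the 0.2 ratio below are illustrative assumptions, not values from the review.

```python
import numpy as np

def kernel_scatter_estimate(projection, spr=0.2, width=16.0):
    """Estimate scatter by convolving the projection with a broad Gaussian
    kernel normalized so its integral equals the scatter-to-primary ratio."""
    ny, nx = projection.shape
    y = np.fft.fftfreq(ny) * ny           # wrap-around pixel offsets
    x = np.fft.fftfreq(nx) * nx
    r2 = y[:, None] ** 2 + x[None, :] ** 2
    kernel = np.exp(-r2 / (2.0 * width ** 2))
    kernel *= spr / kernel.sum()          # kernel integral = SPR
    # circular convolution via FFT (adequate away from detector edges)
    return np.fft.irfft2(np.fft.rfft2(projection) * np.fft.rfft2(kernel),
                         s=projection.shape)

projection = np.ones((64, 64))            # flat toy projection
scatter = kernel_scatter_estimate(projection)
```

Object-dependent and spatially variant kernels, as discussed in the review, would replace the single shift-invariant kernel used here.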
Min, Jonghwan; Pua, Rizza; Cho, Seungryong; Kim, Insoo; Han, Bumsoo
2015-11-15
Purpose: A beam-blocker composed of multiple strips is a useful gadget for scatter correction and/or dose reduction in cone-beam CT (CBCT). However, the use of such a beam-blocker yields cone-beam data that can be challenging for accurate image reconstruction from a single scan in the filtered-backprojection framework. The focus of this work was to develop an analytic image reconstruction method for CBCT that can be applied directly to partially blocked cone-beam data in conjunction with scatter correction. Methods: The authors developed a rebinned backprojection-filtration (BPF) algorithm for reconstructing images from partially blocked cone-beam data in a circular scan. The authors also proposed a beam-blocking geometry that takes data redundancy into account, such that an efficient scatter estimate can be acquired and sufficient data for BPF image reconstruction can be secured at the same time from a single scan, without any blocker motion. Additionally, a scatter correction method and a noise reduction scheme were developed. The authors performed both simulation and experimental studies to validate the rebinned BPF algorithm for image reconstruction from partially blocked cone-beam data. Quantitative evaluations of the reconstructed image quality were performed in the experimental studies. Results: The simulation study revealed that the developed reconstruction algorithm successfully reconstructs images from the partial cone-beam data. In the experimental study, the proposed method effectively corrected for the scatter in each projection and reconstructed scatter-corrected images from a single scan. A reduction of cupping artifacts and an enhancement of the image contrast were demonstrated. The image contrast increased by a factor of about 2, and the image accuracy in terms of root-mean-square error with respect to the fan-beam CT image improved by more than 30%. Conclusions: The authors have successfully demonstrated that the rebinned BPF algorithm, together with the proposed beam-blocking geometry, enables scatter-corrected image reconstruction from a single partially blocked CBCT scan.
NASA Astrophysics Data System (ADS)
Yue, Meghan L.; Boden, Adam E.; Sabol, John M.
2009-02-01
In addition to causing loss of contrast and blurring in an image, scatter also makes quantitative measurements of x-ray attenuation impossible. Many devices, methods, and models have been developed to eliminate, estimate, and correct for the effects of scatter. Although these techniques can reduce the impact of scatter in a large-area image, no methods have proven practical and sufficient to enable quantitative analysis of image data in a routine clinical setting. This paper describes a method of scatter correction which uses moderate x-ray collimation in combination with a correction algorithm operating on data obtained from large-area flat-panel detector images. The method involves acquiring slot-collimated images of the object and utilizing information from outside the collimated region, in addition to a priori data, to estimate the scatter within the collimated region. This method requires no increase in dose to the patient while providing high image quality and accurate estimates of the primary x-ray data. This scatter correction technique was validated through beam-stop experiments and comparison of theoretically calculated and measured contrast of thin aluminum and polymethylmethacrylate objects. Measurements taken with various background material thicknesses, both with and without a grid, showed that the slot-scatter-corrected contrast and the theoretical contrast were not significantly different given a 99% confidence interval. However, the uncorrected contrast was found to be significantly different from the corrected and theoretical contrasts. These findings indicate that this method of scatter correction can eliminate the effect of scatter on contrast and potentially enable quantitative x-ray imaging.
NASA Astrophysics Data System (ADS)
Jarry, G.; Graham, S. A.; Jaffray, D. A.; Moseley, D. J.; Verhaegen, F.
2006-03-01
In this work, Monte Carlo (MC) simulations are used to correct kilovoltage (kV) cone-beam computed tomography (CBCT) projections for scatter radiation. All images were acquired using a kV CBCT bench-top system composed of an x-ray tube, a rotation stage, and a flat-panel imager. The EGSnrc MC code was used to model the system: BEAMnrc was used to model the x-ray tube, while a modified version of the DOSXYZnrc program was used to transport the particles through various phantoms and to score phase space files with identified scattered and primary particles. An analytical program was used to read the phase space files and produce image files. The scatter correction was implemented by subtracting the Monte Carlo predicted scatter distribution from the measured projection images; these projection images were then reconstructed. Corrected reconstructions showed a substantial improvement in image quality. Several approaches to reduce the simulation time were tested. To reduce the number of simulated scatter projections, the effect of varying the projection angle on the scatter distribution was evaluated for different geometries. It was found that the scatter distribution does not vary significantly over a 30-degree interval for the geometries tested. It was also established that increasing the size of the voxels in the voxelized phantom does not affect the scatter distribution but reduces the simulation time. Different techniques to smooth the scatter distribution were also investigated.
Scatter correction in scintillation camera imaging of positron-emitting radionuclides
Ljungberg, M.; Danfelter, M.; Strand, S.E.
1996-12-31
The use of Anger scintillation cameras for positron SPECT has become of interest recently due to their use in imaging 2-[18F]fluoro-2-deoxy-D-glucose. Due to the special crystal design (thin and wide), a significant fraction of primary events will also be recorded in the Compton region of the energy spectrum. Events recorded in a second Compton window (CW) can add information to the data in the photopeak window (PW), since some events are correctly positioned in the CW. However, a significant amount of scatter is also included in the CW, which must be corrected for. This work describes a method whereby a third scatter window (SW) is used to estimate the scatter distribution in the CW and the PW. The accuracy of the estimation has been evaluated by Monte Carlo simulations in a homogeneous elliptical phantom for point and extended sources. Two examples of clinical application are also provided. Results from simulations show that essentially only scatter from the phantom is recorded between the 511 keV PW and the 340 keV CW. Scatter projection data scaled by a constant multiplier can estimate the scatter in the CW and PW, although the scatter distribution in the SW corresponds better to the scatter distribution in the CW. The multiplier k for the CW varies significantly more with depth than it does for the PW. Clinical studies show an improvement in image quality when using scatter-corrected combined PW and CW data.
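The window-based correction described above amounts to scaling the scatter-window counts by a multiplier k and subtracting. A minimal sketch follows; the count values and the k value are hypothetical, not those determined in the study.

```python
import numpy as np

def window_correct(window_counts, sw_counts, k):
    """Subtract k times the scatter-window (SW) counts from a photopeak
    or Compton window projection; clip at zero to avoid negative counts."""
    return np.maximum(window_counts - k * sw_counts, 0.0)

pw = np.array([1000.0, 800.0, 50.0])       # photopeak window projection
sw = np.array([400.0, 300.0, 200.0])       # scatter window projection
corrected = window_correct(pw, sw, k=0.5)  # k assumed; depth-dependent in practice
```

As the abstract notes, a single constant k is only approximate, and the appropriate multiplier differs between the PW and the CW.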
NASA Astrophysics Data System (ADS)
Kim, Y.; Kim, H.; Park, H.; Choi, J.; Choi, Y.
2014-03-01
Digital breast tomosynthesis (DBT) is a technique developed to overcome the limitations of conventional digital mammography by reconstructing slices through the breast from projections acquired at different angles. In developing and optimizing DBT, x-ray scatter reduction remains a significant challenge due to projection geometry and radiation dose limitations. The most common scatter reduction approach is the beam-stop-array (BSA) algorithm, although it requires additional exposure to acquire the scatter distribution. The compressed breast is roughly symmetric, so the scatter profiles of projections acquired at axially opposite angles are approximately mirror images of each other. The purpose of this study was to apply the BSA algorithm while acquiring only two scans with a beam stop array, which estimates the scatter distribution with minimal additional exposure. The results of scatter correction with angular interpolation were comparable to those of scatter correction using all scatter distributions at each angle, and the exposure increase was less than 13%. This study demonstrated the effectiveness of BSA-based scatter correction with minimal additional exposure, indicating its practical applicability in clinical situations.
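The mirror-symmetry idea can be illustrated in a few lines: a scatter map measured with the beam-stop array at one angle is flipped left-right to approximate the map at the axially opposite angle, and intermediate angles are interpolated. The tiny array is a toy stand-in for a real scatter distribution.

```python
import numpy as np

measured = np.array([[1.0, 2.0, 4.0],
                     [2.0, 3.0, 5.0]])   # scatter map measured at +15 degrees

mirrored = measured[:, ::-1]             # approximation at -15 degrees
midway = 0.5 * (measured + mirrored)     # linear angular interpolation at 0 degrees
```

With only two BSA scans, every other projection angle's scatter map is built this way rather than measured, which is what keeps the extra exposure small.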
Constrained γZ correction to parity-violating electron scattering
Hall, N. L.; Thomas, A. W.; Young, R. D.; Blunden, P. G.; Melnitchouk, W.
2013-11-07
We update the calculation of γZ interference corrections to the weak charge of the proton. We show how constraints from parton distributions, together with new data on parity-violating electron scattering in the resonance region, significantly reduce the uncertainties on the corrections compared to previous estimates.
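In the standard notation of this literature (a schematic reminder, not a formula from the abstract), the energy-dependent γZ box shifts the weak charge extracted from a parity-violating asymmetry measured at beam energy E:

```latex
Q_W^{p,\mathrm{eff}}(E) = Q_W^{p} + \mathrm{Re}\,\Box_{\gamma Z}(E)
```

Tighter constraints on the box contribution, from parton distributions and resonance-region data, therefore translate directly into a smaller uncertainty propagated to the weak charge itself.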
Constrained γZ correction to parity-violating electron scattering
Hall, Nathan Luk; Blunden, Peter Gwithian; Melnitchouk, Wally; Thomas, Anthony W.; Young, Ross D.
2013-11-01
We update the calculation of γZ interference corrections to the weak charge of the proton. We show how constraints from parton distributions, together with new data on parity-violating electron scattering in the resonance region, significantly reduce the uncertainties on the corrections compared to previous estimates.
Gerasimov, R. E. Fadin, V. S.
2015-01-15
An analysis of the approximations used in calculations of radiative corrections to the electron-proton scattering cross section is presented. We investigate the difference between the relatively recent Maximon and Tjon result and the Mo and Tsai result, which was used in the analysis of experimental data. We also discuss how the extracted proton form factor ratio depends on the way radiative corrections are taken into account.
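The prescription dependence at issue can be stated in one line (standard convention, sketched here rather than quoted from the paper): the measured elastic cross section is the Born cross section dressed by a radiative correction factor,

```latex
\mathrm{d}\sigma_{\mathrm{meas}} = \mathrm{d}\sigma_{\mathrm{Born}}\,(1 + \delta)
```

The Mo-Tsai and Maximon-Tjon prescriptions assign different corrections δ, and since form factors are extracted from the corrected cross section, the ratio G_E/G_M inherits this prescription dependence.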
Schoen, K.; Snow, W. M.; Kaiser, H.; Werner, S. A.
2005-01-01
The neutron index of refraction is generally derived theoretically in the Fermi approximation. However, the Fermi approximation neglects the effects of the binding of the nuclei of a material as well as multiple scattering. Calculations by Nowak introduced correction terms to the neutron index of refraction that are quadratic in the scattering length and of order 10⁻³ fm for hydrogen and deuterium. These correction terms produce a small shift in the final value for the coherent scattering length of H₂ in a recent neutron interferometry experiment. PMID:27308132
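For orientation, the Fermi-approximation result being corrected is the standard one (textbook material, not taken from the abstract), with N the atomic number density, b the bound coherent scattering length, and λ the neutron wavelength:

```latex
n \simeq 1 - \frac{\lambda^{2} N b}{2\pi}
```

The Nowak corrections effectively shift b by terms quadratic in the scattering length, of the stated order of 10⁻³ fm for hydrogen and deuterium.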
NASA Astrophysics Data System (ADS)
Tomalak, O.; Vanderhaeghen, M.
2016-01-01
We evaluate the two-photon exchange (TPE) correction to unpolarized elastic electron-proton scattering at small momentum transfer Q². We account for the inelastic intermediate states approximating the double virtual Compton scattering by the unpolarized forward virtual Compton scattering. The unpolarized proton structure functions are used as input for the numerical evaluation of the inelastic contribution. Our calculation reproduces the leading terms in the Q² expansion of the TPE correction and goes beyond this approximation by keeping the full Q² dependence of the proton structure functions. In the range of small momentum transfer, our result is in good agreement with the empirical TPE fit to existing data.
Ouyang, L; Yan, H; Jia, X; Jiang, S; Wang, J; Zhang, H
2014-06-01
Purpose: A moving-blocker-based strategy has shown promising results for scatter correction in cone-beam computed tomography (CBCT). Different parameters of the system design affect its performance in scatter estimation and image reconstruction accuracy. The goal of this work is to optimize the geometric design of the moving blocker system. Methods: In the moving blocker system, a blocker consisting of lead strips is inserted between the x-ray source and the imaged object and moves back and forth along the rotation axis during CBCT acquisition. A CT image of an anthropomorphic pelvic phantom was used in the simulation study. The scatter signal was simulated by Monte Carlo calculation for various combinations of the lead strip width and the gap between neighboring lead strips, ranging from 4 mm to 80 mm (projected at the detector plane). The scatter signal in the unblocked region was estimated by cubic B-spline interpolation from the blocked region. Scatter estimation accuracy was quantified as the relative root-mean-square error by comparing the interpolated scatter to the Monte Carlo simulated scatter. CBCT was reconstructed by total variation minimization from the unblocked region under the various combinations of lead strip width and gap. Reconstruction accuracy in each condition was quantified by the CT number error relative to a CBCT reconstructed from unblocked full projection data. Results: The scatter estimation error varied from 0.5% to 2.6% as the lead strip width and the gap varied from 4 mm to 80 mm. The CT number error in the reconstructed CBCT images varied from 12 to 44. The highest reconstruction accuracy was achieved with a blocker lead strip width of 8 mm and a gap of 48 mm. Conclusions: Accurate scatter estimation can be achieved over a large range of combinations of lead strip width and gap. However, image reconstruction accuracy is greatly affected by the geometric design of the blocker.
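The interpolation step above can be sketched as follows. Linear interpolation (np.interp) stands in for the cubic B-spline used in the study, and the strip geometry and smooth scatter field are invented for illustration.

```python
import numpy as np

idx = np.arange(200)                       # detector column index
u = idx.astype(float)
true_scatter = 100.0 + 40.0 * np.exp(-((u - 100.0) / 60.0) ** 2)
blocked = (idx % 56) < 8                   # 8-px strips separated by 48-px gaps

# sample scatter only in the blocker shadow, interpolate across the gaps
estimate = np.interp(u, u[blocked], true_scatter[blocked])

# relative RMSE, the accuracy metric used in the abstract
rrmse = (np.sqrt(np.mean((estimate - true_scatter) ** 2))
         / np.mean(true_scatter))
```

Because scatter varies slowly across the detector, even widely spaced shadow samples recover it to within a few percent, consistent with the 0.5% to 2.6% range reported above.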
X-Ray Scatter Correction on Soft Tissue Images for Portable Cone Beam CT
Aootaphao, Sorapong; Thongvigitmanee, Saowapak S.; Rajruangrabin, Jartuwat; Thanasupsombat, Chalinee; Srivongsa, Tanapon; Thajchayapong, Pairash
2016-01-01
Soft tissue images from portable cone beam computed tomography (CBCT) scanners can be used for diagnosis and detection of tumors, cancer, intracerebral hemorrhage, and so forth. Due to the large field of view, X-ray scattering, the main cause of artifacts, degrades image quality, producing cupping artifacts, CT number inaccuracy, and low contrast, especially in soft tissue images. In this work, we propose an X-ray scatter correction method for improving soft tissue images. The X-ray scatter correction scheme estimates X-ray scatter signals with a deconvolution technique based on the maximum likelihood expectation maximization (MLEM) method. The scatter kernels are obtained by simulating a PMMA sheet in Monte Carlo simulation (MCS) software. In the experiment, we used the QRM phantom for quantitative comparison with fan-beam CT (FBCT) data in terms of CT number values, contrast-to-noise ratio, cupping artifacts, and low-contrast detectability. Moreover, the PH3 angiography phantom was used to mimic human soft tissues in the brain. The reconstructed images with our proposed scatter correction show significant improvement in image quality. Thus the proposed scatter correction technique has high potential for detecting soft tissues in the brain. PMID:27022608
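A generic MLEM-style (Richardson-Lucy) deconvolution, sketched here in 1-D with an identity kernel for a sanity check rather than the Monte Carlo scatter kernels of the paper:

```python
import numpy as np

def mlem_deconvolve(measured, kernel, n_iter=50):
    """Richardson-Lucy / MLEM deconvolution using circular convolution.
    `kernel` is a 1-D point-spread (scatter) kernel centered at index 0."""
    est = np.full_like(measured, measured.mean())   # flat positive start
    K = np.fft.rfft(kernel)
    for _ in range(n_iter):
        blur = np.fft.irfft(np.fft.rfft(est) * K, n=est.size)
        ratio = measured / np.maximum(blur, 1e-12)  # guard against /0
        # correlate ratio with the kernel (conjugate in Fourier space)
        est = est * np.fft.irfft(np.fft.rfft(ratio) * np.conj(K), n=est.size)
    return est

signal = np.array([0.0, 0.0, 4.0, 8.0, 4.0, 0.0, 0.0, 0.0])
delta = np.zeros(8); delta[0] = 1.0      # identity kernel: output == input
recovered = mlem_deconvolve(signal, delta, n_iter=1)
```

With a broad, normalized scatter kernel in place of the delta, the same iteration separates a primary estimate from the scatter-contaminated projection.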
Aleksejevs, Aleksandrs; Barkanova, Svetlana; Ilyichev, Alexander; Zykunov, Vladimir
2010-11-19
We perform updated and detailed calculations of the complete NLO set of electroweak radiative corrections to parity-violating e⁻e⁻ → e⁻e⁻(γ) scattering asymmetries at energies relevant for the ultra-precise Moller experiment coming soon at JLab. Our numerical results are presented for a range of experimental cuts, and the relative importance of various contributions is analyzed. In addition, we provide very compact analytical expressions free from non-physical parameters and show them to be valid for fast yet accurate estimates.
Evaluation of low contrast detectability after scatter correction in digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Michielsen, Koen; Fieselmann, Andreas; Cockmartin, Lesley; Nuyts, Johan
2014-03-01
Projection images from digital breast tomosynthesis acquisitions can contain a large fraction of scattered x-rays due to the absence of an anti-scatter grid in front of the detector. In order to produce quantitative results, this should be accounted for in reconstruction algorithms. We examine the possible improvement in signal difference to noise ratio (SDNR) for low-contrast spherical densities when applying a scatter correction algorithm. Hybrid patient data were created by combining real patient data with attenuation profiles of spherical masses acquired with matching exposure settings. Scatter in these cases was estimated using Monte Carlo based scattering kernels. All cases were reconstructed using filtered backprojection (FBP) with and without beam hardening correction and two maximum likelihood methods for transmission tomography, with and without a quadratic smoothing prior (MAPTR and MLTR). For all methods, images were reconstructed without scatter correction and with scatter precorrection, and for the iterative methods also with an adjusted update step obtained by including scatter in the physics model. The SDNR of the inserted spheres was calculated by subtracting the reconstructions with and without the inserted template to measure the signal difference, while noise was measured in the image containing the template. SDNR was significantly improved, by 3.5% to 4.5% (p < 0.0001), at iteration 10 for both correction methods applied to the MLTR and MAPTR reconstructions. For MLTR these differences disappeared by iteration 100. For regular FBP the SDNR remained the same after correction (p = 0.60), while it dropped slightly for FBP with beam hardening correction (-1.4%, p = 0.028). These results indicate that for the iterative methods, application of a scatter correction algorithm has very little effect on the SDNR; it only causes a slight decrease in convergence speed, which is similar for precorrection and correction incorporated in the update step. The FBP results likewise show no SDNR benefit from scatter correction.
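The SDNR computation described above can be written as a minimal sketch; the array contents and ROI masks below are invented for the example.

```python
import numpy as np

def sdnr(with_template, without_template, signal_roi, noise_roi):
    """Signal difference between paired reconstructions divided by the
    noise measured in the image containing the template."""
    diff = (np.mean(with_template[signal_roi])
            - np.mean(without_template[signal_roi]))
    noise = np.std(with_template[noise_roi])
    return abs(diff) / noise

without = np.zeros((4, 4))
with_t = without.copy()
with_t[1:3, 1:3] = 2.0                        # inserted sphere template
with_t[0, :] = [1.0, -1.0, 1.0, -1.0]         # background with unit std

signal_roi = np.zeros((4, 4), bool); signal_roi[1:3, 1:3] = True
noise_roi = np.zeros((4, 4), bool); noise_roi[0, :] = True
value = sdnr(with_t, without, signal_roi, noise_roi)
```

Subtracting the paired reconstructions isolates the template's signal difference even when anatomy dominates both images, which is why the hybrid-data design above works.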
Library-based scatter correction for dedicated cone beam breast CT: a feasibility study
NASA Astrophysics Data System (ADS)
Shi, Linxi; Vedantham, Srinivasan; Karellas, Andrew; Zhu, Lei
2016-04-01
Purpose: Scatter errors are detrimental to cone-beam breast CT (CBBCT) accuracy and obscure the visibility of calcifications and soft-tissue lesions. In this work, we propose a practical yet effective scatter correction for CBBCT using a library-based method and investigate its feasibility via small-group patient studies. Methods: Based on a simplified breast model with varying breast sizes, we generate a scatter library using Monte Carlo (MC) simulation. Breasts are approximated as semi-ellipsoids with a homogeneous glandular/adipose tissue mixture. For each patient CBBCT projection dataset, an initial estimate of the scatter distribution is selected from the pre-computed scatter library by measuring the corresponding breast size on raw projections and the glandular fraction on a first-pass CBBCT reconstruction. The selected scatter distribution is then modified by estimating the spatial translation of the breast between the MC simulation and the clinical scan. Scatter correction is finally performed by subtracting the estimated scatter from the raw projections. Results: On two sets of clinical patient CBBCT data with different breast sizes, the proposed method effectively reduces cupping artifacts and improves the image contrast by an average factor of 2, with an efficient processing time of 200 ms per cone-beam projection. Conclusion: Compared with existing scatter correction approaches for CBBCT, the proposed library-based method is clinically advantageous in that it requires no additional scans or hardware modifications. As the MC simulations are pre-computed, our method achieves high computational efficiency on each patient dataset. The library-based method shows great promise as a practical tool for effective scatter correction in clinical CBBCT.
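The library selection step can be sketched as a nearest-neighbor lookup over the two measured parameters. Every entry in the toy library below is a hypothetical placeholder, not a value from the MC simulations.

```python
# hypothetical library keyed by (breast diameter in cm, glandular fraction),
# mapping to a pre-computed scatter scale factor
library = {(10, 0.2): 0.8, (10, 0.5): 1.0,
           (14, 0.2): 1.3, (14, 0.5): 1.6}

def select_scatter(diameter_cm, glandular_fraction):
    """Pick the pre-computed entry closest to the measured breast size
    and the first-pass glandular fraction estimate."""
    key = min(library, key=lambda k: (k[0] - diameter_cm) ** 2
                                     + (k[1] - glandular_fraction) ** 2)
    return library[key]

scale = select_scatter(13.0, 0.45)
```

In the actual method, each library entry would be a full 2-D scatter distribution, subsequently shifted to match the breast position before subtraction.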
Measurement-based scatter correction for cone-beam CT in radiation therapy
NASA Astrophysics Data System (ADS)
Zhu, Lei; Xing, Lei
2009-02-01
Cone-beam CT (CBCT) is being increasingly used in modern radiation therapy. However, compared to conventional CT, the degraded image quality of CBCT hampers its applications in radiation therapy. Due to the large volume of x-ray illumination, scatter is considered one of the fundamental limitations of CBCT image quality. Many scatter correction algorithms have been proposed in the literature, but drawbacks remain. In this work, we propose a correction algorithm which is particularly useful in radiation therapy. Since the same patient is scanned repetitively during one radiation treatment course, we measure the scatter distribution in one scan and use the measured scatter distribution to estimate and correct scatter in the following scans. A partially blocked CBCT is used in the scatter measurement scan. The x-ray beam blocker has a strip pattern, such that the whole-field scatter distribution can be estimated from the detected signals in the shadow region and the patient rigid transformation can be determined from the reconstructed image using the illuminated detector projection data. From the derived patient transformation, the measured scatter is then modified accordingly and used for scatter correction in the following regular CBCT scans. The proposed method has been evaluated using Monte Carlo simulations and physical experiments on an anthropomorphic chest phantom. The results show a significant suppression of scatter artifacts using the proposed method. Using the reconstruction in a narrow collimator geometry as a reference, the comparison also shows that the proposed method reduces the reconstruction error from 13.2% to 3.8%. The proposed method is attractive in applications where high CBCT image quality is critical, for example, dose calculation in adaptive radiation therapy.
Ruehrnschopf, Ernst-Peter; Klingenbeck, Klaus
2011-07-15
Since scattered radiation in cone-beam volume CT implies severe degradation of CT images by quantification errors, artifacts, and increased noise, scatter suppression is one of the main issues related to image quality in CBCT imaging. The aim of this review is to structure the variety of scatter suppression methods, to analyze their common structure, and to develop a general framework for scatter correction procedures. In general, scatter suppression combines hardware techniques of scatter rejection with software methods of scatter correction. The authors emphasize that scatter correction procedures consist of the main components scatter estimation (by measurement or mathematical modeling) and scatter compensation (deterministic or statistical methods). The framework comprises most scatter correction approaches, and its validity also extends beyond transmission CT. Before the advent of cone-beam CT, many papers on scatter correction approaches in x-ray radiography, mammography, emission tomography, and megavolt CT had been published. The opportunity to draw on research in those other fields of medical imaging has not yet been sufficiently exploited; therefore, additional references are included wherever pertinent. Scatter estimation and scatter compensation are typically intertwined in iterative procedures, and it makes sense to view iterative approaches in the light of the concept of self-consistency. The importance of incorporating scatter compensation approaches into a statistical framework for noise minimization must be underscored. A signal and noise propagation analysis is presented. A main result is the preservation of the differential signal-to-noise ratio (dSNR) in CT projection data by ideal scatter correction. The objective of scatter compensation methods is the restoration of quantitative accuracy and a balance between low-contrast restoration and noise reduction. In a synopsis section, the different deterministic and statistical methods are compared.
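The dSNR-preservation result can be seen in one line (a generic sketch in notation introduced here, not taken from the review): if two neighboring projection values p_1, p_2 share the same scatter background and S-hat is a deterministic scatter estimate, then subtracting S-hat changes neither the differential signal nor the noise,

```latex
(p_1 - \hat{S}) - (p_2 - \hat{S}) = p_1 - p_2, \qquad
\operatorname{Var}(p_i - \hat{S}) = \operatorname{Var}(p_i)
```

so the ratio dSNR = |p_1 - p_2| / sqrt(sigma_1^2 + sigma_2^2) is unchanged by ideal scatter correction.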
ERIC Educational Resources Information Center
Sheen, Younghee; Wright, David; Moldawa, Anna
2009-01-01
Building on Sheen's (2007) study of the effects of written corrective feedback (CF) on the acquisition of English articles, this article investigated whether direct focused CF, direct unfocused CF and writing practice alone produced differential effects on the accurate use of grammatical forms by adult ESL learners. Using six intact adult ESL…
Monte Carlo simulation and scatter correction of the GE Advance PET scanner with SimSET and Geant4
NASA Astrophysics Data System (ADS)
Barret, Olivier; Carpenter, T. Adrian; Clark, John C.; Ansorge, Richard E.; Fryer, Tim D.
2005-10-01
For Monte Carlo simulations to be used as an alternative solution to perform scatter correction, accurate modelling of the scanner as well as speed is paramount. General-purpose Monte Carlo packages (Geant4, EGS, MCNP) allow a detailed description of the scanner but are not efficient at simulating voxel-based geometries (patient images). On the other hand, dedicated codes (SimSET, PETSIM) will perform well for voxel-based objects but will be poor in their capacity of simulating complex geometries such as a PET scanner. The approach adopted in this work was to couple a dedicated code (SimSET) with a general-purpose package (Geant4) to have the efficiency of the former and the capabilities of the latter. The combined SimSET+Geant4 code (SimG4) was assessed on the GE Advance PET scanner and compared to the use of SimSET only. A better description of the resolution and sensitivity of the scanner and of the scatter fraction was obtained with SimG4. The accuracy of scatter correction performed with SimG4 and SimSET was also assessed from data acquired with the 20 cm NEMA phantom. SimG4 was found to outperform SimSET and to give slightly better results than the GE scatter correction methods installed on the Advance scanner (curve fitting and scatter modelling for the 300-650 keV and 375-650 keV energy windows, respectively). In the presence of a hot source close to the edge of the field of view (as found in oxygen scans), the GE curve-fitting method was found to fail whereas SimG4 maintained its performance.
Coherent scattering and matrix correction in bone-lead measurements
NASA Astrophysics Data System (ADS)
Todd, A. C.
2000-07-01
The technique of K-shell x-ray fluorescence of lead in bone has been used in many studies of the health effects of lead. This paper addresses one aspect of the technique, namely the coherent conversion factor (CCF) which converts between the matrix of the calibration standards and that of human bone. The CCF is conventionally considered a constant but is a function of scattering angle, energy and the elemental composition of the matrices. The aims of this study were to quantify the effect on the CCF of several assumptions which may not have been tested adequately and to compare the CCFs for plaster of Paris (the present matrix of calibration standards) and a synthetic apatite matrix. The CCF was calculated, using relativistic form factors, for published compositions of bone, both assumed and assessed compositions of plaster, and the synthetic apatite. The main findings of the study were, first, that impurities in plaster, lead in the plaster or bone matrices, coherent scatter from non-bone tissues and the individual subject's measurement geometry are all minor or negligible effects; and, second, that the synthetic apatite matrix is more representative of bone mineral than is plaster of Paris.
Nilsson, Annica M.; Jonsson, Andreas; Jonsson, Jacob C.; Roos, Arne
2011-03-01
For most integrating sphere measurements, the difference in light distribution between a specular reference beam and a diffused sample beam can result in significant errors. The problem becomes especially pronounced in integrating spheres that include a port for reflectance or diffuse transmittance measurements. The port is included in many standard spectrophotometers to facilitate a multipurpose instrument, however, absorption around the port edge can result in a detected signal that is too low. The absorption effect is especially apparent for low-angle scattering samples, because a significant portion of the light is scattered directly onto that edge. In this paper, a method for more accurate transmittance measurements of low-angle light-scattering samples is presented. The method uses a standard integrating sphere spectrophotometer, and the problem with increased absorption around the port edge is addressed by introducing a diffuser between the sample and the integrating sphere during both reference and sample scan. This reduces the discrepancy between the two scans and spreads the scattered light over a greater portion of the sphere wall. The problem with multiple reflections between the sample and diffuser is successfully addressed using a correction factor. The method is tested for two patterned glass samples with low-angle scattering and in both cases the transmittance accuracy is significantly improved.
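The multiple-reflection correction mentioned in this abstract can be modeled, to first order, as a geometric series: light bouncing back and forth between the sample (reflectance R_s) and the diffuser (reflectance R_d) inflates the detected signal by 1/(1 - R_s*R_d). The sketch below is illustrative only; the function name and the single-series model are assumptions, not the paper's actual calibration.

```python
def corrected_transmittance(t_measured, r_sample, r_diffuser):
    """Remove the inter-reflection contribution between sample and diffuser.

    The detector sees the direct pass plus an infinite series of bounces:
        T_meas = T_true * (1 + R_s*R_d + (R_s*R_d)**2 + ...)
               = T_true / (1 - R_s*R_d)
    so the correction multiplies the measurement by (1 - R_s*R_d).
    """
    return t_measured * (1.0 - r_sample * r_diffuser)

# Example: 5% sample reflectance, 10% diffuser reflectance
t = corrected_transmittance(0.800, 0.05, 0.10)
```

For the weakly reflecting components typical of this geometry the correction is small (here 0.5%), which is consistent with the paper treating it as a residual correction factor rather than a dominant term.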
A software-based x-ray scatter correction method for breast tomosynthesis
Jia Feng, Steve Si; Sechopoulos, Ioannis
2011-12-15
Purpose: To develop a software-based scatter correction method for digital breast tomosynthesis (DBT) imaging and investigate its impact on the image quality of tomosynthesis reconstructions of both phantoms and patients. Methods: A Monte Carlo (MC) simulation of x-ray scatter, with geometry matching that of the cranio-caudal (CC) view of a DBT clinical prototype, was developed using the Geant4 toolkit and used to generate maps of the scatter-to-primary ratio (SPR) of a number of homogeneous standard-shaped breasts of varying sizes. Dimension-matched SPR maps were then deformed and registered to DBT acquisition projections, allowing for the estimation of the primary x-ray signal acquired by the imaging system. Noise filtering of the estimated projections was then performed to reduce the impact of the quantum noise of the x-ray scatter. Three dimensional (3D) reconstruction was then performed using the maximum likelihood-expectation maximization (MLEM) method. This process was tested on acquisitions of a heterogeneous 50/50 adipose/glandular tomosynthesis phantom with embedded masses, fibers, and microcalcifications and on acquisitions of patients. The image quality of the reconstructions of the scatter-corrected and uncorrected projections was analyzed by studying the signal-difference-to-noise ratio (SDNR), the integral of the signal in each mass lesion (integrated mass signal, IMS), and the modulation transfer function (MTF). Results: The reconstructions of the scatter-corrected projections demonstrated superior image quality. The SDNR of masses embedded in a 5 cm thick tomosynthesis phantom improved 60%-66%, while the SDNR of the smallest mass in an 8 cm thick phantom improved by 59% (p < 0.01). The IMS of the masses in the 5 cm thick phantom also improved by 15%-29%, while the IMS of the masses in the 8 cm thick phantom improved by 26%-62% (p < 0.01). Some embedded microcalcifications in the tomosynthesis phantoms were visible only in the scatter-corrected
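The arithmetic at the heart of an SPR-map correction is compact: if the scatter-to-primary ratio is SPR, the measured projection is M = P + S = P*(1 + SPR), so the primary estimate is P = M/(1 + SPR). A hedged numpy sketch follows; the deformation and registration of the SPR maps and the paper's actual noise filter are not reproduced (a plain uniform smoother stands in, an assumption):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def estimate_primary(measured, spr_map, smooth_px=5):
    """Estimate the primary (scatter-free) projection from a measured
    projection and a registered scatter-to-primary ratio (SPR) map:

        M = P + S = P * (1 + SPR)  =>  P = M / (1 + SPR)

    The light smoothing stands in for the noise filtering used to
    suppress the quantum noise of the removed scatter."""
    return uniform_filter(measured / (1.0 + spr_map), size=smooth_px)
```

Because the SPR map is generated from homogeneous phantom simulations, it is smooth; dividing rather than subtracting keeps the correction stable where the measured signal is noisy.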
Park, Yang-Kyun; Sharp, Gregory C.; Phillips, Justin; Winey, Brian A.
2015-01-01
Purpose: To demonstrate the feasibility of proton dose calculation on scatter-corrected cone-beam computed tomographic (CBCT) images for the purpose of adaptive proton therapy. Methods: CBCT projection images were acquired from anthropomorphic phantoms and a prostate patient using an on-board imaging system of an Elekta infinity linear accelerator. Two previously introduced techniques were used to correct the scattered x-rays in the raw projection images: uniform scatter correction (CBCTus) and a priori CT-based scatter correction (CBCTap). CBCT images were reconstructed using a standard FDK algorithm and GPU-based reconstruction toolkit. Soft tissue ROI-based HU shifting was used to improve HU accuracy of the uncorrected CBCT images and CBCTus, while no HU change was applied to the CBCTap. The degree of equivalence of the corrected CBCT images with respect to the reference CT image (CTref) was evaluated by using angular profiles of water equivalent path length (WEPL) and passively scattered proton treatment plans. The CBCTap was further evaluated in more realistic scenarios such as rectal filling and weight loss to assess the effect of mismatched prior information on the corrected images. Results: The uncorrected CBCT and CBCTus images demonstrated substantial WEPL discrepancies (7.3 ± 5.3 mm and 11.1 ± 6.6 mm, respectively) with respect to the CTref, while the CBCTap images showed substantially reduced WEPL errors (2.4 ± 2.0 mm). Similarly, the CBCTap-based treatment plans demonstrated a high pass rate (96.0% ± 2.5% in 2 mm/2% criteria) in a 3D gamma analysis. Conclusions: A priori CT-based scatter correction technique was shown to be promising for adaptive proton therapy, as it achieved equivalent proton dose distributions and water equivalent path lengths compared to those of a reference CT in a selection of anthropomorphic phantoms. PMID:26233175
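Water equivalent path length, used above to compare the CBCT variants against the reference CT, is the line integral of relative stopping power along a ray. A minimal sketch under a uniform-sampling assumption (function name and sampling scheme are illustrative):

```python
import numpy as np

def wepl(rsp_along_ray, step_mm):
    """Water equivalent path length: the integral of relative stopping
    power (RSP; water = 1.0) sampled along a ray at uniform spacing."""
    return float(np.sum(rsp_along_ray) * step_mm)

# 100 mm of water-equivalent tissue plus 20 mm of bone-like material
# (RSP = 1.5), sampled every 1 mm
ray = np.concatenate([np.ones(100), np.full(20, 1.5)])
```

Since proton range is governed by WEPL rather than by HU directly, millimeter-level WEPL errors like those quoted above translate directly into range errors, which is why the a priori scatter correction matters for adaptive proton therapy.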
Inverse scattering and refraction corrected reflection for breast cancer imaging
NASA Astrophysics Data System (ADS)
Wiskin, J.; Borup, D.; Johnson, S.; Berggren, M.; Robinson, D.; Smith, J.; Chen, J.; Parisky, Y.; Klock, John
2010-03-01
Reflection ultrasound (US) has been utilized as an adjunct imaging modality for over 30 years. TechniScan, Inc. has developed unique, transmission and concomitant reflection algorithms which are used to reconstruct images from data gathered during a tomographic breast scanning process called Warm Bath Ultrasound (WBU™). The transmission algorithm yields high resolution, 3D, attenuation and speed of sound (SOS) images. The reflection algorithm is based on canonical ray tracing utilizing refraction correction via the SOS and attenuation reconstructions. The refraction correction reflection algorithm allows 360 degree compounding resulting in the reflection image. The requisite data are collected when scanning the entire breast in a 33° C water bath, on average in 8 minutes. This presentation explains how the data are collected and processed by the 3D transmission and reflection imaging mode algorithms. The processing is carried out using two NVIDIA® Tesla™ GPU processors, accessing data on a 4-TeraByte RAID. The WBU™ images are displayed in a DICOM viewer that allows registration of all three modalities. Several representative cases are presented to demonstrate potential diagnostic capability including: a cyst, fibroadenoma, and a carcinoma. WBU™ images (SOS, attenuation, and reflection modalities) are shown along with their respective mammograms and standard ultrasound images. In addition, anatomical studies are shown comparing WBU™ images and MRI images of a cadaver breast. This innovative technology is designed to provide additional tools in the armamentarium for diagnosis of breast disease.
Accurate elevation and normal moveout corrections of seismic reflection data on rugged topography
Liu, J.; Xia, J.; Chen, C.; Zhang, G.
2005-01-01
The application of the seismic reflection method is often limited in areas of complex terrain. The problem is the incorrect correction of time shifts caused by topography. To apply normal moveout (NMO) correction to reflection data correctly, static corrections must be applied in advance to compensate for the time distortions of topography and the time delays from near-surface weathered layers. For environmental and engineering investigations, the weathered layers themselves are the targets, so the static correction mainly serves to adjust time shifts due to an undulating surface. In practice, seismic reflected raypaths are assumed to be almost vertical through the near-surface layers because these have much lower velocities than the layers below. This assumption is acceptable in most cases since it results in little residual error for small elevation changes and small offsets in reflection events. Although static algorithms based on choosing a floating datum related to common midpoint gathers or on residual surface-consistent functions are available and effective, errors caused by the assumption of vertical raypaths often generate pseudo-indications of structures. This paper presents a comparison of corrections based on vertical raypaths and on biased (non-vertical) raypaths. It also provides an approach for combining elevation and NMO corrections. The advantages of the approach are demonstrated by synthetic and real-world examples of multi-coverage seismic reflection surveys on rough topography. © The Royal Society of New Zealand 2005.
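For a single flat layer, the NMO correction discussed above maps each sample from the hyperbolic traveltime t(x) = sqrt(t0^2 + x^2/v^2) back to its zero-offset time t0. A minimal constant-velocity sketch (the linear interpolation and the function name are implementation choices, not the paper's combined elevation-plus-NMO algorithm):

```python
import numpy as np

def nmo_correct(trace, dt, offset, velocity):
    """Flatten a reflection hyperbola: for each output zero-offset time t0,
    read the input trace at t = sqrt(t0**2 + (offset/velocity)**2)."""
    n = len(trace)
    t0 = np.arange(n) * dt
    t = np.sqrt(t0**2 + (offset / velocity) ** 2)
    # linear interpolation; samples mapped beyond the trace end are zeroed
    return np.interp(t / dt, np.arange(n), trace, right=0.0)

# a spike recorded at t = 0.5 s on a 600 m offset trace (v = 2000 m/s)
# belongs at zero-offset time t0 = 0.4 s, since sqrt(0.4**2 + 0.3**2) = 0.5
trace = np.zeros(200)
trace[125] = 1.0   # 0.5 s at dt = 4 ms
flat = nmo_correct(trace, 0.004, 600.0, 2000.0)
```

The paper's point is that the t0 axis itself is wrong if elevation statics are computed under a vertical-raypath assumption on rugged topography; this sketch only shows the moveout step that follows the statics.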
Experimental testing of four correction algorithms for the forward scattering spectrometer probe
NASA Technical Reports Server (NTRS)
Hovenac, Edward A.; Oldenburg, John R.; Lock, James A.
1992-01-01
Three number density correction algorithms and one size distribution correction algorithm for the Forward Scattering Spectrometer Probe (FSSP) were compared with data taken by the Phase Doppler Particle Analyzer (PDPA) and an optical number density measuring instrument (NDMI). Of the three number density correction algorithms, the one that compared best to the PDPA and NDMI data was the algorithm developed by Baumgardner, Strapp, and Dye (1985). The algorithm that corrects sizing errors in the FSSP that was developed by Lock and Hovenac (1989) was shown to be within 25 percent of the Phase Doppler measurements at number densities as high as 3000/cc.
Fan, Peng; Hutton, Brian F.; Holstensson, Maria; Ljungberg, Michael; Hendrik Pretorius, P.; Prasad, Rameshwar; Liu, Chi; Ma, Tianyu; Liu, Yaqiang; Wang, Shi; Thorn, Stephanie L.; Stacy, Mitchel R.; Sinusas, Albert J.
2015-12-15
Purpose: The energy spectrum for a cadmium zinc telluride (CZT) detector has a low energy tail due to incomplete charge collection and intercrystal scattering. Due to these solid-state detector effects, scatter would be overestimated if the conventional triple-energy window (TEW) method is used for scatter and crosstalk corrections in CZT-based imaging systems. The objective of this work is to develop a scatter and crosstalk correction method for {sup 99m}Tc/{sup 123}I dual-radionuclide imaging for a CZT-based dedicated cardiac SPECT system with pinhole collimators (GE Discovery NM 530c/570c). Methods: A tailing model was developed to account for the low energy tail effects of the CZT detector. The parameters of the model were obtained using {sup 99m}Tc and {sup 123}I point source measurements. A scatter model was defined to characterize the relationship between down-scatter and self-scatter projections. The parameters for this model were obtained from Monte Carlo simulation using SIMIND. The tailing and scatter models were further incorporated into a projection count model, and the primary and self-scatter projections of each radionuclide were determined with a maximum likelihood expectation maximization (MLEM) iterative estimation approach. The extracted scatter and crosstalk projections were then incorporated into MLEM image reconstruction as an additive term in forward projection to obtain scatter- and crosstalk-corrected images. The proposed method was validated using Monte Carlo simulation, a line source experiment, anthropomorphic torso phantom studies, and patient studies. The performance of the proposed method was also compared to that obtained with the conventional TEW method. Results: Monte Carlo simulations and the line source experiment demonstrated that the TEW method overestimated scatter, while the proposed method provided more accurate scatter estimation by accounting for the low energy tail effect. In the phantom study, improved defect contrasts were
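For reference, the conventional TEW estimate that this work improves upon approximates the scatter in the photopeak window by trapezoidal interpolation from two narrow flanking windows (Ogawa et al.). A minimal per-pixel sketch of that baseline (the window widths are illustrative):

```python
import numpy as np

def tew_scatter(counts_low, counts_high, w_low, w_high, w_peak):
    """Triple-energy-window (TEW) scatter estimate for the photopeak window.

    Counts in two narrow flanking sub-windows are turned into count
    densities (counts/keV), and the scatter under the peak is the area of
    the trapezoid spanning the peak window:
        S = (C_low / w_low + C_high / w_high) * w_peak / 2
    """
    return (counts_low / w_low + counts_high / w_high) * w_peak / 2.0

# per-pixel example: 3 keV sub-windows around a 28 keV photopeak window
scatter = tew_scatter(np.array([30.0]), np.array([6.0]), 3.0, 3.0, 28.0)
```

On a CZT detector the low-side flanking window also collects the tail of unscattered photopeak events, which is exactly why this trapezoid overestimates scatter and why the paper replaces it with an explicit tailing model.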
NASA Astrophysics Data System (ADS)
Mahesh, C.; Prakash, Satya; Sathiyamoorthy, V.; Gairola, R. M.
2011-11-01
An Artificial Neural Network (ANN) based technique is proposed for estimating precipitation over Indian land and oceanic regions [30° S - 40° N and 30° E - 120° E] using the Scattering Index (SI) and Polarization Corrected Temperature (PCT) derived from Special Sensor Microwave Imager (SSM/I) measurements. This rainfall retrieval algorithm is designed to estimate rainfall using a combination of SSM/I and Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) measurements. For training the ANN, SI and PCT (which capture rain signatures more directly) calculated from SSM/I brightness temperatures are used as inputs and Precipitation Radar (PR) rain rate as output. SI is computed using the 19.35 GHz, 22.235 GHz and 85.5 GHz vertical channels, and PCT is computed using the 85.5 GHz vertical and horizontal channels. Once training is completed, independent data sets (not included in the training) are used to test the performance of the network. Instantaneous precipitation estimates from these independent test data sets are validated against PR surface rain rate measurements. The results are compared with precipitation estimated using power-law-based (i) global and (ii) regional algorithms. Overall, the results show that the present ANN-based algorithm agrees better with the PR rain rate. This study is aimed at developing a more accurate operational rainfall retrieval algorithm for the Indo-French Megha-Tropiques Microwave Analysis and Detection of Rain and Atmospheric Structures (MADRAS) radiometer.
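The polarization corrected temperature at 85.5 GHz is commonly computed with the Spencer et al. (1989) linear combination, which removes most of the surface-emissivity signal; the beta = 0.45 coefficient below is the usual published value and is an assumption here, since the abstract does not quote its constants:

```python
def pct_85ghz(tb_v, tb_h, beta=0.45):
    """Polarization corrected temperature (Spencer et al., 1989):

        PCT = (Tb_v - beta * Tb_h) / (1 - beta)

    For beta = 0.45 this equals 1.818*Tb_v - 0.818*Tb_h. Surface
    polarization differences largely cancel, so a depressed PCT flags
    scattering by precipitation-sized ice rather than a radiometrically
    cold (e.g., water) surface."""
    return (tb_v - beta * tb_h) / (1.0 - beta)
```

Note that for an unpolarized scene (Tb_v = Tb_h) the PCT reduces to the brightness temperature itself, which is the property that makes it a clean rain signature over mixed surfaces.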
NASA Astrophysics Data System (ADS)
Yao, B. A.; Zhang, C. S.; Sheng, C. J.; Peng, Y. L.
2005-07-01
This paper is a continuation of paper [1]. We further show that the difference between twilight flat fields and night-sky exposures is mainly due to scattered light. Like Grundahl and Sorensen, we made pinhole images with the 1.56 m reflector at the Shanghai Observatory and the 63 cm reflector of Nanjing University to demonstrate the presence of scattered light directly. Both reflectors have conventionally designed baffles. This points to a common weakness of all standard reflector designs: the two baffles mounted in front of the primary and secondary mirrors are not enough to shield the CCD camera from scattered light when acquiring accurate flat fields. Modifying the primary mirror baffle of similar reflectors is therefore important for obtaining more accurate flat fielding.
NASA Astrophysics Data System (ADS)
Bačić, Z.; Kress, J. D.; Parker, G. A.; Pack, R. T.
1990-02-01
Accurate 3D coupled channel calculations for total angular momentum J=0 for the reaction F+H2→HF+H using a realistic potential energy surface are analyzed. The reactive scattering is formulated using the hyperspherical (APH) coordinates of Pack and Parker. The adiabatic basis functions are generated quite efficiently using the discrete variable representation method. Reaction probabilities for relative collision energies of up to 17.4 kcal/mol are presented. To aid in the interpretation of the resonances and quantum structure observed in the calculated reaction probabilities, we analyze the phases of the S matrix transition elements, Argand diagrams, time delays and eigenlifetimes of the collision lifetime matrix. Collinear (1D) and reduced dimensional 3D bending corrected rotating linear model (BCRLM) calculations are presented and compared with the accurate 3D calculations.
NASA Astrophysics Data System (ADS)
Camp, Charles H., Jr.; Lee, Young Jong; Cicerone, Marcus T.
2016-04-01
Coherent anti-Stokes Raman scattering (CARS) microspectroscopy has demonstrated significant potential for biological and materials imaging. To date, however, the primary mechanism of disseminating CARS spectroscopic information is through pseudocolor imagery, which explicitly neglects a vast majority of the hyperspectral data. Furthermore, current paradigms in CARS spectral processing do not lend themselves to quantitative sample-to-sample comparability. The primary limitation stems from the need to accurately measure the so-called nonresonant background (NRB) that is used to extract the chemically-sensitive Raman information from the raw spectra. Measurement of the NRB on a pixel-by-pixel basis is a nontrivial task; thus, reference NRB from glass or water are typically utilized, resulting in error between the actual and estimated amplitude and phase. In this manuscript, we present a new methodology for extracting the Raman spectral features that significantly suppresses these errors through phase detrending and scaling. Classic methods of error-correction, such as baseline detrending, are demonstrated to be inaccurate and to simply mask the underlying errors. The theoretical justification is presented by re-developing the theory of phase retrieval via the Kramers-Kronig relation, and we demonstrate that these results are also applicable to maximum entropy method-based phase retrieval. This new error-correction approach is experimentally applied to glycerol spectra and tissue images, demonstrating marked consistency between spectra obtained using different NRB estimates, and between spectra obtained on different instruments. Additionally, in order to facilitate implementation of these approaches, we have made many of the tools described herein available free for download.
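The Kramers-Kronig phase retrieval underlying this approach can be sketched with a discrete Hilbert transform: for a spectrum dominated by a real nonresonant background, the phase is the Hilbert transform of half the log of the CARS-to-reference intensity ratio, and the Raman-like spectrum is the imaginary part of the retrieved susceptibility. The sketch below omits the windowing, padding, and the phase-detrending/scaling step that is the paper's actual contribution, and the overall sign depends on the susceptibility convention:

```python
import numpy as np
from scipy.signal import hilbert

def kk_retrieve(i_cars, i_ref):
    """Kramers-Kronig phase retrieval for CARS (sketch).

    amplitude: |chi|/chi_nr = sqrt(I_CARS / I_ref)
    phase:     phi ~ H{ 0.5 * ln(I_CARS / I_ref) }   (discrete Hilbert
               transform; flip the sign if your convention inverts peaks)
    The Raman-like spectrum is Im(chi)/chi_nr = sqrt(ratio) * sin(phi).
    """
    ratio = np.asarray(i_cars, dtype=float) / i_ref
    phi = np.imag(hilbert(0.5 * np.log(ratio)))
    return np.sqrt(ratio) * np.sin(phi)

# synthetic check: one Lorentzian line on a constant nonresonant background
w = np.linspace(-200.0, 200.0, 4001)      # detuning axis (arb. units)
chi = 10.0 + 5.0 / (-w - 2.0j)            # chi_nr = 10, resonance at w = 0
raman_like = kk_retrieve(np.abs(chi) ** 2, 10.0 ** 2)
```

When the reference spectrum is not the true NRB (glass or water instead of the sample's own background), the retrieved phase acquires the slowly varying error that the paper's detrending and scaling are designed to remove.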
[Correction Method of Atmospheric Scattering Effect Based on Three Spectrum Bands].
Ye, Han-han; Wang, Xian-hua; Jiang, Xin-hua; Bu, Ting-ting
2016-03-01
As a major error source in CO2 retrieval, the atmospheric scattering effect hampers the application of satellite products. Aerosol effects and the combined effects of aerosol and ground surface are important sources of atmospheric scattering, so the scattering contributions of both must be considered together. Based on the continuum and the strong and weak absorption parts of three spectral bands, O2-A, CO2 1.6 μm and CO2 2.06 μm, information on aerosol and albedo was analyzed, and an improved full-physics retrieval method was proposed that retrieves aerosol and albedo simultaneously to correct the scattering effect. A simulation study was carried out on the CO2 error caused by aerosol and ground surface albedo, and on the residual CO2 error after applying the correction method. The CO2 error caused by aerosol optical depth and ground surface albedo can reach 8%, and the error caused by different aerosol types can reach 10%, while the correction method keeps these two types of error within 1% and 2%, respectively, showing that it corrects the scattering effect effectively. The evaluation demonstrates the method's potential for high-precision satellite data retrieval; some issues that require attention in real applications are also pointed out. PMID:27400493
Large corrections to high-pT hadron-hadron scattering in QCD
Ellis, R. K.; Furman, M. A.; Haber, H. E.; Hinchliffe, I.
1980-10-01
We have computed the first non-trivial QCD corrections to the quark-quark scattering process which contributes to the production of hadrons at large p{sub T} in hadron-hadron collisions. Using quark distribution functions defined in deep inelastic scattering and fragmentation functions defined in one-particle-inclusive e{sup +}e{sup -} annihilation, we find that the corrections are large. This implies that QCD perturbation theory may not be reliable for large-p{sub T} hadron physics.
Methods for correcting microwave scattering and emission measurements for atmospheric effects
NASA Technical Reports Server (NTRS)
Komen, M. (Principal Investigator)
1975-01-01
The author has identified the following significant results. Algorithms were developed to permit correction of scattering coefficient and brightness temperature for the Skylab S193 Radscat for the effects of cloud attenuation. These algorithms depend upon a measurement of the vertically polarized excess brightness temperature at 50 deg incidence angle. This excess temperature is converted to an equivalent 50 deg attenuation, which may then be used to estimate the horizontally polarized excess brightness temperature and reduced scattering coefficient at 50 deg. For angles other than 50 deg, the correction also requires use of the variation of emissivity with salinity and water temperature.
Xu, Yuan; Bai, Ti; Yan, Hao; Ouyang, Luo; Pompos, Arnold; Wang, Jing; Zhou, Linghong; Jiang, Steve B; Jia, Xun
2015-05-01
Cone-beam CT (CBCT) has become the standard image guidance tool for patient setup in image-guided radiation therapy. However, due to its large illumination field, scattered photons severely degrade its image quality. While kernel-based scatter correction methods have been used routinely in the clinic, it is still desirable to develop Monte Carlo (MC) simulation-based methods due to their accuracy. However, the high computational burden of the MC method has prevented routine clinical application. This paper reports our recent development of a practical method of MC-based scatter estimation and removal for CBCT. In contrast with conventional MC approaches that estimate scatter signals using a scatter-contaminated CBCT image, our method used a planning CT image for MC simulation, which has the advantages of accurate image intensity and absence of image truncation. In our method, the planning CT was first rigidly registered with the CBCT. Scatter signals were then estimated via MC simulation. After scatter signals were removed from the raw CBCT projections, a corrected CBCT image was reconstructed. The entire workflow was implemented on a GPU platform for high computational efficiency. Strategies such as projection denoising, CT image downsampling, and interpolation along the angular direction were employed to further enhance the calculation speed. We studied the impact of key parameters in the workflow on the resulting accuracy and efficiency, based on which the optimal parameter values were determined. Our method was evaluated in numerical simulation, phantom, and real patient cases. In the simulation cases, our method reduced mean HU errors from 44 to 3 HU and from 78 to 9 HU in the full-fan and the half-fan cases, respectively. In both the phantom and the patient cases, image artifacts caused by scatter, such as ring artifacts around the bowtie area, were reduced. With all the techniques employed, we achieved computation time of less than 30 s including the
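Whatever produces the scatter estimate (here, Monte Carlo on the registered planning CT), the projection-domain correction itself reduces to a smoothed subtraction with a positivity floor before the log transform. A schematic numpy sketch; the smoothing width, floor, and function name are assumptions, and the MC estimation, registration, and GPU machinery of the paper are not reproduced:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_projection(raw, scatter_estimate, i0, sigma_px=8, floor=1e-6):
    """Subtract a smoothed scatter estimate from a raw CBCT projection and
    return the line-integral projection used for reconstruction.

    Scatter is low-frequency, so heavy smoothing of the estimate reduces
    its (MC or quantum) noise without biasing the correction; clipping
    keeps the primary strictly positive before the log transform."""
    scatter = gaussian_filter(scatter_estimate, sigma_px)
    primary = np.clip(raw - scatter, floor, None)
    # line integrals: p = -ln(I / I0), with I0 the unattenuated intensity
    return -np.log(primary / i0)
```

Subtracting in the projection domain, before reconstruction, is what lets the method remove cupping and ring artifacts rather than merely rescaling HU values afterwards.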
Coulomb corrections to the parameters of the Molière multiple scattering theory
NASA Astrophysics Data System (ADS)
Kuraev, Eduard; Voskresenskaya, Olga; Tarasov, Alexander
2014-06-01
High-energy Coulomb corrections to the parameters of the Molière multiple scattering theory are obtained. Numerical calculations are presented in the range of the nuclear charge number of the target atom 6 ≤ Z ≤ 92. It is shown that these corrections have a large value for sufficiently heavy elements of the target material and should be taken into account in describing high-energy experiments with nuclear targets.
Ma, C.; Liescheski, P.B.; Bonham, R.A.
1989-12-01
In this article we describe an experimental technique to measure the total electron-impact cross section by measurement of the attenuation of an electron beam passing through a gas at constant pressure with the unwanted forward scattering contribution removed. The technique is based on the different spatial propagation properties of scattered and unscattered electrons. The correction is accomplished by measuring the electron beam attenuation dependence on both the target gas pressure (number density) and transmission length. Two extended forms of the Beer–Lambert law which approximately include the contributions for forward scattering and for forward scattering plus multiple scattering from the gas outside the electron beam were developed. It is argued that the dependence of the forward scattering on the path length through the gas is approximately independent of the model used to describe it. The proposed methods were used to determine the total cross section and forward scattering contribution from argon (Ar) with 300-eV electrons. Our results are compared with those in the literature and the predictions of theory and experiment for the forward scattering and multiple scattering contributions. In addition, Monte Carlo simulations were performed as a further test of the method.
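The baseline the extended forms build on is the plain Beer-Lambert fit: ln(I/I0) = -(P/kT)·σ·L, so the slope of log-transmission versus pressure yields the total cross section. A sketch of this uncorrected extraction on synthetic data (the σ value is illustrative, not a measured Ar cross section):

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def total_cross_section(pressures_pa, transmissions, length_m, temp_k):
    """Fit ln(I/I0) = -(P / kT) * sigma * L and return sigma in m^2.

    This is the plain Beer-Lambert extraction; the paper's extended forms
    add terms for forward scattering and for multiple scattering from gas
    outside the beam on top of this."""
    slope = np.polyfit(pressures_pa, np.log(transmissions), 1)[0]
    return -slope * K_B * temp_k / length_m

# synthetic check: generate data from a known sigma and recover it
sigma_true = 3.0e-20           # m^2, illustrative magnitude only
L, T = 0.10, 300.0             # 10 cm cell, room temperature
p = np.linspace(0.1, 2.0, 10)  # Pa
i_over_i0 = np.exp(-(p / (K_B * T)) * sigma_true * L)
```

The paper's point is that electrons scattered into the forward cone still reach the detector, so the measured slope underestimates σ; the pressure- and length-dependent fits are what let that contribution be separated out.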
Radiative Corrections for Lepton-Proton Scattering: When the Mass Matters
NASA Astrophysics Data System (ADS)
Afanasev, Andrei
2015-04-01
Radiative corrections procedures for electron-proton and muon-proton scattering are well established under the assumption that the leptons can be treated in an ultra-relativistic approximation. The MUSE experiment at PSI and the COMPASS experiment at CERN have entered kinematic regions where the explicit dependence of radiative corrections on the lepton mass becomes important. MUSE will consider the scattering of muons with momenta of the order of 100 MeV/c, so lepton mass corrections are important over the entire kinematic domain. The COMPASS experiment uses scattering of 100 GeV/c muons, and the muon mass effects are especially relevant in the quasi-real photoproduction limit, Q² → 0. A dedicated Monte Carlo generator of radiative events is being developed for MUSE, which also includes effects of interference between the lepton and proton bremsstrahlung. Parts of the radiative corrections are expected to be suppressed for muons due to the larger muon mass. Two-photon exchange corrections are generally expected to be small, and should be similar for electrons and muons. We classify the radiative corrections into two categories, C-even and C-odd under lepton charge reversal, and discuss their role separately for the above experiments.
Oyeyemi, Victor B.; Krisiloff, David B.; Keith, John A.; Libisch, Florian; Pavone, Michele; Carter, Emily A.
2014-01-28
Oxygenated hydrocarbons play important roles in combustion science as renewable fuels and additives, but many details about their combustion chemistry remain poorly understood. Although many methods exist for computing accurate electronic energies of molecules at equilibrium geometries, a consistent description of entire combustion reaction potential energy surfaces (PESs) requires multireference correlated wavefunction theories. Here we use bond dissociation energies (BDEs) as a foundational metric to benchmark methods based on multireference configuration interaction (MRCI) for several classes of oxygenated compounds (alcohols, aldehydes, carboxylic acids, and methyl esters). We compare results from multireference singles and doubles configuration interaction to those utilizing a posteriori and a priori size-extensivity corrections, benchmarked against experiment and coupled cluster theory. We demonstrate that size-extensivity corrections are necessary for chemically accurate BDE predictions even in relatively small molecules and furnish examples of unphysical BDE predictions resulting from using too-small orbital active spaces. We also outline the specific challenges in using MRCI methods for carbonyl-containing compounds. The resulting complete basis set extrapolated, size-extensivity-corrected MRCI scheme produces BDEs generally accurate to within 1 kcal/mol, laying the foundation for this scheme's use on larger molecules and for more complex regions of combustion PESs.
Calbo, Joaquín; Ortí, Enrique; Sancho-García, Juan C; Aragó, Juan
2015-03-10
In this work, we present a thorough assessment of the performance of some representative double-hybrid density functionals (revPBE0-DH-NL and B2PLYP-NL) as well as their parent hybrid and GGA counterparts, in combination with the most modern version of the nonlocal (NL) van der Waals correction to describe very large weakly interacting molecular systems dominated by noncovalent interactions. Prior to the assessment, an accurate and homogeneous set of reference interaction energies was computed for the supramolecular complexes constituting the L7 and S12L data sets by using the novel, precise, and efficient DLPNO-CCSD(T) method at the complete basis set limit (CBS). The correction of the basis set superposition error and the inclusion of the deformation energies (for the S12L set) have been crucial for obtaining precise DLPNO-CCSD(T)/CBS interaction energies. Among the density functionals evaluated, the double-hybrid revPBE0-DH-NL and B2PLYP-NL with the three-body dispersion correction provide remarkably accurate association energies very close to the chemical accuracy. Overall, the NL van der Waals approach combined with proper density functionals can be seen as an accurate and affordable computational tool for the modeling of large weakly bonded supramolecular systems. PMID:26579747
Beare, Richard; Brown, Michael J. I.; Pimbblet, Kevin
2014-12-20
We describe an accurate new method for determining absolute magnitudes, and hence also K-corrections, that is simpler than most previous methods, being based on a quadratic function of just one suitably chosen observed color. The method relies on the extensive and accurate new set of 129 empirical galaxy template spectral energy distributions from Brown et al. A key advantage of our method is that we can reliably estimate random errors in computed absolute magnitudes due to galaxy diversity, photometric error and redshift error. We derive K-corrections for the five Sloan Digital Sky Survey filters and provide parameter tables for use by the astronomical community. Using the New York Value-Added Galaxy Catalog, we compare our K-corrections with those from kcorrect. Our K-corrections produce absolute magnitudes that are generally in good agreement with kcorrect. Absolute griz magnitudes differ by less than 0.02 mag and those in the u band by ∼0.04 mag. The evolution of rest-frame colors as a function of redshift is better behaved using our method, with relatively few galaxies being assigned anomalously red colors and a tight red sequence being observed across the whole 0.0 < z < 0.5 redshift range.
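A K-correction that is a quadratic function of a single observed color, as described above, reduces to evaluating a three-coefficient polynomial and plugging the result into the usual absolute-magnitude relation. A minimal sketch; the coefficients are placeholders, not values from the paper's parameter tables:

```python
def k_correction(color, coeffs):
    """K(color) = a0 + a1*color + a2*color**2; coeffs come from per-filter,
    per-redshift parameter tables (placeholder values used in examples)."""
    a0, a1, a2 = coeffs
    return a0 + a1 * color + a2 * color ** 2

def absolute_magnitude(apparent_mag, distance_modulus, color, coeffs):
    """M = m - DM - K(color), the standard relation once K is known."""
    return apparent_mag - distance_modulus - k_correction(color, coeffs)
```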
Improving quantitative dosimetry in 177Lu-DOTATATE SPECT by energy window-based scatter corrections
Lagerburg, Vera; Klausen, Thomas L.; Holm, Søren
2014-01-01
Purpose: Patient-specific dosimetry of lutetium-177 (177Lu)-DOTATATE treatment in neuroendocrine tumours is important, because uptake differs across patients. Single photon emission computed tomography (SPECT)-based dosimetry requires a conversion factor between the obtained counts and the activity, which depends on the collimator type, the utilized energy windows and the applied scatter correction techniques. In this study, energy window subtraction-based scatter correction methods are compared experimentally and quantitatively. Materials and methods: 177Lu SPECT images of a phantom with a known activity concentration ratio between the uniform background and filled hollow spheres were acquired for three different collimators: low-energy high resolution (LEHR), low-energy general purpose (LEGP) and medium-energy general purpose (MEGP). Counts were collected in several energy windows, and scatter correction was performed by applying different methods such as effective scatter source estimation (ESSE), triple-energy and dual-energy window, double-photopeak window and downscatter correction. The intensity ratio between the spheres and the background was measured, corrected for the partial volume effect and used to compare the performance of the methods. Results: Low-energy collimators combined with 208 keV energy windows give rise to artefacts. For the 113 keV energy window, large differences were observed in the ratios for the spheres. For MEGP collimators with the ESSE correction technique, the measured ratio was close to the real ratio, and the differences between spheres were small. Conclusion: For quantitative 177Lu imaging, MEGP collimators are advised. Both energy peaks can be utilized when the ESSE correction technique is applied. The difference between the calculated and the real ratio is less than 10% for both energy windows. PMID:24525900
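Among the energy-window methods compared above, the triple-energy-window (TEW) estimate approximates the scatter under the photopeak as a trapezoid spanned by the count densities in two narrow flanking windows. A minimal sketch; the window widths and counts below are illustrative, not the study's acquisition parameters:

```python
def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_peak):
    """Triple-energy-window estimate: scatter under the photopeak is the
    trapezoid spanned by the count densities in two narrow flanking windows."""
    return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

def corrected_counts(c_peak, c_lower, c_upper, w_lower, w_upper, w_peak):
    """Photopeak counts with the TEW scatter estimate subtracted (floored at 0)."""
    return max(c_peak - tew_scatter(c_lower, c_upper, w_lower, w_upper, w_peak), 0.0)
```

For example, a 208 keV photopeak with a 41.6 keV main window and 4 keV sub-windows containing 300 and 100 counts yields a scatter estimate of 2080 counts.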
NASA Astrophysics Data System (ADS)
Zohoun, Sylvain; Agoua, Eusèbe; Degan, Gérard; Perre, Patrick
2002-08-01
This paper presents an experimental study of mass diffusion in the hygroscopic region of four temperate wood species and three tropical ones. To simplify the interpretation of the phenomena, a dimensionless parameter called the reduced diffusivity, which varies from 0 to 1, is defined. The method is based, first, on determining that parameter from measurements of the mass flux under the standard operating conditions of the device (tightness, dimensional variations, easy installation of the wood samples, and good stability of temperature and humidity). Second, the reasons why that parameter has to be corrected are presented, and an abacus for this correction of the mass diffusivity of wood in the steady regime is plotted. This work represents a significant advance in the characterization of forest species.
QCD corrections to dilepton production near partonic threshold in p̄p scattering
Shimizu, H.; Sterman, G.; Vogelsang, W.; Yokoya, H.
2005-10-02
We present a recent study of the QCD corrections to dilepton production near partonic threshold in transversely polarized p̄p scattering. We analyze the role of the higher-order perturbative QCD corrections in terms of the available fixed-order contributions as well as of all-order soft-gluon resummations for the kinematical regime of proposed experiments at GSI-FAIR. We find that perturbative corrections are large for both unpolarized and polarized cross sections, but that the spin asymmetries are stable. The role of the far infrared region of the momentum integral in the resummed exponent and the effect of the NNLL resummation are briefly discussed.
Igor Akushevich; Andrei Afanasev; Mykola Merenkov
2001-12-01
Explicit formulae for the radiative correction (RC) calculation for elastic ep-scattering are presented. Two typical measurements of polarization observables, such as beam-target asymmetry or recoil proton polarization, are considered. Possibilities for taking realistic experimental acceptances into account are discussed. The Fortran code MASCARAD, which provides the RC procedure, is presented. A numerical analysis is done for the kinematical conditions of TJNAF.
NASA Astrophysics Data System (ADS)
Afanasev, A.; Akushevich, I.; Merenkov, N.
2001-12-01
Explicit formulas for radiative correction (RC) calculations for elastic ep scattering are presented. Two typical measurements of polarization observables, such as beam-target asymmetry or recoil proton polarization, are considered. The possibilities of taking into account realistic experimental acceptances are discussed. The FORTRAN code MASCARAD for providing the RC procedure is presented. A numerical analysis is done for the kinematical conditions of CEBAF.
QED Radiative Corrections to Asymmetries of Elastic ep-scattering in Hadronic Variables
Alexander Ilyichev; Andrei Afanasev; Igor Akushevich; Mykola Merenkov
2001-08-16
Compact analytical formulae for QED radiative corrections in the processes of elastic e-p scattering are obtained for the case when kinematic variables are reconstructed from the measured recoil proton momentum. A numerical analysis is presented under the kinematic conditions of current experiments at JLab.
Interference detection and correction applied to incoherent-scatter radar power spectrum measurement
NASA Technical Reports Server (NTRS)
Ying, W. P.; Mathews, J. D.; Rastogi, P. K.
1986-01-01
A median-filter-based interference detection and correction technique is evaluated, and its application to Arecibo incoherent-scatter radar D-region ionospheric power spectra is discussed. The method can be extended to other kinds of data as long as the statistics involved in the process remain valid.
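A median-filter interference detector of the kind described above can be sketched as follows: points that deviate strongly from a running median are flagged as interference and replaced by the local median. The window length and threshold factor are illustrative choices, not the authors' settings:

```python
import numpy as np

def despike_spectrum(power, window=5, k=5.0):
    """Flag points that deviate strongly from a running median (presumed
    interference) and replace them with the local median value."""
    power = np.asarray(power, dtype=float)
    half = window // 2
    padded = np.pad(power, half, mode='edge')
    med = np.array([np.median(padded[i:i + window]) for i in range(power.size)])
    resid = np.abs(power - med)
    thresh = k * max(np.median(resid), 1e-12)  # robust scale; avoid zero threshold
    mask = resid > thresh
    return np.where(mask, med, power), mask
```

An isolated spike on an otherwise flat spectrum is detected and restored to the baseline, while unaffected bins pass through unchanged.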
NASA Astrophysics Data System (ADS)
Rosenthal, Yair; Lohmann, George P.
2002-09-01
Paired δ18O and Mg/Ca measurements on the same foraminiferal shells offer the ability to independently estimate sea surface temperature (SST) changes and assess their temporal relationship to the growth and decay of continental ice sheets. The accuracy of this method is confounded, however, by the absence of a quantitative method to correct Mg/Ca records for alteration by dissolution. Here we describe dissolution-corrected calibrations for Mg/Ca-paleothermometry in which the preexponent constant is a function of size-normalized shell weight: (1) for G. ruber (212-300 μm), (Mg/Ca)_ruber = (0.025 wt + 0.11) e^(0.095T), and (2) for G. sacculifer (355-425 μm), (Mg/Ca)_sacc = (0.0032 wt + 0.181) e^(0.095T). The new calibrations improve the accuracy of SST estimates and are globally applicable. With this correction, eastern equatorial Atlantic SST during the Last Glacial Maximum is estimated to be 2.9° ± 0.4°C colder than today.
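Given the calibration form above, temperature follows by inverting the exponential. A minimal sketch for the G. ruber case (shell weight in μg, temperature in °C), using the published constants:

```python
import math

def sst_from_mgca_ruber(mgca, shell_weight_ug):
    """Invert the G. ruber calibration Mg/Ca = (0.025*wt + 0.11)*exp(0.095*T)
    for temperature T (degrees C); wt is the size-normalized shell weight."""
    preexponent = 0.025 * shell_weight_ug + 0.11
    return math.log(mgca / preexponent) / 0.095
```

Running the calibration forward and then inverting it recovers the input temperature, which is a useful sanity check before applying the correction to downcore records.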
Scatter correction for x-ray conebeam CT using one-dimensional primary modulation
NASA Astrophysics Data System (ADS)
Zhu, Lei; Gao, Hewei; Bennett, N. Robert; Xing, Lei; Fahrig, Rebecca
2009-02-01
Recently, we developed an efficient scatter correction method for x-ray imaging using primary modulation. A two-dimensional (2D) primary modulator with spatially variant attenuating materials is inserted between the x-ray source and the object to separate primary and scatter signals in the Fourier domain. Due to the high modulation frequency in both directions, the 2D primary modulator has a strong scatter correction capability for objects with arbitrary geometries. However, signal processing on the modulated projection data requires knowledge of the modulator position and attenuation. In practical systems, mainly due to system gantry vibration, beam hardening effects and the ramp-filtering in the reconstruction, the insertion of the 2D primary modulator results in artifacts such as rings in the CT images, if no post-processing is applied. In this work, we eliminate the source of artifacts in the primary modulation method by using a one-dimensional (1D) modulator. The modulator is aligned parallel to the ramp-filtering direction to avoid error magnification, while sufficient primary modulation is still achieved for scatter correction on a quasi-cylindrical object, such as a human body. The scatter correction algorithm is also greatly simplified for the convenience and stability in practical implementations. The method is evaluated on a clinical CBCT system using the Catphan© 600 phantom. The result shows effective scatter suppression without introducing additional artifacts. In the selected regions of interest, the reconstruction error is reduced from 187.2 HU to 10.0 HU if the proposed method is used.
NASA Astrophysics Data System (ADS)
Takeuchi, Wataru
2013-10-01
In impact-collision ion scattering spectroscopy (ICISS) data analysis, the interaction potential, represented through the screening length, is still not satisfactorily established, so a correction factor to the screening length is commonly introduced. Previously, Yamamura, Takeuchi and Kawamura (YTK) proposed a theory that takes the shell effect of the electron distributions into account in the correction factor to the Firsov screening length in the Molière potential. Applying the YTK theory to the evaluation of screening-length corrections for the interaction potentials in ICISS showed that the corrections calculated with the YTK theory agree well with those determined by simulations or numerical calculations in ICISS and its variants, and are superior to evaluations based on the O'Connor and Biersack (OB) formula.
Correction for multiple scattering of unpolarized photons in attenuation coefficient measurements
Fernandez, J.E.; Sumini, M.; Satori, R.
1995-01-01
Calculations of the diffusion of unpolarized photons in thin targets have been performed using a vector transport model that rigorously takes into account the polarization introduced by the scattering interactions. An order-of-interactions solution of the Boltzmann transport equation for photons was used to describe the multiple scattering terms due to the prevailing effects in the X-ray regime. An analytical expression for the correction factor to the attenuation coefficient is given in terms of the solid angle subtended by the detector and the energy interval characterizing the detection response. Although the main corrections are due to the influence of the pure Rayleigh effect, first- and second-order chains involving the Rayleigh and Compton effects have been considered as possible sources of overlapping contributions to the transmitted intensity. The extent of the corrections is estimated and some examples are given for pure element targets.
Constrained gamma-Z interference corrections to parity-violating electron scattering
Hall, Nathan Luke; Blunden, Peter Gwithian; Melnitchouk, Wally; Thomas, Anthony W.; Young, Ross D.
2013-07-01
We present a comprehensive analysis of gamma-Z interference corrections to the weak charge of the proton measured in parity-violating electron scattering, including a survey of existing models and a critical analysis of their uncertainties. Constraints from parton distributions in the deep-inelastic region, together with new data on parity-violating electron scattering in the resonance region, result in significantly smaller uncertainties on the corrections compared to previous estimates. At the kinematics of the Qweak experiment, we determine the gamma-Z box correction to be Re □_{γZ}^V = (5.61 ± 0.36) × 10^{-3}. The new constraints also allow precise predictions to be made for parity-violating deep-inelastic asymmetries on the deuteron.
Correcting errors in the optical path difference in Fourier spectroscopy: a new accurate method.
Kauppinen, J; Kärkköinen, T; Kyrö, E
1978-05-15
A new computational method for calculating and correcting the errors of the optical path difference in Fourier spectrometers is presented. This method requires only a one-sided interferogram and a single well-separated line in the spectrum, and it also cancels out the linear phase error. The practical theory of the method is included, and its performance is illustrated by simulations. The method is further verified by several simulations in order to estimate its usefulness and accuracy, and an example of its use in practice is also given. PMID:20198027
A library least-squares approach for scatter correction in gamma-ray tomography
NASA Astrophysics Data System (ADS)
Meric, Ilker; Anton Johansen, Geir; Valgueiro Malta Moreira, Icaro
2015-03-01
Scattered radiation is known to lead to distortion in reconstructed images in Computed Tomography (CT). The effects of scattered radiation are especially more pronounced in non-scanning, multiple source systems which are preferred for flow imaging where the instantaneous density distribution of the flow components is of interest. In this work, a new method based on a library least-squares (LLS) approach is proposed as a means of estimating the scatter contribution and correcting for this. The validity of the proposed method is tested using the 85-channel industrial gamma-ray tomograph previously developed at the University of Bergen (UoB). The results presented here confirm that the LLS approach can effectively estimate the amounts of transmission and scatter components in any given detector in the UoB gamma-ray tomography system.
NASA Astrophysics Data System (ADS)
Yang, Kai; Burkett, George, Jr.; Boone, John M.
2014-11-01
The purpose of this research was to develop a method to correct the cupping artifact caused from x-ray scattering and to achieve consistent Hounsfield Unit (HU) values of breast tissues for a dedicated breast CT (bCT) system. The use of a beam passing array (BPA) composed of parallel-holes has been previously proposed for scatter correction in various imaging applications. In this study, we first verified the efficacy and accuracy using BPA to measure the scatter signal on a cone-beam bCT system. A systematic scatter correction approach was then developed by modeling the scatter-to-primary ratio (SPR) in projection images acquired with and without BPA. To quantitatively evaluate the improved accuracy of HU values, different breast tissue-equivalent phantoms were scanned and radially averaged HU profiles through reconstructed planes were evaluated. The dependency of the correction method on object size and number of projections was studied. A simplified application of the proposed method on five clinical patient scans was performed to demonstrate efficacy. For the typical 10-18 cm breast diameters seen in the bCT application, the proposed method can effectively correct for the cupping artifact and reduce the variation of HU values of breast equivalent material from 150 to 40 HU. The measured HU values of 100% glandular tissue, 50/50 glandular/adipose tissue, and 100% adipose tissue were approximately 46, -35, and -94, respectively. It was found that only six BPA projections were necessary to accurately implement this method, and the additional dose requirement is less than 1% of the exam dose. The proposed method can effectively correct for the cupping artifact caused from x-ray scattering and retain consistent HU values of breast tissues.
Accurate measurement of the x-ray coherent scattering form factors of tissues
NASA Astrophysics Data System (ADS)
King, Brian W.
The material dependent x-ray scattering properties of tissues are determined by their scattering form factors, measured as a function of the momentum transfer argument, x. Incoherent scattering form factors, Finc, are calculable for all values of x while coherent scattering form factors, Fcoh, cannot be calculated except at large x because of their dependence on long range order. As a result, measuring Fcoh is very important to the developing field of x-ray scatter imaging. Previous measurements of Fcoh, based on crystallographic techniques, have shown significant variability, as these methods are not optimal for amorphous materials. Two methods of measuring Fcoh, designed with amorphous materials in mind, are developed in this thesis. An angle-dispersive technique is developed that uses a polychromatic x-ray beam and a large area, energy-insensitive detector. It is shown that Fcoh can be measured in this system if the incident x-ray spectrum is known. The problem is ill-conditioned for typical x-ray spectra and two numerical methods of dealing with the poor conditioning are explored. It is shown that these techniques work best with K-edge filters to limit the spectral width and that the accuracy degrades for strongly ordered materials. Measurements of Fcoh for water samples were made using 50, 70 and 92 kVp spectra. The average absolute relative difference in Fcoh between our results and the literature for water is approximately 10-15%. Similar measurements for fat samples were made and found to be qualitatively similar to results in the literature, although there is very large variation between the literature values in this case. The angle-dispersive measurement is limited to low resolution measurements of the coherent scattering form factor although it is more accessible than traditional measurements because of the relatively commonplace equipment requirements. An energy-dispersive technique is also developed that uses a polychromatic x-ray beam and an
A Monte Carlo correction for the effect of Compton scattering in 3-D PET brain imaging
Levin, C.S.; Dahlbom, M.; Hoffman, E.J.
1995-08-01
A Monte Carlo simulation has been developed to simulate and correct for the effect of Compton scatter in 3-D acquired PET brain scans. The method utilizes the 3-D reconstructed image volume as the source intensity distribution for a photon-tracking Monte Carlo simulation. It is assumed that the number of events in each pixel of the image represents the isotope concentration at that location in the brain. The history of each annihilation photon's interactions in the scattering medium is followed, and the sinograms for the scattered and unscattered photon pairs are generated in a simulated 3-D PET acquisition. The calculated scatter contribution is used to correct the original data set. The method is general and can be applied to any scanner configuration or geometry. In its current form the simulation requires 25 hours on a single Sparc 10 CPU when every pixel in a 15-plane, 128 x 128 pixel image volume is sampled, and less than 2 hours when 16 pixels (4 x 4) are grouped as a single pixel. Results of the correction applied to 3-D human and phantom studies are presented.
Two-photon exchange corrections in elastic lepton-proton scattering at small momentum transfer
NASA Astrophysics Data System (ADS)
Tomalak, Oleksandr; Vanderhaeghen, Marc
2016-03-01
In recent years, elastic electron-proton scattering experiments, with and without polarized protons, have given strikingly different results for the ratio of the electric to the magnetic proton form factor. A mysterious discrepancy ("the proton radius puzzle") has been observed in measurements of the proton charge radius in muon spectroscopy experiments versus electron spectroscopy and electron scattering. Two-photon exchange (TPE) contributions are the largest source of hadronic uncertainty in these experiments. We compare the existing models of the elastic contribution to the TPE correction in lepton-proton scattering. A subtracted dispersion relation formalism for the TPE in electron-proton scattering has been developed and tested; its relative effect on the cross section is in the 1-2% range for low values of the momentum transfer. An alternative dispersive evaluation of the TPE correction to the hydrogen hyperfine splitting was found and applied. For the inelastic TPE contribution, the low momentum transfer expansion was studied; together with the elastic TPE it describes the experimental TPE fit to electron data quite well. For the forthcoming muon-proton scattering experiment (MUSE) the resulting TPE was found to be in the 0.5-1% range, which is the planned accuracy goal.
NASA Astrophysics Data System (ADS)
Kassinopoulos, Michalis; Pitris, Costas
2016-03-01
The modulations appearing on the backscattering spectrum originating from a scatterer are related to its diameter as described by Mie theory for spherical particles. Many metrics for Spectroscopic Optical Coherence Tomography (SOCT) take advantage of this observation in order to enhance the contrast of Optical Coherence Tomography (OCT) images. However, none of these metrics has achieved high accuracy when calculating the scatterer size. In this work, Mie theory was used to further investigate the relationship between the degree of modulation in the spectrum and the scatterer size. From this study, a new spectroscopic metric, the bandwidth of the Correlation of the Derivative (COD) was developed which is more robust and accurate, compared to previously reported techniques, in the estimation of scatterer size. The self-normalizing nature of the derivative and the robustness of the first minimum of the correlation as a measure of its width, offer significant advantages over other spectral analysis approaches especially for scatterer sizes above 3 μm. The feasibility of this technique was demonstrated using phantom samples containing 6, 10 and 16 μm diameter microspheres as well as images of normal and cancerous human colon. The results are very promising, suggesting that the proposed metric could be implemented in OCT spectral analysis for measuring nuclear size distribution in biological tissues. A technique providing such information would be of great clinical significance since it would allow the detection of nuclear enlargement at the earliest stages of precancerous development.
Chavez, P.S., Jr.
1988-01-01
Digital analysis of remotely sensed data has become an important component of many earth-science studies. These data are often processed through a set of preprocessing or "clean-up" routines that includes a correction for atmospheric scattering, often called haze. Various methods to correct or remove the additive haze component have been developed, including the widely used dark-object subtraction technique. A problem with most of these methods is that the haze values for each spectral band are selected independently. This can create problems because atmospheric scattering is highly wavelength-dependent in the visible part of the electromagnetic spectrum and the scattering values are correlated with each other. Therefore, multispectral data such as from the Landsat Thematic Mapper and Multispectral Scanner must be corrected with haze values that are spectral band dependent. An improved dark-object subtraction technique is demonstrated that allows the user to select a relative atmospheric scattering model to predict the haze values for all the spectral bands from a selected starting band haze value. The improved method normalizes the predicted haze values for the different gain and offset parameters used by the imaging system. Examples of haze value differences between the old and improved methods for Thematic Mapper Bands 1, 2, 3, 4, 5, and 7 are 40.0, 13.0, 12.0, 8.0, 5.0, and 2.0 vs. 40.0, 13.2, 8.9, 4.9, 16.7, and 3.3, respectively, using a relative scattering model of a clear atmosphere. In one Landsat multispectral scanner image the haze value differences for Bands 4, 5, 6, and 7 were 30.0, 50.0, 50.0, and 40.0 for the old method vs. 30.0, 34.4, 43.6, and 6.4 for the new method using a relative scattering model of a hazy atmosphere. © 1988.
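The improved dark-object subtraction described above predicts haze for every band from one starting-band value via a relative scattering model, normalized for sensor gain and offset. A rough sketch, assuming a power-law λ^(-n) relative scattering model; the band wavelengths, gains and offsets used in the example are illustrative, not the paper's calibration values:

```python
def predict_haze(start_dn, start_band, wavelengths_um, exponent, gains, offsets):
    """Predict per-band haze DN values from one starting-band haze value using
    a relative lambda**(-n) scattering model, normalized for gain and offset."""
    lam0 = wavelengths_um[start_band]
    rad0 = (start_dn - offsets[start_band]) / gains[start_band]  # DN -> radiance
    haze = {}
    for band, lam in wavelengths_um.items():
        rad = rad0 * (lam / lam0) ** (-exponent)  # relative scattering model
        haze[band] = rad * gains[band] + offsets[band]  # radiance -> DN
    return haze
```

With a clear-atmosphere exponent (steep wavelength dependence), longer-wavelength bands receive much smaller predicted haze values than the starting blue band, which is the qualitative behavior reported above.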
A simple scatter correction method for dual energy contrast-enhanced digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Lu, Yihuan; Lau, Beverly; Hu, Yue-Houng; Zhao, Wei; Gindi, Gene
2014-03-01
Dual-Energy Contrast-Enhanced Digital Breast Tomosynthesis (DE-CE-DBT) has the potential to deliver diagnostic information for vascularized breast pathology beyond that available from screening DBT. DE-CE-DBT involves a contrast (iodine) injection followed by low-energy (LE) and high-energy (HE) acquisitions. These undergo weighted subtraction and then a reconstruction that ideally shows only the iodinated signal. Scatter in the projection data leads to "cupping" artifacts that can reduce the visibility and quantitative accuracy of the iodinated signal. The use of filtered backprojection (FBP) reconstruction ameliorates these artifacts, but the use of FBP precludes the advantages of iterative reconstructions. This motivates an effective and clinically practical scatter correction (SC) method for the projection data. We propose a simple SC method, applied at each acquisition angle. It uses scatter-only data at the edge of the image to interpolate a scatter estimate within the breast region. The interpolation has an approximately correct spatial profile but is quantitatively inaccurate. We further correct the interpolated scatter data with the aid of easily obtainable knowledge of the SPR (scatter-to-primary ratio) at a single reference point. We validated the SC method using a CIRS breast phantom with iodine inserts. We evaluated its efficacy in terms of SDNR and iodine quantitative accuracy. We also applied our SC method to a patient DE-CE-DBT study and showed that the SC allowed detection of a previously confirmed tumor at the edge of the breast. The SC method is quick to use and may be useful in a clinical setting.
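In one dimension, the edge-interpolation-plus-SPR idea might look like this (a sketch under assumptions: linear interpolation of the scatter-only edge signal and a single-point SPR rescaling; the function and variable names are invented):

```python
import numpy as np

def scatter_correct(projection, breast_mask, ref_idx, spr_ref):
    """Sketch of an edge-interpolation scatter correction for one detector row.

    Assumptions: outside the breast shadow (breast_mask False) the detector
    records scatter only; interpolating that edge signal across the breast
    gives the scatter shape, which is rescaled so that scatter/primary at a
    reference pixel ref_idx equals the known SPR spr_ref."""
    projection = np.asarray(projection, dtype=float)
    x = np.arange(projection.size)
    edge = ~np.asarray(breast_mask)
    # interpolate the scatter estimate under the breast from the edge signal
    s = np.interp(x, x[edge], projection[edge])
    # at the reference pixel, scatter = SPR/(1+SPR) * total recorded signal
    target = projection[ref_idx] * spr_ref / (1.0 + spr_ref)
    s *= target / s[ref_idx]
    return projection - s
```

The rescaling step is what fixes the "approximately correct spatial profile but quantitatively inaccurate" interpolation mentioned in the abstract.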
Germer, Thomas A
2016-09-01
We consider the effect of volume diffusion on measurements of the bidirectional scattering distribution function when a finite distance is used for the solid angle defining aperture. We derive expressions for correction factors that can be used when the reduced scattering coefficients and the index of refraction are known. When these quantities are not known, the expressions can be used to guide the assessment of measurement uncertainty. We find that some measurement geometries reduce the effect of volume diffusion compared to their reciprocal geometries. PMID:27607273
Wang, Siwei; Sun, Dongning; Dong, Yi; Xie, Weilin; Shi, Hongxiao; Yi, Lilin; Hu, Weisheng
2014-02-15
We have developed a radio-frequency local oscillator remote distribution system, which transfers a phase-stabilized 10.03 GHz signal over 100 km optical fiber. The phase noise of the remote signal caused by temperature and mechanical stress variations on the fiber is compensated by a high-precision phase-correction system, which is achieved using a single sideband modulator to transfer the phase correction from intermediate frequency to radio frequency, thus enabling accurate phase control of the 10 GHz signal. The residual phase noise of the remote 10.03 GHz signal is measured to be -70 dBc/Hz at 1 Hz offset, and long-term stability of less than 1×10⁻¹⁶ at 10,000 s averaging time is achieved. Phase error is less than ±0.03π. PMID:24562233
First Order QED Corrections to the Parity-Violating Asymmetry in Moller Scattering
Zykunov, Vladimir A.; Suarez, Juan; Tweedie, Brock A.; Kolomensky, Yury G.; /UC, Berkeley
2005-08-15
We compute a full set of the first order QED corrections to the parity-violating observables in polarized Møller scattering. We employ a covariant method of removing infrared divergences, computing corrections without introducing any unphysical parameters. When applied to the kinematics of the SLAC E158 experiment, the QED corrections reduce the parity-violating asymmetry by 4.5%. We combine our results with the previous calculations of the first-order electroweak corrections and obtain the complete O({alpha}) prescription for relating the experimental asymmetry A{sub LR} to the low-energy value of the weak mixing angle sin{sup 2} {theta}{sub W}. Our results are applicable to the recent measurement of A{sub LR} by the SLAC E158 collaboration, as well as to future parity violation experiments.
NASA Astrophysics Data System (ADS)
Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki
2016-03-01
Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, and other fields. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome. So the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and
Weak charge of the proton: loop corrections to parity-violating electron scattering
Wally Melnitchouk
2011-05-01
I review the role of two-boson exchange corrections to parity-violating elastic electron–proton scattering. Direct calculations of contributions from nucleon and Delta intermediate states show generally small, O(1–2%), effects over the range of kinematics relevant for proton strangeness form factor measurements. For the forward angle Qweak experiment at Jefferson Lab, which aims to measure the weak charge of the proton, corrections from the gammaZ box diagram are computed within a dispersive approach and found to be sizable at the E~1 GeV energy scale of the experiment.
Lowest order QED radiative corrections to longitudinally polarized Møller scattering
NASA Astrophysics Data System (ADS)
Ilyichev, A.; Zykunov, V.
2005-08-01
The total lowest-order electromagnetic radiative corrections to the observables in Møller scattering of longitudinally polarized electrons have been calculated. The final expressions, obtained by the covariant method for the infrared divergence cancellation, are free from any unphysical cut-off parameters. Since the calculation is carried out within the ultrarelativistic approximation, our result has a compact form that is convenient for computing. Based on these expressions, the FORTRAN code MERA has been developed. Using this code, a detailed numerical analysis performed under SLAC (E-158) and JLab kinematic conditions has shown that the radiative corrections are significant and rather sensitive to the value of the missing mass (inelasticity) cuts.
Andrei Afanasev; Igor Akushevich; Nikolai Merenkov
2004-03-01
The electron structure function method is applied to calculate model-independent radiative corrections to an asymmetry of electron-proton scattering. The representations for both spin-independent and spin-dependent parts of the cross-section are derived. Master formulae take into account the leading corrections in all orders and the main contribution of the second order next-to-leading ones and have accuracy at the level of one per mille. Numerical calculations illustrate our analytical results for both elastic and deep inelastic events.
QED radiative corrections to low-energy Møller and Bhabha scattering
NASA Astrophysics Data System (ADS)
Epstein, Charles S.; Milner, Richard G.
2016-08-01
We present a treatment of the next-to-leading-order radiative corrections to unpolarized Møller and Bhabha scattering without resorting to ultrarelativistic approximations. We extend existing soft-photon radiative corrections with new hard-photon bremsstrahlung calculations so that the effect of photon emission is taken into account for any photon energy. This formulation is intended for application in the OLYMPUS experiment and the upcoming DarkLight experiment but is applicable to a broad range of experiments at energies where QED is a sufficient description.
NASA Technical Reports Server (NTRS)
Flesia, C.; Schwendimann, P.
1992-01-01
The contribution of multiple scattering to the lidar signal depends on the optical depth tau. Therefore, the lidar analysis, based on the assumption that multiple scattering can be neglected, is limited to cases characterized by low values of the optical depth (tau less than or equal to 0.1) and hence excludes scattering from most clouds. Moreover, all inversion methods relating the lidar signal to number densities and particle sizes must be modified, since multiple scattering affects the direct analysis. The essential requirements of a realistic model for lidar measurements that includes multiple scattering and can be applied to practical situations are as follows. (1) What is required is not merely a correction term or a rough approximation describing the results of a particular experiment, but a general theory of multiple scattering tying together the relevant physical parameters we seek to measure. (2) An analytical generalization of the lidar equation that can be applied in the case of a realistic aerosol is required. A purely analytical formulation is important in order to avoid the convergence and stability problems which, in the case of a numerical approach, are due to the large number of events that have to be taken into account in the presence of large optical depth and/or strong experimental noise.
Development of Filtered Rayleigh Scattering for Accurate Measurement of Gas Velocity
NASA Technical Reports Server (NTRS)
Miles, Richard B.; Lempert, Walter R.
1995-01-01
The overall goals of this research were to develop new diagnostic tools capable of capturing unsteady and/or time-evolving, high-speed flow phenomena. The program centers around the development of Filtered Rayleigh Scattering (FRS) for velocity, temperature, and density measurement, and the construction of narrow linewidth laser sources which will be capable of producing an order MHz repetition rate 'burst' of high power pulses.
NASA Astrophysics Data System (ADS)
Roberts, B. M.; Dzuba, V. A.; Flambaum, V. V.; Pospelov, M.; Stadnik, Y. V.
2016-06-01
We revisit the WIMP-type dark matter scattering on electrons that results in atomic ionization and can manifest itself in a variety of existing direct-detection experiments. Unlike the WIMP-nucleon scattering, where current experiments probe typical interaction strengths much smaller than the Fermi constant, the scattering on electrons requires a much stronger interaction to be detectable, which in turn requires new light force carriers. We account for such new forces explicitly, by introducing a mediator particle with scalar or vector couplings to dark matter and to electrons. We then perform state-of-the-art numerical calculations of atomic ionization relevant to the existing experiments. Our goals are to consistently take into account the atomic physics aspect of the problem (e.g., the relativistic effects, which can be quite significant) and to scan the parameter space—the dark matter mass, the mediator mass, and the effective coupling strength—to see if there is any part of the parameter space that could potentially explain the DAMA modulation signal. While we find that the modulation fraction of all events with energy deposition above 2 keV in NaI can be quite significant, reaching ~50%, the relevant parts of the parameter space are excluded by the XENON10 and XENON100 experiments.
Fujita, Masahiro; Varrone, Andrea; Kim, Kyeong Min; Watabe, Hiroshi; Zoghbi, Sami S; Seneca, Nicholas; Tipre, Dnyanesh; Seibyl, John P; Innis, Robert B; Iida, Hidehiro
2004-05-01
Prior studies with anthropomorphic phantoms and single, static in vivo brain images have demonstrated that scatter correction significantly improves the accuracy of regional quantitation of single-photon emission tomography (SPET) brain images. Since the regional distribution of activity changes following a bolus injection of a typical neuroreceptor ligand, we examined the effect of scatter correction on the compartmental modeling of serial dynamic images of striatal and extrastriatal dopamine D(2) receptors using [(123)I]epidepride. Eight healthy human subjects [age 30+/-8 (range 22-46) years] participated in a study with a bolus injection of 373+/-12 (354-389) MBq [(123)I]epidepride and data acquisition over a period of 14 h. A transmission scan was obtained in each study for attenuation and scatter correction. Distribution volumes were calculated by means of compartmental nonlinear least-squares analysis using the metabolite-corrected arterial input function and brain data processed with scatter correction using the narrow-beam geometry μ (SC) and without scatter correction using the broad-beam μ (NoSC). Effects of SC were markedly different among brain regions. SC increased activities in the putamen and thalamus after 1-1.5 h while it decreased activity during the entire experiment in the temporal cortex and cerebellum. Compared with NoSC, SC significantly increased the specific distribution volume in the putamen (58%, P=0.0001) and thalamus (23%, P=0.0297). Compared with NoSC, SC made the regional distribution of the specific distribution volume closer to that of [(18)F]fallypride. It is concluded that SC is required for accurate quantification of distribution volumes of receptor ligands in SPET studies. PMID:14730406
Biophotonics of skin: method for correction of deep Raman spectra distorted by elastic scattering
NASA Astrophysics Data System (ADS)
Roig, Blandine; Koenig, Anne; Perraut, François; Piot, Olivier; Gobinet, Cyril; Manfait, Michel; Dinten, Jean-Marc
2015-03-01
Confocal Raman microspectroscopy allows in-depth molecular and conformational characterization of biological tissues non-invasively. Unfortunately, spectral distortions occur due to elastic scattering. Our objective is to correct the attenuation of in-depth Raman peak intensities by accounting for this phenomenon, thus enabling quantitative diagnosis. For this purpose, we developed PDMS phantoms mimicking skin optical properties, used as tools for instrument calibration and validation of the data processing method. An optical system based on a fiber bundle was previously developed for in vivo skin characterization with Diffuse Reflectance Spectroscopy (DRS). Used on our phantoms, this technique allows checking their optical properties: the targeted values were retrieved. Raman microspectroscopy was performed using a commercial confocal microscope. Depth profiles were constructed from the integrated intensity of some specific PDMS Raman vibrations. Acquired on monolayer phantoms, they display a decline that increases with the scattering coefficient. Furthermore, when acquiring Raman spectra on multilayered phantoms, the signal attenuation through each single layer depends directly on that layer's own scattering property. Therefore, determining the optical properties of any biological sample, obtained with DRS for example, is crucial to properly correct Raman depth profiles. A model, inspired by S. L. Jacques's expression for confocal reflectance microscopy and modified at some points, is proposed and tested to fit the depth profiles obtained on the phantoms as a function of the reduced scattering coefficient. Consequently, once the optical properties of a biological sample are known, the intensity of deep Raman spectra distorted by elastic scattering can be corrected with our reliable model, permitting quantitative studies for purposes of characterization or diagnosis.
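As a toy illustration of the kind of correction the abstract describes (not the authors' actual model), one can divide the measured depth profile by an assumed round-trip attenuation factor exp(-2*mu_eff*z):

```python
import numpy as np

def correct_depth_profile(z, measured_intensity, mu_eff):
    """Toy attenuation correction for confocal Raman depth profiles.

    Hypothetical assumption: elastic scattering attenuates the signal from
    depth z roughly as exp(-2*mu_eff*z) for the round trip, with mu_eff an
    effective coefficient derived from the sample's optical properties
    (e.g., measured by DRS). Dividing by that factor, i.e. multiplying by
    its inverse, recovers the undistorted profile."""
    z = np.asarray(z, dtype=float)
    return np.asarray(measured_intensity, dtype=float) * np.exp(2.0 * mu_eff * z)
```

A more faithful implementation would substitute the fitted Jacques-type model from the paper for the simple exponential used here.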
NASA Astrophysics Data System (ADS)
Phillips, C. B.; Valenti, M.
2009-12-01
Jupiter's moon Europa likely possesses an ocean of liquid water beneath its icy surface, but estimates of the thickness of the surface ice shell vary from a few kilometers to tens of kilometers. Color images of Europa reveal the existence of a reddish, non-ice component associated with a variety of geological features. The composition and origin of this material is uncertain, as is its relationship to Europa's various landforms. Published analyses of Galileo Near Infrared Mapping Spectrometer (NIMS) observations indicate the presence of highly hydrated sulfate compounds. This non-ice material may also bear biosignatures or other signs of biotic material. Additional spectral information from the Galileo Solid State Imager (SSI) could further elucidate the nature of the surface deposits, particularly when combined with information from the NIMS. However, little effort has been focused on this approach because proper calibration of the color image data is challenging, requiring both skill and patience to process the data and incorporate the appropriate scattered light correction. We are currently working to properly calibrate the color SSI data. The most important and most difficult issue to address in the analysis of multispectral SSI data entails using thorough calibrations and a correction for scattered light. Early in the Galileo mission, studies of the Galileo SSI data for the moon revealed discrepancies of up to 10% in relative reflectance between images containing scattered light and images corrected for scattered light. Scattered light adds a wavelength-dependent low-intensity brightness factor to pixels across an image. For example, a large bright geological feature located just outside the field of view of an image will scatter extra light onto neighboring pixels within the field of view. Scattered light can be seen as a dim halo surrounding an image that includes a bright limb, and can also come from light scattered inside the camera by dirt, edges, and the
Two-photon exchange correction to muon-proton elastic scattering at low momentum transfer
NASA Astrophysics Data System (ADS)
Tomalak, Oleksandr; Vanderhaeghen, Marc
2016-03-01
We evaluate the two-photon exchange (TPE) correction to the muon-proton elastic scattering at small momentum transfer. Besides the elastic (nucleon) intermediate state contribution, which is calculated exactly, we account for the inelastic intermediate states by expressing the TPE process approximately through the forward doubly virtual Compton scattering. The input in our evaluation is given by the unpolarized proton structure functions and by one subtraction function. For the latter, we provide an explicit evaluation based on a Regge fit of high-energy proton structure function data. It is found that, for the kinematics of the forthcoming muon-proton elastic scattering data of the MUSE experiment, the elastic TPE contribution dominates, and the size of the inelastic TPE contributions is within the anticipated error of the forthcoming data.
Effective-range corrections to three-body recombination for atoms with large scattering length
Hammer, H.-W.; Laehde, Timo A.; Platter, L.
2007-03-15
Few-body systems with large scattering length a have universal properties that do not depend on the details of their interactions at short distances. The rate constant for three-body recombination of bosonic atoms of mass m into a shallow dimer scales as ℏa{sup 4}/m times a log-periodic function of the scattering length. We calculate the leading and subleading corrections to the rate constant, which are due to the effective range of the atoms, and study the correlation between the rate constant and the atom-dimer scattering length. Our results are applied to {sup 4}He atoms as a test case.
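For reference, the universal scaling quoted above can be written out explicitly (the log-periodic discrete scaling and the value of the Efimov parameter s{sub 0} are standard results of universality theory, supplied here for context rather than taken from the abstract):

```latex
\alpha_{\mathrm{rec}} \;=\; C(a)\,\frac{\hbar a^{4}}{m},
\qquad C(a) \;=\; C\!\left(e^{\pi/s_0}\, a\right),
\qquad s_0 \approx 1.00624,
```

so the prefactor C(a) repeats each time the scattering length grows by the discrete scaling factor e{sup π/s{sub 0}} ≈ 22.7.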
Truncation correction for VOI C-arm CT using scattered radiation
NASA Astrophysics Data System (ADS)
Bier, Bastian; Maier, Andreas; Hofmann, Hannes G.; Schwemmer, Chris; Xia, Yan; Struffert, Tobias; Hornegger, Joachim
2013-03-01
In C-arm computed tomography, patient dose reduction by volume-of-interest (VOI) imaging is of increasing interest for many clinical applications. A remaining limitation of VOI imaging is the truncation artifact when reconstructing a 3D volume. It can manifest either as cupping towards the boundaries of the field-of-view (FOV) or as an incorrect offset in the Hounsfield values of the reconstructed voxels. In this paper, we present a new method for the correction of truncation artifacts in a collimated scan. When axial or lateral collimation is applied, scattered radiation still reaches the detector and is recorded outside of the FOV. If the full area of the detector is read out, we can use this scattered signal to estimate the truncated part of the object. We apply three processing steps: detection of the collimator edge, adjustment of the area outside the FOV, and interpolation across the collimator edge. Compared to heuristic truncation correction methods, we were able to reconstruct high-contrast structures like bones outside of the FOV. Inside the FOV we achieved similar reconstruction results as with water cylinder truncation correction. These preliminary results indicate that scattered radiation outside the FOV can be used to improve image quality, and further research in this direction seems beneficial.
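For a single detector row, the three processing steps could be sketched as follows (a hypothetical simplification: the outside signal is rescaled by a single ratio so it becomes continuous at the collimator edge; real edge detection and blending would be more elaborate):

```python
import numpy as np

def extend_with_scatter(row, fov):
    """Sketch of the three steps for one detector row: (1) locate the
    collimator edges from the FOV mask, (2) rescale the scatter signal
    recorded outside so it matches the level just inside each edge, and
    (3) thereby make the row continuous across the edge. Scaling the whole
    outside region by one edge ratio is an assumption of this sketch."""
    row = np.asarray(row, dtype=float)
    fov = np.asarray(fov, dtype=bool)
    left = fov.argmax()                          # first pixel inside the FOV
    right = len(fov) - fov[::-1].argmax() - 1    # last pixel inside the FOV
    out = row.copy()
    if left > 0:
        out[:left] *= row[left] / row[left - 1]      # match level at left edge
    if right < len(row) - 1:
        out[right + 1:] *= row[right] / row[right + 1]  # match level at right edge
    return out
```

The extended row could then be fed to a standard reconstruction in place of heuristic water-cylinder extrapolation.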
Compartment modeling of dynamic brain PET—The impact of scatter corrections on parameter errors
Häggström, Ida; Karlsson, Mikael; Larsson, Anne; Schmidtlein, C. Ross
2014-11-01
Purpose: The aim of this study was to investigate the effect of scatter and its correction on kinetic parameters in dynamic brain positron emission tomography (PET) tumor imaging. The 2-tissue compartment model was used, and two different reconstruction methods and two scatter correction (SC) schemes were investigated. Methods: The GATE Monte Carlo (MC) software was used to perform 2 × 15 full PET scan simulations of a voxelized head phantom with inserted tumor regions. The two sets of kinetic parameters of all tissues were chosen to represent the 2-tissue compartment model for the tracer 3′-deoxy-3′-({sup 18}F)fluorothymidine (FLT), and were denoted FLT{sub 1} and FLT{sub 2}. PET data were reconstructed with both 3D filtered back-projection with reprojection (3DRP) and 3D ordered-subset expectation maximization (OSEM). Images including true coincidences with attenuation correction (AC) and true+scattered coincidences with AC and with and without one of two applied SC schemes were reconstructed. Kinetic parameters were estimated by weighted nonlinear least squares fitting of image derived time–activity curves. Calculated parameters were compared to the true input to the MC simulations. Results: The relative parameter biases for scatter-eliminated data were 15%, 16%, 4%, 30%, 9%, and 7% (FLT{sub 1}) and 13%, 6%, 1%, 46%, 12%, and 8% (FLT{sub 2}) for K{sub 1}, k{sub 2}, k{sub 3}, k{sub 4}, V{sub a}, and K{sub i}, respectively. As expected, SC was essential for most parameters since omitting it increased biases by 10 percentage points on average. SC was not found necessary for the estimation of K{sub i} and k{sub 3}, however. There was no significant difference in parameter biases between the two investigated SC schemes or from parameter biases from scatter-eliminated PET data. Furthermore, neither 3DRP nor OSEM yielded the smallest parameter biases consistently although there was a slight favor for 3DRP which produced less biased k{sub 3} and K{sub i
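For reference, a minimal sketch of the 2-tissue compartment model and the net influx rate K{sub i} (these are standard definitions; the Euler integration and the function names are illustrative, not from the paper):

```python
import numpy as np

def two_tissue_model(t, cp, K1, k2, k3, k4, va=0.0):
    """Standard 2-tissue compartment model on a uniform time grid t:
        C1' = K1*Cp - (k2 + k3)*C1 + k4*C2
        C2' = k3*C1 - k4*C2
    integrated with simple Euler steps (illustrative only); the measured
    PET signal is modeled as (1 - va)*(C1 + C2) + va*Cp."""
    dt = t[1] - t[0]
    c1 = np.zeros_like(cp)
    c2 = np.zeros_like(cp)
    for i in range(1, t.size):
        dc1 = K1 * cp[i - 1] - (k2 + k3) * c1[i - 1] + k4 * c2[i - 1]
        dc2 = k3 * c1[i - 1] - k4 * c2[i - 1]
        c1[i] = c1[i - 1] + dt * dc1
        c2[i] = c2[i - 1] + dt * dc2
    return (1.0 - va) * (c1 + c2) + va * cp

def net_influx_rate(K1, k2, k3):
    # Ki = K1*k3/(k2 + k3), the net influx rate quoted in the abstract
    return K1 * k3 / (k2 + k3)
```

Fitting K{sub 1}, k{sub 2}, k{sub 3}, k{sub 4}, and V{sub a} to image-derived time-activity curves, as the study does, would wrap this forward model in a weighted nonlinear least-squares optimizer.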
NASA Astrophysics Data System (ADS)
Yang, Kai; Burkett, George, Jr.; Boone, John M.
2012-03-01
X-ray scatter is a common cause of image artifacts for cone-beam CT systems due to the expanded field of view and degrades the quantitative accuracy of measured Hounsfield Units (HU). Due to the strong dependency of scatter on the object being scanned, it is crucial to measure the scatter signal for each object. We propose to use a beam pass array (BPA) composed of parallel holes within a tungsten plate to measure scatter for a dedicated breast CT system. A complete study of the performance of the BPA was conducted. The goal of this study was to explore the feasibility of measuring and compensating for the scatter signal for each individual object. Different clinical study schemes were investigated, including a full rotation scan with BPA and discrete projections acquired with BPA followed by interpolation for the full rotation. Different sized cylindrical phantoms and a breast shaped polyethylene phantom were used to test the robustness of the proposed method. Physically measured scatter signals were converted into scatter-to-primary ratios (SPRs) at discrete locations through the projection image. A complete noise-free 2D SPR was generated from these discrete measurements. SPR results were compared to Monte Carlo simulation results, and scatter corrected CT images were quantitatively evaluated for "cupping" artifact. With the proposed method, a reduction of up to 47 HU of "cupping" was demonstrated. In conclusion, the proposed BPA method demonstrated effective and accurate object-specific scatter correction with the main advantage of dose-sparing compared to beam stop array (BSA) approaches.
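The essence of an SPR-based correction can be sketched in a few lines (assumed relation: total T = P*(1+SPR), so the primary is P = T/(1+SPR); interpolating the discrete BPA measurements into a full map follows the abstract's description, but the implementation details are invented):

```python
import numpy as np

def bpa_scatter_correct(projection, hole_positions, spr_at_holes):
    """Recover the primary signal from a 1D projection row using SPRs
    measured at discrete beam-pass-array hole positions.

    Because scatter is spatially low-frequency, the discrete SPR samples are
    interpolated into a smooth map; the primary is then P = T / (1 + SPR)."""
    projection = np.asarray(projection, dtype=float)
    x = np.arange(projection.size)
    spr = np.interp(x, hole_positions, spr_at_holes)  # fill SPR between holes
    return projection / (1.0 + spr)
```

In 2D the same idea applies with a bivariate interpolation of the hole-grid SPR measurements.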
A model for the accurate computation of the lateral scattering of protons in water.
Bellinzona, E V; Ciocca, M; Embriaco, A; Ferrari, A; Fontana, A; Mairani, A; Parodi, K; Rotondi, A; Sala, P; Tessonnier, T
2016-02-21
A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted on MC data calculated with FLUKA. The model, after the convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy of the MC codes based on Molière theory, with a much shorter computing time. PMID:26808380
More accurate X-ray scattering data of deeply supercooled bulk liquid water
Neuefeind, Joerg C; Benmore, Chris J; Weber, Richard; Paschek, Dietmar
2011-01-01
Deeply supercooled water droplets held container-less in an acoustic levitator are investigated with high-energy X-ray scattering. The temperature dependence of the X-ray structure function is found to be non-linear. Comparison with two popular computer models reveals that the structural changes are predicted too abruptly by the TIP5P model, while the rate of change predicted by TIP4P is in much better agreement with experiment. The abrupt structural changes predicted by the TIP5P model to occur in the temperature range 260-240 K, as water approaches the homogeneous nucleation limit, are unrealistic. Both models underestimate the distance between neighbouring oxygen atoms and overestimate the sharpness of the OO distance distribution, indicating that the strength of the H-bond is overestimated in these models.
X-ray scatter correction method for dedicated breast computed tomography
Sechopoulos, Ioannis
2012-05-15
Purpose: To improve image quality and accuracy in dedicated breast computed tomography (BCT) by removing the x-ray scatter signal included in the BCT projections. Methods: The previously characterized magnitude and distribution of x-ray scatter in BCT results in both cupping artifacts and reduction of contrast and accuracy in the reconstructions. In this study, an image processing method is proposed that estimates and subtracts the low-frequency x-ray scatter signal included in each BCT projection postacquisition and prereconstruction. The estimation of this signal is performed using simple additional hardware, one additional BCT projection acquisition with negligible radiation dose, and simple image processing software algorithms. The high frequency quantum noise due to the scatter signal is reduced using a noise filter postreconstruction. The dosimetric consequences and validity of the assumptions of this algorithm were determined using Monte Carlo simulations. The feasibility of this method was determined by imaging a breast phantom on a BCT clinical prototype and comparing the corrected reconstructions to the unprocessed reconstructions and to reconstructions obtained from fan-beam acquisitions as a reference standard. One-dimensional profiles of the reconstructions and objective image quality metrics were used to determine the impact of the algorithm. Results: The proposed additional acquisition results in negligible additional radiation dose to the imaged breast ({approx}0.4% of the standard BCT acquisition). The processed phantom reconstruction showed substantially reduced cupping artifacts, increased contrast between adipose and glandular tissue equivalents, higher voxel value accuracy, and no discernible blurring of high frequency features. Conclusions: The proposed scatter correction method for dedicated breast CT is feasible and can result in highly improved image quality. Further optimization and testing, especially with patient images, is necessary to
Peng, Xiangda; Zhang, Yuebin; Chu, Huiying; Li, Yan; Zhang, Dinglin; Cao, Liaoran; Li, Guohui
2016-06-14
Classical molecular dynamic (MD) simulation of membrane proteins faces significant challenges in accurately reproducing and predicting experimental observables such as ion conductance and permeability due to its incapability of precisely describing the electronic interactions in heterogeneous systems. In this work, the free energy profiles of K(+) and Na(+) permeating through the gramicidin A channel are characterized by using the AMOEBA polarizable force field with a total sampling time of 1 μs. Our results indicated that by explicitly introducing the multipole terms and polarization into the electrostatic potentials, the permeation free energy barrier of K(+) through the gA channel is considerably reduced compared to the overestimated results obtained from the fixed-charge model. Moreover, the estimated maximum conductance, without any corrections, for both K(+) and Na(+) passing through the gA channel are much closer to the experimental results than any classical MD simulations, demonstrating the power of AMOEBA in investigating the membrane proteins. PMID:27171823
Bubin, Sergiy; Stanke, Monika; Adamowicz, Ludwik
2011-08-21
In this work we report very accurate variational calculations of the complete pure vibrational spectrum of the D(2) molecule performed within the framework where the Born-Oppenheimer (BO) approximation is not assumed. After the elimination of the center-of-mass motion, D(2) becomes a three-particle problem in this framework. As the considered states correspond to the zero total angular momentum, their wave functions are expanded in terms of all-particle, one-center, spherically symmetric explicitly correlated Gaussian functions multiplied by even non-negative powers of the internuclear distance. The nonrelativistic energies of the states obtained in the non-BO calculations are corrected for the relativistic effects of the order of α(2) (where α = 1/c is the fine structure constant) calculated as expectation values of the operators representing these effects. PMID:21861559
Correction of radiation absorption on biological samples using Rayleigh to Compton scattering ratio
NASA Astrophysics Data System (ADS)
Pereira, Marcelo O.; Conti, Claudio de Carvalho; dos Anjos, Marcelino J.; Lopes, Ricardo T.
2012-06-01
The aim of this work was to develop a method to correct for radiation absorption (the mass attenuation coefficient curve) at low energy (E < 30 keV) in a biological matrix, based on the Rayleigh to Compton scattering ratio and the effective atomic number. For calibration, scattering measurements were performed on standard samples using radiation produced by a gamma-ray source of 241Am (59.54 keV), and also on certified biological samples of milk powder, hay powder and bovine liver (NIST 1557B). In addition, six methods of effective atomic number determination described in the literature were used to determine the Rayleigh to Compton scattering ratio (R/C) and, from it, the mass attenuation coefficient. The results obtained by the proposed method were compared with those obtained using the transmission method. The experimental results were in good agreement with the transmission values, suggesting that the radiation absorption correction method presented in this paper is adequate for biological samples.
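The calibration idea, reduced to a single variable, can be sketched as follows: fit a relation between the Rayleigh-to-Compton ratio and the effective atomic number from standard samples, then invert it for an unknown sample. The power-law form and all calibration numbers below are illustrative assumptions, not the paper's data:

```python
import math

# Illustrative sketch: calibrate an assumed power law R/C = a * Zeff**b from
# standard samples, then invert it for an unknown sample.

def fit_power_law(z_values, rc_values):
    """Least-squares fit of ln(R/C) = ln(a) + b*ln(Z)."""
    xs = [math.log(z) for z in z_values]
    ys = [math.log(rc) for rc in rc_values]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

def zeff_from_rc(rc, a, b):
    """Invert R/C = a * Z**b for the effective atomic number."""
    return (rc / a) ** (1.0 / b)

# Synthetic calibration points generated from a = 0.01, b = 2.5.
z_cal = [6.0, 7.5, 10.0, 13.0]
rc_cal = [0.01 * z ** 2.5 for z in z_cal]
a, b = fit_power_law(z_cal, rc_cal)
```

Because the synthetic calibration data follow the assumed power law exactly, the fit recovers the generating parameters, and an unknown sample's Zeff follows from its measured R/C.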
Wide angle Compton scattering on the proton: study of power suppressed corrections
NASA Astrophysics Data System (ADS)
Kivel, N.; Vanderhaeghen, M.
2015-10-01
We study the wide angle Compton scattering process on a proton within the soft-collinear effective theory (SCET) framework. The main purpose of this work is to estimate the effect due to certain power suppressed corrections. We consider all possible kinematical power corrections and also include the subleading amplitudes describing the scattering with nucleon helicity flip. Under certain assumptions we present a leading-order factorization formula for these amplitudes which includes the hard- and soft-spectator contributions. We apply the formalism and perform a phenomenological analysis of the cross section and asymmetries in the wide angle Compton scattering on a proton. We assume that in the relevant kinematical region where -t, -u > 2.5 GeV^2 the dominant contribution is provided by the soft-spectator mechanism. The hard coefficient functions of the corresponding SCET operators are taken in the leading-order approximation. The analysis of existing cross section data shows that the contribution of the helicity-flip amplitudes to this observable is quite small and comparable with other expected theoretical uncertainties. We also show predictions for double polarization observables for which experimental information exists.
Sivanesan, Arumugam; Adamkiewicz, Witold; Kalaivani, Govindasamy; Kamińska, Agnieszka; Waluk, Jacek; Hołyst, Robert; Izake, Emad L
2015-01-21
Correction for 'Towards improved precision in the quantification of surface-enhanced Raman scattering (SERS) enhancement factors: a renewed approach' by Arumugam Sivanesan et al., Analyst, 2015, DOI:10.1039/c4an01778a PMID:25453040
Implementation of an Analytical Raman Scattering Correction for Satellite Ocean-Color Processing
NASA Technical Reports Server (NTRS)
McKinna, Lachlan I. W.; Werdell, P. Jeremy; Proctor, Christopher W.
2016-01-01
Raman scattering of photons by seawater molecules is an inelastic scattering process. This effect can contribute significantly to the water-leaving radiance signal observed by space-borne ocean-color spectroradiometers. If not accounted for during ocean-color processing, Raman scattering can cause biases in derived inherent optical properties (IOPs). Here we describe a Raman scattering correction (RSC) algorithm that has been integrated within NASA's standard ocean-color processing software. We tested the RSC with NASA's Generalized Inherent Optical Properties algorithm (GIOP). A comparison between derived IOPs and in situ data revealed that the magnitude of the derived backscattering coefficient and the phytoplankton absorption coefficient were reduced when the RSC was applied, whilst the absorption coefficient of colored dissolved and detrital matter remained unchanged. Importantly, our results show that the RSC did not degrade the retrieval skill of the GIOP. In addition, a timeseries study of oligotrophic waters near Bermuda showed that the RSC did not introduce unwanted temporal trends or artifacts into derived IOPs.
γZ corrections to forward-angle parity-violating ep scattering
Alex Sibirtsev; Blunden, Peter G.; Melnitchouk, Wally; Thomas, Anthony W.
2010-07-30
We use dispersion relations to evaluate the γZ box contribution to parity-violating electron scattering in the forward limit, taking into account constraints from recent JLab data on electroproduction in the resonance region as well as high energy data from HERA. The correction to the asymmetry is found to be 1.2 +- 0.2% at the kinematics of the JLab Q_{weak} experiment, which is well within the limits required to achieve a 4% measurement of the weak charge of the proton.
Hadron mass corrections in semi-inclusive deep-inelastic scattering
Guerrero Teran, Juan Vicente; Ethier, James J.; Accardi, Alberto; Casper, Steven W.; Melnitchouk, Wally
2015-09-24
The spin-dependent cross sections for semi-inclusive lepton-nucleon scattering are derived in the framework of collinear factorization, including the effects of masses of the target and produced hadron at finite Q^2. At leading order the cross sections factorize into products of parton distribution and fragmentation functions evaluated in terms of new, mass-dependent scaling variables. Furthermore, the size of the hadron mass corrections is estimated at kinematics relevant for current and future experiments, and the implications for the extraction of parton distributions from semi-inclusive measurements are discussed.
γZ corrections to forward-angle parity-violating ep scattering
Sibirtsev, A.; Blunden, P. G.; Melnitchouk, W.; Thomas, A. W.
2010-07-01
We use dispersion relations to evaluate the γZ box contribution to parity-violating electron scattering in the forward limit arising from the axial-vector coupling at the electron vertex. The calculation makes full use of the critical constraints from recent JLab data on electroproduction in the resonance region as well as high-energy data from HERA. At the kinematics of the Q_weak experiment, this gives a correction of 0.0047 (+0.0011 / -0.0004) to the standard model value 0.0713(8) of the proton weak charge. While the magnitude of the correction is highly significant, the uncertainty is within the anticipated experimental uncertainty of ±0.003.
Gakh, G. I.; Konchatnij, M. I.; Merenkov, N. P.
2012-08-15
The model-independent QED radiative corrections to polarization observables in elastic scattering of unpolarized and longitudinally polarized electron beams by a deuteron target are calculated in leptonic variables. The experimental setup when the deuteron target is arbitrarily polarized is considered and the procedure for applying the derived results to the vector or tensor polarization of the recoil deuteron is discussed. The calculation is based on taking all essential Feynman diagrams into account, which results in the form of the Drell-Yan representation for the cross section, and the use of the covariant parameterization of the deuteron polarization state. Numerical estimates of the radiative corrections are given in the case where event selection allows undetected particles (photons and electron-positron pairs) and the restriction on the lost invariant mass is used.
NASA Astrophysics Data System (ADS)
Kim, Ye-seul; Park, Hye-Suk; Kim, Hee-Joung; Choi, Young-Wook; Choi, Jae-Gu
2014-12-01
Digital breast tomosynthesis (DBT) is a technique that was developed to overcome the limitations of conventional digital mammography by reconstructing slices through the breast from projections acquired at different angles. In developing and optimizing DBT, the x-ray scatter reduction technique remains a significant challenge due to projection geometry and radiation dose limitations. The most common approach to scatter reduction is a beam-stop-array (BSA) algorithm; however, this method raises concerns regarding the additional exposure involved in acquiring the scatter distribution. The compressed breast is roughly symmetric, and the scatter profiles from projections acquired at axially opposite angles are similar to mirror images. The purpose of this study was to apply the BSA algorithm using only two scans with a beam-stop array, thereby estimating the scatter distribution with minimal additional exposure. The results of the scatter correction with angular interpolation were comparable to those of the scatter correction with all scatter distributions at each angle. The exposure increase was less than 13%. This study demonstrated the influence of the scatter correction obtained by using the BSA algorithm with minimum exposure, which indicates its potential for practical applications.
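The mirror-symmetry assumption behind the two-scan BSA approach can be sketched as follows. The left-right flip is an illustrative stand-in for mirroring the scatter profile between axially opposite projection angles, and all profile values are toy numbers:

```python
# Hypothetical sketch: a scatter profile measured with the beam-stop array at
# +theta is approximated at -theta by mirroring, so only a few beam-stop
# scans are needed; intermediate angles are filled in by interpolation.

def mirror_scatter(profile):
    """Scatter profile at -theta approximated as the mirror image at +theta."""
    return profile[::-1]

def interpolate_scatter(s_lo, s_hi, frac):
    """Linear angular interpolation between two scatter profiles."""
    return [(1.0 - frac) * a + frac * b for a, b in zip(s_lo, s_hi)]

measured_plus = [1.0, 2.0, 4.0, 2.5]        # beam-stop scan at +theta
estimated_minus = mirror_scatter(measured_plus)
midway = interpolate_scatter(measured_plus, estimated_minus, 0.5)
```

The interpolated profile at an intermediate angle is simply the weighted average of the two bracketing profiles, which is what makes the two-scan acquisition sufficient under the symmetry assumption.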
An eigenvalue correction due to scattering by a rough wall of an acoustic waveguide.
Krynkin, Anton; Horoshenkov, Kirill V; Tait, Simon J
2013-08-01
In this paper a derivation of the attenuation factor in a waveguide with stochastic walls is presented. The perturbation method and Fourier analysis are employed to derive asymptotically consistent boundary-value problems at each asymptotic order. The derived approximation predicts the attenuation of the propagating mode in a rough waveguide through a correction to the eigenvalue corresponding to smooth walls. The proposed approach can be used to derive results that are consistent with those obtained by Bass et al. [IEEE Trans. Antennas Propag. 22, 278-288 (1974)]. The novelty of the method is that it does not involve the integral Dyson-type equation and, as a result, the large number of statistical moments included in the equation in the form of the mass operator of the volume scattering theory. The derived eigenvalue correction is described by the correlation function of the randomly rough surface. The averaged solution in the plane wave regime is approximated by the exponential function dependent on the derived eigenvalue correction. The approximations are compared with numerical results obtained using the finite element method (FEM). An approach to retrieve the correct deviation in roughness height and correlation length from multiple numerical realizations of the stochastic surface is proposed to account for the oversampling of the rough surface occurring in the FEM meshing procedure. PMID:23927093
A scatter correction method for contrast-enhanced dual-energy digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Lu, Yihuan; Peng, Boyu; Lau, Beverly A.; Hu, Yue-Houng; Scaduto, David A.; Zhao, Wei; Gindi, Gene
2015-08-01
Contrast-enhanced dual energy digital breast tomosynthesis (CE-DE-DBT) is designed to image iodinated masses while suppressing breast anatomical background. Scatter is a problem, especially for high energy acquisition, in that it causes severe cupping artifact and iodine quantitation errors. We propose a patient specific scatter correction (SC) algorithm for CE-DE-DBT. The empirical algorithm works by interpolating scatter data outside the breast shadow into an estimate within the breast shadow. The interpolated estimate is further improved by operations that use an easily obtainable (from phantoms) table of scatter-to-primary-ratios (SPR)—a single SPR value for each breast thickness and acquisition angle. We validated our SC algorithm for two breast emulating phantoms by comparing SPR from our SC algorithm to that measured using a beam-passing pinhole array plate. The error in our SC computed SPR, averaged over acquisition angle and image location, was about 5%, with slightly worse errors for thicker phantoms. The SC projection data, reconstructed using OS-SART, showed a large degree of decupping. We also observed that SC removed the dependence of iodine quantitation on phantom thickness. We applied the SC algorithm to a CE-DE-mammographic patient image with a biopsy confirmed tumor at the breast periphery. In the image without SC, the contrast enhanced tumor was masked by the cupping artifact. With our SC, the tumor was easily visible. An interpolation-based SC was proposed by (Siewerdsen et al 2006 Med. Phys. 33 187-97) for cone-beam CT (CBCT), but our algorithm and application differ in several respects. Other relevant SC techniques include Monte-Carlo and convolution-based methods for CBCT, storage of a precomputed library of scatter maps for DBT, and patient acquisition with a beam-passing pinhole array for breast CT. Our SC algorithm can be accomplished in clinically acceptable times, requires no additional imaging hardware or extra patient dose and is
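The interpolate-then-rescale idea can be sketched in one dimension. This is a hypothetical reduction of the algorithm: scatter sampled at the edges of the breast shadow is linearly bridged across the shadow, then scaled so the mean scatter-to-primary ratio matches a tabulated SPR value; all signals and the SPR value are invented:

```python
# Illustrative sketch of an interpolation-based scatter estimate with an
# SPR-table rescaling step. Indices and values are toy numbers.

def interpolate_inside(signal, left_edge, right_edge):
    """Linearly bridge the shadow region using the values at its two edges."""
    out = list(signal)
    a, b = signal[left_edge], signal[right_edge]
    span = right_edge - left_edge
    for i in range(left_edge + 1, right_edge):
        t = (i - left_edge) / span
        out[i] = (1.0 - t) * a + t * b
    return out

def rescale_to_spr(scatter_est, primary_est, spr_table_value):
    """Scale interpolated scatter so mean(S)/mean(P) equals the tabulated SPR."""
    current = sum(scatter_est) / sum(primary_est)
    k = spr_table_value / current
    return [k * s for s in scatter_est]
```

The first function removes the in-shadow signal (where scatter cannot be measured directly) and replaces it with an estimate anchored outside the shadow; the second nudges that estimate toward the phantom-derived SPR for the given thickness and angle.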
2013-01-01
Background Population stratification is a systematic difference in allele frequencies between subpopulations. This can lead to spurious association findings in the case–control genome wide association studies (GWASs) used to identify single nucleotide polymorphisms (SNPs) associated with disease-linked phenotypes. Methods such as self-declared ancestry, ancestry informative markers, genomic control, structured association, and principal component analysis are used to assess and correct population stratification but each has limitations. We provide an alternative technique to address population stratification. Results We propose a novel machine learning method, ETHNOPRED, which uses the genotype and ethnicity data from the HapMap project to learn ensembles of disjoint decision trees, capable of accurately predicting an individual’s continental and sub-continental ancestry. To predict an individual’s continental ancestry, ETHNOPRED produced an ensemble of 3 decision trees involving a total of 10 SNPs, with 10-fold cross validation accuracy of 100% using HapMap II dataset. We extended this model to involve 29 disjoint decision trees over 149 SNPs, and showed that this ensemble has an accuracy of ≥ 99.9%, even if some of those 149 SNP values were missing. On an independent dataset, predominantly of Caucasian origin, our continental classifier showed 96.8% accuracy and improved genomic control’s λ from 1.22 to 1.11. We next used the HapMap III dataset to learn classifiers to distinguish European subpopulations (North-Western vs. Southern), East Asian subpopulations (Chinese vs. Japanese), African subpopulations (Eastern vs. Western), North American subpopulations (European vs. Chinese vs. African vs. Mexican vs. Indian), and Kenyan subpopulations (Luhya vs. Maasai). In these cases, ETHNOPRED produced ensembles of 3, 39, 21, 11, and 25 disjoint decision trees, respectively involving 31, 502, 526, 242 and 271 SNPs, with 10-fold cross validation accuracy of
Park, Y; Winey, B; Sharp, G
2014-06-01
Purpose: To demonstrate feasibility of proton dose calculation on scatter-corrected CBCT images for the purpose of adaptive proton therapy. Methods: Two CBCT image sets were acquired from a prostate cancer patient and a thorax phantom using an on-board imaging system of an Elekta Infinity linear accelerator. 2-D scatter maps were estimated using a previously introduced CT-based technique, and were subtracted from each raw projection image. A CBCT image set was then reconstructed with an open source reconstruction toolkit (RTK). Conversion from the CBCT number to HU was performed by soft tissue-based shifting with reference to the plan CT. Passively scattered proton plans were simulated on the plan CT and corrected/uncorrected CBCT images using the XiO treatment planning system. For quantitative evaluation, water equivalent path length (WEPL) was compared in those treatment plans. Results: The scatter correction method significantly improved image quality and HU accuracy in the prostate case where large scatter artifacts were obvious. However, the correction technique showed limited effects on the thorax case that was associated with fewer scatter artifacts. Mean absolute WEPL errors from the plans with the uncorrected and corrected images were 1.3 mm and 5.1 mm in the thorax case and 13.5 mm and 3.1 mm in the prostate case. The prostate plan dose distribution of the corrected image demonstrated better agreement with the reference one than that of the uncorrected image. Conclusion: A priori CT-based CBCT scatter correction can reduce the proton dose calculation error when large scatter artifacts are involved. If scatter artifacts are low, an uncorrected CBCT image is also promising for proton dose calculation when it is calibrated with the soft-tissue based shifting.
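The soft-tissue-based HU shift mentioned above can be sketched simply: offset all CBCT voxel values so that the mean over a soft-tissue ROI matches the same ROI in the planning CT. The ROI indices and voxel values here are illustrative, not patient data:

```python
# Minimal sketch of a soft-tissue-based CBCT-to-HU shift. Toy numbers only.

def soft_tissue_shift(cbct_voxels, plan_ct_voxels, roi):
    """Shift all CBCT values by the soft-tissue ROI mean difference vs plan CT."""
    mean_cbct = sum(cbct_voxels[i] for i in roi) / len(roi)
    mean_plan = sum(plan_ct_voxels[i] for i in roi) / len(roi)
    offset = mean_plan - mean_cbct
    return [v + offset for v in cbct_voxels]

cbct = [120.0, 80.0, 95.0, 400.0]
plan = [40.0, 0.0, 20.0, 330.0]
roi = [0, 1, 2]                      # indices of soft-tissue voxels
calibrated = soft_tissue_shift(cbct, plan, roi)
```

A single global offset is the simplest possible calibration; it leaves relative contrast untouched while anchoring soft tissue to the plan CT scale.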
NASA Astrophysics Data System (ADS)
Ramamurthy, Senthil; D'Orsi, Carl J.; Sechopoulos, Ioannis
2016-02-01
A previously proposed x-ray scatter correction method for dedicated breast computed tomography was further developed and implemented so as to allow for initial patient testing. The method involves the acquisition of a complete second set of breast CT projections covering 360° with a perforated tungsten plate in the path of the x-ray beam. To make patient testing feasible, a wirelessly controlled electronic positioner for the tungsten plate was designed and added to a breast CT system. Other improvements to the algorithm were implemented, including automated exclusion of non-valid primary estimate points and the use of a different approximation method to estimate the full scatter signal. To evaluate the effectiveness of the algorithm, the resulting image quality was assessed with a breast phantom and with nine patient images. The improvements in the algorithm avoided the introduction of artifacts, especially at the object borders, which was an issue in the previous implementation in some cases. Both contrast, in terms of signal difference, and signal difference-to-noise ratio were improved with the proposed method, in contrast to the correction algorithm incorporated in the system, which does not recover contrast. Patient image evaluation also showed enhanced contrast, better cupping correction, and more consistent voxel values for the different tissues. The algorithm also reduces artifacts present in reconstructions of non-regularly shaped breasts. With the implemented hardware and software improvements, the proposed method can be reliably used during patient breast CT imaging, resulting in improvement of image quality, no introduction of artifacts, and in some cases reduction of artifacts already present. The impact of the algorithm on actual clinical performance for detection, diagnosis and other clinical tasks in breast imaging remains to be evaluated.
TH-A-18C-04: Ultrafast Cone-Beam CT Scatter Correction with GPU-Based Monte Carlo Simulation
Xu, Y; Bai, T; Yan, H; Ouyang, L; Wang, J; Pompos, A; Jiang, S; Jia, X; Zhou, L
2014-06-15
Purpose: Scatter artifacts severely degrade image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to finish the whole process, including both scatter correction and reconstruction, within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using raw projection data; 2) rigid registration of the planning CT to the FDK results; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using the GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in simulation to further reduce computation time. A novel denoising algorithm is used to eliminate MC scatter noise caused by low photon numbers. The method is validated on head-and-neck cases with simulated and clinical data. Results: We have studied the impacts of photon histories and volume down-sampling factors on the accuracy of scatter estimation. A Fourier analysis was conducted to show that scatter images calculated at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated case with a resolution of 512×512×100, we simulated 10M photons per angle. The total computation time is 23.77 seconds on a Nvidia GTX Titan GPU. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Similar results were found for a real patient case. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. The whole process of scatter correction and reconstruction is accomplished within 30 seconds. This study is supported in part by NIH (1R01CA154747-01), The Core Technology Research
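Step 4 of the six-step pipeline (interpolating sparse-angle MC scatter to every projection angle) can be sketched as piecewise-linear interpolation in angle. The angles and scatter values below are toy scalars standing in for full 2-D scatter images:

```python
# Illustrative sketch: scatter computed by MC at a few sparse view angles is
# linearly interpolated to any intermediate projection angle. Toy numbers.

def interp_scatter_at(angle, sparse_angles, sparse_scatter):
    """Piecewise-linear interpolation of a scatter value versus view angle."""
    for a0, a1, s0, s1 in zip(sparse_angles, sparse_angles[1:],
                              sparse_scatter, sparse_scatter[1:]):
        if a0 <= angle <= a1:
            t = (angle - a0) / (a1 - a0)
            return (1.0 - t) * s0 + t * s1
    raise ValueError("angle outside the sparse-angle range")

sparse_angles = [0.0, 90.0, 180.0]     # MC computed only at these views
sparse_scatter = [10.0, 20.0, 10.0]
estimate = interp_scatter_at(45.0, sparse_angles, sparse_scatter)
```

In the actual method this interpolation runs per pixel over 2-D scatter maps; the scalar version shows why computing MC scatter at only ~31 angles can suffice when scatter varies smoothly with angle.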
Koesters, Thomas; Friedman, Kent P.; Fenchel, Matthias; Zhan, Yiqiang; Hermosillo, Gerardo; Babb, James; Jelescu, Ileana O.; Faul, David; Boada, Fernando E.; Shepherd, Timothy M.
2016-01-01
Simultaneous PET/MR of the brain is a promising new technology for characterizing patients with suspected cognitive impairment or epilepsy. Unlike CT though, MR signal intensities do not provide a direct correlate to PET photon attenuation correction (AC), and inaccurate radiotracer standard uptake value (SUV) estimation could limit future PET/MR clinical applications. We tested a novel AC method that supplements standard Dixon-based tissue segmentation with a superimposed model-based bone compartment. Methods: We directly compared SUV estimation for MR-based AC methods to reference CT AC in 16 patients undergoing same-day, single 18FDG dose PET/CT and PET/MR for suspected neurodegeneration. Three Dixon-based MR AC methods were compared to CT: standard Dixon 4-compartment segmentation alone, Dixon with a superimposed model-based bone compartment, and Dixon with a superimposed bone compartment and linear attenuation correction optimized specifically for brain tissue. The brain was segmented using a 3D T1-weighted volumetric MR sequence and SUV estimations compared to CT AC for whole-image, whole-brain and 91 FreeSurfer-based regions-of-interest. Results: Modifying the linear AC value specifically for brain and superimposing a model-based bone compartment reduced whole-brain SUV estimation bias of Dixon-based PET/MR AC by 95% compared to reference CT AC (P < 0.05), resulting in a residual −0.3% whole-brain mean SUV bias. Further, brain regional analysis demonstrated only 3 frontal lobe regions with SUV estimation bias of 5% or greater (P < 0.05). These biases appeared to correlate with high individual variability in frontal bone thickness and pneumatization. Conclusion: Bone compartment and linear AC modifications result in a highly accurate MR AC method in subjects with suspected neurodegeneration. This prototype MR AC solution appears equivalent to other recently proposed solutions, and does not require additional MR sequences and scan time. These
NASA Astrophysics Data System (ADS)
Chen, J.; Zebker, H. A.; Knight, R. J.
2015-12-01
InSAR is commonly used to measure surface deformation between different radar passes at cm-scale accuracy and m-scale resolution. However, InSAR measurements are often decorrelated due to vegetation growth, which greatly limits high-quality InSAR data coverage. Here we present an algorithm for retrieving InSAR deformation measurements over areas with significant vegetation decorrelation through the use of adaptive interpolation between persistent scatterer (PS) pixels, those points at which surface scattering properties do not change much over time and thus decorrelation artifacts are minimal. The interpolation filter restores phase continuity in space and greatly reduces errors in phase unwrapping. We apply this algorithm to process L-band ALOS interferograms acquired over the San Luis Valley, Colorado and the Tulare Basin, California. In both areas, groundwater extraction for irrigation results in land deformation that can be detected using InSAR. We show that the PS-based algorithm reduces the artifacts from vegetation decorrelation while preserving the deformation signature. The spatial sampling resolution achieved over agricultural fields is on the order of hundreds of meters, usually sufficient for groundwater studies. The improved InSAR data further allow us to reconstruct the SBAS ground deformation time series and transform the measured deformation to head levels using the skeletal storage coefficient and time delay constant inferred from a joint InSAR-well data analysis. The resulting InSAR-head and well-head measurements in the San Luis Valley show good agreement with primary confined aquifer pumping activities. This case study demonstrates that high-quality InSAR deformation data can be obtained over vegetation-decorrelated regions if processed correctly.
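The PS interpolation idea can be sketched with inverse-distance weighting of unit phasors, which keeps the estimate well defined under phase wrapping. The weighting scheme is an illustrative choice, not the paper's exact adaptive filter, and the geometry is a toy example:

```python
import math
import cmath

# Hedged sketch: estimate the interferometric phase at a decorrelated pixel
# from nearby persistent-scatterer (PS) pixels. Averaging unit phasors
# (rather than raw phases) avoids 2*pi wrap-around problems.

def interp_phase(target, ps_points):
    """ps_points: list of ((x, y), phase). Returns interpolated phase at target."""
    acc = 0j
    for (x, y), phase in ps_points:
        d = math.hypot(x - target[0], y - target[1])
        w = 1.0 / max(d, 1e-6)            # inverse-distance weight
        acc += w * cmath.exp(1j * phase)
    return cmath.phase(acc)
```

When all neighboring PS pixels agree on a phase, the interpolated value reproduces it regardless of their distances, which is the continuity-restoring behavior the filter relies on.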
Noncommutative correction to Aharonov-Bohm scattering: A field theory approach
Anacleto, M.A.; Gomes, M.; Silva, A.J. da; Spehler, D.
2004-10-15
We study a noncommutative nonrelativistic theory in 2+1 dimensions of a scalar field coupled to the Chern-Simons field. In the commutative situation this model has been used to simulate the Aharonov-Bohm effect in the field theory context. We verified that, contrary to the commutative result, the inclusion of a quartic self-interaction of the scalar field is not necessary to secure the ultraviolet renormalizability of the model. However, to obtain a smooth commutative limit the presence of a quartic gauge invariant self-interaction is required. For small noncommutativity we fix the corrections to the Aharonov-Bohm scattering and prove that up to one loop the model is free from dangerous infrared/ultraviolet divergences.
Self-interaction correction in multiple scattering theory: application to transition metal oxides
Daene, Markus W; Lueders, Martin; Ernst, Arthur; Koedderitzsch, Diemo; Temmerman, Walter M; Szotek, Zdzislawa; Hergert, Wolfram
2009-01-01
We apply to transition metal monoxides the self-interaction corrected (SIC) local spin density (LSD) approximation, implemented locally in the multiple scattering theory within the Korringa-Kohn-Rostoker (KKR) band structure method. The calculated electronic structure and in particular magnetic moments and energy gaps are discussed in reference to the earlier SIC results obtained within the LMTO-ASA band structure method, involving transformations between Bloch and Wannier representations to solve the eigenvalue problem and calculate the SIC charge and potential. Since the KKR can be easily extended to treat disordered alloys, by invoking the coherent potential approximation (CPA), in this paper we compare the CPA approach and supercell calculations to study the electronic structure of NiO with cation vacancies.
Hajjarian, Zeinab; Nadkarni, Seemantini K.
2013-01-01
Biological fluids fulfill key functionalities such as hydrating, protecting, and nourishing cells and tissues in various organ systems. They are capable of these versatile tasks owing to their distinct structural and viscoelastic properties. Characterizing the viscoelastic properties of bio-fluids is of pivotal importance for monitoring the development of certain pathologies as well as engineering synthetic replacements. Laser Speckle Rheology (LSR) is a novel optical technology that enables mechanical evaluation of tissue. In LSR, a coherent laser beam illuminates the tissue and temporal speckle intensity fluctuations are analyzed to evaluate mechanical properties. The rate of temporal speckle fluctuations is, however, influenced by both optical and mechanical properties of tissue. Therefore, in this paper, we develop and validate an approach to estimate and compensate for the contributions of light scattering to speckle dynamics and demonstrate the capability of LSR for the accurate extraction of viscoelastic moduli in phantom samples and biological fluids of varying optical and mechanical properties. PMID:23705028
Dual-energy digital mammography for calcification imaging: Scatter and nonuniformity corrections
Kappadath, S. Cheenu; Shaw, Chris C.
2005-11-15
Mammographic images of small calcifications, which are often the earliest signs of breast cancer, can be obscured by overlapping fibroglandular tissue. We have developed and implemented a dual-energy digital mammography (DEDM) technique for calcification imaging under full-field imaging conditions using a commercially available aSi:H/CsI:Tl flat-panel based digital mammography system. The low- and high-energy images were combined using a nonlinear mapping function to cancel the tissue structures and generate the dual-energy (DE) calcification images. The total entrance-skin exposure and mean glandular dose from the low- and high-energy images were constrained to be similar to screening-examination levels. To evaluate the DE calcification image, we designed a phantom using calcium carbonate crystals to simulate calcifications of various sizes (212-425 μm) overlaid with 5 cm of breast-tissue-equivalent material with a continuously varying glandular-tissue ratio from 0% to 100%. We report on the effects of scatter radiation and of nonuniformity in x-ray intensity and detector response on the DE calcification images. The nonuniformity was corrected by normalizing the low- and high-energy images with full-field reference images. Correction of scatter in the low- and high-energy images significantly reduced the background signal in the DE calcification image. Under the current implementation of DEDM, utilizing the mammography system and dose level tested, calcifications in the 300-355 μm size range were clearly visible in DE calcification images. Calcification threshold sizes decreased to the 250-280 μm range when the visibility criteria were lowered to barely visible. Calcifications smaller than ~250 μm were usually not visible. The visibility of calcifications with our DEDM imaging technique was limited by quantum noise, not system noise.
Gearhart, A; Peterson, T; Johnson, L
2015-06-15
Purpose: To evaluate the impact of the exceptional energy resolution of germanium detectors for preclinical SPECT in comparison to conventional detectors. Methods: A cylindrical water phantom was created in GATE with a spherical Tc-99m source in the center. Sixty-four projections over 360 degrees using a pinhole collimator were simulated. The same phantom was simulated using air instead of water to establish the true reconstructed voxel intensity without attenuation. Attenuation correction based on the Chang method was performed on MLEM reconstructed images from the water phantom to determine a quantitative measure of the effectiveness of the attenuation correction. Similarly, a NEMA phantom was simulated, and the effectiveness of the attenuation correction was evaluated. Both simulations were carried out using both NaI detectors with an energy resolution of 10% FWHM and Ge detectors with an energy resolution of 1%. Results: Analysis shows that attenuation correction without scatter correction using germanium detectors can reconstruct a small spherical source to within 3.5%. Scatter analysis showed that for standard sized objects in a preclinical scanner, a NaI detector has a scatter-to-primary ratio between 7% and 12.5% compared to between 0.8% and 1.5% for a Ge detector. Preliminary results from line profiles through the NEMA phantom suggest that applying attenuation correction without scatter correction provides acceptable results for the Ge detectors but overestimates the phantom activity using NaI detectors. Due to the decreased scatter, we believe that the spillover ratio for the air and water cylinders in the NEMA phantom will be lower using germanium detectors compared to NaI detectors. Conclusion: This work indicates that the superior energy resolution of germanium detectors allows fewer scattered photons to be included within the energy window compared to traditional SPECT detectors. This may allow for quantitative SPECT without implementing scatter correction.
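The Chang method used above corrects each reconstructed voxel by the average attenuation factor over all projection angles. The following is a minimal first-order sketch of that idea, not the study's GATE/MLEM pipeline; the uniform μ value, cylinder size, and path lengths are illustrative assumptions:

```python
import numpy as np

def chang_correction_factor(mu, path_lengths):
    """First-order Chang correction factor for a single voxel: the mean
    of exp(-mu * d_i) over the attenuation path lengths d_i from the
    voxel to the object boundary at each projection angle. The
    reconstructed voxel value is then divided by this factor."""
    return np.mean(np.exp(-mu * np.asarray(path_lengths, dtype=float)))

# Illustrative case: a voxel at the center of a small water cylinder
# (mu ~ 0.15 /cm for 140 keV Tc-99m photons), radius 1.5 cm, so the
# path length is 1.5 cm at every one of 64 angles. These numbers are
# hypothetical, not taken from the simulation described above.
f = chang_correction_factor(0.15, [1.5] * 64)
corrected_voxel = 1.0 / f  # an uncorrected voxel value of 1.0, scaled up
```

For an off-center voxel the path lengths differ per angle, which is where the angular averaging matters.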
Probing spectator scattering and annihilation corrections in Bs→PV decays
NASA Astrophysics Data System (ADS)
Chang, Qin; Hu, Xiaohui; Sun, Junfeng; Yang, Yueling
2015-04-01
Motivated by the recent LHCb measurements of the B̄s→π⁻K*⁺ and B̄s→K±K*∓ decay modes, we revisit the Bs→PV decays within the QCD factorization framework. The effects of hard-spectator scattering and annihilation corrections are studied in detail. After performing a χ²-fit of the end-point parameters X_A^{i,f}(ρ_A^{i,f}, φ_A^{i,f}) and X_H(ρ_H, φ_H) to the available data, it is found that although some possible mismatches exist, the universality of X_A^{i,f} and X_H in the Bs and Bu,d systems is still allowed within theoretical uncertainties and experimental errors. With the end-point parameters obtained from Bu,d→PV decays, the numerical results and detailed analyses for the observables of the B̄s→πK*, ρK, πρ, πφ, and Kφ decay modes are presented. In addition, we have identified a few useful observables, for instance those of the B̄s→π⁰φ decay, for probing hard-spectator scattering and annihilation contributions.
NASA Astrophysics Data System (ADS)
Ryu, Y.; Kobayashi, H.; Welles, J.; Norman, J.
2011-12-01
Correct estimation of the gap fraction is essential to quantify canopy architectural variables such as leaf area index and clumping index, which largely control land-atmosphere interactions. However, gap fraction measurements from optical sensors are contaminated by radiation scattered by the canopy and the ground surface. In this study, we propose a simple invertible bidirectional transmission model to remove scattering effects from gap fraction measurements. The model shows that 1) the scattering factor is highest where leaf area index is 1-2 in a non-clumped canopy, 2) the relative scattering factor (scattering factor/measured gap fraction) increases with leaf area index, 3) bright land surfaces (e.g., snow and bright soil) can contribute a significant scattering factor, and 4) the scattering factor is not marginal even under highly diffuse sky conditions. By applying the model to LAI-2200 data collected in an open savanna ecosystem, we find that the scattering factor causes significant underestimation of leaf area index (25%) and significant overestimation of clumping index (6%). The results highlight that some LAI-2000-based LAI estimates from around the world may be underestimated, particularly in highly clumped broad-leaf canopies. Fortunately, the importance of scattering can be assessed with software from LI-COR, Inc., which will incorporate the scattering model from this study in a post-processing mode after data have been collected by a LAI-2000 or LAI-2200.
Bednarz, Bryan; Lu, Hsiao-Ming; Engelsman, Martijn; Paganetti, Harald
2011-01-01
Monte Carlo models of proton therapy treatment heads are being used to improve beam delivery systems and to calculate the radiation field for patient dose calculations. The achievable accuracy of the model depends on the exact knowledge of the treatment head geometry and time structure, the material characteristics, and the underlying physics. This work aimed at studying the uncertainties in treatment head simulations for passive scattering proton therapy. The sensitivities of spread-out Bragg peak (SOBP) dose distributions on material densities, mean ionization potentials, initial proton beam energy spread and spot size were investigated. An improved understanding of the nature of these parameters may help to improve agreement between calculated and measured SOBP dose distributions and to ensure that the range, modulation width, and uniformity are within clinical tolerance levels. Furthermore, we present a method to make small corrections to the uniformity of spread-out Bragg peaks by utilizing the time structure of the beam delivery. In addition, we re-commissioned the models of the two proton treatment heads located at our facility using the aforementioned correction methods presented in this paper. PMID:21478569
NASA Astrophysics Data System (ADS)
Zhang, Ningyu; Cheng, Chuanfu; Teng, Shuyun; Chen, Xiaoyi; Xu, Zhizhan
2007-09-01
A new approach based on the gated integration technique is proposed for the accurate measurement of the autocorrelation function of speckle intensities scattered from a random phase screen. The Boxcar used for this technique integrates the photoelectric signal while its sampling gate is open, and it repeats the sampling a preset number of times, m. The averaged analog output of the m samplings from the Boxcar enhances the signal-to-noise ratio by √m, because the repeated sampling and averaging keep the useful speckle signals stable, while the randomly varying photoelectric noise is suppressed by 1/√m. In the experiment, we use an analog-to-digital converter module to synchronize all the actions, such as the stepped movement of the phase screen, the repeated sampling, and the readout of the averaged Boxcar output. The experimental results show that speckle signals are better recovered from contaminated signals and that the autocorrelation function with its secondary maximum is obtained, indicating that the accuracy of the measurement of the autocorrelation function is greatly improved by the gated integration technique.
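The √m noise suppression from averaging m repeated samplings can be illustrated with a short simulation (a generic sketch of the statistics only; the signal level, noise level, and m below are arbitrary assumptions, not the experiment's values):

```python
import numpy as np

rng = np.random.default_rng(0)

signal = 2.0   # stable speckle signal at one position of the phase screen
sigma = 1.0    # standard deviation of the photoelectric noise per sampling
m = 100        # number of repeated samplings averaged per measurement

# Each measurement is the average of m noisy samplings. Over many
# repeated measurements, the residual noise (std of the averages)
# should shrink from sigma to roughly sigma / sqrt(m).
measurements = rng.normal(signal, sigma, size=(5000, m)).mean(axis=1)
residual_noise = measurements.std()
```

With m = 100 the residual noise is about sigma/10, i.e. a tenfold signal-to-noise improvement, matching the √m scaling stated above.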
Siewerdsen, J.H.; Daly, M.J.; Bakhtiar, B.
2006-01-15
X-ray scatter poses a significant limitation to image quality in cone-beam CT (CBCT), resulting in contrast reduction, image artifacts, and lack of CT number accuracy. We report the performance of a simple scatter correction method in which scatter fluence is estimated directly in each projection from pixel values near the edge of the detector behind the collimator leaves. The algorithm operates on the simple assumption that signal in the collimator shadow is attributable to x-ray scatter, and the 2D scatter fluence is estimated by interpolating between pixel values measured along the top and bottom edges of the detector behind the collimator leaves. The resulting scatter fluence estimate is subtracted from each projection to yield an estimate of the primary-only images for CBCT reconstruction. Performance was investigated in phantom experiments on an experimental CBCT benchtop, and the effect on image quality was demonstrated in patient images (head, abdomen, and pelvis sites) obtained on a preclinical system for CBCT-guided radiation therapy. The algorithm provides significant reduction in scatter artifacts without compromise in contrast-to-noise ratio (CNR). For example, in a head phantom, cupping artifact was essentially eliminated, CT number accuracy was restored to within 3%, and CNR (breast-to-water) was improved by up to 50%. Similarly in a body phantom, cupping artifact was reduced by at least a factor of 2 without loss in CNR. Patient images demonstrate significantly increased uniformity, accuracy, and contrast, with an overall improvement in image quality in all sites investigated. Qualitative evaluation illustrates that soft-tissue structures that are otherwise undetectable are clearly delineated in scatter-corrected reconstructions. Since scatter is estimated directly in each projection, the algorithm is robust with respect to system geometry, patient size and heterogeneity, patient motion, etc. Operating without prior information, analytical modeling
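The edge-interpolation scheme described above can be sketched as follows. This is a minimal illustration of the idea, not the authors' implementation; the array layout and the number of shadow rows are assumptions:

```python
import numpy as np

def scatter_correct(projection, shadow_rows=4):
    """Estimate and subtract scatter using pixels behind the collimator.

    `projection` is a 2D detector image (rows x cols). The first and
    last `shadow_rows` rows are assumed to lie in the collimator
    shadow, so their signal is attributed entirely to x-ray scatter.
    """
    top = projection[:shadow_rows].mean(axis=0)       # scatter at top edge
    bottom = projection[-shadow_rows:].mean(axis=0)   # scatter at bottom edge
    n_rows = projection.shape[0]
    # Linearly interpolate a 2D scatter fluence map between the edges.
    w = np.linspace(0.0, 1.0, n_rows)[:, None]
    scatter = (1.0 - w) * top[None, :] + w * bottom[None, :]
    # Subtract the estimate; clip so primary estimates stay non-negative.
    return np.clip(projection - scatter, 0.0, None)
```

Because the estimate is formed independently in each projection, nothing about the object geometry enters the correction, which is the robustness property noted in the abstract.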
NASA Astrophysics Data System (ADS)
Dinten, Jean-Marc; Darboux, Michel; Bordy, Thomas; Robert-Coutant, Christine; Gonon, Georges
2004-05-01
At CEA-LETI, a DEXA approach for systems using a digital 2D radiographic detector has been developed. It relies on an original X-ray scatter management method, based on the combined use of an analytical model and of scatter calibration data acquired through different thicknesses of Lucite slabs. Since the X-ray interaction properties of Lucite are equivalent to those of fat, the approach leads to a scatter flux map representative of a 100% fat region. However, patients' soft tissues are composed of lean and fat. Therefore, the obtained scatter map has to be refined to take into account the various fat ratios that patients can present. This refinement consists in establishing a formula relating the fat ratio to the thicknesses of low- and high-energy Lucite slabs leading to the same signal level. This proportion is then used to compute, on the basis of X-ray/matter interaction equations, correction factors to apply to the Lucite-equivalent X-ray scatter map. The influence of the fat-ratio correction has been evaluated on a digital 2D bone densitometer, with phantoms composed of a PVC step (simulating bone) and different Lucite/water thicknesses, as well as on patients. The results show that our X-ray scatter determination approach can take into account variations of body composition.
NASA Astrophysics Data System (ADS)
Juste, B.; Miró, R.; Verdú, G.; Santos, A.
2014-06-01
This work presents a methodology to reconstruct a Linac high energy photon spectrum beam. The method is based on EPID scatter images generated when the incident photon beam impinges onto a plastic block. The distribution of scatter radiation produced by this scattering object placed on the external EPID surface and centered at the beam field size was measured. The scatter distribution was also simulated for a series of monoenergetic identical geometry photon beams. Monte Carlo simulations were used to predict the scattered photons for monoenergetic photon beams at 92 different locations, with 0.5 cm increments and at 8.5 cm from the centre of the scattering material. Measurements were performed with the same geometry using a 6 MeV photon beam produced by the linear accelerator. A system of linear equations was generated to combine the polyenergetic EPID measurements with the monoenergetic simulation results. Regularization techniques were applied to solve the system for the incident photon spectrum. A linear matrix system, A×S=E, was developed to describe the scattering interactions and their relationship to the primary spectrum (S). A is the monoenergetic scatter matrix determined from the Monte Carlo simulations, S is the incident photon spectrum, and E represents the scatter distribution characterized by EPID measurement. Direct matrix inversion methods produce results that are not physically consistent due to errors inherent in the system, therefore Tikhonov regularization methods were applied to address the effects of these errors and to solve the system for obtaining a consistent bremsstrahlung spectrum.
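The Tikhonov-regularized solution of the linear system A×S=E has a standard closed form, sketched below. The regularization parameter and the matrices in the usage note are placeholders, not values from the study:

```python
import numpy as np

def tikhonov_solve(A, E, lam=1e-2):
    """Solve A @ S = E for the incident spectrum S by Tikhonov
    regularization: minimize ||A S - E||^2 + lam^2 ||S||^2, whose
    closed form is S = (A^T A + lam^2 I)^(-1) A^T E. Direct inversion
    of A amplifies the errors inherent in the measurements; the
    lam^2 I term damps that amplification."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ E)
```

In practice lam is chosen by a criterion such as the L-curve or cross-validation; as lam → 0 the solution approaches the (unstable) direct inverse, which is exactly the physically inconsistent case described above.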
Radiative corrections to the elastic e-p and mu-p scattering in Monte Carlo simulation approach
NASA Astrophysics Data System (ADS)
Koshchii, Oleksandr; Afanasev, Andrei; MUSE Collaboration
2015-04-01
In this paper, we calculated exact lepton mass corrections for elastic e-p and mu-p scattering using the ELRADGEN 2.1 Monte Carlo generator. These estimates are essential for the MUSE experiment, which is designed to resolve the proton radius puzzle. The puzzle stems from the fact that two methods of measuring the proton radius (the spectroscopy method, which measures proton energy levels in hydrogen, and electron scattering experiments) predicted the radius to be 0.8768 ± 0.0069 fm, whereas the experiment that used muonic hydrogen yielded a value 5% smaller. Since the radiative corrections are different for electrons and muons due to their mass difference, these corrections are extremely important for the analysis and interpretation of upcoming MUSE data.
NASA Astrophysics Data System (ADS)
Sramek, Benjamin Koerner
The ability to deliver conformal dose distributions in radiation therapy through intensity modulation and the potential for tumor dose escalation to improve treatment outcome has necessitated an increase in localization accuracy of inter- and intra-fractional patient geometry. Megavoltage cone-beam CT imaging using the treatment beam and onboard electronic portal imaging device is one option currently being studied for implementation in image-guided radiation therapy. However, routine clinical use is predicated upon continued improvements in image quality and patient dose delivered during acquisition. The formal statement of hypothesis for this investigation was that the conformity of planned to delivered dose distributions in image-guided radiation therapy could be further enhanced through the application of kilovoltage scatter correction and intermediate view estimation techniques to megavoltage cone-beam CT imaging, and that normalized dose measurements could be acquired and inter-compared between multiple imaging geometries. The specific aims of this investigation were to: (1) incorporate the Feldkamp, Davis and Kress filtered backprojection algorithm into a program to reconstruct a voxelized linear attenuation coefficient dataset from a set of acquired megavoltage cone-beam CT projections, (2) characterize the effects on megavoltage cone-beam CT image quality resulting from the application of Intermediate View Interpolation and Intermediate View Reprojection techniques to limited-projection datasets, (3) incorporate the Scatter and Primary Estimation from Collimator Shadows (SPECS) algorithm into megavoltage cone-beam CT image reconstruction and determine the set of SPECS parameters which maximize image quality and quantitative accuracy, and (4) evaluate the normalized axial dose distributions received during megavoltage cone-beam CT image acquisition using radiochromic film and thermoluminescent dosimeter measurements in anthropomorphic pelvic and head and
Gallandi, Lukas; Marom, Noa; Rinke, Patrick; Körzdörfer, Thomas
2016-02-01
The performance of non-empirically tuned long-range corrected hybrid functionals for the prediction of vertical ionization potentials (IPs) and electron affinities (EAs) is assessed for a set of 24 organic acceptor molecules. Basis set-extrapolated coupled cluster singles, doubles, and perturbative triples [CCSD(T)] calculations serve as a reference for this study. Compared to standard exchange-correlation functionals, tuned long-range corrected hybrid functionals produce highly reliable results for vertical IPs and EAs, yielding mean absolute errors on par with computationally more demanding GW calculations. In particular, it is demonstrated that long-range corrected hybrid functionals serve as ideal starting points for non-self-consistent GW calculations. PMID:26731340
2015-11-01
In the article by Heuslein et al, which published online ahead of print on September 3, 2015 (DOI: 10.1161/ATVBAHA.115.305775), a correction was needed. Brett R. Blackman was added as the penultimate author of the article. The article has been corrected for publication in the November 2015 issue. PMID:26490278
Karton, A.; Martin, J. M. L.; Ruscic, B.
2007-06-01
A benchmark calculation of the atomization energy of the 'simple' organic molecule C2H6 (ethane) has been carried out by means of W4 theory. While the molecule is straightforward in terms of one-particle and n-particle basis set convergence, its large zero-point vibrational energy (and anharmonic correction thereto) and nontrivial diagonal Born-Oppenheimer correction (DBOC) represent interesting challenges. For the W4 set of molecules and C2H6, we show that DBOCs to the total atomization energy are systematically overestimated at the SCF level, and that the correlation correction converges very rapidly with the basis set. Thus, even at the CISD/cc-pVDZ level, useful correlation corrections to the DBOC are obtained. When applying such a correction, overall agreement with experiment was only marginally improved, but a more significant improvement is seen when hydrogen-containing systems are considered in isolation. We conclude that for closed-shell organic molecules, the greatest obstacles to highly accurate computational thermochemistry may not lie in the solution of the clamped-nuclei Schrödinger equation, but rather in the zero-point vibrational energy and the diagonal Born-Oppenheimer correction.
Modulator design for x-ray scatter correction using primary modulation: Material selection
Gao, Hewei; Zhu, Lei; Fahrig, Rebecca
2010-08-15
Purpose: An optimal material selection for primary modulator is proposed in order to minimize beam hardening of the modulator in x-ray cone-beam computed tomography (CBCT). Recently, a measurement-based scatter correction method using primary modulation has been developed and experimentally verified. In the practical implementation, beam hardening of the modulator blocker is a limiting factor because it causes inconsistency in the primary signal and therefore degrades the accuracy of scatter correction. Methods: This inconsistency can be purposely assigned to the effective transmission factor of the modulator whose variation as a function of object filtration represents the magnitude of beam hardening of the modulator. In this work, the authors show that the variation reaches a minimum when the K-edge of the modulator material is near the mean energy of the system spectrum. Accordingly, an optimal material selection can be carried out in three steps. First, estimate and evaluate the polychromatic spectrum for a given x-ray system including both source and detector; second, calculate the mean energy of the spectrum and decide the candidate materials whose K-edge energies are near the mean energy; third, select the optimal material from the candidates after considering both the magnitude of beam hardening and the physical and chemical properties. Results: A tabletop x-ray CBCT system operated at 120 kVp is used to validate the material selection method in both simulations and experiments, from which the optimal material for this x-ray system is then chosen. With the transmission factor initially being 0.905 and 0.818, simulations show that erbium provides the least amount of variation as a function of object filtrations (maximum variations are 2.2% and 4.3%, respectively, only one-third of that for copper). With different combinations of aluminum and copper filtrations (simulating a range of object thicknesses), measured overall variations are 2.5%, 1.0%, and 8
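The three-step selection procedure above reduces to choosing the candidate material whose K-edge lies closest to the mean energy of the detected spectrum. A minimal sketch (the K-edge energies are approximate values from standard x-ray data tables; the candidate list and the spectrum in the test are illustrative assumptions):

```python
# Approximate K-edge energies in keV, from standard x-ray data tables.
K_EDGES = {"Al": 1.56, "Cu": 8.98, "Mo": 20.00, "Gd": 50.24,
           "Er": 57.49, "W": 69.53}

def pick_modulator(energies, weights):
    """Step 1-2: compute the mean energy of the system spectrum
    (given as sampled energies with relative weights). Step 3 (partial):
    return the candidate whose K-edge is nearest that mean energy;
    physical and chemical suitability must still be checked by hand."""
    mean_e = sum(e * w for e, w in zip(energies, weights)) / sum(weights)
    return min(K_EDGES, key=lambda m: abs(K_EDGES[m] - mean_e))
```

For a 120 kVp spectrum whose detected mean energy lands near 60 keV, this rule selects erbium (K-edge ≈ 57.5 keV), consistent with the result reported above.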
2015-12-01
In the article by Narayan et al (Narayan O, Davies JE, Hughes AD, Dart AM, Parker KH, Reid C, Cameron JD. Central aortic reservoir-wave analysis improves prediction of cardiovascular events in elderly hypertensives. Hypertension. 2015;65:629-635. doi: 10.1161/HYPERTENSIONAHA.114.04824), which published online ahead of print December 22, 2014, and appeared in the March 2015 issue of the journal, some corrections were needed. On page 632, Figure, panel A, the label PRI has been corrected to read RPI. In panel B, the text by the upward arrow, "10% increase in kd," has been corrected to read, "10% decrease in kd." The corrected figure is shown below. The authors apologize for these errors. PMID:26558821
NASA Astrophysics Data System (ADS)
Sun, Yuansheng; Periasamy, Ammasi
2010-03-01
Förster resonance energy transfer (FRET) microscopy is commonly used to monitor protein interactions with filter-based imaging systems, which require spectral bleedthrough (or cross talk) correction to accurately measure energy transfer efficiency (E). The double-label (donor+acceptor) specimen is excited at the donor wavelength; the acceptor emission provides the uncorrected FRET signal, and the donor emission (the donor channel) represents the quenched donor (qD), the basis for the E calculation. Our results indicate this is not the most accurate determination of the quenched donor signal, as it fails to consider the donor spectral bleedthrough (DSBT) signals in the qD for the E calculation, which our new model addresses, leading to a more accurate E result. This refinement improves E comparisons made with lifetime and spectral FRET imaging microscopy, as shown here using several genetic (FRET standard) constructs, where cerulean and venus fluorescent proteins are tethered by different amino acid linkers.
NASA Astrophysics Data System (ADS)
Blanco, Francisco; Ellis-Gibbings, Lilian; García, Gustavo
2016-02-01
An improvement of the screening-corrected Additivity Rule (SCAR) is proposed for calculating electron and positron scattering cross sections from polyatomic molecules within the independent atom model (IAM), following the analysis of numerical solutions to the three-dimensional Lippmann-Schwinger equation for multicenter potentials. Interference contributions affect the whole energy range considered (1-300 eV): the lower energies, where atomic screening is most effective, and the higher energies, where interatomic distances are large compared with the total cross sections and electron wavelengths. This correction to the interference terms provides a significant improvement for both total and differential elastic cross sections at these energies.
Raymond Raylman; Stanislaw Majewski; Randolph Wojcik; Andrew Weisenberger; Brian Kross; Vladimir Popov
2001-06-01
Positron emission mammography (PEM) has begun to show promise as an effective method for the detection of breast lesions. Due to its utilization of tumor-avid radiopharmaceuticals labeled with positron-emitting radionuclides, this technique may be especially useful in imaging of women with radiodense or fibrocystic breasts. While the use of these radiotracers affords PEM unique capabilities, it also introduces some limitations. Specifically, acceptance of accidental and Compton-scattered coincidence events can decrease lesion detectability. The authors studied the effect of accidental coincidence events on PEM images produced by the presence of 18F-Fluorodeoxyglucose in the organs of a subject using an anthropomorphic phantom. A delayed-coincidence technique was tested as a method for correcting PEM images for the occurrence of accidental events. Also, a Compton scatter correction algorithm designed specifically for PEM was developed and tested using a compressed breast phantom.
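The delayed-coincidence technique tested above estimates the accidental-coincidence rate from events in a delayed timing window and subtracts it bin by bin from the prompt data. A minimal sketch (array-based, with hypothetical sinogram bins; the Compton scatter correction is a separate step not shown):

```python
import numpy as np

def correct_accidentals(prompts, delayeds):
    """Delayed-coincidence correction for accidental events.

    Events collected in a delayed coincidence window cannot be true
    coincidences, so they estimate the accidental rate per bin.
    Subtracting them from the prompt data leaves an estimate of
    trues + scatter; clipping avoids negative counts."""
    corrected = np.asarray(prompts, dtype=float) - np.asarray(delayeds, dtype=float)
    return np.clip(corrected, 0.0, None)
```

The subtraction adds statistical noise (the delayed counts carry their own Poisson variance), which is one reason variance-reduced randoms estimates are sometimes preferred.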
A. Afanasev, I. Akushevich, A. Ilyichev, N. Merenkov
2003-09-01
The main features of the electron structure method for calculating higher-order QED radiative effects in polarized deep-inelastic ep scattering are presented. A new FORTRAN code, ESFRAD, based on this method has been developed. A detailed quantitative comparison is performed between the results of ESFRAD and other methods for calculating the higher-order radiative corrections, implemented in the codes POLRAD and RADGEN.
NASA Astrophysics Data System (ADS)
Tranchida, Davide; Piccarolo, Stefano; Loos, Joachim; Alexeev, Alexander
2006-10-01
The Oliver and Pharr [J. Mater. Res. 7, 1564 (1992)] procedure is a widely used tool to analyze nanoindentation force curves obtained on metals or ceramics. Its application to polymers is, however, difficult, as Young's moduli are commonly overestimated mainly because of viscoelastic effects and pileup. However, polymers spanning a large range of morphologies have been used in this work to introduce a phenomenological correction factor. It depends on indenter geometry: sets of calibration indentations have to be performed on some polymers with known elastic moduli to characterize each indenter.
Bigdeli, T. Bernard; Lee, Donghyung; Webb, Bradley Todd; Riley, Brien P.; Vladimirov, Vladimir I.; Fanous, Ayman H.; Kendler, Kenneth S.; Bacanu, Silviu-Alin
2016-01-01
Motivation: For genetic studies, statistically significant variants explain far less trait variance than 'sub-threshold' association signals. To dimension follow-up studies, researchers need to accurately estimate 'true' effect sizes at each SNP, e.g. the true mean of odds ratios (ORs)/regression coefficients (RRs) or Z-score noncentralities. Naïve estimates of effect sizes incur winner's curse biases, which are reduced only by laborious winner's curse adjustments (WCAs). Given that Z-score estimates can be theoretically translated onto other scales, we propose a simple method to compute WCA for Z-scores, i.e. their true means/noncentralities. Results: WCA of Z-scores shrinks them toward zero while, on the P-value scale, multiple testing adjustment (MTA) shrinks P-values toward one, which corresponds to the zero Z-score value. Thus, WCA on the Z-score scale is a proxy for MTA on the P-value scale. Therefore, to estimate Z-score noncentralities for all SNPs in genome scans, we propose FDR Inverse Quantile Transformation (FIQT). It (i) performs the simpler MTA of P-values using FDR and (ii) obtains noncentralities by back-transforming MTA P-values onto the Z-score scale. When compared to competitors, realistic simulations suggest that FIQT is more (i) accurate and (ii) computationally efficient by orders of magnitude. Practical application of FIQT to the Psychiatric Genomics Consortium schizophrenia cohort predicts a non-trivial fraction of sub-threshold signals which become significant in much larger supersamples. Conclusions: FIQT is a simple, yet accurate, WCA method for Z-scores (and ORs/RRs, via simple transformations). Availability and Implementation: A 10-line R function implementation is available at https://github.com/bacanusa/FIQT. Contact: sabacanu@vcu.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27187203
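The two FIQT steps (FDR adjustment of the p-values, then back-transformation onto the Z scale) can be sketched as follows. The paper provides an R implementation; this Python analogue, including the explicit Benjamini-Hochberg step, is our own assumption about the details:

```python
import numpy as np
from scipy import stats

def fiqt(z):
    """Sketch of FDR Inverse Quantile Transformation: shrink observed
    Z-scores toward zero by (i) Benjamini-Hochberg-adjusting their
    two-sided p-values and (ii) mapping the adjusted p-values back
    onto the Z scale, preserving signs."""
    z = np.asarray(z, dtype=float)
    p = 2.0 * stats.norm.sf(np.abs(z))              # two-sided p-values
    n = len(p)
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)     # raw BH ratios
    adj = np.minimum.accumulate(ranked[::-1])[::-1]  # enforce monotonicity
    adj = np.clip(adj, 0.0, 1.0)
    p_adj = np.empty(n)
    p_adj[order] = adj
    # Adjusted p-value -> shrunken |Z|, then restore the original sign.
    return np.sign(z) * stats.norm.isf(p_adj / 2.0)
```

Because BH-adjusted p-values are never smaller than the raw ones, every output noncentrality estimate has magnitude at most that of the input Z-score, which is the winner's-curse shrinkage described above.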
NASA Astrophysics Data System (ADS)
1995-04-01
Seismic images of the Brooks Range, Arctic Alaska, reveal crustal-scale duplexing: Correction. Geology, v. 23, p. 65-68 (January 1995). The correct Figure 4A, for the loose insert, is given here. See Figure 4A below. Corrected inserts will be available to those requesting copies of the article from the senior author, Gary S. Fuis, U.S. Geological Survey, 345 Middlefield Road, Menlo Park, CA 94025. Figure 4A. P-wave velocity model of Brooks Range region (thin gray contours) with migrated wide-angle reflections (heavy red lines) and migrated vertical-incidence reflections (short black lines) superimposed. Velocity contour interval is 0.25 km/s; 4, 5, and 6 km/s contours are labeled. Estimated error in velocities is one contour interval. Symbols on faults shown at top are as in the Figure 2 caption.
Algorithm for x-ray beam hardening and scatter correction in low-dose cone-beam CT: phantom studies
NASA Astrophysics Data System (ADS)
Liu, Wenlei; Rong, Junyan; Gao, Peng; Liao, Qimei; Lu, HongBing
2016-03-01
X-ray scatter, along with beam hardening, poses a significant limitation to image quality in cone-beam CT (CBCT), resulting in image artifacts, contrast reduction, and a lack of CT number accuracy; meanwhile, the x-ray radiation dose is non-negligible. Numerous scatter and beam-hardening correction methods have been developed independently, but they have rarely been combined with low-dose CT reconstruction. In this paper, we combine scatter suppression with beam-hardening correction for sparse-view CT reconstruction to improve image quality and reduce CT radiation. Firstly, scatter was measured, estimated, and removed using measurement-based methods, assuming that the signal in the lead-blocker shadow is attributable only to x-ray scatter. Secondly, beam hardening was modeled by estimating an equivalent attenuation coefficient at the effective energy, which was integrated into the forward projector of the algebraic reconstruction technique (ART). Finally, compressed sensing (CS) iterative reconstruction was carried out for sparse-view CT to reduce the radiation dose. Preliminary Monte Carlo simulation experiments indicate that, with only about 25% of the conventional dose, our method reduces the magnitude of the cupping artifact by a factor of 6.1 and increases the contrast by a factor of 1.4 and the CNR by a factor of 15. The proposed method provides good reconstructed images from a few projection views, with effective suppression of the artifacts caused by scatter and beam hardening, as well as a reduced radiation dose. With this framework and modeling, it may provide a new way toward low-dose CT imaging.
2016-02-01
Neogi T, Jansen TLTA, Dalbeth N, et al. 2015 Gout classification criteria: an American College of Rheumatology/European League Against Rheumatism collaborative initiative. Ann Rheum Dis 2015;74:1789–98. The name of the 20th author was misspelled. The correct spelling is Janitzia Vazquez-Mellado. We regret the error. PMID:26881284
Nguyen, Hung T.; Pabit, Suzette A.; Meisburger, Steve P.; Pollack, Lois; Case, David A.
2014-12-14
A new method is introduced to compute X-ray solution scattering profiles from atomic models of macromolecules. The three-dimensional version of the Reference Interaction Site Model (RISM) from liquid-state statistical mechanics is employed to compute the solvent distribution around the solute, including both water and ions. X-ray scattering profiles are computed from this distribution together with the solute geometry. We describe an efficient procedure for performing this calculation employing a Lebedev grid for the angular averaging. The intensity profiles (which involve no adjustable parameters) match experiment and molecular dynamics simulations up to wide angle for two proteins (lysozyme and myoglobin) in water, as well as the small-angle profiles for a dozen biomolecules taken from the BioIsis.net database. The RISM model is especially well-suited for studies of nucleic acids in salt solution. Use of fiber-diffraction models for the structure of duplex DNA in solution yields close agreement with the observed scattering profiles in both the small and wide angle scattering (SAXS and WAXS) regimes. In addition, computed profiles of anomalous SAXS signals (for Rb⁺ and Sr²⁺) emphasize the ionic contribution to scattering and are in reasonable agreement with experiment. In cases where an absolute calibration of the experimental data at q = 0 is available, one can extract a count of the excess number of waters and ions; computed values depend on the closure that is assumed in the solution of the Ornstein–Zernike equations, with results from the Kovalenko–Hirata closure being closest to experiment for the cases studied here.
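The angular averaging at the heart of any solution-scattering calculation can be illustrated with the classical Debye formula, for which the spherical average over solute orientations is analytic. This sketch covers only the in-vacuo solute term with toy coordinates and form factors; it omits the 3D-RISM solvent contribution that is central to the method above:

```python
import numpy as np

def debye_profile(coords, f, q_values):
    """In-vacuo Debye scattering profile:

        I(q) = sum_ij f_i f_j sin(q r_ij) / (q r_ij)

    i.e. the exact orientational average for point scatterers with
    (q-independent, toy) form factors f.
    """
    r = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    ff = np.outer(f, f)
    profile = []
    for q in q_values:
        qr = q * r
        sinc = np.ones_like(qr)            # sin(x)/x -> 1 as x -> 0
        mask = qr > 0
        sinc[mask] = np.sin(qr[mask]) / qr[mask]
        profile.append(np.sum(ff * sinc))
    return np.array(profile)

# Two unit scatterers one length-unit apart; I(0) equals (sum of f)^2 = 4.
coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
f = np.array([1.0, 1.0])
intensity = debye_profile(coords, f, np.array([0.0, 1.0]))
```

The Debye sum scales quadratically with atom count, which is one motivation for the numerical angular quadrature (the Lebedev grid) used in the paper when the scattering density includes a continuous solvent distribution.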
2016-02-01
In the article by Guessous et al (Guessous I, Pruijm M, Ponte B, Ackermann D, Ehret G, Ansermot N, Vuistiner P, Staessen J, Gu Y, Paccaud F, Mohaupt M, Vogt B, Pechère-Bertschi A, Martin PY, Burnier M, Eap CB, Bochud M. Associations of ambulatory blood pressure with urinary caffeine and caffeine metabolite excretions. Hypertension. 2015;65:691–696. doi: 10.1161/HYPERTENSIONAHA.114.04512), which published online ahead of print December 8, 2014, and appeared in the March 2015 issue of the journal, a correction was needed. One of the author surnames was misspelled. Antoinette Pechère-Berstchi has been corrected to read Antoinette Pechère-Bertschi. The authors apologize for this error. PMID:26763012
TU-F-18C-03: X-Ray Scatter Correction in Breast CT: Advances and Patient Testing
Ramamurthy, S; Sechopoulos, I
2014-06-15
Purpose: To further develop and perform patient testing of an x-ray scatter correction algorithm for dedicated breast computed tomography (BCT). Methods: A previously proposed algorithm for x-ray scatter signal reduction in BCT imaging was modified and tested with a phantom and on patients. A wireless electronic positioner system was designed and added to the BCT system that positions a tungsten plate in and out of the x-ray beam. The interpolation used by the algorithm was replaced with a radial basis function-based algorithm, with automated exclusion of non-valid sampled points due to patient motion or other factors. A 3D adaptive noise reduction filter was also introduced to reduce the impact of scatter quantum noise post-reconstruction. The impact of the improved algorithm on image quality was evaluated using a breast phantom and seven patient breasts, quantitatively using metrics such as signal difference (SD) and signal difference-to-noise ratio (SDNR), and qualitatively using image profiles. Results: The improvements in the algorithm resulted in a more robust interpolation step, with no introduction of image artifacts, especially at the imaged object boundaries, which was an issue in the previous implementation. Qualitative evaluation of the reconstructed slices and corresponding profiles shows excellent homogeneity of both the background and the higher density features throughout the whole imaged object, as well as increased accuracy in the Hounsfield Unit (HU) values of the tissues. Profiles also demonstrate a substantial increase in both SD and SDNR between glandular and adipose regions compared to both the uncorrected and system-corrected images. Conclusion: The improved scatter correction algorithm can be reliably used during patient BCT acquisitions with no introduction of artifacts, resulting in substantial improvement in image quality. Its impact on actual clinical performance needs to be evaluated in the future. Research Agreement, Koning Corp., Hologic
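The interpolation step described above, radial basis functions with automated exclusion of non-valid samples, might look like the following sketch. The Gaussian kernel, the NaN convention for flagging motion-corrupted samples, and the kernel width `eps` are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def rbf_interpolate(xy_samples, values, xy_query, eps=5.0):
    """Gaussian radial-basis-function interpolation of sparse scatter
    samples across the detector plane (sketch).

    NaN values model non-valid samples (e.g. corrupted by patient
    motion) and are excluded from the fit before solving for weights.
    """
    valid = ~np.isnan(values)
    xs, vs = xy_samples[valid], values[valid]

    def kernel(a, b):
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / eps ** 2)

    # small ridge term keeps the linear system well conditioned
    w = np.linalg.solve(kernel(xs, xs) + 1e-9 * np.eye(len(xs)), vs)
    return kernel(xy_query, xs) @ w

# Constant scatter field sampled at five points, one flagged invalid;
# the interpolant reproduces the valid samples.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
vals = np.array([3.0, 3.0, 3.0, 3.0, np.nan])
est = rbf_interpolate(pts, vals, pts[:4])
```

Unlike gridded interpolation, the RBF fit needs no regular sample layout, which is what makes dropping arbitrary invalid points straightforward.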
Park, C G; Ha, B
1995-09-01
Most of the attempts and efforts in cleft lip repair have been directed toward the skin incision. The importance of orbicularis oris muscle repair has been emphasized in recent years. A well-designed skin incision with simple repair of the orbicularis oris muscle has produced considerable improvement in the appearance of the upper lip; however, the repaired upper lip tends to change shape abnormally in motion and to become distorted with age if the orbicularis oris muscle is not repaired precisely and accurately. Following dissection of the normal upper lip and of unilateral cleft lips in cadavers, we identified two different components in the orbicularis oris muscle, a superficial and a deep component. One is a retractor and the other a constrictor of the lip, and they act antagonistically during lip movement. These two components of the muscle can also be identified in the cleft lip patient during operation. We reasoned that an inaccurate, mixed connection between these two functionally different components could leave the repaired lip distorted and unbalanced, worsening during growth. By identifying and separately repairing the two muscular components of the orbicularis oris (i.e., repairing the superficial and deep components on the lateral side to the corresponding components on the medial side), better results in the dynamic and three-dimensional configuration of the upper lip can be achieved, and unfavorable distortion can be avoided as the patient grows.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:7652051
NASA Astrophysics Data System (ADS)
Roger, Michel; Moreau, Stéphane
2005-09-01
A previously published analytical formulation aimed at predicting broadband trailing-edge noise of subsonic airfoils is extended here to account for all the effects due to a limited chord length, and to infer the far-field radiation off the mid-span plane. Three-dimensional gusts are used to simulate the incident aerodynamic wall pressure that is scattered as acoustic waves. A leading-edge back-scattering correction is derived, based on the solution of an equivalent Schwarzschild problem, and added to the original formula. The full solution is found to agree very well with other analytical results based on a vanishing Mach number Green's function tailored to a finite-chord flat plate and sources close to the trailing edge. Furthermore, it is valid for any subsonic ambient mean flow velocity. The back-scattering correction is shown to have a significant effect at lower reduced frequencies, for which the airfoil chord is acoustically compact, and at the transition between supercritical and subcritical gusts. It may be important for small-size airfoils, such as automotive fan blades and similar technologies. The final far-field noise formula can be used to predict trailing-edge noise in an arbitrary configuration, provided that a minimum statistical description of the aerodynamic pressure fluctuations on the airfoil surface close to the trailing edge is available.
Fortmann, Carsten; Wierling, August; Roepke, Gerd
2010-02-15
The dynamic structure factor, which determines the Thomson scattering spectrum, is calculated via an extended Mermin approach that incorporates the dynamical collision frequency as well as the local-field correction factor. This allows a systematic study of how electron-ion collisions, and electron-electron correlations arising from degeneracy and short-range interaction, affect the characteristics of the Thomson scattering signal. In particular, the plasmon dispersion and damping width are calculated for a two-component plasma in which the electron subsystem is completely degenerate. Strong shifts of the plasmon resonance position due to electron-electron correlations are observed at increasing Brueckner parameter r_s. These results are of paramount importance for the interpretation of collective Thomson scattering spectra, since determining the free electron density from the plasmon resonance position requires a precise theory of the plasmon dispersion. Implications of different approximations for the electron-electron correlation, i.e., different forms of the one-component local-field correction, are discussed.
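For reference, the standard Mermin dielectric function underlying this approach reads (shown here in its usual form, without the local-field correction that the extended version incorporates):

```latex
\varepsilon^{M}(k,\omega) \;=\; 1 \;+\;
\frac{\bigl(1 + i\nu/\omega\bigr)\,
      \bigl[\varepsilon^{\mathrm{RPA}}(k,\omega+i\nu)-1\bigr]}
     {1 \;+\; \dfrac{i\nu}{\omega}\,
      \dfrac{\varepsilon^{\mathrm{RPA}}(k,\omega+i\nu)-1}
            {\varepsilon^{\mathrm{RPA}}(k,0)-1}}
```

Here \(\nu\) is the (in the extended approach, frequency-dependent) collision frequency; setting \(\nu \to 0\) recovers the random phase approximation, and the plasmon dispersion follows from the zeros of \(\varepsilon^{M}\), which is why the collision and correlation terms shift the resonance position.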