Aureolegraph internal scattering correction.
DeVore, John; Villanucci, Dennis; LePage, Andrew
2012-11-20
Two methods of determining instrumental scattering for correcting aureolegraph measurements of particulate solar scattering are presented. One involves subtracting measurements made with and without an external occluding ball and the other is a modification of the Langley Plot method and involves extrapolating aureolegraph measurements collected through a large range of solar zenith angles. Examples of internal scattering correction determinations using the latter method show similar power-law dependencies on scattering, but vary by roughly a factor of 8 and suggest that changing aerosol conditions during the determinations render this method problematic. Examples of corrections of scattering profiles using the former method are presented for a range of atmospheric particulate layers from aerosols to cumulus and cirrus clouds. PMID:23207299
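The occluding-ball method described in this abstract amounts to a pointwise subtraction of two angular radiance profiles. A minimal sketch (function names and numbers are illustrative, not from the paper):

```python
import numpy as np

def internal_scatter_estimate(unoccluded, occluded):
    # With the ball blocking the direct solar beam, the occluded reading
    # lacks the instrument's internal scattering of that beam; the
    # difference between the two readings therefore estimates the
    # internal-scattering profile of the aureolegraph.
    return np.asarray(unoccluded, float) - np.asarray(occluded, float)

# toy radiance profiles vs. scattering angle (illustrative numbers)
aureole = np.array([10.0, 5.0, 2.0, 1.0])   # true sky aureole signal
internal = np.array([1.0, 0.5, 0.2, 0.1])   # instrumental scattering
estimate = internal_scatter_estimate(aureole + internal, aureole)
print(estimate)
```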
Accurate adiabatic correction in the hydrogen molecule
Pachucki, Krzysztof; Komasa, Jacek
2014-12-14
A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H₂, HD, HT, D₂, DT, and T₂ has been determined. For the ground state of H₂ the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of present-day theoretical predictions for the rovibrational levels.
Algorithmic scatter correction in dual-energy digital mammography
Chen, Xi; Mou, Xuanqin; Nishikawa, Robert M.; Lau, Beverly A.; Chan, Suk-tak; Zhang, Lei
2013-11-15
background DE calcification signals obtained with scatter-uncorrected data were reduced by 58% when scatter-corrected data from the algorithmic method were used. With the scatter-correction algorithm and denoising, the minimum visible calcification size can be reduced from 380 to 280 μm. Conclusions: When the proposed algorithmic scatter correction is applied to images, the resultant background DE calcification signals can be reduced and the CNR of calcifications can be improved. This method performs similarly to, or even better than, the pinhole-array interpolation method for scatter correction in DEDM; moreover, it is convenient and requires no extra exposure to the patient. Although the proposed scatter correction method is effective, it was validated only with a 5-cm-thick phantom containing calcifications and a homogeneous background. The method should be tested on structured backgrounds to more accurately gauge its effectiveness.
Monte Carlo scatter correction for SPECT
NASA Astrophysics Data System (ADS)
Liu, Zemei
The goal of this dissertation is to present a quantitatively accurate and computationally fast scatter correction method that is robust and easily accessible for routine applications in SPECT imaging. A Monte Carlo based scatter estimation method is investigated and developed further. The Monte Carlo simulation program SIMIND (Simulating Medical Imaging Nuclear Detectors) was specifically developed to simulate clinical SPECT systems. The SIMIND scatter estimation (SSE) method was developed further using a multithreading technique to distribute the scatter estimation task across multiple threads running concurrently on multi-core CPUs to accelerate the scatter estimation process. An analytical collimator model, which reduces noise, was used during SSE. The research includes the addition to SIMIND of charge transport modeling in cadmium zinc telluride (CZT) detectors. Phenomena associated with radiation-induced charge transport, including charge trapping, charge diffusion, charge sharing between neighboring detector pixels, as well as uncertainties in the detection process, are addressed. Experimental measurements and simulation studies were designed for scintillation crystal based SPECT and CZT based SPECT systems to verify and evaluate the expanded SSE method. Jaszczak Deluxe and Anthropomorphic Torso Phantoms (Data Spectrum Corporation, Hillsborough, NC, USA) were used for experimental measurements, and digital versions of the same phantoms were employed during simulations to mimic experimental acquisitions. This study design enabled easy comparison of experimental and simulated data. The results have consistently shown that the SSE method performed similarly to or better than the triple energy window (TEW) and effective scatter source estimation (ESSE) methods for experiments on all the clinical SPECT systems. The SSE method has proven to be a viable method for scatter estimation in routine clinical use.
Asymmetric scatter kernels for software-based scatter correction of gridless mammography
NASA Astrophysics Data System (ADS)
Wang, Adam; Shapiro, Edward; Yoon, Sungwon; Ganguly, Arundhuti; Proano, Cesar; Colbeth, Rick; Lehto, Erkki; Star-Lack, Josh
2015-03-01
Scattered radiation remains one of the primary challenges for digital mammography, reducing image contrast and impairing visualization of key features. While anti-scatter grids are commonly used to reduce scattered radiation in digital mammography, they are an incomplete solution that can add radiation dose, cost, and complexity. Instead, a software-based scatter correction method utilizing asymmetric scatter kernels is developed and evaluated in this work, which improves upon conventional symmetric kernels by adapting to local variations in object thickness and attenuation that result from the heterogeneous nature of breast tissue. This fast adaptive scatter kernel superposition (fASKS) method was applied to mammography by generating scatter kernels specific to the object size, x-ray energy, and system geometry of the projection data. The method was first validated with Monte Carlo simulation of a statistically defined digital breast phantom, followed by initial validation on phantom studies conducted on a clinical mammography system. Results from the Monte Carlo simulation demonstrate excellent agreement between the estimated and true scatter signal, resulting in accurate scatter correction and recovery of 87% of the image contrast originally lost to scatter. Additionally, the asymmetric kernel provided more accurate scatter correction than the conventional symmetric kernel, especially at the edge of the breast. Results from the phantom studies on a clinical system further validate the ability of the asymmetric kernel correction method to accurately subtract the scatter signal and improve image quality. In conclusion, software-based scatter correction for mammography is a promising alternative to hardware-based approaches such as anti-scatter grids.
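The kernel-superposition idea behind fASKS can be illustrated with its simpler symmetric-kernel ancestor: estimate scatter as a blurred, scaled copy of the projection, then subtract it. A hedged toy sketch (the Gaussian kernel, sigma, and scatter fraction are illustrative assumptions; the paper's actual kernels are asymmetric and object-adaptive):

```python
import numpy as np

def gaussian_kernel(shape, sigma):
    # normalized 2-D Gaussian, centered in the array
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    g = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def kernel_scatter_correct(projection, sigma=3.0, scatter_fraction=0.3):
    # Symmetric-kernel superposition: scatter = fraction * (projection (*) kernel),
    # computed here by circular FFT convolution; then subtract the estimate.
    kern = gaussian_kernel(projection.shape, sigma)
    scatter = scatter_fraction * np.real(
        np.fft.ifft2(np.fft.fft2(projection) * np.fft.fft2(np.fft.ifftshift(kern))))
    return projection - scatter

flat = np.ones((16, 16))              # flat projection: scatter is uniform
corrected = kernel_scatter_correct(flat)
print(corrected[0, 0])                # 1.0 minus the 0.3 scatter estimate
```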
Onboard Autonomous Corrections for Accurate IRF Pointing.
NASA Astrophysics Data System (ADS)
Jorgensen, J. L.; Betto, M.; Denver, T.
2002-05-01
filtered GPS updates, a world time clock, astrometric correction tables, and an attitude output transform system, which allow the ASC to deliver the spacecraft attitude relative to the Inertial Reference Frame (IRF) in realtime. This paper describes the operations of the onboard autonomy of the ASC, which in realtime removes the residuals from the attitude measurements, whereby a timely IRF attitude at the arcsecond level is delivered to the AOCS (or sent to ground). A discussion of achievable robustness and accuracy is given and compared to inflight results from the operations of the two Advanced Stellar Compasses (ASC) flying in LEO onboard the German geo-potential research satellite CHAMP. The ASCs onboard CHAMP are dual head versions, i.e. each processing unit is attached to two star camera heads. The dual head configuration is primarily employed to achieve carefree AOCS control with respect to the Sun, Moon and Earth, and to increase the attitude accuracy, but it also enables onboard estimation and removal of thermally generated biases.
Scatter corrections for cone beam optical CT
NASA Astrophysics Data System (ADS)
Olding, Tim; Holmes, Oliver; Schreiner, L. John
2009-05-01
Cone beam optical computed tomography (OptCT) employing the VISTA scanner (Modus Medical, London, ON) has been shown to have significant promise for fast, three dimensional imaging of polymer gel dosimeters. One distinct challenge with this approach arises from the combination of the cone beam geometry, a diffuse light source, and the scattering polymer gel media, which all contribute scatter signal that perturbs the accuracy of the scanner. Beam stop array (BSA), beam pass array (BPA), and anti-scatter polarizer correction methodologies have been employed to remove scatter signal from OptCT data. These approaches are investigated through the use of well-characterized phantom scattering solutions and irradiated polymer gel dosimeters. BSA-corrected scattering solutions show good agreement in attenuation coefficient with the optically absorbing dye solutions, with considerable reduction of the scatter-induced cupping artifact at high scattering concentrations. The application of BSA scatter corrections to a polymer gel dosimeter reduced the fraction of pixels failing the (3%, 3 mm) gamma criterion from 7.8% to 0.15%.
Scattering corrections in neutron radiography using point scattered functions
NASA Astrophysics Data System (ADS)
Kardjilov, N.; de Beer, F.; Hassanein, R.; Lehmann, E.; Vontobel, P.
2005-04-01
Scattered neutrons cause distortions and blurring in neutron radiography images taken at small distances between the investigated object and the detector. This is one of the most significant problems in quantitative neutron radiography. The quantification of strongly scattering materials such as hydrogenous materials (water, oil, plastic, etc.) with high precision is very difficult due to the scattering effect in the radiography images. The scattering contribution in liquid test samples (H₂O, D₂O, and a special oil, ISOPAR L) at different distances between the samples and the detector, the so-called Point Scattered Function (PScF), was calculated with the help of the MCNP-4C Monte Carlo code. Corrections of real experimental data were performed using the calculated PScF. Some of the results, as well as the correction algorithm, will be presented.
Accurately Detecting Students' Lies regarding Relational Aggression by Correctional Instructions
ERIC Educational Resources Information Center
Dickhauser, Oliver; Reinhard, Marc-Andre; Marksteiner, Tamara
2012-01-01
This study investigates the effect of correctional instructions when detecting lies about relational aggression. Based on models from the field of social psychology, we predict that correctional instruction will lead to a less pronounced lie bias and to more accurate lie detection. Seventy-five teachers received videotapes of students' true denial…
Dispersion corrections to parity violating electron scattering
Gorchtein, M.; Horowitz, C. J.; Ramsey-Musolf, M. J.
2010-08-04
We consider the dispersion correction to elastic parity-violating electron-proton scattering due to γZ exchange. In a recent publication, this correction was reported to be substantially larger than previous estimates. In this paper, we study the dispersion correction in greater detail. We confirm the size of the dispersion correction to be ≈6% for the QWEAK experiment designed to measure the proton weak charge. We enumerate the parameters that have to be constrained to better than 30% (relative) in order to keep the theoretical uncertainty for QWEAK under control.
Correction of sunspot intensities for scattered light
NASA Technical Reports Server (NTRS)
Mullan, D. J.
1973-01-01
Correction of sunspot intensities for scattered light usually involves fitting theoretical curves to observed aureoles (Zwaan, 1965; Staveland, 1970, 1972). In this paper we examine the inaccuracies in the determination of scattered light by this method. Earlier analyses are extended to examine uncertainties due to the choice of the expression for limb darkening. For the spread function, we consider Lorentzians and Gaussians for which analytic expressions for the aureole can be written down. Lorentzians lead to divergence and normalization difficulties, and should not be used in scattered light determinations. Gaussian functions are more suitable.
Quadratic electroweak corrections for polarized Moller scattering
Aleksejevs, A.; Barkanova, S.; Kolomensky, Y.; Kuraev, E.; Zykunov, V.
2012-01-01
The paper discusses the two-loop (NNLO) electroweak radiative corrections to the parity violating electron-electron scattering asymmetry induced by squaring one-loop diagrams. The calculations are relevant for the ultra-precise 11 GeV MOLLER experiment planned at Jefferson Laboratory and experiments at high-energy future electron colliders. The imaginary parts of the amplitudes are taken into consideration consistently in both the infrared-finite and divergent terms. The size of the obtained partial correction is significant, which indicates a need for a complete study of the two-loop electroweak radiative corrections in order to meet the precision goals of future experiments.
Atmospheric scattering corrections to solar radiometry
NASA Technical Reports Server (NTRS)
Box, M. A.; Deepak, A.
1979-01-01
Whenever a solar radiometer is used to measure direct solar radiation, some diffuse sky radiation invariably enters the detector's field of view along with the direct beam. Therefore, the atmospheric optical depth obtained by the use of Bouguer's transmission law (also called the Beer-Lambert law), which is valid only for direct radiation, needs to be corrected by taking account of the scattered radiation. This paper discusses the correction factors needed to account for the diffuse (i.e., singly and multiply scattered) radiation and the algorithms developed for retrieving aerosol size distribution from such measurements. For a radiometer with a small field of view (half-cone angle of less than 5 deg) and relatively clear skies (optical depths less than 0.4), it is shown that the total diffuse contribution represents approximately 1% of the total intensity.
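The Bouguer (Beer-Lambert) retrieval with a small diffuse-light correction can be sketched as follows (the 1% diffuse fraction echoes the abstract's clear-sky estimate for a narrow field of view; function and variable names are illustrative):

```python
import numpy as np

def optical_depth(I_measured, I_top, airmass, diffuse_fraction=0.01):
    # Beer-Lambert: I_direct = I_top * exp(-tau * airmass).
    # First-order correction: remove the diffuse sky light that entered
    # the field of view before inverting for the optical depth tau.
    I_direct = I_measured * (1.0 - diffuse_fraction)
    return -np.log(I_direct / I_top) / airmass

# round trip at airmass 2 with true tau = 0.2 and 1% diffuse contamination
I0, m, tau_true = 1000.0, 2.0, 0.2
I_meas = I0 * np.exp(-tau_true * m) / (1.0 - 0.01)  # direct beam plus diffuse part
tau = optical_depth(I_meas, I0, m)
print(round(tau, 6))  # recovers 0.2
```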
An Accurate Temperature Correction Model for Thermocouple Hygrometers
Savage, Michael J.; Cass, Alfred; de Jager, James M.
1982-01-01
Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241
SPECT Compton-scattering correction by analysis of energy spectra.
Koral, K F; Wang, X Q; Rogers, W L; Clinthorne, N H; Wang, X H
1988-02-01
The hypothesis that energy spectra at individual spatial locations in single photon emission computed tomographic projection images can be analyzed to separate the Compton-scattered component from the unscattered component is tested indirectly. An axially symmetric phantom consisting of a cylinder with a sphere is imaged with either the cylinder or the sphere containing 99mTc. An iterative peak-erosion algorithm and a fitting algorithm are given and employed to analyze the acquired spectra. Adequate separation into an unscattered component and a Compton-scattered component is judged on the basis of filtered-backprojection reconstruction of corrected projections. In the reconstructions, attenuation correction is based on the known geometry and the total attenuation cross section for water. An independent test of the accuracy of separation is not made. For both algorithms, reconstructed slices for the cold-sphere, hot-surround phantom have the correct shape, as confirmed by simulation results that take into account the measured dependence of system resolution on depth. For the inverse phantom, a hot sphere in a cold surround, quantitative results with the fitting algorithm are accurate, but those obtained with a particular number of iterations of the erosion algorithm are less so. (A greater number of iterations would, however, reduce the 26% error of that algorithm.) These preliminary results encourage us to believe that a method for correcting for Compton scattering in a wide variety of objects can be found, thus helping to achieve quantitative SPECT. PMID:3258023
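A much-simplified stand-in for the fitting idea: if template shapes for the unscattered and Compton-scattered spectral components are known, their amplitudes in a measured spectrum follow from linear least squares. Toy sketch (the template shapes and numbers are illustrative, not the paper's actual algorithm):

```python
import numpy as np

def separate_spectrum(measured, primary_shape, scatter_shape):
    # Least-squares decomposition of a measured energy spectrum into
    # unscattered (primary) and Compton-scattered components, given
    # fixed template shapes for each.
    A = np.column_stack([primary_shape, scatter_shape])
    coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)
    return coeffs  # [primary_amplitude, scatter_amplitude]

E = np.linspace(100.0, 160.0, 61)                      # keV bins near 140 keV
primary = np.exp(-(E - 140.0) ** 2 / (2 * 6.0 ** 2))   # Gaussian photopeak
scatter = np.exp(-(E - 110.0) / 15.0) * (E < 140)      # low-energy tail (toy)
measured = 100.0 * primary + 40.0 * scatter            # noiseless mixture
a_p, a_s = separate_spectrum(measured, primary, scatter)
print(round(a_p, 3), round(a_s, 3))  # recovers the 100 and 40 amplitudes
```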
Accurate Development of Thermal Neutron Scattering Cross Section Libraries
Hawari, Ayman; Dunn, Michael
2014-06-10
The objective of this project is to develop a holistic (fundamental and accurate) approach for generating thermal neutron scattering cross section libraries for a collection of important neutron moderators and reflectors. The primary components of this approach are the physical accuracy and completeness of the generated data libraries. Consequently, for the first time, thermal neutron scattering cross section data libraries will be generated that are based on accurate theoretical models, that are carefully benchmarked against experimental and computational data, and that contain complete covariance information that can be used in propagating the data uncertainties through the various components of the nuclear design and execution process. To achieve this objective, computational and experimental investigations will be performed on a carefully selected subset of materials that play a key role in all stages of the nuclear fuel cycle.
NASA Astrophysics Data System (ADS)
Jo, Byung-Du; Lee, Young-Jin; Kim, Dae-Hong; Jeon, Pil-Hyun; Kim, Hee-Joung
2014-03-01
In conventional digital radiography (DR) using a dual energy subtraction technique, a significant fraction of the detected photons are scattered within the body, resulting in the scatter component. Scattered radiation can significantly deteriorate image quality in diagnostic X-ray imaging systems. Various methods of scatter correction, including both measurement and non-measurement-based methods, have been proposed in the past. Both methods can reduce scatter artifacts in images. However, non-measurement-based methods assume a homogeneous object and provide insufficient correction of the scatter component. Therefore, we employed a measurement-based method to correct for the scatter component of inhomogeneous objects in dual energy DR (DEDR) images. We performed a simulation study using a Monte Carlo simulation with a primary modulator, which is a measurement-based method, for the DEDR system. The primary modulator, which has a checkerboard pattern, was used to modulate primary radiation. Cylindrical phantoms of variable size were used to quantify imaging performance. For scatter estimation, we used Discrete Fourier Transform filtering. The primary modulation method was evaluated using a cylindrical phantom in the DEDR system. The scatter components were accurately removed using a primary modulator. When the results acquired with scatter correction and without correction were compared, the average contrast-to-noise ratio (CNR) with the correction was 1.35 times higher than that obtained without correction, and the average root mean square error (RMSE) with the correction was 38.00% better than that without correction. In the subtraction study, the average CNR with correction was 2.04 (aluminum subtraction) and 1.38 (polymethyl methacrylate (PMMA) subtraction) times higher than that obtained without the correction. The analysis demonstrated the accuracy of scatter correction and the improvement of image quality using a primary modulator and showed the feasibility of
Monte Carlo-based down-scatter correction of SPECT attenuation maps.
Bokulić, Tomislav; Vastenhouw, Brendan; de Jong, Hugo W A M; van Dongen, Alice J; van Rijk, Peter P; Beekman, Freek J
2004-08-01
Combined acquisition of transmission and emission data in single-photon emission computed tomography (SPECT) can be used for correction of non-uniform photon attenuation. However, down-scatter from a higher energy isotope (e.g. 99mTc) contaminates lower energy transmission data (e.g. 153Gd, 100 keV), resulting in underestimation of reconstructed attenuation coefficients. Window-based corrections are often not very accurate and increase noise in attenuation maps. We have developed a new correction scheme. It uses accurate scatter modelling to avoid noise amplification and does not require additional energy windows. The correction works as follows: Initially, an approximate attenuation map is reconstructed using down-scatter contaminated transmission data (step 1). An emission map is reconstructed based on the contaminated attenuation map (step 2). Based on this approximate 99mTc reconstruction and attenuation map, down-scatter in the 153Gd window is simulated using accelerated Monte Carlo simulation (step 3). This down-scatter estimate is used during reconstruction of a corrected attenuation map (step 4). Based on the corrected attenuation map, an improved 99mTc image is reconstructed (step 5). Steps 3-5 are repeated to incrementally improve the down-scatter estimate. The Monte Carlo simulator provides accurate down-scatter estimation with significantly less noise than down-scatter estimates acquired in an additional window. Errors in the reconstructed attenuation coefficients are reduced from ca. 40% to less than 5%. Furthermore, artefacts in 99mTc emission reconstructions are almost completely removed. These results are better than for window-based correction, both in simulation experiments and in physical phantom experiments. Monte Carlo down-scatter simulation in concert with statistical reconstruction provides accurate down-scatter correction of attenuation maps. PMID:15034678
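The iterative scheme (steps 1-5) can be reduced to a skeleton in which a toy one-line "reconstruction" and a user-supplied down-scatter simulator stand in for the paper's statistical reconstruction and accelerated Monte Carlo step. A hedged sketch, not the authors' implementation:

```python
import numpy as np

def correct_attenuation(transmission, emission, simulate_downscatter, n_iter=3):
    # Steps 1-5 in outline: reconstruct an attenuation map from (initially
    # contaminated) transmission data, simulate the down-scatter it implies,
    # subtract that estimate from the transmission data, and repeat.
    scatter = np.zeros_like(transmission)
    mu = None
    for _ in range(n_iter):
        clean = transmission - scatter                # down-scatter-corrected data
        mu = -np.log(clean / clean.max())             # toy attenuation "map"
        scatter = simulate_downscatter(mu, emission)  # refresh scatter estimate
    return mu

emission = np.array([50.0, 20.0, 10.0])
T_true = np.array([1000.0, 500.0, 250.0])
T_meas = T_true + 0.1 * emission                      # contaminated transmission

def sim(mu, em):
    return 0.1 * em  # toy simulator: known, mu-independent down-scatter yield

mu = correct_attenuation(T_meas, emission, sim)
print(mu)  # attenuation values recovered from the cleaned transmission data
```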
Using BRDFs for accurate albedo calculations and adjacency effect corrections
Borel, C.C.; Gerstl, S.A.W.
1996-09-01
In this paper the authors discuss two uses of BRDFs in remote sensing: (1) in determining the clear sky top of the atmosphere (TOA) albedo, (2) in quantifying the effect of the BRDF on the adjacency point-spread function and on atmospheric corrections. The TOA spectral albedo is an important parameter retrieved by the Multi-angle Imaging Spectro-Radiometer (MISR). Its accuracy depends mainly on how well one can model the surface BRDF for many different situations. The authors present results from an algorithm which matches several semi-empirical functions to the nine MISR measured BRFs that are then numerically integrated to yield the clear sky TOA spectral albedo in four spectral channels. They show that absolute accuracies in the albedo of better than 1% are possible for the visible and better than 2% in the near infrared channels. Using a simplified extensive radiosity model, the authors show that the shape of the adjacency point-spread function (PSF) depends on the underlying surface BRDFs. The adjacency point-spread function at a given offset (x,y) from the center pixel is given by the integral of transmission-weighted products of BRDF and scattering phase function along the line of sight.
Correction to Molière's formula for multiple scattering
NASA Astrophysics Data System (ADS)
Lee, R. N.; Milstein, A. I.
2009-06-01
The semiclassical correction to Molière’s formula for multiple scattering is derived. The consideration is based on the scattering amplitude obtained with the first semiclassical correction taken into account for an arbitrary localized but not spherically symmetric potential. Unlike the leading term, the correction to Molière’s formula contains the target density n and thickness L not only in the combination nL (areal density). Therefore, this correction can be referred to as the bulk density correction. It turns out that the bulk density correction is small even for high density. This result explains the wide range of applicability of Molière’s formula.
A spectrally accurate algorithm for electromagnetic scattering in three dimensions
NASA Astrophysics Data System (ADS)
Ganesh, M.; Hawkins, S.
2006-09-01
In this work we develop, implement and analyze a high-order spectrally accurate algorithm for computation of the echo area, and monostatic and bistatic radar cross-section (RCS) of a three dimensional perfectly conducting obstacle through simulation of the time-harmonic electromagnetic waves scattered by the conductor. Our scheme is based on a modified boundary integral formulation (of the Maxwell equations) that is tolerant to basis functions that are not tangential on the conductor surface. We test our algorithm with extensive computational experiments using a variety of three dimensional perfect conductors described in spherical coordinates, including benchmark radar targets such as the metallic NASA almond and ogive. The monostatic RCS measurements for non-convex conductors require hundreds of incident waves (boundary conditions). We demonstrate that the monostatic RCS of small (to medium) sized conductors can be computed using over one thousand incident waves within a few minutes (to a few hours) of CPU time. We compare our results with those obtained using method of moments based industrial standard three dimensional electromagnetic codes CARLOS, CICERO, FE-IE, FERM, and FISC. Finally, we prove the spectrally accurate convergence of our algorithm for computing the surface current, far-field, and RCS values of a class of conductors described globally in spherical coordinates.
Novel scatter compensation of list-mode PET data using spatial and energy dependent corrections
Guérin, Bastien
2011-01-01
With the widespread use of PET crystals with greatly improved energy resolution (e.g., 11.5% with LYSO as compared to 20% with BGO) and of list-mode acquisitions, the use of the energy of individual events in scatter correction schemes becomes feasible. We propose a novel scatter correction approach that incorporates the energy of individual photons in the scatter correction and reconstruction of list-mode PET data, in addition to the spatial information presently used in clinical scanners. First, we rewrite the Poisson likelihood function of list-mode PET data including the energy distributions of primary and scatter coincidences and show that this expression yields an MLEM reconstruction algorithm containing both energy and spatial dependent corrections. To estimate the spatial distribution of scatter coincidences we use the single scatter simulation (SSS). Next, we derive two new formulae which allow estimation of the 2D (coincidences) energy probability density functions (E-PDF) of primary and scatter coincidences from the 1D (photons) E-PDFs associated with each photon. We also describe an accurate and robust object-specific method for estimating these 1D E-PDFs based on a decomposition of the total energy spectra detected across the scanner into primary and scattered components. Finally, we show that the energy information can be used to accurately normalize the scatter sinogram to the data. We compared the performance of this novel scatter correction incorporating both the position and energy of detected coincidences to that of the traditional approach modeling only the spatial distribution of scatter coincidences in 3D Monte Carlo simulations of a medium cylindrical phantom and a large, non-uniform NCAT phantom. Incorporating the energy information in the scatter correction decreased bias in the activity distribution estimation by ~20% and ~40% in the cold regions of the large NCAT phantom at energy resolutions of 11.5% and 20% at 511 keV, respectively, compared to when
Practical correction procedures for elastic electron scattering effects in ARXPS
NASA Astrophysics Data System (ADS)
Lassen, T. S.; Tougaard, S.; Jablonski, A.
2001-06-01
Angle-resolved XPS and AES (ARXPS and ARAES) are widely used for determining the in-depth distribution of elements in the surface region of solids. It is well known that elastic electron scattering has a significant effect on the intensity as a function of emission angle, and hence a great influence on the overlayer thicknesses determined by this method. Nevertheless, the procedures commonly applied in ARXPS and ARAES neglect this effect because no simple and practical correction procedure has been available. Recently, however, new algorithms have been suggested. In this paper, we study the efficiency of these algorithms in correcting for elastic scattering effects in the interpretation of ARXPS and ARAES. This is done by first calculating electron distributions by Monte Carlo simulations for well-defined overlayer/substrate systems and then applying the different algorithms. We have found that an analytical formula based on a solution of the Boltzmann transport equation accounts well for elastic scattering effects. However, this procedure is computationally very slow and the underlying algorithm is complicated. Another, much simpler algorithm, proposed by Nefedov and coworkers, was also tested. Three different ways of handling the scattering parameters within this model were tested, and it was found that this algorithm also gives a good description of elastic scattering effects provided that it is slightly modified to take into account the differences in the transport properties of the substrate and the overlayer. This procedure is fairly simple and is described in detail. The model gives a much more accurate description than the traditional straight-line approximation (SLA). However, it is also found that when attenuation lengths are used instead of inelastic mean free paths in the simple SLA formalism, the effects of elastic scattering are also reasonably well accounted for. Specifically, from a systematic study of several
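For context, the straight-line approximation the abstract refers to is a single exponential attenuation law; using the attenuation length (EAL) in place of the inelastic mean free path is what partly compensates for elastic scattering. A minimal sketch (numbers are illustrative):

```python
import numpy as np

def overlayer_thickness(intensity_ratio, attenuation_length, emission_angle_deg):
    # Straight-line approximation (SLA): I/I0 = exp(-t / (L * cos(theta))),
    # inverted for the overlayer thickness t. Per the abstract, choosing L
    # as the attenuation length (EAL) rather than the IMFP partly accounts
    # for elastic scattering within this simple formalism.
    cos_t = np.cos(np.radians(emission_angle_deg))
    return -attenuation_length * cos_t * np.log(intensity_ratio)

# toy round trip: a 2.0 nm film, EAL 2.5 nm, 45 degree emission angle
lam, t_true, theta = 2.5, 2.0, 45.0
ratio = np.exp(-t_true / (lam * np.cos(np.radians(theta))))
t_est = overlayer_thickness(ratio, lam, theta)
print(round(t_est, 6))  # recovers 2.0
```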
Accurate source location from P waves scattered by surface topography
NASA Astrophysics Data System (ADS)
Wang, N.; Shen, Y.
2015-12-01
Accurate source locations of earthquakes and other seismic events are fundamental in seismology. Location accuracy is limited by several factors, including velocity models, which are often poorly known. In contrast, surface topography, the largest velocity contrast in the Earth, is often precisely mapped at the seismic wavelength (>100 m). In this study, we explore the use of P-coda waves generated by scattering at surface topography to obtain high-resolution locations of near-surface seismic events. The Pacific Northwest region is chosen as an example. The grid search method is combined with a 3D strain Green's tensor database to improve the search efficiency as well as the quality of the hypocenter solution. The strain Green's tensor is calculated by a 3D collocated-grid finite difference method on curvilinear grids. Solutions in the search volume are then obtained from the least-squares misfit between the 'observed' and predicted P and P-coda waves. A 95% confidence interval of the solution is also provided as a posterior error estimation. We find that the scattered waves are mainly due to topography, in comparison with random velocity heterogeneity characterized by a von Kármán-type power spectral density function. When only P-wave data are used, the 'best' solution is offset from the real source location, mostly in the vertical direction. The incorporation of P coda significantly improves solution accuracy and reduces its uncertainty. The solution remains robust over a range of random noise in the data, unmodeled random velocity heterogeneities, and uncertainties in moment tensors that we tested.
Accurate source location from waves scattered by surface topography
NASA Astrophysics Data System (ADS)
Wang, Nian; Shen, Yang; Flinders, Ashton; Zhang, Wei
2016-06-01
Accurate source locations of earthquakes and other seismic events are fundamental in seismology. The location accuracy is limited by several factors, including velocity models, which are often poorly known. In contrast, surface topography, the largest velocity contrast in the Earth, is often precisely mapped at the seismic wavelength (>100 m). In this study, we explore the use of P coda waves generated by scattering at surface topography to obtain high-resolution locations of near-surface seismic events. The Pacific Northwest region is chosen as an example to provide realistic topography. A grid search algorithm is combined with the 3-D strain Green's tensor database to improve search efficiency as well as the quality of hypocenter solutions. The strain Green's tensor is calculated using a 3-D collocated-grid finite difference method on curvilinear grids. Solutions in the search volume are obtained based on the least squares misfit between the "observed" and predicted P and P coda waves. The 95% confidence interval of the solution is provided as an a posteriori error estimation. For the shallow events tested in the study, scattering is mainly due to topography in comparison with stochastic lateral velocity heterogeneity. The incorporation of P coda significantly improves solution accuracy and reduces solution uncertainty. The solution remains robust over wide ranges of random noise in the data, unmodeled random velocity heterogeneities, and uncertainties in moment tensors. The method can be extended to locate pairs of sources in close proximity by differential waveforms using source-receiver reciprocity, further reducing errors caused by unmodeled velocity structures.
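The grid-search location scheme described in the abstract above can be sketched in miniature: from a set of trial hypocenters, pick the one whose predicted waveform minimizes the least-squares misfit to the observed record. The sketch below is illustrative only; the function name, toy waveforms, and candidate list are invented, and the real method uses 3-D strain Green's tensor synthetics summed over stations and over both P and P-coda windows.

```python
import numpy as np

def locate_source(candidates, predicted, observed):
    """Grid search: return the candidate hypocenter whose predicted waveform
    minimizes the least-squares misfit to the observed waveform."""
    misfits = np.array([np.sum((predicted[i] - observed) ** 2)
                        for i in range(len(candidates))])
    best = int(np.argmin(misfits))
    return candidates[best], misfits

# Toy example: three trial depths; the second trial matches the data best.
t = np.linspace(0.0, 1.0, 200)
observed = np.sin(2 * np.pi * 5 * t)
predicted = {
    0: np.sin(2 * np.pi * 5 * t + 0.5),   # mislocated: phase-shifted arrival
    1: np.sin(2 * np.pi * 5 * t),         # correct location
    2: 0.7 * np.sin(2 * np.pi * 5 * t),   # wrong depth: amplitude mismatch
}
candidates = [(0, 0, 1.0), (0, 0, 2.0), (0, 0, 3.0)]
best, misfits = locate_source(candidates, predicted, observed)
print(best)  # (0, 0, 2.0)
```

A confidence region like the paper's 95% interval would be derived from the shape of this misfit surface around the minimum.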
NASA Astrophysics Data System (ADS)
Jo, Byung-Du; Lee, Young-Jin; Kim, Dae-Hong; Kim, Hee-Joung
2014-08-01
In conventional digital radiography (DR) using a dual-energy subtraction technique, a significant fraction of the detected photons are scattered within the body, making up the scatter component. Scattered radiation can significantly deteriorate image quality in diagnostic X-ray imaging systems. Various methods of scatter correction, including both measurement- and non-measurement-based methods, have been proposed in the past. Both can reduce scatter artifacts in images, but non-measurement-based methods require a homogeneous object and correct the scatter component insufficiently. Therefore, we employed a measurement-based method to correct for the scatter component of inhomogeneous objects in dual-energy DR (DEDR) images. We performed a Monte Carlo simulation study with a primary modulator, a measurement-based method, for the DEDR system. The primary modulator, which has a checkerboard pattern, was used to modulate the primary radiation. Cylindrical phantoms of variable size were used to quantify the imaging performance. For scatter estimation, we used discrete Fourier transform filtering, e.g., Gaussian low- and high-pass filters with a cut-off frequency. The primary modulation method was evaluated using a cylindrical phantom in the DEDR system. The scatter components were accurately removed using the primary modulator. When the results acquired with and without scatter correction were compared, the average contrast-to-noise ratio (CNR) with the correction was 1.35 times higher than that obtained without it, and the average root mean square error (RMSE) with the correction was 38.00% better than that without it. In the subtraction study, the average CNR with the correction was 2.04 (aluminum subtraction) and 1.38 (polymethyl methacrylate (PMMA) subtraction) times higher than that obtained without the correction. The analysis demonstrated the accuracy of the scatter correction and the
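As a rough illustration of the primary-modulator idea (in 1-D, with a cosine modulator standing in for the checkerboard pattern, and all names and parameters invented): the modulation shifts a copy of the primary spectrum up to the modulation frequency, so demodulating and Fourier low-pass filtering recovers the primary, which can then be subtracted to leave the smooth scatter.

```python
import numpy as np

def estimate_scatter(detected, x, f_mod, a=0.5, b=0.5, cutoff=20.0):
    """Separate smooth scatter from a 1-D profile acquired with a cosine
    primary modulator:  detected = primary*(a + b*cos(2*pi*f_mod*x)) + scatter."""
    n = x.size
    freqs = np.fft.rfftfreq(n, d=x[1] - x[0])

    def lowpass(signal):
        spec = np.fft.rfft(signal)
        spec[freqs > cutoff] = 0.0          # crude brick-wall low-pass
        return np.fft.irfft(spec, n)

    # Demodulate: the sideband at f_mod carries (b/2)*primary at baseband.
    primary = lowpass(detected * np.cos(2 * np.pi * f_mod * x)) * 2.0 / b
    scatter = lowpass(detected) - a * primary
    return primary, scatter

# Synthetic test: smooth primary and scatter, 100-cycle modulation.
x = np.linspace(0.0, 1.0, 1024, endpoint=False)
primary_true = np.exp(-((x - 0.5) / 0.1) ** 2)
scatter_true = 0.3 * (1.0 + 0.5 * np.cos(2 * np.pi * x))
detected = primary_true * (0.5 + 0.5 * np.cos(2 * np.pi * 100.0 * x)) + scatter_true
primary, scatter = estimate_scatter(detected, x, f_mod=100.0)
print(np.max(np.abs(scatter - scatter_true)) < 0.02)  # True
```

The real 2-D method works analogously in the Fourier domain of the detector image, then interpolates the scatter estimate back to full field.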
Quantitative fully 3D PET via model-based scatter correction
Ollinger, J.M.
1994-05-01
We have investigated the quantitative accuracy of fully 3D PET using model-based scatter correction by measuring the half-life of Ga-68 in the presence of scatter from F-18. The inner chamber of a Data Spectrum cardiac phantom was filled with 18.5 MBq of Ga-68. The outer chamber was filled with an equivalent amount of F-18. The cardiac phantom was placed in a 22 x 30.5 cm elliptical phantom containing anthropomorphic lung inserts filled with a water-Styrofoam mixture. Ten frames of dynamic data were collected over 13.6 hours on a Siemens-CTI 953B scanner with the septa retracted. The data were corrected using model-based scatter correction, which uses the emission images, transmission images, and an accurate physical model to directly calculate the scatter distribution. Both uncorrected and corrected data were reconstructed using the Promis algorithm. The scatter correction required 4.3% of the total reconstruction time. The scatter fraction in a small volume of interest in the center of the inner chamber of the cardiac insert rose from 4.0% in the first interval to 46.4% in the last interval as the ratio of F-18 activity to Ga-68 activity rose from 1:1 to 33:1. Fitting a single exponential to the last three data points yields estimates of the half-life of Ga-68 of 77.01 minutes for uncorrected data and 68.79 minutes for corrected data. Thus, scatter correction reduces the error from 13.3% to 1.2%. This suggests that model-based scatter correction is accurate in the heterogeneous attenuating medium found in the chest, making quantitative, fully 3D PET in the body possible.
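The half-life check used in this validation reduces to fitting a single exponential to late time points, which can be done by a log-linear least-squares fit. A minimal sketch with synthetic counts and an invented function name:

```python
import numpy as np

def fit_half_life(times_min, counts):
    """Estimate half-life from a log-linear least-squares fit of a single
    exponential N(t) = N0 * exp(-lambda * t)."""
    slope, _intercept = np.polyfit(times_min, np.log(counts), 1)
    return np.log(2) / (-slope)

# Synthetic Ga-68 decay (true half-life ~67.7 min), fit to three late points
# as in the abstract's procedure.
true_t12 = 67.7
t = np.array([600.0, 700.0, 800.0])
counts = 1e4 * np.exp(-np.log(2) / true_t12 * t)
print(round(fit_half_life(t, counts), 1))  # 67.7
```

With residual scatter from the longer-lived F-18 contaminating the counts, this fit is biased high, which is exactly the 77.01 min vs 68.79 min difference the abstract reports.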
Accurate scatter compensation using neural networks in radionuclide imaging
Ogawa, Koichi; Nishizaki, N. (Dept. of Electrical Engineering)
1993-08-01
The paper presents a new method to estimate primary photons using an artificial neural network in radionuclide imaging. The neural network for 99mTc had three layers, i.e., one input layer with five units, one hidden layer with five units, and one output layer with two units. As input values to the input units, the authors used count ratios, i.e., the ratios of the counts acquired by narrow windows to the total count acquired by a broad window covering the energy range from 125 to 154 keV. The outputs were a scatter count ratio and a primary count ratio. Using the primary count ratio and the total count, they calculated the primary count of the pixel directly. The neural network was trained with a back-propagation algorithm using true energy spectra calculated by a Monte Carlo method. The simulation showed that an accurate estimation of primary photons was accomplished within an error ratio of 5%.
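The network's input features can be illustrated with a toy spectrum. The sketch below (invented names, uniform counts, arbitrary narrow windows) just computes the narrow-window/broad-window count ratios described in the abstract, not the network itself:

```python
import numpy as np

def count_ratios(energies_kev, counts, narrow_windows, broad=(125.0, 154.0)):
    """Count ratios used as network inputs: counts in each narrow energy
    window divided by the total count in the broad 125-154 keV window."""
    def window_count(lo, hi):
        mask = (energies_kev >= lo) & (energies_kev < hi)
        return counts[mask].sum()
    total = window_count(*broad)
    return np.array([window_count(lo, hi) / total for lo, hi in narrow_windows])

# Toy spectrum: 1 keV bins with uniform counts; five contiguous narrow
# windows spanning the broad window, so the ratios sum to 1.
energies = np.arange(120.0, 160.0)
counts = np.ones_like(energies)
windows = [(125, 131), (131, 137), (137, 143), (143, 149), (149, 154)]
ratios = count_ratios(energies, counts, windows)
print(round(ratios.sum(), 6))  # 1.0
```

In the paper these five ratios feed the five input units; here the window boundaries are placeholders, since the abstract does not specify them.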
SU-E-I-07: An Improved Technique for Scatter Correction in PET
Lin, S; Wang, Y; Lue, K; Lin, H; Chuang, K
2014-06-01
Purpose: In positron emission tomography (PET), the single scatter simulation (SSS) algorithm is widely used for scatter estimation in clinical scans. However, bias usually occurs at the essential step of scaling the computed SSS distribution to the real scatter amount by employing the scatter-only projection tail. The bias can be amplified when the scatter-only projection tail is too small, resulting in incorrect scatter correction. To this end, we propose a novel scatter calibration technique that accurately estimates the amount of scatter using a pre-determined scatter fraction (SF) function instead of the scatter-only tail information. Methods: As the SF depends on the radioactivity distribution and the attenuating material of the patient, an accurate theoretical relation cannot be devised. Instead, we constructed an empirical transformation function between SFs and average attenuation coefficients based on a series of phantom studies with different sizes and materials. From the average attenuation coefficient, the predicted SFs were calculated using the empirical transformation function. Hence, the real scatter amount can be obtained by scaling the SSS distribution with the predicted SFs. The simulation was conducted using SimSET. The Siemens Biograph™ 6 PET scanner was modeled in this study. The Software for Tomographic Image Reconstruction (STIR) was employed to estimate the scatter and reconstruct images. The EEC phantom was adopted to evaluate the performance of the proposed technique. Results: The scatter-corrected image of our method demonstrated improved image contrast over that of SSS. For our technique and SSS, the normalized standard deviations of the reconstructed images were 0.053 and 0.182, respectively, and the root mean squared errors were 11.852 and 13.767, respectively. Conclusion: We have proposed an alternative method to calibrate SSS (C-SSS) to the absolute scatter amounts using SF. This method can avoid the bias caused by the insufficient
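The calibration step itself is simple arithmetic: scale the SSS sinogram so that its total equals the predicted scatter fraction times the total prompts. A minimal sketch with invented arrays and names (the real SF comes from the empirical transformation function described above):

```python
import numpy as np

def calibrate_sss(sss_sinogram, prompts_sinogram, predicted_sf):
    """Scale a single-scatter-simulation (SSS) sinogram so the total scatter
    equals predicted_sf * total prompts, instead of fitting the
    scatter-only projection tail."""
    target_scatter = predicted_sf * prompts_sinogram.sum()
    scale = target_scatter / sss_sinogram.sum()
    return sss_sinogram * scale

sss = np.array([1.0, 2.0, 3.0, 2.0, 1.0])       # SSS shape (arbitrary units)
prompts = np.array([10.0, 30.0, 50.0, 30.0, 10.0])
scatter = calibrate_sss(sss, prompts, predicted_sf=0.35)
print(round(scatter.sum() / prompts.sum(), 4))  # 0.35
```

Only the overall scale changes; the spatial shape of the scatter estimate is still taken from the SSS.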
Scatter analysis and correction for ultrafast X-ray tomography.
Wagner, Michael; Barthel, Frank; Zalucky, Johannes; Bieberle, Martina; Hampel, Uwe
2015-06-13
Ultrafast X-ray computed tomography (CT) is an imaging technique with high potential for the investigation of the hydrodynamics in multiphase flows. For correct determination of the phase distribution of such flows, a high accuracy of the reconstructed image data is essential. In X-ray CT, radiation scatter may cause disturbing artefacts. As the scattering is not considered in standard reconstruction algorithms, additional methods are necessary to correct the detector readings or to prevent the detection of scattered photons. In this paper, we present an analysis of the scattering background for the ultrafast X-ray CT imaging system ROFEX at the Helmholtz-Zentrum Dresden-Rossendorf and propose a correction technique based on collimation and deterministic simulation of first-order scattering. PMID:25939622
Lorentz violation correction to the Aharonov-Bohm scattering
NASA Astrophysics Data System (ADS)
Anacleto, M. A.
2015-10-01
In this paper, using a (2+1)-dimensional field theory approach, we study Aharonov-Bohm (AB) scattering with Lorentz symmetry breaking. We obtain the scattering amplitude for the AB effect, modified by the small Lorentz-violating correction in the breaking parameter, and prove that up to one loop the model is free from ultraviolet divergences.
Some radiative corrections to neutrino scattering: Neutral currents
Jenkins, James P.; Goldman, T.
2009-09-01
With the advent of high precision neutrino scattering experiments comes the need for improved radiative corrections. We present a phenomenological analysis of some contributions to the production of photons in neutrino neutral current scattering that are relevant to experiments subsuming the 1% level of accuracy.
Low dose scatter correction for digital chest tomosynthesis
NASA Astrophysics Data System (ADS)
Inscoe, Christina R.; Wu, Gongting; Shan, Jing; Lee, Yueh Z.; Zhou, Otto; Lu, Jianping
2015-03-01
Digital chest tomosynthesis (DCT) provides superior image quality and depth information for thoracic imaging at relatively low dose, though the presence of strong photon scatter degrades the image quality. In most chest radiography, anti-scatter grids are used. However, the grid also blocks a large fraction of the primary beam photons, requiring a significantly higher imaging dose for patients. Previously, we proposed an efficient low-dose scatter correction technique using a primary beam sampling apparatus. We implemented the technique in stationary digital breast tomosynthesis and found the method to be efficient in correcting patient-specific scatter with only a 3% increase in dose. In this paper we report a feasibility study of applying the same technique to chest tomosynthesis. The investigation was performed using phantom and cadaver subjects. The method involves an initial tomosynthesis scan of the object. A lead plate with an array of holes, the primary sampling apparatus (PSA), was placed above the object. A second tomosynthesis scan was performed to measure the primary (scatter-free) transmission. The PSA data were used with the full-field projections to compute the scatter, which was then interpolated to full-field scatter maps unique to each projection angle. Full-field projection images were scatter corrected prior to reconstruction. Projections and reconstruction slices were evaluated, and the correction method was found to be effective at improving image quality and practical for clinical implementation.
Thickness-dependent scatter correction algorithm for digital mammography
NASA Astrophysics Data System (ADS)
Gonzalez Trotter, Dinko E.; Tkaczyk, J. Eric; Kaufhold, John; Claus, Bernhard E. H.; Eberhard, Jeffrey W.
2002-05-01
We have implemented a scatter-correction algorithm (SCA) for digital mammography based on an iterative restoration filter. The scatter contribution to the image is modeled by an additive component that is proportional to the filtered unattenuated x-ray photon signal and dependent on the characteristics of the imaged object. The SCA's result is closer to the scatter-free signal than when a scatter grid is used. Presently, the SCA shows improved contrast-to-noise performance relative to the scatter grid for a breast thickness up to 3.6 cm, with potential for better performance up to 6 cm. We investigated the efficacy of our scatter-correction method on a series of x-ray images of anthropomorphic breast phantoms with maximum thicknesses ranging from 3.0 cm to 6.0 cm. A comparison of the scatter-corrected images with the scatter-free signal acquired using a slit collimator shows average deviations of 3 percent or less, even in the edge region of the phantoms. These results indicate that the SCA is superior to a scatter grid for 2D quantitative mammography applications, and may enable 3D quantitative applications in X-ray tomosynthesis.
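The additive scatter model described in this abstract (scatter proportional to a filtered primary signal) can be inverted by a simple fixed-point iteration. The 1-D sketch below uses an invented kernel and scatter strength and is not the authors' actual restoration filter:

```python
import numpy as np

def deconvolve_scatter(measured, kernel, c, n_iter=25):
    """Invert the additive scatter model
        measured = primary + c * convolve(primary, kernel)
    by the fixed-point iteration p <- measured - c * convolve(p, kernel),
    which converges when c * kernel.sum() < 1."""
    p = measured.copy()
    for _ in range(n_iter):
        p = measured - c * np.convolve(p, kernel, mode="same")
    return p

primary_true = np.zeros(64)
primary_true[20:40] = 1.0                # idealized object profile with edges
kernel = np.ones(9) / 9.0                # broad scatter-spread kernel
c = 0.4                                  # scatter-to-primary strength
measured = primary_true + c * np.convolve(primary_true, kernel, mode="same")
restored = deconvolve_scatter(measured, kernel, c)
print(np.max(np.abs(restored - primary_true)) < 1e-6)  # True
```

In the paper the kernel and the proportionality depend on the imaged object (breast thickness), which is what makes the correction thickness-dependent.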
Solving outside-axial-field-of-view scatter correction problem in PET via digital experimentation
NASA Astrophysics Data System (ADS)
Andreyev, Andriy; Zhu, Yang-Ming; Ye, Jinghan; Song, Xiyun; Hu, Zhiqiang
2016-03-01
Unaccounted scatter from unknown outside-axial-field-of-view (outside-AFOV) activity in PET is an important factor degrading image quality and quantitation. A resource-consuming and unpopular way to account for the outside-AFOV activity is to perform an additional PET/CT scan of the adjacent regions. In this work we investigate a solution to the outside-AFOV scatter problem that does not require a PET/CT scan of the adjacent regions. The main motivation for the proposed method is that the measured randoms-corrected prompt (RCP) sinogram in the background region surrounding the measured object contains only scattered events, originating from both inside- and outside-AFOV activity. In this method, the scatter correction simulation searches through many randomly chosen outside-AFOV activity estimates along with the known inside-AFOV activity, generating a plethora of scatter distribution sinograms. This digital experimentation iterates until a good match is found between a simulated scatter sinogram (including the supposed outside-AFOV activity) and the measured RCP sinogram in the background region. The combined scatter impact from inside- and outside-AFOV activity can then be used for scatter correction during the final image reconstruction phase. Preliminary results using measured phantom data indicate a successful phantom length estimate with the method and, therefore, an accurate outside-AFOV scatter estimate.
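The digital-experimentation loop can be caricatured in a few lines: randomly propose outside-AFOV activity amplitudes, simulate the resulting scatter tail, and keep the proposal that best matches the measured RCP tail. Everything below (names, the single-amplitude parameterization, toy sinogram tails) is an invented simplification of the search described above:

```python
import numpy as np

def match_outside_afov(measured_tail, inside_scatter_tail, outside_basis_tail,
                       n_trials=500, seed=0):
    """Random search over outside-AFOV activity amplitude so the simulated
    scatter (inside + amp * outside basis) best matches the measured
    randoms-corrected-prompts tail, which contains only scattered events."""
    rng = np.random.default_rng(seed)
    best_amp, best_err = None, np.inf
    for _ in range(n_trials):
        amp = rng.uniform(0.0, 5.0)
        model = inside_scatter_tail + amp * outside_basis_tail
        err = np.sum((model - measured_tail) ** 2)
        if err < best_err:
            best_amp, best_err = amp, err
    return best_amp

inside = np.array([1.0, 0.8, 0.6, 0.4])          # toy inside-AFOV scatter tail
outside_basis = np.array([0.5, 0.4, 0.3, 0.2])   # toy unit outside-AFOV tail
true_amp = 2.0
measured = inside + true_amp * outside_basis
amp = match_outside_afov(measured, inside, outside_basis)
print(round(amp, 2))
```

The real method proposes full activity distributions and runs a scatter simulation per proposal; the matching criterion on the background region is the same idea.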
Proximity corrected accurate in-die registration metrology
NASA Astrophysics Data System (ADS)
Daneshpanah, M.; Laske, F.; Wagner, M.; Roeth, K.-D.; Czerkas, S.; Yamaguchi, H.; Fujii, N.; Yoshikawa, S.; Kanno, K.; Takamizawa, H.
2014-07-01
193nm immersion lithography is the mainstream production technology for the 20nm and 14nm logic nodes. Multi-patterning of an increasing number of critical layers puts extreme pressure on wafer intra-field overlay, to which mask registration error is a major contributor [1]. The International Technology Roadmap for Semiconductors (ITRS [2]) requests a registration error below 4 nm for each mask of a multi-patterning set forming one layer on the wafer. For mask metrology at the 20nm and 14nm logic nodes, maintaining a precision-to-tolerance (P/T) ratio below 0.25 will be very challenging. Full characterization of mask registration errors in the active area of the die will become mandatory. It is well known that differences in pattern density and asymmetries in the immediate neighborhood of a feature give rise to apparent shifts in position when measured by optical metrology systems, so-called optical proximity effects. These effects can easily be similar in magnitude to real mask placement errors and, if uncorrected, can result in mis-qualification of the mask. Metrology results from KLA-Tencor's next-generation mask metrology system are reported, applying a model-based algorithm [3] which includes corrections for proximity errors. The proximity-corrected, model-based measurements are compared to standard measurements, and a methodology is presented that verifies the correction performance of the new algorithm.
Coastal Zone Color Scanner atmospheric correction algorithm: multiple scattering effects.
Gordon, H R; Castaño, D J
1987-06-01
An analysis of the errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm is presented in detail. This was prompted by the observations of others that significant errors would be encountered if the present algorithm were applied to a hypothetical instrument possessing higher radiometric sensitivity than the present CZCS. This study provides CZCS users sufficient information with which to judge the efficacy of the current algorithm with the current sensor and enables them to estimate the impact of the algorithm-induced errors on their applications in a variety of situations. The greatest source of error is the assumption that the molecular and aerosol contributions to the total radiance observed at the sensor can be computed separately. This leads to the requirement that a value epsilon'(lambda,lambda(0)) for the atmospheric correction parameter, which bears little resemblance to its theoretically meaningful counterpart, must usually be employed in the algorithm to obtain an accurate atmospheric correction. The behavior of epsilon'(lambda,lambda(0)) with the aerosol optical thickness and aerosol phase function is thoroughly investigated through realistic modeling of radiative transfer in a stratified atmosphere over a Fresnel reflecting ocean. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, allowing elucidation of the errors along typical CZCS scan lines; this is important since, in the normal application of the algorithm, it is assumed that the same value of epsilon'(lambda,lambda(0)) can be used for an entire CZCS scene or at least for a reasonably large subscene. Two types of variation of epsilon' are found in models for which it would be constant in the single scattering approximation: (1) variation with scan angle in scenes in which a relatively large portion of the aerosol scattering phase function would be examined
NASA Astrophysics Data System (ADS)
Chen, Duan; Cai, Wei; Zinser, Brian; Cho, Min Hyung
2016-09-01
In this paper, we develop an accurate and efficient Nyström volume integral equation (VIE) method for the Maxwell equations for a large number of 3-D scatterers. The Cauchy Principal Values that arise from the VIE are computed accurately using a finite size exclusion volume together with explicit correction integrals consisting of removable singularities. Also, the hyper-singular integrals are computed using interpolated quadrature formulae with tensor-product quadrature nodes for cubes, spheres and cylinders, that are frequently encountered in the design of meta-materials. The resulting Nyström VIE method is shown to have high accuracy with a small number of collocation points and demonstrates p-convergence for computing the electromagnetic scattering of these objects. Numerical calculations of multiple scatterers of cubic, spherical, and cylindrical shapes validate the efficiency and accuracy of the proposed method.
X-ray scatter correction for cone-beam CT using moving blocker array
NASA Astrophysics Data System (ADS)
Zhu, Lei; Strobel, Norbert; Fahrig, Rebecca
2005-04-01
Scatter correction is an active research topic in cone-beam computed tomography (CBCT) because CBCT systems, especially those based on flat-panel detectors (FPDs), have large scatter-to-primary ratios. Scatter produces artifacts and contrast reduction, and is difficult to model accurately. Direct measurement using a beam blocker array provides accurate scatter estimates. However, since the blocker array also blocks primary radiation, imaging requires a second (or subsequent) scan without the blocker array in place. This approach is inefficient in terms of scanning time and patient dose. To combine accurate scatter estimation and reconstruction into one single scan, a new approach based on an array of moving blockers has been developed. The blocker array moves from projection to projection, such that no detector pixel is blocked in consecutive projections, and the missing primary data in the blocker shadows are estimated by interpolation. Using different blocker array trajectories, the algorithm has been evaluated through software phantom studies using Monte Carlo simulations and image processing techniques. Results show that this approach is able to greatly reduce the effect of scatter in the reconstruction. By properly choosing the blocker distance and the primary data interpolation method, the mean square error of the reconstructed image decreases from 32.3% to 1.13%, and the induced visual artifacts are significantly reduced when a raster-scanning blocker array trajectory is used. Further analysis also shows that artifacts arise mostly from inaccurate scatter estimates, rather than from interpolation of the primary data.
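In the blocker shadows the detector records scatter only, so a scatter map can be interpolated from the shadowed pixels and subtracted everywhere else. A 1-D sketch with invented numbers (the real method also recovers the missing primary in the shadows by interpolating across projections as the blockers move):

```python
import numpy as np

def correct_with_blocker(detected, blocked_mask):
    """Estimate scatter by linear interpolation from blocker-shadow pixels
    (which receive scatter only) and subtract it from unblocked pixels."""
    x = np.arange(detected.size)
    scatter = np.interp(x, x[blocked_mask], detected[blocked_mask])
    # Primary is unknown in the shadows themselves for this projection.
    primary = np.where(blocked_mask, np.nan, detected - scatter)
    return primary, scatter

primary_true = np.full(16, 100.0)
scatter_true = 20.0 + 0.5 * np.arange(16)        # slowly varying scatter
blocked = np.zeros(16, dtype=bool)
blocked[::5] = True                              # blocker-shadow pixels
detected = np.where(blocked, scatter_true, primary_true + scatter_true)
primary, scatter = correct_with_blocker(detected, blocked)
print(np.nanmax(np.abs(primary - primary_true)) < 1e-6)  # True
```

The interpolation works because scatter is spatially smooth; the same assumption underlies the choice of blocker spacing discussed in the abstract.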
Method for measuring multiple scattering corrections between liquid scintillators
Verbeke, J. M.; Glenn, A. M.; Keefer, G. J.; Wurtz, R. E.
2016-04-11
In this study, a time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source, for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons multiple scattering. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.
Method for measuring multiple scattering corrections between liquid scintillators
NASA Astrophysics Data System (ADS)
Verbeke, J. M.; Glenn, A. M.; Keefer, G. J.; Wurtz, R. E.
2016-07-01
A time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source, for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons multiple scattering. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.
Hadron mass corrections in semi-inclusive deep inelastic scattering
A. Accardi, T. Hobbs, W. Melnitchouk
2009-11-01
We derive mass corrections for semi-inclusive deep inelastic scattering of leptons from nucleons using a collinear factorization framework which incorporates the initial state mass of the target nucleon and the final state mass of the produced hadron $h$. The hadron mass correction is made by introducing a generalized, finite-$Q^2$ scaling variable $\zeta_h$ for the hadron fragmentation function, which approaches the usual energy fraction $z_h = E_h/\nu$ in the Bjorken limit.
Correction of Rayleigh Scattering Effects in Cloud Optical Thickness Retrievals
NASA Technical Reports Server (NTRS)
Wang, Meng-Hua; King, Michael D.
1997-01-01
We present results that demonstrate the effects of Rayleigh scattering on the retrieval of cloud optical thickness at a visible wavelength (0.66 μm). The sensor-measured radiance at a visible wavelength (0.66 μm) is usually used to infer the cloud optical thickness remotely from aircraft or satellite instruments. For example, we find that without removing Rayleigh scattering effects, errors in the retrieved cloud optical thickness for a thin water cloud layer (τ = 2.0) range from 15 to 60%, depending on solar zenith angle and viewing geometry. For an optically thick cloud (τ = 10), on the other hand, errors can range from 10 to 60% for large solar zenith angles because of enhanced Rayleigh scattering. It is therefore particularly important to correct for Rayleigh scattering contributions to the reflected signal from a cloud layer both (1) for the case of thin clouds and (2) for large solar zenith angles and all clouds. On the basis of the single scattering approximation, we propose an iterative method for effectively removing Rayleigh scattering contributions from the measured radiance signal in cloud optical thickness retrievals. The proposed correction algorithm works very well and can easily be incorporated into any cloud retrieval algorithm. The Rayleigh correction method is applicable to clouds at any pressure, provided that the cloud-top pressure is known to within ±100 hPa. With the Rayleigh correction the errors in retrieved cloud optical thickness are usually reduced to within 3%. In cases of both thin cloud layers and thick clouds with large solar zenith angles, the errors are usually reduced by a factor of about 2 to over 10. The Rayleigh correction algorithm has been tested with simulations for realistic cloud optical and microphysical properties with different solar and viewing geometries. We apply the Rayleigh correction algorithm to the cloud optical thickness retrievals from experimental data obtained during the Atlantic
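The iterative removal scheme can be mimicked with a toy two-stream cloud reflectance model. All formulas, constants, and names below are invented stand-ins for the paper's radiative transfer; the point is only the structure of the iteration: guess τ, estimate the Rayleigh contribution, subtract it, re-invert.

```python
R_RAY = 0.05  # assumed Rayleigh contribution to reflectance at tau = 0 (toy value)

def cloud_refl(tau):
    return tau / (tau + 2.0)            # toy two-stream cloud reflectance

def invert_refl(r):
    return 2.0 * r / (1.0 - r)          # inverse of cloud_refl

def retrieve_tau(measured, n_iter=5):
    """Iteratively remove a Rayleigh term that scales with the light the
    cloud does not reflect, then invert the toy reflectance model:
        measured = cloud_refl(tau) + R_RAY * (1 - cloud_refl(tau))"""
    tau = invert_refl(measured)         # first guess: no Rayleigh correction
    for _ in range(n_iter):
        corrected = measured - R_RAY * (1.0 - cloud_refl(tau))
        tau = invert_refl(corrected)
    return tau

true_tau = 2.0
measured = cloud_refl(true_tau) + R_RAY * (1.0 - cloud_refl(true_tau))
print(round(retrieve_tau(measured), 3))  # 2.0
```

The uncorrected first guess overestimates τ (here ≈2.21 instead of 2.0), mirroring the bias the abstract reports for thin clouds; the iteration converges back to the true value.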
Mie scatter corrections in single cell infrared microspectroscopy.
Konevskikh, Tatiana; Lukacs, Rozalia; Blümel, Reinhold; Ponossov, Arkadi; Kohler, Achim
2016-06-23
Strong Mie scattering signatures hamper the chemical interpretation and multivariate analysis of infrared microscopy spectra of single cells and tissues. During recent years, several numerical Mie scatter correction algorithms for the infrared spectroscopy of single cells have been published. In the paper at hand, we critically review existing algorithms for the correction of Mie scattering and suggest improvements. We develop an iterative algorithm based on Extended Multiplicative Scatter Correction (EMSC) for the retrieval of pure absorbance spectra from highly distorted infrared spectra of single cells. The new algorithm uses the van de Hulst approximation formula for the extinction efficiency with a complex refractive index. The iterative algorithm involves the establishment of an EMSC meta-model. While existing iterative algorithms for the correction of resonant Mie scattering employ three independent parameters for establishing a meta-model, we could decrease the number of parameters from three to two, which reduced the calculation time for the Mie scattering curves of the iterative EMSC meta-model by a factor of 10. Moreover, by employing the Hilbert transform to evaluate the Kramers-Kronig relations with an FFT algorithm in Matlab, we further improved the speed of the algorithm by a factor of 100. For testing the algorithm we simulate distorted apparent absorbance spectra using the exact theory for the scattering of infrared light by absorbing spheres, taking into account the high numerical aperture of the infrared microscopes employed for the analysis of single cells and tissues. In addition, the algorithm was applied to measured absorbance spectra of single lung cancer cells. PMID:27034998
Quantum error correction of photon-scattering errors
NASA Astrophysics Data System (ADS)
Akerman, Nitzan; Glickman, Yinnon; Kotler, Shlomi; Ozeri, Roee
2011-05-01
Photon scattering by an atomic ground-state superposition is often considered a source of decoherence. The same process also results in atom-photon entanglement, which has been directly observed in various experiments using a single atom, ion, or diamond nitrogen-vacancy center. Here we combine these two aspects to implement a quantum error correction protocol. We encode a qubit in the two Zeeman-split ground states of a single trapped 88Sr+ ion. Photons are resonantly scattered on the S1/2 → P1/2 transition. We study the process of single-photon scattering, i.e., the excitation of the ion to the excited manifold followed by spontaneous emission and decay. In the absence of any knowledge of the emitted photon, the ion-qubit coherence is lost. However, the joint ion-photon system still maintains coherence. We show that while scattering events in which the spin population is preserved (Rayleigh scattering) do not affect coherence, spin-changing (Raman) scattering events result in coherent amplitude exchange between the two qubit states. By applying a unitary spin rotation that depends on the detected photon polarization, we retrieve the initial state of the ion qubit. We characterize this quantum error correction protocol by process tomography and demonstrate the ability to preserve ion-qubit coherence with high fidelity.
Radiative corrections to real and virtual muon Compton scattering revisited
NASA Astrophysics Data System (ADS)
Kaiser, N.
2010-06-01
We calculate in closed analytical form the one-photon loop radiative corrections to muon Compton scattering μγ→μγ. Ultraviolet and infrared divergences are both treated in dimensional regularization. Infrared finiteness of the (virtual) radiative corrections is achieved (in the standard way) by including soft photon radiation below an energy cut-off λ. We find that the anomalous magnetic moment α/2π provides only a very small portion of the full radiative corrections. Furthermore, we extend our calculation of radiative corrections to the muon-nucleus bremsstrahlung process (or virtual muon Compton scattering μγ0∗→μγ). These results are particularly relevant for analyzing the COMPASS experiment at CERN in which muon-nucleus bremsstrahlung serves to calibrate the Primakoff scattering of high-energy pions off a heavy nucleus with the aim of measuring the pion electric and magnetic polarizabilities. We find agreement with an earlier calculation of these radiative corrections based on a different method.
Fully 3D iterative scatter-corrected OSEM for HRRT PET using a GPU.
Kim, Kyung Sang; Ye, Jong Chul
2011-08-01
Accurate scatter correction is especially important for high-resolution 3D positron emission tomography (PET) scanners such as the high-resolution research tomograph (HRRT) due to the large scatter fraction in the data. To address this problem, a fully 3D iterative scatter-corrected ordered subset expectation maximization (OSEM), in which a 3D single scatter simulation (SSS) is alternately performed with a 3D OSEM reconstruction, was recently proposed. However, due to the computational complexity of both the SSS and OSEM algorithms for a high-resolution 3D PET, it has not been widely used in practice. The main objective of this paper is, therefore, to accelerate the fully 3D iterative scatter-corrected OSEM using a graphics processing unit (GPU) and verify its performance for an HRRT. We show that to exploit the massive thread structures of the GPU, several algorithmic modifications are necessary. For the SSS implementation, a sinogram-driven approach is found to be more appropriate than a detector-driven approach, as fast linear interpolation can be performed in the sinogram domain through the use of texture memory. Furthermore, a pixel-driven backprojector and a ray-driven projector can be significantly accelerated by assigning threads to voxels and sinograms, respectively. Using Nvidia's GPU and the compute unified device architecture (CUDA), the execution time of an SSS is less than 6 s, a single iteration of OSEM with 16 subsets takes 16 s, and a single iteration of the fully 3D scatter-corrected OSEM, composed of an SSS and six iterations of OSEM, takes under 105 s for the HRRT geometry, which corresponds to acceleration factors of 125× and 141× for OSEM and SSS, respectively. The fully 3D iterative scatter-corrected OSEM algorithm is validated in simulations using the Geant4 Application for Tomographic Emission and in actual experiments using an HRRT. PMID:21772080
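The core update in scatter-corrected OSEM schemes like the one above is a multiplicative EM step over ordered subsets, with the scatter estimate added to the forward model. A minimal CPU sketch with a dense toy system matrix (not the paper's GPU implementation; all names and the toy problem are illustrative):

```python
import numpy as np

def osem_iteration(x, A, y, scatter, n_subsets):
    """One OSEM pass: multiplicative EM updates over ordered subsets,
    with a fixed additive scatter estimate in the forward model.
    x: current image (n_vox,), A: system matrix (n_bins, n_vox),
    y: measured sinogram (n_bins,), scatter: scatter sinogram (n_bins,)."""
    subsets = np.array_split(np.arange(y.size), n_subsets)
    for s in subsets:
        As = A[s]                        # rows of the system matrix in this subset
        fwd = As @ x + scatter[s]        # forward projection plus scatter
        ratio = y[s] / np.maximum(fwd, 1e-12)
        sens = np.maximum(As.T @ np.ones(s.size), 1e-12)
        x = x * (As.T @ ratio) / sens    # multiplicative EM update
    return x

# Tiny noiseless toy problem: 3 detector bins, 2 voxels, zero scatter
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
y = A @ x_true
x = np.ones(2)
for _ in range(200):
    x = osem_iteration(x, A, y, np.zeros(3), n_subsets=1)
```

With consistent noiseless data the iterates converge to the exact image; in the paper's setting the scatter sinogram would come from the SSS step rather than being zero.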
Correcting for Interstellar Scattering Delay in High-precision Pulsar Timing: Simulation Results
NASA Astrophysics Data System (ADS)
Palliyaguru, Nipuni; Stinebring, Daniel; McLaughlin, Maura; Demorest, Paul; Jones, Glenn
2015-12-01
Light travel time changes due to gravitational waves (GWs) may be detected within the next decade through precision timing of millisecond pulsars. Removal of frequency-dependent interstellar medium (ISM) delays due to dispersion and scattering is a key issue in the detection process. Current timing algorithms routinely correct pulse times of arrival (TOAs) for time-variable delays due to cold plasma dispersion. However, none of the major pulsar timing groups correct for delays due to scattering from multi-path propagation in the ISM. Scattering introduces a frequency-dependent phase change in the signal that results in pulse broadening and arrival time delays. Any method to correct the TOA for interstellar propagation effects must be based on multi-frequency measurements that can effectively separate dispersion and scattering delay terms from frequency-independent perturbations such as those due to a GW. Cyclic spectroscopy, first described in an astronomical context by Demorest (2011), is a potentially powerful tool to assist in this multi-frequency decomposition. As a step toward a more comprehensive ISM propagation delay correction, we demonstrate through a simulation that we can accurately recover impulse response functions (IRFs), such as those that would be introduced by multi-path scattering, with a realistic signal-to-noise ratio (S/N). We demonstrate that timing precision is improved when scatter-corrected TOAs are used, under the assumptions of a high S/N and highly scattered signal. We also show that the effect of pulse-to-pulse "jitter" is not a serious problem for IRF reconstruction, at least for jitter levels comparable to those observed in several bright pulsars.
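The idea of recovering a scattering impulse response from a high-S/N signal can be illustrated with a toy regularized frequency-domain deconvolution (a Wiener-style estimate, not the cyclic-spectroscopy method the paper uses; the pulse shape, one-sided exponential IRF, and noise level are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)

# Intrinsic pulse and a one-sided exponential IRF mimicking scatter broadening
pulse = np.exp(-0.5 * ((t - 100) / 5.0) ** 2)
irf = np.exp(-t / 30.0)
irf /= irf.sum()

# Observed profile: circular convolution of pulse and IRF, plus weak noise (high S/N)
observed = np.real(np.fft.ifft(np.fft.fft(pulse) * np.fft.fft(irf)))
observed += 1e-4 * rng.standard_normal(n)

# Wiener-style deconvolution: divide spectra with a small regularizer
P = np.fft.fft(pulse)
O = np.fft.fft(observed)
eps = 1e-3 * np.abs(P).max()
irf_est = np.real(np.fft.ifft(O * np.conj(P) / (np.abs(P) ** 2 + eps ** 2)))
```

The regularizer suppresses frequencies where the pulse spectrum carries no information, which is why recovery degrades as S/N drops, consistent with the paper's high-S/N assumption.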
Radiative corrections to polarization observables in electron-proton scattering
NASA Astrophysics Data System (ADS)
Borisyuk, Dmitry; Kobushkin, Alexander
2014-08-01
We consider radiative corrections to polarization observables in elastic electron-proton scattering, in particular, for the polarization transfer measurements of the proton form factor ratio μGE/GM. The corrections are of two types: two-photon exchange (TPE) and bremsstrahlung (BS); in the present work we pay special attention to the latter. Assuming small missing energy or missing mass cutoff, the correction can be represented in a model-independent form, with both electron and proton radiation taken into account. Numerical calculations show that the contribution of the proton radiation is not negligible. Overall, at high Q2 and energies, the total correction to μGE/GM grows, but is dominated by TPE. At low energies both TPE and BS may be significant; the latter amounts to ~0.01 for some reasonable cut-off choices.
Lowest order QED radiative corrections to longitudinally polarized Moeller scattering
Ilyichev, A.; Zykunov, V.
2005-08-01
The total lowest-order electromagnetic radiative corrections to the observables in Moeller scattering of longitudinally polarized electrons have been calculated. The final expressions, obtained by the covariant method for infrared-divergence cancellation, are free from any unphysical cut-off parameters. Since the calculation is carried out within the ultrarelativistic approximation, our result has a compact form that is convenient for computing. Based on these expressions, the FORTRAN code MERA has been developed. Using this code, a detailed numerical analysis performed under SLAC (E-158) and JLab kinematic conditions has shown that the radiative corrections are significant and rather sensitive to the value of the missing-mass (inelasticity) cuts.
NLO QCD corrections to graviton induced deep inelastic scattering
NASA Astrophysics Data System (ADS)
Stirling, W. J.; Vryonidou, E.
2011-06-01
We consider Next-to-Leading-Order QCD corrections to ADD graviton exchange relevant for Deep Inelastic Scattering experiments. We calculate the relevant NLO structure functions by calculating the virtual and real corrections for a set of graviton interaction diagrams, demonstrating the expected cancellation of the UV and IR divergences. We compare the NLO and LO results at the centre-of-mass energy relevant to HERA experiments as well as for the proposed higher energy lepton-proton collider, LHeC, which has a higher fundamental scale reach.
X-ray scatter correction in breast tomosynthesis with a precomputed scatter map library
Feng, Steve Si Jia; D’Orsi, Carl J.; Newell, Mary S.; Seidel, Rebecca L.; Patel, Bhavika; Sechopoulos, Ioannis
2014-01-01
Purpose: To develop and evaluate the impact on lesion conspicuity of a software-based x-ray scatter correction algorithm for digital breast tomosynthesis (DBT) imaging into which a precomputed library of x-ray scatter maps is incorporated. Methods: A previously developed model of compressed breast shapes undergoing mammography based on principal component analysis (PCA) was used to assemble 540 simulated breast volumes, of different shapes and sizes, undergoing DBT. A Monte Carlo (MC) simulation was used to generate the cranio-caudal (CC) view DBT x-ray scatter maps of these volumes, which were then assembled into a library. This library was incorporated into a previously developed software-based x-ray scatter correction, and the performance of this improved algorithm was evaluated with an observer study of 40 patient cases previously classified as BI-RADS® 4 or 5, evenly divided between mass and microcalcification cases. Observers were presented with both the original images and the scatter corrected (SC) images side by side and asked to indicate their preference, on a scale from −5 to +5, in terms of lesion conspicuity and quality of diagnostic features. Scores were normalized such that a negative score indicates a preference for the original images, and a positive score indicates a preference for the SC images. Results: The scatter map library removes the time-intensive MC simulation from the application of the scatter correction algorithm. While only one in four observers preferred the SC DBT images as a whole (combined mean score = 0.169 ± 0.37, p > 0.39), all observers exhibited a preference for the SC images when the lesion examined was a mass (1.06 ± 0.45, p < 0.0001). When the lesion examined consisted of microcalcification clusters, the observers exhibited a preference for the uncorrected images (−0.725 ± 0.51, p < 0.009). Conclusions: The incorporation of the x-ray scatter map library into the scatter correction algorithm improves the efficiency
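The precomputed-library idea above amounts to a nearest-neighbor lookup: match the current breast-shape parameters (e.g. PCA coefficients) to the closest library entry and reuse its scatter map. A minimal sketch (function and variable names are hypothetical; the toy library values are illustrative):

```python
import numpy as np

def lookup_scatter_map(shape_params, library_params, library_maps):
    """Return the precomputed scatter map whose shape parameters are
    nearest (Euclidean distance) to the current case."""
    d = np.linalg.norm(library_params - shape_params, axis=1)
    return library_maps[np.argmin(d)]

# Toy library: 3 PCA parameter vectors and 3 corresponding 4x4 scatter maps
lib_params = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
lib_maps = np.stack([np.full((4, 4), v) for v in (10.0, 20.0, 30.0)])

# A case close to the second library shape picks the second map
m = lookup_scatter_map(np.array([0.9, 0.1]), lib_params, lib_maps)
```

The retrieved map would then be subtracted from the projection, replacing the per-case Monte Carlo run that the library precomputes.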
Bootsma, G. J.; Verhaegen, F.; Jaffray, D. A.
2015-01-15
suitable GOF metric with strong correlation with the actual error of the scatter fit, S_F. Fitting the scatter distribution to a limited sum of sine and cosine functions using a low-pass filtered fast Fourier transform provided a computationally efficient and accurate fit. The CMCF algorithm reduces the number of photon histories required by over four orders of magnitude. The simulated experiments showed that using a compensator reduced the computational time by a factor between 1.5 and 1.75. The scatter estimates for the simulated and measured data were computed between 35–93 s and 114–122 s, respectively, using 16 Intel Xeon cores (3.0 GHz). The CMCF scatter correction improved the contrast-to-noise ratio by 10%–50% and reduced the reconstruction error to under 3% for the simulated phantoms. Conclusions: The novel CMCF algorithm significantly reduces the computation time required to estimate the scatter distribution by reducing the statistical noise in the MC scatter estimate and limiting the number of projection angles that must be simulated. Using the scatter estimate provided by the CMCF algorithm to correct both simulated and real projection data showed improved reconstruction image quality.
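The low-pass FFT fit described above, a limited sum of sine and cosine functions, can be sketched in 1D: zero all Fourier coefficients above a small mode count, which denoises a smooth profile buried in Monte Carlo noise (the synthetic signal and mode count are illustrative):

```python
import numpy as np

def lowpass_fft_fit(noisy, n_modes):
    """Fit a 1D signal to its lowest-frequency Fourier modes by zeroing
    all FFT coefficients above n_modes (a hard low-pass filter)."""
    F = np.fft.rfft(noisy)
    F[n_modes + 1:] = 0.0          # keep DC plus n_modes sine/cosine pairs
    return np.fft.irfft(F, n=noisy.size)

rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 512, endpoint=False)
smooth = 100 + 30 * np.sin(x) + 10 * np.cos(3 * x)   # smooth "scatter" profile
noisy = smooth + 5 * rng.standard_normal(x.size)      # noisy MC estimate
fit = lowpass_fft_fit(noisy, n_modes=5)
```

Because only 2·n_modes + 1 of the 512 noise degrees of freedom survive the cutoff, the residual noise in the fit is far below the raw Monte Carlo noise, which is the mechanism behind the reported history-count reduction.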
Monte Carlo evaluation of accuracy and noise properties of two scatter correction methods
Narita, Y.; Eberl, S.; Nakamura, T.
1996-12-31
Two independent scatter correction techniques, transmission dependent convolution subtraction (TDCS) and the triple-energy window (TEW) method, were evaluated in terms of quantitative accuracy and noise properties using Monte Carlo simulation (EGS4). Emission projections (primary, scatter, and scatter plus primary) were simulated for 99mTc and 201Tl for numerical chest phantoms. Data were reconstructed with an ordered-subset ML-EM algorithm including attenuation correction using the transmission data. In the chest phantom simulation, TDCS provided better S/N than TEW, and better accuracy, i.e., 1.0% vs -7.2% in myocardium, and -3.7% vs -30.1% in the ventricular chamber for 99mTc with TDCS and TEW, respectively. For 201Tl, TDCS provided good visual and quantitative agreement with the simulated true primary image without noticeably increasing the noise after scatter correction. Overall, TDCS proved to be more accurate and less noisy than TEW, facilitating quantitative assessment of physiological functions with SPECT.
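The TEW method compared above is a simple trapezoidal interpolation under the photopeak: counts in two narrow windows flanking the main window estimate the scatter inside it. A minimal sketch (window widths and counts are illustrative):

```python
def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_peak):
    """Triple-energy-window scatter estimate: trapezoidal interpolation of
    the count rates in two narrow sub-windows flanking the photopeak."""
    return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

# Example: 140 keV Tc-99m photopeak, 28 keV main window, 2 keV sub-windows
scatter = tew_scatter(c_lower=500.0, c_upper=100.0,
                      w_lower=2.0, w_upper=2.0, w_peak=28.0)
primary = 10000.0 - scatter   # scatter-corrected photopeak counts
```

The estimate is applied pixel-by-pixel to projections; its sensitivity to the narrow-window counting noise is one reason TEW compares unfavorably with TDCS in the study above.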
Correction of optical absorption and scattering variations in laser speckle rheology measurements
Hajjarian, Zeinab; Nadkarni, Seemantini K.
2014-01-01
Laser Speckle Rheology (LSR) is an optical technique to evaluate the viscoelastic properties by analyzing the temporal fluctuations of backscattered speckle patterns. Variations of optical absorption and reduced scattering coefficients further modulate speckle fluctuations, posing a critical challenge for quantitative evaluation of viscoelasticity. We compare and contrast two different approaches applicable for correcting and isolating the collective influence of absorption and scattering, to accurately measure mechanical properties. Our results indicate that the numerical approach of Monte-Carlo ray tracing (MCRT) reliably compensates for any arbitrary optical variations. When scattering dominates absorption, yet absorption is non-negligible, diffusing wave spectroscopy (DWS) formalisms perform similar to MCRT, superseding other analytical compensation approaches such as Telegrapher equation. The computational convenience of DWS greatly simplifies the extraction of viscoelastic properties from LSR measurements in a number of chemical, industrial, and biomedical applications. PMID:24663983
Multiple-scattering corrections to the Beer-Lambert law
Zardecki, A.
1983-01-01
The effect of multiple scattering on the validity of the Beer-Lambert law is discussed for a wide range of particle-size parameters and optical depths. To predict the amount of received radiant power, appropriate correction terms are introduced. For particles larger than or comparable to the wavelength of radiation, the small-angle approximation is adequate; whereas for small densely packed particles, the diffusion theory is advantageously employed. These two approaches are used in the context of the problem of laser-beam propagation in a dense aerosol medium. In addition, preliminary results obtained by using a two-dimensional finite-element discrete-ordinates transport code are described. Multiple-scattering effects for laser propagation in fog, cloud, rain, and aerosol cloud are modeled.
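For reference, the uncorrected Beer-Lambert transmittance can be compared against a toy correction in which a fraction of scattered light stays within the receiver's acceptance cone, effectively reducing the extinction (the constant forward fraction is a hypothetical illustration, not one of the paper's small-angle or diffusion correction terms):

```python
import numpy as np

def beer_lambert(tau):
    """Uncorrected Beer-Lambert transmittance at optical depth tau."""
    return np.exp(-tau)

def corrected_transmittance(tau, forward_fraction):
    """Toy multiple-scattering correction: a fixed fraction of each
    scattering event remains inside the detector acceptance, so the
    effective extinction is reduced (hypothetical illustration)."""
    return np.exp(-tau * (1.0 - forward_fraction))

tau = np.linspace(0.0, 10.0, 5)
t0 = beer_lambert(tau)
t1 = corrected_transmittance(tau, forward_fraction=0.3)
```

The qualitative point matches the abstract: at large optical depth the received power exceeds the Beer-Lambert prediction, and the discrepancy grows with optical depth.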
NASA Technical Reports Server (NTRS)
Jefferies, S. M.; Duvall, T. L., Jr.
1991-01-01
A measurement of the intensity distribution in an image of the solar disk will be corrupted by a spatial redistribution of the light that is caused by the earth's atmosphere and the observing instrument. A simple correction method is introduced here that is applicable for solar p-mode intensity observations obtained over a period of time in which there is a significant change in the scattering component of the point spread function. The method circumvents the problems incurred with an accurate determination of the spatial point spread function and its subsequent deconvolution from the observations. The method only corrects the spherical harmonic coefficients that represent the spatial frequencies present in the image and does not correct the image itself.
Jeong, Hyunjo; Zhang, Shuzeng; Li, Xiongbing; Barnard, Dan
2015-09-15
The accurate measurement of the acoustic nonlinearity parameter β for fluids or solids generally requires corrections for diffraction effects due to the finite-size geometry of the transmitter and receiver. These effects are well known in linear acoustics, while those for second harmonic waves have not been well addressed and therefore were not properly considered in previous studies. In this work, we explicitly define the attenuation and diffraction corrections using the multi-Gaussian beam (MGB) equations, which were developed from the quasilinear solutions of the KZK equation. The effects of making these corrections are examined through the simulation of β determination in water. Diffraction corrections are found to have more significant effects than attenuation corrections, and the β values of water can be estimated experimentally with less than 5% error when the exact second harmonic diffraction corrections are used; the attenuation correction effects are negligible under the assumption of a linear frequency dependence of the attenuation coefficients, α₂ ≃ 2α₁.
NASA Astrophysics Data System (ADS)
Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.
2016-03-01
SMARTIES calculates the optical properties of oblate and prolate spheroidal particles, with comparable capabilities and ease-of-use as Mie theory for spheres. This suite of MATLAB codes provides a fully documented implementation of an improved T-matrix algorithm for the theoretical modelling of electromagnetic scattering by particles of spheroidal shape. Included are scripts that cover a range of scattering problems relevant to nanophotonics and plasmonics, including calculation of far-field scattering and absorption cross-sections for fixed incidence orientation, orientation-averaged cross-sections and scattering matrix, surface-field calculations as well as near-fields, wavelength-dependent near-field and far-field properties, and access to lower-level functions implementing the T-matrix calculations, including the T-matrix elements which may be calculated more accurately than with competing codes.
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas; Hariharan, S. I.; Maccamy, R. C.
1993-01-01
We consider the solution of scattering problems for the wave equation using approximate boundary conditions at artificial boundaries. These conditions are explicitly viewed as approximations to an exact boundary condition satisfied by the solution on the unbounded domain. We study the short- and long-term behavior of the error. It is proved that, in two space dimensions, no local-in-time, constant-coefficient boundary operator can lead to accurate results uniformly in time for the class of problems we consider. A variable-coefficient operator is developed which attains better accuracy (uniformly in time) than is possible with constant-coefficient approximations. The theory is illustrated by numerical examples. We also analyze the proposed boundary conditions using energy methods, leading to asymptotically correct error bounds.
NASA Astrophysics Data System (ADS)
Robinson, Andrew P.; Tipping, Jill; Cullen, David M.; Hamilton, David
2016-07-01
Accurate activity quantification is the foundation for all methods of radiation dosimetry for molecular radiotherapy (MRT). The requirements for patient-specific dosimetry using single photon emission computed tomography (SPECT) are challenging, particularly with respect to scatter correction. In this paper data from phantom studies, combined with results from a fully validated Monte Carlo (MC) SPECT camera simulation, are used to investigate the influence of the triple energy window (TEW) scatter correction on SPECT activity quantification for 177Lu MRT. Results from phantom data show that: (1) activity quantification for the total counts in the SPECT field-of-view demonstrates a significant overestimation in total activity recovery when TEW scatter correction is applied at low activities (≤ 200 MBq). (2) Applying the TEW scatter correction to activity quantification within a volume-of-interest with no background activity provides minimal benefit. (3) In the case of activity distributions with background activity, an overestimation of recovered activity of up to 30% is observed when using the TEW scatter correction. Data from MC simulation were used to perform a full analysis of the composition of events in a clinically reconstructed volume of interest. This allowed, for the first time, the separation of the relative contributions of partial volume effects (PVE) and inaccuracies in TEW scatter compensation to the observed overestimation of activity recovery. It is shown that, even with perfect partial volume compensation, TEW scatter correction can overestimate activity recovery by up to 11%. MC data are used to demonstrate that even a localized and optimized isotope-specific TEW correction cannot reflect a patient-specific activity distribution without prior knowledge of the complete activity distribution. This highlights the important role of MC simulation in SPECT activity quantification.
Single-scan scatter correction for cone-beam CT using a stationary beam blocker: a preliminary study
NASA Astrophysics Data System (ADS)
Niu, Tianye; Zhu, Lei
2011-03-01
The performance of cone-beam CT (CBCT) is greatly limited by scatter artifacts. Existing measurement-based methods have promising advantages as a standard scatter correction solution, except that they currently require multiple scans or moving the beam blocker during data acquisition to compensate for the missing primary data. These approaches are therefore impractical in clinical applications. In this work, we propose a new measurement-based scatter correction method that achieves accurate reconstruction with one single scan and a stationary beam blocker, two seemingly incompatible features which enable simple and effective scatter correction without an increase in scan time or patient dose. Based on CT reconstruction theory, we distribute the blocked areas over one projection where primary signals are considered redundant in a full scan. The CT image quality is not degraded even with the primary loss. Scatter is accurately estimated by interpolation, and scatter-corrected CT images are obtained using an FDK-based reconstruction. In a Monte Carlo simulation study, we first optimize the beam blocker geometry using projections of the Shepp-Logan phantom and then carry out a complete simulation of a CBCT scan on a water phantom. With a scatter-to-primary ratio around 1.0, our method reduces the CT number error from 293 to 2.9 Hounsfield units (HU) around the phantom center. The proposed approach is further evaluated on a CBCT tabletop system. On the Catphan 600 phantom, the reconstruction error is reduced from 202 to 10 HU in the selected region of interest after the proposed correction.
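The interpolation step above relies on the fact that detector pixels behind the blocker receive scatter only, so the smooth scatter profile can be interpolated across the unblocked pixels and subtracted. A 1D sketch for a single projection row (the blocker layout, scatter shape, and primary levels are toy assumptions):

```python
import numpy as np

def blocker_scatter_correct(projection, blocked_mask):
    """Estimate scatter from pixels behind the beam blocker (primary is
    zero there, so the measured signal is scatter only), interpolate it
    across unblocked pixels, and subtract to recover the primary."""
    idx = np.arange(projection.size)
    scatter = np.interp(idx, idx[blocked_mask], projection[blocked_mask])
    primary = np.clip(projection - scatter, 0.0, None)
    return primary, scatter

# Toy projection row: smooth scatter plus flat primary outside blocked strips
cols = np.arange(100)
blocked = (cols % 20) < 5                     # 5-pixel blocked strips every 20 px
true_scatter = 50 + 20 * np.sin(cols / 15.0)  # slowly varying scatter
primary_true = np.where(blocked, 0.0, 200.0)  # primary removed behind the blocker
proj = true_scatter + primary_true

primary_est, scatter_est = blocker_scatter_correct(proj, blocked)
```

Because the scatter profile varies slowly compared with the strip spacing, linear interpolation between strips recovers it to within a few counts; the paper's contribution is placing the strips so the lost primary rays stay redundant in a full scan.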
NASA Technical Reports Server (NTRS)
Pueschel, R. F.; Overbeck, V. R.; Snetsinger, K. G.; Russell, P. B.; Ferry, G. V.
1990-01-01
The use of the active scattering spectrometer probe (ASAS-X) to measure sulfuric acid aerosols on U-2 and ER-2 research aircraft has yielded results that are at times ambiguous due to the dependence of particles' optical signatures on refractive index as well as physical dimensions. The calibration correction of the ASAS-X optical spectrometer probe for stratospheric aerosol studies is validated through an independent and simultaneous sampling of the particles with impactors; sizing and counting of particles on SEM images yields total particle areas and volumes. Upon correction of calibration in light of these data, spectrometer results averaged over four size distributions are found to agree with similarly averaged impactor results to within a few percent, indicating that the optical properties or chemical composition of the sample aerosol must be known in order to achieve accurate optical aerosol spectrometer size analysis.
Patient-specific scatter correction for flat-panel detector-based cone-beam CT imaging
NASA Astrophysics Data System (ADS)
Zhao, Wei; Brunner, Stephen; Niu, Kai; Schafer, Sebastian; Royalty, Kevin; Chen, Guang-Hong
2015-02-01
A patient-specific scatter correction algorithm is proposed to mitigate scatter artefacts in cone-beam CT (CBCT). The approach belongs to the category of convolution-based methods, in which a scatter potential function is convolved with a convolution kernel to estimate the scatter profile. A key step in this method is to determine the free parameters introduced in both the scatter potential and the convolution kernel using a so-called calibration process, which seeks the optimal parameters such that the models for the scatter potential and the convolution kernel optimally fit previously known coarse estimates of the scatter profiles of the image object. Both direct measurements and Monte Carlo (MC) simulations have been proposed by other investigators to obtain these rough estimates. In the present paper, a novel method is proposed and validated to generate the needed coarse scatter profile for parameter calibration in the convolution method. The method is based upon an image segmentation of the scatter-contaminated CBCT image volume, followed by a reprojection of the segmented image volume using a given x-ray spectrum. The reprojected data are subtracted from the scatter-contaminated projection data to generate a coarse estimate of the scatter profile used in parameter calibration. The method was qualitatively and quantitatively evaluated using numerical simulations and experimental CBCT data acquired on a clinical CBCT imaging system. Results show that the proposed algorithm can significantly reduce scatter artefacts and recover the correct CT number. Numerical simulation results show the method is patient-specific, can accurately estimate the scatter, and is robust with respect to the segmentation procedure. For experimental and in vivo human data, the results show the CT number can be successfully recovered and anatomical structure visibility can be significantly improved.
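The convolution step itself can be sketched in a few lines: a scatter potential convolved with a parameterized kernel, here done in the Fourier domain (a Gaussian kernel is assumed purely for illustration; the calibrated kernel form and its free parameters in the paper may differ):

```python
import numpy as np

def convolution_scatter_estimate(potential, sigma, amplitude):
    """Estimate a 1D scatter profile by convolving a scatter potential with
    a unit-area Gaussian kernel of width sigma, scaled by amplitude.
    sigma and amplitude stand in for the calibrated free parameters."""
    n = potential.size
    f = np.fft.rfftfreq(n)
    kernel_ft = np.exp(-2.0 * (np.pi * f * sigma) ** 2)  # Gaussian in Fourier space
    return amplitude * np.fft.irfft(np.fft.rfft(potential) * kernel_ft, n=n)

# Impulse potential: the output is the kernel itself, scaled by amplitude
potential = np.zeros(128)
potential[64] = 1.0
scatter = convolution_scatter_estimate(potential, sigma=5.0, amplitude=0.4)
```

In the calibration process described above, sigma and amplitude would be tuned so this estimate matches the coarse scatter profile obtained from the segment-and-reproject step.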
One-loop Electroweak Radiative Corrections for Polarized Møller Scattering
NASA Astrophysics Data System (ADS)
Barkanova, Svetlana; Aleksejevs, Aleksandrs; Ilyichev, Alexander; Kolomensky, Yury; Zykunov, Vladimir
2011-04-01
Møller scattering measurements are a clean, powerful probe of new physics effects. However, before physics of interest can be extracted from the experimental data, radiative corrections must be taken into account very carefully. Using two different approaches, we perform updated and detailed calculations of the complete one-loop set of electroweak radiative corrections to parity violating electron-electron scattering asymmetry at low energies relevant for the ultra-precise 11 GeV MOLLER experiment planned at JLab. Although contributions from some of the self-energies and vertex diagrams calculated in the two approaches can differ significantly, our full gauge-invariant set still guarantees that the total relative weak corrections are in excellent agreement for the two methods of calculation. Our numerical results are presented for a range of experimental cuts and the relative importance of various contributions is analyzed. We also provide very compact expressions analytically free from non-physical parameters and show them to be valid for fast yet accurate estimations.
A Cavity Corrected 3D-RISM Functional for Accurate Solvation Free Energies.
Truchon, Jean-François; Pettitt, B Montgomery; Labute, Paul
2014-03-11
We show that an Ng bridge function modified version of the three-dimensional reference interaction site model (3D-RISM-NgB) solvation free energy method can accurately predict the hydration free energy (HFE) of a set of 504 organic molecules. To achieve this, a single unique constant parameter was adjusted to the computed HFE of single atom Lennard-Jones solutes. It is shown that 3D-RISM is relatively accurate at predicting the electrostatic component of the HFE without correction but requires a modification of the nonpolar contribution that originates in the formation of the cavity created by the solute in water. We use a free energy functional with the Ng scaling of the direct correlation function [Ng, K. C. J. Chem. Phys. 1974, 61, 2680]. This produces a rapid, reliable small molecule HFE calculation for applications in drug design. PMID:24634616
NASA Astrophysics Data System (ADS)
Mobberley, Sean David
Accurate, cross-scanner assessment of in-vivo air density, used to quantitatively assess the amount and distribution of emphysema in COPD subjects, has remained elusive. Hounsfield units (HU) within tracheal air can be considerably more positive than -1000 HU. With the advent of new dual-source scanners which employ dedicated scatter correction techniques, it is of interest to evaluate how the quantitative measures of lung density compare between dual-source and single-source scan modes. This study has sought to characterize in-vivo and phantom-based air metrics using dual-energy computed tomography technology where the nature of the technology has required adjustments to scatter correction. Anesthetized ovine (N=6), swine (N=13: more human-like rib cage shape), a lung phantom, and a thoracic phantom were studied using a dual-source MDCT scanner (Siemens Definition Flash). Multiple dual-source dual-energy (DSDE) and single-source (SS) scans taken at different energy levels and scan settings were acquired for direct quantitative comparison. Density histograms were evaluated for the lung, tracheal, water, and blood segments. Image data were obtained at 80, 100, 120, and 140 kVp in the SS mode (B35f kernel) and at 80, 100, 140, and 140-Sn (tin filtered) kVp in the DSDE mode (B35f and D30f kernels), in addition to variations in dose, rotation time, and pitch. To minimize the effect of cross-scatter, the phantom scans in the DSDE mode were obtained by reducing the tube current of one of the tubes to its minimum (near zero) value. When using image data obtained in the DSDE mode, the median HU values in the tracheal regions of all animals and the phantom were consistently closer to -1000 HU regardless of reconstruction kernel (chapters 3 and 4). Similarly, HU values of water and blood were consistently closer to their nominal values of 0 HU and 55 HU respectively. When using image data obtained in the SS mode the air CT numbers demonstrated a consistent positive shift of up to 35 HU
Fullerton, G D; Keener, C R; Cameron, I L
1994-12-01
The authors describe empirical corrections to ideally dilute expressions for freezing point depression of aqueous solutions to arrive at new expressions accurate up to three molal concentration. The method assumes non-ideality is due primarily to solute/solvent interactions, such that the correct free water mass Mwc is the mass of water in solution Mw minus I·Ms, where Ms is the mass of solute and I an empirical solute/solvent interaction coefficient. The interaction coefficient is easily derived from the constant in the linear regression fit to the experimental plot of Mw/Ms as a function of 1/ΔT (inverse freezing point depression). The I-value, when substituted into the new thermodynamic expressions derived from the assumption of equivalent activity of water in solution and ice, provides accurate predictions of freezing point depression (±0.05 °C) up to 2.5 molal concentration for all the test molecules evaluated: glucose, sucrose, glycerol and ethylene glycol. The concentration limit is the approximate monolayer water coverage limit for the solutes, which suggests that direct solute/solute interactions are negligible below this limit. This is contrary to the view of many authors due to the common practice of including hydration forces (a soft potential added to the hard-core atomic potential) in the interaction potential between solute particles. When this is recognized the two viewpoints are in fundamental agreement. PMID:7699200
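The regression step described above is straightforward to reproduce; a minimal numpy sketch with a hypothetical glucose-like solute (the cryoscopic constant, molar mass and interaction coefficient are illustrative values, not the paper's fitted numbers). Rearranging the model gives Mw/Ms = Kf/(MWs*dT) + I, so a linear fit of Mw/Ms against 1/dT yields I as the intercept:

```python
import numpy as np

KF = 1.86           # K*kg/mol, cryoscopic constant of water
MW_GLUCOSE = 0.180  # kg/mol, molar mass of the hypothetical test solute
I_TRUE = 0.3        # assumed solute/solvent interaction coefficient

# Synthesize "measurements": choose water/solute mass ratios and invert
# the model to obtain the corresponding freezing point depressions dT.
ratio = np.linspace(2.0, 20.0, 10)          # Mw/Ms
dT = (KF / MW_GLUCOSE) / (ratio - I_TRUE)   # K

# Linear regression of Mw/Ms versus 1/dT recovers I as the intercept.
slope, intercept = np.polyfit(1.0 / dT, ratio, 1)
print(f"slope = {slope:.3f} (Kf/MWs), I = {intercept:.3f}")
```

With real data the scatter of the measured points around this line sets the uncertainty of I.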
Aleksejevs, Aleksandrs; Barkanova, Svetlana; Ilyichev, Alexander; Zykunov, Vladimir
2010-11-01
We perform updated and detailed calculations of the complete NLO set of electroweak radiative corrections to parity-violating e- e- --> e- e- (gamma) scattering asymmetries at energies relevant for the ultra-precise Moller experiment coming soon at JLab. Our numerical results are presented for a range of experimental cuts, and the relative importance of various contributions is analyzed. We also provide very compact analytical expressions free from non-physical parameters and show them to be valid for fast yet accurate estimations.
NASA Astrophysics Data System (ADS)
Singh, Malkiat; Bettenhausen, Michael H.
2011-08-01
Faraday rotation changes the polarization plane of linearly polarized microwaves which propagate through the ionosphere. To correct for ionospheric polarization error, it is necessary to have electron density profiles on a global scale that represent the ionosphere in real time. We use raytrace through the combined models of ionospheric conductivity and electron density (ICED), Bent, and Gallagher models (RIBG model) to specify the ionospheric conditions by ingesting the GPS data from observing stations that are as close as possible to the observation time and location of the space system for which the corrections are required. To accurately calculate Faraday rotation corrections, we also utilize the raytrace utility of the RIBG model instead of the normal shell model assumption for the ionosphere. We use WindSat data, which exhibits a wide range of orientations of the raypath and a high data rate of observations, to provide a realistic data set for analysis. The standard single-shell models at 350 and 400 km are studied along with a new three-shell model and compared with the raytrace method for computation time and accuracy. We have compared the Faraday results obtained with climatological (International Reference Ionosphere and RIBG) and physics-based (Global Assimilation of Ionospheric Measurements) ionospheric models. We also study the impact of limitations in the availability of GPS data on the accuracy of the Faraday rotation calculations.
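In the simplest thin-shell approximation (the shell models that the raytrace method is compared against), the Faraday rotation angle has a standard closed form, Omega = 2.36e4 * B_par * TEC / f^2 in SI units. A small Python sketch with assumed ionosphere and field values (not WindSat calibration data):

```python
import math

def faraday_rotation(tec_m2: float, b_parallel_t: float, freq_hz: float) -> float:
    """Thin-shell Faraday rotation angle in radians.

    Standard approximation: Omega = 2.36e4 * B_par * TEC / f^2,
    with TEC in electrons/m^2, B in tesla, f in Hz.
    """
    return 2.36e4 * b_parallel_t * tec_m2 / freq_hz**2

# Example: 10 TECU ionosphere, a typical mid-latitude field component
# along the raypath (assumed), and a 10.7 GHz radiometer channel.
tec = 10 * 1e16      # 10 TECU in electrons/m^2
b_par = 4.5e-5       # tesla (assumed)
omega = faraday_rotation(tec, b_par, 10.7e9)
print(f"rotation = {math.degrees(omega):.4f} deg")
```

The 1/f^2 dependence is why the effect is small at WindSat's higher frequencies but significant for L-band systems.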
NASA Astrophysics Data System (ADS)
Gillen, Rebecca; Firbank, Michael J.; Lloyd, Jim; O'Brien, John T.
2015-09-01
This study investigated if the appearance and diagnostic accuracy of HMPAO brain perfusion SPECT images could be improved by using CT-based attenuation and scatter correction compared with the uniform attenuation correction method. A cohort of subjects who were clinically categorized as Alzheimer's Disease (n=38), Dementia with Lewy Bodies (n=29) or healthy normal controls (n=30) underwent SPECT imaging with Tc-99m HMPAO and a separate CT scan. The SPECT images were processed using: (a) a correction map derived from the subject's CT scan, (b) the Chang uniform approximation for correction, or (c) no attenuation correction. Images were visually inspected. The ratios between key regions of interest known to be affected or spared in each condition were calculated for each correction method, and the differences between these ratios were evaluated. The images produced using the different corrections were noted to be visually different. However, ROI analysis found similar statistically significant differences between control and dementia groups and between AD and DLB groups regardless of the correction map used. We did not identify an improvement in diagnostic accuracy in images which were corrected using CT-based attenuation and scatter correction, compared with those corrected using a uniform correction map.
NASA Astrophysics Data System (ADS)
Chen, Jingyi; Zebker, Howard A.; Knight, Rosemary
2015-11-01
Interferometric synthetic aperture radar (InSAR) is a radar remote sensing technique for measuring surface deformation to millimeter-level accuracy at meter-scale resolution. Obtaining accurate deformation measurements in agricultural regions is difficult because the signal is often decorrelated due to vegetation growth. We present here a new algorithm for retrieving InSAR deformation measurements over areas with severe vegetation decorrelation using adaptive phase interpolation between persistent scatterer (PS) pixels, those points at which surface scattering properties do not change much over time and thus decorrelation artifacts are minimal. We apply this algorithm to L-band ALOS interferograms acquired over the San Luis Valley, Colorado, and the Tulare Basin, California. In both areas, the pumping of groundwater for irrigation results in deformation of the land that can be detected using InSAR. We show that the PS-based algorithm can significantly reduce the artifacts due to vegetation decorrelation while preserving the deformation signature.
Accurate solution of the proton-hydrogen three-body scattering problem
NASA Astrophysics Data System (ADS)
Abdurakhmanov, I. B.; Kadyrov, A. S.; Bray, I.
2016-02-01
An accurate solution to the fundamental three-body problem of proton-hydrogen scattering including direct scattering and ionization, electron capture and electron capture into the continuum (ECC) is presented. The problem has been addressed using a quantum-mechanical two-center convergent close-coupling approach. At each energy the internal consistency of the solution is demonstrated with the help of single-center calculations, with both approaches converging independently to the same electron-loss cross section. This is the sum of the electron capture, ECC and direct ionization cross sections, which are only obtainable separately in the solution of the problem using the two-center expansion. Agreement with experiment for the electron-capture cross section is excellent. However, for the ionization cross sections some discrepancy exists. Given the demonstrated internal consistency we remain confident in the provided theoretical solution.
Commissioning a passive-scattering proton therapy nozzle for accurate SOBP delivery
Engelsman, M.; Lu, H.-M.; Herrup, D.; Bussiere, M.; Kooy, H. M.
2009-01-01
Proton radiotherapy centers that currently use passively scattered proton beams perform field-specific calibrations for a non-negligible fraction of treatment fields, which is time and resource consuming. Our improved understanding of the passive scattering mode of the IBA universal nozzle, especially of the current modulation function, allowed us to re-commission our treatment control system for accurate delivery of SOBPs of any range and modulation, and to predict the output for each of these fields. We moved away from individual field calibrations to a state where continued quality assurance of SOBP field delivery is ensured by limited system-wide measurements that require only one hour per week. This manuscript reports on a protocol for generation of desired SOBPs and prediction of dose output. PMID:19610306
Hong Xinguo; Hao Quan
2009-01-15
In this paper, we report a method of precise in situ x-ray scattering measurements on protein solutions using small stationary sample cells. Although reduction in the radiation damage induced by intense synchrotron radiation sources is indispensable for the correct interpretation of scattering data, there is still a lack of effective methods to overcome radiation-induced aggregation and extract scattering profiles free from chemical or structural damage. It is found that radiation-induced aggregation mainly begins on the surface of the sample cell and grows along the beam path; the diameter of the damaged region is comparable to the x-ray beam size. Radiation-induced aggregation can be effectively avoided by using a two-dimensional scan (2D mode), with an interval as small as 1.5 times the beam size, at low temperature (e.g., 4 deg. C). A radiation sensitive protein, bovine hemoglobin, was used to test the method. A standard deviation of less than 5% in the small angle region was observed from a series of nine spectra recorded in 2D mode, in contrast to the intensity variation seen using the conventional stationary technique, which can exceed 100%. Wide-angle x-ray scattering data were collected at a standard macromolecular diffraction station using the same data collection protocol and showed a good signal/noise ratio (better than the reported data on the same protein using a flow cell). The results indicate that this method is an effective approach for obtaining precise measurements of protein solution scattering.
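The 2D-mode exposure pattern can be sketched as a simple grid of sample-cell positions with a 1.5x beam-size step, so that successive exposures never overlap a previously irradiated (and possibly damaged) region; the beam size and grid extent below are assumed values:

```python
# Generate a raster of exposure positions for a stationary-cell 2D scan.
BEAM = 0.10        # mm, assumed x-ray beam size
STEP = 1.5 * BEAM  # minimum interval quoted in the abstract
NX, NY = 5, 3      # number of exposures across the sample cell (assumed)

positions = [(ix * STEP, iy * STEP) for iy in range(NY) for ix in range(NX)]
for x, y in positions[:3]:
    print(f"expose at ({x:.2f} mm, {y:.2f} mm)")
```

Each position receives a single short exposure, so radiation-induced aggregation seeded at one spot cannot contaminate the next measurement.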
Robust scatter correction method for cone-beam CT using an interlacing-slit plate
NASA Astrophysics Data System (ADS)
Huang, Kui-Dong; Xu, Zhe; Zhang, Ding-Hua; Zhang, Hua; Shi, Wen-Long
2016-06-01
Cone-beam computed tomography (CBCT) has been widely used in medical imaging and industrial nondestructive testing, but the presence of scattered radiation causes a significant reduction of image quality. In this article, a robust scatter correction method for CBCT using an interlacing-slit plate (ISP) is developed for convenient practical use. Firstly, a Gaussian filtering method is proposed to compensate the missing data of the inner scatter image, and simultaneously to avoid too-large values of the calculated inner scatter and to smooth the inner scatter field. Secondly, an interlacing-slit scan without detector gain correction is carried out to enhance the practicality and convenience of the scatter correction method. Finally, a denoising step for scatter-corrected projection images is added to the process flow to control noise amplification. The experimental results show that the improved method not only makes the scatter correction more robust and convenient, but also achieves good quality of scatter-corrected slice images. Supported by the National Science and Technology Major Project of the Ministry of Industry and Information Technology of China (2012ZX04007021), the Aeronautical Science Fund of China (2014ZE53059), and the Fundamental Research Funds for Central Universities of China (3102014KYJD022)
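The gap-filling idea behind the Gaussian filtering step can be sketched with a numpy-only normalized convolution (used here as a stand-in for a dedicated Gaussian filter; the synthetic scatter field, slit spacing and smoothing width are assumptions):

```python
import numpy as np

def gaussian_kernel1d(sigma: float, radius: int) -> np.ndarray:
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth2d(img: np.ndarray, sigma: float) -> np.ndarray:
    """Separable Gaussian smoothing with edge padding (numpy only)."""
    k = gaussian_kernel1d(sigma, radius=int(3 * sigma))
    pad = len(k) // 2
    out = np.apply_along_axis(
        lambda r: np.convolve(np.pad(r, pad, mode='edge'), k, 'valid'), 1, img)
    out = np.apply_along_axis(
        lambda c: np.convolve(np.pad(c, pad, mode='edge'), k, 'valid'), 0, out)
    return out

# The slits only sample the inner-scatter field at some columns;
# normalized convolution fills the gaps and smooths the result.
rng = np.random.default_rng(0)
scatter = 100.0 + 5.0 * rng.standard_normal((32, 32))  # synthetic scatter field
mask = np.zeros_like(scatter)
mask[:, ::4] = 1.0                                      # measured slit columns only

filled = smooth2d(scatter * mask, 2.0) / np.maximum(smooth2d(mask, 2.0), 1e-6)
print(f"mean measured {scatter[:, ::4].mean():.1f}, mean filled {filled.mean():.1f}")
```

Dividing the smoothed masked data by the smoothed mask weights each output pixel by how much measured data falls under the kernel, which both interpolates the missing columns and suppresses outliers.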
Scatter correction method for cone-beam CT based on interlacing-slit scan
NASA Astrophysics Data System (ADS)
Huang, Kui-Dong; Zhang, Hua; Shi, Yi-Kai; Zhang, Liang; Xu, Zhe
2014-09-01
Cone-beam computed tomography (CBCT) has the notable features of high efficiency and high precision, and is widely used in areas such as medical imaging and industrial non-destructive testing. However, ray scatter reduces the quality of CT images. Drawing on the slit collimation approach, a scatter correction method for CBCT based on an interlacing-slit scan is proposed. Firstly, according to the characteristics of CBCT imaging, a scatter suppression plate with interlacing slits is designed and fabricated. Then the imaging of the scatter suppression plate is analyzed, and a scatter correction calculation method for CBCT based on image fusion is proposed: it splices together a complete set of scatter-suppressed projection images from the interlacing-slit projection images of the left and right imaging regions of the scatter suppression plate, and simultaneously completes the scatter correction within the flat panel detector (FPD). Finally, the overall process of scatter suppression and correction is provided. The experimental results show that this method can significantly improve the clarity of the slice images and achieve a good scatter correction.
NASA Astrophysics Data System (ADS)
Park, Seyoun; Robinson, Adam; Quon, Harry; Kiess, Ana P.; Shen, Colette; Wong, John; Plishker, William; Shekhar, Raj; Lee, Junghoon
2016-03-01
In this paper, we propose a CT-CBCT registration method to accurately predict tumor volume change based on daily cone-beam CTs (CBCTs) during radiotherapy. CBCT is commonly used to reduce patient setup error during radiotherapy, but its poor image quality impedes accurate monitoring of anatomical changes. Although the physician's contours drawn on the planning CT can be automatically propagated to daily CBCTs by deformable image registration (DIR), artifacts in CBCT often cause undesirable errors. To improve the accuracy of the registration-based segmentation, we developed a DIR method that iteratively corrects CBCT intensities by local histogram matching. Three popular DIR algorithms (B-spline, demons, and optical flow) with the intensity correction were implemented on a graphics processing unit for efficient computation. We evaluated their performances on six head and neck (HN) cancer cases. For each case, four trained scientists manually contoured the nodal gross tumor volume (GTV) on the planning CT and every other fraction CBCT, to which the GTV contours propagated by DIR were compared. The performance was also compared with commercial image registration software based on conventional mutual information (MI), VelocityAI (Varian Medical Systems Inc.). The volume differences (mean±std in cc) between the average of the manual segmentations and the automatic segmentations are 3.70±2.30 (B-spline), 1.25±1.78 (demons), 0.93±1.14 (optical flow), and 4.39±3.86 (VelocityAI). The proposed method significantly reduced the estimation error by 9% (B-spline), 38% (demons), and 51% (optical flow) over the results using VelocityAI. Although demonstrated only on HN nodal GTVs, the results imply that the proposed method can produce improved segmentation of other critical structures over conventional methods.
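The intensity-correction idea can be illustrated with a histogram-matching function (the paper applies the matching in local patches and iteratively; this simplified global version with synthetic intensities is illustrative only):

```python
import numpy as np

def match_histogram(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Map source intensities so their CDF matches the reference CDF."""
    s_vals, s_idx, s_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    mapped = np.interp(s_cdf, r_cdf, r_vals)   # invert the reference CDF
    return mapped[s_idx].reshape(source.shape)

# Synthetic example: a CBCT-like patch showing the same anatomy as the
# planning CT but with a global intensity offset and scaling.
rng = np.random.default_rng(1)
ct = rng.normal(0.0, 1.0, (64, 64))   # "planning CT" patch
cbct = 0.7 * ct + 25.0                # shifted/scaled intensities
corrected = match_histogram(cbct, ct)
print(f"mean offset before {abs(cbct.mean() - ct.mean()):.1f}, "
      f"after {abs(corrected.mean() - ct.mean()):.2f}")
```

Because the synthetic CBCT is a monotone transform of the CT, matching the histograms recovers the CT intensities almost exactly; real CBCT artifacts are spatially varying, which is why the paper matches histograms locally.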
Characterization of image quality for 3D scatter-corrected breast CT images
NASA Astrophysics Data System (ADS)
Pachon, Jan H.; Shah, Jainil; Tornai, Martin P.
2011-03-01
The goal of this study was to characterize the image quality of our dedicated, quasi-monochromatic spectrum, cone beam breast imaging system under scatter-corrected and non-scatter-corrected conditions for a variety of breast compositions. CT projections were acquired of a breast phantom containing two concentric sets of acrylic spheres that varied in size (1-8 mm) based on their polar position. The breast phantom was filled with 3 different concentrations of methanol and water, simulating a range of breast densities (0.79-1.0 g/cc); acrylic yarn was sometimes included to simulate connective tissue of a breast. For each phantom condition, 2D scatter was measured for all projection angles. Scatter-corrected and uncorrected projections were then reconstructed with an iterative ordered subsets convex algorithm. Reconstructed image quality was characterized using SNR and contrast analysis, followed by a human observer detection task for the spheres in the different concentric rings. Results show that scatter correction effectively reduces the cupping artifact and improves image contrast and SNR. Results from the observer study indicate that there was no statistical difference in the number or sizes of lesions observed in the scatter-corrected versus non-scatter-corrected images for all densities. Nonetheless, applying scatter correction for differing breast conditions improves overall image quality.
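The SNR and contrast analysis can be expressed with the usual ROI-based definitions; a minimal sketch with invented voxel statistics (not the study's data):

```python
import numpy as np

def contrast(signal_roi: np.ndarray, background_roi: np.ndarray) -> float:
    """Relative contrast of a lesion ROI against its background ROI."""
    return (signal_roi.mean() - background_roi.mean()) / background_roi.mean()

def snr(signal_roi: np.ndarray, background_roi: np.ndarray) -> float:
    """Signal difference over background noise."""
    return (signal_roi.mean() - background_roi.mean()) / background_roi.std()

rng = np.random.default_rng(4)
sphere = rng.normal(1.20, 0.05, 500)  # acrylic-sphere voxels (toy values)
medium = rng.normal(0.90, 0.05, 500)  # methanol/water background (toy values)
print(f"contrast = {contrast(sphere, medium):.2f}, SNR = {snr(sphere, medium):.1f}")
```

Removing the scatter-induced cupping lowers the background mean and its spatial variation, which raises both metrics.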
Complete calculation of electroweak corrections for polarized Møller scattering at high energies
NASA Astrophysics Data System (ADS)
Zykunov, V. A.
2009-09-01
A complete calculation of electroweak radiative corrections to observables of polarized Møller scattering at high energies was performed. This calculation took explicitly into account contributions caused by hard bremsstrahlung. A FORTRAN code that permitted including radiative corrections to high-energy Møller scattering under arbitrary electron-detection conditions was written. It was shown that the electroweak corrections caused by hard bremsstrahlung were rather strongly dependent on the choice of experimental cuts and changed substantially the polarization asymmetry in the region of high energies and over a broad interval of scattering angles.
Rescattering corrections and self-consistent metric in planckian scattering
NASA Astrophysics Data System (ADS)
Ciafaloni, M.; Colferai, D.
2014-10-01
Starting from the ACV approach to transplanckian scattering, we present a development of the reduced-action model in which the (improved) eikonal representation is able to describe particles' motion at large scattering angle and, furthermore, UV-safe (regular) rescattering solutions are found and incorporated in the metric. The resulting particles' shock-waves undergo calculable trajectory shifts and time delays during the scattering process, which turns out to be consistently described by both action and metric, up to relative order R²/b² in the expansion in the gravitational radius over impact parameter. Some suggestions about the role and the (re)scattering properties of irregular solutions, not fully investigated here, are also presented.
NASA Astrophysics Data System (ADS)
Cheng, Ju-Chieh Kevin; Rahmim, Arman; Blinder, Stephan; Camborde, Marie-Laure; Raywood, Kelvin; Sossi, Vesna
2007-04-01
We describe an ordinary Poisson list-mode expectation maximization (OP-LMEM) algorithm with a sinogram-based scatter correction method based on the single scatter simulation (SSS) technique and a random correction method based on the variance-reduced delayed-coincidence technique. We also describe a practical approximate scatter and random-estimation approach for dynamic PET studies based on a time-averaged scatter and random estimate followed by scaling according to the global numbers of true coincidences and randoms for each temporal frame. The quantitative accuracy achieved using OP-LMEM was compared to that obtained using the histogram-mode 3D ordinary Poisson ordered subset expectation maximization (3D-OP) algorithm with similar scatter and random correction methods, and they showed excellent agreement. The accuracy of the approximated scatter and random estimates was tested by comparing time activity curves (TACs) as well as the spatial scatter distribution from dynamic non-human primate studies obtained from the conventional (frame-based) approach and those obtained from the approximate approach. An excellent agreement was found, and the time required for the calculation of scatter and random estimates in the dynamic studies became much less dependent on the number of frames (we achieved a nearly four times faster performance on the scatter and random estimates by applying the proposed method). The precision of the scatter fraction was also demonstrated for the conventional and the approximate approach using phantom studies. This work was supported by the Canadian Institute of Health Research, a TRIUMF Life Science Grant, the Natural Sciences and Engineering Research Council of Canada UFA (V Sossi) and the Michael Smith Foundation for Health Research Scholarship (V Sossi).
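The approximate frame-wise estimation can be sketched as a per-frame rescaling of a single time-averaged sinogram; the toy sinograms and global counts below are assumptions, not values from the study:

```python
import numpy as np

# One time-averaged scatter (and random) sinogram is computed for the whole
# dynamic study, then scaled to each temporal frame by that frame's global
# trues (randoms) relative to the mean over frames.
avg_scatter = np.full((4, 4), 10.0)      # time-averaged scatter sinogram (toy)
avg_randoms = np.full((4, 4), 4.0)       # time-averaged randoms sinogram (toy)
trues = np.array([2.0e6, 1.0e6, 0.5e6])  # global trues per frame (assumed)
randoms = np.array([8.0e5, 4.0e5, 2.0e5])  # global randoms per frame (assumed)

scatter_frames = [avg_scatter * t / trues.mean() for t in trues]
random_frames = [avg_randoms * r / randoms.mean() for r in randoms]
print(f"frame-0 scatter scale: {trues[0] / trues.mean():.3f}")
```

Only one scatter/random estimation is performed per study, so the cost no longer grows with the number of frames, which is the speed-up the abstract reports.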
NASA Technical Reports Server (NTRS)
Boughner, Robert E.
1986-01-01
A method for calculating the photodissociation rates needed for photochemical modeling of the stratosphere, which includes the effects of molecular scattering, is described. The procedure is based on Sokolov's method of averaging functional correction. The radiation model and approximations used to calculate the radiation field are examined. The approximated diffuse fields and photolysis rates are compared with exact data. It is observed that the approximate solutions differ from the exact result by 10 percent or less at altitudes above 15 km; the photolysis rates differ from the exact rates by less than 5 percent for altitudes above 10 km and all zenith angles, and by less than 1 percent for altitudes above 15 km.
Use of beam stoppers to correct random and scatter coincidence in PET: A Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Lin, Hsin-Hon; Chuang, Keh-Shih; Lu, Cheng-Chang; Ni, Yu-Ching; Jan, Meei-Ling
2013-05-01
3D acquisition in positron emission tomography (PET) produces data with improved signal-to-noise ratios compared with conventional 2D PET. However, the sensitivity increase is accompanied by an increase in the number of scattered photons and random coincidences detected. Scatter and random coincidences lead to a loss in image contrast and degrade the accuracy of quantitative analysis. This work examines the feasibility of using beam stoppers (BS) for correcting scatter and random coincidence simultaneously. The origin of a non-true event does not lie on the recorded line of response (LOR). Therefore, a BS placed on the LOR that passes through the source position absorbs a particular fraction of the true events but has little effect on the scatter and random events. The subtraction of the two scanned data sets, with and without the BS, can be employed to estimate the non-true events at the LOR. Monte Carlo (MC) simulations of 3D PET on an EEC phantom and a Zubal phantom are conducted to validate the proposed approach. Both scattered and random coincidences can be estimated and corrected using the proposed method. The mean squared errors measured on the random+scatter sinogram of the phantom obtained by the proposed method are much less than those obtained using the conventional correction method (delayed-coincidence subtraction for random correction combined with single scatter simulation for scatter correction). Preliminary results indicate that the proposed method is feasible for clinical application.
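The subtraction scheme above amounts to solving two linear equations per line of response; a numpy sketch with a hypothetical beam-stopper transmission (the absorbed fraction f is assumed known from calibration, and the counts are synthetic):

```python
import numpy as np

# On a blocked LOR the beam stopper removes a known fraction f of the true
# events but, to first order, leaves scatter + randoms untouched:
#   open    = T + N
#   blocked = (1 - f) * T + N
# so the non-true background N follows by subtraction.
f = 0.95                           # fraction of trues absorbed (assumed)
rng = np.random.default_rng(2)
T = rng.uniform(50, 100, size=16)  # true coincidences along sampled LORs
N = rng.uniform(10, 30, size=16)   # scatter + randoms (the unknown target)

open_scan = T + N
blocked_scan = (1 - f) * T + N
T_est = (open_scan - blocked_scan) / f
N_est = open_scan - T_est
print(f"max background error: {np.abs(N_est - N).max():.2e}")
```

In practice Poisson noise in both scans propagates into N_est, so the estimate is smoothed or fit before being subtracted from the emission data.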
SMARTIES: User-friendly codes for fast and accurate calculations of light scattering by spheroids
NASA Astrophysics Data System (ADS)
Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.
2016-05-01
We provide a detailed user guide for SMARTIES, a suite of MATLAB codes for the calculation of the optical properties of oblate and prolate spheroidal particles, with comparable capabilities and ease of use as Mie theory for spheres. SMARTIES is a MATLAB implementation of an improved T-matrix algorithm for the theoretical modelling of electromagnetic scattering by particles of spheroidal shape. The theory behind the improvements in numerical accuracy and convergence is briefly summarized, with reference to the original publications. Instructions for use, a detailed description of the code structure and its range of applicability, as well as guidelines for further developments by advanced users, are discussed in separate sections of this user guide. The code may be useful to researchers seeking a fast, accurate and reliable tool to simulate the near-field and far-field optical properties of elongated particles, but will also appeal to other developers of light-scattering software seeking a reliable benchmark for non-spherical particles with a challenging aspect ratio and/or refractive index contrast.
A single-scattering correction for the seismo-acoustic parabolic equation.
Collins, Michael D
2012-04-01
An efficient single-scattering correction that does not require iterations is derived and tested for the seismo-acoustic parabolic equation. The approach is applicable to problems involving gradual range dependence in a waveguide with fluid and solid layers, including the key case of a sloping fluid-solid interface. The single-scattering correction is asymptotically equivalent to a special case of a single-scattering correction for problems that only have solid layers [Küsel et al., J. Acoust. Soc. Am. 121, 808-813 (2007)]. The single-scattering correction has a simple interpretation (conservation of interface conditions in an average sense) that facilitated its generalization to problems involving fluid layers. Promising results are obtained for problems in which the ocean bottom interface has a small slope. PMID:22501044
Increasing the imaging depth through computational scattering correction (Conference Presentation)
NASA Astrophysics Data System (ADS)
Koberstein-Schwarz, Benno; Omlor, Lars; Schmitt-Manderbach, Tobias; Mappes, Timo; Ntziachristos, Vasilis
2016-03-01
Imaging depth is one of the most prominent limitations in light microscopy. The depth at which we are still able to resolve biological structures is limited by the scattering of light within the sample. We have developed an algorithm to compensate for the influence of scattering. The potential of the algorithm is demonstrated on a 3D image stack of a zebrafish embryo captured with a selective plane illumination microscope (SPIM). With our algorithm we were able to shift the depth at which scattering starts to blur the image and affect image quality by around 30 µm. For the reconstruction the algorithm uses only information from within the image stack, so it can be applied to the image data from any SPIM system without further hardware adaptation; there is also no need for multiple scans from different views. The underlying model treats the recorded image as a convolution between the distribution of fluorophores and a point spread function that describes the blur due to scattering. Our algorithm performs a space-variant blind deconvolution on the image. To account for the increasing amount of scattering in deeper tissue, we introduce a new regularizer that models the increasing width of the point spread function with depth, in order to improve image quality deep in the sample. Since the assumptions the algorithm is based on are not limited to SPIM images, the algorithm should also work on other imaging techniques that provide a 3D image volume.
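The forward model underlying the deconvolution can be sketched as a depth-dependent Gaussian blur; the base width sigma0 and the growth rate k are hypothetical parameters, not values from the presentation:

```python
import numpy as np

def gaussian_blur1d(signal: np.ndarray, sigma: float) -> np.ndarray:
    """Blur a 1D profile with a normalized Gaussian kernel (edge-padded)."""
    if sigma <= 0:
        return signal.copy()
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    return np.convolve(np.pad(signal, radius, mode='edge'), k, 'valid')

# Regularizer assumption: the PSF width grows linearly with depth z.
sigma0, k = 0.5, 0.05                     # hypothetical parameters
fluorophores = np.zeros(101)
fluorophores[50] = 1.0                    # a point source
for z_um in (0, 30, 60):
    img = gaussian_blur1d(fluorophores, sigma0 + k * z_um)
    print(f"z = {z_um:3d} um  peak = {img.max():.3f}")
```

The blind deconvolution then estimates both the fluorophore distribution and the PSF parameters, with the depth-growth model constraining the otherwise ill-posed fit.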
The accurate assessment of small-angle X-ray scattering data
Grant, Thomas D.; Luft, Joseph R.; Carter, Lester G.; Matsui, Tsutomu; Weiss, Thomas M.; Martel, Anne; Snell, Edward H.
2015-01-01
A set of quantitative techniques is suggested for assessing SAXS data quality. These are applied in the form of a script, SAXStats, to a test set of 27 proteins, showing that these techniques are more sensitive than manual assessment of data quality. Small-angle X-ray scattering (SAXS) has grown in popularity in recent times with the advent of bright synchrotron X-ray sources, powerful computational resources and algorithms enabling the calculation of increasingly complex models. However, the lack of standardized data-quality metrics presents difficulties for the growing user community in accurately assessing the quality of experimental SAXS data. Here, a series of metrics to quantitatively describe SAXS data in an objective manner using statistical evaluations are defined. These metrics are applied to identify the effects of radiation damage, concentration dependence and interparticle interactions on SAXS data from a set of 27 previously described targets for which high-resolution structures have been determined via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. The studies show that these metrics are sufficient to characterize SAXS data quality on a small sample set with statistical rigor and sensitivity similar to or better than manual analysis. The development of data-quality analysis strategies such as these initial efforts is needed to enable the accurate and unbiased assessment of SAXS data quality.
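One example of the kind of objective statistical metric discussed above is a reduced chi-square comparison between successive exposures to flag radiation damage (an illustrative metric in the spirit of the paper, not the SAXStats implementation; the profile, errors and damage model are synthetic):

```python
import numpy as np

def reduced_chi2(i1, s1, i2, s2):
    """Reduced chi-square between two SAXS profiles with errors.

    Values consistent with 1 indicate agreement within counting noise;
    values well above 1 flag a systematic change such as radiation damage.
    """
    return np.mean((i1 - i2) ** 2 / (s1 ** 2 + s2 ** 2))

rng = np.random.default_rng(3)
q = np.linspace(0.01, 0.3, 200)
truth = np.exp(-(q * 20.0) ** 2 / 3)  # Guinier-like profile, Rg = 20 (toy)
err = 0.01 * np.ones_like(q)

frame1 = truth + err * rng.standard_normal(q.size)
frame2 = truth + err * rng.standard_normal(q.size)                   # undamaged repeat
frame3 = truth * (1 + 2.0 * q) + err * rng.standard_normal(q.size)   # "damaged" frame

print(f"frame1 vs frame2: {reduced_chi2(frame1, err, frame2, err):.2f}")
print(f"frame1 vs frame3: {reduced_chi2(frame1, err, frame3, err):.2f}")
```

Applied frame by frame, such a statistic gives an objective cutoff for which exposures can be merged, replacing a by-eye judgment.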
NASA Astrophysics Data System (ADS)
Elnasir, Selma; Shamsuddin, Siti Mariyam; Farokhi, Sajad
2015-01-01
Palm vein recognition (PVR) is a promising new biometric that has been applied successfully as a method of access control by many organizations, and which has even further potential in the field of forensics. The palm vein pattern has highly discriminative features that are difficult to forge because of its subcutaneous position in the palm. Despite considerable progress, a few practical issues remain, and providing accurate palm vein readings is still an unsolved problem in biometrics. We propose a robust and more accurate PVR method based on the combination of wavelet scattering (WS) with spectral regression kernel discriminant analysis (SRKDA). As the dimension of the WS-generated features is quite large, SRKDA is required to reduce the extracted features and enhance the discrimination. The results based on two public databases, the PolyU Hyperspectral Palmprint database and the PolyU Multispectral Palmprint database, show the high performance of the proposed scheme in comparison with state-of-the-art methods. The proposed approach scored a 99.44% identification rate and a 99.90% verification rate [equal error rate (EER) = 0.1%] for the hyperspectral database, and a 99.97% identification rate and a 99.98% verification rate (EER = 0.019%) for the multispectral database.
The accurate assessment of small-angle X-ray scattering data
Grant, Thomas D.; Luft, Joseph R.; Carter, Lester G.; Matsui, Tsutomu; Weiss, Thomas M.; Martel, Anne; Snell, Edward H.
2015-01-01
Small-angle X-ray scattering (SAXS) has grown in popularity in recent times with the advent of bright synchrotron X-ray sources, powerful computational resources and algorithms enabling the calculation of increasingly complex models. However, the lack of standardized data-quality metrics presents difficulties for the growing user community in accurately assessing the quality of experimental SAXS data. Here, a series of metrics to quantitatively describe SAXS data in an objective manner using statistical evaluations are defined. These metrics are applied to identify the effects of radiation damage, concentration dependence and interparticle interactions on SAXS data from a set of 27 previously described targets for which high-resolution structures have been determined via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. The studies show that these metrics are sufficient to characterize SAXS data quality on a small sample set with statistical rigor and sensitivity similar to or better than manual analysis. The development of data-quality analysis strategies such as these initial efforts is needed to enable the accurate and unbiased assessment of SAXS data quality. PMID:25615859
Ruehrnschopf, Ernst-Peter; Klingenbeck, Klaus
2011-09-15
The main components of scatter correction procedures are scatter estimation and a scatter compensation algorithm. This paper completes a previous paper, in which a general framework for scatter compensation was presented under the prerequisite that a scatter estimation method is already available. In the current paper, the authors give a systematic review of the variety of scatter estimation approaches. Scatter estimation methods are based on measurements, mathematical-physical models, or combinations of both. For completeness, the authors present an overview of measurement-based methods, but the main topic is the theoretically more demanding models, such as analytical, Monte Carlo, and hybrid models. Further classifications are 3D image-based and 2D projection-based approaches. The authors present a system-theoretic framework, which allows one to proceed top-down from a general 3D formulation, by successive approximations, to efficient 2D approaches. A widely useful method is the beam-scatter-kernel superposition approach. Together with the review of standard methods, the authors discuss their limitations and how to take into account object dependency, spatial variance, deformation of scatter kernels, and external and internal absorbers. Open questions for further investigation are indicated. Finally, the authors comment on some special issues and applications, such as the bow-tie filter, offset detectors, truncated data, and dual-source CT.
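As a rough illustration of the beam-scatter-kernel superposition idea reviewed above, a scatter estimate can be formed by convolving a primary-image estimate with a broad, low-amplitude kernel. This is a minimal sketch only: the Gaussian kernel shape, the amplitude, and all names are illustrative assumptions, not the authors' formulation (which treats object-dependent, spatially variant kernels).

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_kernel(size, sigma):
    """Radially symmetric scatter kernel (illustrative Gaussian form)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def kernel_superposition_scatter(primary, amplitude=0.1, sigma=20.0):
    """Estimate scatter as a broad, low-amplitude blur of the primary image."""
    kernel = gaussian_kernel(size=4 * int(sigma) + 1, sigma=sigma)
    return amplitude * fftconvolve(primary, kernel, mode="same")

# Toy projection: a bright disc on a dark background
proj = np.zeros((128, 128))
yy, xx = np.mgrid[:128, :128]
proj[(xx - 64) ** 2 + (yy - 64) ** 2 < 30**2] = 1.0

scatter = kernel_superposition_scatter(proj)
primary_est = proj - scatter  # first-order scatter compensation
```

A spatially variant variant would make `amplitude` and `sigma` functions of the local primary signal rather than constants.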
Min, Jonghwan; Pua, Rizza; Cho, Seungryong; Kim, Insoo; Han, Bumsoo
2015-11-15
Purpose: A beam-blocker composed of multiple strips is a useful gadget for scatter correction and/or dose reduction in cone-beam CT (CBCT). However, the use of such a beam-blocker yields cone-beam data that can be challenging for accurate image reconstruction from a single scan in the filtered-backprojection framework. The focus of this work was to develop an analytic image reconstruction method for CBCT that can be applied directly to partially blocked cone-beam data in conjunction with scatter correction. Methods: The authors developed a rebinned backprojection-filtration (BPF) algorithm for reconstructing images from partially blocked cone-beam data in a circular scan. The authors also proposed a beam-blocking geometry that exploits data redundancy, such that an efficient scatter estimate can be acquired and, at the same time, sufficient data for BPF image reconstruction can be secured from a single scan without any blocker motion. Additionally, a scatter correction method and a noise reduction scheme were developed. The authors performed both simulation and experimental studies to validate the rebinned BPF algorithm for image reconstruction from partially blocked cone-beam data. Quantitative evaluations of the reconstructed image quality were performed in the experimental studies. Results: The simulation study revealed that the developed reconstruction algorithm successfully reconstructs images from the partial cone-beam data. In the experimental study, the proposed method effectively corrected for the scatter in each projection and reconstructed scatter-corrected images from a single scan. A reduction of cupping artifacts and an enhancement of the image contrast were demonstrated. The image contrast increased by a factor of about 2, and the image accuracy in terms of root-mean-square error with respect to the fan-beam CT image improved by more than 30%. Conclusions: The authors have successfully demonstrated that the proposed method can reconstruct scatter-corrected CBCT images from a single partially blocked scan.
NASA Astrophysics Data System (ADS)
Yue, Meghan L.; Boden, Adam E.; Sabol, John M.
2009-02-01
In addition to causing loss of contrast and blurring in an image, scatter also makes quantitative measurements of x-ray attenuation impossible. Many devices, methods, and models have been developed to eliminate, estimate, and correct for the effects of scatter. Although these techniques can reduce the impact of scatter in a large-area image, no method has proven practical and sufficient to enable quantitative analysis of image data in a routine clinical setting. This paper describes a method of scatter correction that uses moderate x-ray collimation in combination with a correction algorithm operating on data obtained from large-area flat-panel detector images. The method involves acquiring slot-collimated images of the object and utilizing information from outside the collimated region, in addition to a priori data, to estimate the scatter within the collimated region. This method requires no increase in dose to the patient while providing high image quality and accurate estimates of the primary x-ray data. The scatter correction technique was validated through beam-stop experiments and by comparison of theoretically calculated and measured contrast of thin aluminum and polymethylmethacrylate objects. Measurements taken with various background material thicknesses, both with and without a grid, showed that the slot-scatter-corrected contrast and the theoretical contrast were not significantly different at a 99% confidence level. However, the uncorrected contrast was found to be significantly different from the corrected and theoretical contrasts. These findings indicate that this method of scatter correction can eliminate the effect of scatter on contrast and potentially enable quantitative x-ray imaging.
NASA Astrophysics Data System (ADS)
Jarry, G.; Graham, S. A.; Jaffray, D. A.; Moseley, D. J.; Verhaegen, F.
2006-03-01
In this work Monte Carlo (MC) simulations are used to correct kilovoltage (kV) cone-beam computed tomography (CBCT) projections for scatter radiation. All images were acquired using a kV CBCT bench-top system composed of an x-ray tube, a rotation stage, and a flat-panel imager. The EGSnrc MC code was used to model the system: BEAMnrc was used to model the x-ray tube, while a modified version of the DOSXYZnrc program was used to transport the particles through various phantoms and to score phase space files with identified scattered and primary particles. An analytical program was used to read the phase space files and produce image files. The scatter correction was implemented by subtracting the Monte Carlo predicted scatter distribution from the measured projection images; these projection images were then reconstructed. Corrected reconstructions showed an important improvement in image quality. Several approaches to reduce the simulation time were tested. To reduce the number of simulated scatter projections, the effect of varying the projection angle on the scatter distribution was evaluated for different geometries. It was found that the scatter distribution does not vary significantly over a 30-degree interval for the geometries tested. It was also established that increasing the size of the voxels in the voxelized phantom does not affect the scatter distribution but reduces the simulation time. Different techniques to smooth the scatter distribution were also investigated.
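The correction step described above (subtracting an MC-predicted scatter distribution from measured projections before log conversion, reusing one simulated distribution per 30-degree interval) can be sketched as follows. Function and variable names, and the flat-library lookup, are assumptions for illustration, not the authors' code.

```python
import numpy as np

def correct_projection(measured, mc_scatter, i0, floor=1e-6):
    """Subtract the Monte Carlo predicted scatter from a measured intensity
    projection, clip to a small positive floor, and convert to line
    integrals for reconstruction."""
    primary = np.clip(measured - mc_scatter, floor, None)
    return -np.log(primary / i0)

def scatter_for_angle(angle_deg, scatter_library, interval=30):
    """Reuse one simulated scatter distribution per angular interval, since
    the abstract reports little variation over 30 degrees."""
    key = int(angle_deg // interval) * interval
    return scatter_library[key]

# Toy check: with i0 = 1000 and a true line integral of 1.0, subtracting
# the known scatter recovers the line integral.
i0 = 1000.0
scatter_est = 100.0
measured = i0 * np.exp(-1.0) + scatter_est
line_integral = correct_projection(measured, scatter_est, i0)
```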
Scatter correction in scintillation camera imaging of positron-emitting radionuclides
Ljungberg, M.; Danfelter, M.; Strand, S.E.
1996-12-31
The use of Anger scintillation cameras for positron SPECT has become of interest recently due to their use in imaging with 2-deoxy-2-[{sup 18}F]fluoro-D-glucose. Due to the special crystal design (thin and wide), a significant amount of primary events will also be recorded in the Compton region of the energy spectra. Events recorded in a second Compton window (CW) can add information to the data in the photopeak window (PW), since some events are correctly positioned in the CW. However, a significant amount of scatter is also included in the CW, which must be corrected for. This work describes a method whereby a third scatter window (SW) is used to estimate the scatter distribution in the CW and the PW. The accuracy of the estimation has been evaluated by Monte Carlo simulations in a homogeneous elliptical phantom for point and extended sources. Two examples of clinical application are also provided. Results from the simulations show that essentially only scatter from the phantom is recorded between the 511 keV PW and the 340 keV CW. Scatter-window projection data scaled by a constant multiplier can estimate the scatter in the CW and the PW, although the scatter distribution in the SW corresponds better to the scatter distribution in the CW. The multiplier k for the CW varies significantly more with depth than it does for the PW. Clinical studies show an improvement in image quality when using scatter-corrected combined PW and CW data.
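The window-based estimate described above can be sketched numerically: scatter-window (SW) data scaled by a multiplier k is subtracted from the photopeak (PW) and Compton (CW) windows before the two are combined. The k values below are placeholders, since the abstract notes that k varies with depth (more strongly for the CW).

```python
import numpy as np

def window_scatter_correct(pw_counts, cw_counts, sw_counts, k_pw=0.5, k_cw=1.2):
    """Subtract k-scaled scatter-window data from the photopeak and Compton
    windows, clip negatives, and combine the scatter-corrected projections.
    k_pw and k_cw are illustrative multipliers, not calibrated values."""
    pw_corr = np.clip(pw_counts - k_pw * sw_counts, 0.0, None)
    cw_corr = np.clip(cw_counts - k_cw * sw_counts, 0.0, None)
    return pw_corr + cw_corr

# One-pixel example: PW=100, CW=80, SW=20 counts
combined = window_scatter_correct(np.array([100.0]), np.array([80.0]),
                                  np.array([20.0]))
```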
NASA Astrophysics Data System (ADS)
Kim, Y.; Kim, H.; Park, H.; Choi, J.; Choi, Y.
2014-03-01
Digital breast tomosynthesis (DBT) is a technique developed to overcome the limitations of conventional digital mammography by reconstructing slices through the breast from projections acquired at different angles. In developing and optimizing DBT, x-ray scatter reduction remains a significant challenge due to projection geometry and radiation dose limitations. The most common scatter reduction approach is the beam-stop-array (BSA) algorithm, although it raises the concern of additional exposure to acquire the scatter distribution. The compressed breast is roughly symmetric, so the scatter profiles from projections acquired at axially opposite angles are approximately mirror images of each other. The purpose of this study was to apply the BSA algorithm while acquiring only two scans with a beam stop array, estimating the scatter distribution with minimal additional exposure. The results of scatter correction with angular interpolation were comparable to those of scatter correction using scatter distributions measured at every angle, and the exposure increase was less than 13%. This study demonstrated the feasibility of scatter correction by the BSA algorithm with minimal additional exposure, which indicates its practical applicability in clinical situations.
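A minimal numeric sketch of the two ideas in this abstract: scatter at an axially opposite angle is approximated by mirroring a measured scatter map, and scatter at intermediate angles by interpolating between measured maps. The function names and the use of simple linear interpolation are assumptions for illustration.

```python
import numpy as np

def mirrored_scatter(scatter_map):
    """Approximate the scatter map at the axially opposite projection angle
    by the left-right mirror image of a measured map (breast symmetry)."""
    return scatter_map[:, ::-1]

def interpolate_scatter(s0, s1, a0, a1, angle):
    """Linearly interpolate between scatter maps measured at angles a0, a1."""
    w = (angle - a0) / (a1 - a0)
    return (1.0 - w) * s0 + w * s1
```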
Constrained γZ correction to parity-violating electron scattering
Hall, N. L.; Thomas, A. W.; Young, R. D.; Blunden, P. G.; Melnitchouk, W.
2013-11-07
We update the calculation of γZ interference corrections to the weak charge of the proton. We show how constraints from parton distributions, together with new data on parity-violating electron scattering in the resonance region, significantly reduce the uncertainties on the corrections compared to previous estimates.
Gerasimov, R. E.; Fadin, V. S.
2015-01-15
An analysis of the approximations used in calculations of radiative corrections to the electron-proton scattering cross section is presented. We investigate the difference between the relatively recent result of Maximon and Tjon and the result of Mo and Tsai, which was used in the analysis of experimental data. We also discuss how the extracted proton form factor ratio depends on the way radiative corrections are taken into account.
Schoen, K.; Snow, W. M.; Kaiser, H.; Werner, S. A.
2005-01-01
The neutron index of refraction is generally derived theoretically in the Fermi approximation. However, the Fermi approximation neglects the effects of the binding of the nuclei of a material as well as multiple scattering. Calculations by Nowak introduced correction terms to the neutron index of refraction that are quadratic in the scattering length and of order 10{sup −3} fm for hydrogen and deuterium. These correction terms produce a small shift in the final value for the coherent scattering length of H{sub 2} in a recent neutron interferometry experiment. PMID:27308132
NASA Astrophysics Data System (ADS)
Tomalak, O.; Vanderhaeghen, M.
2016-01-01
We evaluate the two-photon exchange (TPE) correction to unpolarized elastic electron-proton scattering at small momentum transfer Q{sup 2}. We account for the inelastic intermediate states by approximating the double virtual Compton scattering by the unpolarized forward virtual Compton scattering. The unpolarized proton structure functions are used as input for the numerical evaluation of the inelastic contribution. Our calculation reproduces the leading terms in the Q{sup 2} expansion of the TPE correction and goes beyond this approximation by keeping the full Q{sup 2} dependence of the proton structure functions. In the range of small momentum transfer, our result is in good agreement with the empirical TPE fit to existing data.
Ouyang, L; Yan, H; Jia, X; Jiang, S; Wang, J; Zhang, H
2014-06-01
Purpose: A moving-blocker based strategy has shown promising results for scatter correction in cone-beam computed tomography (CBCT). Different parameters of the system design affect its performance in scatter estimation and image reconstruction accuracy. The goal of this work is to optimize the geometric design of the moving blocker system. Methods: In the moving blocker system, a blocker consisting of lead strips is inserted between the x-ray source and the imaged object and moves back and forth along the rotation axis during CBCT acquisition. A CT image of an anthropomorphic pelvic phantom was used in the simulation study. The scatter signal was simulated by Monte Carlo calculation for various combinations of the lead strip width and the gap between neighboring lead strips, ranging from 4 mm to 80 mm (projected at the detector plane). The scatter signal in the unblocked region was estimated by cubic B-spline interpolation from the blocked region. Scatter estimation accuracy was quantified as the relative root-mean-squared error by comparing the interpolated scatter to the Monte Carlo simulated scatter. CBCT images were reconstructed by total variation minimization from the unblocked region under the various combinations of lead strip width and gap. Reconstruction accuracy in each condition was quantified by the CT number error relative to a CBCT image reconstructed from unblocked full projection data. Results: The scatter estimation error varied from 0.5% to 2.6% as the lead strip width and the gap varied from 4 mm to 80 mm. The CT number error in the reconstructed CBCT images varied from 12 to 44. The highest reconstruction accuracy was achieved with a lead strip width of 8 mm and a gap of 48 mm. Conclusions: Accurate scatter estimation can be achieved over a large range of combinations of lead strip width and gap. However, image reconstruction accuracy is greatly affected by the geometric design of the blocker.
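The interpolation step of the moving-blocker scheme can be sketched in one dimension: scatter sampled behind the lead strips is splined into the unblocked columns. The example uses SciPy's `CubicSpline` as a stand-in for the cubic B-spline interpolation named in the abstract, and the smooth synthetic scatter field is an assumption.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def estimate_scatter_row(all_cols, blocked_cols, scatter_samples):
    """Interpolate scatter measured behind the lead strips (blocked columns)
    into the unblocked detector columns of one row."""
    return CubicSpline(blocked_cols, scatter_samples)(all_cols)

# Toy example: a smooth scatter field sampled every 40 detector columns
cols = np.arange(400)
blocked = np.arange(0, 400, 40)
true_scatter = 50.0 + 10.0 * np.sin(cols / 120.0)
est = estimate_scatter_row(cols, blocked, 50.0 + 10.0 * np.sin(blocked / 120.0))
```

Because scatter varies slowly across the detector, even coarse sampling recovers it accurately, which is consistent with the sub-3% estimation errors reported above.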
X-Ray Scatter Correction on Soft Tissue Images for Portable Cone Beam CT
Aootaphao, Sorapong; Thongvigitmanee, Saowapak S.; Rajruangrabin, Jartuwat; Thanasupsombat, Chalinee; Srivongsa, Tanapon; Thajchayapong, Pairash
2016-01-01
Soft tissue images from portable cone beam computed tomography (CBCT) scanners can be used for the diagnosis and detection of tumors, cancer, intracerebral hemorrhage, and so forth. Due to the large field of view, X-ray scattering, which is the main cause of artifacts, degrades image quality, producing cupping artifacts, CT number inaccuracy, and low contrast, especially in soft tissue images. In this work, we propose an X-ray scatter correction method for improving soft tissue images. The scheme estimates X-ray scatter signals with a deconvolution technique based on the maximum likelihood estimation maximization (MLEM) method. The scatter kernels are obtained by simulating PMMA sheets with Monte Carlo simulation (MCS) software. In the experiment, we used the QRM phantom for quantitative comparison with fan-beam CT (FBCT) data in terms of CT number values, contrast-to-noise ratio, cupping artifacts, and low contrast detectability. Moreover, the PH3 angiography phantom was also used to mimic human soft tissues in the brain. The reconstructed images with our proposed scatter correction show significant improvement in image quality. Thus the proposed scatter correction technique has high potential for detecting soft tissues in the brain. PMID:27022608
Aleksejevs, Aleksandrs; Barkanova, Svetlana; Ilyichev, Alexander; Zykunov, Vladimir
2010-11-19
We perform updated and detailed calculations of the complete NLO set of electroweak radiative corrections to parity-violating e{sup -}e{sup -} → e{sup -}e{sup -}(γ) scattering asymmetries at energies relevant for the ultra-precise Moller experiment planned at JLab. Our numerical results are presented for a range of experimental cuts, and the relative importance of various contributions is analyzed. In addition, we provide very compact expressions, analytically free from non-physical parameters, and show them to be valid for fast yet accurate estimates.
Evaluation of low contrast detectability after scatter correction in digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Michielsen, Koen; Fieselmann, Andreas; Cockmartin, Lesley; Nuyts, Johan
2014-03-01
Projection images from digital breast tomosynthesis acquisitions can contain a large fraction of scattered x-rays due to the absence of an anti-scatter grid in front of the detector. In order to produce quantitative results, this should be accounted for in reconstruction algorithms. We examine the possible improvement in signal difference to noise ratio (SDNR) for low-contrast spherical densities when applying a scatter correction algorithm. Hybrid patient data were created by combining real patient data with attenuation profiles of spherical masses acquired with matching exposure settings. Scatter in these cases was estimated using Monte Carlo based scattering kernels. All cases were reconstructed using filtered backprojection (FBP), with and without beam hardening correction, and with two maximum likelihood methods for transmission tomography, with and without a quadratic smoothing prior (MAPTR and MLTR). For all methods, images were reconstructed without scatter correction and with scatter precorrection, and for the iterative methods also with an adjusted update step obtained by including scatter in the physics model. The SDNR of the inserted spheres was calculated by subtracting the reconstructions with and without the inserted template to measure the signal difference, while noise was measured in the image containing the template. SDNR was significantly improved, by 3.5% to 4.5% (p < 0.0001), at iteration 10 for both correction methods applied to the MLTR and MAPTR reconstructions. For MLTR these differences disappeared by iteration 100. For regular FBP the SDNR remained the same after correction (p = 0.60), while it dropped slightly for FBP with beam hardening correction (-1.4%, p = 0.028). These results indicate that for the iterative methods, application of a scatter correction algorithm has very little effect on the SDNR; it only causes a slight decrease in convergence speed, which is similar for precorrection and for correction incorporated in the update step. The FBP results
Library-based scatter correction for dedicated cone beam breast CT: a feasibility study
NASA Astrophysics Data System (ADS)
Shi, Linxi; Vedantham, Srinivasan; Karellas, Andrew; Zhu, Lei
2016-04-01
Purpose: Scatter errors are detrimental to cone-beam breast CT (CBBCT) accuracy and obscure the visibility of calcifications and soft-tissue lesions. In this work, we propose a practical yet effective scatter correction for CBBCT using a library-based method and investigate its feasibility via small-group patient studies. Methods: Based on a simplified breast model with varying breast sizes, we generate a scatter library using Monte Carlo (MC) simulation. Breasts are approximated as semi-ellipsoids with a homogeneous glandular/adipose tissue mixture. For each patient CBBCT projection dataset, an initial estimate of the scatter distribution is selected from the pre-computed scatter library by measuring the corresponding breast size on the raw projections and the glandular fraction on a first-pass CBBCT reconstruction. The selected scatter distribution is then modified by estimating the spatial translation of the breast between the MC simulation and the clinical scan. Scatter correction is finally performed by subtracting the estimated scatter from the raw projections. Results: On two sets of clinical patient CBBCT data with different breast sizes, the proposed method effectively reduces the cupping artifact and improves the image contrast by an average factor of 2, with an efficient processing time of 200 ms per cone-beam projection. Conclusion: Compared with existing scatter correction approaches for CBBCT, the proposed library-based method is clinically advantageous in that it requires no additional scans or hardware modifications. As the MC simulations are pre-computed, our method achieves high computational efficiency on each patient dataset. The library-based method shows great promise as a practical tool for effective scatter correction in clinical CBBCT.
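The library selection step can be sketched as a nearest-neighbor lookup over the two measured parameters (breast size and glandular fraction), followed by subtraction from the raw projection. The key structure and the distance metric are illustrative assumptions; the translation refinement described in the abstract is not reproduced here.

```python
import numpy as np

def select_scatter(library, diameter_cm, glandular_fraction):
    """Pick the pre-computed MC scatter map whose (size, glandularity) key
    is closest to the values measured for this patient.
    `library` maps (diameter_cm, glandular_fraction) -> 2-D scatter array."""
    key = min(library, key=lambda k: (k[0] - diameter_cm) ** 2
                                     + (k[1] - glandular_fraction) ** 2)
    return library[key]

def correct_projection(raw, library, diameter_cm, glandular_fraction):
    """Subtract the selected scatter estimate from a raw projection."""
    scatter = select_scatter(library, diameter_cm, glandular_fraction)
    return np.clip(raw - scatter, 0.0, None)
```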
Measurement-based scatter correction for cone-beam CT in radiation therapy
NASA Astrophysics Data System (ADS)
Zhu, Lei; Xing, Lei
2009-02-01
Cone-beam CT (CBCT) is being increasingly used in modern radiation therapy. However, as compared to conventional CT, the degraded image quality of CBCT hampers its applications in radiation therapy. Due to the large volume of x-ray illumination, scatter is considered as one of the fundamental limitations of CBCT image quality. Many scatter correction algorithms have been proposed in the literature, while drawbacks still exist. In this work, we propose a correction algorithm which is particularly useful in radiation therapy. Since the same patient is scanned repetitively during one radiation treatment course, we measure the scatter distribution in one scan, and use the measured scatter distribution to estimate and correct scatter in the following scans. A partially blocked CBCT is used in the scatter measurement scan. The x-ray beam blocker has a strip pattern, such that the whole-field scatter distribution can be estimated from the detected signals in the shadow region and the patient rigid transformation can be determined from the reconstructed image using the illuminated detector projection data. From the derived patient transformation, the measured scatter is then modified accordingly and used for scatter correction in the following regular CBCT scans. The proposed method has been evaluated using Monte Carlo simulations and physical experiments on an anthropomorphic chest phantom. The results show a significant suppression of scatter artifacts using the proposed method. Using the reconstruction in a narrow collimator geometry as a reference, the comparison also shows that the proposed method reduces reconstruction error from 13.2% to 3.8%. The proposed method is attractive in applications where a high CBCT image quality is critical, for example, dose calculation in adaptive radiation therapy.
Ruehrnschopf, Ernst-Peter; Klingenbeck, Klaus
2011-07-15
Since scattered radiation in cone-beam volume CT implies severe degradation of CT images by quantification errors, artifacts, and increased noise, scatter suppression is one of the main issues related to image quality in CBCT imaging. The aim of this review is to structure the variety of scatter suppression methods, to analyze their common structure, and to develop a general framework for scatter correction procedures. In general, scatter suppression combines hardware techniques of scatter rejection with software methods of scatter correction. The authors emphasize that scatter correction procedures consist of the main components scatter estimation (by measurement or mathematical modeling) and scatter compensation (deterministic or statistical methods). The framework comprises most scatter correction approaches, and its validity also extends beyond transmission CT. Before the advent of cone-beam CT, many papers on scatter correction approaches in x-ray radiography, mammography, emission tomography, and megavolt CT had been published. The opportunity to draw on research in those other fields of medical imaging has not yet been sufficiently exploited, so additional references are included wherever pertinent. Scatter estimation and scatter compensation are typically intertwined in iterative procedures, and it makes sense to view iterative approaches in the light of the concept of self-consistency. The importance of incorporating scatter compensation approaches into a statistical framework for noise minimization is underscored. A signal and noise propagation analysis is presented. A main result is the preservation of the differential signal-to-noise ratio (dSNR) in CT projection data by ideal scatter correction. The objective of scatter compensation methods is the restoration of quantitative accuracy and a balance between low-contrast restoration and noise reduction. In a synopsis section, the different deterministic and statistical methods are
ERIC Educational Resources Information Center
Sheen, Younghee; Wright, David; Moldawa, Anna
2009-01-01
Building on Sheen's (2007) study of the effects of written corrective feedback (CF) on the acquisition of English articles, this article investigated whether direct focused CF, direct unfocused CF and writing practice alone produced differential effects on the accurate use of grammatical forms by adult ESL learners. Using six intact adult ESL…
Monte Carlo simulation and scatter correction of the GE Advance PET scanner with SimSET and Geant4
NASA Astrophysics Data System (ADS)
Barret, Olivier; Carpenter, T. Adrian; Clark, John C.; Ansorge, Richard E.; Fryer, Tim D.
2005-10-01
For Monte Carlo simulations to be used as an alternative solution to perform scatter correction, accurate modelling of the scanner as well as speed is paramount. General-purpose Monte Carlo packages (Geant4, EGS, MCNP) allow a detailed description of the scanner but are not efficient at simulating voxel-based geometries (patient images). On the other hand, dedicated codes (SimSET, PETSIM) will perform well for voxel-based objects but will be poor in their capacity of simulating complex geometries such as a PET scanner. The approach adopted in this work was to couple a dedicated code (SimSET) with a general-purpose package (Geant4) to have the efficiency of the former and the capabilities of the latter. The combined SimSET+Geant4 code (SimG4) was assessed on the GE Advance PET scanner and compared to the use of SimSET only. A better description of the resolution and sensitivity of the scanner and of the scatter fraction was obtained with SimG4. The accuracy of scatter correction performed with SimG4 and SimSET was also assessed from data acquired with the 20 cm NEMA phantom. SimG4 was found to outperform SimSET and to give slightly better results than the GE scatter correction methods installed on the Advance scanner (curve fitting and scatter modelling for the 300-650 keV and 375-650 keV energy windows, respectively). In the presence of a hot source close to the edge of the field of view (as found in oxygen scans), the GE curve-fitting method was found to fail whereas SimG4 maintained its performance.
Coherent scattering and matrix correction in bone-lead measurements
NASA Astrophysics Data System (ADS)
Todd, A. C.
2000-07-01
The technique of K-shell x-ray fluorescence of lead in bone has been used in many studies of the health effects of lead. This paper addresses one aspect of the technique, namely the coherent conversion factor (CCF) which converts between the matrix of the calibration standards and those of human bone. The CCF is conventionally considered a constant but is a function of scattering angle, energy and the elemental composition of the matrices. The aims of this study were to quantify the effect on the CCF of several assumptions which may not have been tested adequately and to compare the CCFs for plaster of Paris (the present matrix of calibration standards) and a synthetic apatite matrix. The CCF was calculated, using relativistic form factors, for published compositions of bone, both assumed and assessed compositions of plaster, and the synthetic apatite. The main findings of the study were, first, that impurities in plaster, lead in the plaster or bone matrices, coherent scatter from non-bone tissues and the individual subject's measurement geometry are all minor or negligible effects; and, second, that the synthetic apatite matrix is more representative of bone mineral than is plaster of Paris.
Nilsson, Annica M.; Jonsson, Andreas; Jonsson, Jacob C.; Roos, Arne
2011-03-01
For most integrating sphere measurements, the difference in light distribution between a specular reference beam and a diffused sample beam can result in significant errors. The problem becomes especially pronounced in integrating spheres that include a port for reflectance or diffuse transmittance measurements. The port is included in many standard spectrophotometers to facilitate a multipurpose instrument, however, absorption around the port edge can result in a detected signal that is too low. The absorption effect is especially apparent for low-angle scattering samples, because a significant portion of the light is scattered directly onto that edge. In this paper, a method for more accurate transmittance measurements of low-angle light-scattering samples is presented. The method uses a standard integrating sphere spectrophotometer, and the problem with increased absorption around the port edge is addressed by introducing a diffuser between the sample and the integrating sphere during both reference and sample scan. This reduces the discrepancy between the two scans and spreads the scattered light over a greater portion of the sphere wall. The problem with multiple reflections between the sample and diffuser is successfully addressed using a correction factor. The method is tested for two patterned glass samples with low-angle scattering and in both cases the transmittance accuracy is significantly improved.
A software-based x-ray scatter correction method for breast tomosynthesis
Jia Feng, Steve Si; Sechopoulos, Ioannis
2011-12-15
Purpose: To develop a software-based scatter correction method for digital breast tomosynthesis (DBT) imaging and investigate its impact on the image quality of tomosynthesis reconstructions of both phantoms and patients. Methods: A Monte Carlo (MC) simulation of x-ray scatter, with geometry matching that of the cranio-caudal (CC) view of a DBT clinical prototype, was developed using the Geant4 toolkit and used to generate maps of the scatter-to-primary ratio (SPR) of a number of homogeneous standard-shaped breasts of varying sizes. Dimension-matched SPR maps were then deformed and registered to DBT acquisition projections, allowing for the estimation of the primary x-ray signal acquired by the imaging system. Noise filtering of the estimated projections was then performed to reduce the impact of the quantum noise of the x-ray scatter. Three dimensional (3D) reconstruction was then performed using the maximum likelihood-expectation maximization (MLEM) method. This process was tested on acquisitions of a heterogeneous 50/50 adipose/glandular tomosynthesis phantom with embedded masses, fibers, and microcalcifications and on acquisitions of patients. The image quality of the reconstructions of the scatter-corrected and uncorrected projections was analyzed by studying the signal-difference-to-noise ratio (SDNR), the integral of the signal in each mass lesion (integrated mass signal, IMS), and the modulation transfer function (MTF). Results: The reconstructions of the scatter-corrected projections demonstrated superior image quality. The SDNR of masses embedded in a 5 cm thick tomosynthesis phantom improved 60%-66%, while the SDNR of the smallest mass in an 8 cm thick phantom improved by 59% (p < 0.01). The IMS of the masses in the 5 cm thick phantom also improved by 15%-29%, while the IMS of the masses in the 8 cm thick phantom improved by 26%-62% (p < 0.01). Some embedded microcalcifications in the tomosynthesis phantoms were visible only in the scatter-corrected
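The core of the correction described above is that a measured projection M is the sum of primary and scatter signals, and with SPR = S/P the primary can be recovered as P = M/(1 + SPR). A minimal sketch of that step, assuming the SPR map has already been registered to the projection (function and variable names are illustrative, not from the authors' code):

```python
def primary_estimate(measured, spr):
    """Recover the primary x-ray signal from a measured projection.

    Model: M = P + S with SPR = S / P, hence P = M / (1 + SPR).
    `measured` and `spr` are flat lists of pixel values from the acquired
    projection and the registered scatter-to-primary ratio map.
    """
    return [m / (1.0 + r) for m, r in zip(measured, spr)]


# A pixel measured at 2.0 with SPR = 1.0 (scatter equals primary)
# yields an estimated primary of 1.0.
print(primary_estimate([2.0, 3.0], [1.0, 0.5]))
```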
Park, Yang-Kyun; Sharp, Gregory C.; Phillips, Justin; Winey, Brian A.
2015-01-01
Purpose: To demonstrate the feasibility of proton dose calculation on scatter-corrected cone-beam computed tomographic (CBCT) images for the purpose of adaptive proton therapy. Methods: CBCT projection images were acquired from anthropomorphic phantoms and a prostate patient using an on-board imaging system of an Elekta infinity linear accelerator. Two previously introduced techniques were used to correct the scattered x-rays in the raw projection images: uniform scatter correction (CBCTus) and a priori CT-based scatter correction (CBCTap). CBCT images were reconstructed using a standard FDK algorithm and GPU-based reconstruction toolkit. Soft tissue ROI-based HU shifting was used to improve HU accuracy of the uncorrected CBCT images and CBCTus, while no HU change was applied to the CBCTap. The degree of equivalence of the corrected CBCT images with respect to the reference CT image (CTref) was evaluated by using angular profiles of water equivalent path length (WEPL) and passively scattered proton treatment plans. The CBCTap was further evaluated in more realistic scenarios such as rectal filling and weight loss to assess the effect of mismatched prior information on the corrected images. Results: The uncorrected CBCT and CBCTus images demonstrated substantial WEPL discrepancies (7.3 ± 5.3 mm and 11.1 ± 6.6 mm, respectively) with respect to the CTref, while the CBCTap images showed substantially reduced WEPL errors (2.4 ± 2.0 mm). Similarly, the CBCTap-based treatment plans demonstrated a high pass rate (96.0% ± 2.5% in 2 mm/2% criteria) in a 3D gamma analysis. Conclusions: A priori CT-based scatter correction technique was shown to be promising for adaptive proton therapy, as it achieved equivalent proton dose distributions and water equivalent path lengths compared to those of a reference CT in a selection of anthropomorphic phantoms. PMID:26233175
Inverse scattering and refraction corrected reflection for breast cancer imaging
NASA Astrophysics Data System (ADS)
Wiskin, J.; Borup, D.; Johnson, S.; Berggren, M.; Robinson, D.; Smith, J.; Chen, J.; Parisky, Y.; Klock, John
2010-03-01
Reflection ultrasound (US) has been utilized as an adjunct imaging modality for over 30 years. TechniScan, Inc. has developed unique, transmission and concomitant reflection algorithms which are used to reconstruct images from data gathered during a tomographic breast scanning process called Warm Bath Ultrasound (WBU™). The transmission algorithm yields high resolution, 3D, attenuation and speed of sound (SOS) images. The reflection algorithm is based on canonical ray tracing utilizing refraction correction via the SOS and attenuation reconstructions. The refraction correction reflection algorithm allows 360 degree compounding resulting in the reflection image. The requisite data are collected when scanning the entire breast in a 33° C water bath, on average in 8 minutes. This presentation explains how the data are collected and processed by the 3D transmission and reflection imaging mode algorithms. The processing is carried out using two NVIDIA® Tesla™ GPU processors, accessing data on a 4-TeraByte RAID. The WBU™ images are displayed in a DICOM viewer that allows registration of all three modalities. Several representative cases are presented to demonstrate potential diagnostic capability including: a cyst, fibroadenoma, and a carcinoma. WBU™ images (SOS, attenuation, and reflection modalities) are shown along with their respective mammograms and standard ultrasound images. In addition, anatomical studies are shown comparing WBU™ images and MRI images of a cadaver breast. This innovative technology is designed to provide additional tools in the armamentarium for diagnosis of breast disease.
Accurate elevation and normal moveout corrections of seismic reflection data on rugged topography
Liu, J.; Xia, J.; Chen, C.; Zhang, G.
2005-01-01
The application of the seismic reflection method is often limited in areas of complex terrain. The problem is the incorrect correction of time shifts caused by topography. To apply normal moveout (NMO) correction to reflection data correctly, static corrections must be applied in advance to compensate for the time distortions of topography and the time delays from near-surface weathered layers. For environmental and engineering investigations, the weathered layers are themselves the targets, so the static correction mainly serves to adjust time shifts due to an undulating surface. In practice, reflected seismic raypaths are assumed to be nearly vertical through the near-surface layers because these have much lower velocities than the layers below. This assumption is acceptable in most cases since it results in little residual error for small elevation changes and small offsets in reflection events. Although static algorithms based on choosing a floating datum related to common midpoint gathers, or on residual surface-consistent functions, are available and effective, errors caused by the assumption of vertical raypaths often generate pseudo-indications of structures. This paper compares corrections based on vertical raypaths with those based on biased (non-vertical) raypaths. It also provides an approach to combining elevation and NMO corrections. The advantages of the approach are demonstrated by synthetic and real-world examples of multi-coverage seismic reflection surveys on rough topography. © The Royal Society of New Zealand 2005.
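Under the vertical-raypath assumption discussed above, the elevation static is just the delay accumulated by source and receiver above a flat datum, and NMO then uses the familiar hyperbolic traveltime. A small sketch under that assumption (the replacement velocity and all names are illustrative):

```python
import math

def elevation_static(src_elev, rcv_elev, datum, v_repl):
    """Vertical-raypath elevation static (seconds): the time to strip so
    that source and receiver appear to sit on a flat datum, using a
    near-surface replacement velocity v_repl (m/s), elevations in meters."""
    return (src_elev - datum) / v_repl + (rcv_elev - datum) / v_repl

def nmo_traveltime(t0, offset, v_nmo):
    """Hyperbolic NMO traveltime t(x) = sqrt(t0^2 + (x / v)^2)."""
    return math.sqrt(t0 ** 2 + (offset / v_nmo) ** 2)
```

For large elevation changes or offsets the paper's point is precisely that this vertical-raypath shortcut leaves residual errors that can mimic structure.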
Experimental testing of four correction algorithms for the forward scattering spectrometer probe
NASA Technical Reports Server (NTRS)
Hovenac, Edward A.; Oldenburg, John R.; Lock, James A.
1992-01-01
Three number density correction algorithms and one size distribution correction algorithm for the Forward Scattering Spectrometer Probe (FSSP) were compared with data taken by the Phase Doppler Particle Analyzer (PDPA) and an optical number density measuring instrument (NDMI). Of the three number density correction algorithms, the one that compared best to the PDPA and NDMI data was the algorithm developed by Baumgardner, Strapp, and Dye (1985). The algorithm developed by Lock and Hovenac (1989) to correct sizing errors in the FSSP was shown to be within 25 percent of the Phase Doppler measurements at number densities as high as 3000/cc.
Fan, Peng; Hutton, Brian F.; Holstensson, Maria; Ljungberg, Michael; Hendrik Pretorius, P.; Prasad, Rameshwar; Liu, Chi; Ma, Tianyu; Liu, Yaqiang; Wang, Shi; Thorn, Stephanie L.; Stacy, Mitchel R.; Sinusas, Albert J.
2015-12-15
Purpose: The energy spectrum for a cadmium zinc telluride (CZT) detector has a low energy tail due to incomplete charge collection and intercrystal scattering. Due to these solid-state detector effects, scatter would be overestimated if the conventional triple-energy window (TEW) method is used for scatter and crosstalk corrections in CZT-based imaging systems. The objective of this work is to develop a scatter and crosstalk correction method for {sup 99m}Tc/{sup 123}I dual-radionuclide imaging for a CZT-based dedicated cardiac SPECT system with pinhole collimators (GE Discovery NM 530c/570c). Methods: A tailing model was developed to account for the low energy tail effects of the CZT detector. The parameters of the model were obtained using {sup 99m}Tc and {sup 123}I point source measurements. A scatter model was defined to characterize the relationship between down-scatter and self-scatter projections. The parameters for this model were obtained from Monte Carlo simulation using SIMIND. The tailing and scatter models were further incorporated into a projection count model, and the primary and self-scatter projections of each radionuclide were determined with a maximum likelihood expectation maximization (MLEM) iterative estimation approach. The extracted scatter and crosstalk projections were then incorporated into MLEM image reconstruction as an additive term in forward projection to obtain scatter- and crosstalk-corrected images. The proposed method was validated using Monte Carlo simulation, a line source experiment, anthropomorphic torso phantom studies, and patient studies. The performance of the proposed method was also compared to that obtained with the conventional TEW method. Results: Monte Carlo simulations and the line source experiment demonstrated that the TEW method overestimated scatter while the proposed method provided more accurate scatter estimation by considering the low energy tail effect. In the phantom study, improved defect contrasts were
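For reference, the conventional TEW baseline that the abstract says overestimates scatter on CZT detectors approximates the scatter under the photopeak as the trapezoidal area defined by counts in two narrow flanking energy windows. A sketch of that baseline estimate (window widths in keV; names are illustrative):

```python
def tew_scatter(c_low, c_high, w_low, w_high, w_peak):
    """Conventional triple-energy-window scatter estimate.

    c_low, c_high: counts in the narrow windows below/above the photopeak
    w_low, w_high: widths (keV) of those windows
    w_peak: width (keV) of the photopeak window
    The two scaled count rates define a trapezoid whose area approximates
    the scatter counts inside the photopeak window.
    """
    return (c_low / w_low + c_high / w_high) * w_peak / 2.0
```

On a CZT detector the low-energy tail inflates c_low, which is exactly why this estimate is biased high and why the abstract's tailing model is needed.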
NASA Astrophysics Data System (ADS)
Mahesh, C.; Prakash, Satya; Sathiyamoorthy, V.; Gairola, R. M.
2011-11-01
An Artificial Neural Network (ANN) based technique is proposed for estimating precipitation over Indian land and oceanic regions [30° S - 40° N and 30° E - 120° E] using the Scattering Index (SI) and Polarization Corrected Temperature (PCT) derived from Special Sensor Microwave Imager (SSM/I) measurements. This rainfall retrieval algorithm is designed to estimate rainfall using a combination of SSM/I and Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) measurements. For training the ANN, SI and PCT (which capture rain signatures well) calculated from SSM/I brightness temperatures are used as inputs and PR rain rate as output. SI is computed from the 19.35 GHz, 22.235 GHz and 85.5 GHz vertical channels, and PCT from the 85.5 GHz vertical and horizontal channels. Once training is completed, independent data sets (not included in the training) are used to test the performance of the network. Instantaneous precipitation estimates from these independent test data sets are validated against PR surface rain rate measurements, and the results are compared with precipitation estimated using power-law based (i) global and (ii) regional algorithms. Overall, the present ANN-based algorithm shows better agreement with PR rain rate. This study is aimed at developing a more accurate operational rainfall retrieval algorithm for the Indo-French Megha-Tropiques Microwave Analysis and Detection of Rain and Atmospheric Structures (MADRAS) radiometer.
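The 85.5 GHz polarization-corrected temperature used above as an ANN input is commonly formed as a linear combination of the vertical and horizontal brightness temperatures; the coefficient below (beta ≈ 0.818, from the widely used Spencer et al. formulation) is an assumption for illustration, not a value given in the abstract:

```python
def pct_85ghz(tb_v, tb_h, beta=0.818):
    """Polarization-corrected temperature at 85.5 GHz:
    PCT = (1 + beta) * TB_V - beta * TB_H  (brightness temperatures in K).
    Removes the polarization signal of radiometrically cold ocean surfaces
    so that ice-scattering depressions (low PCT) stand out as rain."""
    return (1.0 + beta) * tb_v - beta * tb_h
```

When TB_V equals TB_H (an unpolarized scene) the PCT reduces to the brightness temperature itself, which is the sanity check usually quoted for this formula.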
NASA Astrophysics Data System (ADS)
Yao, B. A.; Zhang, C. S.; Sheng, C. J.; Peng, Y. L.
2005-07-01
This paper is a continuation of paper [1]. Here we further show that the difference between a twilight flat field and a night-sky exposure is mainly due to scattered light. Like Grundahl and Sorensen, we made pinhole images with the 1.56 m telescope at the Shanghai Observatory and the 63 cm telescope of Nanjing University to demonstrate the presence of scattered light directly. Both the 1.56 m and the 63 cm reflectors have normally designed baffles. It is therefore a common weakness of all standard reflector designs that the two baffles mounted in front of the primary and secondary mirrors are not enough to protect the CCD cameras from scattered light when obtaining accurate flat fields. It is of great importance to modify the primary mirror baffle of all similar reflectors in order to get more accurate flat fielding.
NASA Astrophysics Data System (ADS)
Bačić, Z.; Kress, J. D.; Parker, G. A.; Pack, R. T.
1990-02-01
Accurate 3D coupled channel calculations for total angular momentum J=0 for the reaction F+H2→HF+H using a realistic potential energy surface are analyzed. The reactive scattering is formulated using the hyperspherical (APH) coordinates of Pack and Parker. The adiabatic basis functions are generated quite efficiently using the discrete variable representation method. Reaction probabilities for relative collision energies of up to 17.4 kcal/mol are presented. To aid in the interpretation of the resonances and quantum structure observed in the calculated reaction probabilities, we analyze the phases of the S matrix transition elements, Argand diagrams, time delays and eigenlifetimes of the collision lifetime matrix. Collinear (1D) and reduced dimensional 3D bending corrected rotating linear model (BCRLM) calculations are presented and compared with the accurate 3D calculations.
NASA Astrophysics Data System (ADS)
Camp, Charles H., Jr.; Lee, Young Jong; Cicerone, Marcus T.
2016-04-01
Coherent anti-Stokes Raman scattering (CARS) microspectroscopy has demonstrated significant potential for biological and materials imaging. To date, however, the primary mechanism of disseminating CARS spectroscopic information is through pseudocolor imagery, which explicitly neglects a vast majority of the hyperspectral data. Furthermore, current paradigms in CARS spectral processing do not lend themselves to quantitative sample-to-sample comparability. The primary limitation stems from the need to accurately measure the so-called nonresonant background (NRB) that is used to extract the chemically-sensitive Raman information from the raw spectra. Measurement of the NRB on a pixel-by-pixel basis is a nontrivial task; thus, reference NRB from glass or water are typically utilized, resulting in error between the actual and estimated amplitude and phase. In this manuscript, we present a new methodology for extracting the Raman spectral features that significantly suppresses these errors through phase detrending and scaling. Classic methods of error-correction, such as baseline detrending, are demonstrated to be inaccurate and to simply mask the underlying errors. The theoretical justification is presented by re-developing the theory of phase retrieval via the Kramers-Kronig relation, and we demonstrate that these results are also applicable to maximum entropy method-based phase retrieval. This new error-correction approach is experimentally applied to glycerol spectra and tissue images, demonstrating marked consistency between spectra obtained using different NRB estimates, and between spectra obtained on different instruments. Additionally, in order to facilitate implementation of these approaches, we have made many of the tools described herein available free for download.
[Correction Method of Atmospheric Scattering Effect Based on Three Spectrum Bands].
Ye, Han-han; Wang, Xian-hua; Jiang, Xin-hua; Bu, Ting-ting
2016-03-01
As a major error source in CO2 retrieval, the atmospheric scattering effect hampers the application of satellite products. The effect of aerosol, and the combined effect of aerosol and ground surface, are important sources of atmospheric scattering, so the scattering contributions of both must be considered together. Based on the continuum and on the strong- and weak-absorption parts of the three spectral bands O2-A, CO2 1.6 μm and 2.06 μm, the aerosol and albedo information content was analyzed, and an improved full-physics retrieval method was proposed that retrieves aerosol and albedo simultaneously to correct the scattering effect. A simulation study of the CO2 error caused by aerosol and ground surface albedo, and of the residual error after correction, was carried out. The CO2 error caused by aerosol optical depth and ground surface albedo can reach up to 8%, and that caused by different aerosol types up to 10%, while these two types of error can be kept within 1% and 2%, respectively, by this correction method, showing that the method corrects the scattering effect effectively. The evaluation demonstrates the potential of this method for high-precision satellite data retrieval; some problems that need attention in real applications are also pointed out. PMID:27400493
Large corrections to high-pT hadron-hadron scattering in QCD
Ellis, R. K.; Furman, M. A.; Haber, H. E.; Hinchliffe, I.
1980-10-01
We have computed the first non-trivial QCD corrections to the quark-quark scattering process which contributes to the production of hadrons at large p{sub T} in hadron-hadron collisions. Using quark distribution functions defined in deep inelastic scattering and fragmentation functions defined in one-particle inclusive e{sup +}e{sup -} annihilation, we find that the corrections are large. This implies that QCD perturbation theory may not be reliable for large-p{sub T} hadron physics.
Methods for correcting microwave scattering and emission measurements for atmospheric effects
NASA Technical Reports Server (NTRS)
Komen, M. (Principal Investigator)
1975-01-01
The author has identified the following significant results. Algorithms were developed to permit correction of scattering coefficient and brightness temperature for the Skylab S193 Radscat for the effects of cloud attenuation. These algorithms depend upon a measurement of the vertically polarized excess brightness temperature at 50 deg incidence angle. This excess temperature is converted to an equivalent 50 deg attenuation, which may then be used to estimate the horizontally polarized excess brightness temperature and reduced scattering coefficient at 50 deg. For angles other than 50 deg, the correction also requires use of the variation of emissivity with salinity and water temperature.
Xu, Yuan; Bai, Ti; Yan, Hao; Ouyang, Luo; Pompos, Arnold; Wang, Jing; Zhou, Linghong; Jiang, Steve B; Jia, Xun
2015-05-01
Cone-beam CT (CBCT) has become the standard image guidance tool for patient setup in image-guided radiation therapy. However, due to its large illumination field, scattered photons severely degrade its image quality. While kernel-based scatter correction methods have been used routinely in the clinic, it is still desirable to develop Monte Carlo (MC) simulation-based methods due to their accuracy. However, the high computational burden of the MC method has prevented routine clinical application. This paper reports our recent development of a practical method of MC-based scatter estimation and removal for CBCT. In contrast with conventional MC approaches that estimate scatter signals using a scatter-contaminated CBCT image, our method used a planning CT image for MC simulation, which has the advantages of accurate image intensity and absence of image truncation. In our method, the planning CT was first rigidly registered with the CBCT. Scatter signals were then estimated via MC simulation. After scatter signals were removed from the raw CBCT projections, a corrected CBCT image was reconstructed. The entire workflow was implemented on a GPU platform for high computational efficiency. Strategies such as projection denoising, CT image downsampling, and interpolation along the angular direction were employed to further enhance the calculation speed. We studied the impact of key parameters in the workflow on the resulting accuracy and efficiency, based on which the optimal parameter values were determined. Our method was evaluated in numerical simulation, phantom, and real patient cases. In the simulation cases, our method reduced mean HU errors from 44 to 3 HU and from 78 to 9 HU in the full-fan and the half-fan cases, respectively. In both the phantom and the patient cases, image artifacts caused by scatter, such as ring artifacts around the bowtie area, were reduced. With all the techniques employed, we achieved computation time of less than 30 s including the
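The removal step described above amounts to subtracting the MC-estimated scatter from each raw projection before the log transform used in reconstruction; a small floor keeps the subtraction from producing non-positive intensities. A minimal sketch of that step only (not the authors' GPU implementation; names are illustrative):

```python
import math

def scatter_corrected_line_integrals(raw, scatter, i0, eps=1e-6):
    """Subtract an MC-estimated scatter signal from a raw projection and
    convert to line integrals p = -ln(I / I0) for reconstruction.
    raw, scatter: flat lists of detector intensities; i0: unattenuated
    intensity; eps: floor guarding against non-positive values."""
    primary = [max(r - s, eps) for r, s in zip(raw, scatter)]
    return [-math.log(p / i0) for p in primary]
```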
Coulomb corrections to the parameters of the Molière multiple scattering theory
NASA Astrophysics Data System (ADS)
Kuraev, Eduard; Voskresenskaya, Olga; Tarasov, Alexander
2014-06-01
High-energy Coulomb corrections to the parameters of the Molière multiple scattering theory are obtained. Numerical calculations are presented for nuclear charge numbers of the target atom in the range 6 ≤ Z ≤ 92. It is shown that these corrections are large for sufficiently heavy elements of the target material and should be taken into account in describing high-energy experiments with nuclear targets.
Ma, C.; Liescheski, P. B.; Bonham, R. A.
1989-12-01
In this article we describe an experimental technique to measure the total electron-impact cross section by measurement of the attenuation of an electron beam passing through a gas at constant pressure with the unwanted forward scattering contribution removed. The technique is based on the different spatial propagation properties of scattered and unscattered electrons. The correction is accomplished by measuring the electron beam attenuation dependence on both the target gas pressure (number density) and transmission length. Two extended forms of the Beer-Lambert law which approximately include the contributions for forward scattering and for forward scattering plus multiple scattering from the gas outside the electron beam were developed. It is argued that the dependence of the forward scattering on the path length through the gas is approximately independent of the model used to describe it. The proposed methods were used to determine the total cross section and forward scattering contribution from argon (Ar) with 300-eV electrons. Our results are compared with those in the literature and the predictions of theory and experiment for the forward scattering and multiple scattering contributions. In addition, Monte Carlo simulations were performed as a further test of the method.
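The baseline behind the extended forms mentioned above is Beer-Lambert attenuation, I/I0 = exp(-n σ L). A crude way to see why unremoved forward scattering biases the inferred cross section low is to let a fraction of the scattered electrons still reach the detector; the fraction f_forward below is a hypothetical illustration, not the paper's actual extended form:

```python
import math

def transmitted_fraction(n, sigma, length, f_forward=0.0):
    """Beer-Lambert transmission exp(-n*sigma*L) plus a crude additive
    term: a fraction f_forward of the scattered electrons is assumed to
    still reach the detector, mimicking unremoved forward scattering.
    n: number density (1/cm^3), sigma: cross section (cm^2), length: cm."""
    t = math.exp(-n * sigma * length)
    return t + f_forward * (1.0 - t)

def apparent_cross_section(n, length, transmission):
    """Cross section inferred from a measured transmission via the plain
    Beer-Lambert law; biased low when the transmission includes scattered flux."""
    return -math.log(transmission) / (n * length)
```

Feeding the contaminated transmission back through the plain law recovers a cross section smaller than the true sigma, which is the systematic error the measurement technique is designed to remove.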
Radiative Corrections for Lepton-Proton Scattering: When the Mass Matters
NASA Astrophysics Data System (ADS)
Afanasev, Andrei
2015-04-01
Radiative corrections procedures for electron-proton and muon-proton scattering are well established under the assumption that the leptons can be treated in an ultra-relativistic approximation. The MUSE experiment at PSI and the COMPASS experiment at CERN have entered kinematic regions where the explicit dependence of radiative corrections on the lepton mass becomes important. MUSE will consider the scattering of muons with momenta of the order of 100 MeV/c, so lepton mass corrections are important over the entire kinematic domain. The COMPASS experiment uses scattering of 100 GeV/c muons, and the muon mass effects are especially relevant in the quasi-real photoproduction limit, Q^2 → 0. A dedicated Monte Carlo generator of radiative events is being developed for MUSE, which also includes effects of interference between the lepton and proton bremsstrahlung. Parts of the radiative corrections are expected to be suppressed for muons due to the larger muon mass. Two-photon exchange corrections are generally expected to be small, and should be similar for electrons and muons. We classify the radiative corrections into two categories, C-even and C-odd under the lepton charge reversal, and discuss their role separately for the above experiments.
Oyeyemi, Victor B.; Krisiloff, David B.; Keith, John A.; Libisch, Florian; Pavone, Michele; Carter, Emily A.
2014-01-28
Oxygenated hydrocarbons play important roles in combustion science as renewable fuels and additives, but many details about their combustion chemistry remain poorly understood. Although many methods exist for computing accurate electronic energies of molecules at equilibrium geometries, a consistent description of entire combustion reaction potential energy surfaces (PESs) requires multireference correlated wavefunction theories. Here we use bond dissociation energies (BDEs) as a foundational metric to benchmark methods based on multireference configuration interaction (MRCI) for several classes of oxygenated compounds (alcohols, aldehydes, carboxylic acids, and methyl esters). We compare results from multireference singles and doubles configuration interaction to those utilizing a posteriori and a priori size-extensivity corrections, benchmarked against experiment and coupled cluster theory. We demonstrate that size-extensivity corrections are necessary for chemically accurate BDE predictions even in relatively small molecules and furnish examples of unphysical BDE predictions resulting from using too-small orbital active spaces. We also outline the specific challenges in using MRCI methods for carbonyl-containing compounds. The resulting complete basis set extrapolated, size-extensivity-corrected MRCI scheme produces BDEs generally accurate to within 1 kcal/mol, laying the foundation for this scheme's use on larger molecules and for more complex regions of combustion PESs.
Calbo, Joaquín; Ortí, Enrique; Sancho-García, Juan C; Aragó, Juan
2015-03-10
In this work, we present a thorough assessment of the performance of some representative double-hybrid density functionals (revPBE0-DH-NL and B2PLYP-NL) as well as their parent hybrid and GGA counterparts, in combination with the most modern version of the nonlocal (NL) van der Waals correction to describe very large weakly interacting molecular systems dominated by noncovalent interactions. Prior to the assessment, an accurate and homogeneous set of reference interaction energies was computed for the supramolecular complexes constituting the L7 and S12L data sets by using the novel, precise, and efficient DLPNO-CCSD(T) method at the complete basis set limit (CBS). The correction of the basis set superposition error and the inclusion of the deformation energies (for the S12L set) have been crucial for obtaining precise DLPNO-CCSD(T)/CBS interaction energies. Among the density functionals evaluated, the double-hybrid revPBE0-DH-NL and B2PLYP-NL with the three-body dispersion correction provide remarkably accurate association energies very close to the chemical accuracy. Overall, the NL van der Waals approach combined with proper density functionals can be seen as an accurate and affordable computational tool for the modeling of large weakly bonded supramolecular systems. PMID:26579747
Beare, Richard; Brown, Michael J. I.; Pimbblet, Kevin
2014-12-20
We describe an accurate new method for determining absolute magnitudes, and hence also K-corrections, that is simpler than most previous methods, being based on a quadratic function of just one suitably chosen observed color. The method relies on the extensive and accurate new set of 129 empirical galaxy template spectral energy distributions from Brown et al. A key advantage of our method is that we can reliably estimate random errors in computed absolute magnitudes due to galaxy diversity, photometric error and redshift error. We derive K-corrections for the five Sloan Digital Sky Survey filters and provide parameter tables for use by the astronomical community. Using the NYU Value-Added Galaxy Catalog, we compare our K-corrections with those from kcorrect. Our K-corrections produce absolute magnitudes that are generally in good agreement with kcorrect. Absolute griz magnitudes differ by less than 0.02 mag and those in the u band by ∼0.04 mag. The evolution of rest-frame colors as a function of redshift is better behaved using our method, with relatively few galaxies being assigned anomalously red colors and a tight red sequence being observed across the whole 0.0 < z < 0.5 redshift range.
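The functional form described (a quadratic in a single observed color, with coefficients depending on redshift and filter) is simple enough to sketch directly. The coefficients below are placeholders, not values from the published parameter tables.

```python
def k_correction(color, a2, a1, a0):
    """K-correction modeled as a quadratic in one observed color,
    K = a2*color**2 + a1*color + a0, where the coefficients depend on
    redshift and filter and would be read from published parameter tables.
    Coefficient values used here are illustrative placeholders.
    """
    return a2 * color**2 + a1 * color + a0
```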
Improving quantitative dosimetry in 177Lu-DOTATATE SPECT by energy window-based scatter corrections
Lagerburg, Vera; Klausen, Thomas L.; Holm, Søren
2014-01-01
Purpose Patient-specific dosimetry of lutetium-177 (177Lu)-DOTATATE treatment in neuroendocrine tumours is important, because uptake differs across patients. Single photon emission computed tomography (SPECT)-based dosimetry requires a conversion factor between the obtained counts and the activity, which depends on the collimator type, the utilized energy windows and the applied scatter correction techniques. In this study, energy window subtraction-based scatter correction methods are compared experimentally and quantitatively. Materials and methods 177Lu SPECT images of a phantom with known activity concentration ratio between the uniform background and filled hollow spheres were acquired for three different collimators: low-energy high resolution (LEHR), low-energy general purpose (LEGP) and medium-energy general purpose (MEGP). Counts were collected in several energy windows, and scatter correction was performed by applying different methods such as effective scatter source estimation (ESSE), triple-energy and dual-energy window, double-photopeak window and downscatter correction. The intensity ratio between the spheres and the background was measured and corrected for the partial volume effect and used to compare the performance of the methods. Results Low-energy collimators combined with 208 keV energy windows give rise to artefacts. For the 113 keV energy window, large differences were observed in the ratios for the spheres. For MEGP collimators with the ESSE correction technique, the measured ratio was close to the real ratio, and the differences between spheres were small. Conclusion For quantitative 177Lu imaging MEGP collimators are advised. Both energy peaks can be utilized when the ESSE correction technique is applied. The difference between the calculated and the real ratio is less than 10% for both energy windows. PMID:24525900
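For reference, the triple-energy window (TEW) estimate mentioned above is commonly computed from two narrow windows flanking the photopeak. This is a generic TEW sketch, not the exact implementation used in the study.

```python
def tew_scatter(counts_lower, counts_upper, width_lower, width_upper, width_peak):
    """Generic triple-energy-window (TEW) scatter estimate: the scatter under
    the photopeak window is approximated by the area of the trapezoid spanned
    by the count densities in two narrow flanking windows.
    """
    return (counts_lower / width_lower + counts_upper / width_upper) * width_peak / 2.0
```

The dual-energy window variant drops the upper flanking window and instead scales a single lower window by an empirically determined factor.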
NASA Astrophysics Data System (ADS)
Zohoun, Sylvain; Agoua, Eusèbe; Degan, Gérard; Perre, Patrick
2002-08-01
This paper presents an experimental study of mass diffusion in the hygroscopic region of four temperate wood species and three tropical ones. In order to simplify the interpretation of the phenomena, a dimensionless parameter called reduced diffusivity is defined. This parameter varies from 0 to 1. The method used is firstly based on the determination of that parameter from measurements of the mass flux, using a standard device that ensures tightness, accommodates dimensional variations, allows easy installation of wood samples, and provides good stability of temperature and humidity. Secondly, the reasons why that parameter has to be corrected are presented. An abacus for this correction of the mass diffusivity of wood in the steady regime has been plotted. This work represents a significant advance in the characterisation of forest species.
QCD corrections to dilepton production near partonic threshold in p̄p scattering
Shimizu, H.; Sterman, G.; Vogelsang, W.; Yokoya, H.
2005-10-02
We present a recent study of the QCD corrections to dilepton production near partonic threshold in transversely polarized p̄p scattering. We analyze the role of the higher-order perturbative QCD corrections in terms of the available fixed-order contributions as well as of all-order soft-gluon resummations for the kinematical regime of proposed experiments at GSI-FAIR. We find that perturbative corrections are large for both unpolarized and polarized cross sections, but that the spin asymmetries are stable. The role of the far infrared region of the momentum integral in the resummed exponent and the effect of the NNLL resummation are briefly discussed.
Igor Akushevich; Andrei Afanasev; Mykola Merenkov
2001-12-01
Explicit formulae for radiative correction (RC) calculations for elastic ep scattering are presented. Two typical measurements of polarization observables, such as beam-target asymmetry or recoil proton polarization, are considered. Possibilities to take into account realistic experimental acceptances are discussed. The Fortran code MASCARAD for providing the RC procedure is presented. Numerical analysis is done for kinematical conditions of TJNAF.
NASA Astrophysics Data System (ADS)
Afanasev, A.; Akushevich, I.; Merenkov, N.
2001-12-01
Explicit formulas for radiative correction (RC) calculations for elastic ep scattering are presented. Two typical measurements of polarization observables, such as beam-target asymmetry or recoil proton polarization, are considered. The possibilities of taking into account realistic experimental acceptances are discussed. The FORTRAN code MASCARAD for providing the RC procedure is presented. A numerical analysis is done for the kinematical conditions of CEBAF.
QED Radiative Corrections to Asymmetries of Elastic ep-scattering in Hadronic Variables
Alexander Ilyichev; Andrei Afanasev; Igor Akushevich; Mykola Merenkov
2001-08-16
Compact analytical formulae for QED radiative corrections to the processes of elastic e-p scattering are obtained for the case when kinematic variables are reconstructed from the measured recoil proton momentum. Numerical analysis is presented under kinematic conditions of current experiments at JLab.
Interference detection and correction applied to incoherent-scatter radar power spectrum measurement
NASA Technical Reports Server (NTRS)
Ying, W. P.; Mathews, J. D.; Rastogi, P. K.
1986-01-01
A median-filter-based interference detection and correction technique is evaluated, and its application to D-region ionospheric power spectra from the Arecibo incoherent scatter radar is discussed. The method can be extended to other kinds of data provided the statistics underlying the process remain valid.
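A minimal sketch of a median-filter-based interference detection and correction scheme of the kind described: points that deviate from a running median by more than a robust threshold are flagged as interference and replaced. The window size and threshold are illustrative choices, not the paper's values.

```python
import numpy as np

def despike(spectrum, window=5, threshold=5.0):
    """Detect impulsive interference in a power spectrum as points deviating
    from a running median by more than `threshold` times a robust scale
    estimate (the median absolute deviation), and replace them by the median.
    """
    s = np.asarray(spectrum, dtype=float)
    pad = window // 2
    padded = np.pad(s, pad, mode="edge")
    med = np.array([np.median(padded[i:i + window]) for i in range(s.size)])
    mad = np.median(np.abs(s - med)) + 1e-12   # robust scale, guarded from zero
    out = s.copy()
    bad = np.abs(s - med) > threshold * mad
    out[bad] = med[bad]
    return out
```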
NASA Astrophysics Data System (ADS)
Rosenthal, Yair; Lohmann, George P.
2002-09-01
Paired δ18O and Mg/Ca measurements on the same foraminiferal shells offer the ability to independently estimate sea surface temperature (SST) changes and assess their temporal relationship to the growth and decay of continental ice sheets. The accuracy of this method is confounded, however, by the absence of a quantitative method to correct Mg/Ca records for alteration by dissolution. Here we describe dissolution-corrected calibrations for Mg/Ca-paleothermometry in which the preexponent constant is a function of size-normalized shell weight: (1) for G. ruber (212-300 μm), (Mg/Ca)ruber = (0.025 wt + 0.11) e^(0.095T), and (2) for G. sacculifer (355-425 μm), (Mg/Ca)sacc = (0.0032 wt + 0.181) e^(0.095T). The new calibrations improve the accuracy of SST estimates and are globally applicable. With this correction, eastern equatorial Atlantic SST during the Last Glacial Maximum is estimated to be 2.9° ± 0.4°C colder than today.
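The calibration above can be inverted for temperature in one line; a minimal sketch follows, using the coefficients quoted in the abstract. The function name, interface, and the assumption that shell weight is supplied in the calibration's own units are illustrative.

```python
import numpy as np

def sst_from_mgca(mgca, shell_weight, a=0.025, b=0.11, k=0.095):
    """Invert the dissolution-corrected calibration
    Mg/Ca = (a*wt + b) * exp(k*T) for temperature T (deg C).

    Defaults are the G. ruber (212-300 um) coefficients quoted in the
    abstract; pass a=0.0032, b=0.181 for G. sacculifer (355-425 um).
    `shell_weight` must be in the units the calibration was derived with.
    """
    return np.log(mgca / (a * shell_weight + b)) / k

# round-trip check: synthesize Mg/Ca at T = 25 deg C for a shell weight of 10
mgca_25 = (0.025 * 10.0 + 0.11) * np.exp(0.095 * 25.0)
```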
Scatter correction for x-ray conebeam CT using one-dimensional primary modulation
NASA Astrophysics Data System (ADS)
Zhu, Lei; Gao, Hewei; Bennett, N. Robert; Xing, Lei; Fahrig, Rebecca
2009-02-01
Recently, we developed an efficient scatter correction method for x-ray imaging using primary modulation. A two-dimensional (2D) primary modulator with spatially variant attenuating materials is inserted between the x-ray source and the object to separate primary and scatter signals in the Fourier domain. Due to the high modulation frequency in both directions, the 2D primary modulator has a strong scatter correction capability for objects with arbitrary geometries. However, signal processing on the modulated projection data requires knowledge of the modulator position and attenuation. In practical systems, mainly due to system gantry vibration, beam hardening effects and the ramp-filtering in the reconstruction, the insertion of the 2D primary modulator results in artifacts such as rings in the CT images, if no post-processing is applied. In this work, we eliminate the source of artifacts in the primary modulation method by using a one-dimensional (1D) modulator. The modulator is aligned parallel to the ramp-filtering direction to avoid error magnification, while sufficient primary modulation is still achieved for scatter correction on a quasicylindrical object, such as a human body. The scatter correction algorithm is also greatly simplified for convenience and stability in practical implementations. The method is evaluated on a clinical CBCT system using the Catphan© 600 phantom. The result shows effective scatter suppression without introducing additional artifacts. In the selected regions of interest, the reconstruction error is reduced from 187.2 HU to 10.0 HU if the proposed method is used.
NASA Astrophysics Data System (ADS)
Takeuchi, Wataru
2013-10-01
Because the interaction potential used in impact-collision ion scattering spectroscopy (ICISS) data analysis, characterized by the screening length that represents the screening effect, has not been satisfactorily established up to the present, a correction factor is commonly introduced into the screening length. Previously, Yamamura, Takeuchi and Kawamura (YTK) proposed a theory that takes the shell effect of electron distributions into account in the correction factor to the Firsov screening length in the Moliere potential. Applying the YTK theory to the evaluation of screening length corrections for the interaction potentials in ICISS showed that the corrections calculated with the YTK theory agree closely with those determined by simulations or numerical calculations in ICISS and its variants, and that this evaluation is superior to one based on the O'Connor and Biersack (OB) formula.
Correction for multiple scattering of unpolarized photons in attenuation coefficient measurements
Fernandez, J.E.; Sumini, M.; Satori, R.
1995-01-01
Calculations of the diffusion of unpolarized photons in thin targets have been performed with recourse to a vector transport model taking rigorously into account the polarization introduced by the scattering interactions. An order-of-interactions solution of the Boltzmann transport equation for photons was used to describe the multiple scattering terms due to the prevailing effects in the X-ray regime. An analytical expression for the correction factor to the attenuation coefficient is given in terms of the solid angle subtended by the detector and the energy interval characterizing the detection response. Although the main corrections are due to the influence of the pure Rayleigh effect, first- and second-order chains involving the Rayleigh and Compton effects have been considered as possible sources of overlapping contributions to the transmitted intensity. The extent of the corrections is estimated and some examples are given for pure element targets.
Constrained gamma-Z interference corrections to parity-violating electron scattering
Hall, Nathan Luke; Blunden, Peter Gwithian; Melnitchouk, Wally; Thomas, Anthony W.; Young, Ross D.
2013-07-01
We present a comprehensive analysis of gamma-Z interference corrections to the weak charge of the proton measured in parity-violating electron scattering, including a survey of existing models and a critical analysis of their uncertainties. Constraints from parton distributions in the deep-inelastic region, together with new data on parity-violating electron scattering in the resonance region, result in significantly smaller uncertainties on the corrections compared to previous estimates. At the kinematics of the Qweak experiment, we determine the gamma-Z box correction to be Re □_{γZ}^{V} = (5.61 ± 0.36) × 10^{-3}. The new constraints also allow precise predictions to be made for parity-violating deep-inelastic asymmetries on the deuteron.
Correcting errors in the optical path difference in Fourier spectroscopy: a new accurate method.
Kauppinen, J; Kärkkäinen, T; Kyrö, E
1978-05-15
A new computational method for calculating and correcting the errors of the optical path difference in Fourier spectrometers is presented. The method requires only a one-sided interferogram and a single well-separated line in the spectrum. It also cancels out the linear phase error. The practical theory of the method is included, and an example of its operation is illustrated by simulations. The method is further verified by several simulations in order to estimate its usefulness and accuracy. An example of the use of this method in practice is also given. PMID:20198027
A library least-squares approach for scatter correction in gamma-ray tomography
NASA Astrophysics Data System (ADS)
Meric, Ilker; Anton Johansen, Geir; Valgueiro Malta Moreira, Icaro
2015-03-01
Scattered radiation is known to lead to distortion in reconstructed images in Computed Tomography (CT). The effects of scattered radiation are especially more pronounced in non-scanning, multiple source systems which are preferred for flow imaging where the instantaneous density distribution of the flow components is of interest. In this work, a new method based on a library least-squares (LLS) approach is proposed as a means of estimating the scatter contribution and correcting for this. The validity of the proposed method is tested using the 85-channel industrial gamma-ray tomograph previously developed at the University of Bergen (UoB). The results presented here confirm that the LLS approach can effectively estimate the amounts of transmission and scatter components in any given detector in the UoB gamma-ray tomography system.
NASA Astrophysics Data System (ADS)
Yang, Kai; Burkett, George, Jr.; Boone, John M.
2014-11-01
The purpose of this research was to develop a method to correct the cupping artifact caused from x-ray scattering and to achieve consistent Hounsfield Unit (HU) values of breast tissues for a dedicated breast CT (bCT) system. The use of a beam passing array (BPA) composed of parallel-holes has been previously proposed for scatter correction in various imaging applications. In this study, we first verified the efficacy and accuracy using BPA to measure the scatter signal on a cone-beam bCT system. A systematic scatter correction approach was then developed by modeling the scatter-to-primary ratio (SPR) in projection images acquired with and without BPA. To quantitatively evaluate the improved accuracy of HU values, different breast tissue-equivalent phantoms were scanned and radially averaged HU profiles through reconstructed planes were evaluated. The dependency of the correction method on object size and number of projections was studied. A simplified application of the proposed method on five clinical patient scans was performed to demonstrate efficacy. For the typical 10-18 cm breast diameters seen in the bCT application, the proposed method can effectively correct for the cupping artifact and reduce the variation of HU values of breast equivalent material from 150 to 40 HU. The measured HU values of 100% glandular tissue, 50/50 glandular/adipose tissue, and 100% adipose tissue were approximately 46, -35, and -94, respectively. It was found that only six BPA projections were necessary to accurately implement this method, and the additional dose requirement is less than 1% of the exam dose. The proposed method can effectively correct for the cupping artifact caused from x-ray scattering and retain consistent HU values of breast tissues.
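Once a scatter-to-primary ratio (SPR) estimate is available, the correction itself is a simple per-pixel rescaling; a minimal sketch is below. The paper's contribution, modeling the SPR from sparse beam-passing-array (BPA) projections, is not reproduced here, and the SPR map is assumed given.

```python
import numpy as np

def remove_scatter(projection, spr):
    """Recover the primary signal from a scatter-contaminated projection
    given a scatter-to-primary-ratio (SPR) map: since
    total = primary * (1 + SPR), primary = total / (1 + SPR).
    """
    return np.asarray(projection, dtype=float) / (1.0 + np.asarray(spr, dtype=float))
```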
Accurate measurement of the x-ray coherent scattering form factors of tissues
NASA Astrophysics Data System (ADS)
King, Brian W.
The material dependent x-ray scattering properties of tissues are determined by their scattering form factors, measured as a function of the momentum transfer argument, x. Incoherent scattering form factors, Finc, are calculable for all values of x while coherent scattering form factors, Fcoh, cannot be calculated except at large x because of their dependence on long range order. As a result, measuring Fcoh is very important to the developing field of x-ray scatter imaging. Previous measurements of Fcoh, based on crystallographic techniques, have shown significant variability, as these methods are not optimal for amorphous materials. Two methods of measuring Fcoh, designed with amorphous materials in mind, are developed in this thesis. An angle-dispersive technique is developed that uses a polychromatic x-ray beam and a large area, energy-insensitive detector. It is shown that Fcoh can be measured in this system if the incident x-ray spectrum is known. The problem is ill-conditioned for typical x-ray spectra and two numerical methods of dealing with the poor conditioning are explored. It is shown that these techniques work best with K-edge filters to limit the spectral width and that the accuracy degrades for strongly ordered materials. Measurements of Fcoh for water samples are made using 50, 70 and 92 kVp spectra. The average absolute relative difference in Fcoh between our results and the literature for water is approximately 10-15%. Similar measurements for fat samples were made and found to be qualitatively similar to results in the literature, although there is very large variation between the literature values in this case. The angle-dispersive measurement is limited to low resolution measurements of the coherent scattering form factor, although it is more accessible than traditional measurements because of the relatively commonplace equipment requirements. An energy-dispersive technique is also developed that uses a polychromatic x-ray beam and an
A Monte Carlo correction for the effect of Compton scattering in 3-D PET brain imaging
Levin, C.S.; Dahlbom, M.; Hoffman, E.J.
1995-08-01
A Monte Carlo simulation has been developed to simulate and correct for the effect of Compton scatter in 3-D acquired PET brain scans. The method utilizes the 3-D reconstructed image volume as the source intensity distribution for a photon-tracking Monte Carlo simulation. It is assumed that the number of events in each pixel of the image represents the isotope concentration at that location in the brain. The history of each annihilation photon's interactions in the scattering medium is followed, and the sinograms for the scattered and unscattered photon pairs are generated in a simulated 3-D PET acquisition. The calculated scatter contribution is used to correct the original data set. The method is general and can be applied to any scanner configuration or geometry. In its current form the simulation requires 25 hours on a single Sparc 10 CPU when every pixel in a 15-plane, 128 x 128 pixel image volume is sampled, and less than 2 hours when 16 pixels (4 x 4) are grouped as a single pixel. Results of the correction applied to 3-D human and phantom studies are presented.
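The final correction step, applying the calculated scatter contribution to the original data set, can be sketched as a sinogram subtraction. The photon-tracking simulation itself is not reproduced; the function name, the `scale` normalization, and the clipping of negative bins are illustrative assumptions.

```python
import numpy as np

def subtract_simulated_scatter(measured_sinogram, scatter_sinogram, scale=1.0):
    """Subtract a Monte Carlo scatter estimate from measured PET sinogram
    data, clipping negative bins to zero. `scale` normalizes the simulated
    scatter counts to the measured acquisition.
    """
    corrected = (np.asarray(measured_sinogram, dtype=float)
                 - scale * np.asarray(scatter_sinogram, dtype=float))
    return np.clip(corrected, 0.0, None)
```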
Two-photon exchange corrections in elastic lepton-proton scattering at small momentum transfer
NASA Astrophysics Data System (ADS)
Tomalak, Oleksandr; Vanderhaeghen, Marc
2016-03-01
In recent years, elastic electron-proton scattering experiments, with and without polarized protons, gave strikingly different results for the electric over magnetic proton form factor ratio. A mysterious discrepancy ("the proton radius puzzle") has been observed in the measurement of the proton charge radius in muon spectroscopy experiments versus electron spectroscopy and electron scattering. Two-photon exchange (TPE) contributions are the largest source of the hadronic uncertainty in these experiments. We compare the existing models of the elastic contribution to the TPE correction in lepton-proton scattering. A subtracted dispersion relation formalism for the TPE in electron-proton scattering has been developed and tested. Its relative effect on the cross section is in the 1-2% range for low values of the momentum transfer. An alternative dispersive evaluation of the TPE correction to the hydrogen hyperfine splitting was found and applied. For the inelastic TPE contribution, the low momentum transfer expansion was studied. Together with the elastic TPE it describes the experimental TPE fit to electron data quite well. For the forthcoming muon-proton scattering experiment (MUSE) the resulting TPE was found to be in the 0.5-1% range, which is the planned accuracy goal.
NASA Astrophysics Data System (ADS)
Kassinopoulos, Michalis; Pitris, Costas
2016-03-01
The modulations appearing on the backscattering spectrum originating from a scatterer are related to its diameter as described by Mie theory for spherical particles. Many metrics for Spectroscopic Optical Coherence Tomography (SOCT) take advantage of this observation in order to enhance the contrast of Optical Coherence Tomography (OCT) images. However, none of these metrics has achieved high accuracy when calculating the scatterer size. In this work, Mie theory was used to further investigate the relationship between the degree of modulation in the spectrum and the scatterer size. From this study, a new spectroscopic metric, the bandwidth of the Correlation of the Derivative (COD) was developed which is more robust and accurate, compared to previously reported techniques, in the estimation of scatterer size. The self-normalizing nature of the derivative and the robustness of the first minimum of the correlation as a measure of its width, offer significant advantages over other spectral analysis approaches especially for scatterer sizes above 3 μm. The feasibility of this technique was demonstrated using phantom samples containing 6, 10 and 16 μm diameter microspheres as well as images of normal and cancerous human colon. The results are very promising, suggesting that the proposed metric could be implemented in OCT spectral analysis for measuring nuclear size distribution in biological tissues. A technique providing such information would be of great clinical significance since it would allow the detection of nuclear enlargement at the earliest stages of precancerous development.
Chavez, P.S., Jr.
1988-01-01
Digital analysis of remotely sensed data has become an important component of many earth-science studies. These data are often processed through a set of preprocessing or "clean-up" routines that includes a correction for atmospheric scattering, often called haze. Various methods to correct or remove the additive haze component have been developed, including the widely used dark-object subtraction technique. A problem with most of these methods is that the haze values for each spectral band are selected independently. This can create problems because atmospheric scattering is highly wavelength-dependent in the visible part of the electromagnetic spectrum and the scattering values are correlated with each other. Therefore, multispectral data such as from the Landsat Thematic Mapper and Multispectral Scanner must be corrected with haze values that are spectral band dependent. An improved dark-object subtraction technique is demonstrated that allows the user to select a relative atmospheric scattering model to predict the haze values for all the spectral bands from a selected starting band haze value. The improved method normalizes the predicted haze values for the different gain and offset parameters used by the imaging system. Examples of haze value differences between the old and improved methods for Thematic Mapper Bands 1, 2, 3, 4, 5, and 7 are 40.0, 13.0, 12.0, 8.0, 5.0, and 2.0 vs. 40.0, 13.2, 8.9, 4.9, 16.7, and 3.3, respectively, using a relative scattering model of a clear atmosphere. In one Landsat multispectral scanner image the haze value differences for Bands 4, 5, 6, and 7 were 30.0, 50.0, 50.0, and 40.0 for the old method vs. 30.0, 34.4, 43.6, and 6.4 for the new method using a relative scattering model of a hazy atmosphere. ?? 1988.
A simple scatter correction method for dual energy contrast-enhanced digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Lu, Yihuan; Lau, Beverly; Hu, Yue-Houng; Zhao, Wei; Gindi, Gene
2014-03-01
Dual-Energy Contrast Enhanced Digital Breast Tomosynthesis (DE-CE-DBT) has the potential to deliver diagnostic information for vascularized breast pathology beyond that available from screening DBT. DE-CE-DBT involves a contrast (iodine) injection followed by a low-energy (LE) and a high-energy (HE) acquisition. These undergo weighted subtraction and then reconstruction, which ideally shows only the iodinated signal. Scatter in the projection data leads to "cupping" artifacts that can reduce the visibility and quantitative accuracy of the iodinated signal. Filtered backprojection (FBP) reconstruction ameliorates artifacts of this type, but using FBP forgoes the advantages of iterative reconstruction. This motivates an effective and clinically practical scatter correction (SC) method for the projection data. We propose a simple SC method, applied at each acquisition angle. It uses scatter-only data at the edge of the image to interpolate a scatter estimate within the breast region. The interpolation has an approximately correct spatial profile but is quantitatively inaccurate. We further correct the interpolated scatter data with the aid of easily obtainable knowledge of the SPR (scatter-to-primary ratio) at a single reference point. We validated the SC method using a CIRS breast phantom with iodine inserts. We evaluated its efficacy in terms of SDNR and iodine quantitative accuracy. We also applied our SC method to a patient DE-CE-DBT study and showed that the SC allowed detection of a previously confirmed tumor at the edge of the breast. The SC method is quick to use and may be useful in a clinical setting.
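The reference-point SPR rescaling step can be sketched in one dimension: the interpolated scatter estimate keeps its spatial shape but is scaled so that, at a single reference pixel with a known scatter-to-primary ratio, it matches the implied scatter level. The function name and toy data are illustrative, not the authors' code.

```python
def scale_scatter_estimate(scatter_est, totals, ref_idx, ref_spr):
    """Rescale a shape-correct but quantitatively inaccurate scatter estimate
    so that the scatter-to-primary ratio at one reference pixel equals the
    known value ref_spr, then subtract it to recover the primary signal.
    (A 1D sketch of the idea in the abstract, not the exact method.)"""
    # At the reference pixel: total T = P + S and SPR = S / P,
    # so the desired scatter there is S = T * SPR / (1 + SPR).
    s_target = totals[ref_idx] * ref_spr / (1.0 + ref_spr)
    k = s_target / scatter_est[ref_idx]   # single global scale factor
    corrected = [k * s for s in scatter_est]
    primary = [t - s for t, s in zip(totals, corrected)]
    return corrected, primary

# Toy projection row: flat total signal, hump-shaped interpolated scatter.
totals = [100.0] * 5
est = [10.0, 12.0, 14.0, 12.0, 10.0]
corrected, primary = scale_scatter_estimate(est, totals, ref_idx=2, ref_spr=1.0)
```

At the reference pixel an SPR of 1.0 splits the total evenly between scatter and primary, while the spatial profile of the interpolated estimate is preserved by the single scale factor.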
Germer, Thomas A
2016-09-01
We consider the effect of volume diffusion on measurements of the bidirectional scattering distribution function when the solid-angle-defining aperture is placed at a finite distance. We derive expressions for correction factors that can be used when the reduced scattering coefficients and the index of refraction are known. When these quantities are not known, the expressions can be used to guide the assessment of measurement uncertainty. We find that some measurement geometries reduce the effect of volume diffusion compared to their reciprocal geometries. PMID:27607273
Wang, Siwei; Sun, Dongning; Dong, Yi; Xie, Weilin; Shi, Hongxiao; Yi, Lilin; Hu, Weisheng
2014-02-15
We have developed a radio-frequency local oscillator remote distribution system, which transfers a phase-stabilized 10.03 GHz signal over 100 km optical fiber. The phase noise of the remote signal caused by temperature and mechanical stress variations on the fiber is compensated by a high-precision phase-correction system, which is achieved using a single sideband modulator to transfer the phase correction from intermediate frequency to radio frequency, thus enabling accurate phase control of the 10 GHz signal. The residual phase noise of the remote 10.03 GHz signal is measured to be -70 dBc/Hz at 1 Hz offset, and long-term stability of less than 1×10⁻¹⁶ at 10,000 s averaging time is achieved. Phase error is less than ±0.03π. PMID:24562233
First Order QED Corrections to the Parity-Violating Asymmetry in Moller Scattering
Zykunov, Vladimir A.; Suarez, Juan; Tweedie, Brock A.; Kolomensky, Yury G. (UC Berkeley)
2005-08-15
We compute a full set of the first-order QED corrections to the parity-violating observables in polarized Møller scattering. We employ a covariant method of removing infrared divergences, computing corrections without introducing any unphysical parameters. When applied to the kinematics of the SLAC E158 experiment, the QED corrections reduce the parity-violating asymmetry by 4.5%. We combine our results with the previous calculations of the first-order electroweak corrections and obtain the complete O(α) prescription for relating the experimental asymmetry A_LR to the low-energy value of the weak mixing angle sin²θ_W. Our results are applicable to the recent measurement of A_LR by the SLAC E158 collaboration, as well as to future parity-violation experiments.
NASA Astrophysics Data System (ADS)
Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki
2016-03-01
Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for a quantitative study. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved the image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome. So the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and
Weak charge of the proton: loop corrections to parity-violating electron scattering
Wally Melnitchouk
2011-05-01
I review the role of two-boson exchange corrections to parity-violating elastic electron–proton scattering. Direct calculations of contributions from nucleon and Delta intermediate states show generally small, O(1–2%), effects over the range of kinematics relevant for proton strangeness form factor measurements. For the forward-angle Qweak experiment at Jefferson Lab, which aims to measure the weak charge of the proton, corrections from the γZ box diagram are computed within a dispersive approach and found to be sizable at the E ~ 1 GeV energy scale of the experiment.
Lowest order QED radiative corrections to longitudinally polarized Møller scattering
NASA Astrophysics Data System (ADS)
Ilyichev, A.; Zykunov, V.
2005-08-01
The total lowest-order electromagnetic radiative corrections to the observables in Møller scattering of longitudinally polarized electrons have been calculated. The final expressions, obtained by the covariant method of infrared-divergence cancellation, are free from any unphysical cut-off parameters. Since the calculation is carried out within the ultrarelativistic approximation, our result has a compact form that is convenient for computation. Based on these expressions, the FORTRAN code MERA has been developed. Using this code, a detailed numerical analysis performed under SLAC (E-158) and JLab kinematic conditions has shown that the radiative corrections are significant and rather sensitive to the value of the missing-mass (inelasticity) cuts.
Andrei Afanasev; Igor Akushevich; Nikolai Merenkov
2004-03-01
The electron structure function method is applied to calculate model-independent radiative corrections to an asymmetry of electron-proton scattering. Representations for both the spin-independent and spin-dependent parts of the cross section are derived. The master formulae take into account the leading corrections in all orders and the main contribution of the second-order next-to-leading ones, and have accuracy at the level of one per mille. Numerical calculations illustrate our analytical results for both elastic and deep-inelastic events.
QED radiative corrections to low-energy Møller and Bhabha scattering
NASA Astrophysics Data System (ADS)
Epstein, Charles S.; Milner, Richard G.
2016-08-01
We present a treatment of the next-to-leading-order radiative corrections to unpolarized Møller and Bhabha scattering without resorting to ultrarelativistic approximations. We extend existing soft-photon radiative corrections with new hard-photon bremsstrahlung calculations so that the effect of photon emission is taken into account for any photon energy. This formulation is intended for application in the OLYMPUS experiment and the upcoming DarkLight experiment but is applicable to a broad range of experiments at energies where QED is a sufficient description.
NASA Technical Reports Server (NTRS)
Flesia, C.; Schwendimann, P.
1992-01-01
The contribution of multiple scattering to the lidar signal depends on the optical depth tau. Therefore, a lidar analysis based on the assumption that multiple scattering can be neglected is limited to cases characterized by low values of the optical depth (tau less than or equal to 0.1), and hence excludes scattering from most clouds. Moreover, all inversion methods relating the lidar signal to number densities and particle sizes must be modified, since multiple scattering affects the direct analysis. The essential requirements for a realistic model of lidar measurements that includes multiple scattering and can be applied to practical situations are as follows. (1) What is needed is not merely a correction term or a rough approximation describing the results of a particular experiment, but a general theory of multiple scattering tying together the relevant physical parameters we seek to measure. (2) An analytical generalization of the lidar equation that can be applied in the case of a realistic aerosol is needed. A purely analytical formulation is important in order to avoid the convergence and stability problems which, in a numerical approach, arise from the large number of events that have to be taken into account in the presence of large optical depth and/or strong experimental noise.
Development of Filtered Rayleigh Scattering for Accurate Measurement of Gas Velocity
NASA Technical Reports Server (NTRS)
Miles, Richard B.; Lempert, Walter R.
1995-01-01
The overall goals of this research were to develop new diagnostic tools capable of capturing unsteady and/or time-evolving, high-speed flow phenomena. The program centers on the development of Filtered Rayleigh Scattering (FRS) for velocity, temperature, and density measurement, and on the construction of narrow-linewidth laser sources capable of producing MHz-order repetition-rate 'bursts' of high-power pulses.
NASA Astrophysics Data System (ADS)
Roberts, B. M.; Dzuba, V. A.; Flambaum, V. V.; Pospelov, M.; Stadnik, Y. V.
2016-06-01
We revisit the WIMP-type dark matter scattering on electrons that results in atomic ionization and can manifest itself in a variety of existing direct-detection experiments. Unlike WIMP-nucleon scattering, where current experiments probe typical interaction strengths much smaller than the Fermi constant, scattering on electrons requires a much stronger interaction to be detectable, which in turn requires new light force carriers. We account for such new forces explicitly, by introducing a mediator particle with scalar or vector couplings to dark matter and to electrons. We then perform state-of-the-art numerical calculations of atomic ionization relevant to the existing experiments. Our goals are to consistently take into account the atomic physics aspects of the problem (e.g., the relativistic effects, which can be quite significant) and to scan the parameter space—the dark matter mass, the mediator mass, and the effective coupling strength—to see if there is any part of the parameter space that could potentially explain the DAMA modulation signal. While we find that the modulation fraction of all events with energy deposition above 2 keV in NaI can be quite significant, reaching ~50%, the relevant parts of the parameter space are excluded by the XENON10 and XENON100 experiments.
Fujita, Masahiro; Varrone, Andrea; Kim, Kyeong Min; Watabe, Hiroshi; Zoghbi, Sami S; Seneca, Nicholas; Tipre, Dnyanesh; Seibyl, John P; Innis, Robert B; Iida, Hidehiro
2004-05-01
Prior studies with anthropomorphic phantoms and single, static in vivo brain images have demonstrated that scatter correction significantly improves the accuracy of regional quantitation of single-photon emission tomography (SPET) brain images. Since the regional distribution of activity changes following a bolus injection of a typical neuroreceptor ligand, we examined the effect of scatter correction on the compartmental modeling of serial dynamic images of striatal and extrastriatal dopamine D2 receptors using [¹²³I]epidepride. Eight healthy human subjects [age 30 ± 8 (range 22–46) years] participated in a study with a bolus injection of 373 ± 12 (354–389) MBq [¹²³I]epidepride and data acquisition over a period of 14 h. A transmission scan was obtained in each study for attenuation and scatter correction. Distribution volumes were calculated by means of compartmental nonlinear least-squares analysis using a metabolite-corrected arterial input function and brain data processed with scatter correction using the narrow-beam geometry μ (SC) and without scatter correction using the broad-beam μ (NoSC). Effects of SC were markedly different among brain regions. SC increased activities in the putamen and thalamus after 1–1.5 h, while it decreased activity during the entire experiment in the temporal cortex and cerebellum. Compared with NoSC, SC significantly increased the specific distribution volume in the putamen (58%, P=0.0001) and thalamus (23%, P=0.0297). Compared with NoSC, SC made the regional distribution of the specific distribution volume closer to that of [¹⁸F]fallypride. It is concluded that SC is required for accurate quantification of distribution volumes of receptor ligands in SPET studies. PMID:14730406
Biophotonics of skin: method for correction of deep Raman spectra distorted by elastic scattering
NASA Astrophysics Data System (ADS)
Roig, Blandine; Koenig, Anne; Perraut, François; Piot, Olivier; Gobinet, Cyril; Manfait, Michel; Dinten, Jean-Marc
2015-03-01
Confocal Raman microspectroscopy allows in-depth molecular and conformational characterization of biological tissues non-invasively. Unfortunately, spectral distortions occur due to elastic scattering. Our objective is to correct the attenuation of in-depth Raman peak intensity by accounting for this phenomenon, thus enabling quantitative diagnosis. For this purpose, we developed PDMS phantoms mimicking skin optical properties, used as tools for instrument calibration and validation of the data processing method. An optical system based on a fiber bundle had previously been developed for in vivo skin characterization with Diffuse Reflectance Spectroscopy (DRS). Used on our phantoms, this technique allows checking their optical properties: the targeted ones were retrieved. Raman microspectroscopy was performed using a commercial confocal microscope. Depth profiles were constructed from the integrated intensity of some specific PDMS Raman vibrations. Acquired on monolayer phantoms, they display a decline that increases with the scattering coefficient. Furthermore, when acquiring Raman spectra on multilayered phantoms, the signal attenuation through each single layer depends directly on that layer's own scattering property. Therefore, determining the optical properties of any biological sample, obtained with DRS for example, is crucial to properly correct Raman depth profiles. A model, inspired by S.L. Jacques's expression for Confocal Reflectance Microscopy and modified at some points, is proposed and tested to fit the depth profiles obtained on the phantoms as a function of the reduced scattering coefficient. Consequently, once the optical properties of a biological sample are known, the intensity of deep Raman spectra distorted by elastic scattering can be corrected with our model, thus permitting quantitative studies for purposes of characterization or diagnosis.
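As a rough illustration of the kind of correction this abstract describes, the sketch below undoes depth-dependent attenuation with a single-exponential model parameterized by the reduced scattering coefficient. Both the functional form and the round-trip constant are simplifying assumptions; the Jacques-style model the authors adapt is more detailed.

```python
import math

def corrected_depth_profile(measured, depths_cm, mu_s_prime):
    """Undo scattering attenuation of a Raman depth profile using a toy
    single-exponential model, I_meas(z) = I_true(z) * exp(-a * mu_s' * z),
    with a = 2 standing in for the illumination/collection round trip.
    The model form and the constant a are illustrative assumptions only."""
    a = 2.0
    return [i * math.exp(a * mu_s_prime * z)
            for i, z in zip(measured, depths_cm)]

# Demo: a profile that is truly flat, attenuated by scattering, then recovered.
depths = [0.0, 0.01, 0.02, 0.03]           # depth below surface, cm
mu = 10.0                                  # reduced scattering coefficient, 1/cm
attenuated = [math.exp(-2.0 * mu * z) for z in depths]
recovered = corrected_depth_profile(attenuated, depths, mu)
```

The demo shows why the optical properties must be known first: the correction factor at each depth is set entirely by the reduced scattering coefficient obtained, e.g., from DRS.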
NASA Astrophysics Data System (ADS)
Phillips, C. B.; Valenti, M.
2009-12-01
Jupiter's moon Europa likely possesses an ocean of liquid water beneath its icy surface, but estimates of the thickness of the surface ice shell vary from a few kilometers to tens of kilometers. Color images of Europa reveal the existence of a reddish, non-ice component associated with a variety of geological features. The composition and origin of this material is uncertain, as is its relationship to Europa's various landforms. Published analyses of Galileo Near Infrared Mapping Spectrometer (NIMS) observations indicate the presence of highly hydrated sulfate compounds. This non-ice material may also bear biosignatures or other signs of biotic material. Additional spectral information from the Galileo Solid State Imager (SSI) could further elucidate the nature of the surface deposits, particularly when combined with information from the NIMS. However, little effort has been focused on this approach because proper calibration of the color image data is challenging, requiring both skill and patience to process the data and incorporate the appropriate scattered light correction. We are currently working to properly calibrate the color SSI data. The most important and most difficult issue to address in the analysis of multispectral SSI data entails using thorough calibrations and a correction for scattered light. Early in the Galileo mission, studies of the Galileo SSI data for the moon revealed discrepancies of up to 10% in relative reflectance between images containing scattered light and images corrected for scattered light. Scattered light adds a wavelength-dependent low-intensity brightness factor to pixels across an image. For example, a large bright geological feature located just outside the field of view of an image will scatter extra light onto neighboring pixels within the field of view. Scattered light can be seen as a dim halo surrounding an image that includes a bright limb, and can also come from light scattered inside the camera by dirt, edges, and the
Two-photon exchange correction to muon-proton elastic scattering at low momentum transfer
NASA Astrophysics Data System (ADS)
Tomalak, Oleksandr; Vanderhaeghen, Marc
2016-03-01
We evaluate the two-photon exchange (TPE) correction to the muon-proton elastic scattering at small momentum transfer. Besides the elastic (nucleon) intermediate state contribution, which is calculated exactly, we account for the inelastic intermediate states by expressing the TPE process approximately through the forward doubly virtual Compton scattering. The input in our evaluation is given by the unpolarized proton structure functions and by one subtraction function. For the latter, we provide an explicit evaluation based on a Regge fit of high-energy proton structure function data. It is found that, for the kinematics of the forthcoming muon-proton elastic scattering data of the MUSE experiment, the elastic TPE contribution dominates, and the size of the inelastic TPE contributions is within the anticipated error of the forthcoming data.
Effective-range corrections to three-body recombination for atoms with large scattering length
Hammer, H.-W.; Laehde, Timo A.; Platter, L.
2007-03-15
Few-body systems with large scattering length a have universal properties that do not depend on the details of their interactions at short distances. The rate constant for three-body recombination of bosonic atoms of mass m into a shallow dimer scales as ℏa⁴/m times a log-periodic function of the scattering length. We calculate the leading and subleading corrections to the rate constant, which are due to the effective range of the atoms, and study the correlation between the rate constant and the atom-dimer scattering length. Our results are applied to ⁴He atoms as a test case.
Truncation correction for VOI C-arm CT using scattered radiation
NASA Astrophysics Data System (ADS)
Bier, Bastian; Maier, Andreas; Hofmann, Hannes G.; Schwemmer, Chris; Xia, Yan; Struffert, Tobias; Hornegger, Joachim
2013-03-01
In C-arm computed tomography, patient dose reduction by volume-of-interest (VOI) imaging is of increasing interest for many clinical applications. A remaining limitation of VOI imaging is the truncation artifact when reconstructing a 3D volume. It can be either cupping towards the boundaries of the field of view (FOV) or an incorrect offset in the Hounsfield values of the reconstructed voxels. In this paper, we present a new method for the correction of truncation artifacts in a collimated scan. When axial or lateral collimation is applied, scattered radiation still reaches the detector and is recorded outside of the FOV. If the full area of the detector is read out, we can use this scattered signal to estimate the truncated part of the object. We apply three processing steps: detection of the collimator edge, adjustment of the area outside the FOV, and interpolation of the collimator edge. Compared to heuristic truncation correction methods, we were able to reconstruct high-contrast structures such as bones outside of the FOV. Inside the FOV we achieved reconstruction results similar to those of water cylinder truncation correction. These preliminary results indicate that scattered radiation outside the FOV can be used to improve image quality, and further research in this direction seems beneficial.
Compartment modeling of dynamic brain PET—The impact of scatter corrections on parameter errors
Häggström, Ida Karlsson, Mikael; Larsson, Anne; Schmidtlein, C. Ross
2014-11-01
Purpose: The aim of this study was to investigate the effect of scatter and its correction on kinetic parameters in dynamic brain positron emission tomography (PET) tumor imaging. The 2-tissue compartment model was used, and two different reconstruction methods and two scatter correction (SC) schemes were investigated. Methods: The GATE Monte Carlo (MC) software was used to perform 2 × 15 full PET scan simulations of a voxelized head phantom with inserted tumor regions. The two sets of kinetic parameters of all tissues were chosen to represent the 2-tissue compartment model for the tracer 3′-deoxy-3′-(¹⁸F)fluorothymidine (FLT), and were denoted FLT1 and FLT2. PET data were reconstructed with both 3D filtered back-projection with reprojection (3DRP) and 3D ordered-subset expectation maximization (OSEM). Images including true coincidences with attenuation correction (AC), and true+scattered coincidences with AC and with and without one of two applied SC schemes, were reconstructed. Kinetic parameters were estimated by weighted nonlinear least-squares fitting of image-derived time–activity curves. Calculated parameters were compared to the true input to the MC simulations. Results: The relative parameter biases for scatter-eliminated data were 15%, 16%, 4%, 30%, 9%, and 7% (FLT1) and 13%, 6%, 1%, 46%, 12%, and 8% (FLT2) for K1, k2, k3, k4, Va, and Ki, respectively. As expected, SC was essential for most parameters, since omitting it increased biases by 10 percentage points on average. SC was not found necessary for the estimation of Ki and k3, however. There was no significant difference in parameter biases between the two investigated SC schemes or from parameter biases from scatter-eliminated PET data. Furthermore, neither 3DRP nor OSEM yielded the smallest parameter biases consistently, although there was a slight favor for 3DRP, which produced less biased k3 and Ki
NASA Astrophysics Data System (ADS)
Yang, Kai; Burkett, George, Jr.; Boone, John M.
2012-03-01
X-ray scatter is a common cause of image artifacts for cone-beam CT systems due to the expanded field of view, and it degrades the quantitative accuracy of measured Hounsfield Units (HU). Due to the strong dependency of scatter on the object being scanned, it is crucial to measure the scatter signal for each object. We propose to use a beam pass array (BPA) composed of parallel holes within a tungsten plate to measure scatter for a dedicated breast CT system. A complete study of the performance of the BPA was conducted. The goal of this study was to explore the feasibility of measuring and compensating for the scatter signal for each individual object. Different clinical study schemes were investigated, including a full rotation scan with the BPA and discrete projections acquired with the BPA followed by interpolation for the full rotation. Different-sized cylindrical phantoms and a breast-shaped polyethylene phantom were used to test the robustness of the proposed method. Physically measured scatter signals were converted into scatter-to-primary ratios (SPRs) at discrete locations through the projection image. A complete noise-free 2D SPR was generated from these discrete measurements. SPR results were compared to Monte Carlo simulation results, and scatter-corrected CT images were quantitatively evaluated for "cupping" artifact. With the proposed method, a reduction of up to 47 HU of "cupping" was demonstrated. In conclusion, the proposed BPA method demonstrated effective and accurate object-specific scatter correction, with the main advantage of dose sparing compared to beam stop array (BSA) approaches.
A model for the accurate computation of the lateral scattering of protons in water.
Bellinzona, E V; Ciocca, M; Embriaco, A; Ferrari, A; Fontana, A; Mairani, A; Parodi, K; Rotondi, A; Sala, P; Tessonnier, T
2016-02-21
A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted on MC data calculated with FLUKA. The model, after the convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy of the MC codes based on Molière theory, with a much shorter computing time. PMID:26808380
More accurate X-ray scattering data of deeply supercooled bulk liquid water
Neuefeind, Joerg C; Benmore, Chris J; Weber, Richard; Paschek, Dietmar
2011-01-01
Deeply supercooled water droplets held container-less in an acoustic levitator are investigated with high-energy X-ray scattering. The temperature dependence of the X-ray structure function is found to be non-linear. Comparison with two popular computer models reveals that structural changes are predicted to be too abrupt by the TIP5P model, while the rate of change predicted by TIP4P is in much better agreement with experiment. The abrupt structural changes predicted by the TIP5P model to occur in the temperature range between 260 and 240 K, as water approaches the homogeneous nucleation limit, are unrealistic. Both models underestimate the distance between neighbouring oxygen atoms and overestimate the sharpness of the OO distance distribution, indicating that the strength of the H-bond is overestimated in these models.
X-ray scatter correction method for dedicated breast computed tomography
Sechopoulos, Ioannis
2012-05-15
Purpose: To improve image quality and accuracy in dedicated breast computed tomography (BCT) by removing the x-ray scatter signal included in the BCT projections. Methods: The previously characterized magnitude and distribution of x-ray scatter in BCT results in both cupping artifacts and reduction of contrast and accuracy in the reconstructions. In this study, an image processing method is proposed that estimates and subtracts the low-frequency x-ray scatter signal included in each BCT projection postacquisition and prereconstruction. The estimation of this signal is performed using simple additional hardware, one additional BCT projection acquisition with negligible radiation dose, and simple image processing software algorithms. The high frequency quantum noise due to the scatter signal is reduced using a noise filter postreconstruction. The dosimetric consequences and validity of the assumptions of this algorithm were determined using Monte Carlo simulations. The feasibility of this method was determined by imaging a breast phantom on a BCT clinical prototype and comparing the corrected reconstructions to the unprocessed reconstructions and to reconstructions obtained from fan-beam acquisitions as a reference standard. One-dimensional profiles of the reconstructions and objective image quality metrics were used to determine the impact of the algorithm. Results: The proposed additional acquisition results in negligible additional radiation dose to the imaged breast ({approx}0.4% of the standard BCT acquisition). The processed phantom reconstruction showed substantially reduced cupping artifacts, increased contrast between adipose and glandular tissue equivalents, higher voxel value accuracy, and no discernible blurring of high frequency features. Conclusions: The proposed scatter correction method for dedicated breast CT is feasible and can result in highly improved image quality. Further optimization and testing, especially with patient images, is necessary to
Peng, Xiangda; Zhang, Yuebin; Chu, Huiying; Li, Yan; Zhang, Dinglin; Cao, Liaoran; Li, Guohui
2016-06-14
Classical molecular dynamics (MD) simulation of membrane proteins faces significant challenges in accurately reproducing and predicting experimental observables such as ion conductance and permeability, due to its inability to precisely describe the electronic interactions in heterogeneous systems. In this work, the free energy profiles of K(+) and Na(+) permeating through the gramicidin A channel are characterized by using the AMOEBA polarizable force field with a total sampling time of 1 μs. Our results indicate that by explicitly introducing the multipole terms and polarization into the electrostatic potentials, the permeation free energy barrier of K(+) through the gA channel is considerably reduced compared to the overestimated results obtained from the fixed-charge model. Moreover, the estimated maximum conductance, without any corrections, for both K(+) and Na(+) passing through the gA channel is much closer to the experimental results than that of any classical MD simulation, demonstrating the power of AMOEBA in investigating membrane proteins. PMID:27171823
Bubin, Sergiy; Stanke, Monika; Adamowicz, Ludwik
2011-08-21
In this work we report very accurate variational calculations of the complete pure vibrational spectrum of the D(2) molecule performed within the framework where the Born-Oppenheimer (BO) approximation is not assumed. After the elimination of the center-of-mass motion, D(2) becomes a three-particle problem in this framework. As the considered states correspond to the zero total angular momentum, their wave functions are expanded in terms of all-particle, one-center, spherically symmetric explicitly correlated Gaussian functions multiplied by even non-negative powers of the internuclear distance. The nonrelativistic energies of the states obtained in the non-BO calculations are corrected for the relativistic effects of the order of α(2) (where α = 1/c is the fine structure constant) calculated as expectation values of the operators representing these effects. PMID:21861559
Correction of radiation absorption on biological samples using Rayleigh to Compton scattering ratio
NASA Astrophysics Data System (ADS)
Pereira, Marcelo O.; Conti, Claudio de Carvalho; dos Anjos, Marcelino J.; Lopes, Ricardo T.
2012-06-01
The aim of this work was to develop a method to correct for radiation absorption (the mass attenuation coefficient curve) at low energy (E < 30 keV) in a biological matrix, based on the Rayleigh to Compton scattering ratio and the effective atomic number. For calibration, scattering measurements were performed on standard samples using radiation produced by a 241Am gamma-ray source (59.54 keV); the method was also applied to certified biological samples of milk powder, hay powder, and bovine liver (NIST 1557B). In addition, six methods of effective atomic number determination described in the literature were used to determine the Rayleigh to Compton scattering ratio (R/C) and, from it, the mass attenuation coefficient. The results obtained with the proposed method were compared with those obtained using the transmission method. The experimental results were in good agreement with the transmission values, suggesting that the radiation absorption correction method presented in this paper is adequate for biological samples.
Wide angle Compton scattering on the proton: study of power suppressed corrections
NASA Astrophysics Data System (ADS)
Kivel, N.; Vanderhaeghen, M.
2015-10-01
We study the wide angle Compton scattering process on a proton within the soft-collinear effective theory (SCET) framework. The main purpose of this work is to estimate the effect of certain power suppressed corrections. We consider all possible kinematical power corrections and also include the subleading amplitudes describing scattering with nucleon helicity flip. Under certain assumptions we present a leading-order factorization formula for these amplitudes which includes the hard- and soft-spectator contributions. We apply the formalism and perform a phenomenological analysis of the cross section and asymmetries in wide angle Compton scattering on a proton. We assume that in the relevant kinematical region, where -t, -u > 2.5 GeV², the dominant contribution is provided by the soft-spectator mechanism. The hard coefficient functions of the corresponding SCET operators are taken in the leading-order approximation. The analysis of existing cross section data shows that the contribution of the helicity-flip amplitudes to this observable is quite small and comparable with other expected theoretical uncertainties. We also show predictions for double polarization observables for which experimental information exists.
Sivanesan, Arumugam; Adamkiewicz, Witold; Kalaivani, Govindasamy; Kamińska, Agnieszka; Waluk, Jacek; Hołyst, Robert; Izake, Emad L
2015-01-21
Correction for 'Towards improved precision in the quantification of surface-enhanced Raman scattering (SERS) enhancement factors: a renewed approach' by Arumugam Sivanesan et al., Analyst, 2015, DOI:10.1039/c4an01778a PMID:25453040
Implementation of an Analytical Raman Scattering Correction for Satellite Ocean-Color Processing
NASA Technical Reports Server (NTRS)
McKinna, Lachlan I. W.; Werdell, P. Jeremy; Proctor, Christopher W.
2016-01-01
Raman scattering of photons by seawater molecules is an inelastic scattering process. This effect can contribute significantly to the water-leaving radiance signal observed by space-borne ocean-color spectroradiometers. If not accounted for during ocean-color processing, Raman scattering can cause biases in derived inherent optical properties (IOPs). Here we describe a Raman scattering correction (RSC) algorithm that has been integrated within NASA's standard ocean-color processing software. We tested the RSC with NASA's Generalized Inherent Optical Properties algorithm (GIOP). A comparison between derived IOPs and in situ data revealed that the magnitude of the derived backscattering coefficient and the phytoplankton absorption coefficient were reduced when the RSC was applied, whilst the absorption coefficient of colored dissolved and detrital matter remained unchanged. Importantly, our results show that the RSC did not degrade the retrieval skill of the GIOP. In addition, a time-series study of oligotrophic waters near Bermuda showed that the RSC did not introduce unwanted temporal trends or artifacts into derived IOPs.
γZ corrections to forward-angle parity-violating ep scattering
Alex Sibirtsev; Blunden, Peter G.; Melnitchouk, Wally; Thomas, Anthony W.
2010-07-30
We use dispersion relations to evaluate the γZ box contribution to parity-violating electron scattering in the forward limit, taking into account constraints from recent JLab data on electroproduction in the resonance region as well as high energy data from HERA. The correction to the asymmetry is found to be 1.2 ± 0.2% at the kinematics of the JLab Q_weak experiment, which is well within the limits required to achieve a 4% measurement of the weak charge of the proton.
Hadron mass corrections in semi-inclusive deep-inelastic scattering
Guerrero Teran, Juan Vicente; Ethier, James J.; Accardi, Alberto; Casper, Steven W.; Melnitchouk, Wally
2015-09-24
The spin-dependent cross sections for semi-inclusive lepton-nucleon scattering are derived in the framework of collinear factorization, including the effects of the masses of the target and the produced hadron at finite Q². At leading order the cross sections factorize into products of parton distribution and fragmentation functions evaluated in terms of new, mass-dependent scaling variables. Furthermore, the size of the hadron mass corrections is estimated at kinematics relevant to current and future experiments, and the implications for the extraction of parton distributions from semi-inclusive measurements are discussed.
γZ corrections to forward-angle parity-violating ep scattering
Sibirtsev, A.; Blunden, P. G.; Melnitchouk, W.; Thomas, A. W.
2010-07-01
We use dispersion relations to evaluate the γZ box contribution to parity-violating electron scattering in the forward limit arising from the axial-vector coupling at the electron vertex. The calculation makes full use of the critical constraints from recent JLab data on electroproduction in the resonance region as well as high-energy data from HERA. At the kinematics of the Q_weak experiment, this gives a correction of 0.0047(+0.0011/−0.0004) to the standard model value 0.0713(8) of the proton weak charge. While the magnitude of the correction is highly significant, the uncertainty is within the anticipated experimental uncertainty of ±0.003.
Gakh, G. I.; Konchatnij, M. I.; Merenkov, N. P.
2012-08-15
The model-independent QED radiative corrections to polarization observables in elastic scattering of unpolarized and longitudinally polarized electron beams by a deuteron target are calculated in leptonic variables. The experimental setup when the deuteron target is arbitrarily polarized is considered and the procedure for applying the derived results to the vector or tensor polarization of the recoil deuteron is discussed. The calculation is based on taking all essential Feynman diagrams into account, which results in the form of the Drell-Yan representation for the cross section, and the use of the covariant parameterization of the deuteron polarization state. Numerical estimates of the radiative corrections are given in the case where event selection allows undetected particles (photons and electron-positron pairs) and the restriction on the lost invariant mass is used.
NASA Astrophysics Data System (ADS)
Kim, Ye-seul; Park, Hye-Suk; Kim, Hee-Joung; Choi, Young-Wook; Choi, Jae-Gu
2014-12-01
Digital breast tomosynthesis (DBT) is a technique that was developed to overcome the limitations of conventional digital mammography by reconstructing slices through the breast from projections acquired at different angles. In developing and optimizing DBT, x-ray scatter reduction remains a significant challenge due to projection geometry and radiation dose limitations. The most common approach to scatter reduction is a beam-stop-array (BSA) algorithm; however, this method raises concerns regarding the additional exposure involved in acquiring the scatter distribution. The compressed breast is roughly symmetric, and the scatter profiles from projections acquired at axially opposite angles are similar to mirror images. The purpose of this study was to apply the BSA algorithm with only two scans with a beam stop array, which estimates the scatter distribution with minimal additional exposure. The results of the scatter correction with angular interpolation were comparable to those of the scatter correction with all scatter distributions measured at each angle. The exposure increase was less than 13%. This study demonstrated the influence of the scatter correction obtained by using the BSA algorithm with minimal exposure, which indicates its potential for practical applications.
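The two-scan idea above, estimating per-angle scatter profiles from only the two extreme BSA acquisitions by exploiting the approximate mirror symmetry of the compressed breast, can be sketched as a simple angular interpolation. The function names, the linear weighting, and the toy profiles are assumptions for illustration, not the authors' code.

```python
import numpy as np

def scatter_profiles_from_two_scans(angles, s_minus, s_plus):
    """Estimate scatter profiles at every projection angle from only two
    BSA scans taken at the extreme angles. Because the profile at angle -a
    approximately mirrors the one at +a, intermediate angles are filled in
    by linear interpolation between the two measured profiles."""
    a_min, a_max = angles[0], angles[-1]
    profiles = {}
    for a in angles:
        w = (a - a_min) / (a_max - a_min)   # 0 at a_min, 1 at a_max
        profiles[a] = (1 - w) * s_minus + w * s_plus
    return profiles

angles = [-20, -10, 0, 10, 20]
s_minus = np.array([1.0, 2.0, 3.0])      # toy profile at -20 degrees
s_plus = s_minus[::-1]                    # mirror-image profile at +20
p = scatter_profiles_from_two_scans(angles, s_minus, s_plus)
# At 0 degrees the interpolant is the symmetric average [2, 2, 2].
```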
An eigenvalue correction due to scattering by a rough wall of an acoustic waveguide.
Krynkin, Anton; Horoshenkov, Kirill V; Tait, Simon J
2013-08-01
In this paper a derivation of the attenuation factor in a waveguide with stochastic walls is presented. The perturbation method and Fourier analysis are employed to derive asymptotically consistent boundary-value problems at each asymptotic order. The derived approximation predicts the attenuation of the propagating mode in a rough waveguide through a correction to the eigenvalue corresponding to smooth walls. The proposed approach can be used to derive results that are consistent with those obtained by Bass et al. [IEEE Trans. Antennas Propag. 22, 278-288 (1974)]. The novelty of the method is that it does not involve the integral Dyson-type equation and, as a result, the large number of statistical moments included in the equation in the form of the mass operator of the volume scattering theory. The derived eigenvalue correction is described by the correlation function of the randomly rough surface. The averaged solution in the plane wave regime is approximated by the exponential function dependent on the derived eigenvalue correction. The approximations are compared with numerical results obtained using the finite element method (FEM). An approach to retrieve the correct deviation in roughness height and correlation length from multiple numerical realizations of the stochastic surface is proposed to account for the oversampling of the rough surface occurring in the FEM meshing procedure. PMID:23927093
A scatter correction method for contrast-enhanced dual-energy digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Lu, Yihuan; Peng, Boyu; Lau, Beverly A.; Hu, Yue-Houng; Scaduto, David A.; Zhao, Wei; Gindi, Gene
2015-08-01
Contrast-enhanced dual energy digital breast tomosynthesis (CE-DE-DBT) is designed to image iodinated masses while suppressing breast anatomical background. Scatter is a problem, especially for high energy acquisition, in that it causes severe cupping artifact and iodine quantitation errors. We propose a patient specific scatter correction (SC) algorithm for CE-DE-DBT. The empirical algorithm works by interpolating scatter data outside the breast shadow into an estimate within the breast shadow. The interpolated estimate is further improved by operations that use an easily obtainable (from phantoms) table of scatter-to-primary-ratios (SPR)—a single SPR value for each breast thickness and acquisition angle. We validated our SC algorithm for two breast emulating phantoms by comparing SPR from our SC algorithm to that measured using a beam-passing pinhole array plate. The error in our SC computed SPR, averaged over acquisition angle and image location, was about 5%, with slightly worse errors for thicker phantoms. The SC projection data, reconstructed using OS-SART, showed a large degree of decupping. We also observed that SC removed the dependence of iodine quantitation on phantom thickness. We applied the SC algorithm to a CE-DE-mammographic patient image with a biopsy confirmed tumor at the breast periphery. In the image without SC, the contrast enhanced tumor was masked by the cupping artifact. With our SC, the tumor was easily visible. An interpolation-based SC was proposed by (Siewerdsen et al 2006 Med. Phys. 33 187-97) for cone-beam CT (CBCT), but our algorithm and application differ in several respects. Other relevant SC techniques include Monte-Carlo and convolution-based methods for CBCT, storage of a precomputed library of scatter maps for DBT, and patient acquisition with a beam-passing pinhole array for breast CT. Our SC algorithm can be accomplished in clinically acceptable times, requires no additional imaging hardware or extra patient dose and is
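The interpolate-then-rescale idea in this abstract, estimating scatter inside the breast shadow from measurements outside it and constraining the result with a tabulated scatter-to-primary ratio (SPR), can be sketched for a single detector row. The rescaling rule, names, and toy numbers below are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def scatter_correct_row(row, shadow, spr):
    """One detector row: linearly interpolate the signal measured outside
    the breast shadow across the shadow as a scatter estimate, rescale the
    in-shadow estimate so scatter/(total - scatter) matches a tabulated
    SPR on average, then subtract."""
    cols = np.arange(row.size)
    scatter = np.interp(cols, cols[~shadow], row[~shadow])
    t = row[shadow].mean()
    s_target = t * spr / (1.0 + spr)     # scatter level implied by the SPR
    scatter[shadow] *= s_target / scatter[shadow].mean()
    return row - scatter

row = np.array([5.0, 5.0, 8.0, 8.0, 8.0, 5.0, 5.0])   # toy projection row
shadow = np.array([False, False, True, True, True, False, False])
primary = scatter_correct_row(row, shadow, spr=1.0)
# With SPR = 1, half the in-shadow signal is attributed to scatter,
# leaving a primary of 4.0 inside the shadow and ~0 outside.
```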
2013-01-01
Background Population stratification is a systematic difference in allele frequencies between subpopulations. This can lead to spurious association findings in the case–control genome wide association studies (GWASs) used to identify single nucleotide polymorphisms (SNPs) associated with disease-linked phenotypes. Methods such as self-declared ancestry, ancestry informative markers, genomic control, structured association, and principal component analysis are used to assess and correct population stratification, but each has limitations. We provide an alternative technique to address population stratification. Results We propose a novel machine learning method, ETHNOPRED, which uses the genotype and ethnicity data from the HapMap project to learn ensembles of disjoint decision trees capable of accurately predicting an individual's continental and sub-continental ancestry. To predict an individual's continental ancestry, ETHNOPRED produced an ensemble of 3 decision trees involving a total of 10 SNPs, with 10-fold cross-validation accuracy of 100% using the HapMap II dataset. We extended this model to involve 29 disjoint decision trees over 149 SNPs, and showed that this ensemble has an accuracy of ≥ 99.9%, even if some of those 149 SNP values were missing. On an independent dataset, predominantly of Caucasian origin, our continental classifier showed 96.8% accuracy and improved genomic control's λ from 1.22 to 1.11. We next used the HapMap III dataset to learn classifiers to distinguish European subpopulations (North-Western vs. Southern), East Asian subpopulations (Chinese vs. Japanese), African subpopulations (Eastern vs. Western), North American subpopulations (European vs. Chinese vs. African vs. Mexican vs. Indian), and Kenyan subpopulations (Luhya vs. Maasai). In these cases, ETHNOPRED produced ensembles of 3, 39, 21, 11, and 25 disjoint decision trees, respectively, involving 31, 502, 526, 242, and 271 SNPs, with 10-fold cross-validation accuracy of
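The key robustness property described above, that an ensemble of disjoint decision trees can simply skip trees whose SNPs are missing and still vote accurately, can be sketched with toy one-SNP stumps. The class, thresholds, and SNP names are hypothetical; the real ETHNOPRED trees are learned from HapMap genotypes.

```python
from collections import Counter

class StumpTree:
    """Toy one-SNP decision stump standing in for one disjoint tree."""
    def __init__(self, snp, threshold, above, below):
        self.snps = [snp]
        self.snp, self.threshold = snp, threshold
        self.above, self.below = above, below

    def __call__(self, genotypes):
        g = genotypes[self.snp]          # minor-allele count: 0, 1, or 2
        return self.above if g >= self.threshold else self.below

def ensemble_predict(trees, genotypes):
    """Majority vote over the ensemble. Trees whose SNPs are absent from
    the genotype call set are skipped, which is how the ensemble tolerates
    missing SNP values."""
    votes = [t(genotypes) for t in trees
             if all(s in genotypes for s in t.snps)]
    return Counter(votes).most_common(1)[0][0]

trees = [StumpTree("rs1", 1, "EUR", "EAS"),
         StumpTree("rs2", 1, "EUR", "EAS"),
         StumpTree("rs3", 1, "EUR", "EAS")]
pred = ensemble_predict(trees, {"rs1": 2, "rs2": 2})   # rs3 call missing
# Two trees vote EUR; the tree that needs rs3 is skipped.
```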
Park, Y; Winey, B; Sharp, G
2014-06-01
Purpose: To demonstrate the feasibility of proton dose calculation on scatter-corrected CBCT images for the purpose of adaptive proton therapy. Methods: Two CBCT image sets were acquired from a prostate cancer patient and a thorax phantom using the on-board imaging system of an Elekta Infinity linear accelerator. 2-D scatter maps were estimated using a previously introduced CT-based technique and were subtracted from each raw projection image. A CBCT image set was then reconstructed with an open source reconstruction toolkit (RTK). Conversion from CBCT number to HU was performed by soft-tissue-based shifting with reference to the plan CT. Passively scattered proton plans were simulated on the plan CT and on the corrected/uncorrected CBCT images using the XiO treatment planning system. For quantitative evaluation, water equivalent path length (WEPL) was compared across those treatment plans. Results: The scatter correction method significantly improved image quality and HU accuracy in the prostate case, where large scatter artifacts were obvious. However, the correction technique showed limited effect on the thorax case, which was associated with fewer scatter artifacts. Mean absolute WEPL errors from the plans with the uncorrected and corrected images were 1.3 mm and 5.1 mm in the thorax case and 13.5 mm and 3.1 mm in the prostate case. The prostate plan dose distribution of the corrected image showed better agreement with the reference than that of the uncorrected image. Conclusion: A priori CT-based CBCT scatter correction can reduce proton dose calculation error when large scatter artifacts are involved. If scatter artifacts are low, an uncorrected CBCT image is also promising for proton dose calculation when it is calibrated with soft-tissue-based shifting.
NASA Astrophysics Data System (ADS)
Ramamurthy, Senthil; D'Orsi, Carl J.; Sechopoulos, Ioannis
2016-02-01
A previously proposed x-ray scatter correction method for dedicated breast computed tomography was further developed and implemented so as to allow for initial patient testing. The method involves the acquisition of a complete second set of breast CT projections covering 360° with a perforated tungsten plate in the path of the x-ray beam. To make patient testing feasible, a wirelessly controlled electronic positioner for the tungsten plate was designed and added to a breast CT system. Other improvements to the algorithm were implemented, including automated exclusion of non-valid primary estimate points and the use of a different approximation method to estimate the full scatter signal. To evaluate the effectiveness of the algorithm, the resulting image quality was assessed with a breast phantom and with nine patient images. The improvements in the algorithm avoided the introduction of artifacts, especially at object borders, which had been an issue in some cases with the previous implementation. Both contrast, in terms of signal difference, and signal difference-to-noise ratio were improved with the proposed method, as opposed to the correction algorithm incorporated in the system, which does not recover contrast. Patient image evaluation also showed enhanced contrast, better cupping correction, and more consistent voxel values for the different tissues. The algorithm also reduces artifacts present in reconstructions of non-regularly shaped breasts. With the implemented hardware and software improvements, the proposed method can be reliably used during patient breast CT imaging, resulting in improved image quality, no introduction of artifacts, and in some cases reduction of artifacts already present. The impact of the algorithm on actual clinical performance for detection, diagnosis, and other clinical tasks in breast imaging remains to be evaluated.
TH-A-18C-04: Ultrafast Cone-Beam CT Scatter Correction with GPU-Based Monte Carlo Simulation
Xu, Y; Bai, T; Yan, H; Ouyang, L; Wang, J; Pompos, A; Jiang, S; Jia, X; Zhou, L
2014-06-15
Purpose: Scatter artifacts severely degrade the image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to finish the whole process, including both scatter correction and reconstruction, within 30 seconds automatically. Methods: The method consists of six steps: 1) FDK reconstruction using raw projection data; 2) rigid registration of the planning CT to the FDK results; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using a GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in simulation to further reduce computation time. A novel denoising algorithm is used to eliminate MC scatter noise caused by low photon numbers. The method is validated on head-and-neck cases with simulated and clinical data. Results: We have studied the impacts of photon histories and volume down-sampling factors on the accuracy of scatter estimation. A Fourier analysis showed that scatter images calculated at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated case with a resolution of 512×512×100, we simulated 10M photons per angle. The total computation time is 23.77 seconds on an Nvidia GTX Titan GPU. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Similar results were found for a real patient case. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. The whole process of scatter correction and reconstruction is accomplished within 30 seconds. This study is supported in part by NIH (1R01CA154747-01), The Core Technology Research
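Step 4 of the pipeline above, spreading MC scatter maps computed at a few sparse gantry angles to every projection angle, can be sketched as per-pixel linear interpolation over angle. All names and the toy maps are illustrative assumptions, not the authors' GPU code.

```python
import numpy as np

def interpolate_scatter_angles(sparse_angles, sparse_scatter, all_angles):
    """Interpolate scatter maps computed by Monte Carlo at sparse gantry
    angles to every projection angle, pixel by pixel over the angle axis."""
    sparse_scatter = np.asarray(sparse_scatter)       # (n_sparse, H, W)
    n, h, w = sparse_scatter.shape
    flat = sparse_scatter.reshape(n, -1)
    out = np.empty((len(all_angles), h * w))
    for p in range(h * w):
        out[:, p] = np.interp(all_angles, sparse_angles, flat[:, p])
    return out.reshape(len(all_angles), h, w)

sparse_angles = [0.0, 180.0]
sparse_scatter = [np.zeros((2, 2)), np.full((2, 2), 10.0)]  # toy maps
est = interpolate_scatter_angles(sparse_angles, sparse_scatter,
                                 [0.0, 90.0, 180.0])
# The midway angle gets the per-pixel average: 5.0 everywhere at 90 degrees.
```

The abstract's Fourier-analysis result (31 sparse angles suffice for <0.1% error) is what justifies this kind of angular interpolation in practice.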
Koesters, Thomas; Friedman, Kent P.; Fenchel, Matthias; Zhan, Yiqiang; Hermosillo, Gerardo; Babb, James; Jelescu, Ileana O.; Faul, David; Boada, Fernando E.; Shepherd, Timothy M.
2016-01-01
Simultaneous PET/MR of the brain is a promising new technology for characterizing patients with suspected cognitive impairment or epilepsy. Unlike CT, though, MR signal intensities do not provide a direct correlate for PET photon attenuation correction (AC), and inaccurate radiotracer standard uptake value (SUV) estimation could limit future PET/MR clinical applications. We tested a novel AC method that supplements standard Dixon-based tissue segmentation with a superimposed model-based bone compartment. Methods: We directly compared SUV estimation for MR-based AC methods to reference CT AC in 16 patients undergoing same-day, single 18FDG dose PET/CT and PET/MR for suspected neurodegeneration. Three Dixon-based MR AC methods were compared to CT: standard Dixon 4-compartment segmentation alone, Dixon with a superimposed model-based bone compartment, and Dixon with a superimposed bone compartment and linear attenuation correction optimized specifically for brain tissue. The brain was segmented using a 3D T1-weighted volumetric MR sequence and SUV estimations were compared to CT AC for the whole image, the whole brain, and 91 FreeSurfer-based regions of interest. Results: Modifying the linear AC value specifically for brain and superimposing a model-based bone compartment reduced the whole-brain SUV estimation bias of Dixon-based PET/MR AC by 95% compared to reference CT AC (P < 0.05), resulting in a residual −0.3% whole-brain mean SUV bias. Further, brain regional analysis demonstrated only 3 frontal lobe regions with SUV estimation bias of 5% or greater (P < 0.05). These biases appeared to correlate with high individual variability in frontal bone thickness and pneumatization. Conclusion: Bone compartment and linear AC modifications result in a highly accurate MR AC method in subjects with suspected neurodegeneration. This prototype MR AC solution appears equivalent to other recently proposed solutions, and does not require additional MR sequences and scan time. These
NASA Astrophysics Data System (ADS)
Chen, J.; Zebker, H. A.; Knight, R. J.
2015-12-01
InSAR is commonly used to measure surface deformation between different radar passes at cm-scale accuracy and m-scale resolution. However, InSAR measurements are often decorrelated by vegetation growth, which greatly limits high-quality InSAR data coverage. Here we present an algorithm for retrieving InSAR deformation measurements over areas with significant vegetation decorrelation through adaptive interpolation between persistent scatterer (PS) pixels: points at which surface scattering properties change little over time, so that decorrelation artifacts are minimal. The interpolation filter restores phase continuity in space and greatly reduces errors in phase unwrapping. We apply this algorithm to process L-band ALOS interferograms acquired over the San Luis Valley, Colorado, and the Tulare Basin, California. In both areas, groundwater extraction for irrigation results in land deformation that can be detected using InSAR. We show that the PS-based algorithm reduces the artifacts from vegetation decorrelation while preserving the deformation signature. The spatial sampling resolution achieved over agricultural fields is on the order of hundreds of meters, usually sufficient for groundwater studies. The improved InSAR data further allow us to reconstruct the SBAS ground deformation time series and to transform the measured deformation to head levels using the skeletal storage coefficient and time delay constant inferred from a joint InSAR-well data analysis. The resulting InSAR-head and well-head measurements in the San Luis Valley show good agreement with primary confined aquifer pumping activities. This case study demonstrates that high-quality InSAR deformation data can be obtained over vegetation-decorrelated regions if processed correctly.
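The core of the approach above, restoring phase continuity by interpolating between PS pixels across decorrelated areas, can be sketched in one dimension. This row-wise linear interpolant is a deliberately simplified stand-in for the adaptive 2-D filter the abstract describes; the arrays are toy values.

```python
import numpy as np

def interpolate_phase_from_ps(phase, ps_mask):
    """Replace decorrelated pixels with values interpolated between
    persistent-scatterer (PS) pixels along each row, restoring spatial
    phase continuity before unwrapping."""
    out = phase.astype(float).copy()
    cols = np.arange(phase.shape[1])
    for r in range(phase.shape[0]):
        good = ps_mask[r]
        if good.any():
            out[r] = np.interp(cols, cols[good], phase[r, good])
    return out

phase = np.array([[0.0, 9.9, 1.0]])   # middle pixel is decorrelated noise
ps = np.array([[True, False, True]])
filled = interpolate_phase_from_ps(phase, ps)
# The noisy middle value is replaced by the PS-to-PS interpolant, 0.5.
```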
Noncommutative correction to Aharonov-Bohm scattering: A field theory approach
Anacleto, M.A.; Gomes, M.; Silva, A.J. da; Spehler, D.
2004-10-15
We study a noncommutative nonrelativistic theory in 2+1 dimensions of a scalar field coupled to the Chern-Simons field. In the commutative situation this model has been used to simulate the Aharonov-Bohm effect in the field theory context. We verified that, contrary to the commutative result, the inclusion of a quartic self-interaction of the scalar field is not necessary to secure the ultraviolet renormalizability of the model. However, to obtain a smooth commutative limit the presence of a quartic gauge invariant self-interaction is required. For small noncommutativity we fix the corrections to the Aharonov-Bohm scattering and prove that up to one loop the model is free from dangerous infrared/ultraviolet divergences.
Self-interaction correction in multiple scattering theory: application to transition metal oxides
Daene, Markus W; Lueders, Martin; Ernst, Arthur; Koedderitzsch, Diemo; Temmerman, Walter M; Szotek, Zdzislawa; Hergert, Wolfram
2009-01-01
We apply to transition metal monoxides the self-interaction corrected (SIC) local spin density (LSD) approximation, implemented locally in the multiple scattering theory within the Korringa-Kohn-Rostoker (KKR) band structure method. The calculated electronic structure and in particular magnetic moments and energy gaps are discussed in reference to the earlier SIC results obtained within the LMTO-ASA band structure method, involving transformations between Bloch and Wannier representations to solve the eigenvalue problem and calculate the SIC charge and potential. Since the KKR can be easily extended to treat disordered alloys, by invoking the coherent potential approximation (CPA), in this paper we compare the CPA approach and supercell calculations to study the electronic structure of NiO with cation vacancies.
Hajjarian, Zeinab; Nadkarni, Seemantini K.
2013-01-01
Biological fluids fulfill key functionalities such as hydrating, protecting, and nourishing cells and tissues in various organ systems. They are capable of these versatile tasks owing to their distinct structural and viscoelastic properties. Characterizing the viscoelastic properties of bio-fluids is of pivotal importance for monitoring the development of certain pathologies as well as engineering synthetic replacements. Laser Speckle Rheology (LSR) is a novel optical technology that enables mechanical evaluation of tissue. In LSR, a coherent laser beam illuminates the tissue and temporal speckle intensity fluctuations are analyzed to evaluate mechanical properties. The rate of temporal speckle fluctuations is, however, influenced by both optical and mechanical properties of tissue. Therefore, in this paper, we develop and validate an approach to estimate and compensate for the contributions of light scattering to speckle dynamics and demonstrate the capability of LSR for the accurate extraction of viscoelastic moduli in phantom samples and biological fluids of varying optical and mechanical properties. PMID:23705028
Dual-energy digital mammography for calcification imaging: Scatter and nonuniformity corrections
Kappadath, S. Cheenu; Shaw, Chris C.
2005-11-15
Mammographic images of small calcifications, which are often the earliest signs of breast cancer, can be obscured by overlapping fibroglandular tissue. We have developed and implemented a dual-energy digital mammography (DEDM) technique for calcification imaging under full-field imaging conditions using a commercially available aSi:H/CsI:Tl flat-panel based digital mammography system. The low- and high-energy images were combined using a nonlinear mapping function to cancel the tissue structures and generate the dual-energy (DE) calcification images. The total entrance-skin exposure and mean-glandular dose from the low- and high-energy images were constrained so that they were similar to screening-examination levels. To evaluate the DE calcification image, we designed a phantom using calcium carbonate crystals to simulate calcifications of various sizes (212-425 μm) overlaid with breast-tissue-equivalent material 5 cm thick with a continuously varying glandular-tissue ratio from 0% to 100%. We report on the effects of scatter radiation and nonuniformity in x-ray intensity and detector response on the DE calcification images. The nonuniformity was corrected by normalizing the low- and high-energy images with full-field reference images. Correction of scatter in the low- and high-energy images significantly reduced the background signal in the DE calcification image. Under the current implementation of DEDM, utilizing the mammography system and dose level tested, calcifications in the 300-355 μm size range were clearly visible in DE calcification images. Calcification threshold sizes decreased to the 250-280 μm size range when the visibility criteria were lowered to barely visible. Calcifications smaller than ≈250 μm were usually not visible. The visibility of calcifications with our DEDM imaging technique was limited by quantum noise, not system noise.
Gearhart, A; Peterson, T; Johnson, L
2015-06-15
Purpose: To evaluate the impact of the exceptional energy resolution of germanium detectors for preclinical SPECT in comparison to conventional detectors. Methods: A cylindrical water phantom was created in GATE with a spherical Tc-99m source in the center. Sixty-four projections over 360 degrees using a pinhole collimator were simulated. The same phantom was simulated using air instead of water to establish the true reconstructed voxel intensity without attenuation. Attenuation correction based on the Chang method was performed on MLEM reconstructed images from the water phantom to determine a quantitative measure of the effectiveness of the attenuation correction. Similarly, a NEMA phantom was simulated, and the effectiveness of the attenuation correction was evaluated. Both simulations were carried out using both NaI detectors with an energy resolution of 10% FWHM and Ge detectors with an energy resolution of 1%. Results: Analysis shows that attenuation correction without scatter correction using germanium detectors can reconstruct a small spherical source to within 3.5%. Scatter analysis showed that for standard sized objects in a preclinical scanner, a NaI detector has a scatter-to-primary ratio between 7% and 12.5% compared to between 0.8% and 1.5% for a Ge detector. Preliminary results from line profiles through the NEMA phantom suggest that applying attenuation correction without scatter correction provides acceptable results for the Ge detectors but overestimates the phantom activity using NaI detectors. Due to the decreased scatter, we believe that the spillover ratio for the air and water cylinders in the NEMA phantom will be lower using germanium detectors compared to NaI detectors. Conclusion: This work indicates that the superior energy resolution of germanium detectors allows fewer scattered photons to be included within the energy window compared to traditional SPECT detectors. This may allow for quantitative SPECT without implementing scatter
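The Chang attenuation correction mentioned above can be sketched minimally. This is the first-order variant for a uniform attenuator: each reconstructed voxel is multiplied by the reciprocal of its attenuation factor averaged over the projection angles. The path lengths below are illustrative inputs, not values from the study.

```python
import numpy as np

def chang_factor(mu, path_lengths):
    """First-order Chang correction factor for a single voxel.

    mu           : linear attenuation coefficient of the medium (1/cm)
    path_lengths : distance (cm) from the voxel to the object boundary
                   along each of the M projection directions
    Returns 1 / <exp(-mu * l_i)>, the factor applied to the voxel value.
    """
    att = np.exp(-mu * np.asarray(path_lengths, dtype=float))
    return 1.0 / att.mean()
```

With mu = 0 (no attenuation, i.e. the air phantom) the factor is exactly 1, which is why the air simulation above serves as the uncorrected reference.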
Probing spectator scattering and annihilation corrections in Bs→PV decays
NASA Astrophysics Data System (ADS)
Chang, Qin; Hu, Xiaohui; Sun, Junfeng; Yang, Yueling
2015-04-01
Motivated by the recent LHCb measurements of the B̄s→π−K*+ and B̄s→K±K*∓ decay modes, we revisit the Bs→PV decays within the QCD factorization framework. The effects of hard-spectator scattering and annihilation corrections are studied in detail. After performing a χ² fit of the end-point parameters X_A^{i,f}(ρ_A^{i,f}, ϕ_A^{i,f}) and X_H(ρ_H, ϕ_H) to the available data, it is found that although some possible mismatches exist, the universality of X_A^{i,f} and X_H in the Bs and Bu,d systems is still allowed within theoretical uncertainties and experimental errors. With the end-point parameters obtained from Bu,d→PV decays, the numerical results and detailed analyses for the observables of the B̄s→πK*, ρK, πρ, πϕ, and Kϕ decay modes are presented. In addition, we have identified a few useful observables, for instance those of the B̄s→π0ϕ decay, for probing hard-spectator scattering and annihilation contributions.
NASA Astrophysics Data System (ADS)
Ryu, Y.; Kobayashi, H.; Welles, J.; Norman, J.
2011-12-01
Correct estimation of the gap fraction is essential to quantify canopy architectural variables such as leaf area index and clumping index, which largely control land-atmosphere interactions. However, gap fraction measurements from optical sensors are contaminated by radiation scattered by the canopy and the ground surface. In this study, we propose a simple invertible bidirectional transmission model to remove scattering effects from gap fraction measurements. The model shows that 1) the scattering factor is highest where leaf area index is 1-2 in a non-clumped canopy, 2) the relative scattering factor (scattering factor/measured gap fraction) increases with leaf area index, 3) bright land surfaces (e.g. snow and bright soil) can contribute a significant scattering factor, and 4) the scattering factor is not marginal even under highly diffuse sky conditions. By applying the model to LAI-2200 data collected in an open savanna ecosystem, we find that the scattering factor causes significant underestimation of leaf area index (25%) and significant overestimation of clumping index (6%). The results highlight that some LAI-2000-based LAI estimates from around the world may be underestimated, particularly in highly clumped broad-leaf canopies. Fortunately, the importance of scattering can be assessed with software from LICOR, Inc., which will incorporate the scattering model from this study in a post-processing mode after data have been collected by a LAI-2000 or LAI-2200.
Bednarz, Bryan; Lu, Hsiao-Ming; Engelsman, Martijn; Paganetti, Harald
2011-01-01
Monte Carlo models of proton therapy treatment heads are being used to improve beam delivery systems and to calculate the radiation field for patient dose calculations. The achievable accuracy of the model depends on the exact knowledge of the treatment head geometry and time structure, the material characteristics, and the underlying physics. This work aimed at studying the uncertainties in treatment head simulations for passive scattering proton therapy. The sensitivities of spread-out Bragg peak (SOBP) dose distributions on material densities, mean ionization potentials, initial proton beam energy spread and spot size were investigated. An improved understanding of the nature of these parameters may help to improve agreement between calculated and measured SOBP dose distributions and to ensure that the range, modulation width, and uniformity are within clinical tolerance levels. Furthermore, we present a method to make small corrections to the uniformity of spread-out Bragg peaks by utilizing the time structure of the beam delivery. In addition, we re-commissioned the models of the two proton treatment heads located at our facility using the aforementioned correction methods presented in this paper. PMID:21478569
NASA Astrophysics Data System (ADS)
Zhang, Ningyu; Cheng, Chuanfu; Teng, Shuyun; Chen, Xiaoyi; Xu, Zhizhan
2007-09-01
A new approach based on the gated integration technique is proposed for the accurate measurement of the autocorrelation function of speckle intensities scattered from a random phase screen. The Boxcar used in this technique for the acquisition of the speckle intensity data integrates the photoelectric signal while its sampling gate is open, and it repeats the sampling a preset number of times, m. The averaged analog output of the m samplings from the Boxcar enhances the signal-to-noise ratio by √m, because the repeated sampling and averaging make the useful speckle signals stable, while the randomly varying photoelectric noise is suppressed by 1/√m. In the experiment, we use an analog-to-digital converter module to synchronize all the actions, such as the stepped movement of the phase screen, the repeated sampling, and the readout of the averaged output of the Boxcar. The experimental results show that speckle signals are better recovered from contaminated signals and that the autocorrelation function with its secondary maximum is obtained, indicating that the accuracy of the measurement of the autocorrelation function is greatly improved by the gated integration technique.
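The √m noise-suppression argument above is easy to check numerically: averaging m repeated samplings of a fixed signal contaminated by independent noise reduces the noise standard deviation by 1/√m. The signal and noise levels below are arbitrary illustrative values.

```python
import numpy as np

# Fixed speckle signal plus independent photoelectric noise, sampled m times
# per gate position; the Boxcar-style average over the m samplings is kept.
rng = np.random.default_rng(0)
signal, sigma = 1.0, 0.5        # illustrative signal level and noise std
m, trials = 100, 20000          # m repeated samplings, many gate positions
samples = signal + sigma * rng.normal(size=(trials, m))
averaged = samples.mean(axis=1)  # one averaged reading per gate position
# np.std(averaged) comes out near sigma/sqrt(m) = 0.05, i.e. a 10x (sqrt(100))
# suppression of the noise relative to the single-sample std of 0.5.
```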
Siewerdsen, J.H.; Daly, M.J.; Bakhtiar, B.
2006-01-15
X-ray scatter poses a significant limitation to image quality in cone-beam CT (CBCT), resulting in contrast reduction, image artifacts, and lack of CT number accuracy. We report the performance of a simple scatter correction method in which scatter fluence is estimated directly in each projection from pixel values near the edge of the detector behind the collimator leaves. The algorithm operates on the simple assumption that signal in the collimator shadow is attributable to x-ray scatter, and the 2D scatter fluence is estimated by interpolating between pixel values measured along the top and bottom edges of the detector behind the collimator leaves. The resulting scatter fluence estimate is subtracted from each projection to yield an estimate of the primary-only images for CBCT reconstruction. Performance was investigated in phantom experiments on an experimental CBCT benchtop, and the effect on image quality was demonstrated in patient images (head, abdomen, and pelvis sites) obtained on a preclinical system for CBCT-guided radiation therapy. The algorithm provides significant reduction in scatter artifacts without compromise in contrast-to-noise ratio (CNR). For example, in a head phantom, cupping artifact was essentially eliminated, CT number accuracy was restored to within 3%, and CNR (breast-to-water) was improved by up to 50%. Similarly in a body phantom, cupping artifact was reduced by at least a factor of 2 without loss in CNR. Patient images demonstrate significantly increased uniformity, accuracy, and contrast, with an overall improvement in image quality in all sites investigated. Qualitative evaluation illustrates that soft-tissue structures that are otherwise undetectable are clearly delineated in scatter-corrected reconstructions. Since scatter is estimated directly in each projection, the algorithm is robust with respect to system geometry, patient size and heterogeneity, patient motion, etc. Operating without prior information, analytical modeling
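The collimator-shadow idea above can be sketched in a few lines. This is a hedged re-implementation of the stated principle, not the authors' code: pixels behind the collimator leaves at the top and bottom of each projection are assumed to record scatter only, and a per-column linear interpolation between those edge readings gives the 2D scatter estimate that is subtracted. The `n_edge` parameter is an assumption.

```python
import numpy as np

def shadow_scatter_subtract(proj, n_edge=4):
    """Estimate and subtract scatter using collimator-shadow edge pixels.

    proj   : 2D projection image (rows x cols), rows 0..n_edge-1 and the last
             n_edge rows assumed to lie behind the collimator leaves
    n_edge : number of shadowed rows sampled at each edge (illustrative)
    """
    rows, _ = proj.shape
    top = proj[:n_edge, :].mean(axis=0)        # scatter behind top leaves
    bot = proj[-n_edge:, :].mean(axis=0)       # scatter behind bottom leaves
    w = np.linspace(0.0, 1.0, rows)[:, None]   # per-row interpolation weight
    scatter = (1.0 - w) * top[None, :] + w * bot[None, :]
    return np.clip(proj - scatter, 0.0, None)  # primary-only estimate
```

Because the estimate is formed independently in every projection, nothing about the object or system geometry enters the computation, which is the robustness property the abstract highlights.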
NASA Astrophysics Data System (ADS)
Dinten, Jean-Marc; Darboux, Michel; Bordy, Thomas; Robert-Coutant, Christine; Gonon, Georges
2004-05-01
At CEA-LETI, a DEXA approach for systems using a digital 2D radiographic detector has been developed. It relies on an original X-ray scatter management method, based on the combined use of an analytical model and of scatter calibration data acquired through different thicknesses of Lucite slabs. Since the X-ray interaction properties of Lucite are equivalent to those of fat, the approach leads to a scatter flux map representative of a 100% fat region. However, patients' soft tissues are composed of lean and fat. Therefore, the obtained scatter map has to be refined in order to take into account the various fat ratios that patients can present. This refinement consists of establishing a formula relating the fat ratio to the thicknesses of low- and high-energy Lucite slabs leading to the same signal level. This proportion is then used to compute, on the basis of X-ray/matter interaction equations, correction factors to apply to the Lucite-equivalent X-ray scatter map. The influence of the fat ratio correction has been evaluated, on a digital 2D bone densitometer, with phantoms composed of a PVC step (simulating bone) and different Lucite/water thicknesses, as well as on patients. The results show that our X-ray scatter determination approach can take into account variations of body composition.
NASA Astrophysics Data System (ADS)
Juste, B.; Miró, R.; Verdú, G.; Santos, A.
2014-06-01
This work presents a methodology to reconstruct a Linac high energy photon spectrum beam. The method is based on EPID scatter images generated when the incident photon beam impinges onto a plastic block. The distribution of scatter radiation produced by this scattering object placed on the external EPID surface and centered at the beam field size was measured. The scatter distribution was also simulated for a series of monoenergetic identical geometry photon beams. Monte Carlo simulations were used to predict the scattered photons for monoenergetic photon beams at 92 different locations, with 0.5 cm increments and at 8.5 cm from the centre of the scattering material. Measurements were performed with the same geometry using a 6 MeV photon beam produced by the linear accelerator. A system of linear equations was generated to combine the polyenergetic EPID measurements with the monoenergetic simulation results. Regularization techniques were applied to solve the system for the incident photon spectrum. A linear matrix system, A×S=E, was developed to describe the scattering interactions and their relationship to the primary spectrum (S). A is the monoenergetic scatter matrix determined from the Monte Carlo simulations, S is the incident photon spectrum, and E represents the scatter distribution characterized by EPID measurement. Direct matrix inversion methods produce results that are not physically consistent due to errors inherent in the system, therefore Tikhonov regularization methods were applied to address the effects of these errors and to solve the system for obtaining a consistent bremsstrahlung spectrum.
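The regularized inversion of A×S=E described above can be illustrated with the standard zeroth-order Tikhonov solution via the normal equations. The regularization parameter below is illustrative; in practice it is chosen, for example, by the L-curve or the discrepancy principle.

```python
import numpy as np

def tikhonov_solve(A, E, lam=1e-2):
    """Zeroth-order Tikhonov solution of A @ S = E.

    Minimizes ||A S - E||^2 + lam^2 ||S||^2, which damps the noise
    amplification that makes direct inversion physically inconsistent.
    A   : monoenergetic scatter matrix (from Monte Carlo, per the abstract)
    E   : measured scatter distribution (EPID)
    lam : regularization strength (illustrative default)
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ E)
```

For a well-conditioned test matrix and a tiny lam, the solution should reproduce the exact one; for the ill-conditioned matrices arising in spectrum reconstruction, a larger lam trades a small bias for stability.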
Radiative corrections to the elastic e-p and mu-p scattering in Monte Carlo simulation approach
NASA Astrophysics Data System (ADS)
Koshchii, Oleksandr; Afanasev, Andrei; MUSE Collaboration
2015-04-01
In this paper, we calculate lepton mass corrections exactly for elastic e-p and mu-p scattering using the ELRADGEN 2.1 Monte Carlo generator. These estimates are essential for the MUSE experiment, which is designed to solve the proton radius puzzle. The puzzle is due to the fact that two methods of measuring the proton radius (the spectroscopy method, which measures proton energy levels in hydrogen, and electron scattering experiments) predicted a radius of 0.8768 +/- 0.0069 fm, whereas the experiment that used muonic hydrogen provided a value that is 5% smaller. Since the radiative corrections are different for electrons and muons due to their mass difference, these corrections are extremely important for the analysis and interpretation of upcoming MUSE data.
NASA Astrophysics Data System (ADS)
Sramek, Benjamin Koerner
The ability to deliver conformal dose distributions in radiation therapy through intensity modulation and the potential for tumor dose escalation to improve treatment outcome has necessitated an increase in localization accuracy of inter- and intra-fractional patient geometry. Megavoltage cone-beam CT imaging using the treatment beam and onboard electronic portal imaging device is one option currently being studied for implementation in image-guided radiation therapy. However, routine clinical use is predicated upon continued improvements in image quality and patient dose delivered during acquisition. The formal statement of hypothesis for this investigation was that the conformity of planned to delivered dose distributions in image-guided radiation therapy could be further enhanced through the application of kilovoltage scatter correction and intermediate view estimation techniques to megavoltage cone-beam CT imaging, and that normalized dose measurements could be acquired and inter-compared between multiple imaging geometries. The specific aims of this investigation were to: (1) incorporate the Feldkamp, Davis and Kress filtered backprojection algorithm into a program to reconstruct a voxelized linear attenuation coefficient dataset from a set of acquired megavoltage cone-beam CT projections, (2) characterize the effects on megavoltage cone-beam CT image quality resulting from the application of Intermediate View Interpolation and Intermediate View Reprojection techniques to limited-projection datasets, (3) incorporate the Scatter and Primary Estimation from Collimator Shadows (SPECS) algorithm into megavoltage cone-beam CT image reconstruction and determine the set of SPECS parameters which maximize image quality and quantitative accuracy, and (4) evaluate the normalized axial dose distributions received during megavoltage cone-beam CT image acquisition using radiochromic film and thermoluminescent dosimeter measurements in anthropomorphic pelvic and head and
Gallandi, Lukas; Marom, Noa; Rinke, Patrick; Körzdörfer, Thomas
2016-02-01
The performance of non-empirically tuned long-range corrected hybrid functionals for the prediction of vertical ionization potentials (IPs) and electron affinities (EAs) is assessed for a set of 24 organic acceptor molecules. Basis set-extrapolated coupled cluster singles, doubles, and perturbative triples [CCSD(T)] calculations serve as a reference for this study. Compared to standard exchange-correlation functionals, tuned long-range corrected hybrid functionals produce highly reliable results for vertical IPs and EAs, yielding mean absolute errors on par with computationally more demanding GW calculations. In particular, it is demonstrated that long-range corrected hybrid functionals serve as ideal starting points for non-self-consistent GW calculations. PMID:26731340
2015-11-01
In the article by Heuslein et al, which published online ahead of print on September 3, 2015 (DOI: 10.1161/ATVBAHA.115.305775), a correction was needed. Brett R. Blackman was added as the penultimate author of the article. The article has been corrected for publication in the November 2015 issue. PMID:26490278
Karton, A.; Martin, J. M. L.; Ruscic, B.; Chemistry; Weizmann Institute of Science
2007-06-01
A benchmark calculation of the atomization energy of the 'simple' organic molecule C2H6 (ethane) has been carried out by means of W4 theory. While the molecule is straightforward in terms of one-particle and n-particle basis set convergence, its large zero-point vibrational energy (and anharmonic correction thereto) and nontrivial diagonal Born-Oppenheimer correction (DBOC) represent interesting challenges. For the W4 set of molecules and C2H6, we show that DBOCs to the total atomization energy are systematically overestimated at the SCF level, and that the correlation correction converges very rapidly with the basis set. Thus, even at the CISD/cc-pVDZ level, useful correlation corrections to the DBOC are obtained. When applying such a correction, overall agreement with experiment was only marginally improved, but a more significant improvement is seen when hydrogen-containing systems are considered in isolation. We conclude that for closed-shell organic molecules, the greatest obstacles to highly accurate computational thermochemistry may not lie in the solution of the clamped-nuclei Schroedinger equation, but rather in the zero-point vibrational energy and the diagonal Born-Oppenheimer correction.
Modulator design for x-ray scatter correction using primary modulation: Material selection
Gao Hewei; Zhu Lei; Fahrig, Rebecca
2010-08-15
Purpose: An optimal material selection for primary modulator is proposed in order to minimize beam hardening of the modulator in x-ray cone-beam computed tomography (CBCT). Recently, a measurement-based scatter correction method using primary modulation has been developed and experimentally verified. In the practical implementation, beam hardening of the modulator blocker is a limiting factor because it causes inconsistency in the primary signal and therefore degrades the accuracy of scatter correction. Methods: This inconsistency can be purposely assigned to the effective transmission factor of the modulator whose variation as a function of object filtration represents the magnitude of beam hardening of the modulator. In this work, the authors show that the variation reaches a minimum when the K-edge of the modulator material is near the mean energy of the system spectrum. Accordingly, an optimal material selection can be carried out in three steps. First, estimate and evaluate the polychromatic spectrum for a given x-ray system including both source and detector; second, calculate the mean energy of the spectrum and decide the candidate materials whose K-edge energies are near the mean energy; third, select the optimal material from the candidates after considering both the magnitude of beam hardening and the physical and chemical properties. Results: A tabletop x-ray CBCT system operated at 120 kVp is used to validate the material selection method in both simulations and experiments, from which the optimal material for this x-ray system is then chosen. With the transmission factor initially being 0.905 and 0.818, simulations show that erbium provides the least amount of variation as a function of object filtrations (maximum variations are 2.2% and 4.3%, respectively, only one-third of that for copper). With different combinations of aluminum and copper filtrations (simulating a range of object thicknesses), measured overall variations are 2.5%, 1.0%, and 8
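The third step of the selection procedure above reduces to ranking candidate materials by how close their K-edge lies to the mean energy of the detected spectrum. The sketch below is a toy: the K-edge energies are approximate published values (keV), and the real selection also weighs the magnitude of beam hardening and the materials' physical and chemical properties, as the abstract notes.

```python
# Approximate K-edge energies in keV (assumed values for illustration).
K_EDGE = {"Cu": 8.98, "I": 33.2, "Gd": 50.2, "Er": 57.5, "W": 69.5, "Pb": 88.0}

def pick_modulator(mean_energy_kev):
    """Return the candidate whose K-edge is nearest the spectrum mean energy."""
    return min(K_EDGE, key=lambda m: abs(K_EDGE[m] - mean_energy_kev))
```

For a 120 kVp system whose detected spectrum has a mean energy around 60 keV, this ranking favors erbium over copper, consistent with the simulation result reported above.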
2015-12-01
In the article by Narayan et al (Narayan O, Davies JE, Hughes AD, Dart AM, Parker KH, Reid C, Cameron JD. Central aortic reservoir-wave analysis improves prediction of cardiovascular events in elderly hypertensives. Hypertension. 2015;65:629–635. doi: 10.1161/HYPERTENSIONAHA.114.04824), which published online ahead of print December 22, 2014, and appeared in the March 2015 issue of the journal, some corrections were needed. On page 632, Figure, panel A, the label PRI has been corrected to read RPI. In panel B, the text by the upward arrow, "10% increase in kd," has been corrected to read, "10% decrease in kd." The corrected figure is shown below. The authors apologize for these errors. PMID:26558821
NASA Astrophysics Data System (ADS)
Sun, Yuansheng; Periasamy, Ammasi
2010-03-01
Förster resonance energy transfer (FRET) microscopy is commonly used to monitor protein interactions with filter-based imaging systems, which require spectral bleedthrough (or cross talk) correction to accurately measure the energy transfer efficiency (E). The double-label (donor+acceptor) specimen is excited at the donor wavelength; the acceptor emission provides the uncorrected FRET signal, and the donor emission (the donor channel) represents the quenched donor (qD), the basis for the E calculation. Our results indicate this is not the most accurate determination of the quenched donor signal, as it fails to consider the donor spectral bleedthrough (DSBT) signals in the qD for the E calculation, which our new model addresses, leading to a more accurate E result. This refinement improves E comparisons made with lifetime and spectral FRET imaging microscopy, as shown here using several genetic (FRET standard) constructs in which cerulean and venus fluorescent proteins are tethered by different amino acid linkers.
NASA Astrophysics Data System (ADS)
Blanco, Francisco; Ellis-Gibbings, Lilian; García, Gustavo
2016-02-01
An improvement of the screening-corrected Additivity Rule (SCAR) is proposed for calculating electron and positron scattering cross sections from polyatomic molecules within the independent atom model (IAM), following the analysis of numerical solutions to the three-dimensional Lippmann-Schwinger equation for multicenter potentials. Interference contributions affect the entire energy range considered (1-300 eV): both the lower energies, where atomic screening is most effective, and the higher energies, where interatomic distances are large compared to the total cross sections and electron wavelengths. This correction to the interference terms provides a significant improvement for both total and differential elastic cross sections at these energies.
Raymond Raylman; Stanislaw Majewski; Randolph Wojcik; Andrew Weisenberger; Brian Kross; Vladimir Popov
2001-06-01
Positron emission mammography (PEM) has begun to show promise as an effective method for the detection of breast lesions. Due to its utilization of tumor-avid radiopharmaceuticals labeled with positron-emitting radionuclides, this technique may be especially useful in imaging of women with radiodense or fibrocystic breasts. While the use of these radiotracers affords PEM unique capabilities, it also introduces some limitations. Specifically, acceptance of accidental and Compton-scattered coincidence events can decrease lesion detectability. The authors studied the effect of accidental coincidence events on PEM images produced by the presence of 18F-Fluorodeoxyglucose in the organs of a subject using an anthropomorphic phantom. A delayed-coincidence technique was tested as a method for correcting PEM images for the occurrence of accidental events. Also, a Compton scatter correction algorithm designed specifically for PEM was developed and tested using a compressed breast phantom.
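The delayed-coincidence correction tested above rests on a simple principle: events falling in a delayed timing window cannot be true coincidences, so their rate estimates the accidental rate and is subtracted bin by bin from the prompt data. The counts below are made-up illustrative numbers.

```python
import numpy as np

# Hypothetical per-bin coincidence counts (illustrative values only).
prompts   = np.array([120, 95, 210, 80])   # prompt-window counts
delayeds  = np.array([ 15, 10,  25, 12])   # delayed-window (accidentals) counts
# Accidental-corrected estimate of true + scattered coincidences,
# clipped at zero since counts cannot be negative.
trues_est = np.clip(prompts - delayeds, 0, None)
```

Compton-scattered coincidences survive this subtraction, which is why the abstract describes a separate scatter correction algorithm developed specifically for PEM.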
A. Afanasev, I. Akushevich, A. Ilyichev, N. Merenkov
2003-09-01
The main features of the electron structure method for calculations of the higher order QED radiative effects to polarized deep-inelastic ep-scattering are presented. A new FORTRAN code ESFRAD based on this method was developed. A detailed quantitative comparison between the results of ESFRAD and other methods implemented in the codes POLRAD and RADGEN for calculation of the higher order radiative corrections is performed.
NASA Astrophysics Data System (ADS)
Tranchida, Davide; Piccarolo, Stefano; Loos, Joachim; Alexeev, Alexander
2006-10-01
The Oliver and Pharr [J. Mater. Res. 7, 1564 (1992)] procedure is a widely used tool to analyze nanoindentation force curves obtained on metals or ceramics. Its application to polymers is, however, difficult, as Young's moduli are commonly overestimated mainly because of viscoelastic effects and pileup. However, polymers spanning a large range of morphologies have been used in this work to introduce a phenomenological correction factor. It depends on indenter geometry: sets of calibration indentations have to be performed on some polymers with known elastic moduli to characterize each indenter.
Bigdeli, T. Bernard; Lee, Donghyung; Webb, Bradley Todd; Riley, Brien P.; Vladimirov, Vladimir I.; Fanous, Ayman H.; Kendler, Kenneth S.; Bacanu, Silviu-Alin
2016-01-01
Motivation: For genetic studies, statistically significant variants explain far less trait variance than 'sub-threshold' association signals. To dimension follow-up studies, researchers need to accurately estimate 'true' effect sizes at each SNP, e.g. the true mean of odds ratios (ORs)/regression coefficients (RRs) or Z-score noncentralities. Naïve estimates of effect sizes incur winner's curse biases, which are reduced only by laborious winner's curse adjustments (WCAs). Given that Z-score estimates can in theory be translated to other scales, we propose a simple method to compute the WCA for Z-scores, i.e. their true means/noncentralities. Results: WCA of Z-scores shrinks them toward zero while, on the P-value scale, multiple testing adjustment (MTA) shrinks P-values toward one, which corresponds to a zero Z-score value. Thus, WCA on the Z-score scale is a proxy for MTA on the P-value scale. Therefore, to estimate Z-score noncentralities for all SNPs in genome scans, we propose the FDR Inverse Quantile Transformation (FIQT). It (i) performs the simpler MTA of P-values using the FDR and (ii) obtains noncentralities by back-transforming MTA P-values onto the Z-score scale. When compared to competitors, realistic simulations suggest that FIQT is more (i) accurate and (ii) computationally efficient by orders of magnitude. Practical application of FIQT to the Psychiatric Genetic Consortium schizophrenia cohort predicts a non-trivial fraction of sub-threshold signals which become significant in much larger supersamples. Conclusions: FIQT is a simple, yet accurate, WCA method for Z-scores (and ORs/RRs, via simple transformations). Availability and Implementation: A 10-line R function implementation is available at https://github.com/bacanusa/FIQT. Contact: sabacanu@vcu.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27187203
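The two FIQT steps above translate directly into code. This is an independent sketch of the stated recipe (Benjamini-Hochberg adjustment of two-sided p-values, then back-transformation to the Z scale with signs restored), not the authors' R implementation.

```python
import numpy as np
from statistics import NormalDist

def fiqt(z):
    """Sketch of FDR Inverse Quantile Transformation for Z-scores.

    (i)  BH/FDR-adjust the two-sided p-values of the observed Z-scores;
    (ii) map the adjusted p-values back onto the Z scale, keeping the sign.
    The adjusted p-values are never smaller than the raw ones, so every
    output is shrunk toward zero, mirroring a winner's curse adjustment.
    """
    nd = NormalDist()
    z = np.asarray(z, dtype=float)
    p = 2.0 * np.array([nd.cdf(-abs(v)) for v in z])    # two-sided p-values
    n = p.size
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)         # BH step-up factors
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]  # enforce monotonicity
    p_adj = np.empty(n)
    p_adj[order] = np.minimum(ranked, 1.0)
    # Back-transform: adjusted p -> |Z| noncentrality estimate, restore sign.
    return np.sign(z) * np.array([-nd.inv_cdf(min(v, 1.0) / 2.0)
                                  for v in p_adj])
```

The shrinkage property is visible directly: every transformed score has the same sign as, and magnitude no larger than, the input score.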
NASA Astrophysics Data System (ADS)
1995-04-01
Seismic images of the Brooks Range, Arctic Alaska, reveal crustal-scale duplexing: Correction. Geology, v. 23, p. 65-68 (January 1995). The correct Figure 4A, for the loose insert, is given here. See Figure 4A below. Corrected inserts will be available to those requesting copies of the article from the senior author, Gary S. Fuis, U.S. Geological Survey, 345 Middlefield Road, Menlo Park, CA 94025. Figure 4A. P-wave velocity model of the Brooks Range region (thin gray contours) with migrated wide-angle reflections (heavy red lines) and migrated vertical-incidence reflections (short black lines) superimposed. Velocity contour interval is 0.25 km/s; the 4, 5, and 6 km/s contours are labeled. Estimated error in velocities is one contour interval. Symbols on faults shown at top are as in the Figure 2 caption.
Algorithm for x-ray beam hardening and scatter correction in low-dose cone-beam CT: phantom studies
NASA Astrophysics Data System (ADS)
Liu, Wenlei; Rong, Junyan; Gao, Peng; Liao, Qimei; Lu, HongBing
2016-03-01
X-ray scatter, like beam hardening, poses a significant limitation to image quality in cone-beam CT (CBCT), resulting in image artifacts, contrast reduction, and lack of CT number accuracy; meanwhile, the x-ray radiation dose is also non-negligible. Many scatter and beam-hardening correction methods have been developed independently, but they are rarely combined with low-dose CT reconstruction. In this paper, we combine scatter suppression with beam hardening correction for sparse-view CT reconstruction to improve CT image quality and reduce CT radiation. Firstly, scatter was measured, estimated, and removed using measurement-based methods, assuming that the signal in the lead-blocker shadow is attributable only to x-ray scatter. Secondly, beam hardening was modeled by estimating an equivalent attenuation coefficient at the effective energy, which was integrated into the forward projector of the algebraic reconstruction technique (ART). Finally, compressed sensing (CS) iterative reconstruction is carried out for sparse-view CT reconstruction to reduce the CT radiation. Preliminary Monte Carlo simulation experiments indicate that with only about 25% of the conventional dose, our method reduces the magnitude of the cupping artifact by a factor of 6.1, increases the contrast by a factor of 1.4, and increases the CNR by a factor of 15. The proposed method can provide good reconstructed images from a few projection views, with effective suppression of the artifacts caused by scatter and beam hardening, as well as a reduced radiation dose. With this proposed framework and modeling, it may provide a new way for low-dose CT imaging.
2016-02-01
Neogi T, Jansen TLTA, Dalbeth N, et al. 2015 Gout classification criteria: an American College of Rheumatology/European League Against Rheumatism collaborative initiative. Ann Rheum Dis 2015;74:1789–98. The name of the 20th author was misspelled. The correct spelling is Janitzia Vazquez-Mellado. We regret the error. PMID:26881284
Nguyen, Hung T.; Pabit, Suzette A.; Meisburger, Steve P.; Pollack, Lois; Case, David A.
2014-12-14
A new method is introduced to compute X-ray solution scattering profiles from atomic models of macromolecules. The three-dimensional version of the Reference Interaction Site Model (RISM) from liquid-state statistical mechanics is employed to compute the solvent distribution around the solute, including both water and ions. X-ray scattering profiles are computed from this distribution together with the solute geometry. We describe an efficient procedure for performing this calculation employing a Lebedev grid for the angular averaging. The intensity profiles (which involve no adjustable parameters) match experiment and molecular dynamics simulations up to wide angle for two proteins (lysozyme and myoglobin) in water, as well as the small-angle profiles for a dozen biomolecules taken from the BioIsis.net database. The RISM model is especially well-suited for studies of nucleic acids in salt solution. Use of fiber-diffraction models for the structure of duplex DNA in solution yields close agreement with the observed scattering profiles in both the small and wide angle scattering (SAXS and WAXS) regimes. In addition, computed profiles of anomalous SAXS signals (for Rb{sup +} and Sr{sup 2+}) emphasize the ionic contribution to scattering and are in reasonable agreement with experiment. In cases where an absolute calibration of the experimental data at q = 0 is available, one can extract a count of the excess number of waters and ions; computed values depend on the closure that is assumed in the solution of the Ornstein–Zernike equations, with results from the Kovalenko–Hirata closure being closest to experiment for the cases studied here.
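For point scatterers, the spherical (Lebedev-grid) orientational average the authors describe reduces to the classical Debye formula; the sketch below is a simplified stand-in for illustration only (it ignores the 3D-RISM solvent contribution entirely and the names are ours):

```python
import numpy as np

def debye_intensity(coords, f, q):
    """Orientationally averaged intensity I(q) via the Debye formula:
    I(q) = sum_ij f_i f_j sin(q r_ij)/(q r_ij), for point scatterers
    with form factors f at positions coords (solvent terms omitted)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # np.sinc(x) = sin(pi x)/(pi x), so rescale the argument by pi
    s = np.sinc(q * d / np.pi)
    return float(f @ s @ f)
```

At q = 0 this gives the square of the total scattering strength, which is why an absolute calibration at q = 0 counts excess electrons (and hence waters and ions) as noted in the abstract.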
2016-02-01
In the article by Guessous et al (Guessous I, Pruijm M, Ponte B, Ackermann D, Ehret G, Ansermot N, Vuistiner P, Staessen J, Gu Y, Paccaud F, Mohaupt M, Vogt B, Pechère-Bertschi A, Martin PY, Burnier M, Eap CB, Bochud M. Associations of ambulatory blood pressure with urinary caffeine and caffeine metabolite excretions. Hypertension. 2015;65:691–696. doi: 10.1161/HYPERTENSIONAHA.114.04512), which published online ahead of print December 8, 2014, and appeared in the March 2015 issue of the journal, a correction was needed. One of the author surnames was misspelled. Antoinette Pechère-Berstchi has been corrected to read Antoinette Pechère-Bertschi. The authors apologize for this error. PMID:26763012
TU-F-18C-03: X-Ray Scatter Correction in Breast CT: Advances and Patient Testing
Ramamurthy, S; Sechopoulos, I
2014-06-15
Purpose: To further develop and perform patient testing of an x-ray scatter correction algorithm for dedicated breast computed tomography (BCT). Methods: A previously proposed algorithm for x-ray scatter signal reduction in BCT imaging was modified and tested with a phantom and on patients. A wireless electronic positioner system was designed and added to the BCT system that positions a tungsten plate in and out of the x-ray beam. The interpolation used by the algorithm was replaced with a radial basis function-based algorithm, with automated exclusion of non-valid sampled points due to patient motion or other factors. A 3D adaptive noise reduction filter was also introduced to reduce the impact of scatter quantum noise post-reconstruction. The impact of the improved algorithm on image quality was evaluated using a breast phantom and seven patient breasts, quantitatively using metrics such as signal difference (SD) and signal difference-to-noise ratio (SDNR) and qualitatively using image profiles. Results: The improvements in the algorithm resulted in a more robust interpolation step, with no introduction of image artifacts, especially at the imaged object boundaries, which was an issue in the previous implementation. Qualitative evaluation of the reconstructed slices and corresponding profiles shows excellent homogeneity of both the background and the higher density features throughout the whole imaged object, as well as increased accuracy in the Hounsfield unit (HU) values of the tissues. Profiles also demonstrate a substantial increase in both SD and SDNR between glandular and adipose regions compared to both the uncorrected and system-corrected images. Conclusion: The improved scatter correction algorithm can be reliably used during patient BCT acquisitions with no introduction of artifacts, resulting in substantial improvement in image quality. Its impact on actual clinical performance needs to be evaluated in the future. Research Agreement, Koning Corp., Hologic
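The interpolation step, a radial basis function fit over the validly sampled scatter points with invalid samples excluded, might look like the following 1-D Gaussian-RBF sketch (our own toy version, not the clinical implementation; the kernel width and the use of NaN to flag invalid samples are assumptions):

```python
import numpy as np

def rbf_interpolate(xs, ys, xq, eps=1.0):
    """Gaussian RBF interpolation of sparsely sampled scatter values.

    Non-finite samples (e.g. corrupted by patient motion) are excluded
    before the fit, mirroring the automated exclusion in the abstract.
    """
    keep = np.isfinite(ys)
    xs, ys = xs[keep], ys[keep]
    # kernel matrix between sample points, then solve for weights
    K = np.exp(-(eps * (xs[:, None] - xs[None, :])) ** 2)
    w = np.linalg.solve(K, ys)
    # evaluate the interpolant at the query points
    Kq = np.exp(-(eps * (xq[:, None] - xs[None, :])) ** 2)
    return Kq @ w
```

In 2-D the same construction applies with radial distances between detector pixels; the fitted surface is then subtracted from the open-field projection.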
Park, C G; Ha, B
1995-09-01
Most attempts and efforts in cleft lip repair have been directed toward the skin incision. The importance of orbicularis oris muscle repair has been emphasized in recent years. A well-designed skin incision with simple repair of the orbicularis oris muscle has produced a considerable improvement in the appearance of the upper lip; however, the repaired upper lip seems to change its shape abnormally in motion and has a tendency to become distorted with age if the orbicularis oris muscle is not repaired precisely and accurately. Following dissection of the normal upper lip and unilateral cleft lip in cadavers, we identified two different components in the orbicularis oris muscle, a superficial and a deep component. One is a retractor and the other is a constrictor of the lip; they act antagonistically during lip movement. We can also identify these two components of the muscle in the cleft lip patient during operation. We believe that an inaccurate, mixed connection between these two functionally different components can leave the repaired lip distorted and unbalanced, worsening during growth. By identification and separate repair of the two muscular components of the orbicularis oris muscle (i.e., repair of the superficial and deep components on the lateral side with the corresponding components on the medial side), better results in the dynamic and three-dimensional configuration of the upper lip can be achieved, and unfavorable distortion can be avoided as the patient grows. (ABSTRACT TRUNCATED AT 250 WORDS) PMID:7652051
NASA Astrophysics Data System (ADS)
Roger, Michel; Moreau, Stéphane
2005-09-01
A previously published analytical formulation aimed at predicting broadband trailing-edge noise of subsonic airfoils is extended here to account for all the effects due to a limited chord length, and to infer the far-field radiation off the mid-span plane. Three-dimensional gusts are used to simulate the incident aerodynamic wall pressure that is scattered as acoustic waves. A leading-edge back-scattering correction is derived, based on the solution of an equivalent Schwarzschild problem, and added to the original formula. The full solution is found to agree very well with other analytical results based on a vanishing Mach number Green's function tailored to a finite-chord flat plate and sources close to the trailing edge. Furthermore, it is valid for any subsonic ambient mean flow velocity. The back-scattering correction is shown to have a significant effect at lower reduced frequencies, for which the airfoil chord is acoustically compact, and at the transition between supercritical and subcritical gusts. It may be important for small-size airfoils, such as automotive fan blades and similar technologies. The final far-field noise formula can be used to predict trailing-edge noise in an arbitrary configuration, provided that a minimum statistical description of the aerodynamic pressure fluctuations on the airfoil surface close to the trailing edge is available.
Fortmann, Carsten; Wierling, August; Roepke, Gerd
2010-02-15
The dynamic structure factor, which determines the Thomson scattering spectrum, is calculated via an extended Mermin approach. It incorporates the dynamical collision frequency as well as the local-field correction factor. This allows a systematic study of the impact of electron-ion collisions, as well as of electron-electron correlations due to degeneracy and short-range interaction, on the characteristics of the Thomson scattering signal. In particular, the plasmon dispersion and damping width are calculated for a two-component plasma in which the electron subsystem is completely degenerate. Strong deviations of the plasmon resonance position due to the electron-electron correlations are observed at increasing Brueckner parameters r{sub s}. These results are of paramount importance for the interpretation of collective Thomson scattering spectra, as the determination of the free electron density from the plasmon resonance position requires a precise theory of the plasmon dispersion. Implications due to different approximations for the electron-electron correlation, i.e., different forms of the one-component local-field correction, are discussed.
2015-05-22
The Circulation Research article by Keith and Bolli (“String Theory” of c-kitpos Cardiac Cells: A New Paradigm Regarding the Nature of These Cells That May Reconcile Apparently Discrepant Results. Circ Res. 2015;116:1216–1230. doi: 10.1161/CIRCRESAHA.116.305557) states that van Berlo et al (2014) observed that large numbers of fibroblasts and adventitial cells, some smooth muscle and endothelial cells, and rare cardiomyocytes originated from c-kit positive progenitors. However, van Berlo et al reported that only occasional fibroblasts and adventitial cells derived from c-kit positive progenitors in their studies. Accordingly, the review has been corrected to indicate that van Berlo et al (2014) observed that large numbers of endothelial cells, with some smooth muscle cells and fibroblasts, and more rarely cardiomyocytes, originated from c-kit positive progenitors in their murine model. The authors apologize for this error, and the error has been noted and corrected in the online version of the article, which is available at http://circres.ahajournals.org/content/116/7/1216.full. PMID:25999426
NASA Astrophysics Data System (ADS)
1998-12-01
Alleged mosasaur bite marks on Late Cretaceous ammonites are limpet (patellogastropod) home scars Geology, v. 26, p. 947–950 (October 1998) This article had the following printing errors: p. 947, Abstract, line 11, “sepia” should be “septa”; p. 947, 1st paragraph under Introduction, line 2, “creep” should be “deep”; p. 948, column 1, 2nd paragraph, line 7, “creep” should be “deep”; p. 949, column 1, 1st paragraph, line 1, “creep” should be “deep”; p. 949, column 1, 1st paragraph, line 5, “19774” should be “1977)”; p. 949, column 1, 4th paragraph, line 7, “in particular” should be “In particular”. CORRECTION Mammalian community response to the latest Paleocene thermal maximum: An isotaphonomic study in the northern Bighorn Basin, Wyoming Geology, v. 26, p. 1011–1014 (November 1998) An error appeared in the References Cited. The correct reference appears below: Fricke, H. C., Clyde, W. C., O'Neil, J. R., and Gingerich, P. D., 1998, Evidence for rapid climate change in North America during the latest Paleocene thermal maximum: Oxygen isotope compositions of biogenic phosphate from the Bighorn Basin (Wyoming): Earth and Planetary Science Letters, v. 160, p. 193–208.
Scatter correction of vessel dropout behind highly attenuating structures in 4D-DSA
NASA Astrophysics Data System (ADS)
Hermus, James; Mistretta, Charles; Szczykutowicz, Timothy P.
2015-03-01
In Computed Tomographic (CT) image reconstruction for 4-dimensional digital subtraction angiography (4D-DSA), loss of vessel contrast has been observed behind highly attenuating anatomy, such as large contrast-filled aneurysms. Although this typically occurs only in a limited range of projection angles, the observed contrast time course can be altered. In this work we propose an algorithm to correct for highly attenuating anatomy within the fill projection data, i.e., aneurysms. The algorithm uses a 3D-SA volume to create a correction volume that is multiplied by the 4D-DSA volume in order to correct for signal dropout within the 4D-DSA volume. The algorithm was designed to correct for highly attenuating material in the fill volume only; however, with alterations to a single step of the algorithm, artifacts due to highly attenuating materials in the mask volume (i.e., dental implants) can be mitigated as well. We successfully applied our algorithm to a case of vessel dropout due to the presence of a large attenuating aneurysm. The performance was verified visually, as the affected vessel no longer dropped out in corrected 4D-DSA time frames. The correction was quantified by plotting the signal intensity along the vessel. Our analysis demonstrated that our correction does not alter vessel signal values outside of the vessel dropout region but does increase the vessel values within the dropout region, as expected. We have demonstrated that this correction algorithm acts to correct vessel dropout in areas with highly attenuating materials.
Szidarovszky, Tamás; Császár, Attila G.
2015-01-07
The total partition functions Q(T) and their first two moments Q{sup ′}(T) and Q{sup ″}(T), together with the isobaric heat capacities C{sub p}(T), are computed a priori for three major MgH isotopologues over the temperature range T = 100–3000 K using the recent highly accurate potential energy curve, spin-rotation, and non-adiabatic correction functions of Henderson et al. [J. Phys. Chem. A 117, 13373 (2013)]. Nuclear motion computations are carried out on the ground electronic state to determine the (ro)vibrational energy levels and the scattering phase shifts. The effect of resonance states is found to be significant above about 1000 K, and it increases with temperature. Even very short-lived states, due to their relatively large number, have significant contributions to Q(T) at elevated temperatures. The contribution of scattering states is around one fourth of that of resonance states but opposite in sign. Uncertainty estimates are given for the possible error sources, suggesting that all computed thermochemical properties have an accuracy better than 0.005% up to 1200 K. Between 1200 and 2500 K, the uncertainties can rise to around 0.1%, while between 2500 K and 3000 K, a further increase to 0.5% might be observed for Q{sup ″}(T) and C{sub p}(T), principally due to the neglect of excited electronic states. The accurate thermochemical data determined are presented in the supplementary material for the three isotopologues {sup 24}MgH, {sup 25}MgH, and {sup 26}MgH at 1 K increments. These data, which differ significantly from older standard data, should prove useful for astronomical models incorporating thermodynamic properties of these species.
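The bound-state part of a partition function such as Q(T) is a direct Boltzmann sum over the computed term values; a minimal sketch follows (our own illustration, which omits the resonance- and scattering-state contributions that are central to the paper):

```python
import numpy as np

# Boltzmann constant expressed in spectroscopic units, cm^-1 per kelvin
K_B_CM = 0.6950348

def partition_function(levels_cm1, degeneracies, t):
    """Bound-state contribution to Q(T) from a list of rovibrational
    term values (cm^-1, measured from the lowest level) and their
    degeneracies, at temperature t in kelvin."""
    e = np.asarray(levels_cm1, float) / (K_B_CM * t)
    return float(np.asarray(degeneracies, float) @ np.exp(-e))
```

The moments Q′(T) and Q″(T) used for heat capacities follow from the same sum with extra powers of E/(kT) inserted, which is why accurate high-lying (and resonance) levels matter most at elevated temperatures.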
An efficient Monte Carlo-based algorithm for scatter correction in keV cone-beam CT
NASA Astrophysics Data System (ADS)
Poludniowski, G.; Evans, P. M.; Hansen, V. N.; Webb, S.
2009-06-01
A new method is proposed for scatter-correction of cone-beam CT images. A coarse reconstruction is used in initial iteration steps. Modelling of the x-ray tube spectra and detector response are included in the algorithm. Photon diffusion inside the imaging subject is calculated using the Monte Carlo method. Photon scoring at the detector is calculated using forced detection to a fixed set of node points. The scatter profiles are then obtained by linear interpolation. The algorithm is referred to as the coarse reconstruction and fixed detection (CRFD) technique. Scatter predictions are quantitatively validated against a widely used general-purpose Monte Carlo code: BEAMnrc/EGSnrc (NRCC, Canada). Agreement is excellent. The CRFD algorithm was applied to projection data acquired with a Synergy XVI CBCT unit (Elekta Limited, Crawley, UK), using RANDO and Catphan phantoms (The Phantom Laboratory, Salem NY, USA). The algorithm was shown to be effective in removing scatter-induced artefacts from CBCT images, and took as little as 2 min on a desktop PC. Image uniformity was greatly improved as was CT-number accuracy in reconstructions. This latter improvement was less marked where the expected CT-number of a material was very different to the background material in which it was embedded.
NASA Astrophysics Data System (ADS)
Mann, Steve D.; Tornai, Martin P.
2015-03-01
Solid state Cadmium Zinc Telluride (CZT) gamma cameras for SPECT imaging offer significantly improved energy resolution compared to traditional scintillation detectors. However, the photopeak resolution is often asymmetric due to incomplete charge collection within the detector, resulting in many photopeak events incorrectly sorted into lower energy bins ("tailing"). These misplaced events contaminate the true scatter signal, which may negatively impact scatter correction methods that rely on estimates of scatter from the spectra. Additionally, because CZT detectors are organized into arrays, each individual detector element may exhibit a different degree of tailing. Here, we present a modified dual-energy window scatter correction method for emission detection and imaging that attempts to account for position-dependent effects of incomplete charge collection in the CZT gamma camera of our dedicated breast SPECT-CT system. Point source measurements and geometric phantoms were used to estimate the impact of tailing on the scatter signal and extract a better estimate of the ratio of scatter within two energy windows. To evaluate the method, cylindrical phantoms with and without a separate fillable chamber were scanned to determine the impact on quantification in hot, cold, and uniform background regions. Projections were reconstructed using OSEM, and the results for the traditional and modified scatter correction methods were compared. Results show that while modestly reduced quantification accuracy was observed in hot and cold regions of the multi-chamber phantoms, the modified scatter correction method yields up to 8% improved quantification accuracy with 4% less added noise than the traditional DEW method within uniform background regions.
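The dual-energy-window idea, with the modification of a position-dependent scatter multiplier per detector element, can be sketched as follows (a toy per-pixel version; the name k_map and the sample counts are our illustrative assumptions):

```python
import numpy as np

def dew_correct(peak, low, k_map):
    """Dual-energy-window scatter correction, per detector element.

    peak  : counts in the photopeak window
    low   : counts in the lower (scatter) window
    k_map : scatter multiplier per CZT element; a per-pixel map lets the
            correction absorb position-dependent tailing, as opposed to
            the single global ratio of the traditional DEW method
    """
    # estimated scatter in the photopeak window is k * low-window counts;
    # clip so the corrected primary estimate cannot go negative
    return np.clip(peak - k_map * low, 0.0, None)
```

Calibrating k_map from point-source and geometric-phantom measurements, as the abstract describes, is what distinguishes the modified method from a single-ratio DEW correction.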
NASA Astrophysics Data System (ADS)
Newman, A. J.; Notaros, B. M.; Bringi, V. N.; Kleinkort, C.; Huang, G. J.; Kennedy, P.; Thurai, M.
2015-12-01
We present a novel approach to remote sensing and characterization of winter precipitation and modeling of radar observables through a synergistic use of advanced in-situ instrumentation for microphysical and geometrical measurements of ice and snow particles, image processing methodology to reconstruct complex particle three-dimensional (3D) shapes, computational electromagnetics to analyze realistic precipitation scattering, and state-of-the-art polarimetric radar. Our in-situ measurement site at the Easton Valley View Airport, La Salle, Colorado, shown in the figure, consists of two advanced optical imaging disdrometers within a 2/3-scaled double fence intercomparison reference wind shield, and also includes a PLUVIO snow measuring gauge, a VAISALA weather station, and a collocated NCAR GPS advanced upper-air sounding system. Our primary radar is the CSU-CHILL radar, with a dual-offset Gregorian antenna featuring very high polarization purity and excellent side-lobe performance in any plane; the in-situ instrumentation site is conveniently located at a range of 12.92 km from the radar. A multi-angle snowflake camera (MASC) is used to capture multiple different high-resolution views of an ice particle in free-fall, along with its fall speed. We apply a visual hull geometrical method for reconstruction of 3D shapes of particles based on the images collected by the MASC, and convert these shapes into models for computational electromagnetic scattering analysis, using a higher order method of moments. A two-dimensional video disdrometer (2DVD), collocated with the MASC, provides 2D contours of a hydrometeor, along with the fall speed and other important parameters. We use the fall speed from the MASC and the 2DVD, along with state parameters measured at the Easton site, to estimate the particle mass (Böhm's method), and then the dielectric constant of particles, based on a Maxwell-Garnett formula. By calculation of the "particle-by-particle" scattering
Zhou Haiqing; Kao Chungwen; Yang Shinnan
2007-12-31
Leading electroweak corrections play an important role in precision measurements of the strange form factors. We calculate the two-photon-exchange (TPE) and {gamma}Z-exchange corrections to the parity-violating asymmetry of the elastic electron-proton scattering in a simple hadronic model including the finite size of the proton. We find both can reach a few percent and are comparable in size with the current experimental measurements of strange-quark effects in the proton neutral weak current. The effect of {gamma}Z exchange is in general larger than that of TPE, especially at low momentum transfer Q{sup 2}{<=}1 GeV{sup 2}. Their combined effects on the values of G{sub E}{sup s}+G{sub M}{sup s} extracted in recent experiments can be as large as -40% in certain kinematics.
Effect of background thermal radiation on radiative correction to elastic scattering of electrons
Zaleski, H.
1989-11-01
Calculations of the energy dependence of the electron-scattering cross section in the presence of thermal background radiation (Planck's field) are done in the semiclassical approximation. It is shown that the cross section remains finite and is peaked around the initial energy with the width proportional to the radiation temperature.
Zheng, Tianyu; Bott, Steven; Huo, Qun
2016-08-24
Gold nanoparticles (AuNPs) have found broad applications in chemical and biological sensing, catalysis, biomolecular imaging, in vitro diagnostics, cancer therapy, and many other areas. Dynamic light scattering (DLS) is an analytical tool used routinely for nanoparticle size measurement and analysis. Due to its relatively low cost and ease of operation in comparison to other more sophisticated techniques, DLS is the primary choice of instrumentation for analyzing the size and size distribution of nanoparticle suspensions. However, many DLS users are unfamiliar with the principles behind the DLS measurement and are unaware of some of the intrinsic limitations as well as the unique capabilities of this technique. The lack of sufficient understanding of DLS often leads to inappropriate experimental design and misinterpretation of the data. In this study, we performed DLS analyses on a series of citrate-stabilized AuNPs with diameters ranging from 10 to 100 nm. Our study shows that the measured hydrodynamic diameters of the AuNPs can vary significantly with concentration and incident laser power. The scattered light intensity of the AuNPs has a nearly sixth-order power-law increase with diameter, and the enormous scattered light intensity of AuNPs with diameters around or exceeding 80 nm causes a substantial multiple scattering effect in conventional DLS instruments. The effect leads to significant errors in the reported average hydrodynamic diameter of the AuNPs when the measurements are analyzed in the conventional way, without accounting for the multiple scattering. We present here some useful methods to obtain the accurate hydrodynamic size of the AuNPs using DLS. We also demonstrate and explain an extremely powerful aspect of DLS: its exceptional sensitivity in detecting gold nanoparticle aggregate formation, and the use of this unique capability for chemical and biological sensing applications. PMID:27472008
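The near-sixth-power growth of scattered intensity with diameter is why DLS averages are pulled strongly toward the largest particles present; a toy intensity-weighting sketch makes the bias concrete (Rayleigh-regime d^6 scaling assumed; the function is our illustration, not a DLS instrument algorithm):

```python
import numpy as np

def intensity_weighted_diameter(diameters, number_fractions):
    """Intensity-weighted mean diameter under the Rayleigh-regime
    assumption that scattered intensity scales as d**6, so large
    particles dominate the reported average."""
    d = np.asarray(diameters, float)
    w = np.asarray(number_fractions, float) * d**6  # intensity weights
    return float((w * d).sum() / w.sum())
```

For an equal-number mixture of 10 nm and 100 nm particles, the intensity-weighted mean sits just below 100 nm: the million-fold intensity advantage of the larger species all but erases the smaller one from the average.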
Moal, S; Portier, M; Kim, J; Dugué, J; Rapol, U D; Leduc, M; Cohen-Tannoudji, C
2006-01-20
We present a new measurement of the s-wave scattering length a of spin-polarized helium atoms in the 2³S₁ metastable state. Using two-photon photoassociation spectroscopy and dark resonances, we measure the energy E(ν=14) = −91.35 ± 0.06 MHz of the least-bound state ν = 14 in the interaction potential of the two atoms. We deduce a value of a = 7.512 ± 0.005 nm, which is at least 100 times more precise than the best previous determinations and is in disagreement with some of them. This experiment also demonstrates the possibility to create exotic molecules binding two metastable atoms with a lifetime of the order of 1 μs. PMID:16486572
NASA Astrophysics Data System (ADS)
Lajohn, L. A.; Pratt, R. H.
2015-05-01
There is no simple parameter that can be used to predict when the impulse approximation (IA) can yield accurate Compton scattering doubly differential cross sections (DDCS) in relativistic regimes. When Z is low, a small value of the parameter ⟨p⟩/q (where ⟨p⟩ is the average initial electron momentum and q is the momentum transfer) suffices. For small Z the photon-electron kinematic contribution described in relativistic S-matrix (SM) theory reduces to an expression, Xrel, which is present in the relativistic impulse approximation (RIA) formula for Compton DDCS. When Z is high, the S-matrix photon-electron kinematics no longer reduces to Xrel, and this, along with the error characterized by the magnitude of ⟨p⟩/q, contributes to the RIA error Δ. We demonstrate and illustrate in the form of contour plots that there are regimes of incident photon energy ωi and scattering angle θ in which the two types of errors at least partially cancel. Our calculations show that when θ is about 65° for uranium K-shell scattering, Δ is less than 1% over an ωi range of 300 to 900 keV.
Weir, Alexander J; Sayer, Robin; Cheng-Xiang Wang; Parks, Stuart
2015-08-01
Medical phantoms are frequently required to verify image and signal processing systems, and are often used to support algorithm development for a wide range of imaging and blood flow assessments. A phantom with accurate scattering properties is a crucial requirement when assessing the effects of multi-path propagation channels during the development of complex signal processing techniques for Transcranial Doppler (TCD) ultrasound. The simulation of physiological blood flow in a phantom with tissue and blood equivalence can be achieved using a variety of techniques. In this paper, poly(vinyl alcohol) cryogel (PVA-C) tissue-mimicking material (TMM) is evaluated in conjunction with a number of potential scattering agents. The acoustic properties of the TMMs are assessed, and an acoustic velocity of 1524 m s⁻¹, an attenuation coefficient of 0.49 × 10⁻⁴ f dB m⁻¹ Hz⁻¹, a characteristic impedance of 1.72 × 10⁶ kg m⁻² s⁻¹, and a backscatter coefficient of 1.12 × 10⁻²⁸ f⁴ m⁻¹ Hz⁻⁴ sr⁻¹ were achieved using 4 freeze-thaw cycles and an aluminium oxide (Al₂O₃) scattering agent. This TMM was used to make an anatomically realistic wall-less flow phantom for studying the effects of multipath propagation in TCD ultrasound. PMID:26736851
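With the quoted linear frequency dependence, the total attenuation over a propagation path follows by simple arithmetic; a one-line helper makes the scaling explicit (the default coefficient is the paper's reported 0.49 × 10⁻⁴ dB m⁻¹ Hz⁻¹, the function itself is our illustration):

```python
def tmm_attenuation_db(freq_hz, path_m, alpha0=0.49e-4):
    """Total one-way attenuation in dB through the PVA-C TMM, whose
    attenuation coefficient scales linearly with frequency:
    alpha(f) = alpha0 * f, in dB per metre."""
    return alpha0 * freq_hz * path_m
```

At a typical 2 MHz TCD frequency, a 5 cm path through the TMM attenuates the beam by about 4.9 dB, consistent with soft-tissue-like behaviour of roughly 0.5 dB cm⁻¹ MHz⁻¹.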
Isospin breaking corrections to low-energy π-K scattering
NASA Astrophysics Data System (ADS)
Nehme, A.; Talavera, P.
2002-03-01
We evaluate the matrix elements for the processes π⁰K⁰ → π⁰K⁰ and π⁻K⁺ → π⁰K⁰ in the presence of isospin-breaking terms at leading and next-to-leading order. As a direct application, the relevant combination of the S-wave scattering lengths involved in the pion-kaon atom lifetime is determined. We discuss the sensitivity of the results with respect to the input parameters.
NASA Astrophysics Data System (ADS)
Kim, Changhwan; Park, Miran; Lee, Hoyeon; Cho, Seungryong
2016-03-01
Our earlier work has demonstrated that the data consistency condition can be used as a criterion for scatter kernel optimization in deconvolution methods in a full-fan mode cone-beam CT [1]. However, this scheme cannot be directly applied to CBCT system with an offset detector (half-fan mode) because of transverse data truncation in projections. In this study, we proposed a modified scheme of the scatter kernel optimization method that can be used in a half-fan mode cone-beam CT, and have successfully shown its feasibility. Using the first-reconstructed volume image from half-fan projection data, we acquired full-fan projection data by forward projection synthesis. The synthesized full-fan projections were partly used to fill the truncated regions in the half-fan data. By doing so, we were able to utilize the existing data consistency-driven scatter kernel optimization method. The proposed method was validated by a simulation study using the XCAT numerical phantom and also by an experimental study using the ACS head phantom.
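The truncation-filling step, merging the measured half-fan detector columns with forward-projected synthetic full-fan data, reduces to a masked merge per projection; a minimal 1-D sketch (array names and the hard mask boundary are our illustrative assumptions; a real implementation would likely feather the seam):

```python
import numpy as np

def fill_truncated(half_fan, synthesized, valid):
    """Keep measured half-fan values where the detector saw the object
    (valid=True) and fill the transversely truncated columns with
    forward-projected values from the first-pass reconstruction."""
    return np.where(valid, half_fan, synthesized)
```

Once each projection is completed this way, the existing full-fan, data-consistency-driven scatter kernel optimization can be applied unchanged, which is the core of the proposed scheme.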
Renger, Bernhard; Brieskorn, Carina; Toth, Vivien; Mentrup, Detlef; Jockel, Sascha; Lohöfer, Fabian; Schwarz, Martin; Rummeny, Ernst J; Noël, Peter B
2016-06-01
Bedside chest X-rays (CXR) for catheter position control may add up to a considerable radiation dose for patients in the intensive care unit (ICU). In this study, image quality and dose reduction potentials of a novel X-ray scatter correction software (SkyFlow, Philips Healthcare, Hamburg, Germany) were evaluated. CXRs of a 'LUNGMAN' (Kyoto Kagaku Co., LTD, Kyoto, Japan) thoracic phantom with a portacath system, a central venous line and a dialysis catheter were performed in an experimental set-up with multiple tube voltage and tube current settings, without and with an antiscatter grid. Images with diagnostic exposure index (EI) 250-500 were evaluated for the difference in applied mAs with and without antiscatter grid. Three radiologists subjectively assessed the diagnostic image quality of grid and non-grid images. Compared with a non-grid image, use of an antiscatter grid required roughly twice the mAs to reach a diagnostic EI. SkyFlow significantly improved the image quality of images acquired without grid. CXR with grid provided better image contrast than grid-less imaging with scatter correction. PMID:26977074
ERIC Educational Resources Information Center
Young, Andrew T.
1982-01-01
The correct usage of such terminology as "Rayleigh scattering," "Rayleigh lines," "Raman lines," and "Tyndall scattering" is resolved during an historical excursion through the physics of light scattering by gas molecules. (Author/JN)
Ouyang, Luo; Lee, Huichen Pam; Wang, Jing
2015-01-01
Purpose To evaluate a moving blocker-based approach in estimating and correcting megavoltage (MV) and kilovoltage (kV) scatter contamination in kV cone-beam computed tomography (CBCT) acquired during volumetric modulated arc therapy (VMAT). Methods and materials During the concurrent CBCT/VMAT acquisition, a physical attenuator (i.e., "blocker") consisting of equally spaced lead strips was mounted and moved constantly between the CBCT source and patient. Both kV and MV scatter signals were estimated from the blocked region of the imaging panel, and interpolated into the unblocked region. A scatter corrected CBCT was then reconstructed from the unblocked projections after scatter subtraction using an iterative image reconstruction algorithm based on constraint optimization. Experimental studies were performed on a Catphan® phantom and an anthropomorphic pelvis phantom to demonstrate the feasibility of using a moving blocker for kV-MV scatter correction. Results Scatter induced cupping artifacts were substantially reduced in the moving blocker corrected CBCT images. Quantitatively, the root mean square error of Hounsfield units (HU) in seven density inserts of the Catphan phantom was reduced from 395 to 40. Conclusions The proposed moving blocker strategy greatly improves the image quality of CBCT acquired with concurrent VMAT by reducing the kV-MV scatter induced HU inaccuracy and cupping artifacts. PMID:26026484
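The blocked-region scatter estimation and interpolation step can be illustrated with a minimal one-row sketch (function and variable names are hypothetical; the paper's implementation interpolates 2D scatter signals and then reconstructs iteratively):

```python
def blocker_scatter_correct(projection, blocked_idx):
    """Estimate and subtract scatter using samples behind blocker strips.

    projection  : 1D list of detector readings (one row).
    blocked_idx : indices shadowed by the lead strips; readings there are
                  assumed to be (almost) pure scatter, per the abstract.
    Scatter at unblocked pixels is linearly interpolated from the blocked
    samples, then subtracted to leave an estimate of the primary signal.
    """
    def interp(x, xs, ys):
        if x <= xs[0]:
            return ys[0]
        if x >= xs[-1]:
            return ys[-1]
        for a, b, ya, yb in zip(xs, xs[1:], ys, ys[1:]):
            if a <= x <= b:
                return ya + (yb - ya) * (x - a) / (b - a)

    xs = sorted(blocked_idx)
    ys = [projection[i] for i in xs]
    scatter = [interp(i, xs, ys) for i in range(len(projection))]
    primary = [max(p - s, 0.0) for p, s in zip(projection, scatter)]
    return primary, scatter
```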
NASA Astrophysics Data System (ADS)
Zhang, Hao; Li, Lihong; Zhu, Hongbin; Lin, Qin; Harrington, Donald; Liang, Zhengrong
2012-03-01
Orally administered tagging agents are usually used in CT colonography (CTC) to differentiate residual bowel content from native colonic structure. However, the high-density contrast agents tend to introduce the scatter effect on neighboring soft tissues and elevate their observed CT attenuation values toward that of the tagged materials (TMs), which may result in an excessive electronic colon cleansing (ECC) where pseudo-enhanced soft tissues are incorrectly identified as TMs. To address this issue, we integrated a scale-based scatter correction as a preprocessing procedure into our previous ECC pipeline based on the maximum a posteriori expectation-maximization (MAP-EM) partial volume segmentation. The newly proposed ECC scheme takes into account both scatter effect and partial volume effect that commonly appear in CTC images. We evaluated the new method with 10 patient CTC studies and found improved performance. Our results suggest that the proposed strategy is effective with potentially significant benefits for both clinical CTC examinations and automatic computer-aided detection (CAD) of colon polyps.
NASA Astrophysics Data System (ADS)
Kivel, N.; Vanderhaeghen, M.
2013-04-01
We calculate the two-photon exchange (TPE) corrections in the region where the kinematical variables describing elastic ep scattering are moderately large momentum scales relative to the soft hadronic scale. For such kinematics we use the QCD factorization approach formulated in the framework of soft-collinear effective theory (SCET). This technique allows us to develop a description for the soft-spectator scattering contribution, which is found to be important in the region of moderately large scales. Together with the hard-spectator contribution, we present the complete factorization formulas for the TPE amplitudes at leading power and leading logarithmic accuracy. The momentum region where both photons are hard is described by only one new nonperturbative SCET form factor. It turns out that the same form factor also arises for wide-angle Compton scattering, which is likewise described in the framework of the SCET approach. This allows us to estimate the soft-spectator contribution associated with the hard photons in a model-independent way. The main unknown in our description of the TPE contribution is related to the configuration where one photon is soft. The nonperturbative dynamics in this case is described by two unknown SCET amplitudes. We use a simple model to estimate their contribution. The formalism is then applied to a phenomenological analysis of existing data for the reduced cross section as well as for the transverse and longitudinal polarization observables.
Chun, Se Young
2016-03-01
PET and SPECT are important tools for providing valuable molecular information about patients to clinicians. Advances in nuclear medicine hardware technologies and statistical image reconstruction algorithms enabled significantly improved image quality. Sequentially or simultaneously acquired anatomical images such as CT and MRI from hybrid scanners are also important ingredients for improving the image quality of PET or SPECT further. High-quality anatomical information has been used and investigated for attenuation and scatter corrections, motion compensation, and noise reduction via post-reconstruction filtering and regularization in inverse problems. In this article, we will review works using anatomical information for molecular image reconstruction algorithms for better image quality by describing mathematical models, discussing sources of anatomical information for different cases, and showing some examples. PMID:26941855
The use of symmetry to correct Larmor phase aberrations in spin echo scattering angle measurement
NASA Astrophysics Data System (ADS)
Pynn, Roger; Lee, W. T.; Stonaha, P.; Shah, V. R.; Washington, A. L.; Kirby, B. J.; Majkrzak, C. F.; Maranville, B. B.
2008-06-01
Spin echo scattering angle measurement (SESAME) is a sensitive interference technique for measuring neutron diffraction. The method uses waveplates or birefringent prisms to produce a phase separation (the Larmor phase) between the "up" and "down" spin components of a neutron wavefunction that is initially prepared in a state that is a linear combination of in-phase up and down components. For neutrons, uniformly birefringent optical elements can be constructed from closed solenoids with appropriately shaped cross sections. Such elements are inconvenient in practice, however, both because of the precision they demand in the control of magnetic fields outside the elements and because of the amount of material required in the neutron beam. In this paper, we explore a different option in which triangular-cross-section solenoids used to create magnetic fields for SESAME have gaps in one face, allowing the lines of magnetic flux to "leak out" of the solenoid. Although the resulting field inhomogeneity produces aberrations in the Larmor phase, the symmetry of the solenoid gaps causes the aberrations produced by neighboring pairs of triangular solenoids to cancel to a significant extent. The overall symmetry of the SESAME apparatus leads to further cancellations of aberrations, providing an architecture that is easy to construct and robust in performance.
NASA Astrophysics Data System (ADS)
Wuhrer, R.; Moran, K.
2014-03-01
Quantitative X-ray mapping with silicon drift detectors and multi-EDS detector systems has become an invaluable analysis technique and one of the most useful methods of X-ray microanalysis today. The time to perform an X-ray map has reduced considerably, with the ability to map minor and trace elements very accurately due to the larger detector area and higher count rate detectors. Live X-ray imaging can now be performed, with a significant amount of data collected in a matter of minutes. A great deal of information can be obtained from X-ray maps. This includes: elemental relationship or scatter diagram creation, elemental ratio mapping, chemical phase mapping (CPM) and quantitative X-ray maps. In obtaining quantitative X-ray maps, we are able to easily generate atomic number (Z), absorption (A), fluorescence (F), theoretical backscatter coefficient (η), and quantitative total maps from each pixel in the image. This allows us to generate an image corresponding to each factor (for each element present). These images allow the user to predict and verify where problems are likely to arise in the images, and are especially helpful for examining possible interface artefacts. The post-processing techniques to improve the quantitation of X-ray map data and the development of post-processing techniques for improved characterisation are covered in this paper.
NASA Astrophysics Data System (ADS)
Lee, Young Sub; Kim, Jin Su; Kim, Kyeong Min; Moo Lim, Sang; Kim, Hee-Joung
2015-01-01
Image correction for scattered photons is important for the quantification of gamma-camera imaging using I-131. Many previous studies have addressed this issue, but none have compared the scattered photon fractions of I-131 with varying energy windows to determine the optimal main and sub-energy windows for the implementation of triple energy window (TEW) correction in I-131 imaging. We assessed the scattered photon fractions and determined the optimal main and sub-energy windows for TEW in I-131 on a Siemens SYMBIA T2 SPECT/CT, using a Monte Carlo method (GATE simulation). To validate the GATE simulation code, we compared the spatial resolutions obtained experimentally and from GATE simulation for I-123 and Tc-99m. A high-energy general purpose (HE) collimator was used to assess the scattered photon fractions measured with the I-131 radioisotope placed at eight different field-of-view locations in a water phantom (diameter 16 cm, length 32 cm), and at the center in air. To implement the TEW method, two different main-energy window widths (15 and 20%) and two different sub-energy window widths (3 and 5 keV) were used. The experimental measurement and simulation results exhibited a similar pattern, with < 15% difference in spatial resolution with increasing distance. The I-131 scatter fraction with a 15% main-energy window and 5 keV sub-energy windows was similar to the "gold standard" scatter fraction. Main and sub-energy window selection for the TEW correction in I-131 is important to avoid over- or under-correction of the scatter fraction. A 15% main-energy window with 5 keV sub-energy windows was found to be optimal for implementation of the TEW method in I-131. This result provides the optimal energy windows for I-131 scintigraphy data and will aid the quantification of I-131 imaging.
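The standard TEW estimate referenced above approximates the scatter inside the main window by a trapezoid whose heights are the count densities in the two narrow flanking sub-windows. A minimal sketch (function name hypothetical, counts illustrative):

```python
def tew_scatter_correct(c_main, c_lower, c_upper, w_main, w_sub):
    """Triple-energy-window (TEW) scatter correction.

    Scatter in the main window is approximated by the trapezoid rule:

        S = (c_lower / w_sub + c_upper / w_sub) * w_main / 2

    where c_lower / c_upper are counts in the narrow sub-windows of width
    w_sub flanking the photopeak window of width w_main (all in keV).
    Returns (primary_estimate, scatter_estimate), primary clipped at 0.
    """
    scatter = (c_lower / w_sub + c_upper / w_sub) * w_main / 2.0
    return max(c_main - scatter, 0.0), scatter
```

For the optimal I-131 settings found above, the main window would be 15% around the 364 keV photopeak (about 54.6 keV wide) with 5 keV sub-windows.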
NASA Astrophysics Data System (ADS)
Sun, Junfeng; Chang, Qin; Hu, Xiaohui; Yang, Yueling
2015-04-01
In this paper, we investigate the contributions of hard spectator scattering and annihilation in B → PV decays within the QCD factorization framework. With available experimental data on B → πK*, ρK, πρ and Kϕ decays, comprehensive χ² analyses of the parameters X_{A,H}^{i,f} (ρ_{A,H}^{i,f}, φ_{A,H}^{i,f}) are performed, where X_A^f (X_A^i) and X_H are used to parameterize the endpoint divergences of the (non)factorizable annihilation and hard spectator scattering amplitudes, respectively. Based on the χ² analyses, it is observed that (1) the topology-dependent parameterization scheme is feasible for B → PV decays; (2) at the current accuracy of experimental measurements and theoretical evaluations, X_H = X_A^i is allowed by B → PV decays, but X_H ≠ X_A^f at 68% C.L.; (3) with the simplification X_H = X_A^i, the parameters X_A^f and X_A^i should be treated individually. These findings are very similar to those obtained from B → PP decays. Numerically, for B → PV decays, we obtain (ρ_{A,H}^i, φ_{A,H}^i [°]) = (2.87^{+0.66}_{-1.95}, −145^{+14}_{-21}) and (ρ_A^f, φ_A^f [°]) = (0.91^{+0.12}_{-0.13}, −37^{+10}_{-9}) at 68% C.L. With the best-fit values, most of the theoretical results are in good agreement with the experimental data within errors. However, significant corrections to the color-suppressed tree amplitude α_2 related to a large ρ_H result in the wrong sign for A_CP^dir(B⁻ → π⁰K*⁻) compared with the most recent BABAR data, which presents a new obstacle to solving the "ππ" and "πK" puzzles through α_2. A crosscheck with higher-precision measurements at Belle (or Belle II) and LHCb is urgently expected to confirm or refute this possible mismatch.
Straylight correction to Doppler rotation measurements
NASA Astrophysics Data System (ADS)
Andersen, B. N.
1985-07-01
The correction of the Pierce and LoPresto (1984) Doppler data on the plasma rotation rate for stray light increases the observed equatorial rotation velocity from 1977 to 2004 m/sec. This correction has an uncertainty of approximately 10 m/sec because the accurate form of the stray light function is not available. The correction is noted to be largest for the blue lines, owing to increased scattering, and for the weak lines, due to the limb effect.
Hawke, J; Scannell, R; Maslov, M; Migozzi, J B
2013-10-01
This work isolated the cause of the observed discrepancy between the electron temperature (T(e)) measurements before and after the JET Core LIDAR Thomson Scattering (TS) diagnostic was upgraded. In the upgrade process, stray light filters positioned just before the detectors were removed from the system. Modelling showed that the shift imposed on the stray light filters transmission functions due to the variations in the incidence angles of the collected photons impacted plasma measurements. To correct for this identified source of error, correction factors were developed using ray tracing models for the calibration and operational states of the diagnostic. The application of these correction factors resulted in an increase in the observed T(e), resulting in the partial if not complete removal of the observed discrepancy in the measured T(e) between the JET core LIDAR TS diagnostic, High Resolution Thomson Scattering, and the Electron Cyclotron Emission diagnostics. PMID:24188274
NASA Astrophysics Data System (ADS)
Kim, J. H.; Kim, S. W.; Yoon, S. C.; Park, R.; Ogren, J. A.
2014-12-01
Filter-based instruments, such as the aethalometer, are widely used to measure equivalent black carbon (EBC) mass concentration and the aerosol absorption coefficient (AAC). However, many previous studies have pointed out that the AAC and its aerosol absorption Ångström exponent (AAE) are strongly affected by the multiple-scattering correction factor (C) when the AAC is retrieved from aethalometer EBC mass concentration measurements (Weingartner et al., 2003; Arnott et al., 2005; Schmid et al., 2006; Coen et al., 2010). We determined the C value using the method given in Weingartner et al. (2003) by comparing a 7-wavelength aethalometer (AE-31, Magee Sci.) with a 3-wavelength Photo-Acoustic Soot Spectrometer (PASS-3, DMT) at Gosan Climate Observatory, Korea (GCO) during the Cheju ABC Plume-Asian Monsoon Experiment (CAPMEX) campaign (August and September, 2008). In this study, C was estimated to be 4.04 ± 1.68 at 532 nm, and the AAC retrieved with this value was smaller by roughly a factor of two (approximately 100%) than that retrieved with the soot-case value from Weingartner et al. (2003). We compared the AAC determined from aethalometer measurements with that from collocated Continuous Light Absorption Photometer (CLAP) measurements from January 2012 to December 2013 at GCO and found good agreement in both AAC and AAE. This result suggests that determination of a site-specific C is crucial when calculating the AAC from aethalometer measurements.
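A Weingartner-style conversion from attenuation increments to an absorption coefficient, using the site-specific C, can be sketched as follows (a simplified illustration that ignores filter-loading corrections; names and numbers are hypothetical):

```python
def aethalometer_absorption(atn_t1, atn_t2, dt_s, spot_area_m2, flow_m3_s, C):
    """Convert an aethalometer attenuation increment into an aerosol
    absorption coefficient (simplified Weingartner-et-al.-style sketch):

        sigma_ATN = (A / Q) * dATN / dt   # apparent attenuation coefficient
        sigma_abs = sigma_ATN / C         # multiple-scattering correction

    C is the site-specific multiple-scattering correction factor, e.g.
    the 4.04 at 532 nm reported above. Filter-loading effects ignored.
    """
    sigma_atn = (spot_area_m2 / flow_m3_s) * (atn_t2 - atn_t1) / dt_s
    return sigma_atn / C
```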
NASA Astrophysics Data System (ADS)
Dickerson, Edward C.
Quality assurance in radiation oncology treatment planning requires independent verification of dose to be delivered to a patient through "second check" calculations for simple plans as well as planar dose fluence measurements for more complex treatments, such as intensity modulated radiation treatments (IMRT). Discrepancies between treatment planning system (TPS) and second check calculations created a need for treatment plan verification using a two dimensional diode array for Enhanced Dynamic Wedge (EDW) fields. While these measurements met clinical standards for treatment, they revealed room for improvement in the EDW model. The purpose of this study is to analyze the head scatter and jaw transmission effects of the moving jaw in EDW fields by measuring dose profiles with a two dimensional diode array in order to minimize differences between the manufacturer provided fluence table (Golden Segmented Treatment Table) and actual machine output. The jaw transmission effect reduces the dose gradient in the wedge direction due to transmission photons adding dose to the heel region of the field. The head scatter effect also reduces the gradient in the dose profile due to decreased accelerator output at increasingly smaller field sizes caused by the moving jaw. The field size continuously decreases with jaw motion, and thus the toe region of the wedge receives less dose than anticipated due to less head scatter contribution for small field sizes. The Golden Segmented Treatment Table (GSTT) does not take these factors into account since they are specific to each individual machine. Thus, these factors need to be accounted for in the TPS to accurately model the gradient of the wedge. The TPS used in this clinic uses one correction factor (transmission factor) to account for both effects since both factors reduce the dose gradient of the wedge. Dose profile measurements were made for 5x5 cm2, 10x10 cm2, and 20x20 cm2 field sizes with open fields and 10°, 15°, 20°, 25
NASA Astrophysics Data System (ADS)
Augustynek, T.; Battaglia, A.; Kollias, P.
2011-12-01
The primary goal of this work is to address several challenges related to spaceborne Doppler radars, such as the future EarthCARE mission, together with recent developments in data simulation, correction and processing. The 94 GHz Cloud Profiling Radar onboard the ESA EarthCARE mission will be the first radar in space with Doppler capability, allowing mean Doppler velocity measurements. This will enable more accurate characterization of clouds and precipitation (classification, retrieval accuracy, dynamics). It is the only instrument of this kind planned for the immediate post-CloudSat era and represents an irreplaceable asset in regard to climate change studies. Meeting the scientific accuracy requirement on vertical motions of 1 m/s, at a horizontal resolution of 1 km, is very challenging. The five key factors that control the performance of a spaceborne radar will be discussed: contribution of multiple scattering (MS), attenuation, velocity folding, non-uniform beam filling (NUBF) and effects of along-track integration of the signal. The research utilizes an end-to-end simulator for spaceborne Doppler radars. The simulator uses a Monte Carlo module which accounts for MS and produces ideal Doppler spectra as measured by a spaceborne radar flying over 3D highly resolved scenes produced via WRF model simulations. The estimates of the Doppler moments (reflectivity, mean Doppler velocity and spectrum width) are obtained via the pulse-pair technique. An objective method for identification of MS-contaminated range bins based purely on reflectivity-derived variables is described; the most important of these variables, the cumulative integrated reflectivity, yields a threshold value of 41 dBZ_int for identifying radar range gates contaminated by MS. This is further demonstrated in a CloudSat case study, where the threshold value for CloudSat is found to be 41.9 dBZ_int. The unfolding procedure of Doppler velocities will be presented. Then we will describe the
Häggström, I; Karlsson, M; Larsson, A; Schmidtlein, C
2014-06-15
Purpose: To investigate the effects of corrections for random and scattered coincidences on kinetic parameters in brain tumors, by using ten Monte Carlo (MC) simulated dynamic FLT-PET brain scans. Methods: The GATE MC software was used to simulate ten repetitions of a 1 hour dynamic FLT-PET scan of a voxelized head phantom. The phantom comprised six normal head tissues, plus inserted regions for blood and tumor tissue. Different time-activity-curves (TACs) for all eight tissue types were used in the simulation and were generated in Matlab using a 2-tissue model with preset parameter values (K1,k2,k3,k4,Va,Ki). The PET data was reconstructed into 28 frames by both ordered-subset expectation maximization (OSEM) and 3D filtered back-projection (3DFBP). Five image sets were reconstructed, all with normalization and different additional corrections C (A=attenuation, R=random, S=scatter): Trues (AC), trues+randoms (ARC), trues+scatters (ASC), total counts (ARSC) and total counts (AC). Corrections for randoms and scatters were based on real random and scatter sinograms that were back-projected, blurred and then forward projected and scaled to match the real counts. Weighted non-linear least-squares fitting of TACs from the blood and tumor regions was used to obtain parameter estimates. Results: The bias was not significantly different for trues (AC), trues+randoms (ARC), trues+scatters (ASC) and total counts (ARSC) for either 3DFBP or OSEM (p<0.05). Total counts with only AC stood out however, with an up to 160% larger bias. In general, there was no difference in bias found between 3DFBP and OSEM, except in parameters Va and Ki. Conclusion: According to our results, the methodology of correcting the PET data for randoms and scatters performed well for the dynamic images, where frames have much lower counts compared to static images. Generally, no bias was introduced by the corrections, and their importance was emphasized since omitting them increased bias extensively.
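The 2-tissue compartment model used above to generate TACs can be sketched with a simple forward-Euler integrator (an illustrative implementation, not the authors' Matlab code; a real fit would use finer time steps and weighted non-linear least squares):

```python
def two_tissue_tac(cp, dt, K1, k2, k3, k4, Va):
    """Generate a tissue time-activity curve from a plasma input curve cp
    (sampled every dt) with the standard 2-tissue compartment model:

        dC1/dt = K1*Cp - (k2 + k3)*C1 + k4*C2
        dC2/dt = k3*C1 - k4*C2
        C_PET  = (1 - Va)*(C1 + C2) + Va*Cp

    The derived net influx rate is Ki = K1*k3/(k2 + k3).
    """
    c1 = c2 = 0.0
    tac = []
    for cp_t in cp:
        tac.append((1.0 - Va) * (c1 + c2) + Va * cp_t)
        dc1 = K1 * cp_t - (k2 + k3) * c1 + k4 * c2
        dc2 = k3 * c1 - k4 * c2
        c1 += dc1 * dt
        c2 += dc2 * dt
    return tac
```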
NASA Astrophysics Data System (ADS)
Durand, Loyal; Johnson, James M.; Lopez, Jorge L.
1992-05-01
We reexamine the unitarity constraints on the high-energy scattering of longitudinally polarized W's and Z's and Higgs bosons in the standard model including one-loop corrections. Using an Argand diagram analysis, we find that the j=0 scattering amplitudes are approximately unitary and weakly interacting at order λ² for Higgs-boson couplings λ(s, M_H²) ≲ 2, but that corrections of order λ³ or higher must be included to restore perturbative unitarity for larger values of λ. We show also that two-loop [O(λ³)] corrections cannot extend the range of validity of perturbation theory beyond λ ≈ 2.2. An analysis of inelastic 2 → 4 scattering in the W_L^±, Z_L, H system gives an independent but weaker limit λ(s, M_H²) ≲ 5. The limit λ(s, M_H²) < 2 translates to a physical-Higgs-boson mass M_H ≲ 400 GeV if the bound is to hold up to energies of a few TeV, or M_H ≲ 160 GeV in perturbatively unified theories with a mass scale of order 10¹⁵ GeV. For masses much larger than these bounds, low-order perturbation theory fails and the Higgs sector of the standard model becomes effectively strongly interacting.
NASA Astrophysics Data System (ADS)
Rosenberg, P. D.; Dean, A. R.; Williams, P. I.; Dorsey, J. R.; Minikin, A.; Pickering, M. A.; Petzold, A.
2012-05-01
Optical particle counters (OPCs) are used regularly for atmospheric research, measuring particle scattering cross sections to generate particle size distribution histograms. This manuscript presents two methods for calibrating OPCs with case studies based on a Passive Cavity Aerosol Spectrometer Probe (PCASP) and a Cloud Droplet Probe (CDP), both of which are operated on the Facility for Airborne Atmospheric Measurements BAe-146 research aircraft. A probability density function based method is provided for modification of the OPC bin boundaries when the scattering properties of measured particles are different to those of the calibration particles due to differences in refractive index or shape. This method provides mean diameters and widths for OPC bins based upon Mie-Lorenz theory or any other particle scattering theory, without the need for smoothing, despite the highly nonlinear and non-monotonic relationship between particle size and scattering cross section. By calibrating an OPC in terms of its scattering cross section the optical properties correction can be applied with minimal information loss, and performing correction in this manner provides traceable and transparent uncertainty propagation throughout the whole process. Analysis of multiple calibrations has shown that for the PCASP the bin centres differ by up to 30% from the manufacturer's nominal values and can change by up to approximately 20% when routine maintenance is performed. The CDP has been found to be less sensitive than the manufacturer's specification with differences in sizing of between 1.6 ± 0.8 μm and 4.7 ± 1.8 μm for one flight. Over the course of the Fennec project in the Sahara the variability of calibration was less than the calibration uncertainty in 6 out of 7 calibrations performed. As would be expected from Mie-Lorenz theory, the impact of the refractive index corrections has been found to be largest for absorbing materials and the impact on Saharan dust measurements made
Shibutani, Takayuki; Onoguchi, Masahisa; Funayama, Risa; Nakajima, Kenichi; Matsuo, Shinro; Yoneyama, Hiroto; Konishi, Takahiro; Kinuya, Seigo
2015-11-01
The aim of this study was to determine the optimal reconstruction parameters of the ordered subset conjugate gradient minimizer (OSCGM) with no correction (NC), attenuation correction (AC), and AC plus scatter correction (ACSC), using the IQ-SPECT (single photon emission computed tomography) system in thallium-201 myocardial perfusion SPECT. A myocardial phantom was acquired in two configurations, with and without a defect. Myocardial images were evaluated with a 5-point visual score and with quantitative measures of contrast, uptake, and uniformity as functions of the OSCGM subset and update (subsets × iterations) and the full width at half maximum (FWHM) of the Gaussian filter, for the three corrections. Optimal reconstruction parameters of OSCGM were then determined for each correction. The numbers of subsets that produced suitable images were 3 or 5 for NC and AC, and 2 or 3 for ACSC. The updates that produced suitable images were 30 or 40 for NC, 40 or 60 for AC, and 30 for ACSC. Furthermore, the FWHMs of the Gaussian filters were 9.6 mm or 12 mm for NC and ACSC, and 7.2 mm or 9.6 mm for AC. In conclusion, the following optimal reconstruction parameters of OSCGM were determined; NC: subset 5, iteration 8 and FWHM 9.6 mm; AC: subset 5, iteration 8 and FWHM 7.2 mm; ACSC: subset 3, iteration 10 and FWHM 9.6 mm. PMID:26596202
Frolov, Alexei M.; Wardlaw, David M.
2014-09-14
Mass-dependent and field shift components of the isotopic shift are determined to high accuracy for the ground 1{sup 1}S states of the light two-electron Li{sup +}, Be{sup 2+}, B{sup 3+}, and C{sup 4+} ions. To determine the field components of these isotopic shifts we apply the Racah-Rosenthal-Breit formula. We also determine the lowest-order QED corrections to the isotopic shifts for each of these two-electron ions.
In a study of an inductively coupled plasma optical emission spectrometer, data from an early commercially available instrument are compared with data from the same instrument after modifications to correct observed inadequacies were made. Results show negligible changes in power...
Graudenz, D.
1994-04-01
Jet cross sections in deeply inelastic scattering in the case of transverse photon exchange for the production of (1+1) and (2+1) jets are calculated in next-to-leading-order QCD (here the "+1" stands for the target remnant jet, which is included in the jet definition). The jet definition scheme is based on a modified JADE cluster algorithm. The calculation of the (2+1) jet cross section is described in detail. Results for the virtual corrections as well as for the real initial- and final-state corrections are given explicitly. Numerical results are stated for jet cross sections as well as for the ratio σ{sub (2+1) jet}/σ{sub tot} that can be expected at E665 and DESY HERA. Furthermore the scale ambiguity of the calculated jet cross sections is studied and different parton density parametrizations are compared.
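A JADE-type cluster algorithm of the kind referenced above can be sketched as follows. This is an illustrative exclusive-clustering pass using the invariant-mass distance; the function name and merge scheme are assumptions, not the paper's modified definition:

```python
def jade_cluster(particles, y_cut, w_scale):
    """One pass of a JADE-type cluster algorithm (sketch).

    particles : list of (E, px, py, pz) four-momenta.
    Repeatedly merges the pair with the smallest invariant-mass-based
    distance  y_ij = M_ij^2 / W^2  until all pairs exceed y_cut; the
    surviving pseudo-particles are the jets.
    """
    def y(a, b):
        e = a[0] + b[0]
        p2 = sum((a[i] + b[i]) ** 2 for i in (1, 2, 3))
        return (e * e - p2) / (w_scale ** 2)

    jets = [tuple(p) for p in particles]
    while len(jets) > 1:
        pairs = [(y(jets[i], jets[j]), i, j)
                 for i in range(len(jets)) for j in range(i + 1, len(jets))]
        ymin, i, j = min(pairs)
        if ymin >= y_cut:
            break  # all remaining pairs are resolved as separate jets
        merged = tuple(jets[i][k] + jets[j][k] for k in range(4))
        jets = [p for k, p in enumerate(jets) if k not in (i, j)] + [merged]
    return jets
```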
NASA Astrophysics Data System (ADS)
Minet, Olaf; Scheibe, Patrick; Beuthan, Jürgen; Zabarylo, Urszula
2010-02-01
State-of-the-art image processing methods offer new possibilities for diagnosing diseases using scattered light. The optical diagnosis of rheumatism is taken as an example to show that the diagnostic sensitivity can be improved using overlapped pseudo-coloured images of different wavelengths, provided that multispectral images are recorded to compensate for any motion related artefacts which occur during examination.
Li, Y.; Krieger, J.B.; Norman, M.R.; Iafrate, G.J.
1991-11-15
The optimized-effective-potential (OEP) method and a method developed recently by Krieger, Li, and Iafrate (KLI) are applied to the band-structure calculations of noble-gas and alkali halide solids employing the self-interaction-corrected (SIC) local-spin-density (LSD) approximation for the exchange-correlation energy functional. The resulting band gaps from both calculations are found to be in fair agreement with the experimental values. The discrepancies are typically within a few percent with results that are nearly the same as those of previously published orbital-dependent multipotential SIC calculations, whereas the LSD results underestimate the band gaps by as much as 40%. As in the LSD---and it is believed to be the case even for the exact Kohn-Sham potential---both the OEP and KLI predict valence-band widths which are narrower than those of experiment. In all cases, the KLI method yields essentially the same results as the OEP.
Semchishen, A V; Seminogov, V N; Semchishen, V A
2012-04-30
Forward scattering of light passing through large-scale irregularities of the interface between two media having different refractive indices is considered. An analytical expression for the ratio of intensities of directional and diffusion components of scattered light in the far-field zone is derived. It is theoretically shown that the critical depth of possible interface relief irregularities, starting from which the intensity of the diffuse component in the passing light flow becomes comparable with the directional light component, responsible for the image formation on the eye retina, is 3-4 μm, with the increase in the refractive index in the postoperational zone taken into account. These profile depth values agree with the experimentally measured ones and may affect the contrast sensitivity of vision.
NASA Astrophysics Data System (ADS)
Semchishen, A. V.; Seminogov, V. N.; Semchishen, V. A.
2012-04-01
Forward scattering of light passing through large-scale irregularities of the interface between two media having different refractive indices is considered. An analytical expression for the ratio of intensities of directional and diffusion components of scattered light in the far-field zone is derived. It is theoretically shown that the critical depth of possible interface relief irregularities, starting from which the intensity of the diffuse component in the passing light flow becomes comparable with the directional light component, responsible for the image formation on the eye retina, is 3-4 μm, with the increase in the refractive index in the postoperational zone taken into account. These profile depth values agree with the experimentally measured ones and may affect the contrast sensitivity of vision.
NASA Astrophysics Data System (ADS)
Holstensson, M.; Erlandsson, K.; Poludniowski, G.; Ben-Haim, S.; Hutton, B. F.
2015-04-01
An advantage of semiconductor-based dedicated cardiac single photon emission computed tomography (SPECT) cameras when compared to conventional Anger cameras is superior energy resolution. This provides the potential for improved separation of the photopeaks in dual radionuclide imaging, such as the combined use of 99mTc and 123I. There is, however, the added complexity of tailing effects in the detectors that must be accounted for. In this paper we present a model-based correction algorithm which extracts the useful primary counts of 99mTc and 123I from projection data. Equations describing the in-patient scatter and tailing effects in the detectors are iteratively solved for both radionuclides simultaneously using a maximum a posteriori probability algorithm with one-step-late evaluation. Energy window-dependent parameters for the equations describing in-patient scatter are estimated using Monte Carlo simulations. Parameters for the equations describing tailing effects are estimated using virtually scatter-free experimental measurements on a dedicated cardiac SPECT camera with CdZnTe-detectors. When applied to a phantom study with both 99mTc and 123I, results show that the estimated spatial distribution of events from 99mTc in the 99mTc photopeak energy window is very similar to that measured in a single 99mTc phantom study. The extracted images of primary events display increased cold lesion contrasts for both 99mTc and 123I.
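The crosstalk-correction idea above can be reduced to a toy example. The sketch below uses hypothetical crosstalk fractions and drops the spatial scatter kernels and the MAP one-step-late iteration of the actual method: the two photopeak-window counts are modeled as a linear mix of the two primaries, and the 2x2 system is solved per pixel.

```python
import numpy as np

# Much-simplified sketch of dual-radionuclide crosstalk correction:
#   c_Tc = p_Tc + a_IT * p_I    (I-123 tailing into the Tc-99m window)
#   c_I  = b_TI * p_Tc + p_I    (Tc-99m scatter/tail into the I-123 window)
# The crosstalk fractions a_IT, b_TI are hypothetical calibration values;
# the paper solves spatially coupled equations iteratively instead.
a_IT, b_TI = 0.25, 0.10

def unmix(c_tc, c_i):
    """Solve the 2x2 mixing system for the primary counts in one pixel."""
    A = np.array([[1.0, a_IT], [b_TI, 1.0]])
    p = np.linalg.solve(A, np.array([c_tc, c_i], dtype=float))
    return np.clip(p, 0.0, None)  # counts cannot be negative

p_tc, p_i = unmix(120.0, 40.0)
```

With these illustrative fractions, roughly a tenth of the counts in each window are attributed to the other radionuclide and removed.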
Utsav, K C; Varghese, Philip L
2013-07-10
A multiple-pass cell is aligned to focus light at two regions at the center of the cell. The two "points" are separated by 2.0 mm. Each probe region is 200 μm×300 μm. The cell is used to amplify spontaneous Raman scattering from a CH4-air laminar flame. The signal gain is 20, and the improvement in signal-to-noise ratio varies according to the number of laser pulses used for signal acquisition. The temperature is inferred by curve fitting high-resolution spectra of the Stokes signal from N2. The model accounts for details, such as the angular dependence of Raman scattering, the presence of a rare isotope of N2 in air, anharmonic oscillator terms in the vibrational polarizability matrix elements, and the dependence of Herman-Wallis factors on the vibrational level. The apparatus function is modeled using a new line shape function that is the convolution of a trapezoid function and a Lorentzian. The uncertainty in the value of temperature arising from noise, the uncertainty in the model input parameters, and various approximations in the theory have been characterized. We estimate that the uncertainty in our measurement of flame temperature in the least noisy data is ±9 K. PMID:23852217
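The apparatus-function model named in the abstract, a trapezoid convolved with a Lorentzian, is easy to sketch numerically. The widths below are illustrative placeholders, not the paper's fitted values.

```python
import numpy as np

# Apparatus line shape as the convolution of a trapezoid and a Lorentzian,
# evaluated on a wavenumber-offset grid. All widths are illustrative.
def trapezoid(x, top, base):
    """Symmetric trapezoid: 1 for |x| < top/2, ramping to 0 at |x| = base/2."""
    return np.clip((base / 2 - np.abs(x)) / ((base - top) / 2), 0.0, 1.0)

def lorentzian(x, gamma):
    return (gamma / np.pi) / (x**2 + gamma**2)

x = np.linspace(-5.0, 5.0, 2001)   # wavenumber offset grid (cm^-1)
dx = x[1] - x[0]
shape = np.convolve(trapezoid(x, 0.6, 1.2), lorentzian(x, 0.15), mode="same") * dx
shape /= shape.sum() * dx           # normalize to unit area
```

The resulting profile keeps the flat-topped core of the trapezoid while the Lorentzian supplies the extended wings that a slit-limited spectrometer response typically shows.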
NASA Astrophysics Data System (ADS)
Gorchtein, Mikhail
2014-11-01
Two-photon-exchange (TPE) contributions to elastic electron-proton scattering in the forward regime in the leading logarithmic ~t ln|t| approximation in the momentum transfer t are considered. The imaginary part of the TPE amplitude in the forward kinematics is related to the total photoabsorption cross section. The real part of the TPE amplitude is obtained from an unsubtracted fixed-t dispersion relation. This allows a clean prediction of the real part of the TPE amplitude at forward angles with the leading term ~t ln|t|. Numerical estimates are comparable with or exceed the experimental precision in extracting the charge radius from the experimental data.
Bistatic scattering from a cone frustum
NASA Technical Reports Server (NTRS)
Ebihara, W.; Marhefka, R. J.
1986-01-01
The bistatic scattering from a perfectly conducting cone frustum is investigated using the Geometrical Theory of Diffraction (GTD). The first-order GTD edge-diffraction solution has been extended by correcting for its failure in the specular region off the curved surface and in the rim-caustic regions of the endcaps. The corrections are accomplished by the use of transition functions which are developed and introduced into the diffraction coefficients. Theoretical results are verified in the principal plane by comparison with the moment method solution and experimental measurements. The resulting solution for the scattered fields is accurate, easy to apply, and fast to compute.
Holstensson, M; Erlandsson, K; Poludniowski, G; Ben-Haim, S; Hutton, B F
2015-04-21
An advantage of semiconductor-based dedicated cardiac single photon emission computed tomography (SPECT) cameras when compared to conventional Anger cameras is superior energy resolution. This provides the potential for improved separation of the photopeaks in dual radionuclide imaging, such as the combined use of (99m)Tc and (123)I. There is, however, the added complexity of tailing effects in the detectors that must be accounted for. In this paper we present a model-based correction algorithm which extracts the useful primary counts of (99m)Tc and (123)I from projection data. Equations describing the in-patient scatter and tailing effects in the detectors are iteratively solved for both radionuclides simultaneously using a maximum a posteriori probability algorithm with one-step-late evaluation. Energy window-dependent parameters for the equations describing in-patient scatter are estimated using Monte Carlo simulations. Parameters for the equations describing tailing effects are estimated using virtually scatter-free experimental measurements on a dedicated cardiac SPECT camera with CdZnTe-detectors. When applied to a phantom study with both (99m)Tc and (123)I, results show that the estimated spatial distribution of events from (99m)Tc in the (99m)Tc photopeak energy window is very similar to that measured in a single (99m)Tc phantom study. The extracted images of primary events display increased cold lesion contrasts for both (99m)Tc and (123)I. PMID:25803643
Cheong, Kit-Leong; Wu, Ding-Tao; Zhao, Jing; Li, Shao-Ping
2015-06-26
In this study, a rapid and accurate method for quantitative analysis of natural polysaccharides and their different fractions was developed. Firstly, high performance size exclusion chromatography (HPSEC) was utilized to separate natural polysaccharides. Then the molecular masses of their fractions were determined by multi-angle laser light scattering (MALLS). Finally, quantification of polysaccharides or their fractions was performed based on their response to a refractive index detector (RID) and their universal refractive index increment (dn/dc). The accuracy of the developed method was determined for the quantification of individual and mixed polysaccharide standards, including konjac glucomannan, CM-arabinan, xyloglucan, larch arabinogalactan, oat β-glucan, dextran (410, 270, and 25 kDa), mixed xyloglucan and CM-arabinan, and mixed dextran 270 K and CM-arabinan; the average recoveries were between 90.6% and 98.3%. The limits of detection (LOD) and quantification (LOQ) ranged from 10.68 to 20.25 μg/mL and from 42.70 to 68.85 μg/mL, respectively. Compared to the conventional phenol-sulfuric acid assay and HPSEC coupled with evaporative light scattering detection (HPSEC-ELSD), the developed HPSEC-MALLS-RID method based on the universal dn/dc for the quantification of polysaccharides and their fractions is simpler, more rapid, and more accurate, with no need for individual polysaccharide standards or calibration curves. The developed method was also successfully utilized for quantitative analysis of polysaccharides and their different fractions from three medicinal plants of the Panax genus: Panax ginseng, Panax notoginseng and Panax quinquefolius. The results suggested that the HPSEC-MALLS-RID method based on the universal dn/dc could be used as a routine technique for the quantification of polysaccharides and their fractions in natural resources. PMID:25990349
NASA Astrophysics Data System (ADS)
Chimot, J.; Vlemmix, T.; Veefkind, J. P.; de Haan, J. F.; Levelt, P. F.
2015-08-01
The Ozone Monitoring Instrument (OMI) has provided daily global measurements of tropospheric NO2 for more than a decade. Numerous studies have drawn attention to the complexities related to measurements of tropospheric NO2 in the presence of aerosols. Fine particles affect the OMI spectral measurements and the length of the average light path followed by the photons. However, they are not explicitly taken into account in the current OMI tropospheric NO2 retrieval chain. Instead, the operational OMI O2-O2 cloud retrieval algorithm is applied both to cloudy scenes and to cloud-free scenes with aerosols present. This paper describes in detail the complex interplay between the spectral effects of aerosols, the OMI O2-O2 cloud retrieval algorithm and the impact on the accuracy of the tropospheric NO2 retrievals through the computed Air Mass Factor (AMF) over cloud-free scenes. Collocated OMI NO2 and MODIS Aqua aerosol products are analysed over East China, an industrialized area. In addition, aerosol effects on the tropospheric NO2 AMF and the retrieval of OMI cloud parameters are simulated. Both the observation-based and the simulation-based approach demonstrate that the retrieved cloud fraction increases linearly with increasing Aerosol Optical Thickness (AOT), but the magnitude of this increase depends on the aerosol properties and surface albedo. This increase is induced by the additional scattering effects of aerosols, which enhance the scene brightness. The decreasing effective cloud pressure with increasing AOT primarily represents the absorbing effects of aerosols. The study cases show that the actual aerosol correction based on the implemented OMI cloud model results in biases between -20 and -40% for the DOMINO tropospheric NO2 product in cases of high aerosol pollution (AOT ≥ 0.6) and elevated particles. On the contrary, when aerosols are relatively close to the surface or mixed with NO2, aerosol correction based on the cloud model results in
NASA Technical Reports Server (NTRS)
Gordon, Howard R.; Wang, Menghua
1992-01-01
The first step in the Coastal Zone Color Scanner (CZCS) atmospheric-correction algorithm is the computation of the Rayleigh-scattering (RS) contribution, L sub r, to the radiance leaving the top of the atmosphere over the ocean. In the present algorithm, L sub r is computed by assuming that the ocean surface is flat. Calculations of the radiance leaving an RS atmosphere overlying a rough Fresnel-reflecting ocean are presented to evaluate the radiance error caused by the flat-ocean assumption. Simulations are carried out to evaluate the error incurred when the CZCS-type algorithm is applied to a realistic ocean in which the surface is roughened by the wind. In situations where there is no direct sun glitter, it is concluded that the error induced by ignoring the Rayleigh-aerosol interaction is usually larger than that caused by ignoring the surface roughness. This suggests that, in refining algorithms for future sensors, more effort should be focused on dealing with the Rayleigh-aerosol interaction than on the roughness of the sea surface.
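For orientation, the single-scattering decomposition underlying a CZCS-type correction can be written as L_t = L_r + L_a + t·L_w (Rayleigh radiance, aerosol radiance, and diffusely transmitted water-leaving radiance); ignoring the Rayleigh-aerosol interaction term is exactly the approximation whose error the study compares against the flat-ocean assumption. A minimal sketch with illustrative magnitudes only:

```python
# Single-scattering decomposition of top-of-atmosphere radiance:
#   L_t = L_r + L_a + t * L_w
# L_r: Rayleigh term, L_a: aerosol term, L_w: water-leaving radiance,
# t: diffuse transmittance. All numeric values below are illustrative.
def water_leaving_radiance(L_t, L_r, L_a, t):
    """Invert the decomposition for L_w given the other terms."""
    if not 0.0 < t <= 1.0:
        raise ValueError("diffuse transmittance must be in (0, 1]")
    return (L_t - L_r - L_a) / t

L_w = water_leaving_radiance(L_t=10.0, L_r=6.5, L_a=2.0, t=0.9)
```

Because L_r is the largest subtracted term, any bias in its computation (flat-ocean assumption, neglected Rayleigh-aerosol coupling) propagates directly and amplified by 1/t into the retrieved L_w.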
Schlösser, Magnus; Rupp, Simone; Brunst, Tim; James, Timothy M
2015-05-01
The U.S. National Institute of Standards and Technology (NIST) has certified a set of Standard Reference Materials (SRMs) that can be used to accurately determine the spectral sensitivity of Raman spectrometers. These solid-state reference sources offer benefits such as exact reproduction of Raman sampling geometry, simple implementation, and long-term stability. However, a serious drawback of these SRMs is that they are certified only in the backscattering (180°) configuration. In this study, we investigated if and how SRM 2242 (applicable for 532 nm) can be employed in a 90°-scattering geometry Raman system. We found that the measurement procedure needs to be modified to comply with the certified uncertainty provided by NIST. This requires a change in the SRM illumination that is possible only if we polish the side surfaces. In addition, we need to account for the polarization configuration of the Raman system by choosing the appropriate polarization of the excitation beam. On top of that, the spatial inhomogeneity of the luminescence light needs to be taken into account, as well as its behavior while traveling through the SRM bulk. Finally, we show in a round-robin test that the resulting uncertainty for the quantification of Raman spectra using the modified technique is on the order of ±1.5 percentage points. PMID:25811283
NASA Astrophysics Data System (ADS)
Zhang, T.; Zhou, L.; Tong, S.
2015-12-01
The absolute determination of the Cu isotope ratio in NIST SRM 3114 based on a regression mass bias correction model is performed for the first time with NIST SRM 944 Ga as the calibrant. A value of 0.4471±0.0013 (2SD, n=37) for the 65Cu/63Cu ratio was obtained, with a value of +0.18±0.04‰ (2SD, n=5) for δ65Cu relative to NIST 976. The availability of the NIST SRM 3114 material, now with an absolute value for the 65Cu/63Cu ratio and a δ65Cu value relative to NIST 976, makes it suitable as a new candidate reference material for Cu isotope studies. In addition, a protocol is described for the accurate and precise determination of δ65Cu values of geological reference materials. Purification of Cu from the sample matrix was performed using the AG MP-1M Bio-Rad resin. The column recovery for geological samples was found to be 100±2% (2SD, n=15). A modified method of standard-sample bracketing with internal normalization for mass bias correction was employed by adding natural Ga to both the sample and the solution of NIST SRM 3114, which was used as the bracketing standard. The absolute value of 0.4471±0.0013 (2SD, n=37) for 65Cu/63Cu quantified in this study was used to calibrate the 69Ga/71Ga ratio in the two adjacent bracketing standards of SRM 3114; their average 69Ga/71Ga value was then used to correct the 65Cu/63Cu ratio in the sample. Measured δ65Cu values of 0.18±0.04‰ (2SD, n=20), 0.13±0.04‰ (2SD, n=9), 0.08±0.03‰ (2SD, n=6), 0.01±0.06‰ (2SD, n=4) and 0.26±0.04‰ (2SD, n=7) were obtained for five geological reference materials, BCR-2, BHVO-2, AGV-2, BIR-1a, and GSP-2, respectively, in agreement with values obtained in previous studies.
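A stripped-down sketch of the exponential-law mass bias correction with an internal Ga normalization may help fix ideas. The measured ratios below are illustrative, not the paper's data, and the single-step correction omits the bracketing and regression machinery of the actual protocol.

```python
import math

# Exponential-law mass bias correction with internal Ga normalization.
# Convention: R_true = R_meas * (m_numerator / m_denominator) ** f, where
# f is derived from the known 69Ga/71Ga ratio of the calibrant and then
# applied to the measured 65Cu/63Cu ratio. Input ratios are illustrative.
M = {"63Cu": 62.9296, "65Cu": 64.9278, "69Ga": 68.9256, "71Ga": 70.9247}

def exp_law_correct(r_meas_cu, r_meas_ga, r_true_ga):
    f = math.log(r_true_ga / r_meas_ga) / math.log(M["69Ga"] / M["71Ga"])
    return r_meas_cu * (M["65Cu"] / M["63Cu"]) ** f

r_corr = exp_law_correct(r_meas_cu=0.4455, r_meas_ga=1.5100, r_true_ga=1.5067)
```

The same exponent f corrects both element ratios because the exponential law ties the per-mass-unit fractionation of Cu and Ga together, which is the premise of the internal-normalization approach.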
Ouyang, L; Lee, H; Wang, J
2014-06-01
Purpose: To evaluate a moving-blocker-based approach in estimating and correcting megavoltage (MV) and kilovoltage (kV) scatter contamination in kV cone-beam computed tomography (CBCT) acquired during volumetric modulated arc therapy (VMAT). Methods: XML code was generated to enable concurrent CBCT acquisition and VMAT delivery in Varian TrueBeam developer mode. A physical attenuator (i.e., “blocker”) consisting of equally spaced lead strips (3.2 mm strip width and 3.2 mm gap in between) was mounted between the x-ray source and patient at a source-to-blocker distance of 232 mm. The blocker was simulated to move back and forth along the gantry rotation axis during the CBCT acquisition. Both MV and kV scatter signals were estimated simultaneously from the blocked regions of the imaging panel and interpolated into the unblocked regions. Scatter-corrected CBCT was then reconstructed from unblocked projections after scatter subtraction, using an iterative image reconstruction algorithm based on constrained optimization. Experimental studies were performed on a Catphan 600 phantom and an anthropomorphic pelvis phantom to demonstrate the feasibility of using a moving blocker for MV-kV scatter correction. Results: MV scatter greatly degrades CBCT image quality by increasing CT number inaccuracy and decreasing image contrast, in addition to the shading artifacts caused by kV scatter. These artifacts were substantially reduced in the moving-blocker-corrected CBCT images of both the Catphan and pelvis phantoms. Quantitatively, the CT number error in selected regions of interest was reduced from 377 in the kV-MV contaminated CBCT image to 38 for the Catphan phantom. Conclusions: The moving-blocker-based strategy can successfully correct MV and kV scatter simultaneously in CBCT projection data acquired with concurrent VMAT delivery. This work was supported in part by a grant from the Cancer Prevention and Research Institute of Texas (RP130109) and a grant from the American
NASA Astrophysics Data System (ADS)
Chimot, J.; Vlemmix, T.; Veefkind, J. P.; de Haan, J. F.; Levelt, P. F.
2016-02-01
The Ozone Monitoring Instrument (OMI) has provided daily global measurements of tropospheric NO2 for more than a decade. Numerous studies have drawn attention to the complexities related to measurements of tropospheric NO2 in the presence of aerosols. Fine particles affect the OMI spectral measurements and the length of the average light path followed by the photons. However, they are not explicitly taken into account in the current operational OMI tropospheric NO2 retrieval chain (DOMINO - Derivation of OMI tropospheric NO2) product. Instead, the operational OMI O2 - O2 cloud retrieval algorithm is applied both to cloudy and to cloud-free scenes (i.e. clear sky) dominated by the presence of aerosols. This paper describes in detail the complex interplay between the spectral effects of aerosols in the satellite observation and the associated response of the OMI O2 - O2 cloud retrieval algorithm. Then, it evaluates the impact on the accuracy of the tropospheric NO2 retrievals through the computed Air Mass Factor (AMF) with a focus on cloud-free scenes. For that purpose, collocated OMI NO2 and MODIS (Moderate Resolution Imaging Spectroradiometer) Aqua aerosol products are analysed over the strongly industrialized East China area. In addition, aerosol effects on the tropospheric NO2 AMF and the retrieval of OMI cloud parameters are simulated. Both the observation-based and the simulation-based approach demonstrate that the retrieved cloud fraction increases with increasing Aerosol Optical Thickness (AOT), but the magnitude of this increase depends on the aerosol properties and surface albedo. This increase is induced by the additional scattering effects of aerosols which enhance the scene brightness. The decreasing effective cloud pressure with increasing AOT primarily represents the shielding effects of the O2 - O2 column located below the aerosol layers. The study cases show that the aerosol correction based on the implemented OMI cloud model results in biases
Evaluation of QNI corrections in porous media applications
NASA Astrophysics Data System (ADS)
Radebe, M. J.; de Beer, F. C.; Nshimirimana, R.
2011-09-01
Qualitative measurement using digital neutron imaging has been explored more thoroughly than accurate quantitative measurement. The reason for this bias is that quantitative measurements require correction for background and material scatter, as well as neutron spectral effects. The Quantitative Neutron Imaging (QNI) software package resulted from efforts at the Paul Scherrer Institute, Helmholtz Zentrum Berlin (HZB) and Necsa to correct for these effects, while the sample-detector distance (SDD) principle has previously been demonstrated as a measure to eliminate the material scatter effect. This work evaluates the capabilities of the QNI software package to produce accurate quantitative results on specific characteristics of porous media, and its role in the nondestructive quantification of material with and without calibration. The work further complements the QNI package's abilities by the use of different SDDs. Studies of the effective porosity (%) of mortar and the attenuation coefficient of water using QNI and the SDD principle are reported.
Golibrzuch, Kai; Shirhatti, Pranav R.; Kandratsenka, Alexander; Wodtke, Alec M.; Bartels, Christof; Max Planck Institute for Biophysical Chemistry, Göttingen 37077; Rahinov, Igor; Auerbach, Daniel J.; Department of Chemistry and Biochemistry, University of California Santa Barbara, Santa Barbara, California 93106
2014-01-28
We present a combined experimental and theoretical study of NO(v = 3 → 3, 2, 1) scattering from a Au(111) surface at incidence translational energies ranging from 0.1 to 1.2 eV. Experimentally, molecular beam–surface scattering is combined with vibrational overtone pumping and quantum-state selective detection of the recoiling molecules. Theoretically, we employ a recently developed first-principles approach, which employs an Independent Electron Surface Hopping (IESH) algorithm to model the nonadiabatic dynamics on a Newns-Anderson Hamiltonian derived from density functional theory. This approach has been successful when compared to previously reported NO/Au scattering data. The experiments presented here show that vibrational relaxation probabilities increase with incidence energy of translation. The theoretical simulations incorrectly predict high relaxation probabilities at low incidence translational energy. We show that this behavior originates from trajectories exhibiting multiple bounces at the surface, associated with deeper penetration and favored (N-down) molecular orientation, resulting in a higher average number of electronic hops and thus stronger vibrational relaxation. The experimentally observed narrow angular distributions suggest that mainly single-bounce collisions are important. Restricting the simulations by selecting only single-bounce trajectories improves agreement with experiment. The multiple bounce artifacts discovered in this work are also present in simulations employing electronic friction and even for electronically adiabatic simulations, meaning they are not a direct result of the IESH algorithm. This work demonstrates how even subtle errors in the adiabatic interaction potential, especially those that influence the interaction time of the molecule with the surface, can lead to an incorrect description of electronically nonadiabatic vibrational energy transfer in molecule-surface collisions.
Data consistency-driven scatter kernel optimization for x-ray cone-beam CT.
Kim, Changhwan; Park, Miran; Sung, Younghun; Lee, Jaehak; Choi, Jiyoung; Cho, Seungryong
2015-08-01
Accurate and efficient scatter correction is essential for acquisition of high-quality x-ray cone-beam CT (CBCT) images for various applications. This study was conducted to demonstrate the feasibility of using the data consistency condition (DCC) as a criterion for scatter kernel optimization in scatter deconvolution methods in CBCT. Since data consistency in the mid-plane of CBCT is primarily challenged by scatter, we utilized data consistency to confirm the degree of scatter correction and to steer the update in iterative kernel optimization. By means of the parallel-beam DCC via fan-parallel rebinning, we iteratively optimized the scatter kernel parameters, using a particle swarm optimization algorithm for its computational efficiency and excellent convergence. The proposed method was validated by a simulation study using the XCAT numerical phantom and also by experimental studies using the ACS head phantom and the pelvic part of the Rando phantom. The results showed that the proposed method can effectively improve the accuracy of deconvolution-based scatter correction. Quantitative assessments of image quality parameters such as contrast and structure similarity (SSIM) revealed that the optimally selected scatter kernel improves the contrast of scatter-free images by up to 99.5%, 94.4%, and 84.4%, and the SSIM in an XCAT study, an ACS head phantom study, and a pelvis phantom study by up to 96.7%, 90.5%, and 87.8%, respectively. The proposed method can achieve accurate and efficient scatter correction from a single cone-beam scan without the need for any auxiliary hardware or additional experimentation. PMID:26183058
Data consistency-driven scatter kernel optimization for x-ray cone-beam CT
NASA Astrophysics Data System (ADS)
Kim, Changhwan; Park, Miran; Sung, Younghun; Lee, Jaehak; Choi, Jiyoung; Cho, Seungryong
2015-08-01
Accurate and efficient scatter correction is essential for acquisition of high-quality x-ray cone-beam CT (CBCT) images for various applications. This study was conducted to demonstrate the feasibility of using the data consistency condition (DCC) as a criterion for scatter kernel optimization in scatter deconvolution methods in CBCT. Since data consistency in the mid-plane of CBCT is primarily challenged by scatter, we utilized data consistency to confirm the degree of scatter correction and to steer the update in iterative kernel optimization. By means of the parallel-beam DCC via fan-parallel rebinning, we iteratively optimized the scatter kernel parameters, using a particle swarm optimization algorithm for its computational efficiency and excellent convergence. The proposed method was validated by a simulation study using the XCAT numerical phantom and also by experimental studies using the ACS head phantom and the pelvic part of the Rando phantom. The results showed that the proposed method can effectively improve the accuracy of deconvolution-based scatter correction. Quantitative assessments of image quality parameters such as contrast and structure similarity (SSIM) revealed that the optimally selected scatter kernel improves the contrast of scatter-free images by up to 99.5%, 94.4%, and 84.4%, and the SSIM in an XCAT study, an ACS head phantom study, and a pelvis phantom study by up to 96.7%, 90.5%, and 87.8%, respectively. The proposed method can achieve accurate and efficient scatter correction from a single cone-beam scan without the need for any auxiliary hardware or additional experimentation.
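A schematic of the kernel-optimization loop may be useful. In this sketch, the Gaussian kernel, the brute-force grid search (in place of particle swarm optimization), and the scalar moment test (standing in for the parallel-beam DCC) are all simplifying assumptions, not the paper's setup.

```python
import numpy as np

# Schematic deconvolution-style scatter correction with the kernel
# parameters tuned against a consistency metric. Synthetic 1D data.
def gaussian_kernel(width, half=32):
    x = np.arange(-half, half + 1, dtype=float)
    k = np.exp(-0.5 * (x / width) ** 2)
    return k / k.sum()

def estimate_scatter(projection, amplitude, width):
    # scatter modeled as a blurred, scaled copy of the measured projection
    return amplitude * np.convolve(projection, gaussian_kernel(width), mode="same")

def consistency_cost(corrected, reference_moment):
    # placeholder for the DCC: compare an integral moment to its known value
    return abs(corrected.sum() - reference_moment)

clean = np.full(128, 50.0)
proj = clean + estimate_scatter(clean, 0.30, 12.0)  # synthetic contaminated data
truth_moment = clean.sum()

candidates = [(a, w) for a in np.linspace(0.1, 0.5, 9) for w in (8.0, 12.0, 16.0)]
best = min(candidates,
           key=lambda p: consistency_cost(proj - estimate_scatter(proj, *p),
                                          truth_moment))
corrected = proj - estimate_scatter(proj, *best)
```

The structure mirrors the paper's loop: propose kernel parameters, deconvolve/subtract the implied scatter, score the corrected data against a consistency condition, and keep the parameters that minimize the violation.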
Trinquier, Anne; Touboul, Mathieu; Walker, Richard J
2016-02-01
Determination of the (182)W/(184)W ratio to a precision of ± 5 ppm (2σ) is desirable for constraining the timing of core formation and other early planetary differentiation processes. However, WO3(-) analysis by negative thermal ionization mass spectrometry normally results in a residual correlation between the instrumental-mass-fractionation-corrected (182)W/(184)W and (183)W/(184)W ratios that is attributed to mass-dependent variability of O isotopes over the course of an analysis and between different analyses. A second-order correction using the (183)W/(184)W ratio relies on the assumption that this ratio is constant in nature. This may prove invalid, as has already been realized for other isotope systems. The present study utilizes simultaneous monitoring of the (18)O/(16)O and W isotope ratios to correct oxide interferences on a per-integration basis and thus avoid the need for a double normalization of W isotopes. After normalization of W isotope ratios to a pair of W isotopes, following the exponential law, no residual W-O isotope correlation is observed. However, there is a nonideal mass bias residual correlation between (182)W/(i)W and (183)W/(i)W with time. Without double normalization of W isotopes and on the basis of three or four duplicate analyses, the external reproducibility per session of (182)W/(184)W and (183)W/(184)W normalized to (186)W/(183)W is 5-6 ppm (2σ, 1-3 μg loads). The combined uncertainty per session is less than 4 ppm for (183)W/(184)W and less than 6 ppm for (182)W/(184)W (2σm) for loads between 3000 and 50 ng. PMID:26751903
Some atmospheric scattering considerations relevant to BATSE: A model calculation
NASA Technical Reports Server (NTRS)
Young, John H.
1986-01-01
The orbiting Burst and Transient Source Experiment (BATSE) will locate gamma ray burst sources by analysis of the relative numbers of photons coming directly from a source and entering its prescribed array of detectors. In order to accurately locate burst sources it is thus necessary to identify and correct for any counts contributed by events other than direct entry by a mainstream photon. An effort is described which estimates the photon numbers which might be scattered into the BATSE detectors from interactions with the Earth's atmosphere. A model was developed which yielded analytical expressions for single-scatter photon contributions in terms of source and satellite locations.
Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.; Hünenberger, Philippe H.
Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.; Hünenberger, Philippe H.
2013-11-14
The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges −5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol{sup −1}) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB calculations.
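As a rough illustration of why such effects scale with box size, the sketch below computes only the leading-order "Wigner" lattice-sum self-interaction term for a net charge in a periodic cubic box. The cubic-lattice constant and water permittivity are standard values, but the sign convention and this single 1/L term are assumptions for illustration; the paper's numerical and analytical schemes are far more complete.

```python
# Leading-order periodicity self-interaction term for a net charge q in a
# cubic box of edge L under lattice-sum electrostatics. This is only the
# dominant 1/L "Wigner" term, not the paper's PB-based correction schemes.
XI_EW = -2.837297       # cubic-lattice Wigner constant (dimensionless)
COULOMB = 138.935       # kJ mol^-1 nm e^-2, Coulomb constant in MD units

def wigner_term(q_e, box_nm, eps_s=78.4):
    """Magnitude of the leading 1/L term (kJ/mol); sign convention varies."""
    return -XI_EW * COULOMB * q_e**2 / (2.0 * eps_s * box_nm)

# The term shrinks as 1/L: a larger box gives a smaller artifact.
small_box = wigner_term(+1.0, 7.42)    # smallest box edge from the abstract
large_box = wigner_term(+1.0, 11.02)   # largest box edge from the abstract
```

Because the term is proportional to 1/L, the two box edges quoted in the abstract give artifacts whose ratio is exactly 11.02/7.42.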
Environment scattering in GADRAS.
Thoreson, Gregory G.; Mitchell, Dean James; Theisen, Lisa Anne; Harding, Lee T.
2013-09-01
Radiation transport calculations were performed to compute the angular tallies for scattered gamma-rays as a function of distance, height, and environment. Green's functions were then used to encapsulate the results in a reusable transformation function. The calculations represent the transport of photons through the scattering surfaces that surround sources and detectors, such as the ground and walls. Utilization of these calculations in GADRAS (Gamma Detector Response and Analysis Software) enables accurate computation of environmental scattering for a variety of environments and source configurations. This capability, which agrees well with numerous experimental benchmark measurements, is now deployed with GADRAS Version 18.2 as the basis for the computation of scattered radiation.
How flatbed scanners upset accurate film dosimetry.
van Battum, L J; Huizenga, H; Verdaasdonk, R M; Heukelom, S
2016-01-21
Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length, and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged between 0.2 and 9 Gy, i.e. an optical density range between 0.25 and 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high-contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% change for pixels in the extreme lateral position. Light polarization due to film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green, and blue channels. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and therefore determination of the LSE per color channel and of the dose delivered to the film. PMID:26689962
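A hedged sketch of what a per-channel LSE correction could look like: a parabolic model in lateral position whose edge amplitude depends on dose (up to 14% at 9 Gy per the abstract), divided out of the readout. The functional form and the coefficient are assumptions for illustration, not the authors' calibration.

```python
# Illustrative LSE correction: readout error grows quadratically toward the
# lateral edges, reaching `amplitude` (e.g. 0.14 = 14%) at the edge. Applied
# per color channel with a dose-dependent amplitude in a real workflow.
def lse_correct(pixel_value, x_lateral, half_width, amplitude=0.14):
    """Divide out a parabolic LSE: no change at center, max at the edge."""
    rel = (x_lateral / half_width) ** 2      # 0 at center, 1 at the edge
    return pixel_value / (1.0 + amplitude * rel)
```

At the scan center the pixel is untouched; at the extreme lateral position a 14%-inflated reading is restored to its central-axis value.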
How Accurately can we Calculate Thermal Systems?
Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A
2004-04-20
I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as K{sub eff}, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore, rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully this will eventually lead to improvements in both our codes and the thermal scattering models that they use. In order to accomplish this I propose a number of extremely simple systems involving thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium-fueled thermal system, i.e., our typical thermal reactors.
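The "spread" the author proposes is straightforward to quantify once each code package has produced K{sub eff} for a benchmark: report the mean and the max-min spread (conventionally in pcm, 10{sup −5}). The values below are invented placeholders, not benchmark results.

```python
import statistics

# Hypothetical K_eff results for one thermal benchmark from four code
# packages (names and values are placeholders, not real codes or data).
keff = {"codeA": 1.0012, "codeB": 0.9987, "codeC": 1.0025, "codeD": 0.9990}

mean = statistics.mean(keff.values())
spread_pcm = (max(keff.values()) - min(keff.values())) * 1e5  # pcm = 1e-5
```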
di Sarra, Alcide; Sferlazzo, Damiano; Meloni, Daniela; Anello, Fabrizio; Bommarito, Carlo; Corradini, Stefano; De Silvestri, Lorenzo; Di Iorio, Tatiana; Monteleone, Francesco; Pace, Giandomenico; Piacentino, Salvatore; Pugnaghi, Sergio
2015-04-01
Aerosol optical properties have been measured on the island of Lampedusa (35.5°N, 12.6°E) with seven-band multifilter rotating shadowband radiometers (MFRSRs) and a CE 318 Cimel sunphotometer (part of the AERONET network) since 1999. Four different MFRSRs have operated since 1999. The Cimel sunphotometer has been operational for a short period in 2000 and in 2003-2006 and 2010-present. Simultaneous determinations of the aerosol optical depth (AOD) from the two instruments were compared over a period of almost 4 years at several wavelengths between 415 and 870 nm. This is the first long-term comparison at a site strongly influenced by desert dust and marine aerosols and characterized by frequent cases of elevated AOD. The datasets show a good agreement, with MFRSR underestimating the Cimel AOD in cases with low Ångström exponent; the underestimate decreases for increasing wavelength and increases with AOD. This underestimate is attributed to the effect of aerosol forward scattering on the relatively wide field of view of the MFRSR. An empirical correction of the MFRSR data was implemented. After correction, the mean bias (MB) between MFRSR and Cimel simultaneous AOD determinations is always smaller than 0.004, and the root mean square difference is ≤0.031 at all wavelengths. The MB between MFRSR and Cimel monthly averages (for months with at least 20 days with AOD determinations) is 0.0052. Thus, by combining the MFRSR and Cimel observations, an integrated long-term series is obtained, covering the period 1999-present, with almost continuous measurements since early 2002. The long-term data show a small (nonstatistically significant) decreasing trend over the period 2002-2013, in agreement with independent observations in the Mediterranean. The integrated Lampedusa dataset will be used for aerosol climatological studies and for verification of satellite observations and model analyses. PMID:25967183
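The comparison statistics quoted above, mean bias (MB) and root-mean-square difference, are defined on simultaneous AOD pairs; a minimal sketch with invented sample values:

```python
import math

# Invented simultaneous AOD determinations (not Lampedusa data).
mfrsr = [0.21, 0.35, 0.18, 0.50]
cimel = [0.22, 0.34, 0.20, 0.52]

diffs = [m - c for m, c in zip(mfrsr, cimel)]
mb = sum(diffs) / len(diffs)                              # mean bias
rmsd = math.sqrt(sum(d * d for d in diffs) / len(diffs))  # RMS difference
```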
A High-Accurate and High-Efficient Monte Carlo Code by Improved Molière Functions with Ionization
NASA Astrophysics Data System (ADS)
Nakatsuka, Takao; Okei, Kazuhide
2003-07-01
Although the Molière theory of multiple Coulomb scattering is less accurate in tracing solid angles than the Goudsmit and Saunderson theory due to the small-angle approximation, it still plays a very important role in the development of high-efficiency simulation codes for relativistic charged particles such as cosmic-ray particles. The Molière expansion is well explained by a physical model: the normal distribution, attributed to the frequent moderate scatterings, with subsequent correction terms attributed to the additional large-angle scatterings. Based on these physical concepts, we have improved a highly accurate and highly efficient Monte Carlo code taking account of ionization loss.
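The physical picture described above can be sketched as a two-component sampler: a Gaussian core for the frequent moderate scatterings plus a rare large-angle tail standing in for the correction terms. The tail model and its probability below are schematic assumptions, not the Molière expansion itself.

```python
import math
import random

# Two-component model of a multiple-scattering angle: Gaussian core
# (characteristic angle theta0) plus a crude power-law large-angle tail.
def sample_angle(theta0, tail_prob=0.02, rng=random.Random(1)):
    if rng.random() < tail_prob:
        # schematic single-scatter tail: always >= theta0, heavy-tailed
        return theta0 / math.sqrt(rng.random())
    return abs(rng.gauss(0.0, theta0))  # core of the Moliere distribution
```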
Accurate estimation of sigma(exp 0) using AIRSAR data
NASA Technical Reports Server (NTRS)
Holecz, Francesco; Rignot, Eric
1995-01-01
During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data as well as estimation of geophysical parameters from SAR data have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of the aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and the position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.
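A hedged sketch of the terrain-correction idea: use the DEM-derived slope to get the local incidence angle, then rescale the backscatter for the change in illuminated area. The simple cosine-ratio area model and range-facing-slope geometry below are common textbook simplifications, not the AIRSAR processing chain itself.

```python
import math

def local_incidence(look_deg, slope_deg):
    """Local incidence angle (deg) for a slope facing the radar (simplified)."""
    return look_deg - slope_deg

def sigma0_corrected(sigma0_db, look_deg, slope_deg):
    """Rescale sigma0 (dB) for the true scattering area (cosine-ratio model)."""
    theta_loc = math.radians(local_incidence(look_deg, slope_deg))
    theta = math.radians(look_deg)
    area_ratio = math.cos(theta_loc) / math.cos(theta)  # >1 on a fore-slope
    return sigma0_db - 10.0 * math.log10(area_ratio)
```

On flat terrain the correction vanishes; a fore-slope, which inflates the apparent backscatter, is corrected downward.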
Sun, H; Pistorious, S
2014-08-15
Introduction: Scattered coincidences in PET are generally treated as noise, which reduces image contrast and compromises quantification. We have developed a method, with promising results, to reconstruct the activity distribution from scattered PET events instead of simply correcting for them. The implementation of this method on clinical PET scanners is, however, limited by the currently available detector energy resolution. With low energy resolution we lose the ability to distinguish scattered coincidences from true events based on the measured photon energy. In addition, the two circular arcs used to confine the source position for a scattered event cannot be accurately defined. Method: This paper presents a modification to this approach which accounts for limited energy resolution. A measured event is split into a true and a scattered component, each with a different probability based on the position of the pair of photon energies in the energy spectrum. For the scattered component, we model the photon energy with a Gaussian distribution, and the upper and lower energy limits can be estimated and used to define inner and outer circular arcs that confine the source position. The true and scattered components for each measured event were reconstructed using our Generalized Scatter reconstruction algorithm. Results and Conclusion: The results show that the contrast and noise properties were improved by 6–9% and 2–4%, respectively. This demonstrates that the performance of the algorithm is less sensitive to the energy resolution and that incorporating scattered photons into the reconstruction brings more benefits than simply rejecting them.
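The event-splitting step can be sketched as a weight between a "true" photopeak model and a "scattered" model, both Gaussian in measured energy. The peak width, scatter centroid, and scatter width below are illustrative assumptions, not the authors' detector parameters.

```python
import math

def gauss(x, mu, sigma):
    """Normalized Gaussian density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def true_fraction(e_kev, fwhm_kev=60.0):
    """Weight assigned to the 'true' (511 keV photopeak) component."""
    sigma = fwhm_kev / 2.355                       # FWHM -> standard deviation
    p_true = gauss(e_kev, 511.0, sigma)            # photopeak model
    p_scat = gauss(e_kev, 420.0, 2.0 * sigma)      # broad, down-shifted scatter model
    return p_true / (p_true + p_scat)
```

A photon measured at 511 keV is weighted mostly "true", one measured well below the peak mostly "scattered"; each event contributes to both reconstructions with these weights.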
ERIC Educational Resources Information Center
Rom, Mark Carl
2011-01-01
Grades matter. College grading systems, however, are often ad hoc and prone to mistakes. This essay focuses on one factor that contributes to high-quality grading systems: grading accuracy (or "efficiency"). I proceed in several steps. First, I discuss the elements of "efficient" (i.e., accurate) grading. Next, I present analytical results…
A review of the kinetic detail required for accurate predictions of normal shock waves
NASA Technical Reports Server (NTRS)
Muntz, E. P.; Erwin, Daniel A.; Pham-Van-diep, Gerald C.
1991-01-01
Several aspects of the kinetic models used in the collision phase of Monte Carlo direct simulations have been studied. Accurate molecular velocity distribution function predictions require a significantly increased number of computational cells in one maximum slope shock thickness, compared to predictions of macroscopic properties. The shape of the highly repulsive portion of the interatomic potential for argon is not well modeled by conventional interatomic potentials; this portion of the potential controls high Mach number shock thickness predictions, indicating that the specification of the energetic repulsive portion of interatomic or intermolecular potentials must be chosen with care for correct modeling of nonequilibrium flows at high temperatures. It has been shown for inverse power potentials that the assumption of variable hard sphere scattering provides accurate predictions of the macroscopic properties in shock waves, by comparison with simulations in which differential scattering is employed in the collision phase. On the other hand, velocity distribution functions are not well predicted by the variable hard sphere scattering model for softer potentials at higher Mach numbers.
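The variable-hard-sphere (VHS) model mentioned above gives the collision cross section a power-law dependence on relative speed while keeping hard-sphere (isotropic) scattering. A minimal sketch, with placeholder reference values and the usual viscosity-exponent parameterization:

```python
# VHS cross section: sigma = sigma_ref * (g_ref / g)^(2*omega - 1), where g is
# the relative speed and omega the viscosity-temperature exponent. Reference
# values below are placeholders, not fitted gas data.
def vhs_cross_section(g, sigma_ref=1.0e-19, g_ref=300.0, omega=0.81):
    return sigma_ref * (g_ref / g) ** (2.0 * omega - 1.0)
```

For omega > 0.5 (a "soft" molecule) faster pairs see a smaller cross section; omega = 0.5 recovers the constant hard-sphere value.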
Integral method of wall interference correction in low-speed wind tunnels
NASA Technical Reports Server (NTRS)
Zhou, Changhai
1987-01-01
The analytical solution of Poisson's equation, derived from the definition of a vortex, was applied to the calculation of interference velocities due to the presence of wind tunnel walls. This approach, called the Integral Method, allows an accurate evaluation of wall interference for separated or more complicated flows without the need for considering any features of the model. All the information necessary for obtaining the wall correction is contained in wall pressure measurements. The correction is not sensitive to normal data scatter, and the computations are fast enough for on-line data processing.
Accurate radiative transfer calculations for layered media.
Selden, Adrian C
2016-07-01
Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics. PMID:27409700
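A K-integral is a weighted angular moment of the boundary intensity distribution. The sketch below just evaluates such a moment numerically with the trapezoidal rule; the intensity profile passed in is arbitrary, and the full integrated boundary equations of the method are not reproduced.

```python
# k-th mu-weighted moment of a boundary intensity I(mu), mu in [0, 1],
# via the composite trapezoidal rule.
def k_integral(intensity, k, n=2001):
    h = 1.0 / (n - 1)
    total = 0.0
    for i in range(n):
        mu = i * h
        w = 0.5 if i in (0, n - 1) else 1.0   # trapezoid endpoint weights
        total += w * intensity(mu) * mu**k * h
    return total
```

For an isotropic boundary intensity I = 1, the zeroth moment is 1 and the first moment (proportional to the hemispherical flux) is 1/2.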
Modeling transmission and scatter for photon beam attenuators.
Ahnesjö, A; Weber, L; Nilsson, P
1995-11-01
The development of treatment planning methods in radiation therapy requires dose calculation methods that are both accurate and general enough to provide a dose per unit monitor setting for a broad variety of fields and beam modifiers. The purpose of this work was to develop models for calculation of scatter and transmission for photon beam attenuators such as compensating filters, wedges, and block trays. The attenuation of the beam is calculated using a spectrum of the beam and a correction factor based on attenuation measurements. Small-angle coherent scatter and electron binding effects on scattering cross sections are considered by use of a correction factor. Quality changes in beam penetrability and energy fluence to dose conversion are modeled by use of the calculated primary beam spectrum after passage through the attenuator. The beam spectra are derived by the depth dose effective method, i.e., by minimizing the difference between measured and calculated depth dose distributions, where the calculated distributions are derived by superposing data from a database for monoenergetic photons. The attenuator scatter is integrated over the area viewed from the calculation point using first-scatter theory. Calculations are simplified by replacing the energy and angular-dependent cross-section formulas with the forward scatter constant r2(0) and a set of parametrized correction functions. The set of corrections includes functions for the Compton energy loss, scatter attenuation, and secondary bremsstrahlung production. The effect of charged particle contamination is bypassed by avoiding use of d{sub max} for absolute dose calibrations. The results of the model are compared with scatter measurements in air for copper and lead filters and with dose to a water phantom for lead filters for 4 and 18 MV. For attenuated beams, downstream of the buildup region, the calculated results agree with measurements at the 1.5% level. The accuracy was slightly less in situations
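The spectrum-based attenuation step can be sketched directly: weight each spectral bin's exponential transmission by its fluence fraction. The two-bin spectrum and attenuation coefficients below are invented numbers; the point is that the result exceeds a single effective-mu exponential because the beam hardens.

```python
import math

def transmission(thickness_cm, spectrum):
    """spectrum: list of (weight, mu_per_cm) pairs; weights sum to 1."""
    return sum(w * math.exp(-mu * thickness_cm) for w, mu in spectrum)

# Invented two-bin spectrum: 60% soft component, 40% hard component.
spec = [(0.6, 0.5), (0.4, 0.2)]
t = transmission(2.0, spec)
```

Because the soft component is filtered out faster, t is larger than exp(-mu_mean * thickness), which is why a single attenuation coefficient is insufficient for thick attenuators.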
Quirk, Thomas, J., IV
2004-08-01
The Integrated TIGER Series (ITS) is a software package that solves coupled electron-photon transport problems. ITS performs analog photon tracking for energies between 1 keV and 1 GeV. Unlike its deterministic counterpart, the Monte Carlo calculations of ITS do not require a memory-intensive meshing of phase space; however, its solutions carry statistical variations. Reducing these variations is heavily dependent on runtime. Monte Carlo simulations must therefore be both physically accurate and computationally efficient. Compton scattering is the dominant photon interaction above 100 keV and below 5-10 MeV, with higher cutoffs occurring in lighter atoms. In its current model of Compton scattering, ITS corrects the differential Klein-Nishina cross sections (which assume a stationary, free electron) with the incoherent scattering function, a function dependent on both the momentum transfer and the atomic number of the scattering medium. While this technique accounts for binding effects on the scattering angle, it excludes the Doppler broadening the Compton line undergoes because of the momentum distribution in each bound state. To correct for these effects, Ribberfors' relativistic impulse approximation (IA) will be employed to create scattering cross sections differential in both energy and angle for each element. Using the parameterizations suggested by Brusa et al., scattered photon energies and angles can be accurately sampled at high efficiency with minimal physical data. Two-body kinematics then dictates the electron's scattered direction and energy. Finally, the atomic ionization is relaxed via Auger emission or fluorescence. Future work will extend these improvements in incoherent scattering to compounds and to adjoint calculations.
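The free-electron Klein-Nishina cross section that ITS starts from (before the incoherent scattering function or impulse-approximation corrections) is standard and compact enough to show directly:

```python
import math

RE2 = 7.9407877e-26  # classical electron radius squared, cm^2

def klein_nishina(e_mev, theta):
    """Klein-Nishina dSigma/dOmega (cm^2/sr) for a free electron at rest."""
    k = e_mev / 0.510999                                 # energy in m_e c^2 units
    ratio = 1.0 / (1.0 + k * (1.0 - math.cos(theta)))    # E'/E, Compton kinematics
    return 0.5 * RE2 * ratio**2 * (ratio + 1.0 / ratio - math.sin(theta)**2)
```

In the forward direction the cross section reduces to the Thomson value r_e^2, and it falls off with angle at MeV energies; the binding and Doppler corrections discussed above modify this baseline.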
Accurate monotone cubic interpolation
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1991-01-01
Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema where accuracy degenerates to second-order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants, which preserve monotonicity as well as uniform third and fourth-order accuracy are presented. The gain of accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
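The monotonicity constraint being relaxed above is, in its classic Fritsch-Carlson form, a limit on the node slopes of the cubic interpolant. The sketch below shows that baseline limiter (harmonic-mean slopes, zero at extrema); the paper's geometric relaxation that recovers third/fourth-order accuracy is not reproduced here.

```python
# Fritsch-Carlson-style slope limiting for a monotone piecewise cubic:
# zero slope at strict local extrema, harmonic mean of adjacent secant
# slopes elsewhere. One-sided secants are used at the endpoints.
def limited_slopes(x, y):
    d = [(y[i + 1] - y[i]) / (x[i + 1] - x[i]) for i in range(len(x) - 1)]
    m = [d[0]] + [0.0] * (len(x) - 2) + [d[-1]]
    for i in range(1, len(x) - 1):
        if d[i - 1] * d[i] > 0:
            m[i] = 2.0 * d[i - 1] * d[i] / (d[i - 1] + d[i])  # harmonic mean
        # else: slope stays 0 at a local extremum, enforcing monotonicity
    return m
```

The zero slope at extrema is exactly where accuracy degrades to second order, motivating the relaxed constraint described in the abstract.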
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.
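For a concrete sense of "order of accuracy in space", here is the standard fourth-order central first-derivative stencil; it is a generic example, not one of the paper's single-step space-time schemes.

```python
# Fourth-order central difference for f'(x): exact for polynomials up to
# degree 4, with leading error proportional to h^4 * f'''''(x).
def d1_fourth_order(f, x, h):
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h)
```

Applied to f(x) = x^3 at x = 1 it returns the exact derivative 3, since the truncation error vanishes for cubics.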
On the accurate estimation of gap fraction during daytime with digital cover photography
NASA Astrophysics Data System (ADS)
Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.
2015-12-01
Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, the intervention of subjectivity, such as determining the camera relative exposure value (REV) and the threshold in the histogram, has hindered computing accurate gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions by DCP. The novel method computes gap fraction from a single unsaturated raw DCP image, corrected for canopy scattering effects, and a sky image reconstructed from the raw-format image. To test the sensitivity of the novel method's derived gap fraction to diverse REVs, solar zenith angles, and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REV 0 to -5. The novel method showed little variation of gap fraction across different REVs in both dense and sparse canopies across a diverse range of solar zenith angles. The perforated panel experiment, which was used to test the accuracy of the estimated gap fraction, confirmed that the novel method resulted in accurate and consistent gap fractions across different hole sizes, gap fractions, and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful in monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.
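Stripped of the scattering correction and sky reconstruction, the quantity being estimated is simple: the sky-pixel share of a binarized canopy image. The threshold below is a placeholder for the value the method derives objectively from the raw image.

```python
# Gap fraction = fraction of pixels classified as sky (above a brightness
# threshold) in a canopy image. `threshold` stands in for the objectively
# derived value; pixel values here are arbitrary 8-bit intensities.
def gap_fraction(pixels, threshold):
    sky = sum(1 for p in pixels if p >= threshold)
    return sky / len(pixels)
```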
Electromagnetic wave scattering by Schwarzschild black holes.
Crispino, Luís C B; Dolan, Sam R; Oliveira, Ednilton S
2009-06-12
We analyze the scattering of a planar monochromatic electromagnetic wave incident upon a Schwarzschild black hole. We obtain accurate numerical results from the partial wave method for the electromagnetic scattering cross section and show that they are in excellent agreement with analytical approximations. The scattering of electromagnetic waves is compared with the scattering of scalar, spinor, and gravitational waves. We present a unified picture of the scattering of all massless fields for the first time. PMID:19658920
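The partial-wave machinery used above is generic: given phase shifts delta_l, the scattering amplitude is summed over l and squared. The sketch below uses a single arbitrary demo phase shift, not black-hole phase shifts, and elementary (non-relativistic, spin-0) partial-wave formulas.

```python
import cmath
import math

def legendre(l, x):
    """P_l(x) by the Bonnet recurrence."""
    if l == 0:
        return 1.0
    p0, p1 = 1.0, x
    for n in range(1, l):
        p0, p1 = p1, ((2 * n + 1) * x * p1 - n * p0) / (n + 1)
    return p1

def dcs(theta, k, deltas):
    """|f(theta)|^2 with f = (1/k) * sum_l (2l+1) e^{i delta_l} sin(delta_l) P_l."""
    x = math.cos(theta)
    amp = sum((2 * l + 1) * cmath.exp(1j * d) * math.sin(d) * legendre(l, x)
              for l, d in enumerate(deltas))
    return abs(amp / k) ** 2
```

A single s-wave at resonance (delta_0 = pi/2) gives the isotropic unitarity-limit cross section 1/k^2 per steradian.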
Accurate thickness measurement of graphene
NASA Astrophysics Data System (ADS)
Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.
2016-03-01
Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.
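A toy version of the thickness measurement: the film height is the offset between the substrate and flake height populations along an AFM profile. Real analyses fit histogram peaks; robust medians are used here, and the nm heights are invented.

```python
import statistics

# Invented AFM heights (nm) sampled on bare substrate and on the flake.
substrate = [0.02, -0.01, 0.00, 0.03, 0.01]
flake = [0.36, 0.34, 0.38, 0.35, 0.37]

# Step height = offset between the two populations (median for robustness).
thickness = statistics.median(flake) - statistics.median(substrate)
```

The abstract's point is that this apparent step depends on the adsorbate layer and imaging force, so the raw offset is only accurate under a controlled imaging protocol.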
The FLUKA Code: An Accurate Simulation Tool for Particle Therapy
Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T.; Cerutti, Francesco; Chin, Mary P. W.; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G.; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R.; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis
2016-01-01
Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both 4He and 12C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions lead to the excellent agreement of calculated depth–dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is capable of importing also radiotherapy treatment data described in DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features. PMID:27242956
Artemyev, A. V.; Mourenas, D.; Krasnoselskikh, V. V.
2015-06-15
In this paper, we study relativistic electron scattering by fast magnetosonic waves. We compare results of test particle simulations and the quasi-linear theory for different spectra of waves to investigate how a fine structure of the wave emission can influence electron resonant scattering. We show that for a realistically wide distribution of wave normal angles θ (i.e., when the dispersion δθ≥0.5{sup °}), relativistic electron scattering is similar for a wide wave spectrum and for a spectrum consisting of well-separated ion cyclotron harmonics. Comparisons of test particle simulations with quasi-linear theory show that for δθ>0.5{sup °}, the quasi-linear approximation describes resonant scattering correctly for a large enough plasma frequency. For a very narrow θ distribution (when δθ∼0.05{sup °}), however, the effect of a fine structure in the wave spectrum becomes important. In this case, quasi-linear theory clearly fails to accurately describe electron scattering by fast magnetosonic waves. We also study the effect of high wave amplitudes on relativistic electron scattering. For typical conditions in the Earth's radiation belts, the quasi-linear approximation cannot accurately describe electron scattering for waves with averaged amplitudes >300 pT. We discuss various applications of the obtained results for modeling electron dynamics in the radiation belts and in the Earth's magnetotail.
NASA Astrophysics Data System (ADS)
Kedziera, Dariusz; Mentel, Łukasz; Żuchowski, Piotr S.; Knoop, Steven
2015-06-01
We have obtained accurate ab initio {sup 4}Σ{sup +} quartet potentials for the diatomic metastable triplet helium+alkali-metal (Li, Na, K, Rb) systems, using all-electron restricted open-shell coupled cluster singles and doubles with noniterative triples corrections [CCSD(T)] calculations and accurate calculations of the long-range C6 coefficients. These potentials provide accurate ab initio quartet scattering lengths, which is possible for these many-electron systems because their small reduced masses and shallow potentials result in a small number of bound states. Our results are relevant for ultracold metastable triplet helium+alkali-metal mixture experiments.
Estimation of scattered radiation in digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Diaz, O.; Dance, D. R.; Young, K. C.; Elangovan, P.; Bakic, P. R.; Wells, K.
2014-08-01
Digital breast tomosynthesis (DBT) is a promising technique to overcome the tissue superposition limitations found in planar 2D x-ray mammography. However, as most DBT systems do not employ an anti-scatter grid, the levels of scattered radiation recorded within the image receptor are significantly higher than those observed in planar 2D x-ray mammography. Knowledge of this field is necessary as part of any correction scheme and for computer modelling and optimisation of this examination. Monte Carlo (MC) simulations are often used for this purpose; however, they are computationally expensive and a more rapid method of calculation is desirable. This issue is addressed in this work by the development of a fast kernel-based methodology for scatter field estimation using a detailed realistic DBT geometry. Thickness-dependent scatter kernels, which were validated against the literature with a maximum discrepancy of 4% for an idealised geometry, have been calculated, and a new physical parameter (air gap distance) was used to more accurately estimate the distribution of scattered radiation for a series of anthropomorphic breast phantom models. The proposed methodology considers, for the first time, the effects of scattered radiation from the compression paddle and breast support plate, which can represent more than 30% of the total scattered radiation recorded within the image receptor. The results show that the scatter field estimator can calculate scattered radiation images in an average of 80 min for projection angles up to 25° with an error of 10% or less across most of the breast area when compared with direct MC simulations.
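The thickness-dependent kernel approach described above reduces scatter estimation to a convolution of the primary fluence with precomputed kernels. A minimal numerical sketch of that convolution step (the Gaussian kernel shape and the scatter-to-primary ratio below are illustrative assumptions, not the paper's validated kernels):

```python
import numpy as np

def estimate_scatter(primary, kernel):
    """Kernel-based scatter estimate: 2-D linear convolution of the primary
    fluence with a scatter kernel, evaluated via zero-padded FFTs."""
    ny, nx = primary.shape
    ky, kx = kernel.shape
    shape = (ny + ky - 1, nx + kx - 1)          # zero-pad to avoid wrap-around
    full = np.fft.irfft2(np.fft.rfft2(primary, shape) *
                         np.fft.rfft2(kernel, shape), shape)
    y0, x0 = (ky - 1) // 2, (kx - 1) // 2       # crop to the 'same' region
    return full[y0:y0 + ny, x0:x0 + nx]

# Illustrative kernel: a Gaussian normalised to a scatter-to-primary
# ratio (SPR) of 0.3; real kernels depend on breast thickness and air gap.
yy, xx = np.mgrid[-10:11, -10:11]
kernel = np.exp(-(xx**2 + yy**2) / (2.0 * 4.0**2))
kernel *= 0.3 / kernel.sum()
primary = np.ones((64, 64))                     # flat primary fluence
scatter = estimate_scatter(primary, kernel)     # ~0.3 away from the edges
```

Away from the field edges the estimated scatter level approaches SPR times the primary fluence, which is the sanity check one expects from a normalised kernel.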
NASA Astrophysics Data System (ADS)
Perim de Faria, Julia; Bundke, Ulrich; Onasch, Timothy B.; Freedman, Andrew; Petzold, Andreas
2016-04-01
The necessity to quantify the direct impact of aerosol particles on climate forcing is already well known; assessing this impact requires continuous and systematic measurements of the aerosol optical properties. Two of the main parameters that need to be accurately measured are the aerosol optical depth and single scattering albedo (SSA, defined as the ratio of particulate scattering to extinction). The measurement of single scattering albedo commonly involves the measurement of two optical parameters, the scattering and the absorption coefficients. Although there are well-established technologies to measure both of these parameters, the use of two separate instruments with different principles and uncertainties represents a potential source of significant errors and biases. Based on the recently developed cavity attenuated phase shift particle extinction monitor (CAPS PMex) instrument, the CAPS PMssa instrument combines the CAPS technology to measure particle extinction with an integrating sphere capable of simultaneously measuring the scattering coefficient of the same sample. The scattering channel is calibrated to the extinction channel, such that the accuracy of the single scattering albedo measurement is only a function of the accuracy of the extinction measurement and the nephelometer truncation losses. This gives the instrument an accurate and direct measurement of the single scattering albedo. In this study, we assess the measurements of both the extinction and scattering channels of the CAPS PMssa through intercomparisons with Mie theory, as a fundamental comparison, and with proven technologies, such as integrating nephelometers and filter-based absorption monitors. For comparison, we use two nephelometers, a TSI 3563 and an Aurora 4000, and two measurements of the absorption coefficient, using a Particulate Soot Absorption Photometer (PSAP) and a Multi Angle Absorption Photometer (MAAP). We also assess the indirect absorption coefficient
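The SSA definition and the cross-calibration of the scattering channel against the extinction channel can be illustrated in a few lines; the function name and all numbers below are hypothetical, not the instrument's actual processing:

```python
def single_scattering_albedo(b_scat_raw, b_ext, calib):
    """SSA from a calibrated scattering channel and an extinction channel.

    b_scat_raw : raw integrating-sphere scattering signal (arbitrary units)
    b_ext      : extinction coefficient from the phase-shift channel (Mm^-1)
    calib      : scattering-channel calibration constant, determined with a
                 purely scattering (SSA = 1) aerosol so that
                 calib * b_scat_raw == b_ext for that aerosol.
    """
    b_scat = calib * b_scat_raw
    return b_scat / b_ext

# Calibration with a non-absorbing aerosol: raw signal 250 units at 500 Mm^-1.
calib = 500.0 / 250.0
# Ambient sample: raw scattering signal 180 units, extinction 450 Mm^-1.
ssa = single_scattering_albedo(180.0, 450.0, calib)
```

Because the scattering channel is tied to the extinction channel by the calibration, the SSA error budget reduces to the extinction accuracy plus truncation losses, which is the point the abstract makes.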
Three-dimensional modeling of stimulated Brillouin scattering in ignition-scale experiments.
Divol, L; Berger, R L; Meezan, N B; Froula, D H; Dixit, S; Suter, L J; Glenzer, S H
2008-06-27
The first three-dimensional simulations of a high power 0.351 μm laser beam propagating through a high temperature hohlraum plasma are reported. We show that 3D fluid-based modeling of stimulated Brillouin scattering, including linear kinetic corrections, reproduces quantitatively the experimental measurements, provided it is coupled to detailed hydrodynamics simulation and a realistic description of the laser beam from its millimeter-size envelope down to the micron scale speckles. These simulations accurately predict the strong reduction of stimulated Brillouin scattering measured when polarization smoothing is used. PMID:18643667
NASA Astrophysics Data System (ADS)
Itano, Wayne M.; Ramsey, Norman F.
1993-07-01
The paper discusses current methods for accurate measurements of time by conventional atomic clocks, with particular attention given to the principles of operation of atomic-beam frequency standards, atomic hydrogen masers, and atomic fountains, and to the potential use of strings of trapped mercury ions as a time-keeping device more stable than conventional atomic clocks. The areas of application of the ultraprecise and ultrastable time-measuring devices that tax the capacity of modern atomic clocks include radio astronomy and tests of relativity. The paper also discusses practical applications of ultraprecise clocks, such as navigation of space vehicles and pinpointing the exact position of ships and other objects on Earth using GPS.
Accurate quantum chemical calculations
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed; these are then applied to a number of chemical and spectroscopic problems, to transition metals, and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.
Accurate ab Initio Spin Densities
2012-01-01
We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as a basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CASCI-type wave function provides insight into chemically interesting features of the molecule under study such as the distribution of α and β electrons in terms of Slater determinants, CI coefficients, and natural orbitals. The methodology is applied to an iron nitrosyl complex which we have identified as a challenging system for standard approaches [J. Chem. Theory Comput. 2011, 7, 2740]. PMID:22707921
Quantitative SPECT reconstruction using CT-derived corrections
NASA Astrophysics Data System (ADS)
Willowson, Kathy; Bailey, Dale L.; Baldock, Clive
2008-06-01
A method for achieving quantitative single-photon emission computed tomography (SPECT) based upon corrections derived from x-ray computed tomography (CT) data is presented. A CT-derived attenuation map is used to perform transmission-dependent scatter correction (TDSC) in conjunction with non-uniform attenuation correction. The original CT data are also utilized to correct for partial volume effects in small volumes of interest. The accuracy of the quantitative technique has been evaluated with phantom experiments and clinical lung ventilation/perfusion SPECT/CT studies. A comparison of calculated values with the known total activities and concentrations in a mixed-material cylindrical phantom, and in liver and cardiac inserts within an anthropomorphic torso phantom, produced accurate results. The total activity in corrected ventilation-subtracted perfusion images was compared to the calibrated injected dose of [99mTc]-MAA (macro-aggregated albumin). The average difference over 12 studies between the known and calculated activities was found to be -1%, with a range of ±7%.
A New Polyethylene Scattering Law Determined Using Inelastic Neutron Scattering
Lavelle, Christopher M; Liu, C; Stone, Matthew B
2013-01-01
Monte Carlo neutron transport codes such as MCNP rely on accurate data for nuclear physics cross-sections to produce accurate results. At low energy, this takes the form of scattering laws based on the dynamic structure factor, S(Q, E). High density polyethylene (HDPE) is frequently employed as a neutron moderator at both high and low temperatures, however the only cross-sections available are for T = 300 K, and the evaluation has not been updated in quite some time. In this paper we describe inelastic neutron scattering measurements on HDPE at 5 and 300 K which are used to improve the scattering law for HDPE. We describe the experimental methods, review some of the past HDPE scattering laws, and compare computations using these models to the measured S(Q, E). The total cross-section is compared to available data, and the treatment of the carbon secondary scatterer as a free gas is assessed. We also discuss the use of the measurement itself as a scattering law via the one-phonon approximation. We show that a scattering law computed using a more detailed model for the Generalized Density of States (GDOS) compares more favorably to this experiment, suggesting that inelastic neutron scattering can play an important role in both the development and validation of new scattering laws for Monte Carlo work.
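The link between a generalized density of states and the scattering law mentioned above can be sketched in the incoherent one-phonon approximation. The Debye-like g(E), the dimensionless normalisation, and all parameter values below are illustrative assumptions, not the measured HDPE scattering law:

```python
import numpy as np

kB = 8.617333e-2  # Boltzmann constant in meV/K

def bose(E, T):
    """Bose occupation factor n(E) for phonon energy E (meV) at T (K)."""
    return 1.0 / np.expm1(E / (kB * T))

def s1(Q, E, g, T, M=1.0, W=0.0):
    """Incoherent one-phonon scattering law on the neutron energy-loss side
    (E > 0), up to an overall normalisation:
        S1(Q, E) ~ (Q^2 / 2M) * exp(-2W) * g(E) / E * [n(E) + 1]
    """
    return Q**2 / (2.0 * M) * np.exp(-2.0 * W) * g(E) / E * (bose(E, T) + 1.0)

# Illustrative Debye-like GDOS with a 30 meV cut-off (not the HDPE GDOS).
def g_debye(E, E_max=30.0):
    return np.where(E < E_max, 3.0 * E**2 / E_max**3, 0.0)

E, Q, T = 10.0, 2.0, 300.0
S_loss = s1(Q, E, g_debye, T)              # neutron energy-loss side
S_gain = S_loss * np.exp(-E / (kB * T))    # energy-gain side, detailed balance
```

The detailed-balance factor exp(-E/kBT) relating the two sides is what makes low-temperature (5 K) data particularly clean on the energy-loss side.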
NASA Astrophysics Data System (ADS)
Bravo, Jaime; Davis, Scott C.; Roberts, David W.; Paulsen, Keith D.; Kanick, Stephen C.
2015-03-01
Quantification of targeted fluorescence markers during neurosurgery has the potential to improve and standardize surgical distinction between normal and cancerous tissues. However, quantitative analysis of marker fluorescence is complicated by tissue background absorption and scattering properties. Correction algorithms that transform raw fluorescence intensity into quantitative units, independent of absorption and scattering, require a paired measurement of localized white light reflectance to provide estimates of the optical properties. This study focuses on the unique problem of developing a spectral analysis algorithm to extract tissue absorption and scattering properties from white light spectra that contain contributions from both elastically scattered photons and fluorescence emission from a strong fluorophore (i.e. fluorescein). A fiber-optic reflectance device was used to perform measurements in a small set of optical phantoms, constructed with Intralipid (1% lipid), whole blood (1% volume fraction) and fluorescein (0.16-10 μg/mL). Results show that the novel spectral analysis algorithm yields accurate estimates of tissue parameters independent of fluorescein concentration, with relative errors of blood volume fraction, blood oxygenation fraction (BOF), and the reduced scattering coefficient (at 521 nm) of <7%, <1%, and <22%, respectively. These data represent a first step towards quantification of fluorescein in tissue in vivo.
Gamma scattering in condensed matter with high intensity Moessbauer radiation
Not Available
1990-01-01
We give a progress report for the work which has been carried out in the last three years with DOE support. A facility for high-intensity Moessbauer scattering is now fully operational at the University of Missouri Research Reactor (MURR) as well as a facility at Purdue, using special isotopes produced at MURR. High precision, fundamental Moessbauer effect studies have been carried out using scattering to filter the unwanted radiation. These have led to a new Fourier transform method for describing Moessbauer effect (ME) lineshape and a direct method of fitting ME data to the convolution integral. These methods allow complete correction for source resonance self absorption (SRSA) and the accurate representation of interference effects that add an asymmetric component to the ME lines. We have begun applying these techniques to attenuated ME sources whose central peak has been attenuated by stationary resonant absorbers, to more precisely determine interference parameters and line-shape behavior in the resonance asymptotic region. This analysis is important to both the fundamental ME studies and to scattering studies for which a deconvolution is essential for extracting the correct recoilless fractions and interference parameters. A number of scattering studies have been successfully carried out including a study of the thermal diffuse scattering in Si, which led to an analysis of the resolution function for gamma-ray scattering. Also studied was the anharmonic motion in Na and the satellite reflection Debye-Waller factor in TaS{sub 2}, which indicate phason rather than phonon behavior. We have begun quasielastic diffusion studies in viscous liquids and current results are summarized. These advances, coupled to our improvements in Microfoil Conversion Electron spectroscopy, lay the foundation for the proposed research outlined in this request for a three-year renewal of DOE support.
Inelastic scattering in condensed matter with high intensity Moessbauer radiation
Yelon, W.B.; Schupp, G.
1990-10-01
We give a progress report for the work which has been carried out in the last three years with DOE support. A facility for high-intensity Moessbauer scattering is now fully operational at the University of Missouri Research Reactor (MURR) as well as a facility at Purdue, using special isotopes produced at MURR. High precision, fundamental Moessbauer effect studies have been carried out using scattering to filter the unwanted radiation. These have led to a new Fourier transform method for describing Moessbauer effect (ME) lineshape and a direct method of fitting ME data to the convolution integral. These methods allow complete correction for source resonance self absorption (SRSA) and the accurate representation of interference effects that add an asymmetric component to the ME lines. We have begun applying these techniques to attenuated ME sources whose central peak has been attenuated by stationary resonant absorbers, to more precisely determine interference parameters and line-shape behavior in the resonance asymptotic region. This analysis is important to both the fundamental ME studies and to scattering studies for which a deconvolution is essential for extracting the correct recoilless fractions and interference parameters. A number of scattering studies have been successfully carried out including a study of the thermal diffuse scattering in Si, which led to an analysis of the resolution function for gamma-ray scattering. Also studied was the anharmonic motion in Na and the satellite reflection Debye-Waller factor in TaS{sub 2}, which indicate phason rather than phonon behavior. We have begun quasielastic diffusion studies in viscous liquids and current results are summarized. These advances, coupled to our improvements in Microfoil Conversion Electron spectroscopy, lay the foundation for the proposed research outlined in this request for a three-year renewal of DOE support.
Universal quantification of elastic scattering effects in AES and XPS
NASA Astrophysics Data System (ADS)
Jablonski, Aleksander
1996-09-01
Elastic scattering of photoelectrons in a solid can be accounted for in the common formalism of XPS by introducing two correction factors, βeff and Qx. In the case of AES, only one correction factor, QA, is required. As recently shown, relatively simple analytical expressions for the correction factors can be derived from the kinetic Boltzmann equation within the so-called "transport approximation". The corrections are expressed here in terms of the ratio of the transport mean free path (TRMFP) to the inelastic mean free path (IMFP). Since the available data for the TRMFP are rather limited, it was decided to complete an extensive database of these values. They were calculated in the present work for the same elements and energies as in the IMFP tabulation published by Tanuma et al. An attempt has been made to derive a predictive formula providing the ratios of the TRMFP to the IMFP. Consequently, a very simple and accurate algorithm for calculating the correction factors βeff, Qx and QA has been developed. This algorithm can easily be generalized to multicomponent solids. The resulting values of the correction factors were found to compare very well with published values resulting from Monte Carlo calculations.
INTEGRATING NEPHELOMETER RESPONSE CORRECTIONS FOR BIMODAL SIZE DISTRIBUTIONS
Correction factors are calculated for obtaining true scattering extinction coefficients from integrating nephelometer measurements. The corrections are based on the bimodal representation of ambient aerosol size distributions, and take account of the effects of angular truncation...
Current relaxation due to hot carrier scattering in graphene
NASA Astrophysics Data System (ADS)
Sun, Dong; Divin, Charles; Mihnev, Momchil; Winzer, Torben; Malic, Ermin; Knorr, Andreas; Sipe, John E.; Berger, Claire; de Heer, Walt A.; First, Phillip N.; Norris, Theodore B.
2012-10-01
In this paper, we present direct time-domain investigations of the relaxation of electric currents in graphene due to hot carrier scattering. We use coherent control with ultrashort optical pulses to photoinject a current and detect the terahertz (THz) radiation emitted by the resulting current surge. We pre-inject a background of hot carriers using a separate pump pulse, with a variable delay between the pump and current-injection pulses. We find the effect of the hot carrier background is to reduce the current and hence the emitted THz radiation. The current damping is determined simply by the density (or temperature) of the thermal carriers. The experimental behavior is accurately reproduced in a microscopic theory, which correctly incorporates the nonconservation of velocity in scattering between Dirac fermions. The results indicate that hot carriers are effective in damping the current, and are expected to be important for understanding the operation of high-speed graphene electronic devices.
Scattering of slow neutrons by bound nuclei
NASA Astrophysics Data System (ADS)
Nowak, Ernst
1982-09-01
The T-operator for scattering of slow neutrons by a system of bound nuclei is calculated up to quadratic terms in the scattering length. Binding effects as well as effects of multiple scattering have to be included in order to avoid inconsistencies. For the discussion of binding effects one can adopt methods developed by Dietze and Nowak [1] for treating scattering by an elastically bound nucleus. In particular the case of coherent elastic scattering is discussed: we show how the corrections can be expressed in terms of correlation functions and that binding effects are most important for scattering by light nuclei.
NASA Astrophysics Data System (ADS)
Deng, Shaoyong; Zhang, Qi; Xia, Junying
2014-12-01
A fully self-designed experimental system based on dynamic light scattering has been developed. The photon correlation spectroscopy method is used to compute the autocorrelation of the measured scattering photons and of the scattering field. Self-written software performs the autocorrelation dynamically, replacing the common hardware digital correlator and providing many more correlation channels at much lower cost. Several inversion algorithms, including first-order cumulants, second-order cumulants, NNLS, CONTIN and double exponentials, are used to compute the particle sizes and decay linewidths of both monodisperse and polydisperse systems. The programs implementing these inversion algorithms are all self-written except for CONTIN. The influences of system parameters, such as the sample time, the last delay time, the elapsed time, the suspension concentration and the baseline of the scattering-photon autocorrelation, on the scattering photon counts, on the autocorrelations of the scattering photons and scattering field, and on the particle-size distribution are all investigated in detail and explained theoretically. Appropriate choices of the system parameters are identified to optimize the experimental system. The limitations of the inversion algorithms for the self-designed system are described and explained. Corrected first-order cumulants and corrected double-exponential methods are developed to compute particle sizes correctly over a wide range of time scales. The particle sizes measured with the optimized experimental system are very accurate.
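The first-order cumulants inversion named above reduces, for a monodisperse sample, to a linear fit of the logarithm of the field correlation function. A sketch with synthetic data (`radius_from_g2` and all parameter values are illustrative, not the authors' software):

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def radius_from_g2(tau, g2, beta, q, T=298.15, eta=0.89e-3):
    """First-order cumulants analysis of a DLS intensity autocorrelation.

    Siegert relation:  g2(tau) = 1 + beta * |g1(tau)|^2.
    A linear fit ln g1 = ln A - Gamma*tau yields the decay linewidth
    Gamma = D*q^2, and Stokes-Einstein gives R = kB*T / (6*pi*eta*D).
    """
    g1 = np.sqrt((g2 - 1.0) / beta)
    slope, _ = np.polyfit(tau, np.log(g1), 1)
    gamma = -slope                  # decay linewidth (1/s)
    D = gamma / q**2                # translational diffusion coefficient (m^2/s)
    return kB * T / (6.0 * np.pi * eta * D)

# Synthetic monodisperse sample: 100 nm radius spheres in water at 25 C.
q = 2.3e7                            # scattering vector magnitude (1/m)
R_true = 100e-9
D_true = kB * 298.15 / (6.0 * np.pi * 0.89e-3 * R_true)
gamma_true = D_true * q**2
tau = np.linspace(1e-6, 1e-3, 200)   # delay times (s)
g2 = 1.0 + 0.8 * np.exp(-2.0 * gamma_true * tau)   # coherence factor beta = 0.8
R = radius_from_g2(tau, g2, 0.8, q)                # recovers R_true
```

For polydisperse systems this single-exponential fit breaks down, which is why the abstract resorts to second-order cumulants, NNLS, CONTIN and double-exponential inversions.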
NASA Technical Reports Server (NTRS)
Waegell, Mordecai J.; Palacios, David M.
2011-01-01
Jitter_Correct.m is a MATLAB function that automatically measures and corrects inter-frame jitter in an image sequence to a user-specified precision. In addition, the algorithm dynamically adjusts the image sample size to increase the accuracy of the measurement. The Jitter_Correct.m function takes an image sequence with unknown frame-to-frame jitter and computes the translations of each frame (column and row, in pixels) relative to a chosen reference frame with sub-pixel accuracy. The translations are measured using a cross-correlation Fourier-transform method in which the relative phase of the two transformed images is fit to a plane. The measured translations are then used to correct the inter-frame jitter of the image sequence. The function also dynamically expands the image sample size over which the cross-correlation is measured to increase the accuracy of the measurement. This increases the robustness of the measurement to variable magnitudes of inter-frame jitter.
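The core measurement step, fitting the phase of the cross-power spectrum to a plane, can be sketched as follows (a Python analogue of the idea, not the MATLAB function itself; it assumes sub-pixel shifts so that no phase unwrapping is needed):

```python
import numpy as np

def measure_shift(ref, img):
    """Sub-pixel translation of `img` relative to `ref`: the phase of the
    cross-power spectrum is fit to a plane through the origin by
    magnitude-weighted least squares."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(img)
    cross = F1 * np.conj(F2)
    phase = np.angle(cross)          # = 2*pi*(u*dx + v*dy) for |shift| < 1 px
    ny, nx = ref.shape
    u = np.broadcast_to(np.fft.fftfreq(nx)[None, :], (ny, nx))
    v = np.broadcast_to(np.fft.fftfreq(ny)[:, None], (ny, nx))
    A = 2.0 * np.pi * np.stack([u.ravel(), v.ravel()], axis=1)
    w = np.abs(cross).ravel()        # down-weight noisy spectral points
    sol, *_ = np.linalg.lstsq(A * w[:, None], phase.ravel() * w, rcond=None)
    return sol[0], sol[1]            # (dx, dy) in pixels

# Synthetic test: a Gaussian spot shifted by (0.40, -0.25) px via the
# Fourier shift theorem, so the true translation is known exactly.
ny = nx = 64
yy, xx = np.mgrid[0:ny, 0:nx]
ref = np.exp(-((xx - 32.0)**2 + (yy - 32.0)**2) / 50.0)
dx_true, dy_true = 0.40, -0.25
u = np.fft.fftfreq(nx)[None, :]
v = np.fft.fftfreq(ny)[:, None]
img = np.fft.ifft2(np.fft.fft2(ref) *
                   np.exp(-2j * np.pi * (u * dx_true + v * dy_true))).real
dx, dy = measure_shift(ref, img)
```

For shifts larger than a pixel, an integer pre-alignment (or phase unwrapping) would be needed before the plane fit remains valid.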
An investigation of light transport through scattering bodies with non-scattering regions.
Firbank, M; Arridge, S R; Schweiger, M; Delpy, D T
1996-04-01
Near-infra-red (NIR) spectroscopy is increasingly being used for monitoring cerebral oxygenation and haemodynamics. One current concern is the effect of the clear cerebrospinal fluid upon the distribution of light in the head. There are difficulties in modelling clear layers in scattering systems. The Monte Carlo model should handle clear regions accurately, but is too slow to be used for realistic geometries. The diffusion equation can be solved quickly for realistic geometries, but is only valid in scattering regions. In this paper we describe experiments carried out on a solid slab phantom to investigate the effect of clear regions. The experimental results were compared with the different models of light propagation. We found that the presence of a clear layer had a significant effect upon the light distribution, which was modelled correctly by Monte Carlo techniques, but not by diffusion theory. A novel approach to calculating the light transport was developed, using diffusion theory to analyze the scattering regions combined with a radiosity approach to analyze the propagation through the clear region. Results from this approach were found to agree with both the Monte Carlo and experimental data. PMID:8730669
Simple accurate approximations for the optical properties of metallic nanospheres and nanoshells.
Schebarchov, Dmitri; Auguié, Baptiste; Le Ru, Eric C
2013-03-28
This work aims to provide simple and accurate closed-form approximations to predict the scattering and absorption spectra of metallic nanospheres and nanoshells supporting localised surface plasmon resonances. Particular attention is given to the validity and accuracy of these expressions in the range of nanoparticle sizes relevant to plasmonics, typically limited to around 100 nm in diameter. Using recent results on the rigorous radiative correction of electrostatic solutions, we propose a new set of long-wavelength polarizability approximations for both nanospheres and nanoshells. The improvement offered by these expressions is demonstrated with direct comparisons to other approximations previously obtained in the literature, and their absolute accuracy is tested against the exact Mie theory. PMID:23358525
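The radiatively corrected quasi-static (long-wavelength) approximation discussed above can be written down directly for a homogeneous sphere; the permittivity value below is illustrative rather than a fitted metal dielectric function:

```python
import numpy as np

def cross_sections(eps, radius, wavelength, n_medium=1.0):
    """Extinction and scattering cross-sections of a small sphere from the
    radiatively corrected quasi-static dipole polarizability:
        alpha_0 = 4*pi*a^3 * (eps - eps_m) / (eps + 2*eps_m)
        alpha   = alpha_0 / (1 - 1j * k^3 * alpha_0 / (6*pi))
        sigma_ext = k * Im(alpha),   sigma_sca = k^4 * |alpha|^2 / (6*pi)
    """
    eps_m = n_medium**2
    k = 2.0 * np.pi * n_medium / wavelength
    a0 = 4.0 * np.pi * radius**3 * (eps - eps_m) / (eps + 2.0 * eps_m)
    alpha = a0 / (1.0 - 1j * k**3 * a0 / (6.0 * np.pi))
    sigma_sca = k**4 * np.abs(alpha)**2 / (6.0 * np.pi)
    sigma_ext = k * np.imag(alpha)
    return sigma_ext, sigma_sca

# Illustrative values: eps = -2.5 + 0.6j near the dipole resonance of a
# 25 nm radius sphere at 400 nm (not fitted to a real metal).
s_ext, s_sca = cross_sections(-2.5 + 0.6j, 25e-9, 400e-9)
s_abs = s_ext - s_sca    # absorption by energy conservation
```

Without the radiative correction in the denominator, the plain quasi-static polarizability can violate energy conservation (extinction smaller than scattering) near resonance, which is exactly what the corrected form repairs.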
Hybrid simulation of scatter intensity in industrial cone-beam computed tomography
NASA Astrophysics Data System (ADS)
Thierry, R.; Miceli, A.; Hofmann, J.; Flisch, A.; Sennhauser, U.
2009-01-01
A cone-beam computed tomography (CT) system using a 450 kV X-ray tube has been developed to address the three-dimensional imaging of automotive parts in short acquisition times. Because the probability of detecting scattered photons is high for this energy range and detection area, a scattering correction becomes mandatory for generating reliable images with enhanced contrast detectability. In this paper, we present a hybrid simulator for the fast and accurate calculation of the scattering intensity distribution. The full acquisition chain, from the generation of a polyenergetic photon beam, through its interaction with the scanned object, to the energy deposit in the detector, is simulated. Object phantoms can be spatially described in the form of voxels, mathematical primitives or CAD models. Uncollided radiation is treated with a ray-tracing method and scattered radiation is split into single and multiple scattering. The single scattering is calculated with a deterministic approach accelerated with a forced-detection method. The residual noisy signal is subsequently deconvolved with the iterative Richardson-Lucy method. Finally, the multiple scattering is addressed with a coarse Monte Carlo (MC) simulation. The proposed hybrid method has been validated on aluminium phantoms of varying size and object-to-detector distance, and found to be in good agreement with the MC code Geant4. The acceleration achieved by the hybrid method over standard MC on a single projection is approximately three orders of magnitude.
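The Richardson-Lucy deconvolution step mentioned above can be sketched in one dimension (the signal and kernel are illustrative; the paper applies the iteration to the noisy forced-detection single-scatter estimate):

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=200):
    """Richardson-Lucy iteration (1-D, circular convolution via FFT):
        u <- u * [ (observed / (u (x) psf)) (x) psf_mirror ]"""
    n = len(observed)
    H = np.fft.rfft(psf)
    conv = lambda f, K: np.fft.irfft(np.fft.rfft(f) * K, n)
    u = np.full(n, observed.mean())
    for _ in range(n_iter):
        ratio = observed / np.maximum(conv(u, H), 1e-12)
        u = u * conv(ratio, np.conj(H))   # conj(H) = FFT of mirrored psf
    return u

# Toy example: a two-peak signal on a flat background, blurred by a
# (wrapped) Gaussian kernel, then restored.
n = 128
x = np.arange(n, dtype=float)
truth = 0.1 + np.exp(-(x - 40.0)**2 / 8.0) + 0.5 * np.exp(-(x - 80.0)**2 / 8.0)
kern = np.exp(-np.minimum(x, n - x)**2 / 30.0)
kern /= kern.sum()
blurred = np.fft.irfft(np.fft.rfft(truth) * np.fft.rfft(kern), n)
restored = richardson_lucy(blurred, kern)
```

The multiplicative update preserves non-negativity, which suits scatter intensities; in practice the iteration count must be limited to avoid amplifying noise.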
Rayleigh scatter in kilovoltage x-ray imaging: is the independent atom approximation good enough?
NASA Astrophysics Data System (ADS)
Poludniowski, G.; Evans, P. M.; Webb, S.
2009-11-01
Monte Carlo simulation is the gold standard method for modelling scattering processes in medical x-ray imaging. General-purpose Monte Carlo codes, however, typically use the independent atom approximation (IAA). This is known to be inaccurate for Rayleigh scattering, for many materials, in the forward direction. This work addresses whether the IAA is sufficient for the typical modelling tasks in medical kilovoltage x-ray imaging. As a means of comparison, we incorporate a more realistic 'interference function' model into a custom-written Monte Carlo code. First, we conduct simulations of scatter from isolated voxels of soft tissue, adipose, cortical bone and spongiosa. Then, we simulate scatter profiles from a cylinder of water and from phantoms of a patient's head, thorax and pelvis, constructed from diagnostic-quality CT data sets. Lastly, we reconstruct CT numbers from simulated sets of projection images and investigate the quantitative effects of the approximation. We show that the IAA can produce errors of several per cent of the total scatter, across a projection image, for typical x-ray beams and patients. The errors in reconstructed CT number, however, for the phantoms simulated, were small (typically < 10 HU). The IAA can therefore be considered sufficient for the modelling of scatter correction in CT imaging. Where accurate quantitative estimates of scatter in individual projection images are required, however, the appropriate interference functions should be included.
On simplified atmospheric correction procedures for shortwave bands of satellite images.
Song, J.; Lu, D.; Wesely, M. L.; Environmental Research; Northern Illinois Univ.; Jackson State Univ.
2003-05-01
Accurate corrections of Normalized Difference Vegetation Index (NDVI) for atmospheric effects are currently based on modeling the physical behavior of radiation as it passes through the atmosphere. An important requirement for application of the physical models is detailed information on atmospheric humidity and particles. Here, a method is described for making atmospheric corrections without the need for detailed atmospheric observations. A simplified approach for making atmospheric corrections to reflectances observed from satellites is developed by using the unique spectral signature of water pixels in satellite images. A radiative transfer model is applied to a variety of clear-sky conditions to generate functional relationships between the radiation due to the atmospheric scattering above water bodies and atmospheric radiative properties. Test cases indicate that the resulting estimates of surface reflectances and NDVI agree well with estimates made using a radiative transfer model applied independently and with measurements made at the surface.
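As a rough sketch of the water-pixel idea, assume (as a toy simplification of the paper's model-derived relationships) that the darkest water pixel in each band measures pure atmospheric path radiance that can simply be subtracted; all reflectance values below are invented:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def dark_water_correction(band):
    """Toy path-radiance removal: subtract the darkest (water) pixel,
    assumed to contain only atmospheric scattering."""
    return np.clip(band - band.min(), 0.0, None)

# hypothetical apparent (top-of-atmosphere) reflectances; the last pixel is water
red = np.array([0.10, 0.12, 0.30, 0.05])
nir = np.array([0.45, 0.50, 0.35, 0.06])
ndvi_raw = ndvi(nir, red)
# water pixel excluded after correction (it is driven to zero in both bands)
ndvi_corrected = ndvi(dark_water_correction(nir)[:3],
                      dark_water_correction(red)[:3])
```

Because the additive haze term suppresses band contrast, removing it raises the NDVI of vegetated pixels, which is the direction of correction the paper reports.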
Timebias corrections to predictions
NASA Technical Reports Server (NTRS)
Wood, Roger; Gibbs, Philip
1993-01-01
The importance to a satellite laser ranging (SLR) observer of accurate knowledge of the time bias corrections to predicted orbits, especially for low satellites, is highlighted. Sources of time bias values and the optimum strategy for extrapolation are discussed from the viewpoint of the observer wishing to maximize the chances of getting returns from the next pass. What is said may be seen as a commercial encouraging wider and speedier use of existing data centers for mutually beneficial exchange of time bias data.
Correction of WindScat Scatterometric Measurements by Combining with AMSR Radiometric Data
NASA Technical Reports Server (NTRS)
Song, S.; Moore, R. K.
1996-01-01
The SeaWinds scatterometer on the Advanced Earth Observing Satellite-2 (ADEOS-2) will determine surface wind vectors by measuring the radar cross section. Multiple measurements will be made at different points in a wind-vector cell. When dense clouds and rain are present, the signal will be attenuated, thereby giving erroneous results for the wind. This report describes algorithms to use with the Advanced Microwave Scanning Radiometer (AMSR) on ADEOS-2 to correct for the attenuation. One can determine attenuation from a radiometer measurement based on the excess brightness temperature measured, that is, the difference between the total measured brightness temperature and the contribution from surface emission. A major problem that the algorithm must address is determining the surface contribution. Two basic approaches were developed for this: one using the scattering coefficient measured along with the brightness temperature, and the other using the brightness temperature alone. For both methods, best results will occur if the wind from the preceding wind-vector cell can be used as an input to the algorithm. In the method based on the scattering coefficient, we need the wind direction from the preceding cell; in the method using brightness temperature alone, we need the wind speed from the preceding cell. If neither is available, the algorithm can still work, but the corrections will be less accurate. Both correction methods require iterative solutions. Simulations show that the algorithms make significant improvements in the measured scattering coefficient and thus in the retrieved wind vector. For stratiform rains, the errors without correction can be quite large, so the correction makes a major improvement. For systems of separated convective cells, the initial error is smaller and the correction, although about the same percentage, has a smaller effect.
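A toy single-layer version of the excess-brightness-temperature correction might look as follows; the effective atmospheric temperature, the emission model and all numbers are assumptions for illustration, not the report's actual algorithm:

```python
import numpy as np

T_ATM = 275.0  # assumed effective atmospheric temperature, K

def excess_tb(tau):
    """Upwelling emission of an absorbing layer with optical depth tau."""
    return T_ATM * (1.0 - np.exp(-tau))

def correct_sigma0(sigma0_measured_db, tb_excess):
    """Invert the excess brightness temperature for optical depth,
    then remove the two-way attenuation from the radar cross section."""
    tau = -np.log(1.0 - tb_excess / T_ATM)
    attenuation_db = 2.0 * tau * 10.0 / np.log(10.0)  # two-way path, in dB
    return sigma0_measured_db + attenuation_db

true_tau = 0.1
tb = excess_tb(true_tau)                      # what the radiometer would see
sigma0_corrected = correct_sigma0(-20.0, tb)  # ~0.87 dB added back
```

The real algorithms must additionally separate the surface emission from the total brightness temperature, which is where the iterative solutions come in.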
Simple analytic expressions for correcting the factorizable formula for Compton
NASA Astrophysics Data System (ADS)
Lajohn, L. A.; Pratt, R. H.
2016-05-01
The factorizable form of the relativistic impulse approximation (RIA) expression for Compton scattering doubly differential cross sections (DDCS) becomes progressively less accurate as the binding energy of the ejected electron increases. This expression, which we call the RKJ approximation, makes it possible to obtain the Compton profile (CP) from measured DDCS. We have derived three simple analytic expressions, each of which can be used to correct the RKJ error for the atomic K-shell CP obtained from DDCS for any atomic number Z. The most general expression is valid over a broad range of energy ω and scattering angle θ; a second, somewhat simpler expression is valid at very high ω over most θ; and the third, the simplest, is valid at small θ over a broad range of ω. We demonstrate that such expressions can yield a CP accurate to within 1% error over 99% of the electron momentum distribution range of the uranium K-shell CP. Since the K-shell contribution dominates the extremes of the whole-atom CP (where the RKJ error can exceed an order of magnitude), this region can be of concern in assessing the bonding properties of molecules as well as semiconducting materials.
NASA Astrophysics Data System (ADS)
Ouyang, Wei; Mao, Weijian; Li, Xuelei; Li, Wuqun
2014-08-01
The sound velocity inversion problem based on scattering theory is formulated in terms of a nonlinear integral equation associated with the scattered field. Because of its nonlinearity, in practice, linearization algorithms (Born/single-scattering approximation) are widely used to obtain an approximate inversion solution. However, the linearized strategy is not congruent with seismic wave propagation mechanics in strongly perturbed (heterogeneous) media. In order to partially dispense with the weak-perturbation assumption of the Born approximation, we present a new approach consisting of two steps: first, the forward scattering is handled by taking into account the second-order Born approximation, which is related to a generalized Radon transform (GRT) of the quadratic scattering potential; then a nonlinear quadratic inversion formula is derived by resorting to the inverse GRT. In our formulation, there is a significant quadratic term in the scattering potential, which provides an amplitude correction for inversion results beyond standard linear inversion. The numerical experiments demonstrate that linear single-scattering inversion is accurate in amplitude only for relative velocity perturbations of the background media up to 10%, and its inversion errors are unacceptable for perturbations beyond 10%. In contrast, the quadratic inversion gives more accurate amplitude-preserved recovery for perturbations up to 40%. Our inversion scheme is able to manage double-scattering effects by estimating a transmission factor from an integral over a small area, and therefore only a small portion of computational time is added to the original linear migration/inversion process.
Surface Consistent Finite Frequency Phase Corrections
NASA Astrophysics Data System (ADS)
Kimman, W. P.
2016-04-01
Static time-delay corrections are frequency independent and ignore velocity variations away from the assumed vertical ray-path through the subsurface. There is therefore a clear potential for improvement if the finite frequency nature of wave propagation can be properly accounted for. Such a method is presented here based on the Born approximation, the assumption of surface consistency, and the misfit of instantaneous phase. The concept of instantaneous phase lends itself very well for sweep-like signals, hence these are the focus of this study. Analytical sensitivity kernels are derived that accurately predict frequency dependent phase shifts due to P-wave anomalies in the near surface. They are quick to compute and robust near the source and receivers. An additional correction is presented that re-introduces the non-linear relation between model perturbation and phase delay, which becomes relevant for stronger velocity anomalies. The phase shift as a function of frequency is a slowly varying signal, its computation therefore doesn't require fine sampling even for broadband sweeps. The kernels reveal interesting features of the sensitivity of seismic arrivals to the near surface: small anomalies can have a relatively large impact resulting from the medium field term that is dominant near the source and receivers. Furthermore, even simple velocity anomalies can produce a distinct frequency dependent phase behaviour. Unlike statics, the predicted phase corrections are smooth in space. Verification with spectral element simulations shows an excellent match for the predicted phase shifts over the entire seismic frequency band. Applying the phase shift to the reference sweep corrects for wavelet distortion, making the technique akin to surface consistent deconvolution, even though no division in the spectral domain is involved. As long as multiple scattering is mild, surface consistent finite frequency phase corrections outperform traditional statics for moderately large
Surface consistent finite frequency phase corrections
NASA Astrophysics Data System (ADS)
Kimman, W. P.
2016-07-01
Static time-delay corrections are frequency independent and ignore velocity variations away from the assumed vertical ray path through the subsurface. There is therefore a clear potential for improvement if the finite frequency nature of wave propagation can be properly accounted for. Such a method is presented here based on the Born approximation, the assumption of surface consistency and the misfit of instantaneous phase. The concept of instantaneous phase lends itself very well for sweep-like signals, hence these are the focus of this study. Analytical sensitivity kernels are derived that accurately predict frequency-dependent phase shifts due to P-wave anomalies in the near surface. They are quick to compute and robust near the source and receivers. An additional correction is presented that re-introduces the nonlinear relation between model perturbation and phase delay, which becomes relevant for stronger velocity anomalies. The phase shift as a function of frequency is a slowly varying signal, its computation therefore does not require fine sampling even for broad-band sweeps. The kernels reveal interesting features of the sensitivity of seismic arrivals to the near surface: small anomalies can have a relatively large impact resulting from the medium field term that is dominant near the source and receivers. Furthermore, even simple velocity anomalies can produce a distinct frequency-dependent phase behaviour. Unlike statics, the predicted phase corrections are smooth in space. Verification with spectral element simulations shows an excellent match for the predicted phase shifts over the entire seismic frequency band. Applying the phase shift to the reference sweep corrects for wavelet distortion, making the technique akin to surface consistent deconvolution, even though no division in the spectral domain is involved. As long as multiple scattering is mild, surface consistent finite frequency phase corrections outperform traditional statics for moderately large
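The instantaneous phase that drives the misfit can be computed from an FFT-based analytic signal; the sketch below (generic, not the paper's implementation; sweep parameters invented) shows why a pure time delay produces a frequency-dependent phase shift on a sweep:

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (the discrete Hilbert-transform trick)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0  # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0      # keep the Nyquist bin
    return np.fft.ifft(spec * h)

def instantaneous_phase(x):
    return np.unwrap(np.angle(analytic_signal(x)))

# toy sweep (5 -> 25 Hz over 1 s); a small time delay produces a
# frequency-dependent phase shift, unlike a static time correction
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
sweep = np.sin(2.0 * np.pi * (5.0 * t + 10.0 * t ** 2))
delayed = np.roll(sweep, 10)  # 10-sample (10 ms) delay
dphi = instantaneous_phase(sweep) - instantaneous_phase(delayed)
dphi = (dphi + np.pi) % (2.0 * np.pi) - np.pi  # wrap to (-pi, pi]
```

At any instant the phase difference is roughly 2π·f(t)·Δt, so it grows along the sweep as the instantaneous frequency rises, which is exactly the behaviour a frequency-independent static cannot capture.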
Johnson, D
1940-03-22
In a recently published volume on "The Origin of Submarine Canyons" the writer inadvertently credited to A. C. Veatch an excerpt from a submarine chart actually contoured by P. A. Smith, of the U. S. Coast and Geodetic Survey. The chart in question is Chart IVB of Special Paper No. 7 of the Geological Society of America entitled "Atlantic Submarine Valleys of the United States and the Congo Submarine Valley, by A. C. Veatch and P. A. Smith," and the excerpt appears as Plate III of the volume first cited above. In view of the heavy labor involved in contouring the charts accompanying the paper by Veatch and Smith and the beauty of the finished product, it would be unfair to Mr. Smith to permit the error to go uncorrected. Excerpts from two other charts are correctly ascribed to Dr. Veatch. PMID:17839404
The importance of accurate atmospheric modeling
NASA Astrophysics Data System (ADS)
Payne, Dylan; Schroeder, John; Liang, Pang
2014-11-01
This paper focuses on the effect of atmospheric conditions on EO sensor performance using computer models. We show the importance of accurately modeling atmospheric effects for predicting the performance of an EO sensor. A simple example demonstrates how real conditions for several sites in China can significantly impact image correction, hyperspectral imaging, and remote sensing. The current state-of-the-art model for computing atmospheric transmission and radiance is MODTRAN® 5, developed by the US Air Force Research Laboratory and Spectral Sciences, Inc. Research by the US Air Force, Navy and Army resulted in the public release of LOWTRAN 2 in the early 1970s, and subsequent releases of LOWTRAN and MODTRAN® have continued to the present. The paper demonstrates the importance of using validated models and locally measured meteorological, atmospheric and aerosol conditions to accurately simulate atmospheric transmission and radiance. Frequently, default conditions are used, which can produce errors of as much as 75% in these values. This can have a significant impact on remote sensing applications.
Accurate measurement of unsteady state fluid temperature
NASA Astrophysics Data System (ADS)
Jaremkiewicz, Magdalena
2016-07-01
In this paper, two accurate methods for determining the transient fluid temperature are presented. Measurements were conducted for boiling water since its temperature is known. Initially the thermometers are at ambient temperature; they are then immersed suddenly into saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One of them is a K-type industrial thermometer widely available commercially. The temperature indicated by this thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new thermometer design was proposed and also used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheathed thermocouple located at its center. The fluid temperature was determined from measurements taken in the axis of the solid cylindrical element (housing) using the inverse space-marching method. Measurements of the transient temperature of air flowing through a wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results than measurements using industrial thermometers in conjunction with a simple temperature correction based on a first- or second-order inertia model. The comparison demonstrated that the new thermometer yields the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurements of fast-changing fluid temperature are possible thanks to the low-inertia thermometer and the fast space-marching method applied to solve the inverse heat conduction problem.
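For a first-order inertia thermometer, the correction mentioned above has the closed form T_fluid = T_meas + τ·dT_meas/dt; a numerical sketch with an assumed time constant:

```python
import numpy as np

def correct_first_order(time, t_measured, tau):
    """Recover fluid temperature from a first-order-lag thermometer:
    T_fluid = T_meas + tau * dT_meas/dt, with tau the sensor time constant."""
    return t_measured + tau * np.gradient(t_measured, time)

# toy step test: a thermometer at 20 degC plunged into 100 degC water
tau = 3.0                              # assumed time constant, s
time = np.linspace(0.0, 10.0, 2001)    # s
t_meas = 100.0 + (20.0 - 100.0) * np.exp(-time / tau)  # lagging reading
t_fluid = correct_first_order(time, t_meas, tau)       # ~100 degC everywhere
```

The raw reading is still more than 10 °C low several seconds after immersion, while the corrected trace recovers the true fluid temperature almost immediately; this is the sense in which the inertia-model correction "speeds up" a slow sensor.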
NNLOPS accurate associated HW production
NASA Astrophysics Data System (ADS)
Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia
2016-06-01
We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross Section Working Group.
Spatial frequency spectrum of the x-ray scatter distribution in CBCT projections
Bootsma, G. J.; Verhaegen, F.; Jaffray, D. A.
2013-11-15
the scatter distribution, with reduced sampling possible depending on the imaging scenario. Using a low-pass Butterworth filter, tuned with the SFW values, to denoise the scatter projection data generated from MC simulations using 10{sup 6} photons resulted in an error reduction of greater than 85% when estimating the scatter in single and multiple projections. Analysis showed that the use of a compensator helped reduce the error in estimating the scatter distribution from limited photon simulations by more than 37% when compared to the case without a compensator for the head and pelvis phantoms. Reconstructions of simulated head phantom projections corrected by the filtered and interpolated scatter estimates showed improvements in overall image quality. Conclusions: The spatial frequency content of the scatter distribution in CBCT is found to be contained within the low-frequency domain. The frequency content is modulated by both object and imaging parameters (ADD and compensator). The low-frequency nature of the scatter distribution allows a limited set of sine and cosine basis functions to be used to accurately represent the scatter signal in the presence of noise and reduced data sampling, decreasing MC-based scatter estimation time. Compensator-induced modulation of the scatter distribution reduces the frequency content and improves the fitting results.
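The tuned low-pass step can be sketched as a zero-phase frequency-domain Butterworth filter; the cutoff, noise level and scatter shape below are invented (in the paper the cutoff comes from the measured spatial-frequency widths):

```python
import numpy as np

def butterworth_lowpass(signal, cutoff_frac, order=4):
    """Zero-phase low-pass Butterworth applied in the frequency domain.
    cutoff_frac is the cutoff as a fraction of the Nyquist frequency."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0)  # cycles/sample, 0..0.5
    cutoff = 0.5 * cutoff_frac
    response = 1.0 / np.sqrt(1.0 + (freqs / cutoff) ** (2 * order))
    return np.fft.irfft(np.fft.rfft(signal) * response, n)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 512)
smooth_scatter = 100.0 * np.exp(-((x - 0.5) / 0.3) ** 2)   # low-frequency shape
noisy = smooth_scatter + rng.normal(0.0, 10.0, x.size)     # MC photon noise
denoised = butterworth_lowpass(noisy, cutoff_frac=0.05)
```

Because the true scatter profile lives almost entirely below the cutoff, the filter removes most of the simulated photon noise while leaving the profile intact, which is the mechanism behind the large error reductions quoted above.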
T. Horn, Y. Illieva, F. J. Klein, P. Nadel‐Turonski, R. Paremuzyan, S. Stepanyan
2011-10-01
Generalized Parton Distributions (GPDs) have become a key concept in our studies of hadron structure in QCD. The measurement of suitable experimental observables and the extraction of GPDs from these data is one of the high-priority 12 GeV programs at Jefferson Lab. Deeply Virtual Compton Scattering (DVCS) is generally thought of as the most promising channel for probing GPDs in the valence quark region. However, the inverse process, Timelike Compton Scattering (TCS), can provide an important complementary measurement, in particular of the real part of the Compton amplitude and power corrections at intermediate values of Q{sup 2}. The first studies of TCS using real tagged and quasi-real untagged photons were performed in Hall B at Jefferson Lab.
A novel scatter separation method for multi-energy x-ray imaging
NASA Astrophysics Data System (ADS)
Sossin, A.; Rebuffel, V.; Tabary, J.; Létang, J. M.; Freud, N.; Verger, L.
2016-06-01
X-ray imaging coupled with recently emerged energy-resolved photon counting detectors provides the ability to differentiate material components and to estimate their respective thicknesses. However, such techniques require highly accurate images. The presence of scattered radiation leads to a loss of spatial contrast and, more importantly, a bias in radiographic material imaging and artefacts in computed tomography (CT). The aim of the present study was to introduce and evaluate a partial attenuation spectral scatter separation approach (PASSSA) adapted for multi-energy imaging. This evaluation was carried out with the aid of numerical simulations provided by an internal simulation tool, Sindbad-SFFD. A simplified numerical thorax phantom placed in a CT geometry was used. The attenuation images and CT slices obtained from corrected data showed a remarkable increase in local contrast and internal structure detectability when compared to uncorrected images. Scatter induced bias was also substantially decreased. In terms of quantitative performance, the developed approach proved to be quite accurate as well. The average normalized root-mean-square error between the uncorrected projections and the reference primary projections was around 23%. The application of PASSSA reduced this error to around 5%. Finally, in terms of voxel value accuracy, an increase by a factor >10 was observed for most inspected volumes-of-interest, when comparing the corrected and uncorrected total volumes.
A novel scatter separation method for multi-energy x-ray imaging.
Sossin, A; Rebuffel, V; Tabary, J; Létang, J M; Freud, N; Verger, L
2016-06-21
X-ray imaging coupled with recently emerged energy-resolved photon counting detectors provides the ability to differentiate material components and to estimate their respective thicknesses. However, such techniques require highly accurate images. The presence of scattered radiation leads to a loss of spatial contrast and, more importantly, a bias in radiographic material imaging and artefacts in computed tomography (CT). The aim of the present study was to introduce and evaluate a partial attenuation spectral scatter separation approach (PASSSA) adapted for multi-energy imaging. This evaluation was carried out with the aid of numerical simulations provided by an internal simulation tool, Sindbad-SFFD. A simplified numerical thorax phantom placed in a CT geometry was used. The attenuation images and CT slices obtained from corrected data showed a remarkable increase in local contrast and internal structure detectability when compared to uncorrected images. Scatter induced bias was also substantially decreased. In terms of quantitative performance, the developed approach proved to be quite accurate as well. The average normalized root-mean-square error between the uncorrected projections and the reference primary projections was around 23%. The application of PASSSA reduced this error to around 5%. Finally, in terms of voxel value accuracy, an increase by a factor >10 was observed for most inspected volumes-of-interest, when comparing the corrected and uncorrected total volumes. PMID:27249312
How to accurately bypass damage
Broyde, Suse; Patel, Dinshaw J.
2016-01-01
Ultraviolet radiation can cause cancer through DNA damage — specifically, by linking adjacent thymine bases. Crystal structures show how the enzyme DNA polymerase η accurately bypasses such lesions, offering protection. PMID:20577203
How Accurate are SuperCOSMOS Positions?
NASA Astrophysics Data System (ADS)
Schaefer, Adam; Hunstead, Richard; Johnston, Helen
2014-02-01
Optical positions from the SuperCOSMOS Sky Survey have been compared in detail with accurate radio positions that define the second realisation of the International Celestial Reference Frame (ICRF2). The comparison was limited to the IIIaJ plates from the UK/AAO and Oschin (Palomar) Schmidt telescopes. A total of 1373 ICRF2 sources was used, with the sample restricted to stellar objects brighter than BJ = 20 and Galactic latitudes |b| > 10°. Position differences showed an rms scatter of
Vernon, M.F.
1983-07-01
The molecular-beam technique has been used in three different experimental arrangements to study a wide range of inter-atomic and molecular forces. Chapter 1 reports results of a low-energy (0.2 kcal/mole) elastic-scattering study of the He-Ar pair potential. The purpose of the study was to accurately characterize the shape of the potential in the well region, by scattering slow He atoms produced by expanding a mixture of He in N{sub 2} from a cooled nozzle. Chapter 2 contains measurements of the vibrational predissociation spectra and product translational energy for clusters of water, benzene, and ammonia. The experiments show that most of the product energy remains in the internal molecular motions. Chapter 3 presents measurements of the reaction Na + HCl → NaCl + H at collision energies of 5.38 and 19.4 kcal/mole. This is the first study to resolve both scattering angle and velocity for the reaction of a short-lived (16 nsec) electronic excited state. Descriptions are given of computer programs written to analyze molecular-beam expansions to extract information characterizing their velocity distributions, and to calculate accurate laboratory elastic-scattering differential cross sections accounting for the finite apparatus resolution. Experimental results which attempted to determine the efficiency of optically pumping the Li(2{sup 2}P{sub 3/2}) and Na(3{sup 2}P{sub 3/2}) excited states are given. A simple three-level model for predicting the steady-state fraction of atoms in the excited state is included.
Accurate Evaluation of Quantum Integrals
NASA Technical Reports Server (NTRS)
Galant, David C.; Goorvitch, D.
1994-01-01
Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
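The core idea, finite differences sharpened by Richardson extrapolation, can be shown on a derivative instead of an eigenvalue problem (a generic sketch, not the paper's scheme):

```python
import numpy as np

def second_derivative(f, x, h):
    """Central finite difference with O(h^2) truncation error."""
    return (f(x - h) - 2.0 * f(x) + f(x + h)) / h ** 2

def richardson(f, x, h):
    """One Richardson step cancels the leading h^2 error term,
    yielding an O(h^4) estimate from two O(h^2) ones."""
    d_h = second_derivative(f, x, h)
    d_h2 = second_derivative(f, x, h / 2.0)
    return (4.0 * d_h2 - d_h) / 3.0

# second derivative of sin at x=1 is -sin(1)
err_plain = abs(second_derivative(np.sin, 1.0, 0.1) + np.sin(1.0))
err_rich = abs(richardson(np.sin, 1.0, 0.1) + np.sin(1.0))
```

One extrapolation step here reduces the error by several orders of magnitude at the cost of a single extra evaluation, which is what makes the crude-mesh-then-extrapolate strategy in the abstract attractive.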
OPC modeling and correction solutions for EUV lithography
NASA Astrophysics Data System (ADS)
Word, James; Zuniga, Christian; Lam, Michael; Habib, Mohamed; Adam, Kostas; Oliver, Michael
2011-11-01
The introduction of EUV lithography into the semiconductor fabrication process will enable a continuation of Moore's law below the 22nm technology node. EUV lithography will, however, introduce new sources of patterning distortions which must be accurately modeled and corrected with software. Flare caused by scattered light in the projection optics results in pattern-density-dependent imaging errors. The combination of non-telecentric reflective optics with reflective reticles results in mask shadowing effects. Reticle absorber materials are likely to have non-zero reflectivity due to a need to balance absorber stack height with minimization of mask shadowing effects. Depending upon placement of adjacent fields on the wafer, reflectivity along their border can result in inter-field imaging effects near the edge of neighboring exposure fields. Finally, there are the ever-present optical proximity effects caused by diffraction-limited imaging and resist and etch process effects. To enable EUV lithography in production, it is expected that OPC will be called upon to compensate for most of these effects. With the anticipated small imaging error budgets at sub-22nm nodes it is highly likely that only full model-based OPC solutions will have the required accuracy. The authors will explore the current capabilities of model-based OPC software to model and correct for each of the EUV imaging effects. Modeling, simulation, and correction methodologies will be defined, and experimental results of a full model-based OPC flow for EUV lithography will be presented.
Automated spatial drift correction for EFTEM image series.
Schaffer, Bernhard; Grogger, Werner; Kothleitner, Gerald
2004-12-01
Energy filtering transmission electron microscopy (EFTEM) is a widely used technique in many areas of scientific research. Image contrast in energy-filtered images arises from specific scattering events such as the ionization of atoms. By combining a set of two or more images, relative sample thickness maps or elemental distribution maps can be easily created. It is also possible to acquire a whole series of energy-filtered images for more complex data analysis. However, whenever several images are combined to extract certain information, problems are introduced by sample drift between the exposures. In order to obtain artifact-free information, this spatial drift has to be corrected. Manual alignment by overlaying and shifting the images to find the best overlap is usually very accurate but extremely time-consuming for larger data sets. When large numbers of images are recorded in an EFTEM series, manual correction is no longer a reasonable option. Hence, automatic routines have been developed that are mostly based on the cross-correlation algorithm. Existing routines, however, sometimes fail and again make time-consuming manual adjustments necessary. In this paper we describe a new approach to the drift correction problem by incorporating a statistical treatment of the data, and we present our statistically determined spatial drift (SDSD) correction program. We show its improved performance by applying it to a typical EFTEM series data block. PMID:15556698
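The cross-correlation core on which such routines (including the statistical SDSD treatment) are built can be sketched as follows, on synthetic data:

```python
import numpy as np

def estimate_drift(ref, img):
    """Estimate integer-pixel drift via FFT cross-correlation.
    Returns the (row, col) shift that, applied to img with np.roll,
    re-aligns it with ref."""
    xc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
    peak = np.unravel_index(np.argmax(xc), xc.shape)
    # map peaks in the upper half of each axis to negative shifts
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, xc.shape)]
    return tuple(shift)

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
drifted = np.roll(ref, (3, -5), axis=(0, 1))  # simulate sample drift
print(estimate_drift(ref, drifted))
```

A statistical treatment becomes necessary when, as the paper notes, the raw correlation peak is ambiguous for noisy, low-contrast energy-filtered images.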
The beam stop array method to measure object scatter in digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Lee, Haeng-hwa; Kim, Ye-seul; Park, Hye-Suk; Kim, Hee-Joung; Choi, Jae-Gu; Choi, Young-Wook
2014-03-01
Scattered radiation is inevitably generated in the object. The distribution of the scattered radiation is influenced by object thickness, field size, object-to-detector distance, and primary energy. One approach to measuring scatter intensities involves measuring the signal detected under the shadow of the lead discs of a beam-stop array (BSA). The scatter measured by the BSA includes not only the scattered radiation within the object (object scatter), but also external scatter sources, which include the X-ray tube, detector, collimator, X-ray filter, and the BSA itself. Excluding background scattered radiation can be adapted to different scanner geometries by simple parameter adjustments without prior knowledge of the scanned object. In this study, a method using the BSA to differentiate scatter in the phantom (object scatter) from the external background was used. Furthermore, this method was applied within the BSA algorithm to correct for object scatter. In order to confirm the presence of background scattered radiation, we obtained the scatter profiles and scatter fraction (SF) profiles in the directions perpendicular to the chest wall edge (CWE) with and without scattering material. The scatter profiles with and without the scattering material were similar in the region between 127 mm and 228 mm from the chest wall, indicating that the scatter measured by the BSA included background scatter. Moreover, the BSA algorithm with the proposed method could correct the object scatter, because the total radiation profiles after object scatter correction corresponded to the original image in the region between 127 mm and 228 mm from the chest wall. As a result, the BSA method to measure object scatter can be used to remove background scatter and can be applied to different scanner geometries after background scatter correction. In conclusion, the BSA algorithm with the proposed method is effective in correcting object scatter.
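Per beam-stop shadow, the background subtraction reduces to simple arithmetic; the detector readings below are hypothetical:

```python
# Hypothetical detector readings (arbitrary units) for one beam-stop disc,
# illustrating the background-subtraction logic described in the abstract.
shadow_with_object = 120.0     # signal under the disc, scattering object present
shadow_no_object = 45.0        # same measurement without the object
primary_plus_scatter = 2000.0  # open-field signal with the object

# remove external (tube, collimator, detector, BSA) scatter, then form SF
object_scatter = shadow_with_object - shadow_no_object
scatter_fraction = object_scatter / primary_plus_scatter
print(scatter_fraction)  # 0.0375
```

Interpolating such per-disc estimates across the detector gives the object-scatter map that the BSA algorithm subtracts from the projection.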
Device accurately measures and records low gas-flow rates
NASA Technical Reports Server (NTRS)
Branum, L. W.
1966-01-01
Free-floating piston in a vertical column accurately measures and records low gas-flow rates. The system may be calibrated, using an adjustable flow-rate gas supply, a low pressure gage, and a sequence recorder. From the calibration rates, a nomograph may be made for easy reduction. Temperature correction may be added for further accuracy.
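The calibrate-then-correct workflow might be sketched as below; the calibration pairs and the ideal-gas form of the temperature correction are assumptions for illustration:

```python
import numpy as np

# Hypothetical calibration pairs: piston rise rate (mm/s) vs known flow (sccm),
# standing in for the nomograph built from the adjustable flow-rate supply.
rise_rate_cal = np.array([0.5, 1.0, 2.0, 4.0])
flow_cal = np.array([10.0, 21.0, 40.0, 83.0])

def flow_from_rise(rate, gas_temp_k, ref_temp_k=294.0):
    """Look up flow on the calibration curve, then apply an
    ideal-gas temperature correction (assumed form)."""
    flow = np.interp(rate, rise_rate_cal, flow_cal)
    return flow * ref_temp_k / gas_temp_k

print(flow_from_rise(1.0, 294.0))  # 21.0 at the reference temperature
```

The interpolation plays the role of reading the nomograph; the temperature ratio is the "further accuracy" correction the abstract mentions.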
Reversal of photon-scattering errors in atomic qubits.
Akerman, N; Kotler, S; Glickman, Y; Ozeri, R
2012-09-01
Spontaneous photon scattering by an atomic qubit is a notable example of environment-induced error and is a fundamental limit to the fidelity of quantum operations. In the scattering process, the qubit loses its distinctive and coherent character owing to its entanglement with the photon. Using a single trapped ion, we show that by utilizing the information carried by the photon, we are able to coherently reverse this process and correct for the scattering error. We further used quantum process tomography to characterize the photon-scattering error and its correction scheme and demonstrate a correction fidelity greater than 85% whenever a photon was measured. PMID:23005287
Reversal of Photon-Scattering Errors in Atomic Qubits
NASA Astrophysics Data System (ADS)
Akerman, N.; Kotler, S.; Glickman, Y.; Ozeri, R.
2012-09-01
Spontaneous photon scattering by an atomic qubit is a notable example of environment-induced error and is a fundamental limit to the fidelity of quantum operations. In the scattering process, the qubit loses its distinctive and coherent character owing to its entanglement with the photon. Using a single trapped ion, we show that by utilizing the information carried by the photon, we are able to coherently reverse this process and correct for the scattering error. We further used quantum process tomography to characterize the photon-scattering error and its correction scheme and demonstrate a correction fidelity greater than 85% whenever a photon was measured.
Speed-dependent collision effects on radar back-scattering from the ionosphere
NASA Technical Reports Server (NTRS)
Theimer, O.
1981-01-01
A computer code to accurately compute the fluctuation spectrum for linearly speed-dependent collision frequencies was developed. The effect of ignoring the speed dependence on the estimates of ionospheric parameters was determined. It is shown that disagreements between the rocket and incoherent-scatter estimates could be partially resolved if the correct speed dependence of the ion-neutral (i-n) collision frequency is taken into account. This problem is also relevant to the study of ionospheric irregularities in the auroral E-region and their effects on radio communication with satellites.
Evaluating the capability of time-of-flight cameras for accurately imaging a cyclically loaded beam
NASA Astrophysics Data System (ADS)
Lahamy, Hervé; Lichti, Derek; El-Badry, Mamdouh; Qi, Xiaojuan; Detchev, Ivan; Steward, Jeremy; Moravvej, Mohammad
2015-05-01
Time-of-flight cameras are used for diverse applications ranging from human-machine interfaces and gaming to robotics and earth topography. This paper aims at evaluating the capability of the Mesa Imaging SR4000 and the Microsoft Kinect 2.0 time-of-flight cameras for accurately imaging the top surface of a concrete beam subjected to fatigue loading in laboratory conditions. Whereas previous work has demonstrated the success of such sensors for measuring the response at point locations, the aim here is to measure the entire beam surface in support of the overall objective of evaluating the effectiveness of concrete beam reinforcement with steel fibre reinforced polymer sheets. After applying corrections for lens distortions to the data and differencing images over time to remove systematic errors due to internal scattering, the periodic deflections experienced by the beam have been estimated for the entire top surface of the beam and at attached witness plates. The results have been assessed by comparison with measurements from highly accurate laser displacement transducers. This study concludes that both the Microsoft Kinect 2.0 and the Mesa Imaging SR4000 are capable of sensing a moving surface with sub-millimeter accuracy once the image distortions have been modeled and removed.
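The per-pixel estimation of a periodic deflection from time-differenced range data might be sketched as below. This is an assumed reconstruction, not the paper's algorithm: it subtracts the temporal mean (standing in for the image-differencing step) and projects onto sine and cosine at the known loading frequency to recover the amplitude.

```python
import numpy as np

def periodic_amplitude(z, t, freq):
    """Least-squares amplitude of a sinusoid at a known loading frequency,
    for the range time series of a single pixel."""
    z = np.asarray(z, dtype=float) - np.mean(z)  # remove the static offset/bias
    c = np.cos(2.0 * np.pi * freq * t)
    s = np.sin(2.0 * np.pi * freq * t)
    a = np.sum(z * c) / np.sum(c * c)            # cosine-component coefficient
    b = np.sum(z * s) / np.sum(s * s)            # sine-component coefficient
    return np.hypot(a, b)                        # amplitude of the oscillation

# Synthetic pixel: 0.4 mm deflection at 3 Hz about a 5 mm static range
t = np.linspace(0.0, 2.0, 400, endpoint=False)
z = 0.4 * np.sin(2.0 * np.pi * 3.0 * t + 0.7) + 5.0
amp = periodic_amplitude(z, t, freq=3.0)
```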
Accurate metacognition for visual sensory memory representations.
Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F
2014-04-01
The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition (the degree of knowledge that subjects have about the correctness of their decisions) for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception. PMID:24549293
Empirical corrections for atmospheric neutral density derived from thermospheric models
NASA Astrophysics Data System (ADS)
Forootan, Ehsan; Kusche, Jürgen; Börger, Klaus; Henze, Christina; Löcher, Anno; Eickmans, Marius; Agena, Jens
2016-04-01
Accurately predicting satellite positions is a prerequisite for various applications from space situational awareness to precise orbit determination (POD). Given the fact that atmospheric drag represents a dominant influence on the position of low-Earth orbit objects, an accurate evaluation of thermospheric mass density is of great importance to low Earth orbital prediction. Over decades, various empirical atmospheric models have been developed to support computation of density changes within the atmosphere. The quality of these models is, however, restricted mainly due to the complexity of atmospheric density changes and the limited resolution of indices used to account for atmospheric temperature and neutral density changes caused by solar and geomagnetic activity. Satellite missions, such as Challenging Mini-Satellite Payload (CHAMP) and Gravity Recovery and Climate Experiment (GRACE), provide a direct measurement of non-conservative accelerations, acting on the surface of satellites. These measurements provide valuable data for improving our knowledge of thermosphere density and winds. In this paper we present two empirical frameworks to correct model-derived neutral density simulations by the along-track thermospheric density measurements of CHAMP and GRACE. First, empirical scale factors are estimated by analyzing daily CHAMP and GRACE acceleration measurements and are used to correct the density simulation of Jacchia and MSIS (Mass-Spectrometer-Incoherent-Scatter) thermospheric models. The evolution of daily scale factors is then related to solar and magnetic activity enabling their prediction in time. In the second approach, principal component analysis (PCA) is applied to extract the dominant modes of differences between CHAMP/GRACE observations and thermospheric model simulations. Afterwards an adaptive correction procedure is used to account for long-term and high-frequency differences. We conclude the study by providing recommendations on possible
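The first framework above fits a single multiplicative factor per day between observed and model-derived densities. A minimal sketch of that step, with hypothetical along-track values (the helper name and numbers are illustrative, not from the paper):

```python
import numpy as np

def daily_scale_factor(rho_obs, rho_model):
    """Least-squares multiplicative factor for one day of along-track
    densities: minimizes || rho_obs - s * rho_model ||^2 over s."""
    rho_obs = np.asarray(rho_obs, dtype=float)
    rho_model = np.asarray(rho_model, dtype=float)
    return np.dot(rho_model, rho_obs) / np.dot(rho_model, rho_model)

# Hypothetical densities (kg m^-3); here the model underestimates by 20%
rho_model = np.array([2.0e-12, 3.0e-12, 2.5e-12])
rho_obs = 1.2 * rho_model
s = daily_scale_factor(rho_obs, rho_model)
rho_corrected = s * rho_model   # scaled model densities for this day
```

A time series of such daily factors is what would then be related to solar and geomagnetic activity indices for prediction.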
A weak-scattering model for turbine-tone haystacking
NASA Astrophysics Data System (ADS)
McAlpine, A.; Powles, C. J.; Tester, B. J.
2013-08-01
Noise and emissions are critical technical issues in the development of aircraft engines. This necessitates the development of accurate models to predict the noise radiated from aero-engines. Turbine tones radiated from the exhaust nozzle of a turbofan engine propagate through turbulent jet shear layers which causes scattering of sound. In the far-field, measurements of the tones may exhibit spectral broadening, where owing to scattering, the tones are no longer narrow band peaks in the spectrum. This effect is known colloquially as 'haystacking'. In this article a comprehensive analytical model to predict spectral broadening for a tone radiated through a circular jet, for an observer in the far field, is presented. This model extends previous work by the authors which considered the prediction of spectral broadening at far-field observer locations outside the cone of silence. The modelling uses high-frequency asymptotic methods and a weak-scattering assumption. A realistic shear layer velocity profile and turbulence characteristics are included in the model. The mathematical formulation which details the spectral broadening, or haystacking, of a single-frequency, single azimuthal order turbine tone is outlined. In order to validate the model, predictions are compared with experimental results, albeit only at a polar angle of 90°. A range of source frequencies from 4 to 20 kHz, and jet velocities from 20 to 60 m s⁻¹, are examined for validation purposes. The model correctly predicts how the spectral broadening is affected when the source frequency and jet velocity are varied.
Roy-Steiner-equation analysis of pion-nucleon scattering
NASA Astrophysics Data System (ADS)
Hoferichter, Martin; Ruiz de Elvira, Jacobo; Kubis, Bastian; Meißner, Ulf-G.
2016-04-01
We review the structure of Roy-Steiner equations for pion-nucleon scattering, the solution for the partial waves of the t-channel process ππ → N ¯ N, as well as the high-accuracy extraction of the pion-nucleon S-wave scattering lengths from data on pionic hydrogen and deuterium. We then proceed to construct solutions for the lowest partial waves of the s-channel process πN → πN and demonstrate that accurate solutions can be found if the scattering lengths are imposed as constraints. Detailed error estimates of all input quantities in the solution procedure are performed and explicit parameterizations for the resulting low-energy phase shifts as well as results for subthreshold parameters and higher threshold parameters are presented. Furthermore, we discuss the extraction of the pion-nucleon σ-term via the Cheng-Dashen low-energy theorem, including the role of isospin-breaking corrections, to obtain a precision determination consistent with all constraints from analyticity, unitarity, crossing symmetry, and pionic-atom data. We perform the matching to chiral perturbation theory in the subthreshold region and detail the consequences for the chiral convergence of the threshold parameters and the nucleon mass.
NASA Astrophysics Data System (ADS)
Brochu, Frederic M.; Joseph, James; Tomaszewski, Michal R.; Bohndiek, Sarah E.
2016-03-01
Optoacoustic Tomography is a fast-developing imaging modality, combining the high resolution and penetration depth of ultrasound detection with the high contrast available from optical absorption in tissue. The spectral profile of near infrared excitation light used in optoacoustic tomography instruments is modified by absorption and scattering as it propagates deep into biological tissue. The resulting images therefore provide only qualitative insight into the distribution of tissue chromophores. Knowledge of the spectral profile of excitation light across the mouse is needed for accurate determination of the absorption coefficient in vivo. Under the conditions of constant Grueneisen parameter and accurate knowledge of the light fluence, a linear relationship should exist between the initial optoacoustic pressure amplitude and the tissue absorption coefficient. Using data from a commercial optoacoustic tomography system, we implemented an iterative optimization based on the δ-Eddington approximation to the Radiative Transfer Equation to derive a light fluence map within a given object. We segmented the images based on the positions of phantom inclusions, or mouse organs, and used known scattering coefficients for initialization. Performing the fluence correction in simple phantoms allowed the expected linear relationship between recorded and independently measured absorption coefficients to be retrieved and spectral coloring to be compensated. For in vivo data, the correction resulted in an enhancement of signal intensities in deep tissues. This improved our ability to visualize organs at depth (>5 mm). Future work will aim to perform the optimization without data normalization and explore the need for methodology that enables routine implementation for in vivo imaging.
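The linear relationship invoked above, p0 = Γ · μ_a · Φ (initial pressure equals Grueneisen parameter times absorption coefficient times fluence), makes the per-voxel correction itself trivial once a fluence map is available. A sketch with hypothetical numbers (the function name and values are illustrative):

```python
def fluence_corrected_mu_a(p0, fluence, grueneisen=0.2):
    """Invert p0 = Gamma * mu_a * Phi for the absorption coefficient,
    assuming a constant Grueneisen parameter Gamma."""
    return p0 / (grueneisen * fluence)

# A deep voxel receiving only 10% of the surface fluence: without the
# fluence map its absorption would be underestimated tenfold.
mu_a = fluence_corrected_mu_a(p0=0.02, fluence=0.1, grueneisen=0.2)
```

The hard part, of course, is the fluence map Φ itself, which is what the iterative delta-Eddington optimization provides.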
Low Scatter, High Kilovolt, A-Si Flat Panel X-Ray Detector
NASA Astrophysics Data System (ADS)
Smith, Peter D.; Claytor, Thomas N.; Berry, Phillip C.; Hills, Charles R.; Keating, Scott C.; Phillips, David H.; Setoodeh, Shariar
2009-03-01
We have been using amorphous silicon (a-Si) flat panel detectors in high energy (>400 kV) cone beam computed tomography (CT) applications for a number of years. We have found that these detectors have a significant amount of internal scatter that degrades the accuracy of attenuation images. The scatter errors cause cupping and streaking artifacts that are practically indistinguishable from beam hardening artifacts. Residual artifacts remain after beam hardening correction and over correction increases noise in CT reconstructions. Another important limitation of existing detectors is that they have a high failure rate, especially when operating at megavolt x-ray energies even with a well collimated beam. Due to the limitations of the current detectors, we decided to design a detector specifically for high energies that has significantly reduced scatter. In collaboration with IMTEC, we have built a prototype amorphous silicon flat panel detector that has both improved imaging response and increased lifetime. LANL's contribution is the "transparent panel concept" (patent pending), in which structures in the x-ray beam path are either eliminated or made as transparent to x-rays as practical (low atomic number and low areal density). This reduces scatter, makes attenuation measurements more accurate, improves the ability to make corrections for beam hardening, and increases signal to noise ratio in DR images and CT reconstructions. IMTEC's contribution is an improved shielding design that will increase the lifetime of the panel. Preliminary results showing the dramatic reduction in self scatter from the panel will be presented as well as the effect of this improvement on CT images.
LOW SCATTER, HIGH KILOVOLT, A-SI FLAT PANEL X-RAY DETECTOR
Smith, Peter D.; Claytor, Thomas N.; Berry, Phillip C.; Hills, Charles R.; Keating, Scott C.; Phillips, David H.; Setoodeh, Shariar
2009-03-03
We have been using amorphous silicon (a-Si) flat panel detectors in high energy (>400 kV) cone beam computed tomography (CT) applications for a number of years. We have found that these detectors have a significant amount of internal scatter that degrades the accuracy of attenuation images. The scatter errors cause cupping and streaking artifacts that are practically indistinguishable from beam hardening artifacts. Residual artifacts remain after beam hardening correction and over correction increases noise in CT reconstructions. Another important limitation of existing detectors is that they have a high failure rate, especially when operating at megavolt x-ray energies even with a well collimated beam. Due to the limitations of the current detectors, we decided to design a detector specifically for high energies that has significantly reduced scatter. In collaboration with IMTEC, we have built a prototype amorphous silicon flat panel detector that has both improved imaging response and increased lifetime. LANL's contribution is the "transparent panel concept" (patent pending), in which structures in the x-ray beam path are either eliminated or made as transparent to x-rays as practical (low atomic number and low areal density). This reduces scatter, makes attenuation measurements more accurate, improves the ability to make corrections for beam hardening, and increases signal to noise ratio in DR images and CT reconstructions. IMTEC's contribution is an improved shielding design that will increase the lifetime of the panel. Preliminary results showing the dramatic reduction in self scatter from the panel will be presented as well as the effect of this improvement on CT images.
Markerless attenuation correction for carotid MRI surface receiver coils in combined PET/MR imaging.
Eldib, Mootaz; Bini, Jason; Robson, Philip M; Calcagno, Claudia; Faul, David D; Tsoumpas, Charalampos; Fayad, Zahi A
2015-06-21
The purpose of the study was to evaluate the effect of attenuation of MR coils on quantitative carotid PET/MR exams. Additionally, an automated attenuation correction method for flexible carotid MR coils was developed and evaluated. The attenuation of the carotid coil was measured by imaging a uniform water phantom injected with 37 MBq of 18F-FDG in a combined PET/MR scanner for 24 min with and without the coil. In the same session, an ultra-short echo time (UTE) image of the coil on top of the phantom was acquired. Using a combination of rigid and non-rigid registration, a CT-based attenuation map was registered to the UTE image of the coil for attenuation and scatter correction. After phantom validation, the effect of the carotid coil attenuation and the attenuation correction method were evaluated in five subjects. Phantom studies indicated that the overall loss of PET counts due to the coil was 6.3% with local region-of-interest (ROI) errors reaching up to 18.8%. Our registration method to correct for attenuation from the coil decreased the global error and local error (ROI) to 0.8% and 3.8%, respectively. The proposed registration method accurately captured the location and shape of the coil with a maximum spatial error of 2.6 mm. Quantitative analysis in human studies correlated with the phantom findings, but was dependent on the size of the ROI used in the analysis. MR coils result in significant error in PET quantification and thus attenuation correction is needed. The proposed strategy provides an operator-free method for attenuation and scatter correction for a flexible MRI carotid surface coil for routine clinical use. PMID:26020273
Markerless attenuation correction for carotid MRI surface receiver coils in combined PET/MR imaging
NASA Astrophysics Data System (ADS)
Eldib, Mootaz; Bini, Jason; Robson, Philip M.; Calcagno, Claudia; Faul, David D.; Tsoumpas, Charalampos; Fayad, Zahi A.
2015-06-01
The purpose of the study was to evaluate the effect of attenuation of MR coils on quantitative carotid PET/MR exams. Additionally, an automated attenuation correction method for flexible carotid MR coils was developed and evaluated. The attenuation of the carotid coil was measured by imaging a uniform water phantom injected with 37 MBq of 18F-FDG in a combined PET/MR scanner for 24 min with and without the coil. In the same session, an ultra-short echo time (UTE) image of the coil on top of the phantom was acquired. Using a combination of rigid and non-rigid registration, a CT-based attenuation map was registered to the UTE image of the coil for attenuation and scatter correction. After phantom validation, the effect of the carotid coil attenuation and the attenuation correction method were evaluated in five subjects. Phantom studies indicated that the overall loss of PET counts due to the coil was 6.3% with local region-of-interest (ROI) errors reaching up to 18.8%. Our registration method to correct for attenuation from the coil decreased the global error and local error (ROI) to 0.8% and 3.8%, respectively. The proposed registration method accurately captured the location and shape of the coil with a maximum spatial error of 2.6 mm. Quantitative analysis in human studies correlated with the phantom findings, but was dependent on the size of the ROI used in the analysis. MR coils result in significant error in PET quantification and thus attenuation correction is needed. The proposed strategy provides an operator-free method for attenuation and scatter correction for a flexible MRI carotid surface coil for routine clinical use.
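The global coil-attenuation figure quoted above is simply the fractional loss of PET counts with the coil in place. A sketch, with hypothetical count totals chosen so the result reproduces the reported 6.3% global loss:

```python
def count_loss_percent(counts_without_coil, counts_with_coil):
    """Percentage of PET counts lost to attenuation by the MR coil."""
    return 100.0 * (counts_without_coil - counts_with_coil) / counts_without_coil

# Hypothetical total counts from the two 24-min phantom acquisitions
loss = count_loss_percent(counts_without_coil=1.0e6, counts_with_coil=0.937e6)
```

The same expression applied per region of interest gives the local errors, which the study found could reach 18.8% before correction.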
Accurate shear measurement with faint sources
Zhang, Jun; Foucaud, Sebastien; Luo, Wentao E-mail: walt@shao.ac.cn
2015-01-01
For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work in this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions about the morphologies of the galaxy and the PSF. The remaining major source of error is source Poisson noise, due to the finiteness of the source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.
Accurate upwind methods for the Euler equations
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1993-01-01
A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one intermediate state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
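The median function mentioned above simplifies slope constraints because of the identity minmod(x, y) = median(x, y, 0). The sketch below uses that identity in a standard MC-type limited slope; it illustrates the idea of a median-based monotonicity constraint, not Huynh's specific constraint, whose details differ.

```python
def median3(a, b, c):
    """Median of three numbers."""
    return sorted((a, b, c))[1]

def minmod(x, y):
    """minmod(x, y) = median(x, y, 0): zero when the arguments disagree
    in sign, otherwise the one of smaller magnitude."""
    return median3(x, y, 0.0)

def limited_slope(u_left, u_center, u_right):
    """MC-type monotonicity-constrained slope for a piecewise-linear
    reconstruction: the central slope, limited by the one-sided slopes."""
    d_minus = u_center - u_left
    d_plus = u_right - u_center
    central = 0.5 * (d_minus + d_plus)
    return minmod(central, 2.0 * minmod(d_minus, d_plus))

smooth = limited_slope(1.0, 2.0, 3.0)    # smooth monotone data: central slope kept
extremum = limited_slope(1.0, 2.0, 1.0)  # local extremum: slope clipped to zero
```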
NASA Astrophysics Data System (ADS)
Kolomiets, Sergey; Gorelik, Andrey
Mie’s waves while sounding within coincident volumes. Being sensitive to the size of scatters, Mie’s waves can give us additional information about particle size distribution. But how about using several wavelengths corresponding to Rayleigh’s diffraction on scatters only? Can any effects be detected in such a case and what performance characteristics of the equipment are required to detect them? The deceptive simplicity of the negative answer to the first part of the question posed will disappear if one collects different definitions of Rayleigh's scattering and consider them more closely than usually. Several definitions borrowed from the introductory texts and most popular textbooks and articles can be seen as one of the reasons for the research presented in the report. Hopefully, based on the comparison of them all, anyone could easily conclude that Rayleigh's scattering has been analyzed extensively, but despite this extensive analysis made fundamental ambiguities in introductory texts are not eliminated completely to date. Moreover, there may be found unreasonably many examples on how these ambiguities have already caused an error to be foreseen, published on the one article, amplified in another one, then cited with approval in the third one, before being finally corrected. Everything indicated that in the light of all the lesions learned and based on modern experimental data, it is time to address these issues again. After the discussion of ambiguities of Rayleigh's scattering concepts, the development of the corrections to original ideas looks relatively easy. In particular, there may be distinguished at least three characteristic regions of the revised models application from the point of view of the scattered field statistical averaging. The authors of the report suggest naming them Rayleigh’s region, Einstein’s region and the region with compensations of the scattering intensity. The most important fact is that the limits of applicability of all
3-D Monte Carlo-Based Scatter Compensation in Quantitative I-131 SPECT Reconstruction
Dewaraja, Yuni K.; Ljungberg, Michael; Fessler, Jeffrey A.
2010-01-01
We have implemented highly accurate Monte Carlo based scatter modeling (MCS) with 3-D ordered subsets expectation maximization (OSEM) reconstruction for I-131 single photon emission computed tomography (SPECT). The scatter is included in the statistical model as an additive term and attenuation and detector response are included in the forward/backprojector. In the present implementation of MCS, a simple multiple window-based estimate is used for the initial iterations and in the later iterations the Monte Carlo estimate is used for several iterations before it is updated. For I-131, MCS was evaluated and compared with triple energy window (TEW) scatter compensation using simulation studies of a mathematical phantom and a clinically realistic voxel-phantom. Even after just two Monte Carlo updates, excellent agreement was found between the MCS estimate and the true scatter distribution. Accuracy and noise of the reconstructed images were superior with MCS compared to TEW. However, the improvement was not large, and in some cases may not justify the large computational requirements of MCS. Furthermore, it was shown that the TEW correction could be improved for most of the targets investigated here by applying a suitably chosen scaling factor to the scatter estimate. Finally clinical application of MCS was demonstrated by applying the method to an I-131 radioimmunotherapy (RIT) patient study. PMID:20104252
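Including the scatter estimate as an additive term in the statistical model, as described above, means the forward projection becomes A·x + s inside each update. A toy ML-EM sketch of that structure (a 2-pixel, 2-bin, noise-free system with made-up numbers; the real implementation uses OSEM with attenuation and detector response in the projectors):

```python
import numpy as np

def mlem_update(x, y, A, scatter):
    """One ML-EM update with the scatter estimate as an additive term:
    y ~ Poisson(A x + s)."""
    forward = A @ x + scatter          # forward projection plus scatter
    ratio = y / forward                # measured-to-modeled ratio
    return x * (A.T @ ratio) / A.sum(axis=0)

# Tiny system: 2 image pixels, 2 projection bins, known scatter background
A = np.array([[0.8, 0.2],
              [0.2, 0.8]])
x_true = np.array([10.0, 5.0])
scatter = np.array([1.0, 1.0])
y = A @ x_true + scatter               # noise-free data for illustration

x = np.ones(2)                         # uniform initial estimate
for _ in range(5000):
    x = mlem_update(x, y, A, scatter)  # converges toward x_true
```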
Chemical short range order and magnetic correction in liquid manganese-gallium zero alloy
NASA Astrophysics Data System (ADS)
Grosdidier, B.; Ben Abdellah, A.; Osman, S. M.; Ataati, J.; Gasser, J. G.
2015-12-01
The Mn66Ga34 alloy at this particular composition is known to be a zero alloy, in which the linear combination of the two neutron scattering lengths weighted by the atomic compositions vanishes. Thus, for this specific concentration, the effect of the partial structure factors SNN and SNC is cancelled because their weighting term is zero. The measured total structure factor S(q) then gives directly the concentration-concentration structure factor SCC(q). We present here the first experimental results of neutron diffraction on the Mn66Ga34 "null matrix alloy" at 1050 °C. The main peak of the experimental SCC(q) gives strong evidence of a hetero-atomic chemical order in this coordinated alloy. This order also appears in the real-space radial distribution function, which is calculated by the Fourier transform of the structure factor. The degree of hetero-coordination is discussed together with other manganese-polyvalent alloys. However, manganese also shows anomalous magnetic scattering in the alloy structure factor, which must be corrected. This correction gives experimental information on the mean effective spin of manganese in this liquid alloy. We present the first critical theoretical calculations of the magnetic correction factor in the Mn-Ga zero alloy based on our accurate experimental measurements of SCC(q).
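The zero-alloy condition is simply c_A·b_A + (1 - c_A)·b_B = 0, which has a solution because Mn's bound coherent scattering length is negative while Ga's is positive. A quick check with the tabulated scattering lengths (b_Mn ≈ -3.73 fm, b_Ga ≈ 7.288 fm) recovers the Mn66Ga34 composition:

```python
def zero_alloy_fraction(b_a, b_b):
    """Atomic fraction c_a at which the composition-weighted mean coherent
    scattering length c_a*b_a + (1 - c_a)*b_b vanishes."""
    return b_b / (b_b - b_a)

# Bound coherent neutron scattering lengths (fm)
b_mn, b_ga = -3.73, 7.288
c_mn = zero_alloy_fraction(b_mn, b_ga)      # about 0.66, i.e. Mn66Ga34
mean_b = c_mn * b_mn + (1.0 - c_mn) * b_ga  # vanishes by construction
```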
Optical computed tomography of radiochromic gels for accurate three-dimensional dosimetry
NASA Astrophysics Data System (ADS)
Babic, Steven
In this thesis, three-dimensional (3-D) radiochromic Ferrous Xylenol-orange (FX) and Leuco Crystal Violet (LCV) micelle gels were imaged by laser and cone-beam (Vista™) optical computed tomography (CT) scanners. The objective was to develop optical CT of radiochromic gels for accurate 3-D dosimetry of intensity-modulated radiation therapy (IMRT) and small field techniques used in modern radiotherapy. First, the cause of a threshold dose response in FX gel dosimeters when scanned with a yellow light source was determined. This effect stems from a spectral sensitivity to multiple chemical complexes between ferric ions and xylenol-orange that are present at different dose levels. To negate the threshold dose, an initial concentration of ferric ions is needed in order to shift the chemical equilibrium so that additional dose results in a linear production of a coloured complex that preferentially absorbs at longer wavelengths. Second, a low-diffusion leuco-based radiochromic gel consisting of Triton X-100 micelles was developed. The diffusion coefficient of the LCV micelle gel was found to be minimal (0.036 ± 0.001 mm² hr⁻¹). Although a dosimetric characterization revealed a reduced sensitivity to radiation, this was offset by a lower auto-oxidation rate and base optical density, higher melting point and no spectral sensitivity. Third, the Radiological Physics Centre (RPC) head-and-neck IMRT protocol was extended to 3-D dose verification using laser and cone-beam (Vista™) optical CT scans of FX gels. Both optical systems yielded comparable measured dose distributions in high-dose regions and low gradients. The FX gel dosimetry results were cross-checked against independent thermoluminescent dosimeter and GAFChromic™ EBT film measurements made by the RPC. It was shown that optical CT scanned FX gels can be used for accurate IMRT dose verification in 3-D. Finally, corrections for FX gel diffusion and scattered stray light in the Vista™ scanner were developed to
Kepler Predictor-Corrector Algorithm: Scattering Dynamics with One-Over-R Singular Potentials.
Markmann, Andreas; Graziani, Frank; Batista, Victor S
2012-01-10
An accurate and efficient algorithm for dynamics simulations of particles with attractive 1/r singular potentials is introduced. The method is applied to semiclassical dynamics simulations of electron-proton scattering processes in the Wigner-transform time-dependent picture, showing excellent agreement with full quantum dynamics calculations. Rather than avoiding the singularity problem by using a pseudopotential, the algorithm predicts the outcome of close-encounter two-body collisions for the true 1/r potential by solving the Kepler problem analytically, and corrects the trajectory for multiscattering with other particles in the system by using standard numerical techniques (e.g., velocity Verlet or Gear predictor-corrector algorithms). The resulting integration is time-reversal symmetric and can be applied to the general multibody dynamics problem featuring close encounters as occur in electron-ion scattering events, in particle-antiparticle dynamics, as well as in classical simulations of charged interstellar gas dynamics and gravitational celestial mechanics. PMID:26592868
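Solving the Kepler problem analytically, as the predictor step above requires, reduces to a root-finding problem in Kepler's equation. For an unbound (scattering) encounter that is the hyperbolic form M = e·sinh(H) - H with e > 1. The Newton solver below is a generic sketch of that inner step, not the authors' implementation:

```python
import math

def solve_hyperbolic_kepler(mean_anomaly, e, tol=1e-12, max_iter=100):
    """Solve M = e*sinh(H) - H for the hyperbolic anomaly H (e > 1) by
    Newton iteration; this is the root-finding step inside an analytic
    propagator for an unbound two-body encounter."""
    h = math.asinh(mean_anomaly / e)  # reasonable starting guess
    for _ in range(max_iter):
        f = e * math.sinh(h) - h - mean_anomaly
        h -= f / (e * math.cosh(h) - 1.0)  # Newton step; derivative > 0 for e > 1
        if abs(f) < tol:
            break
    return h

h = solve_hyperbolic_kepler(mean_anomaly=1.5, e=1.2)
residual = 1.2 * math.sinh(h) - h - 1.5   # should be ~0 at the root
```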
Compton scattering S matrix and cross section in strong magnetic field
NASA Astrophysics Data System (ADS)
Mushtukov, Alexander A.; Nagirner, Dmitrij I.; Poutanen, Juri
2016-05-01
Compton scattering of polarized radiation in a strong magnetic field is considered. The recipe for calculation of the scattering matrix elements, the differential and total cross sections based on quantum electrodynamic second-order perturbation theory is presented for the case of arbitrary initial and final Landau level, electron momentum along the field and photon momentum. Photon polarization and electron spin state are taken into account. The correct dependence of natural Landau level width on the electron spin state is taken into account in a general case of arbitrary initial photon momentum for the first time. A number of steps in the calculations were simplified analytically making the presented recipe easy to use. The redistribution functions over the photon energy, momentum and polarization states are presented and discussed. The paper generalizes already known results and offers a basis for the accurate calculation of radiation transfer in a strong B field, for example, in strongly magnetized neutron stars.
Positron scattering from vinyl acetate
NASA Astrophysics Data System (ADS)
Chiari, L.; Zecca, A.; Blanco, F.; García, G.; Brunger, M. J.
2014-09-01
Using a Beer-Lambert attenuation approach, we report measured total cross sections (TCSs) for positron scattering from vinyl acetate (C4H6O2) in the incident positron energy range 0.15-50 eV. In addition, we report independent atom model with screening-corrected additivity rule computations of the TCSs, differential and integral elastic cross sections, the positronium formation cross section and inelastic integral cross sections. The energy range of these calculations is 1-1000 eV. While there is reasonable qualitative correspondence between measurement and calculation for the TCSs, in terms of the energy dependence of those cross sections, the theory was found to be a factor of ~2 larger in magnitude at the lower energies, even after the measured data were corrected for the forward angle scattering effect.
The prediction of Neutron Elastic Scattering from Tritium for E(n) = 6-14 MeV
Anderson, J D; Dietrich, F S; Luu, T; McNabb, D P; Navratil, P; Quaglioni, S
2010-06-14
In a recent report Navratil et al. evaluated the angle-integrated cross section and the angular distribution for 14-MeV n+T elastic scattering by inferring these cross sections from accurately measured p+³He angular distributions. This evaluation used a combination of two theoretical treatments, based on the no-core shell model and resonating-group method (NCSM/RGM) and on the R-matrix formalism, to connect the two charge-symmetric reactions n+T and p+³He. In this report we extend this treatment to cover the neutron incident energy range 6-14 MeV. To do this, we evaluate angle-dependent correction factors for the NCSM/RGM calculations so that they agree with the p+³He data near 6 MeV, and, using the results found earlier near 14 MeV, we interpolate to obtain correction factors throughout the 6-14 MeV energy range. The agreement between the corrected NCSM/RGM and R-matrix values for the integral elastic cross sections is excellent (±1%), and these are in very good agreement with total cross section experiments. This result can be attributed to the nearly constant correction factors at forward angles, and to the evidently satisfactory physics content of the two calculations. The difference in angular shape, obtained by comparing values of the scattering probability distribution P(μ) vs. μ (the cosine of the c.m. scattering angle), is about ±4% and appears to be related to differences in the two theoretical calculations. Averaging the calculations yields P(μ) values with errors of ±2.5% or less. These averaged values, along with the corresponding quantities for the differential cross sections, will form the basis of a new evaluation of n+T elastic scattering. Computer files of the results discussed in this report will be supplied upon request.
Direct Calculation of the Scattering Amplitude Without Partial Wave Analysis
NASA Technical Reports Server (NTRS)
Shertzer, J.; Temkin, A.; Fisher, Richard R. (Technical Monitor)
2001-01-01
Two new developments in scattering theory are reported. We show, in a practical way, how one can calculate the full scattering amplitude without invoking a partial wave expansion. First, the integral expression for the scattering amplitude f(theta) is simplified by an analytic integration over the azimuthal angle. Second, the full scattering wavefunction, which appears in the integral expression for f(theta), is obtained by solving the Schrödinger equation with the finite element method (FEM). As an example, we calculate electron scattering from the Hartree potential. With minimal computational effort, we obtain accurate and stable results for the scattering amplitude.
Effect of spatial behavior of scatter on 3D PET
NASA Astrophysics Data System (ADS)
Jan, Meei-Ling; Pei, Cheng-Chih
1997-05-01
In 3D positron emission tomography (PET), all the coincidence events can be collected to increase the sensitivity of signal detection. However, the sensitivity increase enlarges the scatter fraction, which degrades image quality. To improve the accuracy of PET images, an effective scatter correction technique is necessary. In this paper, Monte Carlo simulations were performed according to the system configuration of the animal PET design at the Institute of Nuclear Energy Research. From the simulation data we could understand what the scatter effect of our planned system will be. The convolution-subtraction method was chosen to correct for the scatter. A new approach to determining the scatter kernel function, which performs better on scatter correction, is presented.
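A minimal sketch of the convolution-subtraction idea: the scatter component is modeled as a blurred, scaled copy of the (progressively descattered) projection and subtracted iteratively. The Gaussian kernel and scatter fraction below are placeholders; in practice the kernel function would be determined from Monte Carlo simulations, as the abstract describes.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def convolution_subtraction(projection, kernel_sigma, scatter_fraction, n_iter=10):
    """Estimate scatter as a blurred, scaled copy of the current primary
    estimate and subtract it from the measured projection. kernel_sigma
    (pixels) and scatter_fraction are system-specific placeholders that
    would be fitted to Monte Carlo data."""
    primary = projection.astype(float).copy()
    for _ in range(n_iter):
        scatter = scatter_fraction * gaussian_filter(primary, kernel_sigma)
        primary = projection - scatter
    return primary, scatter
```

For a uniform projection the iteration converges to projection/(1 + scatter_fraction), the expected fixed point of this model.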
Radioactive smart probe for potential corrected matrix metalloproteinase imaging.
Huang, Chiun-Wei; Li, Zibo; Conti, Peter S
2012-11-21
Although various activatable optical probes have been developed to visualize matrix metalloproteinase (MMP) activities in vivo, precise quantification of the enzyme activity is limited by the inherent scattering and attenuation (limited depth penetration) properties of optical imaging. In this investigation, a novel activatable peptide probe (64)Cu-BBQ650-PLGVR-K(Cy5.5)-E-K(DOTA)-OH was constructed to detect tumor MMP activity in vivo. This agent is optically quenched in its native form, but releases strong fluorescence upon cleavage by selected enzymes. MMP specificity was confirmed both in vitro and in vivo by fluorescent imaging studies. The use of a single modality to image biomarkers/processes may lead to erroneous interpretation of imaging data. The introduction of a quantitative imaging modality, such as PET, makes it feasible to correct the enzyme activity determined from optical imaging. In this proof of principle report, we demonstrated the feasibility of correcting the activatable optical imaging data through the PET signal. This approach provides an attractive new strategy for accurate imaging of MMP activity, which may also be applied to other protease imaging. PMID:23025637
Tc-99m/Tl-201 cross-talk corrections on a dedicated cardiac CZT SPECT camera
NASA Astrophysics Data System (ADS)
Chiasson, Stephanie
Single Photon Emission Computed Tomography (SPECT) is a standard method for evaluating heart disease. A new dedicated cardiac camera with CZT detectors offers improved energy resolution and sensitivity compared to standard SPECT systems. Simultaneous Tc-99m/Tl-201 protocols are fast, but correction for cross-talk between isotopes is necessary to achieve good image quality. The Triple-Energy-Window (TEW) correction method is easy to implement and provides accurate scatter estimation in single-isotope studies. We retrospectively assessed the cross-talk correction using clinically acquired single-isotope studies: 52 Tl-201 studies and 52 Tc-99m-tetrofosmin studies, matched by gender and BMI. Projection data from Tl-stress and Tc-rest studies were combined to create contaminated data before reconstruction. TEW corrections were evaluated in both primary energy windows. Modifications to the corrections were required. The modified approach results in residual cross-talk as low as 2%, but high noise levels were present in the corrected images. Further modifications are needed to reduce noise.
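The TEW estimate itself is simple: counts in two narrow windows flanking the photopeak approximate the scatter spectrum under the main window by a trapezoid. A sketch, with typical (assumed, not the paper's) window widths in keV:

```python
def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_main):
    """Trapezoidal TEW estimate: scale the count rates per keV in the two
    narrow flanking windows to the width of the main (photopeak) window."""
    return (c_lower / w_lower + c_upper / w_upper) * w_main / 2.0

def tew_correct(c_main, c_lower, c_upper, w_lower=3.0, w_upper=3.0, w_main=20.0):
    """Subtract the TEW scatter estimate from the photopeak counts,
    clipping at zero to avoid negative projection values."""
    return max(c_main - tew_scatter(c_lower, c_upper, w_lower, w_upper, w_main), 0.0)
```

In a dual-isotope acquisition the same estimate, evaluated in each isotope's photopeak window, also captures down-scattered cross-talk from the other isotope, which is why the method transfers from single-isotope studies.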
Multiple scattering in the remote sensing of natural surfaces
Li, Wen-Hao; Weeks, R.; Gillespie, A.R.
1996-07-01
Radiosity models predict the amount of light scattered many times (multiple scattering) among scene elements in addition to light interacting with a surface only once (direct reflectance). Such models are little used in remote sensing studies because they require accurate digital terrain models and, typically, large amounts of computer time. We have developed a practical radiosity model that runs relatively quickly within suitable accuracy limits, and have used it to explore problems caused by multiple scattering in image calibration, terrain correction, and surface roughness estimation for optical images. We applied the radiosity model to real topographic surfaces sampled at two very different spatial scales: 30 m (rugged mountains) and 1 cm (cobbles and gravel on an alluvial fan). The magnitude of the multiple-scattering (MS) effect varies with solar illumination geometry, surface reflectivity, sky illumination and surface roughness. At the coarse scale, for typical illumination geometries, as much as 20% of the image can be significantly affected (>5%) by MS, which can account for as much as ~10% of the radiance from sunlit slopes, and much more for shadowed slopes, otherwise illuminated only by skylight. At the fine scale, radiance from as much as 30-40% of the scene can have a significant MS component, and the MS contribution is locally as high as ~70%, although integrating to the meter scale reduces this limit to ~10%. Because the amount of MS increases with reflectivity as well as roughness, MS effects will distort the shape of reflectance spectra as well as changing their overall amplitude. The change is proportional to surface roughness. Our results have significant implications for determining reflectivity and surface roughness in remote sensing.
de Regt, J.M.; Engeln, R.A.H.; de Groote, F.P.J.; van der Mullen, J.A.M.; Schram, D.C.
1995-05-01
A new calibration method to obtain the electron density from Thomson scattering on an inductively coupled plasma is discussed. Raman scattering of nitrogen is used for recovering the Rayleigh scattering signal. This has the advantage that no corrections for stray light are necessary, unlike other calibration methods, which use the directly measured Rayleigh scattering signal on a well-known gas. It is shown that electron densities and electron temperatures can be measured with an accuracy of about 15% in density and about 150 K in temperature.
Stimulated rotational Raman scattering
NASA Astrophysics Data System (ADS)
Parazzoli, C. G.; Rafanelli, G. L.; Capps, D. M.; Drutman, C.
1989-03-01
The effect of Stimulated Rotational Raman Scattering (SRRS) processes on high energy laser directed energy weapon systems was studied. The program had three main objectives: achieving an accurate description of the physical processes involved in SRRS; developing a numerical algorithm to confidently evaluate SRRS-induced losses in the propagation of high energy laser beams in the uplink and downlink segments of the optical trains of various strategic defense system scenarios; and discovering possible methods to eliminate, or at least reduce, the deleterious effects of SRRS on the energy deposition on target. The following topics are discussed: the motivation for the accomplishments of the DOE program; the Semiclassical Theory of Non-Resonant SRRS for Diatomic Homonuclear Molecules; and the following appendices: Calculation of the Dipole Transition Reduced Matrix Element, Guided Tour of Hughes SRRS Code, Running the Hughes SRRS Code, and Hughes SRRS Code Listing.
Accounting for aerosol scattering in the CLARS retrieval of column averaged CO2 mixing ratios
NASA Astrophysics Data System (ADS)
Zhang, Qiong; Natraj, Vijay; Li, King-Fai; Shia, Run-Lie; Fu, Dejian; Pongetti, Thomas J.; Sander, Stanley P.; Roehl, Coleen M.; Yung, Yuk L.
2015-07-01
The California Laboratory for Atmospheric Remote Sensing Fourier transform spectrometer (CLARS-FTS) deployed at Mount Wilson, California, has been measuring column abundances of greenhouse gases in the Los Angeles (LA) basin in the near-infrared spectral region since August 2011. CLARS-FTS measures reflected sunlight and has high sensitivity to absorption and scattering in the boundary layer. In this study, we estimate the retrieval biases caused by aerosol scattering and present a fast and accurate approach to correct for the bias in the CLARS column averaged CO2 mixing ratio product, XCO2. The high spectral resolution of 0.06 cm-1 is exploited to reveal the physical mechanism for the bias. We employ a numerical radiative transfer model to simulate the impact of neglecting aerosol scattering on the CO2 and O2 slant column densities operationally retrieved from CLARS-FTS measurements. These simulations show that the CLARS-FTS operational retrieval algorithm likely underestimates CO2 and O2 abundances over the LA basin in scenes with moderate aerosol loading. The biases in the CO2 and O2 abundances due to neglecting aerosol scattering do not cancel when the two are ratioed in the derivation of the operational XCO2 product. We propose a new method for approximately correcting the aerosol-induced bias. Results for CLARS XCO2 are compared to direct-Sun XCO2 retrievals from a nearby Total Carbon Column Observing Network (TCCON) station. The bias-correction approach significantly improves the correlation between the XCO2 retrieved from CLARS and TCCON, demonstrating that this approach can increase the yield of useful data from CLARS-FTS in the presence of moderate aerosol loading.
NASA Technical Reports Server (NTRS)
Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.
2012-01-01
A numerical accuracy analysis of the radiative transfer equation (RTE) solution based on separation of the diffuse light field into anisotropic and smooth parts is presented. The analysis uses three different algorithms based on the discrete ordinate method (DOM). Two methods, DOMAS and DOM2+, which do not use truncation of the phase function, are compared against the TMS method. DOMAS and DOM2+ use the small-angle modification of the RTE and the single-scattering term, respectively, as the anisotropic part. The TMS method uses the Delta-M method for truncation of the phase function along with the single-scattering correction. For reference, a standard discrete ordinate method, DOM, is also included in the analysis. The obtained results for cases with high scattering anisotropy show that at a low number of streams (16, 32) only DOMAS provides an accurate solution in the aureole area. Outside of the aureole, the convergence and accuracy of DOMAS and TMS are found to be approximately similar: DOMAS was more accurate for coarse aerosol and liquid water cloud models, except at low optical depth, while TMS showed better results for the ice cloud model.
Predict amine solution properties accurately
Cheng, S.; Meisen, A.; Chakma, A.
1996-02-01
Improved process design begins with using accurate physical property data. Especially in the preliminary design stage, physical property data such as density, viscosity, thermal conductivity and specific heat can affect the overall performance of absorbers, heat exchangers, reboilers and pumps. These properties can also influence temperature profiles in heat transfer equipment and thus control or affect the rate of amine breakdown. Aqueous-amine solution physical property data are available in graphical form; however, graphs are not convenient for computer-based calculations. The equations developed here provide improved correlations of derived physical property estimates with published data. Expressions are given that can be used to estimate physical properties of methyldiethanolamine (MDEA), monoethanolamine (MEA) and diglycolamine (DGA) solutions.
K-corrections and extinction corrections for Type Ia supernovae
Nugent, Peter; Kim, Alex; Perlmutter, Saul
2002-05-21
The measurement of the cosmological parameters from Type Ia supernovae hinges on our ability to compare nearby and distant supernovae accurately. Here we present an advance on a method for performing generalized K-corrections for Type Ia supernovae which allows us to compare these objects from the UV to near-IR over the redshift range 0 < z < 2. We discuss the errors currently associated with this method and how future data can improve upon it significantly. We also examine the effects of reddening on the K-corrections and the light curves of Type Ia supernovae. Finally, we provide a few examples of how these techniques affect our current understanding of a sample of both nearby and distant supernovae.
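For a single filter, the classic Oke & Sandage form of the K-correction can be sketched as follows; the paper's generalized cross-filter K-corrections extend this idea, and the uniform grid and energy-integrating convention here are simplifying assumptions:

```python
import numpy as np

def k_correction(wave, flux, trans, z):
    """Single-filter K-correction in the Oke & Sandage convention for an
    energy-integrating detector (a sketch, not the paper's generalized
    cross-filter method):
        K = 2.5 log10(1+z) + 2.5 log10[ ∫F(λ)S(λ)dλ / ∫F(λ/(1+z))S(λ)dλ ]
    wave/flux give the rest-frame SED, trans the filter transmission S(λ),
    all sampled on the same uniform grid (so plain sums stand in for the
    integrals)."""
    f_rest = np.sum(flux * trans)
    flux_shifted = np.interp(wave / (1.0 + z), wave, flux)  # F(λ/(1+z))
    f_obs = np.sum(flux_shifted * trans)
    return 2.5 * np.log10(1.0 + z) + 2.5 * np.log10(f_rest / f_obs)
```

A flat (constant f_λ) spectrum gives the bandwidth-stretching term alone, K = 2.5 log10(1+z), a useful sanity check on the sign convention.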
Thomson scattering from laser plasmas
Moody, J D; Alley, W E; De Groot, J S; Estabrook, K G; Glenzer, S H; Hammer, J H; Jadaud, J P; MacGowan, B J; Rozmus, W; Suter, L J; Williams, E A
1999-01-12
Thomson scattering has recently been introduced as a fundamental diagnostic of plasma conditions and basic physical processes in dense, inertial confinement fusion plasmas. Experiments at the Nova laser facility [E. M. Campbell et al., Laser Part. Beams 9, 209 (1991)] have demonstrated accurate temporally and spatially resolved characterization of densities, electron temperatures, and average ionization levels by simultaneously observing Thomson scattered light from ion acoustic and electron plasma (Langmuir) fluctuations. In addition, observations of fast and slow ion acoustic waves in two-ion species plasmas have also allowed an independent measurement of the ion temperature. These results have motivated the application of Thomson scattering in closed-geometry inertial confinement fusion hohlraums to benchmark integrated radiation-hydrodynamic modeling of fusion plasmas. For this purpose a high energy 4ω probe laser was implemented recently, allowing ultraviolet Thomson scattering at various locations in high-density gas-filled hohlraum plasmas. In particular, the observation of steep electron temperature gradients indicates that electron thermal transport is inhibited in these gas-filled hohlraums. Hydrodynamic calculations which include an exact treatment of large-scale magnetic fields are in agreement with these findings. Moreover, the Thomson scattering data clearly indicate axial stagnation in these hohlraums by showing a fast rise of the ion temperature. Its timing is in good agreement with calculations indicating that the stagnating plasma will not deteriorate the implosion of the fusion capsules in ignition experiments.
Thermodynamics of Error Correction
NASA Astrophysics Data System (ADS)
Sartori, Pablo; Pigolotti, Simone
2015-10-01
Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.
Iterative CT shading correction with no prior information
NASA Astrophysics Data System (ADS)
Wu, Pengwei; Sun, Xiaonan; Hu, Hongjie; Mao, Tingyu; Zhao, Wei; Sheng, Ke; Cheung, Alice A.; Niu, Tianye
2015-11-01
Shading artifacts in CT images are caused by scatter contamination, the beam-hardening effect and other non-ideal imaging conditions. The purpose of this study is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT images (e.g. cone-beam CT, low-kVp CT) without relying on prior information. The method is based on the general knowledge that the CT number distribution within one tissue component is relatively uniform. The CT image is first segmented to construct a template image in which each structure is filled with the CT number of a specific tissue type. Then, by subtracting the ideal template from the CT image, the residual image from various error sources is generated. Since forward projection is an integration process, non-continuous shading artifacts in the image become continuous signals in a line integral. Thus, the residual image is forward projected and its line integral is low-pass filtered in order to estimate the error that causes shading artifacts. A compensation map is reconstructed from the filtered line integral error using a standard FDK algorithm and added back to the original image for shading correction. As the segmented image does not accurately depict a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. The proposed method is evaluated using cone-beam CT images of a Catphan 600 phantom and a pelvis patient, and low-kVp CT angiography images for carotid artery assessment. Compared with the CT image without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 30 HU and increases the spatial uniformity by a factor of 1.5. Low-contrast objects are faithfully retained after the proposed correction. An effective iterative algorithm for shading correction in CT imaging is proposed that is assisted only by general anatomical information, without relying on prior knowledge. The proposed method is thus practical.
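The iterative loop can be sketched in the image domain. This is a simplification: the paper low-pass filters the residual in the projection (line-integral) domain and reconstructs the compensation map with FDK, whereas here the low-pass filter acts directly on the residual image; the tissue CT numbers and filter width are assumed values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def shading_correct(image, tissue_means, sigma=30.0, n_iter=5, tol=1e-3):
    """Image-domain sketch of the iterative scheme. Per iteration:
    (1) nearest-mean segmentation builds the ideal template,
    (2) the residual (image - template) is low-pass filtered to estimate
        the smooth shading field,
    (3) the field is subtracted; repeat until the residual stabilizes."""
    corrected = image.astype(float).copy()
    prev_var = np.inf
    for _ in range(n_iter):
        labels = np.argmin([np.abs(corrected - m) for m in tissue_means], axis=0)
        template = np.choose(labels, tissue_means)
        residual = corrected - template
        shading = gaussian_filter(residual, sigma)  # keep low frequencies only
        corrected = corrected - shading
        if abs(prev_var - residual.var()) < tol:    # stop when residual stabilizes
            break
        prev_var = residual.var()
    return corrected
```

On a synthetic two-tissue image with a purely constant shading offset, one pass removes the offset exactly, since the low-pass filter passes a constant unchanged.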
Accurate and Precise Zinc Isotope Ratio Measurements in Urban Aerosols
NASA Astrophysics Data System (ADS)
Weiss, D.; Gioia, S. M. C. L.; Coles, B.; Arnold, T.; Babinski, M.
2009-04-01
We developed an analytical method and constrained procedural boundary conditions that enable accurate and precise Zn isotope ratio measurements in urban aerosols. We also demonstrate the potential of this new isotope system for air pollutant source tracing. The procedural blank is around 5 ng, significantly lower than published methods due to a tailored ion chromatographic separation. Accurate mass bias correction using external correction with Cu is limited to a Zn sample content of approximately 50 ng, due to the combined effect of the blank contribution of Cu and Zn from the ion exchange procedure and the need to maintain a Cu/Zn ratio of approximately 1. Mass bias is therefore corrected for by applying the common analyte internal standardization approach. Comparison with other mass bias correction methods demonstrates the accuracy of the method. The average precision of δ66Zn determinations in aerosols is around 0.05 per mil per atomic mass unit. The method was tested on aerosols collected in Sao Paulo City, Brazil. The measurements reveal significant variations in δ66Zn, ranging between -0.96 and -0.37 per mil in coarse and between -1.04 and 0.02 per mil in fine particulate matter. This variability suggests that Zn isotopic compositions can distinguish atmospheric sources. The isotopically light signature suggests traffic as the main source.
Accurate Fiber Length Measurement Using Time-of-Flight Technique
NASA Astrophysics Data System (ADS)
Terra, Osama; Hussein, Hatem
2016-06-01
Fiber artifacts of very well-measured length are required for the calibration of optical time domain reflectometers (OTDR). In this paper, accurate length measurements of different fiber lengths using the time-of-flight technique are performed. A setup is proposed to measure lengths from 1 to 40 km accurately at 1,550 and 1,310 nm using a high-speed electro-optic modulator and photodetector. This setup offers traceability to the SI unit of time, the second (and hence to the meter by definition), by locking the time interval counter to a Global Positioning System (GPS)-disciplined quartz oscillator. Additionally, the length of a recirculating loop artifact is measured and compared with the measurement made for the same fiber by the National Physical Laboratory of the United Kingdom (NPL). Finally, a method is proposed to correct the fiber refractive index relative to a reference, allowing accurate fiber length measurement.
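The core time-of-flight relation is a one-liner. The group index used below is a typical value for standard single-mode fiber near 1550 nm, and it is exactly the quantity the proposed refractive-index correction would calibrate:

```python
def fiber_length(delay_s, group_index=1.4682):
    """One-way transmission time of flight: L = c * t / n_g.
    The default group_index is an assumed, typical value for standard
    single-mode fiber near 1550 nm; in practice it must be calibrated,
    as the paper proposes."""
    c = 299_792_458.0  # speed of light in vacuum, m/s (exact by definition)
    return c * delay_s / group_index
```

A 1 km fiber delays the pulse by roughly 4.9 microseconds, so resolving meters requires timing at the few-nanosecond level, hence the GPS-disciplined time interval counter.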
Scalar scattering via conformal higher spin exchange
NASA Astrophysics Data System (ADS)
Joung, Euihun; Nakach, Simon; Tseytlin, Arkady A.
2016-02-01
Theories containing infinite number of higher spin fields require a particular definition of summation over spins consistent with their underlying symmetries. We consider a model of massless scalars interacting (via bilinear conserved currents) with conformal higher spin fields in flat space. We compute the tree-level four-scalar scattering amplitude using a natural prescription for summation over an infinite set of conformal higher spin exchanges and find that it vanishes. Independently, we show that the vanishing of the scalar scattering amplitude is, in fact, implied by the global conformal higher spin symmetry of this model. We also discuss one-loop corrections to the four-scalar scattering amplitude.
77 FR 72199 - Technical Corrections; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-05
...) is correcting a final rule that was published in the Federal Register on July 6, 2012 (77 FR 39899... . SUPPLEMENTARY INFORMATION: On July 6, 2012 (77 FR 39899), the NRC published a final rule in the Federal Register... typographical and spelling errors, and making other edits and conforming changes. This correcting amendment...
Rx for Pedagogical Correctness: Professional Correctness.
ERIC Educational Resources Information Center
Lasley, Thomas J.
1993-01-01
Describes the difficulties caused by educators holding to a view of teaching that assumes that there is one "pedagogically correct" way of running a classroom. Provides three examples of harmful pedagogical correctness ("untracked" classes, cooperative learning, and testing and test-wiseness). Argues that such dogmatic views of education limit…
NASA Astrophysics Data System (ADS)
Kopparla, P.; Natraj, V.; Spurr, R. J. D.; Shia, R. L.; Yung, Y. L.
2014-12-01
Radiative transfer (RT) computations are an essential component of energy budget calculations in climate models. However, full treatment of RT processes is computationally expensive, prompting the use of 2-stream approximations in operational climate models. This simplification introduces errors of the order of 10% in the top of the atmosphere (TOA) fluxes [Randles et al., 2013]. Natraj et al. [2005, 2010] and Spurr and Natraj [2013] demonstrated the ability of a technique using principal component analysis (PCA) to speed up RT simulations. In the PCA method for RT performance enhancement, empirical orthogonal functions are developed for binned sets of inherent optical properties that possess some redundancy; costly multiple-scattering RT calculations are done only for those (few) optical states corresponding to the most important principal components, and correction factors are applied to approximate radiation fields. Here, we extend the PCA method to a broadband spectral region from the ultraviolet to the shortwave infrared (0.3-3 micron), accounting for major gas absorptions in this region. Comparisons between the new model, called the Universal Principal Component Analysis model for Radiative Transfer (UPCART), 2-stream models (such as those used in climate applications) and line-by-line RT models are performed for spectral radiances, spectral fluxes and broadband fluxes. Each of these is calculated at the TOA for several scenarios with varying aerosol types, extinction and scattering optical depth profiles, and solar and viewing geometries. We demonstrate that very accurate radiative forcing estimates can be obtained, with better than 1% accuracy in all spectral regions and better than 0.1% in most cases, as compared to an exact line-by-line RT model. The model is comparable in speed to 2-stream models, potentially rendering UPCART useful for operational General Circulation Models (GCMs). The operational speed and accuracy of UPCART can be further improved by optimizing binning schemes and parallelizing the code.
NASA Astrophysics Data System (ADS)
Kopparla, P.; Natraj, V.; Shia, R. L.; Spurr, R. J. D.; Crisp, D.; Yung, Y. L.
2015-12-01
Radiative transfer (RT) computations form the engine of atmospheric retrieval codes. However, full treatment of RT processes is computationally expensive, prompting the use of two-stream approximations in current exoplanetary atmospheric retrieval codes [Line et al., 2013]. Natraj et al. [2005, 2010] and Spurr and Natraj [2013] demonstrated the ability of a technique using principal component analysis (PCA) to speed up RT computations. In the PCA method for RT performance enhancement, empirical orthogonal functions are developed for binned sets of inherent optical properties that possess some redundancy; costly multiple-scattering RT calculations are done only for those few optical states corresponding to the most important principal components, and correction factors are applied to approximate radiation fields. Kopparla et al. [2015, in preparation] extended the PCA method to a broadband spectral region from the ultraviolet to the shortwave infrared (0.3-3 micron), accounting for major gas absorptions in this region. Here, we apply the PCA method to some typical (exo-)planetary retrieval problems. Comparisons between the new model, called the Universal Principal Component Analysis Radiative Transfer (UPCART) model, two-stream models and line-by-line RT models are performed for spectral radiances, spectral fluxes and broadband fluxes. Each of these is calculated at the top of the atmosphere for several scenarios with varying aerosol types, extinction and scattering optical depth profiles, and stellar and viewing geometries. We demonstrate that very accurate radiance and flux estimates can be obtained, with better than 1% accuracy in all spectral regions and better than 0.1% in most cases, as compared to a numerically exact line-by-line RT model. The accuracy is enhanced when the results are convolved to typical instrument resolutions. The operational speed and accuracy of UPCART can be further improved by optimizing binning schemes and parallelizing the codes.
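The PCA speed-up described in these abstracts can be sketched generically: run the costly model only at the mean optical state and at perturbations along the leading empirical orthogonal functions, then correct a cheap (e.g. two-stream) model everywhere via the PC expansion. The callables and the first-order correction below are illustrative assumptions, not the UPCART implementation.

```python
import numpy as np

def pca_rt(optical_props, expensive_rt, cheap_rt, n_pc=2):
    """Generic PCA speed-up: the costly multiple-scattering model is called
    at only 1 + 2*n_pc optical states (the mean and +/- each leading EOF);
    a first-order expansion of ln(expensive/cheap) then corrects the cheap
    model at every spectral point. expensive_rt and cheap_rt are
    user-supplied callables on one optical-property vector."""
    X = np.asarray(optical_props, float)       # (n_points, n_props)
    mean = X.mean(axis=0)
    A = X - mean
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    eofs = Vt[:n_pc]                           # leading empirical orthogonal functions
    scores = A @ eofs.T                        # PC amplitudes of each spectral point

    def ln_ratio(x):
        return np.log(expensive_rt(x) / cheap_rt(x))

    f0 = ln_ratio(mean)
    grads = np.array([(ln_ratio(mean + e) - ln_ratio(mean - e)) / 2.0 for e in eofs])
    corr = np.exp(f0 + scores @ grads)         # first-order correction factors
    return np.array([cheap_rt(x) for x in X]) * corr
```

When the expensive and cheap models differ by a constant factor, the correction reproduces the expensive model exactly; in realistic cases the accuracy depends on how well a few EOFs span the binned optical states.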
Accurate and Inaccurate Conceptions about Osmosis That Accompanied Meaningful Problem Solving.
ERIC Educational Resources Information Center
Zuckerman, June Trop
This study focused on the knowledge of six outstanding science students who solved an osmosis problem meaningfully. That is, they used appropriate and substantially accurate conceptual knowledge to generate an answer. Three generated a correct answer; three, an incorrect answer. This paper identifies both the accurate and inaccurate conceptions…
Intermediate energy proton-deuteron elastic scattering
NASA Technical Reports Server (NTRS)
Wilson, J. W.
1973-01-01
A fully symmetrized multiple scattering series is considered for the description of proton-deuteron elastic scattering. An off-shell continuation of the experimentally known two-body amplitudes that retains the exchange symmetries required for the calculation is presented. The one-boson-exchange terms of the two-body amplitudes are evaluated exactly in this off-shell prescription. The first two terms of the multiple scattering series are calculated explicitly, whereas multiple scattering effects are obtained as minimum variance estimates from the 146-MeV data of Postma and Wilson. The multiple scattering corrections indeed consist of low-order partial waves, as suggested by Sloan on the basis of model studies with separable interactions. The Hamada-Johnston wave function is shown to be consistent with the data for internucleon distances greater than about 0.84 fm.
Beyond the Kirchhoff approximation. II - Electromagnetic scattering
NASA Technical Reports Server (NTRS)
Rodriguez, Ernesto
1991-01-01
In a paper by Rodriguez (1981), the momentum transfer expansion was introduced for scalar wave scattering. It was shown that this expansion can be used to obtain wavelength-dependent curvature corrections to the Kirchhoff approximation. This paper extends the momentum transfer perturbation expansion to electromagnetic waves. Curvature corrections to the surface current are obtained. Using these results, the specular field and the backscatter cross section are calculated.
Classical scattering in strongly attractive potentials
NASA Astrophysics Data System (ADS)
Khrapak, S. A.
2014-03-01
Scattering in central attractive potentials is investigated systematically, in the limit of strong interaction, when large-angle scattering dominates. In particular, three important model interactions (Lennard-Jones, Yukawa, and exponential), which are qualitatively different from each other, are studied in detail. It is shown that for each of these interactions the dependence of the scattering angle on the properly normalized impact parameter exhibits a quasiuniversal behavior. This implies simple scaling of the transport cross sections with energy in the considered limit. Accurate fits for the momentum transfer cross section are suggested. Applications of the obtained results are discussed.
Molecular differential cross sections for low angle photon scattering in tissues
NASA Astrophysics Data System (ADS)
Tartari, Agostino
1999-08-01
Measurements of molecular cross sections of coherently scattered photons were obtained by means of powder diffraction data analysis in the interval χ = 0-6.4 nm⁻¹ (χ = sin(θ/2)/λ, where θ is the scattering angle and λ the incident wavelength in nm). Accurate correction procedures were applied to the raw diffraction data. Data for fat and PMMA (polymethyl methacrylate), reported in a previous analysis (Tartari A, Casnati E, Bonifazzi C, Baraldi C, 1997b. Phys. Med. Biol. 42, 2551-2560), were found to agree quite well with results obtained using different beam qualities and analysis techniques. Bony tissue is investigated for the first time, and a simple model has been developed to separate the mineral and non-mineral components. Finally, a basic set of curves for the linear differential scattering coefficient is proposed, so that photon scattering by tissue can be simulated as a linear combination of these curves.
Mirnov, V V; Brower, D L; Den Hartog, D J; Ding, W X; Duff, J; Parke, E
2014-11-01
At anticipated high electron temperatures in ITER, the effects of electron thermal motion on Thomson scattering (TS), toroidal interferometer/polarimeter (TIP), and poloidal polarimeter (PoPola) diagnostics will be significant and must be accurately treated. The precision of the previous lowest-order model, linear in τ = Te/(mec²), may be insufficient; we present a more precise model with τ²-order corrections to satisfy the high accuracy required for ITER TIP and PoPola diagnostics. The linear model is extended from Maxwellian to a more general class of anisotropic electron distributions, which allows us to take into account distortions caused by equilibrium current, ECRH, and RF current drive effects. The classical problem of the degree of polarization of incoherent Thomson-scattered radiation is solved analytically, exactly and without any approximations, for the full range of incident polarizations, scattering angles, and electron thermal motion from non-relativistic to ultra-relativistic. The results are discussed in the context of the possible use of the polarization properties of Thomson-scattered light as a method of Te measurement relevant to ITER operational scenarios. PMID:25430162
ERIC Educational Resources Information Center
McCane-Bowling, Sara J.; Strait, Andrea D.; Guess, Pamela E.; Wiedo, Jennifer R.; Muncie, Eric
2014-01-01
This study examined the predictive utility of five formative reading measures: words correct per minute, number of comprehension questions correct, reading comprehension rate, number of maze correct responses, and maze accurate response rate (MARR). Broad Reading cluster scores obtained via the Woodcock-Johnson III (WJ III) Tests of Achievement…
Accurate, meshless methods for magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Hopkins, Philip F.; Raives, Matthias J.
2016-01-01
Recently, we explored new meshless finite-volume Lagrangian methods for hydrodynamics: the 'meshless finite mass' (MFM) and 'meshless finite volume' (MFV) methods; these capture advantages of both smoothed particle hydrodynamics (SPH) and adaptive mesh refinement (AMR) schemes. We extend these to include ideal magnetohydrodynamics (MHD). The MHD equations are second-order consistent and conservative. We augment these with a divergence-cleaning scheme, which maintains ∇·B ≈ 0. We implement these in the code GIZMO, together with state-of-the-art SPH MHD. We consider a large test suite, and show that on all problems the new methods are competitive with AMR using constrained transport (CT) to ensure ∇·B = 0. They correctly capture the growth/structure of the magnetorotational instability, MHD turbulence, and launching of magnetic jets, in some cases converging more rapidly than state-of-the-art AMR. Compared to SPH, the MFM/MFV methods exhibit convergence at fixed neighbour number, sharp shock-capturing, and dramatically reduced noise, divergence errors, and diffusion. Still, 'modern' SPH can handle most test problems, at the cost of larger kernels and 'by hand' adjustment of artificial diffusion. Compared to non-moving meshes, the new methods exhibit enhanced 'grid noise' but reduced advection errors and diffusion, easily include self-gravity, and feature velocity-independent errors and superior angular momentum conservation. They converge more slowly on some problems (smooth, slow-moving flows), but more rapidly on others (involving advection/rotation). In all cases, we show divergence control beyond the Powell 8-wave approach is necessary, or all methods can converge to unphysical answers even at high resolution.
Transport corrections for (n,2n) reactions
Shmakov, V.M.
1994-12-31
As a rule, multigroup Monte Carlo codes are written so that they can process the standard group data used in discrete ordinates codes. In a review, methods of sampling the secondary neutron direction used in multigroup Monte Carlo codes are described. The direct sampling from the truncated Legendre expansion of the angular distribution presented in that work is used for (N,N) scattering reactions, where the number of secondary neutrons equals unity. In anisotropic multiplying reactions such as (N,2N), the question of the number of secondary neutrons arises. This question turns out to be connected with the truncation of the Legendre polynomial expansion of the scattering distribution and the introduction of transport corrections.
Atom-molecule scattering with the average wavefunction method
NASA Astrophysics Data System (ADS)
Singh, Harjinder; Dacol, Dalcio K.; Rabitz, Herschel
1987-08-01
The average wavefunction method (AWM) is applied to atom-molecule scattering. In its simplest form, the labor involved in solving the AWM equations is equivalent to that for elastic scattering in the same formulation. As an initial illustration, explicit expressions for the T-matrix are derived for the scattering of an atom and a rigid rotor. Results are presented for low-energy scattering, and corrections to the Born approximation are clearly evident. In general, the AWM is particularly suited to polyatomic scattering because it reduces the potential to a separable atom-atom form.
NASA Astrophysics Data System (ADS)
Lifton, J. J.; Malcolm, A. A.; McBride, J. W.
2016-01-01
Scattered radiation and beam hardening introduce artefacts that degrade the quality of data in x-ray computed tomography (CT). It is unclear how these artefacts influence dimensional measurements evaluated from CT data. Understanding and quantifying the influence of these artefacts on dimensional measurements is required to evaluate the uncertainty of CT-based dimensional measurements. In this work the influence of scatter and beam hardening on dimensional measurements is investigated using the beam stop array scatter correction method and spectrum pre-filtration for the measurement of an object with internal and external cylindrical dimensional features. Scatter and beam hardening are found to influence dimensional measurements when evaluated using the ISO50 surface determination method. On the other hand, a gradient-based surface determination method is found to be robust to the influence of artefacts and leads to more accurate dimensional measurements than those evaluated using the ISO50 method. In addition to these observations the GUM method for evaluating standard measurement uncertainties is applied and the standard measurement uncertainty due to scatter and beam hardening is estimated.
Erguel, Ozguer; Guerel, Levent
2008-12-01
We present a novel stabilization procedure for accurate surface formulations of electromagnetic scattering problems involving three-dimensional dielectric objects with arbitrarily low contrasts. Conventional surface integral equations provide inaccurate results for the scattered fields when the contrast of the object is low, i.e., when the electromagnetic material parameters of the scatterer and the host medium are close to each other. We propose a stabilization procedure involving the extraction of nonradiating currents and rearrangement of the right-hand side of the equations using fictitious incident fields. Then, only the radiating currents are solved to calculate the scattered fields accurately. This technique can easily be applied to the existing implementations of conventional formulations, it requires negligible extra computational cost, and it is also appropriate for the solution of large problems with the multilevel fast multipole algorithm. We show that the stabilization leads to robust formulations that are valid even for the solutions of extremely low-contrast objects.
Further corrections to the theory of cosmological recombination
NASA Technical Reports Server (NTRS)
Krolik, Julian H.
1990-01-01
Krolik (1989) pointed out that frequency redistribution due to scattering is more important than cosmological expansion in determining the Ly-alpha frequency profile during cosmological recombination, and that its effects substantially modify the rate of recombination. Although the first statement is true, the second statement is not: a basic symmetry of photon scattering leads to identical cancellations which almost completely erase the effects of both coherent and incoherent scattering. Only a small correction due to atomic recoil alters the line profile from the prediction of pure cosmological expansion, so that the pace of cosmological recombination can be well approximated by ignoring Ly-alpha scattering.
The method of Gaussian weighted trajectories. III. An adiabaticity correction proposal
Bonnet, L.
2008-01-28
The addition of an adiabaticity correction (AC) to the Gaussian weighted trajectory (GWT) method and its normalized version (GWT-N) is suggested. This correction simply consists in omitting vibrationally adiabatic nonreactive trajectories in the calculations of final attributes. For triatomic exchange reactions, these trajectories satisfy the criterion ω not much larger than ℏ, where ω is a vibrational action defined by ω = ∫ dt (pr − p₀r₀), r being the reagent diatom bond length, p its conjugate momentum, and r₀ and p₀ the corresponding variables for the unperturbed diatom (ω/ℏ bears some analogy with the semiclassical elastic scattering phase shift). The resulting GWT-AC and GWT-ACN methods are applied to the recently studied H⁺ + H₂ and H⁺ + D₂ reactions, and the agreement between their predictions and those of exact quantum scattering calculations is found to be much better than for the initial GWT and GWT-N methods. The GWT-AC method, however, appears to be the most accurate one for the processes considered, in particular the H⁺ + D₂ reaction.
Accurate nuclear radii and binding energies from a chiral interaction
Ekstrom, Jan A.; Jansen, G. R.; Wendt, Kyle A.; Hagen, Gaute; Papenbrock, Thomas F.; Carlsson, Boris; Forssen, Christian; Hjorth-Jensen, M.; Navratil, Petr; Nazarewicz, Witold
2015-05-01
With the goal of developing predictive ab initio capability for light and medium-mass nuclei, two-nucleon and three-nucleon forces from chiral effective field theory are optimized simultaneously to low-energy nucleon-nucleon scattering data, as well as binding energies and radii of few-nucleon systems and selected isotopes of carbon and oxygen. Coupled-cluster calculations based on this interaction, named NNLOsat, yield accurate binding energies and radii of nuclei up to 40Ca, and are consistent with the empirical saturation point of symmetric nuclear matter. In addition, the low-lying collective Jπ = 3⁻ states in 16O and 40Ca are described accurately, while spectra for selected p- and sd-shell nuclei are in reasonable agreement with experiment.
A method to correct for spectral artifacts in optical-CT dosimetry
Pierquet, Michael; Jordan, Kevin; Oldham, Mark
2011-01-01
The recent emergence of radiochromic dosimeters with low inherent light-scattering presents the possibility of fast 3D dosimetry using broad-beam optical computed tomography (optical-CT). Current broad-beam scanners typically employ either a single light-emitting diode (LED) or a planar array of LEDs for the light source. The spectrum of light from LED sources is polychromatic and this, in combination with the non-uniform spectral absorption of the dosimeter, can introduce spectral artifacts arising from preferential absorption of photons at the peak absorption wavelengths in the dosimeter. Spectral artifacts can lead to large errors in the reconstructed attenuation coefficients, and hence dose measurement. This work presents an analytic method for correcting for spectral artifacts which can be applied if the spectral characteristics of the light source, absorbing dosimeter, and imaging detector are known or can be measured. The method is implemented here for a PRESAGE® dosimeter scanned with the DLOS telecentric scanner (Duke Large field-of-view Optical-CT Scanner). Emission and absorption profiles were measured with a commercial spectrometer and spectrophotometer, respectively. Simulations are presented that show spectral changes can introduce errors of 8% for moderately attenuating samples where spectral artifacts are less pronounced. The correction is evaluated by application to a 16 cm diameter PRESAGE® cylindrical dosimeter irradiated along the axis with two partially overlapping 6 × 6 cm fields of different doses. The resulting stepped dose distribution facilitates evaluation of the correction as each step had different spectral contributions. The spectral artifact correction was found to accurately correct the reconstructed coefficients to within ~1.5%, improved from ~7.5%, for normalized dose distributions. In conclusion, for situations where spectral artifacts cannot be removed by physical filters, the method shown here is an effective correction.
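The spectral artifact described above can be illustrated with a toy two-wavelength calculation (all numbers are hypothetical, not taken from the paper): for a polychromatic source, the effective attenuation coefficient inferred from a single Beer-Lambert fit falls below the spectrum-weighted mean coefficient, because the strongly absorbed spectral component is preferentially depleted with depth.

```python
import numpy as np

# Hypothetical two-line LED spectrum (weights sum to 1) and the dosimeter's
# wavelength-dependent attenuation coefficients (cm^-1) -- illustrative values only.
weights = np.array([0.6, 0.4])   # relative source intensity at each wavelength
mu = np.array([0.30, 0.10])      # true attenuation coefficient at each wavelength

def transmitted(thickness_cm):
    """Detector signal for a polychromatic beam (Beer-Lambert per wavelength)."""
    return np.sum(weights * np.exp(-mu * thickness_cm))

t = 10.0
mu_eff = -np.log(transmitted(t)) / t   # coefficient a monochromatic fit would report
mu_mean = np.sum(weights * mu)         # spectrum-weighted mean of the true coefficients
# mu_eff < mu_mean: the strongly attenuated line is preferentially absorbed,
# hardening the spectrum -- the artifact the analytic method corrects for.
```

With these numbers the naive fit reports roughly 0.173 cm⁻¹ against a weighted mean of 0.22 cm⁻¹, a deviation of the same order as the errors quoted in the abstract.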
Eyeglasses for Vision Correction
2015-12-12
Wearing eyeglasses is an easy way to correct refractive errors. Improving your vision with eyeglasses offers the opportunity to select from ...
Scattering of Ions beyond the Single Scattering Critical Angle in HIERDA
Johnston, P.N.; Bubb, I.F.; Franich, R.; Cohen, D.D.; Dytlewski, N.; Arstila, K.; Sajavaara, T.
2003-08-26
In Heavy Ion Elastic Recoil Detection Analysis (HIERDA), Rutherford scattering determines the number of scattered and recoiled ions that reach the detector. Because plural scattering is a major contributor to the spectrum and can mask important features or otherwise distort the spectrum, it needs to be described correctly. Scattering more than once is a frequent occurrence, so many ions scatter beyond the maximum scattering angle possible in a single scattering event. In this work we have chosen projectile/target combinations that exploit the scattering critical angle to obtain spectra composed entirely of ions that have been scattered more than once. Monte Carlo simulation of the ion transport, using a fast FORTRAN version of TRIM, is used to study the plural scattering. The results of the simulations are compared with experimental measurements on samples of Si, V and Co performed with 20-100 MeV beams of Br, I and Au ions using the ToF-E HIERDA facilities at Lucas Heights and Helsinki.
A fast and accurate method for computing the Sunyaev-Zel'dovich signal of hot galaxy clusters
NASA Astrophysics Data System (ADS)
Chluba, Jens; Nagai, Daisuke; Sazonov, Sergey; Nelson, Kaylea
2012-10-01
New-generation ground- and space-based cosmic microwave background experiments have ushered in discoveries of massive galaxy clusters via the Sunyaev-Zel'dovich (SZ) effect, providing a new window for studying cluster astrophysics and cosmology. Many of the newly discovered, SZ-selected clusters contain hot intracluster plasma (kTe ≳ 10 keV) and exhibit disturbed morphology, indicative of frequent mergers with large peculiar velocity (v ≳ 1000 km s-1). It is well known that for the interpretation of the SZ signal from hot, moving galaxy clusters, relativistic corrections must be taken into account, and in this work, we present a fast and accurate method for computing these effects. Our approach is based on an alternative derivation of the Boltzmann collision term which provides new physical insight into the sources of different kinematic corrections in the scattering problem. In contrast to previous works, this allows us to obtain a clean separation of kinematic and scattering terms. We also briefly mention additional complications connected with kinematic effects that should be considered when interpreting future SZ data for individual clusters. One of the main outcomes of this work is SZPACK, a numerical library which allows very fast and precise (≲0.001 per cent at frequencies hν ≲ 20kTγ) computation of the SZ signals up to high electron temperature (kTe ≃ 25 keV) and large peculiar velocity (v/c ≃ 0.01). The accuracy is well beyond the current and future precision of SZ observations and practically eliminates uncertainties which are usually overcome with more expensive numerical evaluation of the Boltzmann collision term. Our new approach should therefore be useful for analysing future high-resolution, multifrequency SZ observations as well as computing the predicted SZ effect signals from numerical simulations.
Accurate skin dose measurements using radiochromic film in clinical applications
Devic, S.; Seuntjens, J.; Abdel-Rahman, W.; Evans, M.; Olivares, M.; Podgorsak, E.B.; Vuong, Te; Soares, Christopher G.
2006-04-15
Megavoltage x-ray beams exhibit the well-known phenomenon of dose buildup within the first few millimeters of the incident phantom surface, or the skin. Results of surface dose measurements, however, depend strongly on the measurement technique employed. Our goal in this study was to determine a correction procedure in order to obtain an accurate skin dose estimate at the clinically relevant depth based on radiochromic film measurements. To illustrate this correction, we have used as a reference point a depth of 70 μm. We used the new GAFCHROMIC® dosimetry films (HS, XR-T, and EBT) that have effective points of measurement at depths slightly larger than 70 μm. In addition to films, we also used an Attix parallel-plate chamber and a home-built extrapolation chamber to cover tissue-equivalent depths in the range from 4 μm to 1 mm of water-equivalent depth. Our measurements suggest that within the first millimeter of the skin region, the PDD for a 6 MV photon beam and field size of 10×10 cm² increases from 14% to 43%. For the three GAFCHROMIC® dosimetry film models, the 6 MV beam entrance skin dose measurement corrections due to their effective point of measurement are as follows: 15% for the EBT, 15% for the HS, and 16% for the XR-T model GAFCHROMIC® films. The correction factors for the exit skin dose due to the build-down region are negligible. There is a small field size dependence for the entrance skin dose correction factor when using the EBT GAFCHROMIC® film model. Finally, a procedure that uses the EBT model GAFCHROMIC® film for an accurate measurement of the skin dose in a parallel-opposed pair 6 MV photon beam arrangement is described.
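As a worked example of applying the quoted entrance-dose correction, a sketch with hypothetical numbers and an assumed direction: since dose increases with depth in the buildup region, a film whose effective point of measurement lies slightly deeper than the 70 μm reference over-responds, so its reading is scaled down.

```python
# Illustrative arithmetic only; the raw reading is hypothetical and the
# direction of the correction is an assumption, not stated in the abstract.
raw_entrance_dose_cGy = 40.0   # hypothetical EBT film reading at the entrance surface
ebt_correction = 0.15          # quoted 6 MV entrance correction for the EBT model
skin_dose_cGy = raw_entrance_dose_cGy * (1.0 - ebt_correction)
```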
Research in Correctional Rehabilitation.
ERIC Educational Resources Information Center
Rehabilitation Services Administration (DHEW), Washington, DC.
Forty-three leaders in corrections and rehabilitation participated in the seminar planned to provide an indication of the status of research in correctional rehabilitation. Papers include: (1) "Program Trends in Correctional Rehabilitation" by John P. Conrad, (2) "Federal Offenders Rehabilitation Program" by Percy B. Bell and Merlyn Mathews, (3)…
Quasi-elastic nuclear scattering at high energies
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Townsend, Lawrence W.; Wilson, John W.
1992-01-01
The quasi-elastic scattering of two nuclei is considered in the high-energy optical model. Energy loss and momentum transfer spectra for projectile ions are evaluated in terms of an inelastic multiple-scattering series corresponding to multiple knockout of target nucleons. The leading-order correction to the coherent projectile approximation is evaluated. Calculations are compared with experiments.
Electro-optic contribution to Raman scattering from alkali halides
Mahan, G.D.; Subbaswamy, K.R.
1986-06-15
The electro-optic contributions to second-order Raman scattering and field-induced first-order scattering from alkali halides are calculated explicitly in terms of the ionic hyperpolarizability coefficients. The relevant local-field corrections are evaluated. Illustrative numerical results are presented.
Multiphonon scattering from surfaces
NASA Astrophysics Data System (ADS)
Manson, J. R.; Celli, V.; Himes, D.
1994-01-01
We consider the relationship between several different formalisms for treating the multiphonon inelastic scattering of atomic projectiles from surfaces. Starting from general principles of formal scattering theory, the trajectory approximation to the scattering intensity is obtained. From the trajectory approximation, the conditions leading to the fast-collision approximation for multiquantum inelastic scattering are systematically derived.
Benchmarking accurate spectral phase retrieval of single attosecond pulses
NASA Astrophysics Data System (ADS)
Wei, Hui; Le, Anh-Thu; Morishita, Toru; Yu, Chao; Lin, C. D.
2015-02-01
A single extreme-ultraviolet (XUV) attosecond pulse or pulse train in the time domain is fully characterized if its spectral amplitude and phase are both determined. The spectral amplitude can be easily obtained from photoionization of simple atoms where accurate photoionization cross sections have been measured from, e.g., synchrotron radiations. To determine the spectral phase, at present the standard method is to carry out XUV photoionization in the presence of a dressing infrared (IR) laser. In this work, we examine the accuracy of current phase retrieval methods (PROOF and iPROOF) where the dressing IR is relatively weak such that photoelectron spectra can be accurately calculated by second-order perturbation theory. We suggest a modified method named swPROOF (scattering wave phase retrieval by omega oscillation filtering) which utilizes accurate one-photon and two-photon dipole transition matrix elements and removes the approximations made in PROOF and iPROOF. We show that the swPROOF method can in general retrieve accurate spectral phase compared to other simpler models that have been suggested. We benchmark the accuracy of these phase retrieval methods through simulating the spectrogram by solving the time-dependent Schrödinger equation numerically using several known single attosecond pulses with a fixed spectral amplitude but different spectral phases.
Compton scattering with low intensity radioactive sources
NASA Astrophysics Data System (ADS)
Quarles, Carroll
2012-03-01
Compton scattering experiments with gamma rays typically require a "hot" source (~5 mCi of Cs-137) to observe the scattering as a function of angle (see Ortec AN34, Experiment #10, Compton Scattering). Here a way is described to investigate Compton scattering with microcurie-level radioactive sources that are more commonly available in the undergraduate laboratory. A vertical-looking 2-inch coaxial HPGe detector, collimated with a 2-inch-thick lead shield, was used. Cylindrical Al targets of various thicknesses were placed over the collimator, and several available sources were placed around the target so that the average Compton scattering angle into the collimator was 90°. A peak could be observed at the energy expected for 90° Compton scattering by doing 24-hour target-in minus target-out runs. The peak was broadened by the spread in the scattering angle due to the variation in the angle of the incoming gamma ray and the angular acceptance of the collimator. A rough analysis can be done by modeling the angular spread due to the geometry and correcting for the gamma-ray absorption from the target center. Various target materials and sources can be used, and some variation in the average Compton scattering angle can be obtained by adjusting the geometry of the source and target.
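The expected position of the 90° peak follows directly from the Compton formula, E' = E₀ / (1 + (E₀/mₑc²)(1 − cos θ)); a minimal sketch, using the 661.7 keV line of a Cs-137 source:

```python
import math

M_E_C2_KEV = 511.0  # electron rest energy in keV

def compton_energy_keV(e0_keV, theta_deg):
    """Energy of a photon Compton-scattered through angle theta."""
    theta = math.radians(theta_deg)
    return e0_keV / (1.0 + (e0_keV / M_E_C2_KEV) * (1.0 - math.cos(theta)))

# Expected 90-degree peak for the Cs-137 661.7 keV primary: about 288 keV,
# the energy at which the target-in minus target-out peak should appear.
peak = compton_energy_keV(661.7, 90.0)
```

Evaluating the same function over the collimator's angular acceptance (say 85° to 95°) gives an estimate of the geometric broadening of the peak described in the abstract.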
Atmospheric monitoring in MAGIC and data corrections
NASA Astrophysics Data System (ADS)
Fruck, Christian; Gaug, Markus
2015-03-01
A method for analyzing the returns of a custom-made "micro"-LIDAR system, operated alongside the two MAGIC telescopes, is presented. The method allows the transmission through the atmospheric boundary layer, as well as through thin cloud layers, to be calculated. This is achieved by applying exponential fits to regions of the backscattering signal that are dominated by Rayleigh scattering. Making this real-time transmission information available in the MAGIC data stream makes it possible to apply atmospheric corrections later in the analysis. Such corrections extend the effective observation time of MAGIC by allowing data taken under adverse atmospheric conditions to be included. In the future they will help reduce the systematic uncertainties of energy and flux.
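The fitting step described above can be sketched with synthetic data: in a Rayleigh-dominated altitude range, the logarithm of the range-corrected return is approximately linear in altitude, so the fitted slope yields the extinction and hence the layer transmission. All numbers below are illustrative assumptions, not MAGIC values.

```python
import numpy as np

rng = np.random.default_rng(0)
z = np.linspace(1.0, 4.0, 60)     # altitude above the telescope, km (assumed range)
alpha = 0.05                      # assumed extinction coefficient, km^-1
# Synthetic range-corrected return: two-way attenuation plus 1% noise.
signal = np.exp(-2.0 * alpha * z) * (1.0 + 0.01 * rng.standard_normal(z.size))

# Exponential fit = linear fit in log space over the Rayleigh-dominated region.
slope, intercept = np.polyfit(z, np.log(signal), 1)
alpha_fit = -slope / 2.0                            # recovered extinction, km^-1
transmission = np.exp(-alpha_fit * (z[-1] - z[0]))  # one-way transmission of the layer
```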
Theory of Graphene Raman Scattering.
Heller, Eric J; Yang, Yuan; Kocia, Lucas; Chen, Wei; Fang, Shiang; Borunda, Mario; Kaxiras, Efthimios
2016-02-23
Raman scattering plays a key role in unraveling the quantum dynamics of graphene, perhaps the most promising material of recent times. It is crucial to correctly interpret the meaning of the spectra. It is therefore very surprising that the widely accepted understanding of Raman scattering, i.e., Kramers-Heisenberg-Dirac theory, has never been applied to graphene. Doing so here, a remarkable mechanism we term "transition sliding" is uncovered, explaining the uncommon brightness of overtones in graphene. Graphene's dispersive and fixed Raman bands, missing bands, defect density and laser frequency dependence of band intensities, widths of overtone bands, Stokes/anti-Stokes anomalies, and other known properties emerge simply and directly. PMID:26799915
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
Analytical scatter kernels for portal imaging at 6 MV.
Spies, L; Bortfeld, T
2001-04-01
X-ray photon scatter kernels for 6 MV electronic portal imaging are investigated using an analytical and a semi-analytical model. The models are tested on homogeneous phantoms for a range of uniform circular fields and scatterer-to-detector air gaps relevant for clinical use. It is found that a fully analytical model based on an exact treatment of photons undergoing a single Compton scatter event and an approximate treatment of second and higher order scatter events, assuming a multiple-scatter source at the center of the scatter volume, is accurate within 1% (i.e., the residual scatter signal is less than 1% of the primary signal) for field sizes up to 100 cm2 and air gaps over 30 cm, but shows significant discrepancies for larger field sizes. Monte Carlo results are presented showing that the effective multiple-scatter source is located toward the exit surface of the scatterer, rather than at its center. A second model is therefore investigated where second and higher-order scattering is instead modeled by fitting an analytical function describing a nonstationary isotropic point-scatter source to Monte Carlo generated data. This second model is shown to be accurate to within 1% for air gaps down to 20 cm, for field sizes up to 900 cm2 and phantom thicknesses up to 50 cm. PMID:11339752
Not Available
1992-07-01
The glossary of technical terms was prepared to facilitate the use of the Corrective Action Plan (CAP) issued by OSWER on November 14, 1986. The CAP presents model scopes of work for all phases of a corrective action program, including the RCRA Facility Investigation (RFI), Corrective Measures Study (CMS), Corrective Measures Implementation (CMI), and interim measures. The Corrective Action Glossary includes brief definitions of the technical terms used in the CAP and explains how they are used. In addition, expected ranges (where applicable) are provided. Parameters or terms not discussed in the CAP, but commonly associated with site investigations or remediations are also included.
NASA Astrophysics Data System (ADS)
Hanrieder, N.; Wilbert, S.; Pitz-Paal, R.; Emde, C.; Gasteiger, J.; Mayer, B.; Polo, J.
2015-05-01
Losses of reflected Direct Normal Irradiance due to atmospheric extinction in concentrating solar tower plants can vary significantly with site and time. The losses of the direct normal irradiance between the heliostat field and receiver in a solar tower plant are mainly caused by atmospheric scattering and absorption by aerosol and water vapor concentration in the atmospheric boundary layer. Due to a high aerosol particle number, radiation losses can be significantly larger in desert environments compared to the standard atmospheric conditions which are usually considered in ray-tracing or plant optimization tools. Information about on-site atmospheric extinction is only rarely available. To measure these radiation losses, two different commercially available instruments were tested and more than 19 months of measurements were collected at the Plataforma Solar de Almería and compared. Both instruments are primarily used to determine the meteorological optical range (MOR). The Vaisala FS11 scatterometer is based on a monochromatic near-infrared light source emission and measures the strength of scattering processes in a small air volume mainly caused by aerosol particles. The Optec LPV4 long-path visibility transmissometer determines the monochromatic attenuation between a light-emitting diode (LED) light source at 532 nm and a receiver and therefore also accounts for absorption processes. As the broadband solar attenuation is of interest for solar resource assessment for Concentrating Solar Power (CSP), a correction procedure for these two instruments is developed and tested. This procedure includes a spectral correction of both instruments from monochromatic to broadband attenuation. That is, the attenuation is corrected for the actual, time-dependent solar spectrum reflected by the collector. Further, an absorption correction for the Vaisala FS11 scatterometer is implemented. To optimize the Absorption and Broadband Correction (ABC) procedure, additional
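The spectral correction step described above amounts to weighting the monochromatic attenuation by the solar spectrum actually reflected by the collector. A minimal sketch of that weighting (the function name and the uniform-wavelength-grid assumption are ours, not from the paper):

```python
import numpy as np

def broadband_attenuation(spectral_attenuation, reflected_solar_spectrum):
    """Broadband attenuation as the spectrum-weighted mean of the
    per-wavelength attenuation (uniform wavelength grid assumed)."""
    a = np.asarray(spectral_attenuation, dtype=float)
    s = np.asarray(reflected_solar_spectrum, dtype=float)
    return float(np.sum(a * s) / np.sum(s))
```

With a flat spectrum this reduces to the plain mean of the per-wavelength attenuation; a non-flat spectrum shifts the result toward the wavelengths carrying the most reflected energy.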
Quantum Monte Carlo calculations of neutron-alpha scattering.
Nollett, K. M.; Pieper, S. C.; Wiringa, R. B.; Carlson, J.; Hale, G. M.; Physics
2007-07-13
We describe a new method to treat low-energy scattering problems in few-nucleon systems, and we apply it to the five-body case of neutron-alpha scattering. The method allows precise calculations of low-lying resonances and their widths. We find that a good three-nucleon interaction is crucial to obtain an accurate description of neutron-alpha scattering.
Stray-light correction in 2D spectroscopy
NASA Astrophysics Data System (ADS)
Schlichenmaier, R.; Franz, M.
2013-07-01
Context. In solar physics, spectropolarimeters based on Fabry-Pérot interferometers are commonly used for high spatial resolution observations. In the data pipeline, corrections for scattered light may be performed on each narrow band image. Aims: We elaborate on the effects of stray-light corrections on Doppler maps. Methods: First, we demonstrate the basic correction effect in a simplified situation of two profiles that suffer from stray light. Then, we study the correction effects on velocity maps by transforming a Hinode SP map into a two-dimensional spectroscopic data set with i(x,y) at each wavelength point, which mimics narrow band images. Velocity maps are inferred from line profiles of original and stray-light corrected data. Results: The correction of scattered light in narrow band images affects the inferred Doppler velocity maps: relative red shifts always become more red, and relative blue shifts become more blue. This trend is independent of whether downflows have dark or bright intensities. As a result, the effects of overcorrection produce both downflows and upflows. Conclusions: In 2D spectropolarimetry, corrections for scattered light can improve the image intensity and velocity contrast but inherently produce downflow signatures in the penumbra. Hence, such corrections are justified only if the properties of the stray light (seeing, telescope, and instrument) are well known.
Toward improved photon-atom scattering predictions
NASA Astrophysics Data System (ADS)
Kissel, Lynn
1995-05-01
Photon-atom scattering is important in a variety of applications, but scattering from a composite system depends on the accurate characterization of the scattering from an isolated atom or ion. We have been examining the validity of simpler approximations of elastic scattering in the light of second-order S-matrix theory. Partitioning the many-body amplitude into Rayleigh and Delbrück components, processes beyond photoionization contribute. Subtracted cross sections for bound-bound atomic transitions and bound pair annihilation are required in anomalous scattering factors for: (1) convergence of the dispersion integral; (2) agreement with predictions of the more sophisticated S-matrix approach; (3) satisfying the Thomas-Reiche-Kuhn sum rule. New accurate tabulations of anomalous scattering factors have been prepared for all Z, for energies 0-10 000 keV, within the independent particle approximation (IPA) using a Dirac-Slater model of the atom. Separately, experimental atomic photoabsorption threshold information has been used to modify these IPA predictions for improved comparison with experiment.
Toward improved photon-atom scattering predictions
NASA Astrophysics Data System (ADS)
Kissel, Lynn
1994-10-01
Photon-atom scattering is important in a variety of applications, but scattering from a composite system depends on the accurate characterization of the scattering from an isolated atom or ion. We have been examining the validity of simpler approximations of elastic scattering in the light of second-order S-matrix theory. Partitioning the many-body amplitude into Rayleigh and Delbrueck components, processes beyond photoionization contribute. Subtracted cross sections for bound-bound atomic transitions, bound pair annihilation, and bound pair production are required in anomalous scattering factors for: (1) convergence of the dispersion integral; (2) agreement with predictions of the more sophisticated S-matrix approach; (3) satisfying the Thomas-Reiche-Kuhn sum rule. New accurate tabulations of anomalous scattering factors have been prepared for all Z, for energies 0-10,000 keV, within the independent particle approximation (IPA) using a Dirac-Slater model of the atom. Separately, experimental atomic photoabsorption threshold information has been used to modify these IPA predictions for improved comparison with experiment.
NASA Astrophysics Data System (ADS)
Fitzpatrick, A. Liam; Kaplan, Jared
2016-05-01
We use results on Virasoro conformal blocks to study chaotic dynamics in CFT_2 at large central charge c. The Lyapunov exponent λ_L, which is a diagnostic for the early onset of chaos, receives 1/c corrections that may be interpreted as λ_L = (2π/β)(1 + 12/c). However, out-of-time-order correlators receive other equally important 1/c-suppressed contributions that do not have such a simple interpretation. We revisit the proof of a bound on λ_L that emerges at large c, focusing on CFT_2 and explaining why our results do not conflict with the analysis leading to the bound. We also comment on relationships between chaos, scattering, causality, and bulk locality.
Estimating seabed scattering mechanisms via Bayesian model selection.
Steininger, Gavin; Dosso, Stan E; Holland, Charles W; Dettmer, Jan
2014-10-01
A quantitative inversion procedure is developed and applied to determine the dominant scattering mechanism (surface roughness and/or volume scattering) from seabed scattering-strength data. The classification system is based on trans-dimensional Bayesian inversion with the deviance information criterion used to select the dominant scattering mechanism. Scattering is modeled using first-order perturbation theory as due to one of three mechanisms: Interface scattering from a rough seafloor, volume scattering from a heterogeneous sediment layer, or mixed scattering combining both interface and volume scattering. The classification system is applied to six simulated test cases where it correctly identifies the true dominant scattering mechanism as having greater support from the data in five cases; the remaining case is indecisive. The approach is also applied to measured backscatter-strength data where volume scattering is determined as the dominant scattering mechanism. Comparison of inversion results with core data indicates the method yields both a reasonable volume heterogeneity size distribution and a good estimate of the sub-bottom depths at which scatterers occur. PMID:25324059
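The deviance information criterion used above to select the dominant scattering mechanism can be computed directly from posterior samples. A hedged sketch (the function name is illustrative; the paper's trans-dimensional Bayesian sampler is not reproduced here):

```python
import numpy as np

def dic(log_likelihood_samples, log_likelihood_at_mean):
    """Deviance information criterion: DIC = Dbar + pD, where Dbar is the
    posterior mean deviance and pD = Dbar - D(theta_bar) is the effective
    number of parameters. Lower DIC indicates greater support."""
    deviance_samples = -2.0 * np.asarray(log_likelihood_samples, dtype=float)
    d_bar = deviance_samples.mean()          # posterior mean deviance
    d_hat = -2.0 * log_likelihood_at_mean    # deviance at the posterior mean
    p_d = d_bar - d_hat                      # effective parameter count
    return d_bar + p_d
```

Comparing DIC across the interface-only, volume-only, and mixed models then identifies which scattering mechanism the data support.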
Parity violation in electron scattering
NASA Astrophysics Data System (ADS)
Souder, P.; Paschke, K. D.
2016-02-01
By comparing the cross sections for left- and right-handed electrons scattered from various unpolarized nuclear targets, the small parity-violating asymmetry can be measured. These asymmetry data probe a wide variety of important topics, including searches for new fundamental interactions and important features of nuclear structure that cannot be studied with other probes. A special feature of these experiments is that the results are interpreted with remarkably few theoretical uncertainties, which justifies pushing the experiments to the highest possible precision. To measure the small asymmetries accurately, a number of novel experimental techniques have been developed.
Planetary spectra for anisotropic scattering
NASA Technical Reports Server (NTRS)
Chamberlain, J. W.
1975-01-01
Some of the effects on planetary spectra that would be produced by departures from isotropic scattering are examined. The phase function is the simplest departure to handle analytically and the only phase function, other than the isotropic one, that can be incorporated into a Chandrasekhar first approximation. This approach has the advantage of illustrating trends resulting from anisotropies while retaining the simplicity that yields physical insight. An algebraic solution to the two sets of anisotropic H functions is developed in the appendix. It is readily adaptable to programmable desk calculators and gives emergent intensities accurate to 0.3 percent, which is sufficient even for spectroscopic analysis.
Smith, Peter D.; Claytor, Thomas N.; Berry, Phillip C.; Hills, Charles R.
2010-10-12
An x-ray detector is disclosed that has had all unnecessary material removed from the x-ray beam path, and all of the remaining material in the beam path made as light and as low in atomic number as possible. The resulting detector is essentially transparent to x-rays and, thus, has greatly reduced internal scatter. The result of this is that x-ray attenuation data measured for the object under examination are much more accurate and have an increased dynamic range. The benefits of this improvement are that beam hardening corrections can be made accurately, that computed tomography reconstructions can be used for quantitative determination of material properties including density and atomic number, and that lower exposures may be possible as a result of the increased dynamic range.
Shuttle program: Computing atmospheric scale height for refraction corrections
NASA Technical Reports Server (NTRS)
Lear, W. M.
1980-01-01
Methods for computing the atmospheric scale height to determine radio wave refraction were investigated for different atmospheres, and different angles of elevation. Tables of refractivity versus altitude are included. The equations used to compute the refraction corrections are given. It is concluded that very accurate corrections are determined with the assumption of an exponential atmosphere.
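For the exponential-atmosphere assumption noted above, the scale height follows from two refractivity samples; a minimal illustration (function names and sample values are ours, not from the report):

```python
import math

def scale_height(h1, n1, h2, n2):
    """Scale height H for an exponential refractivity profile
    N(h) = N0 * exp(-h / H), fitted to two samples (h1, N1) and (h2, N2)."""
    return (h2 - h1) / math.log(n1 / n2)

def refractivity(h, n0, scale_h):
    """Refractivity (N-units) at altitude h, same length units as scale_h."""
    return n0 * math.exp(-h / scale_h)
```

A surface refractivity near 313 N-units with a scale height of roughly 7 km is a commonly used exponential reference atmosphere; with H in hand, the refraction correction follows from the elevation-angle geometry.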
Survey of background scattering from materials found in small-angle neutron scattering
Barker, J. G.; Mildner, D. F. R.
2015-01-01
Measurements and calculations of beam attenuation and background scattering for common materials placed in a neutron beam are presented over the temperature range of 300–700 K. Time-of-flight (TOF) measurements have also been made, to determine the fraction of the background that is either inelastic or quasi-elastic scattering as measured with a ³He detector. Other background sources considered include double Bragg diffraction from windows or samples, scattering from gases, and phonon scattering from solids. Background from the residual air in detector vacuum vessels and scattering from the ³He detector dome are presented. The thickness dependence of the multiple scattering correction for forward scattering from water is calculated. Inelastic phonon background scattering at small angles for crystalline solids is both modeled and compared with measurements. Methods of maximizing the signal-to-noise ratio by material selection, choice of sample thickness and wavelength, removal of inelastic background by TOF or Be filters, and removal of spin-flip scattering with polarized beam analysis are discussed. PMID:26306088
Accurate localization of needle entry point in interventional MRI.
Daanen, V; Coste, E; Sergent, G; Godart, F; Vasseur, C; Rousseau, J
2000-10-01
In interventional magnetic resonance imaging (MRI), the systems designed to help the surgeon during biopsy must provide accurate knowledge of the positions of the target and also the entry point of the needle on the skin of the patient. In some cases, this needle entry point can be outside the B(0) homogeneity area, where the distortions may be larger than a few millimeters. In that case, major correction for geometric deformation must be performed. Moreover, the use of markers to highlight the needle entry point is inaccurate. The aim of this study was to establish a three-dimensional coordinate correction according to the position of the entry point of the needle. We also describe a 2-degree of freedom electromechanical device that is used to determine the needle entry point on the patient's skin with a laser spot. PMID:11042649
Li, Z. P.; Hillhouse, G. C.; Meng, J.
2008-07-15
We present the first study to examine the validity of the relativistic impulse approximation (RIA) for describing elastic proton-nucleus scattering at incident laboratory kinetic energies lower than 200 MeV. For simplicity we choose a ²⁰⁸Pb target, which is a spin-saturated spherical nucleus for which reliable nuclear structure models exist. Microscopic scalar and vector optical potentials are generated by folding invariant scalar and vector scattering nucleon-nucleon (NN) amplitudes, based on our recently developed relativistic meson-exchange model, with Lorentz scalar and vector densities resulting from the accurately calibrated PK1 relativistic mean field model of nuclear structure. It is seen that phenomenological Pauli blocking (PB) effects and density-dependent corrections to σN and ωN meson-nucleon coupling constants modify the RIA microscopic scalar and vector optical potentials so as to provide a consistent and quantitative description of all elastic scattering observables, namely, total reaction cross sections, differential cross sections, analyzing powers, and spin rotation functions. In particular, the effect of PB becomes more significant at energies lower than 200 MeV, whereas phenomenological density-dependent corrections to the NN interaction also play an increasingly important role at energies lower than 100 MeV.
Turbulence Models for Accurate Aerothermal Prediction in Hypersonic Flows
NASA Astrophysics Data System (ADS)
Zhang, Xiang-Hong; Wu, Yi-Zao; Wang, Jiang-Feng
Accurate description of the aerodynamic and aerothermal environment is crucial to the integrated design and optimization of high-performance hypersonic vehicles. In the simulation of the aerothermal environment, the effect of viscosity is crucial. Turbulence modeling remains a major source of uncertainty in the computational prediction of aerodynamic forces and heating. In this paper, three turbulence models were studied: the one-equation eddy viscosity transport model of Spalart-Allmaras, the Wilcox k-ω model, and the Menter SST model. For the k-ω and SST models, the compressibility correction, pressure dilatation, and low Reynolds number correction were considered. The influence of these corrections on flow properties is discussed by comparing the results with those obtained without corrections. The emphasis is on the assessment and evaluation of the turbulence models in prediction of heat transfer as applied to a range of hypersonic flows with comparison to experimental data. This will enable establishing factors of safety for the design of thermal protection systems of hypersonic vehicles.
Multiple scattering of electromagnetic waves by rain
NASA Technical Reports Server (NTRS)
Tsolakis, A.; Stutzman, W. L.
1982-01-01
As the operating frequencies of communications systems move higher into the millimeter wave region, the effects of multiple scattering in precipitation media become more significant. In this paper, general formulations are presented for single, first-order multiple, and complete multiple scattering. Included specifically are distributions of particle size, shape, and orientation angle, as well as variation in the medium density along the direction of wave propagation. Calculations are performed for rain. It is shown that the effects of higher-order scattering are not noticeable in either attenuation or channel isolation on a dual-polarized system until frequencies of about 30 GHz are reached. The complete multiple-scattering formulation presented gives accurate results at high millimeter wave frequencies as well as including realistic medium parameter distributions. Furthermore, it is numerically efficient.
Accurate pressure gradient calculations in hydrostatic atmospheric models
NASA Technical Reports Server (NTRS)
Carroll, John J.; Mendez-Nunez, Luis R.; Tanrikulu, Saffet
1987-01-01
A method for the accurate calculation of the horizontal pressure gradient acceleration in hydrostatic atmospheric models is presented which is especially useful in situations where the isothermal surfaces are not parallel to the vertical coordinate surfaces. The present method is shown to be exact if the potential temperature lapse rate is constant between the vertical pressure integration limits. The technique is applied to both the integration of the hydrostatic equation and the computation of the slope correction term in the horizontal pressure gradient. A fixed vertical grid and a dynamic grid defined by the significant levels in the vertical temperature distribution are employed.
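As a simpler companion to the constant potential-temperature-lapse-rate case treated above, hydrostatic integration with a constant *temperature* lapse rate has a well-known closed form; a sketch (the constants and the closed-form expression are standard, the function itself is illustrative and not the paper's scheme):

```python
import math

G = 9.80665      # gravitational acceleration, m/s^2
R_DRY = 287.05   # specific gas constant for dry air, J/(kg K)

def hydrostatic_pressure(p0, t0, lapse, z):
    """Pressure at height z (m) from hydrostatic balance with a constant
    temperature lapse rate (K/m); falls back to the isothermal limit."""
    if abs(lapse) < 1e-12:
        return p0 * math.exp(-G * z / (R_DRY * t0))   # isothermal atmosphere
    return p0 * (1.0 - lapse * z / t0) ** (G / (R_DRY * lapse))
```

With standard-atmosphere surface values (101325 Pa, 288.15 K, 6.5 K/km) this reproduces the familiar pressure decrease of roughly 11% over the first kilometer.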
NASA Astrophysics Data System (ADS)
Chicea, Dan
2010-05-01
Light scattering by particles having diameters comparable with the wavelength is accurately described by Mie theory, and the scattering anisotropy can conveniently be described by the one-parameter Henyey-Greenstein phase function. An aqueous suspension containing magnetite nanoparticles was the target of a coherent light scattering experiment. By fitting the scattering phase function to the experimental data, the scattering anisotropy parameter can be assessed. As this parameter strongly depends on the scatterer size, the average particle diameter was thus estimated and the presence of particle aggregates was probed. This technique was used to investigate the nanoparticle aggregation dynamics, and the results are presented in this work.
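The fit described above can be sketched with the standard Henyey-Greenstein form; here synthetic data stand in for the experiment (g = 0.9 is an assumed value for illustration, not the paper's result):

```python
import numpy as np
from scipy.optimize import curve_fit

def henyey_greenstein(theta, g):
    """Henyey-Greenstein phase function p(theta; g), normalized over 4*pi."""
    return (1.0 - g**2) / (
        4.0 * np.pi * (1.0 + g**2 - 2.0 * g * np.cos(theta)) ** 1.5
    )

# Synthetic angular scattering data standing in for the measurement:
theta = np.linspace(0.05, np.pi, 200)
measured = henyey_greenstein(theta, 0.9)

# Recover the anisotropy parameter g by least-squares fitting:
popt, _ = curve_fit(henyey_greenstein, theta, measured, p0=[0.5], bounds=(-1, 1))
g_fit = float(popt[0])
```

With real data, the fitted g then feeds a size estimate through the strong dependence of anisotropy on scatterer diameter that the abstract notes.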
Single Scattering Albedo Monitor for Airborne Particulates
NASA Astrophysics Data System (ADS)
Onasch, Timothy; Massoli, Paola; Kebabian, Paul; Hills, Frank; Bacon, Fred; Freedman, Andrew
2015-04-01
We describe a robust, compact, field-deployable instrument (the CAPS PMssa) that simultaneously measures airborne particle light extinction and scattering coefficients, and thus the single scattering albedo (SSA), on the same sample volume. With an appropriate change in mirrors and light source, measurements have been made at wavelengths ranging from 450 to 780 nm. The extinction measurement is based on cavity attenuated phase shift (CAPS) techniques as employed in the CAPS PMex particle extinction monitor; scattering is measured via integrating nephelometry by incorporating a Lambertian integrating sphere within the sample cell. The scattering measurement is calibrated using the extinction measurement. Measurements using ammonium sulfate particles of various sizes indicate that the response of the scattering channel with respect to measured extinction is linear to within 1% up to 1000 Mm^-1 and can be extended further (4000 Mm^-1) with additional corrections. The precision in both measurement channels is less than 1 Mm^-1 (1 s, 1σ). The truncation effect in the scattering channel, caused by light lost at extreme forward/backward scattering angles, was measured as a function of particle size using monodisperse polystyrene latex particles (n = 1.59). The results were successfully fit using a simple geometric model, allowing for reasonable extrapolation to a given wavelength, particle index of refraction, and particle size distribution, assuming spherical particles. For sub-micron-sized particles, the truncation corrections are comparable to those reported for commercial nephelometers. Measurements of the optical properties of ambient aerosol indicate that the values of the SSA of these particles measured with this instrument (0.91±0.03) using scattering and extinction agreed within experimental uncertainty with those determined using extinction measured by this instrument and absorption measured using a Multi-Angle Absorption Spectrometer (0.89±0.03) where the
78 FR 75449 - Miscellaneous Corrections; Corrections
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-12
... INFORMATION: The NRC published a final rule in the Federal Register on June 7, 2013 (78 FR 34245), to make.... The final rule contained minor errors in grammar, punctuation, and referencing. This document corrects... specifying metric units. The final rule inadvertently included additional errors in grammar and...
Atmospheric correction for inland waters
NASA Astrophysics Data System (ADS)
Vidot, Jerome; Santer, Richard P.
2004-02-01
Inland waters are an increasingly valuable natural resource with major impacts and benefits for population and environment. As the spatial resolution of "ocean color" satellite sensors improves, such observations become relevant for monitoring lake water quality. We first demonstrated that the required atmospheric correction cannot be conducted using the standard algorithms developed for the ocean. The ocean color sensors have spectral bands that allow characterization of aerosol over dark land pixels (vegetation in the blue and in the red spectral bands). It is possible to use a representative aerosol model in the atmospheric correction over inland waters after validating the spatial homogeneity of the aerosol model in the lake vicinity. The performance of this new algorithm is illustrated on SeaWiFS scenes of Lake Balaton (Hungary) and Lake Constance (Germany). We illustrated the good spatial homogeneity of the aerosols and the meaningfulness of the water-leaving radiances derived over these two lakes. We also addressed the specificity of the computation of the Fresnel reflection. The direct-to-diffuse term of this Fresnel contribution is reduced because of the limited size of the lake. Based on the primary scattering approximation, we propose a simple formulation of this component.
SU-E-I-20: Dead Time Count Loss Compensation in SPECT/CT: Projection Versus Global Correction
Siman, W; Kappadath, S
2014-06-01
Purpose: To compare projection-based versus global correction to compensate for deadtime count loss in SPECT/CT images. Methods: SPECT/CT images of an IEC phantom (2.3 GBq 99mTc) with ∼10% deadtime loss containing the 37 mm (uptake 3), 28 and 22 mm (uptake 6) spheres were acquired using a 2-detector SPECT/CT system with 64 projections/detector and 15 s/projection. The deadtime Ti and the true count rate Ni at each projection i were calculated using the monitor-source method. Deadtime-corrected SPECT images were reconstructed twice: (1) with projections that were individually corrected for deadtime losses; and (2) with the original projections with losses, then correcting the reconstructed SPECT images using a scaling factor equal to the inverse of the average fractional loss for 5 projections/detector. For both cases, the SPECT images were reconstructed using OSEM with attenuation and scatter corrections. The two SPECT datasets were assessed by comparing line profiles in the xy-plane and along the z-axis, evaluating the count recoveries, and comparing ROI statistics. Higher deadtime losses (up to 50%) were also simulated in the individually corrected projections by multiplying each projection i by exp(-a*Ni*Ti), where a is a scalar. Additionally, deadtime corrections in phantoms with different geometries and deadtime losses were also explored. The same two correction methods were carried out for all these data sets. Results: Averaging the deadtime losses in 5 projections/detector suffices to recover >99% of the lost counts in most clinical cases. The line profiles (xy-plane and z-axis) and the statistics in the ROIs drawn in the SPECT images corrected using both methods showed agreement within the statistical noise. The count-loss recoveries of the two methods also agree to within >99%. Conclusion: The projection-based and the global correction yield visually indistinguishable SPECT images. The global correction based on sparse sampling of projection losses allows for accurate SPECT deadtime
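The two correction strategies compared above can be sketched as follows; we interpret the global scaling factor as the inverse of the average fractional count *retention*, and the function names and toy numbers are ours:

```python
import numpy as np

def correct_projections(projections, fractional_losses):
    """Projection-based correction: scale each projection by the inverse
    of its own fractional count retention (1 - loss)."""
    proj = np.asarray(projections, dtype=float)          # (n_proj, n_pixels)
    retention = 1.0 - np.asarray(fractional_losses)      # per-projection
    return proj / retention[:, None]

def correct_global(projections, sampled_losses):
    """Global correction: one scaling factor from the average fractional
    loss of a sparse subset of projections (e.g., 5 per detector)."""
    scale = 1.0 / (1.0 - float(np.mean(sampled_losses)))
    return np.asarray(projections, dtype=float) * scale
```

In practice the correction is applied either to the projections before OSEM reconstruction (method 1) or as a single scale factor on the reconstructed volume (method 2).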
A new SERS: scattering enhanced Raman scattering
NASA Astrophysics Data System (ADS)
Bixler, Joel N.; Yakovlev, Vladislav V.
2014-03-01
Raman spectroscopy is a powerful technique that can be used to obtain detailed chemical information about a system without the need for chemical markers. It has been widely used for a variety of applications such as cancer diagnosis and material characterization. However, Raman scattering is a highly inefficient process, where only one in 10^11 scattered photons carry the needed information. Several methods have been developed to enhance this inherently weak effect, including surface enhanced Raman scattering and coherent anti-Stokes Raman scattering. These techniques suffer from drawbacks limiting their commercial use, such as the need for spatial localization of target molecules to a 'hot spot', or the need for complex laser systems. Here, we present a simple instrument to enhance spontaneous Raman scattering using elastic light scattering. Elastic scattering is used to substantially increase the interaction volume. Provided that the scattering medium exhibits very low absorption in the spectral range of interest, a large enhancement factor can be attained in a simple and inexpensive setting. In our experiments, we demonstrate an enhancement of 10^7 in Raman signal intensity. The proposed novel device is equally applicable for analyzing solids, liquids, and gases.
Scattering robust 3D reconstruction via polarized transient imaging.
Wu, Rihui; Suo, Jinli; Dai, Feng; Zhang, Yongdong; Dai, Qionghai
2016-09-01
Reconstructing 3D structure of scenes in the scattering medium is a challenging task with great research value. Existing techniques often impose strong assumptions on the scattering behaviors and are of limited performance. Recently, a low-cost transient imaging system has provided a feasible way to resolve the scene depth, by detecting the reflection instant on the time profile of a surface point. However, in cases with scattering medium, the rays are both reflected and scattered during transmission, and the depth calculated from the time profile largely deviates from the true value. To handle this problem, we used the different polarization behaviors of the reflection and scattering components, and introduced active polarization to separate the reflection component to estimate the scattering robust depth. Our experiments have demonstrated that our approach can accurately reconstruct the 3D structure underlying the scattering medium. PMID:27607944
Dense Plasma X-ray Scattering: Methods and Applications
Glenzer, S H; Lee, H J; Davis, P; Doppner, T; Falcone, R W; Fortmann, C; Hammel, B A; Kritcher, A L; Landen, O L; Lee, R W; Munro, D H; Redmer, R; Weber, S
2009-08-19
We have developed accurate x-ray scattering techniques to measure the physical properties of dense plasmas. Temperature and density are inferred from inelastic x-ray scattering data whose interpretation is model-independent for low to moderately coupled systems. Specifically, the spectral shape of the non-collective Compton scattering spectrum directly reflects the electron velocity distribution. In partially Fermi degenerate systems that have been investigated experimentally in laser shock-compressed beryllium, the Compton scattering spectrum provides the Fermi energy and hence the electron density. We show that forward scattering spectra that observe collective plasmon oscillations yield densities in agreement with Compton scattering. In addition, electron temperatures inferred from the dispersion of the plasmon feature are consistent with the ion temperature sensitive elastic scattering feature. Hence, theoretical models of the static ion-ion structure factor and consequently the equation of state of dense matter can be directly tested.
Grimmer, Rainer; Kachelriess, Marc
2011-04-15
Purpose: Scatter and beam hardening are prominent artifacts in x-ray CT. Currently, there is no precorrection method that inherently accounts for tube voltage modulation and shaped prefiltration. Methods: A method for self-calibration based on binary tomography of homogeneous objects, proposed by B. Li et al. ["A novel beam hardening correction method for computed tomography," in Proceedings of the IEEE/ICME International Conference on Complex Medical Engineering CME 2007, pp. 891-895, 23-27 May 2007], has been generalized in order to use this information to preprocess scans of other, nonbinary objects, e.g., to reduce artifacts in medical CT applications. Furthermore, the method was extended to handle scatter in addition to beam hardening and to allow for detector pixel-specific and ray-specific precorrections. This implies that the empirical binary tomography calibration (EBTC) technique is sensitive to spectral effects such as those induced by the heel effect, by shaped prefiltration, or by scanners with tube voltage modulation. The presented method models the beam hardening correction using a rational function, while the scatter component is modeled using the pep model of B. Ohnesorge et al. ["Efficient object scatter correction algorithm for third and fourth generation CT scanners," Eur. Radiol. 9(3), 563-569 (1999)]. A smoothness constraint is applied to the parameter space to regularize the underdetermined system of nonlinear equations. The parameters determined are then used to precorrect CT scans. Results: EBTC was evaluated using simulated data of a flat panel cone-beam CT scanner with tube voltage modulation and bow-tie prefiltration and using real data of a flat panel cone-beam CT scanner. In simulation studies, where the ground truth is known, the authors' correction model proved to be highly accurate and was able to reduce beam hardening by 97% and scatter by about 75%. Reconstructions of measured data showed significantly fewer artifacts than
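A rational-function beam hardening precorrection of the kind mentioned above can be sketched on a toy two-energy spectrum. The spectral weights and attenuation coefficients below are invented for illustration, not taken from the paper, and the fit is a simple linearized least squares rather than the paper's regularized EBTC procedure.

```python
import numpy as np

# Toy two-energy beam: weights and water attenuation coefficients at the
# two energies are illustrative values, not measured data.
weights = np.array([0.6, 0.4])
mu = np.array([0.25, 0.18])             # 1/cm

def poly_projection(lengths_cm):
    """Polychromatic projection value -ln(transmitted fraction)."""
    return -np.log(np.sum(weights * np.exp(-np.outer(lengths_cm, mu)), axis=1))

mu_eff = weights @ mu                   # effective monochromatic coefficient
lengths = np.linspace(0.1, 30.0, 50)
p = poly_projection(lengths)            # beam-hardened projections
q = mu_eff * lengths                    # ideal projections, linear in length

# Fit the rational correction q(p) = (a1*p + a2*p**2) / (1 + b1*p).
# Linearizing, q = a1*p + a2*p**2 - b1*p*q, an ordinary least-squares problem.
A = np.column_stack([p, p ** 2, -p * q])
a1, a2, b1 = np.linalg.lstsq(A, q, rcond=None)[0]
q_corr = (a1 * p + a2 * p ** 2) / (1 + b1 * p)

residual = np.max(np.abs(q_corr - q))   # remaining nonlinearity
uncorrected = np.max(np.abs(p - q))     # nonlinearity before correction
```

The three-parameter rational function is flexible enough to linearize the projection data over the full water-equivalent length range of this toy beam.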
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-08
... 67013, the Presidential Determination number should read ``2010-12'' (Presidential Sig.) [FR Doc. C1... Migration Needs Resulting from Violence in Kyrgyzstan Correction In Presidential document...
NASA Astrophysics Data System (ADS)
Bartkowski, Zygmunt; Bartkowska, Janina
2006-02-01
Prismatic corrections involve differences between the nominal and the effective (interior) prism, or tilts of the eye required to fixate straight ahead (Augenausgleichbewegung). In astigmatic corrections, if the prism does not lie in the principal sections of the cylinder, the directions of the two effects differ. In corrections of horizontal strabismus, a vertical component of the interior prism appears. Approximate formulae describing these phenomena are presented. A suitable setting can improve the quality of vision in the direction important to the patient.
A Review of Target Mass Corrections
I. Schienbein; V. Radescu; G. Zeller; M. E. Christy; C. E. Keppel; K. S. McFarland; W. Melnitchouk; F. I. Olness; M. H. Reno; F. Steffens; J.-Y. Yu
2007-09-06
With recent advances in the precision of inclusive lepton-nuclear scattering experiments, it has become apparent that comparable improvements are needed in the accuracy of the theoretical analysis tools. In particular, when extracting parton distribution functions in the large-x region, it is crucial to correct the data for effects associated with the nonzero mass of the target. We present here a comprehensive review of these target mass corrections (TMC) to structure functions data, summarizing the relevant formulas for TMCs in electromagnetic and weak processes. We include a full analysis of both hadronic and partonic masses, and trace how these effects appear in the operator product expansion and the factorized parton model formalism, as well as their limitations when applied to data in the x -> 1 limit. We evaluate the numerical effects of TMCs on various structure functions, and compare fits to data with and without these corrections.
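The corrections summarized in that review build on the operator product expansion result of Georgi and Politzer; in the standard notation the target mass corrected F2 structure function reads:

```latex
F_2^{\rm TMC}(x,Q^2)
  = \frac{x^2}{\xi^2\rho^3}\,F_2^{(0)}(\xi,Q^2)
  + \frac{6M^2x^3}{Q^2\rho^4}\,h_2(\xi,Q^2)
  + \frac{12M^4x^4}{Q^4\rho^5}\,g_2(\xi,Q^2),

\rho = \sqrt{1 + \frac{4M^2x^2}{Q^2}}, \qquad
\xi = \frac{2x}{1+\rho},

h_2(\xi,Q^2) = \int_\xi^1 du\,\frac{F_2^{(0)}(u,Q^2)}{u^2}, \qquad
g_2(\xi,Q^2) = \int_\xi^1 du\,(u-\xi)\,\frac{F_2^{(0)}(u,Q^2)}{u^2}.
```

Here M is the target mass, ξ is the Nachtmann variable, and F_2^{(0)} denotes the massless-target structure function; conventions for the weak structure functions and partonic-mass effects are treated in the review itself.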
Accurate theoretical chemistry with coupled pair models.
Neese, Frank; Hansen, Andreas; Wennmohs, Frank; Grimme, Stefan
2009-05-19
Quantum chemistry has found its way into the everyday work of many experimental chemists. Calculations can predict the outcome of chemical reactions, afford insight into reaction mechanisms, and be used to interpret structure and bonding in molecules. Thus, contemporary theory offers tremendous opportunities in experimental chemical research. However, even with present-day computers and algorithms, we cannot solve the many-particle Schrödinger equation exactly; inevitably some error is introduced in approximating the solutions of this equation. Thus, the accuracy of quantum chemical calculations is of critical importance. The affordable accuracy depends on molecular size and particularly on the total number of atoms: for orientation, ethanol has 9 atoms, aspirin 21 atoms, morphine 40 atoms, sildenafil 63 atoms, paclitaxel 113 atoms, insulin nearly 800 atoms, and quaternary hemoglobin almost 12,000 atoms. Currently, molecules with up to approximately 10 atoms can be very accurately studied by coupled cluster (CC) theory, approximately 100 atoms with second-order Møller-Plesset perturbation theory (MP2), approximately 1000 atoms with density functional theory (DFT), and beyond that number with semiempirical quantum chemistry and force-field methods. The overwhelming majority of present-day calculations in the 100-atom range use DFT. Although these methods have been very successful in quantum chemistry, they do not offer a well-defined hierarchy of calculations that allows one to systematically converge to the correct answer. Recently a number of rather spectacular failures of DFT methods have been found, even for seemingly simple systems such as hydrocarbons, fueling renewed interest in wave function-based methods that incorporate the relevant physics of electron correlation in a more systematic way. Thus, it would be highly desirable to fill the gap between 10 and 100 atoms with highly correlated ab initio methods. We have found that one of the earliest (and now
Compton Scattering Experiments with Polychromatic Radiation
NASA Astrophysics Data System (ADS)
Schütz, Wolfgang; Waldeck, Beate; Flösch, Dietmar; Weyrich, Wolf
1993-02-01
We present an iterative algorithm that makes it possible to obtain accurate Compton profiles J(q) from Compton scattering spectra I2(ω2) when the excitation radiation is not strictly monochromatic. It requires knowledge of the spectral distribution of the primary radiation I1(ω1), validity of the impulse approximation, and dominance of a monochromatic component in I1(ω1) over the polychromatic remainder. Conversely, the primary spectrum is often not directly accessible experimentally. In such a situation it is possible to evaluate the primary spectrum I1(ω1) from the spectrum of scattered photons, I2(ω2), with a similar iterative algorithm. We use a scattering target of high atomic number to ensure that the elastically scattered photons dominate the inelastically scattered ones. From the scattered spectrum we obtain a model for the Compton profile that allows us to separate the inelastic part of the scattered spectrum from the elastic part, which, in turn, is proportional to the spectral distribution of the primary radiation.
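The flavor of such an iteration can be sketched by modeling the primary radiation as a dominant monochromatic line plus weak, shifted satellite lines, and peeling off the satellite contamination by fixed-point iteration. The profile shape, satellite weight, and bin shift below are invented; this is a sketch of the iterative idea, not the authors' algorithm.

```python
import numpy as np

def deconvolve_satellites(i_meas, satellites, n_iter=20):
    """Iteratively recover the monochromatic profile J from a spectrum
    contaminated by weak polychromatic satellite lines.

    `satellites` is a list of (weight, shift_in_bins) pairs describing the
    non-dominant part of the primary spectrum (weights << 1, so the
    fixed-point iteration contracts geometrically).
    """
    j = i_meas.copy()
    for _ in range(n_iter):
        contamination = np.zeros_like(i_meas)
        for w, shift in satellites:
            contamination += w * np.roll(j, shift)
        j = i_meas - contamination
    return j

# Synthetic Gaussian Compton profile measured with one 10% satellite line
q = np.linspace(-5.0, 5.0, 201)
j_true = np.exp(-q ** 2 / 2.0)
sats = [(0.10, 15)]                       # hypothetical weight and bin shift
i_meas = j_true + 0.10 * np.roll(j_true, 15)

j_rec = deconvolve_satellites(i_meas, sats)
err = np.max(np.abs(j_rec - j_true))
```

Because each pass shrinks the residual by the satellite weight, twenty iterations reduce a 10% contamination to numerical noise.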
Use of the Wigner representation in scattering problems
NASA Technical Reports Server (NTRS)
Bemler, E. A.
1975-01-01
The basic equations of quantum scattering were translated into the Wigner representation, putting quantum mechanics in the form of a stochastic process in phase space, with real valued probability distributions and source functions. The interpretative picture associated with this representation is developed and stressed and results used in applications published elsewhere are derived. The form of the integral equation for scattering as well as its multiple scattering expansion in this representation are derived. Quantum corrections to classical propagators are briefly discussed. The basic approximation used in the Monte-Carlo method is derived in a fashion which allows for future refinement and which includes bound state production. Finally, as a simple illustration of some of the formalism, scattering is treated by a bound two body problem. Simple expressions for single and double scattering contributions to total and differential cross-sections as well as for all necessary shadow corrections are obtained.
ERIC Educational Resources Information Center
di Francia, Giuliano Toraldo
1973-01-01
The art of deriving information about an object from the radiation it scatters was once limited to visible light. Now, owing to new techniques, much of modern physical science research utilizes radiation scattering. (DF)
NASA Astrophysics Data System (ADS)
Wu, Li-Li; Zhou, Qihou H.; Chen, Tie-Jun; Liang, J. J.; Wu, Xin
2015-09-01
Simultaneous derivation of multiple ionospheric parameters from incoherent scatter power spectra in the F1 region is difficult because the spectra have only subtle differences for different combinations of parameters. In this study, we apply a particle swarm optimizer (PSO) to incoherent scatter power spectrum fitting and compare it to the commonly used least squares fitting (LSF) technique. The PSO method is found to outperform the LSF method in practically all scenarios using simulated data. The PSO method offers the advantages of not being sensitive to initial assumptions and of allowing physical constraints to be easily built into the model. When simultaneously fitting for molecular ion fraction (fm), ion temperature (Ti), and the ratio of ion to electron temperature (γT), γT is largely stable. The uncertainty between fm and Ti can be described by a quadratic relationship. The significance of this result is that Ti can be retroactively corrected for data archived many years ago, where the assumption of fm may not be accurate and the original power spectra are unavailable. In our discussion, we emphasize the fitting for fm, which is a difficult parameter to obtain. The PSO method is often successful in obtaining fm where LSF fails. We apply both PSO and LSF to actual observations made by the Arecibo incoherent scatter radar. The results show that the PSO method is a viable means to simultaneously determine ion and electron temperatures and the molecular ion fraction when the latter is greater than 0.3.
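A minimal particle swarm fit of this kind can be sketched as follows. The spectrum model below is an invented stand-in for the true incoherent scatter spectrum, deliberately built so that fm and Ti trade off inside a single width parameter, mimicking the ambiguity discussed above; the PSO itself is a textbook global-best variant with box constraints.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectrum_model(freq, ti, gamma_t, fm):
    """Invented stand-in for an incoherent scatter spectrum: a Lorentzian
    whose width mixes Ti and fm (mimicking their ambiguity) and whose
    height depends on the temperature ratio gamma_t."""
    width = 1.0 + ti / 500.0 + 2.0 * fm
    height = 1.0 / (1.0 + 0.5 * (gamma_t - 1.0))
    return height / (1.0 + (freq / width) ** 2)

def pso_fit(freq, data, bounds, n_particles=40, n_iter=200):
    """Minimal particle swarm optimizer for least-squares spectrum fitting.
    Physical constraints enter simply as box bounds on each parameter."""
    lo, hi = bounds[:, 0], bounds[:, 1]

    def cost(p):
        return np.sum((spectrum_model(freq, *p) - data) ** 2)

    pos = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)      # enforce physical bounds
        c = np.array([cost(p) for p in pos])
        better = c < pbest_cost
        pbest[better], pbest_cost[better] = pos[better], c[better]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest

freq = np.linspace(-10.0, 10.0, 101)
ti_true, gamma_true, fm_true = 800.0, 1.5, 0.4
data = spectrum_model(freq, ti_true, gamma_true, fm_true)
bounds = np.array([[200.0, 2000.0], [1.0, 3.0], [0.0, 1.0]])
fit = pso_fit(freq, data, bounds)
fit_cost = np.sum((spectrum_model(freq, *fit) - data) ** 2)
```

Because fm and Ti are degenerate in this toy model, the swarm recovers the width combination and gamma_t reliably even when the individual fm and Ti values slide along the ambiguity valley, echoing the fm-Ti relationship noted in the abstract.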
Contribution of solvent water to the solution X-ray scattering profile of proteins.
Seki, Yasutaka; Tomizawa, Tadashi; Khechinashvili, Nikolay N; Soda, Kunitsugu
2002-03-28
A theoretical framework is presented for analyzing how solvent water contributes to the X-ray scattering profile of a protein solution. Molecular dynamics simulations were carried out on pure water and on an aqueous solution of myoglobin to determine the spatial distribution of water molecules in each. Their solution X-ray scattering (SXS) profiles were evaluated numerically from the resulting atomic-coordinate data. It is shown that two kinds of contributions from solvent water must be considered to predict the SXS profile of a solution accurately. One is the excluded solvent scattering originating in the exclusion of water molecules from the space occupied by solutes. The other is the hydration effect resulting from the formation of a specific distribution of water around solutes. Explicit consideration of only two molecular layers of water is practically sufficient to incorporate the hydration effect. Care should be taken when using an approximation in which an averaged electron density distribution is assumed for the structure factor, because it may predict profiles that deviate considerably from the correct profile at large K. PMID:12062383
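The excluded-solvent contribution described above is commonly folded into an effective contrast per atom before evaluating the orientationally averaged intensity with the Debye formula. The sketch below uses that standard construction with hypothetical coordinates and contrasts; it does not reproduce the paper's MD-based hydration treatment.

```python
import numpy as np

def debye_intensity(coords, f_eff, k_values):
    """Solution X-ray scattering intensity via the Debye formula.

    The excluded-solvent scattering is absorbed into effective form
    factors f_eff = f_atom - rho_solvent * v_excluded (assumed
    K-independent here for simplicity)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    ff = f_eff[:, None] * f_eff[None, :]
    out = []
    for k in k_values:
        kd = k * d
        # sin(kd)/(kd), with the d = 0 (self) terms equal to 1;
        # np.sinc(x) computes sin(pi*x)/(pi*x), hence the division by pi.
        s = np.where(kd > 0, np.sinc(kd / np.pi), 1.0)
        out.append(np.sum(ff * s))
    return np.array(out)

# Four hypothetical scatterers with positive solvent contrast
coords = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0],
                   [0.0, 3.0, 0.0], [0.0, 0.0, 3.0]])
f_eff = np.array([6.0, 6.0, 7.0, 8.0])
k_vals = np.array([0.0, 0.1, 0.5, 1.0])
profile = debye_intensity(coords, f_eff, k_vals)
```

At K = 0 the intensity reduces to the squared total contrast, a convenient sanity check on any Debye-formula implementation.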
NASA Technical Reports Server (NTRS)
Lyapustin, Alexei; Martonchik, John; Wang, Yujie; Laszlo, Istvan; Korkin, Sergey
2011-01-01
This paper describes a radiative transfer basis of the algorithm MAIAC which performs simultaneous retrievals of atmospheric aerosol and bidirectional surface reflectance from the Moderate Resolution Imaging Spectroradiometer (MODIS). The retrievals are based on an accurate semianalytical solution for the top-of-atmosphere reflectance expressed as an explicit function of three parameters of the Ross-Thick Li-Sparse model of surface bidirectional reflectance. This solution depends on certain functions of atmospheric properties and geometry which are precomputed in the look-up table (LUT). This paper further considers correction of the LUT functions for variations of surface pressure/height and of atmospheric water vapor, which is a common task in the operational remote sensing. It introduces a new analytical method for the water vapor correction of the multiple-scattering path radiance. It also summarizes the few basic principles that provide a high efficiency and accuracy of the LUT-based radiative transfer for the aerosol/surface retrievals and optimize the size of LUT. For example, the single-scattering path radiance is calculated analytically for a given surface pressure and atmospheric water vapor. The same is true for the direct surface-reflected radiance, which along with the single-scattering path radiance largely defines the angular dependence of measurements. For these calculations, the aerosol phase functions and kernels of the surface bidirectional reflectance model are precalculated at a high angular resolution. The other radiative transfer functions depend rather smoothly on angles because of multiple scattering and can be calculated at coarser angular resolution to reduce the LUT size. At the same time, this resolution should be high enough to use the nearest neighbor geometry angles to avoid costly three-dimensional interpolation. The pressure correction is implemented via linear interpolation between two LUTs computed for the standard and reduced
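The pressure correction via linear interpolation between two precomputed LUTs can be sketched as follows; the pressure levels and LUT values are hypothetical placeholders, not MAIAC data.

```python
import numpy as np

def pressure_corrected(lut_standard, lut_reduced, p_standard, p_reduced, p):
    """Linearly interpolate a LUT function between the two precomputed
    surface-pressure levels (the standard and a reduced atmosphere)."""
    w = (p_standard - p) / (p_standard - p_reduced)
    return (1.0 - w) * lut_standard + w * lut_reduced

# Hypothetical path-radiance LUT entries at 1013 hPa and 700 hPa
lut_std = np.array([0.100, 0.080, 0.050])
lut_red = np.array([0.070, 0.055, 0.035])
val = pressure_corrected(lut_std, lut_red, 1013.0, 700.0, 850.0)
```

The interpolation reproduces each LUT exactly at its own pressure level and varies linearly in between, which is what makes a two-level LUT sufficient for intermediate surface heights.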