Hayes, E.F.; Darakjian, Z.; Walker, R.B.
1990-01-01
The Bending Corrected Rotating Linear Model (BCRLM), developed by Hayes and Walker, is a simple approximation to the true multidimensional scattering problem for reactions of the type A + BC → AB + C. While the BCRLM method is simpler than methods designed to obtain accurate three-dimensional quantum scattering results, this simplicity turns out to be a major advantage in terms of our benchmarking studies. The computer code used to obtain BCRLM scattering results is written for the most part in standard FORTRAN and has been ported to several scalar, vector, and parallel architecture computers, including the IBM 3090-600J, the Cray X-MP and Y-MP, the Ardent Titan, the IBM RISC System/6000, the Convex C-1, and the MIPS 2000. Benchmark results will be reported for each of these machines, with an emphasis on comparing the scalar, vector, and parallel performance of the standard code with minimal modifications. Detailed analysis of the mapping of the BCRLM approach onto both shared and distributed memory parallel architecture machines indicates the importance of introducing several key changes in the basic strategy and algorithms used to calculate scattering results. This analysis of the BCRLM approach provides some insights into optimal strategies for mapping three-dimensional quantum scattering methods, such as the Parker-Pack method, onto shared or distributed memory parallel computers.
Improved scatter correction using adaptive scatter kernel superposition
NASA Astrophysics Data System (ADS)
Sun, M.; Star-Lack, J. M.
2010-11-01
Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS) requiring spatial domain convolutions and (2) fast adaptive scatter kernel superposition (fASKS) where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS, were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
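Conceptually, kernel-superposition corrections model the measured projection as primary plus a convolution of the (unknown) primary with a scatter kernel, and invert that relation iteratively; fASKS gains speed by doing the convolutions in Fourier space. The sketch below is a minimal, hypothetical illustration with a single stationary kernel and an assumed scalar scatter amplitude of 0.4, not the adaptive, thickness-dependent kernels of the paper.

```python
import numpy as np

def sks_scatter_correct(projection, kernel, scatter_amp=0.4, n_iter=5):
    """Minimal SKS-style correction: model scatter as a convolution of the
    (unknown) primary with a normalized kernel and solve
    primary = measured - amp * (kernel (*) primary) by fixed-point
    iteration, with the convolution performed in Fourier space as in fASKS.
    The single stationary kernel and scalar amplitude are simplifying
    assumptions; the paper's kernels adapt to local thickness."""
    kernel = kernel / kernel.sum()                    # normalize kernel weights
    K = np.fft.rfft2(kernel, projection.shape)        # kernel transfer function
    primary = projection.astype(float).copy()
    for _ in range(n_iter):
        scatter = np.fft.irfft2(np.fft.rfft2(primary) * K, projection.shape)
        primary = projection - scatter_amp * scatter  # subtract scatter estimate
    return primary
```

With a delta-function kernel the iteration converges toward measured/(1 + amp), which makes the fixed-point behavior easy to check.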
An Accurate, Simplified Model of Intrabeam Scattering
Bane, Karl LF
2002-05-23
Beginning with the general Bjorken-Mtingwa solution for intrabeam scattering (IBS) we derive an accurate, greatly simplified model of IBS, valid for high energy beams in normal storage ring lattices. In addition, we show that, under the same conditions, a modified version of Piwinski's IBS formulation (where η_{x,y}^2/β_{x,y} has been replaced by H_{x,y}) asymptotically approaches the result of Bjorken-Mtingwa.
Algorithmic scatter correction in dual-energy digital mammography
Chen, Xi; Mou, Xuanqin; Nishikawa, Robert M.; Lau, Beverly A.; Chan, Suk-tak; Zhang, Lei
2013-11-15
The background DE calcification signal obtained with scatter-uncorrected data was reduced by 58% when the data were scatter-corrected with the algorithmic method. With the scatter-correction algorithm and denoising, the minimum visible calcification size can be reduced from 380 to 280 μm. Conclusions: When the proposed algorithmic scatter correction is applied to images, the resultant background DE calcification signals are reduced and the CNR of calcifications is improved. This method has similar or even better performance than the pinhole-array interpolation method for scatter correction in DEDM; moreover, it is convenient and requires no extra exposure to the patient. Although the proposed scatter correction method is effective, it has so far been validated only with a 5-cm-thick phantom with calcifications and a homogeneous background. The method should be tested on structured backgrounds to more accurately gauge its effectiveness.
Monte Carlo scatter correction for SPECT
NASA Astrophysics Data System (ADS)
Liu, Zemei
The goal of this dissertation is to present a quantitatively accurate and computationally fast scatter correction method that is robust and easily accessible for routine applications in SPECT imaging. A Monte Carlo based scatter estimation method is investigated and developed further. The Monte Carlo simulation program SIMIND (Simulating Medical Imaging Nuclear Detectors) was specifically developed to simulate clinical SPECT systems. The SIMIND scatter estimation (SSE) method was extended using a multithreading technique to distribute the scatter estimation task across multiple threads running concurrently on multi-core CPUs, accelerating the scatter estimation process. An analytical collimator model, which yields less noisy estimates, was used during SSE. The research includes the addition to SIMIND of charge transport modeling in cadmium zinc telluride (CZT) detectors. Phenomena associated with radiation-induced charge transport, including charge trapping, charge diffusion, charge sharing between neighboring detector pixels, and uncertainties in the detection process, are addressed. Experimental measurements and simulation studies were designed for scintillation crystal based SPECT and CZT based SPECT systems to verify and evaluate the expanded SSE method. Jaszczak Deluxe and Anthropomorphic Torso Phantoms (Data Spectrum Corporation, Hillsborough, NC, USA) were used for experimental measurements, and digital versions of the same phantoms were employed during simulations to mimic experimental acquisitions. This study design enabled easy comparison of experimental and simulated data. The results have consistently shown that the SSE method performed similarly to or better than the triple energy window (TEW) and effective scatter source estimation (ESSE) methods for experiments on all the clinical SPECT systems. The SSE method is shown to be a viable method for scatter estimation in routine clinical use.
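The batch-distribution idea above is straightforward to sketch: split the photon budget across concurrent workers and pool their tallies. The toy "physics" below (a photon counts as scattered with fixed probability) is a deliberately trivial stand-in for a SIMIND-style history loop; only the multithreaded batch interface reflects the text.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def simulate_batch(args):
    """Toy photon-history batch: each photon counts as 'scattered' with
    probability p_scat. A trivial stand-in for one SIMIND-style history
    loop; the point is the batch interface, not the physics."""
    n_photons, p_scat, seed = args
    rng = random.Random(seed)                    # per-batch reproducible stream
    hits = sum(1 for _ in range(n_photons) if rng.random() < p_scat)
    return hits, n_photons

def parallel_scatter_fraction(n_photons=200_000, p_scat=0.3, n_workers=4):
    """Split the photon budget across concurrent workers, as the
    multithreaded SSE does, then pool the tallies into one estimate."""
    per_worker = n_photons // n_workers
    batches = [(per_worker, p_scat, seed) for seed in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        tallies = list(pool.map(simulate_batch, batches))
    return sum(h for h, _ in tallies) / sum(n for _, n in tallies)
```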
Scatter factor corrections for elongated fields
Higgins, P.D.; Sohn, W.H.; Sibata, C.H.; McCarthy, W.A. )
1989-09-01
Measurements have been made to determine scatter factor corrections for elongated fields of Cobalt-60 and for nominal linear accelerator energies of 6 MV (Siemens Mevatron 67) and 18 MV (AECL Therac 20). It was found that for every energy the collimator scatter factor varies by 2% or more as the field length-to-width ratio increases beyond 3:1. The phantom scatter factor is independent of which collimator pair is elongated at these energies. For 18 MV photons it was found that the collimator scatter factor is complicated by field-size-dependent backscatter into the beam monitor.
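Scatter factors are usually tabulated for square fields, so an elongated rectangle must first be mapped to an equivalent square. A minimal sketch using the standard Sterling 4·Area/Perimeter rule (a common textbook approximation, not a formula taken from the abstract above), together with the Sc·Sp decomposition the measurements rely on:

```python
def equivalent_square(length, width):
    """Sterling's rule: a rectangular field scatters roughly like a square
    field of side 4*Area/Perimeter. A standard approximation quoted for
    context; it is not taken from the abstract above."""
    return 4.0 * (length * width) / (2.0 * (length + width))

def total_scatter_factor(sc, sp):
    """Output factor as the product of the collimator scatter factor Sc and
    the phantom scatter factor Sp, the decomposition measured in the study."""
    return sc * sp
```

A 30 x 10 field has the same area as a 17.3 cm square but an equivalent square of only 15 cm, which illustrates why a 3:1 elongation shifts the scatter factor.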
Scatter factor corrections for elongated fields.
Higgins, P D; Sohn, W H; Sibata, C H; McCarthy, W A
1989-01-01
Measurements have been made to determine scatter factor corrections for elongated fields of Cobalt-60 and for nominal linear accelerator energies of 6 MV (Siemens Mevatron 67) and 18 MV (AECL Therac 20). It was found that for every energy the collimator scatter factor varies by 2% or more as the field length-to-width ratio increases beyond 3:1. The phantom scatter factor is independent of which collimator pair is elongated at these energies. For 18 MV photons it was found that the collimator scatter factor is complicated by field-size-dependent backscatter into the beam monitor.
Proximity effect correction concerning forward scattering
NASA Astrophysics Data System (ADS)
Tsunoda, Dai; Shoji, Masahiro; Tsunoe, Hiroyuki
2010-09-01
The proximity effect is a critical problem in electron-beam (EB) lithography, which is used in photomask writing. The proximity effect means that an electron shot from the gun scatters by colliding with resist molecules or substrate atoms, causing CD variation that depends on pattern density [1]. Scattering by collision with resist molecules is called "forward scattering" and acts over a range of tens of nanometers; scattering by substrate atoms is called "backward scattering" and acts over approximately 10 micrometers at a 50 keV acceleration voltage. In conventional proximity effect correction (PEC) for mask writing, the forward scattering effect could be neglected; with shrinking feature sizes, however, forward scattering must be taken into account. We have proposed a PEC software product named "PATACON PC-Cluster" [2], which can account for forward scattering and calculate the optimum dose modulation. In this communication, we report the PEC processing throughput when forward scattering is taken into account. The key technique is to use a different processing field size for the forward scattering calculation. Additionally, we show the possibility that effective PEC may be achieved by connecting the forward scattering and backward scattering calculations.
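The two scattering ranges quoted above are conventionally captured by a double-Gaussian point-spread function: a narrow forward term plus a broad backscatter term. The parameter values below (alpha, beta, eta) are illustrative defaults, not PATACON's.

```python
import math

def proximity_psf(r_um, alpha_um=0.03, beta_um=10.0, eta=0.7):
    """Classic double-Gaussian proximity-effect point-spread function: a
    narrow forward-scattering term of range alpha (tens of nm) plus a broad
    backward-scattering term of range beta (~10 um at 50 keV), mixed with
    backscatter ratio eta. Parameter values are illustrative assumptions."""
    fwd = math.exp(-(r_um / alpha_um) ** 2) / (math.pi * alpha_um ** 2)
    back = math.exp(-(r_um / beta_um) ** 2) / (math.pi * beta_um ** 2)
    return (fwd + eta * back) / (1.0 + eta)   # normalized mixture of the two
```

The very different field sizes needed to sample the two terms accurately are why the paper uses a separate processing field size for the forward-scattering calculation.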
Finite volume corrections to pi pi scattering
Sato, Ikuro; Bedaque, Paulo F.; Walker-Loud, Andre
2006-01-13
Lattice QCD studies of hadron-hadron interactions are performed by computing the energy levels of the system in a finite box. The shifts in energy levels proportional to inverse powers of the volume are related to scattering parameters in a model independent way. In addition, there are non-universal exponentially suppressed corrections that distort this relation. These terms are proportional to e^{-m_pi L} and become relevant as the chiral limit is approached. In this paper we report on a one-loop chiral perturbation theory calculation of the leading exponential corrections in the case of I=2 pi pi scattering near threshold.
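To get a feel for when the e^{-m_pi L} terms matter, one can evaluate the bare exponential for typical lattice parameters. The pion mass and box size below are illustrative choices, not values from the paper.

```python
import math

def finite_volume_suppression(m_pi_mev=139.57, box_fm=4.0):
    """Size of the leading exponential finite-volume factor e^(-m_pi*L).
    hbar*c = 197.327 MeV*fm makes m_pi (in MeV) times L (in fm)
    dimensionless. Default pion mass and box size are illustrative."""
    hbarc_mev_fm = 197.327
    return math.exp(-m_pi_mev * box_fm / hbarc_mev_fm)
```

For a physical pion mass in a 4 fm box, m_pi*L is about 2.8 and the factor is at the few-percent level, which is why these corrections become relevant as the chiral limit is approached.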
Correction of sunspot intensities for scattered light
NASA Technical Reports Server (NTRS)
Mullan, D. J.
1973-01-01
Correction of sunspot intensities for scattered light usually involves fitting theoretical curves to observed aureoles (Zwaan, 1965; Staveland, 1970, 1972). In this paper we examine the inaccuracies in the determination of scattered light by this method. Earlier analyses are extended to examine uncertainties due to the choice of the expression for limb darkening. For the spread function, we consider Lorentzians and Gaussians for which analytic expressions for the aureole can be written down. Lorentzians lead to divergence and normalization difficulties, and should not be used in scattered light determinations. Gaussian functions are more suitable.
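The normalization difficulty with Lorentzian spread functions can be seen numerically: the 2-D integral of a Gaussian converges, while that of a Lorentzian grows without bound as the integration radius increases. A small sketch (midpoint quadrature; the specific profiles are the standard unit-width forms, an assumption for illustration):

```python
import math

def radial_integral(profile, r_max, n=100_000):
    """Midpoint-rule integral of 2*pi*r*profile(r) over 0..r_max, i.e. the
    total weight of a radially symmetric 2-D spread function out to r_max."""
    dr = r_max / n
    return sum(2.0 * math.pi * ((i + 0.5) * dr) * profile((i + 0.5) * dr) * dr
               for i in range(n))

def gaussian(r):
    return math.exp(-r * r)          # normalizable: 2-D integral -> pi

def lorentzian(r):
    return 1.0 / (1.0 + r * r)       # 2-D integral grows like pi*ln(1+R^2)
```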
Quadratic electroweak corrections for polarized Moller scattering
A. Aleksejevs, S. Barkanova, Y. Kolomensky, E. Kuraev, V. Zykunov
2012-01-01
The paper discusses the two-loop (NNLO) electroweak radiative corrections to the parity violating electron-electron scattering asymmetry induced by squaring one-loop diagrams. The calculations are relevant for the ultra-precise 11 GeV MOLLER experiment planned at Jefferson Laboratory and experiments at high-energy future electron colliders. The imaginary parts of the amplitudes are taken into consideration consistently in both the infrared-finite and divergent terms. The size of the obtained partial correction is significant, which indicates a need for a complete study of the two-loop electroweak radiative corrections in order to meet the precision goals of future experiments.
Accurately Detecting Students' Lies regarding Relational Aggression by Correctional Instructions
ERIC Educational Resources Information Center
Dickhauser, Oliver; Reinhard, Marc-Andre; Marksteiner, Tamara
2012-01-01
This study investigates the effect of correctional instructions when detecting lies about relational aggression. Based on models from the field of social psychology, we predict that correctional instruction will lead to a less pronounced lie bias and to more accurate lie detection. Seventy-five teachers received videotapes of students' true denial…
Scatter correction in digital mammography based on image deconvolution.
Ducote, J L; Molloi, S
2010-03-07
X-ray scatter is a major cause of nonlinearity in densitometry measurements using digital mammography. Previous scatter correction techniques have primarily used a single scatter point spread function to estimate x-ray scatter. In this study, a new algorithm to correct x-ray scatter based on image convolution was implemented using a spatially variant scatter point spread function which is energy and thickness dependent. The scatter kernel was characterized in terms of its scattering fraction (SF) and scatter radial extent (k) on uniform Lucite phantoms with thickness of 0.8-8.0 cm. The algorithm operates on a pixel-by-pixel basis by grouping pixels of similar thicknesses into a series of mask images that are individually deconvolved using Fourier image analysis with a distinct kernel for each image. The algorithm was evaluated with three Lucite step phantoms and one anthropomorphic breast phantom using a full-field digital mammography system at energies of 24, 28, 31 and 49 kVp. The true primary signal was measured with a multi-hole collimator. The effect on image quality was also evaluated. For all 16 studies, the average mean percentage error in estimating the true primary signal was found to be -2.13% and the average rms percentage error was 2.60%. The image quality was seen to improve at every energy up to 25% at 49 kVp. The results indicate that a technique based on a spatially variant scatter point spread function can accurately estimate x-ray scatter.
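The pixel-grouping step above can be sketched compactly: pixels of similar thickness are collected into masks and each mask receives its own thickness-dependent correction. Scalar per-bin scatter fractions stand in here for the paper's distinct deconvolution kernels; the bin edges and fraction values are illustrative assumptions.

```python
import numpy as np

def thickness_binned_correction(projection, thickness, scatter_fractions):
    """Group pixels into thickness bins and correct each bin with its own
    scatter fraction. A per-bin scalar stands in for the paper's distinct
    kernels; edges and fractions are illustrative."""
    n_bins = len(scatter_fractions)
    edges = np.linspace(thickness.min(), thickness.max() + 1e-9, n_bins + 1)
    primary = np.zeros_like(projection, dtype=float)
    for b, sf in enumerate(scatter_fractions):
        mask = (thickness >= edges[b]) & (thickness < edges[b + 1])
        primary[mask] = projection[mask] * (1.0 - sf)   # remove scatter share
    return primary
```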
QCD Corrections in Transversely Polarized Scattering
Vogelsang,W.
2008-10-06
We discuss two recent calculations of higher-order QCD corrections to scattering of transversely polarized hadrons. A basic concept underlying much of the theoretical description of high-energy hadronic scattering is the factorization theorem, which states that large momentum-transfer reactions may be factorized into long-distance pieces that contain information on the structure of the nucleon in terms of its parton densities, and parts that are short-distance and describe the hard interactions of the partons. Two crucial points are that on the one hand the long-distance contributions are universal, i.e., they are the same in any inelastic reaction under consideration, and that on the other hand the short-distance pieces depend only on the large scales related to the large momentum transfer in the overall reaction and, therefore, may be evaluated using QCD perturbation theory. The lowest order for the latter can generally only serve to give a rough description of the reaction under study. It merely captures the main features, but does not usually provide a quantitative understanding. The first-order ('next-to-leading order' (NLO)) corrections are generally indispensable in order to arrive at a firmer theoretical prediction for hadronic cross sections, and in some cases even an all-order resummation of large perturbative corrections is needed. In the present paper we will discuss two calculations [1, 2] of higher-order QCD corrections to transversely polarized scattering.
Atmospheric scattering corrections to solar radiometry
NASA Technical Reports Server (NTRS)
Box, M. A.; Deepak, A.
1979-01-01
Whenever a solar radiometer is used to measure direct solar radiation, some diffuse sky radiation invariably enters the detector's field of view along with the direct beam. Therefore, the atmospheric optical depth obtained by the use of Bouguer's transmission law (also called Beer-Lambert's law), which is valid only for direct radiation, needs to be corrected by taking account of the scattered radiation. This paper discusses the correction factors needed to account for the diffuse (i.e., singly and multiply scattered) radiation and the algorithms developed for retrieving aerosol size distribution from such measurements. For a radiometer with a small field of view (half-cone angle of less than 5 deg) and relatively clear skies (optical depths less than 0.4), it is shown that the total diffuse contribution represents approximately 1% of the total intensity.
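The retrieval above inverts Bouguer's law V = V0·exp(-tau·m) after removing the diffuse contribution from the measured signal. The sketch below treats the diffuse light as a fixed fraction of the measurement (the ~1% clear-sky, small-field case quoted above); that fixed fraction is a simplification, not the paper's full correction factor.

```python
import math

def optical_depth(v_measured, v_extraterrestrial, airmass, diffuse_fraction=0.01):
    """Bouguer (Beer-Lambert) retrieval: V = V0*exp(-tau*m), solved for tau
    after subtracting an assumed diffuse-sky fraction from the measurement.
    Treating diffuse light as a fixed fraction is a simplification."""
    direct = v_measured * (1.0 - diffuse_fraction)   # remove diffuse contribution
    return -math.log(direct / v_extraterrestrial) / airmass
```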
Scatter correction for cone-beam CT in radiation therapy
Zhu, Lei; Xie, Yaoqin; Wang, Jing; Xing, Lei
2009-01-01
Cone-beam CT (CBCT) is being increasingly used in modern radiation therapy for patient setup and adaptive replanning. However, due to the large volume of x-ray illumination, scatter becomes a rather serious problem and is considered one of the fundamental limitations of CBCT image quality. Many scatter correction algorithms have been proposed in the literature, while a standard practical solution still remains elusive. In radiation therapy, the same patient is scanned repetitively during a course of treatment; a natural question to ask is whether one can obtain the scatter distribution on the first day of treatment and then use the data for scatter correction in the subsequent scans on different days. To realize this scatter removal scheme, two technical pieces must be in place: (i) a strategy to obtain the scatter distribution in on-board CBCT imaging and (ii) a method to spatially match a prior scatter distribution with the on-treatment CBCT projection data for scatter subtraction. In this work, simple solutions to the two problems are provided. A partially blocked CBCT is used to extract the scatter distribution. The x-ray beam blocker has a strip pattern, such that a partial volume can still be accurately reconstructed and the whole-field scatter distribution can be estimated from the detected signals in the shadow regions using interpolation/extrapolation. In the subsequent scans, the patient transformation is determined using a rigid registration of the conventional CBCT and the prior partial CBCT. From the derived patient transformation, the measured scatter is then modified to adapt to the new on-treatment patient geometry for scatter correction. The proposed method is evaluated using physical experiments on a clinical CBCT system. On the Catphan©600 phantom, the errors in Hounsfield unit (HU) in the selected regions of interest are reduced from about 350 to below 50 HU; on an anthropomorphic phantom, the error is reduced from 15.7% to 5.4%. The proposed
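The shadow-region step above admits a compact sketch: in the shadow of the strip blocker only scatter reaches the detector, so one samples each row at the blocked columns and interpolates across the open columns. This 1-D, per-row linear interpolation is a simplification of the interpolation/extrapolation the paper describes.

```python
import numpy as np

def estimate_scatter_from_strips(projection, blocked_cols):
    """Estimate the whole-field scatter from a strip-blocked projection:
    sample each detector row at the blocked columns (scatter-only signal)
    and linearly interpolate across the open columns. A 1-D per-row
    simplification of the published interpolation/extrapolation step."""
    cols = np.arange(projection.shape[1])
    scatter = np.empty(projection.shape, dtype=float)
    for i, row in enumerate(projection):
        scatter[i] = np.interp(cols, blocked_cols, row[blocked_cols])
    return scatter
```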
Scatter correction for cone-beam CT in radiation therapy
Zhu Lei; Xie Yaoqin; Wang Jing; Xing Lei
2009-06-15
Cone-beam CT (CBCT) is being increasingly used in modern radiation therapy for patient setup and adaptive replanning. However, due to the large volume of x-ray illumination, scatter becomes a rather serious problem and is considered one of the fundamental limitations of CBCT image quality. Many scatter correction algorithms have been proposed in the literature, while a standard practical solution still remains elusive. In radiation therapy, the same patient is scanned repetitively during a course of treatment; a natural question to ask is whether one can obtain the scatter distribution on the first day of treatment and then use the data for scatter correction in the subsequent scans on different days. To realize this scatter removal scheme, two technical pieces must be in place: (i) a strategy to obtain the scatter distribution in on-board CBCT imaging and (ii) a method to spatially match a prior scatter distribution with the on-treatment CBCT projection data for scatter subtraction. In this work, simple solutions to the two problems are provided. A partially blocked CBCT is used to extract the scatter distribution. The x-ray beam blocker has a strip pattern, such that a partial volume can still be accurately reconstructed and the whole-field scatter distribution can be estimated from the detected signals in the shadow regions using interpolation/extrapolation. In the subsequent scans, the patient transformation is determined using a rigid registration of the conventional CBCT and the prior partial CBCT. From the derived patient transformation, the measured scatter is then modified to adapt to the new on-treatment patient geometry for scatter correction. The proposed method is evaluated using physical experiments on a clinical CBCT system. On the Catphan©600 phantom, the errors in Hounsfield unit (HU) in the selected regions of interest are reduced from about 350 to below 50 HU; on an anthropomorphic phantom, the error is reduced from 15.7% to 5.4%. The proposed method
Noise suppression in scatter correction for cone-beam CT
Zhu, Lei; Wang, Jing; Xing, Lei
2009-01-01
Scatter correction is crucial to the quality of reconstructed images in x-ray cone-beam computed tomography (CBCT). Most existing scatter correction methods assume smooth scatter distributions, so high-frequency scatter noise remains in the projection images even after a perfect scatter correction. In this paper, using a clinical CBCT system and a measurement-based scatter correction, the authors show that a scatter correction alone does not provide satisfactory image quality and that the loss of contrast-to-noise ratio (CNR) in the scatter corrected image may outweigh the benefit of scatter removal. To circumvent the problem and truly gain from scatter correction, an effective scatter noise suppression method must be in place. They analyze the noise properties in the projections after scatter correction and propose using a penalized weighted least-squares (PWLS) algorithm to reduce the noise in the reconstructed images. Experimental results on an evaluation phantom (Catphan©600) show that the proposed algorithm further reduces the reconstruction error in a scatter corrected image from 10.6% to 1.7% and increases the CNR by a factor of 3.6. Significant image quality improvement is also shown in the results on an anthropomorphic phantom, in which the global noise level is reduced and the local streaking artifacts around bones are suppressed. PMID:19378735
Accurate Development of Thermal Neutron Scattering Cross Section Libraries
Hawari, Ayman; Dunn, Michael
2014-06-10
The objective of this project is to develop a holistic (fundamental and accurate) approach for generating thermal neutron scattering cross section libraries for a collection of important neutron moderators and reflectors. The primary components of this approach are the physical accuracy and completeness of the generated data libraries. Consequently, for the first time, thermal neutron scattering cross section data libraries will be generated that are based on accurate theoretical models, that are carefully benchmarked against experimental and computational data, and that contain complete covariance information that can be used in propagating the data uncertainties through the various components of the nuclear design and execution process. To achieve this objective, computational and experimental investigations will be performed on a carefully selected subset of materials that play a key role in all stages of the nuclear fuel cycle.
An Accurate Temperature Correction Model for Thermocouple Hygrometers 1
Savage, Michael J.; Cass, Alfred; de Jager, James M.
1982-01-01
Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241
An accurate temperature correction model for thermocouple hygrometers.
Savage, M J; Cass, A; de Jager, J M
1982-02-01
Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38 degrees C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25 degrees C, if the calibration slopes are corrected for temperature.
Does the Taylor Spatial Frame Accurately Correct Tibial Deformities?
Segal, Kira; Ilizarov, Svetlana; Fragomen, Austin T.; Ilizarov, Gabriel
2009-01-01
Background Optimal leg alignment is the goal of tibial osteotomy. The Taylor Spatial Frame (TSF) and the Ilizarov method enable gradual realignment of angulation and translation in the coronal, sagittal, and axial planes, hence the term six-axis correction. Questions/purposes We asked whether this approach would allow precise correction of tibial deformities. Methods We retrospectively reviewed 102 patients (122 tibiae) with tibial deformities treated with percutaneous osteotomy and gradual correction with the TSF. The proximal osteotomy group was subdivided into two subgroups to distinguish those with an intentional overcorrection of the mechanical axis deviation (MAD). The minimum followup after frame removal was 10 months (average, 48 months; range, 10-98 months). Results In the proximal osteotomy group, patients with varus and valgus deformities for whom the goal of alignment was neutral or overcorrection experienced accurate correction of MAD. In the proximal tibia, the medial proximal tibial angle improved from 80° to 89° in patients with a varus deformity and from 96° to 85° in patients with a valgus deformity. In the middle osteotomy group, all patients had less than 5° coronal plane deformity and 15 of 17 patients had less than 5° sagittal plane deformity. In the distal osteotomy group, the lateral distal tibial angle improved from 77° to 86° in patients with a valgus deformity and from 101° to 90° in patients with a varus deformity. Conclusions Gradual correction of all tibial deformities with the TSF was accurate, with few complications. Level of Evidence Level IV, therapeutic study. See the Guidelines for Authors for a complete description of levels of evidence. PMID:19911244
Accurate documentation, correct coding, and compliance: it's your best defense!
Coles, T S; Babb, E F
1999-07-01
This article focuses on the need for physicians to maintain an awareness of regulatory policy and the law impacting the federal government's medical insurance programs, and to internalize and apply this knowledge in their practices. Basic information concerning selected fraud and abuse statutes and the civil monetary penalties and sanctions for noncompliance is discussed. The application of accurate documentation and correct coding principles, as well as the rationale for implementing an effective compliance plan in order to prevent fraud and abuse and/or minimize disciplinary action from government regulatory agencies, are emphasized.
Evaluation of simulation-based scatter correction for 3-D PET cardiac imaging
Watson, C.C.; Newport, D.; Casey, M.E.; Kemp, R.A. de; Beanlands, R.S.; Schmand, M.
1997-02-01
Quantitative imaging of the human thorax poses one of the most difficult challenges for three-dimensional (3-D) (septaless) positron emission tomography (PET), due to the strong attenuation of the annihilation radiation and the large contribution of scattered photons to the data. In [{sup 18}F] fluorodeoxyglucose (FDG) studies of the heart with the patient's arms in the field of view, the contribution of scattered events can exceed 50% of the total detected coincidences. Accurate correction for this scatter component is necessary for meaningful quantitative image analysis and tracer kinetic modeling. For this reason, the authors have implemented a single-scatter simulation technique for scatter correction in positron volume imaging. In this paper they describe this algorithm and present scatter correction results from human and chest phantom studies.
NASA Astrophysics Data System (ADS)
Sei, Alain
2016-10-01
The state of the art of atmospheric correction for moderate resolution and high resolution sensors is based on assuming that the surface reflectance at the bottom of the atmosphere is uniform. This assumption accounts for multiple scattering but ignores the contribution of neighboring pixels; that is, it ignores adjacency effects. Its great advantage, however, is to substantially reduce the computational cost of performing atmospheric correction and make the problem computationally tractable. In a recent paper (Sei, 2015), a computationally efficient method was introduced for the correction of adjacency effects through the use of fast FFT-based evaluations of singular integrals and the use of analytic continuation. It was shown that divergent Neumann series can be avoided and accurate results obtained for clear and turbid atmospheres. We analyze in this paper the error of the standard state-of-the-art Lambertian atmospheric correction method on Landsat imagery and compare it to our newly introduced method. We show that for high contrast scenes the state-of-the-art atmospheric correction yields much larger errors than our method.
Using BRDFs for accurate albedo calculations and adjacency effect corrections
Borel, C.C.; Gerstl, S.A.W.
1996-09-01
In this paper the authors discuss two uses of BRDFs in remote sensing: (1) in determining the clear sky top of the atmosphere (TOA) albedo, (2) in quantifying the effect of the BRDF on the adjacency point-spread function and on atmospheric corrections. The TOA spectral albedo is an important parameter retrieved by the Multi-angle Imaging Spectro-Radiometer (MISR). Its accuracy depends mainly on how well one can model the surface BRDF for many different situations. The authors present results from an algorithm which matches several semi-empirical functions to the nine MISR measured BRFs that are then numerically integrated to yield the clear sky TOA spectral albedo in four spectral channels. They show that absolute accuracies in the albedo of better than 1% are possible for the visible and better than 2% in the near infrared channels. Using a simplified extensive radiosity model, the authors show that the shape of the adjacency point-spread function (PSF) depends on the underlying surface BRDFs. The adjacency point-spread function at a given offset (x,y) from the center pixel is given by the integral of transmission-weighted products of BRDF and scattering phase function along the line of sight.
Scattering error corrections for in situ absorption and attenuation measurements.
McKee, David; Piskozub, Jacek; Brown, Ian
2008-11-24
Monte Carlo simulations are used to establish a weighting function that describes the collection of angular scattering for the WETLabs AC-9 reflecting tube absorption meter. The equivalent weighting function for the AC-9 attenuation sensor is found to be well approximated by a binary step function with photons scattered between zero and the collection half-width angle contributing to the scattering error and photons scattered at larger angles making zero contribution. A new scattering error correction procedure is developed that accounts for scattering collection artifacts in both absorption and attenuation measurements. The new correction method does not assume zero absorption in the near infrared (NIR), does not assume a wavelength independent scattering phase function, but does require simultaneous measurements of spectrally matched particulate backscattering. The new method is based on an iterative approach that assumes that the scattering phase function can be adequately modeled from estimates of particulate backscattering ratio and Fournier-Forand phase functions. It is applied to sets of in situ data representative of clear ocean water, moderately turbid coastal water and highly turbid coastal water. Initial results suggest significantly higher levels of attenuation and absorption than those obtained using previously published scattering error correction procedures. Scattering signals from each correction procedure have similar magnitudes but significant differences in spectral distribution are observed.
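The iterative character of such a correction can be sketched as a toy fixed-point iteration (a drastic simplification of the published procedure, which also updates the phase function from the particulate backscattering ratio). The scalar coefficients, the fixed collected-scattering weight `eps`, and the function name are all assumptions for illustration.

```python
def correct_absorption(a_meas, c_meas, eps=0.14, n_iter=20):
    """Illustrative fixed-point iteration: the measured absorption contains
    a fraction eps of the true scattering b = c - a, so solve
    a = a_meas - eps * (c_meas - a) by repeated substitution."""
    a = a_meas
    for _ in range(n_iter):
        b = c_meas - a           # current scattering estimate
        a = a_meas - eps * b     # remove the collected-scattering artifact
    return a

# a_meas = 0.5 1/m, c_meas = 2.0 1/m: converges to (0.5 - 0.14*2)/(1 - 0.14).
a_true = correct_absorption(0.5, 2.0)
```

Since `eps < 1`, the substitution contracts and converges geometrically to the fixed point.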
Accurate source location from waves scattered by surface topography
NASA Astrophysics Data System (ADS)
Wang, Nian; Shen, Yang; Flinders, Ashton; Zhang, Wei
2016-06-01
Accurate source locations of earthquakes and other seismic events are fundamental in seismology. The location accuracy is limited by several factors, including velocity models, which are often poorly known. In contrast, surface topography, the largest velocity contrast in the Earth, is often precisely mapped at the seismic wavelength (>100 m). In this study, we explore the use of P coda waves generated by scattering at surface topography to obtain high-resolution locations of near-surface seismic events. The Pacific Northwest region is chosen as an example to provide realistic topography. A grid search algorithm is combined with the 3-D strain Green's tensor database to improve search efficiency as well as the quality of hypocenter solutions. The strain Green's tensor is calculated using a 3-D collocated-grid finite difference method on curvilinear grids. Solutions in the search volume are obtained based on the least squares misfit between the "observed" and predicted P and P coda waves. The 95% confidence interval of the solution is provided as an a posteriori error estimation. For shallow events tested in the study, scattering is mainly due to topography in comparison with stochastic lateral velocity heterogeneity. The incorporation of P coda significantly improves solution accuracy and reduces solution uncertainty. The solution remains robust with wide ranges of random noises in data, unmodeled random velocity heterogeneities, and uncertainties in moment tensors. The method can be extended to locate pairs of sources in close proximity by differential waveforms using source-receiver reciprocity, further reducing errors caused by unmodeled velocity structures.
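The grid search over precomputed synthetics can be sketched as follows; the `locate` helper, the toy waveforms, and the candidate grid are hypothetical stand-ins for the strain Green's tensor database and the P plus P-coda misfit used in the study.

```python
import numpy as np

def locate(observed, synthetic_bank, grid_points):
    """Grid search: return the trial hypocenter whose precomputed synthetic
    waveform minimizes the least-squares misfit to the observed waveform."""
    misfits = [np.sum((observed - syn) ** 2) for syn in synthetic_bank]
    return grid_points[int(np.argmin(misfits))], misfits

# Toy decaying-oscillation waveform and three candidate locations; the
# second candidate's synthetic nearly matches the observation.
t = np.linspace(0.0, 1.0, 200)
wave = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)
bank = [0.8 * wave, wave + 0.01, 1.3 * wave]
loc, misfits = locate(wave, bank, [(0, 0, 1), (0, 0, 2), (0, 0, 3)])
```

In the study the misfit distribution over the grid also yields the 95% confidence region; here only the best-fitting point is returned.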
Study of multispectral convolution scatter correction in high resolution PET
Yao, R.; Lecomte, R.; Bentourkia, M.
1996-12-31
PET images acquired with a high-resolution scanner based on arrays of small discrete detectors are obtained at the cost of low sensitivity and increased detector scatter. It has been postulated that these limitations can be overcome by using enlarged discrimination windows to include more low-energy events and by developing more efficient energy-dependent methods to correct for scatter. In this work, we investigate one such method based on the frame-by-frame scatter correction of multispectral data. Images acquired in the conventional, broad, and multispectral window modes were processed by the stationary and nonstationary consecutive convolution scatter correction methods. Broad and multispectral window acquisition with a low energy threshold of 129 keV improved system sensitivity by up to 75% relative to the conventional window with a ~350 keV threshold. The degradation of image quality due to the added scatter events can be almost fully recovered by the subtraction-restoration scatter correction. The multispectral method was found to be more sensitive to the nonstationarity of scatter, and its performance was not as good as that of the broad window. It is concluded that new scatter degradation models and correction methods need to be established to take full advantage of multispectral data.
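The consecutive-convolution idea, estimating scatter as a scaled convolution of the current primary estimate and subtracting it, can be sketched in one dimension. The kernel, the scatter fraction `k`, and the iteration count are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def convolution_subtract(projection, kernel, k=0.3, n_iter=30):
    """Stationary consecutive-convolution scatter correction (sketch):
    repeatedly estimate scatter as k * (primary convolved with kernel) and
    subtract it from the measured projection, so the primary estimate
    converges to p satisfying p = projection - k * (p conv kernel)."""
    kernel = kernel / kernel.sum()
    primary = projection.copy()
    for _ in range(n_iter):
        scatter = k * np.convolve(primary, kernel, mode='same')
        primary = projection - scatter
    return primary

# Degenerate delta kernel: scatter is k * primary, so the fixed point is
# projection / (1 + k), which is easy to verify by hand.
proj = np.ones(10)
prim = convolution_subtract(proj, np.array([1.0]), k=0.3)
```

With a realistic broad kernel the same loop removes a spatially smeared scatter estimate instead of a pointwise one.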
Review and current status of SPECT scatter correction
NASA Astrophysics Data System (ADS)
Hutton, Brian F.; Buvat, Irène; Beekman, Freek J.
2011-07-01
Detection of scattered gamma quanta degrades image contrast and quantitative accuracy of single-photon emission computed tomography (SPECT) imaging. This paper reviews methods to characterize and model scatter in SPECT and correct for its image degrading effects, both for clinical and small animal SPECT. Traditionally scatter correction methods were limited in accuracy, noise properties and/or generality and were not very widely applied. For small animal SPECT, these approximate methods of correction are often sufficient since the fraction of detected scattered photons is small. This contrasts with patient imaging where better accuracy can lead to significant improvement of image quality. As a result, over the last two decades, several new and improved scatter correction methods have been developed, although often at the cost of increased complexity and computation time. In concert with (i) the increasing number of energy windows on modern SPECT systems and (ii) excellent attenuation maps provided in SPECT/CT, some of these methods give new opportunities to remove degrading effects of scatter in both standard and complex situations and therefore are a gateway to highly quantitative single- and multi-tracer molecular imaging with improved noise properties. Widespread implementation of such scatter correction methods, however, still requires significant effort.
Solving outside-axial-field-of-view scatter correction problem in PET via digital experimentation
NASA Astrophysics Data System (ADS)
Andreyev, Andriy; Zhu, Yang-Ming; Ye, Jinghan; Song, Xiyun; Hu, Zhiqiang
2016-03-01
Unaccounted scatter from unknown outside-axial-field-of-view (outside-AFOV) activity in PET is an important factor degrading image quality and quantitation. A resource-consuming and unpopular way to account for the outside-AFOV activity is to perform an additional PET/CT scan of the adjacent regions. In this work we investigate a solution to the outside-AFOV scatter problem that does not require a PET/CT scan of the adjacent regions. The main motivation for the proposed method is that the measured random-corrected prompt (RCP) sinogram in the background region surrounding the measured object contains only scattered events, originating from both inside- and outside-AFOV activity. In this method, the scatter correction simulation searches through many randomly chosen outside-AFOV activity estimates along with the known inside-AFOV activity, generating a plethora of scatter distribution sinograms. This digital experimentation iterates until a good match is found between a simulated scatter sinogram (which includes the supposed outside-AFOV activity) and the measured RCP sinogram in the background region. The combined scatter impact from inside- and outside-AFOV activity can then be used for scatter correction during the final image reconstruction phase. Preliminary results using measured phantom data indicate a successful phantom length estimate with the method and, therefore, an accurate outside-AFOV scatter estimate.
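The "digital experimentation" loop can be sketched with a toy scalar forward model; the uniform search range, the trial count, and the linear `simulate` stand-in for the Monte Carlo scatter simulation are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def match_outside_activity(measured_tail, simulate_tail, n_trials=200):
    """Randomly guess the outside-FOV activity level, simulate the scatter
    tail it would produce in the sinogram background region, and keep the
    guess whose simulated tail best matches the measured RCP tail."""
    best_guess, best_err = None, np.inf
    for _ in range(n_trials):
        guess = rng.uniform(0.0, 2.0)            # hypothetical activity scale
        err = np.sum((simulate_tail(guess) - measured_tail) ** 2)
        if err < best_err:
            best_guess, best_err = guess, err
    return best_guess

# Toy forward model: the background scatter tail scales with activity.
base_tail = np.array([0.1, 0.2, 0.3, 0.2, 0.1])
simulate = lambda a: a * base_tail
measured = simulate(1.25)
estimate = match_outside_activity(measured, simulate)
```

In practice each trial is a full scatter simulation, so the search strategy and trial budget dominate the cost.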
Dinelle, Katie; Cheng, Ju-Chieh; Shilov, Mikhail A.; Segars, William P.; Lidstone, Sarah C.; Blinder, Stephan; Rousset, Olivier G.; Vajihollahi, Hamid; Tsui, Benjamin M. W.; Wong, Dean F.; Sossi, Vesna
2010-01-01
With continuing improvements in spatial resolution of positron emission tomography (PET) scanners, small patient movements during PET imaging become a significant source of resolution degradation. This work develops and investigates a comprehensive formalism for accurate motion-compensated reconstruction which at the same time is very feasible in the context of high-resolution PET. In particular, this paper proposes an effective method to incorporate presence of scattered and random coincidences in the context of motion (which is similarly applicable to various other motion correction schemes). The overall reconstruction framework takes into consideration missing projection data which are not detected due to motion, and additionally, incorporates information from all detected events, including those which fall outside the field-of-view following motion correction. The proposed approach has been extensively validated using phantom experiments as well as realistic simulations of a new mathematical brain phantom developed in this work, and the results for a dynamic patient study are also presented. PMID:18672420
Coastal Zone Color Scanner atmospheric correction algorithm: multiple scattering effects.
Gordon, H R; Castaño, D J
1987-06-01
An analysis of the errors due to multiple scattering expected in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm is presented in detail. This was prompted by the observations of others that significant errors would be encountered if the present algorithm were applied to a hypothetical instrument possessing higher radiometric sensitivity than the present CZCS. This study provides CZCS users sufficient information with which to judge the efficacy of the current algorithm with the current sensor and enables them to estimate the impact of the algorithm-induced errors on their applications in a variety of situations. The greatest source of error is the assumption that the molecular and aerosol contributions to the total radiance observed at the sensor can be computed separately. This leads to the requirement that a value ε′(λ, λ₀) for the atmospheric correction parameter, which bears little resemblance to its theoretically meaningful counterpart, must usually be employed in the algorithm to obtain an accurate atmospheric correction. The behavior of ε′(λ, λ₀) with the aerosol optical thickness and aerosol phase function is thoroughly investigated through realistic modeling of radiative transfer in a stratified atmosphere over a Fresnel-reflecting ocean. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, allowing elucidation of the errors along typical CZCS scan lines; this is important since, in the normal application of the algorithm, it is assumed that the same value of ε′ can be used for an entire CZCS scene or at least for a reasonably large subscene. Two types of variation of ε′ are found in models for which it would be constant in the single-scattering approximation: (1) variation with scan angle in scenes in which a relatively large portion of the aerosol scattering phase function would be examined
Some physical factors influencing the accuracy of convolution scatter correction in SPECT.
Msaki, P; Axelsson, B; Larsson, S A
1989-03-01
The observed reduction of the scatter fraction (SF) close to the phantom surface indicates that scatter correction of such distributions has to rely on two distinct filter functions. Corrections based on a surface function produce accurate results in the superficial region, while the central distributions are substantially overestimated. Surface radioactive distributions introduce appreciable errors in the determination of central distributions when corrections are based on the central filter function. This function introduces a reduction of about 40% in the measured surface concentration.
NASA Astrophysics Data System (ADS)
Kurata, Tomohiro; Oda, Shigeto; Kawahira, Hiroshi; Haneishi, Hideaki
2016-12-01
We have previously proposed a method for estimating intravascular oxygen saturation (SO_2) from images obtained by sidestream dark-field (SDF) imaging (we call it SDF oximetry) and investigated its fundamental characteristics by Monte Carlo simulation. In this paper, we propose a correction method for scattering by the tissue and perform experiments with turbid phantoms, as well as Monte Carlo simulation experiments, to investigate the influence of tissue scattering in SDF imaging. In the estimation method, we use modified extinction coefficients of hemoglobin, called average extinction coefficients (AECs), to correct for the influence of the bandwidth of the illumination sources, the imaging camera characteristics, and the tissue scattering. We estimate the scattering coefficient of the tissue from the maximum slope of the pixel-value profile along a line perpendicular to the direction of the blood vessel in an SDF image and correct the AECs using this scattering coefficient. To evaluate the proposed method, we developed a trial SDF probe that obtains three-band images by switching multicolor light-emitting diodes and imaged turbid phantoms composed of agar powder, fat emulsion, and bovine-blood-filled glass tubes. We found that increased scattering by the phantom body decreases the AECs. The experimental results showed that using suitable AEC values leads to more accurate SO_2 estimation, and we confirmed that the proposed correction method improves the accuracy of the SO_2 estimation.
Method for measuring multiple scattering corrections between liquid scintillators
Verbeke, J. M.; Glenn, A. M.; Keefer, G. J.; Wurtz, R. E.
2016-04-11
In this study, a time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source, for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons multiple scattering. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.
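The time-of-flight classification underlying such a measurement can be sketched as follows: a coincidence whose inter-detector time difference matches a plausible neutron velocity is tagged as crosstalk (the same neutron scattering from one scintillator into another). The energy window and the simple window classifier are illustrative assumptions, not the authors' full analysis.

```python
import math

C = 2.998e8     # speed of light, m/s
M_N = 939.565   # neutron rest energy, MeV

def neutron_tof_ns(energy_mev, distance_m):
    """Nonrelativistic time of flight of a neutron with the given kinetic
    energy over the given distance, in nanoseconds."""
    v = C * math.sqrt(2.0 * energy_mev / M_N)
    return distance_m / v * 1e9

def crosstalk_fraction(delta_ts_ns, distance_m, e_lo=0.5, e_hi=5.0):
    """Fraction of coincidences whose inter-detector time difference is
    consistent with a neutron in [e_lo, e_hi] MeV travelling between the
    two scintillators (illustrative window classifier)."""
    t_fast = neutron_tof_ns(e_hi, distance_m)    # shortest plausible TOF
    t_slow = neutron_tof_ns(e_lo, distance_m)    # longest plausible TOF
    tagged = [t for t in delta_ts_ns if t_fast <= t <= t_slow]
    return len(tagged) / len(delta_ts_ns)

# Detectors 0.5 m apart: 2 ns looks like a gamma, 20 ns and 40 ns fall in
# the neutron window, 100 ns is too slow for the energy window.
frac = crosstalk_fraction([2.0, 20.0, 40.0, 100.0], 0.5)
```

Measured crosstalk fractions of this kind are what feed the correction to the Feynman point model.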
Dynamical correction of control laws for marine ships' accurate steering
NASA Astrophysics Data System (ADS)
Veremey, Evgeny I.
2014-06-01
The objective of this work is the analytical synthesis problem for the design of marine vehicle autopilots. Despite numerous known methods of solution, the problem remains complicated by the extensive set of dynamical conditions, requirements and restrictions that must be satisfied by the appropriate choice of a steering control law. The aim of this paper is to simplify the synthesis procedure while providing accurate steering with desirable dynamics of the control system. The approach proposed here is based on a special unified multipurpose control law structure that allows decoupling the synthesis into simpler particular optimization problems. In particular, this structure includes a dynamical corrector to support the desirable features of the vehicle's motion under the action of sea wave disturbances. As a result, a specialized new method for corrector design is proposed to provide accurate steering or a trade-off between accurate and economical steering of the ship. This method guarantees a certain flexibility of the control law with respect to the actual sailing environment; the corresponding tuning can be performed in real time onboard.
Correction of Rayleigh Scattering Effects in Cloud Optical Thickness Retrievals
NASA Technical Reports Server (NTRS)
Wang, Meng-Hua; King, Michael D.
1997-01-01
We present results that demonstrate the effects of Rayleigh scattering on the retrieval of cloud optical thickness at a visible wavelength (0.66 µm). The sensor-measured radiance at this wavelength is usually used to infer the cloud optical thickness remotely from aircraft or satellite instruments. For example, we find that without removing Rayleigh scattering effects, errors in the retrieved cloud optical thickness for a thin water cloud layer (τ = 2.0) range from 15 to 60%, depending on solar zenith angle and viewing geometry. For an optically thick cloud (τ = 10), on the other hand, errors can range from 10 to 60% for large solar zenith angles (≥60 deg) because of enhanced Rayleigh scattering. It is therefore particularly important to correct for Rayleigh scattering contributions to the reflected signal from a cloud layer (1) for thin clouds and (2) for large solar zenith angles and all clouds. On the basis of the single-scattering approximation, we propose an iterative method for effectively removing Rayleigh scattering contributions from the measured radiance signal in cloud optical thickness retrievals. The proposed correction algorithm works very well and can easily be incorporated into any cloud retrieval algorithm. The Rayleigh correction method is applicable to clouds at any pressure, provided that the cloud-top pressure is known to within ±100 hPa. With the Rayleigh correction, the errors in retrieved cloud optical thickness are usually reduced to within 3%. In cases of thin cloud layers and of thick clouds with large solar zenith angles, the errors are usually reduced by a factor of about 2 to over 10. The Rayleigh correction algorithm has been tested with simulations for realistic cloud optical and microphysical properties with different solar and viewing geometries. We apply the Rayleigh correction algorithm to the cloud optical thickness retrievals from experimental data obtained during the Atlantic
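The iterative structure of such a correction can be sketched with a toy reflectance model; the two-stream-like `cloud_reflectance` relation and the exponential transmission factor are invented for illustration and are not the authors' radiative transfer model.

```python
import math

def cloud_reflectance(tau):
    """Toy two-stream-like relation between cloud optical thickness and
    top-of-atmosphere reflectance (illustrative only)."""
    return tau / (tau + 2.0)

def invert_cloud(r):
    """Invert the toy reflectance relation for optical thickness."""
    return 2.0 * r / (1.0 - r)

def rayleigh_correct(r_meas, r_ray, n_iter=10):
    """Iterative Rayleigh removal (sketch): the Rayleigh contribution seen
    above a cloud shrinks as the cloud thickens, so alternate between
    subtracting the current Rayleigh estimate and re-retrieving tau."""
    tau = invert_cloud(r_meas)                    # first guess, no correction
    for _ in range(n_iter):
        r_corr = r_meas - r_ray * math.exp(-tau / 4.0)
        tau = invert_cloud(r_corr)
    return tau

# Synthetic scene with known truth tau = 2: the uncorrected inversion
# overestimates tau, and the iteration converges back to the truth.
tau_true = 2.0
r_meas = cloud_reflectance(tau_true) + 0.05 * math.exp(-tau_true / 4.0)
tau_hat = rayleigh_correct(r_meas, 0.05)
```

Because the Rayleigh term depends only weakly on τ, each iteration is strongly contracting and a few iterations suffice.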
Etch modeling for accurate full-chip process proximity correction
NASA Astrophysics Data System (ADS)
Beale, Daniel F.; Shiely, James P.
2005-05-01
The challenges of the 65 nm node and beyond require new formulations of the compact convolution models used in OPC. In addition to simulating more optical and resist effects, these models must accommodate pattern distortions due to etch which can no longer be treated as small perturbations on photo-lithographic effects. (Methods for combining optical and process modules while optimizing the speed/accuracy tradeoff were described in "Advanced Model Formulations for Optical and Process Proximity Correction", D. Beale et al, SPIE 2004.) In this paper, we evaluate new physics-based etch model formulations that differ from the convolution-based process models used previously. The new models are expressed within the compact modeling framework described by J. Stirniman et al. in SPIE, vol. 3051, p469, 1997, and thus can be used for high-speed process simulation during full-chip OPC.
Scattering correction through a space-variant blind deconvolution algorithm
NASA Astrophysics Data System (ADS)
Koberstein-Schwarz, Benno; Omlor, Lars; Schmitt-Manderbach, Tobias; Mappes, Timo; Ntziachristos, Vasilis
2016-09-01
Scattering within biological samples limits the imaging depth and the resolution in microscopy. We present a prior and regularization approach for blind deconvolution algorithms to correct for the influence of scattering and thereby increase imaging depth and resolution. The effect of the prior is demonstrated on a three-dimensional image stack of a zebrafish embryo captured with a selective plane illumination microscope. Blind deconvolution algorithms model the recorded image as a convolution between the distribution of fluorophores and a point spread function (PSF). Our prior uses image information from adjacent z-planes to estimate the unknown blur in tissue. The increased size of the PSF due to the cascading effect of scattering in deeper tissue is accounted for by a depth-adaptive regularizer model. In a zebrafish sample, we were able to extend the depth at which scattering begins to significantly degrade image quality by around 30 μm.
Correcting for interstellar scattering delay in high-precision pulsar timing: simulation results
Palliyaguru, Nipuni; McLaughlin, Maura; Stinebring, Daniel; Demorest, Paul; Jones, Glenn
2015-12-20
Light travel time changes due to gravitational waves (GWs) may be detected within the next decade through precision timing of millisecond pulsars. Removal of frequency-dependent interstellar medium (ISM) delays due to dispersion and scattering is a key issue in the detection process. Current timing algorithms routinely correct pulse times of arrival (TOAs) for time-variable delays due to cold plasma dispersion. However, none of the major pulsar timing groups correct for delays due to scattering from multi-path propagation in the ISM. Scattering introduces a frequency-dependent phase change in the signal that results in pulse broadening and arrival time delays. Any method to correct the TOA for interstellar propagation effects must be based on multi-frequency measurements that can effectively separate dispersion and scattering delay terms from frequency-independent perturbations such as those due to a GW. Cyclic spectroscopy, first described in an astronomical context by Demorest (2011), is a potentially powerful tool to assist in this multi-frequency decomposition. As a step toward a more comprehensive ISM propagation delay correction, we demonstrate through a simulation that we can accurately recover impulse response functions (IRFs), such as those that would be introduced by multi-path scattering, with a realistic signal-to-noise ratio (S/N). We demonstrate that timing precision is improved when scatter-corrected TOAs are used, under the assumptions of a high S/N and highly scattered signal. We also show that the effect of pulse-to-pulse “jitter” is not a serious problem for IRF reconstruction, at least for jitter levels comparable to those observed in several bright pulsars.
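The recovery of an impulse response function from a scattered pulse can be sketched with plain Wiener-style Fourier deconvolution (cyclic spectroscopy achieves this with phase-coherent cyclic spectra; this toy only illustrates the deconvolution step). The Gaussian pulse template, exponential IRF, and regularization constant are assumptions.

```python
import numpy as np

def recover_irf(observed, template, eps=1e-3):
    """Recover a scattering impulse response function by regularized
    (Wiener-style) Fourier deconvolution of the observed pulse with the
    intrinsic pulse template."""
    O, T = np.fft.fft(observed), np.fft.fft(template)
    H = O * np.conj(T) / (np.abs(T) ** 2 + eps)
    return np.real(np.fft.ifft(H))

# Toy: Gaussian intrinsic pulse convolved with a one-sided exponential IRF,
# the canonical shape of a thin-screen scattering tail.
n = 256
t = np.arange(n)
template = np.exp(-0.5 * ((t - 40) / 4.0) ** 2)
irf_true = np.exp(-t / 10.0)
irf_true /= irf_true.sum()
observed = np.real(np.fft.ifft(np.fft.fft(template) * np.fft.fft(irf_true)))
irf_hat = recover_irf(observed, template)
```

Reconvolving the recovered IRF with the template reproduces the observed pulse, which is the consistency check a timing pipeline would apply before shifting TOAs.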
Radiative corrections to polarization observables in electron-proton scattering
NASA Astrophysics Data System (ADS)
Borisyuk, Dmitry; Kobushkin, Alexander
2014-08-01
We consider radiative corrections to polarization observables in elastic electron-proton scattering, in particular for the polarization transfer measurements of the proton form factor ratio μG_E/G_M. The corrections are of two types: two-photon exchange (TPE) and bremsstrahlung (BS); in the present work we pay special attention to the latter. Assuming a small missing energy or missing mass cutoff, the correction can be represented in a model-independent form, with both electron and proton radiation taken into account. Numerical calculations show that the contribution of the proton radiation is not negligible. Overall, at high Q² and energies, the total correction to μG_E/G_M grows but is dominated by TPE. At low energies both TPE and BS may be significant; the latter amounts to ~0.01 for some reasonable cutoff choices.
Laurette, I; Zeng, G L; Welch, A; Christian, P E; Gullberg, G T
2000-11-01
The qualitative and quantitative accuracy of SPECT images is degraded by physical factors of attenuation, Compton scatter and spatially varying collimator geometric response. This paper presents a 3D ray-tracing technique for modelling attenuation, scatter and geometric response for SPECT imaging in an inhomogeneous attenuating medium. The model is incorporated into a three-dimensional projector-backprojector and used with the maximum-likelihood expectation-maximization algorithm for reconstruction of parallel-beam data. A transmission map is used to define the inhomogeneous attenuating and scattering object being imaged. The attenuation map defines the probability of photon attenuation between the source and the scattering site, the scattering angle at the scattering site and the probability of attenuation of the scattered photon between the scattering site and the detector. The probability of a photon being scattered through a given angle and being detected in the emission energy window is approximated using a Gaussian function. The parameters of this Gaussian function are determined using physical measurements of parallel-beam scatter line spread functions from a non-uniformly attenuating phantom. The 3D ray-tracing scatter projector-backprojector produces the scatter and primary components. Then, a 3D ray-tracing projector-backprojector is used to model the geometric response of the collimator. From Monte Carlo and physical phantom experiments, it is shown that the best results are obtained by simultaneously correcting attenuation, scatter and geometric response, compared with results obtained with only one or two of the three corrections. It is also shown that a 3D scatter model is more accurate than a 2D model. A transmission map is useful for obtaining measurements of attenuation and scatter in SPECT data, which can be used together with a model of the geometric response of the collimator to obtain corrected images with quantitative and diagnostically
NLO QCD corrections to graviton induced deep inelastic scattering
NASA Astrophysics Data System (ADS)
Stirling, W. J.; Vryonidou, E.
2011-06-01
We consider Next-to-Leading-Order QCD corrections to ADD graviton exchange relevant for Deep Inelastic Scattering experiments. We calculate the relevant NLO structure functions by calculating the virtual and real corrections for a set of graviton interaction diagrams, demonstrating the expected cancellation of the UV and IR divergences. We compare the NLO and LO results at the centre-of-mass energy relevant to HERA experiments as well as for the proposed higher energy lepton-proton collider, LHeC, which has a higher fundamental scale reach.
Kokhanovsky, Alexander A
2007-04-01
Analytical equations for the diffused scattered light correction factor of Sun photometers are derived and analyzed. It is shown that corrections are weakly dependent on the atmospheric optical thickness. They are influenced mostly by the size of aerosol particles encountered by sunlight on its way to a Sun photometer. In addition, the accuracy of the small-angle approximation used in the work is studied with numerical calculations based on the exact radiative transfer equation.
Bootsma, G. J.; Verhaegen, F.; Jaffray, D. A.
2015-01-15
suitable GOF metric with strong correlation with the actual error of the scatter fit, S_F. Fitting the scatter distribution to a limited sum of sine and cosine functions using a low-pass filtered fast Fourier transform provided a computationally efficient and accurate fit. The CMCF algorithm reduces the number of photon histories required by over four orders of magnitude. The simulated experiments showed that using a compensator reduced the computational time by a factor of between 1.5 and 1.75. The scatter estimates for the simulated and measured data were computed in 35–93 s and 114–122 s, respectively, using 16 Intel Xeon cores (3.0 GHz). The CMCF scatter correction improved the contrast-to-noise ratio by 10%–50% and reduced the reconstruction error to under 3% for the simulated phantoms. Conclusions: The novel CMCF algorithm significantly reduces the computation time required to estimate the scatter distribution by reducing the statistical noise in the MC scatter estimate and limiting the number of projection angles that must be simulated. Using the scatter estimate provided by the CMCF algorithm to correct both simulated and real projection data showed improved reconstruction image quality.
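The low-pass filtered FFT fit, representing the smooth scatter distribution by a limited sum of sines and cosines, can be sketched in one dimension; the mode count and the synthetic noisy profile are illustrative assumptions.

```python
import numpy as np

def lowpass_fit(noisy_scatter, n_modes=4):
    """Fit a noisy Monte Carlo scatter profile with a limited sum of sines
    and cosines by keeping only the lowest Fourier modes of its FFT."""
    F = np.fft.rfft(noisy_scatter)
    F[n_modes:] = 0.0                 # keep DC plus the first harmonics
    return np.fft.irfft(F, n=len(noisy_scatter))

# Smooth underlying scatter profile plus noise, standing in for a sparse
# Monte Carlo estimate computed from few photon histories.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
truth = 100.0 + 30.0 * np.cos(x) + 10.0 * np.sin(2 * x)
noisy = truth + rng.normal(0.0, 5.0, size=x.size)
fit = lowpass_fit(noisy)
```

Because scatter is spatially smooth, discarding high-frequency modes removes mostly statistical noise, which is why so few photon histories suffice.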
A model-based scatter artifacts correction for cone beam CT
Zhao, Wei; Vernekohl, Don; Zhu, Jun; Wang, Luyao; Xing, Lei
2016-01-01
Purpose: Due to the increased axial coverage of multislice computed tomography (CT) and the introduction of flat detectors, the size of x-ray illumination fields has grown dramatically, causing an increase in scatter radiation. For CT imaging, scatter is a significant issue that introduces shading artifacts, streaks, and reduced contrast and Hounsfield unit (HU) accuracy. The purpose of this work is to provide a fast and accurate scatter artifact correction algorithm for cone beam CT (CBCT) imaging. Methods: The method starts with an estimation of coarse scatter profiles for a set of CBCT data in either the image domain or the projection domain. A denoising algorithm designed specifically for Poisson signals is then applied to derive the final scatter distribution. Qualitative and quantitative evaluations were performed using thorax and abdomen phantoms with Monte Carlo (MC) simulations, experimental Catphan phantom data, and in vivo human data acquired for clinical image-guided radiation therapy. Scatter correction in both the projection domain and the image domain was conducted, and the influences of the segmentation method, mismatched attenuation coefficients, and spectrum model, as well as parameter selection, were also investigated. Results: Results show that the proposed algorithm can significantly reduce scatter artifacts and recover the correct HU in either the projection domain or the image domain. For the MC thorax phantom study, four-component segmentation yields the best results, while the results of three-component segmentation are still acceptable. The parameters (iteration number K and weight β) affect the accuracy of the scatter correction, and the results improve as K and β increase. It was found that variations in attenuation coefficient accuracy only slightly impact the performance of the proposed processing. For the Catphan phantom data, the mean value over all pixels in the residual image is reduced from −21.8 to −0.2 HU and 0.7 HU for projection
Correction of optical absorption and scattering variations in laser speckle rheology measurements
Hajjarian, Zeinab; Nadkarni, Seemantini K.
2014-01-01
Laser Speckle Rheology (LSR) is an optical technique to evaluate the viscoelastic properties by analyzing the temporal fluctuations of backscattered speckle patterns. Variations of optical absorption and reduced scattering coefficients further modulate speckle fluctuations, posing a critical challenge for quantitative evaluation of viscoelasticity. We compare and contrast two different approaches applicable for correcting and isolating the collective influence of absorption and scattering, to accurately measure mechanical properties. Our results indicate that the numerical approach of Monte-Carlo ray tracing (MCRT) reliably compensates for any arbitrary optical variations. When scattering dominates absorption, yet absorption is non-negligible, diffusing wave spectroscopy (DWS) formalisms perform similar to MCRT, superseding other analytical compensation approaches such as Telegrapher equation. The computational convenience of DWS greatly simplifies the extraction of viscoelastic properties from LSR measurements in a number of chemical, industrial, and biomedical applications. PMID:24663983
Electroweak radiative corrections to polarized Møller scattering asymmetries
NASA Astrophysics Data System (ADS)
Czarnecki, Andrzej; Marciano, William J.
1996-02-01
One loop electroweak radiative corrections to left-right parity-violating Møller scattering (e⁻e⁻ → e⁻e⁻) asymmetries are presented. They reduce the standard model (tree level) prediction by 40 ± 3%, where the main shift and uncertainty stem from hadronic vacuum polarization loops. A similar reduction also occurs for the electron-electron atomic parity-violating interaction. That effect can be attributed to an increase of sin²θ_W(q²) by 3% in running from q² = m_Z² to 0. The sensitivity of the asymmetry to "new physics" is also discussed.
Correction for patient table-induced scattered radiation in cone-beam computed tomography (CBCT)
Sun Mingshan; Nagy, Tamas; Virshup, Gary; Partain, Larry; Oelhafen, Markus; Star-Lack, Josh
2011-04-15
Purpose: In image-guided radiotherapy, an artifact typically seen in axial slices of x-ray cone-beam computed tomography (CBCT) reconstructions is a dark region or "black hole" situated below the scan isocenter. The authors trace the cause of the artifact to scattered radiation produced by radiotherapy patient tabletops and show it is linked to the use of the offset-detector acquisition mode to enlarge the imaging field-of-view. The authors present a hybrid scatter kernel superposition (SKS) algorithm to correct for scatter from both the object-of-interest and the tabletop. Methods: Monte Carlo simulations and phantom experiments were first performed to identify the source of the black hole artifact. For correction, a SKS algorithm was developed that uses separate kernels to estimate scatter from the patient tabletop and the object-of-interest. Each projection is divided into two regions, one defined by the shadow cast by the tabletop on the imager and one defined by the unshadowed region. The region not shadowed by the tabletop is processed using the recently developed fast adaptive scatter kernel superposition (fASKS) method, which employs asymmetric kernels that best model scatter transport through bodylike objects. The shadowed region is convolved with a combination of slab-derived symmetric SKS kernels and asymmetric fASKS kernels. The composition of the hybrid kernels is projection-angle-dependent. To test the algorithm, pelvis phantom and in vivo data were acquired using a CBCT test stand, a Varian Acuity simulator, and a Varian On-Board Imager, all of which have similar geometries and components. Artifact intensities and Hounsfield unit (HU) accuracies in the reconstructions were assessed before and after the correction. Results: The hybrid kernel algorithm provided effective correction and produced substantially better scatter estimates than the symmetric SKS or asymmetric fASKS methods alone. HU nonuniformities in the reconstructed pelvis phantom were
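The scatter kernel superposition family of methods referenced above estimates scatter by convolving a scatter-source term derived from each projection with a pre-computed spread kernel and subtracting the result. A minimal sketch of one symmetric-kernel SKS step, with an illustrative Gaussian kernel and a placeholder amplitude model (the published methods use slab-derived, thickness-dependent kernels):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalized 2-D Gaussian scatter-spread kernel (illustrative stand-in)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def sks_scatter_estimate(projection, kernel, amplitude_scale=0.3):
    """One SKS step: scatter sources (here simply a scaled copy of the
    projection, a placeholder for the thickness-dependent amplitudes of
    real SKS implementations) are superposed via FFT convolution."""
    amplitude = amplitude_scale * projection
    s = kernel.shape[0]
    # Zero-padded linear convolution via FFT, cropped back to 'same' size.
    shape = (projection.shape[0] + s - 1, projection.shape[1] + s - 1)
    f = np.fft.rfft2(amplitude, shape) * np.fft.rfft2(kernel, shape)
    full = np.fft.irfft2(f, shape)
    lo = s // 2
    return full[lo:lo + projection.shape[0], lo:lo + projection.shape[1]]
```

For a uniform projection and a normalized kernel, the interior of the scatter estimate equals the amplitude scale, which makes the sketch easy to sanity-check; subtracting the estimate from the measured projection yields the primary signal.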
Reitz, Irmtraud; Hesse, Bernd-Michael; Nill, Simeon; Tücking, Thomas; Oelfke, Uwe
2009-01-01
The problem of the enormous amount of scattered radiation in kV CBCT (kilovoltage cone beam computed tomography) is addressed. Scatter causes undesirable streak and cup artifacts and results in a quantitative inaccuracy of reconstructed CT numbers, so that an accurate dose calculation might be impossible. Image contrast is also significantly reduced. Therefore, we checked whether an appropriate implementation of the fast iterative scatter correction algorithm we have developed for MV (megavoltage) CBCT reduces the scatter contribution in kV CBCT as well. This scatter correction method is based on a superposition of pre-calculated Monte Carlo generated pencil beam scatter kernels. The algorithm requires only a system calibration by measuring homogeneous slab phantoms with known water-equivalent thicknesses. In this study we compare scatter corrected CBCT images of several phantoms to the fan beam CT images acquired with a reduced cone angle (a slice thickness of 14 mm in the isocenter) at the same system. Additional measurements at a different CBCT system were made (different energy spectrum and phantom-to-detector distance) and a first order approach of a fast beam hardening correction is introduced. The observed image quality of the scatter corrected CBCT images is comparable concerning resolution, noise and contrast-to-noise ratio to the images acquired in fan beam geometry. Compared to the CBCT without any corrections, the contrast of the contrast-and-resolution phantom with scatter correction and additional beam hardening correction is improved by a factor of about 1.5. The reconstructed attenuation coefficients and the CT numbers of the scatter corrected CBCT images are close to the values of the images acquired in fan beam geometry for the most pronounced tissue types. Only for extremely dense tissue types like cortical bone do we see a difference in CT numbers of 5.2%, which can be improved to 4.4% with the additional beam hardening correction. Cupping is
NASA Technical Reports Server (NTRS)
Jefferies, S. M.; Duvall, T. L., Jr.
1991-01-01
A measurement of the intensity distribution in an image of the solar disk will be corrupted by a spatial redistribution of the light that is caused by the earth's atmosphere and the observing instrument. A simple correction method is introduced here that is applicable for solar p-mode intensity observations obtained over a period of time in which there is a significant change in the scattering component of the point spread function. The method circumvents the problems incurred with an accurate determination of the spatial point spread function and its subsequent deconvolution from the observations. The method only corrects the spherical harmonic coefficients that represent the spatial frequencies present in the image and does not correct the image itself.
NASA Astrophysics Data System (ADS)
Egel, Amos; Gomard, Guillaume; Kettlitz, Siegfried W.; Lemmer, Uli
2017-02-01
We present a numerical strategy for the accurate simulation of light extraction from organic light emitting diodes (OLEDs) comprising an internal nano-particle based scattering layer. On the one hand, the light emission and propagation through the OLED thin film system (including the scattering layer) is treated by means of rigorous wave optics calculations using the T-matrix formalism. On the other hand, the propagation through the substrate is modeled in a ray optics approach. The results from the wave optics calculations enter in terms of the initial substrate radiation pattern and the bidirectional reflectivity distribution of the OLED stack with scattering layer. In order to correct for the truncation error due to a finite number of particles in the simulations, we extrapolate the results to infinitely extended scattering layers. As an application example, we estimate the optimal particle filling fraction for an internal scattering layer in a realistic OLED geometry. The presented treatment is designed to emerge from electromagnetic theory with as few additional assumptions as possible. It could thus serve as a baseline to validate faster but approximate simulation approaches.
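The truncation-error correction described above extrapolates finite-particle-number simulation results to an infinitely extended scattering layer. A sketch of one plausible realization, fitting the simulated quantity against 1/N and reading off the intercept as the N → ∞ estimate (the 1/N fit variable is an assumption of this sketch, not stated in the abstract):

```python
import numpy as np

def extrapolate_to_infinite_layer(n_particles, values):
    """Fit the simulated quantity (e.g. outcoupling efficiency) against
    1/N over several particle counts N and return the intercept, i.e.
    the estimate for an infinitely extended scattering layer."""
    x = 1.0 / np.asarray(n_particles, dtype=float)
    slope, intercept = np.polyfit(x, np.asarray(values, dtype=float), 1)
    return intercept
```

With synthetic data of the form v(N) = v_inf + c/N, the fit recovers v_inf exactly, which is a convenient self-test of the extrapolation.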
Aleksejevs, Aleksandrs; Barkanova, Svetlana; Ilyichev, Alexander; Zykunov, Vladimir
2010-11-01
We perform updated and detailed calculations of the complete next-to-leading order set of electroweak radiative corrections to parity-violating e⁻e⁻ → e⁻e⁻(γ) scattering asymmetries at energies relevant for the ultraprecise Møller experiment to be performed at JLab. Our numerical results are presented for a range of experimental cuts and the relative importance of various contributions is analyzed. We also provide very compact expressions, analytically free from nonphysical parameters, and show them to be valid for fast, yet accurate estimations.
NASA Astrophysics Data System (ADS)
Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.
2016-03-01
SMARTIES calculates the optical properties of oblate and prolate spheroidal particles, with capabilities and ease of use comparable to Mie theory for spheres. This suite of MATLAB codes provides a fully documented implementation of an improved T-matrix algorithm for the theoretical modelling of electromagnetic scattering by particles of spheroidal shape. Included are scripts that cover a range of scattering problems relevant to nanophotonics and plasmonics, including calculation of far-field scattering and absorption cross-sections for fixed incidence orientation, orientation-averaged cross-sections and scattering matrix, surface-field calculations as well as near-fields, wavelength-dependent near-field and far-field properties, and access to lower-level functions implementing the T-matrix calculations, including the T-matrix elements, which may be calculated more accurately than with competing codes.
NASA Astrophysics Data System (ADS)
Robinson, Andrew P.; Tipping, Jill; Cullen, David M.; Hamilton, David
2016-07-01
Accurate activity quantification is the foundation for all methods of radiation dosimetry for molecular radiotherapy (MRT). The requirements for patient-specific dosimetry using single photon emission computed tomography (SPECT) are challenging, particularly with respect to scatter correction. In this paper data from phantom studies, combined with results from a fully validated Monte Carlo (MC) SPECT camera simulation, are used to investigate the influence of the triple energy window (TEW) scatter correction on SPECT activity quantification for ¹⁷⁷Lu MRT. Results from phantom data show that: (1) activity quantification for the total counts in the SPECT field-of-view demonstrates a significant overestimation in total activity recovery when TEW scatter correction is applied at low activities (≤ 200 MBq); (2) applying the TEW scatter correction to activity quantification within a volume-of-interest with no background activity provides minimal benefit; (3) in the case of activity distributions with background activity, an overestimation of recovered activity of up to 30% is observed when using the TEW scatter correction. Data from MC simulation were used to perform a full analysis of the composition of events in a clinically reconstructed volume of interest. This allowed, for the first time, the separation of the relative contributions of partial volume effects (PVE) and inaccuracies in TEW scatter compensation to the observed overestimation of activity recovery. It is shown that, even with perfect partial volume compensation, TEW scatter correction can overestimate activity recovery by up to 11%. MC data are used to demonstrate that even a localized and optimized isotope-specific TEW correction cannot reflect a patient-specific activity distribution without prior knowledge of the complete activity distribution. This highlights the important role of MC simulation in SPECT activity quantification.
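The TEW estimator studied above is the standard trapezoidal rule: counts in narrow windows flanking the photopeak approximate the scatter inside the peak window. A minimal per-bin sketch (window widths in the examples are illustrative):

```python
def tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_peak):
    """Triple-energy-window (TEW) scatter estimate for one pixel/bin.

    c_lower, c_upper : counts in the narrow windows below/above the peak
    w_lower, w_upper : widths (keV) of those windows
    w_peak           : width (keV) of the photopeak window
    Returns the trapezoidal estimate of scatter counts in the peak window.
    """
    return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

def tew_correct(c_peak, c_lower, c_upper, w_lower, w_upper, w_peak):
    """Scatter-corrected photopeak counts, clamped at zero. At low count
    levels the noisy estimate can exceed the measured counts; clamping
    then biases the result upward, one plausible contributor to the
    low-activity overestimation noted in the abstract."""
    s = tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_peak)
    return max(c_peak - s, 0.0)
```

For example, with 10 and 6 counts in 2 keV side windows and a 20 keV peak window, the scatter estimate is (5 + 3) x 10 = 80 counts.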
Single-scan scatter correction for cone-beam CT using a stationary beam blocker: a preliminary study
NASA Astrophysics Data System (ADS)
Niu, Tianye; Zhu, Lei
2011-03-01
The performance of cone-beam CT (CBCT) is greatly limited by scatter artifacts. The existing measurement-based methods have promising advantages as a standard scatter correction solution, except that they currently require multiple scans or moving the beam blocker during data acquisition to compensate for the missing primary data. These approaches are therefore impractical in clinical applications. In this work, we propose a new measurement-based scatter correction method to achieve accurate reconstruction with one single scan and a stationary beam blocker, two seemingly incompatible features which enable simple and effective scatter correction without increase of scan time or patient dose. Based on CT reconstruction theory, we distribute the blocked areas over one projection where primary signals are considered to be redundant in a full scan. The CT image quality is not degraded even with primary loss. Scatter is accurately estimated by interpolation and scatter-corrected CT images are obtained using an FDK-based reconstruction. In a Monte Carlo simulation study, we first optimize the beam blocker geometry using projections on the Shepp-Logan phantom and then carry out a complete simulation of a CBCT scan on a water phantom. With the scatter-to-primary ratio around 1.0, our method reduces the CT number error from 293 to 2.9 Hounsfield units (HU) around the phantom center. The proposed approach is further evaluated on a CBCT tabletop system. On the Catphan©600 phantom, the reconstruction error is reduced from 202 to 10 HU in the selected region of interest after the proposed correction.
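The single-scan measurement-based idea above relies on the fact that, in the blocker's shadow, the detector records scatter only, and scatter varies smoothly enough to interpolate across the open region. A 1-D sketch of that estimate-and-subtract step (the blocker layout and the use of linear interpolation are illustrative choices):

```python
import numpy as np

def blocker_scatter_correction(projection, blocked_mask):
    """Measurement-based scatter correction sketch for one detector row.

    projection   : measured signal (primary + scatter in open pixels,
                   scatter only in blocked pixels)
    blocked_mask : boolean array, True where the blocker casts a shadow
    Returns (primary_estimate, scatter_estimate).
    """
    x = np.arange(projection.size)
    # In the shadow the detected signal is a direct scatter sample.
    xs, ys = x[blocked_mask], projection[blocked_mask]
    # Scatter is spatially smooth, so simple interpolation suffices.
    scatter = np.interp(x, xs, ys)
    primary = np.clip(projection - scatter, 0.0, None)
    return primary, scatter
```

With a flat scatter floor of 2 under a primary of 5, sampling the blocked pixels and interpolating recovers both components exactly, which is the idealized behavior the method exploits.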
Electroweak radiative corrections to neutrino-nucleon scattering
NASA Astrophysics Data System (ADS)
Park, Kwangwoo
The main subject of this thesis is to study the impact of electroweak O(α) corrections on neutrino-nucleon scattering processes, in particular on the extraction of electroweak parameters at the NuTeV experiment. The Standard Model (SM) represents the best current understanding of electroweak and strong interactions of elementary particles. In recent years it has been impressively confirmed experimentally through the precise determination of W and Z boson properties at the CERN LEP and the Stanford Linear e⁺e⁻ colliders, and the discovery of the top quark at the Fermilab Tevatron pp̄ collider. The W boson mass (M_W) is one of the fundamental parameters in electroweak theory. A precise measurement of M_W does not only provide a further precisely known SM input parameter, but significantly improves the indirect limit on the Higgs-boson mass obtained by comparing SM predictions with electroweak precision data. M_W is measured directly at the CERN LEP2 e⁺e⁻ and the Fermilab Tevatron pp̄ colliders. A measurement of M_W can also be extracted from a measurement of the sine squared of the weak mixing angle, i.e. sin²θ_W, via the well-known relation between the W and Z boson masses, M_W² = M_Z²(1 − sin²θ_W). The NuTeV collaboration [20] extracts sin²θ_W, and thus M_W, from the ratio of neutral- and charged-current neutrino and anti-neutrino cross sections. Their result differs from direct measurements performed at LEP2 and the Fermilab Tevatron by about 3σ. Much effort, both experimental and theoretical, has gone into understanding this discrepancy. These efforts include QCD corrections, parton distribution functions, and nuclear structure [21]. However, the effect of electroweak radiative corrections has not been fully studied yet. In the extraction of M_W from NuTeV data, only part of the electroweak corrections have been included [20]. Although the complete calculation of these corrections is available in [17] and [18], their impact on the NuTeV measurement of M_W
Jeong, Hyunjo; Zhang, Shuzeng; Li, Xiongbing; Barnard, Dan
2015-09-15
The accurate measurement of the acoustic nonlinearity parameter β for fluids or solids generally requires making corrections for diffraction effects due to the finite-size geometry of the transmitter and receiver. These effects are well known in linear acoustics, while those for second harmonic waves have not been well addressed and therefore not properly considered in previous studies. In this work, we explicitly define the attenuation and diffraction corrections using the multi-Gaussian beam (MGB) equations which were developed from the quasilinear solutions of the KZK equation. The effects of making these corrections are examined through the simulation of β determination in water. Diffraction corrections are found to have more significant effects than attenuation corrections, and the β values of water can be estimated experimentally with less than 5% error when the exact second harmonic diffraction corrections are used; the attenuation correction effects are negligible given the linear frequency dependence of the attenuation coefficients, α₂ ≃ 2α₁.
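In the quasilinear plane-wave limit, the nonlinearity parameter follows from the fundamental and second-harmonic amplitudes as β = 8A₂/(k²zA₁²), and diffraction and attenuation enter as correction factors on this estimate. A schematic sketch under that simplification (the actual MGB-derived corrections in the paper are more involved than the scalar factors used here):

```python
import math

def beta_estimate(a1, a2, freq, distance, c0,
                  diff_corr=1.0, atten_corr=1.0):
    """Plane-wave quasilinear estimate of the acoustic nonlinearity
    parameter beta, with multiplicative diffraction/attenuation
    correction factors (schematic placeholders for the MGB-based
    corrections defined in the paper).

    a1, a2   : fundamental and second-harmonic displacement amplitudes (m)
    freq     : drive frequency (Hz)
    distance : propagation distance (m)
    c0       : small-signal sound speed (m/s)
    """
    k = 2.0 * math.pi * freq / c0            # fundamental wavenumber
    beta_raw = 8.0 * a2 / (k**2 * distance * a1**2)
    return beta_raw * diff_corr * atten_corr
```

The abstract's central point maps onto the two factors: `diff_corr` moves the estimate substantially, while `atten_corr` stays near 1 when α₂ ≃ 2α₁ holds.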
Semenov, Alexander; Babikov, Dmitri
2014-01-16
For the computational treatment of rotationally inelastic scattering of molecules, we propose to use the mixed quantum/classical theory, MQCT. The old idea of treating translational motion classically, while quantum mechanics is used for rotational degrees of freedom, is developed to a new level and is applied to Na + N2 collisions in a broad range of energies. Comparison with full-quantum calculations shows that MQCT accurately reproduces all, even minor, features of the energy dependence of cross sections, except scattering resonances at very low energies. The remarkable success of MQCT opens up wide opportunities for computational predictions of inelastic scattering cross sections at higher temperatures and/or for polyatomic molecules and heavier quenchers, which is computationally close to impossible within the full-quantum framework.
Accurate self-correction of errors in long reads using de Bruijn graphs
Walve, Riku; Rivals, Eric; Ukkonen, Esko
2017-01-01
Motivation: New long read sequencing technologies, like PacBio SMRT and Oxford NanoPore, can produce sequencing reads up to 50 000 bp long but with an error rate of at least 15%. Reducing the error rate is necessary for subsequent utilization of the reads in, e.g., de novo genome assembly. The error correction problem has been tackled either by aligning the long reads against each other or by a hybrid approach that uses the more accurate short reads produced by second generation sequencing technologies to correct the long reads. Results: We present an error correction method that uses long reads only. The method consists of two phases: first, we use an iterative alignment-free correction method based on de Bruijn graphs with increasing length of k-mers, and second, the corrected reads are further polished using long-distance dependencies that are found using multiple alignments. According to our experiments, the proposed method is the most accurate one relying on long reads only for read sets with high coverage. Furthermore, when the coverage of the read set is at least 75×, the throughput of the new method is at least 20% higher. Availability and Implementation: LoRMA is freely available at http://www.cs.helsinki.fi/u/lmsalmel/LoRMA/. Contact: leena.salmela@cs.helsinki.fi PMID:27273673
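The first phase above rests on the de Bruijn graph of "solid" k-mers, those seen often enough across the reads to be trusted; erroneous k-mers are rare and fall below the threshold. A minimal sketch of that filtering step on a toy read set (a single k and fixed threshold here, whereas the method iterates over increasing k):

```python
from collections import Counter

def solid_kmers(reads, k, min_count=3):
    """Count all k-mers in the reads and return the 'solid' set
    (coverage >= min_count), the building blocks of the de Bruijn
    graph used for alignment-free self-correction. The threshold is
    an illustrative choice, not the tool's adaptive setting."""
    counts = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            counts[r[i:i + k]] += 1
    return {kmer for kmer, c in counts.items() if c >= min_count}
```

With three copies of a correct read and one read carrying a substitution error, only the k-mers of the correct sequence survive the threshold; correction then replaces weak k-mer paths with solid ones in the graph.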
Scatter correction for full-fan volumetric CT using a stationary beam blocker in a single full scan
Niu, Tianye; Zhu, Lei
2011-01-01
Purpose: Applications of volumetric CT (VCT) are hampered by shading and streaking artifacts in the reconstructed images. These artifacts are mainly due to strong x-ray scatter signals accompanying the large illumination area within one projection, which lead to CT number inaccuracy, image contrast loss and spatial nonuniformity. Although different scatter correction algorithms have been proposed in the literature, a standard solution still remains unclear. Measurement-based methods use a beam blocker to acquire scatter samples. These techniques have unrivaled advantages over other existing algorithms in that they are simple and efficient, and achieve high scatter estimation accuracy without prior knowledge of the imaged object. Nevertheless, primary signal loss is inevitable in the scatter measurement, and multiple scans or moving the beam blocker during data acquisition are typically employed to compensate for the missing primary data. In this paper, we propose a new measurement-based scatter correction algorithm without primary compensation for full-fan VCT. An accurate reconstruction is obtained with one single scan and a stationary x-ray beam blocker, two seemingly incompatible features which enable simple and efficient scatter correction without increase of scan time or patient dose. Methods: Based on CT reconstruction theory, we distribute the blocked data over the projection area where primary signals are considered approximately redundant in a full scan, such that the CT image quality is not degraded even with primary loss. Scatter is then accurately estimated by interpolation and scatter-corrected CT images are obtained using an FDK-based reconstruction algorithm. Results: The proposed method is evaluated using two phantom studies on a tabletop CBCT system. On the Catphan©600 phantom, our approach reduces the reconstruction error from 207 Hounsfield units (HU) to 9 HU in the selected region of interest, and improves the image contrast by a factor of 2
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas; Hariharan, S. I.; Maccamy, R. C.
1993-01-01
We consider the solution of scattering problems for the wave equation using approximate boundary conditions at artificial boundaries. These conditions are explicitly viewed as approximations to an exact boundary condition satisfied by the solution on the unbounded domain. We study the short and long term behavior of the error. It is proved that, in two space dimensions, no local in time, constant coefficient boundary operator can lead to accurate results uniformly in time for the class of problems we consider. A variable coefficient operator is developed which attains better accuracy (uniformly in time) than is possible with constant coefficient approximations. The theory is illustrated by numerical examples. We also analyze the proposed boundary conditions using energy methods, leading to asymptotically correct error bounds.
Monte-Carlo scatter correction for cone-beam computed tomography with limited scan field-of-view
NASA Astrophysics Data System (ADS)
Bertram, Matthias; Sattel, Timo; Hohmann, Steffen; Wiegert, Jens
2008-03-01
In flat detector cone-beam computed tomography (CBCT), scattered radiation is a major source of image degradation, making accurate a posteriori scatter correction indispensable. A potential solution to this problem is provided by computerized scatter correction based on Monte-Carlo simulations. Using this technique, the detected distributions of X-ray scatter are estimated for various viewing directions using Monte-Carlo simulations of an intermediate reconstruction. However, as a major drawback, for standard CBCT geometries and with standard size flat detectors such as those mounted on interventional C-arms, the scan field of view is too small to accommodate the human body without lateral truncation, and thus this technique cannot be readily applied. In this work, we present a novel method for constructing a model of the object in a laterally and possibly also axially extended field of view, which enables meaningful application of Monte-Carlo based scatter correction even in the case of heavy truncation. Evaluation is based on simulations of a clinical CT data set of a human abdomen, which strongly exceeds the field of view of the simulated C-arm based CBCT imaging geometry. By using the proposed methodology, almost complete removal of scatter-caused inhomogeneities is demonstrated in reconstructed images.
NASA Astrophysics Data System (ADS)
Mobberley, Sean David
Accurate, cross-scanner assessment of in-vivo air density used to quantitatively assess the amount and distribution of emphysema in COPD subjects has remained elusive. Hounsfield units (HU) within tracheal air can be considerably more positive than −1000 HU. With the advent of new dual-source scanners which employ dedicated scatter correction techniques, it is of interest to evaluate how the quantitative measures of lung density compare between dual-source and single-source scan modes. This study has sought to characterize in-vivo and phantom-based air metrics using dual-energy computed tomography technology where the nature of the technology has required adjustments to scatter correction. Anesthetized ovine (N=6), swine (N=13: more human-like rib cage shape), a lung phantom and a thoracic phantom were studied using a dual-source MDCT scanner (Siemens Definition Flash). Multiple dual-source dual-energy (DSDE) and single-source (SS) scans taken at different energy levels and scan settings were acquired for direct quantitative comparison. Density histograms were evaluated for the lung, tracheal, water and blood segments. Image data were obtained at 80, 100, 120, and 140 kVp in the SS mode (B35f kernel) and at 80, 100, 140, and 140-Sn (tin filtered) kVp in the DSDE mode (B35f and D30f kernels), in addition to variations in dose, rotation time, and pitch. To minimize the effect of cross-scatter, the phantom scans in the DSDE mode were obtained by reducing the tube current of one of the tubes to its minimum (near zero) value. When using image data obtained in the DSDE mode, the median HU values in the tracheal regions of all animals and the phantom were consistently closer to −1000 HU regardless of reconstruction kernel (chapters 3 and 4). Similarly, HU values of water and blood were consistently closer to their nominal values of 0 HU and 55 HU respectively. When using image data obtained in the SS mode the air CT numbers demonstrated a consistent positive shift of up to 35 HU
Kim, Kyungsang; Ye, Jong Chul; Lee, Taewon; Cho, Seungryong; Seong, Younghun; Lee, Jongha; Jang, Kwang Eun; Choi, Jaegu; Choi, Young Wook; Kim, Hak Hee; Shin, Hee Jung; Cha, Joo Hee
2015-09-15
Purpose: In digital breast tomosynthesis (DBT), scatter correction is highly desirable, as it improves image quality at low doses. Because the DBT detector panel is typically stationary during the source rotation, antiscatter grids are not generally compatible with DBT; thus, a software-based scatter correction is required. This work proposes a fully iterative scatter correction method that uses a novel fast Monte Carlo simulation (MCS) with a tissue-composition ratio estimation technique for DBT imaging. Methods: To apply MCS to scatter estimation, the material composition in each voxel should be known. To overcome the lack of prior accurate knowledge of tissue composition for DBT, a tissue-composition ratio is estimated based on the observation that the breast tissues are principally composed of adipose and glandular tissues. Using this approximation, the composition ratio can be estimated from the reconstructed attenuation coefficients, and the scatter distribution can then be estimated by MCS using the composition ratio. The scatter estimation and image reconstruction procedures can be performed iteratively until an acceptable accuracy is achieved. For practical use, (i) the authors have implemented a fast MCS using a graphics processing unit (GPU), (ii) the MCS is simplified to transport only x-rays in the energy range of 10–50 keV, modeling Rayleigh and Compton scattering and the photoelectric effect using the tissue-composition ratio of adipose and glandular tissues, and (iii) downsampling is used because the scatter distribution varies rather smoothly. Results: The authors have demonstrated that the proposed method can accurately estimate the scatter distribution, and that the contrast-to-noise ratio of the final reconstructed image is significantly improved. The authors validated the performance of the MCS by changing the tissue thickness, composition ratio, and x-ray energy. The authors confirmed that the tissue-composition ratio estimation was quite
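The composition-ratio estimation described above can be read as a linear two-material decomposition of the reconstructed attenuation coefficients between adipose and glandular tissue. A sketch under that reading (the attenuation values passed in any call are illustrative and energy-dependent, not values from the paper):

```python
import numpy as np

def glandular_fraction(mu_recon, mu_adipose, mu_glandular):
    """Per-voxel glandular fraction from reconstructed attenuation
    coefficients, assuming each breast voxel is a linear mixture of
    adipose and glandular tissue (the approximation described in the
    abstract). All attenuation inputs must refer to the same energy.
    Values are clipped to the physical range [0, 1]."""
    w = (np.asarray(mu_recon, dtype=float) - mu_adipose) / (mu_glandular - mu_adipose)
    return np.clip(w, 0.0, 1.0)
```

The resulting fraction field is what the fast Monte Carlo simulation would consume to assign Rayleigh/Compton/photoelectric cross sections per voxel before estimating the scatter distribution.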
Brandenburg, Jan Gerit; Grimme, Stefan
2014-06-05
The ambitious goal of organic crystal structure prediction challenges theoretical methods regarding their accuracy and efficiency. Dispersion-corrected density functional theory (DFT-D) in principle is applicable, but the computational demands, for example, to compute a huge number of polymorphs, are too high. Here, we demonstrate that this task can be carried out by a dispersion-corrected density functional tight binding (DFTB) method. The semiempirical Hamiltonian with the D3 correction can accurately and efficiently model both solid- and gas-phase inter- and intramolecular interactions at a speed up of 2 orders of magnitude compared to DFT-D. The mean absolute deviations for interaction (lattice) energies for various databases are typically 2-3 kcal/mol (10-20%), that is, only about two times larger than those for DFT-D. For zero-point phonon energies, small deviations of <0.5 kcal/mol compared to DFT-D are obtained.
NASA Astrophysics Data System (ADS)
Lubberink, Mark; Kosugi, Tsuyoshi; Schneider, Harald; Ohba, Hiroyuki; Bergström, Mats
2004-03-01
A spatially variant convolution subtraction scatter correction was developed for a Hamamatsu SHR-7700 animal PET scanner. This scanner, with retractable septa and a gantry that can be tilted 90°, was designed for studies of conscious monkeys. The implemented dual-exponential scatter kernel takes into account both radiation scattered inside the object and radiation scattered in the gantry and detectors. This is necessary because of the relatively large contribution of gantry and detector scatter in this scanner. The correction is used for scatter correction of emission as well as transmission data. Transmission scatter correction using the dual-exponential kernel leads to a measured attenuation coefficient of 0.096 cm⁻¹ in water, compared to 0.089 cm⁻¹ without scatter correction. Scatter correction on both emission and transmission data resulted in a residual correction error of 2.1% in water, as well as improved image contrast and hot spot quantification.
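A dual-exponential kernel and iterative convolution subtraction of the kind described above can be sketched in 1-D as follows; the kernel amplitudes and slopes are placeholders, not the fitted SHR-7700 values:

```python
import numpy as np

def dual_exp_kernel(radius, a1, b1, a2, b2):
    """1-D dual-exponential scatter kernel: one exponential term for
    object scatter, one for gantry/detector scatter. Amplitudes (a1,
    a2) and slopes (b1, b2) are illustrative placeholders."""
    r = np.abs(np.arange(-radius, radius + 1, dtype=float))
    return a1 * np.exp(-r / b1) + a2 * np.exp(-r / b2)

def convolution_subtraction(measured, kernel, n_iter=3):
    """Iterative convolution subtraction: convolve the current
    unscattered estimate with the kernel to form a scatter estimate,
    subtract it from the measurement, and repeat."""
    est = measured.copy()
    for _ in range(n_iter):
        scatter = np.convolve(est, kernel, mode="same")
        est = measured - scatter
    return est
```

Applied to a point source (a delta profile), the iteration strips the exponential scatter tails while leaving most of the central peak, which is the qualitative behavior the correction is after.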
NASA Astrophysics Data System (ADS)
Narasimham, V. L.; Ramachandran, A. S.; Warke, C. S.
1981-02-01
The exchange correction to the differential scattering cross section for the electron-hydrogen-molecule scattering is derived. In the independent scattering center and Glauber approximation our expressions do not agree with those used in the published literature. The overall agreement between the calculated and the measured cross sections improves at higher angles and lower incident electron energies, where the exchange contribution is important.
NASA Astrophysics Data System (ADS)
Gillen, Rebecca; Firbank, Michael J.; Lloyd, Jim; O'Brien, John T.
2015-09-01
This study investigated whether the appearance and diagnostic accuracy of HMPAO brain perfusion SPECT images could be improved by using CT-based attenuation and scatter correction compared with the uniform attenuation correction method. A cohort of subjects who were clinically categorized as Alzheimer’s Disease (n=38), Dementia with Lewy Bodies (n=29) or healthy normal controls (n=30) underwent SPECT imaging with Tc-99m HMPAO and a separate CT scan. The SPECT images were processed using: (a) a correction map derived from the subject’s CT scan, (b) the Chang uniform approximation for correction, or (c) no attenuation correction. Images were visually inspected. The ratios between key regions of interest known to be affected or spared in each condition were calculated for each correction method, and the differences between these ratios were evaluated. The images produced using the different corrections were noted to be visually different. However, ROI analysis found similar statistically significant differences between control and dementia groups and between AD and DLB groups regardless of the correction map used. We did not identify an improvement in diagnostic accuracy in images corrected using CT-based attenuation and scatter correction, compared with those corrected using a uniform correction map.
SU-E-T-355: Efficient Scatter Correction for Direct Ray-Tracing Based Dose Calculation
Chen, M; Jiang, S; Lu, W
2015-06-15
Purpose: To propose a scatter correction method with linear computational complexity for direct-ray-tracing (DRT) based dose calculation. Due to its speed and simplicity, DRT is widely used as a dose engine in treatment planning systems (TPS) and monitor unit (MU) verification software, where heterogeneity correction is applied by radiological distance scaling. However, such correction only accounts for attenuation but not scatter difference, making the DRT algorithm less accurate than model-based algorithms for small field sizes in heterogeneous media. Methods: Inspired by the convolution formula derived from an exponential kernel, as is typically done in the collapsed-cone-convolution-superposition (CCCS) method, we redesigned the ray tracing component as the sum of TERMA scaled by a local deposition factor, which is linear with respect to density, and the dose of the previous voxel scaled by a remote deposition factor: D(i) = a·ρ(i)·T(i) + (b + c·(ρ(i) − 1))·D(i−1), where T(i) = exp(−α·r(i) + β·r(i)²) and r(i) = Σ_{j=1,…,i} ρ(j). The two factors together with TERMA can be expressed in terms of 5 parameters, which are subsequently optimized by curve fitting using digital phantoms for each field size and each beam energy. Results: The proposed algorithm was implemented for the Fluence-Convolution-Broad-Beam (FCBB) dose engine and evaluated using digital slab phantoms and clinical CT data. Compared with the gold standard calculation, dose deviations were improved from 20% to 2% in the low density regions of the slab phantoms for the 1-cm field size, and were within 2% for over 95% of the volume, with the largest discrepancy at the interface, for the clinical lung case. Conclusion: We developed a simple recursive formula for scatter correction for DRT-based dose calculation with much improved accuracy, especially for small field sizes, while keeping the calculation to linear complexity. The proposed calculator is fast, yet accurate, which is crucial for dose updating in IMRT
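The recursive deposition formula can be sketched along a single ray as follows. The five parameter values (a, b, c, α, β) are illustrative placeholders; the paper fits them per field size and beam energy:

```python
import numpy as np

def drt_dose_with_scatter(density, alpha=0.02, beta=1e-5,
                          a=0.9, b=0.05, c=0.02):
    """Recursive dose along one ray:
        D(i) = a*rho(i)*T(i) + (b + c*(rho(i) - 1)) * D(i-1)
    with TERMA T(i) = exp(-alpha*r(i) + beta*r(i)^2) and radiological
    depth r(i) = sum_{j<=i} rho(j).  Parameter values are illustrative."""
    r = np.cumsum(density)                     # radiological distance
    T = np.exp(-alpha * r + beta * r**2)       # TERMA along the ray
    D = np.zeros_like(density)
    prev = 0.0
    for i, rho in enumerate(density):
        D[i] = a * rho * T[i] + (b + c * (rho - 1.0)) * prev
        prev = D[i]
    return D

dose = drt_dose_with_scatter(np.ones(50))      # water-equivalent ray
```

The second term carries dose forward from the previous voxel, which is what restores a scatter-like contribution while keeping the sweep linear in the number of voxels.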
NASA Astrophysics Data System (ADS)
Chen, Jingyi; Zebker, Howard A.; Knight, Rosemary
2015-11-01
Interferometric synthetic aperture radar (InSAR) is a radar remote sensing technique for measuring surface deformation to millimeter-level accuracy at meter-scale resolution. Obtaining accurate deformation measurements in agricultural regions is difficult because the signal is often decorrelated due to vegetation growth. We present here a new algorithm for retrieving InSAR deformation measurements over areas with severe vegetation decorrelation using adaptive phase interpolation between persistent scatterer (PS) pixels, those points at which surface scattering properties do not change much over time and thus decorrelation artifacts are minimal. We apply this algorithm to L-band ALOS interferograms acquired over the San Luis Valley, Colorado, and the Tulare Basin, California. In both areas, the pumping of groundwater for irrigation results in deformation of the land that can be detected using InSAR. We show that the PS-based algorithm can significantly reduce the artifacts due to vegetation decorrelation while preserving the deformation signature.
Robust scatter correction method for cone-beam CT using an interlacing-slit plate
NASA Astrophysics Data System (ADS)
Huang, Kui-Dong; Xu, Zhe; Zhang, Ding-Hua; Zhang, Hua; Shi, Wen-Long
2016-06-01
Cone-beam computed tomography (CBCT) has been widely used in medical imaging and industrial nondestructive testing, but the presence of scattered radiation causes a significant reduction of image quality. In this article, a robust scatter correction method for CBCT using an interlacing-slit plate (ISP) is carried out for convenient practice. Firstly, a Gaussian filtering method is proposed to compensate for the missing data of the inner scatter image, while simultaneously avoiding excessively large values of the calculated inner scatter and smoothing the inner scatter field. Secondly, an interlacing-slit scan without detector gain correction is carried out to enhance the practicality and convenience of the scatter correction method. Finally, a denoising step for scatter-corrected projection images is added in the process flow to control the noise amplification. The experimental results show that the improved method can not only make the scatter correction more robust and convenient, but also achieve good quality of scatter-corrected slice images. Supported by National Science and Technology Major Project of the Ministry of Industry and Information Technology of China (2012ZX04007021), Aeronautical Science Fund of China (2014ZE53059), and Fundamental Research Funds for Central Universities of China (3102014KYJD022)
Brunner, Stephen; Nett, Brian E; Tolakanahalli, Ranjini; Chen, Guang-Hong
2012-01-01
X-ray scatter is a significant problem in cone-beam computed tomography when thicker objects and larger cone angles are used, as scattered radiation can lead to reduced contrast and CT number inaccuracy. Advances have been made in x-ray computed tomography (CT) by incorporating a high quality prior image into the image reconstruction process. In this paper, we extend this idea to correct scatter-induced shading artifacts in cone-beam CT image-guided radiation therapy. Specifically, this paper presents a new scatter correction algorithm which uses a prior image with low scatter artifacts to reduce shading artifacts in cone-beam CT images acquired under conditions of high scatter. The proposed correction algorithm begins with an empirical hypothesis that the target image can be written as a weighted summation of a series of basis images that are generated by raising the raw cone-beam projection data to different powers, and then, reconstructing using the standard filtered backprojection algorithm. The weight for each basis image is calculated by minimizing the difference between the target image and the prior image. The performance of the scatter correction algorithm is qualitatively and quantitatively evaluated through phantom studies using a Varian 2100 EX System with an on-board imager. Results show that the proposed scatter correction algorithm using a prior image with low scatter artifacts can substantially mitigate scatter-induced shading artifacts in both full-fan and half-fan modes. PMID:21258140
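The weighting step of the prior-image approach can be sketched as below. The `reconstruct` operator is any caller-supplied FBP-style reconstruction, the powers are illustrative, and plain least squares stands in for the paper's minimization of the difference between the weighted sum and the prior:

```python
import numpy as np

def prior_based_scatter_correction(raw_projection, reconstruct, prior_image,
                                   powers=(0.6, 0.8, 1.0)):
    """Reconstruct basis images from the raw projections raised to
    different powers, then find weights w so the weighted sum of basis
    images best matches the low-scatter prior (least squares here; the
    powers and operator are illustrative assumptions)."""
    basis = [reconstruct(raw_projection ** p) for p in powers]
    A = np.stack([b.ravel() for b in basis], axis=1)
    w, *_ = np.linalg.lstsq(A, prior_image.ravel(), rcond=None)
    corrected = sum(wi * bi for wi, bi in zip(w, basis))
    return corrected, w

# toy check: with an identity "reconstruction" and the prior equal to the
# unit-power basis image, the weighted sum reproduces the prior
raw = np.linspace(1.0, 2.0, 16).reshape(4, 4)
corrected, w = prior_based_scatter_correction(raw, lambda p: p, raw)
```

Because the prior only steers the weights, residual anatomy differences between prior and target are tolerated better than in direct image subtraction.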
Hong Xinguo; Hao Quan
2009-01-15
In this paper, we report a method of precise in situ x-ray scattering measurements on protein solutions using small stationary sample cells. Although reduction in the radiation damage induced by intense synchrotron radiation sources is indispensable for the correct interpretation of scattering data, there is still a lack of effective methods to overcome radiation-induced aggregation and extract scattering profiles free from chemical or structural damage. It is found that radiation-induced aggregation mainly begins on the surface of the sample cell and grows along the beam path; the diameter of the damaged region is comparable to the x-ray beam size. Radiation-induced aggregation can be effectively avoided by using a two-dimensional scan (2D mode), with an interval as small as 1.5 times the beam size, at low temperature (e.g., 4 °C). A radiation sensitive protein, bovine hemoglobin, was used to test the method. A standard deviation of less than 5% in the small angle region was observed from a series of nine spectra recorded in 2D mode, in contrast to the intensity variation seen using the conventional stationary technique, which can exceed 100%. Wide-angle x-ray scattering data were collected at a standard macromolecular diffraction station using the same data collection protocol and showed a good signal/noise ratio (better than the reported data on the same protein using a flow cell). The results indicate that this method is an effective approach for obtaining precise measurements of protein solution scattering.
NASA Astrophysics Data System (ADS)
Römer, Ulrich; Schöps, Sebastian; De Gersem, Herbert
2017-04-01
In electromagnetic simulations of magnets and machines, one is often interested in a highly accurate and local evaluation of the magnetic field uniformity. Based on local post-processing of the solution, a defect correction scheme is proposed as an easy to realize alternative to higher order finite element or hybrid approaches. Radial basis functions (RBFs) are key for the generality of the method, which in particular can handle unstructured grids. Also, contrary to conventional finite element basis functions, higher derivatives of the solution can be evaluated, as required, e.g., for deflection magnets. Defect correction is applied to obtain a solution with improved accuracy and adjoint techniques are used to estimate the remaining error for a specific quantity of interest. Significantly improved (local) convergence orders are obtained. The scheme is also applied to the simulation of a Stern-Gerlach magnet currently in operation.
Accurate and fast multiple-testing correction in eQTL studies.
Sul, Jae Hoon; Raj, Towfique; de Jong, Simone; de Bakker, Paul I W; Raychaudhuri, Soumya; Ophoff, Roel A; Stranger, Barbara E; Eskin, Eleazar; Han, Buhm
2015-06-04
In studies of expression quantitative trait loci (eQTLs), it is of increasing interest to identify eGenes, the genes whose expression levels are associated with variation at a particular genetic variant. Detecting eGenes is important for follow-up analyses and prioritization because genes are the main entities in biological processes. To detect eGenes, one typically focuses on the genetic variant with the minimum p value among all variants in cis with a gene and corrects for multiple testing to obtain a gene-level p value. For performing multiple-testing correction, a permutation test is widely used. Because of growing sample sizes of eQTL studies, however, the permutation test has become a computational bottleneck in eQTL studies. In this paper, we propose an efficient approach for correcting for multiple testing and assess eGene p values by utilizing a multivariate normal distribution. Our approach properly takes into account the linkage-disequilibrium structure among variants, and its time complexity is independent of sample size. By applying our small-sample correction techniques, our method achieves high accuracy in both small and large studies. We have shown that our method consistently produces extremely accurate p values (accuracy > 98%) for three human eQTL datasets with different sample sizes and SNP densities: the Genotype-Tissue Expression pilot dataset, the multi-region brain dataset, and the HapMap 3 dataset.
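The core of the multivariate-normal approach can be sketched as below. Plain Monte Carlo sampling from N(0, R) stands in for the paper's more efficient numerical integration and small-sample corrections; the LD correlation matrix R is assumed given:

```python
import numpy as np
from math import erfc, sqrt

def mvn_gene_pvalue(min_p, ld_corr, n_draws=20000, seed=0):
    """Gene-level p value without permutations: sample variant z-scores
    jointly from N(0, R), with R the LD correlation matrix, and estimate
    how often the best variant beats the observed minimum p value.
    (Sampling here is a stand-in for the paper's faster integration.)"""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(ld_corr.shape[0]), ld_corr,
                                size=n_draws)
    best_z = np.abs(z).max(axis=1)
    # two-sided p value of the strongest variant in each draw
    best_p = np.array([erfc(v / sqrt(2.0)) for v in best_z])
    return float((best_p <= min_p).mean())

# single independent variant: the gene-level p equals the variant p
p_gene = mvn_gene_pvalue(0.05, np.eye(1))
```

With correlated variants the effective number of tests shrinks, which is exactly what sampling from N(0, R) captures and a Bonferroni correction does not.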
A Cavity Corrected 3D-RISM Functional for Accurate Solvation Free Energies
2014-01-01
We show that an Ng bridge function modified version of the three-dimensional reference interaction site model (3D-RISM-NgB) solvation free energy method can accurately predict the hydration free energy (HFE) of a set of 504 organic molecules. To achieve this, a single unique constant parameter was adjusted to the computed HFE of single atom Lennard-Jones solutes. It is shown that 3D-RISM is relatively accurate at predicting the electrostatic component of the HFE without correction but requires a modification of the nonpolar contribution that originates in the formation of the cavity created by the solute in water. We use a free energy functional with the Ng scaling of the direct correlation function [Ng, K. C. J. Chem. Phys. 1974, 61, 2680]. This produces a rapid, reliable small molecule HFE calculation for applications in drug design. PMID:24634616
Meng, Bowen; Lee, Ho; Xing, Lei; Fahimian, Benjamin P.
2013-01-01
Purpose: X-ray scatter results in a significant degradation of image quality in computed tomography (CT), representing a major limitation in cone-beam CT (CBCT) and large field-of-view diagnostic scanners. In this work, a novel scatter estimation and correction technique is proposed that utilizes peripheral detection of scatter during the patient scan to simultaneously acquire image and patient-specific scatter information in a single scan, and, in conjunction with a proposed compressed sensing scatter recovery technique, to reconstruct and correct for the patient-specific scatter in the projection space. Methods: The method consists of the detection of patient scatter at the edges of the field of view (FOV) followed by measurement-based compressed sensing recovery of the scatter throughout the projection space. In the prototype implementation, the kV x-ray source of the Varian TrueBeam OBI system was blocked at the edges of the projection FOV, and the image detector in the corresponding blocked region was used for scatter detection. The design enables acquisition of projection data on the unblocked central region and of scatter data at the blocked boundary regions. For the initial scatter estimation on the central FOV, a prior consisting of a hybrid scatter model that combines a scatter interpolation method and a scatter convolution model is estimated using the acquired scatter distribution on the boundary region. With the hybrid scatter estimation model, compressed sensing optimization is performed to generate the scatter map by penalizing the L1 norm of the discrete cosine transform of the scatter signal. The estimated scatter is subtracted from the projection data by soft-tuning, and the scatter-corrected CBCT volume is obtained with the conventional Feldkamp-Davis-Kress algorithm. Experimental studies using image quality and anthropomorphic phantoms on a Varian TrueBeam system were carried out to evaluate the performance of the proposed scheme. Results
Vincent, Mark A; Hillier, Ian H
2014-08-25
The accurate prediction of the adsorption energies of unsaturated molecules on graphene in the presence of water is essential for the design of molecules that can modify its properties and that can aid its processability. We here show that a semiempirical MO method corrected for dispersive interactions (PM6-DH2) can predict the adsorption energies of unsaturated hydrocarbons and the effect of substitution on these values to an accuracy comparable to DFT values and in good agreement with the experiment. The adsorption energies of TCNE, TCNQ, and a number of sulfonated pyrenes are also predicted, along with the effect of hydration using the COSMO model.
NASA Astrophysics Data System (ADS)
Moskalensky, Alexander E.; Yurkin, Maxim A.; Konokhova, Anastasiya I.; Strokotov, Dmitry I.; Nekrasov, Vyacheslav M.; Chernyshev, Andrei V.; Tsvetovskaya, Galina A.; Chikova, Elena D.; Maltsev, Valeri P.
2013-01-01
We introduce a novel approach for determination of volume and shape of individual blood platelets modeled as an oblate spheroid from angle-resolved light scattering with flow-cytometric technique. The light-scattering profiles (LSPs) of individual platelets were measured with the scanning flow cytometer and the platelet characteristics were determined from the solution of the inverse light-scattering problem using the precomputed database of theoretical LSPs. We revealed a phenomenon of parameter compensation, which is partly explained in the framework of anomalous diffraction approximation. To overcome this problem, additional a priori information on the platelet refractive index was used. It allowed us to determine the size of each platelet with subdiffraction precision and independent of the particular value of the platelet aspect ratio. The shape (spheroidal aspect ratio) distributions of platelets showed substantial differences between native and activated by 10 μM adenosine diphosphate samples. We expect that the new approach may find use in hematological analyzers for accurate measurement of platelet volume distribution and for determination of the platelet activation efficiency.
NASA Astrophysics Data System (ADS)
Cheng, Ju-Chieh Kevin; Rahmim, Arman; Blinder, Stephan; Camborde, Marie-Laure; Raywood, Kelvin; Sossi, Vesna
2007-04-01
We describe an ordinary Poisson list-mode expectation maximization (OP-LMEM) algorithm with a sinogram-based scatter correction method based on the single scatter simulation (SSS) technique and a random correction method based on the variance-reduced delayed-coincidence technique. We also describe a practical approximate scatter and random-estimation approach for dynamic PET studies based on a time-averaged scatter and random estimate followed by scaling according to the global numbers of true coincidences and randoms for each temporal frame. The quantitative accuracy achieved using OP-LMEM was compared to that obtained using the histogram-mode 3D ordinary Poisson ordered subset expectation maximization (3D-OP) algorithm with similar scatter and random correction methods, and they showed excellent agreement. The accuracy of the approximated scatter and random estimates was tested by comparing time activity curves (TACs) as well as the spatial scatter distribution from dynamic non-human primate studies obtained from the conventional (frame-based) approach and those obtained from the approximate approach. An excellent agreement was found, and the time required for the calculation of scatter and random estimates in the dynamic studies became much less dependent on the number of frames (we achieved a nearly four times faster performance on the scatter and random estimates by applying the proposed method). The precision of the scatter fraction was also demonstrated for the conventional and the approximate approach using phantom studies. This work was supported by the Canadian Institute of Health Research, a TRIUMF Life Science Grant, the Natural Sciences and Engineering Research Council of Canada UFA (V Sossi) and the Michael Smith Foundation for Health Research Scholarship (V Sossi).
Fullerton, G D; Keener, C R; Cameron, I L
1994-12-01
The authors describe empirical corrections to ideally dilute expressions for the freezing point depression of aqueous solutions, arriving at new expressions accurate up to three molal concentration. The method assumes non-ideality is due primarily to solute/solvent interactions, such that the correct free water mass M_wc is the mass of water in solution M_w minus I·M_s, where M_s is the mass of solute and I an empirical solute/solvent interaction coefficient. The interaction coefficient is easily derived from the constant in the linear regression fit to the experimental plot of M_w/M_s as a function of 1/ΔT (inverse freezing point depression). The I-value, when substituted into the new thermodynamic expressions derived from the assumption of equivalent activity of water in solution and ice, provides accurate predictions of freezing point depression (±0.05 °C) up to 2.5 molal concentration for all the test molecules evaluated: glucose, sucrose, glycerol and ethylene glycol. The concentration limit is the approximate monolayer water coverage limit for the solutes, which suggests that direct solute/solute interactions are negligible below this limit. This is contrary to the view of many authors due to the common practice of including hydration forces (a soft potential added to the hard core atomic potential) in the interaction potential between solute particles. When this is recognized the two viewpoints are in fundamental agreement.
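Since ΔT = K_f·(M_s/MM)/(M_w − I·M_s), the plot of M_w/M_s versus 1/ΔT is exactly linear with intercept I, so the fit can be sketched as below. The solute molar mass and the I value are illustrative (sucrose-like numbers), not the paper's fitted coefficients:

```python
import numpy as np

KF = 1.86           # K·kg/mol, cryoscopic constant of water
MOLAR_MASS = 0.342  # kg/mol, sucrose-like solute (illustrative)

def delta_t(m_w, m_s, interaction):
    """Freezing point depression with the corrected free-water mass
    M_wc = M_w - I*M_s (all masses in kg)."""
    moles = m_s / MOLAR_MASS
    return KF * moles / (m_w - interaction * m_s)

def fit_interaction(m_w, m_s_array, dt_array):
    """Recover I as the intercept of M_w/M_s versus 1/deltaT."""
    slope, intercept = np.polyfit(1.0 / dt_array, m_w / m_s_array, 1)
    return intercept

# synthetic data with a known I = 0.3, then recover it by regression
m_s = np.linspace(0.05, 0.5, 8)
dts = delta_t(1.0, m_s, interaction=0.3)
i_hat = fit_interaction(1.0, m_s, dts)
```

The recovered intercept reproduces the interaction coefficient used to generate the data, mirroring how I is extracted from experimental freezing-point curves.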
Rescattering corrections and self-consistent metric in planckian scattering
NASA Astrophysics Data System (ADS)
Ciafaloni, M.; Colferai, D.
2014-10-01
Starting from the ACV approach to transplanckian scattering, we present a development of the reduced-action model in which the (improved) eikonal representation is able to describe particles' motion at large scattering angle and, furthermore, UV-safe (regular) rescattering solutions are found and incorporated in the metric. The resulting particles' shock-waves undergo calculable trajectory shifts and time delays during the scattering process — which turns out to be consistently described by both action and metric, up to relative order R²/b² in the gravitational radius over impact parameter expansion. Some suggestions about the role and the (re)scattering properties of irregular solutions — not fully investigated here — are also presented.
Increasing the imaging depth through computational scattering correction (Conference Presentation)
NASA Astrophysics Data System (ADS)
Koberstein-Schwarz, Benno; Omlor, Lars; Schmitt-Manderbach, Tobias; Mappes, Timo; Ntziachristos, Vasilis
2016-03-01
Imaging depth is one of the most prominent limitations in light microscopy. The depth at which we are still able to resolve biological structures is limited by the scattering of light within the sample. We have developed an algorithm to compensate for the influence of scattering. The potential of the algorithm is demonstrated on a 3D image stack of a zebrafish embryo captured with a selective plane illumination microscope (SPIM). With our algorithm we were able to shift the point in depth at which scattering starts to blur the imaging and affect the image quality by around 30 µm. For the reconstruction the algorithm only uses information from within the image stack, so it can be applied to the image data from any SPIM system without further hardware adaptation. There is also no need for multiple scans from different views to perform the reconstruction. The underlying model estimates the recorded image as a convolution between the distribution of fluorophores and a point spread function which describes the blur due to scattering. Our algorithm performs a space-variant blind deconvolution on the image. To account for the increasing amount of scattering in deeper tissue, we introduce a new regularizer which models the increasing width of the point spread function in order to improve the image quality in the depth of the sample. Since the assumptions the algorithm is based on are not limited to SPIM images, the algorithm should also work on other imaging techniques which provide a 3D image volume.
Experimental Scatter Correction Methods in Industrial X-Ray Cone-Beam CT
NASA Astrophysics Data System (ADS)
Schörner, K.; Goldammer, M.; Stephan, J.
2011-06-01
Scattered radiation presents a major source of image degradation in industrial cone-beam computed tomography systems. Scatter artifacts introduce streaks, cupping and a loss of contrast in the reconstructed CT-volumes. In order to overcome scatter artifacts, we present two complementary experimental correction methods: the beam-stop array (BSA) and an inverse technique we call beam-hole array (BHA). Both correction methods are examined in comparative measurements where it is shown that the aperture-based BHA technique has practical and scatter-reducing advantages over the BSA. The proposed BHA correction method is successfully applied to a large-scale industrial specimen whereby scatter artifacts are reduced and contrast is enhanced significantly.
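The beam-stop-array principle can be sketched in one dimension: behind each stop the primary is blocked, so the signal measured there is (to first order) scatter alone, and interpolating between stops gives the scatter everywhere. All numbers below are toy values; real systems interpolate a 2-D scatter map across the detector:

```python
import numpy as np

def bsa_scatter_estimate(projection, stop_cols):
    """Interpolate the signal measured behind the beam stops (scatter
    only) across the whole detector row to estimate the scatter field."""
    cols = np.arange(projection.size)
    return np.interp(cols, stop_cols, projection[stop_cols])

# toy detector row: primary step object + slowly varying scatter
cols = np.arange(100)
scatter = 0.1 + 0.002 * cols
primary = np.where((cols > 30) & (cols < 70), 1.0, 0.2)
stops = np.arange(5, 100, 10)
measured = primary + scatter
measured[stops] = scatter[stops]       # primary blocked behind the stops
estimate = bsa_scatter_estimate(measured, stops)
corrected = measured - estimate
```

The complementary beam-hole array inverts this geometry: apertures pass narrow primary beams, and the scatter is inferred from the difference against a full-field exposure.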
The correction for multiple scattering of the lidar retrieving in thin clouds
NASA Astrophysics Data System (ADS)
Melnikova, Irina; Vasilyev, Alexander; Samulenkov, Dmitriy; Sapunov, Maxim; Tagaev, Vladislav
2017-02-01
Lidar sounding in a cloudy atmosphere requires accounting for multiple scattering; the standard approach for retrieving optical parameters and the morphology of aerosol particles may not be sufficient. Here, theoretical analytical and numerical methods are used to calculate the multiple-scattering contributions in the backscattered lidar signal. The cloud optical thickness that produces a distinct multiply scattered component is determined. For clouds optically thicker than 4, a correction is proposed in which the multiply scattered part is subtracted from the registered signal. Routine processing is then possible on the corrected lidar signal for clouds optically thicker than 4, or without correction for optically thinner clouds. The observational data obtained at the St. Petersburg lidar station proved thin enough for the standard procedure to be applied without correction. Optical parameters inside and outside the cloud are obtained.
Implementation of an efficient Monte Carlo calculation for CBCT scatter correction: phantom study.
Watson, Peter G F; Mainegra-Hing, Ernesto; Tomic, Nada; Seuntjens, Jan
2015-07-08
Cone-beam computed tomography (CBCT) images suffer from poor image quality, in a large part due to contamination from scattered X-rays. In this work, a Monte Carlo (MC)-based iterative scatter correction algorithm was implemented on measured phantom data acquired from a clinical on-board CBCT scanner. An efficient EGSnrc user code (egs_cbct) was used to transport photons through an uncorrected CBCT scan of a Catphan 600 phantom. From the simulation output, the contribution from primary and scattered photons was estimated in each projection image. From these estimates, an iterative scatter correction was performed on the raw CBCT projection data. The results of the scatter correction were compared with the default vendor reconstruction. The scatter correction was found to reduce the error in CT number for selected regions of interest, while improving contrast-to-noise ratio (CNR) by 18%. These results demonstrate the performance of the proposed scatter correction algorithm in improving image quality for clinical CBCT images.
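The iterative loop can be sketched as below. In the paper the scatter engine is the EGSnrc `egs_cbct` Monte Carlo code; here any callable stands in for it, and the proportional toy engine is purely illustrative:

```python
import numpy as np

def iterative_scatter_correction(raw, simulate_scatter, n_iter=3):
    """Iterative MC-style correction: estimate scatter from the current
    corrected projections (via a caller-supplied engine), subtract it
    from the raw data, and repeat until the estimate stabilizes."""
    corrected = raw.copy()
    for _ in range(n_iter):
        corrected = raw - simulate_scatter(corrected)
    return corrected

raw = np.full(16, 1.2)
# toy engine: scatter proportional to the current signal (illustrative)
corrected = iterative_scatter_correction(raw, lambda p: 0.2 * p)
```

With a proportional engine the loop converges to raw/1.2, illustrating why a few iterations suffice when the scatter estimate is a mild function of the corrected signal.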
Contribution of Δ(1232) to real photon radiative corrections for elastic electron-proton scattering
NASA Astrophysics Data System (ADS)
Gerasimov, R. E.; Fadin, V. S.
2016-12-01
Here we consider the contribution of the Δ(1232) resonance to the real photon radiative corrections for elastic ep scattering. The effect is found to be small for past experiments studying the unpolarized cross section, as well as for the recent VEPP-3 experiment investigating two-photon exchange effects by the precision measurement of the e±p scattering cross-section ratio.
SMARTIES: User-friendly codes for fast and accurate calculations of light scattering by spheroids
NASA Astrophysics Data System (ADS)
Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.
2016-05-01
We provide a detailed user guide for SMARTIES, a suite of MATLAB codes for the calculation of the optical properties of oblate and prolate spheroidal particles, with comparable capabilities and ease-of-use as Mie theory for spheres. SMARTIES is a MATLAB implementation of an improved T-matrix algorithm for the theoretical modelling of electromagnetic scattering by particles of spheroidal shape. The theory behind the improvements in numerical accuracy and convergence is briefly summarized, with reference to the original publications. Instructions of use, and a detailed description of the code structure, its range of applicability, as well as guidelines for further developments by advanced users are discussed in separate sections of this user guide. The code may be useful to researchers seeking a fast, accurate and reliable tool to simulate the near-field and far-field optical properties of elongated particles, but will also appeal to other developers of light-scattering software seeking a reliable benchmark for non-spherical particles with a challenging aspect ratio and/or refractive index contrast.
NASA Astrophysics Data System (ADS)
Park, Seyoun; Robinson, Adam; Quon, Harry; Kiess, Ana P.; Shen, Colette; Wong, John; Plishker, William; Shekhar, Raj; Lee, Junghoon
2016-03-01
In this paper, we propose a CT-CBCT registration method to accurately predict the tumor volume change based on daily cone-beam CTs (CBCTs) during radiotherapy. CBCT is commonly used to reduce patient setup error during radiotherapy, but its poor image quality impedes accurate monitoring of anatomical changes. Although physician's contours drawn on the planning CT can be automatically propagated to daily CBCTs by deformable image registration (DIR), artifacts in CBCT often cause undesirable errors. To improve the accuracy of the registration-based segmentation, we developed a DIR method that iteratively corrects CBCT intensities by local histogram matching. Three popular DIR algorithms (B-spline, demons, and optical flow) with the intensity correction were implemented on a graphics processing unit for efficient computation. We evaluated their performances on six head and neck (HN) cancer cases. For each case, four trained scientists manually contoured the nodal gross tumor volume (GTV) on the planning CT and on every other fraction CBCT, to which the GTV contours propagated by DIR were compared. The performance was also compared with commercial image registration software based on conventional mutual information (MI), VelocityAI (Varian Medical Systems Inc.). The volume differences (mean±std in cc) between the average of the manual segmentations and the automatic segmentations are 3.70±2.30 (B-spline), 1.25±1.78 (demons), 0.93±1.14 (optical flow), and 4.39±3.86 (VelocityAI). The proposed method significantly reduced the estimation error by 9% (B-spline), 38% (demons), and 51% (optical flow) over the results using VelocityAI. Although demonstrated only on HN nodal GTVs, the results imply that the proposed method can produce improved segmentation of other critical structures over conventional methods.
NASA Astrophysics Data System (ADS)
Annecchione, Maria; Hatch, David; Hefford, Shane W.
2017-01-01
In this paper we investigate digital elevation model (DEM) sourcing requirements to compute gravity gradiometry terrain corrections accurate to 1 Eötvös (Eö) at observation heights of 80 m or more above ground. Such survey heights are typical in fixed-wing airborne surveying for resource exploration, where the maximum signal-to-noise ratio is sought. We consider the accuracy of terrain corrections relevant for recent commercial airborne gravity gradiometry systems operating at the 10 Eö noise level and for future systems with a target noise level of 1 Eö. We focus on the requirements for the vertical gradient of the vertical component of gravity (Gdd) because this element of the gradient tensor is most commonly interpreted qualitatively and quantitatively. Terrain correction accuracy depends on the accuracy and spatial resolution of the bare-earth DEM, which in turn depend on its source. Two possible sources are considered: airborne LiDAR and the Shuttle Radar Topography Mission (SRTM). The accuracy of an SRTM DEM is affected by vegetation height; the SRTM footprint is also larger and the DEM resolution thus lower. However, resolution requirements relax as relief decreases. Publicly available LiDAR data and 1 arc-second and 3 arc-second SRTM data were selected over four study areas representing end-member cases of vegetation cover and relief. The four study areas are presented as reference material for processing airborne gravity gradiometry data at the 1 Eö noise level with 50 m spatial resolution. From this investigation we find that to achieve 1 Eö accuracy in the terrain correction at 80 m height, airborne LiDAR data are required even when terrain relief is a few tens of meters and the vegetation is sparse. However, as satellite ranging technologies progress, bare-earth DEMs of sufficient accuracy and resolution may be sourced at lesser cost. We found that a bare-earth DEM of 10 m resolution and 2 m accuracy are sufficient for
Scatter correction for large non-human primate brain imaging using microPET
NASA Astrophysics Data System (ADS)
Naidoo-Variawa, S.; Lehnert, W.; Banati, R. B.; Meikle, S. R.
2011-04-01
The baboon is well suited to pre-clinical evaluation of novel radioligands for positron emission tomography (PET). We have previously demonstrated the feasibility of using a high resolution animal PET scanner for this application in the baboon brain. However, the non-homogeneous distribution of tissue density within the head may give rise to photon scattering effects that reduce contrast and compromise quantitative accuracy. In this study, we investigated the magnitude and distribution of scatter contributing to the final reconstructed image and its variability throughout the baboon brain using phantoms and Monte Carlo simulated data. The scatter fraction was measured to be up to 36% at the centre of the brain for a wide energy window (350-650 keV) and 19% for a narrow (450-650 keV) window. We observed less than 3% variation in the scatter fraction throughout the brain and found that scattered events arising from radioactivity outside the field of view contribute less than 1% of measured coincidences. In a contrast phantom, scatter and attenuation correction improved contrast recovery compared with attenuation correction alone and reduced bias to less than 10%, at the expense of a reduced signal-to-noise ratio. We conclude that scatter correction is a necessary step for ensuring high quality measurements of the radiotracer distribution in the baboon brain with a microPET scanner, while it is not necessary to model out-of-field-of-view scatter or a spatially variant scatter function.
Patient-tailored plate for bone fixation and accurate 3D positioning in corrective osteotomy.
Dobbe, J G G; Vroemen, J C; Strackee, S D; Streekstra, G J
2013-02-01
A bone fracture may lead to malunion of the bone segments, which causes discomfort to the patient and may lead to chronic pain, reduced function and, eventually, early osteoarthritis. Corrective osteotomy is a treatment option to realign the bone segments. In this procedure, the surgeon tries to improve alignment by cutting the bone at, or near, the fracture location and fixating the bone segments in an improved position using a plate and screws. Three-dimensional positioning is very complex and difficult to plan, perform and evaluate using standard 2D fluoroscopy imaging. This study introduces a new technique that uses preoperative 3D imaging to plan positioning and to design a patient-tailored fixation plate that fits in only one way and realigns the bone segments as planned. The method was evaluated using artificial bones and renders realignment highly accurate and very reproducible (d(err) < 1.2 ± 0.8 mm and φ(err) < 1.8° ± 2.1°). Application of a patient-tailored plate is expected to be of great value for future corrective osteotomy surgeries.
The accurate assessment of small-angle X-ray scattering data
Grant, Thomas D.; Luft, Joseph R.; Carter, Lester G.; Matsui, Tsutomu; Weiss, Thomas M.; Martel, Anne; Snell, Edward H.
2015-01-01
Small-angle X-ray scattering (SAXS) has grown in popularity in recent times with the advent of bright synchrotron X-ray sources, powerful computational resources and algorithms enabling the calculation of increasingly complex models. However, the lack of standardized data-quality metrics presents difficulties for the growing user community in accurately assessing the quality of experimental SAXS data. Here, a series of metrics to quantitatively describe SAXS data in an objective manner using statistical evaluations are defined. These metrics are applied to identify the effects of radiation damage, concentration dependence and interparticle interactions on SAXS data from a set of 27 previously described targets for which high-resolution structures have been determined via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. The studies show that these metrics are sufficient to characterize SAXS data quality on a small sample set with statistical rigor and sensitivity similar to or better than manual analysis. The development of data-quality analysis strategies such as these initial efforts is needed to enable the accurate and unbiased assessment of SAXS data quality. PMID:25615859
NASA Astrophysics Data System (ADS)
Elnasir, Selma; Shamsuddin, Siti Mariyam; Farokhi, Sajad
2015-01-01
Palm vein recognition (PVR) is a promising new biometric that has been applied successfully as a method of access control by many organizations and has even further potential in the field of forensics. The palm vein pattern has highly discriminative features that are difficult to forge because of its subcutaneous position in the palm. Despite considerable progress, a few practical issues remain, and providing accurate palm vein readings is still an open problem in biometrics. We propose a robust and more accurate PVR method based on the combination of wavelet scattering (WS) with spectral regression kernel discriminant analysis (SRKDA). As the dimension of the WS-generated features is quite large, SRKDA is required to reduce the extracted features and enhance the discrimination. The results, based on two public databases (the PolyU Hyperspectral Palmprint database and the PolyU Multispectral Palmprint database), show the high performance of the proposed scheme in comparison with state-of-the-art methods. The proposed approach scored a 99.44% identification rate and a 99.90% verification rate [equal error rate (EER) = 0.1%] on the hyperspectral database, and a 99.97% identification rate and a 99.98% verification rate (EER = 0.019%) on the multispectral database.
NASA Astrophysics Data System (ADS)
Tomalak, O.; Vanderhaeghen, M.
2016-01-01
We evaluate the two-photon exchange (TPE) correction to the unpolarized elastic electron-proton scattering at small momentum transfer Q2 . We account for the inelastic intermediate states approximating the double virtual Compton scattering by the unpolarized forward virtual Compton scattering. The unpolarized proton structure functions are used as input for the numerical evaluation of the inelastic contribution. Our calculation reproduces the leading terms in the Q2 expansion of the TPE correction and goes beyond this approximation by keeping the full Q2 dependence of the proton structure functions. In the range of small momentum transfer, our result is in good agreement with the empirical TPE fit to existing data.
Ouyang, L; Yan, H; Jia, X; Jiang, S; Wang, J; Zhang, H
2014-06-01
Purpose: A moving-blocker-based strategy has shown promising results for scatter correction in cone-beam computed tomography (CBCT). Different parameters of the system design affect its performance in scatter estimation and image reconstruction accuracy. The goal of this work is to optimize the geometric design of the moving blocker system. Methods: In the moving blocker system, a blocker consisting of lead strips is inserted between the x-ray source and the imaged object and moves back and forth along the rotation axis during CBCT acquisition. A CT image of an anthropomorphic pelvic phantom was used in the simulation study. Scatter signal was simulated by Monte Carlo calculation for various combinations of the lead strip width and the gap between neighboring lead strips, ranging from 4 mm to 80 mm (projected at the detector plane). Scatter signal in the unblocked region was estimated by cubic B-spline interpolation from the blocked region. Scatter estimation accuracy was quantified as relative root mean squared error by comparing the interpolated scatter to the Monte Carlo simulated scatter. CBCT was reconstructed by total variation minimization from the unblocked region under the various combinations of lead strip width and gap. Reconstruction accuracy in each condition was quantified by the CT number error compared to a CBCT reconstructed from unblocked full projection data. Results: Scatter estimation error varied from 0.5% to 2.6% as the lead strip width and the gap varied from 4 mm to 80 mm. CT number error in the reconstructed CBCT images varied from 12 to 44. The highest reconstruction accuracy was achieved with a blocker lead strip width of 8 mm and a gap of 48 mm. Conclusions: Accurate scatter estimation can be achieved over a large range of combinations of lead strip width and gap. However, image reconstruction accuracy is greatly affected by the geometric design of the blocker.
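The interpolation step in the abstract above can be illustrated in one dimension: detector pixels behind the lead strips see (almost) only scatter, and the scatter field is smooth enough to interpolate across the open gaps. A minimal sketch, with two stated simplifications: `np.interp` is linear rather than the cubic B-spline used in the paper, and the strip/gap layout is hypothetical.

```python
import numpy as np

def estimate_scatter_row(measured, blocked):
    """Estimate scatter across one detector row. Scatter is sampled directly
    behind the lead strips (blocked == True, where primary is absent) and
    interpolated over the unblocked gaps (linear here; cubic B-spline in the
    paper)."""
    x = np.arange(measured.size)
    return np.interp(x, x[blocked], measured[blocked])
```

Subtracting such a row-wise estimate from the unblocked measurements yields scatter-corrected projection data for reconstruction.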
Triple energy window scatter correction technique in PET.
Shao, L; Freifelder, R; Karp, J S
1994-01-01
A practical triple energy window (TEW) technique is proposed, based on the information in two lower energy windows and a single calibration, to estimate the scatter within the photopeak window. The technique is essentially a conventional dual-window technique plus a modification factor, which can partially compensate for object-distribution-dependent scatter. The modification factor is a function of the two lower scatter windows of both the calibration phantom and the actual object. To evaluate the technique, a Monte Carlo simulation program that models the PENN-PET scanner geometry was used. Different phantom activity distributions and phantom sizes were tested to simulate brain studies, including uniform and nonuniform distributions. The results indicate that the TEW technique works well for a wide range of activity distributions and object sizes. Comparisons between the TEW and dual-window techniques show better quantitative accuracy for the TEW, especially across different phantom sizes. The technique was also applied to experimental data from a PENN-PET scanner to test its practicality.
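For reference, the widely used single-photon TEW estimate approximates the scatter under the photopeak as a trapezoid spanned by two narrow side windows. The PET variant in the abstract above instead combines two windows below the photopeak with a calibration-derived modification factor, which is not reproduced here; the sketch below is only the classic trapezoidal form.

```python
def tew_scatter(c_low, c_high, w_low, w_high, w_peak):
    """Classic triple-energy-window scatter estimate:
    S ~= (C_low / W_low + C_high / W_high) * W_peak / 2,
    i.e. the area of a trapezoid whose sides are the count densities
    in the two narrow windows flanking the photopeak."""
    return 0.5 * (c_low / w_low + c_high / w_high) * w_peak

def scatter_corrected(c_peak, c_low, c_high, w_low, w_high, w_peak):
    """Primary counts = photopeak counts minus the TEW scatter estimate,
    clamped at zero to avoid negative counts."""
    return max(c_peak - tew_scatter(c_low, c_high, w_low, w_high, w_peak), 0.0)
```

All counts and window widths are per-pixel (or per-projection-bin) quantities; the widths are in keV.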
NASA Astrophysics Data System (ADS)
Gerasimov, R. E.; Fadin, V. S.
2015-01-01
An analysis of approximations used in calculations of radiative corrections to the electron-proton scattering cross section is presented. We investigate the difference between the relatively recent Maximon and Tjon result and the Mo and Tsai result, which was used in the analysis of experimental data. We also discuss the dependence of the proton form factor ratio on the way radiative corrections are taken into account.
Constrained γZ correction to parity-violating electron scattering
Hall, Nathan Luk; Blunden, Peter Gwithian; Melnitchouk, Wally; Thomas, Anthony W.; Young, Ross D.
2013-11-01
We update the calculation of γZ interference corrections to the weak charge of the proton. We show how constraints from parton distributions, together with new data on parity-violating electron scattering in the resonance region, significantly reduce the uncertainties on the corrections compared to previous estimates.
X-Ray Scatter Correction on Soft Tissue Images for Portable Cone Beam CT.
Aootaphao, Sorapong; Thongvigitmanee, Saowapak S; Rajruangrabin, Jartuwat; Thanasupsombat, Chalinee; Srivongsa, Tanapon; Thajchayapong, Pairash
2016-01-01
Soft tissue images from portable cone beam computed tomography (CBCT) scanners can be used for diagnosis and detection of tumors, cancer, intracerebral hemorrhage, and so forth. Due to the large field of view, X-ray scattering, the main cause of artifacts, degrades image quality, producing cupping artifacts, CT number inaccuracy, and low contrast, especially in soft tissue images. In this work, we propose an X-ray scatter correction method for improving soft tissue images. The X-ray scatter correction scheme estimates X-ray scatter signals via deconvolution using the maximum likelihood estimation maximization (MLEM) method. The scatter kernels are obtained by simulating a PMMA sheet in Monte Carlo simulation (MCS) software. In the experiment, we used the QRM phantom for quantitative comparison with fan-beam CT (FBCT) data in terms of CT number values, contrast-to-noise ratio, cupping artifacts, and low contrast detectability. Moreover, the PH3 angiography phantom was also used to mimic human soft tissues in the brain. The reconstructed images with our proposed scatter correction show significant improvement in image quality. Thus the proposed scatter correction technique has high potential for detecting soft tissues in the brain.
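The MLEM deconvolution step described above can be sketched as a Richardson-Lucy (MLEM-type) update on a 1-D projection profile. This is a generic illustration under stated assumptions: the Gaussian kernel stands in for the paper's Monte-Carlo-derived PMMA kernels, and the profile is 1-D rather than a full 2-D projection.

```python
import numpy as np

def mlem_deconvolve(measured, kernel, iters=100):
    """Richardson-Lucy / MLEM deconvolution: iteratively estimate the primary
    profile p such that measured ~= convolve(p, kernel). The update is
    multiplicative, so non-negativity is preserved automatically."""
    kernel = np.asarray(kernel, float)
    kernel = kernel / kernel.sum()
    flipped = kernel[::-1]                      # adjoint of the blur operator
    est = np.full(len(measured), float(np.mean(measured)))
    for _ in range(iters):
        blur = np.convolve(est, kernel, mode='same')
        ratio = np.asarray(measured, float) / np.maximum(blur, 1e-12)
        est = est * np.convolve(ratio, flipped, mode='same')
    return est
```

In a scatter-correction context the deconvolved estimate of the primary signal replaces the raw projection before reconstruction.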
Correction of cross-scatter in next generation dual source CT (DSCT) scanners
NASA Astrophysics Data System (ADS)
Bruder, H.; Stierstorfer, K.; Petersilka, M.; Wiegand, C.; Suess, C.; Flohr, T.
2008-03-01
In dual source CT (DSCT), with two X-ray sources and two data measurement systems mounted on a CT gantry with a mechanical offset of 90°, cross-scatter radiation (essentially 90° Compton scatter) is added to the detector signals. In current DSCT scanners the cross-scatter correction is model based: the idea is to describe the scattering surface in terms of its tangents, whose positions are used to characterize the shape of the scattering object. For future DSCT scanners with larger axial X-ray beams, the model-based correction will not perfectly remove the scatter signal in certain clinical situations: for obese patients, scatter artifacts might occur in cardiac dual source scan modes. These shortcomings can be circumvented by utilizing the non-diagnostic time windows in cardiac scan modes to detect cross scatter online. The X-ray generators of the two systems are switched on and off alternately: while one X-ray source is switched off, the cross scatter deposited in the other detector can be recorded and processed for efficient cross-scatter correction. The procedure is demonstrated for cardiac step-and-shoot as well as spiral acquisitions. Full rotation reconstructions are less sensitive to cross-scatter radiation; hence, in the non-cardiac case, the model-based approach is sufficient. Based on measurements of physical and anthropomorphic phantoms, we present image data for DSCT systems with various collimator openings demonstrating the efficacy of the proposed method. In addition, a thorough analysis of the contrast-to-noise ratio (CNR) shows that, even for an X-ray beam corresponding to a 64 x 0.6 mm collimation, the maximum loss of CNR due to cross scatter is only about 7% for obese patients.
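The online measurement idea above reduces to a simple scheme: frames recorded while a detector's own tube is off contain only cross scatter from the other tube, and those samples can be interpolated over time and subtracted from the tube-on frames. A minimal sketch, with per-frame scalar readings standing in for full detector frames (an assumption for brevity):

```python
import numpy as np

def cross_scatter_correct(readings_a, a_on_mask, times):
    """readings_a: per-frame signal of detector A; a_on_mask: True where tube A
    was on. Frames with tube A off contain only cross scatter from tube B;
    interpolate those samples over time and subtract from the tube-on frames."""
    times = np.asarray(times, float)
    readings = np.asarray(readings_a, float)
    on = np.asarray(a_on_mask, bool)
    cross = np.interp(times, times[~on], readings[~on])
    corrected = readings - cross
    # tube-off frames carry no primary signal, so zero them out
    return np.where(on, np.maximum(corrected, 0.0), 0.0)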
Library-based scatter correction for dedicated cone beam breast CT: a feasibility study
NASA Astrophysics Data System (ADS)
Shi, Linxi; Vedantham, Srinivasan; Karellas, Andrew; Zhu, Lei
2016-04-01
Purpose: Scatter errors are detrimental to cone-beam breast CT (CBBCT) accuracy and obscure the visibility of calcifications and soft-tissue lesions. In this work, we propose a practical yet effective scatter correction for CBBCT using a library-based method and investigate its feasibility via small-group patient studies. Methods: Based on a simplified breast model with varying breast sizes, we generate a scatter library using Monte Carlo (MC) simulation. Breasts are approximated as semi-ellipsoids with a homogeneous glandular/adipose tissue mixture. For each patient CBBCT projection dataset, an initial estimate of the scatter distribution is selected from the pre-computed scatter library by measuring the corresponding breast size on raw projections and the glandular fraction on a first-pass CBBCT reconstruction. The selected scatter distribution is then modified by estimating the spatial translation of the breast between the MC simulation and the clinical scan. Scatter correction is finally performed by subtracting the estimated scatter from the raw projections. Results: On two sets of clinical patient CBBCT data with different breast sizes, the proposed method effectively reduces the cupping artifact and improves the image contrast by an average factor of 2, with an efficient processing time of 200 ms per cone-beam projection. Conclusion: Compared with existing scatter correction approaches for CBBCT, the proposed library-based method is clinically advantageous in that it requires no additional scans or hardware modifications. As the MC simulations are pre-computed, our method achieves high computational efficiency on each patient dataset. The library-based method shows great promise as a practical tool for effective scatter correction in clinical CBBCT.
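A minimal sketch of the library-lookup step, assuming the library is indexed by breast radius and glandular fraction (hypothetical keys and scaling; the paper's translation-refinement step is omitted):

```python
def select_scatter_map(library, radius_cm, glandular_fraction):
    """Pick the nearest pre-computed Monte Carlo scatter map in the
    (radius, glandular fraction) index space. The radius term is scaled
    (assumed factor 10) so both parameters contribute comparably."""
    key = min(library, key=lambda k: ((k[0] - radius_cm) / 10.0) ** 2
                                     + (k[1] - glandular_fraction) ** 2)
    return library[key]

def subtract_scatter(raw_projection, scatter_map, floor=1e-6):
    """Scatter correction by subtraction, clamped to keep projections positive
    (raw projection values must stay positive for log normalization)."""
    return [max(p - s, floor) for p, s in zip(raw_projection, scatter_map)]
```

The pre-computation is the point of the design: the expensive MC runs happen once per library entry, so the per-patient cost is only a lookup and a subtraction.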
Aleksejevs, Aleksandrs; Barkanova, Svetlana; Ilyichev, Alexander; Zykunov, Vladimir
2010-11-19
We perform updated and detailed calculations of the complete NLO set of electroweak radiative corrections to parity-violating e⁻e⁻ → e⁻e⁻(γ) scattering asymmetries at energies relevant for the ultra-precise Møller experiment coming soon at JLab. Our numerical results are presented for a range of experimental cuts, and the relative importance of various contributions is analyzed. In addition, we provide very compact analytical expressions free from non-physical parameters and show them to be valid for fast yet accurate estimations.
Monte Carlo simulation and scatter correction of the GE Advance PET scanner with SimSET and Geant4
NASA Astrophysics Data System (ADS)
Barret, Olivier; Carpenter, T. Adrian; Clark, John C.; Ansorge, Richard E.; Fryer, Tim D.
2005-10-01
For Monte Carlo simulations to be used as an alternative solution for scatter correction, accurate modelling of the scanner as well as speed is paramount. General-purpose Monte Carlo packages (Geant4, EGS, MCNP) allow a detailed description of the scanner but are not efficient at simulating voxel-based geometries (patient images). On the other hand, dedicated codes (SimSET, PETSIM) perform well for voxel-based objects but are poor at simulating complex geometries such as a PET scanner. The approach adopted in this work was to couple a dedicated code (SimSET) with a general-purpose package (Geant4) to obtain the efficiency of the former and the capabilities of the latter. The combined SimSET+Geant4 code (SimG4) was assessed on the GE Advance PET scanner and compared to the use of SimSET alone. A better description of the resolution and sensitivity of the scanner and of the scatter fraction was obtained with SimG4. The accuracy of scatter correction performed with SimG4 and SimSET was also assessed from data acquired with the 20 cm NEMA phantom. SimG4 was found to outperform SimSET and to give slightly better results than the GE scatter correction methods installed on the Advance scanner (curve fitting and scatter modelling for the 300-650 keV and 375-650 keV energy windows, respectively). In the presence of a hot source close to the edge of the field of view (as found in oxygen scans), the GE curve-fitting method was found to fail, whereas SimG4 maintained its performance.
Park, Yang-Kyun; Sharp, Gregory C.; Phillips, Justin; Winey, Brian A.
2015-01-01
Purpose: To demonstrate the feasibility of proton dose calculation on scatter-corrected cone-beam computed tomographic (CBCT) images for the purpose of adaptive proton therapy. Methods: CBCT projection images were acquired from anthropomorphic phantoms and a prostate patient using the on-board imaging system of an Elekta Infinity linear accelerator. Two previously introduced techniques were used to correct the scattered x-rays in the raw projection images: uniform scatter correction (CBCTus) and a priori CT-based scatter correction (CBCTap). CBCT images were reconstructed using a standard FDK algorithm and a GPU-based reconstruction toolkit. Soft-tissue ROI-based HU shifting was used to improve the HU accuracy of the uncorrected CBCT images and CBCTus, while no HU change was applied to the CBCTap. The degree of equivalence of the corrected CBCT images with respect to the reference CT image (CTref) was evaluated using angular profiles of water equivalent path length (WEPL) and passively scattered proton treatment plans. The CBCTap was further evaluated in more realistic scenarios, such as rectal filling and weight loss, to assess the effect of mismatched prior information on the corrected images. Results: The uncorrected CBCT and CBCTus images demonstrated substantial WEPL discrepancies (7.3 ± 5.3 mm and 11.1 ± 6.6 mm, respectively) with respect to the CTref, while the CBCTap images showed substantially reduced WEPL errors (2.4 ± 2.0 mm). Similarly, the CBCTap-based treatment plans demonstrated a high pass rate (96.0% ± 2.5% with 2 mm/2% criteria) in a 3D gamma analysis. Conclusions: The a priori CT-based scatter correction technique was shown to be promising for adaptive proton therapy, as it achieved proton dose distributions and water equivalent path lengths equivalent to those of a reference CT in a selection of anthropomorphic phantoms. PMID:26233175
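The angular WEPL comparison used above can be sketched by ray-marching through a 2-D map of relative stopping power: the WEPL of a ray is the line integral of relative stopping power along its path. This is a simplified nearest-voxel version; the grid, step size, and center point are illustrative, not the paper's procedure.

```python
import math

def wepl_profile(rsp, center_xy, angles_deg, step=0.5):
    """Water-equivalent path length from `center_xy` outward at each angle,
    approximated as a fixed-step Riemann sum with nearest-voxel sampling
    over a 2-D grid of relative stopping power (row = y, column = x)."""
    ny, nx = len(rsp), len(rsp[0])
    cx, cy = center_xy
    profile = []
    for a in angles_deg:
        dx, dy = math.cos(math.radians(a)), math.sin(math.radians(a))
        t = wepl = 0.0
        while True:
            x, y = cx + t * dx, cy + t * dy
            if x < 0 or y < 0:
                break
            i, j = int(y), int(x)   # floor to voxel indices
            if i >= ny or j >= nx:
                break
            wepl += rsp[i][j] * step
            t += step
        profile.append(wepl)
    return profile
```

Comparing such profiles computed on CTref and on a corrected CBCT, angle by angle, gives the WEPL-discrepancy statistics quoted in the abstract.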
Inverse scattering and refraction corrected reflection for breast cancer imaging
NASA Astrophysics Data System (ADS)
Wiskin, J.; Borup, D.; Johnson, S.; Berggren, M.; Robinson, D.; Smith, J.; Chen, J.; Parisky, Y.; Klock, John
2010-03-01
Reflection ultrasound (US) has been utilized as an adjunct imaging modality for over 30 years. TechniScan, Inc. has developed unique transmission and concomitant reflection algorithms, which are used to reconstruct images from data gathered during a tomographic breast scanning process called Warm Bath Ultrasound (WBU™). The transmission algorithm yields high-resolution 3D attenuation and speed of sound (SOS) images. The reflection algorithm is based on canonical ray tracing utilizing refraction correction via the SOS and attenuation reconstructions. The refraction-corrected reflection algorithm allows 360-degree compounding, resulting in the reflection image. The requisite data are collected by scanning the entire breast in a 33 °C water bath, in 8 minutes on average. This presentation explains how the data are collected and processed by the 3D transmission and reflection imaging mode algorithms. The processing is carried out using two NVIDIA® Tesla™ GPU processors, accessing data on a 4-terabyte RAID. The WBU™ images are displayed in a DICOM viewer that allows registration of all three modalities. Several representative cases are presented to demonstrate potential diagnostic capability, including a cyst, a fibroadenoma, and a carcinoma. WBU™ images (SOS, attenuation, and reflection modalities) are shown along with their respective mammograms and standard ultrasound images. In addition, anatomical studies are shown comparing WBU™ images and MRI images of a cadaver breast. This innovative technology is designed to provide additional tools in the armamentarium for diagnosis of breast disease.
Fan, Peng; Hutton, Brian F.; Holstensson, Maria; Ljungberg, Michael; Hendrik Pretorius, P.; Prasad, Rameshwar; Liu, Chi; Ma, Tianyu; Liu, Yaqiang; Wang, Shi; Thorn, Stephanie L.; Stacy, Mitchel R.; Sinusas, Albert J.
2015-12-15
Purpose: The energy spectrum for a cadmium zinc telluride (CZT) detector has a low energy tail due to incomplete charge collection and intercrystal scattering. Due to these solid-state detector effects, scatter would be overestimated if the conventional triple-energy window (TEW) method is used for scatter and crosstalk corrections in CZT-based imaging systems. The objective of this work is to develop a scatter and crosstalk correction method for {sup 99m}Tc/{sup 123}I dual-radionuclide imaging for a CZT-based dedicated cardiac SPECT system with pinhole collimators (GE Discovery NM 530c/570c). Methods: A tailing model was developed to account for the low energy tail effects of the CZT detector. The parameters of the model were obtained using {sup 99m}Tc and {sup 123}I point source measurements. A scatter model was defined to characterize the relationship between down-scatter and self-scatter projections. The parameters for this model were obtained from Monte Carlo simulation using SIMIND. The tailing and scatter models were further incorporated into a projection count model, and the primary and self-scatter projections of each radionuclide were determined with a maximum likelihood expectation maximization (MLEM) iterative estimation approach. The extracted scatter and crosstalk projections were then incorporated into MLEM image reconstruction as an additive term in forward projection to obtain scatter- and crosstalk-corrected images. The proposed method was validated using Monte Carlo simulation, a line source experiment, anthropomorphic torso phantom studies, and patient studies. The performance of the proposed method was also compared to that obtained with the conventional TEW method. Results: Monte Carlo simulations and the line source experiment demonstrated that the TEW method overestimated scatter, while the proposed method provided more accurate scatter estimation by considering the low energy tail effect. In the phantom study, improved defect contrasts were
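For reference, the conventional TEW estimate that this work compares against interpolates the scatter under the photopeak from two narrow flanking windows. A minimal sketch of that standard formula follows; the window widths and counts in the usage line are illustrative only.

```python
def tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_photopeak):
    """Conventional triple-energy-window (TEW) scatter estimate.

    Trapezoidal interpolation of the scatter under the photopeak window
    (width w_photopeak) from counts c_lower, c_upper in two narrow
    adjacent windows of widths w_lower, w_upper.  For CZT detectors
    this tends to overestimate scatter, because the low-energy tail of
    the photopeak leaks into the lower window.
    """
    return 0.5 * (c_lower / w_lower + c_upper / w_upper) * w_photopeak

# e.g. 200 counts in a 4 keV lower window, 40 counts in a 4 keV upper
# window, and a 20 keV photopeak window:
print(tew_scatter_estimate(200, 40, 4, 4, 20))  # -> 600.0
```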
NASA Astrophysics Data System (ADS)
Camp, Charles H., Jr.; Lee, Young Jong; Cicerone, Marcus T.
2016-04-01
Coherent anti-Stokes Raman scattering (CARS) microspectroscopy has demonstrated significant potential for biological and materials imaging. To date, however, the primary mechanism of disseminating CARS spectroscopic information is through pseudocolor imagery, which explicitly neglects a vast majority of the hyperspectral data. Furthermore, current paradigms in CARS spectral processing do not lend themselves to quantitative sample-to-sample comparability. The primary limitation stems from the need to accurately measure the so-called nonresonant background (NRB) that is used to extract the chemically-sensitive Raman information from the raw spectra. Measurement of the NRB on a pixel-by-pixel basis is a nontrivial task; thus, reference NRB from glass or water are typically utilized, resulting in error between the actual and estimated amplitude and phase. In this manuscript, we present a new methodology for extracting the Raman spectral features that significantly suppresses these errors through phase detrending and scaling. Classic methods of error-correction, such as baseline detrending, are demonstrated to be inaccurate and to simply mask the underlying errors. The theoretical justification is presented by re-developing the theory of phase retrieval via the Kramers-Kronig relation, and we demonstrate that these results are also applicable to maximum entropy method-based phase retrieval. This new error-correction approach is experimentally applied to glycerol spectra and tissue images, demonstrating marked consistency between spectra obtained using different NRB estimates, and between spectra obtained on different instruments. Additionally, in order to facilitate implementation of these approaches, we have made many of the tools described herein available free for download.
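The Kramers-Kronig phase retrieval underlying this approach can be sketched with a discrete Hilbert transform. This is the textbook relation (spectral phase from the log of the measured-to-NRB intensity ratio), not the authors' phase-detrending and scaling procedure, and it ignores the windowing and padding needed on real, finite spectra.

```python
import numpy as np

def hilbert_transform(x):
    # Discrete Hilbert transform via the FFT (assumes periodic boundaries):
    # the imaginary part of the analytic signal of x.
    n = x.size
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.imag(np.fft.ifft(spec * h))

def kk_phase(i_cars, i_nrb):
    # Kramers-Kronig estimate of the CARS spectral phase from the measured
    # intensity i_cars and a nonresonant background reference i_nrb.
    log_amp = 0.5 * np.log(np.asarray(i_cars, float) / np.asarray(i_nrb, float))
    return hilbert_transform(log_amp)
```

An amplitude or phase error in the NRB estimate enters through `log_amp`, which is exactly the error source the detrending and scaling corrections in the paper target.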
[Correction Method of Atmospheric Scattering Effect Based on Three Spectrum Bands].
Ye, Han-han; Wang, Xian-hua; Jiang, Xin-hua; Bu, Ting-ting
2016-03-01
As a major error source in CO2 retrieval, atmospheric scattering hampers the application of satellite products. Aerosol effects and the combined effects of aerosol and the ground surface are important contributors to atmospheric scattering, so the scattering from both must be considered together. Based on the continuum, strong-absorption and weak-absorption parts of the three spectral bands O2-A, CO2 1.6 μm and CO2 2.06 μm, information on aerosol and albedo was analyzed, and an improved full-physics retrieval method was proposed that retrieves aerosol and albedo simultaneously to correct the scattering effect. A simulation study was carried out on the CO2 error caused by aerosol and ground surface albedo, and on the residual CO2 error after applying the correction method. The CO2 error caused by aerosol optical depth and ground surface albedo can reach up to 8%, and the error caused by different aerosol types can reach up to 10%, while these two types of error can be controlled within 1% and 2%, respectively, by the correction method, showing that the method corrects the scattering effect effectively. Evaluation of the results shows the clear potential of this method for high-precision satellite data retrieval; some problems that require attention in real applications are also pointed out.
Methods for correcting microwave scattering and emission measurements for atmospheric effects
NASA Technical Reports Server (NTRS)
Komen, M. (Principal Investigator)
1975-01-01
The author has identified the following significant results. Algorithms were developed to permit correction of scattering coefficient and brightness temperature for the Skylab S193 Radscat for the effects of cloud attenuation. These algorithms depend upon a measurement of the vertically polarized excess brightness temperature at 50 deg incidence angle. This excess temperature is converted to an equivalent 50 deg attenuation, which may then be used to estimate the horizontally polarized excess brightness temperature and reduced scattering coefficient at 50 deg. For angles other than 50 deg, the correction also requires use of the variation of emissivity with salinity and water temperature.
Xu, Yuan; Bai, Ti; Yan, Hao; Ouyang, Luo; Pompos, Arnold; Wang, Jing; Zhou, Linghong; Jiang, Steve B; Jia, Xun
2015-05-07
Cone-beam CT (CBCT) has become the standard image guidance tool for patient setup in image-guided radiation therapy. However, due to its large illumination field, scattered photons severely degrade its image quality. While kernel-based scatter correction methods have been used routinely in the clinic, it is still desirable to develop Monte Carlo (MC) simulation-based methods due to their accuracy. However, the high computational burden of the MC method has prevented routine clinical application. This paper reports our recent development of a practical method of MC-based scatter estimation and removal for CBCT. In contrast with conventional MC approaches that estimate scatter signals using a scatter-contaminated CBCT image, our method used a planning CT image for MC simulation, which has the advantages of accurate image intensity and absence of image truncation. In our method, the planning CT was first rigidly registered with the CBCT. Scatter signals were then estimated via MC simulation. After scatter signals were removed from the raw CBCT projections, a corrected CBCT image was reconstructed. The entire workflow was implemented on a GPU platform for high computational efficiency. Strategies such as projection denoising, CT image downsampling, and interpolation along the angular direction were employed to further enhance the calculation speed. We studied the impact of key parameters in the workflow on the resulting accuracy and efficiency, based on which the optimal parameter values were determined. Our method was evaluated in numerical simulation, phantom, and real patient cases. In the simulation cases, our method reduced mean HU errors from 44 to 3 HU and from 78 to 9 HU in the full-fan and the half-fan cases, respectively. In both the phantom and the patient cases, image artifacts caused by scatter, such as ring artifacts around the bowtie area, were reduced. With all the techniques employed, we achieved computation time of less than 30 s including the
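The correction step itself (as distinct from the planning-CT registration, MC simulation, and GPU machinery described above) reduces to subtracting the estimated scatter from each raw projection before the log transform. A minimal sketch, where `i0` (the unattenuated flood-field intensity) and the positivity floor are assumptions of this illustration:

```python
import numpy as np

def scatter_corrected_line_integrals(raw_proj, scatter_est, i0, floor=1e-6):
    # Subtract the (e.g. MC-estimated) scatter from the raw projection,
    # clamp to a small positive floor so the logarithm stays defined,
    # and convert to line integrals for the reconstructor (FDK or iterative).
    primary = np.maximum(np.asarray(raw_proj, float) - scatter_est, floor)
    return -np.log(primary / i0)
```

Everything else in the workflow (denoising, downsampling, angular interpolation) serves to make `scatter_est` cheap and smooth enough for routine use.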
Effective field theory for large logarithms in radiative corrections to electron proton scattering
NASA Astrophysics Data System (ADS)
Hill, Richard J.
2017-01-01
Radiative corrections to elastic electron proton scattering are analyzed in effective field theory. A new factorization formula identifies all sources of large logarithms in the limit of large momentum transfer, Q2≫me2. Explicit matching calculations are performed through two-loop order. A renormalization analysis in soft-collinear effective theory is performed to systematically compute and resum large logarithms. Implications for the extraction of charge radii and other observables from scattering data are discussed. The formalism may be applied to other lepton-nucleon scattering and e+e- annihilation processes.
Scatter correction method for x-ray CT using primary modulation: Phantom studies
Gao, Hewei; Fahrig, Rebecca; Bennett, N. Robert; Sun, Mingshan; Star-Lack, Josh; Zhu, Lei
2010-01-01
Purpose: Scatter correction is a major challenge in x-ray imaging using large area detectors. Recently, the authors proposed a promising scatter correction method for x-ray computed tomography (CT) using primary modulation. Proof of concept was previously illustrated by Monte Carlo simulations and physical experiments on a small phantom with a simple geometry. In this work, the authors provide a quantitative evaluation of the primary modulation technique and demonstrate its performance in applications where scatter correction is more challenging. Methods: The authors first analyze the potential errors of the estimated scatter in the primary modulation method. On two tabletop CT systems, the method is investigated using three phantoms: A Catphan©600 phantom, an anthropomorphic chest phantom, and the Catphan©600 phantom with two annuli. Two different primary modulators are also designed to show the impact of the modulator parameters on the scatter correction efficiency. The first is an aluminum modulator with a weak modulation and a low modulation frequency, and the second is a copper modulator with a strong modulation and a high modulation frequency. Results: On the Catphan©600 phantom in the first study, the method reduces the error of the CT number in the selected regions of interest (ROIs) from 371.4 to 21.9 Hounsfield units (HU); the contrast to noise ratio also increases from 10.9 to 19.2. On the anthropomorphic chest phantom in the second study, which represents a more difficult case due to the high scatter signals and object heterogeneity, the method reduces the error of the CT number from 327 to 19 HU in the selected ROIs and from 31.4% to 5.7% on the overall average. The third study is to investigate the impact of object size on the efficiency of our method. The scatter-to-primary ratio estimation error on the Catphan©600 phantom without any annulus (20 cm in diameter) is at the level of 0.04, it rises to 0.07 and 0.1 on the phantom with an elliptical
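The core idea of primary modulation can be illustrated with a toy two-pixel version: a pixel behind a strongly attenuating modulator region and a neighbor behind a weakly attenuating region see the same (locally constant) primary p and scatter s, but with different known modulator transmissions. This is only the algebraic kernel of the method, not the authors' Fourier-domain algorithm, and the transmission values below are invented for illustration.

```python
def separate_primary_scatter(y_strong, y_weak, m_strong, m_weak):
    # Model: y = p * m + s, where the primary p is attenuated by the known
    # modulator transmission m while the scatter s is (approximately)
    # unmodulated.  Two neighboring pixels give two equations in (p, s).
    p = (y_strong - y_weak) / (m_strong - m_weak)
    s = y_strong - p * m_strong
    return p, s

# primary 2.0, scatter 0.5, transmissions 0.6 (strong) and 0.9 (weak):
p, s = separate_primary_scatter(1.7, 2.3, 0.6, 0.9)
```

The stronger the modulation contrast (m_weak - m_strong) and the higher its spatial frequency, the better the separation, which is why the copper modulator in the study outperforms the aluminum one.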
Radiative Corrections for Lepton-Proton Scattering: When the Mass Matters
NASA Astrophysics Data System (ADS)
Afanasev, Andrei
2015-04-01
Radiative correction procedures for electron-proton and muon-proton scattering are well established under the assumption that the leptons can be treated in an ultra-relativistic approximation. The MUSE experiment at PSI and the COMPASS experiment at CERN have entered kinematic regions where the explicit dependence of radiative corrections on the lepton mass becomes important. MUSE will study the scattering of muons with momenta of the order of 100 MeV/c, so lepton mass corrections are important over the entire kinematic domain. The COMPASS experiment uses scattering of 100 GeV/c muons, and muon mass effects are especially relevant in the quasi-real photoproduction limit, Q2 → 0. A dedicated Monte Carlo generator of radiative events is being developed for MUSE, which also includes effects of interference between lepton and proton bremsstrahlung. Parts of the radiative corrections are expected to be suppressed for muons due to the larger muon mass. Two-photon exchange corrections are generally expected to be small and should be similar for electrons and muons. We classify the radiative corrections into two categories, C-even and C-odd under lepton charge reversal, and discuss their roles separately for the above experiments.
NASA Astrophysics Data System (ADS)
Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna, T.; Mykkeltveit, S.
2017-01-01
Declared North Korean nuclear tests in 2006, 2009, 2013 and 2016 were observed seismically at regional and teleseismic distances. Waveform similarity allows the events to be located relatively with far greater accuracy than the absolute locations can be determined from seismic data alone. There is now significant redundancy in the data given the large number of regional and teleseismic stations that have recorded multiple events, and relative location estimates can be confirmed independently by performing calculations on many mutually exclusive sets of measurements. Using a 1-D global velocity model, the distances between the events estimated using teleseismic P phases are found to be approximately 25 per cent shorter than the distances between events estimated using regional Pn phases. The 2009, 2013 and 2016 events all take place within 1 km of each other and the discrepancy between the regional and teleseismic relative location estimates is no more than about 150 m. The discrepancy is much more significant when estimating the location of the more distant 2006 event relative to the later explosions with regional and teleseismic estimates varying by many hundreds of metres. The relative location of the 2006 event is challenging given the smaller number of observing stations, the lower signal-to-noise ratio and significant waveform dissimilarity at some regional stations. The 2006 event is however highly significant in constraining the absolute locations in the terrain at the Punggye-ri test-site in relation to observed surface infrastructure. For each seismic arrival used to estimate the relative locations, we define a slowness scaling factor which multiplies the gradient of seismic traveltime versus distance, evaluated at the source, relative to the applied 1-D velocity model. A procedure for estimating correction terms which reduce the double-difference time residual vector norms is presented together with a discussion of the associated uncertainty. The modified
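The relative-location step can be sketched as a small least-squares problem: each differential arrival time between two events is approximated by the dot product of the station's horizontal slowness vector (times a slowness scaling factor, as in the study) with the inter-event offset. The slowness vectors and scale in the test values are illustrative, not figures from the paper.

```python
import numpy as np

def relative_location(slowness_vectors, dt_obs, scale=1.0):
    # Solve dt_k ~= scale * (u_k . dx) for the horizontal offset dx
    # between two events, given horizontal slowness vectors u_k (s/km)
    # at each station and observed differential times dt_k (s).
    U = scale * np.asarray(slowness_vectors, float)
    dx, *_ = np.linalg.lstsq(U, np.asarray(dt_obs, float), rcond=None)
    return dx
```

Because a common scale factor multiplies every row of U, scaling the slownesses by 1/f stretches the estimated offsets by f, which is one way the ~25 per cent regional-versus-teleseismic distance discrepancy described above can arise from the velocity model rather than from the data.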
ERIC Educational Resources Information Center
Sheen, Younghee; Wright, David; Moldawa, Anna
2009-01-01
Building on Sheen's (2007) study of the effects of written corrective feedback (CF) on the acquisition of English articles, this article investigated whether direct focused CF, direct unfocused CF and writing practice alone produced differential effects on the accurate use of grammatical forms by adult ESL learners. Using six intact adult ESL…
Scatter correction for cone-beam computed tomography using moving blocker strips
NASA Astrophysics Data System (ADS)
Wang, Jing; Mao, Weihua; Solberg, Timothy
2011-03-01
One well-recognized challenge of cone-beam computed tomography (CBCT) is the presence of scatter contamination within the projection images. Scatter degrades the CBCT image quality by decreasing the contrast, introducing shading artifacts and leading to inaccuracies in the reconstructed CT number. We propose a blocker-based approach to simultaneously estimate the scatter signal and reconstruct the complete volume within the field of view (FOV) from a single CBCT scan. A physical strip attenuator (i.e., "blocker"), consisting of lead strips, is inserted between the x-ray source and the patient. The blocker moves back and forth along the z-axis during the gantry rotation. The two-dimensional (2D) scatter fluence is estimated by interpolating the signal from the blocked regions. A modified Feldkamp-Davis-Kress (FDK) algorithm and an iterative reconstruction based on constrained optimization are used to reconstruct CBCT images from un-blocked projection data after the scatter signal is subtracted. An experimental study is performed to evaluate the performance of the proposed scatter correction scheme. The scatter-induced shading/cupping artifacts are substantially reduced in CBCT using the proposed strategy. In the experimental study using a CatPhan©600 phantom, CT number errors in the selected regions of interest are reduced from 256 to less than 20. The proposed method allows us to simultaneously estimate the scatter signal in projection data, reduce the imaging dose and obtain complete volumetric information within the FOV.
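The scatter-estimation step of a blocker-based scheme reduces to interpolating the signal measured behind the strips across the open regions. A one-row sketch follows; the actual method interpolates the 2D fluence and accounts for the blocker motion between projections.

```python
import numpy as np

def estimate_scatter_row(row, blocked):
    # Pixels behind the lead strips receive (almost) only scatter;
    # linearly interpolate those samples across the unblocked pixels
    # to obtain a smooth scatter estimate for the whole detector row.
    row = np.asarray(row, float)
    idx = np.arange(row.size)
    return np.interp(idx, idx[blocked], row[blocked])
```

The interpolated estimate is then subtracted from the unblocked projection data before FDK or iterative reconstruction, as described above.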
Interference detection and correction applied to incoherent-scatter radar power spectrum measurement
NASA Technical Reports Server (NTRS)
Ying, W. P.; Mathews, J. D.; Rastogi, P. K.
1986-01-01
A median-filter-based interference detection and correction technique is evaluated, and its application to Arecibo incoherent-scatter radar D-region ionospheric power spectra is discussed. The method can be extended to other kinds of data as long as the statistics involved in the process remain valid.
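A generic version of median-filter-based interference rejection can be sketched as follows: samples that deviate from a running median by more than a robust (MAD-based) threshold are flagged as interference and replaced by the median value. The kernel width and threshold here are assumptions of the sketch, not the paper's parameters.

```python
import numpy as np

def running_median(x, k):
    # Running median with edge replication; k must be odd.
    xp = np.pad(x, k // 2, mode="edge")
    return np.median(np.lib.stride_tricks.sliding_window_view(xp, k), axis=1)

def correct_interference(spectrum, k=5, n_sigma=4.0):
    # Flag samples deviating from the running median by more than n_sigma
    # robust standard deviations (MAD * 1.4826) and replace them with the
    # local median; returns the corrected spectrum and the flag mask.
    spectrum = np.asarray(spectrum, float)
    med = running_median(spectrum, k)
    resid = spectrum - med
    sigma = 1.4826 * np.median(np.abs(resid))
    bad = np.abs(resid) > n_sigma * sigma
    out = spectrum.copy()
    out[bad] = med[bad]
    return out, bad
```

A narrow interference spike barely shifts the running median, so it stands out in the residual even when the underlying spectrum varies slowly.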
QCD CORRECTIONS TO DILEPTON PRODUCTION NEAR PARTONIC THRESHOLD IN PP SCATTERING.
SHIMIZU, H.; STERMAN, G.; VOGELSANG, W.; YOKOYA, H.
2005-10-02
We present a recent study of the QCD corrections to dilepton production near partonic threshold in transversely polarized {bar p}p scattering. We analyze the role of the higher-order perturbative QCD corrections in terms of the available fixed-order contributions as well as of all-order soft-gluon resummations for the kinematical regime of proposed experiments at GSI-FAIR. We find that perturbative corrections are large for both unpolarized and polarized cross sections, but that the spin asymmetries are stable. The role of the far-infrared region of the momentum integral in the resummed exponent and the effect of the NNLL resummation are briefly discussed.
Lu, Yong-jun; Qu, Yan-ling; Feng, Zhi-qing; Song, Min
2007-01-01
The multiple scattering correction (MSC) algorithm can be used effectively to remove the effect of scattering due to physical factors such as the density and humidity of the sample granules, greatly improving the signal-to-noise ratio. Meanwhile, the correlation spectrum plays an important role in choosing the optimum wavelength set because it describes the linear correlation between the absorbance and the concentration of the sample ingredient under analysis. However, the correlation spectrum obtained by unitary linear regression (ULR) at a single wavelength channel is easily affected by scattering, which can obscure the characteristic linear information of the sample. To solve this problem, in the present paper MSC was applied to extract the useful signal and suppress the noise in the correlation spectrum. A careful calibration experiment on a ginseng sample proved this idea correct, and a satisfactory result was obtained.
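Standard MSC regresses each measured spectrum against a reference (conventionally the mean spectrum of the set) and removes the fitted offset and slope; a minimal sketch:

```python
import numpy as np

def msc(spectra, reference=None):
    # Multiplicative/multiple scatter correction: fit x ~= a + b * ref for
    # each spectrum x, then return (x - a) / b.  This removes additive
    # baseline shifts (a) and multiplicative scatter effects (b).
    X = np.asarray(spectra, float)
    ref = X.mean(axis=0) if reference is None else np.asarray(reference, float)
    out = np.empty_like(X)
    for i, x in enumerate(X):
        b, a = np.polyfit(ref, x, 1)   # slope b, intercept a
        out[i] = (x - a) / b
    return out
```

After this pretreatment, spectra of the same composition that differ only by scatter-induced offset and slope collapse onto the reference, which is what sharpens the subsequent correlation spectrum.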
NASA Astrophysics Data System (ADS)
Hassanein, René; de Beer, Frikkie; Kardjilov, Nikolay; Lehmann, Eberhard
2006-11-01
A precise quantitative analysis with the neutron radiography technique of materials with a high neutron-scattering cross section, imaged at small distances from the detector, is impossible if the scattering contribution from the investigated material onto the detector is not eliminated in the right way. Samples with a high neutron-scattering cross section, e.g. hydrogenous materials such as water, cause a significant scattering component in their radiographs. Background scattering, spectral effects and detector characteristics are identified as additional causes of disturbances. A scattering correction algorithm based on Monte Carlo simulations has been developed and implemented to take these effects into account. The corrected radiographs can be used for a subsequent tomographic reconstruction. From the results one can obtain quantitative information, in order to detect e.g. inhomogeneity patterns within materials, or to measure differences in the mass thickness of these materials. Within an IAEA-CRP collaboration the algorithms have been tested for applicability on results obtained at the South African SANRAD facility at Necsa, the Swiss NEUTRA facility at PSI and the German CONRAD facility at HMI, all with different initial neutron spectra. Results of a set of dedicated neutron radiography experiments are reported.
Scatter correction for x-ray conebeam CT using one-dimensional primary modulation
NASA Astrophysics Data System (ADS)
Zhu, Lei; Gao, Hewei; Bennett, N. Robert; Xing, Lei; Fahrig, Rebecca
2009-02-01
Recently, we developed an efficient scatter correction method for x-ray imaging using primary modulation. A two-dimensional (2D) primary modulator with spatially variant attenuating materials is inserted between the x-ray source and the object to separate primary and scatter signals in the Fourier domain. Due to the high modulation frequency in both directions, the 2D primary modulator has a strong scatter correction capability for objects with arbitrary geometries. However, signal processing on the modulated projection data requires knowledge of the modulator position and attenuation. In practical systems, mainly due to system gantry vibration, beam hardening effects and the ramp-filtering in the reconstruction, the insertion of the 2D primary modulator results in artifacts such as rings in the CT images, if no post-processing is applied. In this work, we eliminate the source of artifacts in the primary modulation method by using a one-dimensional (1D) modulator. The modulator is aligned parallel to the ramp-filtering direction to avoid error magnification, while sufficient primary modulation is still achieved for scatter correction on a quasicylindrical object, such as a human body. The scatter correction algorithm is also greatly simplified for the convenience and stability in practical implementations. The method is evaluated on a clinical CBCT system using the Catphan© 600 phantom. The result shows effective scatter suppression without introducing additional artifacts. In the selected regions of interest, the reconstruction error is reduced from 187.2 HU to 10.0 HU if the proposed method is used.
Exchange current corrections to neutrino-nucleus scattering. I. Nuclear matter
NASA Astrophysics Data System (ADS)
Umino, Y.; Udias, J. M.
1995-12-01
Relativistic exchange current corrections to the impulse approximation in low and intermediate energy neutrino-nucleus scattering are presented assuming nonvanishing strange quark form factors for constituent nucleons. Two-body exchange current operators which treat all SU(3) vector and axial currents on an equal footing are constructed by generalizing the soft-pion dominance method of Chemtob and Rho. For charged current reactions, exchange current corrections can reduce the impulse approximation results by 5 to 10%, depending on the nuclear density. A finite strange quark form factor may change the total cross section for neutral current scattering by 20%, while exchange current corrections are found to be sensitive to the nuclear density. Implications for the current LSND experiment to extract the strange quark axial form factor of the nucleon are discussed.
Lu, Yong-jun; Qu, Yan-ling; Song, Min
2007-05-01
Correlation spectroscopy can be used to describe the linear correlation between absorbance and concentration data over the whole spectral range and to clearly identify the characteristic peak positions of the sample under test. This chart also plays an extremely important role in offering precise information for choosing the optimal wavelength set during the calibration process. Multiplicative scatter correction (MSC) is a multivariate scatter correction technique that can effectively remove the baseline shift and tilt caused by light scattering. As a result, the signal-to-noise ratio is improved greatly. Based on this feature, the MSC technique was introduced into the data pretreatment preceding the creation of the correlation chart, and careful experiments proved this idea to be correct and effective.
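The MSC pretreatment described above can be sketched generically: each spectrum is regressed against a reference spectrum and the fitted offset (baseline shift) and slope (tilt) are removed. This is an illustrative sketch, not the authors' implementation; the synthetic data are invented.

```python
import numpy as np

def msc(spectra, reference=None):
    """Multiplicative scatter correction: regress each spectrum against a
    reference (by default the batch mean) and remove offset and slope."""
    spectra = np.asarray(spectra, dtype=float)
    ref = spectra.mean(axis=0) if reference is None else np.asarray(reference, float)
    corrected = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        # Fit s ≈ a + b * ref, then undo: corrected = (s - a) / b
        b, a = np.polyfit(ref, s, 1)
        corrected[i] = (s - a) / b
    return corrected

# Synthetic demo: one "true" absorbance profile plus two copies distorted by
# an additive baseline shift and a multiplicative tilt (scatter effects).
x = np.linspace(0, 1, 50)
true = np.exp(-((x - 0.5) ** 2) / 0.02)
batch = np.array([true, 0.3 + 1.5 * true, -0.2 + 0.8 * true])
corrected = msc(batch, reference=true)
```

On real near-infrared data the reference is usually the mean spectrum of the calibration set rather than a known truth.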
Constrained gamma-Z interference corrections to parity-violating electron scattering
Hall, Nathan Luke; Blunden, Peter Gwithian; Melnitchouk, Wally; Thomas, Anthony W.; Young, Ross D.
2013-07-01
We present a comprehensive analysis of gamma-Z interference corrections to the weak charge of the proton measured in parity-violating electron scattering, including a survey of existing models and a critical analysis of their uncertainties. Constraints from parton distributions in the deep-inelastic region, together with new data on parity-violating electron scattering in the resonance region, result in significantly smaller uncertainties on the corrections compared to previous estimates. At the kinematics of the Qweak experiment, we determine the gamma-Z box correction to be Re □_{γZ}^V = (5.61 ± 0.36) × 10^{-3}. The new constraints also allow precise predictions to be made for parity-violating deep-inelastic asymmetries on the deuteron.
Accurate elevation and normal moveout corrections of seismic reflection data on rugged topography
Liu, J.; Xia, J.; Chen, C.; Zhang, G.
2005-01-01
The application of the seismic reflection method is often limited in areas of complex terrain. The problem is the incorrect correction of time shifts caused by topography. To apply normal moveout (NMO) correction to reflection data correctly, static corrections must be applied in advance to compensate for the time distortions of topography and the time delays from near-surface weathered layers. For environmental and engineering investigations, weathered layers are our targets, so the static correction mainly serves to adjust time shifts due to an undulating surface. In practice, seismic reflected raypaths are assumed to be almost vertical through the near-surface layers because these have much lower velocities than the layers below. This assumption is acceptable in most cases since it results in little residual error for small elevation changes and small offsets in reflection events. Although static algorithms based on choosing a floating datum related to common midpoint gathers or residual surface-consistent functions are available and effective, errors caused by the assumption of vertical raypaths often generate pseudo-indications of structures. This paper presents a comparison of corrections based on vertical raypaths and on biased (non-vertical) raypaths. It also provides an approach for combining elevation and NMO corrections. The advantages of the approach are demonstrated on synthetic and real-world examples of multi-coverage seismic reflection surveys on rough topography. © The Royal Society of New Zealand 2005.
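The two corrections being combined above can be sketched under the vertical-raypath assumption: an elevation static shifts each trace by the weathered-layer traveltime to the datum, and the hyperbolic NMO equation removes offset-dependent moveout. A minimal sketch with illustrative numbers, not values from the paper:

```python
import numpy as np

def elevation_static(elevation_m, datum_m, v_weathering_mps):
    """Vertical-raypath static (s): time to move the station to the datum."""
    return (elevation_m - datum_m) / v_weathering_mps

def nmo_time(t0_s, offset_m, v_rms_mps):
    """Hyperbolic normal-moveout traveltime t(x) = sqrt(t0^2 + (x/v)^2)."""
    return np.sqrt(t0_s ** 2 + (offset_m / v_rms_mps) ** 2)

# Example: station 20 m above the datum over a slow (500 m/s) weathered layer.
shift = elevation_static(20.0, 0.0, 500.0)   # static correction in seconds
t = nmo_time(0.5, 1000.0, 2000.0)            # traveltime at 1 km offset
nmo_correction = t - 0.5                     # moveout to be removed
```

In processing, the static shift is applied first so that the NMO hyperbola is referenced to a common datum.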
Park, Yang-Kyun; Sharp, Gregory C.; Phillips, Justin; Winey, Brian A.
2015-08-15
Purpose: To demonstrate the feasibility of proton dose calculation on scatter-corrected cone-beam computed tomographic (CBCT) images for the purpose of adaptive proton therapy. Methods: CBCT projection images were acquired from anthropomorphic phantoms and a prostate patient using the on-board imaging system of an Elekta Infinity linear accelerator. Two previously introduced techniques were used to correct the scattered x-rays in the raw projection images: uniform scatter correction (CBCT{sub us}) and a priori CT-based scatter correction (CBCT{sub ap}). CBCT images were reconstructed using a standard FDK algorithm and a GPU-based reconstruction toolkit. Soft-tissue ROI-based HU shifting was used to improve the HU accuracy of the uncorrected CBCT images and CBCT{sub us}, while no HU change was applied to the CBCT{sub ap}. The degree of equivalence of the corrected CBCT images with respect to the reference CT image (CT{sub ref}) was evaluated by using angular profiles of water equivalent path length (WEPL) and passively scattered proton treatment plans. The CBCT{sub ap} was further evaluated in more realistic scenarios such as rectal filling and weight loss to assess the effect of mismatched prior information on the corrected images. Results: The uncorrected CBCT and CBCT{sub us} images demonstrated substantial WEPL discrepancies (7.3 ± 5.3 mm and 11.1 ± 6.6 mm, respectively) with respect to the CT{sub ref}, while the CBCT{sub ap} images showed substantially reduced WEPL errors (2.4 ± 2.0 mm). Similarly, the CBCT{sub ap}-based treatment plans demonstrated a high pass rate (96.0% ± 2.5% with 2 mm/2% criteria) in a 3D gamma analysis. Conclusions: The a priori CT-based scatter correction technique was shown to be promising for adaptive proton therapy, as it achieved proton dose distributions and water equivalent path lengths equivalent to those of a reference CT in a selection of anthropomorphic phantoms.
NASA Astrophysics Data System (ADS)
Yang, Kai; Burkett, George, Jr.; Boone, John M.
2014-11-01
The purpose of this research was to develop a method to correct the cupping artifact caused by x-ray scattering and to achieve consistent Hounsfield unit (HU) values of breast tissues for a dedicated breast CT (bCT) system. The use of a beam passing array (BPA) composed of parallel holes has been previously proposed for scatter correction in various imaging applications. In this study, we first verified the efficacy and accuracy of using BPA to measure the scatter signal on a cone-beam bCT system. A systematic scatter correction approach was then developed by modeling the scatter-to-primary ratio (SPR) in projection images acquired with and without BPA. To quantitatively evaluate the improved accuracy of HU values, different breast tissue-equivalent phantoms were scanned and radially averaged HU profiles through reconstructed planes were evaluated. The dependency of the correction method on object size and number of projections was studied. A simplified application of the proposed method on five clinical patient scans was performed to demonstrate efficacy. For the typical 10-18 cm breast diameters seen in the bCT application, the proposed method can effectively correct for the cupping artifact and reduce the variation of HU values of breast-equivalent material from 150 to 40 HU. The measured HU values of 100% glandular tissue, 50/50 glandular/adipose tissue, and 100% adipose tissue were approximately 46, -35, and -94, respectively. It was found that only six BPA projections were necessary to accurately implement this method, and the additional dose requirement is less than 1% of the exam dose. The proposed method can effectively correct for the cupping artifact caused by x-ray scattering and retain consistent HU values of breast tissues.
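Once an SPR model is available, the correction itself reduces, in its simplest form, to dividing each measured projection value by one plus the locally modeled scatter-to-primary ratio. A toy sketch of that final step (the actual SPR model derived from BPA measurements is not reproduced here):

```python
import numpy as np

def correct_scatter(measured, spr):
    """Remove scatter given a scatter-to-primary ratio (SPR) estimate:
    measured = primary * (1 + SPR)  =>  primary = measured / (1 + SPR)."""
    return np.asarray(measured, float) / (1.0 + np.asarray(spr, float))

# Demo projection: a known primary signal plus 50% scatter everywhere.
primary = np.array([100.0, 80.0, 60.0])
measured = primary * 1.5
recovered = correct_scatter(measured, spr=0.5)
```

In practice the SPR varies with position, object size and the number of BPA projections used to sample it.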
Two-photon exchange corrections in elastic lepton-proton scattering at small momentum transfer
NASA Astrophysics Data System (ADS)
Tomalak, Oleksandr; Vanderhaeghen, Marc
2016-03-01
In recent years, elastic electron-proton scattering experiments, with and without polarized protons, gave strikingly different results for the ratio of the electric to magnetic proton form factors. A mysterious discrepancy ("the proton radius puzzle") has been observed in the measurement of the proton charge radius in muon spectroscopy experiments versus electron spectroscopy and electron scattering. Two-photon exchange (TPE) contributions are the largest source of hadronic uncertainty in these experiments. We compare the existing models of the elastic contribution to the TPE correction in lepton-proton scattering. A subtracted dispersion relation formalism for the TPE in electron-proton scattering has been developed and tested. Its relative effect on the cross section is in the 1-2% range for low values of the momentum transfer. An alternative dispersive evaluation of the TPE correction to the hydrogen hyperfine splitting was found and applied. For the inelastic TPE contribution, the low momentum transfer expansion was studied. Together with the elastic TPE, it describes the experimental TPE fit to electron data quite well. For the forthcoming muon-proton scattering experiment (MUSE) the resulting TPE was found to be in the 0.5-1% range, which is the planned accuracy goal.
Evaluation of an erbium modulator in x-ray scatter correction using primary modulation
NASA Astrophysics Data System (ADS)
Gao, Hewei; Niu, Tianye; Zhu, Lei; Fahrig, Rebecca
2011-03-01
A primary modulator made of erbium is evaluated in X-ray scatter correction using primary modulation. Our early studies have shown that erbium is the optimal modulator material for an X-ray cone-beam computed tomography (CBCT) system operated at 120 kVp, exhibiting minimum beam hardening which otherwise weakens the modulator's ability to separate scatter from primary. In this work, the accuracy of scatter correction is compared for two copper modulators (105 and 210 μm of thickness) and one erbium modulator (25.4 μm of thickness) with the same modulation frequencies. The variations in the effective transmission factors of these three modulators as functions of object filtrations are first measured to show the magnitudes of beam hardening caused by the modulators themselves. Their scatter correction performances are then tested using a Catphan©600 phantom on our tabletop CBCT system. With and without 300 μm of copper in the beam, the measured variations for these three modulators are 4.3%, 7.8%, and 0.9%, respectively. Using the 105- and 210-μm copper modulators, our scatter correction method reduces the average CT number error from 327.3 Hounsfield units (HU) to 19.4 and 20.9 HU in the selected regions of interest, and enhances the contrast-to-noise ratio (CNR) from 10.7 to 16.5 and 15.9, respectively. With the 25.4-μm erbium modulator, the CT number error is markedly reduced to 2.8 HU and the CNR is further increased to 17.4.
Accurate measurement of the x-ray coherent scattering form factors of tissues
NASA Astrophysics Data System (ADS)
King, Brian W.
The material-dependent x-ray scattering properties of tissues are determined by their scattering form factors, measured as a function of the momentum transfer argument, x. Incoherent scattering form factors, Finc, are calculable for all values of x, while coherent scattering form factors, Fcoh, cannot be calculated except at large x because of their dependence on long range order. As a result, measuring Fcoh is very important to the developing field of x-ray scatter imaging. Previous measurements of Fcoh, based on crystallographic techniques, have shown significant variability, as these methods are not optimal for amorphous materials. Two methods of measuring Fcoh, designed with amorphous materials in mind, are developed in this thesis. An angle-dispersive technique is developed that uses a polychromatic x-ray beam and a large area, energy-insensitive detector. It is shown that Fcoh can be measured in this system if the incident x-ray spectrum is known. The problem is ill-conditioned for typical x-ray spectra and two numerical methods of dealing with the poor conditioning are explored. It is shown that these techniques work best with K-edge filters to limit the spectral width and that the accuracy degrades for strongly ordered materials. Measurements of Fcoh for water samples are made using 50, 70 and 92 kVp spectra. The average absolute relative difference in Fcoh between our results and the literature for water is approximately 10-15%. Similar measurements for fat samples were made and found to be qualitatively similar to results in the literature, although there is very large variation between the literature values in this case. The angle-dispersive measurement is limited to low resolution measurements of the coherent scattering form factor, although it is more accessible than traditional measurements because of the relatively commonplace equipment requirements. An energy-dispersive technique is also developed that uses a polychromatic x-ray beam and an
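Ill-conditioned spectral unfolding problems of the kind mentioned above are commonly stabilized by regularization. A generic Tikhonov sketch on a synthetic smoothing operator; this particular scheme is an assumption for illustration, not one of the thesis's two numerical methods:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Regularized least squares: minimize ||A f - b||^2 + lam ||f||^2 via
    the normal equations (A^T A + lam I) f = A^T b."""
    A = np.asarray(A, float)
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Demo: a Gaussian smoothing (hence poorly conditioned) forward operator.
n = 40
x = np.linspace(0, 1, n)
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.005)
f_true = np.sin(2 * np.pi * x) ** 2
b = A @ f_true                      # noise-free synthetic measurement
f_hat = tikhonov_solve(A, b, lam=1e-8)
```

The regularization parameter trades fidelity against amplification of the near-null-space components that make the unregularized inversion unstable.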
Maltz, Jonathan S; Gangadharan, Bijumon; Bose, Supratik; Hristov, Dimitre H; Faddegon, Bruce A; Paidi, Ajay; Bani-Hashemi, Ali R
2008-12-01
Quantitative reconstruction of cone beam X-ray computed tomography (CT) datasets requires accurate modeling of scatter, beam-hardening, beam profile, and detector response. Typically, commercial imaging systems use fast empirical corrections that are designed to reduce visible artifacts due to incomplete modeling of the image formation process. In contrast, Monte Carlo (MC) methods are much more accurate but are relatively slow. Scatter kernel superposition (SKS) methods offer a balance between accuracy and computational practicality. We show how a single SKS algorithm can be employed to correct both kilovoltage (kV) energy (diagnostic) and megavoltage (MV) energy (treatment) X-ray images. Using MC models of kV and MV imaging systems, we map intensities recorded on an amorphous silicon flat panel detector to water-equivalent thicknesses (WETs). Scattergrams are derived from acquired projection images using scatter kernels indexed by the local WET values and are then iteratively refined using a scatter magnitude bounding scheme that allows the algorithm to accommodate the very high scatter-to-primary ratios encountered in kV imaging. The algorithm recovers radiological thicknesses to within 9% of the true value at both kV and megavolt energies. Nonuniformity in CT reconstructions of homogeneous phantoms is reduced by an average of 76% over a wide range of beam energies and phantom geometries.
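The kernel-superposition idea above, scatter kernels indexed by the local water-equivalent thickness (WET) and superposed over the projection, can be caricatured in 1-D. The kernels, amplitudes and WET bins below are invented for illustration only:

```python
import numpy as np

def sks_scatter_estimate(primary, wet_bin, kernels, amplitudes):
    """Toy 1-D scatter kernel superposition: each pixel spreads scatter
    according to a kernel chosen by its local WET bin, scaled by a
    WET-dependent amplitude times the local primary estimate."""
    n = primary.size
    scatter = np.zeros(n)
    for i in range(n):
        k = kernels[int(wet_bin[i])]          # kernel indexed by local WET
        amp = amplitudes[int(wet_bin[i])] * primary[i]
        half = len(k) // 2
        for j, kv in enumerate(k):
            idx = i + j - half
            if 0 <= idx < n:
                scatter[idx] += amp * kv      # superpose shifted kernel
    return scatter

# Two WET bins ("thin", "thick") with normalized kernels; thicker regions
# scatter more and more broadly.
kernels = {0: np.array([0.25, 0.5, 0.25]),
           1: np.array([0.2, 0.2, 0.2, 0.2, 0.2])}
amplitudes = {0: 0.1, 1: 0.4}
primary = np.full(8, 100.0)
wet_bin = np.array([0, 0, 0, 1, 1, 1, 0, 0])
scatter = sks_scatter_estimate(primary, wet_bin, kernels, amplitudes)
```

In the actual algorithm the resulting scattergram would be subtracted from the measured projection and the primary estimate iteratively refined under the scatter magnitude bound.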
First Order QED Corrections to the Parity-Violating Asymmetry in Moller Scattering
Zykunov, Vladimir A.; Suarez, Juan; Tweedie, Brock A.; Kolomensky, Yury G.
2005-08-15
We compute a full set of first-order QED corrections to the parity-violating observables in polarized Møller scattering. We employ a covariant method of removing infrared divergences, computing corrections without introducing any unphysical parameters. When applied to the kinematics of the SLAC E158 experiment, the QED corrections reduce the parity-violating asymmetry by 4.5%. We combine our results with previous calculations of the first-order electroweak corrections and obtain the complete O(α) prescription for relating the experimental asymmetry A{sub LR} to the low-energy value of the weak mixing angle sin{sup 2} {theta}{sub W}. Our results are applicable to the recent measurement of A{sub LR} by the SLAC E158 collaboration, as well as to future parity violation experiments.
NASA Astrophysics Data System (ADS)
Kassinopoulos, Michalis; Pitris, Costas
2016-03-01
The modulations appearing on the backscattering spectrum originating from a scatterer are related to its diameter as described by Mie theory for spherical particles. Many metrics for Spectroscopic Optical Coherence Tomography (SOCT) take advantage of this observation in order to enhance the contrast of Optical Coherence Tomography (OCT) images. However, none of these metrics has achieved high accuracy when calculating the scatterer size. In this work, Mie theory was used to further investigate the relationship between the degree of modulation in the spectrum and the scatterer size. From this study, a new spectroscopic metric, the bandwidth of the Correlation of the Derivative (COD) was developed which is more robust and accurate, compared to previously reported techniques, in the estimation of scatterer size. The self-normalizing nature of the derivative and the robustness of the first minimum of the correlation as a measure of its width, offer significant advantages over other spectral analysis approaches especially for scatterer sizes above 3 μm. The feasibility of this technique was demonstrated using phantom samples containing 6, 10 and 16 μm diameter microspheres as well as images of normal and cancerous human colon. The results are very promising, suggesting that the proposed metric could be implemented in OCT spectral analysis for measuring nuclear size distribution in biological tissues. A technique providing such information would be of great clinical significance since it would allow the detection of nuclear enlargement at the earliest stages of precancerous development.
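A simplified version of the correlation-of-the-derivative idea can be sketched: differentiate the backscattering spectrum (which is self-normalizing), autocorrelate, and take the lag of the first local minimum of the correlation as the width measure. This is a sketch of the general principle; the published metric's exact definition may differ:

```python
import numpy as np

def cod_bandwidth(spectrum):
    """COD-style metric: lag (in samples) of the first local minimum of the
    autocorrelation of the spectrum's first derivative."""
    d = np.diff(np.asarray(spectrum, float))
    d = d - d.mean()
    corr = np.correlate(d, d, mode="full")[d.size - 1:]   # lags >= 0
    corr = corr / corr[0]
    for lag in range(1, corr.size - 1):
        if corr[lag] < corr[lag - 1] and corr[lag] <= corr[lag + 1]:
            return lag
    return corr.size - 1

# Faster spectral modulation (larger scatterer, per Mie theory) should give a
# first minimum at a smaller lag, i.e. a narrower correlation.
k = np.linspace(0, 20 * np.pi, 400)
slow = np.cos(k)          # low-frequency modulation
fast = np.cos(3 * k)      # three times faster modulation
w_slow = cod_bandwidth(slow)
w_fast = cod_bandwidth(fast)
```

The first minimum is used because, as noted above, it is a robust measure of the correlation's width in the presence of noise.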
Teuho, Jarmo; Saunavaara, Virva; Tolvanen, Tuula; Tuokkola, Terhi; Karlsson, Antti; Tuisku, Jouni; Teräs, Mika
2017-03-23
In positron emission tomography (PET), corrections for photon scatter and attenuation are essential for visual and quantitative consistency. Magnetic resonance attenuation correction (MRAC) is generally conducted by image segmentation and assignment of discrete attenuation coefficients, which offers limited accuracy compared to computed tomography attenuation correction (CTAC). Potential inaccuracies in MRAC may affect scatter correction, as the attenuation image (µ-map) is used in single scatter simulation (SSS) to calculate the scatter estimate. We assessed the impact of MRAC on scatter correction by using two scatter correction techniques and three µ-maps for MRAC. Methods: The SSS and a Monte Carlo-based single scatter simulation (MCSSS) algorithm implementations on the Philips Ingenuity TF PET/MR were used with one CT-based and two MR-based µ-maps. Data from seven subjects were used in the clinical evaluation, and a phantom study using an anatomical brain phantom was conducted. Scatter correction sinograms were evaluated for each scatter correction method and µ-map. Absolute image quantification was investigated with the phantom data. Quantitative assessment of PET images was performed by volume of interest (VOI) and ratio image analysis. Results: MRAC did not result in large differences in scatter algorithm performance, especially with SSS. Scatter sinograms and scatter fractions did not reveal large differences regardless of the µ-map used. SSS showed slightly higher absolute quantification. The differences in the VOI analysis between SSS and MCSSS were at most 3% in the phantom and 4% in the patient study. Both algorithms showed excellent correlation with each other, with no visual differences between PET images. MCSSS showed a slight dependency on the µ-map used, with a difference of 2% on average and 4% at maximum when using a µ-map without bone. Conclusion: The effect of different MR-based µ-maps on the performance of scatter correction was
Weak charge of the proton: loop corrections to parity-violating electron scattering
Wally Melnitchouk
2011-05-01
I review the role of two-boson exchange corrections to parity-violating elastic electron-proton scattering. Direct calculations of contributions from nucleon and Delta intermediate states show generally small, O(1-2%), effects over the range of kinematics relevant for proton strangeness form factor measurements. For the forward angle Qweak experiment at Jefferson Lab, which aims to measure the weak charge of the proton, corrections from the gammaZ box diagram are computed within a dispersive approach and found to be sizable at the E~1 GeV energy scale of the experiment.
Oyeyemi, Victor B.; Krisiloff, David B.; Keith, John A.; Libisch, Florian; Pavone, Michele; Carter, Emily A.
2014-01-28
Oxygenated hydrocarbons play important roles in combustion science as renewable fuels and additives, but many details about their combustion chemistry remain poorly understood. Although many methods exist for computing accurate electronic energies of molecules at equilibrium geometries, a consistent description of entire combustion reaction potential energy surfaces (PESs) requires multireference correlated wavefunction theories. Here we use bond dissociation energies (BDEs) as a foundational metric to benchmark methods based on multireference configuration interaction (MRCI) for several classes of oxygenated compounds (alcohols, aldehydes, carboxylic acids, and methyl esters). We compare results from multireference singles and doubles configuration interaction to those utilizing a posteriori and a priori size-extensivity corrections, benchmarked against experiment and coupled cluster theory. We demonstrate that size-extensivity corrections are necessary for chemically accurate BDE predictions even in relatively small molecules and furnish examples of unphysical BDE predictions resulting from using too-small orbital active spaces. We also outline the specific challenges in using MRCI methods for carbonyl-containing compounds. The resulting complete basis set extrapolated, size-extensivity-corrected MRCI scheme produces BDEs generally accurate to within 1 kcal/mol, laying the foundation for this scheme's use on larger molecules and for more complex regions of combustion PESs.
Biophotonics of skin: method for correction of deep Raman spectra distorted by elastic scattering
NASA Astrophysics Data System (ADS)
Roig, Blandine; Koenig, Anne; Perraut, François; Piot, Olivier; Gobinet, Cyril; Manfait, Michel; Dinten, Jean-Marc
2015-03-01
Confocal Raman microspectroscopy allows in-depth molecular and conformational characterization of biological tissues non-invasively. Unfortunately, spectral distortions occur due to elastic scattering. Our objective is to correct the attenuation of in-depth Raman peak intensity by accounting for this phenomenon, thus enabling quantitative diagnosis. For this purpose, we developed PDMS phantoms mimicking skin optical properties, used as tools for instrument calibration and validation of the data processing method. An optical system based on a fiber bundle had previously been developed for in vivo skin characterization with diffuse reflectance spectroscopy (DRS). Used on our phantoms, this technique allows their optical properties to be checked: the targeted values were retrieved. Raman microspectroscopy was performed using a commercial confocal microscope. Depth profiles were constructed from the integrated intensity of some specific PDMS Raman vibrations. Acquired on monolayer phantoms, they display a decline that increases with the scattering coefficient. Furthermore, when acquiring Raman spectra on multilayered phantoms, the signal attenuation through each single layer depends directly on its own scattering property. Therefore, determining the optical properties of a biological sample, obtained with DRS for example, is crucial to properly correct Raman depth profiles. A model, inspired by S.L. Jacques's expression for confocal reflectance microscopy and modified at some points, is proposed and used to fit the depth profiles obtained on the phantoms as a function of the reduced scattering coefficient. Consequently, once the optical properties of a biological sample are known, the intensity of deep Raman spectra distorted by elastic scattering can be corrected with our model, thus permitting quantitative studies for purposes of characterization or diagnosis.
Two-photon exchange correction to muon-proton elastic scattering at low momentum transfer
NASA Astrophysics Data System (ADS)
Tomalak, Oleksandr; Vanderhaeghen, Marc
2016-03-01
We evaluate the two-photon exchange (TPE) correction to the muon-proton elastic scattering at small momentum transfer. Besides the elastic (nucleon) intermediate state contribution, which is calculated exactly, we account for the inelastic intermediate states by expressing the TPE process approximately through the forward doubly virtual Compton scattering. The input in our evaluation is given by the unpolarized proton structure functions and by one subtraction function. For the latter, we provide an explicit evaluation based on a Regge fit of high-energy proton structure function data. It is found that, for the kinematics of the forthcoming muon-proton elastic scattering data of the MUSE experiment, the elastic TPE contribution dominates, and the size of the inelastic TPE contributions is within the anticipated error of the forthcoming data.
Beare, Richard; Brown, Michael J. I.; Pimbblet, Kevin
2014-12-20
We describe an accurate new method for determining absolute magnitudes, and hence also K-corrections, that is simpler than most previous methods, being based on a quadratic function of just one suitably chosen observed color. The method relies on the extensive and accurate new set of 129 empirical galaxy template spectral energy distributions from Brown et al. A key advantage of our method is that we can reliably estimate random errors in computed absolute magnitudes due to galaxy diversity, photometric error and redshift error. We derive K-corrections for the five Sloan Digital Sky Survey filters and provide parameter tables for use by the astronomical community. Using the New York Value-Added Galaxy Catalog, we compare our K-corrections with those from kcorrect. Our K-corrections produce absolute magnitudes that are generally in good agreement with kcorrect. Absolute griz magnitudes differ by less than 0.02 mag and those in the u band by ∼0.04 mag. The evolution of rest-frame colors as a function of redshift is better behaved using our method, with relatively few galaxies being assigned anomalously red colors and a tight red sequence being observed across the whole 0.0 < z < 0.5 redshift range.
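The method reduces to evaluating a per-filter quadratic in one observed color and subtracting it, along with the distance modulus, from the apparent magnitude. The coefficients below are hypothetical placeholders, not values from the paper's parameter tables:

```python
def k_correction(color, c0, c1, c2):
    """Quadratic-in-color K-correction: K = c0 + c1*color + c2*color**2,
    with coefficients tabulated per filter and redshift bin."""
    return c0 + c1 * color + c2 * color ** 2

def absolute_magnitude(apparent_mag, distance_modulus, k_corr):
    """Standard relation M = m - DM - K."""
    return apparent_mag - distance_modulus - k_corr

# Hypothetical coefficients and photometry, for illustration only.
K = k_correction(color=0.8, c0=0.02, c1=0.15, c2=0.05)
M = absolute_magnitude(apparent_mag=18.5, distance_modulus=38.0, k_corr=K)
```

Because K is a simple analytic function of one color, propagating photometric and redshift errors into M is straightforward, which is the random-error advantage noted above.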
Menegotti, L.; Delana, A.; Martignano, A.
2008-07-15
Film dosimetry is an attractive tool for dose distribution verification in intensity modulated radiotherapy (IMRT). A critical aspect of radiochromic film dosimetry is the scanner used for the readout of the film: the output needs to be calibrated in dose response and corrected for pixel-value and spatially dependent nonuniformity caused by light scattering; these procedures can take a long time. A method for fast and accurate calibration and uniformity correction for radiochromic film dosimetry is presented: a single film exposure is used for both calibration and correction. Gafchromic EBT films were read with two flatbed charge coupled device scanners (Epson V750 and 1680Pro). The accuracy of the method is investigated with specific dose patterns and an IMRT beam. Comparisons with a two-dimensional array of ionization chambers using an 18x18 cm{sup 2} open field and an inverse pyramid dose pattern show an increase in the percentage of points passing the gamma analysis (tolerance parameters of 3% and 3 mm), from 55% and 64% for the 1680Pro and V750 scanners, respectively, to 94% for both scanners for the 18x18 open field, and from 76% and 75% to 91% for the inverse pyramid pattern. Application to an IMRT beam also shows better gamma index results, improving from 88% and 86% for the two scanners, respectively, to 94% for both. The number of points and the dose range considered for correction and calibration appear to be appropriate for use in IMRT verification. The method proved to be fast and to correct the nonuniformity properly, and has been adopted for routine clinical IMRT dose verification.
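The gamma analysis used for the comparisons above combines a dose tolerance with a distance-to-agreement criterion. A minimal 1-D sketch with the same 3%/3 mm tolerances (synthetic profiles, not the paper's data):

```python
import numpy as np

def gamma_pass_rate(reference, measured, coords_mm, dose_tol=0.03, dist_mm=3.0):
    """1-D gamma analysis: for each reference point the gamma index is the
    minimum over measured points of sqrt((dD/D_tol)^2 + (dx/d_tol)^2);
    a point passes if gamma <= 1. Dose tolerance is relative to the
    global maximum (global gamma)."""
    ref = np.asarray(reference, float)
    mea = np.asarray(measured, float)
    x = np.asarray(coords_mm, float)
    d_norm = dose_tol * ref.max()
    gammas = []
    for xi, di in zip(x, ref):
        dd = (mea - di) / d_norm
        dx = (x - xi) / dist_mm
        gammas.append(np.sqrt(dd ** 2 + dx ** 2).min())
    return (np.array(gammas) <= 1.0).mean() * 100.0

x = np.arange(0, 50, 1.0)                   # 1 mm grid
ref = 100.0 * np.exp(-((x - 25) ** 2) / 50) # Gaussian dose profile
meas = ref * 1.01                           # 1% scaling error: within tolerance
rate = gamma_pass_rate(ref, meas, x)
```

Clinical implementations add low-dose thresholds and finer spatial interpolation, omitted here for brevity.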
Development of Filtered Rayleigh Scattering for Accurate Measurement of Gas Velocity
NASA Technical Reports Server (NTRS)
Miles, Richard B.; Lempert, Walter R.
1995-01-01
The overall goals of this research were to develop new diagnostic tools capable of capturing unsteady and/or time-evolving, high-speed flow phenomena. The program centers around the development of Filtered Rayleigh Scattering (FRS) for velocity, temperature, and density measurement, and the construction of narrow linewidth laser sources which will be capable of producing an order MHz repetition rate 'burst' of high power pulses.
Compartment modeling of dynamic brain PET—The impact of scatter corrections on parameter errors
Häggström, Ida; Karlsson, Mikael; Larsson, Anne; Schmidtlein, C. Ross
2014-11-01
Purpose: The aim of this study was to investigate the effect of scatter and its correction on kinetic parameters in dynamic brain positron emission tomography (PET) tumor imaging. The 2-tissue compartment model was used, and two different reconstruction methods and two scatter correction (SC) schemes were investigated. Methods: The GATE Monte Carlo (MC) software was used to perform 2 × 15 full PET scan simulations of a voxelized head phantom with inserted tumor regions. The two sets of kinetic parameters of all tissues were chosen to represent the 2-tissue compartment model for the tracer 3′-deoxy-3′-({sup 18}F)fluorothymidine (FLT), and were denoted FLT{sub 1} and FLT{sub 2}. PET data were reconstructed with both 3D filtered back-projection with reprojection (3DRP) and 3D ordered-subset expectation maximization (OSEM). Images including true coincidences with attenuation correction (AC) and true+scattered coincidences with AC and with and without one of two applied SC schemes were reconstructed. Kinetic parameters were estimated by weighted nonlinear least squares fitting of image derived time–activity curves. Calculated parameters were compared to the true input to the MC simulations. Results: The relative parameter biases for scatter-eliminated data were 15%, 16%, 4%, 30%, 9%, and 7% (FLT{sub 1}) and 13%, 6%, 1%, 46%, 12%, and 8% (FLT{sub 2}) for K{sub 1}, k{sub 2}, k{sub 3}, k{sub 4}, V{sub a}, and K{sub i}, respectively. As expected, SC was essential for most parameters since omitting it increased biases by 10 percentage points on average. SC was not found necessary for the estimation of K{sub i} and k{sub 3}, however. There was no significant difference in parameter biases between the two investigated SC schemes or from parameter biases from scatter-eliminated PET data. Furthermore, neither 3DRP nor OSEM yielded the smallest parameter biases consistently although there was a slight favor for 3DRP which produced less biased k{sub 3} and K{sub i
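The forward 2-tissue compartment model underlying the fits above can be sketched with simple Euler integration. The plasma input function and rate constants below are toy values, not the study's FLT parameter sets:

```python
import numpy as np

def two_tissue_model(t, cp, K1, k2, k3, k4, va=0.0):
    """Forward 2-tissue compartment model via Euler integration:
        dC1/dt = K1*Cp - (k2 + k3)*C1 + k4*C2
        dC2/dt = k3*C1 - k4*C2
    returning the PET signal (1 - va)*(C1 + C2) + va*Cp."""
    c1 = np.zeros_like(t)
    c2 = np.zeros_like(t)
    for i in range(1, t.size):
        dt = t[i] - t[i - 1]
        dc1 = K1 * cp[i - 1] - (k2 + k3) * c1[i - 1] + k4 * c2[i - 1]
        dc2 = k3 * c1[i - 1] - k4 * c2[i - 1]
        c1[i] = c1[i - 1] + dt * dc1
        c2[i] = c2[i - 1] + dt * dc2
    return (1 - va) * (c1 + c2) + va * cp

t = np.linspace(0, 60, 601)            # minutes
cp = np.exp(-t / 10.0)                 # toy plasma input function
tac = two_tissue_model(t, cp, K1=0.1, k2=0.2, k3=0.05, k4=0.01, va=0.05)
Ki = 0.1 * 0.05 / (0.2 + 0.05)         # net influx rate K_i = K1*k3/(k2+k3)
```

Parameter estimation then amounts to weighted nonlinear least squares between such a model time-activity curve and the image-derived one.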
Koubar, Khodor; Bekaert, Virgile; Brasse, David; Laquerriere, Patrice
2015-06-01
Bone mineral density plays an important role in the determination of bone strength and fracture risk, so it is important to obtain accurate bone mineral density measurements. Microcomputerized tomography provides 3D information about the architectural properties of bone, but the accuracy of quantitative analysis is degraded by artefacts in the reconstructed images, mainly beam hardening artefacts (such as cupping artefacts). In this paper, we introduce a new beam hardening correction method based on a post-reconstruction technique that uses off-line water and bone linearization curves calculated experimentally, aiming to take into account the non-homogeneity of the scanned animal. In order to evaluate the mass correction rate, a calibration line was established to convert the reconstructed linear attenuation coefficients into bone masses. The correction method was then applied to a multimaterial cylindrical phantom and to mouse skeleton images. Mass correction rates of up to 18% between uncorrected and corrected images were obtained, and a marked improvement in the calculated mouse femur mass was observed. The results were also compared with those obtained using the simple water linearization technique, which does not take the non-homogeneity of the object into account.
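The water-linearization baseline mentioned above can be sketched in a few lines: calibrate a polynomial that maps the measured (beam-hardened) log-attenuation of known thicknesses back onto a linear response, then apply that polynomial to projection data. This is a generic illustration of the idea, not the authors' dual water-and-bone scheme; the synthetic response and the degree-3 fit are assumptions.

```python
import numpy as np

def fit_linearization(thickness, measured, deg=3):
    """Calibrate a polynomial that maps measured (beam-hardened)
    log-attenuation onto the ideal linear response mu_eff * thickness.
    The returned callable is applied to projection data."""
    thickness = np.asarray(thickness, dtype=float)
    measured = np.asarray(measured, dtype=float)
    # effective mu from the thinnest sample, where hardening is negligible
    mu_eff = measured[0] / thickness[0]
    ideal = mu_eff * thickness
    return np.poly1d(np.polyfit(measured, ideal, deg))
```

A corrected sinogram is then obtained by evaluating the returned polynomial on each measured log-attenuation value before reconstruction.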
NASA Astrophysics Data System (ADS)
Satyanarayana, M. V.; Radhakrishnan, S. R.; Mahadevanpillai, V. P.; Krishnakumar, V.
2008-12-01
Lidar has proven to be an effective instrument for obtaining high resolution profiles of atmospheric aerosols. Deriving the optical properties of aerosols from experimentally obtained lidar data is one of the most interesting and challenging tasks for atmospheric scientists. A few methods have been developed so far to obtain quantitative profiles of the aerosol extinction and backscattering coefficients from pulsed backscattering lidar measurements. Most existing inversion methods assume a range-independent value of the scattering ratio when inverting the lidar signal, even though the scattering ratio depends on the nature of the aerosols and is therefore range dependent. We use a modified Klett's method for the inversion of the lidar signal that employs a range-dependent scattering ratio (s) for the characterization of atmospheric aerosols. The method provides the constants k and s for all altitude regions of the atmosphere and allows the aerosol extinction profile to be derived from the lidar data. In this paper we study the errors involved in extinction profiles derived using the range-dependent scattering ratio and discuss how accurate extinction profiles can be obtained with this approach.
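For context, the standard backward Klett solution that such inversion schemes build on can be sketched as follows. This minimal version assumes a single constant backscatter-extinction exponent k (the modified method above makes the scattering ratio range dependent) and a simple rectangle-rule integral; the function name and interface are ours.

```python
import numpy as np

def klett_backward(P, r, sigma_m, k=1.0):
    """Backward Klett inversion of a lidar return.

    P       : received power samples
    r       : range bins (m), increasing; r[-1] is the far reference
    sigma_m : reference extinction at the far boundary r[-1]
    k       : backscatter-extinction exponent (a scalar here; the
              modified scheme makes the ratio range dependent)
    """
    S = np.log(P * r**2)                    # log range-corrected signal
    dS = np.exp((S - S[-1]) / k)
    dr = np.diff(r, prepend=r[0])
    # rectangle-rule integral from r to the far boundary r[-1]
    tail = np.cumsum((dS * dr)[::-1])[::-1]
    return dS / (1.0 / sigma_m + (2.0 / k) * tail)
```

For a homogeneous atmosphere the inversion should return a constant extinction profile equal to the far-boundary reference value.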
Next-to-soft corrections to high energy scattering in QCD and gravity
NASA Astrophysics Data System (ADS)
Luna, A.; Melville, S.; Naculich, S. G.; White, C. D.
2017-01-01
We examine the Regge (high energy) limit of 4-point scattering in both QCD and gravity, using recently developed techniques to systematically compute all corrections up to next-to-leading power in the exchanged momentum, i.e. beyond the eikonal approximation. We consider the situation of two scalar particles of arbitrary mass, thus generalising previous calculations in the literature. In QCD, our calculation describes power-suppressed corrections to the Reggeisation of the gluon. In gravity, we confirm a previous conjecture that next-to-soft corrections correspond to two independent deflection angles for the incoming particles. Our calculations in QCD and gravity are consistent with the well-known double copy relating amplitudes in the two theories.
X-ray scatter correction method for dedicated breast computed tomography
Sechopoulos, Ioannis
2012-05-15
Purpose: To improve image quality and accuracy in dedicated breast computed tomography (BCT) by removing the x-ray scatter signal included in the BCT projections. Methods: The previously characterized magnitude and distribution of x-ray scatter in BCT result in both cupping artifacts and a reduction of contrast and accuracy in the reconstructions. In this study, an image processing method is proposed that estimates and subtracts the low-frequency x-ray scatter signal included in each BCT projection postacquisition and prereconstruction. The estimation of this signal is performed using simple additional hardware, one additional BCT projection acquisition with negligible radiation dose, and simple image processing software algorithms. The high-frequency quantum noise due to the scatter signal is reduced using a noise filter postreconstruction. The dosimetric consequences and the validity of the assumptions of this algorithm were determined using Monte Carlo simulations. The feasibility of the method was determined by imaging a breast phantom on a BCT clinical prototype and comparing the corrected reconstructions to the unprocessed reconstructions and to reconstructions obtained from fan-beam acquisitions as a reference standard. One-dimensional profiles of the reconstructions and objective image quality metrics were used to determine the impact of the algorithm. Results: The proposed additional acquisition results in negligible additional radiation dose to the imaged breast (≈0.4% of the standard BCT acquisition). The processed phantom reconstruction showed substantially reduced cupping artifacts, increased contrast between adipose and glandular tissue equivalents, higher voxel value accuracy, and no discernible blurring of high-frequency features. Conclusions: The proposed scatter correction method for dedicated breast CT is feasible and can result in highly improved image quality. Further optimization and testing, especially with patient images, is necessary.
Correction of radiation absorption on biological samples using Rayleigh to Compton scattering ratio
NASA Astrophysics Data System (ADS)
Pereira, Marcelo O.; Conti, Claudio de Carvalho; dos Anjos, Marcelino J.; Lopes, Ricardo T.
2012-06-01
The aim of this work was to develop a method to correct for radiation absorption (the mass attenuation coefficient curve) at low energies (E < 30 keV) in a biological matrix, based on the Rayleigh to Compton scattering ratio and the effective atomic number. For calibration, scattering measurements were performed on standard samples using radiation from a ²⁴¹Am gamma-ray source (59.54 keV); the method was also applied to certified biological samples of milk powder, hay powder and bovine liver (NIST 1557B). In addition, six methods of effective atomic number determination described in the literature were used to determine the Rayleigh to Compton scattering ratio (R/C) and thereby calculate the mass attenuation coefficient. The results obtained by the proposed method were compared with those obtained using the transmission method. The experimental results were in good agreement with the transmission values, suggesting that the radiation absorption correction method presented in this paper is adequate for biological samples.
More accurate X-ray scattering data of deeply supercooled bulk liquid water
Neuefeind, Joerg C; Benmore, Chris J; Weber, Richard; Paschek, Dietmar
2011-01-01
Deeply supercooled water droplets held containerless in an acoustic levitator are investigated with high energy X-ray scattering. The temperature dependence of the X-ray structure function is found to be non-linear. Comparison with two popular computer models reveals that structural changes are predicted to be too abrupt by the TIP5P model, while the rate of change predicted by TIP4P is in much better agreement with experiment. The abrupt structural changes predicted by the TIP5P model in the temperature range 260-240 K, as water approaches the homogeneous nucleation limit, are unrealistic. Both models underestimate the distance between neighbouring oxygen atoms and overestimate the sharpness of the O-O distance distribution, indicating that the strength of the H-bond is overestimated in these models.
Wang, Siwei; Sun, Dongning; Dong, Yi; Xie, Weilin; Shi, Hongxiao; Yi, Lilin; Hu, Weisheng
2014-02-15
We have developed a radio-frequency local oscillator remote distribution system, which transfers a phase-stabilized 10.03 GHz signal over 100 km optical fiber. The phase noise of the remote signal caused by temperature and mechanical stress variations on the fiber is compensated by a high-precision phase-correction system, which is achieved using a single sideband modulator to transfer the phase correction from intermediate frequency to radio frequency, thus enabling accurate phase control of the 10 GHz signal. The residual phase noise of the remote 10.03 GHz signal is measured to be -70 dBc/Hz at 1 Hz offset, and long-term stability of less than 1×10⁻¹⁶ at 10,000 s averaging time is achieved. Phase error is less than ±0.03π.
Two-photon exchange corrections in elastic lepton-proton scattering
NASA Astrophysics Data System (ADS)
Tomalak, Oleksandr; Vanderhaeghen, Marc
2016-09-01
In recent years, two experimental approaches, with and without polarized protons, gave strikingly different results for the ratio of the electric to magnetic proton form factors. Even more recently, a mysterious discrepancy ("the proton radius puzzle") has been observed in the extraction of the proton charge radius from muonic hydrogen versus regular hydrogen and electron-proton scattering. Two-photon exchange (TPE) contributions are the largest source of hadronic uncertainty in these experiments. To determine the TPE corrections to the S level in muonic hydrogen, the forward virtual Compton scattering is calculated within the dispersion relation (DR) formalism. Comparing a box graph model with the DRs at fixed low momentum transfer, we develop and test the subtracted DR formalism for TPE in electron-proton scattering; its relative effect on the cross section is in the 1-2% range. We include the inelastic states both in the approximation of near-forward unpolarized virtual Compton scattering and using the empirical information on the πN states' contribution. We compare the resulting TPE with MAMI, VEPP-3 and CLAS data, and make predictions for the OLYMPUS and forthcoming MUSE experiments.
Implementation of an Analytical Raman Scattering Correction for Satellite Ocean-Color Processing
NASA Technical Reports Server (NTRS)
McKinna, Lachlan I. W.; Werdell, P. Jeremy; Proctor, Christopher W.
2016-01-01
Raman scattering of photons by seawater molecules is an inelastic scattering process. This effect can contribute significantly to the water-leaving radiance signal observed by space-borne ocean-color spectroradiometers. If not accounted for during ocean-color processing, Raman scattering can cause biases in derived inherent optical properties (IOPs). Here we describe a Raman scattering correction (RSC) algorithm that has been integrated within NASA's standard ocean-color processing software. We tested the RSC with NASA's Generalized Inherent Optical Properties algorithm (GIOP). A comparison between derived IOPs and in situ data revealed that the magnitudes of the derived backscattering coefficient and the phytoplankton absorption coefficient were reduced when the RSC was applied, whilst the absorption coefficient of colored dissolved and detrital matter remained unchanged. Importantly, our results show that the RSC did not degrade the retrieval skill of the GIOP. In addition, a time-series study of oligotrophic waters near Bermuda showed that the RSC did not introduce unwanted temporal trends or artifacts into derived IOPs.
NASA Astrophysics Data System (ADS)
Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki
2016-03-01
Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of phase retardation (or birefringence) images introduces a noise-bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependence of phase retardation and birefringence on SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurements was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved MAP estimator which takes the stochastic property of SNR into account. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of birefringence and SNR. The PDF is pre-computed by a Monte Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and by in vivo measurements of anterior and
Szidarovszky, Tamás; Császár, Attila G
2015-01-07
The total partition functions Q(T) and their first two moments Q′(T) and Q″(T), together with the isobaric heat capacities C_p(T), are computed a priori for three major MgH isotopologues over the temperature range T = 100-3000 K using the recent highly accurate potential energy curve, spin-rotation, and non-adiabatic correction functions of Henderson et al. [J. Phys. Chem. A 117, 13373 (2013)]. Nuclear motion computations are carried out on the ground electronic state to determine the (ro)vibrational energy levels and the scattering phase shifts. The effect of resonance states is found to be significant above about 1000 K and increases with temperature. Even very short-lived states, due to their relatively large number, contribute significantly to Q(T) at elevated temperatures. The contribution of scattering states is around one fourth of that of the resonance states but opposite in sign. Uncertainty estimates are given for the possible error sources, suggesting that all computed thermochemical properties have an accuracy better than 0.005% up to 1200 K. Between 1200 and 2500 K, the uncertainties can rise to around 0.1%, while between 2500 and 3000 K a further increase to 0.5% might be observed for Q″(T) and C_p(T), principally due to the neglect of excited electronic states. The accurate thermochemical data determined are presented in the supplementary material for the three isotopologues ²⁴MgH, ²⁵MgH, and ²⁶MgH at 1 K increments. These data, which differ significantly from older standard data, should prove useful for astronomical models incorporating thermodynamic properties of these species.
Validation of a Method to Accurately Correct Anterior Superior Iliac Spine Marker Occlusion
Hoffman, Joshua T.; McNally, Michael P.; Wordeman, Samuel C.; Hewett, Timothy E.
2015-01-01
Anterior superior iliac spine (ASIS) marker occlusion commonly occurs during three-dimensional (3-D) motion capture of dynamic tasks with deep hip flexion. The purpose of this study was to validate a universal technique to correct ASIS occlusion. 420 ms of bilateral ASIS marker occlusion was simulated in fourteen drop vertical jump (DVJ) trials (n=14). Kinematic and kinetic hip data calculated for pelvic segments based on iliac crest (IC) markers and virtual ASIS trajectories (produced by our algorithm and a commercial virtual join) were compared to true ASIS marker tracking data. Root mean squared errors (RMSEs; mean ± standard deviation) and intra-class correlations (ICCs) between pelvic tracking based on virtual ASIS trajectories filled by our algorithm and the true ASIS position were 2.3±0.9° (ICC=0.982) flexion/extension and 0.8±0.2° (ICC=0.954) abduction/adduction for hip angles, and 0.40±0.17 N-m (ICC=1.000) and 1.05±0.36 N-m (ICC=0.998) for sagittal and frontal plane moments. RMSEs for IC pelvic tracking were 6.9±1.8° (ICC=0.888) flexion/extension and 0.8±0.3° (ICC=0.949) abduction/adduction for hip angles, and 0.31±0.13 N-m (ICC=1.00) and 1.48±0.69 N-m (ICC=0.996) for sagittal and frontal plane moments. Finally, the commercially available virtual join demonstrated RMSEs of 4.4±1.5° (ICC=0.945) flexion/extension and 0.7±0.2° (ICC=0.972) abduction/adduction for hip angles, and 0.97±0.62 N-m (ICC=1.000) and 1.49±0.67 N-m (ICC=0.996) for sagittal and frontal plane moments. The presented algorithm exceeded the a priori ICC cutoff of 0.95 for excellent validity and is an acceptable tracking alternative. While the ICCs for the commercially available virtual join did not reach excellent correlation, good validity was observed for all kinematics and kinetics. IC marker pelvic tracking is not a valid alternative. PMID:25704531
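A common way to reconstruct an occluded marker from the remaining pelvic cluster is a rigid-body (Kabsch/SVD) fit between a calibration frame and the current frame. The abstract does not specify the authors' exact algorithm, so the sketch below is a generic illustration of the idea; the function name and interface are ours.

```python
import numpy as np

def rigid_fill(ref_cluster, ref_target, cluster_now):
    """Reconstruct an occluded marker from a rigid marker cluster.

    ref_cluster : (n, 3) visible cluster markers in a calibration frame
    ref_target  : (3,)   occluded marker position in that same frame
    cluster_now : (n, 3) the same cluster markers in the current frame
    Returns the estimated current position of the occluded marker.
    """
    A = ref_cluster - ref_cluster.mean(axis=0)
    B = cluster_now - cluster_now.mean(axis=0)
    U, _, Vt = np.linalg.svd(A.T @ B)       # Kabsch rotation fit
    if np.linalg.det((U @ Vt).T) < 0:       # enforce a proper rotation
        U[:, -1] *= -1
    R = (U @ Vt).T
    return cluster_now.mean(axis=0) + R @ (ref_target - ref_cluster.mean(axis=0))
```

With three or more non-collinear cluster markers the fit is exact for noiseless rigid motion, which is the property the validation above relies on.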
Laitinen, T.; Dalla, S.; Huttunen-Heikinmaa, K.; Valtonen, E.
2015-06-10
To understand the origin of Solar Energetic Particles (SEPs), we must study their injection time relative to other solar eruption manifestations. Traditionally the injection time is determined using the Velocity Dispersion Analysis (VDA) where a linear fit of the observed event onset times at 1 AU to the inverse velocities of SEPs is used to derive the injection time and path length of the first-arriving particles. VDA does not, however, take into account that the particles that produce a statistically observable onset at 1 AU have scattered in the interplanetary space. We use Monte Carlo test particle simulations of energetic protons to study the effect of particle scattering on the observable SEP event onset above pre-event background, and consequently on VDA results. We find that the VDA results are sensitive to the properties of the pre-event and event particle spectra as well as SEP injection and scattering parameters. In particular, a VDA-obtained path length that is close to the nominal Parker spiral length does not imply that the VDA injection time is correct. We study the delay to the observed onset caused by scattering of the particles and derive a simple estimate for the delay time by using the rate of intensity increase at the SEP onset as a parameter. We apply the correction to a magnetically well-connected SEP event of 2000 June 10, and show it to improve both the path length and injection time estimates, while also increasing the error limits to better reflect the inherent uncertainties of VDA.
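The VDA fit discussed above is a straight line of observed onset time against inverse particle speed: the slope gives the apparent path length and the intercept the injection time. A minimal sketch for protons follows; the function name and interface are ours, and it implements only the plain fit, not the scattering correction the paper derives.

```python
import numpy as np

def velocity_dispersion_fit(onset_times, energies_MeV, m0c2=938.272):
    """VDA sketch: fit t_onset(E) = t_inj + L / v(E) for protons.

    onset_times  : observed onset times at 1 AU (s)
    energies_MeV : proton kinetic energies (MeV)
    Returns (t_inj in s, apparent path length L in AU).
    """
    AU = 1.495978707e11                     # m
    c = 2.99792458e8                        # m/s
    gamma = 1.0 + np.asarray(energies_MeV) / m0c2
    beta = np.sqrt(1.0 - 1.0 / gamma**2)    # relativistic speed v/c
    inv_v = 1.0 / (beta * c)                # s/m
    L, t_inj = np.polyfit(inv_v, np.asarray(onset_times), 1)
    return t_inj, L / AU
```

As the abstract notes, even a perfect fit of this line can return a misleading injection time when the first-observed particles have scattered en route.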
Hadron mass corrections in semi-inclusive deep-inelastic scattering
Guerrero Teran, Juan Vicente; Ethier, James J.; Accardi, Alberto; Casper, Steven W.; Melnitchouk, Wally
2015-09-24
The spin-dependent cross sections for semi-inclusive lepton-nucleon scattering are derived in the framework of collinear factorization, including the effects of the masses of the target and the produced hadron at finite Q². At leading order the cross sections factorize into products of parton distribution and fragmentation functions evaluated in terms of new, mass-dependent scaling variables. The size of the hadron mass corrections is estimated at kinematics relevant to current and future experiments, and the implications for the extraction of parton distributions from semi-inclusive measurements are discussed.
γZ corrections to forward-angle parity-violating ep scattering
Alex Sibirtsev; Blunden, Peter G.; Melnitchouk, Wally; ...
2010-07-30
We use dispersion relations to evaluate the γZ box contribution to parity-violating electron scattering in the forward limit, taking into account constraints from recent JLab data on electroproduction in the resonance region as well as high energy data from HERA. The correction to the asymmetry is found to be 1.2 ± 0.2% at the kinematics of the JLab Qweak experiment, which is well within the limits required to achieve a 4% measurement of the weak charge of the proton.
A scatter correction method for contrast-enhanced dual-energy digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Lu, Yihuan; Peng, Boyu; Lau, Beverly A.; Hu, Yue-Houng; Scaduto, David A.; Zhao, Wei; Gindi, Gene
2015-08-01
Contrast-enhanced dual energy digital breast tomosynthesis (CE-DE-DBT) is designed to image iodinated masses while suppressing breast anatomical background. Scatter is a problem, especially for high energy acquisition, in that it causes severe cupping artifact and iodine quantitation errors. We propose a patient specific scatter correction (SC) algorithm for CE-DE-DBT. The empirical algorithm works by interpolating scatter data outside the breast shadow into an estimate within the breast shadow. The interpolated estimate is further improved by operations that use an easily obtainable (from phantoms) table of scatter-to-primary-ratios (SPR), with a single SPR value for each breast thickness and acquisition angle. We validated our SC algorithm for two breast emulating phantoms by comparing the SPR from our SC algorithm to that measured using a beam-passing pinhole array plate. The error in our SC computed SPR, averaged over acquisition angle and image location, was about 5%, with slightly worse errors for thicker phantoms. The SC projection data, reconstructed using OS-SART, showed a large degree of decupping. We also observed that SC removed the dependence of iodine quantitation on phantom thickness. We applied the SC algorithm to a CE-DE-mammographic patient image with a biopsy confirmed tumor at the breast periphery. In the image without SC, the contrast enhanced tumor was masked by the cupping artifact; with our SC, the tumor was easily visible. An interpolation-based SC was proposed by Siewerdsen et al (2006 Med. Phys. 33 187-97) for cone-beam CT (CBCT), but our algorithm and application differ in several respects. Other relevant SC techniques include Monte-Carlo and convolution-based methods for CBCT, storage of a precomputed library of scatter maps for DBT, and patient acquisition with a beam-passing pinhole array for breast CT. Our SC algorithm can be accomplished in clinically acceptable times, requires no additional imaging hardware or extra patient dose, and is easily transportable.
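The core of the interpolation-based estimate can be illustrated in one dimension: sample the signal outside the breast shadow (where the detector sees almost pure scatter), interpolate it across the shadow, and cap the estimate inside the shadow with a tabulated SPR. The function name, mask handling and the cap rule below are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def interp_scatter_correct(projection, in_shadow, spr):
    """1-D sketch of interpolation-based scatter correction.

    projection : detector row values
    in_shadow  : boolean mask, True inside the breast shadow
    spr        : tabulated scatter-to-primary ratio for this
                 thickness/angle (a single value, as in the SPR table)
    """
    x = np.arange(projection.size, dtype=float)
    est = np.interp(x, x[~in_shadow], projection[~in_shadow])
    # inside the shadow, scatter cannot exceed spr/(1+spr) of the total
    cap = projection * spr / (1.0 + spr)
    est = np.where(in_shadow, np.minimum(est, cap), est)
    return projection - est
```

The full 2-D method interpolates over the whole shadow boundary and refines the estimate with the phantom-derived SPR table per angle.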
Energy-based scatter correction for 3-D PET scanners using NaI(Tl) detectors.
Adam, L E; Karp, J S; Freifelder, R
2000-05-01
Earlier investigations with BGO positron emission tomography (PET) scanners showed that scatter correction techniques based on multiple acquisitions with different energy windows are problematic to implement because of the poor energy resolution of BGO (22%), particularly for whole-body studies. We believe these methods are likely to work better with NaI(Tl) because of the better energy resolution achievable with NaI(Tl) detectors (10%). We therefore investigate two different choices for the energy window: a low-energy window (LEW) on the Compton spectrum at 400-450 keV, and a high-energy window (HEW) within the photopeak (lower threshold above 511 keV). The results obtained for our three-dimensional (3-D) (septa-less) whole-body scanners [axial field of view (FOV) of 12.8 cm and 25.6 cm] as well as for our 3-D brain scanner (axial FOV of 25.6 cm) show an accurate prediction of the scatter distribution by the estimation of trues method (ETM) using a HEW, leading to a significant reduction of the scatter contamination. The dual-energy window (DEW) technique using a LEW is shown to be intrinsically wrong; in particular, it fails for line source and bar phantom measurements, although it is able to produce good results for homogeneous activity distributions. Both methods are easy to implement, are fast, have low noise propagation, and will be applicable to other PET scanners with good energy resolution and stability, such as hybrid NaI(Tl) PET/SPECT dual-head cameras and future PET cameras with GSO or LSO scintillators.
Xing, Li; Hang, Yijun; Xiong, Zhi; Liu, Jianye; Wan, Zhong
2016-01-01
This paper describes a disturbance-acceleration adaptive estimation and correction approach for an attitude reference system (ARS), intended to improve attitude estimation precision under vehicle movement conditions. The approach relies on a Kalman filter in which the attitude error, the gyroscope zero-offset error and the disturbance acceleration error are estimated. By switching the filter decay coefficient of the disturbance acceleration model between different acceleration modes, the disturbance acceleration is adaptively estimated and corrected, and the attitude estimation precision is thereby improved. The filter was tested by digital simulation in three different disturbance acceleration modes (non-acceleration, vibration-acceleration and sustained-acceleration, respectively), and the proposed approach was also tested in a kinematic vehicle experiment. The simulations and kinematic vehicle experiments show that the disturbance acceleration of each mode can be accurately estimated and corrected. Moreover, compared with a complementary filter, the experimental results explicitly demonstrate that the proposed approach further improves the attitude estimation precision under vehicle movement conditions. PMID:27754469
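The mode-dependent switching of the decay coefficient can be sketched with a first-order Gauss-Markov disturbance model: a long correlation time when the measured specific force looks quasi-static, a short one when it does not. The detection threshold, correlation times and function names below are hypothetical placeholders, not values from the paper.

```python
import numpy as np

G = 9.80665  # gravity magnitude (m/s^2)

def decay_coefficient(acc_meas, dt, tau_static=100.0, tau_dyn=1.0):
    """Pick the first-order Gauss-Markov decay for the disturbance-
    acceleration state from the measured specific-force magnitude.
    Near |a| = g we assume a quasi-static mode (long correlation time);
    otherwise a dynamic mode lets the estimate track quickly."""
    quasi_static = abs(np.linalg.norm(acc_meas) - G) < 0.05 * G
    tau = tau_static if quasi_static else tau_dyn
    return np.exp(-dt / tau)

def propagate_disturbance(a_d, acc_meas, dt):
    """One prediction step of the disturbance-acceleration estimate:
    a_d(k+1) = phi * a_d(k) (plus process noise in the full filter)."""
    return decay_coefficient(acc_meas, dt) * a_d
```

In the full filter this state feeds the measurement model so that only the gravity component of the accelerometer reading is used to correct attitude.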
NASA Astrophysics Data System (ADS)
Mohammadi, S. M.; Tavakoli-Anbaran, H.; Zeinali, H. Z.
2017-02-01
The parallel-plate free-air ionization chamber termed FAC-IR-300 was designed at the Atomic Energy Organization of Iran (AEOI). This chamber is used for low- and medium-energy X-ray dosimetry at the primary standard level. In order to evaluate the air kerma, correction factors such as the electron-loss correction factor (ke) and the photon-scattering correction factor (ksc) are needed: ke corrects for charge lost from the collecting volume, and ksc corrects for photons scattered into the collecting volume. In this work, ke and ksc were estimated by Monte Carlo simulation for mono-energetic photons. Based on the simulation data, the ke and ksc values for the FAC-IR-300 ionization chamber are 1.0704 and 0.9982, respectively.
NASA Astrophysics Data System (ADS)
Gao, Hewei; Zhu, Lei; Fahrig, Rebecca
2010-04-01
The impact of the system parameters of the modulator on X-ray scatter correction using primary modulation is studied, and an optimization of the modulator design is presented. Recently, a promising scatter correction method for X-ray computed tomography (CT) that uses a checkerboard pattern of attenuating blockers (primary modulator) placed between the X-ray source and the object has been developed and experimentally verified. The blocker size, d, and the blocker transmission factor, α, are critical to the performance of the primary modulation method. In this work, an error caused by aliasing of the primary signal, whose magnitude depends on the choices of d and α and on the scanned object, is taken as the objective function to be minimized, with constraints including the X-ray focal spot, the physical size of the detector element, and the noise level. The optimization is carried out in two steps. In the first step, d is chosen as small as possible while still meeting a lower-bound condition. In the second step, α is selected to balance the error level in the scatter estimation against the noise level in the reconstructed image. The lower bound of d on our tabletop CT system is 0.83 mm. Numerical simulations suggest 0.6 < α < 0.8 is appropriate. Using a Catphan 600 phantom, a copper modulator (d = 0.89 mm, α = 0.70) expectedly outperforms an aluminum modulator (d = 2.83 mm, α = 0.90). With the aluminum modulator, our method reduces the average error of the CT number in selected contrast rods from 371.4 to 25.4 Hounsfield units (HU) and enhances the contrast-to-noise ratio (CNR) from 10.9 to 17.2; when the copper modulator is used, the error is further reduced to 21.9 HU and the CNR is further increased to 19.2.
Park, Y; Winey, B; Sharp, G
2014-06-01
Purpose: To demonstrate the feasibility of proton dose calculation on scatter-corrected CBCT images for the purpose of adaptive proton therapy. Methods: Two CBCT image sets were acquired from a prostate cancer patient and a thorax phantom using the on-board imaging system of an Elekta Infinity linear accelerator. 2-D scatter maps were estimated using a previously introduced CT-based technique and were subtracted from each raw projection image. A CBCT image set was then reconstructed with an open source reconstruction toolkit (RTK). Conversion from CBCT number to HU was performed by soft-tissue-based shifting with reference to the plan CT. Passively scattered proton plans were simulated on the plan CT and on the corrected/uncorrected CBCT images using the XiO treatment planning system. For quantitative evaluation, the water equivalent path length (WEPL) was compared across those treatment plans. Results: The scatter correction method significantly improved image quality and HU accuracy in the prostate case, where large scatter artifacts were obvious. However, the correction technique showed limited effects on the thorax case, which was associated with fewer scatter artifacts. Mean absolute WEPL errors from the plans with the uncorrected and corrected images were 1.3 mm and 5.1 mm in the thorax case and 13.5 mm and 3.1 mm in the prostate case. The prostate plan dose distribution of the corrected image demonstrated better agreement with the reference than that of the uncorrected image. Conclusion: A priori CT-based CBCT scatter correction can reduce the proton dose calculation error when large scatter artifacts are involved. If scatter artifacts are low, an uncorrected CBCT image is also promising for proton dose calculation when it is calibrated with soft-tissue-based shifting.
Three-Loop Corrections to the Soft Anomalous Dimension in Multileg Scattering
NASA Astrophysics Data System (ADS)
Almelid, Øyvind; Duhr, Claude; Gardi, Einan
2016-10-01
We present the three-loop result for the soft anomalous dimension governing long-distance singularities of multileg gauge-theory scattering amplitudes of massless partons. We compute all contributing webs involving semi-infinite Wilson lines at three loops and obtain the complete three-loop correction to the dipole formula. We find that nondipole corrections appear already for three colored partons, where the correction is a constant without kinematic dependence. Kinematic dependence appears only through conformally invariant cross ratios for four colored partons or more, and the result can be expressed in terms of single-valued harmonic polylogarithms of weight five. While the nondipole three-loop term does not vanish in two-particle collinear limits, its contribution to the splitting amplitude anomalous dimension reduces to a constant, and it depends only on the color charges of the collinear pair, thereby preserving strict collinear factorization properties. Finally, we verify that our result is consistent with expectations from the Regge limit.
TH-A-18C-04: Ultrafast Cone-Beam CT Scatter Correction with GPU-Based Monte Carlo Simulation
Xu, Y; Bai, T; Yan, H; Ouyang, L; Wang, J; Pompos, A; Jiang, S; Jia, X; Zhou, L
2014-06-15
Purpose: Scatter artifacts severely degrade the image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to complete the whole process, including both scatter correction and reconstruction, within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using raw projection data; 2) rigid registration of the planning CT to the FDK results; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using the GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in the simulation to further reduce computation time. A novel denoising algorithm is used to eliminate MC scatter noise caused by the low photon numbers. The method is validated on head-and-neck cases with simulated and clinical data. Results: We have studied the impact of photon histories and volume down-sampling factors on the accuracy of scatter estimation. Fourier analysis showed that scatter images calculated at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated case with a resolution of 512×512×100, we simulated 10M photons per angle. The total computation time is 23.77 seconds on an Nvidia GTX Titan GPU. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Similar results were found for a real patient case. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. The whole process of scatter correction and reconstruction is accomplished within 30 seconds. This study is supported in part by NIH (1R01CA154747-01), The Core Technology Research
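Step 4 of the pipeline, spreading scatter computed at sparse gantry angles to every projection angle, can be sketched as per-pixel linear interpolation in angle (a minimal stand-in; the authors' exact interpolation scheme may differ):

```python
import numpy as np

def interpolate_scatter(sparse_angles, sparse_maps, all_angles):
    """Interpolate scatter maps (one flattened map per sparse angle) to all
    projection angles; scatter varies slowly with gantry angle, so simple
    per-pixel linear interpolation is often adequate."""
    sparse_maps = np.asarray(sparse_maps)            # shape (n_sparse, n_pixels)
    out = np.empty((len(all_angles), sparse_maps.shape[1]))
    for p in range(sparse_maps.shape[1]):
        out[:, p] = np.interp(all_angles, sparse_angles, sparse_maps[:, p])
    return out
```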
Meyer, Michael; Kalender, Willi A; Kyriakou, Yiannis
2010-01-07
Scattered radiation is a major source of artifacts in flat detector computed tomography (FDCT) due to the increased irradiated volumes. We propose a fast projection-based algorithm for the correction of scatter artifacts. The presented algorithm combines a convolution method for determining the spatial shape of the scatter intensity distribution with an object-size-dependent scaling of its magnitude, using a priori information generated by Monte Carlo simulations. A projection-based (PBSE) and an image-based (IBSE) strategy for size estimation of the scanned object are presented. Both strategies provide good correction and comparable results; the faster PBSE strategy is recommended. Even with such a fast and simple algorithm, which in the PBSE variant does not rely on reconstructed volumes or scatter measurements, it is possible to provide a reasonable scatter correction even for truncated scans. For both simulations and measurements, scatter artifacts were significantly reduced and the algorithm showed stable behavior in the z-direction. For simulated voxelized head, hip and thorax phantoms, a figure of merit Q of 0.82, 0.76 and 0.77 was reached, respectively (Q = 0 for uncorrected, Q = 1 for ideal). For a water phantom with 15 cm diameter, for example, a cupping reduction from 10.8% down to 2.1% was achieved. The performance of the correction method has limitations in the case of measurements using non-ideal detectors, intensity calibration, etc. An iterative approach to overcome most of these limitations is proposed. This approach is based on root finding of a cupping metric and may be useful for other scatter correction methods as well. By this optimization, cupping of the measured water phantom was further reduced down to 0.9%. The algorithm was evaluated on a commercial system, including truncated and non-homogeneous clinically relevant objects.
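The two ingredients of the algorithm, a convolution for the spatial shape of the scatter and a size-dependent scale for its magnitude, can be sketched in one dimension (the kernel and scale factor below stand in for the paper's Monte-Carlo-derived quantities and are purely illustrative):

```python
import numpy as np

def scatter_estimate(primary, kernel, size_scale):
    """Convolution-based scatter model: the normalized kernel sets the
    spatial spread of the scatter, while size_scale (looked up from the
    estimated object size in the real algorithm) sets its magnitude."""
    return size_scale * np.convolve(primary, kernel, mode="same")
```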
Ahn, Jae-Hyun; Park, Young-Je; Kim, Wonkook; Lee, Boram
2016-12-26
An estimation of the aerosol multiple-scattering reflectance is an important part of the atmospheric correction procedure in satellite ocean color data processing. Most commonly, two near-infrared (NIR) bands are used to estimate the aerosol optical properties and thus the effects of aerosols. Previously, the operational Geostationary Ocean Color Imager (GOCI) atmospheric correction scheme relied on a single-scattering reflectance ratio (SSE), developed for the processing of Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data, to determine the appropriate aerosol models and their aerosol optical thicknesses. The scheme computes reflectance contributions (weighting factors) of candidate aerosol models in the single-scattering domain, then spectrally extrapolates the single-scattering aerosol reflectance from NIR to visible (VIS) bands using the SSE. However, it applies the weight directly at all wavelengths in the multiple-scattering domain, even though the multiple-scattering aerosol reflectance has a non-linear relationship with the single-scattering reflectance and the inter-band relationship of multiple-scattering aerosol reflectances is likewise non-linear. To avoid these issues, we propose an alternative scheme for estimating the aerosol reflectance that uses the spectral relationships in the aerosol multiple-scattering reflectance between different wavelengths (called SRAMS). The process directly calculates the multiple-scattering reflectance contributions in the NIR with no residual errors for the selected aerosol models. It then spectrally extrapolates the reflectance contribution from NIR to visible bands for each selected model using the SRAMS. To assess the performance of the algorithm regarding errors in the water reflectance at the surface or remote-sensing reflectance retrieval, we compared the SRAMS atmospheric correction results with the SSE atmospheric correction using both simulations and in situ match-ups with the
Self-interaction correction in multiple scattering theory: application to transition metal oxides
Daene, Markus W; Lueders, Martin; Ernst, Arthur; Koedderitzsch, Diemo; Temmerman, Walter M; Szotek, Zdzislawa; Hergert, Wolfram
2009-01-01
We apply to transition metal monoxides the self-interaction corrected (SIC) local spin density (LSD) approximation, implemented locally in the multiple scattering theory within the Korringa-Kohn-Rostoker (KKR) band structure method. The calculated electronic structure and in particular magnetic moments and energy gaps are discussed in reference to the earlier SIC results obtained within the LMTO-ASA band structure method, involving transformations between Bloch and Wannier representations to solve the eigenvalue problem and calculate the SIC charge and potential. Since the KKR can be easily extended to treat disordered alloys, by invoking the coherent potential approximation (CPA), in this paper we compare the CPA approach and supercell calculations to study the electronic structure of NiO with cation vacancies.
Noncommutative correction to Aharonov-Bohm scattering: A field theory approach
Anacleto, M.A.; Gomes, M.; Silva, A.J. da; Spehler, D.
2004-10-15
We study a noncommutative nonrelativistic theory in 2+1 dimensions of a scalar field coupled to the Chern-Simons field. In the commutative situation this model has been used to simulate the Aharonov-Bohm effect in the field theory context. We verified that, contrary to the commutative result, the inclusion of a quartic self-interaction of the scalar field is not necessary to secure the ultraviolet renormalizability of the model. However, to obtain a smooth commutative limit the presence of a quartic gauge-invariant self-interaction is required. For small noncommutativity we determine the corrections to the Aharonov-Bohm scattering and prove that up to one loop the model is free from dangerous infrared/ultraviolet divergences.
Gearhart, A; Peterson, T; Johnson, L
2015-06-15
Purpose: To evaluate the impact of the exceptional energy resolution of germanium detectors for preclinical SPECT in comparison to conventional detectors. Methods: A cylindrical water phantom was created in GATE with a spherical Tc-99m source in the center. Sixty-four projections over 360 degrees using a pinhole collimator were simulated. The same phantom was simulated using air instead of water to establish the true reconstructed voxel intensity without attenuation. Attenuation correction based on the Chang method was performed on MLEM-reconstructed images from the water phantom to determine a quantitative measure of the effectiveness of the attenuation correction. Similarly, a NEMA phantom was simulated, and the effectiveness of the attenuation correction was evaluated. Both simulations were carried out using both NaI detectors with an energy resolution of 10% FWHM and Ge detectors with an energy resolution of 1%. Results: Analysis shows that attenuation correction without scatter correction using germanium detectors can reconstruct a small spherical source to within 3.5%. Scatter analysis showed that for standard-sized objects in a preclinical scanner, a NaI detector has a scatter-to-primary ratio between 7% and 12.5%, compared to between 0.8% and 1.5% for a Ge detector. Preliminary results from line profiles through the NEMA phantom suggest that applying attenuation correction without scatter correction provides acceptable results for the Ge detectors but overestimates the phantom activity using NaI detectors. Due to the decreased scatter, we believe that the spillover ratio for the air and water cylinders in the NEMA phantom will be lower using germanium detectors compared to NaI detectors. Conclusion: This work indicates that the superior energy resolution of germanium detectors allows fewer scattered photons to be included within the energy window compared to traditional SPECT detectors. This may allow for quantitative SPECT without implementing scatter
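The Chang method referenced above divides each reconstructed voxel by its average photon survival probability over the projection angles; a minimal first-order sketch (the uniform attenuation coefficient and path lengths are illustrative assumptions):

```python
import numpy as np

def chang_factor(mu_per_cm, path_lengths_cm):
    """First-order Chang attenuation correction factor for one voxel:
    the reciprocal of the mean survival probability exp(-mu * L) taken
    over all projection angles."""
    L = np.asarray(path_lengths_cm, dtype=float)
    return 1.0 / np.mean(np.exp(-mu_per_cm * L))

# Voxel at the center of a 4 cm radius water cylinder: every projection
# angle sees ~4 cm of water (mu ~ 0.15 /cm is an illustrative value).
f = chang_factor(0.15, [4.0] * 64)
```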
NASA Astrophysics Data System (ADS)
Chen, J.; Zebker, H. A.; Knight, R. J.
2015-12-01
InSAR is commonly used to measure surface deformation between different radar passes at cm-scale accuracy and m-scale resolution. However, InSAR measurements are often decorrelated due to vegetation growth, which greatly limits high-quality InSAR data coverage. Here we present an algorithm for retrieving InSAR deformation measurements over areas with significant vegetation decorrelation through the use of adaptive interpolation between persistent scatterer (PS) pixels, those points at which surface scattering properties do not change much over time and thus decorrelation artifacts are minimal. The interpolation filter restores phase continuity in space and greatly reduces errors in phase unwrapping. We apply this algorithm to process L-band ALOS interferograms acquired over the San Luis Valley, Colorado and the Tulare Basin, California. In both areas, groundwater extraction for irrigation results in land deformation that can be detected using InSAR. We show that the PS-based algorithm reduces the artifacts from vegetation decorrelation while preserving the deformation signature. The spatial sampling resolution achieved over agricultural fields is on the order of hundreds of meters, usually sufficient for groundwater studies. The improved InSAR data further allow us to reconstruct the SBAS ground deformation time series and transform the measured deformation to head levels using the skeletal storage coefficient and time delay constant inferred from a joint InSAR-well data analysis. The resulting InSAR-head and well-head measurements in the San Luis Valley show good agreement with primary confined aquifer pumping activities. This case study demonstrates that high-quality InSAR deformation data can be obtained over vegetation-decorrelated regions if processed correctly.
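The interpolation from PS pixels to decorrelated pixels can be sketched with inverse-distance weighting, a simple stand-in for the paper's adaptive filter (the weighting exponent and toy geometry are assumptions):

```python
import numpy as np

def idw_phase(ps_xy, ps_phase, query_xy, power=2.0):
    """Interpolate interferometric phase from persistent-scatterer (PS)
    pixels to decorrelated pixels by inverse-distance weighting, restoring
    spatial phase continuity ahead of phase unwrapping."""
    d = np.linalg.norm(query_xy[:, None, :] - ps_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12) ** power
    return (w @ ps_phase) / w.sum(axis=1)
```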
NASA Astrophysics Data System (ADS)
Khalil, Abdullah; Horowitz, W. A.
2017-01-01
We calculate the elastic scattering cross section for an electron off of a classical point source in weak-coupling perturbative quantum electrodynamics at next-to-leading-order accuracy. Since we use the \overline{MS} (modified minimal subtraction) renormalization scheme, our result is valid up to arbitrarily large momentum transfers between the source and the scattered electron.
NASA Astrophysics Data System (ADS)
Wang, Chao; Xiao, Jun; Luo, Xiaobing
2016-10-01
The neutron inelastic scattering cross section of 115In has been measured by the activation technique at neutron energies of 2.95, 3.94, and 5.24 MeV, with the neutron capture cross section of 197Au as an internal standard. The effects of multiple scattering and flux attenuation were corrected using the Monte Carlo code GEANT4. Based on the experimental values, the 115In neutron inelastic scattering cross-section data were theoretically calculated between 1 and 15 MeV with the TALYS code; the theoretical results of this study are in reasonable agreement with the available experimental results.
NASA Astrophysics Data System (ADS)
Kim, Jihun; Park, Yang-Kyun; Sharp, Gregory; Busse, Paul; Winey, Brian
2017-01-01
Proton therapy has dosimetric advantages over photon radiotherapy due to the well-defined range of the proton beam. However, when proton beams are delivered to the patient in fractionated radiation treatment, the treatment outcome is affected by delivery uncertainties such as anatomic change in the patient and daily patient setup error. This study aims to establish a method to evaluate the dosimetric impact of anatomic change and patient setup error during head and neck proton therapy. Range variations due to the delivery uncertainties were assessed by calculating the water equivalent path length (WEPL) to the distal edge of the tumor volume using planning CT and weekly treatment cone-beam CT (CBCT) images. Specifically, the mean difference and root mean squared deviation (RMSD) of the distal WEPLs were calculated as the weekly range variations. To accurately calculate the distal WEPLs, an existing CBCT scatter correction algorithm was used. An automatic rigid registration was used to align the planning CT and treatment CBCT images, simulating a six-degree-of-freedom couch correction at treatment. The authors conclude that the dosimetric impact of the anatomic change and patient setup error was reasonably captured in the differences of the distal WEPL variation, with a range calculation uncertainty of 2%. The proposed method of calculating the distal WEPL using scatter-corrected CBCT images can be an essential tool for deciding the necessity of re-planning in adaptive proton therapy.
NASA Astrophysics Data System (ADS)
Dinten, Jean-Marc; Darboux, Michel; Bordy, Thomas; Robert-Coutant, Christine; Gonon, Georges
2004-05-01
At CEA-LETI, a DEXA approach for systems using a digital 2D radiographic detector has been developed. It relies on an original X-ray scatter management method, based on the combined use of an analytical model and of scatter calibration data acquired through different thicknesses of Lucite slabs. Since the X-ray interaction properties of Lucite are equivalent to those of fat, the approach leads to a scatter flux map representative of a 100% fat region. However, patients' soft tissues are composed of lean and fat. Therefore, the obtained scatter map has to be refined to take into account the various fat ratios that patients can present. This refinement consists of establishing a formula relating the fat ratio to the thicknesses of low- and high-energy Lucite slabs leading to the same signal level. This proportion is then used to compute, on the basis of X-ray/matter interaction equations, correction factors to apply to the Lucite-equivalent X-ray scatter map. The influence of the fat ratio correction has been evaluated on a digital 2D bone densitometer, with phantoms composed of a PVC step (simulating bone) and different Lucite/water thicknesses, as well as on patients. The results show that our X-ray scatter determination approach can take into account variations of body composition.
Radiative corrections to the elastic e-p and mu-p scattering in Monte Carlo simulation approach
NASA Astrophysics Data System (ADS)
Koshchii, Oleksandr; Afanasev, Andrei; MUSE Collaboration
2015-04-01
In this paper, we calculate exact lepton-mass corrections for elastic e-p and mu-p scattering using the ELRADGEN 2.1 Monte Carlo generator. These estimates are essential for the MUSE experiment, which is designed to solve the proton radius puzzle. The puzzle is due to the fact that two methods of measuring the proton radius (the spectroscopy method, which measures proton energy levels in hydrogen, and the electron scattering experiment) predicted the radius to be 0.8768 +/- 0.0069 fm, whereas the experiment that used muonic hydrogen provided a value that is 5% smaller. Since the radiative corrections are different for electrons and muons due to their mass difference, these corrections are extremely important for the analysis and interpretation of upcoming MUSE data.
Kato, Haruhisa; Nakamura, Ayako; Takahashi, Kayori; Kinugasa, Shinichi
2012-01-01
Accurate determination of the intensity-average diameter of polystyrene latex (PS-latex) by dynamic light scattering (DLS) was carried out through extrapolation of both the concentration of PS-latex and the observed scattering angle. Intensity-average diameter and size distribution were reliably determined by asymmetric flow field flow fractionation (AFFFF) using multi-angle light scattering (MALS) with consideration of band broadening in AFFFF separation. The intensity-average diameter determined by DLS and AFFFF-MALS agreed well within the estimated uncertainties, although the size distribution of PS-latex determined by DLS was less reliable in comparison with that determined by AFFFF-MALS.
Modulator design for x-ray scatter correction using primary modulation: Material selection
Gao Hewei; Zhu Lei; Fahrig, Rebecca
2010-08-15
Purpose: An optimal material selection for the primary modulator is proposed in order to minimize beam hardening of the modulator in x-ray cone-beam computed tomography (CBCT). Recently, a measurement-based scatter correction method using primary modulation has been developed and experimentally verified. In a practical implementation, beam hardening of the modulator blocker is a limiting factor because it causes inconsistency in the primary signal and therefore degrades the accuracy of the scatter correction. Methods: This inconsistency can be purposely assigned to the effective transmission factor of the modulator, whose variation as a function of object filtration represents the magnitude of beam hardening of the modulator. In this work, the authors show that the variation reaches a minimum when the K-edge of the modulator material is near the mean energy of the system spectrum. Accordingly, an optimal material selection can be carried out in three steps. First, estimate and evaluate the polychromatic spectrum for a given x-ray system, including both source and detector; second, calculate the mean energy of the spectrum and decide on the candidate materials whose K-edge energies are near the mean energy; third, select the optimal material from the candidates after considering both the magnitude of beam hardening and the physical and chemical properties. Results: A tabletop x-ray CBCT system operated at 120 kVp is used to validate the material selection method in both simulations and experiments, from which the optimal material for this x-ray system is then chosen. With initial transmission factors of 0.905 and 0.818, simulations show that erbium provides the least variation as a function of object filtration (maximum variations are 2.2% and 4.3%, respectively, only one-third of that for copper). With different combinations of aluminum and copper filtrations (simulating a range of object thicknesses), measured overall variations are 2.5%, 1.0%, and 8
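Computationally, the three-step selection recipe reduces to taking the mean energy of the detected spectrum and ranking candidate materials by K-edge proximity. A sketch with tabulated K-edge energies (the candidate list and the flat toy spectrum are illustrative; a real system spectrum would be measured or simulated):

```python
import numpy as np

# K-edge energies in keV for a few candidate modulator materials
K_EDGES = {"copper": 8.98, "gadolinium": 50.24, "erbium": 57.49, "tungsten": 69.53}

def pick_modulator(energies_kev, weights):
    """Steps 2-3: compute the mean energy of the detected spectrum and
    return the candidate whose K-edge lies nearest to it."""
    e_mean = np.average(energies_kev, weights=weights)
    best = min(K_EDGES, key=lambda m: abs(K_EDGES[m] - e_mean))
    return best, e_mean
```

For a 120 kVp beam whose detected mean energy is near 60 keV, this selection picks erbium, consistent with the abstract's finding.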
Rana, R; Bednarek, D; Rudin, S
2015-06-15
Purpose: Anti-scatter grid-line artifacts are more prominent for high-resolution x-ray detectors since the fraction of a pixel blocked by the grid septa is large. Direct logarithmic subtraction of the artifact pattern is limited by residual scattered radiation, and we investigate an iterative method for scatter correction. Methods: A stationary Smit-Röntgen anti-scatter grid was used with a high-resolution Dexela 1207 CMOS X-ray detector (75 µm pixel size) to image an artery block (Nuclear Associates, Model 76-705) placed within a uniform head-equivalent phantom as the scattering source. The image of the phantom was divided by a flat-field image obtained without scatter but with the grid to eliminate grid-line artifacts. Constant scatter values were subtracted from the phantom image before dividing by the averaged flat-field-with-grid image. The standard deviation of pixel values for a fixed region of the resultant images with different subtracted scatter values provided a measure of the remaining grid-line artifacts. Results: A plot of the standard deviation of image pixel values versus the subtracted scatter value shows that the image structure noise reaches a minimum before rising again as the scatter value is increased. This minimum corresponds to a minimization of the grid-line artifacts, as demonstrated in line profile plots obtained through each of the images perpendicular to the grid lines. Artifact-free images of the artery block were obtained with the optimal scatter value obtained by this iterative approach. Conclusion: Residual scatter subtraction can provide improved grid-line artifact elimination when using the flat-field-with-grid “subtraction” technique. The standard deviation of image pixel values can be used to determine the optimal scatter value to subtract to obtain a minimization of grid-line artifacts with high-resolution x-ray imaging detectors. This study was supported by NIH Grant R01EB002873 and an equipment grant from Toshiba
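The iterative search described above amounts to a one-dimensional minimization: subtract a trial scatter constant, divide by the flat-field-with-grid image, and score the residual grid-line structure by its standard deviation. A sketch on synthetic data (the grid pattern and signal levels are illustrative assumptions):

```python
import numpy as np

def best_scatter_value(image, flat_with_grid, candidates):
    """Return the trial scatter constant whose subtraction, followed by
    division by the flat-field-with-grid image, minimizes the standard
    deviation of the result (i.e. the residual grid-line artifact)."""
    stds = [((image - s) / flat_with_grid).std() for s in candidates]
    return candidates[int(np.argmin(stds))]

# Synthetic 1D example: the grid modulates the primary but not the scatter.
flat = np.tile([1.0, 0.8], 50)       # flat field with grid-line pattern
image = flat * 1.0 + 0.3             # uniform primary plus constant scatter
s_opt = best_scatter_value(image, flat, np.linspace(0.0, 0.5, 11))
```

At the true scatter value the division leaves a flat image, so the standard deviation (and the grid-line artifact) vanishes.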
Algorithm for x-ray beam hardening and scatter correction in low-dose cone-beam CT: phantom studies
NASA Astrophysics Data System (ADS)
Liu, Wenlei; Rong, Junyan; Gao, Peng; Liao, Qimei; Lu, HongBing
2016-03-01
X-ray scatter, as well as beam hardening, poses a significant limitation to image quality in cone-beam CT (CBCT), resulting in image artifacts, contrast reduction, and loss of CT number accuracy. Meanwhile, the X-ray radiation dose is also non-negligible. Numerous scatter and beam-hardening correction methods have been developed independently, but they are rarely combined with low-dose CT reconstruction. In this paper, we combine scatter suppression with beam hardening correction for sparse-view CT reconstruction to improve CT image quality and reduce CT radiation dose. Firstly, scatter was measured, estimated, and removed using measurement-based methods, assuming that the signal in the lead-blocker shadow is attributable only to X-ray scatter. Secondly, beam hardening was modeled by estimating an equivalent attenuation coefficient at the effective energy, which was integrated into the forward projector of the algebraic reconstruction technique (ART). Finally, compressed sensing (CS) iterative reconstruction was carried out for sparse-view CT reconstruction to reduce the radiation dose. Preliminary Monte Carlo simulated experiments indicate that with only about 25% of the conventional dose, our method reduces the magnitude of the cupping artifact by a factor of 6.1, increases the contrast by a factor of 1.4 and the CNR by a factor of 15. The proposed method can provide good reconstructed images from a few view projections, with effective suppression of artifacts caused by scatter and beam hardening, as well as a reduced radiation dose. With this proposed framework and modeling, it may provide a new way for low-dose CT imaging.
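The "equivalent attenuation coefficient at the effective energy" idea can be sketched as a water-based linearization: precompute the polychromatic attenuation curve, invert it to recover thickness, and replace the measurement with the ideal monochromatic line integral. The two-energy toy spectrum and coefficients below are assumptions for illustration:

```python
import numpy as np

def linearize(meas_log, thickness_grid, poly_logs, mu_eff):
    """Invert a precomputed polychromatic -log(I/I0) curve to recover
    water-equivalent thickness, then return the beam-hardening-free
    monochromatic line integral mu_eff * t."""
    t = np.interp(meas_log, poly_logs, thickness_grid)
    return mu_eff * t

# Toy polychromatic curve: two energies with mu = 0.2 and 0.3 /cm, equal weight.
t_grid = np.linspace(0.0, 20.0, 201)
poly = -np.log(0.5 * (np.exp(-0.2 * t_grid) + np.exp(-0.3 * t_grid)))
```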
Kohlmyer, S.G.; Mankoff, D.A.; Lewellen, T.K.; Kaplan, M.S.
1996-12-31
The increased sensitivity of 3D PET reduces image noise but can also result in a loss of contrast due to higher scatter fractions. Phantom studies were performed to compare tumor detectability in 2D and 3D qualitative whole body PET without scatter or attenuation correction. Lesion detectability was defined as: detectability = contrast/noise = (
TU-F-18C-03: X-Ray Scatter Correction in Breast CT: Advances and Patient Testing
Ramamurthy, S; Sechopoulos, I
2014-06-15
Purpose: To further develop and perform patient testing of an x-ray scatter correction algorithm for dedicated breast computed tomography (BCT). Methods: A previously proposed algorithm for x-ray scatter signal reduction in BCT imaging was modified and tested with a phantom and on patients. A wireless electronic positioner system that moves a tungsten plate into and out of the x-ray beam was designed and added to the BCT system. The interpolation used by the algorithm was replaced with a radial basis function-based algorithm, with automated exclusion of non-valid sampled points due to patient motion or other factors. A 3D adaptive noise reduction filter was also introduced to reduce the impact of scatter quantum noise post-reconstruction. The impact of the improved algorithm on image quality was evaluated using a breast phantom and seven patient breasts, using quantitative metrics such as signal difference (SD) and signal difference-to-noise ratio (SDNR), and qualitatively using image profiles. Results: The improvements in the algorithm resulted in a more robust interpolation step, with no introduction of image artifacts, especially at the imaged object boundaries, which was an issue in the previous implementation. Qualitative evaluation of the reconstructed slices and corresponding profiles shows excellent homogeneity of both the background and the higher density features throughout the whole imaged object, as well as increased accuracy in the Hounsfield unit (HU) values of the tissues. Profiles also demonstrate a substantial increase in both SD and SDNR between glandular and adipose regions compared to both the uncorrected and system-corrected images. Conclusion: The improved scatter correction algorithm can be reliably used during patient BCT acquisitions with no introduction of artifacts, resulting in substantial improvement in image quality. Its impact on actual clinical performance needs to be evaluated in the future. Research Agreement, Koning Corp., Hologic
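The replacement interpolation step, a radial basis function fit to the valid scatter samples, can be sketched with a Gaussian RBF; NaNs mark samples excluded (e.g. for patient motion). The kernel, length scale, and exclusion criterion here are assumptions, not the paper's exact choices:

```python
import numpy as np

def rbf_interpolate(pts, vals, query, length_scale=50.0):
    """Gaussian radial-basis-function interpolation of a sparsely sampled
    scatter signal; samples marked NaN (non-valid, e.g. corrupted by
    patient motion) are excluded before the fit."""
    ok = ~np.isnan(vals)
    pts, vals = pts[ok], vals[ok]
    phi = lambda d: np.exp(-(d / length_scale) ** 2)
    A = phi(np.linalg.norm(pts[:, None] - pts[None, :], axis=2))
    w = np.linalg.solve(A, vals)        # Gaussian Gram matrix is positive definite
    B = phi(np.linalg.norm(query[:, None] - pts[None, :], axis=2))
    return B @ w
```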
NASA Astrophysics Data System (ADS)
Nguyen, Hung T.; Pabit, Suzette A.; Meisburger, Steve P.; Pollack, Lois; Case, David A.
2014-12-01
A new method is introduced to compute X-ray solution scattering profiles from atomic models of macromolecules. The three-dimensional version of the Reference Interaction Site Model (RISM) from liquid-state statistical mechanics is employed to compute the solvent distribution around the solute, including both water and ions. X-ray scattering profiles are computed from this distribution together with the solute geometry. We describe an efficient procedure for performing this calculation employing a Lebedev grid for the angular averaging. The intensity profiles (which involve no adjustable parameters) match experiment and molecular dynamics simulations up to wide angle for two proteins (lysozyme and myoglobin) in water, as well as the small-angle profiles for a dozen biomolecules taken from the BioIsis.net database. The RISM model is especially well-suited for studies of nucleic acids in salt solution. Use of fiber-diffraction models for the structure of duplex DNA in solution yields close agreement with the observed scattering profiles in both the small and wide angle scattering (SAXS and WAXS) regimes. In addition, computed profiles of anomalous SAXS signals (for Rb+ and Sr2+) emphasize the ionic contribution to scattering and are in reasonable agreement with experiment. In cases where an absolute calibration of the experimental data at q = 0 is available, one can extract a count of the excess number of waters and ions; computed values depend on the closure that is assumed in the solution of the Ornstein-Zernike equations, with results from the Kovalenko-Hirata closure being closest to experiment for the cases studied here.
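For comparison, the textbook route from an atomic model to an orientation-averaged solution scattering profile is the Debye formula; the sketch below computes it directly for point scatterers (the paper's method instead obtains the solvent contribution from 3D-RISM distributions with Lebedev angular averaging, which this toy does not reproduce):

```python
import numpy as np

def debye_profile(coords, f, q_values):
    """Orientation-averaged scattering intensity via the Debye formula,
    I(q) = sum_ij f_i f_j sin(q r_ij) / (q r_ij), for point scatterers
    with q-independent form factors f."""
    r = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    ff = np.outer(f, f)
    # np.sinc(x / pi) equals sin(x)/x and handles the r = 0 diagonal cleanly
    return np.array([np.sum(ff * np.sinc(q * r / np.pi)) for q in q_values])

coords = np.array([[0.0, 0.0, 0.0], [np.pi, 0.0, 0.0]])  # two unit scatterers
I = debye_profile(coords, np.array([1.0, 1.0]), np.array([0.0, 1.0]))
```

At q = 0 the profile reduces to the square of the total scattering strength, a convenient sanity check for any implementation.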
An efficient Monte Carlo-based algorithm for scatter correction in keV cone-beam CT
NASA Astrophysics Data System (ADS)
Poludniowski, G.; Evans, P. M.; Hansen, V. N.; Webb, S.
2009-06-01
A new method is proposed for scatter correction of cone-beam CT images. A coarse reconstruction is used in the initial iteration steps. Modelling of the x-ray tube spectra and detector response is included in the algorithm. Photon diffusion inside the imaging subject is calculated using the Monte Carlo method. Photon scoring at the detector is calculated using forced detection to a fixed set of node points. The scatter profiles are then obtained by linear interpolation. The algorithm is referred to as the coarse reconstruction and fixed detection (CRFD) technique. Scatter predictions are quantitatively validated against a widely used general-purpose Monte Carlo code, BEAMnrc/EGSnrc (NRCC, Canada); agreement is excellent. The CRFD algorithm was applied to projection data acquired with a Synergy XVI CBCT unit (Elekta Limited, Crawley, UK), using RANDO and Catphan phantoms (The Phantom Laboratory, Salem, NY, USA). The algorithm was shown to be effective in removing scatter-induced artefacts from CBCT images, and took as little as 2 min on a desktop PC. Image uniformity was greatly improved, as was CT-number accuracy in the reconstructions. This latter improvement was less marked where the expected CT number of a material was very different from that of the background material in which it was embedded.
ERIC Educational Resources Information Center
Young, Andrew T.
1982-01-01
The correct usage of such terminology as "Rayleigh scattering," "Rayleigh lines," "Raman lines," and "Tyndall scattering" is resolved during an historical excursion through the physics of light scattering by gas molecules. (Author/JN)
Scatter correction of vessel dropout behind highly attenuating structures in 4D-DSA
NASA Astrophysics Data System (ADS)
Hermus, James; Mistretta, Charles; Szczykutowicz, Timothy P.
2015-03-01
In computed tomographic (CT) image reconstruction for 4-dimensional digital subtraction angiography (4D-DSA), loss of vessel contrast has been observed behind highly attenuating anatomy, such as large contrast-filled aneurysms. Although this typically occurs only in a limited range of projection angles, the observed contrast time course can be altered. In this work we propose an algorithm to correct for highly attenuating anatomy within the fill projection data, i.e. aneurysms. The algorithm uses a 3D-SA volume to create a correction volume that is multiplied by the 4D-DSA volume in order to correct for signal dropout within the 4D-DSA volume. The algorithm was designed to correct for highly attenuating material in the fill volume only; however, with alterations to a single step of the algorithm, artifacts due to highly attenuating materials in the mask volume (i.e. dental implants) can be mitigated as well. We successfully applied our algorithm to a case of vessel dropout due to the presence of a large attenuating aneurysm. Performance was assessed visually: the affected vessel no longer dropped out in corrected 4D-DSA time frames. The correction was quantified by plotting the signal intensity along the vessel. Our analysis demonstrated that the correction does not alter vessel signal values outside of the vessel dropout region but does increase the values within the dropout region, as expected. We have demonstrated that this correction algorithm corrects vessel dropout in areas with highly attenuating materials.
NASA Astrophysics Data System (ADS)
Mann, Steve D.; Tornai, Martin P.
2015-03-01
Solid state Cadmium Zinc Telluride (CZT) gamma cameras for SPECT imaging offer significantly improved energy resolution compared to traditional scintillation detectors. However, the photopeak resolution is often asymmetric due to incomplete charge collection within the detector, resulting in many photopeak events being incorrectly sorted into lower energy bins ("tailing"). These misplaced events contaminate the true scatter signal, which may negatively impact scatter correction methods that rely on estimates of scatter from the spectra. Additionally, because CZT detectors are organized into arrays, each individual detector element may exhibit a different degree of tailing. Here, we present a modified dual-energy window scatter correction method for emission detection and imaging that attempts to account for position-dependent effects of incomplete charge collection in the CZT gamma camera of our dedicated breast SPECT-CT system. Point source measurements and geometric phantoms were used to estimate the impact of tailing on the scatter signal and to extract a better estimate of the ratio of scatter within two energy windows. To evaluate the method, cylindrical phantoms with and without a separate fillable chamber were scanned to determine the impact on quantification in hot, cold, and uniform background regions. Projections were reconstructed using OSEM, and the results for the traditional and modified scatter correction methods were compared. Results show that while modestly reduced quantification accuracy was observed in hot and cold regions of the multi-chamber phantoms, the modified scatter correction method yields up to 8% improved quantification accuracy with 4% less added noise than the traditional DEW method within uniform background regions.
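The baseline dual-energy window (DEW) idea that this method modifies can be sketched as follows; the scatter fraction k and the counts are illustrative values, not the study's calibrated, per-element numbers:

```python
def dew_scatter_correct(c_peak, c_sub, k):
    """Dual-energy-window correction: scatter inside the photopeak window
    is estimated as k times the counts in a lower 'scatter' window.  In
    an array detector, k can be calibrated per detector element, which is
    the position dependence this work addresses for CZT tailing.
    Returns corrected photopeak counts, clipped at zero."""
    return max(c_peak - k * c_sub, 0.0)

# Illustrative numbers: 1000 photopeak counts, 400 sub-window counts, k = 0.5
corrected = dew_scatter_correct(1000.0, 400.0, 0.5)
```

Tailing inflates `c_sub` with misplaced photopeak events, so an uncorrected k over-subtracts; re-estimating k per element is the essence of the modification described above.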
Zhou Haiqing; Kao Chungwen; Yang Shinnan
2007-12-31
Leading electroweak corrections play an important role in precision measurements of the strange form factors. We calculate the two-photon-exchange (TPE) and γZ-exchange corrections to the parity-violating asymmetry of the elastic electron-proton scattering in a simple hadronic model including the finite size of the proton. We find both can reach a few percent and are comparable in size with the current experimental measurements of strange-quark effects in the proton neutral weak current. The effect of γZ exchange is in general larger than that of TPE, especially at low momentum transfer Q² ≤ 1 GeV². Their combined effects on the values of G_E^s + G_M^s extracted in recent experiments can be as large as −40% in certain kinematics.
2015-07-01
Lai Y-S, Biedermann P, Ekpo UF, et al. Spatial distribution of schistosomiasis and treatment needs in sub-Saharan Africa: a systematic review and geostatistical analysis. Lancet Infect Dis 2015; published online May 22. http://dx.doi.org/10.1016/S1473-3099(15)00066-3. Figure 1 of this Article should have contained a box stating ‘100 references added’ with an arrow pointing inwards, rather than a box stating ‘199 records excluded’, and an asterisk should have been added after ‘1473 records extracted into GNTD’. Additionally, the positioning of the ‘§’ and ‘†’ footnotes has been corrected in table 1. These corrections have been made to the online version as of June 4, 2015.
2016-02-01
In the article by Guessous et al (Guessous I, Pruijm M, Ponte B, Ackermann D, Ehret G, Ansermot N, Vuistiner P, Staessen J, Gu Y, Paccaud F, Mohaupt M, Vogt B, Pechère-Bertschi A, Martin PY, Burnier M, Eap CB, Bochud M. Associations of ambulatory blood pressure with urinary caffeine and caffeine metabolite excretions. Hypertension. 2015;65:691–696. doi: 10.1161/HYPERTENSIONAHA.114.04512), which published online ahead of print December 8, 2014, and appeared in the March 2015 issue of the journal, a correction was needed. One of the author surnames was misspelled. Antoinette Pechère-Berstchi has been corrected to read Antoinette Pechère-Bertschi. The authors apologize for this error.
NASA Astrophysics Data System (ADS)
Newman, A. J.; Notaros, B. M.; Bringi, V. N.; Kleinkort, C.; Huang, G. J.; Kennedy, P.; Thurai, M.
2015-12-01
We present a novel approach to remote sensing and characterization of winter precipitation and modeling of radar observables through a synergistic use of advanced in-situ instrumentation for microphysical and geometrical measurements of ice and snow particles, image processing methodology to reconstruct complex particle three-dimensional (3D) shapes, computational electromagnetics to analyze realistic precipitation scattering, and state-of-the-art polarimetric radar. Our in-situ measurement site at the Easton Valley View Airport, La Salle, Colorado, shown in the figure, consists of two advanced optical imaging disdrometers within a 2/3-scaled double fence intercomparison reference wind shield, and also includes PLUVIO snow measuring gauge, VAISALA weather station, and collocated NCAR GPS advanced upper-air system sounding system. Our primary radar is the CSU-CHILL radar, with a dual-offset Gregorian antenna featuring very high polarization purity and excellent side-lobe performance in any plane, and the in-situ instrumentation site being very conveniently located at a range of 12.92 km from the radar. A multi-angle snowflake camera (MASC) is used to capture multiple different high-resolution views of an ice particle in free-fall, along with its fall speed. We apply a visual hull geometrical method for reconstruction of 3D shapes of particles based on the images collected by the MASC, and convert these shapes into models for computational electromagnetic scattering analysis, using a higher order method of moments. A two-dimensional video disdrometer (2DVD), collocated with the MASC, provides 2D contours of a hydrometeor, along with the fall speed and other important parameters. We use the fall speed from the MASC and the 2DVD, along with state parameters measured at the Easton site, to estimate the particle mass (Böhm's method), and then the dielectric constant of particles, based on a Maxwell-Garnet formula. By calculation of the "particle-by-particle" scattering
NASA Astrophysics Data System (ADS)
Kim, Changhwan; Park, Miran; Lee, Hoyeon; Cho, Seungryong
2016-03-01
Our earlier work has demonstrated that the data consistency condition can be used as a criterion for scatter kernel optimization in deconvolution methods in full-fan mode cone-beam CT [1]. However, this scheme cannot be directly applied to a CBCT system with an offset detector (half-fan mode) because of transverse data truncation in the projections. In this study, we propose a modified scheme of the scatter kernel optimization method that can be used in half-fan mode cone-beam CT, and successfully demonstrate its feasibility. Using the first-reconstructed volume image from half-fan projection data, we acquired full-fan projection data by forward projection synthesis. The synthesized full-fan projections were partly used to fill the truncated regions in the half-fan data. By doing so, we were able to utilize the existing data consistency-driven scatter kernel optimization method. The proposed method was validated by a simulation study using the XCAT numerical phantom and also by an experimental study using the ACS head phantom.
Wennberg, Christian L; Murtola, Teemu; Páll, Szilárd; Abraham, Mark J; Hess, Berk; Lindahl, Erik
2015-12-08
Long-range lattice summation techniques such as the particle-mesh Ewald (PME) algorithm for electrostatics have been revolutionary to the precision and accuracy of molecular simulations in general. Despite the performance penalty associated with lattice summation electrostatics, few biomolecular simulations today are performed without it. There are increasingly strong arguments for moving in the same direction for Lennard-Jones (LJ) interactions, and by using geometric approximations of the combination rules in reciprocal space, we have been able to make a very high-performance implementation available in GROMACS. Here, we present a new way to correct for these approximations to achieve exact treatment of Lorentz-Berthelot combination rules within the cutoff, and only a very small approximation error remains outside the cutoff (a part that would be completely ignored without LJ-PME). This not only improves accuracy by almost an order of magnitude but also achieves absolute biomolecular simulation performance that is an order of magnitude faster than any other available lattice summation technique for LJ interactions. The implementation includes both CPU and GPU acceleration, and its combination with improved scaling LJ-PME simulations now provides performance close to the truncated potential methods in GROMACS but with much higher accuracy.
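The combination-rule distinction at the heart of this correction can be shown directly; a minimal sketch with our own function names and illustrative parameter values:

```python
import math

def lorentz_berthelot(sig_i, eps_i, sig_j, eps_j):
    """Lorentz-Berthelot combination rules: arithmetic mean of the
    sigmas, geometric mean of the epsilons."""
    return 0.5 * (sig_i + sig_j), math.sqrt(eps_i * eps_j)

def geometric(sig_i, eps_i, sig_j, eps_j):
    """Pure geometric combination rules, which make the reciprocal-space
    LJ-PME sum separable per atom; they match Lorentz-Berthelot only
    when the two sigmas are equal, hence the exact real-space correction
    applied inside the cutoff."""
    return math.sqrt(sig_i * sig_j), math.sqrt(eps_i * eps_j)

# Unlike atoms: the two rules disagree in sigma but agree in epsilon
sig_lb, eps_lb = lorentz_berthelot(0.30, 0.8, 0.40, 0.2)
sig_g, eps_g = geometric(0.30, 0.8, 0.40, 0.2)
```

The small residual error the abstract mentions is exactly the sigma discrepancy above, confined to pairs beyond the cutoff.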
Ouyang, Luo; Lee, Huichen Pam; Wang, Jing
2015-01-01
Purpose To evaluate a moving blocker-based approach for estimating and correcting megavoltage (MV) and kilovoltage (kV) scatter contamination in kV cone-beam computed tomography (CBCT) acquired during volumetric modulated arc therapy (VMAT). Methods and materials During the concurrent CBCT/VMAT acquisition, a physical attenuator (i.e., "blocker") consisting of equally spaced lead strips was mounted and moved constantly between the CBCT source and the patient. Both kV and MV scatter signals were estimated from the blocked region of the imaging panel and interpolated into the unblocked region. A scatter-corrected CBCT was then reconstructed from the unblocked projections after scatter subtraction, using an iterative image reconstruction algorithm based on constrained optimization. Experimental studies were performed on a Catphan® phantom and an anthropomorphic pelvis phantom to demonstrate the feasibility of using a moving blocker for kV-MV scatter correction. Results Scatter-induced cupping artifacts were substantially reduced in the moving-blocker-corrected CBCT images. Quantitatively, the root mean square error of the Hounsfield units (HU) in seven density inserts of the Catphan phantom was reduced from 395 to 40. Conclusions The proposed moving blocker strategy greatly improves the image quality of CBCT acquired with concurrent VMAT by reducing the kV-MV scatter-induced HU inaccuracy and cupping artifacts. PMID:26026484
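The core estimation step, reading the scatter signal in the blocked strips and interpolating it across the unblocked region, can be sketched for a single projection as follows. This is a simplified illustration under our own naming, not the authors' implementation, which additionally handles blocker motion and the iterative reconstruction:

```python
import numpy as np

def blocker_scatter_correct(proj, blocked_cols):
    """Estimate the scatter field from blocked detector columns (where
    the lead strips stop primary photons, so the measured signal is
    scatter only), interpolate it row by row into the unblocked columns,
    and subtract.  proj: (rows, cols) projection; blocked_cols: boolean
    (cols,) mask marking columns under the strips."""
    cols = np.arange(proj.shape[1])
    scatter = np.empty_like(proj, dtype=float)
    for r in range(proj.shape[0]):
        scatter[r] = np.interp(cols, cols[blocked_cols], proj[r, blocked_cols])
    return np.clip(proj - scatter, 0.0, None), scatter

# Toy projection: scatter = 5 everywhere, primary = 10 in unblocked columns
proj = np.array([[5.0, 15.0, 5.0, 15.0, 5.0]])
mask = np.array([True, False, True, False, True])
corrected, scatter = blocker_scatter_correct(proj, mask)
```

Linear interpolation suffices here because scatter fields are spatially smooth, which is the assumption underlying all blocker-based methods.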
NASA Astrophysics Data System (ADS)
1998-12-01
Alleged mosasaur bite marks on Late Cretaceous ammonites are limpet (patellogastropod) home scars, Geology, v. 26, p. 947–950 (October 1998). This article had the following printing errors: p. 947, Abstract, line 11, “sepia” should be “septa”; p. 947, 1st paragraph under Introduction, line 2, “creep” should be “deep”; p. 948, column 1, 2nd paragraph, line 7, “creep” should be “deep”; p. 949, column 1, 1st paragraph, line 1, “creep” should be “deep”; p. 949, column 1, 1st paragraph, line 5, “19774” should be “1977)”; p. 949, column 1, 4th paragraph, line 7, “in particular” should be “In particular”. CORRECTION: Mammalian community response to the latest Paleocene thermal maximum: An isotaphonomic study in the northern Bighorn Basin, Wyoming, Geology, v. 26, p. 1011–1014 (November 1998). An error appeared in the References Cited. The correct reference appears below: Fricke, H. C., Clyde, W. C., O'Neil, J. R., and Gingerich, P. D., 1998, Evidence for rapid climate change in North America during the latest Paleocene thermal maximum: Oxygen isotope compositions of biogenic phosphate from the Bighorn Basin (Wyoming): Earth and Planetary Science Letters, v. 160, p. 193–208.
Szidarovszky, Tamás; Császár, Attila G.
2015-01-07
The total partition functions Q(T) and their first two moments Q′(T) and Q″(T), together with the isobaric heat capacities C_p(T), are computed a priori for three major MgH isotopologues over the temperature range T = 100–3000 K using the recent highly accurate potential energy curve, spin-rotation, and non-adiabatic correction functions of Henderson et al. [J. Phys. Chem. A 117, 13373 (2013)]. Nuclear-motion computations are carried out on the ground electronic state to determine the (ro)vibrational energy levels and the scattering phase shifts. The effect of resonance states is found to be significant above about 1000 K, and it increases with temperature. Even very short-lived states, due to their relatively large number, contribute significantly to Q(T) at elevated temperatures. The contribution of scattering states is around one fourth of that of resonance states but opposite in sign. Uncertainty estimates are given for the possible error sources, suggesting that all computed thermochemical properties have an accuracy better than 0.005% up to 1200 K. Between 1200 and 2500 K, the uncertainties can rise to around 0.1%, while between 2500 and 3000 K a further increase to 0.5% might be observed for Q″(T) and C_p(T), principally due to the neglect of excited electronic states. The accurate thermochemical data determined are presented in the supplementary material for the three isotopologues ²⁴MgH, ²⁵MgH, and ²⁶MgH at 1 K increments. These data, which differ significantly from older standard data, should prove useful for astronomical models incorporating thermodynamic properties of these species.
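The basic sum over computed levels behind Q(T) can be sketched as follows; the two-level system is a toy illustration with hypothetical energies, and the paper's treatment additionally includes resonance and scattering states via phase shifts, which this sketch omits:

```python
import numpy as np

K_B_CM1 = 0.6950348  # Boltzmann constant in cm^-1 / K

def partition_function(levels_cm1, degeneracies, T):
    """Q(T) = sum_i g_i exp(-E_i / (k_B T)) over (ro)vibrational levels,
    with energies in cm^-1 measured from the lowest level."""
    E = np.asarray(levels_cm1, dtype=float)
    g = np.asarray(degeneracies, dtype=float)
    return float(np.sum(g * np.exp(-E / (K_B_CM1 * T))))

# Toy two-level system: only the ground level contributes at low T,
# while every level contributes its degeneracy in the high-T limit
q_low = partition_function([0.0, 1000.0], [1, 3], 10.0)
q_high = partition_function([0.0, 1000.0], [1, 3], 1.0e9)
```

The moments Q′(T) and Q″(T) follow from the same sum with extra factors of E_i/(k_B T), which is how heat capacities are then assembled.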
NASA Astrophysics Data System (ADS)
Lajohn, L. A.; Pratt, R. H.
2015-05-01
There is no simple parameter that can be used to predict when the impulse approximation (IA) can yield accurate Compton scattering doubly differential cross sections (DDCS) in relativistic regimes. When Z is low, a small value of the parameter ⟨p⟩/q (where ⟨p⟩ is the average initial electron momentum and q is the momentum transfer) suffices. For small Z the photon-electron kinematic contribution described in relativistic S-matrix (SM) theory reduces to an expression, X_rel, which is present in the relativistic impulse approximation (RIA) formula for the Compton DDCS. When Z is high, the S-matrix photon-electron kinematics no longer reduces to X_rel, and this, along with the error characterized by the magnitude of ⟨p⟩/q, contributes to the RIA error Δ. We demonstrate and illustrate in the form of contour plots that there are regimes of incident photon energy ω_i and scattering angle θ in which the two types of errors at least partially cancel. Our calculations show that when θ is about 65° for Uranium K-shell scattering, Δ is less than 1% over an ω_i range of 300 to 900 keV.
NASA Astrophysics Data System (ADS)
Roberts, Benjamin; Dzuba, Vladimir; Flambaum, Victor; Gribakin, Gleb; Pospelov, Maxim; Stadnik, Yevgeny
2017-01-01
Atoms can become ionised during the scattering of a slow, heavy particle off a bound electron. Such an interaction involving leptophilic WIMP dark matter is a potential explanation for the anomalous 9σ annual modulation in the DAMA direct detection experiment. We show that the non-analytic, cusp-like behavior of the Coulomb functions close to the nucleus leads to an effective atomic-structure enhancement. Crucially, we also show that electron relativistic effects are important. With this in mind, we perform high-accuracy relativistic calculations of atomic ionisation. We scan the parameter space (the DM mass, the mediator mass, and the effective coupling strength) to determine if there is any region that could potentially explain the DAMA signal. While we find that the modulation fraction of all events with energy deposition above 2 keV in NaI can be quite significant, reaching 50%, the relevant parts of the parameter space are excluded by the XENON10 and XENON100 experiments.
Kim, Kio; Habas, Piotr A.; Rajagopalan, Vidya; Scott, Julia A.; Corbett-Detig, James M.; Rousseau, Francois; Barkovich, A. James; Glenn, Orit A.; Studholme, Colin
2012-01-01
A common solution to clinical MR imaging in the presence of large anatomical motion is to use fast multi-slice 2D studies to reduce slice acquisition time and provide clinically usable slice data. Recently, techniques have been developed which retrospectively correct large scale 3D motion between individual slices allowing the formation of a geometrically correct 3D volume from the multiple slice stacks. One challenge, however, in the final reconstruction process is the possibility of varying intensity bias in the slice data, typically due to the motion of the anatomy relative to imaging coils. As a result, slices which cover the same region of anatomy at different times may exhibit different sensitivity. This bias field inconsistency can induce artifacts in the final 3D reconstruction that can impact both clinical interpretation of key tissue boundaries and the automated analysis of the data. Here we describe a framework to estimate and correct the bias field inconsistency in each slice collectively across all motion corrupted image slices. Experiments using synthetic and clinical data show that the proposed method reduces intensity variability in tissues and improves the distinction between key tissue types. PMID:21511561
NASA Technical Reports Server (NTRS)
Au, C. K.
1988-01-01
In the effective-potential description for low-energy scattering involving a spinless complex (a body with internal structure), the nonadiabatic corrections are sometimes disguised in momentum-dependent terms. These are distinct from energy-dependent corrections. A general procedure is given here by which all the momentum-dependent corrections can be converted into nonadiabatic corrections in truly local form. Circumstances under which an expansion of the effective potential, in terms of the adiabatic term plus nonadiabatic and energy-dependent corrections is allowed and forbidden, are discussed. An example for the latter is in the case of near degeneracy in the spectrum of the complex or in the extrapolation of the effective potential to short-distance behavior. This indicates that certain claims of 'saturation effect' at short distances in low-energy electron-atom scattering are invalid.
Kress, J.D.
1988-07-01
Two distinct areas within theoretical chemical physics are investigated in this dissertation. First, the dynamics of collinear exchange reactions is treated within a semiclassical Gaussian wavepacket (GWP) description. Second, a corrected effective medium (CEM) theory is derived which yields: a one-active-body description of the binding energy between an atom and an inhomogeneous host; and an N-active-body description of the interaction energy for an N atom system. To properly treat the dynamics of collinear exchange reactions, two extensions to the previous methodology of GWP dynamics are presented: evaluation of the interaction picture wavefunction propagators directly via the GWP solution to the time-dependent Schrodinger equation; and use of an expansion of GWPs to represent the initial translational plane wave. This extended GWP dynamical approach is applied to the H + H₂ collinear exchange reaction using the Porter-Karplus II potential energy surface.
Wen, Han; Miao, Houxun; Bennett, Eric E; Adamo, Nick M; Chen, Lei
2013-01-01
The development of phase contrast methods for diagnostic x-ray imaging is inspired by the potential of seeing the internal structures of the human body without the need to deposit any harmful radiation. An efficient class of x-ray phase contrast imaging and scatter correction methods share the idea of using structured illumination in the form of a periodic fringe pattern created with gratings or grids. They measure the scatter and distortion of the x-ray wavefront through the attenuation and deformation of the fringe pattern via a phase stepping process. Phase stepping describes image acquisition at regular phase intervals by shifting a grating in uniform steps. However, in practical conditions the actual phase intervals can vary from step to step and also spatially. Particularly with the advent of electromagnetic phase stepping without physical movement of a grating, the phase intervals are dependent upon the focal plane of interest. We describe a demodulation algorithm for phase stepping at arbitrary and position-dependent (APD) phase intervals without assuming a priori knowledge of the phase steps. The algorithm retrospectively determines the spatial distribution of the phase intervals by a Fourier transform method. With this ability, grating-based x-ray imaging becomes more adaptable and robust for broader applications.
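The uniform-step case that the APD algorithm generalizes can be sketched directly: at each pixel the stack traces a sinusoid whose mean, amplitude and phase carry the attenuation, scatter and refraction signals, and uniform steps let one read them off from the first Fourier coefficient. A minimal numpy sketch under our own naming; the paper's contribution is precisely that it does not need the uniform-step assumption made here:

```python
import numpy as np

def demodulate_uniform_steps(stack):
    """Per-pixel retrieval of the mean (absorption), modulation amplitude
    (related to scatter) and fringe phase from a stack of K phase steps
    spaced uniformly over one period.  stack: (K, H, W)."""
    K = stack.shape[0]
    c1 = np.fft.fft(stack, axis=0)[1] / K    # first-harmonic coefficient
    return stack.mean(axis=0), 2.0 * np.abs(c1), np.angle(c1)

# Synthetic stack: I_k = 5 + 2 cos(2*pi*k/K + 0.3) at every pixel
K = 8
theta = 2.0 * np.pi * np.arange(K) / K
stack = (5.0 + 2.0 * np.cos(theta + 0.3))[:, None, None] * np.ones((1, 2, 2))
mean, amplitude, phase = demodulate_uniform_steps(stack)
```

When the actual steps drift from uniformity, the first Fourier coefficient mixes harmonics and this simple read-off biases the retrieved phase, which is the failure mode the APD method addresses.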
NASA Astrophysics Data System (ADS)
Wuhrer, R.; Moran, K.
2014-03-01
Quantitative X-ray mapping with silicon drift detectors and multi-EDS detector systems has become an invaluable analysis technique and one of the most useful methods of X-ray microanalysis today. The time to perform an X-ray map has been reduced considerably, with the ability to map minor and trace elements very accurately thanks to larger detector areas and higher-count-rate detectors. Live X-ray imaging can now be performed, with a significant amount of data collected in a matter of minutes. A great deal of information can be obtained from X-ray maps, including elemental relationship or scatter diagram creation, elemental ratio mapping, chemical phase mapping (CPM) and quantitative X-ray maps. In obtaining quantitative X-ray maps, we are able to easily generate atomic number (Z), absorption (A), fluorescence (F), theoretical back-scatter coefficient (η) and quantitative total maps from each pixel in the image. This allows us to generate an image corresponding to each factor, for each element present. These images allow users to predict and verify where problems are likely to occur in their images, and are especially helpful for examining possible interface artefacts. Post-processing techniques to improve the quantitation of X-ray map data, and their development for improved characterisation, are covered in this paper.
NASA Astrophysics Data System (ADS)
Lee, Ho; Fahimian, Benjamin P.; Xing, Lei
2017-03-01
This paper proposes a binary moving-blocker (BMB)-based technique for scatter correction in cone-beam computed tomography (CBCT). In concept, a beam blocker consisting of lead strips, mounted in front of the x-ray tube, moves rapidly in and out of the beam during a single gantry rotation. The projections are acquired in alternating phases of blocked and unblocked cone beams, where the blocked phase results in a stripe pattern in the width direction. To derive the scatter map from the blocked projections, 1D B-Spline interpolation/extrapolation is applied by using the detected information in the shaded regions. The scatter map of the unblocked projections is corrected by averaging two scatter maps that correspond to their adjacent blocked projections. The scatter-corrected projections are obtained by subtracting the corresponding scatter maps from the projection data and are utilized to generate the CBCT image by a compressed-sensing (CS)-based iterative reconstruction algorithm. Catphan504 and pelvis phantoms were used to evaluate the method’s performance. The proposed BMB-based technique provided an effective method to enhance the image quality by suppressing scatter-induced artifacts, such as ring artifacts around the bowtie area. Compared to CBCT without a blocker, the spatial nonuniformity was reduced from 9.1% to 3.1%. The root-mean-square error of the CT numbers in the regions of interest (ROIs) was reduced from 30.2 HU to 3.8 HU. In addition to high resolution, comparable to that of the benchmark image, the CS-based reconstruction also led to a better contrast-to-noise ratio in seven ROIs. The proposed technique enables complete scatter-corrected CBCT imaging with width-truncated projections and allows reducing the acquisition time to approximately half. This work may have significant implications for image-guided or adaptive radiation therapy, where CBCT is often used.
Asl, Mahsa Noori; Sadremomtaz, Alireza; Bitarafan-Rajabi, Ahmad
2013-10-01
Compton-scattered photons included within the photopeak pulse-height window degrade SPECT images both qualitatively and quantitatively. The purpose of this study is to evaluate and compare six scatter correction methods based on setting energy windows in the (99m)Tc spectrum. The SIMIND Monte Carlo simulation is used to generate projection images from a cold-sphere hot-background phantom. For evaluation of the different scatter correction methods, three assessment criteria are considered: image contrast, signal-to-noise ratio (SNR) and relative noise of the background (RNB). Except for the dual-photopeak window (DPW) method, the image contrast of the five cold spheres is improved in the range of 2.7-26%. Among the methods considered, two show a nonuniform correction performance. The RNB for the scatter correction methods ranges from a minimum of 0.03 for the DPW method to a maximum of 0.0727 for the triple-energy-window (TEW) method using the trapezoidal approximation. The TEW method using the triangular approximation, because of its ease of implementation, good improvement of the image contrast and SNR for the five cold spheres, and low noise level, is proposed as the most appropriate correction method.
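The window-based estimate these methods share can be sketched for the TEW case; the window widths and counts below are illustrative, not the study's settings:

```python
def tew_scatter(c_left, c_right, w_left, w_right, w_peak):
    """Triple-energy-window estimate: the scatter under the photopeak is
    the area of a trapezoid whose sides are the counts per keV measured
    in two narrow windows flanking the photopeak.  The triangular
    variant drops the upper-window term, since for 99mTc little scatter
    falls above the photopeak."""
    return (c_left / w_left + c_right / w_right) * w_peak / 2.0

# Illustrative numbers: 30 counts in a 3 keV lower window, none above,
# and a 20 keV photopeak window holding 1000 counts
s = tew_scatter(30.0, 0.0, 3.0, 3.0, 20.0)
corrected = max(1000.0 - s, 0.0)
```

The narrow flanking windows are noisy, which is why the abstract weighs the contrast gain of each method against its noise level.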
NASA Astrophysics Data System (ADS)
Sun, Junfeng; Chang, Qin; Hu, Xiaohui; Yang, Yueling
2015-04-01
In this paper, we investigate the contributions of hard spectator scattering and annihilation in B → PV decays within the QCD factorization framework. With available experimental data on B → πK*, ρK, πρ and Kϕ decays, comprehensive χ² analyses of the parameters X_{A,H}^{i,f} (ρ_{A,H}^{i,f}, ϕ_{A,H}^{i,f}) are performed, where X_A^f (X_A^i) and X_H are used to parameterize the endpoint divergences of the (non)factorizable annihilation and hard spectator scattering amplitudes, respectively. Based on the χ² analyses, it is observed that (1) the topology-dependent parameterization scheme is feasible for B → PV decays; (2) at the current accuracy of experimental measurements and theoretical evaluations, X_H = X_A^i is allowed by B → PV decays, but X_H ≠ X_A^f at 68% C.L.; (3) with the simplification X_H = X_A^i, the parameters X_A^f and X_A^i should be treated individually. These findings are very similar to those obtained from B → PP decays. Numerically, for B → PV decays, we obtain (ρ_{A,H}^i, ϕ_{A,H}^i [°]) = (2.87 +0.66/−1.95, −145 +14/−21) and (ρ_A^f, ϕ_A^f [°]) = (0.91 +0.12/−0.13, −37 +10/−9) at 68% C.L. With the best-fit values, most of the theoretical results are in good agreement with the experimental data within errors. However, significant corrections to the color-suppressed tree amplitude α2 related to a large ρ_H result in the wrong sign for A_CP^dir(B- → π0K*-) compared with the most recent BABAR data, which presents a new obstacle to solving the "ππ" and "πK" puzzles through α2. A crosscheck with higher-precision measurements at Belle (or Belle II) and LHCb is urgently needed to confirm or refute this possible mismatch.
NASA Astrophysics Data System (ADS)
Kim, J. H.; Kim, S. W.; Yoon, S. C.; Park, R.; Ogren, J. A.
2014-12-01
Filter-based instruments, such as the aethalometer, are widely used to measure equivalent black carbon (EBC) mass concentration and the aerosol absorption coefficient (AAC). However, many previous studies have pointed out that the AAC and the corresponding absorption Angstrom exponent (AAE) are strongly affected by the multiple-scattering correction factor (C) when the AAC is retrieved from aethalometer EBC mass concentration measurements (Weingartner et al., 2003; Arnott et al., 2005; Schmid et al., 2006; Coen et al., 2010). We determined the C value using the method of Weingartner et al. (2003) by comparing a 7-wavelength aethalometer (AE-31, Magee Scientific) with a 3-wavelength Photo-Acoustic Soot Spectrometer (PASS-3, DMT) at Gosan Climate Observatory, Korea (GCO) during the Cheju ABC Plume-Asian Monsoon Experiment (CAPMEX) campaign (August and September 2008). In this study, C was estimated to be 4.04 ± 1.68 at 532 nm, and the AAC retrieved with this value was approximately 100% smaller than that retrieved with the soot-case value from Weingartner et al. (2003). We compared the AAC determined from aethalometer measurements with that from collocated Continuous Light Absorption Photometer (CLAP) measurements from January 2012 to December 2013 at GCO and found good agreement in both AAC and AAE. These results suggest that determining a site-specific C is crucial when calculating the AAC from aethalometer measurements.
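The determination of C and its use in the AAC retrieval can be sketched as below. This is a minimal illustration assuming the simplest Weingartner-style relation b_ATN ≈ C · b_abs with the filter-loading term neglected (R ≈ 1); all numbers are synthetic, not campaign data.

```python
import numpy as np

def multiple_scattering_factor(b_atn, b_abs_ref):
    """Estimate the multiple-scattering correction factor C as the
    zero-intercept slope of the filter attenuation coefficient versus a
    reference absorption coefficient (e.g. photoacoustic):
    b_atn ~= C * b_abs_ref."""
    b_atn = np.asarray(b_atn, float)
    b_abs_ref = np.asarray(b_abs_ref, float)
    return float(np.sum(b_atn * b_abs_ref) / np.sum(b_abs_ref ** 2))

def absorption_from_aethalometer(b_atn, C):
    """Aerosol absorption coefficient from aethalometer attenuation,
    neglecting the filter-loading correction (R ~= 1)."""
    return np.asarray(b_atn, float) / C

# Synthetic coincident measurements (Mm^-1) with a "true" C of 4.0
rng = np.random.default_rng(0)
b_ref = rng.uniform(5.0, 50.0, 200)
b_atn = 4.0 * b_ref + rng.normal(0.0, 0.5, 200)
C = multiple_scattering_factor(b_atn, b_ref)
```

With C in hand, `absorption_from_aethalometer` converts any subsequent attenuation measurement to an absorption coefficient.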
Häggström, I; Karlsson, M; Larsson, A; Schmidtlein, C
2014-06-15
Purpose: To investigate the effects of corrections for random and scattered coincidences on kinetic parameters in brain tumors, using ten Monte Carlo (MC) simulated dynamic FLT-PET brain scans. Methods: The GATE MC software was used to simulate ten repetitions of a 1 h dynamic FLT-PET scan of a voxelized head phantom. The phantom comprised six normal head tissues, plus inserted regions for blood and tumor tissue. Different time-activity curves (TACs) for all eight tissue types were used in the simulation; they were generated in Matlab using a 2-tissue model with preset parameter values (K1, k2, k3, k4, Va, Ki). The PET data were reconstructed into 28 frames by both ordered-subset expectation maximization (OSEM) and 3D filtered back-projection (3DFBP). Five image sets were reconstructed, all with normalization and different additional corrections (A = attenuation, R = random, S = scatter): trues (AC), trues + randoms (ARC), trues + scatters (ASC), total counts (ARSC) and total counts (AC). Corrections for randoms and scatters were based on real random and scatter sinograms that were back-projected, blurred and then forward-projected and scaled to match the real counts. Weighted non-linear least-squares fitting of TACs from the blood and tumor regions was used to obtain parameter estimates. Results: The bias was not significantly different between trues (AC), trues + randoms (ARC), trues + scatters (ASC) and total counts (ARSC) for either 3DFBP or OSEM (p<0.05). Total counts with only AC stood out, however, with up to 160% larger bias. In general, there was no difference in bias between 3DFBP and OSEM, except in the parameters Va and Ki. Conclusion: According to our results, the methodology of correcting the PET data for randoms and scatters performed well for the dynamic images, where frames have much lower counts than static images. Generally, no bias was introduced by the corrections, and their importance was emphasized, since omitting them increased bias extensively.
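A minimal sketch of generating a tissue TAC from a 2-tissue compartment model with preset parameters, as in the simulation setup above, might look like the following. The input function, integration scheme (simple forward Euler) and parameter values are illustrative assumptions, not those of the study.

```python
import numpy as np

def two_tissue_tac(t, cp, K1, k2, k3, k4, Va):
    """Tissue time-activity curve from a 2-tissue compartment model:
        dC1/dt = K1*Cp - (k2 + k3)*C1 + k4*C2
        dC2/dt = k3*C1 - k4*C2
    integrated by forward Euler (t in min, rate constants in 1/min).
    The measured TAC mixes tissue activity with fractional blood volume Va."""
    c1 = np.zeros_like(t)
    c2 = np.zeros_like(t)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        dc1 = K1 * cp[i - 1] - (k2 + k3) * c1[i - 1] + k4 * c2[i - 1]
        dc2 = k3 * c1[i - 1] - k4 * c2[i - 1]
        c1[i] = c1[i - 1] + dt * dc1
        c2[i] = c2[i - 1] + dt * dc2
    return (1.0 - Va) * (c1 + c2) + Va * cp

t = np.linspace(0.0, 60.0, 3601)        # 1 h scan sampled at 1 s steps
cp = 10.0 * t * np.exp(-t / 2.0)        # hypothetical arterial input function
tac = two_tissue_tac(t, cp, K1=0.1, k2=0.1, k3=0.05, k4=0.01, Va=0.05)
```

Fitting the model to measured TACs (the study's weighted non-linear least-squares step) then amounts to minimizing the residual between `tac` and the data over (K1, k2, k3, k4, Va).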
Stimpson, Shane; Collins, Benjamin; Kochunas, Brendan
2017-03-10
The MPACT code, being developed collaboratively by the University of Michigan and Oak Ridge National Laboratory, is the primary deterministic neutron transport solver deployed within the Virtual Environment for Reactor Applications (VERA) as part of the Consortium for Advanced Simulation of Light Water Reactors (CASL). In many applications of MPACT, transport-corrected scattering has proven to be an obstacle to stability, and considerable effort has been made to resolve the convergence issues that arise from it. Most of the convergence problems appear related to the transport-corrected cross sections, particularly when used in the 2-D method of characteristics (MOC) solver, which is the focus of this work. In this paper, the stability and performance of the 2-D MOC solver in MPACT are evaluated for two iteration schemes: Gauss-Seidel and Jacobi. With the Gauss-Seidel approach, as the MOC solver loops over groups, it uses the flux solution from the previous group to construct the in-scatter source for the next group. Alternatively, the Jacobi approach uses only the fluxes from the previous outer iteration to determine the in-scatter source for each group. Consequently, for the Jacobi iteration the loop over groups can be moved from the outermost loop, as is the case with the Gauss-Seidel sweeper, to the innermost loop, allowing a substantial increase in efficiency by minimizing the overhead of retrieving segment, region, and surface index information from the ray tracing data. Several test problems are assessed: (1) Babcock & Wilcox 1810 Core I, (2) Dimple S01A-Sq, (3) VERA Progression Problem 5a, and (4) VERA Problem 2a. The Jacobi iteration exhibits better stability than Gauss-Seidel, allowing converged solutions to be obtained over a much wider range of iteration control parameters. Additionally, the MOC solve time with the Jacobi approach is roughly 2.0-2.5× faster per sweep. While the performance and stability of
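The difference between the two in-scatter source orderings can be illustrated on a toy infinite-medium multigroup fixed-source problem. This sketch shows only the iteration structure, not MPACT's MOC solver; the cross sections below are made up.

```python
import numpy as np

def source_iteration(sig_t, sig_s, src, scheme="jacobi", tol=1e-10, max_it=500):
    """Solve the infinite-medium multigroup fixed-source balance
        sig_t[g] * phi[g] = src[g] + sum_{g'} sig_s[g', g] * phi[g'],
    either Jacobi-style (in-scatter built only from the previous outer
    iterate) or Gauss-Seidel-style (freshly updated group fluxes feed
    the in-scatter source within the same sweep)."""
    G = len(sig_t)
    phi = np.zeros(G)
    for _ in range(max_it):
        phi_old = phi.copy()
        if scheme == "jacobi":
            inscatter = phi_old @ sig_s        # whole source from old iterate
            phi = (src + inscatter) / sig_t
        else:                                  # "gauss-seidel"
            for g in range(G):
                inscatter = phi @ sig_s[:, g]  # uses freshest fluxes available
                phi[g] = (src[g] + inscatter) / sig_t[g]
        if np.max(np.abs(phi - phi_old)) < tol:
            break
    return phi

sig_t = np.array([1.0, 1.2, 1.5])              # total cross sections
sig_s = np.array([[0.2, 0.3, 0.1],             # scattering matrix, row g' -> col g
                  [0.0, 0.3, 0.4],
                  [0.0, 0.1, 0.5]])
src = np.array([1.0, 0.0, 0.0])                # fixed source in the fast group
phi_j  = source_iteration(sig_t, sig_s, src, "jacobi")
phi_gs = source_iteration(sig_t, sig_s, src, "gauss-seidel")
```

Both orderings converge to the same fixed point; they differ only in how stale the in-scatter fluxes are, which is what makes the Jacobi loop reorderable as described above.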
Spencer, Robert J; Axelrod, Bradley N; Drag, Lauren L; Waldron-Perrine, Brigid; Pangilinan, Percival H; Bieliauskas, Linas A
2013-01-01
Reliable Digit Span (RDS) is a measure of effort derived from the Digit Span subtest of the Wechsler intelligence scales. Some authors have suggested that the age-corrected scaled score provides a more accurate measure of effort than RDS. This study examined the relative diagnostic accuracy of the traditional RDS, an extended RDS including the new Sequencing task from the Wechsler Adult Intelligence Scale-IV, and the age-corrected scaled score, relative to performance validity as determined by the Test of Memory Malingering. Data were collected from 138 Veterans seen in a traumatic brain injury clinic. The traditional RDS (≤7), revised RDS (≤11), and Digit Span age-corrected scaled score (≤6) had respective sensitivities of 39%, 39%, and 33%, and respective specificities of 82%, 89%, and 91%. Of these three indices, the revised RDS and the Digit Span age-corrected scaled score provided the most accurate measures of performance validity.
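The reported sensitivities and specificities follow from applying a cutoff to scores against a criterion measure. A minimal sketch, with hypothetical scores rather than the study's data:

```python
import numpy as np

def cutoff_accuracy(scores, invalid, cutoff):
    """Sensitivity/specificity of the rule 'score <= cutoff flags invalid
    effort' against a binary criterion (e.g. TOMM failure).

    sensitivity = fraction of invalid-effort cases flagged
    specificity = fraction of valid-effort cases not flagged"""
    scores = np.asarray(scores)
    invalid = np.asarray(invalid, bool)
    flagged = scores <= cutoff
    sensitivity = float(np.mean(flagged[invalid]))
    specificity = float(np.mean(~flagged[~invalid]))
    return sensitivity, specificity

# Hypothetical RDS scores: 10 invalid-effort cases, then 10 valid-effort cases
scores  = [5, 6, 7, 7, 8, 9, 10, 11, 12, 13,  8, 9, 10, 11, 11, 12, 12, 13, 14, 15]
invalid = [1] * 10 + [0] * 10
sens, spec = cutoff_accuracy(scores, invalid, cutoff=7)
```

Sweeping `cutoff` over the observed score range traces out the trade-off from which cut scores like ≤7 or ≤11 are chosen.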
Hua, Dengxin; Uchida, Masaru; Kobayashi, Takao
2005-03-01
A Rayleigh-Mie-scattering lidar system at an eye-safe 355 nm ultraviolet wavelength that is based on a high-spectral-resolution lidar technique is demonstrated for measuring the vertical temperature profile of the troposphere. Two Rayleigh signals, which determine the atmospheric temperature, are filtered with two Fabry-Perot etalon filters. The filters are located on the same side of the wings of the Rayleigh-scattering spectrum and are optically constructed with a dual-pass optical layout. This configuration achieves a high rejection rate for Mie scattering and reasonable transmission for Rayleigh scattering. The Mie signal is detected with a third Fabry-Perot etalon filter, which is centered at the laser frequency. The filter parameters were optimized by numerical calculation; the results showed that a Mie rejection of approximately -45 dB and a Rayleigh transmittance greater than 1% could be achieved for the two Rayleigh channels. A Mie correction method is demonstrated that uses an independent measure of the aerosol scattering to correct the temperature measurements that have been influenced by aerosols and clouds. Simulations and preliminary experiments have demonstrated that the performance of the dual-pass etalon and the Mie correction method is highly effective in practical applications. Simulation results have shown that the temperature errors due to noise are less than 1 K up to a height of 4 km for daytime measurement at 300 W m^-2 sr^-1 μm^-1 sky brightness with a lidar system that uses 200 mJ of laser energy, a 3.5 min integration time, and a 25 cm telescope.
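Temperature retrieval from the two filtered Rayleigh channels ultimately reduces to inverting a calibrated, monotonic signal-ratio curve R(T). A minimal sketch with a made-up linear calibration curve (a real curve would come from the etalon transmission functions and the Rayleigh line shape):

```python
import numpy as np

def invert_ratio_to_temperature(ratio, T_cal, R_cal):
    """Invert a monotonic calibrated signal-ratio curve R(T) of the two
    etalon-filtered Rayleigh channels to temperature by interpolation."""
    R_cal = np.asarray(R_cal, float)
    T_cal = np.asarray(T_cal, float)
    order = np.argsort(R_cal)              # np.interp needs ascending abscissa
    return np.interp(ratio, R_cal[order], T_cal[order])

# Hypothetical calibration: the channel ratio grows linearly with temperature
T_cal = np.linspace(200.0, 300.0, 101)
R_cal = 0.5 + 0.004 * (T_cal - 200.0)
T = invert_ratio_to_temperature(0.7, T_cal, R_cal)
```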
NASA Astrophysics Data System (ADS)
Pham, A. T.; Nguyen, C. D.; Jungemann, C.; Meinerzhagen, B.
2006-04-01
A new semiempirical surface scattering model for electrons in strained Si devices, including a quantum correction, has been developed and implemented into our FBMC simulator. The strain is assumed to be consistent with pseudomorphic growth on a relaxed SiGe buffer. By introducing a few additional terms into the physical scattering rates which depend on the Ge content of the SiGe buffer, the new surface scattering model can excellently reproduce low-field inversion layer mobility measurements for a wide range of Ge content (0-30%) and substrate doping levels (10^16 to 5.5 × 10^18 cm^-3). As a device example, an NMOSFET with a 23 nm gate length, with and without a strained Si channel, has been simulated with the new FBMC model.
Dawn, Sandipan; Bakshi, A K; Sathian, Deepa; Selvam, T Palani
2016-10-07
Neutron scatter contributions as a function of distance along the transverse axis of an (241)Am-Be source were estimated by three different methods: shadow cone, semi-empirical and Monte Carlo. The Monte Carlo-based FLUKA code was used to simulate the existing room used for the calibration of CR-39 detectors as well as an LB6411 dose rate meter, for selected distances from the (241)Am-Be source. The modified (241)Am-Be spectra at different irradiation geometries, such as different source-detector distances, behind the shadow cone, and at the surface of the water phantom, were also evaluated using Monte Carlo calculations. The neutron scatter contributions estimated by the three methods compare reasonably well. It is proposed to use the scattering correction factors estimated through Monte Carlo simulation and the other methods for the calibration of the CR-39 detector and dose rate meter at 0.75 and 1 m distance from the source.
NASA Astrophysics Data System (ADS)
Holka, Filip; Szalay, Péter G.; Fremont, Julien; Rey, Michael; Peterson, Kirk A.; Tyuterev, Vladimir G.
2011-03-01
High-level ab initio potential energy functions have been constructed for LiH in order to predict vibrational levels up to dissociation. After careful tests of the parameters of the calculation, the final adiabatic potential energy function has been composed from: (a) an ab initio nonrelativistic potential obtained at the multireference configuration interaction with singles and doubles level, including a size-extensivity correction and quintuple-sextuple ζ basis extrapolations, (b) a mass-velocity-Darwin relativistic correction, and (c) a diagonal Born-Oppenheimer (BO) correction. Finally, nonadiabatic effects have also been considered by including a nonadiabatic correction to the kinetic energy operator of the nuclei. This correction is calculated from nonadiabatic matrix elements between the ground and excited electronic states. The calculated vibrational levels have been compared with those obtained from the experimental data [J. A. Coxon and C. S. Dickinson, J. Chem. Phys. 134, 9378 (2004)]. It was found that the calculated BO potential results in vibrational levels with root mean square (rms) deviations of about 6-7 cm⁻¹ for LiH and ~3 cm⁻¹ for LiD. With all the above-mentioned corrections accounted for, the rms deviation falls to ~1 cm⁻¹. These results represent a drastic improvement over previous theoretical predictions of vibrational levels for all isotopologues of LiH.
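Given a potential energy function, vibrational levels of this kind are obtained by solving the 1-D nuclear Schrödinger equation on the potential. A minimal finite-difference sketch follows, using a Morse oscillator with illustrative parameters as a stand-in for the ab initio LiH potential, since the Morse levels are known analytically and can be checked:

```python
import numpy as np

def vibrational_levels(V, x, mass, n_levels=5):
    """Lowest bound levels of H = -1/(2m) d2/dx2 + V(x) (atomic units),
    discretized by a second-order finite-difference Hamiltonian with
    implicit Dirichlet boundaries at the grid edges."""
    dx = x[1] - x[0]
    n = len(x)
    kin = -1.0 / (2.0 * mass * dx * dx)          # off-diagonal kinetic term
    H = (np.diag(V(x) - 2.0 * kin)               # diagonal: V(x) + 1/(m dx^2)
         + np.diag(np.full(n - 1, kin), 1)
         + np.diag(np.full(n - 1, kin), -1))
    return np.linalg.eigvalsh(H)[:n_levels]      # ascending eigenvalues

# Morse stand-in, illustrative parameters in atomic units (not LiH's)
D, a, re, m = 0.10, 1.0, 2.0, 1000.0
V = lambda r: D * (1.0 - np.exp(-a * (r - re))) ** 2
x = np.linspace(0.5, 10.0, 1500)
E = vibrational_levels(V, x, m)

# Analytic Morse levels: E_n = w(n+1/2) - w^2/(4D) (n+1/2)^2, w = a*sqrt(2D/m)
w = a * np.sqrt(2.0 * D / m)
E_exact = [w * (n + 0.5) - w ** 2 / (4.0 * D) * (n + 0.5) ** 2 for n in range(5)]
```

For a tabulated ab initio potential, `V(x)` would be replaced by a spline through the computed points, and the reduced mass by that of the isotopologue of interest.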
Larsson, Anne; Johansson, Lennart
2003-11-21
In single photon emission computed tomography (SPECT), transmission-dependent convolution subtraction has been shown to be useful when correcting for scattered events. The method is based on convolution subtraction, but includes a matrix of scatter fractions instead of a global scatter fraction. The method can be extended to iteratively improve the scatter estimate, but in this note we show that this requires a modification of the theory to use scatter-to-total scatter fractions for the first iteration only and scatter-to-primary fractions thereafter. To demonstrate this, scatter correction is performed on a Monte Carlo simulated image of a point source of activity in water. The modification of the theory is compared to corrections where the scatter fractions are based on the scatter-to-total ratio, using one and ten iterations. The resulting ratios of subtracted to original counts are compared to the true scatter-to-total ratio of the simulation and the most accurate result is found for our modification of the theory.
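The iteration scheme described in this note can be sketched in 1-D as follows. This is a simplified illustration with a single global scatter-to-total and scatter-to-primary fraction and a flat kernel, rather than the transmission-dependent matrix of fractions used in the actual method; all data are synthetic.

```python
import numpy as np

def convolution_subtract(total, kernel, stf, spf, n_iter=20):
    """Iterative convolution-subtraction scatter correction (1-D sketch).

    Per the note: the FIRST pass scales the convolved estimate by the
    scatter-to-TOTAL fraction (applied to the measured counts); all
    subsequent passes scale by the scatter-to-PRIMARY fraction (applied
    to the current primary estimate)."""
    primary = total - stf * np.convolve(total, kernel, mode="same")
    for _ in range(n_iter - 1):
        primary = total - spf * np.convolve(primary, kernel, mode="same")
    return np.clip(primary, 0.0, None)

# Toy projection: a point source plus a broad convolution-generated scatter tail
kernel = np.ones(11) / 11.0                 # flat normalized scatter kernel
primary_true = np.zeros(101)
primary_true[50] = 100.0
spf = 0.4                                   # scatter-to-primary fraction
scatter = spf * np.convolve(primary_true, kernel, mode="same")
total = primary_true + scatter
stf = scatter.sum() / total.sum()           # scatter-to-total, first pass only
est = convolution_subtract(total, kernel, stf, spf)
```

Because the synthetic scatter obeys the same convolution model, the iteration recovers the true primary distribution essentially exactly; on real data the kernel and fractions are only approximations.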
NASA Astrophysics Data System (ADS)
Minet, Olaf; Scheibe, Patrick; Beuthan, Jürgen; Zabarylo, Urszula
2010-02-01
State-of-the-art image processing methods offer new possibilities for diagnosing diseases using scattered light. The optical diagnosis of rheumatism is taken as an example to show that the diagnostic sensitivity can be improved using overlapped pseudo-coloured images of different wavelengths, provided that multispectral images are recorded to compensate for any motion related artefacts which occur during examination.
Graudenz, D.
1994-04-01
Jet cross sections in deeply inelastic scattering in the case of transverse photon exchange for the production of (1+1) and (2+1) jets are calculated in next-to-leading-order QCD (here the "+1" stands for the target remnant jet, which is included in the jet definition). The jet definition scheme is based on a modified JADE cluster algorithm. The calculation of the (2+1) jet cross section is described in detail. Results for the virtual corrections as well as for the real initial- and final-state corrections are given explicitly. Numerical results are presented for jet cross sections as well as for the ratio σ_(2+1) jet/σ_tot that can be expected at E665 and DESY HERA. Furthermore, the scale ambiguity of the calculated jet cross sections is studied and different parton density parametrizations are compared.
2008-08-01
… empirical correction factor β for the effects of cold expansion in 2024-T351 aluminum alloy. This method takes into account the interaction of the …
Frolov, Alexei M; Wardlaw, David M
2014-09-14
Mass-dependent and field shift components of the isotopic shift are determined to high accuracy for the ground 1¹S states of some light two-electron Li⁺, Be²⁺, B³⁺, and C⁴⁺ ions. To determine the field components of these isotopic shifts we apply the Racah-Rosental-Breit formula. We also determine the lowest-order QED corrections to the isotopic shifts for each of these two-electron ions.
NASA Astrophysics Data System (ADS)
Holstensson, M.; Erlandsson, K.; Poludniowski, G.; Ben-Haim, S.; Hutton, B. F.
2015-04-01
An advantage of semiconductor-based dedicated cardiac single photon emission computed tomography (SPECT) cameras when compared to conventional Anger cameras is superior energy resolution. This provides the potential for improved separation of the photopeaks in dual radionuclide imaging, such as combined use of 99mTc and 123I . There is, however, the added complexity of tailing effects in the detectors that must be accounted for. In this paper we present a model-based correction algorithm which extracts the useful primary counts of 99mTc and 123I from projection data. Equations describing the in-patient scatter and tailing effects in the detectors are iteratively solved for both radionuclides simultaneously using a maximum a posteriori probability algorithm with one-step-late evaluation. Energy window-dependent parameters for the equations describing in-patient scatter are estimated using Monte Carlo simulations. Parameters for the equations describing tailing effects are estimated using virtually scatter-free experimental measurements on a dedicated cardiac SPECT camera with CdZnTe-detectors. When applied to a phantom study with both 99mTc and 123I, results show that the estimated spatial distribution of events from 99mTc in the 99mTc photopeak energy window is very similar to that measured in a single 99mTc phantom study. The extracted images of primary events display increased cold lesion contrasts for both 99mTc and 123I.
NASA Astrophysics Data System (ADS)
Chang, Qin; Li, Xiao-Nan; Sun, Jun-Feng; Yang, Yue-Ling
2016-10-01
In this paper, the contributions of weak annihilation and hard spectator scattering in B → ρK*, K*K̄*, φK*, ρρ and φφ decays are investigated within the framework of QCD factorization. Using the available experimental data, we perform χ² analyses of the end-point parameters in four cases based on the topology-dependent and polarization-dependent parameterization schemes. The fitted results indicate that: (i) in the topology-dependent scheme, the relation (ρ^i_A, φ^i_A)
Schowalter, M; Müller, K; Rosenauer, A
2012-01-01
Modified atomic scattering amplitudes (MASAs), taking into account the redistribution of charge due to bonds, and the respective correction factors considering the effect of static atomic displacements were computed for the chemically sensitive 002 reflection for ternary III-V and II-VI semiconductors. MASAs were derived from computations within the density functional theory formalism. Binary eight-atom unit cells were strained according to each strain state s (thin, intermediate, thick and fully relaxed electron microscopic specimen) and each concentration (x = 0, …, 1 in 0.01 steps), where the lattice parameters for composition x in strain state s were calculated using continuum elasticity theory. The concentration dependence was derived by computing MASAs for each of these binary cells. Correction factors for static atomic displacements were computed from relaxed atom positions by generating 50 × 50 × 50 supercells using the lattice parameter of the eight-atom unit cells. Atoms were randomly distributed according to the required composition. Polynomials were fitted to the composition dependence of the MASAs and the correction factors for the different strain states. Fit parameters are given in the paper.
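The final fitting step described above, condensing a computed composition dependence into polynomial coefficients, can be sketched as below. The sampled f002(x) values are invented for illustration; the paper's actual amplitudes depend on strain state and material system.

```python
import numpy as np

# Hypothetical composition dependence of a modified atomic scattering
# amplitude f002(x), sampled at x = 0 ... 1 in 0.01 steps as in the paper,
# then condensed into cubic polynomial fit parameters.
x = np.arange(0.0, 1.01, 0.01)
f002 = 1.50 - 0.80 * x + 0.25 * x ** 2 - 0.05 * x ** 3   # made-up values

coeffs = np.polyfit(x, f002, deg=3)       # highest-order coefficient first
f002_fit = np.polyval(coeffs, x)          # reconstruct the fitted curve
```

Tabulating `coeffs` per strain state reproduces the kind of compact parameter table the paper provides in place of the raw computed amplitudes.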
NASA Astrophysics Data System (ADS)
Gorchtein, Mikhail
2014-11-01
Two-photon-exchange (TPE) contributions to elastic electron-proton scattering in the forward regime are considered in the leading logarithmic ∼ t ln|t| approximation in the momentum transfer t. The imaginary part of the TPE amplitude in the forward kinematics is related to the total photoabsorption cross section. The real part of the TPE amplitude is obtained from an unsubtracted fixed-t dispersion relation. This allows a clean prediction of the real part of the TPE amplitude at forward angles with the leading term ∼ t ln|t|. Numerical estimates are comparable with or exceed the experimental precision in extracting the charge radius from the experimental data.
Particle Diffusion Due to Coulomb Scattering
V. Lebedev and S. Nagaitsev
2002-06-03
Conventionally, multiple and single particle scattering in a storage ring are considered to be independent. Such an approach is simple and often yields sufficiently accurate results. Nevertheless, there is a class of problems where it is not adequate and single and multiple scattering need to be considered together. This can be achieved by solving an integro-differential equation for the particle distribution function, which correctly treats particle Coulomb scattering in the presence of betatron motion. A derivation of the equation is presented in this article. A numerical solution for one practical case is also considered.
Li, Y.; Krieger, J.B.; Norman, M.R.; Iafrate, G.J.
1991-11-15
The optimized-effective-potential (OEP) method and a method developed recently by Krieger, Li, and Iafrate (KLI) are applied to band-structure calculations of noble-gas and alkali-halide solids employing the self-interaction-corrected (SIC) local-spin-density (LSD) approximation for the exchange-correlation energy functional. The resulting band gaps from both calculations are found to be in fair agreement with the experimental values. The discrepancies are typically within a few percent, with results that are nearly the same as those of previously published orbital-dependent multipotential SIC calculations, whereas the LSD results underestimate the band gaps by as much as 40%. As in the LSD (and, it is believed, even for the exact Kohn-Sham potential), both the OEP and KLI predict valence-band widths which are narrower than those of experiment. In all cases, the KLI method yields essentially the same results as the OEP.
Sakaridis, Ioannis; Ganopoulos, Ioannis; Argiriou, Anagnostis; Tsaftaris, Athanasios
2013-05-01
The substitution of high-priced meat with low-cost alternatives and the fraudulent labeling of meat products make the identification and traceability of meat species and their processed products in the food chain important. A polymerase chain reaction followed by high resolution melting (HRM) analysis was developed for species-specific detection of buffalo and applied to six commercial meat products. A pair of specific 12S and universal 18S rRNA primers were employed, yielding DNA fragments of 220 bp and 77 bp, respectively. All tested products were found to contain buffalo meat and presented melting curves with at least two visible inflection points derived from the amplicons of the 12S specific and 18S universal primers. The presence of buffalo meat in meat products and the adulteration of buffalo products with unknown species were established down to a level of 0.1%. HRM proved to be a fast and accurate technique for authentication testing of meat products.
Two-loop master integrals for the mixed EW-QCD virtual corrections to Drell-Yan scattering
NASA Astrophysics Data System (ADS)
Bonciani, Roberto; Di Vita, Stefano; Mastrolia, Pierpaolo; Schubert, Ulrich
2016-09-01
We present the calculation of the master integrals needed for the two-loop QCD × EW corrections to q + q̄ → l⁻ + l⁺ and q + q̄′ → l⁻ + ν̄, for massless external particles. We treat the W and Z bosons as degenerate in mass. We identify three types of diagrams, according to the presence of massive internal lines: the no-mass type, the one-mass type, and the two-mass type, where all massive propagators, when occurring, contain the same mass value. We find a basis of 49 master integrals and evaluate them with the method of differential equations. The Magnus exponential is employed to choose a set of master integrals that obeys a canonical system of differential equations. Boundary conditions are found either by matching the solutions onto simpler integrals in special kinematic configurations, or by requiring the regularity of the solution at pseudothresholds. The canonical master integrals are finally given as Taylor series around d = 4 space-time dimensions, up to order four, with coefficients given in terms of iterated integrals, respectively up to weight four.
Scattered radiation in flat-detector based cone-beam CT: analysis of voxelized patient simulations
NASA Astrophysics Data System (ADS)
Wiegert, Jens; Bertram, Matthias
2006-03-01
This paper presents a systematic assessment of scattered radiation in flat-detector-based cone-beam CT. The analysis is based on simulated scatter projections of voxelized CT images of different body regions, allowing accurate quantification of scattered radiation for realistic and clinically relevant patient geometries. Using analytically computed primary projection data of high spatial resolution in combination with Monte Carlo simulated scattered radiation, practically noise-free reference data sets are computed with and without the inclusion of scatter. The impact of scatter is studied both in the projection data and in the reconstructed volume for the head, thorax, and pelvis regions. Currently available anti-scatter grid geometries do not sufficiently compensate for scatter-induced cupping and streak artifacts, so additional software-based scatter correction is required. The required accuracy of scatter compensation approaches increases with increasing patient size.
Yin, Lingshu; Shcherbinin, Sergey; Celler, Anna
2010-10-01
Purpose: To assess the impact of attenuation and scatter corrections on the calculation of single photon emission computed tomography (SPECT)-weighted mean dose (SWMD) and functional volume segmentation as applied to radiation therapy treatment planning for lung cancer. Methods and Materials: Nine patients with lung cancer underwent a SPECT lung perfusion scan. For each scan, four image sets were reconstructed using the ordered-subsets expectation maximization method, with attenuation and scatter corrections ranging from none to the most comprehensive combination of attenuation correction and direct scatter modeling. Functional volumes were segmented in each reconstructed image using 10%, 20%, ..., 90% of maximum SPECT intensity as a threshold. Systematic effects of the SPECT reconstruction method on treatment planning using functional volume were studied by calculating the size and spatial agreement of functional volumes, and V20 for the functional volume from actual treatment plans. The SWMD was calculated for radiation beams with a variety of possible gantry angles and field sizes. Results: Functional volume segmentation is sensitive to the particular method of SPECT reconstruction used. Large variations in functional volumes, as high as >50%, were observed in SPECT images reconstructed with different attenuation/scatter corrections. However, the SWMD was less sensitive to the type of scatter correction: it was consistent within 2% for all reconstructions as long as computed tomography-based attenuation correction was used. Conclusion: When using perfusion SPECT images during treatment planning optimization/evaluation, the SWMD may be the preferred figure of merit, as it is less affected by reconstruction technique than threshold-based functional volume segmentation.
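The SWMD figure of merit is simply a perfusion-weighted average of the dose distribution. A minimal sketch (the 4-voxel numbers are purely illustrative):

```python
import numpy as np

def spect_weighted_mean_dose(dose, spect, mask=None):
    """SPECT-weighted mean dose: mean dose weighted by perfusion intensity,
    SWMD = sum_i(D_i * w_i) / sum_i(w_i), optionally restricted to a
    lung mask."""
    dose = np.asarray(dose, float)
    w = np.asarray(spect, float)
    if mask is not None:
        dose, w = dose[mask], w[mask]
    return float(np.sum(dose * w) / np.sum(w))

# Toy example: 4 voxels, dose in Gy, SPECT counts as perfusion weights
dose = np.array([10.0, 20.0, 30.0, 40.0])
spect = np.array([4.0, 3.0, 2.0, 1.0])
swmd = spect_weighted_mean_dose(dose, spect)
```

Because every voxel contributes in proportion to its weight, the result is less sensitive to how any single intensity threshold is chosen, which is the intuition behind the paper's conclusion.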
Hayashi, Hisashi; Hiraoka, Nozomu
2015-04-30
Using a third-generation synchrotron source (the BL12XU beamline at SPring-8), inelastic X-ray scattering (IXS) spectra of liquid water and liquid benzene were measured at energy losses of 1-100 eV with 0.24 eV resolution for small momentum transfers (q) of 0.23 and 0.32 au with ±0.06 au uncertainty for q. For both liquids, the IXS profiles at these values of q converged well after we corrected for multiple scattering, and these results confirmed the dipole approximation for q ≤ ∼0.3 au. Several dielectric and optical functions [including the optical oscillator strength distribution (OOS), the optical energy-loss function (OLF), the complex dielectric function, the complex index of refraction, and the reflectance] in the vacuum ultraviolet region were derived and tabulated from these small-angle (small q) IXS spectra. These new data were compared with previously obtained results, and this comparison demonstrated the strong reproducibility and accuracy of IXS spectroscopy. For both water and benzene, there was a notable similarity between the OOSs of the liquids and amorphous solids, and there was no evidence of plasmon excitation in the OLF. The static structure factor [S(q)] for q ≤ ∼0.3 au was also deduced and suggests that molecular models that include electron correlation effects can serve as a good approximation for the liquid S(q) values over the full range of q.
Cheong, Kit-Leong; Wu, Ding-Tao; Zhao, Jing; Li, Shao-Ping
2015-06-26
In this study, a rapid and accurate method for quantitative analysis of natural polysaccharides and their different fractions was developed. First, high-performance size exclusion chromatography (HPSEC) was used to separate the natural polysaccharides. The molecular masses of their fractions were then determined by multi-angle laser light scattering (MALLS). Finally, the polysaccharides or their fractions were quantified based on their refractive index detector (RID) response and their universal refractive index increment (dn/dc). The accuracy of the developed method was determined for the quantification of individual and mixed polysaccharide standards, including konjac glucomannan, CM-arabinan, xyloglucan, larch arabinogalactan, oat β-glucan, dextran (410, 270, and 25 kDa), mixed xyloglucan and CM-arabinan, and mixed dextran 270 K and CM-arabinan; average recoveries were between 90.6% and 98.3%. The limits of detection (LOD) and quantification (LOQ) ranged from 10.68 to 20.25 μg/mL and from 42.70 to 68.85 μg/mL, respectively. Compared to the conventional phenol-sulfuric acid assay and HPSEC coupled with evaporative light scattering detection (HPSEC-ELSD), the developed HPSEC-MALLS-RID method based on a universal dn/dc is simpler, faster, and more accurate, requiring neither individual polysaccharide standards nor calibration curves. The method was also successfully applied to the quantitative analysis of polysaccharides and their different fractions from three medicinal plants of the Panax genus: Panax ginseng, Panax notoginseng, and Panax quinquefolius. The results suggest that the HPSEC-MALLS-RID method based on a universal dn/dc could be used as a routine technique for the quantification of polysaccharides and their fractions in natural resources.
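The universal-dn/dc idea reduces to a one-line relation: the integrated RID response of a peak is proportional to the injected mass times the refractive index increment. A schematic sketch of that relation; the detector constant and peak area below are hypothetical numbers for illustration only:

```python
def polysaccharide_mass(rid_peak_area, dn_dc, rid_constant):
    """Mass of a polysaccharide fraction from its RID peak area, assuming
    area = rid_constant * dn_dc * mass, so that a single universal dn/dc
    replaces per-standard calibration curves."""
    return rid_peak_area / (rid_constant * dn_dc)

# Hypothetical run: peak area 2.9e4, dn/dc = 0.145 mL/g, detector constant 1e3
mass = polysaccharide_mass(2.9e4, 0.145, 1.0e3)
```

The units of the result are set by the detector constant, which in practice is fixed once per instrument.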
NASA Astrophysics Data System (ADS)
Balaguer Ríos, D.; Aulenbacher, K.; Baunack, S.; Diefenbach, J.; Gläser, B.; von Harrach, D.; Imai, Y.; Kabuß, E.-M.; Kothe, R.; Lee, J. H.; Merkel, H.; Mora Espí, M. C.; Müller, U.; Schilling, E.; Weinrich, C.; Capozza, L.; Maas, F. E.; Arvieux, J.; El-Yakoubi, M. A.; Frascaria, R.; Kunne, R. A.; Ong, S.; van de Wiele, J.; Kowalski, S.; Prok, Y.
2016-09-01
A new measurement of the parity-violating asymmetry in electron-deuteron quasielastic scattering at backward angles at ⟨Q²⟩ = 0.224 (GeV/c)², obtained in the A4 experiment at the Mainz Microtron (MAMI) facility, is presented. The measured asymmetry is APVd = (-20.11 ± 0.87(stat) ± 1.03(sys)) × 10⁻⁶. A combination of these data with the proton measurements of the parity-violating asymmetry in the A4 experiment yields a value of GAe(T=1) = -0.19 ± 0.43 for the effective isovector axial-vector form factor and RA(T=1),anap = -0.41 ± 0.35 for the anapole radiative correction. When combined with a reanalysis of measurements obtained in the G0 experiment at the Thomas Jefferson National Accelerator Facility, the uncertainties are further reduced to GMs = 0.17 ± 0.11 for the magnetic strange form factor and RA(T=1),anap = -0.54 ± 0.26.
NASA Technical Reports Server (NTRS)
Gordon, Howard R.; Wang, Menghua
1992-01-01
The first step in the Coastal Zone Color Scanner (CZCS) atmospheric-correction algorithm is the computation of the Rayleigh-scattering (RS) contribution, L_r, to the radiance leaving the top of the atmosphere over the ocean. In the present algorithm, L_r is computed by assuming that the ocean surface is flat. Calculations of the radiance leaving an RS atmosphere overlying a rough Fresnel-reflecting ocean are presented to evaluate the radiance error caused by the flat-ocean assumption. Simulations are carried out to evaluate the error incurred when the CZCS-type algorithm is applied to a realistic ocean in which the surface is roughened by the wind. In situations where there is no direct sun glitter, it is concluded that the error induced by ignoring the Rayleigh-aerosol interaction is usually larger than that caused by ignoring the surface roughness. This suggests that, in refining algorithms for future sensors, more effort should be focused on dealing with the Rayleigh-aerosol interaction than on the roughness of the sea surface.
Ouyang, L; Lee, H; Wang, J
2014-06-01
Purpose: To evaluate a moving-blocker-based approach to estimating and correcting megavoltage (MV) and kilovoltage (kV) scatter contamination in kV cone-beam computed tomography (CBCT) acquired during volumetric modulated arc therapy (VMAT). Methods: XML code was generated to enable concurrent CBCT acquisition and VMAT delivery in Varian TrueBeam developer mode. A physical attenuator (i.e., "blocker") consisting of equally spaced lead strips (3.2 mm strip width and 3.2 mm gap in between) was mounted between the x-ray source and patient at a source-to-blocker distance of 232 mm. The blocker was simulated to be moving back and forth along the gantry rotation axis during the CBCT acquisition. Both MV and kV scatter signals were estimated simultaneously from the blocked regions of the imaging panel and interpolated into the unblocked regions. Scatter-corrected CBCT was then reconstructed from the unblocked projections after scatter subtraction, using an iterative image reconstruction algorithm based on constrained optimization. Experimental studies were performed on a Catphan 600 phantom and an anthropomorphic pelvis phantom to demonstrate the feasibility of using a moving blocker for MV-kV scatter correction. Results: MV scatter greatly degrades CBCT image quality by increasing CT number inaccuracy and decreasing image contrast, in addition to the shading artifacts caused by kV scatter. These artifacts were substantially reduced in the moving-blocker corrected CBCT images of both the Catphan and pelvis phantoms. Quantitatively, the CT number error in selected regions of interest was reduced from 377 HU in the kV-MV contaminated CBCT image to 38 HU for the Catphan phantom. Conclusions: The moving-blocker-based strategy can successfully correct MV and kV scatter simultaneously in CBCT projection data acquired with concurrent VMAT delivery. This work was supported in part by a grant from the Cancer Prevention and Research Institute of Texas (RP130109) and a grant from the American
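In one dimension, the blocked-strip idea amounts to sampling the scatter field behind the lead strips and interpolating it across the open gaps before subtraction. A toy NumPy sketch; the geometry and signal levels are invented for illustration, not taken from the experiment:

```python
import numpy as np

def correct_row(raw, blocked_mask):
    """Estimate scatter from blocked detector pixels and subtract it from
    the unblocked ones (1-D sketch of the lead-strip geometry)."""
    x = np.arange(raw.size)
    # Blocked pixels see (almost) only scatter; interpolate across the gaps.
    scatter_est = np.interp(x, x[blocked_mask], raw[blocked_mask])
    corrected = raw - scatter_est
    corrected[blocked_mask] = 0.0    # no primary signal behind the strips
    return np.clip(corrected, 0.0, None)

# Toy detector row: primary signal of 100 in open pixels, smooth scatter ramp
mask = np.arange(8) % 2 == 0                 # even pixels behind lead strips
primary = np.where(mask, 0.0, 100.0)
scatter = np.linspace(10.0, 20.0, 8)
est = correct_row(primary + scatter, mask)
```

Because scatter varies slowly across the detector, even this crude linear interpolation recovers the primary signal in the interior open pixels.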
Osbahr, Inga; Krause, Joachim; Bachmann, Kai; Gutzmer, Jens
2015-10-01
Identification and accurate characterization of platinum-group minerals (PGMs) is usually a very cumbersome procedure due to their small grain size (typically below 10 µm) and inconspicuous appearance under reflected light. A novel strategy for finding PGMs and quantifying their composition was developed. It combines a mineral liberation analyzer (MLA), a point logging system, and electron probe microanalysis (EPMA). As a first step, the PGMs are identified using the MLA. Grains identified as PGMs are then marked and coordinates recorded and transferred to the EPMA. Case studies illustrate that the combination of MLA, point logging, and EPMA results in the identification of a significantly higher number of PGM grains than reflected light microscopy. Analysis of PGMs by EPMA requires considerable effort due to the often significant overlaps between the X-ray spectra of almost all platinum-group and associated elements. X-ray lines suitable for quantitative analysis need to be carefully selected. As peak overlaps cannot be avoided completely, an offline overlap correction based on weight proportions has been developed. Results obtained with the procedure proposed in this study attain acceptable totals and atomic proportions, indicating that the applied corrections are appropriate.
NASA Astrophysics Data System (ADS)
Zhang, T.; Zhou, L.; Tong, S.
2015-12-01
The absolute determination of the Cu isotope ratio in NIST SRM 3114 based on a regression mass bias correction model is performed for the first time, with NIST SRM 944 Ga as the calibrant. A value of 0.4471 ± 0.0013 (2SD, n=37) for the 65Cu/63Cu ratio was obtained, with a value of +0.18 ± 0.04‰ (2SD, n=5) for δ65Cu relative to NIST 976. The availability of the NIST SRM 3114 material, now with an absolute 65Cu/63Cu ratio and a δ65Cu value relative to NIST 976, makes it suitable as a new candidate reference material for Cu isotope studies. In addition, a protocol is described for the accurate and precise determination of δ65Cu values of geological reference materials. Purification of Cu from the sample matrix was performed using the AG MP-1M Bio-Rad resin. The column recovery for geological samples was found to be 100 ± 2% (2SD, n=15). A modified method of standard-sample bracketing with internal normalization for mass bias correction was employed by adding natural Ga to both the sample and the solution of NIST SRM 3114, which was used as the bracketing standard. The absolute value of 0.4471 ± 0.0013 (2SD, n=37) for 65Cu/63Cu quantified in this study was used to calibrate the 69Ga/71Ga ratio in the two adjacent bracketing standards of SRM 3114; their average 69Ga/71Ga value was then used to correct the 65Cu/63Cu ratio in the sample. Measured δ65Cu values of 0.18 ± 0.04‰ (2SD, n=20), 0.13 ± 0.04‰ (2SD, n=9), 0.08 ± 0.03‰ (2SD, n=6), 0.01 ± 0.06‰ (2SD, n=4), and 0.26 ± 0.04‰ (2SD, n=7) were obtained for the five geological reference materials BCR-2, BHVO-2, AGV-2, BIR-1a, and GSP-2, respectively, in agreement with values obtained in previous studies.
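The internal-normalization step can be sketched with the exponential mass-bias law: the admixed Ga fixes the fractionation exponent, which is then applied to the Cu ratio. The reference 69Ga/71Ga value and the isotope masses below are nominal literature numbers used only for illustration, not the calibration values of the study:

```python
import math

# Nominal isotope masses (u) and an assumed reference 69Ga/71Ga ratio
M63, M65 = 62.92960, 64.92779
M69, M71 = 68.92557, 70.92470
GA_TRUE = 1.50676

def beta_from_ga(ga_measured):
    """Fractionation exponent from the Ga internal standard
    (exponential law: r_true = r_meas * (m_num/m_den)**beta)."""
    return math.log(GA_TRUE / ga_measured) / math.log(M69 / M71)

def corrected_cu65_63(cu_measured, ga_measured):
    """Apply the Ga-derived exponent to the measured 65Cu/63Cu ratio."""
    return cu_measured * (M65 / M63) ** beta_from_ga(ga_measured)
```

A self-consistency check (synthesize biased ratios with a known exponent and recover the true value) is a useful sanity test for any implementation of this scheme.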
NASA Astrophysics Data System (ADS)
Chimot, J.; Vlemmix, T.; Veefkind, J. P.; de Haan, J. F.; Levelt, P. F.
2016-02-01
The Ozone Monitoring Instrument (OMI) has provided daily global measurements of tropospheric NO2 for more than a decade. Numerous studies have drawn attention to the complexities related to measurements of tropospheric NO2 in the presence of aerosols. Fine particles affect the OMI spectral measurements and the length of the average light path followed by the photons. However, they are not explicitly taken into account in the current operational OMI tropospheric NO2 retrieval chain (DOMINO - Derivation of OMI tropospheric NO2) product. Instead, the operational OMI O2 - O2 cloud retrieval algorithm is applied both to cloudy and to cloud-free scenes (i.e. clear sky) dominated by the presence of aerosols. This paper describes in detail the complex interplay between the spectral effects of aerosols in the satellite observation and the associated response of the OMI O2 - O2 cloud retrieval algorithm. Then, it evaluates the impact on the accuracy of the tropospheric NO2 retrievals through the computed Air Mass Factor (AMF) with a focus on cloud-free scenes. For that purpose, collocated OMI NO2 and MODIS (Moderate Resolution Imaging Spectroradiometer) Aqua aerosol products are analysed over the strongly industrialized East China area. In addition, aerosol effects on the tropospheric NO2 AMF and the retrieval of OMI cloud parameters are simulated. Both the observation-based and the simulation-based approach demonstrate that the retrieved cloud fraction increases with increasing Aerosol Optical Thickness (AOT), but the magnitude of this increase depends on the aerosol properties and surface albedo. This increase is induced by the additional scattering effects of aerosols which enhance the scene brightness. The decreasing effective cloud pressure with increasing AOT primarily represents the shielding effects of the O2 - O2 column located below the aerosol layers. The study cases show that the aerosol correction based on the implemented OMI cloud model results in biases
Complete Monte Carlo Simulation of Neutron Scattering Experiments
NASA Astrophysics Data System (ADS)
Drosg, M.
2011-12-01
In the far past, it was not possible to accurately correct for the finite geometry and the finite sample size of a neutron scattering set-up. The limited calculation power of the computers of that era, the lack of powerful Monte Carlo codes, and the limited databases then available prevented a complete simulation of the actual experiment. Using, e.g., the Monte Carlo neutron transport code MCNPX [1], neutron scattering experiments can now be simulated almost completely, with a high degree of precision, on a modern PC, which has a computing power ten thousand times that of a supercomputer of the early 1970s. Thus, (better) corrections can also be obtained easily for previously published data, provided that these experiments are sufficiently well documented. Better knowledge of reference data (e.g., atomic masses, relativistic corrections, and monitor cross sections) further contributes to data improvement. Elastic neutron scattering experiments on liquid samples of the helium isotopes performed around 1970 at LANL happen to be very well documented. Considering that the cryogenic targets are expensive and complicated, it is certainly worthwhile to improve these data by correcting them using this comparatively straightforward method. As two thirds of all differential scattering cross section data of 3He(n,n)3He are connected to the LANL data, it became necessary to correct the dependent data measured in Karlsruhe, Germany, as well. A thorough simulation of both the LANL experiments and the Karlsruhe experiment is presented, starting from the neutron production, followed by the interaction in the air, the interaction with the cryostat structure, and finally the scattering medium itself. In addition, scattering from the hydrogen reference sample was simulated. For the LANL data, the multiple scattering corrections are smaller by a factor of at least five, making this work relevant. Even more important are the corrections to the Karlsruhe data due to the
Quantum Corrections to the Conductivity in Disordered Conductors
NASA Astrophysics Data System (ADS)
Sahnoune, Abdelhadi
Quantum corrections to the conductivity have been studied at low temperatures down to 0.15 K and fields up to 8.8 T in two different disordered systems, namely amorphous Ca-Al alloys doped with Ag and Au, and icosahedral Al-Cu-Fe alloys. In the former, the influence of spin-orbit scattering on the enhanced electron-electron contribution to the resistivity has been clearly displayed for the first time. As the spin-orbit scattering rate increases, this contribution decreases rapidly, finally vanishing at extremely high spin-orbit scattering rates. Furthermore, the analysis shows that the current weak localization theory gives an accurate description of the experiments irrespective of the level of spin-orbit scattering. In the icosahedral Al-Cu-Fe alloys, a detailed study of the low temperature resistivity shows that the magnetoresistance and the temperature dependence of the resistivity data are consistent with the predictions of quantum corrections to the conductivity theories. The success of these theories in this alloy system is attributed to intense electron scattering due to disorder. The spin-orbit scattering and electron wave-function dephasing rates are extracted from fits to the magnetoresistance. The dephasing rate is found to vary as AT^p with p ≈ 1.5, a characteristic of electron-electron scattering in the strong disorder limit. An antilocalization effect has also been directly observed in the temperature dependence of the resistivity in one of the samples.
Data consistency-driven scatter kernel optimization for x-ray cone-beam CT
NASA Astrophysics Data System (ADS)
Kim, Changhwan; Park, Miran; Sung, Younghun; Lee, Jaehak; Choi, Jiyoung; Cho, Seungryong
2015-08-01
Accurate and efficient scatter correction is essential for the acquisition of high-quality x-ray cone-beam CT (CBCT) images for various applications. This study was conducted to demonstrate the feasibility of using the data consistency condition (DCC) as a criterion for scatter kernel optimization in scatter deconvolution methods in CBCT. Because data consistency in the mid-plane of a CBCT scan is primarily challenged by scatter, we utilized it to confirm the degree of scatter correction and to steer the updates in iterative kernel optimization. By means of the parallel-beam DCC via fan-parallel rebinning, we iteratively optimized the scatter kernel parameters, using a particle swarm optimization algorithm for its computational efficiency and excellent convergence. The proposed method was validated in a simulation study using the XCAT numerical phantom and in experimental studies using the ACS head phantom and the pelvic part of the Rando phantom. The results showed that the proposed method can effectively improve the accuracy of deconvolution-based scatter correction. Quantitative assessment of image quality parameters such as contrast and structural similarity (SSIM) revealed that the optimally selected scatter kernel recovers up to 99.5%, 94.4%, and 84.4% of the scatter-free contrast, and up to 96.7%, 90.5%, and 87.8% of the scatter-free SSIM, in the XCAT study, the ACS head phantom study, and the pelvis phantom study, respectively. The proposed method can achieve accurate and efficient scatter correction from a single cone-beam scan without the need for auxiliary hardware or additional experimentation.
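The kernel-parameter search can be driven by any derivative-free optimizer. Below is a minimal particle swarm sketch in which a toy quadratic stands in for the parallel-beam data-consistency residual; the real cost in the paper requires the rebinned projection data, and the parameter names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(cost, bounds, n_particles=20, n_iters=60, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer over box-bounded parameters."""
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pcost = np.array([cost(p) for p in x])
    gbest = pbest[pcost.argmin()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, lo.size))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        improved = c < pcost
        pbest[improved], pcost[improved] = x[improved], c[improved]
        gbest = pbest[pcost.argmin()].copy()
    return gbest, float(pcost.min())

# Toy stand-in for the DCC residual, minimized at amplitude 0.3, width 14 mm
dcc_residual = lambda p: (p[0] - 0.3) ** 2 + ((p[1] - 14.0) / 10.0) ** 2
best, val = pso_minimize(dcc_residual, [(0.0, 1.0), (1.0, 30.0)])
```

PSO's appeal here is exactly what the abstract cites: it needs only cost evaluations, no gradients of the consistency residual.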
Multiple Scattering Theory of XAFS
NASA Astrophysics Data System (ADS)
Zabinsky, Steven Ira
A multiple scattering theory of XAFS for arbitrary materials, with convergence to full multiple scattering calculations and to experiment, is presented. It is shown that the multiple scattering expansion converges with a small number of paths. The theory is embodied in an efficient, automated computer code that provides accurate theoretical multiple scattering standards for use in experimental analysis. The basis of this work is a new path enumeration and filtering algorithm. Paths are constructed for an arbitrary cluster in order of increasing path length. Filters based on the relative importance of the paths in the plane wave approximation and on the random phase approximation limit the number of paths, so that all important paths with effective path length up to the mean free path length (between 10 and 20 A) can be considered. Quantitative expressions for path proliferation and relative path importance are presented. The calculations are compared with full multiple scattering calculations for Cu and Al. In the case of fcc Cu, the path filters reduce the number of paths from 60 billion to only 56 paths in a cluster of radius 12.5 A. These 56 paths are sufficient to converge the calculation to within the uncertainty inherent in the band structure calculation. Based on an analysis of these paths, a new hypothesis is presented for consideration: single scattering, double scattering, and all orders of scattering that involve only forward or back scattering are sufficient to describe XAFS. Comparisons with experiment for Cu, Pt, and Ti demonstrate the accuracy of the calculation through the fourth shell. The correlated Debye model is used to determine Debye-Waller factors; the strengths and weaknesses of this approach are discussed. Preliminary calculations of the x-ray absorption near edge structure (XANES) have also been performed. The calculations compare well with Cu, Pt, and Ti experiments. The white line in the Pt absorption edge is calculated correctly. There are
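The enumeration-and-filter idea can be illustrated in a few lines: generate candidate paths over a small cluster, discard those longer than a cutoff, and keep only paths whose importance estimate survives a threshold. The damping form exp(-2R/λ)/R² below is a crude toy stand-in for the plane-wave importance used in the actual code, and the cluster is invented:

```python
import itertools
import math

ORIGIN = (0.0, 0.0, 0.0)   # the absorbing atom

def half_path_length(path):
    """Effective (half total) travel distance absorber -> scatterers -> absorber."""
    pts = [ORIGIN, *path, ORIGIN]
    return 0.5 * sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

def important_paths(scatterers, max_legs, r_max, lam=10.0, cutoff=1e-4):
    """Enumerate short scattering paths over a small cluster, keeping those
    whose crude importance estimate exp(-2R/lam)/R**2 survives a cutoff."""
    kept = []
    for order in range(1, max_legs):          # number of intermediate scatterers
        for path in itertools.product(scatterers, repeat=order):
            # skip unphysical zero-length legs (same atom twice in a row)
            if any(a == b for a, b in zip(path, path[1:])):
                continue
            r = half_path_length(path)
            if r > r_max:
                continue
            if math.exp(-2.0 * r / lam) / r**2 >= cutoff:
                kept.append((r, path))
    return sorted(kept)

# Two-atom toy cluster, each atom 2.5 units from the absorber
kept = important_paths([(2.5, 0.0, 0.0), (0.0, 2.5, 0.0)], max_legs=3, r_max=10.0)
```

Even this sketch shows the combinatorics the filters must tame: the path count grows as (number of atoms)^order before filtering.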
Some atmospheric scattering considerations relevant to BATSE: A model calculation
NASA Technical Reports Server (NTRS)
Young, John H.
1986-01-01
The orbiting Burst and Transient Source Experiment (BATSE) will locate gamma ray burst sources by analyzing the relative numbers of photons coming directly from a source and entering its prescribed array of detectors. In order to locate burst sources accurately, it is thus necessary to identify and correct for any counts contributed by events other than direct entry of source photons. An effort is described that estimates the number of photons that might be scattered into the BATSE detectors by interactions with the Earth's atmosphere. A model was developed which yielded analytical expressions for the single-scatter photon contributions in terms of source and satellite locations.
Sasaya, Tenta; Sunaguchi, Naoki; Thet-Lwin, Thet-; Hyodo, Kazuyuki; Zeniya, Tsutomu; Takeda, Tohoru; Yuasa, Tetsuya
2017-01-01
We propose a pinhole-based fluorescent x-ray computed tomography (p-FXCT) system with a 2-D detector and volumetric beam that can suppress the quality deterioration caused by scatter components. In the corresponding p-FXCT technique, projections are acquired at individual incident energies just above and below the K-edge of the imaged trace element; then, reconstruction is performed based on the two sets of projections using a maximum likelihood expectation maximization algorithm that incorporates the scatter components. We constructed a p-FXCT imaging system and performed a preliminary experiment using a physical phantom and an I imaging agent. The proposed dual-energy p-FXCT improved the contrast-to-noise ratio by a factor of more than 2.5 compared to that attainable using mono-energetic p-FXCT for a 0.3 mg/ml I solution. We also imaged an excised rat’s liver infused with a Ba contrast agent to demonstrate the feasibility of imaging a biological sample. PMID:28272496
Trinquier, Anne; Touboul, Mathieu; Walker, Richard J
2016-02-02
Determination of the (182)W/(184)W ratio to a precision of ± 5 ppm (2σ) is desirable for constraining the timing of core formation and other early planetary differentiation processes. However, WO3(-) analysis by negative thermal ionization mass spectrometry normally results in a residual correlation between the instrumental-mass-fractionation-corrected (182)W/(184)W and (183)W/(184)W ratios that is attributed to mass-dependent variability of O isotopes over the course of an analysis and between different analyses. A second-order correction using the (183)W/(184)W ratio relies on the assumption that this ratio is constant in nature. This may prove invalid, as has already been realized for other isotope systems. The present study utilizes simultaneous monitoring of the (18)O/(16)O and W isotope ratios to correct oxide interferences on a per-integration basis and thus avoid the need for a double normalization of W isotopes. After normalization of W isotope ratios to a pair of W isotopes, following the exponential law, no residual W-O isotope correlation is observed. However, there is a nonideal mass bias residual correlation between (182)W/(i)W and (183)W/(i)W with time. Without double normalization of W isotopes and on the basis of three or four duplicate analyses, the external reproducibility per session of (182)W/(184)W and (183)W/(184)W normalized to (186)W/(183)W is 5-6 ppm (2σ, 1-3 μg loads). The combined uncertainty per session is less than 4 ppm for (183)W/(184)W and less than 6 ppm for (182)W/(184)W (2σm) for loads between 3000 and 50 ng.
Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.; Hünenberger, Philippe H.
2013-11-14
The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges −5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol−1) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB
Accurate spectral color measurements
NASA Astrophysics Data System (ADS)
Hiltunen, Jouni; Jaeaeskelaeinen, Timo; Parkkinen, Jussi P. S.
1999-08-01
Surface color measurement is important in a very wide range of industrial applications, including paint, paper, printing, photography, textiles, and plastics. For demanding color measurements, a spectral approach is often needed. One can measure a color spectrum with a spectrophotometer using calibrated standard samples as a reference. Because it is impossible to define absolute color values of a sample, we always work with approximations. The human eye can perceive color differences as small as 0.5 CIELAB units and can thus distinguish millions of colors. This 0.5-unit difference should be the goal for precise color measurements. This limit is not a problem if we only want to measure the color difference between two samples, but accuracy problems arise if we also want to know exact color coordinate values; the values reported by two instruments can be astonishingly different. The accuracy of an instrument used in color measurement may suffer from various errors, such as photometric non-linearity, wavelength error, integrating sphere dark level error, and integrating sphere errors in both specular-included and specular-excluded modes. Correction formulas should therefore be used to obtain more accurate results. Another question is how many channels, i.e., wavelengths, are used to measure a spectrum. It is obvious that the sampling interval should be short to obtain more precise results. Furthermore, the result we get is always a compromise between measuring time, conditions, and cost. Sometimes we have to use a portable system, or the shape and size of the samples make it impossible to use sensitive equipment. In this study, a small set of calibrated color tiles measured with the Perkin Elmer Lambda 18 and the Minolta CM-2002 spectrophotometers are compared. In the paper we explain the typical error sources of spectral color measurements and show the accuracy demands a good colorimeter should meet.
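The 0.5-unit perceptibility limit mentioned above refers to the CIE76 color difference, which is simply the Euclidean distance in CIELAB space. A minimal check; the tile coordinates below are invented for illustration:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference between two CIELAB (L*, a*, b*) coordinates."""
    return math.dist(lab1, lab2)

# Hypothetical readings of one tile from two instruments
d = delta_e_ab((52.0, 41.3, 28.9), (52.3, 41.0, 28.5))
perceptible = d > 0.5   # True when the disagreement exceeds ~0.5 dE
```

Inter-instrument disagreements of this size are exactly what the tile comparison in the study quantifies.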
Petrongolo, Michael; Niu, Tianye; Zhu, Lei
2013-01-01
Excessive imaging dose from repeated scans and poor image quality, mainly due to scatter contamination, are the two bottlenecks of cone-beam CT (CBCT) imaging. Compressed sensing (CS) reconstruction algorithms show promise in recovering faithful signals from low-dose projection data, but do not serve the needs of accurate CBCT imaging well if effective scatter correction is not in place. Scatter can be accurately measured and removed using measurement-based methods. However, these approaches are considered impractical in conventional FDK reconstruction, due to the inevitable primary loss incurred by scatter measurement. We combine measurement-based scatter correction and CS-based iterative reconstruction to generate scatter-free images from low-dose projections. We distribute blocked areas on the detector where primary signals are considered redundant in a full scan. The scatter distribution is estimated by interpolating/extrapolating the scatter samples measured inside the blocked areas. CS-based iterative reconstruction is finally carried out on the undersampled data to obtain scatter-free and low-dose CBCT images. With only 25% of the conventional full-scan dose, our method reduces the average CT number error from 250 HU to 24 HU and increases the contrast by a factor of 2.1 on the Catphan 600 phantom. On an anthropomorphic head phantom, the average CT number error is reduced from 224 HU to 10 HU in the central uniform area. PMID:24348742
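The measurement-based correction described above can be sketched as follows: scatter is sampled inside the blocked detector regions, linearly interpolated across the open regions, and subtracted from the measured signal to estimate the primary. This is a minimal illustrative sketch, not the authors' code; the sample positions and values are hypothetical.

```python
# Sketch of measurement-based scatter correction: interpolate scatter
# samples taken inside blocked strips across a detector row, then
# subtract from the measured signal. Values below are hypothetical.

def interpolate_scatter(sample_pos, sample_val, n_pixels):
    """Linearly interpolate scatter samples over a whole detector row."""
    scatter = []
    for x in range(n_pixels):
        if x <= sample_pos[0]:
            scatter.append(sample_val[0])    # extrapolate flat at edges
        elif x >= sample_pos[-1]:
            scatter.append(sample_val[-1])
        else:
            for i in range(len(sample_pos) - 1):
                x0, x1 = sample_pos[i], sample_pos[i + 1]
                if x0 <= x <= x1:            # bracketing samples found
                    t = (x - x0) / (x1 - x0)
                    scatter.append((1 - t) * sample_val[i] + t * sample_val[i + 1])
                    break
    return scatter

def correct_projection(measured, scatter):
    """Primary estimate: measured signal minus interpolated scatter."""
    return [max(m - s, 0.0) for m, s in zip(measured, scatter)]

# Scatter measured at three blocked strips of a 9-pixel detector row:
scatter_est = interpolate_scatter([0, 4, 8], [10.0, 20.0, 10.0], 9)
primary = correct_projection([50.0] * 9, scatter_est)
```

In the actual method the interpolation is two-dimensional over the detector and the corrected (undersampled) data feed a CS-based iterative reconstruction; the one-dimensional version here only shows the interpolation/subtraction step.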
Refining atmospheric correction for aquatic remote spectroscopy
NASA Astrophysics Data System (ADS)
Thompson, D. R.; Guild, L. S.; Negrey, K.; Kudela, R. M.; Palacios, S. L.; Gao, B. C.; Green, R. O.
2015-12-01
Remote spectroscopic investigations of aquatic ecosystems typically measure radiance at high spectral resolution and then correct these data for atmospheric effects to estimate Remote Sensing Reflectance (Rrs) at the surface. These reflectance spectra reveal phytoplankton absorption and scattering features, enabling accurate retrieval of traditional remote sensing parameters, such as chlorophyll-a, and new retrievals of additional parameters, such as phytoplankton functional type. Future missions will significantly expand coverage of these datasets with airborne campaigns (CORAL, ORCAS, and the HyspIRI Preparatory Campaign) and orbital instruments (EnMAP, HyspIRI). Remote characterization of phytoplankton can be influenced by errors in atmospheric correction due to uncertain atmospheric constituents such as aerosols. The "empirical line method" is an expedient solution that estimates a linear relationship between observed radiances and in-situ reflectance measurements. While this approach is common for terrestrial data, there are few examples involving aquatic scenes. Aquatic scenes are challenging due to the difficulty of acquiring in situ measurements from open water; with only a handful of reference spectra, the resulting corrections may not be stable. Here we present a brief overview of methods for atmospheric correction, and describe ongoing experiments on empirical line adjustment with AVIRIS overflights of Monterey Bay from the 2013-2014 HyspIRI preparatory campaign. We present new methods, based on generalized Tikhonov regularization, to improve stability and performance when few reference spectra are available. Copyright 2015 California Institute of Technology. All Rights Reserved. US Government Support Acknowledged.
How flatbed scanners upset accurate film dosimetry
NASA Astrophysics Data System (ADS)
van Battum, L. J.; Huizenga, H.; Verdaasdonk, R. M.; Heukelom, S.
2016-01-01
Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a change in scanner readout along the lateral scan axis. Although anisotropic light scattering has been presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE of two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) with Gafchromic film (EBT, EBT2, EBT3) was investigated, focusing on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate the effect of light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner's transmission mode with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high-contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner accounts for 3% for pixels in the extreme lateral position. Light polarization due to the film and the scanner's optical mirror system is the main contributor, differing in magnitude for the red, green and blue channels. We conclude that any Gafchromic EBT-type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and, therefore, determination of the LSE per color channel and per dose delivered to the film.
Quirk, Thomas J., IV
2004-08-01
The Integrated TIGER Series (ITS) is a software package that solves coupled electron-photon transport problems. ITS performs analog photon tracking for energies between 1 keV and 1 GeV. Unlike its deterministic counterpart, the Monte Carlo calculations of ITS do not require a memory-intensive meshing of phase space; however, its solutions carry statistical variations. Reducing these variations depends heavily on runtime. Monte Carlo simulations must therefore be both physically accurate and computationally efficient. Compton scattering is the dominant photon interaction above 100 keV and below 5-10 MeV, with the higher cutoffs occurring in lighter atoms. In its current model of Compton scattering, ITS corrects the differential Klein-Nishina cross section (which assumes a stationary, free electron) with the incoherent scattering function, a function of both the momentum transfer and the atomic number of the scattering medium. While this technique accounts for binding effects on the scattering angle, it excludes the Doppler broadening that the Compton line undergoes because of the momentum distribution in each bound state. To correct for these effects, Ribberfors' relativistic impulse approximation (IA) will be employed to create scattering cross sections differential in both energy and angle for each element. Using the parameterizations suggested by Brusa et al., scattered photon energies and angles can be accurately sampled at high efficiency with minimal physical data. Two-body kinematics then dictates the electron's scattered direction and energy. Finally, the atomic ionization is relaxed via Auger emission or fluorescence. Future work will extend these improvements in incoherent scattering to compounds and to adjoint calculations.
Estimation of scattered radiation in digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Diaz, O.; Dance, D. R.; Young, K. C.; Elangovan, P.; Bakic, P. R.; Wells, K.
2014-08-01
Digital breast tomosynthesis (DBT) is a promising technique to overcome the tissue superposition limitations found in planar 2D x-ray mammography. However, as most DBT systems do not employ an anti-scatter grid, the levels of scattered radiation recorded within the image receptor are significantly higher than that observed in planar 2D x-ray mammography. Knowledge of this field is necessary as part of any correction scheme and for computer modelling and optimisation of this examination. Monte Carlo (MC) simulations are often used for this purpose, however they are computationally expensive and a more rapid method of calculation is desirable. This issue is addressed in this work by the development of a fast kernel-based methodology for scatter field estimation using a detailed realistic DBT geometry. Thickness-dependent scatter kernels, which were validated against the literature with a maximum discrepancy of 4% for an idealised geometry, have been calculated and a new physical parameter (air gap distance) was used to estimate more accurately the distribution of scattered radiation for a series of anthropomorphic breast phantom models. The proposed methodology considers, for the first time, the effects of scattered radiation from the compression paddle and breast support plate, which can represent more than 30% of the total scattered radiation recorded within the image receptor. The results show that the scatter field estimator can calculate scattered radiation images in an average of 80 min for projection angles up to 25° with equal to or less than a 10% error across most of the breast area when compared with direct MC simulations.
Environment scattering in GADRAS.
Thoreson, Gregory G.; Mitchell, Dean J; Theisen, Lisa Anne; Harding, Lee T.
2013-09-01
Radiation transport calculations were performed to compute the angular tallies for scattered gamma-rays as a function of distance, height, and environment. Green's functions were then used to encapsulate the results in a reusable transformation function. The calculations represent the transport of photons through the scattering surfaces that surround sources and detectors, such as the ground and walls. Utilization of these calculations in GADRAS (Gamma Detector Response and Analysis Software) enables accurate computation of environmental scattering for a variety of environments and source configurations. This capability, which agrees well with numerous experimental benchmark measurements, is now deployed with GADRAS Version 18.2 as the basis for the computation of scattered radiation.
NASA Astrophysics Data System (ADS)
Perim de Faria, Julia; Bundke, Ulrich; Onasch, Timothy B.; Freedman, Andrew; Petzold, Andreas
2016-04-01
The necessity to quantify the direct impact of aerosol particles on climate forcing is already well known; assessing this impact requires continuous and systematic measurements of the aerosol optical properties. Two of the main parameters that need to be accurately measured are the aerosol optical depth and the single scattering albedo (SSA, defined as the ratio of particulate scattering to extinction). The measurement of single scattering albedo commonly involves the measurement of two optical parameters, the scattering and the absorption coefficients. Although there are well-established technologies to measure both of these parameters, the use of two separate instruments with different principles and uncertainties represents a potential source of significant errors and biases. Based on the recently developed cavity attenuated phase shift particle extinction monitor (CAPS PMex), the CAPS PMssa instrument combines the CAPS technology for measuring particle extinction with an integrating sphere capable of simultaneously measuring the scattering coefficient of the same sample. The scattering channel is calibrated to the extinction channel, such that the accuracy of the single scattering albedo measurement is only a function of the accuracy of the extinction measurement and the nephelometer truncation losses. This gives the instrument an accurate and direct measurement of the single scattering albedo. In this study, we assess the measurements of both the extinction and scattering channels of the CAPS PMssa through intercomparisons with Mie theory, as a fundamental comparison, and with proven technologies, such as integrating nephelometers and filter-based absorption monitors. For comparison, we use two nephelometers, a TSI 3563 and an Aurora 4000, and two measurements of the absorption coefficient, using a Particulate Soot Absorption Photometer (PSAP) and a Multi Angle Absorption Photometer (MAAP). We also assess the indirect absorption coefficient
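The defining relations used above are simple enough to state in code: SSA is the ratio of the scattering to the extinction coefficient, and the indirect absorption coefficient follows as their difference. This is a minimal sketch of those definitions with illustrative numbers, not instrument data or the instrument's processing software.

```python
# Minimal sketch of the single scattering albedo (SSA) relations:
#   SSA   = b_scat / b_ext
#   b_abs = b_ext - b_scat   (indirect absorption, as with CAPS PMssa)
# Coefficients below are illustrative, in inverse megameters (Mm^-1).

def single_scattering_albedo(b_scat, b_ext):
    """SSA: ratio of particulate scattering to extinction coefficient."""
    if b_ext <= 0:
        raise ValueError("extinction coefficient must be positive")
    return b_scat / b_ext

def absorption_coefficient(b_scat, b_ext):
    """Indirect absorption coefficient: extinction minus scattering."""
    return b_ext - b_scat

# Example: 90 Mm^-1 scattering with 100 Mm^-1 extinction
ssa = single_scattering_albedo(90.0, 100.0)    # -> 0.9
b_abs = absorption_coefficient(90.0, 100.0)    # -> 10.0 Mm^-1
```

Because both channels share one extinction calibration in the instrument, the SSA uncertainty reduces to that of the extinction measurement plus nephelometer truncation losses, which is the point the abstract makes.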
Accurate estimation of sigma(exp 0) using AIRSAR data
NASA Technical Reports Server (NTRS)
Holecz, Francesco; Rignot, Eric
1995-01-01
During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data, as well as estimation of geophysical parameters from SAR data, have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations, radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of the aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and the position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.
Monte Carlo eikonal scattering
NASA Astrophysics Data System (ADS)
Gibbs, W. R.; Dedonder, J. P.
2012-08-01
Background: The eikonal approximation is commonly used to calculate heavy-ion elastic scattering. However, the full evaluation has only been done (without the use of Monte Carlo techniques or additional approximations) for α-α scattering. Purpose: Develop, improve, and test the Monte Carlo eikonal method for elastic scattering over a wide range of nuclei, energies, and angles. Method: Monte Carlo evaluation is used to calculate heavy-ion elastic scattering for heavy nuclei, including the center-of-mass correction introduced in this paper and the Coulomb interaction in terms of a partial-wave expansion. A technique for the efficient expansion of the Glauber amplitude in partial waves is developed. Results: Angular distributions are presented for a number of nuclear pairs over a wide energy range, using nucleon-nucleon scattering parameters taken from phase-shift analyses and densities from independent sources. We present the first calculations of the Glauber amplitude, without further approximation and with realistic densities, for nuclei heavier than helium. These densities respect the center-of-mass constraints. The Coulomb interaction is included in these calculations. Conclusion: The center-of-mass and Coulomb corrections are essential. Angular distributions can be predicted only up to certain critical angles, which vary with the nuclear pair and the energy, but we point out that all critical angles correspond to a momentum transfer near 1 fm⁻¹.
NASA Astrophysics Data System (ADS)
Foschum, Florian; Kienle, Alwin
2013-08-01
We present simulations and measurements with an optimized goniometer for determination of the scattering phase function of suspended particles. We applied the Monte Carlo method, using a radially layered cylindrical geometry and mismatched boundary conditions, in order to investigate the influence of reflections caused by the interfaces of the glass cuvette and the scatterer concentration on the accurate determination of the scattering phase function. Based on these simulations we built an apparatus which allows direct measurement of the phase function from ϑ=7 deg to ϑ=172 deg without any need for correction algorithms. Goniometric measurements on polystyrene and SiO2 spheres proved this concept. Using the validated goniometer, we measured the phase function of yeast cells, demonstrating the improvement of the new system compared to standard goniometers. Furthermore, the scattering phase function of different fat emulsions, like Intralipid, was determined precisely.
NASA Astrophysics Data System (ADS)
Kedziera, Dariusz; Mentel, Łukasz; Żuchowski, Piotr S.; Knoop, Steven
2015-06-01
We have obtained accurate ab initio ⁴Σ⁺ quartet potentials for the diatomic metastable triplet helium + alkali-metal (Li, Na, K, Rb) systems, using all-electron restricted open-shell coupled cluster singles and doubles with noniterative triples corrections [CCSD(T)] calculations and accurate calculations of the long-range C6 coefficients. These potentials provide accurate ab initio quartet scattering lengths, which is possible for these many-electron systems because of the small reduced masses and shallow potentials that result in a small number of bound states. Our results are relevant for ultracold metastable triplet helium + alkali-metal mixture experiments.
A correction to a highly accurate Voigt function algorithm
NASA Technical Reports Server (NTRS)
Shippony, Z.; Read, W. G.
2002-01-01
An algorithm for rapidly computing the complex Voigt function was published by Shippony and Read. Its claimed accuracy was 1 part in 10⁸. It was brought to our attention by Wells that the Shippony and Read algorithm was not meeting its claimed accuracy for extremely small but nonzero y values. Although true, the fix to the code is trivial enough to warrant only this note for those who use the algorithm.
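Fast Voigt algorithms like the one discussed here are usually validated against a brute-force reference. The sketch below is such a reference, not the Shippony and Read algorithm: it evaluates the real Voigt function directly from its defining integral by trapezoidal quadrature and checks it against the exact identity K(0, y) = exp(y²) erfc(y).

```python
import math

# Brute-force evaluation of the (real) Voigt function
#   K(x, y) = (y / pi) * Integral over t of exp(-t^2) / ((x - t)^2 + y^2)
# by trapezoidal quadrature. Suitable only as a slow reference for
# validating fast algorithms.

def voigt_quadrature(x, y, half_width=8.0, n=20000):
    """Voigt function K(x, y) via trapezoidal integration (y > 0)."""
    if y <= 0:
        raise ValueError("y must be positive for this quadrature")
    h = 2.0 * half_width / n
    total = 0.0
    for k in range(n + 1):
        t = -half_width + k * h
        f = math.exp(-t * t) / ((x - t) ** 2 + y * y)
        total += f if 0 < k < n else 0.5 * f   # trapezoid endpoint weights
    return y / math.pi * total * h

# Sanity check against the exact identity K(0, y) = exp(y^2) * erfc(y):
approx = voigt_quadrature(0.0, 1.0)
exact = math.exp(1.0) * math.erfc(1.0)
```

The small-y regime that triggered this correction note is exactly where such a reference becomes delicate: as y shrinks, the Lorentzian factor narrows and the quadrature grid must be refined accordingly.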
Integral method of wall interference correction in low-speed wind tunnels
NASA Technical Reports Server (NTRS)
Zhou, Changhai
1987-01-01
The analytical solution of Poisson's equation, derived from the definition of vorticity, was applied to the calculation of interference velocities due to the presence of wind tunnel walls. This approach, called the Integral Method, allows an accurate evaluation of wall interference for separated or more complicated flows without the need to consider any features of the model. All the information necessary for obtaining the wall correction is contained in wall pressure measurements. The correction is not sensitive to normal data scatter, and the computations are fast enough for on-line data processing.
A New Polyethylene Scattering Law Determined Using Inelastic Neutron Scattering
Lavelle, Christopher M; Liu, C; Stone, Matthew B
2013-01-01
Monte Carlo neutron transport codes such as MCNP rely on accurate nuclear physics cross-section data to produce accurate results. At low energy, these data take the form of scattering laws based on the dynamic structure factor, S(Q,E). High-density polyethylene (HDPE) is frequently employed as a neutron moderator at both high and low temperatures; however, the only cross sections available are for T = 300 K, and the evaluation has not been updated in quite some time. In this paper we describe inelastic neutron scattering measurements on HDPE at 5 and 300 K which are used to improve the scattering law for HDPE. We describe the experimental methods, review some of the past HDPE scattering laws, and compare computations using these models to the measured S(Q,E). The total cross section is compared to available data, and the treatment of the carbon secondary scatterer as a free gas is assessed. We also discuss the use of the measurement itself as a scattering law via the one-phonon approximation. We show that a scattering law computed using a more detailed model for the generalized density of states (GDOS) compares more favorably to this experiment, suggesting that inelastic neutron scattering can play an important role in both the development and validation of new scattering laws for Monte Carlo work.
On the accurate estimation of gap fraction during daytime with digital cover photography
NASA Astrophysics Data System (ADS)
Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.
2015-12-01
Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, the intervention of subjectivity, such as determining the camera relative exposure value (REV) and the threshold in the histogram, has hindered computing accurate gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions with DCP. The method computes gap fraction from a single unsaturated raw DCP image which is corrected for scattering effects by canopies and a sky image reconstructed from the raw-format image. To test the sensitivity of the derived gap fraction to diverse REVs, solar zenith angles and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REVs from 0 to -5. The novel method showed little variation in gap fraction across different REVs in both dense and sparse canopies over a diverse range of solar zenith angles. A perforated-panel experiment, used to test the accuracy of the estimated gap fraction, confirmed that the novel method yields accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful for monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.
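Once an image has been classified into sky and canopy pixels, the gap-fraction computation itself is a simple ratio. The toy sketch below illustrates that final step only, with a hypothetical 0/1 array standing in for a thresholded cover photograph; the paper's actual contribution is the scattering correction and sky reconstruction that precede this step.

```python
# Toy sketch of gap-fraction computation from a classified canopy image:
# gap fraction = fraction of pixels classified as sky (1 = sky, 0 = canopy).
# The 4x4 grid below is a stand-in for a thresholded cover photograph.

def gap_fraction(binary_image):
    """Fraction of sky pixels in a 2-D 0/1 image."""
    sky = sum(sum(row) for row in binary_image)
    total = sum(len(row) for row in binary_image)
    return sky / total

classified = [
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 1],
]
gf = gap_fraction(classified)   # 6 sky pixels / 16 -> 0.375
```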
The FLUKA Code: An Accurate Simulation Tool for Particle Therapy
Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T.; Cerutti, Francesco; Chin, Mary P. W.; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G.; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R.; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis
2016-01-01
Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both 4He and 12C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions lead to the excellent agreement of calculated depth–dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is capable of importing also radiotherapy treatment data described in DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features. PMID:27242956
Microcavity Enhanced Raman Scattering
NASA Astrophysics Data System (ADS)
Petrak, Benjamin J.
Raman scattering can accurately identify molecules by their intrinsic vibrational frequencies, but its notoriously weak scattering efficiency for gases presents a major obstacle to its practical application in gas sensing and analysis. This work explores the use of high-finesse (≈50 000) Fabry-Perot microcavities as a means to enhance Raman scattering from gases. A recently demonstrated laser ablation method, which carves out a micromirror template on fused silica--either on a fiber tip or on bulk substrates--was implemented, characterized, and optimized to fabricate concave micromirror templates of ~10 µm diameter and radius of curvature. The fabricated templates were coated with a high-reflectivity dielectric coating by ion-beam sputtering and were assembled into microcavities ~10 µm long with a mode volume of ~100 µm³. A novel gas sensing technique that we refer to as Purcell-enhanced Raman scattering (PERS) was demonstrated using the assembled microcavities. PERS works by enhancing the pump laser's intensity through resonant recirculation at one longitudinal mode while, simultaneously, at a second mode at the Stokes frequency, the Purcell effect increases the rate of spontaneous Raman scattering through a change in the intra-cavity photon density of states. PERS was shown to enhance the rate of spontaneous Raman scattering by a factor of 10⁷ compared to the same volume of sample gas in free space scattered into the same solid angle subtended by the cavity. PERS was also shown to be capable of resolving several Raman bands from different isotopes of CO₂ gas for application to isotopic analysis. Finally, the use of the microcavity to enhance coherent anti-Stokes Raman scattering (CARS) from CO₂ gas was demonstrated.
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10⁶) periods of propagation with eight grid points per wavelength.
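To illustrate why order of accuracy matters for long-time propagation, here is a generic high-order stencil, not one of the abstract's specific algorithms: a standard fourth-order central difference for the first derivative, whose truncation error scales as h⁴.

```python
import math

# Generic illustration of a high-order finite difference operator:
# the standard fourth-order central difference for f'(x). Truncation
# error is h^4 * f'''''(x) / 30, so halving h cuts the error ~16x.

def d1_fourth_order(f, x, h):
    """Fourth-order accurate central difference approximation of f'(x)."""
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h)

# The derivative of sin at 0 is cos(0) = 1; with h = 0.1 the error is
# on the order of 1e-4/30, a few parts in a million.
err = abs(d1_fourth_order(math.sin, 0.0, 0.1) - 1.0)
```

The small error per step is what allows accurate results after O(10⁶) periods of propagation: low-order schemes accumulate dispersion error far faster at the same grid resolution.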
Accurate monotone cubic interpolation
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1991-01-01
Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema, where accuracy degenerates to second order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants that preserve monotonicity as well as uniform third- and fourth-order accuracy are presented. The gain of accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
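For context, the kind of baseline method the abstract improves on can be sketched as a Fritsch-Carlson (PCHIP-style) monotone cubic Hermite interpolant: node slopes are limited so the interpolant cannot overshoot monotone data. This is a sketch of the classical baseline, not the higher-order algorithms of the paper.

```python
# Sketch of a Fritsch-Carlson / PCHIP-style monotone cubic interpolant:
# slopes at the data points are limited so that monotonicity of the
# data is preserved by the piecewise cubic Hermite interpolant.

def monotone_slopes(xs, ys):
    """Node slopes limited for monotonicity (Fritsch-Carlson style)."""
    n = len(xs)
    d = [(ys[i+1] - ys[i]) / (xs[i+1] - xs[i]) for i in range(n - 1)]  # secants
    m = [0.0] * n
    m[0], m[-1] = d[0], d[-1]
    for i in range(1, n - 1):
        m[i] = 0.0 if d[i-1] * d[i] <= 0 else 0.5 * (d[i-1] + d[i])
    for i in range(n - 1):
        if d[i] == 0:
            m[i] = m[i+1] = 0.0          # flat segment: flat tangents
        else:
            for j in (i, i + 1):         # keep |m| within 3x local secant
                if abs(m[j]) > 3 * abs(d[i]):
                    m[j] = 3 * d[i]
    return m

def evaluate(xs, ys, m, x):
    """Cubic Hermite evaluation on the interval containing x."""
    i = max(j for j in range(len(xs) - 1) if xs[j] <= x)
    h = xs[i+1] - xs[i]
    t = (x - xs[i]) / h
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*ys[i] + h10*h*m[i] + h01*ys[i+1] + h11*h*m[i+1]

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.0, 1.0, 2.0]        # monotone (non-decreasing) data
m = monotone_slopes(xs, ys)
mid = evaluate(xs, ys, m, 1.5)   # stays flat on the [1, 2] segment
```

The limiting step above is exactly what costs accuracy near extrema; the abstract's contribution is to relax it geometrically so that third- and fourth-order accuracy is kept uniformly.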
Implementation and evaluation of a calculated attenuation correction for PET
Siegel, S.; Dahlbom, M. (Dept. of Radiological Sciences)
1992-08-01
A limiting factor in PET is the necessity of a transmission scan for attenuation correction (AC). In areas of uniform attenuation, this measured AC can be replaced by a calculated AC. This paper presents an accurate and efficient method based on estimating the object contour from the emission sinograms. The method relies on a robust algorithm to determine the border between activity and scatter background. In this work, the authors present an algorithm that has been consistent in finding the object outline for a variety of tracers (¹⁸F-FDG, ¹⁸F-FDOPA, ¹⁵O-water and ¹³N-ammonia), extreme uptake distributions (brain tumors and hemispherectomies) and system geometries, with little operator intervention. FDG brain scans using this algorithm were compared to images corrected using measured AC, showing a maximum deviation of ±8.9%. The algorithm has been extended to abdominal PET scans and 3-D acquisitions.
Low-dose and scatter-free cone-beam CT imaging: a preliminary study
NASA Astrophysics Data System (ADS)
Dong, Xue; Jia, Xun; Niu, Tianye; Zhu, Lei
2012-03-01
Clinical applications of CBCT imaging are still limited by excessive imaging dose from repeated scans and by poor image quality, mainly due to scatter contamination. Compressed sensing (CS) reconstruction algorithms have shown promise in recovering faithful signals from low-dose projection data, but do not serve the needs of accurate CBCT imaging well if effective scatter correction is not in place. Scatter can be accurately measured and removed using measurement-based methods. However, in conventional FDK reconstruction, these approaches are considered impractical since they require multiple scans or moving the beam blocker during data acquisition to compensate for the inevitable primary loss. In this work, we combine measurement-based scatter correction and a CS-based iterative reconstruction algorithm, such that scatter-free images can be obtained from low-dose data. We lower the CBCT dose by reducing the projection number and inserting lead strips between the x-ray source and the object. The insertion of lead strips also enables scatter measurement from the samples inside the strip shadows. CS-based iterative reconstruction is finally carried out to obtain scatter-free and low-dose CBCT images. Simulation studies are designed to optimize the lead strip geometry for a given dose reduction ratio. After optimization, our approach reduces the CT number error from over 220 HU to below 5 HU on the Shepp-Logan phantom, with a dose reduction of ~80%. With the same dose reduction and the optimized method parameters, the CT number error is reduced from 242 HU to 20 HU in the selected region of interest on the Catphan 600 phantom.
NASA Astrophysics Data System (ADS)
Bravo, Jaime; Davis, Scott C.; Roberts, David W.; Paulsen, Keith D.; Kanick, Stephen C.
2015-03-01
Quantification of targeted fluorescence markers during neurosurgery has the potential to improve and standardize surgical distinction between normal and cancerous tissues. However, quantitative analysis of marker fluorescence is complicated by tissue background absorption and scattering properties. Correction algorithms that transform raw fluorescence intensity into quantitative units, independent of absorption and scattering, require a paired measurement of localized white light reflectance to provide estimates of the optical properties. This study focuses on the unique problem of developing a spectral analysis algorithm to extract tissue absorption and scattering properties from white light spectra that contain contributions from both elastically scattered photons and fluorescence emission from a strong fluorophore (i.e. fluorescein). A fiber-optic reflectance device was used to perform measurements in a small set of optical phantoms, constructed with Intralipid (1% lipid), whole blood (1% volume fraction) and fluorescein (0.16-10 μg/mL). Results show that the novel spectral analysis algorithm yields accurate estimates of tissue parameters independent of fluorescein concentration, with relative errors of blood volume fraction, blood oxygenation fraction (BOF), and the reduced scattering coefficient (at 521 nm) of <7%, <1%, and <22%, respectively. These data represent a first step towards quantification of fluorescein in tissue in vivo.
Modeling of scattering from ice surfaces
NASA Astrophysics Data System (ADS)
Dahlberg, Michael Ross
Theoretical research is proposed to study electromagnetic wave scattering from ice surfaces. A mathematical formulation is developed that is more representative of the electromagnetic scattering from ice, includes volume scattering mechanisms, and is capable of handling multiple-scattering effects. This research is essential to advancing the field of environmental science and engineering by enabling more accurate inversion of remote sensing data. The results of this research contribute towards a more accurate representation of the scattering from ice surfaces that is computationally more efficient and can be applied to many remote-sensing applications.
Modeling diffuse reflectance measurements of light scattered by layered tissues
NASA Astrophysics Data System (ADS)
Rohde, Shelley B.
In this dissertation, we first present a model for the diffuse reflectance due to a continuous beam incident normally on a half space composed of a uniform scattering and absorbing medium. This model is the result of an asymptotic analysis of the radiative transport equation for strong scattering, weak absorption and a defined beam width. Through comparison with the diffuse reflectance computed using the numerical solution of the radiative transport equation, we show that this diffuse reflectance model gives results that are accurate for small source-detector separation distances. We then present an explicit model for the diffuse reflectance due to a collimated beam of light incident normally on layered tissues. This model is derived using the corrected diffusion approximation applied to a layered medium, and it takes the form of a convolution with an explicit kernel and the incident beam profile. This model corrects the standard diffusion approximation over all source-detector separation distances provided the beam is sufficiently wide compared to the scattering mean-free path. We validate this model through comparison with Monte Carlo simulations. Then we use this model to estimate the optical properties of an epithelial layer from Monte Carlo simulation data. Using measurements at small source-detector separations and this model, we are able to estimate the absorption coefficient, scattering coefficient and anisotropy factor of epithelial tissues efficiently with reasonable accuracy. Finally, we present an extension of the corrected diffusion approximation for an obliquely incident beam. This model is formed through a Fourier Series representation in the azimuthal angle which allows us to exhibit the break in axisymmetry when combined with the previous analysis. We validate this model with Monte Carlo simulations. This model can also be written in the form of a convolution of an explicit kernel with the incident beam profile. Additionally, it can be used to
ERIC Educational Resources Information Center
Hill, Leslie A.
1978-01-01
Discusses some general principles for planning corrective instruction and exercises in English as a second language, and follows with examples from the areas of phonemics, phonology, lexicon, idioms, morphology, and syntax. (IFS/WGA)
Interstellar scattering and resolution limitations
NASA Astrophysics Data System (ADS)
Dennison, Brian
Density irregularities in both the interplanetary medium and the ionized component of the interstellar medium scatter radio waves, resulting in limitations on the achievable resolution. Interplanetary scattering (IPS) is weak for most observational situations, and in principle the resulting phase corruption can be corrected for when observing with sufficiently many array elements. Interstellar scattering (ISS), on the other hand, is usually strong at frequencies below about 8 GHz, in which case intrinsic structure information over a range of angular scales is irretrievably lost. With the earth-space baselines now planned, it will be possible to search directly for interstellar refraction, which is suspected of modulating the fluxes of background sources.
NASA Astrophysics Data System (ADS)
Desjarlais, Michael P.; Scullard, Christian R.; Benedict, Lorin X.; Whitley, Heather D.; Redmer, Ronald
2017-03-01
We compute electrical and thermal conductivities of hydrogen plasmas in the nondegenerate regime using Kohn-Sham density functional theory (DFT) and an application of the Kubo-Greenwood response formula, and demonstrate that for thermal conductivity, the mean-field treatment of the electron-electron (e-e) interaction therein is insufficient to reproduce the weak-coupling limit obtained by plasma kinetic theories. An explicit e-e scattering correction to the DFT is posited by appealing to Matthiessen's Rule and the results of our computations of conductivities with the quantum Lenard-Balescu (QLB) equation. Further motivation of our correction is provided by an argument arising from the Zubarev quantum kinetic theory approach. Significant emphasis is placed on our efforts to produce properly converged results for plasma transport using Kohn-Sham DFT, so that an accurate assessment of the importance and efficacy of our e-e scattering corrections to the thermal conductivity can be made.
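Matthiessen's rule, which the correction appeals to, combines independent scattering channels by adding their inverse conductivities (i.e. their resistivities). A minimal numerical sketch, with illustrative values not taken from the paper:

```python
def matthiessen(kappa_dft, kappa_ee):
    """Combine a mean-field (Kohn-Sham DFT) thermal conductivity with an
    electron-electron scattering channel via Matthiessen's rule:
    inverse conductivities (resistivities) add."""
    return 1.0 / (1.0 / kappa_dft + 1.0 / kappa_ee)

# illustrative magnitudes only, not results from the paper
kappa = matthiessen(100.0, 400.0)
```

The combined value is always below the smaller channel, which is why the mean-field DFT result alone overestimates the thermal conductivity in the weak-coupling limit.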
Accurate quantum chemical calculations
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.
Quantitative Scattering of Melanin Solutions
NASA Astrophysics Data System (ADS)
Riesz, J.; Gilmore, J.; Meredith, P.
2006-06-01
The optical scattering coefficient of a dilute, well-solubilised eumelanin solution has been accurately measured as a function of incident wavelength, and found to contribute less than 6% of the total optical attenuation between 210 and 325 nm. At longer wavelengths (325 nm to 800 nm) the scattering was less than the minimum sensitivity of our instrument. This indicates that UV and visible optical density spectra can be interpreted as true absorption with a high degree of confidence. The scattering coefficient vs wavelength was found to be consistent with Rayleigh theory for a particle radius of 38 ± 1 nm.
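Rayleigh theory predicts a λ⁻⁴ dependence of the scattering coefficient, so consistency can be checked with a one-parameter least-squares fit of b(λ) = A·λ⁻⁴. The sketch below uses synthetic data, not the measured spectra; the function name and amplitude are illustrative.

```python
import numpy as np

def fit_rayleigh(wavelengths_nm, scatter_coeff):
    """Least-squares amplitude A for the Rayleigh model b = A * lambda**-4."""
    basis = wavelengths_nm ** -4.0
    return np.dot(basis, scatter_coeff) / np.dot(basis, basis)

lam = np.linspace(210.0, 325.0, 24)
b = 3.0e9 * lam ** -4.0            # synthetic Rayleigh-like data
A = fit_rayleigh(lam, b)
```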
Accurate ab Initio Spin Densities.
Boguslawski, Katharina; Marti, Konrad H; Legeza, Ors; Reiher, Markus
2012-06-12
We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as a basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CASCI-type wave function provides insight into chemically interesting features of the molecule under study such as the distribution of α and β electrons in terms of Slater determinants, CI coefficients, and natural orbitals. The methodology is applied to an iron nitrosyl complex which we have identified as a challenging system for standard approaches [J. Chem. Theory Comput. 2011, 7, 2740].
Metzler, Adam M; Siegmann, William L; Collins, Michael D
2012-02-01
The parabolic equation method with a single-scattering correction allows for accurate modeling of range-dependent environments in elastic layered media. For problems with large contrasts, accuracy and efficiency are gained by subdividing vertical interfaces into a series of two or more single-scattering problems. This approach generates several computational parameters, such as the number of interface slices, an iteration convergence parameter τ, and the number of iterations n for convergence. Using a narrow-angle approximation, the choices of n=1 and τ=2 give accurate solutions. Analogous results from the narrow-angle approximation extend to environments with larger variations when slices are used as needed at vertical interfaces. The approach is applied to a generic ocean waveguide that includes the generation of a Rayleigh interface wave. Results are presented in both frequency and time domains.
BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE ...
Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with P significantly reduced the bioavailability of Pb. The bioaccessibility of the Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Ad
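The slope and coefficient-of-determination criteria come from an ordinary linear regression of the paired in-vitro and in-vivo measurements. A minimal sketch with illustrative paired values (not the study's data; the function name is assumed):

```python
import numpy as np

def regress(bioaccessibility, bioavailability):
    """Fit bioavailability = slope * bioaccessibility + intercept and
    return (slope, intercept, r_squared)."""
    slope, intercept = np.polyfit(bioaccessibility, bioavailability, 1)
    pred = slope * bioaccessibility + intercept
    ss_res = np.sum((bioavailability - pred) ** 2)
    ss_tot = np.sum((bioavailability - bioavailability.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot

# illustrative paired measurements (%), not the study's data
access = np.array([20.0, 35.0, 50.0, 65.0, 80.0])
avail = np.array([25.0, 38.0, 49.0, 61.0, 78.0])
slope, intercept, r2 = regress(access, avail)
```

A test "performs very well" in this sense when the slope is positive and near unity and r² is high.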
NASA Astrophysics Data System (ADS)
Ouyang, Wei; Mao, Weijian; Li, Xuelei; Li, Wuqun
2014-08-01
The sound velocity inversion problem based on scattering theory is formulated in terms of a nonlinear integral equation associated with the scattered field. Because of its nonlinearity, in practice, linearization algorithms (Born/single-scattering approximation) are widely used to obtain an approximate inversion solution. However, the linearized strategy is not congruent with seismic wave propagation mechanics in a strongly perturbed (heterogeneous) medium. In order to partially dispense with the weak-perturbation assumption of the Born approximation, we present a new approach consisting of two steps: first, to handle the forward scattering by taking into account the second-order Born approximation, which is related to a generalized Radon transform (GRT) of the quadratic scattering potential; then, to derive a nonlinear quadratic inversion formula by resorting to the inverse GRT. In our formulation, there is a significant quadratic term regarding the scattering potential, and it can provide an amplitude correction for inversion results beyond standard linear inversion. The numerical experiments demonstrate that the linear single-scattering inversion is only good in amplitude for relative velocity perturbations of the background media up to 10%, and its inversion errors are unacceptable for perturbations beyond 10%. In contrast, the quadratic inversion can give more accurate amplitude-preserved recovery for perturbations up to 40%. Our inversion scheme is able to manage double-scattering effects by estimating a transmission factor from an integral over a small area, and therefore only a small portion of computational time is added to the original linear migration/inversion process.
NASA Technical Reports Server (NTRS)
Waegell, Mordecai J.; Palacios, David M.
2011-01-01
Jitter_Correct.m is a MATLAB function that automatically measures and corrects inter-frame jitter in an image sequence to a user-specified precision. In addition, the algorithm dynamically adjusts the image sample size to increase the accuracy of the measurement. The Jitter_Correct.m function takes an image sequence with unknown frame-to-frame jitter and computes the translations of each frame (column and row, in pixels) relative to a chosen reference frame with sub-pixel accuracy. The translations are measured using a cross-correlation Fourier-transform method in which the relative phase of the two transformed images is fit to a plane. The measured translations are then used to correct the inter-frame jitter of the image sequence. The function also dynamically expands the image sample size over which the cross-correlation is measured to increase the accuracy of the measurement. This increases the robustness of the measurement to variable magnitudes of inter-frame jitter.
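The core of the measurement, FFT-based cross-correlation, can be sketched at integer-pixel precision as follows. This is not the Jitter_Correct.m code; it omits the plane-fit sub-pixel step and the dynamic sample-size adjustment, and the function name is illustrative.

```python
import numpy as np

def measure_shift(ref, frame):
    """Integer-pixel (row, col) translation that re-registers `frame`
    onto `ref`, found from the peak of the FFT cross-correlation.
    Sub-pixel refinement (e.g. a plane fit to the relative phase,
    as in Jitter_Correct.m) is omitted."""
    cross = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame)))
    peak = np.unravel_index(np.argmax(np.abs(cross)), cross.shape)
    # unwrap circular shifts larger than half the image size
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, cross.shape))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
frame = np.roll(ref, shift=(3, -5), axis=(0, 1))   # simulated jitter
dy, dx = measure_shift(ref, frame)
```

Applying `np.roll(frame, (dy, dx), axis=(0, 1))` then re-aligns the jittered frame with the reference.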
Vernon, M.F.
1983-07-01
The molecular-beam technique has been used in three different experimental arrangements to study a wide range of inter-atomic and molecular forces. Chapter 1 reports results of a low-energy (0.2 kcal/mole) elastic-scattering study of the He-Ar pair potential. The purpose of the study was to accurately characterize the shape of the potential in the well region, by scattering slow He atoms produced by expanding a mixture of He in N₂ from a cooled nozzle. Chapter 2 contains measurements of the vibrational predissociation spectra and product translational energy for clusters of water, benzene, and ammonia. The experiments show that most of the product energy remains in the internal molecular motions. Chapter 3 presents measurements of the reaction Na + HCl → NaCl + H at collision energies of 5.38 and 19.4 kcal/mole. This is the first study to resolve both scattering angle and velocity for the reaction of a short-lived (16 nsec) electronic excited state. Descriptions are given of computer programs written to analyze molecular-beam expansions to extract information characterizing their velocity distributions, and to calculate accurate laboratory elastic-scattering differential cross sections accounting for the finite apparatus resolution. Experimental results which attempted to determine the efficiency of optically pumping the Li(2²P₃/₂) and Na(3²P₃/₂) excited states are given. A simple three-level model for predicting the steady-state fraction of atoms in the excited state is included.
Coupling of multiple coulomb scattering and energy loss and straggling in HZETRN
NASA Astrophysics Data System (ADS)
Mertens, C. J.; Walker, S. A.; Wilson, J. W.; Singleterry, R. C.; Tweed, J.
Current developments in HZETRN are focused towards a full three-dimensional and computationally efficient deterministic transport code capable of simulating radiation transport with either space or laboratory boundary conditions. One aspect of the new version of HZETRN is the inclusion of small-angle multiple Coulomb scattering of incident ions by target nuclei. While the effects of multiple scattering are negligible in the space radiation environment, multiple scattering must be included in laboratory transport code simulations to accurately model ion beam experiments, to simulate the physical and biological-effective radiation dose, and to develop new methods and strategies for light ion radiation therapy. In this paper we present the theoretical formalism and computational procedures for incorporating multiple scattering into HZETRN and coupling the ion-nuclear scattering interactions with energy loss and straggling. Simulations of the effects of multiple scattering on ion beam characterization will be compared with results from laboratory measurements, which include path-length corrections, angular and lateral broadening, and absorbed dose.
Correction of WindScat Scatterometric Measurements by Combining with AMSR Radiometric Data
NASA Technical Reports Server (NTRS)
Song, S.; Moore, R. K.
1996-01-01
The SeaWinds scatterometer on the advanced Earth observing satellite-2 (ADEOS-2) will determine surface wind vectors by measuring the radar cross section. Multiple measurements will be made at different points in a wind-vector cell. When dense clouds and rain are present, the signal will be attenuated, thereby giving erroneous results for the wind. This report describes algorithms to use with the Advanced Microwave Scanning Radiometer (AMSR) on ADEOS-2 to correct for the attenuation. One can determine attenuation from a radiometer measurement based on the excess brightness temperature measured. This is the difference between the total measured brightness temperature and the contribution from surface emission. A major problem that the algorithm must address is determining the surface contribution. Two basic approaches were developed for this, one using the scattering coefficient measured along with the brightness temperature, and the other using the brightness temperature alone. For both methods, best results will occur if the wind from the preceding wind-vector cell can be used as an input to the algorithm. In the method based on the scattering coefficient, we need the wind direction from the preceding cell. In the method using brightness temperature alone, we need the wind speed from the preceding cell. If neither is available, the algorithm can work, but the corrections will be less accurate. Both correction methods require iterative solutions. Simulations show that the algorithms make significant improvements in the measured scattering coefficient and thus in the retrieved wind vector. For stratiform rains, the errors without correction can be quite large, so the correction makes a major improvement. For systems of separated convective cells, the initial error is smaller and the correction, although about the same percentage, has a smaller effect.
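The report's iterative algorithms are not reproduced here, but the mapping from excess brightness temperature to an attenuation correction can be sketched under a simplified single-layer radiative-transfer assumption, T_B = T_surf·e^(−τ) + T_atm·(1 − e^(−τ)), with the surface contribution and atmospheric temperature taken as known. All values and the zenith-path geometry are illustrative assumptions.

```python
import numpy as np

def attenuation_correction(t_b, t_surf, t_atm=275.0):
    """One-way optical depth tau solved from the simplified relation
    T_B = T_surf*exp(-tau) + T_atm*(1 - exp(-tau)), plus the two-way
    power correction factor exp(2*tau) to apply to the measured
    scattering coefficient.  Zenith-path sketch; the report's iterative
    scheme and surface-emission estimation are not reproduced."""
    tau = np.log((t_atm - t_surf) / (t_atm - t_b))
    return tau, np.exp(2.0 * tau)

# illustrative temperatures in kelvin
tau, factor = attenuation_correction(t_b=200.0, t_surf=150.0)
```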
On the role of scattering and reverberation in seismic interferometry
NASA Astrophysics Data System (ADS)
Boschi, Lapo; Colombi, Andrea; Roux, Philippe; Campillo, Michel
2013-04-01
The ensemble-averaged ambient wave field observed on Earth is approximately diffuse, and it is precisely this property that makes ambient-noise interferometry valid within approximation. How close is ambient noise to being exactly diffuse? What features of the Earth (coupling between oceans and solid Earth, scattering by crustal heterogeneities...) contribute to its randomness and complexity? It is necessary to understand the roles of scattering and reverberation, to determine the range of applicability of seismic interferometry. Studies of cross-correlation of late coda in earthquake data, conducted mostly by the Grenoble group, emphasize the contributions of scattering in the interferometric reconstruction of Green functions. Yet, other authors like R. Snieder and co-workers point to the limitations that the presence of a complex (scattering) structure introduces: they have noted, in particular, that although direct surface waves are accurately extracted by interferometry, examples of the reconstruction of scattered waves are still lacking. We analyze the cross-correlations of diffuse flexural waves, generated by an air nozzle shooting compressed air on a 1-square-meter aluminum plate, and recorded by two accelerometers on the plate. Flexural waves are dispersive, thus reproducing one of the main characteristics of surface waves observed on Earth. The aluminum plate is pierced by 500 randomly distributed holes (6 mm in diameter) that give rise to scattering. Seismic noise is known to be largely generated by the coupling between atmosphere and solid Earth, and the air-nozzle approach can be seen as a way to reproduce this phenomenon as realistically as possible in a laboratory. We find ensemble-averaged cross-correlations of the diffuse flexural wave field generated in this way to be strongly symmetric with respect to (causal and anti-causal) time, beyond the direct flexural-wave arrivals; this indicates that the Green function is correctly reconstructed, including
Simple analytic expressions for correcting the factorizable formula for Compton
NASA Astrophysics Data System (ADS)
Lajohn, L. A.; Pratt, R. H.
2016-05-01
The factorizable form of the relativistic impulse approximation (RIA) expression for Compton scattering doubly differential cross sections (DDCS) becomes progressively less accurate as the binding energy of the ejected electron increases. This expression, which we call the RKJ approximation, makes it possible to obtain the Compton profile (CP) from measured DDCS. We have derived three simple analytic expressions, each of which can be used to correct the RKJ error for the atomic K-shell CP obtained from DDCS for any atomic number Z. The most general expression is valid over a broad range of energy ω and scattering angle θ; a second, somewhat simpler expression is valid at very high ω but over most θ; and the third, which is the simplest, is valid at small θ over a broad range of ω. We demonstrate that such expressions can yield a CP accurate to within a 1% error over 99% of the electron momentum distribution range of the uranium K-shell CP. Since the K-shell contribution dominates the extremes of the whole-atom CP (this is where the error of RKJ can exceed an order of magnitude), this region can be of concern in assessing the bonding properties of molecules as well as semiconducting materials.
Surface consistent finite frequency phase corrections
NASA Astrophysics Data System (ADS)
Kimman, W. P.
2016-07-01
Static time-delay corrections are frequency independent and ignore velocity variations away from the assumed vertical ray path through the subsurface. There is therefore a clear potential for improvement if the finite frequency nature of wave propagation can be properly accounted for. Such a method is presented here based on the Born approximation, the assumption of surface consistency and the misfit of instantaneous phase. The concept of instantaneous phase lends itself very well for sweep-like signals, hence these are the focus of this study. Analytical sensitivity kernels are derived that accurately predict frequency-dependent phase shifts due to P-wave anomalies in the near surface. They are quick to compute and robust near the source and receivers. An additional correction is presented that re-introduces the nonlinear relation between model perturbation and phase delay, which becomes relevant for stronger velocity anomalies. The phase shift as function of frequency is a slowly varying signal, its computation therefore does not require fine sampling even for broad-band sweeps. The kernels reveal interesting features of the sensitivity of seismic arrivals to the near surface: small anomalies can have a relative large impact resulting from the medium field term that is dominant near the source and receivers. Furthermore, even simple velocity anomalies can produce a distinct frequency-dependent phase behaviour. Unlike statics, the predicted phase corrections are smooth in space. Verification with spectral element simulations shows an excellent match for the predicted phase shifts over the entire seismic frequency band. Applying the phase shift to the reference sweep corrects for wavelet distortion, making the technique akin to surface consistent deconvolution, even though no division in the spectral domain is involved. As long as multiple scattering is mild, surface consistent finite frequency phase corrections outperform traditional statics for moderately large
Quantifying intraocular scatter with near diffraction-limited double-pass point spread function
Zhao, Junlei; Xiao, Fei; Kang, Jian; Zhao, Haoxin; Dai, Yun; Zhang, Yudong
2016-01-01
Measurement of the double-pass (DP) point-spread function (PSF) can provide an objective and non-invasive method for estimating intraocular scatter in the human eye. The objective scatter index (OSI), which is calculated from the DP PSF images, is commonly used to quantify intraocular scatter. In this article, we simulated the effect of higher-order ocular aberrations on OSI, and the results showed that higher-order ocular aberrations had a significant influence on OSI. Then we developed an adaptive optics DP PSF measurement system (AO-DPPMS) which was capable of correcting ocular aberrations up to eighth-order radial Zernike modes over a 6.0-mm pupil. Employing this system, we obtained DP PSF images of four subjects at the fovea. OSI values with aberrations corrected up to 2nd, 5th, and 8th Zernike order were calculated, respectively, from the DP PSF images of the four subjects. The experimental results were consistent with the simulation, suggesting that it is necessary to compensate for the higher-order ocular aberrations for accurate intraocular scatter estimation. PMID:27895998
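In the literature, OSI is commonly defined as the energy in an annulus 12-20 arcmin from the DP PSF peak divided by the energy within the central 1 arcmin; the article's exact normalization may differ, so the sketch below is illustrative (the synthetic PSF and pixel scale are assumptions).

```python
import numpy as np

def osi(psf, arcmin_per_pixel):
    """Objective scatter index: energy in a 12-20 arcmin annulus around
    the PSF peak divided by energy within the central 1 arcmin.
    Common literature definition; normalizations vary between devices."""
    peak = np.unravel_index(np.argmax(psf), psf.shape)
    rows, cols = np.indices(psf.shape)
    r = np.hypot(rows - peak[0], cols - peak[1]) * arcmin_per_pixel
    centre = psf[r <= 1.0].sum()
    ring = psf[(r >= 12.0) & (r <= 20.0)].sum()
    return ring / centre

# synthetic PSF: narrow diffraction-limited core plus a broad scatter halo
yy, xx = np.indices((201, 201))
radius = np.hypot(yy - 100, xx - 100) * 0.25       # 0.25 arcmin/pixel
psf = np.exp(-0.5 * (radius / 0.5) ** 2) + 1e-3 * np.exp(-radius / 10.0)
value = osi(psf, 0.25)
```

A larger halo (more intraocular scatter) raises the annulus energy and therefore the OSI.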
Experimental validation of a multi-energy x-ray adapted scatter separation method
NASA Astrophysics Data System (ADS)
Sossin, A.; Rebuffel, V.; Tabary, J.; Létang, J. M.; Freud, N.; Verger, L.
2016-12-01
Both in radiography and computed tomography (CT), recently emerged energy-resolved x-ray photon counting detectors enable the identification and quantification of individual materials comprising the inspected object. However, the approaches used for these operations require highly accurate x-ray images. The accuracy of the images is severely compromised by the presence of scattered radiation, which leads to a loss of spatial contrast and, more importantly, a bias in radiographic material imaging and artefacts in CT. The aim of the present study was to experimentally evaluate a recently introduced partial attenuation spectral scatter separation approach (PASSSA) adapted for multi-energy imaging. For this purpose, a prototype x-ray system was used. Several radiographic acquisitions of an anthropomorphic thorax phantom were performed. Reference primary images were obtained via the beam-stop (BS) approach. The attenuation images acquired from PASSSA-corrected data showed a substantial increase in local contrast and internal structure contour visibility when compared to uncorrected images. A substantial reduction of scatter induced bias was also achieved. Quantitatively, the developed method proved to be in relatively good agreement with the BS data. The application of the proposed scatter correction technique lowered the initial normalized root-mean-square error (NRMSE) of 45% between the uncorrected total and the reference primary spectral images by a factor of 9, thus reducing it to around 5%.
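The 45% and 5% figures are normalized root-mean-square errors. One common convention (RMSE normalized by the mean of the reference) is sketched below; the paper's exact normalization may differ, and the sample arrays are illustrative.

```python
import numpy as np

def nrmse(estimate, reference):
    """Root-mean-square error normalized by the reference mean."""
    rmse = np.sqrt(np.mean((estimate - reference) ** 2))
    return rmse / np.mean(reference)

# illustrative values, not the study's spectral images
ref = np.array([100.0, 200.0, 300.0])
est = np.array([110.0, 190.0, 310.0])
err = nrmse(est, ref)
```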
NASA Astrophysics Data System (ADS)
Wang, Zhihui; Skidmore, Andrew K.; Wang, Tiejun; Darvishzadeh, Roshanak; Heiden, Uta; Heurich, Marco; Latifi, Hooman; Hearne, John
2017-02-01
A statistical relationship between canopy mass-based foliar nitrogen concentration (%N) and canopy bidirectional reflectance factor (BRF) has been repeatedly demonstrated. However, the interaction between leaf properties and canopy structure confounds the estimation of foliar nitrogen. The canopy scattering coefficient (the ratio of BRF and the directional area scattering factor, DASF) has recently been suggested for estimating %N as it suppresses the canopy structural effects on BRF. However, estimation of %N using the scattering coefficient has not yet been investigated for longer spectral wavelengths (>855 nm). We retrieved the canopy scattering coefficient for wavelengths between 400 and 2500 nm from airborne hyperspectral imagery, and then applied a continuous wavelet analysis (CWA) to the scattering coefficient in order to estimate %N. Predictions of %N were also made using partial least squares regression (PLSR). We found that %N can be accurately retrieved using CWA (R2 = 0.65, RMSE = 0.33) when four wavelet features are combined, with CWA yielding a more accurate estimation than PLSR (R2 = 0.47, RMSE = 0.41). We also found that the wavelet features most sensitive to %N variation in the visible region relate to chlorophyll absorption, while wavelet features in the shortwave infrared regions relate to protein and dry matter absorption. Our results confirm that %N can be retrieved using the scattering coefficient after correcting for canopy structural effect. With the aid of high-fidelity airborne or upcoming space-borne hyperspectral imagery, large-scale foliar nitrogen maps can be generated to improve the modeling of ecosystem processes as well as ecosystem-climate feedbacks.
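A continuous wavelet analysis of a reflectance or scattering spectrum amounts to convolving the spectrum with a mother wavelet at several scales and treating the coefficients as candidate features. A minimal numpy-only sketch using the Mexican-hat (Ricker) wavelet is given below; the study's wavelet choice, scales, and feature-selection step are not specified here, so everything beyond the generic transform is an assumption.

```python
import numpy as np

def ricker(points, a):
    """Mexican-hat (Ricker) wavelet sampled at `points` points, scale a."""
    t = np.arange(points) - (points - 1) / 2.0
    return (2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
            * (1.0 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2))

def cwt(signal, scales, width=101):
    """Wavelet coefficients at each (scale, wavelength) position by
    direct convolution -- the core of a continuous wavelet analysis."""
    return np.array([np.convolve(signal, ricker(width, a), mode='same')
                     for a in scales])

# synthetic 'scattering coefficient' spectrum with an absorption dip
wl = np.arange(400, 2501)                          # wavelength grid, nm
spectrum = 0.5 - 0.2 * np.exp(-0.5 * ((wl - 2100) / 30.0) ** 2)
coeffs = cwt(spectrum, scales=[8, 16, 32])
```

Coefficients with large magnitude localize absorption features; a regression against %N would then select the most predictive (scale, wavelength) features.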
Roy-Steiner-equation analysis of pion-nucleon scattering
NASA Astrophysics Data System (ADS)
Hoferichter, Martin; Ruiz de Elvira, Jacobo; Kubis, Bastian; Meißner, Ulf-G.
2016-04-01
We review the structure of Roy-Steiner equations for pion-nucleon scattering, the solution for the partial waves of the t-channel process ππ → N̄N, as well as the high-accuracy extraction of the pion-nucleon S-wave scattering lengths from data on pionic hydrogen and deuterium. We then proceed to construct solutions for the lowest partial waves of the s-channel process πN → πN and demonstrate that accurate solutions can be found if the scattering lengths are imposed as constraints. Detailed error estimates of all input quantities in the solution procedure are performed and explicit parameterizations for the resulting low-energy phase shifts as well as results for subthreshold parameters and higher threshold parameters are presented. Furthermore, we discuss the extraction of the pion-nucleon σ-term via the Cheng-Dashen low-energy theorem, including the role of isospin-breaking corrections, to obtain a precision determination consistent with all constraints from analyticity, unitarity, crossing symmetry, and pionic-atom data. We perform the matching to chiral perturbation theory in the subthreshold region and detail the consequences for the chiral convergence of the threshold parameters and the nucleon mass.
A weak-scattering model for turbine-tone haystacking
NASA Astrophysics Data System (ADS)
McAlpine, A.; Powles, C. J.; Tester, B. J.
2013-08-01
Noise and emissions are critical technical issues in the development of aircraft engines. This necessitates the development of accurate models to predict the noise radiated from aero-engines. Turbine tones radiated from the exhaust nozzle of a turbofan engine propagate through turbulent jet shear layers which causes scattering of sound. In the far-field, measurements of the tones may exhibit spectral broadening, where owing to scattering, the tones are no longer narrow band peaks in the spectrum. This effect is known colloquially as 'haystacking'. In this article a comprehensive analytical model to predict spectral broadening for a tone radiated through a circular jet, for an observer in the far field, is presented. This model extends previous work by the authors which considered the prediction of spectral broadening at far-field observer locations outside the cone of silence. The modelling uses high-frequency asymptotic methods and a weak-scattering assumption. A realistic shear layer velocity profile and turbulence characteristics are included in the model. The mathematical formulation which details the spectral broadening, or haystacking, of a single-frequency, single azimuthal order turbine tone is outlined. In order to validate the model, predictions are compared with experimental results, albeit only at polar angle equal to 90°. A range of source frequencies from 4 to 20 kHz and jet velocities from 20 to 60 m s⁻¹ is examined for validation purposes. The model correctly predicts how the spectral broadening is affected when the source frequency and jet velocity are varied.
Johnson, D
1940-03-22
IN a recently published volume on "The Origin of Submarine Canyons" the writer inadvertently credited to A. C. Veatch an excerpt from a submarine chart actually contoured by P. A. Smith, of the U. S. Coast and Geodetic Survey. The chart in question is Chart IVB of Special Paper No. 7 of the Geological Society of America entitled "Atlantic Submarine Valleys of the United States and the Congo Submarine Valley, by A. C. Veatch and P. A. Smith," and the excerpt appears as Plate III of the volume first cited above. In view of the heavy labor involved in contouring the charts accompanying the paper by Veatch and Smith and the beauty of the finished product, it would be unfair to Mr. Smith to permit the error to go uncorrected. Excerpts from two other charts are correctly ascribed to Dr. Veatch.
77 FR 72199 - Technical Corrections; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-05
... COMMISSION 10 CFR Part 171 RIN 3150-AJ16 Technical Corrections; Correction AGENCY: Nuclear Regulatory... corrections, including updating the street address for the Region I office, correcting authority citations and... rule. DATES: The correction is effective on December 5, 2012. FOR FURTHER INFORMATION CONTACT:...
Three-dimensional phantoms for curvature correction in spatial frequency domain imaging
Nguyen, Thu T. A.; Le, Hanh N. D.; Vo, Minh; Wang, Zhaoyang; Luu, Long; Ramella-Roman, Jessica C.
2012-01-01
The sensitivity to surface profile of non-contact optical imaging, such as spatial frequency domain imaging, may lead to incorrect measurements of optical properties and consequently erroneous extrapolation of physiological parameters of interest. Previous correction methods have focused on calibration-based, model-based, and computation-based approaches. We propose an experimental method to correct the effect of surface profile on spectral images. Three-dimensional (3D) phantoms were built with acrylonitrile butadiene styrene (ABS) plastic using accurate 3D imaging and an emerging 3D printing technique. In this study, our method was utilized for the correction of optical properties (absorption coefficient μa and reduced scattering coefficient μs′) of objects obtained with a spatial frequency domain imaging system. The correction method was verified on three objects with simple to complex shapes. Incorrect optical properties due to surfaces with a minimum 4 mm variation in height and 80 degree slope were detected and improved, particularly the absorption coefficients. The 3D phantom-based correction method is applicable for a wide range of purposes. The advantages and drawbacks of the 3D phantom-based correction methods are discussed in detail. PMID:22741068
Modeling stray light from rough surfaces and subsurface scatter
NASA Astrophysics Data System (ADS)
Harvey, James E.; Goshy, John J.; Pfisterer, Richard N.
2014-09-01
Over the years we have developed an adequate theory and understanding of surface scatter from smooth optical surfaces (Rayleigh-Rice), moderately rough surfaces with paraxial incident and scattered angles (Beckmann-Kirchhoff) and even for moderately rough surfaces with arbitrary incident and scattered angles where a linear systems formulation requiring a two-parameter family of surface transfer functions is required to characterize the surface scatter process (generalized Harvey-Shack). However, there is always some new material or surface manufacturing process that provides non-intuitive scatter behavior. The linear systems formulation of surface scatter is potentially useful even for these situations. In this paper we will present empirical models of several classes of rough surfaces or materials (subsurface scatter) that allow us to accurately model the scattering behavior at any incident angle from limited measured scatter data. In particular, scattered radiance appears to continue being the natural quantity that exhibits simple, elegant behavior only in direction cosine space.
Scattering lengths in isotopologues of the RbYb system
NASA Astrophysics Data System (ADS)
Borkowski, Mateusz; Żuchowski, Piotr S.; Ciuryło, Roman; Julienne, Paul S.; Kędziera, Dariusz; Mentel, Łukasz; Tecmer, Paweł; Münchow, Frank; Bruni, Cristian; Görlitz, Axel
2013-11-01
We model the binding energies of rovibrational levels of the RbYb molecule using experimental data from two-color photoassociation spectroscopy in mixtures of ultracold 87Rb with various Yb isotopes. The model uses a theoretical potential based on state-of-the-art ab initio potentials, further improved by least-squares fitting to the experimental data. We have fixed the number of bound states supported by the potential curve, so that the model is mass scaled, that is, it accurately describes the bound-state energies for all measured isotopic combinations. Such a model enables an accurate prediction of the s-wave scattering lengths of all isotopic combinations of the RbYb system. The reduced-mass range is broad enough to cover the full range of scattering lengths, from -∞ to +∞. For example, the 87Rb174Yb system is characterized by a large positive scattering length of +880(120) a.u., while 87Rb173Yb has a=-626(88) a.u. On the other hand 87Rb170Yb has a very small scattering length of -11.5(2.5) a.u., confirmed by the pair's extremely low thermalization rate. For isotopic combinations including 85Rb the variation of the interspecies scattering lengths is much smoother, ranging from +39.0(1.6) a.u. for 85Rb176Yb to +230(12) a.u. in the case of 85Rb168Yb. Hyperfine corrections to these scattering lengths are also given. We further complement the fitted potential with interaction parameters calculated from alternative methods. The recommended value of the van der Waals coefficient, C6 = 2837(13) a.u., agrees with but is more precise than the current state-of-the-art theoretical predictions [M. S. Safronova, S. G. Porsev, and C. W. Clark, Phys. Rev. Lett. 109, 230802 (2012)].
Accurate measurement of unsteady state fluid temperature
NASA Astrophysics Data System (ADS)
Jaremkiewicz, Magdalena
2017-03-01
In this paper, two accurate methods for determining transient fluid temperature are presented. Measurements were conducted in boiling water, whose temperature is known: the thermometers, initially at ambient temperature, were suddenly immersed in saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One of them is a K-type industrial thermometer widely available commercially. The temperature indicated by this thermometer was corrected by treating it as a first- or second-order inertia device. A new thermometer design was also proposed and used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheathed thermocouple located at its center. The temperature of the fluid was determined from measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of air flowing through a wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results than industrial thermometers used in conjunction with a simple temperature correction based on a first- or second-order inertia model. By comparing the results, it was demonstrated that the new thermometer yields the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurement of fast-changing fluid temperatures is possible thanks to the low-inertia thermometer and the fast space marching method applied to solve the inverse heat conduction problem.
An optical model for composite nuclear scattering
NASA Technical Reports Server (NTRS)
Wilson, J. W.; Townsend, L. W.
1981-01-01
The optical model of composite particle scattering is considered and its accuracy compared with that of other models. A nonrelativistic Schroedinger equation with two-body potentials is used for the scattering of a single particle by an energy-dependent local potential. The potential for the elastic channel is composed of matrix elements of a single scattering operator taken between the ground states of the projectile and the target; the coherent amplitude is considered as dominating the scattering in the forward direction. A multiple scattering series is analytically explored and formally summed by the solution of an equivalent Schroedinger equation. Cross sections of nuclear scattering are then determined for He-4 and C-12 nuclei at 3.6 GeV/nucleon and O-16 projectiles at 2.1 GeV/nucleon, and the optical model approximations are found to be consistently lower and more accurate than approximations made by use of Glauber's theory.
Position Error Covariance Matrix Validation and Correction
NASA Technical Reports Server (NTRS)
Frisbee, Joe, Jr.
2016-01-01
In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.
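The Gaussian assumption mentioned above has a standard numerical check that can be sketched briefly. This is a generic illustration, not the presentation's procedure: if a 3x3 position covariance P faithfully describes Gaussian errors, the squared Mahalanobis distances of the residuals follow a chi-square distribution with 3 degrees of freedom (mean 3, variance 6). The covariance values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# A hypothetical 3x3 position error covariance (km^2) at closest approach.
P = np.array([[0.04, 0.01, 0.0],
              [0.01, 0.09, 0.02],
              [0.0,  0.02, 0.16]])

# Draw "actual" position errors consistent with P, then test whether their
# squared Mahalanobis distances behave like a chi-square with 3 dof.
errors = rng.multivariate_normal(np.zeros(3), P, size=20000)
Pinv = np.linalg.inv(P)
d2 = np.einsum("ij,jk,ik->i", errors, Pinv, errors)

print(round(d2.mean(), 2))   # close to 3 when the covariance is realistic
print(round(d2.var(), 2))    # close to 6
```

In practice the same statistic computed from real tracking residuals, rather than sampled errors, reveals whether a predicted covariance is too optimistic or too pessimistic.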
Direct Calculation of the Scattering Amplitude Without Partial Wave Analysis
NASA Technical Reports Server (NTRS)
Shertzer, J.; Temkin, A.; Fisher, Richard R. (Technical Monitor)
2001-01-01
Two new developments in scattering theory are reported. We show, in a practical way, how one can calculate the full scattering amplitude without invoking a partial wave expansion. First, the integral expression for the scattering amplitude f(theta) is simplified by an analytic integration over the azimuthal angle. Second, the full scattering wavefunction which appears in the integral expression for f(theta) is obtained by solving the Schrodinger equation with the finite element method (FEM). As an example, we calculate electron scattering from the Hartree potential. With minimal computational effort, we obtain accurate and stable results for the scattering amplitude.
Correction technique for cascade gammas in I-124 imaging on a fully-3D, Time-of-Flight PET Scanner.
Surti, Suleman; Scheuermann, Ryan; Karp, Joel S
2009-06-01
It has been shown that I-124 PET imaging can be used for accurate dose estimation in radio-immunotherapy techniques. However, I-124 is not a pure positron emitter, leading to two types of coincidence events not typically encountered: increased random coincidences due to non-annihilation cascade photons, and true coincidences between an annihilation photon and primarily a coincident 602 keV cascade gamma (true coincidence gamma-ray background). The increased random coincidences are accurately estimated by the delayed window technique. Here we evaluate the radial and time distributions of the true coincidence gamma-ray background in order to correct and accurately estimate lesion uptake for I-124 imaging in a time-of-flight (TOF) PET scanner. We performed measurements using a line source of activity placed in air and a water-filled cylinder, using F-18 and I-124 radio-isotopes. Our results show that the true coincidence gamma-ray backgrounds in I-124 have a uniform radial distribution, while the time distribution is similar to the scattered annihilation coincidences. As a result, we implemented a TOF-extended single scatter simulation algorithm with a uniform radial offset in the tail-fitting procedure for accurate correction of TOF data in I-124 imaging. Imaging results show that the contrast recovery for large spheres in a uniform activity background is similar in F-18 and I-124 imaging. There is some degradation in contrast recovery for small spheres in I-124, which is explained by the increased positron range, and reduced spatial resolution, of I-124 compared to F-18. Our results show that it is possible to perform accurate TOF based corrections for I-124 imaging.
Scattered-wave-packet formalism with applications to barrier scattering and quantum transistors.
Chou, Chia-Chun; Wyatt, Robert E
2011-11-01
The scattered wave formalism developed for a quantum subsystem interacting with reservoirs through open boundaries is applied to one- or two-dimensional barrier scattering and quantum transistors. The total wave function is divided into incident and scattered components. Markovian outgoing wave boundary conditions are imposed on the scattered or total wave function by either the ratio or polynomial methods. For barrier scattering problems, accurate time-dependent transmission probabilities are obtained through the integration of the modified time-dependent Schrödinger equations for the scattered wave function. For quantum transistors, the time-dependent transport is studied for a quantum wave packet propagating through the conduction channel of a field effect transistor. This study shows that the scattered wave formalism significantly reduces computational effort relative to other open boundary methods and demonstrates wide applications to quantum dynamical processes.
Accurate Evaluation of Quantum Integrals
NASA Technical Reports Server (NTRS)
Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)
1995-01-01
Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues, the error growth in repeated Richardson's extrapolation, and show that the expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
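The finite-difference-plus-Richardson idea can be demonstrated on the one case where the answer is known exactly. This is a minimal sketch, not the authors' code: the harmonic oscillator H = -(1/2) d²/dx² + (1/2) x² has ground-state energy exactly 0.5, the second-order finite difference carries an O(h²) error, and one Richardson step cancels that leading term:

```python
import numpy as np

def ground_state_energy(n):
    """Lowest eigenvalue of -(1/2) psi'' + (1/2) x^2 psi on [-10, 10]
    using an n-point second-order finite-difference grid."""
    x, h = np.linspace(-10, 10, n, retstep=True)
    main = 1.0 / h**2 + 0.5 * x**2          # diagonal of H
    off = -0.5 / h**2 * np.ones(n - 1)      # off-diagonals (kinetic term)
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[0]

e_h = ground_state_energy(200)      # grid step h
e_h2 = ground_state_energy(399)     # grid step exactly h/2
e_rich = (4 * e_h2 - e_h) / 3       # Richardson: cancels the O(h^2) term

exact = 0.5
print(abs(e_h - exact), abs(e_rich - exact))
```

Note the grid sizes: 200 points give step h = 20/199, and 399 points give exactly h/2, which is what makes the 4/3, -1/3 Richardson weights valid. The difference e_h2 - e_h also serves as the built-in error estimate the abstract mentions.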
Kepler Predictor-Corrector Algorithm: Scattering Dynamics with One-Over-R Singular Potentials.
Markmann, Andreas; Graziani, Frank; Batista, Victor S
2012-01-10
An accurate and efficient algorithm for dynamics simulations of particles with attractive 1/r singular potentials is introduced. The method is applied to semiclassical dynamics simulations of electron-proton scattering processes in the Wigner-transform time-dependent picture, showing excellent agreement with full quantum dynamics calculations. Rather than avoiding the singularity problem by using a pseudopotential, the algorithm predicts the outcome of close-encounter two-body collisions for the true 1/r potential by solving the Kepler problem analytically and corrects the trajectory for multiscattering with other particles in the system by using standard numerical techniques (e.g., velocity Verlet or Gear predictor-corrector algorithms). The resulting integration is time-reversal symmetric and can be applied to the general multibody dynamics problem featuring close encounters as occur in electron-ion scattering events, in particle-antiparticle dynamics, as well as in classical simulations of charged interstellar gas dynamics and gravitational celestial mechanics.
NASA Astrophysics Data System (ADS)
Zhang, Jie; Felice, Maria; Velichko, Alexander; Wilcox, Paul
2016-02-01
The scattering behaviour of a finite-sized elastodynamic scatterer in a homogeneous isotropic medium can be encapsulated in a scattering matrix (S-matrix) for each wave mode combination. Each S-matrix is a continuous complex function of three variables: incident wave angle, scattered wave angle, and frequency. In this paper, the S-matrices for various scatterers (circular holes, straight smooth cracks, rough cracks, and four circular holes in an area of interest) are investigated. It is shown that, for a given scatterer, the continuous data in the angular dimensions of an S-matrix can be represented to a prescribed level of accuracy by a finite number of complex Fourier coefficients. The finding is that the number of angular orders required to characterise a scatterer is a function of scatterer size and is related to the Nyquist theorem. The variation of scattering behaviour with frequency is examined next and is found to show periodic oscillation with a period which is a function of scatterer size and its geometry. The shortest period of these oscillations indicates the maximum frequency increment required to accurately describe the scattering behaviour in a specific frequency range. Finally, the maximum angular order and frequency increments for the chosen scatterers in a specific frequency range are suggested.
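The angular-Fourier compression described above can be sketched with an FFT. The "S-matrix slice" below is an illustrative analytic function, not data for a real scatterer; the factor exp(i·ka·cosθ) is chosen because its angular orders die off once the order exceeds roughly ka, which mimics the size-dependent Nyquist-like limit the paper reports:

```python
import numpy as np

# Sample a smooth, 2*pi-periodic "S-matrix slice" S(theta) at fixed incidence
# and frequency, then represent it by a finite number of angular orders.
n = 256
theta = 2 * np.pi * np.arange(n) / n
ka = 8.0                                   # nondimensional scatterer size
s = np.exp(1j * ka * np.cos(theta)) / (1 + 0.3 * np.cos(theta))

coeffs = np.fft.fft(s) / n                 # complex Fourier coefficients

def reconstruct(coeffs, max_order):
    """Rebuild the angular function keeping orders -max_order..max_order."""
    c = np.zeros_like(coeffs)
    c[:max_order + 1] = coeffs[:max_order + 1]
    if max_order > 0:
        c[-max_order:] = coeffs[-max_order:]
    return np.fft.ifft(c) * n

# Error collapses once the retained order exceeds ~ka.
for m in (4, 8, 16, 32):
    err = np.max(np.abs(reconstruct(coeffs, m) - s)) / np.max(np.abs(s))
    print(m, f"{err:.1e}")
```

Doubling ka roughly doubles the number of orders needed, which is the sense in which the required angular sampling scales with scatterer size in wavelengths.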
78 FR 75449 - Miscellaneous Corrections; Corrections
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-12
..., 50, 52, and 70 RIN 3150-AJ23 Miscellaneous Corrections; Corrections AGENCY: Nuclear Regulatory... final rule in the Federal Register on June 7, 2013, to make miscellaneous corrections to its regulations... miscellaneous corrections to its regulations in chapter I of Title 10 of the Code of Federal Regulations (10...
NASA Astrophysics Data System (ADS)
Brochu, Frederic M.; Joseph, James; Tomaszewski, Michal R.; Bohndiek, Sarah E.
2016-03-01
Optoacoustic Tomography is a fast-developing imaging modality, combining the high resolution and penetration depth of ultrasound detection with the high contrast available from optical absorption in tissue. The spectral profile of near infrared excitation light used in optoacoustic tomography instruments is modified by absorption and scattering as it propagates deep into biological tissue. The resulting images therefore provide only qualitative insight into the distribution of tissue chromophores. Knowledge of the spectral profile of excitation light across the mouse is needed for accurate determination of the absorption coefficient in vivo. Under the conditions of constant Grueneisen parameter and accurate knowledge of the light fluence, a linear relationship should exist between the initial optoacoustic pressure amplitude and the tissue absorption coefficient. Using data from a commercial optoacoustic tomography system, we implemented an iterative optimization based on the δ-Eddington approximation to the Radiative Transfer Equation to derive a light fluence map within a given object. We segmented the images based on the positions of phantom inclusions, or mouse organs, and used known scattering coefficients for initialization. Performing the fluence correction in simple phantoms allowed the expected linear relationship between recorded and independently measured absorption coefficients to be retrieved and spectral coloring to be compensated. For in vivo data, the correction resulted in an enhancement of signal intensities in deep tissues. This improved our ability to visualize organs at depth (>5 mm). Future work will aim to perform the optimization without data normalization and explore the need for methodology that enables routine implementation for in vivo imaging.
The prediction of Neutron Elastic Scattering from Tritium for E(n) = 6-14 MeV
Anderson, J D; Dietrich, F S; Luu, T; McNabb, D P; Navratil, P; Quaglioni, S
2010-06-14
In a recent report Navratil et al. evaluated the angle-integrated cross section and the angular distribution for 14-MeV n+T elastic scattering by inferring these cross sections from accurately measured p+³He angular distributions. This evaluation used a combination of two theoretical treatments, based on the no-core shell model and resonating-group method (NCSM/RGM) and on the R-matrix formalism, to connect the two charge-symmetric reactions n+T and p+³He. In this report we extend this treatment to cover the neutron incident energy range 6-14 MeV. To do this, we evaluate angle-dependent correction factors for the NCSM/RGM calculations so that they agree with the p+³He data near 6 MeV, and using the results found earlier near 14 MeV we interpolate these correction factors to obtain correction factors throughout the 6-14 MeV energy range. The agreement between the corrected NCSM/RGM and R-matrix values for the integral elastic cross sections is excellent (±1%), and these are in very good agreement with total cross section experiments. This result can be attributed to the nearly constant correction factors at forward angles, and to the evidently satisfactory physics content of the two calculations. The difference in angular shape, obtained by comparing values of the scattering probability distribution P(μ) vs. μ (the cosine of the c.m. scattering angle), is about ±4% and appears to be related to differences in the two theoretical calculations. Averaging the calculations yields P(μ) values with errors of ±2.5% or less. These averaged values, along with the corresponding quantities for the differential cross sections, will form the basis of a new evaluation of n+T elastic scattering. Computer files of the results discussed in this report will be supplied upon request.
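The interpolation step described above (correction factors anchored at 6 and 14 MeV, interpolated between) can be sketched in a few lines. The angular grids and factor values below are hypothetical placeholders, not the report's numbers; only the structure of the calculation is illustrated:

```python
import numpy as np

# Hypothetical angle-dependent correction factors (corrected/uncorrected
# cross-section ratio for the NCSM/RGM calculation) at the two anchor
# energies; the numbers are illustrative only.
mu = np.linspace(-1.0, 1.0, 9)          # cosine of c.m. scattering angle
f_6mev = 1.0 + 0.02 * mu                 # near-constant at forward angles
f_14mev = 1.0 + 0.05 * mu - 0.01 * mu**2

def correction(energy_mev):
    """Linear interpolation in energy between the 6- and 14-MeV anchors."""
    w = (energy_mev - 6.0) / (14.0 - 6.0)
    return (1 - w) * f_6mev + w * f_14mev

f10 = correction(10.0)
print(f10[-1])   # forward-angle (mu = +1) factor at 10 MeV
```

At the anchor energies the interpolant reproduces the fitted factors exactly, and anywhere in between it varies smoothly, which is all the scheme requires when the factors themselves are nearly constant.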
Evaluating the capability of time-of-flight cameras for accurately imaging a cyclically loaded beam
NASA Astrophysics Data System (ADS)
Lahamy, Hervé; Lichti, Derek; El-Badry, Mamdouh; Qi, Xiaojuan; Detchev, Ivan; Steward, Jeremy; Moravvej, Mohammad
2015-05-01
Time-of-flight cameras are used for diverse applications ranging from human-machine interfaces and gaming to robotics and earth topography. This paper aims at evaluating the capability of the Mesa Imaging SR4000 and the Microsoft Kinect 2.0 time-of-flight cameras for accurately imaging the top surface of a concrete beam subjected to fatigue loading in laboratory conditions. Whereas previous work has demonstrated the success of such sensors for measuring the response at point locations, the aim here is to measure the entire beam surface in support of the overall objective of evaluating the effectiveness of concrete beam reinforcement with steel fibre reinforced polymer sheets. After applying corrections for lens distortions to the data and differencing images over time to remove systematic errors due to internal scattering, the periodic deflections experienced by the beam have been estimated for the entire top surface of the beam and at attached witness plates. The results have been assessed by comparison with measurements from highly-accurate laser displacement transducers. This study concludes that both the Microsoft Kinect 2.0 and the Mesa Imaging SR4000 are capable of sensing a moving surface with sub-millimeter accuracy once the image distortions have been modeled and removed.
Markerless attenuation correction for carotid MRI surface receiver coils in combined PET/MR imaging
NASA Astrophysics Data System (ADS)
Eldib, Mootaz; Bini, Jason; Robson, Philip M.; Calcagno, Claudia; Faul, David D.; Tsoumpas, Charalampos; Fayad, Zahi A.
2015-06-01
The purpose of the study was to evaluate the effect of attenuation of MR coils on quantitative carotid PET/MR exams. Additionally, an automated attenuation correction method for flexible carotid MR coils was developed and evaluated. The attenuation of the carotid coil was measured by imaging a uniform water phantom injected with 37 MBq of 18F-FDG in a combined PET/MR scanner for 24 min with and without the coil. In the same session, an ultra-short echo time (UTE) image of the coil on top of the phantom was acquired. Using a combination of rigid and non-rigid registration, a CT-based attenuation map was registered to the UTE image of the coil for attenuation and scatter correction. After phantom validation, the effect of the carotid coil attenuation and the attenuation correction method were evaluated in five subjects. Phantom studies indicated that the overall loss of PET counts due to the coil was 6.3% with local region-of-interest (ROI) errors reaching up to 18.8%. Our registration method to correct for attenuation from the coil decreased the global error and local error (ROI) to 0.8% and 3.8%, respectively. The proposed registration method accurately captured the location and shape of the coil with a maximum spatial error of 2.6 mm. Quantitative analysis in human studies correlated with the phantom findings, but was dependent on the size of the ROI used in the analysis. MR coils result in significant error in PET quantification and thus attenuation correction is needed. The proposed strategy provides an operator-free method for attenuation and scatter correction for a flexible MRI carotid surface coil for routine clinical use.
Device accurately measures and records low gas-flow rates
NASA Technical Reports Server (NTRS)
Branum, L. W.
1966-01-01
Free-floating piston in a vertical column accurately measures and records low gas-flow rates. The system may be calibrated, using an adjustable flow-rate gas supply, a low pressure gage, and a sequence recorder. From the calibration rates, a nomograph may be made for easy reduction. Temperature correction may be added for further accuracy.
Progress toward accurate high spatial resolution actinide analysis by EPMA
NASA Astrophysics Data System (ADS)
Jercinovic, M. J.; Allaz, J. M.; Williams, M. L.
2010-12-01
High precision, high spatial resolution EPMA of actinides is a significant issue for geochronology, resource geochemistry, and studies involving the nuclear fuel cycle. Particular interest focuses on understanding of the behavior of Th and U in the growth and breakdown reactions relevant to actinide-bearing phases (monazite, zircon, thorite, allanite, etc.), and geochemical fractionation processes involving Th and U in fluid interactions. Unfortunately, the measurement of minor and trace concentrations of U in the presence of major concentrations of Th and/or REEs is particularly problematic, especially in complexly zoned phases with large compositional variation on the micro or nanoscale - spatial resolutions now accessible with modern instruments. Sub-micron, high precision compositional analysis of minor components is feasible in very high Z phases where scattering is limited at lower kV (15 kV or less) and where the beam diameter can be kept below 400 nm at high current (e.g. 200-500 nA). High collection efficiency spectrometers and high performance electron optics in EPMA now allow the use of lower overvoltage through an exceptional range in beam current, facilitating higher spatial resolution quantitative analysis. The U LIII edge at 17.2 kV precludes L-series analysis at low kV (high spatial resolution), requiring careful measurement of the actinide M series. Also, U Lα detection (wavelength = 0.9 Å) requires the use of LiF (220) or (420), not generally available on most instruments. Strong peak overlaps of Th on U make highly accurate interference correction mandatory, with problems compounded by the Th MIV and Th MV absorption edges affecting peak, background, and interference calibration measurements (especially the interference of the Th M line family on U Mβ). Complex REE-bearing phases such as monazite, zircon, and allanite have particularly complex interference issues due to multiple peak and background overlaps from elements present in the activation
Thomson scattering from laser plasmas
Moody, J D; Alley, W E; De Groot, J S; Estabrook, K G; Glenzer, S H; Hammer, J H; Jadaud, J P; MacGowan, B J; Rozmus, W; Suter, L J; Williams, E A
1999-01-12
Thomson scattering has recently been introduced as a fundamental diagnostic of plasma conditions and basic physical processes in dense, inertial confinement fusion plasmas. Experiments at the Nova laser facility [E. M. Campbell et al., Laser Part. Beams 9, 209 (1991)] have demonstrated accurate temporally and spatially resolved characterization of densities, electron temperatures, and average ionization levels by simultaneously observing Thomson scattered light from ion acoustic and electron plasma (Langmuir) fluctuations. In addition, observations of fast and slow ion acoustic waves in two-ion species plasmas have also allowed an independent measurement of the ion temperature. These results have motivated the application of Thomson scattering in closed-geometry inertial confinement fusion hohlraums to benchmark integrated radiation-hydrodynamic modeling of fusion plasmas. For this purpose a high energy 4ω probe laser was implemented recently allowing ultraviolet Thomson scattering at various locations in high-density gas-filled hohlraum plasmas. In particular, the observation of steep electron temperature gradients indicates that electron thermal transport is inhibited in these gas-filled hohlraums. Hydrodynamic calculations which include an exact treatment of large-scale magnetic fields are in agreement with these findings. Moreover, the Thomson scattering data clearly indicate axial stagnation in these hohlraums by showing a fast rise of the ion temperature. Its timing is in good agreement with calculations indicating that the stagnating plasma will not deteriorate the implosion of the fusion capsules in ignition experiments.
Analog measurement of scattered optical fluctuations
NASA Astrophysics Data System (ADS)
Smith, P. R.; Green, D. A.
1995-12-01
A statistical model that describes the analog measurement of a fluctuating light intensity that arises from a non-Gaussian scattering process is developed. The higher-order statistical moments are derived for a p-i-n diode receiver model and gamma-distributed intensity fluctuations. Criteria for the accurate measurement of the scattering fluctuations are found, and these are used to analyze data derived from an on-line scatterometer system. Implications for future on-line measurement technology are discussed.
Calibrations of the LHD Thomson scattering system
NASA Astrophysics Data System (ADS)
Yamada, I.; Funaba, H.; Yasuhara, R.; Hayashi, H.; Kenmochi, N.; Minami, T.; Yoshikawa, M.; Ohta, K.; Lee, J. H.; Lee, S. H.
2016-11-01
The Thomson scattering diagnostic systems are widely used for the measurements of absolute local electron temperatures and densities of fusion plasmas. In order to obtain accurate and reliable temperature and density data, careful calibrations of the system are required. We have tried several calibration methods since the second LHD experiment campaign in 1998. We summarize the current status of the calibration methods for the electron temperature and density measurements by the LHD Thomson scattering diagnostic system. Future plans are briefly discussed.
Rayleigh scattering. [molecular scattering terminology redefined
NASA Technical Reports Server (NTRS)
Young, A. T.
1981-01-01
The physical phenomena of molecular scattering are examined with the objective of redefining the confusing terminology currently used. The following definitions are proposed: molecular scattering consists of Rayleigh and vibrational Raman scattering; the Rayleigh scattering consists of rotational Raman lines and the central Cabannes line; the Cabannes line is composed of the Brillouin doublet and the central Gross or Landau-Placzek line. The term 'Rayleigh line' should never be used.
Accurate Scientific Visualization in Research and Physics Teaching
NASA Astrophysics Data System (ADS)
Wendler, Tim
2011-10-01
Accurate visualization is key in the expression and comprehension of physical principles. Many 3D animation software packages come with built-in numerical methods for a variety of fundamental classical systems. Scripting languages give access to low-level computational functionality, thereby revealing a virtual physics laboratory for teaching and research. Specific examples will be presented: Galilean relativistic hair, energy conservation in complex systems, scattering from a central force, and energy transfer in bi-molecular reactions.
Scattered light mapping of protoplanetary disks
NASA Astrophysics Data System (ADS)
Stolker, T.; Dominik, C.; Min, M.; Garufi, A.; Mulders, G. D.; Avenhaus, H.
2016-12-01
Context. High-contrast scattered light observations have revealed the surface morphology of several dozen protoplanetary disks at optical and near-infrared wavelengths. Inclined disks offer the opportunity to measure part of the phase function of the dust grains that reside in the disk surface which is essential for our understanding of protoplanetary dust properties and the early stages of planet formation. Aims: We aim to construct a method which takes into account how the flaring shape of the scattering surface of an optically thick protoplanetary disk projects onto the image plane of the observer. This allows us to map physical quantities (e.g., scattering radius and scattering angle) onto scattered light images and retrieve stellar irradiation corrected images (r2-scaled) and dust phase functions. Methods: The scattered light mapping method projects a power law shaped disk surface onto the detector plane after which the observed scattered light image is interpolated backward onto the disk surface. We apply the method on archival polarized intensity images of the protoplanetary disk around HD 100546 that were obtained with VLT/SPHERE in the R' band and VLT/NACO in the H and Ks bands. Results: The brightest side of the r2-scaled R' band polarized intensity image of HD 100546 changes from the far to the near side of the disk when a flaring instead of a geometrically flat disk surface is used for the r2-scaling. The decrease in polarized surface brightness in the scattering angle range of 40°-70° is likely a result of the dust phase function and degree of polarization which peak in different scattering angle regimes. The derived phase functions show part of a forward scattering peak, which indicates that large, aggregate dust grains dominate the scattering opacity in the disk surface. Conclusions: Projection effects of a protoplanetary disk surface need to be taken into account to correctly interpret scattered light images. Applying the correct scaling for the
Ultrasound scatter in heterogeneous 3D microstructures
NASA Astrophysics Data System (ADS)
Engle, B. J.; Roberts, R. A.; Grandin, R. J.
2017-02-01
This paper reports on a computational study of ultrasound propagation in heterogeneous metal microstructures. Random spatial fluctuations in elastic properties over a range of length scales relative to ultrasound wavelength can give rise to scatter-induced attenuation, backscatter noise, and phase front aberration. It is of interest to quantify the dependence of these phenomena on the microstructure parameters, for the purpose of quantifying deleterious consequences on flaw detectability, and for the purpose of material characterization. Valuable tools for estimation of microstructure parameters (e.g. grain size) through analysis of ultrasound backscatter have been developed based on approximate weak-scattering models. While useful, it is understood that these tools display inherent inaccuracy when multiple scattering phenomena significantly contribute to the measurement. It is the goal of this work to supplement weak scattering model predictions with corrections derived through application of an exact computational scattering model to explicitly prescribed microstructures.
NASA Technical Reports Server (NTRS)
Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.
2012-01-01
A numerical accuracy analysis of the radiative transfer equation (RTE) solution based on separation of the diffuse light field into anisotropic and smooth parts is presented. The analysis uses three different algorithms based on the discrete ordinate method (DOM). Two methods, DOMAS and DOM2+, that do not use truncation of the phase function, are compared against the TMS method. DOMAS and DOM2+ use the small-angle modification of the RTE and the single scattering term, respectively, as the anisotropic part. The TMS method uses the delta-M method for truncation of the phase function along with the single scattering correction. For reference, a standard discrete ordinate method, DOM, is also included in the analysis. The results obtained for cases with high scattering anisotropy show that at a low number of streams (16, 32) only DOMAS provides an accurate solution in the aureole area. Outside of the aureole, the convergence and accuracy of DOMAS and TMS are found to be approximately similar: DOMAS was more accurate in cases with coarse aerosol and liquid water cloud models, except at low optical depth, while TMS showed better results in the case of ice clouds.
Universality of quantum gravity corrections.
Das, Saurya; Vagenas, Elias C
2008-11-28
We show that the existence of a minimum measurable length and the related generalized uncertainty principle (GUP), predicted by theories of quantum gravity, influence all quantum Hamiltonians. Thus, they predict quantum gravity corrections to various quantum phenomena. We compute such corrections to the Lamb shift, the Landau levels, and the tunneling current in a scanning tunneling microscope. We show that these corrections can be interpreted in two ways: (a) either that they are exceedingly small, beyond the reach of current experiments, or (b) that they predict upper bounds on the quantum gravity parameter in the GUP, compatible with experiments at the electroweak scale. Thus, more accurate measurements in the future should either be able to test these predictions, or further tighten the above bounds and predict an intermediate length scale between the electroweak and the Planck scale.
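The abstract does not reproduce the GUP itself; as a hedged aside, one widely used one-parameter (quadratic, Kempf-type) form, which is the kind of relation the quoted bounds on the "quantum gravity parameter" refer to, reads:

```latex
% Illustrative one-parameter GUP; \beta = \beta_0 / (M_{\mathrm{Pl}} c)^2
[\,\hat{x}, \hat{p}\,] = i\hbar \left( 1 + \beta \hat{p}^{\,2} \right),
\qquad
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}\left( 1 + \beta \,(\Delta p)^2 \right)
\;\;\Rightarrow\;\;
\Delta x_{\min} = \hbar \sqrt{\beta}.
```

Minimizing the right-hand side over Δp gives the minimum measurable length Δx_min referred to in the abstract.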
Accurate Theoretical Thermochemistry for Fluoroethyl Radicals.
Ganyecz, Ádám; Kállay, Mihály; Csontos, József
2017-02-09
An accurate coupled-cluster (CC) based model chemistry was applied to calculate reliable thermochemical quantities for hydrofluorocarbon derivatives including the radicals 1-fluoroethyl (CH3-CHF), 1,1-difluoroethyl (CH3-CF2), 2-fluoroethyl (CH2F-CH2), 1,2-difluoroethyl (CH2F-CHF), 2,2-difluoroethyl (CHF2-CH2), 2,2,2-trifluoroethyl (CF3-CH2), 1,2,2,2-tetrafluoroethyl (CF3-CHF), and pentafluoroethyl (CF3-CF2). The model chemistry contains iterative triple and perturbative quadruple excitations in CC theory, as well as scalar relativistic and diagonal Born-Oppenheimer corrections. To obtain heat of formation values with better than chemical accuracy, the perturbative quadruple excitations and scalar relativistic corrections were indispensable. Their contributions to the heats of formation steadily increase with the number of fluorine atoms in the radical, reaching 10 kJ/mol for CF3-CF2. When discrepancies were found between the experimental values and ours, it was always possible to resolve the issue by recalculating the experimental result with currently recommended auxiliary data. For each radical studied, this work delivers the best available heat of formation and entropy data.
Exact Rayleigh scattering calculations for use with the Nimbus-7 Coastal Zone Color Scanner.
Gordon, H R; Brown, J W; Evans, R H
1988-03-01
For improved analysis of Coastal Zone Color Scanner (CZCS) imagery, the radiance reflected from a plane-parallel atmosphere and flat sea surface in the absence of aerosols (Rayleigh radiance) has been computed with an exact multiple scattering code, i.e., including polarization. The results indicate that the single scattering approximation normally used to compute this radiance can cause errors of up to 5% for small and moderate solar zenith angles. At large solar zenith angles, such as those encountered in the analysis of high-latitude imagery, the errors can become much larger, e.g., >10% in the blue band. The single scattering error also varies along individual scan lines. Comparison with multiple scattering computations using scalar transfer theory, i.e., ignoring polarization, shows that scalar theory can yield errors of approximately the same magnitude as single scattering when compared with exact computations at small to moderate values of the solar zenith angle. The exact computations can be easily incorporated into CZCS processing algorithms, and, for application to future instruments with higher radiometric sensitivity, a scheme is developed with which the effect of variations in the surface pressure can be easily and accurately included in the exact computation of the Rayleigh radiance. Direct application of these computations to CZCS imagery indicates that accurate atmospheric corrections can be made with solar zenith angles at least as large as 65 degrees, and probably up to at least 70 degrees with a more sensitive instrument. This suggests that the new Rayleigh radiance algorithm should produce more consistent pigment retrievals, particularly at high latitudes.
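For reference alongside this record, the scalar single-scattering quantities it discusses can be sketched in a few lines. This is a generic illustration, not the paper's code: the phase function omits the depolarization term, the thin-layer reflectance form is the textbook approximation, and the polarization effects the paper shows can matter are deliberately left out.

```python
import numpy as np

def rayleigh_phase(theta):
    """Scalar Rayleigh phase function P(theta) = (3/4)(1 + cos^2 theta),
    normalized so its average over all solid angles is 1 (depolarization neglected)."""
    return 0.75 * (1.0 + np.cos(theta) ** 2)

def single_scatter_reflectance(tau, mu_sun, mu_view, scat_angle):
    """Thin-layer single-scattering reflectance of a purely Rayleigh atmosphere:
    rho = tau * P(Theta) / (4 * mu_sun * mu_view)."""
    return tau * rayleigh_phase(scat_angle) / (4.0 * mu_sun * mu_view)
```

The exact multiple-scattering result of the paper differs from this approximation by the percent-level errors quoted in the abstract.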
Accurate shear measurement with faint sources
Zhang, Jun; Foucaud, Sebastien; Luo, Wentao E-mail: walt@shao.ac.cn
2015-01-01
For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.
Accurate upwind methods for the Euler equations
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1993-01-01
A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one intermediate state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely, Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
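The median function mentioned in this record is a standard building block in limiter code. The sketch below is generic, not the paper's exact scheme: it shows the usual minmod identity for the median of three numbers and its use to clip an unlimited interface value into the range spanned by the neighboring cell averages, which is how monotonicity constraints of this family are typically written.

```python
def minmod(x, y):
    """Return the smaller-magnitude argument when x and y share a sign, else 0."""
    if x * y <= 0.0:
        return 0.0
    return x if abs(x) < abs(y) else y

def median(a, b, c):
    """Median of three numbers via the minmod identity median(a,b,c) = a + minmod(b-a, c-a)."""
    return a + minmod(b - a, c - a)

def limited_face_value(u_left, u_face, u_right):
    """Clip an unlimited interface value into the interval spanned by its neighbors;
    for linear convection this preserves monotonicity without degrading smooth data."""
    return median(u_left, u_face, u_right)
```

On smooth data the median returns the unlimited value unchanged, so second-order accuracy is retained; near an extremum it falls back to a neighboring cell average.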
NASA Astrophysics Data System (ADS)
Mirnov, V. V.; Brower, D. L.; Hartog, D. J. Den; Ding, W. X.; Duff, J.; Parke, E.
2014-11-01
At anticipated high electron temperatures in ITER, the effects of electron thermal motion on Thomson scattering (TS), toroidal interferometer/polarimeter (TIP), and poloidal polarimeter (PoPola) diagnostics will be significant and must be accurately treated. The precision of the previous lowest-order model, linear in τ = Te/(mec²), may be insufficient; we present a more precise model with τ²-order corrections to satisfy the high accuracy required for the ITER TIP and PoPola diagnostics. The linear model is extended from a Maxwellian to a more general class of anisotropic electron distributions, which allows us to take into account distortions caused by equilibrium current, ECRH, and RF current drive effects. The classical problem of the degree of polarization of incoherent Thomson scattered radiation is solved exactly, without any approximations, for the full range of incident polarizations, scattering angles, and electron thermal motion from non-relativistic to ultra-relativistic. The results are discussed in the context of the possible use of the polarization properties of Thomson scattered light as a method of Te measurement relevant to ITER operational scenarios.
Compton scattering of blackbody photons by relativistic electrons
NASA Astrophysics Data System (ADS)
Zdziarski, Andrzej A.; Pjanka, Patryk
2013-12-01
We present simple and accurate analytical formulas for the rates of Compton scattering by relativistic electrons integrated over the energy distribution of blackbody seed photons. Both anisotropic scattering, in which blackbody photons arriving from one direction are scattered by an anisotropic electron distribution into another direction, and scattering of isotropic seed photons are considered. Compton scattering by relativistic electrons off blackbody photons from either stars or cosmic microwave background takes place, in particular, in microquasars, colliding-wind binaries, supernova remnants, interstellar medium and the vicinity of the Sun.
NASA Astrophysics Data System (ADS)
Lifton, J. J.; Malcolm, A. A.; McBride, J. W.
2016-01-01
Scattered radiation and beam hardening introduce artefacts that degrade the quality of data in x-ray computed tomography (CT). It is unclear how these artefacts influence dimensional measurements evaluated from CT data. Understanding and quantifying the influence of these artefacts on dimensional measurements is required to evaluate the uncertainty of CT-based dimensional measurements. In this work the influence of scatter and beam hardening on dimensional measurements is investigated using the beam stop array scatter correction method and spectrum pre-filtration for the measurement of an object with internal and external cylindrical dimensional features. Scatter and beam hardening are found to influence dimensional measurements when evaluated using the ISO50 surface determination method. On the other hand, a gradient-based surface determination method is found to be robust to the influence of artefacts and leads to more accurate dimensional measurements than those evaluated using the ISO50 method. In addition to these observations the GUM method for evaluating standard measurement uncertainties is applied and the standard measurement uncertainty due to scatter and beam hardening is estimated.
Morphology supporting function: attenuation correction for SPECT/CT, PET/CT, and PET/MR imaging
Lee, Tzu C.; Alessio, Adam M.; Miyaoka, Robert M.; Kinahan, Paul E.
2017-01-01
Both SPECT and, in particular, PET are unique in medical imaging for their high sensitivity and direct link to a physical quantity, i.e. radiotracer concentration. This gives PET and SPECT imaging unique capabilities for accurately monitoring disease activity for the purposes of clinical management or therapy development. However, to achieve a direct quantitative connection between the underlying radiotracer concentration and the reconstructed image values, several confounding physical effects have to be estimated, notably photon attenuation and scatter. With the advent of dual-modality SPECT/CT, PET/CT, and PET/MR scanners, the complementary CT or MR image data can enable these corrections, although there are unique challenges for each combination. This review covers the basic physics underlying photon attenuation and scatter and summarizes technical considerations for multimodal imaging with regard to PET and SPECT quantification, along with methods to address the challenges of each multimodal combination. PMID:26576737
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-18
...- Free Treatment Under the Generalized System of Preferences and for Other Purposes Correction In... following correction: On page 407, the date following the proclamation number should read ``December...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-10
... United States-Panama Trade Promotion Agreement and for Other Purposes Correction In Presidential document... correction: On page 66507, the proclamation identification heading on line one should read...
Linking Rayleigh-Rice theory with near linear shift invariance in light scattering phenomena
NASA Astrophysics Data System (ADS)
Stover, John C.; Schroeder, Sven; Staats, Chris; Lopushenko, Vladimir; Church, Eugene
2016-09-01
Understanding topographic scatter has been the subject of many publications. For optically smooth surfaces that scatter only from roughness (and not from contamination, films or bulk defects) the Rayleigh-Rice relationship resulting from a rigorous electromagnetic treatment has been successfully used for over three decades and experimentally proven at wavelengths ranging from the X-Ray to the far infrared (even to radar waves). The "holy grail" of roughness-induced scatter would be a relationship that is not limited to just optically smooth surfaces, but could be used for any surface where the material optical constants and the surface power spectral density function (PSD) are known. Just input these quantities and calculate the BRDF associated with any source incident angle, wavelength and polarization. This is an extremely challenging problem, but that has not stopped a number of attempts. An intuitive requirement on such general relationships is that they must reduce to the simple Rayleigh-Rice formula for sufficiently smooth surfaces. Unfortunately that does not always happen. Because most optically smooth surfaces also scatter from non-topographic features, doubt creeps in about the accuracy of Rayleigh-Rice. This paper investigates these issues and explains some of the confusion generated in recent years. The authors believe there are measurement issues, scatter source issues and rough surface derivation issues, but that Rayleigh-Rice is accurate as formulated and should not be "corrected." Moreover, it will be shown that the empirically observed near shift invariance of surface scatter phenomena is a direct consequence of the Rayleigh-Rice theory.
Quasi-elastic nuclear scattering at high energies
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Townsend, Lawrence W.; Wilson, John W.
1992-01-01
The quasi-elastic scattering of two nuclei is considered in the high-energy optical model. Energy loss and momentum transfer spectra for projectile ions are evaluated in terms of an inelastic multiple-scattering series corresponding to multiple knockout of target nucleons. The leading-order correction to the coherent projectile approximation is evaluated. Calculations are compared with experiments.
Rearrangement and annihilation in antihydrogen-atom scattering
Jonsell, Svante
2008-08-08
I review some results for annihilation and rearrangement processes in low-energy antihydrogen-hydrogen and antihydrogen-helium scattering. For the strong nuclear force, results using a δ-function potential are compared to a scattering length approach. It is found that the δ-function potential does not give correct annihilation cross sections in the case of antihydrogen-helium scattering. Problems associated with the use of the Born-Oppenheimer approximation for rearrangement calculations are reviewed.
Quantitative (177)Lu SPECT imaging using advanced correction algorithms in non-reference geometry.
D'Arienzo, M; Cozzella, M L; Fazio, A; De Felice, P; Iaccarino, G; D'Andrea, M; Ungania, S; Cazzato, M; Schmidt, K; Kimiaei, S; Strigari, L
2016-12-01
Peptide receptor therapy with (177)Lu-labelled somatostatin analogues is a promising tool in the management of patients with inoperable or metastasized neuroendocrine tumours. The aim of this work was to perform accurate activity quantification of (177)Lu in complex anthropomorphic geometry using advanced correction algorithms. Acquisitions were performed at the higher (177)Lu photopeak (208 keV) using a Philips IRIX gamma camera provided with medium-energy collimators. System calibration was performed using a 16 mL Jaszczak sphere surrounded by non-radioactive water. Attenuation correction was performed using μ-maps derived from CT data, while scatter and septal penetration corrections were performed using the transmission-dependent convolution-subtraction method. SPECT acquisitions were finally corrected for dead time and partial volume effects. Image analysis was performed using the commercial QSPECT software. The quantitative SPECT approach was validated on an anthropomorphic phantom provided with a home-made insert simulating a hepatic lesion. Quantitative accuracy was studied using three tumour-to-background activity concentration ratios (6:1, 9:1, 14:1). For all acquisitions, the recovered total activity was within 12% of the calibrated activity both in the background region and in the tumour. Using a 6:1 tumour-to-background ratio the recovered total activity was within 2% in the tumour and within 5% in the background. Partial volume effects, if not properly accounted for, can lead to significant activity underestimations in clinical conditions. In conclusion, accurate activity quantification of (177)Lu can be obtained if activity measurements are performed with equipment traceable to primary standards, advanced correction algorithms are used, and acquisitions are performed at the 208 keV photopeak using medium-energy collimators.
Estimating seabed scattering mechanisms via Bayesian model selection.
Steininger, Gavin; Dosso, Stan E; Holland, Charles W; Dettmer, Jan
2014-10-01
A quantitative inversion procedure is developed and applied to determine the dominant scattering mechanism (surface roughness and/or volume scattering) from seabed scattering-strength data. The classification system is based on trans-dimensional Bayesian inversion with the deviance information criterion used to select the dominant scattering mechanism. Scattering is modeled using first-order perturbation theory as due to one of three mechanisms: Interface scattering from a rough seafloor, volume scattering from a heterogeneous sediment layer, or mixed scattering combining both interface and volume scattering. The classification system is applied to six simulated test cases where it correctly identifies the true dominant scattering mechanism as having greater support from the data in five cases; the remaining case is indecisive. The approach is also applied to measured backscatter-strength data where volume scattering is determined as the dominant scattering mechanism. Comparison of inversion results with core data indicates the method yields both a reasonable volume heterogeneity size distribution and a good estimate of the sub-bottom depths at which scatterers occur.
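The deviance information criterion used in this record for selecting the dominant scattering mechanism is simple to compute from posterior samples. The sketch below is generic (a toy Gaussian likelihood stands in for the paper's roughness/volume scattering models); the function names are illustrative.

```python
import numpy as np

def dic(log_like, samples):
    """Deviance information criterion from posterior draws.
    DIC = mean deviance + p_D, where p_D (effective number of parameters)
    is the mean deviance minus the deviance at the posterior mean."""
    dev = np.array([-2.0 * log_like(t) for t in samples])       # deviance per draw
    p_d = dev.mean() - (-2.0 * log_like(samples.mean(axis=0)))  # effective parameter count
    return dev.mean() + p_d, p_d

# toy example: 1-D standard-normal "posterior" with a matching log-likelihood
rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=(4000, 1))
log_like = lambda t: -0.5 * float(np.sum(t ** 2))
dic_value, p_d = dic(log_like, samples)
```

In a model-selection setting like the one described above, the mechanism (interface, volume, or mixed) with the lowest DIC is taken as having the greatest support from the data.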
Proximity correction for electron beam lithography
NASA Astrophysics Data System (ADS)
Marrian, Christie R.; Chang, Steven; Peckerar, Martin C.
1996-09-01
As the critical dimensions required in mask making and direct write by electron beam lithography become ever smaller, correction for proximity effects becomes increasingly important. Furthermore, the problem is beset by the fact that only a positive energy dose can be applied with an electron beam. We discuss techniques such as chopping and dose shifting, which have been proposed to meet the positivity requirement. An alternative approach is to treat proximity correction as an optimization problem. Two such methods, local area dose correction and optimization using a regularizer proportional to the informational entropy of the solution, are compared. A notable feature of the regularized proximity correction is the ability to correct for forward scattering by the generation of a 'firewall' set back from the edge of a feature. As the forward scattering width increases, the firewall is set back farther from the feature edge. The regularized optimization algorithm is computationally time consuming using conventional techniques. However, the algorithm lends itself to a microelectronics integrated circuit coprocessor implementation, which could perform the optimization faster than even the fastest work stations. Scaling the circuit to larger number of pixels is best approached with a hybrid serial/parallel digital architecture that would correct for proximity effects over 10^8 pixels in about 1 h. This time can be reduced by simply adding additional coprocessors.
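The optimization view of proximity correction described in this record can be illustrated with a toy 1-D sketch. Everything here is an assumption for illustration: the double-Gaussian point-spread function with made-up widths, the feature geometry, and the solver, which is plain projected gradient descent enforcing the positivity requirement by clipping rather than the entropy-regularized algorithm of the paper.

```python
import numpy as np

def correct_dose(target, psf, iters=500, step=0.5):
    """Solve min ||conv(psf, d) - target||^2 subject to d >= 0
    by projected gradient descent (positivity enforced by clipping)."""
    d = target.copy()
    for _ in range(iters):
        r = np.convolve(d, psf, mode="same") - target   # exposure residual
        grad = np.convolve(r, psf[::-1], mode="same")   # adjoint of the convolution
        d = np.clip(d - step * grad, 0.0, None)         # gradient step + positivity
    return d

# toy double-Gaussian PSF: narrow forward-scatter core plus a wide backscatter tail
x = np.arange(-10, 11, dtype=float)
psf = np.exp(-(x / 1.0) ** 2) + 0.3 * np.exp(-(x / 6.0) ** 2)
psf /= psf.sum()

target = np.zeros(101)
target[40:61] = 1.0                      # desired exposure pattern
dose = correct_dose(target, psf)         # corrected write dose, nonnegative everywhere
```

Because the dose cannot go negative, the optimizer compensates for backscatter by reshaping the dose near feature edges instead, which is the regime where the paper's 'firewall' behavior appears.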
Proximity correction for e-beam lithography
NASA Astrophysics Data System (ADS)
Marrian, Christie R.; Chang, Steven; Peckerar, Martin C.
1995-12-01
As the critical dimensions required for masks and e-beam direct write become ever smaller, the correction of proximity effects becomes more necessary. Furthermore, the problem is beset by the fact that only a positive energy dose can be applied with the e-beam. We discuss here approaches such as chopping and dose shifting which have been proposed to meet the positivity requirement. An alternative approach is to treat proximity correction as an optimization problem. Two such methods, local area dose correction and optimization using a regularizer proportional to the informational entropy of the solution, are compared. A notable feature of the regularized proximity correction is the ability to correct for forward scattering by the generation of a 'firewall' set back from the edge of a feature. As the forward scattering width increases, the firewall is set back further from the feature edge. The regularized optimization algorithm is computationally time consuming using conventional techniques. However, the algorithm lends itself to a microelectronics integrated circuit coprocessor implementation which could perform the optimization much faster than even the fastest work stations. Scaling the circuit to larger number of pixels is best approached with a hybrid serial/parallel digital architecture which would correct for proximity effects over 10^8 pixels in about one hour. This time can be reduced by simply adding additional coprocessors.
NASA Technical Reports Server (NTRS)
Register, D. F.; Trajmar, S.; Srivastava, S. K.
1980-01-01
Absolute differential, integral, and momentum-transfer cross sections for electrons elastically scattered from helium are reported for the impact energy range of 5 to 200 eV. Angular distributions for elastically scattered electrons are measured in a crossed-beam geometry using a collimated, differentially pumped atomic-beam source which requires no effective-path-length correction. Below the first inelastic threshold the angular distributions were placed on an absolute scale by use of a phase-shift analysis. Above this threshold, the angular distributions from 10 to 140 deg were fitted using the phase-shift technique, and the resulting integral cross sections were normalized to a semiempirically derived integral elastic cross section. Depending on the impact energy, the data are estimated to be accurate to within 5 to 9%.
Iterative CT shading correction with no prior information
NASA Astrophysics Data System (ADS)
Wu, Pengwei; Sun, Xiaonan; Hu, Hongjie; Mao, Tingyu; Zhao, Wei; Sheng, Ke; Cheung, Alice A.; Niu, Tianye
2015-11-01
Shading artifacts in CT images are caused by scatter contamination, the beam-hardening effect and other non-ideal imaging conditions. The purpose of this study is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT images (e.g. cone-beam CT, low-kVp CT) without relying on prior information. The method is based on the general knowledge of the relatively uniform CT number distribution within one tissue component. The CT image is first segmented to construct a template image where each structure is filled with the same CT number of a specific tissue type. Then, by subtracting the ideal template from the CT image, the residual image from various error sources is generated. Since forward projection is an integration process, non-continuous shading artifacts in the image become continuous signals in a line integral. Thus, the residual image is forward projected and its line integral is low-pass filtered in order to estimate the error that causes shading artifacts. A compensation map is reconstructed from the filtered line integral error using a standard FDK algorithm and added back to the original image for shading correction. As the segmented image does not accurately depict a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. The proposed method is evaluated using cone-beam CT images of a Catphan©600 phantom and a pelvis patient, and low-kVp CT angiography images for carotid artery assessment. Compared with the CT image without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 30 HU and increases the spatial uniformity by a factor of 1.5. Low-contrast objects are faithfully retained after the proposed correction. An effective iterative algorithm for shading correction in CT imaging is proposed that is assisted only by general anatomical information, without relying on prior knowledge. The proposed method is thus practical.
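The segment/residual/low-pass/compensate loop described in this record can be sketched on a 1-D toy profile. This is a deliberately simplified illustration: the real method low-pass filters the forward-projected line integrals and reconstructs the compensation map with FDK, whereas here the low-pass filter is applied directly in the image domain; the tissue values, shading shape, and filter width are all made up.

```python
import numpy as np

def lowpass(x, sigma):
    """Wide Gaussian filter: an image-domain stand-in for the paper's
    low-pass filtering of the forward-projected residual."""
    k = np.arange(-4 * sigma, 4 * sigma + 1)
    g = np.exp(-0.5 * (k / sigma) ** 2)
    return np.convolve(x, g / g.sum(), mode="same")

def shading_correct(img, tissue_values, sigma=30.0, iters=20):
    """Iterate: segment to the nearest tissue value, take the residual,
    keep its low-frequency part as the shading estimate, and compensate."""
    corr = img.copy()
    for _ in range(iters):
        template = tissue_values[np.argmin(np.abs(corr[:, None] - tissue_values), axis=1)]
        corr = corr - lowpass(corr - template, sigma)   # remove estimated shading
    return corr

# toy 1-D profile: two tissue classes plus a slowly varying shading artifact
i = np.arange(400)
tissue = np.array([0.0, 1000.0])
truth = np.where((i > 100) & (i < 300), 1000.0, 0.0)
shaded = truth + 150.0 * np.sin(i / 400.0 * np.pi)   # low-frequency artifact
fixed = shading_correct(shaded, tissue)
```

Because the template is recomputed from the partially corrected image at each pass, the loop mirrors the paper's iteration-to-convergence behavior: the shading estimate improves as the segmentation improves.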
Iterative CT shading correction with no prior information.
Wu, Pengwei; Sun, Xiaonan; Hu, Hongjie; Mao, Tingyu; Zhao, Wei; Sheng, Ke; Cheung, Alice A; Niu, Tianye
2015-11-07
Shading artifacts in CT images are caused by scatter contamination, beam-hardening effect and other non-ideal imaging conditions. The purpose of this study is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT images (e.g. cone-beam CT, low-kVp CT) without relying on prior information. The method is based on the general knowledge of the relatively uniform CT number distribution in one tissue component. The CT image is first segmented to construct a template image where each structure is filled with the same CT number of a specific tissue type. Then, by subtracting the ideal template from the CT image, the residual image from various error sources is generated. Since forward projection is an integration process, non-continuous shading artifacts in the image become continuous signals in a line integral. Thus, the residual image is forward projected and its line integral is low-pass filtered in order to estimate the error that causes shading artifacts. A compensation map is reconstructed from the filtered line integral error using a standard FDK algorithm and added back to the original image for shading correction. As the segmented image does not accurately depict a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. The proposed method is evaluated using cone-beam CT images of a Catphan©600 phantom and a pelvis patient, and low-kVp CT angiography images for carotid artery assessment. Compared with the CT image without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 30 HU and increases the spatial uniformity by a factor of 1.5. Low-contrast objects are faithfully retained after the proposed correction. An effective iterative algorithm for shading correction in CT imaging is proposed that is assisted only by general anatomical information, without relying on prior knowledge. The proposed method is thus practical
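The correction loop described above can be sketched in a few lines. The sketch below is a deliberately simplified, image-domain version: the forward-projection/filtering/FDK round trip is replaced by a plain low-pass filter, the segmentation is a nearest-tissue-value lookup, and all numbers are toy values rather than Hounsfield units.

```python
import numpy as np

def segment_template(img, tissue_values=(0.0, 1.0)):
    # Nearest-value "segmentation": each pixel gets the ideal number of its tissue.
    vals = np.asarray(tissue_values)
    return vals[np.argmin(np.abs(img[..., None] - vals), axis=-1)]

def low_pass(img, k=15):
    # Moving-average smoothing stands in for the projection-domain filtering.
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 0, img)
    return np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 1, out)

def shading_correct(img, n_iter=5):
    corrected = img.copy()
    for _ in range(n_iter):
        template = segment_template(corrected)
        residual = corrected - template      # shading plus segmentation error
        shading = low_pass(residual)         # keep only the low-frequency error
        corrected = corrected - shading      # compensate and iterate
    return corrected

# Synthetic test: uniform square object (value 1.0) plus a smooth shading field.
x = np.linspace(-1, 1, 64)
obj = (np.abs(x)[None, :] < 0.6) & (np.abs(x)[:, None] < 0.6)
truth = obj.astype(float)
shading = 0.3 * np.outer(np.cos(0.5 * np.pi * x), np.ones(64))
corrected = shading_correct(truth + shading)
print(np.abs(corrected[obj] - 1.0).mean(),
      np.abs((truth + shading)[obj] - 1.0).mean())
```

Within the object, the residual error after correction drops well below the uncorrected shading level, mirroring the 200 HU → 30 HU improvement reported above.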
Lu, Xiaomei; Jiang, Yuesong; Zhang, Xuguo; Lu, Xiaoxia; He, Yuntao
2009-05-25
A new method is proposed to analyze the effects of multiple scattering on simultaneously detected lidar returns for ground-based and space-borne lidars, and it is applied to a Monte Carlo-based simulation to test its feasibility. Experimental evidence of multiple scattering influences on both ground-based and space-borne lidar returns is presented. Monte Carlo-based evaluations of the multiple scattering parameters for the counter-looking lidar returns are obtained separately in order to correct the effective values of the backscattering and extinction coefficients. Results show that, for a typical cirrus cloud, when the coefficients are retrieved by the Counter-propagating Elastic Signals Combination (CESC) technique with multiple scattering neglected, multiple scattering can lead to an underestimation of the extinction coefficient by as much as 70% and an overestimation of the backscattering coefficient by nearly 10%. By the new method, in which the multiple scattering effects are treated differently for the ground-based and space-borne lidar returns, the extinction and backscattering coefficients can be obtained more accurately.
Hanson, J.D.
1994-11-03
Error correction coils are planned for the TPX (Tokamak Plasma Experiment) in order to avoid error field induced locked modes and disruption. The FT (Fix Tokamak) code is used to evaluate the ability of these correction coils to remove islands caused by symmetry breaking magnetic field errors. The proposed correction coils are capable of correcting a variety of error fields.
Survey of background scattering from materials found in small-angle neutron scattering
Barker, J. G.; Mildner, D. F. R.
2015-01-01
Measurements and calculations of beam attenuation and background scattering for common materials placed in a neutron beam are presented over the temperature range of 300–700 K. Time-of-flight (TOF) measurements have also been made, to determine the fraction of the background that is either inelastic or quasi-elastic scattering as measured with a 3He detector. Other background sources considered include double Bragg diffraction from windows or samples, scattering from gases, and phonon scattering from solids. Background from the residual air in detector vacuum vessels and scattering from the 3He detector dome are presented. The thickness dependence of the multiple scattering correction for forward scattering from water is calculated. Inelastic phonon background scattering at small angles for crystalline solids is both modeled and compared with measurements. Methods of maximizing the signal-to-noise ratio by material selection, choice of sample thickness and wavelength, removal of inelastic background by TOF or Be filters, and removal of spin-flip scattering with polarized beam analysis are discussed. PMID:26306088
Filtering and luminance correction for aged photographs
NASA Astrophysics Data System (ADS)
Restrepo, Alfredo; Ramponi, Giovanni
2008-02-01
We virtually restore faded black and white photographic prints by decomposing the image into a smooth component that contains edges and smoothed homogeneous regions, and a rough component that may include grain noise but also fine detail. The decomposition into smooth and rough components is achieved using a rational filter. Two approaches are considered: in one, the smooth component is histogram-stretched and then gamma corrected before being added back to a homomorphically filtered version of the rough component; in the other, the image is initially gamma corrected and shifted towards white. Each approach improves on the previously separately explored techniques of gamma correction alone, and of stretching the smooth component together with homomorphic filtering of the rough component. After characterizing the image with the help of the scatter plot of a 2D local statistic of the type (local intensity, local contrast), namely (local average, local standard deviation), the effects of gamma correction are studied through their effects on the scatter plot, on the assumption that the quality of the image is related to the distribution of data on the scatter plot. The correlation coefficient between the local average and the local standard deviation, together with the global average of the image, also plays an important descriptor role.
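The first restoration pipeline (smooth/rough split, stretch and gamma correction of the smooth part, boost of the rough part) can be sketched as below. A moving-average filter stands in for the rational filter and a simple gain stands in for the homomorphic filtering of the rough component; the gamma and gain values are illustrative assumptions, not the paper's tuned parameters.

```python
import numpy as np

def smooth_rough_split(img, k=9):
    # Moving-average smoothing stands in for the rational (edge-preserving) filter.
    kernel = np.ones(k) / k
    s = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 0, img)
    s = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 1, s)
    return s, img - s

def stretch(x):
    # Full-range histogram stretch to [0, 1].
    return (x - x.min()) / (x.max() - x.min() + 1e-12)

def restore(img, gamma=0.8, rough_gain=1.5):
    smooth, rough = smooth_rough_split(img)
    smooth = stretch(smooth) ** gamma     # stretch, then gamma-correct
    rough = rough_gain * rough            # crude stand-in for the homomorphic boost
    return np.clip(smooth + rough, 0.0, 1.0)

# A "faded print": contrast compressed into [0.4, 0.7].
rng = np.random.default_rng(0)
scene = stretch(rng.random((32, 32)))
faded = 0.3 * scene + 0.4
out = restore(faded)
print(out.std(), faded.std())   # restored image has higher contrast
```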
Thermodynamics of Error Correction
NASA Astrophysics Data System (ADS)
Sartori, Pablo; Pigolotti, Simone
2015-10-01
Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.
Further corrections to the theory of cosmological recombination
NASA Technical Reports Server (NTRS)
Krolik, Julian H.
1990-01-01
Krolik (1989) pointed out that frequency redistribution due to scattering is more important than cosmological expansion in determining the Ly-alpha frequency profile during cosmological recombination, and that its effects substantially modify the rate of recombination. Although the first statement is true, the second statement is not: a basic symmetry of photon scattering leads to identical cancellations which almost completely erase the effects of both coherent and incoherent scattering. Only a small correction due to atomic recoil alters the line profile from the prediction of pure cosmological expansion, so that the pace of cosmological recombination can be well approximated by ignoring Ly-alpha scattering.
NASA Astrophysics Data System (ADS)
Kopparla, P.; Natraj, V.; Shia, R. L.; Spurr, R. J. D.; Crisp, D.; Yung, Y. L.
2015-12-01
Radiative transfer (RT) computations form the engine of atmospheric retrieval codes. However, full treatment of RT processes is computationally expensive, prompting usage of two-stream approximations in current exoplanetary atmospheric retrieval codes [Line et al., 2013]. Natraj et al. [2005, 2010] and Spurr and Natraj [2013] demonstrated the ability of a technique using principal component analysis (PCA) to speed up RT computations. In the PCA method for RT performance enhancement, empirical orthogonal functions are developed for binned sets of inherent optical properties that possess some redundancy; costly multiple-scattering RT calculations are only done for those few optical states corresponding to the most important principal components, and correction factors are applied to approximate radiation fields. Kopparla et al. [2015, in preparation] extended the PCA method to a broadband spectral region from the ultraviolet to the shortwave infrared (0.3-3 micron), accounting for major gas absorptions in this region. Here, we apply the PCA method to some typical (exo-)planetary retrieval problems. Comparisons between the new model, called the Universal Principal Component Analysis Radiative Transfer (UPCART) model, two-stream models and line-by-line RT models are performed for spectral radiances, spectral fluxes and broadband fluxes. Each of these is calculated at the top of the atmosphere for several scenarios with varying aerosol types, extinction and scattering optical depth profiles, and stellar and viewing geometries. We demonstrate that very accurate radiance and flux estimates can be obtained, with better than 1% accuracy in all spectral regions and better than 0.1% in most cases, as compared to a numerically exact line-by-line RT model. The accuracy is enhanced when the results are convolved to typical instrument resolutions. The operational speed and accuracy of UPCART can be further improved by optimizing binning schemes and parallelizing the codes, work
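The core PCA trick described above — run the expensive model only at the bin mean and at one-sigma EOF perturbations, then correct the cheap model everywhere in the bin — can be illustrated with toy stand-ins for the two RT solvers. The functional forms below are assumptions for illustration, not the actual multiple-scattering or two-stream codes.

```python
import numpy as np

def expensive_rt(x):
    # Stand-in for a costly multiple-scattering calculation (assumed form).
    return np.exp(-1.3 * x[0]) * (1.0 + 0.2 * x[1])

def cheap_rt(x):
    # Stand-in for a fast two-stream-style approximation.
    return np.exp(-1.0 * x[0])

rng = np.random.default_rng(1)
# One "bin" of similar optical states (e.g. optical depth, scatterer amount).
X = rng.normal([0.5, 0.3], [0.05, 0.05], size=(500, 2))

mu = X.mean(axis=0)
centered = X - mu
_, _, eofs = np.linalg.svd(centered, full_matrices=False)   # rows = EOFs
scores = centered @ eofs.T
sig = scores.std(axis=0)

# Expensive model is run only at the mean and at +/- one-sigma EOF perturbations.
ln_ratio = lambda x: np.log(expensive_rt(x) / cheap_rt(x))
c0 = ln_ratio(mu)
grad = np.array([(ln_ratio(mu + sig[k] * eofs[k]) -
                  ln_ratio(mu - sig[k] * eofs[k])) / (2 * sig[k])
                 for k in range(len(sig))])

# First-order correction factor applied to the cheap model for every state.
approx = np.array([cheap_rt(x) for x in X]) * np.exp(c0 + scores @ grad)
exact = np.array([expensive_rt(x) for x in X])
print(np.max(np.abs(approx / exact - 1.0)))   # small relative error
```

Three expensive evaluations per EOF-corrected bin replace five hundred, which is the source of the speedup the abstract reports.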
Differential Light Scattering from Spherical Mammalian Cells
Brunsting, Albert; Mullaney, Paul F.
1974-01-01
The differential scattered light intensity patterns of spherical mammalian cells were measured with a new photometer which uses high-speed film as the light detector. The scattering objects, interphase and mitotic Chinese hamster ovary cells and HeLa cells, were modeled as (a) a coated sphere, accounting for nucleus and cytoplasm, and (b) a homogeneous sphere when no cellular nucleus was present. The refractive indices and size distribution of the cells were measured for an accurate comparison of the theoretical model with the light-scattering measurements. The light scattered beyond the forward direction is found to contain information about internal cellular morphology, provided the size distribution of the cells is not too broad. PMID:4134589
On numerically accurate finite element solutions in the fully plastic range
NASA Technical Reports Server (NTRS)
Nagtegaal, J. C.; Parks, D. M.; Rice, J. R.
1974-01-01
A general criterion for testing a mesh with topologically similar repeat units is given, and the analysis shows that only a few conventional element types and arrangements are, or can be made, suitable for computations in the fully plastic range. Further, a new variational principle, which can easily and simply be incorporated into an existing finite element program, is presented. This allows accurate computations to be made even for element designs that would not normally be suitable. Numerical results are given for three plane strain problems, namely pure bending of a beam, a thick-walled tube under pressure, and a deep double edge cracked tensile specimen. The effects of various element designs and of the new variational procedure are illustrated. Elastic-plastic computations at finite strain are discussed.
Isospin odd πK scattering length
NASA Astrophysics Data System (ADS)
Schweizer, J.
2005-10-01
We make use of the chiral two-loop representation of the πK scattering amplitude [J. Bijnens, P. Dhonte, P. Talavera, JHEP 0405 (2004) 036] to investigate the isospin odd scattering length at next-to-next-to-leading order in the SU(3) expansion. This scattering length is protected against contributions of m_s in the chiral expansion, in the sense that the corrections to the current algebra result are of order M_π^2. In view of the planned lifetime measurement on πK atoms at CERN it is important to understand the size of these corrections.
MR image intensity inhomogeneity correction
NASA Astrophysics Data System (ADS)
Vişan Pungǎ, Mirela; Moldovanu, Simona; Moraru, Luminita
2015-01-01
MR technology is one of the best and most reliable ways of studying the brain. Its main drawback is the so-called intensity inhomogeneity or bias field, which impairs visual inspection and the medical proceedings for diagnosis and strongly affects quantitative image analysis. Noise is yet another artifact in medical images. In order to accurately and effectively restore the original signal, reference is hereof made to filtering, bias correction and quantitative analysis of correction. In this report, two denoising algorithms are used: (i) Basis rotation fields of experts (BRFoE) and (ii) Anisotropic Diffusion (considering Gaussian noise, the Perona-Malik and Tukey's biweight functions, and the standard deviation of the noise of the input image).
ERIC Educational Resources Information Center
di Francia, Giuliano Toraldo
1973-01-01
The art of deriving information about an object from the radiation it scatters was once limited to visible light. Now, owing to new techniques, much of modern physical science research utilizes radiation scattering. (DF)
Accurate equilibrium structures for piperidine and cyclohexane.
Demaison, Jean; Craig, Norman C; Groner, Peter; Écija, Patricia; Cocinero, Emilio J; Lesarri, Alberto; Rudolph, Heinz Dieter
2015-03-05
Extended and improved microwave (MW) measurements are reported for the isotopologues of piperidine. New ground state (GS) rotational constants are fitted to MW transitions with quartic centrifugal distortion constants taken from ab initio calculations. Predicate values for the geometric parameters of piperidine and cyclohexane are found from a high level of ab initio theory including adjustments for basis set dependence and for correlation of the core electrons. Equilibrium rotational constants are obtained from GS rotational constants corrected for vibration-rotation interactions and electronic contributions. Equilibrium structures for piperidine and cyclohexane are fitted by the mixed estimation method. In this method, structural parameters are fitted concurrently to predicate parameters (with appropriate uncertainties) and moments of inertia (with uncertainties). The new structures are regarded as being accurate to 0.001 Å and 0.2°. Comparisons are made between bond parameters in equatorial piperidine and cyclohexane. Another interesting result of this study is that a structure determination is an effective way to check the accuracy of the ground state experimental rotational constants.
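The mixed estimation method described above amounts to a weighted least-squares fit in which predicate equations are stacked beneath the data equations, each block weighted by its uncertainty. A minimal numeric sketch (toy linearized design matrix and uncertainties, not the piperidine data):

```python
import numpy as np

# Data block: a linear(ized) model A x = d relating structural parameters x to
# observed moments of inertia d, with uncertainties sigma_d.
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])          # toy design matrix (assumed linearization)
d = np.array([5.02, 4.98])          # "measured" moments
sigma_d = np.array([0.05, 0.05])

# Predicate block: ab initio parameter values x_p with uncertainties sigma_p.
x_p = np.array([1.0, 2.0])
sigma_p = np.array([0.1, 0.1])

# Stack both blocks, weighted by 1/sigma, and solve concurrently.
rows = np.vstack([A / sigma_d[:, None],
                  np.eye(2) / sigma_p[:, None]])
rhs = np.concatenate([d / sigma_d, x_p / sigma_p])
x_fit, *_ = np.linalg.lstsq(rows, rhs, rcond=None)
print(x_fit)   # compromise between the data fit and the predicates
```

The predicates regularize parameters the moments constrain only weakly, which is why the method can resolve full equilibrium structures from limited rotational data.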
Accurate, reproducible measurement of blood pressure.
Campbell, N R; Chockalingam, A; Fodor, J G; McKay, D W
1990-01-01
The diagnosis of mild hypertension and the treatment of hypertension require accurate measurement of blood pressure. Blood pressure readings are altered by various factors that influence the patient, the techniques used and the accuracy of the sphygmomanometer. The variability of readings can be reduced if informed patients prepare in advance by emptying their bladder and bowel, by avoiding over-the-counter vasoactive drugs the day of measurement and by avoiding exposure to cold, caffeine consumption, smoking and physical exertion within half an hour before measurement. The use of standardized techniques to measure blood pressure will help to avoid large systematic errors. Poor technique can account for differences in readings of more than 15 mm Hg and ultimately misdiagnosis. Most of the recommended procedures are simple and, when routinely incorporated into clinical practice, require little additional time. The equipment must be appropriate and in good condition. Physicians should have a suitable selection of cuff sizes readily available; the use of the correct cuff size is essential to minimize systematic errors in blood pressure measurement. Semiannual calibration of aneroid sphygmomanometers and annual inspection of mercury sphygmomanometers and blood pressure cuffs are recommended. We review the methods recommended for measuring blood pressure and discuss the factors known to produce large differences in blood pressure readings. PMID:2192791
Dense Plasma X-ray Scattering: Methods and Applications
Glenzer, S H; Lee, H J; Davis, P; Doppner, T; Falcone, R W; Fortmann, C; Hammel, B A; Kritcher, A L; Landen, O L; Lee, R W; Munro, D H; Redmer, R; Weber, S
2009-08-19
We have developed accurate x-ray scattering techniques to measure the physical properties of dense plasmas. Temperature and density are inferred from inelastic x-ray scattering data whose interpretation is model-independent for low to moderately coupled systems. Specifically, the spectral shape of the non-collective Compton scattering spectrum directly reflects the electron velocity distribution. In partially Fermi degenerate systems that have been investigated experimentally in laser shock-compressed beryllium, the Compton scattering spectrum provides the Fermi energy and hence the electron density. We show that forward scattering spectra that observe collective plasmon oscillations yield densities in agreement with Compton scattering. In addition, electron temperatures inferred from the dispersion of the plasmon feature are consistent with the ion temperature sensitive elastic scattering feature. Hence, theoretical models of the static ion-ion structure factor and consequently the equation of state of dense matter can be directly tested.
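The density inference mentioned above rests on the free-electron-gas relation E_F = ħ²(3π²n_e)^(2/3)/(2m_e): once the Compton spectrum yields the Fermi energy, inverting this gives the electron density. A short sketch of the inversion (the 30 eV input is an illustrative value, not a measurement from this work):

```python
import math

HBAR = 1.054571817e-34    # J s
M_E = 9.1093837015e-31    # kg
EV = 1.602176634e-19      # J

def electron_density(e_f_ev):
    # Invert E_F = hbar^2 (3 pi^2 n_e)^(2/3) / (2 m_e) for n_e.
    e_f = e_f_ev * EV
    return (2.0 * M_E * e_f / HBAR**2) ** 1.5 / (3.0 * math.pi**2)

def fermi_energy_ev(n_e):
    # Forward relation, used here as a round-trip check of the inversion.
    return HBAR**2 * (3.0 * math.pi**2 * n_e) ** (2.0 / 3.0) / (2.0 * M_E) / EV

n = electron_density(30.0)      # illustrative 30 eV Fermi energy
print(f"n_e = {n:.3e} m^-3")
```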
The method of Gaussian weighted trajectories. III. An adiabaticity correction proposal
Bonnet, L.
2008-01-28
The addition of an adiabaticity correction (AC) to the Gaussian weighted trajectory (GWT) method and its normalized version (GWT-N) is suggested. This correction simply consists in omitting vibrationally adiabatic nonreactive trajectories in the calculations of final attributes. For triatomic exchange reactions, these trajectories satisfy the criterion Ω not much larger than ℏ, where Ω is a vibrational action defined by Ω = ∫_{-∞}^{+∞} dt (pṙ − p₀ṙ₀), r being the reagent diatom bond length, p its conjugate momentum, and r₀ and p₀ the corresponding variables for the unperturbed diatom (Ω/ℏ bears some analogy with the semiclassical elastic scattering phase shift). The resulting GWT-AC and GWT-ACN methods are applied to the recently studied H⁺+H₂ and H⁺+D₂ reactions and the agreement between their predictions and those of exact quantum scattering calculations is found to be much better than for the initial GWT and GWT-N methods. The GWT-AC method, however, appears to be the most accurate one for the processes considered, in particular, the H⁺+D₂ reaction.
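The AC criterion can be sketched directly: accumulate the vibrational action Ω along a trajectory and drop nonreactive trajectories whose |Ω| is not much larger than ℏ. The toy oscillator below (reduced units with ℏ = 1 and a threshold of one ℏ, both assumptions) is an illustration, not the H⁺+H₂ dynamics:

```python
import numpy as np

HBAR = 1.0  # reduced units

def vibrational_action(t, p, r, p0, r0):
    # Omega = integral dt (p*rdot - p0*r0dot), trapezoid rule on a finite window.
    f = p * np.gradient(r, t) - p0 * np.gradient(r0, t)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

def keep_trajectory(reactive, omega, threshold=1.0):
    # GWT-AC rule: omit nonreactive trajectories that stayed vibrationally
    # adiabatic, i.e. |Omega| not much larger than hbar.
    return reactive or abs(omega) > threshold * HBAR

# Toy diatom (m = omega = 1): unperturbed oscillator vs a vibrationally excited copy.
t = np.linspace(0.0, 20.0, 4001)
r0, p0 = np.sin(t), np.cos(t)
w_adiab = vibrational_action(t, np.cos(t), np.sin(t), p0, r0)          # unchanged
w_excit = vibrational_action(t, 1.5 * np.cos(t), 1.5 * np.sin(t), p0, r0)
print(keep_trajectory(False, w_adiab), keep_trajectory(False, w_excit))
```

The adiabatic trajectory accumulates essentially zero action and is dropped; the excited one accumulates many ℏ and is kept.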
NASA Technical Reports Server (NTRS)
Ricks, Douglas W.
1993-01-01
There are a number of sources of scattering in binary optics: etch depth errors, line edge errors, quantization errors, roughness, and the binary approximation to the ideal surface. These sources of scattering can be systematic (deterministic) or random. In this paper, scattering formulas for both systematic and random errors are derived using Fourier optics. These formulas can be used to explain the results of scattering measurements and computer simulations.
Accurate and precise zinc isotope ratio measurements in urban aerosols.
Gioia, Simone; Weiss, Dominik; Coles, Barry; Arnold, Tim; Babinski, Marly
2008-12-15
We developed an analytical method and constrained procedural boundary conditions that enable accurate and precise Zn isotope ratio measurements in urban aerosols. We also demonstrate the potential of this new isotope system for air pollutant source tracing. The procedural blank is around 5 ng and significantly lower than published methods due to a tailored ion chromatographic separation. Accurate mass bias correction using external correction with Cu is limited to Zn sample content of approximately 50 ng due to the combined effect of blank contribution of Cu and Zn from the ion exchange procedure and the need to maintain a Cu/Zn ratio of approximately 1. Mass bias is corrected for by applying the common analyte internal standardization method approach. Comparison with other mass bias correction methods demonstrates the accuracy of the method. The average precision of delta(66)Zn determinations in aerosols is around 0.05 per thousand per atomic mass unit. The method was tested on aerosols collected in Sao Paulo City, Brazil. The measurements reveal significant variations in delta(66)Zn(Imperial) ranging between -0.96 and -0.37 per thousand in coarse and between -1.04 and 0.02 per thousand in fine particulate matter. This variability suggests that Zn isotopic compositions distinguish atmospheric sources. The isotopically light signature suggests traffic as the main source. We present further delta(66)Zn(Imperial) data for the standard reference material NIST SRM 2783 (delta(66)Zn(Imperial) = 0.26 +/- 0.10 per thousand).
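External mass bias correction with Cu, as discussed above, is commonly implemented with the exponential law R_true = R_meas·(m_heavy/m_light)^β. A sketch using the certified Cu ratio of NIST SRM 976 (⁶⁵Cu/⁶³Cu ≈ 0.4456) and hypothetical measured ratios — the instrument readings below are assumptions for illustration:

```python
import math

# Exponential mass-bias law: R_true = R_meas * (m_heavy / m_light)**beta.
M_CU63, M_CU65 = 62.9296, 64.9278   # atomic masses, u
M_ZN64, M_ZN66 = 63.9291, 65.9260

def beta_from_standard(r_meas, r_true, m_light, m_heavy):
    # Mass-bias exponent from a standard of certified isotope ratio.
    return math.log(r_true / r_meas) / math.log(m_heavy / m_light)

def correct_ratio(r_meas, beta, m_light, m_heavy):
    return r_meas * (m_heavy / m_light) ** beta

R_CU_TRUE = 0.4456      # certified 65Cu/63Cu of NIST SRM 976
r_cu_meas = 0.4420      # hypothetical instrument reading
r_zn_meas = 0.2750      # hypothetical measured 66Zn/64Zn

beta = beta_from_standard(r_cu_meas, R_CU_TRUE, M_CU63, M_CU65)
r_zn_corr = correct_ratio(r_zn_meas, beta, M_ZN64, M_ZN66)
print(f"beta = {beta:.4f}, corrected 66Zn/64Zn = {r_zn_corr:.5f}")
```

This exponential-law correction assumes Cu and Zn fractionate with the same β, which is exactly the approximation whose limits the abstract discusses.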
Erguel, Ozguer; Guerel, Levent
2008-12-01
We present a novel stabilization procedure for accurate surface formulations of electromagnetic scattering problems involving three-dimensional dielectric objects with arbitrarily low contrasts. Conventional surface integral equations provide inaccurate results for the scattered fields when the contrast of the object is low, i.e., when the electromagnetic material parameters of the scatterer and the host medium are close to each other. We propose a stabilization procedure involving the extraction of nonradiating currents and rearrangement of the right-hand side of the equations using fictitious incident fields. Then, only the radiating currents are solved to calculate the scattered fields accurately. This technique can easily be applied to the existing implementations of conventional formulations, it requires negligible extra computational cost, and it is also appropriate for the solution of large problems with the multilevel fast multipole algorithm. We show that the stabilization leads to robust formulations that are valid even for the solutions of extremely low-contrast objects.
Accurate Fiber Length Measurement Using Time-of-Flight Technique
NASA Astrophysics Data System (ADS)
Terra, Osama; Hussein, Hatem
2016-06-01
Fiber artifacts of very well-measured length are required for the calibration of optical time domain reflectometers (OTDR). In this paper, accurate length measurements of different fiber lengths are performed using the time-of-flight technique. A setup is proposed to measure lengths from 1 to 40 km accurately at 1,550 and 1,310 nm using a high-speed electro-optic modulator and photodetector. This setup offers traceability to the SI unit of time, the second (and hence to the meter by definition), by locking the time interval counter to the Global Positioning System (GPS)-disciplined quartz oscillator. Additionally, the length of a recirculating loop artifact is measured and compared with the measurement made for the same fiber by the National Physical Laboratory of the United Kingdom (NPL). Finally, a method is proposed to relatively correct the fiber refractive index to allow accurate fiber length measurement.
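The time-of-flight length conversion itself is one line, L = c·t/n_g. A sketch with an assumed SMF-28-like group index (a typical 1550 nm value that would have to be calibrated for the actual fiber, as the abstract's refractive-index correction implies):

```python
C = 299_792_458.0   # speed of light in vacuum, m/s (exact by SI definition)

def fiber_length(delay_s, n_group=1.4682):
    # One-way time-of-flight length: L = c * t / n_g. The default group index
    # is an assumed SMF-28-like value at 1550 nm, not a calibrated constant.
    return C * delay_s / n_group

delay = 10_000.0 * 1.4682 / C      # delay a 10 km fiber would produce (~49 us)
print(f"{fiber_length(delay):.3f} m")
```

The same relation shows why the group index dominates the error budget: a 1e-4 relative error in n_g maps directly to 1 m over 10 km.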
Accurate and Inaccurate Conceptions about Osmosis That Accompanied Meaningful Problem Solving.
ERIC Educational Resources Information Center
Zuckerman, June Trop
This study focused on the knowledge of six outstanding science students who solved an osmosis problem meaningfully. That is, they used appropriate and substantially accurate conceptual knowledge to generate an answer. Three generated a correct answer; three, an incorrect answer. This paper identifies both the accurate and inaccurate conceptions…
Li, Z. P.; Hillhouse, G. C.; Meng, J.
2008-07-15
We present the first study to examine the validity of the relativistic impulse approximation (RIA) for describing elastic proton-nucleus scattering at incident laboratory kinetic energies lower than 200 MeV. For simplicity we choose a ²⁰⁸Pb target, which is a spin-saturated spherical nucleus for which reliable nuclear structure models exist. Microscopic scalar and vector optical potentials are generated by folding invariant scalar and vector scattering nucleon-nucleon (NN) amplitudes, based on our recently developed relativistic meson-exchange model, with Lorentz scalar and vector densities resulting from the accurately calibrated PK1 relativistic mean field model of nuclear structure. It is seen that phenomenological Pauli blocking (PB) effects and density-dependent corrections to σN and ωN meson-nucleon coupling constants modify the RIA microscopic scalar and vector optical potentials so as to provide a consistent and quantitative description of all elastic scattering observables, namely, total reaction cross sections, differential cross sections, analyzing powers and spin rotation functions. In particular, the effect of PB becomes more significant at energies lower than 200 MeV, whereas phenomenological density-dependent corrections to the NN interaction also play an increasingly important role at energies lower than 100 MeV.
Smith, Peter D.; Claytor, Thomas N.; Berry, Phillip C.; Hills, Charles R.
2010-10-12
An x-ray detector is disclosed that has had all unnecessary material removed from the x-ray beam path, and all of the remaining material in the beam path made as light and as low in atomic number as possible. The resulting detector is essentially transparent to x-rays and, thus, has greatly reduced internal scatter. The result of this is that x-ray attenuation data measured for the object under examination are much more accurate and have an increased dynamic range. The benefits of this improvement are that beam hardening corrections can be made accurately, that computed tomography reconstructions can be used for quantitative determination of material properties including density and atomic number, and that lower exposures may be possible as a result of the increased dynamic range.
Use of the Wigner representation in scattering problems
NASA Technical Reports Server (NTRS)
Bemler, E. A.
1975-01-01
The basic equations of quantum scattering were translated into the Wigner representation, putting quantum mechanics in the form of a stochastic process in phase space, with real-valued probability distributions and source functions. The interpretative picture associated with this representation is developed and stressed, and results used in applications published elsewhere are derived. The form of the integral equation for scattering, as well as its multiple scattering expansion in this representation, is derived. Quantum corrections to classical propagators are briefly discussed. The basic approximation used in the Monte-Carlo method is derived in a fashion which allows for future refinement and which includes bound state production. Finally, as a simple illustration of some of the formalism, scattering by a bound two-body system is treated. Simple expressions for single and double scattering contributions to total and differential cross-sections, as well as for all necessary shadow corrections, are obtained.
Mapping methods for computationally efficient and accurate structural reliability
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1992-01-01
Mapping methods are developed to improve the accuracy and efficiency of probabilistic structural analyses with coarse finite element meshes. The mapping methods consist of: (1) deterministic structural analyses with fine (convergent) finite element meshes, (2) probabilistic structural analyses with coarse finite element meshes, (3) the relationship between the probabilistic structural responses from the coarse and fine finite element meshes, and (4) a probabilistic mapping. The results show that the scatter of the probabilistic structural responses and structural reliability can be accurately predicted using a coarse finite element model with proper mapping methods. Therefore, large structures can be analyzed probabilistically using finite element methods.
ERIC Educational Resources Information Center
McCane-Bowling, Sara J.; Strait, Andrea D.; Guess, Pamela E.; Wiedo, Jennifer R.; Muncie, Eric
2014-01-01
This study examined the predictive utility of five formative reading measures: words correct per minute, number of comprehension questions correct, reading comprehension rate, number of maze correct responses, and maze accurate response rate (MARR). Broad Reading cluster scores obtained via the Woodcock-Johnson III (WJ III) Tests of Achievement…
Atmospheric monitoring in MAGIC and data corrections
NASA Astrophysics Data System (ADS)
Fruck, Christian; Gaug, Markus
2015-03-01
A method for analyzing the returns of a custom-made "micro"-LIDAR system, operated alongside the two MAGIC telescopes, is presented. The method calculates the transmission through the atmospheric boundary layer, as well as through thin cloud layers, by applying exponential fits to regions of the backscattering signal that are dominated by Rayleigh scattering. Making this real-time transmission information available in the MAGIC data stream allows atmospheric corrections to be applied later in the analysis. Such corrections extend the effective observation time of MAGIC by including data taken under adverse atmospheric conditions. In the future they will help reduce the systematic uncertainties of energy and flux.
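The exponential-fit step, extracting extinction and hence transmission from the Rayleigh-dominated part of a range-corrected lidar return, can be sketched as follows. The signal model and all constants are synthetic illustrations, not MAGIC data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic single-scattering lidar return (hypothetical constants):
# P(r) = C / r^2 * exp(-2 * alpha * r) in a Rayleigh-dominated region.
alpha_true = 0.05                        # extinction coefficient [1/km]
r = np.linspace(1.0, 8.0, 200)           # range gates [km]
signal = 3.0e4 / r**2 * np.exp(-2 * alpha_true * r)
signal *= 1 + 0.01 * rng.standard_normal(r.size)   # detector noise

# Exponential fit: ln(P * r^2) is linear in r with slope -2*alpha.
log_rc = np.log(signal * r**2)
slope, _ = np.polyfit(r, log_rc, 1)
alpha_fit = -slope / 2

# Two-way transmission through the fitted layer (boundary-layer proxy).
T = np.exp(-alpha_fit * (r[-1] - r[0]))
print(round(alpha_fit, 3), round(T, 3))
```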
Grimmer, Rainer; Kachelriess, Marc
2011-04-15
Purpose: Scatter and beam hardening are prominent artifacts in x-ray CT. Currently, there is no precorrection method that inherently accounts for tube voltage modulation and shaped prefiltration. Methods: A method for self-calibration based on binary tomography of homogeneous objects, proposed by B. Li et al. ["A novel beam hardening correction method for computed tomography," in Proceedings of the IEEE/ICME International Conference on Complex Medical Engineering CME 2007, pp. 891-895, 23-27 May 2007], has been generalized in order to use this information to preprocess scans of other, nonbinary objects, e.g., to reduce artifacts in medical CT applications. Furthermore, the method was extended to handle scatter in addition to beam hardening and to allow for detector pixel-specific and ray-specific precorrections. This implies that the empirical binary tomography calibration (EBTC) technique is sensitive to spectral effects as they are induced by the heel effect, by shaped prefiltration, or by scanners with tube voltage modulation. The presented method models the beam hardening correction with a rational function, while the scatter component is modeled using the pep model of B. Ohnesorge et al. ["Efficient object scatter correction algorithm for third and fourth generation CT scanners," Eur. Radiol. 9(3), 563-569 (1999)]. A smoothness constraint is applied to the parameter space to regularize the underdetermined system of nonlinear equations. The parameters determined are then used to precorrect CT scans. Results: EBTC was evaluated using simulated data of a flat panel cone-beam CT scanner with tube voltage modulation and bow-tie prefiltration and using real data of a flat panel cone-beam CT scanner. In simulation studies, where the ground truth is known, the authors' correction model proved to be highly accurate and was able to reduce beam hardening by 97% and scatter by about 75%. Reconstructions of measured data showed significantly fewer artifacts than
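A minimal sketch of fitting a rational-function beam-hardening precorrection of the kind described above, using a hypothetical two-component spectrum (the real EBTC calibration, with its scatter term and smoothness regularization, is far more elaborate). The rational model q(p) = (a1·p + a2·p²)/(1 + b1·p) becomes linear in its coefficients after multiplying through by the denominator, so ordinary least squares suffices here:

```python
import numpy as np

# Hypothetical two-component spectrum: attenuation coefficients and weights.
mu = np.array([0.30, 0.15])      # [1/cm] soft and hard spectral components
w = np.array([0.6, 0.4])

t = np.linspace(0.0, 30.0, 100)  # water-equivalent path length [cm]
p_poly = -np.log((w[:, None] * np.exp(-mu[:, None] * t[None, :])).sum(axis=0))
p_mono = 0.20 * t                # target effective monochromatic projection

# Rational model q(p) = (a1*p + a2*p^2) / (1 + b1*p).  Multiplying by the
# denominator gives p_mono = a1*p + a2*p^2 - b1*p*p_mono, linear in
# (a1, a2, b1), so the coefficients follow from linear least squares.
A = np.column_stack([p_poly, p_poly**2, -p_poly * p_mono])
a1, a2, b1 = np.linalg.lstsq(A, p_mono, rcond=None)[0]

corrected = (a1 * p_poly + a2 * p_poly**2) / (1 + b1 * p_poly)
resid = np.max(np.abs(corrected - p_mono))
print(round(a1, 3), round(a2, 3), round(b1, 3))
```

Applying the fitted q to polychromatic line integrals linearizes them before reconstruction, which is the essence of the precorrection.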
Aberration correction for time-domain ultrasound diffraction tomography.
Mast, T Douglas
2002-07-01
Extensions of a time-domain diffraction tomography method, which reconstructs spatially dependent sound speed variations from far-field time-domain acoustic scattering measurements, are presented and analyzed. The resulting reconstructions are quantitative images with applications including ultrasonic mammography, and can also be considered candidate solutions to the time-domain inverse scattering problem. Here, the linearized time-domain inverse scattering problem is shown to have no general solution for finite signal bandwidth. However, an approximate solution to the linearized problem is constructed using a simple delay-and-sum method analogous to "gold standard" ultrasonic beamforming. The form of this solution suggests that the full nonlinear inverse scattering problem can be approximated by applying appropriate angle- and space-dependent time shifts to the time-domain scattering data; this analogy leads to a general approach to aberration correction. Two related methods for aberration correction are presented: one in which delays are computed from estimates of the medium using an efficient straight-ray approximation, and one in which delays are applied directly to a time-dependent linearized reconstruction. Numerical results indicate that these correction methods achieve substantial quality improvements for imaging of large scatterers. The parametric range of applicability for the time-domain diffraction tomography method is increased by about a factor of 2 by aberration correction.
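The core delay-and-sum idea, applying element-dependent time shifts so that echoes add coherently, can be sketched with synthetic array data. The geometry, pulse shape, and aberration model below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical linear array receiving the echo of a point scatterer; each
# element records the same pulse, delayed by geometry plus an unknown
# aberration delay.
c = 1.5                                   # sound speed [mm/us]
fs = 50.0                                 # sampling rate [MHz]
n_elem, n_samp = 16, 2048
elem_x = (np.arange(n_elem) - n_elem / 2 + 0.5) * 0.3   # positions [mm]
target = np.array([0.0, 20.0])                          # scatterer [mm]

dist = np.hypot(elem_x - target[0], target[1])
aberr = 0.2 * rng.standard_normal(n_elem)               # aberration [us]
delays = 2 * dist / c + aberr                           # true delays

t = np.arange(n_samp) / fs
pulse = lambda tt: np.exp(-(tt / 0.2) ** 2)             # Gaussian pulse
traces = np.array([pulse(t - d) for d in delays])

# Delay-and-sum: shift each trace by its estimated delay and sum; with a
# correct aberration estimate the pulses add coherently.
def das(traces, est_delays):
    return np.sum([np.interp(t + d, t, tr)
                   for tr, d in zip(traces, est_delays)], axis=0)

uncorrected = das(traces, 2 * dist / c)   # geometric delays only
corrected = das(traces, delays)           # geometry + aberration correction
print(bool(uncorrected.max() < corrected.max()))
```

With only geometric delays, the residual aberration smears the summed pulse; including the aberration delays restores a sharp coherent peak, which is the quality improvement the correction methods target.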
Accurate, meshless methods for magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Hopkins, Philip F.; Raives, Matthias J.
2016-01-01
Recently, we explored new meshless finite-volume Lagrangian methods for hydrodynamics: the 'meshless finite mass' (MFM) and 'meshless finite volume' (MFV) methods; these capture advantages of both smoothed particle hydrodynamics (SPH) and adaptive mesh refinement (AMR) schemes. We extend these to include ideal magnetohydrodynamics (MHD). The MHD equations are second-order consistent and conservative. We augment these with a divergence-cleaning scheme, which maintains ∇ · B ≈ 0. We implement these in the code GIZMO, together with state-of-the-art SPH MHD. We consider a large test suite, and show that on all problems the new methods are competitive with AMR using constrained transport (CT) to ensure ∇ · B = 0. They correctly capture the growth/structure of the magnetorotational instability, MHD turbulence, and launching of magnetic jets, in some cases converging more rapidly than state-of-the-art AMR. Compared to SPH, the MFM/MFV methods exhibit convergence at fixed neighbour number, sharp shock-capturing, and dramatically reduced noise, divergence errors, and diffusion. Still, 'modern' SPH can handle most test problems, at the cost of larger kernels and 'by hand' adjustment of artificial diffusion. Compared to non-moving meshes, the new methods exhibit enhanced 'grid noise' but reduced advection errors and diffusion, easily include self-gravity, and feature velocity-independent errors and superior angular momentum conservation. They converge more slowly on some problems (smooth, slow-moving flows), but more rapidly on others (involving advection/rotation). In all cases, we show divergence control beyond the Powell 8-wave approach is necessary, or all methods can converge to unphysical answers even at high resolution.
Multiple scattering technique lidar
NASA Technical Reports Server (NTRS)
Bissonnette, Luc R.
1992-01-01
The Bernoulli-Riccati equation is based on the single-scattering description of the lidar backscatter return. In practice, especially in low visibility conditions, the effects of multiple scattering can be significant. Instead of considering these multiple scattering effects as a nuisance, we propose here to use them to help resolve the problems of having to assume a backscatter-to-extinction relation and specifying a boundary value for a position far remote from the lidar station. To this end, we have built a four-field-of-view lidar receiver to measure the multiple scattering contributions. The system has been described in a number of publications that also discuss preliminary results illustrating the multiple scattering effects for various environmental conditions. Reported here are recent advances made in the development of a method of inverting the multiple scattering data for the determination of the aerosol scattering coefficient.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-13
... Day: A National Day of Celebration of Greek and American Democracy, 2010 Correction In Presidential... correction: On page 15601, the first line of the heading should read ``Proclamation 8485 of March 24,...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-01
... Respect to the Former Liberian Regime of Charles Taylor Correction In Presidential document 2012-17703 beginning on page 42415 in the issue of Wednesday, July 18, 2012, make the following correction: On...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-01
... Unobligated Funds Under the American Recovery and Reinvestment Act of 2009 Correction In Presidential document... correction: On page 70883, the document identification heading on line one should read ``Notice of...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-08
... Correction In Presidential document 2010-27676 beginning on page 67019 in the issue of Monday, November 1, 2010, make the following correction: On page 67019, the Presidential Determination number should...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-08
... Correction In Presidential document E9-31418 beginning on page 707 in the issue of Tuesday, January 5, 2010, make the following correction: On page 731, the date line below the President's signature should...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-08
... Migration Needs Resulting From Flooding In Pakistan Correction In Presidential document 2010-27673 beginning on page 67015 in the issue of Monday, November 1, 2010, make the following correction: On page...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-06
...--Continuation of U.S. Drug Interdiction Assistance to the Government of Colombia Correction In Presidential... correction: On page 51647, the heading of the document was omitted and should read ``Continuation of...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-02
... Commit, Threaten To Commit, or Support Terrorism Correction In Presidential document 2012-22710 beginning on page 56519 in the issue of Wednesday, September 12, 2012, make the following correction: On...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-08
... Migration Needs Resulting from Violence in Kyrgyzstan Correction In Presidential document 2010-27672 beginning on page 67013 in the issue of Monday, November 1, 2010, make the following correction: On...
Fitzpatrick, A. Liam; Kaplan, Jared; /SLAC
2012-02-14
We show that suitably regulated multi-trace primary states in large N CFTs behave like 'in' and 'out' scattering states in the flat-space limit of AdS. Their transition matrix elements approach the exact scattering amplitudes for the bulk theory, providing a natural CFT definition of the flat space S-Matrix. We study corrections resulting from the AdS curvature and particle propagation far from the center of AdS, and show that AdS simply provides an IR regulator that disappears in the flat space limit.
[From scattering to wavefront. Healing optics].
Semchishen, V; Mrokhen, M
2004-01-01
The purpose of this report, prepared as part of research in progress, was to discuss the optical effect of irregular surface structures that might be associated with complicated refractive procedures and the resulting retinal image quality. We concentrated our discussion on the range of surface structures between the known scattering effects and higher-order wavefront aberrations. The case study demonstrates that surface irregularities of the cornea might induce, after refractive laser surgery, significant optical aberrations that differ markedly from classical wavefront or scattering errors. Such optical errors, however, cannot be correctly measured by current commercial wavefront sensors. Finally, the influence of the healing process on the Strehl ratio is discussed.
Timelike Compton Scattering - A First Look (CLAS)
Pawel Nadel-Turonski
2009-12-01
A major goal of the 12 GeV upgrade at Jefferson Lab is to map out the Generalized Parton Distributions (GPDs) in the valence region. This is primarily done through Deeply Virtual Compton Scattering (DVCS), which provides the simplest and cleanest way of accessing the GPDs. However, the “inverse” process, Timelike Compton Scattering (TCS), can provide an important complement, in particular for measuring the real part of the amplitude and understanding corrections at finite Q². The first measurements of TCS have recently been carried out in Hall B at Jefferson Lab, using both tagged and untagged photon beams.
Accurate nuclear radii and binding energies from a chiral interaction
Ekstrom, Jan A.; Jansen, G. R.; Wendt, Kyle A.; ...
2015-05-01
With the goal of developing predictive ab initio capability for light and medium-mass nuclei, two-nucleon and three-nucleon forces from chiral effective field theory are optimized simultaneously to low-energy nucleon-nucleon scattering data, as well as binding energies and radii of few-nucleon systems and selected isotopes of carbon and oxygen. Coupled-cluster calculations based on this interaction, named NNLOsat, yield accurate binding energies and radii of nuclei up to 40Ca, and are consistent with the empirical saturation point of symmetric nuclear matter. In addition, the low-lying collective Jπ=3- states in 16O and 40Ca are described accurately, while spectra for selected p- and sd-shell nuclei are in reasonable agreement with experiment.
Accurate nuclear radii and binding energies from a chiral interaction
Ekstrom, Jan A.; Jansen, G. R.; Wendt, Kyle A.; Hagen, Gaute; Papenbrock, Thomas F.; Carlsson, Boris; Forssen, Christian; Hjorth-Jensen, M.; Navratil, Petr; Nazarewicz, Witold
2015-05-01
With the goal of developing predictive ab initio capability for light and medium-mass nuclei, two-nucleon and three-nucleon forces from chiral effective field theory are optimized simultaneously to low-energy nucleon-nucleon scattering data, as well as binding energies and radii of few-nucleon systems and selected isotopes of carbon and oxygen. Coupled-cluster calculations based on this interaction, named NNLOsat, yield accurate binding energies and radii of nuclei up to ⁴⁰Ca, and are consistent with the empirical saturation point of symmetric nuclear matter. In addition, the low-lying collective Jπ = 3⁻ states in ¹⁶O and ⁴⁰Ca are described accurately, while spectra for selected p- and sd-shell nuclei are in reasonable agreement with experiment.
Efficient and accurate computation of the incomplete Airy functions
NASA Technical Reports Server (NTRS)
Constantinides, E. D.; Marhefka, R. J.
1993-01-01
The incomplete Airy integrals serve as canonical functions for the uniform ray optical solutions to several high-frequency scattering and diffraction problems that involve a class of integrals characterized by two stationary points that are arbitrarily close to one another or to an integration endpoint. Integrals with such analytical properties describe transition region phenomena associated with composite shadow boundaries. An efficient and accurate method for computing the incomplete Airy functions would make the solutions to such problems useful for engineering purposes. In this paper a convergent series solution for the incomplete Airy functions is derived. Asymptotic expansions involving several terms are also developed and serve as large argument approximations. The combination of the series solution with the asymptotic formulae provides for an efficient and accurate computation of the incomplete Airy functions. Validation of accuracy is accomplished using direct numerical integration data.
NASA Astrophysics Data System (ADS)
Hanrieder, N.; Wilbert, S.; Pitz-Paal, R.; Emde, C.; Gasteiger, J.; Mayer, B.; Polo, J.
2015-05-01
Losses of reflected Direct Normal Irradiance due to atmospheric extinction in concentrating solar tower plants can vary significantly with site and time. The losses of the direct normal irradiance between the heliostat field and the receiver in a solar tower plant are mainly caused by atmospheric scattering and absorption by aerosols and water vapor in the atmospheric boundary layer. Owing to high aerosol particle numbers, radiation losses can be significantly larger in desert environments than under the standard atmospheric conditions usually assumed in ray-tracing or plant optimization tools. Information about on-site atmospheric extinction is only rarely available. To measure these radiation losses, two different commercially available instruments were tested, and more than 19 months of measurements were collected at the Plataforma Solar de Almería and compared. Both instruments are primarily used to determine the meteorological optical range (MOR). The Vaisala FS11 scatterometer is based on a monochromatic near-infrared light source and measures the strength of scattering processes in a small air volume, mainly caused by aerosol particles. The Optec LPV4 long-path visibility transmissometer determines the monochromatic attenuation between a light-emitting diode (LED) light source at 532 nm and a receiver, and therefore also accounts for absorption processes. As the broadband solar attenuation is of interest for solar resource assessment for Concentrating Solar Power (CSP), a correction procedure for these two instruments was developed and tested. This procedure includes a spectral correction of both instruments from monochromatic to broadband attenuation; that is, the attenuation is corrected for the actual, time-dependent solar spectrum reflected by the collector. Further, an absorption correction for the Vaisala FS11 scatterometer is implemented. To optimize the Absorption and Broadband Correction (ABC) procedure, additional
The history of scatter hoarding studies
Brodin, Anders
2010-01-01
In this review, I will present an overview of the development of the field of scatter hoarding studies. Scatter hoarding is a conspicuous behaviour and it has been observed by humans for a long time. Apart from an exceptional experimental study already published in 1720, it started with observational field studies of scatter hoarding birds in the 1940s. Driven by a general interest in birds, several ornithologists made large-scale studies of hoarding behaviour in species such as nutcrackers and boreal titmice. Scatter hoarding birds seem to remember caching locations accurately, and it was shown in the 1960s that successful retrieval is dependent on a specific part of the brain, the hippocampus. The study of scatter hoarding, spatial memory and the hippocampus has since then developed into a study system for evolutionary studies of spatial memory. In 1978, a game theoretical paper started the era of modern studies by establishing that a recovery advantage is necessary for individual hoarders for the evolution of a hoarding strategy. The same year, a combined theoretical and empirical study on scatter hoarding squirrels investigated how caches should be spaced out in order to minimize cache loss, a phenomenon sometimes called optimal cache density theory. Since then, the scatter hoarding paradigm has branched into a number of different fields: (i) theoretical and empirical studies of the evolution of hoarding, (ii) field studies with modern sampling methods, (iii) studies of the precise nature of the caching memory, (iv) a variety of studies of caching memory and its relationship to the hippocampus. Scatter hoarding has also been the subject of studies of (v) coevolution between scatter hoarding animals and the plants that are dispersed by these. PMID:20156813
Research in Correctional Rehabilitation.
ERIC Educational Resources Information Center
Rehabilitation Services Administration (DHEW), Washington, DC.
Forty-three leaders in corrections and rehabilitation participated in the seminar planned to provide an indication of the status of research in correctional rehabilitation. Papers include: (1) "Program Trends in Correctional Rehabilitation" by John P. Conrad, (2) "Federal Offenders Rehabilitation Program" by Percy B. Bell and Merlyn Mathews, (3)…
Shading correction for on-board cone-beam CT in radiation therapy using planning MDCT images
Niu Tianye; Sun, Mingshan; Star-Lack, Josh; Gao Hewei; Fan Qiyong; Zhu Lei
2010-10-15
Purpose: Applications of cone-beam CT (CBCT) to image-guided radiation therapy (IGRT) are hampered by shading artifacts in the reconstructed images. These artifacts are mainly due to scatter contamination in the projections but can also result from uncorrected beam hardening effects as well as nonlinearities in the responses of the amorphous silicon flat panel detectors. While currently CBCT is mainly used to provide patient geometry information for treatment setup, more demanding applications requiring high-quality CBCT images are under investigation. To tackle these challenges, many CBCT correction algorithms have been proposed; yet a standard approach remains unclear. In this work, we propose a shading correction method for CBCT that addresses artifacts from low-frequency projection errors. The method is consistent with the current workflow of radiation therapy. Methods: With much smaller inherent scatter signals and more accurate detectors, diagnostic multidetector CT (MDCT) provides high-quality CT images that are routinely used for radiation treatment planning. Using the MDCT image as "free" prior information, we first estimate the primary projections in the CBCT scan via forward projection of the spatially registered MDCT data. Since most of the CBCT shading artifacts stem from low-frequency errors in the projections, such as scatter, these errors can be accurately estimated by low-pass filtering the difference between the estimated and raw CBCT projections. The error estimates are then subtracted from the raw CBCT projections. Our method is distinct from other published correction methods that use the MDCT image as a prior because it is projection-based and uses limited patient anatomical information from the MDCT image. The merit of CBCT-based treatment monitoring is therefore retained. Results: The proposed method is evaluated using two phantom studies on tabletop systems. On the Catphan©600 phantom, our approach reduces the reconstruction error
NASA Astrophysics Data System (ADS)
Ouyang, Wei; Mao, Weijian; Li, Wuqun; Zhang, Pan
2016-11-01
An approach for approximate direct quadratic nonlinear inversion in two-parameter (density and bulk modulus) heterogeneous acoustic media is presented and discussed in this paper. The approach consists of two parts: the first is a linear generalized Radon transform (GRT) migration procedure based on the weighted true-amplitude summation of pre-stack seismic scattered data that is adapted to a virtually arbitrary observing system, and the second is a non-iterative quadratic inversion operation, produced from the explicit expression of the amplitude radiation pattern, that acts on the migrated data. This ensures that the asymptotic inversion can simultaneously locate the discontinuities and reconstruct the size of the discontinuities in the perturbation parameters describing the acoustic media. We identify that the amplitude radiation pattern is the binary quadratic combination of the parameters in the process of formulating nonlinear inverse scattering problems based on the second-order Born approximation. The coefficients of the quadratic terms are computed by appropriately handling the double scattering effects. These added quadratic terms provide a better amplitude correction for the parameter inversion. Through numerical tests, we show that for strong perturbations, the errors of the linear inversion are significant and unacceptable. In contrast, the quadratic nonlinear inversion can give fairly accurate inversion results and keep almost the same computational complexity as conventional GRT linear inversion.
NASA Astrophysics Data System (ADS)
Ouyang, Wei; Mao, Weijian; Li, Wuqun; Zhang, Pan
2017-02-01
An approach for approximate direct quadratic non-linear inversion in two-parameter (density and bulk modulus) heterogeneous acoustic media is presented and discussed in this paper. The approach consists of two parts: the first is a linear generalized Radon transform (GRT) migration procedure based on the weighted true-amplitude summation of pre-stack seismic scattered data that is adapted to a virtually arbitrary observing system, and the second is a non-iterative quadratic inversion operation, produced from the explicit expression of the amplitude radiation pattern, that acts on the migrated data. This ensures that the asymptotic inversion can simultaneously locate the discontinuities and reconstruct the size of the discontinuities in the perturbation parameters describing the acoustic media. We identify that the amplitude radiation pattern is the binary quadratic combination of the parameters in the process of formulating non-linear inverse scattering problems based on the second-order Born approximation. The coefficients of the quadratic terms are computed by appropriately handling the double scattering effects. These added quadratic terms provide a better amplitude correction for the parameter inversion. Through numerical tests, we show that for strong perturbations, the errors of the linear inversion are significant and unacceptable. In contrast, the quadratic non-linear inversion can give fairly accurate inversion results and keep almost the same computational complexity as conventional GRT linear inversion.
NASA Astrophysics Data System (ADS)
Wu, Li-Li; Zhou, Qihou H.; Chen, Tie-Jun; Liang, J. J.; Wu, Xin
2015-09-01
Simultaneous derivation of multiple ionospheric parameters from the incoherent scatter power spectra in the F1 region is difficult because the spectra have only subtle differences for different combinations of parameters. In this study, we apply a particle swarm optimizer (PSO) to incoherent scatter power spectrum fitting and compare it to the commonly used least squares fitting (LSF) technique. The PSO method is found to outperform the LSF method in practically all scenarios using simulated data. The PSO method offers the advantages of not being sensitive to initial assumptions and of allowing physical constraints to be easily built into the model. When simultaneously fitting for molecular ion fraction (fm), ion temperature (Ti), and the ratio of ion to electron temperature (γT), γT is largely stable. The uncertainty between fm and Ti can be described by a quadratic relationship. The significance of this result is that Ti can be retroactively corrected for data archived many years ago where the assumption of fm may not be accurate and the original power spectra are unavailable. In our discussion, we emphasize the fitting for fm, which is a difficult parameter to obtain. The PSO method is often successful in obtaining fm where LSF fails. We apply both PSO and LSF to actual observations made by the Arecibo incoherent scatter radar. The results show that the PSO method is a viable way to simultaneously determine ion and electron temperatures and the molecular ion fraction when the latter is greater than 0.3.
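A minimal particle swarm optimizer of the kind described can be sketched by fitting a toy two-parameter spectrum model; the real incoherent scatter spectrum model is far more involved, and the model, bounds, and PSO hyperparameters below are illustrative assumptions. Note how physical constraints enter simply as position clipping:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "spectrum" with two parameters standing in for (fm, Ti).
f = np.linspace(-5, 5, 101)
def model(p):
    amp, width = p
    return amp * np.exp(-(f / width) ** 2)

truth = np.array([2.0, 1.5])
data = model(truth) + 0.02 * rng.standard_normal(f.size)
cost = lambda p: np.sum((model(p) - data) ** 2)

def pso(cost, lo, hi, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    dim = len(lo)
    x = rng.uniform(lo, hi, (n, dim))          # particle positions
    v = np.zeros((n, dim))                     # particle velocities
    pbest = x.copy()                           # personal bests
    pcost = np.array([cost(xi) for xi in x])
    g = pbest[pcost.argmin()].copy()           # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)             # physical constraints
        c = np.array([cost(xi) for xi in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()].copy()
    return g

est = pso(cost, lo=np.array([0.1, 0.1]), hi=np.array([5.0, 5.0]))
print(np.round(est, 2))
```

No initial guess is supplied beyond the search bounds, which mirrors the insensitivity to initial assumptions noted in the abstract.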
NASA Astrophysics Data System (ADS)
Fitzpatrick, A. Liam; Kaplan, Jared
2016-05-01
We use results on Virasoro conformal blocks to study chaotic dynamics in CFT2 at large central charge c. The Lyapunov exponent λ_L, which is a diagnostic for the early onset of chaos, receives 1/c corrections that may be interpreted as λ_L = (2π/β)(1 + 12/c). However, out-of-time-order correlators receive other, equally important 1/c-suppressed contributions that do not have such a simple interpretation. We revisit the proof of a bound on λ_L that emerges at large c, focusing on CFT2 and explaining why our results do not conflict with the analysis leading to the bound. We also comment on relationships between chaos, scattering, causality, and bulk locality.
Expressive Single Scattering for Light Shaft Stylization.
Kol, Timothy R; Klehm, Oliver; Seidel, Hans-Peter; Eisemann, Elmar
2016-04-14
Light scattering in participating media is a natural phenomenon that is increasingly featured in movies and games, as it is visually pleasing and lends realism to a scene. In art, it may further be used to express a certain mood or emphasize objects. Here, artists often rely on stylization when creating scattering effects, not only because of the complexity of physically correct scattering, but also to increase expressiveness. Little research, however, focuses on artistically influencing the simulation of the scattering process in a virtual 3D scene. We propose novel stylization techniques, enabling artists to change the appearance of single scattering effects such as light shafts. Users can add, remove, or enhance light shafts using occluder manipulation. The colors of the light shafts can be stylized and animated using easily modifiable transfer functions. Alternatively, our system can optimize a light map given a simple user input for a number of desired views in the 3D world. Finally, we enable artists to control the heterogeneity of the underlying medium. Our stylized scattering solution is easy to use and compatible with standard rendering pipelines. It works for animated scenes and can be executed in real time to provide the artist with quick feedback.
Intensity distribution and impact of scatter for dual-source CT
NASA Astrophysics Data System (ADS)
Kyriakou, Yiannis; Kalender, Willi A.
2007-12-01
Apart from forward scatter, which is given for all CT scanners, dual-source CT (DSCT) is also affected by cross-scatter photons from the second tube-detector system arranged at 90°. We investigated the magnitude and distribution of scatter for DSCT and its impact on image quality. Simulations and measurements of homogeneous and anthropomorphic phantoms were conducted for a DSCT scanner (SOMATOM Definition, Siemens Medical Solutions, Forchheim, Germany) at tube voltages of 80 and 120 kV. The simulations of forward scatter were carried out using combined analytical and Monte Carlo simulation methods for a collimation of 19.2 mm for both tube-detector systems. Measurements of cross scatter were performed by switching one tube off, still reading out the corresponding detector. The relative scatter fractions and the distribution of cross scatter were registered for various imaging conditions. Additionally, a detailed noise analysis with respect to the correction of cross-scatter artifacts is provided to evaluate the performance of correction algorithms. The forward-scatter fraction increased with increasing phantom diameter from 0.02 up to 0.11 for PMMA phantoms of 80 to 400 mm diameter. For cross scatter, the mean intensity was equivalent to forward scatter for small phantoms but was larger for increased phantom size and resulted in severe artifacts in the reconstructed images. The outer dimensions and shape of the object are decisive for the cross-scatter intensity distribution whereas the influence of the degree of inhomogeneity of the respective phantom appears to be negligible. Scatter correction suppressed cross-scatter artifacts but increased noise as a function of the cross-scatter fraction. The magnitude of scatter is not negligible for DSCT systems and dedicated corrections are necessary for the assurance of unimpaired image quality.
Correction method for shift-variant characteristics of the SPECT measurement system
NASA Astrophysics Data System (ADS)
Mimura, Masahiro; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki
1997-04-01
The SPECT imaging system has shift-variant characteristics due to nonuniform attenuation of gamma rays, collimator design, scattered photons, etc. In order to provide quantitatively accurate SPECT images, these shift-variant characteristics should be compensated for in reconstruction. This paper presents a method to correct the shift-variant characteristics based on a continuous-discrete mapping model. In the proposed method, the projection data are modified using sensitivity functions so that the filtered backprojection (FBP) method can be applied. Since the projection data are assumed to be acquired by narrow ray-sum beams in the FBP method, each narrow ray-sum beam is approximated by a weighted sum of sensitivity functions of the measurement system, and the actual projection data are then corrected by the corresponding weighting factors. Finally, the FBP method is applied to the corrected projection data and a SPECT image is reconstructed. Since the proposed method requires the inversion of smaller matrices than conventional algebraic methods, the amounts of calculation and memory space are smaller, and the stability of the calculation is greatly improved. Results of numerical simulations are also demonstrated.
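The weighting-factor idea, approximating a narrow ray-sum beam by a weighted sum of broad shift-variant sensitivity functions and then using those weights to correct measured projections, can be sketched in one dimension. All functions and constants below are hypothetical stand-ins for the actual SPECT system response:

```python
import numpy as np

# Hypothetical 1-D detector with broad, shift-variant sensitivity
# functions (Gaussians whose width grows with position).
x = np.linspace(0.0, 10.0, 101)
centers = np.linspace(0.5, 9.5, 19)
widths = 0.3 + 0.05 * centers                     # shift-variant blur
S = np.exp(-((x[None, :] - centers[:, None]) / widths[:, None]) ** 2)

# Reference narrow ray-sum beam through x0.
x0 = 5.0
ideal = np.exp(-((x - x0) / 0.6) ** 2)

# Approximate the narrow beam by a weighted sum of sensitivity functions
# (small linear least-squares problem).
w_fit, *_ = np.linalg.lstsq(S.T, ideal, rcond=None)

# Correct the measured projection data with the weighting factors: the
# weighted sum of the broad readings approximates the narrow ray sum.
obj = np.exp(-((x - 4.0) / 1.0) ** 2)             # hypothetical activity
broad_readings = S @ obj                          # measured projections
narrow_estimate = w_fit @ broad_readings          # corrected ray sum
direct = ideal @ obj                              # ideal narrow ray sum
print(bool(abs(narrow_estimate - direct) / direct < 0.1))
```

Because only a small least-squares system is solved per ray, this stays much cheaper than inverting the full system matrix as algebraic reconstruction would require, which is the efficiency argument made in the abstract.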
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
Ocean color determination through a scattering atmosphere
NASA Technical Reports Server (NTRS)
Curran, R. J.
1972-01-01
Measurements of the surface-level albedo for ocean water containing various concentrations of phytoplankton indicate a strong correlation between wavelength-dependent albedo ratios and phytoplankton chlorophyll concentration. To sense surface-level albedo ratios from space platforms, it is necessary to correct for the scattering and absorption properties of the atmosphere at the wavelengths in question. Atmospheric scattering models were constructed to calculate corrections at two wavelengths, 0.46 and 0.54 micrometers. Assuming a natural background uncertainty in the aerosol optical depth of 0.1, it is found that the chlorophyll concentration may be determined to within one standard deviation of 0.5 to 2.5 milligrams per cubic meter. By remotely sensing the aerosol optical depth to greater accuracy, it appears feasible to detect chlorophyll concentrations with an uncertainty approaching 0.1 milligram per cubic meter.
Photon diffusion near the point-of-entry in anisotropically scattering turbid media
Vitkin, Edward; Turzhitsky, Vladimir; Qiu, Le; Guo, Lianyu; Itzkan, Irving; Hanlon, Eugene B.; Perelman, Lev T.
2012-01-01
From astronomy to cell biology, the manner in which light propagates in turbid media has been of central importance for many decades. However, light propagation near the point-of-entry (POE) in turbid media has never been analytically described, until now. Here we report a straightforward and accurate method that overcomes this longstanding, unsolved problem in radiative transport. Our theory properly treats anisotropic photon scattering events and takes the specific form of the phase function into account. As a result, our method correctly predicts the spatially dependent diffuse reflectance of light near the POE for any arbitrary phase function. We demonstrate that the theory is in excellent agreement with both experimental results and Monte Carlo simulations for several commonly used phase functions. PMID:22158442
Partially strong WW scattering
Cheung, Kingman; Chiang, Cheng-Wei; Yuan, Tzu-Chiang
2008-09-01
What if only a light Higgs boson is discovered at the CERN LHC? Conventional wisdom tells us that the scattering of longitudinal weak gauge bosons would not grow strong at high energies. However, this is generally not true. In some composite models or general two-Higgs-doublet models, the presence of a light Higgs boson does not guarantee complete unitarization of the WW scattering. After partial unitarization by the light Higgs boson, the WW scattering becomes strongly interacting until it hits one or more heavier Higgs bosons or other strong dynamics. We analyze how LHC experiments can reveal this interesting possibility of partially strong WW scattering.
NASA Technical Reports Server (NTRS)
Hong, Byungsik; Maung, Khin Maung; Wilson, John W.; Buck, Warren W.
1989-01-01
The derivations of the Lippmann-Schwinger equation and the Watson multiple-scattering series are given. A simple optical potential is found to be the first term of that series. The harmonic-well and Woods-Saxon number-density distribution models of the nucleus are used, without a t-matrix taken from scattering experiments. The parameterized two-body inputs, which are kaon-nucleon total cross sections, elastic slope parameters, and the ratio of the real to the imaginary part of the forward elastic scattering amplitude, are presented. The eikonal approximation was chosen as the solution method to estimate the total and absorptive cross sections for kaon-nucleus scattering.
Improving accuracy through density correction in guided wave tomography
2016-01-01
The accurate quantification of wall loss caused by corrosion is critical to the reliable life estimation of pipes and pressure vessels. Traditional thickness gauging by scanning a probe is slow and requires access to all points on the surface; this is impractical in many cases as corrosion often occurs where access is restricted, such as beneath supports where water collects. Guided wave tomography presents a solution to this; by transmitting guided waves through the region of interest and exploiting their dispersive nature, it is possible to build up a map of thickness. While the best results have been seen when using the fundamental modes A0 and S0 at low frequency, the complex scattering of the waves causes errors within the reconstruction. It is demonstrated that these lead to an underestimate in wall loss for A0 but an overestimate for S0. Further analysis showed that this error was related to density variation, which was proportional to thickness. It was demonstrated how this could be corrected for in the reconstructions, in many cases resulting in the near-elimination of the error across a range of defects, and greatly improving the accuracy of life estimates from guided wave tomography. PMID:27118904
NASA Astrophysics Data System (ADS)
Tseng, Snow H.; Kung, Te-Jen; Yu, Min-Lun
2016-03-01
By means of numerical solutions of Maxwell's equations, we model the complex light scattering phenomenon. Light propagation through a scattering medium is a deterministic process; with specific amplitude and phase, light can propagate to a target position via multiple scattering. Numerical solution of Maxwell's equations allows this complex scattering to be analyzed accurately. The reported simulation enables qualitative and quantitative analysis of the effectiveness of directing light through turbid media to a targeted position.
Light-like scattering in quantum gravity
NASA Astrophysics Data System (ADS)
Bjerrum-Bohr, N. E. J.; Donoghue, John F.; Holstein, Barry R.; Planté, Ludovic; Vanhove, Pierre
2016-11-01
We consider scattering in quantum gravity and derive long-range classical and quantum contributions to the scattering of light-like bosons and fermions (spin-0, spin- 1/2 , spin-1) from an external massive scalar field, such as the Sun or a black hole. This is achieved by treating general relativity as an effective field theory and identifying the non-analytic pieces of the one-loop gravitational scattering amplitude. It is emphasized throughout the paper how modern amplitude techniques, involving spinor-helicity variables, unitarity, and squaring relations in gravity enable much simplified computations. We directly verify, as predicted by general relativity, that all classical effects in our computation are universal (in the context of matter type and statistics). Using an eikonal procedure we confirm the post-Newtonian general relativity correction for light-like bending around large stellar objects. We also comment on treating effects from quantum ℏ dependent terms using the same eikonal method.
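The post-Newtonian light bending the abstract verifies is the classic general-relativity deflection angle θ = 4GM/(c²b) for a ray with impact parameter b. A quick numeric check for a ray grazing the Sun, using standard physical constants (the values below are for illustration, not from the paper):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.957e8      # solar radius, m (impact parameter of a grazing ray)

# Leading-order GR deflection of a light ray: theta = 4 G M / (c^2 b)
theta_rad = 4.0 * G * M_sun / (c ** 2 * R_sun)
theta_arcsec = math.degrees(theta_rad) * 3600.0
# ~1.75 arcseconds, the value famously confirmed by the 1919 eclipse
```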
Neutrons scattering studies in the actinide region
Kegel, G.H.R.; Egan, J.J.
1992-09-01
The following areas were investigated during the report period: prompt fission neutron energy spectra measurements; neutron elastic and inelastic scattering from ²³⁹Pu; neutron scattering in ¹⁸¹Ta and ¹⁹⁷Au; response of a ²³⁵U fission chamber near reaction thresholds; a two-parameter data acquisition system; a "black" neutron detector; investigation of neutron-induced defects in silicon dioxide; and multiple scattering corrections. Four Ph.D. dissertations and one M.S. thesis were completed during the report period. Publications consisted of three journal articles, four conference papers in proceedings, and eleven abstracts of presentations at scientific meetings. There are currently four Ph.D. candidates and one M.S. candidate working on dissertations directly associated with the project. In addition, three other Ph.D. candidates are working on dissertations involving other aspects of neutron physics in this laboratory.
Shuttle program: Computing atmospheric scale height for refraction corrections
NASA Technical Reports Server (NTRS)
Lear, W. M.
1980-01-01
Methods for computing the atmospheric scale height to determine radio wave refraction were investigated for different atmospheres, and different angles of elevation. Tables of refractivity versus altitude are included. The equations used to compute the refraction corrections are given. It is concluded that very accurate corrections are determined with the assumption of an exponential atmosphere.
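A minimal sketch of the exponential-atmosphere assumption mentioned in the conclusion (the refractivity values below are illustrative, not from the report): refractivity is taken as N(h) = N_s·exp(−h/H), so the scale height H is recoverable from refractivity at two altitudes.

```python
import math

def scale_height(h1, n1, h2, n2):
    """Scale height H (same units as h) from refractivity at two
    altitudes, assuming N(h) = N_s * exp(-h / H)."""
    return (h2 - h1) / math.log(n1 / n2)

def refractivity(h, n_s, big_h):
    """Exponential refractivity profile (N-units)."""
    return n_s * math.exp(-h / big_h)

# Illustrative values: 313 N-units at the surface and 105 N-units
# at 7 km altitude give a scale height of about 6.4 km.
H = scale_height(0.0, 313.0, 7.0, 105.0)
```

With H in hand, the refractivity table can be evaluated at any altitude, which is what makes the exponential fit convenient for computing the refraction correction along a ray path.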
A Review of Target Mass Corrections
I. Schienbein; V. Radescu; G. Zeller; M. E. Christy; C. E. Keppel; K. S. McFarland; W. Melnitchouk; F. I. Olness; M. H. Reno; F. Steffens; J.-Y. Yu
2007-09-06
With recent advances in the precision of inclusive lepton-nuclear scattering experiments, it has become apparent that comparable improvements are needed in the accuracy of the theoretical analysis tools. In particular, when extracting parton distribution functions in the large-x region, it is crucial to correct the data for effects associated with the nonzero mass of the target. We present here a comprehensive review of these target mass corrections (TMCs) to structure function data, summarizing the relevant formulas for TMCs in electromagnetic and weak processes. We include a full analysis of both hadronic and partonic masses, and trace how these effects appear in the operator product expansion and the factorized parton model formalism, as well as their limitations when applied to data in the x -> 1 limit. We evaluate the numerical effects of TMCs on various structure functions, and compare fits to data with and without these corrections.
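Target mass corrections are commonly organized in terms of the Nachtmann scaling variable ξ = 2x / (1 + √(1 + 4M²x²/Q²)), which reduces to Bjorken x as M²/Q² → 0. A small numeric sketch (standard notation; the kinematic values are illustrative):

```python
import math

def nachtmann_xi(x, q2, m2=0.880):
    """Nachtmann scaling variable; m2 is the target mass squared in
    GeV^2 (0.880 GeV^2 ~ the proton mass squared)."""
    r = math.sqrt(1.0 + 4.0 * m2 * x * x / q2)
    return 2.0 * x / (1.0 + r)

# At large x and moderate Q^2 the shift away from Bjorken x is sizable:
xi = nachtmann_xi(0.8, 4.0)        # noticeably below 0.8
xi_hi_q2 = nachtmann_xi(0.8, 1e6)  # -> 0.8 as Q^2 grows
```

The shrinking gap between ξ and x at large Q² illustrates why TMCs matter mainly in the large-x, low-to-moderate Q² region discussed in the review.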
Zeno: Critical Fluid Light Scattering Experiment
NASA Technical Reports Server (NTRS)
Gammon, Robert W.; Shaumeyer, J. N.; Briggs, Matthew E.; Boukari, Hacene; Gent, David A.; Wilkinson, R. Allen
1996-01-01
The Zeno (Critical Fluid Light Scattering) experiment is the culmination of a long history of critical fluid light scattering in liquid-vapor systems. The major limitation to making accurate measurements closer to the critical point was the density stratification which occurs in these extremely compressible fluids. Zeno was to determine the critical density fluctuation decay rates at a pair of supplementary angles in the temperature range 100 mK to 100 μK from T(sub c) in a sample of xenon accurately loaded to the critical density. This paper gives some highlights from operating the instrument on two flights: March 1994 on STS-62 and February 1996 on STS-75. More detail on the experiment's science requirements, personnel, apparatus, and results is displayed on the Web homepage at http://www.zeno.umd.edu.
SU-E-I-20: Dead Time Count Loss Compensation in SPECT/CT: Projection Versus Global Correction
Siman, W; Kappadath, S
2014-06-01
Purpose: To compare projection-based versus global correction for deadtime count loss in SPECT/CT images. Methods: SPECT/CT images of an IEC phantom (2.3 GBq 99mTc) with ∼10% deadtime loss, containing the 37 mm (uptake 3), 28 and 22 mm (uptake 6) spheres, were acquired using a 2-detector SPECT/CT system with 64 projections/detector and 15 s/projection. The deadtime Ti and the true count rate Ni at each projection i were calculated using the monitor-source method. Deadtime-corrected SPECT images were reconstructed twice: (1) from projections individually corrected for deadtime losses; and (2) from the original projections with losses, then correcting the reconstructed SPECT images by a scaling factor equal to the inverse of the average fractional loss over 5 projections/detector. In both cases, the SPECT images were reconstructed using OSEM with attenuation and scatter corrections. The two SPECT datasets were assessed by comparing line profiles in the xy-plane and along the z-axis, evaluating the count recoveries, and comparing ROI statistics. Higher deadtime losses (up to 50%) were also simulated in the individually corrected projections by multiplying each projection i by exp(-a*Ni*Ti), where a is a scalar. Additionally, deadtime corrections in phantoms with different geometries and deadtime losses were explored, with the same two correction methods applied to all of these datasets. Results: Averaging the deadtime losses over 5 projections/detector suffices to recover >99% of the lost counts in most clinical cases. The line profiles (xy-plane and z-axis) and the statistics in ROIs drawn in the SPECT images corrected with the two methods agreed within statistical noise. The count-loss recoveries of the two methods also agreed to within >99%. Conclusion: The projection-based and global corrections yield visually indistinguishable SPECT images. The global correction, based on sparse sampling of projection losses, allows for accurate SPECT deadtime correction.
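The two compensation schemes compared in this abstract can be sketched as follows; the count levels, loss fractions, and sparse-sampling stride are illustrative, not the abstract's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated true counts for 64 projections on one detector.
true_counts = rng.uniform(8e4, 1.2e5, size=64)
loss_fraction = 0.10 * true_counts / true_counts.max()  # up to 10% loss
measured = true_counts * (1.0 - loss_fraction)

# (1) Projection-based: correct each projection individually
# for its own measured fractional loss.
proj_corrected = measured / (1.0 - loss_fraction)

# (2) Global: one scale factor from the average fractional loss
# sampled at only 5 projections (sparse sampling, as in the abstract).
sampled_losses = loss_fraction[::13][:5]
global_corrected = measured / (1.0 - sampled_losses.mean())

recovery = global_corrected.sum() / true_counts.sum()
```

The projection-based route is exact when each projection's loss is known; the global route recovers total counts to within about a percent here because the sparse samples approximate the mean loss well, mirroring the >99% recovery reported.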
WEBSCAT: A web application for the analysis of electromagnetic scattering from small particles
NASA Astrophysics Data System (ADS)
Gogoi, Ankur; Rajkhowa, Pritom; P. Saikia, Gunjan; Ahmed, Gazi A.; Choudhury, Amarjyoti
2014-10-01
Development of an online web application to simulate and display plane wave scattering from small particles is presented. In particular, the computation of angular variation of the scattering properties (scattering matrix elements, scattering coefficients, single scattering albedo etc.) of particulate matter by using the Mie theory and the T-matrix method was incorporated in the application. Comparison of the results generated by using the web application with other reported benchmark results has shown that the web application is accurate and reliable for electromagnetic scattering computations.
Focusing of particles scattered by a surface
NASA Astrophysics Data System (ADS)
Babenko, P. Yu.; Zinov'ev, A. N.; Shergin, A. P.
2015-06-01
It has been shown by computer simulation that the coefficient of reflection of argon atoms scattered by crystalline aluminum and germanium targets at glancing angles of less than 4° is close to unity, and that the beam of scattered particles exhibits focusing (the angular distributions of particles are strongly compressed). Whereas beam focusing with respect to azimuth is well known and has already been studied, sharp focusing in the surface-normal direction at small glancing angles has not been studied so far. This effect is confirmed by the experimental results and is associated with multiple scattering of incident particles by the atomic chain. The simulation results allowed the amplitude of thermal vibrations of surface atoms to be determined quite accurately ((0.123 ± 0.007) Å for aluminum), in good agreement with experiment.
Thermal insulator transition induced by interface scattering
NASA Astrophysics Data System (ADS)
Slovick, Brian A.; Krishnamurthy, Srini
2016-10-01
We develop an effective medium model of thermal conductivity that accounts for both percolation and interface scattering. This model accurately explains the measured increase and decrease of thermal conductivity with loading in composites dominated by percolation and interface scattering, respectively. Our model further predicts that strong interface scattering leads to a sharp decrease in thermal conductivity, or an insulator transition, at high loadings when conduction through the matrix is restricted and heat is forced to diffuse through particles with large interface resistance. The accuracy of our model and its ability to predict transitions between insulating and conducting states suggest it can be a useful tool for designing materials with low or high thermal conductivity for a variety of applications.
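A Hasselman-Johnson-type effective-medium expression (a standard form for spherical inclusions with interfacial resistance, not necessarily the authors' exact model) reproduces the qualitative behavior described: with negligible interface resistance, conductive filler raises the effective conductivity, while strong interface resistance makes loading lower it.

```python
def k_eff(km, kp, f, alpha):
    """Hasselman-Johnson-type effective thermal conductivity for
    spherical particles (volume fraction f, conductivity kp) in a
    matrix km; alpha is a dimensionless interface-resistance
    parameter (alpha = 0 recovers the Maxwell-Garnett result)."""
    kappa = kp / km
    num = kappa * (1 + 2 * alpha) + 2 + 2 * f * (kappa * (1 - alpha) - 1)
    den = kappa * (1 + 2 * alpha) + 2 - f * (kappa * (1 - alpha) - 1)
    return km * num / den

# Conductive particles (kp/km = 10) at 30% loading:
k_low_r = k_eff(1.0, 10.0, 0.3, 0.0)   # no interface resistance
k_high_r = k_eff(1.0, 10.0, 0.3, 5.0)  # strong interface resistance
```

For large alpha the composite conducts worse than the pure matrix even though the particles themselves conduct better, which is the insulator-like regime the model in the abstract predicts at high loadings.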
Scattered light in the STIS echelle modes
NASA Technical Reports Server (NTRS)
Landsman, W.; Bowers, C.
1997-01-01
The Space Telescope Imaging Spectrograph (STIS) echelle spectra obtained during the Early Release Observations have non-zero residuals in the cores of saturated interstellar lines, indicating the need for a scattered light correction. As a rough measure of the magnitude of the needed correction, we show the ratio of the interorder to the in-order flux in different echelle modes, both in pre-launch calibration images of a continuum lamp source and in post-launch images of stellar continuum sources. The interorder and in-order fluxes are computed by averaging the central 200 pixels in the dispersion direction. The amount of scattered light in the interorder region rises toward shorter wavelengths for two reasons: (1) the order separation decreases toward shorter wavelengths; and (2) the amount of echelle scattering is expected to have an inverse dependence on wavelength. At the shortest wavelengths the fraction of light scattered into the interorder region can be 10% for the Near-ultraviolet Multi-Anode Microchannel Array (NUV-MAMA) and 15% for the Far-ultraviolet Multi-Anode Microchannel Array (FUV-MAMA).
NASA Astrophysics Data System (ADS)
Stover, John C.
1991-12-01
Optical scatter is a bothersome source of optical noise that limits resolution and reduces system throughput. However, it is also an extremely sensitive metrology tool. It is employed in a wide variety of applications in the optics industry (where direct scatter measurement is of concern) and is becoming a popular indirect measurement in other industries, where its measurement in some form is an indicator of another component property, such as roughness, contamination, or position. This paper presents a brief review of the current state of this technology as it emerges from university and government laboratories into more general industry use. The bidirectional scatter distribution function (BSDF) has become the common format for expressing scatter data and is now used almost universally. Measurements made at dozens of laboratories around the country cover the spectrum from the UV to the mid-IR. Data analysis of optical component scatter has progressed to the point where a variety of analysis tools are becoming available for discriminating between the various sources of scatter. Work has progressed on the analysis of rough surface scatter and the application of these techniques to some challenging problems outside the optical industry. Scatter metrology is acquiring standards and formal test procedures. The available scatter database is rapidly expanding as the number and sophistication of measurement facilities increase. Scatter from contaminants continues to be a major area of work as scatterometers appear in vacuum chambers at various laboratories across the country. Another area of research, driven by space applications, is understanding the non-topographic sources of mid-IR scatter associated with beryllium and other materials. The current flurry of work in this growing area of metrology can be expected to continue for several more years and to further expand to applications in other industries.
NASA Technical Reports Server (NTRS)
Lyapustin, Alexei; Martonchik, John; Wang, Yujie; Laszlo, Istvan; Korkin, Sergey
2011-01-01
This paper describes a radiative transfer basis of the algorithm MAIAC which performs simultaneous retrievals of atmospheric aerosol and bidirectional surface reflectance from the Moderate Resolution Imaging Spectroradiometer (MODIS). The retrievals are based on an accurate semianalytical solution for the top-of-atmosphere reflectance expressed as an explicit function of three parameters of the Ross-Thick Li-Sparse model of surface bidirectional reflectance. This solution depends on certain functions of atmospheric properties and geometry which are precomputed in the look-up table (LUT). This paper further considers correction of the LUT functions for variations of surface pressure/height and of atmospheric water vapor, which is a common task in operational remote sensing. It introduces a new analytical method for the water vapor correction of the multiple-scattering path radiance. It also summarizes the few basic principles that provide a high efficiency and accuracy of the LUT-based radiative transfer for the aerosol/surface retrievals and optimize the size of the LUT. For example, the single-scattering path radiance is calculated analytically for a given surface pressure and atmospheric water vapor. The same is true for the direct surface-reflected radiance, which along with the single-scattering path radiance largely defines the angular dependence of measurements. For these calculations, the aerosol phase functions and kernels of the surface bidirectional reflectance model are precalculated at a high angular resolution. The other radiative transfer functions depend rather smoothly on angles because of multiple scattering and can be calculated at coarser angular resolution to reduce the LUT size. At the same time, this resolution should be high enough to use the nearest-neighbor geometry angles to avoid costly three-dimensional interpolation. The pressure correction is implemented via linear interpolation between two LUTs computed for the standard and reduced pressures.
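The pressure correction by linear interpolation between the two precomputed LUTs can be sketched as follows; the array contents and pressure values are illustrative placeholders, not MAIAC's actual tables.

```python
import numpy as np

def pressure_interp(lut_std, lut_red, p_std, p_red, p):
    """Linearly interpolate a LUT function between values precomputed
    at standard (p_std) and reduced (p_red) surface pressure."""
    w = (p_std - p) / (p_std - p_red)
    return (1.0 - w) * lut_std + w * lut_red

# Illustrative scalar LUT entries tabulated at 1013 hPa and 700 hPa.
lut_std = np.array([0.20, 0.15])
lut_red = np.array([0.16, 0.12])
val = pressure_interp(lut_std, lut_red, 1013.0, 700.0, 856.5)
```

Storing only two pressure levels and interpolating keeps the LUT compact, consistent with the paper's emphasis on optimizing LUT size.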
Clarifying types of uncertainty: when are models accurate, and uncertainties small?
Cox, Louis Anthony (Tony)
2011-10-01
Professor Aven has recently noted the importance of clarifying the meaning of terms such as "scientific uncertainty" for use in risk management and policy decisions, such as when to trigger application of the precautionary principle. This comment examines some fundamental conceptual challenges for efforts to define "accurate" models and "small" input uncertainties by showing that increasing uncertainty in model inputs may reduce uncertainty in model outputs; that even correct models with "small" input uncertainties need not yield accurate or useful predictions for quantities of interest in risk management (such as the duration of an epidemic); and that accurate predictive models need not be accurate causal models.
Letter from Jeff Rush requesting rescinding and correction online and printed information regarding alleged greenhouse gas emissions reductions resulting from beneficial use of coal combustion waste products.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-10
... Commodities and Services From Any Agency of the United States Government to the Syrian Opposition Coalition (SOC) and the Syrian Opposition's Supreme Military Council (SMC) Correction In Presidential...
Acquisition of accurate data from intramolecular quenched fluorescence protease assays.
Arachea, Buenafe T; Wiener, Michael C
2017-04-01
The Intramolecular Quenched Fluorescence (IQF) protease assay utilizes peptide substrates containing donor-quencher pairs that flank the scissile bond. Following protease cleavage, the dequenched donor emission of the product is subsequently measured. Inspection of the IQF literature indicates that rigorous treatment of systematic errors in observed fluorescence arising from inner-filter absorbance (IF) and non-specific intermolecular quenching (NSQ) is incompletely performed. As substrate and product concentrations vary during the time-course of enzyme activity, iterative solution of the kinetic rate equations is, generally, required to obtain the proper time-dependent correction to the initial velocity fluorescence data. Here, we demonstrate that, if the IQF assay is performed under conditions where IF and NSQ are approximately constant during the measurement of initial velocity for a given initial substrate concentration, then a simple correction as a function of initial substrate concentration can be derived and utilized to obtain accurate initial velocity data for analysis.
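One widely used first-order inner-filter correction (not necessarily the exact form derived by the authors) scales the observed fluorescence by 10^((A_ex + A_em)/2); combined with Beer-Lambert absorbance, it becomes a function of initial substrate concentration, as the abstract describes. All numbers below are illustrative.

```python
def inner_filter_correction(f_obs, a_ex, a_em):
    """Common first-order inner-filter correction: scale observed
    fluorescence by 10**((A_ex + A_em)/2), where A_ex and A_em are
    absorbances at the excitation and emission wavelengths."""
    return f_obs * 10 ** ((a_ex + a_em) / 2.0)

def absorbance(eps, conc, path=1.0):
    """Beer-Lambert absorbance A = eps * c * l."""
    return eps * conc * path

# Illustrative: [S]0 = 20 uM, extinction coefficients in 1/(uM*cm).
a_ex = absorbance(0.010, 20.0)  # A_ex = 0.20
a_em = absorbance(0.005, 20.0)  # A_em = 0.10
v_corr = inner_filter_correction(1000.0, a_ex, a_em)
```

Because A_ex and A_em scale with [S]0, this correction is constant for a given initial substrate concentration during the initial-velocity window, which is precisely the operating condition the abstract argues for.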
Turbulence Models for Accurate Aerothermal Prediction in Hypersonic Flows
NASA Astrophysics Data System (ADS)
Zhang, Xiang-Hong; Wu, Yi-Zao; Wang, Jiang-Feng
Accurate description of the aerodynamic and aerothermal environment is crucial to the integrated design and optimization of high-performance hypersonic vehicles. In the simulation of the aerothermal environment, the effect of viscosity is crucial, and turbulence modeling remains a major source of uncertainty in the computational prediction of aerodynamic forces and heating. In this paper, three turbulence models were studied: the one-equation eddy viscosity transport model of Spalart-Allmaras, the Wilcox k-ω model, and the Menter SST model. For the k-ω and SST models, the compressibility correction, pressure dilatation, and low-Reynolds-number correction were considered. The influence of these corrections on the flow properties is discussed by comparison with results obtained without corrections. The emphasis is on the assessment and evaluation of the turbulence models in predicting heat transfer across a range of hypersonic flows, with comparison to experimental data. This will enable establishing factors of safety for the design of thermal protection systems of hypersonic vehicles.
Accurate pressure gradient calculations in hydrostatic atmospheric models
NASA Technical Reports Server (NTRS)
Carroll, John J.; Mendez-Nunez, Luis R.; Tanrikulu, Saffet
1987-01-01
A method for the accurate calculation of the horizontal pressure gradient acceleration in hydrostatic atmospheric models is presented which is especially useful in situations where the isothermal surfaces are not parallel to the vertical coordinate surfaces. The present method is shown to be exact if the potential temperature lapse rate is constant between the vertical pressure integration limits. The technique is applied to both the integration of the hydrostatic equation and the computation of the slope correction term in the horizontal pressure gradient. A fixed vertical grid and a dynamic grid defined by the significant levels in the vertical temperature distribution are employed.
Purely bianisotropic scatterers
NASA Astrophysics Data System (ADS)
Albooyeh, M.; Asadchy, V. S.; Alaee, R.; Hashemi, S. M.; Yazdi, M.; Mirmoosa, M. S.; Rockstuhl, C.; Simovski, C. R.; Tretyakov, S. A.
2016-12-01
The polarization response of molecules or meta-atoms to external electric and magnetic fields, which defines the electromagnetic properties of materials, can either be direct (electric field induces electric moment and magnetic field induces magnetic moment) or indirect (magnetoelectric coupling in bianisotropic scatterers). Earlier studies suggest that there is a fundamental bound on the indirect response of all passive scatterers: It is believed to be always weaker than the direct one. In this paper, we prove that there exist scatterers which overcome this bound substantially. Moreover, we show that the amplitudes of electric and magnetic polarizabilities can be negligibly small as compared to the magnetoelectric coupling coefficients. However, we prove that if at least one of the direct-excitation coefficients vanishes, magnetoelectric coupling effects in passive scatterers cannot exist. Our findings open a way to a new class of electromagnetic scatterers and composite materials.
Inelastic Light Scattering Processes
NASA Technical Reports Server (NTRS)
Fouche, Daniel G.; Chang, Richard K.
1973-01-01
Five different inelastic light scattering processes will be discussed: ordinary Raman scattering (ORS), resonance Raman scattering (RRS), off-resonance fluorescence (ORF), resonance fluorescence (RF), and broad fluorescence (BF). A distinction between fluorescence (including ORF and RF) and Raman scattering (including ORS and RRS) will be made in terms of the number of intermediate molecular states that contribute significantly to the scattered amplitude, and not in terms of excited-state lifetimes or virtual versus real processes. The theory of these processes will be reviewed, including the effects of pressure, laser wavelength, and laser spectral distribution on the scattered intensity. The application of these processes to the remote sensing of atmospheric pollutants will be discussed briefly. It will be pointed out that the poor sensitivity of the ORS technique cannot be increased by going toward resonance without also compromising the advantages it has over the RF technique. Experimental results on inelastic light scattering from I₂ vapor will be presented. As a single-longitudinal-mode 5145 Å argon-ion laser line was tuned away from an I₂ absorption line, the scattering was observed to change from RF to ORF; the basis of the distinction is the different pressure dependence of the scattered intensity. Nearly three orders of magnitude enhancement of the scattered intensity was measured in going from ORF to RF. Forty-seven overtones were observed and their relative intensities measured. The ORF cross section of I₂ compared to the ORS cross section of N₂ was found to be 3 × 10⁶, with I₂ at its room-temperature vapor pressure.
Characterization of Platelet Concentrates Using Dynamic Light Scattering
Labrie, Audrey; Marshall, Andrea; Bedi, Harjot; Maurer-Spurej, Elisabeth
2013-01-01
Summary Background Each year, millions of platelet transfusions save the lives of cancer patients and patients with bleeding complications. However, between 10 and 30% of all platelet transfusions are clinically ineffective as measured by corrected count increments, but no test is currently used to identify and avoid these transfusions. ThromboLUX® is the first platelet test intended to routinely characterize platelet concentrates prior to transfusion. Methods ThromboLUX is a non-invasive, optical test utilizing dynamic light scattering to characterize a platelet sample by the relative quantity of platelets, microparticles, and other particles present in the sample. ThromboLUX also determines the response of platelets to temperature changes. From this information the ThromboLUX score is calculated. Increasing scores indicate increasing numbers of discoid platelets and fewer microparticles. ThromboLUX uses calibrated polystyrene beads as a quality control standard, and accurately measures the size of the beads at multiple temperatures. Results Results from apheresis concentrates showed that ThromboLUX can determine the microparticle content in unmodified samples of platelet concentrates which correlates well with the enumeration by flow cytometry. ThromboLUX detection of microparticles and microaggregates was confirmed by microscopy. Conclusion ThromboLUX provides a comprehensive and novel analysis of platelet samples and has potential as a noninvasive routine test to characterize platelet products to identify and prevent ineffective transfusions. PMID:23652319
Filtered Rayleigh Scattering Measurements in a Buoyant Flow Field
2008-03-01
John William Strutt, the third Baron Rayleigh, more commonly known as Lord Rayleigh, was the first to offer a correct explanation of the... [Thesis by Steven Michael Meents, Captain, USAF; report no. AFIT/GAE/ENY/08-M22; presented to the Faculty, Department of Aeronautics, Air Force Institute of Technology.]
Relativistic corrections in K-shell ionization cross sections
Sheth, C.V.
1984-03-01
Relativistic effects on a modified version of Rutherford's scattering cross section are considered up to first-order in the Born approximation for relativistic velocities in the binary-encounter approximation (BEA). The predicted cross sections with protons as projectile are lower than the previous theoretical values at low energies and are seen to be in better agreement with measurements. An approximate relativistic correction factor which accounts for orbital electrons only is compared with exact Dirac corrections, within the BEA model.
Simple model to simulate OCT-depth signal in weakly and strongly scattering homogeneous media
NASA Astrophysics Data System (ADS)
Varkentin, Arthur; Otte, Maya; Meinhardt-Wollweber, Merve; Rahlves, Maik; Mazurenka, Mikhail; Morgner, Uwe; Roth, Bernhard
2016-12-01
We present a simple and efficient Monte Carlo model to predict the scattering coefficients and the influence of multiple photon scattering with increasing concentration of scattering centers from optical coherence tomography (OCT) data. While the model reliably estimates optical sample parameters for a broad range of concentrations, it does not require inclusion of more complex phenomena such as dependent scattering. Instead, it relies on a particular weighting function which is introduced to describe various orders of multiple scattering events. In weakly scattering homogeneous media the measured scattering coefficient μs depends linearly on the concentration of scattering centers. In the case of strong scattering, the dependence becomes nonlinear. Our model is able to accurately predict this nonlinearity and can be applied to extend the OCT studies of biological tissue towards determination of optical properties in the future.
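The baseline against which any multiple-scattering weighting is judged is the single-scattering limit, in which the OCT depth signal decays exponentially and the attenuation coefficient can be read off the log-slope of the averaged A-scan. A minimal sketch of that limit (standard OCT textbook behavior, not the paper's Monte Carlo model; all parameter values below are illustrative):

```python
import numpy as np

# Single-scattering OCT depth profile for a homogeneous medium:
# I(z) ~ mu_s * exp(-2 * mu_t * z), with the factor 2 from the round trip.

def oct_depth_signal(z, mu_s, mu_a=0.0, i0=1.0):
    """Ideal single-scattering OCT A-scan for a homogeneous sample."""
    mu_t = mu_s + mu_a                      # total attenuation coefficient
    return i0 * mu_s * np.exp(-2.0 * mu_t * z)

def estimate_mu_t(z, signal):
    """Recover mu_t from the slope of log(signal) versus depth."""
    slope, _ = np.polyfit(z, np.log(signal), 1)
    return -slope / 2.0

z = np.linspace(0.0, 1.0, 200)              # depth in mm (illustrative)
true_mu_s = 3.0                             # mm^-1, weakly scattering regime
sig = oct_depth_signal(z, true_mu_s)
print(estimate_mu_t(z, sig))                # recovers 3.0 when mu_a = 0
```

In the strongly scattering regime the paper addresses, this simple fit breaks down, which is precisely why a weighting over multiple-scattering orders is needed.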
Green's function multiple-scattering theory with a truncated basis set: An augmented-KKR formalism
NASA Astrophysics Data System (ADS)
Alam, Aftab; Khan, Suffian N.; Smirnov, A. V.; Nicholson, D. M.; Johnson, Duane D.
2014-11-01
The Korringa-Kohn-Rostoker (KKR) Green's function, multiple-scattering theory is an efficient site-centered, electronic-structure technique for addressing an assembly of N scatterers. Wave functions are expanded in a spherical-wave basis on each scattering center and indexed up to a maximum orbital and azimuthal number L_max = (l,m)_max, while scattering matrices, which determine spectral properties, are truncated at L_tr = (l,m)_tr where phase shifts δ_l for l > l_tr are negligible. Historically, L_max is set equal to L_tr, which is correct for large enough L_max but not computationally expedient; a better procedure retains higher-order (free-electron and single-site) contributions for L_max > L_tr with δ_l for l > l_tr set to zero [X.-G. Zhang and W. H. Butler, Phys. Rev. B 46, 7433 (1992), 10.1103/PhysRevB.46.7433]. We present a numerically efficient and accurate augmented-KKR Green's function formalism that solves the KKR equations by exact matrix inversion [an R^3 process with rank N(l_tr+1)^2] and includes higher-L contributions via linear algebra [an R^2 process with rank N(l_max+1)^2]. The augmented-KKR approach yields properly normalized wave functions, numerically cheaper basis-set convergence, and a total charge density and electron count that agree with Lloyd's formula. We apply our formalism to fcc Cu, bcc Fe, and L1_0 CoPt and present numerical results for accuracy and for the convergence of the total energies, Fermi energies, and magnetic moments versus L_max for a given L_tr.
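The payoff of the augmented scheme can be made concrete with a back-of-the-envelope flop count: exact inversion scales cubically in the matrix rank, while the higher-L contributions only require quadratic-cost linear algebra. A rough sketch (not the authors' code; N, l_max, and l_tr below are arbitrary illustrative values):

```python
# Rank of the KKR matrix for N sites truncated at orbital number l is
# N*(l+1)^2; compare cubic inversion at l_max against the augmented
# scheme: cubic inversion at l_tr plus a quadratic update at l_max.

def kkr_rank(n_sites, l):
    """Matrix rank N*(l+1)^2 for a basis truncated at orbital number l."""
    return n_sites * (l + 1) ** 2

def flop_estimates(n_sites, l_max, l_tr):
    """Rough flop counts for naive versus augmented-KKR solves."""
    r_tr = kkr_rank(n_sites, l_tr)
    r_max = kkr_rank(n_sites, l_max)
    naive = r_max ** 3                      # invert everything at l_max
    augmented = r_tr ** 3 + r_max ** 2      # invert at l_tr, add higher L
    return naive, augmented

naive, augmented = flop_estimates(n_sites=100, l_max=6, l_tr=3)
print(naive / augmented)                    # order-of-magnitude speedup
```

The estimate ignores prefactors and energy-point loops, but it captures why keeping L_max > L_tr is cheap while truncating the inversion itself is the expensive part.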
Ghrayeb, S. Z.; Ouisloumen, M.; Ougouag, A. M.; Ivanov, K. N.
2012-07-01
A multi-group formulation for the exact neutron elastic scattering kernel is developed. This formulation is intended for implementation into a lattice physics code. Correct accounting for crystal lattice effects influences the estimated values for the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. A computer program has been written to test the formulation for various nuclides. Results of the multi-group code have been verified against the correct analytic scattering kernel. In both cases neutrons were started at various energies and temperatures and the corresponding scattering kernels were tallied.
Theory of Compton Scattering by Anisotropic Electrons
Poutanen, Juri; Vurm, Indrek E-mail: indrek.vurm@oulu.f
2010-08-15
Compton scattering plays an important role in various astrophysical objects such as accreting black holes and neutron stars, pulsars, relativistic jets, and clusters of galaxies, as well as the early universe. In most of the calculations, it is assumed that the electrons have isotropic angular distribution in some frame. However, there are situations where the anisotropy may be significant due to the bulk motions, or where there is anisotropic cooling by synchrotron radiation or an anisotropic source of seed soft photons. Here we develop an analytical theory of Compton scattering by anisotropic distribution of electrons that can significantly simplify the calculations. Assuming that the electron angular distribution can be represented by a second-order polynomial over the cosine of some angle (dipole and quadrupole anisotropies), we integrate the exact Klein-Nishina cross section over the angles. Exact analytical and approximate formulae valid for any photon and electron energies are derived for the redistribution functions describing Compton scattering of photons with arbitrary angular distribution by anisotropic electrons. The analytical expressions for the corresponding photon scattering cross section on such electrons, as well as the mean energy of scattered photons, its dispersion, and radiation pressure force are also derived. We apply the developed formalism to the accurate calculations of the thermal and kinematic Sunyaev-Zeldovich effects for arbitrary electron distributions.
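Any anisotropic redistribution function must reduce, in the cold isotropic limit, to the standard Klein-Nishina result for electrons at rest, which makes that result a useful sanity check. A minimal sketch evaluating the textbook closed-form total cross section (not the paper's anisotropic formalism):

```python
import math

# Total Klein-Nishina cross section for a photon of dimensionless energy
# x = E_photon / (m_e c^2) scattering off an electron at rest.

SIGMA_T = 6.6524587e-29   # Thomson cross section, m^2

def sigma_klein_nishina(x):
    """Total KN cross section; reduces to sigma_T as x -> 0."""
    if x < 1e-4:
        # Low-energy expansion avoids catastrophic cancellation:
        # sigma ~ sigma_T * (1 - 2x) + O(x^2)
        return SIGMA_T * (1.0 - 2.0 * x)
    t = math.log(1.0 + 2.0 * x)
    term1 = (1.0 + x) / x**3 * (2.0 * x * (1.0 + x) / (1.0 + 2.0 * x) - t)
    term2 = t / (2.0 * x)
    term3 = -(1.0 + 3.0 * x) / (1.0 + 2.0 * x) ** 2
    return SIGMA_T * 0.75 * (term1 + term2 + term3)

print(sigma_klein_nishina(1e-5) / SIGMA_T)   # ~1: Thomson limit
print(sigma_klein_nishina(1.0) / SIGMA_T)    # relativistic suppression
```

The paper's contribution lies in carrying the angular integrations through for dipole and quadrupole electron anisotropies, which this isotropic check does not capture.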
Gordon, H R; Zhang, T; He, F; Ding, K
1997-01-20
Using simulations, we determine the influence of stratospheric aerosol and thin cirrus clouds on the performance of the proposed atmospheric correction algorithm for the moderate resolution imaging spectroradiometer (MODIS) data over the oceans. Further, we investigate the possibility of using the radiance exiting the top of the atmosphere in the 1.38-microm water vapor absorption band to remove their effects prior to application of the algorithm. The computations suggest that for moderate optical thicknesses in the stratosphere, i.e., tau(s) < or approximately 0.15, the stratospheric aerosol-cirrus cloud contamination does not seriously degrade the performance of the MODIS algorithm except for the combination of large (approximately 60 degrees) solar zenith angles and large (approximately 45 degrees) viewing angles, for which multiple-scattering effects can be expected to be particularly severe. The performance of a hierarchy of procedures that employ the 1.38-microm water vapor absorption band to correct for stratospheric aerosols and cirrus clouds, ranging from simply subtracting the reflectance at 1.38 microm from that in the visible bands, to assuming that their optical properties are known and carrying out multiple-scattering computations of their effect by use of the 1.38-microm reflectance-derived concentration, is studied for stratospheric aerosol optical thicknesses at 865 nm as large as 0.15 and for cirrus cloud optical thicknesses at 865 nm as large as 1.0. Typically, the procedures requiring the most knowledge of the aerosol optical properties (and also the most complex) performed best; however, for tau(s) < or approximately 0.15, their performance is usually not significantly better than that found by applying the simplest correction procedure. A semiempirical algorithm is presented that permits accurate correction for thin cirrus clouds with optical thickness as large as unity when an accurate estimate of the cirrus cloud
High-precision positioning of radar scatterers
NASA Astrophysics Data System (ADS)
Dheenathayalan, Prabu; Small, David; Schubert, Adrian; Hanssen, Ramon F.
2016-05-01
Remote sensing radar satellites cover wide areas and provide spatially dense measurements, with millions of scatterers. Knowledge of the precise position of each radar scatterer is essential to identify the corresponding object and interpret the estimated deformation. The absolute position accuracy of synthetic aperture radar (SAR) scatterers in a 2D radar coordinate system, after compensating for atmosphere and tidal effects, is in the order of centimeters for TerraSAR-X (TSX) spotlight images. However, the absolute positioning in 3D and its quality description are not well known. Here, we exploit time-series interferometric SAR to enhance the positioning capability in three dimensions. The 3D positioning precision is parameterized by a variance-covariance matrix and visualized as an error ellipsoid centered at the estimated position. The intersection of the error ellipsoid with objects in the field is exploited to link radar scatterers to real-world objects. We demonstrate the estimation of scatterer position and its quality using 20 months of TSX stripmap acquisitions over Delft, the Netherlands. Using trihedral corner reflectors (CR) for validation, the accuracy of absolute positioning in 2D is about 7 cm. In 3D, an absolute accuracy of up to ˜ 66 cm is realized, with a cigar-shaped error ellipsoid having centimeter precision in azimuth and range dimensions, and elongated in cross-range dimension with a precision in the order of meters (the ratio of the ellipsoid axis lengths is 1/3/213, respectively). The CR absolute 3D position, along with the associated error ellipsoid, is found to be accurate and agree with the ground truth position at a 99 % confidence level. For other non-CR coherent scatterers, the error ellipsoid concept is validated using 3D building models. In both cases, the error ellipsoid not only serves as a quality descriptor, but can also help to associate radar scatterers to real-world objects.
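The error-ellipsoid construction used above follows directly from the variance-covariance matrix: eigenvectors give the axis directions and the square roots of the scaled eigenvalues give the semi-axis lengths. A sketch of that step (not the authors' processing chain; the covariance values below are illustrative of the cigar shape described, cm-level in azimuth and range, m-level in cross-range):

```python
import numpy as np

# 99% quantile of a chi-square distribution with 3 degrees of freedom,
# used to scale a 3-D covariance into a 99% confidence ellipsoid.
CHI2_99_3DOF = 11.345

def error_ellipsoid(cov):
    """Return semi-axis lengths (descending) and axis direction vectors."""
    eigval, eigvec = np.linalg.eigh(cov)        # ascending eigenvalues
    semi_axes = np.sqrt(eigval * CHI2_99_3DOF)
    order = np.argsort(semi_axes)[::-1]
    return semi_axes[order], eigvec[:, order]

# Illustrative covariance (m^2): azimuth, range, cross-range variances.
cov = np.diag([0.01**2, 0.03**2, 2.0**2])
axes, dirs = error_ellipsoid(cov)
print(axes)                 # strongly elongated along cross-range
print(axes[0] / axes[-1])   # axis-length ratio, order 10^2
```

Intersecting this ellipsoid with a 3D building model is then a purely geometric test, which is how scatterers are linked to real-world objects.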
NASA Technical Reports Server (NTRS)
Hong, Byungsik; Buck, Warren W.; Maung, Khin M.
1989-01-01
Two kinds of number density distributions of the nucleus, harmonic well and Woods-Saxon models, are used with the t-matrix that is taken from the scattering experiments to find a simple optical potential. The parameterized two body inputs, which are kaon-nucleon total cross sections, elastic slope parameters, and the ratio of the real to imaginary part of the forward elastic scattering amplitude, are shown. The eikonal approximation was chosen as the solution method to estimate the total and absorptive cross sections for the kaon-nucleus scattering.
Asymptotic behavior of the efficiencies in Mie scattering
NASA Technical Reports Server (NTRS)
Acquista, C.; Cooney, J. A.; Wimp, J.; Cohen, A.
1980-01-01
Consideration is given to the asymptotic behavior of the Mie scattering and extinction efficiencies for large absorbing spheres as sphere size approaches infinity. It is shown that the method used by Chylek (1975) for evaluating the infinite sums over the Mie partial wave coefficients representing these efficiencies and proving that the extinction efficiency approaches 2 is invalid, despite the correctness of the result, and that the limiting expression for the scattering efficiency obtained by this method is also incorrect. An analytical expression is then derived from geometrical optics considerations for the scattering efficiency limit which is valid when the imaginary component of the refractive index is much less than 1.
No surprise in the first Born approximation for electron scattering.
Lentzen, M
2014-01-01
In a recent article it is argued that the far-field expansion of electron scattering, a pillar of electron diffraction theory, is wrong (Treacy and Van Dyck, 2012). It is further argued that in the first Born approximation of electron scattering the intensity of the electron wave is not conserved to first order in the scattering potential. Thus a "mystery of the missing phase" is investigated, and the supposed flaw in scattering theory is sought to be resolved by postulating a standing spherical electron wave (Treacy and Van Dyck, 2012). In this work we show, however, that these theses are wrong. A review of the essential parts of scattering theory with careful checks of the underlying assumptions and limitations for high-energy electron scattering yields: (1) the traditional form of the far-field expansion, comprising a propagating spherical wave, is correct; (2) there is no room for a missing phase; (3) in the first Born approximation the intensity of the scattered wave is conserved to first order in the scattering potential. The various features of high-energy electron scattering are illustrated by wave-mechanical calculations for an explicit target model, a Gaussian phase object, and for a Si atom, considering the geometric conditions in high-resolution transmission electron microscopy.
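The intensity-conservation statement in point (3) can be made explicit with a one-line expansion for a weak phase object (a standard argument, not quoted from the paper):

```latex
\psi = e^{i\sigma V} \approx 1 + i\sigma V
\quad\Longrightarrow\quad
|\psi|^2 = |1 + i\sigma V|^2 = 1 + \sigma^2 V^2 = 1 + \mathcal{O}(V^2)
```

The first-order (single-scattering) term shifts only the phase of the wave; changes in intensity first appear at second order in the potential V, so nothing is "missing" at first order.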
Sawicki, Richard H.
1994-01-01
An improved laser correction mirror (10) for correcting aberrations in a laser beam wavefront having a rectangular mirror body (12) with a plurality of legs (14, 16, 18, 20, 22, 24, 26, 28) arranged into opposing pairs (34, 36, 38, 40) along the long sides (30, 32) of the mirror body (12). Vector force pairs (49, 50, 52, 54) are applied by adjustment mechanisms (42, 44, 46, 48) between members of the opposing pairs (34, 36, 38, 40) for bending a reflective surface (13) of the mirror body (12) into a shape defining a function which can be used to correct for comatic aberrations.
Hybrid Theory of Electron-Hydrogenic Systems Elastic Scattering
NASA Technical Reports Server (NTRS)
Bhatia, A. K.
2007-01-01
Accurate electron-hydrogen and electron-hydrogenic cross sections are required to interpret fusion experiments, laboratory plasma physics and properties of the solar and astrophysical plasmas. We have developed a method in which the short-range and long-range correlations can be included at the same time in the scattering equations. The phase shifts have rigorous lower bounds and the scattering lengths have rigorous upper bounds. The phase shifts in the resonance region can be used to calculate very accurately the resonance parameters.
Accurate, in vivo NIR measurement of skeletal muscle oxygenation through fat
NASA Astrophysics Data System (ADS)
Jin, Chunguang; Zou, Fengmei; Ellerby, Gwenn E. C.; Scott, Peter; Peshlov, Boyan; Soller, Babs R.
2010-02-01
Noninvasive near infrared (NIR) spectroscopic measurement of muscle oxygenation requires the penetration of light through overlying skin and fat layers. We have previously demonstrated a dual-light source design and orthogonalization algorithm that corrects for interference from skin absorption and fat scattering. To achieve accurate muscle oxygen saturation (SmO2) measurement, one must select the appropriate source-detector distance (SD) to completely penetrate the fat layer. Methods: Six healthy subjects were supine for 15 min to normalize tissue oxygenation across the body. NIR spectra were collected from the calf, shoulder, lower and upper thigh muscles with long SD distances of 30 mm, 35 mm, 40 mm and 45 mm. Spectral preprocessing with the short SD (3 mm) spectrum preceded SmO2 calculation with a Taylor series expansion method. Three-way ANOVA was used to compare SmO2 values over varying fat thickness, subjects and SD distances. Results: Overlying fat layers varied in thickness from 4.9 mm to 19.6 mm across all subjects. SmO2 measured at the four locations were comparable for each subject (p=0.133), regardless of fat thickness and SD distance. SmO2 (mean +/- std dev) measured at calf, shoulder, lower and upper thigh were 62+/-3%, 59+/-8%, 61+/-2%, 61+/-4% respectively for an SD distance of 30 mm. In these subjects no significant influence of SD was observed (p=0.948). Conclusions: The results indicate that for our sensor design a 30 mm SD is sufficient to penetrate through a 19 mm fat layer and that orthogonalization with the short SD effectively removed spectral interference from fat to result in a reproducible determination of SmO2.
Workshop report on new directions in x-ray scattering. [X-ray scattering
Brown, G.; Del Grande, N.K.; Fuoss, P.; Mallett, J.H.; Pratt, R.; Templeton, D.
1987-02-01
This report is a summary of the Workshop on New Directions in X-Ray Scattering held at the Asilomar Conference Center, Pacific Grove, California, April 2-5, 1985. The report primarily consists of the edited transcript of the final review session of the workshop, in which members of a panel summarized the proceedings. It is clear that we are close to achieving an accurate theory of scattering in independent particle approximation, but for edge regions, there is need to go beyond this approach. Much of what is experimentally interesting in scattering is occurring between the photoabsorption edge and the photoelectric threshold. Applications in condensed matter and biological and chemical material studies are expanding, exploiting higher intensity sources and faster time resolution as in magnetic scattering and surface studies. Storage rings are now conventional sources, and new high-intensity beam lines are under development; the free electron laser is one of the more speculative sources. Recent work in x-ray scattering has led to advances in x-ray optics, and conversely, advances in x-ray optics have benefitted our understanding of x-ray scattering.
Energy dependence of scatter components in multispectral PET imaging.
Bentourkia, M; Msaki, P; Cadorette, J; Lecomte, R
1995-01-01
High resolution images in PET based on small individual detectors are obtained at the cost of low sensitivity and increased detector scatter. These limitations can be partially overcome by enlarging discrimination windows to include more low-energy events and by developing more efficient energy-dependent methods to correct for scatter radiation from all sources. The feasibility of multispectral scatter correction was assessed by decomposing response functions acquired in multiple energy windows into four basic components: object, collimator and detector scatter, and trues. The shape and intensity of these components are different and energy-dependent. They are shown to contribute to image formation in three ways: useful (true), potentially useful (detector scatter), and undesirable (object and collimator scatter) information to the image over the entire energy range. With the Sherbrooke animal PET system, restoration of detector scatter in every energy window would allow nearly 90% of all detected events to participate in image formation. These observations suggest that multispectral acquisition is a promising solution for increasing sensitivity in high resolution PET. This can be achieved without loss of image quality if energy-dependent methods are made available to preserve useful events as potentially useful events are restored and undesirable events removed.
A full-chip DSA correction framework
NASA Astrophysics Data System (ADS)
Wang, Wei-Long; Latypov, Azat; Zou, Yi; Coskun, Tamer
2014-03-01
The graphoepitaxy DSA process relies on lithographically created confinement wells to perform directed self-assembly in the thin film of the block copolymer. These self-assembled patterns are then etch-transferred into the substrate. Conventional DUV immersion or EUV lithography is still required to print these confinement wells, and the lithographic patterning residual errors propagate to the final patterns created by the DSA process. DSA proximity correction (PC), in addition to OPC, is essential to obtain accurate confinement well shapes that resolve the final DSA patterns precisely. In this study, we propose a novel correction flow that integrates our co-optimization algorithms, a rigorous 2-D DSA simulation engine, and an OPC tool. This flow enables us to optimize our process and integration and provides guidance for design optimization. We also show that novel RET techniques such as DSA-aware assist feature generation can be used to improve the process window. The feasibility of our DSA correction framework on a large layout with promising correction accuracy has been demonstrated, and a robust and efficient correction algorithm is established through rigorous verification studies. We also explore how knowledge of DSA natural pitches and lithography printing constraints provides good guidance for establishing DSA-friendly designs. Finally, application of our DSA full-chip computational correction framework to several real designs of contact-like holes is discussed, and we summarize the challenges associated with computational DSA technology.
NASA Astrophysics Data System (ADS)
Dang, H.; Stayman, J. W.; Sisniega, A.; Xu, J.; Zbijewski, W.; Yorkston, J.; Aygun, N.; Koliatsos, V.; Siewerdsen, J. H.
2015-03-01
Traumatic brain injury (TBI) is a major cause of death and disability. The current front-line imaging modality for TBI detection is CT, which reliably detects intracranial hemorrhage (fresh blood contrast 30-50 HU, size down to 1 mm) in non-contrast-enhanced exams. Compared to CT, flat-panel detector (FPD) cone-beam CT (CBCT) systems offer lower cost, greater portability, and smaller footprint suitable for point-of-care deployment. We are developing FPD-CBCT to facilitate TBI detection at the point-of-care such as in emergent, ambulance, sports, and military applications. However, current FPD-CBCT systems generally face challenges in low-contrast, soft-tissue imaging. Model-based reconstruction can improve image quality in soft-tissue imaging compared to conventional filtered back-projection (FBP) by leveraging a high-fidelity forward model and sophisticated regularization. In FPD-CBCT TBI imaging, measurement noise characteristics undergo substantial change following artifact correction, resulting in non-negligible noise amplification. In this work, we extend the penalized weighted least-squares (PWLS) image reconstruction to include the two dominant artifact corrections (scatter and beam hardening) in FPD-CBCT TBI imaging by correctly modeling the variance change following each correction. Experiments were performed on a CBCT test-bench using an anthropomorphic phantom emulating intra-parenchymal hemorrhage in acute TBI, and the proposed method demonstrated an improvement in blood-brain contrast-to-noise ratio (CNR = 14.2) compared to FBP (CNR = 9.6) and PWLS using conventional weights (CNR = 11.6) at fixed spatial resolution (1 mm edge-spread width at the target contrast). The results support the hypothesis that FPD-CBCT can fulfill the image quality requirements for reliable TBI detection, using high-fidelity artifact correction and statistical reconstruction with accurate post-artifact-correction noise models.
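The core of PWLS is a weighted data-fit term whose weights encode per-measurement inverse variances, so updating the weights after each artifact correction is what the paper's variance modeling amounts to. A toy sketch on a small linear problem (not the authors' reconstruction; the operator, penalty, and weights below are illustrative), showing that down-weighting a high-variance measurement changes the solution:

```python
import numpy as np

# Minimize (y - A x)^T W (y - A x) + beta * ||D x||^2 in closed form,
# with W a diagonal matrix of per-measurement inverse variances.

rng = np.random.default_rng(0)

def pwls_solve(A, y, w, beta, D):
    """Closed-form PWLS solution for a small linear problem."""
    W = np.diag(w)
    H = A.T @ W @ A + beta * (D.T @ D)      # regularized normal matrix
    return np.linalg.solve(H, A.T @ W @ y)

A = rng.standard_normal((20, 5))            # toy forward operator
x_true = np.arange(1.0, 6.0)
y = A @ x_true
y[0] += 5.0                                 # one corrupted measurement
D = np.eye(5)                               # simple identity penalty

w_uniform = np.ones(20)                     # conventional weights
w_informed = np.ones(20)
w_informed[0] = 1e-3                        # model its amplified variance

x_bad = pwls_solve(A, y, w_uniform, 1e-6, D)
x_good = pwls_solve(A, y, w_informed, 1e-6, D)
print(np.linalg.norm(x_bad - x_true), np.linalg.norm(x_good - x_true))
```

In the paper the same principle operates per detector pixel, with the variance map propagated through the scatter and beam-hardening corrections rather than set by hand.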
Rayleigh Scattering Diagnostics Workshop
NASA Technical Reports Server (NTRS)
Seasholtz, Richard (Compiler)
1996-01-01
The Rayleigh Scattering Diagnostics Workshop was held July 25-26, 1995 at the NASA Lewis Research Center in Cleveland, Ohio. The purpose of the workshop was to foster timely exchange of information and expertise acquired by researchers and users of laser based Rayleigh scattering diagnostics for aerospace flow facilities and other applications. This Conference Publication includes the 12 technical presentations and transcriptions of the two panel discussions. The first panel was made up of 'users' of optical diagnostics, mainly in aerospace test facilities, and its purpose was to assess areas of potential applications of Rayleigh scattering diagnostics. The second panel was made up of active researchers in Rayleigh scattering diagnostics, and its purpose was to discuss the direction of future work.
Algorithm for Atmospheric and Glint Corrections of Satellite Measurements of Ocean Pigment
NASA Technical Reports Server (NTRS)
Fraser, Robert S.; Mattoo, Shana; Yeh, Eueng-Nan; McClain, C. R.
1997-01-01
An algorithm is developed to correct satellite measurements of ocean color for atmospheric and surface reflection effects. The algorithm depends on taking the difference between measured and tabulated radiances for deriving water-leaving radiances. The tabulated radiances are related to the measured radiance where the water-leaving radiance is negligible (670 nm). The tabulated radiances are calculated for rough-surface reflection, polarization of the scattered light, and multiple scattering. The accuracy of the tables is discussed. The method is validated by simulating the effect of wind speeds different from that for which the lookup table is calculated, and of aerosol models different from the maritime model for which the table is computed. The derived water-leaving radiances are accurate enough to compute the pigment concentration with an error of less than +/-15% for wind speeds of 6 and 10 m/s and an urban atmosphere with aerosol optical thickness of 0.20 at 443 nm, decreasing to 0.10 at 670 nm. The pigment accuracy is less for wind speeds below 6 m/s and is about 30% for a model with aeolian dust. On the other hand, in a preliminary comparison with coastal zone color scanner (CZCS) measurements, this algorithm and the CZCS operational algorithm produced values of pigment concentration in one image that agreed closely.
Accurate Mass Assignment of Native Protein Complexes Detected by Electrospray Mass Spectrometry
Liepold, Lars O.; Oltrogge, Luke M.; Suci, Peter; Douglas, Trevor; Young, Mark J.
2009-01-01
Correct charge state assignment is crucial to assigning an accurate mass to supramolecular complexes analyzed by electrospray mass spectrometry. Conventional charge state assignment techniques fall short of reliably and unambiguously predicting the correct charge state for many supramolecular complexes. We provide an explanation of the shortcomings of the conventional techniques and have developed a robust charge state assignment method that is applicable to all spectra. PMID:19103497
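The conventional assignment that the paper improves upon exploits the spacing of adjacent peaks of the same species: peaks carrying z and z+1 protons determine z, and with it the neutral mass. A minimal sketch of that textbook calculation (not the paper's robust method; the peak values below are synthetic):

```python
# For adjacent peaks m1 > m2 of one species carrying z and z+1 protons:
#   m1 = (M + z*mp)/z,  m2 = (M + (z+1)*mp)/(z+1)
# solving gives z = (m2 - mp)/(m1 - m2) and neutral mass M = z*(m1 - mp).

PROTON = 1.007276  # proton mass, Da

def assign_charge_and_mass(m1, m2):
    """Charge state of the m1 peak and neutral mass from adjacent peaks."""
    z = round((m2 - PROTON) / (m1 - m2))
    mass = z * (m1 - PROTON)
    return z, mass

# Synthetic example: a 1,000,000 Da complex observed at z = 100 and 101.
M = 1.0e6
m1 = (M + 100 * PROTON) / 100
m2 = (M + 101 * PROTON) / 101
print(assign_charge_and_mass(m1, m2))
```

For large complexes the peaks broaden and overlap, the spacing (m1 - m2) becomes comparable to the peak width, and the rounded z becomes ambiguous, which is exactly the failure mode motivating the paper's alternative assignment method.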
Light scattering modeling of bacteria using spheroids and cylinders
NASA Astrophysics Data System (ADS)
Feng, Chunxia; Huang, Lihua; Han, Jie; Zhou, Guangchao; Zeng, Aijun; Zhao, Yongkai; Huang, Huijie
2009-11-01
Numerical simulations of light scattering by irregularly shaped bacteria are carried out using the T-matrix method. A previously developed T-matrix code for the study of light scattering by randomly oriented nonspherical particles is used for the current purpose and is validated against Mie theory for coccus (spherical) bacteria. Simplified particle shapes, spheroids and cylinders, for simulating scattering by irregularly shaped bacteria are studied. The results for the angular distributions of the scattering matrix elements of B. subtilis at a wavelength of 0.6328 μm are presented, and their dependence on shape and model is discussed. The analysis suggests that spheroids perform better than cylinders for B. subtilis. Calculating the scattering matrix elements may thus be an accurate method for determining bacterial sizes and shapes, and may be used to identify bacteria.
NASA Technical Reports Server (NTRS)
Mceachran, R. P.; Horbatsch, M.; Stauffer, A. D.
1990-01-01
A 5-state close-coupling calculation (5s-5p-4d-6s-6p) was carried out for positron-Rb scattering in the energy range