Sample records for scatter correction techniques

  1. Frequency-domain method for measuring spectral properties in multiple-scattering media: methemoglobin absorption spectrum in a tissuelike phantom

    NASA Astrophysics Data System (ADS)

    Fishkin, Joshua B.; So, Peter T. C.; Cerussi, Albert E.; Gratton, Enrico; Fantini, Sergio; Franceschini, Maria Angela

    1995-03-01

    We have measured the optical absorption and scattering coefficient spectra of a multiple-scattering medium (i.e., a biological tissue-simulating phantom comprising a lipid colloid) containing methemoglobin by using frequency-domain techniques. The methemoglobin absorption spectrum determined in the multiple-scattering medium is in excellent agreement with a corrected methemoglobin absorption spectrum obtained from a steady-state spectrophotometer measurement of the optical density of a minimally scattering medium. The determination of the corrected methemoglobin absorption spectrum takes into account the scattering from impurities in the methemoglobin solution containing no lipid colloid. Frequency-domain techniques allow for the separation of the absorbing from the scattering properties of multiple-scattering media, and these techniques thus provide an absolute

  2. An Accurate Scatter Measurement and Correction Technique for Cone Beam Breast CT Imaging Using Scanning Sampled Measurement (SSM) Technique.

    PubMed

    Liu, Xinming; Shaw, Chris C; Wang, Tianpeng; Chen, Lingyun; Altunbas, Mustafa C; Kappadath, S Cheenu

    2006-02-28

    We developed and investigated a scanning sampled measurement (SSM) technique for scatter measurement and correction in cone beam breast CT imaging. A cylindrical polypropylene phantom (water equivalent) was mounted on a rotating table in a stationary-gantry experimental cone beam breast CT imaging system. A 2-D array of lead beads, with the beads spaced ~1 cm apart and slightly tilted vertically, was placed between the object and the x-ray source. A series of projection images was acquired as the phantom was rotated 1 degree per projection view and the lead bead array was shifted vertically from one projection view to the next. A series of lead bars was also placed at the phantom edge to produce a better scatter estimate across the phantom edges. Image signals in the lead bead/bar shadows were used to obtain sampled scatter measurements, which were then interpolated to form an estimated scatter distribution across the projection images. The image data behind the lead bead/bar shadows were restored by interpolating image data from the two adjacent projection views to form beam-block-free projection images. The estimated scatter distribution was then subtracted from the corresponding restored projection image to obtain scatter-removed projection images. Our preliminary experiment demonstrated that it is feasible to implement the SSM technique for scatter estimation and correction in cone beam breast CT imaging. Scatter correction was successfully performed on all projection images using the scatter distribution interpolated from the SSM and the restored projection image data. The scatter-corrected projection data yielded elevated CT numbers and greatly reduced cupping artifacts.
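
    As a rough illustration of the sampling-and-interpolation step described in this record, the sketch below spreads sparsely sampled scatter values (taken behind the lead bead/bar shadows) into a full scatter map and subtracts it from a restored projection. This is a minimal sketch under stated assumptions: the array names and the use of SciPy's griddata are illustrative choices, not the authors' implementation.

    ```python
    # Minimal sketch of the SSM idea: interpolate sampled scatter to a full map,
    # then subtract it from the (beam-block-free) restored projection.
    import numpy as np
    from scipy.interpolate import griddata

    def estimate_scatter_from_samples(sample_rows, sample_cols, sample_vals, shape):
        """Interpolate sparse scatter samples (behind beads/bars) to a full map."""
        rr, cc = np.mgrid[0:shape[0], 0:shape[1]]
        pts = np.column_stack([sample_rows, sample_cols])
        scatter = griddata(pts, sample_vals, (rr, cc), method="cubic")
        # Fill pixels outside the convex hull with nearest-neighbour values.
        nearest = griddata(pts, sample_vals, (rr, cc), method="nearest")
        return np.where(np.isnan(scatter), nearest, scatter)

    def correct_projection(restored_projection, scatter_map):
        """Subtract the estimated scatter, clipping to keep intensities physical."""
        return np.clip(restored_projection - scatter_map, 0, None)
    ```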

  3. Low dose scatter correction for digital chest tomosynthesis

    NASA Astrophysics Data System (ADS)

    Inscoe, Christina R.; Wu, Gongting; Shan, Jing; Lee, Yueh Z.; Zhou, Otto; Lu, Jianping

    2015-03-01

    Digital chest tomosynthesis (DCT) provides superior image quality and depth information for thoracic imaging at relatively low dose, though the presence of strong photon scatter degrades the image quality. In most chest radiography, anti-scatter grids are used; however, the grid also blocks a large fraction of the primary beam photons, requiring a significantly higher imaging dose for patients. Previously, we proposed an efficient low-dose scatter correction technique using a primary beam sampling apparatus. We implemented the technique in stationary digital breast tomosynthesis and found the method to be efficient in correcting patient-specific scatter with only a 3% increase in dose. In this paper we report a feasibility study of applying the same technique to chest tomosynthesis. This investigation was performed using phantom and cadaver subjects. The method involves an initial tomosynthesis scan of the object. A lead plate with an array of holes, or primary sampling apparatus (PSA), was placed above the object. A second tomosynthesis scan was performed to measure the primary (scatter-free) transmission. The PSA data were used with the full-field projections to compute the scatter, which was then interpolated to full-field scatter maps unique to each projection angle. Full-field projection images were scatter corrected prior to reconstruction. Projections and reconstruction slices were evaluated, and the correction method was found to be effective at improving image quality and practical for clinical implementation.

  4. Scattering properties of ultrafast laser-induced refractive index shaping lenticular structures in hydrogels

    NASA Astrophysics Data System (ADS)

    Wozniak, Kaitlin T.; Germer, Thomas A.; Butler, Sam C.; Brooks, Daniel R.; Huxlin, Krystel R.; Ellis, Jonathan D.

    2018-02-01

    We present measurements of light scatter induced by a new ultrafast laser technique being developed for laser refractive correction in transparent ophthalmic materials such as cornea, contact lenses, and/or intraocular lenses. In this new technique, called intra-tissue refractive index shaping (IRIS), a 405 nm femtosecond laser is focused and scanned below the corneal surface, inducing a spatially-varying refractive index change that corrects vision errors. In contrast with traditional laser correction techniques, such as laser in-situ keratomileusis (LASIK) or photorefractive keratectomy (PRK), IRIS does not operate via photoablation, but rather changes the refractive index of transparent materials such as cornea and hydrogels. A concern with any laser eye correction technique is additional scatter induced by the process, which can adversely affect vision, especially at night. The goal of this investigation is to identify sources of scatter induced by IRIS and to mitigate possible effects on visual performance in ophthalmic applications. Preliminary light scattering measurements on patterns written into hydrogel showed four sources of scatter, differentiated by distinct behaviors: (1) scattering from scanned lines; (2) scattering from stitching errors, resulting from adjacent scanning fields not being aligned to one another; (3) diffraction from Fresnel zone discontinuities; and (4) long-period variations in the scans that created distinct diffraction peaks, likely due to inconsistent line spacing in the writing instrument. By knowing the nature of these different scattering errors, it will now be possible to modify and optimize the design of IRIS structures to mitigate potential deficits in visual performance in human clinical trials.

  5. Single-scan patient-specific scatter correction in computed tomography using peripheral detection of scatter and compressed sensing scatter retrieval

    PubMed Central

    Meng, Bowen; Lee, Ho; Xing, Lei; Fahimian, Benjamin P.

    2013-01-01

    Purpose: X-ray scatter results in a significant degradation of image quality in computed tomography (CT), representing a major limitation in cone-beam CT (CBCT) and large field-of-view diagnostic scanners. In this work, a novel scatter estimation and correction technique is proposed that utilizes peripheral detection of scatter during the patient scan to simultaneously acquire image and patient-specific scatter information in a single scan, used in conjunction with a proposed compressed sensing scatter recovery technique to reconstruct and correct for the patient-specific scatter in the projection space. Methods: The method consists of the detection of patient scatter at the edges of the field of view (FOV) followed by measurement-based compressed sensing recovery of the scatter throughout the projection space. In the prototype implementation, the kV x-ray source of the Varian TrueBeam OBI system was blocked at the edges of the projection FOV, and the image detector in the corresponding blocked region was used for scatter detection. The design enables acquisition of projection data on the unblocked central region of the FOV and of scatter data at the blocked boundary regions. For the initial scatter estimation on the central FOV, a prior consisting of a hybrid scatter model that combines the scatter interpolation method and scatter convolution model is estimated using the acquired scatter distribution on the boundary region. With the hybrid scatter estimation model, compressed sensing optimization is performed to generate the scatter map by penalizing the L1 norm of the discrete cosine transform of the scatter signal. The estimated scatter is subtracted from the projection data by soft-tuning, and the scatter-corrected CBCT volume is obtained by the conventional Feldkamp-Davis-Kress algorithm. Experimental studies using image quality and anthropomorphic phantoms on a Varian TrueBeam system were carried out to evaluate the performance of the proposed scheme. Results: The scatter shading artifacts were markedly suppressed in the reconstructed images using the proposed method. On the Catphan©504 phantom, the proposed method reduced the error of CT number to 13 Hounsfield units, 10% of that without scatter correction, and increased the image contrast by a factor of 2 in high-contrast regions. On the anthropomorphic phantom, the spatial nonuniformity decreased from 10.8% to 6.8% after correction. Conclusions: A novel scatter correction method, enabling unobstructed acquisition of the high-frequency image data and concurrent detection of the patient-specific low-frequency scatter data at the edges of the FOV, is proposed and validated in this work. In contrast to blocker-based techniques that obstruct the central portion of the FOV, which degrades and limits the image reconstruction, compressed sensing is used to solve for the scatter from its detection at the periphery of the FOV, enabling the highest-quality reconstruction in the central region and robust patient-specific scatter correction. PMID:23298098
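
    To illustrate the compressed-sensing step described in this record, the sketch below recovers a smooth full-field scatter map from peripheral measurements by promoting sparsity of its discrete cosine transform. The projection-onto-measurements / soft-thresholding loop and all parameter values are assumptions for illustration, not the published optimization.

    ```python
    # Sketch: scatter recovery with an L1 penalty on the DCT of the scatter map,
    # constrained to match the scatter measured at the FOV edges.
    import numpy as np
    from scipy.fft import dctn, idctn

    def recover_scatter(measured, mask, n_iter=200, threshold=0.05):
        """measured: scatter values valid where mask is True (FOV edges);
        returns a full scatter map whose DCT coefficients are sparse."""
        scatter = np.zeros_like(measured, dtype=float)
        for _ in range(n_iter):
            scatter[mask] = measured[mask]                        # data consistency
            coeffs = dctn(scatter, norm="ortho")
            coeffs = np.sign(coeffs) * np.maximum(np.abs(coeffs) - threshold, 0.0)
            scatter = idctn(coeffs, norm="ortho")                 # L1-in-DCT shrinkage
        return np.clip(scatter, 0, None)
    ```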

  6. Quantitative Evaluation of 2 Scatter-Correction Techniques for 18F-FDG Brain PET/MRI in Regard to MR-Based Attenuation Correction.

    PubMed

    Teuho, Jarmo; Saunavaara, Virva; Tolvanen, Tuula; Tuokkola, Terhi; Karlsson, Antti; Tuisku, Jouni; Teräs, Mika

    2017-10-01

    In PET, corrections for photon scatter and attenuation are essential for visual and quantitative consistency. MR attenuation correction (MRAC) is generally conducted by image segmentation and assignment of discrete attenuation coefficients, which offer limited accuracy compared with CT attenuation correction. Potential inaccuracies in MRAC may affect scatter correction, because the attenuation image (μ-map) is used in single scatter simulation (SSS) to calculate the scatter estimate. We assessed the impact of MRAC on scatter correction using 2 scatter-correction techniques and 3 μ-maps for MRAC. Methods: The tail-fitted SSS (TF-SSS) and a Monte Carlo-based single scatter simulation (MC-SSS) algorithm implementations on the Philips Ingenuity TF PET/MR were used with 1 CT-based and 2 MR-based μ-maps. Data from 7 subjects were used in the clinical evaluation, and a phantom study using an anatomic brain phantom was conducted. Scatter-correction sinograms were evaluated for each scatter correction method and μ-map. Absolute image quantification was investigated with the phantom data. Quantitative assessment of PET images was performed by volume-of-interest and ratio image analysis. Results: MRAC did not result in large differences in scatter algorithm performance, especially with TF-SSS. Scatter sinograms and scatter fractions did not reveal large differences regardless of the μ-map used. TF-SSS showed slightly higher absolute quantification. The differences in volume-of-interest analysis between TF-SSS and MC-SSS were 3% at maximum in the phantom and 4% in the patient study. Both algorithms showed excellent correlation with each other with no visual differences between PET images. MC-SSS showed a slight dependency on the μ-map used, with a difference of 2% on average and 4% at maximum when a μ-map without bone was used. Conclusion: The effect of different MR-based μ-maps on the performance of scatter correction was minimal in non-time-of-flight 18F-FDG PET/MR brain imaging. The SSS algorithm was not affected significantly by MRAC. The performance of the MC-SSS algorithm is comparable but not superior to TF-SSS, warranting further investigations of algorithm optimization and performance with different radiotracers and time-of-flight imaging. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  7. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method.

    PubMed

    Shidahara, Miho; Watabe, Hiroshi; Kim, Kyeong Min; Kato, Takashi; Kawatsu, Shoji; Kato, Rikio; Yoshimura, Kumiko; Iida, Hidehiro; Ito, Kengo

    2005-10-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I(μb)AC with Chang's attenuation correction factor. The scatter component image is estimated by convolving I(μb)AC with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine.
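
    As a rough sketch of the convolution-subtraction idea in this record (estimate the scatter component from the attenuation-corrected image itself, weight it by a scatter fraction, and subtract), a minimal example follows. The Gaussian scatter function and the constant scatter fraction are placeholders; the published method uses a measured scatter function and an image-based scatter fraction function.

    ```python
    # Sketch of image-based scatter correction (IBSC) with placeholder parameters.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def ibsc_correct(img_ac, scatter_sigma_px=15.0, scatter_fraction=0.3):
        """img_ac: reconstructed image with Chang attenuation correction applied."""
        scatter_component = scatter_fraction * gaussian_filter(img_ac, scatter_sigma_px)
        return np.clip(img_ac - scatter_component, 0, None)
    ```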

  8. Evaluation of simulation-based scatter correction for 3-D PET cardiac imaging

    NASA Astrophysics Data System (ADS)

    Watson, C. C.; Newport, D.; Casey, M. E.; deKemp, R. A.; Beanlands, R. S.; Schmand, M.

    1997-02-01

    Quantitative imaging of the human thorax poses one of the most difficult challenges for three-dimensional (3-D) (septaless) positron emission tomography (PET), due to the strong attenuation of the annihilation radiation and the large contribution of scattered photons to the data. In [18F]fluorodeoxyglucose (FDG) studies of the heart with the patient's arms in the field of view, the contribution of scattered events can exceed 50% of the total detected coincidences. Accurate correction for this scatter component is necessary for meaningful quantitative image analysis and tracer kinetic modeling. For this reason, the authors have implemented a single-scatter simulation technique for scatter correction in positron volume imaging. Here, they describe this algorithm and present scatter correction results from human and chest phantom studies.

  9. Exact first order scattering correction for vector radiative transfer in coupled atmosphere and ocean systems

    NASA Astrophysics Data System (ADS)

    Zhai, Peng-Wang; Hu, Yongxiang; Josset, Damien B.; Trepte, Charles R.; Lucker, Patricia L.; Lin, Bing

    2012-06-01

    We have developed a Vector Radiative Transfer (VRT) code for coupled atmosphere and ocean systems based on the successive order of scattering (SOS) method. In order to achieve efficiency and maintain accuracy, the scattering matrix is expanded in terms of the Wigner d functions and the delta fit or delta-M technique is used to truncate the commonly-present large forward scattering peak. To further improve the accuracy of the SOS code, we have implemented the analytical first order scattering treatment using the exact scattering matrix of the medium in the SOS code. The expansion and truncation techniques are kept for higher order scattering. The exact first order scattering correction was originally published by Nakajima and Tanaka [1]. A new contribution of this work is to account for the exact secondary light scattering caused by the light reflected by and transmitted through the rough air-sea interface.

  10. SU-E-I-07: An Improved Technique for Scatter Correction in PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, S; Wang, Y; Lue, K

    2014-06-01

    Purpose: In positron emission tomography (PET), the single scatter simulation (SSS) algorithm is widely used for scatter estimation in clinical scans. However, bias usually occurs at the essential step of scaling the computed SSS distribution to real scatter amounts by employing the scatter-only projection tail. The bias can be amplified when the scatter-only projection tail is too small, resulting in incorrect scatter correction. To this end, we propose a novel scatter calibration technique to accurately estimate the amount of scatter using a pre-determined scatter fraction (SF) function instead of the scatter-only tail information. Methods: As the SF depends on the radioactivity distribution and the attenuating material of the patient, an accurate theoretical relation cannot be devised. Instead, we constructed an empirical transformation function between SFs and average attenuation coefficients based on a series of phantom studies with different sizes and materials. From the average attenuation coefficient, the predicted SFs were calculated using the empirical transformation function. Hence, the real scatter amount can be obtained by scaling the SSS distribution with the predicted SFs. The simulation was conducted using SimSET. The Siemens Biograph™ 6 PET scanner was modeled in this study. The Software for Tomographic Image Reconstruction (STIR) was employed to estimate the scatter and reconstruct images. The EEC phantom was adopted to evaluate the performance of our proposed technique. Results: The scatter-corrected image of our method demonstrated improved image contrast over that of SSS. For the reconstructed images of our technique and SSS, the normalized standard deviations were 0.053 and 0.182, respectively; the root mean squared errors were 11.852 and 13.767, respectively. Conclusion: We have proposed an alternative method to calibrate SSS (C-SSS) to the absolute scatter amounts using SF. This method can avoid the bias caused by insufficient tail information and therefore improve the accuracy of scatter estimation.
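
    The scaling step this abstract describes can be sketched as follows: predict a scatter fraction from the average attenuation coefficient through an empirically fitted function, then scale the unscaled SSS estimate so it accounts for that fraction of the total prompts. The linear SF model and its coefficients below are hypothetical placeholders, not the paper's fitted transformation.

    ```python
    # Sketch of calibrating an SSS estimate with a predicted scatter fraction (SF).
    import numpy as np

    def predicted_scatter_fraction(mu_map, a=2.0, b=0.05):
        """Map the average attenuation coefficient to an SF (assumed linear fit)."""
        mu_avg = float(np.mean(mu_map[mu_map > 0]))
        return float(np.clip(a * mu_avg + b, 0.0, 0.95))

    def calibrate_sss(sss_estimate, prompts_sinogram, mu_map):
        """Scale the SSS distribution to account for the predicted scatter fraction."""
        sf = predicted_scatter_fraction(mu_map)
        target_total_scatter = sf * float(np.sum(prompts_sinogram))
        scale = target_total_scatter / max(float(np.sum(sss_estimate)), 1e-12)
        return scale * sss_estimate
    ```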

  11. Binary moving-blocker-based scatter correction in cone-beam computed tomography with width-truncated projections: proof of concept.

    PubMed

    Lee, Ho; Fahimian, Benjamin P; Xing, Lei

    2017-03-21

    This paper proposes a binary moving-blocker (BMB)-based technique for scatter correction in cone-beam computed tomography (CBCT). In concept, a beam blocker consisting of lead strips, mounted in front of the x-ray tube, moves rapidly in and out of the beam during a single gantry rotation. The projections are acquired in alternating phases of blocked and unblocked cone beams, where the blocked phase results in a stripe pattern in the width direction. To derive the scatter map from the blocked projections, 1D B-Spline interpolation/extrapolation is applied by using the detected information in the shaded regions. The scatter map of the unblocked projections is corrected by averaging two scatter maps that correspond to their adjacent blocked projections. The scatter-corrected projections are obtained by subtracting the corresponding scatter maps from the projection data and are utilized to generate the CBCT image by a compressed-sensing (CS)-based iterative reconstruction algorithm. Catphan504 and pelvis phantoms were used to evaluate the method's performance. The proposed BMB-based technique provided an effective method to enhance the image quality by suppressing scatter-induced artifacts, such as ring artifacts around the bowtie area. Compared to CBCT without a blocker, the spatial nonuniformity was reduced from 9.1% to 3.1%. The root-mean-square error of the CT numbers in the regions of interest (ROIs) was reduced from 30.2 HU to 3.8 HU. In addition to high resolution, comparable to that of the benchmark image, the CS-based reconstruction also led to a better contrast-to-noise ratio in seven ROIs. The proposed technique enables complete scatter-corrected CBCT imaging with width-truncated projections and allows reducing the acquisition time to approximately half. This work may have significant implications for image-guided or adaptive radiation therapy, where CBCT is often used.
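
    A compact sketch of the two interpolation steps described above: (1) a cubic B-spline fit through the blocker-shadow samples of one detector line gives that blocked projection's scatter profile; (2) the scatter map of an unblocked projection is taken as the average of its two adjacent blocked-projection maps. Shadow detection and spline settings are illustrative assumptions, not the authors' implementation.

    ```python
    # Sketch of BMB scatter estimation from blocked projections.
    import numpy as np
    from scipy.interpolate import make_interp_spline

    def scatter_profile_from_blocked_line(line, shadow_mask, k=3):
        """line: 1D detector readout; shadow_mask: True inside lead-strip shadows."""
        x_known = np.flatnonzero(shadow_mask)
        spline = make_interp_spline(x_known, line[x_known], k=k)
        return np.clip(spline(np.arange(line.size)), 0, None)

    def scatter_map_for_unblocked(prev_blocked_map, next_blocked_map):
        """Average the scatter maps of the two adjacent blocked projections."""
        return 0.5 * (prev_blocked_map + next_blocked_map)
    ```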

  12. Binary moving-blocker-based scatter correction in cone-beam computed tomography with width-truncated projections: proof of concept

    NASA Astrophysics Data System (ADS)

    Lee, Ho; Fahimian, Benjamin P.; Xing, Lei

    2017-03-01

    This paper proposes a binary moving-blocker (BMB)-based technique for scatter correction in cone-beam computed tomography (CBCT). In concept, a beam blocker consisting of lead strips, mounted in front of the x-ray tube, moves rapidly in and out of the beam during a single gantry rotation. The projections are acquired in alternating phases of blocked and unblocked cone beams, where the blocked phase results in a stripe pattern in the width direction. To derive the scatter map from the blocked projections, 1D B-Spline interpolation/extrapolation is applied by using the detected information in the shaded regions. The scatter map of the unblocked projections is corrected by averaging two scatter maps that correspond to their adjacent blocked projections. The scatter-corrected projections are obtained by subtracting the corresponding scatter maps from the projection data and are utilized to generate the CBCT image by a compressed-sensing (CS)-based iterative reconstruction algorithm. Catphan504 and pelvis phantoms were used to evaluate the method’s performance. The proposed BMB-based technique provided an effective method to enhance the image quality by suppressing scatter-induced artifacts, such as ring artifacts around the bowtie area. Compared to CBCT without a blocker, the spatial nonuniformity was reduced from 9.1% to 3.1%. The root-mean-square error of the CT numbers in the regions of interest (ROIs) was reduced from 30.2 HU to 3.8 HU. In addition to high resolution, comparable to that of the benchmark image, the CS-based reconstruction also led to a better contrast-to-noise ratio in seven ROIs. The proposed technique enables complete scatter-corrected CBCT imaging with width-truncated projections and allows reducing the acquisition time to approximately half. This work may have significant implications for image-guided or adaptive radiation therapy, where CBCT is often used.

  13. Scatter correction for cone-beam computed tomography using self-adaptive scatter kernel superposition

    NASA Astrophysics Data System (ADS)

    Xie, Shi-Peng; Luo, Li-Min

    2012-06-01

    The authors propose a combined scatter reduction and correction method to improve image quality in cone beam computed tomography (CBCT). The scatter kernel superposition (SKS) method has been used occasionally in previous studies. However, this method differs in that a scatter detecting blocker (SDB) was used between the X-ray source and the tested object to model the self-adaptive scatter kernel. This study first evaluates the scatter kernel parameters using the SDB, and then isolates the scatter distribution based on the SKS. The quality of image can be improved by removing the scatter distribution. The results show that the method can effectively reduce the scatter artifacts, and increase the image quality. Our approach increases the image contrast and reduces the magnitude of cupping. The accuracy of the SKS technique can be significantly improved in our method by using a self-adaptive scatter kernel. This method is computationally efficient, easy to implement, and provides scatter correction using a single scan acquisition.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xi; Mou, Xuanqin; Nishikawa, Robert M.

    Purpose: Small calcifications are often the earliest and the main indicator of breast cancer. Dual-energy digital mammography (DEDM) has been considered as a promising technique to improve the detectability of calcifications since it can be used to suppress the contrast between adipose and glandular tissues of the breast. X-ray scatter leads to erroneous calculations of the DEDM image. Although the pinhole-array interpolation method can estimate scattered radiation, it requires extra exposures to measure the scatter and apply the correction. The purpose of this work is to design an algorithmic method for scatter correction in DEDM without extra exposures. Methods: In this paper, a scatter correction method for DEDM was developed based on the knowledge that scattered radiation has small spatial variation and that the majority of pixels in a mammogram are noncalcification pixels. The scatter fraction was estimated in the DEDM calculation and the measured scatter fraction was used to remove scatter from the image. The scatter correction method was implemented on a commercial full-field digital mammography system with a breast tissue-equivalent phantom and a calcification phantom. The authors also implemented the pinhole-array interpolation scatter correction method on the system. Phantom results for both methods are presented and discussed. The authors compared the background DE calcification signals and the contrast-to-noise ratio (CNR) of calcifications in the three DE calcification images: the image without scatter correction, the image with scatter correction using the pinhole-array interpolation method, and the image with scatter correction using the authors' algorithmic method. Results: The authors' results show that the resultant background DE calcification signal can be reduced. The root-mean-square of the background DE calcification signal of 1962 μm with scatter-uncorrected data was reduced to 194 μm after scatter correction using the authors' algorithmic method. The range of background DE calcification signals using scatter-uncorrected data was reduced by 58% with scatter-corrected data by the algorithmic method. With the scatter-correction algorithm and denoising, the minimum visible calcification size can be reduced from 380 to 280 μm. Conclusions: When applying the proposed algorithmic scatter correction to images, the resultant background DE calcification signals can be reduced and the CNR of calcifications can be improved. This method has similar or even better performance than the pinhole-array interpolation method in scatter correction for DEDM; moreover, this method is convenient and requires no extra exposure to the patient. Although the proposed scatter correction method is effective, it was validated with a 5-cm-thick phantom with calcifications and a homogeneous background. The method should be tested on structured backgrounds to more accurately gauge effectiveness.

  15. Evaluation of a scattering correction method for high energy tomography

    NASA Astrophysics Data System (ADS)

    Tisseur, David; Bhatia, Navnina; Estre, Nicolas; Berge, Léonie; Eck, Daniel; Payan, Emmanuel

    2018-01-01

    One of the main drawbacks of Cone Beam Computed Tomography (CBCT) is the contribution of the scattered photons due to the object and the detector. Scattered photons are deflected from their original path after their interaction with the object. This additional contribution of the scattered photons results in increased measured intensities, since the scattered intensity simply adds to the transmitted intensity. This effect is seen as an overestimation in the measured intensity, thus corresponding to an underestimation of absorption. This results in artifacts such as cupping, shading, and streaks on the reconstructed images. Moreover, the scattered radiation introduces a bias into quantitative tomographic reconstruction (for example, atomic number and density measurement with the dual-energy technique). The effect can be significant and difficult to correct in the MeV energy range for large objects, due to the higher scatter-to-primary ratio (SPR). Additionally, the incident high energy photons which are scattered by the Compton effect are more forward directed and hence more likely to reach the detector. Moreover, in the MeV energy range, the contribution of the photons produced by pair production and the Bremsstrahlung process also becomes important. We propose an evaluation of a scattering correction technique based on the method named Scatter Kernel Superposition (SKS). The algorithm uses a continuously thickness-adapted kernels method. The analytical parameterizations of the scatter kernels are derived in terms of material thickness, to form continuously thickness-adapted kernel maps in order to correct the projections. This approach has proved to be efficient in producing better sampling of the kernels with respect to the object thickness. This technique offers applicability over a wide range of imaging conditions and gives users an additional advantage. Moreover, since no extra hardware is required by this approach, it forms a major advantage especially in those cases where experimental complexities must be avoided. This approach has been previously tested successfully in the energy range of 100 keV - 6 MeV. In this paper, the kernels are simulated using MCNP in order to take into account both photon and electron processes in the scattered radiation contribution. We present scatter correction results on a large object scanned with a 9 MeV linear accelerator.

  16. Interference detection and correction applied to incoherent-scatter radar power spectrum measurement

    NASA Technical Reports Server (NTRS)

    Ying, W. P.; Mathews, J. D.; Rastogi, P. K.

    1986-01-01

    A median filter based interference detection and correction technique is evaluated and the method applied to the Arecibo incoherent scatter radar D-region ionospheric power spectrum is discussed. The method can be extended to other kinds of data when the statistics involved in the process are still valid.
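
    A small sketch of a median-filter-based interference detection and correction step on a measured power spectrum, in the spirit of this record: samples deviating strongly from a running median are flagged as interference and replaced by the median value. The window length and threshold are arbitrary illustrative choices, not the paper's settings.

    ```python
    # Sketch: flag and replace interference spikes using a running median baseline.
    import numpy as np
    from scipy.signal import medfilt

    def remove_interference(spectrum, kernel_size=9, n_sigma=4.0):
        baseline = medfilt(spectrum, kernel_size=kernel_size)
        residual = spectrum - baseline
        mad = np.median(np.abs(residual - np.median(residual)))
        sigma = 1.4826 * mad                       # robust estimate of the noise scale
        flagged = np.abs(residual) > n_sigma * max(sigma, 1e-12)
        cleaned = np.where(flagged, baseline, spectrum)
        return cleaned, flagged
    ```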

  17. Computer image processing: Geologic applications

    NASA Technical Reports Server (NTRS)

    Abrams, M. J.

    1978-01-01

    Computer image processing of digital data was performed to support several geological studies. The specific goals were to: (1) relate the mineral content to the spectral reflectance of certain geologic materials, (2) determine the influence of environmental factors, such as atmosphere and vegetation, and (3) improve image processing techniques. For detection of spectral differences related to mineralogy, the technique of band ratioing was found to be the most useful. The influence of atmospheric scattering and methods to correct for the scattering were also studied. Two techniques were used to correct for atmospheric effects: (1) dark object subtraction, and (2) normalization by use of ground spectral measurements. Of the two, the first technique proved to be the most successful for removing the effects of atmospheric scattering. A digital mosaic was produced from two side-lapping LANDSAT frames. The advantages were that the same enhancement algorithm can be applied to both frames, and there is no seam where the two images are joined.

  18. Monte Carlo evaluation of accuracy and noise properties of two scatter correction methods for 201Tl cardiac SPECT

    NASA Astrophysics Data System (ADS)

    Narita, Y.; Iida, H.; Ebert, S.; Nakamura, T.

    1997-12-01

    Two independent scatter correction techniques, transmission dependent convolution subtraction (TDCS) and the triple-energy window (TEW) method, were evaluated in terms of quantitative accuracy and noise properties using Monte Carlo simulation (EGS4). Emission projections (primary, scatter and scatter plus primary) were simulated for three numerical phantoms for 201Tl. Data were reconstructed with an ordered-subset EM algorithm including noiseless transmission-data-based attenuation correction. The accuracy of the TDCS and TEW scatter corrections was assessed by comparison with the simulated true primary data. The uniform cylindrical phantom simulation demonstrated better quantitative accuracy with TDCS than with TEW (-2.0% vs. 16.7%) and better S/N (6.48 vs. 5.05). A uniform ring myocardial phantom simulation demonstrated better homogeneity with TDCS than TEW in the myocardium; i.e., anterior-to-posterior wall count ratios were 0.99 and 0.76 with TDCS and TEW, respectively. For the MCAT phantom, TDCS provided good visual and quantitative agreement with the simulated true primary image without noticeably increasing the noise after scatter correction. Overall, TDCS proved to be more accurate and less noisy than TEW, facilitating quantitative assessment of physiological functions with SPECT.
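
    The TEW estimate used as one of the comparison methods above is commonly written as a trapezoidal approximation built from two narrow sub-windows flanking the photopeak window; a per-pixel sketch follows. The window widths are illustrative defaults, not the simulation settings of this study.

    ```python
    # Sketch of the triple-energy-window (TEW) scatter estimate and correction.
    import numpy as np

    def tew_scatter_estimate(counts_lower, counts_upper, w_lower, w_upper, w_main):
        """Scatter inside the main window estimated from the two flanking sub-windows."""
        return (counts_lower / w_lower + counts_upper / w_upper) * w_main / 2.0

    def tew_correct(counts_main, counts_lower, counts_upper,
                    w_lower=3.0, w_upper=3.0, w_main=20.0):
        scatter = tew_scatter_estimate(counts_lower, counts_upper, w_lower, w_upper, w_main)
        return np.clip(counts_main - scatter, 0, None)
    ```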

  19. An improved dark-object subtraction technique for atmospheric scattering correction of multispectral data

    USGS Publications Warehouse

    Chavez, P.S.

    1988-01-01

    Digital analysis of remotely sensed data has become an important component of many earth-science studies. These data are often processed through a set of preprocessing or "clean-up" routines that includes a correction for atmospheric scattering, often called haze. Various methods to correct or remove the additive haze component have been developed, including the widely used dark-object subtraction technique. A problem with most of these methods is that the haze values for each spectral band are selected independently. This can create problems because atmospheric scattering is highly wavelength-dependent in the visible part of the electromagnetic spectrum and the scattering values are correlated with each other. Therefore, multispectral data such as from the Landsat Thematic Mapper and Multispectral Scanner must be corrected with haze values that are spectral band dependent. An improved dark-object subtraction technique is demonstrated that allows the user to select a relative atmospheric scattering model to predict the haze values for all the spectral bands from a selected starting band haze value. The improved method normalizes the predicted haze values for the different gain and offset parameters used by the imaging system. Examples of haze value differences between the old and improved methods for Thematic Mapper Bands 1, 2, 3, 4, 5, and 7 are 40.0, 13.0, 12.0, 8.0, 5.0, and 2.0 vs. 40.0, 13.2, 8.9, 4.9, 16.7, and 3.3, respectively, using a relative scattering model of a clear atmosphere. In one Landsat multispectral scanner image the haze value differences for Bands 4, 5, 6, and 7 were 30.0, 50.0, 50.0, and 40.0 for the old method vs. 30.0, 34.4, 43.6, and 6.4 for the new method using a relative scattering model of a hazy atmosphere. © 1988.
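
    A minimal sketch of the logic this record describes: one starting-band haze value is converted to radiance, propagated to the other bands with a relative scattering model (a wavelength power law here), and converted back to digital numbers with each band's gain and offset. The wavelengths, exponent, and the DN = gain·L + offset calibration convention are assumptions for illustration, not the published coefficients.

    ```python
    # Sketch of predicting band-dependent haze values from one starting band.
    import numpy as np

    def predict_haze_dn(start_band, start_haze_dn, wavelengths_um, exponent, gains, offsets):
        w = np.asarray(wavelengths_um, dtype=float)
        gains, offsets = np.asarray(gains, float), np.asarray(offsets, float)
        rel = (w / w[start_band]) ** (-exponent)          # relative scattering model
        start_radiance = (start_haze_dn - offsets[start_band]) / gains[start_band]
        return rel * start_radiance * gains + offsets     # predicted haze DN per band

    def dark_object_subtract(band_images, haze_dn):
        return [np.clip(img - h, 0, None) for img, h in zip(band_images, haze_dn)]
    ```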

  20. Fast estimation of first-order scattering in a medical x-ray computed tomography scanner using a ray-tracing technique.

    PubMed

    Liu, Xin

    2014-01-01

    This study describes a deterministic method for simulating the first-order scattering in a medical computed tomography scanner. The method was developed based on a physics model of x-ray photon interactions with matter and a ray tracing technique. The results from simulated scattering were compared to the ones from an actual scattering measurement. Two phantoms with homogeneous and heterogeneous material distributions were used in the scattering simulation and measurement. It was found that the simulated scatter profile was in agreement with the measurement result, with an average difference of 25% or less. Finally, tomographic images with artifacts caused by scatter were corrected based on the simulated scatter profiles. The image quality improved significantly.

  1. Theoretical interpretation of the Venus 1.05-micron CO2 band and the Venus 0.8189-micron H2O line.

    NASA Technical Reports Server (NTRS)

    Regas, J. L.; Giver, L. P.; Boese, R. W.; Miller, J. H.

    1972-01-01

    The synthetic-spectrum technique was used in the analysis. The synthetic spectra were constructed with a model which takes into account both isotropic scattering and the inhomogeneity in the Venus atmosphere. The Potter-Hansen correction factor was used to correct for anisotropic scattering. The synthetic spectra obtained are, therefore, the first which contain all the essential physics of line formation. The results confirm Potter's conclusion that the Venus cloud tops resemble terrestrial cirrus or stratus clouds in their scattering properties.

  2. Surface areas of fractally rough particles studied by scattering

    NASA Astrophysics Data System (ADS)

    Hurd, Alan J.; Schaefer, Dale W.; Smith, Douglas M.; Ross, Steven B.; Le Méhauté, Alain; Spooner, Steven

    1989-05-01

    The small-angle scattering from fractally rough surfaces has the potential to give information on the surface area at a given resolution. By use of quantitative neutron and x-ray scattering, a direct comparison of surface areas of fractally rough powders was made between scattering and adsorption techniques. This study supports a recently proposed correction to the theory for scattering from fractal surfaces. In addition, the scattering data provide an independent calibration of molecular adsorbate areas.

  3. Coherent beam control through inhomogeneous media in multi-photon microscopy

    NASA Astrophysics Data System (ADS)

    Paudel, Hari Prasad

    Multi-photon fluorescence microscopy has become a primary tool for high-resolution deep tissue imaging because of its sensitivity to ballistic excitation photons in comparison to scattered excitation photons. The imaging depth of multi-photon microscopes in tissue imaging is limited primarily by background fluorescence that is generated by scattered light due to the random fluctuations in refractive index inside the media, and by reduced intensity in the ballistic focal volume due to aberrations within the tissue and at its interface. We built two multi-photon adaptive optics (AO) correction systems, one for combating scattering and aberration problems, and another for compensating interface aberrations. For scattering correction, a MEMS segmented deformable mirror (SDM) was inserted at a plane conjugate to the objective back-pupil plane. The SDM can pre-compensate for light scattering by coherent combination of the scattered light to make an apparent focus even at depths where negligible ballistic light remains (i.e., the ballistic limit). This problem was approached by investigating the spatial and temporal focusing characteristics of a broad-band light source through strongly scattering media. A new model was developed for coherent focus enhancement through or inside strongly scattering media based on the initial speckle contrast. A layer of fluorescent beads under a mouse skull was imaged using an iterative coherent beam control method in the prototype two-photon microscope to demonstrate the technique. We also adapted an AO correction system to an existing three-photon microscope in a collaborator's lab at Cornell University. In the second AO correction approach, a continuous deformable mirror (CDM) is placed at a plane conjugate to the plane of an interface aberration. We demonstrated that this "Conjugate AO" technique yields a large field-of-view (FOV) advantage in comparison to Pupil AO. Further, we showed that the extended FOV in conjugate AO is maintained over a relatively large axial misalignment of the conjugate planes of the CDM and the aberrating interface. This dissertation advances the field of microscopy by providing new models and techniques for imaging deeply within strongly scattering tissue, and by describing new adaptive optics approaches to extending the imaging FOV in the presence of sample aberrations.

  4. Simulation tools for scattering corrections in spectrally resolved x-ray computed tomography using McXtrace

    NASA Astrophysics Data System (ADS)

    Busi, Matteo; Olsen, Ulrik L.; Knudsen, Erik B.; Frisvad, Jeppe R.; Kehres, Jan; Dreier, Erik S.; Khalil, Mohamad; Haldrup, Kristoffer

    2018-03-01

    Spectral computed tomography is an emerging imaging method that involves using recently developed energy-discriminating photon-counting detectors (PCDs). This technique enables measurements at isolated high-energy ranges, in which the dominant interaction between the x-rays and the sample is incoherent scattering. The scattered radiation causes a loss of contrast in the results, and its correction has proven to be a complex problem, due to its dependence on energy, material composition, and geometry. Monte Carlo simulations can utilize a physical model to estimate the scattering contribution to the signal, at the cost of high computational time. We present a fast Monte Carlo simulation tool, based on McXtrace, to predict the energy-resolved radiation being scattered and absorbed by objects of complex shapes. We validate the tool through measurements using a CdTe single PCD (Multix ME-100) and use it for scattering correction in a simulation of a spectral CT. We found the correction to account for up to 7% relative amplification in the reconstructed linear attenuation. It is a useful tool for x-ray CT to obtain a more accurate material discrimination, especially in the high-energy range, where incoherent scattering interactions become prevalent (>50 keV).

  5. Experimental measurements with Monte Carlo corrections and theoretical calculations of neutron inelastic scattering cross section of 115In

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Xiao, Jun; Luo, Xiaobing

    2016-10-01

    The neutron inelastic scattering cross section of 115In has been measured by the activation technique at neutron energies of 2.95, 3.94, and 5.24 MeV, with the neutron capture cross sections of 197Au as an internal standard. The effects of multiple scattering and flux attenuation were corrected using the Monte Carlo code GEANT4. Based on the experimental values, the 115In neutron inelastic scattering cross-section data were calculated theoretically between 1 and 15 MeV with the TALYS software code; the theoretical results of this study are in reasonable agreement with the available experimental results.

  6. Development of online automatic detector of hydrocarbons and suspended organic matter by simultaneously acquisition of fluorescence and scattering.

    PubMed

    Mbaye, Moussa; Diaw, Pape Abdoulaye; Gaye-Saye, Diabou; Le Jeune, Bernard; Cavalin, Goulven; Denis, Lydie; Aaron, Jean-Jacques; Delmas, Roger; Giamarchi, Philippe

    2018-03-05

    Permanent online monitoring of water supply pollution by hydrocarbons is needed for various industrial plants, to serve as an alert when thresholds are exceeded. Fluorescence spectroscopy is a suitable technique for this purpose due to its sensitivity and moderate cost. However, fluorescence measurements can be disturbed by the presence of suspended organic matter, which induces beam scattering and absorption, leading to an underestimation of hydrocarbon content. To overcome this problem, we propose an original technique of fluorescence spectra correction, based on a measure of the excitation beam scattering caused by suspended organic matter on the left side of the Rayleigh scattering spectral line. This correction allowed us to obtain a statistically validated estimate of the naphthalene content (used as representative of the polyaromatic hydrocarbon contamination), regardless of the amount of suspended organic matter in the sample. Moreover, it thus becomes possible, based on this correction, to estimate the amount of suspended organic matter. By this approach, the online warning system remains operational even when suspended organic matter is present in the water supply. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Some photometric techniques for atmosphereless solar system bodies.

    PubMed

    Lumme, K; Peltoniemi, J; Irvine, W M

    1990-01-01

    We discuss various photometric techniques and their absolute scales in relation to the information that can be derived from the relevant data. We also outline a new scattering model for atmosphereless bodies in the solar system and show how it fits Mariner 10 surface photometry of the planet Mercury. It is shown how important the correct scattering law is while deriving the topography by photoclinometry.

  8. A breast-specific, negligible-dose scatter correction technique for dedicated cone-beam breast CT: a physics-based approach to improve Hounsfield Unit accuracy

    NASA Astrophysics Data System (ADS)

    Yang, Kai; Burkett, George, Jr.; Boone, John M.

    2014-11-01

    The purpose of this research was to develop a method to correct the cupping artifact caused from x-ray scattering and to achieve consistent Hounsfield Unit (HU) values of breast tissues for a dedicated breast CT (bCT) system. The use of a beam passing array (BPA) composed of parallel-holes has been previously proposed for scatter correction in various imaging applications. In this study, we first verified the efficacy and accuracy using BPA to measure the scatter signal on a cone-beam bCT system. A systematic scatter correction approach was then developed by modeling the scatter-to-primary ratio (SPR) in projection images acquired with and without BPA. To quantitatively evaluate the improved accuracy of HU values, different breast tissue-equivalent phantoms were scanned and radially averaged HU profiles through reconstructed planes were evaluated. The dependency of the correction method on object size and number of projections was studied. A simplified application of the proposed method on five clinical patient scans was performed to demonstrate efficacy. For the typical 10-18 cm breast diameters seen in the bCT application, the proposed method can effectively correct for the cupping artifact and reduce the variation of HU values of breast equivalent material from 150 to 40 HU. The measured HU values of 100% glandular tissue, 50/50 glandular/adipose tissue, and 100% adipose tissue were approximately 46, -35, and -94, respectively. It was found that only six BPA projections were necessary to accurately implement this method, and the additional dose requirement is less than 1% of the exam dose. The proposed method can effectively correct for the cupping artifact caused from x-ray scattering and retain consistent HU values of breast tissues.
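
    As a rough illustration of how a measured scatter-to-primary ratio (SPR) can correct open-field projections: with SPR = S/P and a measured total T = P + S, the primary is recovered as P = T / (1 + SPR). The normalized-convolution smoothing of the sparse SPR samples below is an illustrative choice, not the authors' SPR modeling procedure.

    ```python
    # Sketch: build a smooth SPR map from beam-passing-array (BPA) samples and
    # use it to correct open-field projections.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def spr_map_from_bpa(open_field, bpa_primary, sample_mask, sigma=20.0):
        """SPR at the BPA hole locations, spread into a smooth full-field map."""
        spr = np.zeros_like(open_field, dtype=float)
        spr[sample_mask] = open_field[sample_mask] / bpa_primary[sample_mask] - 1.0
        weights = gaussian_filter(sample_mask.astype(float), sigma)
        return gaussian_filter(spr, sigma) / np.maximum(weights, 1e-6)

    def correct_with_spr(open_field, spr_map):
        return open_field / (1.0 + np.clip(spr_map, 0, None))
    ```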

  9. Rotational distortion correction in endoscopic optical coherence tomography based on speckle decorrelation

    PubMed Central

    Uribe-Patarroyo, Néstor; Bouma, Brett E.

    2015-01-01

    We present a new technique for the correction of nonuniform rotation distortion in catheter-based optical coherence tomography (OCT), based on the statistics of speckle between A-lines using intensity-based dynamic light scattering. This technique does not rely on tissue features and can be performed on single frames of data, thereby enabling real-time image correction. We demonstrate its suitability in a gastrointestinal balloon-catheter OCT system, determining the actual rotational speed with high temporal resolution, and present corrected cross-sectional and en face views showing significant enhancement of image quality. PMID:26625040

  10. Improved scatter correction using adaptive scatter kernel superposition

    NASA Astrophysics Data System (ADS)

    Sun, M.; Star-Lack, J. M.

    2010-11-01

    Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS) requiring spatial domain convolutions and (2) fast adaptive scatter kernel superposition (fASKS) where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS, were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
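
    A compact sketch of scatter kernel superposition with thickness-adaptive kernels, in the spirit of this record: pixels are grouped by local object thickness, and each group contributes scatter through a kernel whose amplitude and width depend on that thickness. The Gaussian kernel family and the amplitude/width laws are placeholders; the published kernels are derived from slab simulations and measurements.

    ```python
    # Sketch of adaptive scatter kernel superposition with placeholder kernels.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def sks_scatter_estimate(projection, thickness_cm, n_bins=8):
        scatter = np.zeros_like(projection, dtype=float)
        edges = np.linspace(thickness_cm.min(), thickness_cm.max(), n_bins + 1)
        bin_idx = np.clip(np.digitize(thickness_cm, edges) - 1, 0, n_bins - 1)
        for b in range(n_bins):
            t = 0.5 * (edges[b] + edges[b + 1])
            amplitude = 0.05 + 0.01 * t          # hypothetical thickness-dependent gain
            sigma_px = 10.0 + 2.0 * t            # hypothetical thickness-dependent width
            source = np.where(bin_idx == b, projection, 0.0)
            scatter += amplitude * gaussian_filter(source, sigma_px)
        return scatter

    def sks_correct(projection, thickness_cm):
        return np.clip(projection - sks_scatter_estimate(projection, thickness_cm), 0, None)
    ```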

  11. Improved spatial resolution in PET scanners using sampling techniques

    PubMed Central

    Surti, Suleman; Scheuermann, Ryan; Werner, Matthew E.; Karp, Joel S.

    2009-01-01

    Increased focus towards improved detector spatial resolution in PET has led to the use of smaller crystals in some form of light sharing detector design. In this work we evaluate two sampling techniques that can be applied during calibrations for pixelated detector designs in order to improve the reconstructed spatial resolution. The inter-crystal positioning technique utilizes sub-sampling in the crystal flood map to better sample the Compton scatter events in the detector. The Compton scatter rejection technique, on the other hand, rejects those events that are located further from individual crystal centers in the flood map. We performed Monte Carlo simulations followed by measurements on two whole-body scanners for point source data. The simulations and measurements were performed for scanners using scintillators with Zeff ranging from 46.9 to 63 for LaBr3 and LYSO, respectively. Our results show that near the center of the scanner, inter-crystal positioning technique leads to a gain of about 0.5-mm in reconstructed spatial resolution (FWHM) for both scanner designs. In a small animal LYSO scanner the resolution improves from 1.9-mm to 1.6-mm with the inter-crystal technique. The Compton scatter rejection technique shows higher gains in spatial resolution but at the cost of reduction in scanner sensitivity. The inter-crystal positioning technique represents a modest acquisition software modification for an improvement in spatial resolution, but at a cost of potentially longer data correction and reconstruction times. The Compton scatter rejection technique, while also requiring a modest acquisition software change with no increased data correction and reconstruction times, will be useful in applications where the scanner sensitivity is very high and larger improvements in spatial resolution are desirable. PMID:19779586

  12. Effect of static scatterers in laser speckle contrast imaging: an experimental study on correlation and contrast.

    PubMed

    Vaz, Pedro G; Humeau-Heurtier, Anne; Figueiras, Edite; Correia, Carlos; Cardoso, João

    2017-12-29

    Laser speckle contrast imaging (LSCI) is a non-invasive microvascular blood flow assessment technique with good temporal and spatial resolution. Most LSCI systems, including commercial devices, can perform only qualitative blood flow evaluation, which is a major limitation of this technique. There are several factors that prevent the utilization of LSCI as a quantitative technique. Among these factors, we can highlight the effect of static scatterers. The goal of this work was to study the influence of differences in static and dynamic scatterer concentration on laser speckle correlation and contrast. In order to achieve this, a laser speckle prototype was developed and tested using an optical phantom with various concentrations of static and dynamic scatterers. It was found that the laser speckle correlation could be used to estimate the relative concentration of static/dynamic scatterers within a sample. Moreover, the speckle correlation proved to be independent of the dynamic scatterer velocity, which is a fundamental characteristic to be used in contrast correction.
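
    A small sketch of the two quantities discussed in this record: the local speckle contrast K = sigma/mean computed in a sliding window, and a frame-to-frame Pearson correlation of the speckle pattern, which the authors relate to the static/dynamic scatterer concentration. The 7x7 window is an illustrative choice.

    ```python
    # Sketch: local speckle contrast and inter-frame speckle correlation.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def speckle_contrast(frame, window=7):
        f = frame.astype(float)
        mean = uniform_filter(f, window)
        mean_sq = uniform_filter(f ** 2, window)
        std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
        return std / np.maximum(mean, 1e-12)

    def speckle_correlation(frame_a, frame_b):
        a = frame_a.astype(float).ravel() - frame_a.mean()
        b = frame_b.astype(float).ravel() - frame_b.mean()
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    ```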

  13. Effect of static scatterers in laser speckle contrast imaging: an experimental study on correlation and contrast

    NASA Astrophysics Data System (ADS)

    Vaz, Pedro G.; Humeau-Heurtier, Anne; Figueiras, Edite; Correia, Carlos; Cardoso, João

    2018-01-01

    Laser speckle contrast imaging (LSCI) is a non-invasive microvascular blood flow assessment technique with good temporal and spatial resolution. Most LSCI systems, including commercial devices, can perform only qualitative blood flow evaluation, which is a major limitation of this technique. There are several factors that prevent the utilization of LSCI as a quantitative technique. Among these factors, we can highlight the effect of static scatterers. The goal of this work was to study the influence of differences in static and dynamic scatterer concentration on laser speckle correlation and contrast. In order to achieve this, a laser speckle prototype was developed and tested using an optical phantom with various concentrations of static and dynamic scatterers. It was found that the laser speckle correlation could be used to estimate the relative concentration of static/dynamic scatterers within a sample. Moreover, the speckle correlation proved to be independent of the dynamic scatterer velocity, which is a fundamental characteristic to be used in contrast correction.

  14. WE-AB-207A-07: A Planning CT-Guided Scatter Artifact Correction Method for CBCT Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, X; Liu, T; Dong, X

    Purpose: Cone beam computed tomography (CBCT) imaging is in increasing demand for high-performance image-guided radiotherapy such as online tumor delineation and dose calculation. However, current CBCT imaging has severe scatter artifacts, and its clinical application is therefore limited to patient setup based mainly on bony structures. This study’s purpose is to develop a CBCT artifact correction method. Methods: The proposed scatter correction method utilizes the planning CT to improve CBCT image quality. First, an image registration is used to match the planning CT with the CBCT to reduce the geometry difference between the two images. Then, the planning CT-based prior information is entered into a Bayesian deconvolution framework to iteratively perform a scatter artifact correction for the CBCT images. This technique was evaluated using Catphan phantoms with multiple inserts. Contrast-to-noise ratios (CNR), signal-to-noise ratios (SNR), and the image spatial nonuniformity (ISN) in selected volumes of interest (VOIs) were calculated to assess the proposed correction method. Results: After scatter correction, the CNR increased by a factor of 1.96, 3.22, 3.20, 3.46, 3.44, 1.97 and 1.65, and the SNR increased by a factor of 1.05, 2.09, 1.71, 3.95, 2.52, 1.54 and 1.84 for the Air, PMP, LDPE, Polystyrene, Acrylic, Delrin and Teflon inserts, respectively. The ISN decreased from 21.1% to 4.7% in the corrected images. All values of CNR, SNR and ISN in the corrected CBCT image were much closer to those in the planning CT images. The results demonstrated that the proposed method reduces the relevant artifacts and recovers CT numbers. Conclusion: We have developed a novel CBCT artifact correction method based on the planning CT image, and demonstrated that the proposed CT-guided correction method could significantly reduce scatter artifacts and improve the image quality. This method has great potential to correct CBCT images, allowing their use in adaptive radiotherapy.

  15. A quantitative reconstruction software suite for SPECT imaging

    NASA Astrophysics Data System (ADS)

    Namías, Mauro; Jeraj, Robert

    2017-11-01

    Quantitative Single Photon Emission Computed Tomography (SPECT) imaging allows for measurement of activity concentrations of a given radiotracer in vivo. Although SPECT has usually been perceived as non-quantitative by the medical community, the introduction of accurate CT-based attenuation correction and scatter correction from hybrid SPECT/CT scanners has enabled SPECT systems to be as quantitative as Positron Emission Tomography (PET) systems. We implemented a software suite to reconstruct quantitative SPECT images from hybrid or dedicated SPECT systems with a separate CT scanner. Attenuation, scatter and collimator response corrections were included in an Ordered Subset Expectation Maximization (OSEM) algorithm. A novel scatter fraction estimation technique was introduced. The SPECT/CT system was calibrated with a cylindrical phantom, and quantitative accuracy was assessed with an anthropomorphic phantom and a NEMA/IEC image quality phantom. Accurate activity measurements were achieved at an organ level. This software suite helps increase the quantitative accuracy of SPECT scanners.
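
    As a rough illustration of where such corrections enter the reconstruction, the sketch below shows a generic OSEM update (it is not the record's software suite): attenuation and collimator response would be folded into the system matrix, while the scatter estimate enters as an additive term in the forward projection.

    ```python
    import numpy as np

    def osem(y, A, scatter, n_iter=4, n_subsets=4, eps=1e-9):
        """Generic OSEM with an additive scatter term in the forward model.
        y: measured projections; A: system matrix (attenuation and collimator
        response assumed folded in); scatter: estimated scatter per projection bin."""
        n_bins, n_vox = A.shape
        x = np.ones(n_vox)
        subsets = [np.arange(s, n_bins, n_subsets) for s in range(n_subsets)]
        for _ in range(n_iter):
            for idx in subsets:
                fwd = A[idx] @ x + scatter[idx]          # forward project + scatter
                ratio = y[idx] / np.maximum(fwd, eps)    # measured / expected
                sens = A[idx].T @ np.ones(idx.size)      # subset sensitivity
                x *= (A[idx].T @ ratio) / np.maximum(sens, eps)
        return x
    ```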

  16. Scatter correction in cone-beam CT via a half beam blocker technique allowing simultaneous acquisition of scatter and image information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Ho; Xing Lei; Lee, Rena

    2012-05-15

    Purpose: X-ray scatter reaching the detector degrades the quality of cone-beam computed tomography (CBCT) and represents a problem in volumetric image-guided and adaptive radiation therapy. Several methods using a beam blocker for the estimation and subtraction of scatter have been proposed. However, due to missing information resulting from the obstruction of the blocker, such methods require dual scanning or a dynamically moving blocker to obtain a complete volumetric image. Here, we propose a half beam blocker-based approach, in conjunction with a total variation (TV) regularized Feldkamp-Davis-Kress (FDK) algorithm, to correct scatter-induced artifacts by simultaneously acquiring image and scatter information from a single-rotation CBCT scan. Methods: A half beam blocker, comprising lead strips, is used to simultaneously acquire image data on one side of the projection data and scatter data on the other half side. One-dimensional cubic B-spline interpolation/extrapolation is applied to derive patient-specific scatter information by using the scatter distributions on the strips. The estimated scatter is subtracted from the projection image acquired at the opposite view. Once this subtraction is completed, the FDK algorithm based on a cosine weighting function is applied to the scatter-corrected projections to reconstruct the CBCT volume. To suppress the noise in the reconstructed CBCT images produced by geometric errors between two opposed projections and by the interpolated scatter information, total variation regularization is applied through a minimization using a steepest gradient descent optimization method. Experimental studies using Catphan504 and anthropomorphic phantoms were carried out to evaluate the performance of the proposed scheme. Results: The scatter-induced shading artifacts were markedly suppressed in CBCT using the proposed scheme. Compared with CBCT without a blocker, the nonuniformity value was reduced from 39.3% to 3.1%. The root mean square error relative to values inside the regions of interest selected from a benchmark scatter-free image was reduced from 50 to 11.3. The TV regularization also led to a better contrast-to-noise ratio. Conclusions: An asymmetric half beam blocker-based FDK acquisition and reconstruction technique has been established. The proposed scheme enables simultaneous detection of patient-specific scatter and complete volumetric CBCT reconstruction without additional requirements such as prior images, dual scans, or moving strips.
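
    The estimation step can be pictured with the simplified sketch below (hypothetical function names; a cubic spline per detector row stands in for the record's 1D cubic B-spline interpolation/extrapolation, and the pairing of opposed views is left to the caller): signal measured under the strips is treated as scatter only, interpolated across the row, and subtracted from the unblocked projection of the opposite view.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    def estimate_scatter_row(row, strip_centers_px):
        """Fit a smooth scatter profile to the scatter-only samples measured
        under the blocker strips (strip positions must be sorted, in pixels)."""
        u = np.asarray(strip_centers_px, dtype=int)
        spline = CubicSpline(u.astype(float), row[u], extrapolate=True)
        return spline(np.arange(row.size, dtype=float))

    def scatter_corrected_projection(proj_opposite, proj_blocked, strip_centers_px):
        """Subtract scatter estimated on the blocked half from the projection
        acquired at the opposite view."""
        scatter = np.vstack([estimate_scatter_row(r, strip_centers_px)
                             for r in proj_blocked])
        return np.clip(proj_opposite - scatter, 0.0, None)
    ```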

  17. A model-based scatter artifacts correction for cone beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Wei; Zhu, Jun; Wang, Luyao

    2016-04-15

    Purpose: Due to the increased axial coverage of multislice computed tomography (CT) and the introduction of flat detectors, the size of x-ray illumination fields has grown dramatically, causing an increase in scatter radiation. For CT imaging, scatter is a significant issue that introduces shading artifacts and streaks, as well as reduced contrast and Hounsfield unit (HU) accuracy. The purpose of this work is to provide a fast and accurate scatter artifact correction algorithm for cone beam CT (CBCT) imaging. Methods: The method starts with an estimation of coarse scatter profiles for a set of CBCT data in either the image domain or the projection domain. A denoising algorithm designed specifically for Poisson signals is then applied to derive the final scatter distribution. Qualitative and quantitative evaluations using thorax and abdomen phantoms with Monte Carlo (MC) simulations, experimental Catphan phantom data, and in vivo human data acquired for a clinical image-guided radiation therapy were performed. Scatter correction in both the projection domain and the image domain was conducted, and the influences of segmentation method, mismatched attenuation coefficients, and spectrum model as well as parameter selection were also investigated. Results: Results show that the proposed algorithm can significantly reduce scatter artifacts and recover the correct HU in either the projection domain or the image domain. For the MC thorax phantom study, four-component segmentation yields the best results, while the results of three-component segmentation are still acceptable. The parameters (iteration number K and weight β) affect the accuracy of the scatter correction, and the results improve as K and β increase. It was found that variations in attenuation coefficient accuracy only slightly impact the performance of the proposed processing. For the Catphan phantom data, the mean value over all pixels in the residual image is reduced from −21.8 to −0.2 HU and 0.7 HU for the projection domain and image domain, respectively. The contrast of the in vivo human images is greatly improved after correction. Conclusions: The software-based technique has a number of advantages, such as high computational efficiency and accuracy, and the capability of performing scatter correction without modifying the clinical workflow (i.e., no extra scan/measurement data are needed) or modifying the imaging hardware. When implemented practically, this should improve the accuracy of CBCT image quantitation and significantly impact CBCT-based interventional procedures and adaptive radiation therapy.

  18. TH-A-18C-09: Ultra-Fast Monte Carlo Simulation for Cone Beam CT Imaging of Brain Trauma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sisniega, A; Zbijewski, W; Stayman, J

    Purpose: Application of cone-beam CT (CBCT) to low-contrast soft tissue imaging, such as in detection of traumatic brain injury, is challenged by high levels of scatter. A fast, accurate scatter correction method based on Monte Carlo (MC) estimation is developed for application in high-quality CBCT imaging of acute brain injury. Methods: The correction involves MC scatter estimation executed on an NVIDIA GTX 780 GPU (MC-GPU), with a baseline simulation speed of ~1e7 photons/sec. MC-GPU is accelerated by a novel, GPU-optimized implementation of variance reduction (VR) techniques (forced detection and photon splitting). The number of simulated tracks and projections is reduced for additional speed-up. Residual noise is removed and the missing scatter projections are estimated via kernel smoothing (KS) in the projection plane and across gantry angles. The method is assessed using CBCT images of a head phantom presenting a realistic simulation of fresh intracranial hemorrhage (100 kVp, 180 mAs, 720 projections, source-detector distance 700 mm, source-axis distance 480 mm). Results: For a fixed run-time of ~1 sec/projection, GPU-optimized VR reduces the noise in MC-GPU scatter estimates by a factor of 4. For scatter correction, MC-GPU with VR is executed with 4-fold angular downsampling and 1e5 photons/projection, yielding a 3.5 minute run-time per scan, and de-noised with optimized KS. Corrected CBCT images demonstrate a uniformity improvement of 18 HU and a contrast improvement of 26 HU compared to no correction, and a 52% increase in contrast-to-noise ratio in simulated hemorrhage compared to an “oracle” constant-fraction correction. Conclusion: Acceleration of MC-GPU achieved through GPU-optimized variance reduction and kernel smoothing yields an efficient (<5 min/scan) and accurate scatter correction that does not rely on additional hardware or simplifying assumptions about the scatter distribution. The method is undergoing implementation in a novel CBCT system dedicated to brain trauma imaging at the point of care in sports and military applications. Research grant from Carestream Health. JY is an employee of Carestream Health.
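
    The kernel-smoothing step can be sketched as below; this is a schematic stand-in (Gaussian smoothing in the detector plane plus linear interpolation across gantry angle, with arbitrary parameter values), not the optimized KS described in the record.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from scipy.interpolate import interp1d

    def denoise_and_upsample_scatter(mc_scatter, mc_angles_deg, all_angles_deg,
                                     sigma_px=8.0):
        """Smooth noisy, angularly downsampled MC scatter projections in the
        detector plane, then interpolate across gantry angle to every view.
        mc_scatter has shape (n_mc_views, rows, cols)."""
        smoothed = np.stack([gaussian_filter(p, sigma=sigma_px) for p in mc_scatter])
        interp = interp1d(mc_angles_deg, smoothed, axis=0, kind="linear",
                          bounds_error=False, fill_value="extrapolate")
        return interp(np.asarray(all_angles_deg))
    ```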

  19. Fringes in FTIR spectroscopy revisited: understanding and modelling fringes in infrared spectroscopy of thin films.

    PubMed

    Konevskikh, Tatiana; Ponossov, Arkadi; Blümel, Reinhold; Lukacs, Rozalia; Kohler, Achim

    2015-06-21

    The appearance of fringes in the infrared spectroscopy of thin films seriously hinders the interpretation of chemical bands because fringes change the relative peak heights of chemical spectral bands. Thus, for the correct interpretation of chemical absorption bands, physical properties need to be separated from chemical characteristics. In the paper at hand we revisit the theory of the scattering of infrared radiation at thin absorbing films. Although, in general, scattering and absorption are connected by a complex refractive index, we show that for the scattering of infrared radiation at thin biological films, fringes and chemical absorbance can in good approximation be treated as additive. We further introduce a model-based pre-processing technique for separating fringes from chemical absorbance by extended multiplicative signal correction (EMSC). The technique is validated by simulated and experimental FTIR spectra. It is further shown that EMSC, as opposed to other suggested filtering methods for the removal of fringes, does not remove information related to chemical absorption.
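
    Under the additivity approximation argued for in the record, an EMSC-style correction can be sketched as a single least-squares fit; the snippet below is a minimal illustration (fringe frequencies are assumed known in advance, e.g. pre-estimated from the Fourier transform of a band-free spectral region), not the authors' implementation.

    ```python
    import numpy as np

    def emsc_defringe(spectrum, reference, wavenumbers, fringe_freqs, poly_order=2):
        """Regress the measured spectrum onto a reference spectrum, a low-order
        polynomial baseline and sine/cosine fringe terms, then remove the fitted
        interferents and rescale to the reference level."""
        cols = [reference]
        cols += [wavenumbers ** k for k in range(poly_order + 1)]   # baseline terms
        for f in fringe_freqs:                                      # fringe terms
            cols.append(np.sin(2 * np.pi * f * wavenumbers))
            cols.append(np.cos(2 * np.pi * f * wavenumbers))
        M = np.column_stack(cols)
        coefs, *_ = np.linalg.lstsq(M, spectrum, rcond=None)
        b = coefs[0]                         # multiplicative (chemical) scale
        interferents = M[:, 1:] @ coefs[1:]  # baseline + fringe contribution
        return (spectrum - interferents) / b
    ```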

  20. Spin-analyzed SANS for soft matter applications

    NASA Astrophysics Data System (ADS)

    Chen, W. C.; Barker, J. G.; Jones, R.; Krycka, K. L.; Watson, S. M.; Gagnon, C.; Perevozchivoka, T.; Butler, P.; Gentile, T. R.

    2017-06-01

    The small angle neutron scattering (SANS) of nearly Q-independent nuclear spin-incoherent scattering from hydrogen present in most soft matter and biology samples may raise an issue in structure determination in certain soft matter applications. This is true at high wave vector transfer Q where coherent scattering is much weaker than the nearly Q-independent spin-incoherent scattering background. Polarization analysis is capable of separating coherent scattering from spin-incoherent scattering, hence potentially removing the nearly Q-independent background. Here we demonstrate SANS polarization analysis in conjunction with the time-of-flight technique for separation of coherent and nuclear spin-incoherent scattering for a sample of silver behenate back-filled with light water. We describe a complete procedure for SANS polarization analysis for separating coherent from incoherent scattering for soft matter samples that show inelastic scattering. Polarization efficiency correction and subsequent separation of the coherent and incoherent scattering have been done with and without a time-of-flight technique for direct comparisons. In addition, we have accounted for the effect of multiple scattering from light water to determine the contribution of nuclear spin-incoherent scattering in both the spin flip channel and non-spin flip channel when performing SANS polarization analysis. We discuss the possible gain in the signal-to-noise ratio for the measured coherent scattering signal using polarization analysis with the time-of-flight technique compared with routine unpolarized SANS measurements.
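
    For an ideal polarizer and analyzer, the separation relies on the standard result that nuclear spin-incoherent scattering puts 1/3 of its intensity in the non-spin-flip channel and 2/3 in the spin-flip channel; the snippet below encodes only that textbook step and omits the polarization-efficiency and multiple-scattering corrections the record describes.

    ```python
    import numpy as np

    def separate_coherent_incoherent(i_nsf, i_sf):
        """Ideal-polarization separation:
            I_inc = 1.5 * I_sf
            I_coh = I_nsf - 0.5 * I_sf
        i_nsf, i_sf: non-spin-flip and spin-flip intensities vs. Q."""
        i_sf = np.asarray(i_sf, dtype=float)
        i_nsf = np.asarray(i_nsf, dtype=float)
        return i_nsf - 0.5 * i_sf, 1.5 * i_sf
    ```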

  1. Quantitation of tumor uptake with molecular breast imaging.

    PubMed

    Bache, Steven T; Kappadath, S Cheenu

    2017-09-01

    We developed scatter and attenuation-correction techniques for quantifying images obtained with Molecular Breast Imaging (MBI) systems. To investigate scatter correction, energy spectra of a 99mTc point source were acquired with 0-7-cm-thick acrylic to simulate scatter between the detector heads. The system-specific scatter correction factor, k, was calculated as a function of thickness using a dual energy window (DEW) technique. To investigate attenuation correction, a 7-cm-thick rectangular phantom containing 99mTc-water simulating breast tissue and fillable spheres simulating tumors was imaged. Six spheres 10-27 mm in diameter were imaged with sphere-to-background ratios (SBRs) of 3.5, 2.6, and 1.7 and located at depths of 0.5, 1.5, and 2.5 cm from the center of the water bath for 54 unique tumor scenarios (3 SBRs × 6 sphere sizes × 3 depths). Phantom images were also acquired in-air under scatter- and attenuation-free conditions, which provided ground truth counts. To estimate true counts, T, from each tumor, the geometric mean (GM) of the counts within a prescribed region of interest (ROI) from the two projection images was calculated as T = √(C1·C2)·e^(μt/2)·F, where C1 and C2 are the counts within the square ROI circumscribing each sphere on detectors 1 and 2, μ is the linear attenuation coefficient of water, t is the detector separation, and the factor F accounts for background activity. Four unique F definitions (standard GM, background-subtraction GM, MIRD Primer 16 GM, and a novel "volumetric GM") were investigated. Error in T was calculated as the percentage difference with respect to in-air. Quantitative accuracy using the different GM definitions was calculated as a function of SBR, depth, and sphere size. Sensitivity of quantitative accuracy to ROI size was investigated. We developed an MBI simulation to investigate the robustness of our corrections for various ellipsoidal tumor shapes and detector separations. The scatter correction factor k varied slightly (0.80-0.95) over a compressed breast thickness range of 6-9 cm. Corrected energy spectra recovered general characteristics of scatter-free spectra. Quantitatively, photopeak counts were recovered to <10% compared to in-air conditions after scatter correction. After GM attenuation correction, mean errors (95% confidence interval, CI) for all 54 imaging scenarios were 149% (-154% to +455%), -14.0% (-38.4% to +10.4%), 16.8% (-14.7% to +48.2%), and 2.0% (-14.3 to +18.3%) for the standard GM, background-subtraction GM, MIRD 16 GM, and volumetric GM, respectively. The volumetric GM was less sensitive to SBR and sphere size, while all GM methods were insensitive to sphere depth. Simulation results showed that the volumetric GM method produced a mean error within 5% over all compressed breast thicknesses (3-14 cm), and that the use of an estimated radius for nonspherical tumors increases the 95% CI to at most ±23%, compared with ±16% for spherical tumors. Using the DEW scatter-correction and our volumetric GM attenuation-correction methodology yielded accurate estimates of tumor counts in MBI over various tumor sizes, shapes, depths, background uptake, and compressed breast thicknesses. Accurate tumor uptake can be converted to radiotracer uptake concentration, allowing three patient-specific metrics to be calculated for quantifying absolute uptake and relative uptake change for assessment of treatment response. © 2017 American Association of Physicists in Medicine.
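
    For reference, the baseline correction reduces to a short calculation; the sketch below uses the common textbook form of the geometric-mean correction (exponent μt/2 with t the full separation, and a single multiplicative background factor F), whereas the record compares four different definitions of F.

    ```python
    import numpy as np

    def gm_corrected_counts(c1, c2, mu_per_cm, separation_cm, background_factor=1.0):
        """Geometric-mean attenuation correction of opposed-view counts:
            T = sqrt(C1 * C2) * exp(mu * t / 2) * F
        with F = 1 corresponding to the standard GM (no background correction)."""
        return np.sqrt(c1 * c2) * np.exp(mu_per_cm * separation_cm / 2.0) * background_factor

    # Example: mu of water ~0.154 /cm at 140 keV and a 7 cm detector separation.
    print(gm_corrected_counts(1.0e4, 8.0e3, 0.154, 7.0))
    ```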

  2. Three-dimensional surface profile intensity correction for spatially modulated imaging

    NASA Astrophysics Data System (ADS)

    Gioux, Sylvain; Mazhar, Amaan; Cuccia, David J.; Durkin, Anthony J.; Tromberg, Bruce J.; Frangioni, John V.

    2009-05-01

    We describe a noncontact profile correction technique for quantitative, wide-field optical measurement of tissue absorption (μa) and reduced scattering (μs') coefficients, based on geometric correction of the sample's Lambertian (diffuse) reflectance intensity. Because the projection of structured light onto an object is the basis for both phase-shifting profilometry and modulated imaging, we were able to develop a single instrument capable of performing both techniques. In so doing, the surface of the three-dimensional object could be acquired and used to extract the object's optical properties. The optical properties of flat polydimethylsiloxane (silicone) phantoms with homogenous tissue-like optical properties were extracted, with and without profilometry correction, after vertical translation and tilting of the phantoms at various angles. Objects having a complex shape, including a hemispheric silicone phantom and human fingers, were acquired and similarly processed, with vascular constriction of a finger being readily detectable through changes in its optical properties. Using profilometry correction, the accuracy of extracted absorption and reduced scattering coefficients improved from two- to ten-fold for surfaces having height variations as much as 3 cm and tilt angles as high as 40 deg. These data lay the foundation for employing structured light for quantitative imaging during surgery.
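
    As a heavily simplified picture of the intensity correction (a hypothetical sketch, not the published calibration): the measured reflectance is rescaled by a height-dependent gain obtained from calibration phantoms and by a Lambertian cos θ factor for the local surface tilt relative to the illumination/detection axis.

    ```python
    import numpy as np

    def profile_corrected_reflectance(measured, height_map, normal_map, height_gain,
                                      axis=(0.0, 0.0, 1.0)):
        """measured, height_map: (H, W) arrays; normal_map: (H, W, 3) surface
        normals from profilometry; height_gain: callable returning the calibrated
        intensity gain at a given height (hypothetical, e.g. a fitted polynomial)."""
        n = normal_map / np.linalg.norm(normal_map, axis=-1, keepdims=True)
        cos_theta = np.clip(n @ np.asarray(axis), 1e-3, 1.0)
        return measured / (cos_theta * height_gain(height_map))
    ```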

  3. Simple aerosol correction technique based on the spectral relationships of the aerosol multiple-scattering reflectances for atmospheric correction over the oceans.

    PubMed

    Ahn, Jae-Hyun; Park, Young-Je; Kim, Wonkook; Lee, Boram

    2016-12-26

    An estimation of the aerosol multiple-scattering reflectance is an important part of the atmospheric correction procedure in satellite ocean color data processing. Most commonly, two near-infrared (NIR) bands are used to estimate the aerosol optical properties and hence the effects of aerosols. Previously, the operational Geostationary Ocean Color Imager (GOCI) atmospheric correction scheme relied on a single-scattering reflectance ratio (SSE), which was developed for the processing of Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data, to determine the appropriate aerosol models and their aerosol optical thicknesses. The scheme computes reflectance contributions (weighting factors) of candidate aerosol models in the single-scattering domain and then spectrally extrapolates the single-scattering aerosol reflectance from the NIR to the visible (VIS) bands using the SSE. However, it directly applies the weighting value to all wavelengths in the multiple-scattering domain, although the multiple-scattering aerosol reflectance has a non-linear relationship with the single-scattering reflectance and the inter-band relationship of multiple-scattering aerosol reflectances is also non-linear. To avoid these issues, we propose an alternative scheme for estimating the aerosol reflectance that uses the spectral relationships in the aerosol multiple-scattering reflectance between different wavelengths (called SRAMS). The process directly calculates the multiple-scattering reflectance contributions in the NIR with no residual errors for the selected aerosol models. It then spectrally extrapolates the reflectance contribution from the NIR to the visible bands for each selected model using the SRAMS. To assess the performance of the algorithm regarding errors in the water reflectance at the surface or remote-sensing reflectance retrieval, we compared the SRAMS atmospheric correction results with the SSE atmospheric correction using both simulations and in situ match-ups with GOCI data. From simulations, the mean errors for bands from 412 to 555 nm were 5.2% for the SRAMS scheme and 11.5% for the SSE scheme in case-I waters. From in situ match-ups, the mean errors were 16.5% for the SRAMS scheme and 17.6% for the SSE scheme in both case-I and case-II waters. Although we applied the SRAMS algorithm to GOCI, it can be applied to other ocean color sensors that have two NIR wavelengths.

  4. Scatter and veiling glare corrections for quantitative digital subtraction angiography

    NASA Astrophysics Data System (ADS)

    Ersahin, Atila; Molloi, Sabee Y.; Qian, Yao-Jin

    1994-05-01

    In order to quantitate anatomical and physiological parameters such as vessel dimensions and volumetric blood flow, it is necessary to make corrections for scatter and veiling glare (SVG), which are the major sources of nonlinearities in videodensitometric digital subtraction angiography (DSA). A convolution filtering technique has been investigated to estimate the SVG distribution in DSA images without the need to sample the SVG for each patient. This technique utilizes exposure parameters and image gray levels to estimate SVG intensity by predicting the total thickness for every pixel in the image. Corrections were also made for the variation of the SVG fraction with beam energy and field size. To test its ability to estimate SVG intensity, the correction technique was applied to images of a Lucite step phantom, an anthropomorphic chest phantom, a head phantom, and animal models at different thicknesses, projections, and beam energies. The root-mean-square (rms) percentage errors of these estimates were obtained by comparison with direct SVG measurements made behind a lead strip. The average rms percentage errors in the SVG estimate for the 25 phantom studies and for the 17 animal studies were 6.22% and 7.96%, respectively. These results indicate that the SVG intensity can be estimated for a wide range of thicknesses, projections, and beam energies.
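
    A bare-bones convolution-style estimate looks like the snippet below (a single broad Gaussian kernel and a scalar SVG fraction are placeholders; the record's technique predicts the fraction from exposure parameters, estimated thickness, beam energy and field size).

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def svg_corrected(image, svg_fraction, kernel_sigma_px=40.0):
        """Estimate scatter plus veiling glare as a scaled low-pass version of the
        measured image and subtract it to approximate the primary signal."""
        svg = svg_fraction * gaussian_filter(image.astype(float), sigma=kernel_sigma_px)
        return np.clip(image - svg, 0.0, None)
    ```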

  5. Scatter characterization and correction for simultaneous multiple small-animal PET imaging.

    PubMed

    Prasad, Rameshwar; Zaidi, Habib

    2014-04-01

    The rapid growth and usage of small-animal positron emission tomography (PET) in molecular imaging research has led to increased demand on PET scanner time. One potential solution to increase throughput is to scan multiple rodents simultaneously. However, this is achieved at the expense of deterioration of image quality and loss of quantitative accuracy owing to the enhanced effects of photon attenuation and Compton scattering. The purpose of this work is, first, to characterize the magnitude and spatial distribution of the scatter component in small-animal PET imaging when scanning single and multiple rodents simultaneously and, second, to assess the relevance and evaluate the performance of scatter correction under similar conditions. The LabPET™-8 scanner was modelled as realistically as possible using the Geant4 Application for Tomographic Emission (GATE) Monte Carlo simulation platform. Monte Carlo simulations allow the separation of unscattered and scattered coincidences and as such enable detailed assessment of the scatter component and its origin. Simple shape-based and more realistic voxel-based phantoms were used to simulate single and multiple PET imaging studies. The modelled scatter component using the single-scatter simulation technique was compared to Monte Carlo simulation results. PET images were also corrected for attenuation, and the combined effect of attenuation and scatter on single and multiple small-animal PET imaging was evaluated in terms of image quality and quantitative accuracy. A good agreement was observed between calculated and Monte Carlo simulated scatter profiles for single- and multiple-subject imaging. In the LabPET™-8 scanner, the detector covering material (kovar) contributed the maximum amount of scatter events while the scatter contribution due to lead shielding is negligible. The out-of-field-of-view (FOV) scatter fraction (SF) is 1.70, 0.76, and 0.11% for lower energy thresholds of 250, 350, and 400 keV, respectively. The increase in SF ranged between 25 and 64% when imaging multiple subjects (three to five) of different sizes simultaneously, in comparison to imaging a single subject. The spill-over ratio (SOR) increases with increasing number of subjects in the FOV. Scatter correction improved the SOR for both the water and air cold compartments of single and multiple imaging studies. The recovery coefficients for different body parts of the mouse whole-body and rat whole-body anatomical models were improved for multiple imaging studies following scatter correction. The magnitude and spatial distribution of the scatter component in small-animal PET imaging of single and multiple subjects simultaneously were characterized, and its impact was evaluated in different situations. Scatter correction improves PET image quality and quantitative accuracy for single rat and simultaneous multiple mice and rat imaging studies, whereas its impact is insignificant in single mouse imaging.
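
    The scatter fraction quoted throughout the record is simply SF = S / (S + T), which is directly computable once the Monte Carlo simulation has tagged scattered and true (unscattered) coincidences:

    ```python
    def scatter_fraction(scattered, trues):
        """SF = S / (S + T) for counts of scattered and true coincidences."""
        return scattered / float(scattered + trues)

    # Toy numbers only (not the record's values).
    print(scatter_fraction(1.7e5, 8.3e5))
    ```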

  6. Image Reconstruction for a Partially Collimated Whole Body PET Scanner

    PubMed Central

    Alessio, Adam M.; Schmitz, Ruth E.; MacDonald, Lawrence R.; Wollenweber, Scott D.; Stearns, Charles W.; Ross, Steven G.; Ganin, Alex; Lewellen, Thomas K.; Kinahan, Paul E.

    2008-01-01

    Partially collimated PET systems have less collimation than conventional 2-D systems and have been shown to offer count rate improvements over 2-D and 3-D systems. Despite this potential, previous efforts have not established image-based improvements with partial collimation and have not customized the reconstruction method for partially collimated data. This work presents an image reconstruction method tailored for partially collimated data. Simulated and measured sensitivity patterns are presented and provide a basis for modification of a fully 3-D reconstruction technique. The proposed method uses a measured normalization correction term to account for the unique sensitivity to true events. This work also proposes a modified scatter correction based on simulated data. Measured image quality data supports the use of the normalization correction term for true events, and suggests that the modified scatter correction is unnecessary. PMID:19096731

  7. Image Reconstruction for a Partially Collimated Whole Body PET Scanner.

    PubMed

    Alessio, Adam M; Schmitz, Ruth E; Macdonald, Lawrence R; Wollenweber, Scott D; Stearns, Charles W; Ross, Steven G; Ganin, Alex; Lewellen, Thomas K; Kinahan, Paul E

    2008-06-01

    Partially collimated PET systems have less collimation than conventional 2-D systems and have been shown to offer count rate improvements over 2-D and 3-D systems. Despite this potential, previous efforts have not established image-based improvements with partial collimation and have not customized the reconstruction method for partially collimated data. This work presents an image reconstruction method tailored for partially collimated data. Simulated and measured sensitivity patterns are presented and provide a basis for modification of a fully 3-D reconstruction technique. The proposed method uses a measured normalization correction term to account for the unique sensitivity to true events. This work also proposes a modified scatter correction based on simulated data. Measured image quality data supports the use of the normalization correction term for true events, and suggests that the modified scatter correction is unnecessary.

  8. Demonstration of a novel technique to measure two-photon exchange effects in elastic e±p scattering

    DOE PAGES

    Moteabbed, Maryam; Niroula, Megh; Raue, Brian A.; ...

    2013-08-30

    The discrepancy between proton electromagnetic form factors extracted using unpolarized and polarized scattering data is believed to be a consequence of two-photon exchange (TPE) effects. However, the calculations of TPE corrections have significant model dependence, and there is limited direct experimental evidence for such corrections. The TPE contributions depend on the sign of the lepton charge in e±p scattering, but the luminosities of secondary positron beams limited past measurements at large scattering angles, where the TPE effects are believed to be most significant. We present the results of a new experimental technique for making direct e±p comparisons, which has the potential to make precise measurements over a broad range in Q² and scattering angles. We use the Jefferson Laboratory electron beam and the Hall B photon tagger to generate a clean but untagged photon beam. The photon beam impinges on a converter foil to generate a mixed beam of electrons, positrons, and photons. A chicane is used to separate and recombine the electron and positron beams while the photon beam is stopped by a photon blocker. This provides a combined electron and positron beam, with energies from 0.5 to 3.2 GeV, which impinges on a liquid hydrogen target. The large acceptance CLAS detector is used to identify and reconstruct elastic scattering events, determining both the initial lepton energy and the sign of the scattered lepton. The data were collected in two days with a primary electron beam energy of only 3.3 GeV, limiting the data from this run to smaller values of Q² and scattering angle. Nonetheless, this measurement yields a data sample for e±p with statistics comparable to those of the best previous measurements. We have shown that we can cleanly identify elastic scattering events and correct for the difference in acceptance for electron and positron scattering. Because we ran with only one polarity for the chicane, we are unable to study the difference between the incoming electron and positron beams. This systematic effect leads to the largest uncertainty in the final ratio of positron to electron scattering: R = 1.027 ± 0.005 ± 0.05 for ⟨Q²⟩ = 0.206 GeV² and 0.830 ≤ ε ≤ 0.943. We have demonstrated that the tertiary e± beam generated using this technique provides the opportunity for dramatically improved comparisons of e±p scattering, covering a significant range in both Q² and scattering angle. Combining data with different chicane polarities will allow for detailed studies of the difference between the incoming e+ and e- beams.

  9. Demonstration of a novel technique to measure two-photon exchange effects in elastic e±p scattering

    NASA Astrophysics Data System (ADS)

    Moteabbed, M.; Niroula, M.; Raue, B. A.; Weinstein, L. B.; Adikaram, D.; Arrington, J.; Brooks, W. K.; Lachniet, J.; Rimal, Dipak; Ungaro, M.; Afanasev, A.; Adhikari, K. P.; Aghasyan, M.; Amaryan, M. J.; Anefalos Pereira, S.; Avakian, H.; Ball, J.; Baltzell, N. A.; Battaglieri, M.; Batourine, V.; Bedlinskiy, I.; Bennett, R. P.; Biselli, A. S.; Bono, J.; Boiarinov, S.; Briscoe, W. J.; Burkert, V. D.; Carman, D. S.; Celentano, A.; Chandavar, S.; Cole, P. L.; Collins, P.; Contalbrigo, M.; Cortes, O.; Crede, V.; D'Angelo, A.; Dashyan, N.; De Vita, R.; De Sanctis, E.; Deur, A.; Djalali, C.; Doughty, D.; Dupre, R.; Egiyan, H.; Fassi, L. El; Eugenio, P.; Fedotov, G.; Fegan, S.; Fersch, R.; Fleming, J. A.; Gevorgyan, N.; Gilfoyle, G. P.; Giovanetti, K. L.; Girod, F. X.; Goetz, J. T.; Gohn, W.; Golovatch, E.; Gothe, R. W.; Griffioen, K. A.; Guidal, M.; Guler, N.; Guo, L.; Hafidi, K.; Hakobyan, H.; Hanretty, C.; Harrison, N.; Heddle, D.; Hicks, K.; Ho, D.; Holtrop, M.; Hyde, C. E.; Ilieva, Y.; Ireland, D. G.; Ishkhanov, B. S.; Isupov, E. L.; Jo, H. S.; Joo, K.; Keller, D.; Khandaker, M.; Kim, A.; Klein, F. J.; Koirala, S.; Kubarovsky, A.; Kubarovsky, V.; Kuhn, S. E.; Kuleshov, S. V.; Lewis, S.; Lu, H. Y.; MacCormick, M.; MacGregor, I. J. D.; Martinez, D.; Mayer, M.; McKinnon, B.; Mineeva, T.; Mirazita, M.; Mokeev, V.; Montgomery, R. A.; Moriya, K.; Moutarde, H.; Munevar, E.; Munoz Camacho, C.; Nadel-Turonski, P.; Nasseripour, R.; Niccolai, S.; Niculescu, G.; Niculescu, I.; Osipenko, M.; Ostrovidov, A. I.; Pappalardo, L. L.; Paremuzyan, R.; Park, K.; Park, S.; Phelps, E.; Phillips, J. J.; Pisano, S.; Pogorelko, O.; Pozdniakov, S.; Price, J. W.; Procureur, S.; Protopopescu, D.; Puckett, A. J. R.; Ripani, M.; Rosner, G.; Rossi, P.; Sabatié, F.; Saini, M. S.; Salgado, C.; Schott, D.; Schumacher, R. A.; Seder, E.; Seraydaryan, H.; Sharabian, Y. G.; Smith, E. S.; Smith, G. D.; Sober, D. I.; Sokhan, D.; Stepanyan, S.; Strauch, S.; Tang, W.; Taylor, C. E.; Tian, Ye; Tkachenko, S.; Voskanyan, H.; Voutier, E.; Walford, N. K.; Wood, M. H.; Zachariou, N.; Zana, L.; Zhang, J.; Zhao, Z. W.; Zonta, I.

    2013-08-01

    Background: The discrepancy between proton electromagnetic form factors extracted using unpolarized and polarized scattering data is believed to be a consequence of two-photon exchange (TPE) effects. However, the calculations of TPE corrections have significant model dependence, and there is limited direct experimental evidence for such corrections. Purpose: The TPE contributions depend on the sign of the lepton charge in e±p scattering, but the luminosities of secondary positron beams limited past measurements at large scattering angles, where the TPE effects are believed to be most significant. We present the results of a new experimental technique for making direct e±p comparisons, which has the potential to make precise measurements over a broad range in Q² and scattering angles. Methods: We use the Jefferson Laboratory electron beam and the Hall B photon tagger to generate a clean but untagged photon beam. The photon beam impinges on a converter foil to generate a mixed beam of electrons, positrons, and photons. A chicane is used to separate and recombine the electron and positron beams while the photon beam is stopped by a photon blocker. This provides a combined electron and positron beam, with energies from 0.5 to 3.2 GeV, which impinges on a liquid hydrogen target. The large acceptance CLAS detector is used to identify and reconstruct elastic scattering events, determining both the initial lepton energy and the sign of the scattered lepton. Results: The data were collected in two days with a primary electron beam energy of only 3.3 GeV, limiting the data from this run to smaller values of Q² and scattering angle. Nonetheless, this measurement yields a data sample for e±p with statistics comparable to those of the best previous measurements. We have shown that we can cleanly identify elastic scattering events and correct for the difference in acceptance for electron and positron scattering. Because we ran with only one polarity for the chicane, we are unable to study the difference between the incoming electron and positron beams. This systematic effect leads to the largest uncertainty in the final ratio of positron to electron scattering: R = 1.027 ± 0.005 ± 0.05 for ⟨Q²⟩ = 0.206 GeV² and 0.830 ≤ ε ≤ 0.943. Conclusions: We have demonstrated that the tertiary e± beam generated using this technique provides the opportunity for dramatically improved comparisons of e±p scattering, covering a significant range in both Q² and scattering angle. Combining data with different chicane polarities will allow for detailed studies of the difference between the incoming e+ and e- beams.

  10. Quantitative assessment of scatter correction techniques incorporated in next generation dual-source computed tomography

    NASA Astrophysics Data System (ADS)

    Mobberley, Sean David

    Accurate, cross-scanner assessment of in-vivo air density used to quantitatively assess the amount and distribution of emphysema in COPD subjects has remained elusive. Hounsfield units (HU) within tracheal air can be considerably more positive than -1000 HU. With the advent of new dual-source scanners which employ dedicated scatter correction techniques, it is of interest to evaluate how the quantitative measures of lung density compare between dual-source and single-source scan modes. This study has sought to characterize in-vivo and phantom-based air metrics using dual-energy computed tomography technology where the nature of the technology has required adjustments to scatter correction. Anesthetized ovine (N=6) and swine (N=13: more human-like rib cage shape) subjects, a lung phantom and a thoracic phantom were studied using a dual-source MDCT scanner (Siemens Definition Flash). Multiple dual-source dual-energy (DSDE) and single-source (SS) scans taken at different energy levels and scan settings were acquired for direct quantitative comparison. Density histograms were evaluated for the lung, tracheal, water and blood segments. Image data were obtained at 80, 100, 120, and 140 kVp in the SS mode (B35f kernel) and at 80, 100, 140, and 140-Sn (tin filtered) kVp in the DSDE mode (B35f and D30f kernels), in addition to variations in dose, rotation time, and pitch. To minimize the effect of cross-scatter, the phantom scans in the DSDE mode were obtained by reducing the tube current of one of the tubes to its minimum (near zero) value. When using image data obtained in the DSDE mode, the median HU values in the tracheal regions of all animals and the phantom were consistently closer to -1000 HU regardless of reconstruction kernel (chapters 3 and 4). Similarly, HU values of water and blood were consistently closer to their nominal values of 0 HU and 55 HU respectively. When using image data obtained in the SS mode the air CT numbers demonstrated a consistent positive shift of up to 35 HU with respect to the nominal -1000 HU value. In vivo data demonstrated considerable variability in tracheal air values, influenced by local anatomy, with SS mode scanning, while tracheal air was more consistent with DSDE imaging. Scatter effects in the lung parenchyma differed from adjacent tracheal measures. In summary, the data suggest that enhanced scatter correction serves to provide more accurate CT lung density measures sought to quantitatively assess the presence and distribution of emphysema in COPD subjects. The data further suggest that CT images acquired without adequate scatter correction cannot be corrected by linear algorithms, given the variability in tracheal air HU values and the independent scatter effects on lung parenchyma.

  11. Measurement of absorbed dose with a bone-equivalent extrapolation chamber.

    PubMed

    DeBlois, François; Abdel-Rahman, Wamied; Seuntjens, Jan P; Podgorsak, Ervin B

    2002-03-01

    A hybrid phantom-embedded extrapolation chamber (PEEC) made of Solid Water and bone-equivalent material was used for determining absorbed dose in a bone-equivalent phantom irradiated with clinical radiation beams (cobalt-60 gamma rays; 6 and 18 MV x rays; and 9 and 15 MeV electrons). The dose was determined with the Spencer-Attix cavity theory, using ionization gradient measurements and an indirect determination of the chamber air-mass through measurements of chamber capacitance. The collected charge was corrected for ionic recombination and diffusion in the chamber air volume following the standard two-voltage technique. Due to the hybrid chamber design, correction factors accounting for scatter deficit and electrode composition were determined and applied in the dose equation to obtain absorbed dose in bone for the equivalent homogeneous bone phantom. Correction factors for graphite electrodes were calculated with Monte Carlo techniques and the calculated results were verified through relative air cavity dose measurements for three different polarizing electrode materials: graphite, steel, and brass in conjunction with a graphite collecting electrode. Scatter deficit, due mainly to loss of lateral scatter in the hybrid chamber, reduces the dose to the air cavity in the hybrid PEEC in comparison with full bone PEEC by 0.7% to approximately 2% depending on beam quality and energy. In megavoltage photon and electron beams, graphite electrodes do not affect the dose measurement in the Solid Water PEEC but decrease the cavity dose by up to 5% in the bone-equivalent PEEC even for very thin graphite electrodes (<0.0025 cm). In conjunction with appropriate correction factors determined with Monte Carlo techniques, the uncalibrated hybrid PEEC can be used for measuring absorbed dose in bone material to within 2% for high-energy photon and electron beams.

  12. Absorption and scattering of light by nonspherical particles. [in atmosphere

    NASA Technical Reports Server (NTRS)

    Bohren, C. F.

    1986-01-01

    Using the example of the polarization of scattered light, it is shown that the scattering matrices for identical, randomly oriented particles and for spherical particles are unequal. The spherical assumptions of Mie theory are therefore inconsistent with the random shapes and sizes of atmospheric particulates. The implications for corrections made to extinction measurements for forward-scattered light are discussed. Several analytical methods are examined as potential bases for developing more accurate models, including Rayleigh theory, Fraunhofer diffraction theory, anomalous diffraction theory, Rayleigh-Gans theory, the separation of variables technique, the Purcell-Pennypacker method, the T-matrix method, and finite difference calculations.

  13. Scatter correction method for x-ray CT using primary modulation: Phantom studies

    PubMed Central

    Gao, Hewei; Fahrig, Rebecca; Bennett, N. Robert; Sun, Mingshan; Star-Lack, Josh; Zhu, Lei

    2010-01-01

    Purpose: Scatter correction is a major challenge in x-ray imaging using large area detectors. Recently, the authors proposed a promising scatter correction method for x-ray computed tomography (CT) using primary modulation. Proof of concept was previously illustrated by Monte Carlo simulations and physical experiments on a small phantom with a simple geometry. In this work, the authors provide a quantitative evaluation of the primary modulation technique and demonstrate its performance in applications where scatter correction is more challenging. Methods: The authors first analyze the potential errors of the estimated scatter in the primary modulation method. On two tabletop CT systems, the method is investigated using three phantoms: A Catphan©600 phantom, an anthropomorphic chest phantom, and the Catphan©600 phantom with two annuli. Two different primary modulators are also designed to show the impact of the modulator parameters on the scatter correction efficiency. The first is an aluminum modulator with a weak modulation and a low modulation frequency, and the second is a copper modulator with a strong modulation and a high modulation frequency. Results: On the Catphan©600 phantom in the first study, the method reduces the error of the CT number in the selected regions of interest (ROIs) from 371.4 to 21.9 Hounsfield units (HU); the contrast to noise ratio also increases from 10.9 to 19.2. On the anthropomorphic chest phantom in the second study, which represents a more difficult case due to the high scatter signals and object heterogeneity, the method reduces the error of the CT number from 327 to 19 HU in the selected ROIs and from 31.4% to 5.7% on the overall average. The third study is to investigate the impact of object size on the efficiency of our method. The scatter-to-primary ratio estimation error on the Catphan©600 phantom without any annulus (20 cm in diameter) is at the level of 0.04; it rises to 0.07 and 0.1 on the phantom with an elliptical annulus (30 cm in the minor axis and 38 cm in the major axis) and with a circular annulus (38 cm in diameter). Conclusions: On the three phantom studies, good scatter correction performance of the proposed method has been demonstrated using both image comparisons and quantitative analysis. The theory and experiments demonstrate that a strong primary modulation that possesses a low transmission factor and a high modulation frequency is preferred for high scatter correction accuracy. PMID:20229902

  14. Biophotonics of skin: method for correction of deep Raman spectra distorted by elastic scattering

    NASA Astrophysics Data System (ADS)

    Roig, Blandine; Koenig, Anne; Perraut, François; Piot, Olivier; Gobinet, Cyril; Manfait, Michel; Dinten, Jean-Marc

    2015-03-01

    Confocal Raman microspectroscopy allows in-depth molecular and conformational characterization of biological tissues non-invasively. Unfortunately, spectral distortions occur due to elastic scattering. Our objective is to correct the attenuation of in-depth Raman peak intensities by accounting for this phenomenon, thus enabling quantitative diagnosis. For this purpose, we developed PDMS phantoms mimicking skin optical properties, used as tools for instrument calibration and validation of the data processing method. An optical system based on a fiber bundle was previously developed for in vivo skin characterization with Diffuse Reflectance Spectroscopy (DRS). Used on our phantoms, this technique allows checking their optical properties: the targeted ones were retrieved. Raman microspectroscopy was performed using a commercial confocal microscope. Depth profiles were constructed from the integrated intensity of specific PDMS Raman vibrations. Acquired on monolayer phantoms, they display a decline that increases with the scattering coefficient. Furthermore, when acquiring Raman spectra on multilayered phantoms, the signal attenuation through each single layer is directly dependent on its own scattering property. Therefore, determining the optical properties of any biological sample, obtained with DRS for example, is crucial to properly correct Raman depth profiles. A model, inspired by S.L. Jacques's expression for confocal reflectance microscopy and modified at some points, is proposed and tested to fit the depth profiles obtained on the phantoms as a function of the reduced scattering coefficient. Consequently, once the optical properties of a biological sample are known, the intensity of deep Raman spectra distorted by elastic scattering can be corrected with our model, thus permitting quantitative studies for characterization or diagnosis purposes.
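
    In the same spirit (though not the record's actual model), a minimal correction would fit an exponential attenuation to the depth profile of a reference band measured on a phantom of known optical properties and then rescale spectra acquired at depth; the fitted decay constant stands in for the dependence on the reduced scattering coefficient.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def fit_attenuation(depth_um, intensity):
        """Fit I(z) = I0 * exp(-a * z) to a measured Raman depth profile."""
        model = lambda z, i0, a: i0 * np.exp(-a * z)
        (i0, a), _ = curve_fit(model, np.asarray(depth_um, float),
                               np.asarray(intensity, float),
                               p0=(float(intensity[0]), 1e-3))
        return i0, a

    def correct_depth_profile(depth_um, intensity, a):
        """Undo the fitted attenuation so peak intensities at depth are comparable."""
        return np.asarray(intensity, float) * np.exp(a * np.asarray(depth_um, float))
    ```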

  15. Transmittance and scattering during wound healing after refractive surgery

    NASA Astrophysics Data System (ADS)

    Mar, Santiago; Martinez-Garcia, C.; Blanco, J. T.; Torres, R. M.; Gonzalez, V. R.; Najera, S.; Rodriguez, G.; Merayo, J. M.

    2004-10-01

    Photorefractive keratectomy (PRK) and laser in situ keratomileusis (LASIK) are frequently performed techniques to correct ametropia. The two methods have been compared in terms of their healing, but there has been no comparison of transmittance and light scattering during this process. Scattering in corneal wound healing is due to three parameters: cell size, cell density, and the size of the scar. An increase in the angular width of scattering implies a decrease in contrast sensitivity. During wound healing, keratocyte activation is induced and these cells differentiate into fibroblasts and myofibroblasts. Hens were operated on using the PRK and LASIK techniques. The animals used in this experiment were euthanized, and immediately afterwards their corneas were removed and placed carefully into a cornea camera support. All optical measurements were done with a scatterometer constructed in our laboratory. Scattering measurements are correlated with transmittance: the smaller the transmittance, the larger the scattering. The aim of this work is to provide experimental data on corneal transparency and scattering, in order to supply data that allow a more complete model of corneal transparency to be generated.

  16. On μe-scattering at NNLO in QED

    NASA Astrophysics Data System (ADS)

    Mastrolia, P.; Passera, M.; Primo, A.; Schubert, U.; Torres Bobadilla, W. J.

    2018-05-01

    We report on the current status of the analytic evaluation of the two-loop corrections to μe scattering in Quantum Electrodynamics, presenting the state-of-the-art techniques which have been developed to address this challenging task.

  17. Estimation of Soil Moisture with L-band Multi-polarization Radar

    NASA Technical Reports Server (NTRS)

    Shi, J.; Chen, K. S.; Kim, Chung-Li Y.; Van Zyl, J. J.; Njoku, E.; Sun, G.; O'Neill, P.; Jackson, T.; Entekhabi, D.

    2004-01-01

    Through analyses of the model-simulated database, we developed a technique to estimate surface soil moisture under the HYDROS radar sensor configuration (L-band multi-polarization and 40° incidence). This technique includes two steps. First, it decomposes the total backscattering signals into two components: the surface scattering components (the bare-surface backscattering signals attenuated by the overlying vegetation layer) and the sum of the direct volume scattering components and the surface-volume interaction components at different polarizations. From the model-simulated database, our decomposition technique works quite well in estimating the surface scattering components, with RMSEs of 0.12, 0.25, and 0.55 dB for VV, HH, and VH polarizations, respectively. Then, we use the decomposed surface backscattering signals to estimate the soil moisture and the combined surface roughness and vegetation attenuation correction factors with all three polarizations.

  18. A method to incorporate leakage and head scatter corrections into a tomotherapy inverse treatment planning algorithm

    NASA Astrophysics Data System (ADS)

    Holmes, Timothy W.

    2001-01-01

    A detailed tomotherapy inverse treatment planning method is described which incorporates leakage and head scatter corrections during each iteration of the optimization process, allowing these effects to be directly accounted for in the optimized dose distribution. It is shown that the conventional inverse planning method for optimizing incident intensity can be extended to include a 'concurrent' leaf sequencing operation from which the leakage and head scatter corrections are determined. The method is demonstrated using the steepest-descent optimization technique with constant step size and a least-squared error objective. The method was implemented using the MATLAB scientific programming environment and its feasibility demonstrated for 2D test cases simulating treatment delivery using a single coplanar rotation. The results indicate that this modification does not significantly affect convergence of the intensity optimization method when exposure times of individual leaves are stratified to a large number of levels (>100) during leaf sequencing. In general, the addition of aperture dependent corrections, especially 'head scatter', reduces incident fluence in local regions of the modulated fan beam, resulting in increased exposure times for individual collimator leaves. These local variations can result in 5% or greater local variation in the optimized dose distribution compared to the uncorrected case. The overall efficiency of the modified intensity optimization algorithm is comparable to that of the original unmodified case.

  19. Inductively coupled plasma atomic fluorescence spectrometric determination of cadmium, copper, iron, lead, manganese and zinc

    USGS Publications Warehouse

    Sanzolone, R.F.

    1986-01-01

    An inductively coupled plasma atomic fluorescence spectrometric method is described for the determination of six elements in a variety of geological materials. Sixteen reference materials are analysed by this technique to demonstrate its use in geochemical exploration. Samples are decomposed with nitric, hydrofluoric and hydrochloric acids, and the residue dissolved in hydrochloric acid and diluted to volume. The elements are determined in two groups based on compatibility of instrument operating conditions and consideration of crustal abundance levels. Cadmium, Cu, Pb and Zn are determined as a group in the 50-ml sample solution under one set of instrument conditions with the use of scatter correction. Limitations of the scatter correction technique used with the fluorescence instrument are discussed. Iron and Mn are determined together using another set of instrumental conditions on a 1:50 dilution of the sample solution without the use of scatter correction. The ranges of concentration (µg g-1) of these elements in the sample that can be determined are: Cd, 0.3-500; Cu, 0.4-500; Fe, 85-250 000; Mn, 45-100 000; Pb, 5-10 000; and Zn, 0.4-300. The precision of the method is usually less than 5% relative standard deviation (RSD) over a wide concentration range and acceptable accuracy is shown by the agreement between values obtained and those recommended for the reference materials.

  20. A study on scattering correction for γ-photon 3D imaging test method

    NASA Astrophysics Data System (ADS)

    Xiao, Hui; Zhao, Min; Liu, Jiantang; Chen, Hao

    2018-03-01

    A pair of 511 keV γ-photons is generated during a positron annihilation; their directions differ by 180°. The moving path and energy information can be utilized to form a 3D imaging test method in the industrial domain. However, scattered γ-photons are the major factor limiting the imaging precision of the test method. This study proposes a γ-photon single-scattering correction method from the perspective of spatial geometry. The method first determines the possible scattering points when a scattered γ-photon pair hits a detector pair. The range of scattering angles can then be calculated according to the energy window. Finally, the number of scattered γ-photons is obtained from the attenuation of the total scattered γ-photons along their moving path. The corrected γ-photons are obtained by deducting the scattered γ-photons from the original ones. Two experiments were conducted to verify the effectiveness of the proposed scattering correction method. The results show that the proposed scattering correction method can efficiently correct for scattered γ-photons and improve the test accuracy.

  1. Prior image constrained image reconstruction in emerging computed tomography applications

    NASA Astrophysics Data System (ADS)

    Brunner, Stephen T.

    Advances have been made in computed tomography (CT), especially in the past five years, by incorporating prior images into the image reconstruction process. In this dissertation, we investigate prior image constrained image reconstruction in three emerging CT applications: dual-energy CT, multi-energy photon-counting CT, and cone-beam CT in image-guided radiation therapy. First, we investigate the application of Prior Image Constrained Compressed Sensing (PICCS) in dual-energy CT, which has been called "one of the hottest research areas in CT." Phantom and animal studies are conducted using a state-of-the-art 64-slice GE Discovery 750 HD CT scanner to investigate the extent to which PICCS can enable radiation dose reduction in material density and virtual monochromatic imaging. Second, we extend the application of PICCS from dual-energy CT to multi-energy photon-counting CT, which has been called "one of the 12 topics in CT to be critical in the next decade." Numerical simulations are conducted to generate multiple energy bin images for a photon-counting CT acquisition and to investigate the extent to which PICCS can enable radiation dose efficiency improvement. Third, we investigate the performance of a newly proposed prior image constrained scatter correction technique to correct scatter-induced shading artifacts in cone-beam CT, which, when used in image-guided radiation therapy procedures, can assist in patient localization, and potentially, dose verification and adaptive radiation therapy. Phantom studies are conducted using a Varian 2100 EX system with an on-board imager to investigate the extent to which the prior image constrained scatter correction technique can mitigate scatter-induced shading artifacts in cone-beam CT. Results show that these prior image constrained image reconstruction techniques can reduce radiation dose in dual-energy CT by 50% in phantom and animal studies in material density and virtual monochromatic imaging, can lead to radiation dose efficiency improvement in multi-energy photon-counting CT, and can mitigate scatter-induced shading artifacts in cone-beam CT in full-fan and half-fan modes.

  2. Scatter correction method for x-ray CT using primary modulation: Phantom studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao Hewei; Fahrig, Rebecca; Bennett, N. Robert

    Purpose: Scatter correction is a major challenge in x-ray imaging using large area detectors. Recently, the authors proposed a promising scatter correction method for x-ray computed tomography (CT) using primary modulation. Proof of concept was previously illustrated by Monte Carlo simulations and physical experiments on a small phantom with a simple geometry. In this work, the authors provide a quantitative evaluation of the primary modulation technique and demonstrate its performance in applications where scatter correction is more challenging. Methods: The authors first analyze the potential errors of the estimated scatter in the primary modulation method. On two tabletop CT systems, the method is investigated using three phantoms: A Catphan©600 phantom, an anthropomorphic chest phantom, and the Catphan©600 phantom with two annuli. Two different primary modulators are also designed to show the impact of the modulator parameters on the scatter correction efficiency. The first is an aluminum modulator with a weak modulation and a low modulation frequency, and the second is a copper modulator with a strong modulation and a high modulation frequency. Results: On the Catphan©600 phantom in the first study, the method reduces the error of the CT number in the selected regions of interest (ROIs) from 371.4 to 21.9 Hounsfield units (HU); the contrast to noise ratio also increases from 10.9 to 19.2. On the anthropomorphic chest phantom in the second study, which represents a more difficult case due to the high scatter signals and object heterogeneity, the method reduces the error of the CT number from 327 to 19 HU in the selected ROIs and from 31.4% to 5.7% on the overall average. The third study is to investigate the impact of object size on the efficiency of our method. The scatter-to-primary ratio estimation error on the Catphan©600 phantom without any annulus (20 cm in diameter) is at the level of 0.04; it rises to 0.07 and 0.1 on the phantom with an elliptical annulus (30 cm in the minor axis and 38 cm in the major axis) and with a circular annulus (38 cm in diameter). Conclusions: On the three phantom studies, good scatter correction performance of the proposed method has been demonstrated using both image comparisons and quantitative analysis. The theory and experiments demonstrate that a strong primary modulation that possesses a low transmission factor and a high modulation frequency is preferred for high scatter correction accuracy.

  3. Fully iterative scatter corrected digital breast tomosynthesis using GPU-based fast Monte Carlo simulation and composition ratio update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kyungsang; Ye, Jong Chul, E-mail: jong.ye@kaist.ac.kr; Lee, Taewon

    2015-09-15

    Purpose: In digital breast tomosynthesis (DBT), scatter correction is highly desirable, as it improves image quality at low doses. Because the DBT detector panel is typically stationary during the source rotation, antiscatter grids are not generally compatible with DBT; thus, a software-based scatter correction is required. This work proposes a fully iterative scatter correction method that uses a novel fast Monte Carlo simulation (MCS) with a tissue-composition ratio estimation technique for DBT imaging. Methods: To apply MCS to scatter estimation, the material composition in each voxel should be known. To overcome the lack of prior accurate knowledge of tissue composition for DBT, a tissue-composition ratio is estimated based on the observation that the breast tissues are principally composed of adipose and glandular tissues. Using this approximation, the composition ratio can be estimated from the reconstructed attenuation coefficients, and the scatter distribution can then be estimated by MCS using the composition ratio. The scatter estimation and image reconstruction procedures can be performed iteratively until an acceptable accuracy is achieved. For practical use, (i) the authors have implemented a fast MCS using a graphics processing unit (GPU), (ii) the MCS is simplified to transport only x-rays in the energy range of 10–50 keV, modeling Rayleigh and Compton scattering and the photoelectric effect using the tissue-composition ratio of adipose and glandular tissues, and (iii) downsampling is used because the scatter distribution varies rather smoothly. Results: The authors have demonstrated that the proposed method can accurately estimate the scatter distribution, and that the contrast-to-noise ratio of the final reconstructed image is significantly improved. The authors validated the performance of the MCS by changing the tissue thickness, composition ratio, and x-ray energy. The authors confirmed that the tissue-composition ratio estimation was quite accurate under a variety of conditions. The GPU-based fast MCS implementation took approximately 3 s to generate each angular projection for a 6 cm thick breast, which is believed to make this process acceptable for clinical applications. In addition, the clinical preferences of three radiologists were evaluated; the preference for the proposed method over the convolution-based method was statistically significant (p < 0.05, McNemar test). Conclusions: The proposed fully iterative scatter correction method and the GPU-based fast MCS using tissue-composition ratio estimation successfully improved the image quality within a reasonable computational time, which may potentially increase the clinical utility of DBT.
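
    The tissue-composition ratio step can be sketched as a per-voxel two-component mixture estimate (a minimal illustration, not the authors' code; the attenuation values below are placeholders standing in for tabulated adipose/glandular coefficients at the chosen energy):

    ```python
    import numpy as np

    def glandular_fraction(mu_recon, mu_adipose, mu_glandular):
        """Glandular volume fraction per voxel from reconstructed attenuation,
        assuming each voxel is a two-component adipose/glandular mixture."""
        frac = (mu_recon - mu_adipose) / (mu_glandular - mu_adipose)
        return np.clip(frac, 0.0, 1.0)        # enforce physical bounds [0, 1]

    # Placeholder coefficients (1/cm) at an assumed effective energy.
    mu_a, mu_g = 0.21, 0.27
    mu_volume = np.random.uniform(0.20, 0.28, size=(64, 64, 32))
    ratio_map = glandular_fraction(mu_volume, mu_a, mu_g)   # input to the MCS step
    ```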

  4. Extending 3D Near-Cloud Corrections from Shorter to Longer Wavelengths

    NASA Technical Reports Server (NTRS)

    Marshak, Alexander; Evans, K. Frank; Varnai, Tamas; Wen, Guoyong

    2014-01-01

    Satellite observations have shown a positive correlation between cloud amount and aerosol optical thickness (AOT) that can be explained by the humidification of aerosols near clouds, and/or by cloud contamination by sub-pixel size clouds and the cloud adjacency effect. The last effect may substantially increase reflected radiation in cloud-free columns, leading to overestimates in the retrieved AOT. For clear-sky areas near boundary layer clouds the main contribution to the enhancement of clear sky reflectance at shorter wavelengths comes from the radiation scattered into clear areas by clouds and then scattered to the sensor by air molecules. Because of the wavelength dependence of air molecule scattering, this process leads to a larger reflectance increase at shorter wavelengths, and can be corrected using a simple two-layer model. However, correcting only for molecular scattering skews spectral properties of the retrieved AOT. Kassianov and Ovtchinnikov proposed a technique that uses spectral reflectance ratios to retrieve AOT in the vicinity of clouds; they assumed that the cloud adjacency effect influences the spectral ratio between reflectances at two wavelengths less than it influences the reflectances themselves. This paper combines the two approaches: It assumes that the 3D correction for the shortest wavelength is known with some uncertainties, and then it estimates the 3D correction for longer wavelengths using a modified ratio method. The new approach is tested with 3D radiances simulated for 26 cumulus fields from Large-Eddy Simulations, supplemented with 40 aerosol profiles. The results showed that (i) for a variety of cumulus cloud scenes and aerosol profiles over ocean the 3D correction due to cloud adjacency effect can be extended from shorter to longer wavelengths and (ii) the 3D corrections for longer wavelengths are not very sensitive to unbiased random uncertainties in the 3D corrections at shorter wavelengths.

  5. Robust scatter correction method for cone-beam CT using an interlacing-slit plate

    NASA Astrophysics Data System (ADS)

    Huang, Kui-Dong; Xu, Zhe; Zhang, Ding-Hua; Zhang, Hua; Shi, Wen-Long

    2016-06-01

    Cone-beam computed tomography (CBCT) has been widely used in medical imaging and industrial nondestructive testing, but the presence of scattered radiation causes a significant reduction in image quality. In this article, a robust scatter correction method for CBCT using an interlacing-slit plate (ISP) is developed for convenient practical use. Firstly, a Gaussian filtering method is proposed to compensate for the missing data of the inner scatter image while avoiding excessively large values of the calculated inner scatter and smoothing the inner scatter field. Secondly, an interlacing-slit scan without detector gain correction is carried out to enhance the practicality and convenience of the scatter correction method. Finally, a denoising step for scatter-corrected projection images is added in the process flow to control the noise amplification. The experimental results show that the improved method can not only make the scatter correction more robust and convenient, but also achieve good quality in the scatter-corrected slice images. Supported by National Science and Technology Major Project of the Ministry of Industry and Information Technology of China (2012ZX04007021), Aeronautical Science Fund of China (2014ZE53059), and Fundamental Research Funds for Central Universities of China (3102014KYJD022)
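
    The core of the approach, sampling scatter in the slit shadows and filling/smoothing the sparse samples with a Gaussian kernel before subtraction, can be sketched as follows (a 2-D toy example under assumed slit spacing and filter width, not the published implementation):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Toy projection: smooth primary plus a slowly varying scatter field.
    ny, nx = 256, 256
    yy, xx = np.mgrid[0:ny, 0:nx]
    primary = 2000.0 * np.exp(-(((xx - 128) / 90.0) ** 2 + ((yy - 128) / 90.0) ** 2))
    scatter = 250.0 + 0.4 * yy
    projection = primary + scatter

    # Scatter is only measurable behind the slits (here: every 16th row).
    mask = np.zeros((ny, nx), dtype=bool)
    mask[::16, :] = True
    samples = np.where(mask, scatter, 0.0)

    # Normalized Gaussian filtering fills the gaps between slit samples and
    # simultaneously smooths the inner scatter field.
    sigma = 12.0
    scatter_est = gaussian_filter(samples, sigma) / np.maximum(
        gaussian_filter(mask.astype(float), sigma), 1e-6)

    corrected = projection - scatter_est                 # scatter-corrected projection
    print(float(np.abs(scatter_est - scatter).max()))    # estimation residual
    ```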

  6. Comparison of different Aethalometer correction schemes and a reference multi-wavelength absorption technique for ambient aerosol data

    NASA Astrophysics Data System (ADS)

    Saturno, Jorge; Pöhlker, Christopher; Massabò, Dario; Brito, Joel; Carbone, Samara; Cheng, Yafang; Chi, Xuguang; Ditas, Florian; Hrabě de Angelis, Isabella; Morán-Zuloaga, Daniel; Pöhlker, Mira L.; Rizzo, Luciana V.; Walter, David; Wang, Qiaoqiao; Artaxo, Paulo; Prati, Paolo; Andreae, Meinrat O.

    2017-08-01

    Deriving absorption coefficients from Aethalometer attenuation data requires different corrections to compensate for artifacts related to filter-loading effects, scattering by filter fibers, and scattering by aerosol particles. In this study, two different correction schemes were applied to seven-wavelength Aethalometer data, using multi-angle absorption photometer (MAAP) data as a reference absorption measurement at 637 nm. The compensation algorithms were compared to five-wavelength offline absorption measurements obtained with a multi-wavelength absorbance analyzer (MWAA), which serves as a multiple-wavelength reference measurement. The online measurements took place in the Amazon rainforest, from the wet-to-dry transition season to the dry season (June-September 2014). The mean absorption coefficient (at 637 nm) during this period was 1.8 ± 2.1 Mm-1, with a maximum of 15.9 Mm-1. Under these conditions, the filter-loading compensation was negligible. One of the correction schemes was found to artificially increase the short-wavelength absorption coefficients. It was found that accounting for the aerosol optical properties in the scattering compensation significantly affects the absorption Ångström exponent (åABS) retrievals. Proper Aethalometer data compensation schemes are crucial to retrieve the correct åABS, which is commonly implemented in brown carbon contribution calculations. Additionally, we found that the wavelength dependence of uncompensated Aethalometer attenuation data significantly correlates with the åABS retrieved from offline MWAA measurements.
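
    For reference, the absorption Ångström exponent (åABS) discussed here is conventionally derived from a pair of wavelengths as in the sketch below (standard definition; the wavelengths and coefficients are illustrative):

    ```python
    import numpy as np

    def absorption_angstrom_exponent(b_abs_1, b_abs_2, wl_1, wl_2):
        """Absorption Angstrom exponent from absorption coefficients (e.g. Mm-1)
        measured at two wavelengths (same length units for wl_1 and wl_2)."""
        return -np.log(b_abs_1 / b_abs_2) / np.log(wl_1 / wl_2)

    # Illustrative values: 3.2 Mm-1 at 470 nm and 1.8 Mm-1 at 660 nm.
    print(absorption_angstrom_exponent(3.2, 1.8, 470.0, 660.0))   # about 1.7
    ```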

  7. Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.

    PubMed

    Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua

    2018-02-01

    Multi-source interior computed tomography (CT) has great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises imaging quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of requiring no hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically and was used to calculate X-ray scattering signals in both the forward direction and cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experiments that were designed to emulate the image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images converged quickly toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the regions of interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, respectively, and the contrast-to-noise ratio at those ROIs increased by up to 44.3% and 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.

  8. Dose measurement in heterogeneous phantoms with an extrapolation chamber

    NASA Astrophysics Data System (ADS)

    Deblois, Francois

    A hybrid phantom-embedded extrapolation chamber (PEEC) made of Solid Water(TM) and bone-equivalent material was used for determining absolute dose in a bone-equivalent phantom irradiated with clinical radiation beams (cobalt-60 gamma rays; 6 and 18 MV x-rays; and 9 and 15 MeV electrons). The dose was determined with the Spencer-Attix cavity theory, using ionization gradient measurements and an indirect determination of the chamber air-mass through measurements of chamber capacitance. The air gaps used were between 2 and 3 mm and the sensitive air volume of the extrapolation chamber was remotely controlled through the motion of the motorized piston with a precision of +/-0.0025 mm. The collected charge was corrected for ionic recombination and diffusion in the chamber air volume following the standard two-voltage technique. Due to the hybrid chamber design, correction factors accounting for scatter deficit and electrode composition were determined and applied in the dose equation to obtain dose data for the equivalent homogeneous bone phantom. Correction factors for graphite electrodes were calculated with Monte Carlo techniques and the calculated results were verified through relative air cavity dose measurements for three different polarizing electrode materials: graphite, steel, and brass in conjunction with a graphite collecting electrode. Scatter deficit, due mainly to loss of lateral scatter in the hybrid chamber, reduces the dose to the air cavity in the hybrid PEEC in comparison with full bone PEEC from 0.7 to ~2% depending on beam quality and energy. In megavoltage photon and electron beams, graphite electrodes do not affect the dose measurement in the Solid Water(TM) PEEC but decrease the cavity dose by up to 5% in the bone-equivalent PEEC even for very thin graphite electrodes (<0.0025 cm). The collecting electrode material in comparison with the polarizing electrode material has a larger effect on the electrode correction factor; the thickness of thin electrodes, on the other hand, has a negligible effect on dose determination. The uncalibrated hybrid PEEC is an accurate and absolute device for measuring the dose directly in bone material in conjunction with appropriate correction factors determined with Monte Carlo techniques.

  9. Septal penetration correction in I-131 imaging following thyroid cancer treatment

    NASA Astrophysics Data System (ADS)

    Barrack, Fiona; Scuffham, James; McQuaid, Sarah

    2018-04-01

    Whole body gamma camera images acquired after I-131 treatment for thyroid cancer can suffer from collimator septal penetration artefacts because of the high energy of the gamma photons. This results in the appearance of 'spoke' artefacts, emanating from regions of high activity concentration, caused by the non-isotropic attenuation of the collimator. Deconvolution has the potential to reduce such artefacts by taking into account the non-Gaussian point-spread-function (PSF) of the system. A Richardson–Lucy deconvolution algorithm, with and without prior scatter-correction, was tested as a method of reducing septal penetration in planar gamma camera images. Phantom images (hot spheres within a warm background) were acquired and deconvolution using a measured PSF was applied. The results were evaluated through region-of-interest and line profile analysis to determine the success of artefact reduction and the optimal number of deconvolution iterations and damping parameter (λ). Without scatter-correction, the optimal results were obtained with 15 iterations and λ = 0.01, with the counts in the spokes reduced to 20% of the original value, indicating a substantial decrease in their prominence. When a triple-energy-window scatter-correction was applied prior to deconvolution, the optimal results were obtained with six iterations and λ = 0.02, which reduced the spoke counts to 3% of the original value. The prior application of scatter-correction therefore produced the best results, with a marked change in the appearance of the images. The optimal settings were then applied to six patient datasets to demonstrate their utility in the clinical setting. In all datasets, spoke artefacts were substantially reduced after the application of scatter-correction and deconvolution, with the mean spoke count being reduced to 10% of the original value. This indicates that deconvolution is a promising technique for septal penetration artefact reduction that could potentially improve the diagnostic accuracy of I-131 imaging. Novelty and significance: This work has demonstrated that scatter correction combined with deconvolution can be used to substantially reduce the appearance of septal penetration artefacts in I-131 phantom and patient gamma camera planar images, enabling improved visualisation of the I-131 distribution. Deconvolution with a symmetric PSF has previously been used to reduce artefacts in gamma camera images; however, this work details the novel use of an asymmetric PSF to remove the angularly dependent septal penetration artefacts.
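
    A minimal Richardson–Lucy iteration of the kind applied here can be sketched as below (illustrative only; the damping parameter λ used in the paper is omitted, and the asymmetric PSF is a synthetic stand-in for the measured one):

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(image, psf, n_iter=15, eps=1e-12):
        """Basic Richardson-Lucy deconvolution (no damping term)."""
        image = np.clip(image, 0.0, None)
        psf = psf / psf.sum()
        psf_mirror = psf[::-1, ::-1]
        estimate = np.full_like(image, image.mean(), dtype=float)
        for _ in range(n_iter):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = image / np.maximum(blurred, eps)
            estimate *= fftconvolve(ratio, psf_mirror, mode="same")
        return estimate

    # Synthetic example: point sources blurred by a PSF with faint 'spoke' arms.
    truth = np.zeros((128, 128))
    truth[64, 64], truth[40, 90] = 1000.0, 400.0
    psf = np.zeros((31, 31))
    psf[15, 15] = 1.0
    psf[15, :] += 0.02          # horizontal spoke (septal penetration surrogate)
    psf[:, 15] += 0.02          # vertical spoke
    blurred = fftconvolve(truth, psf / psf.sum(), mode="same")
    restored = richardson_lucy(blurred, psf, n_iter=15)
    ```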

  10. Determination of morphological parameters of biological cells by analysis of scattered-light distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burger, D.E.

    1979-11-01

    The extraction of morphological parameters from biological cells by analysis of light-scatter patterns is described. A light-scattering measurement system has been designed and constructed that allows one to visually examine and photographically record biological cells or cell models and measure the light-scatter pattern of an individual cell or cell model. Using a laser or conventional illumination, the imaging system consists of a modified microscope with a 35 mm camera attached to record the cell image or light-scatter pattern. Models of biological cells were fabricated. The dynamic range and angular distributions of light scattered from these models were compared to calculated distributions. Spectrum analysis techniques applied to the light-scatter data give the sought-after morphological cell parameters. These results compared favorably to the shape parameters of the fabricated cell models, confirming the mathematical modeling procedure. For nucleated biological material, correct nuclear and cell eccentricity as well as the nuclear and cytoplasmic diameters were determined. A method for comparing the flow equivalent of nuclear and cytoplasmic size to the actual dimensions is shown. This light-scattering experiment provides baseline information for automated cytology. In its present application, it involves correlating average size as measured in flow cytology to the actual dimensions determined from this technique. (ERB)

  11. Investigation on Beam-Blocker-Based Scatter Correction Method for Improving CT Number Accuracy

    NASA Astrophysics Data System (ADS)

    Lee, Hoyeon; Min, Jonghwan; Lee, Taewon; Pua, Rizza; Sabir, Sohail; Yoon, Kown-Ha; Kim, Hokyung; Cho, Seungryong

    2017-03-01

    Cone-beam computed tomography (CBCT) is gaining widespread use in various medical and industrial applications but suffers from a substantially larger amount of scatter than conventional diagnostic CT, resulting in relatively poor image quality. Various methods that can reduce and/or correct for the scatter in CBCT have therefore been developed. Scatter correction methods that use a beam-blocker are considered direct, measurement-based approaches, providing accurate scatter estimation from the data in the shadows of the beam-blocker. To the best of our knowledge, there has been no report on the significance of the scatter from the beam-blocker itself in such correction methods. In this paper, we identified the scatter from the beam-blocker that is detected in object-free projection data, investigated its influence on the accuracy of CBCT reconstructed images, and developed a scatter correction scheme that accounts for this scatter as well as the scatter from the scanned object.

  12. Efficient and robust analysis of complex scattering data under noise in microwave resonators.

    PubMed

    Probst, S; Song, F B; Bushev, P A; Ustinov, A V; Weides, M

    2015-02-01

    Superconducting microwave resonators are reliable circuits widely used for detection and as test devices for material research. A reliable determination of their external and internal quality factors is crucial for many modern applications, which either require fast measurements or operate in the single photon regime with small signal to noise ratios. Here, we use the circle fit technique with diameter correction and provide a step by step guide for implementing an algorithm for robust fitting and calibration of complex resonator scattering data in the presence of noise. The speedup and robustness of the analysis are achieved by employing an algebraic rather than an iterative fit technique for the resonance circle.
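
    For context, the circle-fit procedure with diameter correction is typically applied to notch-type resonator transmission data of the following form (a sketch of the widely used model; the environmental factors a, α and the cable delay τ describe the measurement background):

    ```latex
    % Notch-type resonator transmission model used for circle fitting (sketch).
    S_{21}(f) = a\, e^{i\alpha}\, e^{-2\pi i f \tau}
    \left[ 1 - \frac{(Q_l / |Q_c|)\, e^{i\phi}}
    {1 + 2 i Q_l \left( f / f_r - 1 \right)} \right]
    ```

    Here Q_l is the loaded quality factor, Q_c the complex coupling (external) quality factor, f_r the resonance frequency, and φ accounts for impedance mismatch; the internal quality factor then follows from 1/Q_i = 1/Q_l - Re(1/Q_c).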

  13. Remote sensing science for the Nineties; Proceedings of IGARSS '90 - 10th Annual International Geoscience and Remote Sensing Symposium, University of Maryland, College Park, May 20-24, 1990. Vols. 1, 2, & 3

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Various papers on remote sensing (RS) for the nineties are presented. The general topics addressed include: subsurface methods, radar scattering, oceanography, microwave models, atmospheric correction, passive microwave systems, RS in tropical forests, moderate resolution land analysis, SAR geometry and SNR improvement, image analysis, inversion and signal processing for geoscience, surface scattering, rain measurements, sensor calibration, wind measurements, terrestrial ecology, agriculture, geometric registration, subsurface sediment geology, radar modulation mechanisms, radar ocean scattering, SAR calibration, airborne radar systems, water vapor retrieval, forest ecosystem dynamics, land analysis, multisensor data fusion. Also considered are: geologic RS, RS sensor optical measurements, RS of snow, temperature retrieval, vegetation structure, global change, artificial intelligence, SAR processing techniques, geologic RS field experiment, stochastic modeling, topography and Digital Elevation model, SAR ocean waves, spaceborne lidar and optical, sea ice field measurements, millimeter waves, advanced spectroscopy, spatial analysis and data compression, SAR polarimetry techniques. Also discussed are: plant canopy modeling, optical RS techniques, optical and IR oceanography, soil moisture, sea ice back scattering, lightning cloud measurements, spatial textural analysis, SAR systems and techniques, active microwave sensing, lidar and optical, radar scatterometry, RS of estuaries, vegetation modeling, RS systems, EOS/SAR Alaska, applications for developing countries, SAR speckle and texture.

  14. The beam stop array method to measure object scatter in digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Lee, Haeng-hwa; Kim, Ye-seul; Park, Hye-Suk; Kim, Hee-Joung; Choi, Jae-Gu; Choi, Young-Wook

    2014-03-01

    Scattered radiation is inevitably generated in the object. The distribution of the scattered radiation is influenced by object thickness, field size, object-to-detector distance, and primary energy. One approach to measuring scatter intensities involves measuring the signal detected under the shadow of the lead discs of a beam-stop array (BSA). The scatter measured by the BSA includes not only the scattered radiation within the object (object scatter) but also scatter from external sources. The external scatter sources include the X-ray tube, detector, collimator, x-ray filter, and the BSA itself. Excluding background scattered radiation allows the method to be applied to different scanner geometries by simple parameter adjustments without prior knowledge of the scanned object. In this study, a method using the BSA to differentiate scatter generated in the phantom (object scatter) from the external background was used. Furthermore, this method was applied to the BSA algorithm to correct the object scatter. To confirm the background scattered radiation, we obtained scatter profiles and scatter fraction (SF) profiles in the direction perpendicular to the chest wall edge (CWE) with and without scattering material. The scatter profiles with and without the scattering material were similar in the region between 127 mm and 228 mm from the chest wall. This result indicated that the scatter measured by the BSA included background scatter. Moreover, the BSA algorithm with the proposed method could correct the object scatter, because the total radiation profiles after object scatter correction corresponded to the original image in the region between 127 mm and 228 mm from the chest wall. As a result, the BSA method to measure object scatter could be used to remove background scatter. This method can be applied to different scanner geometries after background scatter correction. In conclusion, the BSA algorithm with the proposed method is effective for correcting object scatter.

  15. Improvement of scattering correction for in situ coastal and inland water absorption measurement using exponential fitting approach

    NASA Astrophysics Data System (ADS)

    Ye, Huping; Li, Junsheng; Zhu, Jianhua; Shen, Qian; Li, Tongji; Zhang, Fangfang; Yue, Huanyin; Zhang, Bing; Liao, Xiaohan

    2017-10-01

    The absorption coefficient of water is an important bio-optical parameter for water optics and water color remote sensing. However, scattering correction is essential to obtain accurate absorption coefficient values in situ using the nine-wavelength absorption and attenuation meter AC9. The standard correction fails in Case 2 water because it assumes zero absorption in the near-infrared (NIR) region and therefore underestimates the absorption coefficient in the red region, which affects processes such as semi-analytical remote sensing inversion. In this study, the scattering contribution was evaluated by an exponential fitting approach using AC9 measurements at seven wavelengths (412, 440, 488, 510, 532, 555, and 715 nm), and the resulting scattering correction was applied. The correction was applied to representative in situ data of moderately turbid coastal water, highly turbid coastal water, eutrophic inland water, and turbid inland water. The results suggest that the absorption levels in the red and NIR regions are significantly higher than those obtained using standard scattering error correction procedures. Knowledge of the deviation between this method and the commonly used scattering correction methods will facilitate the evaluation of its effect on satellite remote sensing of water constituents and on general optical research using different scattering-correction methods.

  16. DISSOLVED ORGANIC FLUOROPHORES IN SOUTHEASTERN US COASTAL WATERS: CORRECTION METHOD FOR ELIMINATING RAYLEIGH AND RAMAN SCATTERING PEAKS IN EXCITATION-EMISSION MATRICES

    EPA Science Inventory

    Fluorescence-based observations provide useful, sensitive information concerning the nature and distribution of colored dissolved organic matter (CDOM) in coastal and freshwater environments. The excitation-emission matrix (EEM) technique has become widely used for evaluating sou...

  17. Atmospheric correction for inland water based on Gordon model

    NASA Astrophysics Data System (ADS)

    Li, Yunmei; Wang, Haijun; Huang, Jiazhu

    2008-04-01

    Remote sensing is widely used in water quality monitoring because it can acquire radiance information over large areas at the same time. However, more than 80% of the radiance detected by sensors at the top of the atmosphere is contributed by the atmosphere rather than directly by the water body. The water radiance signal is seriously confounded by atmospheric molecular and aerosol scattering and absorption, and a small bias in the estimate of the atmospheric influence can introduce large errors in water quality retrieval. To invert water composition accurately, the water and atmospheric contributions must first be separated. In this paper, we studied atmospheric correction methods for inland waters such as Taihu Lake. A Landsat-5 TM image was corrected based on the Gordon atmospheric correction model, and two kinds of data were used to calculate Rayleigh scattering, aerosol scattering, and radiative transmission above Taihu Lake; the influence of ozone and whitecaps was also corrected. One kind of data was synchronous meteorological data, and the other was a synchronous MODIS image. Finally, remote sensing reflectance was retrieved from the TM image. The effect of the different methods was analyzed using in situ measured water surface spectra. The results indicate that measured and estimated remote sensing reflectance were close for both methods. Compared to the method using the MODIS image, the method using synchronous meteorological data is more accurate, and its bias is close to the error criterion accepted for inland water quality inversion. This shows that the method is suitable for atmospheric correction of TM imagery over Taihu Lake.
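
    The Gordon-type decomposition underlying this correction is commonly written as below (a sketch of the standard form; the ozone and whitecap terms handled in the paper enter as additional corrections):

    ```latex
    % Top-of-atmosphere radiance decomposition (Gordon-type model, sketch).
    L_t(\lambda) = L_r(\lambda) + L_a(\lambda) + t(\lambda)\, L_w(\lambda)
    ```

    where L_t is the radiance at the top of the atmosphere, L_r the Rayleigh (molecular) scattering term, L_a the aerosol scattering term, t the diffuse transmittance, and L_w the water-leaving radiance sought for water quality inversion.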

  18. Radiometric calibration of Landsat Thematic Mapper multispectral images

    USGS Publications Warehouse

    Chavez, P.S.

    1989-01-01

    A main problem encountered in radiometric calibration of satellite image data is correcting for atmospheric effects. Without this correction, an image digital number (DN) cannot be converted to a surface reflectance value. In this paper the accuracy of a calibration procedure, which includes a correction for atmospheric scattering, is tested. Two simple methods, a stand-alone and an in situ sky radiance measurement technique, were used to derive the HAZE DN values for each of the six reflectance Thematic Mapper (TM) bands. The DNs of two Landsat TM images of Phoenix, Arizona were converted to surface reflectances. -from Author

  19. WE-EF-207-03: Design and Optimization of a CBCT Head Scanner for Detection of Acute Intracranial Hemorrhage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, J; Sisniega, A; Zbijewski, W

    Purpose: To design a dedicated x-ray cone-beam CT (CBCT) system suitable for deployment at the point of care and offering reliable detection of acute intracranial hemorrhage (ICH), traumatic brain injury (TBI), stroke, and other head and neck injuries. Methods: A comprehensive task-based image quality model was developed to guide system design and optimization of a prototype head scanner suitable for imaging of acute TBI and ICH. Previously reported models were expanded to include the effects of x-ray scatter correction necessary for detection of low contrast ICH and the contribution of bit depth (digitization noise) to imaging performance. Task-based detectability index provided the objective function for optimization of system geometry, x-ray source, detector type, anti-scatter grid, and technique at 10–25 mGy dose. Optimal characteristics were experimentally validated using a custom head phantom with 50 HU contrast ICH inserts imaged on a CBCT imaging bench allowing variation of system geometry, focal spot size, detector, grid selection, and x-ray technique. Results: The model guided selection of system geometry with a nominal source-detector distance of 1100 mm and an optimal magnification of 1.50. A focal spot size of ~0.6 mm was sufficient for spatial resolution requirements in ICH detection. Imaging at 90 kVp yielded the best tradeoff between noise and contrast. The model provided quantitation of tradeoffs between flat-panel and CMOS detectors with respect to electronic noise, field of view, and readout speed required for imaging of ICH. An anti-scatter grid was shown to provide modest benefit in conjunction with post-acquisition scatter correction. Images of the head phantom demonstrate visualization of millimeter-scale simulated ICH. Conclusions: Performance consistent with acute TBI and ICH detection is feasible with model-based system design and robust artifact correction in a dedicated head CBCT system. Further improvements can be achieved with incorporation of model-based iterative reconstruction techniques, also within the scope of the task-based optimization framework. David Foos and Xiaohui Wang are employees of Carestream Health.

  20. An efficient Monte Carlo-based algorithm for scatter correction in keV cone-beam CT

    NASA Astrophysics Data System (ADS)

    Poludniowski, G.; Evans, P. M.; Hansen, V. N.; Webb, S.

    2009-06-01

    A new method is proposed for scatter-correction of cone-beam CT images. A coarse reconstruction is used in initial iteration steps. Modelling of the x-ray tube spectra and detector response are included in the algorithm. Photon diffusion inside the imaging subject is calculated using the Monte Carlo method. Photon scoring at the detector is calculated using forced detection to a fixed set of node points. The scatter profiles are then obtained by linear interpolation. The algorithm is referred to as the coarse reconstruction and fixed detection (CRFD) technique. Scatter predictions are quantitatively validated against a widely used general-purpose Monte Carlo code: BEAMnrc/EGSnrc (NRCC, Canada). Agreement is excellent. The CRFD algorithm was applied to projection data acquired with a Synergy XVI CBCT unit (Elekta Limited, Crawley, UK), using RANDO and Catphan phantoms (The Phantom Laboratory, Salem NY, USA). The algorithm was shown to be effective in removing scatter-induced artefacts from CBCT images, and took as little as 2 min on a desktop PC. Image uniformity was greatly improved as was CT-number accuracy in reconstructions. This latter improvement was less marked where the expected CT-number of a material was very different to the background material in which it was embedded.

  1. Impact of reconstruction parameters on quantitative I-131 SPECT

    NASA Astrophysics Data System (ADS)

    van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.

    2016-07-01

    Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high-energy photons, however, render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods for these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs and (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast-to-noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction. The quantification error relative to a dose calibrator derived measurement was found to be <1%, -26% and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since this factor may be patient dependent. Monte Carlo based scatter correction including accurately simulated CDR modelling is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.
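
    For reference, the TEW scatter estimate used in reconstruction (2) is conventionally computed from two narrow windows adjacent to the photopeak as below (standard form; window widths depend on the acquisition protocol):

    ```latex
    % Triple-energy-window (TEW) estimate of scatter in the photopeak window.
    S_{\mathrm{peak}} \approx \left( \frac{C_{\mathrm{lower}}}{w_{\mathrm{lower}}}
    + \frac{C_{\mathrm{upper}}}{w_{\mathrm{upper}}} \right) \frac{w_{\mathrm{peak}}}{2}
    ```

    where C and w denote the counts and widths of the lower, upper, and photopeak windows; the estimate is subtracted pixel-by-pixel from the photopeak projection data.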

  2. A Practical Cone-beam CT Scatter Correction Method with Optimized Monte Carlo Simulations for Image-Guided Radiation Therapy

    PubMed Central

    Xu, Yuan; Bai, Ti; Yan, Hao; Ouyang, Luo; Pompos, Arnold; Wang, Jing; Zhou, Linghong; Jiang, Steve B.; Jia, Xun

    2015-01-01

    Cone-beam CT (CBCT) has become the standard image guidance tool for patient setup in image-guided radiation therapy. However, due to its large illumination field, scattered photons severely degrade its image quality. While kernel-based scatter correction methods have been used routinely in the clinic, it is still desirable to develop Monte Carlo (MC) simulation-based methods due to their accuracy. However, the high computational burden of the MC method has prevented routine clinical application. This paper reports our recent development of a practical method of MC-based scatter estimation and removal for CBCT. In contrast with conventional MC approaches that estimate scatter signals using a scatter-contaminated CBCT image, our method used a planning CT image for MC simulation, which has the advantages of accurate image intensity and absence of image truncation. In our method, the planning CT was first rigidly registered with the CBCT. Scatter signals were then estimated via MC simulation. After scatter signals were removed from the raw CBCT projections, a corrected CBCT image was reconstructed. The entire workflow was implemented on a GPU platform for high computational efficiency. Strategies such as projection denoising, CT image downsampling, and interpolation along the angular direction were employed to further enhance the calculation speed. We studied the impact of key parameters in the workflow on the resulting accuracy and efficiency, based on which the optimal parameter values were determined. Our method was evaluated in numerical simulation, phantom, and real patient cases. In the simulation cases, our method reduced mean HU errors from 44 HU to 3 HU and from 78 HU to 9 HU in the full-fan and the half-fan cases, respectively. In both the phantom and the patient cases, image artifacts caused by scatter, such as ring artifacts around the bowtie area, were reduced. With all the techniques employed, we achieved computation time of less than 30 sec including the time for both the scatter estimation and CBCT reconstruction steps. The efficacy of our method and its high computational efficiency make our method attractive for clinical use. PMID:25860299

  3. The integration of improved Monte Carlo compton scattering algorithms into the Integrated TIGER Series.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quirk, Thomas J., IV

    2004-08-01

    The Integrated TIGER Series (ITS) is a software package that solves coupled electron-photon transport problems. ITS performs analog photon tracking for energies between 1 keV and 1 GeV. Unlike its deterministic counterpart, the Monte Carlo calculations of ITS do not require a memory-intensive meshing of phase space; however, its solutions carry statistical variations. Reducing these variations is heavily dependent on runtime. Monte Carlo simulations must therefore be both physically accurate and computationally efficient. Compton scattering is the dominant photon interaction above 100 keV and below 5-10 MeV, with higher cutoffs occurring in lighter atoms. In its current model of Compton scattering, ITS corrects the differential Klein-Nishina cross sections (which assume a stationary, free electron) with the incoherent scattering function, a function dependent on both the momentum transfer and the atomic number of the scattering medium. While this technique accounts for binding effects on the scattering angle, it excludes the Doppler broadening the Compton line undergoes because of the momentum distribution in each bound state. To correct for these effects, Ribberfors' relativistic impulse approximation (IA) will be employed to create scattering cross sections differential in both energy and angle for each element. Using the parameterizations suggested by Brusa et al., scattered photon energies and angles can be accurately sampled at a high efficiency with minimal physical data. Two-body kinematics then dictates the electron's scattered direction and energy. Finally, the atomic ionization is relaxed via Auger emission or fluorescence. Future work will extend these improvements in incoherent scattering to compounds and to adjoint calculations.
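
    For reference, the free-electron cross section being corrected is the standard Klein–Nishina form; the binding correction described above multiplies it by the incoherent scattering function S(q, Z):

    ```latex
    % Klein-Nishina differential cross section for a free electron at rest,
    % with E' the Compton-shifted photon energy and r_e the classical electron radius.
    \frac{d\sigma_{\mathrm{KN}}}{d\Omega} = \frac{r_e^{2}}{2}
    \left( \frac{E'}{E} \right)^{2}
    \left( \frac{E}{E'} + \frac{E'}{E} - \sin^{2}\theta \right),
    \qquad
    E' = \frac{E}{1 + (E / m_e c^{2})(1 - \cos\theta)}
    ```

    The bound-electron model then uses dσ/dΩ = S(q, Z) · dσ_KN/dΩ, which the impulse-approximation work described above extends to a cross section differential in both energy and angle.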

  4. Regional Cerebral Blood Flow Analysis in Patients with Multiple Sclerosis Using TC-99M Hmpao and a Three - Spect System.

    NASA Astrophysics Data System (ADS)

    D'Souza, Maximian Felix

    1995-01-01

    The purpose of the present study was to determine the changes in regional cerebral blood flow (rCBF) with a cognitive task of semantic word retrieval (verbal fluency) in patients with multiple sclerosis (MS) and to compare them with the rCBF distribution of normal controls. Two groups of patients with low and high verbal fluency scores and two groups of normal controls were selected to determine a relationship between rCBF and verbal performance. A three-detector gamma camera (TRIAD 88) was used with the radiotracer Tc-99m HMPAO and single photon emission computed tomography (SPECT) to obtain 3D rCBF maps. The performance characteristics of the camera were comprehensively studied before it was utilized for clinical studies. In addition, technical improvements were implemented in the form of scatter correction and MRI-SPECT coregistration to potentially enhance the quantitative accuracy of the rCBF data. The performance analysis of the gamma camera showed remarkable consistency among the three detector heads and yielded results that were consistent with the manufacturer's specification. Measurements of physical objects also showed excellent image quality. The coregistration of SPECT and MRI images allowed more accurate anatomical localization for extraction of regional blood flow information. The validation of the scatter correction technique with physical phantoms indicated marked improvements in quantitative accuracy. There was a marked difference in activation patterns between patients and normal controls. In normal controls, individual subjects showed either an increase or a decrease in blood flow to left frontal and temporal regions; however, on average, there was no statistically significant change. The lack of significant change may suggest large variability among the subjects chosen or that the individual changes are not large enough to be significant. The results from MS patients showed several left cortical areas with statistically significant changes in blood flow after cognitive activation, especially in the low-fluency group, with decreased flow. Scatter-corrected data yielded mostly right-sided significant increases in blood flow. Additional studies must be conducted to further evaluate the scatter correction technique, and future studies of MS patients should focus on correlating lesion volume, location, and number with the rCBF distribution.

  5. Optimization of a simultaneous dual-isotope 201Tl/123I-MIBG myocardial SPECT imaging protocol with a CZT camera for trigger zone assessment after myocardial infarction for routine clinical settings: Are delayed acquisition and scatter correction necessary?

    PubMed

    D'estanque, Emmanuel; Hedon, Christophe; Lattuca, Benoît; Bourdon, Aurélie; Benkiran, Meriem; Verd, Aurélie; Roubille, François; Mariano-Goulart, Denis

    2017-08-01

    Dual-isotope 201Tl/123I-MIBG SPECT can assess trigger zones (dysfunctions in the autonomic nervous system located in areas of viable myocardium) that are substrates for ventricular arrhythmias after STEMI. This study evaluated the necessity of delayed acquisition and scatter correction for dual-isotope 201Tl/123I-MIBG SPECT studies with a CZT camera to identify trigger zones after revascularization in patients with STEMI in routine clinical settings. Sixty-nine patients were prospectively enrolled after revascularization to undergo 201Tl/123I-MIBG SPECT using a CZT camera (Discovery NM 530c, GE). The first acquisition was a single thallium study (before MIBG administration); the second and the third were early and late dual-isotope studies. We compared the scatter-uncorrected and scatter-corrected (TEW method) thallium studies with the results of magnetic resonance imaging or transthoracic echography (reference standard) to diagnose myocardial necrosis. Summed rest scores (SRS) were significantly higher in the delayed MIBG studies than in the early MIBG studies. SRS and necrosis surface were significantly higher in the delayed thallium studies with scatter correction than without scatter correction, leading to fewer trigger zone diagnoses for the scatter-corrected studies. Compared with the scatter-uncorrected studies, the late thallium scatter-corrected studies provided the best diagnostic values for myocardial necrosis assessment. Delayed acquisitions and scatter-corrected dual-isotope 201Tl/123I-MIBG SPECT acquisitions provide an improved evaluation of trigger zones in routine clinical settings after revascularization for STEMI.

  6. A single-scattering correction for the seismo-acoustic parabolic equation.

    PubMed

    Collins, Michael D

    2012-04-01

    An efficient single-scattering correction that does not require iterations is derived and tested for the seismo-acoustic parabolic equation. The approach is applicable to problems involving gradual range dependence in a waveguide with fluid and solid layers, including the key case of a sloping fluid-solid interface. The single-scattering correction is asymptotically equivalent to a special case of a single-scattering correction for problems that only have solid layers [Küsel et al., J. Acoust. Soc. Am. 121, 808-813 (2007)]. The single-scattering correction has a simple interpretation (conservation of interface conditions in an average sense) that facilitated its generalization to problems involving fluid layers. Promising results are obtained for problems in which the ocean bottom interface has a small slope.

  7. Tomographic imaging of bone composition using coherently scattered x rays

    NASA Astrophysics Data System (ADS)

    Batchelar, Deidre L.; Dabrowski, W.; Cunningham, Ian A.

    2000-04-01

    Bone tissue consists primarily of calcium hydroxyapatite crystals (bone mineral) and collagen fibrils. Bone mineral density (BMD) is commonly used as an indicator of bone health. Techniques available at present for assessing bone health provide a measure of BMD, but do not provide information about the degree of mineralization of the bone tissue. This may be adequate for assessing diseases in which the collagen-mineral ratio remains constant, as assumed in osteoporosis, but is insufficient when the mineralization state is known to change, as in osteomalacia. No tool exists for the in situ examination of collagen and hydroxyapatite density distributions independently. Coherent-scatter computed tomography (CSCT) is a technique we are developing that produces images of the low- angle scatter properties of tissue. These depend on the molecular structure of the scatterer making it possible to produce material-specific maps of each component in a conglomerate. After corrections to compensate for exposure fluctuations, self-attenuation of scatter and the temporal response of the image intensifier, material-specific images of mineral, collagen, fat and water distributions are obtained. The gray-level in these images provides the volumetric density of each component independently.

  8. Prior image constrained scatter correction in cone-beam computed tomography image-guided radiation therapy.

    PubMed

    Brunner, Stephen; Nett, Brian E; Tolakanahalli, Ranjini; Chen, Guang-Hong

    2011-02-21

    X-ray scatter is a significant problem in cone-beam computed tomography when thicker objects and larger cone angles are used, as scattered radiation can lead to reduced contrast and CT number inaccuracy. Advances have been made in x-ray computed tomography (CT) by incorporating a high quality prior image into the image reconstruction process. In this paper, we extend this idea to correct scatter-induced shading artifacts in cone-beam CT image-guided radiation therapy. Specifically, this paper presents a new scatter correction algorithm which uses a prior image with low scatter artifacts to reduce shading artifacts in cone-beam CT images acquired under conditions of high scatter. The proposed correction algorithm begins with an empirical hypothesis that the target image can be written as a weighted summation of a series of basis images that are generated by raising the raw cone-beam projection data to different powers, and then, reconstructing using the standard filtered backprojection algorithm. The weight for each basis image is calculated by minimizing the difference between the target image and the prior image. The performance of the scatter correction algorithm is qualitatively and quantitatively evaluated through phantom studies using a Varian 2100 EX System with an on-board imager. Results show that the proposed scatter correction algorithm using a prior image with low scatter artifacts can substantially mitigate scatter-induced shading artifacts in both full-fan and half-fan modes.
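
    The weighting step described above amounts to a small linear least-squares problem; a minimal sketch follows (the basis images here are synthetic placeholders standing in for filtered-backprojection reconstructions of the projection data raised to different powers):

    ```python
    import numpy as np

    # Synthetic stand-ins: in practice each basis image B_k would be an FBP
    # reconstruction of the raw projections raised to the power p_k.
    rng = np.random.default_rng(1)
    shape = (64, 64)
    prior = rng.normal(0.0, 1.0, shape)                 # low-scatter prior image
    powers = [0.6, 0.8, 1.0, 1.2]                       # assumed exponent set
    basis = [prior * p + rng.normal(0.0, 0.05, shape) for p in powers]

    # Solve min_w || sum_k w_k B_k - prior ||^2 (linear least squares).
    A = np.stack([b.ravel() for b in basis], axis=1)    # (n_pixels, n_basis)
    w, *_ = np.linalg.lstsq(A, prior.ravel(), rcond=None)
    corrected = sum(wk * bk for wk, bk in zip(w, basis))  # corrected CBCT estimate
    ```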

  9. Hot spot variability and lithography process window investigation by CDU improvement using CDC technique

    NASA Astrophysics Data System (ADS)

    Thamm, Thomas; Geh, Bernd; Djordjevic Kaufmann, Marija; Seltmann, Rolf; Bitensky, Alla; Sczyrba, Martin; Samy, Aravind Narayana

    2018-03-01

    In this paper, we concentrate on the well-known CDC technique from Carl Zeiss for improving the CD distribution on the wafer by improving the reticle CDU, and on its impact on hotspots and the litho process window. The CDC technique uses ultra-short-pulse laser technology, which writes micro-level shade-in elements (also known as "pixels") into the mask quartz bulk material. These scatter centers can selectively attenuate certain areas of the reticle at higher resolution than other methods and thus improve the CD uniformity. In the first section, we compare the CDC technique with scanner dose correction schemes. The CDC technique has unique advantages over scanner correction schemes with respect to spatial resolution and intra-field flexibility; however, given the scanner's flexibility across the wafer, the two methods are complementary rather than competing. In the second section we show that a reference-feature-based correction scheme can be used to improve the CDU of a full chip with multiple different features that have different MEEF and dose sensitivities. In detail, we discuss the impact of forward-scattered light originating from the CDC pixels on the illumination source and the related proximity signature, and we show that the impact on proximity is small compared to the CDU benefit of the CDC technique. We then show to what extent the reduced variability across the reticle results in a better common electrical process window for a whole chip design across the whole reticle field on the wafer. Finally, we discuss electrical verification results comparing masks with purposely degraded CDU that were repaired by the CDC technique against inherently good "golden" masks on a complex logic device. No yield difference is observed between the repaired masks and the masks with good CDU.

  10. Clutter suppression and classification using twin inverted pulse sonar in ship wakes.

    PubMed

    Leighton, T G; Finfer, D C; Chua, G H; White, P R; Dix, J K

    2011-11-01

    Twin inverted pulse sonar (TWIPS) is here deployed in the wake of a moored rigid inflatable boat (RIB) with propeller turning, and then in the wake of a moving tanker of 4580 dry weight tonnage (the Whitchallenger). This is done first to test its ability to distinguish between scatter from the wake and scatter from the seabed, and second to test its ability to improve detectability of the seabed through the wake, compared to conventional sonar processing techniques. TWIPS does this by distinguishing between linear and nonlinear scatterers and has the further property of distinguishing those nonlinear targets which scatter energy at the even-powered harmonics from those which scatter in the odd-powered harmonics. TWIPS can also, in some manifestations, require no range correction (and therefore does not require the a priori environment knowledge necessary for most remote detection technologies).

  11. Dose and scatter characteristics of a novel cone beam CT system for musculoskeletal extremities

    NASA Astrophysics Data System (ADS)

    Zbijewski, W.; Sisniega, A.; Vaquero, J. J.; Muhit, A.; Packard, N.; Senn, R.; Yang, D.; Yorkston, J.; Carrino, J. A.; Siewerdsen, J. H.

    2012-03-01

    A novel cone-beam CT (CBCT) system has been developed with promising capabilities for musculoskeletal imaging (e.g., weight-bearing extremities and combined radiographic / volumetric imaging). The prototype system demonstrates diagnostic-quality imaging performance, while the compact geometry and short scan orbit raise new considerations for scatter management and dose characterization that challenge conventional methods. The compact geometry leads to elevated, heterogeneous x-ray scatter distributions, even for small anatomical sites (e.g., knee or wrist), and the short scan orbit results in a non-uniform dose distribution. These complex dose and scatter distributions were investigated via experimental measurements and GPU-accelerated Monte Carlo (MC) simulation. The combination provided a powerful basis for characterizing dose distributions in patient-specific anatomy, investigating the benefits of an antiscatter grid, and examining distinct contributions of coherent and incoherent scatter in artifact correction. Measurements with a 16 cm CTDI phantom show that the dose from the short-scan orbit (0.09 mGy/mAs at isocenter) varies from 0.16 to 0.05 mGy/mAs at various locations on the periphery (all obtained at 80 kVp). MC estimation agreed with dose measurements within 10-15%. The dose distribution in patient-specific anatomy was computed with MC, confirming such heterogeneity and highlighting the elevated energy deposition in bone (factor of ~5-10) compared to soft tissue. A scatter-to-primary ratio (SPR) of up to ~1.5-2 was evident in some regions of the knee. A 10:1 antiscatter grid was found earlier to result in significant improvement in soft-tissue imaging performance without increase in dose. The results of MC simulations elucidated the mechanism behind scatter reduction in the presence of a grid. A ~3-fold reduction in average SPR was found in the MC simulations; however, a linear grid was found to impart additional heterogeneity in the scatter distribution, mainly due to the increase in the contribution of coherent scatter with increased spatial variation. Scatter correction using MC-generated scatter distributions demonstrated significant improvement in cupping and streaks. Physical experimentation combined with GPU-accelerated MC simulation provided a sophisticated yet practical approach to identifying low-dose acquisition techniques, optimizing scatter correction methods, and evaluating patient-specific dose.

  12. Compact Polarimetry in a Low Frequency Spaceborne Context

    NASA Technical Reports Server (NTRS)

    Truong-Loi, M-L.; Freeman, A.; Dubois-Fernandez, P.; Pottier, E.

    2011-01-01

    Compact polarimetry has been shown to be an interesting alternative mode to full polarimetry when global coverage and revisit time are key issues. It consists of transmitting a single polarization while receiving on two. Several critical points have been identified, one being the Faraday rotation (FR) correction and the other the calibration. When a low frequency electromagnetic wave travels through the ionosphere, it undergoes a rotation of the polarization plane about the radar line of sight for a linearly polarized wave, and a simple phase shift for a circularly polarized wave. In a low frequency radar, the only possible choice of the transmit polarization is the circular one, in order to guarantee that the scattering element on the ground is illuminated with a constant polarization independently of the ionosphere state. This allows meaningful time-series analysis and interferometry, as long as the Faraday rotation effect is corrected for the return path. In full-polarimetric (FP) mode, two techniques allow the FR to be estimated: the Freeman method, which uses linearly polarized data, and the Bickel and Bates theory, based on the transformation of the measured scattering matrix to a circular basis. In CP mode, an alternate procedure is presented which relies on the bare-surface scattering properties. These bare surfaces are selected using the conformity coefficient, which is invariant with FR. This coefficient is compared to other published classifications to show its potential in distinguishing three different scattering types: surface, double-bounce, and volume. The performance of the bare-surface selection and FR estimation is evaluated on PALSAR and airborne data. Once the bare surfaces are selected and the Faraday angle estimated over them, the correction can be applied over the whole scene. The algorithm is compared with both FP techniques. In the last part of the paper, the calibration of a CP system from the point of view of classical matrix transformation methods in polarimetry is proposed.

  13. Adaptation of the University of Wisconsin High Spectral Resolution Lidar for Polarization and Multiple Scattering Measurements

    NASA Technical Reports Server (NTRS)

    Eloranta, E. W.; Piironen, P. K.

    1996-01-01

    Quantitative lidar measurements of aerosol scattering are hampered by the need for calibrations and the problem of correcting observed backscatter profiles for the effects of attenuation. The University of Wisconsin High Spectral Resolution Lidar (HSRL) addresses these problems by separating molecular scattering contributions from the aerosol scattering; the molecular scattering is then used as a calibration target that is available at each point in the observed profiles. While the HSRL approach has intrinsic advantages over competing techniques, realization of these advantages requires implementation of a technically demanding system which is potentially very sensitive to changes in temperature and mechanical alignments. This paper describes a new implementation of the HSRL in an instrumented van which allows measurements during field experiments. The HSRL was modified to measure depolarization. In addition, both the signal amplitude and depolarization variations with receiver field of view are simultaneously measured. This allows for discrimination of ice clouds from water clouds and observation of multiple scattering contributions to the lidar return.

  14. Curvature of blended rolled edge reflectors at the shadow boundary contour

    NASA Technical Reports Server (NTRS)

    Ellingson, S. W.

    1988-01-01

    A technique is advanced for computing the radius of curvature of blended rolled edge reflector surfaces at the shadow boundary, in the plane perpendicular to the shadow boundary contour. This curvature must be known in order to compute the spurious endpoint contributions in the physical optics (PO) solution for the scattering from reflectors with rolled edges. The technique is applicable to reflectors with radially-defined rim-shapes and rolled edge terminations. The radius of curvature for several basic reflector systems is computed, and it is shown that this curvature can vary greatly along the shadow boundary contour. Finally, the total PO field in the target zone of a sample compact range system is computed and corrected using the shadow boundary radius of curvature, obtained using the technique. It is shown that the fields obtained are a better approximation to the true scattered fields.
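    As a generic illustration of the quantity being computed (not the paper's formulation for rolled-edge surfaces), the snippet below evaluates the radius of curvature R = 1/kappa of a planar parametric cross-section curve numerically; the circle used as a test case is arbitrary.

```python
import numpy as np

def radius_of_curvature(x, y, t):
    """Numerical radius of curvature of a planar curve (x(t), y(t)) sampled at t:
    kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2), R = 1/kappa."""
    dx, dy = np.gradient(x, t), np.gradient(y, t)
    ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)
    kappa = np.abs(dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
    return 1.0 / kappa

# sanity check on a circle of radius 3: R should be ~3 at every sample
t = np.linspace(0.0, 2.0 * np.pi, 2001)
print(radius_of_curvature(3 * np.cos(t), 3 * np.sin(t), t)[1000])
```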

  15. Scattered image artifacts from cone beam computed tomography and its clinical potential in bone mineral density estimation.

    PubMed

    Ko, Hoon; Jeong, Kwanmoon; Lee, Chang-Hoon; Jun, Hong Young; Jeong, Changwon; Lee, Myeung Su; Nam, Yunyoung; Yoon, Kwon-Ha; Lee, Jinseok

    2016-01-01

    Image artifacts affect the quality of medical images and may obscure anatomic structure and pathology. Numerous methods for suppression and correction of scattered image artifacts have been suggested in the past three decades. In this paper, we assessed the feasibility of using information on scattered artifacts for estimation of bone mineral density (BMD) without dual-energy X-ray absorptiometry (DXA) or quantitative computed tomographic imaging (QCT). To investigate the relationship between scattered image artifacts and BMD, we first used a forearm phantom and cone-beam computed tomography. In the phantom, we considered two regions of interest, bone-equivalent solid material containing 50 mg HA per cm³ and water, to represent low- and high-density trabecular bone, respectively. We compared the scattered image artifacts in the high-density material with those in the low-density material. The technique was then applied to osteoporosis patients and healthy subjects to assess its feasibility for BMD estimation. The high-density material produced a greater number of scattered image artifacts than the low-density material. Moreover, the radius and ulna of healthy subjects produced a greater number of scattered image artifacts than those from osteoporosis patients. Although other parameters, such as bone thickness and X-ray incidence, should be considered, our technique facilitated BMD estimation directly without DXA or QCT. We believe that BMD estimation based on assessment of scattered image artifacts may benefit the prevention, early treatment and management of osteoporosis.

  16. Simulating the influence of scatter and beam hardening in dimensional computed tomography

    NASA Astrophysics Data System (ADS)

    Lifton, J. J.; Carmignato, S.

    2017-10-01

    Cone-beam x-ray computed tomography (XCT) is a radiographic scanning technique that allows the non-destructive dimensional measurement of an object’s internal and external features. XCT measurements are influenced by a number of different factors that are poorly understood. This work investigates how non-linear x-ray attenuation caused by beam hardening and scatter influences XCT-based dimensional measurements through the use of simulated data. For the measurement task considered, both scatter and beam hardening are found to influence dimensional measurements when evaluated using the ISO50 surface determination method. On the other hand, only beam hardening is found to influence dimensional measurements when evaluated using an advanced surface determination method. Based on the results presented, recommendations on the use of beam hardening and scatter correction for dimensional XCT are given.
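    For reference, the ISO50 surface determination mentioned above places the surface at the gray value halfway between the background (air) peak and the material peak of the reconstructed volume's histogram. The sketch below is a minimal version that assumes a roughly bimodal histogram; the crude half-range split used to separate the two peaks is an assumption, not part of the cited work.

```python
import numpy as np

def iso50_threshold(volume, bins=512):
    """ISO50 threshold: midpoint between the air peak and the material peak
    of the gray-value histogram (assumes a clearly bimodal distribution)."""
    hist, edges = np.histogram(volume.ravel(), bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mid = bins // 2                                   # crude split between the two modes
    g_air = centers[np.argmax(hist[:mid])]
    g_material = centers[mid + np.argmax(hist[mid:])]
    return 0.5 * (g_air + g_material)

# toy bimodal "volume": air around 0.1, material around 0.8 (arbitrary units)
rng = np.random.default_rng(1)
vol = np.concatenate([rng.normal(0.1, 0.02, 10000), rng.normal(0.8, 0.05, 10000)])
print(iso50_threshold(vol))   # ~0.45
```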

  17. Characterization of scatter in digital mammography from use of Monte Carlo simulations and comparison to physical measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leon, Stephanie M., E-mail: Stephanie.Leon@uth.tmc.edu; Wagner, Louis K.; Brateman, Libby F.

    2014-11-01

    Purpose: Monte Carlo simulations were performed with the goal of verifying previously published physical measurements characterizing scatter as a function of apparent thickness. A secondary goal was to provide a way of determining what effect tissue glandularity might have on the scatter characteristics of breast tissue. The overall reason for characterizing mammography scatter in this research is the application of these data to an image processing-based scatter-correction program. Methods: MCNPX was used to simulate scatter from an infinitesimal pencil beam using typical mammography geometries and techniques. The spreading of the pencil beam was characterized by two parameters: mean radial extent (MRE) and scatter fraction (SF). The SF and MRE were found as functions of target, filter, tube potential, phantom thickness, and the presence or absence of a grid. The SF was determined by separating scatter and primary by the angle of incidence on the detector, then finding the ratio of the measured scatter to the total number of detected events. The accuracy of the MRE was determined by placing ring-shaped tallies around the impulse and fitting those data to the point-spread function (PSF) equation using the value for MRE derived from the physical measurements. The goodness-of-fit was determined for each data set as a means of assessing the accuracy of the physical MRE data. The effect of breast glandularity on the SF, MRE, and apparent tissue thickness was also considered for a limited number of techniques. Results: The agreement between the physical measurements and the results of the Monte Carlo simulations was assessed. With a grid, the SFs ranged from 0.065 to 0.089, with absolute differences between the measured and simulated SFs averaging 0.02. Without a grid, the range was 0.28–0.51, with absolute differences averaging −0.01. The goodness-of-fit values comparing the Monte Carlo data to the PSF from the physical measurements ranged from 0.96 to 1.00 with a grid and 0.65 to 0.86 without a grid. Analysis of the data suggested that the nongrid data could be better described by a biexponential function than the single exponential used here. The simulations assessing the effect of breast composition on SF and MRE showed only a slight impact on these quantities. When compared to a mix of 50% glandular/50% adipose tissue, the impact of substituting adipose or glandular breast compositions on the apparent thickness of the tissue was about 5%. Conclusions: The findings show agreement between the physical measurements published previously and the Monte Carlo simulations presented here; the resulting data can therefore be used more confidently for an application such as image processing-based scatter correction. The findings also suggest that breast composition does not have a major impact on the scatter characteristics of breast tissue. Application of the scatter data to the development of a scatter-correction software program can be simplified by ignoring the variations in density among breast tissues.
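    The two descriptors can be illustrated as follows: the scatter fraction is the ratio of tallied scatter to total detected events, and the radial spread can be summarized by fitting ring-tally data to a radially symmetric kernel. The single-exponential kernel, the ring radii and the counts below are assumptions for illustration (the abstract itself notes that a biexponential may fit non-grid data better).

```python
import numpy as np
from scipy.optimize import curve_fit

def scatter_fraction(scatter_counts, primary_counts):
    """SF = S / (S + P) from separately tallied scatter and primary events."""
    return scatter_counts / (scatter_counts + primary_counts)

def exp_kernel(r, amplitude, radial_extent):
    """Assumed radially symmetric scatter spread kernel."""
    return amplitude * np.exp(-r / radial_extent)

# fit ring-tally data (scatter per unit area vs. ring radius) to the kernel
rng = np.random.default_rng(2)
r = np.linspace(1.0, 60.0, 30)                                    # mm
tally = 100.0 * np.exp(-r / 12.0) * (1.0 + 0.05 * rng.standard_normal(r.size))
popt, _ = curve_fit(exp_kernel, r, tally, p0=(tally[0], 10.0))
print("fitted radial-extent parameter (mm):", popt[1])            # ~12
print("example SF with a grid:", scatter_fraction(8.0e4, 9.2e5))  # ~0.08
```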

  18. Detector-specific correction factors in radiosurgery beams and their impact on dose distribution calculations.

    PubMed

    García-Garduño, Olivia A; Rodríguez-Ávila, Manuel A; Lárraga-Gutiérrez, José M

    2018-01-01

    Silicon-diode-based detectors are commonly used for the dosimetry of small radiotherapy beams due to their relatively small volumes and high sensitivity to ionizing radiation. Nevertheless, silicon-diode-based detectors tend to over-respond in small fields because of their high density relative to water. For that reason, detector-specific beam correction factors have been recommended not only to correct the total scatter factors but also to correct the tissue-maximum and off-axis ratios. However, the application of these correction factors to in-depth and off-axis locations has not been studied. The goal of this work is to address the impact of the correction factors on the calculated dose distribution in static non-conventional photon beams (specifically, in stereotactic radiosurgery with circular collimators). To achieve this goal, the total scatter factors, tissue-maximum ratios, and off-axis ratios were measured with a stereotactic field diode for 4.0-, 10.0-, and 20.0-mm circular collimators. The irradiation was performed with a Novalis® linear accelerator using a 6-MV photon beam. The detector-specific correction factors were calculated and applied to the experimental dosimetry data for in-depth and off-axis locations. The corrected and uncorrected dosimetry data were used to commission a treatment planning system for radiosurgery planning. Various plans were calculated with simulated lesions using the uncorrected and corrected dosimetry. The resulting dose calculations were compared using the gamma index test with several criteria. This work presents important conclusions for the use of detector-specific beam correction factors in a treatment planning system. The use of the correction factors for total scatter factors has an important impact on monitor unit calculation. On the contrary, the use of the correction factors for tissue-maximum and off-axis ratios does not have an important impact on the dose distribution calculated by the treatment planning system. This conclusion is only valid for the combination of treatment planning system, detector, and correction factors used in this work; however, this technique can be applied to other treatment planning systems, detectors, and correction factors.
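    In the small-field formalism, applying such a factor to a total scatter (field output) factor amounts to multiplying the detector reading ratio by the detector- and field-specific correction. The sketch below shows that step only; the field sizes, readings and correction values are invented for illustration and are not data from this study.

```python
# Minimal sketch: corrected output factor = (detector reading ratio) x (correction factor).
# All numbers below are illustrative placeholders, not measured values.
reading_ratio = {"4mm": 0.612, "10mm": 0.842, "20mm": 0.951}   # M(field) / M(reference)
correction = {"4mm": 1.032, "10mm": 1.006, "20mm": 1.000}      # detector-specific factors

def corrected_output_factor(field):
    """Field output factor with the detector-specific correction applied."""
    return reading_ratio[field] * correction[field]

for field in reading_ratio:
    print(field, round(corrected_output_factor(field), 4))
```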

  19. Dual-energy digital mammography for calcification imaging: scatter and nonuniformity corrections.

    PubMed

    Kappadath, S Cheenu; Shaw, Chris C

    2005-11-01

    Mammographic images of small calcifications, which are often the earliest signs of breast cancer, can be obscured by overlapping fibroglandular tissue. We have developed and implemented a dual-energy digital mammography (DEDM) technique for calcification imaging under full-field imaging conditions using a commercially available aSi:H/CsI:Tl flat-panel based digital mammography system. The low- and high-energy images were combined using a nonlinear mapping function to cancel the tissue structures and generate the dual-energy (DE) calcification images. The total entrance-skin exposure and mean-glandular dose from the low- and high-energy images were constrained so that they were similar to screening-examination levels. To evaluate the DE calcification image, we designed a phantom using calcium carbonate crystals to simulate calcifications of various sizes (212-425 μm) overlaid with breast-tissue-equivalent material 5 cm thick with a continuously varying glandular-tissue ratio from 0% to 100%. We report on the effects of scatter radiation and nonuniformity in x-ray intensity and detector response on the DE calcification images. The nonuniformity was corrected by normalizing the low- and high-energy images with full-field reference images. Correction of scatter in the low- and high-energy images significantly reduced the background signal in the DE calcification image. Under the current implementation of DEDM, utilizing the mammography system and dose level tested, calcifications in the 300-355 μm size range were clearly visible in DE calcification images. Calcification threshold sizes decreased to the 250-280 μm size range when the visibility criteria were lowered to barely visible. Calcifications smaller than approximately 250 μm were usually not visible in most cases. The visibility of calcifications with our DEDM imaging technique was limited by quantum noise, not system noise.
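    The pre-processing steps described above (scatter subtraction and flat-field normalization of each energy image before combination) can be sketched generically as below. The published technique combines the images with a calibrated nonlinear mapping; the simple weighted log subtraction, the tissue-cancellation weight and all arrays here are stand-ins for illustration only.

```python
import numpy as np

def preprocess(raw, scatter_estimate, flat_reference):
    """Subtract the scatter estimate and normalize by a full-field reference image."""
    primary = np.clip(raw - scatter_estimate, 1e-6, None)
    return primary / flat_reference

def dual_energy_combination(low, high, w):
    """Toy weighted log subtraction to suppress tissue structure
    (stand-in for the calibrated nonlinear mapping used in the paper)."""
    return np.log(high) - w * np.log(low)

# illustrative inputs: normalized transmissions, 10%/8% scatter, flat references of 1
rng = np.random.default_rng(3)
low = rng.uniform(0.4, 0.6, (64, 64))
high = rng.uniform(0.5, 0.7, (64, 64))
de_image = dual_energy_combination(
    preprocess(low, 0.10 * low, np.ones_like(low)),
    preprocess(high, 0.08 * high, np.ones_like(high)),
    w=0.55)                          # assumed tissue-cancellation weight
print(de_image.shape)
```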

  20. Dual-energy digital mammography for calcification imaging: Scatter and nonuniformity corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kappadath, S. Cheenu; Shaw, Chris C.

    Mammographic images of small calcifications, which are often the earliest signs of breast cancer, can be obscured by overlapping fibroglandular tissue. We have developed and implemented a dual-energy digital mammography (DEDM) technique for calcification imaging under full-field imaging conditions using a commercially available aSi:H/CsI:Tl flat-panel based digital mammography system. The low- and high-energy images were combined using a nonlinear mapping function to cancel the tissue structures and generate the dual-energy (DE) calcification images. The total entrance-skin exposure and mean-glandular dose from the low- and high-energy images were constrained so that they were similar to screening-examination levels. To evaluate the DE calcification image, we designed a phantom using calcium carbonate crystals to simulate calcifications of various sizes (212-425 μm) overlaid with breast-tissue-equivalent material 5 cm thick with a continuously varying glandular-tissue ratio from 0% to 100%. We report on the effects of scatter radiation and nonuniformity in x-ray intensity and detector response on the DE calcification images. The nonuniformity was corrected by normalizing the low- and high-energy images with full-field reference images. Correction of scatter in the low- and high-energy images significantly reduced the background signal in the DE calcification image. Under the current implementation of DEDM, utilizing the mammography system and dose level tested, calcifications in the 300-355 μm size range were clearly visible in DE calcification images. Calcification threshold sizes decreased to the 250-280 μm size range when the visibility criteria were lowered to barely visible. Calcifications smaller than approximately 250 μm were usually not visible in most cases. The visibility of calcifications with our DEDM imaging technique was limited by quantum noise, not system noise.

  1. Experimental validation of a multi-energy x-ray adapted scatter separation method

    NASA Astrophysics Data System (ADS)

    Sossin, A.; Rebuffel, V.; Tabary, J.; Létang, J. M.; Freud, N.; Verger, L.

    2016-12-01

    Both in radiography and computed tomography (CT), recently emerged energy-resolved x-ray photon counting detectors enable the identification and quantification of individual materials comprising the inspected object. However, the approaches used for these operations require highly accurate x-ray images. The accuracy of the images is severely compromised by the presence of scattered radiation, which leads to a loss of spatial contrast and, more importantly, a bias in radiographic material imaging and artefacts in CT. The aim of the present study was to experimentally evaluate a recently introduced partial attenuation spectral scatter separation approach (PASSSA) adapted for multi-energy imaging. For this purpose, a prototype x-ray system was used. Several radiographic acquisitions of an anthropomorphic thorax phantom were performed. Reference primary images were obtained via the beam-stop (BS) approach. The attenuation images acquired from PASSSA-corrected data showed a substantial increase in local contrast and internal structure contour visibility when compared to uncorrected images. A substantial reduction of scatter induced bias was also achieved. Quantitatively, the developed method proved to be in relatively good agreement with the BS data. The application of the proposed scatter correction technique lowered the initial normalized root-mean-square error (NRMSE) of 45% between the uncorrected total and the reference primary spectral images by a factor of 9, thus reducing it to around 5%.
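    The figure of merit quoted above can be reproduced generically; the sketch below uses one common NRMSE convention (RMSE normalized by the mean of the reference image), which may differ in detail from the paper's definition, and the images are synthetic.

```python
import numpy as np

def nrmse(estimate, reference):
    """Root-mean-square error normalized by the mean of the reference image."""
    rmse = np.sqrt(np.mean((estimate - reference) ** 2))
    return rmse / np.mean(reference)

rng = np.random.default_rng(4)
reference_primary = rng.uniform(0.5, 1.0, (128, 128))     # e.g. beam-stop reference
uncorrected = reference_primary + 0.45 * reference_primary.mean()   # scatter bias
corrected = reference_primary + 0.05 * reference_primary.mean() * rng.standard_normal((128, 128))
print(nrmse(uncorrected, reference_primary), nrmse(corrected, reference_primary))
```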

  2. A Q-Band Free-Space Characterization of Carbon Nanotube Composites

    PubMed Central

    Hassan, Ahmed M.; Garboczi, Edward J.

    2016-01-01

    We present a free-space measurement technique for non-destructive non-contact electrical and dielectric characterization of nano-carbon composites in the Q-band frequency range of 30 GHz to 50 GHz. The experimental system and error correction model accurately reconstruct the conductivity of composite materials that are either thicker than the wave penetration depth, and therefore exhibit negligible microwave transmission (less than −40 dB), or thinner than the wave penetration depth and, therefore, exhibit significant microwave transmission. This error correction model implements a fixed wave propagation distance between antennas and corrects the complex scattering parameters of the specimen from two references, an air slab having geometrical propagation length equal to that of the specimen under test, and a metallic conductor, such as an aluminum plate. Experimental results were validated by reconstructing the relative dielectric permittivity of known dielectric materials and then used to determine the conductivity of nano-carbon composite laminates. This error correction model can simplify routine characterization of thin conducting laminates to just one measurement of scattering parameters, making the method attractive for research, development, and for quality control in the manufacturing environment. PMID:28057959

  3. Novel measuring strategies in neutron interferometry

    NASA Astrophysics Data System (ADS)

    Bonse, Ulrich; Wroblewski, Thomas

    1985-04-01

    Angular misalignment of a sample in a single crystal neutron interferometer leads to systematic errors in the effective sample thickness and, in this way, to errors in the determination of the coherent scattering length. The misalignment can be determined, and the errors corrected, by a second measurement at a different angular sample position. Furthermore, a method has been developed which allows the wavelength to be monitored during the measurements. These two techniques were tested by determining the scattering length of copper. A value of bc = 7.66(4) fm was obtained, which is in excellent agreement with previous measurements.
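    The geometry behind the two-position trick can be illustrated as follows, assuming the simple path-length model in which the effective thickness scales as D/cos(angle); this is a sketch of the principle, not the authors' analysis, and all numbers are invented.

```python
import numpy as np

def correct_misalignment(phase_nominal, phase_rotated, delta):
    """phase_nominal: phase at the nominal position (unknown misalignment eps)
    phase_rotated: phase after rotating the sample by a known angle delta (rad)
    Assumes phase is proportional to D / cos(angle). Returns (eps, phase for
    the true thickness D)."""
    r = phase_nominal / phase_rotated          # = cos(eps + delta) / cos(eps)
    eps = np.arctan((np.cos(delta) - r) / np.sin(delta))
    return eps, phase_nominal * np.cos(eps)

# toy check: true-thickness phase 100 (arb.), 2 deg misalignment, 5 deg rotation
eps_true, delta = np.deg2rad(2.0), np.deg2rad(5.0)
p1 = 100.0 / np.cos(eps_true)
p2 = 100.0 / np.cos(eps_true + delta)
print(correct_misalignment(p1, p2, delta))     # eps ~ 0.035 rad, corrected ~ 100.0
```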

  4. Pressure Measurements Using an Airborne Differential Absorption Lidar. Part 1; Analysis of the Systematic Error Sources

    NASA Technical Reports Server (NTRS)

    Flamant, Cyrille N.; Schwemmer, Geary K.; Korb, C. Laurence; Evans, Keith D.; Palm, Stephen P.

    1999-01-01

    Remote airborne measurements of the vertical and horizontal structure of the atmospheric pressure field in the lower troposphere are made with an oxygen differential absorption lidar (DIAL). A detailed analysis of this measurement technique is provided which includes corrections for imprecise knowledge of the detector background level, the oxygen absorption line parameters, and variations in the laser output energy. In addition, we analyze other possible sources of systematic errors, including spectral effects related to aerosol and molecular scattering, interference by rotational Raman scattering, and interference by isotopic oxygen lines.

  5. Optimization-based scatter estimation using primary modulation for computed tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yi; Ma, Jingchen; Zhao, Jun, E-mail: junzhao

    Purpose: Scatter reduces the image quality in computed tomography (CT), but scatter correction remains a challenge. A previously proposed primary modulation method simultaneously obtains the primary and scatter in a single scan. However, separating the scatter and primary in primary modulation is challenging because it is an underdetermined problem. In this study, an optimization-based scatter estimation (OSE) algorithm is proposed to estimate and correct scatter. Methods: In the concept of primary modulation, the primary is modulated, but the scatter remains smooth by inserting a modulator between the x-ray source and the object. In the proposed algorithm, an objective function is designed for separating the scatter and primary. Prior knowledge is incorporated in the optimization-based framework to improve the accuracy of the estimation: (1) the primary is always positive; (2) the primary is locally smooth and the scatter is smooth; (3) the location of penumbra can be determined; and (4) the scatter-contaminated data provide knowledge about which part is smooth. Results: The simulation study shows that the edge-preserving weighting in OSE improves the estimation accuracy near the object boundary. Simulation study also demonstrates that OSE outperforms the two existing primary modulation algorithms for most regions of interest in terms of the CT number accuracy and noise. The proposed method was tested on a clinical cone beam CT, demonstrating that OSE corrects the scatter even when the modulator is not accurately registered. Conclusions: The proposed OSE algorithm improves the robustness and accuracy in scatter estimation and correction. This method is promising for scatter correction of various kinds of x-ray imaging modalities, such as x-ray radiography, cone beam CT, and the fourth-generation CT.

  6. SU-C-201-02: Quantitative Small-Animal SPECT Without Scatter Correction Using High-Purity Germanium Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gearhart, A; Peterson, T; Johnson, L

    2015-06-15

    Purpose: To evaluate the impact of the exceptional energy resolution of germanium detectors for preclinical SPECT in comparison to conventional detectors. Methods: A cylindrical water phantom was created in GATE with a spherical Tc-99m source in the center. Sixty-four projections over 360 degrees using a pinhole collimator were simulated. The same phantom was simulated using air instead of water to establish the true reconstructed voxel intensity without attenuation. Attenuation correction based on the Chang method was performed on MLEM reconstructed images from the water phantom to determine a quantitative measure of the effectiveness of the attenuation correction. Similarly, a NEMA phantom was simulated, and the effectiveness of the attenuation correction was evaluated. Both simulations were carried out using both NaI detectors with an energy resolution of 10% FWHM and Ge detectors with an energy resolution of 1%. Results: Analysis shows that attenuation correction without scatter correction using germanium detectors can reconstruct a small spherical source to within 3.5%. Scatter analysis showed that for standard sized objects in a preclinical scanner, a NaI detector has a scatter-to-primary ratio between 7% and 12.5% compared to between 0.8% and 1.5% for a Ge detector. Preliminary results from line profiles through the NEMA phantom suggest that applying attenuation correction without scatter correction provides acceptable results for the Ge detectors but overestimates the phantom activity using NaI detectors. Due to the decreased scatter, we believe that the spillover ratio for the air and water cylinders in the NEMA phantom will be lower using germanium detectors compared to NaI detectors. Conclusion: This work indicates that the superior energy resolution of germanium detectors allows for less scattered photons to be included within the energy window compared to traditional SPECT detectors. This may allow for quantitative SPECT without implementing scatter correction, reducing uncertainties introduced by scatter correction algorithms. Funding provided by NIH/NIBIB grant R01EB013677; Todd Peterson, Ph.D., has had a research contract with PHDs Co., Knoxville, TN.
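    The Chang correction used here multiplies each reconstructed voxel by the reciprocal of its mean attenuation factor averaged over projection directions. Below is a minimal first-order version for a uniform attenuator in a single slice; the slice size, attenuation coefficient and ray-marching scheme are illustrative simplifications.

```python
import numpy as np

def chang_correction_factors(mask, mu, n_angles=64, pixel_size_cm=0.1):
    """First-order Chang factors for a uniform attenuator (one slice).
    mask: boolean array, True inside the object; mu: linear attenuation [1/cm].
    C(x, y) = 1 / mean_over_angles(exp(-mu * path length to the boundary))."""
    ny, nx = mask.shape
    factors = np.zeros(mask.shape, dtype=float)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for y, x in zip(*np.nonzero(mask)):
        attenuation = []
        for a in angles:
            dx, dy, n = np.cos(a), np.sin(a), 0
            # march outward along the ray until leaving the object
            while True:
                xi, yi = int(round(x + n * dx)), int(round(y + n * dy))
                if not (0 <= xi < nx and 0 <= yi < ny) or not mask[yi, xi]:
                    break
                n += 1
            attenuation.append(np.exp(-mu * n * pixel_size_cm))
        factors[y, x] = 1.0 / np.mean(attenuation)
    return factors

# uniform water disk, mu ~ 0.15 /cm at 140 keV; the centre gets the largest boost
yy, xx = np.mgrid[:64, :64]
disk = (xx - 32) ** 2 + (yy - 32) ** 2 < 25 ** 2
print(chang_correction_factors(disk, mu=0.15)[32, 32])
```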

  7. Scatter measurement and correction method for cone-beam CT based on single grating scan

    NASA Astrophysics Data System (ADS)

    Huang, Kuidong; Shi, Wenlong; Wang, Xinyu; Dong, Yin; Chang, Taoqi; Zhang, Hua; Zhang, Dinghua

    2017-06-01

    In cone-beam computed tomography (CBCT) systems based on flat-panel detector imaging, the presence of scatter significantly reduces the quality of slices. Based on the concept of collimation, this paper presents a scatter measurement and correction method based on a single grating scan. First, according to the characteristics of CBCT imaging, the scan method using a single grating and the design requirements of the grating are analyzed and determined. Second, by analyzing the composition of object projection images and object-and-grating projection images, the processing method for the scatter image at a single projection angle is proposed. In addition, to avoid additional scans, this paper proposes an angle-interpolation method for scatter images to reduce scan cost. Finally, the experimental results show that the scatter images obtained by this method are accurate and reliable, and the effect of scatter correction is evident. When the additional object-and-grating projection images are collected and interpolated at intervals of 30 deg, the scatter correction error of slices can still be controlled within 3%.
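    The angle-interpolation step can be sketched as below: scatter maps measured at sparse angles (e.g. every 30 deg) are linearly interpolated, per pixel, onto the full set of projection angles, treating the angle axis as periodic. The map sizes and values are synthetic.

```python
import numpy as np
from scipy.interpolate import interp1d

def interpolate_scatter_maps(scatter_maps, measured_angles_deg, all_angles_deg):
    """Linearly interpolate sparse-angle scatter maps onto all projection angles.
    scatter_maps: array (n_measured, ny, nx); angles in degrees, periodic in 360."""
    measured = np.asarray(measured_angles_deg, dtype=float)
    # replicate one period on each side so interpolation wraps around 360 deg
    angles = np.concatenate([measured - 360.0, measured, measured + 360.0])
    maps = np.concatenate([scatter_maps] * 3, axis=0)
    return interp1d(angles, maps, axis=0, kind="linear")(np.asarray(all_angles_deg, float))

# scatter maps measured every 30 deg, interpolated to 1-deg steps
rng = np.random.default_rng(5)
measured_maps = rng.uniform(0.1, 0.3, (12, 8, 8))
full = interpolate_scatter_maps(measured_maps, np.arange(0, 360, 30), np.arange(360))
print(full.shape)        # (360, 8, 8)
```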

  8. Production of thin targets by implantation for the measurement of the 16O + 16O elastic scattering below the Coulomb barrier

    NASA Astrophysics Data System (ADS)

    Silva, H.; Cruz, J.; Sánchez-Benítez, A. M.; Santos, C.; Luís, H.; Fonseca, M.; Jesus, A. P.

    2017-09-01

    In recent decades, the processes of fusion of 16O have been studied both theoretically and experimentally. However, the theoretical calculations are unable to fit both the elastic scattering cross sections and the fusion S-factors. Thin 16O transmission targets are required to measure forward elastic scattering in the 16O + 16O reaction. The areal density of the target must be high enough to maximize the reaction product yields, but not so high as to prevent a correct calculation of the effective beam energy. Besides this, the target must withstand beam interactions without noticeable deterioration, and contaminants must be minimal. In this study, the production of thin targets is performed with an innovative technique. Beam characterization and a preliminary spectrum for the elastic scattering are also presented, showing the suitability of these targets for the proposed reaction.

  9. Scatter and crosstalk corrections for 99mTc/123I dual-radionuclide imaging using a CZT SPECT system with pinhole collimators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, Peng; Hutton, Brian F.; Holstensson, Maria

    2015-12-15

    Purpose: The energy spectrum for a cadmium zinc telluride (CZT) detector has a low energy tail due to incomplete charge collection and intercrystal scattering. Due to these solid-state detector effects, scatter would be overestimated if the conventional triple-energy window (TEW) method is used for scatter and crosstalk corrections in CZT-based imaging systems. The objective of this work is to develop a scatter and crosstalk correction method for 99mTc/123I dual-radionuclide imaging for a CZT-based dedicated cardiac SPECT system with pinhole collimators (GE Discovery NM 530c/570c). Methods: A tailing model was developed to account for the low energy tail effects of the CZT detector. The parameters of the model were obtained using 99mTc and 123I point source measurements. A scatter model was defined to characterize the relationship between down-scatter and self-scatter projections. The parameters for this model were obtained from Monte Carlo simulation using SIMIND. The tailing and scatter models were further incorporated into a projection count model, and the primary and self-scatter projections of each radionuclide were determined with a maximum likelihood expectation maximization (MLEM) iterative estimation approach. The extracted scatter and crosstalk projections were then incorporated into MLEM image reconstruction as an additive term in forward projection to obtain scatter- and crosstalk-corrected images. The proposed method was validated using Monte Carlo simulation, a line source experiment, anthropomorphic torso phantom studies, and patient studies. The performance of the proposed method was also compared to that obtained with the conventional TEW method. Results: Monte Carlo simulations and the line source experiment demonstrated that the TEW method overestimated scatter, while the proposed method provided more accurate scatter estimation by considering the low energy tail effect. In the phantom study, improved defect contrasts were observed with both correction methods compared to no correction, especially for the images of 99mTc in dual-radionuclide imaging, where there is heavy contamination from 123I. In this case, the nontransmural defect contrast was improved from 0.39 to 0.47 with the TEW method and to 0.51 with the proposed method, and the transmural defect contrast was improved from 0.62 to 0.74 with the TEW method and to 0.73 with the proposed method. In the patient study, the proposed method provided higher myocardium-to-blood pool contrast than the TEW method. Similar to the phantom experiment, the improvement was the most substantial for the images of 99mTc in dual-radionuclide imaging. In this case, the myocardium-to-blood pool ratio was improved from 7.0 to 38.3 with the TEW method and to 63.6 with the proposed method. Compared to the TEW method, the proposed method also provided higher count levels in the reconstructed images in both phantom and patient studies, indicating reduced overestimation of scatter. Using the proposed method, consistent reconstruction results were obtained for both single-radionuclide data with scatter correction and dual-radionuclide data with scatter and crosstalk corrections, in both phantom and human studies.
Conclusions: The authors demonstrate that the TEW method leads to overestimation in scatter and crosstalk for the CZT-based imaging system, while the proposed scatter and crosstalk correction method can provide more accurate self-scatter and down-scatter estimations for quantitative single-radionuclide and dual-radionuclide imaging.
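    For reference, the conventional TEW estimate against which the proposed method is compared takes the scatter in the photopeak window as the trapezoidal area under the line joining the count densities of two narrow flanking windows. The window widths and counts below are illustrative.

```python
def tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_peak):
    """Triple-energy-window scatter estimate for the photopeak window:
    S = (C_lower / W_lower + C_upper / W_upper) * W_peak / 2."""
    return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

# e.g. a 28-keV-wide photopeak window with 3-keV flanking windows (illustrative)
scatter = tew_scatter_estimate(c_lower=1500, c_upper=300,
                               w_lower=3.0, w_upper=3.0, w_peak=28.0)
primary = 52000 - scatter      # total photopeak counts minus the scatter estimate
print(round(scatter), round(primary))
```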

  10. SU-E-J-135: Feasibility of Using Quantitative Cone Beam CT for Proton Adaptive Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jingqian, W; Wang, Q; Zhang, X

    2015-06-15

    Purpose: To investigate the feasibility of using scatter corrected cone beam CT (CBCT) for proton adaptive planning. Methods: A phantom study was used to evaluate the CT number difference between the planning CT (pCT), quantitative CBCT (qCBCT) with scatter correction and calibrated Hounsfield units using the adaptive scatter kernel superposition (ASKS) technique, and raw CBCT (rCBCT). After confirming the CT number accuracy, prostate patients, each with a pCT and several sets of weekly CBCT, were investigated for this study. Spot scanning proton treatment plans were independently generated on pCT, qCBCT and rCBCT. The treatment plans were then recalculated on all images. Dose-volume-histogram (DVH) parameters and gamma analysis were used to compare the dose distributions. Results: The phantom study suggested that the Hounsfield unit errors for different materials are within 20 HU for qCBCT and over 250 HU for rCBCT. For prostate patients, proton dose could be calculated accurately on qCBCT but not on rCBCT. When the original plan was recalculated on qCBCT, tumor coverage was maintained when the anatomy was consistent with the pCT. However, large dose variance was observed when the patient anatomy changed. An adaptive plan using qCBCT was able to recover tumor coverage and reduce dose to normal tissue. Conclusion: It is feasible to use quantitative CBCT (qCBCT) with scatter correction and calibrated Hounsfield units for proton dose calculation and adaptive planning in proton therapy. Partly supported by Varian Medical Systems.

  11. New Examination of the Raman Lidar Technique for Water Vapor and Aerosols. Paper 1; Evaluating the Temperature Dependent Lidar Equations

    NASA Technical Reports Server (NTRS)

    Whiteman, David N.

    2003-01-01

    The intent of this paper and its companion is to compile together the essential information required for the analysis of Raman lidar water vapor and aerosol data acquired using a single laser wavelength. In this first paper several details concerning the evaluation of the lidar equation when measuring Raman scattering are considered. These details include the influence of the temperature dependence of both pure rotational and vibrational-rotational Raman scattering on the lidar profile. These are evaluated for the first time using a new form of the lidar equation. The results indicate that, for the range of temperatures encountered in the troposphere, the magnitude of the temperature dependent effect can reach 10% or more for narrowband Raman water vapor measurements. Also the calculation of atmospheric transmission is examined carefully including the effects of depolarization. Different formulations of Rayleigh cross section determination commonly used in the lidar field are compared revealing differences up to 5% among the formulations. The influence of multiple scattering on the measurement of aerosol extinction using the Raman lidar technique is considered as are several photon pulse-pileup correction techniques.
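    One of the instrumental corrections mentioned, photon pulse pile-up, is often handled to first order with a non-paralyzable dead-time model; the form below and the 10 ns dead time are common assumptions for photon-counting channels, not values from the paper.

```python
def deadtime_correct(observed_rate_hz, tau_s):
    """Non-paralyzable dead-time (pulse pile-up) correction:
    true rate = observed / (1 - observed * tau)."""
    if observed_rate_hz * tau_s >= 1.0:
        raise ValueError("observed rate saturates the counter for this dead time")
    return observed_rate_hz / (1.0 - observed_rate_hz * tau_s)

# illustrative 10 ns effective dead time
for mhz in (1.0, 10.0, 50.0):
    corrected = deadtime_correct(mhz * 1e6, 10e-9)
    print(f"{mhz:5.1f} MHz observed -> {corrected / 1e6:6.2f} MHz corrected")
```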

  12. Improved scatter correction with factor analysis for planar and SPECT imaging

    NASA Astrophysics Data System (ADS)

    Knoll, Peter; Rahmim, Arman; Gültekin, Selma; Šámal, Martin; Ljungberg, Michael; Mirzaei, Siroos; Segars, Paul; Szczupak, Boguslaw

    2017-09-01

    Quantitative nuclear medicine imaging is an increasingly important frontier. In order to achieve quantitative imaging, various interactions of photons with matter have to be modeled and compensated. Although correction for photon attenuation has been addressed by including x-ray CT scans (accurate), correction for Compton scatter remains an open issue. The inclusion of scattered photons within the energy window used for planar or SPECT data acquisition decreases the contrast of the image. While a number of methods for scatter correction have been proposed in the past, in this work, we propose and assess a novel, user-independent framework applying factor analysis (FA). Extensive Monte Carlo simulations for planar and tomographic imaging were performed using the SIMIND software. Furthermore, planar acquisition of two Petri dishes filled with 99mTc solutions and a Jaszczak phantom study (Data Spectrum Corporation, Durham, NC, USA) using a dual head gamma camera were performed. In order to use FA for scatter correction, we subdivided the applied energy window into a number of sub-windows, serving as input data. FA results in two factor images (photo-peak, scatter) and two corresponding factor curves (energy spectra). Planar and tomographic Jaszczak phantom gamma camera measurements were recorded. The tomographic data (simulations and measurements) were processed for each angular position resulting in a photo-peak and a scatter data set. The reconstructed transaxial slices of the Jaszczak phantom were quantified using an ImageJ plugin. The data obtained by FA showed good agreement with the energy spectra, photo-peak, and scatter images obtained in all Monte Carlo simulated data sets. For comparison, the standard dual-energy window (DEW) approach was additionally applied for scatter correction. FA in comparison with the DEW method results in significant improvements in image accuracy for both planar and tomographic data sets. FA can be used as a user-independent approach for scatter correction in nuclear medicine.
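    The decomposition idea (splitting the acquisition window into energy sub-windows and extracting a photo-peak and a scatter component per pixel) can be imitated with non-negative matrix factorization, used here purely as a stand-in for the factor analysis described in the paper; the sub-window layout, spectra and counts are synthetic.

```python
import numpy as np
from sklearn.decomposition import NMF

# Rows = pixels, columns = energy sub-windows covering the acquisition window.
rng = np.random.default_rng(6)
n_pixels, n_windows = 4096, 8
peak_spectrum = np.array([0.01, 0.03, 0.10, 0.25, 0.30, 0.20, 0.08, 0.03])
scatter_spectrum = np.array([0.30, 0.25, 0.18, 0.12, 0.08, 0.04, 0.02, 0.01])
peak_intensity = rng.uniform(50, 500, n_pixels)
scatter_intensity = rng.uniform(20, 200, n_pixels)
counts = rng.poisson(np.outer(peak_intensity, peak_spectrum)
                     + np.outer(scatter_intensity, scatter_spectrum))

# Rank-2 factorization: factor images (pixels x 2) and factor curves (2 x windows)
model = NMF(n_components=2, init="nndsvda", max_iter=500)
factor_images = model.fit_transform(counts.astype(float))
factor_curves = model.components_
print(factor_images.shape, factor_curves.shape)
```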

  13. WE-G-204-06: Grid-Line Artifact Minimization for High Resolution Detectors Using Iterative Residual Scatter Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rana, R; Bednarek, D; Rudin, S

    2015-06-15

    Purpose: Anti-scatter grid-line artifacts are more prominent for high-resolution x-ray detectors since the fraction of a pixel blocked by the grid septa is large. Direct logarithmic subtraction of the artifact pattern is limited by residual scattered radiation, and we investigate an iterative method for scatter correction. Methods: A stationary Smit-Röntgen anti-scatter grid was used with a high resolution Dexela 1207 CMOS X-ray detector (75 µm pixel size) to image an artery block (Nuclear Associates, Model 76-705) placed within a uniform head equivalent phantom as the scattering source. The image of the phantom was divided by a flat-field image obtained without scatter but with the grid to eliminate grid-line artifacts. Constant scatter values were subtracted from the phantom image before dividing by the averaged flat-field-with-grid image. The standard deviation of pixel values for a fixed region of the resultant images with different subtracted scatter values provided a measure of the remaining grid-line artifacts. Results: A plot of the standard deviation of image pixel values versus the subtracted scatter value shows that the image structure noise reaches a minimum before going up again as the scatter value is increased. This minimum corresponds to a minimization of the grid-line artifacts as demonstrated in line profile plots obtained through each of the images perpendicular to the grid lines. Artifact-free images of the artery block were obtained with the optimal scatter value obtained by this iterative approach. Conclusion: Residual scatter subtraction can provide improved grid-line artifact elimination when using the flat-field with grid “subtraction” technique. The standard deviation of image pixel values can be used to determine the optimal scatter value to subtract to obtain a minimization of grid line artifacts with high resolution x-ray imaging detectors. This study was supported by NIH Grant R01EB002873 and an equipment grant from Toshiba Medical Systems Corp.
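    The search described above reduces to a one-dimensional minimization: for each candidate constant scatter value, subtract it, divide by the flat-field-with-grid image, and keep the value that minimizes the ROI standard deviation. The synthetic grid pattern and the 120-count scatter level below are invented to show the idea.

```python
import numpy as np

def best_scatter_value(phantom_img, flat_with_grid, roi, candidates):
    """Return the subtracted scatter value that minimizes the ROI standard
    deviation of (phantom - s) / flat_with_grid, i.e. the residual grid lines."""
    stds = [((phantom_img - s) / flat_with_grid)[roi].std() for s in candidates]
    return candidates[int(np.argmin(stds))], np.asarray(stds)

# synthetic example: grid septa every 4 pixels plus a constant scatter of 120 counts
grid_profile = 1.0 + 0.1 * (np.arange(256) % 4 == 0)
flat_with_grid = np.tile(grid_profile, (256, 1))
phantom = 1000.0 * flat_with_grid + 120.0
roi = (slice(64, 192), slice(64, 192))
s_best, _ = best_scatter_value(phantom, flat_with_grid, roi, np.arange(0.0, 301.0, 5.0))
print(s_best)   # ~120
```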

  14. Variance-reduction normalization technique for a compton camera system

    NASA Astrophysics Data System (ADS)

    Kim, S. M.; Lee, J. S.; Kim, J. H.; Seo, H.; Kim, C. H.; Lee, C. S.; Lee, S. J.; Lee, M. C.; Lee, D. S.

    2011-01-01

    For an artifact-free dataset, pre-processing (known as normalization) is needed to correct the inherent non-uniformity of the detection properties of the Compton camera, which consists of scattering and absorbing detectors. The detection efficiency depends on the non-uniform detection efficiencies of the scattering and absorbing detectors, the different incidence angles onto the detector surfaces, and the geometry of the two detectors. The correction factor for each detected position pair, referred to as the normalization coefficient, is expressed as a product of factors representing these variations. A variance-reduction technique (VRT) for a Compton camera (a normalization method) was studied. For the VRT, Compton list-mode data of a planar uniform 140 keV source were generated with the GATE simulation tool. The projection data of a cylindrical software phantom were normalized with normalization coefficients determined from the non-uniformity map and then reconstructed with an ordered-subset expectation maximization algorithm. The coefficients of variation and percent errors of the 3-D reconstructed images showed that the VRT applied to the Compton camera provides enhanced image quality and an increased recovery rate of uniformity in the reconstructed image.

  15. Large Electroweak Corrections to Vector-Boson Scattering at the Large Hadron Collider.

    PubMed

    Biedermann, Benedikt; Denner, Ansgar; Pellen, Mathieu

    2017-06-30

    For the first time full next-to-leading-order electroweak corrections to off-shell vector-boson scattering are presented. The computation features the complete matrix elements, including all nonresonant and off-shell contributions, to the electroweak process pp→μ^{+}ν_{μ}e^{+}ν_{e}jj and is fully differential. We find surprisingly large corrections, reaching -16% for the fiducial cross section, as an intrinsic feature of the vector-boson-scattering processes. We elucidate the origin of these large electroweak corrections upon using the double-pole approximation and the effective vector-boson approximation along with leading-logarithmic corrections.

  16. Reversal of photon-scattering errors in atomic qubits.

    PubMed

    Akerman, N; Kotler, S; Glickman, Y; Ozeri, R

    2012-09-07

    Spontaneous photon scattering by an atomic qubit is a notable example of environment-induced error and is a fundamental limit to the fidelity of quantum operations. In the scattering process, the qubit loses its distinctive and coherent character owing to its entanglement with the photon. Using a single trapped ion, we show that by utilizing the information carried by the photon, we are able to coherently reverse this process and correct for the scattering error. We further used quantum process tomography to characterize the photon-scattering error and its correction scheme and demonstrate a correction fidelity greater than 85% whenever a photon was measured.

  17. TH-A-18C-04: Ultrafast Cone-Beam CT Scatter Correction with GPU-Based Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Y; Southern Medical University, Guangzhou; Bai, T

    2014-06-15

    Purpose: Scatter artifacts severely degrade the image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to finish the whole process, including both scatter correction and reconstruction, automatically within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using raw projection data; 2) rigid registration of the planning CT to the FDK results; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using the GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in simulation to further reduce computation time. A novel denoising algorithm is used to eliminate MC scatter noise caused by low photon numbers. The method is validated on head-and-neck cases with simulated and clinical data. Results: We have studied the impact of photon histories and volume down-sampling factors on the accuracy of scatter estimation. A Fourier analysis was conducted to show that scatter images calculated at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated case with a resolution of 512×512×100, we simulated 10M photons per angle. The total computation time is 23.77 seconds on a Nvidia GTX Titan GPU. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Similar results were found for a real patient case. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. The whole process of scatter correction and reconstruction is accomplished within 30 seconds. This study is supported in part by NIH (1R01CA154747-01) and The Core Technology Research in Strategic Emerging Industry, Guangdong, China (2011A081402003).

  18. Analysis of position-dependent Compton scatter in scintimammography with mild compression

    NASA Astrophysics Data System (ADS)

    Williams, M. B.; Narayanan, D.; More, M. J.; Goodale, P. J.; Majewski, S.; Kieper, D. A.

    2003-10-01

    In breast scintigraphy using 99mTc-sestamibi, the relatively low radiotracer uptake in the breast compared to that in other organs such as the heart results in a large fraction of the detected events being Compton scattered gamma-rays. In this study, our goal was to determine whether generalized conclusions regarding scatter-to-primary ratios at various locations within the breast image are possible, and if so, to use them to make explicit scatter corrections to the breast scintigrams. Energy spectra were obtained from patient scans for contiguous regions of interest (ROIs) centered left to right within the image of the breast, and extending from the chest wall edge of the image to the anterior edge. An anthropomorphic torso phantom with fillable internal organs and a compressed-shape breast containing water only was used to obtain realistic position-dependent scatter-only spectra. For each ROI, the measured patient energy spectrum was fitted with a linear combination of the scatter-only spectrum from the anthropomorphic phantom and the scatter-free spectrum from a point source. We found that although there is a very strong dependence on location within the breast of the scatter-to-primary ratio, the spectra are well modeled by a linear combination of position-dependent scatter-only spectra and a position-independent scatter-free spectrum, resulting in a set of position-dependent correction factors. These correction factors can be used along with measured emission spectra from a given breast to correct for the Compton scatter in the scintigrams. However, the large variation among patients in the magnitude of the position-dependent scatter makes the success of universal correction approaches unlikely.
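    The per-ROI fit described above is an ordinary least-squares problem with two basis spectra; the sketch below shows it with synthetic spectra (the spectral shapes, bin range and coefficients are made up).

```python
import numpy as np

def fit_scatter_primary(measured, scatter_only, scatter_free):
    """Least-squares fit of a measured ROI spectrum as
    a * scatter_only + b * scatter_free; also returns the implied
    scatter-to-primary count ratio within the fitted spectrum."""
    basis = np.column_stack([scatter_only, scatter_free])
    (a, b), *_ = np.linalg.lstsq(basis, measured, rcond=None)
    scatter_to_primary = (a * scatter_only.sum()) / (b * scatter_free.sum())
    return a, b, scatter_to_primary

# synthetic 1-keV-binned spectra around the 140 keV photopeak (illustrative shapes)
energy = np.arange(100, 160)
scatter_free = np.exp(-0.5 * ((energy - 140) / 4.0) ** 2)
scatter_only = np.exp(-(energy - 100) / 25.0)
rng = np.random.default_rng(7)
measured = 0.7 * scatter_only + 1.2 * scatter_free + 0.01 * rng.standard_normal(energy.size)
print(fit_scatter_primary(measured, scatter_only, scatter_free))
```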

  19. Scatter correction using a primary modulator on a clinical angiography C-arm CT system.

    PubMed

    Bier, Bastian; Berger, Martin; Maier, Andreas; Kachelrieß, Marc; Ritschl, Ludwig; Müller, Kerstin; Choi, Jang-Hwan; Fahrig, Rebecca

    2017-09-01

    Cone beam computed tomography (CBCT) suffers from a large amount of scatter, resulting in severe scatter artifacts in the reconstructions. Recently, a new scatter correction approach, called improved primary modulator scatter estimation (iPMSE), was introduced. That approach utilizes a primary modulator that is inserted between the X-ray source and the object. This modulation enables estimation of the scatter in the projection domain by optimizing an objective function with respect to the scatter estimate. Up to now the approach has not been implemented on a clinical angiography C-arm CT system. In our work, the iPMSE method is transferred to a clinical C-arm CBCT. Additional processing steps are added in order to compensate for the C-arm scanner motion and the automatic X-ray tube current modulation. These challenges were overcome by establishing a reference modulator database and a block-matching algorithm. Experiments with phantom and experimental in vivo data were performed to evaluate the method. We show that scatter correction using primary modulation is possible on a clinical C-arm CBCT. Scatter artifacts in the reconstructions are reduced with the newly extended method. Compared to a scan with a narrow collimation, our approach showed superior results with an improvement of the contrast and the contrast-to-noise ratio for the phantom experiments. In vivo data are evaluated by comparing the results with a scan with a narrow collimation and with a constant scatter correction approach. Scatter correction using primary modulation is possible on a clinical CBCT by compensating for the scanner motion and the tube current modulation. Scatter artifacts could be reduced in the reconstructions of phantom scans and in experimental in vivo data. © 2017 American Association of Physicists in Medicine.

  20. Virtual Excitation and Multiple Scattering Correction Terms to the Neutron Index of Refraction for Hydrogen.

    PubMed

    Schoen, K; Snow, W M; Kaiser, H; Werner, S A

    2005-01-01

    The neutron index of refraction is generally derived theoretically in the Fermi approximation. However, the Fermi approximation neglects the effects of the binding of the nuclei of a material as well as multiple scattering. Calculations by Nowak introduced correction terms to the neutron index of refraction that are quadratic in the scattering length and of order 10⁻³ fm for hydrogen and deuterium. These correction terms produce a small shift in the final value for the coherent scattering length of H2 in a recent neutron interferometry experiment.

  1. Operational atmospheric correction of AVHRR visible and infrared data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vermote, E.; El Saleous, N.; Roger, J.C.

    1995-12-31

    The satellite level radiance is affected by the presence of the atmosphere between the sensor and the target. The ozone and water vapor absorption bands affect the signal recorded by the AVHRR visible and near infrared channels, respectively. Rayleigh scattering mainly affects the visible channel and is more pronounced when dealing with small sun elevations and large view angles. Aerosol scattering affects both channels and is certainly the most challenging term for atmospheric correction because of the spatial and temporal variability of both the type and amount of particles in the atmosphere. This paper presents the equation of the satellite signal, the scheme to retrieve atmospheric properties, and the corrections applied to AVHRR observations. The operational process uses TOMS data and a digital elevation model to correct for ozone absorption and Rayleigh scattering. The water vapor content is evaluated using the split-window technique, which is validated over ocean using 1988 SSM/I data. The aerosol amount retrieval over ocean is achieved in channels 1 and 2 and compared to sun photometer observations to check the consistency of the radiative transfer model and the sensor calibration. Over land, the method developed uses reflectance at 3.75 microns to deduce the target reflectance in channel 1 and retrieve the aerosol optical thickness, which can be extrapolated to channel 2. The method to invert the reflectance at 3.75 microns is based on MODTRAN simulations and is validated by comparison to measurements performed during FIFE 87. Finally, aerosol optical thickness retrieved over Brazil and the Eastern US is compared to sun photometer measurements.
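    The Rayleigh term can be sketched with the usual single-scattering approximation: an analytic optical depth (scaled linearly with surface pressure from the elevation model) and the Rayleigh phase function. The fitted optical-depth constants and the example geometry below are commonly used approximations, not values taken from this paper.

```python
import numpy as np

def rayleigh_optical_depth(wavelength_um, surface_pressure_hpa=1013.25):
    """Approximate Rayleigh optical depth (Hansen & Travis style fit),
    scaled linearly with surface pressure; wavelength in micrometres."""
    lam = wavelength_um
    tau0 = 0.008569 * lam ** -4 * (1.0 + 0.0113 * lam ** -2 + 0.00013 * lam ** -4)
    return tau0 * surface_pressure_hpa / 1013.25

def rayleigh_reflectance(tau, sun_zenith_deg, view_zenith_deg, scattering_angle_deg):
    """Single-scattering Rayleigh path reflectance:
    rho ~ tau * P(Theta) / (4 mu_s mu_v), with P(Theta) = 3/4 (1 + cos^2 Theta)."""
    mu_s = np.cos(np.deg2rad(sun_zenith_deg))
    mu_v = np.cos(np.deg2rad(view_zenith_deg))
    phase = 0.75 * (1.0 + np.cos(np.deg2rad(scattering_angle_deg)) ** 2)
    return tau * phase / (4.0 * mu_s * mu_v)

# AVHRR channel 1 (~0.63 um), high sun, near-nadir view, backscatter geometry
tau = rayleigh_optical_depth(0.63)
print(tau, rayleigh_reflectance(tau, 30.0, 10.0, 150.0))
```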

  2. Processing techniques for global land 1-km AVHRR data

    USGS Publications Warehouse

    Eidenshink, Jeffery C.; Steinwand, Daniel R.; Wivell, Charles E.; Hollaren, Douglas M.; Meyer, David

    1993-01-01

    The U.S. Geological Survey's (USGS) Earth Resources Observation Systems (EROS) Data Center (EDC), in cooperation with several international science organizations, has developed techniques for processing daily Advanced Very High Resolution Radiometer (AVHRR) 1-km data of the entire global land surface. These techniques include orbital stitching, geometric rectification, radiometric calibration, and atmospheric correction. An orbital stitching algorithm was developed to combine consecutive observations acquired along an orbit by ground receiving stations into contiguous half-orbital segments. The geometric rectification process uses an AVHRR satellite model that contains modules for forward mapping, forward terrain correction, and inverse mapping with terrain correction. The correction is accomplished by using the hydrologic features (coastlines and lakes) from the Digital Chart of the World. These features are rasterized into the satellite projection and are matched to the AVHRR imagery using binary edge correlation techniques. The resulting coefficients are related to six attitude correction parameters: roll, roll rate, pitch, pitch rate, yaw, and altitude. The image can then be precision corrected to a variety of map projections and user-selected image frames. Because the AVHRR lacks onboard calibration for the optical wavelengths, a series of time-variant calibration coefficients derived from vicarious calibration methods is used to model the degradation profile of the instruments. Reducing atmospheric effects on AVHRR data is important. A method has been developed that removes the effects of molecular scattering and absorption from clear-sky observations using climatological measurements of ozone. Other methods to remove the effects of water vapor and aerosols are being investigated.

  3. Event-Based Processing of Neutron Scattering Data

    DOE PAGES

    Peterson, Peter F.; Campbell, Stuart I.; Reuter, Michael A.; ...

    2015-09-16

    Many of the world's time-of-flight spallation neutron sources are migrating to the recording of individual neutron events. This provides new opportunities in data processing, not the least of which is the ability to filter the events by correlating them with logs of the sample environment and other ancillary equipment. This paper will describe techniques for processing neutron scattering data acquired in event mode that preserve event information all the way to a final spectrum, including any necessary corrections or normalizations. This results in smaller final errors, while significantly reducing processing time and memory requirements in typical experiments. Results with traditional histogramming techniques will be shown for comparison.
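    Event filtering against a sample-environment log can be sketched as below: each event timestamp is matched to the most recent log entry and kept only if the logged value lies in the requested band. The log (a temperature ramp) and the event stream are synthetic.

```python
import numpy as np

def filter_events_by_log(event_times, log_times, log_values, lo, hi):
    """Keep events whose timestamps fall where the step-wise sample-environment
    log lies inside [lo, hi]. Times in seconds since the start of the run."""
    idx = np.searchsorted(log_times, event_times, side="right") - 1
    idx = np.clip(idx, 0, len(log_values) - 1)
    value_at_event = np.asarray(log_values)[idx]      # last logged value before each event
    return (value_at_event >= lo) & (value_at_event <= hi)

# toy run: temperature logged once per second during a slow ramp from 290 K
log_t = np.arange(0.0, 600.0, 1.0)
log_temperature = 290.0 + 0.05 * log_t
rng = np.random.default_rng(8)
events = np.sort(rng.uniform(0.0, 600.0, 100000))
keep = filter_events_by_log(events, log_t, log_temperature, lo=300.0, hi=305.0)
print(keep.sum(), "of", events.size, "events kept")
```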

  4. A comparison of observed and analytically derived remote sensing penetration depths for turbid water

    NASA Technical Reports Server (NTRS)

    Morris, W. D.; Usry, J. W.; Witte, W. G.; Whitlock, C. H.; Guraus, E. A.

    1981-01-01

    The depth to which sunlight will penetrate in turbid waters was investigated. The tests were conducted in water spanning a range of single-scattering albedo and over a range of solar elevation angles. Two different techniques were used to determine the depth of light penetration. The results showed little change in the depth of sunlight penetration with changing solar elevation angle. A comparison of the penetration depths indicates that the best agreement between the two methods was achieved when the quasi-single-scattering relationship was not corrected for solar angle. It is concluded that sunlight penetration depends on inherent water properties only.

  5. Soft-photon emission effects and radiative corrections for electromagnetic processes at very high energies

    NASA Technical Reports Server (NTRS)

    Gould, R. J.

    1979-01-01

    Higher-order electromagnetic processes involving particles at ultrahigh energies are discussed, with particular attention given to Compton scattering with the emission of an additional photon (double Compton scattering). Double Compton scattering may have significance in the interaction of a high-energy electron with the cosmic blackbody photon gas. At high energies the cross section for double Compton scattering is large, though this effect is largely canceled by the effects of radiative corrections to ordinary Compton scattering. A similar cancellation takes place for radiative pair production and the associated radiative corrections to the radiationless process. This cancellation is related to the well-known cancellation of the infrared divergence in electrodynamics.

  6. Correction of autofluorescence intensity for epithelial scattering by optical coherence tomography: a phantom study

    NASA Astrophysics Data System (ADS)

    Pahlevaninezhad, H.; Lee, A. M. D.; Hyun, C.; Lam, S.; MacAulay, C.; Lane, P. M.

    2013-03-01

    In this paper, we conduct a phantom study for modeling the autofluorescence (AF) properties of tissue. A combined optical coherence tomography (OCT) and AF imaging system is proposed to measure the strength of the AF signal in terms of the scattering layer thickness and concentration. The combined AF-OCT system is capable of estimating the AF loss due to scattering in the epithelium using the thickness and scattering concentration calculated from the co-registered OCT images. We define a correction factor to account for scattering losses in the epithelium and calculate a scattering-corrected AF signal. We believe the scattering-corrected AF will reduce the diagnostic false-positive rate in the early detection of airway lesions caused by confounding factors such as increased epithelial thickness and inflammation.
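    As a rough illustration of such a correction factor (not the model derived in the paper), one could assume Beer-Lambert-style attenuation of the excitation and emission light over the epithelial thickness measured by OCT; the factor of two for the round trip, the scattering coefficient and the thickness below are all assumptions.

```python
import numpy as np

def af_correction_factor(thickness_mm, mu_s_per_mm):
    """Toy correction for autofluorescence lost to epithelial scattering,
    assuming exponential attenuation over an excitation + emission round trip.
    This exponential model is an assumed stand-in, not the published factor."""
    return np.exp(2.0 * mu_s_per_mm * thickness_mm)

measured_af = 120.0          # arbitrary units
thickness = 0.35             # mm, as would be measured from the OCT image
mu_s = 1.2                   # 1/mm, assumed epithelial scattering coefficient
print(measured_af * af_correction_factor(thickness, mu_s))
```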

  7. Evidence for using Monte Carlo calculated wall attenuation and scatter correction factors for three styles of graphite-walled ion chamber.

    PubMed

    McCaffrey, J P; Mainegra-Hing, E; Kawrakow, I; Shortt, K R; Rogers, D W O

    2004-06-21

    The basic equation for establishing a 60Co air-kerma standard based on a cavity ionization chamber includes a wall correction term that corrects for the attenuation and scatter of photons in the chamber wall. For over a decade, the validity of the wall correction terms determined by extrapolation methods (K_w K_cep) has been strongly challenged by Monte Carlo (MC) calculation methods (K_wall). Using the linear extrapolation method with experimental data, K_w K_cep was determined in this study for three different styles of primary-standard-grade graphite ionization chamber: cylindrical, spherical and plane-parallel. For measurements taken with the same 60Co source, the air-kerma rates for these three chambers, determined using extrapolated K_w K_cep values, differed by up to 2%. The MC code 'EGSnrc' was used to calculate the values of K_wall for these three chambers. Use of the calculated K_wall values gave air-kerma rates that agreed within 0.3%. The accuracy of this code was affirmed by its reliability in modelling the complex structure of the response curve obtained by rotation of the non-rotationally symmetric plane-parallel chamber. These results demonstrate that the linear extrapolation technique leads to errors in the determination of air kerma.

  8. An automated analysis workflow for optimization of force-field parameters using neutron scattering data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lynch, Vickie E.; Borreguero, Jose M.; Bhowmik, Debsindhu

Highlights: • An automated workflow to optimize force-field parameters. • Used the workflow to optimize force-field parameters for a system containing nanodiamond and tRNA. • The mechanism relies on molecular dynamics simulation and neutron scattering experimental data. • The workflow can be generalized to any other experimental and simulation techniques. Abstract: Large-scale simulations and data analysis are often required to explain neutron scattering experiments and to establish a connection between the fundamental physics at the nanoscale and the data probed by neutrons. However, to perform simulations at experimental conditions it is critical to use correct force-field (FF) parameters, which are unfortunately not available for most complex experimental systems. In this work, we have developed a workflow optimization technique to provide optimized FF parameters by comparing molecular dynamics (MD) to neutron scattering data. We describe the workflow in detail using an example system consisting of tRNA and hydrophilic nanodiamonds in a deuterated water (D2O) environment. Quasi-elastic neutron scattering (QENS) data show a faster motion of the tRNA in the presence of nanodiamond than without it. To compare the QENS and MD results quantitatively, a proper choice of FF parameters is necessary. We use an efficient workflow to optimize the FF parameters between the hydrophilic nanodiamond and water by comparing to the QENS data. Our results show that we can obtain accurate FF parameters by using this technique. The workflow can be generalized to other types of neutron data for FF optimization, such as vibrational spectroscopy and spin echo.
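
    To make the shape of such a "compare simulation to data, then update parameters" loop concrete, the sketch below is a minimal, self-contained stand-in for the workflow this record describes. The MD step is replaced by a toy exponential-relaxation model so the script runs on its own; the function names, the interaction parameters, and the synthetic QENS-like data are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a force-field optimization loop against neutron-scattering-like data.
# The MD step is replaced by a toy exponential-relaxation model so the script runs
# stand-alone; in a real workflow run_md_observable() would launch an MD simulation with
# the candidate force-field parameters and return, e.g., an intermediate scattering function.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.1, 50.0, 100)                                   # "time" axis of the observable
rng = np.random.default_rng(0)
qens_data = np.exp(-t / 12.0) + 0.01 * rng.normal(size=t.size)    # synthetic target data

def run_md_observable(ff_params, t):
    """Placeholder for an MD run: maps force-field parameters to a model observable."""
    epsilon, sigma = ff_params                                    # hypothetical interaction parameters
    relaxation_time = 5.0 + 2.0 * epsilon * sigma
    return np.exp(-t / relaxation_time)

def misfit(ff_params):
    """Chi-square-like misfit between the simulated observable and the QENS-like data."""
    return np.sum((run_md_observable(ff_params, t) - qens_data) ** 2)

result = minimize(misfit, x0=[1.0, 1.0], method="Nelder-Mead")
print("optimized parameters:", result.x, "final misfit:", result.fun)
```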

  9. Scatter and cross-talk correction for one-day acquisition of 123I-BMIPP and 99mtc-tetrofosmin myocardial SPECT.

    PubMed

    Kaneta, Tomohiro; Kurihara, Hideyuki; Hakamatsuka, Takashi; Ito, Hiroshi; Maruoka, Shin; Fukuda, Hiroshi; Takahashi, Shoki; Yamada, Shogo

    2004-12-01

    123I-15-(p-iodophenyl)-3-(R,S)-methylpentadecanoic acid (BMIPP) and 99mTc-tetrofosmin (TET) are widely used for evaluation of myocardial fatty acid metabolism and perfusion, respectively. ECG-gated TET SPECT is also used for evaluation of myocardial wall motion. These tests are often performed on the same day to minimize both the time required and inconvenience to patients and medical staff. However, as 123I and 99mTc have similar emission energies (159 keV and 140 keV, respectively), it is necessary to consider not only scattered photons, but also primary photons of each radionuclide detected in the wrong window (cross-talk). In this study, we developed and evaluated the effectiveness of a new scatter and cross-talk correction imaging protocol. Fourteen patients with ischemic heart disease or heart failure (8 men and 6 women with a mean age of 69.4 yr, ranging from 45 to 94 yr) were enrolled in this study. In the routine one-day acquisition protocol, BMIPP SPECT was performed in the morning, with TET SPECT performed 4 h later. An additional SPECT was performed just before injection of TET with the energy window for 99mTc. These data correspond to the scatter and cross-talk factor of the next TET SPECT. The correction was performed by subtraction of the scatter and cross-talk factor from TET SPECT. Data are presented as means +/- S.E. Statistical analyses were performed using Wilcoxon's matched-pairs signed-ranks test, and p < 0.05 was considered significant. The percentage of scatter and cross-talk relative to the corrected total count was 26.0 +/- 5.3%. EDV and ESV after correction were significantly greater than those before correction (p = 0.019 and 0.016, respectively). After correction, EF was smaller than that before correction, but the difference was not significant. Perfusion scores (17 segments per heart) were significantly lower after as compared with those before correction (p < 0.001). Scatter and cross-talk correction revealed significant differences in EDV, ESV, and perfusion scores. These observations indicate that scatter and cross-talk correction is required for one-day acquisition of 123I-BMIPP and 99mTc-tetrofosmin SPECT.
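
    A minimal sketch of the subtraction step described in this record follows: projections acquired in the 99mTc window just before tetrofosmin injection serve as the estimate of 123I scatter and cross-talk, and are subtracted from the post-injection acquisition. The array names and the optional decay scaling are illustrative assumptions (the paper subtracts the pre-injection acquisition directly).

```python
# Hedged sketch of subtraction-based scatter/cross-talk correction for a one-day
# 123I + 99mTc protocol. Projection data are random placeholders.
import numpy as np

def correct_crosstalk(tet_projections, crosstalk_projections,
                      delay_min=0.0, half_life_123I_h=13.2):
    """Subtract the pre-injection (cross-talk) acquisition from the TET acquisition.

    If a delay separates the two acquisitions, the cross-talk estimate is scaled for
    123I decay before subtraction (an assumption added for illustration).
    """
    decay = 0.5 ** (delay_min / 60.0 / half_life_123I_h)
    corrected = tet_projections - decay * crosstalk_projections
    return np.clip(corrected, 0, None)          # counts cannot be negative

# Example with Poisson-distributed projection data (60 views of 64x64 bins)
rng = np.random.default_rng(1)
tet = rng.poisson(50.0, size=(60, 64, 64)).astype(float)
xtalk = rng.poisson(13.0, size=(60, 64, 64)).astype(float)
corrected = correct_crosstalk(tet, xtalk, delay_min=5.0)
print("mean counts before/after correction: %.1f / %.1f" % (tet.mean(), corrected.mean()))
```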

  10. SU-F-J-198: A Cross-Platform Adaptation of An a Priori Scatter Correction Algorithm for Cone-Beam Projections to Enable Image- and Dose-Guided Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andersen, A; Casares-Magaz, O; Elstroem, U

Purpose: Cone-beam CT (CBCT) imaging may enable image- and dose-guided proton therapy, but is challenged by image artefacts. The aim of this study was to demonstrate the general applicability of a previously developed a priori scatter correction algorithm to allow CBCT-based proton dose calculations. Methods: The a priori scatter correction algorithm used a plan CT (pCT) and raw cone-beam projections acquired with the Varian On-Board Imager. The projections were initially corrected for bow-tie filtering and beam hardening and subsequently reconstructed using the Feldkamp-Davis-Kress algorithm (rawCBCT). The rawCBCTs were intensity normalised before rigid and deformable registrations were applied to align the pCTs to the rawCBCTs. The resulting images were forward projected onto the same angles as the raw CB projections. The two projections were subtracted from each other, Gaussian and median filtered, then subtracted from the raw projections and finally reconstructed to the scatter-corrected CBCTs. For evaluation, water equivalent path length (WEPL) maps (from anterior to posterior) were calculated on different reconstructions of three data sets (CB projections and pCT) of three parts of an Alderson phantom. Finally, single beam spot scanning proton plans (0–360 deg gantry angle in steps of 5 deg; using PyTRiP) treating a 5 cm central spherical target in the pCT were re-calculated on scatter-corrected CBCTs with identical targets. Results: The scatter-corrected CBCTs resulted in sub-mm mean WEPL differences relative to the rigid registration of the pCT for all three data sets. These differences were considerably smaller than what was achieved with the regular Varian CBCT reconstruction algorithm (1–9 mm mean WEPL differences). Target coverage in the re-calculated plans was generally improved using the scatter-corrected CBCTs compared to the Varian CBCT reconstruction. Conclusion: We have demonstrated the general applicability of a priori CBCT scatter correction, potentially opening the way for CBCT-based image- and dose-guided proton therapy, including adaptive strategies. Research agreement with Varian Medical Systems, not connected to the present project.
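
    The projection-domain arithmetic of such an a priori correction can be sketched schematically as below: forward project the registered planning CT, take the smoothed difference to the raw cone-beam projections as the scatter estimate, and subtract it before reconstruction. The forward projector here is a trivial parallel-sum placeholder, and the intensity normalisation and beam-hardening steps are omitted, so this only illustrates the data flow, not the published algorithm.

```python
# Schematic stand-in for a priori scatter estimation in the projection domain.
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def forward_project(volume, n_views):
    """Placeholder forward projector: the same parallel sum for every view."""
    return np.stack([volume.sum(axis=0) for _ in range(n_views)])

def estimate_scatter(raw_projections, registered_pct, sigma=8, med_size=5):
    primary = forward_project(registered_pct, raw_projections.shape[0])
    scatter = raw_projections - primary                        # residual attributed to scatter
    scatter = median_filter(scatter, size=(1, med_size, med_size))
    scatter = gaussian_filter(scatter, sigma=(0, sigma, sigma))
    return np.clip(scatter, 0, None)

def scatter_correct(raw_projections, registered_pct):
    scatter = estimate_scatter(raw_projections, registered_pct)
    return raw_projections - scatter                           # feed these into FDK afterwards

# Toy example: a uniform cube as the "registered pCT"; raw projections carry an added
# constant scatter offset plus noise on top of the ideal primary signal.
pct = np.ones((32, 32, 32))
rng = np.random.default_rng(2)
raw = forward_project(pct, 60) + 4.0 + rng.normal(0.0, 0.5, size=(60, 32, 32))
corrected = scatter_correct(raw, pct)
print("mean offset before / after correction: %.2f / %.2f"
      % (float((raw - forward_project(pct, 60)).mean()),
         float((corrected - forward_project(pct, 60)).mean())))
```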

  11. A novel scatter separation method for multi-energy x-ray imaging

    NASA Astrophysics Data System (ADS)

    Sossin, A.; Rebuffel, V.; Tabary, J.; Létang, J. M.; Freud, N.; Verger, L.

    2016-06-01

    X-ray imaging coupled with recently emerged energy-resolved photon counting detectors provides the ability to differentiate material components and to estimate their respective thicknesses. However, such techniques require highly accurate images. The presence of scattered radiation leads to a loss of spatial contrast and, more importantly, a bias in radiographic material imaging and artefacts in computed tomography (CT). The aim of the present study was to introduce and evaluate a partial attenuation spectral scatter separation approach (PASSSA) adapted for multi-energy imaging. This evaluation was carried out with the aid of numerical simulations provided by an internal simulation tool, Sindbad-SFFD. A simplified numerical thorax phantom placed in a CT geometry was used. The attenuation images and CT slices obtained from corrected data showed a remarkable increase in local contrast and internal structure detectability when compared to uncorrected images. Scatter induced bias was also substantially decreased. In terms of quantitative performance, the developed approach proved to be quite accurate as well. The average normalized root-mean-square error between the uncorrected projections and the reference primary projections was around 23%. The application of PASSSA reduced this error to around 5%. Finally, in terms of voxel value accuracy, an increase by a factor  >10 was observed for most inspected volumes-of-interest, when comparing the corrected and uncorrected total volumes.
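
    The quantitative figure quoted in this record is a normalized root-mean-square error between projections and the reference primary-only projections. A small helper of that kind is sketched below; normalizing by the mean of the reference is an assumption, since the paper does not spell out its exact normalization, and the numbers are purely illustrative.

```python
# Illustrative NRMSE between a (corrected or uncorrected) projection and the primary-only reference.
import numpy as np

def nrmse(projection, reference_primary):
    rmse = np.sqrt(np.mean((projection - reference_primary) ** 2))
    return rmse / np.mean(reference_primary)

rng = np.random.default_rng(3)
primary = rng.uniform(0.2, 1.0, size=(256, 256))      # reference primary projection
scatter = 0.23 * primary.mean()                       # flat scatter field for illustration
residual = 0.05 * primary.mean()                      # what a correction might leave behind
print("uncorrected NRMSE: %.1f%%" % (100 * nrmse(primary + scatter, primary)))
print("corrected   NRMSE: %.1f%%" % (100 * nrmse(primary + residual, primary)))
```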

  12. Scanning in situ Spectroscopy platform for imaging surgical breast tissue specimens

    PubMed Central

    Krishnaswamy, Venkataramanan; Laughney, Ashley M.; Wells, Wendy A.; Paulsen, Keith D.; Pogue, Brian W.

    2013-01-01

A non-contact localized spectroscopic imaging platform has been developed and optimized to scan 1 × 1 cm2 regions of surgically resected breast tissue specimens with ~150 µm resolution. A color corrected, image-space telecentric scanning design maintained a consistent sampling geometry and uniform spot size across the entire imaging field. Theoretical modeling in ZEMAX allowed estimation of the spot size, which is equal at both the center and extreme positions of the field with ~5% variation across the designed waveband, indicating excellent color correction. The spot sizes at the center and an extreme field position were also measured experimentally using the standard knife-edge technique and were found to be within ~8% of the theoretical predictions. Highly localized sampling offered inherent insensitivity to variations in background absorption, allowing direct imaging of local scattering parameters, which was validated using a matrix of varying concentrations of Intralipid and blood in phantoms. Four representative, pathologically distinct lumpectomy tissue specimens were imaged, capturing natural variations in tissue scattering response within a given pathology. Variations as high as 60% were observed in the average reflectance and relative scattering power images, which must be taken into account for robust classification performance. Despite this variation, the preliminary data indicate discernible scatter power contrast between the benign and malignant groups, but reliable discrimination of pathologies within these groups would require investigation into additional contrast mechanisms. PMID:23389199

  13. Neutron elastic and inelastic cross section measurements for 28Si

    NASA Astrophysics Data System (ADS)

    Derdeyn, E. C.; Lyons, E. M.; Morin, T.; Hicks, S. F.; Vanhoy, J. R.; Peters, E. E.; Ramirez, A. P. D.; McEllistrem, M. T.; Mukhopadhyay, S.; Yates, S. W.

    2017-09-01

Neutron elastic and inelastic cross sections are critical for the design and implementation of nuclear reactors and reactor equipment. Silicon, an element used abundantly in fuel pellets as well as in building materials, has few experimental cross sections in the fast neutron region to support current theoretical evaluations, and would thus benefit from additional measurements. Measurements of neutron elastic and inelastic differential scattering cross sections for 28Si were performed at the University of Kentucky Accelerator Laboratory for incident neutron energies of 6.1 MeV and 7.0 MeV. Neutrons were produced by accelerated deuterons incident on a deuterium gas cell. These nearly mono-energetic neutrons then scattered off a natural Si sample and were detected using liquid deuterated benzene scintillation detectors. Scattered neutron energy was deduced using time-of-flight techniques in tandem with kinematic calculations for an angular distribution. The relative detector efficiency was experimentally determined over a neutron energy range from approximately 0.5 to 7.75 MeV prior to the experiment. Yields were corrected for multiple scattering and neutron attenuation in the sample using the forced-collision Monte Carlo correction code MULCAT. Resulting cross sections will be presented along with comparisons to various data evaluations. Research is supported by USDOE-NNSA-SSAP: NA0002931, NSF: PHY-1606890, and the Donald A. Cowan Physics Institute at the University of Dallas.
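
    For illustration of the time-of-flight step mentioned above, the snippet below converts a measured flight time over a known path length into scattered-neutron kinetic energy using the non-relativistic relation, which is adequate at the few-MeV energies quoted. The path length and timing values are made up for the example and are not taken from the experiment.

```python
# Non-relativistic neutron kinetic energy from time of flight (illustrative values only).
NEUTRON_MASS_MEV = 939.565          # neutron rest mass (MeV/c^2)
C_M_PER_NS = 0.299792458            # speed of light (m/ns)

def neutron_energy_from_tof(flight_path_m, tof_ns):
    """Kinetic energy in MeV from flight path (m) and time of flight (ns)."""
    beta = flight_path_m / (tof_ns * C_M_PER_NS)
    return 0.5 * NEUTRON_MASS_MEV * beta ** 2

print(neutron_energy_from_tof(3.0, 82.0))   # roughly 7 MeV for a 3 m path and 82 ns flight time
```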

  14. CORRECTING FOR INTERSTELLAR SCATTERING DELAY IN HIGH-PRECISION PULSAR TIMING: SIMULATION RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palliyaguru, Nipuni; McLaughlin, Maura; Stinebring, Daniel

    2015-12-20

Light travel time changes due to gravitational waves (GWs) may be detected within the next decade through precision timing of millisecond pulsars. Removal of frequency-dependent interstellar medium (ISM) delays due to dispersion and scattering is a key issue in the detection process. Current timing algorithms routinely correct pulse times of arrival (TOAs) for time-variable delays due to cold plasma dispersion. However, none of the major pulsar timing groups correct for delays due to scattering from multi-path propagation in the ISM. Scattering introduces a frequency-dependent phase change in the signal that results in pulse broadening and arrival time delays. Any method to correct the TOA for interstellar propagation effects must be based on multi-frequency measurements that can effectively separate dispersion and scattering delay terms from frequency-independent perturbations such as those due to a GW. Cyclic spectroscopy, first described in an astronomical context by Demorest (2011), is a potentially powerful tool to assist in this multi-frequency decomposition. As a step toward a more comprehensive ISM propagation delay correction, we demonstrate through a simulation that we can accurately recover impulse response functions (IRFs), such as those that would be introduced by multi-path scattering, with a realistic signal-to-noise ratio (S/N). We demonstrate that timing precision is improved when scatter-corrected TOAs are used, under the assumptions of a high S/N and highly scattered signal. We also show that the effect of pulse-to-pulse “jitter” is not a serious problem for IRF reconstruction, at least for jitter levels comparable to those observed in several bright pulsars.

  15. Infrared weak corrections to strongly interacting gauge boson scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciafaloni, Paolo; Urbano, Alfredo

    2010-04-15

    We evaluate the impact of electroweak corrections of infrared origin on strongly interacting longitudinal gauge boson scattering, calculating all-order resummed expressions at the double log level. As a working example, we consider the standard model with a heavy Higgs. At energies typical of forthcoming experiments (LHC, International Linear Collider, Compact Linear Collider), the corrections are in the 10%-40% range, with the relative sign depending on the initial state considered and on whether or not additional gauge boson emission is included. We conclude that the effect of radiative electroweak corrections should be included in the analysis of longitudinal gauge boson scattering.

  16. ITERATIVE SCATTER CORRECTION FOR GRID-LESS BEDSIDE CHEST RADIOGRAPHY: PERFORMANCE FOR A CHEST PHANTOM.

    PubMed

    Mentrup, Detlef; Jockel, Sascha; Menser, Bernd; Neitzel, Ulrich

    2016-06-01

The aim of this work was to experimentally compare the contrast improvement factors (CIFs) of a newly developed software-based scatter correction to the CIFs achieved by an antiscatter grid. To this end, three aluminium discs were placed in the lung, the retrocardial and the abdominal areas of a thorax phantom, and digital radiographs of the phantom were acquired both with and without a stationary grid. The contrast generated by the discs was measured in both images, and the CIFs achieved by grid usage were determined for each disc. Additionally, the non-grid images were processed with a scatter correction software. The contrasts generated by the discs were determined in the scatter-corrected images, and the corresponding CIFs were calculated. The CIFs obtained with the grid and with the software were in good agreement. In conclusion, the experiment demonstrates quantitatively that software-based scatter correction allows restoring the image contrast of a non-grid image in a manner comparable with an antiscatter grid.
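
    The contrast-improvement-factor comparison described above can be expressed in a few lines, as sketched below. Contrast is taken here as (background minus disc) divided by background for a disc region of interest; the exact contrast definition and the pixel values are illustrative assumptions, not the paper's data.

```python
# Illustrative contrast-improvement-factor (CIF) calculation for a disc ROI.
import numpy as np

def contrast(disc_roi, background_roi):
    disc, bkg = np.mean(disc_roi), np.mean(background_roi)
    return (bkg - disc) / bkg

def contrast_improvement_factor(disc_corr, bkg_corr, disc_nogrid, bkg_nogrid):
    return contrast(disc_corr, bkg_corr) / contrast(disc_nogrid, bkg_nogrid)

# Illustrative pixel values (arbitrary units): scatter adds a constant offset to both ROIs
# in the non-grid image, which reduces the measured contrast.
bkg_nogrid, disc_nogrid = np.full(100, 1000.0), np.full(100, 950.0)
bkg_corr,   disc_corr   = np.full(100,  600.0), np.full(100, 550.0)   # scatter removed
print("CIF: %.2f" % contrast_improvement_factor(disc_corr, bkg_corr,
                                                disc_nogrid, bkg_nogrid))
```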

  17. Investigation of electron-loss and photon scattering correction factors for FAC-IR-300 ionization chamber

    NASA Astrophysics Data System (ADS)

    Mohammadi, S. M.; Tavakoli-Anbaran, H.; Zeinali, H. Z.

    2017-02-01

The parallel-plate free-air ionization chamber termed FAC-IR-300 was designed at the Atomic Energy Organization of Iran, AEOI. This chamber is used for low and medium X-ray dosimetry at the primary standard level. In order to evaluate the air-kerma, some correction factors such as the electron-loss correction factor (ke) and the photon scattering correction factor (ksc) are needed. The ke factor corrects for charge lost from the collecting volume, and the ksc factor corrects for photons scattered into the collecting volume. In this work, ke and ksc were estimated by Monte Carlo simulation. These correction factors were calculated for mono-energetic photons. Based on the simulation data, the ke and ksc values for the FAC-IR-300 ionization chamber are 1.0704 and 0.9982, respectively.
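
    To show where such factors enter an air-kerma evaluation, the sketch below multiplies the measured charge per unit air mass by W/e and by the product of correction factors. Only the two factor values come from the abstract; the charge, collecting volume, and the neglect of the bremsstrahlung and remaining correction terms are simplifying assumptions for illustration.

```python
# Hedged sketch of applying electron-loss and photon-scattering corrections to air kerma.
W_OVER_E = 33.97          # mean energy expended per ion pair in dry air, divided by charge (J/C)

def air_kerma(charge_C, air_mass_kg, k_e=1.0704, k_sc=0.9982, other_corrections=1.0):
    """Air kerma (Gy) from collected charge, ignoring the (1 - g) bremsstrahlung term."""
    return (charge_C / air_mass_kg) * W_OVER_E * k_e * k_sc * other_corrections

# Example: 2 nC collected in a 1.5 cm^3 collecting volume of air at 1.205 kg/m^3
air_mass = 1.5e-6 * 1.205                     # m^3 converted to kg
print("air kerma: %.1f mGy" % (1e3 * air_kerma(2e-9, air_mass)))
```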

  18. Scatter and beam hardening reduction in industrial computed tomography using photon counting detectors

    NASA Astrophysics Data System (ADS)

    Schumacher, David; Sharma, Ravi; Grager, Jan-Carl; Schrapp, Michael

    2018-07-01

Photon counting detectors (PCD) offer new possibilities for x-ray micro computed tomography (CT) in the field of non-destructive testing. For large and/or dense objects with high atomic numbers, the problem of scattered radiation and beam hardening severely influences the image quality. This work shows that using an energy discriminating PCD based on CdTe makes it possible to address these problems by intrinsically reducing both the influence of scattering and beam hardening. Based on 2D radiographic measurements, it is shown that energy thresholding reduces the influence of scattered radiation for a PCD compared with a conventional energy-integrating detector (EID). To demonstrate the capabilities of a PCD in reducing beam hardening, cupping artefacts are analyzed quantitatively. The PCD results show that the higher the energy threshold is set, the weaker the resulting cupping effect. Since numerous beam hardening correction algorithms exist, the PCD results are also compared to EID results corrected by common techniques; the highest energy thresholds nevertheless yield lower cupping artefacts than any of the applied correction algorithms. As an example of a potential industrial CT application, a turbine blade is investigated by CT. The inner structure of the turbine blade allows for comparing the image quality between PCD and EID in terms of absolute contrast, as well as normalized signal-to-noise and contrast-to-noise ratio. While the absolute contrast can be improved by raising the energy thresholds of the PCD, it is found that, due to lower statistics, the normalized contrast-to-noise ratio could not be improved compared to the EID. These results might change to the contrary when discarding pre-filtering of the x-ray spectra and thus allowing more low-energy photons to reach the detectors. Despite still being at an early stage of technological development, PCDs already improve CT image quality compared to conventional detectors in terms of scatter and beam-hardening reduction.

  19. Measurement and modeling of out-of-field doses from various advanced post-mastectomy radiotherapy techniques

    NASA Astrophysics Data System (ADS)

    Yoon, Jihyung; Heins, David; Zhao, Xiaodong; Sanders, Mary; Zhang, Rui

    2017-12-01

More and more advanced radiotherapy techniques have been adopted for post-mastectomy radiotherapies (PMRT). Patient dose reconstruction is challenging for these advanced techniques because they increase the low out-of-field dose area while the accuracy of out-of-field dose calculations by current commercial treatment planning systems (TPSs) is poor. We aim to measure and model the out-of-field radiation doses from various advanced PMRT techniques. PMRT treatment plans for an anthropomorphic phantom were generated, including volumetric modulated arc therapy with standard and flattening-filter-free photon beams, mixed beam therapy, 4-field intensity modulated radiation therapy (IMRT), and tomotherapy. We measured doses in the phantom where the TPS calculated doses were lower than 5% of the prescription dose using thermoluminescent dosimeters (TLD). The TLD measurements were corrected by two additional energy correction factors, namely the out-of-beam out-of-field (OBOF) correction factor K_OBOF and the in-beam out-of-field (IBOF) correction factor K_IBOF, which were determined by separate measurements using an ion chamber and TLD. A simple analytical model was developed to predict out-of-field dose as a function of distance from the field edge for each PMRT technique. The root mean square discrepancies between measured and calculated out-of-field doses were within 0.66 cGy Gy-1 for all techniques. The IBOF doses were highly scattered and should be evaluated case by case. One can easily combine the measured out-of-field dose here with the in-field dose calculated by the local TPS to reconstruct organ doses for a specific PMRT patient if the same treatment apparatus and technique were used.

  20. A fast and pragmatic approach for scatter correction in flat-detector CT using elliptic modeling and iterative optimization

    NASA Astrophysics Data System (ADS)

    Meyer, Michael; Kalender, Willi A.; Kyriakou, Yiannis

    2010-01-01

    Scattered radiation is a major source of artifacts in flat detector computed tomography (FDCT) due to the increased irradiated volumes. We propose a fast projection-based algorithm for correction of scatter artifacts. The presented algorithm combines a convolution method to determine the spatial distribution of the scatter intensity distribution with an object-size-dependent scaling of the scatter intensity distributions using a priori information generated by Monte Carlo simulations. A projection-based (PBSE) and an image-based (IBSE) strategy for size estimation of the scanned object are presented. Both strategies provide good correction and comparable results; the faster PBSE strategy is recommended. Even with such a fast and simple algorithm that in the PBSE variant does not rely on reconstructed volumes or scatter measurements, it is possible to provide a reasonable scatter correction even for truncated scans. For both simulations and measurements, scatter artifacts were significantly reduced and the algorithm showed stable behavior in the z-direction. For simulated voxelized head, hip and thorax phantoms, a figure of merit Q of 0.82, 0.76 and 0.77 was reached, respectively (Q = 0 for uncorrected, Q = 1 for ideal). For a water phantom with 15 cm diameter, for example, a cupping reduction from 10.8% down to 2.1% was achieved. The performance of the correction method has limitations in the case of measurements using non-ideal detectors, intensity calibration, etc. An iterative approach to overcome most of these limitations was proposed. This approach is based on root finding of a cupping metric and may be useful for other scatter correction methods as well. By this optimization, cupping of the measured water phantom was further reduced down to 0.9%. The algorithm was evaluated on a commercial system including truncated and non-homogeneous clinically relevant objects.
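
    The iterative optimization mentioned at the end of this record can be illustrated as root finding on a cupping metric: a global scatter-scaling factor is adjusted until the reconstructed water-phantom profile is flat. In the sketch below the reconstruction is a toy 1D profile whose cupping depends linearly on the residual scatter, purely so the example is self-contained; the metric definition and the scaling model are assumptions, not the published algorithm.

```python
# Hedged sketch: tune a scatter-scaling factor by driving a cupping metric to zero.
import numpy as np
from scipy.optimize import brentq

TRUE_SCATTER = 0.30                  # unknown scatter fraction in the toy projections

def reconstruct_profile(scatter_scale):
    """Toy stand-in for 'correct projections with scatter_scale, then reconstruct'."""
    residual = TRUE_SCATTER - scatter_scale          # under- or over-correction
    x = np.linspace(-1.0, 1.0, 201)
    return 1.0 - 0.4 * residual * (1.0 - x ** 2)     # cupping (or capping) of the profile

def cupping_metric(scatter_scale):
    profile = reconstruct_profile(scatter_scale)
    edge = 0.5 * (profile[5] + profile[-6])
    center = profile[profile.size // 2]
    return (edge - center) / edge                    # positive: cupping, negative: over-correction

optimal_scale = brentq(cupping_metric, 0.0, 1.0)     # root of the cupping metric
print("optimal scatter scaling: %.3f, residual cupping: %.4f"
      % (optimal_scale, cupping_metric(optimal_scale)))
```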

  1. A New Clinical Instrument for The Early Detection of Cataract Using Dynamic Light Scattering and Corneal Topography

    NASA Technical Reports Server (NTRS)

    Ansari, Rafat R.; Datiles, Manuel B., III; King, James F.

    2000-01-01

A growing cataract can be detected at the molecular level using the technique of dynamic light scattering (DLS). However, the success of this method in clinical use depends upon the precise control of the scattering volume inside a patient's eye and especially during patient's repeat visits. This is important because the scattering volume (cross-over region between the scattered light and incident light) inside the eye in a high-quality DLS set-up is very small (few microns in dimension). This precise control holds the key for success in the longitudinal studies of cataract and during anti-cataract drug screening. We have circumvented these problems by fabricating a new DLS fiber optic probe with a working distance of 40 mm and by mounting it inside a cone of a corneal analyzer. This analyzer is frequently used in mapping the corneal topography during PRK (photorefractive keratectomy) and LASIK (laser in situ keratomileusis) procedures in shaping of the cornea to correct myopia. This new instrument and some preliminary clinical tests on one of us (RRA) showing the data reproducibility are described.

  2. New clinical instrument for the early detection of cataract using dynamic light scattering and corneal topography

    NASA Astrophysics Data System (ADS)

    Ansari, Rafat R.; Datiles, Manuel B., III; King, James F.

    2000-06-01

    A growing cataract can be detected at the molecular level using the technique of dynamic light scattering (DLS). However, the success of this method in clinical use depends upon the precise control of the scattering volume inside a patient's eye and especially during patient's repeat visits. This is important because the scattering volume (cross-over region between the scattered light and incident light) inside the eye in a high-quality DLS set-up is very small (few microns in dimension). This precise control holds the key for success in the longitudinal studies of cataract and during anti-cataract drug screening. We have circumvented these problems by fabricating a new DLS fiber optic probe with a working distance of 40 mm and by mounting it inside a cone of a corneal analyzer. This analyzer is frequently used in mapping the corneal topography during PRK (photorefractive keratectomy) and LASIK (laser in situ keratomileusis) procedures in shaping of the cornea to correct myopia. This new instrument and some preliminary clinical tests on one of us (RRA) showing the data reproducibility are described.

  3. Data consistency-driven scatter kernel optimization for x-ray cone-beam CT

    NASA Astrophysics Data System (ADS)

    Kim, Changhwan; Park, Miran; Sung, Younghun; Lee, Jaehak; Choi, Jiyoung; Cho, Seungryong

    2015-08-01

Accurate and efficient scatter correction is essential for acquisition of high-quality x-ray cone-beam CT (CBCT) images for various applications. This study was conducted to demonstrate the feasibility of using the data consistency condition (DCC) as a criterion for scatter kernel optimization in scatter deconvolution methods in CBCT. Because data consistency in the mid-plane of CBCT is challenged primarily by scatter, we utilized data consistency to confirm the degree of scatter correction and to steer the update in iterative kernel optimization. By means of the parallel-beam DCC via fan-parallel rebinning, we iteratively optimized the scatter kernel parameters, using a particle swarm optimization algorithm for its computational efficiency and excellent convergence. The proposed method was validated by a simulation study using the XCAT numerical phantom and also by experimental studies using the ACS head phantom and the pelvic part of the Rando phantom. The results showed that the proposed method can effectively improve the accuracy of deconvolution-based scatter correction. Quantitative assessments of image quality parameters such as contrast and structural similarity (SSIM) revealed that the optimally selected scatter kernel improves the contrast (evaluated against scatter-free images) by up to 99.5%, 94.4%, and 84.4%, and the SSIM by up to 96.7%, 90.5%, and 87.8%, in the XCAT study, the ACS head phantom study, and the pelvis phantom study, respectively. The proposed method can achieve accurate and efficient scatter correction from a single cone-beam scan without need of any auxiliary hardware or additional experimentation.

  4. Correction of scatter in megavoltage cone-beam CT

    NASA Astrophysics Data System (ADS)

    Spies, L.; Ebert, M.; Groh, B. A.; Hesse, B. M.; Bortfeld, T.

    2001-03-01

    The role of scatter in a cone-beam computed tomography system using the therapeutic beam of a medical linear accelerator and a commercial electronic portal imaging device (EPID) is investigated. A scatter correction method is presented which is based on a superposition of Monte Carlo generated scatter kernels. The kernels are adapted to both the spectral response of the EPID and the dimensions of the phantom being scanned. The method is part of a calibration procedure which converts the measured transmission data acquired for each projection angle into water-equivalent thicknesses. Tomographic reconstruction of the projections then yields an estimate of the electron density distribution of the phantom. It is found that scatter produces cupping artefacts in the reconstructed tomograms. Furthermore, reconstructed electron densities deviate greatly (by about 30%) from their expected values. The scatter correction method removes the cupping artefacts and decreases the deviations from 30% down to about 8%.
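
    A kernel-superposition scatter estimate of the general kind used here can be sketched in the projection domain: convolve an estimate of the primary fluence with a broad scatter kernel and subtract the result. The published method uses Monte Carlo generated kernels adapted to the EPID response and the phantom dimensions; the Gaussian kernel, the iteration scheme, and all numbers below are stand-in assumptions for illustration only.

```python
# Hedged sketch of projection-domain scatter removal by kernel superposition (convolution).
import numpy as np
from scipy.signal import fftconvolve

def gaussian_kernel(size=65, sigma_px=15.0, amplitude=0.25):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma_px ** 2))
    return amplitude * k / k.sum()

def correct_projection(measured, kernel, n_iter=3):
    """Iteratively refine the primary estimate: primary = measured - kernel (x) primary."""
    primary = measured.copy()
    for _ in range(n_iter):
        scatter = fftconvolve(primary, kernel, mode="same")
        primary = np.clip(measured - scatter, 0, None)
    return primary

measured = np.ones((256, 256))                 # flat transmission image for illustration
measured[96:160, 96:160] = 0.4                 # attenuating insert
corrected = correct_projection(measured, gaussian_kernel())
print("center value before/after: %.3f / %.3f" % (measured[128, 128], corrected[128, 128]))
```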

  5. Quantitatively accurate activity measurements with a dedicated cardiac SPECT camera: Physical phantom experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pourmoghaddas, Amir, E-mail: apour@ottawaheart.ca; Wells, R. Glenn

Purpose: Recently, there has been increased interest in dedicated cardiac single photon emission computed tomography (SPECT) scanners with pinhole collimation and improved detector technology due to their improved count sensitivity and resolution over traditional parallel-hole cameras. With traditional cameras, energy-based approaches are often used in the clinic for scatter compensation because they are fast and easily implemented. Some of the cardiac cameras use cadmium-zinc-telluride (CZT) detectors which can complicate the use of energy-based scatter correction (SC) due to the low-energy tail (an increased number of unscattered photons detected with reduced energy). Modified energy-based scatter correction methods can be implemented, but their level of accuracy is unclear. In this study, the authors validated by physical phantom experiments the quantitative accuracy and reproducibility of easily implemented correction techniques applied to 99mTc myocardial imaging with a CZT-detector-based gamma camera with multiple heads, each with a single-pinhole collimator. Methods: Activity in the cardiac compartment of an Anthropomorphic Torso phantom (Data Spectrum Corporation) was measured through 15 99mTc-SPECT acquisitions. The ratio of activity concentrations in organ compartments resembled a clinical 99mTc-sestamibi scan and was kept consistent across all experiments (1.2:1 heart to liver and 1.5:1 heart to lung). Two background activity levels were considered: no activity (cold) and an activity concentration 1/10th of the heart (hot). A plastic “lesion” was placed inside the septal wall of the myocardial insert to simulate the presence of a region without tracer uptake, and contrast in this lesion was calculated for all images. The true net activity in each compartment was measured with a dose calibrator (CRC-25R, Capintec, Inc.). A 10 min SPECT image was acquired using a dedicated cardiac camera with CZT detectors (Discovery NM530c, GE Healthcare), followed by a CT scan for attenuation correction (AC). For each experiment, separate images were created including reconstruction with no corrections (NC), with AC, with attenuation and dual-energy window (DEW) scatter correction (ACSC), with attenuation and partial volume correction (PVC) applied (ACPVC), and with attenuation, scatter, and PVC applied (ACSCPVC). The DEW SC method used was modified to account for the presence of the low-energy tail. Results: T-tests showed that the mean error in absolute activity measurement was reduced significantly for AC and ACSC compared to NC for both (hot and cold) datasets (p < 0.001) and that ACSC, ACPVC, and ACSCPVC show significant reductions in mean differences compared to AC (p ≤ 0.001) without increasing the uncertainty (p > 0.4). The effect of SC and PVC was significant in reducing errors over AC in both datasets (p < 0.001 and p < 0.01, respectively), resulting in a mean error of 5% ± 4%. Conclusions: Quantitative measurements of cardiac 99mTc activity are achievable using attenuation and scatter corrections with the authors’ dedicated cardiac SPECT camera. Partial volume corrections offer improvements in measurement accuracy in AC images and ACSC images with elevated background activity; however, these improvements are not significant in ACSC images with low background activity.
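
    The dual-energy-window idea this record builds on can be sketched in a few lines: counts in a lower "scatter" window are scaled by the ratio of window widths and an empirical factor to estimate the scatter inside the photopeak window, which is then subtracted. The CZT-specific modification for the low-energy tail described in the paper is not reproduced; window widths, the scaling factor, and the count data below are illustrative assumptions.

```python
# Hedged sketch of a standard dual-energy-window (DEW, Jaszczak-style) scatter estimate.
import numpy as np

def dew_scatter_estimate(scatter_window_counts, scatter_width_keV,
                         photopeak_width_keV, k=0.5):
    """Per-pixel scatter estimate for the photopeak window."""
    return k * scatter_window_counts * (photopeak_width_keV / scatter_width_keV)

def dew_correct(photopeak_counts, scatter_window_counts,
                scatter_width_keV=14.0, photopeak_width_keV=28.0, k=0.5):
    scatter = dew_scatter_estimate(scatter_window_counts, scatter_width_keV,
                                   photopeak_width_keV, k)
    return np.clip(photopeak_counts - scatter, 0, None)

rng = np.random.default_rng(4)
peak = rng.poisson(80.0, size=(64, 64)).astype(float)       # photopeak window counts (example)
lower = rng.poisson(30.0, size=(64, 64)).astype(float)      # lower scatter window counts (example)
print("mean counts before/after DEW correction: %.1f / %.1f"
      % (peak.mean(), dew_correct(peak, lower).mean()))
```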

  6. Evaluation of attenuation and scatter correction requirements in small animal PET and SPECT imaging

    NASA Astrophysics Data System (ADS)

    Konik, Arda Bekir

    Positron emission tomography (PET) and single photon emission tomography (SPECT) are two nuclear emission-imaging modalities that rely on the detection of high-energy photons emitted from radiotracers administered to the subject. The majority of these photons are attenuated (absorbed or scattered) in the body, resulting in count losses or deviations from true detection, which in turn degrades the accuracy of images. In clinical emission tomography, sophisticated correction methods are often required employing additional x-ray CT or radionuclide transmission scans. Having proven their potential in both clinical and research areas, both PET and SPECT are being adapted for small animal imaging. However, despite the growing interest in small animal emission tomography, little scientific information exists about the accuracy of these correction methods on smaller size objects, and what level of correction is required. The purpose of this work is to determine the role of attenuation and scatter corrections as a function of object size through simulations. The simulations were performed using Interactive Data Language (IDL) and a Monte Carlo based package, Geant4 application for emission tomography (GATE). In IDL simulations, PET and SPECT data acquisition were modeled in the presence of attenuation. A mathematical emission and attenuation phantom approximating a thorax slice and slices from real PET/CT data were scaled to 5 different sizes (i.e., human, dog, rabbit, rat and mouse). The simulated emission data collected from these objects were reconstructed. The reconstructed images, with and without attenuation correction, were compared to the ideal (i.e., non-attenuated) reconstruction. Next, using GATE, scatter fraction values (the ratio of the scatter counts to the total counts) of PET and SPECT scanners were measured for various sizes of NEMA (cylindrical phantoms representing small animals and human), MOBY (realistic mouse/rat model) and XCAT (realistic human model) digital phantoms. In addition, PET projection files for different sizes of MOBY phantoms were reconstructed in 6 different conditions including attenuation and scatter corrections. Selected regions were analyzed for these different reconstruction conditions and object sizes. Finally, real mouse data from the real version of the same small animal PET scanner we modeled in our simulations were analyzed for similar reconstruction conditions. Both our IDL and GATE simulations showed that, for small animal PET and SPECT, even the smallest size objects (˜2 cm diameter) showed ˜15% error when both attenuation and scatter were not corrected. However, a simple attenuation correction using a uniform attenuation map and object boundary obtained from emission data significantly reduces this error in non-lung regions (˜1% for smallest size and ˜6% for largest size). In lungs, emissions values were overestimated when only attenuation correction was performed. In addition, we did not observe any significant improvement between the uses of uniform or actual attenuation map (e.g., only ˜0.5% for largest size in PET studies). The scatter correction was not significant for smaller size objects, but became increasingly important for larger sizes objects. These results suggest that for all mouse sizes and most rat sizes, uniform attenuation correction can be performed using emission data only. For smaller sizes up to ˜ 4 cm, scatter correction is not required even in lung regions. 
For larger sizes, if accurate quantitation is needed, an additional transmission scan may be required to estimate an accurate attenuation map for both attenuation and scatter corrections.

  7. Efficient scatter distribution estimation and correction in CBCT using concurrent Monte Carlo fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bootsma, G. J., E-mail: Gregory.Bootsma@rmp.uhn.on.ca; Verhaegen, F.; Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec H3G 1A4

    2015-01-15

Purpose: X-ray scatter is a significant impediment to image quality improvements in cone-beam CT (CBCT). The authors present and demonstrate a novel scatter correction algorithm using a scatter estimation method that simultaneously combines multiple Monte Carlo (MC) CBCT simulations through the use of a concurrently evaluated fitting function, referred to as concurrent MC fitting (CMCF). Methods: The CMCF method uses concurrently run MC CBCT scatter projection simulations that are a subset of the projection angles used in the projection set, P, to be corrected. The scattered photons reaching the detector in each MC simulation are simultaneously aggregated by an algorithm which computes the scatter detector response, S_MC. S_MC is fit to a function, S_F, and if the fit of S_F is within a specified goodness of fit (GOF), the simulations are terminated. The fit, S_F, is then used to interpolate the scatter distribution over all pixel locations for every projection angle in the set P. The CMCF algorithm was tested using a frequency-limited sum of sines and cosines as the fitting function on both simulated and measured data. The simulated data consisted of an anthropomorphic head and a pelvis phantom created from CT data, simulated with and without the use of a compensator. The measured data were a pelvis scan of a phantom and patient taken on an Elekta Synergy platform. The simulated data were used to evaluate various GOF metrics as well as determine a suitable fitness value. The simulated data were also used to quantitatively evaluate the image quality improvements provided by the CMCF method. A qualitative analysis was performed on the measured data by comparing the CMCF scatter-corrected reconstruction to the original uncorrected reconstruction, a reconstruction corrected by a constant scatter estimate, and a reconstruction created using a set of projections taken with a small cone angle. Results: Pearson’s correlation, r, proved to be a suitable GOF metric with strong correlation with the actual error of the scatter fit, S_F. Fitting the scatter distribution to a limited sum of sine and cosine functions using a low-pass filtered fast Fourier transform provided a computationally efficient and accurate fit. The CMCF algorithm reduces the number of photon histories required by over four orders of magnitude. The simulated experiments showed that using a compensator reduced the computational time by a factor between 1.5 and 1.75. The scatter estimates for the simulated and measured data were computed between 35–93 s and 114–122 s, respectively, using 16 Intel Xeon cores (3.0 GHz). The CMCF scatter correction improved the contrast-to-noise ratio by 10%–50% and reduced the reconstruction error to under 3% for the simulated phantoms. Conclusions: The novel CMCF algorithm significantly reduces the computation time required to estimate the scatter distribution by reducing the statistical noise in the MC scatter estimate and limiting the number of projection angles that must be simulated. Using the scatter estimate provided by the CMCF algorithm to correct both simulated and real projection data showed improved reconstruction image quality.
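
    Two ingredients highlighted in this record, fitting a noisy Monte Carlo scatter profile with a frequency-limited sum of sines and cosines via a low-pass filtered FFT and using Pearson's r as the goodness-of-fit measure, are sketched below in 1D for brevity. The synthetic "Monte Carlo" profile and the number of retained modes are assumptions; the paper works on 2D detector images.

```python
# Hedged sketch: low-pass Fourier fit of a noisy scatter profile with Pearson-r GOF check.
import numpy as np
from scipy.stats import pearsonr

def lowpass_fourier_fit(noisy_profile, n_modes=4):
    """Keep only the DC term and the lowest n_modes Fourier coefficients, then invert."""
    coeffs = np.fft.rfft(noisy_profile)
    coeffs[n_modes + 1:] = 0.0
    return np.fft.irfft(coeffs, n=noisy_profile.size)

# Synthetic "Monte Carlo" scatter estimate: a smooth low-frequency shape plus noise
x = np.linspace(0, 2 * np.pi, 512)
true_scatter = 100 + 40 * np.sin(x) + 15 * np.cos(2 * x)
noisy = true_scatter + np.random.default_rng(5).normal(0, 20, x.size)

fit = lowpass_fourier_fit(noisy, n_modes=4)
r, _ = pearsonr(fit, noisy)                      # goodness of fit against the noisy MC estimate
print("Pearson r = %.3f, RMS error vs. truth = %.1f"
      % (r, np.sqrt(np.mean((fit - true_scatter) ** 2))))
```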

  8. Correction method for influence of tissue scattering for sidestream dark-field oximetry using multicolor LEDs

    NASA Astrophysics Data System (ADS)

    Kurata, Tomohiro; Oda, Shigeto; Kawahira, Hiroshi; Haneishi, Hideaki

    2016-12-01

We have previously proposed a method for estimating intravascular oxygen saturation (SO2) from images obtained by sidestream dark-field (SDF) imaging (we call it SDF oximetry), and we investigated its fundamental characteristics by Monte Carlo simulation. In this paper, we propose a correction method for scattering by the tissue and performed experiments with turbid phantoms as well as Monte Carlo simulation experiments to investigate the influence of tissue scattering in SDF imaging. In the estimation method, we used modified extinction coefficients of hemoglobin called average extinction coefficients (AECs) to correct for the influence of the bandwidth of the illumination sources, the imaging camera characteristics, and the tissue scattering. We estimate the scattering coefficient of the tissue from the maximum slope of the pixel-value profile along a line perpendicular to the blood vessel running direction in an SDF image and correct the AECs using the scattering coefficient. To evaluate the proposed method, we developed a trial SDF probe to obtain three-band images by switching multicolor light-emitting diodes and obtained images of turbid phantoms composed of agar powder, fat emulsion, and bovine blood-filled glass tubes. As a result, we found that an increase in scattering by the phantom body brought about a decrease in the AECs. The experimental results showed that the use of suitable values for the AECs led to more accurate SO2 estimation. We also confirmed the validity of the proposed correction method to improve the accuracy of the SO2 estimation.

  9. WE-AB-207A-08: BEST IN PHYSICS (IMAGING): Advanced Scatter Correction and Iterative Reconstruction for Improved Cone-Beam CT Imaging On the TrueBeam Radiotherapy Machine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, A; Paysan, P; Brehm, M

    2016-06-15

Purpose: To improve CBCT image quality for image-guided radiotherapy by applying advanced reconstruction algorithms to overcome scatter, noise, and artifact limitations. Methods: CBCT is used extensively for patient setup in radiotherapy. However, image quality generally falls short of diagnostic CT, limiting soft-tissue based positioning and potential applications such as adaptive radiotherapy. The conventional TrueBeam CBCT reconstructor uses a basic scatter correction and FDK reconstruction, resulting in residual scatter artifacts, suboptimal image noise characteristics, and other artifacts like cone-beam artifacts. We have developed an advanced scatter correction that uses a finite-element solver (AcurosCTS) to model the behavior of photons as they pass (and scatter) through the object. Furthermore, iterative reconstruction is applied to the scatter-corrected projections, enforcing data consistency with statistical weighting and applying an edge-preserving image regularizer to reduce image noise. The combined algorithms have been implemented on a GPU. CBCT projections from clinically operating TrueBeam systems have been used to compare image quality between the conventional and improved reconstruction methods. Planning CT images of the same patients have also been compared. Results: The advanced scatter correction removes shading and inhomogeneity artifacts, reducing the scatter artifact from 99.5 HU to 13.7 HU in a typical pelvis case. Iterative reconstruction provides further benefit by reducing image noise and eliminating streak artifacts, thereby improving soft-tissue visualization. In a clinical head and pelvis CBCT, the noise was reduced by 43% and 48%, respectively, with no change in spatial resolution (assessed visually). Additional benefits include reduction of cone-beam artifacts and reduction of metal artifacts due to intrinsic downweighting of corrupted rays. Conclusion: The combination of an advanced scatter correction with iterative reconstruction substantially improves CBCT image quality. It is anticipated that clinically acceptable reconstruction times will result from a multi-GPU implementation (the algorithms are under active development and not yet commercially available). All authors are employees of and (may) own stock of Varian Medical Systems.

  10. A technique for measuring oxygen saturation in biological tissues based on diffuse optical spectroscopy

    NASA Astrophysics Data System (ADS)

    Kleshnin, Mikhail; Orlova, Anna; Kirillin, Mikhail; Golubiatnikov, German; Turchin, Ilya

    2017-07-01

A new approach to the optical measurement of blood oxygen saturation was developed and implemented. This technique is based on an original three-stage algorithm for reconstructing the relative concentrations of biological chromophores (hemoglobin, water, lipids) from the measured spectra of diffusely scattered light at different distances from the probing radiation source. Numerical experiments and testing of the proposed technique on a biological phantom showed high reconstruction accuracy and the possibility of correctly calculating hemoglobin oxygenation in the presence of additive noise and calibration errors. The results of animal studies agreed with previously published results from other research groups and demonstrated that the developed technique can be applied to monitor oxygen saturation in tumor tissue.
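
    The final step of such a reconstruction can be illustrated simply: once absorption coefficients at two or more wavelengths have been recovered, the oxy- and deoxy-hemoglobin concentrations follow from a linear solve against their extinction coefficients, and oxygen saturation is their ratio. The extinction values below are rough illustrative numbers, not the tabulated spectra or the three-stage algorithm used by the authors.

```python
# Hedged sketch: oxygen saturation from recovered absorption coefficients at two wavelengths.
import numpy as np

# rows: wavelengths, columns: [HbO2, Hb] extinction coefficients (illustrative, consistent units)
EXTINCTION = np.array([[ 586., 1405.],      # ~750 nm (illustrative values)
                       [1058.,  762.]])     # ~850 nm (illustrative values)

def oxygen_saturation(mu_a_per_wavelength):
    """Solve E @ [C_HbO2, C_Hb] = mu_a and return SO2 = C_HbO2 / (C_HbO2 + C_Hb)."""
    c_hbo2, c_hb = np.linalg.solve(EXTINCTION, np.asarray(mu_a_per_wavelength))
    return c_hbo2 / (c_hbo2 + c_hb)

# Example: synthesize absorption for a 70% saturated sample, then recover the saturation
mu_a = EXTINCTION @ np.array([0.7, 0.3])
print("estimated SO2: %.1f%%" % (100 * oxygen_saturation(mu_a)))
```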

  11. Comparison of coherent anti-Stokes Raman-scattering thermometry with thermocouple measurements and model predictions in both natural-gas and coal-dust flames.

    PubMed

    Lückerath, R; Woyde, M; Meier, W; Stricker, W; Schnell, U; Magel, H C; Görres, J; Spliethoff, H; Maier, H

    1995-06-20

Mobile coherent anti-Stokes Raman-scattering equipment was applied for single-shot temperature measurements in a pilot-scale furnace with a thermal power of 300 kW, fueled with either natural gas or coal dust. Average temperatures deduced from N2 coherent anti-Stokes Raman-scattering spectra were compared with thermocouple readings for identical flame conditions. There were evident differences between the results of both techniques, mainly in the case of the natural-gas flame. For the coal-dust flame, a strong influence of an incoherent and a coherent background, which led to remarkable changes in the spectral shape of the N2 Q-branch spectra, was observed. Therefore an algorithm had to be developed to correct the coal-dust flame spectra before evaluation. The measured temperature profiles at two different planes in the furnace were compared with model calculations.

  12. Measurement of neutrino flux from neutrino-electron elastic scattering

    DOE PAGES

    Park, J.; Aliaga, L.; Altinok, O.; ...

    2016-06-10

Muon-neutrino elastic scattering on electrons is an observable neutrino process whose cross section is precisely known. Consequently, a measurement of this process in an accelerator-based νμ beam can improve the knowledge of the absolute neutrino flux impinging upon the detector; typically this knowledge is limited to ~10% due to uncertainties in hadron production and focusing. We isolated a sample of 135±17 neutrino-electron elastic scattering candidates in the segmented scintillator detector of MINERvA, after subtracting backgrounds and correcting for efficiency. We show how this sample can be used to reduce the total uncertainty on the NuMI νμ flux from 9% to 6%. Finally, our measurement provides a flux constraint that is useful to other experiments using the NuMI beam, and this technique is applicable to future neutrino beams operating at multi-GeV energies.

  13. Measurement of neutrino flux from neutrino-electron elastic scattering

    NASA Astrophysics Data System (ADS)

Park, J.; Aliaga, L.; Altinok, O.; Bellantoni, L.; Bercellie, A.; Betancourt, M.; Bodek, A.; Bravar, A.; Budd, H.; Cai, T.; Carneiro, M. F.; Christy, M. E.; Chvojka, J.; da Motta, H.; Dytman, S. A.; Díaz, G. A.; Eberly, B.; Felix, J.; Fields, L.; Fine, R.; Gago, A. M.; Galindo, R.; Ghosh, A.; Golan, T.; Gran, R.; Harris, D. A.; Higuera, A.; Kleykamp, J.; Kordosky, M.; Le, T.; Maher, E.; Manly, S.; Mann, W. A.; Marshall, C. M.; Martinez Caicedo, D. A.; McFarland, K. S.; McGivern, C. L.; McGowan, A. M.; Messerly, B.; Miller, J.; Mislivec, A.; Morfín, J. G.; Mousseau, J.; Naples, D.; Nelson, J. K.; Norrick, A.; Nuruzzaman; Osta, J.; Paolone, V.; Patrick, C. E.; Perdue, G. N.; Rakotondravohitra, L.; Ramirez, M. A.; Ray, H.; Ren, L.; Rimal, D.; Rodrigues, P. A.; Ruterbories, D.; Schellman, H.; Solano Salinas, C. J.; Tagg, N.; Tice, B. G.; Valencia, E.; Walton, T.; Wolcott, J.; Wospakrik, M.; Zavala, G.; Zhang, D.; MINERvA Collaboration

    2016-06-01

    Muon-neutrino elastic scattering on electrons is an observable neutrino process whose cross section is precisely known. Consequently a measurement of this process in an accelerator-based νμ beam can improve the knowledge of the absolute neutrino flux impinging upon the detector; typically this knowledge is limited to ˜10 % due to uncertainties in hadron production and focusing. We have isolated a sample of 135 ±17 neutrino-electron elastic scattering candidates in the segmented scintillator detector of MINERvA, after subtracting backgrounds and correcting for efficiency. We show how this sample can be used to reduce the total uncertainty on the NuMI νμ flux from 9% to 6%. Our measurement provides a flux constraint that is useful to other experiments using the NuMI beam, and this technique is applicable to future neutrino beams operating at multi-GeV energies.

  14. Experimental evaluation of effective atomic number of composite materials using back-scattering of gamma photons

    NASA Astrophysics Data System (ADS)

    Singh, Inderjeet; Singh, Bhajan; Sandhu, B. S.; Sabharwal, Arvind D.

    2017-04-01

A method is presented for the calculation of the effective atomic number (Zeff) of composite materials using back-scattering of 662 keV gamma photons from a 137Cs mono-energetic radioactive source. The technique is non-destructive and is employed to evaluate Zeff of different composite materials by letting gamma photons interact with a semi-infinite target in a back-scattering geometry, using a 3″ × 3″ NaI(Tl) scintillation detector. The present work studies the effect of target thickness on the intensity distribution of gamma photons that are multiply back-scattered from targets (pure elements) and composites (mixtures of different elements). The intensity of multiply back-scattered events increases with increasing target thickness and finally saturates. The saturation thickness for multiply back-scattered events is used to assign a number (Zeff) to multi-element materials. The response function of the 3″ × 3″ NaI(Tl) scintillation detector is applied to the observed pulse-height distribution to include the contribution of partially absorbed photons. The reduced signal-to-noise ratio reflects the increase in multiply back-scattered events in the response-corrected spectrum. Data obtained from Monte Carlo simulations and the literature also support the present experimental results.

  15. The Influence of Trace Gases Absorption on Differential Ring Cross Sections

    NASA Astrophysics Data System (ADS)

    Han, Dong; Zhao, Keyi

    2017-04-01

The Ring effect refers to the filling in of Fraunhofer lines (solar absorption lines), caused almost entirely by rotational Raman scattering. Rotational Raman scattering by N2 and O2 in the atmosphere is the main contributor to the Ring effect. The Ring effect is a significant limitation on the accuracy of trace-gas retrievals from satellite data using the Differential Optical Absorption Spectroscopy (DOAS) technique. In this study, the solar spectrum is first convolved with the rotational Raman cross sections of the atmosphere (calculated from the rotational Raman cross sections of N2 and O2), divided by the original solar spectrum, and a cubic polynomial is subtracted off to create the differential Ring spectrum Ring1. Secondly, the Ring effect for pure Raman scattering of the Fraunhofer spectrum plus the contribution from interference by terrestrial absorption, which typically comes from a trace gas (e.g., O3), is derived. To allow for more generality, the optically thin term as well as the next term in the expansion of the Beer-Lambert law are calculated. Ring1, Ring2, and Ring3 are the Fraunhofer-only, first terrestrial correction, and second terrestrial correction spectra for DOAS fitting.
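
    The construction of the Fraunhofer-only differential Ring spectrum described above can be sketched directly: convolve a high-resolution solar spectrum with a rotational-Raman redistribution kernel, divide by the original spectrum, and remove a cubic polynomial to leave the differential structure. The solar spectrum and the Raman kernel below are synthetic placeholders, not the N2/O2 cross sections used in the study.

```python
# Hedged sketch of building a differential Ring spectrum from a (synthetic) solar spectrum.
import numpy as np

wavelength = np.linspace(390.0, 400.0, 2000)                            # nm
solar = 1.0 - 0.6 * np.exp(-0.5 * ((wavelength - 393.4) / 0.08) ** 2)   # one "Fraunhofer line"

# Toy rotational-Raman kernel: symmetric redistribution over roughly +/- 0.4 nm
shift = np.linspace(-0.4, 0.4, 161)
raman_kernel = np.exp(-np.abs(shift) / 0.15)
raman_kernel /= raman_kernel.sum()

raman_filled = np.convolve(solar, raman_kernel, mode="same")            # line partially filled in
ratio = raman_filled / solar

poly = np.polynomial.Polynomial.fit(wavelength, ratio, deg=3)           # cubic background
ring_1 = ratio - poly(wavelength)                                       # differential Ring spectrum

print("peak-to-peak differential Ring structure: %.3f" % np.ptp(ring_1))
```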

  16. Rayleigh Scattering.

    ERIC Educational Resources Information Center

    Young, Andrew T.

    1982-01-01

The correct usage of such terminology as "Rayleigh scattering," "Rayleigh lines," "Raman lines," and "Tyndall scattering" is resolved during an historical excursion through the physics of light-scattering by gas molecules. (Author/JN)

  17. Guide-star-based computational adaptive optics for broadband interferometric tomography

    PubMed Central

    Adie, Steven G.; Shemonski, Nathan D.; Graf, Benedikt W.; Ahmad, Adeel; Scott Carney, P.; Boppart, Stephen A.

    2012-01-01

    We present a method for the numerical correction of optical aberrations based on indirect sensing of the scattered wavefront from point-like scatterers (“guide stars”) within a three-dimensional broadband interferometric tomogram. This method enables the correction of high-order monochromatic and chromatic aberrations utilizing guide stars that are revealed after numerical compensation of defocus and low-order aberrations of the optical system. Guide-star-based aberration correction in a silicone phantom with sparse sub-resolution-sized scatterers demonstrates improvement of resolution and signal-to-noise ratio over a large isotome. Results in highly scattering muscle tissue showed improved resolution of fine structure over an extended volume. Guide-star-based computational adaptive optics expands upon the use of image metrics for numerically optimizing the aberration correction in broadband interferometric tomography, and is analogous to phase-conjugation and time-reversal methods for focusing in turbid media. PMID:23284179

  18. Contrast enhanced imaging with a stationary digital breast tomosynthesis system

    NASA Astrophysics Data System (ADS)

    Puett, Connor; Calliste, Jabari; Wu, Gongting; Inscoe, Christina R.; Lee, Yueh Z.; Zhou, Otto; Lu, Jianping

    2017-03-01

    Digital breast tomosynthesis (DBT) captures some depth information and thereby improves the conspicuity of breast lesions, compared to standard mammography. Using contrast during DBT may also help distinguish malignant from benign sites. However, adequate visualization of the low iodine signal requires a subtraction step to remove background signal and increase lesion contrast. Additionally, attention to factors that limit contrast, including scatter, noise, and artifact, are important during the image acquisition and post-acquisition processing steps. Stationary DBT (sDBT) is an emerging technology that offers a higher spatial and temporal resolution than conventional DBT. This phantom-based study explored contrast-enhanced sDBT (CE sDBT) across a range of clinically-appropriate iodine concentrations, lesion sizes, and breast thicknesses. The protocol included an effective scatter correction method and an iterative reconstruction technique that is unique to the sDBT system. The study demonstrated the ability of this CE sDBT system to collect projection images adequate for both temporal subtraction (TS) and dual-energy subtraction (DES). Additionally, the reconstruction approach preserved the improved contrast-to-noise ratio (CNR) achieved in the subtraction step. Finally, scatter correction increased the iodine signal and CNR of iodine-containing regions in projection views and reconstructed image slices during both TS and DES. These findings support the ongoing study of sDBT as a potentially useful tool for contrast-enhanced breast imaging and also highlight the significant effect that scatter has on image quality during DBT.

  19. Investigation of the halo-artifact in 68Ga-PSMA-11-PET/MRI.

    PubMed

    Heußer, Thorsten; Mann, Philipp; Rank, Christopher M; Schäfer, Martin; Dimitrakopoulou-Strauss, Antonia; Schlemmer, Heinz-Peter; Hadaschik, Boris A; Kopka, Klaus; Bachert, Peter; Kachelrieß, Marc; Freitag, Martin T

    2017-01-01

    Combined positron emission tomography (PET) and magnetic resonance imaging (MRI) targeting the prostate-specific membrane antigen (PSMA) with a 68Ga-labelled PSMA-analog (68Ga-PSMA-11) is discussed as a promising diagnostic method for patients with suspicion or history of prostate cancer. One potential drawback of this method is severe photopenic (halo-) artifacts surrounding the bladder and the kidneys in the scatter-corrected PET images, which have been reported to occur frequently in clinical practice. The goal of this work was to investigate the occurrence and impact of these artifacts and, secondly, to evaluate variants of the standard scatter correction method with regard to halo-artifact suppression. Experiments using a dedicated pelvis phantom were conducted to investigate whether the halo-artifact is modality-, tracer-, and/or concentration-dependent. Furthermore, 31 patients with history of prostate cancer were selected from an ongoing 68Ga-PSMA-11-PET/MRI study. For each patient, PET raw data were reconstructed employing six different variants of PET scatter correction: absolute scatter scaling, relative scatter scaling, and relative scatter scaling combined with prompt gamma correction, each of which was combined with a maximum scatter fraction (MaxSF) of MaxSF = 75% or MaxSF = 40%. Evaluation of the reconstructed images with regard to halo-artifact suppression was performed both quantitatively using statistical analysis and qualitatively by two independent readers. The phantom experiments did not reveal any modality-dependency (PET/MRI vs. PET/CT) or tracer-dependency (68Ga vs. 18F-FDG). Patient- and phantom-based data indicated that halo-artifacts derive from high organ-to-background activity ratios (OBR) between bladder/kidneys and surrounding soft tissue, with a positive correlation between OBR and halo size. Comparing different variants of scatter correction, reducing the maximum scatter fraction from the default value MaxSF = 75% to MaxSF = 40% was found to efficiently suppress halo-artifacts in both phantom and patient data. In 1 of 31 patients, reducing the maximum scatter fraction provided new PET-based information changing the patient's diagnosis. Halo-artifacts are particularly observed for 68Ga-PSMA-11-PET/MRI due to 1) the biodistribution of the PSMA-11-tracer resulting in large OBRs for bladder and kidneys and 2) inaccurate scatter correction methods currently used in clinical routine, which tend to overestimate the scatter contribution. If not compensated for, 68Ga-PSMA-11 uptake pathologies may be masked by halo-artifacts leading to false-negative diagnoses. Reducing the maximum scatter fraction was found to efficiently suppress halo-artifacts.
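
    The maximum-scatter-fraction limit discussed above can be illustrated with a simple cap on a scatter estimate. This is a minimal sketch under the assumption that the limit is applied as a global rescaling of the scatter sinogram; it is not the vendor's implementation, and all array names and values are hypothetical.

    ```python
    import numpy as np

    def cap_scatter_fraction(prompts, scatter, max_sf=0.40):
        """Rescale a model-based scatter estimate so that its total does not
        exceed max_sf times the total prompt counts (a crude stand-in for a
        maximum-scatter-fraction limit)."""
        sf = scatter.sum() / max(prompts.sum(), 1e-12)
        if sf > max_sf:
            scatter = scatter * (max_sf / sf)
        return scatter

    # Hypothetical sinograms: a high bladder-to-background ratio tends to make
    # the scaled scatter estimate too large, which this cap limits.
    rng = np.random.default_rng(1)
    prompts = rng.poisson(50.0, size=(180, 256)).astype(float)
    scatter_est = 0.6 * prompts + rng.normal(0.0, 1.0, size=prompts.shape)
    scatter_capped = cap_scatter_fraction(prompts, scatter_est, max_sf=0.40)
    ```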

  20. Investigation of the halo-artifact in 68Ga-PSMA-11-PET/MRI

    PubMed Central

    Rank, Christopher M.; Schäfer, Martin; Dimitrakopoulou-Strauss, Antonia; Schlemmer, Heinz-Peter; Hadaschik, Boris A.; Kopka, Klaus; Bachert, Peter; Kachelrieß, Marc

    2017-01-01

    Objectives Combined positron emission tomography (PET) and magnetic resonance imaging (MRI) targeting the prostate-specific membrane antigen (PSMA) with a 68Ga-labelled PSMA-analog (68Ga-PSMA-11) is discussed as a promising diagnostic method for patients with suspicion or history of prostate cancer. One potential drawback of this method is severe photopenic (halo-) artifacts surrounding the bladder and the kidneys in the scatter-corrected PET images, which have been reported to occur frequently in clinical practice. The goal of this work was to investigate the occurrence and impact of these artifacts and, secondly, to evaluate variants of the standard scatter correction method with regard to halo-artifact suppression. Methods Experiments using a dedicated pelvis phantom were conducted to investigate whether the halo-artifact is modality-, tracer-, and/or concentration-dependent. Furthermore, 31 patients with history of prostate cancer were selected from an ongoing 68Ga-PSMA-11-PET/MRI study. For each patient, PET raw data were reconstructed employing six different variants of PET scatter correction: absolute scatter scaling, relative scatter scaling, and relative scatter scaling combined with prompt gamma correction, each of which was combined with a maximum scatter fraction (MaxSF) of MaxSF = 75% or MaxSF = 40%. Evaluation of the reconstructed images with regard to halo-artifact suppression was performed both quantitatively using statistical analysis and qualitatively by two independent readers. Results The phantom experiments did not reveal any modality-dependency (PET/MRI vs. PET/CT) or tracer-dependency (68Ga vs. 18F-FDG). Patient- and phantom-based data indicated that halo-artifacts derive from high organ-to-background activity ratios (OBR) between bladder/kidneys and surrounding soft tissue, with a positive correlation between OBR and halo size. Comparing different variants of scatter correction, reducing the maximum scatter fraction from the default value MaxSF = 75% to MaxSF = 40% was found to efficiently suppress halo-artifacts in both phantom and patient data. In 1 of 31 patients, reducing the maximum scatter fraction provided new PET-based information changing the patient’s diagnosis. Conclusion Halo-artifacts are particularly observed for 68Ga-PSMA-11-PET/MRI due to 1) the biodistribution of the PSMA-11-tracer resulting in large OBRs for bladder and kidneys and 2) inaccurate scatter correction methods currently used in clinical routine, which tend to overestimate the scatter contribution. If not compensated for, 68Ga-PSMA-11 uptake pathologies may be masked by halo-artifacts leading to false-negative diagnoses. Reducing the maximum scatter fraction was found to efficiently suppress halo-artifacts. PMID:28817656

  1. Split-probe hybrid femtosecond/picosecond rotational CARS for time-domain measurement of S-branch Raman linewidths within a single laser shot.

    PubMed

    Patterson, Brian D; Gao, Yi; Seeger, Thomas; Kliewer, Christopher J

    2013-11-15

    We introduce a multiplex technique for the single-laser-shot determination of S-branch Raman linewidths with high accuracy and precision by implementing hybrid femtosecond (fs)/picosecond (ps) rotational coherent anti-Stokes Raman spectroscopy (CARS) with multiple spatially and temporally separated probe beams derived from a single laser pulse. The probe beams scatter from the rotational coherence driven by the fs pump and Stokes pulses at four different probe pulse delay times spanning 360 ps, thereby mapping collisional coherence dephasing in time for the populated rotational levels. The probe beams scatter at different folded BOXCARS angles, yielding spatially separated CARS signals which are collected simultaneously on the charge coupled device camera. The technique yields a single-shot standard deviation (1σ) of less than 3.5% in the determination of Raman linewidths and the average linewidth values obtained for N(2) are within 1% of those previously reported. The presented technique opens the possibility for correcting CARS spectra for time-varying collisional environments in operando.

  2. Aerosol Optical Properties over the Oceans: Summary and Interpretation of Shadow-Band Radiometer Data from Six Cruises. Chapter 19

    NASA Technical Reports Server (NTRS)

    Miller, Mark A.; Reynolds, R. M.; Bartholomew, Mary Jane

    2001-01-01

    The aerosol scattering component of the total radiance measured at the detectors of ocean color satellites is determined with atmospheric correction algorithms. These algorithms are based on aerosol optical thickness measurements made in two channels that lie in the near-infrared portion of the electromagnetic spectrum. The aerosol properties in the near-infrared region are used because there is no significant contribution to the satellite-measured radiance from the underlying ocean surface in that spectral region. In the visible wavelength bands, the spectrum of radiation scattered from the turbid atmosphere is convolved with the spectrum of radiation scattered from the surface layers of the ocean. The radiance contribution made by aerosols in the visible bands is determined from the near-infrared measurements through the use of aerosol models and radiation transfer codes. Selection of appropriate aerosol models from the near-infrared measurements is a fundamental challenge. There are several challenges with respect to the development, improvement, and evaluation of satellite ocean-color atmospheric correction algorithms. A common thread among these challenges is the lack of over-ocean aerosol data. Until recently, one of the most important limitations has been the lack of techniques and instruments to make aerosol measurements at sea. There has been steady progress in this area over the past five years, and there are several new and promising devices and techniques for data collection. The development of new instruments and the collection of more aerosol data from over the world's oceans have brought the realization that aerosol measurements that can be directly compared with aerosol measurements from ocean color satellite measurements are difficult to obtain. There are two problems that limit these types of comparisons: the cloudiness of the atmosphere over the world's oceans and the limitations of the techniques and instruments used to collect aerosol data from ships. To address the latter, we have developed a new type of shipboard sun photometer.

  3. CT-based attenuation and scatter correction compared with uniform attenuation correction in brain perfusion SPECT imaging for dementia

    NASA Astrophysics Data System (ADS)

    Gillen, Rebecca; Firbank, Michael J.; Lloyd, Jim; O'Brien, John T.

    2015-09-01

    This study investigated whether the appearance and diagnostic accuracy of HMPAO brain perfusion SPECT images could be improved by using CT-based attenuation and scatter correction compared with the uniform attenuation correction method. A cohort of subjects who were clinically categorized as Alzheimer’s Disease (n=38), Dementia with Lewy Bodies (n=29) or healthy normal controls (n=30) underwent SPECT imaging with Tc-99m HMPAO and a separate CT scan. The SPECT images were processed using: (a) a correction map derived from the subject’s CT scan, (b) the Chang uniform approximation for correction, or (c) no attenuation correction. Images were visually inspected. The ratios between key regions of interest known to be affected or spared in each condition were calculated for each correction method, and the differences between these ratios were evaluated. The images produced using the different corrections were noted to be visually different. However, ROI analysis found similar statistically significant differences between control and dementia groups and between AD and DLB groups regardless of the correction map used. We did not identify an improvement in diagnostic accuracy in images which were corrected using CT-based attenuation and scatter correction, compared with those corrected using a uniform correction map.

  4. Segmentation-free empirical beam hardening correction for CT.

    PubMed

    Schüller, Sören; Sawall, Stefan; Stannigel, Kai; Hülsbusch, Markus; Ulrici, Johannes; Hell, Erich; Kachelrieß, Marc

    2015-02-01

    The polychromatic nature of the x-ray beams and their effects on the reconstructed image are often disregarded during standard image reconstruction. This leads to cupping and beam hardening artifacts inside the reconstructed volume. To correct for a general cupping, methods like water precorrection exist. They correct the hardening of the spectrum during the penetration of the measured object only for the major tissue class. In contrast, more complex artifacts like streaks between dense objects need other techniques of correction. If using only the information of one single energy scan, there are two types of corrections. The first one is a physical approach. Thereby, artifacts can be reproduced and corrected within the original reconstruction by using assumptions in a polychromatic forward projector. These assumptions could be the used spectrum, the detector response, the physical attenuation and scatter properties of the intersected materials. A second method is an empirical approach, which does not rely on much prior knowledge. This so-called empirical beam hardening correction (EBHC) and the previously mentioned physical-based technique are both relying on a segmentation of the present tissues inside the patient. The difficulty thereby is that beam hardening by itself, scatter, and other effects, which diminish the image quality also disturb the correct tissue classification and thereby reduce the accuracy of the two known classes of correction techniques. The method proposed herein works similarly to the empirical beam hardening correction but does not require a tissue segmentation and therefore shows improvements on image data, which are highly degraded by noise and artifacts. Furthermore, the new algorithm is designed in a way that no additional calibration or parameter fitting is needed. To overcome the segmentation of tissues, the authors propose a histogram deformation of their primary reconstructed CT image. This step is essential for the proposed algorithm to be segmentation-free (sf). This deformation leads to a nonlinear accentuation of higher CT-values. The original volume and the gray value deformed volume are monochromatically forward projected. The two projection sets are then monomially combined and reconstructed to generate sets of basis volumes which are used for correction. This is done by adding a weighted sum of these basis images, with weights chosen to maximize the flatness of the corrected image. sfEBHC is evaluated on polychromatic simulations, phantom measurements, and patient data. The raw data sets were acquired by a dual source spiral CT scanner, a digital volume tomograph, and a dual source micro CT. Different phantom and patient data were used to illustrate the performance and wide range of usability of sfEBHC across different scanning scenarios. The artifact correction capabilities are compared to EBHC. All investigated cases show equal or improved image quality compared to the standard EBHC approach. The artifact correction is capable of correcting beam hardening artifacts for different scan parameters and scan scenarios. sfEBHC generates beam hardening-reduced images and is furthermore capable of dealing with images which are affected by high noise and strong artifacts. The algorithm can be used to recover structures which are hardly visible inside the beam hardening-affected regions.

  5. Segmentation-free empirical beam hardening correction for CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schüller, Sören; Sawall, Stefan; Stannigel, Kai

    2015-02-15

    Purpose: The polychromatic nature of the x-ray beams and their effects on the reconstructed image are often disregarded during standard image reconstruction. This leads to cupping and beam hardening artifacts inside the reconstructed volume. To correct for a general cupping, methods like water precorrection exist. They correct the hardening of the spectrum during the penetration of the measured object only for the major tissue class. In contrast, more complex artifacts like streaks between dense objects need other techniques of correction. If using only the information of one single energy scan, there are two types of corrections. The first one is a physical approach. Thereby, artifacts can be reproduced and corrected within the original reconstruction by using assumptions in a polychromatic forward projector. These assumptions could be the used spectrum, the detector response, the physical attenuation and scatter properties of the intersected materials. A second method is an empirical approach, which does not rely on much prior knowledge. This so-called empirical beam hardening correction (EBHC) and the previously mentioned physical-based technique are both relying on a segmentation of the present tissues inside the patient. The difficulty thereby is that beam hardening by itself, scatter, and other effects, which diminish the image quality also disturb the correct tissue classification and thereby reduce the accuracy of the two known classes of correction techniques. The method proposed herein works similarly to the empirical beam hardening correction but does not require a tissue segmentation and therefore shows improvements on image data, which are highly degraded by noise and artifacts. Furthermore, the new algorithm is designed in a way that no additional calibration or parameter fitting is needed. Methods: To overcome the segmentation of tissues, the authors propose a histogram deformation of their primary reconstructed CT image. This step is essential for the proposed algorithm to be segmentation-free (sf). This deformation leads to a nonlinear accentuation of higher CT-values. The original volume and the gray value deformed volume are monochromatically forward projected. The two projection sets are then monomially combined and reconstructed to generate sets of basis volumes which are used for correction. This is done by adding a weighted sum of these basis images, with weights chosen to maximize the flatness of the corrected image. sfEBHC is evaluated on polychromatic simulations, phantom measurements, and patient data. The raw data sets were acquired by a dual source spiral CT scanner, a digital volume tomograph, and a dual source micro CT. Different phantom and patient data were used to illustrate the performance and wide range of usability of sfEBHC across different scanning scenarios. The artifact correction capabilities are compared to EBHC. Results: All investigated cases show equal or improved image quality compared to the standard EBHC approach. The artifact correction is capable of correcting beam hardening artifacts for different scan parameters and scan scenarios. Conclusions: sfEBHC generates beam hardening-reduced images and is furthermore capable of dealing with images which are affected by high noise and strong artifacts. The algorithm can be used to recover structures which are hardly visible inside the beam hardening-affected regions.

  6. Scattering analysis of LOFAR pulsar observations

    NASA Astrophysics Data System (ADS)

    Geyer, M.; Karastergiou, A.; Kondratiev, V. I.; Zagkouris, K.; Kramer, M.; Stappers, B. W.; Grießmeier, J.-M.; Hessels, J. W. T.; Michilli, D.; Pilia, M.; Sobey, C.

    2017-09-01

    We measure the effects of interstellar scattering on average pulse profiles from 13 radio pulsars with simple pulse shapes. We use data from the LOFAR High Band Antennas, at frequencies between 110 and 190 MHz. We apply a forward fitting technique, and simultaneously determine the intrinsic pulse shape, assuming single Gaussian component profiles. We find that the constant τ, associated with scattering by a single thin screen, has a power-law dependence on frequency, τ ∝ ν^-α, with indices ranging from α = 1.50 to 4.0, despite the simplest theoretical models predicting α = 4.0 or 4.4. Modelling the screen as an isotropic or extremely anisotropic scatterer, we find anisotropic scattering fits lead to larger power-law indices, often in better agreement with theoretically expected values. We compare the scattering models based on the inferred, frequency-dependent parameters of the intrinsic pulse, and the resulting correction to the dispersion measure (DM). We highlight the cases in which fits of extreme anisotropic scattering are appealing, while stressing that the data do not strictly favour either model for any of the 13 pulsars. The pulsars show anomalous scattering properties that are consistent with finite scattering screens and/or anisotropy, but these data alone do not provide the means for an unambiguous characterization of the screens. We revisit the empirical τ versus DM relation and consider how our results support a frequency dependence of α. Very long baseline interferometry, and observations of the scattering and scintillation properties of these sources at higher frequencies, will provide further evidence.
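
    The frequency scaling τ ∝ ν^-α reported above can be estimated from measured scattering timescales with a simple log-log fit. The sketch below is a minimal illustration using a thin-screen power law and synthetic values; it is not the forward-fitting procedure used in the paper.

    ```python
    import numpy as np

    def fit_scattering_index(freq_mhz, tau_ms):
        """Fit tau = A * nu**(-alpha) by linear regression in log-log space;
        returns (A, alpha)."""
        slope, intercept = np.polyfit(np.log(freq_mhz), np.log(tau_ms), 1)
        return float(np.exp(intercept)), float(-slope)

    # Synthetic example: scattering timescales at four HBA sub-band centres
    freq = np.array([115.0, 135.0, 155.0, 175.0])      # MHz
    tau = 40.0 * (freq / 150.0) ** (-3.8)              # ms, assumed values
    amp, alpha = fit_scattering_index(freq, tau)
    print(f"alpha = {alpha:.2f}")                      # ~3.8
    ```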

  7. LOCSET Phase Locking: Operation, Diagnostics, and Applications

    NASA Astrophysics Data System (ADS)

    Pulford, Benjamin N.

    The aim of this dissertation is to discuss the theoretical and experimental work recently done with the Locking of Optical Coherence via Single-detector Electronic-frequency Tagging (LOCSET) phase locking technique developed and employed here at AFRL. The primary objectives of this effort are to detail the fundamental operation of the LOCSET phase locking technique, recognize the conditions in which the LOCSET control electronics optimally operate, demonstrate LOCSET phase locking with higher channel counts than ever before, and extend the LOCSET technique to correct for low order, atmospherically induced, phase aberrations introduced to the output of a tiled array of coherently combinable beams. The experimental work performed for this effort resulted in the coherent combination of 32 low power optical beams operating with unprecedented LOCSET phase error performance of lambda/71 RMS in a local loop beam combination configuration. The LOCSET phase locking technique was also successfully extended, for the first time, into an Object In the Loop (OIL) configuration by utilizing light scattered off of a remote object as the optical return signal for the LOCSET phase control electronics. This LOCSET-OIL technique is capable of correcting for low order phase aberrations caused by atmospheric turbulence disturbances applied across a tiled array output.

  8. Scatter correction for x-ray conebeam CT using one-dimensional primary modulation

    NASA Astrophysics Data System (ADS)

    Zhu, Lei; Gao, Hewei; Bennett, N. Robert; Xing, Lei; Fahrig, Rebecca

    2009-02-01

    Recently, we developed an efficient scatter correction method for x-ray imaging using primary modulation. A two-dimensional (2D) primary modulator with spatially variant attenuating materials is inserted between the x-ray source and the object to separate primary and scatter signals in the Fourier domain. Due to the high modulation frequency in both directions, the 2D primary modulator has a strong scatter correction capability for objects with arbitrary geometries. However, signal processing on the modulated projection data requires knowledge of the modulator position and attenuation. In practical systems, mainly due to system gantry vibration, beam hardening effects and the ramp-filtering in the reconstruction, the insertion of the 2D primary modulator results in artifacts such as rings in the CT images, if no post-processing is applied. In this work, we eliminate the source of artifacts in the primary modulation method by using a one-dimensional (1D) modulator. The modulator is aligned parallel to the ramp-filtering direction to avoid error magnification, while sufficient primary modulation is still achieved for scatter correction on a quasicylindrical object, such as a human body. The scatter correction algorithm is also greatly simplified for the convenience and stability in practical implementations. The method is evaluated on a clinical CBCT system using the Catphan 600 phantom. The result shows effective scatter suppression without introducing additional artifacts. In the selected regions of interest, the reconstruction error is reduced from 187.2 HU to 10.0 HU if the proposed method is used.

  9. SU-E-J-10: A Moving-Blocker-Based Strategy for Simultaneous Megavoltage and Kilovoltage Scatter Correction in Cone-Beam Computed Tomography Image Acquired During Volumetric Modulated Arc Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ouyang, L; Lee, H; Wang, J

    2014-06-01

    Purpose: To evaluate a moving-blocker-based approach in estimating and correcting megavoltage (MV) and kilovoltage (kV) scatter contamination in kV cone-beam computed tomography (CBCT) acquired during volumetric modulated arc therapy (VMAT). Methods: XML code was generated to enable concurrent CBCT acquisition and VMAT delivery in Varian TrueBeam developer mode. A physical attenuator (i.e., “blocker”) consisting of equally spaced lead strips (3.2 mm strip width and 3.2 mm gap in between) was mounted between the x-ray source and patient at a source to blocker distance of 232 mm. The blocker was simulated to be moving back and forth along the gantry rotation axis during the CBCT acquisition. Both MV and kV scatter signal were estimated simultaneously from the blocked regions of the imaging panel, and interpolated into the un-blocked regions. Scatter corrected CBCT was then reconstructed from un-blocked projections after scatter subtraction using an iterative image reconstruction algorithm based on constraint optimization. Experimental studies were performed on a Catphan 600 phantom and an anthropomorphic pelvis phantom to demonstrate the feasibility of using a moving blocker for MV-kV scatter correction. Results: MV scatter greatly degrades the CBCT image quality by increasing the CT number inaccuracy and decreasing the image contrast, in addition to the shading artifacts caused by kV scatter. The artifacts were substantially reduced in the moving blocker corrected CBCT images in both Catphan and pelvis phantoms. Quantitatively, CT number error in selected regions of interest reduced from 377 in the kV-MV contaminated CBCT image to 38 for the Catphan phantom. Conclusions: The moving-blocker-based strategy can successfully correct MV and kV scatter simultaneously in CBCT projection data acquired with concurrent VMAT delivery. This work was supported in part by a grant from the Cancer Prevention and Research Institute of Texas (RP130109) and a grant from the American Cancer Society (RSG-13-326-01-CCE)
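
    The core of the moving-blocker estimate, sampling scatter behind the strips and interpolating it over the open detector area before subtraction, can be sketched as below. This is a simplified single-projection illustration, not the authors' implementation; the mask and function names are assumptions, and the iterative constrained reconstruction is omitted.

    ```python
    import numpy as np
    from scipy.interpolate import griddata

    def estimate_scatter(projection, blocked_mask):
        """Treat the signal behind the lead strips as a sample of the combined
        MV+kV scatter and interpolate it across the unblocked detector area."""
        rows, cols = np.indices(projection.shape)
        pts = np.column_stack((rows[blocked_mask], cols[blocked_mask]))
        vals = projection[blocked_mask]
        scatter = griddata(pts, vals, (rows, cols), method="linear")
        nearest = griddata(pts, vals, (rows, cols), method="nearest")
        return np.where(np.isnan(scatter), nearest, scatter)

    def correct_projection(projection, blocked_mask):
        """Subtract the interpolated scatter from the unblocked pixels."""
        scatter = estimate_scatter(projection, blocked_mask)
        return np.clip(projection - scatter, 0.0, None)
    ```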

  10. Analytic image reconstruction from partial data for a single-scan cone-beam CT with scatter correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Min, Jonghwan; Pua, Rizza; Cho, Seungryong, E-mail: scho@kaist.ac.kr

    Purpose: A beam-blocker composed of multiple strips is a useful gadget for scatter correction and/or for dose reduction in cone-beam CT (CBCT). However, the use of such a beam-blocker would yield cone-beam data that can be challenging for accurate image reconstruction from a single scan in the filtered-backprojection framework. The focus of the work was to develop an analytic image reconstruction method for CBCT that can be directly applied to partially blocked cone-beam data in conjunction with the scatter correction. Methods: The authors developed a rebinned backprojection-filtration (BPF) algorithm for reconstructing images from the partially blocked cone-beam data in a circular scan. The authors also proposed a beam-blocking geometry considering data redundancy such that an efficient scatter estimate can be acquired and sufficient data for BPF image reconstruction can be secured at the same time from a single scan without using any blocker motion. Additionally, a scatter correction method and a noise reduction scheme have been developed. The authors have performed both simulation and experimental studies to validate the rebinned BPF algorithm for image reconstruction from partially blocked cone-beam data. Quantitative evaluations of the reconstructed image quality were performed in the experimental studies. Results: The simulation study revealed that the developed reconstruction algorithm successfully reconstructs the images from the partial cone-beam data. In the experimental study, the proposed method effectively corrected for the scatter in each projection and reconstructed scatter-corrected images from a single scan. Reduction of cupping artifacts and an enhancement of the image contrast have been demonstrated. The image contrast has increased by a factor of about 2, and the image accuracy in terms of root-mean-square-error with respect to the fan-beam CT image has increased by more than 30%. Conclusions: The authors have successfully demonstrated that the proposed scanning method and image reconstruction algorithm can effectively estimate the scatter in cone-beam projections and produce tomographic images of nearly scatter-free quality. The authors believe that the proposed method would provide a fast and efficient CBCT scanning option to various applications, particularly including head-and-neck scans.

  11. Correction of Rayleigh Scattering Effects in Cloud Optical Thickness Retrievals

    NASA Technical Reports Server (NTRS)

    Wang, Meng-Hua; King, Michael D.

    1997-01-01

    We present results that demonstrate the effects of Rayleigh scattering on the retrieval of cloud optical thickness at a visible wavelength (0.66 μm). The sensor-measured radiance at a visible wavelength (0.66 μm) is usually used to infer remotely the cloud optical thickness from aircraft or satellite instruments. For example, we find that without removing Rayleigh scattering effects, errors in the retrieved cloud optical thickness for a thin water cloud layer (τ = 2.0) range from 15 to 60%, depending on solar zenith angle and viewing geometry. For an optically thick cloud (τ = 10), on the other hand, errors can range from 10 to 60% for large solar zenith angles (>60 deg) because of enhanced Rayleigh scattering. It is therefore particularly important to correct for Rayleigh scattering contributions to the reflected signal from a cloud layer both (1) for the case of thin clouds and (2) for large solar zenith angles and all clouds. On the basis of the single scattering approximation, we propose an iterative method for effectively removing Rayleigh scattering contributions from the measured radiance signal in cloud optical thickness retrievals. The proposed correction algorithm works very well and can easily be incorporated into any cloud retrieval algorithm. The Rayleigh correction method is applicable to clouds at any pressure, provided that the cloud top pressure is known to within +/- 100 hPa. With the Rayleigh correction the errors in retrieved cloud optical thickness are usually reduced to within 3%. In cases of both thin cloud layers and thick clouds with large solar zenith angles, the errors are usually reduced by a factor of about 2 to over 10. The Rayleigh correction algorithm has been tested with simulations for realistic cloud optical and microphysical properties with different solar and viewing geometries. We apply the Rayleigh correction algorithm to the cloud optical thickness retrievals from experimental data obtained during the Atlantic Stratocumulus Transition Experiment (ASTEX) conducted near the Azores in June 1992 and compare these results to corresponding retrievals obtained using 0.88 μm. These results provide an example of the Rayleigh scattering effects on thin clouds and further test the Rayleigh correction scheme. Using a nonabsorbing near-infrared wavelength (0.88 μm) in retrieving cloud optical thickness is only applicable over oceans, however, since most land surfaces are highly reflective at 0.88 μm. Hence successful global retrievals of cloud optical thickness should remove Rayleigh scattering effects when using reflectance measurements at 0.66 μm.
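
    A heavily simplified, single-pass version of the correction idea, subtracting a single-scattering Rayleigh reflectance before inverting cloud optical thickness from a reflectance lookup table, is sketched below. The lookup table, geometry, and Rayleigh optical depth are placeholders, and the published algorithm applies the correction iteratively; this is not the authors' algorithm in detail.

    ```python
    import numpy as np

    def rayleigh_phase(cos_theta):
        """Rayleigh phase function."""
        return 0.75 * (1.0 + cos_theta ** 2)

    def rayleigh_reflectance(tau_ray, mu0, mu, cos_scat):
        """Single-scattering approximation of the Rayleigh reflectance for a
        Rayleigh optical depth tau_ray above the cloud."""
        return tau_ray * rayleigh_phase(cos_scat) / (4.0 * mu0 * mu)

    def retrieve_cloud_tau(r_meas, mu0, mu, cos_scat, tau_ray, lut_tau, lut_refl):
        """Subtract the Rayleigh contribution from the measured 0.66-um
        reflectance and invert cloud optical thickness from a hypothetical,
        monotonic cloud-reflectance lookup table lut_refl(lut_tau)."""
        r_cloud = r_meas - rayleigh_reflectance(tau_ray, mu0, mu, cos_scat)
        return np.interp(r_cloud, lut_refl, lut_tau)

    # Toy lookup table and geometry (all values assumed, for illustration only)
    lut_tau = np.linspace(0.5, 64.0, 200)
    lut_refl = lut_tau / (lut_tau + 7.0)          # crude monotonic curve
    tau_c = retrieve_cloud_tau(0.35, mu0=0.8, mu=0.9, cos_scat=-0.6,
                               tau_ray=0.05, lut_tau=lut_tau, lut_refl=lut_refl)
    ```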

  12. Validation of a Multimodality Flow Phantom and Its Application for Assessment of Dynamic SPECT and PET Technologies.

    PubMed

    Gabrani-Juma, Hanif; Clarkin, Owen J; Pourmoghaddas, Amir; Driscoll, Brandon; Wells, R Glenn; deKemp, Robert A; Klein, Ran

    2017-01-01

    Simple and robust techniques are lacking to assess performance of flow quantification using dynamic imaging. We therefore developed a method to qualify flow quantification technologies using a physical compartment exchange phantom and image analysis tool. We validate and demonstrate utility of this method using dynamic PET and SPECT. Dynamic image sequences were acquired on two PET/CT and a cardiac dedicated SPECT (with and without attenuation and scatter corrections) systems. A two-compartment exchange model was fit to image derived time-activity curves to quantify flow rates. Flowmeter measured flow rates (20-300 mL/min) were set prior to imaging and were used as reference truth to which image derived flow rates were compared. Both PET cameras had excellent agreement with truth ( [Formula: see text]). High-end PET had no significant bias (p > 0.05) while lower-end PET had minimal slope bias (wash-in and wash-out slopes were 1.02 and 1.01) but no significant reduction in precision relative to high-end PET (<15% vs. <14% limits of agreement, p > 0.3). SPECT (without scatter and attenuation corrections) slope biases were noted (0.85 and 1.32) and attributed to camera saturation in early time frames. Analysis of wash-out rates from non-saturated, late time frames resulted in excellent agreement with truth ( [Formula: see text], slope = 0.97). Attenuation and scatter corrections did not significantly impact SPECT performance. The proposed phantom, software and quality assurance paradigm can be used to qualify imaging instrumentation and protocols for quantification of kinetic rate parameters using dynamic imaging.
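
    The flow-quantification step, fitting a two-compartment exchange model to image-derived time-activity curves, can be sketched as a convolution model fitted with nonlinear least squares. This is a minimal sketch with synthetic curves; the discretisation, rate-constant names, and initial guesses are assumptions, not the software used in the study.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def tissue_curve(t, k_in, k_out, c_in):
        """One-tissue (two-compartment exchange) model:
        C_tissue(t) = k_in * [C_in convolved with exp(-k_out*t)](t)."""
        dt = t[1] - t[0]
        kernel = np.exp(-k_out * t)
        return k_in * np.convolve(c_in, kernel)[: t.size] * dt

    def fit_exchange_rates(t, c_in, c_tissue):
        """Estimate wash-in/wash-out rate constants from time-activity curves
        of the input and exchange compartments."""
        model = lambda tt, k_in, k_out: tissue_curve(tt, k_in, k_out, c_in)
        (k_in, k_out), _ = curve_fit(model, t, c_tissue, p0=(0.5, 0.1),
                                     bounds=(0.0, np.inf))
        return k_in, k_out

    # Synthetic example with a bolus-like input function (values assumed)
    t = np.arange(0.0, 300.0, 1.0)                   # s
    c_in = t * np.exp(-t / 30.0)
    c_meas = tissue_curve(t, 0.4, 0.05, c_in)
    c_meas += 0.01 * c_meas.max() * np.random.default_rng(2).normal(size=t.size)
    k_in, k_out = fit_exchange_rates(t, c_in, c_meas)
    ```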

  13. Rayleigh, Compton and K-shell radiative resonant Raman scattering in 83Bi for 88.034 keV γ-rays

    NASA Astrophysics Data System (ADS)

    Kumar, Sanjeev; Sharma, Veena; Mehta, D.; Singh, Nirmal

    2007-11-01

    The Rayleigh, Compton and K-shell radiative resonant Raman scattering cross-sections for the 88.034 keV γ-rays have been measured in the 83Bi (K-shell binding energy = 90.526 keV) element. The measurements have been performed at 130° scattering angle using reflection-mode geometrical arrangement involving the 109Cd radioisotope as photon source and an LEGe detector. Computer simulations were exercised to determine distributions of the incident and emission angles, which were further used in evaluation of the absorption corrections for the incident and emitted photons in the target. The measured cross-sections for the Rayleigh scattering are compared with the modified form-factors (MFs) corrected for the anomalous-scattering factors (ASFs) and the S-matrix calculations; and those for the Compton scattering are compared with the Klein-Nishina cross-sections corrected for the non-relativistic Hartree-Fock incoherent scattering function S(x, Z). The ratios of the measured KL2, KL3, KM and KN2,3 radiative resonant Raman scattering cross-sections are found to be in general agreement with those of the corresponding measured fluorescence transition probabilities.
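
    The Compton comparison described above rests on the Klein-Nishina cross-section scaled by the incoherent scattering function S(x, Z). A minimal sketch is given below; the S(x, Z) value in the example is a placeholder to be replaced by an interpolated tabulated value, not a number quoted from the paper.

    ```python
    import numpy as np

    R_E = 2.8179403262e-13      # classical electron radius, cm
    MEC2 = 510.99895            # electron rest energy, keV

    def klein_nishina(energy_kev, theta_rad):
        """Klein-Nishina differential cross-section per electron, dsigma/dOmega
        in cm^2/sr, for a free electron at rest."""
        k = energy_kev / MEC2
        ratio = 1.0 / (1.0 + k * (1.0 - np.cos(theta_rad)))   # E'/E
        return 0.5 * R_E ** 2 * ratio ** 2 * (
            ratio + 1.0 / ratio - np.sin(theta_rad) ** 2)

    def incoherent_cross_section(energy_kev, theta_rad, s_xz):
        """Klein-Nishina cross-section corrected with a tabulated incoherent
        scattering function value S(x, Z) supplied by the caller."""
        return klein_nishina(energy_kev, theta_rad) * s_xz

    # Example: 88.034 keV photons scattered through 130 degrees; the S(x, Z)
    # value below is a placeholder, not a tabulated number for Bi.
    dcs = incoherent_cross_section(88.034, np.radians(130.0), s_xz=75.0)
    ```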

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prior, P; Timmins, R; Wells, R G

    Dual isotope SPECT allows simultaneous measurement of two different tracers in vivo. With In111 (emission energies of 171 keV and 245 keV) and Tc99m (140 keV), quantification of Tc99m is degraded by cross talk from the In111 photons that scatter and are detected at an energy corresponding to Tc99m. TEW uses counts recorded in two narrow windows surrounding the Tc99m primary window to estimate scatter. Iterative TEW corrects for the bias introduced into the TEW estimate resulting from un-scattered counts detected in the scatter windows. The contamination in the scatter windows is iteratively estimated and subtracted as a fraction of the scatter-corrected primary window counts. The iterative TEW approach was validated with a small-animal SPECT/CT camera using a 2.5 mL plastic container holding thoroughly mixed Tc99m/In111 activity fractions of 0.15, 0.28, 0.52, 0.99, 2.47 and 6.90. Dose calibrator measurements were the gold standard. Uncorrected for scatter, the Tc99m activity was over-estimated by as much as 80%. Unmodified TEW underestimated the Tc99m activity by 13%. With iterative TEW corrections applied in projection space, the Tc99m activity was estimated within 5% of truth across all activity fractions above 0.15. This is an improvement over the non-iterative TEW, which could not sufficiently correct for scatter in the 0.15 and 0.28 phantoms.
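
    The iterative debiasing of the scatter windows can be sketched as below: the photopeak is scatter-corrected with the standard TEW estimate, an assumed unscattered fraction of that corrected primary is removed from each scatter window, and the estimate is repeated. The spill-over fractions f_left and f_right are assumptions in this sketch; in practice they would be derived from the measured energy response.

    ```python
    import numpy as np

    def tew_scatter(c_left, c_right, w_left, w_right, w_peak):
        """Standard triple-energy-window estimate of scatter in the photopeak
        window (trapezoidal approximation)."""
        return 0.5 * (c_left / w_left + c_right / w_right) * w_peak

    def iterative_tew(c_peak, c_left, c_right, w_left, w_right, w_peak,
                      f_left, f_right, n_iter=5):
        """Iterative TEW: debias the scatter-window counts by subtracting an
        assumed fraction of the scatter-corrected photopeak counts that spill
        into each narrow window as unscattered photons."""
        left = np.asarray(c_left, dtype=float)
        right = np.asarray(c_right, dtype=float)
        for _ in range(n_iter):
            scatter = tew_scatter(left, right, w_left, w_right, w_peak)
            primary = np.clip(c_peak - scatter, 0.0, None)
            left = np.clip(c_left - f_left * primary, 0.0, None)
            right = np.clip(c_right - f_right * primary, 0.0, None)
        scatter = tew_scatter(left, right, w_left, w_right, w_peak)
        return np.clip(c_peak - scatter, 0.0, None)
    ```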

  15. Unsupervised Classification of PolSAR Data Using a Scattering Similarity Measure Derived From a Geodesic Distance

    NASA Astrophysics Data System (ADS)

    Ratha, Debanshu; Bhattacharya, Avik; Frery, Alejandro C.

    2018-01-01

    In this letter, we propose a novel technique for obtaining scattering components from Polarimetric Synthetic Aperture Radar (PolSAR) data using the geodesic distance on the unit sphere. This geodesic distance is obtained between an elementary target and the observed Kennaugh matrix, and it is further utilized to compute a similarity measure between scattering mechanisms. The normalized similarity measure for each elementary target is then modulated with the total scattering power (Span). This measure is used to categorize pixels into three categories, i.e., odd-bounce, double-bounce and volume, depending on which of the above scattering mechanisms dominates. Then the maximum likelihood classifier of [J.-S. Lee, M. R. Grunes, E. Pottier, and L. Ferro-Famil, Unsupervised terrain classification preserving polarimetric scattering characteristics, IEEE Trans. Geos. Rem. Sens., vol. 42, no. 4, pp. 722-731, April 2004] based on the complex Wishart distribution is iteratively used for each category. Dominant scattering mechanisms are thus preserved in this classification scheme. We show results for L-band AIRSAR and ALOS-2 datasets acquired over San Francisco and Mumbai, respectively. The scattering mechanisms are better preserved using the proposed methodology than the unsupervised classification results using the Freeman-Durden scattering powers on an orientation angle (OA) corrected PolSAR image. Furthermore, (1) the scattering similarity is a completely non-negative quantity, unlike the negative powers that might occur in the double-bounce and odd-bounce scattering components under the Freeman-Durden decomposition (FDD), and (2) the methodology can be extended to more canonical targets as well as to bistatic scattering.
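
    One plausible reading of the geodesic-distance construction, a normalised Frobenius inner product between Kennaugh matrices mapped to an angle on the unit sphere, is sketched below. The exact definition and the elementary-target matrices should be taken from the paper; the matrices used in the example are purely illustrative.

    ```python
    import numpy as np

    def geodesic_distance(k1, k2):
        """Geodesic distance on the unit sphere between two 4x4 Kennaugh
        matrices, based on the normalised Frobenius inner product."""
        num = np.sum(k1 * k2)
        den = np.linalg.norm(k1) * np.linalg.norm(k2)
        return (2.0 / np.pi) * np.arccos(np.clip(num / den, -1.0, 1.0))

    def scattering_similarity(k_obs, k_elem, span):
        """Span-weighted similarity of an observed Kennaugh matrix to an
        elementary target, defined here as the complement of the distance."""
        return span * (1.0 - geodesic_distance(k_obs, k_elem))

    # Illustrative matrices only (not the canonical targets from the paper)
    k_a = np.diag([1.0, 0.6, 0.5, -0.4])
    k_b = np.diag([1.0, 0.9, 0.8, -0.7])
    sim = scattering_similarity(k_a, k_b, span=1.0)
    ```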

  16. Library based x-ray scatter correction for dedicated cone beam breast CT

    PubMed Central

    Shi, Linxi; Karellas, Andrew; Zhu, Lei

    2016-01-01

    Purpose: The image quality of dedicated cone beam breast CT (CBBCT) is limited by substantial scatter contamination, resulting in cupping artifacts and contrast-loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose a library-based software approach to suppress scatter on CBBCT images with high efficiency, accuracy, and reliability. Methods: The authors precompute a scatter library on simplified breast models with different sizes using the geant4-based Monte Carlo (MC) toolkit. The breast is approximated as a semiellipsoid with homogeneous glandular/adipose tissue mixture. For scatter correction on real clinical data, the authors estimate the breast size from a first-pass breast CT reconstruction and then select the corresponding scatter distribution from the library. The selected scatter distribution from simplified breast models is spatially translated to match the projection data from the clinical scan and is subtracted from the measured projection for effective scatter correction. The method performance was evaluated using 15 sets of patient data, with a wide range of breast sizes representing about 95% of the general population. Spatial nonuniformity (SNU) and contrast to signal deviation ratio (CDR) were used as metrics for evaluation. Results: Since the time-consuming MC simulation for library generation is precomputed, the authors’ method efficiently corrects for scatter with minimal processing time. Furthermore, the authors find that a scatter library on a simple breast model with only one input parameter, i.e., the breast diameter, sufficiently guarantees improvements in SNU and CDR. For the 15 clinical datasets, the authors’ method reduces the average SNU from 7.14% to 2.47% in coronal views and from 10.14% to 3.02% in sagittal views. On average, the CDR is improved by a factor of 1.49 in coronal views and 2.12 in sagittal views. Conclusions: The library-based scatter correction does not require an increase in radiation dose or hardware modifications, and it improves over the existing methods on implementation simplicity and computational efficiency. As demonstrated through patient studies, the authors’ approach is effective and stable, and is therefore clinically attractive for CBBCT imaging. PMID:27487870
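
    The correction step itself reduces to a lookup and a subtraction once the library exists. Below is a minimal sketch, assuming a library keyed by breast diameter and a precomputed alignment shift; the Monte Carlo generation of the library and the diameter estimation from the first-pass reconstruction are not shown.

    ```python
    import numpy as np

    def select_scatter_map(scatter_library, breast_diameter_cm):
        """Pick the precomputed scatter distribution whose model breast
        diameter is closest to the estimated diameter.
        scatter_library: {diameter_cm: 2-D scatter map}."""
        best = min(scatter_library, key=lambda d: abs(d - breast_diameter_cm))
        return scatter_library[best]

    def correct_projection(measured, scatter_map, shift_px=(0, 0)):
        """Translate the library scatter map to roughly match the clinical
        projection and subtract it, clipping negative values."""
        aligned = np.roll(scatter_map, shift=shift_px, axis=(0, 1))
        return np.clip(measured - aligned, 0.0, None)

    # Hypothetical library keyed by breast diameter in cm
    library = {10.0: np.full((96, 128), 0.02),
               12.0: np.full((96, 128), 0.03),
               14.0: np.full((96, 128), 0.04)}
    proj = np.random.default_rng(3).uniform(0.05, 1.0, size=(96, 128))
    corrected = correct_projection(proj, select_scatter_map(library, 12.6))
    ```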

  17. Forward scattering effects on muon imaging

    NASA Astrophysics Data System (ADS)

    Gómez, H.; Gibert, D.; Goy, C.; Jourde, K.; Karyotakis, Y.; Katsanevas, S.; Marteau, J.; Rosas-Carbajal, M.; Tonazzo, A.

    2017-12-01

    Muon imaging is one of the most promising non-invasive techniques for density structure scanning, especially for large objects reaching the kilometre scale. It already has interesting applications in different fields like geophysics or nuclear safety and has been proposed for some others like engineering or archaeology. One of the approaches of this technique is based on the well-known radiography principle, by reconstructing the incident direction of the detected muons after crossing the studied objects. In this case, muons detected after a previous forward scattering on the object surface represent an irreducible background noise, leading to a bias in the measurement and consequently in the reconstruction of the object mean density. Therefore, a prior characterization of this effect represents valuable information to conveniently correct the obtained results. Although the muon scattering process has already been theoretically described, a general study of this process has been carried out based on Monte Carlo simulations, resulting in a versatile tool to evaluate this effect for different object geometries and compositions. As an example, these simulations have been used to evaluate the impact of forward-scattered muons on two different applications of muon imaging: archaeology and volcanology, revealing a significant impact in the latter case. The general way in which all the tools have been developed allows equivalent studies to be made in the future for other muon imaging applications following the same procedure.

  18. Quantitative X-ray mapping, scatter diagrams and the generation of correction maps to obtain more information about your material

    NASA Astrophysics Data System (ADS)

    Wuhrer, R.; Moran, K.

    2014-03-01

    Quantitative X-ray mapping with silicon drift detectors and multi-EDS detector systems has become an invaluable analysis technique and one of the most useful methods of X-ray microanalysis today. The time to perform an X-ray map has reduced considerably with the ability to map minor and trace elements very accurately due to the larger detector area and higher count rate detectors. Live X-ray imaging can now be performed with a significant amount of data collected in a matter of minutes. A great deal of information can be obtained from X-ray maps. This includes elemental relationship or scatter diagram creation, elemental ratio mapping, chemical phase mapping (CPM) and quantitative X-ray maps. In obtaining quantitative X-ray maps, we are able to easily generate atomic number (Z), absorption (A), fluorescence (F), theoretical backscatter coefficient (η), and quantitative total maps from each pixel in the image. This allows us to generate an image corresponding to each factor (for each element present). These images allow the user to predict and verify where problems are likely to occur in their images, and are especially helpful for examining possible interface artefacts. The post-processing techniques used to improve the quantitation of X-ray map data and to achieve improved characterisation are covered in this paper.

  19. Higher Order Heavy Quark Corrections to Deep-Inelastic Scattering

    NASA Astrophysics Data System (ADS)

    Blümlein, Johannes; DeFreitas, Abilio; Schneider, Carsten

    2015-04-01

    The 3-loop heavy flavor corrections to deep-inelastic scattering are essential for consistent next-to-next-to-leading order QCD analyses. We report on the present status of the calculation of these corrections at large virtualities Q2. We also describe a series of mathematical, computer-algebraic and combinatorial methods and special function spaces, needed to perform these calculations. Finally, we briefly discuss the status of measuring αs (MZ), the charm quark mass mc, and the parton distribution functions at next-to-next-to-leading order from the world precision data on deep-inelastic scattering.

  20. Projection correlation based view interpolation for cone beam CT: primary fluence restoration in scatter measurement with a moving beam stop array.

    PubMed

    Yan, Hao; Mou, Xuanqin; Tang, Shaojie; Xu, Qiong; Zankl, Maria

    2010-11-07

    Scatter correction is an open problem in x-ray cone beam (CB) CT. The measurement of scatter intensity with a moving beam stop array (BSA) is a promising technique that offers a low patient dose and accurate scatter measurement. However, when restoring the blocked primary fluence behind the BSA, spatial interpolation cannot well restore the high-frequency part, causing streaks in the reconstructed image. To address this problem, we deduce a projection correlation (PC) to utilize the redundancy (over-determined information) in neighbouring CB views. PC indicates that the main high-frequency information is contained in neighbouring angular projections, instead of the current projection itself, which provides a guiding principle that applies to high-frequency information restoration. On this basis, we present the projection correlation based view interpolation (PC-VI) algorithm and validate that it outperforms spatial interpolation alone. The PC-VI based moving BSA method is developed. In this method, PC-VI is employed instead of spatial interpolation, and new moving modes are designed, which greatly improve the performance of the moving BSA method in terms of reliability and practicability. Evaluation is made on a high-resolution voxel-based human phantom, realistically including the entire procedure of scatter measurement with a moving BSA, which is simulated by analytical ray-tracing plus Monte Carlo simulation with EGSnrc. With the proposed method, we get visually artefact-free images approaching the ideal correction. Compared with the spatial interpolation based method, the relative mean square error is reduced by a factor of 6.05-15.94 for different slices. PC-VI does well in CB redundancy mining; therefore, it has further potential in CBCT studies.
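
    A minimal form of the idea, filling blocked pixels from the two neighbouring angular views, is sketched below. This is plain angular averaging rather than the full PC-VI algorithm, and it assumes the moving modes guarantee that a pixel blocked in one view is open in its neighbours.

    ```python
    import numpy as np

    def restore_blocked_pixels(projections, blocked_masks):
        """Fill detector pixels shadowed by the moving beam-stop array using
        the two neighbouring angular views.
        projections, blocked_masks: arrays of shape (n_views, rows, cols)."""
        restored = projections.astype(float).copy()
        n_views = projections.shape[0]
        for i in range(n_views):
            neighbour_mean = 0.5 * (projections[(i - 1) % n_views]
                                    + projections[(i + 1) % n_views])
            mask = blocked_masks[i]
            restored[i][mask] = neighbour_mean[mask]
        return restored
    ```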

  1. The estimation of pointing angle and normalized surface scattering cross section from GEOS-3 radar altimeter measurements

    NASA Technical Reports Server (NTRS)

    Brown, G. S.; Curry, W. J.

    1977-01-01

    The statistical error of the pointing angle estimation technique is determined as a function of the effective receiver signal-to-noise ratio. Other sources of error are addressed and evaluated, with inadequate calibration being of major concern. The impact of pointing error on the computation of the normalized surface scattering cross section (sigma) from radar data and on the waveform attitude-induced altitude bias is considered, and quantitative results are presented. Pointing angle and sigma processing algorithms are presented along with some initial data. The intensive mode clean vs. clutter AGC calibration problem is analytically resolved. The use of clutter AGC data in the intensive mode is confirmed as the correct calibration set for the sigma computations.

  2. Characterization and correction of cupping effect artefacts in cone beam CT

    PubMed Central

    Hunter, AK; McDavid, WD

    2012-01-01

    Objective The purpose of this study was to demonstrate and correct the cupping effect artefact that occurs owing to the presence of beam hardening and scatter radiation during image acquisition in cone beam CT (CBCT). Methods A uniform aluminium cylinder (6061) was used to demonstrate the cupping effect artefact on the Planmeca Promax 3D CBCT unit (Planmeca OY, Helsinki, Finland). The cupping effect was studied using a line profile plot of the grey level values using ImageJ software (National Institutes of Health, Bethesda, MD). A hardware-based correction method using copper pre-filtration was used to address this artefact caused by beam hardening and a software-based subtraction algorithm was used to address scatter contamination. Results The hardware-based correction used to address the effects of beam hardening suppressed the cupping effect artefact but did not eliminate it. The software-based correction used to address the effects of scatter resulted in elimination of the cupping effect artefact. Conclusion Compensating for the presence of beam hardening and scatter radiation improves grey level uniformity in CBCT. PMID:22378754
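
    The line-profile assessment of cupping described above can be reduced to a single number, for example the relative drop of the central grey value against the edge value along a central profile. Below is a minimal sketch, assuming the profile is restricted to pixels inside the phantom; the 10-pixel edge and centre windows are arbitrary choices.

    ```python
    import numpy as np

    def cupping_percent(slice_2d, interior_mask):
        """Quantify cupping from a horizontal grey-level profile through the
        centre of a reconstructed slice of a uniform cylinder; interior_mask
        restricts the profile to pixels inside the phantom."""
        row = slice_2d.shape[0] // 2
        profile = slice_2d[row, interior_mask[row]]
        edge = 0.5 * (profile[:10].mean() + profile[-10:].mean())
        centre = profile[profile.size // 2 - 5: profile.size // 2 + 5].mean()
        return 100.0 * (edge - centre) / edge
    ```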

  3. Review of quantitative ultrasound: envelope statistics and backscatter coefficient imaging and contributions to diagnostic ultrasound

    PubMed Central

    Oelze, Michael L.; Mamou, Jonathan

    2017-01-01

    Conventional medical imaging technologies, including ultrasound, have continued to improve over the years. For example, in oncology, medical imaging is characterized by high sensitivity, i.e., the ability to detect anomalous tissue features, but the ability to classify these tissue features from images often lacks specificity. As a result, a large number of biopsies of tissues with suspicious image findings are performed each year with a vast majority of these biopsies resulting in a negative finding. To improve specificity of cancer imaging, quantitative imaging techniques can play an important role. Conventional ultrasound B-mode imaging is mainly qualitative in nature. However, quantitative ultrasound (QUS) imaging can provide specific numbers related to tissue features that can increase the specificity of image findings leading to improvements in diagnostic ultrasound. QUS imaging techniques can encompass a wide variety of techniques including spectral-based parameterization, elastography, shear wave imaging, flow estimation and envelope statistics. Currently, spectral-based parameterization and envelope statistics are not available on most conventional clinical ultrasound machines. However, in recent years QUS techniques involving spectral-based parameterization and envelope statistics have demonstrated success in many applications, providing additional diagnostic capabilities. Spectral-based techniques include the estimation of the backscatter coefficient, estimation of attenuation, and estimation of scatterer properties such as the correlation length associated with an effective scatterer diameter and the effective acoustic concentration of scatterers. Envelope statistics include the estimation of the number density of scatterers and quantification of coherent to incoherent signals produced from the tissue. Challenges for clinical application include correctly accounting for attenuation effects and transmission losses and implementation of QUS on clinical devices. Successful clinical and pre-clinical applications demonstrating the ability of QUS to improve medical diagnostics include characterization of the myocardium during the cardiac cycle, cancer detection, classification of solid tumors and lymph nodes, detection and quantification of fatty liver disease, and monitoring and assessment of therapy. PMID:26761606

  4. Multiple scattering corrections to the Beer-Lambert law. 1: Open detector.

    PubMed

    Tam, W G; Zardecki, A

    1982-07-01

    Multiple scattering corrections to the Beer-Lambert law are analyzed by means of a rigorous small-angle solution to the radiative transfer equation. Transmission functions for predicting the received radiant power (a directly measured quantity, in contrast to the spectral radiance in the Beer-Lambert law) are derived. Numerical algorithms and results relating to the multiple scattering effects for laser propagation in fog, cloud, and rain are presented.

  5. Method for measuring multiple scattering corrections between liquid scintillators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verbeke, J. M.; Glenn, A. M.; Keefer, G. J.

    2016-04-11

    In this study, a time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source, for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons multiple scattering. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.

  6. An investigation of light transport through scattering bodies with non-scattering regions.

    PubMed

    Firbank, M; Arridge, S R; Schweiger, M; Delpy, D T

    1996-04-01

    Near-infra-red (NIR) spectroscopy is increasingly being used for monitoring cerebral oxygenation and haemodynamics. One current concern is the effect of the clear cerebrospinal fluid upon the distribution of light in the head. There are difficulties in modelling clear layers in scattering systems. The Monte Carlo model should handle clear regions accurately, but is too slow to be used for realistic geometries. The diffusion equation can be solved quickly for realistic geometries, but is only valid in scattering regions. In this paper we describe experiments carried out on a solid slab phantom to investigate the effect of clear regions. The experimental results were compared with the different models of light propagation. We found that the presence of a clear layer had a significant effect upon the light distribution, which was modelled correctly by Monte Carlo techniques, but not by diffusion theory. A novel approach to calculating the light transport was developed, using diffusion theory to analyze the scattering regions combined with a radiosity approach to analyze the propagation through the clear region. Results from this approach were found to agree with both the Monte Carlo and experimental data.

  7. Variationally consistent approximation scheme for charge transfer

    NASA Technical Reports Server (NTRS)

    Halpern, A. M.

    1978-01-01

    The author has developed a technique for testing various charge-transfer approximation schemes for consistency with the requirements of the Kohn variational principle for the amplitude, which guarantees that the amplitude is correct to second order in the scattering wave functions. Applied to Born-type approximations for charge transfer, it allows the selection of particular groups of first-, second-, and higher-Born-type terms that obey the consistency requirement and hence yield a more reliable approximation to the amplitude.

  8. A modified TEW approach to scatter correction for In-111 and Tc-99m dual-isotope small-animal SPECT.

    PubMed

    Prior, Paul; Timmins, Rachel; Petryk, Julia; Strydhorst, Jared; Duan, Yin; Wei, Lihui; Glenn Wells, R

    2016-10-01

    In dual-isotope (Tc-99m/In-111) small-animal single-photon emission computed tomography (SPECT), quantitative accuracy of Tc-99m activity measurements is degraded due to the detection of Compton-scattered photons in the Tc-99m photopeak window, which originate from the In-111 emissions (cross talk) and from the Tc-99m emission (self-scatter). The standard triple-energy window (TEW) estimates the total scatter (self-scatter and cross talk) using one scatter window on either side of the Tc-99m photopeak window, but the estimate is biased due to the presence of unscattered photons in the scatter windows. The authors present a modified TEW method to correct for total scatter that compensates for this bias and evaluate the method in phantoms and in vivo. The number of unscattered Tc-99m and In-111 photons present in each scatter-window projection is estimated based on the number of photons detected in the photopeak of each isotope, using the isotope-dependent energy resolution of the detector. The camera-head-specific energy resolutions for the 140 keV Tc-99m and 171 keV In-111 emissions were determined experimentally by separately sampling the energy spectra of each isotope. Each sampled spectrum was fit with a Linear + Gaussian function. The fitted Gaussian functions were integrated across each energy window to determine the proportion of unscattered photons from each emission detected in the scatter windows. The method was first tested and compared to the standard TEW in phantoms containing Tc-99m:In-111 activity ratios between 0.15 and 6.90. True activities were determined using a dose calibrator, and SPECT activities were estimated from CT-attenuation-corrected images with and without scatter-correction. The method was then tested in vivo in six rats using In-111-liposome and Tc-99m-tetrofosmin to generate cross talk in the area of the myocardium. The myocardium was manually segmented using the SPECT and CT images, and partial-volume correction was performed using a template-based approach. The rat heart was counted in a well-counter to determine the true activity. In the phantoms without correction for Compton-scatter, Tc-99m activity quantification errors as high as 85% were observed. The standard TEW method quantified Tc-99m activity with an average accuracy of -9.0% ± 0.7%, while the modified TEW was accurate within 5% of truth in phantoms with Tc-99m:In-111 activity ratios ≥0.52. Without scatter-correction, In-111 activity was quantified with an average accuracy of 4.1%, and there was no dependence of accuracy on the activity ratio. In rat myocardia, uncorrected images were overestimated by an average of 23% ± 5%, and the standard TEW had an accuracy of -13.8% ± 1.6%, while the modified TEW yielded an accuracy of -4.0% ± 1.6%. Cross talk and self-scatter were shown to produce quantification errors in phantoms as well as in vivo. The standard TEW provided inaccurate results due to the inclusion of unscattered photons in the scatter windows. The modified TEW improved the scatter estimate and reduced the quantification errors in phantoms and in vivo.
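
    The key ingredient of the modified TEW described above, the fraction of unscattered photons that a Gaussian photopeak deposits inside a narrow scatter window, can be computed from the fitted energy resolution with an error-function integral. Below is a minimal sketch; the 10% FWHM and the window limits in the example are assumptions, not the camera-specific values measured in the study.

    ```python
    import numpy as np
    from scipy.special import erf

    def gaussian_window_fraction(peak_kev, fwhm_kev, lo_kev, hi_kev):
        """Fraction of unscattered photons from a photopeak, modelled as a
        Gaussian with the camera-specific FWHM, recorded inside the energy
        window [lo_kev, hi_kev]."""
        sigma = fwhm_kev / (2.0 * np.sqrt(2.0 * np.log(2.0)))
        cdf = lambda e: 0.5 * (1.0 + erf((e - peak_kev) / (sigma * np.sqrt(2.0))))
        return cdf(hi_kev) - cdf(lo_kev)

    # Example: Tc-99m photopeak at 140 keV with an assumed 10% FWHM and a
    # hypothetical 146-154 keV upper scatter window
    frac_tc_in_upper = gaussian_window_fraction(140.0, 14.0, 146.0, 154.0)
    ```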

  9. Correction methods for underwater turbulence degraded imaging

    NASA Astrophysics Data System (ADS)

    Kanaev, A. V.; Hou, W.; Restaino, S. R.; Matt, S.; Gładysz, S.

    2014-10-01

    The use of remote sensing techniques such as adaptive optics and image-restoration post-processing to correct for aberrations in a wavefront of light propagating through a turbulent environment has become customary in many areas, including astronomy, medical imaging, and industrial applications. Electro-optical (EO) imaging underwater has mainly concentrated on overcoming scattering effects rather than dealing with underwater turbulence. However, the effects of turbulence have a crucial impact over long image-transmission ranges and, under extreme turbulence conditions, become important over path lengths of only a few feet. Our group has developed a program that attempts to define under which circumstances the application of atmospheric remote sensing techniques could be envisioned. In our experiments we employ the NRL Rayleigh-Bénard convection tank at Stennis Space Center, MS, as a simulated turbulence environment. A 5 m long water tank is equipped with heating and cooling plates that generate a well-measured thermal gradient, which in turn produces various degrees of turbulence. An image or laser beam spot can be propagated along the tank's length, where it is distorted by the induced turbulence. In this work we report on the experimental and theoretical findings of the ongoing program. The paper introduces the experimental setup, the techniques used, and the measurements made, and describes novel methods for post-processing and correction of images degraded by underwater turbulence.

  10. SU-D-206-07: CBCT Scatter Correction Based On Rotating Collimator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, G; Feng, Z; Yin, Y

    2016-06-15

    Purpose: Scatter correction in cone-beam computed tomography (CBCT) has an obvious effect on the removal of image noise and the cup artifact and on the increase of image contrast. Several methods using a beam blocker for the estimation and subtraction of scatter have been proposed. However, mechanical inconvenience and a propensity for residual artifacts have limited further basic and clinical research. Here, we propose a rotating-collimator-based approach, in conjunction with reconstruction based on a discrete Radon transform and Tchebichef moments algorithm, to correct scatter-induced artifacts. Methods: A rotating collimator, comprising round tungsten alloy strips, was mounted on a linear actuator. The rotating collimator is divided equally into six portions. The strip spacing is uniform within each portion but staggered between portions. A step motor connected to the rotating collimator rotated the blocker around the x-ray source during the CBCT acquisition. CBCT reconstruction based on a discrete Radon transform and Tchebichef moments algorithm was then performed. Experimental studies using a water phantom and the Catphan504 phantom were carried out to evaluate the performance of the proposed scheme. Results: The proposed algorithm was tested on both Monte Carlo simulations and actual experiments with the Catphan504 phantom. In the simulation, the mean square error of the reconstruction decreases from 16% to 1.18%, the cupping (τcup) from 14.005% to 0.66%, and the peak signal-to-noise ratio increases from 16.9594 to 31.45. In the actual experiments, the induced visual artifacts are significantly reduced. Conclusion: We conducted an experiment on a CBCT imaging system with a rotating collimator to develop and optimize an x-ray scatter control and reduction technique. The proposed method is attractive in applications where high CBCT image quality is critical, for example, dose calculation in adaptive radiation therapy. We want to thank Dr. Lei Xing and Dr. Yong Yang at the Stanford University School of Medicine for this work. This work was jointly supported by NSFC (61471226), the Natural Science Foundation for Distinguished Young Scholars of Shandong Province (JQ201516), and the China Postdoctoral Science Foundation (2015T80739, 2014M551949).
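
    For orientation, the generic beam-blocker idea underlying such designs can be sketched in a few lines: pixels in the blocker shadow record (nearly) scatter only, and those samples are interpolated to a full-detector scatter map that is subtracted from the open-field projection. The sketch below is that generic step only, not the paper's rotating-collimator acquisition or its discrete Radon transform and Tchebichef moments reconstruction; the mask layout and smooth scatter field are fabricated for illustration.

      import numpy as np
      from scipy.interpolate import griddata

      def scatter_from_blocker(proj, blocked_mask):
          """Interpolate scatter sampled inside blocker shadows to a full-detector scatter map."""
          ys, xs = np.nonzero(blocked_mask)
          samples = proj[ys, xs]                            # behind the blocker, only scatter reaches
          grid_y, grid_x = np.mgrid[0:proj.shape[0], 0:proj.shape[1]]
          scatter = griddata((ys, xs), samples, (grid_y, grid_x), method="linear")
          nearest = griddata((ys, xs), samples, (grid_y, grid_x), method="nearest")
          return np.where(np.isnan(scatter), nearest, scatter)   # fill borders linear interp misses

      def correct(proj, blocked_mask, floor=1e-3):
          return np.maximum(proj - scatter_from_blocker(proj, blocked_mask), floor)

      # Toy example (fabricated): smooth scatter plus primary, sampled through a sparse blocked grid.
      rng = np.random.default_rng(4)
      primary = rng.uniform(0.5, 1.0, (120, 160))
      scatter_true = np.outer(np.hanning(120), np.hanning(160)) * 0.4
      proj = primary + scatter_true
      mask = np.zeros_like(proj, dtype=bool); mask[::10, ::10] = True
      proj_blocked = np.where(mask, scatter_true, proj)     # blocked pixels record scatter only
      print(float(np.abs(correct(proj_blocked, mask) - primary)[~mask].mean()))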

  11. Characterization of Scattered X-Ray Photons in Dental Cone-Beam Computed Tomography.

    PubMed

    Yang, Ching-Ching

    2016-01-01

    Scatter is an important artifact-causing factor in dental cone-beam CT (CBCT) and has a major influence on the detectability of details within images. This work aimed to improve the image quality of dental CBCT through scatter correction. Scatter was estimated in the projection domain from the low-frequency component of the difference between the raw CBCT projection and the projection obtained by extrapolating a model fitted to raw projections acquired with two different sizes of axial field-of-view (FOV). The function used for curve fitting was optimized by Monte Carlo simulation. To validate the proposed method, an anthropomorphic phantom and a water-filled cylindrical phantom with rod inserts simulating different tissue materials were scanned at 120 kVp and 5 mA with a 9-second scanning time, covering axial FOVs of 4 cm and 13 cm. The detectability of the CT image was evaluated by calculating the contrast-to-noise ratio (CNR). Beam hardening and cupping artifacts were observed in CBCT images without scatter correction, especially in those acquired with the 13 cm FOV. These artifacts were reduced in CBCT images corrected by the proposed method, demonstrating its efficacy for scatter correction. After scatter correction, the image quality of CBCT was improved in terms of target detectability, quantified as the CNR for the rod inserts in the cylindrical phantom. The calculations performed in this work can provide a route to a high level of diagnostic image quality for CBCT imaging of oral and maxillofacial structures whilst keeping patient dose as low as reasonably achievable, which may ultimately make the CBCT scan a reliable and safe tool in clinical practice.
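
    The working assumption that scatter varies slowly across the detector can be sketched as follows; here proj_fit stands in for the projection extrapolated from the fitted two-FOV model (not reproduced), and the Gaussian kernel width is an arbitrary illustrative choice.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def estimate_scatter(proj_raw, proj_fit, sigma_px=20.0):
          """Scatter estimate = low-frequency part of (raw - extrapolated) projection."""
          diff = proj_raw - proj_fit
          return gaussian_filter(diff, sigma=sigma_px)   # keep only the slowly varying component

      def correct_projection(proj_raw, proj_fit, sigma_px=20.0, floor=1e-3):
          scatter = estimate_scatter(proj_raw, proj_fit, sigma_px)
          return np.maximum(proj_raw - scatter, floor)   # avoid negative intensities before the log

      # Toy example (fabricated data): smooth scatter added on top of a "true" projection.
      rng = np.random.default_rng(0)
      true_proj = rng.uniform(0.4, 1.0, size=(256, 256))
      scatter_true = gaussian_filter(rng.uniform(0.0, 0.3, size=(256, 256)), 40.0)
      measured = true_proj + scatter_true
      corrected = correct_projection(measured, true_proj)   # proj_fit approximated by true_proj here
      print(float(np.abs(corrected - true_proj).mean()))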

  12. Scatter correction, intermediate view estimation and dose characterization in megavoltage cone-beam CT imaging

    NASA Astrophysics Data System (ADS)

    Sramek, Benjamin Koerner

    The ability to deliver conformal dose distributions in radiation therapy through intensity modulation and the potential for tumor dose escalation to improve treatment outcome has necessitated an increase in localization accuracy of inter- and intra-fractional patient geometry. Megavoltage cone-beam CT imaging using the treatment beam and onboard electronic portal imaging device is one option currently being studied for implementation in image-guided radiation therapy. However, routine clinical use is predicated upon continued improvements in image quality and patient dose delivered during acquisition. The formal statement of hypothesis for this investigation was that the conformity of planned to delivered dose distributions in image-guided radiation therapy could be further enhanced through the application of kilovoltage scatter correction and intermediate view estimation techniques to megavoltage cone-beam CT imaging, and that normalized dose measurements could be acquired and inter-compared between multiple imaging geometries. The specific aims of this investigation were to: (1) incorporate the Feldkamp, Davis and Kress filtered backprojection algorithm into a program to reconstruct a voxelized linear attenuation coefficient dataset from a set of acquired megavoltage cone-beam CT projections, (2) characterize the effects on megavoltage cone-beam CT image quality resulting from the application of Intermediate View Interpolation and Intermediate View Reprojection techniques to limited-projection datasets, (3) incorporate the Scatter and Primary Estimation from Collimator Shadows (SPECS) algorithm into megavoltage cone-beam CT image reconstruction and determine the set of SPECS parameters which maximize image quality and quantitative accuracy, and (4) evaluate the normalized axial dose distributions received during megavoltage cone-beam CT image acquisition using radiochromic film and thermoluminescent dosimeter measurements in anthropomorphic pelvic and head and neck phantoms. The conclusions of this investigation were: (1) the implementation of intermediate view estimation techniques to megavoltage cone-beam CT produced improvements in image quality, with the largest impact occurring for smaller numbers of initially-acquired projections, (2) the SPECS scatter correction algorithm could be successfully incorporated into projection data acquired using an electronic portal imaging device during megavoltage cone-beam CT image reconstruction, (3) a large range of SPECS parameters were shown to reduce cupping artifacts as well as improve reconstruction accuracy, with application to anthropomorphic phantom geometries improving the percent difference in reconstructed electron density for soft tissue from -13.6% to -2.0%, and for cortical bone from -9.7% to 1.4%, (4) dose measurements in the anthropomorphic phantoms showed consistent agreement between planar measurements using radiochromic film and point measurements using thermoluminescent dosimeters, and (5) a comparison of normalized dose measurements acquired with radiochromic film to those calculated using multiple treatment planning systems, accelerator-detector combinations, patient geometries and accelerator outputs produced a relatively good agreement.

  13. Conjugate adaptive optics with remote focusing in multiphoton microscopy

    NASA Astrophysics Data System (ADS)

    Tao, Xiaodong; Lam, Tuwin; Zhu, Bingzhao; Li, Qinggele; Reinig, Marc R.; Kubby, Joel

    2018-02-01

    The small correction volume for conventional wavefront shaping methods limits their application in biological imaging through scattering media. In this paper, we take advantage of conjugate adaptive optics (CAO) and remote focusing (CAORF) to achieve three-dimensional (3D) scanning through a scattering layer with a single correction. Our results show that the proposed system can provide 10 times wider axial field of view compared with a conventional conjugate AO system when 16,384 segments are used on a spatial light modulator. We demonstrate two-photon imaging with CAORF through mouse skull. The fluorescent microspheres embedded under the scattering layers can be clearly observed after applying the correction.

  14. Comparison of observation level versus 24-hour average atmospheric loading corrections in VLBI analysis

    NASA Astrophysics Data System (ADS)

    MacMillan, D. S.; van Dam, T. M.

    2009-04-01

    Variations in the horizontal distribution of atmospheric mass induce displacements of the Earth's surface. Theoretical estimates of the amplitude of the surface displacement indicate that the predicted surface displacement is often large enough to be detected by current geodetic techniques. In fact, the effects of atmospheric pressure loading have been detected in Global Positioning System (GPS) coordinate time series [van Dam et al., 1994; Dong et al., 2002; Scherneck et al., 2003; Zerbini et al., 2004] and very long baseline interferometry (VLBI) coordinates [Rabbel and Schuh, 1986; Manabe et al., 1991; van Dam and Herring, 1994; Schuh et al., 2003; MacMillan and Gipson, 1994; and Petrov and Boy, 2004]. Some of these studies applied the atmospheric displacement at the observation level, while in other studies the predicted atmospheric and observed geodetic surface displacements have been averaged over 24 hours. A direct comparison of observation-level and 24-hour corrections has not been carried out for VLBI to determine if one or the other approach is superior. In this presentation, we address the following questions: 1) Is it better to correct geodetic data at the observation level rather than applying corrections averaged over 24 hours to estimated geodetic coordinates a posteriori? 2) At sub-daily periods, the atmospheric mass signal is composed of two components: a tidal component and a non-tidal component. If observation-level corrections reduce the scatter of VLBI data more than a posteriori correction, is it sufficient to model only the atmospheric tides, or must the entire atmospheric load signal be incorporated into the corrections? 3) When solutions from different geodetic techniques (or analysis centers within a technique) are combined (e.g., for ITRF2008), not all solutions may have applied atmospheric loading corrections. Are any systematic effects on the estimated TRF introduced when atmospheric loading is applied?

  15. Interplay of threshold resummation and hadron mass corrections in deep inelastic processes

    DOE PAGES

    Accardi, Alberto; Anderle, Daniele P.; Ringer, Felix

    2015-02-01

    We discuss hadron mass corrections and threshold resummation for deep-inelastic scattering (lN → l'X) and semi-inclusive annihilation (e+e- → hX) processes, and provide a prescription for how to consistently combine these two corrections while respecting all kinematic thresholds. We find an interesting interplay between threshold resummation and target mass corrections for deep-inelastic scattering at large values of Bjorken x_B. In semi-inclusive annihilation, on the contrary, the two considered corrections are relevant in different kinematic regions and do not affect each other. A detailed analysis is nonetheless of interest in light of recent high-precision data from BaBar and Belle on pion and kaon production, with which we compare our calculations. For both deep-inelastic scattering and single-inclusive annihilation, the size of the combined corrections compared to the precision of world data is shown to be large. Therefore, we conclude that these theoretical corrections are relevant for global QCD fits in order to extract precise parton distributions at large Bjorken x_B, and fragmentation functions over the whole kinematic range.

  16. Multigroup computation of the temperature-dependent Resonance Scattering Model (RSM) and its implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghrayeb, S. Z.; Ouisloumen, M.; Ougouag, A. M.

    2012-07-01

    A multi-group formulation for the exact neutron elastic scattering kernel is developed. This formulation is intended for implementation into a lattice physics code. The correct accounting for the crystal lattice effects influences the estimated values for the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. A computer program has been written to test the formulation for various nuclides. Results of the multi-group code have been verified against the correct analytic scattering kernel. In both cases neutrons were started at various energies and temperatures and the corresponding scattering kernels were tallied.

  17. The Recovery of Optical Quality after Laser Vision Correction

    PubMed Central

    Jung, Hyeong-Gi

    2013-01-01

    Purpose To evaluate the optical quality after laser in situ keratomileusis (LASIK) or serial photorefractive keratectomy (PRK) using a double-pass system and to follow the recovery of optical quality after laser vision correction. Methods This study measured the visual acuity, manifest refraction and optical quality before and one day, one week, one month, and three months after laser vision correction. Optical quality parameters including the modulation transfer function, Strehl ratio and intraocular scattering were evaluated with a double-pass system. Results This study included 51 eyes that underwent LASIK and 57 that underwent PRK. The optical quality three months post-surgery did not differ significantly between these laser vision correction techniques. Furthermore, the preoperative and postoperative optical quality did not differ significantly in either group. Optical quality recovered within one week after LASIK but took between one and three months to recover after PRK. The optical quality of patients in the PRK group seemed to recover slightly more slowly than their uncorrected distance visual acuity. Conclusions Optical quality recovers to the preoperative level after laser vision correction, so laser vision correction is efficacious for correcting myopia. The double-pass system is a useful tool for clinical assessment of optical quality. PMID:23908570

  18. Fully relativistic form factor for Thomson scattering.

    PubMed

    Palastro, J P; Ross, J S; Pollock, B; Divol, L; Froula, D H; Glenzer, S H

    2010-03-01

    We derive a fully relativistic form factor for Thomson scattering in unmagnetized plasmas valid to all orders in the normalized electron velocity, β = v/c. The form factor is compared to a previously derived expression in which only the lowest-order corrections in the electron velocity β are included [J. Sheffield (Academic Press, New York, 1975)]. The β-expansion approach is sufficient for electrostatic waves with small phase velocities, such as ion-acoustic waves, but for electron-plasma waves the phase velocities can be near luminal. At high phase velocities, the electron motion acquires relativistic corrections including the effective electron mass, the relative motion of the electrons and the electromagnetic wave, and polarization rotation. These relativistic corrections alter the scattered emission of thermal plasma waves, which manifests as changes in both the peak power and the width of the observed Thomson-scattered spectra.

  19. Diaphragm correction factors for the FAC-IR-300 free-air ionization chamber.

    PubMed

    Mohammadi, Seyed Mostafa; Tavakoli-Anbaran, Hossein

    2018-02-01

    The free-air ionization chamber FAC-IR-300, designed by the Atomic Energy Organization of Iran, is used as the primary Iranian national standard for the photon air kerma. For accurate air kerma measurements, the contribution of scattered photons to the total energy released in the collecting volume must be eliminated. One source of scattered photons is the chamber's diaphragm. In this paper, the diaphragm scattering correction factor, k_dia, and the diaphragm transmission correction factor, k_tr, are introduced. These factors represent corrections to the measured charge (or current) for the photons scattered from the diaphragm surface and the photons penetrating through the diaphragm volume, respectively. The k_dia and k_tr values were estimated by Monte Carlo simulations. The simulations were performed for mono-energetic photons in the energy range of 20-300 keV. According to the simulation results, in this energy range the k_dia values vary between 0.9997 and 0.9948, and the k_tr values decrease from 1.0000 to 0.9965. The corrections grow in significance with increasing energy of the primary photons.

  20. Reconstructive correction of aberrations in nuclear particle spectrographs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berz, M.; Joh, K.; Nolen, J.A.

    A method is presented that allows the reconstruction of trajectories in particle spectrographs and the reconstructive correction of residual aberrations that otherwise limit the resolution. Using a computed or fitted high order transfer map that describes the uncorrected aberrations of the spectrograph, it is possible to calculate a map via an analytic recursion relation that allows the computation of the corrected data of interest such as reaction energy and scattering angle as well as the reconstructed trajectories in terms of position measurements in two planes near the focal plane. The technique is only limited by the accuracy of the position measurements, the incoherent spot sizes, and the accuracy of the transfer map. In practice the method can be expressed as an inversion of a nonlinear map and implemented in the differential algebraic framework. The method is applied to correct residual aberrations in the S800 spectrograph which is under construction at the National Superconducting Cyclotron Laboratory at Michigan State University and to two other high resolution spectrographs.

  1. Regional Distribution of Forest Height and Biomass from Multisensor Data Fusion

    NASA Technical Reports Server (NTRS)

    Yu, Yifan; Saatchi, Sassan; Heath, Linda S.; LaPoint, Elizabeth; Myneni, Ranga; Knyazikhin, Yuri

    2010-01-01

    Elevation data acquired from radar interferometry at C-band from SRTM are used in data fusion techniques to estimate regional scale forest height and aboveground live biomass (AGLB) over the state of Maine. Two fusion techniques have been developed to perform post-processing and parameter estimations from four data sets: 1 arc sec National Elevation Data (NED), SRTM derived elevation (30 m), Landsat Enhanced Thematic Mapper (ETM) bands (30 m), derived vegetation index (VI) and NLCD2001 land cover map. The first fusion algorithm corrects for missing or erroneous NED data using an iterative interpolation approach and produces distribution of scattering phase centers from SRTM-NED in three dominant forest types of evergreen conifers, deciduous, and mixed stands. The second fusion technique integrates the USDA Forest Service, Forest Inventory and Analysis (FIA) ground-based plot data to develop an algorithm to transform the scattering phase centers into mean forest height and aboveground biomass. Height estimates over evergreen (R2 = 0.86, P < 0.001; RMSE = 1.1 m) and mixed forests (R2 = 0.93, P < 0.001, RMSE = 0.8 m) produced the best results. Estimates over deciduous forests were less accurate because of the winter acquisition of SRTM data and loss of scattering phase center from tree-surface interaction. We used two methods to estimate AGLB; algorithms based on direct estimation from the scattering phase center produced higher precision (R2 = 0.79, RMSE = 25 Mg/ha) than those estimated from forest height (R2 = 0.25, RMSE = 66 Mg/ha). We discuss sources of uncertainty and implications of the results in the context of mapping regional and continental scale forest biomass distribution.

  2. Simultaneous 99mTc/111In SPECT reconstruction using accelerated convolution-based forced detection Monte Carlo

    NASA Astrophysics Data System (ADS)

    Karamat, Muhammad I.; Farncombe, Troy H.

    2015-10-01

    Simultaneous multi-isotope single photon emission computed tomography (SPECT) imaging has a number of applications in cardiac, brain, and cancer imaging. The major concern, however, is the significant crosstalk contamination due to photon scatter between the different isotopes. The current study focuses on a method of crosstalk compensation between two isotopes in simultaneous dual-isotope SPECT acquisition applied to cancer imaging using 99mTc and 111In. We have developed an iterative image reconstruction technique that simulates the photon down-scatter from one isotope into the acquisition window of a second isotope. Our approach uses an accelerated Monte Carlo (MC) technique for the forward projection step of an iterative reconstruction algorithm. The MC-estimated scatter contamination of a radionuclide in a given projection view is then used to compensate for the photon contamination in the acquisition window of the other nuclide. We use a modified ordered-subset expectation maximization (OS-EM) algorithm, named simultaneous ordered-subset expectation maximization (Sim-OSEM), to perform this step. We have undertaken a number of simulation tests and phantom studies to verify this approach. The proposed reconstruction technique was also evaluated by reconstruction of experimentally acquired phantom data. Reconstruction using Sim-OSEM showed very promising results in terms of contrast recovery and uniformity of the object background compared to reconstruction methods implementing alternative scatter correction schemes (i.e., triple energy window or separately acquired projection data). In this study the evaluation is based on the quality of reconstructed images and the activity estimated using Sim-OSEM. In order to quantify the possible improvement in spatial resolution and signal-to-noise ratio (SNR) observed in this study, further simulation and experimental studies are required.

  3. Radiative corrections to elastic proton-electron scattering measured in coincidence

    NASA Astrophysics Data System (ADS)

    Gakh, G. I.; Konchatnij, M. I.; Merenkov, N. P.; Tomasi-Gustafsson, E.

    2017-05-01

    The differential cross section for elastic scattering of protons on electrons at rest is calculated, taking into account the QED radiative corrections to the leptonic part of interaction. These model-independent radiative corrections arise due to emission of the virtual and real soft and hard photons as well as to vacuum polarization. We analyze an experimental setup when both the final particles are recorded in coincidence and their energies are determined within some uncertainties. The kinematics, the cross section, and the radiative corrections are calculated and numerical results are presented.

  4. Deformation Measurement In The Hayward Fault Zone Using Partially Correlated Persistent Scatterers

    NASA Astrophysics Data System (ADS)

    Lien, J.; Zebker, H. A.

    2013-12-01

    Interferometric synthetic aperture radar (InSAR) is an effective tool for measuring temporal changes in the Earth's surface. By combining SAR phase data collected at varying times and orbit geometries, with InSAR we can produce high accuracy, wide coverage images of crustal deformation fields. Changes in the radar imaging geometry, scatterer positions, or scattering behavior between radar passes cause the measured radar return to differ, leading to a decorrelation phase term that obscures the deformation signal and prevents the use of large baseline data. Here we present a new physically-based method of modeling decorrelation from the subset of pixels with the highest intrinsic signal-to-noise ratio, the so-called persistent scatterers (PS). This more complete formulation, which includes both phase and amplitude scintillations, better describes the scattering behavior of partially correlated PS pixels and leads to a more reliable selection algorithm. The new method identifies PS pixels using maximum likelihood signal-to-clutter ratio (SCR) estimation based on the joint interferometric stack phase-amplitude distribution. Our PS selection method is unique in that it considers both phase and amplitude; accounts for correlation between all possible pairs of interferometric observations; and models the effect of spatial and temporal baselines on the stack. We use the resulting maximum likelihood SCR estimate as a criterion for PS selection. We implement the partially correlated persistent scatterer technique to analyze a stack of C-band European Remote Sensing (ERS-1/2) interferometric radar data imaging the Hayward Fault Zone from 1995 to 2000. We show that our technique achieves a better trade-off between PS pixel selection accuracy and network density compared to other PS identification methods, particularly in areas of natural terrain. We then present deformation measurements obtained by the selected PS network. Our results demonstrate that the partially correlated persistent scatterer technique can attain accurate deformation measurements even in areas that suffer decorrelation due to natural terrain. The accuracy of phase unwrapping and subsequent deformation estimation on the spatially sparse PS network depends on both pixel selection accuracy and the density of the network. We find that many additional pixels can be added to the PS list if we are able to correctly identify and add those in which the scattering mechanism exhibits partial, rather than complete, correlation across all radar scenes.

  5. Broadband true time delay for microwave signal processing, using slow light based on stimulated Brillouin scattering in optical fibers.

    PubMed

    Chin, Sanghoon; Thévenaz, Luc; Sancho, Juan; Sales, Salvador; Capmany, José; Berger, Perrine; Bourderionnet, Jérôme; Dolfi, Daniel

    2010-10-11

    We experimentally demonstrate a novel technique to process broadband microwave signals using all-optically tunable true time delay in optical fibers. The configuration to achieve true time delay consists of two main stages: a photonic RF phase shifter and slow light based on stimulated Brillouin scattering in fibers. The dispersion properties of the fibers are controlled separately at the optical carrier frequency and in the vicinity of the microwave signal bandwidth. In this way, the time delay induced within the signal bandwidth can be manipulated to act correctly as a true time delay, with proper phase compensation applied to the optical carrier. We analyzed the generated true time delay as a promising solution for feeding phased array antennas in radar systems and for developing dynamically reconfigurable microwave photonic filters.

  6. Optical characterization of pancreatic normal and tumor tissues with double integrating sphere system

    NASA Astrophysics Data System (ADS)

    Kiris, Tugba; Akbulut, Saadet; Kiris, Aysenur; Gucin, Zuhal; Karatepe, Oguzhan; Bölükbasi Ates, Gamze; Tabakoǧlu, Haşim Özgür

    2015-03-01

    In order to develop minimally invasive, fast, and precise optical diagnostic and therapeutic methods in medicine, the first step is to examine how light propagates through, scatters in, and is transmitted through the medium. To identify appropriate wavelengths, the optical properties of tissues must be determined correctly. The aim of this study is to measure the optical properties of both cancerous and normal ex-vivo pancreatic tissues. The results are compared to determine how cancerous and normal tissues respond to different wavelengths. A double-integrating-sphere system and the inverse adding-doubling (IAD) computational technique were used in the study. Absorption and reduced scattering coefficients of normal and cancerous pancreatic tissues were measured within the range of 500-650 nm. Statistically significant differences between cancerous and normal tissues were obtained at 550 nm and 630 nm for the absorption coefficients. On the other hand, no statistically significant difference was found for the scattering coefficients at any wavelength.

  7. Elastic and inelastic scattering of neutrons from 56Fe

    NASA Astrophysics Data System (ADS)

    Ramirez, Anthony Paul; McEllistrem, M. T.; Liu, S. H.; Mukhopadhyay, S.; Peters, E. E.; Yates, S. W.; Vanhoy, J. R.; Harrison, T. D.; Rice, B. G.; Thompson, B. K.; Hicks, S. F.; Howard, T. J.; Jackson, D. T.; Lenzen, P. D.; Nguyen, T. D.; Pecha, R. L.

    2015-10-01

    The differential cross sections for elastically and inelastically scattered neutrons from 56Fe have been measured at the University of Kentucky Accelerator Laboratory (www.pa.uky.edu/accelerator) for incident neutron energies between 2.0 and 8.0 MeV and for the angular range 30° to 150°. Time-of-flight techniques and pulse-shape discrimination were employed for enhancing the neutron energy spectra and for reducing background. An overview of the experimental procedures and data analysis for the conversion of neutron yields to differential cross sections will be presented. These include the determination of the energy-dependent detection efficiencies, the normalization of the measured differential cross sections, and the attenuation and multiple scattering corrections. Our results will also be compared to evaluated cross section databases and reaction model calculations using the TALYS code. This work is supported by grants from the U.S. Department of Energy-Nuclear Energy Universities Program: NU-12-KY-UK-0201-05, and the Donald A. Cowan Physics Institute at the University of Dallas.

  8. Dual-energy fluorescent x-ray computed tomography system with a pinhole design: Use of K-edge discontinuity for scatter correction

    PubMed Central

    Sasaya, Tenta; Sunaguchi, Naoki; Thet-Lwin, Thet-; Hyodo, Kazuyuki; Zeniya, Tsutomu; Takeda, Tohoru; Yuasa, Tetsuya

    2017-01-01

    We propose a pinhole-based fluorescent x-ray computed tomography (p-FXCT) system with a 2-D detector and volumetric beam that can suppress the quality deterioration caused by scatter components. In the corresponding p-FXCT technique, projections are acquired at individual incident energies just above and below the K-edge of the imaged trace element; then, reconstruction is performed based on the two sets of projections using a maximum likelihood expectation maximization algorithm that incorporates the scatter components. We constructed a p-FXCT imaging system and performed a preliminary experiment using a physical phantom and an I imaging agent. The proposed dual-energy p-FXCT improved the contrast-to-noise ratio by a factor of more than 2.5 compared to that attainable using mono-energetic p-FXCT for a 0.3 mg/ml I solution. We also imaged an excised rat’s liver infused with a Ba contrast agent to demonstrate the feasibility of imaging a biological sample. PMID:28272496
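
    For context, an ML-EM update that carries a known additive scatter term in its forward model looks roughly like the toy sketch below; it is a generic MLEM-with-background iteration, not the authors' dual-energy p-FXCT reconstruction, and the tiny system matrix is fabricated for illustration.

      import numpy as np

      def mlem_with_scatter(A, y, s, n_iter=100):
          """MLEM for y ~ Poisson(A @ x + s), with s a known additive scatter estimate."""
          x = np.ones(A.shape[1])
          sens = A.sum(axis=0)                      # sensitivity image A^T 1
          for _ in range(n_iter):
              expected = A @ x + s                  # forward model including scatter
              ratio = y / np.maximum(expected, 1e-12)
              x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
          return x

      # Toy 2-pixel object observed through a fabricated 3x2 system matrix.
      A = np.array([[0.8, 0.2],
                    [0.5, 0.5],
                    [0.1, 0.9]])
      x_true = np.array([4.0, 2.0])
      s = np.array([0.3, 0.3, 0.3])                 # known scatter in each measurement
      y = A @ x_true + s                            # noiseless data for clarity
      print(mlem_with_scatter(A, y, s))             # converges toward x_true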

  9. Multiple Volume Scattering in Random Media and Periodic Structures with Applications in Microwave Remote Sensing and Wave Functional Materials

    NASA Astrophysics Data System (ADS)

    Tan, Shurun

    The objective of my research is two-fold: to study wave scattering phenomena in dense volumetric random media and in periodic wave functional materials. For the first part, the goal is to use the microwave remote sensing technique to monitor water resources and global climate change. Towards this goal, I study the microwave scattering behavior of snow and ice sheets. For snowpack scattering, I have extended the traditional dense media radiative transfer (DMRT) approach to include cyclical corrections that give rise to backscattering enhancements, enabling the theory to model combined active and passive observations of snowpack using the same set of physical parameters. Besides DMRT, a fully coherent approach is also developed by solving Maxwell's equations directly over the entire snowpack including a bottom half space. This revolutionary new approach produces consistent scattering and emission results, and demonstrates backscattering enhancements and coherent layer effects. The birefringence in anisotropic snow layers is also analyzed by numerically solving Maxwell's equations directly. The effects of rapid density fluctuations in polar ice sheet emission in the 0.5-2.0 GHz spectrum are examined using both fully coherent and partially coherent layered media emission theories that agree with each other and are distinct from incoherent approaches. For the second part, the goal is to develop integral equation based methods to solve wave scattering in periodic structures such as photonic crystals and metamaterials that can be used for broadband simulations. Set upon the concept of modal expansion of the periodic Green's function, we have developed the method of broadband Green's function with low wavenumber extraction (BBGFL), where a low wavenumber component is extracted and results in a non-singular and fast-converging remaining part with simple wavenumber dependence. We've applied the technique to simulate band diagrams and modal solutions of periodic structures, and to construct broadband Green's functions including periodic scatterers.

  10. Experimental testing of four correction algorithms for the forward scattering spectrometer probe

    NASA Technical Reports Server (NTRS)

    Hovenac, Edward A.; Oldenburg, John R.; Lock, James A.

    1992-01-01

    Three number density correction algorithms and one size distribution correction algorithm for the Forward Scattering Spectrometer Probe (FSSP) were compared with data taken by the Phase Doppler Particle Analyzer (PDPA) and an optical number density measuring instrument (NDMI). Of the three number density correction algorithms, the one that compared best to the PDPA and NDMI data was the algorithm developed by Baumgardner, Strapp, and Dye (1985). The sizing-error correction algorithm for the FSSP developed by Lock and Hovenac (1989) was shown to be within 25 percent of the Phase Doppler measurements at number densities as high as 3000/cc.

  11. [Development of a Striatal and Skull Phantom for Quantitative 123I-FP-CIT SPECT].

    PubMed

    Ishiguro, Masanobu; Uno, Masaki; Miyazaki, Takuma; Kataoka, Yumi; Toyama, Hiroshi; Ichihara, Takashi

    123I-labelled N-(3-fluoropropyl)-2β-carbomethoxy-3β-(4-iodophenyl)nortropane (123I-FP-CIT) single photon emission computerized tomography (SPECT) images are used for the differential diagnosis of disorders such as Parkinson's disease (PD). The specific binding ratio (SBR) is affected by scattering and attenuation in SPECT imaging, because gender and age lead to changes in skull density. It is therefore necessary to clarify and correct this influence using a phantom simulating the skull. The purpose of this study was to develop phantoms with which scattering and attenuation correction can be evaluated. Skull phantoms were prepared based on measurements of the average computed tomography (CT) value and average skull thickness of 12 males and 16 females. 123I-FP-CIT SPECT imaging of a striatal phantom was performed with these skull phantoms, reproducing normal and PD conditions. SPECT images were reconstructed with scattering and attenuation correction. The SBR with partial-volume-effect correction (SBR_act) and the conventional SBR (SBR_Bolt) were measured and compared. The striatal and skull phantoms, together with 123I-FP-CIT, were able to reproduce the normal accumulation pattern and the PD disease state, and also reproduced the influence of skull density on SPECT imaging. The error of SBR_act with respect to the true SBR was much smaller than that of SBR_Bolt. The effect of changing skull density on the SBR could be corrected by scattering and attenuation correction in 123I-FP-CIT SPECT imaging. The combination of the triple-energy-window method and CT-based attenuation correction would be the best correction method for SBR_act.
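
    For reference, the generic specific binding ratio used in 123I-FP-CIT studies compares striatal counts with a non-specific reference region; a minimal sketch is given below. The VOI masks and count levels are illustrative, and this is neither the paper's partial-volume-corrected SBR_act nor the volume-based SBR_Bolt calculation.

      import numpy as np

      def specific_binding_ratio(image, striatal_mask, reference_mask):
          """Generic SBR = (mean striatal counts - mean reference counts) / mean reference counts."""
          c_str = image[striatal_mask].mean()
          c_ref = image[reference_mask].mean()
          return (c_str - c_ref) / c_ref

      # Toy single-slice example with fabricated count levels.
      img = np.full((64, 64), 10.0)                 # non-specific background (reference level)
      img[28:36, 20:28] = 60.0                      # "striatal" uptake
      striatal = np.zeros_like(img, dtype=bool); striatal[28:36, 20:28] = True
      reference = np.zeros_like(img, dtype=bool); reference[5:15, 5:15] = True
      print(specific_binding_ratio(img, striatal, reference))   # -> 5.0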

  12. Review of Quantitative Ultrasound: Envelope Statistics and Backscatter Coefficient Imaging and Contributions to Diagnostic Ultrasound.

    PubMed

    Oelze, Michael L; Mamou, Jonathan

    2016-02-01

    Conventional medical imaging technologies, including ultrasound, have continued to improve over the years. For example, in oncology, medical imaging is characterized by high sensitivity, i.e., the ability to detect anomalous tissue features, but the ability to classify these tissue features from images often lacks specificity. As a result, a large number of biopsies of tissues with suspicious image findings are performed each year with a vast majority of these biopsies resulting in a negative finding. To improve specificity of cancer imaging, quantitative imaging techniques can play an important role. Conventional ultrasound B-mode imaging is mainly qualitative in nature. However, quantitative ultrasound (QUS) imaging can provide specific numbers related to tissue features that can increase the specificity of image findings leading to improvements in diagnostic ultrasound. QUS imaging can encompass a wide variety of techniques including spectral-based parameterization, elastography, shear wave imaging, flow estimation, and envelope statistics. Currently, spectral-based parameterization and envelope statistics are not available on most conventional clinical ultrasound machines. However, in recent years, QUS techniques involving spectral-based parameterization and envelope statistics have demonstrated success in many applications, providing additional diagnostic capabilities. Spectral-based techniques include the estimation of the backscatter coefficient (BSC), estimation of attenuation, and estimation of scatterer properties such as the correlation length associated with an effective scatterer diameter (ESD) and the effective acoustic concentration (EAC) of scatterers. Envelope statistics include the estimation of the number density of scatterers and quantification of coherent to incoherent signals produced from the tissue. Challenges for clinical application include correctly accounting for attenuation effects and transmission losses and implementation of QUS on clinical devices. Successful clinical and preclinical applications demonstrating the ability of QUS to improve medical diagnostics include characterization of the myocardium during the cardiac cycle, cancer detection, classification of solid tumors and lymph nodes, detection and quantification of fatty liver disease, and monitoring and assessment of therapy.

  13. Prediction of e± elastic scattering cross-section ratio based on phenomenological two-photon exchange corrections

    NASA Astrophysics Data System (ADS)

    Qattan, I. A.

    2017-06-01

    I present a prediction of the e± elastic scattering cross-section ratio, R_(e+e-), as determined using a new parametrization of the two-photon exchange (TPE) corrections to the electron-proton elastic scattering cross section σ_R. The extracted ratio is compared to several previous phenomenological extractions, TPE hadronic calculations, and direct measurements from the comparison of electron and positron scattering. The TPE corrections and the ratio R_(e+e-) show a clear change of sign at low Q^2, which is necessary to explain the high-Q^2 form factor discrepancy while remaining consistent with the known Q^2 → 0 limit. While my predictions are generally in good agreement with previous extractions, TPE hadronic calculations, and existing world data, including the two recent measurements from the CLAS and VEPP-3 Novosibirsk experiments, they are larger than the new OLYMPUS measurements at larger Q^2 values.
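
    For orientation, if the measured reduced cross sections for the two lepton charges are written with the TPE term entering with opposite signs (a common convention, assumed here rather than taken from the paper), the ratio follows directly from that correction:

      \sigma_R^{\,e^\mp p} \;\propto\; \sigma_{1\gamma}\bigl(1 \pm \delta_{2\gamma}\bigr),
      \qquad
      R_{e^+e^-} \;=\; \frac{\sigma_R^{\,e^+ p}}{\sigma_R^{\,e^- p}}
                 \;=\; \frac{1-\delta_{2\gamma}}{1+\delta_{2\gamma}}
                 \;\approx\; 1 - 2\,\delta_{2\gamma},

    so, under this convention, a sign change of delta_2gamma at low Q^2 corresponds to R_(e+e-) crossing unity.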

  14. Local blur analysis and phase error correction method for fringe projection profilometry systems.

    PubMed

    Rao, Li; Da, Feipeng

    2018-05-20

    We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of local blur phenomenon. Local blur caused by global light transport such as camera defocus, projector defocus, and subsurface scattering will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate the phase errors. For defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and local frontal plane assumption, experiments show that the proposed method can effectively alleviate the system errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.

  15. Superconducting fluctuations at arbitrary disorder strength

    NASA Astrophysics Data System (ADS)

    Stepanov, Nikolai A.; Skvortsov, Mikhail A.

    2018-04-01

    We study the effect of superconducting fluctuations on the conductivity of metals at arbitrary temperatures T and impurity scattering rates τ^-1. Using the standard diagrammatic technique but in the Keldysh representation, we derive the general expression for the fluctuation correction to the dc conductivity applicable for any space dimensionality and analyze it in the case of the film geometry. We observe that the usual classification in terms of the Aslamazov-Larkin, Maki-Thompson, and density-of-states diagrams is to some extent artificial since these contributions produce similar terms, which partially cancel each other. In the diffusive limit, our results fully coincide with recent calculations in the Keldysh technique. In the ballistic limit near the transition, we demonstrate the absence of a divergent term (Tτ)^2 attributed previously to the density-of-states contribution. In the ballistic limit far above the transition, the temperature-dependent part of the conductivity correction is shown to grow as Tτ/ln(T/Tc), where Tc is the critical temperature.

  16. Physics and Computational Methods for X-ray Scatter Estimation and Correction in Cone-Beam Computed Tomography

    NASA Astrophysics Data System (ADS)

    Bootsma, Gregory J.

    X-ray scatter in cone-beam computed tomography (CBCT) is known to reduce image quality by introducing image artifacts, reducing contrast, and limiting computed tomography (CT) number accuracy. The extent of the effect of x-ray scatter on CBCT image quality is determined by the shape and magnitude of the scatter distribution in the projections. A method to allay the effects of scatter is imperative to enable application of CBCT to solve a wider domain of clinical problems. The work contained herein proposes such a method. A characterization of the scatter distribution through the use of a validated Monte Carlo (MC) model is carried out. The effects of imaging parameters and compensators on the scatter distribution are investigated. The spectral frequency components of the scatter distribution in CBCT projection sets are analyzed using Fourier analysis and found to reside predominantly in the low frequency domain. The exact frequency extents of the scatter distribution are explored for different imaging configurations and patient geometries. Based on the Fourier analysis it is hypothesized that the scatter distribution can be represented by a finite sum of sine and cosine functions. The fitting of MC scatter distribution estimates enables the reduction of the MC computation time by diminishing the number of photon tracks required by over three orders of magnitude. The fitting method is incorporated into a novel scatter correction method using an algorithm that simultaneously combines multiple MC scatter simulations. Running concurrent MC simulations while simultaneously fitting the results allows for the physical accuracy and flexibility of MC methods to be maintained while enhancing the overall efficiency. CBCT projection set scatter estimates, using the algorithm, are computed on the order of 1-2 minutes instead of hours or days. Resulting scatter-corrected reconstructions show a reduction in artifacts and improvement in tissue contrast and voxel value accuracy.
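
    The low-frequency hypothesis lends itself to a simple least-squares fit of a truncated sine/cosine series to a noisy MC scatter estimate; the sketch below shows a 1D version of that idea. The series order and the synthetic data are illustrative assumptions, not the fitting algorithm developed in the thesis.

      import numpy as np

      def fourier_design_matrix(x, n_harmonics):
          """Columns: 1, cos(2*pi*k*x/L), sin(2*pi*k*x/L) for k = 1..n_harmonics."""
          L = x.max() - x.min()
          cols = [np.ones_like(x)]
          for k in range(1, n_harmonics + 1):
              cols.append(np.cos(2 * np.pi * k * x / L))
              cols.append(np.sin(2 * np.pi * k * x / L))
          return np.column_stack(cols)

      def fit_scatter_profile(x, noisy_scatter, n_harmonics=3):
          """Least-squares fit of a low-order Fourier series to a noisy MC scatter estimate."""
          M = fourier_design_matrix(x, n_harmonics)
          coeffs, *_ = np.linalg.lstsq(M, noisy_scatter, rcond=None)
          return M @ coeffs

      # Toy example: smooth "true" scatter profile sampled with few-photon MC noise.
      rng = np.random.default_rng(1)
      x = np.linspace(0.0, 1.0, 512)
      true_scatter = 0.3 + 0.1 * np.cos(2 * np.pi * x) + 0.05 * np.sin(4 * np.pi * x)
      noisy = rng.poisson(true_scatter * 50) / 50.0       # noisy low-count MC estimate
      smooth = fit_scatter_profile(x, noisy)
      print(float(np.abs(smooth - true_scatter).mean()))  # much smaller than the raw MC noise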

  17. Ambiguities in model-independent partial-wave analysis

    NASA Astrophysics Data System (ADS)

    Krinner, F.; Greenwald, D.; Ryabchikov, D.; Grube, B.; Paul, S.

    2018-06-01

    Partial-wave analysis is an important tool for analyzing large data sets in hadronic decays of light and heavy mesons. It commonly relies on the isobar model, which assumes multihadron final states originate from successive two-body decays of well-known undisturbed intermediate states. Recently, analyses of heavy-meson decays and diffractively produced states have attempted to overcome the strong model dependences of the isobar model. These analyses have overlooked that model-independent, or freed-isobar, partial-wave analysis can introduce mathematical ambiguities in results. We show how these ambiguities arise and present general techniques for identifying their presence and for correcting for them. We demonstrate these techniques with specific examples in both heavy-meson decay and pion-proton scattering.

  18. Non-cancellation of electroweak logarithms in high-energy scattering

    DOE PAGES

    Manohar, Aneesh V.; Shotwell, Brian; Bauer, Christian W.; ...

    2015-01-01

    We study electroweak Sudakov corrections in high energy scattering, and the cancellation between real and virtual Sudakov corrections. Numerical results are given for the case of heavy quark production by gluon collisions involving the rates gg → tt̄, bb̄, tb̄W, tt̄Z, bb̄Z, tt̄H, bb̄H. Gauge boson virtual corrections are related to real transverse gauge boson emission, and Higgs virtual corrections to Higgs and longitudinal gauge boson emission. At the LHC, electroweak corrections become important in the TeV regime. At the proposed 100 TeV collider, electroweak interactions enter a new regime, where the corrections are very large and need to be resummed.

  19. Computational adaptive optics for broadband optical interferometric tomography of biological tissue

    NASA Astrophysics Data System (ADS)

    Boppart, Stephen A.

    2015-03-01

    High-resolution real-time tomography of biological tissues is important for many areas of biological investigations and medical applications. Cellular-level optical tomography, however, has been challenging because of the compromise between transverse imaging resolution and depth-of-field, the system and sample aberrations that may be present, and the low imaging sensitivity deep in scattering tissues. The use of computed optical imaging techniques has the potential to address several of these long-standing limitations and challenges. Two related techniques are interferometric synthetic aperture microscopy (ISAM) and computational adaptive optics (CAO). Through three-dimensional Fourier-domain resampling, in combination with high-speed OCT, ISAM can be used to achieve high-resolution in vivo tomography in real time, with enhanced depth sensitivity over a depth-of-field extended by more than an order of magnitude. Subsequently, aberration correction with CAO can be performed on the tomogram, rather than on the optical beam of a broadband optical interferometry system. Based on principles of Fourier optics, aberration correction with CAO is performed on a virtual pupil using Zernike polynomials, offering the potential to augment or even replace the more complicated and expensive adaptive optics hardware with algorithms implemented on a standard desktop computer. Interferometric tomographic reconstructions are characterized with tissue phantoms containing sub-resolution scattering particles, and in both ex vivo and in vivo biological tissue. This review will collectively establish the foundation for high-speed volumetric cellular-level optical interferometric tomography in living tissues.
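
    In spirit, CAO multiplies the spatial-frequency spectrum of a complex en face OCT field by the conjugate of a Zernike-parameterized pupil phase. The bare-bones sketch below applies a defocus-only correction; the pupil radius, coefficient, and toy field are fabricated for illustration, and this is not the review's actual processing chain.

      import numpy as np

      def zernike_defocus(nx, ny, pupil_radius):
          """Zernike defocus term sqrt(3)*(2*rho^2 - 1) on a centred spatial-frequency grid."""
          ky, kx = np.meshgrid(np.fft.fftfreq(ny), np.fft.fftfreq(nx), indexing="ij")
          rho = np.sqrt(kx**2 + ky**2) / pupil_radius
          z = np.sqrt(3.0) * (2.0 * rho**2 - 1.0)
          return np.where(rho <= 1.0, z, 0.0)           # defined only inside the virtual pupil

      def cao_correct(field, defocus_coeff, pupil_radius=0.4):
          """Apply the conjugate aberration phase to the 2D spectrum of a complex en face field."""
          ny, nx = field.shape
          phase = defocus_coeff * zernike_defocus(nx, ny, pupil_radius)
          spectrum = np.fft.fft2(field)
          return np.fft.ifft2(spectrum * np.exp(-1j * phase))

      # Toy example: aberrate a point-like reflector, then undo the aberration computationally.
      field = np.zeros((128, 128), dtype=complex); field[64, 64] = 1.0
      aberrated = np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * 2.5 * zernike_defocus(128, 128, 0.4)))
      restored = cao_correct(aberrated, defocus_coeff=2.5)
      print(float(np.abs(restored[64, 64])))            # close to 1 after correction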

  20. Modeling boundary measurements of scattered light using the corrected diffusion approximation

    PubMed Central

    Lehtikangas, Ossi; Tarvainen, Tanja; Kim, Arnold D.

    2012-01-01

    We study the modeling and simulation of steady-state measurements of light scattered by a turbid medium taken at the boundary. In particular, we implement the recently introduced corrected diffusion approximation in two spatial dimensions to model these boundary measurements. This implementation uses expansions in plane wave solutions to compute boundary conditions and the additive boundary layer correction, and a finite element method to solve the diffusion equation. We show that this corrected diffusion approximation models boundary measurements substantially better than the standard diffusion approximation in comparison to numerical solutions of the radiative transport equation. PMID:22435102

  1. Quadratic electroweak corrections for polarized Moller scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    A. Aleksejevs, S. Barkanova, Y. Kolomensky, E. Kuraev, V. Zykunov

    2012-01-01

    The paper discusses the two-loop (NNLO) electroweak radiative corrections to the parity violating electron-electron scattering asymmetry induced by squaring one-loop diagrams. The calculations are relevant for the ultra-precise 11 GeV MOLLER experiment planned at Jefferson Laboratory and experiments at high-energy future electron colliders. The imaginary parts of the amplitudes are taken into consideration consistently in both the infrared-finite and divergent terms. The size of the obtained partial correction is significant, which indicates a need for a complete study of the two-loop electroweak radiative corrections in order to meet the precision goals of future experiments.

  2. Acoustic classification of zooplankton

    NASA Astrophysics Data System (ADS)

    Martin Traykovski, Linda V.

    1998-11-01

    Work on the forward problem in zooplankton bioacoustics has resulted in the identification of three categories of acoustic scatterers: elastic-shelled (e.g. pteropods), fluid-like (e.g. euphausiids), and gas-bearing (e.g. siphonophores). The relationship between backscattered energy and animal biomass has been shown to vary by a factor of ~19,000 across these categories, so that to make accurate estimates of zooplankton biomass from acoustic backscatter measurements of the ocean, the acoustic characteristics of the species of interest must be well-understood. This thesis describes the development of both feature based and model based classification techniques to invert broadband acoustic echoes from individual zooplankton for scatterer type, as well as for particular parameters such as animal orientation. The feature based Empirical Orthogonal Function Classifier (EOFC) discriminates scatterer types by identifying characteristic modes of variability in the echo spectra, exploiting only the inherent characteristic structure of the acoustic signatures. The model based Model Parameterisation Classifier (MPC) classifies based on correlation of observed echo spectra with simplified parameterisations of theoretical scattering models for the three classes. The Covariance Mean Variance Classifiers (CMVC) are a set of advanced model based techniques which exploit the full complexity of the theoretical models by searching the entire physical model parameter space without employing simplifying parameterisations. Three different CMVC algorithms were developed: the Integrated Score Classifier (ISC), the Pairwise Score Classifier (PSC) and the Bayesian Probability Classifier (BPC); these classifiers assign observations to a class based on similarities in covariance, mean, and variance, while accounting for model ambiguity and validity. These feature based and model based inversion techniques were successfully applied to several thousand echoes acquired from broadband (~350 kHz-750 kHz) insonifications of live zooplankton collected on Georges Bank and the Gulf of Maine to determine scatterer class. CMVC techniques were also applied to echoes from fluid-like zooplankton (Antarctic krill) to invert for angle of orientation using generic and animal-specific theoretical and empirical models. Application of these inversion techniques in situ will allow correct apportionment of backscattered energy to animal biomass, significantly improving estimates of zooplankton biomass based on acoustic surveys.

  3. Total internal reflection and dynamic light scattering microscopy of gels

    NASA Astrophysics Data System (ADS)

    Gregor, Brian F.

    Two different techniques which apply optical microscopy in novel ways to the study of biological systems and materials were built and applied to several samples. The first is a system for adapting the well-known technique of dynamic light scattering (DLS) to an optical microscope. This detects light scattered from very small volumes, as compared to standard DLS, which studies light scattering from volumes 1000x larger. The small scattering volume also allows for the observation of nonergodic dynamics in appropriate samples. Porcine gastric mucin (PGM) forms a gel at low pH which lines the epithelial cell layer and acts as a protective barrier against the acidic stomach environment. The dynamics and microscopic viscosity of PGM at different pH levels are studied using polystyrene microspheres as tracer particles. The microscopic viscosity and microrheological properties of the commercial basement membrane Matrigel are also studied with this instrument. Matrigel is frequently used to culture cells and its properties remain poorly determined. Well-characterized and purely synthetic Matrigel substitutes will need to have the correct rheological and morphological characteristics. The second instrument designed and built is a microscope which uses an interferometry technique to achieve a resolution in one dimension 2.5x better than the Abbe diffraction limit. The technique is based upon the interference of the evanescent field generated on the surface of a prism by a laser in a total internal reflection geometry. The enhanced resolution is demonstrated with fluorescent samples. Additionally, Raman imaging microscopy is demonstrated using the evanescent field in resonant and non-resonant samples, although attempts at applying the enhanced resolution technique to the Raman images were ultimately unsuccessful. Applications of this instrument include high resolution imaging of cell membranes and macroscopic structures in gels and proteins. Finally, a third section incorporating previous research on simulations of complex fluids is included. Two-dimensional simulations of oil, water, and surfactant mixtures were computed with a lattice gas method. The simulated systems were randomly mixed and then the temperature was quenched to a predetermined point. Spontaneous micellization is observed for a narrow range of temperature quenches, and the overall growth rate of macroscopic structure is found to follow a Vogel-Fulcher growth law.

  4. Library based x-ray scatter correction for dedicated cone beam breast CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Linxi; Zhu, Lei, E-mail: leizhu@gatech.edu

    Purpose: The image quality of dedicated cone beam breast CT (CBBCT) is limited by substantial scatter contamination, resulting in cupping artifacts and contrast loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose a library-based software approach to suppress scatter on CBBCT images with high efficiency, accuracy, and reliability. Methods: The authors precompute a scatter library on simplified breast models of different sizes using the GEANT4-based Monte Carlo (MC) toolkit. The breast is approximated as a semiellipsoid with a homogeneous glandular/adipose tissue mixture. For scatter correction on real clinical data, the authors estimate the breast size from a first-pass breast CT reconstruction and then select the corresponding scatter distribution from the library. The selected scatter distribution from the simplified breast models is spatially translated to match the projection data from the clinical scan and is subtracted from the measured projection for effective scatter correction. The method's performance was evaluated using 15 sets of patient data, with a wide range of breast sizes representing about 95% of the general population. Spatial nonuniformity (SNU) and contrast-to-signal-deviation ratio (CDR) were used as evaluation metrics. Results: Since the time-consuming MC simulation for library generation is precomputed, the authors' method efficiently corrects for scatter with minimal processing time. Furthermore, the authors find that a scatter library built on a simple breast model with only one input parameter, i.e., the breast diameter, is sufficient to guarantee improvements in SNU and CDR. For the 15 clinical datasets, the authors' method reduces the average SNU from 7.14% to 2.47% in coronal views and from 10.14% to 3.02% in sagittal views. On average, the CDR is improved by a factor of 1.49 in coronal views and 2.12 in sagittal views. Conclusions: The library-based scatter correction does not require an increase in radiation dose or hardware modifications, and it improves on existing methods in implementation simplicity and computational efficiency. As demonstrated through patient studies, the authors' approach is effective and stable, and is therefore clinically attractive for CBBCT imaging.
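    The correction step of this library approach reduces, per projection, to selecting the precomputed scatter map for the nearest breast size and subtracting it from the measured projection. A minimal sketch under assumed data structures follows; the Monte Carlo library generation, breast-size estimation, and spatial registration are only stood in for here.

```python
import numpy as np

def select_scatter_map(scatter_library, breast_diameter_cm):
    """Pick the precomputed scatter distribution whose model diameter is closest
    to the diameter estimated from a first-pass reconstruction."""
    diameters = np.array(sorted(scatter_library.keys()))
    nearest = diameters[np.argmin(np.abs(diameters - breast_diameter_cm))]
    return scatter_library[nearest]

def correct_projection(measured, scatter_map, shift_px=(0, 0)):
    """Translate the library scatter map onto the clinical projection and subtract it.
    A simple integer roll stands in for the spatial registration described in the paper."""
    aligned = np.roll(scatter_map, shift=shift_px, axis=(0, 1))
    # Clip to a small positive floor so the downstream log/line-integral step stays defined
    return np.clip(measured - aligned, 1e-6, None)

# Hypothetical usage: library keyed by semi-ellipsoid diameter in cm
library = {d: np.full((64, 64), 0.02 * d) for d in (8, 10, 12, 14, 16)}
projection = np.ones((64, 64))
scatter = select_scatter_map(library, breast_diameter_cm=11.3)   # picks the 12 cm model
primary_estimate = correct_projection(projection, scatter, shift_px=(3, -2))
```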

  5. WE-DE-207B-12: Scatter Correction for Dedicated Cone Beam Breast CT Based On a Forward Projection Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, L; Zhu, L; Vedantham, S

    2016-06-15

    Purpose: The image quality of dedicated cone-beam breast CT (CBBCT) is fundamentally limited by substantial x-ray scatter contamination, resulting in cupping artifacts and contrast loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose to suppress x-ray scatter in CBBCT images using a deterministic forward projection model. Method: We first use the first-pass FDK-reconstructed CBBCT images to segment fibroglandular and adipose tissue. Attenuation coefficients are assigned to the two tissues based on the x-ray spectrum used for image acquisition, and the resulting volume is forward projected to simulate scatter-free primary projections. We estimate the scatter by subtracting the simulated primary projection from the measured projection, and the resultant scatter map is further refined by a Fourier-domain fitting algorithm after discarding untrusted scatter information. The final scatter estimate is subtracted from the measured projection for effective scatter correction. In our implementation, the proposed scatter correction takes 0.5 seconds per projection. The method was evaluated using the overall image spatial nonuniformity (SNU) metric and the contrast-to-noise ratio (CNR) on 5 clinical datasets of BI-RADS 4/5 subjects. Results: For the 5 clinical datasets, our method reduced the SNU from 7.79% to 1.68% in the coronal view and from 6.71% to 3.20% in the sagittal view. The average CNR is improved by a factor of 1.38 in the coronal view and 1.26 in the sagittal view. Conclusion: The proposed scatter correction approach requires no additional scans or prior images and uses a deterministic model for efficient calculation. Evaluation with clinical datasets demonstrates the feasibility and stability of the method. These features are attractive for clinical CBBCT and make our method distinct from other approaches. Supported partly by NIH R21EB019597, R21CA134128 and R01CA195512. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
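    A rough sketch of the forward-projection scatter estimate for a single projection is given below. The two-tissue segmentation threshold, attenuation coefficients, and the Gaussian low-pass that stands in for the Fourier-domain fitting step are all illustrative assumptions, and the ray-driven projector is left as a user-supplied callable.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_scatter(measured_proj, recon_volume, forward_project,
                     mu_adipose=0.022, mu_gland=0.027, threshold=0.5, sigma_px=20):
    """One projection's scatter estimate following the forward-model idea:
    segment the first-pass reconstruction, assign attenuation coefficients,
    forward project to a scatter-free primary, and low-pass the difference.

    `forward_project` is a user-supplied ray-driven projector (volume -> line integrals);
    the two-tissue threshold and coefficients are illustrative placeholders."""
    segmented = np.where(recon_volume > threshold, mu_gland, mu_adipose)
    segmented[recon_volume <= 0] = 0.0                 # air stays empty
    primary_sim = np.exp(-forward_project(segmented))  # simulated scatter-free intensity
    scatter_raw = measured_proj - primary_sim
    # Scatter is smooth and low-frequency, so keep only the slowly varying part
    scatter_smooth = gaussian_filter(scatter_raw, sigma=sigma_px)
    return np.clip(scatter_smooth, 0.0, None)

def subtract_scatter(measured_proj, scatter_est):
    return np.clip(measured_proj - scatter_est, 1e-6, None)

# Toy usage with a parallel-beam projector along one axis (purely illustrative)
vox = 0.05  # cm
volume = np.zeros((64, 64, 64))
volume[16:48, 16:48, 16:48] = 1.0
project = lambda v: v.sum(axis=0) * vox
measured = np.exp(-project(np.where(volume > 0, 0.025, 0.0))) + 0.05  # primary + flat scatter
scatter = estimate_scatter(measured, volume, project)
corrected = subtract_scatter(measured, scatter)
```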

  6. Improved scatterer property estimates from ultrasound backscatter for small gate lengths using a gate-edge correction factor

    NASA Astrophysics Data System (ADS)

    Oelze, Michael L.; O'Brien, William D.

    2004-11-01

    Backscattered rf signals used to construct conventional ultrasound B-mode images contain frequency-dependent information that can be examined through the backscattered power spectrum. The backscattered power spectrum is found by taking the magnitude squared of the Fourier transform of a gated time segment corresponding to a region in the scattering volume. When a time segment is gated, the edges of the gated region change the frequency content of the backscattered power spectrum due to truncation of the waveform. Tapered windows, like the Hanning window, and longer gate lengths reduce the relative contribution of these gate-edge effects. A new gate-edge correction factor was developed that partially accounts for the edge effects. The gate-edge correction factor gave more accurate estimates of scatterer properties at small gate lengths compared to conventional windowing functions. It gave estimates of scatterer properties within 5% of actual values at very small gate lengths (less than 5 spatial pulse lengths) in both simulations and measurements on glass-bead phantoms. While the gate-edge correction factor gave higher accuracy at smaller gate lengths, the precision of the estimates at small gate lengths was not improved over conventional windowing functions.
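    The quantity being corrected is the power spectrum of a gated, windowed RF segment. The sketch below shows that baseline computation (gating plus a Hann taper and FFT); the gate-edge correction factor itself is not reproduced here, since its exact form is not given in the abstract.

```python
import numpy as np

def gated_power_spectrum(rf_line, fs, gate_start, gate_len, window="hann"):
    """Power spectrum of one gated RF segment, the quantity the gate-edge
    correction factor discussed above is designed to improve for short gates.

    rf_line : 1-D backscattered RF signal; fs : sampling rate (Hz);
    gate_start/gate_len : gate position and length in samples."""
    segment = rf_line[gate_start:gate_start + gate_len].astype(float)
    if window == "hann":
        segment = segment * np.hanning(gate_len)   # tapering reduces edge (truncation) effects
    spectrum = np.fft.rfft(segment)
    freqs = np.fft.rfftfreq(gate_len, d=1.0 / fs)
    return freqs, np.abs(spectrum) ** 2

# Hypothetical 40 MHz sampled RF line with a 5 MHz center frequency
fs = 40e6
t = np.arange(4096) / fs
rf = np.sin(2 * np.pi * 5e6 * t) * np.exp(-((t - 5e-5) / 2e-5) ** 2)
freqs, power = gated_power_spectrum(rf, fs, gate_start=1800, gate_len=128)
```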

  7. Generalized model screening potentials for Fermi-Dirac plasmas

    NASA Astrophysics Data System (ADS)

    Akbari-Moghanjoughi, M.

    2016-04-01

    In this paper, some properties of relativistically degenerate quantum plasmas, such as static ion screening, the structure factor, and the Thomson scattering cross-section, are studied in the framework of linearized quantum hydrodynamic theory with the newly proposed kinetic γ-correction to the Bohm term in the low-frequency limit. It is found that the correction has a significant effect on the properties of quantum plasmas in all density regimes, ranging from solid density up to that of white dwarf stars. It is also found that the Shukla-Eliasson attractive force exists up to a few times the density of metals, and the ionic correlations are apparent in the radial distribution function signature. Simplified statically screened attractive and repulsive potentials are presented for zero-temperature Fermi-Dirac plasmas, valid for a wide range of quantum plasma number-density and atomic number values. Moreover, it is observed that crystallization of white dwarfs beyond a critical core number-density persists with this new kinetic correction, but it is shifted to a much higher number-density value of n0 ≃ 1.94 × 10^37 cm^-3 (1.77 × 10^10 g cm^-3), which is nearly four orders of magnitude less than the nuclear density. It is found that maximal Thomson scattering with the γ-corrected structure factor is a remarkable property of white dwarf stars. However, with the new γ-correction, the maximal scattering shifts to the spectral region between hard X-rays and low-energy gamma-rays. White dwarfs composed of higher atomic-number ions are observed to Thomson-scatter maximally at slightly longer wavelengths, i.e., they maximally scatter slightly lower-energy photons in the presence of the correction.

  8. Quark-hadron duality constraints on γZ box corrections to parity-violating elastic scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, Nathan L.; Blunden, Peter G.; Melnitchouk, Wally

    2015-12-08

    We examine the interference γZ box corrections to parity-violating elastic electron-proton scattering in the light of the recent observation of quark-hadron duality in parity-violating deep-inelastic scattering from the deuteron, and the approximate isospin independence of duality in the electromagnetic nucleon structure functions down to Q^2 ≈ 1 GeV^2. Assuming that a similar behavior also holds for the γZ proton structure functions, we find that duality constrains the γZ box correction to the proton's weak charge to be Re □_γZ^V = (5.4 ± 0.4) × 10^-3 at the kinematics of the Qweak experiment. Within the same model we also provide estimates of the γZ corrections for future parity-violating experiments, such as MOLLER at Jefferson Lab and MESA at Mainz.

  9. Retrieval of background surface reflectance with BRD components from pre-running BRDF

    NASA Astrophysics Data System (ADS)

    Choi, Sungwon; Lee, Kyeong-Sang; Jin, Donghyun; Lee, Darae; Han, Kyung-Soo

    2016-10-01

    Many countries launch satellites to observe the Earth's surface. As the importance of surface remote sensing grows, surface reflectance has become a core parameter of land-surface climate studies. However, observing surface reflectance from a satellite has weaknesses, such as limited temporal resolution and sensitivity to view and solar angles. The bidirectional effects of surface reflectance introduce noise into the time series, and this noise can lead to errors when determining surface reflectance. To correct the bidirectional error, a correction model that normalizes the sensor data is required; the Bidirectional Reflectance Distribution Function (BRDF) provides a more accurate way to correct the scattering components (isotropic, geometric, and volumetric scattering). In this study, we apply the BRDF to retrieve surface reflectance and use a two-step procedure to retrieve the Background Surface Reflectance (BSR). The first step retrieves the Bidirectional Reflectance Distribution (BRD) coefficients: a pre-running BRDF is performed with observed SPOT/VEGETATION surface reflectance (VGT-S1) and angular data to obtain the BRD coefficients used to calculate the scattering terms. In the second step, the BRDF is applied again in the opposite direction with the BRD coefficients and angular data to retrieve the BSR. As a result, the BSR is very similar to the VGT-S1 reflectance, and the retrieved reflectance appears adequate: the highest BSR reflectance does not exceed 0.4 in the blue channel, 0.45 in the red channel, and 0.55 in the NIR channel. For validation, we compared the reflectance of clear-sky pixels identified from the SPOT/VGT status map. Comparing BSR with VGT-S1, the bias ranges from 0.0116 to 0.0158 and the RMSE from 0.0459 to 0.0545, which are reasonable results confirming that the BSR is similar to VGT-S1. A weakness of this study is the presence of missing pixels in the BSR where too few observations were available to retrieve the BRD components. If these missing pixels are filled, the BSR will better support the retrieval of surface products that depend on surface reflectance, such as cloud masking and aerosol retrieval.
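    The pre-running BRDF step described above can be read as a linear, kernel-driven least-squares fit of the isotropic, volumetric, and geometric scattering contributions, followed by running the fitted model forward at a reference geometry. The sketch below assumes precomputed Ross-Thick/Li-Sparse-style kernels and is only an illustration of that two-step idea, not the authors' processing chain.

```python
import numpy as np

def fit_brd_coefficients(reflectance, k_vol, k_geo):
    """Least-squares fit of a kernel-driven BRDF model
        R = f_iso + f_vol * K_vol + f_geo * K_geo
    to a time series of directional reflectances (the 'pre-running BRDF' step).

    reflectance, k_vol, k_geo : 1-D arrays over the observations in the
    compositing window; the kernels are assumed to be precomputed from the
    solar/view geometry."""
    design = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
    coeffs, *_ = np.linalg.lstsq(design, reflectance, rcond=None)
    return coeffs  # (f_iso, f_vol, f_geo)

def normalized_reflectance(coeffs, k_vol_ref, k_geo_ref):
    """Run the fitted model forward at a reference geometry to get a
    bidirectionally corrected (background) surface reflectance."""
    f_iso, f_vol, f_geo = coeffs
    return f_iso + f_vol * k_vol_ref + f_geo * k_geo_ref

# Hypothetical usage with synthetic kernels for ten observations
rng = np.random.default_rng(1)
k_vol, k_geo = rng.uniform(-0.2, 0.6, 10), rng.uniform(-1.0, 0.0, 10)
truth = 0.25 + 0.10 * k_vol + 0.05 * k_geo
obs = truth + 0.005 * rng.standard_normal(10)
coeffs = fit_brd_coefficients(obs, k_vol, k_geo)
print(normalized_reflectance(coeffs, k_vol_ref=0.0, k_geo_ref=0.0))  # ~0.25
```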

  10. Novel auto-correction method in a fiber-optic distributed-temperature sensor using reflected anti-Stokes Raman scattering.

    PubMed

    Hwang, Dusun; Yoon, Dong-Jin; Kwon, Il-Bum; Seo, Dae-Cheol; Chung, Youngjoo

    2010-05-10

    A novel method for the auto-correction of a fiber-optic distributed temperature sensor using anti-Stokes Raman backscattering and its end-reflected signal is presented. The method processes two parts of the measured signal: one part is the normally backscattered anti-Stokes signal, and the other is the reflected signal, which eliminates not only the effect of local losses due to micro-bending or damage to the fiber but also the differential attenuation. Because beams of the same wavelength are used to cancel out local variations in the transmission medium, there is inherently no differential attenuation. The auto-correction concept was verified by bending experiments at different bending points. (c) 2010 Optical Society of America.
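    One way to read the auto-correction is as a geometric-mean pairing of the direct anti-Stokes trace with the end-reflected portion of the trace read at its mirrored position, so that any localized loss enters both factors with the same total number of passes. The toy sketch below demonstrates that cancellation under a simple point-loss model; the pairing rule and loss model are assumptions for illustration, not the authors' exact processing.

```python
import numpy as np

def autocorrected_antistokes(direct_trace, mirrored_trace):
    """Pair the direct anti-Stokes backscatter trace at position z with the
    end-reflected (mirrored) portion of the trace read at apparent distance
    2L - z, and take their geometric mean. A loss localized at z0 then appears
    with the same combined weight at every z, so its z-dependence cancels
    (an illustrative reading of the auto-correction idea)."""
    return np.sqrt(direct_trace * mirrored_trace)

# Toy demonstration: a bend at z0 = 400 m adds a 3 dB (double-pass) local loss
L, z0 = 1000.0, 400.0
z = np.linspace(0.0, L, 1001)
alpha = 0.25e-3                          # 1/m fiber attenuation
s = 10 ** (-3 / 10)                      # double-pass bend transmission
direct = np.exp(-2 * alpha * z) * np.where(z > z0, s, 1.0)
mirrored = np.exp(-2 * alpha * (2 * L - z)) * np.where(z > z0, s, s**2)
combined = autocorrected_antistokes(direct, mirrored)
# `combined` is flat (= exp(-2*alpha*L) * s): both the bend step and the
# ordinary fiber attenuation slope drop out, leaving only temperature effects.
```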

  11. Elastic and Inelastic Scattering of Neutrons using a CLYC array

    NASA Astrophysics Data System (ADS)

    Brown, Tristan; Doucet, E.; Chowdhury, P.; Lister, C. J.; Wilson, G. L.; Devlin, M.; Mosby, S.

    2015-10-01

    CLYC scintillators, which have dual neutron and gamma response, have recently ushered in the possibility of fast neutron spectroscopy without time-of-flight (TOF). A 16-element array of 1'' x 1'' 6Li-depleted CLYC crystals, where pulse-shape-discrimination is achieved via digital pulse processing, has been commissioned at UMass Lowell. In an experiment at LANSCE, high energy neutrons were used to bombard 56Fe and 238U targets, in order to measure elastic and inelastic neutron scattering cross sections as a function of energy and angle with the array. The array is placed very close to the targets for enhanced geometrical solid angles for scattered neutrons compared to standard neutron-TOF measurements. A pulse-height spectrum of scattered neutrons in the detectors is compared to the energy of the incident neutrons, which is measured via the TOF of the pulsed neutrons from the source to the detectors. Recoil corrections are necessary to combine the energy spectra from all the detectors to obtain angle-integrated elastic and inelastic cross-sections. The detection techniques, analysis procedures and results will be presented. Supported by NNSA-SSAA program through DOE Grant DE-NA00013008.

  12. Performance studies towards a TOF-PET sensor using Compton scattering at plastic scintillators

    NASA Astrophysics Data System (ADS)

    Kuramoto, M.; Nakamori, T.; Gunji, S.; Kamada, K.; Shoji, Y.; Yoshikawa, A.; Aoki, T.

    2018-01-01

    We have developed a sensor head for a time-of-flight (TOF) PET scanner using plastic scintillators, which have a very fast timing property. Given the very small cross section for photoelectric absorption in plastic scintillators at 511 keV, we use Compton scattering in order to compensate for detection efficiency. The detector will consist of two layers of scatterers and absorbers made of plastic and inorganic scintillators such as GAGG:Ce, respectively. Signals are read out by monolithic Multi-Pixel Photon Counters, and energy deposits and interaction time stamps are acquired. The scintillators are arranged to resolve the interaction position in three dimensions, so the system also functions as a depth-of-interaction (DOI) PET scanner. A TOF resolution of ~200 ps (FWHM) is achieved both when using a leading-edge discriminator with time-walk correction and when using a DOI-sensitive configuration. Position resolution and spectroscopy are demonstrated with the prototype data acquisition system, and Compton scattering events are subsequently identified. We also demonstrate that a background rejection technique using the Compton cone constraint is viable with our system.

  13. An empirical correction for moderate multiple scattering in super-heterodyne light scattering.

    PubMed

    Botin, Denis; Mapa, Ludmila Marotta; Schweinfurth, Holger; Sieber, Bastian; Wittenberg, Christopher; Palberg, Thomas

    2017-05-28

    Frequency domain super-heterodyne laser light scattering is utilized in a low angle integral measurement configuration to determine flow and diffusion in charged sphere suspensions showing moderate to strong multiple scattering. We introduce an empirical correction to subtract the multiple scattering background and isolate the singly scattered light. We demonstrate the excellent feasibility of this simple approach for turbid suspensions of transmittance T ≥ 0.4. We study the particle concentration dependence of the electro-kinetic mobility in low salt aqueous suspension over an extended concentration regime and observe a maximum at intermediate concentrations. We further use our scheme for measurements of the self-diffusion coefficients in the fluid samples in the absence or presence of shear, as well as in polycrystalline samples during crystallization and coarsening. We discuss the scope and limits of our approach as well as possible future applications.

  14. Use of the Wigner representation in scattering problems

    NASA Technical Reports Server (NTRS)

    Bemler, E. A.

    1975-01-01

    The basic equations of quantum scattering were translated into the Wigner representation, putting quantum mechanics in the form of a stochastic process in phase space with real-valued probability distributions and source functions. The interpretative picture associated with this representation is developed and stressed, and results used in applications published elsewhere are derived. The form of the integral equation for scattering, as well as its multiple scattering expansion in this representation, is derived. Quantum corrections to classical propagators are briefly discussed. The basic approximation used in the Monte Carlo method is derived in a fashion which allows for future refinement and which includes bound state production. Finally, as a simple illustration of some of the formalism, scattering from a bound two-body system is treated. Simple expressions for single and double scattering contributions to total and differential cross-sections, as well as for all necessary shadow corrections, are obtained.

  15. A technique for correcting ERTS data for solar and atmospheric effects. [Michigan test site

    NASA Technical Reports Server (NTRS)

    Rogers, R. H. (Principal Investigator); Peacock, K.; Shah, N. J.

    1974-01-01

    The author has identified the following significant results. Based on processing ERTS CCTs and ground truth measurements collected on the Michigan test site for January through June 1973, the following results are reported: (1) atmospheric transmittance varies from 70 to 85% in band 4, 77 to 90% in band 5, 80 to 94% in band 6, and 84 to 97% in band 7 for one air mass; (2) a simple technique was established to determine the atmospheric scattering seen by ERTS-1 from ground-based measurements of sky radiance. For March this scattering was found to be equivalent to that produced by a target having a reflectance of 11% in band 4, 5% in band 5, 3% in band 6, and 1% in band 7; (3) the computer's ability to classify targets under various atmospheric conditions was determined. Classification accuracy on some targets (i.e., bare soil, tended grass, etc.) holds up even under the most severe atmospheres encountered, while performance on other targets (trees, urban, rangeland, etc.) degrades rapidly when atmospheric conditions change by the smallest amount.

  16. Mid-infrared laser-absorption diagnostic for vapor-phase measurements in an evaporating n-decane aerosol

    NASA Astrophysics Data System (ADS)

    Porter, J. M.; Jeffries, J. B.; Hanson, R. K.

    2009-09-01

    A novel three-wavelength mid-infrared laser-based absorption/extinction diagnostic has been developed for simultaneous measurement of temperature and vapor-phase mole fraction in an evaporating hydrocarbon fuel aerosol (vapor and liquid droplets). The measurement technique was demonstrated for an n-decane aerosol with D50 ≈ 3 μm in steady and shock-heated flows with a measurement bandwidth of 125 kHz. Laser wavelengths were selected from FTIR measurements of the C-H stretching band of vapor and liquid n-decane near 3.4 μm (3000 cm^-1), and from modeled light scattering from droplets. Measurements were made for vapor mole fractions below 2.3 percent with errors less than 10 percent, and simultaneous temperature measurements over the range 300 K < T < 900 K were made with errors less than 3 percent. The measurement technique is designed to provide accurate values of temperature and vapor mole fraction in evaporating polydispersed aerosols with small mean diameters (D50 < 10 μm), where near-infrared laser-based scattering corrections are prone to error.

  17. Ambient dose equivalent and effective dose from scattered x-ray spectra in mammography for Mo/Mo, Mo/Rh and W/Rh anode/filter combinations.

    PubMed

    Künzel, R; Herdade, S B; Costa, P R; Terini, R A; Levenhagen, R S

    2006-04-21

    In this study, scattered x-ray distributions were produced by irradiating a tissue equivalent phantom under clinical mammographic conditions by using Mo/Mo, Mo/Rh and W/Rh anode/filter combinations, for 25 and 30 kV tube voltages. Energy spectra of the scattered x-rays have been measured with a Cd0.9Zn0.1Te (CZT) detector for scattering angles between 30 degrees and 165 degrees. Measurement and correction processes have been evaluated through the comparison between the values of the half-value layer (HVL) and air kerma calculated from the corrected spectra and measured with an ionization chamber in a nonclinical x-ray system with a W/Mo anode/filter combination. The shape of the corrected x-ray spectra measured in the nonclinical system was also compared with those calculated using semi-empirical models published in the literature. Scattered x-ray spectra measured in the clinical x-ray system have been characterized through the calculation of HVL and mean photon energy. Values of the air kerma, ambient dose equivalent and effective dose have been evaluated through the corrected x-ray spectra. Mean conversion coefficients relating the air kerma to the ambient dose equivalent and to the effective dose from the scattered beams for Mo/Mo, Mo/Rh and W/Rh anode/filter combinations were also evaluated. Results show that for the scattered radiation beams the ambient dose equivalent provides an overestimate of the effective dose by a factor of about 5 in the mammography energy range. These results can be used in the control of the dose limits around a clinical unit and in the calculation of more realistic protective shielding barriers in mammography.

  18. A post-reconstruction method to correct cupping artifacts in cone beam breast computed tomography

    PubMed Central

    Altunbas, M. C.; Shaw, C. C.; Chen, L.; Lai, C.; Liu, X.; Han, T.; Wang, T.

    2007-01-01

    In cone beam breast computed tomography (CT), scattered radiation leads to nonuniform biasing of CT numbers known as a cupping artifact. Besides being visual distractions, cupping artifacts appear as background nonuniformities, which impair efficient gray scale windowing and pose a problem for threshold-based volume visualization/segmentation. To overcome this problem, we have developed a background nonuniformity correction method specifically designed for cone beam breast CT. With this technique, the cupping artifact is modeled as an additive background signal profile in the reconstructed breast images. Due to the largely circularly symmetric shape of a typical breast, the additive background signal profile was also assumed to be circularly symmetric. The radial variation of the background signals was estimated by measuring the spatial variation of adipose tissue signals in front view breast images. To extract adipose tissue signals in an automated manner, a signal sampling scheme in polar coordinates and a background trend fitting algorithm were implemented. The background fits were compared with a targeted adipose tissue signal value (constant throughout the breast volume) to obtain an additive correction value for each tissue voxel. To test the accuracy, we applied the technique to cone beam CT images of mastectomy specimens. After correction, the images demonstrated significantly improved signal uniformity in both front and side view slices. The reduction of both intra-slice and inter-slice variations in adipose tissue CT numbers supported our observations. PMID:17822018
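    A compact sketch of the additive correction follows: sample adipose voxels, fit the radial trend of their CT numbers, and add back the difference to a constant target value. The polynomial background model and the target adipose value are illustrative placeholders for the polar-coordinate sampling and trend fitting described above.

```python
import numpy as np

def radial_background(image, adipose_mask, center, degree=4):
    """Fit the radial trend of adipose-tissue CT numbers in a front-view slice;
    this models the cupping as an additive, circularly symmetric background."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(yy - center[0], xx - center[1])
    coeffs = np.polyfit(r[adipose_mask], image[adipose_mask], degree)
    return np.polyval(coeffs, r)

def decup(image, adipose_mask, center, target_adipose=-80.0):
    """Additive correction: push the fitted adipose trend back to a constant
    target value everywhere in the slice (target value is an assumed placeholder)."""
    background = radial_background(image, adipose_mask, center)
    return image + (target_adipose - background)

# Toy slice with a parabolic cupping profile superimposed on adipose at -80 HU
yy, xx = np.indices((256, 256))
r = np.hypot(yy - 128, xx - 128)
cupped = -80.0 - 40.0 * (1 - (r / r.max()) ** 2)     # darker toward the center
mask = r < 100                                        # pretend everything inside is adipose
flat = decup(cupped, mask, center=(128, 128))
```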

  19. Minimizing systematic errors from atmospheric multiple scattering and satellite viewing geometry in coastal zone color scanner level IIA imagery

    NASA Technical Reports Server (NTRS)

    Martin, D. L.; Perry, M. J.

    1994-01-01

    Water-leaving radiances and phytoplankton pigment concentrations are calculated from coastal zone color scanner (CZCS) radiance measurements by removing atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. The single greatest source of error in CZCS atmospheric correction algorithms is the assumption that these Rayleigh and aerosol radiances are separable. Multiple-scattering interactions between Rayleigh and aerosol components cause systematic errors in calculated aerosol radiances, and the magnitude of these errors is dependent on aerosol type and optical depth and on satellite viewing geometry. A technique was developed which extends the results of previous radiative transfer modeling by Gordon and Castano to predict the magnitude of these systematic errors for simulated CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere. The simulated image mathematically duplicates the exact satellite, Sun, and pixel locations of an actual CZCS image. Errors in the aerosol radiance at 443 nm are calculated for a range of aerosol optical depths. When pixels in the simulated image exceed an error threshold, the corresponding pixels in the actual CZCS image are flagged and excluded from further analysis or from use in image compositing or compilation of pigment concentration databases. Studies based on time series analyses or compositing of CZCS imagery which do not address Rayleigh-aerosol multiple scattering should be interpreted cautiously, since the fundamental assumption used in their atmospheric correction algorithm is flawed.

  20. Whole-angle spherical retroreflector using concentric layers of homogeneous optical media.

    PubMed

    Oakley, John P

    2007-03-01

    Spherical retroreflectors have a much greater acceptance angle than conventional retroreflectors such as corner cubes. However, the optical performance of known spherical reflectors is limited by spherical aberration. It is shown that third-order spherical aberration may be corrected by using two or more layers of homogeneous optical media of different refractive indices. The performance of the retroreflector is characterized by the scattering (or radar) cross section, which is calculated by using optical design software. A practical spherical reflector is described that offers a significant increase in optical performance over existing devices. No gradient index components are required, and the device is constructed by using conventional optical materials and fabrication techniques. The experimental results confirm that the device operates correctly at the design wavelength of 690 nm.

  1. A Discrete Scatterer Technique for Evaluating Electromagnetic Scattering from Trees

    DTIC Science & Technology

    2016-09-01

    ARL-TR-7799, September 2016, US Army Research Laboratory technical report (dates covered: 2015–2016): A Discrete Scatterer Technique for Evaluating Electromagnetic Scattering from Trees.

  2. Arbitrary shape region-of-interest fluoroscopy system

    NASA Astrophysics Data System (ADS)

    Xu, Tong; Le, Huy; Molloi, Sabee Y.

    2002-05-01

    Region-of-interest (ROI) fluoroscopy has previously been investigated as a method to reduce x-ray exposure to the patient and the operator. This ROI fluoroscopy technique allows the operator to arbitrarily determine the shape, size, and location of the ROI. A device was used to generate patient-specific x-ray beam filters. The device comprises 18 stepper motors that control a 16 × 16 matrix of pistons to form the filter from a deformable attenuating material. Patient exposure reduction was measured to be 84 percent for a 65 kVp beam. Operator exposure reduction was measured to be 69 percent. Due to the reduced x-ray scatter, image contrast was improved by 23 percent inside the ROI. The reduced gray level in the periphery was corrected using an experimentally determined compensation ratio. A running average interpolation technique was used to eliminate artifacts at the ROI edge. As expected, the final corrected images show increased noise in the periphery. However, the anatomical structures in the periphery could still be visualized. This arbitrarily shaped region-of-interest fluoroscopic technique was shown to be effective in its ability to reduce patient and operator exposure without significant reduction in image quality. The ability to define an arbitrarily shaped ROI should make the technique more clinically feasible.

  3. Ground-based determination of atmospheric radiance for correction of ERTS-1 data

    NASA Technical Reports Server (NTRS)

    Peacock, K.

    1974-01-01

    A technique is described for estimating the atmospheric radiance observed by a downward sensor (ERTS) using ground-based measurements. A formula is obtained for the sky radiance at the time of the ERTS overpass from the radiometric measurement of the sky radiance made at a particular solar zenith angle and air mass. A graph illustrates ground-based sky radiance measurements as a function of the scattering angle for a range of solar air masses. Typical values for sky radiance at a solar zenith angle of 48 degrees are given.

  4. How to apply the optimal estimation method to your lidar measurements for improved retrievals of temperature and composition

    NASA Astrophysics Data System (ADS)

    Sica, R. J.; Haefele, A.; Jalali, A.; Gamage, S.; Farhani, G.

    2018-04-01

    The optimal estimation method (OEM) has a long history of use in passive remote sensing, but has only recently been applied to active instruments like lidar. The OEM's advantages over traditional techniques include obtaining a full systematic and random uncertainty budget plus the ability to work with the raw measurements without first applying instrument corrections. In our meeting presentation we will show you how to use the OEM for temperature and composition retrievals for Rayleigh-scatter, Raman-scatter and DIAL lidars.

  5. Scaffolded DNA origami of a DNA tetrahedron molecular container.

    PubMed

    Ke, Yonggang; Sharma, Jaswinder; Liu, Minghui; Jahn, Kasper; Liu, Yan; Yan, Hao

    2009-06-01

    We describe a strategy of scaffolded DNA origami to design and construct 3D molecular cages of tetrahedron geometry with an inside volume enclosed by triangular faces. Each edge of the triangular face is approximately 54 nm in dimension. The estimated total external volume and the internal cavity of the triangular pyramid are about 1.8 × 10^-23 and 1.5 × 10^-23 m^3, respectively. Correct formation of the tetrahedron DNA cage was verified by gel electrophoresis, atomic force microscopy, transmission electron microscopy, and dynamic light scattering techniques.
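    The quoted external volume is consistent with the regular-tetrahedron formula V = a^3/(6*sqrt(2)) for an edge length a of about 54 nm, as the short check below shows.

```python
import math

a = 54e-9                       # edge length of the triangular face, m
V = a**3 / (6 * math.sqrt(2))   # volume of a regular tetrahedron
print(f"{V:.2e} m^3")           # ~1.86e-23, consistent with the quoted 1.8e-23 m^3
```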

  6. Spectral peculiarities of electromagnetic wave scattering by Veselago's cylinders

    NASA Astrophysics Data System (ADS)

    Sukhov, S. V.; Shevyakhov, N. S.

    2006-03-01

    The results of spectral calculations of the extinction cross-section for scattering of E- and H-polarized electromagnetic waves by cylinders made of Veselago material are presented. The inadequacy of previously developed models of scattering is demonstrated. It is shown that a correct description of the scattering requires separate consideration of both the electric and magnetic subsystems.

  7. Spectral peculiarities of electromagnetic wave scattered by Veselago's cylinders

    NASA Astrophysics Data System (ADS)

    Sukhov, S. V.; Shevyakhov, N. S.

    2005-09-01

    The results of spectral calculations of the extinction cross-section for scattering of E- and H-polarized electromagnetic waves by cylinders made of Veselago material are presented. The inadequacy of previously developed models of scattering is demonstrated. It is shown that a correct description of the scattering requires separate consideration of both the electric and magnetic subsystems.

  8. Influence of local-field corrections on Thomson scattering in collision-dominated two-component plasmas.

    PubMed

    Fortmann, Carsten; Wierling, August; Röpke, Gerd

    2010-02-01

    The dynamic structure factor, which determines the Thomson scattering spectrum, is calculated via an extended Mermin approach. It incorporates the dynamical collision frequency as well as the local-field correction factor. This allows a systematic study of the impact of electron-ion collisions, as well as of electron-electron correlations due to degeneracy and short-range interaction, on the characteristics of the Thomson scattering signal. Accordingly, the plasmon dispersion and damping width are calculated for a two-component plasma in which the electron subsystem is completely degenerate. Strong deviations of the plasmon resonance position due to the electron-electron correlations are observed at increasing Brueckner parameter r_s. These results are of paramount importance for the interpretation of collective Thomson scattering spectra, as the determination of the free electron density from the plasmon resonance position requires a precise theory of the plasmon dispersion. Implications due to different approximations for the electron-electron correlation, i.e., different forms of the one-component local-field correction, are discussed.

  9. Assessment of second- and third-order ionospheric effects on regional networks: case study in China with longer CMONOC GPS coordinate time series

    NASA Astrophysics Data System (ADS)

    Deng, Liansheng; Jiang, Weiping; Li, Zhao; Chen, Hua; Wang, Kaihua; Ma, Yifang

    2017-02-01

    Higher-order ionospheric (HOI) delays are one of the principal technique-specific error sources in precise global positioning system analysis and have been proposed to become a standard part of precise GPS data processing. In this research, we apply HOI delay corrections to the Crustal Movement Observation Network of China's (CMONOC) data processing (from January 2000 to December 2013) and furnish quantitative results for the effects of HOI on CMONOC coordinate time series. The results for both a regional reference frame and global reference frame are analyzed and compared to clarify the HOI effects on the CMONOC network. We find that HOI corrections can effectively reduce the semi-annual signals in the northern and vertical components. For sites with lower semi-annual amplitudes, the average decrease in magnitude can reach 30 and 10 % for the northern and vertical components, respectively. The noise amplitudes with HOI corrections and those without HOI corrections are not significantly different. Generally, the HOI effects on CMONOC networks in a global reference frame are less obvious than the results in the regional reference frame, probably because the HOI-induced errors are smaller in comparison to the higher noise levels seen when using a global reference frame. Furthermore, we investigate the combined contributions of environmental loading and HOI effects on the CMONOC stations. The largest loading effects on the vertical displacement are found in the mid- to high-latitude areas. The weighted root mean square differences between the corrected and original weekly GPS height time series of the loading model indicate that the mass loading adequately reduced the scatter on the CMONOC height time series, whereas the results in the global reference frame showed better agreements between the GPS coordinate time series and the environmental loading. When combining the effects of environmental loading and HOI corrections, the results with the HOI corrections reduced the scatter on the observed GPS height coordinates better than the height when estimated without HOI corrections, and the combined solutions in the regional reference frame indicate more preferred improvements. Therefore, regional reference frames are recommended to investigate the HOI effects on regional networks.

  10. Atmospheric scattering corrections to solar radiometry

    NASA Technical Reports Server (NTRS)

    Box, M. A.; Deepak, A.

    1979-01-01

    Whenever a solar radiometer is used to measure direct solar radiation, some diffuse sky radiation invariably enters the detector's field of view along with the direct beam. Therefore, the atmospheric optical depth obtained by the use of Bouguer's transmission law (also called the Beer-Lambert law), which is valid only for direct radiation, needs to be corrected by taking account of the scattered radiation. This paper discusses the correction factors needed to account for the diffuse (i.e., singly and multiply scattered) radiation and the algorithms developed for retrieving aerosol size distributions from such measurements. For a radiometer with a small field of view (half-cone angle of less than 5 deg) and relatively clear skies (optical depths less than 0.4), it is shown that the total diffuse contribution represents approximately 1% of the total intensity.
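    In practice the correction amounts to removing the estimated diffuse fraction from the measured signal before applying Bouguer's law. A minimal sketch with an assumed 1% diffuse contribution:

```python
import numpy as np

def optical_depth(signal, extraterrestrial, airmass, diffuse_fraction=0.01):
    """Bouguer/Beer-Lambert retrieval of total optical depth, after removing an
    assumed diffuse contribution (here a nominal 1% of the measured intensity,
    as in the small-field-of-view, clear-sky case discussed above)."""
    direct = signal * (1.0 - diffuse_fraction)
    return -np.log(direct / extraterrestrial) / airmass

# Example: a 1% diffuse contamination slightly biases the uncorrected retrieval
tau_true, m, V0 = 0.3, 1.5, 1.0
measured = V0 * np.exp(-tau_true * m) / (1.0 - 0.01)   # direct beam plus 1% diffuse
print(optical_depth(measured, V0, m))                   # ~0.30 after correction
print(-np.log(measured / V0) / m)                       # slightly underestimated without it
```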

  11. A library least-squares approach for scatter correction in gamma-ray tomography

    NASA Astrophysics Data System (ADS)

    Meric, Ilker; Anton Johansen, Geir; Valgueiro Malta Moreira, Icaro

    2015-03-01

    Scattered radiation is known to lead to distortion in reconstructed images in Computed Tomography (CT). The effects of scattered radiation are especially more pronounced in non-scanning, multiple source systems which are preferred for flow imaging where the instantaneous density distribution of the flow components is of interest. In this work, a new method based on a library least-squares (LLS) approach is proposed as a means of estimating the scatter contribution and correcting for this. The validity of the proposed method is tested using the 85-channel industrial gamma-ray tomograph previously developed at the University of Bergen (UoB). The results presented here confirm that the LLS approach can effectively estimate the amounts of transmission and scatter components in any given detector in the UoB gamma-ray tomography system.
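    The LLS idea is to express each detector's recorded spectrum as a non-negative combination of library spectra for the transmission and scatter components and to solve for the weights by least squares. The sketch below uses synthetic template spectra and non-negative least squares as an illustration; the actual library in the paper comes from reference measurements and simulations of the UoB tomograph.

```python
import numpy as np
from scipy.optimize import nnls

def lls_decompose(measured_spectrum, transmission_lib, scatter_lib):
    """Library least-squares estimate of the transmission and scatter fractions
    in one detector's recorded energy spectrum; non-negative least squares
    keeps the weights physical."""
    library = np.column_stack([transmission_lib, scatter_lib])
    weights, _residual = nnls(library, measured_spectrum)
    return weights  # (transmission weight, scatter weight)

# Toy spectra: a sharp photopeak template and a broad downscatter continuum
energy = np.linspace(0, 700, 256)                       # keV
photopeak = np.exp(-0.5 * ((energy - 662) / 15) ** 2)   # 137Cs-like transmission peak
continuum = np.exp(-energy / 250) * (energy < 640)      # scattered component
measured = 0.7 * photopeak + 0.3 * continuum + 0.01 * np.random.default_rng(2).random(256)
print(lls_decompose(measured, photopeak, continuum))    # ~[0.7, 0.3]
```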

  12. Relativistic corrections to the multiple scattering effect on the Sunyaev-Zel'dovich effect in the isotropic approximation

    NASA Astrophysics Data System (ADS)

    Itoh, Naoki; Kawana, Youhei; Nozawa, Satoshi; Kohyama, Yasuharu

    2001-10-01

    We extend the formalism for the calculation of the relativistic corrections to the Sunyaev-Zel'dovich effect for clusters of galaxies and include the multiple scattering effects in the isotropic approximation. We present the results of the calculations by the Fokker-Planck expansion method as well as by the direct numerical integration of the collision term of the Boltzmann equation. The multiple scattering contribution is found to be very small compared with the single scattering contribution. For high-temperature galaxy clusters of kBTe~15keV, the ratio of both the contributions is -0.2 per cent in the Wien region. In the Rayleigh-Jeans region the ratio is -0.03 per cent. Therefore the multiple scattering contribution is safely neglected for the observed galaxy clusters.

  13. Slot scanning versus antiscatter grid in digital mammography: comparison of low-contrast performance using contrast-detail measurement

    NASA Astrophysics Data System (ADS)

    Lai, Chao-Jen; Shaw, Chris C.; Geiser, William; Kappadath, Srinivas C.; Liu, Xinming; Wang, TianPeng; Tu, Shu-Ju; Altunbas, Mustafa C.

    2004-05-01

    Slot scanning imaging techniques allow for effective scatter rejection without attenuating primary x-rays. The use of these techniques should generate better image quality for the same mean glandular dose (MGD) or a similar image quality for a lower MGD as compared to imaging techniques using an anti-scatter grid. In this study, we compared a slot scanning digital mammography system (SenoScan, Fisher Imaging Systems, Denver, CO) to a full-field digital mammography (FFDM) system used in conjunction with a 5:1 anti-scatter grid (SenoGraphe 2000D, General Electric Medical Systems, Milwaukee, WI). Images of a contrast-detail phantom (University Hospital Nijmegen, The Netherlands) were reviewed to measure the contrast-detail curves for both systems. These curves were measured at 100%, 71%, 49% and 33% of the reference mean glandular dose (MGD), as determined by photo-timing, for the Fisher system and 100% for the GE system. Soft-copy reading was performed on review workstations provided by the manufacturers. The correct observation ratios (CORs) were also computed and used to compare the performance of the two systems. The results showed that, based on the contrast-detail curves, the performance of the Fisher images, acquired at 100% and 71% of the reference MGD, was comparable to the GE images at 100% of the reference MGD. The CORs for Fisher images were 0.463 and 0.444 at 100% and 71% of the reference MGD, respectively, compared to 0.453 for the GE images at 100% of the reference MGD.

  14. A Scattered Light Correction to Color Images Taken of Europa by the Galileo Spacecraft: Initial Results

    NASA Astrophysics Data System (ADS)

    Phillips, C. B.; Valenti, M.

    2009-12-01

    Jupiter's moon Europa likely possesses an ocean of liquid water beneath its icy surface, but estimates of the thickness of the surface ice shell vary from a few kilometers to tens of kilometers. Color images of Europa reveal the existence of a reddish, non-ice component associated with a variety of geological features. The composition and origin of this material is uncertain, as is its relationship to Europa's various landforms. Published analyses of Galileo Near Infrared Mapping Spectrometer (NIMS) observations indicate the presence of highly hydrated sulfate compounds. This non-ice material may also bear biosignatures or other signs of biotic material. Additional spectral information from the Galileo Solid State Imager (SSI) could further elucidate the nature of the surface deposits, particularly when combined with information from the NIMS. However, little effort has been focused on this approach because proper calibration of the color image data is challenging, requiring both skill and patience to process the data and incorporate the appropriate scattered light correction. We are currently working to properly calibrate the color SSI data. The most important and most difficult issue to address in the analysis of multispectral SSI data entails using thorough calibrations and a correction for scattered light. Early in the Galileo mission, studies of the Galileo SSI data for the moon revealed discrepancies of up to 10% in relative reflectance between images containing scattered light and images corrected for scattered light. Scattered light adds a wavelength-dependent low-intensity brightness factor to pixels across an image. For example, a large bright geological feature located just outside the field of view of an image will scatter extra light onto neighboring pixels within the field of view. Scattered light can be seen as a dim halo surrounding an image that includes a bright limb, and can also come from light scattered inside the camera by dirt, edges, and the interfaces of lenses. Because of the wavelength dependence of this effect, a scattered light correction must be performed on any SSI multispectral dataset before quantitative spectral analysis can be done. The process involves using a point-spread function for each filter that helps determine the amount of scattered light expected for a given pixel based on its location and the model attenuation factor for that pixel. To remove scattered light for a particular image taken through a particular filter, the Fourier transform of the attenuation function, which is the point spread function for that filter, is convolved with the Fourier transform of the image at the same wavelength. The result is then filtered for noise in the frequency domain, and then transformed back to the spatial domain. This results in a version of the original image that would have been taken without the scattered light contribution. We will report on our initial results from this calibration.
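    The Fourier-domain removal described above can be sketched as a regularized (Wiener-style) deconvolution of each image with the point-spread function of its filter. The code below is a generic illustration of that operation on a synthetic image; the per-filter PSFs, noise filtering, and calibration details of the SSI pipeline are not reproduced.

```python
import numpy as np

def remove_scattered_light(image, psf, noise_eps=1e-3):
    """Fourier-domain removal of a filter-specific scattered-light contribution,
    sketched as a regularized (Wiener-like) deconvolution with that filter's
    point-spread function. The noise term and the PSF are placeholders for the
    calibration products described in the abstract."""
    otf = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    img_ft = np.fft.fft2(image)
    wiener = np.conj(otf) / (np.abs(otf) ** 2 + noise_eps)
    return np.real(np.fft.ifft2(img_ft * wiener))

# Toy usage: a Gaussian "scatter halo" PSF applied to and removed from a bright disk
yy, xx = np.indices((128, 128))
r2 = (yy - 64) ** 2 + (xx - 64) ** 2
disk = (r2 < 20 ** 2).astype(float)
psf = np.exp(-r2 / (2 * 3.0 ** 2))
psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(disk) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = remove_scattered_light(blurred, psf)
```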

  15. Generalization of the Hartree-Fock approach to collision processes

    NASA Astrophysics Data System (ADS)

    Hahn, Yukap

    1997-06-01

    The conventional Hartree and Hartree-Fock approaches for bound states are generalized to treat atomic collision processes. All the single-particle orbitals, for both bound and scattering states, are determined simultaneously by requiring full self-consistency. This generalization is achieved by introducing two Ansätze: (a) the weak asymptotic boundary condition, which maintains the correct scattering energy and target orbitals with the correct number of nodes, and (b) square integrable amputated scattering functions to generate self-consistent field (SCF) potentials for the target orbitals. The exact initial target and final-state asymptotic wave functions are not required and thus need not be specified a priori, as they are determined simultaneously by the SCF iterations. To check the asymptotic behavior of the solution, the theory is applied to elastic electron-hydrogen scattering at low energies. The solution is found to be stable and the weak asymptotic condition is sufficient to produce the correct scattering amplitudes. The SCF potential for the target orbital shows the strong penetration by the projectile electron during the collision, but the exchange term tends to restore the original form. Potential applicabilities of this extension are discussed, including the treatment of ionization and shake-off processes.

  16. Topographic correction realization based on the CBERS-02B image

    NASA Astrophysics Data System (ADS)

    Qin, Hui-ping; Yi, Wei-ning; Fang, Yong-hua

    2011-08-01

    The special topography of mountainous terrain induces retrieval distortions within the same land-cover class and in surface spectral signatures. To improve the accuracy of studies of topographic surface characteristics, many researchers have focused on topographic correction. Topographic correction methods can be statistical-empirical or physical models, among which methods based on digital elevation model (DEM) data are the most popular. Restricted by spatial resolution, previous work has mostly corrected topographic effects on Landsat TM images, whose 30 m spatial resolution can be easily obtained from the internet or calculated from digital maps. Some researchers have also performed topographic correction on high-spatial-resolution images such as QuickBird and Ikonos, but there is little corresponding research on the topographic correction of CBERS-02B images. In this study, mountainous terrain in Liaoning was taken as the study area. The original 15 m digital elevation model data were interpolated step by step down to 2.36 m to match the image resolution. The C correction, SCS+C correction, Minnaert correction, and Ekstrand-r correction were applied to correct the topographic effect, and the corrected results were compared. For the images corrected with the four methods, scatter diagrams between the image digital number and the cosine of the solar incidence angle with respect to the surface normal were produced, and the mean value, standard deviation, slope of the scatter diagram, and a separation factor were calculated. The analysis shows that shadows are weakened in the corrected images relative to the original images, the three-dimensional relief effect is removed, and the absolute slope of the fitted lines in the scatter diagrams is reduced. The Minnaert correction is the most effective. These results demonstrate that the established correction methods can be successfully adapted to CBERS-02B images, and that the DEM data can be interpolated step by step to approximate the required spatial resolution when high-resolution elevation data are hard to obtain.
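    As an illustration of one of the methods compared above, the C correction regresses the band values against the cosine of the local solar incidence angle and rescales each pixel to horizontal-surface illumination. A sketch with synthetic data follows; the regression-based constant c and the toy scene are assumptions for demonstration only.

```python
import numpy as np

def c_correction(dn, cos_i, solar_zenith_rad):
    """C-correction for topographic (illumination) effects: regress the band
    values against the cosine of the local solar incidence angle, then rescale
    each pixel to the illumination of a horizontal surface. A sketch of one of
    the four methods compared above (Minnaert, SCS+C, and Ekstrand-r differ
    only in the scaling law)."""
    slope, intercept = np.polyfit(cos_i.ravel(), dn.ravel(), 1)
    c = intercept / slope
    return dn * (np.cos(solar_zenith_rad) + c) / (cos_i + c)

# Toy scene: terrain-modulated illumination on a uniform surface
rng = np.random.default_rng(3)
cos_i = np.clip(0.6 + 0.3 * rng.standard_normal((100, 100)), 0.05, 1.0)
dn = 50.0 * cos_i + 10.0 + rng.standard_normal((100, 100))
corrected = c_correction(dn, cos_i, solar_zenith_rad=np.deg2rad(35))
# After correction the scatter of `corrected` against `cos_i` is nearly flat,
# which matches the scatter-diagram slope diagnostic used in the study above.
```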

  17. FDTD scattered field formulation for scatterers in stratified dispersive media.

    PubMed

    Olkkonen, Juuso

    2010-03-01

    We introduce a simple scattered field (SF) technique that enables finite difference time domain (FDTD) modeling of light scattering from dispersive objects residing in stratified dispersive media. The introduced SF technique is verified against the total field scattered field (TFSF) technique. As an application example, we study surface plasmon polariton enhanced light transmission through a 100 nm wide slit in a silver film.

  18. Three-dimensional microscopic tomographic imagings of the cataract in a human lens in vivo

    NASA Astrophysics Data System (ADS)

    Masters, Barry R.

    1998-10-01

    The problem of three-dimensional visualization of a human lens in vivo has been solved by a technique of volume rendering a transformed series of 60 rotated Scheimpflug (a dual slit reflected light microscope) digital images. The data set was obtained by rotating the Scheimpflug camera about the optic axis of the lens in 3 degree increments. The transformed set of optical sections were first aligned to correct for small eye movements, and then rendered into a volume reconstruction with volume rendering computer graphics techniques. To help visualize the distribution of lens opacities (cataracts) in the living, human lens the intensity of light scattering was pseudocolor coded and the cataract opacities were displayed as a movie.

  19. On the far-field computation of acoustic radiation forces.

    PubMed

    Martin, P A

    2017-10-01

    It is known that the steady acoustic radiation force on a scatterer due to incident time-harmonic waves can be calculated by evaluating certain integrals of velocity potentials over a sphere surrounding the scatterer. The goal is to evaluate these integrals using far-field approximations and appropriate limits. Previous derivations are corrected, clarified, and generalized. Similar corrections are made to textbook derivations of optical theorems.

  20. Corrections on energy spectrum and scatterings for fast neutron radiography at NECTAR facility

    NASA Astrophysics Data System (ADS)

    Liu, Shu-Quan; Bücherl, Thomas; Li, Hang; Zou, Yu-Bin; Lu, Yuan-Rong; Guo, Zhi-Yu

    2013-11-01

    Distortions caused by the neutron spectrum and scattered neutrons are major problems in fast neutron radiography and should be considered for improving the image quality. This paper puts emphasis on the removal of these image distortions and deviations for fast neutron radiography performed at the NECTAR facility of the research reactor FRM-II at Technische Universität München (TUM), Germany. The NECTAR energy spectrum is analyzed and established to correct the influence of the neutron spectrum, and the Point Scattered Function (PScF) simulated with the Monte Carlo program MCNPX is used to evaluate scattering effects from the object and improve image quality. The analysis results confirm the effectiveness of the above two corrections.

  1. SU-D-206-04: Iterative CBCT Scatter Shading Correction Without Prior Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, Y; Wu, P; Mao, T

    2016-06-15

    Purpose: To estimate and remove the scatter contamination in the acquired projections of cone-beam CT (CBCT), in order to suppress the shading artifacts and improve the image quality without prior information. Methods: The uncorrected CBCT images containing shading artifacts are reconstructed by applying the standard FDK algorithm to the CBCT raw projections. The uncorrected image is then segmented to generate an initial template image. To estimate the scatter signal, the differences are calculated by subtracting the simulated projections of the template image from the raw projections. Since scatter signals are dominantly continuous and low-frequency in the projection domain, they are estimated by low-pass filtering the difference signals and subtracted from the raw CBCT projections to achieve the scatter correction. Finally, the corrected CBCT image is reconstructed from the corrected projection data. Since an accurate template image is not readily segmented from the uncorrected CBCT image, the proposed scheme is iterated until the produced template is no longer altered. Results: The proposed scheme is evaluated on the Catphan©600 phantom data and CBCT images acquired from a pelvis patient. The results show that shading artifacts have been effectively suppressed by the proposed method. Using multi-detector CT (MDCT) images as reference, quantitative analysis is performed to measure the quality of the corrected images. Compared to images without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 50 HU and increases the spatial uniformity. Conclusion: An iterative strategy that does not rely on prior information is proposed in this work to remove the shading artifacts due to scatter contamination in the projection domain. The method is evaluated in phantom and patient studies and the results show that the image quality is remarkably improved. The proposed method is efficient and practical for addressing the poor image quality of CBCT images. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001), National High-tech R&D Program for Young Scientists by the Ministry of Science and Technology of China (Grant No. 2015AA020917).
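    The iteration described above can be sketched as the loop below: reconstruct, segment a template, forward project it, low-pass the difference with the raw projections to get the scatter estimate, subtract, and repeat. The reconstruction, projector, and segmentation are left as user-supplied callables, and the smoke test at the end uses meaningless stand-ins purely to show the data flow.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def iterative_scatter_correction(raw_proj, fdk_reconstruct, forward_project,
                                 segment, n_iter=3, sigma_px=25):
    """Iterative shading correction in the projection domain, following the
    scheme described above. `fdk_reconstruct`, `forward_project`, and `segment`
    are user-supplied callables that stand in for the actual pipeline
    components, which are not specified in code form in the abstract."""
    corrected_proj = raw_proj.copy()
    for _ in range(n_iter):
        volume = fdk_reconstruct(corrected_proj)
        template = segment(volume)                        # piecewise-constant template image
        simulated = forward_project(template)
        difference = raw_proj - simulated
        # Scatter is smooth and low-frequency in the projection domain
        scatter = np.clip(gaussian_filter(difference, sigma=(0, sigma_px, sigma_px)), 0, None)
        corrected_proj = np.clip(raw_proj - scatter, 1e-6, None)
    return corrected_proj

# Minimal smoke test with placeholder callables (no real geometry)
proj = np.abs(np.random.default_rng(4).normal(1.0, 0.1, (8, 64, 64)))
out = iterative_scatter_correction(
    proj,
    fdk_reconstruct=lambda p: p.mean(axis=0),            # stand-in "reconstruction"
    forward_project=lambda v: np.repeat(v[None], 8, 0),  # stand-in projector
    segment=lambda v: np.where(v > v.mean(), 1.0, 0.5),
)
```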

  2. Bistatic scattering from a cone frustum

    NASA Technical Reports Server (NTRS)

    Ebihara, W.; Marhefka, R. J.

    1986-01-01

    The bistatic scattering from a perfectly conducting cone frustum is investigated using the Geometrical Theory of Diffraction (GTD). The first-order GTD edge-diffraction solution has been extended by correcting for its failure in the specular region off the curved surface and in the rim-caustic regions of the endcaps. The corrections are accomplished by the use of transition functions which are developed and introduced into the diffraction coefficients. Theoretical results are verified in the principal plane by comparison with the moment method solution and experimental measurements. The resulting solution for the scattered fields is accurate, easy to apply, and fast to compute.

  3. Single-Inclusive Jet Production In Electron-Nucleon Collisions Through Next-To-Next-To-Leading Order In Perturbative QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abelof, Gabriel; Boughezal, Radja; Liu, Xiaohui

    2016-10-17

    We compute the O(α_s^2) perturbative corrections to inclusive jet production in electron-nucleon collisions. This process is of particular interest to the physics program of a future Electron Ion Collider (EIC). We include all relevant partonic processes, including deep-inelastic scattering contributions, photon-initiated corrections, and parton-parton scattering terms that first appear at this order. Upon integration over the final-state hadronic phase space we validate our results for the deep-inelastic corrections against the known next-to-next-to-leading order (NNLO) structure functions. Our calculation uses the N-jettiness subtraction scheme for performing higher-order computations, and allows for a completely differential description of the deep-inelastic scattering process. We describe the application of this method to inclusive jet production in detail, and present phenomenological results for the proposed EIC. The NNLO corrections have a non-trivial dependence on the jet kinematics and arise from an intricate interplay between all contributing partonic channels.

  4. [Atmospheric correction of HJ-1 CCD data for water imagery based on dark object model].

    PubMed

    Zhou, Li-Guo; Ma, Wei-Chun; Gu, Wan-Hua; Huai, Hong-Yan

    2011-08-01

    The CCD multi-band data of HJ-1A have great potential for inland water quality monitoring, but precise atmospheric correction is a prerequisite for their application. In this paper, a dark-pixel-based method for retrieving the water-leaving radiance is proposed. Besides Rayleigh scattering, aerosol scattering is important for atmospheric correction; moreover, inland lakes are generally case II waters, for which the water-leaving radiance is not zero. Therefore, synchronous MODIS shortwave infrared data were used to obtain the aerosol parameters and, exploiting the relative stability of aerosol scattering at 560 nm, the water-leaving radiance for each visible and near-infrared band was retrieved and normalized; the remotely sensed reflectance of the water was then computed accordingly. The results show that this atmospheric correction method, based on the imagery itself, is effective for the retrieval of water parameters from HJ-1A CCD data.

  5. Correction of the spectral calibration of the Joint European Torus core light detecting and ranging Thomson scattering diagnostic using ray tracing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hawke, J.; Scannell, R.; Maslov, M.

    2013-10-15

    This work isolated the cause of the observed discrepancy between the electron temperature (T_e) measurements before and after the JET Core LIDAR Thomson Scattering (TS) diagnostic was upgraded. In the upgrade process, stray light filters positioned just before the detectors were removed from the system. Modelling showed that the shift imposed on the stray light filters' transmission functions due to the variations in the incidence angles of the collected photons impacted plasma measurements. To correct for this identified source of error, correction factors were developed using ray tracing models for the calibration and operational states of the diagnostic. The application of these correction factors resulted in an increase in the observed T_e, resulting in the partial if not complete removal of the observed discrepancy in the measured T_e between the JET core LIDAR TS diagnostic, High Resolution Thomson Scattering, and the Electron Cyclotron Emission diagnostics.

  6. An alternative estimation of the RF-enhanced plasma temperature during SPEAR artificial heating experiments: Early results

    NASA Astrophysics Data System (ADS)

    Vickers, H.; Baddeley, L.

    2011-11-01

    RF heating of the F region plasma at high latitudes has long been known to produce electron temperature increases that can vary from tens to hundreds of percent above the background, unperturbed level. In contrast, artificial ionospheric modification experiments conducted using the Space Plasma Exploration by Active Radar (SPEAR) heating facility on Svalbard have often failed to produce obvious enhancements in the electron temperatures when measured using the European Incoherent Scatter Svalbard radar (ESR), colocated with the heater. Contamination of the ESR ion line spectra by the zero-frequency purely growing mode (PGM) feature is known to persist at varying amplitudes throughout SPEAR heating, and such spectral features can lead to significant temperature underestimations when the incoherent scatter spectra are analyzed using conventional methods. In this study, we present the first results of applying a recently developed technique to correct the PGM-contaminated spectra to SPEAR-enhanced ESR spectra and derive an alternative estimate of the SPEAR-heated electron temperature. We discuss how the effectiveness of the spectrum corrections can be affected by the data variance, estimated over the integration period. The subsequent electron temperatures, inferred from corrected spectra, range from a few tens to a few hundred Kelvin above the average background temperature. These temperatures are found to be in reasonable agreement with the theoretical “enhanced” temperature, calculated for the peak of the stationary temperature perturbation profile, when realistic absorption effects are accounted for.

  7. Methods for assessing forward and backward light scatter in patients with cataract.

    PubMed

    Crnej, Alja; Hirnschall, Nino; Petsoglou, Con; Findl, Oliver

    2017-08-01

    To compare objective methods for assessing backward and forward light scatter and psychophysical tests in patients with cataracts. Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom. Prospective case series. This study included patients scheduled for cataract surgery. Lens opacities were grouped into predominantly nuclear sclerotic, cortical, posterior subcapsular, and mixed cataracts. Backward light scatter was assessed using a rotating Scheimpflug imaging technique (Pentacam HR), forward light scatter using a straylight meter (C-Quant), and straylight using the double-pass method (Optical Quality Analysis System, point-spread function [PSF] meter). The results were correlated with visual acuity under photopic conditions as well as photopic and mesopic contrast sensitivity. The study comprised 56 eyes of 56 patients. The mean age of the 23 men and 33 women was 71 years (range 48 to 84 years). Two patients were excluded. Of the remaining patients, 15 had predominantly nuclear sclerotic cataracts, 13 had cortical cataracts, 11 had posterior subcapsular cataracts, and 15 had mixed cataracts. Correlations between devices were low. The highest correlation was between PSF meter measurements and Scheimpflug measurements (r = 0.32). The best correlation with corrected distance visual acuity was obtained with the PSF meter (r = 0.45). Forward and backward light-scatter measurements cannot be used interchangeably. Scatter as an aspect of quality of vision was independent of acuity. Measuring forward light scatter with the straylight meter can be a useful additional tool in preoperative decision-making. Copyright © 2017 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  8. The Impact of Microstructure on an Accurate Snow Scattering Parameterization at Microwave Wavelengths

    NASA Astrophysics Data System (ADS)

    Honeyager, Ryan

    High frequency microwave instruments are increasingly used to observe ice clouds and snow. These instruments are significantly more sensitive than conventional precipitation radar. This is ideal for analyzing ice-bearing clouds, for ice particles are tenuously distributed and have effective densities that are far less than that of liquid water. However, at shorter wavelengths, the electromagnetic response of ice particles is no longer solely dependent on particle mass. The shape of the ice particles also plays a significant role. Thus, in order to understand the observations of high frequency microwave radars and radiometers, it is essential to model the scattering properties of snowflakes correctly. Several research groups have proposed detailed models of snow aggregation. These particle models are coupled with computer codes that determine the particles' electromagnetic properties. However, there is a discrepancy between the particle model outputs and the requirements of the electromagnetic models. Snowflakes have countless variations in structure, but we also know that physically similar snowflakes scatter light in much the same manner. Structurally exact electromagnetic models, such as the discrete dipole approximation (DDA), require a high degree of structural resolution. Such methods are slow, spending considerable time processing redundant (i.e. useless) information. Conversely, when using techniques that incorporate too little structural information, the resultant radiative properties are not physically realistic. We therefore ask: what features are most important in determining scattering? This dissertation develops a general technique that can quickly parameterize the important structural aspects that determine the scattering of many diverse snowflake morphologies. A Voronoi bounding neighbor algorithm is first employed to decompose aggregates into well-defined interior and surface regions. The sensitivity of scattering to interior randomization is then examined. The loss of interior structure is found to have a negligible impact on scattering cross sections, and backscatter is lowered by approximately five percent. This establishes that detailed knowledge of interior structure is not necessary when modeling scattering behavior, and it also provides support for using an effective medium approximation to describe the interiors of snow aggregates. The Voronoi diagram-based technique enables the almost trivial determination of the effective density of this medium. A bounding neighbor algorithm is then used to establish a greatly improved approximation of scattering by equivalent spheroids. This algorithm also yields a Voronoi diagram-based definition of effective density, which is used in concert with the T-matrix method to determine single-scattering cross sections. The resulting backscatters are found to reasonably match those of the DDA over frequencies from 10.65 to 183.31 GHz and particle sizes from a few hundred micrometers to nine millimeters in length. Integrated error in backscatter versus DDA is found to be within 25% at 94 GHz. Errors in scattering cross-sections and asymmetry parameters are likewise small. The observed cross-sectional errors are much smaller than the differences observed among different particle models. This represents a significant improvement over established techniques, and it demonstrates that the radiative properties of dense aggregate snowflakes may be adequately represented by equal-mass homogeneous spheroids.
The present results can be used to supplement retrieval algorithms used by CloudSat, EarthCARE, Galileo, GPM and SWACR radars. The ability to predict the full range of scattering properties is potentially also useful for other particle regimes where a compact particle approximation is applicable.

  9. Correction of Atmospheric Haze in RESOURCESAT-1 LISS-4 MX Data for Urban Analysis: AN Improved Dark Object Subtraction Approach

    NASA Astrophysics Data System (ADS)

    Mustak, S.

    2013-09-01

    The correction of atmospheric effects is essential because visible bands of shorter wavelength are strongly affected by atmospheric scattering, especially Rayleigh scattering. The objectives of the paper are to find the haze values present in all spectral bands and to correct these haze values for urban analysis. In this paper, the Improved Dark Object Subtraction method of P. Chavez (1988) is applied for the correction of atmospheric haze in a Resourcesat-1 LISS-4 multispectral satellite image. Dark Object Subtraction is a very simple image-based method of atmospheric haze correction which assumes that there are at least a few pixels within an image which should be black (near-zero reflectance); such dark objects are clear water bodies and shadows whose DN values are zero or close to zero in the image. The Simple Dark Object Subtraction method is a first-order atmospheric correction, whereas the Improved Dark Object Subtraction method corrects the haze in terms of atmospheric scattering and path radiance based on a power law for the relative scattering effect of the atmosphere. The haze values extracted using the Simple Dark Object Subtraction method for the green band (Band 2), red band (Band 3), and NIR band (Band 4) are 40, 34, and 18, whereas the haze values extracted using the Improved Dark Object Subtraction method are 40, 18.02, and 11.80 for the same bands. It is concluded that the haze values extracted by the Improved Dark Object Subtraction method provide more realistic results than those from the Simple Dark Object Subtraction method.
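
    The improved dark object subtraction step lends itself to a compact sketch. The function below is illustrative only: it assumes a relative scattering law proportional to λ^(−n) and omits the gain/offset normalization of the full Chavez method; the band wavelengths and exponent are user-supplied assumptions, and the usage line reuses the paper's measured green-band haze of 40 DN as the starting value.

```python
import numpy as np

def improved_dos_haze(band_wavelengths_um, start_band, start_haze_dn, n=2.0):
    """Predict per-band haze DN values from one measured starting haze,
    assuming relative atmospheric scattering proportional to lambda**(-n)
    (n ~ 4 for very clear, ~2 for clear, ~0.5-1 for hazy conditions).
    The gain/offset normalization of the full method is omitted here."""
    lam = np.asarray(band_wavelengths_um, dtype=float)
    rel_scatter = lam ** (-n)
    # scale the scattering curve so the starting band reproduces its measured haze
    return start_haze_dn * rel_scatter / rel_scatter[start_band]

# Illustrative LISS-4 band centres (green, red, NIR) in micrometres:
haze = improved_dos_haze([0.555, 0.650, 0.815], start_band=0, start_haze_dn=40)
# haze correction is then a per-band subtraction, clipped at zero:
# corrected_dn = np.clip(dn_band - haze[band_index], 0, None)
```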

  10. Characterization of aerosol scattering and spectral absorption by unique methods: a polar/imaging nephelometer and spectral reflectance measurements of aerosol samples collected on filters

    NASA Astrophysics Data System (ADS)

    Dolgos, Gergely; Martins, J. Vanderlei; Remer, Lorraine A.; Correia, Alexandre L.; Tabacniks, Manfredo; Lima, Adriana R.

    2010-02-01

    Characterization of aerosol scattering and absorption properties is essential to accurate radiative transfer calculations in the atmosphere. Applications of this work include remote sensing of aerosols, corrections for aerosol distortions in satellite imagery of the surface, global climate models, and atmospheric beam propagation. Here we demonstrate successful instrument development at the Laboratory for Aerosols, Clouds and Optics at UMBC that better characterizes the aerosol scattering phase matrix using an imaging polar nephelometer (LACO-I-Neph) and enables measurement of spectral aerosol absorption from 200 nm to 2500 nm. The LACO-I-Neph measures the scattering phase function from 1.5° to 178.5° scattering angle with sufficient sensitivity to match theoretical expectations of Rayleigh scattering of various gases. Previous measurements either lack a sufficiently wide range of measured scattering angles or their sensitivity is too low, and therefore the required sample amount is prohibitively high for in situ measurements. The LACO-I-Neph also returns the expected characterization of the linear polarization signal of Rayleigh scattering. Previous work demonstrated the ability to measure the spectral absorption of aerosol particles using a reflectance-technique characterization of aerosol samples collected on Nuclepore filters. This first-generation methodology yielded absorption measurements from 350 nm to 2500 nm. Here we demonstrate the possibility of extending this wavelength range into the deep UV, to 200 nm. This extended UV region holds much promise for identifying and characterizing aerosol types and species. The second-generation, deep-UV procedure requires careful choice of filter substrates. Here the choice of substrates is explored and preliminary results are provided.

  11. Numerical time-domain electromagnetics based on finite-difference and convolution

    NASA Astrophysics Data System (ADS)

    Lin, Yuanqu

    Time-domain methods possess a number of advantages over their frequency-domain counterparts for the solution of wideband, nonlinear, and time-varying electromagnetic scattering and radiation phenomena. Time domain integral equation (TDIE)-based methods, which incorporate the beneficial properties of integral equation methods, are thus well suited for solving broadband scattering problems for homogeneous scatterers. Widespread adoption of TDIE solvers has been retarded relative to other techniques by their inefficiency, inaccuracy and instability. Moreover, two-dimensional (2D) problems are especially problematic, because 2D Green's functions have infinite temporal support, exacerbating these difficulties. This thesis proposes a finite difference delay modeling (FDDM) scheme for the solution of the integral equations of 2D transient electromagnetic scattering problems. The method discretizes the integral equations temporally using first- and second-order finite differences to map Laplace-domain equations into the Z domain before transforming to the discrete time domain. The resulting procedure is unconditionally stable because of the nature of the Laplace- to Z-domain mapping. The first FDDM method developed in this thesis uses second-order Lagrange basis functions with Galerkin's method for spatial discretization. The second application of the FDDM method discretizes the space using a locally-corrected Nystrom method, which accelerates the precomputation phase and achieves high order accuracy. The Fast Fourier Transform (FFT) is applied to accelerate the marching-on-time process in both methods. While FDDM methods demonstrate impressive accuracy and stability in solving wideband scattering problems for homogeneous scatterers, they still have limitations in analyzing interactions between several inhomogeneous scatterers. Therefore, this thesis devises a multi-region finite-difference time-domain (MR-FDTD) scheme based on domain-optimal Green's functions for solving sparsely-populated problems. The scheme uses a discrete Green's function (DGF) on the FDTD lattice to truncate the local subregions, and thus reduces reflection error on the local boundary. A continuous Green's function (CGF) is implemented to pass the influence of external fields into each FDTD region, which mitigates the numerical dispersion and anisotropy of standard FDTD. Numerical results will illustrate the accuracy and stability of the proposed techniques.

  12. Measurements of Nascent Soot Using a Cavity Attenuated Phase Shift (CAPS)-based Single Scattering Albedo Monitor

    NASA Astrophysics Data System (ADS)

    Freedman, A.; Onasch, T. B.; Renbaum-Wollf, L.; Lambe, A. T.; Davidovits, P.; Kebabian, P. L.

    2015-12-01

    Accurate, as compared to precise, measurement of aerosol absorption has always posed a significant problem for the particle radiative properties community. Filter-based instruments do not actually measure absorption but rather light transmission through the filter; absorption must be derived from this data using multiple corrections. The potential for matrix-induced effects is also great for organic-laden aerosols. The introduction of true in situ measurement instruments using photoacoustic or photothermal interferometric techniques represents a significant advance in the state-of-the-art. However, measurement artifacts caused by changes in humidity still represent a significant hurdle as does the lack of a good calibration standard at most measurement wavelengths. And, in the absence of any particle-based absorption standard, there is no way to demonstrate any real level of accuracy. We, along with others, have proposed that under the circumstance of low single scattering albedo (SSA), absorption is best determined by difference using measurement of total extinction and scattering. We discuss a robust, compact, field deployable instrument (the CAPS PMssa) that simultaneously measures airborne particle light extinction and scattering coefficients and thus the single scattering albedo (SSA) on the same sample volume. The extinction measurement is based on cavity attenuated phase shift (CAPS) techniques as employed in the CAPS PMex particle extinction monitor; scattering is measured using integrating nephelometry by incorporating a Lambertian integrating sphere within the sample cell. The scattering measurement is calibrated using the extinction measurement of non-absorbing particles. For small particles and low SSA, absorption can be measured with an accuracy of 6-8% at absorption levels as low as a few Mm-1. We present new results of the measurement of the mass absorption coefficient (MAC) of soot generated by an inverted methane diffusion flame at 630 nm. A value of 6.60 ±0.2 m2 g-1 was determined where the uncertainty refers to the precision of the measurement. The overall accuracy of the measurement, traceable to the properties of polystyrene latex particles, is estimated to be better than ±10%.
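
    The extinction-minus-scattering principle behind this instrument reduces to simple arithmetic. The sketch below is generic (names and unit conventions are illustrative, not the instrument's software): absorption by difference, the single scattering albedo, and the mass absorption coefficient in the units quoted above.

```python
import numpy as np

def absorption_by_difference(b_ext, b_sca):
    """Absorption coefficient (same units as inputs, e.g. 1/Mm) and single
    scattering albedo from combined extinction and scattering measurements."""
    b_ext = np.asarray(b_ext, dtype=float)
    b_sca = np.asarray(b_sca, dtype=float)
    b_abs = b_ext - b_sca                                  # absorption by difference
    ssa = np.divide(b_sca, b_ext, out=np.full_like(b_ext, np.nan),
                    where=b_ext > 0)                       # SSA = scattering/extinction
    return b_abs, ssa

def mass_absorption_coefficient(b_abs_per_Mm, mass_ug_per_m3):
    """MAC in m^2/g from absorption in 1/Mm and mass concentration in ug/m^3
    (the 1e-6 factors in the two units cancel)."""
    return b_abs_per_Mm / mass_ug_per_m3
```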

  13. Determination of concrete cover thickness in a reinforced concrete pillar by observation of the scattered electromagnetic field

    NASA Astrophysics Data System (ADS)

    Di Gregorio, Pietro Paolo; Frezza, Fabrizio; Mangini, Fabio; Pajewski, Lara

    2017-04-01

    The electromagnetic field scattered by a reinforced concrete structure is calculated by means of frequency-domain numerical simulations and by making use of the scattered-field formulation. The concrete pillar, used as a supporting architectural element, is modelled as a parallelepiped shell made of concrete inside which steel bars are present. In order to simplify the model, the steel bars are assumed to run parallel to the air-pillar interface. To excite the model, a linearly-polarized plane wave impinging normally on the pillar's surface is adopted. We consider two different polarizations in order to determine the most useful one in terms of scattered-field sensitivity. Moreover, a preliminary frequency sweep allows us to choose the most suitable operating frequency depending on the dimensions of the pillar cross-section, the steel bar cross-section and the concrete cover. All three components of the scattered field are monitored along a line just above the air-pillar interface. The electromagnetic properties of the materials employed in this study are taken from the literature and, since a frequency-domain technique is adopted, no further approximation is needed. The results obtained for different values of the concrete cover are compared, with the goal of determining the dependence of the scattered field on the concrete cover thickness. By considering different concrete cover thicknesses, we aim to provide an electromagnetic method to obtain this useful parameter by observation of the scattered electromagnetic field. One of the practical applications of this study in the field of Civil Engineering may be the use of ground penetrating radar (GPR) techniques to monitor the thickness of the concrete that separates the metal bars embedded in the pillar from the outer surface. A correct distance is useful because the concrete cover serves as a protection against external agents, avoiding corrosion of the bars that might prejudice the reinforced concrete; it also ensures an optimal transmission and distribution of the adhesion forces in the pillar. Acknowledgement: This work is a contribution to COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar" (www.GPRadar.eu, www.cost.eu).

  14. Atmospheric Correction Algorithm for Hyperspectral Imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. J. Pollina

    1999-09-01

    In December 1997, the US Department of Energy (DOE) established a Center of Excellence (Hyperspectral-Multispectral Algorithm Research Center, HyMARC) for promoting the research and development of algorithms to exploit spectral imagery. This center is located at the DOE Remote Sensing Laboratory in Las Vegas, Nevada, and is operated for the DOE by Bechtel Nevada. This paper presents the results to date of a research project begun at the center during 1998 to investigate the correction of hyperspectral data for atmospheric aerosols. Results of a project conducted by the Rochester Institute of Technology to define, implement, and test procedures for absolute calibration and correction of hyperspectral data to absolute units of high spectral resolution imagery will be presented. Hybrid techniques for atmospheric correction using image or spectral scene data coupled through radiative propagation models will be specifically addressed. Results of this effort to analyze HYDICE sensor data will be included. Preliminary results based on studying the performance of standard routines, such as Atmospheric Pre-corrected Differential Absorption and Nonlinear Least Squares Spectral Fit, in retrieving reflectance spectra show overall reflectance retrieval errors of approximately one to two reflectance units in the 0.4- to 2.5-micron-wavelength region (outside of the absorption features). These results are based on HYDICE sensor data collected from the Southern Great Plains Atmospheric Radiation Measurement site during overflights conducted in July of 1997. Results of an upgrade made in the model-based atmospheric correction techniques, which take advantage of updates made to the moderate resolution atmospheric transmittance model (MODTRAN 4.0) software, will also be presented. Data will be shown to demonstrate how the reflectance retrieval in the shorter wavelengths of the blue-green region will be improved because of enhanced modeling of multiple scattering effects.

  15. Evaluation of scatter limitation correction: a new method of correcting photopenic artifacts caused by patient motion during whole-body PET/CT imaging.

    PubMed

    Miwa, Kenta; Umeda, Takuro; Murata, Taisuke; Wagatsuma, Kei; Miyaji, Noriaki; Terauchi, Takashi; Koizumi, Mitsuru; Sasaki, Masayuki

    2016-02-01

    Overcorrection of scatter caused by patient motion during whole-body PET/computed tomography (CT) imaging can induce the appearance of photopenic artifacts in the PET images. The present study aimed to quantify the accuracy of scatter limitation correction (SLC) for eliminating photopenic artifacts. This study analyzed photopenic artifacts in (18)F-fluorodeoxyglucose ((18)F-FDG) PET/CT images acquired from 12 patients and from a National Electrical Manufacturers Association phantom with two peripheral plastic bottles that simulated the human body and arms, respectively. The phantom comprised a sphere (diameter, 10 or 37 mm) containing fluorine-18 solutions with target-to-background ratios of 2, 4, and 8. The plastic bottles were moved 10 cm posteriorly between CT and PET acquisitions. All PET data were reconstructed using model-based scatter correction (SC), no scatter correction (NSC), and SLC, and the presence or absence of artifacts on the PET images was visually evaluated. The SC and SLC images were also semiquantitatively evaluated using standardized uptake values (SUVs). Photopenic artifacts were not recognizable in any NSC or SLC image from any of the 12 patients in the clinical study. The SUVmax values of mismatched SLC PET/CT images were almost equal to those of matched SC and SLC PET/CT images. Applying NSC and SLC substantially eliminated the photopenic artifacts on SC PET images in the phantom study. SLC improved the activity concentration of the sphere for all target-to-background ratios. The highest %errors of the 10- and 37-mm spheres were 93.3 and 58.3%, respectively, for mismatched SC, and 73.2 and 22.0%, respectively, for mismatched SLC. Photopenic artifacts caused by SC error induced by CT and PET image misalignment were corrected using SLC, indicating that this method is useful and practical for clinical qualitative and quantitative PET/CT assessment.

  16. Toward the understanding of hydration phenomena in aqueous electrolytes from the interplay of theory, molecular simulation, and experiment

    DOE PAGES

    Chialvo, Ariel A.; Vlcek, Lukas

    2015-05-22

    We confront the microstructural analysis of aqueous electrolytes and present a detailed account of the fundamentals underlying the neutron scattering with isotopic substitution (NDIS) approach for the experimental determination of ion coordination numbers in systems involving both halide anions and oxyanions. We place particular emphasis on the frequently overlooked ion-pairing phenomenon, identify its microstructural signature in the neutron-weighted distribution functions, and suggest novel techniques to deal with either the estimation of the ion-pairing magnitude or the correction of its effects on the experimentally measured coordination numbers. We illustrate the underlying ideas by applying these new developments to the interpretation of four NDIS test-cases via molecular simulation, as convenient dry runs for the actual scattering experiments, for representative aqueous electrolyte solutions at ambient conditions involving metal halides and nitrates.

  17. Application of the weighted total field-scattering field technique to 3D-PSTD light scattering model

    NASA Astrophysics Data System (ADS)

    Hu, Shuai; Gao, Taichang; Liu, Lei; Li, Hao; Chen, Ming; Yang, Bo

    2018-04-01

    PSTD (Pseudo Spectral Time Domain) is an excellent model for the light scattering simulation of nonspherical aerosol particles. However, due to the particularity of its discretization of the Maxwell equations, the traditional Total Field/Scattering Field (TF/SF) technique for FDTD (Finite-Difference Time Domain) is not applicable to PSTD, and the time-consuming pure scattering field technique is mainly applied to introduce the incident wave. To this end, the weighted TF/SF technique proposed by X. Gao is generalized and applied to the 3D-PSTD scattering model. Using this technique, the incident light can be effectively introduced by modifying the electromagnetic components in an inserted connecting region between the total field and the scattering field regions with incident terms, where the incident terms are obtained by weighting the incident field with a window function. To optimally determine the thickness of the connection region and the window function type for PSTD calculations, their influence on the modeling accuracy is first analyzed. To further verify the effectiveness and advantages of the weighted TF/SF technique, the improved PSTD model is validated against the PSTD model equipped with the pure scattering field technique in both calculation accuracy and efficiency. The results show that the performance of PSTD is not sensitive to the choice of window function. The number of connection layers required decreases with increasing spatial resolution; for a spatial resolution of 24 grids per wavelength, a 6-layer region is thick enough. The scattering phase matrices and integral scattering parameters obtained by the improved PSTD show excellent consistency with those of well-tested models for spherical and nonspherical particles, illustrating that the weighted TF/SF technique can introduce the incident wave precisely. The weighted TF/SF technique also shows higher computational efficiency than the pure scattering field technique.

  18. Corrections for the geometric distortion of the tube detectors on SANS instruments at ORNL

    DOE PAGES

    He, Lilin; Do, Changwoo; Qian, Shuo; ...

    2014-11-25

    Small-angle neutron scattering instruments at the Oak Ridge National Laboratory's High Flux Isotope Reactor were upgraded from the large, single-volume crossed-wire area detectors originally installed to staggered arrays of linear position-sensitive detectors (LPSDs). The specific geometry of the LPSD array requires that traditional approaches to data reduction be modified. Here, two methods for correcting the geometric distortion produced by the LPSD array are presented and compared. The first method applies a correction derived from a detector sensitivity measurement performed using the same configuration in which the samples are measured. In the second method, a solid angle correction is derived that can be applied to data collected in any instrument configuration during the data reduction process, in conjunction with a detector sensitivity measurement collected at a sufficiently long camera length where the geometric distortions are negligible. Furthermore, both methods produce consistent results and yield a maximum deviation of corrected data from isotropic scattering samples of less than 5% for scattering angles up to a maximum of 35°. The results are broadly applicable to any SANS instrument employing LPSD array detectors, which will become increasingly common as instruments with higher incident flux are constructed at neutron scattering facilities around the world.

  19. Semiclassical Virasoro blocks from AdS 3 gravity

    DOE PAGES

    Hijano, Eliot; Kraus, Per; Perlmutter, Eric; ...

    2015-12-14

    We present a unified framework for the holographic computation of Virasoro conformal blocks at large central charge. In particular, we provide bulk constructions that correctly reproduce all semiclassical Virasoro blocks that are known explicitly from conformal field theory computations. The results revolve around the use of geodesic Witten diagrams, recently introduced in [1], evaluated in locally AdS 3 geometries generated by backreaction of heavy operators. We also provide an alternative computation of the heavy-light semiclassical block — in which two external operators become parametrically heavy — as a certain scattering process involving higher spin gauge fields in AdS 3; this approach highlights the chiral nature of Virasoro blocks. Finally, these techniques may be systematically extended to compute corrections to these blocks and to interpolate amongst the different semiclassical regimes.

  20. Brillouin micro-spectroscopy through aberrations via sensorless adaptive optics

    NASA Astrophysics Data System (ADS)

    Edrei, Eitan; Scarcelli, Giuliano

    2018-04-01

    Brillouin spectroscopy is a powerful optical technique for non-contact viscoelastic characterization which has recently found applications in three-dimensional mapping of biological samples. Brillouin spectroscopy performance is rapidly degraded by optical aberrations and has therefore been limited to homogeneous transparent samples. In this work, we developed an adaptive optics (AO) configuration designed for Brillouin scattering spectroscopy to engineer the incident wavefront and correct for aberrations. Our configuration does not require direct wavefront sensing or the injection of a "guide-star"; hence, it can be implemented without the need for sample pre-treatment. We used our AO-Brillouin spectrometer in aberrated phantoms and biological samples and obtained improved precision and resolution of Brillouin spectral analysis; we demonstrated a 2.5-fold enhancement in Brillouin signal strength and a 1.4-fold improvement in axial resolution owing to the correction of optical aberrations.

  1. Transient thermal and nonthermal electron and phonon relaxation after short-pulsed laser heating of metals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giri, Ashutosh; Hopkins, Patrick E., E-mail: phopkins@virginia.edu

    2015-12-07

    Several dynamic thermal and nonthermal scattering processes affect ultrafast heat transfer in metals after short-pulsed laser heating. Even with decades of measurements of electron-phonon relaxation, the role of thermal vs. nonthermal electron and phonon scattering on overall electron energy transfer to the phonons remains unclear. In this work, we derive an analytical expression for the electron-phonon coupling factor in a metal that includes contributions from equilibrium and nonequilibrium distributions of electrons. While the contribution from the nonthermal electrons to electron-phonon coupling is non-negligible, the increase in the electron relaxation rates with increasing laser fluence measured by thermoreflectance techniques cannot be accounted for by only considering electron-phonon relaxations. We conclude that electron-electron scattering along with electron-phonon scattering have to be considered simultaneously to correctly predict the transient nature of electron relaxation during and after short-pulsed heating of metals at elevated electron temperatures. Furthermore, for high electron temperature perturbations achieved at high absorbed laser fluences, we show good agreement between our model, which accounts for d-band excitations, and previous experimental data. Our model can be extended to other free electron metals with the knowledge of the density of states of electrons in the metals and considering electronic excitations from non-Fermi surface states.

  2. DREAM: An Efficient Methodology for DSMC Simulation of Unsteady Processes

    NASA Astrophysics Data System (ADS)

    Cave, H. M.; Jermy, M. C.; Tseng, K. C.; Wu, J. S.

    2008-12-01

    A technique called the DSMC Rapid Ensemble Averaging Method (DREAM) for reducing the statistical scatter in the output from unsteady DSMC simulations is introduced. During post-processing by DREAM, the DSMC algorithm is re-run multiple times over a short period before the temporal point of interest thus building up a combination of time- and ensemble-averaged sampling data. The particle data is regenerated several mean collision times before the output time using the particle data generated during the original DSMC run. This methodology conserves the original phase space data from the DSMC run and so is suitable for reducing the statistical scatter in highly non-equilibrium flows. In this paper, the DREAM-II method is investigated and verified in detail. Propagating shock waves at high Mach numbers (Mach 8 and 12) are simulated using a parallel DSMC code (PDSC) and then post-processed using DREAM. The ability of DREAM to obtain the correct particle velocity distribution in the shock structure is demonstrated and the reduction of statistical scatter in the output macroscopic properties is measured. DREAM is also used to reduce the statistical scatter in the results from the interaction of a Mach 4 shock with a square cavity and for the interaction of a Mach 12 shock on a wedge in a channel.

  3. Acquisition setting optimization and quantitative imaging for 124I studies with the Inveon microPET-CT system.

    PubMed

    Anizan, Nadège; Carlier, Thomas; Hindorf, Cecilia; Barbet, Jacques; Bardiès, Manuel

    2012-02-13

    Noninvasive multimodality imaging is essential for preclinical evaluation of the biodistribution and pharmacokinetics of radionuclide therapy and for monitoring tumor response. Imaging with nonstandard positron-emission tomography [PET] isotopes such as 124I is promising in that context but requires accurate activity quantification. The decay scheme of 124I implies an optimization of both acquisition settings and correction processing. The PET scanner investigated in this study was the Inveon PET/CT system dedicated to small animal imaging. The noise equivalent count rate [NECR], the scatter fraction [SF], and the gamma-prompt fraction [GF] were used to determine the best acquisition parameters for mouse- and rat-sized phantoms filled with 124I. An image-quality phantom as specified by the National Electrical Manufacturers Association NU 4-2008 protocol was acquired and reconstructed with two-dimensional filtered back projection, 2D ordered-subset expectation maximization [2DOSEM], and 3DOSEM with maximum a posteriori [3DOSEM/MAP] algorithms, with and without attenuation correction, scatter correction, and gamma-prompt correction (weighted uniform distribution subtraction). Optimal energy windows were established for the rat phantom (390 to 550 keV) and the mouse phantom (400 to 590 keV) by combining the NECR, SF, and GF results. The coincidence time window had no significant impact regarding the NECR curve variation. Activity concentration of 124I measured in the uniform region of an image-quality phantom was underestimated by 9.9% for the 3DOSEM/MAP algorithm with attenuation and scatter corrections, and by 23% with the gamma-prompt correction. Attenuation, scatter, and gamma-prompt corrections decreased the residual signal in the cold insert. The optimal energy windows were chosen with the NECR, SF, and GF evaluation. Nevertheless, an image quality and an activity quantification assessment were required to establish the most suitable reconstruction algorithm and corrections for 124I small animal imaging.

  4. Improved determination of particulate absorption from combined filter pad and PSICAM measurements.

    PubMed

    Lefering, Ina; Röttgers, Rüdiger; Weeks, Rebecca; Connor, Derek; Utschig, Christian; Heymann, Kerstin; McKee, David

    2016-10-31

    Filter pad light absorption measurements are subject to two major sources of experimental uncertainty: the so-called pathlength amplification factor, β, and scattering offsets, o, for which previous null-correction approaches are limited by recent observations of non-zero absorption in the near infrared (NIR). A new filter pad absorption correction method is presented here which uses linear regression against point-source integrating cavity absorption meter (PSICAM) absorption data to simultaneously resolve both β and the scattering offset. The PSICAM has previously been shown to provide accurate absorption data, even in highly scattering waters. Comparisons of PSICAM and filter pad particulate absorption data reveal linear relationships that vary on a sample by sample basis. This regression approach provides significantly improved agreement with PSICAM data (3.2% RMS%E) than previously published filter pad absorption corrections. Results show that direct transmittance (T-method) filter pad absorption measurements perform effectively at the same level as more complex geometrical configurations based on integrating cavity measurements (IS-method and QFT-ICAM) because the linear regression correction compensates for the sensitivity to scattering errors in the T-method. This approach produces accurate filter pad particulate absorption data for wavelengths in the blue/UV and in the NIR where sensitivity issues with PSICAM measurements limit performance. The combination of the filter pad absorption and PSICAM is therefore recommended for generating full spectral, best quality particulate absorption data as it enables correction of multiple errors sources across both measurements.
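
    The regression correction described here amounts to fitting a straight line of filter pad absorption against PSICAM absorption across wavelength. A minimal sketch, assuming the linear model a_filterpad = β·a_psicam + o and illustrative array names:

```python
import numpy as np

def filterpad_correction(a_filterpad, a_psicam):
    """Estimate the pathlength amplification factor (beta) and scattering
    offset (o) by regressing filter pad absorption against PSICAM absorption
    over wavelength, then correct the filter pad spectrum."""
    beta, offset = np.polyfit(a_psicam, a_filterpad, 1)   # slope, intercept
    a_corrected = (a_filterpad - offset) / beta
    return a_corrected, beta, offset
```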

  5. Coastal Zone Color Scanner atmospheric correction algorithm - Multiple scattering effects

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.; Castano, Diego J.

    1987-01-01

    Errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm are analyzed. The analysis is based on radiative transfer computations in model atmospheres, in which the aerosols and molecules are distributed vertically in an exponential manner, with most of the aerosol scattering located below the molecular scattering. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, making it possible to determine the errors along typical CZCS scan lines. Information provided by the analysis makes it possible to judge the efficacy of the current algorithm with the current sensor and to estimate the impact of the algorithm-induced errors on a variety of applications.

  6. Robust incremental compensation of the light attenuation with depth in 3D fluorescence microscopy.

    PubMed

    Kervrann, C; Legland, D; Pardini, L

    2004-06-01

    Fluorescent signal intensities from confocal laser scanning microscopes (CLSM) suffer from several distortions inherent to the method. Namely, layers which lie deeper within the specimen are relatively dark due to absorption and scattering of both excitation and fluorescent light, photobleaching and/or other factors. Because of these effects, a quantitative analysis of images is not always possible without correction. Under certain assumptions, the decay of intensities can be estimated and used for a partial depth intensity correction. In this paper we propose an original robust incremental method for compensating the attenuation of intensity signals. Most previous correction methods are more or less empirical and based on fitting a decreasing parametric function to the section mean intensity curve computed by summing all pixel values in each section. The fitted curve is then used for the calculation of correction factors for each section and a new compensated sections series is computed. However, these methods do not perfectly correct the images. Hence, the algorithm we propose for the automatic correction of intensities relies on robust estimation, which automatically ignores pixels where measurements deviate from the decay model. It is based on techniques adopted from the computer vision literature for image motion estimation. The resulting algorithm is used to correct volumes acquired in CLSM. An implementation of such a restoration filter is discussed and examples of successful restorations are given.
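
    A compensation of this kind can be sketched as follows. The exponential decay model and the Tukey biweight reweighting used here are convenient stand-ins for the robust estimator of the paper, not its exact formulation; the stack layout is assumed to be (z, y, x).

```python
import numpy as np

def depth_attenuation_correction(stack, n_iter=10, c=4.685):
    """Robustly fit I(z) = I0*exp(-k*z) to per-slice mean intensities using
    iteratively reweighted least squares, then rescale each slice by the
    fitted decay so deeper slices are brightened."""
    z = np.arange(stack.shape[0], dtype=float)
    y = np.log(stack.reshape(stack.shape[0], -1).mean(axis=1) + 1e-12)
    A = np.vstack([np.ones_like(z), -z]).T              # model: y = log(I0) - k*z
    w = np.ones_like(z)
    for _ in range(n_iter):
        coef, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
        resid = y - A @ coef
        s = np.median(np.abs(resid)) / 0.6745 + 1e-12   # robust scale estimate
        u = np.clip(resid / (c * s), -1.0, 1.0)
        w = (1.0 - u**2) ** 2                           # Tukey biweight weights
    log_i0, k = coef
    decay = np.exp(-k * z)
    return stack / decay[:, None, None]
```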

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scolnic, D.; Kessler, R., E-mail: dscolnic@kicp.uchicago.edu, E-mail: kessler@kicp.uchicago.edu

    Simulations of Type Ia supernovae (SNe Ia) surveys are a critical tool for correcting biases in the analysis of SNe Ia to infer cosmological parameters. Large-scale Monte Carlo simulations include a thorough treatment of observation history, measurement noise, intrinsic scatter models, and selection effects. In this Letter, we improve simulations with a robust technique to evaluate the underlying populations of SN Ia color and stretch that correlate with luminosity. In typical analyses, the standardized SN Ia brightness is determined from linear "Tripp" relations between the light curve color and luminosity and between stretch and luminosity. However, this solution produces Hubble residual biases because intrinsic scatter and measurement noise result in measured color and stretch values that do not follow the Tripp relation. We find a 10σ bias (up to 0.3 mag) in Hubble residuals versus color and a 5σ bias (up to 0.2 mag) in Hubble residuals versus stretch in a joint sample of 920 spectroscopically confirmed SN Ia from PS1, SNLS, SDSS, and several low-z surveys. After we determine the underlying color and stretch distributions, we use simulations to predict and correct the biases in the data. We show that removing these biases has a small impact on the low-z sample, but reduces the intrinsic scatter σ_int from 0.101 to 0.083 in the combined PS1, SNLS, and SDSS sample. Past estimates of the underlying populations were too broad, leading to a small bias in the equation of state of dark energy w of Δw = 0.005.

  8. Hadron diffractive production at ultrahigh energies and shadow effects

    NASA Astrophysics Data System (ADS)

    Anisovich, V. V.; Matveev, M. A.; Nikonov, V. A.

    2016-10-01

    Shadow effects in collisions of hadrons with light nuclei at high energies were a subject of scientific interest for V.N. Gribov; we mean, first, his study of hadron-deuteron scattering, see Sov. Phys. JETP 29, 483 (1969) [Zh. Eksp. Teor. Fiz. 56, 892 (1969)], and the discovery of the reinforcement of shadowing due to inelastic diffractive rescatterings. It turns out that a similar effect exists at the hadron level, though at ultrahigh energies. Diffractive production is considered in the ultrahigh energy region where pomeron exchange amplitudes are transformed into black disk ones due to rescattering corrections. The corresponding corrections in hadron reactions h1 + h3 → h1 + h2 + h3 with small momenta transferred (q^2_{1→1} ~ m^2/ln^2 s, q^2_{3→3} ~ m^2/ln^2 s) are calculated in terms of the K-matrix technique modified for ultrahigh energies. Small values of the momenta transferred are crucial for introducing equations for amplitudes. The three-body equation for the hadron diffractive production reaction h1 + h3 → h1 + h2 + h3 is written and solved precisely in the eikonal approach. In the black disk regime, final state scattering processes essentially do not change the shapes of the amplitudes but damp them by a factor of ~1/4; initial state rescatterings result in an additional factor of ~1/2. In the resonant disk regime, initial and final state scatterings strongly damp the production amplitude, which corresponds to σ_inel/σ_tot → 0 as s → ∞ in this mode.

  9. Hadron Diffractive Production at Ultrahigh Energies and Shadow Effects

    NASA Astrophysics Data System (ADS)

    Anisovich, V. V.; Matveev, M. A.; Nikonov, V. A.

    Shadow effects in collisions of hadrons with light nuclei at high energies were a subject of scientific interest for V.N. Gribov; we mean, first, his study of hadron-deuteron scattering, see Sov. Phys. JETP 29, 483 (1969) [Zh. Eksp. Teor. Fiz. 56, 892 (1969)], and the discovery of the reinforcement of shadowing due to inelastic diffractive rescatterings. It turns out that a similar effect exists at the hadron level, though at ultrahigh energies... Diffractive production is considered in the ultrahigh energy region where pomeron exchange amplitudes are transformed into black disk ones due to rescattering corrections. The corresponding corrections in hadron reactions h1 + h3 → h1 + h2 + h3 with small momenta transferred (q^2_{1→1} ~ m^2/ln^2 s, q^2_{3→3} ~ m^2/ln^2 s) are calculated in terms of the K-matrix technique modified for ultrahigh energies. Small values of the momenta transferred are crucial for introducing equations for amplitudes. The three-body equation for the hadron diffractive production reaction h1 + h3 → h1 + h2 + h3 is written and solved precisely in the eikonal approach. In the black disk regime, final state scattering processes essentially do not change the shapes of the amplitudes but damp them by a factor of ~1/4; initial state rescatterings result in an additional factor of ~1/2. In the resonant disk regime, initial and final state scatterings strongly damp the production amplitude, which corresponds to σ_inel/σ_tot → 0 as √s → ∞ in this mode.

  10. Scattering of Femtosecond Laser Pulses on the Negative Hydrogen Ion

    NASA Astrophysics Data System (ADS)

    Astapenko, V. A.; Moroz, N. N.

    2018-05-01

    Elastic scattering of ultrashort laser pulses (USLPs) on the negative hydrogen ion is considered. Results of calculations of the USLP scattering probability are presented and analyzed for pulses of two types: the corrected Gaussian pulse and wavelet pulse without carrier frequency depending on the problem parameters.

  11. Interleaved segment correction achieves higher improvement factors in using genetic algorithm to optimize light focusing through scattering media

    NASA Astrophysics Data System (ADS)

    Li, Runze; Peng, Tong; Liang, Yansheng; Yang, Yanlong; Yao, Baoli; Yu, Xianghua; Min, Junwei; Lei, Ming; Yan, Shaohui; Zhang, Chunmin; Ye, Tong

    2017-10-01

    Focusing and imaging through scattering media have been proved possible with high-resolution wavefront shaping. A completely scrambled scattering field can be corrected by applying a correction phase mask on a phase-only spatial light modulator (SLM), and thereby the focusing quality can be improved. The correction phase is often found by global searching algorithms, among which the Genetic Algorithm (GA) stands out for its parallel optimization process and high performance in noisy environments. However, the convergence of GA slows down gradually with the progression of optimization, causing the improvement factor of optimization to reach a plateau eventually. In this report, we propose an interleaved segment correction (ISC) method that can significantly boost the improvement factor with the same number of iterations compared with the conventional all-segment correction method. In the ISC method, all the phase segments are divided into a number of interleaved groups; GA optimization procedures are performed individually and sequentially on each group of segments. The final correction phase mask is formed by applying the correction phases of all interleaved groups together on the SLM. The ISC method has proved significantly useful in practice because of its ability to achieve better improvement factors when noise is present in the system. We have also demonstrated that the imaging quality is improved as better correction phases are found and applied on the SLM. Additionally, the ISC method lowers the demand on the dynamic range of detection devices. The proposed method holds potential in applications such as high-resolution imaging in deep tissue.
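
    The interleaving logic itself is easy to express. In the sketch below, measure_focus (the experimental feedback signal) and genetic_optimize (the GA engine) are hypothetical placeholders; only the division of segments into interleaved groups and their sequential optimization follow the description above.

```python
import numpy as np

def interleaved_segment_correction(n_segments, n_groups, measure_focus,
                                   genetic_optimize):
    """Optimize an SLM phase mask group by group: segment i belongs to
    group i % n_groups, and each group is optimized by a GA while the
    remaining segments are held fixed."""
    phase = np.zeros(n_segments)                     # start from a flat mask
    groups = [np.arange(n_segments)[i::n_groups] for i in range(n_groups)]
    for idx in groups:
        def cost(sub_phase, idx=idx):
            trial = phase.copy()
            trial[idx] = sub_phase
            return -measure_focus(trial)             # GA minimizes -intensity
        phase[idx] = genetic_optimize(cost, n_vars=len(idx),
                                      bounds=(0.0, 2.0 * np.pi))
    return phase                                     # combined correction mask
```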

  12. Charm-Quark Production in Deep-Inelastic Neutrino Scattering at Next-to-Next-to-Leading Order in QCD.

    PubMed

    Berger, Edmond L; Gao, Jun; Li, Chong Sheng; Liu, Ze Long; Zhu, Hua Xing

    2016-05-27

    We present a fully differential next-to-next-to-leading order calculation of charm-quark production in charged-current deep-inelastic scattering, with full charm-quark mass dependence. The next-to-next-to-leading order corrections in perturbative quantum chromodynamics are found to be comparable in size to the next-to-leading order corrections in certain kinematic regions. We compare our predictions with data on dimuon production in (anti)neutrino scattering from a heavy nucleus. Our results can be used to improve the extraction of the parton distribution function of a strange quark in the nucleon.

  13. Optimizing the models for rapid determination of chlorogenic acid, scopoletin and rutin in plant samples by near-infrared diffuse reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Mao, Zhiyi; Shan, Ruifeng; Wang, Jiajun; Cai, Wensheng; Shao, Xueguang

    2014-07-01

    Polyphenols in plant samples have been extensively studied because phenolic compounds are ubiquitous in plants and can be used as antioxidants in promoting human health. A method for rapid determination of three phenolic compounds (chlorogenic acid, scopoletin and rutin) in plant samples using near-infrared diffuse reflectance spectroscopy (NIRDRS) is studied in this work. Partial least squares (PLS) regression was used for building the calibration models, and the effects of spectral preprocessing and variable selection on the models are investigated for optimization of the models. The results show that individual spectral preprocessing or variable selection alone has little influence on the models, but the combination of the techniques can significantly improve them. The combination of continuous wavelet transform (CWT) for removing the variant background, multiplicative scatter correction (MSC) for correcting the scattering effect, and randomization test (RT) for selecting the informative variables was found to be the best way of building the optimal models. For validation of the models, the polyphenol contents in an independent sample set were predicted. The correlation coefficients between the predicted values and the contents determined by high performance liquid chromatography (HPLC) analysis are as high as 0.964, 0.948 and 0.934 for chlorogenic acid, scopoletin and rutin, respectively.
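
    Of the preprocessing steps named above, multiplicative scatter correction (MSC) is the one most directly aimed at scattering effects; a minimal, generic implementation (not the authors' code) is sketched below.

```python
import numpy as np

def multiplicative_scatter_correction(spectra, reference=None):
    """Regress each spectrum against a reference (mean spectrum by default)
    and remove the fitted offset and slope, suppressing additive and
    multiplicative scattering effects.  spectra: (n_samples, n_wavelengths)."""
    X = np.asarray(spectra, dtype=float)
    ref = X.mean(axis=0) if reference is None else np.asarray(reference, float)
    corrected = np.empty_like(X)
    for i, x in enumerate(X):
        slope, intercept = np.polyfit(ref, x, 1)
        corrected[i] = (x - intercept) / slope
    return corrected
```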

  14. SU-F-T-142: An Analytical Model to Correct the Aperture Scattered Dose in Clinical Proton Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, B; Liu, S; Zhang, T

    2016-06-15

    Purpose: Apertures or collimators are used to laterally shape proton beams in double scattering (DS) delivery and to sharpen the penumbra in pencil beam (PB) delivery. However, aperture-scattered dose is not included in the current dose calculations of treatment planning systems (TPS). The purpose of this study is to provide a method to correct the aperture-scattered dose based on an analytical model. Methods: A DS beam with a non-divergent aperture was delivered using a single-room proton machine. Dose profiles were measured with an ion chamber scanning in water and a 2-D ion chamber matrix with solid-water buildup at various depths. The measured doses were considered as the sum of the non-contaminated dose and the aperture-scattered dose. The non-contaminated dose was calculated by the TPS and subtracted from the measured dose. The aperture-scattered dose was modeled as a 1D Gaussian distribution. For 2-D fields, to calculate the scattered dose from all the edges of the aperture, a sum of weighted distances was used in the model, based on the distance from the calculation point to the aperture edge. The gamma index was calculated between the measured and calculated dose with and without scatter correction. Results: For a beam with a range of 23 cm and an aperture size of 20 cm, the contribution of the scatter horn was ∼8% of the total dose at 4 cm depth and diminished to 0 at 15 cm depth. The amplitude of the scattered dose decreased linearly with increasing depth. The 1D gamma index (2%/2 mm) between the calculated and measured profiles increased from 63% to 98% at 4 cm depth and from 83% to 98% at 13 cm depth. The 2D gamma index (2%/2 mm) at 4 cm depth improved from 78% to 94%. Conclusion: Using this simple analytical method, the discrepancy between the measured and calculated dose was significantly reduced.
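
    A sketch of the 1D Gaussian edge-scatter term is given below. The amplitude, width and depth dependence are illustrative values loosely following the numbers quoted in the abstract (roughly 8% of the total dose at shallow depth, vanishing by about 15 cm); they are not the fitted model parameters, and the 2D weighted-distance summation is omitted.

```python
import numpy as np

def aperture_scatter_1d(x_cm, edge_positions_cm, depth_cm,
                        amp0=0.08, depth_max_cm=15.0, sigma_cm=1.0):
    """Scatter-horn dose (as a fraction of the open-field dose) modeled as
    a Gaussian centred on each aperture edge, with an amplitude that falls
    linearly with depth and vanishes at depth_max_cm."""
    x = np.asarray(x_cm, dtype=float)
    amp = amp0 * max(0.0, 1.0 - depth_cm / depth_max_cm)
    scatter = np.zeros_like(x)
    for edge in edge_positions_cm:
        scatter += amp * np.exp(-0.5 * ((x - edge) / sigma_cm) ** 2)
    return scatter   # added to the scatter-free profile calculated by the TPS
```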

  15. Analysis of dense-medium light scattering with applications to corneal tissue: experiments and Monte Carlo simulations.

    PubMed

    Kim, K B; Shanyfelt, L M; Hahn, D W

    2006-01-01

    Dense-medium scattering is explored in the context of providing a quantitative measurement of turbidity, with specific application to corneal haze. A multiple-wavelength scattering technique is proposed to make use of two-color scattering response ratios, thereby providing a means for data normalization. A combination of measurements and simulations are reported to assess this technique, including light-scattering experiments for a range of polystyrene suspensions. Monte Carlo (MC) simulations were performed using a multiple-scattering algorithm based on full Mie scattering theory. The simulations were in excellent agreement with the polystyrene suspension experiments, thereby validating the MC model. The MC model was then used to simulate multiwavelength scattering in a corneal tissue model. Overall, the proposed multiwavelength scattering technique appears to be a feasible approach to quantify dense-medium scattering such as the manifestation of corneal haze, although more complex modeling of keratocyte scattering, and animal studies, are necessary.

  16. Correction of nonuniform attenuation and image fusion in SPECT imaging by means of separate X-ray CT.

    PubMed

    Kashiwagi, Toru; Yutani, Kenji; Fukuchi, Minoru; Naruse, Hitoshi; Iwasaki, Tadaaki; Yokozuka, Koichi; Inoue, Shinichi; Kondo, Shoji

    2002-06-01

    Improvements in image quality and quantitative measurement, and the addition of detailed anatomical structures, are important topics for single-photon emission computed tomography (SPECT). The goal of this study was to develop a practical system enabling both nonuniform attenuation correction and image fusion of SPECT images by means of high-performance X-ray computed tomography (CT). A SPECT system and a helical X-ray CT system were placed next to each other and linked with Ethernet. To avoid positional differences between the SPECT and X-ray CT studies, identical flat patient tables were used for both scans; body distortion was minimized with laser beams from the upper and lateral directions to detect the position of the skin surface. For the raw projection data of SPECT, a scatter correction was performed with the triple energy window method. Image fusion of the X-ray CT and SPECT images was performed automatically by auto-registration of fiducial markers attached to the skin surface. After registration of the X-ray CT and SPECT images, an X-ray CT-derived attenuation map was created with the calibration curve for 99mTc. The SPECT images were then reconstructed with scatter and attenuation correction by means of a maximum likelihood expectation maximization algorithm. This system was evaluated in torso and cylindrical phantoms and in 4 patients referred for myocardial SPECT imaging with Tc-99m tetrofosmin. In the torso phantom study, the SPECT and X-ray CT images overlapped exactly on the computer display. After scatter and attenuation correction, the artifactual activity reduction in the inferior wall of the myocardium improved. Conversely, the increased activity around the torso surface and the lungs was reduced. In the abdomen, the liver activity, which was originally uniform, had recovered after scatter and attenuation correction processing. The clinical study also showed good overlapping of cardiac and skin surface outlines on the fused SPECT and X-ray CT images. The effectiveness of the scatter and attenuation correction process was similar to that observed in the phantom study. Because the total time required for computer processing was less than 10 minutes, this method of attenuation correction and image fusion for SPECT images is expected to become popular in clinical practice.
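    For reference, a minimal sketch of the triple-energy-window (TEW) scatter estimate mentioned above is given below. The window widths in the usage comment are illustrative values for a 99mTc photopeak and are not taken from the paper.

```python
# Sketch of the triple-energy-window (TEW) scatter estimate for a photopeak
# projection: S = (L/w_l + U/w_u) * w_m / 2, subtracted pixel by pixel.
# Window widths in the example are illustrative, not the paper's settings.
import numpy as np

def tew_correct(main, lower, upper, w_main, w_lower, w_upper):
    scatter = (lower / w_lower + upper / w_upper) * w_main / 2.0
    return np.clip(main - scatter, 0.0, None)   # clip to avoid negative counts

# Example for a 99mTc study (140 keV photopeak, 20% main window):
# corrected = tew_correct(main_img, low_img, high_img,
#                         w_main=28.0, w_lower=3.5, w_upper=3.5)
```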

  17. Advanced Statistics for Exotic Animal Practitioners.

    PubMed

    Hodsoll, John; Hellier, Jennifer M; Ryan, Elizabeth G

    2017-09-01

    Correlation and regression assess the association between 2 or more variables. This article reviews the core knowledge needed to understand these analyses, moving from visual analysis in scatter plots through correlation, simple and multiple linear regression, and logistic regression. Correlation estimates the strength and direction of a relationship between 2 variables. Regression can be considered more general and quantifies the numerical relationships between an outcome and 1 or multiple variables in terms of a best-fit line, allowing predictions to be made. Each technique is discussed with examples and the statistical assumptions underlying their correct application. Copyright © 2017 Elsevier Inc. All rights reserved.
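    A minimal worked example of the analyses reviewed above (correlation, simple linear regression, and logistic regression) is sketched below on synthetic data; the variables and effect sizes are invented purely for illustration.

```python
# Synthetic illustration of correlation, simple linear regression and
# logistic regression; the data and effect sizes are invented.
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
x = rng.normal(50.0, 10.0, 200)                  # predictor (e.g. body mass)
y = 2.0 + 0.8 * x + rng.normal(0.0, 5.0, 200)    # continuous outcome

r, p = stats.pearsonr(x, y)                      # strength and direction of association
fit = stats.linregress(x, y)                     # best-fit line y = a + b*x
print(f"r = {r:.2f}, y = {fit.intercept:.1f} + {fit.slope:.2f} * x")

binary = (y > np.median(y)).astype(int)          # dichotomised outcome
logit = LogisticRegression().fit(x.reshape(-1, 1), binary)
print("odds ratio per unit of x:", float(np.exp(logit.coef_[0][0])))
```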

  18. A simple method for correcting spatially resolved solar intensity oscillation observations for variations in scattered light

    NASA Technical Reports Server (NTRS)

    Jefferies, S. M.; Duvall, T. L., Jr.

    1991-01-01

    A measurement of the intensity distribution in an image of the solar disk will be corrupted by a spatial redistribution of the light that is caused by the earth's atmosphere and the observing instrument. A simple correction method is introduced here that is applicable for solar p-mode intensity observations obtained over a period of time in which there is a significant change in the scattering component of the point spread function. The method circumvents the problems incurred with an accurate determination of the spatial point spread function and its subsequent deconvolution from the observations. The method only corrects the spherical harmonic coefficients that represent the spatial frequencies present in the image and does not correct the image itself.

  19. Alterations to the relativistic Love-Franey model and their application to inelastic scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeile, J.R.

    The fictitious axial-vector and tensor mesons for the real part of the relativistic Love-Franey interaction are removed. In an attempt to make up for this loss, derivative couplings are used for the π and ρ mesons. Such derivative couplings require the introduction of axial-vector and tensor contact term corrections. Meson parameters are then fit to free nucleon-nucleon scattering data. The resulting fits are comparable to those of the relativistic Love-Franey model provided that the contact term corrections are included and the fits are weighted over the physically significant quantity of twice the tensor minus the axial-vector Lorentz invariants. Failure to include contact term corrections leads to poor fits at higher energies. The off-shell behavior of this model is then examined by looking at several applications from inelastic proton-nucleus scattering.

  20. How to improve x-ray scattering techniques to quantify bone mineral density using spectroscopy

    PubMed Central

    Krmar, M.; Ganezer, K.

    2012-01-01

    Purpose: The purpose of this study was to develop a new diagnostic technique for measuring bone mineral density (BMD) for the assessment of osteoporosis, which improves upon the coherent to Compton scattering ratio (CCSR) method, which was first developed in the 1980s. To help the authors achieve these goals, they have identified and studied two new indices for CCSR, the forward scattered to backward scattered (FS-BS) and the forward scattered to transmitted (FS-T) ratios. They believe that, at small angles, these two parameters can offer a practical in vivo determination of BMD that can be used to overcome the limitations of past CCSR systems, including high radiation dosages, costs, and examination durations. Methods: In previous CCSR studies, a high-activity radioactive source with a long half-life (usually 241Am) and an expensive and bulky cryogenic HPGe detector were applied to both in vivo and in vitro measurements. To make this technique more suitable for clinical applications, the possibility of using a standard diagnostic x-ray tube generating a continuous spectrum was investigated in this paper. Scattered radiation from trabecular bone-simulating phantoms containing various mineral densities that span the normal range of in vivo BMD was collected in this study using relatively inexpensive noncryogenic CdTe or NaI detectors. Results: The initial results demonstrate that a modified version of CCSR can be successfully applied to trabecular bone assessment using a diagnostic x-ray tube with a continuous spectrum in two variations, the FS-BS and the FS-T ratio. When FS-BS is measured, intensity spectra in the forward and backward directions must be collected, while FS-T requires only the integral intensity of the scattered and transmitted (T) spectra in the energy region above 40 keV. For both of these methods, forward scattering angles less than or equal to 15° and backward scattering angles greater than or equal to 165° (= 180° − 15°) are needed. Conclusions: The authors determined that FS-T is more sensitive to changes in BMD than transmission or absorption alone and that the FS-BS method can yield an absolute measurement of the mean atomic number of the scattering medium, after a correction for path-dependent attenuation. Since this study determined that the FS-T ratio is independent of the incident energy over a broad energy region, it will be possible to apply FS-T to bone densitometry using inexpensive integral photon detectors. The authors believe that, by replacing the radionuclide source with an x-ray tube and the cryogenically cooled HPGe detector with a single solid-state CdTe, NaI, or silicon detector or an annular array of detectors, as suggested in this study, the past difficulties of CCSR concerning high radiation exposure, costs, and durations as well as lack of convenience can be overcome and that CCSR could eventually become popular in clinical settings. PMID:22482605

  1. Electroweak radiative corrections to neutrino scattering at NuTeV

    NASA Astrophysics Data System (ADS)

    Park, Kwangwoo; Baur, Ulrich; Wackeroth, Doreen

    2007-04-01

    The W boson mass extracted by the NuTeV collaboration from the ratios of neutral- and charged-current neutrino and anti-neutrino cross sections differs from direct measurements performed at LEP2 and the Fermilab Tevatron by about 3σ. Several possible sources for the observed difference have been discussed in the literature, including new physics beyond the Standard Model (SM). However, in order to be able to pin down the cause of this discrepancy and to interpret this result as a deviation from the SM, it is important to include the complete electroweak one-loop corrections when extracting the W boson mass from neutrino scattering cross sections. We will present results of a Monte Carlo program for νN (ν̄N) scattering including the complete electroweak O(α) corrections, which will be used to study the effects of these corrections on the extracted values of the electroweak parameters. We will briefly introduce some of the newly developed computational tools for generating Feynman diagrams and corresponding analytic expressions for one-loop matrix elements.

  2. Environmental and Genetic Factors Explain Differences in Intraocular Scattering.

    PubMed

    Benito, Antonio; Hervella, Lucía; Tabernero, Juan; Pennos, Alexandros; Ginis, Harilaos; Sánchez-Romera, Juan F; Ordoñana, Juan R; Ruiz-Sánchez, Marcos; Marín, José M; Artal, Pablo

    2016-01-01

    To study the relative impact of genetic and environmental factors on the variability of intraocular scattering within a classical twin study. A total of 64 twin pairs, 32 monozygotic (MZ) (mean age: 54.9 ± 6.3 years) and 32 dizygotic (DZ) (mean age: 56.4 ± 7.0 years), were measured after a complete ophthalmologic exam had been performed to exclude all ocular pathologies that increase intraocular scatter, such as cataracts. Intraocular scattering was evaluated by using two different techniques based on estimation of the straylight parameter log(S): a compact optical instrument based on the principle of optical integration and a psychophysical measurement. Intraclass correlation coefficients (ICC) were used as descriptive statistics of twin resemblance, and genetic models were fitted to estimate heritability. No statistically significant difference was found between the MZ and DZ groups for age (P = 0.203), best-corrected visual acuity (P = 0.626), cataract gradation (P = 0.701), sex (P = 0.941), optical log(S) (P = 0.386), or psychophysical log(S) (P = 0.568), with only a minor difference in equivalent sphere (P = 0.008). Intraclass correlation coefficients between siblings were similar for the scatter parameters: 0.676 in MZ and 0.471 in DZ twins for optical log(S); 0.533 in MZ twins and 0.475 in DZ twins for psychophysical log(S). For equivalent sphere, ICCs were 0.767 in MZ and 0.228 in DZ twins. Conservative estimates of heritability for the measured scattering parameters were 0.39 and 0.20, respectively. Correlations of intraocular scatter (straylight) parameters in the groups of identical and nonidentical twins were similar. Heritability estimates were of limited magnitude, suggesting that genetic and environmental factors determine the variance of ocular straylight in healthy middle-aged adults.

  3. Analytic Scattering and Refraction Models for Exoplanet Transit Spectra

    NASA Astrophysics Data System (ADS)

    Robinson, Tyler D.; Fortney, Jonathan J.; Hubbard, William B.

    2017-12-01

    Observations of exoplanet transit spectra are essential to understanding the physics and chemistry of distant worlds. The effects of opacity sources and many physical processes combine to set the shape of a transit spectrum. Two such key processes—refraction and cloud and/or haze forward-scattering—have seen substantial recent study. However, models of these processes are typically complex, which prevents their incorporation into observational analyses and standard transit spectrum tools. In this work, we develop analytic expressions that allow for the efficient parameterization of forward-scattering and refraction effects in transit spectra. We derive an effective slant optical depth that includes a correction for forward-scattered light, and present an analytic form of this correction. We validate our correction against a full-physics transit spectrum model that includes scattering, and we explore the extent to which the omission of forward-scattering effects may bias models. Also, we verify a common analytic expression for the location of a refractive boundary, which we express in terms of the maximum pressure probed in a transit spectrum. This expression is designed to be easily incorporated into existing tools, and we discuss how the detection of a refractive boundary could help indicate the background atmospheric composition by constraining the bulk refractivity of the atmosphere. Finally, we show that opacity from Rayleigh scattering and collision-induced absorption will outweigh the effects of refraction for Jupiter-like atmospheres whose equilibrium temperatures are above 400-500 K.

  4. Lidar inelastic multiple-scattering parameters of cirrus particle ensembles determined with geometrical-optics crystal phase functions.

    PubMed

    Reichardt, J; Hess, M; Macke, A

    2000-04-20

    Multiple-scattering correction factors for cirrus particle extinction coefficients measured with Raman and high spectral resolution lidars are calculated with a radiative-transfer model. Cirrus particle-ensemble phase functions are computed from single-crystal phase functions derived in a geometrical-optics approximation. Seven crystal types are considered. In cirrus clouds with height-independent particle extinction coefficients the general pattern of the multiple-scattering parameters has a steep onset at cloud base with values of 0.5-0.7 followed by a gradual and monotonic decrease to 0.1-0.2 at cloud top. The larger the scattering particles are, the more gradual is the rate of decrease. Multiple-scattering parameters of complex crystals and of imperfect hexagonal columns and plates can be well approximated by those of projected-area equivalent ice spheres, whereas perfect hexagonal crystals show values as much as 70% higher than those of spheres. The dependencies of the multiple-scattering parameters on cirrus particle spectrum, base height, and geometric depth and on the lidar parameters laser wavelength and receiver field of view, are discussed, and a set of multiple-scattering parameter profiles for the correction of extinction measurements in homogeneous cirrus is provided.

  5. Dispersive approach to two-photon exchange in elastic electron-proton scattering

    DOE PAGES

    Blunden, P. G.; Melnitchouk, W.

    2017-06-14

    We examine the two-photon exchange corrections to elastic electron-nucleon scattering within a dispersive approach, including contributions from both nucleon and Δ intermediate states. The dispersive analysis avoids off-shell uncertainties inherent in traditional approaches based on direct evaluation of loop diagrams, and guarantees the correct unitary behavior in the high-energy limit. Using empirical information on the electromagnetic nucleon elastic and NΔ transition form factors, we compute the two-photon exchange corrections both algebraically and numerically. Finally, results are compared with recent measurements of e⁺p to e⁻p cross-section ratios from the CLAS, VEPP-3 and OLYMPUS experiments.

  6. Testing the Perey effect

    DOE PAGES

    Titus, L. J.; Nunes, Filomena M.

    2014-03-12

    Here, the effects of non-local potentials have historically been approximately included by applying a correction factor to the solution of the corresponding equation for the local equivalent interaction. This is usually referred to as the Perey correction factor. In this work we investigate the validity of the Perey correction factor for single-channel bound and scattering states, as well as in transfer (p, d) cross sections. Method: We solve the scattering and bound state equations for non-local interactions of the Perey-Buck type, through an iterative method. Using the distorted wave Born approximation, we construct the T-matrix for (p,d) on 17O, 41Ca, 49Ca, 127Sn, 133Sn, and 209Pb at 20 and 50 MeV. As a result, we found that for bound states, the Perey corrected wave function resulting from the local equation agreed well with that from the non-local equation in the interior region, but discrepancies were found in the surface and peripheral regions. Overall, the Perey correction factor was adequate for scattering states, with the exception of a few partial waves corresponding to the grazing impact parameters. These differences proved to be important for transfer reactions. In conclusion, the Perey correction factor does offer an improvement over taking a direct local equivalent solution. However, if the desired accuracy is to be better than 10%, the exact solution of the non-local equation should be pursued.

  7. Qualitative and quantitative processing of side-scan sonar data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dwan, F.S.; Anderson, A.L.; Hilde, T.W.C.

    1990-06-01

    Modern side-scan sonar systems allow vast areas of seafloor to be rapidly imaged and quantitatively mapped in detail. The application of remote sensing image processing techniques can be used to correct for various distortions inherent in raw sonography. Corrections are possible for water column, slant-range, aspect ratio, speckle and striping noise, multiple returns, power drop-off, and for georeferencing. The final products reveal seafloor features and patterns that are geometrically correct, georeferenced, and have improved signal/noise ratio. These products can be merged with other georeferenced data bases for further database management and information extraction. In order to compare data collected by different systems from a common area and to ground truth measurements and geoacoustic models, quantitative correction must be made for calibrated sonar system and bathymetry effects. Such data inversion must account for system source level, beam pattern, time-varying gain, processing gain, transmission loss, absorption, insonified area, and grazing angle effects. Seafloor classification can then be performed on the calculated back-scattering strength using Lambert's Law and regression analysis. Examples are given using both approaches: image analysis and inversion of data based on the sonar equation.

  8. Bias Field Inconsistency Correction of Motion-Scattered Multislice MRI for Improved 3D Image Reconstruction

    PubMed Central

    Kim, Kio; Habas, Piotr A.; Rajagopalan, Vidya; Scott, Julia A.; Corbett-Detig, James M.; Rousseau, Francois; Barkovich, A. James; Glenn, Orit A.; Studholme, Colin

    2012-01-01

    A common solution to clinical MR imaging in the presence of large anatomical motion is to use fast multi-slice 2D studies to reduce slice acquisition time and provide clinically usable slice data. Recently, techniques have been developed which retrospectively correct large scale 3D motion between individual slices allowing the formation of a geometrically correct 3D volume from the multiple slice stacks. One challenge, however, in the final reconstruction process is the possibility of varying intensity bias in the slice data, typically due to the motion of the anatomy relative to imaging coils. As a result, slices which cover the same region of anatomy at different times may exhibit different sensitivity. This bias field inconsistency can induce artifacts in the final 3D reconstruction that can impact both clinical interpretation of key tissue boundaries and the automated analysis of the data. Here we describe a framework to estimate and correct the bias field inconsistency in each slice collectively across all motion corrupted image slices. Experiments using synthetic and clinical data show that the proposed method reduces intensity variability in tissues and improves the distinction between key tissue types. PMID:21511561

  9. Bias field inconsistency correction of motion-scattered multislice MRI for improved 3D image reconstruction.

    PubMed

    Kim, Kio; Habas, Piotr A; Rajagopalan, Vidya; Scott, Julia A; Corbett-Detig, James M; Rousseau, Francois; Barkovich, A James; Glenn, Orit A; Studholme, Colin

    2011-09-01

    A common solution to clinical MR imaging in the presence of large anatomical motion is to use fast multislice 2D studies to reduce slice acquisition time and provide clinically usable slice data. Recently, techniques have been developed which retrospectively correct large scale 3D motion between individual slices allowing the formation of a geometrically correct 3D volume from the multiple slice stacks. One challenge, however, in the final reconstruction process is the possibility of varying intensity bias in the slice data, typically due to the motion of the anatomy relative to imaging coils. As a result, slices which cover the same region of anatomy at different times may exhibit different sensitivity. This bias field inconsistency can induce artifacts in the final 3D reconstruction that can impact both clinical interpretation of key tissue boundaries and the automated analysis of the data. Here we describe a framework to estimate and correct the bias field inconsistency in each slice collectively across all motion corrupted image slices. Experiments using synthetic and clinical data show that the proposed method reduces intensity variability in tissues and improves the distinction between key tissue types.

  10. Spectral structure of laser light scattering revisited: bandwidths of nonresonant scattering lidars.

    PubMed

    She, C Y

    2001-09-20

    It is well known that scattering lidars, i.e., Mie, aerosol-wind, Rayleigh, high-spectral-resolution, molecular-wind, rotational Raman, and vibrational Raman lidars, are workhorses for probing atmospheric properties, including the backscatter ratio, aerosol extinction coefficient, temperature, pressure, density, and winds. The spectral structure of molecular scattering (strength and bandwidth) and its constituent spectra associated with Rayleigh and vibrational Raman scattering are reviewed. Revisiting the correct name by distinguishing Cabannes scattering from Rayleigh scattering, and sharpening the definition of each scattering component in the Rayleigh scattering spectrum, the review allows a systematic, logical, and useful comparison in strength and bandwidth between each scattering component and in receiver bandwidths (for both nighttime and daytime operation) between the various scattering lidars for atmospheric sensing.

  11. Implementation of an Analytical Raman Scattering Correction for Satellite Ocean-Color Processing

    NASA Technical Reports Server (NTRS)

    McKinna, Lachlan I. W.; Werdell, P. Jeremy; Proctor, Christopher W.

    2016-01-01

    Raman scattering of photons by seawater molecules is an inelastic scattering process. This effect can contribute significantly to the water-leaving radiance signal observed by space-borne ocean-color spectroradiometers. If not accounted for during ocean-color processing, Raman scattering can cause biases in derived inherent optical properties (IOPs). Here we describe a Raman scattering correction (RSC) algorithm that has been integrated within NASA's standard ocean-color processing software. We tested the RSC with NASA's Generalized Inherent Optical Properties algorithm (GIOP). A comparison between derived IOPs and in situ data revealed that the magnitude of the derived backscattering coefficient and the phytoplankton absorption coefficient were reduced when the RSC was applied, whilst the absorption coefficient of colored dissolved and detrital matter remained unchanged. Importantly, our results show that the RSC did not degrade the retrieval skill of the GIOP. In addition, a time-series study of oligotrophic waters near Bermuda showed that the RSC did not introduce unwanted temporal trends or artifacts into derived IOPs.

  12. Atmospheric correction of AVIRIS data in ocean waters

    NASA Technical Reports Server (NTRS)

    Terrie, Gregory; Arnone, Robert

    1992-01-01

    Hyperspectral data offers unique capabilities for characterizing the ocean environment. The spectral characterization of the composition of ocean waters can be organized into biological and terrigenous components. Biological photosynthetic pigments in ocean waters have unique spectral ocean color signatures which can be associated with different biological species. Additionally, suspended sediment has different scattering coefficients which result in ocean color signatures. Measuring the spatial distributions of these components in maritime environments provides important tools for understanding and monitoring the ocean environment. These tools have significant applications in pollution, the carbon cycle, current and water-mass detection, location of fronts and eddies, sewage discharge and fate, etc. Ocean color from satellite was used to describe the spatial variability of chlorophyll, water clarity (K(sub 490)), suspended sediment concentration, currents, etc. Additionally, with improved atmospheric correction methods, ocean color results produced global products of spectral water-leaving radiance (L(sub W)). Ocean color results clearly indicated strong applications for characterizing the spatial and temporal variability of bio-optical oceanography. These studies were largely the result of advanced atmospheric correction techniques applied to multispectral imagery. The atmosphere contributes approximately 80-90 percent of the satellite-received radiance in the blue-green portion of the spectrum. In deep ocean waters, maximum transmission of visible radiance is achieved at 490 nm. Conversely, nearly all of the light is absorbed by the water at wavelengths greater than about 650 nm and thus appears black. These spectral ocean properties are exploited by algorithms developed for the atmospheric correction used in satellite ocean color processing. The objective was to apply atmospheric correction techniques that were used for processing satellite Coastal Zone Color Scanner (CZCS) data to AVIRIS data. Quantitative measures of L(sub W) from AVIRIS are compared with ship ground-truth data and input into bio-optical models.

  13. A curvature-corrected Kirchhoff formulation for radar sea-return from the near vertical

    NASA Technical Reports Server (NTRS)

    Jackson, F. C.

    1974-01-01

    A new theoretical treatment of the problem of electromagnetic wave scattering from a randomly rough surface is given. A high-frequency correction to the Kirchhoff approximation is derived from a field integral equation for a perfectly conducting surface. The correction, which accounts for the effect of local surface curvature, is seen to be identical with an asymptotic form found by Fock (1945) for diffraction by a paraboloid. The corrected boundary values are substituted into the far-field Stratton-Chu integral, and average backscattered powers are computed assuming the scattering surface is a homogeneous Gaussian process. Preliminary calculations for a K(-4) ocean wave spectrum indicate a reasonable modelling of polarization effects near the vertical, theta < 45 deg. Correspondence with the results of small perturbation theory is shown.

  14. Spectroscopic detection of chemotherapeutics and antioxidants

    NASA Astrophysics Data System (ADS)

    Latka, Ines; Grüner, Roman; Matthäus, Christian; Dietzek, Benjamin; Werncke, W.; Lademann, Jürgen; Popp, Jürgen

    2012-06-01

    The hand-foot syndrome presents a severe dermal side effect of chemotherapeutic cancer treatment. The cause of this side effect is the elimination of systemically administered chemotherapeutics with the sweat. Transported to the skin surface, the drugs subsequently penetrate into the skin in the manner of topically applied substances. Upon accumulation of the chemotherapeutics in the skin, the drugs destroy cells and tissue - in the same way as they are supposed to act in cancer cells. Aiming at the development of strategies to illuminate the molecular mechanism underlying the hand-foot syndrome (and, in a second step, strategies to prevent this severe side effect), it might be important to evaluate the concentration and distribution of chemotherapeutics and antioxidants in the human skin. The latter can be estimated by the carotenoid concentration, as carotenoids serve as marker substances for the dermal antioxidative status. Following the objectives outlined above, this contribution presents a spectroscopic study aiming at the detection and quantification of carotenoids and selected chemotherapeutics in human skin. To this end, spontaneous Raman scattering and coherent anti-Stokes Raman scattering (CARS) microspectroscopy are combined with two-photon excited fluorescence. While the latter technique is restricted to the detection of fluorescent chemotherapeutics, e.g., doxorubicin, the vibrational spectroscopic techniques can - in principle - be applied to any type of analyte molecule. Furthermore, we will present the monitoring of doxorubicin uptake during such experiments.

  15. Energy-weighted dynamical scattering simulations of electron diffraction modalities in the scanning electron microscope.

    PubMed

    Pascal, Elena; Singh, Saransh; Callahan, Patrick G; Hourahine, Ben; Trager-Cowan, Carol; Graef, Marc De

    2018-04-01

    Transmission Kikuchi diffraction (TKD) has been gaining momentum as a high resolution alternative to electron back-scattered diffraction (EBSD), adding to the existing electron diffraction modalities in the scanning electron microscope (SEM). The image simulation of any of these measurement techniques requires an energy dependent diffraction model for which, in turn, knowledge of electron energies and diffraction distances distributions is required. We identify the sample-detector geometry and the effect of inelastic events on the diffracting electron beam as the important factors to be considered when predicting these distributions. However, tractable models taking into account inelastic scattering explicitly are lacking. In this study, we expand the Monte Carlo (MC) energy-weighting dynamical simulations models used for EBSD [1] and ECP [2] to the TKD case. We show that the foil thickness in TKD can be used as a means of energy filtering and compare band sharpness in the different modalities. The current model is shown to correctly predict TKD patterns and, through the dictionary indexing approach, to produce higher quality indexed TKD maps than conventional Hough transform approach, especially close to grain boundaries. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  16. [Evaluation of cross-calibration of (123)I-MIBG H/M ratio, with the IDW scatter correction method, on different gamma camera systems].

    PubMed

    Kittaka, Daisuke; Takase, Tadashi; Akiyama, Masayuki; Nakazawa, Yasuo; Shinozuka, Akira; Shirai, Muneaki

    2011-01-01

    (123)I-MIBG Heart-to-Mediastinum activity ratio (H/M) is commonly used as an indicator of relative myocardial (123)I-MIBG uptake. H/M ratios reflect myocardial sympathetic nerve function and are therefore a useful parameter to assess regional myocardial sympathetic denervation in various cardiac diseases. However, H/M ratio values differ by site, gamma camera system, position and size of the region of interest (ROI), and collimator. In addition to these factors, the 529 keV scatter component may also affect the (123)I-MIBG H/M ratio. In this study, we examined whether the H/M ratio shows a correlation between two different gamma camera systems and sought an H/M ratio calculation formula. Moreover, we assessed the feasibility of the (123)I Dual Window (IDW) method, which is a scatter correction method, and compared H/M ratios with and without the IDW method. The H/M ratio displayed a good correlation between the two gamma camera systems. Additionally, we were able to create a new H/M calculation formula. These results indicate that the IDW method is a useful scatter correction method for calculating (123)I-MIBG H/M ratios.

  17. Asymmetric Flow-Field Flow Fractionation (AF4) of Aqueous C60 Aggregates with Dynamic Light Scattering Size and LC-MS

    EPA Science Inventory

    Current methods for the size determination of nanomaterials in aqueous suspension include dynamic or static light scattering and electron or atomic force microscopy techniques. Light scattering techniques are limited by poor resolution and the scattering intensity dependence on p...

  18. Reciprocal space mapping and single-crystal scattering rods.

    PubMed

    Smilgies, Detlef M; Blasini, Daniel R; Hotta, Shu; Yanagi, Hisao

    2005-11-01

    Reciprocal space mapping using a linear gas detector in combination with a matching Soller collimator has been applied to map scattering rods of well oriented organic microcrystals grown on a solid surface. Formulae are provided to correct image distortions in angular space and to determine the required oscillation range, in order to measure properly integrated scattering intensities.

  19. Quasi-elastic nuclear scattering at high energies

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Townsend, Lawrence W.; Wilson, John W.

    1992-01-01

    The quasi-elastic scattering of two nuclei is considered in the high-energy optical model. Energy loss and momentum transfer spectra for projectile ions are evaluated in terms of an inelastic multiple-scattering series corresponding to multiple knockout of target nucleons. The leading-order correction to the coherent projectile approximation is evaluated. Calculations are compared with experiments.

  20. Laser Light Scattering with Multiple Scattering Suppression Used to Measure Particle Sizes

    NASA Technical Reports Server (NTRS)

    Meyer, William V.; Tin, Padetha; Lock, James A.; Cannell, David S.; Smart, Anthony E.; Taylor, Thomas W.

    1999-01-01

    Laser light scattering is the technique of choice for noninvasively sizing particles in a fluid. The members of the Advanced Technology Development (ATD) project in laser light scattering at the NASA Lewis Research Center have invented, tested, and recently enhanced a simple and elegant way to extend the concentration range of this standard laboratory particle-sizing technique by several orders of magnitude. With this technique, particles from 3 nm to 3 mm can be measured in a solution. Recently, laser light scattering evolved to successfully size particles in both clear solutions and concentrated milky-white solutions. The enhanced technique uses the property of light that causes it to form tall interference patterns at right angles to the scattering plane (perpendicular to the laser beam) when it is scattered from a narrow laser beam. Such multiple-scattered light forms a broad fuzzy halo around the focused beam, which, in turn, forms short interference patterns. By placing two fiber optics on top of each other and perpendicular to the laser beam (see the drawing), and then cross-correlating the signals they produce, only the tall interference patterns formed by singly scattered light are detected. To restate this, unless the two fiber optics see the same interference pattern, the scattered light is not incorporated into the signal. With this technique, only singly scattered light is seen (multiple-scattered light is rejected) because only singly scattered light has an interference pattern tall enough to span both of the fiber-optic pickups. This technique is simple to use, easy to align, and works at any angle. Placing a vertical slit in front of the signal collection fibers enhanced this approach. The slit serves as an optical mask, and it significantly shortens the time needed to collect good data by selectively masking out much of the unwanted light before cross-correlation is applied.

  1. Anomalous Rayleigh scattering with dilute concentrations of elements of biological importance

    NASA Astrophysics Data System (ADS)

    Hugtenburg, Richard P.; Bradley, David A.

    2004-01-01

    The anomalous scattering factor (ASF) correction to the relativistic form-factor approximation for Rayleigh scattering is examined in support of its utilization in radiographic imaging. ASF corrected total cross-section data have been generated for a low resolution grid for the Monte Carlo code EGS4 for the biologically important elements, K, Ca, Mn, Fe, Cu and Zn. Points in the fixed energy grid used by EGS4 as well as 8 other points in the vicinity of the K-edge have been chosen to achieve an uncertainty in the ASF component of 20% according to the Thomas-Reiche-Kuhn sum rule and an energy resolution of 20 eV. Such data is useful for analysis of imaging with a quasi-monoenergetic source. Corrections to the sampled distribution of outgoing photons, due to ASF, are given and new total cross-section data including that of the photoelectric effect have been computed using the Slater exchange self-consistent potential with the Latter tail. A measurement of Rayleigh scattering in a dilute aqueous solution of manganese (II) was performed, this system enabling determination of the absolute cross-section, although background subtraction was necessary to remove K β fluorescence and resonant Raman scattering occurring within several 100 eV of the edge. Measurements confirm the presence of below edge bound-bound structure and variation in the structure due to the ionic state that are not currently included in tabulations.

  2. Chiral symmetry constraints on resonant amplitudes

    NASA Astrophysics Data System (ADS)

    Bruns, Peter C.; Mai, Maxim

    2018-03-01

    We discuss the impact of chiral symmetry constraints on the quark-mass dependence of meson resonance pole positions, which are encoded in non-perturbative parametrizations of meson scattering amplitudes. Model-independent conditions on such parametrizations are derived, which are shown to guarantee the correct functional form of the leading quark-mass corrections to the resonance pole positions. Some model amplitudes for ππ scattering, widely used for the determination of ρ and σ resonance properties from results of lattice simulations, are tested explicitly with respect to these conditions.

  3. Absolutely and uniformly convergent iterative approach to inverse scattering with an infinite radius of convergence

    DOEpatents

    Kouri, Donald J [Houston, TX]; Vijay, Amrendra [Houston, TX]; Zhang, Haiyan [Houston, TX]; Zhang, Jingfeng [Houston, TX]; Hoffman, David K [Ames, IA]

    2007-05-01

    A method and system for solving the inverse acoustic scattering problem using an iterative approach that incorporates half-off-shell transition matrix element (near-field) information. The Volterra inverse series correctly predicts the first two moments of the interaction, while the Fredholm inverse series is correct only for the first moment; the Volterra approach also provides a method for exactly obtaining interactions which can be written as a sum of delta functions.

  4. Holographic corrections to meson scattering amplitudes

    NASA Astrophysics Data System (ADS)

    Armoni, Adi; Ireson, Edwin

    2017-06-01

    We compute meson scattering amplitudes using the holographic duality between confining gauge theories and string theory, in order to consider holographic corrections to the Veneziano amplitude and associated higher-point functions. The generic nature of such computations is explained, thanks to the well-understood nature of confining string backgrounds, and two different examples of the calculation in given backgrounds are used to illustrate the details. The effect we discover, whilst only qualitative, is re-obtainable in many such examples, in four-point but also higher point amplitudes.

  5. A three-dimensional model-based partial volume correction strategy for gated cardiac mouse PET imaging

    NASA Astrophysics Data System (ADS)

    Dumouchel, Tyler; Thorn, Stephanie; Kordos, Myra; DaSilva, Jean; Beanlands, Rob S. B.; deKemp, Robert A.

    2012-07-01

    Quantification in cardiac mouse positron emission tomography (PET) imaging is limited by the imaging spatial resolution. Spillover of left ventricle (LV) myocardial activity into adjacent organs results in partial volume (PV) losses leading to underestimation of myocardial activity. A PV correction method was developed to restore accuracy of the activity distribution for FDG mouse imaging. The PV correction model was based on convolving an LV image estimate with a 3D point spread function. The LV model was described regionally by a five-parameter profile including myocardial, background and blood activities which were separated into three compartments by the endocardial radius and myocardium wall thickness. The PV correction was tested with digital simulations and a physical 3D mouse LV phantom. In vivo cardiac FDG mouse PET imaging was also performed. Following imaging, the mice were sacrificed and the tracer biodistribution in the LV and liver tissue was measured using a gamma-counter. The PV correction algorithm improved recovery from 50% to within 5% of the truth for the simulated and measured phantom data and image uniformity by 5-13%. The PV correction algorithm improved the mean myocardial LV recovery from 0.56 (0.54) to 1.13 (1.10) without (with) scatter and attenuation corrections. The mean image uniformity was improved from 26% (26%) to 17% (16%) without (with) scatter and attenuation corrections applied. Scatter and attenuation corrections were not observed to significantly impact PV-corrected myocardial recovery or image uniformity. Image-based PV correction algorithm can increase the accuracy of PET image activity and improve the uniformity of the activity distribution in normal mice. The algorithm may be applied using different tracers, in transgenic models that affect myocardial uptake, or in different species provided there is sufficient image quality and similar contrast between the myocardium and surrounding structures.
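    The model described above can be illustrated with a 1D toy version: a five-parameter profile (blood, myocardial and background activities, endocardial radius, wall thickness) is convolved with a Gaussian point spread function and fitted to a measured profile, the fitted myocardial value serving as the partial-volume-corrected activity. The real method is 3D and gated; all names, the PSF width and the initial guesses below are illustrative assumptions.

```python
# 1D toy version of a model-based partial-volume correction: a five-parameter
# radial profile (blood, myocardium, background activities; endocardial radius;
# wall thickness) is blurred with a Gaussian PSF and fitted to a measured
# profile. Names, PSF width and initial guesses are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import curve_fit

R = np.linspace(0.0, 20.0, 400)                 # radial samples (mm)
PSF_SIGMA_MM = 1.8 / 2.355                      # assumed ~1.8 mm FWHM scanner PSF

def lv_profile(r, blood, myo, bkg, r_endo, wall):
    model = np.where(r < r_endo, blood,
             np.where(r < r_endo + wall, myo, bkg)).astype(float)
    dr = r[1] - r[0]
    return gaussian_filter1d(model, PSF_SIGMA_MM / dr)   # apply the PSF blur

# popt, _ = curve_fit(lv_profile, R, measured_profile,
#                     p0=(1.0, 4.0, 0.5, 8.0, 1.2))
# pv_corrected_myocardial_activity = popt[1]
```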

  6. Fast analytical scatter estimation using graphics processing units.

    PubMed

    Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris

    2015-01-01

    To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first order scatter in cone-beam image reconstruction improves the contrast to noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and with further acceleration and a method to account for multiple scatter may be useful for practical scatter correction schemes.

  7. Sequential weighted Wiener estimation for extraction of key tissue parameters in color imaging: a phantom study

    NASA Astrophysics Data System (ADS)

    Chen, Shuo; Lin, Xiaoqian; Zhu, Caigang; Liu, Quan

    2014-12-01

    Key tissue parameters, e.g., total hemoglobin concentration and tissue oxygenation, are important biomarkers in clinical diagnosis for various diseases. Although point measurement techniques based on diffuse reflectance spectroscopy can accurately recover these tissue parameters, they are not suitable for the examination of a large tissue region due to slow data acquisition. Previous imaging studies have shown that hemoglobin concentration and oxygenation can be estimated from color measurements with the assumption of known scattering properties, which is impractical in clinical applications. To overcome this limitation and speed up image processing, we propose a method of sequential weighted Wiener estimation (WE) to quickly extract key tissue parameters, including total hemoglobin concentration (CtHb), hemoglobin oxygenation (StO2), scatterer density (α), and scattering power (β), from wide-band color measurements. This method takes advantage of the fact that each parameter is sensitive to the color measurements in a different way and attempts to maximize the contribution of those color measurements likely to generate correct results in WE. The method was evaluated on skin phantoms with varying CtHb, StO2, and scattering properties. The results demonstrate excellent agreement between the estimated tissue parameters and the corresponding reference values. Compared with traditional WE, the sequential weighted WE shows significant improvement in estimation accuracy. This method could be used to monitor tissue parameters in an imaging setup in real time.
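    As background for the method above, a minimal sketch of plain (unweighted) Wiener estimation from a training set is given below; the sequential weighting scheme proposed in the paper is not reproduced, and the array names are illustrative.

```python
# Minimal sketch of (unweighted) Wiener estimation: a linear estimator learned
# from training pairs of colour measurements and known tissue parameters.
# The sequential weighting proposed in the paper is not reproduced; names are
# illustrative.
import numpy as np

def wiener_matrix(params_train, meas_train):
    """params_train: (n, n_params); meas_train: (n, n_channels).
    Returns W = C_xy @ pinv(C_yy), so params ~ mean_x + (meas - mean_y) @ W.T"""
    X = params_train - params_train.mean(axis=0)
    Y = meas_train - meas_train.mean(axis=0)
    C_xy = X.T @ Y / len(X)
    C_yy = Y.T @ Y / len(Y)
    return C_xy @ np.linalg.pinv(C_yy)

def wiener_estimate(W, params_train, meas_train, meas_new):
    # apply the learned estimator to new (mean-centred) measurements
    return params_train.mean(axis=0) + (meas_new - meas_train.mean(axis=0)) @ W.T
```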

  8. Quantum mechanical generalized phase-shift approach to atom-surface scattering: a Feshbach projection approach to dealing with closed channel effects.

    PubMed

    Maji, Kaushik; Kouri, Donald J

    2011-03-28

    We have developed a new method for solving quantum dynamical scattering problems, using the time-independent Schrödinger equation (TISE), based on a novel method to generalize a "one-way" quantum mechanical wave equation, impose correct boundary conditions, and eliminate exponentially growing closed channel solutions. The approach is readily parallelized to achieve approximate N² scaling, where N is the number of coupled equations. The full two-way nature of the TISE is included while propagating the wave function in the scattering variable and the full S-matrix is obtained. The new algorithm is based on a "Modified Cayley" operator splitting approach, generalizing earlier work where the method was applied to the time-dependent Schrödinger equation. All scattering variable propagation approaches to solving the TISE involve solving a Helmholtz-type equation, and for more than one degree of freedom, these are notoriously ill-behaved, due to the unavoidable presence of exponentially growing contributions to the numerical solution. Traditionally, the method used to eliminate exponential growth has posed a major obstacle to the full parallelization of such propagation algorithms. We stabilize by using the Feshbach projection operator technique to remove all the nonphysical exponentially growing closed channels, while retaining all of the propagating open channel components, as well as exponentially decaying closed channel components.

  9. Energy-angle correlation correction algorithm for monochromatic computed tomography based on Thomson scattering X-ray source

    NASA Astrophysics Data System (ADS)

    Chi, Zhijun; Du, Yingchao; Huang, Wenhui; Tang, Chuanxiang

    2017-12-01

    The necessity for compact and relatively low-cost x-ray sources with monochromaticity, continuous tunability of x-ray energy, high spatial coherence, straightforward polarization control, and high brightness has led to the rapid development of Thomson scattering x-ray sources. To meet the requirement of in-situ monochromatic computed tomography (CT) for large-scale and/or high-attenuation materials based on this type of x-ray source, there is an increasing demand for effective algorithms to correct the energy-angle correlation. In this paper, we take advantage of the parametrization of the x-ray attenuation coefficient to resolve this problem. The linear attenuation coefficient of a material can be decomposed into a linear combination of the energy-dependent photoelectric and Compton cross-sections in the keV energy regime without K-edge discontinuities, and the line integrals of the decomposition coefficients of the above two parts can be determined by performing two spectrally different measurements. After that, the line integral of the linear attenuation coefficient of an imaging object at an energy of interest can be derived through the above parametrization formula, and a monochromatic CT image can be reconstructed at this energy using traditional reconstruction methods, e.g., filtered back projection or the algebraic reconstruction technique. Not only can monochromatic CT be realized, but the distributions of the effective atomic number and electron density of the imaging object can also be retrieved at the expense of a dual-energy CT scan. Simulation results validate our proposal and are shown in this paper. Our results will further expand the scope of application for Thomson scattering x-ray sources.
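    The decomposition outlined above is commonly written in the standard photoelectric-plus-Compton (Alvarez-Macovski-type) form sketched below; the paper's exact notation and basis functions may differ.

```latex
% Standard photoelectric + Compton parametrization (Alvarez-Macovski form);
% the paper's exact notation may differ.
\mu(E,\mathbf{r}) = a_p(\mathbf{r})\, f_{\mathrm{PE}}(E) + a_c(\mathbf{r})\, f_{\mathrm{KN}}(E),
\qquad f_{\mathrm{PE}}(E) \propto E^{-3}, \quad f_{\mathrm{KN}} = \text{Klein--Nishina function}.

% Two spectrally different measurements determine the two line integrals
A_p = \int a_p \,\mathrm{d}\ell, \qquad A_c = \int a_c \,\mathrm{d}\ell,

% from which a synthetic monochromatic projection at any energy of interest E^{*}
% follows, and standard FBP or ART reconstruction can then be applied:
\int \mu(E^{*},\mathbf{r})\,\mathrm{d}\ell = A_p\, f_{\mathrm{PE}}(E^{*}) + A_c\, f_{\mathrm{KN}}(E^{*}).
```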

  10. a Single-Exposure Dual-Energy Computed Radiography Technique for Improved Nodule Detection and Classification in Chest Imaging

    NASA Astrophysics Data System (ADS)

    Zink, Frank Edward

    The detection and classification of pulmonary nodules is of great interest in chest radiography. Nodules are often indicative of primary cancer, and their detection is particularly important in asymptomatic patients. The ability to classify nodules as calcified or non-calcified is important because calcification is a positive indicator that the nodule is benign. Dual-energy methods offer the potential to improve both the detection and classification of nodules by allowing the formation of material-selective images. Tissue-selective images can improve detection by virtue of the elimination of obscuring rib structure. Bone-selective images are essentially calcium images, allowing classification of the nodule. A dual-energy technique is introduced which uses a computed radiography system to acquire dual-energy chest radiographs in a single exposure. All aspects of the dual-energy technique are described, with particular emphasis on scatter-correction, beam-hardening correction, and noise-reduction algorithms. The adaptive noise-reduction algorithm employed improves material-selective signal-to-noise ratio by up to a factor of seven with minimal sacrifice in selectivity. A clinical comparison study is described, undertaken to compare the dual-energy technique to conventional chest radiography for the tasks of nodule detection and classification. Observer performance data were collected using the Free Response Observer Characteristic (FROC) method and the bi-normal Alternative FROC (AFROC) performance model. Results of the comparison study, analyzed using two common multiple-observer statistical models, showed that the dual-energy technique was superior to conventional chest radiography for detection of nodules at a statistically significant level (p < .05). Discussion of the comparison study emphasizes the unique combination of data collection and analysis techniques employed, as well as the limitations of comparison techniques in the larger context of technology assessment.
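    The core of the material-selective image formation described above is a weighted log-subtraction of the co-registered low- and high-energy projections, sketched below; the weighting factors and the scatter-correction, beam-hardening-correction and noise-reduction steps discussed in the abstract are not reproduced, and all names are illustrative.

```python
# Sketch of dual-energy weighted log-subtraction to form material-selective
# images from co-registered low- and high-energy projections. The weight w
# would in practice be calibrated to cancel either the bone or the soft-tissue
# signal; scatter, beam-hardening and noise-reduction steps are omitted.
import numpy as np

def material_selective(low_energy, high_energy, w, eps=1e-6):
    """Returns log(HE) - w * log(LE); w cancels one material's contrast."""
    return np.log(high_energy + eps) - w * np.log(low_energy + eps)

# soft_tissue_image = material_selective(le_img, he_img, w=w_bone_cancel)
# bone_image        = material_selective(le_img, he_img, w=w_tissue_cancel)
```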

  11. A square-wave wavelength modulation system for automatic background correction in carbon furnace atomic emission spectrometry

    NASA Astrophysics Data System (ADS)

    Bezur, L.; Marshall, J.; Ottaway, J. M.

    A square-wave wavelength modulation system, based on a rotating quartz chopper with four quadrants of different thicknesses, has been developed and evaluated as a method for automatic background correction in carbon furnace atomic emission spectrometry. Accurate background correction is achieved for the residual black body radiation (Rayleigh scatter) from the tube wall and Mie scatter from particles generated by a sample matrix and formed by condensation of atoms in the optical path. Intensity modulation caused by overlap at the edges of the quartz plates and by the divergence of the optical beam at the position of the modulation chopper has been investigated and is likely to be small.

  12. Atmospheric monitoring in MAGIC and data corrections

    NASA Astrophysics Data System (ADS)

    Fruck, Christian; Gaug, Markus

    2015-03-01

    A method for analyzing the returns of a custom-made "micro"-LIDAR system, operated alongside the two MAGIC telescopes, is presented. This method allows the transmission through the atmospheric boundary layer as well as through thin cloud layers to be calculated. This is achieved by applying exponential fits to regions of the back-scattering signal that are dominated by Rayleigh scattering. Making this real-time transmission information available in the MAGIC data stream allows atmospheric corrections to be applied later in the analysis. Such corrections allow the effective observation time of MAGIC to be extended by including data taken under adverse atmospheric conditions. In the future they will help reduce the systematic uncertainties of energy and flux.
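    The transmission estimate described above can be sketched as follows: fit the logarithm of the range-corrected return in two Rayleigh-dominated regions (below and above a layer) and read the layer's two-way transmission from the offset between the fits. The fit windows and variable names below are illustrative assumptions, not the actual MAGIC analysis.

```python
# Sketch: estimate a thin layer's two-way transmission from the step between
# exponential (Rayleigh) fits to the range-corrected LIDAR return below and
# above the layer. Fit windows and names are illustrative assumptions.
import numpy as np

def layer_two_way_transmission(r_m, signal,
                               below=(1000.0, 3000.0), above=(5000.0, 8000.0)):
    log_rcs = np.log(signal * r_m ** 2)            # log of range-corrected signal
    fits = []
    for lo, hi in (below, above):
        sel = (r_m >= lo) & (r_m <= hi)
        fits.append(np.polyfit(r_m[sel], log_rcs[sel], 1))   # linear fit in log space
    r_mid = 0.5 * (below[1] + above[0])            # compare the two fits mid-layer
    step = np.polyval(fits[0], r_mid) - np.polyval(fits[1], r_mid)
    return float(np.clip(np.exp(-step), 0.0, 1.0)) # T^2 of the layer
```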

  13. A United Effort for Crystal Growth, Neutron Scattering, and X-ray Scattering Studies of Novel Correlated Electron Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Young S.

    2015-02-12

    The research accomplishments during the award involved experimental studies of correlated electron systems and quantum magnetism. The techniques of crystal growth, neutron scattering, x-ray scattering, and thermodynamic & transport measurements were employed, and graduate students and postdoctoral research associates were trained in these techniques.

  14. Modulated scattering technique in the terahertz domain enabled by current actuated vanadium dioxide switches

    PubMed Central

    Vitale, W. A.; Tamagnone, M.; Émond, N.; Le Drogoff, B.; Capdevila, S.; Skrivervik, A.; Chaker, M.; Mosig, J. R.; Ionescu, A. M.

    2017-01-01

    The modulated scattering technique is based on the use of reconfigurable electromagnetic scatterers, structures able to scatter and modulate an impinging electromagnetic field as a function of a control signal. The modulated scattering technique is used in a wide range of frequencies up to millimeter waves for various applications, such as field mapping of circuits or antennas, radio-frequency identification devices and imaging applications. However, its implementation in the terahertz domain remains challenging. Here, we describe the design and experimental demonstration of the modulated scattering technique at terahertz frequencies. We characterize a modulated scatterer consisting of a bowtie antenna loaded with a vanadium dioxide switch, actuated using a continuous current. The modulated scatterer behavior is demonstrated using a time-domain terahertz spectroscopy setup and shows significant signal strength well above 0.5 THz, which makes this device a promising candidate for the development of fast and energy-efficient THz communication devices and imaging systems. Moreover, our experiments allowed us to verify the operation of a single micrometer-sized VO2 switch at terahertz frequencies, thanks to the coupling provided by the antenna. PMID:28145523

  15. Density-functional calculations of transport properties in the nondegenerate limit and the role of electron-electron scattering

    DOE PAGES

    Desjarlais, Michael P.; Scullard, Christian R.; Benedict, Lorin X.; ...

    2017-03-13

    We compute the electrical and thermal conductivities of hydrogen plasmas in the non-degenerate regime using Kohn-Sham Density Functional Theory (DFT) and an application of the Kubo-Greenwood response formula, and demonstrate that, for the thermal conductivity, the mean-field treatment of the electron-electron (e-e) interaction therein is insufficient to reproduce the weak-coupling limit obtained by plasma kinetic theories. An explicit e-e scattering correction to the DFT is posited by appealing to Matthiessen's rule and to the results of our computations of conductivities with the quantum Lenard-Balescu (QLB) equation. Further motivation for our correction is provided by an argument arising from the Zubarev quantum kinetic theory approach. Significant emphasis is placed on our efforts to produce properly converged results for plasma transport using Kohn-Sham DFT, so that an accurate assessment of the importance and efficacy of our e-e scattering corrections to the thermal conductivity can be made.
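
    The e-e correction invoked above rests on Matthiessen's rule, i.e., adding the inverse conductivities of independent scattering channels. A minimal sketch of that combination step follows; the function name and the numerical values are placeholders, not results from the paper.

    def matthiessen_correction(kappa_dft, kappa_ee):
        """Combine a mean-field (electron-ion dominated) thermal conductivity
        from Kohn-Sham DFT with a separately computed e-e scattering channel
        by adding the corresponding 'resistivities' (Matthiessen's rule).
        Both inputs in W/m/K; purely illustrative of the idea in the abstract.
        """
        return 1.0 / (1.0 / kappa_dft + 1.0 / kappa_ee)

    # hypothetical numbers, for illustration only
    print(matthiessen_correction(kappa_dft=120.0, kappa_ee=800.0))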

  16. Combined Henyey-Greenstein and Rayleigh phase function.

    PubMed

    Liu, Quanhua; Weng, Fuzhong

    2006-10-01

    The phase function is an important parameter that affects the distribution of scattered radiation. In Rayleigh scattering, a scatterer is approximated by a dipole, and its phase function is analytically related to the scattering angle. For the Henyey-Greenstein (HG) approximation, the phase function preserves only the correct asymmetry factor (i.e., the first moment), which is especially important for anisotropic scattering. When the HG function is applied to small particles, it produces a significant error in radiance. In addition, the HG function applies only to intensity radiative transfer. We develop a combined HG and Rayleigh (HG-Rayleigh) phase function. The HG phase function plays the role of a modulator, extending the application of the Rayleigh phase function to small-asymmetry scattering. The HG-Rayleigh phase function guarantees the correct asymmetry factor and is valid for polarized radiative transfer. It approaches the Rayleigh phase function for small particles. Thus the HG-Rayleigh phase function has wider applications for both intensity and polarimetric radiative transfers. For the microwave radiative transfer modeling in this study, the largest errors in the brightness temperature calculations for weak-asymmetry scattering are generally below 0.02 K when using the HG-Rayleigh phase function. The errors can be much larger, in the 1-3 K range, if the Rayleigh and HG functions are applied separately.
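
    To make the construction concrete, the sketch below evaluates the standard Henyey-Greenstein and Rayleigh phase functions and a numerically renormalized product of the two, in the spirit of the HG-Rayleigh function described above. The exact closed form and normalization used by Liu and Weng may differ; the combined function here is an assumption for illustration.

    import numpy as np

    def p_hg(cos_t, g):
        """Henyey-Greenstein phase function, normalized so that
        (1/2) * integral of p over cos(theta) equals 1."""
        return (1.0 - g**2) / (1.0 + g**2 - 2.0 * g * cos_t) ** 1.5

    def p_rayleigh(cos_t):
        """Rayleigh (dipole) phase function for intensity."""
        return 0.75 * (1.0 + cos_t**2)

    def p_hg_rayleigh(cos_t, g):
        """Combined HG-Rayleigh phase function: the HG term modulates the
        Rayleigh shape, renormalized numerically so its average over the
        sphere is 1 (a sketch of the construction described in the abstract,
        not necessarily the exact closed form of Liu & Weng 2006)."""
        mu = np.linspace(-1.0, 1.0, 2001)
        norm = 0.5 * np.trapz(p_hg(mu, g) * p_rayleigh(mu), mu)
        return p_hg(cos_t, g) * p_rayleigh(cos_t) / norm

    # small asymmetry parameter: the combined function approaches pure Rayleigh
    print(p_hg_rayleigh(np.array([1.0, 0.0, -1.0]), g=0.05))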

  17. Analytical multiple scattering correction to the Mie theory: Application to the analysis of the lidar signal

    NASA Technical Reports Server (NTRS)

    Flesia, C.; Schwendimann, P.

    1992-01-01

    The contribution of multiple scattering to the lidar signal depends on the optical depth tau. Therefore, the standard lidar analysis, based on the assumption that multiple scattering can be neglected, is limited to cases characterized by low values of the optical depth (tau less than or equal to 0.1) and hence excludes scattering from most clouds. Moreover, all inversion methods relating the lidar signal to number densities and particle sizes must be modified, since multiple scattering affects the direct analysis. The essential requirements for a realistic model of lidar measurements that includes multiple scattering and can be applied to practical situations are as follows. (1) What is needed is not merely a correction term or a rough approximation describing the results of a particular experiment, but a general theory of multiple scattering tying together the relevant physical parameters we seek to measure. (2) An analytical generalization of the lidar equation that can be applied in the case of a realistic aerosol is required. A purely analytical formulation is important in order to avoid the convergence and stability problems which, in a numerical approach, arise from the large number of events that must be taken into account in the presence of large optical depth and/or strong experimental noise.

  18. γ-Particle coincidence technique for the study of nuclear reactions

    NASA Astrophysics Data System (ADS)

    Zagatto, V. A. B.; Oliveira, J. R. B.; Allegro, P. R. P.; Chamon, L. C.; Cybulska, E. W.; Medina, N. H.; Ribas, R. V.; Seale, W. A.; Silva, C. P.; Gasques, L. R.; Zahn, G. S.; Genezini, F. A.; Shorto, J. M. B.; Lubian, J.; Linares, R.; Toufen, D. L.; Silveira, M. A. G.; Rossi, E. S.; Nobre, G. P.

    2014-06-01

    The Saci-Perere γ-ray spectrometer (located at the Pelletron Accelerator Laboratory - IFUSP) was employed to implement the γ-particle coincidence technique for the study of nuclear reaction mechanisms. For this purpose, the 18O+110Pd reaction was studied in the beam energy range of 45-54 MeV. Several corrections to the data due to various effects (energy and angle integrations, beam spot size, finite γ-detector size, and vacuum de-alignment) are small and well controlled. The aim of this work was to establish a proper method to analyze the data and to identify the reaction mechanisms involved. To achieve this goal, the inelastic scattering to the first excited state of 110Pd was extracted and compared to coupled-channel calculations using the São Paulo Potential (SPP), which describe it reasonably well.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, S; Meredith, R; Azure, M

    Purpose: To support the phase I trial for toxicity, biodistribution and pharmacokinetics of intra-peritoneal (IP) 212Pb-TCMC-trastuzumab in patients with HER-2 expressing malignancy. A whole-body gamma camera imaging method was developed for estimating the amount of 212Pb-TCMC-trastuzumab left in the peritoneal cavity. Methods: 212Pb decays to 212Bi via beta emission. 212Bi emits an alpha particle with an average energy of 6.1 MeV. The 238.6 keV gamma ray, with a 43.6% yield, can be exploited for imaging. An initial phantom was made of saline bags containing 212Pb. Images were collected at 238.6 keV with a medium-energy general-purpose collimator. There are other high-energy gamma emissions (e.g. 511 keV, 8%; 583 keV, 31%) that penetrate the septa of the collimator and contribute scatter into the 238.6 keV window. An upper scatter window was used for scatter correction for these high-energy gammas. Results: A small source containing 212Pb can be easily visualized. Scatter correction on images of a small 212Pb source resulted in a ∼50% reduction in the full width at tenth maximum (FWTM), while the change in full width at half maximum (FWHM) was <10%. For photopeak images, substantial scatter around the phantom source extended to >5 cm outside it; scatter correction improved image contrast by removing this scatter around the sources. Patient imaging in the first cohort (n=3) showed little redistribution of 212Pb-TCMC-trastuzumab out of the peritoneal cavity. Compared to the early post-treatment images, the 18-hour post-injection images illustrated the shift to a more uniform anterior/posterior abdominal distribution and the loss of intensity due to radioactive decay. Conclusion: Use of a medium-energy collimator, a 15% width for the 238.6 keV photopeak, and a 7.5% upper scatter window is adequate for quantification of 212Pb radioactivity inside the peritoneal cavity for alpha radioimmunotherapy of ovarian cancer. Research Support: AREVA Med, NIH 1UL1RR025777-01.
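
    The upper-scatter-window correction mentioned above amounts to subtracting a scaled scatter estimate from the photopeak image. The sketch below shows that subtraction step only; the scaling factor k is a placeholder that would in practice be fixed by phantom calibration.

    import numpy as np

    def upper_window_scatter_correction(photopeak, scatter_window, k=1.0):
        """Subtract a scaled estimate of septal-penetration/down-scatter
        (taken from an upper energy window) from the 238.6 keV photopeak
        image, clipping negatives. k is a window-width/sensitivity scaling
        factor that would be calibrated with phantom measurements; the
        default value here is only a placeholder.
        """
        corrected = photopeak - k * scatter_window
        return np.clip(corrected, 0.0, None)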

  20. First measurement of proton's charge form factor at very low Q2 with initial state radiation

    NASA Astrophysics Data System (ADS)

    Mihovilovič, M.; Weber, A. B.; Achenbach, P.; Beranek, T.; Beričič, J.; Bernauer, J. C.; Böhm, R.; Bosnar, D.; Cardinali, M.; Correa, L.; Debenjak, L.; Denig, A.; Distler, M. O.; Esser, A.; Ferretti Bondy, M. I.; Fonvieille, H.; Friedrich, J. M.; Friščić, I.; Griffioen, K.; Hoek, M.; Kegel, S.; Kohl, Y.; Merkel, H.; Middleton, D. G.; Müller, U.; Nungesser, L.; Pochodzalla, J.; Rohrbeck, M.; Sánchez Majos, S.; Schlimme, B. S.; Schoth, M.; Schulz, F.; Sfienti, C.; Širca, S.; Štajner, S.; Thiel, M.; Tyukin, A.; Vanderhaeghen, M.; Weinriefer, M.

    2017-08-01

    We report on a new experimental method based on initial-state radiation (ISR) in e-p scattering, which exploits the radiative tail of the elastic peak to study the properties of electromagnetic processes and to extract the proton charge form factor (GEp) at extremely small Q². The ISR technique was implemented in an experiment at the three-spectrometer facility of the Mainz Microtron (MAMI). This led to a precise validation of the radiative corrections far away from the elastic line and provided the first measurements of GEp for 0.001 ≤ Q² ≤ 0.004 (GeV/c)².

  1. Utilizing X-ray gas velocity measurements as a new probe of AGN feedback in giant elliptical galaxies

    NASA Astrophysics Data System (ADS)

    Ogorzalek, Anna; Zhuravleva, Irina; Allen, Steven W.; Pinto, Ciro; Werner, Norbert; Mantz, Adam; Canning, Rebecca; Fabian, Andrew C.; Kaastra, Jelle S.; de Plaa, Jelle

    2017-08-01

    The velocity structure of the hot atmospheres of massive early-type galaxies remains a key open question in our understanding of galaxy formation and mechanical AGN feedback. Using a combination of resonant scattering and direct line-broadening techniques applied to deep XMM-Newton Reflection Grating Spectrometer observations, we have measured, for the first time, turbulent velocities in the cores of 13 nearby giant early-type galaxies, opening up the possibility of population studies of hot gas motions in such objects. Our method has also been successfully applied to the Hitomi Perseus observation, serving as an independent velocity probe of the cluster ICM. In this talk I will introduce our measurements and discuss their implications for the physics of kinetic AGN feedback. I will also outline future directions, emphasizing the role of resonant scattering in studying the gas dynamics of cooler (~1 keV) systems, such as giant galaxies, as well as its importance for the correct interpretation of high-resolution X-ray spectra from XARM and Athena.

  2. Wind Speed Measurement from Bistatically Scattered GPS Signals

    NASA Technical Reports Server (NTRS)

    Garrison, James L.; Komjathy, Attila; Zavorotny, Valery U.; Katzberg, Stephen J.

    1999-01-01

    Instrumentation and retrieval algorithms are described which use the forward, or bistatically scattered, range-coded signals from the Global Positioning System (GPS) radio navigation system for the measurement of sea surface roughness. This roughness is known to be related directly to the surface wind speed. Experiments were conducted from aircraft along the TOPEX ground track and over experimental surface-truth buoys. These flights used a receiver capable of recording the cross-correlation power in the reflected signal. The shape of this power distribution was then compared against analytical models derived from geometric optics. Two techniques for matching these functions were studied. The first recognized that the most significant information content in the reflected signal is contained in the trailing-edge slope of the waveform. The second attempted to match the complete shape of the waveform by approximating it as a series expansion and obtaining the nonlinear least-squares estimate. Discussion is also presented on anomalies in the receiver operation and their identification and correction.

  3. Maximum likelihood techniques applied to quasi-elastic light scattering

    NASA Technical Reports Server (NTRS)

    Edwards, Robert V.

    1992-01-01

    There is a need for an automatic procedure that reliably estimates the quality of particle-size measurements from QELS (Quasi-Elastic Light Scattering). Obtaining the measurement itself, before any error estimates can be made, is already a problem, because it comes from a very indirect measurement of a signal derived from the motion of particles in the system and requires the solution of an inverse problem. The eigenvalue structure of the transform that generates the signal is such that an arbitrarily small amount of noise can obliterate parts of any practical inversion spectrum. This project uses Maximum Likelihood Estimation (MLE) as a framework to generate a theory and a functioning set of software that oversees the measurement process and extracts the particle-size information, while at the same time providing error estimates for those measurements. The theory involved verifying a correct form of the covariance matrix for the noise on the measurement and then estimating the particle-size parameters using a modified histogram approach.
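
    As a zeroth-order illustration of the QELS measurement problem (ignoring the ill-posed multi-size inversion and the MLE error analysis that the project itself addresses), the sketch below fits a single-exponential decay to a field autocorrelation function and converts the decay rate to a hydrodynamic radius via the Stokes-Einstein relation. All parameter values and the monodisperse assumption are illustrative only.

    import numpy as np

    def hydrodynamic_radius(tau, g1, wavelength_m, angle_rad,
                            n_medium=1.33, temperature_k=293.15, viscosity=1.0e-3):
        """Monodisperse analysis of a QELS field autocorrelation
        g1(tau) = exp(-Gamma*tau): fit Gamma by log-linear least squares,
        convert to a diffusion coefficient via Gamma = D q^2, and to a
        hydrodynamic radius with Stokes-Einstein. Sketch only; real data
        require the full inverse-problem treatment discussed above.
        """
        k_b = 1.380649e-23
        q = 4.0 * np.pi * n_medium / wavelength_m * np.sin(angle_rad / 2.0)
        gamma = -np.polyfit(tau, np.log(g1), 1)[0]    # decay rate (1/s)
        diffusion = gamma / q**2                      # m^2/s
        return k_b * temperature_k / (6.0 * np.pi * viscosity * diffusion)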

  4. Evaluation of atmospheric correction algorithms for processing SeaWiFS data

    NASA Astrophysics Data System (ADS)

    Ransibrahmanakul, Varis; Stumpf, Richard; Ramachandran, Sathyadev; Hughes, Kent

    2005-08-01

    To enable the production of the best chlorophyll products from SeaWiFS data, NOAA (Coastwatch and NOS) evaluated various atmospheric correction algorithms by comparing the satellite-derived water reflectance from each algorithm with in situ data. Gordon and Wang (1994) introduced a method to correct for Rayleigh and aerosol scattering in the atmosphere so that water reflectance may be derived from the radiance measured at the top of the atmosphere. However, since that correction assumed near-infrared scattering to be negligible in coastal waters, an invalid assumption, the method overestimates the atmospheric contribution and consequently underestimates water reflectance in the lower-wavelength bands on extrapolation. Several improved methods to estimate the near-infrared correction exist: Siegel et al. (2000); Ruddick et al. (2000); Stumpf et al. (2002); and Stumpf et al. (2003), where an absorbing-aerosol correction is also applied along with an additional 1.01% calibration adjustment for the 412 nm band. The evaluation shows that the near-infrared correction developed by Stumpf et al. (2003) results in an overall minimum error for U.S. waters. As of July 2004, NASA (SEADAS) has selected this as the default method for the atmospheric correction used to produce chlorophyll products.
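
    The basic decomposition behind these schemes is rho_t = rho_r + rho_a + t*rho_w, with the aerosol term estimated in the near infrared and extrapolated to shorter wavelengths. The toy sketch below shows only that bookkeeping; the exponential epsilon extrapolation and its coefficient are placeholders, not the operational SeaWiFS/SEADAS algorithm.

    import numpy as np

    def water_reflectance(rho_toa, rho_rayleigh, t_diffuse,
                          rho_aerosol_nir, lam_nm, lam_nir_nm, c_per_nm=0.001):
        """Toy Gordon-and-Wang-style decomposition rho_t = rho_r + rho_a + t*rho_w:
        the aerosol reflectance is taken from a NIR band (where rho_w is assumed
        negligible) and extrapolated to the shorter wavelength with a
        single-parameter exponential epsilon model. The extrapolation law and
        the value of c_per_nm are illustrative assumptions only.
        """
        epsilon = np.exp(c_per_nm * (lam_nir_nm - lam_nm))   # rho_a(lam)/rho_a(NIR)
        rho_a = epsilon * rho_aerosol_nir
        return (rho_toa - rho_rayleigh - rho_a) / t_diffuse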

  5. Radar images analysis for scattering surfaces characterization

    NASA Astrophysics Data System (ADS)

    Piazza, Enrico

    1998-10-01

    In view of the different problems and techniques related to the detection and recognition of airplanes and vehicles moving on the airport surface, the present work mainly deals with the processing of images gathered by a high-resolution radar sensor. The radar images used to test the investigated algorithms come from sequences of images obtained in field experiments carried out by the Electronic Engineering Department of the University of Florence. The radar is the Ka-band radar operating at the 'Leonardo da Vinci' Airport in Fiumicino (Rome). The images obtained from the radar scan converter are digitized and put into x, y (pixel) coordinates. For a correct matching of the images, these are converted into true geometrical coordinates (meters) on the basis of fixed points on an airport map. By correlating the airplane 2-D multipoint template with actual radar images, the value of the signal at the points involved in the template can be extracted. Results for many observations show a typical response for the main sections of the fuselage and the wings. For the fuselage, the back-scattered echo is low at the prow, becomes larger near the center of the aircraft, and then decreases again toward the tail. For the wings, the signal grows with a fairly regular slope from the fuselage to the tips, where the signal is strongest.

  6. Assessment and correction of turbidity effects on Raman observations of chemicals in aqueous solutions.

    PubMed

    Sinfield, Joseph V; Monwuba, Chike K

    2014-01-01

    Improvements in diode laser, fiber optic, and data acquisition technologies are enabling increased use of Raman spectroscopic techniques for both in lab and in situ water analysis. Aqueous media encountered in the natural environment often contain suspended solids that can interfere with spectroscopic measurements, yet removal of these solids, for example, via filtration, can have even greater adverse effects on the extent to which subsequent measurements are representative of actual field conditions. In this context, this study focuses on evaluation of turbidity effects on Raman spectroscopic measurements of two common environmental pollutants in aqueous solution: ammonium nitrate and trichloroethylene. The former is typically encountered in the runoff from agricultural operations and is a strong scatterer that has no significant influence on the Raman spectrum of water. The latter is a commonly encountered pollutant at contaminated sites associated with degreasing and cleaning operations and is a weak scatterer that has a significant influence on the Raman spectrum of water. Raman observations of each compound in aqueous solutions of varying turbidity created by doping samples with silica flour with grain sizes ranging from 1.6 to 5.0 μm were employed to develop relationships between observed Raman signal strength and turbidity level. Shared characteristics of these relationships were then employed to define generalized correction methods for the effect of turbidity on Raman observations of compounds in aqueous solution.
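
    The abstract reports empirically derived relationships between Raman signal strength and turbidity; a generic version of such a correction, assuming a simple exponential attenuation of the observed peak intensity with turbidity, is sketched below. The functional form and the fitting approach are assumptions for illustration, not the specific relationships of the study.

    import numpy as np

    def fit_turbidity_response(turbidity, intensity):
        """Fit I(T) = I0 * exp(-k * T) to calibration data by log-linear
        least squares. The exponential form is an assumed generic model for
        the attenuation of a Raman peak with turbidity."""
        slope, ln_i0 = np.polyfit(turbidity, np.log(intensity), 1)
        return np.exp(ln_i0), -slope          # I0 and attenuation coefficient k

    def correct_intensity(i_obs, turbidity, k):
        """Rescale an observed Raman intensity to its clear-water equivalent
        using the fitted attenuation coefficient k."""
        return i_obs * np.exp(k * turbidity)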

  7. A Unified Treatment of the Acoustic and Elastic Scattered Waves from Fluid-Elastic Media

    NASA Astrophysics Data System (ADS)

    Denis, Max Fernand

    In this thesis, contributions are made to the numerical modeling of the scattered fields from fluid-filled poroelastic materials. Of particular interest are highly porous materials that demonstrate strong contrast to the saturating fluid. A Biot analysis of the porous medium serves as the starting point for the elastic-solid and pore-fluid governing equations of motion. The longitudinal scattering waves of the elastic-solid mode and the pore-fluid mode are modeled by the Kirchhoff-Helmholtz integral equation. The integral equation is evaluated using a series approximation describing the successive perturbation of the material contrasts. To extend the series' validity to larger domains, rational-fraction extrapolation methods are employed. The local Padé approximant procedure is a technique that allows one to extrapolate a scattered field from small contrast to larger values using Padé approximants. To ensure the accuracy of the numerical model, comparisons are made with the exact solution of scattering from a fluid sphere. Mean absolute error analyses yield convergent and accurate results. In addition, the numerical model correctly predicts the Bragg peaks for a periodic lattice of fluid spheres. In the case of trabecular bones, the far-field scattering pressure attenuation is a superposition of the elastic-solid-mode and pore-fluid-mode generated waves from the surrounding fluid and poroelastic boundaries. The attenuation is linearly dependent on frequency between 0.2 and 0.6 MHz. The slope of the attenuation is nonlinear with porosity and does not reflect the mechanical properties of the trabecular bone. The attenuation shows the anisotropic effects of the trabecular structure. Thus, ultrasound can possibly be employed to non-invasively predict the principal structural orientation of trabecular bones.

  8. [New type distributed optical fiber temperature sensor (DTS) based on Raman scattering and its application].

    PubMed

    Wang, Jian-Feng; Liu, Hong-Lin; Zhang, Shu-Qin; Yu, Xiang-Dong; Sun, Zhong-Zhou; Jin, Shang-Zhong; Zhang, Zai-Xuan

    2013-04-01

    The basic principles, development trends, and application status of the distributed optical fiber Raman temperature sensor (DTS) are introduced. Performance parameters of a DTS system include the sensing optical fiber length, temperature measurement uncertainty, spatial resolution, and measurement time. These parameters are correlated, and it is difficult to improve them all at the same time with a single technology. Therefore a variety of key techniques, such as Raman amplification, pulse coding, Raman-correlated dual-wavelength self-correction, and embedded optical switching, are investigated to improve the performance of the DTS system. A 1467 nm continuous laser is used as the pump laser, and the light source of the DTS system (a 1550 nm pulse laser) is amplified. When the length of the sensing optical fiber is 50 km, the Raman gain is about 17 dB. The Raman gain can partially compensate the transmission loss of the optical fiber, so that the sensing length can reach 50 km. In a DTS system using the pulse coding technique, the pulse laser is coded by a 211-bit loop encoder and correlation calculation is used to demodulate the temperature. The encoded laser signal is correlated, whereas the noise is not, so the signal-to-noise ratio (SNR) of the DTS system can be improved significantly. Experiments were carried out in DTS systems with single-mode and multimode optical fiber, respectively; the temperature measurement uncertainty reaches 1 °C in both cases. In a DTS system using the Raman-correlated dual-wavelength self-correction technique, the wavelength difference of the two light sources must equal one Raman frequency shift in the optical fiber; for example, if the wavelength of the main laser is 1550 nm, the wavelength of the second laser must be 1450 nm. The spatial resolution of the DTS system is improved to 2 m by using the dual-wavelength self-correction technique. An optical switch is embedded in the DTS system, so that the number of temperature measurement channels is multiplied and the total length of the sensing optical fiber is effectively extended, forming an optical fiber sensor network.

  9. Transient radiative transfer in a scattering slab considering polarization.

    PubMed

    Yi, Hongliang; Ben, Xun; Tan, Heping

    2013-11-04

    The transient and polarization characteristics must both be considered for a complete and correct description of short-pulse laser transfer in a scattering medium. A Monte Carlo (MC) method combined with a time-shift and superposition principle is developed to simulate transient vector (polarized) radiative transfer in a scattering medium. The transient vector radiative transfer matrix (TVRTM) is defined to describe the transient polarization behavior of a short-pulse laser propagating in the scattering medium. Based on the definition of reflectivity, a new criterion for reflection at a Fresnel surface is presented. In order to improve the computational efficiency and accuracy, a time-shift and superposition principle is applied to the MC model for transient vector radiative transfer. The results for transient scalar radiative transfer and steady-state vector radiative transfer are compared with those in the published literature, and excellent agreement is observed, which validates the correctness of the present model. Finally, transient radiative transfer is simulated considering the polarization effect of a short-pulse laser in a scattering medium, and the distributions of the Stokes vector in angular and temporal space are presented.
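
    The transient bookkeeping underlying such Monte Carlo schemes is simply that each photon's arrival time equals its accumulated path length divided by the speed of light in the medium. The scalar (unpolarized) slab sketch below illustrates only that part; the Stokes-vector tracking and the time-shift/superposition acceleration of the paper are omitted, isotropic scattering is assumed, and all parameter values are placeholders.

    import numpy as np

    def transient_slab_mc(n_photons=100_000, mu_s=1.0, mu_a=0.01,
                          thickness=5.0, c=1.0, seed=0):
        """Minimal time-resolved (scalar) Monte Carlo in a 1-D slab: each
        photon accumulates geometric path length, and its exit time is
        path/c. Returns the arrival times of transmitted photons."""
        rng = np.random.default_rng(seed)
        mu_t = mu_s + mu_a
        exit_times = []
        for _ in range(n_photons):
            z, uz, path = 0.0, 1.0, 0.0
            while True:
                step = -np.log(rng.random()) / mu_t
                z_new = z + uz * step
                if z_new < 0.0 or z_new > thickness:          # leaves the slab
                    boundary = 0.0 if z_new < 0.0 else thickness
                    path += abs((boundary - z) / uz)           # path to the face
                    if z_new > thickness:
                        exit_times.append(path / c)            # transmitted photon
                    break
                path += step
                z = z_new
                if rng.random() < mu_a / mu_t:
                    break                                      # absorbed
                uz = 2.0 * rng.random() - 1.0                  # isotropic redirection
        return np.array(exit_times)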

  10. Effect of Multiple Scattering on the Compton Recoil Current Generated in an EMP, Revisited

    DOE PAGES

    Farmer, William A.; Friedman, Alex

    2015-06-18

    Multiple scattering has historically been treated in EMP modeling through the obliquity factor. The validity of this approach is examined here. A simplified model problem, which correctly captures cyclotron motion, Doppler shifting due to the electron motion, and multiple scattering, is first considered. The simplified problem is solved in three ways: with the obliquity factor, by Monte Carlo, and with a Fokker-Planck finite-difference approach. Because of the Doppler effect, skewness occurs in the distribution. It is demonstrated that the obliquity factor does not correctly capture this skewness, but the Monte Carlo and Fokker-Planck finite-difference approaches do. The obliquity factor and Fokker-Planck finite-difference approaches are then compared in a fuller treatment, which includes the initial Klein-Nishina distribution of the electrons and the momentum dependence of both drag and scattering. It is found that, in general, the obliquity factor is adequate for most situations. However, as the gamma energy increases and the Klein-Nishina distribution becomes more peaked in the forward direction, skewness in the distribution causes greater disagreement between the obliquity factor and a more accurate model of multiple scattering.

  11. Ocean observations with EOS/MODIS: Algorithm development and post launch studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1996-01-01

    An investigation of the influence of stratospheric aerosol on the performance of the atmospheric correction algorithm is nearly complete. The results indicate how the performance of the algorithm is degraded if the stratospheric aerosol is ignored. Use of the MODIS 1380 nm band to effect a correction for stratospheric aerosols was also studied. Simple algorithms, such as subtracting the reflectance at 1380 nm from the visible and near-infrared bands, can significantly reduce the error, but only if the diffuse transmittance of the aerosol layer is taken into account. The atmospheric correction code has been modified for use with absorbing aerosols. Tests of the code showed that, in contrast to nonabsorbing aerosols, the retrievals were strongly influenced by the vertical structure of the aerosol, even when the candidate aerosol set was restricted to a set appropriate to the absorbing aerosol. This will further complicate the problem of atmospheric correction in an atmosphere with strongly absorbing aerosols. Our whitecap radiometer system and solar aureole camera were both tested at sea and performed well. Investigation of a technique to remove the effects of residual instrument polarization sensitivity was initiated and applied to an instrument possessing (approx.) 3-4 times the polarization sensitivity expected for MODIS. Preliminary results suggest that for such an instrument, elimination of the polarization effect is possible at the required level of accuracy by estimating the polarization of the top-of-atmosphere radiance to be that expected for a pure Rayleigh-scattering atmosphere. This may be of significance for the design of a follow-on MODIS instrument. W. M. Balch participated in two month-long cruises to the Arabian Sea, measuring coccolithophore abundance, production, and optical properties. A thorough understanding of the relationship between calcite abundance and light scatter, in situ, will provide the basis for a generic suspended-calcite algorithm.

  12. NEMA NU 4-2008 validation and applications of the PET-SORTEO Monte Carlo simulations platform for the geometry of the Inveon PET preclinical scanner

    NASA Astrophysics Data System (ADS)

    Boisson, F.; Wimberley, C. J.; Lehnert, W.; Zahra, D.; Pham, T.; Perkins, G.; Hamze, H.; Gregoire, M.-C.; Reilhac, A.

    2013-10-01

    Monte Carlo-based simulation of positron emission tomography (PET) data plays a key role in the design and optimization of data correction and processing methods. Our first aim was to adapt and configure the PET-SORTEO Monte Carlo simulation program for the geometry of the widely distributed Inveon PET preclinical scanner manufactured by Siemens Preclinical Solutions. The validation was carried out against actual measurements performed on the Inveon PET scanner at the Australian Nuclear Science and Technology Organisation and at the Brain & Mind Research Institute, strictly following the NEMA NU 4-2008 standard. The comparison of simulated and experimental performance measurements included spatial resolution, sensitivity, scatter fraction and count rates, image quality, and Derenzo phantom studies. Results showed that PET-SORTEO reliably reproduces the performance of this Inveon preclinical system. In addition, imaging studies showed that the PET-SORTEO simulation program provides raw data for the Inveon scanner that can be fully corrected and reconstructed using the same programs as for the actual data. All correction techniques (attenuation, scatter, randoms, dead time, and normalization) can be applied to the simulated data, leading to fully quantitative reconstructed images. In the second part of the study, we demonstrated its ability to generate fast and realistic biological studies. PET-SORTEO is a workable and reliable tool that can be used, in a classical way, to validate and/or optimize a single PET data processing step such as a reconstruction method. However, we demonstrated that, by combining realistic simulated biological studies ([11C]Raclopride here) involving different condition groups, simulation also allows one to assess and optimize the data correction, reconstruction and processing chain as a whole, specifically for each biological study, which is our ultimate intent.

  13. Green's function multiple-scattering theory with a truncated basis set: An augmented-KKR formalism

    NASA Astrophysics Data System (ADS)

    Alam, Aftab; Khan, Suffian N.; Smirnov, A. V.; Nicholson, D. M.; Johnson, Duane D.

    2014-11-01

    The Korringa-Kohn-Rostoker (KKR) Green's function, multiple-scattering theory is an efficient site-centered, electronic-structure technique for addressing an assembly of N scatterers. Wave functions are expanded in a spherical-wave basis on each scattering center and indexed up to a maximum orbital and azimuthal number L_max = (l,m)_max, while scattering matrices, which determine spectral properties, are truncated at L_tr = (l,m)_tr, where phase shifts delta_l for l > l_tr are negligible. Historically, L_max is set equal to L_tr, which is correct for large enough L_max but not computationally expedient; a better procedure retains higher-order (free-electron and single-site) contributions for L_max > L_tr with delta_l for l > l_tr set to zero [X.-G. Zhang and W. H. Butler, Phys. Rev. B 46, 7433 (1992), 10.1103/PhysRevB.46.7433]. We present a numerically efficient and accurate augmented-KKR Green's function formalism that solves the KKR equations by exact matrix inversion [an R^3 process with rank N(l_tr+1)^2] and includes higher-L contributions via linear algebra [an R^2 process with rank N(l_max+1)^2]. The augmented-KKR approach yields properly normalized wave functions, numerically cheaper basis-set convergence, and a total charge density and electron count that agree with Lloyd's formula. We apply our formalism to fcc Cu, bcc Fe, and L1_0 CoPt and present the numerical results for accuracy and for the convergence of the total energies, Fermi energies, and magnetic moments versus L_max for a given L_tr.

  14. Optical elastic scattering for early label-free identification of clinical pathogens

    NASA Astrophysics Data System (ADS)

    Genuer, Valentin; Gal, Olivier; Méteau, Jérémy; Marcoux, Pierre; Schultz, Emmanuelle; Lacot, Éric; Maurin, Max; Dinten, Jean-Marc

    2016-03-01

    We report here on the ability of elastic light scattering to discriminate Gram+, Gram- and yeast microcolonies at an early stage of growth (6 h). Our technique is non-invasive, low cost, and requires neither skilled operators nor reagents; it is therefore compatible with automation. It is based on the analysis of the scattering pattern (scatterogram) generated by a bacterial microcolony growing on agar when placed in the path of a laser beam. Measurements are performed directly on closed Petri dishes. The characteristic features of a given scatterogram are first computed by projecting the pattern onto the Zernike orthogonal basis. The resulting data are then compared to a database so that machine learning can yield an identification result. A 10-fold cross-validation was performed on a database covering 8 species (15 strains, 1906 scatterograms) at 6 h of incubation. It yielded a 94% correct classification rate between Gram+, Gram- and yeasts. Results can be improved by using a more relevant function basis for the projections, such as Fourier-Bessel functions. A fully integrated instrument has been installed at the Grenoble hospital's laboratory of bacteriology, and a validation campaign has been started for the early screening of MSSA and MRSA (methicillin-susceptible and methicillin-resistant Staphylococcus aureus) carriers. Up to now, all published studies of elastic scattering were performed in a forward mode, which is restricted to transparent media. However, in clinical diagnostics most media are opaque, such as blood-supplemented agar. That is why we propose a novel scheme capable of collecting back-scattered light, which provides comparable results.

  15. Green's function multiple-scattering theory with a truncated basis set: An augmented-KKR formalism

    DOE PAGES

    Alam, Aftab; Khan, Suffian N.; Smirnov, A. V.; ...

    2014-11-04

    The Korringa-Kohn-Rostoker (KKR) Green's function, multiple-scattering theory is an efficient site-centered, electronic-structure technique for addressing an assembly of N scatterers. Wave functions are expanded in a spherical-wave basis on each scattering center and indexed up to a maximum orbital and azimuthal number L_max = (l,m)_max, while scattering matrices, which determine spectral properties, are truncated at L_tr = (l,m)_tr, where phase shifts delta_l for l > l_tr are negligible. Historically, L_max is set equal to L_tr, which is correct for large enough L_max but not computationally expedient; a better procedure retains higher-order (free-electron and single-site) contributions for L_max > L_tr with delta_l for l > l_tr set to zero [Zhang and Butler, Phys. Rev. B 46, 7433]. We present a numerically efficient and accurate augmented-KKR Green's function formalism that solves the KKR equations by exact matrix inversion [an R^3 process with rank N(l_tr+1)^2] and includes higher-L contributions via linear algebra [an R^2 process with rank N(l_max+1)^2]. The augmented-KKR approach yields properly normalized wave functions, numerically cheaper basis-set convergence, and a total charge density and electron count that agree with Lloyd's formula. We apply our formalism to fcc Cu, bcc Fe, and L1_0 CoPt, and present the numerical results for accuracy and for the convergence of the total energies, Fermi energies, and magnetic moments versus L_max for a given L_tr.

  16. Application of a multiple scattering model to estimate optical depth, lidar ratio and ice crystal effective radius of cirrus clouds observed with lidar.

    NASA Astrophysics Data System (ADS)

    Gouveia, Diego; Baars, Holger; Seifert, Patric; Wandinger, Ulla; Barbosa, Henrique; Barja, Boris; Artaxo, Paulo; Lopes, Fabio; Landulfo, Eduardo; Ansmann, Albert

    2018-04-01

    Lidar measurements of cirrus clouds are highly influenced by multiple scattering (MS). We therefore developed an iterative approach to correct elastic backscatter lidar signals for multiple scattering to obtain best estimates of single-scattering cloud optical depth and lidar ratio as well as of the ice crystal effective radius. The approach is based on the exploration of the effect of MS on the molecular backscatter signal returned from above cloud top.

  17. Electroweak radiative corrections for polarized Moller scattering at the future 11 GeV JLab experiment

    DOE PAGES

    Aleksejevs, Aleksandrs; Barkanova, Svetlana; Ilyichev, Alexander; ...

    2010-11-19

    We perform updated and detailed calculations of the complete NLO set of electroweak radiative corrections to parity-violating e⁻e⁻ → e⁻e⁻(γ) scattering asymmetries at energies relevant for the ultra-precise Moller experiment coming soon at JLab. Our numerical results are presented for a range of experimental cuts, and the relative importance of the various contributions is analyzed. In addition, we provide very compact expressions that are analytically free of non-physical parameters and show them to be valid for fast yet accurate estimates.

  18. Hadron mass corrections in semi-inclusive deep-inelastic scattering

    DOE PAGES

    Guerrero Teran, Juan Vicente; Ethier, James J.; Accardi, Alberto; ...

    2015-09-24

    The spin-dependent cross sections for semi-inclusive lepton-nucleon scattering are derived in the framework of collinear factorization, including the effects of the masses of the target and the produced hadron at finite Q². At leading order the cross sections factorize into products of parton distribution and fragmentation functions evaluated in terms of new, mass-dependent scaling variables. Furthermore, the size of the hadron mass corrections is estimated at kinematics relevant for current and future experiments, and the implications for the extraction of parton distributions from semi-inclusive measurements are discussed.

  19. A Phenomenological Determination of the Pion-Nucleon Scattering Lengths from Pionic Hydrogen

    NASA Astrophysics Data System (ADS)

    Ericson, T. E. O.; Loiseau, B.; Wycech, S.

    A model-independent expression for the electromagnetic corrections to a phenomenological hadronic pion-nucleon (πN) scattering length a_h, extracted from pionic hydrogen, is obtained. In a non-relativistic approach and using an extended charge distribution, these corrections are derived up to terms of order α² log α in the limit of a short-range hadronic interaction. We infer a_h(π⁻p) = 0.0870(5) m_π⁻¹, which gives for the πNN coupling, through the GMO relation, g²(π±pn)/(4π) = 14.04(17).

  20. Risk of whole body radiation exposure and protective measures in fluoroscopically guided interventional techniques: a prospective evaluation.

    PubMed

    Manchikanti, Laxmaiah; Cash, Kim A; Moss, Tammy L; Rivera, Jose; Pampati, Vidyasagar

    2003-08-06

    BACKGROUND: Fluoroscopic guidance is frequently utilized in interventional pain management. The major purpose of fluoroscopy is correct needle placement to ensure target specificity and accurate delivery of the injectate. Radiation exposure may be associated with risks to the physician, the patient, and personnel. While there have been many studies evaluating the risk of radiation exposure and techniques to reduce this risk for the upper part of the body, the literature is scant in evaluating the risk of radiation exposure for the lower part of the body. METHODS: Radiation exposure risk to the physician was evaluated in 1156 patients undergoing interventional procedures under fluoroscopy performed by 3 physicians. Scattered radiation exposure to the upper and lower body, inside and outside the lead apron, was monitored. RESULTS: The average exposure per procedure was 12.0 ± 9.8 seconds, 9.0 ± 0.37 seconds, and 7.5 ± 1.27 seconds in Groups I, II, and III respectively. Scatter radiation exposure ranged from a low of 3.7 ± 0.29 seconds for caudal/interlaminar epidurals to 61.0 ± 9.0 seconds for discography. Inside the apron, over the thyroid collar on the neck, the scatter radiation exposure was 68 mREM in Group I, consisting of 201 patients who had a total of 330 procedures, with an average of 0.2060 mREM per procedure, and 25 mREM in Group II, consisting of 446 patients who had a total of 662 procedures, with an average of 0.0378 mREM per procedure. The scatter radiation exposure was 0 mREM in Group III, consisting of 509 patients who had a total of 827 procedures. Increased levels of exposure were observed in Groups I and II compared to Group III, and in Group I compared to Group II. Groin exposure showed 0 mREM in Groups I and II and 15 mREM in Group III. Scatter radiation exposure for the groin outside the apron was 1260 mREM in Group I, or 3.8182 mREM per procedure; 400 mREM in Group II, or 0.6042 mREM per procedure; and 1152 mREM in Group III, or 1.3930 mREM per procedure. CONCLUSION: The results of this study showed that scatter radiation exposure to both the upper and lower parts of the physician's body is present. Traditional protective measures offered protection to the upper body only.

  1. Absolute activity quantitation from projections using an analytical approach: comparison with iterative methods in Tc-99m and I-123 brain SPECT

    NASA Astrophysics Data System (ADS)

    Fakhri, G. El; Kijewski, M. F.; Moore, S. C.

    2001-06-01

    Estimates of SPECT activity within certain deep brain structures could be useful for clinical tasks such as early prediction of Alzheimer's disease with Tc-99m or Parkinson's disease with I-123; however, such estimates are biased by poor spatial resolution and by inaccurate scatter and attenuation corrections. We compared an analytical approach (AA) to more accurate quantitation with a slower iterative approach (IA). Monte Carlo simulated projections of 12 normal and 12 pathologic Tc-99m perfusion studies, as well as 12 normal and 12 pathologic I-123 neurotransmission studies, were generated using a digital brain phantom and corrected for scatter by a multispectral fitting procedure. The AA included attenuation correction by a modified Metz-Fan algorithm and activity estimation by a technique that incorporated Metz filtering to compensate for variable collimator response (VCR); the IA modeled attenuation and VCR in the projector/backprojector of an ordered-subsets expectation-maximization (OSEM) algorithm. Bias and standard deviation over the 12 normal and 12 pathologic patients were calculated with respect to the reference values in the corpus callosum, caudate nucleus, and putamen. The IA and AA yielded similar quantitation results in both Tc-99m and I-123 studies in all brain structures considered, in both normal and pathologic patients. The bias with respect to the reference activity distributions was less than 7% for Tc-99m studies but greater than 30% for I-123 studies, due to the partial volume effect in the striata. Our results were validated using physical I-123 acquisitions of an anthropomorphic brain phantom. The AA yielded quantitation accuracy comparable to that obtained with the IA, while requiring much less processing time. However, in most conditions, the IA yielded lower noise for the same bias than did the AA.

  2. N3LO corrections to jet production in deep inelastic scattering using the Projection-to-Born method

    NASA Astrophysics Data System (ADS)

    Currie, J.; Gehrmann, T.; Glover, E. W. N.; Huss, A.; Niehues, J.; Vogt, A.

    2018-05-01

    Computations of higher-order QCD corrections for processes with exclusive final states require a subtraction method for real-radiation contributions. We present the first-ever generalisation of a subtraction method for third-order (N3LO) QCD corrections. The Projection-to-Born method is used to combine inclusive N3LO coefficient functions with an exclusive second-order (NNLO) calculation for a final state with an extra jet. The input requirements, advantages, and potential applications of the method are discussed, and validations at lower orders are performed. As a test case, we compute the N3LO corrections to kinematical distributions and production rates for single-jet production in deep inelastic scattering in the laboratory frame, and compare them with data from the ZEUS experiment at HERA. The corrections are small in the central rapidity region, where they stabilize the predictions to sub per-cent level. The corrections increase substantially towards forward rapidity where large logarithmic effects are expected, thereby yielding an improved description of the data in this region.

  3. GEO-LEO reflectance band inter-comparison with BRDF and atmospheric scattering corrections

    NASA Astrophysics Data System (ADS)

    Chang, Tiejun; Xiong, Xiaoxiong Jack; Keller, Graziela; Wu, Xiangqian

    2017-09-01

    The inter-comparison of the reflective solar bands between instruments onboard a geostationary-orbit satellite and onboard a low-Earth-orbit satellite is very helpful for assessing their calibration consistency. GOES-R was launched on November 19, 2016, and Himawari 8 was launched on October 7, 2014. Unlike the previous GOES instruments, the Advanced Baseline Imager on GOES-16 (GOES-R became GOES-16 on November 29, after it reached orbit) and the Advanced Himawari Imager (AHI) on Himawari 8 have onboard calibrators for the reflective solar bands. The assessment of calibration is important for enhancing their product quality. MODIS and VIIRS, with their stringent calibration requirements and excellent on-orbit calibration performance, provide good references. The simultaneous nadir overpass (SNO) and ray-matching approaches are widely used inter-comparison methods for reflective solar bands. In this work, the inter-comparisons are performed over a pseudo-invariant target. The use of a stable and uniform calibration site provides a comparison at an appropriate reflectance level, accurate adjustment for band spectral coverage differences, reduced impact from pixel mismatching, and consistency of the BRDF and atmospheric corrections. The site in this work is a desert site in Australia (latitude 29.0 S; longitude 139.8 E). Due to the differences in solar and view angles, two corrections are applied to obtain comparable measurements. The first is the atmospheric scattering correction: the satellite sensor measurements are top-of-atmosphere reflectances, and the scattering, especially Rayleigh scattering, should be removed so that the ground reflectance can be derived. Secondly, the angle differences magnify the BRDF effect, so the ground reflectance should be corrected to obtain comparable measurements. The atmospheric correction is performed using a vector version of the Second Simulation of a Satellite Signal in the Solar Spectrum model, and the BRDF correction is performed using a semi-empirical model. AHI band 1 (0.47 μm) shows good agreement with VIIRS band M3, with a difference of 0.15%. AHI band 5 (1.69 μm) shows the largest difference in comparison with VIIRS M10.

  4. Development of PET projection data correction algorithm

    NASA Astrophysics Data System (ADS)

    Bazhanov, P. V.; Kotina, E. D.

    2017-12-01

    Positron emission tomography is a modern nuclear medicine method used to examine metabolism and the function of internal organs, and it allows diseases to be diagnosed at an early stage. Mathematical algorithms are widely used not only for image reconstruction but also for PET data correction. In this paper, the implementation of random-coincidence and scatter-correction algorithms is considered, as well as an algorithm for modeling PET projection data acquisition used to verify the corrections.
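
    One common way to build the randoms-correction term mentioned above is to estimate the random-coincidence rate for each detector pair from the singles rates, R_ij = 2*tau*r_i*r_j. The sketch below implements that estimate; it is a generic approach and not necessarily the algorithm used by the authors.

    import numpy as np

    def randoms_from_singles(singles_rates, coincidence_window_s=6e-9):
        """Estimate the random-coincidence rate for every detector pair
        from the singles rates: R_ij = 2 * tau * r_i * r_j. Binning this
        matrix into the sinogram geometry and subtracting it from the
        prompts gives a randoms-corrected projection data set."""
        r = np.asarray(singles_rates, dtype=float)
        return 2.0 * coincidence_window_s * np.outer(r, r)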

  5. A stochastic method for Brownian-like optical transport calculations in anisotropic biosuspensions and blood

    NASA Astrophysics Data System (ADS)

    Miller, Steven

    1998-03-01

    A generic stochastic method is presented that rapidly evaluates numerical bulk flux solutions to the one-dimensional integrodifferential radiative transport equation, for coherent irradiance of optically anisotropic suspensions of nonspheroidal bioparticles, such as blood. As Fermat rays or geodesics enter the suspension, they evolve into a bundle of random paths or trajectories due to scattering by the suspended bioparticles. Overall, this can be interpreted as a bundle of Markov trajectories traced out by a "gas" of Brownian-like point photons being scattered and absorbed by the homogeneous distribution of uncorrelated cells in suspension. By considering the cumulative vectorial intersections of a statistical bundle of random trajectories through sets of interior data planes in the space containing the medium, the effective equivalent information content and behavior of the (generally unknown) analytical flux solutions of the radiative transfer equation rapidly emerge. The fluxes match the analytical diffuse flux solutions in the diffusion limit, which verifies the accuracy of the algorithm. The method is not constrained by the diffusion limit and gives correct solutions for conditions where diffuse solutions are not viable. Unlike conventional Monte Carlo and numerical techniques adapted from neutron transport or nuclear reactor problems that compute scalar quantities, this vectorial technique is fast, easily implemented, adaptable, and viable for a wide class of biophotonic scenarios. By comparison, other analytical or numerical techniques generally become unwieldy, lack viability, or are more difficult to utilize and adapt. Illustrative calculations are presented for blood media at monochromatic wavelengths in the visible spectrum.
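
    The "vectorial intersection" bookkeeping described above can be mimicked in one dimension by tallying signed crossings of photon random walks through a set of interior data planes; the sketch below does this for an isotropically scattering slab. All parameters are illustrative, and this simplified walk is not the author's algorithm.

    import numpy as np

    def plane_flux_tally(n_photons=200_000, mu_s=2.0, mu_a=0.02,
                         thickness=1.0, n_planes=10, seed=1):
        """Launch photon trajectories into a 1-D slab and tally signed
        crossings (+1 forward, -1 backward) at interior data planes; the
        normalized tallies approximate the net flux versus depth."""
        rng = np.random.default_rng(seed)
        planes = np.linspace(0.0, thickness, n_planes + 2)[1:-1]
        net = np.zeros_like(planes)
        mu_t = mu_s + mu_a
        for _ in range(n_photons):
            z, uz = 0.0, 1.0
            while True:
                z_new = z + uz * (-np.log(rng.random()) / mu_t)
                lo, hi = min(z, z_new), max(z, z_new)
                crossed = (planes > lo) & (planes < hi)
                net[crossed] += np.sign(uz)          # signed plane crossings
                if z_new < 0.0 or z_new > thickness:
                    break                            # photon left the slab
                z = z_new
                if rng.random() < mu_a / mu_t:
                    break                            # absorbed
                uz = 2.0 * rng.random() - 1.0        # isotropic redirection
        return planes, net / n_photons               # net flux profile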

  6. Proportional crosstalk correction for the segmented clover at iThemba LABS

    NASA Astrophysics Data System (ADS)

    Bucher, T. D.; Noncolela, S. P.; Lawrie, E. A.; Dinoko, T. R. S.; Easton, J. L.; Erasmus, N.; Lawrie, J. J.; Mthembu, S. H.; Mtshali, W. X.; Shirinda, O.; Orce, J. N.

    2017-11-01

    Reaching new depths in nuclear structure investigations requires new experimental equipment and new techniques of data analysis. Modern γ-ray spectrometers, like AGATA and GRETINA, are now built of new-generation segmented germanium detectors. These advanced detectors are able to reconstruct the trajectory of a γ ray inside the detector. They are powerful detectors, but they need careful characterization, since their output signals are more complex. For instance, for each γ-ray interaction that occurs in a segment of such a detector, additional output signals (called proportional crosstalk), falsely appearing as independent (often negative) energy depositions, are registered on the non-interacting segments. A failure to implement crosstalk correction results in incorrectly measured energies on the segments for two- and higher-fold events. This affects all experiments which rely on the recorded segment energies. Furthermore, incorrectly recorded energies on the segments cause a failure to reconstruct the γ-ray trajectories using Compton scattering analysis. The proportional crosstalk for the iThemba LABS segmented clover was measured and a crosstalk correction was successfully implemented. The measured crosstalk-corrected energies show good agreement with the true γ-ray energies, independent of the number of hit segments, and an improved energy resolution for the segment sum energy was obtained.
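
    If the proportional crosstalk is described by a calibrated matrix acting on the true segment energies, the correction reduces to a linear solve, as sketched below. The assumed model E_meas = M @ E_true and the placeholder matrix values are illustrative only and are not the iThemba LABS calibration.

    import numpy as np

    def correct_crosstalk(measured_energies, crosstalk_matrix):
        """Undo proportional crosstalk between detector segments under the
        model E_meas = M @ E_true, where M has diagonal ~1 and small
        off-diagonal proportional-crosstalk coefficients measured in a
        dedicated calibration; then E_true = M^-1 @ E_meas."""
        return np.linalg.solve(crosstalk_matrix, measured_energies)

    # toy 3-segment calibration with -0.5% crosstalk couplings (placeholder)
    M = np.array([[1.000, -0.005, -0.005],
                  [-0.005, 1.000, -0.005],
                  [-0.005, -0.005, 1.000]])
    print(correct_crosstalk(np.array([661.0, 2.0, 1.5]), M))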

  7. Correction of WindScat Scatterometric Measurements by Combining with AMSR Radiometric Data

    NASA Technical Reports Server (NTRS)

    Song, S.; Moore, R. K.

    1996-01-01

    The SeaWinds scatterometer on the Advanced Earth Observing Satellite-2 (ADEOS-2) will determine surface wind vectors by measuring the radar cross section. Multiple measurements will be made at different points in a wind-vector cell. When dense clouds and rain are present, the signal will be attenuated, thereby giving erroneous results for the wind. This report describes algorithms to be used with the Advanced Microwave Scanning Radiometer (AMSR) on ADEOS-2 to correct for the attenuation. One can determine the attenuation from a radiometer measurement based on the excess brightness temperature measured, i.e., the difference between the total measured brightness temperature and the contribution from surface emission. A major problem that the algorithm must address is determining the surface contribution. Two basic approaches were developed for this: one uses the scattering coefficient measured along with the brightness temperature, and the other uses the brightness temperature alone. For both methods, the best results are obtained if the wind from the preceding wind-vector cell can be used as an input to the algorithm. In the method based on the scattering coefficient, the wind direction from the preceding cell is needed; in the method using brightness temperature alone, the wind speed from the preceding cell is needed. If neither is available, the algorithm can still work, but the corrections will be less accurate. Both correction methods require iterative solutions. Simulations show that the algorithms make significant improvements in the measured scattering coefficient and thus in the retrieved wind vector. For stratiform rains, the errors without correction can be quite large, so the correction makes a major improvement. For systems of separated convective cells, the initial error is smaller and the correction, although about the same percentage, has a smaller effect.

  8. MO-FG-CAMPUS-JeP1-05: Water Equivalent Path Length Calculations Using Scatter-Corrected Head and Neck CBCT Images to Evaluate Patients for Adaptive Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, J; Park, Y; Sharp, G

    Purpose: To establish a method for evaluating the dosimetric impact of anatomic changes in head and neck patients during proton therapy by using scatter-corrected cone-beam CT (CBCT) images. Methods: The water-equivalent path length (WEPL) was calculated to the distal edge of the PTV contours using tomographic images available for six head and neck patients who had received photon therapy. The proton range variation was measured by calculating the difference between the distal WEPLs calculated with the planning CT and the weekly treatment CBCT images. By performing an automatic rigid registration, a six degrees-of-freedom (DOF) correction was made to the CBCT images to account for the patient setup uncertainty. For accurate WEPL calculations, an existing CBCT scatter correction algorithm, whose performance had already been proven for phantom images, was calibrated for head and neck patient images. Specifically, two different image similarity measures, mutual information (MI) and mean square error (MSE), were tested for the deformable image registration (DIR) in the CBCT scatter correction algorithm. Results: The impact of weight loss was reflected in the distal WEPL differences, with the automatic rigid registration reducing the influence of patient setup uncertainty on the WEPL calculation results. The WEPL difference averaged over the distal area was 2.9 ± 2.9 mm across all fractions of the six patients, and its maximum, mostly found at the last available fraction, was 6.2 ± 3.4 mm. The MSE-based DIR successfully registered each treatment CBCT image to the planning CT image. On the other hand, the MI-based DIR deformed the skin voxels in the planning CT image to the immobilization mask in the treatment CBCT image, most of which was cropped out of the planning CT image. Conclusion: The dosimetric impact of anatomic changes was evaluated by calculating the distal WEPL difference with the existing scatter-correction algorithm appropriately calibrated. Jihun Kim, Yang-Kyun Park, Gregory Sharp, and Brian Winey have received grant support from the NCI Federal Share of program income earned by Massachusetts General Hospital on C06 CA059267, Proton Therapy Research and Treatment Center.
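
    The WEPL computation referred to above is, per ray, a sum of relative stopping power times step length, with CT (or scatter-corrected CBCT) numbers converted to relative stopping power through a calibration curve. A minimal sketch follows; the two-point calibration values are placeholders, not a clinical curve.

    import numpy as np

    def wepl_along_ray(hu_samples, step_mm, hu_cal, rsp_cal):
        """Water-equivalent path length along one ray: convert the sampled
        CT/CBCT numbers to relative stopping power (RSP) with a
        piecewise-linear calibration curve, then sum RSP * step length."""
        rsp = np.interp(hu_samples, hu_cal, rsp_cal)
        return float(np.sum(rsp) * step_mm)

    # toy calibration: air ~ -1000 HU -> 0, water ~ 0 HU -> 1, dense bone-like 1500 HU -> 1.6
    hu_cal = np.array([-1000.0, 0.0, 1500.0])
    rsp_cal = np.array([0.0, 1.0, 1.6])
    print(wepl_along_ray(np.array([0.0, 30.0, 60.0, -50.0]), 1.0, hu_cal, rsp_cal))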

  9. Apple Mealiness Detection Using Hyperspectral Scattering Technique

    USDA-ARS?s Scientific Manuscript database

    Mealiness is a symptom of internal fruit disorder, which is characterized by abnormal softness and lack of free juice in the fruit. This research investigated the potential of hyperspectral scattering technique for detecting mealy apples. Spectral scattering profiles between 600 nm and 1,000 nm were...

  10. Application of electrically invisible antennas to the modulated scatterer technique

    NASA Astrophysics Data System (ADS)

    Crocker, Dylan Andrew

    The Modulated Scatterer Technique (MST) has shown promise for applications in microwave imaging, electric field mapping, and materials characterization. Traditionally, MST scatterers consist of dipole antennas centrally loaded with a lumped element capable of modulation (commonly a PIN diode). By modulating the load element, the signal scattered from the MST scatterer is also modulated. However, due to the small size of such scatterers, it can be difficult to reliably detect the modulated signal. Increasing the modulation depth (a parameter related to how well the scatterer modulates the scattered signal) may improve the detectability of the scattered signal. In an effort to improve the modulation depth of scatterers commonly used in MST, the concept of electrically invisible antennas is applied to the design of these scatterers and is the focus of this work. Electrical invisibility of linear antennas, such as loaded dipoles, can be achieved by loading a scatterer in such a way that, when illuminated by an electromagnetic wave, the integral of the current induced along the length of the scatterer (and hence the scattered field as well) approaches zero. By designing a scatterer to be capable of modulation between visible (scattering) and invisible (minimum scattering) states, the modulation depth may be improved. This thesis presents simulations and measurements of new MST scatterers that have been designed to be electrically invisible during the reverse bias state of the modulated element (i.e., a PIN diode). Further, the scattering during the forward bias state remains the same as that of a traditional MST scatterer, resulting in an increase in modulation depth. This new MST scatterer design technique may also have application in improving the performance of similar sensors such as radio frequency identification (RFID) tags.
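
    A simple way to quantify the benefit of switching between a scattering and a nearly invisible state is a contrast-style modulation-depth figure of merit, sketched below. The definition used here is an assumption for illustration; the thesis may define modulation depth differently.

    import numpy as np

    def modulation_depth(s_forward, s_reverse):
        """Contrast between the complex scattered signals in the two diode
        bias states, normalized by the larger magnitude. A scatterer that
        is nearly invisible in one state yields a value close to 1."""
        s_forward, s_reverse = np.asarray(s_forward), np.asarray(s_reverse)
        return np.abs(s_forward - s_reverse) / np.maximum(
            np.abs(s_forward), np.abs(s_reverse))

    # nearly invisible reverse-bias state -> modulation depth close to 1
    print(modulation_depth(1.0 + 0.2j, 0.02 + 0.01j))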

  11. Identifying the theory of dark matter with direct detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gluscevic, Vera; Gresham, Moira I.; McDermott, Samuel D.

    2015-12-01

    Identifying the true theory of dark matter depends crucially on accurately characterizing interactions of dark matter (DM) with other species. In the context of DM direct detection, we present a study of the prospects for correctly identifying the low-energy effective DM-nucleus scattering operators connected to UV-complete models of DM-quark interactions. We take a census of plausible UV-complete interaction models with different low-energy leading-order DM-nuclear responses. For each model (corresponding to different spin–, momentum–, and velocity-dependent responses), we create a large number of realizations of recoil-energy spectra, and use Bayesian methods to investigate the probability that experiments will be able to select the correct scattering model within a broad set of competing scattering hypotheses. We conclude that agnostic analysis of a strong signal (such as Generation-2 would see if cross sections are just below the current limits) seen on xenon and germanium experiments is likely to correctly identify momentum dependence of the dominant response, ruling out models with either 'heavy' or 'light' mediators, and enabling downselection of allowed models. However, a unique determination of the correct UV completion will critically depend on the availability of measurements from a wider variety of nuclear targets, including iodine or fluorine. We investigate how model-selection prospects depend on the energy window available for the analysis. In addition, we discuss accuracy of the DM particle mass determination under a wide variety of scattering models, and investigate impact of the specific types of particle-physics uncertainties on prospects for model selection.

  12. Identifying the theory of dark matter with direct detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gluscevic, Vera; Gresham, Moira I.; McDermott, Samuel D.

    2015-12-29

    Identifying the true theory of dark matter depends crucially on accurately characterizing interactions of dark matter (DM) with other species. In the context of DM direct detection, we present a study of the prospects for correctly identifying the low-energy effective DM-nucleus scattering operators connected to UV-complete models of DM-quark interactions. We take a census of plausible UV-complete interaction models with different low-energy leading-order DM-nuclear responses. For each model (corresponding to different spin–, momentum–, and velocity-dependent responses), we create a large number of realizations of recoil-energy spectra, and use Bayesian methods to investigate the probability that experiments will be able to select the correct scattering model within a broad set of competing scattering hypotheses. We conclude that agnostic analysis of a strong signal (such as Generation-2 would see if cross sections are just below the current limits) seen on xenon and germanium experiments is likely to correctly identify momentum dependence of the dominant response, ruling out models with either “heavy” or “light” mediators, and enabling downselection of allowed models. However, a unique determination of the correct UV completion will critically depend on the availability of measurements from a wider variety of nuclear targets, including iodine or fluorine. We investigate how model-selection prospects depend on the energy window available for the analysis. In addition, we discuss accuracy of the DM particle mass determination under a wide variety of scattering models, and investigate impact of the specific types of particle-physics uncertainties on prospects for model selection.

  13. Multiple Acquisition InSAR Analysis: Persistent Scatterer and Small Baseline Approaches

    NASA Astrophysics Data System (ADS)

    Hooper, A.

    2006-12-01

    InSAR techniques that process data from multiple acquisitions enable us to form time series of deformation and also allow us to reduce error terms present in single interferograms. There are currently two broad categories of methods that deal with multiple images: persistent scatterer methods and small baseline methods. The persistent scatterer approach relies on identifying pixels whose scattering properties vary little with time and look angle. Pixels that are dominated by a singular scatterer best meet these criteria; therefore, images are processed at full resolution to both increase the chance of there being only one dominant scatterer present, and to reduce the contribution from other scatterers within each pixel. In images where most pixels contain multiple scatterers of similar strength, even at the highest possible resolution, the persistent scatterer approach is less optimal, as the scattering characteristics of these pixels vary substantially with look angle. In this case, an approach that interferes only pairs of images for which the difference in look angle is small makes better sense, and resolution can be sacrificed to reduce the effects of the look angle difference by band-pass filtering. This is the small baseline approach. Existing small baseline methods depend on forming a series of multilooked interferograms and unwrapping each one individually. This approach fails to take advantage of two of the benefits of processing multiple acquisitions, however, which are usually embodied in persistent scatterer methods: the ability to find and extract the phase for single-look pixels with good signal-to-noise ratio that are surrounded by noisy pixels, and the ability to unwrap more robustly in three dimensions, the third dimension being that of time. We have developed, therefore, a new small baseline method to select individual single-look pixels that behave coherently in time, so that isolated stable pixels may be found. After correction for various error terms, the phase values of the selected pixels are unwrapped using a new three-dimensional algorithm. We apply our small baseline method to an area in southern Iceland that includes Katla and Eyjafjallajökull volcanoes, and retrieve a time series of deformation that shows transient deformation due to intrusion of magma beneath Eyjafjallajökull. We also process the data using the Stanford method for persistent scatterers (StaMPS) for comparison.

  14. Clinical Evaluation of 68Ga-PSMA-II and 68Ga-RM2 PET Images Reconstructed With an Improved Scatter Correction Algorithm.

    PubMed

    Wangerin, Kristen A; Baratto, Lucia; Khalighi, Mohammad Mehdi; Hope, Thomas A; Gulaka, Praveen K; Deller, Timothy W; Iagaru, Andrei H

    2018-06-06

    Gallium-68-labeled radiopharmaceuticals pose a challenge for scatter estimation because their targeted nature can produce high contrast in the regions of the kidneys and bladder. Even small errors in the scatter estimate can result in washout artifacts. Administration of diuretics can reduce these artifacts, but they may result in adverse events. Here, we investigated the ability of algorithmic modifications to mitigate washout artifacts and eliminate the need for diuretics or other interventions. The model-based scatter algorithm was modified to account for PET/MRI scanner geometry and challenges of non-FDG tracers. Fifty-three clinical 68Ga-RM2 and 68Ga-PSMA-11 whole-body images were reconstructed using the baseline scatter algorithm. For comparison, reconstruction was also processed with modified sampling in the single-scatter estimation and with an offset in the scatter tail-scaling process. None of the patients received furosemide to attempt to decrease the accumulation of radiopharmaceuticals in the bladder. The images were scored independently by three blinded reviewers using the 5-point Likert scale. The scatter algorithm improvements significantly decreased or completely eliminated the washout artifacts. When comparing the baseline and the most improved algorithm, the image quality increased and image artifacts were reduced for both 68Ga-RM2 and 68Ga-PSMA-11 in the kidneys and bladder regions. Image reconstruction with the improved scatter correction algorithm mitigated washout artifacts and recovered diagnostic image quality in 68Ga PET, indicating that the use of diuretics may be avoided.

  15. Does Your Optical Particle Counter Measure What You Think it Does? Calibration and Refractive Index Correction Methods.

    NASA Astrophysics Data System (ADS)

    Rosenberg, Phil; Dean, Angela; Williams, Paul; Dorsey, James; Minikin, Andreas; Pickering, Martyn; Petzold, Andreas

    2013-04-01

    Optical Particle Counters (OPCs) are the de-facto standard for in-situ measurements of airborne aerosol size distributions and small cloud particles over a wide size range. This is particularly the case on airborne platforms where fast response is important. OPCs measure scattered light from individual particles and generally bin particles according to the measured peak amount of light scattered (the OPC's response). Most manufacturers provide a table along with their instrument which indicates the particle diameters which represent the edges of each bin. It is important to correct the particle size reported by OPCs for the refractive index of the particles being measured, which is often not the same as for those used during calibration. However, the OPC's response is not a monotonic function of particle diameter, and obvious problems occur when refractive index corrections are attempted and multiple diameters correspond to the same OPC response. Here we recommend that OPCs be calibrated in terms of particle scattering cross section, as this is a monotonic (usually linear) function of an OPC's response. We present a method for converting a bin's boundaries in terms of scattering cross section into a bin centre and bin width in terms of diameter for any aerosol species for which the scattering properties are known. The relationship between diameter and scattering cross section can be arbitrarily complex and does not need to be monotonic; it can be based on Mie-Lorenz theory or any other scattering theory. Software has been provided on the Sourceforge open source repository for scientific users to implement such methods in their own measurement and calibration routines. As a case study, data are presented from a Passive Cavity Aerosol Spectrometer Probe (PCASP) and a Cloud Droplet Probe (CDP) calibrated using polystyrene latex spheres and glass beads before being deployed as part of the Fennec project to measure airborne dust in the inaccessible regions of the Sahara.
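
    As an illustration of the bin-conversion idea described above (a sketch under my own simplifying assumptions, not the Sourceforge software), the snippet below maps an OPC bin defined by scattering-cross-section boundaries onto a diameter bin centre and width using a tabulated, possibly non-monotonic, cross-section-versus-diameter curve; the toy curve stands in for a real Mie-Lorenz calculation.

        # Minimal sketch: convert a cross-section bin into a diameter bin centre/width.
        import numpy as np

        def diameter_bin_from_sigma_bin(diam, sigma, sigma_lo, sigma_hi):
            """Return (centre, width) in diameter for one cross-section bin."""
            inside = (sigma >= sigma_lo) & (sigma < sigma_hi)
            if not np.any(inside):
                return None
            d_sel = diam[inside]
            return 0.5 * (d_sel.min() + d_sel.max()), d_sel.max() - d_sel.min()

        diam = np.linspace(0.1, 3.0, 600)                    # micrometres
        sigma = diam**2 * (1.0 + 0.3 * np.sin(6.0 * diam))   # toy curve, non-monotonic at larger diameters
        print(diameter_bin_from_sigma_bin(diam, sigma, 0.5, 1.0))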

  16. GATE Simulations of Small Animal SPECT for Determination of Scatter Fraction as a Function of Object Size

    NASA Astrophysics Data System (ADS)

    Konik, Arda; Madsen, Mark T.; Sunderland, John J.

    2012-10-01

    In human emission tomography, combined PET/CT and SPECT/CT cameras provide accurate attenuation maps for sophisticated scatter and attenuation corrections. Having proven their potential, these scanners are being adapted for small animal imaging using similar correction approaches. However, attenuation and scatter effects in small animal imaging are substantially less than in human imaging. Hence, the value of sophisticated corrections is not obvious for small animal imaging considering the additional cost and complexity of these methods. In this study, using the GATE Monte Carlo package, we simulated the Inveon small animal SPECT (single pinhole collimator) scanner to find the scatter fractions of various sizes of the NEMA-mouse (diameter: 2-5.5 cm, length: 7 cm), NEMA-rat (diameter: 3-5.5 cm, length: 15 cm) and MOBY (diameter: 2.1-5.5 cm, length: 3.5-9.1 cm) phantoms. The simulations were performed for three radionuclides commonly used in small animal SPECT studies: 99mTc (140 keV), 111In (171 keV 90% and 245 keV 94%) and 125I (effective 27.5 keV). For the MOBY phantoms, the total Compton scatter fractions ranged (over the range of phantom sizes) from 4-10% for 99mTc (126-154 keV), 7-16% for 111In (154-188 keV), 3-7% for 111In (220-270 keV) and 17-30% for 125I (15-45 keV) including the scatter contributions from the tungsten collimator, lead shield and air (inside and outside the camera heads). For the NEMA-rat phantoms, the scatter fractions ranged from 10-15% (99mTc), 17-23% (111In: 154-188 keV), 8-12% (111In: 220-270 keV) and 32-40% (125I). Our results suggest that energy window methods based solely on emission data are sufficient for all mouse and most rat studies for 99mTc and 111In. However, more sophisticated methods may be needed for 125I.
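
    The scatter fraction itself is a simple ratio once scattered and unscattered events inside the photopeak window are known, as in the sketch below (toy event data rather than GATE output; the 126-154 keV window for 99mTc follows the range quoted above).

        # Minimal sketch: scatter fraction within a photopeak energy window.
        import numpy as np

        rng = np.random.default_rng(0)
        energy = np.concatenate([rng.normal(140.0, 5.0, 9000),     # unscattered 99mTc events
                                 rng.uniform(90.0, 140.0, 1000)])  # scattered events (toy)
        scattered = np.concatenate([np.zeros(9000, bool), np.ones(1000, bool)])

        window = (energy >= 126.0) & (energy <= 154.0)             # 126-154 keV window
        sf = scattered[window].sum() / window.sum()
        print(f"scatter fraction in window: {100 * sf:.1f}%")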

  17. Magnetic Field Effects on the Fluctuation Corrections to the Sound Attenuation in Liquid ^3He

    NASA Astrophysics Data System (ADS)

    Zhao, Erhai; Sauls, James A.

    2002-03-01

    We investigated the effect of a magnetic field on the excess sound attenuation due to order parameter fluctuations in bulk liquid ^3He and liquid ^3He in aerogel for temperatures just above the corresponding superfluid transition temperatures. The fluctuation corrections to the acoustic attenuation are sensitive to magnetic field pairbreaking, aerogel scattering, as well as the spin correlations of fluctuating pairs. Calculations of the corrections to the zero sound velocity, δc_0, and attenuation, δα_0, are carried out in the ladder approximation for the singular part of the quasiparticle-quasiparticle scattering amplitude (V. Samalam and J. W. Serene, Phys. Rev. Lett. 41, 497 (1978)) as a function of frequency, temperature, impurity scattering and magnetic field strength. The magnetic field suppresses the fluctuation contributions to the attenuation of zero sound. With increasing magnetic field the temperature dependence of δα_0(t) crosses over from δα_0(t) ~ √t to δα_0(t) ~ t, where t = T/Tc - 1 is the reduced temperature.

  18. Resonance Raman Spectroscopy of Extreme Nanowires and Other 1D Systems

    PubMed Central

    Smith, David C.; Spencer, Joseph H.; Sloan, Jeremy; McDonnell, Liam P.; Trewhitt, Harrison; Kashtiban, Reza J.; Faulques, Eric

    2016-01-01

    This paper briefly describes how nanowires with diameters corresponding to 1 to 5 atoms can be produced by melting a range of inorganic solids in the presence of carbon nanotubes. These nanowires are extreme in the sense that they are the limit of miniaturization of nanowires and their behavior is not always a simple extrapolation of the behavior of larger nanowires as their diameter decreases. The paper then describes the methods required to obtain Raman spectra from extreme nanowires and explains that, because of the van Hove singularities that 1D systems exhibit in their optical density of states, determining the correct choice of photon excitation energy is critical. It describes the techniques required to determine the photon energy dependence of the resonances observed in Raman spectroscopy of 1D systems and in particular how to obtain measurements of Raman cross-sections with better than 8% noise and measure the variation in the resonance as a function of sample temperature. The paper describes the importance of ensuring that the Raman scattering is linearly proportional to the laser excitation intensity. It also describes how to use the polarization dependence of the Raman scattering to separate Raman scattering of the encapsulated 1D systems from those of other extraneous components in any sample. PMID:27168195

  19. Polarization observables using positron beams

    NASA Astrophysics Data System (ADS)

    Schmidt, Axel

    2018-05-01

    The discrepancy between polarized and unpolarized measurements of the proton's electromagnetic form factors is striking, and suggests that two-photon exchange (TPE) may be playing a larger role in elastic electron-proton scattering than is estimated in standard radiative corrections formulae. While TPE is difficult to calculate in a model-independent way, it can be determined experimentally from asymmetries between electron-proton and positron-proton scattering. The possibility of a polarized positron beam at Jefferson Lab would open the door to measurements of TPE using polarization observables. In these proceedings, I examine the feasibility of measuring three such observables with positron scattering. Polarization-transfer, specifically the ɛ-dependence for fixed Q2, is an excellent test of TPE, and the ability to compare electrons and positrons would lead to a drastic reduction of systematics. However, such a measurement would be severely statistically limited. Normal single-spin asymmetries (SSAs) probe the imaginary part of the TPE amplitude and can be improved by simultaneous measurements with electron and positron beams. Beam-normal SSAs are too small to be measured with the proposed polarized positron beam, but target-normal SSAs could be feasibly measured with unpolarized positrons in the spectrometer halls. This technique should be included in the physics case for developing a positron source for Jefferson Lab.

  20. Author Correction: Induced unconventional superconductivity on the surface states of Bi2Te3 topological insulator.

    PubMed

    Charpentier, Sophie; Galletti, Luca; Kunakova, Gunta; Arpaia, Riccardo; Song, Yuxin; Baghdadi, Reza; Wang, Shu Min; Kalaboukhov, Alexei; Olsson, Eva; Tafuri, Francesco; Golubev, Dmitry; Linder, Jacob; Bauch, Thilo; Lombardi, Floriana

    2018-01-30

    The original version of this Article contained an error in Fig. 6b. In the top scattering process, while the positioning of both arrows was correct, the colours were switched: the first arrow was red and the second arrow was blue, rather than the correct order of blue then red.

  1. Extreme Algal Bloom Detection with MERIS

    NASA Astrophysics Data System (ADS)

    Amin, R.; Gilerson, A.; Gould, R.; Arnone, R.; Ahmed, S.

    2009-05-01

    Harmful Algal Blooms (HABs) are a major concern all over the world due to their negative impacts on the marine environment, human health, and the economy. Their detection from space still remains a challenge, particularly in turbid coastal waters. In this study we propose a simple reflectance band difference approach for use with Medium Resolution Imaging Spectrometer (MERIS) data to detect intense plankton blooms. For convenience we label this approach the Extreme Bloom Index (EBI), which is defined as EBI = Rrs(709) - Rrs(665). Our initial analysis shows that this band difference approach has some advantages over the band ratio approaches, particularly in reducing errors due to imperfect atmospheric corrections. We also compare the proposed EBI technique with the Maximum Chlorophyll Index (MCI) technique of Gower. Our preliminary results show that both the EBI and MCI indices detect intense plankton blooms; however, MCI is more vulnerable in highly scattering waters, producing more false positives than EBI.
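
    A minimal sketch of the band-difference index defined above, EBI = Rrs(709) - Rrs(665); the reflectance values and the detection threshold used here are placeholders rather than MERIS data or a published threshold.

        # Minimal sketch: Extreme Bloom Index from two reflectance bands.
        import numpy as np

        def extreme_bloom_index(rrs_709, rrs_665):
            return rrs_709 - rrs_665

        rrs_665 = np.array([[0.004, 0.006], [0.012, 0.003]])   # sr^-1, toy values
        rrs_709 = np.array([[0.003, 0.005], [0.020, 0.002]])
        ebi = extreme_bloom_index(rrs_709, rrs_665)
        bloom_mask = ebi > 0.005                               # hypothetical threshold
        print(ebi, bloom_mask, sep="\n")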

  2. The way to universal and correct medical presentation of diagnostic informations for complex spectrophotometry noninvasive medical diagnostic systems

    NASA Astrophysics Data System (ADS)

    Rogatkin, Dmitrii A.; Tchernyi, Vladimir V.

    2003-07-01

    Optical noninvasive diagnostic systems are now widely applied and investigated in different areas of medicine. One such technique is noninvasive spectrophotometry, a complex diagnostic technique combining elastic scattering spectroscopy, absorption spectroscopy, fluorescence diagnostics, photoplethysmography, etc. Today, many optical diagnostic systems report only technical parameters and physical data as the result of the diagnostic procedure, yet it is clear that medical staff need information presented in more clinically convenient terms. This presentation outlines a general approach to developing diagnostic-system software that carries the diagnostic data through full processing from the physical to the medical level. It is shown that this is a three-level procedure and that the main diagnostic result for noninvasive spectrophotometry methods, the biochemical and morphological composition of the examined tissues, arises at the second level of calculation.

  3. Bovine Acellular Dermal Matrix for Levator Lengthening in Thyroid-Related Upper-Eyelid Retraction.

    PubMed

    Sun, Jing; Liu, Xingtong; Zhang, Yidan; Huang, Yazhuo; Zhong, Sisi; Fang, Sijie; Zhuang, Ai; Li, Yinwei; Zhou, Huifang; Fan, Xianqun

    2018-05-02

    BACKGROUND Eyelid retraction is the most common and often the first sign of thyroid eye disease (TED). Upper-eyelid retraction causes both functional and cosmetic problems. In order to correct the position of the upper eyelid, surgery is required. Many procedures have demonstrated good outcomes in mild and moderate cases; however, unpredictable results have been obtained in severe cases. Dryden introduced an upper-eyelid-lengthening procedure, which used scleral grafts, but outcomes were unsatisfactory. A new technique is introduced in this study as a reasonable alternative for TED-related severe upper-eyelid retraction correction. MATERIAL AND METHODS An innovative technique for levator lengthening using bovine acellular dermal matrix as a spacer graft is introduced for severe upper-eyelid retraction secondary to TED. Additionally, 2 modifications were introduced: the fibrous cords scattered on the surface of the levator aponeurosis were excised and the orbital fat pad anterior to the aponeurosis was dissected and sutured into the skin closure in a "skin-tarsus-fat-skin" fashion. RESULTS The modified levator-lengthening surgery was performed on 32 eyelids in 26 patients consisting of 21 women and 5 men (mean age, 37.8 years; age range, 19-67 years). After corrective surgery, the average upper margin reflex distance was lowered from 7.7±0.85 mm to 3.3±0.43 mm. Eighteen cases (69%) had perfect results, while 6 cases (23%) had acceptable results. CONCLUSIONS A modified levator-lengthening procedure using bovine acellular dermal matrix as a spacer graft ameliorated both the symptoms and signs of severe upper-eyelid retraction secondary to TED. This procedure is a reasonable alternative for correction of TED-related severe upper-eyelid retraction.

  4. Bovine Acellular Dermal Matrix for Levator Lengthening in Thyroid-Related Upper-Eyelid Retraction

    PubMed Central

    Sun, Jing; Liu, Xingtong; Zhang, Yidan; Huang, Yazhuo; Zhong, Sisi; Fang, Sijie; Zhuang, Ai; Li, Yinwei; Zhou, Huifang

    2018-01-01

    Background Eyelid retraction is the most common and often the first sign of thyroid eye disease (TED). Upper-eyelid retraction causes both functional and cosmetic problems. In order to correct the position of the upper eyelid, surgery is required. Many procedures have demonstrated good outcomes in mild and moderate cases; however, unpredictable results have been obtained in severe cases. Dryden introduced an upper-eyelid-lengthening procedure, which used scleral grafts, but outcomes were unsatisfactory. A new technique is introduced in this study as a reasonable alternative for TED-related severe upper-eyelid retraction correction. Material/Methods An innovative technique for levator lengthening using bovine acellular dermal matrix as a spacer graft is introduced for severe upper-eyelid retraction secondary to TED. Additionally, 2 modifications were introduced: the fibrous cords scattered on the surface of the levator aponeurosis were excised and the orbital fat pad anterior to the aponeurosis was dissected and sutured into the skin closure in a “skin-tarsus-fat-skin” fashion. Results The modified levator-lengthening surgery was performed on 32 eyelids in 26 patients consisting of 21 women and 5 men (mean age, 37.8 years; age range, 19–67 years). After corrective surgery, the average upper margin reflex distance was lowered from 7.7±0.85 mm to 3.3±0.43 mm. Eighteen cases (69%) had perfect results, while 6 cases (23%) had acceptable results. Conclusions A modified levator-lengthening procedure using bovine acellular dermal matrix as a spacer graft ameliorated both the symptoms and signs of severe upper-eyelid retraction secondary to TED. This procedure is a reasonable alternative for correction of TED-related severe upper-eyelid retraction. PMID:29718902

  5. XUV and x-ray elastic scattering of attosecond electromagnetic pulses on atoms

    NASA Astrophysics Data System (ADS)

    Rosmej, F. B.; Astapenko, V. A.; Lisitsa, V. S.

    2017-12-01

    Elastic scattering of electromagnetic pulses on atoms in XUV and soft x-ray ranges is considered for ultra-short pulses. The inclusion of the retardation term, non-dipole interaction and an efficient scattering tensor approximation allowed the scattering probability to be studied as a function of the pulse duration for different carrier frequencies. Numerical calculations carried out for Mg, Al and Fe atoms demonstrate that the scattering probability is a highly nonlinear function of the pulse duration and has extrema for pulse carrier frequencies in the vicinity of the resonance-like features of the polarization charge spectrum. Closed expressions for the non-dipole correction and the angular dependence of the scattered radiation are obtained.

  6. NOTE: Acceleration of Monte Carlo-based scatter compensation for cardiac SPECT

    NASA Astrophysics Data System (ADS)

    Sohlberg, A.; Watabe, H.; Iida, H.

    2008-07-01

    Single photon emission computed tomography (SPECT) images are degraded by photon scatter, making scatter compensation essential for accurate reconstruction. Reconstruction-based scatter compensation with Monte Carlo (MC) modelling of scatter shows promise for accurate scatter correction, but it is normally hampered by long computation times. The aim of this work was to accelerate the MC-based scatter compensation using coarse grid and intermittent scatter modelling. The acceleration methods were compared to an un-accelerated implementation using MC-simulated projection data of the mathematical cardiac torso (MCAT) phantom modelling 99mTc uptake and clinical myocardial perfusion studies. The results showed that, when combined, the acceleration methods reduced the reconstruction time for 10 ordered subset expectation maximization (OS-EM) iterations from 56 to 11 min without a significant reduction in image quality, indicating that the coarse grid and intermittent scatter modelling are suitable for MC-based scatter compensation in cardiac SPECT.

  7. Improved Convergence Rate of Multi-Group Scattering Moment Tallies for Monte Carlo Neutron Transport Codes

    NASA Astrophysics Data System (ADS)

    Nelson, Adam

    Multi-group scattering moment matrices are critical to the solution of the multi-group form of the neutron transport equation, as they are responsible for describing the change in direction and energy of neutrons. These matrices, however, are difficult to correctly calculate from the measured nuclear data with both deterministic and stochastic methods. Calculating these parameters when using deterministic methods requires a set of assumptions which do not hold true in all conditions. These quantities can be calculated accurately with stochastic methods; however, doing so is computationally expensive due to the poor efficiency of tallying scattering moment matrices. This work presents an improved method of obtaining multi-group scattering moment matrices from a Monte Carlo neutron transport code. This improved method of tallying the scattering moment matrices is based on recognizing that all of the outgoing particle information is known a priori and can be taken advantage of to increase the tallying efficiency (therefore reducing the uncertainty) of the stochastically integrated tallies. In this scheme, the complete outgoing probability distribution is tallied, supplying every element of the scattering moment matrices with its share of data. In addition to reducing the uncertainty, this method allows for the use of a track-length estimation process, potentially offering even further improvement to the tallying efficiency. Unfortunately, to produce the needed distributions, the probability functions themselves must undergo an integration over the outgoing energy and scattering angle dimensions. This integration is too costly to perform during the Monte Carlo simulation itself and therefore must be performed in advance by way of a pre-processing code. The new method increases the information obtained from tally events and therefore has a significantly higher efficiency than the currently used techniques. The improved method has been implemented in a code system containing a new pre-processor code, NDPP, and a Monte Carlo neutron transport code, OpenMC. This method is then tested in a pin cell problem and a larger problem designed to accentuate the importance of scattering moment matrices. These tests show that accuracy was retained while the figure-of-merit for generating scattering moment matrices and fission energy spectra was significantly improved.
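
    For context, the sketch below shows the conventional analog tally that the work improves upon: each sampled collision contributes its weight times P_l(mu) to a group-to-group Legendre scattering moment. The group structure and sampled events are toy data; the improved scheme in the abstract would instead spread a pre-integrated outgoing distribution over the tally, which is not reproduced here.

        # Minimal sketch of a conventional analog tally of Legendre scattering moments.
        import numpy as np
        from numpy.polynomial.legendre import legval

        n_groups, n_moments = 4, 4
        group_edges = np.array([20.0, 1.0, 0.1, 1e-5, 1e-11])   # MeV, descending
        tally = np.zeros((n_moments, n_groups, n_groups))

        rng = np.random.default_rng(1)
        for _ in range(10000):
            e_in = rng.uniform(1e-10, 19.0)
            e_out = rng.uniform(1e-10, e_in)        # toy down-scatter only
            mu = rng.uniform(-1.0, 1.0)             # scattering angle cosine
            w = 1.0                                 # particle weight
            g_in = np.searchsorted(-group_edges, -e_in) - 1
            g_out = np.searchsorted(-group_edges, -e_out) - 1
            for l in range(n_moments):
                coeffs = np.zeros(l + 1)
                coeffs[l] = 1.0                     # legval then evaluates P_l(mu)
                tally[l, g_in, g_out] += w * legval(mu, coeffs)

        print(tally[0])   # l = 0 block: plain group-to-group scattering counts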

  8. Neutron Scattering from Polymers: Five Decades of Developing Possibilities.

    PubMed

    Higgins, J S

    2016-06-07

    The first three decades of my research career closely map the development of neutron scattering techniques for the study of molecular behavior. At the same time, the theoretical understanding of organization and motion of polymer molecules, especially in the bulk state, was developing rapidly and providing many predictions crying out for experimental verification. Neutron scattering is an ideal technique for providing the necessary evidence. This autobiographical essay describes the applications by my research group and other collaborators of increasingly sophisticated neutron scattering techniques to observe and understand molecular behavior in polymeric materials. It has been a stimulating and rewarding journey.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alam, Aftab; Khan, Suffian N.; Smirnov, A. V.

    Korringa-Kohn-Rostoker (KKR) Green's function, multiple-scattering theory is an efficient site-centered, electronic-structure technique for addressing an assembly of N scatterers. Wave-functions are expanded in a spherical-wave basis on each scattering center and indexed up to a maximum orbital and azimuthal number L_max = (l,m)_max, while scattering matrices, which determine spectral properties, are truncated at L_tr = (l,m)_tr where phase shifts δ_{l>l_tr} are negligible. Historically, L_max is set equal to L_tr, which is correct for large enough L_max but not computationally expedient; a better procedure retains higher-order (free-electron and single-site) contributions for L_max > L_tr with δ_{l>l_tr} set to zero [Zhang and Butler, Phys. Rev. B 46, 7433]. We present a numerically efficient and accurate augmented-KKR Green's function formalism that solves the KKR equations by exact matrix inversion [an R^3 process with rank N(l_tr + 1)^2] and includes higher-L contributions via linear algebra [an R^2 process with rank N(l_max + 1)^2]. The augmented-KKR approach yields properly normalized wave-functions, numerically cheaper basis-set convergence, and a total charge density and electron count that agrees with Lloyd's formula. We apply our formalism to fcc Cu, bcc Fe and L1_0 CoPt, and present the numerical results for accuracy and for the convergence of the total energies, Fermi energies, and magnetic moments versus L_max for a given L_tr.

  10. Expanding Lorentz and spectrum corrections to large volumes of reciprocal space for single-crystal time-of-flight neutron diffraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michels-Clark, Tara M.; Savici, Andrei T.; Lynch, Vickie E.

    Evidence is mounting that potentially exploitable properties of technologically and chemically interesting crystalline materials are often attributable to local structure effects, which can be observed as modulated diffuse scattering (mDS) next to Bragg diffraction (BD). BD forms a regular sparse grid of intense discrete points in reciprocal space. Traditionally, the intensity of each Bragg peak is extracted by integration of each individual reflection first, followed by application of the required corrections. In contrast, mDS is weak and covers expansive volumes of reciprocal space close to, or between, Bragg reflections. For a representative measurement of the diffuse scattering, multiple sample orientations are generally required, where many points in reciprocal space are measured multiple times and the resulting data are combined. The common post-integration data reduction method is not optimal with regard to counting statistics. A general and inclusive data processing method is needed. In this contribution, a comprehensive data analysis approach is introduced to correct and merge the full volume of scattering data in a single step, while correctly accounting for the statistical weight of the individual measurements. Lastly, development of this new approach required the exploration of a data treatment and correction protocol that includes the entire collected reciprocal space volume, using neutron time-of-flight or wavelength-resolved data collected at TOPAZ at the Spallation Neutron Source at Oak Ridge National Laboratory.

  11. Expanding Lorentz and spectrum corrections to large volumes of reciprocal space for single-crystal time-of-flight neutron diffraction

    DOE PAGES

    Michels-Clark, Tara M.; Savici, Andrei T.; Lynch, Vickie E.; ...

    2016-03-01

    Evidence is mounting that potentially exploitable properties of technologically and chemically interesting crystalline materials are often attributable to local structure effects, which can be observed as modulated diffuse scattering (mDS) next to Bragg diffraction (BD). BD forms a regular sparse grid of intense discrete points in reciprocal space. Traditionally, the intensity of each Bragg peak is extracted by integration of each individual reflection first, followed by application of the required corrections. In contrast, mDS is weak and covers expansive volumes of reciprocal space close to, or between, Bragg reflections. For a representative measurement of the diffuse scattering, multiple sample orientations are generally required, where many points in reciprocal space are measured multiple times and the resulting data are combined. The common post-integration data reduction method is not optimal with regard to counting statistics. A general and inclusive data processing method is needed. In this contribution, a comprehensive data analysis approach is introduced to correct and merge the full volume of scattering data in a single step, while correctly accounting for the statistical weight of the individual measurements. Lastly, development of this new approach required the exploration of a data treatment and correction protocol that includes the entire collected reciprocal space volume, using neutron time-of-flight or wavelength-resolved data collected at TOPAZ at the Spallation Neutron Source at Oak Ridge National Laboratory.
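
    A minimal sketch of the statistical-weighting idea (my illustration, not the TOPAZ reduction code): repeated measurements of the same reciprocal-space voxel are merged by summing raw counts and summing their normalizations (correction factor times exposure) rather than averaging individually corrected intensities, which preserves Poisson counting statistics. The numbers are toy data.

        # Minimal sketch: statistically weighted merge of repeated voxel measurements.
        import numpy as np

        counts = np.array([12.0, 3.0, 25.0])   # raw counts from three orientations
        norm = np.array([4.0, 1.0, 8.0])       # correction factor x exposure for each

        merged_intensity = counts.sum() / norm.sum()
        merged_sigma = np.sqrt(counts.sum()) / norm.sum()   # Poisson error of the merge

        naive = (counts / norm).mean()          # post-correction average (suboptimal)
        print(f"weighted merge: {merged_intensity:.3f} +/- {merged_sigma:.3f}, "
              f"naive average: {naive:.3f}")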

  12. Improved near-field characteristics of phased arrays for assessing concrete and cementitious materials

    NASA Astrophysics Data System (ADS)

    Wooh, Shi-Chang; Azar, Lawrence

    1999-01-01

    The degradation of civil infrastructure has placed a focus on effective nondestructive evaluation techniques to correctly assess the condition of existing concrete structures. Conventional high frequency ultrasonic responses are severely affected by scattering and material attenuation, resulting in weak and confusing signal returns. Therefore, low frequency ultrasonic transducers, which avoid this problem of wave attenuation, are commonly used for concrete, albeit with limited capabilities. The focus of this research is to ascertain some benefits and limitations of a low frequency ultrasonic phased array transducer. In this paper, we investigate a novel low-frequency ultrasonic phased array, and the results of an experimental feasibility test for practical condition assessment of concrete structures are reported.

  13. Atomic scale imaging of magnetic circular dichroism by achromatic electron microscopy.

    PubMed

    Wang, Zechao; Tavabi, Amir H; Jin, Lei; Rusz, Ján; Tyutyunnikov, Dmitry; Jiang, Hanbo; Moritomo, Yutaka; Mayer, Joachim; Dunin-Borkowski, Rafal E; Yu, Rong; Zhu, Jing; Zhong, Xiaoyan

    2018-03-01

    In order to obtain a fundamental understanding of the interplay between charge, spin, orbital and lattice degrees of freedom in magnetic materials and to predict and control their physical properties [1-3], experimental techniques are required that are capable of accessing local magnetic information with atomic-scale spatial resolution. Here, we show that a combination of electron energy-loss magnetic chiral dichroism [4] and chromatic-aberration-corrected transmission electron microscopy, which reduces the focal spread of inelastically scattered electrons by orders of magnitude when compared with the use of spherical aberration correction alone, can achieve atomic-scale imaging of magnetic circular dichroism and provide element-selective orbital and spin magnetic moments atomic plane by atomic plane. This unique capability, which we demonstrate for Sr2FeMoO6, opens the door to local atomic-level studies of spin configurations in a multitude of materials that exhibit different types of magnetic coupling, thereby contributing to a detailed understanding of the physical origins of magnetic properties of materials at the highest spatial resolution.

  14. Solar cycle dependence of the sun's radius at lambda = 525.0 nm

    NASA Technical Reports Server (NTRS)

    Ulrich, Roger K.; Bertello, L.

    1995-01-01

    The Mount Wilson (California) synoptic program of solar magnetic observations scans the solar disk between 1 and 20 times per day. As part of this program, the radius is determined as the average distance between the image center and the point where the intensity in the FeI line at lambda = 525.0 nm drops to 25 percent of its value at the disk's center. The database was analyzed and corrected for effects such as scattered light and atmospheric reflection. The solar variability and the measurement techniques are described, and the observation data sets, the corrections made to the data, and the observed variations are discussed. It is stated that spectral lines similar to the one at lambda = 525.0 nm, which are common in the solar spectrum, probably exhibit similar radius changes. All portions of the sun are weighted equally, and it is concluded that, within spectral lines, the radiating area of the sun increases at solar maximum.

  15. Evaluation of Shifted Excitation Raman Difference Spectroscopy and Comparison to Computational Background Correction Methods Applied to Biochemical Raman Spectra.

    PubMed

    Cordero, Eliana; Korinth, Florian; Stiebing, Clara; Krafft, Christoph; Schie, Iwan W; Popp, Jürgen

    2017-07-27

    Raman spectroscopy provides label-free biochemical information from tissue samples without complicated sample preparation. The clinical capability of Raman spectroscopy has been demonstrated in a wide range of in vitro and in vivo applications. However, a challenge for in vivo applications is the simultaneous excitation of auto-fluorescence in the majority of tissues of interest, such as liver, bladder, brain, and others. Raman bands are then superimposed on a fluorescence background, which can be several orders of magnitude larger than the Raman signal. To eliminate the disturbing fluorescence background, several approaches are available. Among instrumentational methods shifted excitation Raman difference spectroscopy (SERDS) has been widely applied and studied. Similarly, computational techniques, for instance extended multiplicative scatter correction (EMSC), have also been employed to remove undesired background contributions. Here, we present a theoretical and experimental evaluation and comparison of fluorescence background removal approaches for Raman spectra based on SERDS and EMSC.

  16. Evaluation of Shifted Excitation Raman Difference Spectroscopy and Comparison to Computational Background Correction Methods Applied to Biochemical Raman Spectra

    PubMed Central

    Cordero, Eliana; Korinth, Florian; Stiebing, Clara; Krafft, Christoph; Schie, Iwan W.; Popp, Jürgen

    2017-01-01

    Raman spectroscopy provides label-free biochemical information from tissue samples without complicated sample preparation. The clinical capability of Raman spectroscopy has been demonstrated in a wide range of in vitro and in vivo applications. However, a challenge for in vivo applications is the simultaneous excitation of auto-fluorescence in the majority of tissues of interest, such as liver, bladder, brain, and others. Raman bands are then superimposed on a fluorescence background, which can be several orders of magnitude larger than the Raman signal. To eliminate the disturbing fluorescence background, several approaches are available. Among instrumentational methods shifted excitation Raman difference spectroscopy (SERDS) has been widely applied and studied. Similarly, computational techniques, for instance extended multiplicative scatter correction (EMSC), have also been employed to remove undesired background contributions. Here, we present a theoretical and experimental evaluation and comparison of fluorescence background removal approaches for Raman spectra based on SERDS and EMSC. PMID:28749450
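
    As a minimal sketch of the computational route mentioned above (basic EMSC-style background removal under my own assumptions, not the authors' code), each measured spectrum is modeled as a multiple of a reference spectrum plus a low-order polynomial baseline, solved by least squares; the spectra below are synthetic placeholders.

        # Minimal sketch: EMSC-style fluorescence background removal.
        import numpy as np

        wavenumber = np.linspace(400, 1800, 700)
        reference = np.exp(-0.5 * ((wavenumber - 1000) / 15) ** 2)    # one Raman band
        baseline_true = 2.0 + 0.002 * wavenumber                      # fluorescence-like slope
        measured = (0.8 * reference + baseline_true
                    + 0.01 * np.random.default_rng(0).normal(size=wavenumber.size))

        poly_order = 2
        x = (wavenumber - wavenumber.mean()) / np.ptp(wavenumber)
        design = np.column_stack([reference] + [x**k for k in range(poly_order + 1)])
        coef, *_ = np.linalg.lstsq(design, measured, rcond=None)

        b = coef[0]                                  # multiplicative scatter/scale term
        baseline = design[:, 1:] @ coef[1:]          # fitted polynomial background
        corrected = (measured - baseline) / b
        print(f"fitted scale b = {b:.3f} (true value 0.8)")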

  17. Methods of InSAR atmosphere correction for volcano activity monitoring

    USGS Publications Warehouse

    Gong, W.; Meyer, F.; Webley, P.W.; Lu, Z.

    2011-01-01

    When a Synthetic Aperture Radar (SAR) signal propagates through the atmosphere on its path to and from the sensor, it is inevitably affected by atmospheric effects. In particular, the applicability and accuracy of Interferometric SAR (InSAR) techniques for volcano monitoring is limited by atmospheric path delays. Therefore, atmospheric correction of interferograms is required to improve the performance of InSAR for detecting volcanic activity, especially in order to advance its ability to detect subtle pre-eruptive changes in deformation dynamics. In this paper, we focus on InSAR tropospheric mitigation methods and their performance in volcano deformation monitoring. Our study areas include Okmok volcano and Unimak Island located in the eastern Aleutians, AK. We explore two methods to mitigate atmospheric artifacts, namely numerical weather model simulation and atmospheric filtering using Persistent Scatterer processing. We investigate the capability of the proposed methods and assess their limitations and advantages when applied to determining volcanic processes. © 2011 IEEE.

  18. Scanning angle Raman spectroscopy: Investigation of Raman scatter enhancement techniques for chemical analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, Matthew W.

    2013-01-01

    This thesis outlines advancements in Raman scatter enhancement techniques by applying evanescent fields, standing-waves (waveguides) and surface enhancements to increase the generated mean square electric field, which is directly related to the intensity of Raman scattering. These techniques are accomplished by employing scanning angle Raman spectroscopy and surface enhanced Raman spectroscopy. A 1064 nm multichannel Raman spectrometer is discussed for chemical analysis of lignin. Extending dispersive multichannel Raman spectroscopy to 1064 nm reduces the fluorescence interference that can mask the weaker Raman scattering. Overall, these techniques help address the major obstacles in Raman spectroscopy for chemical analysis, which include the inherently weak Raman cross section and susceptibility to fluorescence interference.

  19. Modeling 3-D objects with planar surfaces for prediction of electromagnetic scattering

    NASA Technical Reports Server (NTRS)

    Koch, M. B.; Beck, F. B.; Cockrell, C. R.

    1992-01-01

    Electromagnetic scattering analysis of objects at resonance is difficult because low frequency techniques are slow and computer intensive, and high frequency techniques may not be reliable. A new technique for predicting the electromagnetic backscatter from electrically conducting objects at resonance is studied. This technique is based on modeling three dimensional objects as a combination of flat plates where some of the plates are blocking the scattering from others. A cube is analyzed as a simple example. The preliminary results compare well with the Geometrical Theory of Diffraction and with measured data.

  20. CORRECTING FOR INTERPLANETARY SCATTERING IN VELOCITY DISPERSION ANALYSIS OF SOLAR ENERGETIC PARTICLES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laitinen, T.; Dalla, S.; Huttunen-Heikinmaa, K.

    2015-06-10

    To understand the origin of Solar Energetic Particles (SEPs), we must study their injection time relative to other solar eruption manifestations. Traditionally the injection time is determined using the Velocity Dispersion Analysis (VDA) where a linear fit of the observed event onset times at 1 AU to the inverse velocities of SEPs is used to derive the injection time and path length of the first-arriving particles. VDA does not, however, take into account that the particles that produce a statistically observable onset at 1 AU have scattered in the interplanetary space. We use Monte Carlo test particle simulations of energetic protons to study the effect of particle scattering on the observable SEP event onset above pre-event background, and consequently on VDA results. We find that the VDA results are sensitive to the properties of the pre-event and event particle spectra as well as SEP injection and scattering parameters. In particular, a VDA-obtained path length that is close to the nominal Parker spiral length does not imply that the VDA injection time is correct. We study the delay to the observed onset caused by scattering of the particles and derive a simple estimate for the delay time by using the rate of intensity increase at the SEP onset as a parameter. We apply the correction to a magnetically well-connected SEP event of 2000 June 10, and show it to improve both the path length and injection time estimates, while also increasing the error limits to better reflect the inherent uncertainties of VDA.
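
    The standard VDA fit that the correction builds on is a one-line regression, sketched below with synthetic onset times (the Monte Carlo scattering correction itself is not reproduced): regressing onset time against inverse speed gives the injection time as the intercept and the apparent path length as the slope.

        # Minimal sketch: velocity dispersion analysis as a linear fit.
        import numpy as np

        AU_KM = 1.495978707e8
        speed = np.array([0.55, 0.35, 0.20, 0.12]) * 2.99792458e5   # km/s (toy betas)
        onset_s = 1000.0 + 1.2 * AU_KM / speed                      # synthetic onsets: t_inj=1000 s, L=1.2 AU

        slope, intercept = np.polyfit(1.0 / speed, onset_s, 1)
        print(f"injection time ~ {intercept:.0f} s, path length ~ {slope / AU_KM:.2f} AU")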

  1. Figure correction of a metallic ellipsoidal neutron focusing mirror

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Jiang, E-mail: jiang.guo@riken.jp; Yamagata, Yutaka; Morita, Shin-ya

    2015-06-15

    An increasing number of neutron focusing mirrors is being adopted in neutron scattering experiments in order to provide high fluxes at sample positions, reduce measurement time, and/or increase statistical reliability. To realize a small focusing spot and high beam intensity, mirrors with both high form accuracy and low surface roughness are required. To achieve this, we propose a new figure correction technique to fabricate a two-dimensional neutron focusing mirror made with electroless nickel-phosphorus (NiP) by effectively combining ultraprecision shaper cutting and fine polishing. An arc envelope shaper cutting method is introduced to generate high form accuracy, while a fine polishing method, in which the material is removed effectively without losing profile accuracy, is developed to reduce the surface roughness of the mirror. High form accuracy in the minor-axis and the major-axis is obtained through tool profile error compensation and corrective polishing, respectively, and low surface roughness is acquired under a low polishing load. As a result, an ellipsoidal neutron focusing mirror is successfully fabricated with high form accuracy of 0.5 μm peak-to-valley and low surface roughness of 0.2 nm root-mean-square.

  2. Tables of X-ray absorption corrections and dispersion corrections: the new versus the old

    NASA Astrophysics Data System (ADS)

    Creagh, Dudley

    1990-11-01

    This paper compares the data on X-ray absorption coefficients calculated by Creagh and Hubbell and tabulated in International Tables for Crystallography, vol. C, ed. A.J.C. Wilson (1990) section 4.2.4 [1] with empirical (Saloman, Hubbell and Scofield, At. Data and Nucl. Data Tables 38 (1988) 1, [6]) and semi-empirical (Hubbell, McMaster, Kerr Del Grande and Mallett, in: International Tables for Crystallography, vol. IV, eds. Ibers and Hamilton (Kynoch, Birmingham, 1974) [2]) tabulations as well as the renormalized relativistic Dirac-Hartree-Fock calculations of Scofield [6]. It also compares the real part of the dispersion correction f′(ω, 0), as tabulated in ref. [1], with theoretical data sets (Cromer and Liberman, J. Chem. Phys. 53 (1970) 1891, and Acta Crystallogr. A37 (1981) 267 [4,5]; Wang, Phys. Rev. A34 (1986) 636 [85]; Kissel, in: Workshop Report on New Dimensions in X-ray Scattering, CONF-870459 (Livermore, 1987) p. 9 [86]) and with data collected using a variety of experimental techniques. In both cases the data tabulated in ref. [1] is shown to give improved self-consistency and agreement with experiment.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borowik, Piotr, E-mail: pborow@poczta.onet.pl; Thobel, Jean-Luc, E-mail: jean-luc.thobel@iemn.univ-lille1.fr; Adamowicz, Leszek, E-mail: adamo@if.pw.edu.pl

    Standard computational methods used to take the Pauli exclusion principle into account in Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron–electron (e–e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study the transport properties of degenerate electrons in graphene with e–e interactions. This required adapting the treatment of e–e scattering to the case of a linear band dispersion relation. Hence, this part of the simulation algorithm is described in detail.
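
    A minimal sketch of the standard rejection step used to impose the Pauli exclusion principle in ensemble Monte Carlo transport (my illustration of the general idea, not the paper's modified algorithm): a tentative e-e event is accepted only with probability (1 - f(k1'))(1 - f(k2')), where f is the current occupation estimate on a k-grid. The grid and occupation values are toy data.

        # Minimal sketch: Pauli-blocking rejection for a tentative e-e scattering event.
        import numpy as np

        rng = np.random.default_rng(0)
        f_grid = rng.uniform(0.0, 1.0, size=(32, 32))   # toy occupation f(k) on a k-grid

        def accept_ee_event(k1, k2, f_grid, rng):
            """Both final states must be unoccupied: accept with (1-f(k1'))*(1-f(k2'))."""
            return rng.uniform() < (1.0 - f_grid[k1]) * (1.0 - f_grid[k2])

        accepted = sum(
            accept_ee_event((rng.integers(32), rng.integers(32)),
                            (rng.integers(32), rng.integers(32)), f_grid, rng)
            for _ in range(10000))
        print(f"accepted {accepted} of 10000 tentative e-e final states")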

  4. A deconvolution technique to correct deep images of galaxies from instrumental scattered light

    NASA Astrophysics Data System (ADS)

    Karabal, E.; Duc, P.-A.; Kuntschner, H.; Chanial, P.; Cuillandre, J.-C.; Gwyn, S.

    2017-05-01

    Deep imaging of the diffuse light that is emitted by stellar fine structures and outer halos around galaxies is often now used to probe their past mass assembly. Because the extended halos survive longer than the relatively fragile tidal features, they trace more ancient mergers. We use images that reach surface brightness limits as low as 28.5-29 mag arcsec-2 (g-band) to obtain light and color profiles up to 5-10 effective radii of a sample of nearby early-type galaxies. These were acquired with MegaCam as part of the CFHT MATLAS large programme. These profiles may be compared to those produced using simulations of galaxy formation and evolution, once corrected for instrumental effects. Indeed, they can be heavily contaminated by the scattered light caused by internal reflections within the instrument. In particular, the nuclei of galaxies generate artificial flux in the outer halo, which has to be precisely subtracted. We present a deconvolution technique to remove the artificial halos that makes use of very large kernels. The technique, which is based on PyOperators, is more time efficient than the model-convolution methods that are also used for that purpose. This is especially the case for galaxies with complex structures that are hard to model. Having a good knowledge of the point spread function (PSF), including its outer wings, is critical for the method. A database of MegaCam PSF models corresponding to different seeing conditions and bands was generated directly from the deep images. We show that the difference in the PSFs in different bands causes artificial changes in the color profiles, in particular a reddening of the outskirts of galaxies having a bright nucleus. The method is validated with a set of simulated images and applied to three representative test cases: NGC 3599, NGC 3489, and NGC 4274, two of which exhibit a prominent ghost halo; the method successfully removes it. The library of PSFs (FITS files) is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/601/A86
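
    A generic deconvolution sketch (not the PyOperators-based MATLAS pipeline): a few Richardson-Lucy iterations with a wide-winged PSF model, of the kind that could suppress the artificial halo produced by a bright nucleus. The image, PSF, and iteration count are illustrative assumptions.

        # Minimal sketch: Richardson-Lucy deconvolution with a long-winged PSF.
        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(image, psf, n_iter=20):
            psf = psf / psf.sum()
            psf_mirror = psf[::-1, ::-1]
            estimate = np.full_like(image, image.mean())
            for _ in range(n_iter):
                blurred = fftconvolve(estimate, psf, mode="same")
                ratio = image / np.clip(blurred, 1e-12, None)
                estimate *= fftconvolve(ratio, psf_mirror, mode="same")
            return estimate

        # Toy scene: a bright "nucleus" blurred by a PSF with faint extended wings.
        y, x = np.mgrid[-64:64, -64:64]
        psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2)) + 1e-3 * np.exp(-np.hypot(x, y) / 20.0)
        scene = np.zeros((128, 128))
        scene[64, 64] = 1000.0
        observed = fftconvolve(scene, psf / psf.sum(), mode="same")
        print(richardson_lucy(observed, psf).max())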

  5. [A practical procedure to improve the accuracy of radiochromic film dosimetry: an integration of a uniformity correction method and a red/blue correction method].

    PubMed

    Uehara, Ryuzo; Tachibana, Hidenobu; Ito, Yasushi; Yoshino, Shinichi; Matsubayashi, Fumiyasu; Sato, Tomoharu

    2013-06-01

    It has been reported that light scattering can worsen the accuracy of dose distribution measurement using a radiochromic film. The purpose of this study was to investigate the accuracy of two different films, EDR2 and EBT2, as film dosimetry tools. The effectiveness of a correction method for the non-uniformity caused by the EBT2 film and light scattering was also evaluated. In addition, the efficacy of this correction method integrated with the red/blue correction method was assessed. EDR2 and EBT2 films were read using a flatbed charge-coupled device scanner (EPSON 10000G). Dose differences on the axis perpendicular to the scanner lamp movement axis were within 1% with EDR2, but exceeded 3% (maximum: +8%) with EBT2. The non-uniformity correction method, after a single film exposure, was applied to the readout of the films, and corrected dose distribution data were subsequently created. The correction method yielded pass ratios in the dose difference evaluation that were more than 10% better than without the correction. The red/blue correction method resulted in a 5% improvement compared with the standard procedure that employed the red color only. The correction method with EBT2 proved able to rapidly correct non-uniformity, and has potential for routine clinical IMRT dose verification if the accuracy of EBT2 is required to be similar to that of EDR2. The red/blue correction method may improve the accuracy, but we recommend using it carefully and understanding the characteristics of EBT2 both for the red color only and for the red/blue correction.

  6. SU-E-I-08: Investigation of Deconvolution Methods for Blocker-Based CBCT Scatter Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, C; Jin, M; Ouyang, L

    2015-06-15

    Purpose: To investigate whether deconvolution methods can improve the scatter estimation under different blurring and noise conditions for blocker-based scatter correction methods for cone-beam X-ray computed tomography (CBCT). Methods: An “ideal” projection image with scatter was first simulated for blocker-based CBCT data acquisition by assuming no blurring effect and no noise. The ideal image was then convolved with long-tail point spread functions (PSF) with different widths to mimic the blurring effect from the finite focal spot and detector response. Different levels of noise were also added. Three deconvolution methods: 1) inverse filtering, 2) Wiener, and 3) Richardson-Lucy, were used to recover the scatter signal in the blocked region. The root mean square error (RMSE) of estimated scatter serves as a quantitative measure for the performance of different methods under different blurring and noise conditions. Results: Due to the blurring effect, the scatter signal in the blocked region is contaminated by the primary signal in the unblocked region. The direct use of the signal in the blocked region to estimate scatter (“direct method”) leads to large RMSE values, which increase with the increased width of PSF and increased noise. The inverse filtering is very sensitive to noise and practically useless. The Wiener and Richardson-Lucy deconvolution methods significantly improve scatter estimation compared to the direct method. For a typical medium PSF and medium noise condition, both methods (∼20 RMSE) can achieve 4-fold improvement over the direct method (∼80 RMSE). The Wiener method deals better with large noise and Richardson-Lucy works better on wide PSF. Conclusion: We investigated several deconvolution methods to recover the scatter signal in the blocked region for blocker-based scatter correction for CBCT. Our simulation results demonstrate that Wiener and Richardson-Lucy deconvolution can significantly improve the scatter estimation compared to the direct method.
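
    A minimal 1-D sketch of Wiener deconvolution of the kind compared above (illustrative, not the study's code): the unblurred profile is estimated as the inverse FFT of H* Y / (|H|^2 + K). The long-tail PSF, noise level, and constant K are assumptions.

        # Minimal sketch: 1-D Wiener deconvolution of a blurred detector profile.
        import numpy as np

        def wiener_deconvolve(measured, psf, k=0.01):
            n = measured.size
            H = np.fft.fft(psf, n)
            Y = np.fft.fft(measured)
            X = np.conj(H) * Y / (np.abs(H) ** 2 + k)
            return np.real(np.fft.ifft(X))

        x = np.arange(256)
        true_signal = np.where((x > 100) & (x < 130), 50.0, 200.0)   # "blocked" dip
        psf = np.exp(-np.minimum(x, 256 - x) / 5.0)                  # circular long-tail blur
        psf /= psf.sum()
        measured = np.real(np.fft.ifft(np.fft.fft(true_signal) * np.fft.fft(psf)))
        measured += np.random.default_rng(0).normal(0.0, 1.0, 256)
        restored = wiener_deconvolve(measured, psf, k=0.005)
        print(float(restored[110:121].mean()))   # should land near the true dip value of 50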

  7. Laplace Transform Based Radiative Transfer Studies

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Lin, B.; Ng, T.; Yang, P.; Wiscombe, W.; Herath, J.; Duffy, D.

    2006-12-01

    Multiple scattering is the major uncertainty for data analysis of space-based lidar measurements. Until now, accurate quantitative lidar data analysis has been limited to very thin objects that are dominated by single scattering, where photons from the laser beam scatter only once with particles in the atmosphere before reaching the receiver and a simple linear relationship between physical properties and the lidar signal exists. In reality, multiple scattering is always a factor in space-based lidar measurements, and it dominates space-based lidar returns from clouds, dust aerosols, vegetation canopy and phytoplankton. While the multiple-scattering returns are clear signals, the lack of a fast-enough lidar multiple-scattering computation tool forces us to treat them as unwanted "noise" and use simple multiple-scattering correction schemes to remove them. Such treatments waste the multiple-scattering signals and may cause orders-of-magnitude errors in retrieved physical properties. Thus the lack of fast and accurate time-dependent radiative transfer tools significantly limits lidar remote sensing capabilities. Analyzing lidar multiple-scattering signals requires fast and accurate time-dependent radiative transfer computations. Currently, multiple-scattering computations are done with Monte Carlo simulations, which take minutes to hours, are too slow for interactive satellite data analysis, and can only be used to support system and algorithm design and error assessment. We present an innovative physics approach to solve the time-dependent radiative transfer problem. The technique utilizes FPGA-based reconfigurable computing hardware. The approach is as follows. 1. Physics solution: perform a Laplace transform on the time and spatial dimensions and a Fourier transform on the viewing azimuth dimension, converting the solution of the radiative transfer differential equation into a fast matrix inversion problem. The majority of the radiative transfer computation then goes into matrix inversions, FFTs and inverse Laplace transforms. 2. Hardware solution: perform the well-defined matrix inversions, FFTs and Laplace transforms on highly parallel, reconfigurable computing hardware. This physics-based computational tool leads to accurate quantitative analysis of space-based lidar signals and improves data quality of current lidar missions such as CALIPSO. This presentation will introduce the basic idea of this approach, preliminary results based on SRC's FPGA-based Mapstation, and how we may apply it to CALIPSO data analysis.
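
    A schematic sketch of the computational pattern described above (not the authors' code): after an FFT over the azimuth dimension, each transform mode reduces to a dense linear solve, so the workload is dominated by batched matrix inversions and FFTs of the kind that map well onto parallel or reconfigurable hardware. The operator matrices below are random placeholders standing in for the discretized, transformed radiative transfer equation, and the Laplace inversion over time and space is omitted.

      import numpy as np

      rng = np.random.default_rng(1)

      n_azimuth = 32     # Fourier modes in viewing azimuth (illustrative)
      n_stream = 16      # discrete ordinates / spatial unknowns per mode (illustrative)

      # Placeholder source term versus azimuth, transformed over the azimuth axis.
      source = rng.normal(size=(n_azimuth, n_stream))
      source_hat = np.fft.fft(source, axis=0)

      # One dense operator per Fourier mode, standing in for the transformed RT equation.
      operators = np.stack([np.eye(n_stream) + 0.1 * rng.normal(size=(n_stream, n_stream))
                            for _ in range(n_azimuth)])

      # Batched matrix solve: the step the abstract maps onto parallel hardware.
      field_hat = np.linalg.solve(operators, source_hat[..., None])[..., 0]

      # Inverse FFT recovers the azimuth dependence; the inverse Laplace transforms
      # over time and space would follow the same "transform, solve, invert" pattern.
      field = np.fft.ifft(field_hat, axis=0).real
      print(field.shape)   # (32, 16)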

  8. Solar flare ionization in the mesosphere observed by coherent-scatter radar

    NASA Technical Reports Server (NTRS)

    Parker, J. W.; Bowhill, S. A.

    1986-01-01

    The coherent-scatter technique, as used with the Urbana radar, is able to measure relative changes in electron density at one altitude during the progress of a solar flare when that altitude contains a statistically steady turbulent layer. This work describes the analysis of Urbana coherent-scatter data from the times of 13 solar flares in the period from 1978 to 1983. Previous methods of measuring electron density changes in the D-region are summarized. Models of X-ray spectra, photoionization rates, and ion-recombination reaction schemes are reviewed. The coherent-scatter technique is briefly described, and a model is developed which relates changes in scattered power to changes in electron density. An analysis technique is developed using X-ray flux data from geostationary satellites and coherent scatter data from the Urbana radar which empirically distinguishes between proposed D-region ion-chemical schemes, and estimates the nonflare ion-pair production rate.

  9. Three dimensional scattering center imaging techniques

    NASA Technical Reports Server (NTRS)

    Younger, P. R.; Burnside, W. D.

    1991-01-01

    Two methods to image scattering centers in 3-D are presented. The first method uses 2-D images generated from Inverse Synthetic Aperture Radar (ISAR) measurements taken by two vertically offset antennas. This technique is shown to provide accurate 3-D imaging capability which can be added to an existing ISAR measurement system, requiring only the addition of a second antenna. The second technique uses target impulse responses generated from wideband radar measurements from three slightly different offset antennas. This technique is shown to identify the dominant scattering centers on a target in nearly real time. The number of measurements required to image a target using this technique is very small relative to traditional imaging techniques.

  10. The MOSDEF Survey: Dissecting the Star Formation Rate versus Stellar Mass Relation Using Hα and Hβ Emission Lines at z ∼ 2

    NASA Astrophysics Data System (ADS)

    Shivaei, Irene; Reddy, Naveen A.; Shapley, Alice E.; Kriek, Mariska; Siana, Brian; Mobasher, Bahram; Coil, Alison L.; Freeman, William R.; Sanders, Ryan; Price, Sedona H.; de Groot, Laura; Azadi, Mojegan

    2015-12-01

    We present results on the star formation rate (SFR) versus stellar mass (M*) relation (i.e., the “main sequence”) among star-forming galaxies at 1.37 ≤ z ≤ 2.61 using the MOSFIRE Deep Evolution Field (MOSDEF) survey. Based on a sample of 261 galaxies with Hα and Hβ spectroscopy, we have estimated robust dust-corrected instantaneous SFRs over a large range in M* (~10^9.5-10^11.5 M⊙). We find a correlation between log(SFR(Hα)) and log(M*) with a slope of 0.65 ± 0.08 (0.58 ± 0.10) at 1.4 < z < 2.6 (2.1 < z < 2.6). We find that different assumptions for the dust correction, such as using the color excess of the stellar continuum to correct the nebular lines, sample selection biases against red star-forming galaxies, and not accounting for Balmer absorption, can yield steeper slopes of the log(SFR)-log(M*) relation. Our sample is immune to these biases as it is rest-frame optically selected, Hα and Hβ are corrected for Balmer absorption, and the Hα luminosity is dust corrected using the nebular color excess computed from the Balmer decrement. The scatter of the log(SFR(Hα))-log(M*) relation, after accounting for the measurement uncertainties, is 0.31 dex at 2.1 < z < 2.6, which is 0.05 dex larger than the scatter in log(SFR(UV))-log(M*). Based on comparisons to a simulated SFR-M* relation with some intrinsic scatter, we argue that in the absence of direct measurements of galaxy-to-galaxy variations in the attenuation/extinction curves and the initial mass function, one cannot use the difference in the scatter of the SFR(Hα)- and SFR(UV)-M* relations to constrain the stochasticity of star formation in high-redshift galaxies.
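
    For readers unfamiliar with the dust-correction step described above, the sketch below works through a Balmer-decrement correction of an Hα luminosity. The extinction-curve values k(Hα)≈2.53 and k(Hβ)≈3.61, the intrinsic ratio 2.86, and the Kennicutt-style SFR calibration are standard illustrative choices, not necessarily the exact coefficients adopted by the MOSDEF analysis.

      import numpy as np

      def nebular_color_excess(ha_flux, hb_flux, k_ha=2.53, k_hb=3.61, intrinsic_ratio=2.86):
          # E(B-V) of the nebular gas from the observed Balmer decrement.
          return 2.5 / (k_hb - k_ha) * np.log10((ha_flux / hb_flux) / intrinsic_ratio)

      def dust_corrected_ha(ha_luminosity, ebv, k_ha=2.53):
          # Correct an observed Halpha luminosity for A(Halpha) = k_ha * E(B-V) magnitudes.
          return ha_luminosity * 10.0 ** (0.4 * k_ha * ebv)

      # Illustrative numbers: observed decrement of 4.0 instead of the intrinsic 2.86.
      ebv = nebular_color_excess(ha_flux=4.0, hb_flux=1.0)
      l_ha_corr = dust_corrected_ha(ha_luminosity=1.0e42, ebv=ebv)     # erg/s
      sfr = 7.9e-42 * l_ha_corr    # Kennicutt (1998)-style calibration, Msun/yr (illustrative)
      print(f"E(B-V) = {ebv:.2f}, corrected L(Ha) = {l_ha_corr:.2e} erg/s, SFR = {sfr:.1f} Msun/yr")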

  11. Theory of thermal conductivity in the disordered electron liquid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwiete, G., E-mail: schwiete@uni-mainz.de; Finkel’stein, A. M.

    2016-03-15

    We study thermal conductivity in the disordered two-dimensional electron liquid in the presence of long-range Coulomb interactions. We describe a microscopic analysis of the problem using the partition function defined on the Keldysh contour as a starting point. We extend the renormalization group (RG) analysis developed for thermal transport in the disordered Fermi liquid and include scattering processes induced by the long-range Coulomb interaction in the sub-temperature energy range. For the thermal conductivity, unlike for the electrical conductivity, these scattering processes yield a logarithmic correction that may compete with the RG corrections. The interest in this correction arises from the fact that it violates the Wiedemann–Franz law. We checked that the sub-temperature correction to the thermal conductivity is not modified either by the inclusion of Fermi liquid interaction amplitudes or as a result of the RG flow. We therefore expect that the answer obtained for this correction is final. We use the theory to describe thermal transport on the metallic side of the metal–insulator transition in Si MOSFETs.

  12. ARGOS: the laser guide star system for the LBT

    NASA Astrophysics Data System (ADS)

    Rabien, S.; Ageorges, N.; Barl, L.; Beckmann, U.; Blümchen, T.; Bonaglia, M.; Borelli, J. L.; Brynnel, J.; Busoni, L.; Carbonaro, L.; Davies, R.; Deysenroth, M.; Durney, O.; Elberich, M.; Esposito, S.; Gasho, V.; Gässler, W.; Gemperlein, H.; Genzel, R.; Green, R.; Haug, M.; Hart, M. L.; Hubbard, P.; Kanneganti, S.; Masciadri, E.; Noenickx, J.; Orban de Xivry, G.; Peter, D.; Quirrenbach, A.; Rademacher, M.; Rix, H. W.; Salinari, P.; Schwab, C.; Storm, J.; Strüder, L.; Thiel, M.; Weigelt, G.; Ziegleder, J.

    2010-07-01

    ARGOS is the Laser Guide Star adaptive optics system for the Large Binocular Telescope. Aiming for a wide-field adaptive optics correction, ARGOS will equip both sides of the LBT with a multi-laser beacon system and corresponding wavefront sensors, driving LBT's adaptive secondary mirrors. Using high-power pulsed green lasers, the artificial beacons are generated via Rayleigh scattering in Earth's atmosphere. ARGOS will project a set of three guide stars above each of LBT's mirrors in a wide constellation. The returning scattered light, particularly sensitive to the turbulence close to the ground, is detected in a gated wavefront sensor system. Measuring and correcting the ground layers of the optical distortions enables ARGOS to achieve a correction over a very wide field of view. Taking advantage of this wide-field correction, the science that can be done with the multi-object spectrographs LUCIFER will be boosted by higher spatial resolution and strongly enhanced flux for spectroscopy. Apart from the wide-field correction ARGOS delivers in its ground-layer mode, we foresee diffraction-limited operation with a hybrid sodium-laser/Rayleigh-beacon combination.

  13. Elastic light scattering for clinical pathogens identification: application to early screening of Staphylococcus aureus on specific medium

    NASA Astrophysics Data System (ADS)

    Schultz, E.; Genuer, V.; Marcoux, P.; Gal, O.; Belafdil, C.; Decq, D.; Maurin, Max; Morales, S.

    2018-02-01

    Elastic Light Scattering (ELS) is an innovative technique to identify bacterial pathogens directly on culture plates. Compelling results have already been reported for agri-food applications. Here, we have developed ELS for clinical diagnosis, starting with Staphylococcus aureus early screening. Our goal is to deliver a result (positive/negative) after only 6 h of growth to fight surgical-site infections. The method starts with the acquisition of the scattering pattern arising from the interaction between a laser beam and a single bacterial colony growing on a culture medium. Then, the resulting image, considered as the bacterial species signature, is analyzed using statistical learning techniques. We present a custom optical setup able to target bacterial colonies of various sizes (30-500 microns). This system was used to collect a reference dataset of 38 strains of S. aureus and other Staphylococcus species (5459 images) on ChromID SAID/MRSA bi-plates. A validation set from 20 patients was then acquired and clinically validated according to chromogenic enzymatic tests. The best correct-identification rate between S. aureus and S. non-aureus (94.7%) was obtained using a support vector machine classifier trained on a combination of Fourier-Bessel moments and Local-Binary-Pattern extracted features. This statistical model applied to the validation set provided a sensitivity and a specificity of 90.0% and 56.9%, or alternatively, a positive predictive value of 47% and a negative predictive value of 93%. From a clinical point of view, the results head in the right direction and pave the way toward the WHO's requirements for rapid, low-cost, and automated diagnosis tools.
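
    A hedged outline of the classification stage described above, using scikit-learn's SVC on local-binary-pattern histograms; the Fourier-Bessel moments are represented by a placeholder array since they are not available in a standard library, and all parameter values and array sizes are illustrative rather than those of the published model.

      import numpy as np
      from skimage.feature import local_binary_pattern
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      def lbp_histogram(image, p=8, r=2):
          # Uniform LBP histogram of a grayscale scatter pattern (illustrative settings).
          codes = local_binary_pattern(image, P=p, R=r, method="uniform")
          hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
          return hist

      def features(image, fourier_bessel_moments):
          # Concatenate the LBP histogram with externally computed Fourier-Bessel moments.
          return np.concatenate([lbp_histogram(image), fourier_bessel_moments])

      # Illustrative training data: synthetic images and placeholder moments; label 1 = S. aureus.
      rng = np.random.default_rng(2)
      images = rng.random((40, 128, 128))
      fb_moments = rng.random((40, 16))          # placeholder for real Fourier-Bessel moments
      X = np.array([features(img, fb) for img, fb in zip(images, fb_moments)])
      y = rng.integers(0, 2, size=40)

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
      clf.fit(X, y)
      print(clf.predict(X[:5]))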

  14. WE-AB-204-10: Evaluation of a Novel Dedicated Breast PET System (Mammi-PET)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Z; Swanson, T; O’Connor, M

    2015-06-15

    Purpose: To evaluate the performance characteristics of a novel dedicated breast PET system (Mammi-PET, Oncovision). The system has 2 detector rings giving axial/transaxial field of view of 8/17 cm. Each ring consists of 12 monolithic LYSO modules coupled to PSPMTs. Methods: Uniformity, sensitivity, energy and spatial resolution were measured according to NEMA standards. Count rate performance was investigated using a source of F-18 (1384uCi) decayed over 5 half-lives. A prototype PET phantom was imaged for 20 min to evaluate image quality, recovery coefficients and partial volume effects. Under an IRB-approved protocol, 11 patients who just underwent whole body PET/CT exams were imaged prone with the breast pendulant at 5–10 minutes/breast. Image quality was assessed with and without scatter/attenuation correction and using different reconstruction algorithms. Results: Integral/differential uniformity were 9.8%/6.0% respectively. System sensitivity was 2.3% on axis, 2.2% and 2.8% at 3.8 cm and 7.8 cm off-axis. Mean energy resolution of all modules was 23.3%. Spatial resolution (FWHM) was 1.82 mm and 2.90 mm on axis and 5.8 cm off axis. Three cylinders (14 mm diameter) in the PET phantom were filled with activity concentration ratios of 4:1, 3:1, and 2:1 relative to the background. Measured cylinder to background ratios were 2.6, 1.8 and 1.5 (without corrections) and 3.6, 2.3 and 1.5 (with attenuation/scatter correction). Five cylinders (14, 10, 6, 4 and 2 mm diameter) each with an activity ratio of 4:1 were measured and showed recovery coefficients of 1, 0.66, 0.45, 0.18 and 0.18 (without corrections), and 1, 0.53, 0.30, 0.13 and 0 (with attenuation/scatter correction). Optimal phantom image quality was obtained with 3D MLEM algorithm, >20 iterations and without attenuation/scatter correction. Conclusion: The MAMMI system demonstrated good performance characteristics. Further work is needed to determine the optimal reconstruction parameters for qualitative and quantitative applications.

  15. Scatter Correction with Combined Single-Scatter Simulation and Monte Carlo Simulation Scaling Improved the Visual Artifacts and Quantification in 3-Dimensional Brain PET/CT Imaging with 15O-Gas Inhalation.

    PubMed

    Magota, Keiichi; Shiga, Tohru; Asano, Yukari; Shinyama, Daiki; Ye, Jinghan; Perkins, Amy E; Maniawski, Piotr J; Toyonaga, Takuya; Kobayashi, Kentaro; Hirata, Kenji; Katoh, Chietsugu; Hattori, Naoya; Tamaki, Nagara

    2017-12-01

    In 3-dimensional PET/CT imaging of the brain with 15O-gas inhalation, high radioactivity in the face mask creates cold artifacts and affects the quantitative accuracy when scatter is corrected by conventional methods (e.g., single-scatter simulation [SSS] with tail-fitting scaling [TFS-SSS]). Here we examined the validity of a newly developed scatter-correction method that combines SSS with a scaling factor calculated by Monte Carlo simulation (MCS-SSS). Methods: We performed phantom experiments and patient studies. In the phantom experiments, a plastic bottle simulating a face mask was attached to a cylindric phantom simulating the brain. The cylindric phantom was filled with 18F-FDG solution (3.8-7.0 kBq/mL). The bottle was filled with nonradioactive air or various levels of 18F-FDG (0-170 kBq/mL). Images were corrected either by TFS-SSS or MCS-SSS using the CT data of the bottle filled with nonradioactive air. We compared the image activity concentration in the cylindric phantom with the true activity concentration. We also performed 15O-gas brain PET based on the steady-state method on patients with cerebrovascular disease to obtain quantitative images of cerebral blood flow and oxygen metabolism. Results: In the phantom experiments, a cold artifact was observed immediately next to the bottle on TFS-SSS images, where the image activity concentrations in the cylindric phantom were underestimated by 18%, 36%, and 70% at the bottle radioactivity levels of 2.4, 5.1, and 9.7 kBq/mL, respectively. At higher bottle radioactivity, the image activity concentrations in the cylindric phantom were underestimated by more than 98%. For MCS-SSS, in contrast, the error was within 5% at each bottle radioactivity level, although the images showed slight high-activity artifacts around the bottle when the bottle contained significantly high radioactivity. In the patient imaging with 15O2 and C15O2 inhalation, cold artifacts were observed on TFS-SSS images, whereas no artifacts were observed on any of the MCS-SSS images. Conclusion: MCS-SSS accurately corrected scatter in 15O-gas brain PET when the 3-dimensional acquisition mode was used, preventing the generation of the cold artifacts that are observed immediately next to a face mask on TFS-SSS images. The MCS-SSS method will contribute to accurate quantitative assessments. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  16. Theory of bright-field scanning transmission electron microscopy for tomography

    NASA Astrophysics Data System (ADS)

    Levine, Zachary H.

    2005-02-01

    Radiation transport theory is applied to electron microscopy of samples composed of one or more materials. The theory, originally due to Goudsmit and Saunderson, assumes only elastic scattering and an amorphous medium dominated by atomic interactions. For samples composed of a single material, the theory yields reasonable parameter-free agreement with experimental data taken from the literature for the multiple scattering of 300-keV electrons through aluminum foils up to 25 μm thick. For thin films, the theory gives a validity condition for Beer's law. For thick films, a variant of Molière's theory [V. G. Molière, Z. Naturforschg. 3a, 78 (1948)] of multiple scattering leads to a form for the bright-field signal for foils in the multiple-scattering regime. The signal varies as [t ln(e^{1-2γ} t/τ)]^{-1}, where t is the path length of the beam, τ is the mean free path for elastic scattering, and γ is Euler's constant. The Goudsmit-Saunderson solution interpolates numerically between these two limits. For samples with multiple materials, elemental sensitivity is developed through the angular dependence of the scattering. From the elastic scattering cross sections of the first 92 elements, a singular-value decomposition of a vector space spanned by the elastic scattering cross sections minus a delta function shows that there is a dominant common mode, with composition-dependent corrections of about 2%. A mathematically correct reconstruction procedure beyond 2% accuracy requires the acquisition of the bright-field signal as a function of the scattering angle. Tomographic reconstructions are carried out for three singular vectors of a sample problem with four elements Cr, Cu, Zr, and Te. The three reconstructions are presented jointly as a color image; all four elements are clearly identifiable throughout the image.

  17. Metallic scattering lifetime measurements with terahertz time-domain spectroscopy

    NASA Astrophysics Data System (ADS)

    Lea, Graham Bryce

    The momentum scattering lifetime is a fundamental parameter of metallic conduction that can be measured with terahertz time-domain spectroscopy. This technique has an important strength over optical reflectance spectroscopy: it is capable of measuring both the phase and the amplitude of the probing radiation. This allows simultaneous, independent measurements of the scattering lifetime and resistivity. Broadly, it is the precision of the phase measurement that determines the precision of scattering lifetime measurements. This thesis describes milliradian-level phase measurement refinements in the experimental technique and measures the conductivity anisotropy in the correlated electron system CaRuO3. These phase measurement refinements translate to femtosecond-level refinements in scattering lifetime measurements of thin metallic films. Keywords: terahertz time-domain spectroscopy, calcium ruthenate, ruthenium oxides, correlated electrons, experimental technique.

  18. Optical artefact characterization and correction in volumetric scintillation dosimetry

    PubMed Central

    Robertson, Daniel; Hui, Cheukkai; Archambault, Louis; Mohan, Radhe; Beddar, Sam

    2014-01-01

    The goals of this study were (1) to characterize the optical artefacts affecting measurement accuracy in a volumetric liquid scintillation detector, and (2) to develop methods to correct for these artefacts. The optical artefacts addressed were photon scattering, refraction, camera perspective, vignetting, lens distortion, the lens point spread function, stray radiation, and noise in the camera. These artefacts were evaluated by theoretical and experimental means, and specific correction strategies were developed for each artefact. The effectiveness of the correction methods was evaluated by comparing raw and corrected images of the scintillation light from proton pencil beams against validated Monte Carlo calculations. Blurring due to the lens and refraction at the scintillator tank-air interface were found to have the largest effect on the measured light distribution, and lens aberrations and vignetting were important primarily at the image edges. Photon scatter in the scintillator was not found to be a significant source of artefacts. The correction methods effectively mitigated the artefacts, increasing the average gamma analysis pass rate from 66% to 98% for gamma criteria of 2% dose difference and 2 mm distance to agreement. We conclude that optical artefacts cause clinically meaningful errors in the measured light distribution, and we have demonstrated effective strategies for correcting these optical artefacts. PMID:24321820

  19. Airborne Polarized Lidar Detection of Scattering Layers in the Ocean

    NASA Astrophysics Data System (ADS)

    Vasilkov, Alexander P.; Goldin, Yury A.; Gureev, Boris A.; Hoge, Frank E.; Swift, Robert N.; Wright, C. Wayne

    2001-08-01

    A polarized lidar technique based on measurements of waveforms of the two orthogonal-polarized components of the backscattered light pulse is proposed to retrieve vertical profiles of the seawater scattering coefficient. The physical rationale for the polarized technique is that depolarization of backscattered light originating from a linearly polarized laser beam is caused largely by multiple small-angle scattering from particulate matter in seawater. The magnitude of the small-angle scattering is determined by the scattering coefficient. Therefore information on the vertical distribution of the scattering coefficient can be derived potentially from measurements of the time-depth dependence of depolarization in the backscattered laser pulse. The polarized technique was verified by field measurements conducted in the Middle Atlantic Bight of the western North Atlantic Ocean that were supported by in situ measurements of the beam attenuation coefficient. The airborne polarized lidar measured the time-depth dependence of the backscattered laser pulse in two orthogonal-polarized components. Vertical profiles of the scattering coefficient retrieved from the time-depth depolarization of the backscattered laser pulse were compared with measured profiles of the beam attenuation coefficient. The comparison showed that retrieved profiles of the scattering coefficient clearly reproduce the main features of the measured profiles of the beam attenuation coefficient. Underwater scattering layers were detected at depths of 20-25 m in turbid coastal waters. The improvement in dynamic range afforded by the polarized lidar technique offers a strong potential benefit for airborne lidar bathymetric applications.

  20. Modeling ultrasonic transient scattering from biological tissues including their dispersive properties directly in the time domain.

    PubMed

    Norton, G V; Novarini, J C

    2007-06-01

    Ultrasonic imaging in medical applications involves propagation and scattering of acoustic waves within and by biological tissues that are intrinsically dispersive. Analytical approaches for modeling propagation and scattering in inhomogeneous media are difficult and often require extremely simplifying approximations in order to achieve a solution. To avoid such approximations, the direct numerical solution of the wave equation via the method of finite differences offers the most direct tool, which takes into account diffraction and refraction. It also allows for detailed modeling of the real anatomic structure and combination/layering of tissues. In all cases the correct inclusion of the dispersive properties of the tissues can make the difference in the interpretation of the results. However, the inclusion of dispersion directly in the time domain proved until recently to be an elusive problem. In order to model the transient signal a convolution operator that takes into account the dispersive characteristics of the medium is introduced to the linear wave equation. To test the ability of this operator to handle scattering from localized scatterers, in this work, two-dimensional numerical modeling of scattering from an infinite cylinder with physical properties associated with biological tissue is calculated. The numerical solutions are compared with the exact solution synthesized from the frequency domain for a variety of tissues having distinct dispersive properties. It is shown that in all cases, the use of the convolutional propagation operator leads to the correct solution for the scattered field.

  1. SU-F-I-13: Correction Factor Computations for the NIST Ritz Free Air Chamber for Medium-Energy X Rays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergstrom, P

    Purpose: The National Institute of Standards and Technology (NIST) uses 3 free-air chambers to establish primary standards for radiation dosimetry at x-ray energies. For medium-energy x rays, the Ritz free-air chamber is the main measurement device. In order to convert the charge or current collected by the chamber to the radiation quantities air kerma or air kerma rate, a number of correction factors specific to the chamber must be applied. Methods: We used the Monte Carlo codes EGSnrc and PENELOPE. Results: Among these correction factors are the diaphragm correction (which accounts for interactions of photons from the x-ray source in the beam-defining diaphragm of the chamber), the scatter correction (which accounts for the effects of photons scattered out of the primary beam), the electron-loss correction (which accounts for electrons that only partially expend their energy in the collection region), the fluorescence correction (which accounts for ionization due to reabsorption of fluorescence photons) and the bremsstrahlung correction (which accounts for the reabsorption of bremsstrahlung photons). We have computed monoenergetic corrections for the NIST Ritz chamber for the 1 cm, 3 cm and 7 cm collection plates. Conclusion: We find good agreement with others’ results for the 7 cm plate. The data used to obtain these correction factors will be used to establish air kerma and its uncertainty in the standard NIST x-ray beams.

  2. Survey of background scattering from materials found in small-angle neutron scattering.

    PubMed

    Barker, J G; Mildner, D F R

    2015-08-01

    Measurements and calculations of beam attenuation and background scattering for common materials placed in a neutron beam are presented over the temperature range of 300-700 K. Time-of-flight (TOF) measurements have also been made, to determine the fraction of the background that is either inelastic or quasi-elastic scattering as measured with a 3He detector. Other background sources considered include double Bragg diffraction from windows or samples, scattering from gases, and phonon scattering from solids. Background from the residual air in detector vacuum vessels and scattering from the 3He detector dome are presented. The thickness dependence of the multiple scattering correction for forward scattering from water is calculated. Inelastic phonon background scattering at small angles for crystalline solids is both modeled and compared with measurements. Methods of maximizing the signal-to-noise ratio by material selection, choice of sample thickness and wavelength, removal of inelastic background by TOF or Be filters, and removal of spin-flip scattering with polarized beam analysis are discussed.

  3. Survey of background scattering from materials found in small-angle neutron scattering

    PubMed Central

    Barker, J. G.; Mildner, D. F. R.

    2015-01-01

    Measurements and calculations of beam attenuation and background scattering for common materials placed in a neutron beam are presented over the temperature range of 300–700 K. Time-of-flight (TOF) measurements have also been made, to determine the fraction of the background that is either inelastic or quasi-elastic scattering as measured with a 3He detector. Other background sources considered include double Bragg diffraction from windows or samples, scattering from gases, and phonon scattering from solids. Background from the residual air in detector vacuum vessels and scattering from the 3He detector dome are presented. The thickness dependence of the multiple scattering correction for forward scattering from water is calculated. Inelastic phonon background scattering at small angles for crystalline solids is both modeled and compared with measurements. Methods of maximizing the signal-to-noise ratio by material selection, choice of sample thickness and wavelength, removal of inelastic background by TOF or Be filters, and removal of spin-flip scattering with polarized beam analysis are discussed. PMID:26306088

  4. Plasma characterization using ultraviolet Thomson scattering from ion-acoustic and electron plasma waves (invited).

    PubMed

    Follett, R K; Delettrez, J A; Edgell, D H; Henchen, R J; Katz, J; Myatt, J F; Froula, D H

    2016-11-01

    Collective Thomson scattering is a technique for measuring the plasma conditions in laser-plasma experiments. Simultaneous measurements of ion-acoustic and electron plasma-wave spectra were obtained using a 263.25-nm Thomson-scattering probe beam. A fully reflective collection system was used to record light scattered from electron plasma waves at electron densities greater than 10^21 cm^-3, which produced scattering peaks near 200 nm. An accurate analysis of the experimental Thomson-scattering spectra required accounting for plasma gradients, instrument sensitivity, optical effects, and background radiation. Practical techniques for including these effects when fitting Thomson-scattering spectra are presented and applied to the measured spectra to show the improvements in plasma characterization.

  5. Intermediate energy proton-deuteron elastic scattering

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.

    1973-01-01

    A fully symmetrized multiple scattering series is considered for the description of proton-deuteron elastic scattering. An off-shell continuation of the experimentally known two-body amplitudes that retains the exchange symmetries required for the calculation is presented. The one-boson-exchange terms of the two-body amplitudes are evaluated exactly in this off-shell prescription. The first two terms of the multiple scattering series are calculated explicitly, whereas higher-order multiple scattering effects are obtained as minimum variance estimates from the 146-MeV data of Postma and Wilson. The multiple scattering corrections indeed consist of low-order partial waves, as suggested by Sloan based on model studies with separable interactions. The Hamada-Johnston wave function is shown to be consistent with the data for internucleon distances greater than about 0.84 fm.

  6. WE-DE-207B-10: Library-Based X-Ray Scatter Correction for Dedicated Cone-Beam Breast CT: Clinical Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, L; Zhu, L; Vedantham, S

    Purpose: Scatter contamination is detrimental to image quality in dedicated cone-beam breast CT (CBBCT), resulting in cupping artifacts and loss of contrast in reconstructed images. Such effects impede visualization of breast lesions and the quantitative accuracy. Previously, we proposed a library-based software approach to suppress scatter on CBBCT images. In this work, we quantify the efficacy and stability of this approach using datasets from 15 human subjects. Methods: A pre-computed scatter library is generated using Monte Carlo simulations for semi-ellipsoid breast models and homogeneous fibroglandular/adipose tissue mixture encompassing the range reported in literature. Projection datasets from 15 human subjects that cover 95 percentile of breast dimensions and fibroglandular volume fraction were included in the analysis. Our investigations indicate that it is sufficient to consider the breast dimensions alone and variation in fibroglandular fraction does not significantly affect the scatter-to-primary ratio. The breast diameter is measured from a first-pass reconstruction; the appropriate scatter distribution is selected from the library; and, deformed by considering the discrepancy in total projection intensity between the clinical dataset and the simulated semi-ellipsoidal breast. The deformed scatter-distribution is subtracted from the measured projections for scatter correction. Spatial non-uniformity (SNU) and contrast-to-noise ratio (CNR) were used as quantitative metrics to evaluate the results. Results: On the 15 patient cases, our method reduced the overall image spatial non-uniformity (SNU) from 7.14%±2.94% (mean ± standard deviation) to 2.47%±0.68% in coronal view and from 10.14%±4.1% to 3.02%±1.26% in sagittal view. The average contrast to noise ratio (CNR) improved by a factor of 1.49±0.40 in coronal view and by 2.12±1.54 in sagittal view. Conclusion: We demonstrate the robustness and effectiveness of a library-based scatter correction method using patient datasets with large variability in breast dimensions and composition. The high computational efficiency and simplicity in implementation make this attractive for clinical implementation. Supported partly by NIH R21EB019597, R21CA134128 and R01CA195512. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
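
    A simplified sketch of the correction step the abstract outlines: pick the pre-computed scatter map whose breast diameter is closest to the measured one, rescale it by the measured-to-simulated total projection intensity, and subtract it from the measured projection. The scaling rule, data structures, and names are illustrative assumptions; the published method's deformation of the library entry is more involved.

      import numpy as np

      def correct_projection(measured_projection, library, measured_diameter_cm):
          # library: dict mapping breast diameter (cm) -> {"scatter": 2-D scatter map,
          # "total_intensity": total intensity of the matching simulated projection}.
          key = min(library, key=lambda d: abs(d - measured_diameter_cm))
          entry = library[key]
          # Crude stand-in for the paper's deformation step: rescale the library scatter
          # map by the measured-to-simulated total projection intensity ratio.
          scale = measured_projection.sum() / entry["total_intensity"]
          corrected = measured_projection - scale * entry["scatter"]
          return np.clip(corrected, 0.0, None)   # keep the corrected primary non-negative

      # Illustrative use with synthetic arrays.
      rng = np.random.default_rng(3)
      library = {d: {"scatter": np.full((64, 64), 0.1 * d), "total_intensity": 64 * 64 * d}
                 for d in (8.0, 10.0, 12.0, 14.0)}
      projection = rng.random((64, 64)) + 1.0
      print(correct_projection(projection, library, measured_diameter_cm=11.2).mean())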

  7. Separating volcanic deformation and atmospheric signals at Mount St. Helens using Persistent Scatterer InSAR

    NASA Astrophysics Data System (ADS)

    Welch, Mark D.; Schmidt, David A.

    2017-09-01

    Over the past two decades, GPS and leveling surveys have recorded cycles of inflation and deflation associated with dome building eruptions at Mount St. Helens. Due to spatial and temporal limitations of the data, it remains unknown whether any deformation occurred prior to the most recent eruption of 2004, information which could help anticipate future eruptions. Interferometric Synthetic Aperture Radar (InSAR), which boasts fine spatial resolution over large areas, has the potential to resolve pre-eruptive deformation that may have occurred, but eluded detection by campaign GPS surveys because it was localized to the edifice or crater. Traditional InSAR methods are challenging to apply in the Cascades volcanic arc because of a combination of environmental factors, and past attempts to observe deformation at Mount St. Helens were unable to make reliable observations in the crater or on much of the edifice. In this study, Persistent Scatterer InSAR, known to mitigate issues of decorrelation caused by environmental factors, is applied to four SAR data sets in an attempt to resolve localized sources of deformation on the volcano between 1995 and 2010. Many interferograms are strongly influenced by phase delay from atmospheric water vapor and require correction, evidenced by a correlation between phase and topography. To assess the bias imposed by the atmosphere, we perform sensitivity tests on a suite of atmospheric correction techniques, including several that rely on the correlation of phase delay to elevation, and explore approaches that directly estimate phase delay using the ERA-Interim and NARR climate reanalysis data sets. We find that different correction methods produce velocities on the edifice of Mount St. Helens that differ by up to 1 cm/yr due to variability in how atmospheric artifacts are treated in individual interferograms. Additionally, simple phase-based techniques run the risk of minimizing any surface deformation signals that may themselves be correlated with elevation. The atmospherically corrected PS InSAR results for data sets overlapping in time are inconsistent with one another, and do not provide conclusive evidence for any pre-eruptive deformation at a broad scale or localized to the crater or edifice leading up to the 2004 eruption. However, we cannot rule out the possibility of deformation less than 1 cm/yr, or discern whether deformation rates increased in the months preceding the eruption. The results do significantly improve the spatial density of observations and our ability to resolve or rule out models for a potential deformation source for the pre-eruptive period.
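
    One of the simple phase-based corrections mentioned above can be sketched as a linear fit of interferometric phase against elevation over pixels assumed free of deformation, with the fitted ramp removed everywhere. This is a generic, illustrative implementation of the elevation-correlation idea, not the specific estimators compared in the study; the synthetic numbers are placeholders.

      import numpy as np

      def remove_phase_elevation_trend(phase, elevation, stable_mask):
          # Fit phase = a*elevation + b over presumed non-deforming pixels, then
          # subtract the fitted tropospheric ramp from the whole interferogram.
          z = elevation[stable_mask].ravel()
          p = phase[stable_mask].ravel()
          a, b = np.polyfit(z, p, deg=1)
          return phase - (a * elevation + b), a

      # Illustrative synthetic interferogram: topography-correlated delay plus noise.
      rng = np.random.default_rng(4)
      elevation = rng.uniform(1000.0, 2500.0, size=(200, 200))     # meters
      true_slope = 2.0e-3                                          # phase (rad) per meter
      phase = true_slope * elevation + rng.normal(0.0, 0.3, elevation.shape)
      stable = np.ones_like(phase, dtype=bool)                     # assume all pixels stable here

      corrected, slope = remove_phase_elevation_trend(phase, elevation, stable)
      print(f"estimated slope = {slope:.2e} rad/m, residual std = {corrected.std():.2f} rad")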

  8. Computational adaptive optics for broadband optical interferometric tomography of biological tissue.

    PubMed

    Adie, Steven G; Graf, Benedikt W; Ahmad, Adeel; Carney, P Scott; Boppart, Stephen A

    2012-05-08

    Aberrations in optical microscopy reduce image resolution and contrast, and can limit imaging depth when focusing into biological samples. Static correction of aberrations may be achieved through appropriate lens design, but this approach does not offer the flexibility of simultaneously correcting aberrations for all imaging depths, nor the adaptability to correct for sample-specific aberrations for high-quality tomographic optical imaging. Incorporation of adaptive optics (AO) methods has demonstrated considerable improvement in optical image contrast and resolution in noninterferometric microscopy techniques, as well as in optical coherence tomography. Here we present a method to correct aberrations in a tomogram rather than the beam of a broadband optical interferometry system. Based on Fourier optics principles, we correct aberrations of a virtual pupil using Zernike polynomials. When used in conjunction with the computed imaging method interferometric synthetic aperture microscopy, this computational AO enables object reconstruction (within the single scattering limit) with ideal focal-plane resolution at all depths. Tomographic reconstructions of tissue phantoms containing subresolution titanium-dioxide particles and of ex vivo rat lung tissue demonstrate aberration correction in datasets acquired with a highly astigmatic illumination beam. These results also demonstrate that imaging with an aberrated astigmatic beam provides the advantage of a more uniform depth-dependent signal compared to imaging with a standard Gaussian beam. With further work, computational AO could enable the replacement of complicated and expensive optical hardware components with algorithms implemented on a standard desktop computer, making high-resolution 3D interferometric tomography accessible to a wider group of users and nonspecialists.

  9. Detecting Forward-Scattered Radio Signals from Atmospheric Meteors Using Low-Cost Software Defined Radio

    ERIC Educational Resources Information Center

    Snjegota, Ana; Rattenbury, Nicholas James

    2017-01-01

    The forward scattering of radio signals from atmospheric meteors is a known technique used to detect meteor trails. This article outlines the project that used the forward-scattering technique to observe the 2015 August, September, and October meteor showers, as well as sporadic meteors, in the Southern Hemisphere. This project can easily be…

  10. Raman scattering in the atmospheres of the major planets

    NASA Technical Reports Server (NTRS)

    Cochran, W. D.; Trafton, L. M.

    1978-01-01

    A technique is developed to calculate the detailed effects of Raman scattering in an inhomogeneous anisotropically scattering atmosphere. The technique is applied to evaluations of Raman scattering by H2 in the atmosphere of the major planets. It is noted that Raman scattering produces an insufficient decrease in the blue and ultraviolet regions to explain the albedos of all planets investigated. For all major planets, the filling-in of solar line cores and the generation of the Raman-shifted ghosts of the Fraunhofer spectrum are observed. With regard to Uranus and Neptune, Raman scattering is seen to exert a major influence on the formation and profile of strong red and near infrared CH4 bands, and Raman scattering by H2 explains the residual intensity in the cores of these bands. Raman scattering by H2 must also be taken into account in the scattering of photons into the cores of saturated absorption bands.

  11. NADH-fluorescence scattering correction for absolute concentration determination in a liquid tissue phantom using a novel multispectral magnetic-resonance-imaging-compatible needle probe

    NASA Astrophysics Data System (ADS)

    Braun, Frank; Schalk, Robert; Heintz, Annabell; Feike, Patrick; Firmowski, Sebastian; Beuermann, Thomas; Methner, Frank-Jürgen; Kränzlin, Bettina; Gretz, Norbert; Rädle, Matthias

    2017-07-01

    In this report, a quantitative nicotinamide adenine dinucleotide hydrate (NADH) fluorescence measurement algorithm in a liquid tissue phantom using a fiber-optic needle probe is presented. To determine the absolute concentrations of NADH in this phantom, the fluorescence emission spectra at 465 nm were corrected using diffuse reflectance spectroscopy between 600 nm and 940 nm. The patented autoclavable Nitinol needle probe enables the acquisition of multispectral backscattering measurements of ultraviolet, visible, near-infrared and fluorescence spectra. As a phantom, a suspension of calcium carbonate (Calcilit) and water with physiological NADH concentrations between 0 mmol l-1 and 2.0 mmol l-1 were used to mimic human tissue. The light scattering characteristics were adjusted to match the backscattering attributes of human skin by modifying the concentration of Calcilit. To correct the scattering effects caused by the matrices of the samples, an algorithm based on the backscattered remission spectrum was employed to compensate the influence of multiscattering on the optical pathway through the dispersed phase. The monitored backscattered visible light was used to correct the fluorescence spectra and thereby to determine the true NADH concentrations at unknown Calcilit concentrations. Despite the simplicity of the presented algorithm, the root-mean-square error of prediction (RMSEP) was 0.093 mmol l-1.

  12. [Study of near infrared spectral preprocessing and wavelength selection methods for endometrial cancer tissue].

    PubMed

    Zhao, Li-Ting; Xiang, Yu-Hong; Dai, Yin-Mei; Zhang, Zhuo-Yong

    2010-04-01

    Near infrared spectroscopy was applied to measure tissue slices of endometrial tissues and collect their spectra. A total of 154 spectra were obtained from 154 samples. The numbers of normal, hyperplasia, and malignant samples were 36, 60, and 58, respectively. Original near infrared spectra are composed of many variables and contain interference from instrument errors and physical effects such as particle size and light scatter. To reduce these influences, the original spectra should be treated with different spectral preprocessing methods to compress the variables and extract useful information, so spectral preprocessing and wavelength selection play an important role in the near infrared spectroscopy technique. In the present paper the raw spectra were processed using various preprocessing methods including first derivative, multiplicative scatter correction, the Savitzky-Golay first-derivative algorithm, standard normal variate, smoothing, and moving-window median. The standard deviation was used to select the optimal spectral region of 4 000-6 000 cm(-1). Then principal component analysis was used for classification. The principal component analysis results showed that the three types of samples could be discriminated completely, with an accuracy of almost 100%. This study demonstrated that near infrared spectroscopy technology combined with chemometric methods could be a fast, efficient, and novel means to diagnose cancer. The proposed methods would be a promising and significant diagnosis technique for early-stage cancer.
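
    A compact sketch of two of the scatter-oriented preprocessing steps named above (multiplicative scatter correction and standard normal variate) followed by PCA. The spectra are synthetic, and all settings are illustrative rather than those used in the study.

      import numpy as np
      from sklearn.decomposition import PCA

      def multiplicative_scatter_correction(spectra):
          # Regress each spectrum against the mean spectrum and remove the fitted
          # offset/slope, which absorbs additive and multiplicative scatter effects.
          reference = spectra.mean(axis=0)
          corrected = np.empty_like(spectra)
          for i, s in enumerate(spectra):
              slope, intercept = np.polyfit(reference, s, deg=1)
              corrected[i] = (s - intercept) / slope
          return corrected

      def standard_normal_variate(spectra):
          # Center and scale each spectrum individually (an alternative normalization).
          return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

      # Synthetic NIR-like spectra: a shared band shape with random scatter offsets and slopes.
      rng = np.random.default_rng(5)
      wavenumbers = np.linspace(4000, 6000, 300)
      band = np.exp(-((wavenumbers - 5000) / 150.0) ** 2)
      spectra = (rng.uniform(0.8, 1.2, (154, 1)) * band
                 + rng.uniform(-0.1, 0.1, (154, 1))
                 + rng.normal(0.0, 0.005, (154, 300)))

      scores_msc = PCA(n_components=3).fit_transform(multiplicative_scatter_correction(spectra))
      scores_snv = PCA(n_components=3).fit_transform(standard_normal_variate(spectra))
      print(scores_msc.shape, scores_snv.shape)   # (154, 3) score sets for classification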

  13. Neutron-deuteron analyzing power data at En=22.5 MeV

    NASA Astrophysics Data System (ADS)

    Weisel, G. J.; Tornow, W.; Crowell, A. S.; Esterline, J. H.; Hale, G. M.; Howell, C. R.; O'Malley, P. D.; Tompkins, J. R.; Witała, H.

    2014-05-01

    We present measurements of n-d analyzing power, Ay(θ), at En=22.5 MeV. The experiment uses a shielded neutron source which produced polarized neutrons via the 2H(d⃗,n⃗)3He reaction. It also uses a deuterated liquid-scintillator center detector and six pairs of liquid-scintillator neutron side detectors. Elastic neutron scattering events are identified by using time-of-flight techniques and by setting a window in the center detector pulse-height spectrum. The beam polarization is monitored by using a high-pressure helium gas cell and an additional pair of liquid-scintillator side detectors. The n-d Ay(θ) data were corrected for finite-geometry and multiple-scattering effects using a Monte Carlo simulation of the experiment. The 22.5-MeV data demonstrate that the three-nucleon analyzing power puzzle also exists at this energy. They show a significant discrepancy with predictions of high-precision nucleon-nucleon potentials alone or combined with Tucson-Melbourne or Urbana IX three-nucleon forces, as well as currently available effective-field theory based potentials of next-to-next-to-next-to-leading order.

  14. Cytoskeletal changes in oocytes and early embryos during in vitro fertilization process in mice.

    PubMed

    Gumus, E; Bulut, H E; Kaloglu, C

    2010-02-01

    The cytoskeleton plays crucial roles in the development and fertilization of germ cells and in early embryo development. The growth, maturation and fertilization of oocytes require active movement and correct localization of cellular organelles, which is accomplished by the re-organization of microtubules and actin filaments. Therefore, the aim of the present study was to determine the changes in the cytoskeleton during the in vitro fertilization process using appropriate immunofluorescence techniques. While the chromatin content was found to be scattered throughout the nucleus during the oocyte maturation period, it was seen only around the nucleolus following the completion of maturation. During oocyte maturation, microtubules were regularly distributed throughout the ooplasm and were later localized in the subcortical region of the oocytes. Similarly, microfilaments were scattered throughout the ooplasm during the oocyte maturation period, whereas they were seen in the subcortical region around the polar body and above the meiotic spindle throughout the late developmental stages. In conclusion, the changes that occurred in microtubules and microfilaments might be closely related to the re-organization of the genetic material during oocyte maturation and early embryo development.

  15. [Research on the measurement range of particle size with total light scattering method in vis-IR region].

    PubMed

    Sun, Xiao-gang; Tang, Hong; Dai, Jing-min

    2008-12-01

    The problem of determining the particle size range in the visible-infrared region was studied using the independent model algorithm in the total scattering technique. By analyzing and comparing the accuracy of the inversion results for different R-R distributions, the measurement range of particle size was determined. Meanwhile, the corrected extinction coefficient was used instead of the original extinction coefficient, which allows the measurement range of particle size to be determined with higher accuracy. Simulation experiments illustrate that the particle size distribution can be retrieved very well in the range from 0.05 to 18 μm at a relative refractive index m=1.235 in the visible-infrared spectral region, and that the measurement range of particle size will vary with the wavelength range and relative refractive index. It is feasible to use the constrained least squares inversion method in the independent model to overcome the influence of measurement error, and the inversion results are still satisfactory when 1% stochastic noise is added to the light extinction values.
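
    A minimal illustration of the constrained least-squares inversion mentioned above: given a kernel matrix of (possibly corrected) extinction coefficients over several wavelengths, recover a non-negative particle-size distribution from noisy extinction measurements. The kernel here is a synthetic placeholder rather than a Mie-theory calculation, and the sizes, noise level, and distribution shape are illustrative.

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(6)

      n_wavelengths, n_size_bins = 20, 15
      # Placeholder kernel: row i ~ extinction efficiency at wavelength i for each size bin.
      kernel = np.abs(rng.normal(1.0, 0.3, (n_wavelengths, n_size_bins)))

      # "True" size distribution (single R-R-like mode) and simulated measurements.
      sizes = np.linspace(0.05, 18.0, n_size_bins)          # micrometers
      true_dist = sizes ** 2 * np.exp(-((sizes / 5.0) ** 1.5))
      true_dist /= true_dist.sum()
      extinction = kernel @ true_dist
      extinction_noisy = extinction * (1.0 + rng.normal(0.0, 0.01, n_wavelengths))  # 1% noise

      # Non-negative least squares enforces a physical (non-negative) distribution.
      retrieved, residual = nnls(kernel, extinction_noisy)
      print("residual norm:", residual)
      print("retrieved distribution (normalized):", np.round(retrieved / retrieved.sum(), 3))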

  16. Measurement of event shape variables in deep inelastic e p scattering

    NASA Astrophysics Data System (ADS)

    Adloff, C.; Aid, S.; Anderson, M.; Andreev, V.; Andrieu, B.; Arkadov, V.; Arndt, C.; Ayyaz, I.; Babaev, A.; Bähr, J.; Bán, J.; Baranov, P.; Barrelet, E.; Barschke, R.; Bartel, W.; Bassler, U.; Beck, H. P.; Beck, M.; Behrend, H.-J.; Belousov, A.; Berger, Ch.; Bernardi, G.; Bertrand-Coremans, G.; Beyer, R.; Biddulph, P.; Bizot, J. C.; Borras, K.; Botterweck, F.; Boudry, V.; Bourov, S.; Braemer, A.; Braunschweig, W.; Brisson, V.; Brown, D. P.; Brückner, W.; Bruel, P.; Bruncko, D.; Brune, C.; Bürger, J.; Büsser, F. W.; Buniatian, A.; Burke, S.; Buschhorn, G.; Calvet, D.; Campbell, A. J.; Carli, T.; Charlet, M.; Clarke, D.; Clerbaux, B.; Cocks, S.; Contreras, J. G.; Cormack, C.; Coughlan, J. A.; Cousinou, M.-C.; Cox, B. E.; Cozzika, G.; Cussans, D. G.; Cvach, J.; Dagoret, S.; Dainton, J. B.; Dau, W. D.; Daum, K.; David, M.; de Roeck, A.; de Wolf, E. A.; Delcourt, B.; Dirkmann, M.; Dixon, P.; Dlugosz, W.; Dollfus, C.; Donovan, K. T.; Dowell, J. D.; Dreis, H. B.; Droutskoi, A.; Ebert, J.; Ebert, T. R.; Eckerlin, G.; Efremenko, V.; Egli, S.; Eichler, R.; Eisele, F.; Eisenhandler, E.; Elsen, E.; Erdmann, M.; Fahr, A. B.; Favart, L.; Fedotov, A.; Felst, R.; Feltesse, J.; Ferencei, J.; Ferrarotto, F.; Flamm, K.; Fleischer, M.; Flieser, M.; Flügge, G.; Fomenko, A.; Formánek, J.; Foster, J. M.; Franke, G.; Gabathuler, E.; Gabathuler, K.; Gaede, F.; Garvey, J.; Gayler, J.; Gebauer, M.; Gerhards, R.; Glazov, A.; Goerlich, L.; Gogitidze, N.; Goldberg, M.; Gonzalez-Pineiro, B.; Gorelov, I.; Grab, C.; Grässler, H.; Greenshaw, T.; Griffiths, R. K.; Grindhammer, G.; Gruber, A.; Gruber, C.; Hadig, T.; Haidt, D.; Hajduk, L.; Haller, T.; Hampel, M.; Haynes, W. J.; Heinemann, B.; Heinzelmann, G.; Henderson, R. C. W.; Hengstmann, S.; Henschel, H.; Herynek, I.; Hess, M. F.; Hewitt, K.; Hiller, K. H.; Hilton, C. D.; Hladký, J.; Höppner, M.; Hoffmann, D.; Holtom, T.; Horisberger, R.; Hudgson, V. L.; Hütte, M.; Ibbotson, M.; İşsever, Ç.; Itterbeck, H.; Jacquet, M.; Jaffre, M.; Janoth, J.; Jansen, D. M.; Jönsson, L.; Johnson, D. P.; Jung, H.; Kalmus, P. I. P.; Kander, M.; Kant, D.; Kathage, U.; Katzy, J.; Kaufmann, H. H.; Kaufmann, O.; Kausch, M.; Kazarian, S.; Kenyon, I. R.; Kermiche, S.; Keuker, C.; Kiesling, C.; Klein, M.; Kleinwort, C.; Knies, G.; Köhler, T.; Köhne, J. H.; Kolanoski, H.; Kolya, S. D.; Korbel, V.; Kostka, P.; Kotelnikov, S. K.; Krämerkämper, T.; Krasny, M. W.; Krehbiel, H.; Krücker, D.; Küpper, A.; Küster, H.; Kuhlen, M.; Kurča, T.; Laforge, B.; Landon, M. P. J.; Lange, W.; Langenegger, U.; Lebedev, A.; Lehner, F.; Lemaitre, V.; Levonian, S.; Lindstroem, M.; Linsel, F.; Lipinski, J.; List, B.; Lobo, G.; Lopez, G. C.; Lubimov, V.; Lüke, D.; Lytkin, L.; Magnussen, N.; Mahlke-Krüger, H.; Malinovski, E.; Maraček, R.; Marage, P.; Marks, J.; Marshall, R.; Martens, J.; Martin, G.; Martin, R.; Martyn, H.-U.; Martyniak, J.; Mavroidis, T.; Maxfield, S. J.; McMahon, S. J.; Mehta, A.; Meier, K.; Merkel, P.; Metlica, F.; Meyer, A.; Meyer, A.; Meyer, H.; Meyer, J.; Meyer, P.-O.; Migliori, A.; Mikocki, S.; Milstead, D.; Moeck, J.; Moreau, F.; Morris, J. V.; Mroczko, E.; Müller, D.; Müller, K.; Murín, P.; Nagovizin, V.; Nahnhauer, R.; Naroska, B.; Naumann, Th.; Négri, I.; Newman, P. R.; Newton, D.; Nguyen, H. K.; Nicholls, T. C.; Niebergall, F.; Niebuhr, C.; Niedzballa, Ch.; Niggli, H.; Nowak, G.; Nunnemann, T.; Oberlack, H.; Olsson, J. E.; Ozerov, D.; Palmen, P.; Panaro, E.; Panitch, A.; Pascaud, C.; Passaggio, S.; Patel, G. D.; Pawletta, H.; Peppel, E.; Perez, E.; Phillips, J. 
P.; Pieuchot, A.; Pitzl, D.; Pöschl, R.; Pope, G.; Povh, B.; Rabbertz, K.; Reimer, P.; Rick, H.; Reiss, S.; Rizvi, E.; Robmann, P.; Roosen, R.; Rosenbauer, K.; Rostovtsev, A.; Rouse, F.; Royon, C.; Rüter, K.; Rusakov, S.; Rybicki, K.; Sankey, D. P. C.; Schacht, P.; Schiek, S.; Schleif, S.; Schleper, P.; von Schlippe, W.; Schmidt, D.; Schmidt, G.; Schoeffel, L.; Schöning, A.; Schröder, V.; Schuhmann, E.; Schwab, B.; Sefkow, F.; Semenov, A.; Shekelyan, V.; Sheviakov, I.; Shtarkov, L. N.; Siegmon, G.; Siewert, U.; Sirois, Y.; Skillicorn, I. O.; Sloan, T.; Smirnov, P.; Smith, M.; Solochenko, V.; Soloviev, Y.; Specka, A.; Spiekermann, J.; Spielman, S.; Spitzer, H.; Squinabol, F.; Steffen, P.; Steinberg, R.; Steinhart, J.; Stella, B.; Stellberger, A.; Stiewe, J.; Stößlein, U.; Stolze, K.; Straumann, U.; Struczinski, W.; Sutton, J. P.; Tapprogge, S.; Taševský, M.; Tchernyshov, V.; Tchetchelnitski, S.; Theissen, J.; Thompson, G.; Thompson, P. D.; Tobien, N.; Todenhagen, R.; Truöl, P.; Tsipolitis, G.; Turnau, J.; Tzamariudaki, E.; Uelkes, P.; Usik, A.; Valkár, S.; Valkárová, A.; Vallée, C.; van Esch, P.; van Mechelen, P.; Vandenplas, D.; Vazdik, Y.; Verrecchia, P.; Villet, G.; Wacker, K.; Wagener, A.; Wagener, M.; Wallny, R.; Walter, T.; Waugh, B.; Weber, G.; Weber, M.; Wegener, D.; Wegner, A.; Wengler, T.; Werner, M.; West, L. R.; Wiesand, S.; Wilksen, T.; Willard, S.; Winde, M.; Winter, G.-G.; Wittek, C.; Wobisch, M.; Wollatz, H.; Wünsch, E.; ŽáČek, J.; Zarbock, D.; Zhang, Z.; Zhokin, A.; Zini, P.; Zomer, F.; Zsembery, J.; Zurnedden, M.

    1997-02-01

    Deep inelastic e p scattering data, taken with the H1 detector at HERA, are used to study the event shape variables thrust, jet broadening and jet mass in the current hemisphere of the Breit frame over a large range of momentum transfers Q between 7 GeV and 100 GeV. The data are compared with results from e+e- experiments. Using second order QCD calculations and an approach to relate hadronisation effects to power corrections an analysis of the Q dependences of the means of the event shape parameters is presented, from which both the power corrections and the strong coupling constant are determined without any assumption on fragmentation models. The power corrections of all event shape variables investigated follow a 1/Q behaviour and can be described by a common parameter α0.

  17. Role of oceanic air bubbles in atmospheric correction of ocean color imagery.

    PubMed

    Yan, Banghua; Chen, Bingquan; Stamnes, Knut

    2002-04-20

    Ocean color is the radiance that emanates from the ocean because of scattering by chlorophyll pigments and particles of organic and inorganic origin. Air bubbles in the ocean also scatter light and thus contribute to the water-leaving radiance. This additional water-leaving radiance that is due to oceanic air bubbles could violate the black pixel assumption at near-infrared wavelengths and be attributed to chlorophyll in the visible. Hence, the accuracy of the atmospheric correction required for the retrieval of ocean color from satellite measurements is impaired. A comprehensive radiative transfer code for the coupled atmosphere--ocean system is employed to assess the effect of oceanic air bubbles on atmospheric correction of ocean color imagery. This effect is found to depend on the wavelength-dependent optical properties of oceanic air bubbles as well as atmospheric aerosols.

  18. Role of oceanic air bubbles in atmospheric correction of ocean color imagery

    NASA Astrophysics Data System (ADS)

    Yan, Banghua; Chen, Bingquan; Stamnes, Knut

    2002-04-01

    Ocean color is the radiance that emanates from the ocean because of scattering by chlorophyll pigments and particles of organic and inorganic origin. Air bubbles in the ocean also scatter light and thus contribute to the water-leaving radiance. This additional water-leaving radiance that is due to oceanic air bubbles could violate the black pixel assumption at near-infrared wavelengths and be attributed to chlorophyll in the visible. Hence, the accuracy of the atmospheric correction required for the retrieval of ocean color from satellite measurements is impaired. A comprehensive radiative transfer code for the coupled atmosphere-ocean system is employed to assess the effect of oceanic air bubbles on atmospheric correction of ocean color imagery. This effect is found to depend on the wavelength-dependent optical properties of oceanic air bubbles as well as atmospheric aerosols.

  19. Aerodynamic measurement techniques. [laser based diagnostic techniques

    NASA Technical Reports Server (NTRS)

    Hunter, W. W., Jr.

    1976-01-01

    Laser characteristics of intensity, monochromaticity, spatial coherence, and temporal coherence were exploited to advance laser-based diagnostic techniques for aerodynamics-related research. Two broad categories of visualization and optical measurements were considered, and three techniques received significant attention: holography, laser velocimetry, and Raman scattering. Examples of quantitative laser velocimeter and Raman scattering measurements of velocity, temperature, and density indicated the potential of these nonintrusive techniques.

  20. Spaceborne lidar for cloud monitoring

    NASA Astrophysics Data System (ADS)

    Werner, Christian; Krichbaumer, W.; Matvienko, Gennadii G.

    1994-12-01

    Results of laser cloud top measurements taken from space in 1982 (called PANTHER) are presented. Three sequences of land, water, and cloud data are selected. A comparison with airborne lidar data shows similarities. If the single-scattering lidar equation is applied to these spaceborne lidar measurements without correcting for multiple scattering, the data can be misinterpreted.

  1. Towards Improved Radiative Transfer Simulations of Hyperspectral Measurements for Cloudy Atmospheres

    NASA Astrophysics Data System (ADS)

    Natraj, V.; Li, C.; Aumann, H. H.; Yung, Y. L.

    2016-12-01

    Usage of hyperspectral measurements in the infrared for weather forecasting requires radiative transfer (RT) models that can accurately compute radiances given the atmospheric state. At the same time, the RT models must be fast enough to meet operational processing requirements. Until recently, this has proven to be a very hard challenge. In the last decade, however, significant progress has been made in this regard, due to increases in computer speed and to improved and optimized RT models. This presentation will introduce a new technique, based on principal component analysis (PCA) of the inherent optical properties (such as profiles of trace gas absorption and single scattering albedo), to perform fast and accurate hyperspectral RT calculations in clear or cloudy atmospheres. PCA is a technique to compress data while capturing most of the variability in the data. By performing PCA on the optical properties, we limit the number of computationally expensive multiple scattering RT calculations to the PCA-reduced data set, and develop a series of PC-based correction factors to obtain the hyperspectral radiances. This technique has been shown to deliver accuracies of 0.1% or better with respect to brute-force, line-by-line (LBL) models such as LBLRTM and DISORT, but is orders of magnitude faster than the LBL models. We will compare the performance of this method against other models on a large atmospheric state data set (7377 profiles) that includes a wide range of thermodynamic and cloud profiles, along with viewing geometry and surface emissivity information.
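
    The PC-based acceleration outlined above can be sketched compactly. The following is a minimal illustration, assuming hypothetical expensive_rt (full multiple-scattering) and fast_rt (approximate) forward models and only a first-order correction in the PC scores; it is a sketch of the general idea, not the authors' implementation.

    ```python
    # Minimal sketch of PCA-accelerated radiative transfer (assumed interfaces;
    # not the authors' code).  expensive_rt: full multiple-scattering model,
    # fast_rt: cheap approximate model; both map a property vector to a radiance.
    import numpy as np

    def pca_fast_rt(optical_props, expensive_rt, fast_rt, n_pc=4):
        """optical_props: (n_wavelengths, n_properties) inherent optical properties
        (e.g. layer absorption, single-scattering albedo) at every spectral point."""
        mean = optical_props.mean(axis=0)
        u, s, vt = np.linalg.svd(optical_props - mean, full_matrices=False)
        scores = u[:, :n_pc] * s[:n_pc]     # per-wavelength PC scores
        eofs = vt[:n_pc]                    # orthonormal principal components

        # Expensive multiple-scattering RT only at the mean state and +/- each PC
        corr = lambda p: np.log(expensive_rt(p) / fast_rt(p))
        c0 = corr(mean)
        dc = np.array([(corr(mean + e) - corr(mean - e)) / 2.0 for e in eofs])

        # First-order mapping of the log-correction back to every spectral point
        log_corr = c0 + scores @ dc
        fast_all = np.array([fast_rt(p) for p in optical_props])
        return fast_all * np.exp(log_corr)
    ```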

  2. Subleading Regge limit from a soft anomalous dimension

    NASA Astrophysics Data System (ADS)

    Brüser, Robin; Caron-Huot, Simon; Henn, Johannes M.

    2018-04-01

    Wilson lines capture important features of scattering amplitudes, for example soft effects relevant for infrared divergences, and the Regge limit. Beyond the leading power approximation, corrections to the eikonal picture have to be taken into account. In this paper, we study such corrections in a model of massive scattering amplitudes in N=4 super Yang-Mills, in the planar limit, where the mass is generated through a Higgs mechanism. Using known three-loop analytic expressions for the scattering amplitude, we find that the first power suppressed term has a very simple form, equal to a single power law. We propose that its exponent is governed by the anomalous dimension of a Wilson loop with a scalar inserted at the cusp, and we provide perturbative evidence for this proposal. We also analyze other limits of the amplitude and conjecture an exact formula for a total cross-section at high energies.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeylikovich, I.; Xu, M., E-mail: mxu@fairfield.edu

    The phase of multiply scattered light has recently attracted considerable interest. Coherent backscattering is a striking phenomenon of multiple scattered light in which the coherence of light survives multiple scattering in a random medium and is observable in the direction space as an enhancement of the intensity of backscattered light within a cone around the retroreflection direction. Reciprocity also leads to enhancement of backscattering light in the spatial space. The random medium behaves as a reciprocity mirror which robustly converts a diverging incident beam into a converging backscattering one focusing at a conjugate spot in space. Here we first analyze theoretically this coherent backscattering mirror (CBM) phenomenon and then demonstrate the capability of CBM compensating and correcting both static and dynamic phase distortions occurring along the optical path. CBM may offer novel approaches for high speed dynamic phase corrections in optical systems and find applications in sensing and navigation.

  4. Modified Monte Carlo method for study of electron transport in degenerate electron gas in the presence of electron-electron interactions, application to graphene

    NASA Astrophysics Data System (ADS)

    Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek

    2017-07-01

    Standard computational methods used to incorporate the Pauli exclusion principle into Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron-electron (e-e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study transport properties of degenerate electrons in graphene with e-e interactions. This required adapting the treatment of e-e scattering to the case of a linear band dispersion relation. Hence, this part of the simulation algorithm is described in detail.
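
    A common way to enforce the Pauli exclusion principle in ensemble MC transport codes is a rejection step in which a proposed final state k' is accepted with probability 1 - f(k'). The sketch below illustrates that generic rejection step with hypothetical helper functions; it is not the authors' modified algorithm.

    ```python
    # Minimal sketch of a Pauli-blocking rejection step in an ensemble Monte Carlo
    # transport code: a proposed final state k' is accepted with probability
    # 1 - f(k').  Helper names are hypothetical; this is not the authors' scheme.
    import numpy as np

    rng = np.random.default_rng(0)

    def pauli_blocked_scatter(k_initial, propose_final_state, f_on_grid, k_to_cell):
        """f_on_grid: ensemble estimate of the distribution function on a k-space grid;
        k_to_cell: maps a wave vector to its grid-cell index."""
        k_final = propose_final_state(k_initial)
        occupation = min(f_on_grid[k_to_cell(k_final)], 1.0)
        if rng.random() < 1.0 - occupation:
            return k_final        # scattering accepted
        return k_initial          # rejected: treated as self-scattering, state unchanged
    ```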

  5. Dynamic coherent backscattering mirror

    NASA Astrophysics Data System (ADS)

    Zeylikovich, I.; Xu, M.

    2016-02-01

    The phase of multiply scattered light has recently attracted considerable interest. Coherent backscattering is a striking phenomenon of multiple scattered light in which the coherence of light survives multiple scattering in a random medium and is observable in the direction space as an enhancement of the intensity of backscattered light within a cone around the retroreflection direction. Reciprocity also leads to enhancement of backscattering light in the spatial space. The random medium behaves as a reciprocity mirror which robustly converts a diverging incident beam into a converging backscattering one focusing at a conjugate spot in space. Here we first analyze theoretically this coherent backscattering mirror (CBM) phenomenon and then demonstrate the capability of CBM compensating and correcting both static and dynamic phase distortions occurring along the optical path. CBM may offer novel approaches for high speed dynamic phase corrections in optical systems and find applications in sensing and navigation.

  6. Extracting the σ-term from low-energy pion-nucleon scattering

    NASA Astrophysics Data System (ADS)

    Ruiz de Elvira, Jacobo; Hoferichter, Martin; Kubis, Bastian; Meißner, Ulf-G.

    2018-02-01

    We present an extraction of the pion-nucleon (πN) scattering lengths from low-energy πN scattering, by fitting a representation based on Roy-Steiner equations to the low-energy data base. We show that the resulting values confirm the scattering-length determination from pionic atoms, and discuss the stability of the fit results regarding electromagnetic corrections and experimental normalization uncertainties in detail. Our results provide further evidence for a large πN σ-term, σ_πN = 58(5) MeV, in agreement with, albeit less precise than, the determination from pionic atoms.

  7. The impact of vibrational Raman scattering of air on DOAS measurements of atmospheric trace gases

    NASA Astrophysics Data System (ADS)

    Lampel, J.; Frieß, U.; Platt, U.

    2015-09-01

    In remote sensing applications, such as differential optical absorption spectroscopy (DOAS), atmospheric scattering processes need to be considered. After inelastic scattering on N2 and O2 molecules, the scattered photons occur as additional intensity at a different wavelength, effectively leading to "filling-in" of both solar Fraunhofer lines and absorptions of atmospheric constituents, if the inelastic scattering happens after the absorption. Measured spectra in passive DOAS applications are typically corrected for rotational Raman scattering (RRS), also called the Ring effect, which represents the main contribution to inelastic scattering. Inelastic scattering can also occur in liquid water, and its influence on DOAS measurements has been observed over clear ocean water. In contrast, vibrational Raman scattering (VRS) of N2 and O2 has often been thought to be negligible, but it also contributes. Consequences of VRS are red-shifted Fraunhofer structures in scattered light spectra and filling-in of Fraunhofer lines, in addition to RRS. At 393 nm, the spectral shift is 25 and 40 nm for VRS of O2 and N2, respectively. We describe how to calculate VRS correction spectra according to the Ring spectrum. We use the VRS correction spectra in the spectral range of 420-440 nm to determine the relative magnitude of the cross-sections of VRS of O2 and N2 and RRS of air. The effect of VRS is shown for the first time in spectral evaluations of Multi-Axis DOAS data from the SOPRAN M91 campaign and the MAD-CAT MAX-DOAS intercomparison campaign. The measurements yield, in agreement with calculated scattering cross-sections, that the observed VRS(N2) cross-section at 393 nm amounts to 2.3 ± 0.4 % of the RRS cross-section at 433 nm under tropospheric conditions. The contribution of VRS(O2) is also found to be in agreement with calculated scattering cross-sections. It is concluded that this phenomenon has to be included in the spectral evaluation of weak absorbers as it reduces the measurement error significantly and can cause apparent differential optical depths of up to 3 × 10⁻⁴. Its influence on the spectral retrieval of IO, glyoxal, water vapour and NO2 in the blue wavelength range is evaluated for M91. For measurements with a large Ring signal a significant and systematic bias of NO2 dSCDs (differential slant column densities) up to (-3.8 ± 0.4) × 10¹⁴ molec cm⁻² is observed if this effect is not considered. The effect is typically negligible for DOAS fits with an RMS (root mean square) larger than 4 × 10⁻⁴.

  8. Multichannel forward scattering meter for oceanography

    NASA Technical Reports Server (NTRS)

    Mccluney, W. R.

    1974-01-01

    An instrument was designed and built that measures the light scattered at several angles in the forward direction simultaneously. The instrument relies on an optical multiplexing technique for frequency encoding of the different channels suitable for detection by a single photodetector. A Mie theory computer program was used to calculate the theoretical volume scattering function for a suspension of polystyrene latex spheres. The agreement between the theoretical and experimental volume scattering functions is taken as a verification of the calibration technique used.

  9. Assessment of the Subgrid-Scale Models at Low and High Reynolds Numbers

    NASA Technical Reports Server (NTRS)

    Horiuti, K.

    1996-01-01

    Accurate SGS models must be capable of correctly representing the energy transfer between GS and SGS. Recent direct assessment of the energy transfer carried out using direct numerical simulation (DNS) data for wall-bounded flows revealed that the energy exchange is not unidirectional. Although GS kinetic energy is transferred to the SGS (forward scatter, F-scatter) on average, SGS energy is also transferred to the GS. The latter energy exchange (backward scatter, B-scatter) is very significant, i.e., the local energy exchange can be backward nearly as often as forward and the local rate of B-scatter is considerably higher than the net rate of energy dissipation.

  10. Simultaneous identification of optical constants and PSD of spherical particles by multi-wavelength scattering-transmittance measurement

    NASA Astrophysics Data System (ADS)

    Zhang, Jun-You; Qi, Hong; Ren, Ya-Tao; Ruan, Li-Ming

    2018-04-01

    An accurate and stable identification technique is developed to retrieve the optical constants and particle size distributions (PSDs) of a particle system simultaneously from multi-wavelength scattering-transmittance signals by using an improved quantum particle swarm optimization algorithm. Mie theory is used to calculate the directional laser intensity scattered by particles and the spectral collimated transmittance. Sensitivity and objective-function distribution analyses were conducted to evaluate the mathematical properties (i.e. ill-posedness and multimodality) of the inverse problems under three different combinations of optical signals (i.e. the single-wavelength multi-angle light scattering signal; the single-wavelength multi-angle light scattering and spectral transmittance signal; and the multi-wavelength multi-angle light scattering and spectral transmittance signal). It was found that the best global convergence performance is obtained by using the multi-wavelength scattering-transmittance signals. Meanwhile, the present technique has been tested under different levels of Gaussian measurement noise to prove its feasibility in a large solution space. All the results show that the inverse technique using multi-wavelength scattering-transmittance signals is effective and suitable for retrieving the optical complex refractive indices and PSD of a particle system simultaneously.
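
    As a rough illustration of the inverse problem set up above, the sketch below builds a combined scattering-plus-transmittance objective and hands it to a global optimizer. The forward models mie_intensity and transmittance are hypothetical placeholders, and scipy's differential evolution is used only as a stand-in for the authors' improved quantum particle swarm optimizer.

    ```python
    # Rough sketch of the combined objective for the multi-signal inversion.
    # mie_intensity and transmittance are hypothetical forward models, and
    # differential_evolution stands in for the authors' improved QPSO algorithm.
    import numpy as np
    from scipy.optimize import differential_evolution

    def objective(x, wavelengths, angles, meas_scatter, meas_trans,
                  mie_intensity, transmittance):
        n, k, d_mean, sigma = x   # optical constants and (assumed log-normal) PSD parameters
        pred_scatter = mie_intensity(n, k, d_mean, sigma, wavelengths, angles)
        pred_trans = transmittance(n, k, d_mean, sigma, wavelengths)
        return (np.mean((pred_scatter - meas_scatter) ** 2) +
                np.mean((pred_trans - meas_trans) ** 2))

    # Example call (bounds on n, k, mean diameter in um, PSD width):
    # best = differential_evolution(objective,
    #                               bounds=[(1.3, 2.0), (0.0, 0.5), (0.1, 10.0), (0.05, 1.0)],
    #                               args=(wavelengths, angles, meas_scatter, meas_trans,
    #                                     mie_intensity, transmittance))
    ```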

  11. Absorption Filter Based Optical Diagnostics in High Speed Flows

    NASA Technical Reports Server (NTRS)

    Samimy, Mo; Elliott, Gregory; Arnette, Stephen

    1996-01-01

    Two major regimes where laser light scattered by molecules or particles in a flow contains significant information about the flow are Mie scattering and Rayleigh scattering. Mie scattering is used to obtain only velocity information, while Rayleigh scattering can be used to measure both the velocity and the thermodynamic properties of the flow. Absorption-filter-based diagnostic techniques, introduced in 1990-1991, have started a new era in flow visualization, simultaneous velocity and thermodynamic measurements, and planar velocity measurements. Using a filtered planar velocimetry (FPV) technique, we have modified the optically thick iodine filter profile of Miles et al. and used it in the pressure-broadened regime, which accommodates measurements over a wide range of velocity applications. Measuring velocity and thermodynamic properties simultaneously using absorption-filter-based Rayleigh scattering involves not only the measurement of the Doppler shift, but also the spectral profile of the Rayleigh scattering signal. Using multiple observation angles, one component of velocity and the thermodynamic properties in a supersonic jet were measured simultaneously. Presently, the technique is being extended for simultaneous measurement of all three components of velocity and the thermodynamic properties.

  12. Plasma characterization using ultraviolet Thomson scattering from ion-acoustic and electron plasma waves (invited)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Follett, R. K., E-mail: rfollett@lle.rochester.edu; Delettrez, J. A.; Edgell, D. H.

    2016-11-15

    Collective Thomson scattering is a technique for measuring the plasma conditions in laser-plasma experiments. Simultaneous measurements of ion-acoustic and electron plasma-wave spectra were obtained using a 263.25-nm Thomson-scattering probe beam. A fully reflective collection system was used to record light scattered from electron plasma waves at electron densities greater than 10²¹ cm⁻³, which produced scattering peaks near 200 nm. An accurate analysis of the experimental Thomson-scattering spectra required accounting for plasma gradients, instrument sensitivity, optical effects, and background radiation. Practical techniques for including these effects when fitting Thomson-scattering spectra are presented and applied to the measured spectra to show the improvements in plasma characterization.

  13. Neutron scattering for the analysis of biological structures. Brookhaven symposia in biology. Number 27

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schoenborn, B P

    1976-01-01

    Sessions were included on neutron scattering and biological structure analysis, protein crystallography, neutron scattering from oriented systems, solution scattering, preparation of deuterated specimens, inelastic scattering, data analysis, experimental techniques, and instrumentation. Separate entries were made for the individual papers.

  14. Nondestructive prediction of pork freshness parameters using multispectral scattering images

    NASA Astrophysics Data System (ADS)

    Tang, Xiuying; Li, Cuiling; Peng, Yankun; Chao, Kuanglin; Wang, Mingwu

    2012-05-01

    Optical technology is an important and emerging technology for non-destructive and rapid detection of pork freshness. This paper studied the possibility of using a multispectral imaging technique and scattering characteristics to predict the freshness parameters of pork meat. The pork freshness parameters selected for prediction included total volatile basic nitrogen (TVB-N), color parameters (L*, a*, b*), and pH value. Multispectral scattering images were obtained from the pork sample surface by a multispectral imaging system developed in-house; they were acquired at selected narrow wavebands whose center wavelengths were 517, 550, 560, 580, 600, 760, 810 and 910 nm. In order to extract scattering characteristics from multispectral images at multiple wavelengths, a Lorentzian distribution (LD) function with four parameters (a: scattering asymptotic value; b: scattering peak; c: scattering width; d: scattering slope) was used to fit the scattering curves at the selected wavelengths. The results show that the multispectral imaging technique combined with scattering characteristics is promising for predicting the freshness parameters of pork meat.
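
    The four-parameter Lorentzian fit can be illustrated as follows. The functional form R(x) = a + b / (1 + (x/c)^d) is an assumption based on the parameter descriptions above (asymptote, peak, width, slope); the authors' exact expression may differ.

    ```python
    # Sketch of fitting a four-parameter Lorentzian-type profile to one radial
    # scattering curve; the functional form R = a + b / (1 + (x/c)^d) is assumed
    # from the parameter descriptions (asymptote, peak, width, slope).
    import numpy as np
    from scipy.optimize import curve_fit

    def lorentzian4(x, a, b, c, d):
        return a + b / (1.0 + (x / c) ** d)

    def fit_scatter_profile(radial_distance_mm, reflectance):
        p0 = [reflectance.min(), reflectance.max(), 1.0, 2.0]   # rough initial guess
        params, _ = curve_fit(lorentzian4, radial_distance_mm, reflectance,
                              p0=p0, maxfev=10000)
        return dict(zip("abcd", params))   # a: asymptote, b: peak, c: width, d: slope
    ```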

  15. High-fidelity artifact correction for cone-beam CT imaging of the brain

    NASA Astrophysics Data System (ADS)

    Sisniega, A.; Zbijewski, W.; Xu, J.; Dang, H.; Stayman, J. W.; Yorkston, J.; Aygun, N.; Koliatsos, V.; Siewerdsen, J. H.

    2015-02-01

    CT is the frontline imaging modality for diagnosis of acute traumatic brain injury (TBI), involving the detection of fresh blood in the brain (contrast of 30-50 HU, detail size down to 1 mm) in a non-contrast-enhanced exam. A dedicated point-of-care imaging system based on cone-beam CT (CBCT) could benefit early detection of TBI and improve direction to appropriate therapy. However, flat-panel detector (FPD) CBCT is challenged by artifacts that degrade contrast resolution and limit application in soft-tissue imaging. We present and evaluate a fairly comprehensive framework for artifact correction to enable soft-tissue brain imaging with FPD CBCT. The framework includes a fast Monte Carlo (MC)-based scatter estimation method complemented by corrections for detector lag, veiling glare, and beam hardening. The fast MC scatter estimation combines GPU acceleration, variance reduction, and simulation with a low number of photon histories and reduced number of projection angles (sparse MC) augmented by kernel de-noising to yield a runtime of ~4 min per scan. Scatter correction is combined with two-pass beam hardening correction. Detector lag correction is based on temporal deconvolution of the measured lag response function. The effects of detector veiling glare are reduced by deconvolution of the glare response function representing the long range tails of the detector point-spread function. The performance of the correction framework is quantified in experiments using a realistic head phantom on a testbench for FPD CBCT. Uncorrected reconstructions were non-diagnostic for soft-tissue imaging tasks in the brain. After processing with the artifact correction framework, image uniformity was substantially improved, and artifacts were reduced to a level that enabled visualization of ~3 mm simulated bleeds throughout the brain. Non-uniformity (cupping) was reduced by a factor of 5, and contrast of simulated bleeds was improved from ~7 to 49.7 HU, in good agreement with the nominal blood contrast of 50 HU. Although noise was amplified by the corrections, the contrast-to-noise ratio (CNR) of simulated bleeds was improved by nearly a factor of 3.5 (CNR = 0.54 without corrections and 1.91 after correction). The resulting image quality motivates further development and translation of the FPD-CBCT system for imaging of acute TBI.
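
    One element of the framework above, detector lag correction by temporal deconvolution of the measured lag response function, can be sketched generically. The example below applies a regularized FFT inverse filter to each pixel's frame series; the function name and regularization constant are assumptions, not the authors' implementation.

    ```python
    # Generic sketch of detector lag correction by temporal deconvolution of the
    # measured lag response function, using a regularized FFT inverse filter
    # (function name and eps are assumptions, not the authors' implementation).
    import numpy as np

    def deconvolve_lag(frames, lag_irf, eps=1e-3):
        """frames: (n_frames, n_pixels) per-pixel time series of detector readings;
        lag_irf: measured lag impulse response sampled on the same frame grid."""
        n = frames.shape[0]
        H = np.fft.rfft(lag_irf, n=n)
        F = np.fft.rfft(frames, axis=0)
        H_inv = np.conj(H) / (np.abs(H) ** 2 + eps)   # regularized inverse filter
        return np.fft.irfft(F * H_inv[:, None], n=n, axis=0)
    ```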

  16. A finite difference-time domain technique for modeling narrow apertures in conducting scatterers

    NASA Technical Reports Server (NTRS)

    Demarest, Kenneth R.

    1987-01-01

    The finite difference-time domain (FDTD) technique has proven to be a valuable tool for the calculation of the transient and steady state scattering characteristics of relatively complex scatterer and source configurations. In spite of its usefulness, it exhibits serious deficiencies when used to analyze geometries that contain fine detail. An FDTD technique is described that utilizes Babinet's principle to decouple the regions on both sides of the aperture. The result is an FDTD technique that is capable of modeling apertures that are much smaller than the spatial grid used in the analysis and yet is not perturbed by numerical noise when used in the 'scattered field' mode. Numerical results are presented that show the field penetration through cavity-backed apertures that are much smaller than the spatial grid used during the solution.

  17. Model-based review of Doppler global velocimetry techniques with laser frequency modulation

    NASA Astrophysics Data System (ADS)

    Fischer, Andreas

    2017-06-01

    Optical measurements of flow velocity fields are of crucial importance to understand the behavior of complex flow. One flow field measurement technique is Doppler global velocimetry (DGV). A large variety of different DGV approaches exist, e.g., applying different kinds of laser frequency modulation. In order to investigate the measurement capabilities especially of the newer DGV approaches with laser frequency modulation, a model-based review of all DGV measurement principles is performed. The DGV principles can be categorized by the respective number of required time steps. The systematic review of all DGV principles reveals drawbacks and benefits of the different measurement approaches with respect to the temporal resolution, the spatial resolution and the measurement range. Furthermore, the Cramér-Rao bound for photon shot noise is calculated and discussed, which represents a fundamental limit of the achievable measurement uncertainty. As a result, all DGV techniques provide similar minimal uncertainty limits. With N_photons as the number of scattered photons, the minimal standard deviation of the flow velocity is about 106 m/s / √N_photons, calculated for a perpendicular arrangement of the illumination and observation directions and a laser wavelength of 895 nm. As a further result, the signal processing efficiencies are determined with a Monte-Carlo simulation. Except for the newest correlation-based DGV method, the signal processing algorithms are already optimal or near the optimum. Finally, the different DGV approaches are compared regarding errors due to temporal variations of the scattered light intensity and the flow velocity. The influence of a linear variation of the scattered light intensity can be reduced by maximizing the number of time steps, because this means acquiring more information for the correction of this systematic effect. However, more time steps can result in a flow velocity measurement with a lower temporal resolution when operating at the maximal frame rate of the camera. DGV without laser frequency modulation then provides the highest temporal resolution and is sensitive not to temporal but to spatial variations of the scattered light intensity. In contrast to this, all DGV variants suffer from velocity variations during the measurement. In summary, the experimental conditions and the measurement task finally decide about the ideal choice from the reviewed DGV methods.
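
    A quick numeric check of the quoted shot-noise limit (perpendicular illumination/observation geometry, 895 nm) can be written as follows; the helper name is illustrative.

    ```python
    # Quick numeric check of the quoted shot-noise limit for DGV (perpendicular
    # illumination/observation, 895 nm): sigma_v ~ 106 m/s / sqrt(N_photons).
    import numpy as np

    def dgv_velocity_std(n_photons):
        return 106.0 / np.sqrt(np.asarray(n_photons, dtype=float))   # m/s

    print(dgv_velocity_std([1e4, 1e6, 1e8]))   # -> [1.06, 0.106, 0.0106] m/s
    ```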

  18. Precision determination of electron scattering angle by differential nuclear recoil energy method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liyanage, N.; Saenboonruang, K.

    2015-12-01

    The accurate determination of the scattered electron angle is crucial to electron scattering experiments, both with open-geometry large-acceptance spectrometers and ones with dipole-type magnetic spectrometers for electron detection. In particular, for small central-angle experiments using dipole-type magnetic spectrometers, in which surveys are used to measure the spectrometer angle with respect to the primary electron beam, the importance of the scattering angle determination is emphasized. However, given the complexities of large experiments and spectrometers, the accuracy of such surveys is limited and insufficient to meet demands of some experiments. In this article, we present a new technique for determination of the electron scattering angle based on an accurate measurement of the primary beam energy and the principle of differential nuclear recoil. This technique was used to determine the scattering angle for several experiments carried out at the Experimental Hall A, Jefferson Lab. Results have shown that the new technique greatly improved the accuracy of the angle determination compared to surveys.

  19. Precision Determination of Electron Scattering Angle by Differential Nuclear Recoil Energy Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liyanage, Nilanga; Saenboonruang, Kiadtisak

    2015-09-01

    The accurate determination of the scattered electron angle is crucial to electron scattering experiments, both with open-geometry large-acceptance spectrometers and ones with dipole-type magnetic spectrometers for electron detection. In particular, for small central-angle experiments using dipole-type magnetic spectrometers, in which surveys are used to measure the spectrometer angle with respect to the primary electron beam, the importance of the scattering angle determination is emphasized. However, given the complexities of large experiments and spectrometers, the accuracy of such surveys is limited and insufficient to meet demands of some experiments. In this article, we present a new technique for determination of the electron scattering angle based on an accurate measurement of the primary beam energy and the principle of differential nuclear recoil. This technique was used to determine the scattering angle for several experiments carried out at the Experimental Hall A, Jefferson Lab. Results have shown that the new technique greatly improved the accuracy of the angle determination compared to surveys.

  20. Fast GPU-based Monte Carlo code for SPECT/CT reconstructions generates improved 177Lu images.

    PubMed

    Rydén, T; Heydorn Lagerlöf, J; Hemmingsson, J; Marin, I; Svensson, J; Båth, M; Gjertsson, P; Bernhardt, P

    2018-01-04

    Full Monte Carlo (MC)-based SPECT reconstructions have a strong potential for correcting for image degrading factors, but the reconstruction times are long. The objective of this study was to develop a highly parallel Monte Carlo code for fast, ordered subset expectation maximization (OSEM) reconstructions of SPECT/CT images. The MC code was written in the Compute Unified Device Architecture language for a computer with four graphics processing units (GPUs) (GeForce GTX Titan X, Nvidia, USA). This enabled simulations of parallel photon emissions from the voxel matrix (128³ or 256³). Each computed tomography (CT) number was converted to attenuation coefficients for photoabsorption, coherent scattering, and incoherent scattering. For photon scattering, the deflection angle was determined by the differential scattering cross sections. An angular response function was developed and used to model the accepted angles for photon interaction with the crystal, and a detector scattering kernel was used for modeling the photon scattering in the detector. Predefined energy and spatial resolution kernels for the crystal were used. The MC code was implemented in the OSEM reconstruction of clinical and phantom 177Lu SPECT/CT images. The Jaszczak image quality phantom was used to evaluate the performance of the MC reconstruction in comparison with attenuation-corrected (AC) OSEM reconstructions and attenuation-corrected OSEM reconstructions with resolution recovery correction (RRC). The performance of the MC code was 3200 million photons/s. The required number of photons emitted per voxel to obtain a sufficiently low noise level in the simulated image was 200 for a 128³ voxel matrix. With this number of emitted photons/voxel, the MC-based OSEM reconstruction with ten subsets was performed within 20 s/iteration. The images converged after around six iterations, so the reconstruction time was around 3 min. The activity recovery for the spheres in the Jaszczak phantom was clearly improved with MC-based OSEM reconstruction, e.g., the activity recovery was 88% for the largest sphere, while it was 66% for AC-OSEM and 79% for RRC-OSEM. The GPU-based MC code generated an MC-based SPECT/CT reconstruction within a few minutes, and reconstructed patient images of 177Lu-DOTATATE treatments revealed clearly improved resolution and contrast.
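
    The OSEM iteration into which the MC projector is plugged follows the standard multiplicative update. The sketch below shows that generic update with pluggable forward/back-projection callables (hypothetical interfaces, not the authors' GPU code).

    ```python
    # Generic multiplicative OSEM update with pluggable forward/back-projection
    # callables (e.g. a Monte Carlo forward projector); interfaces are hypothetical,
    # not the authors' GPU code.
    import numpy as np

    def osem(y, forward, backproject, subsets, image_shape, n_iter=6, eps=1e-8):
        """y: measured projections; subsets: list of projection-index arrays;
        forward(x, s) projects image x for subset s; backproject(p, s) is its adjoint."""
        x = np.ones(image_shape)
        for _ in range(n_iter):
            for s in subsets:
                expected = forward(x, s) + eps                 # model of the measured data
                correction = backproject(y[s] / expected, s)   # backprojected ratio
                norm = backproject(np.ones_like(y[s]), s) + eps
                x *= correction / norm                         # multiplicative update
        return x
    ```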

  1. Effect of centerbody scattering on propeller noise

    NASA Technical Reports Server (NTRS)

    Glegg, Stewart A. L.

    1991-01-01

    This paper describes how the effect of acoustic scattering from the hub or centerbody of a propeller will affect the far-field noise levels. A simple correction to Gutin's formula for steady loading noise is given. This is a maximum for the lower harmonics but has a negligible effect on the higher frequency components that are important subjectively. The case of a blade vortex interaction is also considered, and centerbody scattering is shown to have a significant effect on the acoustic far field.

  2. Advanced DPSM approach for modeling ultrasonic wave scattering in an arbitrary geometry

    NASA Astrophysics Data System (ADS)

    Yadav, Susheel K.; Banerjee, Sourav; Kundu, Tribikram

    2011-04-01

    Several techniques are used to diagnose structural damage. In the ultrasonic technique, structures are tested by analyzing ultrasonic signals scattered by damage. The interpretation of these signals requires a good understanding of the interaction between ultrasonic waves and structures. Therefore, researchers need analytical or numerical techniques to obtain a clear understanding of the interaction between ultrasonic waves and structural damage. However, modeling of the wave scattering phenomenon by conventional numerical techniques such as the finite element method requires a very fine mesh at high frequencies, necessitating heavy computational power. The distributed point source method (DPSM) is a newly developed, robust, mesh-free technique to simulate ultrasonic, electrostatic and electromagnetic fields. In most of the previous studies the DPSM technique has been applied to model two-dimensional surface geometries and simple three-dimensional scatterer geometries; it was difficult to perform the analysis for complex three-dimensional geometries. This technique has now been extended to model wave scattering in an arbitrary geometry. In this paper a channel section, idealized as a thin solid plate with several rivet holes, is modeled. The simulation has been carried out with and without cracks near the rivet holes. Further, a comparison study has also been carried out to characterize the crack. A computer code has been developed in C for modeling the ultrasonic field in a solid plate with and without cracks near the rivet holes.

  3. Relative importance of multiple scattering by air molecules and aerosols in forming the atmospheric path radiance in the visible and near-infrared parts of the spectrum.

    PubMed

    Antoine, D; Morel, A

    1998-04-20

    Single and multiple scattering by molecules or by atmospheric aerosols only (homogeneous scattering), and heterogeneous scattering by aerosols and molecules, are recorded in Monte Carlo simulations. It is shown that heterogeneous scattering (1) always contributes significantly to the path reflectance (rho(path)), (2) is realized at the expense of homogeneous scattering, (3) decreases when aerosols are absorbing, and (4) introduces deviations in the spectral dependencies of reflectances compared with the Rayleigh exponent and the aerosol Ångström exponent. The ratio of rho(path) to the Rayleigh reflectance for an aerosol-free atmosphere is linearly related to the aerosol optical thickness. This result provides a basis for a new scheme for atmospheric correction of remotely sensed ocean color observations.

  4. Trans-dimensional joint inversion of seabed scattering and reflection data.

    PubMed

    Steininger, Gavin; Dettmer, Jan; Dosso, Stan E; Holland, Charles W

    2013-03-01

    This paper examines joint inversion of acoustic scattering and reflection data to resolve seabed interface roughness parameters (spectral strength, exponent, and cutoff) and geoacoustic profiles. Trans-dimensional (trans-D) Bayesian sampling is applied with both the number of sediment layers and the order (zeroth or first) of auto-regressive parameters in the error model treated as unknowns. A prior distribution that allows fluid sediment layers over an elastic basement in a trans-D inversion is derived and implemented. Three cases are considered: Scattering-only inversion, joint scattering and reflection inversion, and joint inversion with the trans-D auto-regressive error model. Including reflection data improves the resolution of scattering and geoacoustic parameters. The trans-D auto-regressive model further improves scattering resolution and correctly differentiates between strongly and weakly correlated residual errors.

  5. Rapid determination of crocins in saffron by near-infrared spectroscopy combined with chemometric techniques

    NASA Astrophysics Data System (ADS)

    Li, Shuailing; Shao, Qingsong; Lu, Zhonghua; Duan, Chengli; Yi, Haojun; Su, Liyang

    2018-02-01

    Saffron is an expensive spice. Its primary effective constituents are crocin I and II, and the contents of these compounds directly affect the quality and commercial value of saffron. In this study, near-infrared spectroscopy was combined with chemometric techniques for the determination of crocin I and II in saffron. Partial least squares regression models were built for the quantification of crocin I and II. By comparing different spectral ranges and spectral pretreatment methods (no pretreatment, vector normalization, subtract a straight line, multiplicative scatter correction, minimum-maximum normalization, eliminate the constant offset, first derivative, and second derivative), optimum models were developed. The root mean square error of cross-validation values of the best partial least squares models for crocin I and II were 1.40 and 0.30, respectively. The coefficients of determination for crocin I and II were 93.40 and 96.30, respectively. These results show that near-infrared spectroscopy can be combined with chemometric techniques to determine the contents of crocin I and II in saffron quickly and efficiently.
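
    One of the pretreatment/model combinations compared above, multiplicative scatter correction followed by partial least squares regression, can be sketched generically; the implementation below is a common textbook version, not the authors' exact software settings.

    ```python
    # Generic multiplicative scatter correction (MSC) followed by PLS regression,
    # one of the pretreatment/model combinations compared in the study
    # (textbook implementation, not the authors' exact software settings).
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    def msc(spectra, reference=None):
        """Regress each spectrum on the mean spectrum and remove offset and slope."""
        ref = spectra.mean(axis=0) if reference is None else reference
        corrected = np.empty(spectra.shape, dtype=float)
        for i, s in enumerate(spectra):
            slope, offset = np.polyfit(ref, s, 1)
            corrected[i] = (s - offset) / slope
        return corrected

    # X: (n_samples, n_wavelengths) NIR spectra, y: crocin I (or II) content
    # model = PLSRegression(n_components=8).fit(msc(X), y)
    ```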

  6. Scattering matrix elements of biological particles measured in a flow through system: theory and practice.

    PubMed

    Sloot, P M; Hoekstra, A G; van der Liet, H; Figdor, C G

    1989-05-15

    Light scattering techniques (including depolarization experiments) applied to biological cells provide a fast, nondestructive probe that is very sensitive to small morphological differences. Until now, quantitative measurements of these scatter phenomena have only been described for particles in suspension. In this paper we discuss the symmetry conditions applicable to the scattering matrices of monodisperse biological cells in a flow cytometer and provide evidence that quantitative measurement of the elements of these scattering matrices is possible in flow through systems. Two fundamental extensions to the theoretical description of conventional scattering experiments are introduced: large cone integration of scattering signals and simultaneous implementation of the localization principle to account for scattering by a sharply focused laser beam. In addition, a specific calibration technique is proposed to account for depolarization effects of the highly specialized optics applied in flow through equipment.

  7. [Primary Study on Predicting the Termination of Paroxysmal Atrial Fibrillation Based on a Novel RdR RR Intervals Scatter Plot].

    PubMed

    Lu, Hongwei; Zhang, Chenxi; Sun, Ying; Hao, Zhidong; Wang, Chunfang; Tian, Jiajia

    2015-08-01

    Predicting the termination of paroxysmal atrial fibrillation (AF) may provide a signal to decide whether there is a need to intervene in the AF in a timely manner. We proposed a novel RdR RR-interval scatter plot in our study. The abscissa of the RdR scatter plot was set to the RR interval and the ordinate was set to the difference between successive RR intervals. The RdR scatter plot thus includes information on both the RR intervals and the differences between successive RR intervals, capturing more heart rate variability (HRV) information. By RdR scatter plot analysis of one-minute RR intervals for 50 segments with non-terminating AF and immediately terminating AF, it was found that the points in the RdR scatter plot of non-terminating AF were more decentralized than those of immediately terminating AF. By dividing the RdR scatter plot into uniform grids and counting the number of non-empty grids, non-terminating AF and immediately terminating AF segments were differentiated. Utilizing 49 RR intervals, 17 of 20 segments of the learning set and 20 of 30 segments of the test set were correctly detected. Utilizing 66 RR intervals, 16 of 18 segments of the learning set and 20 of 28 segments of the test set were correctly detected. The results demonstrated that during the last minute before the termination of paroxysmal AF, the variance of the RR intervals and the difference between neighboring RR intervals became smaller. The termination of paroxysmal AF could be successfully predicted by utilizing the RdR scatter plot, although the prediction accuracy needs further improvement.
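
    The grid-counting feature described above can be computed in a few lines. In the sketch below the grid cell size is an assumed placeholder, not the value tuned in the paper.

    ```python
    # Sketch of the RdR grid-counting feature: abscissa RR(i), ordinate
    # RR(i+1) - RR(i); count non-empty cells of a uniform grid.  The cell size
    # is an assumed placeholder, not the value tuned in the paper.
    import numpy as np

    def rdr_nonempty_cells(rr_intervals_ms, cell_ms=25.0):
        rr = np.asarray(rr_intervals_ms, dtype=float)
        x = rr[:-1]              # RR interval
        y = np.diff(rr)          # difference between successive RR intervals
        ix = np.floor(x / cell_ms).astype(int)
        iy = np.floor(y / cell_ms).astype(int)
        # More occupied cells -> more scattered plot -> non-terminating AF
        return len(set(zip(ix, iy)))
    ```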

  8. Stereo-tomography in triangulated models

    NASA Astrophysics Data System (ADS)

    Yang, Kai; Shao, Wei-Dong; Xing, Feng-yuan; Xiong, Kai

    2018-04-01

    Stereo-tomography is a distinctive tomographic method. It is capable of estimating the scatterer position, the local dip of the scatterer and the background velocity simultaneously. Building a geologically consistent velocity model is always appealing for applied and earthquake seismologists. Differing from previous work that incorporates various regularization techniques into the cost function of stereo-tomography, we consider extending stereo-tomography to a triangulated model to be the most straightforward way to achieve this goal. In this paper, we provide all the Fréchet derivatives of stereo-tomographic data components with respect to model components for a slowness-squared triangulated model (or sloth model) in 2D Cartesian coordinates, based on the ray perturbation theory for interfaces. A sloth model representation is sparser than the conventional B-spline model representation. A sparser model representation leads to a smaller stereo-tomographic (Fréchet) matrix, a higher-accuracy solution when solving linear equations, a faster convergence rate and a lower requirement on the quantity of data. Moreover, a quantitative representation of interfaces strengthens the relationships among different model components, which makes the cross regularizations among these model components, such as node coordinates, scatterer coordinates and scattering angles, more straightforward and easier to implement. The sensitivity analysis, the model resolution matrix analysis and a series of synthetic data examples demonstrate the correctness of the Fréchet derivatives, the applicability of the regularization terms and the robustness of stereo-tomography in a triangulated model. This provides a solid theoretical foundation for real applications in the future.

  9. Multilayered phantoms with tunable optical properties for a better understanding of light/tissue interactions

    NASA Astrophysics Data System (ADS)

    Roig, Blandine; Koenig, Anne; Perraut, François; Piot, Olivier; Vignoud, Séverine; Lavaud, Jonathan; Manfait, Michel; Dinten, Jean-Marc

    2015-03-01

    Light/tissue interactions, like diffuse reflectance, endogenous fluorescence and Raman scattering, are a powerful means of providing skin diagnosis. Instrument calibration is an important step. We thus developed multilayered phantoms for calibration of optical systems. These phantoms mimic the optical properties of biological tissues such as skin. Our final objective is to better understand light/tissue interactions, especially in the case of confocal Raman spectroscopy. The phantom preparation procedure is described, including the method employed to obtain a stratified object. PDMS was chosen as the bulk material. TiO2 was used as the light scattering agent. Dye and ink were adopted to mimic, respectively, oxy-hemoglobin and melanin absorption spectra. By varying the amount of the incorporated components, we created a material with tunable optical properties. Monolayer and multilayered phantoms were designed to allow several characterization methods, among them X-ray tomography for structural information, diffuse reflectance spectroscopy (DRS) with a homemade fibered bundle system for optical characterization, and Raman depth profiling with a commercial confocal Raman microscope for structural information and for our final objective. For each technique, the obtained results are presented and correlated when possible. Raman depth profiles of the multilayered phantoms are distorted by elastic scattering. The signal attenuation through each single layer is directly dependent on its own scattering property. Therefore, determining the optical properties, obtained here with DRS, is crucial to properly correct Raman depth profiles. This would then permit quantitative studies on skin, for instance for drug permeation follow-up or hydration assessment.

  10. Proton and neutron electromagnetic form factors and uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Zhihong; Arrington, John; Hill, Richard J.

    We determine the nucleon electromagnetic form factors and their uncertainties from world electron scattering data. The analysis incorporates two-photon exchange corrections, constraints on the low-Q² and high-Q² behavior, and additional uncertainties to account for tensions between different data sets and uncertainties in radiative corrections.

  11. Proton and neutron electromagnetic form factors and uncertainties

    DOE PAGES

    Ye, Zhihong; Arrington, John; Hill, Richard J.; ...

    2017-12-06

    We determine the nucleon electromagnetic form factors and their uncertainties from world electron scattering data. The analysis incorporates two-photon exchange corrections, constraints on the low-Q² and high-Q² behavior, and additional uncertainties to account for tensions between different data sets and uncertainties in radiative corrections.

  12. SU-D-12A-07: Optimization of a Moving Blocker System for Cone-Beam Computed Tomography Scatter Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ouyang, L; Yan, H; Jia, X

    2014-06-01

    Purpose: A moving blocker based strategy has shown promising results for scatter correction in cone-beam computed tomography (CBCT). Different parameters of the system design affect its performance in scatter estimation and image reconstruction accuracy. The goal of this work is to optimize the geometric design of the moving blocker system. Methods: In the moving blocker system, a blocker consisting of lead strips is inserted between the x-ray source and the imaged object and moves back and forth along the rotation axis during CBCT acquisition. A CT image of an anthropomorphic pelvic phantom was used in the simulation study. The scatter signal was simulated by Monte Carlo calculation with various combinations of the lead strip width and the gap between neighboring lead strips, ranging from 4 mm to 80 mm (projected at the detector plane). The scatter signal in the unblocked region was estimated by cubic B-spline interpolation from the blocked region. Scatter estimation accuracy was quantified as the relative root mean squared error by comparing the interpolated scatter to the Monte Carlo simulated scatter. CBCT was reconstructed by total variation minimization from the unblocked region, under various combinations of the lead strip width and gap. Reconstruction accuracy for each condition is quantified by the CT number error relative to a CBCT reconstructed from unblocked full projection data. Results: The scatter estimation error varied from 0.5% to 2.6% as the lead strip width and the gap varied from 4 mm to 80 mm. The CT number error in the reconstructed CBCT images varied from 12 to 44. The highest reconstruction accuracy is achieved when the blocker lead strip width is 8 mm and the gap is 48 mm. Conclusions: Accurate scatter estimation can be achieved over a large range of combinations of lead strip width and gap. However, image reconstruction accuracy is greatly affected by the geometric design of the blocker.
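
    The cubic B-spline interpolation step used to estimate scatter in the unblocked region can be illustrated in a simplified 1-D form along the axis perpendicular to the strips; the column indices and geometry are placeholders, not the optimized 8 mm / 48 mm design.

    ```python
    # Simplified 1-D version of the scatter estimation: cubic-spline interpolation
    # of the signal measured behind the lead strips across one detector row
    # (column indices are placeholders, not the optimized blocker geometry).
    import numpy as np
    from scipy.interpolate import CubicSpline

    def estimate_scatter_row(row_signal, blocked_cols):
        """row_signal: one detector row; blocked_cols: sorted column indices behind
        the lead strips, where the detected signal is (almost) pure scatter."""
        cols = np.arange(row_signal.size)
        spline = CubicSpline(np.asarray(blocked_cols), row_signal[blocked_cols])
        return spline(cols)   # scatter estimate for every column of the row

    # scatter-corrected row: row_signal - estimate_scatter_row(row_signal, blocked_cols)
    ```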

  13. Development of a calibration protocol for quantitative imaging for molecular radiotherapy dosimetry

    NASA Astrophysics Data System (ADS)

    Wevrett, J.; Fenwick, A.; Scuffham, J.; Nisbet, A.

    2017-11-01

    Within the field of molecular radiotherapy, there is a significant need for standardisation in dosimetry, in both quantitative imaging and dosimetry calculations. Currently, a wide range of techniques is used by different clinical centres, and as a result there is no means to compare patient doses between centres. To help address this need, a 3-year project was funded by the European Metrology Research Programme, and a number of clinical centres were involved in the project. One of the required outcomes of the project was to develop a calibration protocol for three-dimensional quantitative imaging of volumes of interest. Two radionuclides were selected as being of particular interest: iodine-131 (131I, used to treat thyroid disorders) and lutetium-177 (177Lu, used to treat neuroendocrine tumours). A small volume of activity within a scatter medium (water), representing a lesion within a patient body, was chosen as the calibration method. To ensure ease of use in clinical centres, an "off-the-shelf" solution was proposed to avoid the need for in-house manufacturing. The BIODEX elliptical Jaszczak phantom and a 16 ml fillable sphere were selected. The protocol was developed for use on SPECT/CT gamma cameras only, where the CT dataset is used to correct the imaging data for attenuation of the emitted photons within the phantom. The protocol corrects for scatter of emitted photons using the triple energy window correction technique utilised by most clinical systems. A number of clinical systems were tested in the development of this protocol, covering the major manufacturers of gamma cameras generally used in Europe. Initial imaging was performed with 131I and 177Lu at a number of clinical centres, but owing to time constraints in the project, some acquisitions were performed with 177Lu only. The protocol is relatively simple and does not account for the effects of dead time in high-activity patients, the presence of background activity surrounding volumes of interest, or the partial volume effect when imaging lesions smaller than 16 ml. The development of this simple protocol demonstrates that it is possible to produce a standardised quantitative imaging protocol for molecular radiotherapy dosimetry. However, the protocol needs further development to incorporate other radionuclides and to account for the effects that have been disregarded in this initial version.
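
    The triple energy window (TEW) scatter correction mentioned above estimates the scatter in the photopeak window from two narrow adjacent windows. A minimal sketch follows; the window widths in the example are illustrative, not the protocol's prescribed settings.

    ```python
    # Minimal triple-energy-window (TEW) scatter estimate for the photopeak counts;
    # window widths here are illustrative, not the protocol's prescribed settings.
    def tew_corrected_counts(c_peak, c_lower, c_upper, w_peak, w_lower, w_upper):
        """Estimate photopeak scatter from two narrow side windows (trapezoid rule)
        and subtract it from the photopeak counts."""
        scatter = (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0
        return max(c_peak - scatter, 0.0)

    # Example: 15% window on the 208 keV peak of 177Lu with two 6 keV side windows
    # corrected = tew_corrected_counts(c_peak=12000, c_lower=900, c_upper=400,
    #                                  w_peak=31.2, w_lower=6.0, w_upper=6.0)
    ```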

  14. Apparatus and methods for determining gas saturation and porosity of a formation penetrated by a gas filled or liquid filled borehole

    DOEpatents

    Wilson, Robert D.

    2001-03-27

    Methods and apparatus are disclosed for determining gas saturation, liquid saturation, porosity and density of earth formations penetrated by a well borehole. Determinations are made from measures of fast neutron and inelastic scatter gamma radiation induced by a pulsed, fast neutron source. The system preferably uses two detectors axially spaced from the neutron source. One detector is preferably a scintillation detector responsive to gamma radiation, and a second detector is preferably an organic scintillator responsive to both neutron and gamma radiation. The system can be operated in cased boreholes which are filled with either gas or liquid. Techniques for correcting all measurements for borehole conditions are disclosed.

  15. Early Breast Cancer Detection by Ultrawide Band Imaging with Dispersion Consideration

    NASA Astrophysics Data System (ADS)

    Xiao, Xia; Kikkawa, Takamaro

    2008-04-01

    Ultrawide band (UWB) microwave imaging is a promising method for early-stage breast cancer detection based on the large contrast in electrical parameters between the tumor and the normal breast tissue. The tumor can be detected by analyzing the reflection and scattering behavior of UWB microwaves propagating in the breast. In this study, the tumor location is determined by comparing the waveforms resulting from tumor-containing and tumor-free breasts. The frequency-dispersive characteristics of the fatty breast tissue, skin and tumor are considered in order to approach the actual electrical properties of the breast. The correct location and size are visualized for an early-stage tumor embedded in the breast using the principle of a confocal microwave imaging technique.

  16. Scanning of Adsorption Hysteresis In Situ with Small Angle X-Ray Scattering

    PubMed Central

    Mitropoulos, Athanasios Ch.; Favvas, Evangelos P.; Stefanopoulos, Konstantinos L.; Vansant, Etienne F.

    2016-01-01

    Everett's theorem-6 of the domain theory was examined by conducting adsorption in situ with small angle x-ray scattering (SAXS) supplemented by the contrast matching technique. The study focuses on the spectral differences at a point to which the system arrives from different scanning paths. According to this theorem, at a common point the system has similar macroscopic properties. Furthermore, the memory string of the system was examined. We concluded that, contrary to theorem-6: a) the system can arrive at a common point in a finite (not an infinite) number of ways, b) a correction for the thickness of the adsorbed film prior to capillary condensation is necessary, and c) the scattering curves coincide at high-Q values but differ at low-Q values, indicating different microscopic states. That is, at a common point the system holds different metastable states sustained by hysteresis effects. These metastable states are the ones which highlight the way of a system back to a return point memory (RPM). Entering the hysteresis loop from different RPMs, different histories are imprinted on the paths toward the common point. Although in general memory points refer to relaxation phenomena, they also constitute a characteristic feature of capillary condensation. Analogies with the no-passing rule and the adiabaticity assumption in the frame of adsorption hysteresis are discussed. PMID:27741263

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirillov, A. A.; Savelova, E. P., E-mail: ka98@mail.ru

    The problem of free-particle scattering on virtual wormholes is considered. It is shown that, for all types of relativistic fields, this scattering leads to the appearance of additional very heavy particles, which play the role of auxiliary fields in the invariant scheme of Pauli–Villars regularization. A nonlinear correction that describes the back reaction of particles to the vacuum distribution of virtual wormholes is also obtained.

  18. Limitations on near-surface correction for multicomponent offset VSP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macbeth, C.; Li, X.Y.; Horne, S.

    1994-12-31

    Multicomponent data are degraded by near-surface scattering and non-ideal or unexpected source behavior. These effects cannot be neglected when interpreting relative wavefield attributes derived from compressional and shear waves. They confuse analyses based on standard scalar procedures and a prima facie interpretation of the vector wavefield properties. Here, the authors highlight two unique polar matrix decompositions for near-surface correction in offset VSPs, consider their inherent mathematical constraints and how they impact on subsurface interpretation. The first method is applied to a four-component subset of six-component field data from a configuration of three concentric rings and walkaway source positions forming offset VSPs in the Cymric field, California. The correction appears successful in automatically converting the wavefield into its ideal form, and the qS1 polarizations scatter around N15°E in agreement with the layer stripping of Winterstein and Meadows (1991).

  19. Calibration correction of an active scattering spectrometer probe to account for refractive index of stratospheric aerosols

    NASA Technical Reports Server (NTRS)

    Pueschel, R. F.; Overbeck, V. R.; Snetsinger, K. G.; Russell, P. B.; Ferry, G. V.

    1990-01-01

    The use of the active scattering spectrometer probe (ASAS-X) to measure sulfuric acid aerosols on U-2 and ER-2 research aircraft has yielded results that are at times ambiguous due to the dependence of the particles' optical signatures on refractive index as well as physical dimensions. The calibration correction of the ASAS-X optical spectrometer probe for stratospheric aerosol studies is validated through an independent and simultaneous sampling of the particles with impactors; sizing and counting of particles on SEM images yields total particle areas and volumes. Upon correction of the calibration in light of these data, spectrometer results averaged over four size distributions are found to agree with similarly averaged impactor results to within a few percent, indicating that the optical properties or chemical composition of the sample aerosol must be known in order to achieve accurate optical aerosol spectrometer size analysis.

  20. The refractive index in electron microscopy and the errors of its approximations.

    PubMed

    Lentzen, M

    2017-05-01

    In numerical calculations for electron diffraction often a simplified form of the electron-optical refractive index, linear in the electric potential, is used. In recent years improved calculation schemes have been proposed, aiming at higher accuracy by including higher-order terms of the electric potential. These schemes start from the relativistically corrected Schrödinger equation, and use a second simplified form, now for the refractive index squared, being linear in the electric potential. The second and higher-order corrections thus determined have, however, a large error, compared to those derived from the relativistically correct refractive index. The impact of the two simplifications on electron diffraction calculations is assessed through numerical comparison of the refractive index at high-angle Coulomb scattering and of cross-sections for a wide range of scattering angles, kinetic energies, and atomic numbers.

  1. Preliminary Analysis of the Performance of the Landsat 8/OLI Land Surface Reflectance Product

    NASA Technical Reports Server (NTRS)

    Vermote, Eric; Justice, Chris; Claverie, Martin; Franch, Belen

    2016-01-01

    The surface reflectance, i.e., the satellite-derived top-of-atmosphere (TOA) reflectance corrected for the temporally, spatially and spectrally varying scattering and absorbing effects of atmospheric gases and aerosols, is needed to monitor the land surface reliably. For this reason, the surface reflectance, and not the TOA reflectance, is used to generate the great majority of global land products, for example from the Moderate Resolution Imaging Spectroradiometer (MODIS) and Visible Infrared Imaging Radiometer Suite (VIIRS) sensors. Even when atmospheric effects are minimized by sensor design, they remain challenging to correct. In particular, the strong impact of aerosols in the visible and near-infrared spectral range can be difficult to correct, because aerosols can be highly discrete in space and time (e.g., smoke plumes) and because their scattering and absorbing properties vary spectrally and with aerosol size, shape, chemistry and density.
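
    A commonly used approximation relating the two reflectances (a generic radiative-transfer formulation, assumed here for illustration rather than the exact operational Landsat 8/OLI algorithm) is

      \[
        \rho_{\mathrm{TOA}}(\theta_s,\theta_v,\phi) \;\approx\;
        T_g(\theta_s,\theta_v)\left[\rho_{\mathrm{atm}}(\theta_s,\theta_v,\phi)
        + \frac{T(\theta_s)\,T(\theta_v)\,\rho_s}{1 - S\,\rho_s}\right],
      \]

    where ρ_atm is the atmospheric (path) reflectance, T(θ_s) and T(θ_v) are the total downward and upward transmittances, S is the atmospheric spherical albedo, T_g is the gaseous transmittance, and ρ_s is the surface reflectance retrieved by inverting this expression once the aerosol and gaseous terms have been estimated.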

  2. Neutron scattering from 208Pb at 30.4 and 40.0 MeV and isospin dependence of the nucleon optical potential

    NASA Astrophysics Data System (ADS)

    Devito, R. P.; Khoa, Dao T.; Austin, Sam M.; Berg, U. E. P.; Loc, Bui Minh

    2012-02-01

    Background: Analysis of data involving nuclei far from stability often requires the optical potential (OP) for neutron scattering. Because neutron data are seldom available, whereas proton scattering data are more abundant, it is useful to have estimates of the difference of the neutron and proton optical potentials. This information is contained in the isospin dependence of the nucleon OP. Here we attempt to provide it for the nucleon-208Pb system. Purpose: The goal of this paper is to obtain accurate n+208Pb scattering data and use it, together with existing p+208Pb and 208Pb(p,n)208Bi(IAS) data, to obtain an accurate estimate of the isospin dependence of the nucleon OP at energies in the 30-60-MeV range. Method: Cross sections for n+208Pb scattering were measured at 30.4 and 40.0 MeV, with a typical relative (normalization) accuracy of 2-4% (3%). An angular range of 15° to 130° was covered using the beam-swinger time-of-flight system at Michigan State University. These data were analyzed by a consistent optical-model study of the neutron data and of elastic p+208Pb scattering at 45 and 54 MeV. These results were combined with a coupled-channel analysis of the 208Pb(p,n) reaction at 45 MeV, exciting the 0+ isobaric analog state (IAS) in 208Bi. Results: The new data and analysis give an accurate estimate of the isospin impurity of the nucleon-208Pb OP at 30.4 MeV caused by the Coulomb correction to the proton OP. The corrections to the real proton OP given by the CH89 global systematics were found to be only a few percent, whereas for the imaginary potential they were greater than 20% at the nuclear surface. On the basis of the analysis of the measured elastic n+208Pb data at 40 MeV, a Coulomb correction of similar strength and shape was also predicted for the p+208Pb OP at energies around 54 MeV. Conclusions: Accurate neutron scattering data can be used in combination with proton scattering data and (p,n) charge-exchange data leading to the IAS to obtain reliable estimates of the isospin impurity of the nucleon OP.
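
    The isospin dependence discussed here is conventionally expressed in the Lane form; the relation below is the standard textbook statement (given as an assumption about conventions, not quoted from the paper), written for real potential depths taken as positive:

      \[
        U_{p/n}(r,E) \;=\; U_0(r,E) \;\pm\; \varepsilon\, U_1(r,E),
        \qquad \varepsilon = \frac{N-Z}{A},
      \]

    with the upper sign for protons and the lower sign for neutrons, U_0 the isoscalar and U_1 the isovector (Lane) component; the Coulomb correction to the proton OP arises because the Coulomb field shifts the effective proton energy inside the nucleus relative to the nominal beam energy.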

  3. 3D Tomographic SAR Imaging in Densely Vegetated Mountainous Rural Areas in China and Sweden

    NASA Astrophysics Data System (ADS)

    Feng, L.; Muller, J. P.

    2017-12-01

    3D SAR tomography (TomoSAR) and 4D differential SAR tomography (Diff-TomoSAR) exploit multi-baseline SAR data stacks and constitute an important extension of SAR interferometry, unscrambling complex scenes in which multiple scatterers are mapped into the same SAR cell. In addition to 3-D shape reconstruction and deformation monitoring in complex urban/infrastructure areas, and recent cryospheric ice investigations, emerging tomographic remote sensing applications include forest applications, e.g. tree height and biomass estimation, sub-canopy topographic mapping, and even search, rescue and surveillance. However, these scenes are characterized by temporal decorrelation of scatterers and by orbital, tropospheric and ionospheric phase distortion, and there is an open issue regarding possible height blurring and accuracy losses for TomoSAR applications, particularly in densely vegetated mountainous rural areas. It is therefore important to develop solutions for temporal decorrelation and for orbital, tropospheric and ionospheric phase distortion. We report here on 3D imaging (especially in vertical layers) over densely vegetated mountainous rural areas using SAR tomography applied to stacks of X-band COSMO-SkyMed Spotlight and L-band ALOS-1 PALSAR data over Dujiangyan Dam, Sichuan, China, and to L- and P-band airborne SAR data (BioSAR 2008 - ESA) in the Krycklan river catchment, Northern Sweden. The new TanDEM-X 12 m DEM is first used to assist co-registration of all the data stacks over China. Atmospheric correction is then assessed using weather-model data such as ERA-I, MERRA, MERRA-2 and WRF; linear phase-topography correction and MODIS spectrometer correction will be compared, and ionospheric correction methods are discussed to remove tropospheric and ionospheric delay. The new TomoSAR method with the TanDEM-X 12 m DEM is then described to obtain the number of scatterers inside each pixel and the scattering amplitude and phase of each scatterer, and finally to extract tomograms (imaging), 3D positions and motion parameters (deformation). A progress report will be given on these different aspects. This work is partially supported by the CSC and the UCL MAPS Dean prize through a PhD studentship at UCL-MSSL.
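
    At the core of the tomographic step is an aperture-synthesis inversion along the elevation direction. The sketch below (Python; all names, the small-angle phase model phi_n(s) = 4*pi*b_perp[n]*s/(lambda*r), and the toy numbers are assumptions for illustration, not the authors' processing chain) shows classical beamforming of the elevation profile of a single pixel from its complex values across N baselines:

      import numpy as np

      def tomosar_beamforming(y, b_perp, wavelength, slant_range, s_axis):
          """Classical beamforming: elevation power profile P(s) for one pixel."""
          y = np.asarray(y, dtype=complex)
          kz = 4.0 * np.pi * np.asarray(b_perp)[:, None] / (wavelength * slant_range)
          A = np.exp(1j * kz * np.asarray(s_axis)[None, :])   # steering matrix
          return np.abs(A.conj().T @ y) ** 2 / len(y) ** 2

      # Toy example: 20 acquisitions with random perpendicular baselines (X-band)
      rng = np.random.default_rng(0)
      b_perp = rng.uniform(-300.0, 300.0, 20)        # perpendicular baselines [m]
      wavelength, slant_range = 0.031, 6.2e5         # ~3.1 cm wavelength, ~620 km slant range
      s_axis = np.linspace(-50.0, 50.0, 401)         # trial elevations [m]

      kz = 4.0 * np.pi * b_perp / (wavelength * slant_range)
      y = np.exp(1j * kz * 20.0)                     # single scatterer at 20 m elevation
      profile = tomosar_beamforming(y, b_perp, wavelength, slant_range, s_axis)
      print("estimated elevation [m]:", s_axis[np.argmax(profile)])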

  4. Theory and experimental technique for nondestructive evaluation of ceramic composites

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.

    1990-01-01

    The important ultrasonic scattering mechanisms for SiC and Si3N4 ceramic composites were identified by examining the interaction of ultrasound with individual fibers, pores, and grains. The dominant scattering mechanisms were identified as asymmetric refractive scattering due to porosity gradients in the matrix material, and symmetric diffractive scattering at the fiber-to-matrix interface and at individual pores. The effect of the ultrasonic reflection coefficient and surface roughness in the ultrasonic evaluation was highlighted. A new nonintrusive ultrasonic evaluation technique, angular power spectrum scanning (APSS), was presented that is sensitive to microstructural variations in composites. Preliminary results indicate that APSS will yield information on the composite microstructure that is not available by any other nondestructive technique.

  5. A Fourier-based total-field/scattered-field technique for three-dimensional broadband simulations of elastic targets near a water-sand interface.

    PubMed

    Shao, Yu; Wang, Shumin

    2016-12-01

    The numerical simulation of acoustic scattering from elastic objects near a water-sand interface is critical to underwater target identification. Frequency-domain methods are computationally expensive, especially for large-scale broadband problems. A numerical technique is proposed to enable the efficient use of finite-difference time-domain method for broadband simulations. By incorporating a total-field/scattered-field boundary, the simulation domain is restricted inside a tightly bounded region. The incident field is further synthesized by the Fourier transform for both subcritical and supercritical incidences. Finally, the scattered far field is computed using a half-space Green's function. Numerical examples are further provided to demonstrate the accuracy and efficiency of the proposed technique.
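
    The total-field/scattered-field (TF/SF) idea itself is easiest to see in one dimension. The sketch below (Python, normalized units; a generic textbook 1-D electromagnetic example standing in for the paper's 3-D implementation, with all parameter values chosen arbitrarily) injects a plane-wave pulse at a TF/SF boundary so that the incident field exists only in the total-field region:

      import numpy as np

      # 1-D free-space FDTD with a total-field/scattered-field (TF/SF) boundary.
      # Normalized units: eps0 = mu0 = c = 1 and dx = dt = 1 (the exact 1-D "magic"
      # time step), so the incident wave used in the corrections satisfies the
      # discrete update equations exactly.
      nx, nt = 400, 350
      tfsf = 50                           # ez[tfsf] is the first total-field node
      ez = np.zeros(nx)
      hy = np.zeros(nx)

      def src(t):
          """Incident pulse Ez_inc(x=tfsf, t): a differentiated Gaussian (arbitrary choice)."""
          t0, spread = 40.0, 10.0
          return -(t - t0) / spread * np.exp(-((t - t0) / spread) ** 2)

      for n in range(nt):
          # H update (time n-1/2 -> n+1/2) from E at time n
          hy[:-1] += ez[1:] - ez[:-1]
          # hy[tfsf-1] is a scattered-field node but its update used the total-field ez[tfsf];
          # subtract the incident Ez there, Ez_inc(x=tfsf, t=n) = src(n)
          hy[tfsf - 1] -= src(n)
          # E update (time n -> n+1) from H at time n+1/2
          ez[1:] += hy[1:] - hy[:-1]
          # ez[tfsf] is a total-field node but its update used the scattered-field hy[tfsf-1];
          # add the missing incident Hy = -Ez_inc, which at (x=tfsf-1/2, t=n+1/2) is -src(n+1)
          ez[tfsf] -= src(n + 1)

      # The pulse exists only to the right of the boundary; to its left the field is
      # zero up to roundoff, since there is no scatterer to produce a scattered field.
      print("max |Ez| in scattered-field zone:", np.abs(ez[:tfsf]).max())
      print("max |Ez| in total-field zone    :", np.abs(ez[tfsf:]).max())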

  6. An analytic formula for the relativistic incoherent Thomson backscattering spectrum for a drifting bi-Maxwellian plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naito, O.

    2015-08-15

    An analytic formula has been derived for the relativistic incoherent Thomson backscattering spectrum of a drifting anisotropic plasma when the scattering vector is parallel to the drift direction. The shape of the scattering spectrum is insensitive to the electron temperature perpendicular to the scattering vector, but its amplitude may be modulated. As a result, while the measured temperature correctly represents the electron distribution parallel to the scattering vector, the electron density may be underestimated when the perpendicular temperature is higher than the parallel temperature. Since the scattering spectrum at shorter wavelengths is greatly enhanced by the existence of drift, the diagnostic might be used to measure local electron current density in fusion plasmas.

  7. Exact Time-Dependent Exchange-Correlation Potential in Electron Scattering Processes

    NASA Astrophysics Data System (ADS)

    Suzuki, Yasumitsu; Lacombe, Lionel; Watanabe, Kazuyuki; Maitra, Neepa T.

    2017-12-01

    We identify peak and valley structures in the exact exchange-correlation potential of time-dependent density functional theory that are crucial for time-resolved electron scattering in a model one-dimensional system. These structures are completely missed by adiabatic approximations that, consequently, significantly underestimate the scattering probability. A recently proposed nonadiabatic approximation is shown to correctly capture the approach of the electron to the target when the initial Kohn-Sham state is chosen judiciously, and it is more accurate than standard adiabatic functionals but ultimately fails to accurately capture reflection. These results may explain the underestimation of scattering probabilities in some recent studies on molecules and surfaces.

  8. Vertical spatial coherence model for a transient signal forward-scattered from the sea surface

    USGS Publications Warehouse

    Yoerger, E.J.; McDaniel, S.T.

    1996-01-01

    The treatment of acoustic energy forward scattered from the sea surface, which is modeled as a random communications scatter channel, is the basis for developing an expression for the time-dependent coherence function across a vertical receiving array. The derivation of this model uses linear filter theory applied to the Fresnel-corrected Kirchhoff approximation in obtaining an equation for the covariance function for the forward-scattered problem. The resulting formulation is used to study the dependence of the covariance on experimental and environmental factors. The modeled coherence functions are then formed for various geometrical and environmental parameters and compared to experimental data.

  9. A novel hybrid scattering order-dependent variance reduction method for Monte Carlo simulations of radiative transfer in cloudy atmosphere

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Cui, Shengcheng; Yang, Jun; Gao, Haiyang; Liu, Chao; Zhang, Zhibo

    2017-03-01

    We present a novel hybrid scattering order-dependent variance reduction method to accelerate the convergence rate in both forward and backward Monte Carlo radiative transfer simulations involving highly forward-peaked scattering phase function. This method is built upon a newly developed theoretical framework that not only unifies both forward and backward radiative transfer in scattering-order-dependent integral equation, but also generalizes the variance reduction formalism in a wide range of simulation scenarios. In previous studies, variance reduction is achieved either by using the scattering phase function forward truncation technique or the target directional importance sampling technique. Our method combines both of them. A novel feature of our method is that all the tuning parameters used for phase function truncation and importance sampling techniques at each order of scattering are automatically optimized by the scattering order-dependent numerical evaluation experiments. To make such experiments feasible, we present a new scattering order sampling algorithm by remodeling integral radiative transfer kernel for the phase function truncation method. The presented method has been implemented in our Multiple-Scaling-based Cloudy Atmospheric Radiative Transfer (MSCART) model for validation and evaluation. The main advantage of the method is that it greatly improves the trade-off between numerical efficiency and accuracy order by order.
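
    The forward-truncation ingredient can be illustrated with a minimal sketch (Python; the Henyey-Greenstein phase function and all numerical values are assumptions standing in for the realistic cloud single-scattering properties used in MSCART). The forward cone with cos(theta) > mu_c is removed from the sampling and its probability mass is tracked separately, which is the basic mechanism behind truncation-based variance reduction:

      import numpy as np

      def hg_cdf(mu, g):
          """CDF of cos(theta) for the Henyey-Greenstein phase function (g != 0)."""
          return ((1 - g**2) / np.sqrt(1 + g**2 - 2 * g * mu) - (1 - g)) / (2 * g)

      def sample_truncated_hg(g, mu_c, rng, n):
          """Draw cos(theta) from HG restricted to cos(theta) <= mu_c (forward peak removed)."""
          xi = rng.uniform(0.0, hg_cdf(mu_c, g), size=n)    # uniform on [0, F(mu_c))
          s = (1 - g**2) / (1 - g + 2 * g * xi)             # invert the HG CDF
          return (1 + g**2 - s**2) / (2 * g)

      rng = np.random.default_rng(1)
      g, mu_c = 0.85, np.cos(np.radians(10.0))              # truncate the 10-degree forward cone
      truncated_fraction = 1.0 - hg_cdf(mu_c, g)            # mass moved into the "unscattered" peak
      mu = sample_truncated_hg(g, mu_c, rng, 100000)
      print(f"truncated (forward-peak) fraction: {truncated_fraction:.3f}")
      print(f"sampled cos(theta) range: [{mu.min():.3f}, {mu.max():.3f}]")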

  10. Design and implementation of coded aperture coherent scatter spectral imaging of cancerous and healthy breast tissue samples

    PubMed Central

    Lakshmanan, Manu N.; Greenberg, Joel A.; Samei, Ehsan; Kapadia, Anuj J.

    2016-01-01

    A scatter imaging technique for the differentiation of cancerous and healthy breast tissue in a heterogeneous sample is introduced in this work. Such a technique has potential utility in intraoperative margin assessment during lumpectomy procedures. In this work, we investigate the feasibility of the imaging method for tumor classification using Monte Carlo simulations and physical experiments. The coded aperture coherent scatter spectral imaging technique was used to reconstruct three-dimensional (3-D) images of breast tissue samples acquired through a single-position snapshot acquisition, without rotation as is required in coherent scatter computed tomography. We perform a quantitative assessment of the accuracy of the cancerous voxel classification using Monte Carlo simulations of the imaging system; describe our experimental implementation of coded aperture scatter imaging; show the reconstructed images of the breast tissue samples; and present segmentations of the 3-D images in order to identify the cancerous and healthy tissue in the samples. From the Monte Carlo simulations, we find that coded aperture scatter imaging is able to reconstruct images of the samples and identify the distribution of cancerous and healthy tissues (i.e., fibroglandular, adipose, or a mix of the two) inside them with a cancerous voxel identification sensitivity, specificity, and accuracy of 92.4%, 91.9%, and 92.0%, respectively. From the experimental results, we find that the technique is able to identify cancerous and healthy tissue samples and reconstruct differential coherent scatter cross sections that are highly correlated with those measured by other groups using x-ray diffraction. Coded aperture scatter imaging has the potential to provide scatter images that automatically differentiate cancerous and healthy tissue inside samples within a time on the order of a minute per slice.

  12. Proposal and verification numerical simulation for a microwave forward scattering technique at upper hybrid resonance for the measurement of electron gyroscale density fluctuations in the electron cyclotron frequency range in magnetized plasmas

    NASA Astrophysics Data System (ADS)

    Kawamori, E.; Igami, H.

    2017-11-01

    A diagnostic technique for detecting the wave numbers of electron density fluctuations at electron gyro-scales in the electron cyclotron frequency range is proposed, and the validity of the idea is checked by means of a particle-in-cell (PIC) numerical simulation. The technique is a modified version of the scattering technique invented by Novik et al. [Plasma Phys. Controlled Fusion 36, 357-381 (1994)] and Gusakov et al. [Plasma Phys. Controlled Fusion 41, 899-912 (1999)]. The new method adopts forward scattering of injected extraordinary probe waves at the upper hybrid resonance layer instead of the backward scattering adopted by the original method, enabling the measurement of the wave numbers of fine-scale density fluctuations in the electron cyclotron frequency band by means of phase measurement of the scattered waves. The verification simulation with the PIC method shows that the technique has the potential to be applicable to the detection of electron gyro-scale fluctuations in laboratory plasmas if the upper hybrid resonance layer is accessible to the probe wave. The technique is a suitable means to detect electron Bernstein waves excited via linear mode conversion from electromagnetic waves in torus plasma experiments. The numerical simulations also reveal some problems that remain to be resolved, including the influence of nonlinear processes, such as parametric decay instability of the probe wave, on the scattering process.

  13. Reduction of Raman scattering and fluorescence from anvils in high pressure Raman scattering

    NASA Astrophysics Data System (ADS)

    Dierker, S. B.; Aronson, M. C.

    2018-05-01

    We describe a new design and use of a high pressure anvil cell that significantly reduces the Raman scattering and fluorescence from the anvils in high pressure Raman scattering experiments. The approach is particularly useful in Raman scattering studies of opaque, weakly scattering samples. The effectiveness of the technique is illustrated with measurements of two-magnon Raman scattering in La2CuO4.

  14. Application of electrically invisible antennas to the Modulated Scatterer Technique

    DOE PAGES

    Crocker, Dylan A.; Donnell, Kristen M.

    2015-09-16

    The modulated scatterer technique (MST) has shown promise for applications in microwave imaging, electric field mapping, and materials characterization. Traditionally, MST scatterers are dipoles centrally loaded with an element capable of modulation (e.g., a p-i-n diode). By modulating the load element, signals scattered from the MST scatterer are also modulated. However, due to the small size of such scatterers, it can be difficult to reliably detect the modulated signal. Increasing the modulation depth (MD; a parameter related to how well the scatterer modulates the scattered signal) may improve the detectability of the scattered signal. In an effort to improve the MD, the concept of electrically invisible antennas is applied to the design of MST scatterers. This paper presents simulations and measurements of MST scatterers that have been designed to be electrically invisible during the reverse bias state of the modulated element (a p-i-n diode in this case), while producing detectable scattering during the forward bias state (i.e., operating in an electrically visible state). Furthermore, the results using the new design show a significant improvement in the MD of the scattered signal as compared with a traditional MST scatterer (i.e., a dipole centrally loaded with a p-i-n diode).

  15. Relativistic Few-Body Hadronic Physics Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polyzou, Wayne

    2016-06-20

    The goal of this research proposal was to use "few-body" methods to understand the structure and reactions of systems of interacting hadrons (neutrons, protons, mesons, quarks) over a broad range of energy scales. Realistic mathematical models of few-hadron systems have the advantage that they are sufficiently simple that they can be solved with mathematically controlled errors. These systems are also simple enough that it is possible to perform complete, accurate experimental measurements on them. Comparison between theory and experiment puts strong constraints on the structure of the models. Even though these systems are "simple", both the experiments and the computations push the limits of technology. The important property of few-body systems is that the cluster property implies that the interactions that appear in few-body systems are identical to the interactions that appear in complicated many-body systems. Of particular interest are models that correctly describe physics at distance scales that are sensitive to the internal structure of the individual nucleons. The Heisenberg uncertainty principle implies that in order to be sensitive to physics on distance scales that are a fraction of the proton or neutron radius, a relativistic treatment of quantum mechanics is necessary. The research supported by this grant involved 30 years of effort devoted to studying all aspects of interacting two- and three-body systems. Realistic interactions were used to compute bound states of two- and three-nucleon, and two- and three-quark systems. Scattering observables for these systems were computed for a broad range of energies - from zero-energy scattering to few-GeV scattering, where experimental evidence of sub-nucleon degrees of freedom is beginning to appear. Benchmark calculations were produced which, when compared with calculations of other groups, provided an essential check on these complicated calculations. In addition to computing bound-state properties and scattering cross sections, we also computed electron scattering cross sections in few-nucleon and few-quark systems, which are sensitive to the electric currents in these systems. We produced the definitive review article on relativistic quantum mechanics, which has been used by many groups. In addition, we developed and tested many computational techniques that are used by other groups. Many of these techniques have applications in other areas of physics. The research benefited from collaborations with physicists from many different institutions and countries. It also involved working with seventeen undergraduate and graduate students.

  16. Heavy-quark production in gluon fusion at two loops in QCD

    NASA Astrophysics Data System (ADS)

    Czakon, M.; Mitov, A.; Moch, S.

    2008-07-01

    We present the two-loop virtual QCD corrections to the production of heavy quarks in gluon fusion. The results are exact in the limit when all kinematical invariants are large compared to the mass of the heavy quark up to terms suppressed by powers of the heavy-quark mass. Our derivation uses a simple relation between massless and massive QCD scattering amplitudes as well as a direct calculation of the massive amplitude at two loops. The results presented here together with those obtained previously for quark-quark scattering form important parts of the next-to-next-to-leading order QCD corrections to heavy-quark production in hadron-hadron collisions.

  17. Comment on the modified Beer-Lambert law for scattering media.

    PubMed

    Sassaroli, Angelo; Fantini, Sergio

    2004-07-21

    We present a concise overview of the modified Beer-Lambert law, which has been extensively used in the literature of near-infrared spectroscopy (NIRS) of scattering media. In particular, we discuss one form of the modified Beer-Lambert law that is commonly found in the literature and that is not strictly correct. However, this incorrect form of the modified Beer-Lambert law still leads to the correct expression for the changes in the continuous wave optical signal associated with changes in the absorption coefficient of the investigated medium. Here we propose a notation for the modified Beer-Lambert law that keeps the typical form commonly found in the literature without introducing any incorrect assumptions.
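
    For reference, the differential relation that does hold for continuous-wave intensity changes can be written as follows (standard NIRS notation with the natural-logarithm convention, assumed here rather than quoted from the comment): defining the attenuation A = ln(I_0/I),

      \[
        \Delta A \;\approx\; \frac{\partial A}{\partial \mu_a}\,\Delta\mu_a
        \;=\; \langle L \rangle\,\Delta\mu_a
        \;=\; \mathrm{DPF}\cdot d\cdot \Delta\mu_a ,
      \]

    where μ_a is the absorption coefficient of the medium, ⟨L⟩ = ∂A/∂μ_a is the mean pathlength of the detected photons, d is the source-detector distance, and DPF = ⟨L⟩/d is the differential pathlength factor; it is this relation for changes, rather than an absolute Beer-Lambert-type expression for A itself, that remains strictly valid in a scattering medium.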

  18. First Order Statistics of Speckle around a Scatterer Volume Density Edge and Edge Detection in Ultrasound Images.

    NASA Astrophysics Data System (ADS)

    Li, Yue

    1990-01-01

    Ultrasonic imaging plays an important role in medical imaging, but the images exhibit a granular structure commonly known as speckle. The speckle tends to mask the presence of low-contrast lesions and reduces the ability of a human observer to resolve fine details. Our interest in this research is to examine the problem of edge detection and to develop methods for improving the visualization of organ boundaries and tissue inhomogeneity edges. An edge in an image can be formed either by an acoustic impedance change or by a scatterer volume density change (or both). The echoes produced by these two kinds of edges have different properties. In this work, it has been proved that, under certain conditions, the echo from a scatterer volume density edge is the Hilbert transform of the echo from a rough impedance boundary (except for a constant). This result can be used for choosing the correct signal to transmit to optimize the performance of edge detectors and for characterizing an edge. The signal-to-noise ratio of the echo produced by a scatterer volume density edge is also obtained. It is found that: (1) by transmitting a signal with a high bandwidth ratio and a low center frequency, one can obtain a higher signal-to-noise ratio; (2) for large-area edges, the farther the transducer is from the edge, the larger the signal-to-noise ratio, whereas for small-area edges, the nearer the transducer is to the edge, the larger the signal-to-noise ratio - these results enable us to maximize the signal-to-noise ratio by adjusting these parameters; (3) the signal-to-noise ratio is related not only to the ratio of scatterer volume densities at the edge, but also to the absolute value of the scatterer volume densities. Some of these results have been confirmed through simulation and experiment. Different edge detection methods have been used to detect simulated scatterer volume density edges and compare their performance. A so-called interlaced array method has been developed for speckle reduction in images formed by the synthetic aperture focussing technique, and experiments have been done to evaluate its performance.
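
    The Hilbert-transform relationship stated above is easy to reproduce numerically. The sketch below (Python, using scipy; the Gaussian-windowed sinusoid is a hypothetical stand-in for the transmitted pulse, not a waveform from the thesis) builds the impedance-boundary echo, derives the volume-density-edge echo as its Hilbert transform, and checks two defining properties - identical envelope and a 90-degree carrier phase shift (orthogonality):

      import numpy as np
      from scipy.signal import hilbert

      fs, f0 = 50e6, 3.5e6                        # sampling rate and center frequency [Hz]
      t = np.arange(-3e-6, 3e-6, 1.0 / fs)
      r_impedance = np.cos(2 * np.pi * f0 * t) * np.exp(-(t / 0.8e-6) ** 2)

      analytic = hilbert(r_impedance)             # analytic signal r + j*H{r}
      r_density_edge = analytic.imag              # Hilbert transform of the impedance echo

      # Same envelope for both echo types (up to numerical edge effects) ...
      env_imp = np.abs(analytic)
      env_den = np.abs(hilbert(r_density_edge))
      print("max envelope difference:", np.max(np.abs(env_imp - env_den)))
      # ... but the waveforms are in quadrature (zero-lag correlation ~ 0)
      corr = np.dot(r_impedance, r_density_edge) / (
          np.linalg.norm(r_impedance) * np.linalg.norm(r_density_edge))
      print("normalized zero-lag correlation:", corr)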

  19. Solution of electromagnetic scattering problems using time domain techniques

    NASA Technical Reports Server (NTRS)

    Britt, Charles L.

    1989-01-01

    New methods are developed to calculate the electromagnetic diffraction or scattering characteristics of objects of arbitrary material and shape. The methods extend the efforts of previous researchers in the use of finite-difference and pulse response techniques. Examples are given of the scattering from infinite conducting and nonconducting cylinders, open channel, sphere, cone, cone sphere, coated disk, open boxes, and open and closed finite cylinders with axially incident waves.

  20. Radiance and polarization of multiple scattered light from haze and clouds.

    PubMed

    Kattawar, G W; Plass, G N

    1968-08-01

    The radiance and polarization of multiply scattered light are calculated from the Stokes vectors by a Monte Carlo method. The exact scattering matrix for a typical haze and for a cloud whose spherical drops have an average radius of 12 μm is calculated from Mie theory. The Stokes vector is transformed in a collision by this scattering matrix and the rotation matrix. The two angles that define the photon direction after scattering are chosen by a random process that correctly simulates the actual distribution functions for both angles. The Monte Carlo results for Rayleigh scattering compare favorably with well-known tabulated results. Curves are given of the reflected and transmitted radiances and polarizations for both the haze and cloud models and for several solar angles, optical thicknesses, and surface albedos. The dependence on these various parameters is discussed.
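
    One of the steps described above - choosing the scattering angles from their actual distribution functions - can be sketched for the Rayleigh case as follows (Python; rejection sampling and a uniform azimuth are used here as a simple unpolarized illustration, whereas the paper's full Stokes-vector treatment makes the azimuthal density depend on the polarization state):

      import numpy as np

      def sample_rayleigh_angles(rng, n):
          """Return (cos_theta, phi) samples for Rayleigh scattering of unpolarized light."""
          cos_theta = np.empty(n)
          filled = 0
          while filled < n:
              mu = rng.uniform(-1.0, 1.0, n - filled)
              # accept with probability proportional to the phase function, P(mu) ~ 1 + mu^2
              keep = rng.uniform(0.0, 1.0, n - filled) < (1.0 + mu**2) / 2.0
              k = int(keep.sum())
              cos_theta[filled:filled + k] = mu[keep]
              filled += k
          phi = rng.uniform(0.0, 2.0 * np.pi, n)
          return cos_theta, phi

      rng = np.random.default_rng(2)
      mu, phi = sample_rayleigh_angles(rng, 200000)
      print("mean cos(theta)   (expected ~0):  ", mu.mean())
      print("mean cos^2(theta) (expected ~0.4):", np.mean(mu**2))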
