Science.gov

Sample records for accurate attenuation correction

  1. Attenuation correction for small animal PET tomographs

    NASA Astrophysics Data System (ADS)

    Chow, Patrick L.; Rannou, Fernando R.; Chatziioannou, Arion F.

    2005-04-01

    Attenuation correction is one of the important corrections required for quantitative positron emission tomography (PET). This work compares the quantitative accuracy of attenuation correction using a simple global scale factor with traditional transmission-based methods acquired either with a small animal PET or a small animal x-ray computed tomography (CT) scanner. Two phantoms (one mouse-sized and one rat-sized) and two animal subjects (one mouse and one rat) were scanned in CTI Concorde Microsystems' microPET® Focus™ for emission and transmission data and in ImTek's MicroCAT™ II for transmission data. PET emission image values were calibrated against a scintillation well counter. Results indicate that the scale factor method of attenuation correction places the average measured activity concentration at about the expected value, without correcting for the cupping artefact from attenuation. Noise analysis in the phantom studies with the PET-based method shows that noise in the transmission data increases the noise in the corrected emission data. The CT-based method was accurate and delivered low-noise images suitable for both PET data correction and PET tracer localization.
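
    The CT-based route mentioned above typically maps CT numbers (HU) to 511 keV linear attenuation coefficients with a bilinear scaling. A minimal sketch, with approximate coefficients for a ~120 kVp scan (the breakpoint and slopes are illustrative values, scanner- and kVp-dependent, and are not taken from this paper):

```python
def hu_to_mu_511(hu):
    """Bilinear CT-number to 511 keV attenuation conversion (cm^-1).
    Below the breakpoint, voxels are treated as air-water mixtures;
    above it, as water-bone mixtures with a shallower slope because
    the photoelectric contribution of bone is smaller at 511 keV.
    Coefficients are approximate values for ~120 kVp scans."""
    if hu <= 47:
        return 9.6e-5 * (hu + 1000.0)        # air-water mixture segment
    return 5.1e-5 * (hu + 1000.0) + 4.71e-2  # water-bone mixture segment

print(round(hu_to_mu_511(0), 4))  # water: 0.096 cm^-1
```

    The two segments meet (to within rounding) at the breakpoint, so the map is effectively continuous across soft tissue and bone.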

  2. Accurate adiabatic correction in the hydrogen molecule

    NASA Astrophysics Data System (ADS)

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-01

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H₂, HD, HT, D₂, DT, and T₂ has been determined. For the ground state of H₂ the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present-day theoretical predictions for the rovibrational levels.

  3. Accurate adiabatic correction in the hydrogen molecule

    SciTech Connect

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H₂, HD, HT, D₂, DT, and T₂ has been determined. For the ground state of H₂ the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present-day theoretical predictions for the rovibrational levels.

  4. Accurate adiabatic correction in the hydrogen molecule.

    PubMed

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H₂, HD, HT, D₂, DT, and T₂ has been determined. For the ground state of H₂ the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present-day theoretical predictions for the rovibrational levels. PMID:25494728

  5. Adjustments to the correction for attenuation.

    PubMed

    Wetcher-Hendricks, Debra

    2006-06-01

    With respect to the often-present covariance between error terms of correlated variables, D. W. Zimmerman and R. H. Williams's (1977) adjusted correction for attenuation estimates the strength of the pairwise correlation between true scores without assuming independence of error scores. This article focuses on the derivation and analysis of formulas that perform the same function for partial and part correlation coefficients. Values produced by these formulas lie closer to the actual true-score coefficient than do the observed-score coefficients or those obtained by using C. Spearman's (1904) correction for attenuation. The new versions of the formulas thus allow analysts to use hypothetical values for error-score correlations to estimate values for the partial and part correlations between true scores while disregarding the independence-of-errors assumption.
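
    Both the classical correction and the adjustment it extends are compact enough to sketch directly. A minimal illustration (function names and the numeric example are our own; the adjusted form is written in the spirit of Zimmerman and Williams's result and reduces to Spearman's formula when the error correlation is zero):

```python
import math

def spearman_disattenuate(r_xy, rel_x, rel_y):
    """Classical Spearman (1904) correction for attenuation:
    true-score correlation = observed r / sqrt(rel_x * rel_y),
    where rel_x, rel_y are the reliabilities of the two measures."""
    return r_xy / math.sqrt(rel_x * rel_y)

def adjusted_disattenuate(r_xy, rel_x, rel_y, r_e):
    """Adjustment allowing a correlation r_e between error scores:
    the error-covariance contribution is removed from the observed
    correlation before disattenuating. With r_e = 0 this reduces to
    Spearman's formula. (Illustrative form, not quoted from the paper.)"""
    num = r_xy - r_e * math.sqrt((1.0 - rel_x) * (1.0 - rel_y))
    return num / math.sqrt(rel_x * rel_y)

# Example: observed r = 0.42 with reliabilities 0.8 and 0.7
print(round(spearman_disattenuate(0.42, 0.8, 0.7), 3))  # 0.561
```

    With a positive hypothesized error correlation, the adjusted estimate falls below the classical one, which is the direction of the effect the abstract describes.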

  6. [Usefulness of attenuation correction with transmission source in myocardial SPECT].

    PubMed

    Murakawa, Keizo; Katafuchi, Tetsuro; Nishimura, Yoshihiro; Enomoto, Naoyuki; Sago, Masayoshi; Oka, Hisashi

    2006-01-20

    Attenuation correction in SPECT has been used for uniformly absorptive objects like the head. On the other hand, it has seldom been applied to nonuniform absorptive objects like the heart and surrounding lungs because of the difficulty and inaccuracy of data processing. However, since attenuation correction using a transmission source recently became practical, we were able to apply this method to a nonuniform absorptive object. Therefore, we evaluated the usefulness of this attenuation correction system with a transmission source in myocardial SPECT. The dose linearity, defect/normal ratio using a myocardial phantom, and myocardial count distribution in clinical cases were examined with and without the attenuation correction system. We found that all data processed with attenuation correction were better than those without attenuation correction. For example, in myocardial count distribution, while there was a difference between men and women without attenuation correction, which was considered to be caused by differences in body shape, after processing with attenuation correction, myocardial count distribution was almost the same in all cases. In conclusion, these results suggested that attenuation correction with a transmission source was useful in myocardial SPECT.

  7. An MRI-based attenuation correction method for combined PET/MRI applications

    NASA Astrophysics Data System (ADS)

    Fei, Baowei; Yang, Xiaofeng; Wang, Hesheng

    2009-02-01

    We are developing MRI-based attenuation correction methods for PET images. PET has high sensitivity but relatively low resolution and little anatomic detail. MRI can provide excellent anatomical structures with high resolution and high soft tissue contrast. MRI can be used to delineate tumor boundaries and to provide an anatomic reference for PET, thereby improving quantitation of PET data. Combined PET/MRI can offer metabolic, functional and anatomic information and thus can provide a powerful tool to study the mechanism of a variety of diseases. Accurate attenuation correction represents an essential component for the reconstruction of artifact-free, quantitative PET images. Unfortunately, the present design of hybrid PET/MRI does not offer measured attenuation correction using a transmission scan. This problem may be solved by deriving attenuation maps from corresponding anatomic MR images. Our approach combines image registration, classification, and attenuation correction in a single scheme. MR images and the preliminary reconstruction of PET data are first registered using our automatic registration method. MR images are then classified into different tissue types using our multiscale fuzzy C-means classification method. The voxels of classified tissue types are assigned theoretical tissue-dependent attenuation coefficients to generate attenuation correction factors. Corrected PET emission data are then reconstructed using a three-dimensional filtered back projection method and an ordered-subset expectation-maximization method. Results from simulated images and phantom data demonstrated that our attenuation correction method can improve PET data quantitation and that it can be particularly useful for combined PET/MRI applications.
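
    The classify-then-assign step lends itself to a compact sketch: each tissue class along a line of response receives a nominal 511 keV linear attenuation coefficient, and the attenuation correction factor is the exponentiated line integral. The coefficients below are approximate textbook values for illustration, not the ones used by the authors:

```python
import numpy as np

# Illustrative 511 keV linear attenuation coefficients in cm^-1
# (approximate textbook values, not taken from the paper).
MU_511 = {"air": 0.0, "soft_tissue": 0.096, "bone": 0.151}

def attenuation_correction_factor(tissue_labels, step_cm):
    """ACF for one line of response: the exponentiated line integral
    of mu over the classified voxels sampled along the LOR."""
    mu = np.array([MU_511[t] for t in tissue_labels])
    return float(np.exp(np.sum(mu) * step_cm))

# 10 cm of soft tissue sampled at 0.5 cm steps: exp(0.96) ≈ 2.61
print(attenuation_correction_factor(["soft_tissue"] * 20, 0.5))
```

    Multiplying each measured coincidence count by its ACF (or folding the ACFs into the system model) yields the attenuation-corrected emission data.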

  8. Significance of accurate diffraction corrections for the second harmonic wave in determining the acoustic nonlinearity parameter

    SciTech Connect

    Jeong, Hyunjo; Zhang, Shuzeng; Li, Xiongbing; Barnard, Dan

    2015-09-15

    The accurate measurement of the acoustic nonlinearity parameter β for fluids or solids generally requires making corrections for diffraction effects due to the finite-size geometry of the transmitter and receiver. These effects are well known in linear acoustics, while those for second harmonic waves have not been well addressed and therefore not properly considered in previous studies. In this work, we explicitly define the attenuation and diffraction corrections using the multi-Gaussian beam (MGB) equations which were developed from the quasilinear solutions of the KZK equation. The effects of making these corrections are examined through the simulation of β determination in water. Diffraction corrections are found to have more significant effects than attenuation corrections, and the β values of water can be estimated experimentally with less than 5% error when the exact second harmonic diffraction corrections are used together with the negligible attenuation correction effects, on the basis of a linear frequency dependence between attenuation coefficients, α₂ ≃ 2α₁.
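
    For orientation, the quantity being corrected is commonly obtained from the plane-wave relation between the fundamental and second-harmonic amplitudes, β = 8·A₂/(k²·z·A₁²); the diffraction and attenuation corrections the paper derives enter as multiplicative factors on this estimate. A sketch under that assumption (the default sound speed is a nominal value for water, and the correction factors are deliberately omitted):

```python
import math

def beta_plane_wave(A1, A2, freq, z, c=1482.0):
    """Plane-wave estimate of the acoustic nonlinearity parameter:
    beta = 8 * A2 / (k**2 * z * A1**2), with k = 2*pi*freq/c.
    A1, A2: fundamental and second-harmonic amplitudes at range z;
    c defaults to a nominal sound speed for water (m/s). Diffraction
    and attenuation corrections would multiply this estimate."""
    k = 2.0 * math.pi * freq / c
    return 8.0 * A2 / (k ** 2 * z * A1 ** 2)
```

    Because the relation is exactly invertible, a synthetic A₂ generated from a chosen β is recovered by the function, which makes the sketch easy to sanity-check.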

  9. Accurately Detecting Students' Lies regarding Relational Aggression by Correctional Instructions

    ERIC Educational Resources Information Center

    Dickhauser, Oliver; Reinhard, Marc-Andre; Marksteiner, Tamara

    2012-01-01

    This study investigates the effect of correctional instructions when detecting lies about relational aggression. Based on models from the field of social psychology, we predict that correctional instruction will lead to a less pronounced lie bias and to more accurate lie detection. Seventy-five teachers received videotapes of students' true denial…

  10. Attenuation correction for freely moving small animal brain PET studies based on a virtual scanner geometry

    NASA Astrophysics Data System (ADS)

    Angelis, G. I.; Kyme, A. Z.; Ryder, W. J.; Fulton, R. R.; Meikle, S. R.

    2014-10-01

    Attenuation correction in positron emission tomography brain imaging of freely moving animals is a very challenging problem, since the torso of the animal is often within the field of view and introduces a non-negligible attenuating factor that can degrade the quantitative accuracy of the reconstructed images. In the context of unrestrained small animal imaging, estimation of the attenuation correction factors without the need for a transmission scan is highly desirable. An attractive approach that avoids the need for a transmission scan involves the generation of the hull of the animal’s head based on the reconstructed motion corrected emission images. However, this approach ignores the attenuation introduced by the animal’s torso. In this work, we propose a virtual scanner geometry which moves in synchrony with the animal’s head and discriminates between those events that traversed only the animal’s head (and therefore can be accurately compensated for attenuation) and those that might have also traversed the animal’s torso. For each recorded pose of the animal’s head a new virtual scanner geometry is defined and therefore a new system matrix must be calculated, leading to a time-varying system matrix. This new approach was evaluated on phantom data acquired on the microPET Focus 220 scanner using a custom-made phantom and step-wise motion. Results showed that when the animal’s torso is within the FOV and not appropriately accounted for during attenuation correction it can lead to bias of up to 10%. Attenuation correction was more accurate when the virtual scanner was employed, leading to improved quantitative estimates (bias < 2%), without the need to account for the attenuation introduced by the extraneous compartment. Although the proposed method requires increased computational resources, it can provide a reliable approach towards quantitatively accurate attenuation correction for freely moving animal studies.

  11. GPU-based 3D SAFT reconstruction including attenuation correction

    NASA Astrophysics Data System (ADS)

    Kretzek, E.; Hopp, T.; Ruiter, N. V.

    2015-03-01

    3D Ultrasound Computer Tomography (3D USCT) promises reproducible high-resolution images for early detection of breast tumors. The KIT prototype provides three different modalities: reflectivity, speed of sound, and attenuation. The reflectivity images are reconstructed using a Synthetic Aperture Focusing Technique (SAFT) algorithm. For high-resolution reflectivity images with spatially homogeneous reflectivity, attenuation correction is necessary. In this paper we present a GPU-accelerated attenuation correction for 3D USCT and evaluate the method by means of image quality metrics, i.e. absolute error, contrast, and spatially homogeneous reflectivity. A threshold for attenuation correction was introduced to preserve a high contrast. Simulated and in-vivo data were used for analysis of the image quality. Attenuation correction increases the image quality by improving spatially homogeneous reflectivity by 25%. This leads to a factor of 2.8 higher contrast for in-vivo data.

  12. CT-based attenuation and scatter correction compared with uniform attenuation correction in brain perfusion SPECT imaging for dementia

    NASA Astrophysics Data System (ADS)

    Gillen, Rebecca; Firbank, Michael J.; Lloyd, Jim; O'Brien, John T.

    2015-09-01

    This study investigated if the appearance and diagnostic accuracy of HMPAO brain perfusion SPECT images could be improved by using CT-based attenuation and scatter correction compared with the uniform attenuation correction method. A cohort of subjects who were clinically categorized as Alzheimer’s Disease (n = 38), Dementia with Lewy Bodies (n = 29) or healthy normal controls (n = 30), underwent SPECT imaging with Tc-99m HMPAO and a separate CT scan. The SPECT images were processed using: (a) correction map derived from the subject’s CT scan or (b) the Chang uniform approximation for correction or (c) no attenuation correction. Images were visually inspected. The ratios between key regions of interest known to be affected or spared in each condition were calculated for each correction method, and the differences between these ratios were evaluated. The images produced using the different corrections were noted to be visually different. However, ROI analysis found similar statistically significant differences between control and dementia groups and between AD and DLB groups regardless of the correction map used. We did not identify an improvement in diagnostic accuracy in images which were corrected using CT-based attenuation and scatter correction, compared with those corrected using a uniform correction map.
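
    The Chang uniform approximation referenced above can be sketched directly: assume a single attenuation coefficient inside the object support, average the attenuation factor over projection angles for each pixel, and invert. The implementation below is a simplified first-order version for one slice, with unit-length ray stepping (our own illustration, not the study's code):

```python
import numpy as np

def chang_correction_map(mask, mu, n_angles=32):
    """First-order Chang correction factors for a uniformly attenuating
    slice. mask: 2D boolean object support; mu: uniform linear
    attenuation coefficient in units of 1/pixel. For each pixel the
    factor exp(-mu * path length to the boundary) is averaged over
    projection angles and inverted. Simple ray stepping; illustrative."""
    ny, nx = mask.shape
    cmap = np.ones((ny, nx))
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for y in range(ny):
        for x in range(nx):
            if not mask[y, x]:
                continue
            mean_att = 0.0
            for th in angles:
                dx, dy = np.cos(th), np.sin(th)
                px, py, depth = float(x), float(y), 0.0
                while True:
                    ix, iy = int(np.floor(px)), int(np.floor(py))
                    if not (0 <= ix < nx and 0 <= iy < ny):
                        break
                    if mask[iy, ix]:
                        depth += 1.0  # unit step length inside object
                    px += dx
                    py += dy
                mean_att += np.exp(-mu * depth) / n_angles
            cmap[y, x] = 1.0 / mean_att  # multiplicative correction
    return cmap
```

    Multiplying a reconstructed slice by this map applies the uniform correction; CT-based correction replaces the single μ with a measured, spatially varying map.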

  13. CT-based attenuation and scatter correction compared with uniform attenuation correction in brain perfusion SPECT imaging for dementia.

    PubMed

    Gillen, Rebecca; Firbank, Michael J; Lloyd, Jim; O'Brien, John T

    2015-09-01

    This study investigated if the appearance and diagnostic accuracy of HMPAO brain perfusion SPECT images could be improved by using CT-based attenuation and scatter correction compared with the uniform attenuation correction method. A cohort of subjects who were clinically categorized as Alzheimer's Disease (n = 38), Dementia with Lewy Bodies (n = 29) or healthy normal controls (n = 30), underwent SPECT imaging with Tc-99m HMPAO and a separate CT scan. The SPECT images were processed using: (a) correction map derived from the subject's CT scan or (b) the Chang uniform approximation for correction or (c) no attenuation correction. Images were visually inspected. The ratios between key regions of interest known to be affected or spared in each condition were calculated for each correction method, and the differences between these ratios were evaluated. The images produced using the different corrections were noted to be visually different. However, ROI analysis found similar statistically significant differences between control and dementia groups and between AD and DLB groups regardless of the correction map used. We did not identify an improvement in diagnostic accuracy in images which were corrected using CT-based attenuation and scatter correction, compared with those corrected using a uniform correction map.

  14. Markerless attenuation correction for carotid MRI surface receiver coils in combined PET/MR imaging

    NASA Astrophysics Data System (ADS)

    Eldib, Mootaz; Bini, Jason; Robson, Philip M.; Calcagno, Claudia; Faul, David D.; Tsoumpas, Charalampos; Fayad, Zahi A.

    2015-06-01

    The purpose of the study was to evaluate the effect of attenuation of MR coils on quantitative carotid PET/MR exams. Additionally, an automated attenuation correction method for flexible carotid MR coils was developed and evaluated. The attenuation of the carotid coil was measured by imaging a uniform water phantom injected with 37 MBq of 18F-FDG in a combined PET/MR scanner for 24 min with and without the coil. In the same session, an ultra-short echo time (UTE) image of the coil on top of the phantom was acquired. Using a combination of rigid and non-rigid registration, a CT-based attenuation map was registered to the UTE image of the coil for attenuation and scatter correction. After phantom validation, the effect of the carotid coil attenuation and the attenuation correction method were evaluated in five subjects. Phantom studies indicated that the overall loss of PET counts due to the coil was 6.3% with local region-of-interest (ROI) errors reaching up to 18.8%. Our registration method to correct for attenuation from the coil decreased the global error and local error (ROI) to 0.8% and 3.8%, respectively. The proposed registration method accurately captured the location and shape of the coil with a maximum spatial error of 2.6 mm. Quantitative analysis in human studies correlated with the phantom findings, but was dependent on the size of the ROI used in the analysis. MR coils result in significant error in PET quantification and thus attenuation correction is needed. The proposed strategy provides an operator-free method for attenuation and scatter correction for a flexible MRI carotid surface coil for routine clinical use.

  15. Markerless attenuation correction for carotid MRI surface receiver coils in combined PET/MR imaging.

    PubMed

    Eldib, Mootaz; Bini, Jason; Robson, Philip M; Calcagno, Claudia; Faul, David D; Tsoumpas, Charalampos; Fayad, Zahi A

    2015-06-21

    The purpose of the study was to evaluate the effect of attenuation of MR coils on quantitative carotid PET/MR exams. Additionally, an automated attenuation correction method for flexible carotid MR coils was developed and evaluated. The attenuation of the carotid coil was measured by imaging a uniform water phantom injected with 37 MBq of 18F-FDG in a combined PET/MR scanner for 24 min with and without the coil. In the same session, an ultra-short echo time (UTE) image of the coil on top of the phantom was acquired. Using a combination of rigid and non-rigid registration, a CT-based attenuation map was registered to the UTE image of the coil for attenuation and scatter correction. After phantom validation, the effect of the carotid coil attenuation and the attenuation correction method were evaluated in five subjects. Phantom studies indicated that the overall loss of PET counts due to the coil was 6.3% with local region-of-interest (ROI) errors reaching up to 18.8%. Our registration method to correct for attenuation from the coil decreased the global error and local error (ROI) to 0.8% and 3.8%, respectively. The proposed registration method accurately captured the location and shape of the coil with a maximum spatial error of 2.6 mm. Quantitative analysis in human studies correlated with the phantom findings, but was dependent on the size of the ROI used in the analysis. MR coils result in significant error in PET quantification and thus attenuation correction is needed. The proposed strategy provides an operator-free method for attenuation and scatter correction for a flexible MRI carotid surface coil for routine clinical use.

  16. Impact of MR based attenuation correction on neurological PET studies

    PubMed Central

    Su, Yi; Rubin, Brian B.; McConathy, Jonathan; Laforest, Richard; Qi, Jing; Sharma, Akash; Priatna, Agus; Benzinger, Tammie L.S.

    2016-01-01

    Hybrid positron emission tomography (PET) and magnetic resonance (MR) scanners have become a reality in recent years, with the benefits of reduced radiation exposure, shorter imaging time, and potential advantages in quantification. Appropriate attenuation correction remains a challenge: biases in PET activity measurements have been demonstrated with the current MR-based attenuation correction (MRAC) technique. We aimed to investigate the impact of the standard MRAC technique on the clinical and research utility of a PET/MR hybrid scanner for amyloid imaging. Methods: Florbetapir scans were obtained on 40 participants on a Biograph mMR hybrid scanner with simultaneous MR acquisition. PET images were reconstructed using both MR- and CT-derived attenuation maps. Quantitative analysis was performed for both datasets to assess the impact of MR-based attenuation correction on absolute PET activity measurements as well as the target-to-reference ratio (SUVR). Clinical assessment was also performed by a nuclear medicine physician to determine amyloid status based on the criteria in the FDA prescribing information for florbetapir. Results: MR-based attenuation correction led to underestimation of PET activity for most of the brain, with a small overestimation for deep brain regions, and an overestimation of SUVR values with a cerebellar reference. SUVR measurements obtained from the two attenuation correction methods were strongly correlated. Clinical assessment of amyloid status resulted in identical classification as positive or negative regardless of the attenuation correction method. Conclusions: MR-based attenuation correction causes biases in quantitative measurements. The biases may be accounted for by a linear model, although the spatial variation cannot be easily modelled. The quantitative differences, however, did not affect clinical assessment as positive or negative. PMID:26823562
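
    The linear bias model suggested in the conclusions amounts to fitting CT-derived values as a linear function of MR-derived ones and applying the fit as a correction. A sketch with made-up SUVR pairs (illustrative numbers only, not study data):

```python
import numpy as np

# Hypothetical paired SUVR values from MR- and CT-based attenuation
# correction (illustrative numbers only, not study data).
suvr_mr = np.array([1.05, 1.20, 1.45, 0.95, 1.60])
suvr_ct = np.array([1.00, 1.13, 1.35, 0.91, 1.49])

# Fit suvr_ct ≈ a * suvr_mr + b by least squares; the fitted line can
# then be applied to map MRAC measurements toward the CTAC scale.
a, b = np.polyfit(suvr_mr, suvr_ct, 1)
suvr_corrected = a * suvr_mr + b
```

    A global fit like this cannot capture the spatial variation the abstract notes, which is why the correction only accounts for the average bias.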

  17. Inferential Procedures for Correlation Coefficients Corrected for Attenuation.

    ERIC Educational Resources Information Center

    Hakstian, A. Ralph; And Others

    1988-01-01

    A model and computation procedure based on classical test score theory are presented for determination of a correlation coefficient corrected for attenuation due to unreliability. Delta and Monte Carlo method applications are discussed. A power analysis revealed no serious loss in efficiency resulting from correction for attenuation. (TJH)

  18. Is non-attenuation-corrected PET inferior to body attenuation-corrected PET or PET/CT in lung cancer?

    NASA Astrophysics Data System (ADS)

    Maintas, Dimitris; Houzard, Claire; Ksyar, Rachid; Mognetti, Thomas; Maintas, Catherine; Scheiber, Christian; Itti, Roland

    2006-12-01

    It is considered that one of the great strengths of PET imaging is the ability to correct for body attenuation, which enables better lesion uptake quantification and better quality of PET images. The aim of this work is to compare the sensitivity of non-attenuation-corrected (NAC) PET images with that of gamma-photon attenuation-corrected (GPAC) and CT attenuation-corrected (CTAC) images in the detection and staging of lung cancer. We have studied 66 patients undergoing PET/CT examinations for detection and staging of NSC lung cancer. The patients were injected with 18F-FDG (5 MBq/kg) under fasting conditions and the examination was started 60 min later. Transmission data were acquired by a spiral CT X-ray tube and by a gamma-emitting Cs-137 source, and were used for patient body attenuation correction without correction for respiratory motion. In 55 of 66 patients we performed both attenuation correction procedures and in 11 patients only CT attenuation correction. In seven patients with solitary nodules PET was negative and in 59 patients with lung cancer PET/CT was positive for pulmonary or other localization. In the group of 55 patients we found 165 areas of focal increased 18F-FDG uptake in NAC, 165 in CTAC and 164 in GPAC PET images. In the patients with only CTAC we found 58 areas of increased 18F-FDG uptake on NAC and 58 lesions on CTAC. In the patients with positive PET we found 223 areas of focal increased uptake in NAC and 223 areas in CTAC images. The sensitivity of NAC was equal to the sensitivity of CTAC and GPAC images. The visualization of peripheral lesions was better in NAC images, while lesions were better localized in attenuation-corrected images. In three lesions of the thorax the localization was better in GPAC and fused images than in CTAC images.

  19. Evaluation of attenuation and scatter correction requirements in small animal PET and SPECT imaging

    NASA Astrophysics Data System (ADS)

    Konik, Arda Bekir

    ) digital phantoms. In addition, PET projection files for different sizes of MOBY phantoms were reconstructed in six different conditions, including attenuation and scatter corrections. Selected regions were analyzed for these different reconstruction conditions and object sizes. Finally, real mouse data from the real version of the same small animal PET scanner we modeled in our simulations were analyzed for similar reconstruction conditions. Both our IDL and GATE simulations showed that, for small animal PET and SPECT, even the smallest objects (~2 cm diameter) showed ~15% error when both attenuation and scatter were not corrected. However, a simple attenuation correction using a uniform attenuation map and an object boundary obtained from emission data significantly reduces this error in non-lung regions (~1% for the smallest size and ~6% for the largest size). In lungs, emission values were overestimated when only attenuation correction was performed. In addition, we did not observe any significant improvement between the use of a uniform or an actual attenuation map (e.g., only ~0.5% for the largest size in PET studies). The scatter correction was not significant for smaller objects, but became increasingly important for larger objects. These results suggest that for all mouse sizes and most rat sizes, uniform attenuation correction can be performed using emission data only. For smaller sizes up to ~4 cm, scatter correction is not required even in lung regions. For larger sizes, if accurate quantitation is needed, an additional transmission scan may be required to estimate an accurate attenuation map for both attenuation and scatter corrections.

  20. An Accurate Temperature Correction Model for Thermocouple Hygrometers

    PubMed Central

    Savage, Michael J.; Cass, Alfred; de Jager, James M.

    1982-01-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241

  1. An accurate temperature correction model for thermocouple hygrometers.

    PubMed

    Savage, M J; Cass, A; de Jager, J M

    1982-02-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38 degrees C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25 degrees C, if the calibration slopes are corrected for temperature.

  2. Attenuation correction for the large non-human primate brain imaging using microPET.

    PubMed

    Naidoo-Variawa, S; Lehnert, W; Kassiou, M; Banati, R; Meikle, S R

    2010-04-21

    Assessment of the biodistribution and pharmacokinetics of radiopharmaceuticals in vivo is often performed on animal models of human disease prior to their use in humans. The baboon brain is physiologically and neuro-anatomically similar to the human brain and is therefore a suitable model for evaluating novel CNS radioligands. We previously demonstrated the feasibility of performing baboon brain imaging on a dedicated small animal PET scanner provided that the data are accurately corrected for degrading physical effects such as photon attenuation in the body. In this study, we investigated factors affecting the accuracy and reliability of alternative attenuation correction strategies when imaging the brain of a large non-human primate (Papio hamadryas) using the microPET Focus 220 animal scanner. For measured attenuation correction, the best bias versus noise performance was achieved using a (57)Co transmission point source with a 4% energy window. The optimal energy window for a (68)Ge transmission source operating in singles acquisition mode was 20%, independent of the source strength, providing bias-noise performance almost as good as for (57)Co. For both transmission sources, doubling the acquisition time had minimal impact on the bias-noise trade-off for corrected emission images, despite observable improvements in reconstructed attenuation values. In a [(18)F]FDG brain scan of a female baboon, both measured attenuation correction strategies achieved good results and similar SNR, while segmented attenuation correction (based on uncorrected emission images) resulted in appreciable regional bias in deep grey matter structures and the skull. We conclude that measured attenuation correction using a single pass (57)Co (4% energy window) or (68)Ge (20% window) transmission scan achieves an excellent trade-off between bias and propagation of noise when imaging the large non-human primate brain with a microPET scanner.

  4. Attenuation correction for the large non-human primate brain imaging using microPET

    NASA Astrophysics Data System (ADS)

    Naidoo-Variawa, S.; Lehnert, W.; Kassiou, M.; Banati, R.; Meikle, S. R.

    2010-04-01

    Assessment of the biodistribution and pharmacokinetics of radiopharmaceuticals in vivo is often performed on animal models of human disease prior to their use in humans. The baboon brain is physiologically and neuro-anatomically similar to the human brain and is therefore a suitable model for evaluating novel CNS radioligands. We previously demonstrated the feasibility of performing baboon brain imaging on a dedicated small animal PET scanner provided that the data are accurately corrected for degrading physical effects such as photon attenuation in the body. In this study, we investigated factors affecting the accuracy and reliability of alternative attenuation correction strategies when imaging the brain of a large non-human primate (Papio hamadryas) using the microPET Focus 220 animal scanner. For measured attenuation correction, the best bias versus noise performance was achieved using a 57Co transmission point source with a 4% energy window. The optimal energy window for a 68Ge transmission source operating in singles acquisition mode was 20%, independent of the source strength, providing bias-noise performance almost as good as for 57Co. For both transmission sources, doubling the acquisition time had minimal impact on the bias-noise trade-off for corrected emission images, despite observable improvements in reconstructed attenuation values. In a [18F]FDG brain scan of a female baboon, both measured attenuation correction strategies achieved good results and similar SNR, while segmented attenuation correction (based on uncorrected emission images) resulted in appreciable regional bias in deep grey matter structures and the skull. We conclude that measured attenuation correction using a single pass 57Co (4% energy window) or 68Ge (20% window) transmission scan achieves an excellent trade-off between bias and propagation of noise when imaging the large non-human primate brain with a microPET scanner.
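The measured (transmission-based) correction these studies evaluate reduces, per line of response, to multiplying the emission data by the exponential of the attenuation line integral. A minimal sketch, with the water-equivalent mu value and the sampling step as illustrative assumptions:

```python
import math

def attenuation_correction_factor(mu_values, step_cm):
    """Attenuation correction factor (ACF) for one line of response.

    The measured coincidence count is attenuated by
    exp(-integral of mu along the LOR); multiplying the measured
    value by the ACF restores the unattenuated count rate.
    mu_values are linear attenuation coefficients (1/cm) sampled
    at regular intervals of step_cm along the line."""
    line_integral = sum(mu_values) * step_cm
    return math.exp(line_integral)

# 10 cm of water-equivalent tissue (mu ~ 0.096 /cm at 511 keV),
# sampled every 0.5 cm: ACF = exp(0.96) ~ 2.6
acf = attenuation_correction_factor([0.096] * 20, 0.5)
```

In a transmission scan the line integral is obtained directly as the log ratio of blank to transmission counts, which is why noise in the transmission data propagates into the corrected emission images.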

  5. Variational attenuation correction in two-view confocal microscopy

    PubMed Central

    2013-01-01

    Background Absorption and refraction induced signal attenuation can seriously hinder the extraction of quantitative information from confocal microscopic data. This signal attenuation can be estimated and corrected by algorithms that use physical image formation models. Especially in thick heterogeneous samples, current single view based models are unable to solve the underdetermined problem of estimating the attenuation-free intensities. Results We present a variational approach to estimate both the real intensities and the spatially variant attenuation from two views of the same sample from opposite sides. Assuming noise-free measurements throughout the whole volume and pure absorption, this would in theory allow a perfect reconstruction without further assumptions. To cope with real world data, our approach respects photon noise, estimates apparent bleaching between the two recordings, and constrains the attenuation field to be smooth and sparse to avoid spurious attenuation estimates in regions lacking valid measurements. Conclusions We quantify the reconstruction quality on simulated data and compare it to the state-of-the-art two-view approach and commonly used one-factor-per-slice approaches like the exponential decay model. Additionally we show its real-world applicability on model organisms from zoology (zebrafish) and botany (Arabidopsis). The results from these experiments show that the proposed approach improves the quantification of confocal microscopic data of thick specimens. PMID:24350574
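The one-factor-per-slice exponential decay model that the paper uses as a baseline can be sketched in a few lines: every voxel in slice z is multiplied by a single depth-dependent gain. The decay constant and the toy stack below are illustrative assumptions, not values from the paper.

```python
import math

def exponential_slice_correction(stack, decay_per_slice):
    """Correct a confocal z-stack with the simple exponential decay
    model: intensities in slice z are multiplied by
    exp(decay_per_slice * z), one factor per slice.

    stack is a list of slices, each a flat list of pixel values."""
    return [[v * math.exp(decay_per_slice * z) for v in slice_]
            for z, slice_ in enumerate(stack)]

# Toy 1-pixel slices that fade by ~10% per slice; a decay constant
# of -ln(0.9) ~ 0.1054 per slice restores a flat profile.
stack = [[100.0], [90.0], [81.0]]
corrected = exponential_slice_correction(stack, 0.1054)
```

This model assumes the attenuation is the same everywhere in a slice, which is exactly the assumption the variational two-view approach removes by estimating a spatially variant attenuation field.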

  6. Improved attenuation correction for freely moving animal brain PET studies using a virtual scanner geometry

    NASA Astrophysics Data System (ADS)

    Angelis, Georgios I.; Ryder, William J.; Kyme, Andre Z.; Fulton, Roger R.; Meikle, Steven R.

    2014-03-01

    Attenuation correction in positron emission tomography brain imaging of freely moving animals can be very challenging since the body of the animal is often within the field of view and introduces a non-negligible attenuating factor that can degrade the quantitative accuracy of the reconstructed images. An attractive approach that avoids the need for a transmission scan involves the generation of the convex hull of the animal's head based on the reconstructed emission images. However, this approach ignores the potential attenuation introduced by the animal's body. In this work, we propose a virtual scanner geometry which moves in synchrony with the animal's head and discriminates between those events that traverse only the animal's head (and therefore can be accurately compensated for attenuation) and those that might have also traversed the animal's body. For each pose a new virtual scanner geometry was defined and therefore a new system matrix was calculated, leading to a time-varying system matrix. This new approach was evaluated on phantom data acquired on the microPET Focus 220 scanner using a custom-made rat phantom. Results showed that when the animal's body is within the FOV and not accounted for during attenuation correction it can lead to bias of up to 10%. On the contrary, attenuation correction was more accurate when the virtual scanner was employed, leading to improved quantitative estimates (bias <2%), without the need to account for the animal's body.

  7. Vision 20/20: Magnetic resonance imaging-guided attenuation correction in PET/MRI: Challenges, solutions, and opportunities.

    PubMed

    Mehranian, Abolfazl; Arabi, Hossein; Zaidi, Habib

    2016-03-01

    Attenuation correction is an essential component of the long chain of data correction techniques required to achieve the full potential of quantitative positron emission tomography (PET) imaging. The development of combined PET/magnetic resonance imaging (MRI) systems mandated the widespread interest in developing novel strategies for deriving accurate attenuation maps with the aim to improve the quantitative accuracy of these emerging hybrid imaging systems. The attenuation map in PET/MRI should ideally be derived from anatomical MR images; however, MRI intensities reflect proton density and relaxation time properties of biological tissues rather than their electron density and photon attenuation properties. Therefore, in contrast to PET/computed tomography, there is a lack of standardized global mapping between the intensities of MRI signal and linear attenuation coefficients at 511 keV. Moreover, in standard MRI sequences, bones and lung tissues do not produce measurable signals owing to their low proton density and short transverse relaxation times. MR images are also inevitably subject to artifacts that degrade their quality, thus compromising their applicability for the task of attenuation correction in PET/MRI. MRI-guided attenuation correction strategies can be classified in three broad categories: (i) segmentation-based approaches, (ii) atlas-registration and machine learning methods, and (iii) emission/transmission-based approaches. This paper summarizes past and current state-of-the-art developments and latest advances in PET/MRI attenuation correction. The advantages and drawbacks of each approach for addressing the challenges of MR-based attenuation correction are comprehensively described. The opportunities brought by both MRI and PET imaging modalities for deriving accurate attenuation maps and improving PET quantification will be elaborated. Future prospects and potential clinical applications of these techniques and their integration in commercial
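Segmentation-based MR-AC, category (i) above, amounts to replacing each segmented tissue class with a single class-wide linear attenuation coefficient at 511 keV. A minimal sketch; the LAC values are approximate figures commonly quoted in the literature, not taken from this review.

```python
# Approximate linear attenuation coefficients at 511 keV (1/cm).
# These are illustrative literature values, not from the review.
LAC_511 = {"air": 0.0, "lung": 0.018, "fat": 0.086,
           "soft_tissue": 0.096, "bone": 0.13}

def labels_to_mu_map(label_image):
    """Build a mu-map from a segmented MR image: each tissue label
    in the 2D label image is replaced by its class-wide attenuation
    coefficient. This discretization is the known weakness of
    segmentation-based MR-AC: intra-class density variation
    (especially in bone and lung) is ignored."""
    return [[LAC_511[label] for label in row] for row in label_image]

mu_map = labels_to_mu_map([["air", "soft_tissue"],
                           ["bone", "lung"]])
```

Atlas-based and emission-based methods (categories (ii) and (iii)) exist precisely to recover the continuous attenuation values this lookup discards.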

  8. Improving the quantitative accuracy of optical-emission computed tomography by incorporating an attenuation correction: application to HIF1 imaging

    NASA Astrophysics Data System (ADS)

    Kim, E.; Bowsher, J.; Thomas, A. S.; Sakhalkar, H.; Dewhirst, M.; Oldham, M.

    2008-10-01

    revealed highly inhomogeneous vasculature perfusion within the tumour. Optical-ECT emission images yielded high-resolution 3D images of the fluorescent protein distribution in the tumour. Attenuation-uncorrected optical-ECT images showed clear loss of signal in regions of high attenuation, including regions of high perfusion, where attenuation is increased by increased vascular ink stain. Application of attenuation correction showed significant changes in the apparent expression of fluorescent proteins, confirming the importance of the attenuation correction. In conclusion, this work presents the first development and application of an attenuation correction for optical-ECT imaging. The results suggest that successful attenuation correction for optical-ECT is feasible and is essential for quantitatively accurate optical-ECT imaging.

  9. Cardiac function assessed by attenuation-corrected radionuclide pressure-volume indices

    SciTech Connect

    Maurer, A.H.; Siegel, J.A.; Blasius, K.M.; Deneberg, B.S.; Spann, J.F.; Malmud, L.S.

    1985-07-01

    Using attenuation-corrected radionuclide volumes and arm-cuff peak systolic pressures, the authors established the mean value for the ratio of left ventricular (LV) peak systolic pressure/end systolic volume at rest for 15 healthy persons. In 43 patients with coronary disease, this ratio was more sensitive as an indicator of abnormal LV function and for predicting coronary artery disease than the resting ejection fraction. The slope of an end systolic pressure-volume line was also calculated from data obtained under three loading conditions: at rest, during isometric handgrip testing, and after the sublingual administration of nitroglycerin. These results represent an improvement over previous radionuclide pressure-volume measurements that have not used attenuation correction and show the need for accurate, nongeometric measurements of the LV end systolic volume.
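The index the study rests on is a simple ratio of two measured quantities, which is why an accurate (attenuation-corrected, nongeometric) end-systolic volume matters so much. A sketch with illustrative numbers, not the paper's normal range:

```python
def ps_esv_ratio(peak_systolic_mmHg, end_systolic_volume_ml):
    """Left-ventricular peak systolic pressure / end-systolic volume
    ratio (mmHg/ml). An underestimated ESV (e.g. from uncorrected
    attenuation) inflates the ratio, which is why the authors stress
    attenuation-corrected radionuclide volumes."""
    return peak_systolic_mmHg / end_systolic_volume_ml

# Illustrative values: cuff pressure 120 mmHg, corrected ESV 40 ml
ratio = ps_esv_ratio(120.0, 40.0)
```

The paper's end-systolic pressure-volume slope is the same quantity tracked across three loading conditions (rest, handgrip, nitroglycerin) and fit as a line.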

  10. Supplemental transmission method for improved PET attenuation correction on an integrated MR/PET

    NASA Astrophysics Data System (ADS)

    Watson, Charles C.

    2014-01-01

    Although MR image segmentation, combined with information from the PET emission data, has achieved a clinically usable PET attenuation correction (AC) on whole-body MR/PET systems, more accurate PET AC remains one of the main instrumental challenges for quantitative imaging. Incorporating a full conventional PET transmission system in these machines would be difficult, but even a small amount of transmission data might usefully complement the MR-based estimate of the PET attenuation image. In this paper we explore one possible configuration for such a system that uses a small number of fixed line sources placed around the periphery of the patient tunnel. These line sources are implemented using targeted positron beams. The sparse transmission (sTX) data are collected simultaneously with the emission (EM) acquisition. These data, plus a blank scan, are combined with a partially known attenuation image estimate in a modified version of the maximum likelihood for attenuation and activity (MLAA) algorithm, to estimate values of the linear attenuation coefficients (LAC) in unknown regions of the image. This algorithm was tested in two simple phantom experiments. We find that the use of supplemental transmission data can significantly improve the accuracy of the estimated LAC in a truncated region, as well as the estimate of the emitter concentration within the phantom. In the experiments, the bias in the EM+sTX estimate of emitter concentrations was 3-5%, compared to 15-20% with the use of EM-only data.

  11. MR Imaging-Guided Attenuation Correction of PET Data in PET/MR Imaging.

    PubMed

    Izquierdo-Garcia, David; Catana, Ciprian

    2016-04-01

    Attenuation correction (AC) is one of the most important challenges in the recently introduced combined PET/magnetic resonance (MR) scanners. PET/MR AC (MR-AC) approaches aim to develop methods that allow accurate estimation of the linear attenuation coefficients of the tissues and other components located in the PET field of view. MR-AC methods can be divided into 3 categories: segmentation, atlas, and PET based. This review provides a comprehensive list of the state-of-the-art MR-AC approaches and their pros and cons. The main sources of artifacts are presented. Finally, this review discusses the current status of MR-AC approaches for clinical applications. PMID:26952727

  12. Field of view extension and truncation correction for MR-based human attenuation correction in simultaneous MR/PET imaging

    SciTech Connect

    Blumhagen, Jan O.; Ladebeck, Ralf; Fenchel, Matthias; Braun, Harald; Quick, Harald H.; Faul, David; Scheffler, Klaus

    2014-02-15

    Purpose: In quantitative PET imaging, it is critical to accurately measure and compensate for the attenuation of the photons absorbed in the tissue. While in PET/CT the linear attenuation coefficients can be easily determined from a low-dose CT-based transmission scan, in whole-body MR/PET the computation of the linear attenuation coefficients is based on the MR data. However, a constraint of the MR-based attenuation correction (AC) is the MR-inherent field-of-view (FoV) limitation due to static magnetic field (B0) inhomogeneities and gradient nonlinearities. Therefore, the MR-based human AC map may be truncated or geometrically distorted toward the edges of the FoV and, consequently, the PET reconstruction with MR-based AC may be biased. This is especially of impact laterally where the patient arms rest beside the body and are not fully considered. Methods: A method is proposed to extend the MR FoV by determining an optimal readout gradient field which locally compensates B0 inhomogeneities and gradient nonlinearities. This technique was used to reduce truncation in AC maps of 12 patients, and the impact on the PET quantification was analyzed and compared to truncated data without applying the FoV extension and additionally to an established approach of PET-based FoV extension. Results: The truncation artifacts in the MR-based AC maps were successfully reduced in all patients, and the mean body volume was thereby increased by 5.4%. In some cases large patient-dependent changes in SUV of up to 30% were observed in individual lesions when compared to the standard truncated attenuation map. Conclusions: The proposed technique successfully extends the MR FoV in MR-based attenuation correction and shows an improvement of PET quantification in whole-body MR/PET hybrid imaging. In comparison to the PET-based completion of the truncated body contour, the proposed method is also applicable to specialized PET tracers with little uptake in the arms and might

  13. Continuous MR bone density measurement using water- and fat-suppressed projection imaging (WASPI) for PET attenuation correction in PET-MR.

    PubMed

    Huang, C; Ouyang, J; Reese, T G; Wu, Y; El Fakhri, G; Ackerman, J L

    2015-10-21

    Due to the lack of signal from solid bone in normal MR sequences for the purpose of MR-based attenuation correction, investigators have proposed using the ultrashort echo time (UTE) pulse sequence, which yields signal from bone. However, the UTE-based segmentation approach might not fully capture the intra- and inter-subject bone density variation, which will inevitably lead to bias in reconstructed PET images. In this work, we investigated using the water- and fat-suppressed proton projection imaging (WASPI) sequence to obtain accurate and continuous attenuation for bones. This approach is capable of accounting for intra- and inter-subject bone attenuation variations. Using data acquired from a phantom, we have found that attenuation correction based on the WASPI sequence is more accurate and precise when compared to either conventional MR attenuation correction or UTE-based segmentation approaches.

  14. Continuous MR bone density measurement using water- and fat-suppressed projection imaging (WASPI) for PET attenuation correction in PET-MR

    NASA Astrophysics Data System (ADS)

    Huang, C.; Ouyang, J.; Reese, T. G.; Wu, Y.; El Fakhri, G.; Ackerman, J. L.

    2015-10-01

    Due to the lack of signal from solid bone in normal MR sequences for the purpose of MR-based attenuation correction, investigators have proposed using the ultrashort echo time (UTE) pulse sequence, which yields signal from bone. However, the UTE-based segmentation approach might not fully capture the intra- and inter-subject bone density variation, which will inevitably lead to bias in reconstructed PET images. In this work, we investigated using the water- and fat-suppressed proton projection imaging (WASPI) sequence to obtain accurate and continuous attenuation for bones. This approach is capable of accounting for intra- and inter-subject bone attenuation variations. Using data acquired from a phantom, we have found that attenuation correction based on the WASPI sequence is more accurate and precise when compared to either conventional MR attenuation correction or UTE-based segmentation approaches.

  15. Ocean Lidar Measurements of Beam Attenuation and a Roadmap to Accurate Phytoplankton Biomass Estimates

    NASA Astrophysics Data System (ADS)

    Hu, Yongxiang; Behrenfeld, Mike; Hostetler, Chris; Pelon, Jacques; Trepte, Charles; Hair, John; Slade, Wayne; Cetinic, Ivona; Vaughan, Mark; Lu, Xiaomei; Zhai, Pengwang; Weimer, Carl; Winker, David; Verhappen, Carolus C.; Butler, Carolyn; Liu, Zhaoyan; Hunt, Bill; Omar, Ali; Rodier, Sharon; Lifermann, Anne; Josset, Damien; Hou, Weilin; MacDonnell, David; Rhew, Ray

    2016-06-01

    Beam attenuation coefficient, c, provides an important optical index of plankton standing stocks, such as phytoplankton biomass and total particulate carbon concentration. Unfortunately, c has proven difficult to quantify through remote sensing. Here, we introduce an innovative approach for estimating c using lidar depolarization measurements and diffuse attenuation coefficients from ocean color products or lidar measurements of Brillouin scattering. The new approach is based on a theoretical formula established from Monte Carlo simulations that links the depolarization ratio of sea water to the ratio of the diffuse attenuation coefficient Kd to the beam attenuation coefficient c (i.e., a multiple scattering factor). On July 17, 2014, the CALIPSO satellite was tilted 30° off-nadir for one nighttime orbit in order to minimize ocean surface backscatter and demonstrate the lidar ocean subsurface measurement concept from space. Depolarization ratios of ocean subsurface backscatter are measured accurately. Beam attenuation coefficients computed from the depolarization ratio measurements compare well with empirical estimates from ocean color measurements. We further verify the beam attenuation coefficient retrievals using aircraft-based high spectral resolution lidar (HSRL) data that are collocated with in-water optical measurements.
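The retrieval described above can be sketched as inverting the multiple scattering factor eta = Kd/c. The linear form of eta and its coefficients below are placeholders for the paper's Monte-Carlo-derived mapping, and all input numbers are hypothetical.

```python
def beam_attenuation(kd, depol_ratio, a=0.5, b=1.5):
    """Estimate the beam attenuation coefficient c (1/m) from the
    diffuse attenuation coefficient Kd (1/m) and the lidar
    depolarization ratio of the ocean subsurface return.

    The multiple scattering factor eta = Kd/c is modeled here as a
    placeholder linear function eta = a + b * delta; the real
    mapping comes from the paper's Monte Carlo simulations."""
    eta = a + b * depol_ratio
    return kd / eta

# Hypothetical inputs: Kd = 0.1 /m, subsurface depolarization 0.1
c_est = beam_attenuation(0.1, 0.1)
```

The key point the sketch preserves is that once Kd is available (from ocean color or Brillouin lidar), a single depolarization measurement closes the system and yields c.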

  16. Development of attenuation and diffraction corrections for linear and nonlinear Rayleigh surface waves radiating from a uniform line source

    NASA Astrophysics Data System (ADS)

    Jeong, Hyunjo; Zhang, Shuzeng; Cho, Sungjong; Li, Xiongbing

    2016-04-01

    In recent studies with nonlinear Rayleigh surface waves, harmonic generation measurements have been successfully employed to characterize material damage and microstructural changes, and have been found to be sensitive to early stages of the damage process. A nonlinearity parameter of Rayleigh surface waves was derived and is frequently measured to quantify the level of damage. Accurate measurement of the nonlinearity parameter generally requires corrections for beam diffraction and medium attenuation. These effects are not generally known for nonlinear Rayleigh waves, and therefore were not properly considered in most previous studies. In this paper, the nonlinearity parameter for a Rayleigh surface wave is defined from the plane wave displacement solutions. We explicitly define the attenuation and diffraction corrections for fundamental and second harmonic Rayleigh wave beams radiated from a uniform line source. Attenuation corrections are obtained from the quasilinear theory of the plane Rayleigh wave equations. To obtain closed-form expressions for the diffraction corrections, multi-Gaussian beam (MGB) models are employed to represent the integral solutions derived from the quasilinear theory of the full two-dimensional wave equation without the parabolic approximation. Diffraction corrections are presented for a couple of transmitter-receiver geometries, and the effects of making attenuation and diffraction corrections are examined through the simulation of nonlinearity parameter determination in a solid sample.
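The measurement chain described above (divide the measured amplitudes by their diffraction and attenuation corrections, then form the amplitude ratio) can be sketched with the standard plane-wave definition of the nonlinearity parameter. The Rayleigh-wave expressions in the paper differ in detail, and all numbers below are illustrative assumptions.

```python
import math

def nonlinearity_parameter(a1, a2, k, x, d1=1.0, d2=1.0,
                           alpha1=0.0, alpha2=0.0):
    """Nonlinearity parameter from measured fundamental (a1) and
    second-harmonic (a2) displacement amplitudes at propagation
    distance x and wavenumber k.

    Measured amplitudes are first divided by diffraction corrections
    (d1, d2) and attenuation factors exp(-alpha * x); the plane-wave
    definition beta = 8*a2 / (k^2 * x * a1^2) is then applied."""
    a1_corr = a1 / (d1 * math.exp(-alpha1 * x))
    a2_corr = a2 / (d2 * math.exp(-alpha2 * x))
    return 8.0 * a2_corr / (k ** 2 * x * a1_corr ** 2)

# Illustrative values: a1 = 1 nm, a2 = 1 pm, k = 1000 rad/m, x = 5 cm
beta = nonlinearity_parameter(1e-9, 1e-12, 1000.0, 0.05)
```

Skipping the second-harmonic attenuation correction (alpha2 = 0 when the true alpha2 > 0) biases beta low, which is the error the paper quantifies.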

  17. Attenuation correction without transmission scan for the MAMMI breast PET

    NASA Astrophysics Data System (ADS)

    Soriano, A.; González, A.; Orero, A.; Moliner, L.; Carles, M.; Sánchez, F.; Benlloch, J. M.; Correcher, C.; Carrilero, V.; Seimetz, M.

    2011-08-01

    Whole-body Positron Emission Tomography (PET) scanners are required to span large Fields of View (FOV). Therefore, reaching the sensitivity and spatial resolution required for early stage breast tumor detection is not straightforward. MAMMI is a dedicated breast PET scanner with a ring geometry designed to provide PET images with a spatial resolution as high as 1.5 mm, able to detect small breast tumors (<1 cm). The patient lies in the prone position during the scan, thus making it possible to image the whole breast, up to regions close to the base of the pectoral muscle, without the requirement of breast compression. Attenuation correction (AC) for PET data improves the image quality and the quantitative accuracy of the determination of the radioactivity distribution. In dedicated, high resolution breast cancer scanners, this correction would enhance proper diagnosis in early disease stages. In whole-body PET scanners, AC is usually accounted for with transmission scans, using either external radioactive rod sources or Computed Tomography (CT). This considerably increases the radiation dose administered to the patient and the time needed for the exploration. In this work we propose a method for breast shape identification by means of PET image segmentation. The breast shape identification is used for the determination of the AC. For a dedicated breast PET scanner, the procedure we propose should provide AC similar to that obtained by transmission scans, as we take advantage of the breast's anatomical simplicity. Experimental validation of the proposed approach with a dedicated breast PET prototype is also presented. The main advantage of this method is an important dose reduction, since the transmission scan is not required.

  18. The new approach of polarimetric attenuation correction for improving radar quantitative precipitation estimation (QPE)

    NASA Astrophysics Data System (ADS)

    Gu, Ji-Young; Suk, Mi-Kyung; Nam, Kyung-Yeub; Ko, Jeong-Seok; Ryzhkov, Alexander

    2016-04-01

    To obtain high-quality radar quantitative precipitation estimation data, reliable radar calibration and efficient attenuation correction are very important. Because microwave radiation at shorter wavelengths experiences strong attenuation in precipitation, accounting for this attenuation is essential for shorter-wavelength radars. In this study, the performance of different attenuation/differential attenuation correction schemes at C band is tested for two strong rain events which occurred in central Oklahoma. Among the algorithms explored is a new attenuation correction scheme (a combination of the self-consistency and hot-spot methodologies) that separates the relative contributions of strong convective cells and of the rest of the storm to the path-integrated total and differential attenuation. Quantitative use of weather radar measurements, such as rainfall estimation, relies on reliable attenuation correction. We examined the impact of attenuation correction on rainfall estimates in heavy rain events by cross-checking against S-band radar measurements, which are much less affected by attenuation, and compared the storm rain totals obtained from the corrected Z and KDP with rain gauges in these cases. The new approach can be applied efficiently at shorter-wavelength radars and is therefore of direct use to the Weather Radar Center of the Korea Meteorological Administration, which is preparing an X-band research dual-polarization radar network.
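The simplest member of the family of corrections this study compares is the linear differential-phase method: specific attenuation is taken proportional to KDP, so the two-way path-integrated attenuation at each gate is alpha times the accumulated differential phase, added back to the measured reflectivity. The coefficient below is a typical C-band value used as an assumption, not the study's fitted one.

```python
def correct_reflectivity(z_dbz, phidp_deg, alpha=0.08):
    """Linear-KDP attenuation correction: since specific attenuation
    A = alpha * KDP (dB/km per deg/km), the two-way path-integrated
    attenuation at each range gate equals alpha * delta-PhiDP, which
    is added back to the measured reflectivity (dBZ).

    z_dbz and phidp_deg are range profiles along one ray; alpha =
    0.08 dB/deg is a commonly quoted C-band value (an assumption)."""
    phidp0 = phidp_deg[0]
    return [z + alpha * (p - phidp0)
            for z, p in zip(z_dbz, phidp_deg)]

# Toy ray: 20 deg of accumulated PhiDP at gate 2, 50 deg at gate 3
z_corr = correct_reflectivity([40.0, 35.0, 30.0], [0.0, 20.0, 50.0])
```

The self-consistency and hot-spot refinements the study tests exist because a single fixed alpha fails inside strong convective cells, which is exactly the contribution the new scheme separates out.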

  19. Use of calibration standards and the correction for sample self-attenuation in gamma-ray nondestructive assay

    SciTech Connect

    Parker, J.L.

    1984-08-01

    The efficient use of appropriate calibration standards and the correction for the attenuation of the gamma rays within an assay sample by the sample itself are two important and closely related subjects in gamma-ray nondestructive assay. Much research relating to those subjects has been done in the Nuclear Safeguards Research and Development program at the Los Alamos National Laboratory since 1970. This report brings together most of the significant results of that research. Also discussed are the nature of appropriate calibration standards and the necessary conditions on the composition, size, and shape of the samples to allow accurate assays. Procedures for determining the correction for the sample self-attenuation are described at length including both general principles and several specific useful cases. The most useful concept is that knowing the linear attenuation coefficient of the sample (which can usually be determined) and the size and shape of the sample and its position relative to the detector permits the computation of the correction factor for the self-attenuation. A major objective of the report is to explain how the procedures for determining the self-attenuation correction factor can be applied so that calibration standards can be entirely appropriate without being particularly similar, either physically or chemically, to the items to be assayed. This permits minimization of the number of standards required to assay items with a wide range of size, shape, and chemical composition. 17 references, 18 figures, 2 tables.
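For the closed-form case the report covers in detail, a uniform slab viewed face-on in far-field geometry, the self-attenuation correction factor reduces to CF = mu*L / (1 - exp(-mu*L)). A sketch of that useful case (the mu and thickness values are illustrative; other geometries in the report require numerical integration):

```python
import math

def slab_self_attenuation_cf(mu, thickness_cm):
    """Far-field self-attenuation correction factor for a uniform
    slab sample viewed perpendicular to its face:
    CF = mu*L / (1 - exp(-mu*L)).

    Multiplying the measured gamma-ray count rate by CF recovers the
    rate an unattenuating sample would give. mu is the linear
    attenuation coefficient (1/cm) at the assay energy, determinable
    from a transmission measurement as the report describes."""
    x = mu * thickness_cm
    if x == 0.0:
        return 1.0  # no attenuation, no correction needed
    return x / (1.0 - math.exp(-x))

# Illustrative sample: mu = 0.3 /cm, 2 cm thick slab (mu*L = 0.6)
cf = slab_self_attenuation_cf(0.3, 2.0)
```

This is the report's central point in miniature: knowing mu and the sample geometry fixes CF, so calibration standards need not physically or chemically resemble the assayed items.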

  20. The use of calibration standards and the correction for sample self-attenuation in gamma-ray nondestructive assay

    SciTech Connect

    Parker, J.L.

    1986-11-01

    The efficient use of appropriate calibration standards and the correction for the attenuation of the gamma rays within an assay sample by the sample itself are two important and closely related subjects in gamma-ray nondestructive assay. Much research relating to those subjects has been done in the Nuclear Safeguards Research and Development program at the Los Alamos National Laboratory since 1970. This report brings together most of the significant results of that research. Also discussed are the nature of appropriate calibration standards and the necessary conditions on the composition, size, and shape of the samples to allow accurate assays. Procedures for determining the correction for the sample self-attenuation are described at length including both general principles and several specific useful cases. The most useful concept is that knowing the linear attenuation coefficient of the sample (which can usually be determined) and the size and shape of the sample and its position relative to the detector permits the computation of the correction factor for the self-attenuation. A major objective of the report is to explain how the procedures for determining the self-attenuation correction factor can be applied so that calibration standards can be entirely appropriate without being particularly similar, either physically or chemically, to the items to be assayed. This permits minimization of the number of standards required to assay items with a wide range of size, shape, and chemical composition.

  1. Generation of attenuation map for MR-based attenuation correction of PET data in the head area employing 3D short echo time MR imaging

    NASA Astrophysics Data System (ADS)

    Khateri, Parisa; Salighe Rad, Hamidreza; Fathi, Anahita; Ay, Mohammad Reza

    2013-02-01

    Attenuation correction is a crucial step to get accurate quantification of Positron Emission Tomography (PET) data. An attenuation map to provide attenuation coefficients at 511 keV can be generated using Magnetic Resonance Images (MRI). One of the main steps involved in MR-based attenuation correction (MRAC) of PET data is to separate bone from air. Low signal intensity of bone in conventional MRI makes it difficult to separate bone from air in the head area, while their attenuation coefficients are very different. In literature, several groups proposed ultrashort echo-time (UTE) sequences to differentiate bone from air [4,5,7], because these sequences are capable of imaging tissues with short T2* relaxation time, such as cortical bone; however, they are difficult to use, expensive and time-consuming. Employing short echo-time (STE) MRI in combination with long echo-time (LTE) MRI, along with high performance image processing algorithms, is a good substitute for UTE-based PET attenuation correction; STE sequences are widely available, easy to use, inexpensive and much faster than UTE pulse sequences. In this work, we propose the use of STE sequences along with LTE ones, as well as a dedicated image processing method to differentiate bone from air cavities in the head area by creating contrast between the tissues. Attenuation coefficients at 511 keV, relying on literature [5], will then be assigned to the voxels. Acquisition was performed on a clinical 3T Tim Trio scanner (Siemens Medical Solutions, Erlangen, Germany), employing a dual echo sequence. To achieve an optimized protocol with the best result for discrimination of bone and air, two types of acquisitions were performed, with and without fat suppression; the acquisition parameters were as follows: TE=1.21/5 ms, TR=5/17, FA=30, and TE=1.12/3.16 ms, TR=5/5, FA=12 for the non-fat-suppressed and fat-suppressed protocols, respectively. Contrast enhancement and tissue segmentation were applied as processing steps, to

  2. Metal artifact reduction strategies for improved attenuation correction in hybrid PET/CT imaging

    SciTech Connect

    Abdoli, Mehrsima; Dierckx, Rudi A. J. O.; Zaidi, Habib

    2012-06-15

    Metallic implants are known to generate bright and dark streaking artifacts in x-ray computed tomography (CT) images, which in turn propagate to corresponding functional positron emission tomography (PET) images during the CT-based attenuation correction procedure commonly used on hybrid clinical PET/CT scanners. Therefore, visual artifacts and overestimation and/or underestimation of the tracer uptake in regions adjacent to metallic implants are likely to occur and as such, inaccurate quantification of the tracer uptake and potential erroneous clinical interpretation of PET images is expected. Accurate quantification of PET data requires metal artifact reduction (MAR) of the CT images prior to the application of the CT-based attenuation correction procedure. In this review, the origins of metallic artifacts and their impact on clinical PET/CT imaging are discussed. Moreover, a brief overview of proposed MAR methods and their advantages and drawbacks is presented. Although most of the presented MAR methods are mainly developed for diagnostic CT imaging, their potential application in PET/CT imaging is highlighted. The challenges associated with comparative evaluation of these methods in a clinical environment in the absence of a gold standard are also discussed.

  3. Metal artifact reduction strategies for improved attenuation correction in hybrid PET/CT imaging.

    PubMed

    Abdoli, Mehrsima; Dierckx, Rudi A J O; Zaidi, Habib

    2012-06-01

    Metallic implants are known to generate bright and dark streaking artifacts in x-ray computed tomography (CT) images, which in turn propagate to corresponding functional positron emission tomography (PET) images during the CT-based attenuation correction procedure commonly used on hybrid clinical PET/CT scanners. Therefore, visual artifacts and overestimation and/or underestimation of the tracer uptake in regions adjacent to metallic implants are likely to occur and as such, inaccurate quantification of the tracer uptake and potential erroneous clinical interpretation of PET images is expected. Accurate quantification of PET data requires metal artifact reduction (MAR) of the CT images prior to the application of the CT-based attenuation correction procedure. In this review, the origins of metallic artifacts and their impact on clinical PET/CT imaging are discussed. Moreover, a brief overview of proposed MAR methods and their advantages and drawbacks is presented. Although most of the presented MAR methods are mainly developed for diagnostic CT imaging, their potential application in PET/CT imaging is highlighted. The challenges associated with comparative evaluation of these methods in a clinical environment in the absence of a gold standard are also discussed.

  4. Bias atlases for segmentation-based PET attenuation correction using PET-CT and MR

    PubMed Central

    Ouyang, Jinsong; Chun, Se Young; Petibon, Yoann; Bonab, Ali A.; Alpert, Nathaniel; Fakhri, Georges El

    2014-01-01

    The aim of this study was to obtain voxel-wise PET accuracy and precision when tissue segmentation is used for attenuation correction. We applied multiple thresholds to the CTs of 23 patients to classify tissues. For six of the 23 patients, MR images were also acquired, and the MR fat/in-phase ratio images were used for fat segmentation. Segmented tissue classes were used to create attenuation maps, which were used for attenuation correction in PET reconstruction. PET bias images were then computed using the PET reconstructed with the original CT as the reference. We registered the CTs for all the patients and transformed the corresponding bias images accordingly. We then obtained mean and standard-deviation bias atlases using all the registered bias images. Our CT-based study shows that four-class segmentation (air, lungs, fat, other tissues), which is available on most PET-MR scanners, yields 15.1%, 4.1%, 6.6%, and 12.9% RMSE bias in lungs, fat, non-fat soft tissues, and bones, respectively. Accurate fat identification is achievable using fat/in-phase MR images. Furthermore, we found that three-class segmentation (air, lungs, other tissues) yields less than 5% standard deviation of bias within the heart, liver, and kidneys, implying that three-class segmentation can be sufficient to achieve a small variation of bias when imaging these three organs. Finally, we found that inter- and intra-patient lung density variations contribute almost equally to the overall standard deviation of bias within the lungs. PMID:24966415

  5. Attenuation-corrected fluorescence extraction for image-guided surgery in spatial frequency domain

    PubMed Central

    Yang, Bin; Sharma, Manu

    2013-01-01

    A new approach to retrieve the attenuation-corrected fluorescence using spatial frequency-domain imaging is demonstrated. Both in vitro and ex vivo experiments showed the technique can compensate for the fluorescence attenuation from tissue absorption and scattering. This approach has potential in molecular image-guided surgery. PMID:23955392

  6. Attenuation-corrected fluorescence extraction for image-guided surgery in spatial frequency domain.

    PubMed

    Yang, Bin; Sharma, Manu; Tunnell, James W

    2013-08-01

    A new approach to retrieve the attenuation-corrected fluorescence using spatial frequency-domain imaging is demonstrated. Both in vitro and ex vivo experiments showed the technique can compensate for the fluorescence attenuation from tissue absorption and scattering. This approach has potential in molecular image-guided surgery. PMID:23955392

  7. A fast experimental beam hardening correction method for accurate bone mineral measurements in 3D μCT imaging system.

    PubMed

    Koubar, Khodor; Bekaert, Virgile; Brasse, David; Laquerriere, Patrice

    2015-06-01

    Bone mineral density plays an important role in the determination of bone strength and fracture risk, so it is very important to obtain accurate bone mineral density measurements. The microcomputerized tomography system provides 3D information about the architectural properties of bone, but quantitative analysis accuracy is decreased by the presence of artefacts in the reconstructed images, mainly beam hardening artefacts (such as cupping artefacts). In this paper, we introduce a new beam hardening correction method based on a post-reconstruction technique that uses off-line water and bone linearization curves calculated experimentally, in order to account for the nonhomogeneity of the scanned animal. To evaluate the mass correction rate, a calibration line was established to convert the reconstructed linear attenuation coefficients into bone masses. The correction method was then applied to a multimaterial cylindrical phantom and to mouse skeleton images. Mass correction rates of up to 18% between uncorrected and corrected images were obtained, and a remarkable improvement in the calculated mouse femur mass was observed. Results were also compared with those obtained using the simple water linearization technique, which does not take the nonhomogeneity of the object into account.
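    The off-line linearization idea can be sketched as follows. This is a minimal single-material (water) illustration with a toy beam-hardening model, not the authors' dual water/bone method; all coefficients are placeholders.

```python
import numpy as np

# Hypothetical calibration: for known water thicknesses t (cm), the ideal
# monochromatic projection is mu*t, while the measured polychromatic
# projection flattens as the beam hardens.
mu_water = 0.2                              # assumed coefficient, 1/cm
t = np.linspace(0, 10, 21)
p_ideal = mu_water * t
p_measured = p_ideal - 0.01 * p_ideal**2    # toy beam-hardening model

# Fit the linearization curve p_ideal = f(p_measured) once, off-line.
coeffs = np.polyfit(p_measured, p_ideal, deg=3)

def linearize(p):
    """Map a measured polychromatic projection to its water-equivalent
    monochromatic value, removing the cupping-inducing nonlinearity."""
    return np.polyval(coeffs, p)
```

    In the paper's method a second, bone-specific curve would be applied to the bone component, which is what accounts for the nonhomogeneity of the scanned animal.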

  8. Investigation of Attenuation Correction for Small-Animal Single Photon Emission Computed Tomography

    PubMed Central

    Lee, Hsin-Hui; Chen, Jyh-Cheng

    2013-01-01

    The quantitative accuracy of SPECT is limited by photon attenuation and scatter effects that arise when photons interact with atoms. In this study, we developed a new attenuation correction (AC) method, the CT-based mean attenuation correction (CTMAC) method, and compared it with various methods in current use, using small-animal SPECT/CT data acquired from several physical phantoms and a rat. The physical phantoms and an SD rat, which were injected with 99mTc, were scanned by a parallel-hole small-animal SPECT and then imaged by 80 kVp micro-CT. Scatter was estimated and corrected by the triple-energy window (TEW) method. Absolute quantification was derived from a scan of a point source of known activity. In the physical-phantom studies, we compared the original images, images with scatter correction (SC) only, and scatter-corrected images with AC performed using Chang's method, CT-based attenuation correction (CTAC), CT-based iterative attenuation compensation during reconstruction (CTIACR), and CTMAC. The correction results show that the errors of these six configurations are mostly quite similar. CTMAC needs the shortest correction time while obtaining good AC results. PMID:23840278
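    The triple-energy window (TEW) method used above has a compact standard form: the scatter under the photopeak is approximated by a trapezoid spanned by the count densities of two narrow windows flanking the photopeak. A minimal sketch (the window widths in keV are placeholder values, not this study's settings):

```python
def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_peak):
    """Triple-energy-window scatter estimate for the photopeak window.

    Scatter counts under the photopeak are approximated by the area of a
    trapezoid whose sides are the count densities (counts per keV) in the
    two narrow windows flanking the photopeak.
    """
    return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

def scatter_corrected(c_peak, c_lower, c_upper,
                      w_lower=3.0, w_upper=3.0, w_peak=20.0):
    # Clamp at zero: the estimate may exceed the measured counts in
    # low-count pixels.
    return max(c_peak - tew_scatter(c_lower, c_upper,
                                    w_lower, w_upper, w_peak), 0.0)
```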

  9. Magnetic resonance imaging-guided attenuation correction in whole-body PET/MRI using a sorted atlas approach.

    PubMed

    Arabi, Hossein; Zaidi, Habib

    2016-07-01

    Quantitative whole-body PET/MR imaging is challenged by the lack of accurate and robust strategies for attenuation correction. In this work, a new pseudo-CT generation approach, referred to as sorted atlas pseudo-CT (SAP), is proposed for accurate extraction of bones and estimation of lung attenuation properties. This approach improves on the Gaussian process regression (GPR) kernel proposed by Hofmann et al., which relies on the information provided by a co-registered atlas (CT and MRI) to predict the distribution of attenuation coefficients. Our approach uses two separate GPR kernels for lung and non-lung tissues. For non-lung tissues, the co-registered atlas dataset was sorted on the basis of local normalized cross-correlation similarity to the target MR image to select the most similar atlas image for each voxel. For lung tissue, the lung volume was incorporated in the GPR kernel, taking advantage of the correlation between lung volume and the corresponding attenuation properties to predict the attenuation coefficients of the lung. In the presence of pathological tissue in the lungs, lesions are segmented on PET images corrected for attenuation using an MRI-derived three-class attenuation map, followed by assignment of the soft-tissue attenuation coefficient. The proposed algorithm was compared to other techniques reported in the literature, including Hofmann's approach and the three-class attenuation correction technique implemented on the Philips Ingenuity TF PET/MR, with CT-based attenuation correction serving as the reference. Fourteen patients with head and neck cancer undergoing PET/CT and PET/MR examinations were used for quantitative analysis. SUV measurements were performed on 12 normal uptake regions as well as high-uptake malignant regions. Moreover, a number of similarity measures were used to evaluate the accuracy of the extracted bones. The Dice similarity metric revealed that the extracted bone improved from 0.58 ± 0.09 to 0.65 ± 0.07 when

  11. Scatter correction of vessel dropout behind highly attenuating structures in 4D-DSA

    NASA Astrophysics Data System (ADS)

    Hermus, James; Mistretta, Charles; Szczykutowicz, Timothy P.

    2015-03-01

    In Computed Tomographic (CT) image reconstruction for four-dimensional digital subtraction angiography (4D-DSA), a loss of vessel contrast has been observed behind highly attenuating anatomy, such as large contrast-filled aneurysms. Although this typically occurs only in a limited range of projection angles, the observed contrast time course can be altered. In this work we propose an algorithm to correct for highly attenuating anatomy, i.e., aneurysms, within the fill projection data. The algorithm uses a 3D-SA volume to create a correction volume that is multiplied by the 4D-DSA volume in order to correct for signal dropout within the 4D-DSA volume. The algorithm was designed to correct for highly attenuating material in the fill volume only; however, with alterations to a single step of the algorithm, artifacts due to highly attenuating materials in the mask volume (e.g., dental implants) can be mitigated as well. We successfully applied our algorithm to a case of vessel dropout due to the presence of a large attenuating aneurysm. The performance was assessed visually: the affected vessel no longer dropped out in corrected 4D-DSA time frames. The correction was quantified by plotting the signal intensity along the vessel. Our analysis demonstrated that our correction does not alter vessel signal values outside the dropout region but does increase the values within the dropout region, as expected. We have demonstrated that this correction algorithm corrects vessel dropout in areas with highly attenuating materials.

  12. Accurate and efficient modeling of global seismic wave propagation for an attenuative Earth model including the center

    NASA Astrophysics Data System (ADS)

    Toyokuni, Genti; Takenaka, Hiroshi

    2012-06-01

    We propose a method for modeling global seismic wave propagation through an attenuative Earth model including the center. This method enables accurate and efficient computations since it is based on the 2.5-D approach, which solves the wave equations only on a 2-D cross section of the whole Earth and can correctly model 3-D geometrical spreading. We extend a numerical scheme for elastic waves in spherical coordinates, using the finite-difference method (FDM), to solve the viscoelastodynamic equation. For computation of realistic seismic wave propagation, incorporation of anelastic attenuation is crucial. Since Earth material behaves as both an elastic solid and a viscous fluid, we must solve the stress-strain relations of viscoelastic material, including attenuative structures. These relations represent the stress as a convolution integral in time, which makes viscoelasticity difficult to treat in time-domain computations such as the FDM. However, this can now be handled using so-called memory variables, invented in the 1980s and subsequently improved in Cartesian coordinates. Arbitrary values of the quality factor (Q) can be incorporated into the wave equation via an array of Zener bodies. We also introduce a multi-domain approach, an FD grid of several layers with different grid spacings, into our FDM scheme. This allows wider lateral grid spacings with depth, so as not to violate the FD stability criterion near the Earth's center. In addition, we propose a technique to avoid the singularity of the wave equation in spherical coordinates at the Earth's center: we develop a scheme to calculate the wavefield variables at this point based on linear interpolation for the velocity-stress, staggered-grid FDM. This scheme is validated through a comparison of synthetic seismograms with those obtained by the Direct Solution Method for a spherically symmetric Earth model, showing excellent accuracy for our FDM scheme. As a numerical example, we apply the method to simulate seismic

  13. Polarimetric X-band weather radar measurements in the tropics: radome and rain attenuation correction

    NASA Astrophysics Data System (ADS)

    Schneebeli, M.; Sakuragi, J.; Biscaro, T.; Angelis, C. F.; Carvalho da Costa, I.; Morales, C.; Baldini, L.; Machado, L. A. T.

    2012-09-01

    A polarimetric X-band radar was deployed for one month (April 2011) during a field campaign in Fortaleza, Brazil, together with three additional laser disdrometers. The disdrometers are capable of measuring raindrop size distributions (DSDs), making it possible to forward-model theoretical polarimetric X-band radar observables at the point where the instruments are located. This set-up makes it possible to thoroughly test the accuracy of the X-band radar measurements as well as the algorithms used to correct the radar data for radome and rain attenuation. For the campaign in Fortaleza it was found that radome attenuation dominantly affects the measurements. With an algorithm based on the self-consistency of the polarimetric observables, the radome-induced reflectivity offset was estimated. Offset-corrected measurements were then further corrected for rain attenuation with two different schemes. The performance of the post-processing steps was analyzed by comparing the data with disdrometer-inferred polarimetric variables measured at a distance of 20 km from the radar. Radome attenuation reached values of up to 14 dB, which was found to be consistent with an empirical radome attenuation vs. rain intensity relation previously developed for the same radar type. In contrast to previous work, our results suggest that radome attenuation should be estimated individually for every view direction of the radar in order to obtain homogeneous reflectivity fields.

  14. Limits of Ultra-Low Dose CT Attenuation Correction for PET/CT.

    PubMed

    Xia, Ting; Alessio, Adam M; Kinahan, Paul E

    2010-01-29

    We present an analysis of the effects of ultra-low-dose x-ray computed tomography (CT)-based attenuation correction for positron emission tomography (PET). By ultra-low dose we mean less than approximately 5 mAs, or 0.5 mSv total effective whole-body dose. The motivation is the increased interest in using respiratory motion information acquired during the CT scan both for phase-matched CT-based attenuation correction and for motion estimation. Since longer-duration CT scans are desired, radiation dose to the patient can be a limiting factor. In this study we evaluate the impact of reduced photon flux rates in the CT data on the reconstructed PET image, using the CATSIM simulation tool for the CT component and the ASIM simulation tool for the PET component. The CT simulation includes the effects of the x-ray tube spectrum, beam conditioning, bowtie filter, detector noise, and beam hardening correction. The PET simulation includes the effects of attenuation and photon counting. Noise and bias in the PET image were evaluated from multiple realizations of test objects. We show that techniques can be used to significantly reduce the mAs needed for CT-based attenuation correction if the CT is not used for diagnostic purposes. The limiting factor, however, is not the noise in the CT image but rather the bias introduced by CT sinogram elements with no detected flux. These results constrain the methods that can be used to lower CT dose in a manner suitable for attenuation correction of PET data. We conclude that ultra-low-dose CT for attenuation correction of PET data is feasible with current PET/CT scanners.
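    The zero-flux bias described above is easy to reproduce: when detector bins that record zero counts are clamped before the log transform, the estimated line integral is biased low for strongly attenuated rays. A toy demonstration (not the CATSIM/ASIM pipeline; the flux levels are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_line_integral(I0, p_true, n=200_000):
    """Average estimated line integral from Poisson CT measurements.

    Bins that record zero counts are clamped to one count before the log
    transform, which biases the estimate when the flux I0 is low.
    """
    counts = rng.poisson(I0 * np.exp(-p_true), size=n)
    return np.mean(np.log(I0 / np.maximum(counts, 1)))

p_true = 5.0                                          # strongly attenuated ray
bias_high = mean_line_integral(1e6, p_true) - p_true  # ample flux: ~0
bias_low = mean_line_integral(50.0, p_true) - p_true  # ultra-low flux: biased
```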

  15. Improved Algorithms for Accurate Retrieval of UV - Visible Diffuse Attenuation Coefficients in Optically Complex, Inshore Waters

    NASA Technical Reports Server (NTRS)

    Cao, Fang; Fichot, Cedric G.; Hooker, Stanford B.; Miller, William L.

    2014-01-01

    Photochemical processes driven by high-energy ultraviolet radiation (UVR) in inshore, estuarine, and coastal waters play an important role in global biogeochemical cycles and biological systems. A key to modeling photochemical processes in these optically complex waters is an accurate description of the vertical distribution of UVR in the water column, which can be obtained using the diffuse attenuation coefficients of downwelling irradiance (Kd(λ)). The SeaUV/SeaUVc algorithms (Fichot et al., 2008) can accurately retrieve Kd(λ) (λ = 320, 340, 380, 412, 443, and 490 nm) in oceanic and coastal waters using multispectral remote sensing reflectances (Rrs(λ), SeaWiFS bands). However, the SeaUV/SeaUVc algorithms are currently not optimized for use in optically complex, inshore waters, where they tend to severely underestimate Kd(λ). Here, a new training data set of optical properties collected in optically complex, inshore waters was used to re-parameterize the published SeaUV/SeaUVc algorithms, resulting in improved Kd(λ) retrievals for turbid, estuarine waters. Although the updated SeaUV/SeaUVc algorithms perform best in optically complex waters, the published SeaUV/SeaUVc models still perform well in most coastal and oceanic waters. Therefore, we propose a composite set of SeaUV/SeaUVc algorithms, optimized for Kd(λ) retrieval in almost all marine systems, ranging from oceanic to inshore waters. The composite algorithm set can retrieve Kd(λ) from ocean color with good accuracy across this wide range of water types (e.g., within 13% mean relative error for Kd(340)). A validation step using three independent, in situ data sets indicates that the composite SeaUV/SeaUVc can generate accurate Kd values from 320 to 490 nm using satellite imagery on a global scale. Taking advantage of the inherent benefits of our statistical methods, we pooled the validation data with the training set, obtaining an optimized composite model for estimating Kd(λ) in UV wavelengths for almost all marine waters. This

  16. Hybrid Positron Emission Tomography/Magnetic Resonance Imaging: Challenges, Methods, and State of the Art of Hardware Component Attenuation Correction.

    PubMed

    Paulus, Daniel H; Quick, Harald H

    2016-10-01

    Attenuation correction (AC) is an essential step in the positron emission tomography (PET) data reconstruction process to provide accurate and quantitative PET images. The introduction of PET/magnetic resonance (MR) hybrid systems has raised new challenges but also possibilities regarding PET AC. While in PET/computed tomography (CT) imaging, CT images can be converted to attenuation maps, MR images in PET/MR do not provide a direct relation to attenuation. For the AC of patient tissues, new methods have been suggested, for example, based on image segmentation, atlas registration, or ultrashort echo-time MR sequences. Another challenge in PET/MR hybrid imaging is AC of hardware components that are placed in the PET/MR field of view, such as the patient table or various radiofrequency (RF) coils covering the body of the patient for MR signal detection. Hardware components can be categorized into 4 different groups: (1) patient table, (2) RF receiver coils, (3) radiation therapy equipment, and (4) PET and MR imaging phantoms. For rigid and stationary objects, such as the patient table and some RF coils like the head/neck coil, predefined CT-based attenuation maps stored on the system can be used for automatic AC. Flexible RF coils have so far not been included in the AC process because they can vary in position as well as in shape and are not accurately detectable with the PET/MR system. This work summarizes challenges, established methods, new concepts, and the state of the art in hardware component AC in the context of PET/MR hybrid imaging. The work also gives an overview of PET/MR hardware devices, their attenuation properties, and their effect on PET quantification. PMID:27175550

  17. Hybrid Positron Emission Tomography/Magnetic Resonance Imaging: Challenges, Methods, and State of the Art of Hardware Component Attenuation Correction.

    PubMed

    Paulus, Daniel H; Quick, Harald H

    2016-10-01

    Attenuation correction (AC) is an essential step in the positron emission tomography (PET) data reconstruction process to provide accurate and quantitative PET images. The introduction of PET/magnetic resonance (MR) hybrid systems has raised new challenges but also possibilities regarding PET AC. While in PET/computed tomography (CT) imaging, CT images can be converted to attenuation maps, MR images in PET/MR do not provide a direct relation to attenuation. For the AC of patient tissues, new methods have been suggested, for example, based on image segmentation, atlas registration, or ultrashort echo-time MR sequences. Another challenge in PET/MR hybrid imaging is AC of hardware components that are placed in the PET/MR field of view, such as the patient table or various radiofrequency (RF) coils covering the body of the patient for MR signal detection. Hardware components can be categorized into 4 different groups: (1) patient table, (2) RF receiver coils, (3) radiation therapy equipment, and (4) PET and MR imaging phantoms. For rigid and stationary objects, such as the patient table and some RF coils like the head/neck coil, predefined CT-based attenuation maps stored on the system can be used for automatic AC. Flexible RF coils have so far not been included in the AC process because they can vary in position as well as in shape and are not accurately detectable with the PET/MR system. This work summarizes challenges, established methods, new concepts, and the state of the art in hardware component AC in the context of PET/MR hybrid imaging. The work also gives an overview of PET/MR hardware devices, their attenuation properties, and their effect on PET quantification.

  18. Accuracy of CT-Based Attenuation Correction in PET/CT Bone Imaging

    PubMed Central

    Abella, Monica; Alessio, Adam M.; Mankoff, David A.; MacDonald, Lawrence R.; Vaquero, Juan Jose; Desco, Manuel; Kinahan, Paul E.

    2012-01-01

    We evaluate the accuracy of scaling CT images for attenuation correction of PET data measured for bone. While the standard tri-linear approach has been well-tested for soft tissues, the impact of CT-based attenuation correction on the accuracy of tracer uptake in bone has not been reported in detail. We measured the accuracy of attenuation coefficients of bovine femur segments and patient data using a tri-linear method applied to CT images obtained at different kVp settings. Attenuation values at 511 keV obtained with a 68Ga/68Ge transmission scan were used as a reference standard. The impact of inaccurate attenuation images on PET standardized uptake values (SUVs) was then evaluated using simulated emission images and emission images from five patients with elevated levels of FDG uptake in bone at disease sites. The CT-based linear attenuation images of the bovine femur segments underestimated the true values by 2.9±0.3% for cancellous bone regardless of kVp. For compact bone the underestimation ranged from 1.3% at 140 kVp to 14.1% at 80 kVp. In the patient scans at 140 kVp the underestimation was approximately 2% averaged over all bony regions. The sensitivity analysis indicated that errors in PET SUVs in bone are approximately proportional to errors in the estimated attenuation coefficients for the same regions. The variability in SUV bias also increased approximately linearly with the error in linear attenuation coefficients. These results suggest that bias in bone uptake SUVs of PET tracers ranges from 2.4% to 5.9% when using CT scans at 140 and 120 kVp for attenuation correction. Lower kVp scans have the potential for considerably more error in dense bone. This bias is present in any PET tracer with bone uptake but may be clinically insignificant for many imaging tasks. However, errors from CT-based attenuation correction methods should be carefully evaluated if quantitation of tracer uptake in bone is important. PMID:22481547

  19. Accuracy of CT-based attenuation correction in PET/CT bone imaging.

    PubMed

    Abella, Monica; Alessio, Adam M; Mankoff, David A; MacDonald, Lawrence R; Vaquero, Juan Jose; Desco, Manuel; Kinahan, Paul E

    2012-05-01

    We evaluate the accuracy of scaling CT images for attenuation correction of PET data measured for bone. While the standard tri-linear approach has been well tested for soft tissues, the impact of CT-based attenuation correction on the accuracy of tracer uptake in bone has not been reported in detail. We measured the accuracy of attenuation coefficients of bovine femur segments and patient data using a tri-linear method applied to CT images obtained at different kVp settings. Attenuation values at 511 keV obtained with a (68)Ga/(68)Ge transmission scan were used as a reference standard. The impact of inaccurate attenuation images on PET standardized uptake values (SUVs) was then evaluated using simulated emission images and emission images from five patients with elevated levels of FDG uptake in bone at disease sites. The CT-based linear attenuation images of the bovine femur segments underestimated the true values by 2.9 ± 0.3% for cancellous bone regardless of kVp. For compact bone the underestimation ranged from 1.3% at 140 kVp to 14.1% at 80 kVp. In the patient scans at 140 kVp the underestimation was approximately 2% averaged over all bony regions. The sensitivity analysis indicated that errors in PET SUVs in bone are approximately proportional to errors in the estimated attenuation coefficients for the same regions. The variability in SUV bias also increased approximately linearly with the error in linear attenuation coefficients. These results suggest that bias in bone uptake SUVs of PET tracers ranges from 2.4% to 5.9% when using CT scans at 140 and 120 kVp for attenuation correction. Lower kVp scans have the potential for considerably more error in dense bone. This bias is present in any PET tracer with bone uptake but may be clinically insignificant for many imaging tasks. However, errors from CT-based attenuation correction methods should be carefully evaluated if quantitation of tracer uptake in bone is important.
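    The tri-linear (often called bilinear) CT-to-511-keV scaling evaluated above can be sketched as a piecewise-linear map from CT numbers to attenuation coefficients. The break point and slopes below are representative, literature-style values chosen for illustration, not the calibration of any particular scanner:

```python
import numpy as np

def hu_to_mu_511(hu, kvp=140):
    """Piecewise-linear conversion of CT numbers (HU) to linear
    attenuation coefficients at 511 keV, in 1/cm.

    Below 0 HU a voxel is treated as an air/water mixture; above it, as a
    water/bone mixture whose slope depends on the tube voltage (denser
    bone appears relatively brighter at lower kVp). All constants here
    are illustrative placeholders.
    """
    mu_water = 0.096                                   # 1/cm at 511 keV
    slope = {80: 4.1e-5, 120: 5.0e-5, 140: 5.6e-5}[kvp]
    hu = np.asarray(hu, dtype=float)
    return np.where(hu <= 0,
                    mu_water * (1.0 + hu / 1000.0),    # air-water mixture
                    mu_water + slope * hu)             # water-bone mixture
```

    The kVp dependence of the bone-branch slope is exactly why the abstracts above report larger bone bias at 80 kVp than at 140 kVp when a single scaling is used.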

  20. A simple model for deep tissue attenuation correction and large organ analysis of Cerenkov luminescence imaging

    NASA Astrophysics Data System (ADS)

    Habte, Frezghi; Natarajan, Arutselvan; Paik, David S.; Gambhir, Sanjiv S.

    2014-03-01

    Cerenkov luminescence imaging (CLI) is an emerging, cost-effective modality that uses conventional small-animal optical imaging systems and clinically available radionuclide probes for light emission. CLI has shown good correlation with PET for organs of high uptake, such as the kidney, spleen, thymus, and subcutaneous tumors, in mouse models. However, CLI has limitations for deep-tissue quantitative imaging, since the blue-weighted spectrum of Cerenkov radiation is strongly attenuated by mammalian tissue. Large organs such as the liver have also shown a higher signal due to the contribution of light emitted from a greater thickness of tissue. In this study, we developed a simple model that estimates the effective tissue attenuation coefficient in order to correct the CLI signal intensity using a priori estimates of the depth and thickness of specific organs. We used several thin slices of ham to build a phantom with realistic attenuation. We placed radionuclide sources inside the phantom at different tissue depths and imaged it using an IVIS Spectrum (Perkin-Elmer, Waltham, MA, USA) and an Inveon microPET (Preclinical Solutions, Siemens, Knoxville, TN). We also performed CLI and PET of mouse models and applied the proposed attenuation model to correct the CLI measurements. Using calibration factors obtained from the phantom study to convert the corrected CLI measurements to %ID/g, we obtained an average difference of less than 10% for spleen and less than 35% for liver compared to conventional PET measurements. Hence, the proposed model is capable of correcting the CLI signal to provide measurements comparable with PET data.
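    In its simplest form, the depth correction described above amounts to inverting Beer-Lambert attenuation through a known overlying tissue thickness. A minimal sketch (the effective attenuation coefficient is a placeholder, not the value fitted from the layered-ham phantom):

```python
import math

def correct_cli_signal(measured, depth_cm, mu_eff_per_cm=2.0):
    """Correct a surface-measured Cerenkov signal for exponential
    attenuation through an a priori known depth of overlying tissue.

    mu_eff_per_cm stands in for the effective tissue attenuation
    coefficient estimated from a phantom; the default is a placeholder.
    """
    return measured * math.exp(mu_eff_per_cm * depth_cm)
```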

  1. Methods of Attenuation Correction for Dual-Wavelength and Dual-Polarization Weather Radar Data

    NASA Technical Reports Server (NTRS)

    Meneghini, R.; Liao, L.

    2007-01-01

    In writing the integral equations for the median mass diameter and number concentration, or comparable parameters of the raindrop size distribution, it is apparent that the forms of the equations for dual-polarization and dual-wavelength radar data are identical when attenuation effects are included. The differential backscattering and extinction coefficients appear in both sets of equations: for the dual-polarization equations, the differences are taken with respect to polarization at a fixed frequency, while for the dual-wavelength equations, the differences are taken with respect to frequency at a fixed polarization. An alternative to the integral equation formulation is one based on the k-Z (attenuation coefficient-radar reflectivity factor) parameterization. This technique was originally developed for attenuating single-wavelength radars, a variation of which has been applied to the TRMM Precipitation Radar (PR) data. Extensions of this method have also been applied to dual-polarization data. In fact, it is not difficult to show that nearly identical equations are applicable as well to dual-wavelength radar data. In this case, the equations for median mass diameter and number concentration take the form of coupled, but non-integral equations. Differences between this and the integral equation formulation are a consequence of the different ways in which attenuation correction is performed under the two formulations. For both techniques, the equations can be solved either forward from the radar outward or backward from the final range gate toward the radar. Although the forward-going solutions tend to be unstable as the attenuation out to the range of interest becomes large in some sense, an independent estimate of path attenuation is not required. This is analogous to the case of an attenuating single-wavelength radar where the forward solution to the Hitschfeld-Bordan equation becomes unstable as the attenuation increases. To circumvent this problem, the
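The forward-going solution and its instability at large path attenuation can be illustrated with a toy single-wavelength correction under an assumed k-Z power law (the coefficients a and b below are illustrative only, not those used for the TRMM PR):

```python
def hitschfeld_bordan_forward(z_measured_dbz, dr_km, a=3e-4, b=0.7):
    """Forward (radar-outward) attenuation correction under a k-Z law.

    z_measured_dbz: measured reflectivity per range gate (dBZ)
    dr_km:          range-gate spacing (km)
    a, b:           k = a * Z**b, with k the specific attenuation (dB/km)
                    and Z the linear reflectivity (illustrative coefficients)

    Returns corrected reflectivities (dBZ). As the text notes, this forward
    solution becomes unstable as the accumulated path attenuation grows,
    since errors compound gate by gate.
    """
    corrected = []
    path_atten_db = 0.0  # two-way attenuation accumulated so far
    for z_dbz in z_measured_dbz:
        z_corr_dbz = z_dbz + path_atten_db
        corrected.append(z_corr_dbz)
        k = a * (10.0 ** (z_corr_dbz / 10.0)) ** b  # specific attenuation, dB/km
        path_atten_db += 2.0 * k * dr_km            # two-way over this gate
    return corrected
```

The backward solution instead starts from an independent path-attenuation estimate at the final gate, trading stability for that extra constraint.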

  2. Attenuation correction with region growing method used in the positron emission mammography imaging system

    NASA Astrophysics Data System (ADS)

    Gu, Xiao-Yue; Li, Lin; Yin, Peng-Fei; Yun, Ming-Kai; Chai, Pei; Huang, Xian-Chao; Sun, Xiao-Li; Wei, Long

    2015-10-01

    The Positron Emission Mammography imaging system (PEMi) provides a novel nuclear diagnosis method dedicated to breast imaging. With better resolution than whole-body PET, PEMi can detect millimeter-sized breast tumors. To address the requirement of semi-quantitative analysis with a radiotracer concentration map of the breast, a new attenuation correction method based on a three-dimensional seeded region growing image segmentation (3DSRG-AC) method has been developed. The method gives a 3D connected region as the segmentation result instead of image slices. The continuity of the segmentation result makes this new method insensitive to activity variation across breast tissues. The choice of threshold value is the key step in the segmentation method; the first valley in the grey-level histogram of the reconstruction image is set as the lower threshold, which works well in clinical application. Results show that attenuation correction for PEMi improves the image quality and the quantitative accuracy of radioactivity distribution determination. Attenuation correction also improves the probability of detecting small and early breast tumors. Supported by the Knowledge Innovation Project of the Chinese Academy of Sciences (KJCX2-EW-N06).
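The 3D seeded region growing at the heart of the 3DSRG-AC method can be sketched as a 6-connected flood fill from a seed voxel, keeping voxels above the lower threshold. This is a simplified sketch, not the PEMi implementation: here the threshold is passed in directly, whereas the method above derives it from the first valley of the reconstruction image's grey-level histogram.

```python
from collections import deque

def region_grow_3d(volume, seed, lower_threshold):
    """3D seeded region growing: collect all voxels >= lower_threshold
    that are 6-connected to the seed. `volume` is a nested list indexed
    as volume[z][y][x]; returns the set of (z, y, x) voxel coordinates
    forming one connected region (not independent 2D slices)."""
    z0, y0, x0 = seed
    if volume[z0][y0][x0] < lower_threshold:
        return set()
    region, queue = {seed}, deque([seed])
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            n = (z + dz, y + dy, x + dx)
            if (0 <= n[0] < nz and 0 <= n[1] < ny and 0 <= n[2] < nx
                    and n not in region
                    and volume[n[0]][n[1]][n[2]] >= lower_threshold):
                region.add(n)
                queue.append(n)
    return region
```

A uniform attenuation coefficient for breast tissue can then be assigned to every voxel inside the grown region to build the attenuation map.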

  3. [Evaluation of non-uniform attenuation correction using simultaneous transmission and emission computed tomography--basic analysis with myocardial phantom].

    PubMed

    Otake, H; Yukihiro, M; Fukushima, Y; Imai, T; Hosono, K; Hatori, N; Watanabe, N; Hirano, T; Inoue, T; Takahashi, M; Ban, R; Endo, K

    1996-03-01

    The simultaneous transmission emission protocol (STEP), developed for non-uniform attenuation correction of single photon emission computed tomography (SPECT), was evaluated using a cardiac phantom prepared with and without a myocardial wall defect. Emission computed tomography (ECT) of the cardiac phantom was acquired using 201Tl. Transmission data (TCT) were acquired using a line source of 99mTc. Myocardial images obtained with the STEP method were superior to conventional SPECT images in the homogeneity of intramyocardial radioactivity and in spatial resolution. The method is attractive because simultaneous data acquisition ensures accurate positional matching between the TCT and ECT images and shortens the examination time. It should be clinically useful for diagnosing various myocardial diseases.

  4. Validation of Computed Tomography-based Attenuation Correction of Deviation between Theoretical and Actual Values in Four Computed Tomography Scanners

    PubMed Central

    Yada, Nobuhiro; Onishi, Hideo

    2016-01-01

    Objective(s): In this study, we aimed to validate the accuracy of computed tomography-based attenuation correction (CTAC), using the bilinear scaling method. Methods: The measured attenuation coefficient (μm) was compared to the theoretical attenuation coefficient (μt), using four different CT scanners and an RMI 467 phantom. The effective energy of the CT beam X-rays was calculated using the aluminum half-value layer method and was used in conjunction with an attenuation coefficient map to convert the CT numbers to μm values for the photon energy of 140 keV. We measured the CT number of the RMI 467 phantom for each of the four scanners and compared the μm and μt values with respect to the effective energies of the CT beam X-rays, effective atomic numbers, and physical densities. Results: For materials with high effective atomic numbers, the μm values obtained with CT beam X-rays of low effective energy were lower than those obtained with CT beam X-rays of high effective energy. As the physical density increased, the μm values increased linearly. Compared with the other scanners, the μm values obtained from the scanner with CT beam X-rays of maximal effective energy increased once the effective atomic number exceeded 10.00. The μm value of soft tissue was equivalent to the μt value. However, the maximal differences between μm and μt values were 25.4% (lung tissue) and 21.5% (bone tissue). Additionally, the maximal difference in μm values across scanners was 6.0% in bone tissue. Conclusion: The bilinear scaling method could accurately convert CT numbers to μ values in soft tissues. PMID:27408896
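The half-value layer step above can be sketched as follows: the measured aluminium HVL implies μ = ln 2 / HVL, and the effective beam energy is the tabulated energy whose aluminium μ best matches it. The lookup table below is purely illustrative; real values would come from standard photon attenuation tables.

```python
import math

def mu_from_hvl(hvl_mm):
    """Linear attenuation coefficient (1/mm) implied by a measured
    half-value layer: I/I0 = 1/2 = exp(-mu * HVL)."""
    return math.log(2.0) / hvl_mm

def effective_energy_kev(hvl_mm_al, mu_table):
    """Return the tabulated energy whose aluminium mu is closest to the
    mu implied by the measured aluminium HVL. `mu_table` maps energy
    (keV) -> mu of aluminium (1/mm)."""
    mu = mu_from_hvl(hvl_mm_al)
    return min(mu_table, key=lambda e: abs(mu_table[e] - mu))

# illustrative aluminium mu values (1/mm) -- NOT real tabulated data
mu_al = {40: 0.152, 50: 0.099, 60: 0.075, 70: 0.062}
```

The effective energy then selects the correct CT-number-to-μ conversion curve, which is why the kVp-dependent discrepancies above matter.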

  5. Accurate tracking of tumor volume change during radiotherapy by CT-CBCT registration with intensity correction

    NASA Astrophysics Data System (ADS)

    Park, Seyoun; Robinson, Adam; Quon, Harry; Kiess, Ana P.; Shen, Colette; Wong, John; Plishker, William; Shekhar, Raj; Lee, Junghoon

    2016-03-01

    In this paper, we propose a CT-CBCT registration method to accurately predict the tumor volume change based on daily cone-beam CTs (CBCTs) during radiotherapy. CBCT is commonly used to reduce patient setup error during radiotherapy, but its poor image quality impedes accurate monitoring of anatomical changes. Although physician's contours drawn on the planning CT can be automatically propagated to daily CBCTs by deformable image registration (DIR), artifacts in CBCT often cause undesirable errors. To improve the accuracy of the registration-based segmentation, we developed a DIR method that iteratively corrects CBCT intensities by local histogram matching. Three popular DIR algorithms (B-spline, demons, and optical flow) with the intensity correction were implemented on a graphics processing unit for efficient computation. We evaluated their performances on six head and neck (HN) cancer cases. For each case, four trained scientists manually contoured the nodal gross tumor volume (GTV) on the planning CT and every other fraction CBCTs to which the propagated GTV contours by DIR were compared. The performance was also compared with commercial image registration software based on conventional mutual information (MI), VelocityAI (Varian Medical Systems Inc.). The volume differences (mean ± std in cc) between the average of the manual segmentations and automatic segmentations are 3.70 ± 2.30 (B-spline), 1.25 ± 1.78 (demons), 0.93 ± 1.14 (optical flow), and 4.39 ± 3.86 (VelocityAI). The proposed method significantly reduced the estimation error by 9% (B-spline), 38% (demons), and 51% (optical flow) over the results using VelocityAI. Although demonstrated only on HN nodal GTVs, the results imply that the proposed method can produce improved segmentation of other critical structures over conventional methods.
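The intensity correction by histogram matching can be illustrated with its global counterpart: remap source intensities so that their cumulative distribution matches the reference's. The local variant applies this within neighborhoods; the sketch below is a simplification for illustration, not the authors' implementation.

```python
import numpy as np

def histogram_match(source, reference):
    """Map source intensities so their empirical CDF matches the reference
    CDF (global histogram matching). For each distinct source value, find
    the reference value at the same quantile."""
    s = np.asarray(source, dtype=float).ravel()
    r = np.asarray(reference, dtype=float).ravel()
    s_vals, s_counts = np.unique(s, return_counts=True)
    s_cdf = np.cumsum(s_counts) / s.size
    r_sorted = np.sort(r)
    r_cdf = np.arange(1, r.size + 1) / r.size
    # reference intensity at the same quantile as each source value
    matched_vals = np.interp(s_cdf, r_cdf, r_sorted)
    lookup = dict(zip(s_vals, matched_vals))
    return np.array([lookup[v] for v in s]).reshape(np.shape(source))
```

Applied locally before each DIR iteration, such a correction pulls CBCT intensities toward the planning-CT range so that intensity-based similarity metrics behave more reliably.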

  6. Feasibility of using respiration-averaged MR images for attenuation correction of cardiac PET/MR imaging.

    PubMed

    Ai, Hua; Pan, Tinsu

    2015-01-01

    Cardiac imaging is a promising application for combined PET/MR imaging. However, current MR imaging protocols for whole-body attenuation correction can produce spatial mismatch between PET and MR-derived attenuation data owing to a disparity between the two modalities' imaging speeds. We assessed the feasibility of using a respiration-averaged MR (AMR) method for attenuation correction of cardiac PET data in PET/MR images. First, to demonstrate the feasibility of motion imaging with MR, we used a 3T MR system and a two-dimensional fast spoiled gradient-recalled echo (SPGR) sequence to obtain AMR images of a moving phantom. Then, we used the same sequence to obtain AMR images of a patient's thorax under free-breathing conditions. MR images were converted into PET attenuation maps using a three-class tissue segmentation method with two sets of predetermined CT numbers, one calculated from the patient-specific (PS) CT images and the other from a reference group (RG) containing 54 patient CT datasets. The MR-derived attenuation images were then used for attenuation correction of the cardiac PET data, which were compared to the PET data corrected with average CT (ACT) images. In the myocardium, the voxel-by-voxel differences and the differences in mean slice activity between the AMR-corrected PET data and the ACT-corrected PET data were found to be small (less than 7%). The use of AMR-derived attenuation images in place of ACT images for attenuation correction did not affect the summed stress score. These results demonstrate the feasibility of using the proposed SPGR-based MR imaging protocol to obtain patient AMR images and using those images for cardiac PET attenuation correction. Additional studies with more clinical data are warranted to further evaluate the method. PMID:26218995

  7. Clinical evaluation of the computed tomography attenuation correction map for myocardial perfusion imaging: the potential for incidental pathology detection.

    PubMed

    Tootell, Andrew; Vinjamuri, Sobhan; Elias, Mark; Hogg, Peter

    2012-11-01

    The benefits of hybrid imaging in nuclear medicine have been proven to increase the diagnostic accuracy and sensitivity of many procedures by localizing or characterizing lesions or by correcting emission data to more accurately represent radiopharmaceutical distribution. Single-photon emission computed tomography/computed tomography (SPECT/CT) has a significant role in the diagnosis and follow-up of ischaemic heart disease, with attenuation correction data being obtained on an integrated CT scanner. Initially, the CT component of hybrid SPECT/CT systems could be described as low-specification, utilizing fixed output parameters. As technology has progressed, the CT component of newer systems has specifications identical to those of stand-alone diagnostic systems. Irrespective of the type of scanner used, the computed tomography attenuation correction (CTAC) for myocardial perfusion imaging produces low-quality, limited-range CT images of the chest that include the mediastinum, lung fields and surrounding soft tissues. The diagnostic potential of this data set is unclear; yet examples exist whereby significant pathology can be identified and investigated further. Despite guidance from a number of professional bodies suggesting that evaluation of the resulting images for every medical exposure be carried out, there is no indication as to whether this should include the evaluation of CTAC images. This review aims to initiate discussion by examining the ethical, legal, financial and practical issues (e.g. CT specification and image quality) surrounding the clinical evaluation of CTAC images from myocardial perfusion imaging. Reference will be made to discussions that have taken place, and continue to take place, in other modalities, to current European and UK legislation, and to guidelines and research in the field.

  8. Attenuated MP2 with a Long-Range Dispersion Correction for Treating Nonbonded Interactions.

    PubMed

    Goldey, Matthew B; Belzunces, Bastien; Head-Gordon, Martin

    2015-09-01

    Attenuated second order Møller-Plesset theory (MP2) captures intermolecular binding energies at equilibrium geometries with high fidelity with respect to reference methods, yet must fail to reproduce dispersion energies at stretched geometries due to the removal of fully long-range dispersion. To ameliorate this problem, long-range correction using the VV10 van der Waals density functional is added to attenuated MP2, capturing short-range correlation with attenuated MP2 and long-range dispersion with VV10. Attenuated MP2 with long-range VV10 dispersion in the aug-cc-pVTZ (aTZ) basis set, MP2-V(terfc, aTZ), is parametrized for noncovalent interactions using the S66 database and tested on a variety of noncovalent databases, describing potential energy surfaces and equilibrium binding energies equally well. Further, a spin-component scaled (SCS) version, SCS-MP2-V(2terfc, aTZ), is produced using the W4-11 database as a supplemental thermochemistry training set, and the resulting method reproduces the quality of MP2-V(terfc, aTZ) for noncovalent interactions and exceeds the performance of SCS-MP2/aTZ for thermochemistry. PMID:26575911

  9. What is the benefit of CT-based attenuation correction in myocardial perfusion SPET?

    PubMed

    Apostolopoulos, Dimitrios J; Savvopoulos, Christos

    2016-01-01

    In multimodality imaging, CT-derived transmission maps are used for attenuation correction (AC) of SPET or PET data. Regarding SPET myocardial perfusion imaging (MPI), however, the benefit of CT-based AC (CT-AC) has been questioned. Although most attenuation-related artifacts are removed by this technique, new false defects may appear while some true perfusion abnormalities may be masked. The merits and the drawbacks of CT-AC in MPI SPET are reviewed and discussed in this editorial. In conclusion, CT-AC is most helpful in men, overweight in particular, and in those with low or low to intermediate pre-test probability of coronary artery disease (CAD). It is also useful for the evaluation of myocardial viability. In high-risk patients though, CT-AC may underestimate the presence or the extent of CAD. In any case, corrected and non-corrected images should be viewed side-by-side and both considered in the interpretation of the study. PMID:27331200

  10. Filter Paper: Solution to High Self-Attenuation Corrections in HEPA Filter Measurements

    SciTech Connect

    Oberer, R.B.; Harold, N.B.; Gunn, C.A.; Brummett, M.; Chaing, L.G.

    2005-10-01

    An 8 by 8 by 6 inch High Efficiency Particulate Air (HEPA) filter was measured as part of a uranium holdup survey in June of 2005, as it has been routinely measured every two months since 1998. Although the survey relies on gross gamma count measurements, this was one of a few measurements that had been converted to a quantitative measurement in 1998. The measurement was analyzed using the traditional Generalized Geometry Holdup (GGH) approach, using HMS3 software, with an area calibration and self-attenuation corrected with an empirical correction factor of 1.06. A result of 172 grams of {sup 235}U was reported. The actual quantity of {sup 235}U in the filter was approximately 1700 g. Because of this unusually large discrepancy, the measurement of HEPA filters will be discussed. Various techniques for measuring HEPA filters will be described using the measurement of a 24 by 24 by 12 inch HEPA filter as an example. A new method to correct for self-attenuation will be proposed for this measurement. Following the discussion of the 24 by 24 by 12 inch HEPA filter, the measurement of the 8 by 8 by 6 inch filter will be discussed in detail.
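A simple model for the self-attenuation that a fixed empirical factor of 1.06 fails to capture is the uniform-slab correction factor CF = μt / (1 − e^(−μt)): a deposit concentrated in a thick or dense layer drives CF far above 1.06. The sketch below assumes that slab geometry, which is an illustration of the effect rather than the method proposed in the report:

```python
import math

def slab_self_attenuation_cf(mu_per_cm, thickness_cm):
    """Self-attenuation correction factor for a uniform slab source viewed
    face-on: the ratio of emitted to escaping gamma intensity,
    CF = mu*t / (1 - exp(-mu*t)). CF -> 1 as mu*t -> 0 (thin, transparent
    deposit) and grows without bound as mu*t increases."""
    x = mu_per_cm * thickness_cm
    if x == 0.0:
        return 1.0
    return x / (1.0 - math.exp(-x))

# a thin, lightly attenuating deposit barely needs correcting:
cf_thin = slab_self_attenuation_cf(0.1, 0.5)   # ~1.025
```

An order-of-magnitude underestimate like the 172 g versus ~1700 g result above is consistent with a large μt that a near-unity empirical factor cannot represent.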

  11. Attenuation correction in emission tomography using the emission data—A review

    PubMed Central

    Li, Yusheng

    2016-01-01

    The problem of attenuation correction (AC) for quantitative positron emission tomography (PET) had been considered solved to a large extent after the commercial availability of devices combining PET with computed tomography (CT) in 2001; single photon emission computed tomography (SPECT) has seen a similar development. However, stimulated in particular by technical advances toward clinical systems combining PET and magnetic resonance imaging (MRI), research interest in alternative approaches for PET AC has grown substantially in recent years. In this comprehensive literature review, the authors first present theoretical results with relevance to simultaneous reconstruction of attenuation and activity. The authors then look back at the early history of this research area, especially in PET; since this history is closely interwoven with that of similar approaches in SPECT, these will also be covered. They then review algorithmic advances in PET, including analytic and iterative algorithms. The analytic approaches are either based on the Helgason–Ludwig data consistency conditions of the Radon transform or on generalizations of John's partial differential equation; with respect to iterative methods, the authors discuss maximum likelihood reconstruction of attenuation and activity (MLAA), the maximum likelihood attenuation correction factors (MLACF) algorithm, and their offspring. The description of methods is followed by a structured account of applications for simultaneous reconstruction techniques: this discussion covers organ-specific applications, applications specific to PET/MRI, applications using supplemental transmission information, and motion-aware applications. After briefly summarizing SPECT applications, the authors consider recent developments using emission data other than unscattered photons. In summary, developments using time-of-flight (TOF) PET emission data for AC have shown promising advances and open a wide range of applications. These techniques may both remedy

  12. Comparison of ordered subsets expectation maximization and Chang's attenuation correction method in quantitative cardiac SPET: a phantom study.

    PubMed

    Dey, D; Slomka, P J; Hahn, L J; Kloiber, R

    1998-12-01

    Photon attenuation is one of the primary causes of artifacts in cardiac single photon emission tomography (SPET). Several attenuation correction algorithms have been proposed. The aim of this study was to compare the effect of using the ordered subsets expectation maximization (OSEM) reconstruction algorithm and Chang's non-uniform attenuation correction method on quantitative cardiac SPET. We performed SPET scans of an anthropomorphic phantom simulating normal and abnormal myocardial studies. Attenuation maps of the phantom were obtained from computed tomographic images. The SPET projection data were corrected for attenuation using OSEM reconstruction, as well as Chang's method. For each defect scan and attenuation correction method, we calculated three quantitative parameters: average radial maximum (ARM) ratio of the defect-to-normal area, maximum defect contrast (MDC) and defect volume, using automated three-dimensional quantitation. The differences between the two methods were less than 4% for defect-to-normal ARM ratio, 19% for MDC and 13% for defect volume. These differences are within the range of estimated statistical variation of SPET. The calculation times of the two methods were comparable. For all SPET studies, OSEM attenuation correction gave a more correct activity distribution, with respect to both the homogeneity of the radiotracer and the shape of the cardiac insert. The difference in uniformity between OSEM and Chang's method was quantified by segmental analysis and found to be less than 8% for the normal study. In conclusion, OSEM and Chang's attenuation correction are quantitatively equivalent, with comparable calculation times. OSEM reconstruction gives a more correct activity distribution and is therefore preferred.
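Chang's (first-order) correction divides each reconstructed pixel by the average transmission over all projection angles. A minimal sketch for the uniform-attenuation case; the non-uniform variant compared in the study replaces μ·d with line integrals through the CT-derived attenuation map:

```python
def chang_factor(distances_to_boundary, mu):
    """First-order Chang correction factor at one pixel: the reciprocal of
    the mean transmission over M projection directions.

    distances_to_boundary: path length (cm) from the pixel to the body
                           outline along each of M projection directions
    mu: uniform linear attenuation coefficient (1/cm); Chang's non-uniform
        variant replaces mu*d with a line integral through the map.
    """
    m = len(distances_to_boundary)
    mean_transmission = sum(2.718281828459045 ** (-mu * d)
                            for d in distances_to_boundary) / m
    return 1.0 / mean_transmission

# center of a 20 cm diameter water cylinder (mu ~ 0.15/cm at 140 keV):
factor = chang_factor([10.0] * 64, 0.15)   # exp(1.5), about 4.48
```

The reconstructed image is multiplied pixel-wise by these factors; OSEM with an attenuation model instead folds the attenuation into the system matrix during iterative reconstruction.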

  13. Accurate mask model implementation in optical proximity correction model for 14-nm nodes and beyond

    NASA Astrophysics Data System (ADS)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Farys, Vincent; Huguennet, Frederic; Armeanu, Ana-Maria; Bork, Ingo; Chomat, Michael; Buck, Peter; Schanen, Isabelle

    2016-04-01

    In a previous work, we demonstrated that the current optical proximity correction model, which assumes the mask pattern to be analogous to the designed data, is no longer valid. An extreme case of line-end shortening shows a difference of up to 10 nm (at mask level). For that reason, an accurate mask model has been calibrated for a 14-nm logic gate level. A model with a total RMS of 1.38 nm at mask level was obtained. Two-dimensional structures, such as line-end shortening and corner rounding, were well predicted using scanning electron microscopy pictures overlaid with simulated contours. The first part of this paper is dedicated to the implementation of our improved model in the current flow. The improved model consists of a mask model capturing mask process and writing effects, and a standard optical and resist model addressing the litho exposure and development effects at wafer level. The second part focuses on results from the comparison of the two models, the new and the regular.

  14. Reference Value Provision Schemes for Attenuation Correction of Full-Waveform Airborne Laser Scanner Data

    NASA Astrophysics Data System (ADS)

    Richter, K.; Blaskow, R.; Stelling, N.; Maas, H.-G.

    2015-08-01

    The characterization of the vertical forest structure is highly relevant for ecological research and for better understanding forest ecosystems. Full-waveform airborne laser scanner systems, providing a complete time-resolved digitization of every laser pulse echo, may deliver very valuable information on the biophysical structure of forest stands. To exploit the great potential offered by full-waveform airborne laser scanning data, a natural approach is the development of suitable voxel-based data analysis methods. Beyond extracting additional 3D points, it is very promising to derive voxel attributes directly from the digitized waveform. However, the 'history' of each laser pulse echo is characterized by attenuation effects caused by reflections in higher regions of the crown. As a result, the received waveform signals within the canopy have a lower amplitude than would be observed for an identical structure without the previous canopy interactions (Romanczyk et al., 2012). To achieve a radiometrically correct voxel space representation, the loss of signal strength caused by partial reflections on the path of a laser pulse through the canopy has to be compensated by applying suitable attenuation correction models. The basic idea of the correction procedure is to enhance the waveform intensity values in lower parts of the canopy to account for portions of the pulse intensity that have been reflected in higher parts of the canopy. To estimate the enhancement factor, an appropriate reference value has to be derived from the data itself. Based on pulse history correction schemes presented in previous publications, the paper discusses several approaches for reference value estimation. Furthermore, the results of experiments with two different data sets (leaf-on/leaf-off) are presented.
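One simple form of the pulse-history idea is to boost each waveform sample in proportion to the intensity already returned from higher in the canopy. The sketch below is an illustrative linear version under that assumption, not one of the specific correction models evaluated in the paper; `k` plays the role of the enhancement factor derived from the reference value.

```python
def correct_waveform(intensities, k):
    """Pulse-history style enhancement of a digitized return waveform.

    intensities: waveform samples ordered from canopy top (first return)
                 downward
    k:           attenuation-compensation factor (assumed, per-dataset);
                 each sample is boosted in proportion to the cumulative
                 intensity reflected above it
    """
    corrected = []
    cumulative = 0.0  # energy already reflected higher in the canopy
    for i in intensities:
        corrected.append(i * (1.0 + k * cumulative))
        cumulative += i
    return corrected
```

The first sample is never modified (nothing was reflected above it), while deeper samples receive progressively larger boosts, matching the qualitative behavior described above.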

  15. Correcting infrared satellite estimates of sea surface temperature for atmospheric water vapor attenuation

    NASA Technical Reports Server (NTRS)

    Emery, William J.; Yu, Yunyue; Wick, Gary A.; Schluessel, Peter; Reynolds, Richard W.

    1994-01-01

    A new satellite sea surface temperature (SST) algorithm is developed that uses nearly coincident measurements from the microwave special sensor microwave imager (SSM/I) to correct for atmospheric moisture attenuation of the infrared signal from the advanced very high resolution radiometer (AVHRR). This new SST algorithm is applied to AVHRR imagery from the South Pacific and Norwegian seas, which are then compared with simultaneous in situ (ship based) measurements of both skin and bulk SST. In addition, an SST algorithm using a quadratic product of the difference between the two AVHRR thermal infrared channels is compared with the in situ measurements. While the quadratic formulation provides a considerable improvement over the older cross product (CPSST) and multichannel (MCSST) algorithms, the SSM/I corrected SST (called the water vapor or WVSST) shows overall smaller errors when compared to both the skin and bulk in situ SST observations. Applied to individual AVHRR images, the WVSST reveals an SST difference pattern (CPSST-WVSST) similar in shape to the water vapor structure, while the CPSST-quadratic SST difference appears unrelated in pattern to the nearly coincident water vapor pattern. An application of the WVSST to week-long composites of global area coverage (GAC) AVHRR data demonstrates again the manner in which the WVSST corrects the AVHRR for atmospheric moisture attenuation. By comparison, the quadratic SST method underestimates the SST corrections in the lower latitudes and overestimates the SST in the higher latitudes. Correlations between the AVHRR thermal channel differences and the SSM/I water vapor demonstrate the inability of the channel difference to represent water vapor in the midlatitudes and high latitudes during summer. Compared against drifting buoy data, the WVSST and the quadratic SST both exhibit the same general behavior, with relatively small differences from the buoy temperatures.

  16. Evaluation of the Effect of Attenuation Correction by External CT in a Semiconductor SPECT.

    PubMed

    Uchibe, Taku; Miyai, Masahiro; Yata, Nobuhiro; Haramoto, Masuo; Yamamoto, Yasushi; Nakamura, Megumi; Kitagaki, Hajime; Takahashi, Yasuyuki

    2016-07-01

    The Discovery NM530c with a cadmium-zinc-telluride detector (CdZnTe-SPECT) is superior to the conventional Anger-type SPECT with a sodium-iodide detector (NaI-SPECT) in terms of sensitivity and spatial resolution. However, in clinical use, even with CdZnTe-SPECT, the decrease in myocardial counts due to gamma-ray attenuation remains an issue. This study was conducted to evaluate the effect of computed tomography attenuation correction (CTAC) in CdZnTe-SPECT with the help of an external CT. Using a heart phantom, we evaluated the correction effect on uniformity, the influence of differences in attenuation distance, the contrast ratio, and the uptake rate. The phantom studies showed a good correction effect. In the clinical study, there was a statistically significant difference between the contrast ratios before and after CTAC in the inferior wall. In addition, the contrast ratio before and after CTAC in the CdZnTe-SPECT image was equal to that of the NaI-SPECT image. These results suggest that CTAC using external CT in CdZnTe-SPECT is clinically useful for the inferior wall. PMID:27440705

  17. The goal of forming accurate impressions during social interactions: attenuating the impact of negative expectancies.

    PubMed

    Neuberg, S L

    1989-03-01

    Investigated the idea that impression formation goals may regulate the impact that perceiver expectancies have on social interactions. In simulated interviews, interviewers Ss were given a negative expectancy about one applicant S and no expectancy about another. Half the interviewers were encouraged to form accurate impressions; the others were not. As predicted, no-goal interviewers exhibited a postinteraction impression bias against the negative-expectancy applicants, whereas the accuracy-goal interviewers did not. Moreover, the ability of the accuracy goal to reduce this bias was apparently mediated by more extensive and less biased interviewer information-gathering, which in turn elicited an improvement in negative-expectancy applicants' performances. These findings stress the theoretical and practical importance of considering the motivational context within which expectancy-tinged social interactions occur.

  19. Effect of Non-Alignment/Alignment of Attenuation Map Without/With Emission Motion Correction in Cardiac SPECT/CT

    PubMed Central

    Dey, Joyoni; Segars, W. Paul; Pretorius, P. Hendrik; King, Michael A.

    2015-01-01

Purpose: We investigate the differences, without and with respiratory motion correction, in apparent imaging-agent localization in reconstructed emission images when the attenuation maps used for attenuation correction (from CT) are misaligned with the patient anatomy during emission imaging due to differences in respiratory state. Methods: We investigated the use of attenuation maps acquired at different states of a 2 cm amplitude respiratory cycle (end-expiration, end-inspiration, the center map, the average transmission map, and a large breath-hold beyond the range of respiration during emission imaging) to correct for attenuation in MLEM reconstruction for several anatomical variants of the NCAT phantom, both with and without non-rigid motion between the heart and sub-diaphragmatic regions (liver, kidneys, etc.). We tested these cases with and without emission motion correction and attenuation-map alignment/non-alignment. Results: For the NCAT default male anatomy, the false count-reduction due to breathing was largely removed upon emission motion correction in the large majority of cases. Exceptions (for the default male) were the cases using the large-breath-hold end-inspiration map (TI_EXT), the end-expiration (TE) map, and, to a smaller extent, the end-inspiration map (TI). However, rigidly moving the attenuation maps to align the heart region reduced the remaining count-reduction artifacts. For the female anatomy, count-reduction remained after motion correction with rigid map alignment owing to breast soft-tissue misalignment. Quantitatively, after the transmission (rigid) alignment correction, the polar-map 17-segment RMS error with respect to the reference (motion-less case) decreased by 46.5% on average for the extreme breath-hold case.
The reductions were 40.8% for the end-expiration map and 31.9% for the end-inspiration map on average, comparable to the semi-ideal case where each state uses its own attenuation map for
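The 17-segment polar-map RMS error used above to quantify residual artifacts is a plain root-mean-square over segment values; a minimal sketch (function and variable names are ours, not the paper's):

```python
import math

def polar_map_rms_error(segment_values, reference_values):
    """RMS error of 17-segment polar-map values against the
    motion-free reference reconstruction."""
    assert len(segment_values) == len(reference_values) == 17
    squared = [(s - r) ** 2 for s, r in zip(segment_values, reference_values)]
    return math.sqrt(sum(squared) / len(squared))
```

Percent reductions like the 46.5% quoted above would then compare this error before and after map alignment.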

  20. Accurate Relative Location Estimates for the North Korean Nuclear Tests Using Empirical Slowness Corrections

    NASA Astrophysics Data System (ADS)

Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna, T.; Mykkeltveit, S.

    2016-10-01

Declared North Korean nuclear tests in 2006, 2009, 2013, and 2016 were observed seismically at regional and teleseismic distances. Waveform similarity allows the events to be located relative to one another with far greater accuracy than their absolute locations can be determined from seismic data alone. There is now significant redundancy in the data given the large number of regional and teleseismic stations that have recorded multiple events, and relative location estimates can be confirmed independently by performing calculations on many mutually exclusive sets of measurements. Using a 1-dimensional global velocity model, the distances between the events estimated using teleseismic P phases are found to be approximately 25% shorter than the distances between events estimated using regional Pn phases. The 2009, 2013, and 2016 events all took place within 1 km of each other, and the discrepancy between the regional and teleseismic relative location estimates is no more than about 150 m. The discrepancy is much more significant when estimating the location of the more distant 2006 event relative to the later explosions, with regional and teleseismic estimates varying by many hundreds of meters. The relative location of the 2006 event is challenging given the smaller number of observing stations, the lower signal-to-noise ratio, and significant waveform dissimilarity at some regional stations. The 2006 event is, however, highly significant in constraining the absolute locations in the terrain at the Punggye-ri test site in relation to observed surface infrastructure. For each seismic arrival used to estimate the relative locations, we define a slowness scaling factor which multiplies the gradient of seismic traveltime versus distance, evaluated at the source, relative to the applied 1-d velocity model. A procedure for estimating correction terms which reduce the double-difference time residual vector norms is presented, together with a discussion of the associated uncertainty. The
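The slowness scaling factor described above multiplies the traveltime-versus-distance gradient before it is applied to the inter-event offset; a deliberately simplified sketch of how such a factor enters a predicted differential arrival time (function name, units, and numbers are all illustrative, not from the paper):

```python
def predicted_time_difference(slowness_s_per_km, scale_factor, along_ray_offset_km):
    """Differential arrival time (s) between two nearby co-located
    events at one station: the model slowness (traveltime gradient
    at the source) is multiplied by an empirical scale factor before
    being applied to the inter-event offset projected along the ray."""
    return scale_factor * slowness_s_per_km * along_ray_offset_km
```

A scale factor below 1 shrinks the predicted moveout, which is the kind of adjustment needed to reconcile the ~25% discrepancy between regional and teleseismic relative locations.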

  1. Accurate description of van der Waals complexes by density functional theory including empirical corrections.

    PubMed

    Grimme, Stefan

    2004-09-01

An empirical method to account for van der Waals interactions in practical calculations with density functional theory (termed DFT-D) is tested for a wide variety of molecular complexes. As in previous schemes, the dispersive energy is described by damped interatomic potentials of the form C6·R^-6. The use of pure, gradient-corrected density functionals (BLYP and PBE), together with the resolution-of-the-identity (RI) approximation for the Coulomb operator, allows very efficient computations for large systems. In contrast to previous work, extended AO basis sets of polarized TZV or QZV quality are employed, which reduces the basis set superposition error to a negligible extent. By using a global scaling factor for the atomic C6 coefficients, the functional dependence of the results could be strongly reduced. The "double counting" of correlation effects for strongly bound complexes is found to be insignificant if steep damping functions are employed. The method is applied to a total of 29 complexes of atoms and small molecules (Ne, CH4, NH3, H2O, CH3F, N2, F2, formic acid, ethene, and ethyne) with each other and with benzene; to benzene, naphthalene, pyrene, and coronene dimers; the naphthalene trimer; coronene·H2O; and four H-bonded and stacked DNA base pairs (AT and GC). In almost all cases, very good agreement with reliable theoretical or experimental results for binding energies and intermolecular distances is obtained. For stacked aromatic systems and the important base pairs, the DFT-D-BLYP model seems to be even superior to standard MP2 treatments that systematically overbind. The good results obtained suggest the approach as a practical tool to describe the properties of many important van der Waals systems in chemistry. Furthermore, the DFT-D data may either be used to calibrate much simpler (e.g., force-field) potentials or the optimized structures can be used as input for more accurate ab initio calculations of the interaction energies.

  2. An analytical algorithm for skew-slit imaging geometry with nonuniform attenuation correction

    SciTech Connect

    Huang Qiu; Zeng, Gengsheng L.

    2006-04-15

    The pinhole collimator is currently the collimator of choice in small animal single photon emission computed tomography (SPECT) imaging because it can provide high spatial resolution and reasonable sensitivity when the animal is placed very close to the pinhole. It is well known that if the collimator rotates around the object (e.g., a small animal) in a circular orbit to form a cone-beam imaging geometry with a planar trajectory, the acquired data are not sufficient for an exact artifact-free image reconstruction. In this paper a novel skew-slit collimator is mounted instead of the pinhole collimator in order to significantly reduce the image artifacts caused by the geometry. The skew-slit imaging geometry is a more generalized version of the pinhole imaging geometry. The multiple pinhole geometry can also be extended to the multiple-skew-slit geometry. An analytical algorithm for image reconstruction based on the tilted fan-beam inversion is developed with nonuniform attenuation compensation. Numerical simulation shows that the axial artifacts are evidently suppressed in the skew-slit images compared to the pinhole images and the attenuation correction is effective.

  3. Evaluation and automatic correction of metal-implant-induced artifacts in MR-based attenuation correction in whole-body PET/MR imaging.

    PubMed

    Schramm, G; Maus, J; Hofheinz, F; Petr, J; Lougovski, A; Beuthien-Baumann, B; Platzek, I; van den Hoff, J

    2014-06-01

    The aim of this paper is to describe a new automatic method for compensation of metal-implant-induced segmentation errors in MR-based attenuation maps (MRMaps) and to evaluate the quantitative influence of those artifacts on the reconstructed PET activity concentration. The developed method uses a PET-based delineation of the patient contour to compensate metal-implant-caused signal voids in the MR scan that is segmented for PET attenuation correction. PET emission data of 13 patients with metal implants examined in a Philips Ingenuity PET/MR were reconstructed with the vendor-provided method for attenuation correction (MRMap(orig), PET(orig)) and additionally with a method for attenuation correction (MRMap(cor), PET(cor)) developed by our group. MRMaps produced by both methods were visually inspected for segmentation errors. The segmentation errors in MRMap(orig) were classified into four classes (L1 and L2 artifacts inside the lung and B1 and B2 artifacts inside the remaining body depending on the assigned attenuation coefficients). The average relative SUV differences (ε(rel)(av)) between PET(orig) and PET(cor) of all regions showing wrong attenuation coefficients in MRMap(orig) were calculated. Additionally, relative SUV(mean) differences (ε(rel)) of tracer accumulations in hot focal structures inside or in the vicinity of these regions were evaluated. MRMap(orig) showed erroneous attenuation coefficients inside the regions affected by metal artifacts and inside the patients' lung in all 13 cases. In MRMap(cor), all regions with metal artifacts, except for the sternum, were filled with the soft-tissue attenuation coefficient and the lung was correctly segmented in all patients. MRMap(cor) only showed small residual segmentation errors in eight patients. ε(rel)(av) (mean ± standard deviation) were: (-56 ± 3)% for B1, (-43 ± 4)% for B2, (21 ± 18)% for L1, (120 ± 47)% for L2 regions. ε(rel) (mean ± standard deviation) of hot focal structures were: (-52

  4. Method for transforming CT images for attenuation correction in PET/CT imaging

    SciTech Connect

    Carney, Jonathan P.J.; Townsend, David W.; Rappoport, Vitaliy; Bendriem, Bernard

    2006-04-15

A tube-voltage-dependent scheme is presented for transforming Hounsfield units (HU) measured by different computed tomography (CT) scanners at different x-ray tube voltages (kVp) to 511 keV linear attenuation values for attenuation correction in positron emission tomography (PET) data reconstruction. A Gammex 467 electron density CT phantom was imaged using a Siemens Sensation 16-slice CT, a Siemens Emotion 6-slice CT, a GE Lightspeed 16-slice CT, a Hitachi CXR 4-slice CT, and a Toshiba Aquilion 16-slice CT at tube voltages ranging from 80 to 140 kVp. All of these CT scanners are also available in combination with a PET scanner as a PET/CT tomograph. HU obtained for various reference tissue substitutes in the phantom were compared with the known linear attenuation values at 511 keV. The transformation, appropriate for lung, soft tissue, and bone, yields the function 9.6×10^-5·(HU+1000) below a threshold of ~50 HU and a·(HU+1000)+b above the threshold, where a and b are fixed parameters that depend on the kVp setting. The use of the kVp-dependent scaling procedure leads to a significant improvement in reconstructed PET activity levels in phantom measurements, resolving errors of almost 40% otherwise seen for the case of dense bone phantoms at 80 kVp. Results are also presented for patient studies involving multiple CT scans at different kVp settings, which should all lead to the same 511 keV linear attenuation values. A linear fit to values obtained from 140 kVp CT images using the kVp-dependent scaling, plotted as a function of the corresponding values obtained from 80 kVp CT images, yielded y = 1.003x − 0.001 with an R² value of 0.999, indicating that the same values are obtained to a high degree of accuracy.
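The bilinear transformation described above can be sketched directly. The sub-threshold slope and the ~50 HU break point are the values quoted in the abstract; the above-threshold parameters `a` and `b` are illustrative stand-ins in the vicinity of a 140 kVp setting, not the paper's exact tabulated values:

```python
def hu_to_mu511(hu, a=5.64e-5, b=4.08e-2, threshold_hu=50.0):
    """Bilinear CT-number to 511 keV linear attenuation (1/cm).

    Below ~50 HU (air to soft tissue): mu = 9.6e-5 * (HU + 1000),
    which maps air (-1000 HU) to 0 and water (0 HU) to ~0.096/cm.
    Above the threshold (bone-like): mu = a*(HU + 1000) + b, where
    a and b depend on the CT tube voltage (illustrative defaults here).
    """
    if hu < threshold_hu:
        return 9.6e-5 * (hu + 1000.0)
    return a * (hu + 1000.0) + b
```

The kVp dependence enters only through `a` and `b`, which is why a single sub-threshold branch suffices for all tube voltages.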

  5. Towards improved hardware component attenuation correction in PET/MR hybrid imaging.

    PubMed

    Paulus, D H; Tellmann, L; Quick, H H

    2013-11-21

    In positron emission tomography/computed tomography (PET/CT) hybrid imaging attenuation correction (AC) of the patient tissue and patient table is performed by converting the CT-based Hounsfield units (HU) to linear attenuation coefficients (LAC) of PET. When applied to the new field of hardware component AC in PET/magnetic resonance (MR) hybrid imaging, this conversion method may result in local overcorrection of PET activity values. The aim of this study thus was to optimize the conversion parameters for CT-based AC of hardware components in PET/MR. Systematic evaluation and optimization of the HU to LAC conversion parameters has been performed for the hardware component attenuation map (µ-map) of a flexible radiofrequency (RF) coil used in PET/MR imaging. Furthermore, spatial misregistration of this RF coil to its µ-map was simulated by shifting the µ-map in different directions and the effect on PET quantification was evaluated. Measurements of a PET NEMA standard emission phantom were performed on an integrated hybrid PET/MR system. Various CT parameters were used to calculate different µ-maps for the flexible RF coil and to evaluate the impact on the PET activity concentration. A 511 keV transmission scan of the local RF coil was used as standard of reference to adapt the slope of the conversion from HUs to LACs at 511 keV. The average underestimation of the PET activity concentration due to the non-attenuation corrected RF coil in place was calculated to be 5.0% in the overall phantom. When considering attenuation only in the upper volume of the phantom, the average difference to the reference scan without RF coil is 11.0%. When the PET/CT conversion is applied, an average overestimation of 3.1% (without extended CT scale) and 4.2% (with extended CT scale) is observed in the top volume of the NEMA phantom. Using the adapted conversion resulting from this study, the deviation in the top volume of the phantom is reduced to -0.5% and shows the lowest

  6. Accurate elevation and normal moveout corrections of seismic reflection data on rugged topography

    USGS Publications Warehouse

    Liu, J.; Xia, J.; Chen, C.; Zhang, G.

    2005-01-01

The application of the seismic reflection method is often limited in areas of complex terrain. The problem is the incorrect correction of time shifts caused by topography. To apply the normal moveout (NMO) correction to reflection data correctly, static corrections must be applied in advance to compensate for the time distortions of topography and the time delays from near-surface weathered layers. For environmental and engineering investigations, weathered layers are our targets, so the static correction mainly serves to adjust time shifts due to an undulating surface. In practice, seismic reflected raypaths are assumed to be almost vertical through the near-surface layers because these layers have much lower velocities than the layers below. This assumption is acceptable in most cases since it results in little residual error for small elevation changes and small offsets in reflection events. Although static algorithms based on choosing a floating datum related to common-midpoint gathers or residual surface-consistent functions are available and effective, errors caused by the assumption of vertical raypaths often generate pseudo-indications of structures. This paper presents a comparison of corrections based on vertical raypaths and on biased (non-vertical) raypaths. It also provides an approach for combining elevation and NMO corrections. The advantages of the approach are demonstrated with synthetic and real-world examples of multi-coverage seismic reflection surveys on rough topography. © The Royal Society of New Zealand 2005.
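The two corrections being combined can be sketched in their textbook forms under the vertical-raypath assumption the paper examines; function names, the datum, and the near-surface velocity are illustrative, not the paper's:

```python
import math

def elevation_static_shift(elevation_m, datum_m, v_near_surface_mps):
    """Static time shift (s) under the vertical-raypath assumption:
    traces are shifted as if sources and receivers sat on a flat datum,
    with the excess column traversed at the near-surface velocity."""
    return (elevation_m - datum_m) / v_near_surface_mps

def nmo_traveltime(t0_s, offset_m, v_mps):
    """Hyperbolic reflection traveltime used by the NMO correction:
    t(x) = sqrt(t0^2 + (x/v)^2); the NMO shift is t(x) - t0."""
    return math.sqrt(t0_s ** 2 + (offset_m / v_mps) ** 2)
```

The errors the paper addresses arise because the static shift above is computed along a vertical ray even when the true near-surface raypath is oblique.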

  7. Application of Chang's attenuation correction technique for single-photon emission computed tomography partial angle acquisition of Jaszczak phantom.

    PubMed

    Saha, Krishnendu; Hoyt, Sean C; Murray, Bryon M

    2016-01-01

The acquisition and processing of the Jaszczak phantom is a test recommended by the American College of Radiology for evaluation of gamma camera system performance. To produce the reconstructed phantom image for quality evaluation, attenuation correction is applied. The attenuation of counts originating from the center of the phantom is greater than that of counts originating from the periphery, causing an artifactual appearance of inhomogeneity in the reconstructed image and complicating phantom evaluation. Chang's mathematical formulation is a common method of attenuation correction applied on most gamma cameras that does not require an external transmission source such as computed tomography, radionuclide sources installed within the gantry of the camera, or a flood source. Tomographic acquisition can be performed in two different modes on a dual-detector gamma camera: one where the two detectors are in a 180° configuration and acquire projection images over a full 360°, and the other where the two detectors are positioned in a 90° configuration and acquire projections over only 180°. Though Chang's attenuation correction method has been used for 360° acquisition, its applicability to 180° acquisition remains a question, with one vendor's camera software producing artifacts in the images. This work investigates whether Chang's attenuation correction technique can be applied to both acquisition modes through the development of a Chang's-formulation-based algorithm applicable to both modes. Assessment of attenuation correction performance by phantom uniformity analysis shows improved uniformity with the proposed algorithm (22.6%) compared to the camera software (57.6%). PMID:27051167
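Chang's first-order correction, applied per voxel after reconstruction, can be sketched as follows. The uniform attenuation coefficient and path lengths are illustrative; a real implementation derives the boundary path lengths for each voxel from the fitted body outline, and the choice of 360° versus 180° acquisition changes only which projection angles contribute:

```python
import math

def chang_correction_factor(path_lengths_cm, mu_per_cm=0.15):
    """First-order Chang correction factor for one voxel:
    M / sum_i exp(-mu * l_i), where l_i is the attenuating path
    from the voxel to the phantom boundary for projection angle i
    and M is the number of angles. Multiplies the uncorrected value."""
    transmissions = [math.exp(-mu_per_cm * l) for l in path_lengths_cm]
    return len(transmissions) / sum(transmissions)
```

For a voxel at the center of a uniform cylinder all path lengths are equal, and the factor reduces to exp(mu·l), the largest correction in the phantom, which is exactly where the inhomogeneity artifact described above appears when the correction misbehaves.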

  8. Band-Filling Correction Method for Accurate Adsorption Energy Calculations: A Cu/ZnO Case Study.

    PubMed

    Hellström, Matti; Spångberg, Daniel; Hermansson, Kersti; Broqvist, Peter

    2013-11-12

    We present a simple method, the "band-filling correction", to calculate accurate adsorption energies (Eads) in the low coverage limit from finite-size supercell slab calculations using DFT. We show that it is necessary to use such a correction if charge transfer takes place between the adsorbate and the substrate, resulting in the substrate bands either filling up or becoming depleted. With this correction scheme, we calculate Eads of an isolated Cu atom adsorbed on the ZnO(101̅0) surface. Without the correction, the calculated Eads is highly coverage-dependent, even for surface supercells that would typically be considered very large (in the range from 1 nm × 1 nm to 2.5 nm × 2.5 nm). The correction scheme works very well for semilocal functionals, where the corrected Eads is converged within 0.01 eV for all coverages. The correction scheme also works well for hybrid functionals if a large supercell is used and the exact exchange interaction is screened. PMID:26583386

  9. Improved UTE-based attenuation correction for cranial PET-MR using dynamic magnetic field monitoring

    SciTech Connect

    Aitken, A. P.; Giese, D.; Tsoumpas, C.; Schleyer, P.; Kozerke, S.; Prieto, C.; Schaeffter, T.

    2014-01-15

Purpose: Ultrashort echo time (UTE) MRI has been proposed as a way to produce segmented attenuation maps for PET, as it provides contrast between bone, air, and soft tissue. However, UTE sequences require samples to be acquired during rapidly changing gradient fields, which makes the resulting images prone to eddy current artifacts. In this work it is demonstrated that this can lead to misclassification of tissues in segmented attenuation maps (AC maps) and that these effects can be corrected for by measuring the true k-space trajectories using a magnetic field camera. Methods: The k-space trajectories during a dual echo UTE sequence were measured using a dynamic magnetic field camera. UTE images were reconstructed using nominal trajectories and again using the measured trajectories. A numerical phantom was used to demonstrate the effect of reconstructing with incorrect trajectories. Images of an ovine leg phantom were reconstructed and segmented and the resulting attenuation maps were compared to a segmented map derived from a CT scan of the same phantom, using the Dice similarity measure. The feasibility of the proposed method was demonstrated in in vivo cranial imaging in five healthy volunteers. Simulated PET data were generated for one volunteer to show the impact of misclassifications on the PET reconstruction. Results: Images of the numerical phantom exhibited blurring and edge artifacts on the bone–tissue and air–tissue interfaces when nominal k-space trajectories were used, leading to misclassification of soft tissue as bone and misclassification of bone as air. Images of the tissue phantom and the in vivo cranial images exhibited the same artifacts. The artifacts were greatly reduced when the measured trajectories were used. For the tissue phantom, the Dice coefficient for bone in MR relative to CT was 0.616 using the nominal trajectories and 0.814 using the measured trajectories. The Dice coefficients for soft tissue were 0.933 and 0.934 for the
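The Dice similarity measure quoted in the results compares two binary segmentations as 2|A∩B|/(|A|+|B|); a minimal sketch, with sets of voxel indices standing in for full 3D masks:

```python
def dice_coefficient(voxels_a, voxels_b):
    """Dice similarity 2|A∩B| / (|A| + |B|) between two binary
    segmentations (e.g. bone in the MR-derived map vs. the CT
    reference), each given as a collection of voxel indices."""
    a, b = set(voxels_a), set(voxels_b)
    total = len(a) + len(b)
    return 2.0 * len(a & b) / total if total else 1.0
```

A value of 1.0 means perfect overlap; the improvement from 0.616 to 0.814 for bone reflects the trajectory correction recovering voxels that the nominal reconstruction misclassified.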

  10. Surface EMG measurements during fMRI at 3T: accurate EMG recordings after artifact correction.

    PubMed

    van Duinen, Hiske; Zijdewind, Inge; Hoogduin, Hans; Maurits, Natasha

    2005-08-01

    In this experiment, we have measured surface EMG of the first dorsal interosseus during predefined submaximal isometric contractions (5, 15, 30, 50, and 70% of maximal force) of the index finger simultaneously with fMRI measurements. Since we have used sparse sampling fMRI (3-s scanning; 2-s non-scanning), we were able to compare the mean amplitude of the undisturbed EMG (non-scanning) intervals with the mean amplitude of the EMG intervals during scanning, after MRI artifact correction. The agreement between the mean amplitudes of the corrected and the undisturbed EMG was excellent and the mean difference between the two amplitudes was not significantly different. Furthermore, there was no significant difference between the corrected and undisturbed amplitude at different force levels. In conclusion, we have shown that it is feasible to record surface EMG during scanning and that, after MRI artifact correction, the EMG recordings can be used to quantify isometric muscle activity, even at very low activation intensities.

  11. Impact of CT attenuation correction method on quantitative respiratory-correlated (4D) PET/CT imaging

    SciTech Connect

    Nyflot, Matthew J.; Lee, Tzu-Cheng; Alessio, Adam M.; Kinahan, Paul E.; Wollenweber, Scott D.; Stearns, Charles W.; Bowen, Stephen R.

    2015-01-15

Purpose: Respiratory-correlated (4D) positron emission tomography/computed tomography (PET/CT) is used to mitigate errors from respiratory motion; however, the optimal CT attenuation correction (CTAC) method for 4D PET/CT is unknown. The authors performed a phantom study to evaluate the quantitative performance of CTAC methods for 4D PET/CT in the ground truth setting. Methods: A programmable respiratory motion phantom with a custom movable insert designed to emulate a lung lesion and lung tissue was used for this study. The insert was driven by one of five waveforms: two sinusoidal waveforms or three patient-specific respiratory waveforms. 3D PET and 4D PET images of the phantom under motion were acquired and reconstructed with six CTAC methods: helical breath-hold (3DHEL), helical free-breathing (3DMOT), 4D phase-averaged (4DAVG), 4D maximum intensity projection (4DMIP), 4D phase-matched (4DMATCH), and 4D end-exhale (4DEXH) CTAC. Recovery of SUVmax, SUVmean, SUVpeak, and segmented tumor volume was evaluated as RCmax, RCmean, RCpeak, and RCvol, representing percent difference relative to the static ground truth case. Paired Wilcoxon tests and Kruskal–Wallis ANOVA were used to test for significant differences. Results: For 4D PET imaging, the maximum intensity projection CTAC produced significantly more accurate recovery coefficients than all other CTAC methods (p < 0.0001 over all metrics). Over all motion waveforms, ratios of 4DMIP CTAC recovery were 0.2 ± 5.4, −1.8 ± 6.5, −3.2 ± 5.0, and 3.0 ± 5.9 for RCmax, RCpeak, RCmean, and RCvol. In comparison, recovery coefficients for phase-matched CTAC were −8.4 ± 5.3, −10.5 ± 6.2, −7.6 ± 5.0, and −13.0 ± 7.7 for RCmax, RCpeak, RCmean, and RCvol. When testing differences between phases over all CTAC methods and waveforms, end-exhale phases were significantly more accurate (p = 0.005). However, these differences were driven by
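The recovery coefficients RCmax, RCmean, RCpeak, and RCvol are defined above as percent differences relative to the static ground-truth acquisition; a minimal sketch (function name is ours, not the paper's):

```python
def recovery_error_percent(measured, ground_truth):
    """Percent difference of a metric (SUVmax, SUVmean, SUVpeak, or
    segmented tumor volume) relative to its static ground-truth value.
    Zero means perfect recovery; negative means underestimation."""
    return 100.0 * (measured - ground_truth) / ground_truth
```

On this scale the 4DMIP figures quoted above (e.g. 0.2 ± 5.4 for RCmax) sit close to zero, while the phase-matched figures (e.g. −8.4 ± 5.3) indicate systematic underestimation.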

  12. Ultra-low dose CT attenuation correction for PET/CT.

    PubMed

    Xia, Ting; Alessio, Adam M; De Man, Bruno; Manjeshwar, Ravindra; Asma, Evren; Kinahan, Paul E

    2012-01-21

    A challenge for positron emission tomography/computed tomography (PET/CT) quantitation is patient respiratory motion, which can cause an underestimation of lesion activity uptake and an overestimation of lesion volume. Several respiratory motion correction methods benefit from longer duration CT scans that are phase matched with PET scans. However, even with the currently available, lowest dose CT techniques, extended duration cine CT scans impart a substantially high radiation dose. This study evaluates methods designed to reduce CT radiation dose in PET/CT scanning. We investigated selected combinations of dose reduced acquisition and noise suppression methods that take advantage of the reduced requirement of CT for PET attenuation correction (AC). These include reducing CT tube current, optimizing CT tube voltage, adding filtration, CT sinogram smoothing and clipping. We explored the impact of these methods on PET quantitation via simulations on different digital phantoms. CT tube current can be reduced much lower for AC than that in low dose CT protocols. Spectra that are higher energy and narrower are generally more dose efficient with respect to PET image quality. Sinogram smoothing could be used to compensate for the increased noise and artifacts at radiation dose reduced CT images, which allows for a further reduction of CT dose with no penalty for PET image quantitation. When CT is not used for diagnostic and anatomical localization purposes, we showed that ultra-low dose CT for PET/CT is feasible. The significant dose reduction strategies proposed here could enable respiratory motion compensation methods that require extended duration CT scans and reduce radiation exposure in general for all PET/CT imaging. PMID:22156174

  13. Automatic detection of cardiovascular risk in CT attenuation correction maps in Rb-82 PET/CTs

    NASA Astrophysics Data System (ADS)

    Išgum, Ivana; de Vos, Bob D.; Wolterink, Jelmer M.; Dey, Damini; Berman, Daniel S.; Rubeaux, Mathieu; Leiner, Tim; Slomka, Piotr J.

    2016-03-01

CT attenuation correction (CTAC) images acquired with PET/CT visualize coronary artery calcium (CAC) and enable CAC quantification. CAC scores acquired with CTAC have been suggested as a marker of cardiovascular disease (CVD). In this work, an algorithm previously developed for automatic CAC scoring in dedicated cardiac CT was applied to automatic CAC detection in CTAC. The study included 134 consecutive patients undergoing Rb-82 PET/CT. Low-dose rest CTAC scans were acquired (100 kV, 11 mAs, 1.4 mm × 1.4 mm × 3 mm voxel size). An experienced observer defined the reference standard with the clinically used intensity threshold for calcium identification (130 HU). Five scans were removed from analysis due to artifacts. The algorithm extracted potential CAC by intensity-based thresholding and 3D connected component labeling. Each candidate was described by location, size, shape and intensity features. An ensemble of extremely randomized decision trees was used to identify CAC. The data set was randomly divided into training and test sets. Automatically identified CAC was quantified using volume and Agatston scores. In 33 test scans, the system detected on average 469 mm³ of 730 mm³ (64%) of CAC, with 36 mm³ of false-positive volume per scan. The intraclass correlation coefficient for volume scores was 0.84. Each patient was assigned to one of four CVD risk categories based on the Agatston score (0-10, 11-100, 101-400, >400). The correct CVD category was assigned to 85% of patients (Cohen's linearly weighted κ = 0.82). Automatic detection of CVD risk based on CAC scoring in rest CTAC images is feasible. This may enable large-scale studies evaluating the clinical value of CAC scoring in CTAC data.
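The four-way risk stratification by Agatston score can be sketched directly; the category boundaries are those given in the abstract (the function name is ours):

```python
def cvd_risk_category(agatston_score):
    """Map an Agatston calcium score onto the four CVD risk
    categories used in the study: 0-10, 11-100, 101-400, >400."""
    if agatston_score <= 10:
        return "0-10"
    if agatston_score <= 100:
        return "11-100"
    if agatston_score <= 400:
        return "101-400"
    return ">400"
```

The reported 85% category agreement means the automatically derived score landed in the same bin as the observer's score for 85% of test patients.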

  14. Ultra-low dose CT attenuation correction for PET/CT

    NASA Astrophysics Data System (ADS)

    Xia, Ting; Alessio, Adam M.; De Man, Bruno; Manjeshwar, Ravindra; Asma, Evren; Kinahan, Paul E.

    2012-01-01

    A challenge for positron emission tomography/computed tomography (PET/CT) quantitation is patient respiratory motion, which can cause an underestimation of lesion activity uptake and an overestimation of lesion volume. Several respiratory motion correction methods benefit from longer duration CT scans that are phase matched with PET scans. However, even with the currently available, lowest dose CT techniques, extended duration cine CT scans impart a substantially high radiation dose. This study evaluates methods designed to reduce CT radiation dose in PET/CT scanning. We investigated selected combinations of dose reduced acquisition and noise suppression methods that take advantage of the reduced requirement of CT for PET attenuation correction (AC). These include reducing CT tube current, optimizing CT tube voltage, adding filtration, CT sinogram smoothing and clipping. We explored the impact of these methods on PET quantitation via simulations on different digital phantoms. CT tube current can be reduced much lower for AC than that in low dose CT protocols. Spectra that are higher energy and narrower are generally more dose efficient with respect to PET image quality. Sinogram smoothing could be used to compensate for the increased noise and artifacts at radiation dose reduced CT images, which allows for a further reduction of CT dose with no penalty for PET image quantitation. When CT is not used for diagnostic and anatomical localization purposes, we showed that ultra-low dose CT for PET/CT is feasible. The significant dose reduction strategies proposed here could enable respiratory motion compensation methods that require extended duration CT scans and reduce radiation exposure in general for all PET/CT imaging.

  15. Size-extensivity-corrected multireference configuration interaction schemes to accurately predict bond dissociation energies of oxygenated hydrocarbons

    SciTech Connect

    Oyeyemi, Victor B.; Krisiloff, David B.; Keith, John A.; Libisch, Florian; Pavone, Michele; Carter, Emily A.

    2014-01-28

    Oxygenated hydrocarbons play important roles in combustion science as renewable fuels and additives, but many details about their combustion chemistry remain poorly understood. Although many methods exist for computing accurate electronic energies of molecules at equilibrium geometries, a consistent description of entire combustion reaction potential energy surfaces (PESs) requires multireference correlated wavefunction theories. Here we use bond dissociation energies (BDEs) as a foundational metric to benchmark methods based on multireference configuration interaction (MRCI) for several classes of oxygenated compounds (alcohols, aldehydes, carboxylic acids, and methyl esters). We compare results from multireference singles and doubles configuration interaction to those utilizing a posteriori and a priori size-extensivity corrections, benchmarked against experiment and coupled cluster theory. We demonstrate that size-extensivity corrections are necessary for chemically accurate BDE predictions even in relatively small molecules and furnish examples of unphysical BDE predictions resulting from using too-small orbital active spaces. We also outline the specific challenges in using MRCI methods for carbonyl-containing compounds. The resulting complete basis set extrapolated, size-extensivity-corrected MRCI scheme produces BDEs generally accurate to within 1 kcal/mol, laying the foundation for this scheme's use on larger molecules and for more complex regions of combustion PESs.
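The benchmark metric itself is simple arithmetic on total electronic energies; a hedged sketch (the energies below are made-up placeholders, and a real workflow also applies zero-point, basis-set, and size-extensivity corrections not shown here):

```python
HARTREE_TO_KCAL_PER_MOL = 627.509  # standard conversion factor

def bond_dissociation_energy(e_molecule_hartree, fragment_energies_hartree):
    """Bond dissociation energy in kcal/mol from total electronic
    energies in hartree: BDE = sum(E_fragments) - E_molecule,
    positive when the molecule is bound relative to its fragments."""
    delta = sum(fragment_energies_hartree) - e_molecule_hartree
    return delta * HARTREE_TO_KCAL_PER_MOL
```

The "chemically accurate to within 1 kcal/mol" claim above is a statement about the error of this quantity against experiment, which is why small energy errors in either term matter so much.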

  16. Size-extensivity-corrected multireference configuration interaction schemes to accurately predict bond dissociation energies of oxygenated hydrocarbons

    NASA Astrophysics Data System (ADS)

    Oyeyemi, Victor B.; Krisiloff, David B.; Keith, John A.; Libisch, Florian; Pavone, Michele; Carter, Emily A.

    2014-01-01

    Oxygenated hydrocarbons play important roles in combustion science as renewable fuels and additives, but many details about their combustion chemistry remain poorly understood. Although many methods exist for computing accurate electronic energies of molecules at equilibrium geometries, a consistent description of entire combustion reaction potential energy surfaces (PESs) requires multireference correlated wavefunction theories. Here we use bond dissociation energies (BDEs) as a foundational metric to benchmark methods based on multireference configuration interaction (MRCI) for several classes of oxygenated compounds (alcohols, aldehydes, carboxylic acids, and methyl esters). We compare results from multireference singles and doubles configuration interaction to those utilizing a posteriori and a priori size-extensivity corrections, benchmarked against experiment and coupled cluster theory. We demonstrate that size-extensivity corrections are necessary for chemically accurate BDE predictions even in relatively small molecules and furnish examples of unphysical BDE predictions resulting from using too-small orbital active spaces. We also outline the specific challenges in using MRCI methods for carbonyl-containing compounds. The resulting complete basis set extrapolated, size-extensivity-corrected MRCI scheme produces BDEs generally accurate to within 1 kcal/mol, laying the foundation for this scheme's use on larger molecules and for more complex regions of combustion PESs.

  17. AN ACCURATE NEW METHOD OF CALCULATING ABSOLUTE MAGNITUDES AND K-CORRECTIONS APPLIED TO THE SLOAN FILTER SET

    SciTech Connect

    Beare, Richard; Brown, Michael J. I.; Pimbblet, Kevin

    2014-12-20

    We describe an accurate new method for determining absolute magnitudes, and hence also K-corrections, that is simpler than most previous methods, being based on a quadratic function of just one suitably chosen observed color. The method relies on the extensive and accurate new set of 129 empirical galaxy template spectral energy distributions from Brown et al. A key advantage of our method is that we can reliably estimate random errors in computed absolute magnitudes due to galaxy diversity, photometric error and redshift error. We derive K-corrections for the five Sloan Digital Sky Survey filters and provide parameter tables for use by the astronomical community. Using the New York Value-Added Galaxy Catalog, we compare our K-corrections with those from kcorrect. Our K-corrections produce absolute magnitudes that are generally in good agreement with kcorrect. Absolute griz magnitudes differ by less than 0.02 mag and those in the u band by ∼0.04 mag. The evolution of rest-frame colors as a function of redshift is better behaved using our method, with relatively few galaxies being assigned anomalously red colors and a tight red sequence being observed across the whole 0.0 < z < 0.5 redshift range.
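The functional form described above, a quadratic in a single observed color with redshift-dependent coefficients, is simple enough to sketch directly. The coefficients below are placeholders for illustration only, not values from the published parameter tables:

```python
def k_correction(z_coeffs, color):
    """K-correction modelled as a quadratic in one observed color:
    K(z, c) = a(z)*c**2 + b(z)*c + d(z).
    z_coeffs holds (a, b, d) for the redshift of interest; the numbers
    used below are hypothetical, not the published tables."""
    a, b, d = z_coeffs
    return a * color**2 + b * color + d

def absolute_magnitude(m_app, dist_mod, z_coeffs, color):
    """Absolute magnitude from apparent magnitude, distance modulus
    and the color-based K-correction."""
    return m_app - dist_mod - k_correction(z_coeffs, color)

# illustrative galaxy: m = 18.3, distance modulus 40.0, color 1.2
M = absolute_magnitude(18.3, 40.0, (0.05, 0.30, -0.02), 1.2)
print(M)
```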

  18. An experimental correction proposed for an accurate determination of mass diffusivity of wood in steady regime

    NASA Astrophysics Data System (ADS)

    Zohoun, Sylvain; Agoua, Eusèbe; Degan, Gérard; Perre, Patrick

    2002-08-01

This paper presents an experimental study of mass diffusion in the hygroscopic region of four temperate and three tropical wood species. To simplify the interpretation of the phenomena, a dimensionless parameter called the reduced diffusivity is defined; this parameter varies from 0 to 1. The method first determines this parameter from measurements of the mass flux, taking into account the operating conditions of a standard device (tightness, dimensional variations, easy installation of the wood samples, and good stability of temperature and humidity). The reasons why this parameter must be corrected are then presented, and an abacus for this correction of the mass diffusivity of wood in the steady regime is plotted. This work constitutes an advance in the characterisation of forest species.

  19. A proposal for PET/MRI attenuation correction with μ-values measured using a fixed-position radiation source and MRI segmentation

    NASA Astrophysics Data System (ADS)

    Kawaguchi, Hiroshi; Hirano, Yoshiyuki; Yoshida, Eiji; Kershaw, Jeff; Shiraishi, Takahiro; Suga, Mikio; Ikoma, Yoko; Obata, Takayuki; Ito, Hiroshi; Yamaya, Taiga

    2014-01-01

Several MRI-based attenuation correction methods have been reported for PET/MRI; these methods are expected to make efficient use of high-quality anatomical MRIs and reduce the radiation dose for PET/MRI scanning. The accuracy of the attenuation map (μ-map) from an MRI depends on the accuracy of tissue segmentation and the attenuation coefficients to be assigned (μ-values). In this study, we proposed an MRI-based μ-value estimation method with a non-rotational radiation source to construct a suitable μ-map for PET/MRI. The proposed method uses an accurately segmented tissue map, the partial path length of each tissue, and detected intensities of attenuated radiation from a fixed-position (rather than a rotating) radiation source to obtain the μ-map. We estimated the partial path length from a virtual blank scan of fixed-point radiation with the same scanner geometry using the known tissue map from MRI. The μ-values of every tissue were estimated by inverting a linear relationship involving the partial path lengths and measured radioactivity intensity. Validation of the proposed method was performed by calculating a fixed-point data set based upon a real transmission scan. The root-mean-square errors between the μ-values derived from a conventional transmission scan and those obtained with our proposed method were 2.4±1.4%, 17.4±9.1% and 6.6±4.3% for brain, bone and soft tissue other than brain, respectively. Although the error estimates for bone and soft tissue are not insignificant, the method we propose is able to estimate the brain μ-value accurately, and it is this factor that most strongly affects the quantitative value of PET images because of the large volumetric ratio of the brain.
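The inversion step described above, recovering per-tissue μ-values from partial path lengths and measured attenuation, reduces via the Beer-Lambert law to a linear least-squares problem. A noise-free toy sketch with synthetic data (the tissue classes and μ-values are illustrative, not the authors' calibration):

```python
import numpy as np

rng = np.random.default_rng(0)

# partial path lengths (cm) of 50 rays through three tissue classes
# (e.g. brain, bone, other soft tissue)
L = rng.uniform(0.0, 10.0, size=(50, 3))
mu_true = np.array([0.096, 0.17, 0.10])  # illustrative 511 keV mu-values, cm^-1

# Beer-Lambert: I = I0 * exp(-L @ mu)  =>  ln(I0 / I) = L @ mu
I0 = 1.0e6
I = I0 * np.exp(-L @ mu_true)

# invert the linear relationship for the per-tissue mu-values
mu_est, *_ = np.linalg.lstsq(L, np.log(I0 / I), rcond=None)
print(mu_est)  # recovers mu_true in this noise-free toy
```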

  20. Ultralow dose computed tomography attenuation correction for pediatric PET CT using adaptive statistical iterative reconstruction

    SciTech Connect

    Brady, Samuel L.; Shulkin, Barry L.

    2015-02-15

    Purpose: To develop ultralow dose computed tomography (CT) attenuation correction (CTAC) acquisition protocols for pediatric positron emission tomography CT (PET CT). Methods: A GE Discovery 690 PET CT hybrid scanner was used to investigate the change to quantitative PET and CT measurements when operated at ultralow doses (10–35 mA s). CT quantitation: noise, low-contrast resolution, and CT numbers for 11 tissue substitutes were analyzed in-phantom. CT quantitation was analyzed to a reduction of 90% volume computed tomography dose index (0.39/3.64; mGy) from baseline. To minimize noise infiltration, 100% adaptive statistical iterative reconstruction (ASiR) was used for CT reconstruction. PET images were reconstructed with the lower-dose CTAC iterations and analyzed for: maximum body weight standardized uptake value (SUV{sub bw}) of various diameter targets (range 8–37 mm), background uniformity, and spatial resolution. Radiation dose and CTAC noise magnitude were compared for 140 patient examinations (76 post-ASiR implementation) to determine relative dose reduction and noise control. Results: CT numbers were constant to within 10% from the nondose reduced CTAC image for 90% dose reduction. No change in SUV{sub bw}, background percent uniformity, or spatial resolution for PET images reconstructed with CTAC protocols was found down to 90% dose reduction. Patient population effective dose analysis demonstrated relative CTAC dose reductions between 62% and 86% (3.2/8.3–0.9/6.2). Noise magnitude in dose-reduced patient images increased but was not statistically different from predose-reduced patient images. Conclusions: Using ASiR allowed for aggressive reduction in CT dose with no change in PET reconstructed images while maintaining sufficient image quality for colocalization of hybrid CT anatomy and PET radioisotope uptake.

  1. Motion artifacts occurring at the lung/diaphragm interface using 4D CT attenuation correction of 4D PET scans.

    PubMed

    Killoran, Joseph H; Gerbaudo, Victor H; Mamede, Marcelo; Ionascu, Dan; Park, Sang-June; Berbeco, Ross

    2011-11-15

For PET/CT, fast CT acquisition time can lead to errors in attenuation correction, particularly at the lung/diaphragm interface. Gated 4D PET can reduce motion artifacts, though residual artifacts may persist depending on the CT dataset used for attenuation correction. We performed phantom studies to evaluate 4D PET images of targets near a density interface using three different methods for attenuation correction: a single 3D CT (3D CTAC), an averaged 4D CT (CINE CTAC), and a fully phase matched 4D CT (4D CTAC). A phantom was designed with two density regions corresponding to diaphragm and lung. An 8 mL sphere phantom loaded with 18F-FDG was used to represent a lung tumor and background FDG included at an 8:1 ratio. Motion patterns of sin(x) and sin^4(x) were used for dynamic studies. Image data was acquired using a GE Discovery DVCT-PET/CT scanner. Attenuation correction methods were compared based on normalized recovery coefficient (NRC), as well as a novel quantity "fixed activity volume" (FAV) introduced in our report. Image metrics were compared to those determined from a 3D PET scan with no motion present (3D STATIC). Values of FAV and NRC showed significant variation over the motion cycle when corrected by 3D CTAC images. 4D CTAC- and CINE CTAC-corrected PET images reduced these motion artifacts. The amount of artifact reduction is greater when the target is surrounded by lower density material and when motion was based on sin^4(x). 4D CTAC reduced artifacts more than CINE CTAC for most scenarios. For a target surrounded by water equivalent material, there was no advantage to 4D CTAC over CINE CTAC when using the sin(x) motion pattern. Attenuation correction using both 4D CTAC or CINE CTAC can reduce motion artifacts in regions that include a tissue interface such as the lung/diaphragm border. 4D CTAC is more effective than CINE CTAC at reducing artifacts in some, but not all, scenarios.

  2. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    NASA Astrophysics Data System (ADS)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise bias that offsets the estimate from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and
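The table-lookup structure of such a MAP estimator (precompute P(measurement | true value) offline, then maximize over the true-value grid at run time, here with a flat prior) can be sketched as follows. The Gaussian toy likelihood stands in for the Monte-Carlo-derived PDF of JM-OCT; the grids and noise level are illustrative:

```python
import numpy as np

true_grid = np.linspace(0.0, 90.0, 91)  # candidate true retardations (deg)
meas_grid = np.linspace(0.0, 90.0, 91)  # measurement bins (deg)
sigma = 8.0                             # toy measurement noise (deg)

# pdf[i, j] ~ P(measured = meas_grid[j] | true = true_grid[i]);
# in the real method this table comes from Monte-Carlo simulation of JM-OCT
pdf = np.exp(-(meas_grid[None, :] - true_grid[:, None]) ** 2 / (2.0 * sigma**2))
pdf /= pdf.sum(axis=1, keepdims=True)   # normalize each conditional PDF

def map_estimate(measured):
    """MAP estimate with a flat prior: pick the true-grid value that
    maximizes the precomputed likelihood of the observed measurement."""
    j = int(np.argmin(np.abs(meas_grid - measured)))
    return float(true_grid[np.argmax(pdf[:, j])])

print(map_estimate(45.0))
```

Near the edges of the grid the normalization skews the lookup, which loosely mirrors the noise-bias behavior the abstract describes for bounded quantities like retardation.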

  3. Attenuation correction of PET cardiac data with low-dose average CT in PET/CT

    SciTech Connect

    Pan Tinsu; Mawlawi, Osama; Luo, Dershan; Liu, Hui H.; Chi Paichun, M.; Mar, Martha V.; Gladish, Gregory; Truong, Mylene; Erasmus, Jeremy Jr.; Liao Zhongxing; Macapinlac, H. A.

    2006-10-15

We proposed a low-dose average computed tomography (ACT) protocol for attenuation correction (AC) of PET cardiac data in PET/CT. The ACT was obtained from a cine CT scan over one breath cycle per couch position while the patient was free breathing. We applied this technique to four patients who underwent tumor imaging with {sup 18}F-FDG in PET/CT, whose PET data showed high uptake of {sup 18}F-FDG in the heart and whose CT and PET data had misregistration. None of the four patients had known myocardial infarction or ischemia. The patients were injected with 555-740 MBq of {sup 18}F-FDG and scanned 1 h after injection. The helical CT (HCT) data were acquired in 16 s for a coverage of 100 cm. The PET acquisition was 3 min per bed of 15 cm. The duration of the cine CT acquisition per 2 cm was 5.9 s. We used a fast gantry rotation cycle time of 0.5 s to minimize motion-induced reconstruction artifacts in the cine CT images, which were averaged to become the ACT images for AC of the PET data. The radiation dose was about 5 mGy for the 5.9 s cine duration. The selection of 5.9 s was based on our analysis of the respiratory signals of 600 patients; 87% of the patients had average breath cycles of less than 6 s and 90% had standard deviations of less than 1 s in the period of the breath cycle. In all four patient studies, registration between the CT and the PET data was improved. An increase of average uptake in the anterior and lateral walls of up to 48% and a decrease of average uptake in the septal and inferior walls of up to 16% with ACT were observed. We also compared ACT and conventional slow scan CT (SSCT) of 4 s duration in one patient study and found ACT was better than SSCT in depicting average respiratory motion; the SSCT images showed motion-induced reconstruction artifacts. In conclusion, low-dose ACT improved registration of the CT and the PET data in the heart region in our study of four patients. ACT was superior to SSCT for depicting average respiration

  4. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms.

    PubMed

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M; Asma, Evren; Kinahan, Paul E; De Man, Bruno

    2015-10-01

For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.043 75 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-squares-error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction with little or no measurable effect on the PET image.
For the four ultra-low dose levels

  6. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M.; Asma, Evren; Kinahan, Paul E.; De Man, Bruno

    2015-09-01

For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.043 75 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-squares-error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction with little or no measurable effect on the PET image. For the four ultra-low dose
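The sinogram interpolation evaluated above can be approximated, in its simplest form, by linear interpolation of the missing view angles independently for each detector channel. A sketch on a smooth toy sinogram (a real implementation would also handle the periodic wrap-around that `np.interp` clamps at the last measured view):

```python
import numpy as np

n_det, n_full = 64, 984                  # detector channels, full view count
full_angles = np.arange(n_full)
sparse_angles = full_angles[::8]         # keep 123 of 984 views

# toy sinogram: a smooth function of view angle, identical per channel
sino_full = np.sin(2.0 * np.pi * full_angles / n_full)[None, :] * np.ones((n_det, 1))
sino_sparse = sino_full[:, ::8]

# fill in the missing views channel-by-channel with linear interpolation
sino_interp = np.vstack([
    np.interp(full_angles, sparse_angles, row) for row in sino_sparse
])
```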

  7. Application of the variational method for correction of wet ice attenuation for X-band dual-polarized radar

    NASA Astrophysics Data System (ADS)

    Tolstoy, Leonid

In recent years there has been huge interest in the development and use of dual-polarized radar systems operating in the X-band (~10 GHz) region of the electromagnetic spectrum. This is because these systems are smaller and cheaper, allowing networks to be built, for example, for short range (typically < 30--40 km) hydrological applications. Such networks allow for higher cross-beam spatial resolutions, while the cheaper pedestals supporting a smaller antenna also allow for higher temporal resolution as compared with the large S-band (long range) systems used by the National Weather Service. Dual-polarization radar techniques allow for correction of the strong attenuation of the electromagnetic radar signal due to rain at X-band and higher frequencies. However, practical attempts to develop reliable correction algorithms have been hampered by the need to deal with the rather large statistical fluctuations or "noise" in the measured polarization parameters. Recently, the variational method was proposed, which overcomes this problem by using a forward model for the polarization variables and an iterative approach to minimize the difference between modeled and observed values in a least squares sense. This approach also allows for detection of hail and determination of the fraction of reflectivity due to hail when the precipitation shaft is composed of a mixture of rain and hail. It was shown that this approach works well with S-band radar data. The purpose of this research is to extend the application of the variational method to X-band dual-polarization radar data. The main objective is to correct for attenuation caused by rain mixed with wet ice hydrometeors (e.g., hail) in deep convection. The standard dual-polarization method of attenuation correction using the differential propagation phase between H and V polarized waves cannot account for wet ice hydrometeors along the propagation path. The ultimate goal is to develop a feasible and robust

  8. Accurate Evaluation of Ion Conductivity of the Gramicidin A Channel Using a Polarizable Force Field without Any Corrections.

    PubMed

    Peng, Xiangda; Zhang, Yuebin; Chu, Huiying; Li, Yan; Zhang, Dinglin; Cao, Liaoran; Li, Guohui

    2016-06-14

    Classical molecular dynamic (MD) simulation of membrane proteins faces significant challenges in accurately reproducing and predicting experimental observables such as ion conductance and permeability due to its incapability of precisely describing the electronic interactions in heterogeneous systems. In this work, the free energy profiles of K(+) and Na(+) permeating through the gramicidin A channel are characterized by using the AMOEBA polarizable force field with a total sampling time of 1 μs. Our results indicated that by explicitly introducing the multipole terms and polarization into the electrostatic potentials, the permeation free energy barrier of K(+) through the gA channel is considerably reduced compared to the overestimated results obtained from the fixed-charge model. Moreover, the estimated maximum conductance, without any corrections, for both K(+) and Na(+) passing through the gA channel are much closer to the experimental results than any classical MD simulations, demonstrating the power of AMOEBA in investigating the membrane proteins. PMID:27171823

  9. On the accurate long-time solution of the wave equation in exterior domains: Asymptotic expansions and corrected boundary conditions

    NASA Technical Reports Server (NTRS)

    Hagstrom, Thomas; Hariharan, S. I.; Maccamy, R. C.

    1993-01-01

We consider the solution of scattering problems for the wave equation using approximate boundary conditions at artificial boundaries. These conditions are explicitly viewed as approximations to an exact boundary condition satisfied by the solution on the unbounded domain. We study the short and long term behavior of the error. It is proved that, in two space dimensions, no local in time, constant coefficient boundary operator can lead to accurate results uniformly in time for the class of problems we consider. A variable coefficient operator is developed which attains better accuracy (uniformly in time) than is possible with constant coefficient approximations. The theory is illustrated by numerical examples. We also analyze the proposed boundary conditions using energy methods, leading to asymptotically correct error bounds.

  10. Effects of CT-based attenuation correction of rat microSPECT images on relative myocardial perfusion and quantitative tracer uptake

    SciTech Connect

Strydhorst, Jared H.; Ruddy, Terrence D.; Wells, R. Glenn

    2015-04-15

    Purpose: Our goal in this work was to investigate the impact of CT-based attenuation correction on measurements of rat myocardial perfusion with {sup 99m}Tc and {sup 201}Tl single photon emission computed tomography (SPECT). Methods: Eight male Sprague-Dawley rats were injected with {sup 99m}Tc-tetrofosmin and scanned in a small animal pinhole SPECT/CT scanner. Scans were repeated weekly over a period of 5 weeks. Eight additional rats were injected with {sup 201}Tl and also scanned following a similar protocol. The images were reconstructed with and without attenuation correction, and the relative perfusion was analyzed with the commercial cardiac analysis software. The absolute uptake of {sup 99m}Tc in the heart was also quantified with and without attenuation correction. Results: For {sup 99m}Tc imaging, relative segmental perfusion changed by up to +2.1%/−1.8% as a result of attenuation correction. Relative changes of +3.6%/−1.0% were observed for the {sup 201}Tl images. Interscan and inter-rat reproducibilities of relative segmental perfusion were 2.7% and 3.9%, respectively, for the uncorrected {sup 99m}Tc scans, and 3.6% and 4.3%, respectively, for the {sup 201}Tl scans, and were not significantly affected by attenuation correction for either tracer. Attenuation correction also significantly increased the measured absolute uptake of tetrofosmin and significantly altered the relationship between the rat weight and tracer uptake. Conclusions: Our results show that attenuation correction has a small but statistically significant impact on the relative perfusion measurements in some segments of the heart and does not adversely affect reproducibility. Attenuation correction had a small but statistically significant impact on measured absolute tracer uptake.

  11. Correcting aethalometer black carbon data for measurement artifacts by using inter-comparison methodology based on two different light attenuation increasing rates

    NASA Astrophysics Data System (ADS)

    Cheng, Y.-H.; Yang, L.-S.

    2015-03-01

In black carbon (BC) measurements obtained using the filter-based optical technique, artifacts are a major problem. Recently, it has become possible to correct these artifacts to a certain extent by using numerical methods. Nevertheless, all correction schemes have their advantages and disadvantages under field conditions. In this study, a new correction model that can be used for determining artifact effects on BC measurements was proposed; the model is based on two different light attenuation (ATN) increasing rates. Two aethalometers were used to measure ATN values in parallel at aerosol sampling flow rates of 6 and 2 L min^-1. In the absence of sampling artifacts, the ratio of ATN values measured by the two aethalometers should be equal to the ratio of the sampling flow rates (or aerosol deposition rates) of these two aethalometers. In practice, the ratio of ATN values measured by the two aethalometers was not the same as the ratio of the sampling flow rates of the aethalometers because the aerosol loading effects varied with the aerosol deposition rate. If the true ATN value can be found, then BC measurements can be corrected for artifacts by using the true ATN change rate. Therefore, determining the true ATN value was the primary objective of this study. The proposed correction algorithm can be used to obtain the true ATN value from ATN values acquired at different sampling flow rates, and the actual BC mass concentrations can be determined from the true ATN change rate. Before BC correction, the BC concentration measured at the sampling flow rate of 6 L min^-1 was smaller than that measured at 2 L min^-1 by approximately 13 and 9% in summer and winter seasons, respectively. After BC correction using the true ATN value, the corrected BC for 6 L min^-1 was exactly equal to the corrected BC for 2 L min^-1. Field test results demonstrated that loading effects on BC measurements could be corrected accurately by using the proposed model.
Additionally, the problem
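The consistency check at the heart of the dual-flow comparison above can be written directly: absent loading artifacts, the ratio of ATN values equals the ratio of sampling flow rates, so the departure of the measured ratio from the flow ratio indexes the loading effect. This is a sketch of the check only, not the full correction algorithm:

```python
def loading_index(atn_high_flow, atn_low_flow, q_high=6.0, q_low=2.0):
    """Ratio of the measured ATN ratio to the flow-rate ratio.
    Returns 1.0 when no loading artifact is present; values below 1.0
    indicate the more heavily loaded high-flow spot is under-reading.
    Flow defaults match the 6 and 2 L/min setup described above."""
    return (atn_high_flow / atn_low_flow) / (q_high / q_low)

print(loading_index(54.0, 18.0))  # 54/18 = 3 = 6/2, so 1.0: artifact-free
print(loading_index(45.0, 18.0))  # high-flow spot under-reading: < 1.0
```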

  12. Influences of reconstruction and attenuation correction in brain SPECT images obtained by the hybrid SPECT/CT device: evaluation with a 3-dimensional brain phantom

    PubMed Central

    Akamatsu, Mana; Yamashita, Yasuo; Akamatsu, Go; Tsutsui, Yuji; Ohya, Nobuyoshi; Nakamura, Yasuhiko; Sasaki, Masayuki

    2014-01-01

    Objective(s): The aim of this study was to evaluate the influences of reconstruction and attenuation correction on the differences in the radioactivity distributions in 123I brain SPECT obtained by the hybrid SPECT/CT device. Methods: We used the 3-dimensional (3D) brain phantom, which imitates the precise structure of gray matter, white matter and bone regions. It was filled with 123I solution (20.1 kBq/mL) in the gray matter region and with K2HPO4 in the bone region. The SPECT/CT data were acquired by the hybrid SPECT/CT device. SPECT images were reconstructed by using filtered back projection with uniform attenuation correction (FBP-uAC), 3D ordered-subsets expectation-maximization with uniform AC (3D-OSEM-uAC) and 3D OSEM with CT-based non-uniform AC (3D-OSEM-CTAC). We evaluated the differences in the radioactivity distributions among these reconstruction methods using a 3D digital phantom, which was developed from CT images of the 3D brain phantom, as a reference. The normalized mean square error (NMSE) and regional radioactivity were calculated to evaluate the similarity of SPECT images to the 3D digital phantom. Results: The NMSE values were 0.0811 in FBP-uAC, 0.0914 in 3D-OSEM-uAC and 0.0766 in 3D-OSEM-CTAC. The regional radioactivity of FBP-uAC was 11.5% lower in the middle cerebral artery territory, and that of 3D-OSEM-uAC was 5.8% higher in the anterior cerebral artery territory, compared with the digital phantom. On the other hand, that of 3D-OSEM-CTAC was 1.8% lower in all brain areas. Conclusion: By using the hybrid SPECT/CT device, the brain SPECT reconstructed by 3D-OSEM with CT attenuation correction can provide an accurate assessment of the distribution of brain radioactivity. PMID:27408856
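The NMSE similarity metric used above is the squared voxel-wise difference normalized by the reference image's energy. A minimal implementation, assuming the usual convention implied by the name (the authors' exact normalization may differ):

```python
def nmse(image, reference):
    """Normalized mean square error between two equally sized images,
    given as flattened voxel lists: sum((s - r)^2) / sum(r^2).
    Lower values mean the image is closer to the reference."""
    num = sum((s - r) ** 2 for s, r in zip(image, reference))
    den = sum(r ** 2 for r in reference)
    return num / den

print(nmse([1.1, 0.9, 2.0], [1.0, 1.0, 2.0]))  # 0.02 / 6
```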

  13. Maximum-Likelihood Joint Image Reconstruction/Motion Estimation in Attenuation-Corrected Respiratory Gated PET/CT Using a Single Attenuation Map.

    PubMed

    Bousse, Alexandre; Bertolli, Ottavia; Atkinson, David; Arridge, Simon; Ourselin, Sébastien; Hutton, Brian F; Thielemans, Kris

    2016-01-01

    This work provides an insight into positron emission tomography (PET) joint image reconstruction/motion estimation (JRM) by maximization of the likelihood, where the probabilistic model accounts for warped attenuation. Our analysis shows that maximum-likelihood (ML) JRM returns the same reconstructed gates for any attenuation map (μ-map) that is a deformation of a given μ-map, regardless of its alignment with the PET gates. We derived a joint optimization algorithm accordingly, and applied it to simulated and patient gated PET data. We first evaluated the proposed algorithm on simulations of respiratory gated PET/CT data based on the XCAT phantom. Our results show that independently of which μ-map is used as input to JRM: (i) the warped μ-maps correspond to the gated μ-maps, (ii) JRM outperforms the traditional post-registration reconstruction and consolidation (PRRC) for hot lesion quantification and (iii) reconstructed gated PET images are similar to those obtained with gated μ-maps. This suggests that a breath-held μ-map can be used. We then applied JRM on patient data with a μ-map derived from a breath-held high resolution CT (HRCT), and compared the results with PRRC, where each reconstructed PET image was obtained with a corresponding cine-CT gated μ-map. Results show that JRM with breath-held HRCT achieves similar reconstruction to that using PRRC with cine-CT. This suggests a practical low-dose solution for implementation of motion-corrected respiratory gated PET/CT.

  14. Attenuation-Corrected vs. Nonattenuation-Corrected 2-Deoxy-2-[F-18]fluoro-d-glucose-Positron Emission Tomography in Oncology, A Systematic Review

    PubMed Central

    Joshi, Urvi; Riphagen, Ingrid I.; Teule, Gerrit J. J.; van Lingen, Arthur; Hoekstra, Otto S.

    2007-01-01

    Purpose To perform a systematic review and meta-analysis to determine the diagnostic accuracy of attenuation-corrected (AC) vs. nonattenuation-corrected (NAC) 2-deoxy-2-[F-18]fluoro-d-glucose-positron emission tomography (FDG-PET) in oncological patients. Procedures Following a comprehensive search of the literature, two reviewers independently assessed the methodological quality of eligible studies. The diagnostic value of AC was studied through its sensitivity/specificity compared to histology, and by comparing the relative lesion detection rate reported with NAC-PET vs. AC, for full-ring and dual-head coincidence PET (FR- and DH-PET, respectively). Results Twelve studies were included. For FR-PET, the pooled sensitivity/specificity on a patient basis was 64/97% for AC and 62/99% for NAC, respectively. Pooled lesion detection with NAC vs. AC was 98% [95% confidence interval (95% CI): 96–99%, n = 1,012 lesions] for FR-PET, and 88% (95% CI: 81–94%, n = 288 lesions) for DH-PET. Conclusions Findings suggest similar sensitivity/specificity and lesion detection for NAC vs. AC FR-PET and significantly higher lesion detection for NAC vs. AC DH-PET. PMID:17318671

  15. Highly accurate stability-preserving optimization of the Zener viscoelastic model, with application to wave propagation in the presence of strong attenuation

    NASA Astrophysics Data System (ADS)

    Blanc, Émilie; Komatitsch, Dimitri; Chaljub, Emmanuel; Lombard, Bruno; Xie, Zhinan

    2016-04-01

    This paper concerns the numerical modelling of time-domain mechanical waves in viscoelastic media based on a generalized Zener model. To do so, relaxation mechanisms are classically introduced in the literature, resulting in a set of so-called memory variables and thus in large computational arrays that need to be stored. A challenge is thus to accurately mimic a given attenuation law using a minimal set of relaxation mechanisms. For this purpose, we replace the classical linear approach of Emmerich & Korn with a nonlinear optimization approach with positivity constraints. We show that this technique is more accurate than the linear approach. Moreover, it ensures physically meaningful relaxation times that always honour the constraint of decay of total energy with time. As a result, these relaxation times can always be used in a stable way in a modelling algorithm, even in the case of very strong attenuation, for which the classical linear approach may produce negative and thus unusable coefficients.
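
A constrained nonlinear fit of this kind can be sketched as follows. This is not the authors' algorithm: the absorption model below is a generic standard-linear-solid form for 1/Q, and `fit_zener_mechanisms`, its initial guess, and the bound constraints are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_zener_mechanisms(omega, q_target, n_mech=3):
    """Fit a generalized Zener (standard linear solid) attenuation model
    to a target constant quality factor Q over the angular-frequency
    band `omega`.

    Model (a common SLS absorption form, used here for illustration):
        1/Q(w) = sum_l  y_l * w*tau_l / (1 + (w*tau_l)**2)
    Positivity of the anelastic coefficients y_l and relaxation times
    tau_l is enforced through bound constraints, in the spirit of the
    constrained nonlinear optimization described in the abstract.
    """
    # Initial guess: relaxation times log-spaced over the band.
    tau0 = 1.0 / np.logspace(np.log10(omega[0]), np.log10(omega[-1]), n_mech)
    y0 = np.full(n_mech, 1.0 / q_target)
    x0 = np.concatenate([y0, tau0])

    def residuals(x):
        y, tau = x[:n_mech], x[n_mech:]
        wt = np.outer(omega, tau)
        q_inv = (y * wt / (1.0 + wt**2)).sum(axis=1)
        return q_inv - 1.0 / q_target

    # Lower bound > 0 keeps every coefficient physically meaningful.
    sol = least_squares(residuals, x0, bounds=(1e-12, np.inf))
    return sol.x[:n_mech], sol.x[n_mech:]
```

Because the bounds forbid negative coefficients, the fitted mechanisms can never produce the energy growth that unconstrained linear fits allow in the strong-attenuation regime.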

  16. Evaluation of a bilinear model for attenuation correction using CT numbers generated from a parametric method.

    PubMed

    Martinez, L C; Calzado, A

    2016-01-01

    A parametric model is used for the calculation of the CT numbers of some selected human tissues of known composition (Hi) in two hybrid systems, one SPECT-CT and one PET-CT. Only one well-characterized substance, not necessarily tissue-like, needs to be scanned with the protocol of interest. The linear attenuation coefficients of these tissues for some energies of interest (μi) have been calculated from their tabulated compositions and the NIST databases. These coefficients have been compared with those calculated with the bilinear model from the CT number (μBi). No relevant differences have been found for bone and lung. In the soft-tissue region, the differences can be up to 5%. These discrepancies are attributed to the different chemical compositions assumed for the tissues by the two methods.
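
The bilinear model evaluated above maps CT numbers to attenuation coefficients at the emission energy with two linear segments joined at water. A sketch for 511 keV with illustrative coefficients (the water value and bone slope below are assumptions for demonstration, not the calibrated values of the scanners in the study):

```python
def hu_to_mu_511kev(hu: float) -> float:
    """Bilinear conversion of a CT number (HU) to a linear attenuation
    coefficient (cm^-1) at 511 keV.

    Segment 1 (air to water) scales linearly with density; segment 2
    (above 0 HU) uses a reduced slope because bone's photoelectric
    boost at CT energies does not carry over to 511 keV, where
    Compton scattering dominates. Coefficients are illustrative.
    """
    mu_water = 0.096        # cm^-1, approx. water at 511 keV (assumed)
    if hu <= 0:
        # Air (-1000 HU) maps to 0; water (0 HU) maps to mu_water.
        return mu_water * (hu + 1000.0) / 1000.0
    slope_bone = 5.1e-5     # cm^-1 per HU, illustrative bone-segment slope
    return mu_water + slope_bone * hu
```

The abstract's parametric method refines exactly these segment parameters from a single scanned calibration substance rather than taking them from fixed tables.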

  17. The Use of Anatomical Information for Molecular Image Reconstruction Algorithms: Attenuation/Scatter Correction, Motion Compensation, and Noise Reduction.

    PubMed

    Chun, Se Young

    2016-03-01

    PET and SPECT are important tools for providing valuable molecular information about patients to clinicians. Advances in nuclear medicine hardware technologies and statistical image reconstruction algorithms enabled significantly improved image quality. Sequentially or simultaneously acquired anatomical images such as CT and MRI from hybrid scanners are also important ingredients for improving the image quality of PET or SPECT further. High-quality anatomical information has been used and investigated for attenuation and scatter corrections, motion compensation, and noise reduction via post-reconstruction filtering and regularization in inverse problems. In this article, we will review works using anatomical information for molecular image reconstruction algorithms for better image quality by describing mathematical models, discussing sources of anatomical information for different cases, and showing some examples. PMID:26941855

  18. Accurate determination of the thickness or mass per unit area of thin foils and single-crystal wafers for x-ray attenuation measurements

    NASA Astrophysics Data System (ADS)

    Tran, C. Q.; Chantler, C. T.; Barnea, Z.; de Jonge, M. D.

    2004-09-01

    The determination of the local mass per unit area m/A=∫ρdt and the thickness of a specimen is an important aspect of its characterization and is often required for material quality control in fabrication. We discuss common methods which have been used to determine the local thickness of thin specimens. We then propose an x-ray technique which is capable of determining the local thickness and the x-ray absorption profile of a foil or wafer to high accuracy. This technique provides an accurate integration of the column density which is not affected by the presence of voids and internal defects in the material. The technique is best suited to specimens with thickness substantially greater than the dimensions of the surface and void structure. We also show that the attenuation of an x-ray beam by a nonuniform specimen is significantly different from that calculated by using a simple linear average of the mass per unit area and quantify this effect. For much thinner specimens or in the presence of a very structured surface profile we propose a complementary technique capable of attaining high accuracy by the use of a secondary standard. The technique is demonstrated by absolute measurements of the x-ray mass attenuation coefficient of copper and silver.

  19. Accurate determination of the thickness or mass per unit area of thin foils and single-crystal wafers for x-ray attenuation measurements

    SciTech Connect

    Tran, C.Q.; Chantler, C.T.; Barnea, Z.; Jonge, M.D. de

    2004-09-01

    The determination of the local mass per unit area m/A=∫ρdt and the thickness of a specimen is an important aspect of its characterization and is often required for material quality control in fabrication. We discuss common methods which have been used to determine the local thickness of thin specimens. We then propose an x-ray technique which is capable of determining the local thickness and the x-ray absorption profile of a foil or wafer to high accuracy. This technique provides an accurate integration of the column density which is not affected by the presence of voids and internal defects in the material. The technique is best suited to specimens with thickness substantially greater than the dimensions of the surface and void structure. We also show that the attenuation of an x-ray beam by a nonuniform specimen is significantly different from that calculated by using a simple linear average of the mass per unit area and quantify this effect. For much thinner specimens or in the presence of a very structured surface profile we propose a complementary technique capable of attaining high accuracy by the use of a secondary standard. The technique is demonstrated by absolute measurements of the x-ray mass attenuation coefficient of copper and silver.

  20. Correcting attenuation effects caused by interactions in the forest canopy in full-waveform airborne laser scanner data

    NASA Astrophysics Data System (ADS)

    Richter, K.; Stelling, N.; Maas, H.-G.

    2014-08-01

    Full-waveform airborne laser scanning offers great potential for various forestry applications. Especially applications requiring information on the vertical structure of the lower canopy parts benefit from the large amount of information contained in waveform data. To enable the derivation of vertical forest canopy structure, the development of suitable voxel-based data analysis methods is a natural step. Beyond extracting additional 3D points, it is very promising to derive the voxel attributes directly from the digitized waveform. For this purpose, the differential backscatter cross-sections have to be projected into a Cartesian voxel structure. The voxel entries then represent amplitudes of the cross-section and can be interpreted as a local measure of the amount of pulse-reflecting matter. However, the "history" of each laser echo pulse is characterized by attenuation effects caused by reflections in higher regions of the crown. As a result, the received waveform signals within the canopy have a lower amplitude than would be observed for an identical structure without the preceding canopy interactions (Romanczyk et al., 2012). If the biophysical structure is determined from the raw waveform data, material in the lower parts of the canopy is thus under-represented. To achieve a radiometrically correct voxel-space representation, the loss of signal strength caused by partial reflections along the path of a laser pulse through the canopy has to be compensated. In this paper, we present an integral approach correcting the waveform at each recorded sample. The basic idea of the procedure is to enhance the waveform intensity values in the lower parts of the canopy by the portions of the pulse intensity that have been reflected (and thus blocked) in higher parts of the canopy. The paper discusses the developed correction method and shows results from a validation with both synthetic and real-world data.
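
The basic idea of sample-wise occlusion compensation can be sketched very simply. This is a deliberately simplified stand-in for the paper's integral approach: `compensate_waveform` and the calibration constant `k` (which converts recorded amplitude into the fraction of pulse energy it removed) are hypothetical.

```python
import numpy as np

def compensate_waveform(raw: np.ndarray, k: float) -> np.ndarray:
    """Simplified compensation of occlusion losses in a full-waveform
    lidar return.

    Each sample is divided by the fraction of pulse energy estimated
    to still be available at that depth, computed from the cumulative
    return already recorded above it. Earlier samples are unchanged;
    deeper samples are amplified.
    """
    raw = np.asarray(raw, dtype=float)
    # Energy fraction blocked before each sample (exclusive cumsum).
    blocked = k * np.concatenate(([0.0], np.cumsum(raw)[:-1]))
    remaining = np.clip(1.0 - blocked, 1e-6, 1.0)  # guard against total loss
    return raw / remaining
```

The amplification grows with depth into the canopy, which is the qualitative behaviour the correction must have; the paper derives the scaling rigorously from the differential backscatter cross-sections.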

  1. Evaluation of dosimetry and image of very low-dose computed tomography attenuation correction for pediatric positron emission tomography/computed tomography: phantom study

    NASA Astrophysics Data System (ADS)

    Bahn, Y. K.; Park, H. H.; Lee, C. H.; Kim, H. S.; Lyu, K. Y.; Dong, K. R.; Chung, W. K.; Cho, J. H.

    2014-04-01

    In this study, a phantom was used to evaluate the attenuation-correction computed tomography (CT) dose and image quality in pediatric positron emission tomography (PET)/CT scans. Three PET/CT scanners were used along with an infant-sized acrylic phantom and an ion-chamber dosimeter. The CT image acquisition conditions were varied from 10 to 20, 40, 80, 100 and 160 mA and from 80 to 100, 120 and 140 kVp in order to evaluate the penetrating dose and the computed tomography dose index-volume (CTDIvol). A NEMA PET Phantom™ was used to obtain PET images under the same CT conditions in order to evaluate each attenuation-corrected PET image in terms of the standard uptake value (SUV) and signal-to-noise ratio (SNR). Overall, the penetrating dose was reduced by around 92% under the minimum CT conditions (80 kVp and 10 mA), with a decrease in the CTDIvol value of around 88%, compared with the pediatric abdomen CT conditions (100 kVp and 100 mA). The PET images attenuation-corrected under each CT condition showed no change in SUV and no influence on the SNR. In conclusion, if a minimum-dose CT properly adapted to the body of a pediatric patient is used for attenuation correction, so that the effective dose is reduced by around 90% or more compared with that for an adult patient, radiation exposure can be usefully reduced.

  2. Investigation of the effects of attenuation correction on the compatibility of two single photon emission computed tomography systems with the use of segmentation through registration

    NASA Astrophysics Data System (ADS)

    Lampaskis, M.; Killilea, K.; Metherall, P.; Harris, A.; Barber, D.

    2011-09-01

    The aim of this work was to compare images acquired from two Single Photon Emission Computed Tomography (SPECT) systems performing attenuation correction with different methods, to evaluate the level at which images from these systems can be used interchangeably to assess myocardial perfusion in patients. The two systems are the Siemens E-cam with profile attenuation correction and the General Electric Hawkeye system. This study was performed using an anthropomorphic torso phantom. The motivation was to examine whether attenuation-corrected images from these systems are comparable when assessing the myocardial function of patients, under different conditions regarding background (adjacent tissue) activity and the presence or absence of defects in the cardiac wall. To analyse the acquired images, specialized software was used to extract information relating to the activity distribution within the cardiac insert (simulated myocardium). This was based on the standardized myocardial segmentation used clinically, through image registration against an artificial reference model. The results show that adjacent tissue activity did not affect the ability to detect defects. Further, the application of attenuation correction may reduce the comparability of the two systems to a small degree.

  3. SU-C-9A-06: The Impact of CT Image Used for Attenuation Correction in 4D-PET

    SciTech Connect

    Cui, Y; Bowsher, J; Yan, S; Cai, J; Das, S; Yin, F

    2014-06-01

    Purpose: To evaluate the appropriateness of using a 3D non-gated CT image for attenuation correction (AC) in a 4D-PET (gated PET) imaging protocol used in radiotherapy treatment planning simulation. Methods: The 4D-PET imaging protocol in a Siemens PET/CT simulator (Biograph mCT, Siemens Medical Solutions, Hoffman Estates, IL) was evaluated. A CIRS Dynamic Thorax Phantom (CIRS Inc., Norfolk, VA) with a moving glass sphere (8 mL) in the middle of its thorax portion was used in the experiments. The sphere was filled with 18F-FDG and underwent longitudinal motion derived from a real patient breathing pattern. The Varian RPM system (Varian Medical Systems, Palo Alto, CA) was used for respiratory gating. Both phase-gating and amplitude-gating methods were tested. The clinical imaging protocol was modified to use three different CT images for AC in 4D-PET reconstruction: the first uses a single-phase CT image to mimic the actual clinical protocol (single-CT-PET); the second uses the average intensity projection CT (AveIP-CT) derived from 4D-CT scanning (AveIP-CT-PET); the third uses the 4D-CT image for phase-matched AC (phase-matching-PET). The maximum SUV (SUVmax) and the volume of the moving target (glass sphere) at a threshold of 40% SUVmax were calculated for comparison between 4D-PET images derived with the different AC methods. Results: The SUVmax varied 7.3%±6.9% over the breathing cycle in single-CT-PET, compared to 2.5%±2.8% in AveIP-CT-PET and 1.3%±1.2% in phase-matching-PET. The SUVmax in single-CT-PET differed by up to 15% from that in phase-matching-PET. The target volumes measured from single-CT-PET images also presented variations of up to 10% among different phases of 4D-PET in both phase-gating and amplitude-gating experiments. Conclusion: Attenuation correction using non-gated CT in 4D-PET imaging is not an optimal process for quantitative analysis. Clinical 4D-PET imaging protocols should use phase-matched 4D-CT images, if available, to achieve better accuracy.

  4. Influence of OSEM and segmented attenuation correction in the calculation of standardised uptake values for [18F]FDG PET.

    PubMed

    Visvikis, D; Cheze-LeRest, C; Costa, D C; Bomanji, J; Gacinovic, S; Ell, P J

    2001-09-01

    Standardised Uptake Values (SUVs) are widely used in positron emission tomography (PET) as a semi-quantitative index of fluorine-18 labelled fluorodeoxyglucose uptake. The objective of this study was to investigate any bias introduced in the calculation of SUVs as a result of employing ordered subsets-expectation maximisation (OSEM) image reconstruction and segmented attenuation correction (SAC). Variable emission and transmission time durations were investigated. Both a phantom and a clinical evaluation of the bias were carried out. The software implemented in the GE Advance PET scanner was used. Phantom studies simulating tumour imaging conditions were performed. Since a variable count rate may influence the results obtained using OSEM, similar acquisitions were performed at total count rates of 34 kcps and 12 kcps. Clinical data consisted of 100 patient studies. Emission datasets of 5 and 15 min duration were combined with 15-, 3-, 2- and 1-min transmission datasets for the reconstruction of both phantom and patient studies. Two SUVs were estimated using the average (SUVavg) and the maximum (SUVmax) count density from regions of interest placed well inside structures of interest. The percentage bias of these SUVs compared with the values obtained using a reference image was calculated. The reference image was considered to be the one produced by filtered back-projection (FBP) image reconstruction with measured attenuation correction using the 15-min emission and transmission datasets for each phantom and patient study. A bias of 5%-20% was found for the SUVavg and SUVmax in the case of FBP with SAC using variable transmission times. In the case of OSEM with SAC, the bias increased to 10%-30%. An overall increase of 5%-10% was observed with the use of SUVmax. The 5-min emission dataset led to an increase in the bias of 25%-100%, with the larger increase recorded for the SUVmax. The results suggest that OSEM and SAC with 3 and 2 min transmission may be reliably
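
The SUVs whose bias is analysed above follow the standard body-weight normalisation; a minimal sketch (unit handling is the only subtlety, and a tissue density of 1 g/mL is the usual assumption):

```python
def suv(activity_kbq_per_ml: float, injected_dose_mbq: float,
        body_weight_kg: float) -> float:
    """Body-weight standardised uptake value.

    SUV = tissue activity concentration / (injected dose / body weight).
    Assuming 1 g/mL tissue density, grams and millilitres cancel and
    the result is dimensionless.
    """
    dose_kbq = injected_dose_mbq * 1000.0   # MBq -> kBq
    weight_g = body_weight_kg * 1000.0      # kg  -> g
    return activity_kbq_per_ml / (dose_kbq / weight_g)
```

SUVavg and SUVmax in the study differ only in whether the average or the maximum voxel concentration within the region of interest is fed into this formula.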

  5. Influence of OSEM and segmented attenuation correction in the calculation of standardised uptake values for [(18)F]FDG PET.

    PubMed

    Visvikis, D; Cheze-Lerest, C; Costa, D; Bomanji, J; Gacinovic, S; Ell, P

    2001-09-01

    Standardised Uptake Values (SUVs) are widely used in positron emission tomography (PET) as a semi-quantitative index of fluorine-18 labelled fluorodeoxyglucose uptake. The objective of this study was to investigate any bias introduced in the calculation of SUVs as a result of employing ordered subsets-expectation maximisation (OSEM) image reconstruction and segmented attenuation correction (SAC). Variable emission and transmission time durations were investigated. Both a phantom and a clinical evaluation of the bias were carried out. The software implemented in the GE Advance PET scanner was used. Phantom studies simulating tumour imaging conditions were performed. Since a variable count rate may influence the results obtained using OSEM, similar acquisitions were performed at total count rates of 34 kcps and 12 kcps. Clinical data consisted of 100 patient studies. Emission datasets of 5 and 15 min duration were combined with 15-, 3-, 2- and 1-min transmission datasets for the reconstruction of both phantom and patient studies. Two SUVs were estimated using the average (SUVavg) and the maximum (SUVmax) count density from regions of interest placed well inside structures of interest. The percentage bias of these SUVs compared with the values obtained using a reference image was calculated. The reference image was considered to be the one produced by filtered backprojection (FBP) image reconstruction with measured attenuation correction using the 15-min emission and transmission datasets for each phantom and patient study. A bias of 5%-20% was found for the SUVavg and SUVmax in the case of FBP with SAC using variable transmission times. In the case of OSEM with SAC, the bias increased to 10%-30%. An overall increase of 5%-10% was observed with the use of SUVmax. The 5-min emission dataset led to an increase in the bias of 25%-100%, with the larger increase recorded for the SUVmax. The results suggest that OSEM and SAC with 3 and 2 min transmission may be reliably

  6. Correction.

    PubMed

    2015-11-01

    In the article by Heuslein et al, which published online ahead of print on September 3, 2015 (DOI: 10.1161/ATVBAHA.115.305775), a correction was needed. Brett R. Blackman was added as the penultimate author of the article. The article has been corrected for publication in the November 2015 issue. PMID:26490278

  7. An accurate projector gamma correction method for phase-measuring profilometry based on direct optical power detection

    NASA Astrophysics Data System (ADS)

    Liu, Miao; Yin, Shibin; Yang, Shourui; Zhang, Zonghua

    2015-10-01

    Digital projectors are frequently used to generate fringe patterns in phase-calculation-based three-dimensional (3D) imaging systems. A digital projector usually works together with a camera in such systems, so the intensity response of the projector should be linear in order to ensure measurement precision, especially in Phase-Measuring Profilometry (PMP). Correction methods are therefore applied to cope with the non-linear intensity response of the digital projector. These methods usually rely on a camera, and a gamma function is often applied to compensate the non-linear response, so the correction performance is restricted by the dynamic range of the camera. In addition, the gamma function is not suitable for compensating a non-monotonic intensity response. This paper proposes a gamma correction method based on precisely detecting the optical power instead of using a plate and camera. A photodiode with high dynamic range and linear response is used to directly capture the optical power from the digital projector. After the real gamma curve is obtained precisely with the photodiode, a gray-level look-up table (LUT) is generated to correct the image to be projected. Finally, the proposed method is verified experimentally.
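
The LUT-generation step can be sketched as inverse interpolation of the measured response. `build_gamma_lut` and its normalisation are illustrative assumptions (the paper's exact procedure may differ), and the measured curve is assumed monotonically increasing:

```python
import numpy as np

def build_gamma_lut(gray_levels: np.ndarray, measured_power: np.ndarray,
                    levels: int = 256) -> np.ndarray:
    """Build a gray-level look-up table that linearizes a projector's
    measured intensity response.

    `measured_power` holds photodiode readings for each projected gray
    level in `gray_levels`. The LUT maps each desired (linear) output
    level to the input gray level whose measured output matches it,
    via inverse interpolation of the monotonic measured curve.
    """
    power = np.asarray(measured_power, dtype=float)
    power = (power - power.min()) / (power.max() - power.min())
    target = np.linspace(0.0, 1.0, levels)        # desired linear response
    lut = np.interp(target, power, gray_levels)   # invert measured curve
    return np.round(lut).astype(int)
```

Projecting `lut[g]` instead of `g` then yields an approximately linear optical output, without assuming the response follows any analytic gamma function.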

  8. Benchmark atomization energy of ethane : importance of accurate zero-point vibrational energies and diagonal Born-Oppenheimer corrections for a 'simple' organic molecule.

    SciTech Connect

    Karton, A.; Martin, J. M. L.; Ruscic, B.; Chemistry; Weizmann Institute of Science

    2007-06-01

    A benchmark calculation of the atomization energy of the 'simple' organic molecule C2H6 (ethane) has been carried out by means of W4 theory. While the molecule is straightforward in terms of one-particle and n-particle basis set convergence, its large zero-point vibrational energy (and anharmonic correction thereto) and nontrivial diagonal Born-Oppenheimer correction (DBOC) represent interesting challenges. For the W4 set of molecules and C2H6, we show that DBOCs to the total atomization energy are systematically overestimated at the SCF level, and that the correlation correction converges very rapidly with the basis set. Thus, even at the CISD/cc-pVDZ level, useful correlation corrections to the DBOC are obtained. When applying such a correction, overall agreement with experiment was only marginally improved, but a more significant improvement is seen when hydrogen-containing systems are considered in isolation. We conclude that for closed-shell organic molecules, the greatest obstacles to highly accurate computational thermochemistry may not lie in the solution of the clamped-nuclei Schroedinger equation, but rather in the zero-point vibrational energy and the diagonal Born-Oppenheimer correction.

  9. Post-exposure sleep deprivation facilitates correctly timed interactions between glucocorticoid and adrenergic systems, which attenuate traumatic stress responses.

    PubMed

    Cohen, Shlomi; Kozlovsky, Nitsan; Matar, Michael A; Kaplan, Zeev; Zohar, Joseph; Cohen, Hagit

    2012-10-01

    compared with exposed-SD animals. Intentional prevention of sleep in the early aftermath of stress exposure may well be beneficial in attenuating traumatic stress-related sequelae. Post-exposure SD may disrupt the consolidation of aversive or fearful memories by facilitating correctly timed interactions between glucocorticoid and adrenergic systems. PMID:22713910

  10. Sectional power-law correction for the accurate determination of lutetium by isotope dilution multiple collector-inductively coupled plasma-mass spectrometry

    NASA Astrophysics Data System (ADS)

    Yuan, Hong-Lin; Gao, Shan; Zong, Chun-Lei; Dai, Meng-Ning

    2009-11-01

    In this study, we employ a sectional power-law (SPL) correction that provides accurate and precise measurements of 176Lu/175Lu ratios in geological samples using multiple collector-inductively coupled plasma-mass spectrometry (MC-ICP-MS). Three independent power laws were adopted based on the 176Lu/176Yb ratios of samples measured after chemical chromatography. Using isotope dilution (ID) techniques and the SPL correction method, the measured lutetium contents of United States Geological Survey rock standards (BHVO-1, BHVO-2, BCR-2, AGV-1, and G-2) agree well with the recommended values. Results obtained by conventional ICP-MS and INAA are generally higher than those obtained by ID-TIMS and ID-MC-ICP-MS; this discrepancy probably reflects oxide interference and inaccurate corrections.
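
The single power-law building block underlying the sectional scheme can be sketched as follows; the sectional refinement then amounts to choosing the fractionation exponent per section according to the measured 176Lu/176Yb level. Function names are ours, and the numeric constants in the usage are illustrative:

```python
import math

def fractionation_exponent(r_true_ref: float, r_meas_ref: float,
                           m1: float, m2: float) -> float:
    """Derive the power-law fractionation exponent f from a reference
    isotope pair of masses m1/m2 whose true ratio is known."""
    return math.log(r_true_ref / r_meas_ref) / math.log(m1 / m2)

def power_law_correction(r_measured: float, m1: float, m2: float,
                         f: float) -> float:
    """Power-law mass-bias correction for an isotope ratio of masses
    m1/m2:  R_true = R_measured * (m1/m2)**f."""
    return r_measured * (m1 / m2) ** f
```

By construction, correcting a measured ratio with an exponent derived from that same ratio's reference value recovers the reference exactly, which makes the round trip a convenient sanity check.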

  11. An accurate scatter measurement and correction technique for cone beam breast CT imaging using scanning sampled measurement (SSM)technique

    NASA Astrophysics Data System (ADS)

    Liu, Xinming; Shaw, Chris C.; Wang, Tianpeng; Chen, Lingyun; Altunbas, Mustafa C.; Kappadath, S. Cheenu

    2006-03-01

    We developed and investigated a scanning sampled measurement (SSM) technique for scatter measurement and correction in cone beam breast CT imaging. A cylindrical polypropylene phantom (water equivalent) was mounted on a rotating table in a stationary-gantry experimental cone beam breast CT imaging system. A 2D array of lead beads, with the beads spaced ~1 cm apart and slightly tilted vertically, was placed between the object and the x-ray source. A series of projection images was acquired as the phantom was rotated 1 degree per projection view and the lead bead array was shifted vertically from one projection view to the next. A series of lead bars was also placed at the phantom edge to produce better scatter estimation across the phantom edges. Image signals in the lead bead/bar shadows were used to obtain sampled scatter measurements, which were then interpolated to form an estimated scatter distribution across the projection images. The image data behind the lead bead/bar shadows were restored by interpolating image data from two adjacent projection views to form beam-block-free projection images. The estimated scatter distribution was then subtracted from the corresponding restored projection image to obtain scatter-removed projection images. Our preliminary experiment demonstrated that it is feasible to implement the SSM technique for scatter estimation and correction in cone beam breast CT imaging. Scatter correction was successfully performed on all projection images using the scatter distribution interpolated from the SSM and the restored projection image data. The scatter-corrected projection data yielded elevated CT numbers and greatly reduced cupping artefacts.
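
The interpolate-and-subtract step can be sketched with a generic scattered-data interpolator. This is a sketch, not the authors' implementation: the function names and the choice of linear interpolation with nearest-neighbour fill outside the sampled hull are our assumptions.

```python
import numpy as np
from scipy.interpolate import griddata

def estimate_scatter(sample_rows, sample_cols, sample_vals, shape):
    """Interpolate sparsely sampled scatter measurements (signals read
    in the beam-blocker shadows) into a full scatter map for one
    projection image of the given shape.

    Linear interpolation inside the sampled convex hull; nearest-
    neighbour fill outside it (image regions beyond the outermost beads).
    """
    pts = np.column_stack([sample_rows, sample_cols])
    rr, cc = np.mgrid[0:shape[0], 0:shape[1]]
    lin = griddata(pts, sample_vals, (rr, cc), method="linear")
    near = griddata(pts, sample_vals, (rr, cc), method="nearest")
    return np.where(np.isnan(lin), near, lin)

def correct_projection(projection, scatter_map):
    """Subtract the estimated scatter, clipping negatives to zero."""
    return np.clip(projection - scatter_map, 0.0, None)
```

Because scatter is spatially smooth, a ~1 cm sampling pitch is dense enough for interpolation to recover the distribution well, which is what makes the SSM approach viable.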

  12. Additional correction for energy transfer efficiency calculation in filter-based Förster resonance energy transfer microscopy for more accurate results

    NASA Astrophysics Data System (ADS)

    Sun, Yuansheng; Periasamy, Ammasi

    2010-03-01

    Förster resonance energy transfer (FRET) microscopy is commonly used to monitor protein interactions with filter-based imaging systems, which require spectral bleedthrough (or cross-talk) correction to accurately measure the energy transfer efficiency (E). The double-label (donor+acceptor) specimen is excited at the donor wavelength; the acceptor emission provides the uncorrected FRET signal, and the donor emission (the donor channel) represents the quenched donor (qD), the basis for the E calculation. Our results indicate this is not the most accurate determination of the quenched donor signal, as it fails to consider the donor spectral bleedthrough (DSBT) signals in qD for the E calculation. Our new model addresses this, leading to a more accurate E result. This refinement improves E comparisons made with lifetime and spectral FRET imaging microscopy, as shown here using several genetic (FRET standard) constructs, where cerulean and venus fluorescent proteins are tethered by different amino acid linkers.
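
The donor-quenching estimate of E that the qD signal feeds into can be sketched in one line. This shows only the generic formula, not the authors' corrected model; the point of the abstract is precisely that the measured donor-channel signal must have DSBT removed before being used as `quenched_donor` here:

```python
def fret_efficiency(quenched_donor: float, unquenched_donor: float) -> float:
    """Donor-quenching estimate of FRET efficiency: E = 1 - qD/uD.

    `quenched_donor` is the donor-channel signal in the presence of the
    acceptor; `unquenched_donor` is the donor-only reference signal.
    If qD still contains donor spectral bleedthrough, E is biased low.
    """
    return 1.0 - quenched_donor / unquenched_donor
```

Any residual bleedthrough inflates qD and therefore systematically underestimates E, which is the bias the refined model removes.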

  13. Evaluation of Iterative Reconstruction Method and Attenuation Correction in Brain Dopamine Transporter SPECT Using an Anthropomorphic Striatal Phantom

    PubMed Central

    Maebatake, Akira; Imamura, Ayaka; Kodera, Yui; Yamashita, Yasuo; Himuro, Kazuhiko; Baba, Shingo; Miwa, Kenta; Sasaki, Masayuki

    2016-01-01

    Objective(s): The aim of this study was to determine the optimal reconstruction parameters for iterative reconstruction in different devices and collimators for dopamine transporter (DaT) single-photon emission computed tomography (SPECT). The results were compared between filtered back projection (FBP) and different attenuation correction (AC) methods. Methods: An anthropomorphic striatal phantom was filled with 123I solutions at different striatum-to-background radioactivity ratios. Data were acquired using two SPECT/CT devices, equipped with a low-to-medium-energy general-purpose collimator (cameras A-1 and B-1) and a low-energy high-resolution (LEHR) collimator (cameras A-2 and B-2). The SPECT images were once reconstructed by FBP using Chang’s AC and once by ordered subset expectation maximization (OSEM) using both CTAC and Chang’s AC; moreover, scatter correction was performed. OSEM on cameras A-1 and A-2 included resolution recovery (RR). The images were analyzed using the specific binding ratio (SBR). Regions of interest for the background were placed on both frontal and occipital regions. Results: The optimal number of iterations and subsets was 10i10s on camera A-1, 10i5s on camera A-2, and 7i6s on cameras B-1 and B-2. The optimal full width at half maximum of the Gaussian filter was 2.5 times the pixel size. In the comparison between FBP and OSEM, the quality was superior on OSEM-reconstructed images, although edge artifacts were observed in cameras A-1 and A-2. The SBR recovery of OSEM was higher than that of FBP on cameras A-1 and A-2, while no significant difference was detected on cameras B-1 and B-2. Good linearity of SBR was observed in all cameras. In the comparison between Chang’s AC and CTAC, a significant correlation was observed on all cameras. The difference in the background region influenced SBR differently in Chang’s AC and CTAC on cameras A-1 and B-1. Conclusion: Iterative reconstruction improved image quality on all cameras
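
The specific binding ratio used throughout the analysis above has a simple definition; a minimal sketch (the function name is ours):

```python
def specific_binding_ratio(striatal_counts: float,
                           background_counts: float) -> float:
    """Specific binding ratio (SBR) used in DaT SPECT:
    SBR = (striatal counts - background counts) / background counts.

    The background is taken from a reference region with negligible
    specific binding (e.g. frontal or occipital, as in the abstract).
    """
    return (striatal_counts - background_counts) / background_counts
```

Because the background region enters the denominator, the abstract's observation that the choice of background region affects Chang's AC and CTAC differently translates directly into different SBR values.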

  14. A single CT for attenuation correction of both rest and stress SPECT myocardial perfusion imaging: a retrospective feasibility study

    PubMed Central

    Ahlman, Mark A; Nietert, Paul J; Wahlquist, Amy E; Serguson, Jill M; Berry, Max W; Suranyi, Pal; Liu, Songtao; Spicer, Kenneth M

    2014-01-01

    Purpose: In the effort to reduce radiation exposure to patients undergoing myocardial perfusion imaging (MPI) with SPECT/CT, we evaluate the feasibility of a single CT for attenuation correction (AC) of single-day rest (R)/stress (S) perfusion. Methods: Processing of 20 single-isotope and 20 dual-isotope MPI studies with perfusion defects was retrospectively repeated in three steps: (1) the standard method using a concurrent R-CT for AC of R-SPECT and S-CT for S-SPECT; (2) the standard method repeated; and (3) with the R-CT used for AC of S-SPECT, and the S-CT used for AC of R-SPECT. Intra-Class Correlation Coefficients (ICC) and Cohen’s kappa were used to measure intra-operator variability in sum scoring. Results: The highest level of intra-operator reliability was seen with the reproduction of the sum rest score (SRS) and sum stress score (SSS) (ICC > 95%). ICCs were > 85% for SRS and SSS when alternate CTs were used for AC, but when sum difference scores were calculated, ICC values were much lower (~22% to 27%), which may imply that neither CT substitution resulted in a reproducible difference score. Similar results were seen when evaluating dichotomous outcomes (sum score differences of ≥ 4) when comparing different processing techniques (kappas ~0.32 to 0.43). Conclusions: When a single CT is used for AC of both rest and stress SPECT, there is disproportionately high variability in sum scoring that is independent of user error. This information can be used to direct further investigation into radiation reduction for common imaging exams in nuclear medicine. PMID:24482701
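
The Cohen's kappa statistic reported for the dichotomous outcomes above measures chance-corrected agreement; a minimal sketch (the function name is ours):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two sets of
    categorical labels of equal length.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the agreement expected by chance from each
    rater's marginal label frequencies. 1 = perfect, 0 = chance level.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1.0 - p_e)
```

Kappas around 0.32 to 0.43, as found when CTs were swapped, thus indicate only fair-to-moderate agreement beyond chance.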

  15. The distribution of highly stable millimeter-wave signals over different optical fiber links with accurate phase-correction

    NASA Astrophysics Data System (ADS)

    Liu, Zhangweiyi; Wang, Xiaocheng; Sun, Dongning; Dong, Yi; Hu, Weisheng

    2015-08-01

We have demonstrated an optically generated, highly stable millimeter-wave signal distribution system, which transfers a 300 GHz signal to two remote ends over different optical fiber links for signal stability comparison. The transmission delay variations of each fiber link caused by temperature and mechanical perturbations are compensated by a high-precision phase-correction system. The residual phase noise between the two remote-end signals is detected by dual-heterodyne phase-error transfer and reaches -46 dBc/Hz at a 1 Hz frequency offset from the carrier. The relative instability is 8×10^-17 at a 1000 s averaging time.

  16. Accurately evaluating Young's modulus of polymers through nanoindentations: A phenomenological correction factor to the Oliver and Pharr procedure

    NASA Astrophysics Data System (ADS)

    Tranchida, Davide; Piccarolo, Stefano; Loos, Joachim; Alexeev, Alexander

    2006-10-01

    The Oliver and Pharr [J. Mater. Res. 7, 1564 (1992)] procedure is a widely used tool to analyze nanoindentation force curves obtained on metals or ceramics. Its application to polymers is, however, difficult, as Young's moduli are commonly overestimated mainly because of viscoelastic effects and pileup. However, polymers spanning a large range of morphologies have been used in this work to introduce a phenomenological correction factor. It depends on indenter geometry: sets of calibration indentations have to be performed on some polymers with known elastic moduli to characterize each indenter.

  17. A simple yet accurate correction for winner's curse can predict signals discovered in much larger genome scans

    PubMed Central

    Bigdeli, T. Bernard; Lee, Donghyung; Webb, Bradley Todd; Riley, Brien P.; Vladimirov, Vladimir I.; Fanous, Ayman H.; Kendler, Kenneth S.; Bacanu, Silviu-Alin

    2016-01-01

Motivation: For genetic studies, statistically significant variants explain far less trait variance than ‘sub-threshold’ association signals. To dimension follow-up studies, researchers need to accurately estimate ‘true’ effect sizes at each SNP, e.g. the true mean of odds ratios (ORs)/regression coefficients (RRs) or Z-score noncentralities. Naïve estimates of effect sizes incur winner’s curse biases, which are reduced only by laborious winner’s curse adjustments (WCAs). Given that Z-score estimates can be theoretically translated to other scales, we propose a simple method to compute WCAs for Z-scores, i.e. their true means/noncentralities. Results: WCA of Z-scores shrinks them toward zero while, on the P-value scale, multiple testing adjustment (MTA) shrinks P-values toward one, which corresponds to a zero Z-score. Thus, WCA on the Z-score scale is a proxy for MTA on the P-value scale. Therefore, to estimate Z-score noncentralities for all SNPs in genome scans, we propose the FDR Inverse Quantile Transformation (FIQT). It (i) performs the simpler MTA of P-values using FDR and (ii) obtains noncentralities by back-transforming MTA P-values to the Z-score scale. When compared to competitors, realistic simulations suggest that FIQT is more (i) accurate and (ii) computationally efficient by orders of magnitude. Practical application of FIQT to the Psychiatric Genomics Consortium schizophrenia cohort predicts a non-trivial fraction of sub-threshold signals which become significant in much larger supersamples. Conclusions: FIQT is a simple, yet accurate, WCA method for Z-scores (and ORs/RRs, via simple transformations). Availability and Implementation: A 10-line R function implementation is available at https://github.com/bacanusa/FIQT. Contact: sabacanu@vcu.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27187203
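The two FIQT steps described above can be sketched in a few lines of Python (the reference implementation is a 10-line R function; this is an independent illustration, not that code): (i) Benjamini-Hochberg-adjust the two-sided p-values of the Z-scores, then (ii) back-transform the adjusted p-values to the Z scale, which shrinks each score toward zero while preserving its sign.

```python
import math
from statistics import NormalDist

def fiqt(z_scores):
    """Sketch of the FDR Inverse Quantile Transformation:
    FDR-adjust two-sided p-values, then map them back to Z-scores."""
    nd = NormalDist()
    n = len(z_scores)
    p = [2 * (1 - nd.cdf(abs(z))) for z in z_scores]
    order = sorted(range(n), key=lambda i: p[i])
    p_adj = [0.0] * n
    running_min = 1.0
    for rank in range(n - 1, -1, -1):  # step-up Benjamini-Hochberg
        i = order[rank]
        running_min = min(running_min, p[i] * n / (rank + 1))
        p_adj[i] = running_min
    # Back-transform: adjusted p -> shrunken |Z|, then restore the sign.
    return [math.copysign(nd.inv_cdf(1 - q / 2), z) if q < 1 else 0.0
            for q, z in zip(p_adj, z_scores)]
```

Since BH-adjusted p-values are never smaller than the raw ones, each returned Z-score is shrunk toward zero relative to its input.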

  18. Harmonic allocation of authorship credit: source-level correction of bibliometric bias assures accurate publication and citation analysis.

    PubMed

    Hagen, Nils T

    2008-01-01

    Authorship credit for multi-authored scientific publications is routinely allocated either by issuing full publication credit repeatedly to all coauthors, or by dividing one credit equally among all coauthors. The ensuing inflationary and equalizing biases distort derived bibliometric measures of merit by systematically benefiting secondary authors at the expense of primary authors. Here I show how harmonic counting, which allocates credit according to authorship rank and the number of coauthors, provides simultaneous source-level correction for both biases as well as accommodating further decoding of byline information. I also demonstrate large and erratic effects of counting bias on the original h-index, and show how the harmonic version of the h-index provides unbiased bibliometric ranking of scientific merit while retaining the original's essential simplicity, transparency and intended fairness. Harmonic decoding of byline information resolves the conundrum of authorship credit allocation by providing a simple recipe for source-level correction of inflationary and equalizing bias. Harmonic counting could also offer unrivalled accuracy in automated assessments of scientific productivity, impact and achievement.
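Harmonic counting as described above has a simple closed form: the k-th of N coauthors receives (1/k)/(1 + 1/2 + ... + 1/N) of one publication credit, so credits decrease with byline rank and always sum to one. A minimal sketch:

```python
def harmonic_credit(n_authors):
    """Harmonic allocation of authorship credit: the k-th of N coauthors
    receives (1/k) / (1 + 1/2 + ... + 1/N) of one publication credit."""
    norm = sum(1.0 / k for k in range(1, n_authors + 1))
    return [(1.0 / k) / norm for k in range(1, n_authors + 1)]

# For three coauthors the credits are 6/11, 3/11, and 2/11.
print(harmonic_credit(3))
```

This simultaneously avoids the inflationary bias of full counting (credits sum to 1, not N) and the equalizing bias of fractional counting (the first author gets more than 1/N).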

  19. Corrections

    NASA Astrophysics Data System (ADS)

    2012-09-01

The feature article "Material advantage?" on the effects of technology and rule changes on sporting performance (July pp28-30) stated that sprinters are less affected by lower oxygen levels at high altitudes because they run "aerobically". They run anaerobically. The feature about the search for the Higgs boson (August pp22-26) incorrectly gave the boson's mass as roughly 125 MeV; it is 125 GeV, as correctly stated elsewhere in the issue. The article also gave a wrong value for the intended collision energy of the Superconducting Super Collider, which was designed to collide protons with a total energy of 40 TeV.

  20. Correction.

    PubMed

    2015-05-22

The Circulation Research article by Keith and Bolli (“String Theory” of c-kitpos Cardiac Cells: A New Paradigm Regarding the Nature of These Cells That May Reconcile Apparently Discrepant Results. Circ Res. 2015;116:1216-1230. doi: 10.1161/CIRCRESAHA.116.305557) states that van Berlo et al (2014) observed that large numbers of fibroblasts and adventitial cells, some smooth muscle and endothelial cells, and rare cardiomyocytes originated from c-kit positive progenitors. However, van Berlo et al reported that only occasional fibroblasts and adventitial cells derived from c-kit positive progenitors in their studies. Accordingly, the review has been corrected to indicate that van Berlo et al (2014) observed that large numbers of endothelial cells, with some smooth muscle cells and fibroblasts, and more rarely cardiomyocytes, originated from c-kit positive progenitors in their murine model. The authors apologize for this error, and the error has been noted and corrected in the online version of the article, which is available at http://circres.ahajournals.org/content/116/7/1216.full. PMID:25999426

  1. Enhancing the quality of radiographic images acquired with point-like gamma-ray sources through correction of the beam divergence and attenuation

    SciTech Connect

    Silvani, M. I.; Almeida, G. L.; Lopes, R. T.

    2014-11-11

Radiographic images acquired with point-like gamma-ray sources exhibit desirably low penumbra effects, especially when the source is positioned far from the object-detector set. Such an arrangement is frequently not affordable due to the limited flux provided by a distant source. A closer source, however, has two main drawbacks, namely the degradation of the spatial resolution - as actual sources are only approximately punctual - and the non-homogeneity of the beam hitting the detector, which creates a false attenuation map of the object being inspected. This non-homogeneity is caused by the beam divergence itself and by the different thicknesses traversed by the beam, even if the object were a homogeneous flat plate. In this work, radiographic images of objects with different geometries, such as flat plates and pipes, have undergone a correction of beam divergence and attenuation, addressing the experimental verification of the capability and soundness of an algorithm formerly developed to generate and process synthetic images. The impact of other parameters, including source-detector gap, attenuation coefficient, ratio of defective-to-main hull thickness, and counting statistics, has been assessed for specifically tailored test objects, aiming at the evaluation of the ability of the proposed method to deal with different boundary conditions. All experiments have been carried out with an X-ray-sensitive imaging plate and reactor-produced 198Au and 165Dy sources. The results have been compared with another technique, showing a better capability to correct the attenuation map of inspected objects, unveiling their inner structure otherwise concealed by the poor contrast caused by beam divergence and attenuation, in particular for those regions far from the vertical of the source.
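The kind of correction described above can be illustrated under simplifying assumptions (Beer-Lambert attenuation, an inverse-square point source, a flat homogeneous plate perpendicular to the central axis). This is a hypothetical per-pixel sketch, not the authors' algorithm:

```python
import math

def corrected_intensity(measured, i0, mu, thickness, src_det_gap, x_off):
    """Hypothetical correction: divide out the inverse-square falloff of a
    point source and the Beer-Lambert attenuation expected for a defect-free
    plate, leaving a map that is ~1.0 where the plate is sound and deviates
    where inner defects change the traversed thickness."""
    r = math.hypot(src_det_gap, x_off)       # source-to-pixel distance
    path = thickness * r / src_det_gap       # oblique path through the plate
    expected = i0 * (src_det_gap / r) ** 2 * math.exp(-mu * path)
    return measured / expected
```

Off-axis pixels see both a weaker incident flux and a longer path through the object, which is exactly the false attenuation gradient the abstract refers to; dividing by the expected intensity removes both effects at once.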

  3. The accurate calculation of the band gap of liquid water by means of GW corrections applied to plane-wave density functional theory molecular dynamics simulations.

    PubMed

    Fang, Changming; Li, Wun-Fan; Koster, Rik S; Klimeš, Jiří; van Blaaderen, Alfons; van Huis, Marijn A

    2015-01-01

Knowledge about the intrinsic electronic properties of water is imperative for understanding the behaviour of aqueous solutions that are used throughout biology, chemistry, physics, and industry. The calculation of the electronic band gap of liquids is challenging, because the most accurate ab initio approaches can be applied only to small numbers of atoms, while large numbers of atoms are required for having configurations that are representative of a liquid. Here we show that a high-accuracy value for the electronic band gap of water can be obtained by combining beyond-DFT methods and statistical time-averaging. Liquid water is simulated at 300 K using a plane-wave density functional theory molecular dynamics (PW-DFT-MD) simulation and a van der Waals density functional (optB88-vdW). After applying a self-consistent GW correction, the band gap of liquid water at 300 K is calculated as 7.3 eV, in good agreement with recent experimental observations in the literature (6.9 eV). For simulations of phase transformations and chemical reactions in water or aqueous solutions whereby an accurate description of the electronic structure is required, we suggest using these advanced GW corrections in combination with the statistical analysis of quantum mechanical MD simulations.

  4. A study on the change in image quality before and after an attenuation correction with the use of a CT image in a SPECT/CT scan

    NASA Astrophysics Data System (ADS)

    Park, Yong-Soon; Kim, Woo-Hyun; Shim, Dong-Oh; Kim, Ho-Sung; Chung, Woon-Kwan; Cho, Jae-Hwan

    2012-12-01

This study compared SPECT (single-photon emission computed tomography) images before and after applying an attenuation correction based on the CT (computed tomography) image in a SPECT/CT scan and examined how the image quality depends on the CT dose. A flangeless Esser PET (positron emission tomography) phantom was used to evaluate the image quality on the Precedence 16 SPECT/CT system manufactured by Philips. The experimental method was to obtain a SPECT image and a CT image of the flangeless Esser PET phantom and thereby acquire an attenuation-corrected SPECT image. An ROI (region of interest) was then set up at a hot spot of the acquired image to measure the SNR (signal-to-noise ratio) and the FWHM (full width at half maximum) and to compare the image quality with that of a SPECT image without attenuation correction. To evaluate the quality of the SPECT images, ROIs were placed on cylinders of different diameters (25, 16, 12, and 8 mm) and on the BKG (background) radioactivity of the phantom images obtained as each CT condition was changed; the counts were then compared to measure the SNR. The FWHM of the smallest cylinder (8 mm) was measured to compare the image quality. A comparison of the SPECT images with and without attenuation correction revealed 5.01-fold, 4.77-fold, 4.43-fold, 4.38-fold, and 5.13-fold differences in SNR for the 25-mm, 16-mm, 12-mm, and 8-mm cylinders and the BKG, respectively. In the phantom images obtained as the CT dose was changed, the FWHM of the 8-mm cylinder showed almost no difference under each condition, regardless of the changes in kVp and mAs.
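One common definition of the hot-spot SNR used in phantom evaluations like the one above is the contrast of the hot ROI over the background divided by the background noise (assumed here for illustration; the abstract does not give its exact formula):

```python
from statistics import mean, stdev

def snr(hot_roi_counts, background_roi_counts):
    """One common SNR definition for a hot-spot ROI:
    (mean hot counts - mean background counts) / background std. dev."""
    return ((mean(hot_roi_counts) - mean(background_roi_counts))
            / stdev(background_roi_counts))

print(snr([10.0, 10.0], [1.0, 3.0]))  # → ≈ 5.657 (i.e. 8 / sqrt(2))
```

Under this definition, an attenuation correction that restores counts in the phantom center raises the numerator and hence the measured SNR, consistent with the several-fold differences reported above.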

  5. [Radiometers performance attenuation and data correction in long-term observation of total radiation and photosynthetically active radiation in typical forest ecosystems in China].

    PubMed

    Zhu, Zhi-Lin; Sun, Xiao-Min; Yu, Gui-Rui; Wen, Xue-Fa; Zhang, Yi-Ping; Han, Shi-Jie; Yan, Jun-Hua; Wang, Hui-Min

    2011-11-01

Based on total radiation and photosynthetically active radiation (PAR) observations with a net radiometer (CNR1) and a quantum sensor (Li-190SB) at 4 ChinaFLUX forest sites (Changbaishan, Qianyanzhou, Dinghushan, and Xishuangbanna) in 2003-2008, this paper analyzed the uncertainties and the changes in radiometer performance during long-term, continuous field observation. The results showed that the accuracy of the total radiation measured with the CNR1 (Q(CNR1)) satisfied the 98% technical criterion at all sites except Xishuangbanna, where Q(CNR1) was on average about 7% lower than Q(CM11), the radiation measured with the high-accuracy pyranometer CM11. For most sites, although temperature had definite effects on the performance of the CNR1, the effects were still within the allowable accuracy range of the instrument. Besides temperature, the seasonal fog that often occurs in the tropical rain forest at Xishuangbanna also affected the performance of the CNR1. Based on the long-term variations of PAR, especially its ratio to total radiation at the 4 sites, it was found that the quantum sensor (Li-190SB) showed obvious performance attenuation, with a mean annual attenuation rate of about 4%. To correct the observation error caused by the Li-190SB, an attempt was made to post-correct the PAR observations, which could basically eliminate the quantum sensor's performance attenuation due to long-term field measurement.

  6. Fast, accurate, and robust automatic marker detection for motion correction based on oblique kV or MV projection image pairs

    SciTech Connect

    Slagmolen, Pieter; Hermans, Jeroen; Maes, Frederik; Budiharto, Tom; Haustermans, Karin; Heuvel, Frank van den

    2010-04-15

Purpose: A robust and accurate method that allows the automatic detection of fiducial markers in MV and kV projection image pairs is proposed. The method allows automatic correction of inter- or intrafraction motion. Methods: Intratreatment MV projection images are acquired during each of five treatment beams of prostate cancer patients with four implanted fiducial markers. The projection images are first preprocessed using a series of marker enhancing filters. 2D candidate marker locations are generated for each of the filtered projection images and 3D candidate marker locations are reconstructed by pairing candidates in subsequent projection images. The correct marker positions are retrieved in 3D by the minimization of a cost function that combines 2D image intensity and 3D geometric or shape information for the entire marker configuration simultaneously. This optimization problem is solved using dynamic programming such that the globally optimal configuration for all markers is always found. Translational interfraction and intrafraction prostate motion and the required patient repositioning are assessed from the position of the centroid of the detected markers in different MV image pairs. The method was validated on a phantom using CT as ground-truth and on clinical data sets of 16 patients using manual marker annotations as ground-truth. Results: The entire setup was confirmed to be accurate to around 1 mm by the phantom measurements. The reproducibility of the manual marker selection was less than 3.5 pixels in the MV images. In patient images, markers were correctly identified in at least 99% of the cases for anterior projection images and 96% of the cases for oblique projection images. The average marker detection accuracy was 1.4±1.8 pixels in the projection images. The centroid of all four reconstructed marker positions in 3D was positioned within 2 mm of the ground-truth position in 99.73% of all cases.
Detecting four markers in a pair of MV images

  7. [The Optimal Reconstruction Parameters by Scatter and Attenuation Corrections Using Multi-focus Collimator System in Thallium-201 Myocardial Perfusion SPECT Study].

    PubMed

    Shibutani, Takayuki; Onoguchi, Masahisa; Funayama, Risa; Nakajima, Kenichi; Matsuo, Shinro; Yoneyama, Hiroto; Konishi, Takahiro; Kinuya, Seigo

    2015-11-01

The aim of this study was to determine the optimal reconstruction parameters of the ordered subset conjugate gradient minimizer (OSCGM) with no correction (NC), attenuation correction (AC), and AC plus scatter correction (ACSC), using the IQ-SPECT (single photon emission computed tomography) system in thallium-201 myocardial perfusion SPECT. A myocardial phantom was acquired in two patterns, with and without a defect. The myocardial images were evaluated with a 5-point visual score and with quantitative measures (contrast, uptake, and uniformity) as functions of the subset number and update number (subsets×iterations) of OSCGM and of the full width at half maximum (FWHM) of the Gaussian filter for each of the three corrections, and the optimal reconstruction parameters of OSCGM were determined for each correction. The numbers of subsets that created suitable images were 3 or 5 for NC and AC, and 2 or 3 for ACSC. The numbers of updates that created suitable images were 30 or 40 for NC, 40 or 60 for AC, and 30 for ACSC. Furthermore, the FWHMs of the Gaussian filters were 9.6 mm or 12 mm for NC and ACSC, and 7.2 mm or 9.6 mm for AC. In conclusion, the following optimal reconstruction parameters of OSCGM were determined; NC: subset 5, iteration 8, and FWHM 9.6 mm; AC: subset 5, iteration 8, and FWHM 7.2 mm; ACSC: subset 3, iteration 10, and FWHM 9.6 mm. PMID:26596202

  9. Validation of model-based pelvis bone segmentation from MR images for PET/MR attenuation correction

    NASA Astrophysics Data System (ADS)

    Renisch, S.; Blaffert, T.; Tang, J.; Hu, Z.

    2012-02-01

    With the recent introduction of combined Magnetic Resonance Imaging (MRI) / Positron Emission Tomography (PET) systems, the generation of attenuation maps for PET based on MR images gained substantial attention. One approach for this problem is the segmentation of structures on the MR images with subsequent filling of the segments with respective attenuation values. Structures of particular interest for the segmentation are the pelvis bones, since those are among the most heavily absorbing structures for many applications, and they can serve at the same time as valuable landmarks for further structure identification. In this work the model-based segmentation of the pelvis bones on gradient-echo MR images is investigated. A processing chain for the detection and segmentation of the pelvic bones is introduced, and the results are evaluated using CT-generated "ground truth" data. The results indicate that a model based segmentation of the pelvis bone is feasible with moderate requirements to the pre- and postprocessing steps of the segmentation.

  10. High-throughput amplicon sequencing of rRNA genes requires a copy number correction to accurately reflect the effects of management practices on soil nematode community structure.

    PubMed

    Darby, B J; Todd, T C; Herman, M A

    2013-11-01

Nematodes are abundant consumers in grassland soils, but more sensitive and specific methods of enumeration are needed to improve our understanding of how different nematode species affect, and are affected by, ecosystem processes. High-throughput amplicon sequencing is used to enumerate microbial and invertebrate communities at a high level of taxonomic resolution, but the method requires validation against traditional specimen-based morphological identifications. To investigate the consistency between these approaches, we enumerated nematodes from a 25-year field experiment using both morphological and molecular identification techniques in order to determine the long-term effects of annual burning and nitrogen enrichment on soil nematode communities. Family-level frequencies based on amplicon sequencing were not initially consistent with specimen-based counts, but correction for differences in rRNA gene copy number using a genetic algorithm improved quantitative accuracy. Multivariate analysis of corrected sequence-based abundances of nematode families was consistent with, but not identical to, analysis of specimen-based counts. In both cases, herbivores, fungivores and predator/omnivores generally were more abundant in burned than nonburned plots, while bacterivores generally were more abundant in nonburned or nitrogen-enriched plots. Discriminant analysis of sequence-based abundances identified putative indicator species representing each trophic group. We conclude that high-throughput amplicon sequencing can be a valuable method for characterizing nematode communities at high taxonomic resolution as long as rRNA gene copy number variation is accounted for and accurate sequence databases are available. PMID:24103081
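In its simplest form, the copy-number correction discussed above amounts to dividing each taxon's read counts by its per-taxon rRNA gene copy number and renormalizing to relative frequencies (a deliberate simplification of the genetic-algorithm approach used in the study, which must also estimate the copy numbers):

```python
def copy_number_corrected(read_counts, copy_numbers):
    """Convert amplicon read counts to estimated organism frequencies by
    dividing each taxon's reads by its rRNA gene copy number, then
    renormalizing so the frequencies sum to one."""
    raw = {t: read_counts[t] / copy_numbers[t] for t in read_counts}
    total = sum(raw.values())
    return {t: v / total for t, v in raw.items()}

# Equal read counts, but taxon 'a' carries twice the gene copies of 'b',
# so 'a' represents half as many organisms:
print(copy_number_corrected({'a': 100, 'b': 100}, {'a': 4, 'b': 2}))
```

This makes the source of the bias explicit: without the division, taxa with many rRNA gene copies are systematically over-counted relative to specimen-based tallies.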

  12. Evaluation of the dependence of the exposure dose on the attenuation correction in brain PET/CT scans using 18F-FDG

    NASA Astrophysics Data System (ADS)

    Choi, Eun-Jin; Jeong, Moon-Taeg; Jang, Seong-Joo; Choi, Nam-Gil; Han, Jae-Bok; Yang, Nam-Hee; Dong, Kyung-Rae; Chung, Woon-Kwan; Lee, Yun-Jong; Ryu, Young-Hwan; Choi, Sung-Hyun; Seong, Kyeong-Jeong

    2014-01-01

This study examined whether scanning could be performed with a minimum dose and minimum exposure to the patient after an attenuation correction. A Hoffman 3D brain phantom was used in the BIO_40 and D_690 PET/CT scanners, and the CT dose for the equipment was classified as low dose (minimum dose), medium dose (general dose for scanning), and high dose (dose with use of contrast medium) before obtaining the image at a fixed kilovoltage peak (kVp) and tube current (mA) that were adjusted gradually in 17-20 stages. A PET image was then obtained to perform an attenuation correction based on an attenuation map before analyzing the dose difference. As the tube current-time product varied in the range of 33-190 milliampere-seconds (mAs) when the BIO_40 was used, a significant difference in the effective dose was observed between the minimum and the maximum mAs (p < 0.05). According to a Scheffé post-hoc test, the effective dose increased by approximately 5.26-fold from the minimum to the maximum. As the tube current varied in the range of 10-200 mA when the D_690 was used, a significant difference in the effective dose was observed between the minimum and the maximum mA (p < 0.05); the Scheffé post-hoc test revealed a 20.5-fold difference. In conclusion, because the effective dose increases with increasing tube current, the exposure in a brain scan can be reduced if the CT dose for the transmission scan is minimized.

  13. Correction of quantification errors in pelvic and spinal lesions caused by ignoring higher photon attenuation of bone in [{sup 18}F]NaF PET/MR

    SciTech Connect

Schramm, Georg; Maus, Jens; Hofheinz, Frank; Petr, Jan; Lougovski, Alexandr; Beuthien-Baumann, Bettina; Oehme, Liane; Platzek, Ivan; Hoff, Jörg van den

    2015-11-15

Purpose: MR-based attenuation correction (MRAC) in routine clinical whole-body positron emission tomography and magnetic resonance imaging (PET/MRI) is based on tissue type segmentation. Due to the lack of MR signal in cortical bone and the varying signal of spongeous bone, standard whole-body segmentation-based MRAC ignores the higher attenuation of bone compared to that of soft tissue (MRAC_nobone). The authors aim to quantify and reduce the bias introduced by MRAC_nobone in the standard uptake value (SUV) of spinal and pelvic lesions in 20 PET/MRI examinations with [18F]NaF. Methods: The authors reconstructed 20 PET/MR [18F]NaF patient data sets acquired with a Philips Ingenuity TF PET/MRI. The PET raw data were reconstructed with two different attenuation images. First, the authors used the vendor-provided MRAC algorithm that ignores the higher attenuation of bone to reconstruct PET_nobone. Second, the authors used a threshold-based algorithm developed in their group to automatically segment bone structures in the [18F]NaF PET images. Subsequently, an attenuation coefficient of 0.11 cm^-1 was assigned to the segmented bone regions in the MRI-based attenuation image (MRAC_bone), which was used to reconstruct PET_bone. The automatic bone segmentation algorithm was validated in six PET/CT [18F]NaF examinations. Relative SUV_mean and SUV_max differences between PET_bone and PET_nobone of 8 pelvic and 41 spinal lesions, and of other regions such as lung, liver, and bladder, were calculated. By varying the assigned bone attenuation coefficient from 0.11 to 0.13 cm^-1, the authors investigated its influence on the reconstructed SUVs of the lesions. Results: The comparison of [18F]NaF-based and CT-based bone segmentation in the six PET/CT patients showed a Dice similarity of 0.7 with a true positive rate of 0.72 and a false discovery rate of 0.33. The [18F]NaF-based bone
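The Dice similarity used above to validate the bone segmentation is computed as 2|A∩B|/(|A|+|B|) between two binary masks; a minimal sketch:

```python
def dice_similarity(mask_a, mask_b):
    """Dice coefficient between two binary segmentation masks:
    2·|A ∩ B| / (|A| + |B|), where A and B are the foreground voxel sets."""
    a = {i for i, v in enumerate(mask_a) if v}
    b = {i for i, v in enumerate(mask_b) if v}
    return 2 * len(a & b) / (len(a) + len(b))

# Two masks overlapping on 2 of their 3 foreground voxels each:
# 2·2 / (3 + 3) = 2/3
print(dice_similarity([1, 1, 1, 0], [0, 1, 1, 1]))
```

A value of 0.7, as reported for the [18F]NaF-based versus CT-based bone masks, indicates substantial but imperfect spatial overlap.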

  14. Respiration-Averaged CT for Attenuation Correction of PET Images – Impact on PET Texture Features in Non-Small Cell Lung Cancer Patients

    PubMed Central

    Cheng, Nai-Ming; Fang, Yu-Hua Dean; Tsan, Din-Li

    2016-01-01

Purpose: We compared attenuation correction of PET images with helical CT (PET/HCT) and respiration-averaged CT (PET/ACT) in patients with non-small-cell lung cancer (NSCLC) with the goal of investigating the impact of respiration-averaged CT on 18F-FDG PET texture parameters. Materials and Methods: A total of 56 patients were enrolled. Tumors were segmented on pretreatment PET images using the adaptive threshold. Twelve different texture parameters were computed: standard uptake value (SUV) entropy, uniformity, entropy, dissimilarity, homogeneity, coarseness, busyness, contrast, complexity, grey-level nonuniformity, zone-size nonuniformity, and high grey-level large zone emphasis. Comparisons of PET/HCT and PET/ACT were performed using Wilcoxon signed-rank tests, intraclass correlation coefficients, and Bland-Altman analysis. Receiver operating characteristic (ROC) curves as well as univariate and multivariate Cox regression analyses were used to identify the parameters significantly associated with disease-specific survival (DSS). A fixed threshold at 45% of the maximum SUV (T45) was used for validation. Results: SUV maximum and total lesion glycolysis (TLG) were significantly higher in PET/ACT. However, texture parameters obtained with PET/ACT and PET/HCT showed a high degree of agreement. The lowest levels of variation between the two modalities were observed for SUV entropy (9.7%) and entropy (9.8%). SUV entropy, entropy, and coarseness from both PET/ACT and PET/HCT were significantly associated with DSS. Validation analyses using T45 confirmed the usefulness of SUV entropy and entropy in both PET/HCT and PET/ACT for the prediction of DSS, but only coarseness from PET/ACT achieved the statistical significance threshold. Conclusions: Our results indicate that 1) texture parameters from PET/ACT are clinically useful in the prediction of survival in NSCLC patients and 2) SUV entropy and entropy are robust to attenuation correction methods. PMID:26930211

  15. k-space sampling optimization for ultrashort TE imaging of cortical bone: Applications in radiation therapy planning and MR-based PET attenuation correction

    SciTech Connect

    Hu, Lingzhi E-mail: raymond.muzic@case.edu; Traughber, Melanie; Su, Kuan-Hao; Pereira, Gisele C.; Grover, Anu; Traughber, Bryan; Muzic, Raymond F. Jr. E-mail: raymond.muzic@case.edu

    2014-10-15

    Purpose: The ultrashort echo-time (UTE) sequence is a promising MR pulse sequence for imaging cortical bone which is otherwise difficult to image using conventional MR sequences and also poses strong attenuation for photons in radiation therapy and PET imaging. The authors report here a systematic characterization of cortical bone signal decay and a scanning time optimization strategy for the UTE sequence through k-space undersampling, which can result in up to a 75% reduction in acquisition time. Using the undersampled UTE imaging sequence, the authors also attempted to quantitatively investigate the MR properties of cortical bone in healthy volunteers, thus demonstrating the feasibility of using such a technique for generating bone-enhanced images which can be used for radiation therapy planning and attenuation correction with PET/MR. Methods: An angularly undersampled, radially encoded UTE sequence was used for scanning the brains of healthy volunteers. Quantitative MR characterization of tissue properties, including water fraction and R2{sup ∗} = 1/T2{sup ∗}, was performed by analyzing the UTE images acquired at multiple echo times. The impact of different sampling rates was evaluated through systematic comparison of the MR image quality, bone-enhanced image quality, image noise, water fraction, and R2{sup ∗} of cortical bone. Results: A reduced angular sampling rate of the UTE trajectory achieves acquisition durations in proportion to the sampling rate and in as short as 25% of the time required for full sampling using a standard Cartesian acquisition, while preserving unique MR contrast within the skull at the cost of a minimal increase in noise level. The R2{sup ∗} of human skull was measured as 0.2–0.3 ms{sup −1} depending on the specific region, which is more than ten times greater than the R2{sup ∗} of soft tissue. The water fraction in human skull was measured to be 60%–80%, which is significantly less than the >90% water fraction in
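    One simple way to estimate R2* = 1/T2* from UTE images acquired at multiple echo times is a log-linear fit of a mono-exponential decay. The sketch below uses synthetic, cortical-bone-like values; the echo times and R2* are illustrative assumptions, not the study's protocol.

```python
import numpy as np

def fit_r2star(te_ms, signal):
    # log-linear fit of S(TE) = S0 * exp(-R2star * TE)
    slope, intercept = np.polyfit(te_ms, np.log(signal), 1)
    return -slope, float(np.exp(intercept))   # R2* (1/ms), S0

te = np.array([0.1, 0.5, 1.0, 2.0, 3.0])      # echo times in ms (assumed)
true_r2star, s0 = 0.25, 100.0                 # cortical-bone-like R2*
sig = s0 * np.exp(-true_r2star * te)          # noiseless synthetic decay
r2star_est, s0_est = fit_r2star(te, sig)
print(r2star_est)
```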

  16. Statistical analysis of accurate prediction of local atmospheric optical attenuation with a new model according to weather together with beam wandering compensation system: a season-wise experimental investigation

    NASA Astrophysics Data System (ADS)

    Arockia Bazil Raj, A.; Padmavathi, S.

    2016-07-01

    Atmospheric parameters strongly affect the performance of a Free Space Optical Communication (FSOC) system when the optical wave propagates through an inhomogeneous turbulent medium. Developing a model that accurately predicts optical attenuation from meteorological parameters is therefore important for understanding the behaviour of the FSOC channel during different seasons. A dedicated free space optical link experimental set-up was developed for a range of 0.5 km at an altitude of 15.25 m. The diurnal profile of received power and the corresponding meteorological parameters are continuously measured using the developed optoelectronic assembly and weather station, respectively, and stored on a data-logging computer. Measured meteorological parameters (as input factors) and optical attenuation (as the response factor) of size [177147 × 4] are used for linear regression analysis and to design the mathematical model best suited to predicting atmospheric optical attenuation at our test field. A model exhibiting an R2 value of 98.76% and an average percentage deviation of 1.59% is adopted for practical implementation. The prediction accuracy of the proposed model is investigated, along with comparative results obtained from some existing models, in terms of Root Mean Square Error (RMSE) during the different local seasons of a one-year period. An average RMSE of 0.043 dB/km is obtained over the wide dynamic range of meteorological parameter variations.
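    The two goodness-of-fit metrics quoted above, R2 and RMSE, can be sketched as follows. The humidity-attenuation data here are fabricated purely for illustration; the study's model uses several meteorological input factors.

```python
import numpy as np

def r_squared(y, y_pred):
    # coefficient of determination: 1 - SS_res / SS_tot
    ss_res = np.sum((y - y_pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return float(1.0 - ss_res / ss_tot)

def rmse(y, y_pred):
    # root mean square error of the predicted attenuation
    return float(np.sqrt(np.mean((y - y_pred) ** 2)))

# fabricated single-factor data: humidity (%) vs attenuation (dB/km)
rng = np.random.default_rng(1)
humidity = rng.uniform(40, 90, 200)
atten = 0.05 * humidity + rng.normal(0.0, 0.2, 200)
coef = np.polyfit(humidity, atten, 1)          # fit a linear model
pred = np.polyval(coef, humidity)
r2, err = r_squared(atten, pred), rmse(atten, pred)
print(r2, err)
```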

  17. CT-based attenuation correction in the calculation of semi-quantitative indices of [18F]FDG uptake in PET.

    PubMed

    Visvikis, D; Costa, D C; Croasdale, I; Lonn, A H R; Bomanji, J; Gacinovic, S; Ell, P J

    2003-03-01

    The introduction of combined PET/CT systems has a number of advantages, including the utilisation of CT images for PET attenuation correction (AC). The potential advantage compared with existing methodology is less noisy transmission maps within shorter acquisition times. The objective of our investigation was to assess the accuracy of CT attenuation correction (CTAC) and to study the resulting bias and signal-to-noise ratio (SNR) in image-derived semi-quantitative uptake indices. A combined PET/CT system (GE Discovery LS) was used. Different size phantoms containing variable density components were used to assess the inherent accuracy of a bilinear transformation in the conversion of CT images to 511 keV attenuation maps. This was followed by a phantom study simulating tumour imaging conditions, with a tumour to background ratio of 5:1. An additional variable was the inclusion of contrast agent at different concentration levels. A CT scan was carried out followed by 5 min emission with 1-h and 3-min transmission frames. Clinical data were acquired in 50 patients, who had a CT scan under normal breathing conditions (CTAC(nb)) or under breath-hold with inspiration (CTAC(insp)) or expiration (CTAC(exp)), followed by a PET scan of 5 and 3 min per bed position for the emission and transmission scans respectively. Phantom and patient studies were reconstructed using segmented AC (SAC) and CTAC. In addition, measured AC (MAC) was performed for the phantom study using the 1-h transmission frame. Comparing the attenuation coefficients obtained using the CT- and the rod source-based attenuation maps, differences of 3% and <6% were recorded before and after segmentation of the measured transmission maps. Differences of up to 6% and 8% were found in the average count density (SUV(avg)) between the phantom images reconstructed with MAC and those reconstructed with CTAC and SAC respectively. In the case of CTAC, the difference increased up to 27% with the presence of contrast
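    The bilinear transformation assessed above maps CT numbers (HU) to linear attenuation coefficients at 511 keV with one slope below a soft-tissue/bone break point and a shallower slope above it. The sketch below uses generic textbook slopes and break point as assumptions, not the scanner-specific calibration of the GE Discovery LS.

```python
# Sketch of a bilinear CT-to-mu(511 keV) transformation.  The slopes and
# break point are generic illustrative values (assumptions).
MU_WATER_511 = 0.096  # cm^-1, water at 511 keV

def hu_to_mu511(hu, break_hu=0.0, bone_slope=6.4e-5):
    if hu <= break_hu:
        # air (-1000 HU) -> 0, water (0 HU) -> mu_water
        return max(0.0, MU_WATER_511 * (hu + 1000.0) / 1000.0)
    # above the break point, bone raises mu more slowly per HU
    return MU_WATER_511 + bone_slope * (hu - break_hu)

print(hu_to_mu511(-1000), hu_to_mu511(0), hu_to_mu511(1000))
```

The break in slope reflects that, above soft-tissue densities, a given HU increase (dominated by high-Z bone mineral) raises the 511 keV attenuation less than it raises the attenuation at CT energies.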

  18. WAIS-IV reliable digit span is no more accurate than age corrected scaled score as an indicator of invalid performance in a veteran sample undergoing evaluation for mTBI.

    PubMed

    Spencer, Robert J; Axelrod, Bradley N; Drag, Lauren L; Waldron-Perrine, Brigid; Pangilinan, Percival H; Bieliauskas, Linas A

    2013-01-01

    Reliable Digit Span (RDS) is a measure of effort derived from the Digit Span subtest of the Wechsler intelligence scales. Some authors have suggested that the age-corrected scaled score provides a more accurate measure of effort than RDS. This study examined the relative diagnostic accuracy of the traditional RDS, an extended RDS including the new Sequencing task from the Wechsler Adult Intelligence Scale-IV, and the age-corrected scaled score, relative to performance validity as determined by the Test of Memory Malingering. Data were collected from 138 Veterans seen in a traumatic brain injury clinic. The traditional RDS (≤7), revised RDS (≤11), and Digit Span age-corrected scaled score (≤6) had respective sensitivities of 39%, 39%, and 33%, and respective specificities of 82%, 89%, and 91%. Of the three indices, the revised RDS and the Digit Span age-corrected scaled score provided the most accurate measures of performance validity.
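    The sensitivity and specificity figures above follow from simple cutoff classification against the criterion (TOMM) outcome: an index flags invalid effort when the score falls at or below the cutoff. A minimal sketch with fabricated scores and validity labels:

```python
# Sensitivity/specificity of a cutoff-based effort index.  The scores and
# validity labels below are fabricated for illustration.
def sens_spec(scores, invalid, cutoff):
    tp = sum(1 for s, inv in zip(scores, invalid) if inv and s <= cutoff)
    fn = sum(1 for s, inv in zip(scores, invalid) if inv and s > cutoff)
    tn = sum(1 for s, inv in zip(scores, invalid) if not inv and s > cutoff)
    fp = sum(1 for s, inv in zip(scores, invalid) if not inv and s <= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

scores  = [5, 6, 8, 9, 10, 7, 12, 6]           # hypothetical RDS scores
invalid = [True, True, False, False, False, True, False, False]
sens, spec = sens_spec(scores, invalid, cutoff=7)
print(sens, spec)
```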

  19. Accurate real-time ionospheric corrections as the key to extend the centimeter-error-level GNSS navigation at continental scale (WARTK)

    NASA Astrophysics Data System (ADS)

    Hernandez-Pajares, M.; Juan, J.; Sanz, J.; Aragon-Angel, A.

    2007-05-01

    The main focus of this presentation is to show the recent improvements in real-time GNSS ionospheric determination extending the service area of the so-called "Wide Area Real Time Kinematic" (WARTK) technique, which allows centimeter-error-level navigation up to hundreds of kilometers from the nearest GNSS reference site. Real-time GNSS navigation with centimeter-level error has been feasible since the nineties thanks to the so-called "Real-Time Kinematic" (RTK) technique, which exactly solves the integer values of the double-differenced carrier-phase ambiguities. This was possible thanks to dual-frequency carrier-phase data acquired simultaneously with data from a close (less than 10-20 km) reference GNSS site, under the assumption of common atmospheric effects on the satellite signal. The technique has been improved by different authors through the use of a network of reference sites. However, differential ionospheric refraction has remained the main factor limiting the applicable distance from the reference site. In this context the authors have been developing the Wide Area RTK technique (WARTK) in different works and projects since 1998, overcoming the aforementioned limitations. In this way RTK becomes applicable with the existing sparse (wide-area) networks of reference GPS stations, separated by hundreds of kilometers. Such networks are presently deployed in the context of other projects, such as SBAS support, over Europe and North America (EGNOS and WAAS, respectively) among other regions. In particular, WARTK is based on computing very accurate differential ionospheric corrections from a wide-area network of permanent GNSS receivers and providing them to users in real time. The key points addressed by the technique are accurate real-time ionospheric modeling, combined with the corresponding geodetic model, by means of: a) a tomographic voxel model of the ionosphere

  20. Measurement of attenuation coefficients of the fundamental and second harmonic waves in water

    NASA Astrophysics Data System (ADS)

    Zhang, Shuzeng; Jeong, Hyunjo; Cho, Sungjong; Li, Xiongbing

    2016-02-01

    Attenuation corrections in nonlinear acoustics play an important role in the study of nonlinear fluids, biomedical imaging, and solid material characterization. Measuring attenuation coefficients in the nonlinear regime is not easy, because they depend on the source pressure and require accurate diffraction corrections. In this work, the attenuation coefficients of the fundamental and second harmonic waves arising from absorption in water are measured in nonlinear ultrasonic experiments. Based on the quasilinear theory of the KZK equation, the nonlinear sound field equations are derived and the diffraction correction terms are extracted. The measured sound pressure amplitudes are first adjusted for diffraction in order to reduce its impact on the measurement of the attenuation coefficients. The attenuation coefficients of the fundamental and second harmonic are then calculated from a nonlinear least-squares curve fit of the experimental data. The results show that attenuation coefficients under nonlinear conditions depend on both frequency and source pressure, quite unlike the linear regime. At relatively low drive pressures, the attenuation coefficients increase linearly with frequency; at high drive pressures, however, they exhibit nonlinear growth. As the diffraction corrections are obtained from quasilinear theory, it is important to use an appropriate source pressure for accurate attenuation measurements.
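    The least-squares step can be illustrated by fitting diffraction-corrected amplitudes assumed to follow p(z) = p0·exp(-αz). The distances and α below are synthetic assumptions; the paper's actual fit involves the quasilinear KZK field expressions.

```python
import numpy as np

# Extract an attenuation coefficient from (diffraction-corrected)
# amplitude-vs-distance data via a log-linear least-squares fit.
def fit_alpha(z_m, amplitude):
    slope, _ = np.polyfit(z_m, np.log(amplitude), 1)
    return -slope

z = np.linspace(0.05, 0.5, 10)         # propagation distances (m), assumed
alpha_true = 2.3                        # Np/m, illustrative
p = 1e5 * np.exp(-alpha_true * z)       # noiseless synthetic amplitudes
print(fit_alpha(z, p))
```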

  1. Feasibility of simultaneous whole-brain imaging on an integrated PET-MRI system using an enhanced 2-point Dixon attenuation correction method

    PubMed Central

    Anazodo, Udunna C.; Thiessen, Jonathan D.; Ssali, Tracy; Mandel, Jonathan; Günther, Matthias; Butler, John; Pavlosky, William; Prato, Frank S.; Thompson, R. Terry; St. Lawrence, Keith S.

    2015-01-01

    Purpose: To evaluate a potential approach for improved attenuation correction (AC) of PET in simultaneous PET and MRI brain imaging, a straightforward method that adds the bone information missing from Dixon AC was explored. Methods: Bone information derived from individual T1-weighted MRI data using segmentation tools in SPM8 was added to the standard Dixon AC map. Percent relative differences between PET reconstructed with Dixon+bone and with Dixon AC maps were compared across brain regions of 13 oncology patients. The clinical potential of the improved Dixon AC was investigated by comparing relative perfusion (rCBF) measured with arterial spin labeling to relative glucose uptake (rPETdxbone) measured simultaneously with 18F-fluorodeoxyglucose in several regions across the brain. Results: A gradual increase in PET signal from the center to the edge of the brain was observed in PET reconstructed with Dixon+bone. A 5-20% reduction in regional PET signal was observed in data corrected with standard Dixon AC maps. These regional underestimations of PET were reduced or removed when Dixon+bone AC was applied. The mean correlation coefficient between rCBF and rPETdxbone was r = 0.53 (p < 0.001). Marked regional variations in the rCBF-to-rPET correlation were observed, with the highest associations in the caudate and cingulate and the lowest in limbic structures. All findings were well matched to observations from previous studies conducted with PET data reconstructed with computed tomography derived AC maps. Conclusion: Adding bone information derived from T1-weighted MRI to Dixon AC maps can reduce the underestimation of PET activity in hybrid PET-MRI neuroimaging. PMID:25601825

  2. ALPHA ATTENUATION DUE TO DUST LOADING

    SciTech Connect

    Dailey, A; Dennis Hadlock, D

    2007-08-09

    Previous studies had been done to show the attenuation of alpha particles in filter media. These studies provided an accurate correction for that attenuation, but there had not yet been a study with sufficient results to properly correct for attenuation due to dust loading on the filters. At the Savannah River Site, filter samples are corrected for attenuation due to dust loading at 20%. Depending on the facility the filter comes from and the duration of the sampling period, the proper correction factor may vary. The objective of this study was to determine self-absorption curves for each of three counting instruments. Prior work indicated significant decreases in alpha count rate (as much as 38%) due to dust loading, especially on filters from facilities where sampling takes place over long intervals. The alpha count rate decreased because of a decrease in the energy of the alpha particles. The study resulted in a set of alpha absorption curves for each of three detectors. It also took into account the effects of geometry differences among the counting instruments used.

  3. MO-G-17A-03: MR-Based Cortical Bone Segmentation for PET Attenuation Correction with a Non-UTE 3D Fast GRE Sequence

    SciTech Connect

    Ai, H; Pan, T; Hwang, K

    2014-06-15

    Purpose: To determine the feasibility of identifying cortical bone on MR images with a short-TE 3D fast-GRE sequence for attenuation correction of PET data in PET/MR. Methods: A water-fat-bone phantom was constructed with two pieces of beef shank. MR scans were performed on a 3T MR scanner (GE Discovery™ MR750). A 3D GRE sequence was first employed to measure the level of residual signal in cortical bone (TE{sub 1}/TE{sub 2}/TE{sub 3}=2.2/4.4/6.6ms, TR=20ms, flip angle=25°). For cortical bone segmentation, a 3D fast-GRE sequence (TE/TR=0.7/1.9ms, acquisition voxel size=2.5×2.5×3mm{sup 3}) was implemented along with a 3D Dixon sequence (TE{sub 1}/TE{sub 2}/TR=1.2/2.3/4.0ms, acquisition voxel size=1.25×1.25×3mm{sup 3}) for water/fat imaging. Flip angle (10°), acquisition bandwidth (250kHz), FOV (480×480×144mm{sup 3}) and reconstructed voxel size (0.94×0.94×1.5mm{sup 3}) were kept the same for both sequences. Soft tissue and fat tissue were first segmented on the reconstructed water/fat image. A tissue mask was created by combining the segmented water/fat masks, which was then applied on the fast-GRE image (MRFGRE). A second mask was created to remove the Gibbs artifacts present in regions in close vicinity to the phantom. MRFGRE data was smoothed with a 3D anisotropic diffusion filter for noise reduction, after which cortical bone and air was separated using a threshold determined from the histogram. Results: There is signal in the cortical bone region in the 3D GRE images, indicating the possibility of separating cortical bone and air based on signal intensity from short-TE MR image. The acquisition time for the 3D fast-GRE sequence was 17s, which can be reduced to less than 10s with parallel imaging. The attenuation image created from water-fat-bone segmentation is visually similar compared to reference CT. Conclusion: Cortical bone and air can be separated based on intensity in MR image with a short-TE 3D fast-GRE sequence. Further research is required
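    The histogram-threshold step described above can be sketched with a simple two-means split of a bimodal intensity distribution. The intensity values and the two-means rule are assumptions standing in for the authors' actual histogram criterion.

```python
import numpy as np

# Split residual short-TE signal (cortical bone) from air with a simple
# iterative two-means threshold on the intensity histogram.
def two_means_threshold(values, n_iter=100):
    t = values.mean()
    for _ in range(n_iter):
        lo, hi = values[values <= t], values[values > t]
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < 1e-9:
            break
        t = t_new
    return t

rng = np.random.default_rng(3)
air  = rng.normal(5.0, 1.0, 5000)      # near-zero MR signal (air)
bone = rng.normal(20.0, 2.0, 5000)     # residual cortical-bone signal
t = two_means_threshold(np.concatenate([air, bone]))
print(t)
```

In practice the threshold would be applied after masking out soft tissue and fat (from the Dixon segmentation) and smoothing, as the abstract describes.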

  4. Influence of attenuation correction on transient left ventricular dilation in dual isotope myocardial perfusion imaging in patients with known or suspected coronary artery disease.

    PubMed

    Brodov, Yafim; Frenkel, Alex; Chouraqui, Pierre; Przewloka, Kinga; Rispler, Shmuel; Abadi, Sobhi; Keidar, Zohar

    2012-07-01

    The aim of this study was to assess the effect of attenuation correction (AC) on left ventricular (LV) volumes and LV transient ischemic dilatation (TID) during dual-isotope single-photon emission computed tomographic (SPECT) myocardial perfusion imaging (MPI). Ninety-six patients (mean age 58 ± 11 years, 15% women, 38 patients completed exercise and 58 dipyridamole pharmacologic stress tests) assessed for known or suspected coronary artery disease underwent dual-isotope thallium-201 rest and technetium-99m sestamibi stress SPECT MPI with computed tomography-based AC. The TID ratio was calculated separately for non-AC and AC SPECT MPI studies as the ratio of the LV endocardial volume at stress divided by LV endocardial volume at rest. The mean and range of the gated LV ejection fraction during exercise and pharmacologic stress was 54 ± 12% (29% to 80%) and 58 ± 12% (27% to 80%), respectively. In the exercise stress group, the same mean LV endocardial volumes in non-AC and AC stress (76.4 ± 30 and 76.5 ± 28) and rest (66.3 ± 26 and 66.4 ± 24) studies were found (p = 0.90). There was no statistical difference between the mean exercise TID ratio in non-AC and AC studies (1.27 vs 1.31, respectively, p = 0.10). The same mean LV endocardial volumes in non-AC and AC in pharmacologic stress (79.9 ± 42 and 80 ± 41) and rest (71.4 ± 41 and 72.3 ± 37), respectively, were found (p = 0.50). There was no statistical difference between the mean dipyridamole TID ratio in non-AC and AC studies (1.20 vs 1.17, respectively, p = 0.10). In conclusion, LV volumes and TID indexes obtained on SPECT MPI with exercise or pharmacologic stress using dipyridamole are not affected by AC.
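    The TID ratio defined above is simply the stress-to-rest LV endocardial volume ratio. The example reuses the exercise-group mean volumes quoted in the abstract; note that this ratio of group means (about 1.15) differs from the reported mean TID of 1.27, which is computed per patient and then averaged.

```python
# TID ratio as defined in the abstract: stress LV endocardial volume
# divided by rest volume, computed per AC method.
def tid_ratio(stress_vol_ml, rest_vol_ml):
    return stress_vol_ml / rest_vol_ml

print(round(tid_ratio(76.4, 66.3), 2))   # non-AC exercise group means
```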

  5. Comparison of effective dose and lifetime risk of cancer incidence of CT attenuation correction acquisitions and radiopharmaceutical administration for myocardial perfusion imaging

    PubMed Central

    Szczepura, K; Hogg, P

    2014-01-01

    Objective: To measure the organ dose and calculate effective dose from CT attenuation correction (CTAC) acquisitions from four commonly used gamma camera single photon emission CT/CT systems. Methods: CTAC dosimetry data was collected using thermoluminescent dosemeters on GE Healthcare's Infinia™ Hawkeye™ (GE Healthcare, Buckinghamshire, UK) four- and single-slice systems, Siemens Symbia™ T6 (Siemens Healthcare, Erlangen, Germany) and the Philips Precedence (Philips Healthcare, Amsterdam, Netherlands). Organ and effective dose from the administration of 99mTc-tetrofosmin and 99mTc-sestamibi were calculated using International Commission of Radiological Protection reports 80 and 106. Using these data, the lifetime biological risk was calculated. Results: The Siemens Symbia gave the lowest CTAC dose (1.8 mSv) followed by the GE Infinia Hawkeye single-slice (1.9 mSv), GE Infinia Hawkeye four-slice (2.5 mSv) and the Philips Precedence (3.0 mSv). Doses were significantly lower than the calculated doses from radiopharmaceutical administration (11 and 14 mSv for 99mTc-tetrofosmin and 99mTc-sestamibi, respectively). Overall lifetime biological risks were lower, which suggests that using CTAC data posed minimal risk to the patient. Comparison of data for breast tissue demonstrated a higher risk than that from the radiopharmaceutical administration. Conclusion: CTAC doses were confirmed to be much lower than those from radiopharmaceutical administration. The localized nature of the CTAC exposure compared to the radiopharmaceutical biological distribution indicated dose and risk to the breast to be higher. Advances in knowledge: This research proved that CTAC is a comparatively low-dose acquisition. However, it has been shown that there is increased risk for breast tissue especially in the younger patients. As per legislation, justification is required and CTAC should only be used in situations that demonstrate sufficient net benefit. PMID:24998249

  6. Accurate evaluations of the field shift and lowest-order QED correction for the ground 1{sup 1}S−states of some light two-electron ions

    SciTech Connect

    Frolov, Alexei M.; Wardlaw, David M.

    2014-09-14

    Mass-dependent and field shift components of the isotopic shift are determined to high accuracy for the ground 1{sup 1}S−states of some light two-electron Li{sup +}, Be{sup 2+}, B{sup 3+}, and C{sup 4+} ions. To determine the field components of these isotopic shifts we apply the Racah-Rosenthal-Breit formula. We also determine the lowest order QED corrections to the isotopic shifts for each of these two-electron ions.

  7. SU-E-I-86: Ultra-Low Dose Computed Tomography Attenuation Correction for Pediatric PET CT Using Adaptive Statistical Iterative Reconstruction (ASiR™)

    SciTech Connect

    Brady, S; Shulkin, B

    2015-06-15

    Purpose: To develop ultra-low dose computed tomography (CT) attenuation correction (CTAC) acquisition protocols for pediatric positron emission tomography CT (PET CT). Methods: A GE Discovery 690 PET CT hybrid scanner was used to investigate the change to quantitative PET and CT measurements when operated at ultra-low doses (10-35 mAs). CT quantitation: noise, low-contrast resolution, and CT numbers for eleven tissue substitutes were analyzed in-phantom. CT quantitation was analyzed down to a 90% reduction in CTDIvol (0.39/3.64; mGy) radiation dose from baseline. To minimize noise infiltration, 100% adaptive statistical iterative reconstruction (ASiR) was used for CT reconstruction. PET images were reconstructed with the lower-dose CTAC iterations and analyzed for: maximum body weight standardized uptake value (SUVbw) of various diameter targets (range 8-37 mm), background uniformity, and spatial resolution. Radiation organ dose, as derived from the patient exam size-specific dose estimate (SSDE), was converted to effective dose using the standard ICRP report 103 method. Effective dose and CTAC noise magnitude were compared for 140 patient examinations (76 post-ASiR implementation) to determine relative patient population dose reduction and noise control. Results: CT numbers were constant to within 10% from the non-dose-reduced CTAC image down to 90% dose reduction. No change in SUVbw, background percent uniformity, or spatial resolution was found for PET images reconstructed with ASiR-based CTAC protocols down to 90% dose reduction. Patient population effective dose analysis demonstrated relative CTAC dose reductions between 62%-86% (3.2/8.3−0.9/6.2; mSv). Noise magnitude in dose-reduced patient images increased but was not statistically different from that in pre-dose-reduction patient images. Conclusion: Using ASiR allowed for aggressive reduction in CTAC dose with no change in PET reconstructed images while maintaining sufficient image quality for co

  8. Whole-Body PET/MR Imaging: Quantitative Evaluation of a Novel Model-Based MR Attenuation Correction Method Including Bone

    PubMed Central

    Paulus, Daniel H.; Quick, Harald H.; Geppert, Christian; Fenchel, Matthias; Zhan, Yiqiang; Hermosillo, Gerardo; Faul, David; Boada, Fernando; Friedman, Kent P.; Koesters, Thomas

    2016-01-01

    In routine whole-body PET/MR hybrid imaging, attenuation correction (AC) is usually performed by segmentation methods based on a Dixon MR sequence providing up to 4 different tissue classes. Because of the lack of bone information with the Dixon-based MR sequence, bone is currently considered as soft tissue. Thus, the aim of this study was to evaluate a novel model-based AC method that considers bone in whole-body PET/MR imaging. Methods The new method (“Model”) is based on a regular 4-compartment segmentation from a Dixon sequence (“Dixon”). Bone information is added using a model-based bone segmentation algorithm, which includes a set of prealigned MR image and bone mask pairs for each major body bone individually. Model was quantitatively evaluated on 20 patients who underwent whole-body PET/MR imaging. As a standard of reference, CT-based μ-maps were generated for each patient individually by nonrigid registration to the MR images based on PET/CT data. This step allowed for a quantitative comparison of all μ-maps based on a single PET emission raw dataset of the PET/MR system. Volumes of interest were drawn on normal tissue, soft-tissue lesions, and bone lesions; standardized uptake values were quantitatively compared. Results In soft-tissue regions with background uptake, the average bias of SUVs in background volumes of interest was 2.4% ± 2.5% and 2.7% ± 2.7% for Dixon and Model, respectively, compared with CT-based AC. For bony tissue, the −25.5% ± 7.9% underestimation observed with Dixon was reduced to −4.9% ± 6.7% with Model. In bone lesions, the average underestimation was −7.4% ± 5.3% and −2.9% ± 5.8% for Dixon and Model, respectively. For soft-tissue lesions, the biases were 5.1% ± 5.1% for Dixon and 5.2% ± 5.2% for Model. Conclusion The novel MR-based AC method for whole-body PET/MR imaging, combining Dixon-based soft-tissue segmentation and model-based bone estimation, improves PET quantification in whole-body hybrid PET

  9. CT-Based Attenuation Correction in Brain SPECT/CT Can Improve the Lesion Detectability of Voxel-Based Statistical Analyses

    PubMed Central

    Kato, Hiroki; Shimosegawa, Eku; Fujino, Koichi; Hatazawa, Jun

    2016-01-01

    Background Integrated SPECT/CT enables non-uniform attenuation correction (AC) using built-in CT instead of the conventional uniform AC. The effect of CT-based AC on voxel-based statistical analyses of brain SPECT findings has not yet been clarified. Here, we assessed differences in the detectability of regional cerebral blood flow (CBF) reduction using SPECT voxel-based statistical analyses based on the two types of AC methods. Subjects and Methods N-isopropyl-p-[123I]iodoamphetamine (IMP) CBF SPECT images were acquired for all the subjects and were reconstructed using 3D-OSEM with two different AC methods: Chang's method (Chang's AC) and the CT-based AC method. A normal database was constructed for the analysis using SPECT findings obtained for 25 healthy normal volunteers. Voxel-based Z-statistics were also calculated for SPECT findings obtained for 15 patients with chronic cerebral infarctions and 10 normal subjects. We assumed that an analysis with a higher specificity would likely produce a lower mean absolute Z-score for normal brain tissue, and that a more sensitive voxel-based statistical analysis would likely produce a higher absolute Z-score in old infarct lesions, where the CBF was severely decreased. Results The inter-subject variation in the voxel values in the normal database was lower using CT-based AC, compared with Chang's AC, for most of the brain regions. The absolute Z-score indicating a SPECT count reduction in infarct lesions was also significantly higher in the images reconstructed using CT-based AC, compared with Chang's AC (P = 0.003). The mean absolute value of the Z-score in the 10 intact brains was significantly lower in the images reconstructed using CT-based AC than in those reconstructed using Chang's AC (P = 0.005). Conclusions Non-uniform CT-based AC by integrated SPECT/CT significantly improved the sensitivity and specificity of the voxel-based statistical analyses for regional SPECT count reductions, compared with
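    Voxel-based Z-statistics of the kind described above compare each patient voxel with a normal database: Z = (patient − mean_normal) / sd_normal per voxel. A toy sketch, where small arrays stand in for co-registered, count-normalized SPECT volumes:

```python
import numpy as np

# Per-voxel Z-statistics against a normal database of healthy scans.
def voxel_z(patient, normal_db):
    mu = normal_db.mean(axis=0)
    sd = normal_db.std(axis=0, ddof=1)
    return (patient - mu) / sd

rng = np.random.default_rng(2)
normal_db = rng.normal(100.0, 5.0, size=(25, 4, 4))  # 25 healthy scans
patient = np.full((4, 4), 100.0)
patient[0, 0] = 60.0                                 # infarct-like count drop
z = voxel_z(patient, normal_db)
print(float(z[0, 0]))
```

Lower inter-subject variance in the normal database (the sd term) directly yields larger absolute Z-scores for true lesions and smaller ones in intact tissue, which is the mechanism behind the improved detectability reported above.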

  10. Controllable attenuators

    NASA Astrophysics Data System (ADS)

    Krylov, G. M.; Khoniak, E. I.; Tynynyka, A. N.; Iliushenko, V. N.; Sikolenko, S. F.

    Methods for the synthesis of controllable attenuators and their implementations are examined. In particular, attention is given to the general properties of controllable attenuators, control elements, types of controllable attenuators and methods of their analysis, and synthesis of the control characteristic of attenuators. The discussion also covers the efficiency of attenuator control, the use of transmission line segments in wide-band controllable attenuators, and attenuators with a discretely controlled transmission coefficient.

  11. Band-structure calculations of noble-gas and alkali halide solids using accurate Kohn-Sham potentials with self-interaction correction

    SciTech Connect

    Li, Y.; Krieger, J.B.; Norman, M.R.; Iafrate, G.J.

    1991-11-15

    The optimized-effective-potential (OEP) method and a method developed recently by Krieger, Li, and Iafrate (KLI) are applied to the band-structure calculations of noble-gas and alkali halide solids employing the self-interaction-corrected (SIC) local-spin-density (LSD) approximation for the exchange-correlation energy functional. The resulting band gaps from both calculations are found to be in fair agreement with the experimental values. The discrepancies are typically within a few percent with results that are nearly the same as those of previously published orbital-dependent multipotential SIC calculations, whereas the LSD results underestimate the band gaps by as much as 40%. As in the LSD---and it is believed to be the case even for the exact Kohn-Sham potential---both the OEP and KLI predict valence-band widths which are narrower than those of experiment. In all cases, the KLI method yields essentially the same results as the OEP.

  12. Radiometric correction of scatterometric wind measurements

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Use of a spaceborne scatterometer to determine the ocean-surface wind vector requires accurate measurement of radar backscatter from the ocean. Such measurements are hindered by attenuation in precipitating regions over the sea. The attenuation can be estimated reasonably well from the brightness temperatures observed by a microwave radiometer. The NASA SeaWinds scatterometer is to be flown on the Japanese ADEOS2. The AMSR multi-frequency radiometer on ADEOS2 will be used to correct errors due to attenuation in the SeaWinds scatterometer measurements. Here we investigate the errors in the attenuation corrections. Errors would be quite small if the radiometer and scatterometer footprints were identical and filled with uniform rain. However, the footprints are not identical, and because of their size one cannot expect uniform rain across each cell. Simulations were performed with the SeaWinds scatterometer (13.4 GHz) and AMSR (18.7 GHz) footprints with gradients of attenuation. The study shows that the resulting wind speed errors after correction (using the radiometer) are small in most cases. However, variations in the degree of overlap between the radiometer and scatterometer footprints affect the accuracy of the wind speed measurements.

  13. Efficient and Accurate Identification of Platinum-Group Minerals by a Combination of Mineral Liberation and Electron Probe Microanalysis with a New Approach to the Offline Overlap Correction of Platinum-Group Element Concentrations.

    PubMed

    Osbahr, Inga; Krause, Joachim; Bachmann, Kai; Gutzmer, Jens

    2015-10-01

    Identification and accurate characterization of platinum-group minerals (PGMs) is usually a very cumbersome procedure due to their small grain size (typically below 10 µm) and inconspicuous appearance under reflected light. A novel strategy for finding PGMs and quantifying their composition was developed. It combines a mineral liberation analyzer (MLA), a point logging system, and electron probe microanalysis (EPMA). As a first step, the PGMs are identified using the MLA. Grains identified as PGMs are then marked and coordinates recorded and transferred to the EPMA. Case studies illustrate that the combination of MLA, point logging, and EPMA results in the identification of a significantly higher number of PGM grains than reflected light microscopy. Analysis of PGMs by EPMA requires considerable effort due to the often significant overlaps between the X-ray spectra of almost all platinum-group and associated elements. X-ray lines suitable for quantitative analysis need to be carefully selected. As peak overlaps cannot be avoided completely, an offline overlap correction based on weight proportions has been developed. Results obtained with the procedure proposed in this study attain acceptable totals and atomic proportions, indicating that the applied corrections are appropriate.

  14. Transmission-less attenuation estimation from time-of-flight PET histo-images using consistency equations

    NASA Astrophysics Data System (ADS)

    Li, Yusheng; Defrise, Michel; Metzler, Scott D.; Matej, Samuel

    2015-08-01

    In positron emission tomography (PET) imaging, attenuation correction with accurate attenuation estimation is crucial for quantitative patient studies. Recent research showed that the attenuation sinogram can be determined up to a scaling constant utilizing the time-of-flight information. The TOF-PET data can be naturally and efficiently stored in a histo-image without information loss, and the radioactive tracer distribution can be efficiently reconstructed using the DIRECT approaches. In this paper, we explore transmission-less attenuation estimation from TOF-PET histo-images. We first present the TOF-PET histo-image formation and the consistency equations in the histo-image parameterization, then we derive a least-squares solution for estimating the directional derivatives of the attenuation factors from the measured emission histo-images. Finally, we present a fast solver to estimate the attenuation factors from their directional derivatives using the discrete sine transform and fast Fourier transform while considering the boundary conditions. We find that the attenuation histo-images can be uniquely determined from the TOF-PET histo-images by considering boundary conditions. Since the estimate of the attenuation directional derivatives can be inaccurate for LORs tangent to the patient boundary, external sources, e.g. a ring or annulus source, might be needed to give an accurate estimate of the attenuation gradient for such LORs. The attenuation estimation from TOF-PET emission histo-images is demonstrated using simulated 2D TOF-PET data.
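The final step of the pipeline (a fast direct solver with boundary conditions) can be sketched in one dimension: the discrete sine transform diagonalizes the second-difference operator when the unknown vanishes at the boundary, the analogue of fixing the attenuation factors to 1 (zero log-attenuation) outside the patient. This toy solver illustrates the transform technique only, not the authors' 2D algorithm:

```python
import numpy as np
from scipy.fft import dst, idst

def poisson_dirichlet_1d(f, h):
    """Solve u'' = f with u = 0 at both ends: DST-I diagonalizes the
    tridiagonal second-difference operator, so the solve is one transform,
    one division by the eigenvalues, and one inverse transform."""
    n = len(f)
    k = np.arange(1, n + 1)
    lam = -(4.0 / h**2) * np.sin(k * np.pi / (2.0 * (n + 1)))**2
    return idst(dst(f, type=1) / lam, type=1)

L, n = 1.0, 127
h = L / (n + 1)
x = h * np.arange(1, n + 1)
u_exact = x * (L - x)              # vanishes at both boundaries
f = -2.0 * np.ones(n)              # exact discrete second difference of u_exact
u = poisson_dirichlet_1d(f, h)     # recovers u_exact to machine precision
```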

  15. Calibrating the X-ray attenuation of liquid water and correcting sample movement artefacts during in operando synchrotron X-ray radiographic imaging of polymer electrolyte membrane fuel cells.

    PubMed

    Ge, Nan; Chevalier, Stéphane; Hinebaugh, James; Yip, Ronnie; Lee, Jongmin; Antonacci, Patrick; Kotaka, Toshikazu; Tabuchi, Yuichiro; Bazylak, Aimy

    2016-03-01

    Synchrotron X-ray radiography, due to its high temporal and spatial resolutions, provides a valuable means for understanding the in operando water transport behaviour in polymer electrolyte membrane fuel cells. The purpose of this study is to address the specific artefact of imaging sample movement, which poses a significant challenge to synchrotron-based imaging for fuel cell diagnostics. Specifically, the impact of the micrometer-scale movement of the sample was determined, and a correction methodology was developed. At a photon energy level of 20 keV, a maximum movement of 7.5 µm resulted in a false water thickness of 0.93 cm (9% higher than the maximum amount of water that the experimental apparatus could physically contain). This artefact was corrected by image translations based on the relationship between the false water thickness value and the distance moved by the sample. The implementation of this correction method led to a significant reduction in false water thickness (to ∼0.04 cm). Furthermore, to account for inaccuracies in pixel intensities due to the scattering effect and higher harmonics, a calibration technique was introduced for the liquid water X-ray attenuation coefficient, which was found to be 0.657 ± 0.023 cm(-1) at 20 keV. The work presented in this paper provides valuable tools for artefact compensation and accuracy improvements for dynamic synchrotron X-ray imaging of fuel cells. PMID:26917148
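The water-thickness calculation underlying these numbers is a direct Beer-Lambert inversion with the calibrated coefficient; a minimal sketch (the intensity values are hypothetical):

```python
import numpy as np

MU_WATER = 0.657  # cm^-1, calibrated attenuation coefficient of liquid water at 20 keV

def water_thickness_cm(i_wet, i_dry, mu=MU_WATER):
    """Beer-Lambert: I_wet = I_dry * exp(-mu * t)  =>  t = -ln(I_wet / I_dry) / mu."""
    return -np.log(i_wet / i_dry) / mu

t_true = 0.04                                # cm of liquid water
i_ratio = np.exp(-MU_WATER * t_true)         # simulated wet/dry intensity ratio
t_est = water_thickness_cm(i_ratio, 1.0)
```

A sample shift changes i_wet for reasons unrelated to water, which is why a micrometer-scale movement can masquerade as centimeters of apparent water thickness.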

  17. Toward accurate thermochemistry of the {sup 24}MgH, {sup 25}MgH, and {sup 26}MgH molecules at elevated temperatures: Corrections due to unbound states

    SciTech Connect

    Szidarovszky, Tamás; Császár, Attila G.

    2015-01-07

    The total partition functions Q(T) and their first two moments Q{sup ′}(T) and Q{sup ″}(T), together with the isobaric heat capacities C{sub p}(T), are computed a priori for three major MgH isotopologues over the temperature range T = 100–3000 K using the recent highly accurate potential energy curve, spin-rotation, and non-adiabatic correction functions of Henderson et al. [J. Phys. Chem. A 117, 13373 (2013)]. Nuclear motion computations are carried out on the ground electronic state to determine the (ro)vibrational energy levels and the scattering phase shifts. The effect of resonance states is found to be significant above about 1000 K, and it increases with temperature. Even very short-lived states, due to their relatively large number, contribute significantly to Q(T) at elevated temperatures. The contribution of scattering states is around one fourth of that of resonance states but opposite in sign. Uncertainty estimates are given for the possible error sources, suggesting that all computed thermochemical properties have an accuracy better than 0.005% up to 1200 K. Between 1200 and 2500 K, the uncertainties can rise to around 0.1%, while between 2500 and 3000 K a further increase to 0.5% might be observed for Q{sup ″}(T) and C{sub p}(T), principally due to the neglect of excited electronic states. The accurate thermochemical data determined are presented in the supplementary material for the three isotopologues {sup 24}MgH, {sup 25}MgH, and {sup 26}MgH at 1 K increments. These data, which differ significantly from older standard data, should prove useful for astronomical models incorporating thermodynamic properties of these species.
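The bound-state part of Q(T) is a direct Boltzmann sum over rovibrational levels. A minimal sketch with a toy harmonic ladder (the level spacing is hypothetical, and the resonance and scattering contributions discussed above are not included):

```python
import numpy as np

K_B = 0.695034800  # Boltzmann constant in cm^-1 / K

def partition_function(levels_cm, degeneracies, T):
    """Q(T) = sum_i g_i * exp(-E_i / (k T)), energies in cm^-1."""
    E = np.asarray(levels_cm, dtype=float)
    g = np.asarray(degeneracies, dtype=float)
    return float(np.sum(g * np.exp(-E / (K_B * T))))

# toy harmonic ladder E_n = n * omega, whose converged sum is 1 / (1 - exp(-omega/kT))
omega, T = 1500.0, 2000.0
levels = omega * np.arange(200)
Q = partition_function(levels, np.ones_like(levels), T)
```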

  18. Use of Ga for mass bias correction for the accurate determination of copper isotope ratio in the NIST SRM 3114 Cu standard and geological samples by MC-ICP MS

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Zhou, L.; Tong, S.

    2015-12-01

    The absolute determination of the Cu isotope ratio in NIST SRM 3114 based on a regression mass bias correction model is performed for the first time with NIST SRM 944 Ga as the calibrant. A value of 0.4471 ± 0.0013 (2SD, n=37) for the 65Cu/63Cu ratio was obtained, with a value of +0.18 ± 0.04‰ (2SD, n=5) for δ65Cu relative to NIST 976. The availability of the NIST SRM 3114 material, now with an absolute value of the 65Cu/63Cu ratio and a δ65Cu value relative to NIST 976, makes it suitable as a new candidate reference material for Cu isotope studies. In addition, a protocol is described for the accurate and precise determination of δ65Cu values of geological reference materials. Purification of Cu from the sample matrix was performed using the AG MP-1M Bio-Rad resin. The column recovery for geological samples was found to be 100 ± 2% (2SD, n=15). A modified method of standard-sample bracketing with internal normalization for mass bias correction was employed by adding natural Ga to both the sample and the solution of NIST SRM 3114, which was used as the bracketing standard. The absolute value of 0.4471 ± 0.0013 (2SD, n=37) for 65Cu/63Cu quantified in this study was used to calibrate the 69Ga/71Ga ratio in the two adjacent bracketing standards of SRM 3114; their average 69Ga/71Ga value was then used to correct the 65Cu/63Cu ratio in the sample. Measured δ65Cu values of 0.18 ± 0.04‰ (2SD, n=20), 0.13 ± 0.04‰ (2SD, n=9), 0.08 ± 0.03‰ (2SD, n=6), 0.01 ± 0.06‰ (2SD, n=4), and 0.26 ± 0.04‰ (2SD, n=7) were obtained for the five geological reference materials BCR-2, BHVO-2, AGV-2, BIR-1a, and GSP-2, respectively, in agreement with values obtained in previous studies.
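The internal-normalization step can be sketched with the exponential mass-bias law; the isotope masses and the Ga ratio below are nominal natural values inserted for illustration, not the calibrated values of the study:

```python
import numpy as np

# nominal isotope masses (u) and a nominal natural 69Ga/71Ga ratio, for illustration
M63, M65 = 62.9296, 64.9278
M69, M71 = 68.9256, 70.9247

def correct_cu_ratio(r_cu_meas, r_ga_meas, r_ga_true):
    """Exponential law: R_true = R_meas * (m1/m2)**f.  The fractionation
    exponent f is calibrated from the Ga pair, then applied to the Cu pair."""
    f = np.log(r_ga_true / r_ga_meas) / np.log(M69 / M71)
    return r_cu_meas * (M65 / M63) ** f

# round trip: impose a known instrumental fractionation, then remove it
f_true = 1.8
r_cu_true, r_ga_true = 0.4471, 1.50676
r_cu_meas = r_cu_true / (M65 / M63) ** f_true
r_ga_meas = r_ga_true / (M69 / M71) ** f_true
r_cu_corr = correct_cu_ratio(r_cu_meas, r_ga_meas, r_ga_true)
```

Because Cu and Ga are close in mass, the exponent calibrated on Ga transfers well to Cu, which is the rationale for the doping approach.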

  19. Control algorithms for dynamic attenuators

    SciTech Connect

    Hsieh, Scott S.; Pelc, Norbert J.

    2014-06-15

    modulation) without increasing peak variance. The 15-element piecewise-linear dynamic attenuator reduces dose by an average of 42%, and the perfect attenuator reduces dose by an average of 50%. Improvements in peak variance are several times larger than improvements in mean variance. Heuristic control eliminates the need for a prescan. For the piecewise-linear attenuator, the cost of heuristic control is an increase in dose of 9%. The proposed iterated WMV minimization produces results that are within a few percent of the true solution. Conclusions: Dynamic attenuators show potential for significant dose reduction. A wide class of dynamic attenuators can be accurately controlled using the described methods.

  20. Modeling transmission and scatter for photon beam attenuators.

    PubMed

    Ahnesjö, A; Weber, L; Nilsson, P

    1995-11-01

    The development of treatment planning methods in radiation therapy requires dose calculation methods that are both accurate and general enough to provide a dose per unit monitor setting for a broad variety of fields and beam modifiers. The purpose of this work was to develop models for calculation of scatter and transmission for photon beam attenuators such as compensating filters, wedges, and block trays. The attenuation of the beam is calculated using a spectrum of the beam and a correction factor based on attenuation measurements. Small-angle coherent scatter and electron binding effects on scattering cross sections are considered by use of a correction factor. Quality changes in beam penetrability and energy fluence to dose conversion are modeled by use of the calculated primary beam spectrum after passage through the attenuator. The beam spectra are derived by the depth-dose effective method, i.e., by minimizing the difference between measured and calculated depth dose distributions, where the calculated distributions are derived by superposing data from a database for monoenergetic photons. The attenuator scatter is integrated, using first-scatter theory, over the area viewed from the calculation point. Calculations are simplified by replacing the energy- and angular-dependent cross-section formulas with the forward scatter constant r2(0) and a set of parametrized correction functions. The set of corrections includes functions for the Compton energy loss, scatter attenuation, and secondary bremsstrahlung production. The effect of charged particle contamination is bypassed by avoiding use of dmax for absolute dose calibrations. The results of the model are compared with scatter measurements in air for copper and lead filters and with dose to a water phantom for lead filters for 4 and 18 MV. For attenuated beams, downstream of the buildup region, the calculated results agree with measurements on the 1.5% level.
The accuracy was slightly less in situations
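The spectrum-based transmission model at the heart of such approaches is a fluence-weighted sum of monoenergetic exponentials, which also reproduces beam hardening (the effective attenuation coefficient falls with thickness). A two-bin toy spectrum (all numbers hypothetical):

```python
import numpy as np

w = np.array([0.6, 0.4])      # relative fluence per energy bin (hypothetical)
mu = np.array([0.8, 0.3])     # linear attenuation coefficient per bin, cm^-1

def transmission(t_cm):
    """Polychromatic transmission: spectrum-weighted monoenergetic attenuation."""
    return np.sum(w * np.exp(-mu * t_cm)) / np.sum(w)

def effective_mu(t_cm):
    """Effective attenuation coefficient -ln(T)/t; it decreases with thickness
    because the softer (more attenuated) part of the spectrum is filtered out."""
    return -np.log(transmission(t_cm)) / t_cm
```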

  1. Modeling of polychromatic attenuation using computed tomography reconstructed images

    NASA Technical Reports Server (NTRS)

    Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.

    1999-01-01

    This paper presents a procedure for estimating an accurate model of the CT imaging process including spectral effects. As raw projection data are typically unavailable to the end-user, we adopt a post-processing approach that utilizes the reconstructed images themselves. This approach includes errors from x-ray scatter and the nonidealities of the built-in soft tissue correction into the beam characteristics, which is crucial to beam hardening correction algorithms that are designed to be applied directly to CT reconstructed images. We formulate this approach as a quadratic programming problem and propose two different methods, dimension reduction and regularization, to overcome ill conditioning in the model. For the regularization method we use a statistical procedure, Cross Validation, to select the regularization parameter. We have constructed step-wedge phantoms to estimate the effective beam spectrum of a GE CT-I scanner. Using the derived spectrum, we computed the attenuation ratios for the wedge phantoms and found that the worst case modeling error is less than 3% of the corresponding attenuation ratio. We have also built two test (hybrid) phantoms to evaluate the effective spectrum. Based on these test phantoms, we have shown that the effective beam spectrum provides an accurate model for the CT imaging process. Last, we used a simple beam hardening correction experiment to demonstrate the effectiveness of the estimated beam profile for removing beam hardening artifacts. We hope that this estimation procedure will encourage more independent research on beam hardening corrections and will lead to the development of application-specific beam hardening correction algorithms.
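The spectrum estimation step can be sketched as a linear inverse problem: each wedge thickness contributes one row exp(-mu_j * t_i), and a nonnegativity constraint plays the role of the regularization the paper discusses. All numbers below are hypothetical:

```python
import numpy as np
from scipy.optimize import nnls

t = np.linspace(0.5, 10.0, 12)            # step-wedge thicknesses (cm)
mu = np.array([1.2, 0.7, 0.45, 0.3])      # candidate energy-bin attenuation coefficients
w_true = np.array([0.2, 0.4, 0.3, 0.1])   # "unknown" spectrum weights

A = np.exp(-np.outer(t, mu))              # forward model: A[i, j] = exp(-mu_j * t_i)
y = A @ w_true                            # simulated attenuation-ratio measurements
w_fit, resid = nnls(A, y)                 # nonnegative least squares
```

What matters for beam-hardening correction is that the effective spectrum reproduces the measured attenuation ratios (small resid), not that the recovered weights match the physical spectrum; this is why an effective spectrum suffices despite the ill conditioning.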

  2. An analytical approach to quantitative reconstruction of non-uniform attenuated brain SPECT.

    PubMed

    Liang, Z; Ye, J; Harrington, D P

    1994-11-01

    An analytical approach to quantitative brain SPECT (single-photon-emission computed tomography) with non-uniform attenuation is developed. The approach formulates accurately the projection-transform equation as a summation of primary- and scatter-photon contributions. The scatter contribution can be estimated using the multiple-energy-window samples and removed from the primary-energy-window data by subtraction. The approach models the primary contribution as a convolution of the attenuated source and the detector-response kernel at a constant depth from the detector with the central-ray approximation. The attenuated Radon transform of the source can be efficiently deconvolved using the depth-frequency relation. The approach inverts exactly the attenuated Radon transform by Fourier transforms and series expansions. The performance of the analytical approach was studied for both uniform- and non-uniform-attenuation cases, and compared to the conventional FBP (filtered-backprojection) method by computer simulations. A patient brain X-ray image was acquired by a CT (computed-tomography) scanner and converted to the object-specific attenuation map for 140 keV energy. The mathematical Hoffman brain phantom was used to simulate the emission source and was resized such that it was completely surrounded by the skull of the CT attenuation map. The detector-response kernel was obtained from measurements of a point source at several depths in air from a parallel-hole collimator of a SPECT camera. The projection data were simulated from the object-specific attenuating source including the depth-dependent detector response. Quantitative improvement (>5%) in reconstructing the data was demonstrated with the non-uniform attenuation compensation, as compared to the uniform attenuation correction and the conventional FBP reconstruction. The computing time was less than 5 min on an HP/730 desktop computer for an image array of 128² × 32 reconstructed from 128 projections of 128 × 32 size.

  3. General relationships between ultrasonic attenuation and dispersion

    NASA Technical Reports Server (NTRS)

    Odonnell, M.; Jaynes, E. T.; Miller, J. G.

    1978-01-01

    General relationships between the ultrasonic attenuation and dispersion are presented. The validity of these nonlocal relationships hinges only on the properties of causality and linearity, and does not depend upon details of the mechanism responsible for the attenuation and dispersion. Approximate, nearly local relationships are presented and are demonstrated to predict accurately the ultrasonic dispersion in solutions of hemoglobin from the results of attenuation measurements.

  4. The Impact of Stochastic Attenuation on Photometric Redshift Estimates

    NASA Astrophysics Data System (ADS)

    Tepper-García, Thorsten; Fritze-von Alvensleben, Uta

    2007-05-01

    INTRODUCTION: We model the effect of stochastic absorption by neutral hydrogen (HI) present in the intergalactic medium (IGM), such as the Lyα forest, and associated with galaxies (LLS, DLAs), on photometric redshifts, and compare these results to the predicted photometric redshifts of models where only a mean attenuation is taken into account. METHODS: We model the attenuation due to HI along a random line of sight (LOS) using differential distribution functions constrained from observations (Kim et al. 97, 01) in a Monte Carlo fashion (Bershady et al. 99). We then calculate galaxy model spectra of a given spectral type at different redshifts using our evolutionary synthesis code GALEV (Bicker et al. 04), and apply to each spectrum a different attenuation corresponding to a particular random LOS. We obtain in this way an ensemble of attenuated spectral energy distributions (SEDs) in the HST and Johnson systems. Using AnalySED (Anders et al. 06), an analysis tool based on a chi-square test, and our template SEDs with mean attenuation (which span a grid in redshift and spectral type), we determine to what extent the redshifts of our simulated spectra are recovered. RESULTS: We find a substantial underestimate of the photometric redshifts of up to Δz = 0.3, especially in the range z > 3.0. DISCUSSION: Based on our results, we emphasise the need for accurate modelling of the attenuation in order to correctly interpret, using evolutionary synthesis codes such as GALEV, observations of (high-redshift) galaxies in deep surveys for which only photometric information is available.

  5. Evaluation of QNI corrections in porous media applications

    NASA Astrophysics Data System (ADS)

    Radebe, M. J.; de Beer, F. C.; Nshimirimana, R.

    2011-09-01

    Qualitative measurement with digital neutron imaging has been explored far more thoroughly than accurate quantitative measurement. The reason for this bias is that quantitative measurements require correction for background and material scatter, and for neutron spectral effects. The Quantitative Neutron Imaging (QNI) software package resulted from efforts at the Paul Scherrer Institute, Helmholtz Zentrum Berlin (HZB) and Necsa to correct for these effects, while the sample-detector distance (SDD) principle has previously been demonstrated as a measure to eliminate the material scatter effect. This work evaluates the capabilities of the QNI software package to produce accurate quantitative results on specific characteristics of porous media, and its role in the nondestructive quantification of material with and without calibration. The work further complements the QNI capabilities by the use of different SDDs. Studies of the effective %porosity of mortar and the attenuation coefficient of water using QNI and the SDD principle are reported.
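A minimal sketch of one such quantitative use, estimating water-filled porosity from dry vs. saturated neutron transmission; the attenuation coefficient and transmissions below are hypothetical, and real data would first need the QNI scatter and spectral corrections:

```python
import numpy as np

MU_WATER = 3.6  # cm^-1, hypothetical effective neutron attenuation coefficient of water

def porosity_from_transmission(t_dry, t_sat, thickness_cm, mu=MU_WATER):
    """Beer-Lambert estimate of water-filled pore fraction: the extra attenuation
    of the saturated sample is converted to an equivalent water path length."""
    water_path_cm = -np.log(t_sat / t_dry) / mu
    return water_path_cm / thickness_cm

phi_true = 0.25
t_dry = 0.60                                          # dry-sample transmission
t_sat = t_dry * np.exp(-MU_WATER * phi_true * 1.0)    # saturated, 1 cm thick sample
phi = porosity_from_transmission(t_dry, t_sat, 1.0)
```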

  6. Improved Background Corrections for Uranium Holdup Measurements

    SciTech Connect

    Oberer, R.B.; Gunn, C.A.; Chiang, L.G.

    2004-06-21

    In the original Generalized Geometry Holdup (GGH) model, all holdup deposits were modeled as points, lines, and areas[1, 5]. Two improvements[4] were recently made to the GGH model and are currently in use at the Y-12 National Security Complex: the finite-source correction CF{sub g} and the self-attenuation correction. The finite-source correction corrects the average detector response for the width of point and line geometries, which, in effect, converts points and lines into areas. The result of a holdup measurement of an area deposit is a density-thickness, which is converted to mass by multiplying it by the area of the deposit. From the measured density-thickness, the true density-thickness can be calculated by correcting for the material self-attenuation; the self-attenuation correction is therefore applied to finite point and line deposits as well as areas. This report demonstrates that the finite-source and self-attenuation corrections also provide a means to better separate the gamma rays emitted by the material from those emitted by background sources, for an improved background correction. Currently, the measured background radiation is attenuated for equipment walls in the case of area deposits but not for line and point sources, and it is not corrected for attenuation by the uranium material. In all of these cases the background is overestimated, which causes a negative bias in the measurement. The finite-source and self-attenuation corrections allow the measured background radiation to be corrected for both equipment attenuation and material attenuation for area sources as well as point and line sources.
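For a uniform slab viewed face-on, the self-attenuation correction has a standard closed form; a sketch (CF multiplies the measured density-thickness to recover the true one; the numbers are illustrative):

```python
import math

def self_attenuation_cf(mu_cm, thickness_cm):
    """Slab self-attenuation correction factor CF = x / (1 - exp(-x)),
    x = mu * t.  CF -> 1 for thin deposits and grows with thickness."""
    x = mu_cm * thickness_cm
    if x < 1e-12:
        return 1.0
    return x / -math.expm1(-x)

cf_thin = self_attenuation_cf(1.0, 1e-6)   # essentially no correction
cf_thick = self_attenuation_cf(1.0, 1.0)   # ~1.582 at one mean free path
```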

  7. High-Precision Tungsten Isotopic Analysis by Multicollection Negative Thermal Ionization Mass Spectrometry Based on Simultaneous Measurement of W and (18)O/(16)O Isotope Ratios for Accurate Fractionation Correction.

    PubMed

    Trinquier, Anne; Touboul, Mathieu; Walker, Richard J

    2016-02-01

    Determination of the (182)W/(184)W ratio to a precision of ± 5 ppm (2σ) is desirable for constraining the timing of core formation and other early planetary differentiation processes. However, WO3(-) analysis by negative thermal ionization mass spectrometry normally results in a residual correlation between the instrumental-mass-fractionation-corrected (182)W/(184)W and (183)W/(184)W ratios that is attributed to mass-dependent variability of O isotopes over the course of an analysis and between different analyses. A second-order correction using the (183)W/(184)W ratio relies on the assumption that this ratio is constant in nature. This may prove invalid, as has already been realized for other isotope systems. The present study utilizes simultaneous monitoring of the (18)O/(16)O and W isotope ratios to correct oxide interferences on a per-integration basis and thus avoid the need for a double normalization of W isotopes. After normalization of W isotope ratios to a pair of W isotopes, following the exponential law, no residual W-O isotope correlation is observed. However, there is a nonideal mass bias residual correlation between (182)W/(i)W and (183)W/(i)W with time. Without double normalization of W isotopes and on the basis of three or four duplicate analyses, the external reproducibility per session of (182)W/(184)W and (183)W/(184)W normalized to (186)W/(183)W is 5-6 ppm (2σ, 1-3 μg loads). The combined uncertainty per session is less than 4 ppm for (183)W/(184)W and less than 6 ppm for (182)W/(184)W (2σm) for loads between 3000 and 50 ng.

  8. Political Correctness--Correct?

    ERIC Educational Resources Information Center

    Boase, Paul H.

    1993-01-01

    Examines the phenomenon of political correctness, its roots and objectives, and its successes and failures in coping with the conflicts and clashes of multicultural campuses. Argues that speech codes indicate failure in academia's primary mission to civilize and educate through talk, discussion, thought, and persuasion. (SR)

  9. Impact of aerosols on the OMI tropospheric NO2 retrievals over industrialized regions: how accurate is the aerosol correction of cloud-free scenes via a simple cloud model?

    NASA Astrophysics Data System (ADS)

    Chimot, J.; Vlemmix, T.; Veefkind, J. P.; de Haan, J. F.; Levelt, P. F.

    2015-08-01

    The Ozone Monitoring Instrument (OMI) has provided daily global measurements of tropospheric NO2 for more than a decade. Numerous studies have drawn attention to the complexities related to measurements of tropospheric NO2 in the presence of aerosols. Fine particles affect the OMI spectral measurements and the length of the average light path followed by the photons. However, they are not explicitly taken into account in the current OMI tropospheric NO2 retrieval chain. Instead, the operational OMI O2-O2 cloud retrieval algorithm is applied both to cloudy scenes and to cloud-free scenes with aerosols present. This paper describes in detail the complex interplay between the spectral effects of aerosols, the OMI O2-O2 cloud retrieval algorithm, and the impact on the accuracy of the tropospheric NO2 retrievals through the computed Air Mass Factor (AMF) over cloud-free scenes. Collocated OMI NO2 and MODIS Aqua aerosol products are analysed over East China, an industrialized area. In addition, aerosol effects on the tropospheric NO2 AMF and the retrieval of OMI cloud parameters are simulated. Both the observation-based and the simulation-based approach demonstrate that the retrieved cloud fraction increases linearly with increasing Aerosol Optical Thickness (AOT), but the magnitude of this increase depends on the aerosol properties and surface albedo. This increase is induced by the additional scattering effects of aerosols, which enhance the scene brightness. The decreasing effective cloud pressure with increasing AOT primarily represents the absorbing effects of aerosols. The study cases show that the aerosol correction based on the implemented OMI cloud model results in biases between -20 and -40 % for the DOMINO tropospheric NO2 product in cases of high aerosol pollution (AOT ≥ 0.6) and elevated particles. By contrast, when aerosols are relatively close to the surface or mixed with NO2, aerosol correction based on the cloud model results in

  10. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: An accurate correction scheme for electrostatic finite-size effects

    NASA Astrophysics Data System (ADS)

    Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.; Hünenberger, Philippe H.

    2013-11-01

    The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges -5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol-1) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB
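One ingredient of such corrections, the leading-order periodicity self-energy of the ligand's net charge screened by the solvent, already shows the characteristic 1/L box-size dependence. This is only the simplest term, not the article's full numerical or analytical scheme, and the solvent permittivity below is an assumed value:

```python
XI_EW = -2.837297      # Wigner constant for a cubic lattice
F_COUL = 138.935       # e^2 / (4*pi*eps0) in kJ mol^-1 nm e^-2

def leading_periodicity_term(q_e, box_nm, eps_s):
    """Leading 1/L self-interaction of a net charge q with its periodic images
    and neutralizing background, screened by a solvent of permittivity eps_s."""
    return XI_EW * F_COUL * q_e**2 / (2.0 * eps_s * box_nm)

dg_small = leading_periodicity_term(1.0, 7.42, 78.4)    # ~ -0.34 kJ/mol
dg_large = leading_periodicity_term(1.0, 11.02, 78.4)
```

The finite-size effects reported in the article (up to 17.1 kJ/mol) are far larger than this net-charge term because the charged protein and the solvent polarization contribute as well, which is why the Poisson-Boltzmann-based correction schemes are needed.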

  11. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: An accurate correction scheme for electrostatic finite-size effects

    SciTech Connect

    Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.; Hünenberger, Philippe H.

    2013-11-14

    The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges −5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol{sup −1}) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non

  12. Impact of aerosols on the OMI tropospheric NO2 retrievals over industrialized regions: how accurate is the aerosol correction of cloud-free scenes via a simple cloud model?

    NASA Astrophysics Data System (ADS)

    Chimot, J.; Vlemmix, T.; Veefkind, J. P.; de Haan, J. F.; Levelt, P. F.

    2016-02-01

    The Ozone Monitoring Instrument (OMI) has provided daily global measurements of tropospheric NO2 for more than a decade. Numerous studies have drawn attention to the complexities related to measurements of tropospheric NO2 in the presence of aerosols. Fine particles affect the OMI spectral measurements and the length of the average light path followed by the photons. However, they are not explicitly taken into account in the current operational OMI tropospheric NO2 retrieval chain (DOMINO - Derivation of OMI tropospheric NO2) product. Instead, the operational OMI O2 - O2 cloud retrieval algorithm is applied both to cloudy and to cloud-free scenes (i.e. clear sky) dominated by the presence of aerosols. This paper describes in detail the complex interplay between the spectral effects of aerosols in the satellite observation and the associated response of the OMI O2 - O2 cloud retrieval algorithm. Then, it evaluates the impact on the accuracy of the tropospheric NO2 retrievals through the computed Air Mass Factor (AMF) with a focus on cloud-free scenes. For that purpose, collocated OMI NO2 and MODIS (Moderate Resolution Imaging Spectroradiometer) Aqua aerosol products are analysed over the strongly industrialized East China area. In addition, aerosol effects on the tropospheric NO2 AMF and the retrieval of OMI cloud parameters are simulated. Both the observation-based and the simulation-based approach demonstrate that the retrieved cloud fraction increases with increasing Aerosol Optical Thickness (AOT), but the magnitude of this increase depends on the aerosol properties and surface albedo. This increase is induced by the additional scattering effects of aerosols which enhance the scene brightness. The decreasing effective cloud pressure with increasing AOT primarily represents the shielding effects of the O2 - O2 column located below the aerosol layers. The study cases show that the aerosol correction based on the implemented OMI cloud model results in biases

  13. Global Attenuation Model of the Upper Mantle

    NASA Astrophysics Data System (ADS)

    Adenis, A.; Debayle, E.; Ricard, Y. R.

    2015-12-01

    We present a three-dimensional shear attenuation model based on a massive surface wave data set (372,629 Rayleigh waveforms analysed in the period range 50-300s by Debayle and Ricard, 2012). For each seismogram, this approach yields depth-dependent path-average models of shear velocity and quality factor, and a set of fundamental- and higher-mode dispersion and attenuation curves. We combine these attenuation measurements in a tomographic inversion after a careful rejection of noisy data. We first remove data likely to be biased by poor knowledge of the source. Then, assuming that waves from events with close epicenters recorded at the same station sample the same elastic and anelastic structure, we cluster the corresponding rays and average the attenuation measurements. Logarithms of the attenuations are regionalized using the non-linear least-squares formalism of Tarantola and Valette (1982), resulting in attenuation tomographic maps between 50s and 300s. After a first inversion, outliers are rejected and a second inversion yields a moderate variance reduction of about 20%. We correct the attenuation curves for focusing effects using the linearized ray theory of Woodhouse and Wong (1986); accounting for focusing allows building tomographic maps with variance reductions reaching 40%. In the period range 120-200s, the root mean square of the model perturbations increases from about 5% to 20%. Our 3-D attenuation models agree strongly with surface tectonics at periods shorter than 200s: areas of low attenuation are located under continents, while areas of high attenuation are associated with oceans. Surprisingly, although mid-oceanic ridges are located in attenuating regions, their signature, even when enhanced by focusing corrections, remains weaker than in shear velocity models. Synthetic tests suggest that regularisation damps the attenuation signature of ridges, which could therefore be underestimated.

  14. Characterizing Ultraviolet and Infrared Observational Properties for Galaxies. II. Features of Attenuation Law

    NASA Astrophysics Data System (ADS)

    Mao, Ye-Wei; Kong, Xu; Lin, Lin

    2014-07-01

    Variations in the attenuation law have a significant impact on observed spectral energy distributions for galaxies. As an important observational property of galaxies at ultraviolet and infrared wavelength bands, the correlation between infrared-to-ultraviolet luminosity ratio and ultraviolet color index (or ultraviolet spectral slope), i.e., the IRX-UV relation (or IRX-β relation), has offered a widely used formula for correcting dust attenuation in galaxies, but its usability is now in doubt because of the considerable dispersion in this relation found by many studies. In this paper, on the basis of spectral synthesis modeling and spatially resolved measurements of four nearby spiral galaxies, we interpret the deviation in the IRX-UV relation in terms of variations in the attenuation law. From both theoretical and observational viewpoints, two components of the attenuation curve, the linear background and the 2175 Å bump, are suggested as parameters in the IRX-UV function in addition to the stellar population age (addressed in the first paper of this series); different features in the attenuation curve are diagnosed for the galaxies in our sample. Nevertheless, it is often difficult to ascertain the attenuation law for galaxies in actual observations. Possible reasons preventing successful detection of the parameters of the attenuation curve are also discussed in this paper, including the degeneracy of the linear background and the 2175 Å bump in observational channels, the requirement for young and dust-rich systems to study, and the difficulty of accurately estimating dust attenuation at different wavelength bands.

  15. Characterizing ultraviolet and infrared observational properties for galaxies. II. Features of attenuation law

    SciTech Connect

    Mao, Ye-Wei; Kong, Xu; Lin, Lin E-mail: xkong@ustc.edu.cn

    2014-07-01

    Variations in the attenuation law have a significant impact on observed spectral energy distributions for galaxies. As an important observational property of galaxies at ultraviolet and infrared wavelength bands, the correlation between infrared-to-ultraviolet luminosity ratio and ultraviolet color index (or ultraviolet spectral slope), i.e., the IRX-UV relation (or IRX-β relation), has offered a widely used formula for correcting dust attenuation in galaxies, but its usability is now in doubt because of the considerable dispersion in this relation found by many studies. In this paper, on the basis of spectral synthesis modeling and spatially resolved measurements of four nearby spiral galaxies, we interpret the deviation in the IRX-UV relation in terms of variations in the attenuation law. From both theoretical and observational viewpoints, two components of the attenuation curve, the linear background and the 2175 Å bump, are suggested as parameters in the IRX-UV function in addition to the stellar population age (addressed in the first paper of this series); different features in the attenuation curve are diagnosed for the galaxies in our sample. Nevertheless, it is often difficult to ascertain the attenuation law for galaxies in actual observations. Possible reasons preventing successful detection of the parameters of the attenuation curve are also discussed in this paper, including the degeneracy of the linear background and the 2175 Å bump in observational channels, the requirement for young and dust-rich systems to study, and the difficulty of accurately estimating dust attenuation at different wavelength bands.

  16. DC attenuation meter

    DOEpatents

    Hargrove, Douglas L.

    2004-09-14

    A portable, hand-held meter used to measure direct current (DC) attenuation in low-impedance electrical signal cables and signal attenuators. A DC voltage is applied to the signal input of the cable and fed back to the control circuit through the signal cable and attenuators. The control circuit adjusts the applied voltage to the cable until the feedback voltage equals the reference voltage. The number of "units" of applied voltage required at the cable input is the system attenuation value of the cable and attenuators, which makes this meter unique. The meter may be used to calibrate data signal cables, attenuators, and cable-attenuator assemblies.

  17. Assimilation of attenuated data from X-band network radars using ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    Cheng, Jing

    To use reflectivity data from X-band radars for quantitative precipitation estimation and storm-scale data assimilation, the effect of attenuation must be properly accounted for. Traditional approaches correct the attenuated reflectivity before using the data. An alternative, theoretically more attractive approach builds the attenuation effect into the reflectivity observation operator of a data assimilation system, such as an ensemble Kalman filter (EnKF), allowing direct assimilation of the attenuated reflectivity and taking advantage of microphysical state estimation with EnKF methods for a potentially more accurate solution. This study first tests the approach for the CASA (Center for Collaborative Adaptive Sensing of the Atmosphere) X-band radar network configuration through observing system simulation experiments (OSSEs) for a quasi-linear convective system (QLCS), which suffers more significant attenuation than isolated storms. To avoid giving too much weight to fully attenuated reflectivity, an analytical, echo-intensity-dependent model for the observation error (AEM) is developed and is found to improve the performance of the filter. By building the attenuation into the forward observation operator and applying the AEM, the assimilation of attenuated CASA observations produces a reasonably accurate analysis of the QLCS inside the CASA radar network coverage. Compared with forgoing assimilation of weak radar reflectivity data or assimilating only radial velocity data, our method suppresses the growth of spurious echoes while obtaining a more accurate analysis in terms of root-mean-square (RMS) error. Sensitivity experiments examine the effectiveness of the AEM by introducing multiple sources of observation error into the simulated observations. The performance of the approach in the presence of resolution-induced model error is also evaluated and

  18. Novel approach for the Monte Carlo calculation of free-air chamber correction factors.

    PubMed

    Mainegra-Hing, Ernesto; Reynaert, Nick; Kawrakow, Iwan

    2008-08-01

    A self-consistent approach for the Monte Carlo calculation of free-air chamber (FAC) correction factors needed to convert the chamber reading into the quantity air kerma at the point of measurement is introduced, and its implementation in the new EGSnrc user code egs_fac is discussed. To validate the method, comparisons between computed and measured FAC correction factors for attenuation (Aatt), scatter (Ascat), and electron loss (Aeloss) are performed in the medium energy range, where the experimental determination is believed to be accurate. The Monte Carlo calculations utilize a full simulation of the x-ray tube with BEAMnrc and a detailed model of the parallel-plate FAC. Excellent agreement is observed between the computed Ascat and Aeloss and the measured values for these correction factors currently used in the National Research Council (NRC) of Canada primary FAC standard. Our simulations also agree with previous Monte Carlo results for Ascat and Aeloss for the 135 and 250 kVp Consultative Committee for Ionizing Radiation reference beam qualities. The computed attenuation correction agrees with the measured Aatt within the stated uncertainties, although the authors' simulations demonstrate that the evacuated-tube technique employed at NRC to measure the attenuation correction slightly overestimates Aatt in the medium energy range. The newly introduced corrections for backscatter, beam geometry, and lack of charged-particle equilibrium along the beam axis are found to be negligible. On the other hand, the correction for photons leaking through the FAC aperture, currently ignored in the NRC standard, is shown to be significant.

  19. Pressure surge attenuator

    DOEpatents

    Christie, Alan M.; Snyder, Kurt I.

    1985-01-01

    A pressure surge attenuation system for pipes having a fluted region opposite crushable metal foam. As adapted for nuclear reactor vessels and heads, crushable metal foam is disposed to attenuate pressure surges.

  20. Tracer attenuation in groundwater

    NASA Astrophysics Data System (ADS)

    Cvetkovic, Vladimir

    2011-12-01

    The self-purifying capacity of aquifers strongly depends on the attenuation of waterborne contaminants, i.e., irreversible loss of contaminant mass on a given scale as a result of coupled transport and transformation processes. A general formulation of tracer attenuation in groundwater is presented. Basic sensitivities of attenuation to macrodispersion and retention are illustrated for a few typical retention mechanisms. Tracer recovery is suggested as an experimental proxy for attenuation. Unique experimental data of tracer recovery in crystalline rock compare favorably with the theoretical model that is based on diffusion-controlled retention. Non-Fickian hydrodynamic transport has potentially a large impact on field-scale attenuation of dissolved contaminants.

  1. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema, where accuracy degenerates to second order due to the monotonicity constraint. Algorithms are presented for piecewise cubic interpolants that preserve monotonicity as well as uniform third- and fourth-order accuracy. The gain in accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
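    Huynh's median-based higher-order limiters are not reproduced here; as a minimal sketch of the underlying idea of monotone piecewise cubic interpolation, the classical Fritsch-Carlson-style construction limits the nodal derivatives (a harmonic mean of secant slopes is used below) so that the cubic Hermite interpolant stays monotone. All function names are illustrative.

    ```python
    import numpy as np

    def monotone_cubic_slopes(x, y):
        """Limited nodal slopes for a monotone cubic Hermite interpolant.

        Uses the harmonic mean of adjacent secant slopes (a classical
        Fritsch-Carlson-style limiter, not the paper's higher-order one)."""
        d = np.diff(y) / np.diff(x)                 # secant slopes
        m = np.zeros_like(y)
        with np.errstate(divide="ignore", invalid="ignore"):
            hm = 2.0 * d[:-1] * d[1:] / (d[:-1] + d[1:])
        # zero slope at strict local extrema (sign change of secants)
        m[1:-1] = np.where(d[:-1] * d[1:] > 0, hm, 0.0)
        m[0], m[-1] = d[0], d[-1]                   # one-sided endpoint slopes
        return m

    def eval_hermite(x, y, m, xq):
        """Evaluate the cubic Hermite interpolant at query points xq."""
        i = np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2)
        h = x[i + 1] - x[i]
        t = (xq - x[i]) / h
        h00 = (1 + 2 * t) * (1 - t) ** 2
        h10 = t * (1 - t) ** 2
        h01 = t ** 2 * (3 - 2 * t)
        h11 = t ** 2 * (t - 1)
        return h00 * y[i] + h10 * h * m[i] + h01 * y[i + 1] + h11 * h * m[i + 1]
    ```

    The harmonic mean bounds each slope by twice the smaller adjacent secant slope, which keeps the interpolant inside the Fritsch-Carlson monotonicity region; the price is the second-order accuracy near extrema that the abstract's algorithms recover.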

  2. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order, high-resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.
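    The paper's high-order schemes are not given here; as a minimal, low-order illustration of the same class of method (a single-step explicit scheme for wave propagation), the classical second-order Lax-Wendroff scheme for 1D linear advection can be sketched as follows.

    ```python
    import numpy as np

    def lax_wendroff_step(u, cfl):
        """One step of the single-step explicit Lax-Wendroff scheme for
        u_t + a*u_x = 0 on a periodic grid, with cfl = a*dt/dx."""
        up = np.roll(u, -1)   # u[i+1]
        um = np.roll(u, 1)    # u[i-1]
        return u - 0.5 * cfl * (up - um) + 0.5 * cfl**2 * (up - 2 * u + um)

    # Advect a sine wave once around a periodic domain of unit length.
    n = 128
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    u = np.sin(2 * np.pi * x)
    cfl = 0.5
    for _ in range(int(n / cfl)):   # enough steps to traverse the domain once
        u = lax_wendroff_step(u, cfl)
    # u should closely match the initial profile (small dispersive phase error)
    ```

    With eight points per wavelength and millions of periods, a second-order scheme like this one accumulates unacceptable phase error, which is exactly what the eleventh-order, spectral-like methods of the abstract are designed to avoid.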

  3. Attenuation Tomography of the Upper Mantle

    NASA Astrophysics Data System (ADS)

    Adenis, A.; Debayle, E.; Ricard, Y. R.

    2014-12-01

    We present a 3-D model of surface wave attenuation in the upper mantle. The model is constrained by a large data set of fundamental and higher Rayleigh mode observations, consisting of about 1,800,000 attenuation curves measured in the period range 50-300s by Debayle and Ricard (2012). A careful selection allows us to reject data whose measurements are likely biased by poor knowledge of the scalar seismic moment or by ray propagation too close to a node of the source radiation pattern. For each epicenter-station path, elastic focusing effects due to seismic heterogeneities are corrected using DR2012 and the data are turned into log(1/Q). The selected data are then combined in a tomographic inversion using the non-linear least-squares formalism of Tarantola and Valette (1982). The obtained attenuation maps agree with surface tectonics for periods and modes sensitive to the top 200 km of the upper mantle: low-attenuation regions correlate with continental shields, while high-attenuation regions are located beneath young oceanic regions. The attenuation pattern becomes more homogeneous at depths greater than 200 km, and the maps are dominated by a high-quality-factor signature beneath slabs. We will discuss the similarities and differences between seismic velocity and attenuation tomography.

  4. Iterative Beam Hardening Correction for Multi-Material Objects.

    PubMed

    Zhao, Yunsong; Li, Mengfei

    2015-01-01

    In this paper, we propose an iterative beam hardening correction method that is applicable to the case of multiple materials. By assuming that the materials composing the scanned object are known and are distinguishable by their linear attenuation coefficients at some given energy, the beam hardening correction problem is converted into a nonlinear system problem, which is then solved iteratively. The reconstructed image is the distribution of the linear attenuation coefficient of the scanned object at a given energy, so in theory the image is free of beam hardening artifacts. The proposed iterative scheme combines an accurate polychromatic forward projection with a linearized backprojection. Both the forward projection and the backprojection have a high degree of parallelism and are suitable for acceleration on parallel systems. Numerical experiments with both simulated and real data verify the validity of the proposed method: the beam hardening artifacts are alleviated effectively. In addition, the proposed method tolerates errors in the estimated x-ray spectrum well.
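    The nonlinearity arises because a polychromatic beam hardens as low-energy photons are preferentially absorbed. A minimal sketch of the polychromatic forward projection for a single ray, assuming a discretized spectrum and known per-material path lengths (all names and values illustrative, not the paper's implementation):

    ```python
    import numpy as np

    def poly_projection(spectrum_w, mu, lengths):
        """Polychromatic forward projection for one ray:
        p = -ln( sum_E w_E * exp(-sum_m mu[m, E] * L[m]) )

        spectrum_w : normalized spectral weights, shape (nE,)
        mu         : linear attenuation coefficients, shape (n_materials, nE)
        lengths    : intersection lengths of the ray with each material, (n_materials,)
        """
        line_integrals = lengths @ mu                  # shape (nE,)
        intensity = np.sum(spectrum_w * np.exp(-line_integrals))
        return -np.log(intensity)

    # With a monochromatic "spectrum" this reduces to the linear Radon value:
    w = np.array([1.0])
    mu = np.array([[0.2], [0.5]])       # two materials, one energy bin
    L = np.array([3.0, 1.0])
    p_mono = poly_projection(w, mu, L)  # = 0.2*3 + 0.5*1 = 1.1
    ```

    For a genuinely polychromatic spectrum the measured projection falls below the spectrum-averaged linear value (Jensen's inequality), which is the beam hardening effect the iteration must invert.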

  5. Aerosol effects and corrections in the Halogen Occultation Experiment

    NASA Technical Reports Server (NTRS)

    Hervig, Mark E.; Russell, James M., III; Gordley, Larry L.; Daniels, John; Drayson, S. Roland; Park, Jae H.

    1995-01-01

    The eruptions of Mt. Pinatubo in June 1991 increased stratospheric aerosol loading by a factor of 30, affecting chemistry, radiative transfer, and remote measurements of the stratosphere. The Halogen Occultation Experiment (HALOE) instrument on board the Upper Atmosphere Research Satellite (UARS) makes measurements globally for inferring profiles of NO2, H2O, O3, HF, HCl, CH4, NO, and temperature, in addition to aerosol extinction at five wavelengths. Understanding and removing the aerosol extinction is essential for obtaining accurate retrievals of NO2, H2O, and O3 from the radiometer channels in the lower stratosphere, since these measurements are severely affected by contaminant aerosol absorption. If ignored, aerosol absorption in the radiometer measurements is interpreted as additional absorption by the target gas, resulting in anomalously large mixing ratios. To correct the radiometer measurements for aerosol effects, a retrieved aerosol extinction profile is extrapolated to the radiometer wavelengths and then included as continuum attenuation. The sensitivity of the extrapolation to size distribution and composition is small for certain wavelength combinations, reducing the correction uncertainty. The aerosol corrections extend the usable range of profiles retrieved from the radiometer channels down to the tropopause, with results that agree well with correlative measurements. In situations of heavy aerosol loading, errors due to aerosol in the retrieved mixing ratios are reduced to about 15, 25, and 60% in H2O, O3, and NO2, respectively, levels that are much less than the correction magnitude.

  6. Variable laser attenuator

    DOEpatents

    Foltyn, S.R.

    1987-05-29

    The disclosure relates to low loss, high power variable attenuators comprising one or more transmissive and/or reflective multilayer dielectric filters. The attenuator is particularly suitable to use with unpolarized lasers such as excimer lasers. Beam attenuation is a function of beam polarization and the angle of incidence between the beam and the filter and is controlled by adjusting the angle of incidence the beam makes to the filter or filters. Filters are selected in accordance with beam wavelength. 9 figs.

  7. Variable laser attenuator

    DOEpatents

    Foltyn, Stephen R.

    1988-01-01

    The disclosure relates to low loss, high power variable attenuators comprising one or more transmissive and/or reflective multilayer dielectric filters. The attenuator is particularly suitable for use with unpolarized lasers such as excimer lasers. Beam attenuation is a function of beam polarization and the angle of incidence between the beam and the filter, and is controlled by adjusting the angle of incidence the beam makes to the filter or filters. Filters are selected in accordance with beam wavelength.

  8. Electroweak Corrections

    NASA Astrophysics Data System (ADS)

    Barbieri, Riccardo

    2016-10-01

    The test of the electroweak corrections has played a major role in providing evidence for the gauge and the Higgs sectors of the Standard Model. At the same time the consideration of the electroweak corrections has given significant indirect information on the masses of the top and the Higgs boson before their discoveries and important orientation/constraints on the searches for new physics, still highly valuable in the present situation. The progression of these contributions is reviewed.

  9. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed; these are then applied to a number of chemical and spectroscopic problems, to transition metals, and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  10. Correlation Attenuation Due to Measurement Error: A New Approach Using the Bootstrap Procedure

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Veprinsky, Anna

    2012-01-01

    Issues with correlation attenuation due to measurement error are well documented. More than a century ago, Spearman proposed a correction for attenuation. However, this correction has seen very little use since it can potentially inflate the true correlation beyond one. In addition, very little confidence interval (CI) research has been done for…
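    Spearman's correction divides the observed correlation by the square root of the product of the two reliabilities, which is why a sample estimate can exceed one. A percentile-bootstrap confidence interval for the corrected correlation, one plausible reading of the approach discussed (with the reliabilities treated as fixed, a simplifying assumption), might be sketched as:

    ```python
    import numpy as np

    def disattenuate(r_xy, rel_x, rel_y):
        """Spearman's correction for attenuation:
        estimated true-score correlation = r_xy / sqrt(rel_x * rel_y).
        Note: the sample estimate can exceed 1."""
        return r_xy / np.sqrt(rel_x * rel_y)

    def bootstrap_ci(x, y, rel_x, rel_y, n_boot=2000, alpha=0.05, seed=0):
        """Percentile-bootstrap CI for the disattenuated correlation.
        Reliabilities are treated as fixed constants here."""
        rng = np.random.default_rng(seed)
        n = len(x)
        stats = np.empty(n_boot)
        for b in range(n_boot):
            idx = rng.integers(0, n, n)            # resample cases with replacement
            r = np.corrcoef(x[idx], y[idx])[0, 1]
            stats[b] = disattenuate(r, rel_x, rel_y)
        lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
        return lo, hi
    ```

    Resampling whole cases preserves the joint (x, y) distribution; a fuller treatment would also resample the reliability estimates rather than hold them fixed.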

  11. Accurate Optical Reference Catalogs

    NASA Astrophysics Data System (ADS)

    Zacharias, N.

    2006-08-01

    Current and near future all-sky astrometric catalogs on the ICRF are reviewed with the emphasis on reference star data at optical wavelengths for user applications. The standard error of a Hipparcos Catalogue star position is now about 15 mas per coordinate. For the Tycho-2 data it is typically 20 to 100 mas, depending on magnitude. The USNO CCD Astrograph Catalog (UCAC) observing program was completed in 2004 and reductions toward the final UCAC3 release are in progress. This all-sky reference catalogue will have positional errors of 15 to 70 mas for stars in the 10 to 16 mag range, with a high degree of completeness. Proper motions for the about 60 million UCAC stars will be derived by combining UCAC astrometry with available early epoch data, including yet unpublished scans of the complete set of AGK2, Hamburg Zone astrograph and USNO Black Birch programs. Accurate positional and proper motion data are combined in the Naval Observatory Merged Astrometric Dataset (NOMAD) which includes Hipparcos, Tycho-2, UCAC2, USNO-B1, NPM+SPM plate scan data for astrometry, and is supplemented by multi-band optical photometry as well as 2MASS near infrared photometry. The Milli-Arcsecond Pathfinder Survey (MAPS) mission is currently being planned at USNO. This is a micro-satellite to obtain 1 mas positions, parallaxes, and 1 mas/yr proper motions for all bright stars down to about 15th magnitude. This program will be supplemented by a ground-based program to reach 18th magnitude on the 5 mas level.

  12. Attenuation of laser generated ultrasound in steel at high temperatures; comparison of theory and experimental measurements.

    PubMed

    Kube, Christopher M

    2016-08-01

    This article reexamines some recently published laser ultrasound measurements of the longitudinal attenuation coefficient obtained during annealing of two steel samples (DP600 and S550). Theoretical attenuation models based on perturbation theory are compared to these experimental measurements. It is observed that the Rayleigh attenuation formulas provide the correct qualitative agreement, but overestimate the experimental values. The more general theoretical attenuation model considered here demonstrates strong quantitative agreement, which highlights the applicability of the model during real-time metal processing.

  13. Jitter Correction

    NASA Technical Reports Server (NTRS)

    Waegell, Mordecai J.; Palacios, David M.

    2011-01-01

    Jitter_Correct.m is a MATLAB function that automatically measures and corrects inter-frame jitter in an image sequence to a user-specified precision. In addition, the algorithm dynamically adjusts the image sample size to increase the accuracy of the measurement. The Jitter_Correct.m function takes an image sequence with unknown frame-to-frame jitter and computes the translations of each frame (column and row, in pixels) relative to a chosen reference frame with sub-pixel accuracy. The translations are measured using a Cross Correlation Fourier transformation method in which the relative phase of the two transformed images is fit to a plane. The measured translations are then used to correct the inter-frame jitter of the image sequence. The function also dynamically expands the image sample size over which the cross-correlation is measured to increase the accuracy of the measurement. This increases the robustness of the measurement to variable magnitudes of inter-frame jitter
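    The MATLAB function itself is not reproduced here; the core FFT cross-correlation idea, at integer-pixel precision only (the described code refines this to sub-pixel precision by fitting a plane to the relative phase), can be sketched as:

    ```python
    import numpy as np

    def integer_shift(ref, img):
        """Estimate the (row, col) translation of `img` relative to `ref`
        via FFT cross-correlation, to integer-pixel precision.

        Returns shifts (dr, dc) such that img is approximately
        np.roll(ref, (dr, dc), axis=(0, 1))."""
        xcorr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref)))
        peak = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)
        shifts = np.array(peak, dtype=float)
        # wrap shifts larger than half the frame size to negative values
        shifts[shifts > np.array(xcorr.shape) / 2] -= np.array(xcorr.shape)
        return shifts
    ```

    Correcting a frame then amounts to translating it by the negated measured shift; the dynamic sample-size expansion described in the abstract would enlarge the region over which `xcorr` is computed until the peak is measured to the requested precision.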

  14. Quantitative fully 3D PET via model-based scatter correction

    SciTech Connect

    Ollinger, J.M.

    1994-05-01

    We have investigated the quantitative accuracy of fully 3D PET using model-based scatter correction by measuring the half-life of Ga-68 in the presence of scatter from F-18. The inner chamber of a Data Spectrum cardiac phantom was filled with 18.5 MBq of Ga-68. The outer chamber was filled with an equivalent amount of F-18. The cardiac phantom was placed in a 22x30.5 cm elliptical phantom containing anthropomorphic lung inserts filled with a water-Styrofoam mixture. Ten frames of dynamic data were collected over 13.6 hours on Siemens-CTI 953B scanner with the septa retracted. The data were corrected using model-based scatter correction, which uses the emission images, transmission images and an accurate physical model to directly calculate the scatter distribution. Both uncorrected and corrected data were reconstructed using the Promis algorithm. The scatter correction required 4.3% of the total reconstruction time. The scatter fraction in a small volume of interest in the center of the inner chamber of the cardiac insert rose from 4.0% in the first interval to 46.4% in the last interval as the ratio of F-18 activity to Ga-68 activity rose from 1:1 to 33:1. Fitting a single exponential to the last three data points yields estimates of the half-life of Ga-68 of 77.01 minutes and 68.79 minutes for uncorrected and corrected data respectively. Thus, scatter correction reduces the error from 13.3% to 1.2%. This suggests that model-based scatter correction is accurate in the heterogeneous attenuating medium found in the chest, making possible quantitative, fully 3D PET in the body.
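    The quoted error figures are consistent with a reference Ga-68 half-life near 68 minutes, and the half-life fit itself reduces to linear least squares in log-activity. A minimal sketch with synthetic data (the 68-minute reference and all values below are illustrative, not the study's data):

    ```python
    import numpy as np

    def fit_half_life(t, activity):
        """Estimate a half-life from (time, activity) samples by a linear
        least-squares fit to ln(A) = ln(A0) - lambda * t."""
        lam = -np.polyfit(t, np.log(activity), 1)[0]   # decay constant
        return np.log(2) / lam

    # Synthetic decay curve generated with a 68.0-minute half-life.
    t = np.array([0.0, 60.0, 120.0, 180.0])
    a = 100.0 * 0.5 ** (t / 68.0)
    t_half = fit_half_life(t, a)
    # For comparison: a fitted value of 77.01 min would be off by
    # (77.01 - 68.0) / 68.0 ≈ 13.3%, and 68.79 min by ≈ 1.2%,
    # matching the uncorrected and scatter-corrected errors quoted above.
    ```

    In the study the contaminating F-18 background makes the uncorrected curve decay too slowly, inflating the apparent half-life; the scatter correction restores the single-exponential behavior this fit assumes.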

  15. Landing gear noise attenuation

    NASA Technical Reports Server (NTRS)

    Moe, Jeffrey W. (Inventor); Whitmire, Julia (Inventor); Kwan, Hwa-Wan (Inventor); Abeysinghe, Amal (Inventor)

    2011-01-01

    A landing gear noise attenuator mitigates noise generated by airframe deployable landing gear. The noise attenuator can have a first position when the landing gear is in its deployed or down position, and a second position when the landing gear is in its up or stowed position. The noise attenuator may be an inflatable fairing that does not compromise the limited space constraints associated with landing gear retraction and stowage. A truck fairing mounted under a truck beam can have a compliant edge to allow for non-destructive impingement of a deflected tire during certain conditions.

  16. RADIO FREQUENCY ATTENUATOR

    DOEpatents

    Giordano, S.

    1963-11-12

    A high peak power level r-f attenuator that is readily and easily insertable along a coaxial cable having an inner conductor and an outer annular conductor without breaking the ends thereof is presented. Spaced first and second flares in the outer conductor face each other with a slidable cylindrical outer conductor portion therebetween. Dielectric means, such as water, contact the cable between the flares to attenuate the radio-frequency energy received thereby. The cylindrical outer conductor portion is slidable to adjust the voltage standing wave ratio to a low level, and one of the flares is slidable to adjust the attenuation level. An integral dielectric container is also provided. (AFC)

  17. GPR measurements of attenuation in concrete

    SciTech Connect

    Eisenmann, David Margetan, Frank J. Pavel, Brittney

    2015-03-31

    Ground-penetrating radar (GPR) signals from concrete structures are affected by several phenomena, including: (1) transmission and reflection coefficients at interfaces; (2) the radiation patterns of the antenna(s) being used; and (3) the material properties of the concrete and any embedded objects. In this paper we investigate different schemes for determining the electromagnetic (EM) attenuation of concrete from signals measured with commercially available GPR equipment. We adapt procedures commonly used in ultrasonic inspections, in which one compares the relative strengths of two or more signals having different travel paths through the material of interest. After correcting for beam spread (i.e., diffraction), interface phenomena, and equipment amplification settings, any remaining signal differences are assumed to be due to attenuation, allowing the attenuation coefficient (say, in dB of loss per inch of travel) to be estimated. We begin with a brief overview of our approach, then discuss how diffraction corrections were determined for our two 1.6 GHz GPR antennas. We then present results of attenuation measurements for two types of concrete using both pulse/echo and pitch/catch measurement setups.
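    The two-signal ratio method described above can be sketched in a few lines: after the diffraction and interface corrections are expressed in dB, whatever amplitude difference remains is divided by the extra travel distance. Names and the correction inputs are illustrative, not the paper's processing chain.

    ```python
    import numpy as np

    def attenuation_db_per_unit(amp1, path1, amp2, path2,
                                diffr_corr_db=0.0, interface_corr_db=0.0):
        """Attenuation coefficient from two echoes with different travel paths.

        amp1, amp2   : peak amplitudes (linear units) for paths path1 < path2
        *_corr_db    : dB already attributed to beam spread / interface losses
        Returns dB of loss per unit of travel distance."""
        delta_db = 20.0 * np.log10(amp1 / amp2) - diffr_corr_db - interface_corr_db
        return delta_db / (path2 - path1)

    # Example: 6 dB of unexplained loss over 2 extra inches of travel -> 3 dB/inch
    alpha = attenuation_db_per_unit(amp1=1.0, path1=2.0,
                                    amp2=10 ** (-6.0 / 20.0), path2=4.0)
    ```

    In practice the corrections dominate the error budget: an error of 1 dB in the diffraction correction shifts the estimate by 0.5 dB/inch in this example.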

  18. Atmospheric extinction in solar tower plants: the Absorption and Broadband Correction for MOR measurements

    NASA Astrophysics Data System (ADS)

    Hanrieder, N.; Wilbert, S.; Pitz-Paal, R.; Emde, C.; Gasteiger, J.; Mayer, B.; Polo, J.

    2015-05-01

    Losses of reflected Direct Normal Irradiance due to atmospheric extinction in concentrating solar tower plants can vary significantly with site and time. The losses of the direct normal irradiance between the heliostat field and receiver in a solar tower plant are mainly caused by atmospheric scattering and absorption by aerosol and water vapor concentration in the atmospheric boundary layer. Due to a high aerosol particle number, radiation losses can be significantly larger in desert environments compared to the standard atmospheric conditions which are usually considered in raytracing or plant optimization tools. Information about on-site atmospheric extinction is only rarely available. To measure these radiation losses, two different commercially available instruments were tested and more than 19 months of measurements were collected at the Plataforma Solar de Almería and compared. Both instruments are primarily used to determine the meteorological optical range (MOR). The Vaisala FS11 scatterometer is based on a monochromatic near-infrared light source emission and measures the strength of scattering processes in a small air volume mainly caused by aerosol particles. The Optec LPV4 long-path visibility transmissometer determines the monochromatic attenuation between a light-emitting diode (LED) light source at 532 nm and a receiver and therefore also accounts for absorption processes. As the broadband solar attenuation is of interest for solar resource assessment for Concentrating Solar Power (CSP), a correction procedure for these two instruments is developed and tested. This procedure includes a spectral correction of both instruments from monochromatic to broadband attenuation. That means the attenuation is corrected for the actual, time-dependent solar spectrum reflected by the collector. Further, an absorption correction for the Vaisala FS11 scatterometer is implemented. To optimize the Absorption and Broadband Correction (ABC) procedure, additional

  19. SEISMIC ATTENUATION FOR RESERVOIR CHARACTERIZATION

    SciTech Connect

    Joel Walls; M.T. Taner; Naum Derzhi; Gary Mavko; Jack Dvorkin

    2003-12-01

    We have developed and tested technology for a new type of direct hydrocarbon detection. The method uses inelastic rock properties to greatly enhance the sensitivity of surface seismic methods to the presence of oil and gas saturation. These methods include use of energy absorption, dispersion, and attenuation (Q) along with traditional seismic attributes like velocity, impedance, and AVO. Our approach is to combine three elements: (1) a synthesis of the latest rock physics understanding of how rock inelasticity is related to rock type, pore fluid types, and pore microstructure, (2) synthetic seismic modeling that will help identify the relative contributions of scattering and intrinsic inelasticity to apparent Q attributes, and (3) robust algorithms that extract relative wave attenuation attributes from seismic data. This project provides: (1) additional petrophysical insight from acquired data; (2) increased understanding of rock and fluid properties; (3) new techniques to measure reservoir properties that are not currently available; and (4) tools to more accurately describe the reservoir and predict oil location and volumes. These methodologies will improve the industry's ability to predict and quantify oil and gas saturation distribution, and to apply this information through geologic models to enhance reservoir simulation. We have applied for two separate patents relating to work that was completed as part of this project.

  20. Dust Attenuation in Lyman Break Galaxies

    NASA Astrophysics Data System (ADS)

    Vijh, U. P.; Witt, A. N.; Gordon, K. D.

    2002-05-01

    In order to determine the star formation history of the universe from deep surveys at UV/optical rest frame wavelengths, one must have a reliable estimate of the attenuation factor for galaxies at high redshifts. That star formation is heavily enshrouded in dust is no longer in doubt. The exact nature, geometry and the amount of this dust/attenuation needs to be known out to high redshifts. We present an analysis of UV attenuation of a large (N=906) sample of Lyman Break Galaxies (LBGs) (data provided by Charles C. Steidel, Caltech) by internal dust. Using spectral energy distributions (SEDs) from the PEGASE stellar evolutionary synthesis model we apply dust corrections to the G - R colours using the Witt & Gordon (2000) dust attenuation models, to arrive at the UV attenuation factors. We show that the dust in the LBG sample exhibits SMC-like characteristics rather than MW type, and that the dust geometry is best represented by a clumpy shell configuration. The dust attenuation in individual LBGs is found to be proportional to their rest frame UV luminosities, i.e. their current star formation rate. We find that the average luminosity-weighted dust attenuation factor at 1600 Å is in the range 10-40, which agrees with the upper limit set by the FIR background. We also find that most of the star formation at 2 < z < 4 occurs in galaxies with luminosity ~ 10^11-10^12 Lsun, comparable to the present-day Luminous Infra-Red Galaxies and Ultra-Luminous Infra-Red Galaxies. This work has been supported by NASA grants NAG5-9376 and NAG5-9202, which we acknowledge with gratitude.

  1. Accurate hydrogen depth profiling by reflection elastic recoil detection analysis

    SciTech Connect

    Verda, R. D.; Tesmer, Joseph R.; Nastasi, Michael Anthony,; Bower, R. W.

    2001-01-01

    A technique to convert reflection elastic recoil detection analysis spectra to depth profiles, the channel-depth conversion, was introduced by Verda et al. [1]. But the channel-depth conversion does not correct for energy spread, the unwanted broadening in the energy of the spectra, which can lead to errors in depth profiling. A work in progress introduces a technique that corrects for energy spread in elastic recoil detection analysis spectra, the energy spread correction [2]. Together, the energy spread correction and the channel-depth conversion comprise an accurate and convenient hydrogen depth profiling method.

  2. Timebias corrections to predictions

    NASA Technical Reports Server (NTRS)

    Wood, Roger; Gibbs, Philip

    1993-01-01

    The importance of an accurate knowledge of the time bias corrections to predicted orbits to a satellite laser ranging (SLR) observer, especially for low satellites, is highlighted. Sources of time bias values and the optimum strategy for extrapolation are discussed from the viewpoint of the observer wishing to maximize the chances of getting returns from the next pass. What is said may be seen as a commercial encouraging wider and speedier use of existing data centers for mutually beneficial exchange of time bias data.

  3. A rigid motion correction method for helical computed tomography (CT)

    NASA Astrophysics Data System (ADS)

    Kim, J.-H.; Nuyts, J.; Kyme, A.; Kuncic, Z.; Fulton, R.

    2015-03-01

    We propose a method to compensate for six degree-of-freedom rigid motion in helical CT of the head. The method is demonstrated in simulations and in helical scans performed on a 16-slice CT scanner. Scans of a Hoffman brain phantom were acquired while an optical motion tracking system recorded the motion of the bed and the phantom. Motion correction was performed by restoring projection consistency using data from the motion tracking system, and reconstructing with an iterative fully 3D algorithm. Motion correction accuracy was evaluated by comparing reconstructed images with a stationary reference scan. We also investigated the effects on accuracy of tracker sampling rate, measurement jitter, interpolation of tracker measurements, and the synchronization of motion data and CT projections. After optimization of these aspects, motion corrected images corresponded remarkably closely to images of the stationary phantom with correlation and similarity coefficients both above 0.9. We performed a simulation study using volunteer head motion and found similarly that our method is capable of compensating effectively for realistic human head movements. To the best of our knowledge, this is the first practical demonstration of generalized rigid motion correction in helical CT. Its clinical value, which we have yet to explore, may be significant. For example it could reduce the necessity for repeat scans and resource-intensive anesthetic and sedation procedures in patient groups prone to motion, such as young children. It is not only applicable to dedicated CT imaging, but also to hybrid PET/CT and SPECT/CT, where it could also ensure an accurate CT image for lesion localization and attenuation correction of the functional image data.

  4. A method to correct for spectral artifacts in optical-CT dosimetry

    PubMed Central

    Pierquet, Michael; Jordan, Kevin; Oldham, Mark

    2011-01-01

    The recent emergence of radiochromic dosimeters with low inherent light-scattering presents the possibility of fast 3D dosimetry using broad-beam optical computed tomography (optical-CT). Current broad beam scanners typically employ either a single or a planar array of light-emitting diodes (LED) for the light source. The spectrum of light from LED sources is polychromatic and this, in combination with the non-uniform spectral absorption of the dosimeter, can introduce spectral artifacts arising from preferential absorption of photons at the peak absorption wavelengths in the dosimeter. Spectral artifacts can lead to large errors in the reconstructed attenuation coefficients, and hence dose measurement. This work presents an analytic method for correcting for spectral artifacts which can be applied if the spectral characteristics of the light source, absorbing dosimeter, and imaging detector are known or can be measured. The method is implemented here for a PRESAGE® dosimeter scanned with the DLOS telecentric scanner (Duke Large field-of-view Optical-CT Scanner). Emission and absorption profiles were measured with a commercial spectrometer and spectrophotometer, respectively. Simulations are presented that show spectral changes can introduce errors of 8% for moderately attenuating samples where spectral artifacts are less pronounced. The correction is evaluated by application to a 16 cm diameter PRESAGE® cylindrical dosimeter irradiated along the axis with two partially overlapping 6 × 6 cm fields of different doses. The resulting stepped dose distribution facilitates evaluation of the correction as each step had different spectral contributions. The spectral artifact correction was found to accurately correct the reconstructed coefficients to within ~1.5%, improved from ~7.5%, for normalized dose distributions. In conclusion, for situations where spectral artifacts cannot be removed by physical filters, the method shown here is an effective correction. Physical
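    The spectral artifact the abstract describes can be reproduced numerically: under a polychromatic source, the apparent attenuation coefficient drifts with path length because the most strongly absorbed wavelengths are depleted first. The three-wavelength spectra and coefficients below are invented toy values, not data from the paper.

```python
import numpy as np

# Hypothetical discretized spectra (illustrative values only):
source   = np.array([0.2, 0.5, 0.3])   # LED emission weights per wavelength
detector = np.array([0.9, 1.0, 0.8])   # imaging detector sensitivity
mu       = np.array([0.30, 0.20, 0.10])  # dosimeter attenuation, 1/cm

w = source * detector
w = w / w.sum()                         # normalized effective spectral weights

def apparent_mu(x_cm):
    """Effective attenuation coefficient a polychromatic scan would report."""
    T = np.sum(w * np.exp(-mu * x_cm))  # polychromatic transmission at depth x
    return -np.log(T) / x_cm

# The apparent coefficient is lower for longer paths: a spectral artifact
# analogous to beam hardening, which the analytic correction removes.
shallow, deep = apparent_mu(1.0), apparent_mu(10.0)
```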

  5. Flat panel X-ray detector with reduced internal scattering for improved attenuation accuracy and dynamic range

    DOEpatents

    Smith, Peter D.; Claytor, Thomas N.; Berry, Phillip C.; Hills, Charles R.

    2010-10-12

    An x-ray detector is disclosed that has had all unnecessary material removed from the x-ray beam path, and all of the remaining material in the beam path made as light and as low in atomic number as possible. The resulting detector is essentially transparent to x-rays and, thus, has greatly reduced internal scatter. The result of this is that x-ray attenuation data measured for the object under examination are much more accurate and have an increased dynamic range. The benefits of this improvement are that beam hardening corrections can be made accurately, that computed tomography reconstructions can be used for quantitative determination of material properties including density and atomic number, and that lower exposures may be possible as a result of the increased dynamic range.

  6. Atmospheric Corrections in Coastal Altimetry

    NASA Astrophysics Data System (ADS)

    Antonita, Maria; Kumar, Raj

    2012-07-01

    The range measurements from the altimeter are associated with a large number of geophysical corrections which need special attention near coasts and in shallow water regions. The corrections due to the ionosphere, the dry and wet troposphere, and the sea state are of primary importance in altimetry. Water vapor dominates the wet tropospheric correction and shows strong spatio-temporal variability, so it needs careful attention near coasts. In addition, rain is one of the major atmospheric phenomena that attenuate the altimeter backscatter measurements, which in turn affects the altimeter-derived wind and wave measurements. Thus during rain events utmost care should be taken when deriving altimeter wind speeds and wave heights. The first objective of the present study is to compare the water vapor corrections estimated from radiosonde measurements near the coastal regions with the model-estimated corrections applied to the altimeter range measurements. The analysis was performed on the Coastal Altimeter products provided by PISTACH to examine these corrections. The second objective is to estimate the rain rate using altimeter backscatter measurements. The differential attenuation of the Ku band relative to the C band due to rain has been utilized to identify rain events and to estimate the amount of rainfall. JASON-2 altimeter data during two tropical cyclonic events over the Bay of Bengal have been used for this purpose. An attempt is made to compare the rain rate estimated from altimeter measurements with other available collocated satellite observations such as KALPANA and TRMM-TMI. The results are encouraging and can be used to provide valid rain flags in the altimeter products in addition to the radiometer rain flags.
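    The dual-band idea above rests on rain attenuating Ku band far more than C band, so the difference in backscatter drop isolates the rain signal, which a power law then maps to rain rate. The sketch below illustrates that inversion with placeholder power-law coefficients; the real coefficients depend on frequency, drop size distribution, and path length and are not taken from the paper.

```python
def rain_rate_from_dual_band(dsigma_ku_db, dsigma_c_db, a=0.022, b=1.10):
    """Toy rain-rate estimate from Ku/C differential attenuation.

    dsigma_*: drop in backscatter (dB) relative to rain-free conditions.
    a, b: illustrative power-law coefficients k = a * R**b (placeholders,
    not values from the study).  Returns a rain rate in mm/h.
    """
    k_diff = dsigma_ku_db - dsigma_c_db   # Ku attenuates far more than C
    if k_diff <= 0:
        return 0.0                        # no differential signal: no rain flag
    return (k_diff / a) ** (1.0 / b)      # invert the power law
```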

  7. Feed-forward digital phase and amplitude correction system

    DOEpatents

    Yu, David U. L.; Conway, Patrick H.

    1994-01-01

    Phase and amplitude modifications in repeatable RF pulses at the output of a high power pulsed microwave amplifier are made utilizing a digital feed-forward correction system. A controlled amount of the output power is coupled to a correction system for processing of phase and amplitude information. The correction system comprises circuitry to compare the detected phase and amplitude with the desired phase and amplitude, respectively, and a digitally programmable phase shifter and attenuator and digital logic circuitry to control the phase shifter and attenuator. The phase and amplitude of subsequent pulses are modified by output signals from the correction system.
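    The compare-and-reprogram step in the patent's correction loop can be sketched as a single update: the measured error of one pulse sets the phase shifter and attenuator for the next. The function name and units below are hypothetical, not from the patent.

```python
from math import log10

def feed_forward_update(meas_phase_deg, meas_amp,
                        target_phase_deg, target_amp,
                        shifter_deg, atten_db):
    """One feed-forward step: reprogram shifter/attenuator for the next pulse.

    Compares the detected phase/amplitude of the current pulse with the
    desired values and returns updated device settings (illustrative sketch).
    """
    shifter_deg += target_phase_deg - meas_phase_deg   # cancel phase error
    atten_db += 20.0 * log10(meas_amp / target_amp)    # trim excess amplitude
    return shifter_deg, atten_db

# A pulse arriving 10 degrees early at the correct amplitude shifts the
# phase setting by -10 degrees and leaves the attenuator unchanged.
phase, atten = feed_forward_update(10.0, 1.0, 0.0, 1.0, 0.0, 0.0)
```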

  8. Feed-forward digital phase and amplitude correction system

    DOEpatents

    Yu, D.U.L.; Conway, P.H.

    1994-11-15

    Phase and amplitude modifications in repeatable RF pulses at the output of a high power pulsed microwave amplifier are made utilizing a digital feed-forward correction system. A controlled amount of the output power is coupled to a correction system for processing of phase and amplitude information. The correction system comprises circuitry to compare the detected phase and amplitude with the desired phase and amplitude, respectively, and a digitally programmable phase shifter and attenuator and digital logic circuitry to control the phase shifter and attenuator. The phase and amplitude of subsequent pulses are modified by output signals from the correction system. 11 figs.

  9. Robust diffraction correction method for high-frequency ultrasonic tissue characterization

    NASA Astrophysics Data System (ADS)

    Raju, Balasundar

    2001-05-01

    The computation of quantitative ultrasonic parameters such as the attenuation or backscatter coefficient requires compensation for diffraction effects. In this work a simple and accurate diffraction correction method for skin characterization requiring only a single focal zone is developed. The advantage of this method is that the transducer need not be mechanically repositioned to collect data from several focal zones, thereby reducing the time of imaging and preventing motion artifacts. Data were first collected under controlled conditions from skin of volunteers using a high-frequency system (center frequency=33 MHz, BW=28 MHz) at 19 focal zones through axial translation. Using these data, mean backscatter power spectra were computed as a function of the distance between the transducer and the tissue, which then served as empirical diffraction correction curves for subsequent data. The method was demonstrated on patients patch-tested for contact dermatitis. The computed attenuation coefficient slope was significantly (p<0.05) lower at the affected site (0.13+/-0.02 dB/mm/MHz) compared to nearby normal skin (0.2+/-0.05 dB/mm/MHz). The mean backscatter level was also significantly lower at the affected site (6.7+/-2.1 in arbitrary units) compared to normal skin (11.3+/-3.2). These results show that diffraction-corrected ultrasonic parameters can differentiate normal from affected skin tissues.
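    The attenuation coefficient slope reported above (in dB/mm/MHz) is typically obtained from the log-spectral ratio of backscatter at two depths after each spectrum is divided by its empirical diffraction correction curve. The sketch below follows that standard recipe under stated assumptions; the inputs stand in for the paper's measured spectra and correction curves.

```python
import numpy as np

def attenuation_slope(freqs_mhz, spec_shallow, spec_deep, dz_mm,
                      dc_shallow, dc_deep):
    """Attenuation coefficient slope (dB/mm/MHz) from two depths.

    spec_*: mean backscatter power spectra at two depths; dc_*: empirical
    diffraction correction curves for the same transducer-tissue distances
    (hypothetical arrays standing in for the measured curves).
    """
    ratio_db = 10.0 * np.log10((spec_shallow / dc_shallow) /
                               (spec_deep / dc_deep))
    atten_db_per_mm = ratio_db / (2.0 * dz_mm)  # 2*dz: round-trip path
    slope, _intercept = np.polyfit(freqs_mhz, atten_db_per_mm, 1)
    return slope
```

On synthetic spectra built with a known 0.2 dB/mm/MHz slope, the fit recovers 0.2 exactly.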

  10. Reconstruction algorithm for polychromatic CT imaging: application to beam hardening correction

    NASA Technical Reports Server (NTRS)

    Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.

    2000-01-01

    This paper presents a new reconstruction algorithm for both single- and dual-energy computed tomography (CT) imaging. By incorporating the polychromatic characteristics of the X-ray beam into the reconstruction process, the algorithm is capable of eliminating beam hardening artifacts. The single energy version of the algorithm assumes that each voxel in the scan field can be expressed as a mixture of two known substances, for example, a mixture of trabecular bone and marrow, or a mixture of fat and flesh. These assumptions are easily satisfied in a quantitative computed tomography (QCT) setting. We have compared our algorithm to three commonly used single-energy correction techniques. Experimental results show that our algorithm is much more robust and accurate. We have also shown that QCT measurements obtained using our algorithm are five times more accurate than those from current QCT systems (using calibration). The dual-energy mode does not require any prior knowledge of the object in the scan field, and can be used to estimate the attenuation coefficient function of unknown materials. We have tested the dual-energy setup to obtain an accurate estimate for the attenuation coefficient function of K2HPO4 solution.
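    The two-material assumption above leads to a simple polychromatic forward model: each voxel is a mixing fraction of bone and marrow, and the detected log-signal integrates the beam spectrum over the per-energy line integrals. A reconstruction that inverts this model, rather than assuming a monochromatic beam, avoids beam hardening. The two-energy-bin spectrum and coefficients below are illustrative, not the paper's values.

```python
import numpy as np

# Hypothetical two-energy-bin beam spectrum and material coefficients
S = np.array([0.6, 0.4])            # normalized polychromatic spectrum
mu_bone   = np.array([0.50, 0.30])  # attenuation in 1/cm at each energy
mu_marrow = np.array([0.20, 0.15])

def poly_projection(fractions, dl_cm=0.1):
    """Polychromatic log-projection along a ray of bone/marrow-mix voxels.

    fractions: bone fraction per voxel (0 = pure marrow, 1 = pure bone).
    """
    # per-voxel, per-energy attenuation from the two-material mixture model
    mu_line = np.outer(fractions, mu_bone) + np.outer(1 - fractions, mu_marrow)
    path = mu_line.sum(axis=0) * dl_cm            # per-energy line integral
    return -np.log(np.sum(S * np.exp(-path)))     # detected log-signal
```

Treating this log-signal as a monochromatic line integral is what produces the beam hardening artifact the algorithm eliminates.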

  11. Planetary Ices Attenuation Properties

    NASA Astrophysics Data System (ADS)

    McCarthy, Christine; Castillo-Rogez, Julie C.

    In this chapter, we review the topic of energy dissipation in the context of icy satellites experiencing tidal forcing. We describe the physics of mechanical dissipation, also known as attenuation, in polycrystalline ice and discuss the history of laboratory methods used to measure and understand it. Because many factors - such as microstructure, composition and defect state - can influence rheological behavior, we review what is known about the mechanisms responsible for attenuation in ice and what can be inferred from the properties of rocks, metals and ceramics. Since attenuation measured in the laboratory must be carefully scaled to geologic time and to planetary conditions in order to provide realistic extrapolation, we discuss various mechanical models that have been used, with varying degrees of success, to describe attenuation as a function of forcing frequency and temperature. We review the literature in which these models have been used to describe dissipation in the moons of Jupiter and Saturn. Finally, we address gaps in our present knowledge of planetary ice attenuation and provide suggestions for future inquiry.

  12. Asymmetric scatter kernels for software-based scatter correction of gridless mammography

    NASA Astrophysics Data System (ADS)

    Wang, Adam; Shapiro, Edward; Yoon, Sungwon; Ganguly, Arundhuti; Proano, Cesar; Colbeth, Rick; Lehto, Erkki; Star-Lack, Josh

    2015-03-01

    Scattered radiation remains one of the primary challenges for digital mammography, resulting in decreased image contrast and visualization of key features. While anti-scatter grids are commonly used to reduce scattered radiation in digital mammography, they are an incomplete solution that can add radiation dose, cost, and complexity. Instead, a software-based scatter correction method utilizing asymmetric scatter kernels is developed and evaluated in this work, which improves upon conventional symmetric kernels by adapting to local variations in object thickness and attenuation that result from the heterogeneous nature of breast tissue. This fast adaptive scatter kernel superposition (fASKS) method was applied to mammography by generating scatter kernels specific to the object size, x-ray energy, and system geometry of the projection data. The method was first validated with Monte Carlo simulation of a statistically-defined digital breast phantom, which was followed by initial validation on phantom studies conducted on a clinical mammography system. Results from the Monte Carlo simulation demonstrate excellent agreement between the estimated and true scatter signal, resulting in accurate scatter correction and recovery of 87% of the image contrast originally lost to scatter. Additionally, the asymmetric kernel provided more accurate scatter correction than the conventional symmetric kernel, especially at the edge of the breast. Results from the phantom studies on a clinical system further validate the ability of the asymmetric kernel correction method to accurately subtract the scatter signal and improve image quality. In conclusion, software-based scatter correction for mammography is a promising alternative to hardware-based approaches such as anti-scatter grids.

  13. A CORRECTION.

    PubMed

    Johnson, D

    1940-03-22

    IN a recently published volume on "The Origin of Submarine Canyons" the writer inadvertently credited to A. C. Veatch an excerpt from a submarine chart actually contoured by P. A. Smith, of the U. S. Coast and Geodetic Survey. The chart in question is Chart IVB of Special Paper No. 7 of the Geological Society of America entitled "Atlantic Submarine Valleys of the United States and the Congo Submarine Valley, by A. C. Veatch and P. A. Smith," and the excerpt appears as Plate III of the volume first cited above. In view of the heavy labor involved in contouring the charts accompanying the paper by Veatch and Smith and the beauty of the finished product, it would be unfair to Mr. Smith to permit the error to go uncorrected. Excerpts from two other charts are correctly ascribed to Dr. Veatch. PMID:17839404

  15. Vortex attenuation flight experiments

    NASA Technical Reports Server (NTRS)

    Barber, M. R.; Hastings, E. C., Jr.; Champine, R. A.; Tymczyszyn, J. J.

    1977-01-01

    Flight tests evaluating the effects of altered span loading, turbulence ingestion, combinations of mass and turbulence ingestion, and combinations of altered span loading and turbulence ingestion on trailed wake vortex attenuation were conducted. Span loadings were altered in flight by varying the deflections of the inboard and outboard flaps on a B-747 aircraft. Turbulence ingestion was achieved in flight by mounting splines on a C-54G aircraft. Mass and turbulence ingestion was achieved in flight by varying the thrust on the B-747 aircraft. Combinations of altered span loading and turbulence ingestion were achieved in flight by installing a spoiler on a CV-990 aircraft and by deflecting the existing spoilers on a B-747 aircraft. The characteristics of the attenuated and unattenuated vortexes were determined by probing them with smaller aircraft. Acceptable separation distances for encounters with the attenuated and unattenuated vortexes are presented.

  16. Radiofrequency attenuator and method

    SciTech Connect

    Warner, Benjamin P.; McCleskey, T. Mark; Burrell, Anthony K.; Agrawal, Anoop; Hall, Simon B.

    2009-01-20

    Radiofrequency attenuator and method. The attenuator includes a pair of transparent windows. A chamber between the windows is filled with molten salt. Preferred molten salts include quaternary ammonium cations and fluorine-containing anions such as tetrafluoroborate (BF.sub.4.sup.-), hexafluorophosphate (PF.sub.6.sup.-), hexafluoroarsenate (AsF.sub.6.sup.-), trifluoromethylsulfonate (CF.sub.3SO.sub.3.sup.-), bis(trifluoromethylsulfonyl)imide ((CF.sub.3SO.sub.2).sub.2N.sup.-), bis(perfluoroethylsulfonyl)imide ((CF.sub.3CF.sub.2SO.sub.2).sub.2N.sup.-) and tris(trifluoromethylsulfonyl)methide ((CF.sub.3SO.sub.2).sub.3C.sup.-). Radicals or radical cations may be added to or electrochemically generated in the molten salt to enhance the RF attenuation.

  17. Radiofrequency attenuator and method

    DOEpatents

    Warner, Benjamin P.; McCleskey, T. Mark; Burrell, Anthony K.; Agrawal, Anoop; Hall, Simon B.

    2009-11-10

    Radiofrequency attenuator and method. The attenuator includes a pair of transparent windows. A chamber between the windows is filled with molten salt. Preferred molten salts include quaternary ammonium cations and fluorine-containing anions such as tetrafluoroborate (BF.sub.4.sup.-), hexafluorophosphate (PF.sub.6.sup.-), hexafluoroarsenate (AsF.sub.6.sup.-), trifluoromethylsulfonate (CF.sub.3SO.sub.3.sup.-), bis(trifluoromethylsulfonyl)imide ((CF.sub.3SO.sub.2).sub.2N.sup.-), bis(perfluoroethylsulfonyl)imide ((CF.sub.3CF.sub.2SO.sub.2).sub.2N.sup.-) and tris(trifluoromethylsulfonyl)methide ((CF.sub.3SO.sub.2).sub.3C.sup.-). Radicals or radical cations may be added to or electrochemically generated in the molten salt to enhance the RF attenuation.

  18. SU-E-I-07: An Improved Technique for Scatter Correction in PET

    SciTech Connect

    Lin, S; Wang, Y; Lue, K; Lin, H; Chuang, K

    2014-06-01

    Purpose: In positron emission tomography (PET), the single scatter simulation (SSS) algorithm is widely used for scatter estimation in clinical scans. However, bias usually occurs at the essential steps of scaling the computed SSS distribution to real scatter amounts by employing the scatter-only projection tail. The bias can be amplified when the scatter-only projection tail is too small, resulting in incorrect scatter correction. To this end, we propose a novel scatter calibration technique to accurately estimate the amount of scatter using a pre-determined scatter fraction (SF) function instead of the scatter-only tail information. Methods: As the SF depends on the radioactivity distribution and the attenuating material of the patient, an accurate theoretical relation cannot be devised. Instead, we constructed an empirical transformation function between SFs and average attenuation coefficients based on a series of phantom studies with different sizes and materials. From the average attenuation coefficient, the predicted SFs were calculated using the empirical transformation function. Hence, the real scatter amount can be obtained by scaling the SSS distribution with the predicted SFs. The simulation was conducted using SimSET. The Siemens Biograph™ 6 PET scanner was modeled in this study. The Software for Tomographic Image Reconstruction (STIR) was employed to estimate the scatter and reconstruct images. The EEC phantom was adopted to evaluate the performance of our proposed technique. Results: The scatter-corrected image of our method demonstrated improved image contrast over that of SSS. For our technique and SSS of the reconstructed images, the normalized standard deviation were 0.053 and 0.182, respectively; the root mean squared errors were 11.852 and 13.767, respectively. Conclusion: We have proposed an alternative method to calibrate SSS (C-SSS) to the absolute scatter amounts using SF. This method can avoid the bias caused by the insufficient
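    The scaling step at the heart of the C-SSS idea is small enough to sketch: given the unscaled SSS sinogram (shape only), the measured prompts, and the SF predicted from the empirical SF-vs-average-attenuation function, the scatter estimate is rescaled so its total equals the predicted fraction of the measured counts. The function and array names are illustrative, not from the paper's software.

```python
import numpy as np

def scale_sss(sss_shape, prompts, predicted_sf):
    """Scale a single-scatter-simulation estimate to absolute scatter counts.

    sss_shape: unscaled SSS scatter sinogram (only its shape matters).
    prompts: measured sinogram.  predicted_sf: scatter fraction obtained
    from the empirical SF function (assumed available from calibration).
    """
    target_counts = predicted_sf * prompts.sum()       # absolute scatter total
    return sss_shape * (target_counts / sss_shape.sum())
```

This replaces the tail-fitting step; the spatial distribution still comes from SSS, only the overall amount is set by the predicted SF.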

  19. The importance of accurate atmospheric modeling

    NASA Astrophysics Data System (ADS)

    Payne, Dylan; Schroeder, John; Liang, Pang

    2014-11-01

    This paper will focus on the effect of atmospheric conditions on EO sensor performance using computer models. We have shown the importance of accurately modeling atmospheric effects when predicting the performance of an EO sensor. A simple example will demonstrate how real conditions at several sites in China significantly impact image correction, hyperspectral imaging, and remote sensing. The current state-of-the-art model for computing atmospheric transmission and radiance is MODTRAN® 5, developed by the US Air Force Research Laboratory and Spectral Science, Inc. Research by the US Air Force, Navy and Army resulted in the public release of LOWTRAN 2 in the early 1970's. Subsequent releases of LOWTRAN and MODTRAN® have continued until the present. The paper will demonstrate the importance of using validated models and locally measured meteorological, atmospheric and aerosol conditions to accurately simulate the atmospheric transmission and radiance. Frequently default conditions are used, which can produce errors of as much as 75% in these values. This can have significant impact on remote sensing applications.

  20. Accurate measurement of unsteady state fluid temperature

    NASA Astrophysics Data System (ADS)

    Jaremkiewicz, Magdalena

    2016-07-01

    In this paper, two accurate methods for determining the transient fluid temperature are presented. Measurements were conducted for boiling water since its temperature is known. At the beginning the thermometers are at ambient temperature; they are then immediately immersed into saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter equal to 15 mm. One of them is a K-type industrial thermometer widely available commercially. The temperature indicated by the thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new thermometer design was also proposed and used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheath thermocouple located in its center. The temperature of the fluid was determined based on measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of the air flowing through the wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results compared with measurements using industrial thermometers in conjunction with a simple temperature correction based on the first- or second-order inertia model. By comparing the results, it was demonstrated that the new thermometer allows obtaining the fluid temperature much faster and with higher accuracy in comparison to the industrial thermometer. Accurate measurements of the fast changing fluid temperature are possible due to the low inertia thermometer and the fast space marching method applied for solving the inverse heat conduction problem.
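    The simple correction the abstract compares against is the first-order inertia model: the thermometer obeys tau * dT/dt + T = T_fluid, so the fluid temperature is recovered by adding tau times the measured rate of change. A minimal sketch, assuming the time constant tau is known from calibration:

```python
import numpy as np

def correct_first_order(t_s, T_meas, tau_s):
    """Recover fluid temperature from a first-order thermometer reading.

    Model: tau * dT/dt + T = T_fluid, hence T_fluid = T + tau * dT/dt.
    t_s: sample times in s; T_meas: indicated temperatures; tau_s: the
    thermometer time constant (assumed known from calibration).
    """
    dTdt = np.gradient(T_meas, t_s)   # finite-difference time derivative
    return T_meas + tau_s * dTdt
```

On a synthetic step response T_meas = 100*(1 - exp(-t/tau)) the correction recovers the 100-degree fluid temperature almost immediately, long before the raw reading settles.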

  1. Tritium Attenuation by Distillation

    SciTech Connect

    Wittman, N.E.

    2001-07-31

    The objective of this study was to determine how a 100 Area distillation system could be used to reduce to a satisfactory low value the tritium content of the dilute moderator produced in the 100 Area stills, and whether such a tritium attenuator would have sufficient capacity to process all this material before it is sent to the 400 Area for reprocessing.

  2. Dead-time Corrected Disdrometer Data

    DOE Data Explorer

    Bartholomew, Mary Jane

    2008-03-05

    Original and dead-time corrected disdrometer results for observations made at SGP and TWP. The correction is based on the technique discussed in Sheppard and Joe, 1994. In addition, these files contain the calculated radar reflectivity factor, mean Doppler velocity, and attenuation for every measurement, for both the original and dead-time corrected data, at the following wavelengths: 0.316, 0.856, 3.2, 5, and 10 cm (W, K, X, C, and S bands). Pavlos Kollias provided the code to do these calculations.
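As a rough illustration of how a radar reflectivity factor is derived from binned drop-size data (the bins and concentrations below are hypothetical; the actual files apply the Sheppard and Joe dead-time correction first), Z in the Rayleigh regime is the sixth-moment sum of the drop-size distribution:

```python
import math

def reflectivity_dbz(diam_mm, conc_per_m3_mm, bin_width_mm):
    """Rayleigh-regime radar reflectivity factor from binned drop-size data:
    Z = sum N(D) * D^6 * dD  (mm^6 m^-3), returned in dBZ."""
    z = sum(n * d**6 * bin_width_mm for d, n in zip(diam_mm, conc_per_m3_mm))
    return 10.0 * math.log10(z)

# Hypothetical three-bin spectrum (drops m^-3 mm^-1).
diams = [0.5, 1.5, 2.5]
concs = [1000.0, 100.0, 5.0]
dbz = reflectivity_dbz(diams, concs, 0.5)
```

The D^6 weighting is why a handful of large drops dominates Z even when small drops vastly outnumber them.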

  3. Imaging Rayleigh Wave Attenuation Beneath North America with USArray

    NASA Astrophysics Data System (ADS)

    Dalton, C. A.; Bao, X.; Jin, G.; Gaherty, J. B.

    2015-12-01

    We present the first whole-continent attenuation model constructed from USArray data; we conclude that future improvements to this model will come from more accurate computation of focusing effects using simulations rather than the observed travel times.

  4. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than in previously treated cases, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross Section Working Group.

  5. Radioactive smart probe for potential corrected matrix metalloproteinase imaging.

    PubMed

    Huang, Chiun-Wei; Li, Zibo; Conti, Peter S

    2012-11-21

    Although various activatable optical probes have been developed to visualize metalloproteinase (MMP) activities in vivo, precise quantification of the enzyme activity is limited due to the inherent scattering and attenuation (limited depth penetration) properties of optical imaging. In this investigation, a novel activatable peptide probe (64)Cu-BBQ650-PLGVR-K(Cy5.5)-E-K(DOTA)-OH was constructed to detect tumor MMP activity in vivo. This agent is optically quenched in its native form, but releases strong fluorescence upon cleavage by selected enzymes. MMP specificity was confirmed both in vitro and in vivo by fluorescent imaging studies. The use of a single modality to image biomarkers/processes may lead to erroneous interpretation of imaging data. The introduction of a quantitative imaging modality, such as PET, would make it feasible to correct the enzyme activity determined from optical imaging. In this proof of principle report, we demonstrated the feasibility of correcting the activatable optical imaging data through the PET signal. This approach provides an attractive new strategy for accurate imaging of MMP activity, which may also be applied for other protease imaging. PMID:23025637

  6. Model based scatter correction for cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Wiegert, Jens; Bertram, Matthias; Rose, Georg; Aach, Til

    2005-04-01

    Scattered radiation is a major source of image degradation and nonlinearity in flat-detector-based cone-beam CT. Due to the larger irradiated volume, the amount of scattered radiation in true cone-beam geometry is considerably higher than for fan-beam CT. On the one hand, this reduces the signal-to-noise ratio, since the additional scattered photons contribute only noise and not measured signal; on the other hand, cupping and streak artifacts arise in the reconstructed volume. Anti-scatter grids composed of lead lamellae and interspacing material decrease the SNR for flat-detector-based CB-CT geometry, because the beneficial scatter-attenuating effect is overcompensated by the absorption of primary radiation. Additionally, due to the high amount of scatter that still remains behind the grid, cupping and streak artifacts cannot be reduced sufficiently. Computerized scatter correction schemes are therefore essential for achieving artifact-free reconstructed images in cone-beam CT. In this work, a fast model-based scatter correction algorithm is proposed, aiming at accurately estimating the level and spatial distribution of the scattered radiation background in each projection. This allows streak and cupping artifacts due to scattering to be effectively reduced in cone-beam CT applications.
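A minimal sketch of the subtraction step common to computerized scatter correction schemes. The blurred-copy scatter estimate, kernel, and 20% scatter fraction below are illustrative assumptions standing in for the paper's model-based estimate:

```python
def estimate_scatter(projection, kernel):
    """1-D convolution scatter estimate: scatter modeled as a broad, low-amplitude
    blurred copy of the measured projection (a crude stand-in for a full model)."""
    n, k = len(projection), len(kernel)
    half = k // 2
    out = []
    for i in range(n):
        s = 0.0
        for j in range(k):
            idx = min(max(i + j - half, 0), n - 1)  # clamp indices at the edges
            s += projection[idx] * kernel[j]
        out.append(s)
    return out

def scatter_corrected(projection, kernel):
    # Subtract the scatter estimate from each detector pixel, clipping at zero.
    scatter = estimate_scatter(projection, kernel)
    return [max(p - s, 0.0) for p, s in zip(projection, scatter)]

# Broad kernel summing to 0.2, i.e. a hypothetical 20% scatter fraction.
kernel = [0.04] * 5
corrected = scatter_corrected([100.0] * 5, kernel)
```

Real model-based algorithms estimate a spatially varying scatter map per projection; this sketch only shows where that estimate enters the correction.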

  7. Mapping Pn amplitude spreading and attenuation in Asia

    SciTech Connect

    Yang, Xiaoning; Phillips, William S; Stead, Richard J

    2010-12-06

    Pn travels most of its path in the mantle lid. Mapping the lateral variation of Pn amplitude attenuation sheds light on the material properties and dynamics of the uppermost mantle. Pn amplitude variation depends on wavefront geometric spreading as well as material attenuation. We investigated Pn geometric spreading, which is much more complex than the traditionally assumed power-law spreading model, using both synthetic and observed amplitude data collected in Asia. We derived a new Pn spreading model based on the formulation that was proposed previously to account for the spherical shape of the Earth (Yang et al., BSSA, 2007). New parameters derived for the spreading model provide much better correction for Pn amplitudes in terms of residual behavior. Because we used observed Pn amplitudes to construct the model, it incorporates not only the effect of the Earth's spherical shape, but also the effect of potential upper-mantle velocity gradients in the region. Using the new spreading model, we corrected Pn amplitudes measured at 1, 2, 4 and 6 Hz and conducted attenuation tomography. The resulting Pn attenuation model correlates well with the regional geology. We see high attenuation in regions such as the northern Tibetan Plateau and the western Pacific subduction zone, and low attenuation in stable blocks such as the Sichuan and Tarim basins.
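For orientation, the traditionally assumed power-law spreading correction that the authors improve upon can be sketched as follows; the exponent and amplitudes are purely illustrative, not values from the study:

```python
def correct_power_law(amplitude, distance_km, ref_km=1.0, exponent=1.3):
    """Undo an assumed power-law geometric spreading A ~ (r/r0)^-m so that the
    remaining amplitude variation reflects material attenuation (and site/source
    terms). The exponent here is an illustrative placeholder."""
    return amplitude * (distance_km / ref_km) ** exponent

# A hypothetical Pn amplitude observed at 500 km, restored to the reference distance.
restored = correct_power_law(2.5e-6, 500.0)
```

The paper's point is that a single power law of this form fits Pn poorly on a spherical Earth with upper-mantle velocity gradients, which is why an empirically parametrized spreading model is substituted before the attenuation tomography.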

  8. Monitored natural attenuation.

    PubMed

    Jørgensen, Kirsten S; Salminen, Jani M; Björklöf, Katarina

    2010-01-01

    Monitored natural attenuation (MNA) is an in situ remediation technology that relies on naturally occurring and demonstrable processes in soil and groundwater which reduce the mass and concentration of the contaminants. Natural attenuation (NA) involves both aerobic and anaerobic degradation of the contaminants, because oxygen is used up near the core of the contaminant plume. The aerobic and anaerobic microbial processes can be assessed by microbial activity measurements and molecular biology methods in combination with chemical analyses. Careful sampling and knowledge of the site conditions are of major importance for linking the results obtained to the conditions in situ. Rates obtained from activity measurements can, with certain limitations, be used in modeling the fate of contaminants, whereas most molecular methods mainly give qualitative information on the microbial community and gene abundances. However, molecular biology methods are fast, describe the in situ communities, and avoid the biases inherent in activity assays that require laboratory incubations.

  9. Effects of elastic focusing on global models of Rayleigh wave attenuation

    NASA Astrophysics Data System (ADS)

    Bao, Xueyang; Dalton, Colleen A.; Ritsema, Jeroen

    2016-11-01

    Rayleigh wave amplitudes are the primary data set used for imaging shear attenuation in the upper mantle on a global scale. In addition to attenuation, surface-wave amplitudes are influenced by excitation at the earthquake source, focusing and scattering by elastic heterogeneity, and local structure at the receiver and the instrument response. The challenge of isolating the signal of attenuation from these other effects limits both the resolution of global attenuation models and the level of consistency between different global attenuation studies. While the source and receiver terms can be estimated using relatively simple approaches, focusing effects on amplitude are a large component of the amplitude signal and are sensitive to multiscale velocity anomalies. In this study we investigate how different theoretical treatments for focusing effects on Rayleigh wave amplitude influence the retrieved attenuation models. A new data set of fundamental-mode Rayleigh wave phase and amplitude at periods of 50 and 100 s is analysed. The amplitudes due to focusing effects are predicted using the great-circle ray approximation (GCRA), exact ray theory (ERT), and finite-frequency theory (FFT). Phase-velocity maps expanded to spherical-harmonic degree 20 and degree 40 are used for the predictions. After correction for focusing effects, the amplitude data are inverted for global attenuation maps and frequency-dependent source and receiver correction factors. The degree-12 attenuation maps, based on different corrections for focusing effects, all contain the same large-scale features, though the magnitude of the attenuation variations depends on the focusing correction. The variance reduction of the amplitudes strongly depends on the predicted focusing amplitudes, with the highest variance reduction for the ray-based approaches at 50 s and for FFT at 100 s. Although failure to account for focusing effects introduces artefacts into the attenuation models at higher spherical

  10. Fluid dynamic bowtie attenuators

    NASA Astrophysics Data System (ADS)

    Szczykutowicz, Timothy P.; Hermus, James

    2015-03-01

    Fluence-field-modulated CT allows for improvements in image quality and dose reduction. To date, only 1-D modulators have been proposed; the extension to 2-D modulation is difficult with solid-metal attenuation-based modulators. This work proposes to use liquids and gas to attenuate the x-ray beam, which can be arrayed to allow 2-D fluence modulation. The thickness of liquid, and the pressure for a given path length of gas, were determined that provide the same attenuation as 30 cm of soft tissue at 80, 100, 120, and 140 kV. Gaseous xenon and liquid iodine, zinc chloride, and cerium chloride were studied. Additionally, we performed proof-of-concept experiments in which (1) a single cell of liquid was connected to a reservoir that allowed the liquid thickness to be modulated and (2) a 96-cell array was constructed in which the liquid thickness in each cell was adjusted manually. Liquid thickness varied as a function of kV and chemical composition, with zinc chloride allowing the smallest thickness: 1.8, 2.25, 3, and 3.6 cm compensated for 30 cm of soft tissue at 80, 100, 120, and 140 kV, respectively. The 96-cell iodine attenuator allowed for a reduction in both the dynamic range at the detector and the scatter-to-primary ratio. Successful modulation of a single cell was performed at 0, 90, and 130 degrees using a simple piston/actuator. The thicknesses of the liquids and the xenon gas pressure appear logistically implementable within the constraints of CBCT and diagnostic CT systems.
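The tissue-equivalent thickness calculation implied above follows from matching Beer-Lambert attenuation along the two paths. A sketch with hypothetical effective attenuation coefficients (the real values are energy- and material-dependent):

```python
def equivalent_thickness_cm(mu_tissue, mu_liquid, tissue_cm=30.0):
    """Liquid thickness with the same Beer-Lambert attenuation as a tissue path:
    exp(-mu_liquid * t) = exp(-mu_tissue * tissue_cm)
      =>  t = mu_tissue * tissue_cm / mu_liquid."""
    return mu_tissue * tissue_cm / mu_liquid

# Hypothetical effective attenuation coefficients (cm^-1) at one tube voltage.
t_cm = equivalent_thickness_cm(0.2, 3.0)  # a few cm of liquid replaces 30 cm of tissue
```

This monoenergetic matching is only approximate for a polychromatic beam, which is why the paper reports different equivalent thicknesses at each kV.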

  11. Device accurately measures and records low gas-flow rates

    NASA Technical Reports Server (NTRS)

    Branum, L. W.

    1966-01-01

    A free-floating piston in a vertical column accurately measures and records low gas-flow rates. The system may be calibrated using an adjustable flow-rate gas supply, a low-pressure gage, and a sequence recorder. From the calibration rates, a nomograph may be made for easy data reduction. A temperature correction may be added for further accuracy.

  12. Simultaneous iterative reconstruction for emission and attenuation images in positron emission tomography.

    PubMed

    Glatting, G; Wuchenauer, M; Reske, S N

    2000-09-01

    The quality of the attenuation correction strongly influences the outcome of the reconstructed emission scan in positron emission tomography. Usually the attenuation correction factors are calculated from the transmission and blank scans and thereafter applied to the emission data during reconstruction. However, this is not an optimal treatment of the available data, because the emission data themselves contain additional information about attenuation: the optimal treatment must use this information in determining the attenuation correction factors. Therefore, our purpose is to investigate a simultaneous emission and attenuation image reconstruction using a maximum likelihood estimator, which takes the attenuation information in the emission data into account. The total maximum likelihood function for emission and transmission is used to derive a one-dimensional Newton-like algorithm for the calculation of the emission and attenuation images. Log-likelihood convergence, mean differences, and the mean of squared differences for the emission image and the attenuation correction factors of a mathematical thorax phantom were determined and compared. As a result we obtain images improved with respect to log likelihood in all cases, and with respect to our figures of merit in most cases. We conclude that simultaneous reconstruction can improve the performance of image reconstruction.
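The conventional (non-simultaneous) treatment the authors improve upon — attenuation correction factors from the blank and transmission scans, applied bin-by-bin to the emission sinogram — can be sketched as:

```python
def attenuation_correction_factors(blank, transmission):
    """Per-line-of-response ACFs: ratio of blank-scan to transmission-scan counts."""
    return [b / t for b, t in zip(blank, transmission)]

def correct_emission(emission, acf):
    """Apply the ACFs to the measured emission sinogram bin-by-bin."""
    return [e * a for e, a in zip(emission, acf)]

# Two illustrative sinogram bins: the second line of response is more attenuated.
blank = [1000.0, 1000.0]
trans = [500.0, 250.0]
em = [40.0, 10.0]
corrected = correct_emission(em, attenuation_correction_factors(blank, trans))
```

Noise in the transmission scan propagates directly into the ACFs and hence into the corrected emission data, which is part of the motivation for the simultaneous maximum-likelihood approach described in the abstract.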

  13. Accurate metacognition for visual sensory memory representations.

    PubMed

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.

  15. Study of Dual-Wavelength PIA Estimates: Ratio of Ka- and Ku-band Attenuations Obtained from Measured DSD Data

    NASA Astrophysics Data System (ADS)

    Liao, L.; Meneghini, R.; Tokay, A.

    2014-12-01

    Accurate attenuation corrections to the measurements of the Ku- and Ka-band dual-wavelength precipitation radar (DPR) aboard the Global Precipitation Measurement (GPM) satellite are crucial for estimates of precipitation rate and the microphysical properties of hydrometeors. The surface reference technique (SRT) provides a means to infer path-integrated attenuation (PIA) by comparing differences in normalized surface cross sections (σ0) between rain and rain-free areas. Although the single-wavelength SRT has been widely used in attenuation correction for airborne/spaceborne radar applications, its accuracy depends on the variance of σ0 in the rain-free region. The dual-wavelength surface reference technique (DSRT) has shown promise for improving the accuracy of PIA estimates over the single-wavelength approach, because the variance of the difference in PIA between the two wavelengths (δPIA) is much smaller than the variance of σ0 at a single wavelength, owing to the high correlation of σ0 between the Ku and Ka bands. However, deriving the PIA at either wavelength from the DSRT requires an assumption about the ratio of the Ka- and Ku-band PIAs (p). An inappropriate assumption for this ratio will bias the PIA estimates. In this study the ratio p is investigated through measured DSD data. The attenuation coefficients at Ku and Ka bands are first computed directly from measured DSD spectra, and regression analysis is then performed on the data points (Ku- and Ka-band attenuation coefficients) to obtain p values for rain. Taking advantage of the large collection of DSD measurements from various GPM Ground Validation (GPM GV) programs, the results for the ratio p are examined across different climatological regimes. Because the PIA is affected by all types of hydrometeors contained in the columns of radar measurements, synthetic profiles composed of different types of hydrometeors are constructed using measured DSD to examine the impacts of different-phase hydrometeors on the p values. To
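The through-origin regression for the ratio p can be sketched as follows; the paired specific attenuations below are hypothetical stand-ins for the DSD-derived coefficients:

```python
def fit_ratio_through_origin(k_ku, k_ka):
    """Least-squares slope p of k_Ka = p * k_Ku for a line through the origin,
    mimicking the regression of DSD-derived attenuation coefficients."""
    num = sum(x * y for x, y in zip(k_ku, k_ka))
    den = sum(x * x for x in k_ku)
    return num / den

# Hypothetical paired specific attenuations (dB/km) from DSD spectra.
ku = [0.1, 0.2, 0.4, 0.8]
ka = [0.55, 1.1, 2.2, 4.4]
p = fit_ratio_through_origin(ku, ka)
```

With p in hand, δPIA from the DSRT can be converted to a single-band PIA via PIA_Ku = δPIA / (p − 1), which is why a biased p translates directly into biased attenuation corrections.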

  16. Accurate Measurement of Bone Density with QCT

    NASA Technical Reports Server (NTRS)

    Cleek, Tammy M.; Beaupre, Gary S.; Matsubara, Miki; Whalen, Robert T.; Dalton, Bonnie P. (Technical Monitor)

    2002-01-01

    The objective of this study was to determine the accuracy of bone density measurement with a new QCT technology. A phantom was fabricated using two materials, a water-equivalent compound and hydroxyapatite (HA), combined in precise proportions (QRM GmbH, Germany). The phantom was designed to have the approximate physical size and range in bone density of a human calcaneus, with regions of 0, 50, 100, 200, 400, and 800 mg/cc HA. The phantom was scanned at 80, 120 and 140 kVp with a GE CT/i HiSpeed Advantage scanner. A ring of highly attenuating material (polyvinyl chloride or teflon) was slipped over the phantom to alter the image by introducing non-axisymmetric beam hardening. Images were corrected with a new QCT technology using an estimate of the effective X-ray beam spectrum to eliminate beam hardening artifacts. The algorithm computes the volume fraction of HA and water-equivalent matrix in each voxel. We found excellent agreement between expected and computed HA volume fractions. Results were insensitive to beam-hardening ring material, HA concentration, and scan voltage settings. Data from all three voltages with a best-fit linear regression are displayed.

  17. Broadband Lg Attenuation Modeling in the Middle East

    SciTech Connect

    Pasyanos, M E; Matzel, E M; Walter, W R; Rodgers, A J

    2008-08-21

    We present a broadband tomographic model of Lg attenuation in the Middle East derived from source- and site-corrected amplitudes. Absolute amplitude measurements are made on hand-selected and carefully windowed seismograms for tens of stations and thousands of crustal earthquakes, resulting in excellent coverage of the region. A conjugate gradient method is used to tomographically invert the amplitude dataset of over 8000 paths over a 45° x 40° region of the Middle East. We solve for Q variation, as well as site and source terms, for a wide range of frequencies from 0.5-10 Hz. We have modified the standard attenuation tomography technique to more explicitly define the earthquake source expression in terms of the seismic moment. This facilitates the use of the model to predict the expected amplitudes of new events, an important consideration for earthquake hazard or explosion monitoring applications. The attenuation results correlate strongly with tectonics. Shields have low attenuation, while tectonic regions have high attenuation, with the highest attenuation at 1 Hz found in eastern Turkey. The results also compare favorably to other studies in the region made using Lg propagation efficiency, Lg/Pg amplitude ratios, and two-station methods. We tomographically invert the amplitude measurements for each frequency independently. In doing so, it appears that the frequency dependence of attenuation is not compatible with the power-law representation of Q(f), an assumption that is often made.

  18. Ultrasonic attenuation in pearlitic steel.

    PubMed

    Du, Hualong; Turner, Joseph A

    2014-03-01

    Expressions for the attenuation coefficients of longitudinal and transverse ultrasonic waves are developed for steel with pearlitic microstructure. This type of lamellar duplex microstructure influences attenuation because of the lamellar spacing. In addition, longitudinal attenuation measurements were conducted using an unfocused transducer with 10 MHz central frequency on the cross section of a quenched railroad wheel sample. The dependence of longitudinal attenuation on the pearlite microstructure is observed from the changes of longitudinal attenuation from the quenched tread surface to deeper locations. The results show that the attenuation value is lowest and relatively constant within the quench depth, then increases linearly. The experimental results demonstrate a reasonable agreement with results from the theoretical model. Ultrasonic attenuation provides an important non-destructive method to evaluate duplex microstructure within grains which can be implemented for quality control in conjunction with other manufacturing processes.
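A common way to estimate longitudinal attenuation from successive backwall echoes is sketched below. This is a generic textbook-style calculation, not necessarily the authors' exact procedure, and it ignores diffraction and interface losses:

```python
import math

def attenuation_db_per_mm(a1, a2, thickness_mm):
    """Attenuation coefficient from two successive backwall echo amplitudes;
    the extra round trip between echoes spans 2 * thickness."""
    return 20.0 * math.log10(a1 / a2) / (2.0 * thickness_mm)

# Hypothetical echoes: the second echo is half the amplitude of the first
# after one extra round trip through a 10 mm sample.
alpha = attenuation_db_per_mm(1.0, 0.5, 10.0)
```

In a pearlitic microstructure, alpha measured this way varies with depth as the interlamellar spacing changes, which is the depth dependence the abstract describes across the quenched wheel tread.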

  19. A design of backing seat and gasket assembly in diamond anvil cell for accurate single crystal x-ray diffraction to 5 GPa

    NASA Astrophysics Data System (ADS)

    Komatsu, K.; Kagi, H.; Yasuzuka, T.; Koizumi, T.; Iizuka, R.; Sugiyama, K.; Yokoyama, Y.

    2011-10-01

    We designed a new cell assembly of diamond anvil cells for single-crystal x-ray diffraction under pressure and demonstrate its application to crystallographic studies of ice VI and the ethanol high-pressure (HP) phase at 0.95(5) GPa and 1.95(2) GPa, respectively. The features of the assembly are: (1) the platy anvil and uniquely shaped backing seat (called the "Wing seat"), allowing an extremely wide opening angle of up to ±65°; (2) the PFA-bulk metallic glass composite gasket, allowing easy attenuation correction and lower background. With this assembly, the Rint values after attenuation corrections are fairly good (0.0125 and 0.0460 for ice VI and the ethanol HP phase, respectively), and the errors of the refined parameters are satisfactorily small even for hydrogen positions, comparable to results obtained at ambient conditions. The result for ice VI is in excellent agreement with the previous study, and that for the ethanol HP phase contributes substantially to a revision of its structure; the H12 site, which makes gauche molecules with the O1, C2, and C3 sites, may not exist, so that only trans conformers are present, at least at 1.95(2) GPa. The accurate intensities obtained with the cell assembly allow us to extract the electron density of the ethanol HP phase by the maximum entropy method.

  20. A high-impedance attenuator for measurement of high-voltage nanosecond-range pulses.

    PubMed

    Yu, Binxiong; Liu, Jinliang; Zhang, Tianyang; Hong, Zhiqiang

    2013-05-01

    A novel kind of high-impedance cable attenuator for the measurement of high-voltage ns-range pulses is investigated in this paper. The input and output ports of the proposed attenuator are both high-impedance ports, and good pulse response characteristics were obtained, with a pulse response time of less than 1 ns. According to the measurement requirements, two attenuators with lengths of 14 m and 0.7 m were developed, with response times of 1 ns and 20 ns and attenuation coefficients of 96 and 33.5, respectively. The attenuator with the length of 14 m was used as a secondary-stage attenuator of a capacitive divider to measure high-voltage pulses in the several-hundred-ns range. The waveform was improved by the proposed attenuator compared with the result measured by the same capacitive divider and a long cable line directly. The 0.7 m attenuator was also used as a secondary-stage attenuator of a standard resistive divider for accurate measurement of high-voltage pulses in the 100 ns range. The proposed cable attenuator can be used to substitute for traditional secondary-stage attenuators in the measurement of high-voltage pulses.

  1. Accurate shear measurement with faint sources

    SciTech Connect

    Zhang, Jun; Foucaud, Sebastien; Luo, Wentao E-mail: walt@shao.ac.cn

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work in this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of the galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finite number of source photons. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images with short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from source Poisson noise. Our noise treatment can be generalized to images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent-level accuracy even for images with a signal-to-noise ratio of less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large-scale galaxy surveys.

  2. Universality of quantum gravity corrections.

    PubMed

    Das, Saurya; Vagenas, Elias C

    2008-11-28

    We show that the existence of a minimum measurable length and the related generalized uncertainty principle (GUP), predicted by theories of quantum gravity, influence all quantum Hamiltonians. Thus, they predict quantum gravity corrections to various quantum phenomena. We compute such corrections to the Lamb shift, the Landau levels, and the tunneling current in a scanning tunneling microscope. We show that these corrections can be interpreted in two ways: (a) either that they are exceedingly small, beyond the reach of current experiments, or (b) that they predict upper bounds on the quantum gravity parameter in the GUP, compatible with experiments at the electroweak scale. Thus, more accurate measurements in the future should either be able to test these predictions, or further tighten the above bounds and predict an intermediate length scale between the electroweak and the Planck scale.
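For background, the minimal-length GUP commonly considered in such analyses takes the quadratic form below (conventions vary between papers; this is general background rather than necessarily the paper's exact expression):

```latex
[x, p] = i\hbar \left( 1 + \beta p^{2} \right), \qquad
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2} \left( 1 + \beta \, (\Delta p)^{2} \right)
```

Minimizing the right-hand side over \(\Delta p\) gives a minimum measurable length \(\Delta x_{\min} = \hbar\sqrt{\beta}\); since the modified commutator changes every Hamiltonian expressed in terms of \(p\), all quantum systems acquire \(\beta\)-dependent corrections, which is the universality claim of the abstract.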

  3. Practical correction procedures for elastic electron scattering effects in ARXPS

    NASA Astrophysics Data System (ADS)

    Lassen, T. S.; Tougaard, S.; Jablonski, A.

    2001-06-01

    Angle-resolved XPS and AES (ARXPS and ARAES) are widely used for determining the in-depth distribution of elements in the surface region of solids. It is well known that elastic electron scattering has a significant effect on the intensity as a function of emission angle, and that this greatly influences the overlayer thicknesses determined by this method. However, the procedures commonly applied in ARXPS and ARAES generally neglect this effect, because no simple and practical correction procedure has been available. Recently, however, new algorithms have been suggested. In this paper, we study the efficiency of these algorithms in correcting for elastic scattering effects in the interpretation of ARXPS and ARAES. This is done by first calculating electron distributions by Monte Carlo simulations for well-defined overlayer/substrate systems and then applying the different algorithms. We have found that an analytical formula based on a solution of the Boltzmann transport equation gives a good account of elastic scattering effects. However, this procedure is computationally very slow and the underlying algorithm is complicated. Another, much simpler algorithm, proposed by Nefedov and coworkers, was also tested. Three different ways of handling the scattering parameters within this model were tested, and it was found that this algorithm also gives a good description of elastic scattering effects, provided that it is slightly modified to take into account the differences in the transport properties of the substrate and the overlayer. This procedure is fairly simple and is described in detail. The model gives a much more accurate description than the traditional straight-line approximation (SLA). However, it is also found that when attenuation lengths are used instead of inelastic mean free paths in the simple SLA formalism, the effects of elastic scattering are also reasonably well accounted for. Specifically, from a systematic study of several
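The straight-line-approximation thickness determination referred to above can be sketched as follows; substituting the attenuation length L for the inelastic mean free path, as the abstract suggests, partially accounts for elastic scattering. The signal values below are hypothetical:

```python
import math

def overlayer_thickness(i_sub, i_sub_clean, att_len_nm, theta_deg):
    """Straight-line-approximation (SLA) overlayer thickness from the attenuated
    substrate signal: I = I0 * exp(-d / (L * cos(theta)))
      =>  d = -L * cos(theta) * ln(I / I0)."""
    cos_t = math.cos(math.radians(theta_deg))
    return -att_len_nm * cos_t * math.log(i_sub / i_sub_clean)

# Hypothetical case: substrate signal halved at normal emission, L = 3 nm.
d_nm = overlayer_thickness(0.5, 1.0, 3.0, 0.0)
```

The cos(theta) factor is what makes angle-resolved measurements depth-sensitive: grazing emission shortens the effective escape depth, so the same overlayer attenuates the substrate signal more strongly.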

  4. Digitally Controlled Beam Attenuator

    NASA Astrophysics Data System (ADS)

    Peppler, W. W.; Kudva, B.; Dobbins, J. T.; Lee, C. S.; Van Lysel, M. S.; Hasegawa, B. H.; Mistretta, C. A.

    1982-12-01

    In digital fluorographic techniques the video camera must accommodate a wide dynamic range due to the large variation in subject thickness within the field of view. Typically, exposure factors and the optical aperture are selected such that the maximum video signal is obtained in the most transmissive region of the subject. Consequently, it has been shown that the signal-to-noise ratio is severely reduced in the dark regions. We have developed a prototype digital beam attenuator (DBA) which will alleviate this and some related problems in digital fluorography. The prototype DBA consists of a 6x6 array of pistons which are individually controlled. A membrane containing an attenuating solution of cerium chloride (CeCl3) in water and the piston matrix are placed between the x-ray tube and the subject. Under digital control, the pistons are moved into the attenuating material in order to adjust the beam intensity over each of the 36 cells. The DBA control unit, which digitizes the image during patient positioning, directs the pistons under hydraulic control to produce a uniform x-ray field exiting the subject. The pistons were designed to produce very little structural background in the image; in subtraction studies any structure would be cancelled. For non-subtraction studies such as cine-cardiology we are considering higher cell densities (e.g., 64x64). Due to the narrow range of transmission provided by the DBA, in such studies ultra-high-contrast films could be used to produce a high-resolution quasi-subtraction display. Additional benefits of the DBA are: (1) reduced dose to the bright image areas when the dark areas are properly exposed; (2) improved scatter and glare-to-primary ratios, leading to improved contrast in the dark areas.
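The per-cell piston setting that flattens the exiting fluence can be sketched with a Beer-Lambert model. The attenuation coefficient and positioning-image intensities below are hypothetical illustration values:

```python
import math

def piston_thicknesses(cell_intensities, mu_per_cm):
    """Attenuating-liquid thickness per cell that equalizes the exit fluence to
    the darkest cell: exp(-mu * t) * I = I_min  =>  t = ln(I / I_min) / mu."""
    i_min = min(cell_intensities)
    return [math.log(i / i_min) / mu_per_cm for i in cell_intensities]

# Hypothetical positioning-image intensities for a 4-cell strip;
# mu = 2 cm^-1 is an assumed effective coefficient for the solution.
ts = piston_thicknesses([100.0, 400.0, 200.0, 100.0], 2.0)
```

Cells behind the most transmissive (brightest) regions get the thickest liquid column, while cells already at the minimum intensity need none, which is how the narrow exit-transmission range described in the abstract is achieved.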

  5. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one intermediate state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely, Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
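The median function mentioned above makes limiter code very compact. A minimal sketch of a generic MUSCL-type monotone slope built from it (a standard limiter used for illustration, not necessarily the paper's exact constraint):

```python
def minmod(x, y):
    """Zero when x and y have opposite signs; otherwise the argument of
    smaller magnitude."""
    if x * y <= 0.0:
        return 0.0
    return x if abs(x) < abs(y) else y

def median(a, b, c):
    """Median of three values, written in the compact form used to code
    monotonicity constraints: project a onto the interval spanned by b, c."""
    return a + minmod(b - a, c - a)

def limited_slope(u_left, u_mid, u_right):
    """A generic monotone slope for the reconstruction step of a
    MUSCL-type scheme on cell averages (u_left, u_mid, u_right)."""
    return minmod(u_mid - u_left, u_right - u_mid)
```

At a local extremum the slope collapses to zero, which is what preserves monotonicity in the linear-convection sense the abstract refers to.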

  6. 77 FR 72199 - Technical Corrections; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-05

    ...) is correcting a final rule that was published in the Federal Register on July 6, 2012 (77 FR 39899), and effective on August 6, 2012. That final rule amended the NRC regulations to make technical... COMMISSION 10 CFR Part 171 RIN 3150-AJ16 Technical Corrections; Correction AGENCY: Nuclear...

  7. Profitable capitation requires accurate costing.

    PubMed

    West, D A; Hicks, L L; Balas, E A; West, T D

    1996-01-01

    In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis, while more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average-cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as is demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing (ABC) methods. Nurses must participate in this costing process to ensure that capitation bids are based upon accurate costs rather than simple averages. PMID:8788799

  8. Radar attenuation and temperature within the Greenland Ice Sheet

    USGS Publications Warehouse

    MacGregor, Joseph A; Li, Jilu; Paden, John D; Catania, Ginny A; Clow, Gary D.; Fahnestock, Mark A; Gogineni, Prasad S.; Grimm, Robert E.; Morlighem, Mathieu; Nandi, Soumyaroop; Seroussi, Helene; Stillman, David E

    2015-01-01

    The flow of ice is temperature-dependent, but direct measurements of englacial temperature are sparse. The dielectric attenuation of radio waves through ice is also temperature-dependent, and radar sounding of ice sheets is sensitive to this attenuation. Here we estimate depth-averaged radar-attenuation rates within the Greenland Ice Sheet from airborne radar-sounding data and its associated radiostratigraphy. Using existing empirical relationships between temperature, chemistry, and radar attenuation, we then infer the depth-averaged englacial temperature. The dated radiostratigraphy permits a correction for the confounding effect of spatially varying ice chemistry. Where radar transects intersect boreholes, radar-inferred temperature is consistently higher than that measured directly. We attribute this discrepancy to the poorly recognized frequency dependence of the radar-attenuation rate and correct for this effect empirically, resulting in a robust relationship between radar-inferred and borehole-measured depth-averaged temperature. Radar-inferred englacial temperature is often lower than modern surface temperature and that of a steady state ice-sheet model, particularly in southern Greenland. This pattern suggests that past changes in surface boundary conditions (temperature and accumulation rate) affect the ice sheet's present temperature structure over a much larger area than previously recognized. This radar-inferred temperature structure provides a new constraint for thermomechanical models of the Greenland Ice Sheet.
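The inference step above, from a depth-averaged attenuation rate to a depth-averaged temperature, can be illustrated with an Arrhenius-type model for the conductivity of ice. The reference attenuation, reference temperature, and activation energy below are illustrative pure-ice values, not the chemistry- and frequency-corrected relationship fit in the study:

```python
import math

K_BOLTZ_EV = 8.617e-5  # Boltzmann constant in eV/K

def temperature_from_attenuation(att_db_per_km, att_ref_db_per_km=9.2,
                                 t_ref_k=251.0, activation_ev=0.51):
    """Invert an Arrhenius-type attenuation-temperature relation,
        a(T) = a_ref * exp((E/k) * (1/T_ref - 1/T)),
    for the depth-averaged temperature T in kelvin.  All parameter
    defaults are illustrative pure-ice assumptions."""
    inv_t = 1.0 / t_ref_k - (K_BOLTZ_EV / activation_ev) * math.log(
        att_db_per_km / att_ref_db_per_km)
    return 1.0 / inv_t
```

Higher measured attenuation maps to warmer inferred ice, which is why an unrecognized frequency-dependent bias in the attenuation rate, as the abstract notes, biases the inferred temperature.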

  9. Towards a Global Upper Mantle Attenuation Model

    NASA Astrophysics Data System (ADS)

    Karaoglu, Haydar; Romanowicz, Barbara

    2015-04-01

    Global anelastic tomography is crucial for addressing the nature of heterogeneity in the Earth's interior. Intrinsic attenuation manifests itself through dispersion and amplitude decay. These are contaminated by elastic effects such as (de)focusing and scattering. Therefore, mapping anelasticity accurately requires separating elastic effects from anelastic ones. To achieve this, a possible approach is first to predict elastic effects through the computation of seismic waveforms in a high-resolution 3D elastic model, which can now be done accurately using numerical wavefield computations. Building upon the recent construction of such a whole-mantle elastic and radially anisotropic shear velocity model (SEMUCB_WM1; French and Romanowicz, 2014), which will be used as the starting model, our goal is to develop a higher-resolution 3D attenuation model of the upper mantle based on full waveform inversion. As in the development of SEMUCB_WM1, forward modeling will be performed using the spectral element method, while the inverse problem will be treated approximately, using normal mode asymptotics. Both fundamental-mode and overtone time-domain long-period waveforms (T > 60 s) will be used from a dataset of over 200 events observed at several hundred stations globally. Here we present preliminary results of synthetic tests, exploring different iterative inversion strategies.

  10. Ultrasonic Attenuation in Zircaloy-4

    SciTech Connect

    Gomez, M.P.; Banchik, A.D.; Lopez Pumarega, M.I.; Ruzzante, J.E.

    2005-04-09

    In this work the relationship between Zircaloy-4 grain size and ultrasonic attenuation behavior was studied for longitudinal waves in the frequency range of 10-90 MHz. The attenuation was analyzed as a function of frequency for samples with different mechanical and heat treatments having recrystallized and Widmanstatten structures with different grain size. The attenuation behavior was analyzed by different scattering models, depending on grain size, wavelength and frequency.
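The scattering models the abstract alludes to are conventionally selected by the ratio of wavelength to grain size. A rough classification sketch (the factor-of-ten thresholds are the conventional order-of-magnitude boundaries, not values from this work):

```python
def attenuation_regime(velocity_mm_per_us, grain_size_mm, freq_mhz):
    """Classify the grain-scattering regime for longitudinal waves by the
    wavelength-to-grain-size ratio.  Since MHz = 1/us, velocity in mm/us
    divided by frequency in MHz gives wavelength in mm."""
    wavelength_mm = velocity_mm_per_us / freq_mhz
    ratio = wavelength_mm / grain_size_mm
    if ratio > 10.0:
        return "Rayleigh"      # attenuation ~ D**3 * f**4
    if ratio > 1.0:
        return "stochastic"    # attenuation ~ D * f**2
    return "diffusive"         # attenuation ~ 1/D, frequency independent
```

With a longitudinal velocity near 4.7 mm/us (an assumed, typical value for zirconium alloys), fine-grained samples at 10 MHz sit in the Rayleigh regime, while coarse Widmanstatten structures at 90 MHz can leave it, which is why the study needed different models across its grain sizes and frequencies.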

  11. Chopping-Wheel Optical Attenuator

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B.

    1988-01-01

    Star-shaped rotating chopping wheel provides adjustable time-averaged attenuation of narrow beam of light without changing length of optical path or spectral distribution of light. Duty cycle or attenuation factor of chopped beam controlled by adjusting radius at which beam intersects wheel. Attenuation factor independent of wavelength. Useful in systems in which chopping frequency above frequency-response limits of photodetectors receiving chopped light. Used in systems using synchronous detection with lock-in amplifiers.
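The duty-cycle arithmetic behind this device is simple. A minimal sketch, assuming the open and blocked arcs at the radius where the beam crosses the wheel are known:

```python
import math

def chopped_attenuation_db(open_arc_deg, period_arc_deg):
    """Time-averaged attenuation, in dB, of a beam chopped by a rotating
    star wheel.  open_arc_deg is the transmitting arc at the chosen
    radius; period_arc_deg is the arc of one tooth-plus-gap period.
    Their ratio is the duty cycle, which is independent of wavelength."""
    duty = open_arc_deg / period_arc_deg
    return -10.0 * math.log10(duty)
```

Moving the beam to a radius where the open arc is one tenth of the period gives 10 dB of attenuation; at a radius where the wheel is fully open the attenuation is zero.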

  12. Attenuation of laser generated ultrasound in steel at high temperatures; comparison of theory and experimental measurements.

    PubMed

    Kube, Christopher M

    2016-08-01

    This article reexamines some recently published laser ultrasound measurements of the longitudinal attenuation coefficient obtained during annealing of two steel samples (DP600 and S550). Theoretical attenuation models based on perturbation theory are compared to these experimental measurements. It is observed that the Rayleigh attenuation formulas provide the correct qualitative agreement, but overestimate the experimental values. The more general theoretical attenuation model considered here demonstrates strong quantitative agreement, which highlights the applicability of the model during real-time metal processing. PMID:27235777

  13. Atmospheric extinction in solar tower plants: absorption and broadband correction for MOR measurements

    NASA Astrophysics Data System (ADS)

    Hanrieder, N.; Wilbert, S.; Pitz-Paal, R.; Emde, C.; Gasteiger, J.; Mayer, B.; Polo, J.

    2015-08-01

    Losses of reflected Direct Normal Irradiance due to atmospheric extinction in concentrated solar tower plants can vary significantly with site and time. The losses of the direct normal irradiance between the heliostat field and receiver in a solar tower plant are mainly caused by atmospheric scattering and absorption by aerosol and water vapor concentration in the atmospheric boundary layer. Due to a high aerosol particle number, radiation losses can be significantly larger in desert environments compared to the standard atmospheric conditions which are usually considered in ray-tracing or plant optimization tools. Information about on-site atmospheric extinction is only rarely available. To measure these radiation losses, two different commercially available instruments were tested, and more than 19 months of measurements were collected and compared at the Plataforma Solar de Almería. Both instruments are primarily used to determine the meteorological optical range (MOR). The Vaisala FS11 scatterometer is based on a monochromatic near-infrared light source emission and measures the strength of scattering processes in a small air volume mainly caused by aerosol particles. The Optec LPV4 long-path visibility transmissometer determines the monochromatic attenuation between a light-emitting diode (LED) light source at 532 nm and a receiver and therefore also accounts for absorption processes. As the broadband solar attenuation is of interest for solar resource assessment for concentrated solar power (CSP), a correction procedure for these two instruments is developed and tested. This procedure includes a spectral correction of both instruments from monochromatic to broadband attenuation. That means the attenuation is corrected for the time-dependent solar spectrum which is reflected by the collector. Further, an absorption correction for the Vaisala FS11 scatterometer is implemented. To optimize the absorption and broadband correction (ABC) procedure, additional

  14. Corrective Jaw Surgery

    MedlinePlus

    Orthognathic surgery is performed to correct the misalignment ...

  15. Natural attenuation of contaminated soils.

    PubMed

    Mulligan, Catherine N; Yong, Raymond N

    2004-06-01

    Natural attenuation is increasing in use as a low cost means of remediating contaminated soil and groundwater. Modelling of contaminant migration plays a key role in evaluating natural attenuation as a remediation option and in ensuring that there will be no adverse impact on humans and the environment. During natural attenuation, the contamination must be characterized thoroughly and monitored through the process. In this paper, attenuation mechanisms for both organic and inorganic contaminants, use of models and protocols, role of monitoring and field case studies will be reviewed.

  16. Lg Attenuation of the Western United States

    NASA Astrophysics Data System (ADS)

    Gallegos, A. C.; Ranasinghe, N. R.; Ni, J.; Sandvol, E. A.

    2014-12-01

    Lg waveforms recorded by EarthScope's Transportable Array (TA) are used to estimate Lg Q in the Western United States (WUS). Attenuation is calculated from Lg spectral amplitudes filtered in a narrow band from 0.5 to 1.5 Hz with a central frequency of 1 Hz. The two-station and reverse two-station techniques were used to calculate Qo values. 398 events occurring from 2005 to 2009 and ranging from magnitude 3 to magnitude 6 were used in this study. The geometric spreading term can be determined using a three-dimensional linear fit of the amplitude ratios versus the epicentral distances to two stations; the slope of this line provides the geometric spreading term we use to calculate Lg Qo values for the WUS. The results show high-Q regions (low attenuation) corresponding to the Colorado Plateau (CP), the Rocky Mountains (RM), the Columbia Plateau (COP), and the Sierra Nevada Mountains (SNM). Regions of low Q (high attenuation) are seen along the Snake River Plain (SRP), the Rio Grande Rift (RGR), the Cascade Mountains (CM), and in the east and west of the Basin and Range (BR), where tectonic activity is greater than in the central part of the BR. A positive correlation between high heat flow, recent tectonic activity, and attenuation was observed; areas with low heat flow, thin sediment cover, and no recent tectonic activity were observed to have consistently high Q. These new models, based on the two-station and reverse two-station methods, provide a comparison with previous studies and better constrain regions of high attenuation. This increase in detail can improve high-frequency ground-motion predictions for future large earthquakes, enabling more accurate hazard assessment, and improve overall understanding of the structure and assemblage of the WUS.
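The two-station method reduces to a closed-form expression under a simple amplitude model. A sketch with typical Lg parameter values (the velocity and spreading exponent are generic assumptions, not this study's fitted values):

```python
import math

def two_station_q(amp1, amp2, r1_km, r2_km, freq_hz=1.0,
                  velocity_km_s=3.5, spreading=0.5):
    """Two-station estimate of Lg Q, assuming the amplitude model
        A(r) = S * r**(-g) * exp(-pi * f * r / (Q * v))
    with a common source term S at two stations on the same
    source-station azimuth.  Dividing the two amplitudes cancels S, and
    solving for Q gives the expression below."""
    numerator = math.pi * freq_hz * (r2_km - r1_km)
    denominator = velocity_km_s * (math.log(amp1 / amp2)
                                   + spreading * math.log(r1_km / r2_km))
    return numerator / denominator
```

The reverse two-station variant combines such ratios from events on both sides of the station pair, which also cancels the site terms.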

  17. Surprisingly low frequency attenuation effects in long tubes when measuring turbulent fluxes at tall towers

    NASA Astrophysics Data System (ADS)

    Ibrom, Andreas; Brændholt, Andreas; Pilegaard, Kim

    2016-04-01

    The eddy covariance technique relies on the fast and accurate measurement of gas concentration fluctuations. While for some gases robust and compact sensors are available, measurement of, e.g., non-CO2 greenhouse gas fluxes is often performed with sensitive equipment that cannot be run on a tower without massively disturbing the wind field. To measure CO and N2O fluxes, we installed an eddy covariance system at a 125 m mast, where the gas analyser was kept in a laboratory close to the tower and sampling was performed through a 150 m long tube with a gas intake at 96 m height. We investigated the frequency attenuation and the time lag of the N2O and CO concentration measurements with a concentration step experiment. The results showed surprisingly high cut-off frequencies (close to 2 Hz) and small low-pass-filter-induced time lags (< 0.3 s), which were similar for CO and N2O. The results indicate that the concentration signal was hardly biased during the ca. 10 s travel through the tube. Due to the larger turbulence time scales at large measurement heights, the low-pass correction was < 5% for the majority of the measurements. For water vapour the tube attenuation was massive, which, however, had a positive effect by reducing both the water vapour dilution correction and the cross-sensitivity effects on the N2O and CO flux measurements. Here we present the set-up of the concentration step change experiment and its results, and compare them with recently developed theories for the behaviour of gases in turbulent tube flows.
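A first-order low-pass filter is a common simple model for tube attenuation of scalar fluctuations, and it ties the two reported numbers together: a cut-off near 2 Hz implies a filter time lag well under 0.3 s. A minimal sketch of that model (illustrative, not the analysis code of the study):

```python
import math

def lowpass_gain(freq_hz, cutoff_hz):
    """Amplitude response of a first-order low-pass filter at freq_hz,
    where cutoff_hz is the -3 dB frequency."""
    return 1.0 / math.sqrt(1.0 + (freq_hz / cutoff_hz) ** 2)

def lowpass_time_lag_s(cutoff_hz):
    """Low-frequency time lag (the filter time constant) of the same
    first-order filter."""
    return 1.0 / (2.0 * math.pi * cutoff_hz)
```

With cutoff_hz = 2.0 the time lag is about 0.08 s, consistent with the < 0.3 s observed in the step experiment.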

  18. Region specific optimization of continuous linear attenuation coefficients based on UTE (RESOLUTE): application to PET/MR brain imaging

    NASA Astrophysics Data System (ADS)

    Ladefoged, Claes N.; Benoit, Didier; Law, Ian; Holm, Søren; Kjær, Andreas; Højgaard, Liselotte; Hansen, Adam E.; Andersen, Flemming L.

    2015-10-01

    The reconstruction of PET brain data in a PET/MR hybrid scanner is challenging in the absence of transmission sources, where MR images are used for MR-based attenuation correction (MR-AC). The main challenge of MR-AC is to separate bone and air, as neither have a signal in traditional MR images, and to assign the correct linear attenuation coefficient to bone. The ultra-short echo time (UTE) MR sequence was proposed as a basis for MR-AC as this sequence shows a small signal in bone. The purpose of this study was to develop a new clinically feasible MR-AC method with patient specific continuous-valued linear attenuation coefficients in bone that provides accurate reconstructed PET image data. A total of 164 [18F]FDG PET/MR patients were included in this study, of which 10 were used for training. MR-AC was based on either standard CT (reference), UTE or our method (RESOLUTE). The reconstructed PET images were evaluated in the whole brain, as well as regionally in the brain using a ROI-based analysis. Our method segments air, brain, cerebrospinal fluid, and soft tissue voxels on the unprocessed UTE TE images, and uses a mapping of R2* values to CT Hounsfield Units (HU) to measure the density in bone voxels. The average error of our method in the brain was 0.1% and less than 1.2% in any region of the brain. On average 95% of the brain was within  ±10% of PETCT, compared to 72% when using UTE. The proposed method is clinically feasible, reducing both the global and local errors on the reconstructed PET images, as well as limiting the number and extent of the outliers.

  19. Region specific optimization of continuous linear attenuation coefficients based on UTE (RESOLUTE): application to PET/MR brain imaging.

    PubMed

    Ladefoged, Claes N; Benoit, Didier; Law, Ian; Holm, Søren; Kjær, Andreas; Højgaard, Liselotte; Hansen, Adam E; Andersen, Flemming L

    2015-10-21

    The reconstruction of PET brain data in a PET/MR hybrid scanner is challenging in the absence of transmission sources, where MR images are used for MR-based attenuation correction (MR-AC). The main challenge of MR-AC is to separate bone and air, as neither have a signal in traditional MR images, and to assign the correct linear attenuation coefficient to bone. The ultra-short echo time (UTE) MR sequence was proposed as a basis for MR-AC as this sequence shows a small signal in bone. The purpose of this study was to develop a new clinically feasible MR-AC method with patient specific continuous-valued linear attenuation coefficients in bone that provides accurate reconstructed PET image data. A total of 164 [18F]FDG PET/MR patients were included in this study, of which 10 were used for training. MR-AC was based on either standard CT (reference), UTE or our method (RESOLUTE). The reconstructed PET images were evaluated in the whole brain, as well as regionally in the brain using a ROI-based analysis. Our method segments air, brain, cerebrospinal fluid, and soft tissue voxels on the unprocessed UTE TE images, and uses a mapping of R2* values to CT Hounsfield Units (HU) to measure the density in bone voxels. The average error of our method in the brain was 0.1% and less than 1.2% in any region of the brain. On average 95% of the brain was within  ±10% of PETCT, compared to 72% when using UTE. The proposed method is clinically feasible, reducing both the global and local errors on the reconstructed PET images, as well as limiting the number and extent of the outliers.

  20. 2001 Bhuj, India, earthquake engineering seismoscope recordings and Eastern North America ground-motion attenuation relations

    USGS Publications Warehouse

    Cramer, C.H.; Kumar, A.

    2003-01-01

    Engineering seismoscope data collected at distances less than 300 km for the M 7.7 Bhuj, India, mainshock are compatible with ground-motion attenuation in eastern North America (ENA). The mainshock ground-motion data have been corrected to a common geological site condition using the factors of Joyner and Boore (2000) and a classification scheme of Quaternary or Tertiary sediments or rock. We then compare these data to ENA ground-motion attenuation relations. Despite uncertainties in recording method, geological site corrections, common tectonic setting, and the amount of regional seismic attenuation, the corrected Bhuj dataset agrees with the collective predictions by ENA ground-motion attenuation relations within a factor of 2. This level of agreement is within the dataset uncertainties and the normal variance for recorded earthquake ground motions.

  1. A new technique to characterize CT scanner bow-tie filter attenuation and applications in human cadaver dosimetry simulations

    SciTech Connect

    Li, Xinhua; Shi, Jim Q.; Zhang, Da; Singh, Sarabjeet; Padole, Atul; Otrakji, Alexi; Kalra, Mannudeep K.; Liu, Bob; Xu, X. George

    2015-11-15

    Purpose: To present a noninvasive technique for directly measuring the CT bow-tie filter attenuation with a linear array x-ray detector. Methods: A scintillator based x-ray detector of 384 pixels, 307 mm active length, and fast data acquisition (model X-Scan 0.8c4-307, Detection Technology, FI-91100 Ii, Finland) was used to simultaneously detect radiation levels across a scan field-of-view. The sampling time was as short as 0.24 ms. To measure the body bow-tie attenuation on a GE Lightspeed Pro 16 CT scanner, the x-ray tube was parked at the 12 o’clock position, and the detector was centered in the scan field at the isocenter height. Two radiation exposures were made with and without the bow-tie in the beam path. Each readout signal was corrected for the detector background offset and signal-level related nonlinear gain, and the ratio of the two exposures gave the bow-tie attenuation. The results were used in the GEANT4 based simulations of the point doses measured using six thimble chambers placed in a human cadaver with abdomen/pelvis CT scans at 100 or 120 kV, helical pitch at 1.375, constant or variable tube current, and distinct x-ray tube starting angles. Results: Absolute attenuation was measured with the body bow-tie scanned at 80–140 kV. For 24 doses measured in six organs of the cadaver, the median or maximum difference between the simulation results and the measurements on the CT scanner was 8.9% or 25.9%, respectively. Conclusions: The described method allows fast and accurate bow-tie filter characterization.
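The core of the measurement is an offset-corrected ratio of the two exposures. A simplified sketch (the signal-level nonlinear gain correction described in the abstract is omitted for brevity):

```python
import math

def bowtie_attenuation(with_bowtie, without_bowtie, offset):
    """Per-pixel bow-tie attenuation (in units of mu*t) along the linear
    detector array: subtract the detector background offset from each
    readout, then take the negative log of the exposure ratio."""
    return [-math.log((a - offset) / (b - offset))
            for a, b in zip(with_bowtie, without_bowtie)]
```

A pixel where both exposures read the same corrected signal (e.g., at the thin center of the filter if its attenuation there were negligible) yields zero; pixels toward the edges, where the bow-tie is thick, yield progressively larger values.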

  2. Growth Attenuation Therapy.

    PubMed

    Kerruish, Nikki

    2016-01-01

    The "Ashley treatment" has provoked much debate and remains ethically controversial. Given that more children are being referred for such treatment, there remains a need to provide advice to clinicians and ethics committees regarding how to respond to such requests. This article contends that there is one particularly important gap in the existing literature about growth attenuation therapy (GAT) (one aspect of the Ashley treatment): the views of parents of children with profound cognitive impairment (PCI) remain significantly underrepresented. The article attempts to redress this balance by analyzing published accounts both from parents of children who have received GAT and from parents who oppose treatment. Using these accounts, important points are illuminated regarding how parents characterize benefits and harms, and their responsibilities as surrogate decisionmakers. This analysis could contribute to decisionmaking about future requests for GAT and might also have wider relevance to healthcare decisionmaking for children with PCI. PMID:26788948

  3. Fiber optic attenuator

    NASA Technical Reports Server (NTRS)

    Buzzetti, Mike F. (Inventor)

    1994-01-01

    The fiber optic attenuator of the invention is a mandrel structure around which a bundle of optical fibers is wrapped in a complete circle. The mandrel structure includes a flexible cylindrical sheath through which the bundle passes. A set screw on the mandrel structure presses one side of the sheath against two posts on the opposite side of the sheath. By rotating the screw, the sheath is deformed to extend partially between the two posts, bending the fiber optic bundle to a small radius controlled by the set screw. Bending the bundle to a small radius causes light in each optical fiber to be lost into the cladding, the amount depending upon the radius about which the bundle is bent.

  4. Correctness issues in workflow management

    NASA Astrophysics Data System (ADS)

    Kamath, Mohan; Ramamritham, Krithi

    1996-12-01

    Workflow management is a technique to integrate and automate the execution of steps that comprise a complex process, e.g., a business process. Workflow management systems (WFMSs) primarily evolved from industry to cater to the growing demand for office automation tools among businesses. Coincidentally, database researchers developed several extended transaction models to handle similar applications. Although the goals of both the communities were the same, the issues they focused on were different. The workflow community primarily focused on modelling aspects to accurately capture the data and control flow requirements between the steps that comprise a workflow, while the database community focused on correctness aspects to ensure data consistency of sub-transactions that comprise a transaction. However, we now see a confluence of some of the ideas, with additional features being gradually offered by WFMSs. This paper provides an overview of correctness in workflow management. Correctness is an important aspect of WFMSs and a proper understanding of the available concepts and techniques by WFMS developers and workflow designers will help in building workflows that are flexible enough to capture the requirements of real world applications and robust enough to provide the necessary correctness and reliability properties. We first enumerate the correctness issues that have to be considered to ensure data consistency. Then we survey techniques that have been proposed or are being used in WFMSs for ensuring correctness of workflows. These techniques emerge from the areas of workflow management, extended transaction models, multidatabases and transactional workflows. Finally, we present some open issues related to correctness of workflows in the presence of concurrency and failures.

  5. Rethinking political correctness.

    PubMed

    Ely, Robin J; Meyerson, Debra E; Davidson, Martin N

    2006-09-01

    Legal and cultural changes over the past 40 years ushered unprecedented numbers of women and people of color into companies' professional ranks. Laws now protect these traditionally underrepresented groups from blatant forms of discrimination in hiring and promotion. Meanwhile, political correctness has reset the standards for civility and respect in people's day-to-day interactions. Despite this obvious progress, the authors' research has shown that political correctness is a double-edged sword. While it has helped many employees feel unlimited by their race, gender, or religion, the PC rule book can hinder people's ability to develop effective relationships across race, gender, and religious lines. Companies need to equip workers with skills--not rules--for building these relationships. The authors offer the following five principles for healthy resolution of the tensions that commonly arise over difference: Pause to short-circuit the emotion and reflect; connect with others, affirming the importance of relationships; question yourself to identify blind spots and discover what makes you defensive; get genuine support that helps you gain a broader perspective; and shift your mind-set from one that says, "You need to change," to one that asks, "What can I change?" When people treat their cultural differences--and related conflicts and tensions--as opportunities to gain a more accurate view of themselves, one another, and the situation, trust builds and relationships become stronger. Leaders should put aside the PC rule book and instead model and encourage risk taking in the service of building the organization's relational capacity. The benefits will reverberate through every dimension of the company's work.

  6. Accurate documentation and wound measurement.

    PubMed

    Hampton, Sylvie

    This article, part 4 in a series on wound management, addresses the sometimes routine yet crucial task of documentation. Clear and accurate records of a wound enable its progress to be determined so the appropriate treatment can be applied. Thorough records mean any practitioner picking up a patient's notes will know when the wound was last checked, how it looked and what dressing and/or treatment was applied, ensuring continuity of care. Documenting every assessment also has legal implications, demonstrating due consideration and care of the patient and the rationale for any treatment carried out. Part 5 in the series discusses wound dressing characteristics and selection.

  7. Fast self-attenuation determination of low energy gamma lines.

    PubMed

    Haddad, Kh

    2016-09-01

    A linear correlation between the self-attenuation factor of the 46.5 keV line ((210)Pb) and the 1764 keV to 46.5 keV count ratio has been developed in this work using triple superphosphate fertilizer samples. A similar correlation has also been developed for the 63.3 keV line ((238)U). This correlation offers a simple, fast, and accurate technique for self-attenuation determination of low-energy gamma lines. Utilization of the 46.5 keV line in the ratio has remarkably improved the technique's sensitivity in comparison with other work that used a similar concept. The obtained results were used to assess the validity of the transmission technique. PMID:27337648
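The calibration amounts to fitting a straight line between the count ratio and the self-attenuation factor, then evaluating that line for new samples. A minimal ordinary-least-squares sketch with hypothetical data (the actual fitted coefficients are in the paper, not reproduced here):

```python
def fit_line(x, y):
    """Ordinary least squares fit y = slope * x + intercept; x would be
    the 1764 keV / 46.5 keV count ratio, y the measured 46.5 keV
    self-attenuation factor.  Returns (slope, intercept)."""
    n = float(len(x))
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n

def self_attenuation(ratio, slope, intercept):
    """Predict the self-attenuation factor from a measured count ratio
    using the calibrated line."""
    return slope * ratio + intercept
```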

  8. Patient position alters attenuation effects in multipinhole cardiac SPECT

    SciTech Connect

    Timmins, Rachel; Ruddy, Terrence D.; Wells, R. Glenn

    2015-03-15

    position-dependent changes were removed with attenuation correction. Conclusions: Translation of a source relative to a multipinhole camera caused only small changes in homogeneous phantoms with SPS changing <1.5. Inhomogeneous attenuating media cause much larger changes to occur when the source is translated. Changes in SPS of up to six were seen in an anthropomorphic phantom for axial translations. Attenuation correction removes the position-dependent changes in attenuation.

  9. Suicide Risk: Amplifiers and Attenuators.

    ERIC Educational Resources Information Center

    Plutchik, Robert; Van Praag, Herman M.

    1994-01-01

    Attempts to integrate findings on correlates of suicide and violent risk in terms of a theory called a two-stage model of countervailing forces, which assumes that the strength of aggressive impulses is modified by amplifiers and attenuators. The vectorial interaction of amplifiers and attenuators creates an unstable equilibrium making prediction…

  10. Thermodynamics of Error Correction

    NASA Astrophysics Data System (ADS)

    Sartori, Pablo; Pigolotti, Simone

    2015-10-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  11. Estimation of canopy attenuation for active/passive microwave soil moisture retrieval algorithms

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper discusses the importance of the proper characterization of scattering and attenuation in trees needed for accurate retrieval of soil moisture in the presence of trees. Emphasis is placed on determining an accurate estimation of the propagation properties of a vegetation canopy using the c...

  12. SPLASH: Accurate OH maser positions

    NASA Astrophysics Data System (ADS)

    Walsh, Andrew; Gomez, Jose F.; Jones, Paul; Cunningham, Maria; Green, James; Dawson, Joanne; Ellingsen, Simon; Breen, Shari; Imai, Hiroshi; Lowe, Vicki; Jones, Courtney

    2013-10-01

    The hydroxyl (OH) 18 cm lines are powerful and versatile probes of diffuse molecular gas that may trace a largely unstudied component of the Galactic ISM. SPLASH (the Southern Parkes Large Area Survey in Hydroxyl) is a large, unbiased and fully-sampled survey of OH emission, absorption and masers in the Galactic Plane that will achieve sensitivities an order of magnitude better than previous work. In this proposal, we request ATCA time to follow up OH maser candidates. This will give us accurate (~10") positions of the masers, which can be compared to other maser positions from HOPS, MMB and MALT-45, and will provide full polarisation measurements towards a sample of OH masers that have not been observed in MAGMO.

  13. Accurate thickness measurement of graphene

    NASA Astrophysics Data System (ADS)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.
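
The layer-counting logic behind such AFM measurements can be sketched numerically: assuming the accepted graphite interlayer spacing of 0.335 nm and an illustrative adsorbate-layer offset for the first layer (the offset default below is an assumption for illustration, not a calibrated constant from the study), a measured step height maps to a layer count as follows.

```python
GRAPHITE_INTERLAYER_NM = 0.335  # accepted graphite interlayer spacing

def estimate_layers(step_height_nm, adsorbate_offset_nm=0.4):
    """Estimate the graphene layer count from an AFM step height.

    The first layer sits on a substrate adsorbate layer, so its apparent
    height carries a roughly constant offset; each further layer adds
    ~0.335 nm. The offset value here is illustrative only.
    """
    n = round((step_height_nm - adsorbate_offset_nm) / GRAPHITE_INTERLAYER_NM)
    return max(n, 1)
```

For example, an apparent step of 0.74 nm would be read as a single layer under these assumptions, which is consistent with the 0.4-1.7 nm spread of reported single-layer heights.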

  14. Accurate thickness measurement of graphene.

    PubMed

    Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T

    2016-03-29

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  15. Monte Carlo evaluation of accuracy and noise properties of two scatter correction methods

    SciTech Connect

    Narita, Y.; Eberl, S.; Nakamura, T.

    1996-12-31

    Two independent scatter correction techniques, transmission dependent convolution subtraction (TDCS) and the triple-energy window (TEW) method, were evaluated in terms of quantitative accuracy and noise properties using Monte Carlo simulation (EGS4). Emission projections (primary, scatter and scatter plus primary) were simulated for 99mTc and 201Tl for numerical chest phantoms. Data were reconstructed with an ordered-subset ML-EM algorithm including attenuation correction using the transmission data. In the chest phantom simulation, TDCS provided better S/N than TEW, and better accuracy, i.e., 1.0% vs -7.2% in myocardium, and -3.7% vs -30.1% in the ventricular chamber for 99mTc with TDCS and TEW, respectively. For 201Tl, TDCS provided good visual and quantitative agreement with the simulated true primary image without noticeably increasing the noise after scatter correction. Overall, TDCS proved to be more accurate and less noisy than TEW, facilitating quantitative assessment of physiological functions with SPECT.
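
As context for the TEW method compared above, its standard scatter estimate approximates the scatter counts under the photopeak as a trapezoid spanned by two narrow windows flanking the peak. A minimal per-pixel sketch; the window widths below are illustrative defaults, not the ones used in the study.

```python
def tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_peak):
    """Triple-energy-window (TEW) scatter estimate for one projection pixel.

    Counts in two narrow windows flanking the photopeak (widths w_lower,
    w_upper in keV) approximate the scatter spectrum under the peak
    (width w_peak) as a trapezoid.
    """
    return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

def tew_correct(c_peak, c_lower, c_upper, w_lower=3.0, w_upper=3.0, w_peak=20.0):
    """Subtract the TEW scatter estimate from the photopeak counts."""
    primary = c_peak - tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_peak)
    return max(primary, 0.0)  # clip: counts cannot be negative
```

Because the estimate is built from two noisy, narrow windows, it tends to add noise to the corrected data, which is consistent with the S/N disadvantage of TEW reported above.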

  16. Attenuation of Vaccinia Virus.

    PubMed

    Yakubitskiy, S N; Kolosova, I V; Maksyutov, R A; Shchelkunov, S N

    2015-01-01

    Since the end of routine smallpox vaccination in 1980, the human population has become increasingly susceptible, compared to a generation ago, not only to the variola (smallpox) virus but also to other zoonotic orthopoxviruses. The need for safer vaccines against orthopoxviruses is therefore even greater now. The Lister vaccine strain (LIVP) of vaccinia virus was used as a parental virus for generating a recombinant 1421ABJCN clone defective in five virulence genes encoding hemagglutinin (A56R), the IFN-γ-binding protein (B8R), thymidine kinase (J2R), the complement-binding protein (C3L), and the Bcl-2-like inhibitor of apoptosis (N1L). We found that disruption of these loci does not affect replication in mammalian cell cultures. The isogenic recombinant strain 1421ABJCN exhibits a reduced inflammatory response and attenuated neurovirulence relative to LIVP. Virus titers of 1421ABJCN were 3 log10 lower than those of the parent VACV LIVP when administered by the intracerebral route in newborn mice. In a subcutaneous mouse model, 1421ABJCN displayed levels of VACV-neutralizing antibodies comparable to those of LIVP and conferred protective immunity against lethal challenge by the ectromelia virus. The VACV mutant holds promise as a safe live vaccine strain for preventing smallpox and other orthopoxvirus infections. PMID:26798498

  17. Accurate SHAPE-directed RNA structure determination

    PubMed Central

    Deigan, Katherine E.; Li, Tian W.; Mathews, David H.; Weeks, Kevin M.

    2009-01-01

    Almost all RNAs can fold to form extensive base-paired secondary structures. Many of these structures then modulate numerous fundamental elements of gene expression. Deducing these structure–function relationships requires that it be possible to predict RNA secondary structures accurately. However, RNA secondary structure prediction for large RNAs, such that a single predicted structure for a single sequence reliably represents the correct structure, has remained an unsolved problem. Here, we demonstrate that quantitative, nucleotide-resolution information from a SHAPE experiment can be interpreted as a pseudo-free energy change term and used to determine RNA secondary structure with high accuracy. Free energy minimization, by using SHAPE pseudo-free energies, in conjunction with nearest neighbor parameters, predicts the secondary structure of deproteinized Escherichia coli 16S rRNA (>1,300 nt) and a set of smaller RNAs (75–155 nt) with accuracies of up to 96–100%, which are comparable to the best accuracies achievable by comparative sequence analysis. PMID:19109441
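
The pseudo-free energy change term described above has the functional form ΔG_SHAPE(i) = m·ln[SHAPE(i) + 1] + b, added per nucleotide during free energy minimization. The slope and intercept defaults below are representative published values for this scheme; treat this as an illustrative sketch rather than the paper's implementation.

```python
import math

def shape_pseudo_energy(reactivity, m=2.6, b=-0.8):
    """Convert a SHAPE reactivity into a pseudo-free-energy term (kcal/mol).

    High reactivity (flexible, likely unpaired nucleotides) yields a
    positive penalty against pairing; low reactivity yields a bonus.
    m and b here are representative defaults, shown for illustration.
    """
    return m * math.log(reactivity + 1.0) + b
```

A nucleotide with zero reactivity receives the intercept b as a pairing bonus, while strongly reactive nucleotides are penalized, steering the minimization toward structures consistent with the probing data.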

  18. Accurate Fiber Length Measurement Using Time-of-Flight Technique

    NASA Astrophysics Data System (ADS)

    Terra, Osama; Hussein, Hatem

    2016-06-01

    Fiber artifacts of very well-measured length are required for the calibration of optical time domain reflectometers (OTDR). In this paper, accurate length measurements of different fibers are performed using the time-of-flight technique. A setup is proposed to measure lengths from 1 to 40 km accurately at 1,550 and 1,310 nm using a high-speed electro-optic modulator and photodetector. This setup offers traceability to the SI unit of time, the second (and hence to the meter by definition), by locking the time interval counter to a Global Positioning System (GPS)-disciplined quartz oscillator. Additionally, the length of a recirculating loop artifact is measured and compared with the measurement made for the same fiber by the National Physical Laboratory of the United Kingdom (NPL). Finally, a method is proposed to relatively correct the fiber refractive index to allow accurate fiber length measurement.
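
The core conversion in such a time-of-flight measurement is from measured delay to length via the fiber's group index. A minimal sketch, assuming one-way transmission; the group index default is a typical single-mode fiber value near 1550 nm, and its uncertainty is exactly what the proposed refractive index correction addresses.

```python
C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s (exact by definition)

def fiber_length_m(delay_s, group_index=1.4682):
    """One-way time-of-flight length: L = c * t / n_g.

    delay_s is the measured propagation delay through the fiber;
    group_index is an illustrative typical value, not a calibrated one.
    """
    return C_VACUUM * delay_s / group_index
```

The traceability chain in the paper enters through delay_s: locking the time interval counter to a GPS-disciplined oscillator ties the delay, and hence the length, to the SI second.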

  19. Accurate and Inaccurate Conceptions about Osmosis That Accompanied Meaningful Problem Solving.

    ERIC Educational Resources Information Center

    Zuckerman, June Trop

    This study focused on the knowledge of six outstanding science students who solved an osmosis problem meaningfully. That is, they used appropriate and substantially accurate conceptual knowledge to generate an answer. Three generated a correct answer; three, an incorrect answer. This paper identifies both the accurate and inaccurate conceptions…

  20. Attenuation of Shocks through Porous Media

    NASA Astrophysics Data System (ADS)

    Lind, Charles A.; Cybyk, Bohdan Z.; Boris, Jay P.

    1998-11-01

    Structures designed to mitigate the effects of blast and shock waves are important for both accidental and controlled explosions. The net effect of these mitigating structures is to reduce the strength of the transmitted shock thereby reducing the dynamic pressure loading on nearby objects. In the present study, the attenuation of planar blast and shock waves by passage through structured media is numerically studied with the FAST3D model. The FAST3D model is a state-of-the-art, portable, three-dimensional computational fluid dynamics model based on Flux-Corrected Transport and uses the Virtual Cell Embedding algorithm for simulating complex geometries. The effects of media placement, spacing, orientation, and area blockage are parametrically studied to enhance the understanding of the complex processes involved and to determine ways to minimize the adverse effects of these blast waves.

  1. Optimization of a Model Corrected Blood Input Function from Dynamic FDG-PET Images of Small Animal Heart In Vivo

    PubMed Central

    Zhong, Min; Kundu, Bijoy K.

    2013-01-01

    Quantitative evaluation of dynamic Positron Emission Tomography (PET) of the mouse heart in vivo is challenging due to the small size of the heart and the limited intrinsic spatial resolution of the PET scanner. Here, we optimized a compartment model which can simultaneously correct for spill-over (SP) and partial-volume (PV) effects for both the blood pool and the myocardium, compute kinetic rate parameters and generate a model corrected blood input function (MCBIF) from ordered subset expectation maximization – maximum a posteriori (OSEM-MAP) cardiac and respiratory gated 18F-FDG PET images of the mouse heart with attenuation correction in vivo, without any invasive blood sampling. Arterial blood samples were collected from a single mouse to indicate the feasibility of the proposed method. In order to establish statistical significance, venous blood samples from n=6 mice were obtained at 2 late time points, when SP contamination from the tissue to the blood is maximum. We observed that correct bounds and initial guesses for the PV and SP coefficients accurately model the wash-in and wash-out dynamics of the tracer from mouse blood. The residual plot indicated an average difference of about 1.7% between the blood samples and MCBIF. The downstream rate of myocardial FDG influx constant, Ki (0.15±0.03 min−1), compared well with Ki obtained from arterial blood samples (P=0.716). In conclusion, the proposed methodology is not only quantitative but also reproducible. PMID:24741130
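
The spill-over/partial-volume picture can be viewed, per time frame, as a linear mixing of the true blood and myocardium time-activity curves. The sketch below inverts that mixing with fixed coefficients; the coefficient names and values are illustrative, and in the paper the coefficients are estimated jointly with the compartment model rather than assumed known.

```python
import numpy as np

def unmix_tacs(meas_blood, meas_myo, rc_b, sp_m2b, rc_m, sp_b2m):
    """Recover true blood/myocardium time-activity curves from measured ones.

    Assumes a per-frame linear mixing model (coefficient names illustrative):
        meas_blood = rc_b   * true_blood + sp_m2b * true_myo
        meas_myo   = sp_b2m * true_blood + rc_m   * true_myo
    where rc_* are recovery (partial-volume) coefficients and sp_* are
    spill-over fractions.
    """
    A = np.array([[rc_b, sp_m2b],
                  [sp_b2m, rc_m]])
    mixed = np.vstack([meas_blood, meas_myo])  # shape (2, n_frames)
    true = np.linalg.solve(A, mixed)           # invert the 2x2 mixing per frame
    return true[0], true[1]
```

Late in the scan, when myocardial uptake dominates, the sp_m2b term is the SP contamination of the blood pool that the venous samples in the study were used to check.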

  2. SU-E-I-20: Dead Time Count Loss Compensation in SPECT/CT: Projection Versus Global Correction

    SciTech Connect

    Siman, W; Kappadath, S

    2014-06-01

    Purpose: To compare projection-based versus global correction methods that compensate for deadtime count loss in SPECT/CT images. Methods: SPECT/CT images of an IEC phantom (2.3GBq 99mTc) with ∼10% deadtime loss containing the 37mm (uptake 3), 28 and 22mm (uptake 6) spheres were acquired using a 2 detector SPECT/CT system with 64 projections/detector and 15 s/projection. The deadtime, Ti, and the true count rate, Ni, at each projection i were calculated using the monitor-source method. Deadtime corrected SPECT images were reconstructed twice: (1) with projections that were individually corrected for deadtime losses; and (2) with the original projections with losses, then correcting the reconstructed SPECT images using a scaling factor equal to the inverse of the average fractional loss for 5 projections/detector. For both cases, the SPECT images were reconstructed using OSEM with attenuation and scatter corrections. The two SPECT datasets were assessed by comparing line profiles in the xy-plane and along the z-axis, evaluating the count recoveries, and comparing ROI statistics. Higher deadtime losses (up to 50%) were also simulated in the individually corrected projections by multiplying each projection i by exp(-a*Ni*Ti), where a is a scalar. Additionally, deadtime corrections in phantoms with different geometries and deadtime losses were also explored. The same two correction methods were carried out for all these data sets. Results: Averaging the deadtime losses over 5 projections/detector suffices to recover >99% of the lost counts in most clinical cases. The line profiles (xy-plane and z-axis) and the statistics in the ROIs drawn in the SPECT images corrected using both methods agreed within the statistical noise. The count-loss recoveries of the two methods also agree to within >99%. Conclusion: The projection-based and the global correction yield visually indistinguishable SPECT images. The global correction based on sparse sampling of projection losses allows for accurate SPECT deadtime correction.
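
A minimal sketch of the two correction strategies being compared, assuming the fractional count loss per projection is already known from the monitor-source measurement (array shapes and names are illustrative):

```python
import numpy as np

def correct_projections(projs, loss_fractions):
    """Projection-based correction: divide each projection by its measured
    live fraction (1 - fractional loss). projs has shape (n_proj, H, W)."""
    live = 1.0 - np.asarray(loss_fractions, dtype=float)[:, None, None]
    return np.asarray(projs, dtype=float) / live

def global_scale(loss_fractions_sample):
    """Global correction factor from a sparse sample of projections:
    the inverse of the average fraction of counts retained."""
    return 1.0 / (1.0 - np.mean(loss_fractions_sample))
```

With roughly uniform losses across projections, scaling the reconstructed image by global_scale(...) recovers nearly the same counts as the per-projection division, which is the equivalence the study demonstrates.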

  3. The Utility of Maze Accurate Response Rate in Assessing Reading Comprehension in Upper Elementary and Middle School Students

    ERIC Educational Resources Information Center

    McCane-Bowling, Sara J.; Strait, Andrea D.; Guess, Pamela E.; Wiedo, Jennifer R.; Muncie, Eric

    2014-01-01

    This study examined the predictive utility of five formative reading measures: words correct per minute, number of comprehension questions correct, reading comprehension rate, number of maze correct responses, and maze accurate response rate (MARR). Broad Reading cluster scores obtained via the Woodcock-Johnson III (WJ III) Tests of Achievement…

  4. Accurate, meshless methods for magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.; Raives, Matthias J.

    2016-01-01

    Recently, we explored new meshless finite-volume Lagrangian methods for hydrodynamics: the 'meshless finite mass' (MFM) and 'meshless finite volume' (MFV) methods; these capture advantages of both smoothed particle hydrodynamics (SPH) and adaptive mesh refinement (AMR) schemes. We extend these to include ideal magnetohydrodynamics (MHD). The MHD equations are second-order consistent and conservative. We augment these with a divergence-cleaning scheme, which maintains ∇ · B ≈ 0. We implement these in the code GIZMO, together with state-of-the-art SPH MHD. We consider a large test suite, and show that on all problems the new methods are competitive with AMR using constrained transport (CT) to ensure ∇ · B = 0. They correctly capture the growth/structure of the magnetorotational instability, MHD turbulence, and launching of magnetic jets, in some cases converging more rapidly than state-of-the-art AMR. Compared to SPH, the MFM/MFV methods exhibit convergence at fixed neighbour number, sharp shock-capturing, and dramatically reduced noise, divergence errors, and diffusion. Still, 'modern' SPH can handle most test problems, at the cost of larger kernels and 'by hand' adjustment of artificial diffusion. Compared to non-moving meshes, the new methods exhibit enhanced 'grid noise' but reduced advection errors and diffusion, easily include self-gravity, and feature velocity-independent errors and superior angular momentum conservation. They converge more slowly on some problems (smooth, slow-moving flows), but more rapidly on others (involving advection/rotation). In all cases, we show divergence control beyond the Powell 8-wave approach is necessary, or all methods can converge to unphysical answers even at high resolution.

  5. Repetition priming results in sensitivity attenuation

    PubMed Central

    Allenmark, Fredrik; Hsu, Yi-Fang; Roussel, Cedric; Waszak, Florian

    2015-01-01

    Repetition priming refers to the change in the ability to perform a task on a stimulus as a consequence of a former encounter with that very same item. Usually, repetition results in faster and more accurate performance. In the present study, we used a contrast discrimination protocol to assess perceptual sensitivity and response bias of Gabor gratings that are either repeated (same orientation) or alternated (different orientation). We observed that contrast discrimination performance is worse, not better, for repeated than for alternated stimuli. In a second experiment, we varied the probability of stimulus repetition, thus testing whether the repetition effect is due to bottom-up or top-down factors. We found that it is top-down expectation that determines the effect. We discuss the implication of these findings for repetition priming and related phenomena as sensory attenuation. This article is part of a Special Issue entitled SI: Prediction and Attention. PMID:25819554

  6. First results from a prototype dynamic attenuator system

    NASA Astrophysics Data System (ADS)

    Hsieh, Scott S.; Peng, Mark V.; May, Christopher A.; Shunhavanich, Picha; Pelc, Norbert J.

    2015-03-01

    The dynamic, piecewise-linear attenuator has been proposed as a concept which can shape the radiation flux incident on the patient. By reducing the signal to photon-rich measurements and increasing the signal to photon-starved measurements, the piecewise-linear attenuator has been shown to improve dynamic range, scatter, and variance and dose metrics in simulation. The piecewise-linear nature of the proposed attenuator has been hypothesized to mitigate artifacts at transitions by eliminating jump discontinuities in attenuator thickness at these points. We report the results of a prototype implementation of this concept. The attenuator was constructed using rapid prototyping technologies and was affixed to a tabletop x-ray system. Images of several sections of an anthropomorphic pediatric phantom were produced and compared to those of the same system with uniform illumination. The thickness of the illuminated slab was limited by beam collimation and an analytic water beam hardening correction was used for both systems. Initial results are encouraging and show improved image quality, reduced dose and low artifact levels.

  7. Accurate Estimation of the Fine Layering Effect on the Wave Propagation in the Carbonate Rocks

    NASA Astrophysics Data System (ADS)

    Bouchaala, F.; Ali, M. Y.

    2014-12-01

    The attenuation experienced by a seismic wave during its propagation can be divided into two main parts: scattering and intrinsic attenuation. Scattering is an elastic redistribution of energy due to medium heterogeneities, whereas intrinsic attenuation is an inelastic phenomenon, mainly due to fluid-grain friction during the wave passage. Intrinsic attenuation is directly related to the physical characteristics of the medium, so this parameter can be used for media characterization and fluid detection, which is beneficial for the oil and gas industry. Intrinsic attenuation is estimated by subtracting the scattering from the total attenuation; therefore the accuracy of the intrinsic attenuation depends directly on the accuracy of the total attenuation and the scattering. The total attenuation can be estimated from the recorded waves using in-situ methods such as the spectral ratio and frequency shift methods. The scattering is estimated by treating the heterogeneities as a succession of stacked layers, each characterized by a single density and velocity. The accuracy of the scattering estimate depends strongly on the layer thicknesses, especially in media composed of carbonate rocks, which are known for their strong heterogeneity. Previous studies gave some assumptions for the choice of the layer thickness, but they showed limitations, especially in the case of carbonate rocks. In this study we established a relationship between the layer thicknesses and the frequency of propagation, after some mathematical development of the generalized O'Doherty-Anstey formula. We validated this relationship through synthetic tests and real data from a VSP carried out over an onshore oilfield in the emirate of Abu Dhabi in the United Arab Emirates, primarily composed of carbonate rocks. The results showed the utility of our relationship for an accurate estimation of the scattering.
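
One of the in-situ methods mentioned, the spectral ratio method, recovers the quality factor Q from the slope of the log amplitude ratio of two recordings versus frequency. A minimal sketch under the usual constant-Q assumption; variable names are illustrative.

```python
import numpy as np

def q_from_spectral_ratio(freqs, amp_near, amp_far, travel_time):
    """Spectral-ratio estimate of the quality factor Q.

    Under constant-Q absorption,
        ln(A_far(f) / A_near(f)) = -(pi * t / Q) * f + const,
    so Q is recovered from the slope of the log spectral ratio vs frequency.
    travel_time is the propagation time between the two receivers.
    """
    slope, _ = np.polyfit(freqs, np.log(amp_far / amp_near), 1)
    return -np.pi * travel_time / slope
```

Because the frequency-independent terms (geometrical spreading, transmission losses) land in the intercept, they do not bias the Q estimate, which is what makes the method attractive for field data.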

  8. Eyeglasses for Vision Correction

    MedlinePlus

    Dec. 12, 2015. Wearing eyeglasses is an easy way to correct refractive errors. Improving your vision with eyeglasses offers the opportunity to select from ...

  9. Illinois Corrections Project Report

    ERIC Educational Resources Information Center

    Hungerford, Jack

    1974-01-01

    The Illinois Corrections Project for Law-Focused Education, which brings law-focused curriculum into corrections institutions, was initiated in 1973 with a summer institute and includes programs in nine participating institutions. (JH)

  10. Optimized ultrasonic attenuation measures for non-homogeneous materials.

    PubMed

    Genovés, V; Gosálbez, J; Carrión, A; Miralles, R; Payá, J

    2016-02-01

    In this paper the study of frequency-dependent ultrasonic attenuation in strongly heterogeneous materials is addressed. To determine the attenuation accurately over a wide frequency range, suitable excitation techniques are necessary. Three kinds of transmitted signals, grouped by bandwidth into narrowband and broadband classes, have been analysed. The mathematical formulation reveals the relation between the distribution of energy in their spectra and their immunity to noise. Sinusoidal and burst signals have higher signal-to-noise ratios (SNRs) but need many measurements to cover their frequency range. However, linear swept-frequency signals (chirps) improve the effective bandwidth, covering a wide frequency range with a single measurement and equivalent accuracy, at the expense of a lower SNR. In the case of highly attenuating materials, it is proposed to use different configurations of chirp signals, enabling more energy to be injected and, therefore, improving the sensitivity of the technique without a high time cost. Thus, if the attenuation of the material and the sensitivity of the measuring equipment allow the use of broadband signals, the combination of this kind of signal and suitable signal processing results in an optimal estimate of frequency-dependent attenuation with a minimum measurement time. PMID:26432190
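
A linear swept-frequency excitation of the kind discussed can be generated in a few lines; the band and sampling rate used below are illustrative, not the study's settings.

```python
import numpy as np

def linear_chirp(f0, f1, duration, fs):
    """Linear swept-frequency excitation covering [f0, f1] Hz in one shot.

    The instantaneous frequency rises linearly at rate k = (f1 - f0) / T,
    so a single transmission probes the whole band, unlike a sine burst
    which probes a single frequency per measurement.
    """
    t = np.arange(int(duration * fs)) / fs
    k = (f1 - f0) / duration  # sweep rate, Hz/s
    return t, np.sin(2.0 * np.pi * (f0 * t + 0.5 * k * t**2))
```

Spreading the same peak amplitude over the whole band is what buys bandwidth at the cost of SNR per frequency bin, the trade-off the abstract describes.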

  11. Frequency-Dependent Attenuation of Blasting Vibration Waves

    NASA Astrophysics Data System (ADS)

    Zhou, Junru; Lu, Wenbo; Yan, Peng; Chen, Ming; Wang, Gaohui

    2016-10-01

    The dominant frequency, in addition to the peak particle velocity, is a critical factor for assessing the adverse effects of blasting vibration on surrounding structures; however, it has not been fully considered in blasting design. Therefore, the dominant-frequency-dependent attenuation mechanism of blast-induced vibration is investigated in the present research. Starting with blasting vibration induced by a spherical charge in an infinite viscoelastic medium, a modified expression of the vibration amplitude spectrum was derived to reveal the frequency dependency of attenuation. Then, ground vibration induced by the more complex and more commonly used cylindrical charge in a semi-infinite viscoelastic medium was analyzed by numerical simulation. Results demonstrate that the absorptive property of the medium causes the frequency to attenuate with distance, whereas a rapid drop or fluctuation can occur during the attenuation of ground vibration. Fluctuation usually appears at moderate to far field, and the dominant frequency generally decreases to half its original value when a rapid drop occurs. The discrepancy in decay rate between different frequency components and the multimodal structure of the vibration spectrum lead to the non-smooth frequency-dependent attenuation. The above research is verified by two field experiments. Furthermore, according to frequency-based vibration standards, frequency drop and fluctuation should be considered when evaluating blast safety. An optimized piecewise assessment is proposed for more accurate evaluation: with the frequency drop point as the breakpoint, the assessment is divided into two independent sections along the propagation path.
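
The decrease of the dominant frequency with distance follows from frequency-selective absorption: a viscoelastic filter of the form exp(-π f r / (Q c)) attenuates high frequencies faster, shifting the spectral peak downward. A sketch with an assumed Gaussian source spectrum; the Q and velocity values are illustrative, not from the study.

```python
import numpy as np

def dominant_frequency(freqs, source_spec, distance, q=30.0, c=3000.0):
    """Peak frequency of the amplitude spectrum after viscoelastic absorption.

    Applies the constant-Q absorption factor exp(-pi * f * r / (Q * c))
    to an assumed source spectrum and returns the frequency of the maximum.
    q and c defaults are illustrative.
    """
    spec = source_spec * np.exp(-np.pi * freqs * distance / (q * c))
    return freqs[np.argmax(spec)]
```

Because every frequency decays exponentially at its own rate, the peak of a single-lobed spectrum drifts smoothly downward; the rapid drops and fluctuations reported above arise when the spectrum is multimodal and a different lobe takes over as the maximum.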

  12. Teaching Politically Correct Language

    ERIC Educational Resources Information Center

    Tsehelska, Maryna

    2006-01-01

    This article argues that teaching politically correct language to English learners provides them with important information and opportunities to be exposed to cultural issues. The author offers a brief review of how political correctness became an issue and how being politically correct influences the use of language. The article then presents…

  13. Research in Correctional Rehabilitation.

    ERIC Educational Resources Information Center

    Rehabilitation Services Administration (DHEW), Washington, DC.

    Forty-three leaders in corrections and rehabilitation participated in the seminar, planned to provide an indication of the status of research in correctional rehabilitation. Papers include: (1) "Program Trends in Correctional Rehabilitation" by John P. Conrad, (2) "Federal Offenders Rehabilitation Program" by Percy B. Bell and Merlyn Mathews, (3)…

  14. How flatbed scanners upset accurate film dosimetry

    NASA Astrophysics Data System (ADS)

    van Battum, L. J.; Huizenga, H.; Verdaasdonk, R. M.; Heukelom, S.

    2016-01-01

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a change in scanner readout along the lateral scan axis. Although anisotropic light scattering has been presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate the effect of light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high-contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels at the extreme lateral position. Light polarization due to the film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and, therefore, determination of the LSE per color channel and of the dose delivered to the film.
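
In the spirit of the lateral correction the authors call for, a simplified single-channel sketch: calibrate the lateral response from a scan of a uniformly irradiated film strip, fit it, and normalize to the centre of the scan axis. The per-channel, per-dose calibration the paper shows to be necessary is deliberately omitted here; function and variable names are illustrative.

```python
import numpy as np

def lateral_correction_map(lateral_pos, uniform_scan, deg=2):
    """Multiplicative lateral-scan-effect correction from a calibration scan.

    uniform_scan holds the readout of a uniformly irradiated film strip at
    the lateral positions in lateral_pos. A low-order polynomial fit of the
    response is normalized to the position closest to the scan-axis centre,
    so multiplying any subsequent scan by the returned map flattens the LSE.
    """
    coeffs = np.polyfit(lateral_pos, uniform_scan, deg)
    fit = np.polyval(coeffs, lateral_pos)
    centre = fit[np.argmin(np.abs(lateral_pos))]
    return centre / fit
```

Since the paper finds the LSE magnitude is dose dependent (up to 14% at 9 Gy), a practical version would build one such map per color channel and per dose level and interpolate between them.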

  15. How flatbed scanners upset accurate film dosimetry.

    PubMed

    van Battum, L J; Huizenga, H; Verdaasdonk, R M; Heukelom, S

    2016-01-21

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a change in scanner readout along the lateral scan axis. Although anisotropic light scattering has been presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate the effect of light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high-contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels at the extreme lateral position. Light polarization due to the film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and, therefore, determination of the LSE per color channel and of the dose delivered to the film.

  16. Ultrasonic Attenuation Results of Thermoplastic Resin Composites Undergoing Thermal and Fatigue Loading

    NASA Technical Reports Server (NTRS)

    Madaras, Eric I.

    1998-01-01

As part of an effort to obtain the required information about new composites for aviation use, materials and NDE researchers at NASA are jointly performing mechanical and NDE measurements on new composite materials. The materials testing laboratory at NASA is equipped with environmental chambers mounted on load frames that can expose composite materials to thermal and loading cycles representative of flight protocols. Applying both temperature and load simultaneously will help to highlight temperature and load interactions during the aging of these composite materials. This report highlights our initial ultrasonic attenuation results from thermoplastic composite samples that have undergone over 4000 flight cycles to date. Ultrasonic attenuation measurements are a standard method used to assess the effects of material degradation. Recently, researchers have shown that they could obtain adequate contrast in the evaluation of thermal degradation in thermoplastic composites by using frequencies of ultrasound on the order of 24 MHz. In this study, we address the relationship of attenuation measured at lower frequencies in thermoplastic composites undergoing both thermal and mechanical loading. We also compare these thermoplastic results with some data from thermoset composites undergoing similar protocols. The composite's attenuation is reported as the slope of attenuation with respect to frequency, defined as b = Δα(f)/Δf. The slope of attenuation is an attractive parameter since it is quantitative, yet does not require interface corrections like conventional quantitative attenuation measurements. This latter feature is a consequence of the assumption that interface correction terms are frequency independent. Uncertainty in those correction terms can compromise the value of conventional quantitative attenuation data. Furthermore, the slope of the attenuation more directly utilizes the bandwidth information; in addition, the bandwidth can be adjusted in the post
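The slope definition b = Δα(f)/Δf can be sketched in code. A hedged illustration on synthetic data (not the report's measurements): fitting a line to attenuation versus frequency uses the full bandwidth, and a frequency-independent interface loss lands in the intercept rather than contaminating the slope.

```python
import numpy as np

def attenuation_slope(freqs_mhz, alpha_db_per_cm):
    """Least-squares slope b = d(alpha)/d(f) of attenuation vs frequency.

    Returns (slope, intercept); the intercept absorbs any
    frequency-independent interface loss.
    """
    b, intercept = np.polyfit(freqs_mhz, alpha_db_per_cm, 1)
    return float(b), float(intercept)

# Synthetic example: alpha = 0.5 dB/cm/MHz * f + 2 dB interface loss.
f = np.linspace(1.0, 10.0, 20)
alpha = 0.5 * f + 2.0
b, c = attenuation_slope(f, alpha)
```

Here `b` recovers 0.5 dB/cm/MHz regardless of the 2 dB offset, which is the property the abstract highlights.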

  17. The Utility of CBM Written Language Indices: An Investigation of Production-Dependent, Production-Independent, and Accurate-Production Scores

    ERIC Educational Resources Information Center

    Jewell, Jennifer; Malecki, Christine Kerres

    2005-01-01

    This study examined the utility of three categories of CBM written language indices including production-dependent indices (Total Words Written, Words Spelled Correctly, and Correct Writing Sequences), production-independent indices (Percentage of Words Spelled Correctly and Percentage of Correct Writing Sequences), and an accurate-production…

  18. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  19. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  20. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  1. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  2. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  3. BASIC: A Simple and Accurate Modular DNA Assembly Method.

    PubMed

    Storch, Marko; Casini, Arturo; Mackrow, Ben; Ellis, Tom; Baldwin, Geoff S

    2017-01-01

    Biopart Assembly Standard for Idempotent Cloning (BASIC) is a simple, accurate, and robust DNA assembly method. The method is based on linker-mediated DNA assembly and provides highly accurate DNA assembly with 99 % correct assemblies for four parts and 90 % correct assemblies for seven parts [1]. The BASIC standard defines a single entry vector for all parts flanked by the same prefix and suffix sequences and its idempotent nature means that the assembled construct is returned in the same format. Once a part has been adapted into the BASIC format it can be placed at any position within a BASIC assembly without the need for reformatting. This allows laboratories to grow comprehensive and universal part libraries and to share them efficiently. The modularity within the BASIC framework is further extended by the possibility of encoding ribosomal binding sites (RBS) and peptide linker sequences directly on the linkers used for assembly. This makes BASIC a highly versatile library construction method for combinatorial part assembly including the construction of promoter, RBS, gene variant, and protein-tag libraries. In comparison with other DNA assembly standards and methods, BASIC offers a simple robust protocol; it relies on a single entry vector, provides for easy hierarchical assembly, and is highly accurate for up to seven parts per assembly round [2]. PMID:27671933

  5. Simultaneous iterative reconstruction of emission and attenuation images in positron emission tomography from emission data only.

    PubMed

    Landmann, M; Reske, S N; Glatting, G

    2002-09-01

    For quantitative image reconstruction in positron emission tomography attenuation correction is mandatory. In case that no data are available for the calculation of the attenuation correction factors one can try to determine them from the emission data alone. However, it is not clear if the information content is sufficient to yield an adequate attenuation correction together with a satisfactory activity distribution. Therefore, we determined the log likelihood distribution for a thorax phantom depending on the choice of attenuation and activity pixel values to measure the crosstalk between both. In addition an iterative image reconstruction (one-dimensional Newton-type algorithm with a maximum likelihood estimator), which simultaneously reconstructs the images of the activity distribution and the attenuation coefficients is used to demonstrate the problems and possibilities of such a reconstruction. As result we show that for a change of the log likelihood in the range of statistical noise, the associated change in the activity value of a structure is between 6% and 263%. In addition, we show that it is not possible to choose the best maximum on the basis of the log likelihood when a regularization is used, because the coupling between different structures mediated by the (smoothing) regularization prevents an adequate solution due to crosstalk. We conclude that taking into account the attenuation information in the emission data improves the performance of image reconstruction with respect to the bias of the activities, however, the reconstruction still is not quantitative.
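The attenuation correction factors the abstract refers to can be sketched for a single line of response. A minimal illustration, assuming a discretized attenuation map sampled along the line (the μ value for water at 511 keV is a textbook figure, not from the paper):

```python
import numpy as np

def attenuation_correction_factor(mu_along_lor, step_cm):
    """ACF for one line of response (LOR) through a discretized mu-map.

    In PET a coincidence pair is attenuated by exp(-sum(mu * dl)) along
    the whole LOR, independent of where on the line the annihilation
    occurred, so the corrected emission value is counts * ACF.
    """
    return float(np.exp(np.sum(np.asarray(mu_along_lor) * step_cm)))

# 20 cm of water-equivalent tissue (mu ~ 0.096 /cm at 511 keV),
# sampled in 0.5 cm steps along the LOR.
mu_line = np.full(40, 0.096)
acf = attenuation_correction_factor(mu_line, 0.5)
```

When no transmission data exist, the simultaneous reconstruction discussed above must estimate `mu_line` itself from the emission data, which is exactly where the activity/attenuation crosstalk arises.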

  6. Flying helmet attenuation and its measurement, with particular reference to the Mk 4 helmet

    NASA Astrophysics Data System (ADS)

    Rood, G. M.

    1981-06-01

    To predict the intelligibility of communication systems, it is necessary to be able to measure helmet attenuation accurately and repeatably, and it is this particular aspect which is highlighted. Some of the results from a comprehensive series of tests involving subjective and semiobjective measurement of the attenuation of noise by flying helmets are discussed. The analysis shows that the semiobjective method of ascertaining hearing protector or flying helmet attenuation, using miniature measuring microphones, is a viable alternative to the existing standard REAT methods, and has considerable advantages in providing more useful information in less time. Additionally, high correlations exist between laboratory and in-flight measurements of attenuation, clearly indicating that laboratory measurements reproduce helmet attenuation actually found in the air.

  7. Source distribution dependent scatter correction for PVI

    SciTech Connect

    Barney, J.S.; Harrop, R.; Dykstra, C.J. (School of Computing Science, TRIUMF, Vancouver, British Columbia)

    1993-08-01

    Source distribution dependent scatter correction methods which incorporate different amounts of information about the source position and material distribution have been developed and tested. The techniques use image to projection integral transformation incorporating varying degrees of information on the distribution of scattering material, or convolution subtraction methods, with some information about the scattering material included in one of the convolution methods. To test the techniques, the authors apply them to data generated by Monte Carlo simulations which use geometric shapes or a voxelized density map to model the scattering material. Source position and material distribution have been found to have some effect on scatter correction. An image to projection method which incorporates a density map produces accurate scatter correction but is computationally expensive. Simpler methods, both image to projection and convolution, can also provide effective scatter correction.

  8. Shuttle program: Computing atmospheric scale height for refraction corrections

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1980-01-01

    Methods for computing the atmospheric scale height to determine radio wave refraction were investigated for different atmospheres, and different angles of elevation. Tables of refractivity versus altitude are included. The equations used to compute the refraction corrections are given. It is concluded that very accurate corrections are determined with the assumption of an exponential atmosphere.
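The exponential-atmosphere model the report concludes with can be sketched numerically. The refractivity constants below are illustrative values for this sketch, not the figures from the report's tables:

```python
import math

def refractivity(h_km, n0=313.0, scale_height_km=7.0):
    """Refractivity of an exponential model atmosphere: N(h) = N0 * exp(-h/H)."""
    return n0 * math.exp(-h_km / scale_height_km)

def scale_height_from_two_levels(h1_km, n1, h2_km, n2):
    """Recover the scale height H from refractivity samples at two
    altitudes: H = (h2 - h1) / ln(N1 / N2)."""
    return (h2_km - h1_km) / math.log(n1 / n2)

# Consistency check: the two-level estimate recovers the model's H.
h_est = scale_height_from_two_levels(0.0, refractivity(0.0),
                                     10.0, refractivity(10.0))
```

With the scale height in hand, the refraction correction follows from integrating the bending along the ray for the given elevation angle, which is the step tabulated in the report.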

  9. Sound attenuation in magnetorheological fluids

    NASA Astrophysics Data System (ADS)

    Rodríguez-López, J.; Elvira, L.; Resa, P.; Montero de Espinosa, F.

    2013-02-01

    In this work, the attenuation of ultrasonic elastic waves propagating through magnetorheological (MR) fluids is analysed as a function of the particle volume fraction and the magnetic field intensity. Non-commercial MR fluids made with iron ferromagnetic particles and two different solvents (an olive oil based solution and an Araldite-epoxy) were used. Particle volume fractions of up to 0.25 were analysed. It is shown that the attenuation of sound depends strongly on the solvent used and the volume fraction. The influence of a magnetic field up to 212 mT was studied and it was found that the sound attenuation increases with the magnetic intensity until saturation is reached. A hysteretic effect is evident once the magnetic field is removed.

  10. [Orthognathic surgery: corrective bone operations].

    PubMed

    Reuther, J

    2000-05-01

    The article reviews the history of orthognathic surgery from the middle of the last century up to the present. Initially, mandibular osteotomies were only performed in cases of severe malformations. But during the last century a precise and standardized procedure for correction of the mandible was established. Multiple modifications allowed control of small fragments, functionally stable osteosynthesis, and finally a precise positioning of the condyle. In 1955 Obwegeser and Trauner introduced the sagittal split osteotomy by an intraoral approach. It was the final breakthrough for orthognathic surgery as a standard treatment for corrections of the mandible. Surgery of the maxilla dates back to the nineteenth century. B. von Langenbeck from Berlin is said to have performed the first Le Fort I osteotomy in 1859. After minor changes, Wassmund corrected a posttraumatic malocclusion by a Le Fort I osteotomy in 1927. But it was Axhausen who risked the total mobilization of the maxilla in 1934. By additional modifications and further refinements, Obwegeser paved the way for this approach to become a standard procedure in maxillofacial surgery. Tessier mobilized the whole midface by a Le Fort III osteotomy and showed new perspectives in the correction of severe malformations of the facial bones, creating the basis of modern craniofacial surgery. While the last 150 years were distinguished by the creation and standardization of surgical methods, the present focus lies on precise treatment planning and the consideration of functional aspects of the whole stomatognathic system. To date, 3D visualization by CT scans, stereolithographic models, and computer-aided treatment planning and simulation allow surgery of complex cases and accurate predictions of soft tissue changes.

  11. Clarifying types of uncertainty: when are models accurate, and uncertainties small?

    PubMed

    Cox, Louis Anthony Tony

    2011-10-01

    Professor Aven has recently noted the importance of clarifying the meaning of terms such as "scientific uncertainty" for use in risk management and policy decisions, such as when to trigger application of the precautionary principle. This comment examines some fundamental conceptual challenges for efforts to define "accurate" models and "small" input uncertainties by showing that increasing uncertainty in model inputs may reduce uncertainty in model outputs; that even correct models with "small" input uncertainties need not yield accurate or useful predictions for quantities of interest in risk management (such as the duration of an epidemic); and that accurate predictive models need not be accurate causal models.

  12. SEISMIC ATTENUATION FOR RESERVOIR CHARACTERIZATION

    SciTech Connect

    Joel Walls; M.T. Taner; Naum Derzhi; Gary Mavko; Jack Dvorkin

    2003-04-01

    In this report we will show results of seismic and well log derived attenuation attributes from a deep water Gulf of Mexico data set. This data was contributed by Burlington Resources and Seitel Inc. The data consists of ten square kilometers of 3D seismic data and three well penetrations. We have computed anomalous seismic absorption attributes on the seismic data and have computed Q from the well log curves. The results show a good correlation between the anomalous absorption (attenuation) attributes and the presence of gas as indicated by well logs.

  13. An accurate method for two-point boundary value problems

    NASA Technical Reports Server (NTRS)

    Walker, J. D. A.; Weigand, G. G.

    1979-01-01

    A second-order method for solving two-point boundary value problems on a uniform mesh is presented where the local truncation error is obtained for use with the deferred correction process. In this simple finite difference method the tridiagonal nature of the classical method is preserved but the magnitude of each term in the truncation error is reduced by a factor of two. The method is applied to a number of linear and nonlinear problems and it is shown to produce more accurate results than either the classical method or the technique proposed by Keller (1969).
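The classical second-order scheme that the deferred-correction method builds on can be sketched. This is a minimal sketch of the tridiagonal central-difference solve for y'' = f(x) with Dirichlet data, not the paper's deferred-correction step itself; the dense solve stands in for the tridiagonal algorithm for brevity.

```python
import numpy as np

def solve_bvp_fd(f, a, b, ya, yb, n):
    """Second-order central-difference solve of y'' = f(x), y(a)=ya, y(b)=yb.

    Interior equations: (y[i-1] - 2*y[i] + y[i+1]) / h**2 = f(x[i]),
    giving a tridiagonal linear system in the interior unknowns.
    """
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    A = (np.diag(-2.0 * np.ones(n - 1))
         + np.diag(np.ones(n - 2), 1)
         + np.diag(np.ones(n - 2), -1))
    rhs = h * h * f(x[1:-1])
    rhs[0] -= ya        # move known boundary values to the right-hand side
    rhs[-1] -= yb
    y = np.empty(n + 1)
    y[0], y[-1] = ya, yb
    y[1:-1] = np.linalg.solve(A, rhs)
    return x, y

# Test problem: y'' = -pi^2 sin(pi x), y(0)=y(1)=0, exact y = sin(pi x).
x, y = solve_bvp_fd(lambda t: -np.pi**2 * np.sin(np.pi * t),
                    0.0, 1.0, 0.0, 0.0, 64)
err = float(np.max(np.abs(y - np.sin(np.pi * x))))
```

The deferred-correction idea is then to estimate the local truncation error of this solve and re-solve the same tridiagonal system with the error estimate added to the right-hand side.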

  14. Accurate pressure gradient calculations in hydrostatic atmospheric models

    NASA Technical Reports Server (NTRS)

    Carroll, John J.; Mendez-Nunez, Luis R.; Tanrikulu, Saffet

    1987-01-01

    A method for the accurate calculation of the horizontal pressure gradient acceleration in hydrostatic atmospheric models is presented which is especially useful in situations where the isothermal surfaces are not parallel to the vertical coordinate surfaces. The present method is shown to be exact if the potential temperature lapse rate is constant between the vertical pressure integration limits. The technique is applied to both the integration of the hydrostatic equation and the computation of the slope correction term in the horizontal pressure gradient. A fixed vertical grid and a dynamic grid defined by the significant levels in the vertical temperature distribution are employed.

  15. A technique for correcting ERTS data for solar and atmospheric effects

    NASA Technical Reports Server (NTRS)

    Rogers, R. H.; Peacock, K.

    1973-01-01

    A technique is described by which an ERTS investigator can obtain absolute target reflectances by correcting spacecraft radiance measurements for variable target irradiance, atmospheric attenuation, and atmospheric backscatter. A simple measuring instrument and the necessary atmospheric measurements are discussed, and examples demonstrate the nature and magnitude of the atmospheric corrections.

  16. Attenuation of peak sound pressure levels of shooting noise by hearing protective earmuffs.

    PubMed

    Lenzuni, Paolo; Sangiorgi, Tommaso; Cerini, Luigi

    2012-01-01

    Transmission losses (TL) to highly impulsive signals generated by three firearms have been measured for two earmuffs, using both a head and torso simulator and a miniature microphone located at the ear canal entrance (MIRE technique). Peak SPL TL have been found to be well approximated by 40 ms short-Leq TL. This has allowed the use of transmissibilities and correction factors for bone conduction and physiological masking appropriate for continuous noise in the calculation of REAT-type peak insertion losses (IL). Results indicate that peak IL can be well predicted by estimates based on one-third octave band 40 ms short-Leq and manufacturer-declared (nominal) IL measured for continuous noise according to test standards. Such predictions tend to be more accurate at the high end of the range, while they are less reliable when the attenuation is lower. A user-friendly simplified prediction algorithm has also been developed, which only requires nominal IL and one-third octave sound exposure level spectra. Separate predictions are possible for IL in direct and diffuse sound fields, albeit with higher uncertainties, due to the smaller number of experimental data comprising the two separate datasets on which such predictions are based.
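The band-wise prediction described above reduces to an energy sum over one-third octave bands. A hedged sketch with a two-band toy spectrum (illustrative numbers, not the study's firearm data):

```python
import math

def protected_level(band_levels_db, band_il_db):
    """Overall protected sound level from per-band levels and insertion
    losses, by energy summation:

        L = 10 * log10( sum_i 10**((L_i - IL_i) / 10) )
    """
    return 10.0 * math.log10(
        sum(10.0 ** ((l - il) / 10.0)
            for l, il in zip(band_levels_db, band_il_db))
    )

# Two equal 100 dB bands with 20 dB insertion loss each: 80 dB per band,
# and the energy sum of two equal levels adds 10*log10(2) ~ 3 dB.
overall = protected_level([100.0, 100.0], [20.0, 20.0])
```

In a real application the band levels would be the measured one-third octave sound exposure level spectrum and the IL values the nominal, manufacturer-declared ones.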

  17. Flagella Overexpression Attenuates Salmonella Pathogenesis

    PubMed Central

    Yang, Xinghong; Thornburg, Theresa; Suo, Zhiyong; Jun, SangMu; Robison, Amanda; Li, Jinquan; Lim, Timothy; Cao, Ling; Hoyt, Teri; Avci, Recep; Pascual, David W.

    2012-01-01

    Flagella are cell surface appendages involved in a number of bacterial behaviors, such as motility, biofilm formation, and chemotaxis. Despite these important functions, flagella can pose a liability to a bacterium when serving as potent immunogens resulting in the stimulation of the innate and adaptive immune systems. Previous work showing appendage overexpression, referred to as attenuating gene expression (AGE), was found to enfeeble wild-type Salmonella. Thus, this approach was adapted to discern whether flagella overexpression could induce similar attenuation. To test its feasibility, flagellar filament subunit FliC and flagellar regulon master regulator FlhDC were overexpressed in Salmonella enterica serovar Typhimurium wild-type strain H71. The results show that the expression of either FliC or FlhDC alone, and co-expression of the two, significantly attenuates Salmonella. The flagellated bacilli were unable to replicate within macrophages and thus were not lethal to mice. In-depth investigation suggests that flagellum-mediated AGE was due to the disruptive effects of flagella on the bacterial membrane, resulting in heightened susceptibilities to hydrogen peroxide and bile. Furthermore, flagellum-attenuated Salmonella elicited elevated immune responses to Salmonella presumably via FliC’s adjuvant effect and conferred robust protection against wild-type Salmonella challenge. PMID:23056473

  18. SEISMIC ATTENUATION FOR RESERVOIR CHARACTERIZATION

    SciTech Connect

    Joel Walls; M.T. Taner; Gary Mavko; Jack Dvorkin

    2002-07-01

    In fully-saturated rock and at ultrasonic frequencies, the microscopic squirt flow induced between the stiff and soft parts of the pore space by an elastic wave is responsible for velocity-frequency dispersion and attenuation. In the seismic frequency range, it is the macroscopic cross-flow between the stiffer and softer parts of the rock. We use the latter hypothesis to introduce simple approximate equations for velocity-frequency dispersion and attenuation in a fully water saturated reservoir. The equations are based on the assumption that in heterogeneous rock and at a very low frequency, the effective elastic modulus of the fully-saturated rock can be estimated by applying a fluid substitution procedure to the averaged (upscaled) dry frame whose effective porosity is the mean porosity and the effective elastic modulus is the Backus average (geometric mean) of the individual dry-frame elastic moduli of parts of the rock. At a higher frequency, the effective elastic modulus of the saturated rock is the Backus average of the individual fully-saturated-rock elastic moduli of parts of the rock. The difference between the effective elastic moduli calculated separately by these two methods determines the velocity-frequency dispersion. The corresponding attenuation is calculated from this dispersion by using, e.g., the standard linear solid attenuation model.

  19. Detailed Study of Seismic Wave Attenuation in Carbonate Rocks: Application on Abu Dhabi Oil Fields

    NASA Astrophysics Data System (ADS)

    Bouchaala, F.; Ali, M. Y.; Matsushima, J.

    2015-12-01

    Seismic wave attenuation is a promising attribute for petroleum exploration, thanks to its high sensitivity to the physical properties of the subsurface. It can be used to enhance seismic imaging and improve geophysical interpretation, which is crucial for reservoir characterization. However, obtaining an accurate attenuation profile is not an easy task, owing to the complex mechanism of this parameter, even though many studies have been carried out to understand it. The degree of difficulty increases in media composed of carbonate rocks, which are known to be highly heterogeneous and of complex lithology; that is why few attenuation studies have been carried out successfully in carbonate rocks. The main objectives of this study are: (1) obtaining accurate, high-resolution attenuation profiles from several oil fields (resolution is a very important target for us, because many reservoirs in Abu Dhabi oil fields are tight); (2) separation between the different modes of wave attenuation (scattering and intrinsic attenuation); (3) correlation between the attenuation profiles and other logs (porosity, resistivity, oil saturation…), in order to establish a relationship which can be used to detect reservoir properties from the attenuation profiles; (4) comparison of attenuation estimated from VSP and sonic waveforms; and (5) providing the spatial distribution of attenuation in Abu Dhabi oil fields. To reach these objectives we implemented a robust processing flow and a new methodology to estimate attenuation from the downgoing waves of compressional VSP data and waveforms acquired from several wells drilled in Abu Dhabi. The subsurface geology of this area is primarily composed of carbonate rocks and is known to be highly fractured, which complicates the situation further; we nevertheless successfully separated the intrinsic attenuation from the scattering. The results show that the scattering is significant and cannot be ignored. We also found a very interesting correlation between the attenuation profiles and the

  20. Estimation of Organ Activity using Four Different Methods of Background Correction in Conjugate View Method

    PubMed Central

    Shanei, Ahmad; Afshin, Maryam; Moslehi, Masoud; Rastaghi, Sedighe

    2015-01-01

    To make an accurate estimation of the uptake of radioactivity in an organ using the conjugate view method, corrections of physical factors, such as background activity, scatter, and attenuation are needed. The aim of this study was to evaluate the accuracy of four different methods for background correction in activity quantification of the heart in myocardial perfusion scans. The organ activity was calculated using the conjugate view method. A number of 22 healthy volunteers were injected with 17–19 mCi of 99mTc-methoxy-isobutyl-isonitrile (MIBI) at rest or during exercise. Images were obtained by a dual-headed gamma camera. Four methods for background correction were applied: (1) Conventional correction (referred to as the Gates' method), (2) Buijs method, (3) BgdA subtraction, (4) BgdB subtraction. To evaluate the accuracy of these methods, the results of the calculations using the above-mentioned methods were compared with the reference results. The calculated uptake in the heart using conventional method, Buijs method, BgdA subtraction, and BgdB subtraction methods was 1.4 ± 0.7% (P < 0.05), 2.6 ± 0.6% (P < 0.05), 1.3 ± 0.5% (P < 0.05), and 0.8 ± 0.3% (P < 0.05) of injected dose (I.D) at rest and 1.8 ± 0.6% (P > 0.05), 3.1 ± 0.8% (P > 0.05), 1.9 ± 0.8% (P < 0.05), and 1.2 ± 0.5% (P < 0.05) of I.D, during exercise. The mean estimated myocardial uptake of 99mTc-MIBI was dependent on the correction method used. Comparison among the four different methods of background activity correction applied in this study showed that the Buijs method was the most suitable method for background correction in myocardial perfusion scan. PMID:26955568
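The conjugate view estimate underlying all four background-correction variants is the geometric mean of opposed (anterior/posterior) views with a patient-transmission correction. A hedged sketch with illustrative parameter values, assuming background and scatter have already been subtracted from the counts (the step on which the four methods in the abstract differ):

```python
import math

def conjugate_view_activity(counts_ant, counts_post, mu_per_cm,
                            thickness_cm, sensitivity_cps_per_mbq):
    """Geometric-mean conjugate view activity estimate (sketch).

    A = sqrt(I_ant * I_post / T) / S, where T = exp(-mu * L) is the
    transmission through the patient thickness L and S is the camera
    sensitivity. Counts are assumed background/scatter corrected.
    """
    transmission = math.exp(-mu_per_cm * thickness_cm)
    return math.sqrt(counts_ant * counts_post / transmission) / sensitivity_cps_per_mbq

# Illustrative numbers only: mu ~ 0.12 /cm for 140 keV photons in soft
# tissue, a 20 cm thick patient, and a sensitivity of 200 cps/MBq.
activity_mbq = conjugate_view_activity(1500.0, 900.0, 0.12, 20.0, 200.0)
```

The attraction of the geometric mean is that, for a point source, the dependence on source depth cancels and only the total thickness L remains.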

  3. Accurate theoretical chemistry with coupled pair models.

    PubMed

    Neese, Frank; Hansen, Andreas; Wennmohs, Frank; Grimme, Stefan

    2009-05-19

    Quantum chemistry has found its way into the everyday work of many experimental chemists. Calculations can predict the outcome of chemical reactions, afford insight into reaction mechanisms, and be used to interpret structure and bonding in molecules. Thus, contemporary theory offers tremendous opportunities in experimental chemical research. However, even with present-day computers and algorithms, we cannot solve the many particle Schrödinger equation exactly; inevitably some error is introduced in approximating the solutions of this equation. Thus, the accuracy of quantum chemical calculations is of critical importance. The affordable accuracy depends on molecular size and particularly on the total number of atoms: for orientation, ethanol has 9 atoms, aspirin 21 atoms, morphine 40 atoms, sildenafil 63 atoms, paclitaxel 113 atoms, insulin nearly 800 atoms, and quaternary hemoglobin almost 12,000 atoms. Currently, molecules with up to approximately 10 atoms can be very accurately studied by coupled cluster (CC) theory, approximately 100 atoms with second-order Møller-Plesset perturbation theory (MP2), approximately 1000 atoms with density functional theory (DFT), and beyond that number with semiempirical quantum chemistry and force-field methods. The overwhelming majority of present-day calculations in the 100-atom range use DFT. Although these methods have been very successful in quantum chemistry, they do not offer a well-defined hierarchy of calculations that allows one to systematically converge to the correct answer. Recently a number of rather spectacular failures of DFT methods have been found, even for seemingly simple systems such as hydrocarbons, fueling renewed interest in wave function-based methods that incorporate the relevant physics of electron correlation in a more systematic way. Thus, it would be highly desirable to fill the gap between 10 and 100 atoms with highly correlated ab initio methods. We have found that one of the earliest (and now

  4. Three-Dimensional Seismic Attenuation Structure in the Ryukyu Arc, Japan

    NASA Astrophysics Data System (ADS)

    Komatsu, M.; Takenaka, H.

    2015-12-01

    Tomographic studies have been conducted to retrieve the 3D seismic attenuation structure around the Japan Arc since the 1980s. However, in the Ryukyu Arc, the 3D attenuation structure has never been estimated. It is important to estimate the 3D attenuation structure in this region, since there are highly active volcanoes and seismicity between the Okinawa Trough and the Ryukyu Trench. In this study, we estimate the 3D seismic attenuation structure in the Ryukyu Arc. We use seismic waveform data recorded by the seismic observation networks of NIED, JMA, and Kagoshima University from 2004/06 to 2014/05, selecting more than 4,500 seismic events. Since the Ryukyu Arc region is so wide, we separate it into three subregions: the Sakishima Islands, Okinawa Islands, and Amami Islands subregions. Before calculating the attenuation quantity t*, the corner frequency of the source spectrum for each event is estimated using an omega-square model. The t* is estimated from the amplitude decay rate of the source-corrected spectra. We then invert the t* data for the attenuation structure with a 3D tomographic technique using the non-negative least squares method. Our estimated attenuation structure has several remarkable features: in the Sakishima Islands subregion, a high-attenuation zone exists beneath northern Ishigaki Island, corresponding to the Okinawa Trough. A high-attenuation zone also exists beneath Hateruma Island in the upper crust, corresponding to the accretionary prism formed by the subducting Philippine Sea Plate. In the Amami Islands subregion, a high-attenuation zone is located along the volcanic front. A low-attenuation zone spreads over the subducting Philippine Sea slab in all subregions. Acknowledgements: We used JMA Unified Hypocenter Catalogs and seismic waveform data recorded by NIED, JMA and Kagoshima University. We also used a computer program by Zhao et al. (1992, JGR) for the tomographic analysis.
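    The t* measurement described here amounts to fitting the decay rate of the log source-corrected amplitude spectrum: under the standard model A(f) = G·exp(−π f t*), the slope of ln A versus frequency is −π t*. A minimal sketch on synthetic data (the frequency band, gain, and t* value are invented for illustration):

```python
import math

def estimate_tstar(freqs, log_amps):
    """Least-squares slope of ln|A(f)| vs f; t* = -slope / pi."""
    n = len(freqs)
    fbar = sum(freqs) / n
    abar = sum(log_amps) / n
    num = sum((f - fbar) * (a - abar) for f, a in zip(freqs, log_amps))
    den = sum((f - fbar) ** 2 for f in freqs)
    return -(num / den) / math.pi

# Synthetic source-corrected spectrum with t* = 0.05 s and an arbitrary gain.
tstar_true = 0.05
freqs = [float(f) for f in range(2, 30)]
log_amps = [math.log(3.0) - math.pi * f * tstar_true for f in freqs]
tstar = estimate_tstar(freqs, log_amps)
```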

  5. New analytical approach for neutron beam-hardening correction.

    PubMed

    Hachouf, N; Kharfi, F; Hachouf, M; Boucenna, A

    2016-01-01

    In neutron imaging, the beam-hardening effect significantly affects quantitative and qualitative image interpretation. This study proposes a linearization method for beam-hardening correction, based on a new analytical approach that establishes the attenuation coefficient as a function of neutron energy. The spectral energy shift due to beam hardening is studied on the basis of Monte Carlo N-Particle (MCNP) simulated data and the analytical data, and good agreement between the MCNP and analytical values has been found; the beam-hardening effect is thus well accounted for by the proposed approach. A correction procedure is developed to correct beam-hardening errors in neutron transmission, and therefore to correct projection data. The effectiveness of this procedure is determined by its application in correcting reconstructed images.
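    A linearization correction of this kind is typically implemented as a calibration lookup: measured (beam-hardened) attenuation values are mapped back to an equivalent thickness, which is then rescaled by a reference attenuation coefficient. The saturating calibration curve below is an invented stand-in for measured or MCNP-simulated data:

```python
import bisect

def linearize(p_measured, p_cal, x_cal, mu_ref):
    """Map a beam-hardened attenuation value back to mu_ref * thickness.

    p_cal must be monotonically increasing attenuation values measured
    (or simulated) for the known thicknesses x_cal.
    """
    i = bisect.bisect_left(p_cal, p_measured)
    i = min(max(i, 1), len(p_cal) - 1)          # clamp for extrapolation
    frac = (p_measured - p_cal[i - 1]) / (p_cal[i] - p_cal[i - 1])
    thickness = x_cal[i - 1] + frac * (x_cal[i] - x_cal[i - 1])
    return mu_ref * thickness

# Hypothetical calibration: attenuation saturates with thickness as the beam
# hardens, p(x) = 2*x / (1 + 0.1*x); mu_ref = 2.0 is the thin-sample
# (unhardened) attenuation coefficient.
x_cal = [0.5 * k for k in range(21)]            # 0 .. 10 cm
p_cal = [2.0 * x / (1.0 + 0.1 * x) for x in x_cal]
p_corrected = linearize(2.0 * 5.0 / (1.0 + 0.1 * 5.0), p_cal, x_cal, 2.0)
```

After linearization, a 5 cm sample maps back to the monochromatic-equivalent value mu_ref × 5.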

  6. Corrective measures evaluation report for Tijeras Arroyo groundwater.

    SciTech Connect

    Witt, Johnathan L; Orr, Brennon R.; Dettmers, Dana L.; Hall, Kevin A.; Howard, M. Hope

    2005-08-01

    This Corrective Measures Evaluation report was prepared as directed by a Compliance Order on Consent issued by the New Mexico Environment Department to document the process of selecting the preferred remedial alternative for Tijeras Arroyo Groundwater. Supporting information includes background concerning the site conditions and potential receptors and an overview of work performed during the Corrective Measures Evaluation. The evaluation of remedial alternatives included identifying and describing four remedial alternatives, an overview of the evaluation criteria and approach, comparing remedial alternatives to the criteria, and selecting the preferred remedial alternative. As a result of the Corrective Measures Evaluation, monitored natural attenuation of the contaminants of concern (trichloroethene and nitrate) is the preferred remedial alternative for implementation as the corrective measure for Tijeras Arroyo Groundwater. Design criteria to meet cleanup goals and objectives and the corrective measures implementation schedule for the preferred remedial alternative are also presented.

  8. Spectral method for the correction of the Cerenkov light effect in plastic scintillation detectors: A comparison study of calibration procedures and validation in Cerenkov light-dominated situations

    PubMed Central

    Guillot, Mathieu; Gingras, Luc; Archambault, Louis; Beddar, Sam; Beaulieu, Luc

    2011-01-01

    Purpose: The purposes of this work were: (1) to determine if a spectral method can accurately correct the Cerenkov light effect in plastic scintillation detectors (PSDs) for situations where the Cerenkov light is dominant over the scintillation light and (2) to develop a procedural guideline for accurately determining the calibration factors of PSDs. Methods: The authors demonstrate, by using the equations of the spectral method, that the condition for accurately correcting the effect of Cerenkov light is that the ratio of the two calibration factors must be equal to the ratio of the Cerenkov light measured within the two different spectral regions used for analysis. Based on this proof, the authors propose two new procedures to determine the calibration factors of PSDs, which were designed to respect this condition. A PSD that consists of a cylindrical polystyrene scintillating fiber (1.6 mm³) coupled to a plastic optical fiber was calibrated by using these new procedures and the two reference procedures described in the literature. To validate the extracted calibration factors, relative dose profiles and output factors for a 6 MV photon beam from a medical linac were measured with the PSD and an ionization chamber. Emphasis was placed on situations where the Cerenkov light is dominant over the scintillation light and on situations dissimilar to the calibration conditions. Results: The authors found that the accuracy of the spectral method depends on the procedure used to determine the calibration factors of the PSD and on the attenuation properties of the optical fiber used. The results from the relative dose profile measurements showed that the spectral method can correct the Cerenkov light effect with an accuracy level of 1%. The results obtained also indicate that PSDs measure output factors that are lower than those measured with ionization chambers for square field sizes larger than 25 × 25 cm², in general agreement with previously published Monte Carlo
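    The spectral (chromatic) method referenced here recovers dose as a linear combination of the signals measured in two spectral windows, D = α·Q₁ + β·Q₂, with α and β obtained from two calibration irradiations that produce different amounts of Cerenkov light. A toy sketch in which the spectral splitting fractions and signal levels are invented for illustration:

```python
def calibrate(q1a, q2a, dose_a, q1b, q2b, dose_b):
    """Solve [q1 q2][alpha; beta] = dose for two calibration measurements."""
    det = q1a * q2b - q2a * q1b
    alpha = (dose_a * q2b - q2a * dose_b) / det
    beta = (q1a * dose_b - dose_a * q1b) / det
    return alpha, beta

def window_signals(scint, cerenkov):
    # Invented spectral split: scintillation and Cerenkov light deposit
    # different fractions in the "blue" and "green" analysis windows.
    q1 = 0.6 * scint + 0.8 * cerenkov
    q2 = 0.4 * scint + 0.2 * cerenkov
    return q1, q2

# Two calibration irradiations with equal dose but different Cerenkov light
# (e.g. different lengths of fiber in the beam).
alpha, beta = calibrate(*window_signals(10.0, 5.0), 10.0,
                        *window_signals(10.0, 20.0), 10.0)

# A new measurement dominated by Cerenkov light still yields the right dose.
q1, q2 = window_signals(7.0, 13.0)
dose = alpha * q1 + beta * q2
```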

  9. Stormwater Attenuation by Green Roofs

    NASA Astrophysics Data System (ADS)

    Sims, A.; O'Carroll, D. M.; Robinson, C. E.; Smart, C. C.

    2014-12-01

    Innovative municipal stormwater management technologies are urgently required in urban centers. Inadequate stormwater management can lead to excessive flooding, channel erosion, decreased stream baseflows, and degraded water quality. A major source of urban stormwater is unused roof space. Green roofs can be used as a stormwater management tool to reduce roof generated stormwater and generally improve the quality of runoff. With recent legislation in some North American cities, including Toronto, requiring the installation of green roofs on large buildings, research on the effectiveness of green roofs for stormwater management is important. This study aims to assess the hydrologic response of an extensive sedum green roof in London, Ontario, with emphasis on the response to large precipitation events that stress municipal stormwater infrastructure. A green roof rapidly reaches field capacity during large storm events and can show significantly different behavior before and after field capacity. At field capacity a green roof has no capillary storage left for retention of stormwater, but may still be an effective tool to attenuate peak runoff rates by transport through the green roof substrate. The attenuation of green roofs after field capacity is linked to gravity storage, where gravity storage is the water that is temporarily stored and can drain freely over time after field capacity has been established. Stormwater attenuation of a modular experimental green roof is determined from water balance calculations at 1-minute intervals. Data are used to evaluate green roof attenuation and the impact of field capacity on peak flow rates and gravity storage. In addition, a numerical model is used to simulate event-based stormwater attenuation. This model is based on the Richards equation and the supporting theory of multiphase flow through porous media.
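    The event-scale quantities described above follow directly from a per-interval water balance; a minimal sketch on invented matched 1-minute series (units are depth per interval, e.g. mm/min):

```python
def event_attenuation(precip, runoff):
    """Retention and peak-flow attenuation from matched 1-minute series."""
    total_p, total_r = sum(precip), sum(runoff)
    retention = 1.0 - total_r / total_p           # fraction of rainfall retained
    peak_attenuation = 1.0 - max(runoff) / max(precip)
    return retention, peak_attenuation

# Invented event: a rainfall pulse, with a delayed and flattened runoff response.
precip = [0.0, 1.2, 2.0, 1.6, 0.4, 0.0, 0.0, 0.0]
runoff = [0.0, 0.0, 0.4, 0.9, 1.0, 0.6, 0.3, 0.1]
retention, peak_att = event_attenuation(precip, runoff)
```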

  10. Seismic attenuation estimation from zero-offset VSP data using seismic interferometry

    NASA Astrophysics Data System (ADS)

    Matsushima, Jun; Ali, Mohammed Y.; Bouchaala, Fateh

    2016-02-01

    Although seismic attenuation measurements have great potential to enhance our knowledge of physical conditions and rock properties, their application is limited because robust methods for improving both the resolution and accuracy of attenuation estimates have not yet been established. We propose attenuation estimation methods for zero-offset vertical seismic profile (VSP) data by combining seismic interferometry (SI) and the modified median frequency shift (MMFS) method developed for attenuation estimation using sonic waveform data. The configuration of zero-offset VSP data is redatumed to that of the sonic logging measurement by adopting two types of SI: deconvolution interferometry and crosscorrelation interferometry (CCI). Then, we can apply the MMFS method to the redatumed VSP data. Although the amplitude information estimated from CCI is biased, we propose a correction method for this bias to correctly estimate attenuation. First, to investigate the performance in both resolution and accuracy, we apply different trace separations to synthetic data with random noise at different signal-to-noise ratio levels. Second, we estimate the influence of residual reflection events after wavefield separation on attenuation estimation. The proposed methods provide more stable attenuation estimates in comparison with the spectral ratio method because the mean-median procedure suppresses random events and characteristic features caused by residual reflection events in the spectral domain. Our numerical experiments also imply that the proposed methods can estimate scattering attenuation values even if frequency components are not changed between the two receiver depths. Finally, by preliminarily applying the proposed methods to field VSP data, we demonstrate the advantages of the proposed methods in the resolution and stability of attenuation estimates, and these observations correlate with those of the numerical tests.

  11. Mouse models of human AML accurately predict chemotherapy response

    PubMed Central

    Zuber, Johannes; Radtke, Ina; Pardee, Timothy S.; Zhao, Zhen; Rappaport, Amy R.; Luo, Weijun; McCurrach, Mila E.; Yang, Miao-Miao; Dolan, M. Eileen; Kogan, Scott C.; Downing, James R.; Lowe, Scott W.

    2009-01-01

    The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to conventional therapy that mirror clinical experience. Specifically, murine leukemias expressing the AML1/ETO fusion oncoprotein, associated with a favorable prognosis in patients, show a dramatic response to induction chemotherapy owing to robust activation of the p53 tumor suppressor network. Conversely, murine leukemias expressing MLL fusion proteins, associated with a dismal prognosis in patients, are drug-resistant due to an attenuated p53 response. Our studies highlight the importance of genetic information in guiding the treatment of human AML, functionally establish the p53 network as a central determinant of chemotherapy response in AML, and demonstrate that genetically engineered mouse models of human cancer can accurately predict therapy response in patients. PMID:19339691

  12. Statistics of rain-rate estimates for a single attenuating radar

    NASA Technical Reports Server (NTRS)

    Meneghini, R.

    1976-01-01

    The effects of fluctuations in return power and in the rain-rate/reflectivity relationship are included in the estimates, as well as errors introduced in the attempt to recover the unattenuated return power. In addition to the Hitschfeld-Bordan correction, two alternative techniques are considered. The performance of the radar is shown to be dependent on the method by which attenuation correction is made.
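    For reference, the Hitschfeld-Bordan correction has a closed form when the specific attenuation follows a power law k = a·Zᵇ (dB/km): the corrected linear reflectivity is Z(r) = Zm(r)·[1 − 0.46·a·b·∫₀ʳ Zmᵇ ds]^(−1/b), and the correction becomes unstable as the bracketed term approaches zero, which is one reason alternatives are studied. A sketch with invented power-law coefficients:

```python
import math

def hitschfeld_bordan(z_measured, dr_km, a, b):
    """Correct attenuated linear reflectivities Z_m (mm^6/m^3) along a ray.

    Assumes two-way specific attenuation k = a * Z**b in dB/km, so that
    Z(r) = Z_m(r) * (1 - c*a*b*integral(Z_m**b))**(-1/b), c = 0.2*ln(10).
    """
    c = 0.2 * math.log(10.0)          # converts two-way dB loss to linear units
    corrected, integral = [], 0.0
    for zm in z_measured:
        bracket = 1.0 - c * a * b * integral
        if bracket <= 0.0:            # correction has become unstable
            raise ValueError("path-integrated attenuation too large to invert")
        corrected.append(zm * bracket ** (-1.0 / b))
        integral += (zm ** b) * dr_km  # running left-Riemann integral
    return corrected

# Forward-model a uniform 30 dBZ (Z = 1000) profile with the same power law,
# then invert it; a and b are invented, roughly X-band-like values.
a, b, dr = 1.2e-3, 0.8, 0.005
true_z, pia_integral, z_m = 1000.0, 0.0, []
for _ in range(1000):                  # 5 km of range gates
    z_m.append(true_z * 10.0 ** (-0.2 * a * pia_integral))
    pia_integral += (true_z ** b) * dr
z_corr = hitschfeld_bordan(z_m, dr, a, b)
```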

  13. Microwave Switching and Attenuation with Superconductors.

    NASA Astrophysics Data System (ADS)

    Poulin, Grant Darcy

    1995-01-01

    The discovery of high temperature superconducting (HTS) materials having a critical temperature above the boiling point of liquid nitrogen has generated a large amount of interest in both the basic and applied scientific communities. Considerable research effort has been expended in developing HTS microwave devices, since thin film, passive, microwave components will likely be the first area to be successfully commercialized. This thesis describes a new thin film HTS microwave device that can be operated as a switch or as a continuously variable attenuator. It is well suited for low power analog signal control applications and can easily be integrated with other HTS devices. Due to its small size and mass, the device is expected to find application as a receiver protection switch or as an automatic gain control element, both used in satellite communications receivers. The device has a very low insertion loss, and the isolation in the OFF state is continuously variable to 25 dB. With minor modifications, an isolation exceeding 50 dB is readily achievable. A patent application for the device has been filed, with the patent rights assigned to COM DEV. The device is based on an unusual non-linear response in HTS materials. Under a non-zero DC voltage bias, the current through a superconducting bridge is essentially voltage independent. We have proposed a thermal instability to account for this behaviour. Thermal modelling in conjunction with direct temperature measurements was used to confirm the validity of the model. We have developed a detailed model explaining the microwave response of the device. The model accurately predicts the microwave attenuation as a function of the applied DC control voltage and fully explains the device operation. A key feature is that the device acts as a pure resistive element at microwave frequencies, with no reactance. The resistance is continuously variable, controlled by the DC bias voltage. This distinguishes it from a PIN diode

  14. Global orbit corrections

    SciTech Connect

    Symon, K.

    1987-11-01

    There are various reasons for preferring local (e.g., three bump) orbit correction methods to global corrections. One is the difficulty of solving the mN equations for the required mN correcting bumps, where N is the number of superperiods and m is the number of bumps per superperiod. The latter is not a valid reason for avoiding global corrections, since we can take advantage of the superperiod symmetry to reduce the mN simultaneous equations to N separate problems, each involving only m simultaneous equations. Previously, I have shown how to solve the general problem when the machine contains unknown magnet errors of known probability distribution; we made measurements of known precision of the orbit displacements at a set of points, and we wish to apply correcting bumps to minimize the weighted rms orbit deviations. In this report, we will consider two simpler problems, using similar methods. We consider the case when we make M beam position measurements per superperiod, and we wish to apply an equal number M of orbit correcting bumps to reduce the measured position errors to zero. We also consider the problem when the number of correcting bumps is less than the number of measurements, and we wish to minimize the weighted rms position errors. We will see that the latter problem involves solving equations of a different form, but involving the same matrices as the former problem.
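    The two problems described reduce to the same linear algebra: with a response matrix R mapping bump strengths c to orbit shifts, the square case solves R c = −x exactly, while the fewer-bumps case minimizes the weighted norm of x + R c, i.e. solves the normal equations (RᵀWR)c = −RᵀWx. A toy two-bump, three-monitor sketch (the response matrix, orbit errors, and weights are invented):

```python
def correct_orbit(R, x, w):
    """Weighted least-squares strengths for 2 bumps and M position monitors.

    Solves the normal equations (R^T W R) c = -R^T W x by Cramer's rule.
    """
    m = len(x)
    a11 = sum(w[i] * R[i][0] * R[i][0] for i in range(m))
    a12 = sum(w[i] * R[i][0] * R[i][1] for i in range(m))
    a22 = sum(w[i] * R[i][1] * R[i][1] for i in range(m))
    b1 = -sum(w[i] * R[i][0] * x[i] for i in range(m))
    b2 = -sum(w[i] * R[i][1] * x[i] for i in range(m))
    det = a11 * a22 - a12 * a12
    return [(b1 * a22 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det]

# Three position monitors, two correcting bumps, invented response matrix.
R = [[1.0, 0.2], [0.5, 1.0], [0.1, 0.7]]
x = [2.0, -1.0, 0.5]                   # measured orbit errors
w = [1.0, 2.0, 1.0]                    # measurement weights
c = correct_orbit(R, x, w)
residual = [x[i] + R[i][0] * c[0] + R[i][1] * c[1] for i in range(3)]
```

At the least-squares solution the weighted residual is orthogonal to both bump responses, and the weighted rms error is smaller than with no correction at all.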

  15. Reduction of attenuation effects in 3D transrectal ultrasound images

    NASA Astrophysics Data System (ADS)

    Frimmel, Hans; Acosta, Oscar; Fenster, Aaron; Ourselin, Sébastien

    2007-03-01

    Ultrasound (US) is one of the most used imaging modalities today as it is cheap, reliable, safe and widely available. There are a number of issues with US images in general. Besides reflection, which is the basis of ultrasonic imaging, other phenomena such as diffraction, refraction, attenuation, dispersion and scattering appear when ultrasound propagates through different tissues. The generated images are therefore corrupted by false boundaries, lack of signal for surfaces tangential to the ultrasound propagation, a large amount of noise giving rise to local properties, and an anisotropic sampling space complicating image processing tasks. Although 3D Transrectal US (TRUS) probes are not yet widely available, within a few years they will likely be introduced in hospitals. Therefore, improving automatic segmentation of 3D TRUS images, making the process independent of the human factor, is desirable. We introduce an algorithm for attenuation correction, reducing enhancement/shadowing effects and average attenuation effects in 3D US images, taking into account the physical properties of US. The acquisition parameters, such as the logarithmic correction, are unknown; therefore, no additional information is available to restore the image. As the physical properties are related to the direction of each US ray, the 3D US data set is resampled into cylindrical coordinates using a fully automatic algorithm. Enhancement and shadowing effects, as well as average attenuation effects, are then removed with a rescaling process optimizing simultaneously in and perpendicular to the US ray direction. A set of tests using anisotropic diffusion are performed to illustrate the improvement in image quality, where well defined structures are visible. The evolution of both the entropy and the contrast show that our algorithm is a suitable pre-processing step for segmentation tasks.

  16. Contrast image correction method

    NASA Astrophysics Data System (ADS)

    Schettini, Raimondo; Gasparini, Francesca; Corchs, Silvia; Marini, Fabrizio; Capra, Alessandro; Castorina, Alfio

    2010-04-01

    A method for contrast enhancement is proposed. The algorithm is based on a local and image-dependent exponential correction. The technique aims to correct images that simultaneously present overexposed and underexposed regions. To prevent halo artifacts, the bilateral filter is used as the mask of the exponential correction. Depending on the characteristics of the image (piloted by histogram analysis), an automated parameter-tuning step is introduced, followed by stretching, clipping, and saturation preserving treatments. Comparisons with other contrast enhancement techniques are presented. The Mean Opinion Score (MOS) experiment on grayscale images gives the greatest preference score for our algorithm.
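    The idea of a local, image-dependent exponential correction can be sketched as follows; the box blur standing in for the bilateral filter and the particular exponent mapping are simplifying assumptions, not the published algorithm's exact choices:

```python
def box_blur(img, radius=1):
    """Crude local-mean mask (stand-in for the edge-preserving bilateral filter)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - radius), min(h, y + radius + 1))
                    for xx in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def exponential_correction(img):
    """Per-pixel gamma driven by the local mask: dark regions get gamma < 1
    (brightened), bright regions gamma > 1 (darkened). Intensities in [0, 1]."""
    mask = box_blur(img)
    return [[pix ** (2.0 ** (2.0 * m - 1.0)) for pix, m in zip(prow, mrow)]
            for prow, mrow in zip(img, mask)]

# Invented test image: an underexposed left half and an overexposed right half.
img = [[0.1, 0.1, 0.9, 0.9] for _ in range(4)]
out = exponential_correction(img)
```

Using a blurred mask rather than the pixel itself is what keeps local contrast: neighboring pixels share nearly the same exponent, so their differences survive the correction.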

  17. Accelerated acquisition of tagged MRI for cardiac motion correction in simultaneous PET-MR: Phantom and patient studies

    SciTech Connect

    Huang, Chuan; Petibon, Yoann; Ouyang, Jinsong; El Fakhri, Georges; Reese, Timothy G.; Ahlman, Mark A.; Bluemke, David A.

    2015-02-15

    Purpose: Degradation of image quality caused by cardiac and respiratory motions hampers the diagnostic quality of cardiac PET. It has been shown that improved diagnostic accuracy of myocardial defect can be achieved by tagged MR (tMR) based PET motion correction using simultaneous PET-MR. However, one major hurdle for the adoption of tMR-based PET motion correction in the PET-MR routine is the long acquisition time needed for the collection of fully sampled tMR data. In this work, the authors propose an accelerated tMR acquisition strategy using parallel imaging and/or compressed sensing and assess the impact on the tMR-based motion corrected PET using phantom and patient data. Methods: Fully sampled tMR data were acquired simultaneously with PET list-mode data on two simultaneous PET-MR scanners for a cardiac phantom and a patient. Parallel imaging and compressed sensing were retrospectively performed by GRAPPA and kt-FOCUSS algorithms with various acceleration factors. Motion fields were estimated using nonrigid B-spline image registration from both the accelerated and fully sampled tMR images. The motion fields were incorporated into a motion corrected ordered subset expectation maximization reconstruction algorithm with motion-dependent attenuation correction. Results: Although tMR acceleration introduced image artifacts into the tMR images for both phantom and patient data, motion corrected PET images yielded similar image quality as those obtained using the fully sampled tMR images for low to moderate acceleration factors (<4). Quantitative analysis of myocardial defect contrast over ten independent noise realizations showed similar results. It was further observed that although the image quality of the motion corrected PET images deteriorates for high acceleration factors, the images were still superior to the images reconstructed without motion correction. Conclusions: Accelerated tMR images obtained with more than 4 times acceleration can still provide

  19. Assessment of the severity of partial volume effects and the performance of two template-based correction methods in a SPECT/CT phantom experiment.

    PubMed

    Shcherbinin, S; Celler, A

    2011-08-21

    We investigated the severity of partial volume effects (PVE), which may occur in SPECT/CT studies, and the performance of two template-based correction techniques. A hybrid SPECT/CT system was used to scan a thorax phantom that included lungs, a heart insert and six cylindrical containers of different sizes and activity concentrations. This phantom configuration allowed us to have non-uniform background activity and a combination of spill-in and spill-out effects for several compartments. The reconstruction with corrections for attenuation, scatter and resolution loss, but without PVE correction, accurately recovered absolute activities in large organs. However, the activities inside segmented 17-120 mL containers were underestimated by 20%-40%. After applying our PVE correction to the data pertaining to six small containers, the accuracy of the recovered total activity improved, with errors ranging between 3% and 22% (non-iterative method) and between 5% and 15% (method with an iteratively updated background activity). While the non-iterative template-based algorithm demonstrated slightly better accuracy for cases with less severe PVE than the iterative algorithm, it underperformed in situations with considerable spill-out and/or a mixture of spill-in and spill-out effects.
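    Template-based PVE correction of this kind can be viewed as inverting a spill matrix: if coefficients g_ij give the fraction of compartment j's true activity observed in region i, the measured values satisfy m = G·a, and solving for a undoes the mixed spill-in/spill-out. A two-compartment sketch with invented coefficients (not the authors' exact templates):

```python
def pve_correct(m1, m2, g11, g12, g21, g22):
    """Invert the 2x2 spill matrix: m = G a  =>  a = G^-1 m (Cramer's rule)."""
    det = g11 * g22 - g12 * g21
    a1 = (m1 * g22 - g12 * m2) / det
    a2 = (g11 * m2 - m1 * g21) / det
    return a1, a2

# Invented spill matrix: each region keeps 75-80% of its own activity and
# receives 10-15% of its neighbor's (simulated spill-in + spill-out).
G = ((0.75, 0.15), (0.10, 0.80))
true_a = (100.0, 40.0)
m1 = G[0][0] * true_a[0] + G[0][1] * true_a[1]   # observed, PVE-degraded values
m2 = G[1][0] * true_a[0] + G[1][1] * true_a[1]
a1, a2 = pve_correct(m1, m2, *G[0], *G[1])
```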

  20. ENHANCEMENTS TO NATURAL ATTENUATION: SELECTED CASE STUDIES

    SciTech Connect

    Vangelas, K.; Albright, W. H.; Becvar, E. S.; Benson, C. H.; Early, T. O.; Hood, E.; Jardine, P. M.; Lorah, M.; Majche, E.; Major, D.; Waugh, W. J.; Wein, G.; West, O. R.

    2007-05-15

    In 2003 the US Department of Energy (DOE) embarked on a project to explore an innovative approach to remediation of subsurface contaminant plumes that focused on introducing mechanisms for augmenting natural attenuation to achieve site closure. Termed enhanced attenuation (EA), this approach has drawn its inspiration from the concept of monitored natural attenuation (MNA).

  1. SEISMIC ATTENUATION FOR RESERVOIR CHARACTERIZATION

    SciTech Connect

    Joel Walls; M.T. Taner; Gary Mavko; Jack Dvorkin

    2002-01-01

    In Section 1 of this first report we describe the work we are doing to collect and analyze rock physics data for the purpose of modeling seismic attenuation from other measurable quantities such as porosity, water saturation, clay content, and net stress. This work, along with other empirical methods to be presented later, will form the basis for "Q pseudo-well modeling," which is a key part of this project. In Section 2 of this report, we show the fundamentals of a new method to extract Q, dispersion, and attenuation from field seismic data. The method is called Gabor-Morlet time-frequency decomposition. This technique has a number of advantages, including greater stability and better time resolution than spectral ratio methods.

  2. Chlorine signal attenuation in concrete.

    PubMed

    Naqvi, A A; Maslehuddin, M; ur-Rehman, Khateeb; Al-Amoudi, O S B

    2015-11-01

    The intensity of prompt gamma-rays was measured at various depths in chlorine-contaminated silica fume (SF) concrete slab specimens using a portable neutron generator-based prompt gamma-ray setup. The intensity of 6.11 MeV chlorine gamma-rays was measured from the chloride-contaminated slab at distances of 15.25, 20.25, 25.25, 30.25 and 35.25 cm from the neutron target in the SF cement concrete slab specimens. Due to the attenuation of the thermal neutron flux and the emitted gamma-ray intensity in SF cement concrete, the measured intensity of chlorine gamma-rays decreases non-linearly with increasing depth in the concrete. A good agreement was noted between the experimental results and the results of Monte Carlo simulation. This study has provided useful experimental data for evaluating chloride contamination in SF concrete using the gamma-ray attenuation method.

  3. Site response and attenuation in the Puget Lowland, Washington State

    USGS Publications Warehouse

    Pratt, T.L.; Brocher, T.M.

    2006-01-01

    Simple spectral ratio (SSR) and horizontal-to-vertical (H/V) site-response estimates at 47 sites in the Puget Lowland of Washington State document significant attenuation of 1.5- to 20-Hz shear waves within the sedimentary basins there. Amplitudes of the horizontal components of shear-wave arrivals from three local earthquakes were used to compute SSRs with respect to the average of two bedrock sites, and H/V spectral ratios with respect to the vertical component of the shear-wave arrivals at each site. SSR site-response curves at thick basin sites show peak amplifications of 2 to 6 at frequencies of 3 to 6 Hz, and decreasing spectral amplification with increasing frequency above 6 Hz. SSRs at nonbasin sites show a variety of shapes and larger resonance peaks. We attribute the spectral decay at frequencies above the amplification peak at basin sites to attenuation within the basin strata. Computing the frequency-independent, depth-dependent attenuation factor (Qs,int) from the SSR spectral decay between 2 and 20 Hz gives values of 5 to 40 for shallow sedimentary deposits and about 250 for the deepest sedimentary strata (7 km depth). H/V site responses show less spectral decay than the SSR responses but contain many of the same resonance peaks. We hypothesize that the H/V method yields a flatter response across the frequency spectrum than SSRs because the H/V reference signal (the vertical component of the shear-wave arrivals) has undergone a degree of attenuation similar to that of the horizontal-component recordings. Correcting the SSR site responses for attenuation within the basins by removing the spectral decay improves agreement between the SSR and H/V estimates.

  4. Correcting Illumina data.

    PubMed

    Molnar, Michael; Ilie, Lucian

    2015-07-01

    Next-generation sequencing technologies revolutionized the ways in which genetic information is obtained and have opened the door for many essential applications in biomedical sciences. Hundreds of gigabytes of data are being produced, and all applications are affected by the errors in the data. Many programs have been designed to correct these errors, most of them targeting the data produced by the dominant technology of Illumina. We present a thorough comparison of these programs. Both HiSeq and MiSeq types of Illumina data are analyzed, and correction performance is evaluated as the gain in depth and breadth of coverage, as given by correct reads and k-mers. Time and memory requirements, scalability and parallelism are considered as well. Practical guidelines are provided for the effective use of these tools. We also evaluate the efficiency of the current state-of-the-art programs for correcting Illumina data and provide research directions for further improvement.

  5. 75 FR 68405 - Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-08

    ...'' (Presidential Sig.) [FR Doc. C1-2010-27668 Filed 11-5-10; 8:45 am] Billing Code 1505-01-D ..., 2010--Continuation of U.S. Drug Interdiction Assistance to the Government of Colombia Correction...

  6. 78 FR 73377 - Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-06

    .... Drug Interdiction Assistance to the Government of Colombia''. (Presidential Sig.) [FR Doc. C1-2013...--Continuation of U.S. Drug Interdiction Assistance to the Government of Colombia Correction In...

  7. Correcting Hubble Vision.

    ERIC Educational Resources Information Center

    Shaw, John M.; Sheahen, Thomas P.

    1994-01-01

    Describes the theory behind the workings of the Hubble Space Telescope, the spherical aberration in the primary mirror that caused a reduction in image quality, and the corrective device that compensated for the error. (JRH)

  8. Method of absorbance correction in a spectroscopic heating value sensor

    SciTech Connect

    Saveliev, Alexei; Jangale, Vilas Vyankatrao; Zelepouga, Sergeui; Pratapas, John

    2013-09-17

    A method and apparatus for absorbance correction in a spectroscopic heating value sensor in which a reference light intensity measurement is made on a non-absorbing reference fluid, a light intensity measurement is made on a sample fluid, and a measured light absorbance of the sample fluid is determined. A corrective light intensity measurement at a non-absorbing wavelength of the sample fluid is made on the sample fluid from which an absorbance correction factor is determined. The absorbance correction factor is then applied to the measured light absorbance of the sample fluid to arrive at a true or accurate absorbance for the sample fluid.
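
    The correction described above can be sketched as simple Beer-Lambert arithmetic: the absorbance measured at a wavelength where the sample fluid should not absorb is treated as a baseline (e.g. scatter or window fouling) and subtracted from the measured absorbance. Function names and numbers are illustrative, not from the patent:

```python
# Baseline-corrected absorbance: A = log10(I_ref / I_sample), with the
# absorbance at a non-absorbing wavelength subtracted as a correction factor.
import math

def absorbance(i_ref, i_sample):
    return math.log10(i_ref / i_sample)

def corrected_absorbance(i_ref, i_sample, i_ref_na, i_sample_na):
    """Subtract the baseline absorbance seen at a non-absorbing wavelength."""
    measured = absorbance(i_ref, i_sample)
    baseline = absorbance(i_ref_na, i_sample_na)  # ~0 for a clean system
    return measured - baseline

# A 5% broadband intensity loss inflates the raw absorbance; the same 5%
# loss shows up at the non-absorbing wavelength and is removed.
a_true = corrected_absorbance(100.0, 50.0 * 0.95, 100.0, 100.0 * 0.95)
```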

  9. Non-rigid dual respiratory and cardiac motion correction methods after, during, and before image reconstruction for 4D cardiac PET

    NASA Astrophysics Data System (ADS)

    Feng, Tao; Wang, Jizhe; Fung, George; Tsui, Benjamin

    2016-01-01

    Respiratory motion (RM) and cardiac motion (CM) degrade the quality and resolution in cardiac PET scans. We have developed non-rigid motion estimation methods to estimate both RM and CM based on 4D cardiac gated PET data alone, and to compensate the dual respiratory and cardiac (R&C) motions after (MCAR), during (MCDR), and before (MCBR) image reconstruction. In all three R&C motion correction methods, the attenuation-activity mismatch effect was modeled by transforming the attenuation maps using the estimated RM. The difference between using activity-preserving and non-activity-preserving models in R&C correction was also studied. Realistic Monte Carlo simulated 4D cardiac PET data using the 4D XCAT phantom and accurate models of the scanner design parameters and performance characteristics at different noise levels were employed as the known truth and for method development and evaluation. Results from the simulation study suggested that all three dual R&C motion correction methods provide substantial improvement in the quality of 4D cardiac gated PET images as compared with no motion correction. Specifically, the MCDR method yields the best performance at all noise levels compared with the MCAR and MCBR methods. While the MCBR method reduces computational time dramatically, the resultant 4D cardiac gated PET images have overall inferior image quality compared to those from the MCAR and MCDR approaches in the ‘almost’ noise-free case. Also, the MCBR method has better noise-handling properties than MCAR and provides better quantitative results in high-noise cases. When the goal is to reduce scan time or patient radiation dose, MCDR and MCBR provide a good compromise between image quality and computational time.

  10. Non-rigid dual respiratory and cardiac motion correction methods after, during, and before image reconstruction for 4D cardiac PET.

    PubMed

    Feng, Tao; Wang, Jizhe; Fung, George; Tsui, Benjamin

    2016-01-01

    Respiratory motion (RM) and cardiac motion (CM) degrade the quality and resolution in cardiac PET scans. We have developed non-rigid motion estimation methods to estimate both RM and CM based on 4D cardiac gated PET data alone, and to compensate the dual respiratory and cardiac (R&C) motions after (MCAR), during (MCDR), and before (MCBR) image reconstruction. In all three R&C motion correction methods, the attenuation-activity mismatch effect was modeled by transforming the attenuation maps using the estimated RM. The difference between using activity-preserving and non-activity-preserving models in R&C correction was also studied. Realistic Monte Carlo simulated 4D cardiac PET data using the 4D XCAT phantom and accurate models of the scanner design parameters and performance characteristics at different noise levels were employed as the known truth and for method development and evaluation. Results from the simulation study suggested that all three dual R&C motion correction methods provide substantial improvement in the quality of 4D cardiac gated PET images as compared with no motion correction. Specifically, the MCDR method yields the best performance at all noise levels compared with the MCAR and MCBR methods. While the MCBR method reduces computational time dramatically, the resultant 4D cardiac gated PET images have overall inferior image quality compared to those from the MCAR and MCDR approaches in the 'almost' noise-free case. Also, the MCBR method has better noise-handling properties than MCAR and provides better quantitative results in high-noise cases. When the goal is to reduce scan time or patient radiation dose, MCDR and MCBR provide a good compromise between image quality and computational time.

  11. moco: Fast Motion Correction for Calcium Imaging.

    PubMed

    Dubbs, Alexander; Guevara, James; Yuste, Rafael

    2016-01-01

    Motion correction is the first step in a pipeline of algorithms to analyze calcium imaging videos and extract biologically relevant information, for example the network structure of the neurons therein. Fast motion correction is especially critical for closed-loop activity-triggered stimulation experiments, where accurate detection and targeting of specific cells is necessary. We introduce a novel motion-correction algorithm which uses a Fourier-transform approach, and a combination of judicious downsampling and the accelerated computation of many L2 norms using dynamic programming and two-dimensional, FFT-accelerated convolutions, to enhance its efficiency. Its accuracy is comparable to that of established community-used algorithms, and it is more stable to large translational motions. It is programmed in Java and is compatible with ImageJ. PMID:26909035
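
    The objective moco optimizes can be illustrated with a brute-force search for the integer translational shift minimizing a mean L2 error over the frame overlap; moco itself accelerates this search with FFT-based convolutions, downsampling and dynamic programming, so the exhaustive scan below is only a conceptual sketch:

```python
# Find the (dy, dx) shift of a frame that best matches a template by
# minimizing the mean squared difference over the overlapping region.
def best_shift(template, frame, max_shift=1):
    rows, cols = len(template), len(template[0])
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err, n = 0.0, 0
            for y in range(rows):
                for x in range(cols):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < rows and 0 <= xx < cols:
                        d = template[y][x] - frame[yy][xx]
                        err += d * d  # squared-L2 contribution
                        n += 1
            if err / n < best_err:
                best_err, best = err / n, (dy, dx)
    return best

# A bright pixel moved down-right by (1, 1); the estimate recovers it.
template = [[0, 0, 0, 0], [0, 9, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
frame = [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 9, 0], [0, 0, 0, 0]]
shift = best_shift(template, frame)
```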

  12. moco: Fast Motion Correction for Calcium Imaging

    PubMed Central

    Dubbs, Alexander; Guevara, James; Yuste, Rafael

    2016-01-01

    Motion correction is the first step in a pipeline of algorithms to analyze calcium imaging videos and extract biologically relevant information, for example the network structure of the neurons therein. Fast motion correction is especially critical for closed-loop activity-triggered stimulation experiments, where accurate detection and targeting of specific cells is necessary. We introduce a novel motion-correction algorithm which uses a Fourier-transform approach, and a combination of judicious downsampling and the accelerated computation of many L2 norms using dynamic programming and two-dimensional, FFT-accelerated convolutions, to enhance its efficiency. Its accuracy is comparable to that of established community-used algorithms, and it is more stable to large translational motions. It is programmed in Java and is compatible with ImageJ. PMID:26909035

  14. Respiration correction by clustering in ultrasound images

    NASA Astrophysics Data System (ADS)

    Wu, Kaizhi; Chen, Xi; Ding, Mingyue; Sang, Nong

    2016-03-01

    Respiratory motion is a challenging factor for image acquisition, image-guided procedures and perfusion quantification using contrast-enhanced ultrasound in the abdominal and thoracic region. In order to reduce the influence of respiratory motion, respiratory correction methods were investigated. In this paper we propose a novel, cluster-based respiratory correction method. In the proposed method, we first assign the image frames to the corresponding respiratory phase using spectral clustering. We then achieve image correction automatically by finding a cluster whose points are close to each other. Unlike traditional gating methods, we do not need to estimate the breathing cycle accurately, because images from the same respiratory phase are similar and therefore close together in high-dimensional space. The proposed method is tested on a simulated image sequence and a real ultrasound image sequence. The experimental results show the effectiveness of the proposed method both quantitatively and qualitatively.
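
    The selection idea above (frames from the same respiratory phase are mutually close in image space) can be sketched with a deliberately simplified density-based grouping; the paper itself uses spectral clustering, and eps below is a hypothetical similarity threshold:

```python
# Group frames by mutual proximity and return the densest group, a crude
# stand-in for picking one cluster of same-phase frames.
def frame_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def densest_group(frames, eps):
    groups = []
    for f in frames:
        members = [j for j, g in enumerate(frames)
                   if frame_distance(f, g) <= eps]
        groups.append(members)
    return max(groups, key=len)

# Toy 1-D "frames": indices 0, 2, 4 sit near one phase; 1, 3 near another.
frames = [[0.0], [5.0], [0.2], [5.1], [0.1]]
group = densest_group(frames, eps=0.5)
```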

  15. Mobile image based color correction using deblurring

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Xu, Chang; Boushey, Carol; Zhu, Fengqing; Delp, Edward J.

    2015-03-01

    Dietary intake, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs for prevention of many chronic diseases such as obesity and cancer. The goal of the Technology Assisted Dietary Assessment (TADA) System, developed at Purdue University, is to automatically identify and quantify foods and beverages consumed by utilizing food images acquired with a mobile device. Color correction serves as a critical step to ensure accurate food identification and volume estimation. We make use of a specifically designed color checkerboard (i.e. a fiducial marker) to calibrate the imaging system so that the variations of food appearance under different lighting conditions can be determined. In this paper, we propose an image quality enhancement technique combining image deblurring and color correction. The contribution consists of introducing an automatic camera-shake removal method using a saliency map and improving the polynomial color correction model using the LMS color space.
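
    The fiducial-marker calibration step can be sketched per color channel: observed patch values are mapped back to known reference values by least squares. A first-order (gain/offset) model is shown for brevity; the paper's model is a higher-order polynomial applied in the LMS color space:

```python
# Fit reference = gain * observed + offset from color-checker patches,
# then apply the mapping to correct for the lighting conditions.
def fit_gain_offset(observed, reference):
    n = len(observed)
    sx, sy = sum(observed), sum(reference)
    sxx = sum(x * x for x in observed)
    sxy = sum(x * y for x, y in zip(observed, reference))
    gain = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    offset = (sy - gain * sx) / n
    return gain, offset

# Patches photographed under dim light: observed = 0.5 * reference + 10.
reference = [20.0, 80.0, 140.0, 200.0]
observed = [0.5 * r + 10.0 for r in reference]
gain, offset = fit_gain_offset(observed, reference)
corrected = [gain * o + offset for o in observed]
```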

  16. Adaptable DC offset correction

    NASA Technical Reports Server (NTRS)

    Golusky, John M. (Inventor); Muldoon, Kelly P. (Inventor)

    2009-01-01

    Methods and systems for adaptable DC offset correction are provided. An exemplary adaptable DC offset correction system evaluates an incoming baseband signal to determine an appropriate DC offset removal scheme; removes a DC offset from the incoming baseband signal based on the appropriate DC offset scheme in response to the evaluated incoming baseband signal; and outputs a reduced DC baseband signal in response to the DC offset removed from the incoming baseband signal.
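
    One common DC offset removal scheme of the kind such a system might select is an exponential moving-average tracker: the offset estimate follows slow drift and is subtracted from each sample. The smoothing constant alpha below is a hypothetical choice, and the patent's scheme-selection logic is not reproduced:

```python
# Track the DC component with an exponential moving average and subtract it,
# yielding a reduced-DC output signal.
def remove_dc(samples, alpha=0.05):
    est = 0.0
    out = []
    for s in samples:
        est += alpha * (s - est)  # running estimate of the DC offset
        out.append(s - est)
    return out

# A constant +3.0 offset is driven toward zero over time.
signal = [3.0] * 200
cleaned = remove_dc(signal)
```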

  17. Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Brun, Todd A.

    2013-09-01

    Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and

  18. Accurate transition rates for intercombination lines of singly ionized nitrogen

    SciTech Connect

    Tayal, S. S.

    2011-01-15

    The transition energies and rates for the 2s{sup 2}2p{sup 2} {sup 3}P{sub 1,2}-2s2p{sup 3} {sup 5}S{sub 2}{sup o} and 2s{sup 2}2p3s-2s{sup 2}2p3p intercombination transitions have been calculated using term-dependent nonorthogonal orbitals in the multiconfiguration Hartree-Fock approach. Several sets of spectroscopic and correlation nonorthogonal functions have been chosen to describe adequately term dependence of wave functions and various correlation corrections. Special attention has been focused on the accurate representation of strong interactions between the 2s2p{sup 3} {sup 1,3}P{sub 1}{sup o} and 2s{sup 2}2p3s {sup 1,3}P{sub 1}{sup o}levels. The relativistic corrections are included through the one-body mass correction, Darwin, and spin-orbit operators and two-body spin-other-orbit and spin-spin operators in the Breit-Pauli Hamiltonian. The importance of core-valence correlation effects has been examined. The accuracy of present transition rates is evaluated by the agreement between the length and velocity formulations combined with the agreement between the calculated and measured transition energies. The present results for transition probabilities, branching fraction, and lifetimes have been compared with previous calculations and experiments.

  19. Accurate ab initio vibrational energies of methyl chloride

    SciTech Connect

    Owens, Alec; Yurchenko, Sergei N.; Yachmenev, Andrey; Tennyson, Jonathan; Thiel, Walter

    2015-06-28

    Two new nine-dimensional potential energy surfaces (PESs) have been generated using high-level ab initio theory for the two main isotopologues of methyl chloride, CH{sub 3}{sup 35}Cl and CH{sub 3}{sup 37}Cl. The respective PESs, CBS-35{sup  HL}, and CBS-37{sup  HL}, are based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set (CBS) limit, and incorporate a range of higher-level (HL) additive energy corrections to account for core-valence electron correlation, higher-order coupled cluster terms, scalar relativistic effects, and diagonal Born-Oppenheimer corrections. Variational calculations of the vibrational energy levels were performed using the computer program TROVE, whose functionality has been extended to handle molecules of the form XY {sub 3}Z. Fully converged energies were obtained by means of a complete vibrational basis set extrapolation. The CBS-35{sup  HL} and CBS-37{sup  HL} PESs reproduce the fundamental term values with root-mean-square errors of 0.75 and 1.00 cm{sup −1}, respectively. An analysis of the combined effect of the HL corrections and CBS extrapolation on the vibrational wavenumbers indicates that both are needed to compute accurate theoretical results for methyl chloride. We believe that it would be extremely challenging to go beyond the accuracy currently achieved for CH{sub 3}Cl without empirical refinement of the respective PESs.
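
    The complete basis set extrapolation mentioned above is commonly performed with a two-point inverse-cubic formula, E(X) = E_CBS + A / X^3, applied to energies from basis sets with cardinal numbers X and X + 1. This generic textbook scheme is shown for illustration; it is not necessarily the exact extrapolation used for the CBS-35 HL and CBS-37 HL surfaces:

```python
# Two-point CBS extrapolation: eliminate A from E(X) = E_CBS + A / X**3
# using energies at cardinal numbers X and X + 1.
def cbs_two_point(e_x, e_x1, x):
    x1 = x + 1
    return (e_x1 * x1 ** 3 - e_x * x ** 3) / (x1 ** 3 - x ** 3)

# Synthetic energies obeying E(X) = -100.0 + 0.5 / X**3 exactly:
e3 = -100.0 + 0.5 / 27.0   # X = 3 (e.g. triple-zeta)
e4 = -100.0 + 0.5 / 64.0   # X = 4 (e.g. quadruple-zeta)
e_cbs = cbs_two_point(e3, e4, 3)
```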

  20. A prototype piecewise-linear dynamic attenuator

    NASA Astrophysics Data System (ADS)

    Hsieh, Scott S.; Peng, Mark V.; May, Christopher A.; Shunhavanich, Picha; Fleischmann, Dominik; Pelc, Norbert J.

    2016-07-01

    The piecewise-linear dynamic attenuator has been proposed as a mechanism in CT scanning for personalizing the x-ray illumination on a patient- and application-specific basis. Previous simulations have shown benefits in image quality, scatter, and dose objectives. We report on the first prototype implementation. This prototype is reduced in scale and speed and is integrated into a tabletop CT system with a smaller field of view (25 cm) and longer scan time (42 s) compared to a clinical system. Stainless steel wedges were machined and affixed to linear actuators, which were in turn held secure by a frame built using rapid prototyping technologies. The actuators were computer-controlled, with characteristic noise of about 100 microns. Simulations suggest that in a clinical setting, the impact of actuator noise could lead to artifacts of only 1 HU. Ring artifacts were minimized by careful design of the wedges. A water beam hardening correction was applied and the scan was collimated to reduce scatter. We scanned a 16 cm water cylinder phantom as well as an anthropomorphic pediatric phantom. The artifacts present in reconstructed images are comparable to artifacts normally seen with this tabletop system. Compared to a flat-field reference scan, increased detectability at reduced dose is shown and streaking is reduced. Artifacts are modest in our images and further refinement is possible. Issues of mechanical speed and stability in the challenging clinical CT environment will be addressed in a future design.
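
    The water beam-hardening correction mentioned above is typically a polynomial linearization: measured polychromatic log-attenuation values are mapped to the equivalent monoenergetic values using coefficients calibrated on known water thicknesses. A minimal sketch with illustrative (not prototype) numbers:

```python
# Calibrate p_ideal ~ a * p + b * p**2 (through the origin) by least squares
# from paired measured/ideal projections, then apply the mapping.
def calibrate_linearization(p_measured, p_ideal):
    s11 = sum(p * p for p in p_measured)
    s12 = sum(p ** 3 for p in p_measured)
    s22 = sum(p ** 4 for p in p_measured)
    t1 = sum(p * q for p, q in zip(p_measured, p_ideal))
    t2 = sum(p * p * q for p, q in zip(p_measured, p_ideal))
    det = s11 * s22 - s12 * s12
    a = (t1 * s22 - t2 * s12) / det
    b = (s11 * t2 - s12 * t1) / det
    return a, b

def correct(p, a, b):
    return a * p + b * p ** 2

# Synthetic calibration data that follow the quadratic model exactly.
p_measured = [0.5, 1.0, 1.5, 2.0]
p_ideal = [1.1 * p + 0.05 * p ** 2 for p in p_measured]
a, b = calibrate_linearization(p_measured, p_ideal)
```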

  1. A prototype piecewise-linear dynamic attenuator.

    PubMed

    Hsieh, Scott S; Peng, Mark V; May, Christopher A; Shunhavanich, Picha; Fleischmann, Dominik; Pelc, Norbert J

    2016-07-01

    The piecewise-linear dynamic attenuator has been proposed as a mechanism in CT scanning for personalizing the x-ray illumination on a patient- and application-specific basis. Previous simulations have shown benefits in image quality, scatter, and dose objectives. We report on the first prototype implementation. This prototype is reduced in scale and speed and is integrated into a tabletop CT system with a smaller field of view (25 cm) and longer scan time (42 s) compared to a clinical system. Stainless steel wedges were machined and affixed to linear actuators, which were in turn held secure by a frame built using rapid prototyping technologies. The actuators were computer-controlled, with characteristic noise of about 100 microns. Simulations suggest that in a clinical setting, the impact of actuator noise could lead to artifacts of only 1 HU. Ring artifacts were minimized by careful design of the wedges. A water beam hardening correction was applied and the scan was collimated to reduce scatter. We scanned a 16 cm water cylinder phantom as well as an anthropomorphic pediatric phantom. The artifacts present in reconstructed images are comparable to artifacts normally seen with this tabletop system. Compared to a flat-field reference scan, increased detectability at reduced dose is shown and streaking is reduced. Artifacts are modest in our images and further refinement is possible. Issues of mechanical speed and stability in the challenging clinical CT environment will be addressed in a future design. PMID:27284705

  2. Accurate thermoelastic tensor and acoustic velocities of NaCl

    NASA Astrophysics Data System (ADS)

    Marcondes, Michel L.; Shukla, Gaurav; da Silveira, Pedro; Wentzcovitch, Renata M.

    2015-12-01

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamics conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  3. Accurate thermoelastic tensor and acoustic velocities of NaCl

    SciTech Connect

    Marcondes, Michel L.; Shukla, Gaurav; Silveira, Pedro da; Wentzcovitch, Renata M.

    2015-12-15

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamics conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  4. Videometric terminal guidance method and system for UAV accurate landing

    NASA Astrophysics Data System (ADS)

    Zhou, Xiang; Lei, Zhihui; Yu, Qifeng; Zhang, Hongliang; Shang, Yang; Du, Jing; Gui, Yang; Guo, Pengyu

    2012-06-01

    We present a videometric method and system to implement terminal guidance for Unmanned Aerial Vehicle (UAV) accurate landing. In the videometric system, two calibrated cameras attached to the ground are used, and a calibration method in which at least 5 control points are applied is developed to calibrate the inner and exterior parameters of the cameras. Cameras with an 850 nm spectral filter are used to recognize an 850 nm LED target fixed on the UAV, which can highlight itself in images with complicated backgrounds. An NNLOG (normalized negative Laplacian of Gaussian) operator is developed for automatic target detection and tracking. Finally, the 3-D position of the UAV can be calculated with high accuracy and transferred to the control system to direct accurate UAV landing. The videometric system can work at a rate of 50 Hz. Many real-flight and static accuracy experiments demonstrate the correctness and accuracy of the method proposed in this paper, and they also indicate the reliability and robustness of the system. The static accuracy experiments show that the deviation is less than 10 cm when the target is far from the cameras and less than 2 cm within a 100 m range. The real-flight experiments show that the deviation from DGPS is less than 20 cm. The system implemented in this paper won the first prize in the AVIC Cup-International UAV Innovation Grand Prix, and it is the only one that achieved UAV accurate landing without GPS or DGPS.

  5. Results of the SDCS (Special Data Collection System) attenuation experiment. Technical report

    SciTech Connect

    Der, Z.A.; McElfresh, T.W.; O'Donnell, A.

    1981-10-30

    Investigation of teleseismic arrivals at test sites in the western United States (WUS), a site on the Canadian shield, and two sites in the northeastern United States revealed marked differences in mantle attenuation among these sites. All sites in the WUS show high attenuation in the underlying mantle; the sites in the northeastern U.S. appear to be intermediate between the WUS and the shield sites. This pattern fits well with the results of broader regional studies of amplitude anomalies and spectral variations in both P and S waves. The high frequency content of teleseismic arrivals cannot be reconciled with the results of long-period attenuation studies unless a frequency dependence of Q is assumed in the Earth. Preliminary curves for t vs. frequency are presented for shield and shield-to-tectonic type paths. These results demonstrate that yield estimates of explosions in different tectonic environments have to be corrected for mantle attenuation.

  6. Evaluation of Monitoring Approaches for Natural Attenuation

    NASA Astrophysics Data System (ADS)

    Roll, L. L.; Labolle, E. M.; Fogg, G. E.

    2008-12-01

    Monitored natural attenuation (MNA) can be a useful alternative to active remediation, however, firm conclusions regarding effectiveness of MNA may be elusive because of multiple processes that can produce similar, apparent trends in chemical concentrations in the heterogeneous subsurface. Current monitoring approaches need to be critically evaluated for typical field settings, such as heterogeneous alluvial aquifer systems, because spatially varying aquifer properties create non uniform flow fields that greatly influence transport processes, producing complex plume behavior that may not be adequately depicted by monitoring networks. Highly-resolved simulations of flow and conservative transport in a typical alluvial aquifer system facilitate a critical review of three monitoring approaches including estimation of mass balance from sampling along the plume centerline, estimation of mass balance from fine grid sampling, and estimation of mass flux from sampling along cross sections. The simulation procedure involves generation of unconditional transition-probability fields of hydrofacies distributions, simulation of steady state flow followed by simulation of conservative transport using a highly accurate random walk particle method (RWHET). The results elucidate limitations and potential pitfalls of the monitoring methods and use of simple models in typically heterogeneous systems. For example, simulations show that because of the system complexity, apparent concentration trends in space and time can be falsely attributed to biodegradation when none is occurring if simplistic models are used to interpret the data. Measured concentrations alone are likely insufficient to judge effectiveness of MNA.

  7. Joint reconstruction of activity and attenuation map using LM SPECT emission data

    NASA Astrophysics Data System (ADS)

    Jha, Abhinav K.; Clarkson, Eric; Kupinski, Matthew A.; Barrett, Harrison H.

    2013-03-01

    Attenuation and scatter correction in single photon emission computed tomography (SPECT) imaging often requires a computed tomography (CT) scan to compute the attenuation map of the patient. This results in increased radiation dose for the patient, and also has other disadvantages such as increased costs and hardware complexity. Attenuation in SPECT is a direct consequence of Compton scattering, and therefore, if the scattered-photon data can give information about the attenuation map, then the CT scan may not be required. In this paper, we investigate the possibility of joint reconstruction of the activity and attenuation map using list-mode (LM) SPECT emission data, including the scattered-photon data. We propose a path-based formalism to process scattered-photon data. Following this, we derive analytic expressions to compute the Cramér-Rao bound (CRB) of the activity and attenuation map estimates, using which we can explore the fundamental limit of information-retrieval capacity from LM SPECT emission data. We then suggest a maximum-likelihood (ML) scheme that uses the LM emission data to jointly reconstruct the activity and attenuation map. We also propose an expectation-maximization (EM) algorithm to compute the ML solution.
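
    The ML-EM building block referred to above can be sketched for the activity-only case with a known system matrix; the paper's joint scheme additionally updates the attenuation map from list-mode scattered-photon data, which this simplified fixed-matrix version omits:

```python
# Classic ML-EM update for emission tomography: forward project the current
# estimate, compare with measured counts, back project the ratio, and scale
# by the sensitivity image.
def mlem(system, counts, n_iter=200):
    n_pix = len(system[0])
    x = [1.0] * n_pix
    # sensitivity image: column sums of the system matrix
    sens = [sum(row[j] for row in system) for j in range(n_pix)]
    for _ in range(n_iter):
        proj = [sum(a * xi for a, xi in zip(row, x)) for row in system]
        ratio = [c / p if p > 0 else 0.0 for c, p in zip(counts, proj)]
        back = [sum(system[i][j] * ratio[i] for i in range(len(system)))
                for j in range(n_pix)]
        x = [xi * b / s for xi, b, s in zip(x, back, sens)]
    return x

# Tiny 2-pixel, 2-detector example with noiseless data.
system = [[1.0, 0.0], [0.5, 0.5]]
true_activity = [2.0, 4.0]
counts = [sum(a * t for a, t in zip(row, true_activity)) for row in system]
est = mlem(system, counts)
```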

  8. Geological Corrections in Gravimetry

    NASA Astrophysics Data System (ADS)

    Mikuška, J.; Marušiak, I.

    2015-12-01

    Applying corrections for the known geology to gravity data can be traced back into the first quarter of the 20th century. Later on, mostly in areas with sedimentary cover, at local and regional scales, the correction known as gravity stripping has been in use since the mid 1960s, provided that there was enough geological information. Stripping at regional to global scales became possible after the release of the CRUST 2.0 and CRUST 1.0 models in the years 2000 and 2013, respectively. Especially the later model provides quite a new view on the relevant geometries and on the topographic and crustal densities, as well as on the crust/mantle density contrast. Thus, the isostatic corrections, which have been often used in the past, can now be replaced by procedures working with independent information interpreted primarily from seismic studies. We have developed software for performing geological corrections in the space domain, based on a priori geometry and density grids which can be of either rectangular or spherical/ellipsoidal types with cells of the shapes of rectangles, tesseroids or triangles. It enables us to calculate the required gravitational effects not only in the form of surface maps or profiles but, for instance, also along vertical lines, which can shed some additional light on the nature of the geological correction. The software can work at a variety of scales and considers the input information out to an optional distance from the calculation point, up to the antipodes. Our main objective is to treat geological correction as an alternative to accounting for the topography with varying densities, since the bottoms of the topographic masses, namely the geoid or ellipsoid, generally do not represent geological boundaries. We would also like to call attention to the possible distortions of the corrected gravity anomalies. This work was supported by the Slovak Research and Development Agency under the contract APVV-0827-12.

  9. Natural and enhanced attenuation of metals

    SciTech Connect

    Rouse, J.V.; Pyrih, R.Z.

    1996-12-31

    The ability of natural earthen materials to attenuate the movement of contamination can be quantified in relatively simple geochemical experiments. In addition, the ability of subsurface material to attenuate potential contaminants can be enhanced through modifications to geochemical parameters such as pH or redox conditions. Such enhanced geochemical attenuation has been demonstrated at a number of sites to be a cost-effective alternative to conventional pump and treat operations. This paper describes the natural attenuation reactions which occur in the subsurface, and the way to quantify such attenuation. It also introduces the concept of enhanced geochemical attenuation, wherein naturally-occurring geochemical reactions can be used to achieve in situ fixation. The paper presents examples where such natural and enhanced attenuation have been implemented as a part of an overall remedy.

  10. Mill profiler machines soft materials accurately

    NASA Technical Reports Server (NTRS)

    Rauschl, J. A.

    1966-01-01

    Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.

  11. Computing the Seismic Attenuation in Complex Porous Materials

    NASA Astrophysics Data System (ADS)

    Masson, Yder Jean

    produces maps of the spatial distribution of Young's modulus. These maps are then used in combination with the aforementioned numerical methods to compute accurately the attenuation as a function of frequency associated with real rock samples.

  12. Two-dimensional acoustic attenuation mapping of high-temperature interstitial ultrasound lesions

    NASA Astrophysics Data System (ADS)

    Tyréus, Per Daniel; Diederich, Chris

    2004-02-01

    Acoustic attenuation change in biological tissues with temperature and time is a critical parameter for interstitial ultrasound thermal therapy treatment planning and applicator design. Earlier studies have not fully explored the effects on attenuation of temperatures (75-95 °C) and times (5-15 min) common in interstitial ultrasound treatments. A scanning transmission ultrasound attenuation measurement system was devised and used to measure attenuation changes due to these types of thermal exposures. To validate the approach and to loosely define expected values, attenuation changes in degassed ex vivo bovine liver, bovine brain and chicken muscle were measured after 10 min exposures in a water bath to temperatures up to 90 °C. Maximum attenuation increases of approximately seven, four and two times the values at 37 °C were measured for the three tissue models at 5 MHz. By using the system to scan over lesions produced using interstitial ultrasound applicators, 2D contour maps of attenuation were produced. Attenuation profiles measured through the centrelines of lesions showed that attenuation was highest close to the applicator and decreased with radial distance, as expected with decreasing thermal exposure. Attenuation values measured in profiles through lesions were also shown to decrease with reduced power to the applicator. Attenuation increases in 2D maps of interstitial ultrasound lesions in ex vivo chicken breast, bovine liver and bovine brain were correlated with visible tissue coagulation. While regions of visible coagulation corresponded well to contours of attenuation increase in liver and chicken, no lesion was visible under the same experimental conditions in brain, due primarily to the heterogeneity of the tissue. Acoustic and biothermal simulations were employed to show that attenuation models taking into account these attenuation changes at higher temperatures and longer times were better able to fit experimental data than previous models. These

  13. Aureolegraph internal scattering correction.

    PubMed

    DeVore, John; Villanucci, Dennis; LePage, Andrew

    2012-11-20

    Two methods of determining instrumental scattering for correcting aureolegraph measurements of particulate solar scattering are presented. One involves subtracting measurements made with and without an external occluding ball and the other is a modification of the Langley Plot method and involves extrapolating aureolegraph measurements collected through a large range of solar zenith angles. Examples of internal scattering correction determinations using the latter method show similar power-law dependencies on scattering, but vary by roughly a factor of 8 and suggest that changing aerosol conditions during the determinations render this method problematic. Examples of corrections of scattering profiles using the former method are presented for a range of atmospheric particulate layers from aerosols to cumulus and cirrus clouds.

  15. Metal artifacts correction in cone-beam CT bone imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Ning, Ruola; Conover, David

    2007-03-01

The cone-beam CT (CBCT) technique is needed by orthopaedists in new studies to monitor bone volume growth and blood vessel growth of structural bone grafts used in reconstruction surgery. However, titanium plates and screws, which are commonly used to connect bone grafts to host bones, can cause severe streaking and shading artifacts in the reconstructed images due to their high attenuation of x-rays. These metal artifacts distort the information of the bone and cause difficulties when measuring bone volume growth and the blood vessel growth inside. To solve this problem and help orthopaedists quantitatively record the growth of bone grafts, we present a three-dimensional metal artifact correction technique to correct the streaking artifacts generated by titanium implants. In this project, not only must the artifacts be corrected, but correct bone information must also be preserved in the image for the quantitative measurements. Both phantom studies and animal studies were conducted to test this correction method. Images without metal correction and images with metal correction were compared, along with reference bone images acquired without metal. The streaking and shading artifacts were greatly reduced after metal correction, and the accuracy of bone volume measurements was increased by 79% for the phantom studies and 53% for the animal studies.

  16. SEISMIC ATTENUATION FOR RESERVOIR CHARACTERIZATION

    SciTech Connect

    Joel Walls; M.T. Taner; Naum Derzhi; Gary Mavko; Jack Dvorkin

    2003-04-01

In this report we show some new Q-related seismic attributes on the Burlington-Seitel data set. One example, called the Energy Absorption Attribute (EAA), is based on a spectral analysis. The EAA algorithm is designed to detect a sudden increase in the rate of exponential decay in the relatively higher-frequency portion of the spectrum. In addition, we show results from a hybrid attribute that combines attenuation with relative acoustic impedance to give a better indication of commercial gas saturation.

  17. Correction coil cable

    DOEpatents

    Wang, S.T.

    1994-11-01

A wire cable assembly adapted for the winding of electrical coils is taught. A primary intended use is in particle tube assemblies for the Superconducting Super Collider. The correction coil cables have wires collected in a wire array with a center rib sandwiched therebetween to form a core assembly. The core assembly is surrounded by an assembly housing having an inner spiral wrap and a counter-wound outer spiral wrap. An alternate embodiment of the invention is rolled into a keystoned shape to improve radial alignment of the correction coil cable on a particle tube in a particle tube assembly. 7 figs.

  18. Corrections and clarifications.

    PubMed

    1994-11-11

    The 1994 and 1995 federal science budget appropriations for two of the activities were inadvertently transposed in a table that accompanied the article "Hitting the President's target is mixed blessing for agencies" by Jeffrey Mervis (News & Comment, 14 Oct., p. 211). The correct figures for Defense Department spending on university research are $1.460 billion in 1994 and $1.279 billion in 1995; for research and development at NASA, the correct figures are $9.455 billion in 1994 and $9.824 billion in 1995.

  19. Refraction corrections for surveying

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1979-01-01

    Optical measurements of range and elevation angle are distorted by the earth's atmosphere. High precision refraction correction equations are presented which are ideally suited for surveying because their inputs are optically measured range and optically measured elevation angle. The outputs are true straight line range and true geometric elevation angle. The 'short distances' used in surveying allow the calculations of true range and true elevation angle to be quickly made using a programmable pocket calculator. Topics covered include the spherical form of Snell's Law; ray path equations; and integrating the equations. Short-, medium-, and long-range refraction corrections are presented in tables.
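The spherical form of Snell's law mentioned above states that n(r)·r·sin(z) is invariant along the ray. A minimal sketch of propagating the zenith angle between two heights follows; the refractive indices and radii are assumed illustrative values, not figures from the report.

```python
import math

def snell_spherical(n1, r1, z1_deg, n2, r2):
    """Spherical Snell's law: n*r*sin(z) is constant along the ray.
    Given the zenith angle z1 at radius r1 (index n1), return the
    zenith angle in degrees at radius r2 (index n2)."""
    k = n1 * r1 * math.sin(math.radians(z1_deg))
    return math.degrees(math.asin(k / (n2 * r2)))

# A ray leaving the surface (r ~ 6371 km, n ~ 1.000293) at 60 deg zenith
# angle, evaluated 1 km higher where n has dropped to ~1.000265; the ray
# bends slightly toward the vertical (z2 just below 60 deg):
z2 = snell_spherical(1.000293, 6371.0e3, 60.0, 1.000265, 6372.0e3)
```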

  20. DNA barcode data accurately assign higher spider taxa

    PubMed Central

    Coddington, Jonathan A.; Agnarsson, Ingi; Cheng, Ren-Chung; Čandek, Klemen; Driskell, Amy; Frick, Holger; Gregorič, Matjaž; Kostanjšek, Rok; Kropf, Christian; Kweskin, Matthew; Lokovšek, Tjaša; Pipan, Miha; Vidergar, Nina

    2016-01-01

    The use of unique DNA sequences as a method for taxonomic identification is no longer fundamentally controversial, even though debate continues on the best markers, methods, and technology to use. Although both existing databanks such as GenBank and BOLD, as well as reference taxonomies, are imperfect, in best case scenarios “barcodes” (whether single or multiple, organelle or nuclear, loci) clearly are an increasingly fast and inexpensive method of identification, especially as compared to manual identification of unknowns by increasingly rare expert taxonomists. Because most species on Earth are undescribed, a complete reference database at the species level is impractical in the near term. The question therefore arises whether unidentified species can, using DNA barcodes, be accurately assigned to more inclusive groups such as genera and families—taxonomic ranks of putatively monophyletic groups for which the global inventory is more complete and stable. We used a carefully chosen test library of CO1 sequences from 49 families, 313 genera, and 816 species of spiders to assess the accuracy of genus and family-level assignment. We used BLAST queries of each sequence against the entire library and got the top ten hits. The percent sequence identity was reported from these hits (PIdent, range 75–100%). Accurate assignment of higher taxa (PIdent above which errors totaled less than 5%) occurred for genera at PIdent values >95 and families at PIdent values ≥ 91, suggesting these as heuristic thresholds for accurate generic and familial identifications in spiders. Accuracy of identification increases with numbers of species/genus and genera/family in the library; above five genera per family and fifteen species per genus all higher taxon assignments were correct. We propose that using percent sequence identity between conventional barcode sequences may be a feasible and reasonably accurate method to identify animals to family/genus. However, the quality of
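The threshold rule reported above (genus for PIdent > 95, family for PIdent ≥ 91) amounts to a simple classifier over the top BLAST hit's percent identity. The function name and structure below are illustrative, not code from the study.

```python
def assign_higher_taxon(pident, genus_threshold=95.0, family_threshold=91.0):
    """Heuristic rank assignment from the top BLAST hit's percent
    identity (PIdent), using the spider thresholds reported above:
    >95 supports a genus-level call, >=91 a family-level call."""
    if pident > genus_threshold:
        return "genus"
    if pident >= family_threshold:
        return "family"
    return "unassigned"

rank = assign_higher_taxon(93.0)  # between the thresholds -> family-level call
```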

  2. DUST ATTENUATION IN HIGH REDSHIFT GALAXIES: 'DIAMONDS IN THE SKY'

    SciTech Connect

    Scoville, Nick; Capak, Peter; Steinhardt, Charles; Faisst, Andreas; Kakazu, Yuko; Li, Gongjie

    2015-02-20

    We use observed optical to near-infrared spectral energy distributions (SEDs) of 266 galaxies in the COSMOS survey to derive the wavelength dependence of the dust attenuation at high redshift. All of the galaxies have spectroscopic redshifts in the range z = 2-6.5. The presence of the C IV absorption feature, indicating that the rest-frame UV-optical SED is dominated by OB stars, is used to select objects for which the intrinsic, unattenuated spectrum has a well-established shape. Comparison of this intrinsic spectrum with the observed broadband photometric SED then permits derivation of the wavelength dependence of the dust attenuation. The derived dust attenuation curve is similar in overall shape to the Calzetti curve for local starburst galaxies. We also see the 2175 Å bump feature which is present in the Milky Way and Large Magellanic Cloud extinction curves but not seen in the Calzetti curve. The bump feature is commonly attributed to graphite or polycyclic aromatic hydrocarbons. No significant dependence is seen with redshift between sub-samples at z = 2-4 and z = 4-6.5. The 'extinction' curve obtained here provides a firm basis for color and extinction corrections of high redshift galaxy photometry.

  3. Imaging Rayleigh wave attenuation with USArray

    NASA Astrophysics Data System (ADS)

    Bao, Xueyang; Dalton, Colleen A.; Jin, Ge; Gaherty, James B.; Shen, Yang

    2016-07-01

The EarthScope USArray provides an opportunity to obtain detailed images of the continental upper mantle at an unprecedented scale. The majority of mantle models derived from USArray data to date contain spatial variations in seismic-wave speed; however, in many cases these data sets do not by themselves allow a unique interpretation. Joint interpretation of seismic attenuation and velocity models can improve upon the interpretations based only on velocity and provide important constraints on the temperature, composition, melt content, and volatile content of the mantle. The surface wave amplitudes that constrain upper-mantle attenuation are sensitive to factors in addition to attenuation, including the earthquake source excitation, focusing and defocusing by elastic structure, and local site amplification. Because of the difficulty of isolating attenuation from these other factors, little is known about the attenuation structure of the North American upper mantle. In this study, Rayleigh wave traveltime and amplitude in the period range 25-100 s are measured using an interstation cross-correlation technique, which takes advantage of waveform similarity at nearby stations. Several estimates of Rayleigh wave attenuation and site amplification are generated at each period, using different approaches to separate the effects of attenuation and local site amplification on amplitude. It is assumed that focusing and defocusing effects can be described by the Laplacian of the traveltime field. All approaches identify the same large-scale patterns in attenuation, including areas where the attenuation values are likely contaminated by unmodelled focusing and defocusing effects. Regionally averaged attenuation maps are constructed after removal of the contaminated attenuation values, and the variations in intrinsic shear attenuation that are suggested by these Rayleigh wave attenuation maps are explored.
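Setting aside the source, focusing, and site terms discussed above, the intrinsic amplitude decay of a surface wave over a distance Δx follows A₂ = A₁·exp(−ωΔx/2cQ). A toy two-station inversion of this relation for Q is sketched below; all numbers are hypothetical, and this omits the geometrical-spreading and Laplacian focusing corrections the study actually applies.

```python
import math

def rayleigh_q(a1, a2, dx_m, period_s, phase_vel_m_s):
    """Quality factor Q from the amplitude ratio of a Rayleigh wave at two
    stations a distance dx apart, assuming pure anelastic decay:
        A2 = A1 * exp(-omega * dx / (2 * c * Q))."""
    omega = 2.0 * math.pi / period_s
    return omega * dx_m / (2.0 * phase_vel_m_s * math.log(a1 / a2))

# A 50 s Rayleigh wave (phase velocity ~3.9 km/s) losing 5% of its
# amplitude over 300 km implies Q on the order of 90-100:
q = rayleigh_q(1.0, 0.95, 300.0e3, 50.0, 3900.0)
```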

  4. Source self-attenuation in ionization chamber measurements of (57)Co solutions.

    PubMed

    Cessna, Jeffrey T; Golas, Daniel B; Bergeron, Denis E

    2016-03-01

    Source self-attenuation for solutions of (57)Co of varying density and carrier concentration was measured in nine re-entrant ionization chambers maintained at NIST. The magnitude of the attenuation must be investigated to determine whether a correction is necessary in the determination of the activity of a source that differs in composition from the source used to calibrate the ionization chamber. At our institute, corrections are currently made in the measurement of (144)Ce, (109)Cd, (67)Ga, (195)Au, (166)Ho, (177)Lu, and (153)Sm. This work presents the methods used as recently applied to (57)Co. A range of corrections up to 1% were calculated for dilute to concentrated HCl at routinely used carrier concentrations.

  5. On the very accurate numerical evaluation of the Generalized Fermi-Dirac Integrals

    NASA Astrophysics Data System (ADS)

    Mohankumar, N.; Natarajan, A.

    2016-10-01

We indicate a new and very accurate algorithm for the evaluation of the Generalized Fermi-Dirac Integral, with a relative error less than 10⁻²⁰. The method involves Double Exponential, Trapezoidal, and Gauss-Legendre quadratures. For the residue correction of the Gauss-Legendre scheme, a simple and precise continued fraction algorithm is used.
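A minimal sketch of a double-exponential-style scheme (not the authors' algorithm): substitute x = exp(t − e⁻ᵗ) to map the half-line onto the real axis, then apply the trapezoidal rule, which converges rapidly on the transformed, double-exponentially decaying integrand. The truncation limits and step count below are assumptions chosen for this sketch.

```python
import math

def gfd_integral(k, eta, theta, t_lo=-4.5, t_hi=6.0, n=2000):
    """Generalized Fermi-Dirac integral
        F_k(eta, theta) = int_0^inf x^k * sqrt(1 + theta*x/2) / (exp(x - eta) + 1) dx
    via the substitution x = exp(t - exp(-t)) and the trapezoidal rule."""
    h = (t_hi - t_lo) / n
    total = 0.0
    for i in range(n + 1):
        t = t_lo + i * h
        x = math.exp(t - math.exp(-t))
        dxdt = x * (1.0 + math.exp(-t))          # Jacobian of the substitution
        f = x**k * math.sqrt(1.0 + 0.5 * theta * x) / (math.exp(x - eta) + 1.0)
        w = 0.5 if i in (0, n) else 1.0          # trapezoidal end weights
        total += w * f * dxdt
    return total * h

# Check against the non-relativistic closed form at theta = 0, eta = 0,
# k = 1/2:  (1 - 2**-0.5) * Gamma(3/2) * zeta(3/2) ~ 0.6780939
val = gfd_integral(0.5, 0.0, 0.0)
```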

  6. Scatter corrections for cone beam optical CT

    NASA Astrophysics Data System (ADS)

    Olding, Tim; Holmes, Oliver; Schreiner, L. John

    2009-05-01

Cone beam optical computed tomography (OptCT) employing the VISTA scanner (Modus Medical, London, ON) has been shown to have significant promise for fast, three-dimensional imaging of polymer gel dosimeters. One distinct challenge with this approach arises from the combination of the cone beam geometry, a diffuse light source, and the scattering polymer gel media, which all contribute scatter signal that perturbs the accuracy of the scanner. Beam stop array (BSA), beam pass array (BPA), and anti-scatter polarizer correction methodologies have been employed to remove scatter signal from OptCT data. These approaches are investigated through the use of well-characterized phantom scattering solutions and irradiated polymer gel dosimeters. BSA-corrected scatter solutions show good agreement in attenuation coefficient with the optically absorbing dye solutions, with considerable reduction of scatter-induced cupping artifact at high scattering concentrations. The application of BSA scatter corrections to a polymer gel dosimeter led to an overall improvement in gamma evaluation, reducing the fraction of pixels failing the (3%, 3 mm) gamma criterion from 7.8% to 0.15%.
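The (3%, 3 mm) gamma criterion referenced above compares a measured dose distribution against a reference, passing a point if some nearby reference point is close in both dose and distance. A simplified one-dimensional, brute-force global-gamma sketch (illustrative only, not the evaluation code used in the study):

```python
import math

def gamma_pass_rate(ref, meas, spacing_mm, dose_tol=0.03, dist_tol_mm=3.0):
    """Fraction of measured points passing a 1-D global gamma test
    (3%, 3 mm by default): gamma = min over reference points of
    sqrt((dose diff / tolerance)^2 + (distance / tolerance)^2) <= 1."""
    d_max = max(ref)  # global normalization dose
    passed = 0
    for i, dm in enumerate(meas):
        best = float("inf")
        for j, dr in enumerate(ref):
            dd = (dm - dr) / (dose_tol * d_max)
            dx = (i - j) * spacing_mm / dist_tol_mm
            best = min(best, math.hypot(dd, dx))
        passed += best <= 1.0
    return passed / len(meas)

# Identical profiles pass everywhere:
rate = gamma_pass_rate([1.0, 2.0, 3.0, 2.0, 1.0], [1.0, 2.0, 3.0, 2.0, 1.0], 1.0)
```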

  7. Precise and accurate isotopic measurements using multiple-collector ICPMS

    NASA Astrophysics Data System (ADS)

    Albarède, F.; Telouk, Philippe; Blichert-Toft, Janne; Boyet, Maud; Agranier, Arnaud; Nelson, Bruce

    2004-06-01

New techniques of isotopic measurements by a new generation of mass spectrometers equipped with an inductively-coupled-plasma source, a magnetic mass filter, and multiple collection (MC-ICPMS) are quickly developing. These techniques are valuable because of (1) the ability of ICP sources to ionize virtually every element in the periodic table, and (2) the large sample throughput. However, because of the complex trajectories of multiple ion beams produced in the plasma source, whether from the same or different elements, the acquisition of precise and accurate isotopic data with this type of instrument still requires a good understanding of instrumental fractionation processes, both mass-dependent and mass-independent. Although the physical processes responsible for the instrumental mass bias are still to be understood more fully, we here present a theoretical framework that allows for most of the analytical limitations to high precision and accuracy to be overcome. After a presentation of a unifying phenomenological theory for mass-dependent fractionation in mass spectrometers, we show how this theory accounts for the techniques of standard bracketing and of isotopic normalization by a ratio of either the same or a different element, such as the use of Tl to correct mass bias on Pb. Accuracy is discussed with reference to the concept of cup efficiencies. Although these can be simply calibrated by analyzing standards, we derive a straightforward, very general method to calculate accurate isotopic ratios from dynamic measurements. In this study, we successfully applied the dynamic method to Nd and Pb as examples. We confirm that the assumption of identical mass bias for neighboring elements (notably Pb and Tl, and Yb and Lu) is both unnecessary and incorrect. We further discuss the dangers of straightforward standard-sample bracketing when chemical purification of the element to be analyzed is imperfect. Pooling runs to improve precision is acceptable provided the pooled

  8. Automatic correction of dental artifacts in PET/MRI

    PubMed Central

    Ladefoged, Claes N.; Andersen, Flemming L.; Keller, Sune. H.; Beyer, Thomas; Law, Ian; Højgaard, Liselotte; Darkner, Sune; Lauze, Francois

    2015-01-01

A challenge when using current magnetic resonance (MR)-based attenuation correction in positron emission tomography/MR imaging (PET/MRI) is that the MRIs can have a signal void around the dental fillings that is segmented as artificial air-regions in the attenuation map. For artifacts connected to the background, we propose an extension to an existing active contour algorithm to delineate the outer contour using the nonattenuation corrected PET image and the original attenuation map. We propose a combination of two different methods for differentiating the artifacts within the body from the anatomical air-regions by first using a template of artifact regions, and second, representing the artifact regions with a combination of active shape models and k-nearest-neighbors. The accuracy of the combined method has been evaluated using 25 (18)F-fluorodeoxyglucose PET/MR patients. Results showed that the approach was able to correct an average of 97±3% of the artifact areas. PMID:26158104

  9. X-ray attenuation properties of stainless steel (u)

    SciTech Connect

    Wang, Lily L; Berry, Phillip C

    2009-01-01

Stainless steel vessels are used to enclose solid materials for studying x-ray radiolysis that involves gas release from the materials. Commercially available stainless steel components are easily adapted to form a static or a dynamic condition to monitor the gas evolved from the solid materials during and after the x-ray irradiation. Experimental data published on the x-ray attenuation properties of stainless steel, however, are very scarce, especially over a wide range of x-ray energies. The objective of this work was to obtain experimental data that will be used to determine how a poly-energetic x-ray beam is attenuated by the stainless steel container wall. The data will also be used in conjunction with MCNP (Monte Carlo N-Particle) modeling to develop an accurate method for determining energy absorbed in known solid samples contained in stainless steel vessels. In this study, experiments to measure the attenuation properties of stainless steel were performed for a range of bremsstrahlung x-ray beams with a maximum energy ranging from 150 keV to 10 MeV. Bremsstrahlung x-ray beams of these energies are commonly used in radiography of engineering and weapon components. The weapon surveillance community has a great interest in understanding how the x-rays in radiography affect short-term and long-term properties of weapon materials.
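For a monoenergetic beam, wall transmission follows the Beer-Lambert law, I/I₀ = exp(−μt); a poly-energetic bremsstrahlung beam, as in the study above, would require integrating this over the spectrum. The attenuation coefficient below is an assumed illustrative value (roughly that of iron near 1 MeV), not a measurement from this work.

```python
import math

def transmitted_fraction(mu_cm, thickness_cm):
    """Monoenergetic Beer-Lambert transmission through a uniform wall:
    I/I0 = exp(-mu * t), with mu in 1/cm and t in cm."""
    return math.exp(-mu_cm * thickness_cm)

# With an assumed linear attenuation coefficient of ~0.47 cm^-1
# (iron near 1 MeV), a 2 mm wall transmits about 91% of the beam:
frac = transmitted_fraction(0.47, 0.2)
```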

  10. Issues in Correctional Training and Casework. Correctional Monograph.

    ERIC Educational Resources Information Center

    Wolford, Bruce I., Ed.; Lawrenz, Pam, Ed.

    The eight papers contained in this monograph were drawn from two national meetings on correctional training and casework. Titles and authors are: "The Challenge of Professionalism in Correctional Training" (Michael J. Gilbert); "A New Perspective in Correctional Training" (Jack Lewis); "Reasonable Expectations in Correctional Officer Training:…

  11. Magnetoelectric Composite Based Microwave Attenuator

    NASA Astrophysics Data System (ADS)

    Tatarenko, A. S.; Srinivasan, G.

    2005-03-01

Ferrite-ferroelectric composites are magnetoelectric (ME) due to their response to elastic and electromagnetic force fields. The ME composites are characterized by tensor permittivity, permeability, and ME susceptibility. The unique combination of magnetic, electrical, and ME interactions, therefore, opens up the possibility of electric field tunable ferromagnetic resonance (FMR) based devices [1]. Here we discuss an ME attenuator operating at 9.3 GHz based on FMR in a layered sample consisting of lead magnesium niobate-lead titanate bonded to yttrium iron garnet (YIG) film on a gadolinium gallium garnet substrate. Electrical tuning is realized with the application of a control voltage due to the ME effect; the shift is 0-15 Oe as E is increased from 0 to 3 kV/cm. If the attenuator is operated at FMR, the corresponding insertion loss will range from 25 dB to 2 dB. 1. S. Shastry, G. Srinivasan, M.I. Bichurin, V.M. Petrov, and A.S. Tatarenko, Phys. Rev. B 70, 064416 (2004). Supported by grants from the National Science Foundation (DMR-0302254), the Russian Ministry of Education (Å02-3.4-278), and the Universities of Russia Foundation (UNR 01.01.026).

  12. Space charge stopband correction

    SciTech Connect

    Huang, Xiaobiao; Lee, S.Y.; /Indiana U.

    2005-09-01

It is speculated that the space charge effect causes beam emittance growth through resonant envelope oscillation. Based on this theory, we propose an approach, called space charge stopband correction, to reduce such emittance growth by compensating the half-integer stopband width of the resonant oscillation. It is illustrated with the Fermilab Booster model.

  13. Counselor Education for Corrections.

    ERIC Educational Resources Information Center

    Parsigian, Linda

    Counselor education programs most often prepare their graduates to work in either a school setting, anywhere from the elementary level through higher education, or a community agency. There is little indication that counselor education programs have seriously undertaken the task of training counselors to enter the correctional field. If…

  14. Refraction corrections for surveying

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1980-01-01

    Optical measurements of range and elevation angles are distorted by refraction of Earth's atmosphere. Theoretical discussion of effect, along with equations for determining exact range and elevation corrections, is presented in report. Potentially useful in optical site surveying and related applications, analysis is easily programmed on pocket calculator. Input to equation is measured range and measured elevation; output is true range and true elevation.

  15. 75 FR 68409 - Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-08

... Documents ] Presidential Determination No. 2010-14 of September 3, 2010--Unexpected Urgent Refugee And... on page 67015 in the issue of Monday, November 1, 2010, make the following correction: On page 67015, the Presidential Determination number should read ``2010-14'' (Presidential Sig.) [FR Doc....

  16. 75 FR 68407 - Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-08

... Documents ] Presidential Determination No. 2010-12 of August 26, 2010--Unexpected Urgent Refugee and... beginning on page 67013 in the issue of Monday, November 1, 2010, make the following correction: On page 67013, the Presidential Determination number should read ``2010-12'' (Presidential Sig.) [FR Doc....

  17. Assessing aerobic natural attenuation of trichloroethene at four DOE sites

    SciTech Connect

    Koelsch, Michael C.; Starr, Robert C.; Sorenson, Jr., Kent S.

    2005-03-01

A 3-year Department of Energy Environmental Management Science Program (EMSP) project is currently investigating natural attenuation of trichloroethene (TCE) in aerobic groundwater. This presentation summarizes the results of a screening process to identify TCE plumes at DOE facilities that are suitable for assessing the rate of TCE cometabolism under aerobic conditions. In order to estimate aerobic degradation rates, plumes had to meet the following criteria: TCE must be present in aerobic groundwater, a conservative co-contaminant must be present and have approximately the same source as TCE, and the groundwater velocity must be known. A total of 127 TCE plumes were considered across 24 DOE sites. The four sites retained for the assessment were: (1) Brookhaven National Laboratory, OU III; (2) Paducah Gaseous Diffusion Plant, Northwest Plume; (3) Rocky Flats Environmental Technology Site, Industrialized Area--Southwest Plume and 903 Pad South Plume; and (4) Savannah River Site, A/M Area Plume. For each of these sites, a co-contaminant derived from the same source area as TCE was used as a nonbiodegrading tracer. The tracer determined the extent to which concentration decreases in the plume can be accounted for solely by abiotic processes such as dispersion and dilution. Any concentration decreases not accounted for by these processes must be explained by some other natural attenuation mechanism. Thus, ''half-lives'' presented herein are in addition to attenuation that occurs due to hydrologic mechanisms. This ''tracer-corrected method'' has previously been used at the DOE's Idaho National Engineering and Environmental Laboratory in conjunction with other techniques to document the occurrence of intrinsic aerobic cometabolism. Application of this method to other DOE sites is the first step to determining whether this might be a significant natural attenuation mechanism on a broader scale. Application of the tracer-corrected method to data from the Brookhaven
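The tracer-corrected method described above amounts to fitting first-order decay to the TCE/tracer concentration ratio, since dilution and dispersion affect both compounds equally and cancel in the ratio. A sketch with hypothetical numbers:

```python
import math

def tracer_corrected_half_life(ratio_up, ratio_down, travel_time_yr):
    """First-order degradation half-life from the decline of the
    TCE/tracer concentration ratio between an upgradient and a
    downgradient well; hydrodynamic losses cancel in the ratio."""
    lam = math.log(ratio_up / ratio_down) / travel_time_yr  # rate constant, 1/yr
    return math.log(2.0) / lam                              # half-life, yr

# Hypothetical ratio dropping from 2.0 to 0.5 over 10 years of travel
# time implies a 5-year biodegradation half-life:
t_half = tracer_corrected_half_life(2.0, 0.5, 10.0)
```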

  18. Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry.

    PubMed

    Fuchs, Franz G; Hjelmervik, Jon M

    2016-02-01

    A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire OpenGL pipeline is used in a multi-stage algorithm leveraging techniques from surface rendering, order-independent transparency, as well as theory and numerical methods for ordinary differential equations. We showcase the efficiency of our approach on different models relevant to industry, ranging from quality inspection of the parametrization of the geometry, to stress analysis in linear elasticity, to visualization of computational fluid dynamics results. PMID:26731454

  19. Accurate Determination of Conformational Transitions in Oligomeric Membrane Proteins

    PubMed Central

    Sanz-Hernández, Máximo; Vostrikov, Vitaly V.; Veglia, Gianluigi; De Simone, Alfonso

    2016-01-01

    The structural dynamics governing collective motions in oligomeric membrane proteins play key roles in vital biomolecular processes at cellular membranes. In this study, we present a structural refinement approach that combines solid-state NMR experiments and molecular simulations to accurately describe concerted conformational transitions identifying the overall structural, dynamical, and topological states of oligomeric membrane proteins. The accuracy of the structural ensembles generated with this method is shown to reach the statistical error limit, and is further demonstrated by correctly reproducing orthogonal NMR data. We demonstrate the accuracy of this approach by characterising the pentameric state of phospholamban, a key player in the regulation of calcium uptake in the sarcoplasmic reticulum, and by probing its dynamical activation upon phosphorylation. Our results underline the importance of using an ensemble approach to characterise the conformational transitions that are often responsible for the biological function of oligomeric membrane protein states. PMID:26975211

  1. Neutron supermirrors: an accurate theory for layer thickness computation

    NASA Astrophysics Data System (ADS)

    Bray, Michael

    2001-11-01

    We present a new theory for the computation of Super-Mirror stacks, using accurate formulas derived from the classical optics field. Approximations are introduced into the computation, but at a later stage than existing theories, providing a more rigorous treatment of the problem. The final result is a continuous thickness stack, whose properties can be determined at the outset of the design. We find that the well-known fourth power dependence of number of layers versus maximum angle is (of course) asymptotically correct. We find a formula giving directly the relation between desired reflectance, maximum angle, and number of layers (for a given pair of materials). Note: The author of this article, a classical opticist, has limited knowledge of the Neutron world, and begs forgiveness for any shortcomings, erroneous assumptions and/or misinterpretation of previous authors' work on the subject.

  2. Fast and accurate determination of modularity and its effect size

    NASA Astrophysics Data System (ADS)

    Treviño, Santiago, III; Nyberg, Amy; Del Genio, Charo I.; Bassler, Kevin E.

    2015-02-01

    We present a fast spectral algorithm for community detection in complex networks. Our method searches for the partition with the maximum value of the modularity via the interplay of several refinement steps that include both agglomeration and division. We validate the accuracy of the algorithm by applying it to several real-world benchmark networks. On all these, our algorithm performs as well or better than any other known polynomial scheme. This allows us to extensively study the modularity distribution in ensembles of Erdős-Rényi networks, producing theoretical predictions for means and variances inclusive of finite-size corrections. Our work provides a way to accurately estimate the effect size of modularity, providing a z-score measure of it and enabling a more informative comparison of networks with different numbers of nodes and links.
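
    The quantity being maximised here is the standard Newman-Girvan modularity, Q = (1/2m) Σ_ij (A_ij − k_i k_j / 2m) δ(c_i, c_j). A minimal sketch of evaluating it for a given partition (this illustrates the measure only, not the authors' spectral algorithm):

    ```python
    import numpy as np

    def modularity(adj, labels):
        """Newman-Girvan modularity of a partition of an undirected graph.

        adj    : symmetric 0/1 adjacency matrix, no self-loops
        labels : community label for each node
        """
        adj = np.asarray(adj, dtype=float)
        k = adj.sum(axis=1)                      # node degrees
        two_m = k.sum()                          # 2m = total degree
        same = np.equal.outer(labels, labels)    # delta(c_i, c_j)
        return ((adj - np.outer(k, k) / two_m) * same).sum() / two_m

    # Two triangles joined by one edge, split into the two obvious communities.
    A = np.zeros((6, 6), int)
    for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
        A[i, j] = A[j, i] = 1
    q = modularity(A, np.array([0, 0, 0, 1, 1, 1]))
    ```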

  3. Modified chemiluminescent NO analyzer accurately measures NOX

    NASA Technical Reports Server (NTRS)

    Summers, R. L.

    1978-01-01

    Installation of molybdenum nitric oxide (NO)-to-higher oxides of nitrogen (NOx) converter in chemiluminescent gas analyzer and use of air purge allow accurate measurements of NOx in exhaust gases containing as much as thirty percent carbon monoxide (CO). Measurements using conventional analyzer are highly inaccurate for NOx if as little as five percent CO is present. In modified analyzer, molybdenum has high tolerance to CO, and air purge substantially quenches NOx destruction. In tests, modified chemiluminescent analyzer accurately measured NO and NOx concentrations for over 4 months with no degradation in performance.

  4. Advanced reconstruction of attenuation maps using SPECT emission data only

    NASA Astrophysics Data System (ADS)

    Salomon, André; Goedicke, Andreas; Aach, Til

    2009-02-01

    Today, attenuation corrected SPECT, typically performed using CT or Gadolinium line source based transmission scans, is more and more becoming standard in many medical applications. Moreover, the information about the material density distribution provided by these scans is key for other artifact compensation approaches in advanced SPECT reconstruction. Major drawbacks of these approaches are the additional patient radiation dose and hardware/maintenance costs, as well as the additional workflow effort, e.g., if the CT scans are not performed on a hybrid scanner. It has been investigated in the past whether it is possible to recover this structural information solely from the SPECT scan data. However, the investigated methods often result in noticeable image artifacts due to cross-dependences between attenuation and activity distribution estimation. With the simultaneous reconstruction method presented in this paper, we aim to effectively prevent these typical cross-talk artifacts using a priori known atlas information of the human body. At first, an initial 3D shape model is coarsely registered to the SPECT data using anatomical landmarks, and each organ structure within the model is identified with its typical attenuation coefficient. During the iterative reconstruction based on a modified ML-EM scheme, the algorithm simultaneously adapts both the local activity estimation and the 3D shape model to improve the overall consistency between measured and estimated sinogram data. By explicitly avoiding topology modifications resulting in a non-anatomical state, we ensure that the estimated attenuation map remains realistic. Several tests with simulated as well as real patient SPECT data were performed to test the proposed algorithm, which demonstrated reliable convergence behaviour in both cases. Comparing the achieved results with available reference data, an overall good agreement for both cold as well as hot activity regions could be observed (mean deviation: -5.98%).

  5. Gradient Artefact Correction and Evaluation of the EEG Recorded Simultaneously with fMRI Data Using Optimised Moving-Average

    PubMed Central

    Wu, Yan; Besseling, René M. H.; Lamerichs, Rolf; Aarts, Ronald M.

    2016-01-01

    Over the past years, coregistered EEG-fMRI has emerged as a powerful tool for neurocognitive research and correlated studies, mainly because of the possibility of integrating the high temporal resolution of the EEG with the high spatial resolution of fMRI. However, additional work remains to be done in order to improve the quality of the EEG signal recorded simultaneously with fMRI data, in particular regarding the occurrence of the gradient artefact. In this paper, we present a novel approach for gradient artefact correction based upon optimised moving-average filtering (OMA). OMA iteratively applies a moving-average filter, which allows estimation and cancellation of the gradient artefact by integration. Additionally, OMA can attenuate the periodic artefact activity without accurate information about MRI triggers. Compared with the slice-average subtraction performed by the established AAS method, the proposed approach achieves a better balance between EEG signal preservation and effective suppression of the gradient artefact. Since the stochastic nature of the EEG signal complicates the assessment of EEG preservation after application of the gradient artefact correction, we also propose a simple and effective method to account for it. PMID:27446943

  6. Gradient Artefact Correction and Evaluation of the EEG Recorded Simultaneously with fMRI Data Using Optimised Moving-Average.

    PubMed

    Ferreira, José L; Wu, Yan; Besseling, René M H; Lamerichs, Rolf; Aarts, Ronald M

    2016-01-01

    Over the past years, coregistered EEG-fMRI has emerged as a powerful tool for neurocognitive research and correlated studies, mainly because of the possibility of integrating the high temporal resolution of the EEG with the high spatial resolution of fMRI. However, additional work remains to be done in order to improve the quality of the EEG signal recorded simultaneously with fMRI data, in particular regarding the occurrence of the gradient artefact. In this paper, we present a novel approach for gradient artefact correction based upon optimised moving-average filtering (OMA). OMA iteratively applies a moving-average filter, which allows estimation and cancellation of the gradient artefact by integration. Additionally, OMA can attenuate the periodic artefact activity without accurate information about MRI triggers. Compared with the slice-average subtraction performed by the established AAS method, the proposed approach achieves a better balance between EEG signal preservation and effective suppression of the gradient artefact. Since the stochastic nature of the EEG signal complicates the assessment of EEG preservation after application of the gradient artefact correction, we also propose a simple and effective method to account for it. PMID:27446943
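
    The core idea of moving-average cancellation can be sketched in a few lines. This is a simplified illustration of the principle only, not the authors' OMA algorithm; the signal, artifact period, and iteration count are invented for the example:

    ```python
    import numpy as np

    def remove_periodic_artifact(signal, period, n_iter=3):
        # A moving average whose window equals the artifact period cancels any
        # zero-mean periodic component exactly, leaving an estimate of the
        # underlying EEG; the artifact estimate is the residual.
        kernel = np.ones(period) / period
        eeg = signal.astype(float)
        for _ in range(n_iter):
            eeg = np.convolve(eeg, kernel, mode="same")
        return eeg, signal - eeg  # cleaned trace, artifact estimate

    # Synthetic trace: a slow "EEG" sine plus a strong zero-mean artifact of period 20.
    t = np.arange(2000)
    true_eeg = np.sin(2 * np.pi * t / 200)
    artifact = 5.0 * np.sign(np.sin(2 * np.pi * t / 20))
    cleaned, art_est = remove_periodic_artifact(true_eeg + artifact, period=20)
    ```

    In the interior of the trace the window spans exactly one artifact period, so the artifact vanishes from the EEG estimate; the trade-off, as the abstract notes for AAS-style subtraction too, is mild smoothing of the genuine EEG.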

  7. A model for estimating ultrasound attenuation along the propagation path to the fetus from backscattered waveforms

    NASA Astrophysics Data System (ADS)

    Bigelow, Timothy A.; O'Brien, William D.

    2005-08-01

    Accurate estimates of the ultrasound pressure and/or intensity incident on the developing fetus on a patient-specific basis could improve the diagnostic potential of medical ultrasound by allowing the clinician to increase the transmit power while still avoiding the potential for harmful bioeffects. Neglecting nonlinear effects, the pressure/intensity can be estimated if an accurate estimate of the attenuation along the propagation path (i.e., total attenuation) can be obtained. Herein, a method for determining the total attenuation from the backscattered power spectrum from the developing fetus is proposed. The boundaries between amnion and either the fetus' skull or soft tissue are each modeled as planar impedance boundaries at an unknown orientation with respect to the sound beam. A mathematical analysis demonstrates that the normalized returned voltage spectrum from this model is independent of the plane's orientation. Hence, the total attenuation can be estimated by comparing the location of the spectral peak in the reflection from the fetus to the location of the spectral peak in a reflection obtained from a rigid plane in a water bath. The independence of the attenuation estimate and plane orientation is then demonstrated experimentally using a Plexiglas plate, a rat's skull, and a tissue-mimicking phantom.
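
    The estimate reduces to reading off a spectral peak shift. A sketch assuming a Gaussian amplitude spectrum and attenuation linear in frequency (a common modelling assumption in this literature; the parameter values below are illustrative, not from the paper):

    ```python
    import numpy as np

    def total_attenuation_from_peak_shift(f_ref_peak, f_meas_peak, sigma_mhz):
        """Total attenuation slope (Np/MHz) from a Gaussian spectral peak downshift.

        For a Gaussian amplitude spectrum exp(-(f - f0)^2 / (2 sigma^2)) and
        attenuation linear in frequency, exp(-a_tot * f), the product is again
        Gaussian with its peak shifted to f0 - sigma^2 * a_tot. Comparing the
        fetal-reflection peak with the rigid-plane water-bath reference peak
        therefore yields a_tot independently of the plane orientation.
        """
        return (f_ref_peak - f_meas_peak) / sigma_mhz ** 2

    # Synthetic check: build the attenuated spectrum and recover a_tot.
    f = np.linspace(0.0, 15.0, 20001)          # frequency axis, MHz
    f0, sigma, a_tot = 7.5, 1.5, 0.4           # centre, bandwidth, Np/MHz
    spec = np.exp(-(f - f0) ** 2 / (2 * sigma ** 2)) * np.exp(-a_tot * f)
    f_peak = f[np.argmax(spec)]
    a_est = total_attenuation_from_peak_shift(f0, f_peak, sigma)
    ```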

  8. Quasars as very-accurate clock synchronizers

    NASA Technical Reports Server (NTRS)

    Hurd, W. J.; Goldstein, R. M.

    1975-01-01

    Quasars can be employed to synchronize global data communications, geophysical measurements, and atomic clocks. The technique is potentially two to three orders of magnitude better than the presently used Moon-bounce system. Comparisons between quasar signals and clock pulses are used to develop correction or synchronization factors for station clocks.

  9. Differential dust attenuation in CALIFA galaxies

    NASA Astrophysics Data System (ADS)

    Vale Asari, N.; Cid Fernandes, R.; Amorim, A. L.; Lacerda, E. A. D.; Schlickmann, M.; Wild, V.; Kennicutt, R. C.

    2016-06-01

    Dust attenuation has long been treated as a simple parameter in SED fitting. Real galaxies are, however, much more complicated: The measured dust attenuation is not a simple function of the dust optical depth, but depends strongly on galaxy inclination and the relative distribution of stars and dust. We study the nebular and stellar dust attenuation in CALIFA galaxies, and propose some empirical recipes to make the dust treatment more realistic in spectral synthesis codes. By adding optical recombination emission lines, we find better constraints for differential attenuation. Those recipes can be applied to unresolved galaxy spectra, and lead to better recovered star formation rates.

  10. Measurement of photon attenuation from different cardiac chambers

    SciTech Connect

    Keller, A.M.; Simon, T.R.; Malloy, C.R.; Dehmer, G.J.; Smitherman, T.C.

    1985-05-01

    Accounting for the attenuation (AT) of photons within cardiac chambers is crucial to accurate non-geometric volume determinations from gated blood pool scintigraphy. Previous techniques to determine AT for each patient have assumed an attenuation factor of 0.15/cm for Tc-99m, the value for water. To verify this, the authors determined the AT at various tissue distances (TD) in vivo. As a point source they used the balloon of a 5 French Swan-Ganz catheter which could reproducibly be filled with a constant amount of Tc-99m and could be placed within the left or right cardiac chambers. The exact location of the balloon, once inflated, and the TD from the balloon to the collimator of a small field-of-view Anger camera was determined using biplane orthogonal fluoroscopy. AT was determined by counting the inflated Tc-99m filled balloon in air and dividing that value by the counts of the same balloon within the heart. The authors positioned the balloon in the apex of the right and left ventricle, the ascending aorta and at the junction of the right atrium and inferior vena cava to give a total of 36 simultaneous observations of AT and TD. For their data, the slope of the regression of the natural log of AT versus TD, when forced through zero, was 0.102, the calculated attenuation factor. The authors conclude that the attenuation factor that should be used for determining cardiac volumes with gated blood pool scans is 0.102/cm, not the value for water.
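
    The reported 0.102/cm figure corresponds to a zero-intercept least-squares fit of ln(AT) against tissue distance, since AT = counts(air)/counts(in vivo) = exp(mu * TD). The fit can be reproduced in a few lines (the observations below are synthetic, generated from the reported coefficient):

    ```python
    import numpy as np

    def attenuation_coefficient(tissue_distance_cm, counts_air, counts_in_vivo):
        """Slope of ln(AT) vs tissue distance, regression forced through zero.

        AT = counts_air / counts_in_vivo, so ln(AT) = mu * TD for a single
        effective attenuation coefficient mu (1/cm).
        """
        td = np.asarray(tissue_distance_cm, float)
        ln_at = np.log(np.asarray(counts_air, float) / np.asarray(counts_in_vivo, float))
        return float(td @ ln_at / (td @ td))  # zero-intercept least squares

    # Synthetic observations generated with mu = 0.102/cm, as reported above.
    td = np.array([5.0, 8.0, 10.0, 12.0])
    air = np.array([1e5] * 4)
    in_vivo = air * np.exp(-0.102 * td)
    mu = attenuation_coefficient(td, air, in_vivo)
    ```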

  11. Attenuation of the LG wave across the contiguous United States

    NASA Astrophysics Data System (ADS)

    Gallegos, Andrea Christina

    Lg waveforms recorded by EarthScope's Transportable Array (TA) are used to estimate Lg Q in the contiguous United States. Shear-wave crustal attenuation is calculated based on Lg spectral amplitudes filtered at several narrow bandwidths with central frequencies of 0.5, 1, 2, and 3 Hz. The two-station and reverse two-station techniques were used to calculate these Q values. A total of 349 crustal earthquakes occurring from 2004 to 2015 and ranging from magnitude 3 to magnitude 6 were used in this study. The results show that the western U.S., an area ranging from 25°N to 50°N and from 125°W to 105°W, is a primarily low Q (high attenuation) area, with isolated high Q (low attenuation) regions corresponding to the Colorado Plateau, the Rocky Mountains, the Columbia Plateau, and the Sierra Nevada Mountains. The central and eastern U.S., an area ranging from 105°W to 60°W, is found to be high Q overall, with isolated low Q areas along the Coastal Plain, the Reelfoot Rift, and the Wisconsin-Minnesota border region. A positive correlation between high heat flow, the presence of thick sediments, recent tectonic activity, and low Q is observed. Areas with low heat flow, thin sediment cover, and no recent tectonic activity were observed to have consistently high Q. Lg Q was found to have a power law type frequency dependence throughout the U.S., with an increase in central frequency resulting in an increase in Q. At higher frequencies, crustal attenuation is dominated by scattering. These new Lg tomography models are based on an unprecedented amount and coverage of data, providing improved accuracy and detail. This increase in detail can improve high frequency ground motion predictions of future large earthquakes for more accurate hazard assessment and improve overall understanding of the structure and assemblage of the contiguous United States.
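
    The two-station technique can be written compactly under standard assumptions: an amplitude model A(f, d) = S(f) d^(-1/2) exp(-pi f d / (Q v)), in which the source term S(f) cancels in the ratio of the two stations. The spreading exponent and Lg group velocity below are conventional illustrative choices, not values taken from this work:

    ```python
    import math

    def two_station_q(freq_hz, a1, a2, d1_km, d2_km, v_km_s=3.5):
        """Two-station Lg Q estimate at one frequency.

        a1, a2 : spectral amplitudes at the near (d1) and far (d2) stations,
        both on the same source-to-station azimuth so the source cancels.
        """
        spreading = 0.5 * math.log(d1_km / d2_km)   # removes d^-0.5 spreading
        return math.pi * freq_hz * (d2_km - d1_km) / (
            v_km_s * (math.log(a1 / a2) + spreading))

    # Synthetic amplitudes generated with Q = 400 at 1 Hz are recovered exactly.
    q_true, f, d1, d2, v = 400.0, 1.0, 300.0, 600.0, 3.5
    amp = lambda d: d ** -0.5 * math.exp(-math.pi * f * d / (q_true * v))
    q_est = two_station_q(f, amp(d1), amp(d2), d1, d2, v)
    ```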

  12. Empirical beam hardening correction (EBHC) for CT

    SciTech Connect

    Kyriakou, Yiannis; Meyer, Esther; Prell, Daniel; Kachelriess, Marc

    2010-10-15

    Purpose: Due to x-ray beam polychromaticity and scattered radiation, attenuation measurements tend to be underestimated. Cupping and beam hardening artifacts become apparent in the reconstructed CT images. If only one material such as water, for example, is present, these artifacts can be reduced by precorrecting the rawdata. Higher order beam hardening artifacts, as they result when a mixture of materials such as water and bone, or water and bone and iodine is present, require an iterative beam hardening correction where the image is segmented into different materials and those are forward projected to obtain new rawdata. Typically, the forward projection must correctly model the beam polychromaticity and account for all physical effects, including the energy dependence of the assumed materials in the patient, the detector response, and others. We propose a new algorithm that does not require any knowledge about spectra or attenuation coefficients and that does not need to be calibrated. The proposed method corrects beam hardening in single energy CT data. Methods: The only a priori knowledge entering EBHC is the segmentation of the object into different materials. Materials other than water are segmented from the original image, e.g., by using simple thresholding. Then, a (monochromatic) forward projection of these other materials is performed. The measured rawdata and the forward projected material-specific rawdata are monomially combined (e.g., multiplied or squared) and reconstructed to yield a set of correction volumes. These are then linearly combined and added to the original volume. The combination weights are determined to maximize the flatness of the new and corrected volume. EBHC is evaluated using data acquired with a modern cone-beam dual-source spiral CT scanner (Somatom Definition Flash, Siemens Healthcare, Forchheim, Germany), with a modern dual-source micro-CT scanner (TomoScope Synergy Twin, CT Imaging GmbH, Erlangen, Germany), and with a modern

  13. Can Appraisers Rate Work Performance Accurately?

    ERIC Educational Resources Information Center

    Hedge, Jerry W.; Laue, Frances J.

    The ability of individuals to make accurate judgments about others is examined and literature on this subject is reviewed. A wide variety of situational factors affects the appraisal of performance. It is generally accepted that the purpose of the appraisal influences the accuracy of the appraiser. The instrumentation, or tools, available to the…

  14. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

    Thoriated-tungsten is pointed accurately and quickly by using sodium nitrite. Point produced is smooth and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces time and cost of preparing tungsten electrodes.

  15. A highly accurate ab initio potential energy surface for methane

    NASA Astrophysics Data System (ADS)

    Owens, Alec; Yurchenko, Sergei N.; Yachmenev, Andrey; Tennyson, Jonathan; Thiel, Walter

    2016-09-01

    A new nine-dimensional potential energy surface (PES) for methane has been generated using state-of-the-art ab initio theory. The PES is based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set limit and incorporates a range of higher-level additive energy corrections. These include core-valence electron correlation, higher-order coupled cluster terms beyond perturbative triples, scalar relativistic effects, and the diagonal Born-Oppenheimer correction. Sub-wavenumber accuracy is achieved for the majority of experimentally known vibrational energy levels with the four fundamentals of 12CH4 reproduced with a root-mean-square error of 0.70 cm-1. The computed ab initio equilibrium C-H bond length is in excellent agreement with previous values despite pure rotational energies displaying minor systematic errors as J (rotational excitation) increases. It is shown that these errors can be significantly reduced by adjusting the equilibrium geometry. The PES represents the most accurate ab initio surface to date and will serve as a good starting point for empirical refinement.

  16. A highly accurate ab initio potential energy surface for methane.

    PubMed

    Owens, Alec; Yurchenko, Sergei N; Yachmenev, Andrey; Tennyson, Jonathan; Thiel, Walter

    2016-09-14

    A new nine-dimensional potential energy surface (PES) for methane has been generated using state-of-the-art ab initio theory. The PES is based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set limit and incorporates a range of higher-level additive energy corrections. These include core-valence electron correlation, higher-order coupled cluster terms beyond perturbative triples, scalar relativistic effects, and the diagonal Born-Oppenheimer correction. Sub-wavenumber accuracy is achieved for the majority of experimentally known vibrational energy levels with the four fundamentals of (12)CH4 reproduced with a root-mean-square error of 0.70 cm(-1). The computed ab initio equilibrium C-H bond length is in excellent agreement with previous values despite pure rotational energies displaying minor systematic errors as J (rotational excitation) increases. It is shown that these errors can be significantly reduced by adjusting the equilibrium geometry. The PES represents the most accurate ab initio surface to date and will serve as a good starting point for empirical refinement. PMID:27634258

  18. Investigation of the relationship between linear attenuation coefficients and CT Hounsfield units using radionuclides for SPECT.

    PubMed

    Brown, Saxby; Bailey, Dale L; Willowson, Kathy; Baldock, Clive

    2008-09-01

    This study has investigated the relationship between linear attenuation coefficients (mu) and Hounsfield units (HUs) for six materials covering the range of values found clinically. Narrow-beam mu values were measured by performing radionuclide transmission scans using (99m)Tc, (123)I, (131)I, (201)Tl and (111)In. The mu values were compared to published data. The relationships between mu and HU were determined. These relationships can be used to convert computed tomography (CT) images to mu-maps for single photon emission computed tomography (SPECT) attenuation correction. PMID:18662614
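
    A piecewise-linear ("bilinear") mapping is a common way to implement such a CT-to-mu conversion: below 0 HU the voxel is a water/air mixture, so mu scales linearly down to zero, while above 0 HU the extra density is bone-like, whose mass attenuation at SPECT photon energies is lower than at CT energies, hence a reduced slope. The break point and bone slope below are generic illustrative assumptions, not the fitted relationships reported in this study:

    ```python
    def hu_to_mu(hu, mu_water, mu_bone_scale=0.5):
        """Piecewise-linear CT-number-to-mu conversion for a SPECT mu-map.

        mu_water       : narrow-beam coefficient of water at the radionuclide's
                         photon energy (e.g. ~0.15/cm for 99mTc at 140 keV).
        mu_bone_scale  : illustrative reduction of the above-water slope to
                         account for bone's different energy dependence.
        Returns mu in the same units as mu_water.
        """
        if hu <= 0:
            return mu_water * (1.0 + hu / 1000.0)        # air (-1000) ... water (0)
        return mu_water * (1.0 + mu_bone_scale * hu / 1000.0)

    mu_air = hu_to_mu(-1000.0, mu_water=0.15)   # air maps to zero attenuation
    mu_w = hu_to_mu(0.0, mu_water=0.15)         # water maps to mu_water
    ```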

  19. Accurate estimation of sigma(exp 0) using AIRSAR data

    NASA Technical Reports Server (NTRS)

    Holecz, Francesco; Rignot, Eric

    1995-01-01

    During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data as well as estimation of geophysical parameters from SAR data have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of the aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.
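
    The geometric part of such a correction can be sketched as a renormalisation of the illuminated area, which scales with 1/sin(theta). This is a simplified illustration only: the antenna-pattern compensation described in the abstract is omitted, the function name and parameters are invented, and sign conventions differ between processors:

    ```python
    import math

    def corrected_sigma0(sigma0_db, theta_ref_deg, theta_loc_deg):
        """Renormalise sigma0 (dB) from a flat-Earth reference incidence angle
        to the DEM-derived local incidence angle.

        The illuminated ground area scales with 1/sin(theta), so the
        correction is the ratio of the two sines in decibels.
        """
        ratio = math.sin(math.radians(theta_loc_deg)) / math.sin(math.radians(theta_ref_deg))
        return sigma0_db + 10.0 * math.log10(ratio)

    # A foreslope that reduces the local angle from 45 to 30 degrees enlarges
    # the true scattering area, so the corrected sigma0 is lower.
    s = corrected_sigma0(-8.0, theta_ref_deg=45.0, theta_loc_deg=30.0)
    ```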

  20. Highly Accurate Inverse Consistent Registration: A Robust Approach

    PubMed Central

    Reuter, Martin; Rosas, H. Diana; Fischl, Bruce

    2010-01-01

    The registration of images is a task that is at the core of many applications in computer vision. In computational neuroimaging where the automated segmentation of brain structures is frequently used to quantify change, a highly accurate registration is necessary for motion correction of images taken in the same session, or across time in longitudinal studies where changes in the images can be expected. This paper, inspired by Nestares and Heeger (2000), presents a method based on robust statistics to register images in the presence of differences, such as jaw movement, differential MR distortions and true anatomical change. The approach we present guarantees inverse consistency (symmetry), can deal with different intensity scales and automatically estimates a sensitivity parameter to detect outlier regions in the images. The resulting registrations are highly accurate due to their ability to ignore outlier regions and show superior robustness with respect to noise, to intensity scaling and outliers when compared to state-of-the-art registration tools such as FLIRT (in FSL) or the coregistration tool in SPM. PMID:20637289

  1. Accurate phylogenetic classification of DNA fragments based onsequence composition

    SciTech Connect

    McHardy, Alice C.; Garcia Martin, Hector; Tsirigos, Aristotelis; Hugenholtz, Philip; Rigoutsos, Isidore

    2006-05-01

    Metagenome studies have retrieved vast amounts of sequence out of a variety of environments, leading to novel discoveries and great insights into the uncultured microbial world. Except for very simple communities, diversity makes sequence assembly and analysis a very challenging problem. To understand the structure and function of microbial communities, a taxonomic characterization of the obtained sequence fragments is highly desirable, yet currently limited mostly to those sequences that contain phylogenetic marker genes. We show that for clades at the rank of domain down to genus, sequence composition allows the very accurate phylogenetic characterization of genomic sequence. We developed a composition-based classifier, PhyloPythia, for de novo phylogenetic sequence characterization and have trained it on a data set of 340 genomes. By extensive evaluation experiments we show that the method is accurate across all taxonomic ranks considered, even for sequences that originate from novel organisms and are as short as 1 kb. Application to two metagenome datasets obtained from samples of phosphorus-removing sludge showed that the method allows the accurate classification at genus level of most sequence fragments from the dominant populations, while at the same time correctly characterizing even larger parts of the samples at higher taxonomic levels.
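
    Composition-based classifiers of this kind operate on normalised oligonucleotide (k-mer) frequency vectors extracted from each fragment, which are then fed to a supervised classifier. A generic sketch of that feature construction (not the published PhyloPythia feature set, which uses its own choice of k-mer ranges and a multi-class SVM):

    ```python
    from itertools import product

    def kmer_composition(seq, k=4):
        """Normalised k-mer frequency vector of a DNA fragment.

        Returns a dict mapping every possible A/C/G/T k-mer to its relative
        frequency; windows containing ambiguous bases (e.g. N) are skipped.
        """
        seq = seq.upper()
        counts = {"".join(p): 0 for p in product("ACGT", repeat=k)}
        n = 0
        for i in range(len(seq) - k + 1):
            kmer = seq[i:i + k]
            if kmer in counts:
                counts[kmer] += 1
                n += 1
        return {kmer: c / n for kmer, c in counts.items()} if n else counts

    features = kmer_composition("ACGTACGTACGT", k=4)
    ```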

  2. Isomerism of Cyanomethanimine: Accurate Structural, Energetic, and Spectroscopic Characterization.

    PubMed

    Puzzarini, Cristina

    2015-11-25

    The structures, relative stabilities, and rotational and vibrational parameters of the Z-C-, E-C-, and N-cyanomethanimine isomers have been evaluated using state-of-the-art quantum-chemical approaches. Equilibrium geometries have been calculated by means of a composite scheme based on coupled-cluster calculations that accounts for the extrapolation to the complete basis set limit and core-correlation effects. The latter approach is proved to provide molecular structures with an accuracy of 0.001-0.002 Å and 0.05-0.1° for bond lengths and angles, respectively. Systematically extrapolated ab initio energies, accounting for electron correlation through coupled-cluster theory, including up to single, double, triple, and quadruple excitations, and corrected for core-electron correlation and anharmonic zero-point vibrational energy, have been used to accurately determine relative energies and the Z-E isomerization barrier with an accuracy of about 1 kJ/mol. Vibrational and rotational spectroscopic parameters have been investigated by means of hybrid schemes that allow us to obtain rotational constants accurate to about a few megahertz and vibrational frequencies with a mean absolute error of ∼1%. Where available, for all properties considered, a very good agreement with experimental data has been observed.

  3. Aberration corrected emittance exchange

    NASA Astrophysics Data System (ADS)

    Nanni, E. A.; Graves, W. S.

    2015-08-01

    Full exploitation of emittance exchange (EEX) requires aberration-free performance of a complex imaging system including active radio-frequency (rf) elements which can add temporal distortions. We investigate the performance of an EEX line where the exchange occurs between two dimensions with normalized emittances which differ by multiple orders of magnitude. The transverse emittance is exchanged into the longitudinal dimension using a double dogleg emittance exchange setup with a five cell rf deflector cavity. Aberration correction is performed on the four most dominant aberrations. These include temporal aberrations that are corrected with higher order magnetic optical elements located where longitudinal and transverse emittance are coupled. We demonstrate aberration-free performance of an EEX line with emittances differing by four orders of magnitude, i.e., an initial transverse emittance of 1 pm-rad is exchanged with a longitudinal emittance of 10 nm-rad.

  4. Correction coil cable

    DOEpatents

    Wang, Sou-Tien

    1994-11-01

    A wire cable assembly (10, 310) adapted for the winding of electrical coils is taught. It is primarily intended for use in particle tube assemblies (532) for the superconducting super collider. The correction coil cables (10, 310) have wires (14, 314) collected in wire arrays (12, 312) with a center rib (16, 316) sandwiched therebetween to form a core assembly (18, 318). The core assembly (18, 318) is surrounded by an assembly housing (20, 320) having an inner spiral wrap (22, 322) and a counter wound outer spiral wrap (24, 324). An alternate embodiment (410) of the invention is rolled into a keystoned shape to improve radial alignment of the correction coil cable (410) on a particle tube (733) in a particle tube assembly (732).

  5. SEISMIC ATTENUATION FOR RESERVOIR CHARACTERIZATION

    SciTech Connect

    Joel Walls; M.T. Taner; Naum Derzhi; Gary Mavko; Jack Dvorkin

    2002-10-01

    RSI has access to two synthetic seismic programs: the Osiris seismic modeling system provided by Odegaard (Osiris) and a synthetic seismic program developed by SRB that implements the Kennett method for normal incidence. Achieving virtually identical synthetic seismic traces from these different programs serves as cross-validation for both. The subsequent experiments have been performed with the Kennett normal-incidence code because we have access to the source code, which allowed us to easily control computational parameters and integrate the synthetics computations with our graphical and I/O systems. This code allows us to perform computations and displays on a PC in a MatLab or Octave environment, which is faster and more convenient. The normal-incidence model allows us to exclude from the synthetic traces some of the physical effects that take place in 3-D models (like inhomogeneous waves) but have no relevance to the topic of our investigation, which is attenuation effects on seismic reflection and transmission.

  6. Applying Source and Path Corrections to Improve Discrimination in China,

    SciTech Connect

    Hartse, H. E.; Taylor, S. R.; Phillips, W. S.; Randall, G. E.

    1997-01-01

    Monitoring the Comprehensive Test Ban Treaty (CTBT) to magnitude levels below 4.0 will require use of regional seismic data recorded at distances of less than 2000 km. To improve regional discriminant performance we tested three different methods of correcting for path effects; the third method includes a correction for source scaling. We used regional broadband recordings from stations in and near China. Our first method removes trends between phase ratios and physical parameters associated with each event-station path. This approach requires knowledge of the physical parameters along an event-station path, such as topography, basin thickness, and crustal thickness. Our second approach is somewhat more empirical. We examine spatial distributions of phase amplitudes after subtracting event magnitude and correcting for path distance. For a given station, phase, and frequency band, we grid and then smooth the magnitude-corrected and distance-corrected amplitudes to create a map representing a correction surface. We reference these maps to correct phase amplitudes prior to forming discrimination ratios. Our third approach is the most complicated, but also the most rigorous. For a given station and phase, we invert the spectra of a number of well-recorded earthquakes for source and path parameters. We then use the values obtained from the inversion to correct phase amplitudes for the effects of source size, distance, and attenuation. Finally, the amplitude residuals are gridded and smoothed to create a correction surface representing secondary path effects. We find that simple ratio-parameter corrections can improve discrimination performance along some paths (such as Kazakh Test Site (KTS) to WMQ), but for other paths (such as Lop Nor to AAK) the corrections are not beneficial. Our second method, the empirical path correction surfaces, improves discrimination performance for Lop Nor to AAK paths.
Our third method, combined source and path corrections, has only
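    The "grid and smooth" step of the empirical second approach can be sketched as a cell-average over magnitude- and distance-corrected log amplitudes. This is an illustrative reduction of the method: the distance slope of 1.0 is an assumed value (the real correction uses a fitted distance dependence), and the subsequent smoothing pass is omitted.

```python
import numpy as np

def correction_surface(lons, lats, amps, mags, dists, lon_edges, lat_edges):
    """Grid magnitude- and distance-corrected log10 amplitudes into a
    path-correction surface (cell averages; cells without data are NaN)."""
    resid = (np.asarray(amps, float) - np.asarray(mags, float)
             + 1.0 * np.log10(np.asarray(dists, float)))
    sums = np.zeros((len(lat_edges) - 1, len(lon_edges) - 1))
    counts = np.zeros_like(sums)
    ii = np.digitize(lats, lat_edges) - 1   # row index of each event
    jj = np.digitize(lons, lon_edges) - 1   # column index of each event
    for i, j, r in zip(ii, jj, resid):
        sums[i, j] += r
        counts[i, j] += 1
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
```

The resulting map would then be referenced by event location to correct phase amplitudes before forming discrimination ratios.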

  7. Surgical correction of brachymetatarsia.

    PubMed

    Bartolomei, F J

    1990-02-01

    Brachymetatarsia describes the condition of an abnormally short metatarsal. Although the condition has been recorded since antiquity, surgical options to correct the deformity have been available for only two decades. Most published procedures involve metaphyseal lengthening with autogenous grafts from different donor sites. The author discusses one such surgical technique. In addition, the author proposes specific criteria for the objective diagnosis of brachymetatarsia. PMID:2406417

  8. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy

    PubMed Central

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T.; Cerutti, Francesco; Chin, Mary P. W.; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G.; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R.; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both 4He and 12C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions leads to the excellent agreement of calculated depth–dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is also capable of importing radiotherapy treatment data described in the DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features. PMID:27242956

  10. Reconstruction of bremsstrahlung spectra from attenuation data using generalized simulated annealing.

    PubMed

    Menin, O H; Martinez, A S; Costa, A M

    2016-05-01

    A generalized simulated annealing algorithm, combined with a suitable smoothing regularization function, is used to solve the inverse problem of X-ray spectrum reconstruction from attenuation data. The approach is to set the initial acceptance and visitation temperatures and to standardize the terms of the objective function so as to automate the algorithm and accommodate different spectral ranges. Experiments with both numerical and measured attenuation data are presented. Results show that the algorithm reconstructs spectrum shapes accurately. It should be noted that the regularization function was formulated to guarantee a smooth spectrum; thus, the presented technique does not apply to X-ray spectra where characteristic radiation is present.
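    The inverse problem has the shape "find a smooth spectrum s minimizing a data-misfit plus smoothness objective." The sketch below uses plain simulated annealing with Gaussian moves and a 1/(1+k) cooling schedule; the paper's generalized (Tsallis) visiting and acceptance distributions, and its temperature/standardization scheme, are simplified away here.

```python
import math
import random

def reconstruct_spectrum(attenuation, response, lam=0.1, steps=20000,
                         t0=1.0, seed=0):
    """Anneal a discretized spectrum s so that R @ s matches the
    measured attenuation data, with a smoothness penalty of weight lam."""
    rng = random.Random(seed)
    n = len(response[0])
    s = [1.0 / n] * n                        # flat starting spectrum

    def cost(spec):
        fit = sum((sum(row[j] * spec[j] for j in range(n)) - a) ** 2
                  for row, a in zip(response, attenuation))
        smooth = sum((spec[j + 1] - spec[j]) ** 2 for j in range(n - 1))
        return fit + lam * smooth

    c = cost(s)
    for k in range(steps):
        t = t0 / (1 + k)                     # cooling schedule
        trial = list(s)
        j = rng.randrange(n)
        trial[j] = max(0.0, trial[j] + rng.gauss(0.0, 0.05))  # keep s >= 0
        ct = cost(trial)
        if ct < c or rng.random() < math.exp(-(ct - c) / max(t, 1e-12)):
            s, c = trial, ct
    return s
```

The non-negativity clamp and the smoothness term together play the role of the regularization that rules out spiky (characteristic-line) spectra.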

  12. Accuracies and Contrasts of Models of the Diffusion-Weighted-Dependent Attenuation of the MRI Signal at Intermediate b-values

    PubMed Central

    Nicolas, Renaud; Sibon, Igor; Hiba, Bassem

    2015-01-01

    The diffusion-weighted-dependent attenuation of the MRI signal E(b) is extremely sensitive to microstructural features. The aim of this study was to determine which mathematical model of the E(b) signal most accurately describes it in the brain. The models compared were the monoexponential model, the stretched exponential model, the truncated cumulant expansion (TCE) model, the biexponential model, and the triexponential model. Acquisition was performed with nine b-values up to 2500 s/mm2 in 12 healthy volunteers. The goodness-of-fit was studied with F-tests and with the Akaike information criterion. Tissue contrasts were differentiated with a multiple comparison corrected nonparametric analysis of variance. F-test showed that the TCE model was better than the biexponential model in gray and white matter. Corrected Akaike information criterion showed that the TCE model has the best accuracy and produced the most reliable contrasts in white matter among all models studied. In conclusion, the TCE model was found to be the best model to infer the microstructural properties of brain tissue. PMID:26106263
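    The model comparison described above can be sketched in a few lines: fit ln E(b) with a monoexponential model (linear in b) and with a truncated cumulant expansion (an extra quadratic, kurtosis-like term), then rank them by corrected AIC. This is an illustrative reduction; the paper also fits stretched-, bi-, and tri-exponential models, and its exact estimators may differ.

```python
import numpy as np

def aicc(rss, n, k):
    """Corrected Akaike information criterion from a residual sum of
    squares, under a Gaussian-error assumption."""
    return n * np.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

def compare_models(b, E):
    """Fit ln E(b) with monoexponential and TCE models (both forced
    through ln E(0) = 0) and return the model favoured by AICc."""
    b = np.asarray(b, float)
    y = np.log(np.asarray(E, float))
    n = len(b)
    scores = {}
    for name, deg, k in (("monoexponential", 1, 1), ("TCE", 2, 2)):
        X = np.vander(b, deg + 1)[:, :-1]   # columns b**deg ... b, no constant
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef
        scores[name] = aicc(float(resid @ resid), n, k)
    return min(scores, key=scores.get), scores
```

With nine b-values up to 2500 s/mm2, data generated with a nonzero kurtosis term should be assigned to the TCE model.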

  13. Reanalysis of S-to-P amplitude ratios for gross attenuation structure, Long Valley caldera, California

    SciTech Connect

    Sanders, C.O.

    1993-12-01

    Because of the strong interest in the magmatism and volcanism at Long Valley caldera, eastern California, and because of recent significant improvements in our knowledge of the caldera velocity structure and earthquake locations, I have reanalyzed the local-earthquake S-to-P amplitude-ratio data of Sanders (1984) for the gross three-dimensional attenuation structure of the upper 10 km of Long Valley caldera. The primary goals of the analysis are to provide more accurate constraints on the depths of the attenuation anomalies using improved knowledge of the ray locations and an objective inversion procedure. The new image of the high S wave attenuation anomaly in the west-central caldera suggests that the top of the principal anomaly is at 7-km depth, which is 2 km deeper than previously determined. Because of poor resolution in much of the region, some of the data remain unsatisfied by the final attenuation model. These unmodeled data may imply unresolved attenuation anomalies, perhaps small anomalies in the kilometer or two just above the central-caldera anomaly and perhaps a larger anomaly at about 7-km depth in the northwest caldera or somewhere beneath the Mono Craters. The central-caldera S wave attenuation anomaly has a location similar to mapped regions of low teleseismic P wave velocity, crustal inflation, reduced density, and aseismicity, strongly suggesting a magmatic association.

  14. A blind deconvolution method for attenuative materials based on asymmetrical Gaussian model.

    PubMed

    Jin, Haoran; Chen, Jian; Yang, Keji

    2016-08-01

    During propagation in attenuative materials, ultrasonic waves are distorted by frequency-dependent acoustic attenuation. As a result, reference signals for blind deconvolution in attenuative materials are asymmetrical and should be accurately estimated by considering attenuation. In this study, an asymmetrical Gaussian model is established to estimate the reference signals from these materials, and a blind deconvolution method based on this model is proposed. Based on the symmetrical Gaussian model, the asymmetrical one is formulated by adding an asymmetrical coefficient. Upon establishing the model, the reference signal for blind deconvolution is determined via maximum likelihood estimation, and the blind deconvolution is implemented with an orthogonal matching pursuit algorithm. To verify the feasibility of the established model, spectra of ultrasonic signals from attenuative polyethylene plates with different thicknesses are measured and estimated. The proposed blind deconvolution method is applied to the A-scan signal and B-scan image from attenuative materials. Results demonstrate that the proposed method is capable of separating overlapping echoes and therefore achieves a high temporal resolution. PMID:27586747
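    An asymmetrical Gaussian echo model of the kind described can be written down by letting the envelope width differ on the two sides of the peak. The parameterization below is an illustrative form, not necessarily the paper's: r is the asymmetry coefficient (r = 1 recovers the symmetric Gaussian), and attenuation-induced distortion corresponds to r > 1 stretching the trailing edge.

```python
import numpy as np

def asym_gaussian_echo(t, t0, sigma, r, f0, phase, amp=1.0):
    """Ultrasonic echo with an asymmetrical Gaussian envelope: width
    sigma before the peak time t0, width r * sigma after it,
    modulating a carrier at frequency f0."""
    t = np.asarray(t, dtype=float)
    width = np.where(t < t0, sigma, r * sigma)
    envelope = amp * np.exp(-((t - t0) ** 2) / (2 * width ** 2))
    return envelope * np.cos(2 * np.pi * f0 * (t - t0) + phase)
```

A reference signal of this shape, with parameters found by maximum likelihood, would then serve as the kernel for the sparse (orthogonal-matching-pursuit) deconvolution step.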

  15. Interventions to Correct Misinformation About Tobacco Products

    PubMed Central

    Cappella, Joseph N.; Maloney, Erin; Ophir, Yotam; Brennan, Emily

    2016-01-01

    In 2006, the U.S. District Court held that tobacco companies had “falsely and fraudulently” denied: tobacco causes lung cancer; environmental smoke endangers children’s respiratory systems; nicotine is highly addictive; low tar cigarettes were less harmful when they were not; they marketed to children; they manipulated nicotine delivery to enhance addiction; and they concealed and destroyed evidence to prevent accurate public knowledge. The courts required the tobacco companies to repair this misinformation. Several studies evaluated types of corrective statements (CS). We argue that most CS proposed (“simple CS’s”) will fall prey to “belief echoes” leaving affective remnants of the misinformation untouched while correcting underlying knowledge. Alternative forms for CS (“enhanced CS’s”) are proposed that include narrative forms, causal linkage, and emotional links to the receiver. PMID:27135046

  16. FIELD CORRECTION FACTORS FOR PERSONAL NEUTRON DOSEMETERS.

    PubMed

    Luszik-Bhadra, M

    2016-09-01

    A field-dependent correction factor can be obtained by comparing the readings of two albedo neutron dosemeters fixed in opposite directions on a polyethylene sphere to the H*(10) reading as determined with a thermal neutron detector in the centre of the same sphere. The work shows that the field calibration technique as used for albedo neutron dosemeters can be generalised to all kinds of dosemeters, since H*(10) is a conservative estimate of the sum of the personal dose equivalents Hp(10) in two opposite directions. This result is drawn from reference values as determined by spectrometers within the EVIDOS project at workplaces of nuclear installations in Europe. More accurate field-dependent correction factors can be achieved by the analysis of several personal dosemeters on a phantom, but reliable angular responses of these dosemeters need to be taken into account. PMID:26493946
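    The calibration idea reduces to a simple ratio: divide the H*(10) value measured at the sphere centre by the summed readings of the two back-to-back dosemeters, then apply that factor to routine readings in the same field. A minimal sketch of that arithmetic (the function names and units are illustrative):

```python
def field_correction_factor(back_to_back_readings, h_star_10):
    """Field correction factor: H*(10) at the sphere centre divided by
    the summed readings of the two dosemeters facing opposite directions."""
    return h_star_10 / sum(back_to_back_readings)

def corrected_dose(reading, factor):
    """Apply the field-dependent factor to a routine dosemeter reading."""
    return factor * reading
```

Because H*(10) conservatively bounds the sum of the two Hp(10) values, the resulting factor errs on the safe (over-reading) side.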

  17. Refining atmospheric correction for aquatic remote spectroscopy

    NASA Astrophysics Data System (ADS)

    Thompson, D. R.; Guild, L. S.; Negrey, K.; Kudela, R. M.; Palacios, S. L.; Gao, B. C.; Green, R. O.

    2015-12-01

    Remote spectroscopic investigations of aquatic ecosystems typically measure radiance at high spectral resolution and then correct these data for atmospheric effects to estimate Remote Sensing Reflectance (Rrs) at the surface. These reflectance spectra reveal phytoplankton absorption and scattering features, enabling accurate retrieval of traditional remote sensing parameters, such as chlorophyll-a, and new retrievals of additional parameters, such as phytoplankton functional type. Future missions will significantly expand coverage of these datasets with airborne campaigns (CORAL, ORCAS, and the HyspIRI Preparatory Campaign) and orbital instruments (EnMAP, HyspIRI). Remote characterization of phytoplankton can be influenced by errors in atmospheric correction due to uncertain atmospheric constituents such as aerosols. The "empirical line method" is an expedient solution that estimates a linear relationship between observed radiances and in-situ reflectance measurements. While this approach is common for terrestrial data, there are few examples involving aquatic scenes. Aquatic scenes are challenging due to the difficulty of acquiring in situ measurements from open water; with only a handful of reference spectra, the resulting corrections may not be stable. Here we present a brief overview of methods for atmospheric correction, and describe ongoing experiments on empirical line adjustment with AVIRIS overflights of Monterey Bay from the 2013-2014 HyspIRI preparatory campaign. We present new methods, based on generalized Tikhonov regularization, to improve stability and performance when few reference spectra are available. Copyright 2015 California Institute of Technology. All Rights Reserved. US Government Support Acknowledged.
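    The empirical line method fits, per spectral band, a linear relationship Rrs ≈ g·L + o between observed radiance L and in-situ reflectance. The sketch below adds an optional Tikhonov term pulling the fit toward a prior gain/offset, which is the kind of stabilization useful when only a handful of reference spectra exist; the specific regularized form here is an illustrative choice, not the paper's.

```python
import numpy as np

def empirical_line(radiance, rrs, alpha=0.0, prior=(1.0, 0.0)):
    """Per-band empirical line fit Rrs ~ g * L + o.
    radiance, rrs: arrays of shape (n_spectra, n_bands).
    alpha: Tikhonov weight pulling (g, o) toward the prior (g0, o0)."""
    L = np.asarray(radiance, float)
    R = np.asarray(rrs, float)
    n, nb = L.shape
    g0, o0 = prior
    gains, offsets = np.empty(nb), np.empty(nb)
    for band in range(nb):
        A = np.column_stack([L[:, band], np.ones(n)])
        # augment the least-squares system with rows enforcing the prior
        A_reg = np.vstack([A, np.sqrt(alpha) * np.eye(2)])
        y_reg = np.concatenate([R[:, band],
                                np.sqrt(alpha) * np.array([g0, o0])])
        g, o = np.linalg.lstsq(A_reg, y_reg, rcond=None)[0]
        gains[band], offsets[band] = g, o
    return gains, offsets
```

With alpha = 0 this is the classical empirical line fit; increasing alpha trades fidelity to the few reference spectra for stability.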

  18. Exemplar-based human action pose correction.

    PubMed

    Shen, Wei; Deng, Ke; Bai, Xiang; Leyvand, Tommer; Guo, Baining; Tu, Zhuowen

    2014-07-01

    The launch of Xbox Kinect has built a very successful computer vision product and made a big impact on the gaming industry. This sheds light on a wide variety of potential applications related to action recognition. The accurate estimation of human poses from the depth image is universally a critical step. However, existing pose estimation systems exhibit failures when facing severe occlusion. In this paper, we propose an exemplar-based method to learn to correct the initially estimated poses. We learn an inhomogeneous systematic bias by leveraging the exemplar information within a specific human action domain. Furthermore, as an extension, we learn a conditional model by incorporation of pose tags to further increase the accuracy of pose correction. In the experiments, significant improvements on both joint-based skeleton correction and tag prediction are observed over the contemporary approaches, including what is delivered by the current Kinect system. Our experiments for the facial landmark correction also illustrate that our algorithm can improve the accuracy of other detection/estimation systems. PMID:24058046

  20. Quantum-electrodynamics corrections in pionic hydrogen

    SciTech Connect

    Schlesser, S.; Le Bigot, E.-O.; Indelicato, P.; Pachucki, K.

    2011-07-15

    We investigate all pure quantum-electrodynamics corrections to the np → 1s, n = 2-4 transition energies of pionic hydrogen larger than 1 meV, which requires an accurate evaluation of all relevant contributions up to order α⁵. These values are needed to extract an accurate strong-interaction shift from experiment. Many small effects, such as the second-order and double vacuum polarization contributions, proton and pion self-energies, finite size and recoil effects, are included with exact mass dependence. Our final value differs from previous calculations by up to ≈11 ppm for the 1s state, while a recent experiment aims at a 4 ppm accuracy.

  1. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  2. A beam hardening correction method based on HL consistency

    NASA Astrophysics Data System (ADS)

    Mou, Xuanqin; Tang, Shaojie; Yu, Hengyong

    2006-08-01

    XCT with a polychromatic tube spectrum suffers from an artifact called the beam hardening effect. The correction in current CT devices is carried out with an a priori polynomial obtained from water phantom experiments. This paper proposes a new beam hardening correction algorithm in which the correction polynomial depends on the consistency of the projection data across angles, governed by the Helgason-Ludwig consistency condition (HL consistency). Firstly, a bi-polynomial is constructed to characterize the beam hardening effect based on the physical model of medical X-ray imaging. In this bi-polynomial, a factor r(γ,β) represents the ratio of the attenuation contributions caused by high-density material (bone, etc.) to low-density material (muscle, vessel, blood, soft tissue, fat, etc.) at projection angle β and fan angle γ. Secondly, letting r(γ,β)=0, the bi-polynomial degenerates into a sole polynomial, whose coefficients can be calculated based on HL consistency. This yields the primary correction, which is also theoretically more efficient than the correction method in current CT devices. Thirdly, r(γ,β) can be estimated from a normal CT reconstruction of the corrected projection data. Fourthly, the coefficients of the bi-polynomial can likewise be calculated based on HL consistency, and the final correction is achieved. Experiments with circular cone beam CT indicate that this method has excellent properties. Correcting the beam hardening effect based on HL consistency not only achieves a self-adaptive and more precise correction, but also eliminates the inconvenient routine water phantom experiments, and could renovate the correction technique of current CT devices.
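    The degenerate r(γ,β) = 0 case is the conventional single-material (water) correction: fit a polynomial mapping the polychromatic projection back to the ideal monochromatic line integral. The sketch below simulates that fit from a known spectrum; it illustrates what the correction polynomial is, while the paper's contribution, estimating the coefficients from HL consistency instead of phantom data, is not reproduced here.

```python
import numpy as np

def beam_hardening_poly(spectrum_E, spectrum_w, mu_of_E, thicknesses, deg=3):
    """Fit the water-correction polynomial mapping a polychromatic
    projection -ln(I/I0) to the monochromatic line integral mu_ref * t.
    spectrum_E, spectrum_w: photon energies and spectral weights.
    mu_of_E: attenuation coefficient as a function of energy."""
    E = np.asarray(spectrum_E, float)
    w = np.asarray(spectrum_w, float)
    w = w / w.sum()
    t = np.asarray(thicknesses, float)
    # polychromatic projection simulated for each water thickness
    p_poly = -np.log(np.sum(w * np.exp(-np.outer(t, mu_of_E(E))), axis=1))
    mu_ref = mu_of_E(np.sum(w * E))      # reference: attenuation at mean energy
    return np.polyfit(p_poly, mu_ref * t, deg)
```

Applying `np.polyval(coeffs, measured_projection)` then linearizes the projections before reconstruction, removing the cupping artifact for a single material.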

  3. Improving Earthquake-Explosion Discrimination using Attenuation Models of the Crust and Upper Mantle

    SciTech Connect

    Pasyanos, M E; Walter, W R; Matzel, E M; Rodgers, A J; Ford, S R; Gok, R; Sweeney, J J

    2009-07-06

    In the past year, we have made significant progress on developing and calibrating methodologies to improve earthquake-explosion discrimination using high-frequency regional P/S amplitude ratios. Closely-spaced earthquakes and explosions generally discriminate easily using this method, as demonstrated by recordings of explosions from test sites around the world. In relatively simple geophysical regions such as the continental parts of the Yellow Sea and Korean Peninsula (YSKP) we have successfully used a 1-D Magnitude and Distance Amplitude Correction methodology (1-D MDAC) to extend the regional P/S technique over large areas. However in tectonically complex regions such as the Middle East, or the mixed oceanic-continental paths for the YSKP the lateral variations in amplitudes are not well predicted by 1-D corrections and 1-D MDAC P/S discrimination over broad areas can perform poorly. We have developed a new technique to map 2-D attenuation structure in the crust and upper mantle. We retain the MDAC source model and geometrical spreading formulation and use the amplitudes of the four primary regional phases (Pn, Pg, Sn, Lg), to develop a simultaneous multi-phase approach to determine the P-wave and S-wave attenuation of the lithosphere. The methodology allows solving for attenuation structure in different depth layers. Here we show results for the P and S-wave attenuation in crust and upper mantle layers. When applied to the Middle East, we find variations in the attenuation quality factor Q that are consistent with the complex tectonics of the region. For example, provinces along the tectonically-active Tethys collision zone (e.g. Turkish Plateau, Zagros) have high attenuation in both the crust and upper mantle, while the stable outlying regions like the Indian Shield generally have low attenuation. 
In the Arabian Shield, however, we find that the low attenuation in this Precambrian crust is underlain by a high-attenuation upper mantle similar to the nearby Red
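    The core operation behind such amplitude-based discrimination is moving geometrical spreading and anelastic attenuation back into each observed phase amplitude so that P/S ratios become comparable across paths. Below is a generic textbook form of that correction, not the MDAC formulation itself (which adds an explicit source model and frequency-dependent Q); the spreading exponent n = 1 is an assumed default.

```python
import numpy as np

def corrected_log_amplitude(log_amp, dist_km, freq_hz, q, vel_km_s, n=1.0):
    """Correct a log10 phase amplitude for power-law geometrical
    spreading (exponent n) and anelastic attenuation with quality
    factor Q: amplitude decays as r**-n * exp(-pi * f * r / (Q * v))."""
    spreading = n * np.log10(dist_km)
    attenuation = (np.pi * freq_hz * dist_km) / (q * vel_km_s * np.log(10))
    return log_amp + spreading + attenuation
```

A P/S discriminant is then the difference of corrected log amplitudes of, e.g., Pn and Lg in a high-frequency band; 2-D attenuation models supply path-specific Q values in place of a single regional constant.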

  4. Feedback about more accurate versus less accurate trials: differential effects on self-confidence and activation.

    PubMed

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-06-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected byfeedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Shin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705

  5. LONG TERM MONITORING FOR NATURAL ATTENUATION

    EPA Science Inventory

    We have good statistical methods to: (1) determine whether concentrations of a contaminant are attenuating over time, (2) determine the rate of attenuation and confidence interval on the rate, and (3) determine whether concentrations have met a particular clean up goal. We do no...
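    Point (2) above, estimating an attenuation rate with a confidence interval, is typically done by regressing log concentration on time under a first-order decay assumption. A minimal sketch (the critical value 2.0 is a rough default; it should come from the t-distribution for the actual sample size):

```python
import math

def attenuation_rate(times, concs, t_crit=2.0):
    """First-order natural-attenuation rate k from a regression of
    ln(concentration) on time (slope = -k), with an approximate
    confidence interval k +/- t_crit * SE(slope)."""
    n = len(times)
    y = [math.log(c) for c in concs]
    tbar = sum(times) / n
    ybar = sum(y) / n
    sxx = sum((t - tbar) ** 2 for t in times)
    slope = sum((t - tbar) * (yi - ybar) for t, yi in zip(times, y)) / sxx
    resid = [yi - (ybar + slope * (t - tbar)) for t, yi in zip(times, y)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    k = -slope
    return k, (k - t_crit * se, k + t_crit * se)
```

Checking whether the lower confidence bound stays above zero answers the "is it attenuating at all" question; comparing extrapolated concentrations against the cleanup goal addresses point (3).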

  6. [An Algorithm for Correcting Fetal Heart Rate Baseline].

    PubMed

    Li, Xiaodong; Lu, Yaosheng

    2015-10-01

    Fetal heart rate (FHR) baseline estimation is of significance for the computerized analysis of fetal heart rate and the assessment of fetal state. In our work, a fetal heart rate baseline correction algorithm was presented to make the existing baseline more accurate and better fit the tracings. Firstly, the deviation of the existing FHR baseline was identified and corrected. Then a new baseline was obtained after treatment with smoothing methods. To assess the performance of the FHR baseline correction algorithm, a new FHR baseline estimation algorithm that combined the baseline estimation algorithm with the baseline correction algorithm was compared with two existing FHR baseline estimation algorithms. The results showed that the new FHR baseline estimation algorithm performed well in both accuracy and efficiency, and they also proved the effectiveness of the FHR baseline correction algorithm.

  7. Two highly accurate methods for pitch calibration

    NASA Astrophysics Data System (ADS)

    Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.

    2009-11-01

    Among profile, helix, and tooth thickness, pitch is one of the most important parameters in involute gear measurement evaluation. In principle, coordinate measuring machines (CMMs) and CNC-controlled gear measuring machines, as a variant of the CMM, are suited for these kinds of gear measurements. The Japan National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) and the German national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), have each independently developed highly accurate pitch calibration methods applicable to CMMs or gear measuring machines. Both calibration methods are based on the so-called closure technique, which allows the separation of the systematic errors of the measurement device from the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.
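    The closure technique can be illustrated with a textbook sketch (not either institute's exact procedure): measure the gear in each of its n rotational positions, so that measurement m[r][p] = device[p] + gear[(p - r) mod n]. Because the gear's pitch deviations sum to zero around a full revolution (the closure condition), averaging over rotations separates the fixed device errors from the rotating gear errors.

```python
import numpy as np

def closure_separation(m):
    """Separate measuring-device errors from gear (artifact) errors
    given the n x n closure measurement matrix
    m[r][p] = device[p] + gear[(p - r) % n], using the closure
    condition sum(gear) = 0 to fix the gauge."""
    m = np.asarray(m, float)
    n = m.shape[0]
    device = m.mean(axis=0)                  # gear errors average out per column
    gear = np.array([np.mean([m[r, (r + j) % n] for r in range(n)])
                     for j in range(n)]) - m.mean()
    return device, gear
```

The same separation logic underlies self-calibration of rotary tables and index artifacts in general.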

  8. Accurate guitar tuning by cochlear implant musicians.

    PubMed

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but have difficulty with music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show the unexpected result that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with his CI than with his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger, at ∼30 Hz, for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal-hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081
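    The beat cue the subjects exploited is the difference frequency of two nearly matched tones; a minimal sketch (illustrative only, not the study's analysis code):

    ```python
    import math

    def beat_frequency(f1, f2):
        """Two equal-amplitude tones sum to a carrier at (f1 + f2)/2 whose
        envelope waxes and wanes at the difference frequency |f1 - f2|."""
        return abs(f1 - f2)

    def two_tone(f1, f2, t):
        """Instantaneous value of the summed tones at time t (seconds)."""
        return math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)
    ```

    A string 0.4 Hz off an A440 reference beats once every 2.5 s; tuning until the beats vanish yields the sub-hertz accuracy reported, with no need to discriminate pitch spectrally.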

  10. Preparation and accurate measurement of pure ozone.

    PubMed

    Janssen, Christof; Simone, Daniela; Guinet, Mickaël

    2011-03-01

    Preparation of high purity ozone as well as precise and accurate measurement of its pressure are metrological requirements that are difficult to meet due to ozone decomposition occurring in pressure sensors. The most stable and precise transducer heads are heated and, therefore, prone to accelerated ozone decomposition, limiting measurement accuracy and compromising purity. Here, we describe a vacuum system and a method for ozone production, suitable to accurately determine the pressure of pure ozone by avoiding the problem of decomposition. We use an inert gas in a specially designed buffer volume and can thus achieve high measurement accuracy and negligible degradation of ozone with purities of 99.8% or better. The high degree of purity is ensured by comprehensive compositional analyses of ozone samples. The method may also be applied to other reactive gases. PMID:21456766

  12. Accurate modeling of parallel scientific computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Townsend, James C.

    1988-01-01

    Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.

  13. Line gas sampling system ensures accurate analysis

    SciTech Connect

    Not Available

    1992-06-01

    Tremendous changes in the natural gas business have resulted in new approaches to the way natural gas is measured. Electronic flow measurement has altered the business forever, with developments in instrumentation and a new sensitivity to the importance of proper natural gas sampling techniques. This paper reports that YZ Industries Inc., Snyder, Texas, combined its 40 years of sampling experience with the latest in microprocessor-based technology to develop the KynaPak 2000 series, the first on-line natural gas sampling system that is both compact and extremely accurate. Accurate analysis requires that the composition of the sampled gas be representative of the whole and related to flow. When this is achieved, measurement and sampling techniques are married, gas volumes are accurately accounted for, and adjustments to composition can be made.

  14. Accurate mask model for advanced nodes

    NASA Astrophysics Data System (ADS)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle

    2014-07-01

    Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates for the optical model's imprecision on top of modeling resist development. The optical model imprecision may result from mask topography effects and real mask information, including mask e-beam writing and mask process contributions. For advanced technology nodes, significant progress has been made in modeling mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from the standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask model and enable its implementation. The study covers the different effects that should be embedded in the mask model as well as the experiments required to model them.

  15. Accurate maser positions for MALT-45

    NASA Astrophysics Data System (ADS)

    Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven

    2013-10-01

    MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.

  17. SU-E-T-233: Modeling Linac Couch Effects On Attenuation and Skin Dose

    SciTech Connect

    Xiong, L; Halvorsen, P

    2014-06-01

    Purpose: Treatment couch tops in medical LINAC rooms attenuate beams that penetrate them and increase skin dose, which can become a significant concern with the high fraction doses associated with Stereotactic Body Radiation Therapy. This work measures the attenuation and shallow-depth dose due to a BrainLab couch, and studies the modeling of the couch top in our treatment planning system (TPS) as a uniform solid material with homogeneous density. Methods: LINAC photon beams of size 10×10 cm and nominal energy 6 MV were delivered from different gantry angles to a stack of solid water. Depth doses were measured with two types of parallel plate chambers, MPPK and Markus. In the Philips Pinnacle TPS, the couch was modeled as a slab with varying thickness and density. A digital phantom of size 30×30×10 cm with density 1 g/cc was created to simulate the measurement setup. Both the attenuation and skin dose effects due to the couch were studied. Results: An orthogonal attenuation rate of 3.2% was observed with both chamber measurements. The attenuation can be modeled by couch models of varying thicknesses. Once the orthogonal attenuation was modeled well, the oblique beam attenuation in the TPS agreed with measurement within 1.5%. The depth dose at shallow depth (0.5 cm) was also shown to be modeled correctly within 1.5% of the measurement using a 12 mm thick couch model with density of 0.9 g/cc. Agreement between calculation and measurement diverges at very shallow depths (≤1 mm) but remains acceptable (<5%) with the aforementioned couch model parameters. Conclusion: Modeling the couch top as a uniform solid in a treatment planning system can predict both the attenuation and the surface dose simultaneously, well within clinical tolerance, in the same model.
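    The thickness/density interplay in such a couch model follows from exponential attenuation along the oblique path through the slab. A hypothetical sketch: the linear attenuation coefficient below is chosen to reproduce a ~3.2% orthogonal attenuation for a 1.2 cm slab and is not taken from the abstract.

    ```python
    import math

    def couch_transmission(mu_per_cm, thickness_cm, angle_deg):
        """Transmitted fraction through a uniform couch slab; the path
        length grows as thickness / cos(angle from the slab normal)."""
        path = thickness_cm / math.cos(math.radians(angle_deg))
        return math.exp(-mu_per_cm * path)
    ```

    The longer oblique path is why a model tuned only at normal incidence must still be checked against oblique-beam measurements.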

  18. Real-time intraoperative fluorescence imaging system using light-absorption correction

    NASA Astrophysics Data System (ADS)

    Themelis, George; Yoo, Jung Sun; Soh, Kwang-Sup; Schulz, Ralf; Ntziachristos, Vasilis

    2009-11-01

    We present a novel fluorescence imaging system developed for real-time interventional imaging applications. The system implements a correction scheme that improves the accuracy of epi-illumination fluorescence images for light intensity variation in tissues. The implementation is based on the use of three cameras operating in parallel, utilizing a common lens, which allows for the concurrent collection of color, fluorescence, and light attenuation images at the excitation wavelength from the same field of view. The correction is based on a ratio approach of fluorescence over light attenuation images. Color images and video are used for surgical guidance and for registration with the corrected fluorescence images. We showcase the performance metrics of this system on phantoms and animals, and discuss the advantages over conventional epi-illumination systems developed for real-time applications and the limits of validity of corrected epi-illumination fluorescence imaging.
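    The ratio approach can be sketched per pixel (a hypothetical minimal version; the real system also co-registers the three cameras, and the epsilon guard against division blow-up in dark regions is an assumption here):

    ```python
    def corrected_fluorescence(fluo, attn, eps=1e-6):
        """Divide each fluorescence pixel by the co-registered light
        attenuation image at the excitation wavelength, so regions that
        receive less excitation light are not read as less fluorescent."""
        return [[f / max(a, eps) for f, a in zip(f_row, a_row)]
                for f_row, a_row in zip(fluo, attn)]
    ```

    A pixel with half the fluorescence signal but also half the excitation light ends up with the same corrected value as a fully illuminated one.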

  19. Identification of Microorganisms by High Resolution Tandem Mass Spectrometry with Accurate Statistical Significance.

    PubMed

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y; Drake, Steven K; Gucek, Marjan; Suffredini, Anthony F; Sacks, David B; Yu, Yi-Kuo

    2016-02-01

    Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple 'fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.
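    The role of an E-value in ranking candidates can be illustrated generically. This is the standard definition used in database searching, not MiCId's specific computation:

    ```python
    def e_value(p_value, n_candidates):
        """Expected number of candidates that would score at least this
        well by chance alone; small E-values mark identifications that
        remain significant after accounting for the size of the search."""
        return p_value * n_candidates
    ```

    As the abstract notes, the growing number of sequenced microbes enlarges the candidate set, which is exactly why a raw score or p-value alone is not enough to prioritize matches.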

  1. Accurate Molecular Polarizabilities Based on Continuum Electrostatics

    PubMed Central

    Truchon, Jean-François; Nicholls, Anthony; Iftimie, Radu I.; Roux, Benoît; Bayly, Christopher I.

    2013-01-01

    A novel approach for representing the intramolecular polarizability as a continuum dielectric is introduced to account for molecular electronic polarization. It is shown, using a finite-difference solution to the Poisson equation, that the Electronic Polarization from Internal Continuum (EPIC) model yields accurate gas-phase molecular polarizability tensors for a test set of 98 challenging molecules composed of heteroaromatics, alkanes and diatomics. The electronic polarization originates from a high intramolecular dielectric that produces polarizabilities consistent with B3LYP/aug-cc-pVTZ and experimental values when surrounded by vacuum dielectric. In contrast to other approaches to model electronic polarization, this simple model avoids the polarizability catastrophe and accurately calculates molecular anisotropy with the use of very few fitted parameters and without resorting to auxiliary sites or anisotropic atomic centers. On average, the unsigned errors in the average polarizability and anisotropy compared to B3LYP are 2% and 5%, respectively. The correlation between the polarizability components from B3LYP and this approach leads to an R2 of 0.990 and a slope of 0.999. Even the F2 anisotropy, shown to be a difficult case for existing polarizability models, can be reproduced within 2% error. In addition to providing new parameters for a rapid method directly applicable to the calculation of polarizabilities, this work extends the widely used Poisson equation to areas where accurate molecular polarizabilities matter. PMID:23646034

  2. Accurate phase-shift velocimetry in rock.

    PubMed

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R; Holmes, William M

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models. PMID:27111139
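    Phase-shift velocimetry maps the mean phase of each voxel to a velocity through the PFG relation φ = γ g δ Δ v (narrow-pulse approximation). A minimal sketch with assumed symbol names, not the authors' code:

    ```python
    def pfg_velocity(phase_rad, gamma, g, delta, big_delta):
        """Velocity from the PFG phase shift phi = gamma * g * delta * Delta * v,
        where gamma is the gyromagnetic ratio (rad/s/T), g the gradient
        amplitude (T/m), delta the gradient pulse duration (s), and Delta
        the observation time (s)."""
        return phase_rad / (gamma * g * delta * big_delta)
    ```

    The linearity of this mapping is what makes the technique fast, and also why an asymmetric displacement distribution within a voxel biases the mean phase and hence the recovered velocity.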

  5. Coping with Misinformation: Corrections, Backfire Effects, and Choice Architectures

    NASA Astrophysics Data System (ADS)

    Lewandowsky, S.; Cook, J.; Ecker, U. K.

    2012-12-01

    The widespread prevalence and persistence of misinformation about many important scientific issues, from climate change to vaccinations or the link between HIV and AIDS, must give rise to concern. We first review the mechanisms by which such misinformation is disseminated in society, both inadvertently and purposely. We then survey and explain the cognitive factors that often render misinformation resistant to correction. We answer the question why retractions of misinformation can be so ineffective and why they can even backfire and ironically increase misbelief. We discuss the overriding role of ideology and personal worldviews in the resistance of misinformation to correction and show how their role can be attenuated. We discuss the risks associated with repeating misinformation while seeking to correct it and we point to the design of "choice architectures" as an alternative to the attempt to retract misinformation.

  6. Attenuated Vector Tomography -- An Approach to Image Flow Vector Fields with Doppler Ultrasonic Imaging

    SciTech Connect

    Huang, Qiu; Peng, Qiyu; Huang, Bin; Cheryauka, Arvi; Gullberg, Grant T.

    2008-05-15

    The measurement of flow obtained using continuous wave Doppler ultrasound is formulated as a directional projection of a flow vector field. When a continuous ultrasound wave bounces against a flowing particle, a signal is backscattered. This signal obtains a Doppler frequency shift proportional to the speed of the particle along the ultrasound beam. This occurs for each particle along the beam, giving rise to a Doppler velocity spectrum. The first moment of the spectrum provides the directional projection of the flow along the ultrasound beam. Signals reflected from points further away from the detector will have lower amplitude than signals reflected from points closer to the detector. The effect is very much akin to that modeled by the attenuated Radon transform in emission computed tomography. A least-squares method was adopted to reconstruct a 2D vector field from directional projection measurements. Only attenuated longitudinal projections of the vector field were simulated. The components of the vector field were reconstructed using the gradient algorithm to minimize a least-squares criterion. This result was compared with the reconstruction of longitudinal projections of the vector field without attenuation. If attenuation is known, the algorithm was able to accurately reconstruct both components of the full vector field from only one set of directional projection measurements. A better reconstruction was obtained with attenuation than without attenuation, implying that attenuation provides important information for the reconstruction of flow vector fields. This confirms previous work where we showed that knowledge of the attenuation distribution helps in the reconstruction of MRI diffusion tensor fields from fewer than the required measurements. In the application of ultrasound, the attenuation distribution is obtained with pulse wave transmission computed tomography and flow information is obtained with continuous wave Doppler.
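    The reconstruction step can be illustrated with a generic least-squares gradient descent on a linear projection model. This is a stand-in sketch under assumed names; in the paper the system matrix encodes attenuated directional projections of the vector-field components.

    ```python
    def lsq_gradient_descent(A, b, x0, step=0.1, iters=200):
        """Minimize ||A x - b||^2 by gradient descent:
        x <- x - step * 2 * A^T (A x - b)."""
        m, n = len(A), len(A[0])
        x = list(x0)
        for _ in range(iters):
            # residual r = A x - b
            r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
            # gradient of the least-squares criterion: 2 A^T r
            grad = [2.0 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
            x = [x[j] - step * grad[j] for j in range(n)]
        return x
    ```

    With a well-conditioned system matrix the iteration converges to the least-squares solution; attenuation information enters through the entries of A.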

  7. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operating characteristic (ROC) curve of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of ≥140 points and ≥445 ms, respectively. In conclusion 12-lead HF QRS ECG employing

  8. Onboard image correction

    NASA Technical Reports Server (NTRS)

    Martin, D. R.; Smaulon, A. S.; Hamori, A. S.

    1980-01-01

    A processor architecture for performing onboard geometric and radiometric correction of LANDSAT imagery is described. The design uses a general purpose processor to calculate the distortion values at selected points in the image and a special purpose processor to resample (calculate distortion at each image point and interpolate the intensity) the sensor output data. A distinct special purpose processor is used for each spectral band. Because of the sensor's high output data rate, 80 M bit per second, the special purpose processors use a pipeline architecture. Sizing has been done on both the general and special purpose hardware.

  9. Fisher Transformations for Correlations Corrected for Selection and Missing Data.

    ERIC Educational Resources Information Center

    Mendoza, Jorge L.

    1993-01-01

    A Fisher's Z transformation is developed for the corrected correlation under conditions in which criterion data are missing because of selection on the predictor, and in which the criterion is missing at random rather than because of selection. The two Z transformations were evaluated in a computer simulation and found to be accurate. (SLD)
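    For reference, the underlying Fisher Z transformation on which the corrected-correlation variants build (a generic sketch, not the paper's corrected forms):

    ```python
    import math

    def fisher_z(r):
        """Fisher's variance-stabilizing transformation of a correlation r."""
        return 0.5 * math.log((1 + r) / (1 - r))  # equivalently atanh(r)

    def inverse_fisher_z(z):
        """Map a Z value back to the correlation scale."""
        return math.tanh(z)
    ```

    Working on the Z scale makes sampling distributions approximately normal with a variance that does not depend on the population correlation, which is what makes simulation checks of accuracy straightforward.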

  10. Can corrective feedback improve recognition memory?

    PubMed

    Kantner, Justin; Lindsay, D Stephen

    2010-06-01

    An understanding of the effects of corrective feedback on recognition memory can inform both recognition theory and memory training programs, but few published studies have investigated the issue. Although the evidence to date suggests that feedback does not improve recognition accuracy, few studies have directly examined its effect on sensitivity, and fewer have created conditions that facilitate a feedback advantage by encouraging controlled processing at test. In Experiment 1, null effects of feedback were observed following both deep and shallow encoding of categorized study lists. In Experiment 2, feedback robustly influenced response bias by allowing participants to discern highly uneven base rates of old and new items, but sensitivity remained unaffected. In Experiment 3, a false-memory procedure, feedback failed to attenuate false recognition of critical lures. In Experiment 4, participants were unable to use feedback to learn a simple category rule separating old items from new items, despite the fact that feedback was of substantial benefit in a nearly identical categorization task. The recognition system, despite a documented ability to utilize controlled strategic or inferential decision-making processes, appears largely impenetrable to a benefit of corrective feedback.

  11. Comparison of scatter correction methods for CBCT

    NASA Astrophysics Data System (ADS)

    Suri, Roland E.; Virshup, Gary; Zurkirchen, Luis; Kaissl, Wolfgang

    2006-03-01

    In contrast to the narrow fan of clinical Computed Tomography (CT) scanners, Cone Beam scanners irradiate a much larger proportion of the object, which causes additional X-ray scattering. The most obvious scatter artefact is that the middle area of the object becomes darker than the outer area, as the density in the middle of the object is underestimated (cupping). Methods for estimating scatter were investigated that can be applied to each single projection without requiring a preliminary reconstruction. Scatter reduction by the Uniform Scatter Fraction method was implemented in the Varian CBCT software version 2.0. This scatter correction method is recommended for full fan scans using air norm. However, this method did not sufficiently correct artefacts in half fan scans and was not sufficiently robust if used in combination with a Single Norm. Therefore, a physical scatter model was developed that estimates scatter for each projection using the attenuation profile of the object. This model relied on laboratory experiments in which scatter kernels were measured for Plexiglas plates of varying thicknesses. Preliminary results suggest that this kernel model may solve the shortcomings of the Uniform Scatter Fraction model.

  12. Forward- vs. Inverse Problems in Modeling Seismic Attenuation

    NASA Astrophysics Data System (ADS)

    Morozov, I. B.

    2015-12-01

    Seismic attenuation is an important property of wave propagation used in numerous applications. However, attenuation is also a complex phenomenon, and it is important to differentiate between its two typical uses: 1) in forward problems, to model the amplitudes and spectral contents of waves required for hazard assessment and geotechnical engineering, and 2) in inverse problems, to determine the physical properties of the subsurface. In the forward-problem sense, attenuation is successfully characterized in terms of empirical parameters of geometric spreading, radiation patterns, scattering amplitudes, t-star, alpha, kappa, or Q. Arguably, the predicted energy losses can be correct even if the underlying attenuation model is phenomenological and not sufficiently based on physics. An example of such a phenomenological model is viscoelasticity based on the correspondence principle and the Q-factor assigned to the material. By contrast, when used to invert for in situ material properties, models addressing the specific physics are required. In many studies (including in this session), a Q-factor is interpreted as a property of a point within the subsurface; however, this property is only phenomenological and may be physically insufficient or inconsistent. For example, the bulk or shear Q at the same point can be different when evaluated from different wave modes. The cases of frequency-dependent Q are particularly prone to ambiguities, such as a trade-off with the assumed background geometric spreading. To rigorously characterize the in situ material properties responsible for seismic-wave attenuation, it is insufficient to focus only on the seismic energy loss. Mechanical models of the material need to be considered. Such models can be constructed by using Lagrangian mechanics. These models should likely contain no Q but will be based on parameters of microstructure such as heterogeneity, fractures, or fluids. I illustrate several such models based on viscosity
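    The forward-problem parameters the abstract lists (Q, t-star) enter amplitude modeling through the standard phenomenological decay law A = A0 exp(-pi f t / Q), with t* = t/Q. A minimal sketch, using assumed illustrative ray parameters:

```python
import math

def attenuated_amplitude(a0, freq_hz, travel_time_s, q_factor):
    """Phenomenological Q-model of amplitude decay along a ray path:
    A = A0 * exp(-pi * f * t / Q).  The ratio t/Q is the 't-star'
    parameter used in forward-problem attenuation studies."""
    t_star = travel_time_s / q_factor
    return a0 * math.exp(-math.pi * freq_hz * t_star)

# Assumed example: a 10 Hz wave travelling 20 s through a medium with
# Q = 200 (t* = 0.1 s) retains only a few percent of its amplitude.
amp = attenuated_amplitude(1.0, freq_hz=10.0, travel_time_s=20.0, q_factor=200.0)
```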

  13. Adaptive data rate control TDMA systems as a rain attenuation compensation technique

    NASA Technical Reports Server (NTRS)

    Sato, Masaki; Wakana, Hiromitsu; Takahashi, Takashi; Takeuchi, Makoto; Yamamoto, Minoru

    1993-01-01

    Rainfall attenuation has a severe effect on signal strength and impairs communication links for future mobile and personal satellite communications using Ka-band and millimeter-wave frequencies. As rain attenuation compensation techniques, several methods such as uplink power control, site diversity, and adaptive control of data rate or forward error correction have been proposed. In this paper, we propose a TDMA system that can compensate for rain attenuation by adaptive control of transmission rates. To evaluate the performance of this TDMA terminal, we carried out three types of experiments: experiments using the Japanese CS-3 satellite with Ka-band transponders, in-house IF loop-back experiments, and computer simulations. Experimental results show that this TDMA system has advantages over conventional constant-rate TDMA systems, as a resource-sharing technique, in both bit error rate and the total TDMA burst lengths required for transmitting given information.
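    The trade that such an adaptive-rate system exploits is that halving the data rate buys about 3 dB of Eb/N0. A toy decision rule makes this concrete; the rate set and margin values are assumed for illustration and are not taken from the CS-3 experiments:

```python
import math

def select_data_rate(clear_sky_margin_db, rain_fade_db,
                     rates_kbps=(64, 128, 256, 512)):
    """Pick the highest data rate whose rate-reduction gain covers the
    current rain fade.  Slowing from the top rate to rate r gains
    10*log10(top/r) dB of Eb/N0.  Returns None if even the slowest
    rate cannot close the link."""
    available = clear_sky_margin_db - rain_fade_db
    top = max(rates_kbps)
    best = None
    for r in sorted(rates_kbps):            # ascending: feasibility shrinks with r
        gain_db = 10.0 * math.log10(top / r)
        if available + gain_db >= 0.0:
            best = r                        # feasible; keep looking for a faster one
    return best
```

For example, with 3 dB of clear-sky margin a 7 dB fade forces the rate down from 512 to 128 kbps.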

  14. Smooth eigenvalue correction

    NASA Astrophysics Data System (ADS)

    Hendrikse, Anne; Veldhuis, Raymond; Spreeuwers, Luuk

    2013-12-01

    Second-order statistics play an important role in data modeling. Nowadays, there is a tendency toward measuring more signals with higher resolution (e.g., high-resolution video), causing a rapid increase in the dimensionality of the measured samples, while the number of samples remains more or less the same. As a result, the eigenvalue estimates are significantly biased, as described by the Marčenko-Pastur equation in the limit where both the number of samples and their dimensionality go to infinity. By introducing a smoothness factor, we show that the Marčenko-Pastur equation can be used in practical situations where both the number of samples and their dimensionality remain finite. Based on this result we derive two methods, one already known and one new to our knowledge, to estimate the sample eigenvalues when the population eigenvalues are known. However, usually the sample eigenvalues are known and the population eigenvalues are required. We therefore applied one of these methods in a feedback loop, resulting in an eigenvalue bias correction method. We compare this eigenvalue correction method with state-of-the-art methods and show that it outperforms the others, particularly in real-life situations often encountered in biometrics: underdetermined configurations, high-dimensional configurations, and configurations where the eigenvalues are exponentially distributed.
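    The bias described here is concrete: for a white population (every population eigenvalue equal to 1), the Marčenko-Pastur law predicts that sample eigenvalues spread over a whole interval once the dimensionality p becomes comparable to the sample count n. A small sketch of the support edges for that special case:

```python
import math

def mp_bulk_edges(dim, n_samples):
    """Marčenko-Pastur support edges for a white (identity-covariance)
    population: sample eigenvalues concentrate in
    [(1 - sqrt(g))^2, (1 + sqrt(g))^2] with g = dim / n_samples,
    even though every population eigenvalue equals 1."""
    g = dim / n_samples
    return (1.0 - math.sqrt(g)) ** 2, (1.0 + math.sqrt(g)) ** 2

# With p = 100 dimensions and n = 400 samples (g = 0.25), sample
# eigenvalues range over [0.25, 2.25] instead of clustering at 1.
lo, hi = mp_bulk_edges(100, 400)
```

As g grows toward 1 (the underdetermined regime mentioned in the abstract), the interval widens and the bias becomes severe.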

  15. Complications of auricular correction

    PubMed Central

    Staindl, Otto; Siedek, Vanessa

    2008-01-01

    The risk of complications of auricular correction is underestimated. There is around a 5% risk of early complications (haematoma, infection, fistulae caused by stitches and granulomae, allergic reactions, pressure ulcers, feelings of pain and asymmetry in side-to-side comparison) and a 20% risk of late complications (recurrences, telephone ear, excessive edge formation, auricle fitting too closely, narrowing of the auditory canal, keloids and complete collapse of the ear). Deformities are evaluated less critically by patients than by the surgeons, provided they do not concern how the ear is positioned. The causes of complications and deformities are, in the vast majority of cases, incorrect diagnosis and wrong choice of operating procedure. The choice of operating procedure must be adapted to suit the individual ear morphology. Bandaging technique and inspections and, if necessary, early revision are of great importance for the occurrence and progress of early complications, in addition to operation techniques. In cases of late complications such as keloids and auricles that are too closely fitting, unfixed full-thickness skin flaps have proved to be the most successful. Large deformities can often only be corrected to a limited degree of satisfaction. PMID:22073079

  16. Complications of auricular correction.

    PubMed

    Staindl, Otto; Siedek, Vanessa

    2007-01-01

    The risk of complications of auricular correction is underestimated. There is around a 5% risk of early complications (haematoma, infection, fistulae caused by stitches and granulomae, allergic reactions, pressure ulcers, feelings of pain and asymmetry in side-to-side comparison) and a 20% risk of late complications (recurrences, telephone ear, excessive edge formation, auricle fitting too closely, narrowing of the auditory canal, keloids and complete collapse of the ear). Deformities are evaluated less critically by patients than by the surgeons, provided they do not concern how the ear is positioned. The causes of complications and deformities are, in the vast majority of cases, incorrect diagnosis and wrong choice of operating procedure. The choice of operating procedure must be adapted to suit the individual ear morphology. Bandaging technique and inspections and, if necessary, early revision are of great importance for the occurrence and progress of early complications, in addition to operation techniques. In cases of late complications such as keloids and auricles that are too closely fitting, unfixed full-thickness skin flaps have proved to be the most successful. Large deformities can often only be corrected to a limited degree of satisfaction. PMID:22073079

  17. Live attenuated intranasal influenza vaccine.

    PubMed

    Esposito, Susanna; Montinaro, Valentina; Groppali, Elena; Tenconi, Rossana; Semino, Margherita; Principi, Nicola

    2012-01-01

    Annual vaccination is the most effective means of preventing and controlling influenza epidemics, and the traditional trivalent inactivated vaccine (TIV) is by far the most widely used. Unfortunately, it has a number of limitations, the most important of which is its poor immunogenicity in younger children and the elderly, the populations at greatest risk of severe influenza. Live attenuated influenza vaccine (LAIV) has characteristics that can overcome some of these limitations. It does not have to be injected because it is administered intranasally. It is very effective in children and adolescents, among whom it prevents significantly more cases of influenza than the traditional TIV. However, its efficacy in adults has not been adequately documented, which is why it has not been licensed for adult use by the European health authorities. LAIV is safe and well tolerated by children aged >2 y and adults, but some concerns have arisen regarding its safety in younger children and in subjects with previous asthma or recurrent wheezing. Further studies are needed to resolve these concerns and to evaluate the possible role of LAIV in the annual vaccination of the general population.

  18. Attenuation of diacylglycerol second messengers

    SciTech Connect

    Bishop, W.R.; Ganong, B.R.; Bell, R.M.

    1986-05-01

    Diacylglycerol (DAG) derived from phosphatidylinositol activates protein kinase C in agonist-stimulated cells. At least two pathways may contribute to the attenuation of the DAG signal: (1) phosphorylation to phosphatidic acid (PA) by DAG kinase (DGK), and (2) deacylation by DAG and monoacylglycerol lipases. A number of DAG analogs were tested as substrates and inhibitors of partially purified pig brain DGK. Two analogs were potent inhibitors in vitro, 1-monooleoylglycerol (MOG, K_I = 91 μM) and dioctanoylethylene glycol (diC8EG, K_I = 58 μM). These compounds were tested in human platelets. DiC8EG inhibited (70-100%) ³²Pi incorporation into PA in thrombin-stimulated platelets. Under these conditions the DAG signal was somewhat long-lived but was still metabolized, presumably by the lipase pathway. MOG treatment elevated DAG levels up to 4-fold in unstimulated platelets. The DAG formed was in a pool where it did not activate protein kinase C. Thrombin stimulation of MOG-treated platelets resulted in DAG levels 10-fold higher than control platelets. This appears to be due to the inability of these platelets to metabolize agonist-linked DAG via the lipase