Science.gov

Sample records for accurate attenuation correction

  1. Dixon sequence with superimposed model-based bone compartment provides highly accurate PET/MR attenuation correction of the brain

    PubMed Central

    Koesters, Thomas; Friedman, Kent P.; Fenchel, Matthias; Zhan, Yiqiang; Hermosillo, Gerardo; Babb, James; Jelescu, Ileana O.; Faul, David; Boada, Fernando E.; Shepherd, Timothy M.

    2016-01-01

    Simultaneous PET/MR of the brain is a promising new technology for characterizing patients with suspected cognitive impairment or epilepsy. Unlike CT, though, MR signal intensities do not provide a direct correlate to PET photon attenuation correction (AC), and inaccurate radiotracer standardized uptake value (SUV) estimation could limit future PET/MR clinical applications. We tested a novel AC method that supplements standard Dixon-based tissue segmentation with a superimposed model-based bone compartment. Methods: We directly compared SUV estimation for MR-based AC methods to reference CT AC in 16 patients undergoing same-day, single 18FDG dose PET/CT and PET/MR for suspected neurodegeneration. Three Dixon-based MR AC methods were compared to CT – standard Dixon 4-compartment segmentation alone, Dixon with a superimposed model-based bone compartment, and Dixon with a superimposed bone compartment and linear attenuation correction optimized specifically for brain tissue. The brain was segmented using a 3D T1-weighted volumetric MR sequence and SUV estimations compared to CT AC for whole-image, whole-brain and 91 FreeSurfer-based regions-of-interest. Results: Modifying the linear AC value specifically for brain and superimposing a model-based bone compartment reduced the whole-brain SUV estimation bias of Dixon-based PET/MR AC by 95% compared to reference CT AC (P < 0.05) – this resulted in a residual −0.3% whole-brain mean SUV bias. Further, brain regional analysis demonstrated only 3 frontal lobe regions with SUV estimation bias of 5% or greater (P < 0.05). These biases appeared to correlate with high individual variability in frontal bone thickness and pneumatization. Conclusion: Bone compartment and linear AC modifications result in a highly accurate MR AC method in subjects with suspected neurodegeneration. This prototype MR AC solution appears equivalent to other recently proposed solutions, and does not require additional MR sequences and scan time. These

  2. Range Restriction and Attenuation Corrections.

    ERIC Educational Resources Information Center

    Mumford, Michael D.; Mendoza, Jorge L.

    The present paper reviews the techniques commonly used to correct an observed correlation coefficient for the simultaneous influence of attenuation and range restriction effects. It is noted that the procedure which is currently in use may be somewhat biased because it treats range restriction and attenuation as independent restrictive influences.…

  3. Accurate adiabatic correction in the hydrogen molecule

    SciTech Connect

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H₂, HD, HT, D₂, DT, and T₂ has been determined. For the ground state of H₂ the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present-day theoretical predictions for the rovibrational levels.

  4. Accurate adiabatic correction in the hydrogen molecule

    NASA Astrophysics Data System (ADS)

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-01

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H₂, HD, HT, D₂, DT, and T₂ has been determined. For the ground state of H₂ the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present-day theoretical predictions for the rovibrational levels.

  5. Monte Carlo-based down-scatter correction of SPECT attenuation maps.

    PubMed

    Bokulić, Tomislav; Vastenhouw, Brendan; de Jong, Hugo W A M; van Dongen, Alice J; van Rijk, Peter P; Beekman, Freek J

    2004-08-01

    Combined acquisition of transmission and emission data in single-photon emission computed tomography (SPECT) can be used for correction of non-uniform photon attenuation. However, down-scatter from a higher energy isotope (e.g. 99mTc) contaminates lower energy transmission data (e.g. 153Gd, 100 keV), resulting in underestimation of reconstructed attenuation coefficients. Window-based corrections are often not very accurate and increase noise in attenuation maps. We have developed a new correction scheme. It uses accurate scatter modelling to avoid noise amplification and does not require additional energy windows. The correction works as follows: Initially, an approximate attenuation map is reconstructed using down-scatter contaminated transmission data (step 1). An emission map is reconstructed based on the contaminated attenuation map (step 2). Based on this approximate 99mTc reconstruction and attenuation map, down-scatter in the 153Gd window is simulated using accelerated Monte Carlo simulation (step 3). This down-scatter estimate is used during reconstruction of a corrected attenuation map (step 4). Based on the corrected attenuation map, an improved 99mTc image is reconstructed (step 5). Steps 3-5 are repeated to incrementally improve the down-scatter estimate. The Monte Carlo simulator provides accurate down-scatter estimation with significantly less noise than down-scatter estimates acquired in an additional window. Errors in the reconstructed attenuation coefficients are reduced from ca. 40% to less than 5%. Furthermore, artefacts in 99mTc emission reconstructions are almost completely removed. These results are better than for window-based correction, both in simulation experiments and in physical phantom experiments. Monte Carlo down-scatter simulation in concert with statistical reconstruction provides accurate down-scatter correction of attenuation maps. PMID:15034678
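
    Read as pseudocode, the five steps above form a short fixed-point loop. The sketch below is a minimal structural outline only, assuming hypothetical placeholder functions (reconstruct_attenuation, reconstruct_emission, simulate_downscatter) standing in for the transmission/emission reconstructions and the accelerated Monte Carlo simulator described in the abstract; it is not the authors' implementation.

```python
import numpy as np

def reconstruct_attenuation(transmission_sino, downscatter_estimate):
    """Placeholder for steps 1/4: reconstruct a 153Gd attenuation map after
    subtracting the current down-scatter estimate from the transmission data."""
    return np.clip(transmission_sino - downscatter_estimate, 0.0, None)

def reconstruct_emission(emission_sino, attenuation_map):
    """Placeholder for steps 2/5: reconstruct the 99mTc emission image using
    the given attenuation map (a real version would run e.g. OSEM)."""
    return emission_sino * 1.0

def simulate_downscatter(emission_image, attenuation_map):
    """Placeholder for step 3: accelerated Monte Carlo estimate of 99mTc
    down-scatter contaminating the 153Gd transmission window."""
    return 0.1 * emission_image

def downscatter_corrected_ac(transmission_sino, emission_sino, n_iter=3):
    """Repeat steps 3-5 to incrementally refine the down-scatter estimate."""
    downscatter = np.zeros_like(transmission_sino)
    for _ in range(n_iter):
        mu_map = reconstruct_attenuation(transmission_sino, downscatter)
        tc_image = reconstruct_emission(emission_sino, mu_map)
        downscatter = simulate_downscatter(tc_image, mu_map)
    return mu_map, tc_image

rng = np.random.default_rng(0)
trans = rng.poisson(50.0, size=(64, 64)).astype(float)   # toy transmission data
emis = rng.poisson(20.0, size=(64, 64)).astype(float)    # toy emission data
mu_map, tc_image = downscatter_corrected_ac(trans, emis)
```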

  6. Onboard Autonomous Corrections for Accurate IRF Pointing.

    NASA Astrophysics Data System (ADS)

    Jorgensen, J. L.; Betto, M.; Denver, T.

    2002-05-01

    filtered GPS updates, a world time clock, astrometric correction tables, and an attitude output transform system, which allow the ASC to deliver the spacecraft attitude relative to the Inertial Reference Frame (IRF) in real time. This paper describes the operations of the onboard autonomy of the ASC, which in real time removes the residuals from the attitude measurements, whereby a timely IRF attitude at the arcsecond level is delivered to the AOCS (or sent to ground). A discussion of achievable robustness and accuracy is given and compared to in-flight results from the operations of the two Advanced Stellar Compasses (ASC) flying in LEO onboard the German geo-potential research satellite CHAMP. The ASCs onboard CHAMP are dual-head versions, i.e. each processing unit is attached to two star camera heads. The dual-head configuration is primarily employed to achieve carefree AOCS control with respect to the Sun, Moon and Earth, and to increase the attitude accuracy, but it also enables onboard estimation and removal of thermally generated biases.

  7. An MRI-based attenuation correction method for combined PET/MRI applications

    NASA Astrophysics Data System (ADS)

    Fei, Baowei; Yang, Xiaofeng; Wang, Hesheng

    2009-02-01

    We are developing MRI-based attenuation correction methods for PET images. PET has high sensitivity but relatively low resolution and little anatomic detail. MRI can provide excellent anatomical structures with high resolution and high soft tissue contrast. MRI can be used to delineate tumor boundaries and to provide an anatomic reference for PET, thereby improving quantitation of PET data. Combined PET/MRI can offer metabolic, functional and anatomic information and thus can provide a powerful tool to study the mechanism of a variety of diseases. Accurate attenuation correction represents an essential component for the reconstruction of artifact-free, quantitative PET images. Unfortunately, the present design of hybrid PET/MRI does not offer measured attenuation correction using a transmission scan. This problem may be solved by deriving attenuation maps from corresponding anatomic MR images. Our approach combines image registration, classification, and attenuation correction in a single scheme. MR images and the preliminary reconstruction of PET data are first registered using our automatic registration method. MR images are then classified into different tissue types using our multiscale fuzzy C-means classification method. The voxels of classified tissue types are assigned theoretical tissue-dependent attenuation coefficients to generate attenuation correction factors. Corrected PET emission data are then reconstructed using a three-dimensional filtered back-projection method and an ordered-subset expectation maximization method. Results from simulated images and phantom data demonstrated that our attenuation correction method can improve PET data quantitation and can be particularly useful for combined PET/MRI applications.
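
    As a rough illustration of the classification-plus-lookup idea described above (not the authors' code), the sketch below runs a plain single-scale fuzzy C-means on toy voxel intensities and maps each class to a nominal linear attenuation coefficient at 511 keV; the coefficient values and the assumed intensity ordering of the classes are illustrative only.

```python
import numpy as np

def fuzzy_cmeans_1d(x, n_clusters=4, m=2.0, n_iter=100, seed=0):
    """Minimal single-scale fuzzy C-means on voxel intensities (a stand-in
    for the multiscale classifier mentioned in the abstract)."""
    rng = np.random.default_rng(seed)
    u = rng.random((x.size, n_clusters))
    u /= u.sum(axis=1, keepdims=True)              # memberships sum to 1 per voxel
    for _ in range(n_iter):
        um = u ** m
        centers = um.T @ x / um.sum(axis=0)        # fuzzy-weighted cluster centers
        d = np.abs(x[:, None] - centers) + 1e-12   # voxel-to-center distances
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)   # standard FCM membership update
    return centers, u

# Illustrative linear attenuation coefficients at 511 keV (cm^-1); nominal
# textbook-style values, not taken from the paper.
mu_by_class = {"air": 0.000, "lung": 0.030, "bone": 0.150, "soft": 0.096}
class_names = ["air", "lung", "bone", "soft"]      # assumed intensity ordering (toy)

# Toy "MR" intensities for four tissue groups.
x = np.random.default_rng(1).normal([10.0, 60.0, 120.0, 200.0], 8.0, (500, 4)).ravel()
centers, u = fuzzy_cmeans_1d(x)
hard_labels = u.argmax(axis=1)                     # hard classification per voxel
rank = np.empty(len(centers), dtype=int)
rank[np.argsort(centers)] = np.arange(len(centers))  # cluster index -> intensity rank
mu_lookup = np.array([mu_by_class[n] for n in class_names])
mu_map = mu_lookup[rank[hard_labels]]              # voxel-wise attenuation map (cm^-1)
```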

  8. Hybrid approach for attenuation correction in PET/MR scanners

    NASA Astrophysics Data System (ADS)

    Santos Ribeiro, A.; Rota Kops, E.; Herzog, H.; Almeida, P.

    2014-01-01

    Aim: Attenuation correction (AC) of PET images is still one of the major limitations of hybrid PET/MR scanners. Different methods have been proposed to obtain the AC map from morphological MR images. However, segmentation methods normally fail to differentiate air and bone regions, while template or atlas methods usually cannot accurately represent regions that are anatomically different from the template image. In this study, a feed-forward neural network (FFNN) algorithm is presented which directly outputs the attenuation coefficients by non-linear regression of the images acquired with an ultrashort echo time (UTE) sequence, guided by the template-based AC map (TAC-map). Materials and methods: MR as well as CT data were acquired in four subjects. The UTE images and the TAC-map were the inputs of the presented FFNN algorithm for training as well as classification. The resulting attenuation maps were compared with the CT-based, PNN-based and TAC maps. All the AC maps were used to reconstruct the PET emission data, which were then compared for the different methods. Results: For each subject, Dice coefficients D were calculated between each method and the respective CT-based AC maps. The resulting Ds show higher values for all tissues with the FFNN-based method than with both the TAC-based and PNN-based methods, particularly for bone tissue (D=0.77, D=0.51 and D=0.71, respectively). The AC-corrected PET images with the FFNN-based map show an overall lower relative difference (RD=3.90%) than those AC-corrected with the PNN-based (RD=4.44%) or template-based (RD=4.43%) methods. Conclusion: Our results show that current methods can be enhanced by combining information from new MR image sequence techniques with the general information provided by template techniques. Nevertheless, the number of tested subjects is statistically low, and analysis of a larger dataset is currently being carried out.

  9. Significance of accurate diffraction corrections for the second harmonic wave in determining the acoustic nonlinearity parameter

    SciTech Connect

    Jeong, Hyunjo; Zhang, Shuzeng; Li, Xiongbing; Barnard, Dan

    2015-09-15

    The accurate measurement of the acoustic nonlinearity parameter β for fluids or solids generally requires making corrections for diffraction effects due to the finite-size geometry of the transmitter and receiver. These effects are well known in linear acoustics, while those for second harmonic waves have not been well addressed and therefore not properly considered in previous studies. In this work, we explicitly define the attenuation and diffraction corrections using the multi-Gaussian beam (MGB) equations which were developed from the quasilinear solutions of the KZK equation. The effects of making these corrections are examined through the simulation of β determination in water. Diffraction corrections are found to have more significant effects than attenuation corrections, and the β values of water can be estimated experimentally with less than 5% error when the exact second harmonic diffraction corrections are used together with the negligible attenuation correction effects implied by the linear frequency dependence of the attenuation coefficients, α₂ ≃ 2α₁.
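
    For orientation, the quasilinear plane-wave solution that underlies such measurements can be written as below (a sketch of the attenuation part only; the MGB-based diffraction corrections defined in the paper are additional multiplicative factors). Here A₁(0) is the source fundamental amplitude, A₂(z) the second harmonic at propagation distance z, and k the wavenumber; the α₂ = 2α₁ limit is what makes the attenuation correction nearly negligible in the case quoted above.

```latex
A_2(z) \;=\; \frac{\beta\, k^{2} A_{1}^{2}(0)}{8}\,
             \frac{e^{-\alpha_2 z} - e^{-2\alpha_1 z}}{2\alpha_1 - \alpha_2}
\;\xrightarrow[\;\alpha_2 \to 2\alpha_1\;]{}\;
\frac{\beta\, k^{2} A_{1}^{2}(0)}{8}\; z\, e^{-2\alpha_1 z},
\qquad\text{so}\qquad
\beta \;\propto\; \frac{8\,A_2(z)}{k^{2}\, z\, A_{1}^{2}(z)}
```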

  10. Accurately Detecting Students' Lies regarding Relational Aggression by Correctional Instructions

    ERIC Educational Resources Information Center

    Dickhauser, Oliver; Reinhard, Marc-Andre; Marksteiner, Tamara

    2012-01-01

    This study investigates the effect of correctional instructions when detecting lies about relational aggression. Based on models from the field of social psychology, we predict that correctional instruction will lead to a less pronounced lie bias and to more accurate lie detection. Seventy-five teachers received videotapes of students' true denial…

  11. CT-based attenuation and scatter correction compared with uniform attenuation correction in brain perfusion SPECT imaging for dementia

    NASA Astrophysics Data System (ADS)

    Gillen, Rebecca; Firbank, Michael J.; Lloyd, Jim; O'Brien, John T.

    2015-09-01

    This study investigated whether the appearance and diagnostic accuracy of HMPAO brain perfusion SPECT images could be improved by using CT-based attenuation and scatter correction compared with the uniform attenuation correction method. A cohort of subjects clinically categorized as having Alzheimer's disease (n=38) or dementia with Lewy bodies (n=29), or as healthy normal controls (n=30), underwent SPECT imaging with Tc-99m HMPAO and a separate CT scan. The SPECT images were processed using (a) a correction map derived from the subject's CT scan, (b) the Chang uniform approximation for correction, or (c) no attenuation correction. Images were visually inspected. The ratios between key regions of interest known to be affected or spared in each condition were calculated for each correction method, and the differences between these ratios were evaluated. The images produced using the different corrections were noted to be visually different. However, ROI analysis found similar statistically significant differences between control and dementia groups and between AD and DLB groups regardless of the correction map used. We did not identify an improvement in diagnostic accuracy in images which were corrected using CT-based attenuation and scatter correction, compared with those corrected using a uniform correction map.

  12. Markerless attenuation correction for carotid MRI surface receiver coils in combined PET/MR imaging.

    PubMed

    Eldib, Mootaz; Bini, Jason; Robson, Philip M; Calcagno, Claudia; Faul, David D; Tsoumpas, Charalampos; Fayad, Zahi A

    2015-06-21

    The purpose of the study was to evaluate the effect of attenuation of MR coils on quantitative carotid PET/MR exams. Additionally, an automated attenuation correction method for flexible carotid MR coils was developed and evaluated. The attenuation of the carotid coil was measured by imaging a uniform water phantom injected with 37 MBq of 18F-FDG in a combined PET/MR scanner for 24 min with and without the coil. In the same session, an ultra-short echo time (UTE) image of the coil on top of the phantom was acquired. Using a combination of rigid and non-rigid registration, a CT-based attenuation map was registered to the UTE image of the coil for attenuation and scatter correction. After phantom validation, the effect of the carotid coil attenuation and the attenuation correction method were evaluated in five subjects. Phantom studies indicated that the overall loss of PET counts due to the coil was 6.3% with local region-of-interest (ROI) errors reaching up to 18.8%. Our registration method to correct for attenuation from the coil decreased the global error and local error (ROI) to 0.8% and 3.8%, respectively. The proposed registration method accurately captured the location and shape of the coil with a maximum spatial error of 2.6 mm. Quantitative analysis in human studies correlated with the phantom findings, but was dependent on the size of the ROI used in the analysis. MR coils result in significant error in PET quantification and thus attenuation correction is needed. The proposed strategy provides an operator-free method for attenuation and scatter correction for a flexible MRI carotid surface coil for routine clinical use. PMID:26020273
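
    Conceptually, once the CT-derived coil template has been registered to the UTE image, the correction amounts to adding the coil attenuation map to the patient map before computing the attenuation correction factors. The 1-D sketch below, with made-up attenuation values, shows why ignoring a thin dense coil component costs several percent of the counts on affected lines of response; it is an illustration, not the study's processing chain.

```python
import numpy as np

voxel_cm = 0.1                                   # 1 mm voxels along one line of response
# Made-up 1-D attenuation profiles (cm^-1 per voxel).
mu_patient = np.concatenate([np.zeros(10), np.full(40, 0.096), np.zeros(10)])
mu_coil = np.zeros(60)
mu_coil[2:5] = 0.3                               # thin, relatively dense coil element

def acf(mu):
    """Attenuation correction factor exp(line integral of mu) for this LOR."""
    return np.exp(mu.sum() * voxel_cm)

# Fraction of true counts lost on this LOR if the coil is left out of the mu-map.
loss_pct = (1.0 - acf(mu_patient) / acf(mu_patient + mu_coil)) * 100.0
print(f"counts underestimated on this line when the coil is ignored: {loss_pct:.1f}%")
```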

  13. Markerless attenuation correction for carotid MRI surface receiver coils in combined PET/MR imaging

    NASA Astrophysics Data System (ADS)

    Eldib, Mootaz; Bini, Jason; Robson, Philip M.; Calcagno, Claudia; Faul, David D.; Tsoumpas, Charalampos; Fayad, Zahi A.

    2015-06-01

    The purpose of the study was to evaluate the effect of attenuation of MR coils on quantitative carotid PET/MR exams. Additionally, an automated attenuation correction method for flexible carotid MR coils was developed and evaluated. The attenuation of the carotid coil was measured by imaging a uniform water phantom injected with 37 MBq of 18F-FDG in a combined PET/MR scanner for 24 min with and without the coil. In the same session, an ultra-short echo time (UTE) image of the coil on top of the phantom was acquired. Using a combination of rigid and non-rigid registration, a CT-based attenuation map was registered to the UTE image of the coil for attenuation and scatter correction. After phantom validation, the effect of the carotid coil attenuation and the attenuation correction method were evaluated in five subjects. Phantom studies indicated that the overall loss of PET counts due to the coil was 6.3% with local region-of-interest (ROI) errors reaching up to 18.8%. Our registration method to correct for attenuation from the coil decreased the global error and local error (ROI) to 0.8% and 3.8%, respectively. The proposed registration method accurately captured the location and shape of the coil with a maximum spatial error of 2.6 mm. Quantitative analysis in human studies correlated with the phantom findings, but was dependent on the size of the ROI used in the analysis. MR coils result in significant error in PET quantification and thus attenuation correction is needed. The proposed strategy provides an operator-free method for attenuation and scatter correction for a flexible MRI carotid surface coil for routine clinical use.

  14. Impact of MR based attenuation correction on neurological PET studies

    PubMed Central

    Su, Yi; Rubin, Brian B.; McConathy, Jonathan; Laforest, Richard; Qi, Jing; Sharma, Akash; Priatna, Agus; Benzinger, Tammie L.S.

    2016-01-01

    Hybrid positron emission tomography (PET) and magnetic resonance (MR) scanners have become a reality in recent years, with the benefits of reduced radiation exposure, reduced imaging time, and potential advantages in quantification. Appropriate attenuation correction remains a challenge. Biases in PET activity measurements were demonstrated using the current MR-based attenuation correction technique. We aim to investigate the impact of using the standard MRAC technique on the clinical and research utility of a PET/MR hybrid scanner for amyloid imaging. Methods: Florbetapir scans were obtained on 40 participants on a Biograph mMR hybrid scanner with simultaneous MR acquisition. PET images were reconstructed using both MR- and CT-derived attenuation maps. Quantitative analysis was performed for both datasets to assess the impact of MR-based attenuation correction on absolute PET activity measurements as well as on the target-to-reference ratio (SUVR). Clinical assessment was also performed by a nuclear medicine physician to determine amyloid status based on the criteria in the FDA prescribing information for florbetapir. Results: MR-based attenuation correction led to underestimation of PET activity for most of the brain, with a small overestimation for deep brain regions. There was also an overestimation of SUVR values with the cerebellar reference. SUVR measurements obtained from the two attenuation correction methods were strongly correlated. Clinical assessment of amyloid status resulted in identical classification as positive or negative regardless of the attenuation correction method. Conclusions: MR-based attenuation correction causes biases in quantitative measurements. The biases may be accounted for by a linear model, although the spatial variation cannot be easily modelled. The quantitative differences, however, did not affect clinical assessment as positive or negative. PMID:26823562
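
    The two quantitative readouts discussed here, the target-to-reference ratio and the linear bias model, are simple to state in code. The sketch below uses synthetic regional SUV values (the 0.93/0.02 bias coefficients are invented for illustration, not the study's fit) to show how an SUVR and a linear MR-vs-CT bias model would be computed.

```python
import numpy as np

rng = np.random.default_rng(0)
suv_ct = rng.uniform(0.8, 2.0, size=50)                      # CT-AC regional SUVs (toy)
suv_mr = 0.93 * suv_ct + 0.02 + rng.normal(0, 0.02, 50)      # hypothetical MR-AC bias

def suvr(target_suv, reference_suv):
    """Target-to-reference ratio, e.g. cortical target over cerebellar reference."""
    return float(np.mean(target_suv) / np.mean(reference_suv))

# Linear bias model SUV_MR ~ a * SUV_CT + b, as suggested in the conclusions.
a, b = np.polyfit(suv_ct, suv_mr, deg=1)
print(f"fitted bias model: SUV_MR = {a:.3f} * SUV_CT + {b:.3f}")
# Toy split: first 40 regions treated as "target", last 10 as "reference".
print("SUVR (CT-AC):", round(suvr(suv_ct[:40], suv_ct[40:]), 3))
print("SUVR (MR-AC):", round(suvr(suv_mr[:40], suv_mr[40:]), 3))
```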

  15. Is non-attenuation-corrected PET inferior to body attenuation-corrected PET or PET/CT in lung cancer?

    NASA Astrophysics Data System (ADS)

    Maintas, Dimitris; Houzard, Claire; Ksyar, Rachid; Mognetti, Thomas; Maintas, Catherine; Scheiber, Christian; Itti, Roland

    2006-12-01

    It is considered that one of the great strengths of PET imaging is the ability to correct for body attenuation. This enables better lesion uptake quantification and better quality of PET images. The aim of this work is to compare the sensitivity of non-attenuation-corrected (NAC) PET images, gamma-photon attenuation-corrected (GPAC) images and CT attenuation-corrected (CTAC) images in detecting and staging lung cancer. We have studied 66 patients undergoing PET/CT examinations for detection and staging of non-small cell (NSC) lung cancer. The patients were injected with 18-FDG (5 MBq/kg) under fasting conditions, and the examination was started 60 min later. Transmission data were acquired with a spiral CT X-ray tube and with a gamma-photon-emitting Cs-137 source and were used for patient body attenuation correction, without correction for respiratory motion. In 55 of the 66 patients we performed both attenuation correction procedures, and in 11 patients only CT attenuation correction. In seven patients with solitary nodules PET was negative, and in 59 patients with lung cancer PET/CT was positive for pulmonary or other localizations. In the group of 55 patients we found 165 areas of focal increased 18-FDG uptake in NAC, 165 in CTAC and 164 in GPAC PET images. In the patients with only CTAC we found 58 areas of increased 18-FDG uptake on NAC and 58 lesions on CTAC. In the patients with positive PET we found 223 areas of focal increased uptake in NAC and 223 areas in CTAC images. The sensitivity of NAC was equal to the sensitivity of CTAC and GPAC images. The visualization of peripheral lesions was better in NAC images, while the lesions were better localized in attenuation-corrected images. In three lesions of the thorax the localization was better in GPAC and fused images than in CTAC images.

  16. An Accurate Temperature Correction Model for Thermocouple Hygrometers

    PubMed Central

    Savage, Michael J.; Cass, Alfred; de Jager, James M.

    1982-01-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241

  17. An accurate temperature correction model for thermocouple hygrometers.

    PubMed

    Savage, M J; Cass, A; de Jager, J M

    1982-02-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38 degrees C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25 degrees C, if the calibration slopes are corrected for temperature. PMID:16662241

  18. Attenuation correction for the large non-human primate brain imaging using microPET

    NASA Astrophysics Data System (ADS)

    Naidoo-Variawa, S.; Lehnert, W.; Kassiou, M.; Banati, R.; Meikle, S. R.

    2010-04-01

    Assessment of the biodistribution and pharmacokinetics of radiopharmaceuticals in vivo is often performed on animal models of human disease prior to their use in humans. The baboon brain is physiologically and neuro-anatomically similar to the human brain and is therefore a suitable model for evaluating novel CNS radioligands. We previously demonstrated the feasibility of performing baboon brain imaging on a dedicated small animal PET scanner provided that the data are accurately corrected for degrading physical effects such as photon attenuation in the body. In this study, we investigated factors affecting the accuracy and reliability of alternative attenuation correction strategies when imaging the brain of a large non-human primate (Papio hamadryas) using the microPET Focus 220 animal scanner. For measured attenuation correction, the best bias versus noise performance was achieved using a 57Co transmission point source with a 4% energy window. The optimal energy window for a 68Ge transmission source operating in singles acquisition mode was 20%, independent of the source strength, providing bias-noise performance almost as good as for 57Co. For both transmission sources, doubling the acquisition time had minimal impact on the bias-noise trade-off for corrected emission images, despite observable improvements in reconstructed attenuation values. In a [18F]FDG brain scan of a female baboon, both measured attenuation correction strategies achieved good results and similar SNR, while segmented attenuation correction (based on uncorrected emission images) resulted in appreciable regional bias in deep grey matter structures and the skull. We conclude that measured attenuation correction using a single-pass 57Co (4% energy window) or 68Ge (20% window) transmission scan achieves an excellent trade-off between bias and propagation of noise when imaging the large non-human primate brain with a microPET scanner.

  19. Variational attenuation correction in two-view confocal microscopy

    PubMed Central

    2013-01-01

    Background: Absorption- and refraction-induced signal attenuation can seriously hinder the extraction of quantitative information from confocal microscopic data. This signal attenuation can be estimated and corrected by algorithms that use physical image formation models. Especially in thick heterogeneous samples, current single-view-based models are unable to solve the underdetermined problem of estimating the attenuation-free intensities. Results: We present a variational approach to estimate both the real intensities and the spatially variant attenuation from two views of the same sample from opposite sides. Assuming noise-free measurements throughout the whole volume and pure absorption, this would in theory allow a perfect reconstruction without further assumptions. To cope with real world data, our approach respects photon noise, estimates apparent bleaching between the two recordings, and constrains the attenuation field to be smooth and sparse to avoid spurious attenuation estimates in regions lacking valid measurements. Conclusions: We quantify the reconstruction quality on simulated data and compare it to the state-of-the-art two-view approach and commonly used one-factor-per-slice approaches like the exponential decay model. Additionally we show its real-world applicability on model organisms from zoology (zebrafish) and botany (Arabidopsis). The results from these experiments show that the proposed approach improves the quantification of confocal microscopic data of thick specimens. PMID:24350574
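
    A generic form of such a two-view variational problem, written only to make the ingredients listed above concrete (Poisson-aware data term, per-view bleaching factor, smooth and sparse attenuation), might look as follows; this is a sketch under stated assumptions, not the authors' exact functional.

```latex
\min_{I\ge 0,\;\alpha\ge 0}\;
\sum_{v\in\{1,2\}}\int_{\Omega}
  D\!\Big( I_v^{\mathrm{rec}}(\mathbf{x}),\;
           b_v\, I(\mathbf{x})\,
           e^{-\int_{\Gamma_v(\mathbf{x})}\alpha\,\mathrm{d}s}\Big)\,\mathrm{d}\mathbf{x}
\;+\;\lambda_1\!\int_{\Omega}\!\|\nabla\alpha\|^{2}\,\mathrm{d}\mathbf{x}
\;+\;\lambda_2\!\int_{\Omega}\!|\alpha|\,\mathrm{d}\mathbf{x}
```

    Here I_v^rec is the recorded volume for view v, b_v the bleaching factor between the two recordings, Γ_v(x) the attenuating path towards the detector for that view, and D a data term matched to photon (Poisson) noise.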

  20. Vision 20/20: Magnetic resonance imaging-guided attenuation correction in PET/MRI: Challenges, solutions, and opportunities.

    PubMed

    Mehranian, Abolfazl; Arabi, Hossein; Zaidi, Habib

    2016-03-01

    Attenuation correction is an essential component of the long chain of data correction techniques required to achieve the full potential of quantitative positron emission tomography (PET) imaging. The development of combined PET/magnetic resonance imaging (MRI) systems has prompted widespread interest in developing novel strategies for deriving accurate attenuation maps with the aim of improving the quantitative accuracy of these emerging hybrid imaging systems. The attenuation map in PET/MRI should ideally be derived from anatomical MR images; however, MRI intensities reflect proton density and relaxation time properties of biological tissues rather than their electron density and photon attenuation properties. Therefore, in contrast to PET/computed tomography, there is a lack of standardized global mapping between the intensities of the MRI signal and linear attenuation coefficients at 511 keV. Moreover, in standard MRI sequences, bone and lung tissues do not produce measurable signals owing to their low proton density and short transverse relaxation times. MR images are also inevitably subject to artifacts that degrade their quality, thus compromising their applicability for the task of attenuation correction in PET/MRI. MRI-guided attenuation correction strategies can be classified into three broad categories: (i) segmentation-based approaches, (ii) atlas-registration and machine learning methods, and (iii) emission/transmission-based approaches. This paper summarizes past and current state-of-the-art developments and latest advances in PET/MRI attenuation correction. The advantages and drawbacks of each approach for addressing the challenges of MR-based attenuation correction are comprehensively described. The opportunities brought by both MRI and PET imaging modalities for deriving accurate attenuation maps and improving PET quantification will be elaborated. Future prospects and potential clinical applications of these techniques and their integration in commercial

  1. Dual energy CT for attenuation correction with PET/CT

    SciTech Connect

    Xia, Ting; Alessio, Adam M.; Kinahan, Paul E.

    2014-01-15

    Purpose: The authors evaluate the energy dependent noise and bias properties of monoenergetic images synthesized from dual-energy CT (DECT) acquisitions. These monoenergetic images can be used to estimate attenuation coefficients at energies suitable for positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging. This is becoming more relevant with the increased use of quantitative imaging by PET/CT and SPECT/CT scanners. There are, however, potential variations in the noise and bias of synthesized monoenergetic images as a function of energy. Methods: The authors used analytic approximations and simulations to estimate the noise and bias of synthesized monoenergetic images of water-filled cylinders with different shapes and the NURBS-based cardiac-torso (NCAT) phantom from 40 to 520 keV, the range of SPECT and PET energies. The dual-kVp spectra were based on the GE Lightspeed VCT scanner at 80 and 140 kVp with added filtration of 0.5 mm Cu. The authors evaluated strategies of noise suppression with sinogram smoothing and dose minimization with reduction of tube currents at the two kVp settings. The authors compared the impact of DECT-based attenuation correction with single-kVp CT-based attenuation correction on PET quantitation for the NCAT phantom for soft tissue and high-Z materials of bone and iodine contrast enhancement. Results: Both analytic calculations and simulations displayed the expected minimum noise value for a synthesized monoenergetic image at an energy between the mean energies of the two spectra. In addition the authors found that the normalized coefficient of variation in the synthesized attenuation map increased with energy but reached a plateau near 160 keV, and then remained constant with increasing energy up to 511 keV and beyond. The bias was minimal, as the linear attenuation coefficients of the synthesized monoenergetic images were within 2.4% of the known true values across the entire energy range
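
    The synthesis step itself is usually a basis-material (or photoelectric/Compton) decomposition followed by evaluation at the target energy; one common parameterization, written here only as a sketch (the paper may use a different basis), is

```latex
\mu(\mathbf{x},E) \;=\; a_1(\mathbf{x})\,f_1(E) \;+\; a_2(\mathbf{x})\,f_2(E)
```

    where f₁(E) and f₂(E) are the known energy dependences of the two basis functions (for example, water and bone mass attenuation curves), the voxel-wise coefficients a₁ and a₂ are estimated from the 80 and 140 kVp measurements, and the attenuation map for PET or SPECT correction is obtained by evaluating the expression at 511 keV or at the relevant SPECT photopeak energy.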

  2. Improving the quantitative accuracy of optical-emission computed tomography by incorporating an attenuation correction: application to HIF1 imaging

    NASA Astrophysics Data System (ADS)

    Kim, E.; Bowsher, J.; Thomas, A. S.; Sakhalkar, H.; Dewhirst, M.; Oldham, M.

    2008-10-01

    revealed highly inhomogeneous vasculature perfusion within the tumour. Optical-ECT emission images yielded high-resolution 3D images of the fluorescent protein distribution in the tumour. Attenuation-uncorrected optical-ECT images showed clear loss of signal in regions of high attenuation, including regions of high perfusion, where attenuation is increased by increased vascular ink stain. Application of attenuation correction showed significant changes in an apparent expression of fluorescent proteins, confirming the importance of the attenuation correction. In conclusion, this work presents the first development and application of an attenuation correction for optical-ECT imaging. The results suggest that successful attenuation correction for optical-ECT is feasible and is essential for quantitatively accurate optical-ECT imaging.

  3. Effects of attenuation map accuracy on attenuation-corrected micro-SPECT images

    PubMed Central

    2013-01-01

    Background: In single-photon emission computed tomography (SPECT), attenuation of photon flux in tissue affects quantitative accuracy of reconstructed images. Attenuation maps derived from X-ray computed tomography (CT) can be employed for attenuation correction. The attenuation coefficients as well as registration accuracy between SPECT and CT can be influenced by several factors. Here we investigate how such inaccuracies influence micro-SPECT quantification. Methods: Effects of (1) misalignments between micro-SPECT and micro-CT through shifts and rotation, (2) globally altered attenuation coefficients and (3) combinations of these were evaluated. Tests were performed with a NEMA NU 4–2008 phantom and with rat cadavers containing sources with known activity. Results: Changes in measured activities within volumes of interest in phantom images ranged from <1.5% (125I) and <0.6% (201Tl, 99mTc and 111In) for 1-mm shifts to <4.5% (125I) and <1.7% (201Tl, 99mTc and 111In) with large misregistration (3 mm). Changes induced by 15° rotation were smaller than those by 3-mm shifts. By significantly altering attenuation coefficients (±10%), activity changes of <5.2% for 125I and <2.7% for 201Tl, 99mTc and 111In were induced. Similar trends were seen in rat studies. Conclusions: While getting sufficient accuracy of attenuation maps in clinical imaging is highly challenging, our results indicate that micro-SPECT quantification is quite robust to various imperfections of attenuation maps. PMID:23369630
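
    A toy 2-D calculation (below, with invented numbers) illustrates why small misregistrations mainly perturb the attenuation correction factors near object boundaries, which is consistent with the modest volume-of-interest changes reported above; it is not a reconstruction of the study's phantoms.

```python
import numpy as np

voxel_cm = 0.1                                        # 1 mm voxels
n = 80
y, x = np.mgrid[:n, :n]
r = np.hypot(x - n / 2, y - n / 2)
# Toy attenuation map (cm^-1): soft-tissue disk with a denser rim.
mu = np.where(r < 30, 0.15, 0.0) + np.where((r >= 27) & (r < 30), 0.04, 0.0)

def acf_rows(mu_map):
    """Attenuation correction factors for horizontal projection lines."""
    return np.exp(mu_map.sum(axis=1) * voxel_cm)

ref = acf_rows(mu)
for shift_mm in (1, 3):
    shifted = np.roll(mu, shift_mm, axis=0)           # crude stand-in for misregistration
    rel_err = (acf_rows(shifted) - ref) / ref * 100.0
    print(f"{shift_mm} mm shift: max |ACF error| = {np.abs(rel_err).max():.1f}%")
```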

  4. Cardiac function assessed by attenuation-corrected radionuclide pressure-volume indices

    SciTech Connect

    Maurer, A.H.; Siegel, J.A.; Blasius, K.M.; Deneberg, B.S.; Spann, J.F.; Malmud, L.S.

    1985-07-01

    Using attenuation-corrected radionuclide volumes and arm-cuff peak systolic pressures, the authors established the mean value of the ratio of left ventricular (LV) peak systolic pressure to end-systolic volume at rest for 15 healthy persons. In 43 patients with coronary disease, this ratio was more sensitive as an indicator of abnormal LV function and for predicting coronary artery disease than the resting ejection fraction. The slope of an end-systolic pressure-volume line was also calculated from data obtained under three loading conditions: at rest, during isometric handgrip testing, and after the sublingual administration of nitroglycerin. These results represent an improvement over previous radionuclide pressure-volume measurements that have not used attenuation correction and show the need for accurate, nongeometric measurements of the LV end-systolic volume.
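
    The indices themselves are arithmetically simple; with hypothetical numbers (not the study's data), the rest ratio and the slope of the end-systolic pressure-volume line would be computed as below.

```python
import numpy as np

# Hypothetical measurements at rest, during handgrip, and after nitroglycerin.
peak_systolic_pressure = np.array([120.0, 150.0, 100.0])    # mmHg (arm cuff)
end_systolic_volume = np.array([55.0, 65.0, 40.0])          # mL (attenuation-corrected)

ratio_rest = peak_systolic_pressure[0] / end_systolic_volume[0]     # mmHg/mL at rest
slope, intercept = np.polyfit(end_systolic_volume, peak_systolic_pressure, deg=1)
print(f"rest P/ESV ratio: {ratio_rest:.2f} mmHg/mL; ES P-V slope: {slope:.2f} mmHg/mL")
```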

  5. Uniform attenuation correction using the frequency-distance principle

    SciTech Connect

    Zeng, Gengsheng L.

    2007-11-15

    The frequency-distance principle (FDP) is a well-known relationship that relates the distance between the object and the detector to the slope in the two-dimensional Fourier transform of the projection sinogram. This relationship has been previously applied to compensation of the distance dependent collimator blurring in SPECT (single photon emission computed tomography) in the literature. This paper makes an attempt to use the FDP to correct for uniform attenuation in SPECT. Computer simulations reveal that this technique works well for objects consisting of point sources but does not work well for distributed objects.
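
    Stated loosely (the sign and normalization conventions vary; see the paper for the exact form used), the frequency-distance principle says that in the 2-D Fourier transform of the sinogram, energy concentrates along lines whose slope encodes source distance, which is what makes distance-dependent compensation, whether for collimator blur or for uniform attenuation, possible by filtering in that domain:

```latex
p(s,\phi)\;\xrightarrow{\ \mathcal{F}_2\ }\;P(\omega,k),
\qquad
\text{energy at }(\omega,k)\text{ originates mainly from activity at distance }
t \approx -\,k/\omega
```

    from the axis of rotation along the viewing direction.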

  6. Effective radiation attenuation calibration for breast density: compression thickness influences and correction

    PubMed Central

    2010-01-01

    Background: Calibrating mammograms to produce a standardized breast density measurement for breast cancer risk analysis requires an accurate spatial measure of the compressed breast thickness. Thickness inaccuracies due to the nominal system readout value and compression paddle orientation induce unacceptable errors in the calibration. Method: A thickness correction was developed and evaluated using a fully specified two-component surrogate breast model. A previously developed calibration approach based on effective radiation attenuation coefficient measurements was used in the analysis. Water and oil were used to construct phantoms to replicate the deformable properties of the breast. Phantoms consisting of measured proportions of water and oil were used to estimate calibration errors without correction, evaluate the thickness correction, and investigate the reproducibility of the various calibration representations under compression thickness variations. Results: The average thickness uncertainty due to compression paddle warp was characterized to within 0.5 mm. The relative calibration error was reduced to 7% from 48-68% with the correction. The normalized effective radiation attenuation coefficient (planar) representation was reproducible under intra-sample compression thickness variations compared with calibrated volume measures. Conclusion: Incorporating this thickness correction into the rigid breast tissue equivalent calibration method should improve the calibration accuracy of mammograms for risk assessments using the reproducible planar calibration measure. PMID:21080916

  7. MR Imaging-Guided Attenuation Correction of PET Data in PET/MR Imaging.

    PubMed

    Izquierdo-Garcia, David; Catana, Ciprian

    2016-04-01

    Attenuation correction (AC) is one of the most important challenges in the recently introduced combined PET/magnetic resonance (MR) scanners. PET/MR AC (MR-AC) approaches aim to develop methods that allow accurate estimation of the linear attenuation coefficients of the tissues and other components located in the PET field of view. MR-AC methods can be divided into 3 categories: segmentation, atlas, and PET based. This review provides a comprehensive list of the state-of-the-art MR-AC approaches and their pros and cons. The main sources of artifacts are presented. Finally, this review discusses the current status of MR-AC approaches for clinical applications. PMID:26952727

  8. Field of view extension and truncation correction for MR-based human attenuation correction in simultaneous MR/PET imaging

    SciTech Connect

    Blumhagen, Jan O. Ladebeck, Ralf; Fenchel, Matthias; Braun, Harald; Quick, Harald H.; Faul, David; Scheffler, Klaus

    2014-02-15

    Purpose: In quantitative PET imaging, it is critical to accurately measure and compensate for the attenuation of the photons absorbed in the tissue. While in PET/CT the linear attenuation coefficients can be easily determined from a low-dose CT-based transmission scan, in whole-body MR/PET the computation of the linear attenuation coefficients is based on the MR data. However, a constraint of the MR-based attenuation correction (AC) is the MR-inherent field-of-view (FoV) limitation due to static magnetic field (B₀) inhomogeneities and gradient nonlinearities. Therefore, the MR-based human AC map may be truncated or geometrically distorted toward the edges of the FoV and, consequently, the PET reconstruction with MR-based AC may be biased. This is especially relevant laterally, where the patient's arms rest beside the body and are not fully considered. Methods: A method is proposed to extend the MR FoV by determining an optimal readout gradient field which locally compensates B₀ inhomogeneities and gradient nonlinearities. This technique was used to reduce truncation in AC maps of 12 patients, and the impact on the PET quantification was analyzed and compared to truncated data without applying the FoV extension and additionally to an established approach of PET-based FoV extension. Results: The truncation artifacts in the MR-based AC maps were successfully reduced in all patients, and the mean body volume was thereby increased by 5.4%. In some cases large patient-dependent changes in SUV of up to 30% were observed in individual lesions when compared to the standard truncated attenuation map. Conclusions: The proposed technique successfully extends the MR FoV in MR-based attenuation correction and shows an improvement of PET quantification in whole-body MR/PET hybrid imaging. In comparison to the PET-based completion of the truncated body contour, the proposed method is also applicable to specialized PET tracers with little uptake in the arms and might

  9. Proximity corrected accurate in-die registration metrology

    NASA Astrophysics Data System (ADS)

    Daneshpanah, M.; Laske, F.; Wagner, M.; Roeth, K.-D.; Czerkas, S.; Yamaguchi, H.; Fujii, N.; Yoshikawa, S.; Kanno, K.; Takamizawa, H.

    2014-07-01

    193nm immersion lithography is the mainstream production technology for the 20nm and 14nm logic nodes. Multi-patterning of an increasing number of critical layers puts extreme pressure on wafer intra-field overlay, to which mask registration error is a major contributor [1]. The International Technology Roadmap for Semiconductors (ITRS [2]) requests a registration error below 4 nm for each mask of a multi-patterning set forming one layer on the wafer. For mask metrology at the 20nm and 14nm logic nodes, maintaining a precision-to-tolerance (P/T) ratio below 0.25 will be very challenging. Full characterization of mask registration errors in the active area of the die will become mandatory. It is well known that differences in pattern density and asymmetries in the immediate neighborhood of a feature give rise to apparent shifts in position when measured by optical metrology systems, so-called optical proximity effects. These effects can easily be similar in magnitude to real mask placement errors and, if uncorrected, can result in mis-qualification of the mask. Metrology results from KLA-Tencor's next-generation mask metrology system are reported, applying a model-based algorithm [3] which includes corrections for proximity errors. The proximity-corrected, model-based measurements are compared to standard measurements, and a methodology is presented that verifies the correction performance of the new algorithm.

  10. Ocean Lidar Measurements of Beam Attenuation and a Roadmap to Accurate Phytoplankton Biomass Estimates

    NASA Astrophysics Data System (ADS)

    Hu, Yongxiang; Behrenfeld, Mike; Hostetler, Chris; Pelon, Jacques; Trepte, Charles; Hair, John; Slade, Wayne; Cetinic, Ivona; Vaughan, Mark; Lu, Xiaomei; Zhai, Pengwang; Weimer, Carl; Winker, David; Verhappen, Carolus C.; Butler, Carolyn; Liu, Zhaoyan; Hunt, Bill; Omar, Ali; Rodier, Sharon; Lifermann, Anne; Josset, Damien; Hou, Weilin; MacDonnell, David; Rhew, Ray

    2016-06-01

    The beam attenuation coefficient, c, provides an important optical index of plankton standing stocks, such as phytoplankton biomass and total particulate carbon concentration. Unfortunately, c has proven difficult to quantify through remote sensing. Here, we introduce an innovative approach for estimating c using lidar depolarization measurements and diffuse attenuation coefficients from ocean color products or lidar measurements of Brillouin scattering. The new approach is based on a theoretical formula, established from Monte Carlo simulations, that links the depolarization ratio of sea water to the ratio of the diffuse attenuation Kd and the beam attenuation c (i.e., a multiple scattering factor). On July 17, 2014, the CALIPSO satellite was tilted 30° off-nadir for one nighttime orbit in order to minimize ocean surface backscatter and demonstrate the lidar ocean subsurface measurement concept from space. Depolarization ratios of ocean subsurface backscatter are measured accurately. Beam attenuation coefficients computed from the depolarization ratio measurements compare well with empirical estimates from ocean color measurements. We further verify the beam attenuation coefficient retrievals using aircraft-based high spectral resolution lidar (HSRL) data that are collocated with in-water optical measurements.
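
    In symbols, the retrieval described above reduces to inverting a multiple-scattering relationship of the form below, where δ is the measured subsurface depolarization ratio, K_d the diffuse attenuation coefficient (from ocean color products or Brillouin channels), and η(δ) the Monte Carlo-derived multiple scattering factor; this is a schematic restatement of the abstract, not the paper's exact formula.

```latex
\frac{K_d}{c} \;=\; \eta(\delta)
\quad\Longrightarrow\quad
c \;=\; \frac{K_d}{\eta(\delta)}
```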

  11. Continuous MR bone density measurement using water- and fat-suppressed projection imaging (WASPI) for PET attenuation correction in PET-MR.

    PubMed

    Huang, C; Ouyang, J; Reese, T G; Wu, Y; El Fakhri, G; Ackerman, J L

    2015-10-21

    Due to the lack of signal from solid bone in normal MR sequences for the purpose of MR-based attenuation correction, investigators have proposed using the ultrashort echo time (UTE) pulse sequence, which yields signal from bone. However, the UTE-based segmentation approach might not fully capture the intra- and inter-subject bone density variation, which will inevitably lead to bias in reconstructed PET images. In this work, we investigated using the water- and fat-suppressed proton projection imaging (WASPI) sequence to obtain accurate and continuous attenuation values for bone. This approach is capable of accounting for intra- and inter-subject bone attenuation variations. Using data acquired from a phantom, we have found that attenuation correction based on the WASPI sequence is more accurate and precise when compared to either conventional MR attenuation correction or UTE-based segmentation approaches. PMID:26405761

  12. Development of attenuation and diffraction corrections for linear and nonlinear Rayleigh surface waves radiating from a uniform line source

    NASA Astrophysics Data System (ADS)

    Jeong, Hyunjo; Zhang, Shuzeng; Cho, Sungjong; Li, Xiongbing

    2016-04-01

    In recent studies of nonlinear Rayleigh surface waves, harmonic generation measurements have been successfully employed to characterize material damage and microstructural changes, and have been found to be sensitive to early stages of the damage process. A nonlinearity parameter of Rayleigh surface waves was derived and is frequently measured to quantify the level of damage. The accurate measurement of the nonlinearity parameter generally requires making corrections for beam diffraction and medium attenuation. These effects are not generally known for nonlinear Rayleigh waves, and therefore have not been properly considered in most previous studies. In this paper, the nonlinearity parameter for a Rayleigh surface wave is defined from the plane-wave displacement solutions. We explicitly define the attenuation and diffraction corrections for fundamental and second harmonic Rayleigh wave beams radiated from a uniform line source. Attenuation corrections are obtained from the quasilinear theory of plane Rayleigh wave equations. To obtain closed-form expressions for diffraction corrections, multi-Gaussian beam (MGB) models are employed to represent the integral solutions derived from the quasilinear theory of the full two-dimensional wave equation without parabolic approximation. Diffraction corrections are presented for a couple of transmitter-receiver geometries, and the effects of making attenuation and diffraction corrections are examined through the simulation of nonlinearity parameter determination in a solid sample.

  13. Using BRDFs for accurate albedo calculations and adjacency effect corrections

    SciTech Connect

    Borel, C.C.; Gerstl, S.A.W.

    1996-09-01

    In this paper the authors discuss two uses of BRDFs in remote sensing: (1) determining the clear-sky top-of-the-atmosphere (TOA) albedo, and (2) quantifying the effect of the BRDF on the adjacency point-spread function and on atmospheric corrections. The TOA spectral albedo is an important parameter retrieved by the Multi-angle Imaging Spectro-Radiometer (MISR). Its accuracy depends mainly on how well one can model the surface BRDF for many different situations. The authors present results from an algorithm which matches several semi-empirical functions to the nine MISR-measured BRFs, which are then numerically integrated to yield the clear-sky TOA spectral albedo in four spectral channels. They show that absolute accuracies in the albedo of better than 1% are possible in the visible and better than 2% in the near-infrared channels. Using a simplified extensive radiosity model, the authors show that the shape of the adjacency point-spread function (PSF) depends on the underlying surface BRDFs. The adjacency point-spread function at a given offset (x,y) from the center pixel is given by the integral of transmission-weighted products of the BRDF and the scattering phase function along the line of sight.

  14. The new approach of polarimetric attenuation correction for improving radar quantitative precipitation estimation(QPE)

    NASA Astrophysics Data System (ADS)

    Gu, Ji-Young; Suk, Mi-Kyung; Nam, Kyung-Yeub; Ko, Jeong-Seok; Ryzhkov, Alexander

    2016-04-01

    To obtain high-quality radar quantitative precipitation estimation data, reliable radar calibration and efficient attenuation correction are very important. Because microwave radiation at shorter wavelengths experiences strong attenuation in precipitation, accounting for this attenuation is essential for shorter-wavelength radars. In this study, the performance of different attenuation/differential attenuation correction schemes at C band is tested for two strong rain events which occurred in central Oklahoma. In addition, a new attenuation correction scheme (a combination of the self-consistency and hot-spot concept methodologies) that separates the relative contributions of strong convective cells and the rest of the storm to the path-integrated total and differential attenuation is among the algorithms explored. Quantitative use of weather radar measurements, such as rainfall estimation, relies on reliable attenuation correction. We examined the impact of attenuation correction on estimates of rainfall in heavy rain events by cross-checking with S-band radar measurements, which are much less affected by attenuation, and compared the storm rain totals obtained from the corrected Z and KDP with rain gauges in these cases. This new approach can be utilized efficiently at shorter-wavelength radars. It is therefore very useful to the Weather Radar Center of the Korea Meteorological Administration, which is preparing an X-band research dual-polarization radar network.
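
    The baseline member of the family of schemes being compared is the linear ΦDP-based correction, in which the path-integrated attenuation is taken proportional to the accumulated differential phase; the self-consistency and hot-spot variants mentioned above essentially refine how the proportionality coefficients are chosen. A minimal sketch with illustrative C-band coefficients (not values from this study):

```python
import numpy as np

# Illustrative C-band coefficients (dB per degree of Phi_DP); in practice they are
# tuned per radar and rain regime, which is what the schemes above optimize.
ALPHA = 0.08   # attenuation factor applied to Z_H
BETA = 0.02    # differential attenuation factor applied to Z_DR

def correct_attenuation(z_h, z_dr, phi_dp, phi_dp0=0.0):
    """Linear Phi_DP-based attenuation correction along one ray (dBZ / dB)."""
    dphi = phi_dp - phi_dp0                   # accumulated differential phase (deg)
    return z_h + ALPHA * dphi, z_dr + BETA * dphi

# Toy ray through heavy rain: measured (attenuated) Z_H and Z_DR profiles.
phi_dp = np.linspace(0.0, 60.0, 100)          # degrees along the ray
z_h = 45.0 - ALPHA * phi_dp                   # attenuated reflectivity (dBZ)
z_dr = 1.5 - BETA * phi_dp                    # attenuated differential reflectivity (dB)
z_h_corr, z_dr_corr = correct_attenuation(z_h, z_dr, phi_dp)
```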

  15. Attenuation Correction for Magnetic Resonance Coils in Combined PET/MR Imaging: A Review.

    PubMed

    Eldib, Mootaz; Bini, Jason; Faul, David D; Oesingmann, Niels; Tsoumpas, Charalampos; Fayad, Zahi A

    2016-04-01

    With the introduction of clinical PET/magnetic resonance (MR) systems, novel attenuation correction methods are needed, as there are no direct MR methods to measure the attenuation of the objects in the field of view (FOV). A unique challenge for PET/MR attenuation correction is that coils for MR data acquisition are located in the FOV of the PET camera and could induce significant quantitative errors. In this review, current methods and techniques to correct for the attenuation of a variety of coils are summarized and evaluated. PMID:26952728

  16. Ultra low-dose CT attenuation correction in PET SPM

    NASA Astrophysics Data System (ADS)

    Wang, Shyh-Jen; Yang, Bang-Hung; Tsai, Chia-Jung; Yang, Ching-Ching; Lee, Jason J. S.; Wu, Tung-Hsin

    2010-07-01

    The use of CT images for attenuation correction (CTAC) allows significantly shorter scanning time and a high-quality, noise-free attenuation map compared with a conventional germanium-68 transmission scan, because at least 10⁴ times greater photon flux is generated by a CT scan under standard operating conditions. However, this CTAC technique potentially introduces more radiation risk to the patients owing to the higher radiation exposure from the CT scan. Statistical parametric mapping (SPM) is a prominent technique in the nuclear medicine community for the analysis of brain imaging data. The purpose of this study is to assess the feasibility of low-dose CT (LDCT) and ultra-low-dose CT (UDCT) in PET SPM applications. The study was divided into two parts. The first part evaluated the tracer uptake distribution pattern and quantitative analysis using a striatal phantom, to initially assess the feasibility of AC for clinical purposes. The second part examined group SPM analysis using the Hoffman brain phantom. The phantom studies simulate the human brain and reduce the experimental uncertainty associated with real subjects. The initial studies show that the results of PET SPM analysis have no significant differences between LDCT and UDCT compared to the currently used default CTAC. Moreover, the dose of the LDCT is lower than that of the default CT by a factor of 9, and UDCT can even yield a 42-fold dose reduction. We have demonstrated that the SPM results obtained using LDCT and UDCT for PET AC are comparable to those using the default CT setting, suggesting their feasibility in PET SPM applications. In addition, the necessity of UDCT in PET SPM studies to avoid excess radiation dose is also evident, since most of the subjects involved are non-cancer patients or children and some normal subjects even serve as a comparison group in the experiment. It is our belief that additional attempts to decrease the radiation dose would be valuable, especially for children and

  17. Metal artifact reduction strategies for improved attenuation correction in hybrid PET/CT imaging

    SciTech Connect

    Abdoli, Mehrsima; Dierckx, Rudi A. J. O.; Zaidi, Habib

    2012-06-15

    Metallic implants are known to generate bright and dark streaking artifacts in x-ray computed tomography (CT) images, which in turn propagate to the corresponding functional positron emission tomography (PET) images during the CT-based attenuation correction procedure commonly used on hybrid clinical PET/CT scanners. Therefore, visual artifacts and overestimation and/or underestimation of the tracer uptake in regions adjacent to metallic implants are likely to occur and, as such, inaccurate quantification of tracer uptake and potentially erroneous clinical interpretation of PET images are expected. Accurate quantification of PET data requires metal artifact reduction (MAR) of the CT images prior to the application of the CT-based attenuation correction procedure. In this review, the origins of metallic artifacts and their impact on clinical PET/CT imaging are discussed. Moreover, a brief overview of proposed MAR methods and their advantages and drawbacks is presented. Although most of the presented MAR methods were developed mainly for diagnostic CT imaging, their potential application in PET/CT imaging is highlighted. The challenges associated with comparative evaluation of these methods in a clinical environment in the absence of a gold standard are also discussed.

  18. The study and real-time implementation of attenuation correction for X-band dual-polarization weather radars

    NASA Astrophysics Data System (ADS)

    Liu, Yuxiang

    Attenuation of electromagnetic radiation due to rain or other wet hydrometeors along the propagation path has been studied extensively in the radar meteorology community. Recently, use of short-range dual-polarization X-band radar systems has gained momentum owing to their lower system cost compared with the much more expensive S-band systems. Advances in dual-polarization radar research have shown that the specific attenuation and the differential attenuation between horizontally and vertically polarized waves caused by oblate, highly oriented raindrops can be estimated using the specific differential phase. This advance enables correction of the measured reflectivity (Zh) and differential reflectivity (Zdr) for path attenuation. This thesis addresses, via theory, simulations and data analyses, the accuracy and optimal estimation of attenuation-correction procedures at X-band frequency. A real-time implementation of the correction algorithm was developed for the first generation of the X-band dual-polarized Doppler radar network (Integration Project 1, IP1) operated by the NSF Center for Collaborative Adaptive Sensing of the Atmosphere (CASA). We evaluate the algorithm for correcting Zh and Zdr for rain attenuation using simulations and X-band radar data under ideal and noisy conditions. Our algorithm is able to adjust its parameters according to changes in temperature, drop shapes, and a certain class of drop size distributions (DSD) with very fast convergence. The X-band radar data were obtained from the National Institute of Earth Science and Disaster Prevention (NIED), Japan, and from CASA IP1. The algorithm accurately corrects NIED's data when compared with ground truth calculated from in situ disdrometer-based DSD measurements for a typhoon event. We have implemented the algorithm in real time in all the CASA IP1 radar nodes. We also evaluate our preliminary method that separately estimates rain and wet ice attenuation using microphysical outputs from a
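
    A minimal sketch of the differential-phase-based correction described above, assuming the common linear relation in which specific attenuation is proportional to specific differential phase, so the two-way path attenuation equals a coefficient times the accumulated differential phase. The coefficients alpha and beta below are illustrative placeholders, not values from this thesis.

        import numpy as np

        def correct_zh_zdr(zh_dbz, zdr_db, phidp_deg, alpha=0.25, beta=0.033):
            """Linear Phi_DP attenuation correction along one radar ray (sketch).

            zh_dbz, zdr_db, phidp_deg : 1-D arrays of measured Zh, Zdr, Phi_DP.
            alpha, beta : dB of two-way attenuation of Zh and Zdr per degree of
                          differential phase -- placeholder values only.
            """
            dphi = phidp_deg - phidp_deg[0]                      # accumulated Phi_DP
            dphi = np.maximum.accumulate(np.maximum(dphi, 0.0))  # enforce monotonicity
            zh_corr = zh_dbz + alpha * dphi      # add back two-way attenuation
            zdr_corr = zdr_db + beta * dphi      # add back differential attenuation
            return zh_corr, zdr_corr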

  19. Bias atlases for segmentation-based PET attenuation correction using PET-CT and MR

    PubMed Central

    Ouyang, Jinsong; Chun, Se Young; Petibon, Yoann; Bonab, Ali A.; Alpert, Nathaniel; Fakhri, Georges El

    2014-01-01

    The aim of this study was to obtain voxel-wise PET accuracy and precision when tissue segmentation is used for attenuation correction. We applied multiple thresholds to the CTs of 23 patients to classify tissues. For six of the 23 patients, MR images were also acquired. The MR fat/in-phase ratio images were used for fat segmentation. Segmented tissue classes were used to create attenuation maps, which were used for attenuation correction in PET reconstruction. PET bias images were then computed using the PET reconstructed with the original CT as the reference. We registered the CTs for all the patients and transformed the corresponding bias images accordingly. We then obtained the mean and standard deviation bias atlases using all the registered bias images. Our CT-based study shows that four-class segmentation (air, lungs, fat, other tissues), which is available on most PET-MR scanners, yields 15.1%, 4.1%, 6.6%, and 12.9% RMSE bias in lungs, fat, non-fat soft tissues, and bones, respectively. Accurate fat identification is achievable using fat/in-phase MR images. Furthermore, we have found that three-class segmentation (air, lungs, other tissues) yields less than 5% standard deviation of bias within the heart, liver, and kidneys. This implies that three-class segmentation can be sufficient to achieve small variation of bias for imaging these three organs. Finally, we have found that inter- and intra-patient lung density variations contribute almost equally to the overall standard deviation of bias within the lungs. PMID:24966415

  20. Attenuation correction for small animal SPECT imaging using x-ray CT data

    SciTech Connect

    Hwang, Andrew B.; Hasegawa, Bruce H.

    2005-09-15

    Photon attenuation in small animal nuclear medicine scans can be significant when using isotopes that emit lower energy photons such as iodine-125. We have developed a method to use microCT data to perform attenuation corrected small animal single-photon emission computed tomography (SPECT). A microCT calibration phantom was first imaged, and the resulting calibration curve was used to convert microCT image values to linear attenuation coefficient values that were then used in an iterative SPECT reconstruction algorithm. This method was applied to reconstruct a SPECT image of a uniform phantom filled with 125I-NaI. Without attenuation correction, the image suffered a 30% decrease in intensity in the center of the image, which was removed with the addition of attenuation correction. This reduced the relative standard deviation in the region of interest from 10% to 6%.
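
    The calibration-curve step described above can be sketched as a simple interpolation from microCT image values to linear attenuation coefficients at the emission energy; the calibration points below are hypothetical placeholders standing in for the measured phantom calibration.

        import numpy as np

        # Hypothetical calibration curve measured with the microCT calibration phantom:
        # microCT image value -> linear attenuation coefficient (cm^-1) at the SPECT
        # emission energy. Values are placeholders, not the study's data.
        cal_ct = np.array([-1000.0, 0.0, 1000.0, 3000.0])
        cal_mu = np.array([0.0, 0.30, 0.60, 1.20])

        def microct_to_mumap(ct_volume):
            """Piecewise-linear conversion of a microCT volume to a mu-map."""
            return np.interp(ct_volume, cal_ct, cal_mu)

        # The resulting mu-map is then supplied to the iterative SPECT reconstruction,
        # which weights each projection ray by exp(-sum(mu * step_length)).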

  1. Magnetic resonance imaging-guided attenuation correction in whole-body PET/MRI using a sorted atlas approach.

    PubMed

    Arabi, Hossein; Zaidi, Habib

    2016-07-01

    Quantitative whole-body PET/MR imaging is challenged by the lack of accurate and robust strategies for attenuation correction. In this work, a new pseudo-CT generation approach, referred to as sorted atlas pseudo-CT (SAP), is proposed for accurate extraction of bones and estimation of lung attenuation properties. This approach improves on the Gaussian process regression (GPR) kernel proposed by Hofmann et al., which relies on the information provided by a co-registered atlas (CT and MRI) and a GPR kernel to predict the distribution of attenuation coefficients. Our approach uses two separate GPR kernels for lung and non-lung tissues. For non-lung tissues, the co-registered atlas dataset was sorted on the basis of local normalized cross-correlation similarity to the target MR image to select the most similar atlas image for each voxel. For lung tissue, the lung volume was incorporated in the GPR kernel, taking advantage of the correlation between lung volume and the corresponding attenuation properties to predict the attenuation coefficients of the lung. In the presence of pathological tissue in the lungs, the lesions are segmented on PET images corrected for attenuation using an MRI-derived three-class attenuation map, followed by assignment of the soft-tissue attenuation coefficient. The proposed algorithm was compared to other techniques reported in the literature, including Hofmann's approach and the three-class attenuation correction technique implemented on the Philips Ingenuity TF PET/MR, where CT-based attenuation correction served as the reference. Fourteen patients with head and neck cancer undergoing PET/CT and PET/MR examinations were used for quantitative analysis. SUV measurements were performed on 12 normal uptake regions as well as high uptake malignant regions. Moreover, a number of similarity measures were used to evaluate the accuracy of extracted bones. The Dice similarity metric revealed that the extracted bone improved from 0.58 ± 0.09 to 0.65 ± 0.07 when

  2. Investigation of Attenuation Correction for Small-Animal Single Photon Emission Computed Tomography

    PubMed Central

    Lee, Hsin-Hui; Chen, Jyh-Cheng

    2013-01-01

    The quantitative accuracy of SPECT is limited by photon attenuation and scatter effects that occur when photons interact with atoms. In this study, we developed a new attenuation correction (AC) method, the CT-based mean attenuation correction (CTMAC) method, and compared it with several methods in current use to assess AC, using small-animal SPECT/CT data acquired from various physical phantoms and a rat. The physical phantoms and an SD rat, which were injected with 99mTc, were scanned with a parallel-hole small-animal SPECT system and then imaged with an 80 kVp micro-CT. Scatter was estimated and corrected by the triple-energy window (TEW) method. Absolute quantification was derived from a scan of a point source of known activity. In the physical-phantom studies, we compared the original images, the images with scatter correction (SC) only, and the scatter-corrected images with AC performed using Chang's method, CT-based attenuation correction (CTAC), CT-based iterative attenuation compensation during reconstruction (CTIACR), and CTMAC. The correction results show that the errors of these six configurations are mostly quite similar. CTMAC requires the shortest correction time while still obtaining good AC results. PMID:23840278

  3. Improvement of Attenuation Correction in Time-of-Flight PET/MR Imaging with a Positron-Emitting Source

    PubMed Central

    Mollet, Pieter; Keereman, Vincent; Bini, Jason; Izquierdo-Garcia, David; Fayad, Zahi A.; Vandenberghe, Stefaan

    2014-01-01

    Quantitative PET imaging relies on accurate attenuation correction. Recently, there has been growing interest in combining state-of-the-art PET systems with MR imaging in a sequential or fully integrated setup. As CT becomes unavailable for these systems, an alternative approach to the CT-based reconstruction of attenuation coefficients (μ values) at 511 keV must be found. Deriving μ values directly from MR images is difficult because MR signals are related to the proton density and relaxation properties of tissue. Therefore, most research groups focus on segmentation or atlas registration techniques. Although studies have shown that these methods provide viable solutions in particular applications, some major drawbacks limit their use in whole-body PET/MR. Previously, we used an annulus-shaped PET transmission source inside the field of view of a PET scanner to measure attenuation coefficients at 511 keV. In this work, we describe the use of this method in studies of patients with the sequential time-of-flight (TOF) PET/MR scanner installed at the Icahn School of Medicine at Mount Sinai, New York, NY. Methods Five human PET/MR and CT datasets were acquired. The transmission-based attenuation correction method was compared with conventional CT-based attenuation correction and the 3-segment, MR-based attenuation correction available on the TOF PET/MR imaging scanner. Results The transmission-based method overcame most problems related to the MR-based technique, such as truncation artifacts of the arms, segmentation artifacts in the lungs, and imaging of cortical bone. Additionally, the TOF capabilities of the PET detectors allowed the simultaneous acquisition of transmission and emission data. Compared with the MR-based approach, the transmission-based method provided average improvements in PET quantification of 6.4%, 2.4%, and 18.7% in volumes of interest inside the lung, soft tissue, and bone tissue, respectively. Conclusion In conclusion, a transmission

  4. Scatter correction of vessel dropout behind highly attenuating structures in 4D-DSA

    NASA Astrophysics Data System (ADS)

    Hermus, James; Mistretta, Charles; Szczykutowicz, Timothy P.

    2015-03-01

    In computed tomographic (CT) image reconstruction for four-dimensional digital subtraction angiography (4D-DSA), loss of vessel contrast has been observed behind highly attenuating anatomy, such as large contrast-filled aneurysms. Although this typically occurs only in a limited range of projection angles, the observed contrast time course can be altered. In this work we propose an algorithm to correct for highly attenuating anatomy, such as aneurysms, within the fill projection data. The algorithm uses a 3D-SA volume to create a correction volume that is multiplied by the 4D-DSA volume in order to correct for signal dropout within the 4D-DSA volume. The algorithm was designed to correct for highly attenuating material in the fill volume only; however, with alterations to a single step of the algorithm, artifacts due to highly attenuating materials in the mask volume (e.g., dental implants) can be mitigated as well. We successfully applied our algorithm to a case of vessel dropout due to the presence of a large attenuating aneurysm. Performance was assessed qualitatively: the affected vessel no longer dropped out in corrected 4D-DSA time frames. The correction was quantified by plotting the signal intensity along the vessel. Our analysis demonstrated that the correction does not alter vessel signal values outside the dropout region but does increase the vessel values within the dropout region, as expected. We have demonstrated that this correction algorithm corrects vessel dropout in areas with highly attenuating materials.

  5. [Preliminary evaluation of the effect of an attenuation correction method in myocardial perfusion SPECT].

    PubMed

    Cortés-Blanco, A; Fujii, C; Goris, M L

    1999-12-01

    We propose a method to assess an attenuation correction method in myocardial perfusion SPECT. Three types of images are obtained: one resulting from a classic acquisition and filtered back-projection (classic), and those resulting from acquisition with a transmission source and iterative reconstruction, with (music) or without (hybrid) the attenuation correction factored in. To compare the three types of images and classify them as normal or abnormal, a three-dimensional inter-patient quantitative comparison method was used. Differences were computed as fractions of the myocardial volume in which density differences are significant by population standards. In 7 cases the cumulative difference between prone and supine was 124 in hybrid images and 45 in music images. In 10 cases the cumulative difference between classic and music images was 279, and between classic and hybrid images 86. The AC changed 4 of 12 cases from abnormal to normal. The attenuation correction effect was concentrated on the septal and inferior walls, but neither exclusively nor evenly among patients. The attenuation correction effectively minimizes attenuation effects by a factor of 2.7, due to a correction of at least 69%. The correction has a small but meaningful effect on the results. PMID:10611567

  6. Accurate and efficient modeling of global seismic wave propagation for an attenuative Earth model including the center

    NASA Astrophysics Data System (ADS)

    Toyokuni, Genti; Takenaka, Hiroshi

    2012-06-01

    We propose a method for modeling global seismic wave propagation through an attenuative Earth model that includes the center. The method enables accurate and efficient computation because it is based on the 2.5-D approach, which solves the wave equations only on a 2-D cross section of the whole Earth yet correctly models 3-D geometrical spreading. We extend a numerical scheme for elastic waves in spherical coordinates, using the finite-difference method (FDM), to solve the viscoelastodynamic equation. For computation of realistic seismic wave propagation, incorporation of anelastic attenuation is crucial. Because Earth materials behave as both elastic solids and viscous fluids, the stress-strain relation of viscoelastic material, including attenuative structures, must be solved. This relation expresses the stress as a convolution integral in time, which has made viscoelasticity difficult to treat in time-domain computations such as the FDM. However, the memory-variable approach, invented in the 1980s and subsequently refined in Cartesian coordinates, overcomes this difficulty. Arbitrary values of the quality factor (Q) can be incorporated into the wave equation via an array of Zener bodies. We also introduce a multi-domain approach, an FD grid of several layers with different grid spacings, into our FDM scheme. This allows wider lateral grid spacings with depth, so as not to violate the FD stability criterion near the Earth's center. In addition, we propose a technique to avoid the singularity of the wave equation in spherical coordinates at the Earth's center. We develop a scheme to calculate the wavefield variables at this point, based on linear interpolation for the velocity-stress, staggered-grid FDM. The scheme is validated through a comparison of synthetic seismograms with those obtained by the Direct Solution Method for a spherically symmetric Earth model, showing excellent accuracy for our FDM scheme. As a numerical example, we apply the method to simulate seismic
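
    The statement that arbitrary Q values can be incorporated via Zener bodies can be illustrated with the single-element relations below; this generic textbook sketch picks the two relaxation times of one standard linear solid so that it yields a target Q at a reference frequency, and it is not the multi-element array or memory-variable FD update used in the paper.

        import numpy as np

        def zener_relaxation_times(q0, f0):
            """Relaxation times of one Zener body giving quality factor q0 at f0 (Hz)."""
            w0 = 2.0 * np.pi * f0
            tau_sigma = (np.sqrt(1.0 + 1.0 / q0**2) - 1.0 / q0) / w0   # stress relaxation
            tau_eps = 1.0 / (w0**2 * tau_sigma)                        # strain relaxation
            return tau_eps, tau_sigma

        def q_of_frequency(f, tau_eps, tau_sigma):
            """Frequency-dependent Q of a single Zener body."""
            w = 2.0 * np.pi * f
            return (1.0 + w**2 * tau_eps * tau_sigma) / (w * (tau_eps - tau_sigma))

        tau_eps, tau_sigma = zener_relaxation_times(q0=100.0, f0=0.05)  # 20 s period
        print(q_of_frequency(0.05, tau_eps, tau_sigma))                 # ~100 at f0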

  7. Emission-based estimation of lung attenuation coefficients for attenuation correction in time-of-flight PET/MR.

    PubMed

    Mehranian, Abolfazl; Zaidi, Habib

    2015-06-21

    In standard segmentation-based MRI-guided attenuation correction (MRAC) of PET data on hybrid PET/MRI systems, the inter/intra-patient variability of linear attenuation coefficients (LACs) is ignored owing to the assignment of a constant LAC to each tissue class. This can lead to PET quantification errors, especially in the lung regions. In this work, we aim to derive continuous and patient-specific lung LACs from time-of-flight (TOF) PET emission data using the maximum likelihood reconstruction of activity and attenuation (MLAA) algorithm. The MLAA algorithm was constrained to estimate lung LACs only in the standard 4-class MR attenuation map, using a Gaussian lung tissue preference prior and a Markov random field smoothness prior. MRAC maps were derived by segmenting the CT images of 19 TOF-PET/CT clinical studies into background air, lung, fat and soft-tissue classes, followed by assignment of predefined LACs of 0, 0.0224, 0.0864 and 0.0975 cm-1, respectively. The lung LACs of the resulting attenuation maps were then estimated from emission data using the proposed MLAA algorithm. PET quantification accuracy of the MRAC and MLAA methods was evaluated against the reference CT-based AC method in the lungs, in lesions located in or near the lungs, and in neighbouring tissues. The results show that the proposed MLAA algorithm is capable of retrieving lung density gradients and compensates fairly well for respiratory-phase mismatch between PET and the corresponding attenuation maps. The mean of the estimated lung LACs generally follows the trend of the reference CT-based attenuation correction (CTAC) method. Quantitative analysis revealed that the MRAC method resulted in average relative errors of -5.2 ± 7.1% and -6.1 ± 6.7% in the lungs and lesions, respectively. These were reduced by the MLAA algorithm to -0.8 ± 6.3% and -3.3 ± 4.7%, respectively. In conclusion, we demonstrated the potential and capability of emission-based methods in deriving patient
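
    As a concrete illustration of the 4-class starting point described above, the sketch below fills a segmented label volume with the predefined LACs quoted in the abstract; the label encoding is an assumption, and the subsequent MLAA re-estimation of the lung voxels from TOF emission data is not reproduced here.

        import numpy as np

        # Assumed label encoding of the segmented MR volume.
        AIR, LUNG, FAT, SOFT = 0, 1, 2, 3

        # Predefined LACs at 511 keV (cm^-1), as quoted in the abstract above.
        CLASS_LAC = {AIR: 0.0, LUNG: 0.0224, FAT: 0.0864, SOFT: 0.0975}

        def labels_to_mumap(labels):
            """Build the 4-class attenuation map from a label volume."""
            mumap = np.zeros(labels.shape, dtype=np.float32)
            for cls, lac in CLASS_LAC.items():
                mumap[labels == cls] = lac
            return mumap

        # In the constrained MLAA discussed above, only voxels with labels == LUNG
        # would subsequently be re-estimated from the TOF emission data.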

  8. Polarimetric X-band weather radar measurements in the tropics: radome and rain attenuation correction

    NASA Astrophysics Data System (ADS)

    Schneebeli, M.; Sakuragi, J.; Biscaro, T.; Angelis, C. F.; Carvalho da Costa, I.; Morales, C.; Baldini, L.; Machado, L. A. T.

    2012-09-01

    A polarimetric X-band radar was deployed for one month (April 2011) during a field campaign in Fortaleza, Brazil, together with three additional laser disdrometers. The disdrometers are capable of measuring raindrop size distributions (DSDs), hence making it possible to forward-model theoretical polarimetric X-band radar observables at the point where the instruments are located. This setup allows thorough testing of the accuracy of the X-band radar measurements as well as of the algorithms that are used to correct the radar data for radome and rain attenuation. For the campaign in Fortaleza it was found that radome attenuation dominantly affects the measurements. With an algorithm based on the self-consistency of the polarimetric observables, the radome-induced reflectivity offset was estimated. Offset-corrected measurements were then further corrected for rain attenuation with two different schemes. The performance of the post-processing steps was analyzed by comparing the data with disdrometer-inferred polarimetric variables measured at a distance of 20 km from the radar. Radome attenuation reached values up to 14 dB, which was found to be consistent with an empirical radome attenuation vs. rain intensity relation previously developed for the same radar type. In contrast to previous work, our results suggest that radome attenuation should be estimated individually for every view direction of the radar in order to obtain homogeneous reflectivity fields.

  9. Improved Algorithms for Accurate Retrieval of UV - Visible Diffuse Attenuation Coefficients in Optically Complex, Inshore Waters

    NASA Technical Reports Server (NTRS)

    Cao, Fang; Fichot, Cedric G.; Hooker, Stanford B.; Miller, William L.

    2014-01-01

    Photochemical processes driven by high-energy ultraviolet radiation (UVR) in inshore, estuarine, and coastal waters play an important role in global biogeochemical cycles and biological systems. A key to modeling photochemical processes in these optically complex waters is an accurate description of the vertical distribution of UVR in the water column, which can be obtained using the diffuse attenuation coefficients of downwelling irradiance, Kd(λ). The SeaUV/SeaUVc algorithms (Fichot et al., 2008) can accurately retrieve Kd(λ) (λ = 320, 340, 380, 412, 443 and 490 nm) in oceanic and coastal waters using multispectral remote sensing reflectances (Rrs(λ), SeaWiFS bands). However, the SeaUV/SeaUVc algorithms are currently not optimized for use in optically complex, inshore waters, where they tend to severely underestimate Kd(λ). Here, a new training data set of optical properties collected in optically complex, inshore waters was used to re-parameterize the published SeaUV/SeaUVc algorithms, resulting in improved Kd(λ) retrievals for turbid, estuarine waters. Although the updated SeaUV/SeaUVc algorithms perform best in optically complex waters, the published SeaUV/SeaUVc models still perform well in most coastal and oceanic waters. Therefore, we propose a composite set of SeaUV/SeaUVc algorithms, optimized for Kd(λ) retrieval in almost all marine systems, ranging from oceanic to inshore waters. The composite algorithm set can retrieve Kd from ocean color with good accuracy across this wide range of water types (e.g., within 13% mean relative error for Kd(340)). A validation step using three independent, in situ data sets indicates that the composite SeaUV/SeaUVc can generate accurate Kd values from 320 to 490 nm using satellite imagery on a global scale. Taking advantage of the inherent benefits of our statistical methods, we pooled the validation data with the training set, obtaining an optimized composite model for estimating Kd(λ) in UV wavelengths for almost all marine waters. This

  10. Accuracy of CT-based attenuation correction in PET/CT bone imaging

    NASA Astrophysics Data System (ADS)

    Abella, Monica; Alessio, Adam M.; Mankoff, David A.; MacDonald, Lawrence R.; Vaquero, Juan Jose; Desco, Manuel; Kinahan, Paul E.

    2012-05-01

    We evaluate the accuracy of scaling CT images for attenuation correction of PET data measured for bone. While the standard tri-linear approach has been well tested for soft tissues, the impact of CT-based attenuation correction on the accuracy of tracer uptake in bone has not been reported in detail. We measured the accuracy of attenuation coefficients of bovine femur segments and patient data using a tri-linear method applied to CT images obtained at different kVp settings. Attenuation values at 511 keV obtained with a 68Ga/68Ge transmission scan were used as a reference standard. The impact of inaccurate attenuation images on PET standardized uptake values (SUVs) was then evaluated using simulated emission images and emission images from five patients with elevated levels of FDG uptake in bone at disease sites. The CT-based linear attenuation images of the bovine femur segments underestimated the true values by 2.9 ± 0.3% for cancellous bone regardless of kVp. For compact bone the underestimation ranged from 1.3% at 140 kVp to 14.1% at 80 kVp. In the patient scans at 140 kVp the underestimation was approximately 2% averaged over all bony regions. The sensitivity analysis indicated that errors in PET SUVs in bone are approximately proportional to errors in the estimated attenuation coefficients for the same regions. The variability in SUV bias also increased approximately linearly with the error in linear attenuation coefficients. These results suggest that bias in bone uptake SUVs of PET tracers ranges from 2.4% to 5.9% when using CT scans at 140 and 120 kVp for attenuation correction. Lower kVp scans have the potential for considerably more error in dense bone. This bias is present in any PET tracer with bone uptake but may be clinically insignificant for many imaging tasks. However, errors from CT-based attenuation correction methods should be carefully evaluated if quantitation of tracer uptake in bone is important.

  11. A simple model for deep tissue attenuation correction and large organ analysis of Cerenkov luminescence imaging

    NASA Astrophysics Data System (ADS)

    Habte, Frezghi; Natarajan, Arutselvan; Paik, David S.; Gambhir, Sanjiv S.

    2014-03-01

    Cerenkov luminescence imaging (CLI) is an emerging, cost-effective modality that uses conventional small-animal optical imaging systems and clinically available radionuclide probes for light emission. CLI has shown good correlation with PET for organs with high uptake, such as the kidney, spleen, thymus and subcutaneous tumors, in mouse models. However, CLI has limitations for deep-tissue quantitative imaging because the blue-weighted spectrum of Cerenkov radiation is strongly attenuated by mammalian tissue. Large organs such as the liver also show higher signal owing to light emitted from a greater thickness of tissue. In this study, we developed a simple model that estimates the effective tissue attenuation coefficient in order to correct the CLI signal intensity using a priori estimates of the depth and thickness of specific organs. We used several thin slices of ham to build a phantom with realistic attenuation. We placed radionuclide sources inside the phantom at different tissue depths and imaged it using an IVIS Spectrum (Perkin-Elmer, Waltham, MA, USA) and an Inveon microPET (Preclinical Solutions Siemens, Knoxville, TN). We also performed CLI and PET of mouse models and applied the proposed attenuation model to correct the CLI measurements. Using calibration factors obtained from the phantom study to convert the corrected CLI measurements to %ID/g, we obtained average differences of less than 10% for the spleen and less than 35% for the liver compared with conventional PET measurements. Hence, the proposed model can correct the CLI signal to provide measurements comparable with PET data.
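
    The correction described above amounts to undoing an exponential depth attenuation with an effective coefficient estimated from the layered phantom; a minimal sketch, with the coefficient and organ depth treated as illustrative inputs rather than values from the study:

        import numpy as np

        def correct_cli(measured_radiance, depth_cm, mu_eff_cm):
            """Undo exponential tissue attenuation of a Cerenkov signal (sketch).

            depth_cm  : a priori estimated depth of the organ (cm)
            mu_eff_cm : effective tissue attenuation coefficient (cm^-1),
                        estimated from the phantom experiment.
            """
            return measured_radiance * np.exp(mu_eff_cm * depth_cm)

        # A phantom-derived calibration factor would then convert the corrected
        # radiance to %ID/g for comparison with PET; numbers here are illustrative.
        corrected = correct_cli(measured_radiance=1.2e4, depth_cm=0.5, mu_eff_cm=5.0)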

  12. An improved MR sequence for attenuation correction in PET/MR hybrid imaging.

    PubMed

    Sagiyama, Koji; Watanabe, Yuji; Kamei, Ryotaro; Shinyama, Daiki; Baba, Shingo; Honda, Hiroshi

    2016-04-01

    The aim of this study was to investigate the effects of MR parameters on tissue segmentation and determine the optimal MR sequence for attenuation correction in PET/MR hybrid imaging. Eight healthy volunteers were examined using a PET/MR hybrid scanner with six three-dimensional turbo-field-echo sequences for attenuation correction by modifying the echo time, k-space trajectory in the phase-encoding direction, and image contrast. MR images for attenuation correction were obtained from six MR sequences in each session; each volunteer underwent four sessions. Two radiologists assessed the attenuation correction maps generated from the MR images with respect to segmentation errors and ghost artifacts on a five-point scale, and the scores were decided by consensus. Segmentation accuracy and reproducibility were compared. Multiple regression analysis was performed to determine the effects of each MR parameter. The two three-dimensional turbo-field-echo sequences with an in-phase echo time and radial k-space sampling showed the highest total scores for segmentation accuracy, with a high reproducibility. In multiple regression analysis, the score with the shortest echo time (-3.44, P<0.0001) and Cartesian sampling in the anterior/posterior phase-encoding direction (-2.72, P=0.002) was significantly lower than that with in-phase echo time and Cartesian sampling in the right/left phase-encoding direction. Radial k-space sampling provided a significantly higher score (+5.08, P<0.0001) compared with Cartesian sampling. Furthermore, radial sampling improved intrasubject variations in the segmentation score (-8.28%, P=0.002). Image contrast had no significant effect on the total score or reproducibility. These results suggest that three-dimensional turbo-field-echo MR sequences with an in-phase echo time and radial k-space sampling provide improved MR-based attenuation correction maps. PMID:26656909

  13. Attenuation correction of emission PET images with average CT: Interpolation from breath-hold CT

    NASA Astrophysics Data System (ADS)

    Huang, Tzung-Chi; Zhang, Geoffrey; Chen, Chih-Hao; Yang, Bang-Hung; Wu, Nien-Yun; Wang, Shyh-Jen; Wu, Tung-Hsin

    2011-05-01

    Misregistration resulting from the difference in temporal resolution between PET and CT scans occurs frequently in PET/CT imaging and distorts tumor quantification in PET. Several papers have reported that using respiration cine average CT (CACT) for PET attenuation correction effectively reduces this misalignment. However, the radiation dose to the patient from a four-dimensional CT scan is relatively high. In this study, we propose a method to interpolate respiratory CT images over a respiratory cycle from inhalation and exhalation breath-hold CT images, and to use the average CT from the generated CT set for PET attenuation correction. The radiation dose to the patient is reduced using this method. Six cancer patients with lesions at various sites underwent routine free-breathing helical CT (HCT), respiration CACT, interpolated average CT (IACT), and 18F-FDG PET. Deformable image registration was used to interpolate the middle phases of a respiratory cycle based on the end-inspiration and end-expiration breath-hold CT scans. The average CT image was calculated from the eight interpolated CT image sets of the middle respiratory phases and the two original inspiration and expiration CT images. The PET images were then reconstructed with attenuation correction from each of the three methods: HCT, CACT, and IACT. Misalignment in PET/CT was reduced when either CACT or IACT was used for attenuation correction. The difference in standard uptake value (SUV) from tumors in the PET images was largest between the use of HCT and CACT, and smallest between the use of CACT and IACT. Besides providing an improvement in tumor quantification similar to that of CACT, using IACT for PET attenuation correction reduces the radiation dose to the patient.
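
    The interpolation-and-averaging idea described above can be sketched as follows, assuming a displacement field dvf (in voxels) obtained from deformable registration that maps the end-inspiration grid toward end-expiration; intermediate phases are approximated by warping the inspiration image with scaled fractions of the field, which is a simplification of the actual pipeline.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def warp(volume, dvf, fraction):
            """Backward-warp `volume` by `fraction` of the displacement field `dvf`.

            volume : 3-D array (end-inspiration CT)
            dvf    : array of shape (3,) + volume.shape, displacement in voxels
            """
            grid = np.indices(volume.shape).astype(np.float32)
            coords = grid + fraction * dvf          # sampling points in the source image
            return map_coordinates(volume, coords, order=1, mode='nearest')

        def interpolated_average_ct(ct_insp, ct_exp, dvf, n_mid=8):
            """Average CT from the two breath-hold CTs plus n_mid interpolated phases."""
            phases = [ct_insp.astype(np.float32), ct_exp.astype(np.float32)]
            for k in range(1, n_mid + 1):
                phases.append(warp(ct_insp, dvf, fraction=k / (n_mid + 1)))
            return np.mean(phases, axis=0)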

  14. Intravascular near-infrared fluorescence catheter with ultrasound guidance and blood attenuation correction

    PubMed Central

    Hossack, John A.

    2013-01-01

    Intravascular near-infrared fluorescence (NIRF) imaging offers a new approach for characterizing atherosclerotic plaque, but random catheter positioning within the vessel lumen results in variable light attenuation and can yield inaccurate measurements. We hypothesized that NIRF measurements could be corrected for variable light attenuation through blood by tracking the location of the NIRF catheter with intravascular ultrasound (IVUS). In this study, a combined NIRF-IVUS catheter was designed to acquire coregistered NIRF and IVUS data, an automated image processing algorithm was developed to measure catheter-to-vessel wall distances, and depth-dependent attenuation of the fluorescent signal was corrected by an analytical light propagation model. Performance of the catheter sensing distance correction method was evaluated in coronary artery phantoms and ex vivo arteries. The correction method produced NIRF estimates of fluorophore concentrations, in coronary artery phantoms, with an average root mean square error of 17.5%. In addition, the correction method resulted in a statistically significant improvement in correlation between spatially resolved NIRF measurements and known fluorophore spatial distributions in ex vivo arteries (from r=0.24 to 0.69, p<0.01, n=6). This work demonstrates that catheter-to-vessel wall distances, measured from IVUS images, can be employed to compensate for inaccuracies caused by variable intravascular NIRF sensing distances. PMID:23698320

  15. A Cavity Corrected 3D-RISM Functional for Accurate Solvation Free Energies

    PubMed Central

    2014-01-01

    We show that an Ng bridge function modified version of the three-dimensional reference interaction site model (3D-RISM-NgB) solvation free energy method can accurately predict the hydration free energy (HFE) of a set of 504 organic molecules. To achieve this, a single unique constant parameter was adjusted to the computed HFE of single atom Lennard-Jones solutes. It is shown that 3D-RISM is relatively accurate at predicting the electrostatic component of the HFE without correction but requires a modification of the nonpolar contribution that originates in the formation of the cavity created by the solute in water. We use a free energy functional with the Ng scaling of the direct correlation function [Ng, K. C. J. Chem. Phys.1974, 61, 2680]. This produces a rapid, reliable small molecule HFE calculation for applications in drug design. PMID:24634616

  16. A Cavity Corrected 3D-RISM Functional for Accurate Solvation Free Energies.

    PubMed

    Truchon, Jean-François; Pettitt, B Montgomery; Labute, Paul

    2014-03-11

    We show that an Ng bridge function modified version of the three-dimensional reference interaction site model (3D-RISM-NgB) solvation free energy method can accurately predict the hydration free energy (HFE) of a set of 504 organic molecules. To achieve this, a single unique constant parameter was adjusted to the computed HFE of single atom Lennard-Jones solutes. It is shown that 3D-RISM is relatively accurate at predicting the electrostatic component of the HFE without correction but requires a modification of the nonpolar contribution that originates in the formation of the cavity created by the solute in water. We use a free energy functional with the Ng scaling of the direct correlation function [Ng, K. C. J. Chem. Phys. 1974, 61, 2680]. This produces a rapid, reliable small molecule HFE calculation for applications in drug design. PMID:24634616

  17. Errors in MR-based attenuation correction for brain imaging with PET/MR scanners

    NASA Astrophysics Data System (ADS)

    Rota Kops, Elena; Herzog, Hans

    2013-02-01

    Aim: Attenuation correction of PET data acquired by hybrid MR/PET scanners remains a challenge, even though several methods for brain and whole-body measurements have been developed recently. A template-based attenuation correction for brain imaging proposed by our group is easy to handle and delivers reliable attenuation maps in a short time. However, some potential error sources are analyzed in this study. We investigated the choice of the template reference head among all the available data (error A) and possible skull anomalies of the specific patient, such as discontinuities due to surgery (error B). Materials and methods: An anatomical MR measurement and a 2-bed-position transmission scan covering the whole head and neck region were performed in eight normal subjects (4 females, 4 males). Error A: taking each of the eight heads in turn as the reference, eight different templates were created by nonlinearly registering the images to the reference and calculating the average. Eight patients (4 females, 4 males; 4 with brain lesions, 4 without brain lesions) were measured in the Siemens BrainPET/MR scanner. The eight templates were used to generate the patients' attenuation maps required for reconstruction. ROI and VOI atlas-based comparisons were performed employing all the reconstructed images. Error B: CT-based attenuation maps of two volunteers were manipulated by manually inserting several skull lesions and filling a nasal cavity. The corresponding attenuation coefficients were substituted with that of water (0.096 cm-1). Results: Error A: the mean SUVs over the eight template pairs for all eight patients and all VOIs did not differ significantly from one another. Standard deviations of up to 1.24% were found. Error B: after reconstruction of the volunteers' BrainPET data with the CT-based attenuation maps without and with skull anomalies, a VOI-atlas analysis was performed revealing very little influence of the skull lesions (less than 3%), while the filled nasal

  18. Correction for solute/solvent interaction extends accurate freezing point depression theory to high concentration range.

    PubMed

    Fullerton, G D; Keener, C R; Cameron, I L

    1994-12-01

    The authors describe empirical corrections to ideally dilute expressions for the freezing point depression of aqueous solutions to arrive at new expressions accurate up to three molal concentration. The method assumes that non-ideality is due primarily to solute/solvent interactions, such that the corrected free water mass Mwc is the mass of water in solution Mw minus I·Ms, where Ms is the mass of solute and I is an empirical solute/solvent interaction coefficient. The interaction coefficient is easily derived from the constant term in the linear regression fit to the experimental plot of Mw/Ms as a function of 1/ΔT (inverse freezing point depression). The I value, when substituted into the new thermodynamic expressions derived from the assumption of equivalent activity of water in solution and ice, provides accurate predictions of freezing point depression (±0.05 °C) up to 2.5 molal concentration for all the test molecules evaluated: glucose, sucrose, glycerol and ethylene glycol. The concentration limit is the approximate monolayer water coverage limit for the solutes, which suggests that direct solute/solute interactions are negligible below this limit. This is contrary to the view of many authors, owing to the common practice of including hydration forces (a soft potential added to the hard-core atomic potential) in the interaction potential between solute particles. When this is recognized, the two viewpoints are in fundamental agreement. PMID:7699200
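
    Stated compactly (a reconstruction of the relation described above, with K denoting the regression slope, an introduced symbol):

        \[
        \frac{M_w}{M_s} = \frac{K}{\Delta T} + I
        \quad\Longrightarrow\quad
        M_{wc} = M_w - I\,M_s, \qquad
        \frac{M_{wc}}{M_s} = \frac{K}{\Delta T},
        \]

    so the intercept of the linear fit of Mw/Ms versus 1/ΔT gives the interaction coefficient I, and the ideally dilute form is recovered once the corrected free water mass Mwc replaces Mw.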

  19. Methods of Attenuation Correction for Dual-Wavelength and Dual-Polarization Weather Radar Data

    NASA Technical Reports Server (NTRS)

    Meneghini, R.; Liao, L.

    2007-01-01

    In writing the integral equations for the median mass diameter and number concentration, or comparable parameters of the raindrop size distribution, it is apparent that the forms of the equations for dual-polarization and dual-wavelength radar data are identical when attenuation effects are included. The differential backscattering and extinction coefficients appear in both sets of equations: for the dual-polarization equations, the differences are taken with respect to polarization at a fixed frequency, while for the dual-wavelength equations, the differences are taken with respect to frequency at a fixed polarization. An alternative to the integral equation formulation is one based on the k-Z (attenuation coefficient-radar reflectivity factor) parameterization. This technique was originally developed for attenuating single-wavelength radars, a variation of which has been applied to the TRMM Precipitation Radar (PR) data. Extensions of this method have also been applied to dual-polarization data. In fact, it is not difficult to show that nearly identical equations are applicable as well to dual-wavelength radar data. In this case, the equations for median mass diameter and number concentration take the form of coupled, but non-integral, equations. Differences between this and the integral equation formulation are a consequence of the different ways in which attenuation correction is performed under the two formulations. For both techniques, the equations can be solved either forward from the radar outward or backward from the final range gate toward the radar. Although the forward-going solutions tend to be unstable as the attenuation out to the range of interest becomes large, an independent estimate of path attenuation is not required. This is analogous to the case of an attenuating single-wavelength radar, where the forward solution to the Hitschfeld-Bordan equation becomes unstable as the attenuation increases. To circumvent this problem, the
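
    A minimal sketch of the forward (Hitschfeld-Bordan) correction mentioned above for a single attenuating wavelength, using a k-Z power law k = a*Z^b with k in dB/km and Z in mm^6 m^-3; the coefficients are illustrative placeholders, and the well-known instability appears as the bracketed term approaches zero.

        import numpy as np

        def hitschfeld_bordan(z_measured_dbz, range_km, a=3.0e-4, b=0.78):
            """Forward attenuation correction of a measured reflectivity profile (sketch).

            a, b : k-Z power-law coefficients, k [dB/km] = a * Z**b
                   (placeholder values, not taken from the paper).
            """
            z_lin = 10.0 ** (np.asarray(z_measured_dbz) / 10.0)   # mm^6 m^-3
            q = 0.2 * np.log(10.0)                                # two-way, dB -> natural log
            integrand = z_lin ** b
            cumint = np.concatenate(([0.0], np.cumsum(
                0.5 * (integrand[1:] + integrand[:-1]) * np.diff(range_km))))
            bracket = 1.0 - q * a * b * cumint
            bracket = np.maximum(bracket, 1e-6)                   # guard against blow-up
            return 10.0 * np.log10(z_lin * bracket ** (-1.0 / b))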

  20. An accurate and efficient algorithm for Faraday rotation corrections for spaceborne microwave radiometers

    NASA Astrophysics Data System (ADS)

    Singh, Malkiat; Bettenhausen, Michael H.

    2011-08-01

    Faraday rotation changes the polarization plane of linearly polarized microwaves as they propagate through the ionosphere. To correct for this ionospheric polarization error, it is necessary to have electron density profiles on a global scale that represent the ionosphere in real time. We use ray tracing through the combined ionospheric conductivity and electron density (ICED), Bent, and Gallagher models (the RIBG model) to specify the ionospheric conditions, ingesting GPS data from observing stations that are as close as possible to the observation time and location of the space system for which the corrections are required. To calculate Faraday rotation corrections accurately, we also utilize the raytrace utility of the RIBG model instead of the usual shell-model assumption for the ionosphere. We use WindSat data, which exhibit a wide range of raypath orientations and a high observation data rate, to provide a realistic data set for analysis. The standard single-shell models at 350 and 400 km are studied along with a new three-shell model and compared with the raytrace method for computation time and accuracy. We have compared the Faraday results obtained with climatological (International Reference Ionosphere and RIBG) and physics-based (Global Assimilation of Ionospheric Measurements) ionospheric models. We also study the impact of limitations in the availability of GPS data on the accuracy of the Faraday rotation calculations.
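
    For orientation, the single-shell approximation that the ray-trace approach is compared against reduces to the standard magneto-ionic relation below; the frequency, slant TEC and field value in the example are illustrative, not WindSat or RIBG outputs.

        import numpy as np

        def faraday_rotation_rad(freq_hz, slant_tec_el_per_m2, b_parallel_tesla):
            """One-way Faraday rotation (radians) in a thin-shell approximation.

            Omega = (2.36e4 / f^2) * integral(N_e * B_parallel dl), with the integral
            collapsed to slant TEC times the field component along the ray evaluated
            at the shell height.
            """
            return 2.36e4 * b_parallel_tesla * slant_tec_el_per_m2 / freq_hz**2

        # Example: 10.7 GHz channel, 30 TECU slant TEC, 40 uT field along the ray.
        omega = faraday_rotation_rad(10.7e9, 30.0 * 1e16, 40e-6)
        print(np.degrees(omega))   # a fraction of a degree at this frequency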

  1. Quantitative multi-pinhole small-animal SPECT: uniform versus non-uniform Chang attenuation correction

    NASA Astrophysics Data System (ADS)

    Wu, C.; de Jong, J. R.; Gratama van Andel, H. A.; van der Have, F.; Vastenhouw, B.; Laverman, P.; Boerman, O. C.; Dierckx, R. A. J. O.; Beekman, F. J.

    2011-09-01

    Attenuation of the photon flux on trajectories between the source and pinhole apertures affects the quantitative accuracy of reconstructed single-photon emission computed tomography (SPECT) images. We propose a Chang-based non-uniform attenuation correction (NUA-CT) for small-animal SPECT/CT with focusing pinhole collimation, and compare its quantitative accuracy with uniform Chang correction based on (i) body outlines extracted from x-ray CT (UA-CT) and (ii) body contours drawn by hand on the images obtained with three integrated optical cameras (UA-BC). Measurements in phantoms and rats containing known activities of isotopes were conducted for evaluation. In 125I, 201Tl, 99mTc and 111In phantom experiments, average relative errors with respect to the gold standards measured in a dose calibrator were reduced to 5.5%, 6.8%, 4.9% and 2.8%, respectively, with NUA-CT. In animal studies, these errors were 2.1%, 3.3%, 2.0% and 2.0%, respectively. Differences in accuracy on average between results of NUA-CT, UA-CT and UA-BC were less than 2.3% in phantom studies and 3.1% in animal studies, except for 125I (3.6% and 5.1%, respectively). All methods tested provide reasonable attenuation correction and result in high quantitative accuracy. NUA-CT shows superior accuracy except for 125I, where other factors may have more impact on the quantitative accuracy than the selected attenuation correction.
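
    A sketch of the first-order (uniform) Chang correction underlying the UA variants above: each reconstructed voxel is divided by its average transmission over projection directions, shown here in 2-D with a uniform attenuation coefficient inside a binary body mask. The non-uniform variant would instead integrate a CT-derived mu-map along the same rays; grid spacing, mu and the angle count are illustrative.

        import numpy as np

        def chang_factors(body_mask, mu_cm, pixel_cm, n_angles=64):
            """First-order Chang correction factors for a 2-D slice (sketch).

            Returns C such that corrected = reconstructed * C, with
            C = 1 / mean_over_angles( exp(-mu * path_length_to_boundary) ).
            """
            ny, nx = body_mask.shape
            angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
            factors = np.ones((ny, nx), dtype=np.float32)
            for y, x in zip(*np.nonzero(body_mask)):
                trans = []
                for th in angles:
                    # step outward from the voxel until the ray leaves the body mask
                    yy, xx, length = float(y), float(x), 0.0
                    while (0 <= int(round(yy)) < ny and 0 <= int(round(xx)) < nx
                           and body_mask[int(round(yy)), int(round(xx))]):
                        yy += np.sin(th)
                        xx += np.cos(th)
                        length += pixel_cm
                    trans.append(np.exp(-mu_cm * length))
                factors[y, x] = 1.0 / np.mean(trans)
            return factors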

  2. Low-dose interpolated average CT for attenuation correction in cardiac PET/CT

    NASA Astrophysics Data System (ADS)

    Wu, Tung-Hsin; Zhang, Geoffrey; Wang, Shyh-Jen; Chen, Chih-Hao; Yang, Bang-Hung; Wu, Nien-Yun; Huang, Tzung-Chi

    2010-07-01

    Because of the high photon flux and thus the short scan times of CT imaging, traditional 68Ge transmission scans for positron emission tomography (PET) attenuation correction have been replaced by CT scans in modern PET/CT systems. The combination of a fast CT scan and a slow PET scan often causes image misalignment between the PET and CT images due to respiratory motion. Use of the average CT derived from cine CT images has been reported to reduce such misalignment. However, the radiation dose to patients is higher with cine CT scans. This study introduces a method that uses breath-hold CT images and their interpolations to generate the average CT for PET attenuation correction. Breath-hold CT sets are acquired at end-inspiration and end-expiration. Deformable image registration is applied to generate a voxel-to-voxel motion matrix between the two CT sets. The motion is equally divided into 5 steps from inspiration to expiration and 5 steps from expiration to inspiration, generating a total of 8 phases of interpolated CT sets. An average CT image is generated from all 10 phase CT images, comprising the original inhale/exhale CT and the 8 interpolated CT sets. Quantitative comparison shows that the reduction of image misalignment artifacts using the average CT from the interpolation technique for PET attenuation correction is at a similar level to that using cine average CT, while the dose to the patient from the CT scans is reduced significantly. The interpolated average CT method hence provides a low-dose alternative to cine CT scans for PET attenuation correction.

  3. Attenuation correction with region growing method used in the positron emission mammography imaging system

    NASA Astrophysics Data System (ADS)

    Gu, Xiao-Yue; Li, Lin; Yin, Peng-Fei; Yun, Ming-Kai; Chai, Pei; Huang, Xian-Chao; Sun, Xiao-Li; Wei, Long

    2015-10-01

    The Positron Emission Mammography imaging system (PEMi) provides a novel nuclear diagnostic method dedicated to breast imaging. With better resolution than whole-body PET, PEMi can detect millimeter-sized breast tumors. To meet the requirement of semi-quantitative analysis with a radiotracer concentration map of the breast, a new attenuation correction method based on three-dimensional seeded region growing image segmentation (3DSRG-AC) has been developed. The method gives a 3D connected region as the segmentation result instead of individual image slices. The continuity of the segmentation result makes the new method insensitive to activity variations within breast tissue. Choice of the threshold value is the key step in the segmentation method. The first valley in the grey-level histogram of the reconstructed image is set as the lower threshold, which works well in clinical application. Results show that attenuation correction for PEMi improves image quality and the quantitative accuracy of the radioactivity distribution determination. Attenuation correction also improves the probability of detecting small and early breast tumors. Supported by Knowledge Innovation Project of The Chinese Academy of Sciences (KJCX2-EW-N06)
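
    A compact sketch of the segmentation step described above: the lower threshold is taken at the first valley of the grey-level histogram, and only the connected component containing a seed inside the breast is kept; scipy's connected-component labelling stands in for explicit voxel-by-voxel growth, and the uniform soft-tissue coefficient assigned at the end is illustrative.

        import numpy as np
        from scipy import ndimage

        def first_histogram_valley(volume, nbins=256):
            """Lower threshold: first local minimum of the grey-level histogram."""
            hist, edges = np.histogram(volume, bins=nbins)
            for i in range(1, nbins - 1):
                if hist[i] < hist[i - 1] and hist[i] <= hist[i + 1]:
                    return 0.5 * (edges[i] + edges[i + 1])
            return edges[nbins // 2]                     # fallback

        def grow_breast_region(volume, seed_index):
            """3-D connected region above the threshold that contains the seed voxel."""
            mask = volume >= first_histogram_valley(volume)
            labels, _ = ndimage.label(mask)
            return labels == labels[seed_index]

        def region_to_mumap(region_mask, mu_soft_cm=0.096):   # illustrative coefficient
            """Uniform attenuation map over the grown breast region."""
            return np.where(region_mask, mu_soft_cm, 0.0).astype(np.float32)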

  4. Validation of Computed Tomography-based Attenuation Correction of Deviation between Theoretical and Actual Values in Four Computed Tomography Scanners

    PubMed Central

    Yada, Nobuhiro; Onishi, Hideo

    2016-01-01

    Objective(s): In this study, we aimed to validate the accuracy of computed tomography-based attenuation correction (CTAC) using the bilinear scaling method. Methods: The measured attenuation coefficient (μm) was compared to the theoretical attenuation coefficient (μt) using four different CT scanners and an RMI 467 phantom. The effective energy of the CT beam X-rays was calculated using the aluminum half-value layer method and was used in conjunction with an attenuation coefficient map to convert the CT numbers to μm values for a photon energy of 140 keV. We measured the CT numbers of the RMI 467 phantom on each of the four scanners and compared the μm and μt values with respect to the effective energies of the CT beam X-rays, the effective atomic numbers, and the physical densities. Results: For CT beams with low effective energies, the μm values in inserts containing high-atomic-number constituents were lower than those obtained with CT beams of high effective energies. As the physical density increased, the μm values increased linearly. Compared with the other scanners, the μm values obtained from the scanner with the CT beam X-rays of maximal effective energy increased once the effective atomic number exceeded 10.00. The μm value of soft tissue was equivalent to the μt value. However, the maximal differences between μm and μt values were 25.4% (lung tissue) and 21.5% (bone tissue), respectively. Additionally, the maximal difference in μm values among the four scanners was 6.0%, in bone tissue. Conclusion: The bilinear scaling method could accurately convert CT numbers to μ values in soft tissues. PMID:27408896
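
    The bilinear scaling being validated above can be sketched as two linear segments joined at the water point: below 0 HU the CT number is scaled with the water coefficient at the emission energy, and above 0 HU a shallower bone segment is used. The water value 0.096 cm-1 corresponds to 511 keV (the study above works at 140 keV, where the coefficients differ), and the bone-segment slope is an illustrative placeholder that in practice depends on the CT effective energy.

        import numpy as np

        def bilinear_ct_to_mu(hu, mu_water=0.096, bone_slope=5.0e-5):
            """Bilinear conversion of CT numbers (HU) to attenuation coefficients (cm^-1).

            mu_water   : water coefficient at the emission energy (here 511 keV).
            bone_slope : cm^-1 per HU on the bone segment -- placeholder only.
            """
            hu = np.asarray(hu, dtype=np.float32)
            mu_soft = mu_water * (1.0 + hu / 1000.0)     # air-water segment
            mu_bone = mu_water + bone_slope * hu         # water-bone segment
            return np.where(hu <= 0.0, np.clip(mu_soft, 0.0, None), mu_bone)

        print(bilinear_ct_to_mu([-1000, 0, 1000]))       # air ~0, water 0.096, bone higher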

  5. Accurate tracking of tumor volume change during radiotherapy by CT-CBCT registration with intensity correction

    NASA Astrophysics Data System (ADS)

    Park, Seyoun; Robinson, Adam; Quon, Harry; Kiess, Ana P.; Shen, Colette; Wong, John; Plishker, William; Shekhar, Raj; Lee, Junghoon

    2016-03-01

    In this paper, we propose a CT-CBCT registration method to accurately predict tumor volume change based on daily cone-beam CTs (CBCTs) during radiotherapy. CBCT is commonly used to reduce patient setup error during radiotherapy, but its poor image quality impedes accurate monitoring of anatomical changes. Although physician-drawn contours on the planning CT can be automatically propagated to daily CBCTs by deformable image registration (DIR), artifacts in CBCT often cause undesirable errors. To improve the accuracy of registration-based segmentation, we developed a DIR method that iteratively corrects CBCT intensities by local histogram matching. Three popular DIR algorithms (B-spline, demons, and optical flow) with the intensity correction were implemented on a graphics processing unit for efficient computation. We evaluated their performance on six head and neck (HN) cancer cases. For each case, four trained scientists manually contoured the nodal gross tumor volume (GTV) on the planning CT and on every other fraction's CBCT, to which the GTV contours propagated by DIR were compared. Performance was also compared with commercial image registration software based on conventional mutual information (MI), VelocityAI (Varian Medical Systems Inc.). The volume differences (mean ± std in cc) between the average of the manual segmentations and the automatic segmentations are 3.70 ± 2.30 (B-spline), 1.25 ± 1.78 (demons), 0.93 ± 1.14 (optical flow), and 4.39 ± 3.86 (VelocityAI). The proposed method significantly reduced the estimation error by 9% (B-spline), 38% (demons), and 51% (optical flow) relative to the results using VelocityAI. Although demonstrated only on HN nodal GTVs, the results imply that the proposed method can produce improved segmentation of other critical structures over conventional methods.
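
    The intensity-correction step described above can be sketched as histogram matching applied block-wise: within each local block, CBCT intensities are remapped so that their cumulative distribution matches that of the corresponding planning-CT block. This is a simplified stand-in for the iterative scheme in the paper, and the block size is arbitrary.

        import numpy as np

        def match_histogram(source, reference):
            """Remap `source` intensities so their CDF matches that of `reference`."""
            _, s_inv, s_cnt = np.unique(source.ravel(),
                                        return_inverse=True, return_counts=True)
            r_vals, r_cnt = np.unique(reference.ravel(), return_counts=True)
            s_cdf = np.cumsum(s_cnt) / source.size
            r_cdf = np.cumsum(r_cnt) / reference.size
            return np.interp(s_cdf, r_cdf, r_vals)[s_inv].reshape(source.shape)

        def local_histogram_correction(cbct, ct, block=32):
            """Block-by-block (local) histogram matching of a CBCT to the planning CT."""
            out = cbct.astype(np.float32).copy()
            nz, ny, nx = cbct.shape
            for z in range(0, nz, block):
                for y in range(0, ny, block):
                    for x in range(0, nx, block):
                        sl = (slice(z, z + block), slice(y, y + block), slice(x, x + block))
                        out[sl] = match_histogram(cbct[sl], ct[sl])
            return out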

  6. Feasibility of using respiration-averaged MR images for attenuation correction of cardiac PET/MR imaging.

    PubMed

    Ai, Hua; Pan, Tinsu

    2015-01-01

    Cardiac imaging is a promising application for combined PET/MR imaging. However, current MR imaging protocols for whole-body attenuation correction can produce spatial mismatch between PET and MR-derived attenuation data owing to a disparity between the two modalities' imaging speeds. We assessed the feasibility of using a respiration-averaged MR (AMR) method for attenuation correction of cardiac PET data in PET/MR images. First, to demonstrate the feasibility of motion imaging with MR, we used a 3T MR system and a two-dimensional fast spoiled gradient-recalled echo (SPGR) sequence to obtain AMR images of a moving phantom. Then, we used the same sequence to obtain AMR images of a patient's thorax under free-breathing conditions. MR images were converted into PET attenuation maps using a three-class tissue segmentation method with two sets of predetermined CT numbers, one calculated from the patient-specific (PS) CT images and the other from a reference group (RG) containing 54 patient CT datasets. The MR-derived attenuation images were then used for attenuation correction of the cardiac PET data, which were compared to the PET data corrected with average CT (ACT) images. In the myocardium, the voxel-by-voxel differences and the differences in mean slice activity between the AMR-corrected PET data and the ACT-corrected PET data were found to be small (less than 7%). The use of AMR-derived attenuation images in place of ACT images for attenuation correction did not affect the summed stress score. These results demonstrate the feasibility of using the proposed SPGR-based MR imaging protocol to obtain patient AMR images and using those images for cardiac PET attenuation correction. Additional studies with more clinical data are warranted to further evaluate the method. PMID:26218995

  7. Correction for multiple scattering of unpolarized photons in attenuation coefficient measurements

    SciTech Connect

    Fernandez, J.E.; Sumini, M.; Satori, R.

    1995-01-01

    Calculations of the diffusion of unpolarized photons in thin targets have been performed using a vector transport model that rigorously takes into account the polarization introduced by the scattering interactions. An order-of-interactions solution of the Boltzmann transport equation for photons was used to describe the multiple scattering terms due to the prevailing effects in the X-ray regime. An analytical expression for the correction factor to the attenuation coefficient is given in terms of the solid angle subtended by the detector and the energy interval characterizing the detection response. Although the main corrections are due to the influence of the pure Rayleigh effect, first- and second-order chains involving the Rayleigh and Compton effects have been considered as possible sources of overlapping contributions to the transmitted intensity. The extent of the corrections is estimated and some examples are given for pure-element targets.

  8. Attenuated MP2 with a Long-Range Dispersion Correction for Treating Nonbonded Interactions.

    PubMed

    Goldey, Matthew B; Belzunces, Bastien; Head-Gordon, Martin

    2015-09-01

    Attenuated second-order Møller-Plesset theory (MP2) captures intermolecular binding energies at equilibrium geometries with high fidelity relative to reference methods, yet it necessarily fails to reproduce dispersion energies at stretched geometries because fully long-range dispersion is removed. To ameliorate this problem, a long-range correction using the VV10 van der Waals density functional is added to attenuated MP2, capturing short-range correlation with attenuated MP2 and long-range dispersion with VV10. Attenuated MP2 with long-range VV10 dispersion in the aug-cc-pVTZ (aTZ) basis set, MP2-V(terfc, aTZ), is parametrized for noncovalent interactions using the S66 database and tested on a variety of noncovalent databases, describing potential energy surfaces and equilibrium binding energies equally well. Further, a spin-component-scaled (SCS) version, SCS-MP2-V(2terfc, aTZ), is produced using the W4-11 database as a supplemental thermochemistry training set, and the resulting method reproduces the quality of MP2-V(terfc, aTZ) for noncovalent interactions and exceeds the performance of SCS-MP2/aTZ for thermochemistry. PMID:26575911

  9. What is the benefit of CT-based attenuation correction in myocardial perfusion SPET?

    PubMed

    Apostolopoulos, Dimitrios J; Savvopoulos, Christos

    2016-01-01

    In multimodality imaging, CT-derived transmission maps are used for attenuation correction (AC) of SPET or PET data. Regarding SPET myocardial perfusion imaging (MPI), however, the benefit of CT-based AC (CT-AC) has been questioned. Although most attenuation-related artifacts are removed by this technique, new false defects may appear while some true perfusion abnormalities may be masked. The merits and the drawbacks of CT-AC in MPI SPET are reviewed and discussed in this editorial. In conclusion, CT-AC is most helpful in men, overweight in particular, and in those with low or low to intermediate pre-test probability of coronary artery disease (CAD). It is also useful for the evaluation of myocardial viability. In high-risk patients though, CT-AC may underestimate the presence or the extent of CAD. In any case, corrected and non-corrected images should be viewed side-by-side and both considered in the interpretation of the study. PMID:27331200

  10. Filter Paper: Solution to High Self-Attenuation Corrections in HEPA Filter Measurements

    SciTech Connect

    Oberer, R.B.; Harold, N.B.; Gunn, C.A.; Brummett, M.; Chaing, L.G.

    2005-10-01

    An 8 by 8 by 6 inch High Efficiency Particulate Air (HEPA) filter was measured as part of a uranium holdup survey in June of 2005, as it has been routinely measured every two months since 1998. Although the survey relies on gross gamma count measurements, this was one of a few measurements that had been converted to a quantitative measurement in 1998. The measurement was analyzed using the traditional Generalized Geometry Holdup (GGH) approach, using HMS3 software, with an area calibration and self-attenuation corrected with an empirical correction factor of 1.06. A result of 172 grams of (235)U was reported. The actual quantity of (235)U in the filter was approximately 1700 g. Because of this unusually large discrepancy, the measurement of HEPA filters will be discussed. Various techniques for measuring HEPA filters will be described using the measurement of a 24 by 24 by 12 inch HEPA filter as an example. A new method to correct for self-attenuation will be proposed for this measurement. Following the discussion of the 24 by 24 by 12 inch HEPA filter, the measurement of the 8 by 8 by 6 inch filter will be discussed in detail.

  11. Attenuation correction in emission tomography using the emission data—A review

    PubMed Central

    Li, Yusheng

    2016-01-01

    The problem of attenuation correction (AC) for quantitative positron emission tomography (PET) had been considered solved to a large extent after the commercial availability of devices combining PET with computed tomography (CT) in 2001; single photon emission computed tomography (SPECT) has seen a similar development. However, stimulated in particular by technical advances toward clinical systems combining PET and magnetic resonance imaging (MRI), research interest in alternative approaches for PET AC has grown substantially in recent years. In this comprehensive literature review, the authors first present theoretical results with relevance to simultaneous reconstruction of attenuation and activity. The authors then look back at the early history of this research area, especially in PET; since this history is closely interwoven with that of similar approaches in SPECT, these will also be covered. The authors then review algorithmic advances in PET, including analytic and iterative algorithms. The analytic approaches are either based on the Helgason–Ludwig data consistency conditions of the Radon transform, or generalizations of John’s partial differential equation; with respect to iterative methods, the authors discuss maximum likelihood reconstruction of attenuation and activity (MLAA), the maximum likelihood attenuation correction factors (MLACF) algorithm, and their offspring. The description of methods is followed by a structured account of applications for simultaneous reconstruction techniques: this discussion covers organ-specific applications, applications specific to PET/MRI, applications using supplemental transmission information, and motion-aware applications. After briefly summarizing SPECT applications, the authors consider recent developments using emission data other than unscattered photons. In summary, developments using time-of-flight (TOF) PET emission data for AC have shown promising advances and open a wide range of applications. These techniques may both remedy

  12. Correcting infrared satellite estimates of sea surface temperature for atmospheric water vapor attenuation

    NASA Technical Reports Server (NTRS)

    Emery, William J.; Yu, Yunyue; Wick, Gary A.; Schluessel, Peter; Reynolds, Richard W.

    1994-01-01

    A new satellite sea surface temperature (SST) algorithm is developed that uses nearly coincident measurements from the Special Sensor Microwave/Imager (SSM/I) to correct for atmospheric moisture attenuation of the infrared signal from the advanced very high resolution radiometer (AVHRR). This new SST algorithm is applied to AVHRR imagery from the South Pacific and Norwegian seas, which are then compared with simultaneous in situ (ship based) measurements of both skin and bulk SST. In addition, an SST algorithm using a quadratic product of the difference between the two AVHRR thermal infrared channels is compared with the in situ measurements. While the quadratic formulation provides a considerable improvement over the older cross product (CPSST) and multichannel (MCSST) algorithms, the SSM/I corrected SST (called the water vapor or WVSST) shows overall smaller errors when compared to both the skin and bulk in situ SST observations. Applied to individual AVHRR images, the WVSST reveals an SST difference pattern (CPSST-WVSST) similar in shape to the water vapor structure while the CPSST-quadratic SST difference appears unrelated in pattern to the nearly coincident water vapor pattern. An application of the WVSST to week-long composites of global area coverage (GAC) AVHRR data demonstrates again the manner in which the WVSST corrects the AVHRR for atmospheric moisture attenuation. By comparison the quadratic SST method underestimates the SST corrections in the lower latitudes and overestimates the SST in the higher latitudes. Correlations between the AVHRR thermal channel differences and the SSM/I water vapor demonstrate the inability of the channel difference to represent water vapor in the midlatitude and high latitudes during summer. Compared against drifting buoy data, the WVSST and the quadratic SST both exhibit the same general behavior, with relatively small differences from the buoy temperatures.

  13. Reference Value Provision Schemes for Attenuation Correction of Full-Waveform Airborne Laser Scanner Data

    NASA Astrophysics Data System (ADS)

    Richter, K.; Blaskow, R.; Stelling, N.; Maas, H.-G.

    2015-08-01

    The characterization of the vertical forest structure is highly relevant for ecological research and for better understanding forest ecosystems. Full-waveform airborne laser scanner systems providing a complete time-resolved digitization of every laser pulse echo may deliver very valuable information on the biophysical structure in forest stands. To exploit the great potential offered by full-waveform airborne laser scanning data, the development of suitable voxel-based data analysis methods is a natural next step. Beyond extracting additional 3D points, it is very promising to derive voxel attributes from the digitized waveform directly. However, the 'history' of each laser pulse echo is characterized by attenuation effects caused by reflections in higher regions of the crown. As a result, the received waveform signals within the canopy have a lower amplitude than would be observed for an identical structure without the previous canopy structure interactions (Romanczyk et al., 2012). To achieve a radiometrically correct voxel space representation, the loss of signal strength caused by partial reflections on the path of a laser pulse through the canopy has to be compensated by applying suitable attenuation correction models. The basic idea of the correction procedure is to enhance the waveform intensity values in lower parts of the canopy by the portions of the pulse intensity that have been reflected in higher parts of the canopy. To estimate the enhancement factor, an appropriate reference value has to be derived from the data itself. Based on pulse history correction schemes presented in previous publications, the paper discusses several approaches for reference value estimation. Furthermore, the results of experiments with two different data sets (leaf-on/leaf-off) are presented.

  14. Evaluation of the Effect of Attenuation Correction by External CT in a Semiconductor SPECT.

    PubMed

    Uchibe, Taku; Miyai, Masahiro; Yata, Nobuhiro; Haramoto, Masuo; Yamamoto, Yasushi; Nakamura, Megumi; Kitagaki, Hajime; Takahashi, Yasuyuki

    2016-07-01

    The Discovery NM530c with a cadmium-zinc-telluride detector (CdZnTe-SPECT) is superior to conventional Anger-type SPECT with a sodium-iodide detector (NaI-SPECT) in terms of sensitivity and spatial resolution. In clinical practice, however, even with CdZnTe-SPECT, the decrease in myocardial counts due to gamma-ray attenuation remains an issue. This study was conducted to evaluate the effect of computed tomography attenuation correction (CTAC) in CdZnTe-SPECT using an external CT scanner. Using a heart phantom, we evaluated the correction effect on uniformity, the influence of differences in attenuation distance, the contrast ratio, and the uptake rate. The phantom studies showed a good correction effect. In the clinical study, there was a statistically significant difference between the contrast ratios before and after CTAC in the inferior wall. In addition, the contrast ratios before and after CTAC in the CdZnTe-SPECT images were equal to those of the NaI-SPECT images. These results suggest that CTAC using external CT in CdZnTe-SPECT is clinically useful for the inferior wall. PMID:27440705

  15. Towards improved hardware component attenuation correction in PET/MR hybrid imaging

    NASA Astrophysics Data System (ADS)

    Paulus, D. H.; Tellmann, L.; Quick, H. H.

    2013-11-01

    In positron emission tomography/computed tomography (PET/CT) hybrid imaging attenuation correction (AC) of the patient tissue and patient table is performed by converting the CT-based Hounsfield units (HU) to linear attenuation coefficients (LAC) of PET. When applied to the new field of hardware component AC in PET/magnetic resonance (MR) hybrid imaging, this conversion method may result in local overcorrection of PET activity values. The aim of this study thus was to optimize the conversion parameters for CT-based AC of hardware components in PET/MR. Systematic evaluation and optimization of the HU to LAC conversion parameters has been performed for the hardware component attenuation map (µ-map) of a flexible radiofrequency (RF) coil used in PET/MR imaging. Furthermore, spatial misregistration of this RF coil to its µ-map was simulated by shifting the µ-map in different directions and the effect on PET quantification was evaluated. Measurements of a PET NEMA standard emission phantom were performed on an integrated hybrid PET/MR system. Various CT parameters were used to calculate different µ-maps for the flexible RF coil and to evaluate the impact on the PET activity concentration. A 511 keV transmission scan of the local RF coil was used as standard of reference to adapt the slope of the conversion from HUs to LACs at 511 keV. The average underestimation of the PET activity concentration due to the non-attenuation corrected RF coil in place was calculated to be 5.0% in the overall phantom. When considering attenuation only in the upper volume of the phantom, the average difference to the reference scan without RF coil is 11.0%. When the PET/CT conversion is applied, an average overestimation of 3.1% (without extended CT scale) and 4.2% (with extended CT scale) is observed in the top volume of the NEMA phantom. Using the adapted conversion resulting from this study, the deviation in the top volume of the phantom is reduced to -0.5% and shows the lowest

  16. A revision factor to the Cutshall self-attenuation correction in (210)Pb gamma-spectrometry measurements.

    PubMed

    Jodłowski, Paweł

    2016-03-01

    The Cutshall transmission method for determination of the self-attenuation correction in (210)Pb measurements by gamma-spectrometry gives results burdened with errors of up to 10%. The author proposes introducing an additional revision factor CCs,Cuts into the Cutshall correction Cs,Cuts to eliminate these errors. The proposed formula describes the CCs,Cuts value as a function of the experimentally obtained Cs,Cuts correction. The formula holds true over wide ranges of measurement geometries and linear attenuation coefficients of both the standard and the sample. PMID:26702546
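
    For reference, the sketch below computes the conventional Cutshall transmission correction in its commonly quoted form, in which the factor ln(T)/(T - 1) is applied to the apparent activity, with T the ratio of the count rate transmitted through the sample to that transmitted through the calibration standard. This is only the baseline correction; the revision factor proposed in the paper is not reproduced here.

      import math

      def cutshall_self_attenuation(count_through_sample, count_through_standard):
          """Conventional Cutshall transmission correction factor (commonly quoted form).

          T is the ratio of the photon count rate transmitted through the sample to
          that transmitted through the calibration standard; the apparent (210)Pb
          activity is multiplied by ln(T)/(T - 1).
          """
          T = count_through_sample / count_through_standard
          if abs(T - 1.0) < 1e-9:        # identical attenuation: no correction needed
              return 1.0
          return math.log(T) / (T - 1.0)

      # Example: the sample transmits 20% fewer photons than the standard
      print(cutshall_self_attenuation(800.0, 1000.0))   # ~1.116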

  17. Accurate and precise measurement of oxygen isotopic fractions and diffusion profiles by selective attenuation of secondary ions (SASI).

    PubMed

    Téllez, Helena; Druce, John; Hong, Jong-Eun; Ishihara, Tatsumi; Kilner, John A

    2015-03-01

    The accuracy and precision of isotopic analysis in Time-of-Flight secondary ion mass spectrometry (ToF-SIMS) relies on the appropriate reduction of the dead-time and detector saturation effects, especially when analyzing species with high ion yields or present in high concentrations. Conventional approaches to avoid these problems are based on Poisson dead-time correction and/or an overall decrease of the total secondary ion intensity by reducing the target current. This ultimately leads to poor detection limits for the minor isotopes and high uncertainties of the measured isotopic ratios. An alternative strategy consists of the attenuation of those specific secondary ions that saturate the detector, providing an effective extension of the linear dynamic range. In this work, the selective attenuation of secondary ion signals (SASI) approach is applied to the study of oxygen transport properties in electroceramic materials by isotopic labeling with stable (18)O tracer and ToF-SIMS depth profiling. The better analytical performance in terms of accuracy and precision allowed a more reliable determination of the oxygen surface exchange and diffusion coefficients while maintaining good mass resolution and limits of detection for other minor secondary ion species. This improvement is especially relevant to understand the ionic transport mechanisms and properties of solid materials, such as the parallel diffusion pathways (e.g., oxygen diffusion through bulk, grain boundary, or dislocations) in electroceramic materials with relevant applications in energy storage and conversion devices. PMID:25647357

  18. Effect of Non-Alignment/Alignment of Attenuation Map Without/With Emission Motion Correction in Cardiac SPECT/CT

    PubMed Central

    Dey, Joyoni; Segars, W. Paul; Pretorius, P. Hendrik; King, Michael A.

    2015-01-01

    Purpose We investigate the differences, without and with respiratory motion correction, in apparent imaging-agent localization induced in reconstructed emission images when the attenuation maps used for attenuation correction (from CT) are misaligned with the patient anatomy during emission imaging due to differences in respiratory state. Methods We investigated the use of attenuation maps acquired at different states of a 2 cm amplitude respiratory cycle (at end-expiration, at end-inspiration, the center map, the average transmission map, and a large breath-hold beyond the range of respiration during emission imaging) to correct for attenuation in MLEM reconstruction for several anatomical variants of the NCAT phantom, which included variants both with and without non-rigid motion between the heart and sub-diaphragmatic regions (such as the liver and kidneys). We tested these cases with and without emission motion correction and attenuation map alignment/non-alignment. Results For the NCAT default male anatomy the false count-reduction due to breathing was largely removed upon emission motion correction for the large majority of the cases. Exceptions (for the default male) were the cases using the large breath-hold end-inspiration map (TI_EXT), the end-expiration (TE) map, and, to a smaller extent, the end-inspiration map (TI). However, moving the attenuation maps rigidly to align the heart region reduced the remaining count-reduction artifacts. For the female patient, count reduction remained after motion correction with rigid map alignment, owing to misalignment of the breast soft tissue. Quantitatively, after the transmission (rigid) alignment correction, the polar-map 17-segment RMS error with respect to the reference (motion-less case) was reduced by 46.5% on average for the extreme breath-hold case. The reductions were 40.8% for the end-expiration map and 31.9% for the end-inspiration map on average, comparable to the semi-ideal case where each state uses its own attenuation map for

  19. Differential Effects of Focused and Unfocused Written Correction on the Accurate Use of Grammatical Forms by Adult ESL Learners

    ERIC Educational Resources Information Center

    Sheen, Younghee; Wright, David; Moldawa, Anna

    2009-01-01

    Building on Sheen's (2007) study of the effects of written corrective feedback (CF) on the acquisition of English articles, this article investigated whether direct focused CF, direct unfocused CF and writing practice alone produced differential effects on the accurate use of grammatical forms by adult ESL learners. Using six intact adult ESL…

  20. Attenuation correction in SPECT using consistency conditions for the exponential ray transform.

    PubMed

    Mennessier, C; Noo, F; Clackdoyle, R; Bal, G; Desbat, L

    1999-10-01

    Using data consistency conditions for the exponential ray transform, a method is derived to correct SPECT data for attenuation effects. No transmission measurements are required, and no operator-defined contours are needed. Furthermore, any 3D parallel-ray geometry can be considered for SPECT data acquisition, even unconventional geometries which do not lead to a set of 2D parallel-beam sinograms. The method is presented for both the 2D parallel-beam geometry and a particular 3D case, called the rotating slant hole geometry. Full details of the algorithms are given. Implementation has been carried out and results are presented in 2D and in 3D using simulated data. PMID:10533924

  1. Automatic Contour Detection Using a "Fixed-Point Hachimura-Kuwahara Filter" for SPECT Attenuation Correction.

    PubMed

    Minato, K; Tang, Y N; Bennett, G W; Brill, A

    1987-01-01

    Attenuation correction for single-photon emission computed tomography (SPECT) usually assumes a uniform attenuation distribution within the body surface contour. Previous methods to estimate this contour have used thresholding of a reconstructed section image. This method is often very sensitive to the selection of a threshold value, especially for nonuniform activity distributions within the body. We have proposed the "fixed-point Hachimura-Kuwahara filter" to extract contour primitives from SPECT images. The Hachimura-Kuwahara filter, which preserves edges but smoothes nonedge regions, is applied repeatedly to identify the invariant set (the fixed-point image), which is unchanged by this nonlinear, two-dimensional filtering operation. This image usually becomes a piecewise constant array. In order to detect the contour, a tracing algorithm based on the minimum-distance connection criterion is applied to the extracted contour primitives. This procedure does not require the choice of a threshold value in determining the contour. SPECT data from a water-filled elliptical phantom containing three sources were obtained and scattered projections were reconstructed. The automatic edge detection procedure was applied to the scattered window reconstruction, resulting in a reasonable outline of the phantom. PMID:18230438
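
    The Kuwahara-type filtering step described above can be sketched as follows: one pass replaces each pixel by the mean of the least-variable of the four quadrant neighborhoods containing it, and the pass is repeated until the image stops changing (the fixed point). This is a plain illustrative implementation under those assumptions, not the authors' code, and the window radius is arbitrary.

      import numpy as np

      def kuwahara(img, r=2):
          """One pass of a Kuwahara-type filter (edge-preserving smoothing).

          img must be a 2D numpy array.  For each pixel, the means and variances of
          the four overlapping (r+1)x(r+1) quadrants touching the pixel are computed
          and the mean of the quadrant with the smallest variance is returned.
          Borders are handled by edge padding.
          """
          p = np.pad(img.astype(float), r, mode='edge')
          out = np.empty_like(img, dtype=float)
          for i in range(img.shape[0]):
              for j in range(img.shape[1]):
                  ci, cj = i + r, j + r
                  quads = [p[ci - r:ci + 1, cj - r:cj + 1],   # NW
                           p[ci - r:ci + 1, cj:cj + r + 1],   # NE
                           p[ci:ci + r + 1, cj - r:cj + 1],   # SW
                           p[ci:ci + r + 1, cj:cj + r + 1]]   # SE
                  out[i, j] = min(quads, key=np.var).mean()
          return out

      def fixed_point_kuwahara(img, r=2, max_iter=50):
          """Apply the filter repeatedly until the image no longer changes."""
          prev = img.astype(float)
          for _ in range(max_iter):
              cur = kuwahara(prev, r)
              if np.allclose(cur, prev):
                  break
              prev = cur
          return cur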

  2. An analytical algorithm for skew-slit imaging geometry with nonuniform attenuation correction

    SciTech Connect

    Huang Qiu; Zeng, Gengsheng L.

    2006-04-15

    The pinhole collimator is currently the collimator of choice in small animal single photon emission computed tomography (SPECT) imaging because it can provide high spatial resolution and reasonable sensitivity when the animal is placed very close to the pinhole. It is well known that if the collimator rotates around the object (e.g., a small animal) in a circular orbit to form a cone-beam imaging geometry with a planar trajectory, the acquired data are not sufficient for an exact artifact-free image reconstruction. In this paper a novel skew-slit collimator is mounted instead of the pinhole collimator in order to significantly reduce the image artifacts caused by the geometry. The skew-slit imaging geometry is a more generalized version of the pinhole imaging geometry. The multiple pinhole geometry can also be extended to the multiple-skew-slit geometry. An analytical algorithm for image reconstruction based on the tilted fan-beam inversion is developed with nonuniform attenuation compensation. Numerical simulation shows that the axial artifacts are evidently suppressed in the skew-slit images compared to the pinhole images and the attenuation correction is effective.

  3. Evaluation and automatic correction of metal-implant-induced artifacts in MR-based attenuation correction in whole-body PET/MR imaging

    NASA Astrophysics Data System (ADS)

    Schramm, G.; Maus, J.; Hofheinz, F.; Petr, J.; Lougovski, A.; Beuthien-Baumann, B.; Platzek, I.; van den Hoff, J.

    2014-06-01

    The aim of this paper is to describe a new automatic method for compensation of metal-implant-induced segmentation errors in MR-based attenuation maps (MRMaps) and to evaluate the quantitative influence of those artifacts on the reconstructed PET activity concentration. The developed method uses a PET-based delineation of the patient contour to compensate metal-implant-caused signal voids in the MR scan that is segmented for PET attenuation correction. PET emission data of 13 patients with metal implants examined in a Philips Ingenuity PET/MR were reconstructed with the vendor-provided method for attenuation correction (MRMaporig, PETorig) and additionally with a method for attenuation correction (MRMapcor, PETcor) developed by our group. MRMaps produced by both methods were visually inspected for segmentation errors. The segmentation errors in MRMaporig were classified into four classes (L1 and L2 artifacts inside the lung and B1 and B2 artifacts inside the remaining body depending on the assigned attenuation coefficients). The average relative SUV differences (ε_rel^av) between PETorig and PETcor of all regions showing wrong attenuation coefficients in MRMaporig were calculated. Additionally, relative SUVmean differences (ε_rel) of tracer accumulations in hot focal structures inside or in the vicinity of these regions were evaluated. MRMaporig showed erroneous attenuation coefficients inside the regions affected by metal artifacts and inside the patients' lung in all 13 cases. In MRMapcor, all regions with metal artifacts, except for the sternum, were filled with the soft-tissue attenuation coefficient and the lung was correctly segmented in all patients. MRMapcor only showed small residual segmentation errors in eight patients. ε_rel^av (mean ± standard deviation) were: (-56 ± 3)% for B1, (-43 ± 4)% for B2, (21 ± 18)% for L1, (120 ± 47)% for L2 regions. ε_rel (mean ± standard deviation) of hot focal structures were

  4. Determination of gamma-ray self-attenuation correction in environmental samples by combining transmission measurements and Monte Carlo simulations.

    PubMed

    Šoštarić, Marko; Babić, Dinko; Petrinec, Branko; Zgorelec, Željka

    2016-07-01

    We develop a simple and widely applicable method for determining the self-attenuation correction in gamma-ray spectrometry on environmental samples. The method relies on measurements of the transmission of photons through the matrices of a calibration standard and an analysed sample. Results of this experiment are used in subsequent Monte Carlo simulations in which we first determine the linear attenuation coefficients (μ) of the two matrices and then the self-attenuation correction for the analysed sample. The method is validated by reproducing, over a wide energy range, the literature data for the μ of water. We demonstrate the use of the method on a sample of sand, for which we find that the correction is considerable below ~400 keV, where many naturally occurring radionuclides emit gamma rays. At the lowest inspected energy (~60 keV), one measures an activity that is smaller than its true value by a factor of ~1.8. PMID:27157125

  5. Method for transforming CT images for attenuation correction in PET/CT imaging

    SciTech Connect

    Carney, Jonathan P.J.; Townsend, David W.; Rappoport, Vitaliy; Bendriem, Bernard

    2006-04-15

    A tube-voltage-dependent scheme is presented for transforming Hounsfield units (HU) measured by different computed tomography (CT) scanners at different x-ray tube voltages (kVp) to 511 keV linear attenuation values for attenuation correction in positron emission tomography (PET) data reconstruction. A Gammex 467 electron density CT phantom was imaged using a Siemens Sensation 16-slice CT, a Siemens Emotion 6-slice CT, a GE Lightspeed 16-slice CT, a Hitachi CXR 4-slice CT, and a Toshiba Aquilion 16-slice CT at tube voltages ranging from 80 to 140 kVp. All of these CT scanners are also available in combination with a PET scanner as a PET/CT tomograph. HU obtained for various reference tissue substitutes in the phantom were compared with the known linear attenuation values at 511 keV. The transformation, appropriate for lung, soft tissue, and bone, yields the function 9.6×10^-5·(HU+1000) below a threshold of ~50 HU and a·(HU+1000)+b above the threshold, where a and b are fixed parameters that depend on the kVp setting. The use of the kVp-dependent scaling procedure leads to a significant improvement in reconstructed PET activity levels in phantom measurements, resolving errors of almost 40% otherwise seen for the case of dense bone phantoms at 80 kVp. Results are also presented for patient studies involving multiple CT scans at different kVp settings, which should all lead to the same 511 keV linear attenuation values. A linear fit to values obtained from 140 kVp CT images using the kVp-dependent scaling, plotted as a function of the corresponding values obtained from 80 kVp CT images, yielded y = 1.003x - 0.001 with an R^2 value of 0.999, indicating that the same values are obtained to a high degree of accuracy.
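
    The piecewise transformation quoted in the abstract can be applied directly; the sketch below implements it with the kVp-dependent parameters a and b left as inputs, since their published values are not reproduced here, and the example values passed in are placeholders.

      import numpy as np

      def hu_to_mu511(hu, a, b, threshold=50.0):
          """Piecewise (bilinear) conversion of CT numbers to 511 keV attenuation.

          Below ~50 HU:  mu = 9.6e-5 * (HU + 1000)
          Above:         mu = a * (HU + 1000) + b
          a, b are the kVp-dependent parameters from the paper (values not
          reproduced here); mu is returned in 1/cm.
          """
          hu = np.asarray(hu, dtype=float)
          low = 9.6e-5 * (hu + 1000.0)
          high = a * (hu + 1000.0) + b
          return np.where(hu < threshold, low, high)

      # Example with placeholder a, b (illustrative only, not the published values)
      print(hu_to_mu511([-1000, 0, 60, 1000], a=5.1e-5, b=4.7e-2))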

  6. Accurate elevation and normal moveout corrections of seismic reflection data on rugged topography

    USGS Publications Warehouse

    Liu, J.; Xia, J.; Chen, C.; Zhang, G.

    2005-01-01

    The application of the seismic reflection method is often limited in areas of complex terrain. The problem is the incorrect correction of time shifts caused by topography. To apply normal moveout (NMO) correction to reflection data correctly, static corrections need to be applied in advance to compensate for the time distortions of topography and the time delays from near-surface weathered layers. For environmental and engineering investigations, the weathered layers themselves are the targets, so the static correction mainly serves to adjust time shifts due to an undulating surface. In practice, seismic reflected raypaths are assumed to be almost vertical through the near-surface layers because they have much lower velocities than the layers below. This assumption is acceptable in most cases since it results in little residual error for small elevation changes and small offsets in reflection events. Although static algorithms based on choosing a floating datum related to common midpoint gathers or residual surface-consistent functions are available and effective, errors caused by the assumption of vertical raypaths often generate pseudo-indications of structures. This paper presents a comparison of applying corrections based on vertical raypaths and biased (non-vertical) raypaths. It also provides an approach for combining elevation and NMO corrections. The advantages of the approach are demonstrated by synthetic and real-world examples of multi-coverage seismic reflection surveys on rough topography. © The Royal Society of New Zealand 2005.

  7. Towards improved hardware component attenuation correction in PET/MR hybrid imaging.

    PubMed

    Paulus, D H; Tellmann, L; Quick, H H

    2013-11-21

    In positron emission tomography/computed tomography (PET/CT) hybrid imaging attenuation correction (AC) of the patient tissue and patient table is performed by converting the CT-based Hounsfield units (HU) to linear attenuation coefficients (LAC) of PET. When applied to the new field of hardware component AC in PET/magnetic resonance (MR) hybrid imaging, this conversion method may result in local overcorrection of PET activity values. The aim of this study thus was to optimize the conversion parameters for CT-based AC of hardware components in PET/MR. Systematic evaluation and optimization of the HU to LAC conversion parameters has been performed for the hardware component attenuation map (µ-map) of a flexible radiofrequency (RF) coil used in PET/MR imaging. Furthermore, spatial misregistration of this RF coil to its µ-map was simulated by shifting the µ-map in different directions and the effect on PET quantification was evaluated. Measurements of a PET NEMA standard emission phantom were performed on an integrated hybrid PET/MR system. Various CT parameters were used to calculate different µ-maps for the flexible RF coil and to evaluate the impact on the PET activity concentration. A 511 keV transmission scan of the local RF coil was used as standard of reference to adapt the slope of the conversion from HUs to LACs at 511 keV. The average underestimation of the PET activity concentration due to the non-attenuation corrected RF coil in place was calculated to be 5.0% in the overall phantom. When considering attenuation only in the upper volume of the phantom, the average difference to the reference scan without RF coil is 11.0%. When the PET/CT conversion is applied, an average overestimation of 3.1% (without extended CT scale) and 4.2% (with extended CT scale) is observed in the top volume of the NEMA phantom. Using the adapted conversion resulting from this study, the deviation in the top volume of the phantom is reduced to -0.5% and shows the lowest

  8. Attenuation of near-surface diffracted energy in deep seismic data by DMO correction

    NASA Astrophysics Data System (ADS)

    Klinkby, Lone; Pedersen, Morten Wendell

    1998-03-01

    Seismic data are often contaminated by scattered waves from shallow diffractors such as offshore installations and structural irregularities. As the waves travel in the water layer and shallow sub-bottom they are damped considerably less than near-vertical reflected waves. Far from the diffractors the stacking velocity of the noise will be nearly identical to the stacking velocities of the primary reflections. This implies that CMP stacking of normal-moveout corrected data does not suppress the noise, and the necessary attenuation will typically be done separately by prestack 2D velocity-filtering or array simulation in the shot and receiver domains. However, by changing the moveout of the diffraction curves through DMO correction, the stacking velocity of the noise will be close to the true velocity of the diffracted waves, and suppression through CMP stacking is possible. As DMO is related to CDP smearing, which is a marginal problem for deep seismic data, it is normally not used as a part of the standard processing schemes. The noise suppression features of the DMO processor are demonstrated with a data example from the North Sea.

  9. Motion-compensated PET image reconstruction with respiratory-matched attenuation correction using two low-dose inhale and exhale CT images

    NASA Astrophysics Data System (ADS)

    Nam, Woo Hyun; Ahn, Il Jun; Kim, Kyeong Min; Kim, Byung Il; Ra, Jong Beom

    2013-10-01

    Positron emission tomography (PET) is widely used for diagnosis and follow up assessment of radiotherapy. However, thoracic and abdominal PET suffers from false staging and incorrect quantification of the radioactive uptake of lesion(s) due to respiratory motion. Furthermore, respiratory motion-induced mismatch between a computed tomography (CT) attenuation map and PET data often leads to significant artifacts in the reconstructed PET image. To solve these problems, we propose a unified framework for respiratory-matched attenuation correction and motion compensation of respiratory-gated PET. For the attenuation correction, the proposed algorithm manipulates a 4D CT image virtually generated from two low-dose inhale and exhale CT images, rather than a real 4D CT image which significantly increases the radiation burden on a patient. It also utilizes CT-driven motion fields for motion compensation. To realize the proposed algorithm, we propose an improved region-based approach for non-rigid registration between body CT images, and we suggest a selection scheme of 3D CT images that are respiratory-matched to each respiratory-gated sinogram. In this work, the proposed algorithm was evaluated qualitatively and quantitatively by using patient datasets including lung and/or liver lesion(s). Experimental results show that the method can provide much clearer organ boundaries and more accurate lesion information than existing algorithms by utilizing two low-dose CT images.

  10. Application of Chang's attenuation correction technique for single-photon emission computed tomography partial angle acquisition of Jaszczak phantom

    PubMed Central

    Saha, Krishnendu; Hoyt, Sean C.; Murray, Bryon M.

    2016-01-01

    The acquisition and processing of the Jaszczak phantom is a recommended test by the American College of Radiology for evaluation of gamma camera system performance. To produce the reconstructed phantom image for quality evaluation, attenuation correction is applied. The attenuation of counts originating from the center of the phantom is greater than that originating from the periphery of the phantom, causing an artifactual appearance of inhomogeneity in the reconstructed image and complicating phantom evaluation. Chang's mathematical formulation is a common method of attenuation correction applied on most gamma cameras that do not require an external transmission source such as computed tomography, radionuclide sources installed within the gantry of the camera, or a flood source. Tomographic acquisitions can be obtained in two different modes on a dual-detector gamma camera: one in which the two detectors are in a 180° configuration and acquire projection images over a full 360°, and the other in which the two detectors are positioned in a 90° configuration and acquire projections over only 180°. Though Chang's attenuation correction method has been used for 360° acquisition, its applicability to 180° acquisition remains a question, with one vendor's camera software producing artifacts in the images. This work investigates whether Chang's attenuation correction technique can be applied to both acquisition modes by the development of a Chang's formulation-based algorithm that is applicable to both modes. Assessment of attenuation correction performance by phantom uniformity analysis illustrates improved uniformity with the proposed algorithm (22.6%) compared to the camera software (57.6%). PMID:27051167
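
    For context, Chang's first-order correction multiplies each reconstructed pixel by the reciprocal of the attenuation factor averaged over all projection directions, assuming a uniform attenuation coefficient inside the body contour. The sketch below implements that textbook form for a 2D slice; it is a generic illustration, not the algorithm developed in the paper, and the marching step, angle count, and μ value are arbitrary choices.

      import numpy as np

      def chang_correction_map(body_mask, mu=0.152, n_angles=64, pixel_cm=0.4):
          """First-order Chang attenuation-correction factors for a 2D slice.

          body_mask is a boolean numpy array marking the body contour interior.
          For every pixel inside it, exp(-mu * l) is averaged over n_angles equally
          spaced directions, where l is the path length from the pixel to the
          contour along each direction; the correction factor is the reciprocal of
          that average.  Uniform attenuation (mu in 1/cm) is assumed.
          """
          ny, nx = body_mask.shape
          corr = np.ones((ny, nx))
          angles = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
          for iy in range(ny):
              for ix in range(nx):
                  if not body_mask[iy, ix]:
                      continue
                  factors = []
                  for th in angles:
                      dx, dy, steps = np.cos(th), np.sin(th), 0
                      x, y = float(ix), float(iy)
                      # march along the ray until the body contour is left
                      while 0 <= int(round(y)) < ny and 0 <= int(round(x)) < nx \
                              and body_mask[int(round(y)), int(round(x))]:
                          x += dx; y += dy; steps += 1
                      factors.append(np.exp(-mu * steps * pixel_cm))
                  corr[iy, ix] = 1.0 / np.mean(factors)
          return corr

      # Usage: corrected_slice = reconstructed_slice * chang_correction_map(mask)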

  11. Application of Chang's attenuation correction technique for single-photon emission computed tomography partial angle acquisition of Jaszczak phantom.

    PubMed

    Saha, Krishnendu; Hoyt, Sean C; Murray, Bryon M

    2016-01-01

    The acquisition and processing of the Jaszczak phantom is a recommended test by the American College of Radiology for evaluation of gamma camera system performance. To produce the reconstructed phantom image for quality evaluation, attenuation correction is applied. The attenuation of counts originating from the center of the phantom is greater than that originating from the periphery of the phantom, causing an artifactual appearance of inhomogeneity in the reconstructed image and complicating phantom evaluation. Chang's mathematical formulation is a common method of attenuation correction applied on most gamma cameras that do not require an external transmission source such as computed tomography, radionuclide sources installed within the gantry of the camera, or a flood source. Tomographic acquisitions can be obtained in two different modes on a dual-detector gamma camera: one in which the two detectors are in a 180° configuration and acquire projection images over a full 360°, and the other in which the two detectors are positioned in a 90° configuration and acquire projections over only 180°. Though Chang's attenuation correction method has been used for 360° acquisition, its applicability to 180° acquisition remains a question, with one vendor's camera software producing artifacts in the images. This work investigates whether Chang's attenuation correction technique can be applied to both acquisition modes by the development of a Chang's formulation-based algorithm that is applicable to both modes. Assessment of attenuation correction performance by phantom uniformity analysis illustrates improved uniformity with the proposed algorithm (22.6%) compared to the camera software (57.6%). PMID:27051167

  12. Self-attenuation correction factors for bioindicators measured by γ spectrometry for energies <100 keV

    NASA Astrophysics Data System (ADS)

    Manduci, L.; Tenailleau, L.; Trolet, J. L.; De Vismes, A.; Lopez, G.; Piccione, M.

    2010-01-01

    The mass attenuation coefficients for a number of marine and terrestrial bioindicators were measured using γ spectrometry for energies between 22 and 80 keV. These values were then used to find the correction factor k for the apparent radioactivity. The experimental results were compared with a Monte Carlo simulation performed using PENELOPE in order to evaluate the reliability of the simplified calculation and to determine the correction factors.
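
    A commonly used closed form for this kind of correction factor, for a homogeneous slab counted in a fixed geometry, is k = x/(1 - e^-x) with x = (μ/ρ)·ρ·t; the sketch below evaluates it. This is a generic expression offered for orientation only; the paper derives its factors from measured mass attenuation coefficients and PENELOPE simulations, which are not reproduced here, and the example inputs are assumptions.

      import math

      def slab_self_attenuation_factor(mu_mass_cm2_g, density_g_cm3, thickness_cm):
          """Correction factor k for the apparent activity of a homogeneous slab.

          Uses the common thin-slab expression k = x / (1 - exp(-x)) with
          x = mu_mass * rho * t; the apparent activity is multiplied by k.
          This is one standard form only; the paper's own factors may differ.
          """
          x = mu_mass_cm2_g * density_g_cm3 * thickness_cm
          if x < 1e-9:
              return 1.0
          return x / (1.0 - math.exp(-x))

      # Example (assumed values): mu/rho = 0.35 cm^2/g, rho = 0.4 g/cm^3, t = 2 cm
      print(slab_self_attenuation_factor(0.35, 0.4, 2.0))   # ~1.15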

  13. Improved UTE-based attenuation correction for cranial PET-MR using dynamic magnetic field monitoring

    SciTech Connect

    Aitken, A. P.; Giese, D.; Tsoumpas, C.; Schleyer, P.; Kozerke, S.; Prieto, C.; Schaeffter, T.

    2014-01-15

    Purpose: Ultrashort echo time (UTE) MRI has been proposed as a way to produce segmented attenuation maps for PET, as it provides contrast between bone, air, and soft tissue. However, UTE sequences require samples to be acquired during rapidly changing gradient fields, which makes the resulting images prone to eddy current artifacts. In this work it is demonstrated that this can lead to misclassification of tissues in segmented attenuation maps (AC maps) and that these effects can be corrected for by measuring the true k-space trajectories using a magnetic field camera. Methods: The k-space trajectories during a dual echo UTE sequence were measured using a dynamic magnetic field camera. UTE images were reconstructed using nominal trajectories and again using the measured trajectories. A numerical phantom was used to demonstrate the effect of reconstructing with incorrect trajectories. Images of an ovine leg phantom were reconstructed and segmented and the resulting attenuation maps were compared to a segmented map derived from a CT scan of the same phantom, using the Dice similarity measure. The feasibility of the proposed method was demonstrated in in vivo cranial imaging in five healthy volunteers. Simulated PET data were generated for one volunteer to show the impact of misclassifications on the PET reconstruction. Results: Images of the numerical phantom exhibited blurring and edge artifacts on the bone–tissue and air–tissue interfaces when nominal k-space trajectories were used, leading to misclassification of soft tissue as bone and misclassification of bone as air. Images of the tissue phantom and the in vivo cranial images exhibited the same artifacts. The artifacts were greatly reduced when the measured trajectories were used. For the tissue phantom, the Dice coefficient for bone in MR relative to CT was 0.616 using the nominal trajectories and 0.814 using the measured trajectories. The Dice coefficients for soft tissue were 0.933 and 0.934 for the
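
    The Dice similarity measure used above for comparing MR- and CT-derived segmentations is straightforward to compute; a minimal generic sketch follows (not the authors' code).

      import numpy as np

      def dice(mask_a, mask_b):
          """Dice similarity coefficient between two binary segmentations."""
          a = np.asarray(mask_a, dtype=bool)
          b = np.asarray(mask_b, dtype=bool)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

      # Example on toy 3x3 masks
      print(dice([[1, 1, 0]] * 3, [[1, 0, 0]] * 3))   # 0.667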

  14. In-situ Attenuation Corrections for Radiation Force Measurements of High Frequency Ultrasound With a Conical Target.

    PubMed

    Fick, Steven E; Ruggles, Dorea

    2006-01-01

    Radiation force balance (RFB) measurements of time-averaged, spatially-integrated ultrasound power transmitted into a reflectionless water load are based on measurements of the power received by the RFB target. When conical targets are used to intercept the output of collimated, circularly symmetric ultrasound sources operating at frequencies above a few megahertz, the correction for in-situ attenuation is significant, and differs significantly from predictions for idealized circumstances. Empirical attenuation correction factors for a 45° (half-angle) absorptive conical RFB target have been determined for 24 frequencies covering the 5 MHz to 30 MHz range. They agree well with previously unpublished attenuation calibration factors determined in 1994 for a similar target. PMID:27274946

  15. Impact of CT attenuation correction method on quantitative respiratory-correlated (4D) PET/CT imaging

    SciTech Connect

    Nyflot, Matthew J.; Lee, Tzu-Cheng; Alessio, Adam M.; Kinahan, Paul E.; Wollenweber, Scott D.; Stearns, Charles W.; Bowen, Stephen R.

    2015-01-15

    Purpose: Respiratory-correlated (4D) positron emission tomography/computed tomography (PET/CT) is used to mitigate errors from respiratory motion; however, the optimal CT attenuation correction (CTAC) method for 4D PET/CT is unknown. The authors performed a phantom study to evaluate the quantitative performance of CTAC methods for 4D PET/CT in the ground truth setting. Methods: A programmable respiratory motion phantom with a custom movable insert designed to emulate a lung lesion and lung tissue was used for this study. The insert was driven by one of five waveforms: two sinusoidal waveforms or three patient-specific respiratory waveforms. 3D PET and 4D PET images of the phantom under motion were acquired and reconstructed with six CTAC methods: helical breath-hold (3DHEL), helical free-breathing (3DMOT), 4D phase-averaged (4DAVG), 4D maximum intensity projection (4DMIP), 4D phase-matched (4DMATCH), and 4D end-exhale (4DEXH) CTAC. Recovery of SUVmax, SUVmean, SUVpeak, and segmented tumor volume was evaluated as RCmax, RCmean, RCpeak, and RCvol, representing percent difference relative to the static ground truth case. Paired Wilcoxon tests and Kruskal–Wallis ANOVA were used to test for significant differences. Results: For 4D PET imaging, the maximum intensity projection CTAC produced significantly more accurate recovery coefficients than all other CTAC methods (p < 0.0001 over all metrics). Over all motion waveforms, ratios of 4DMIP CTAC recovery were 0.2 ± 5.4, −1.8 ± 6.5, −3.2 ± 5.0, and 3.0 ± 5.9 for RCmax, RCpeak, RCmean, and RCvol. In comparison, recovery coefficients for phase-matched CTAC were −8.4 ± 5.3, −10.5 ± 6.2, −7.6 ± 5.0, and −13.0 ± 7.7 for RCmax, RCpeak, RCmean, and RCvol. When testing differences between phases over all CTAC methods and waveforms, end-exhale phases were significantly more accurate (p = 0.005). However, these differences were driven by
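
    The recovery metrics quoted above are simple percent differences relative to the static ground truth; for clarity, a minimal sketch:

      def recovery_coefficient(measured, static_truth):
          """Percent difference of a motion-affected metric relative to the static
          ground-truth value, as used for RCmax, RCmean, RCpeak and RCvol above."""
          return 100.0 * (measured - static_truth) / static_truth

      # Example: SUVmax of 7.4 under motion vs 8.0 for the static acquisition
      print(recovery_coefficient(7.4, 8.0))   # -7.5 (%)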

  16. Size-extensivity-corrected multireference configuration interaction schemes to accurately predict bond dissociation energies of oxygenated hydrocarbons

    SciTech Connect

    Oyeyemi, Victor B.; Krisiloff, David B.; Keith, John A.; Libisch, Florian; Pavone, Michele; Carter, Emily A.

    2014-01-28

    Oxygenated hydrocarbons play important roles in combustion science as renewable fuels and additives, but many details about their combustion chemistry remain poorly understood. Although many methods exist for computing accurate electronic energies of molecules at equilibrium geometries, a consistent description of entire combustion reaction potential energy surfaces (PESs) requires multireference correlated wavefunction theories. Here we use bond dissociation energies (BDEs) as a foundational metric to benchmark methods based on multireference configuration interaction (MRCI) for several classes of oxygenated compounds (alcohols, aldehydes, carboxylic acids, and methyl esters). We compare results from multireference singles and doubles configuration interaction to those utilizing a posteriori and a priori size-extensivity corrections, benchmarked against experiment and coupled cluster theory. We demonstrate that size-extensivity corrections are necessary for chemically accurate BDE predictions even in relatively small molecules and furnish examples of unphysical BDE predictions resulting from using too-small orbital active spaces. We also outline the specific challenges in using MRCI methods for carbonyl-containing compounds. The resulting complete basis set extrapolated, size-extensivity-corrected MRCI scheme produces BDEs generally accurate to within 1 kcal/mol, laying the foundation for this scheme's use on larger molecules and for more complex regions of combustion PESs.

  17. Size-extensivity-corrected multireference configuration interaction schemes to accurately predict bond dissociation energies of oxygenated hydrocarbons.

    PubMed

    Oyeyemi, Victor B; Krisiloff, David B; Keith, John A; Libisch, Florian; Pavone, Michele; Carter, Emily A

    2014-01-28

    Oxygenated hydrocarbons play important roles in combustion science as renewable fuels and additives, but many details about their combustion chemistry remain poorly understood. Although many methods exist for computing accurate electronic energies of molecules at equilibrium geometries, a consistent description of entire combustion reaction potential energy surfaces (PESs) requires multireference correlated wavefunction theories. Here we use bond dissociation energies (BDEs) as a foundational metric to benchmark methods based on multireference configuration interaction (MRCI) for several classes of oxygenated compounds (alcohols, aldehydes, carboxylic acids, and methyl esters). We compare results from multireference singles and doubles configuration interaction to those utilizing a posteriori and a priori size-extensivity corrections, benchmarked against experiment and coupled cluster theory. We demonstrate that size-extensivity corrections are necessary for chemically accurate BDE predictions even in relatively small molecules and furnish examples of unphysical BDE predictions resulting from using too-small orbital active spaces. We also outline the specific challenges in using MRCI methods for carbonyl-containing compounds. The resulting complete basis set extrapolated, size-extensivity-corrected MRCI scheme produces BDEs generally accurate to within 1 kcal/mol, laying the foundation for this scheme's use on larger molecules and for more complex regions of combustion PESs. PMID:25669533

  18. Accurate Treatment of Large Supramolecular Complexes by Double-Hybrid Density Functionals Coupled with Nonlocal van der Waals Corrections.

    PubMed

    Calbo, Joaquín; Ortí, Enrique; Sancho-García, Juan C; Aragó, Juan

    2015-03-10

    In this work, we present a thorough assessment of the performance of some representative double-hybrid density functionals (revPBE0-DH-NL and B2PLYP-NL) as well as their parent hybrid and GGA counterparts, in combination with the most modern version of the nonlocal (NL) van der Waals correction, to describe very large weakly interacting molecular systems dominated by noncovalent interactions. Prior to the assessment, an accurate and homogeneous set of reference interaction energies was computed for the supramolecular complexes constituting the L7 and S12L data sets by using the novel, precise, and efficient DLPNO-CCSD(T) method at the complete basis set limit (CBS). The correction of the basis set superposition error and the inclusion of the deformation energies (for the S12L set) have been crucial for obtaining precise DLPNO-CCSD(T)/CBS interaction energies. Among the density functionals evaluated, the double-hybrid revPBE0-DH-NL and B2PLYP-NL with the three-body dispersion correction provide remarkably accurate association energies, very close to chemical accuracy. Overall, the NL van der Waals approach combined with proper density functionals can be seen as an accurate and affordable computational tool for the modeling of large weakly bonded supramolecular systems. PMID:26579747

  19. Size-extensivity-corrected multireference configuration interaction schemes to accurately predict bond dissociation energies of oxygenated hydrocarbons

    NASA Astrophysics Data System (ADS)

    Oyeyemi, Victor B.; Krisiloff, David B.; Keith, John A.; Libisch, Florian; Pavone, Michele; Carter, Emily A.

    2014-01-01

    Oxygenated hydrocarbons play important roles in combustion science as renewable fuels and additives, but many details about their combustion chemistry remain poorly understood. Although many methods exist for computing accurate electronic energies of molecules at equilibrium geometries, a consistent description of entire combustion reaction potential energy surfaces (PESs) requires multireference correlated wavefunction theories. Here we use bond dissociation energies (BDEs) as a foundational metric to benchmark methods based on multireference configuration interaction (MRCI) for several classes of oxygenated compounds (alcohols, aldehydes, carboxylic acids, and methyl esters). We compare results from multireference singles and doubles configuration interaction to those utilizing a posteriori and a priori size-extensivity corrections, benchmarked against experiment and coupled cluster theory. We demonstrate that size-extensivity corrections are necessary for chemically accurate BDE predictions even in relatively small molecules and furnish examples of unphysical BDE predictions resulting from using too-small orbital active spaces. We also outline the specific challenges in using MRCI methods for carbonyl-containing compounds. The resulting complete basis set extrapolated, size-extensivity-corrected MRCI scheme produces BDEs generally accurate to within 1 kcal/mol, laying the foundation for this scheme's use on larger molecules and for more complex regions of combustion PESs.

  20. Ultra-low dose CT attenuation correction for PET/CT

    NASA Astrophysics Data System (ADS)

    Xia, Ting; Alessio, Adam M.; De Man, Bruno; Manjeshwar, Ravindra; Asma, Evren; Kinahan, Paul E.

    2012-01-01

    A challenge for positron emission tomography/computed tomography (PET/CT) quantitation is patient respiratory motion, which can cause an underestimation of lesion activity uptake and an overestimation of lesion volume. Several respiratory motion correction methods benefit from longer duration CT scans that are phase matched with PET scans. However, even with the currently available, lowest dose CT techniques, extended duration cine CT scans impart a substantially high radiation dose. This study evaluates methods designed to reduce CT radiation dose in PET/CT scanning. We investigated selected combinations of dose-reduced acquisition and noise suppression methods that take advantage of the reduced requirements on CT for PET attenuation correction (AC). These include reducing the CT tube current, optimizing the CT tube voltage, adding filtration, and CT sinogram smoothing and clipping. We explored the impact of these methods on PET quantitation via simulations on different digital phantoms. The CT tube current can be reduced much further for AC than in low-dose CT protocols. Spectra that are higher energy and narrower are generally more dose efficient with respect to PET image quality. Sinogram smoothing could be used to compensate for the increased noise and artifacts in dose-reduced CT images, which allows a further reduction of CT dose with no penalty for PET image quantitation. We showed that ultra-low-dose CT for PET/CT is feasible when the CT is not used for diagnostic or anatomical localization purposes. The significant dose reduction strategies proposed here could enable respiratory motion compensation methods that require extended duration CT scans and reduce radiation exposure in general for all PET/CT imaging.
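
    As a toy illustration of the sinogram smoothing and clipping idea mentioned above (not the study's actual processing chain), one might condition a noisy attenuation-correction sinogram as follows; the kernel width and clipping floor are arbitrary assumptions.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def smooth_and_clip_sinogram(sinogram, sigma_bins=2.0, clip_min=0.0):
          """Illustrative sinogram conditioning for attenuation-correction-only CT.

          Gaussian smoothing along the view and detector axes suppresses the noise
          of an ultra-low-dose acquisition, and negative line integrals are clipped,
          trading away spatial resolution that PET AC does not need.
          """
          smoothed = gaussian_filter(np.asarray(sinogram, dtype=float),
                                     sigma=sigma_bins)
          return np.clip(smoothed, clip_min, None)

      # Example: 360 views x 512 detector bins of noisy line integrals
      noisy = np.random.normal(1.0, 0.3, size=(360, 512))
      conditioned = smooth_and_clip_sinogram(noisy)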

  1. Automatic detection of cardiovascular risk in CT attenuation correction maps in Rb-82 PET/CTs

    NASA Astrophysics Data System (ADS)

    Išgum, Ivana; de Vos, Bob D.; Wolterink, Jelmer M.; Dey, Damini; Berman, Daniel S.; Rubeaux, Mathieu; Leiner, Tim; Slomka, Piotr J.

    2016-03-01

    CT attenuation correction (CTAC) images acquired with PET/CT visualize coronary artery calcium (CAC) and enable CAC quantification. CAC scores acquired with CTAC have been suggested as a marker of cardiovascular disease (CVD). In this work, an algorithm previously developed for automatic CAC scoring in dedicated cardiac CT was applied to automatic CAC detection in CTAC. The study included 134 consecutive patients undergoing 82-Rb PET/CT. Low-dose rest CTAC scans were acquired (100 kV, 11 mAs, 1.4 mm × 1.4 mm × 3 mm voxel size). An experienced observer defined the reference standard with the clinically used intensity level threshold for calcium identification (130 HU). Five scans were removed from analysis due to artifacts. The algorithm extracted potential CAC by intensity-based thresholding and 3D connected component labeling. Each candidate was described by location, size, shape and intensity features. An ensemble of extremely randomized decision trees was used to identify CAC. The data set was randomly divided into training and test sets. Automatically identified CAC was quantified using volume and Agatston scores. In 33 test scans, the system detected on average 469 mm³ of 730 mm³ (64%) of CAC, with 36 mm³ of false positive volume per scan. The intraclass correlation coefficient for volume scores was 0.84. Each patient was assigned to one of four CVD risk categories based on the Agatston score (0-10, 11-100, 101-400, >400). The correct CVD category was assigned to 85% of patients (Cohen's linearly weighted κ = 0.82). Automatic detection of CVD risk based on CAC scoring in rest CTAC images is feasible. This may enable large scale studies evaluating clinical value of CAC scoring in CTAC data.
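
    The final risk stratification reduces to binning the Agatston score into the four categories listed above; a minimal sketch (the handling of the category boundaries is an assumption):

      def cvd_risk_category(agatston_score):
          """Assign one of the four CVD risk categories used above
          (0-10, 11-100, 101-400, >400)."""
          if agatston_score <= 10:
              return "0-10"
          if agatston_score <= 100:
              return "11-100"
          if agatston_score <= 400:
              return "101-400"
          return ">400"

      print(cvd_risk_category(230))   # "101-400"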

  2. AN ACCURATE NEW METHOD OF CALCULATING ABSOLUTE MAGNITUDES AND K-CORRECTIONS APPLIED TO THE SLOAN FILTER SET

    SciTech Connect

    Beare, Richard; Brown, Michael J. I.; Pimbblet, Kevin

    2014-12-20

    We describe an accurate new method for determining absolute magnitudes, and hence also K-corrections, that is simpler than most previous methods, being based on a quadratic function of just one suitably chosen observed color. The method relies on the extensive and accurate new set of 129 empirical galaxy template spectral energy distributions from Brown et al. A key advantage of our method is that we can reliably estimate random errors in computed absolute magnitudes due to galaxy diversity, photometric error and redshift error. We derive K-corrections for the five Sloan Digital Sky Survey filters and provide parameter tables for use by the astronomical community. Using the New York Value-Added Galaxy Catalog, we compare our K-corrections with those from kcorrect. Our K-corrections produce absolute magnitudes that are generally in good agreement with kcorrect. Absolute griz magnitudes differ by less than 0.02 mag and those in the u band by ∼0.04 mag. The evolution of rest-frame colors as a function of redshift is better behaved using our method, with relatively few galaxies being assigned anomalously red colors and a tight red sequence being observed across the whole 0.0 < z < 0.5 redshift range.
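
    The core of the method, expressing K as a quadratic in a single observed color with redshift-dependent coefficients and subtracting it (together with the distance modulus) from the apparent magnitude, can be sketched as below. The coefficient function is a placeholder; the published parameter tables are not reproduced here.

      def k_correction(color, z, coeffs):
          """K(z, color) modeled as a quadratic in one observed color, with
          coefficients that are smooth functions of redshift.  `coeffs` is a
          callable returning (a0, a1, a2) at redshift z."""
          a0, a1, a2 = coeffs(z)
          return a0 + a1 * color + a2 * color**2

      def absolute_magnitude(apparent_mag, distance_modulus, color, z, coeffs):
          """M = m - DM(z) - K(z, color)."""
          return apparent_mag - distance_modulus - k_correction(color, z, coeffs)

      # Placeholder coefficient function (illustrative values only)
      toy_coeffs = lambda z: (0.0, 1.2 * z, -0.4 * z)
      print(absolute_magnitude(18.3, 40.1, 1.1, 0.2, toy_coeffs))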

  3. An experimental correction proposed for an accurate determination of mass diffusivity of wood in steady regime

    NASA Astrophysics Data System (ADS)

    Zohoun, Sylvain; Agoua, Eusèbe; Degan, Gérard; Perre, Patrick

    2002-08-01

    This paper presents an experimental study of mass diffusion in the hygroscopic region of four temperate wood species and three tropical ones. In order to simplify the interpretation of the phenomena, a dimensionless parameter called the reduced diffusivity is defined, which varies from 0 to 1. The method first determines this parameter from measurements of the mass flux, taking into account the operating conditions of a standard device (tightness, dimensional variations, easy installation of the wood samples, and good stability of temperature and humidity). The reasons why this parameter has to be corrected are then presented, and an abacus for correcting the mass diffusivity of wood in the steady regime is plotted. This work represents a current advance in the characterisation of forest species.

  4. Accurate estimation of sea surface temperatures using dissolution-corrected calibrations for Mg/Ca paleothermometry

    NASA Astrophysics Data System (ADS)

    Rosenthal, Yair; Lohmann, George P.

    2002-09-01

    Paired δ18O and Mg/Ca measurements on the same foraminiferal shells offer the ability to independently estimate sea surface temperature (SST) changes and assess their temporal relationship to the growth and decay of continental ice sheets. The accuracy of this method is confounded, however, by the absence of a quantitative method to correct Mg/Ca records for alteration by dissolution. Here we describe dissolution-corrected calibrations for Mg/Ca paleothermometry in which the pre-exponent constant is a function of size-normalized shell weight: (1) for G. ruber (212-300 μm), (Mg/Ca)_ruber = (0.025 wt + 0.11) e^(0.095T), and (2) for G. sacculifer (355-425 μm), (Mg/Ca)_sacc = (0.0032 wt + 0.181) e^(0.095T). The new calibrations improve the accuracy of SST estimates and are globally applicable. With this correction, eastern equatorial Atlantic SST during the Last Glacial Maximum is estimated to be 2.9° ± 0.4°C colder than today.
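
    The calibrations quoted above can be inverted directly to estimate SST from a measured Mg/Ca ratio and a size-normalized shell weight, as sketched below. The constants are taken from the abstract; the units of the shell weight (here assumed to be micrograms) and the range of validity follow the original paper, which should be consulted before any real use.

```python
# Inverting the dissolution-corrected calibrations quoted above:
# Mg/Ca = (slope*wt + intercept) * exp(0.095*T)  =>  T = ln(Mg/Ca / pre_exponent) / 0.095
import math

CALIBRATIONS = {
    "G. ruber (212-300 um)":      (0.025, 0.11),    # (slope, intercept) of the pre-exponent
    "G. sacculifer (355-425 um)": (0.0032, 0.181),
}

def sst_from_mgca(mg_ca, shell_weight, species="G. ruber (212-300 um)"):
    """Estimate SST (deg C) from Mg/Ca (mmol/mol) and size-normalized shell weight."""
    slope, intercept = CALIBRATIONS[species]
    pre_exponent = slope * shell_weight + intercept
    return math.log(mg_ca / pre_exponent) / 0.095

# Hypothetical measurement: Mg/Ca = 4.2 mmol/mol, shell weight = 14 (assumed ug).
print(round(sst_from_mgca(mg_ca=4.2, shell_weight=14.0), 1))
```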

  5. Diffraction, attenuation, and source corrections for nonlinear Rayleigh wave ultrasonic measurements.

    PubMed

    Torello, David; Thiele, Sebastian; Matlack, Kathryn H; Kim, Jin-Yeon; Qu, Jianmin; Jacobs, Laurence J

    2015-02-01

    This research considers the effects of diffraction, attenuation, and the nonlinearity of generating sources on measurements of nonlinear ultrasonic Rayleigh wave propagation. A new theoretical framework for correcting measurements made with air-coupled and contact piezoelectric receivers for the aforementioned effects is provided, based on analytical models and experimental considerations. A method for extracting the nonlinearity parameter β11 is proposed, based on a nonlinear least-squares curve-fitting algorithm tailored for Rayleigh wave measurements. Quantitative experiments are conducted to confirm the predictions for the nonlinearity of the piezoelectric source and to demonstrate the effectiveness of the curve-fitting procedure. These experiments are conducted on aluminum 2024 and 7075 specimens, and a β11(7075)/β11(2024) ratio of 1.363 agrees well with previous literature and earlier work. The proposed approach is also applied to a set of 2205 duplex stainless steel specimens that underwent various degrees of heat treatment over 24 h, and the results improve upon conclusions drawn from previous analysis. PMID:25287976
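
    As a heavily simplified illustration of the curve-fitting step mentioned above, the sketch below fits a relative nonlinearity parameter from second-harmonic growth with propagation distance using an idealized lossless plane-wave model. The diffraction, attenuation and source corrections that are the actual contribution of the paper are omitted; the data values are hypothetical.

```python
# Simplified sketch: least-squares fit of second-harmonic growth vs distance.
# The plane-wave model A2/A1^2 ~ beta_rel * x omits the diffraction,
# attenuation and source corrections developed in the paper.
import numpy as np
from scipy.optimize import curve_fit

def growth_model(x, beta_rel):
    """A2/A1^2 = beta_rel * x for an idealized, lossless plane wave."""
    return beta_rel * x

# Hypothetical measurements: propagation distances (m) and A2/A1^2 ratios.
x = np.array([0.02, 0.04, 0.06, 0.08, 0.10])
ratio = np.array([0.11, 0.21, 0.33, 0.42, 0.55])

(beta_rel,), cov = curve_fit(growth_model, x, ratio)
print(f"fitted relative beta: {beta_rel:.2f} +/- {np.sqrt(cov[0, 0]):.2f}")
```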

  6. SPECT attenuation correction: an essential tool to realize nuclear cardiology's manifest destiny.

    PubMed

    Garcia, Ernest V

    2007-01-01

    Single photon emission computed tomography (SPECT) myocardial perfusion imaging has attained widespread clinical acceptance as a standard of care for cardiac patients. Yet, physical phenomena degrade the accuracy of how our cardiac images are visually interpreted or quantitatively analyzed. This degradation results in cardiac images in which brightness or counts are not necessarily linear with tracer uptake or myocardial perfusion. Attenuation correction (AC) is a methodology that has evolved over the last 30 years to compensate for this degradation. Numerous AC clinical trials over the last 10 years have shown increased diagnostic accuracy over non-AC SPECT for detecting and localizing coronary artery disease, particularly for significantly increasing specificity and normalcy rate. This overwhelming evidence has prompted our professional societies to issue a joint position statement in 2004 recommending the use of AC to maximize SPECT diagnostic accuracy and clinical usefulness. Phantom and animal studies have convincingly shown how SPECT AC recovers the true regional myocardial activity concentration, while non-AC SPECT does not. Thus, AC is also an essential tool for extracting quantitative parameters from all types of cardiac radionuclide distributions, and plays an important role in establishing cardiac SPECT for flow, metabolic, innervation, and molecular imaging, our manifest destiny. PMID:17276302

  7. Attenuation correction of PET images with interpolated average CT for thoracic tumors

    NASA Astrophysics Data System (ADS)

    Huang, Tzung-Chi; Mok, Greta S. P.; Wang, Shyh-Jen; Wu, Tung-Hsin; Zhang, Geoffrey

    2011-04-01

    To reduce positron emission tomography (PET) and computed tomography (CT) misalignments and standardized uptake value (SUV) errors, cine average CT (CACT) has been proposed to replace helical CT (HCT) for attenuation correction (AC). A new method using interpolated average CT (IACT) for AC is introduced to further reduce radiation dose with similar image quality. Six patients were recruited in this study. The end-inspiration and -expiration phases from cine CT were used as the two original phases. Deformable image registration was used to generate the interpolated phases. The IACT was calculated by averaging the original and interpolated phases. The PET images were then reconstructed with AC using CACT, HCT and IACT, respectively. Their misalignments were compared by visual assessment, mutual information, correlation coefficient and SUV. The doses from different CT maps were analyzed. The misalignments were reduced for CACT and IACT as compared to HCT. The maximum SUV difference between the use of IACT and CACT was ~3%, and it was ~20% between the use of HCT and CACT. The estimated dose for IACT was 0.38 mSv. The radiation dose using IACT could be reduced by 85% compared to the use of CACT. IACT is a good low-dose approximation of CACT for AC.

  8. Correcting errors in the optical path difference in Fourier spectroscopy: a new accurate method.

    PubMed

    Kauppinen, J; Kärkköinen, T; Kyrö, E

    1978-05-15

    A new computational method for calculating and correcting the errors of the optical path difference in Fourier spectrometers is presented. This method requires only a one-sided interferogram and a single well-separated line in the spectrum. The method also cancels out the linear phase error. The practical theory of the method is included, and an example of the progress of the method is illustrated by simulations. The method is further verified by several simulations in order to estimate its usefulness and accuracy. An example of the use of this method in practice is also given. PMID:20198027

  9. Calibration of an HPGe detector and self-attenuation correction for 210Pb: Verification by alpha spectrometry of 210Po in environmental samples

    NASA Astrophysics Data System (ADS)

    Saïdou; Bochud, François; Laedermann, Jean-Pascal; Buchillier, Thierry; Njock Moïse, Kwato; Froidevaux, Pascal

    2007-08-01

    In this work the calibration of an HPGe detector for 210Pb measurement is realised with a liquid standard source, and the determination of this radionuclide in solid environmental samples by gamma spectrometry takes into account a correction factor for self-attenuation of its 46.5 keV line. Experimental, theoretical and Monte Carlo investigations are undertaken to evaluate self-attenuation for a cylindrical sample geometry. To validate this correction factor, a 210Po (at equilibrium with 210Pb) alpha spectrometry procedure using microwave acid digestion under pressure is developed and proposed. The different self-attenuation correction methods are consistent with one another, and the corrected 210Pb activities are in good agreement with the 210Po results. Finally, self-attenuation corrections are proposed for environmental solid samples whose density ranges between 0.8 and 1.4 g/cm³ and whose mass attenuation coefficient is around 0.4 cm²/g.
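
    The sketch below illustrates the idea of a self-attenuation correction at 46.5 keV with the textbook slab approximation, in which the measured activity is multiplied by μt / (1 − e^(−μt)). This is only a first-order stand-in for the experimental, analytical and Monte Carlo treatments of the full cylindrical geometry used in the paper; all numerical values are illustrative.

```python
# Illustrative slab-geometry self-attenuation correction factor at 46.5 keV.
# mu = (mass attenuation coefficient) * density, t = sample thickness along
# the detector axis. Not the paper's cylindrical Monte Carlo treatment.
import math

def self_attenuation_factor(mass_att_cm2_per_g, density_g_per_cm3, thickness_cm):
    mu_linear = mass_att_cm2_per_g * density_g_per_cm3   # cm^-1
    x = mu_linear * thickness_cm
    return x / (1.0 - math.exp(-x)) if x > 0 else 1.0

# Values in the range quoted above (0.4 cm^2/g, density ~1.0 g/cm^3, 4 cm sample).
factor = self_attenuation_factor(0.4, 1.0, 4.0)
corrected_activity = 120.0 * factor   # Bq; 120 Bq is a hypothetical measured activity
```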

  10. Asymmetrical-fan transmission CT on SPECT to derive μ-maps for attenuation correction

    SciTech Connect

    Loncaric, S.; Huang, G.; Ni, B.

    1994-05-01

    For proper attenuation correction of SPECT images, an appropriate μ-map properly registered with each imaging slice is needed. Among the many techniques for μ-map derivation, simultaneous or sequential fan-beam transmission CT (TCT) on the same SPECT system, with the same acquisition settings, has the advantages of being practical while ensuring registration. However, the problems are: (1) with a limited FOV for thoracic imaging, projections would be truncated with a typical-size detector; (2) many SPECT systems lack room for placing the transmission source. We have developed a new sampling scheme to solve these problems. This scheme uses an asymmetrical-fan geometry (AFG), which samples only half of the field; the other half is sampled after a 180° detector rotation. This technique completes the minimum sampling requirement in a 360° detector rotation and yields a relatively large FOV defined by the outside edge of the sampling fan. We have confirmed the feasibility of AFG sampling on a 3-head SPECT system to provide a large FOV for TCT of most patients. The TCT sampling scheme is achieved with an asymmetrical-fan collimator. We have developed the required new reconstruction algorithms and derived excellent reconstructed images of phantoms and human subjects. We propose to implement this technique as a short, fast transmission scan on a multi-head SPECT system after emission imaging, because the detectors have to be pulled out to make room for the transmission source. The imaging field can even exceed the full field size of the detector. MS would be possible when an obtuse sampling fan is formed by shifting the source outward further, provided the central FOV is properly covered with a supplementary sampling scheme, e.g., using another TCT with a fan-beam collimator on another one of the detectors.

  11. Multi-centre analysis of incidental findings on low-resolution CT attenuation correction images

    PubMed Central

    Lawson, R; Kane, T; Elias, M; Howes, A; Birchall, J; Hogg, P

    2014-01-01

    Objective: To review new incidental findings detected on low-resolution CT attenuation correction (CTAC) images acquired during single-photon emission CT (SPECT-CT) myocardial perfusion imaging (MPI) and to determine whether the CTAC images had diagnostic value and warrant reporting. Methods: A multicentre study was performed in four UK nuclear medicine departments. CTAC images acquired as part of MPI performed using SPECT were evaluated to identify incidental findings. New findings considered to be clinically significant were evaluated further. Positive predictive value (PPV) was determined at the time of definitive diagnosis. Results: Of 1819 patients studied, 497 (27.3%) had a positive CTAC finding. 51 (2.8%) patients had findings that were clinically significant at the time of the CTAC report and had not been previously diagnosed. Only four (0.2%) of these were potentially detrimental to patient outcome. Conclusion: One centre had a PPV of 0%, and the study suggests that these CTAC images should not be reported. Two centres with more modern equipment had low PPVs of 0% and 6%, respectively, and further research is suggested prior to drawing a conclusion. The centre with best quality CT had a PPV of 67%, and the study suggests that CTAC images from this equipment should be reported. Advances in knowledge: This study is unique compared with previous studies that have reported only the potential to identify incidental findings on low-resolution CT images. This study both identifies and evaluates new clinically significant incidental findings, and it demonstrates that the benefit of reporting the CTAC images depends on the type of equipment used. PMID:25135310

  12. Ultralow dose computed tomography attenuation correction for pediatric PET CT using adaptive statistical iterative reconstruction

    SciTech Connect

    Brady, Samuel L.; Shulkin, Barry L.

    2015-02-15

    Purpose: To develop ultralow-dose computed tomography (CT) attenuation correction (CTAC) acquisition protocols for pediatric positron emission tomography CT (PET CT). Methods: A GE Discovery 690 PET CT hybrid scanner was used to investigate the change to quantitative PET and CT measurements when operated at ultralow doses (10–35 mA s). CT quantitation: noise, low-contrast resolution, and CT numbers for 11 tissue substitutes were analyzed in-phantom. CT quantitation was analyzed down to a 90% reduction in volume computed tomography dose index (0.39/3.64 mGy) from baseline. To minimize noise infiltration, 100% adaptive statistical iterative reconstruction (ASiR) was used for CT reconstruction. PET images were reconstructed with the lower-dose CTAC iterations and analyzed for: maximum body-weight standardized uptake value (SUVbw) of targets of various diameters (range 8–37 mm), background uniformity, and spatial resolution. Radiation dose and CTAC noise magnitude were compared for 140 patient examinations (76 post-ASiR implementation) to determine relative dose reduction and noise control. Results: CT numbers were constant to within 10% of the non-dose-reduced CTAC image down to 90% dose reduction. No change in SUVbw, background percent uniformity, or spatial resolution was found for PET images reconstructed with CTAC protocols down to 90% dose reduction. Patient population effective dose analysis demonstrated relative CTAC dose reductions between 62% and 86% (3.2/8.3–0.9/6.2 mSv). Noise magnitude in dose-reduced patient images increased but was not statistically different from that in pre-dose-reduction patient images. Conclusions: Using ASiR allowed for aggressive reduction in CT dose with no change in the reconstructed PET images while maintaining sufficient image quality for colocalization of hybrid CT anatomy and PET radioisotope uptake.

  13. Distribution of high-stability 10 GHz local oscillator over 100 km optical fiber with accurate phase-correction system.

    PubMed

    Wang, Siwei; Sun, Dongning; Dong, Yi; Xie, Weilin; Shi, Hongxiao; Yi, Lilin; Hu, Weisheng

    2014-02-15

    We have developed a radio-frequency local oscillator remote distribution system, which transfers a phase-stabilized 10.03 GHz signal over 100 km optical fiber. The phase noise of the remote signal caused by temperature and mechanical stress variations on the fiber is compensated by a high-precision phase-correction system, which is achieved using a single sideband modulator to transfer the phase correction from intermediate frequency to radio frequency, thus enabling accurate phase control of the 10 GHz signal. The residual phase noise of the remote 10.03 GHz signal is measured to be -70  dBc/Hz at 1 Hz offset, and long-term stability of less than 1×10⁻¹⁶ at 10,000 s averaging time is achieved. Phase error is less than ±0.03π. PMID:24562233

  14. Attenuation correction for the assay of uranium(VI) solutions in large cylindrical containers by gamma ray spectrometry.

    PubMed

    Patra, Sabyasachi; Agarwal, Chhavi; Gathibandhe, M; Goswami, A

    2013-07-01

    The Hybrid Monte Carlo (HMC) method developed for attenuation correction has been extended to a 500 ml cylindrical geometry and experimentally validated. Absolute efficiency studies for 500 ml aqueous and air sources and for a point source have been carried out using Monte Carlo simulation. It has been observed that the point-source efficiency is a good estimate of the 500 ml source efficiency beyond a sample-to-detector distance of 15 cm. It has been found that, while the HMC method for attenuation correction is valid at all sample-to-detector distances and over the full transmittance range, the far-field and near-field formulae available in the literature are valid only over a very narrow range of sample-to-detector distances. PMID:23523508

  15. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    NASA Astrophysics Data System (ADS)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, and other fields. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependence of phase retardation and birefringence on the SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of the SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and

  16. The effect of metal artefact reduction on CT-based attenuation correction for PET imaging in the vicinity of metallic hip implants: A phantom study

    PubMed Central

    Harnish, Roy; Prevrhal, Sven; Alavi, Abass; Zaidi, Habib; Lang, Thomas

    2014-01-01

    Background To determine if metal artefact reduction (MAR) combined with a priori knowledge of prosthesis material composition can be applied to obtain CT-based attenuation maps with sufficient accuracy for quantitative assessment of 18F-fluorodeoxyglucose uptake in lesions near metallic prostheses. Methods A custom hip prosthesis phantom with a lesion-sized cavity filled with 0.2 ml 18F-FDG solution having an activity of 3.367 MBq adjacent to a prosthesis bore was imaged twice with a chrome-cobalt steel hip prosthesis and a plastic replica, respectively. Scanning was performed on a clinical hybrid PET/CT system equipped with an additional external 137Cs transmission source. PET emission images were reconstructed from both phantom configurations with CT-based attenuation correction (CTAC) and with CT-based attenuation correction using MAR (MARCTAC). To compare results with the attenuation-correction method extant prior to the advent of PET/CT, we also carried out attenuation correction with 137Cs transmission-based attenuation correction (TXAC). CTAC and MARCTAC images were scaled to attenuation coefficients at 511 keV using a tri-linear function that mapped the highest CT values to the prosthesis alloy attenuation coefficient. Accuracy and spatial distribution of the lesion activity was compared between the three reconstruction schemes. Results Compared to the reference activity of 3.37 MBq, the estimated activity quantified from the PET image corrected by TXAC was 3.41 MBq. The activity estimated from PET images corrected by MARCTAC was similar in accuracy at 3.32 MBq. CTAC corrected PET images resulted in nearly 40% overestimation of lesion activity at 4.70 MBq. Comparison of PET images obtained with the plastic and metal prostheses in place showed that CTAC resulted in a marked distortion of the 18F-FDG distribution within the lesion, whereas application of MARCTAC and TXAC resulted in lesion distributions similar to those observed with the plastic replica

  17. Attenuation correction of PET cardiac data with low-dose average CT in PET/CT

    SciTech Connect

    Pan Tinsu; Mawlawi, Osama; Luo, Dershan; Liu, Hui H.; Chi Paichun, M.; Mar, Martha V.; Gladish, Gregory; Truong, Mylene; Erasmus, Jeremy Jr.; Liao Zhongxing; Macapinlac, H. A.

    2006-10-15

    We proposed a low-dose average computed tomography (ACT) scan for attenuation correction (AC) of PET cardiac data in PET/CT. The ACT was obtained from a cine CT scan covering just over one breath cycle per couch position while the patient was free breathing. We applied this technique to four patients who underwent tumor imaging with 18F-FDG in PET/CT, whose PET data showed high uptake of 18F-FDG in the heart and whose CT and PET data were misregistered. None of the four patients had known myocardial infarction or ischemia. The patients were injected with 555-740 MBq of 18F-FDG and scanned 1 h after injection. The helical CT (HCT) data were acquired in 16 s for a coverage of 100 cm. The PET acquisition was 3 min per bed position of 15 cm. The duration of the cine CT acquisition per 2 cm was 5.9 s. We used a fast gantry rotation cycle time of 0.5 s to minimize motion-induced reconstruction artifacts in the cine CT images, which were averaged to form the ACT images for AC of the PET data. The radiation dose was about 5 mGy for the 5.9 s cine duration. The selection of 5.9 s was based on our analysis of the respiratory signals of 600 patients; 87% of the patients had average breath cycles of less than 6 s and 90% had standard deviations of less than 1 s in the breath-cycle period. In all four patient studies, registration between the CT and the PET data was improved. An increase of average uptake in the anterior and lateral walls of up to 48% and a decrease of average uptake in the septal and inferior walls of up to 16% with ACT were observed. We also compared ACT and a conventional slow-scan CT (SSCT) of 4 s duration in one patient study and found that ACT was better than SSCT at depicting average respiratory motion, while the SSCT images showed motion-induced reconstruction artifacts. In conclusion, low-dose ACT improved registration of the CT and PET data in the heart region in our study of four patients. ACT was superior to SSCT for depicting average respiration

  18. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M.; Asma, Evren; Kinahan, Paul E.; De Man, Bruno

    2015-09-01

    For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction, by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry rotation time of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-squares error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction, with little or no measurable effect on the PET image. For the four ultra-low dose
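
    As a minimal sketch of the sinogram-interpolation step evaluated above, the code below fills in missing view angles of a sparse acquisition by linear interpolation along the view axis before reconstruction. Linear interpolation is chosen here as a simple stand-in; the view counts echo the 41- and 984-view protocols mentioned in the abstract, but the arrays are otherwise synthetic.

```python
# Sketch: fill missing view angles of a sparse-view sinogram by linear
# interpolation along the view axis (per detector bin), wrapping over 360 deg.
import numpy as np

def interpolate_sparse_views(sparse_sino, sparse_angles_deg, full_angles_deg):
    """sparse_sino: (n_sparse_views, n_bins); returns (n_full_views, n_bins)."""
    # Pad with the first view at +360 deg so the interpolation wraps around.
    angles = np.append(sparse_angles_deg, sparse_angles_deg[0] + 360.0)
    sino = np.vstack([sparse_sino, sparse_sino[:1]])
    full = np.empty((len(full_angles_deg), sparse_sino.shape[1]))
    for b in range(sparse_sino.shape[1]):            # interpolate each detector bin
        full[:, b] = np.interp(full_angles_deg, angles, sino[:, b])
    return full

sparse_angles = np.linspace(0, 360, 41, endpoint=False)    # e.g. 41 sparse views
full_angles = np.linspace(0, 360, 984, endpoint=False)     # e.g. 984 full views
sparse = np.random.default_rng(1).random((41, 256))        # synthetic sinogram
dense = interpolate_sparse_views(sparse, sparse_angles, full_angles)
```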

  19. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms.

    PubMed

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M; Asma, Evren; Kinahan, Paul E; De Man, Bruno

    2015-10-01

    For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction, by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry rotation time of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-squares error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction, with little or no measurable effect on the PET image. For the four ultra-low dose levels

  20. Attenuated second order Møller-Plesset perturbation theory: correcting finite basis set errors and infinite basis set inaccuracies

    NASA Astrophysics Data System (ADS)

    Goldey, Matthew; Head-Gordon, Martin

    2015-03-01

    Second order Møller-Plesset perturbation theory (MP2) in finite basis sets describes several classes of noncovalent interactions poorly, due to basis set superposition error (BSSE) and underlying inaccurate physics for dispersion interactions. Attenuation of the Coulomb operator provides a direct path toward improving MP2 for noncovalent interactions. In limited basis sets, we demonstrate improvements in accuracy for intermolecular interactions with a three- to five-fold reduction in RMS errors. For a range of inter- and intramolecular test cases, attenuated MP2 even outperforms complete basis set estimates of MP2. Finite-basis attenuated MP2 is useful for inter- and intramolecular interactions where higher-cost approaches are intractable. Extending this approach, recent research pairs attenuated MP2 with long-range correction to describe potential energy landscapes, and further results for large systems with noncovalent interactions are shown. This work was supported by the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. We acknowledge computational resources obtained under NSF Award CHE-1048789.

  1. Accurate Evaluation of Ion Conductivity of the Gramicidin A Channel Using a Polarizable Force Field without Any Corrections.

    PubMed

    Peng, Xiangda; Zhang, Yuebin; Chu, Huiying; Li, Yan; Zhang, Dinglin; Cao, Liaoran; Li, Guohui

    2016-06-14

    Classical molecular dynamics (MD) simulation of membrane proteins faces significant challenges in accurately reproducing and predicting experimental observables such as ion conductance and permeability, due to its inability to precisely describe the electronic interactions in heterogeneous systems. In this work, the free energy profiles of K(+) and Na(+) permeating through the gramicidin A channel are characterized using the AMOEBA polarizable force field with a total sampling time of 1 μs. Our results indicate that by explicitly introducing the multipole terms and polarization into the electrostatic potentials, the permeation free energy barrier of K(+) through the gA channel is considerably reduced compared to the overestimated results obtained from the fixed-charge model. Moreover, the estimated maximum conductances, without any corrections, for both K(+) and Na(+) passing through the gA channel are much closer to the experimental results than any classical MD simulations, demonstrating the power of AMOEBA in investigating membrane proteins. PMID:27171823

  2. On the accurate long-time solution of the wave equation in exterior domains: Asymptotic expansions and corrected boundary conditions

    NASA Technical Reports Server (NTRS)

    Hagstrom, Thomas; Hariharan, S. I.; Maccamy, R. C.

    1993-01-01

    We consider the solution of scattering problems for the wave equation using approximate boundary conditions at artificial boundaries. These conditions are explicitly viewed as approximations to an exact boundary condition satisfied by the solution on the unbounded domain. We study the short- and long-term behavior of the error. It is proved that, in two space dimensions, no local-in-time, constant-coefficient boundary operator can lead to accurate results uniformly in time for the class of problems we consider. A variable-coefficient operator is developed which attains better accuracy (uniformly in time) than is possible with constant-coefficient approximations. The theory is illustrated by numerical examples. We also analyze the proposed boundary conditions using energy methods, leading to asymptotically correct error bounds.

  3. Accurate non-Born-Oppenheimer calculations of the complete pure vibrational spectrum of D2 with including relativistic corrections.

    PubMed

    Bubin, Sergiy; Stanke, Monika; Adamowicz, Ludwik

    2011-08-21

    In this work we report very accurate variational calculations of the complete pure vibrational spectrum of the D2 molecule performed within a framework where the Born-Oppenheimer (BO) approximation is not assumed. After the elimination of the center-of-mass motion, D2 becomes a three-particle problem in this framework. As the considered states correspond to zero total angular momentum, their wave functions are expanded in terms of all-particle, one-center, spherically symmetric explicitly correlated Gaussian functions multiplied by even non-negative powers of the internuclear distance. The nonrelativistic energies of the states obtained in the non-BO calculations are corrected for relativistic effects of the order of α² (where α = 1/c is the fine structure constant), calculated as expectation values of the operators representing these effects. PMID:21861559

  4. Application of the variational method for correction of wet ice attenuation for X-band dual-polarized radar

    NASA Astrophysics Data System (ADS)

    Tolstoy, Leonid

    In recent years there has been great interest in the development and use of dual-polarized radar systems operating in the X-band (~10 GHz) region of the electromagnetic spectrum. This is because these systems are smaller and cheaper, allowing a network to be built, for example, for short-range (typically < 30-40 km) hydrological applications. Such networks allow for higher cross-beam spatial resolutions, while cheaper pedestals supporting a smaller antenna also allow for higher temporal resolution compared with the large S-band (long-range) systems used by the National Weather Service. Dual-polarization radar techniques allow for correction of the strong attenuation of the electromagnetic radar signal due to rain at X-band and higher frequencies. However, practical attempts to develop reliable correction algorithms have been hampered by the need to deal with the rather large statistical fluctuations, or "noise", in the measured polarization parameters. Recently, the variational method was proposed, which overcomes this problem by using a forward model for the polarization variables and an iterative approach to minimize the difference between modeled and observed values in a least-squares sense. This approach also allows for detection of hail and determination of the fraction of reflectivity due to hail when the precipitation shaft is composed of a mixture of rain and hail. It was shown that this approach works well with S-band radar data. The purpose of this research is to extend the application of the variational method to X-band dual-polarization radar data. The main objective is to correct for attenuation caused by rain mixed with wet ice hydrometeors (e.g., hail) in deep convection. The standard dual-polarization method of attenuation correction using the differential propagation phase between H and V polarized waves cannot account for wet ice hydrometeors along the propagation path. The ultimate goal is to develop a feasible and robust

  5. Effects of CT-based attenuation correction of rat microSPECT images on relative myocardial perfusion and quantitative tracer uptake

    SciTech Connect

    Strydhorst, Jared H.; Ruddy, Terrence D.; Wells, R. Glenn

    2015-04-15

    Purpose: Our goal in this work was to investigate the impact of CT-based attenuation correction on measurements of rat myocardial perfusion with 99mTc and 201Tl single photon emission computed tomography (SPECT). Methods: Eight male Sprague-Dawley rats were injected with 99mTc-tetrofosmin and scanned in a small-animal pinhole SPECT/CT scanner. Scans were repeated weekly over a period of 5 weeks. Eight additional rats were injected with 201Tl and also scanned following a similar protocol. The images were reconstructed with and without attenuation correction, and the relative perfusion was analyzed with commercial cardiac analysis software. The absolute uptake of 99mTc in the heart was also quantified with and without attenuation correction. Results: For 99mTc imaging, relative segmental perfusion changed by up to +2.1%/−1.8% as a result of attenuation correction. Relative changes of +3.6%/−1.0% were observed for the 201Tl images. Interscan and inter-rat reproducibilities of relative segmental perfusion were 2.7% and 3.9%, respectively, for the uncorrected 99mTc scans, and 3.6% and 4.3%, respectively, for the 201Tl scans, and were not significantly affected by attenuation correction for either tracer. Attenuation correction also significantly increased the measured absolute uptake of tetrofosmin and significantly altered the relationship between rat weight and tracer uptake. Conclusions: Our results show that attenuation correction has a small but statistically significant impact on the relative perfusion measurements in some segments of the heart and does not adversely affect reproducibility. Attenuation correction had a small but statistically significant impact on measured absolute tracer uptake.

  6. Influences of reconstruction and attenuation correction in brain SPECT images obtained by the hybrid SPECT/CT device: evaluation with a 3-dimensional brain phantom

    PubMed Central

    Akamatsu, Mana; Yamashita, Yasuo; Akamatsu, Go; Tsutsui, Yuji; Ohya, Nobuyoshi; Nakamura, Yasuhiko; Sasaki, Masayuki

    2014-01-01

    Objective(s): The aim of this study was to evaluate the influences of reconstruction and attenuation correction on the differences in the radioactivity distributions in 123I brain SPECT obtained by the hybrid SPECT/CT device. Methods: We used the 3-dimensional (3D) brain phantom, which imitates the precise structure of gray matter, white matter and bone regions. It was filled with 123I solution (20.1 kBq/mL) in the gray matter region and with K2HPO4 in the bone region. The SPECT/CT data were acquired by the hybrid SPECT/CT device. SPECT images were reconstructed by using filtered back projection with uniform attenuation correction (FBP-uAC), 3D ordered-subsets expectation-maximization with uniform AC (3D-OSEM-uAC) and 3D OSEM with CT-based non-uniform AC (3D-OSEM-CTAC). We evaluated the differences in the radioactivity distributions among these reconstruction methods using a 3D digital phantom, which was developed from CT images of the 3D brain phantom, as a reference. The normalized mean square error (NMSE) and regional radioactivity were calculated to evaluate the similarity of SPECT images to the 3D digital phantom. Results: The NMSE values were 0.0811 in FBP-uAC, 0.0914 in 3D-OSEM-uAC and 0.0766 in 3D-OSEM-CTAC. The regional radioactivity of FBP-uAC was 11.5% lower in the middle cerebral artery territory, and that of 3D-OSEM-uAC was 5.8% higher in the anterior cerebral artery territory, compared with the digital phantom. On the other hand, that of 3D-OSEM-CTAC was 1.8% lower in all brain areas. Conclusion: By using the hybrid SPECT/CT device, the brain SPECT reconstructed by 3D-OSEM with CT attenuation correction can provide an accurate assessment of the distribution of brain radioactivity.
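
    The normalized mean square error used above to compare each reconstruction against the 3D digital phantom can be sketched as follows. The global count normalization applied before the comparison is an assumption made here so that the metric reflects distribution differences rather than overall scaling; variable names are illustrative.

```python
# Sketch of the normalized mean square error (NMSE) between a reconstructed
# SPECT volume and a reference (digital phantom) volume.
import numpy as np

def nmse(spect, reference):
    """NMSE = sum((SPECT - ref)^2) / sum(ref^2) over the evaluated voxels."""
    spect = np.asarray(spect, dtype=float)
    reference = np.asarray(reference, dtype=float)
    # Scale the SPECT volume to the reference's total counts first (assumed
    # normalization step, so only the spatial distribution is compared).
    spect = spect * (reference.sum() / spect.sum())
    return float(((spect - reference) ** 2).sum() / (reference ** 2).sum())
```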

  7. ETHNOPRED: a novel machine learning method for accurate continental and sub-continental ancestry identification and population stratification correction

    PubMed Central

    2013-01-01

    Background Population stratification is a systematic difference in allele frequencies between subpopulations. This can lead to spurious association findings in the case–control genome wide association studies (GWASs) used to identify single nucleotide polymorphisms (SNPs) associated with disease-linked phenotypes. Methods such as self-declared ancestry, ancestry informative markers, genomic control, structured association, and principal component analysis are used to assess and correct population stratification but each has limitations. We provide an alternative technique to address population stratification. Results We propose a novel machine learning method, ETHNOPRED, which uses the genotype and ethnicity data from the HapMap project to learn ensembles of disjoint decision trees, capable of accurately predicting an individual’s continental and sub-continental ancestry. To predict an individual’s continental ancestry, ETHNOPRED produced an ensemble of 3 decision trees involving a total of 10 SNPs, with 10-fold cross validation accuracy of 100% using HapMap II dataset. We extended this model to involve 29 disjoint decision trees over 149 SNPs, and showed that this ensemble has an accuracy of ≥ 99.9%, even if some of those 149 SNP values were missing. On an independent dataset, predominantly of Caucasian origin, our continental classifier showed 96.8% accuracy and improved genomic control’s λ from 1.22 to 1.11. We next used the HapMap III dataset to learn classifiers to distinguish European subpopulations (North-Western vs. Southern), East Asian subpopulations (Chinese vs. Japanese), African subpopulations (Eastern vs. Western), North American subpopulations (European vs. Chinese vs. African vs. Mexican vs. Indian), and Kenyan subpopulations (Luhya vs. Maasai). In these cases, ETHNOPRED produced ensembles of 3, 39, 21, 11, and 25 disjoint decision trees, respectively involving 31, 502, 526, 242 and 271 SNPs, with 10-fold cross validation accuracy of

  8. Analytical fan-beam and cone-beam reconstruction algorithms with uniform attenuation correction for SPECT

    NASA Astrophysics Data System (ADS)

    Tang, Qiulin; Zeng, Gengsheng L.; Gullberg, Grant T.

    2005-07-01

    In this paper, we developed an analytical fan-beam reconstruction algorithm that compensates for uniform attenuation in SPECT. The new fan-beam algorithm is in the form of backprojection first, then filtering, and is mathematically exact. The algorithm is based on three components. The first one is the established generalized central-slice theorem, which relates the 1D Fourier transform of a set of arbitrary data and the 2D Fourier transform of the backprojected image. The second one is the fact that the backprojection of the fan-beam measurements is identical to the backprojection of the parallel measurements of the same object with the same attenuator. The third one is the stable analytical reconstruction algorithm for uniformly attenuated Radon data, developed by Metz and Pan. The fan-beam algorithm is then extended into a cone-beam reconstruction algorithm, where the orbit of the focal point of the cone-beam imaging geometry is a circle. This orbit geometry does not satisfy Tuy's condition and the obtained cone-beam algorithm is an approximation. In the cone-beam algorithm, the cone-beam data are first backprojected into the 3D image volume; then a slice-by-slice filtering is performed. This slice-by-slice filtering procedure is identical to that of the fan-beam algorithm. Both the fan-beam and cone-beam algorithms are efficient, and computer simulations are presented. The new cone-beam algorithm is compared with Bronnikov's cone-beam algorithm, and it is shown to have better performance with noisy projections.

  9. Highly accurate stability-preserving optimization of the Zener viscoelastic model, with application to wave propagation in the presence of strong attenuation

    NASA Astrophysics Data System (ADS)

    Blanc, Émilie; Komatitsch, Dimitri; Chaljub, Emmanuel; Lombard, Bruno; Xie, Zhinan

    2016-04-01

    This paper concerns the numerical modelling of time-domain mechanical waves in viscoelastic media based on a generalized Zener model. To do so, relaxation mechanisms are classically introduced in the literature, resulting in a set of so-called memory variables and thus in large computational arrays that need to be stored. A challenge is thus to accurately mimic a given attenuation law using a minimal set of relaxation mechanisms. For this purpose, we replace the classical linear approach of Emmerich & Korn with a nonlinear optimization approach with positivity constraints. We show that this technique is more accurate than the linear approach. Moreover, it ensures that physically meaningful relaxation times are obtained which always honour the constraint of decay of the total energy with time. As a result, these relaxation times can always be used in a stable way in a modelling algorithm, even in the case of very strong attenuation, for which the classical linear approach may produce negative and thus unusable coefficients.
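
    The sketch below illustrates the kind of constrained nonlinear fit described above: relaxation times and anelastic coefficients, both constrained to be positive, are adjusted so that a generalized-Zener quality factor matches a target constant Q over a frequency band. The standard low-loss absorption model 1/Q(ω) ≈ Σ_l y_l (ωτ_l)/(1 + (ωτ_l)²) is assumed here; the authors' exact cost function and constraints are not reproduced, and the band, Q value and number of mechanisms are illustrative.

```python
# Sketch: constrained nonlinear least-squares fit of Zener relaxation
# mechanisms (y_l, tau_l >= 0) to a target constant Q over a frequency band.
import numpy as np
from scipy.optimize import least_squares

f = np.logspace(-1, 1, 50)          # 0.1-10 Hz band (illustrative)
w = 2 * np.pi * f
Q_target = 30.0
n_mech = 3

def inv_q_model(params, w):
    """Low-loss approximation: 1/Q(w) = sum_l y_l * (w*tau_l) / (1 + (w*tau_l)^2)."""
    y, tau = params[:n_mech], params[n_mech:]
    wt = np.outer(w, tau)
    return (y * wt / (1.0 + wt**2)).sum(axis=1)

def residuals(params):
    return inv_q_model(params, w) - 1.0 / Q_target

# Initial guess: log-spaced relaxation times across the band, small coefficients.
tau0 = 1.0 / (2 * np.pi * np.logspace(-1, 1, n_mech))
x0 = np.concatenate([np.full(n_mech, 2.0 / Q_target), tau0])
lower = np.zeros(2 * n_mech)        # positivity constraints on y_l and tau_l
fit = least_squares(residuals, x0, bounds=(lower, np.inf))
y_fit, tau_fit = fit.x[:n_mech], fit.x[n_mech:]
```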

  10. Attenuation-Corrected vs. Nonattenuation-Corrected 2-Deoxy-2-[F-18]fluoro-d-glucose-Positron Emission Tomography in Oncology, A Systematic Review

    PubMed Central

    Joshi, Urvi; Riphagen, Ingrid I.; Teule, Gerrit J. J.; van Lingen, Arthur; Hoekstra, Otto S.

    2007-01-01

    Purpose To perform a systematic review and meta-analysis to determine the diagnostic accuracy of attenuation-corrected (AC) vs. nonattenuation-corrected (NAC) 2-deoxy-2-[F-18]fluoro-d-glucose-positron emission tomography (FDG-PET) in oncological patients. Procedures Following a comprehensive search of the literature, two reviewers independently assessed the methodological quality of eligible studies. The diagnostic value of AC was studied through its sensitivity/specificity compared to histology, and by comparing the relative lesion detection rate reported with NAC-PET vs. AC, for full-ring and dual-head coincidence PET (FR- and DH-PET, respectively). Results Twelve studies were included. For FR-PET, the pooled sensitivity/specificity on a patient basis was 64/97% for AC and 62/99% for NAC, respectively. Pooled lesion detection with NAC vs. AC was 98% [95% confidence interval (95% CI): 96–99%, n = 1,012 lesions] for FR-PET, and 88% (95% CI:81–94%, n = 288 lesions) for DH-PET. Conclusions Findings suggest similar sensitivity/specificity and lesion detection for NAC vs. AC FR-PET and significantly higher lesion detection for NAC vs. AC DH-PET. PMID:17318671

  11. Comparison of attenuation correction methods for TGS and SGS: Do we really need selenium-75?

    SciTech Connect

    Estep, R.J.; Prettyman, T.H.; Sheppard, G.A.

    1996-09-01

    We compared attenuation-coefficient mapping techniques for use in tomographic gamma scanner (TGS) image reconstructions to determine whether there is a significant improvement when using fully coupled methods. For the constrained least-squares image reconstruction method tested here, we found no significant improvement. We also compared the effectiveness of different transmission source combinations for 129- and 414-keV 239Pu TGS assays. We concluded that the best source combination for TGS assays of 239Pu and other isotopes is a mixture of 133Ba, 54Mn, and 60Co. Three other source combinations were found to be at least as effective as 75Se.

  12. Evaluation of a bilinear model for attenuation correction using CT numbers generated from a parametric method.

    PubMed

    Martinez, L C; Calzado, A

    2016-01-01

    A parametric model is used for the calculation of the CT numbers of selected human tissues of known composition (H_i) in two hybrid systems, one SPECT-CT and one PET-CT. Only one well-characterized substance, not necessarily tissue-like, needs to be scanned with the protocol of interest. The linear attenuation coefficients of these tissues for some energies of interest (μ_i) have been calculated from their tabulated compositions and the NIST databases. These coefficients have been compared with those calculated with the bilinear model from the CT number (μ_B,i). No relevant differences have been found for bone and lung. In the soft-tissue region, the differences can be up to 5%. These discrepancies are attributed to the different chemical compositions of the tissues assumed by the two methods. PMID:26454019
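
    For context, a bilinear model of the kind evaluated above maps CT numbers to linear attenuation coefficients at 511 keV with two branches, one for soft tissue and one for bone-like values. The break point and slopes in the sketch below are typical textbook-style values, not the ones derived in the paper; the bone branch uses a smaller effective slope because the photoelectric contribution of calcium at CT energies does not carry over to 511 keV.

```python
# Illustrative bilinear conversion from CT number (HU) to the linear attenuation
# coefficient at 511 keV. Break point and slopes are assumed, not from the paper.
MU_WATER_511 = 0.096   # cm^-1, approximate value for water at 511 keV

def mu_511_from_hu(hu, bone_slope_per_hu=5.0e-5, threshold_hu=0.0):
    if hu <= threshold_hu:
        # Soft-tissue/air branch: simple rescaling of water.
        return MU_WATER_511 * (1.0 + hu / 1000.0)
    # Bone branch: reduced slope relative to the CT-energy HU scale.
    return MU_WATER_511 + bone_slope_per_hu * hu

for hu in (-1000, 0, 60, 1000):
    print(hu, round(mu_511_from_hu(hu), 4))
```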

  13. The Use of Anatomical Information for Molecular Image Reconstruction Algorithms: Attenuation/Scatter Correction, Motion Compensation, and Noise Reduction.

    PubMed

    Chun, Se Young

    2016-03-01

    PET and SPECT are important tools for providing valuable molecular information about patients to clinicians. Advances in nuclear medicine hardware technologies and statistical image reconstruction algorithms enabled significantly improved image quality. Sequentially or simultaneously acquired anatomical images such as CT and MRI from hybrid scanners are also important ingredients for improving the image quality of PET or SPECT further. High-quality anatomical information has been used and investigated for attenuation and scatter corrections, motion compensation, and noise reduction via post-reconstruction filtering and regularization in inverse problems. In this article, we will review works using anatomical information for molecular image reconstruction algorithms for better image quality by describing mathematical models, discussing sources of anatomical information for different cases, and showing some examples. PMID:26941855

  14. Correcting attenuation effects caused by interactions in the forest canopy in full-waveform airborne laser scanner data

    NASA Astrophysics Data System (ADS)

    Richter, K.; Stelling, N.; Maas, H.-G.

    2014-08-01

    Full-waveform airborne laser scanning offers great potential for various forestry applications. Applications requiring information on the vertical structure of the lower canopy in particular benefit from the large amount of information contained in waveform data. To enable the derivation of vertical forest canopy structure, the development of suitable voxel-based data analysis methods is a natural next step. Beyond extracting additional 3D points, it is very promising to derive the voxel attributes directly from the digitized waveform. For this purpose, the differential backscatter cross sections have to be projected into a Cartesian voxel structure. The voxel entries then represent amplitudes of the cross section and can be interpreted as a local measure of the amount of pulse-reflecting matter. However, the "history" of each laser echo pulse is characterized by attenuation effects caused by reflections in higher regions of the crown. As a result, the received waveform signals within the canopy have a lower amplitude than would be observed for an identical structure without the preceding canopy interactions (Romanczyk et al., 2012). If the biophysical structure is determined from the raw waveform data, material in the lower parts of the canopy is thus under-represented. To achieve a radiometrically correct voxel-space representation, the loss of signal strength caused by partial reflections along the path of a laser pulse through the canopy has to be compensated. In this paper, we present an integral approach that corrects the waveform at each recorded sample. The basic idea of the procedure is to enhance the waveform intensity values in the lower parts of the canopy by the portions of the pulse intensity that have been reflected (and thus blocked) in higher parts of the canopy. The paper discusses the developed correction method and shows results from a validation with both synthetic and real-world data.
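
    A conceptual sketch of the compensation idea is given below: samples deeper in the canopy are scaled up by the fraction of pulse energy already reflected, and thus blocked, by higher canopy layers. This is a strong simplification of the published integral approach; the per-amplitude loss factor k is a hypothetical, calibration-dependent parameter, and the waveform values are synthetic.

```python
# Conceptual sketch: compensate lower-canopy waveform samples for the pulse
# energy already reflected above them. Not the published correction method.
import numpy as np

def compensate_waveform(amplitudes, k=1.0e-3):
    """amplitudes: 1D array of waveform samples ordered from canopy top downward."""
    cumulative = np.concatenate(([0.0], np.cumsum(amplitudes)[:-1]))
    remaining = np.clip(1.0 - k * cumulative, 1.0e-3, 1.0)   # remaining pulse fraction
    return amplitudes / remaining

wf = np.array([5.0, 40.0, 12.0, 3.0, 25.0, 8.0])   # hypothetical waveform samples
wf_corrected = compensate_waveform(wf)
```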

  15. Evaluation of dosimetry and image of very low-dose computed tomography attenuation correction for pediatric positron emission tomography/computed tomography: phantom study

    NASA Astrophysics Data System (ADS)

    Bahn, Y. K.; Park, H. H.; Lee, C. H.; Kim, H. S.; Lyu, K. Y.; Dong, K. R.; Chung, W. K.; Cho, J. H.

    2014-04-01

    In this study, a phantom was used to evaluate the attenuation-correction computed tomography (CT) dose and image quality for pediatric positron emission tomography (PET)/CT scans. Three PET/CT scanners were used, along with an infant-sized acrylic phantom and an ion-chamber dosimeter. The CT acquisition conditions were varied from 10 to 20, 40, 80, 100 and 160 mA and from 80 to 100, 120 and 140 kVp in order to evaluate the penetration dose and the volume computed tomography dose index (CTDIvol). A NEMA PET Phantom™ was used to obtain PET images under the same CT conditions in order to evaluate each attenuation-corrected PET image in terms of standardized uptake value (SUV) and signal-to-noise ratio (SNR). Overall, the penetration dose was reduced by around 92% under the minimum CT conditions (80 kVp and 10 mA), with a decrease in CTDIvol of around 88%, compared with the pediatric abdomen CT conditions (100 kVp and 100 mA). The PET images attenuation-corrected under each CT condition showed no change in SUV and no influence on SNR. In conclusion, if a minimum-dose CT appropriate to the pediatric patient's body is used for attenuation correction, the effective dose can be reduced by around 90% or more compared with that for an adult protocol, which is useful for reducing radiation exposure.

  16. SU-C-9A-06: The Impact of CT Image Used for Attenuation Correction in 4D-PET

    SciTech Connect

    Cui, Y; Bowsher, J; Yan, S; Cai, J; Das, S; Yin, F

    2014-06-01

    Purpose: To evaluate the appropriateness of using a 3D non-gated CT image for attenuation correction (AC) in a 4D-PET (gated PET) imaging protocol used in radiotherapy treatment planning simulation. Methods: The 4D-PET imaging protocol on a Siemens PET/CT simulator (Biograph mCT, Siemens Medical Solutions, Hoffman Estates, IL) was evaluated. A CIRS Dynamic Thorax Phantom (CIRS Inc., Norfolk, VA) with a moving glass sphere (8 mL) in the middle of its thorax portion was used in the experiments. The glass sphere was filled with 18F-FDG and underwent longitudinal motion derived from a real patient breathing pattern. The Varian RPM system (Varian Medical Systems, Palo Alto, CA) was used for respiratory gating. Both phase-gating and amplitude-gating methods were tested. The clinical imaging protocol was modified to use three different CT images for AC in the 4D-PET reconstruction: the first uses a single-phase CT image to mimic the actual clinical protocol (single-CT-PET); the second uses the average intensity projection CT (AveIP-CT) derived from 4D-CT scanning (AveIP-CT-PET); the third uses the 4D-CT images for phase-matched AC (phase-matching-PET). Maximum SUV (SUVmax) and the volume of the moving target (glass sphere) at a threshold of 40% SUVmax were calculated for comparison between 4D-PET images derived with the different AC methods. Results: The SUVmax varied by 7.3% ± 6.9% over the breathing cycle in single-CT-PET, compared with 2.5% ± 2.8% in AveIP-CT-PET and 1.3% ± 1.2% in phase-matching-PET. The SUVmax in single-CT-PET differed by up to 15% from that in phase-matching-PET. The target volumes measured from single-CT-PET images also showed variations of up to 10% among the different phases of 4D-PET in both the phase-gating and amplitude-gating experiments. Conclusion: Attenuation correction using non-gated CT in 4D-PET imaging is not an optimal process for quantitative analysis. Clinical 4D-PET imaging protocols should use a phase-matched 4D-CT image, if available, to achieve better accuracy.

  17. Accurate Ionization Potentials and Electron Affinities of Acceptor Molecules II: Non-Empirically Tuned Long-Range Corrected Hybrid Functionals.

    PubMed

    Gallandi, Lukas; Marom, Noa; Rinke, Patrick; Körzdörfer, Thomas

    2016-02-01

    The performance of non-empirically tuned long-range corrected hybrid functionals for the prediction of vertical ionization potentials (IPs) and electron affinities (EAs) is assessed for a set of 24 organic acceptor molecules. Basis set-extrapolated coupled cluster singles, doubles, and perturbative triples [CCSD(T)] calculations serve as a reference for this study. Compared to standard exchange-correlation functionals, tuned long-range corrected hybrid functionals produce highly reliable results for vertical IPs and EAs, yielding mean absolute errors on par with computationally more demanding GW calculations. In particular, it is demonstrated that long-range corrected hybrid functionals serve as ideal starting points for non-self-consistent GW calculations. PMID:26731340

  18. Correction.

    PubMed

    2015-11-01

    In the article by Heuslein et al, which published online ahead of print on September 3, 2015 (DOI: 10.1161/ATVBAHA.115.305775), a correction was needed. Brett R. Blackman was added as the penultimate author of the article. The article has been corrected for publication in the November 2015 issue. PMID:26490278

  19. Benchmark atomization energy of ethane : importance of accurate zero-point vibrational energies and diagonal Born-Oppenheimer corrections for a 'simple' organic molecule.

    SciTech Connect

    Karton, A.; Martin, J. M. L.; Ruscic, B.; Chemistry; Weizmann Institute of Science

    2007-06-01

    A benchmark calculation of the atomization energy of the 'simple' organic molecule C2H6 (ethane) has been carried out by means of W4 theory. While the molecule is straightforward in terms of one-particle and n-particle basis set convergence, its large zero-point vibrational energy (and anharmonic correction thereto) and nontrivial diagonal Born-Oppenheimer correction (DBOC) represent interesting challenges. For the W4 set of molecules and C2H6, we show that DBOCs to the total atomization energy are systematically overestimated at the SCF level, and that the correlation correction converges very rapidly with the basis set. Thus, even at the CISD/cc-pVDZ level, useful correlation corrections to the DBOC are obtained. When applying such a correction, overall agreement with experiment was only marginally improved, but a more significant improvement is seen when hydrogen-containing systems are considered in isolation. We conclude that for closed-shell organic molecules, the greatest obstacles to highly accurate computational thermochemistry may not lie in the solution of the clamped-nuclei Schroedinger equation, but rather in the zero-point vibrational energy and the diagonal Born-Oppenheimer correction.

  20. Correction.

    PubMed

    2015-12-01

    In the article by Narayan et al (Narayan O, Davies JE, Hughes AD, Dart AM, Parker KH, Reid C, Cameron JD. Central aortic reservoir-wave analysis improves prediction of cardiovascular events in elderly hypertensives. Hypertension. 2015;65:629–635. doi: 10.1161/HYPERTENSIONAHA.114.04824), which published online ahead of print December 22, 2014, and appeared in the March 2015 issue of the journal, some corrections were needed.On page 632, Figure, panel A, the label PRI has been corrected to read RPI. In panel B, the text by the upward arrow, "10% increase in kd,” has been corrected to read, "10% decrease in kd." The corrected figure is shown below.The authors apologize for these errors. PMID:26558821

  1. Post-exposure sleep deprivation facilitates correctly timed interactions between glucocorticoid and adrenergic systems, which attenuate traumatic stress responses.

    PubMed

    Cohen, Shlomi; Kozlovsky, Nitsan; Matar, Michael A; Kaplan, Zeev; Zohar, Joseph; Cohen, Hagit

    2012-10-01

    compared with exposed-SD animals. Intentional prevention of sleep in the early aftermath of stress exposure may well be beneficial in attenuating traumatic stress-related sequelae. Post-exposure SD may disrupt the consolidation of aversive or fearful memories by facilitating correctly timed interactions between glucocorticoid and adrenergic systems. PMID:22713910

  2. Additional correction for energy transfer efficiency calculation in filter-based Förster resonance energy transfer microscopy for more accurate results

    NASA Astrophysics Data System (ADS)

    Sun, Yuansheng; Periasamy, Ammasi

    2010-03-01

    Förster resonance energy transfer (FRET) microscopy is commonly used to monitor protein interactions with filter-based imaging systems, which require spectral bleedthrough (or cross talk) correction to accurately measure energy transfer efficiency (E). The double-label (donor+acceptor) specimen is excited with the donor wavelength; the acceptor emission provides the uncorrected FRET signal, and the donor emission (the donor channel) represents the quenched donor (qD), the basis for the E calculation. Our results indicate that this is not the most accurate determination of the quenched donor signal, as it fails to consider the donor spectral bleedthrough (DSBT) signals in the qD for the E calculation; our new model addresses this, leading to a more accurate E result. This refinement improves E comparisons made with lifetime and spectral FRET imaging microscopy, as shown here using several genetic (FRET standard) constructs, where cerulean and venus fluorescent proteins are tethered by different amino acid linkers.

  3. Accurately evaluating Young's modulus of polymers through nanoindentations: A phenomenological correction factor to the Oliver and Pharr procedure

    NASA Astrophysics Data System (ADS)

    Tranchida, Davide; Piccarolo, Stefano; Loos, Joachim; Alexeev, Alexander

    2006-10-01

    The Oliver and Pharr [J. Mater. Res. 7, 1564 (1992)] procedure is a widely used tool to analyze nanoindentation force curves obtained on metals or ceramics. Its application to polymers is, however, difficult, as Young's moduli are commonly overestimated mainly because of viscoelastic effects and pileup. However, polymers spanning a large range of morphologies have been used in this work to introduce a phenomenological correction factor. It depends on indenter geometry: sets of calibration indentations have to be performed on some polymers with known elastic moduli to characterize each indenter.

  4. A simple yet accurate correction for winner's curse can predict signals discovered in much larger genome scans

    PubMed Central

    Bigdeli, T. Bernard; Lee, Donghyung; Webb, Bradley Todd; Riley, Brien P.; Vladimirov, Vladimir I.; Fanous, Ayman H.; Kendler, Kenneth S.; Bacanu, Silviu-Alin

    2016-01-01

    Motivation: For genetic studies, statistically significant variants explain far less trait variance than ‘sub-threshold’ association signals. To dimension follow-up studies, researchers need to accurately estimate ‘true’ effect sizes at each SNP, e.g. the true mean of odds ratios (ORs)/regression coefficients (RRs) or Z-score noncentralities. Naïve estimates of effect sizes incur winner’s curse biases, which are reduced only by laborious winner’s curse adjustments (WCAs). Given that Z-score estimates can be theoretically translated to other scales, we propose a simple method to compute WCA for Z-scores, i.e. their true means/noncentralities. Results: WCA of Z-scores shrinks these towards zero while, on the P-value scale, multiple testing adjustment (MTA) shrinks P-values toward one, which corresponds to the zero Z-score value. Thus, WCA on the Z-score scale is a proxy for MTA on the P-value scale. Therefore, to estimate Z-score noncentralities for all SNPs in genome scans, we propose the FDR Inverse Quantile Transformation (FIQT). It (i) performs the simpler MTA of P-values using FDR and (ii) obtains noncentralities by back-transforming MTA P-values onto the Z-score scale. When compared to competitors, realistic simulations suggest that FIQT is (i) more accurate and (ii) computationally more efficient by orders of magnitude. Practical application of FIQT to the Psychiatric Genetic Consortium schizophrenia cohort predicts a non-trivial fraction of sub-threshold signals which become significant in much larger supersamples. Conclusions: FIQT is a simple, yet accurate, WCA method for Z-scores (and ORs/RRs, via simple transformations). Availability and Implementation: A 10-line R function implementation is available at https://github.com/bacanusa/FIQT. Contact: sabacanu@vcu.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27187203
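
    Although the authors provide an R implementation at the repository above, the two FIQT steps are easy to mirror in Python. The sketch below (a manual Benjamini-Hochberg adjustment plus SciPy's normal quantile functions; names and the example values are illustrative) reproduces the idea of back-transforming FDR-adjusted P-values onto the Z scale while preserving the sign of each statistic.

      import numpy as np
      from scipy.stats import norm

      def fiqt(z):
          """FDR Inverse Quantile Transformation, sketched from the abstract:
          (i) Benjamini-Hochberg-adjust the two-sided P-values of the observed
          Z-scores, (ii) map the adjusted P-values back to the Z scale, keeping
          the original sign. Returns winner's-curse-adjusted Z-scores."""
          z = np.asarray(z, dtype=float)
          p = 2.0 * norm.sf(np.abs(z))                 # two-sided P-values
          m = p.size
          order = np.argsort(p)
          ranked = p[order] * m / np.arange(1, m + 1)  # BH step-up values
          ranked = np.minimum.accumulate(ranked[::-1])[::-1]  # enforce monotonicity
          p_adj = np.empty(m)
          p_adj[order] = np.clip(ranked, 0.0, 1.0)
          return np.sign(z) * norm.isf(p_adj / 2.0)    # back to the Z scale

      # A strongly significant Z shrinks only slightly, a marginal one a lot:
      # print(fiqt(np.array([6.0, 2.0, 1.0, 0.5])))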

  5. Evaluation of Iterative Reconstruction Method and Attenuation Correction in Brain Dopamine Transporter SPECT Using an Anthropomorphic Striatal Phantom

    PubMed Central

    Maebatake, Akira; Imamura, Ayaka; Kodera, Yui; Yamashita, Yasuo; Himuro, Kazuhiko; Baba, Shingo; Miwa, Kenta; Sasaki, Masayuki

    2016-01-01

    Objective(s): The aim of this study was to determine the optimal reconstruction parameters for iterative reconstruction in different devices and collimators for dopamine transporter (DaT) single-photon emission computed tomography (SPECT). The results were compared between filtered back projection (FBP) and different attenuation correction (AC) methods. Methods: An anthropomorphic striatal phantom was filled with 123I solutions at different striatum-to-background radioactivity ratios. Data were acquired using two SPECT/CT devices, equipped with a low-to-medium-energy general-purpose collimator (cameras A-1 and B-1) and a low-energy high-resolution (LEHR) collimator (cameras A-2 and B-2). The SPECT images were once reconstructed by FBP using Chang’s AC and once by ordered subset expectation maximization (OSEM) using both CTAC and Chang’s AC; moreover, scatter correction was performed. OSEM on cameras A-1 and A-2 included resolution recovery (RR). The images were analyzed, using the specific binding ratio (SBR). Regions of interest for the background were placed on both frontal and occipital regions. Results: The optimal number of iterations and subsets was 10i10s on camera A-1, 10i5s on camera A-2, and 7i6s on cameras B-1 and B-2. The optimal full width at half maximum of the Gaussian filter was 2.5 times the pixel size. In the comparison between FBP and OSEM, the quality was superior on OSEM-reconstructed images, although edge artifacts were observed in cameras A-1 and A-2. The SBR recovery of OSEM was higher than that of FBP on cameras A-1 and A-2, while no significant difference was detected on cameras B-1 and B-2. Good linearity of SBR was observed in all cameras. In the comparison between Chang’s AC and CTAC, a significant correlation was observed on all cameras. The difference in the background region influenced SBR differently in Chang’s AC and CTAC on cameras A-1 and B-1. Conclusion: Iterative reconstruction improved image quality on all cameras
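
    For readers unfamiliar with the specific binding ratio used above, a minimal count-based form is (striatal counts - background counts) / background counts. The sketch below assumes mean counts per voxel from a striatal ROI and a background ROI (frontal or occipital, as in the phantom study) and ignores the volume-weighted SBR variants used in some clinical protocols.

      import numpy as np

      def specific_binding_ratio(striatal_counts, background_counts):
          """Count-based SBR: (striatum - background) / background, with both
          arguments given as voxel count samples from the respective ROIs."""
          c_s = float(np.mean(striatal_counts))
          c_b = float(np.mean(background_counts))
          return (c_s - c_b) / c_b

      # e.g. specific_binding_ratio(striatum_roi_voxels, occipital_roi_voxels)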

  6. Correction

    NASA Astrophysics Data System (ADS)

    1995-04-01

    Seismic images of the Brooks Range, Arctic Alaska, reveal crustal-scale duplexing: Correction Geology, v. 23, p. 65–68 (January 1995) The correct Figure 4A, for the loose insert, is given here. See Figure 4A below. Corrected inserts will be available to those requesting copies of the article from the senior author, Gary S. Fuis, U.S. Geological Survey, 345 Middlefield Road, Menlo Park, CA 94025. Figure 4A. P-wave velocity model of Brooks Range region (thin gray contours) with migrated wide-angle reflections (heavy red lines) and migrated vertical-incidence reflections (short black lines) superimposed. Velocity contour interval is 0.25 km/s; 4, 5, and 6 km/s contours are labeled. Estimated error in velocities is one contour interval. Symbols on faults shown at top are as in Figure 2 caption.

  7. Correction.

    PubMed

    2016-02-01

    Neogi T, Jansen TLTA, Dalbeth N, et al. 2015 Gout classification criteria: an American College of Rheumatology/European League Against Rheumatism collaborative initiative. Ann Rheum Dis 2015;74:1789–98. The name of the 20th author was misspelled. The correct spelling is Janitzia Vazquez-Mellado. We regret the error. PMID:26881284

  8. Correction.

    PubMed

    2016-02-01

    In the article by Guessous et al (Guessous I, Pruijm M, Ponte B, Ackermann D, Ehret G, Ansermot N, Vuistiner P, Staessen J, Gu Y, Paccaud F, Mohaupt M, Vogt B, Pechère-Bertschi A, Martin PY, Burnier M, Eap CB, Bochud M. Associations of ambulatory blood pressure with urinary caffeine and caffeine metabolite excretions. Hypertension. 2015;65:691–696. doi: 10.1161/HYPERTENSIONAHA.114.04512), which published online ahead of print December 8, 2014, and appeared in the March 2015 issue of the journal, a correction was needed.One of the author surnames was misspelled. Antoinette Pechère-Berstchi has been corrected to read Antoinette Pechère-Bertschi.The authors apologize for this error. PMID:26763012

  9. Optimization of attenuation correction for positron emission tomography studies of thorax and pelvis using count-based transmission scans.

    PubMed

    Boellaard, R; van Lingen, A; van Balen, S C M; Lammertsma, A A

    2004-02-21

    The quality of thorax and pelvis transmission scans and therefore of attenuation correction in PET depends on patient thickness and transmission rod source strength. The purpose of the present study was to assess the feasibility of using count-based transmission scans, thereby guaranteeing more consistent image quality and more precise quantification than with fixed transmission scan duration. First, the relation between noise equivalent counts (NEC) of 10 min calibration transmission scans and rod source activity was determined over a period of 1.5 years. Second, the relation between transmission scan counts and uniform phantom diameter was studied numerically, determining the relative contribution of counts from lines of response passing through the phantom as compared with the total number of counts. Finally, the relation between patient weight and transmission scan duration was determined for 35 patients, who were scanned at the level of thorax or pelvis. After installation of new rod sources, the NEC of transmission scans first increased slightly (5%) with decreasing rod source activity and after 3 months decreased with a rate of 2-3% per month. The numerical simulation showed that the number of transmission scan counts from lines of response passing through the phantom increased with phantom diameter up to 7 cm. For phantoms larger than 7 cm, the number of these counts decreased at approximately the same rate as the total number of transmission scan counts. Patient data confirmed that the total number of transmission scan counts decreased with increasing patient weight with about 0.5% kg(-1). It can be concluded that count-based transmission scans compensate for radioactive decay of the rod sources. With count-based transmission scans, rod sources can be used for up to 1.5 years at the cost of a 50% increased transmission scan duration. For phantoms with diameters of more than 7 cm and for patients scanned at the level of thorax or pelvis, use of count

  10. The importance of accurate repair of the orbicularis oris muscle in the correction of unilateral cleft lip.

    PubMed

    Park, C G; Ha, B

    1995-09-01

    Most of the attempts and efforts in cleft lip repair have been directed toward the skin incision. The importance of the orbicularis oris muscle repair has been emphasized in recent years. The well-designed skin incision with simple repair of the orbicularis oris muscle has produced a considerable improvement in the appearance of the upper lip; however, the repaired upper lip seems to change its shape abnormally in motion and has a tendency to be distorted with age if the orbicularis oris muscle is not repaired precisely and accurately. Following the dissection of the normal upper lip and unilateral cleft lip in cadavers, we could find two different components in the orbicularis oris muscle, a superficial and a deep component. One is a retractor and the other is a constrictor of the lip. They have antagonistic actions to each other during lip movement. We also can identify these two different components of the muscle in the cleft lip patient during operation. We thought inaccurate and mixed connection between these two different functional components could make the repaired lip distorted and unbalanced, which would get worse during growth. By identification and separate repair of the two different muscular components of the orbicularis oris muscle (i.e., repair of the superficial and deep components on the lateral side with the corresponding components on the medial side), better results in the dynamic and three-dimensional configuration of the upper lip can be achieved, and unfavorable distortion can be avoided as the patients grow.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:7652051

  11. USE OF WATER RAMAN EMISSION TO CORRECT AIRBORNE LASER FLUOROSENSOR DATA FOR EFFECTS OF WATER OPTICAL ATTENUATION

    EPA Science Inventory

    Airborne laser fluorosensor measurements of fluorophore concentrations in surface waters are highly sensitive to interference from changes in optical attenuation. This interference can be eliminated by normalizing the fluorescence signal with the concurrent water Raman signal. In...

  12. Enhancing the quality of radiographic images acquired with point-like gamma-ray sources through correction of the beam divergence and attenuation

    SciTech Connect

    Silvani, M. I.; Almeida, G. L.; Lopes, R. T.

    2014-11-11

    Radiographic images acquired with point-like gamma-ray sources exhibit desirably low penumbra effects, especially when the source is positioned far away from the object-detector set. Such an arrangement is frequently not affordable due to the limited flux provided by a distant source. A closer source, however, has two main drawbacks, namely the degradation of the spatial resolution - as actual sources are only approximately punctual - and the non-homogeneity of the beam hitting the detector, which creates a false attenuation map of the object being inspected. This non-homogeneity is caused by the beam divergence itself and by the different thicknesses traversed by the beam, even if the object were a homogeneous flat plate. In this work, radiographic images of objects with different geometries, such as flat plates and pipes, have undergone correction for beam divergence and attenuation, addressing the experimental verification of the capability and soundness of an algorithm formerly developed to generate and process synthetic images. The impact of other parameters, including source-detector gap, attenuation coefficient, ratio of defective-to-main hull thickness, and counting statistics, has been assessed for specifically tailored test objects, aiming at evaluating the ability of the proposed method to deal with different boundary conditions. All experiments have been carried out with an X-ray sensitive Imaging Plate and reactor-produced {sup 198}Au and {sup 165}Dy sources. The results have been compared with another technique, showing a better capability to correct the attenuation map of the inspected objects and to unveil their inner structure, otherwise concealed by the poor contrast caused by beam divergence and attenuation, in particular for regions far from the vertical of the source.
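
    For the simplest case of a point source centred over a flat, homogeneous plate parallel to the detector, the divergence and the oblique attenuation path can be divided out analytically. The Python sketch below illustrates that flat-fielding idea only; the cos^3 obliquity model, the centred-source geometry, and the parameter names are simplifying assumptions for illustration, not the authors' algorithm.

      import numpy as np

      def divergence_attenuation_correction(image, pixel_size, source_height,
                                            mu, plate_thickness):
          """Flatten a point-source radiograph of a homogeneous flat plate by
          dividing out (a) the inverse-square/obliquity falloff of the diverging
          beam and (b) the longer attenuation path at oblique incidence.
          `source_height` is the source-to-detector distance (same units as
          `pixel_size` and `plate_thickness`); `mu` is the linear attenuation
          coefficient of the plate material."""
          ny, nx = image.shape
          y, x = np.indices((ny, nx), dtype=float)
          # radial offset of each pixel from the point below the source (image centre)
          r = pixel_size * np.hypot(x - (nx - 1) / 2.0, y - (ny - 1) / 2.0)
          d = np.hypot(source_height, r)          # source-to-pixel distance
          cos_theta = source_height / d
          geometric = cos_theta ** 3              # 1/d^2 falloff plus obliquity, relative to the axis
          attenuation = np.exp(-mu * plate_thickness / cos_theta)
          return image / (geometric * attenuation)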

  13. Correction.

    PubMed

    2015-05-22

    The Circulation Research article by Keith and Bolli (“String Theory” of c-kitpos Cardiac Cells: A New Paradigm Regarding the Nature of These Cells That May Reconcile Apparently Discrepant Results. Circ Res. 2015:116:1216-1230. doi: 10.1161/CIRCRESAHA.116.305557) states that van Berlo et al (2014) observed that large numbers of fibroblasts and adventitial cells, some smooth muscle and endothelial cells, and rare cardiomyocytes originated from c-kit positive progenitors. However, van Berlo et al reported that only occasional fibroblasts and adventitial cells derived from c-kit positive progenitors in their studies. Accordingly, the review has been corrected to indicate that van Berlo et al (2014) observed that large numbers of endothelial cells, with some smooth muscle cells and fibroblasts, and more rarely cardiomyocytes, originated from c-kit positive progenitors in their murine model. The authors apologize for this error, and the error has been noted and corrected in the online version of the article, which is available at http://circres.ahajournals.org/content/116/7/1216.full. PMID:25999426

  14. Correction

    NASA Astrophysics Data System (ADS)

    1998-12-01

    Alleged mosasaur bite marks on Late Cretaceous ammonites are limpet (patellogastropod) home scars Geology, v. 26, p. 947–950 (October 1998) This article had the following printing errors: p. 947, Abstract, line 11, “sepia” should be “septa”; p. 947, 1st paragraph under Introduction, line 2, “creep” should be “deep”; p. 948, column 1, 2nd paragraph, line 7, “creep” should be “deep”; p. 949, column 1, 1st paragraph, line 1, “creep” should be “deep”; p. 949, column 1, 1st paragraph, line 5, “19774” should be “1977)”; p. 949, column 1, 4th paragraph, line 7, “in particular” should be “In particular”. CORRECTION Mammalian community response to the latest Paleocene thermal maximum: An isotaphonomic study in the northern Bighorn Basin, Wyoming Geology, v. 26, p. 1011–1014 (November 1998) An error appeared in the References Cited. The correct reference appears below: Fricke, H. C., Clyde, W. C., O'Neil, J. R., and Gingerich, P. D., 1998, Evidence for rapid climate change in North America during the latest Paleocene thermal maximum: Oxygen isotope compositions of biogenic phosphate from the Bighorn Basin (Wyoming): Earth and Planetary Science Letters, v. 160, p. 193–208.

  15. [The Optimal Reconstruction Parameters by Scatter and Attenuation Corrections Using Multi-focus Collimator System in Thallium-201 Myocardial Perfusion SPECT Study].

    PubMed

    Shibutani, Takayuki; Onoguchi, Masahisa; Funayama, Risa; Nakajima, Kenichi; Matsuo, Shinro; Yoneyama, Hiroto; Konishi, Takahiro; Kinuya, Seigo

    2015-11-01

    The aim of this study was to determine the optimal reconstruction parameters of the ordered subset conjugate gradient minimizer (OSCGM) for no correction (NC), attenuation correction (AC), and AC+scatter correction (ACSC) using the IQ-SPECT (single photon emission computed tomography) system in thallium-201 myocardial perfusion SPECT. A myocardial phantom was acquired in two patterns, with and without a defect. The myocardial images were assessed with a 5-point visual score and with quantitative evaluations of contrast, uptake, and uniformity as functions of the OSCGM subset and update (subset×iteration) numbers and of the full width at half maximum (FWHM) of the Gaussian filter, for each of the three corrections. We then determined the optimal OSCGM reconstruction parameters for each correction. The number of subsets giving suitable images was 3 or 5 for NC and AC, and 2 or 3 for ACSC. The number of updates giving suitable images was 30 or 40 for NC, 40 or 60 for AC, and 30 for ACSC. Furthermore, the optimal FWHM of the Gaussian filter was 9.6 or 12 mm for NC and ACSC, and 7.2 or 9.6 mm for AC. In conclusion, the following optimal OSCGM reconstruction parameters were determined: NC: subset 5, iteration 8, and FWHM 9.6 mm; AC: subset 5, iteration 8, and FWHM 7.2 mm; ACSC: subset 3, iteration 10, and FWHM 9.6 mm. PMID:26596202

  16. Exact fan-beam and 4{pi}-acquisition cone-beam SPECT algorithms with uniform attenuation correction

    SciTech Connect

    Tang Qiulin; Zeng, Gengsheng L.; Wu Jiansheng; Gullberg, Grant T.

    2005-11-15

    This paper presents analytical fan-beam and cone-beam reconstruction algorithms that compensate for uniform attenuation in single photon emission computed tomography. First, a fan-beam algorithm is developed by obtaining a relationship between the two-dimensional (2D) Fourier transform of parallel-beam projections and fan-beam projections. Using this relationship, 2D Fourier transforms of equivalent parallel-beam projection data are obtained from the fan-beam projection data. Then a quasioptimal analytical reconstruction algorithm for uniformly attenuated Radon data, developed by Metz and Pan, is used to reconstruct the image. A cone-beam algorithm is developed by extending the fan-beam algorithm to 4{pi} solid angle geometry. The cone-beam algorithm is also an exact algorithm.

  17. Correction of quantification errors in pelvic and spinal lesions caused by ignoring higher photon attenuation of bone in [{sup 18}F]NaF PET/MR

    SciTech Connect

    Schramm, Georg; Maus, Jens; Hofheinz, Frank; Petr, Jan; Lougovski, Alexandr; Beuthien-Baumann, Bettina; Oehme, Liane; Platzek, Ivan; Hoff, Jörg van den

    2015-11-15

    Purpose: MR-based attenuation correction (MRAC) in routine clinical whole-body positron emission tomography and magnetic resonance imaging (PET/MRI) is based on tissue type segmentation. Due to lack of MR signal in cortical bone and the varying signal of spongeous bone, standard whole-body segmentation-based MRAC ignores the higher attenuation of bone compared to the one of soft tissue (MRAC{sub nobone}). The authors aim to quantify and reduce the bias introduced by MRAC{sub nobone} in the standard uptake value (SUV) of spinal and pelvic lesions in 20 PET/MRI examinations with [{sup 18}F]NaF. Methods: The authors reconstructed 20 PET/MR [{sup 18}F]NaF patient data sets acquired with a Philips Ingenuity TF PET/MRI. The PET raw data were reconstructed with two different attenuation images. First, the authors used the vendor-provided MRAC algorithm that ignores the higher attenuation of bone to reconstruct PET{sub nobone}. Second, the authors used a threshold-based algorithm developed in their group to automatically segment bone structures in the [{sup 18}F]NaF PET images. Subsequently, an attenuation coefficient of 0.11 cm{sup −1} was assigned to the segmented bone regions in the MRI-based attenuation image (MRAC{sub bone}) which was used to reconstruct PET{sub bone}. The automatic bone segmentation algorithm was validated in six PET/CT [{sup 18}F]NaF examinations. Relative SUV{sub mean} and SUV{sub max} differences between PET{sub bone} and PET{sub nobone} of 8 pelvic and 41 spinal lesions, and of other regions such as lung, liver, and bladder, were calculated. By varying the assigned bone attenuation coefficient from 0.11 to 0.13 cm{sup −1}, the authors investigated its influence on the reconstructed SUVs of the lesions. Results: The comparison of [{sup 18}F]NaF-based and CT-based bone segmentation in the six PET/CT patients showed a Dice similarity of 0.7 with a true positive rate of 0.72 and a false discovery rate of 0.33. The [{sup 18}F]NaF-based bone
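
    The core of the correction described above is simply to overwrite the segmentation-based μ-map in voxels classified as bone. The toy Python sketch below shows that substitution step using a single global PET threshold and the 0.11 cm^-1 value quoted in the abstract; the authors' automatic segmentation and its validation against CT are considerably more involved.

      import numpy as np

      def add_bone_to_mumap(mu_map_nobone, naf_pet, pet_threshold, mu_bone=0.11):
          """Assign a fixed 511 keV attenuation coefficient (cm^-1) to voxels
          that a simple threshold on the [18F]NaF PET image classifies as bone,
          leaving all other voxels of the segmentation-based mu-map unchanged."""
          bone_mask = naf_pet >= pet_threshold
          mu_map = mu_map_nobone.copy()
          mu_map[bone_mask] = np.maximum(mu_map[bone_mask], mu_bone)
          return mu_map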

  18. Computed tomography calcium score scan for attenuation correction of N-13 ammonia cardiac positron emission tomography: effect of respiratory phase and registration method.

    PubMed

    Zaidi, Habib; Nkoulou, Rene; Bond, Sarah; Baskin, Aylin; Schindler, Thomas; Ratib, Osman; Declerck, Jerome

    2013-08-01

    The use of coronary calcium scoring (CaScCT) for attenuation correction (AC) of (13)N-ammonia PET/CT studies (NH3) is still being debated. We compare standard ACCT to CaScCT using various respiratory phases and co-registration methods for AC. Forty-one patients underwent a stress/rest NH3. Standard ACCT scans and CaScCT acquired during inspiration (CaScCTinsp, 26 patients) or expiration (CaScCTexp, 15 patients) were used to correct PET data for photon attenuation. Resulting images were compared using Pearson's correlation and Bland-Altman (BA) limits of agreement (LA) on segmental relative and absolute coronary blood flow (CBF) using both manual and automatic co-registration methods (rigid-body and deformable). For relative perfusion, CaScCTexp correlates better than CaScCTinsp with ACCT when using manual co-registration (r = 0.870; P < 0.001 and r = 0.732; P < 0.001, respectively). Automatic co-registration provides the best correlation between CaScCTexp and ACCT for relative perfusion (r = 0.956; P < 0.001). Both CaScCTinsp and CaScCTexp yielded excellent correlations with ACCT for CBF when using manual co-registration (r = 0.918; P < 0.001; BA mean bias 0.05 ml/min/g; LA: -0.42 to +0.3 ml/min/g and r = 0.97; P < 0.001; BA mean bias 0.1 ml/min/g; LA: -0.65 to +0.5 ml/min/g, respectively). The use of CaScCTexp and deformable co-registration is best suited for AC to quantify relative perfusion and CBF enabling substantial radiation dose reduction. PMID:23504215

  19. Respiration-Averaged CT for Attenuation Correction of PET Images – Impact on PET Texture Features in Non-Small Cell Lung Cancer Patients

    PubMed Central

    Cheng, Nai-Ming; Fang, Yu-Hua Dean; Tsan, Din-Li

    2016-01-01

    Purpose We compared attenuation correction of PET images with helical CT (PET/HCT) and respiration-averaged CT (PET/ACT) in patients with non-small-cell lung cancer (NSCLC) with the goal of investigating the impact of respiration-averaged CT on 18F FDG PET texture parameters. Materials and Methods A total of 56 patients were enrolled. Tumors were segmented on pretreatment PET images using the adaptive threshold. Twelve different texture parameters were computed: standard uptake value (SUV) entropy, uniformity, entropy, dissimilarity, homogeneity, coarseness, busyness, contrast, complexity, grey-level nonuniformity, zone-size nonuniformity, and high grey-level large zone emphasis. Comparisons of PET/HCT and PET/ACT were performed using Wilcoxon signed-rank tests, intraclass correlation coefficients, and Bland-Altman analysis. Receiver operating characteristic (ROC) curves as well as univariate and multivariate Cox regression analyses were used to identify the parameters significantly associated with disease-specific survival (DSS). A fixed threshold at 45% of the maximum SUV (T45) was used for validation. Results SUV maximum and total lesion glycolysis (TLG) were significantly higher in PET/ACT. However, texture parameters obtained with PET/ACT and PET/HCT showed a high degree of agreement. The lowest levels of variation between the two modalities were observed for SUV entropy (9.7%) and entropy (9.8%). SUV entropy, entropy, and coarseness from both PET/ACT and PET/HCT were significantly associated with DSS. Validation analyses using T45 confirmed the usefulness of SUV entropy and entropy in both PET/HCT and PET/ACT for the prediction of DSS, but only coarseness from PET/ACT achieved the statistical significance threshold. Conclusions Our results indicate that 1) texture parameters from PET/ACT are clinically useful in the prediction of survival in NSCLC patients and 2) SUV entropy and entropy are robust to attenuation correction methods. PMID:26930211
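
    Of the parameters listed above, the first-order SUV entropy is the simplest to reproduce: discretise the SUVs inside the segmented tumour and compute the Shannon entropy of the resulting histogram. The sketch below assumes a flat array of tumour-voxel SUVs and a 64-bin discretisation, both arbitrary choices for illustration; the co-occurrence and zone-size features require additional spatial matrices and are not shown.

      import numpy as np

      def suv_entropy(suv_values, n_bins=64):
          """First-order SUV entropy of a segmented tumour:
          -sum(p * log2(p)) over the occupied histogram bins."""
          counts, _ = np.histogram(np.asarray(suv_values, dtype=float), bins=n_bins)
          p = counts[counts > 0] / counts.sum()
          return float(-np.sum(p * np.log2(p)))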

  20. Statistical analysis of accurate prediction of local atmospheric optical attenuation with a new model according to weather together with beam wandering compensation system: a season-wise experimental investigation

    NASA Astrophysics Data System (ADS)

    Arockia Bazil Raj, A.; Padmavathi, S.

    2016-07-01

    Atmospheric parameters strongly affect the performance of a Free Space Optical Communication (FSOC) system when the optical wave propagates through an inhomogeneous turbulent medium. Developing a model that accurately predicts optical attenuation from meteorological parameters is therefore important for understanding the behaviour of the FSOC channel during different seasons. A dedicated free space optical link experimental set-up is developed for a range of 0.5 km at an altitude of 15.25 m. The diurnal profile of received power and the corresponding meteorological parameters are continuously measured using the developed optoelectronic assembly and a weather station, respectively, and stored in a data-logging computer. The measured meteorological parameters (as input factors) and optical attenuation (as response factor), of size [177147 × 4], are used for linear regression analysis and to design a mathematical model suitable for predicting the atmospheric optical attenuation at our test field. A model that exhibits an R2 value of 98.76% and an average percentage deviation of 1.59% is adopted for practical implementation. The prediction accuracy of the proposed model is investigated, along with comparative results obtained from some existing models, in terms of Root Mean Square Error (RMSE) during different local seasons over a one-year period. An average RMSE of 0.043 dB/km is obtained over the wide dynamic range of meteorological parameter variations.
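
    The modelling step described above is an ordinary multiple linear regression with R2 and RMSE as the figures of merit. A compact NumPy version is sketched below; the design-matrix layout and the function name are illustrative, and the authors' actual model selection and seasonal analysis are not reproduced.

      import numpy as np

      def fit_attenuation_model(weather, attenuation_db_per_km):
          """Ordinary least-squares fit of attenuation (dB/km) against measured
          meteorological parameters (one column per parameter), returning the
          coefficients, R^2, and RMSE of the fitted linear model."""
          X = np.column_stack([np.ones(len(weather)), weather])   # add intercept
          coef, *_ = np.linalg.lstsq(X, attenuation_db_per_km, rcond=None)
          resid = attenuation_db_per_km - X @ coef
          rmse = float(np.sqrt(np.mean(resid ** 2)))
          ss_res = float(np.sum(resid ** 2))
          ss_tot = float(np.sum((attenuation_db_per_km
                                 - np.mean(attenuation_db_per_km)) ** 2))
          return coef, 1.0 - ss_res / ss_tot, rmse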

  1. k-space sampling optimization for ultrashort TE imaging of cortical bone: Applications in radiation therapy planning and MR-based PET attenuation correction

    SciTech Connect

    Hu, Lingzhi; Traughber, Melanie; Su, Kuan-Hao; Pereira, Gisele C.; Grover, Anu; Traughber, Bryan; Muzic, Raymond F. Jr. E-mail: raymond.muzic@case.edu

    2014-10-15

    Purpose: The ultrashort echo-time (UTE) sequence is a promising MR pulse sequence for imaging cortical bone which is otherwise difficult to image using conventional MR sequences and also poses strong attenuation for photons in radiation therapy and PET imaging. The authors report here a systematic characterization of cortical bone signal decay and a scanning time optimization strategy for the UTE sequence through k-space undersampling, which can result in up to a 75% reduction in acquisition time. Using the undersampled UTE imaging sequence, the authors also attempted to quantitatively investigate the MR properties of cortical bone in healthy volunteers, thus demonstrating the feasibility of using such a technique for generating bone-enhanced images which can be used for radiation therapy planning and attenuation correction with PET/MR. Methods: An angularly undersampled, radially encoded UTE sequence was used for scanning the brains of healthy volunteers. Quantitative MR characterization of tissue properties, including water fraction and R2{sup ∗} = 1/T2{sup ∗}, was performed by analyzing the UTE images acquired at multiple echo times. The impact of different sampling rates was evaluated through systematic comparison of the MR image quality, bone-enhanced image quality, image noise, water fraction, and R2{sup ∗} of cortical bone. Results: A reduced angular sampling rate of the UTE trajectory achieves acquisition durations in proportion to the sampling rate and in as short as 25% of the time required for full sampling using a standard Cartesian acquisition, while preserving unique MR contrast within the skull at the cost of a minimal increase in noise level. The R2{sup ∗} of human skull was measured as 0.2–0.3 ms{sup −1} depending on the specific region, which is more than ten times greater than the R2{sup ∗} of soft tissue. The water fraction in human skull was measured to be 60%–80%, which is significantly less than the >90% water fraction in

  2. CT-based attenuation correction in the calculation of semi-quantitative indices of [18F]FDG uptake in PET.

    PubMed

    Visvikis, D; Costa, D C; Croasdale, I; Lonn, A H R; Bomanji, J; Gacinovic, S; Ell, P J

    2003-03-01

    The introduction of combined PET/CT systems has a number of advantages, including the utilisation of CT images for PET attenuation correction (AC). The potential advantage compared with existing methodology is less noisy transmission maps within shorter times of acquisition. The objective of our investigation was to assess the accuracy of CT attenuation correction (CTAC) and to study resulting bias and signal to noise ratio (SNR) in image-derived semi-quantitative uptake indices. A combined PET/CT system (GE Discovery LS) was used. Different size phantoms containing variable density components were used to assess the inherent accuracy of a bilinear transformation in the conversion of CT images to 511 keV attenuation maps. This was followed by a phantom study simulating tumour imaging conditions, with a tumour to background ratio of 5:1. An additional variable was the inclusion of contrast agent at different concentration levels. A CT scan was carried out followed by 5 min emission with 1-h and 3-min transmission frames. Clinical data were acquired in 50 patients, who had a CT scan under normal breathing conditions (CTAC(nb)) or under breath-hold with inspiration (CTAC(insp)) or expiration (CTAC(exp)), followed by a PET scan of 5 and 3 min per bed position for the emission and transmission scans respectively. Phantom and patient studies were reconstructed using segmented AC (SAC) and CTAC. In addition, measured AC (MAC) was performed for the phantom study using the 1-h transmission frame. Comparing the attenuation coefficients obtained using the CT- and the rod source-based attenuation maps, differences of 3% and <6% were recorded before and after segmentation of the measured transmission maps. Differences of up to 6% and 8% were found in the average count density (SUV(avg)) between the phantom images reconstructed with MAC and those reconstructed with CTAC and SAC respectively. In the case of CTAC, the difference increased up to 27% with the presence of contrast
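
    The bilinear transformation evaluated above maps CT numbers to 511 keV linear attenuation coefficients with one slope below a soft-tissue break point and a shallower slope above it. The Python sketch below shows only the general form; the break point and bone slope are illustrative placeholders, since in practice they are calibrated per scanner and tube voltage, which is one source of the biases the abstract quantifies.

      import numpy as np

      def hu_to_mu511(hu, mu_water=0.096, break_hu=0.0, bone_slope=5.0e-5):
          """Piecewise-linear ('bilinear') conversion of CT numbers (HU) to
          511 keV linear attenuation coefficients (cm^-1): an air/water mixture
          branch below `break_hu` and a shallower water/bone branch above it.
          The default slope and break point are illustrative only."""
          hu = np.asarray(hu, dtype=float)
          mu = np.where(hu <= break_hu,
                        mu_water * (1.0 + hu / 1000.0),
                        mu_water + bone_slope * (hu - break_hu))
          return np.clip(mu, 0.0, None)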

  3. Measurement of attenuation coefficients of the fundamental and second harmonic waves in water

    NASA Astrophysics Data System (ADS)

    Zhang, Shuzeng; Jeong, Hyunjo; Cho, Sungjong; Li, Xiongbing

    2016-02-01

    Attenuation corrections in nonlinear acoustics play an important role in the study of nonlinear fluids, biomedical imaging, and solid material characterization. Measuring attenuation coefficients in a nonlinear regime is not easy because they depend on the source pressure and require accurate diffraction corrections. In this work, the attenuation coefficients of the fundamental and second harmonic waves, which arise from absorption in water, are measured in nonlinear ultrasonic experiments. Based on the quasilinear theory of the KZK equation, the nonlinear sound field equations are derived and the diffraction correction terms are extracted. The measured sound pressure amplitudes are first adjusted for diffraction in order to reduce its impact on the measurement of the attenuation coefficients. The attenuation coefficients of the fundamental and second harmonics are then calculated from a nonlinear least-squares curve fit to the experimental data. The results show that attenuation coefficients in the nonlinear condition depend on both frequency and source pressure, quite unlike the linear regime. At relatively low drive pressures, the attenuation coefficients increase linearly with frequency; at high drive pressures, however, they grow nonlinearly. As the diffraction corrections are obtained from the quasilinear theory, it is important to use an appropriate source pressure for accurate attenuation measurements.
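
    Once the diffraction correction has been applied, the final step above reduces to fitting a decay law to the corrected amplitudes. The SciPy sketch below shows that fitting step only, using a simple single-term exp(-alpha*z) model with illustrative names; it is not the full KZK-based analysis of the fundamental and second harmonic.

      import numpy as np
      from scipy.optimize import curve_fit

      def fit_attenuation_coefficient(z_m, pressure_corrected):
          """Least-squares estimate of an attenuation coefficient alpha (Np/m)
          from diffraction-corrected pressure amplitudes measured at distances
          z_m, assuming p(z) = p0 * exp(-alpha * z)."""
          def model(z, p0, alpha):
              return p0 * np.exp(-alpha * z)
          p0_guess = float(np.max(pressure_corrected))
          popt, _ = curve_fit(model, z_m, pressure_corrected, p0=(p0_guess, 0.1))
          return popt[1]   # alpha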

  4. Evaluation of imaging technologies to correct for photon attenuation in the overlying tissue for in vivo bone strontium measurements

    NASA Astrophysics Data System (ADS)

    Heirwegh, C. M.; Chettle, D. R.; Pejović-Milić, A.

    2010-02-01

    The interpretation of measurements of bone strontium in vivo using energy dispersive x-ray fluorescence spectroscopy is presently hindered by overlying skin and soft-tissue absorption of the strontium x-rays. The use of imaging technologies to measure the overlying soft-tissue thickness at the index finger measuring site might allow correction of the strontium reading to estimate its concentration in bone. An examination of magnetic resonance (MR), computed tomography (CT) and high-frequency ultrasound (US) imaging technologies revealed that 55 MHz US had the smallest range of measurement uncertainty at 3.2% followed by 1 Tesla MR, 25 MHz US, 8 MHz US and CT at 4.3, 5.4, 6.6 and 7.1% uncertainty, respectively. Of these, only CT imaging appeared to underestimate total thickness (p < 0.05). Furthermore, an inter-study comparison on the accuracy of US measurements of the overlying tissue thickness at finger and ankle in nine subjects was investigated. The 8 MHz US system used in prior in vivo experiments was found to perform satisfactorily in a repeat study of ankle measurements, but indicated that finger thickness measurements may have been misread in previous studies by up to 17.7% (p < 0.025). Repeat ankle measurements were not significantly different from initial measurements at 2.2% difference.

  5. Dependence of Yb-169 absorbed dose energy correction factors on self-attenuation in source material and photon buildup in water

    SciTech Connect

    Medich, David C.; Munro, John J. III

    2010-05-15

    depths of 1 and 10 cm and angles of 0 deg. and 180 deg. This was in contrast to that of the Model M-19 Ir-192 source which exhibited approximately 3.5%-4.4% variation in its energy correction factors from phantom depths of 0.5-10 cm. The absorbed dose energy correction factor for the Ir-192 source, on the other hand, was independent of angle to within 1%. Conclusions: The application of a single energy correction factor for Yb-169 TLD based dosimetry would introduce a high degree of measurement uncertainty that may not be reasonable for the clinical characterization of a brachytherapy source; rather, an absorbed dose energy correction function will need to be developed for these sources. This correction function should be specific to each source model, type of TLD used, and to the experimental setup to obtain accurate and precise dosimetric measurements.

  6. ALPHA ATTENUATION DUE TO DUST LOADING

    SciTech Connect

    Dailey, A; Dennis Hadlock, D

    2007-08-09

    Previous studies had been done in order to show the attenuation of alpha particles in filter media. These studies provided an accurate correction for this attenuation, but there had not yet been a study with sufficient results to properly correct for attenuation due to dust loading on the filters. At the Savannah River Site, filter samples are corrected for attenuation due to dust loading at 20%. Depending on the facility the filter comes from and the duration of the sampling period, the proper correction factor may vary. The objective of this study was to determine self-absorption curves for each of three counting instruments. Prior work indicated significant decreases in alpha count rate (as much as 38%) due to dust loading, especially on filters from facilities where sampling takes place over long intervals. The alpha count rate decreased because of a decrease in the energy of the alpha particles. The study performed resulted in a set of alpha absorption curves for each of three detectors. This study also took into account the effects of the geometry differences among the different counting instruments used.

  7. A generalized method of converting CT image to PET linear attenuation coefficient distribution in PET/CT imaging

    NASA Astrophysics Data System (ADS)

    Wang, Lu; Wu, Li-Wei; Wei, Le; Gao, Juan; Sun, Cui-Li; Chai, Pei; Li, Dao-Wu

    2014-02-01

    The accuracy of attenuation correction in positron emission tomography scanners depends mainly on deriving the reliable 511-keV linear attenuation coefficient distribution in the scanned objects. In the PET/CT system, the linear attenuation distribution is usually obtained from the intensities of the CT image. However, the intensities of the CT image relate to the attenuation of photons in an energy range of 40 keV-140 keV. Before implementing PET attenuation correction, the intensities of CT images must be transformed into the PET 511-keV linear attenuation coefficients. However, the CT scan parameters can affect the effective energy of CT X-ray photons and thus affect the intensities of the CT image. Therefore, for PET/CT attenuation correction, it is crucial to determine the conversion curve with a given set of CT scan parameters and convert the CT image into a PET linear attenuation coefficient distribution. A generalized method is proposed for converting a CT image into a PET linear attenuation coefficient distribution. Instead of some parameter-dependent phantom calibration experiments, the conversion curve is calculated directly by employing the consistency conditions to yield the most consistent attenuation map with the measured PET data. The method is evaluated with phantom experiments and small animal experiments. In phantom studies, the estimated conversion curve fits the true attenuation coefficients accurately, and accurate PET attenuation maps are obtained by the estimated conversion curves and provide nearly the same correction results as the true attenuation map. In small animal studies, a more complicated attenuation distribution of the mouse is obtained successfully to remove the attenuation artifact and improve the PET image contrast efficiently.

  8. MO-G-17A-03: MR-Based Cortical Bone Segmentation for PET Attenuation Correction with a Non-UTE 3D Fast GRE Sequence

    SciTech Connect

    Ai, H; Pan, T; Hwang, K

    2014-06-15

    Purpose: To determine the feasibility of identifying cortical bone on MR images with a short-TE 3D fast-GRE sequence for attenuation correction of PET data in PET/MR. Methods: A water-fat-bone phantom was constructed with two pieces of beef shank. MR scans were performed on a 3T MR scanner (GE Discovery™ MR750). A 3D GRE sequence was first employed to measure the level of residual signal in cortical bone (TE{sub 1}/TE{sub 2}/TE{sub 3}=2.2/4.4/6.6ms, TR=20ms, flip angle=25°). For cortical bone segmentation, a 3D fast-GRE sequence (TE/TR=0.7/1.9ms, acquisition voxel size=2.5×2.5×3mm{sup 3}) was implemented along with a 3D Dixon sequence (TE{sub 1}/TE{sub 2}/TR=1.2/2.3/4.0ms, acquisition voxel size=1.25×1.25×3mm{sup 3}) for water/fat imaging. Flip angle (10°), acquisition bandwidth (250kHz), FOV (480×480×144mm{sup 3}) and reconstructed voxel size (0.94×0.94×1.5mm{sup 3}) were kept the same for both sequences. Soft tissue and fat tissue were first segmented on the reconstructed water/fat image. A tissue mask was created by combining the segmented water/fat masks, which was then applied on the fast-GRE image (MRFGRE). A second mask was created to remove the Gibbs artifacts present in regions in close vicinity to the phantom. MRFGRE data was smoothed with a 3D anisotropic diffusion filter for noise reduction, after which cortical bone and air was separated using a threshold determined from the histogram. Results: There is signal in the cortical bone region in the 3D GRE images, indicating the possibility of separating cortical bone and air based on signal intensity from short-TE MR image. The acquisition time for the 3D fast-GRE sequence was 17s, which can be reduced to less than 10s with parallel imaging. The attenuation image created from water-fat-bone segmentation is visually similar compared to reference CT. Conclusion: Cortical bone and air can be separated based on intensity in MR image with a short-TE 3D fast-GRE sequence. Further research is required

  9. Comparison of effective dose and lifetime risk of cancer incidence of CT attenuation correction acquisitions and radiopharmaceutical administration for myocardial perfusion imaging

    PubMed Central

    Szczepura, K; Hogg, P

    2014-01-01

    Objective: To measure the organ dose and calculate effective dose from CT attenuation correction (CTAC) acquisitions from four commonly used gamma camera single photon emission CT/CT systems. Methods: CTAC dosimetry data were collected using thermoluminescent dosemeters on GE Healthcare's Infinia™ Hawkeye™ (GE Healthcare, Buckinghamshire, UK) four- and single-slice systems, Siemens Symbia™ T6 (Siemens Healthcare, Erlangen, Germany) and the Philips Precedence (Philips Healthcare, Amsterdam, Netherlands). Organ and effective dose from the administration of 99mTc-tetrofosmin and 99mTc-sestamibi were calculated using International Commission on Radiological Protection reports 80 and 106. Using these data, the lifetime biological risk was calculated. Results: The Siemens Symbia gave the lowest CTAC dose (1.8 mSv) followed by the GE Infinia Hawkeye single-slice (1.9 mSv), GE Infinia Hawkeye four-slice (2.5 mSv) and Philips Precedence v. 3.0. Doses were significantly lower than the calculated doses from radiopharmaceutical administration (11 and 14 mSv for 99mTc-tetrofosmin and 99mTc-sestamibi, respectively). Overall lifetime biological risks were lower, which suggests that using CTAC data posed minimal risk to the patient. Comparison of data for breast tissue demonstrated a higher risk than that from the radiopharmaceutical administration. Conclusion: CTAC doses were confirmed to be much lower than those from radiopharmaceutical administration. The localized nature of the CTAC exposure, compared with the radiopharmaceutical biological distribution, indicated that the dose and risk to the breast were higher. Advances in knowledge: This research demonstrated that CTAC is a comparatively low-dose acquisition. However, it has been shown that there is increased risk for breast tissue, especially in younger patients. As per legislation, justification is required and CTAC should only be used in situations that demonstrate sufficient net benefit. PMID:24998249
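
    Effective dose in studies like this one is obtained by weighting the measured organ equivalent doses with tissue weighting factors and summing. The sketch below uses the ICRP Publication 103 factors (with the remainder tissues pooled into one group) and a hypothetical dictionary of TLD-derived organ doses to illustrate the arithmetic; any organ not supplied simply does not contribute.

      # ICRP Publication 103 tissue weighting factors (remainder pooled)
      ICRP103_WT = {
          "red_bone_marrow": 0.12, "colon": 0.12, "lung": 0.12, "stomach": 0.12,
          "breast": 0.12, "remainder": 0.12, "gonads": 0.08, "bladder": 0.04,
          "oesophagus": 0.04, "liver": 0.04, "thyroid": 0.04, "bone_surface": 0.01,
          "brain": 0.01, "salivary_glands": 0.01, "skin": 0.01,
      }

      def effective_dose(organ_equivalent_doses_msv):
          """Effective dose E = sum_T w_T * H_T (mSv) from organ equivalent
          doses given as a dict keyed by the tissue names above."""
          return sum(ICRP103_WT[t] * h for t, h in organ_equivalent_doses_msv.items())

      # e.g. effective_dose({"breast": 3.1, "lung": 2.4, "red_bone_marrow": 1.0})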

  10. Accurate evaluations of the field shift and lowest-order QED correction for the ground 1{sup 1}S−states of some light two-electron ions

    SciTech Connect

    Frolov, Alexei M.; Wardlaw, David M.

    2014-09-14

    Mass-dependent and field shift components of the isotopic shift are determined to high accuracy for the ground 1{sup 1}S−states of some light two-electron Li{sup +}, Be{sup 2+}, B{sup 3+}, and C{sup 4+} ions. To determine the field components of these isotopic shifts we apply the Racah-Rosental-Breit formula. We also determine the lowest order QED corrections to the isotopic shifts for each of these two-electron ions.

  11. SU-E-I-86: Ultra-Low Dose Computed Tomography Attenuation Correction for Pediatric PET CT Using Adaptive Statistical Iterative Reconstruction (ASiR™)

    SciTech Connect

    Brady, S; Shulkin, B

    2015-06-15

    Purpose: To develop ultra-low dose computed tomography (CT) attenuation correction (CTAC) acquisition protocols for pediatric positron emission tomography CT (PET CT). Methods: A GE Discovery 690 PET CT hybrid scanner was used to investigate the change to quantitative PET and CT measurements when operated at ultra-low doses (10–35 mAs). CT quantitation: noise, low-contrast resolution, and CT numbers for eleven tissue substitutes were analyzed in-phantom. CT quantitation was analyzed to a reduction of 90% CTDIvol (0.39/3.64; mGy) radiation dose from baseline. To minimize noise infiltration, 100% adaptive statistical iterative reconstruction (ASiR) was used for CT reconstruction. PET images were reconstructed with the lower-dose CTAC iterations and analyzed for: maximum body weight standardized uptake value (SUVbw) of various diameter targets (range 8–37 mm), background uniformity, and spatial resolution. Radiation organ dose, as derived from patient exam size specific dose estimate (SSDE), was converted to effective dose using the standard ICRP report 103 method. Effective dose and CTAC noise magnitude were compared for 140 patient examinations (76 post-ASiR implementation) to determine relative patient population dose reduction and noise control. Results: CT numbers were constant to within 10% from the non-dose reduced CTAC image down to 90% dose reduction. No change in SUVbw, background percent uniformity, or spatial resolution for PET images reconstructed with CTAC protocols reconstructed with ASiR and down to 90% dose reduction. Patient population effective dose analysis demonstrated relative CTAC dose reductions between 62%–86% (3.2/8.3−0.9/6.2; mSv). Noise magnitude in dose-reduced patient images increased but was not statistically different from pre dose-reduced patient images. Conclusion: Using ASiR allowed for aggressive reduction in CTAC dose with no change in PET reconstructed images while maintaining sufficient image quality for co

  12. Whole-Body PET/MR Imaging: Quantitative Evaluation of a Novel Model-Based MR Attenuation Correction Method Including Bone

    PubMed Central

    Paulus, Daniel H.; Quick, Harald H.; Geppert, Christian; Fenchel, Matthias; Zhan, Yiqiang; Hermosillo, Gerardo; Faul, David; Boada, Fernando; Friedman, Kent P.; Koesters, Thomas

    2016-01-01

    In routine whole-body PET/MR hybrid imaging, attenuation correction (AC) is usually performed by segmentation methods based on a Dixon MR sequence providing up to 4 different tissue classes. Because of the lack of bone information with the Dixon-based MR sequence, bone is currently considered as soft tissue. Thus, the aim of this study was to evaluate a novel model-based AC method that considers bone in whole-body PET/MR imaging. Methods The new method (“Model”) is based on a regular 4-compartment segmentation from a Dixon sequence (“Dixon”). Bone information is added using a model-based bone segmentation algorithm, which includes a set of prealigned MR image and bone mask pairs for each major body bone individually. Model was quantitatively evaluated on 20 patients who underwent whole-body PET/MR imaging. As a standard of reference, CT-based μ-maps were generated for each patient individually by nonrigid registration to the MR images based on PET/CT data. This step allowed for a quantitative comparison of all μ-maps based on a single PET emission raw dataset of the PET/MR system. Volumes of interest were drawn on normal tissue, soft-tissue lesions, and bone lesions; standardized uptake values were quantitatively compared. Results In soft-tissue regions with background uptake, the average bias of SUVs in background volumes of interest was 2.4% ± 2.5% and 2.7% ± 2.7% for Dixon and Model, respectively, compared with CT-based AC. For bony tissue, the −25.5% ± 7.9% underestimation observed with Dixon was reduced to −4.9% ± 6.7% with Model. In bone lesions, the average underestimation was −7.4% ± 5.3% and −2.9% ± 5.8% for Dixon and Model, respectively. For soft-tissue lesions, the biases were 5.1% ± 5.1% for Dixon and 5.2% ± 5.2% for Model. Conclusion The novel MR-based AC method for whole-body PET/MR imaging, combining Dixon-based soft-tissue segmentation and model-based bone estimation, improves PET quantification in whole-body hybrid PET

  13. CT-Based Attenuation Correction in Brain SPECT/CT Can Improve the Lesion Detectability of Voxel-Based Statistical Analyses

    PubMed Central

    Kato, Hiroki; Shimosegawa, Eku; Fujino, Koichi; Hatazawa, Jun

    2016-01-01

    Background Integrated SPECT/CT enables non-uniform attenuation correction (AC) using built-in CT instead of the conventional uniform AC. The effect of CT-based AC on voxel-based statistical analyses of brain SPECT findings has not yet been clarified. Here, we assessed differences in the detectability of regional cerebral blood flow (CBF) reduction using SPECT voxel-based statistical analyses based on the two types of AC methods. Subjects and Methods N-isopropyl-p-[123I]iodoamphetamine (IMP) CBF SPECT images were acquired for all the subjects and were reconstructed using 3D-OSEM with two different AC methods: Chang’s method (Chang’s AC) and the CT-based AC method. A normal database was constructed for the analysis using SPECT findings obtained for 25 healthy normal volunteers. Voxel-based Z-statistics were also calculated for SPECT findings obtained for 15 patients with chronic cerebral infarctions and 10 normal subjects. We assumed that an analysis with a higher specificity would likely produce a lower mean absolute Z-score for normal brain tissue, and a more sensitive voxel-based statistical analysis would likely produce a higher absolute Z-score in old infarct lesions, where the CBF was severely decreased. Results The inter-subject variation in the voxel values in the normal database was lower using CT-based AC, compared with Chang’s AC, for most of the brain regions. The absolute Z-score indicating a SPECT count reduction in infarct lesions was also significantly higher in the images reconstructed using CT-based AC, compared with Chang’s AC (P = 0.003). The mean absolute value of the Z-score in the 10 intact brains was significantly lower in the images reconstructed using CT-based AC than in those reconstructed using Chang’s AC (P = 0.005). Conclusions Non-uniform CT-based AC by integrated SPECT/CT significantly improved the sensitivity and specificity of the voxel-based statistical analyses for regional SPECT count reductions, compared with
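
    The Z-statistics underlying this comparison are computed voxel by voxel against the normal database. A minimal NumPy version is sketched below; it assumes the patient image and the database scans are already spatially and count normalised, which in practice is a substantial part of the pipeline, and the names are illustrative.

      import numpy as np

      def voxel_z_scores(patient_img, normal_db, eps=1e-6):
          """Voxel-wise Z-statistics of a normalised patient SPECT image against
          a normal database stacked as an array of shape (n_subjects, z, y, x):
          Z = (patient - mean_db) / sd_db."""
          mean_db = normal_db.mean(axis=0)
          sd_db = normal_db.std(axis=0, ddof=1)
          return (patient_img - mean_db) / np.maximum(sd_db, eps)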

  14. Band-structure calculations of noble-gas and alkali halide solids using accurate Kohn-Sham potentials with self-interaction correction

    SciTech Connect

    Li, Y.; Krieger, J.B.; Norman, M.R.; Iafrate, G.J.

    1991-11-15

    The optimized-effective-potential (OEP) method and a method developed recently by Krieger, Li, and Iafrate (KLI) are applied to the band-structure calculations of noble-gas and alkali halide solids employing the self-interaction-corrected (SIC) local-spin-density (LSD) approximation for the exchange-correlation energy functional. The resulting band gaps from both calculations are found to be in fair agreement with the experimental values. The discrepancies are typically within a few percent with results that are nearly the same as those of previously published orbital-dependent multipotential SIC calculations, whereas the LSD results underestimate the band gaps by as much as 40%. As in the LSD---and it is believed to be the case even for the exact Kohn-Sham potential---both the OEP and KLI predict valence-band widths which are narrower than those of experiment. In all cases, the KLI method yields essentially the same results as the OEP.

  15. Radiometric correction of scatterometric wind measurements

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Use of a spaceborne scatterometer to determine the ocean-surface wind vector requires accurate measurement of radar backscatter from ocean. Such measurements are hindered by the effect of attenuation in the precipitating regions over sea. The attenuation can be estimated reasonably well with the knowledge of brightness temperatures observed by a microwave radiometer. The NASA SeaWinds scatterometer is to be flown on the Japanese ADEOS2. The AMSR multi-frequency radiometer on ADEOS2 will be used to correct errors due to attenuation in the SeaWinds scatterometer measurements. Here we investigate the errors in the attenuation corrections. Errors would be quite small if the radiometer and scatterometer footprints were identical and filled with uniform rain. However, the footprints are not identical, and because of their size one cannot expect uniform rain across each cell. Simulations were performed with the SeaWinds scatterometer (13.4 GHz) and AMSR (18.7 GHz) footprints with gradients of attenuation. The study shows that the resulting wind speed errors after correction (using the radiometer) are small for most cases. However, variations in the degree of overlap between the radiometer and scatterometer footprints affect the accuracy of the wind speed measurements.

  16. Transmission-less attenuation estimation from time-of-flight PET histo-images using consistency equations

    NASA Astrophysics Data System (ADS)

    Li, Yusheng; Defrise, Michel; Metzler, Scott D.; Matej, Samuel

    2015-08-01

    In positron emission tomography (PET) imaging, attenuation correction with accurate attenuation estimation is crucial for quantitative patient studies. Recent research showed that the attenuation sinogram can be determined up to a scaling constant utilizing the time-of-flight information. The TOF-PET data can be naturally and efficiently stored in a histo-image without information loss, and the radioactive tracer distribution can be efficiently reconstructed using the DIRECT approaches. In this paper, we explore transmission-less attenuation estimation from TOF-PET histo-images. We first present the TOF-PET histo-image formation and the consistency equations in the histo-image parameterization, then we derive a least-squares solution for estimating the directional derivatives of the attenuation factors from the measured emission histo-images. Finally, we present a fast solver to estimate the attenuation factors from their directional derivatives using the discrete sine transform and fast Fourier transform while considering the boundary conditions. We find that the attenuation histo-images can be uniquely determined from the TOF-PET histo-images by considering boundary conditions. Since the estimate of the attenuation directional derivatives can be inaccurate for LORs tangent to the patient boundary, external sources, e.g. a ring or annulus source, might be needed to give an accurate estimate of the attenuation gradient for such LORs. The attenuation estimation from TOF-PET emission histo-images is demonstrated using simulated 2D TOF-PET data.
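
    The final step described, recovering the attenuation from estimates of its directional derivatives with a discrete sine transform, can be illustrated generically. The sketch below (assumed grid handling and function names, not the authors' implementation) integrates a 2D gradient field in the least-squares sense by solving a Poisson equation with zero Dirichlet boundaries via SciPy's DST.

```python
import numpy as np
from scipy.fft import dstn, idstn

def integrate_gradient_dst(gx, gy, dx=1.0):
    """Recover a 2-D field u from estimates of its partial derivatives gx, gy
    (least-squares integration) by solving lap(u) = div(g) with zero Dirichlet
    boundaries via the type-1 discrete sine transform.  This is only a generic
    sketch of the DST/FFT solver idea mentioned in the abstract."""
    ny, nx = gx.shape
    # divergence of the measured gradient field (simple centered differences)
    f = np.gradient(gx, dx, axis=1) + np.gradient(gy, dx, axis=0)
    F = dstn(f, type=1)
    j = np.arange(1, ny + 1)[:, None]
    i = np.arange(1, nx + 1)[None, :]
    # eigenvalues of the 2-D discrete Laplacian with Dirichlet boundaries
    denom = (2.0 * np.cos(np.pi * i / (nx + 1)) - 2.0 +
             2.0 * np.cos(np.pi * j / (ny + 1)) - 2.0) / dx**2
    return idstn(F / denom, type=1)
```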

  17. Whole-body FDG PET-MR oncologic imaging: pitfalls in clinical interpretation related to inaccurate MR-based attenuation correction.

    PubMed

    Attenberger, Ulrike; Catana, Ciprian; Chandarana, Hersh; Catalano, Onofrio A; Friedman, Kent; Schonberg, Stefan A; Thrall, James; Salvatore, Marco; Rosen, Bruce R; Guimaraes, Alexander R

    2015-08-01

    Simultaneous data collection for positron emission tomography and magnetic resonance imaging (PET/MR) is now a reality. While the full benefits of concurrently acquiring PET and MR data and the potential added clinical value are still being evaluated, initial studies have identified several important potential pitfalls in the interpretation of fluorodeoxyglucose (FDG) PET/MRI in oncologic whole-body imaging, the majority of which are related to errors in the attenuation maps created from the MR data. The purpose of this article was to present such pitfalls and artifacts using case examples, describe their etiology, and discuss strategies to overcome them. Using a case-based approach, we will illustrate artifacts related to (1) inaccurate bone tissue segmentation; (2) inaccurate segmentation of air cavities; (3) motion-induced misregistration; (4) RF coils in the PET field of view; (5) B0 field inhomogeneity; (6) B1 field inhomogeneity; (7) metallic implants; and (8) MR contrast agents. PMID:26025348

  18. Calibrating the X-ray attenuation of liquid water and correcting sample movement artefacts during in operando synchrotron X-ray radiographic imaging of polymer electrolyte membrane fuel cells.

    PubMed

    Ge, Nan; Chevalier, Stéphane; Hinebaugh, James; Yip, Ronnie; Lee, Jongmin; Antonacci, Patrick; Kotaka, Toshikazu; Tabuchi, Yuichiro; Bazylak, Aimy

    2016-03-01

    Synchrotron X-ray radiography, due to its high temporal and spatial resolutions, provides a valuable means for understanding the in operando water transport behaviour in polymer electrolyte membrane fuel cells. The purpose of this study is to address the specific artefact of imaging sample movement, which poses a significant challenge to synchrotron-based imaging for fuel cell diagnostics. Specifically, the impact of the micrometer-scale movement of the sample was determined, and a correction methodology was developed. At a photon energy level of 20 keV, a maximum movement of 7.5 µm resulted in a false water thickness of 0.93 cm (9% higher than the maximum amount of water that the experimental apparatus could physically contain). This artefact was corrected by image translations based on the relationship between the false water thickness value and the distance moved by the sample. The implementation of this correction method led to a significant reduction in false water thickness (to ∼0.04 cm). Furthermore, to account for inaccuracies in pixel intensities due to the scattering effect and higher harmonics, a calibration technique was introduced for the liquid water X-ray attenuation coefficient, which was found to be 0.657 ± 0.023 cm(-1) at 20 keV. The work presented in this paper provides valuable tools for artefact compensation and accuracy improvements for dynamic synchrotron X-ray imaging of fuel cells. PMID:26917148
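
    The conversion from transmitted intensity to water thickness is the Beer-Lambert step; below is a minimal sketch using the calibrated coefficient reported above (variable names are illustrative).

```python
import numpy as np

# Attenuation coefficient for liquid water at 20 keV calibrated in this work.
MU_WATER = 0.657  # cm^-1

def water_thickness_cm(i_wet, i_dry, mu=MU_WATER):
    """Liquid-water thickness along the beam from the Beer-Lambert law,
    t = -ln(I_wet / I_dry) / mu, where I_dry is the reference (dry-cell)
    transmitted intensity."""
    return -np.log(np.asarray(i_wet, dtype=float) / i_dry) / mu

# Example: a 1% drop in transmitted intensity corresponds to ~0.015 cm of water.
print(water_thickness_cm(0.99, 1.00))
```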

  19. Toward accurate thermochemistry of the {sup 24}MgH, {sup 25}MgH, and {sup 26}MgH molecules at elevated temperatures: Corrections due to unbound states

    SciTech Connect

    Szidarovszky, Tamás; Császár, Attila G.

    2015-01-07

    The total partition functions Q(T) and their first two moments Q{sup ′}(T) and Q{sup ″}(T), together with the isobaric heat capacities C{sub p}(T), are computed a priori for three major MgH isotopologues on the temperature range of T = 100–3000 K using the recent highly accurate potential energy curve, spin-rotation, and non-adiabatic correction functions of Henderson et al. [J. Phys. Chem. A 117, 13373 (2013)]. Nuclear motion computations are carried out on the ground electronic state to determine the (ro)vibrational energy levels and the scattering phase shifts. The effect of resonance states is found to be significant above about 1000 K and it increases with temperature. Even very short-lived states, due to their relatively large number, have significant contributions to Q(T) at elevated temperatures. The contribution of scattering states is around one fourth of that of resonance states but opposite in sign. Uncertainty estimates are given for the possible error sources, suggesting that all computed thermochemical properties have an accuracy better than 0.005% up to 1200 K. Between 1200 and 2500 K, the uncertainties can rise to around 0.1%, while between 2500 K and 3000 K, a further increase to 0.5% might be observed for Q{sup ″}(T) and C{sub p}(T), principally due to the neglect of excited electronic states. The accurate thermochemical data determined are presented in the supplementary material for the three isotopologues of {sup 24}MgH, {sup 25}MgH, and {sup 26}MgH at 1 K increments. These data, which differ significantly from older standard data, should prove useful for astronomical models incorporating thermodynamic properties of these species.
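
    The route from level energies to Q(T) and Cp(T) can be illustrated with a direct-sum sketch (placeholder energies and degeneracies; it reproduces neither the paper's resonance/scattering-state treatment nor its accuracy).

```python
import numpy as np

K_B = 0.6950348  # Boltzmann constant in cm^-1 / K
R = 8.31446      # molar gas constant, J mol^-1 K^-1

def partition_function(T, energies_cm, degeneracies):
    """Direct-sum rovibrational partition function Q(T) from level energies
    (cm^-1, relative to the lowest level) and their degeneracies."""
    E = np.asarray(energies_cm, dtype=float)[:, None]
    g = np.asarray(degeneracies, dtype=float)[:, None]
    return np.sum(g * np.exp(-E / (K_B * np.asarray(T, dtype=float))), axis=0)

def cp_ideal_gas(T, energies_cm, degeneracies, dT=0.01):
    """Isobaric heat capacity of an ideal diatomic gas,
    Cp = 5R/2 + d/dT [ R T^2 dlnQ/dT ], evaluated by finite differences."""
    def u_int(t):  # internal-energy contribution, J mol^-1
        lnQ = np.log(partition_function(np.array([t - dT, t + dT]),
                                        energies_cm, degeneracies))
        return R * t**2 * (lnQ[1] - lnQ[0]) / (2.0 * dT)
    return 5.0 * R / 2.0 + (u_int(T + dT) - u_int(T - dT)) / (2.0 * dT)
```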

  20. Use of Ga for mass bias correction for the accurate determination of copper isotope ratio in the NIST SRM 3114 Cu standard and geological samples by MC-ICP MS

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Zhou, L.; Tong, S.

    2015-12-01

    The absolute determination of the Cu isotope ratio in NIST SRM 3114 based on a regression mass bias correction model is performed for the first time with NIST SRM 944 Ga as the calibrant. A value of 0.4471±0.0013 (2SD, n=37) for the 65Cu/63Cu ratio was obtained, with a value of +0.18±0.04‰ (2SD, n=5) for δ65Cu relative to NIST 976. The availability of the NIST SRM 3114 material, now with an absolute value of the 65Cu/63Cu ratio and a δ65Cu value relative to NIST 976, makes it suitable as a new candidate reference material for Cu isotope studies. In addition, a protocol is described for the accurate and precise determination of δ65Cu values of geological reference materials. Purification of Cu from the sample matrix was performed using the AG MP-1M Bio-Rad resin. The column recovery for geological samples was found to be 100±2% (2SD, n=15). A modified method of standard-sample bracketing with internal normalization for mass bias correction was employed by adding natural Ga to both the sample and the solution of NIST SRM 3114, which was used as the bracketing standard. The absolute value of 0.4471±0.0013 (2SD, n=37) for 65Cu/63Cu quantified in this study was used to calibrate the 69Ga/71Ga ratio in the two adjacent bracketing standards of SRM 3114; their average 69Ga/71Ga value was then used to correct the 65Cu/63Cu ratio in the sample. Measured δ65Cu values of 0.18±0.04‰ (2SD, n=20), 0.13±0.04‰ (2SD, n=9), 0.08±0.03‰ (2SD, n=6), 0.01±0.06‰ (2SD, n=4) and 0.26±0.04‰ (2SD, n=7) were obtained for the five geological reference materials BCR-2, BHVO-2, AGV-2, BIR-1a, and GSP-2, respectively, in agreement with values obtained in previous studies.
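
    The core of the internal-normalization step is an exponential-law mass bias correction. The sketch below is a generic illustration only: the reference masses and the 69Ga/71Ga value are placeholders (in the paper that ratio is itself calibrated from the bracketing SRM 3114 measurements), and the sign convention must match the laboratory's; it is not the authors' data-reduction code.

```python
import numpy as np

# Isotope masses (u), rounded; treat as illustrative constants.
M_CU63, M_CU65 = 62.9296, 64.9278
M_GA69, M_GA71 = 68.9256, 70.9247

GA69_71_TRUE = 1.50676  # placeholder, close to the IUPAC natural 69Ga/71Ga

def correct_cu_ratio(r_cu_meas, r_ga_meas, r_ga_true=GA69_71_TRUE):
    """Exponential-law mass bias correction of a measured 65Cu/63Cu ratio
    using admixed Ga as the internal calibrant:
        r_meas = r_true * (m_num / m_den) ** beta   (same beta for Ga and Cu)
    """
    beta = np.log(r_ga_meas / r_ga_true) / np.log(M_GA69 / M_GA71)
    return r_cu_meas / (M_CU65 / M_CU63) ** beta
```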

  1. Control algorithms for dynamic attenuators

    SciTech Connect

    Hsieh, Scott S.; Pelc, Norbert J.

    2014-06-15

    modulation) without increasing peak variance. The 15-element piecewise-linear dynamic attenuator reduces dose by an average of 42%, and the perfect attenuator reduces dose by an average of 50%. Improvements in peak variance are several times larger than improvements in mean variance. Heuristic control eliminates the need for a prescan. For the piecewise-linear attenuator, the cost of heuristic control is an increase in dose of 9%. The proposed iterated WMV minimization produces results that are within a few percent of the true solution. Conclusions: Dynamic attenuators show potential for significant dose reduction. A wide class of dynamic attenuators can be accurately controlled using the described methods.

  2. Modeling transmission and scatter for photon beam attenuators.

    PubMed

    Ahnesjö, A; Weber, L; Nilsson, P

    1995-11-01

    The development of treatment planning methods in radiation therapy requires dose calculation methods that are both accurate and general enough to provide a dose per unit monitor setting for a broad variety of fields and beam modifiers. The purpose of this work was to develop models for calculation of scatter and transmission for photon beam attenuators such as compensating filters, wedges, and block trays. The attenuation of the beam is calculated using a spectrum of the beam, and a correction factor based on attenuation measurements. Small angle coherent scatter and electron binding effects on scattering cross sections are considered by use of a correction factor. Quality changes in beam penetrability and energy fluence to dose conversion are modeled by use of the calculated primary beam spectrum after passage through the attenuator. The beam spectra are derived by the depth dose effective method, i.e., by minimizing the difference between measured and calculated depth dose distributions, where the calculated distributions are derived by superposing data from a database for monoenergetic photons. The attenuator scatter is integrated over the area viewed from the calculation point of view using first scatter theory. Calculations are simplified by replacing the energy and angular-dependent cross-section formulas with the forward scatter constant r2(0) and a set of parametrized correction functions. The set of corrections include functions for the Compton energy loss, scatter attenuation, and secondary bremsstrahlung production. The effect of charged particle contamination is bypassed by avoiding use of dmax for absolute dose calibrations. The results of the model are compared with scatter measurements in air for copper and lead filters and with dose to a water phantom for lead filters for 4 and 18 MV. For attenuated beams, downstream of the buildup region, the calculated results agree with measurements on the 1.5% level. The accuracy was slightly less in situations

  3. Modeling of polychromatic attenuation using computed tomography reconstructed images

    NASA Technical Reports Server (NTRS)

    Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.

    1999-01-01

    This paper presents a procedure for estimating an accurate model of the CT imaging process including spectral effects. As raw projection data are typically unavailable to the end-user, we adopt a post-processing approach that utilizes the reconstructed images themselves. This approach includes errors from x-ray scatter and the nonidealities of the built-in soft tissue correction into the beam characteristics, which is crucial to beam hardening correction algorithms that are designed to be applied directly to CT reconstructed images. We formulate this approach as a quadratic programming problem and propose two different methods, dimension reduction and regularization, to overcome ill conditioning in the model. For the regularization method we use a statistical procedure, Cross Validation, to select the regularization parameter. We have constructed step-wedge phantoms to estimate the effective beam spectrum of a GE CT-I scanner. Using the derived spectrum, we computed the attenuation ratios for the wedge phantoms and found that the worst case modeling error is less than 3% of the corresponding attenuation ratio. We have also built two test (hybrid) phantoms to evaluate the effective spectrum. Based on these test phantoms, we have shown that the effective beam spectrum provides an accurate model for the CT imaging process. Last, we used a simple beam hardening correction experiment to demonstrate the effectiveness of the estimated beam profile for removing beam hardening artifacts. We hope that this estimation procedure will encourage more independent research on beam hardening corrections and will lead to the development of application-specific beam hardening correction algorithms.
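
    One standard way to realize the regularized estimation described above is non-negative least squares with Tikhonov augmentation; the sketch below is a generic illustration of that route, not the paper's exact quadratic program or its cross-validated choice of the regularization parameter.

```python
import numpy as np
from scipy.optimize import nnls

def estimate_spectrum(thicknesses, transmissions, mu_of_E, lam=1e-3):
    """Estimate non-negative effective-spectrum weights w_j (one per energy
    bin) from step-wedge transmission measurements by solving
        minimize || A w - T ||^2 + lam * || w ||^2,   w >= 0,
    with A[i, j] = exp(-mu_of_E[j] * thicknesses[i]).  Inputs are illustrative:
    wedge thicknesses (cm), measured transmitted fractions, and tabulated
    attenuation coefficients of the wedge material per energy bin (cm^-1)."""
    t = np.asarray(thicknesses, dtype=float)[:, None]
    mu = np.asarray(mu_of_E, dtype=float)[None, :]
    A = np.exp(-mu * t)
    n = A.shape[1]
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])       # Tikhonov rows
    b_aug = np.concatenate([np.asarray(transmissions, float), np.zeros(n)])
    w, _ = nnls(A_aug, b_aug)
    return w / w.sum()  # normalize the spectrum weights to unit area
```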

  4. An analytical approach to quantitative reconstruction of non-uniform attenuated brain SPECT.

    PubMed

    Liang, Z; Ye, J; Harrington, D P

    1994-11-01

    An analytical approach to quantitative brain SPECT (single-photon-emission computed tomography) with non-uniform attenuation is developed. The approach accurately formulates the projection-transform equation as a summation of primary- and scatter-photon contributions. The scatter contribution can be estimated using the multiple-energy-window samples and removed from the primary-energy-window data by subtraction. The approach models the primary contribution as a convolution of the attenuated source and the detector-response kernel at a constant depth from the detector with the central-ray approximation. The attenuated Radon transform of the source can be efficiently deconvolved using the depth-frequency relation. The approach inverts the attenuated Radon transform exactly by Fourier transforms and series expansions. The performance of the analytical approach was studied for both uniform- and non-uniform-attenuation cases, and compared to the conventional FBP (filtered-backprojection) method by computer simulations. A patient brain X-ray image was acquired by a CT (computed-tomography) scanner and converted to the object-specific attenuation map for 140 keV energy. The mathematical Hoffman brain phantom was used to simulate the emission source and was resized such that it was completely surrounded by the skull of the CT attenuation map. The detector-response kernel was obtained from measurements of a point source at several depths in air from a parallel-hole collimator of a SPECT camera. The projection data were simulated from the object-specific attenuating source including the depth-dependent detector response. Quantitative improvement (>5%) in reconstructing the data was demonstrated with the nonuniform attenuation compensation, as compared to the uniform attenuation correction and the conventional FBP reconstruction. The computing time was less than 5 min on an HP/730 desktop computer for an image array of 128 × 128 × 32 reconstructed from 128 projections of 128 × 32 size. PMID

  5. General relationships between ultrasonic attenuation and dispersion

    NASA Technical Reports Server (NTRS)

    Odonnell, M.; Jaynes, E. T.; Miller, J. G.

    1978-01-01

    General relationships between the ultrasonic attenuation and dispersion are presented. The validity of these nonlocal relationships hinges only on the properties of causality and linearity, and does not depend upon details of the mechanism responsible for the attenuation and dispersion. Approximate, nearly local relationships are presented and are demonstrated to predict accurately the ultrasonic dispersion in solutions of hemoglobin from the results of attenuation measurements.
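
    For reference, the nearly local form most often quoted in the Kramers-Kronig ultrasound literature associated with this line of work ties the dispersion directly to the attenuation coefficient; the expression below is taken from that general literature rather than transcribed from this paper, so the exact normalization should be checked against the original.

```latex
% Nearly local Kramers-Kronig relation between phase velocity c(omega) and
% attenuation alpha(omega); quoted from the general literature, not this paper.
\[
  \frac{dc(\omega)}{d\omega} \;\approx\; \frac{2\,c^{2}(\omega)}{\pi\,\omega^{2}}\,\alpha(\omega),
  \qquad\text{equivalently}\qquad
  \frac{1}{c(\omega)} - \frac{1}{c(\omega_{0})} \;\approx\;
  -\frac{2}{\pi}\int_{\omega_{0}}^{\omega}\frac{\alpha(\omega')}{\omega'^{2}}\,d\omega' .
\]
```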

  6. Evaluation of QNI corrections in porous media applications

    NASA Astrophysics Data System (ADS)

    Radebe, M. J.; de Beer, F. C.; Nshimirimana, R.

    2011-09-01

    Qualitative measurement using digital neutron imaging has been explored more thoroughly than accurate quantitative measurement. The reason for this bias is that quantitative measurements require correction for background and material scatter, and for neutron spectral effects. The Quantitative Neutron Imaging (QNI) software package has resulted from efforts at the Paul Scherrer Institute, Helmholtz Zentrum Berlin (HZB) and Necsa to correct for these effects, while the sample-detector distance (SDD) principle has previously been demonstrated as a measure to eliminate the material scatter effect. This work evaluates the capabilities of the QNI software package to produce accurate quantitative results on specific characteristics of porous media, and its role in the nondestructive quantification of material with and without calibration. The work further complements QNI's abilities by the use of different SDDs. Studies of the effective porosity (%) of mortar and the attenuation coefficient of water using QNI and the SDD principle are reported.

  7. Improved Background Corrections for Uranium Holdup Measurements

    SciTech Connect

    Oberer, R.B.; Gunn, C.A.; Chiang, L.G.

    2004-06-21

    In the original Generalized Geometry Holdup (GGH) model, all holdup deposits were modeled as points, lines, and areas[1, 5]. Two improvements[4] were recently made to the GGH model and are currently in use at the Y-12 National Security Complex. These two improvements are the finite-source correction CF{sub g} and the self-attenuation correction. The finite-source correction corrects the average detector response for the width of point and line geometries which in effect, converts points and lines into areas. The result of a holdup measurement of an area deposit is a density-thickness which is converted to mass by multiplying it by the area of the deposit. From the measured density-thickness, the true density-thickness can be calculated by correcting for the material self-attenuation. Therefore the self-attenuation correction is applied to finite point and line deposits as well as areas. This report demonstrates that the finite-source and self-attenuation corrections also provide a means to better separate the gamma rays emitted by the material from the gamma rays emitted by background sources for an improved background correction. Currently, the measured background radiation is attenuated for equipment walls in the case of area deposits but not for line and point sources. The measured background radiation is not corrected for attenuation by the uranium material. For all of these cases, the background is overestimated which causes a negative bias in the measurement. The finite-source correction and the self-attenuation correction will allow the correction of the measured background radiation for both the equipment attenuation and material attenuation for area sources as well as point and line sources.
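
    The self-attenuation correction for an area deposit follows from the standard uniform-slab model; the sketch below is a generic illustration (not the GGH implementation) of the correction factor and of the iteration needed because the correction depends on the unknown true density-thickness.

```python
import numpy as np

def self_attenuation_cf(mu_rho, areal_density):
    """Self-attenuation correction factor for a uniform slab deposit viewed
    face-on: observed = true * (1 - exp(-x)) / x with x = mu_rho (cm^2/g)
    times the density-thickness (g/cm^2).  The correction factor is the
    reciprocal, x / (1 - exp(-x)), which tends to 1 as x -> 0."""
    x = np.maximum(np.asarray(mu_rho * areal_density, dtype=float), 1e-12)
    return x / -np.expm1(-x)

def true_areal_density(measured, mu_rho, n_iter=20):
    """Iteratively recover the true density-thickness from the measured
    (attenuated) one by fixed-point iteration."""
    rho_t = measured
    for _ in range(n_iter):
        rho_t = measured * self_attenuation_cf(mu_rho, rho_t)
    return rho_t
```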

  8. Quantitative SPECT reconstruction using CT-derived corrections

    NASA Astrophysics Data System (ADS)

    Willowson, Kathy; Bailey, Dale L.; Baldock, Clive

    2008-06-01

    A method for achieving quantitative single-photon emission computed tomography (SPECT) based upon corrections derived from x-ray computed tomography (CT) data is presented. A CT-derived attenuation map is used to perform transmission-dependent scatter correction (TDSC) in conjunction with non-uniform attenuation correction. The original CT data are also utilized to correct for partial volume effects in small volumes of interest. The accuracy of the quantitative technique has been evaluated with phantom experiments and clinical lung ventilation/perfusion SPECT/CT studies. A comparison of calculated values with the known total activities and concentrations in a mixed-material cylindrical phantom, and in liver and cardiac inserts within an anthropomorphic torso phantom, produced accurate results. The total activity in corrected ventilation-subtracted perfusion images was compared to the calibrated injected dose of [99mTc]-MAA (macro-aggregated albumin). The average difference over 12 studies between the known and calculated activities was found to be -1%, with a range of ±7%.
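
    A common way to obtain the CT-derived attenuation map at the 140 keV emission energy of 99mTc is a bilinear conversion of CT numbers; the sketch below uses typical literature coefficients as placeholders and is not the calibration used in this study.

```python
import numpy as np

MU_WATER_140KEV = 0.153  # cm^-1, approximate narrow-beam value for water at 140 keV

def hu_to_mu_140kev(hu, mu_water=MU_WATER_140KEV, bone_slope=0.000146):
    """Bilinear conversion of CT numbers (HU) to linear attenuation
    coefficients at 140 keV: a water-scaled branch for HU <= 0 and a
    steeper bone branch for HU > 0.  Coefficients are placeholders."""
    hu = np.asarray(hu, dtype=float)
    soft = mu_water * (1.0 + hu / 1000.0)   # air / soft-tissue branch
    bone = mu_water + bone_slope * hu       # bone branch (HU > 0)
    return np.where(hu <= 0.0, np.clip(soft, 0.0, None), bone)
```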

  9. High-Precision Tungsten Isotopic Analysis by Multicollection Negative Thermal Ionization Mass Spectrometry Based on Simultaneous Measurement of W and (18)O/(16)O Isotope Ratios for Accurate Fractionation Correction.

    PubMed

    Trinquier, Anne; Touboul, Mathieu; Walker, Richard J

    2016-02-01

    Determination of the (182)W/(184)W ratio to a precision of ± 5 ppm (2σ) is desirable for constraining the timing of core formation and other early planetary differentiation processes. However, WO3(-) analysis by negative thermal ionization mass spectrometry normally results in a residual correlation between the instrumental-mass-fractionation-corrected (182)W/(184)W and (183)W/(184)W ratios that is attributed to mass-dependent variability of O isotopes over the course of an analysis and between different analyses. A second-order correction using the (183)W/(184)W ratio relies on the assumption that this ratio is constant in nature. This may prove invalid, as has already been realized for other isotope systems. The present study utilizes simultaneous monitoring of the (18)O/(16)O and W isotope ratios to correct oxide interferences on a per-integration basis and thus avoid the need for a double normalization of W isotopes. After normalization of W isotope ratios to a pair of W isotopes, following the exponential law, no residual W-O isotope correlation is observed. However, there is a nonideal mass bias residual correlation between (182)W/(i)W and (183)W/(i)W with time. Without double normalization of W isotopes and on the basis of three or four duplicate analyses, the external reproducibility per session of (182)W/(184)W and (183)W/(184)W normalized to (186)W/(183)W is 5-6 ppm (2σ, 1-3 μg loads). The combined uncertainty per session is less than 4 ppm for (183)W/(184)W and less than 6 ppm for (182)W/(184)W (2σm) for loads between 3000 and 50 ng. PMID:26751903

  10. Impact of aerosols on the OMI tropospheric NO2 retrievals over industrialized regions: how accurate is the aerosol correction of cloud-free scenes via a simple cloud model?

    NASA Astrophysics Data System (ADS)

    Chimot, J.; Vlemmix, T.; Veefkind, J. P.; de Haan, J. F.; Levelt, P. F.

    2015-08-01

    The Ozone Monitoring Instrument (OMI) has provided daily global measurements of tropospheric NO2 for more than a decade. Numerous studies have drawn attention to the complexities related to measurements of tropospheric NO2 in the presence of aerosols. Fine particles affect the OMI spectral measurements and the length of the average light path followed by the photons. However, they are not explicitly taken into account in the current OMI tropospheric NO2 retrieval chain. Instead, the operational OMI O2-O2 cloud retrieval algorithm is applied both to cloudy scenes and to cloud free scenes with aerosols present. This paper describes in detail the complex interplay between the spectral effects of aerosols, the OMI O2-O2 cloud retrieval algorithm and the impact on the accuracy of the tropospheric NO2 retrievals through the computed Air Mass Factor (AMF) over cloud-free scenes. Collocated OMI NO2 and MODIS Aqua aerosol products are analysed over East China, an industrialized area. In addition, aerosol effects on the tropospheric NO2 AMF and the retrieval of OMI cloud parameters are simulated. Both the observation-based and the simulation-based approach demonstrate that the retrieved cloud fraction linearly increases with increasing Aerosol Optical Thickness (AOT), but the magnitude of this increase depends on the aerosol properties and surface albedo. This increase is induced by the additional scattering effects of aerosols which enhance the scene brightness. The decreasing effective cloud pressure with increasing AOT represents primarily the absorbing effects of aerosols. The study cases show that the actual aerosol correction based on the implemented OMI cloud model results in biases between -20 and -40 % for the DOMINO tropospheric NO2 product in cases of high aerosol pollution (AOT ≥ 0.6) and elevated particles. On the contrary, when aerosols are relatively close to the surface or mixed with NO2, aerosol correction based on the cloud model results in

  11. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: An accurate correction scheme for electrostatic finite-size effects

    PubMed Central

    Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.; Hünenberger, Philippe H.

    2013-01-01

    The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges −5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol−1) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB

  12. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: An accurate correction scheme for electrostatic finite-size effects

    SciTech Connect

    Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.; Hünenberger, Philippe H.

    2013-11-14

    The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges −5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol{sup −1}) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non

  13. Body Deformation Correction for SPECT Imaging

    PubMed Central

    Gu, Songxiang; McNamara, Joseph E.; Mitra, Joyeeta; Gifford, Howard C.; Johnson, Karen; Gennert, Michael A.; King, Michael A.

    2010-01-01

    Patient motion degrades the quality of SPECT studies. Body bend and twist are types of patient deformation which may occur during SPECT imaging and which have generally been ignored in SPECT motion correction strategies. To correct for these types of motion, we propose a deformation model and its inclusion within an iterative reconstruction algorithm. Two experiments were conducted to investigate the applicability of our model. In the first experiment, the return of the postmotion-compensation locations of markers on the body-surface of a volunteer to approximate their original coordinates is used to examine our method of estimating the parameters of our model and the parameters’ use in undoing deformation. The second experiment employed simulated projections of the MCAT phantom formed using an analytical projector which includes attenuation and distance-dependent resolution to investigate applications of our model in reconstruction. We demonstrate in the simulation studies that twist and bend can significantly degrade SPECT image quality visually. Our correction strategy is shown to be able to greatly diminish the degradation seen in the slices, provided the parameters are estimated accurately. We view this work as a first step towards being able to estimate and correct patient deformation based on information obtained from marker tracking data. PMID:20336188

  14. Impact of aerosols on the OMI tropospheric NO2 retrievals over industrialized regions: how accurate is the aerosol correction of cloud-free scenes via a simple cloud model?

    NASA Astrophysics Data System (ADS)

    Chimot, J.; Vlemmix, T.; Veefkind, J. P.; de Haan, J. F.; Levelt, P. F.

    2016-02-01

    The Ozone Monitoring Instrument (OMI) has provided daily global measurements of tropospheric NO2 for more than a decade. Numerous studies have drawn attention to the complexities related to measurements of tropospheric NO2 in the presence of aerosols. Fine particles affect the OMI spectral measurements and the length of the average light path followed by the photons. However, they are not explicitly taken into account in the current operational OMI tropospheric NO2 retrieval chain (DOMINO - Derivation of OMI tropospheric NO2) product. Instead, the operational OMI O2 - O2 cloud retrieval algorithm is applied both to cloudy and to cloud-free scenes (i.e. clear sky) dominated by the presence of aerosols. This paper describes in detail the complex interplay between the spectral effects of aerosols in the satellite observation and the associated response of the OMI O2 - O2 cloud retrieval algorithm. Then, it evaluates the impact on the accuracy of the tropospheric NO2 retrievals through the computed Air Mass Factor (AMF) with a focus on cloud-free scenes. For that purpose, collocated OMI NO2 and MODIS (Moderate Resolution Imaging Spectroradiometer) Aqua aerosol products are analysed over the strongly industrialized East China area. In addition, aerosol effects on the tropospheric NO2 AMF and the retrieval of OMI cloud parameters are simulated. Both the observation-based and the simulation-based approach demonstrate that the retrieved cloud fraction increases with increasing Aerosol Optical Thickness (AOT), but the magnitude of this increase depends on the aerosol properties and surface albedo. This increase is induced by the additional scattering effects of aerosols which enhance the scene brightness. The decreasing effective cloud pressure with increasing AOT primarily represents the shielding effects of the O2 - O2 column located below the aerosol layers. The study cases show that the aerosol correction based on the implemented OMI cloud model results in biases

  15. Global Attenuation Model of the Upper Mantle

    NASA Astrophysics Data System (ADS)

    Adenis, A.; Debayle, E.; Ricard, Y. R.

    2015-12-01

    We present a three-dimensional shear attenuation model based on a massive surface wave data set (372,629 Rayleigh waveforms analysed in the period range 50-300 s by Debayle and Ricard, 2012). For each seismogram, this approach yields depth-dependent path-average models of shear velocity and quality factor, and a set of fundamental and higher-mode dispersion and attenuation curves. We combine these attenuation measurements in a tomographic inversion after a careful rejection of the noisy data. We first remove data likely to be biased by a poor knowledge of the source. Then, assuming that waves from events with close epicenters recorded at the same station sample the same elastic and anelastic structure, we cluster the corresponding rays and average the attenuation measurements. Logarithms of the attenuations are regionalized using the non-linear least-squares formalism of Tarantola and Valette (1982), resulting in attenuation tomographic maps between 50 s and 300 s. After a first inversion, outliers are rejected and a second inversion yields a moderate variance reduction of about 20%. We correct the attenuation curves for focusing effects using the linearized ray theory of Woodhouse and Wong (1986). Accounting for focusing effects allows building tomographic maps with variance reductions reaching 40%. In the period range 120-200 s, the root mean square of the model perturbations increases from about 5% to 20%. Our 3-D attenuation models show strong agreement with surface tectonics at periods shorter than 200 s. Areas of low attenuation are located under continents and areas of high attenuation are associated with oceans. Surprisingly, although mid-oceanic ridges are located in attenuating regions, their signature, even if enhanced by focusing corrections, remains weaker than in the shear velocity models. Synthetic tests suggest that regularisation tends to damp the attenuation signature of ridges, which could therefore be underestimated.

  16. Rotary antenna attenuator

    NASA Technical Reports Server (NTRS)

    Dickinson, R. M.; Hardy, J. C.

    1969-01-01

    Radio frequency attenuator, having negligible insertion loss at minimum attenuation, can be used for making precise antenna gain measurements. It is small in size compared to a rotary-vane attenuator.

  17. Characterizing ultraviolet and infrared observational properties for galaxies. II. Features of attenuation law

    SciTech Connect

    Mao, Ye-Wei; Kong, Xu; Lin, Lin E-mail: xkong@ustc.edu.cn

    2014-07-01

    Variations in the attenuation law have a significant impact on observed spectral energy distributions for galaxies. As one important observational property for galaxies at ultraviolet and infrared wavelength bands, the correlation between the infrared-to-ultraviolet luminosity ratio and the ultraviolet color index (or ultraviolet spectral slope), i.e., the IRX-UV relation (or IRX-β relation), offers a widely used formula for correcting dust attenuation in galaxies, but its usability now appears to be in doubt because many studies have found considerable dispersion in this relation. In this paper, on the basis of spectral synthesis modeling and spatially resolved measurements of four nearby spiral galaxies, we provide an interpretation of the deviation in the IRX-UV relation with variations in the attenuation law. From both theoretical and observational viewpoints, two components in the attenuation curve, the linear background and the 2175 Å bump, are suggested to be the parameters in addition to the stellar population age (addressed in the first paper of this series) in the IRX-UV function; different features in the attenuation curve are diagnosed for the galaxies in our sample. Nevertheless, it is often difficult to ascertain the attenuation law for galaxies in actual observations. Possible reasons preventing the successful detection of the parameters in the attenuation curve are also discussed in this paper, including the degeneracy of the linear background and the 2175 Å bump in observational channels, the requirement for young and dust-rich systems to study, and the difficulty in accurate estimates of dust attenuations at different wavelength bands.

  18. SPECT Compton-scattering correction by analysis of energy spectra.

    PubMed

    Koral, K F; Wang, X Q; Rogers, W L; Clinthorne, N H; Wang, X H

    1988-02-01

    The hypothesis that energy spectra at individual spatial locations in single photon emission computed tomographic projection images can be analyzed to separate the Compton-scattered component from the unscattered component is tested indirectly. An axially symmetric phantom consisting of a cylinder with a sphere is imaged with either the cylinder or the sphere containing 99mTc. An iterative peak-erosion algorithm and a fitting algorithm are given and employed to analyze the acquired spectra. Adequate separation into an unscattered component and a Compton-scattered component is judged on the basis of filtered-backprojection reconstruction of corrected projections. In the reconstructions, attenuation correction is based on the known geometry and the total attenuation cross section for water. An independent test of the accuracy of separation is not made. For both algorithms, reconstructed slices for the cold-sphere, hot-surround phantom have the correct shape as confirmed by simulation results that take into account the measured dependence of system resolution on depth. For the inverse phantom, a hot sphere in a cold surround, quantitative results with the fitting algorithm are accurate but with a particular number of iterations of the erosion algorithm are less good. (A greater number of iterations would improve the 26% error with the algorithm, however.) These preliminary results encourage us to believe that a method for correcting for Compton-scattering in a wide variety of objects can be found, thus helping to achieve quantitative SPECT. PMID:3258023

  19. DC attenuation meter

    DOEpatents

    Hargrove, Douglas L.

    2004-09-14

    A portable, hand-held meter used to measure direct current (DC) attenuation in low impedance electrical signal cables and signal attenuators. A DC voltage is applied to the signal input of the cable and feedback to the control circuit through the signal cable and attenuators. The control circuit adjusts the applied voltage to the cable until the feedback voltage equals the reference voltage. The "units" of applied voltage required at the cable input is the system attenuation value of the cable and attenuators, which makes this meter unique. The meter may be used to calibrate data signal cables, attenuators, and cable-attenuator assemblies.

  20. Assimilation of attenuated data from X-band network radars using ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    Cheng, Jing

    To use reflectivity data from X-band radars for quantitative precipitation estimation and storm-scale data assimilation, the effect of attenuation must be properly accounted for. Traditional approaches first correct the attenuated reflectivity before using the data. An alternative, theoretically more attractive approach builds the attenuation effect into the reflectivity observation operator of a data assimilation system, such as an ensemble Kalman filter (EnKF), allowing direct assimilation of the attenuated reflectivity and taking advantage of microphysical state estimation using EnKF methods for a potentially more accurate solution. This study first tests the approach for the CASA (Center for Collaborative Adaptive Sensing of the Atmosphere) X-band radar network configuration through observing system simulation experiments (OSSE) for a quasi-linear convective system (QLCS) that has more significant attenuation than isolated storms. To avoid the problem of potentially giving too much weight to fully attenuated reflectivity, an analytical, echo-intensity-dependent model for the observation error (AEM) is developed and is found to improve the performance of the filter. By building the attenuation into the forward observation operator and combining it with the application of AEM, the assimilation of attenuated CASA observations is able to produce a reasonably accurate analysis of the QLCS inside CASA radar network coverage. Compared with forgoing the assimilation of radar data with weak reflectivity, or assimilating only radial velocity data, our method can suppress the growth of spurious echoes while obtaining a more accurate analysis in terms of root-mean-square (RMS) error. Sensitivity experiments are designed to examine the effectiveness of AEM by introducing multiple sources of observation errors into the simulated observations. The performance of such an approach in the presence of resolution-induced model error is also evaluated and
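
    Building attenuation into the forward operator amounts to subtracting the two-way path-integrated attenuation from the model-simulated reflectivity along each ray; the sketch below is a generic illustration of that idea (gate handling and names are assumptions, not the study's operator).

```python
import numpy as np

def attenuated_reflectivity_dbz(z_dbz, k_db_per_km, dr_km):
    """Forward (observation) operator that builds two-way path-integrated
    attenuation into model-simulated reflectivity along one radar ray.

    z_dbz       : unattenuated reflectivity at the gates (dBZ), gate 0 nearest the radar
    k_db_per_km : specific attenuation at each gate (dB/km), e.g. derived from the
                  model's hydrometeor fields
    dr_km       : gate spacing (km)
    Returns the attenuated reflectivity the X-band radar would observe.
    """
    # two-way path-integrated attenuation accumulated up to each gate
    pia = 2.0 * dr_km * np.concatenate(([0.0], np.cumsum(k_db_per_km)[:-1]))
    return np.asarray(z_dbz, dtype=float) - pia
```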

  1. Joint Estimation of Activity and Attenuation in Whole-Body TOF PET/MRI Using Constrained Gaussian Mixture Models.

    PubMed

    Mehranian, Abolfazl; Zaidi, Habib

    2015-09-01

    It has recently been shown that the attenuation map can be estimated from time-of-flight (TOF) PET emission data using joint maximum likelihood reconstruction of attenuation and activity (MLAA). In this work, we propose a novel MRI-guided MLAA algorithm for emission-based attenuation correction in whole-body PET/MR imaging. The algorithm imposes MR spatial and CT statistical constraints on the MLAA estimation of attenuation maps using a constrained Gaussian mixture model (GMM) and a Markov random field smoothness prior. Dixon water and fat MR images were segmented into outside air, lung, fat and soft-tissue classes and an MR low-intensity (unknown) class corresponding to air cavities, cortical bone and susceptibility artifacts. The attenuation coefficients over the unknown class were estimated using a mixture of four Gaussians, and those over the known tissue classes using unimodal Gaussians, parameterized over a patient population. To eliminate misclassification of spongy bones with surrounding tissues, and thus include them in the unknown class, we heuristically suppressed fat in water images and also used a co-registered bone probability map. The proposed MLAA-GMM algorithm was compared with the MLAA algorithms proposed by Rezaei and Salomon using simulation and clinical studies with two different tracer distributions. The results showed that our proposed algorithm outperforms its counterparts in suppressing the cross-talk and scaling problems of activity and attenuation and thus produces PET images of improved quantitative accuracy. It can be concluded that the proposed algorithm effectively exploits the MR information and can pave the way toward accurate emission-based attenuation correction in TOF PET/MRI. PMID:25769148

  2. Seismic attenuation anisotropy in reservoir sedimentary rocks

    SciTech Connect

    Best, A.I.

    1994-12-31

    Seismic attenuation is a fundamental property of reservoir sedimentary rocks; it is strongly related to reservoir permeability. Knowledge of its variation with lithology, with burial depth, and with wave propagation direction is vital for understanding the attenuation mechanism. Given this information, realistic theoretical models may be constructed for predicting attenuation, and hence permeability, over a wide frequency range. Accurate ultrasonic attenuation measurements were made in the laboratory over a range of effective pressures on sandstone samples with different amounts of humic organic matter. The organic matter formed fine laminations along the bedding planes of the sandstones. The results show that the sandstones are highly attenuating at 5 MPa mainly because of the presence of grain contact microcracks giving rise to squirt flow; at 40 MPa, when most of the microcracks are closed, the clean sandstones are poorly attenuating, but the organic-rich sandstones remain highly attenuating. It is postulated that the compliant organic matter is responsible for causing squirt flow at high and at low pressures. The results also show that the maximum attenuation occurs when the particle motion of the propagating wave is perpendicular to the planes of the organic matter laminations. These results are consistent with the squirt flow theory of Akbar et al (1993) for compressional waves.

  3. Grading More Accurately

    ERIC Educational Resources Information Center

    Rom, Mark Carl

    2011-01-01

    Grades matter. College grading systems, however, are often ad hoc and prone to mistakes. This essay focuses on one factor that contributes to high-quality grading systems: grading accuracy (or "efficiency"). I proceed in several steps. First, I discuss the elements of "efficient" (i.e., accurate) grading. Next, I present analytical results…

  4. Pressure surge attenuator

    DOEpatents

    Christie, Alan M.; Snyder, Kurt I.

    1985-01-01

    A pressure surge attenuation system for pipes having a fluted region opposite crushable metal foam. As adapted for nuclear reactor vessels and heads, crushable metal foam is disposed to attenuate pressure surges.

  5. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema where accuracy degenerates to second-order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants, which preserve monotonicity as well as uniform third and fourth-order accuracy are presented. The gain of accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
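
    For context, a widely available monotone piecewise-cubic interpolant (PCHIP) can serve as the baseline the paper improves on; the snippet below only demonstrates that baseline and is not Huynh's higher-order algorithm.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# PCHIP preserves monotonicity of the data but, like other monotone cubics,
# drops to lower accuracy near strict local extrema.
x = np.linspace(0.0, 2.0 * np.pi, 9)
y = np.sin(x)
f = PchipInterpolator(x, y)

xs = np.linspace(0.0, 2.0 * np.pi, 200)
print(np.max(np.abs(f(xs) - np.sin(xs))))  # interpolation error of the baseline
```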

  6. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.

  7. Tracer attenuation in groundwater

    NASA Astrophysics Data System (ADS)

    Cvetkovic, Vladimir

    2011-12-01

    The self-purifying capacity of aquifers strongly depends on the attenuation of waterborne contaminants, i.e., irreversible loss of contaminant mass on a given scale as a result of coupled transport and transformation processes. A general formulation of tracer attenuation in groundwater is presented. Basic sensitivities of attenuation to macrodispersion and retention are illustrated for a few typical retention mechanisms. Tracer recovery is suggested as an experimental proxy for attenuation. Unique experimental data of tracer recovery in crystalline rock compare favorably with the theoretical model that is based on diffusion-controlled retention. Non-Fickian hydrodynamic transport has potentially a large impact on field-scale attenuation of dissolved contaminants.

  8. Attenuation Tomography of the Upper Mantle

    NASA Astrophysics Data System (ADS)

    Adenis, A.; Debayle, E.; Ricard, Y. R.

    2014-12-01

    We present a 3-D model of surface wave attenuation in the upper mantle. The model is constrained by a large data set of fundamental and higher Rayleigh mode observations. This data set consists of about 1,800,000 attenuation curves measured in the period range 50-300 s by Debayle and Ricard (2012). A careful selection allows us to reject data for which measurements are likely biased by the poor knowledge of the scalar seismic moment or by a ray propagation too close to a node of the source radiation pattern. For each epicenter-station path, elastic focusing effects due to seismic heterogeneities are corrected using DR2012 and the data are turned into log(1/Q). The selected data are then combined in a tomographic inversion using the non-linear least-squares formalism of Tarantola and Valette (1982). The obtained attenuation maps are in agreement with surface tectonics for periods and modes sensitive to the top 200 km of the upper mantle. Low attenuation regions correlate with continental shields while high attenuation regions are located beneath young oceanic regions. The attenuation pattern becomes more homogeneous at depths greater than 200 km and the maps are dominated by a high quality factor signature beneath slabs. We will discuss the similarities and differences between the seismic velocity and attenuation tomographies.

  9. Estimation of patient attenuation factor for iodine-131 based on direct dose rate measurements from radioiodine therapy patients.

    PubMed

    Soliman, Khaled; Alenezi, Ahmed

    2015-02-01

    The aim of the study was to measure the actual dose rate at 1 m from the patients per unit activity, in order to provide a more accurate prediction of the dose levels around radioiodine patients in the hospital, and to compare our results with the literature. In this work, a patient body tissue attenuation factor is demonstrated by comparing the dose rates measured from the patients with those measured from the unshielded radioiodine capsules immediately after administration of the radioactivity. The normalized dose rate per unit activity is therefore proposed as an operational quantity that can be used to predict exposure rates to staff and patients' relatives. The average dose rate measured from our patients per unit activity was 38.4±11.8 μSv/h/GBq. The calculated attenuation correction factor based on our measurements was 0.55±0.17. The calculated dose rate from a radioiodine therapy patient should normally include a factor accounting for patient body tissue attenuation and scatter. This attenuation factor is currently neglected and not applied in operational radiation protection. Realistic estimation of radiation dose levels from radioiodine therapy patients, when properly performed, will reduce operational cost and optimize institutional radiation protection practice. It is recommended to include patient attenuation factors in risk assessment exercises, in particular when accurate estimates of total effective doses to exposed individuals are required and direct measurements are not possible. The information provided about patient attenuation might benefit radiation protection specialists and regulators. PMID:25279710
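
    The attenuation factor itself is a simple ratio of the measured dose rate per unit activity to the unshielded rate; the arithmetic below uses a typical I-131 dose-rate constant as an assumed stand-in for the paper's own capsule measurements.

```python
# Patient body attenuation factor from the measured dose rate per unit activity.
# The unshielded I-131 dose-rate constant at 1 m is taken here as ~66 uSv/h per GBq,
# a typical literature value used only for illustration; the study derives the
# unshielded rate from its own capsule measurements.
measured_rate_per_gbq = 38.4        # uSv/h per GBq at 1 m (study mean)
unshielded_rate_per_gbq = 66.0      # uSv/h per GBq at 1 m (assumed constant)

attenuation_factor = measured_rate_per_gbq / unshielded_rate_per_gbq
print(round(attenuation_factor, 2))  # ~0.58, in line with the reported 0.55 +/- 0.17
```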

  10. Correction of WindScat Scatterometric Measurements by Combining with AMSR Radiometric Data

    NASA Technical Reports Server (NTRS)

    Song, S.; Moore, R. K.

    1996-01-01

    The SeaWinds scatterometer on the Advanced Earth Observing Satellite-2 (ADEOS-2) will determine surface wind vectors by measuring the radar cross section. Multiple measurements will be made at different points in a wind-vector cell. When dense clouds and rain are present, the signal will be attenuated, thereby giving erroneous results for the wind. This report describes algorithms to use with the Advanced Microwave Scanning Radiometer (AMSR) on ADEOS-2 to correct for the attenuation. One can determine attenuation from a radiometer measurement based on the excess brightness temperature measured. This is the difference between the total measured brightness temperature and the contribution from surface emission. A major problem that the algorithm must address is determining the surface contribution. Two basic approaches were developed for this, one using the scattering coefficient measured along with the brightness temperature, and the other using the brightness temperature alone. For both methods, best results will occur if the wind from the preceding wind-vector cell can be used as an input to the algorithm. In the method based on the scattering coefficient, we need the wind direction from the preceding cell. In the method using brightness temperature alone, we need the wind speed from the preceding cell. If neither is available, the algorithm can work, but the corrections will be less accurate. Both correction methods require iterative solutions. Simulations show that the algorithms make significant improvements in the measured scattering coefficient and thus in the retrieved wind vector. For stratiform rains, the errors without correction can be quite large, so the correction makes a major improvement. For systems of separated convective cells, the initial error is smaller and the correction, although about the same percentage, has a smaller effect.

  11. Iterative Beam Hardening Correction for Multi-Material Objects

    PubMed Central

    Zhao, Yunsong; Li, Mengfei

    2015-01-01

    In this paper, we propose an iterative beam hardening correction method that is applicable for the case with multiple materials. By assuming that the materials composing the scanned object are known and that they are distinguishable by their linear attenuation coefficients at some given energy, the beam hardening correction problem is converted into a nonlinear system problem, which is then solved iteratively. The reconstructed image is the distribution of the linear attenuation coefficient of the scanned object at a given energy, so there are no beam hardening artifacts in the image theoretically. The proposed iterative scheme combines an accurate polychromatic forward projection with a linearized backprojection. Both forward projection and backprojection have a high degree of parallelism and are suitable for acceleration on parallel systems. Numerical experiments with both simulated data and real data verify the validity of the proposed method. The beam hardening artifacts are alleviated effectively. In addition, the proposed method has a good tolerance on the error of the estimated x-ray spectrum. PMID:26659554
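
    The accurate polychromatic forward projection at the core of the scheme can be written compactly; the sketch below is a minimal single-ray illustration (array layouts assumed), not the authors' implementation.

```python
import numpy as np

def polychromatic_projection(path_lengths, mu_table, spectrum):
    """Polychromatic (beam-hardening-aware) forward projection for one ray.

    path_lengths : (n_materials,) intersection length of the ray with each material
    mu_table     : (n_materials, n_energies) linear attenuation coefficients
    spectrum     : (n_energies,) detected source-spectrum weights (sums to 1)
    Returns the negative log transmission seen by the detector,
        p = -ln( sum_E w(E) * exp( -sum_m mu_m(E) * L_m ) ).
    """
    line_integrals = np.asarray(path_lengths) @ np.asarray(mu_table)  # per energy
    return -np.log(np.sum(np.asarray(spectrum) * np.exp(-line_integrals)))
```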

  12. Iterative Beam Hardening Correction for Multi-Material Objects.

    PubMed

    Zhao, Yunsong; Li, Mengfei

    2015-01-01

    In this paper, we propose an iterative beam hardening correction method that is applicable for the case with multiple materials. By assuming that the materials composing the scanned object are known and that they are distinguishable by their linear attenuation coefficients at some given energy, the beam hardening correction problem is converted into a nonlinear system problem, which is then solved iteratively. The reconstructed image is the distribution of the linear attenuation coefficient of the scanned object at a given energy, so there are no beam hardening artifacts in the image theoretically. The proposed iterative scheme combines an accurate polychromatic forward projection with a linearized backprojection. Both forward projection and backprojection have a high degree of parallelism and are suitable for acceleration on parallel systems. Numerical experiments with both simulated data and real data verify the validity of the proposed method. The beam hardening artifacts are alleviated effectively. In addition, the proposed method has a good tolerance on the error of the estimated x-ray spectrum. PMID:26659554

  13. Aerosol effects and corrections in the Halogen Occultation Experiment

    NASA Technical Reports Server (NTRS)

    Hervig, Mark E.; Russell, James M., III; Gordley, Larry L.; Daniels, John; Drayson, S. Roland; Park, Jae H.

    1995-01-01

    The eruptions of Mt. Pinatubo in June 1991 increased stratospheric aerosol loading by a factor of 30, affecting chemistry, radiative transfer, and remote measurements of the stratosphere. The Halogen Occultation Experiment (HALOE) instrument on board the Upper Atmosphere Research Satellite (UARS) makes global measurements for inferring profiles of NO2, H2O, O3, HF, HCl, CH4, NO, and temperature, in addition to aerosol extinction at five wavelengths. Understanding and removing the aerosol extinction is essential for obtaining accurate retrievals of NO2, H2O, and O3 from the radiometer channels in the lower stratosphere, since these measurements are severely affected by contaminant aerosol absorption. If ignored, aerosol absorption in the radiometer measurements is interpreted as additional absorption by the target gas, resulting in anomalously large mixing ratios. To correct the radiometer measurements for aerosol effects, a retrieved aerosol extinction profile is extrapolated to the radiometer wavelengths and then included as continuum attenuation. The sensitivity of the extrapolation to size distribution and composition is small for certain wavelength combinations, reducing the correction uncertainty. The aerosol corrections extend the usable range of profiles retrieved from the radiometer channels to the tropopause, with results that agree well with correlative measurements. In situations of heavy aerosol loading, errors due to aerosol in the retrieved mixing ratios are reduced to about 15, 25, and 60% in H2O, O3, and NO2, respectively, levels that are much less than the correction magnitude.

  14. Accurate measurement of time

    NASA Astrophysics Data System (ADS)

    Itano, Wayne M.; Ramsey, Norman F.

    1993-07-01

    The paper discusses current methods for accurate measurement of time by conventional atomic clocks, with particular attention given to the principles of operation of atomic-beam frequency standards, atomic hydrogen masers, and atomic fountains, and to the potential use of strings of trapped mercury ions as a time device more stable than conventional atomic clocks. The areas of application of ultraprecise and ultrastable time-measuring devices that tax the capacity of modern atomic clocks include radio astronomy and tests of relativity. The paper also discusses practical applications of ultraprecise clocks, such as navigation of space vehicles and pinpointing the exact position of ships and other objects on Earth using GPS.

  15. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed; these are then applied to a number of chemical and spectroscopic problems, to transition metals, and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  16. Variable laser attenuator

    DOEpatents

    Foltyn, Stephen R.

    1988-01-01

    The disclosure relates to low loss, high power variable attenuators comprising one or more transmissive and/or reflective multilayer dielectric filters. The attenuator is particularly suitable for use with unpolarized lasers such as excimer lasers. Beam attenuation is a function of beam polarization and the angle of incidence between the beam and the filter, and is controlled by adjusting the angle of incidence the beam makes to the filter or filters. Filters are selected in accordance with beam wavelength.

  17. Variable laser attenuator

    DOEpatents

    Foltyn, S.R.

    1987-05-29

    The disclosure relates to low loss, high power variable attenuators comprising one or more transmissive and/or reflective multilayer dielectric filters. The attenuator is particularly suitable for use with unpolarized lasers such as excimer lasers. Beam attenuation is a function of beam polarization and the angle of incidence between the beam and the filter, and is controlled by adjusting the angle of incidence the beam makes to the filter or filters. Filters are selected in accordance with beam wavelength. 9 figs.

  18. Accurate ab Initio Spin Densities

    PubMed Central

    2012-01-01

    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as a basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CASCI-type wave function provides insight into chemically interesting features of the molecule under study such as the distribution of α and β electrons in terms of Slater determinants, CI coefficients, and natural orbitals. The methodology is applied to an iron nitrosyl complex which we have identified as a challenging system for standard approaches [J. Chem. Theory Comput. 2011, 7, 2740]. PMID:22707921

  19. Correlation Attenuation Due to Measurement Error: A New Approach Using the Bootstrap Procedure

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Veprinsky, Anna

    2012-01-01

    Issues with correlation attenuation due to measurement error are well documented. More than a century ago, Spearman proposed a correction for attenuation. However, this correction has seen very little use since it can potentially inflate the true correlation beyond one. In addition, very little confidence interval (CI) research has been done for…

  20. Quantitative fully 3D PET via model-based scatter correction

    SciTech Connect

    Ollinger, J.M.

    1994-05-01

    We have investigated the quantitative accuracy of fully 3D PET using model-based scatter correction by measuring the half-life of Ga-68 in the presence of scatter from F-18. The inner chamber of a Data Spectrum cardiac phantom was filled with 18.5 MBq of Ga-68. The outer chamber was filled with an equivalent amount of F-18. The cardiac phantom was placed in a 22 x 30.5 cm elliptical phantom containing anthropomorphic lung inserts filled with a water-Styrofoam mixture. Ten frames of dynamic data were collected over 13.6 hours on a Siemens-CTI 953B scanner with the septa retracted. The data were corrected using model-based scatter correction, which uses the emission images, transmission images, and an accurate physical model to directly calculate the scatter distribution. Both uncorrected and corrected data were reconstructed using the Promis algorithm. The scatter correction required 4.3% of the total reconstruction time. The scatter fraction in a small volume of interest in the center of the inner chamber of the cardiac insert rose from 4.0% in the first interval to 46.4% in the last interval as the ratio of F-18 activity to Ga-68 activity rose from 1:1 to 33:1. Fitting a single exponential to the last three data points yields half-life estimates for Ga-68 of 77.01 minutes and 68.79 minutes for uncorrected and corrected data, respectively. Thus, scatter correction reduces the error from 13.3% to 1.2%. This suggests that model-based scatter correction is accurate in the heterogeneous attenuating medium found in the chest, making quantitative, fully 3D PET in the body possible.
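
    As a worked illustration of the half-life estimate quoted above, a single exponential can be fitted by linear regression on the logarithm of the volume-of-interest counts; the time points and counts below are made-up placeholders rather than the study's data, and the reference half-life of Ga-68 is taken as roughly 68 minutes.

        import numpy as np

        def fit_half_life(times_min, counts):
            """Fit counts = A * exp(-lam * t) and return the half-life in minutes."""
            slope, _ = np.polyfit(times_min, np.log(counts), 1)
            return np.log(2.0) / -slope

        # hypothetical three late frames of Ga-68 activity in a volume of interest
        t = np.array([600.0, 700.0, 800.0])             # minutes
        c = 1000.0 * np.exp(-np.log(2.0) / 68.0 * t)    # ideal decay, 68 min half-life
        print(fit_half_life(t, c))                      # recovers about 68 minutes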

  1. Jitter Correction

    NASA Technical Reports Server (NTRS)

    Waegell, Mordecai J.; Palacios, David M.

    2011-01-01

    Jitter_Correct.m is a MATLAB function that automatically measures and corrects inter-frame jitter in an image sequence to a user-specified precision. In addition, the algorithm dynamically adjusts the image sample size to increase the accuracy of the measurement. The Jitter_Correct.m function takes an image sequence with unknown frame-to-frame jitter and computes the translations of each frame (column and row, in pixels) relative to a chosen reference frame with sub-pixel accuracy. The translations are measured using a cross-correlation Fourier transform method in which the relative phase of the two transformed images is fit to a plane. The measured translations are then used to correct the inter-frame jitter of the image sequence. The function also dynamically expands the image sample size over which the cross-correlation is measured to increase the accuracy of the measurement. This increases the robustness of the measurement to variable magnitudes of inter-frame jitter.
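
    The Fourier-based translation measurement can be illustrated with a standard phase-correlation sketch; this simplified variant locates the correlation peak to integer-pixel accuracy instead of fitting the relative phase to a plane, and the function name and roll-based correction are illustrative rather than the Jitter_Correct.m implementation.

        import numpy as np

        def measure_shift(reference, frame):
            """Estimate the (row, col) translation that maps `reference` onto `frame`."""
            cross_power = np.fft.fft2(frame) * np.conj(np.fft.fft2(reference))
            cross_power /= np.abs(cross_power) + 1e-12     # keep only the relative phase
            corr = np.fft.ifft2(cross_power).real
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # map peaks beyond half the image size to negative shifts
            return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

        # a frame can then be re-registered by undoing the measured shift:
        # dr, dc = measure_shift(reference, frame)
        # corrected = np.roll(frame, shift=(-dr, -dc), axis=(0, 1))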

  2. Landing gear noise attenuation

    NASA Technical Reports Server (NTRS)

    Moe, Jeffrey W. (Inventor); Whitmire, Julia (Inventor); Kwan, Hwa-Wan (Inventor); Abeysinghe, Amal (Inventor)

    2011-01-01

    A landing gear noise attenuator mitigates noise generated by airframe deployable landing gear. The noise attenuator can have a first position when the landing gear is in its deployed or down position, and a second position when the landing gear is in its up or stowed position. The noise attenuator may be an inflatable fairing that does not compromise limited space constraints associated with landing gear retraction and stowage. A truck fairing mounted under a truck beam can have a compliant edge to allow for non-destructive impingement of a deflected tire during certain conditions.

  3. RADIO FREQUENCY ATTENUATOR

    DOEpatents

    Giordano, S.

    1963-11-12

    A high peak power level r-f attenuator that is readily and easily insertable along a coaxial cable having an inner conductor and an outer annular conductor without breaking the ends thereof is presented. Spaced first and second flares in the outer conductor face each other with a slidable cylindrical outer conductor portion therebetween. Dielectric means, such as water, contact the cable between the flares to attenuate the radio-frequency energy received thereby. The cylindrical outer conductor portion is slidable to adjust the voltage standing wave ratio to a low level, and one of the flares is slidable to adjust the attenuation level. An integral dielectric container is also provided. (AFC)

  4. GPR measurements of attenuation in concrete

    SciTech Connect

    Eisenmann, David; Margetan, Frank J.; Pavel, Brittney

    2015-03-31

    Ground-penetrating radar (GPR) signals from concrete structures are affected by several phenomena, including: (1) transmission and reflection coefficients at interfaces; (2) the radiation patterns of the antenna(s) being used; and (3) the material properties of concrete and any embedded objects. In this paper we investigate different schemes for determining the electromagnetic (EM) attenuation of concrete from measured signals obtained using commercially-available GPR equipment. We adapt procedures commonly used in ultrasonic inspections, where one compares the relative strengths of two or more signals having different travel paths through the material of interest. After correcting for beam spread (i.e., diffraction), interface phenomena, and equipment amplification settings, any remaining signal differences are assumed to be due to attenuation, thus allowing the attenuation coefficient (say, in dB of loss per inch of travel) to be estimated. We begin with a brief overview of our approach, and then discuss how diffraction corrections were determined for our two 1.6 GHz GPR antennas. We then present results of attenuation measurements for two types of concrete using both pulse/echo and pitch/catch measurement setups.
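
    A minimal sketch of the comparison described above, assuming two signals with known travel paths and an externally supplied net correction for beam spread, interface coefficients, and gain differences; all names and numbers are illustrative.

        def attenuation_coefficient(amp1_db, amp2_db, path1_in, path2_in, corrections_db=0.0):
            """Attenuation in dB per inch from two signals with different travel paths.

            corrections_db is the portion of the amplitude difference already explained
            by diffraction, interface losses, and equipment gain settings.
            """
            residual_loss_db = (amp1_db - amp2_db) - corrections_db
            return residual_loss_db / (path2_in - path1_in)

        # illustrative numbers: a 12 dB difference, 3 dB of it explained by diffraction
        # and gain, over travel paths of 4 in and 10 in -> (12 - 3) / 6 = 1.5 dB per inch
        print(attenuation_coefficient(-20.0, -32.0, 4.0, 10.0, corrections_db=3.0))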

  5. Atmospheric extinction in solar tower plants: the Absorption and Broadband Correction for MOR measurements

    NASA Astrophysics Data System (ADS)

    Hanrieder, N.; Wilbert, S.; Pitz-Paal, R.; Emde, C.; Gasteiger, J.; Mayer, B.; Polo, J.

    2015-05-01

    Losses of reflected Direct Normal Irradiance due to atmospheric extinction in concentrating solar tower plants can vary significantly with site and time. The losses of the direct normal irradiance between the heliostat field and the receiver in a solar tower plant are mainly caused by atmospheric scattering and absorption by aerosol particles and water vapor in the atmospheric boundary layer. Due to the high aerosol particle number, radiation losses can be significantly larger in desert environments than under the standard atmospheric conditions that are usually assumed in ray-tracing or plant optimization tools. Information about on-site atmospheric extinction is only rarely available. To measure these radiation losses, two different commercially available instruments were tested, and more than 19 months of measurements were collected at the Plataforma Solar de Almería and compared. Both instruments are primarily used to determine the meteorological optical range (MOR). The Vaisala FS11 scatterometer is based on a monochromatic near-infrared light source and measures the strength of scattering processes in a small air volume, caused mainly by aerosol particles. The Optec LPV4 long-path visibility transmissometer determines the monochromatic attenuation between a light-emitting diode (LED) light source at 532 nm and a receiver and therefore also accounts for absorption processes. As the broadband solar attenuation is of interest for solar resource assessment for Concentrating Solar Power (CSP), a correction procedure for these two instruments is developed and tested. This procedure includes a spectral correction of both instruments from monochromatic to broadband attenuation; that is, the attenuation is corrected for the actual, time-dependent solar spectrum reflected by the collector. Further, an absorption correction for the Vaisala FS11 scatterometer is implemented. To optimize the Absorption and Broadband Correction (ABC) procedure, additional
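
    The first step of such a procedure can be illustrated by converting an MOR reading into a monochromatic extinction coefficient via the Koschmieder relation and then rescaling it with a broadband factor; the 5% contrast threshold is the standard MOR convention, while the broadband factor below is a made-up placeholder for the spectral correction described above.

        import math

        def extinction_from_mor(mor_km, contrast_threshold=0.05):
            """Monochromatic extinction coefficient (1/km) implied by an MOR reading."""
            return -math.log(contrast_threshold) / mor_km      # roughly 3.0 / MOR

        def slant_path_loss(mor_km, path_km, broadband_factor=1.0):
            """Fractional broadband loss over a heliostat-to-receiver path of path_km."""
            sigma = extinction_from_mor(mor_km) * broadband_factor
            return 1.0 - math.exp(-sigma * path_km)

        # example: an MOR of 20 km over a 1 km slant path gives roughly 14% extinction loss
        print(slant_path_loss(mor_km=20.0, path_km=1.0))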

  6. SEISMIC ATTENUATION FOR RESERVOIR CHARACTERIZATION

    SciTech Connect

    Joel Walls; M.T. Taner; Naum Derzhi; Gary Mavko; Jack Dvorkin

    2003-12-01

    We have developed and tested technology for a new type of direct hydrocarbon detection. The method uses inelastic rock properties to greatly enhance the sensitivity of surface seismic methods to the presence of oil and gas saturation. These methods include use of energy absorption, dispersion, and attenuation (Q) along with traditional seismic attributes like velocity, impedance, and AVO. Our approach is to combine three elements: (1) a synthesis of the latest rock physics understanding of how rock inelasticity is related to rock type, pore fluid types, and pore microstructure, (2) synthetic seismic modeling that will help identify the relative contributions of scattering and intrinsic inelasticity to apparent Q attributes, and (3) robust algorithms that extract relative wave attenuation attributes from seismic data. This project provides: (1) Additional petrophysical insight from acquired data; (2) Increased understanding of rock and fluid properties; (3) New techniques to measure reservoir properties that are not currently available; and (4) Provide tools to more accurately describe the reservoir and predict oil location and volumes. These methodologies will improve the industry's ability to predict and quantify oil and gas saturation distribution, and to apply this information through geologic models to enhance reservoir simulation. We have applied for two separate patents relating to work that was completed as part of this project.

  7. Attenuator And Conditioner

    DOEpatents

    Anderson, Gene R.; Armendariz, Marcelino G.; Carson, Richard F.; Bryan, Robert P.; Duckett, III, Edwin B.; Kemme, Shanalyn Adair; McCormick, Frederick B.; Peterson, David W.

    2006-04-04

    An apparatus and method of attenuating and/or conditioning optical energy for an optical transmitter, receiver or transceiver module is disclosed. An apparatus for attenuating the optical output of an optoelectronic connector including: a mounting surface; an array of optoelectronic devices having at least a first end; an array of optical elements having at least a first end; the first end of the array of optical elements optically aligned with the first end of the array of optoelectronic devices; an optical path extending from the first end of the array of optoelectronic devices and ending at a second end of the array of optical elements; and an attenuator in the optical path for attenuating the optical energy emitted from the array of optoelectronic devices. Alternatively, a conditioner may be adapted in the optical path for conditioning the optical energy emitted from the array of optoelectronic devices.

  8. Fiber Optic Attenuators

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Mike Buzzetti designed a fiber optic attenuator while working at Jet Propulsion Laboratory, intended for use in NASA's Deep Space Network. Buzzetti subsequently patented and received an exclusive license to commercialize the device, and founded Nanometer Technologies to produce it. The attenuator functions without introducing measurable back-reflection or insertion loss, and is relatively insensitive to vibration and changes in temperature. Applications include cable television, telephone networks, other signal distribution networks, and laboratory instrumentation.

  9. Analytical solution to 3D SPECT reconstruction with non-uniform attenuation, scatter, and spatially-variant resolution variation for variable focal-length fan-beam collimators

    NASA Astrophysics Data System (ADS)

    Wen, Junhai; Lu, Hongbing; Li, Tianfang; Liang, Zhengrong

    2003-05-01

    In the past decades, analytical (non-iterative) methods have been extensively investigated and developed for the reconstruction of three-dimensional (3D) single-photon emission computed tomography (SPECT). However, this became possible only recently, when the exact analytic non-uniform attenuation reconstruction algorithm was derived. Based on the explicit inversion formula for the attenuated Radon transform discovered by Novikov (2000), we extended previous research on inverting the attenuated Radon transform in parallel-beam collimation geometry to fan-beam and variable focal-length fan-beam (VFF) collimators, and we propose an efficient, analytical solution to 3D SPECT reconstruction with VFF collimators that compensates simultaneously for non-uniform attenuation, scatter, and spatially-variant or distance-dependent resolution variation (DDRV), as well as suppressing signal-dependent non-stationary Poisson noise. In this procedure, to avoid the reconstructed images being corrupted by the presence of severe noise, we apply a Karhunen-Loève (K-L) domain adaptive Wiener filter, which accurately treats the non-stationary Poisson noise. The scatter is then removed by our scatter estimation method, which is based on the energy spectrum and modified from the triple-energy-window acquisition protocol. For the correction of DDRV, a distance-dependent deconvolution is adapted to provide a solution that realistically characterizes the resolution kernel in a real SPECT system. Finally, the image is reconstructed using our VFF non-uniform attenuation inversion formula.

  10. Developing a Short-Period, Fundamental-Mode Rayleigh-Wave Attenuation Model for Asia

    NASA Astrophysics Data System (ADS)

    Yang, X.; Levshin, A. L.; Barmin, M. P.; Ritzwoller, M. H.

    2008-12-01

    We are developing a 2D, short-period (12 - 22 s), fundamental-mode Rayleigh-wave attenuation model for Asia. This model can be used to invert for a 3D attenuation model of the Earth's crust and upper mantle, as well as to implement more accurate path corrections in regional surface-wave magnitude calculations. The prerequisite for developing a reliable Rayleigh-wave attenuation model is the availability of accurate fundamental-mode Rayleigh-wave amplitude measurements. Fundamental-mode Rayleigh-wave amplitudes can be contaminated by a variety of sources such as multipathing, focusing and defocusing, body waves, higher-mode surface waves, and other noise sources. These contaminations must be reduced to the largest extent possible. To achieve this, we designed a procedure that takes advantage of certain Rayleigh-wave characteristics, such as dispersion and elliptical particle motion, for accurate amplitude measurements. We first analyze the dispersion of the surface-wave data using a spectrogram. Based on the characteristics of the data dispersion, we design a phase-matched filter by using either a manually picked dispersion curve, a group-velocity-model predicted dispersion curve, or the dispersion of the data, and apply the filter to the seismogram. Intelligent filtering of the seismogram and windowing of the resulting cross-correlation, based on the spectrogram analysis and the comparison between the phase-match filtered data spectrum, the raw-data spectrum, and the theoretical source spectrum, effectively reduces amplitude contaminations and results in reliable amplitude measurements in many cases. We implemented these measuring techniques in a graphic-user-interface tool called the Surface Wave Amplitude Measurement Tool (SWAMTOOL). Using the tool, we collected and processed waveform data for 200 earthquakes occurring throughout 2003-2006 inside and around Eurasia. The records from 135 broadband stations were used. After obtaining the Rayleigh-wave amplitude

  11. Timebias corrections to predictions

    NASA Technical Reports Server (NTRS)

    Wood, Roger; Gibbs, Philip

    1993-01-01

    The importance of an accurate knowledge of the time bias corrections to predicted orbits to a satellite laser ranging (SLR) observer, especially for low satellites, is highlighted. Sources of time bias values and the optimum strategy for extrapolation are discussed from the viewpoint of the observer wishing to maximize the chances of getting returns from the next pass. What is said may be seen as a commercial encouraging wider and speedier use of existing data centers for mutually beneficial exchange of time bias data.

  12. Flat panel X-ray detector with reduced internal scattering for improved attenuation accuracy and dynamic range

    DOEpatents

    Smith, Peter D.; Claytor, Thomas N.; Berry, Phillip C.; Hills, Charles R.

    2010-10-12

    An x-ray detector is disclosed that has had all unnecessary material removed from the x-ray beam path, and all of the remaining material in the beam path made as light and as low in atomic number as possible. The resulting detector is essentially transparent to x-rays and, thus, has greatly reduced internal scatter. The result of this is that x-ray attenuation data measured for the object under examination are much more accurate and have an increased dynamic range. The benefits of this improvement are that beam hardening corrections can be made accurately, that computed tomography reconstructions can be used for quantitative determination of material properties including density and atomic number, and that lower exposures may be possible as a result of the increased dynamic range.

  13. The performance of functional methods for correcting non-Gaussian measurement error within Poisson regression: corrected excess risk of lung cancer mortality in relation to radon exposure among French uranium miners.

    PubMed

    Allodji, Rodrigue S; Thiébaut, Anne C M; Leuraud, Klervi; Rage, Estelle; Henry, Stéphane; Laurier, Dominique; Bénichou, Jacques

    2012-12-30

    A broad variety of methods for measurement error (ME) correction have been developed, but these methods have rarely been applied, possibly because their ability to correct ME is poorly understood. We carried out a simulation study to assess the performance of three error-correction methods: two variants of regression calibration (the substitution method and the estimation calibration method) and the simulation extrapolation (SIMEX) method. Features of the simulated cohorts were borrowed from the French Uranium Miners' Cohort, in which exposure to radon had been documented from 1946 to 1999. In the absence of ME correction, we observed a severe attenuation of the true effect of radon exposure, with a negative relative bias of the order of 60% on the excess relative risk of lung cancer death. In the main scenario considered, that is, when ME characteristics previously determined as most plausible from the French Uranium Miners' Cohort were used both to generate exposure data and to correct for ME at the analysis stage, all three error-correction methods showed a noticeable but partial reduction of the attenuation bias, with a slight advantage for the SIMEX method. However, the performance of the three correction methods depended strongly on the accurate determination of the characteristics of ME. In particular, we encountered severe overestimation in some scenarios with the SIMEX method, and we observed a lack of correction with all three methods in some other scenarios. For illustration, we also applied and compared the proposed methods to the real data set from the French Uranium Miners' Cohort study. PMID:22996087
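
    A compact simulation of the attenuation phenomenon these methods target, using classical additive measurement error and a regression-calibration style correction in a simple linear setting rather than the Poisson excess-risk model of the paper; all variances and sample sizes are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        n, beta_true = 50_000, 1.0
        x = rng.normal(0.0, 1.0, n)                  # true exposure
        w = x + rng.normal(0.0, 1.0, n)              # exposure measured with error
        y = beta_true * x + rng.normal(0.0, 1.0, n)  # outcome driven by true exposure

        # naive regression on the error-prone exposure is attenuated toward zero
        beta_naive = np.polyfit(w, y, 1)[0]

        # regression calibration: replace w by E[x | w] = reliability * w (variances known here)
        reliability = np.var(x) / np.var(w)
        beta_corrected = np.polyfit(reliability * w, y, 1)[0]

        print(beta_naive, beta_corrected)    # about 0.5 (attenuated) vs about 1.0 (corrected)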

  14. A method to correct for spectral artifacts in optical-CT dosimetry

    PubMed Central

    Pierquet, Michael; Jordan, Kevin; Oldham, Mark

    2011-01-01

    The recent emergence of radiochromic dosimeters with low inherent light-scattering presents the possibility of fast 3D dosimetry using broad-beam optical computed tomography (optical-CT). Current broad beam scanners typically employ either a single or a planar array of light-emitting diodes (LED) for the light source. The spectrum of light from LED sources is polychromatic and this, in combination with the non-uniform spectral absorption of the dosimeter, can introduce spectral artifacts arising from preferential absorption of photons at the peak absorption wavelengths in the dosimeter. Spectral artifacts can lead to large errors in the reconstructed attenuation coefficients, and hence dose measurement. This work presents an analytic method for correcting for spectral artifacts which can be applied if the spectral characteristics of the light source, absorbing dosimeter, and imaging detector are known or can be measured. The method is implemented here for a PRESAGE® dosimeter scanned with the DLOS telecentric scanner (Duke Large field-of-view Optical-CT Scanner). Emission and absorption profiles were measured with a commercial spectrometer and spectrophotometer, respectively. Simulations are presented that show spectral changes can introduce errors of 8% for moderately attenuating samples where spectral artifacts are less pronounced. The correction is evaluated by application to a 16 cm diameter PRESAGE® cylindrical dosimeter irradiated along the axis with two partially overlapping 6 × 6 cm fields of different doses. The resulting stepped dose distribution facilitates evaluation of the correction as each step had different spectral contributions. The spectral artifact correction was found to accurately correct the reconstructed coefficients to within ~1.5%, improved from ~7.5%, for normalized dose distributions. In conclusion, for situations where spectral artifacts cannot be removed by physical filters, the method shown here is an effective correction. Physical
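
    The type of analytic correction described can be sketched as a mapping between the apparent polychromatic attenuation and the true attenuation at a reference wavelength, assuming the measured source emission and detector response have been combined into spectral weights and the dosimeter absorption spectrum has been normalized to the reference wavelength; the array and function names are illustrative, not the DLOS implementation.

        import numpy as np

        def apparent_attenuation(mu_ref, path_cm, weights, mu_rel):
            """Apparent attenuation (1/cm) seen by a broadband source/detector pair.

            mu_ref  : true attenuation coefficient at the reference wavelength
            weights : source emission times detector response, per wavelength bin
            mu_rel  : dosimeter absorption spectrum normalized at the reference wavelength
            """
            w = weights / weights.sum()
            transmission = np.sum(w * np.exp(-mu_ref * mu_rel * path_cm))
            return -np.log(transmission) / path_cm

        def correction_table(weights, mu_rel, path_cm, mu_grid):
            """Tabulate apparent vs true attenuation; reconstructed coefficients can then
            be corrected by interpolation, e.g. np.interp(measured, apparent, mu_grid)."""
            apparent = np.array([apparent_attenuation(m, path_cm, weights, mu_rel) for m in mu_grid])
            return apparent, mu_grid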

  15. A rigid motion correction method for helical computed tomography (CT)

    NASA Astrophysics Data System (ADS)

    Kim, J.-H.; Nuyts, J.; Kyme, A.; Kuncic, Z.; Fulton, R.

    2015-03-01

    We propose a method to compensate for six degree-of-freedom rigid motion in helical CT of the head. The method is demonstrated in simulations and in helical scans performed on a 16-slice CT scanner. Scans of a Hoffman brain phantom were acquired while an optical motion tracking system recorded the motion of the bed and the phantom. Motion correction was performed by restoring projection consistency using data from the motion tracking system, and reconstructing with an iterative fully 3D algorithm. Motion correction accuracy was evaluated by comparing reconstructed images with a stationary reference scan. We also investigated the effects on accuracy of tracker sampling rate, measurement jitter, interpolation of tracker measurements, and the synchronization of motion data and CT projections. After optimization of these aspects, motion corrected images corresponded remarkably closely to images of the stationary phantom with correlation and similarity coefficients both above 0.9. We performed a simulation study using volunteer head motion and found similarly that our method is capable of compensating effectively for realistic human head movements. To the best of our knowledge, this is the first practical demonstration of generalized rigid motion correction in helical CT. Its clinical value, which we have yet to explore, may be significant. For example it could reduce the necessity for repeat scans and resource-intensive anesthetic and sedation procedures in patient groups prone to motion, such as young children. It is not only applicable to dedicated CT imaging, but also to hybrid PET/CT and SPECT/CT, where it could also ensure an accurate CT image for lesion localization and attenuation correction of the functional image data.

  16. Robust diffraction correction method for high-frequency ultrasonic tissue characterization

    NASA Astrophysics Data System (ADS)

    Raju, Balasundar

    2001-05-01

    The computation of quantitative ultrasonic parameters such as the attenuation or backscatter coefficient requires compensation for diffraction effects. In this work a simple and accurate diffraction correction method for skin characterization requiring only a single focal zone is developed. The advantage of this method is that the transducer need not be mechanically repositioned to collect data from several focal zones, thereby reducing the time of imaging and preventing motion artifacts. Data were first collected under controlled conditions from skin of volunteers using a high-frequency system (center frequency=33 MHz, BW=28 MHz) at 19 focal zones through axial translation. Using these data, mean backscatter power spectra were computed as a function of the distance between the transducer and the tissue, which then served as empirical diffraction correction curves for subsequent data. The method was demonstrated on patients patch-tested for contact dermatitis. The computed attenuation coefficient slope was significantly (p<0.05) lower at the affected site (0.13+/-0.02 dB/mm/MHz) compared to nearby normal skin (0.2+/-0.05 dB/mm/MHz). The mean backscatter level was also significantly lower at the affected site (6.7+/-2.1 in arbitrary units) compared to normal skin (11.3+/-3.2). These results show diffraction corrected ultrasonic parameters can differentiate normal from affected skin tissues.

  17. Feed-forward digital phase and amplitude correction system

    DOEpatents

    Yu, D.U.L.; Conway, P.H.

    1994-11-15

    Phase and amplitude modifications in repeatable RF pulses at the output of a high power pulsed microwave amplifier are made utilizing a digital feed-forward correction system. A controlled amount of the output power is coupled to a correction system for processing of phase and amplitude information. The correction system comprises circuitry to compare the detected phase and amplitude with the desired phase and amplitude, respectively, and a digitally programmable phase shifter and attenuator and digital logic circuitry to control the phase shifter and attenuator. The phase and amplitude of subsequent pulses are modified by output signals from the correction system. 11 figs.

  18. Feed-forward digital phase and amplitude correction system

    DOEpatents

    Yu, David U. L.; Conway, Patrick H.

    1994-01-01

    Phase and amplitude modifications in repeatable RF pulses at the output of a high power pulsed microwave amplifier are made utilizing a digital feed-forward correction system. A controlled amount of the output power is coupled to a correction system for processing of phase and amplitude information. The correction system comprises circuitry to compare the detected phase and amplitude with the desired phase and amplitude, respectively, and a digitally programmable phase shifter and attenuator and digital logic circuitry to control the phase shifter and attenuator. The phase and amplitude of subsequent pulses are modified by output signals from the correction system.

  19. Reconstruction algorithm for polychromatic CT imaging: application to beam hardening correction

    NASA Technical Reports Server (NTRS)

    Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.

    2000-01-01

    This paper presents a new reconstruction algorithm for both single- and dual-energy computed tomography (CT) imaging. By incorporating the polychromatic characteristics of the X-ray beam into the reconstruction process, the algorithm is capable of eliminating beam hardening artifacts. The single-energy version of the algorithm assumes that each voxel in the scan field can be expressed as a mixture of two known substances, for example, a mixture of trabecular bone and marrow, or a mixture of fat and flesh. These assumptions are easily satisfied in a quantitative computed tomography (QCT) setting. We have compared our algorithm to three commonly used single-energy correction techniques. Experimental results show that our algorithm is much more robust and accurate. We have also shown that QCT measurements obtained using our algorithm are five times more accurate than those from current QCT systems (using calibration). The dual-energy mode does not require any prior knowledge of the object in the scan field, and can be used to estimate the attenuation coefficient function of unknown materials. We have tested the dual-energy setup to obtain an accurate estimate for the attenuation coefficient function of K2HPO4 solution.

  20. MNAtoolbox: A Monitored Natural Attenuation Site Screening Program

    SciTech Connect

    Borns, David J.; Brady, Patrick V.; Brady, Warren D.; Krupka, Kenneth M.; Spalding, Brian P.; Waters, Robert D.; Zhang, Pengchu

    1999-07-12

    Screening of sites for the potential application and reliance upon monitored natural attenuation (MNA) can be done using MNAtoolbox, a web-based tool for estimating extent of biodegradation, chemical transformation, and dilution. MNAtoolbox uses site-specific input data, where available (default parameters are taken from the literature), to roughly quantify the nature and extent of attenuation at a particular site. Use of MNAtoolbox provides 3 important elements of site evaluation: (1) Identifies likely attenuation pathways, (2) Clearly identifies sites where MNA is inappropriate, and (3) Evaluates data requirements for subsequent reliance on MNA as a sole or partial corrective action.

  1. Shear wave speed dispersion and attenuation in granular marine sediments.

    PubMed

    Kimura, Masao

    2013-07-01

    The reported compressional wave speed dispersion and attenuation could be explained by a modified gap stiffness model incorporated into the Biot model (the BIMGS model). In contrast, shear wave speed dispersion and attenuation have not been investigated in detail. No measurements of shear wave speed dispersion have been reported, and only Brunson's data provide the frequency characteristics of shear wave attenuation. In this study, Brunson's attenuation measurements are compared to predictions using the Biot-Stoll model and the BIMGS model. It is shown that the BIMGS model accurately predicts the frequency dependence of shear wave attenuation. Then, the shear wave speed dispersion and attenuation in water-saturated silica sand are measured in the frequency range of 4-20 kHz. The vertical stress applied to the sample is 17.6 kPa. The temperature of the sample is set to be 5 °C, 20 °C, and 35 °C in order to change the relaxation frequency in the BIMGS model. The measured results are compared with those calculated using the Biot-Stoll model and the BIMGS model. It is shown that the shear wave speed dispersion and attenuation are predicted accurately by using the BIMGS model. PMID:23862793

  2. Asymmetric scatter kernels for software-based scatter correction of gridless mammography

    NASA Astrophysics Data System (ADS)

    Wang, Adam; Shapiro, Edward; Yoon, Sungwon; Ganguly, Arundhuti; Proano, Cesar; Colbeth, Rick; Lehto, Erkki; Star-Lack, Josh

    2015-03-01

    Scattered radiation remains one of the primary challenges for digital mammography, resulting in decreased image contrast and visualization of key features. While anti-scatter grids are commonly used to reduce scattered radiation in digital mammography, they are an incomplete solution that can add radiation dose, cost, and complexity. Instead, a software-based scatter correction method utilizing asymmetric scatter kernels is developed and evaluated in this work, which improves upon conventional symmetric kernels by adapting to local variations in object thickness and attenuation that result from the heterogeneous nature of breast tissue. This fast adaptive scatter kernel superposition (fASKS) method was applied to mammography by generating scatter kernels specific to the object size, x-ray energy, and system geometry of the projection data. The method was first validated with Monte Carlo simulation of a statistically-defined digital breast phantom, which was followed by initial validation on phantom studies conducted on a clinical mammography system. Results from the Monte Carlo simulation demonstrate excellent agreement between the estimated and true scatter signal, resulting in accurate scatter correction and recovery of 87% of the image contrast originally lost to scatter. Additionally, the asymmetric kernel provided more accurate scatter correction than the conventional symmetric kernel, especially at the edge of the breast. Results from the phantom studies on a clinical system further validate the ability of the asymmetric kernel correction method to accurately subtract the scatter signal and improve image quality. In conclusion, software-based scatter correction for mammography is a promising alternative to hardware-based approaches such as anti-scatter grids.

  3. Radiofrequency attenuator and method

    DOEpatents

    Warner, Benjamin P.; McCleskey, T. Mark; Burrell, Anthony K.; Agrawal, Anoop; Hall, Simon B.

    2009-01-20

    Radiofrequency attenuator and method. The attenuator includes a pair of transparent windows. A chamber between the windows is filled with molten salt. Preferred molten salts include quaternary ammonium cations and fluorine-containing anions such as tetrafluoroborate (BF₄⁻), hexafluorophosphate (PF₆⁻), hexafluoroarsenate (AsF₆⁻), trifluoromethylsulfonate (CF₃SO₃⁻), bis(trifluoromethylsulfonyl)imide ((CF₃SO₂)₂N⁻), bis(perfluoroethylsulfonyl)imide ((CF₃CF₂SO₂)₂N⁻), and tris(trifluoromethylsulfonyl)methide ((CF₃SO₂)₃C⁻). Radicals or radical cations may be added to or electrochemically generated in the molten salt to enhance the RF attenuation.

  4. Radiofrequency attenuator and method

    DOEpatents

    Warner, Benjamin P.; McCleskey, T. Mark; Burrell, Anthony K.; Agrawal, Anoop; Hall, Simon B.

    2009-11-10

    Radiofrequency attenuator and method. The attenuator includes a pair of transparent windows. A chamber between the windows is filled with molten salt. Preferred molten salts include quaternary ammonium cations and fluorine-containing anions such as tetrafluoroborate (BF₄⁻), hexafluorophosphate (PF₆⁻), hexafluoroarsenate (AsF₆⁻), trifluoromethylsulfonate (CF₃SO₃⁻), bis(trifluoromethylsulfonyl)imide ((CF₃SO₂)₂N⁻), bis(perfluoroethylsulfonyl)imide ((CF₃CF₂SO₂)₂N⁻), and tris(trifluoromethylsulfonyl)methide ((CF₃SO₂)₃C⁻). Radicals or radical cations may be added to or electrochemically generated in the molten salt to enhance the RF attenuation.

  5. Seismic attenuation in Florida

    SciTech Connect

    Bellini, J.J.; Bartolini, T.J.; Lord, K.M.; Smith, D.L. . Dept. of Geology)

    1993-03-01

    Seismic signals recorded by the expanded distribution of earthquake seismograph stations throughout Florida, together with data from a comprehensive review of the record archives of station GAI, contribute to an initial seismic attenuation model for the Florida Plateau. Based on calculations of surface particle velocity, a pattern of attenuation exists that appears to deviate from that established for the remainder of the southeastern US. Most values suggest greater seismic attenuation within the Florida Plateau. However, a separate pattern may exist for those signals arising from the Gulf of Mexico. These results have important implications for seismic hazard assessments in Florida and may be indicative of the unique lithospheric identity of the Florida basement as an exotic terrane.

  6. Detection and measurement of gamma-ray self-attenuation in plutonium residues

    SciTech Connect

    Prettyman, T.H.; Foster, L.A.; Estep, R.J.

    1996-09-01

    A new method to correct for self-attenuation in gamma-ray assays of plutonium is presented. The underlying assumptions of the technique are based on a simple but accurate physical model of plutonium residues, particularly pyrochemical salts, in which it is assumed that the plutonium is divided into two portions, each of which can be treated separately from the standpoint of gamma-ray analysis: a portion that is in the form of plutonium metal shot; and a dilute portion that is mixed with the matrix. The performance of the technique is evaluated using assays of plutonium residues by tomographic gamma scanning at the Los Alamos Plutonium Facility. The ability of the method to detect saturation conditions is examined.

  7. A CORRECTION.

    PubMed

    Johnson, D

    1940-03-22

    In a recently published volume on "The Origin of Submarine Canyons" the writer inadvertently credited to A. C. Veatch an excerpt from a submarine chart actually contoured by P. A. Smith, of the U. S. Coast and Geodetic Survey. The chart in question is Chart IVB of Special Paper No. 7 of the Geological Society of America entitled "Atlantic Submarine Valleys of the United States and the Congo Submarine Valley, by A. C. Veatch and P. A. Smith," and the excerpt appears as Plate III of the volume first cited above. In view of the heavy labor involved in contouring the charts accompanying the paper by Veatch and Smith and the beauty of the finished product, it would be unfair to Mr. Smith to permit the error to go uncorrected. Excerpts from two other charts are correctly ascribed to Dr. Veatch. PMID:17839404

  8. SU-E-I-07: An Improved Technique for Scatter Correction in PET

    SciTech Connect

    Lin, S; Wang, Y; Lue, K; Lin, H; Chuang, K

    2014-06-01

    Purpose: In positron emission tomography (PET), the single scatter simulation (SSS) algorithm is widely used for scatter estimation in clinical scans. However, bias usually occurs at the essential step of scaling the computed SSS distribution to the real scatter amount by employing the scatter-only projection tail. The bias can be amplified when the scatter-only projection tail is too small, resulting in incorrect scatter correction. To this end, we propose a novel scatter calibration technique to accurately estimate the amount of scatter using a pre-determined scatter fraction (SF) function instead of the scatter-only tail information. Methods: As the SF depends on the radioactivity distribution and the attenuating material of the patient, an accurate theoretical relation cannot be devised. Instead, we constructed an empirical transformation function between SFs and average attenuation coefficients based on a series of phantom studies with different sizes and materials. From the average attenuation coefficient, the predicted SFs were calculated using the empirical transformation function. Hence, the real scatter amount can be obtained by scaling the SSS distribution with the predicted SFs. The simulation was conducted using SimSET. The Siemens Biograph™ 6 PET scanner was modeled in this study. The Software for Tomographic Image Reconstruction (STIR) was employed to estimate the scatter and reconstruct images. The EEC phantom was adopted to evaluate the performance of our proposed technique. Results: The scatter-corrected image of our method demonstrated improved image contrast over that of SSS. For our technique and SSS, the normalized standard deviations of the reconstructed images were 0.053 and 0.182, respectively; the root mean squared errors were 11.852 and 13.767, respectively. Conclusion: We have proposed an alternative method to calibrate SSS (C-SSS) to the absolute scatter amounts using SF. This method can avoid the bias caused by the insufficient
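
    A minimal sketch of the scaling step this calibration replaces, assuming an unscaled single-scatter-simulation sinogram, a measured (trues plus scatter) sinogram, and an empirical SF-versus-average-attenuation function from phantom studies; the polynomial coefficients below are made-up placeholders, not the calibration reported in the abstract.

        import numpy as np

        def predicted_scatter_fraction(mu_avg):
            """Empirical SF as a function of the average attenuation coefficient (placeholder)."""
            return float(np.clip(0.10 + 2.5 * mu_avg, 0.0, 0.6))

        def calibrate_sss(sss_sinogram, measured_sinogram, mu_avg):
            """Scale the SSS estimate so its total matches the predicted scatter amount."""
            sf = predicted_scatter_fraction(mu_avg)
            target_scatter = sf * measured_sinogram.sum()
            return sss_sinogram * (target_scatter / sss_sinogram.sum())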

  9. The importance of accurate atmospheric modeling

    NASA Astrophysics Data System (ADS)

    Payne, Dylan; Schroeder, John; Liang, Pang

    2014-11-01

    This paper will focus on the effect of atmospheric conditions on EO sensor performance using computer models. We have shown the importance of accurately modeling atmospheric effects for predicting the performance of an EO sensor. A simple example demonstrates how real conditions for several sites in China significantly impact image correction, hyperspectral imaging, and remote sensing. The current state-of-the-art model for computing atmospheric transmission and radiance is MODTRAN® 5, developed by the US Air Force Research Laboratory and Spectral Science, Inc. Research by the US Air Force, Navy and Army resulted in the public release of LOWTRAN 2 in the early 1970's; subsequent releases of LOWTRAN and MODTRAN® have continued until the present. The paper will demonstrate the importance of using validated models and locally measured meteorological, atmospheric and aerosol conditions to accurately simulate the atmospheric transmission and radiance. Frequently default conditions are used, which can produce errors of as much as 75% in these values. This can have significant impact on remote sensing applications.

  10. Accurate measurement of unsteady state fluid temperature

    NASA Astrophysics Data System (ADS)

    Jaremkiewicz, Magdalena

    2016-07-01

    In this paper, two accurate methods for determining the transient fluid temperature are presented. Measurements were conducted for boiling water, since its temperature is known. At the beginning the thermometers are at ambient temperature; they are then immediately immersed in saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One of them is a K-type industrial thermometer widely available commercially. The temperature indicated by this thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new thermometer design was also proposed and used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheathed thermocouple located at its center. The temperature of the fluid was determined from measurements taken on the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of air flowing through a wind tunnel were also carried out with the same thermometers. The proposed measurement technique provides more accurate results than measurements using industrial thermometers in conjunction with a simple temperature correction based on a first- or second-order inertia model. By comparing the results, it was demonstrated that the new thermometer allows the fluid temperature to be obtained much faster and with higher accuracy than the industrial thermometer. Accurate measurements of fast-changing fluid temperature are possible due to the low-inertia thermometer and the fast space marching method applied to solve the inverse heat conduction problem.
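
    For the simpler of the two corrections mentioned above, treating the thermometer as a first-order inertia device, the fluid temperature can be recovered from the indicated temperature and its time derivative. The sketch below assumes a known time constant and a uniformly sampled signal, and uses a plain finite-difference derivative; it illustrates the first-order model only, not the paper's inverse space marching method.

        import numpy as np

        def correct_first_order(t_indicated, dt_s, tau_s):
            """Fluid temperature from the first-order sensor model
               tau * dT_ind/dt + T_ind = T_fluid."""
            return t_indicated + tau_s * np.gradient(t_indicated, dt_s)

        # illustrative check: a sensor with tau = 3 s plunged into 100 C water at t = 0
        t = np.arange(0.0, 20.0, 0.1)
        indicated = 20.0 + 80.0 * (1.0 - np.exp(-t / 3.0))
        print(correct_first_order(indicated, 0.1, 3.0)[50])   # close to 100 C long before the sensor settles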

  11. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.

  12. Tritium Attenuation by Distillation

    SciTech Connect

    Wittman, N.E.

    2001-07-31

    The objective of this study was to determine how a 100 Area distillation system could be used to reduce to a satisfactory low value the tritium content of the dilute moderator produced in the 100 Area stills, and whether such a tritium attenuator would have sufficient capacity to process all this material before it is sent to the 400 Area for reprocessing.

  13. Dead-time Corrected Disdrometer Data

    DOE Data Explorer

    Bartholomew, Mary Jane

    2008-03-05

    Original and dead-time corrected disdrometer results for observations made at SGP and TWP. The correction is based on the technique discussed in Sheppard and Joe, 1994. In addition, these files contain the calculated radar reflectivity factor, mean Doppler velocity, and attenuation for every measurement, for both the original and dead-time corrected data, at the following wavelengths: 0.316, 0.856, 3.2, 5, and 10 cm (W, K, X, C, and S bands). Pavlos Kollias provided the code to do these calculations.

  14. How to accurately bypass damage

    PubMed Central

    Broyde, Suse; Patel, Dinshaw J.

    2016-01-01

    Ultraviolet radiation can cause cancer through DNA damage — specifically, by linking adjacent thymine bases. Crystal structures show how the enzyme DNA polymerase η accurately bypasses such lesions, offering protection. PMID:20577203

  15. Accurate Evaluation of Quantum Integrals

    NASA Technical Reports Server (NTRS)

    Galant, David C.; Goorvitch, D.

    1994-01-01

    Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.

  16. Straylight correction to Doppler rotation measurements

    NASA Astrophysics Data System (ADS)

    Andersen, B. N.

    1985-07-01

    The correction of the Pierce and LoPresto (1984) Doppler data on the plasma rotation rate for stray light increases the observed equatorial rotation velocity from 1977 to 2004 m/sec. This correction has an uncertainty of approximately 10 m/sec, because the exact form of the stray light function is not available. The correction is noted to be largest for the blue lines, by virtue of increased scattering, and for the weak lines, due to the limb effect.

  17. A compact rotary vane attenuator

    NASA Technical Reports Server (NTRS)

    Nixon, D. L.; Otosh, T. Y.; Stelzried, C. T.

    1969-01-01

    Rotary vane attenuator, when used as a front end attenuator, introduces an insertion loss that is proportional to the angle of rotation. New technique allows the construction of a shortened compact unit suitable for most installations.

  18. Radioactive smart probe for potential corrected matrix metalloproteinase imaging.

    PubMed

    Huang, Chiun-Wei; Li, Zibo; Conti, Peter S

    2012-11-21

    Although various activatable optical probes have been developed to visualize metalloproteinase (MMP) activities in vivo, precise quantification of the enzyme activity is limited due to the inherent scattering and attenuation (limited depth penetration) properties of optical imaging. In this investigation, a novel activatable peptide probe (64)Cu-BBQ650-PLGVR-K(Cy5.5)-E-K(DOTA)-OH was constructed to detect tumor MMP activity in vivo. This agent is optically quenched in its native form, but releases strong fluorescence upon cleavage by selected enzymes. MMP specificity was confirmed both in vitro and in vivo by fluorescent imaging studies. The use of a single modality to image biomarkers/processes may lead to erroneous interpretation of imaging data. The introduction of a quantitative imaging modality, such as PET, would make it feasible to correct the enzyme activity determined from optical imaging. In this proof of principle report, we demonstrated the feasibility of correcting the activatable optical imaging data through the PET signal. This approach provides an attractive new strategy for accurate imaging of MMP activity, which may also be applied for other protease imaging. PMID:23025637

  19. Mapping Pn amplitude spreading and attenuation in Asia

    SciTech Connect

    Yang, Xiaoning; Phillips, William S; Stead, Richard J

    2010-12-06

    Pn travels most of its path in the mantle lid. Mapping the lateral variation of Pn amplitude attenuation sheds light on material properties and dynamics of the uppermost region of the mantle. Pn amplitude variation depends on the wavefront geometric spreading as well as material attenuation. We investigated Pn geometric spreading, which is much more complex than a traditionally assumed power-law spreading model, using both synthetic and observed amplitude data collected in Asia. We derived a new Pn spreading model based on the formulation that was proposed previously to account for the spherical shape of the Earth (Yang et al., BSSA, 2007). The new parameters derived for the spreading model provide a much better correction for Pn amplitudes in terms of residual behavior. Because we used observed Pn amplitudes to construct the model, the model incorporates not only the effect of the Earth's spherical shape, but also the effect of potential upper-mantle velocity gradients in the region. Using the new spreading model, we corrected Pn amplitudes measured at 1, 2, 4 and 6 Hz and conducted attenuation tomography. The resulting Pn attenuation model correlates well with the regional geology. We see high attenuation in regions such as the northern Tibetan Plateau and the western Pacific subduction zone, and low attenuation for stable blocks such as the Sichuan and Tarim basins.

  20. Device accurately measures and records low gas-flow rates

    NASA Technical Reports Server (NTRS)

    Branum, L. W.

    1966-01-01

    Free-floating piston in a vertical column accurately measures and records low gas-flow rates. The system may be calibrated, using an adjustable flow-rate gas supply, a low pressure gage, and a sequence recorder. From the calibration rates, a nomograph may be made for easy reduction. Temperature correction may be added for further accuracy.

  1. Fluid dynamic bowtie attenuators

    NASA Astrophysics Data System (ADS)

    Szczykutowicz, Timothy P.; Hermus, James

    2015-03-01

    Fluence field modulated CT allows for improvements in image quality and dose reduction. To date, only 1-D modulators have been proposed; extending to 2-D modulation is difficult with solid-metal attenuation-based modulators. This work proposes using liquids and gas to attenuate the x-ray beam; these attenuators can be arrayed, allowing 2-D fluence modulation. The liquid thickness, and the gas pressure for a given path length, that provide the same attenuation as 30 cm of soft tissue were determined at 80, 100, 120, and 140 kV. Gaseous xenon and liquid iodine, zinc chloride, and cerium chloride were studied. Additionally, we performed proof-of-concept experiments in which (1) a single cell of liquid was connected to a reservoir which allowed the liquid thickness to be modulated and (2) a 96-cell array was constructed in which the liquid thickness in each cell was adjusted manually. The liquid thickness varied as a function of kV and chemical composition, with zinc chloride allowing the smallest thickness; 1.8, 2.25, 3, and 3.6 cm compensated for 30 cm of soft tissue at 80, 100, 120, and 140 kV, respectively. The 96-cell iodine attenuator allowed for a reduction in both the dynamic range presented to the detector and the scatter-to-primary ratio. Successful modulation of a single cell was performed at 0, 90, and 130 degrees using a simple piston/actuator. The thicknesses of the liquids and the xenon gas pressure appear logistically implementable within the constraints of CBCT and diagnostic CT systems.
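
    To illustrate the thickness-matching calculation described above, the short sketch below equates the exponential attenuation of a liquid column with that of 30 cm of soft tissue via the Beer-Lambert law, mu_liquid x t = mu_tissue x 30 cm. The attenuation coefficients are placeholder values chosen for illustration, not the measured values from this work.

        # Minimal Beer-Lambert thickness-matching sketch (illustrative coefficients only).
        mu_soft_tissue = 0.18   # assumed effective linear attenuation of soft tissue, 1/cm
        mu_liquid = 3.0         # assumed effective linear attenuation of the liquid, 1/cm
        tissue_path_cm = 30.0

        # Equal attenuation requires equal mu*t products.
        liquid_thickness_cm = mu_soft_tissue * tissue_path_cm / mu_liquid
        print(f"Liquid thickness matching 30 cm of soft tissue: {liquid_thickness_cm:.2f} cm")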

  2. Downhole pressure attenuation apparatus

    SciTech Connect

    Ricles, T.D.; Barton, J.A.

    1992-02-18

    This patent describes a process for preventing damage to tool strings and other downhole equipment in a well caused by pressures produced during detonation of one or more downhole explosive devices. At least one pressure-attenuating apparatus is added to the tool string to attenuate the peak pressure wave and the quasi-static pressure pulse produced by the explosive devices. The pressure-attenuating apparatus includes an initially closed relief vent comprising tubing means that support a plurality of charge port assemblies, each including an explosive-filled shaped charge and a prestressed disc; the shaped charges are interconnected by a detonating cord, and the amount of explosive in each shaped charge is sufficient to rupture its associated disc without damaging surrounding tubular bodies in the well. A vent chamber defined by the tubing means provides a liquid-free volume. The relief vent is opened substantially contemporaneously with detonation of the downhole explosive devices by detonating the shaped charges to rupture the discs of the charge port assemblies.

  3. Accurate metacognition for visual sensory memory representations.

    PubMed

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception. PMID:24549293

  4. Accurate Measurement of Bone Density with QCT

    NASA Technical Reports Server (NTRS)

    Cleek, Tammy M.; Beaupre, Gary S.; Matsubara, Miki; Whalen, Robert T.; Dalton, Bonnie P. (Technical Monitor)

    2002-01-01

    The objective of this study was to determine the accuracy of bone density measurement with a new QCT technology. A phantom was fabricated using two materials, a water-equivalent compound and hydroxyapatite (HA), combined in precise proportions (QRM GmbH, Germany). The phantom was designed to have the approximate physical size and range in bone density of a human calcaneus, with regions of 0, 50, 100, 200, 400, and 800 mg/cc HA. The phantom was scanned at 80, 120 and 140 kVp with a GE CT/i HiSpeed Advantage scanner. A ring of highly attenuating material (polyvinyl chloride or teflon) was slipped over the phantom to alter the image by introducing non-axisymmetric beam hardening. Images were corrected with the new QCT technology using an estimate of the effective X-ray beam spectrum to eliminate beam hardening artifacts. The algorithm computes the volume fraction of HA and water-equivalent matrix in each voxel. We found excellent agreement between expected and computed HA volume fractions. Results were insensitive to the beam hardening ring material, HA concentration, and scan voltage settings. Data from all three voltages are displayed with a best-fit linear regression.
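
    The per-voxel volume-fraction computation mentioned in the abstract can be illustrated with a simple two-material mixture model; the attenuation coefficients below are placeholders, not the calibrated effective-spectrum model used in the study.

        # Two-material decomposition: mu_voxel = v*mu_HA + (1 - v)*mu_water.
        def ha_volume_fraction(mu_voxel, mu_water=0.20, mu_ha=0.55):
            """Return the hydroxyapatite volume fraction v, clipped to [0, 1].
            The coefficients (1/cm) are illustrative, not calibrated values."""
            v = (mu_voxel - mu_water) / (mu_ha - mu_water)
            return min(max(v, 0.0), 1.0)

        # Example: a voxel with an effective attenuation of 0.27 1/cm.
        print(ha_volume_fraction(0.27))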

  5. Flexible graphene based microwave attenuators.

    PubMed

    Byun, Kisik; Ju Park, Yong; Ahn, Jong-Hyun; Min, Byung-Wook

    2015-02-01

    We demonstrate flexible 3 dB and 6 dB microwave attenuators using multilayer graphene grown by the chemical vapor deposition method. On the basis of the characterization of the multilayer graphene and of the graphene-Au ohmic contacts, the graphene attenuators were designed and measured. The flexible graphene-based attenuators provide 3 dB and 6 dB attenuation with a return loss of less than -15 dB above 5 GHz. The devices showed durability over 100 bending cycles. A circuit model of the attenuator based on the characterization results matches the experimental results well. PMID:25590144

  6. Incorporating corrections for the head-holder and compensation filter when calculating skin dose during fluoroscopically guided interventions

    NASA Astrophysics Data System (ADS)

    Vijayan, Sarath; Rana, Vijay K.; Rudin, Stephen; Bednarek, Daniel R.

    2015-03-01

    The skin dose tracking system (DTS) that we developed provides a color-coded illustration of the cumulative skin dose distribution on a 3D graphic of the patient during fluoroscopic procedures for immediate feedback to the interventionist. To improve the accuracy of dose calculation, we have now incorporated two additional important corrections: (1) for the holder used to immobilize the head in neuro-interventions and (2) for the built-in compensation filters used for beam equalization. Both devices have been modeled in the DTS software so that beam intensity corrections can be made. The head-holder is modeled as two concentric hemi-cylindrical surfaces such that the path length between those surfaces can be determined for rays to individual points on the skin surface. The head-holder on the imaging system we used was measured to attenuate the primary x-rays by 10 to 20% at normal incidence, and by up to 40% at non-normal incidence. In addition, three compensation filters of different shape are built into the collimator apparatus and were measured to have attenuation factors ranging from 58% to 99%, depending on kVp and beam filtration. These filters can translate and rotate in the beam, and their motion is tracked by the DTS using the digital signal from the imaging system. When it is determined that a ray to a given point on the skin passes through a compensation filter, the appropriate attenuation correction is applied. These corrections have been successfully incorporated in the DTS software to provide a more accurate determination of skin dose.
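
    A minimal sketch of the kind of geometric calculation involved: the length of a ray inside a shell bounded by two concentric cylindrical surfaces, which could then be multiplied by the holder's attenuation coefficient. The 2-D cross-section treatment and the radii are simplifying assumptions for illustration, not the DTS implementation.

        import numpy as np

        def chord_length(p, d, radius):
            """Length of the segment of the ray p + t*d (t >= 0) inside a circle of the
            given radius centered at the origin; returns 0 if the ray misses the circle."""
            d = d / np.linalg.norm(d)
            b = np.dot(p, d)
            c = np.dot(p, p) - radius ** 2
            disc = b * b - c
            if disc <= 0.0:
                return 0.0
            t1 = max(-b - np.sqrt(disc), 0.0)
            t2 = max(-b + np.sqrt(disc), 0.0)
            return t2 - t1

        def shell_path_length(p, d, r_inner, r_outer):
            """Path length of the ray inside the shell between the two concentric surfaces."""
            return chord_length(p, d, r_outer) - chord_length(p, d, r_inner)

        # Example: source 60 cm from the axis, ray aimed at the axis, 9 and 10 cm radii (assumed).
        print(shell_path_length(np.array([60.0, 0.0]), np.array([-1.0, 0.0]), 9.0, 10.0))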

  7. Broadband Lg Attenuation Modeling in the Middle East

    SciTech Connect

    Pasyanos, M E; Matzel, E M; Walter, W R; Rodgers, A J

    2008-08-21

    We present a broadband tomographic model of Lg attenuation in the Middle East derived from source- and site-corrected amplitudes. Absolute amplitude measurements are made on hand-selected and carefully windowed seismograms for tens of stations and thousands of crustal earthquakes, resulting in excellent coverage of the region. A conjugate gradient method is used to tomographically invert the amplitude dataset of over 8000 paths over a 45° x 40° region of the Middle East. We solve for Q variation, as well as site and source terms, for frequencies ranging from 0.5 to 10 Hz. We have modified the standard attenuation tomography technique to more explicitly define the earthquake source expression in terms of the seismic moment. This facilitates the use of the model to predict the expected amplitudes of new events, an important consideration for earthquake hazard or explosion monitoring applications. The attenuation results have a strong correlation to tectonics. Shields have low attenuation, while tectonic regions have high attenuation, with the highest attenuation at 1 Hz found in eastern Turkey. The results also compare favorably to other studies in the region made using Lg propagation efficiency, Lg/Pg amplitude ratios and two-station methods. We tomographically invert the amplitude measurements for each frequency independently. In doing so, it appears the frequency dependence of attenuation is not compatible with the power-law representation of Q(f), an assumption that is often made.

  8. Broad-band Lg attenuation modelling in the Middle East

    NASA Astrophysics Data System (ADS)

    Pasyanos, Michael E.; Matzel, Eric M.; Walter, William R.; Rodgers, Arthur J.

    2009-06-01

    We present a broad-band tomographic model of Lg attenuation in the Middle East derived from source- and site-corrected amplitudes. Absolute amplitude measurements are made on hand-selected and carefully windowed seismograms for tens of stations and thousands of crustal earthquakes resulting in excellent coverage of the region. A conjugate gradient method is used to tomographically invert the amplitude data set of over 8000 paths over a 45° × 40° region of the Middle East. We solve for Q variation, as well as site and source terms, for a wide range of frequencies ranging from 0.5 to 10 Hz. We have modified the standard attenuation tomography technique to more explicitly define the earthquake source expression in terms of the seismic moment. This facilitates the use of the model to predict the expected amplitudes of new events, an important consideration for earthquake hazard or explosion monitoring applications. The attenuation results have a strong correlation to tectonics. Shields have low attenuation, whereas tectonic regions have high attenuation, with the highest attenuation at 1 Hz found in eastern Turkey. The results also compare favourably to other studies in the region made using Lg propagation efficiency, Lg/Pg amplitude ratios and two-station methods. We tomographically invert the amplitude measurements for each frequency independently. In doing so, it appears the frequency dependence of attenuation in all regions is not compatible with the power-law representation of Q(f), an assumption that is often made.

  9. A design of backing seat and gasket assembly in diamond anvil cell for accurate single crystal x-ray diffraction to 5 GPa

    NASA Astrophysics Data System (ADS)

    Komatsu, K.; Kagi, H.; Yasuzuka, T.; Koizumi, T.; Iizuka, R.; Sugiyama, K.; Yokoyama, Y.

    2011-10-01

    We designed a new cell assembly for diamond anvil cells for single-crystal x-ray diffraction under pressure and demonstrate its application to crystallographic studies of ice VI and the ethanol high-pressure (HP) phase at 0.95(5) GPa and 1.95(2) GPa, respectively. The features of the assembly are: (1) a platy anvil and a uniquely shaped backing seat (called the "Wing seat") allowing an extremely wide opening angle of up to ±65°, and (2) a PFA-bulk metallic glass composite gasket allowing easy attenuation correction and lower background. Thanks to the designed assembly, the Rint values after attenuation corrections are fairly good (0.0125 and 0.0460 for ice VI and the ethanol HP phase, respectively), and the errors of the refined parameters are satisfactorily small even for the hydrogen positions, comparable to results obtained at ambient conditions. The result for ice VI is in excellent agreement with the previous study, and that for the ethanol HP phase contributes substantially to the revision of its structure; the H12 site, which makes gauche molecules with the O1, C2, and C3 sites, may not exist, so that only trans conformers are present, at least at 1.95(2) GPa. The accurate intensities obtained using the cell assembly allow us to extract the electron density of the ethanol HP phase by the maximum entropy method.

  10. Ultrasonic attenuation in pearlitic steel.

    PubMed

    Du, Hualong; Turner, Joseph A

    2014-03-01

    Expressions for the attenuation coefficients of longitudinal and transverse ultrasonic waves are developed for steel with pearlitic microstructure. This type of lamellar duplex microstructure influences attenuation because of the lamellar spacing. In addition, longitudinal attenuation measurements were conducted using an unfocused transducer with 10 MHz central frequency on the cross section of a quenched railroad wheel sample. The dependence of longitudinal attenuation on the pearlite microstructure is observed from the changes of longitudinal attenuation from the quenched tread surface to deeper locations. The results show that the attenuation value is lowest and relatively constant within the quench depth, then increases linearly. The experimental results demonstrate a reasonable agreement with results from the theoretical model. Ultrasonic attenuation provides an important non-destructive method to evaluate duplex microstructure within grains which can be implemented for quality control in conjunction with other manufacturing processes. PMID:24268679

  11. Nonlinear dual reconstruction of SPECT activity and attenuation images.

    PubMed

    Liu, Huafeng; Guo, Min; Hu, Zhenghui; Shi, Pengcheng; Hu, Hongjie

    2014-01-01

    In single photon emission computed tomography (SPECT), accurate attenuation maps are needed to perform essential attenuation compensation for high quality radioactivity estimation. Formulating the SPECT activity and attenuation reconstruction tasks as coupled signal estimation and system parameter identification problems, where the activity distribution and the attenuation parameter are treated as random variables with known prior statistics, we present a nonlinear dual reconstruction scheme based on the unscented Kalman filtering (UKF) principles. In this effort, the dynamic changes of the organ radioactivity distribution are described through state space evolution equations, while the photon-counting SPECT projection data are measured through the observation equations. Activity distribution is then estimated with sub-optimal fixed attenuation parameters, followed by attenuation map reconstruction given these activity estimates. Such coupled estimation processes are iteratively repeated as necessary until convergence. The results obtained from Monte Carlo simulated data, physical phantom, and real SPECT scans demonstrate the improved performance of the proposed method both from visual inspection of the images and a quantitative evaluation, compared to the widely used EM-ML algorithms. The dual estimation framework has the potential to be useful for estimating the attenuation map from emission data only and thus benefit the radioactivity reconstruction. PMID:25225796

  12. Nonlinear Dual Reconstruction of SPECT Activity and Attenuation Images

    PubMed Central

    Liu, Huafeng; Guo, Min; Hu, Zhenghui; Shi, Pengcheng; Hu, Hongjie

    2014-01-01

    In single photon emission computed tomography (SPECT), accurate attenuation maps are needed to perform essential attenuation compensation for high quality radioactivity estimation. Formulating the SPECT activity and attenuation reconstruction tasks as coupled signal estimation and system parameter identification problems, where the activity distribution and the attenuation parameter are treated as random variables with known prior statistics, we present a nonlinear dual reconstruction scheme based on the unscented Kalman filtering (UKF) principles. In this effort, the dynamic changes of the organ radioactivity distribution are described through state space evolution equations, while the photon-counting SPECT projection data are measured through the observation equations. Activity distribution is then estimated with sub-optimal fixed attenuation parameters, followed by attenuation map reconstruction given these activity estimates. Such coupled estimation processes are iteratively repeated as necessary until convergence. The results obtained from Monte Carlo simulated data, physical phantom, and real SPECT scans demonstrate the improved performance of the proposed method both from visual inspection of the images and a quantitative evaluation, compared to the widely used EM-ML algorithms. The dual estimation framework has the potential to be useful for estimating the attenuation map from emission data only and thus benefit the radioactivity reconstruction. PMID:25225796

  13. Accurate shear measurement with faint sources

    SciTech Connect

    Zhang, Jun; Foucaud, Sebastien; Luo, Wentao E-mail: walt@shao.ac.cn

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work in this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of the galaxy and the PSF. The remaining major source of error is source Poisson noise, due to the finite number of source photons. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in ongoing and upcoming large-scale galaxy surveys.

  14. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect in smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical in smooth regions, and yield high resolution at discontinuities.
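
    The median function mentioned above gives a compact way to write slope limiting; the sketch below shows a standard monotonicity-preserving (minmod-style) slope for a piecewise-linear reconstruction, expressed through the median identity minmod(a, b) = median(a, b, 0). It is a generic example of the idea, not necessarily the exact constraint used in the paper.

        def minmod(a, b):
            """minmod(a, b) = median(a, b, 0): zero if the one-sided slopes disagree in sign,
            otherwise the one of smaller magnitude."""
            return max(min(a, b), min(max(a, b), 0.0))

        def limited_slope(u_left, u_center, u_right):
            """Monotonicity-preserving cell slope for a piecewise-linear (MUSCL-type) reconstruction."""
            return minmod(u_center - u_left, u_right - u_center)

        # Near a local extremum the limited slope is zero, which preserves monotonicity.
        print(limited_slope(1.0, 2.0, 1.5))   # -> 0.0
        print(limited_slope(1.0, 2.0, 3.0))   # -> 1.0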

  15. Practical correction procedures for elastic electron scattering effects in ARXPS

    NASA Astrophysics Data System (ADS)

    Lassen, T. S.; Tougaard, S.; Jablonski, A.

    2001-06-01

    Angle-resolved XPS and AES (ARXPS and ARAES) are widely used for determination of the in-depth distribution of elements in the surface region of solids. It is well known that elastic electron scattering has a significant effect on the intensity as a function of emission angle and that this has a great influence on the overlayer thicknesses determined by this method. However, the procedures commonly applied in ARXPS and ARAES generally neglect this effect because no simple and practical correction procedure has been available. Recently, new algorithms have been suggested. In this paper, we have studied the efficiency of these algorithms in correcting for elastic scattering effects in the interpretation of ARXPS and ARAES. This is done by first calculating electron distributions by Monte Carlo simulations for well-defined overlayer/substrate systems and then applying the different algorithms. We have found that an analytical formula based on a solution of the Boltzmann transport equation gives a good account of elastic scattering effects. However, this procedure is computationally very slow and the underlying algorithm is complicated. Another, much simpler algorithm, proposed by Nefedov and coworkers, was also tested. Three different ways of handling the scattering parameters within this model were tested, and it was found that this algorithm also gives a good description of elastic scattering effects provided that it is slightly modified so that it takes into account the differences in the transport properties of the substrate and the overlayer. This procedure is fairly simple and is described in detail. The model gives a much more accurate description than the traditional straight-line approximation (SLA). However, it is also found that when attenuation lengths instead of inelastic mean free paths are used in the simple SLA formalism, the effects of elastic scattering are also reasonably well accounted for. Specifically, from a systematic study of several
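
    For context, the straight-line approximation mentioned above leads to the familiar exponential dependence of overlayer and substrate signals on film thickness and emission angle; the sketch below inverts it for a uniform overlayer. The attenuation length and intensity ratios are illustrative values, not data from this study.

        import numpy as np

        def overlayer_thickness(ratio, ratio_inf, atten_length_nm, emission_angle_deg):
            """SLA estimate of a uniform overlayer thickness from the overlayer/substrate
            intensity ratio: d = L*cos(theta)*ln(1 + R/R_inf), with a single effective
            attenuation length L in the overlayer (a common simplification)."""
            cos_t = np.cos(np.radians(emission_angle_deg))
            return atten_length_nm * cos_t * np.log(1.0 + ratio / ratio_inf)

        # Illustrative numbers only.
        print(overlayer_thickness(ratio=0.8, ratio_inf=1.5,
                                  atten_length_nm=2.5, emission_angle_deg=45.0))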

  16. Digitally Controlled Beam Attenuator

    NASA Astrophysics Data System (ADS)

    Peppler, W. W.; Kudva, B.; Dobbins, J. T.; Lee, C. S.; Van Lysel, M. S.; Hasegawa, B. H.; Mistretta, C. A.

    1982-12-01

    In digital fluorographic techniques the video camera must accommodate a wide dynamic range due to the large variation in subject thickness within the field of view. Typically, exposure factors and the optical aperture are selected such that the maximum video signal is obtained in the most transmissive region of the subject. Consequently, it has been shown that the signal-to-noise ratio is severely reduced in the dark regions. We have developed a prototype digital beam attenuator (DBA) which will alleviate this and some related problems in digital fluorography. The prototype DBA consists of a 6x6 array of pistons which are individually controlled. A membrane containing an attenuating solution (CeCl3) in water and the piston matrix are placed between the x-ray tube and the subject. Under digital control the pistons are moved into the attenuating material in order to adjust the beam intensity over each of the 36 cells. The DBA control unit, which digitizes the image during patient positioning, directs the pistons under hydraulic control to produce a uniform x-ray field exiting the subject. The pistons were designed to produce very little structural background in the image. In subtraction studies any structure would be cancelled. For non-subtraction studies such as cine-cardiology we are considering higher cell densities (e.g., 64x64). Due to the narrow range of transmission provided by the DBA, in such studies ultra-high-contrast films could be used to produce a high-resolution quasi-subtraction display. Additional benefits of the DBA are: (1) reduced dose to the bright image areas when the dark areas are properly exposed, and (2) improved scatter and glare to primary ratios, leading to improved contrast in the dark areas.

  17. Radiation Imaging and Attenuation

    NASA Astrophysics Data System (ADS)

    Davison, Candace; Yocum, Douglas

    2008-03-01

    X-ray and neutron images are used to demonstrate materials' different radiation attenuation properties. This leads to discussion of applications in medicine, industry and research. The Penn State Radiation Science and Engineering Center (RSEC) uses neutron radioscopy to image the inside of a working hydrogen fuel cell. This is one of the many educational activities that are conducted when students visit the RSEC. To encourage pre-college students to apply these principles and learn more about nuclear technology, we are sponsoring a design competition. For more information visit www.rsec.psu.edu

  18. Determining attenuation laws down to the Lyman break in z~0.3 galaxies

    NASA Astrophysics Data System (ADS)

    Boquien, Mederic

    2013-10-01

    Star formation is the fundamental process transforming baryonic matter in the Universe and governing the cycling of gas into and out of galaxies. Accurately tracing star formation is of critical importance for discriminating between galaxy evolution models. The UV is where massive young stars emit the bulk of their energy and is the wavelength range of choice for tracking the evolution of star formation across cosmic time. The presence of dust, however, affects the UV emission from galaxies by dimming and reddening it. Correcting the UV for dust attenuation is thus a crucial requirement for deriving the physical parameters of galaxies. Significant variations from the widely used "starburst law" are observed from one galaxy to another, which may reflect systematic variations with stellar populations or galaxy morphology. These uncharacterized variations pose an important limitation on our ability to quantify the properties of high-redshift galaxies, a regime where the starburst law is almost universally applied. In order to determine and parametrize attenuation laws in the UV down to the Lyman break, we propose to perform COS FUV spectroscopy on a sample of 8 star-forming galaxies at z~0.3. While broadband data can constrain dust masses and optical depth, they cannot reliably constrain the attenuation law itself due to degeneracies between the competing effects of stellar populations and dust. The combination of COS spectra with existing broadband observations will be crucial to address this issue. This will allow us to constrain dust models and will have a broad impact on the study of galaxies from the galactic neighborhood to ultra-high redshifts.

  19. Radar attenuation and temperature within the Greenland Ice Sheet

    USGS Publications Warehouse

    MacGregor, Joseph A; Li, Jilu; Paden, John D; Catania, Ginny A; Clow, Gary D.; Fahnestock, Mark A; Gogineni, Prasad S.; Grimm, Robert E.; Morlighem, Mathieu; Nandi, Soumyaroop; Seroussi, Helene; Stillman, David E

    2015-01-01

    The flow of ice is temperature-dependent, but direct measurements of englacial temperature are sparse. The dielectric attenuation of radio waves through ice is also temperature-dependent, and radar sounding of ice sheets is sensitive to this attenuation. Here we estimate depth-averaged radar-attenuation rates within the Greenland Ice Sheet from airborne radar-sounding data and its associated radiostratigraphy. Using existing empirical relationships between temperature, chemistry, and radar attenuation, we then infer the depth-averaged englacial temperature. The dated radiostratigraphy permits a correction for the confounding effect of spatially varying ice chemistry. Where radar transects intersect boreholes, radar-inferred temperature is consistently higher than that measured directly. We attribute this discrepancy to the poorly recognized frequency dependence of the radar-attenuation rate and correct for this effect empirically, resulting in a robust relationship between radar-inferred and borehole-measured depth-averaged temperature. Radar-inferred englacial temperature is often lower than modern surface temperature and that of a steady state ice-sheet model, particularly in southern Greenland. This pattern suggests that past changes in surface boundary conditions (temperature and accumulation rate) affect the ice sheet's present temperature structure over a much larger area than previously recognized. This radar-inferred temperature structure provides a new constraint for thermomechanical models of the Greenland Ice Sheet.

  20. Towards a Global Upper Mantle Attenuation Model

    NASA Astrophysics Data System (ADS)

    Karaoglu, Haydar; Romanowicz, Barbara

    2015-04-01

    Global anelastic tomography is crucial for addressing the nature of heterogeneity in the Earth's interior. Intrinsic attenuation manifests itself through dispersion and amplitude decay. These are contaminated by elastic effects such as (de)focusing and scattering. Therefore, mapping anelasticity accurately requires separation of elastic effects from the anelastic ones. To achieve this, a possible approach is to first predict the elastic effects through the computation of seismic waveforms in a high-resolution 3D elastic model, which can now be achieved accurately using numerical wavefield computations. Building upon the recent construction of such a whole-mantle elastic and radially anisotropic shear velocity model (SEMUCB_WM1, French and Romanowicz, 2014), which will be used as the starting model, our goal is to develop a higher-resolution 3D attenuation model of the upper mantle based on full waveform inversion. As in the development of SEMUCB_WM1, forward modeling will be performed using the spectral element method, while the inverse problem will be treated approximately, using normal mode asymptotics. Both fundamental and overtone time-domain long-period waveforms (T>60s) will be used from a dataset of over 200 events observed at several hundred stations globally. Here we present preliminary results of synthetic tests, exploring different iterative inversion strategies.

  1. Attenuation of laser generated ultrasound in steel at high temperatures; comparison of theory and experimental measurements.

    PubMed

    Kube, Christopher M

    2016-08-01

    This article reexamines some recently published laser ultrasound measurements of the longitudinal attenuation coefficient obtained during annealing of two steel samples (DP600 and S550). Theoretical attenuation models based on perturbation theory are compared to these experimental measurements. It is observed that the Rayleigh attenuation formulas provide the correct qualitative agreement, but overestimate the experimental values. The more general theoretical attenuation model considered here demonstrates strong quantitative agreement, which highlights the applicability of the model during real-time metal processing. PMID:27235777

  2. Chopping-Wheel Optical Attenuator

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B.

    1988-01-01

    Star-shaped rotating chopping wheel provides adjustable time-averaged attenuation of narrow beam of light without changing length of optical path or spectral distribution of light. Duty cycle or attenuation factor of chopped beam controlled by adjusting radius at which beam intersects wheel. Attenuation factor independent of wavelength. Useful in systems in which chopping frequency above frequency-response limits of photodetectors receiving chopped light. Used in systems using synchronous detection with lock-in amplifiers.

  3. Ultrasonic Attenuation in Zircaloy-4

    SciTech Connect

    Gomez, M.P.; Banchik, A.D.; Lopez Pumarega, M.I.; Ruzzante, J.E.

    2005-04-09

    In this work the relationship between Zircaloy-4 grain size and ultrasonic attenuation behavior was studied for longitudinal waves in the frequency range of 10-90 MHz. The attenuation was analyzed as a function of frequency for samples with different mechanical and heat treatments having recrystallized and Widmanstatten structures with different grain size. The attenuation behavior was analyzed by different scattering models, depending on grain size, wavelength and frequency.

  4. Attenuation compensation in Tc-99m SPECT brain imaging: Use of attenuation maps derived from transmission versus emission data

    SciTech Connect

    Pan, T.S.; Licho, R.; Penney, B.C.

    1994-05-01

    This study compares reconstructions of Tc-99m brain SPECT studies made using two methods of estimating the attenuation map: (1) transmission scanning, and (2) segmenting reconstructions of emission data and assigning attenuation coefficient values. A three-head SPECT system with fan-beam collimators was used. Transmission scanning was performed using a line source at the focal line of a fan-beam collimator immediately after the regular emission scan. The higher attenuation of the skull and the lower attenuation in the sinus cavities were identifiable despite the noise in the reconstructed transmission data due to: (1) the contamination of the transmission data by emission photons, (2) the maximum acquisition count rate imposed by the SPECT system, and (3) the clinical scanning time. Emission data were recorded using both photopeak and Compton-scatter energy windows. Outlines of the head and the maxillary sinus could be obtained using only the Compton-scatter reconstructions, whereas identifying the skull regions and the frontal sinus required the photopeak data as well. We placed appropriate linear attenuation coefficients in the soft tissue, bone, sinus and air regions (0.15, 0.22, 0, and 0 cm^-1) and blurred this attenuation map with a Gaussian kernel of about 0.2 cm standard deviation to obtain the attenuation map based on the emission data. Reconstructions were computed using the maximum likelihood expectation maximization algorithm with Siddon's ray-tracing algorithm. Reconstructions based on the two attenuation maps were compared quantitatively on the patient data. The differences noted were quite small. These results imply that attenuation correction based on emission data alone may be adequate for Tc-99m SPECT brain imaging.
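
    A minimal sketch of the segmentation-and-assignment step described above: label values are mapped to assumed linear attenuation coefficients and the resulting map is smoothed with a Gaussian kernel. The label scheme, geometry and voxel size are placeholders, not the clinical processing chain.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        # Hypothetical label volume: 0 = air/sinus, 1 = soft tissue, 2 = bone.
        labels = np.zeros((64, 64, 64), dtype=np.uint8)
        labels[16:48, 16:48, 16:48] = 1          # soft-tissue block
        labels[16:48, 16:18, 16:48] = 2          # thin "skull" slab

        # Assigned linear attenuation coefficients (1/cm), as quoted in the abstract.
        mu_by_label = np.array([0.0, 0.15, 0.22])
        mu_map = mu_by_label[labels]

        # Blur with a Gaussian of ~0.2 cm standard deviation (voxel size assumed to be 0.2 cm).
        voxel_cm = 0.2
        mu_map_smoothed = gaussian_filter(mu_map, sigma=0.2 / voxel_cm)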

  5. Atmospheric extinction in solar tower plants: absorption and broadband correction for MOR measurements

    NASA Astrophysics Data System (ADS)

    Hanrieder, N.; Wilbert, S.; Pitz-Paal, R.; Emde, C.; Gasteiger, J.; Mayer, B.; Polo, J.

    2015-08-01

    Losses of reflected Direct Normal Irradiance due to atmospheric extinction in concentrated solar tower plants can vary significantly with site and time. The losses of the direct normal irradiance between the heliostat field and receiver in a solar tower plant are mainly caused by atmospheric scattering and absorption by aerosol and water vapor concentration in the atmospheric boundary layer. Due to a high aerosol particle number, radiation losses can be significantly larger in desert environments compared to the standard atmospheric conditions which are usually considered in ray-tracing or plant optimization tools. Information about on-site atmospheric extinction is only rarely available. To measure these radiation losses, two different commercially available instruments were tested, and more than 19 months of measurements were collected and compared at the Plataforma Solar de Almería. Both instruments are primarily used to determine the meteorological optical range (MOR). The Vaisala FS11 scatterometer is based on a monochromatic near-infrared light source emission and measures the strength of scattering processes in a small air volume mainly caused by aerosol particles. The Optec LPV4 long-path visibility transmissometer determines the monochromatic attenuation between a light-emitting diode (LED) light source at 532 nm and a receiver and therefore also accounts for absorption processes. As the broadband solar attenuation is of interest for solar resource assessment for concentrated solar power (CSP), a correction procedure for these two instruments is developed and tested. This procedure includes a spectral correction of both instruments from monochromatic to broadband attenuation. That means the attenuation is corrected for the time-dependent solar spectrum which is reflected by the collector. Further, an absorption correction for the Vaisala FS11 scatterometer is implemented. To optimize the absorption and broadband correction (ABC) procedure, additional

  6. Lg Attenuation of the Western United States

    NASA Astrophysics Data System (ADS)

    Gallegos, A. C.; Ranasinghe, N. R.; Ni, J.; Sandvol, E. A.

    2014-12-01

    Lg waveforms recorded by EarthScope's Transportable Array (TA) are used to estimate Lg Q in the Western United States (WUS). Attenuation is calculated from Lg spectral amplitudes filtered in a narrow band from 0.5 to 1.5 Hz with a central frequency of 1 Hz. The two-station and reverse two-station techniques were used to calculate Qo values. 398 events occurring from 2005 to 2009 and ranging from magnitude 3 to magnitude 6 were used in this study. The geometric spreading term can be determined by using a three-dimensional linear fit of the amplitude ratios versus the epicentral distances to the two stations. The slope of this line provides the geometric spreading term we use to calculate Lg Qo values for the WUS. The results show high-Q regions (low attenuation) corresponding to the Colorado Plateau (CP), the Rocky Mountains (RM), the Columbia Plateau (COP), and the Sierra Nevada Mountains (SNM). Regions of low Q (high attenuation) are seen along the Snake River Plain (SRP), the Rio Grande Rift (RGR), the Cascade Mountains (CM), and in the east and west of the Basin and Range (BR), where tectonic activity is more active than in the central part of the BR. A clear association between high heat flow, recent tectonic activity, and low Q was observed. Areas with low heat flow, thin sediment cover, and no recent tectonic activity were observed to have consistently high Q. These new models use the two-station and reverse two-station methods, provide a comparison with previous studies, and better constrain regions with high attenuation. This increase in detail can improve high-frequency ground motion predictions of future large earthquakes for more accurate hazard assessment and improve overall understanding of the structure and assemblage of the WUS.
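
    As a rough illustration of the two-station idea (not the exact processing of this study), the interstation Q at a given frequency can be estimated from the spectral amplitude ratio of the same event recorded at two stations along a common path, after correcting for geometric spreading. The spreading exponent and group velocity below are typical assumed values.

        import numpy as np

        def two_station_q(a1, a2, r1, r2, freq_hz=1.0, group_vel_km_s=3.5, spreading=0.5):
            """Interstation Lg Q from spectral amplitudes a1, a2 at epicentral distances
            r1 < r2 (km), assuming A(f, r) = S(f) * r**(-spreading) * exp(-pi*f*r/(Q*v)).
            The source term S(f) cancels in the ratio."""
            numerator = -np.pi * freq_hz * (r1 - r2)
            denominator = group_vel_km_s * np.log((a1 / a2) * (r1 / r2) ** spreading)
            return numerator / denominator

        # Illustrative example: amplitude drops by a factor of ~3 between 300 and 700 km.
        print(two_station_q(a1=1.0, a2=0.33, r1=300.0, r2=700.0))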

  7. Surprisingly low frequency attenuation effects in long tubes when measuring turbulent fluxes at tall towers

    NASA Astrophysics Data System (ADS)

    Ibrom, Andreas; Brændholt, Andreas; Pilegaard, Kim

    2016-04-01

    The eddy covariance technique relies on fast and accurate measurement of gas concentration fluctuations. While robust and compact sensors are available for some gases, measurement of, e.g., non-CO2 greenhouse gas fluxes is often performed with sensitive equipment that cannot be run on a tower without massively disturbing the wind field. To measure CO and N2O fluxes, we installed an eddy covariance system at a 125 m mast, where the gas analyser was kept in a laboratory close to the tower and sampling was performed through a 150 m long tube with a gas intake at 96 m height. We investigated the frequency attenuation and the time lag of the N2O and CO concentration measurements with a concentration step experiment. The results showed surprisingly high cut-off frequencies (close to 2 Hz) and small low-pass-filter-induced time lags (< 0.3 s), which were similar for CO and N2O. The results indicate that the concentration signal was hardly biased during the ca 10 s travel through the tube. Due to the larger turbulence time scales at large measurement heights, the low-pass correction was < 5% for the majority of the measurements. For water vapour the tube attenuation was massive, which, however, had a positive effect by reducing both the water vapour dilution correction and the cross-sensitivity effects on the N2O and CO flux measurements. Here we present the set-up of the concentration step-change experiment and its results and compare them with recently developed theories for the behaviour of gases in turbulent tube flows.
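
    For orientation, one common way to turn a measured cut-off frequency into a flux correction factor is to weight a first-order low-pass transfer function by a model cospectrum and integrate over frequency. The sketch below follows that generic spectral-correction idea with a simple Lorentzian-shaped model cospectrum and assumed parameters; it is not the site-specific processing used in this study.

        import numpy as np

        def lowpass_correction_factor(f_cutoff_hz, timescale_s, n=20000):
            """Flux correction factor integral(Co) / integral(H*Co) for a first-order
            low-pass gain H(f) = 1/(1 + (f/f_cutoff)^2) and a model cospectrum
            Co(f) ~ f_x/(f^2 + f_x^2) with f_x = 1/(2*pi*timescale)."""
            f = np.linspace(1e-4, 10.0, n)
            f_x = 1.0 / (2.0 * np.pi * timescale_s)
            co = f_x / (f ** 2 + f_x ** 2)              # model cospectrum shape
            h = 1.0 / (1.0 + (f / f_cutoff_hz) ** 2)    # first-order low-pass gain
            return np.trapz(co, f) / np.trapz(co * h, f)

        # A cut-off near 2 Hz and a ~20 s turbulence time scale give a correction of only a few percent.
        print(lowpass_correction_factor(f_cutoff_hz=2.0, timescale_s=20.0))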

  8. Region specific optimization of continuous linear attenuation coefficients based on UTE (RESOLUTE): application to PET/MR brain imaging

    NASA Astrophysics Data System (ADS)

    Ladefoged, Claes N.; Benoit, Didier; Law, Ian; Holm, Søren; Kjær, Andreas; Højgaard, Liselotte; Hansen, Adam E.; Andersen, Flemming L.

    2015-10-01

    The reconstruction of PET brain data in a PET/MR hybrid scanner is challenging in the absence of transmission sources, where MR images are used for MR-based attenuation correction (MR-AC). The main challenge of MR-AC is to separate bone and air, as neither have a signal in traditional MR images, and to assign the correct linear attenuation coefficient to bone. The ultra-short echo time (UTE) MR sequence was proposed as a basis for MR-AC as this sequence shows a small signal in bone. The purpose of this study was to develop a new clinically feasible MR-AC method with patient specific continuous-valued linear attenuation coefficients in bone that provides accurate reconstructed PET image data. A total of 164 [18F]FDG PET/MR patients were included in this study, of which 10 were used for training. MR-AC was based on either standard CT (reference), UTE or our method (RESOLUTE). The reconstructed PET images were evaluated in the whole brain, as well as regionally in the brain using a ROI-based analysis. Our method segments air, brain, cerebral spinal fluid, and soft tissue voxels on the unprocessed UTE TE images, and uses a mapping of R2* values to CT Hounsfield Units (HU) to measure the density in bone voxels. The average error of our method in the brain was 0.1% and less than 1.2% in any region of the brain. On average 95% of the brain was within  ±10% of PETCT, compared to 72% when using UTE. The proposed method is clinically feasible, reducing both the global and local errors on the reconstructed PET images, as well as limiting the number and extent of the outliers.
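
    To give a flavor of how continuous bone attenuation values can be assigned, the sketch below maps an R2* value to a pseudo-CT number through an assumed linear relation and then converts HU to a 511 keV linear attenuation coefficient with a generic bilinear transform. Every coefficient here is a placeholder, not the RESOLUTE calibration.

        def r2star_to_hu(r2star_per_s, slope=0.5, intercept=-50.0):
            """Assumed linear mapping from R2* (1/s) in bone voxels to CT numbers (HU)."""
            return slope * r2star_per_s + intercept

        def hu_to_mu_511kev(hu):
            """Generic bilinear conversion from HU to linear attenuation at 511 keV (1/cm);
            the break point and slopes are approximate, illustrative values."""
            if hu <= 0.0:
                return 9.6e-5 * (hu + 1000.0)   # air-to-water branch
            return 9.6e-2 + 5.1e-5 * hu         # bone branch (approximate slope)

        # Example: a dense bone voxel with R2* around 1500 1/s (illustrative number).
        hu = r2star_to_hu(1500.0)
        print(hu, hu_to_mu_511kev(hu))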

  9. Region specific optimization of continuous linear attenuation coefficients based on UTE (RESOLUTE): application to PET/MR brain imaging.

    PubMed

    Ladefoged, Claes N; Benoit, Didier; Law, Ian; Holm, Søren; Kjær, Andreas; Højgaard, Liselotte; Hansen, Adam E; Andersen, Flemming L

    2015-10-21

    The reconstruction of PET brain data in a PET/MR hybrid scanner is challenging in the absence of transmission sources, where MR images are used for MR-based attenuation correction (MR-AC). The main challenge of MR-AC is to separate bone and air, as neither have a signal in traditional MR images, and to assign the correct linear attenuation coefficient to bone. The ultra-short echo time (UTE) MR sequence was proposed as a basis for MR-AC as this sequence shows a small signal in bone. The purpose of this study was to develop a new clinically feasible MR-AC method with patient specific continuous-valued linear attenuation coefficients in bone that provides accurate reconstructed PET image data. A total of 164 [(18)F]FDG PET/MR patients were included in this study, of which 10 were used for training. MR-AC was based on either standard CT (reference), UTE or our method (RESOLUTE). The reconstructed PET images were evaluated in the whole brain, as well as regionally in the brain using a ROI-based analysis. Our method segments air, brain, cerebral spinal fluid, and soft tissue voxels on the unprocessed UTE TE images, and uses a mapping of R2* values to CT Hounsfield Units (HU) to measure the density in bone voxels. The average error of our method in the brain was 0.1% and less than 1.2% in any region of the brain. On average 95% of the brain was within ±10% of PETCT, compared to 72% when using UTE. The proposed method is clinically feasible, reducing both the global and local errors on the reconstructed PET images, as well as limiting the number and extent of the outliers. PMID:26422177

  10. LINE-ABOVE-GROUND ATTENUATOR

    DOEpatents

    Wilds, R.B.; Ames, J.R.

    1957-09-24

    The line-above-ground attenuator provides a continuously variable microwave attenuator for a coaxial line that is capable of high attenuation and low insertion loss. The device consists of a short section of line-above-ground-plane type transmission line, a pair of identical rectangular slabs of lossy material like polytron, whose longitudinal axes are parallel to and identically spaced away from either side of the line, and a geared mechanism to adjust and maintain this spaced relationship. This device permits optimum fineness and accuracy of attenuator control which heretofore has been difficult to achieve.

  11. A new technique to characterize CT scanner bow-tie filter attenuation and applications in human cadaver dosimetry simulations

    SciTech Connect

    Li, Xinhua; Shi, Jim Q.; Zhang, Da; Singh, Sarabjeet; Padole, Atul; Otrakji, Alexi; Kalra, Mannudeep K.; Liu, Bob; Xu, X. George

    2015-11-15

    Purpose: To present a noninvasive technique for directly measuring the CT bow-tie filter attenuation with a linear array x-ray detector. Methods: A scintillator based x-ray detector of 384 pixels, 307 mm active length, and fast data acquisition (model X-Scan 0.8c4-307, Detection Technology, FI-91100 Ii, Finland) was used to simultaneously detect radiation levels across a scan field-of-view. The sampling time was as short as 0.24 ms. To measure the body bow-tie attenuation on a GE Lightspeed Pro 16 CT scanner, the x-ray tube was parked at the 12 o’clock position, and the detector was centered in the scan field at the isocenter height. Two radiation exposures were made with and without the bow-tie in the beam path. Each readout signal was corrected for the detector background offset and signal-level related nonlinear gain, and the ratio of the two exposures gave the bow-tie attenuation. The results were used in the GEANT4 based simulations of the point doses measured using six thimble chambers placed in a human cadaver with abdomen/pelvis CT scans at 100 or 120 kV, helical pitch at 1.375, constant or variable tube current, and distinct x-ray tube starting angles. Results: Absolute attenuation was measured with the body bow-tie scanned at 80–140 kV. For 24 doses measured in six organs of the cadaver, the median or maximum difference between the simulation results and the measurements on the CT scanner was 8.9% or 25.9%, respectively. Conclusions: The described method allows fast and accurate bow-tie filter characterization.
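
    The core of the measurement reduces to an offset- and gain-corrected ratio of two detector readouts, one with and one without the bow-tie in the beam. A minimal sketch is shown below; the dark-offset value and the identity placeholder for the signal-level-dependent gain correction are assumptions, since the actual linearization is detector-specific.

        import numpy as np

        def bowtie_transmission(raw_with, raw_without, dark_offset, linearize=lambda s: s):
            """Per-pixel bow-tie transmission from exposures with and without the filter.
            `linearize` stands in for the signal-level-dependent gain correction."""
            s_with = linearize(raw_with - dark_offset)
            s_without = linearize(raw_without - dark_offset)
            return s_with / s_without

        # Illustrative 5-pixel readout: transmission is highest at the center of the fan.
        raw_with = np.array([140.0, 320.0, 920.0, 330.0, 145.0])
        raw_without = np.array([1020.0, 1020.0, 1020.0, 1020.0, 1020.0])
        print(bowtie_transmission(raw_with, raw_without, dark_offset=20.0))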

  12. 2001 Bhuj, India, earthquake engineering seismoscope recordings and Eastern North America ground-motion attenuation relations

    USGS Publications Warehouse

    Cramer, C.H.; Kumar, A.

    2003-01-01

    Engineering seismoscope data collected at distances less than 300 km for the M 7.7 Bhuj, India, mainshock are compatible with ground-motion attenuation in eastern North America (ENA). The mainshock ground-motion data have been corrected to a common geological site condition using the factors of Joyner and Boore (2000) and a classification scheme of Quaternary or Tertiary sediments or rock. We then compare these data to ENA ground-motion attenuation relations. Despite uncertainties in recording method, geological site corrections, common tectonic setting, and the amount of regional seismic attenuation, the corrected Bhuj dataset agrees with the collective predictions by ENA ground-motion attenuation relations within a factor of 2. This level of agreement is within the dataset uncertainties and the normal variance for recorded earthquake ground motions.

  13. An attenuated philosophical gentleman.

    PubMed

    Christie, John R R

    2014-06-20

    Dr. Joseph Black had at one time, a house near us to the west. He was a striking and beautiful person; tall, very thin, and cadaverously pale; his hair carefully powdered, though there was little of it except what was collected in a long thin queue; his eyes dark, clear and large, like deep pools of pure water. He wore black speckless clothes, silk stockings, silver buckles, and either a slim green umbrella, or a genteel brown cane. The general frame and air were feeble and slender. The wildest boy respected Black. No lad could be irreverent toward a man so pale, so gentle, so elegant and so illustrious. So he glided, like a spirit, through our rather mischievous sportiveness, unharmed. He died seated, with a bowl of milk upon his knee, of which his ceasing to be did not spill a drop; a departure which it seemed, after the event, might have been foretold of this attenuated philosophical gentleman. PMID:24921110

  14. Fiber optic attenuator

    NASA Technical Reports Server (NTRS)

    Buzzetti, Mike F. (Inventor)

    1994-01-01

    The fiber optic attenuator of the invention is a mandrel structure around which a bundle of optical fibers is wrapped in a complete circle. The mandrel structure includes a flexible cylindrical sheath through which the bundle passes. A set screw on the mandrel structure presses one side of the sheath against two posts on the opposite side of the sheath. By rotating the screw, the sheath is deformed to extend partially between the two posts, bending the fiber optic bundle to a small radius controlled by rotating the set screw. Bending the fiber optic bundle to a small radius causes light in each optical fiber to be lost in the cladding, the amount depending upon the radius about which the bundle is bent.

  15. Fast self-attenuation determination of low energy gamma lines.

    PubMed

    Haddad, Kh

    2016-09-01

    A linear correlation between the self-attenuation factor of the 46.5 keV line ((210)Pb) and the 1764 keV/46.5 keV count ratio has been developed in this work using triple superphosphate fertilizer samples. A similar correlation has also been developed for the 63.3 keV line ((238)U). This correlation offers a simple, fast, and accurate technique for determining the self-attenuation of low-energy gamma lines. Using the 46.5 keV line in the ratio remarkably improved the sensitivity of the technique in comparison with other work that used a similar concept. The results obtained were used to assess the validity of the transmission technique. PMID:27337648
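
    The abstract describes an empirical calibration: fit a line between the measured 1764 keV/46.5 keV count ratio and independently determined self-attenuation factors, then predict the factor for new samples from the ratio alone. The sketch below illustrates that workflow with made-up numbers, not the fertilizer data of this work.

        import numpy as np

        # Hypothetical calibration set: count ratio (1764 keV / 46.5 keV) versus the
        # 46.5 keV self-attenuation factor obtained with a reference method.
        count_ratio = np.array([0.8, 1.0, 1.3, 1.7, 2.1])
        self_attenuation = np.array([0.55, 0.62, 0.71, 0.82, 0.93])

        slope, intercept = np.polyfit(count_ratio, self_attenuation, 1)

        def predict_self_attenuation(ratio):
            """Predict the 46.5 keV self-attenuation factor from the measured count ratio."""
            return slope * ratio + intercept

        print(predict_self_attenuation(1.5))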

  16. Assessment of the efficiency of long-range corrected functionals for some properties of large compounds

    NASA Astrophysics Data System (ADS)

    Jacquemin, Denis; Perpète, Eric A.; Scalmani, Giovanni; Frisch, Michael J.; Kobayashi, Rika; Adamo, Carlo

    2007-04-01

    Using the long-range correction (LC) density functional theory (DFT) scheme introduced by Iikura et al. [J. Chem. Phys. 115, 3540 (2001)] and the Coulomb-attenuating model (CAM-B3LYP) of Yanai et al. [Chem. Phys. Lett. 393, 51 (2004)], we have calculated a series of properties that are known to be poorly reproduced by standard functionals: Bond length alternation of π-conjugated polymers, polarizabilities of delocalized chains, and electronic spectra of extended dyes. For each of these properties, we present cases in which traditional hybrid functionals do provide accurate results and cases in which they fail to reproduce the correct trends. The quality of the results is assessed with regard to experimental values and/or data arising from electron-correlated wave function approaches. It turns out that (i) both LC-DFT and CAM-B3LYP provide an accurate bond length alternation for polyacetylene and polymethineimine, although for the latter they decrease slightly too rapidly with chain length. (ii) The LC generalized gradient approximation and MP2 polarizabilities of long polyphosphazene and polymethineimine oligomers agree almost perfectly. In the same way, CAM-B3LYP corrects the major part of the B3LYP faults. (iii) LC and CAM techniques do not help in correcting the nonrealistic evolution with chain length of the absorption wavelengths of cyanine derivatives. In addition, though both schemes significantly overestimate the ground to excited state transition energy of substituted anthraquinone dyes, they provide a more consistent picture once a statistical treatment is performed than do traditional hybrid functionals.
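
    For reference, the Coulomb-attenuating idea underlying both schemes is a range separation of the electron-repulsion operator with the error function, with the long-range part treated by exact (Hartree-Fock) exchange and the short-range part by DFT exchange. A generic form is shown below (the LC scheme corresponds to alpha = 0, beta = 1; the CAM-B3LYP parameter values are not reproduced here):

        \frac{1}{r_{12}} \;=\; \frac{1 - \left[\alpha + \beta\,\mathrm{erf}(\mu r_{12})\right]}{r_{12}}
                         \;+\; \frac{\alpha + \beta\,\mathrm{erf}(\mu r_{12})}{r_{12}}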

  17. Computation of atmospheric attenuation of sound for fractional-octave bands

    NASA Technical Reports Server (NTRS)

    Montegani, F. J.

    1979-01-01

    Correct methods of accounting for atmospheric attenuation in band data are discussed; these require consideration of the attenuation integrated across each band for the specific propagation distance involved. Computer programs are provided that are understandable, efficient, and simple to use. It is hoped that this will facilitate more widespread use of correct computational methods, especially where routine computer processing of data is employed.
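
    The central point, that the band attenuation is not simply the pure-tone attenuation at the band-center frequency but the effect integrated across the band for the distance involved, can be sketched as follows. The frequency-squared attenuation law and the flat in-band spectrum are simplifying assumptions for illustration; standards such as ANSI S1.26 give the actual pure-tone coefficient.

        import numpy as np

        def band_attenuation_db(f_center_hz, distance_m, alpha_db_per_m, fraction=3, n=200):
            """Effective attenuation (dB) of a 1/`fraction`-octave band over `distance_m`,
            from the pure-tone attenuation alpha(f) integrated across the band,
            assuming a flat spectrum within the band."""
            f_lo = f_center_hz * 2.0 ** (-0.5 / fraction)
            f_hi = f_center_hz * 2.0 ** (0.5 / fraction)
            f = np.linspace(f_lo, f_hi, n)
            # Average the transmitted power across the band, then convert back to dB.
            transmitted = 10.0 ** (-alpha_db_per_m(f) * distance_m / 10.0)
            return -10.0 * np.log10(np.mean(transmitted))

        alpha = lambda f: 1.0e-8 * f ** 2   # placeholder attenuation law, dB per meter
        # One-third-octave band centered at 4 kHz over 500 m: noticeably less than the
        # 80 dB obtained by applying the band-center attenuation alone.
        print(band_attenuation_db(4000.0, 500.0, alpha))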

  18. Patient position alters attenuation effects in multipinhole cardiac SPECT

    SciTech Connect

    Timmins, Rachel; Ruddy, Terrence D.; Wells, R. Glenn

    2015-03-15

    position-dependent changes were removed with attenuation correction. Conclusions: Translation of a source relative to a multipinhole camera caused only small changes in homogeneous phantoms with SPS changing <1.5. Inhomogeneous attenuating media cause much larger changes to occur when the source is translated. Changes in SPS of up to six were seen in an anthropomorphic phantom for axial translations. Attenuation correction removes the position-dependent changes in attenuation.

  19. Suicide Risk: Amplifiers and Attenuators.

    ERIC Educational Resources Information Center

    Plutchik, Robert; Van Praag, Herman M.

    1994-01-01

    Attempts to integrate findings on correlates of suicide and violent risk in terms of a theory called a two-stage model of countervailing forces, which assumes that the strength of aggressive impulses is modified by amplifiers and attenuators. The vectorial interaction of amplifiers and attenuators creates an unstable equilibrium making prediction…

  20. Adjustable Optical-Fiber Attenuator

    NASA Technical Reports Server (NTRS)

    Buzzetti, Mike F.

    1994-01-01

    Adjustable fiber-optic attenuator utilizes bending loss to reduce strength of light transmitted along it. Attenuator functions without introducing measurable back-reflection or insertion loss. Relatively insensitive to vibration and changes in temperature. Potential applications include cable television, telephone networks, other signal-distribution networks, and laboratory instrumentation.

  1. Estimation of canopy attenuation for active/passive microwave soil moisture retrieval algorithms

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper discusses the importance of the proper characterization of scattering and attenuation in trees needed for accurate retrieval of soil moisture in the presence of trees. Emphasis is placed on determining an accurate estimation of the propagation properties of a vegetation canopy using the c...

  2. Predict amine solution properties accurately

    SciTech Connect

    Cheng, S.; Meisen, A.; Chakma, A.

    1996-02-01

    Improved process design begins with using accurate physical property data. Especially in the preliminary design stage, physical property data such as density, viscosity, thermal conductivity and specific heat can affect the overall performance of absorbers, heat exchangers, reboilers and pumps. These properties can also influence temperature profiles in heat transfer equipment and thus control or affect the rate of amine breakdown. Aqueous-amine solution physical property data are available in graphical form; however, this form is not convenient for computer-based calculations. The developed equations allow improved correlations of derived physical property estimates with published data. Expressions are given which can be used to estimate the physical properties of methyldiethanolamine (MDEA), monoethanolamine (MEA) and diglycolamine (DGA) solutions.

  3. Accurate thickness measurement of graphene

    NASA Astrophysics Data System (ADS)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  4. K-corrections and extinction corrections for Type Ia supernovae

    SciTech Connect

    Nugent, Peter; Kim, Alex; Perlmutter, Saul

    2002-05-21

    The measurement of the cosmological parameters from Type Ia supernovae hinges on our ability to compare nearby and distant supernovae accurately. Here we present an advance on a method for performing generalized K-corrections for Type Ia supernovae which allows us to compare these objects from the UV to near-IR over the redshift range 0 < z < 2. We discuss the errors currently associated with this method and how future data can improve upon it significantly. We also examine the effects of reddening on the K-corrections and the light curves of Type Ia supernovae. Finally, we provide a few examples of how these techniques affect our current understanding of a sample of both nearby and distant supernovae.

  5. Monte Carlo evaluation of accuracy and noise properties of two scatter correction methods

    SciTech Connect

    Narita, Y. |; Eberl, S.; Nakamura, T.

    1996-12-31

    Two independent scatter correction techniques, transmission dependent convolution subtraction (TDCS) and triple-energy window (TEW) method, were evaluated in terms of quantitative accuracy and noise properties using Monte Carlo simulation (EGS4). Emission projections (primary, scatter and scatter plus primary) were simulated for {sup 99m}Tc and {sup 201}Tl for numerical chest phantoms. Data were reconstructed with ordered-subset ML-EM algorithm including attenuation correction using the transmission data. In the chest phantom simulation, TDCS provided better S/N than TEW, and better accuracy, i.e., 1.0% vs -7.2% in myocardium, and -3.7% vs -30.1% in the ventricular chamber for {sup 99m}Tc with TDCS and TEW, respectively. For {sup 201}Tl, TDCS provided good visual and quantitative agreement with simulated true primary image without noticeably increasing the noise after scatter correction. Overall TDCS proved to be more accurate and less noisy than TEW, facilitating quantitative assessment of physiological functions with SPECT.
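    For reference, the TEW estimate referred to above is typically computed pixel-by-pixel from two narrow windows flanking the photopeak; a minimal Python sketch (window widths and counts are illustrative, not the simulation's actual settings) follows.

        def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_main):
            """Triple-energy-window scatter estimate for one projection pixel:
            the trapezoid spanned by the count densities of the two narrow
            flanking windows, scaled to the main-window width."""
            return (c_lower / w_lower + c_upper / w_upper) * w_main / 2.0

        def tew_primary(c_main, c_lower, c_upper, w_lower, w_upper, w_main):
            """Scatter-corrected (primary) counts, clipped at zero."""
            return max(c_main - tew_scatter(c_lower, c_upper, w_lower, w_upper, w_main), 0.0)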

  6. Thermodynamics of Error Correction

    NASA Astrophysics Data System (ADS)

    Sartori, Pablo; Pigolotti, Simone

    2015-10-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  7. Accurate and Precise Zinc Isotope Ratio Measurements in Urban Aerosols

    NASA Astrophysics Data System (ADS)

    Weiss, D.; Gioia, S. M. C. L.; Coles, B.; Arnold, T.; Babinski, M.

    2009-04-01

    We developed an analytical method and constrained procedural boundary conditions that enable accurate and precise Zn isotope ratio measurements in urban aerosols. We also demonstrate the potential of this new isotope system for air pollutant source tracing. The procedural blank is around 5 ng and significantly lower than in published methods due to a tailored ion chromatographic separation. Accurate mass bias correction using external correction with Cu is limited to a Zn sample content of approximately 50 ng due to the combined effect of the blank contribution of Cu and Zn from the ion exchange procedure and the need to maintain a Cu/Zn ratio of approximately 1. Mass bias is corrected for by applying the common analyte internal standardization approach. Comparison with other mass bias correction methods demonstrates the accuracy of the method. The average precision of δ66Zn determinations in aerosols is around 0.05 per mil per atomic mass unit. The method was tested on aerosols collected in Sao Paulo City, Brazil. The measurements reveal significant variations in δ66Zn ranging between -0.96 and -0.37 per mil in coarse and between -1.04 and 0.02 per mil in fine particulate matter. This variability suggests that Zn isotopic compositions distinguish atmospheric sources. The isotopically light signature suggests traffic as the main source.
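    The per-mil values quoted above follow the usual delta notation; a one-line helper (the reference ratio, commonly the JMC-Lyon standard, is assumed supplied by the user) makes the convention explicit.

        def delta_66zn_permil(r_sample, r_standard):
            """Delta-66Zn in per mil: relative deviation of the measured
            66Zn/64Zn ratio from the reference standard, times 1000."""
            return (r_sample / r_standard - 1.0) * 1000.0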

  8. Evaluation of laminated composite structures using ultrasonic attenuation measurement

    NASA Astrophysics Data System (ADS)

    Shen, Peitao; Houghton, J. R.

    The existence of delamination and porosity in laminated composite structures will degrade the strength of the structures. The detection of delamination can be easily obtained using ultrasonic C-scan or A-scan methods. But the detection of porosity in laminated structures has been a difficult task for years, especially under production conditions. This paper analytically evaluates the current techniques used in industry and develops accurate attenuation measurement methods for the evaluation of porosity. The test samples, which are used in the laminated structures of the German Airbus by Textron Aerostructures, Inc., will be tested using ultrasonic C-scan and grid-based A-scan methods. The digitized waveforms are stored and analyzed using different attenuation measurement algorithms. The volume of porosity is calculated using digital image analysis. Finally, the correlation between ultrasonic attenuation and the volume fraction of porosity is calculated and analyzed.

  9. Attenuation of Vaccinia Virus.

    PubMed

    Yakubitskiy, S N; Kolosova, I V; Maksyutov, R A; Shchelkunov, S N

    2015-01-01

    Since 1980, in the post-smallpox-vaccination era, the human population has become increasingly susceptible, compared to a generation ago, not only to the variola (smallpox) virus but also to other zoonotic orthopoxviruses. The need for safer vaccines against orthopoxviruses is even greater now. The Lister vaccine strain (LIVP) of vaccinia virus was used as a parental virus for generating a recombinant 1421ABJCN clone defective in five virulence genes encoding hemagglutinin (A56R), the IFN-γ-binding protein (B8R), thymidine kinase (J2R), the complement-binding protein (C3L), and the Bcl-2-like inhibitor of apoptosis (N1L). We found that disruption of these loci does not affect replication in mammalian cell cultures. The isogenic recombinant strain 1421ABJCN exhibits a reduced inflammatory response and attenuated neurovirulence relative to LIVP. Virus titers of 1421ABJCN were 3 log10 lower than those of the parent VACV LIVP when administered by the intracerebral route in newborn mice. In a subcutaneous mouse model, 1421ABJCN displayed levels of VACV-neutralizing antibodies comparable to those of LIVP and conferred protective immunity against lethal challenge by the ectromelia virus. The VACV mutant holds promise as a safe live vaccine strain for preventing smallpox and other orthopoxvirus infections. PMID:26798498

  10. Accurate Fiber Length Measurement Using Time-of-Flight Technique

    NASA Astrophysics Data System (ADS)

    Terra, Osama; Hussein, Hatem

    2016-06-01

    Fiber artifacts of very well-measured length are required for the calibration of optical time domain reflectometers (OTDR). In this paper, accurate measurement of different fiber lengths using the time-of-flight technique is performed. A setup is proposed to measure lengths from 1 to 40 km accurately at 1,550 and 1,310 nm using a high-speed electro-optic modulator and photodetector. This setup offers traceability to the SI unit of time, the second (and hence to the meter by definition), by locking the time-interval counter to a Global Positioning System (GPS)-disciplined quartz oscillator. Additionally, the length of a recirculating loop artifact is measured and compared with the measurement made for the same fiber by the National Physical Laboratory of the United Kingdom (NPL). Finally, a method is proposed to correct the fiber refractive index in a relative manner to allow accurate fiber length measurement.
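    The core time-of-flight relation is simple; the sketch below assumes a single-pass measurement and an illustrative group index of 1.468, which is exactly the quantity whose uncertainty the proposed refractive-index correction addresses.

        C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s

        def fiber_length_m(time_of_flight_s, group_index=1.468):
            """Single-pass fiber length from the measured propagation delay.
            group_index is the fiber group index at the measurement wavelength
            (illustrative default; it must be known accurately in practice)."""
            return C_VACUUM * time_of_flight_s / group_index

        # Example: a delay of about 48.9 microseconds corresponds to roughly 10 km.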

  11. 77 FR 72199 - Technical Corrections; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-05

    ...) is correcting a final rule that was published in the Federal Register on July 6, 2012 (77 FR 39899... . SUPPLEMENTARY INFORMATION: On July 6, 2012 (77 FR 39899), the NRC published a final rule in the Federal Register... typographical and spelling errors, and making other edits and conforming changes. This correcting amendment...

  12. Rx for Pedagogical Correctness: Professional Correctness.

    ERIC Educational Resources Information Center

    Lasley, Thomas J.

    1993-01-01

    Describes the difficulties caused by educators holding to a view of teaching that assumes that there is one "pedagogically correct" way of running a classroom. Provides three examples of harmful pedagogical correctness ("untracked" classes, cooperative learning, and testing and test-wiseness). Argues that such dogmatic views of education limit…

  13. Accurate and Inaccurate Conceptions about Osmosis That Accompanied Meaningful Problem Solving.

    ERIC Educational Resources Information Center

    Zuckerman, June Trop

    This study focused on the knowledge of six outstanding science students who solved an osmosis problem meaningfully. That is, they used appropriate and substantially accurate conceptual knowledge to generate an answer. Three generated a correct answer; three, an incorrect answer. This paper identifies both the accurate and inaccurate conceptions…

  14. SEISMIC ATTENUATION FOR RESERVOIR CHARACTERIZATION

    SciTech Connect

    Joel Walls; M.T. Taner; Gary Mavko; Jack Dvorkin

    2002-04-01

    Wave-induced variations of pore pressure in a partially-saturated reservoir result in oscillatory liquid flow. The viscous losses during this flow are responsible for wave attenuation. The same viscous effects determine the changes in the dynamic bulk modulus of the system versus frequency. These changes are necessarily linked to attenuation via the causality condition. We analytically quantify the frequency dependence of the bulk modulus of a partially saturated rock by assuming that saturation is patchy and then link these changes to the inverse quality factor. As a result, the P-wave attenuation is quantitatively linked to saturation and thus can serve as a saturation indicator.
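    The causality link mentioned above is often summarized through a complex, frequency-dependent modulus; under the common low-loss convention the inverse quality factor is the ratio of its imaginary to real part. A minimal sketch, with the modulus model M(f) left to the user:

        import numpy as np

        def inverse_q(modulus_complex):
            """Inverse quality factor 1/Q from a complex modulus M(f),
            using the low-loss definition 1/Q = Im(M) / Re(M)."""
            m = np.asarray(modulus_complex)
            return m.imag / m.real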

  15. Surface-based partial-volume correction for high-resolution PET.

    PubMed

    Funck, Thomas; Paquette, Caroline; Evans, Alan; Thiel, Alexander

    2014-11-15

    Tissue radioactivity concentrations, measured with positron emission tomography (PET) are subject to partial volume effects (PVE) due to the limited spatial resolution of the scanner. Last generation high-resolution PET cameras with a full width at half maximum (FWHM) of 2-4mm are less prone to PVEs than previous generations. Corrections for PVEs are still necessary, especially when studying small brain stem nuclei or small variations in cortical neuroreceptor concentrations which may be related to cytoarchitectonic differences. Although several partial-volume correction (PVC) algorithms exist, these are frequently based on a priori assumptions about tracer distribution or only yield corrected values of regional activity concentrations without providing PVE corrected images. We developed a new iterative deconvolution algorithm (idSURF) for PVC of PET images that aims to overcome these limitations by using two innovative techniques: 1) the incorporation of anatomic information from a cortical gray matter surface representation, extracted from magnetic resonance imaging (MRI) and 2) the use of anatomically constrained filtering to attenuate noise. PVE corrected images were generated with idSURF implemented into a non-interactive processing pipeline. idSURF was validated using simulated and clinical PET data sets and compared to a frequently used standard PVC method (Geometric Transfer Matrix: GTM). The results on simulated data sets show that idSURF consistently recovers accurate radiotracer concentrations within 1-5% of true values. Both radiotracer concentrations and non-displaceable binding potential (BPnd) values derived from clinical PET data sets with idSURF were highly correlated with those obtained with the standard PVC method (R(2) = 0.99, error = 0%-3.2%). These results suggest that idSURF is a valid and potentially clinically useful PVC method for automatic processing of large numbers of PET data sets. PMID:25175542

  16. The Utility of Maze Accurate Response Rate in Assessing Reading Comprehension in Upper Elementary and Middle School Students

    ERIC Educational Resources Information Center

    McCane-Bowling, Sara J.; Strait, Andrea D.; Guess, Pamela E.; Wiedo, Jennifer R.; Muncie, Eric

    2014-01-01

    This study examined the predictive utility of five formative reading measures: words correct per minute, number of comprehension questions correct, reading comprehension rate, number of maze correct responses, and maze accurate response rate (MARR). Broad Reading cluster scores obtained via the Woodcock-Johnson III (WJ III) Tests of Achievement…

  17. Optimization of a Model Corrected Blood Input Function from Dynamic FDG-PET Images of Small Animal Heart In Vivo.

    PubMed

    Zhong, Min; Kundu, Bijoy K

    2013-10-01

    Quantitative evaluation of dynamic Positron Emission Tomography (PET) of mouse heart in vivo is challenging due to the small size of the heart and limited intrinsic spatial resolution of the PET scanner. Here, we optimized a compartment model which can simultaneously correct for spillover (SP) and partial-volume (PV) effects for both the blood pool and the myocardium, compute kinetic rate parameters and generate a model corrected blood input function (MCBIF) from ordered subset expectation maximization - maximum a posteriori (OSEM-MAP) cardiac and respiratory gated (18)F-FDG PET images of mouse heart with attenuation correction in vivo, without any invasive blood sampling. Arterial blood samples were collected from a single mouse to indicate the feasibility of the proposed method. In order to establish statistical significance, venous blood samples from n=6 mice were obtained at 2 late time points, when SP contamination from the tissue to the blood is maximum. We observed that correct bounds and initial guesses for the PV and SP coefficients accurately model the wash-in and wash-out dynamics of the tracer from mouse blood. The residual plot indicated an average difference of about 1.7% between the blood samples and MCBIF. The downstream myocardial FDG influx rate constant, Ki (0.15±0.03 min(-1)), compared well with Ki obtained from arterial blood samples (P=0.716). In conclusion, the proposed methodology is not only quantitative but also reproducible. PMID:24741130
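    To make the spillover/partial-volume coupling concrete, the sketch below undoes an assumed linear mixing between the blood-pool and myocardial time-activity curves once the PV and SP coefficients are known; in the paper these coefficients are fitted within the compartment model rather than assumed, so this is an illustration only.

        import numpy as np

        def recover_tacs(blood_meas, myo_meas, rc_blood, rc_myo, sp_myo_to_blood, sp_blood_to_myo):
            """Invert the assumed mixing
                blood_meas = rc_blood * blood_true + sp_myo_to_blood * myo_true
                myo_meas   = sp_blood_to_myo * blood_true + rc_myo * myo_true
            with one 2x2 solve shared by all time frames."""
            a = np.array([[rc_blood, sp_myo_to_blood],
                          [sp_blood_to_myo, rc_myo]])
            true = np.linalg.solve(a, np.vstack([blood_meas, myo_meas]))
            return true[0], true[1]   # blood_true, myo_true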

  18. Attenuation of Shocks through Porous Media

    NASA Astrophysics Data System (ADS)

    Lind, Charles A.; Cybyk, Bohdan Z.; Boris, Jay P.

    1998-11-01

    Structures designed to mitigate the effects of blast and shock waves are important for both accidental and controlled explosions. The net effect of these mitigating structures is to reduce the strength of the transmitted shock thereby reducing the dynamic pressure loading on nearby objects. In the present study, the attenuation of planar blast and shock waves by passage through structured media is numerically studied with the FAST3D model. The FAST3D model is a state-of-the-art, portable, three-dimensional computational fluid dynamics model based on Flux-Corrected Transport and uses the Virtual Cell Embedding algorithm for simulating complex geometries. The effects of media placement, spacing, orientation, and area blockage are parametrically studied to enhance the understanding of the complex processes involved and to determine ways to minimize the adverse effects of these blast waves.

  19. Accurate, meshless methods for magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.; Raives, Matthias J.

    2016-01-01

    Recently, we explored new meshless finite-volume Lagrangian methods for hydrodynamics: the `meshless finite mass' (MFM) and `meshless finite volume' (MFV) methods; these capture advantages of both smoothed particle hydrodynamics (SPH) and adaptive mesh refinement (AMR) schemes. We extend these to include ideal magnetohydrodynamics (MHD). The MHD equations are second-order consistent and conservative. We augment these with a divergence-cleaning scheme, which maintains ∇·B ≈ 0. We implement these in the code GIZMO, together with state-of-the-art SPH MHD. We consider a large test suite, and show that on all problems the new methods are competitive with AMR using constrained transport (CT) to ensure ∇·B = 0. They correctly capture the growth/structure of the magnetorotational instability, MHD turbulence, and launching of magnetic jets, in some cases converging more rapidly than state-of-the-art AMR. Compared to SPH, the MFM/MFV methods exhibit convergence at fixed neighbour number, sharp shock-capturing, and dramatically reduced noise, divergence errors, and diffusion. Still, `modern' SPH can handle most test problems, at the cost of larger kernels and `by hand' adjustment of artificial diffusion. Compared to non-moving meshes, the new methods exhibit enhanced `grid noise' but reduced advection errors and diffusion, easily include self-gravity, and feature velocity-independent errors and superior angular momentum conservation. They converge more slowly on some problems (smooth, slow-moving flows), but more rapidly on others (involving advection/rotation). In all cases, we show divergence control beyond the Powell 8-wave approach is necessary, or all methods can converge to unphysical answers even at high resolution.

  20. Accurate Estimation of the Fine Layering Effect on the Wave Propagation in the Carbonate Rocks

    NASA Astrophysics Data System (ADS)

    Bouchaala, F.; Ali, M. Y.

    2014-12-01

    The attenuation experienced by a seismic wave during its propagation can be divided into two main parts: scattering and intrinsic attenuation. Scattering is an elastic redistribution of energy due to medium heterogeneities, whereas intrinsic attenuation is an inelastic phenomenon, mainly due to fluid-grain friction during the wave passage. Intrinsic attenuation is directly related to the physical characteristics of the medium, so this parameter can be used for media characterization and fluid detection, which is beneficial for the oil and gas industry. Intrinsic attenuation is estimated by subtracting the scattering from the total attenuation; its accuracy therefore depends directly on the accuracy of both the total attenuation and the scattering. The total attenuation can be estimated from the recorded waves using in-situ methods such as the spectral-ratio and frequency-shift methods. The scattering is estimated by treating the heterogeneities as a succession of stacked layers, each characterized by a single density and velocity. The accuracy of the scattering estimate is strongly dependent on the layer thicknesses, especially for media composed of carbonate rocks, which are known for their strong heterogeneity. Previous studies gave some assumptions for the choice of the layer thickness, but they showed limitations, especially in the case of carbonate rocks. In this study we established a relationship between the layer thicknesses and the propagation frequency, after some mathematical development of the generalized O'Doherty-Anstey formula. We validated this relationship through synthetic tests and real data from a VSP carried out over an onshore oilfield in the emirate of Abu Dhabi, United Arab Emirates, primarily composed of carbonate rocks. The results showed the utility of our relationship for an accurate estimation of the scattering
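    As a sketch of the spectral-ratio idea mentioned above (one common in-situ approach, not necessarily the exact implementation of this study), the total attenuation Q between two receivers can be read off the slope of the log spectral ratio versus frequency:

        import numpy as np

        def q_spectral_ratio(freqs_hz, spec_near, spec_far, delta_t_s):
            """Spectral-ratio estimate of total attenuation Q:
            ln(A_far / A_near) = const - (pi * delta_t / Q) * f,
            so Q follows from the fitted slope over the usable band."""
            y = np.log(np.asarray(spec_far) / np.asarray(spec_near))
            slope, _ = np.polyfit(np.asarray(freqs_hz), y, 1)
            return -np.pi * delta_t_s / slope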

  1. SU-E-I-20: Dead Time Count Loss Compensation in SPECT/CT: Projection Versus Global Correction

    SciTech Connect

    Siman, W; Kappadath, S

    2014-06-01

    Purpose: To compare projection-based and global corrections that compensate for deadtime count loss in SPECT/CT images. Methods: SPECT/CT images of an IEC phantom (2.3 GBq 99mTc) with ∼10% deadtime loss, containing the 37 mm (uptake 3) and the 28 and 22 mm (uptake 6) spheres, were acquired using a 2-detector SPECT/CT system with 64 projections/detector and 15 s/projection. The deadtime Ti and the true count rate Ni at each projection i were calculated using the monitor-source method. Deadtime-corrected SPECT images were reconstructed twice: (1) with projections that were individually corrected for deadtime losses; and (2) with the original projections containing losses, then correcting the reconstructed SPECT images using a scaling factor equal to the inverse of the average fractional loss computed from 5 projections/detector. For both cases, the SPECT images were reconstructed using OSEM with attenuation and scatter corrections. The two SPECT datasets were assessed by comparing line profiles in the xy-plane and along the z-axis, evaluating the count recoveries, and comparing ROI statistics. Higher deadtime losses (up to 50%) were also simulated by multiplying each individually corrected projection i by exp(-a*Ni*Ti), where a is a scalar. Additionally, deadtime corrections in phantoms with different geometries and deadtime losses were also explored. The same two correction methods were carried out for all these data sets. Results: Averaging the deadtime losses over 5 projections/detector suffices to recover >99% of the lost counts in most clinical cases. The line profiles (xy-plane and z-axis) and the statistics in the ROIs drawn in the SPECT images corrected using both methods showed agreement within the statistical noise. The count-loss recoveries in the two methods also agree within >99%. Conclusion: The projection-based and the global correction yield visually indistinguishable SPECT images. The global correction based on sparse sampling of projection losses allows for accurate SPECT deadtime
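    The global variant amounts to a single multiplicative factor derived from a few sampled projections; a minimal sketch under a paralyzable-loss assumption (matching the exp(-a*Ni*Ti) loss model used in the simulation) is:

        import numpy as np

        def global_deadtime_factor(true_rates_cps, deadtimes_s):
            """Scaling factor from a sparse sample of projections, assuming
            paralyzable losses measured = true * exp(-N * tau).  The SPECT
            image reconstructed from uncorrected projections is multiplied
            by the inverse of the mean surviving-count fraction."""
            surviving = np.exp(-np.asarray(true_rates_cps) * np.asarray(deadtimes_s))
            return 1.0 / surviving.mean()

        # spect_corrected = spect_with_losses * global_deadtime_factor(rates, taus)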

  2. Repetition priming results in sensitivity attenuation

    PubMed Central

    Allenmark, Fredrik; Hsu, Yi-Fang; Roussel, Cedric; Waszak, Florian

    2015-01-01

    Repetition priming refers to the change in the ability to perform a task on a stimulus as a consequence of a former encounter with that very same item. Usually, repetition results in faster and more accurate performance. In the present study, we used a contrast discrimination protocol to assess perceptual sensitivity and response bias of Gabor gratings that are either repeated (same orientation) or alternated (different orientation). We observed that contrast discrimination performance is worse, not better, for repeated than for alternated stimuli. In a second experiment, we varied the probability of stimulus repetition, thus testing whether the repetition effect is due to bottom-up or top-down factors. We found that it is top-down expectation that determines the effect. We discuss the implication of these findings for repetition priming and related phenomena as sensory attenuation. This article is part of a Special Issue entitled SI: Prediction and Attention. PMID:25819554

  3. First results from a prototype dynamic attenuator system

    NASA Astrophysics Data System (ADS)

    Hsieh, Scott S.; Peng, Mark V.; May, Christopher A.; Shunhavanich, Picha; Pelc, Norbert J.

    2015-03-01

    The dynamic, piecewise-linear attenuator has been proposed as a concept which can shape the radiation flux incident on the patient. By reducing the signal to photon-rich measurements and increasing the signal to photon-starved measurements, the piecewise-linear attenuator has been shown to improve dynamic range, scatter, and variance and dose metrics in simulation. The piecewise-linear nature of the proposed attenuator has been hypothesized to mitigate artifacts at transitions by eliminating jump discontinuities in attenuator thickness at these points. We report the results of a prototype implementation of this concept. The attenuator was constructed using rapid prototyping technologies and was affixed to a tabletop x-ray system. Images of several sections of an anthropomorphic pediatric phantom were produced and compared to those of the same system with uniform illumination. The thickness of the illuminated slab was limited by beam collimation and an analytic water beam hardening correction was used for both systems. Initial results are encouraging and show improved image quality, reduced dose and low artifact levels.

  4. Determination of the tissue inhomogeneity correction in high dose rate Brachytherapy for Iridium-192 source

    PubMed Central

    Ravikumar, Barlanka; Lakshminarayana, S.

    2012-01-01

    In Brachytherapy treatment planning, the effects of tissue heterogeneities are commonly neglected due to the lack of accurate, general and fast three-dimensional (3D) dose-computation algorithms. In performing dose calculations, it is assumed that the tumor and surrounding tissues constitute a uniform, homogeneous medium equivalent to water. In the recent past, three-dimensional computed tomography (3D-CT) based treatment planning for Brachytherapy applications has been widely adopted. However, most of the current commercially available planning systems do not provide heterogeneity corrections for Brachytherapy dosimetry. In the present study, we have measured and quantified the impact of inhomogeneity caused by different tissues with a 0.015 cc ion chamber. Measurements were carried out in a wax phantom employed to represent the heterogeneity. An Iridium-192 (192Ir) source from a high dose rate (HDR) Brachytherapy machine was used as the radiation source. The reduction of dose due to tissue inhomogeneity was measured as the ratio of the dose measured with different types of inhomogeneity (bone, spleen, liver, muscle and lung) to the dose measured in a homogeneous medium at different distances. It was observed that different tissues attenuate differently, with bone showing the maximum attenuation, lung the minimum, and the remaining tissues giving values lying in between. It was also found that the inhomogeneity effect at short distances is considerably larger than at greater distances. PMID:22363109

  5. Nuclear-interaction correction of integrated depth dose in carbon-ion radiotherapy treatment planning.

    PubMed

    Inaniwa, T; Kanematsu, N; Hara, Y; Furukawa, T

    2015-01-01

    In treatment planning of charged-particle therapy, tissue heterogeneity is conventionally modeled as water with various densities, i.e. stopping effective densities ρ(S), and the integrated depth dose measured in water (IDD) is applied accordingly for the patient dose calculation. Since the chemical composition of body tissues is different from that of water, this approximation causes dose calculation errors, especially due to difference in nuclear interactions. Here, we propose and validate an IDD correction method for these errors in patient dose calculations. For accurate handling of nuclear interactions, ρ(S) of the patient is converted to nuclear effective density ρ(N), defined as the ratio of the probability of nuclear interactions in the tissue to that in water using a recently formulated semi-empirical relationship between the two. The attenuation correction factor Φ(w)(p), defined as the ratio of the attenuation of primary carbon ions in a patient to that in water, is calculated from a linear integration of ρ(N) along the beam path. In our treatment planning system, a carbon-ion beam is modeled to be composed of three components according to their transverse beam sizes: primary carbon ions, heavier fragments, and lighter fragments. We corrected the dose contribution from primary carbon ions to IDD as proportional to Φ(w)(p), and corrected that from lighter fragments as inversely proportional to Φ(w)(p). We tested the correction method for some non-water materials, e.g. milk, lard, ethanol and water solution of potassium phosphate (K2HPO4), with un-scanned and scanned carbon-ion beams. In un-scanned beams, the difference in IDD between a beam penetrating a 150 mm-thick layer of lard and a beam penetrating water of the corresponding thickness amounted to -4%, while it was +6% for a 150 mm-thick layer of 40% K2HPO4. The observed differences were accurately predicted by the correction method. The corrected IDDs agreed with the measurements within

  6. Eyeglasses for Vision Correction

    MedlinePlus

    Wearing eyeglasses is an easy way to correct refractive errors. Improving your vision with eyeglasses offers the opportunity to select from ...

  7. Reduction of TGS image reconstruction times using separable attenuation coefficient models

    SciTech Connect

    Estep, R.J.; Prettyman, T.H.; Sheppard, G.A.

    1995-12-31

    The tomographic gamma scanner (TGS) method for assaying transuranic and low-level waste produces low-resolution "density" images of 208-L waste drums at two or more transmission gamma-ray energies and uses these to make detailed attenuation corrections at neighboring emission gamma-ray energies. For example, we have used the 136-, 285-, and 401-keV lines from a {sup 75}Se transmission source to correct for attenuation of the 129-, 203-, 345-, and 414-keV lines in {sup 239}Pu assays. The list can expand to 20 or more emission energies when performing multiple-isotope assays. Methods for projecting attenuation images from transmission to emission energies were recently discussed with emphasis on the problems encountered when the opacity of a sample leads to poor counting statistics. This report focuses on increases in computational speed that can be attained by using separable attenuation coefficient models.

  8. Accurate skin dose measurements using radiochromic film in clinical applications

    SciTech Connect

    Devic, S.; Seuntjens, J.; Abdel-Rahman, W.; Evans, M.; Olivares, M.; Podgorsak, E.B.; Vuong, Te; Soares, Christopher G.

    2006-04-15

    Megavoltage x-ray beams exhibit the well-known phenomena of dose buildup within the first few millimeters of the incident phantom surface, or the skin. Results of the surface dose measurements, however, depend vastly on the measurement technique employed. Our goal in this study was to determine a correction procedure in order to obtain an accurate skin dose estimate at the clinically relevant depth based on radiochromic film measurements. To illustrate this correction, we have used as a reference point a depth of 70 µm. We used the new GAFCHROMIC® dosimetry films (HS, XR-T, and EBT) that have effective points of measurement at depths slightly larger than 70 µm. In addition to films, we also used an Attix parallel-plate chamber and a home-built extrapolation chamber to cover tissue-equivalent depths in the range from 4 µm to 1 mm of water-equivalent depth. Our measurements suggest that within the first millimeter of the skin region, the PDD for a 6 MV photon beam and field size of 10x10 cm{sup 2} increases from 14% to 43%. For the three GAFCHROMIC® dosimetry film models, the 6 MV beam entrance skin dose measurement corrections due to their effective point of measurement are as follows: 15% for the EBT, 15% for the HS, and 16% for the XR-T model GAFCHROMIC® films. The correction factors for the exit skin dose due to the build-down region are negligible. There is a small field size dependence for the entrance skin dose correction factor when using the EBT GAFCHROMIC® film model. Finally, a procedure that uses EBT model GAFCHROMIC® film for an accurate measurement of the skin dose in a parallel-opposed pair 6 MV photon beam arrangement is described.

  9. Research in Correctional Rehabilitation.

    ERIC Educational Resources Information Center

    Rehabilitation Services Administration (DHEW), Washington, DC.

    Forty-three leaders in corrections and rehabilitation participated in the seminar planned to provide an indication of the status of research in correctional rehabilitation. Papers include: (1) "Program Trends in Correctional Rehabilitation" by John P. Conrad, (2) "Federal Offenders Rehabilitation Program" by Percy B. Bell and Merlyn Mathews, (3)…

  10. How flatbed scanners upset accurate film dosimetry.

    PubMed

    van Battum, L J; Huizenga, H; Verdaasdonk, R M; Heukelom, S

    2016-01-21

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focusing on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels in the extreme lateral position. Light polarization due to film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channel. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and, therefore, determination of the LSE per color channel and of the dose delivered to the film. PMID:26689962

  11. How flatbed scanners upset accurate film dosimetry

    NASA Astrophysics Data System (ADS)

    van Battum, L. J.; Huizenga, H.; Verdaasdonk, R. M.; Heukelom, S.

    2016-01-01

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focusing on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels in the extreme lateral position. Light polarization due to film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channel. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and, therefore, determination of the LSE per color channel and of the dose delivered to the film.

  12. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  13. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  14. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  15. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  16. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  17. Ultrasonic Attenuation Results of Thermoplastic Resin Composites Undergoing Thermal and Fatigue Loading

    NASA Technical Reports Server (NTRS)

    Madaras, Eric I.

    1998-01-01

    As part of an effort to obtain the required information about new composites for aviation use, materials and NDE researchers at NASA are jointly performing mechanical and NDE measurements on new composite materials. The materials testing laboratory at NASA is equipped with environmental chambers mounted on load frames that can expose composite materials to thermal and loading cycles representative of flight protocols. Applying both temperature and load simultaneously will help to highlight temperature and load interactions during the aging of these composite materials. This report highlights our initial ultrasonic attenuation results from thermoplastic composite samples that have undergone over 4000 flight cycles to date. Ultrasonic attenuation measurements are a standard method used to assess the effects of material degradation. Recently, researchers have shown that they could obtain adequate contrast in the evaluation of thermal degradation in thermoplastic composites by using frequencies of ultrasound on the order of 24 MHz. In this study, we address the relationship of attenuation measured at lower frequencies in thermoplastic composites undergoing both thermal and mechanical loading. We also compare these thermoplastic results with some data from thermoset composites undergoing similar protocols. The composite's attenuation is reported as the slope of attenuation with respect to frequency, defined as b = Δa(f)/Δf. The slope of attenuation is an attractive parameter since it is quantitative, yet does not require interface corrections like conventional quantitative attenuation measurements. This latter feature is a consequence of the assumption that interface correction terms are frequency independent. Uncertainty in those correction terms can compromise the value of conventional quantitative attenuation data. Furthermore, the slope of the attenuation more directly utilizes the bandwidth information and, in addition, the bandwidth can be adjusted in the post

  18. Removing attenuation effects in reflectivity images at 33 and 95 GHz

    NASA Astrophysics Data System (ADS)

    Lohmeier, Stephen P.; Sekelsky, Stephen M.; Firda, John M.

    1997-09-01

    Reflectivity is a fundamental parameter for sensing the morphology and composition of clouds and precipitation. However, attenuation due to varying amounts of precipitation, clouds, and water vapor along the propagation path corrupts reflectivity estimates. In this paper, an algorithm to correct for these effects at 33 and 95 GHz is proposed. This algorithm is then applied to corrupted reflectivity images collected with the University of Massachusetts Microwave Remote Sensing Laboratory (MIRSL) Cloud Profiling Radar System (CPRS), which is a dual-frequency (33 and 95 GHz), fully polarimetric, pulse-Doppler, ground-based radar. The attenuation correction algorithm consists of two steps. First, different sources of attenuation along the propagation path are identified by classifying each image into regions of air, ice particles, liquid droplets, rain, mixed-phase particles, and insects. This is accomplished with a rule-based classifier that relies on collocated measurements of velocity, linear depolarization ratio, and height to make classification decisions. The second step is correcting attenuation along the propagation path in a region-appropriate manner. By starting at the ground with the assumption that the reflectivity estimate is unattenuated, and working away from the radar adding a region-appropriate amount to the reflectivity estimate at each range gate, attenuation effects in the image can be largely removed. However, if a mixed-phase region where the rate of attenuation is unknown is encountered along the propagation path, the correction is suspended and an alternative approach that corrects attenuation from the top of the cloud down is used. The complete algorithm was applied to the CPRS data and significantly improved reflectivity estimates.
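    The gate-by-gate bookkeeping can be illustrated compactly: each gate gets back twice the one-way loss accumulated below it, with the loss rate chosen by the region class assigned to each gate. The class names and dB/km values in the sketch are placeholders, not the algorithm's actual coefficients.

        import numpy as np

        def correct_reflectivity(z_meas_dbz, gate_classes, k_dbkm, gate_km):
            """Cumulative two-way attenuation correction for a ground-based,
            vertically pointing radar.  The lowest gate is assumed unattenuated.

            z_meas_dbz   : measured reflectivity per gate (dBZ), lowest gate first
            gate_classes : region class per gate, e.g. 'air', 'rain', 'liquid'
            k_dbkm       : dict of assumed one-way specific attenuation (dB/km)
            gate_km      : range-gate spacing (km)
            """
            k = np.array([k_dbkm[c] for c in gate_classes])   # dB/km per gate
            one_way = np.cumsum(k) * gate_km                  # loss up to and including each gate
            below = np.concatenate(([0.0], one_way[:-1]))     # loss accumulated below each gate
            return np.asarray(z_meas_dbz) + 2.0 * below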

  19. Attenuator design for organs at risk in total body irradiation using a translation technique

    SciTech Connect

    Lavallee, Marie-Claude; Aubin, Sylviane; Chretien, Mario; Larochelle, Marie; Beaulieu, Luc

    2008-05-15

    Total body irradiation (TBI) is an efficient part of the treatment for malignant hematological diseases. Dynamic TBI techniques provide great advantages (e.g., dose homogeneity, patient comfort) while overcoming treatment room space restrictions. However, with dynamic techniques come additional organ-at-risk (OAR) protection challenges. In most dynamic TBI techniques, lead attenuators are used to diminish the dose received by the OARs. The purpose of this study was to characterize the dose deposition under various shapes of attenuators in static and dynamic treatments. This characterization allows for the development of a correction method to improve attenuator design in dynamic treatments. The dose deposition under attenuators at different depths in dynamic treatment was compared with the static situation based on two definitions: the coverage areas and the penumbra regions. The coverage area decreases with depth in dynamic treatment while it is stable for the static situation. The penumbra increases with depth in both treatment modes, but the increasing rate is higher in the dynamic situation. Since the attenuator coverage is deficient in the dynamic treatment mode, a correction method was developed to modify the attenuator design in order to improve the OAR protection. The correction method is divided into two steps. The first step is based on the use of elongation charts, which provide appropriate attenuator coverage and acceptable penumbra for a specific depth. The second step is a correction for the thoracic inclination, which can introduce an orientation problem in both static and dynamic treatments. This two-step correction method is simple to use and personalized to each patient's anatomy. It can easily be adapted to any dynamic TBI technique.

  20. Corrective Action Glossary

    SciTech Connect

    Not Available

    1992-07-01

    The glossary of technical terms was prepared to facilitate the use of the Corrective Action Plan (CAP) issued by OSWER on November 14, 1986. The CAP presents model scopes of work for all phases of a corrective action program, including the RCRA Facility Investigation (RFI), Corrective Measures Study (CMS), Corrective Measures Implementation (CMI), and interim measures. The Corrective Action Glossary includes brief definitions of the technical terms used in the CAP and explains how they are used. In addition, expected ranges (where applicable) are provided. Parameters or terms not discussed in the CAP, but commonly associated with site investigations or remediations are also included.

  1. Self-attenuation as a function of gamma ray energy in naturally occurring radioactive material in the oil and gas industry.

    PubMed

    Millsap, D W; Landsberger, S

    2015-03-01

    Self-attenuation correction factors were experimentally determined using radioactive point sources in combination with a subject material of naturally occurring radioactive material (NORM) obtained from oil exploration waste products. The self-attenuation correction factors were taken across a range of gamma-ray energies from 41.73 to 1408.0 keV. It is noted that the greatest amount of self-attenuation occurs in the energy regime below 200 keV, with the attenuation falling to near zero at higher energies for these types of samples. For the 46.5 keV gamma ray of (210)Pb there can be an underestimation of 62%. PMID:25527897
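    One common way to turn such point-source transmission measurements into a correction is the slab-geometry (Cutshall-type) factor below; it is given here only as an illustration of the magnitude involved, not as the authors' exact procedure.

        import numpy as np

        def self_attenuation_factor(transmittance):
            """Multiplicative self-attenuation correction for a slab-like sample
            from the measured transmittance T = I_sample / I_empty (0 < T < 1):
            factor = ln(1/T) / (1 - T).  Strong attenuation (small T) at low
            energies gives factors well above 2, consistent with the ~62%
            underestimation quoted for the 46.5 keV line of 210Pb."""
            t = np.asarray(transmittance, dtype=float)
            return np.log(1.0 / t) / (1.0 - t)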

  2. Shuttle program: Computing atmospheric scale height for refraction corrections

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1980-01-01

    Methods for computing the atmospheric scale height to determine radio wave refraction were investigated for different atmospheres, and different angles of elevation. Tables of refractivity versus altitude are included. The equations used to compute the refraction corrections are given. It is concluded that very accurate corrections are determined with the assumption of an exponential atmosphere.
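    A minimal sketch of the exponential-atmosphere assumption (the report's actual programs and refractivity tables are not reproduced here): the scale height follows from any two tabulated refractivity samples, and the profile is then fully determined.

        import numpy as np

        def scale_height_km(h1_km, n1, h2_km, n2):
            """Scale height H of an exponential refractivity profile
            N(h) = N_surface * exp(-h / H), from two (height, refractivity) samples."""
            return (h2_km - h1_km) / np.log(n1 / n2)

        def refractivity(h_km, n_surface, scale_height):
            """Refractivity at height h for the exponential model."""
            return n_surface * np.exp(-h_km / scale_height)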

  3. Sound attenuation in magnetorheological fluids

    NASA Astrophysics Data System (ADS)

    Rodríguez-López, J.; Elvira, L.; Resa, P.; Montero de Espinosa, F.

    2013-02-01

    In this work, the attenuation of ultrasonic elastic waves propagating through magnetorheological (MR) fluids is analysed as a function of the particle volume fraction and the magnetic field intensity. Non-commercial MR fluids made with iron ferromagnetic particles and two different solvents (an olive oil based solution and an Araldite-epoxy) were used. Particle volume fractions of up to 0.25 were analysed. It is shown that the attenuation of sound depends strongly on the solvent used and the volume fraction. The influence of a magnetic field up to 212 mT was studied and it was found that the sound attenuation increases with the magnetic intensity until saturation is reached. A hysteretic effect is evident once the magnetic field is removed.

  4. Accurate localization of needle entry point in interventional MRI.

    PubMed

    Daanen, V; Coste, E; Sergent, G; Godart, F; Vasseur, C; Rousseau, J

    2000-10-01

    In interventional magnetic resonance imaging (MRI), the systems designed to help the surgeon during biopsy must provide accurate knowledge of the positions of both the target and the entry point of the needle on the skin of the patient. In some cases, this needle entry point can be outside the B(0) homogeneity area, where the distortions may be larger than a few millimeters. In that case, major correction for geometric deformation must be performed. Moreover, the use of markers to highlight the needle entry point is inaccurate. The aim of this study was to establish a three-dimensional coordinate correction according to the position of the entry point of the needle. We also describe a 2-degree-of-freedom electromechanical device that is used to determine the needle entry point on the patient's skin with a laser spot. PMID:11042649

  5. SEISMIC ATTENUATION FOR RESERVOIR CHARACTERIZATION

    SciTech Connect

    Joel Walls; M.T. Taner; Naum Derzhi; Gary Mavko; Jack Dvorkin

    2003-04-01

    In this report we will show results of seismic and well log derived attenuation attributes from a deep water Gulf of Mexico data set. This data was contributed by Burlington Resources and Seitel Inc. The data consists of ten square kilometers of 3D seismic data and three well penetrations. We have computed anomalous seismic absorption attributes on the seismic data and have computed Q from the well log curves. The results show a good correlation between the anomalous absorption (attenuation) attributes and the presence of gas as indicated by well logs.

  6. Turbulence Models for Accurate Aerothermal Prediction in Hypersonic Flows

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang-Hong; Wu, Yi-Zao; Wang, Jiang-Feng

    Accurate description of the aerodynamic and aerothermal environment is crucial to the integrated design and optimization of high-performance hypersonic vehicles. In the simulation of the aerothermal environment, the effect of viscosity is crucial, and turbulence modeling remains a major source of uncertainty in the computational prediction of aerodynamic forces and heating. In this paper, three turbulence models were studied: the one-equation eddy viscosity transport model of Spalart-Allmaras, the Wilcox k-ω model and the Menter SST model. For the k-ω model and SST model, the compressibility correction, pressure dilatation and low-Reynolds-number correction were considered. The influence of these corrections on the flow properties was discussed by comparison with results obtained without corrections. The emphasis is on the assessment and evaluation of the turbulence models in the prediction of heat transfer as applied to a range of hypersonic flows, with comparison to experimental data. This will enable establishing factors of safety for the design of thermal protection systems of hypersonic vehicles.

  7. Accurate pressure gradient calculations in hydrostatic atmospheric models

    NASA Technical Reports Server (NTRS)

    Carroll, John J.; Mendez-Nunez, Luis R.; Tanrikulu, Saffet

    1987-01-01

    A method for the accurate calculation of the horizontal pressure gradient acceleration in hydrostatic atmospheric models is presented which is especially useful in situations where the isothermal surfaces are not parallel to the vertical coordinate surfaces. The present method is shown to be exact if the potential temperature lapse rate is constant between the vertical pressure integration limits. The technique is applied to both the integration of the hydrostatic equation and the computation of the slope correction term in the horizontal pressure gradient. A fixed vertical grid and a dynamic grid defined by the significant levels in the vertical temperature distribution are employed.

  8. 78 FR 75449 - Miscellaneous Corrections; Corrections

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-12

    ... INFORMATION: The NRC published a final rule in the Federal Register on June 7, 2013 (78 FR 34245), to make.... The final rule contained minor errors in grammar, punctuation, and referencing. This document corrects... specifying metric units. The final rule inadvertently included additional errors in grammar and...

  9. Attenuation of peak sound pressure levels of shooting noise by hearing protective earmuffs.

    PubMed

    Lenzuni, Paolo; Sangiorgi, Tommaso; Cerini, Luigi

    2012-01-01

    Transmission losses (TL) to highly impulsive signals generated by three firearms have been measured for two ear muffs, using both a head and torso simulator and a miniature microphone located at the ear canal entrance (MIRE technique). Peak SPL TL have been found to be well approximated by 40 ms short-Leq TL. This has allowed the use of transmissibilities and correction factors for bone conduction and physiological masking appropriate for continuous noise, for the calculation of REAT-type peak insertion losses (IL). Results indicate that peak IL can be well predicted by estimates based on one-third-octave-band 40 ms short-Leq and manufacturer-declared (nominal) IL measured for continuous noise according to test standards. Such predictions tend to be more accurate at the high end of the range, while they are less reliable when the attenuation is lower. A user-friendly simplified prediction algorithm has also been developed, which only requires nominal IL and one-third octave sound exposure level spectra. Separate predictions are possible for IL in direct and diffuse sound fields, albeit with higher uncertainties, due to the smaller number of experimental data comprising the two separate datasets on which such predictions are based. PMID:22718106
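    The simplified algorithm itself is not reproduced here, but the general flavor of a band-wise prediction can be sketched as follows: subtract the nominal per-band insertion loss from the per-band exposure levels and recombine the residual band energies (an illustration of standard octave-band bookkeeping, not the authors' exact formula).

        import numpy as np

        def protected_level_db(band_levels_db, band_il_db):
            """Combine one-third-octave band levels with per-band insertion
            losses and sum the residual band energies into a single level."""
            residual = np.asarray(band_levels_db) - np.asarray(band_il_db)
            return 10.0 * np.log10(np.sum(10.0 ** (residual / 10.0)))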

  10. 75 FR 68407 - Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-08

    ... 67013, the Presidential Determination number should read ``2010-12'' (Presidential Sig.) [FR Doc. C1... Migration Needs Resulting from Violence in Kyrgyzstan Correction In Presidential document...

  11. On prismatic corrections

    NASA Astrophysics Data System (ADS)

    Bartkowski, Zygmunt; Bartkowska, Janina

    2006-02-01

    Prismatic corrections involve differences between the nominal prism and the interior prism, or tilts of the eye needed to fixate straight ahead (Augenausgleichbewegung). In astigmatic corrections, if the prism does not lie in the principal sections of the cylinder, the directions of the two effects differ. In corrections of horizontal strabismus, a vertical component of the interior prism appears. Approximate formulae describing these phenomena are presented. A suitable setting can improve the quality of vision in the direction that is most important to the patient.

  12. Detailed Study of Seismic Wave Attenuation in Carbonate Rocks: Application on Abu Dhabi Oil Fields

    NASA Astrophysics Data System (ADS)

    Bouchaala, F.; Ali, M. Y.; Matsushima, J.

    2015-12-01

    Seismic wave attenuation is a promising attribute for petroleum exploration, thanks to its high sensitivity to the physical properties of the subsurface. It can be used to enhance seismic imaging and improve geophysical interpretation, which is crucial for reservoir characterization. However, obtaining an accurate attenuation profile is not an easy task because the underlying mechanisms are complex, even though many studies have been carried out to understand them. The difficulty increases in media composed of carbonate rocks, which are known to be highly heterogeneous and to have complex lithology; this is why few attenuation studies have been carried out successfully in carbonate rocks. The main objectives of this study are: (1) obtaining accurate, high-resolution attenuation profiles from several oil fields (resolution is a key target because many reservoirs in Abu Dhabi oil fields are tight); (2) separation between the different modes of wave attenuation (scattering and intrinsic attenuation); (3) correlation between the attenuation profiles and other logs (porosity, resistivity, oil saturation, …) in order to establish a relationship that can be used to detect reservoir properties from the attenuation profiles; (4) comparison of attenuation estimated from VSP and sonic waveforms; and (5) providing the spatial distribution of attenuation in Abu Dhabi oil fields. To reach these objectives we implemented a robust processing flow and a new methodology to estimate attenuation from the downgoing waves of compressional VSP data and waveforms acquired from several wells drilled in Abu Dhabi. The subsurface geology of this area is primarily composed of carbonate rocks and is known to be highly fractured, which complicates the situation further; we nevertheless successfully separated the intrinsic attenuation from the scattering. The results show that the scattering is significant and cannot be ignored. We also found a very interesting correlation between the attenuation profiles and the

  13. Mantle-Lid P Wave Attenuation in the Korean Peninsula

    NASA Astrophysics Data System (ADS)

    Lee, K.; Hong, T.

    2012-12-01

    The mantle-lid P wave, Pn, is the first arrival phase at regional distances. Pn waves are widely analyzed to estimate event sizes. Also, it is known that analysis of Pn waves is effective for discriminating nuclear explosions from natural earthquakes. The attenuation of Pn waves provides information on the medium properties of the mantle lid. It is crucial to understand the nature of Pn attenuation for correct estimation of event sizes from Pn amplitudes. We investigate the lateral variation of Pn attenuation in the mantle lid of the Korean Peninsula from vertical regional seismograms for events around the Korean Peninsula and the Japanese islands. The number of events is 149, and the focal depths are less than 50 km. Seismic records with signal-to-noise ratios greater than 1.5 are analyzed. The number of stations is 121. The Pn quality factors are calculated using a two-station method in which ratios of Pn displacement spectra of stations on the same azimuths are used. The power-law frequency dependence term is estimated using a least-squares fit of quality factors at frequencies from 0.37 Hz to 25 Hz. The number of station pairs is 3317. The average quality factor at 1 Hz is determined to be about 67, which is consistent with previous studies. We present the resultant Pn attenuation model and discuss its correlations with geological and geophysical properties of the medium.

  14. Accurate theoretical chemistry with coupled pair models.

    PubMed

    Neese, Frank; Hansen, Andreas; Wennmohs, Frank; Grimme, Stefan

    2009-05-19

    Quantum chemistry has found its way into the everyday work of many experimental chemists. Calculations can predict the outcome of chemical reactions, afford insight into reaction mechanisms, and be used to interpret structure and bonding in molecules. Thus, contemporary theory offers tremendous opportunities in experimental chemical research. However, even with present-day computers and algorithms, we cannot solve the many particle Schrödinger equation exactly; inevitably some error is introduced in approximating the solutions of this equation. Thus, the accuracy of quantum chemical calculations is of critical importance. The affordable accuracy depends on molecular size and particularly on the total number of atoms: for orientation, ethanol has 9 atoms, aspirin 21 atoms, morphine 40 atoms, sildenafil 63 atoms, paclitaxel 113 atoms, insulin nearly 800 atoms, and quaternary hemoglobin almost 12,000 atoms. Currently, molecules with up to approximately 10 atoms can be very accurately studied by coupled cluster (CC) theory, approximately 100 atoms with second-order Møller-Plesset perturbation theory (MP2), approximately 1000 atoms with density functional theory (DFT), and beyond that number with semiempirical quantum chemistry and force-field methods. The overwhelming majority of present-day calculations in the 100-atom range use DFT. Although these methods have been very successful in quantum chemistry, they do not offer a well-defined hierarchy of calculations that allows one to systematically converge to the correct answer. Recently a number of rather spectacular failures of DFT methods have been found, even for seemingly simple systems such as hydrocarbons, fueling renewed interest in wave function-based methods that incorporate the relevant physics of electron correlation in a more systematic way. Thus, it would be highly desirable to fill the gap between 10 and 100 atoms with highly correlated ab initio methods. We have found that one of the earliest (and now

  15. NATURAL ATTENUATION OF CHLORINATED SOLVENTS

    EPA Science Inventory

    The protocol will simply describe in detail, with references and illustrations, the approach currently used by staff of the SPRD to evaluate natural attenuation of chlorinated solvents in ground water. Staff of SPRD, and staff of the Air Force Center for environmental excellence...

  16. SEISMIC ATTENUATION FOR RESERVOIR CHARACTERIZATION

    SciTech Connect

    Joel Walls; M.T. Taner; Gary Mavko; Jack Dvorkin

    2002-07-01

    In fully-saturated rock and at ultrasonic frequencies, the microscopic squirt flow induced between the stiff and soft parts of the pore space by an elastic wave is responsible for velocity-frequency dispersion and attenuation. In the seismic frequency range, it is the macroscopic cross-flow between the stiffer and softer parts of the rock. We use the latter hypothesis to introduce simple approximate equations for velocity-frequency dispersion and attenuation in a fully water saturated reservoir. The equations are based on the assumption that in heterogeneous rock and at a very low frequency, the effective elastic modulus of the fully-saturated rock can be estimated by applying a fluid substitution procedure to the averaged (upscaled) dry frame whose effective porosity is the mean porosity and the effective elastic modulus is the Backus-average (geometric mean) of the individual dry-frame elastic moduli of parts of the rock. At a higher frequency, the effective elastic modulus of the saturated rock is the Backus-average of the individual fully-saturated-rock elastic moduli of parts of the rock. The difference between the effective elastic modulus calculated separately by these two methods determines the velocity-frequency dispersion. The corresponding attenuation is calculated from this dispersion by using (e.g.) the standard linear solid attenuation model.
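
    As an illustration of the kind of calculation described above, the following is a minimal Python sketch that estimates the low- and high-frequency limits of the saturated modulus for a two-part rock and converts the resulting dispersion into a peak attenuation using the standard-linear-solid relation 1/Q_max = (M_high - M_low) / (2*sqrt(M_high*M_low)). The Gassmann fluid-substitution step, the geometric-mean upscaling, and all numerical values are illustrative assumptions, not the report's own workflow.

        import numpy as np

        def gassmann_ksat(k_dry, k_min, k_fl, phi):
            # Gassmann fluid substitution: saturated bulk modulus from the dry frame.
            b = 1.0 - k_dry / k_min
            return k_dry + b ** 2 / (phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2)

        # Two-part toy rock (moduli in GPa, porosities as fractions; illustrative values only).
        k_dry = np.array([12.0, 4.0])    # dry-frame bulk moduli of the stiff and soft parts
        phi = np.array([0.15, 0.30])     # porosities of the two parts
        k_min, k_fl = 36.0, 2.25         # mineral and brine bulk moduli

        # Low-frequency limit: fluid-substitute the upscaled dry frame
        # (mean porosity, geometric-mean dry modulus, as the abstract describes).
        k_dry_up = np.exp(np.log(k_dry).mean())
        m_low = gassmann_ksat(k_dry_up, k_min, k_fl, phi.mean())

        # High-frequency limit: geometric mean of the individually saturated moduli.
        m_high = np.exp(np.log(gassmann_ksat(k_dry, k_min, k_fl, phi)).mean())

        # Peak attenuation of a standard linear solid bounded by the two limits.
        inv_q_max = (m_high - m_low) / (2.0 * np.sqrt(m_high * m_low))
        print(f"M_low = {m_low:.2f} GPa, M_high = {m_high:.2f} GPa, 1/Q_max = {inv_q_max:.4f}")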

  17. Three-Dimensional Seismic Attenuation Structure in the Ryukyu Arc, Japan

    NASA Astrophysics Data System (ADS)

    Komatsu, M.; Takenaka, H.

    2015-12-01

    Tomographic studies have been conducted to retrieve the 3D seismic attenuation structure around the Japan Arc since the 1980s. However, in the Ryukyu Arc, a 3D attenuation structure has never been estimated. It is important to estimate the 3D attenuation structure in this region, since there are highly active volcanoes and seismicity between the Okinawa Trough and the Ryukyu Trench. In this study, we estimate the 3D seismic attenuation structure in the Ryukyu Arc. We use seismic waveform data recorded by the seismic observation networks of NIED, JMA and Kagoshima University from 2004/06 to 2014/05, and select more than 4,500 seismic events. Since the Ryukyu Arc region is so wide, we separate it into three subregions: the Sakishima Islands, Okinawa Islands and Amami Islands subregions. Before calculating the attenuation quantity t*, the corner frequency of the source spectrum for each event is estimated using an omega-square model. The t* is then estimated from the amplitude decay rate of the source-corrected spectra. We invert the t* data for the attenuation structure with a 3D tomographic technique using the non-negative least squares method. The estimated attenuation structure has the following remarkable features: in the Sakishima Islands subregion, a high-attenuation zone exists beneath northern Ishigaki Island, corresponding to the Okinawa Trough; a high-attenuation zone also exists beneath Hateruma Island in the upper crust, corresponding to the accretionary prism formed by the subducting Philippine Sea Plate. In the Amami Islands subregion, a high-attenuation zone is located along the volcanic front. A low-attenuation zone spreads over the subducting Philippine Sea slab in all subregions. Acknowledgements: We used the JMA Unified Hypocenter Catalogs and seismic waveform data recorded by NIED, JMA and Kagoshima University. We also used a computer program by Zhao et al. (1992, JGR) for the tomographic analysis.
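
    The t* estimation step described above has a simple closed form once a source model is assumed: dividing the observed spectrum by an omega-square source spectrum and fitting ln(amplitude) against frequency gives t* = -slope/π. The Python sketch below is a hedged illustration of that idea; the corner frequency, band limits, and synthetic test values are assumptions, not values from the study.

        import numpy as np

        def estimate_t_star(freqs, amp_spectrum, f_corner, fmin=1.0, fmax=20.0):
            # Source-correct the spectrum with an omega-square model, then fit the decay.
            source = 1.0 / (1.0 + (freqs / f_corner) ** 2)
            corrected = amp_spectrum / source
            band = (freqs >= fmin) & (freqs <= fmax)
            slope, _ = np.polyfit(freqs[band], np.log(corrected[band]), 1)
            return -slope / np.pi      # ln A(f) = const - pi * f * t*

        # Synthetic check: build a spectrum with a known t* and recover it.
        f = np.linspace(0.5, 25.0, 200)
        t_star_true = 0.05
        spectrum = (1.0 / (1.0 + (f / 5.0) ** 2)) * np.exp(-np.pi * f * t_star_true)
        print(estimate_t_star(f, spectrum, f_corner=5.0))   # approximately 0.05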

  18. Estimation of Organ Activity using Four Different Methods of Background Correction in Conjugate View Method.

    PubMed

    Shanei, Ahmad; Afshin, Maryam; Moslehi, Masoud; Rastaghi, Sedighe

    2015-01-01

    To accurately estimate the uptake of radioactivity in an organ using the conjugate view method, corrections for physical factors such as background activity, scatter, and attenuation are needed. The aim of this study was to evaluate the accuracy of four different methods for background correction in activity quantification of the heart in myocardial perfusion scans. The organ activity was calculated using the conjugate view method. Twenty-two healthy volunteers were injected with 17-19 mCi of (99m)Tc-methoxy-isobutyl-isonitrile (MIBI) at rest or during exercise. Images were obtained by a dual-headed gamma camera. Four methods for background correction were applied: (1) Conventional correction (referred to as the Gates' method), (2) Buijs method, (3) BgdA subtraction, (4) BgdB subtraction. To evaluate the accuracy of these methods, the results of the calculations using the above-mentioned methods were compared with the reference results. The calculated uptake in the heart using the conventional method, Buijs method, BgdA subtraction, and BgdB subtraction methods was 1.4 ± 0.7% (P < 0.05), 2.6 ± 0.6% (P < 0.05), 1.3 ± 0.5% (P < 0.05), and 0.8 ± 0.3% (P < 0.05) of injected dose (I.D) at rest and 1.8 ± 0.6% (P > 0.05), 3.1 ± 0.8% (P > 0.05), 1.9 ± 0.8% (P < 0.05), and 1.2 ± 0.5% (P < 0.05) of I.D. during exercise. The mean estimated myocardial uptake of (99m)Tc-MIBI was dependent on the correction method used. Comparison among the four different methods of background activity correction applied in this study showed that the Buijs method was the most suitable method for background correction in myocardial perfusion scans. PMID:26955568
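
    For reference, the textbook conjugate-view calculation that all four background-correction variants feed into is a geometric mean of background-subtracted anterior and posterior counts, scaled by a transmission factor and a system calibration factor. The Python sketch below shows that calculation only; the background terms, attenuation coefficient, and numerical values are illustrative and do not reproduce any of the four correction methods compared in the study.

        import numpy as np

        def conjugate_view_activity(counts_ant, counts_post, bg_ant, bg_post,
                                    mu, thickness_cm, cal_factor):
            # counts_*  : anterior/posterior organ ROI counts
            # bg_*      : background counts scaled to the organ ROI size
            # mu        : effective linear attenuation coefficient (1/cm)
            # cal_factor: system sensitivity (counts per unit activity)
            ia = max(counts_ant - bg_ant, 0.0)
            ip = max(counts_post - bg_post, 0.0)
            geometric_mean = np.sqrt(ia * ip)
            transmission = np.exp(-mu * thickness_cm)    # whole-body transmission e^(-mu*T)
            return geometric_mean / (cal_factor * np.sqrt(transmission))

        # Illustrative numbers only.
        print(conjugate_view_activity(52000, 31000, 9000, 7000,
                                      mu=0.12, thickness_cm=22.0, cal_factor=90.0))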

  19. Spectral method for the correction of the Cerenkov light effect in plastic scintillation detectors: A comparison study of calibration procedures and validation in Cerenkov light-dominated situations

    SciTech Connect

    Guillot, Mathieu; Gingras, Luc; Archambault, Louis; Beddar, Sam; Beaulieu, Luc

    2011-04-15

    Purpose: The purposes of this work were: (1) To determine if a spectral method can accurately correct the Cerenkov light effect in plastic scintillation detectors (PSDs) for situations where the Cerenkov light is dominant over the scintillation light and (2) to develop a procedural guideline for accurately determining the calibration factors of PSDs. Methods: The authors demonstrate, by using the equations of the spectral method, that the condition for accurately correcting the effect of Cerenkov light is that the ratio of the two calibration factors must be equal to the ratio of the Cerenkov light measured within the two different spectral regions used for analysis. Based on this proof, the authors propose two new procedures to determine the calibration factors of PSDs, which were designed to respect this condition. A PSD that consists of a cylindrical polystyrene scintillating fiber (1.6 mm{sup 3}) coupled to a plastic optical fiber was calibrated by using these new procedures and the two reference procedures described in the literature. To validate the extracted calibration factors, relative dose profiles and output factors for a 6 MV photon beam from a medical linac were measured with the PSD and an ionization chamber. Emphasis was placed on situations where the Cerenkov light is dominant over the scintillation light and on situations dissimilar to the calibration conditions. Results: The authors found that the accuracy of the spectral method depends on the procedure used to determine the calibration factors of the PSD and on the attenuation properties of the optical fiber used. The results from the relative dose profile measurements showed that the spectral method can correct the Cerenkov light effect with an accuracy level of 1%. The results obtained also indicate that PSDs measure output factors that are lower than those measured with ionization chambers for square field sizes larger than 25x25 cm{sup 2}, in general agreement with previously published Monte
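
    The spectral method discussed above reduces, in its simplest form, to expressing the dose as a linear combination of the signals integrated in two spectral windows, D = a·Q1 + b·Q2, with the calibration factors obtained from two irradiations that have very different Cerenkov-to-scintillation ratios. The Python sketch below illustrates only that generic two-window formulation; the signal values are hypothetical, and the calibration procedures proposed in the paper involve additional conditions not shown here.

        import numpy as np

        def calibrate_two_windows(q1, q2, doses):
            # Solve dose_i = a*q1_i + b*q2_i for the calibration factors (a, b)
            # from two calibration irradiations with known doses.
            m = np.column_stack([q1, q2])
            a, b = np.linalg.solve(m, doses)
            return a, b

        def dose_from_signals(a, b, q1, q2):
            # Apply the calibration factors to a measurement.
            return a * q1 + b * q2

        # Hypothetical calibration data (arbitrary units): the two irradiations must have
        # very different Cerenkov contributions for the system to be well conditioned.
        a, b = calibrate_two_windows(q1=np.array([1.00, 0.35]),
                                     q2=np.array([0.40, 0.30]),
                                     doses=np.array([1.000, 0.250]))
        print(dose_from_signals(a, b, q1=0.70, q2=0.35))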

  20. Effects of 3D Velocity and Attenuation in the Tonga-Fiji Subduction Zone

    NASA Astrophysics Data System (ADS)

    Savage, B.; Wiens, D. A.; Tromp, J.

    2005-12-01

    The current understanding of a subduction zone's temperature and composition is limited. Much of our recent knowledge of subduction zones comes from earthquake locations, geochemical measurements, and lab based experiments. Recently, two studies of the Tonga-Fiji subduction zone have presented tomographic images of velocity and attenuation (Roth et al., 1999; Zhao et al., 1997). Roth et al. (2000) then combined these two tomographic models of the Tonga-Fiji subduction zone to derive an empirical relationship between changes in velocity and attenuation. This relationship agrees well with two independent, experimental data sets (Jackson et al., 1992; Sato et al., 1989). Using the tomographic velocity model and the empirical relationship between velocity and attenuation we create synthetic seismograms for the Tonga-Fiji subduction zone to test whether a simple increase in velocity accurately depicts this subduction zone. To construct the model we use the tomographic model of Zhao et al. (1997) to create a shear velocity model using a simple Vs/Vp ratio. Following Roth et al. (2000) these tomographic models are combined with the empirical relation between velocity and attenuation to create an attenuation model. The resulting synthetics are compared to recorded data to validate the tomographic velocity model and the empirical relation between velocity and attenuation. Any mismatch in this comparison will provide a basis for further refinement of the tomographic models and the velocity-attenuation relation. The synthetics are created using the SPECFEM3D global code (Komatitsch et al., 2002) with the new addition of a three-dimensional attenuation operator. Attenuation is simulated by a set of standard linear solids over the desired frequency range as described in Liu et al. (1976). Our initial results at a minimum period of 3.3 seconds suggest that the attenuation structure plays a minor role for the present source-receiver geometry. The addition of the 3D attenuation

  1. Stormwater Attenuation by Green Roofs

    NASA Astrophysics Data System (ADS)

    Sims, A.; O'Carroll, D. M.; Robinson, C. E.; Smart, C. C.

    2014-12-01

    Innovative municipal stormwater management technologies are urgently required in urban centers. Inadequate stormwater management can lead to excessive flooding, channel erosion, decreased stream baseflows, and degraded water quality. A major source of urban stormwater is unused roof space. Green roofs can be used as a stormwater management tool to reduce roof-generated stormwater and generally improve the quality of runoff. With recent legislation in some North American cities, including Toronto, requiring the installation of green roofs on large buildings, research on the effectiveness of green roofs for stormwater management is important. This study aims to assess the hydrologic response of an extensive sedum green roof in London, Ontario, with emphasis on the response to large precipitation events that stress municipal stormwater infrastructure. A green roof rapidly reaches field capacity during large storm events and can show significantly different behavior before and after field capacity. At field capacity a green roof has no capillary storage left for retention of stormwater, but may still be an effective tool to attenuate peak runoff rates by transport through the green roof substrate. The attenuation of green roofs after field capacity is linked to gravity storage, where gravity storage is the water that is temporarily stored and can drain freely over time after field capacity has been established. Stormwater attenuation of a modular experimental green roof is determined from water balance calculations at 1-minute intervals. These data are used to evaluate green roof attenuation and the impact of field capacity on peak flow rates and gravity storage. In addition, a numerical model is used to simulate event-based stormwater attenuation. This model is based on the Richards equation and supporting theory of multiphase flow through porous media.
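
    A minimal Python sketch of the event-scale water-balance bookkeeping described above is shown here: given 1-minute rainfall and runoff depth series, it reports volumetric retention, peak attenuation, and peak lag. The synthetic event and the metric definitions are illustrative assumptions, not the study's own model.

        import numpy as np

        def event_attenuation(rain_mm_per_min, runoff_mm_per_min):
            # Both inputs are 1-minute depth series (mm/min) over the same event window.
            retention = 1.0 - runoff_mm_per_min.sum() / rain_mm_per_min.sum()
            peak_attenuation = 1.0 - runoff_mm_per_min.max() / rain_mm_per_min.max()
            lag_min = int(runoff_mm_per_min.argmax() - rain_mm_per_min.argmax())
            return retention, peak_attenuation, lag_min

        # Synthetic 2-hour event at 1-minute resolution (illustrative shapes only).
        t = np.arange(120)
        rain = np.where(t < 60, 0.8 * np.exp(-((t - 30) / 12.0) ** 2), 0.0)
        runoff = 0.35 * np.roll(rain, 15) + 0.01     # delayed, damped response plus slow drainage
        print(event_attenuation(rain, runoff))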

  2. Mouse models of human AML accurately predict chemotherapy response

    PubMed Central

    Zuber, Johannes; Radtke, Ina; Pardee, Timothy S.; Zhao, Zhen; Rappaport, Amy R.; Luo, Weijun; McCurrach, Mila E.; Yang, Miao-Miao; Dolan, M. Eileen; Kogan, Scott C.; Downing, James R.; Lowe, Scott W.

    2009-01-01

    The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to conventional therapy that mirror clinical experience. Specifically, murine leukemias expressing the AML1/ETO fusion oncoprotein, associated with a favorable prognosis in patients, show a dramatic response to induction chemotherapy owing to robust activation of the p53 tumor suppressor network. Conversely, murine leukemias expressing MLL fusion proteins, associated with a dismal prognosis in patients, are drug-resistant due to an attenuated p53 response. Our studies highlight the importance of genetic information in guiding the treatment of human AML, functionally establish the p53 network as a central determinant of chemotherapy response in AML, and demonstrate that genetically engineered mouse models of human cancer can accurately predict therapy response in patients. PMID:19339691

  3. Mouse models of human AML accurately predict chemotherapy response.

    PubMed

    Zuber, Johannes; Radtke, Ina; Pardee, Timothy S; Zhao, Zhen; Rappaport, Amy R; Luo, Weijun; McCurrach, Mila E; Yang, Miao-Miao; Dolan, M Eileen; Kogan, Scott C; Downing, James R; Lowe, Scott W

    2009-04-01

    The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to conventional therapy that mirror clinical experience. Specifically, murine leukemias expressing the AML1/ETO fusion oncoprotein, associated with a favorable prognosis in patients, show a dramatic response to induction chemotherapy owing to robust activation of the p53 tumor suppressor network. Conversely, murine leukemias expressing MLL fusion proteins, associated with a dismal prognosis in patients, are drug-resistant due to an attenuated p53 response. Our studies highlight the importance of genetic information in guiding the treatment of human AML, functionally establish the p53 network as a central determinant of chemotherapy response in AML, and demonstrate that genetically engineered mouse models of human cancer can accurately predict therapy response in patients. PMID:19339691

  4. Corrective measures evaluation report for Tijeras Arroyo groundwater.

    SciTech Connect

    Witt, Johnathan L; Orr, Brennon R.; Dettmers, Dana L.; Hall, Kevin A.; Howard, M. Hope

    2005-08-01

    This Corrective Measures Evaluation report was prepared as directed by a Compliance Order on Consent issued by the New Mexico Environment Department to document the process of selecting the preferred remedial alternative for Tijeras Arroyo Groundwater. Supporting information includes background concerning the site conditions and potential receptors and an overview of work performed during the Corrective Measures Evaluation. The evaluation of remedial alternatives included identifying and describing four remedial alternatives, an overview of the evaluation criteria and approach, comparing remedial alternatives to the criteria, and selecting the preferred remedial alternative. As a result of the Corrective Measures Evaluation, monitored natural attenuation of the contaminants of concern (trichloroethene and nitrate) is the preferred remedial alternative for implementation as the corrective measure for Tijeras Arroyo Groundwater. Design criteria to meet cleanup goals and objectives and the corrective measures implementation schedule for the preferred remedial alternative are also presented.

  5. New analytical approach for neutron beam-hardening correction.

    PubMed

    Hachouf, N; Kharfi, F; Hachouf, M; Boucenna, A

    2016-01-01

    In neutron imaging, the beam-hardening effect has a significant impact on quantitative and qualitative image interpretation. This study aims to propose a linearization method for beam-hardening correction. The proposed method is based on a new analytical approach establishing the attenuation coefficient as a function of neutron energy. The spectrum energy shift due to beam hardening is studied on the basis of Monte Carlo N-Particle (MCNP) simulated data and the analytical data. Good agreement between MCNP and analytical values has been found. Indeed, the beam-hardening effect is well accounted for in the proposed method. A correction procedure is developed to correct the errors caused by the beam-hardening effect in neutron transmission, and therefore for projection data correction. The effectiveness of this procedure is demonstrated by its application to correcting reconstructed images. PMID:26609685
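
    A widely used alternative to an analytical energy model is empirical polynomial linearization: measure the beam-hardened attenuation through known thicknesses of a step wedge, fit a polynomial that maps it onto the ideal linear response, and apply that mapping to all projection data. The Python sketch below shows this generic approach, not the paper's analytical method; the wedge values and the target coefficient are illustrative.

        import numpy as np

        # Calibration: beam-hardened attenuation -ln(I/I0) through known step-wedge thicknesses.
        thickness_cm = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
        measured_att = np.array([0.0, 0.92, 1.70, 2.36, 2.93, 3.42])   # grows sub-linearly
        mu_ref = 0.95                                                   # target effective coefficient (1/cm)

        # Fit a polynomial mapping the hardened response onto the ideal linear one.
        coeffs = np.polyfit(measured_att, mu_ref * thickness_cm, deg=3)

        def linearize(projection):
            # Apply the linearization to projection data given as -ln(I/I0).
            return np.polyval(coeffs, projection)

        print(linearize(2.36))    # maps back to roughly 0.95 * 3.0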

  6. Statistics of rain-rate estimates for a single attenuating radar

    NASA Technical Reports Server (NTRS)

    Meneghini, R.

    1976-01-01

    The effects of fluctuations in return power and in the rain-rate/reflectivity relationship are included in the estimates, as well as errors introduced in the attempt to recover the unattenuated return power. In addition to the Hitschfeld-Bordan correction, two alternative techniques are considered. The performance of the radar is shown to be dependent on the method by which attenuation correction is made.
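
    For context, the Hitschfeld-Bordan correction mentioned above has a standard closed form when the specific attenuation follows a power law k = a·Z^b: the true reflectivity is the measured one divided by (1 - 0.46·b·∫ a·Zm^b dr)^(1/b). The Python sketch below implements that textbook form on a single ray; the coefficients, gate spacing, and the simple clamp used to avoid the method's well-known instability are all illustrative assumptions.

        import numpy as np

        def hitschfeld_bordan(z_measured_dbz, dr_km, a=3.0e-4, b=0.78):
            # z_measured_dbz: measured (attenuated) reflectivity per range gate, in dBZ
            # a, b          : power-law coefficients of k = a * Z**b (k in dB/km, Z in mm^6/m^3)
            zm = 10.0 ** (z_measured_dbz / 10.0)                    # dBZ -> linear Z
            q = 0.2 * np.log(10.0) * b * a * np.cumsum(zm ** b) * dr_km
            q = np.minimum(q, 0.99)                                 # crude guard against blow-up
            z_true = zm / (1.0 - q) ** (1.0 / b)
            return 10.0 * np.log10(z_true)                          # back to dBZ

        # Illustrative profile: 40 gates of 35 dBZ rain at 0.25 km spacing.
        profile = np.full(40, 35.0)
        print(hitschfeld_bordan(profile, dr_km=0.25)[-5:])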

  7. Online image corrections applied to a dedicated breast PET

    NASA Astrophysics Data System (ADS)

    Moliner, L.; González, A. J.; Correcher, C.; Benlloch, J. M.

    2016-03-01

    In this work, we present the online implementation of attenuation, scatter and random corrections using the LMEM algorithm for the dedicated breast PET named MAMMI. The attenuation correction is based on image segmentation, the random correction is derived from the rate estimation of single photon events and the scatter correction is determined by the dual energy window method. These three corrections are estimated and implemented in the reconstruction process with almost no increase in the reconstruction time. The image quality is evaluated in terms of image uniformity and contrast using the reconstructed images of two custom-designed phantoms. When we apply the three corrections, the measured uniformity in the whole field of view is (10 ± 1)% compared to (17 ± 1)% without corrections. The adapted recovery contrast coefficients (normalized to 1) are approximately (0.80 ± 0.02) in hot areas, improving on the value of (0.66 ± 0.07) obtained without corrections. The reconstruction processing time is also studied, finding an increment of around 7% when the three corrections are simultaneously included. Finally, 25 breast image datasets are also analyzed. The average acquisition time per patient is around 1200 seconds and the reconstruction times with corrections vary from 100 to 400 seconds using (1 × 1 × 1) mm3 voxel size and from 300 to 1800 seconds using (0.5 × 0.5 × 0.5) mm3 voxel size. These reconstructions are performed with a virtual pixel size of (1.6 × 1.6) mm2 and twelve iterations.
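
    Two of the corrections named above have simple closed forms that are easy to illustrate: randoms estimated from singles rates, R = 2·τ·S1·S2, and the dual-energy-window scatter estimate for the photopeak window. The Python sketch below shows these generic formulas with hypothetical numbers; the window ratios would have to be calibrated for the actual scanner and are not taken from the paper.

        def randoms_from_singles(singles_1, singles_2, coincidence_window_s):
            # Random-coincidence rate for a detector pair from its singles rates.
            return 2.0 * coincidence_window_s * singles_1 * singles_2

        def dew_scatter_in_photopeak(counts_lower, counts_peak, r_scatter, r_unscatter):
            # Dual-energy-window estimate of the scatter counts inside the photopeak window.
            # r_scatter / r_unscatter: lower-to-peak window ratios for scattered and
            # unscattered events, both obtained from calibration measurements.
            return (counts_lower - r_unscatter * counts_peak) / (r_scatter - r_unscatter)

        # Illustrative numbers only: singles in counts/s, 4.5 ns coincidence window.
        print(randoms_from_singles(2.0e5, 1.8e5, 4.5e-9))
        print(dew_scatter_in_photopeak(counts_lower=1200.0, counts_peak=3000.0,
                                       r_scatter=1.3, r_unscatter=0.12))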

  8. Ferrite attenuator modulation improves antenna performance

    NASA Technical Reports Server (NTRS)

    Hooks, J. C.; Larson, S. G.; Shorkley, F. H.; Williams, B. T.

    1970-01-01

    Ferrite attenuator inserted into appropriate waveguide reduces the gain of the antenna element which is causing interference. Modulating the ferrite attenuator to change the antenna gain at the receive frequency permits ground tracking until the antenna is no longer needed.

  9. Modification of Kirchhoff migration with variable sound speed and attenuation for acoustic imaging of media and application to tomographic imaging of the breast

    PubMed Central

    Schmidt, Steven; Duric, Nebojsa; Li, Cuiping; Roy, Olivier; Huang, Zhi-Feng

    2011-01-01

    Purpose: To explore the feasibility of improving cross-sectional reflection imaging of the breast using refractive and attenuation corrections derived from ultrasound tomography data. Methods: The authors have adapted the planar Kirchhoff migration method, commonly used in geophysics to reconstruct reflection images, for use in ultrasound tomography imaging of the breast. Furthermore, the authors extended this method to allow for refractive and attenuative corrections. Using clinical data obtained with a breast imaging prototype, the authors applied this method to generate cross-sectional reflection images of the breast that were corrected using known distributions of sound speed and attenuation obtained from the same data. Results: A comparison of images reconstructed with and without the corrections showed varying degrees of improvement. The sound speed correction resulted in sharpening of detail, while the attenuation correction reduced the central darkening caused by path length dependent losses. The improvements appeared to be greatest when dense tissue was involved and the least for fatty tissue. These results are consistent with the expectation that denser tissues lead to both greater refractive effects and greater attenuation. Conclusions: Although conventional ultrasound techniques use time-gain control to correct for attenuation gradients, these corrections lead to artifacts because the true attenuation distribution is not known. The use of constant sound speed leads to additional artifacts that arise from not knowing the sound speed distribution. The authors show that in the context of ultrasound tomography, it is possible to construct reflection images of the breast that correct for inhomogeneous distributions of both sound speed and attenuation. PMID:21452737

  10. Quantitative radiography enabled by slot collimation and a novel scatter correction technique on a large-area flat panel x-ray detector

    NASA Astrophysics Data System (ADS)

    Yue, Meghan L.; Boden, Adam E.; Sabol, John M.

    2009-02-01

    In addition to causing loss of contrast and blurring in an image, scatter also makes quantitative measurements of x-ray attenuation impossible. Many devices, methods, and models have been developed to eliminate, estimate, and correct for the effects of scatter. Although these techniques can reduce the impact of scatter in a large-area image, no methods have proven to be practical and sufficient to enable quantitative analysis of image data in a routine clinical setting. This paper describes a method of scatter correction which uses moderate x-ray collimation in combination with a correction algorithm operating on data obtained from large-area flat panel detector images. The method involves acquiring slot-collimated images of the object, and utilizing information from outside of the collimated region, in addition to a priori data, to estimate the scatter within the collimated region. This method requires no increase in dose to the patient while providing high image quality and accurate estimates of the primary x-ray data. This scatter correction technique was validated through beam-stop experiments and comparison of theoretically calculated and measured contrast of thin aluminum and polymethylmethacrylate objects. Measurements taken with various background material thicknesses, both with and without a grid, showed that the slot-scatter corrected contrast and the theoretical contrast were not significantly different given a 99% confidence interval. However, the uncorrected contrast was found to be significantly different from the corrected and theoretical contrasts. These findings indicate that this method of scatter correction can eliminate the effect of scatter on contrast and potentially enable quantitative x-ray imaging.
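
    One way to picture the idea of using signal from outside the collimated region is sketched below in Python: pixels just outside the slot see (nearly) only scatter, so interpolating them across the slot gives a scatter estimate that can be subtracted from the in-slot signal. This is a conceptual illustration only; the paper's algorithm also uses a priori data and is not reproduced here.

        import numpy as np

        def estimate_scatter_in_slot(profile, slot_mask, margin_px=20):
            # profile  : 1-D detector signal across the slot direction
            # slot_mask: True inside the collimated region (primary + scatter)
            x = np.arange(profile.size)
            edges = np.flatnonzero(slot_mask)
            near = (x > edges[0] - margin_px) & (x < edges[-1] + margin_px)
            sample = (~slot_mask) & near                 # shadowed pixels near the slot edges
            scatter = np.interp(x, x[sample], profile[sample])
            primary = np.where(slot_mask, profile - scatter, 0.0)
            return scatter, primary

        # Hypothetical profile: flat scatter plus primary signal confined to the slot.
        x = np.arange(200)
        mask = (x > 80) & (x < 120)
        signal = 50.0 + np.where(mask, 200.0, 0.0)
        scatter_est, primary_est = estimate_scatter_in_slot(signal, mask)
        print(scatter_est[100], primary_est[100])        # roughly 50 and 200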

  11. Correct sizing decisions key to success.

    PubMed

    Atkinson, Mark

    2015-04-01

    According to specialist in optimised resource management, Veolia, combined heat and power (CHP) 'has been proven for its effectiveness in reducing carbon emissions, thanks to the efficient way that the technology simultaneously derives power and heat from the combustion process'. However, as Mark Atkinson, the company's operations manager, explains, it is only through specifying and designing the plant accurately that the correct load, and therefore the desired savings, can be realized. PMID:26281426

  12. Global orbit corrections

    SciTech Connect

    Symon, K.

    1987-11-01

    There are various reasons for preferring local (e.g., three bump) orbit correction methods to global corrections. One is the difficulty of solving the mN equations for the required mN correcting bumps, where N is the number of superperiods and m is the number of bumps per superperiod. The latter is not a valid reason for avoiding global corrections, since we can take advantage of the superperiod symmetry to reduce the mN simultaneous equations to N separate problems, each involving only m simultaneous equations. Previously, I have shown how to solve the general problem when the machine contains unknown magnet errors of known probability distribution; we make measurements of known precision of the orbit displacements at a set of points, and we wish to apply correcting bumps to minimize the weighted rms orbit deviations. In this report, we will consider two simpler problems, using similar methods. We consider the case when we make M beam position measurements per superperiod, and we wish to apply an equal number M of orbit correcting bumps to reduce the measured position errors to zero. We also consider the problem when the number of correcting bumps is less than the number of measurements, and we wish to minimize the weighted rms position errors. We will see that the latter problem involves solving equations of a different form, but involving the same matrices as the former problem.
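
    The two problems described above are both instances of a small linear-algebra task: given an orbit response matrix, solve for corrector kicks that cancel the measured displacements exactly (square case) or minimize the weighted rms error (rectangular case). The Python sketch below shows that generic formulation; the response matrix and measurements are random placeholders, and the report's superperiod decomposition is not implemented here.

        import numpy as np

        def corrector_strengths(response, orbit_error, weights=None):
            # response   : (n_monitors x n_correctors) orbit response matrix
            # orbit_error: measured orbit displacements at the monitors
            # weights    : optional per-monitor weights (e.g. inverse measurement variance)
            if weights is not None:
                w = np.sqrt(weights)
                response = response * w[:, None]
                orbit_error = orbit_error * w
            kicks, *_ = np.linalg.lstsq(response, -orbit_error, rcond=None)
            return kicks

        # Toy example: 6 monitors, 4 correcting bumps (an equal number would be solved exactly).
        rng = np.random.default_rng(0)
        R = rng.normal(size=(6, 4))
        x_meas = rng.normal(size=6)
        theta = corrector_strengths(R, x_meas, weights=np.ones(6))
        print(np.linalg.norm(x_meas + R @ theta))   # residual rms orbit error after correction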

  13. Accelerated acquisition of tagged MRI for cardiac motion correction in simultaneous PET-MR: Phantom and patient studies

    PubMed Central

    Huang, Chuan; Petibon, Yoann; Ouyang, Jinsong; Reese, Timothy G.; Ahlman, Mark A.; Bluemke, David A.; El Fakhri, Georges

    2015-01-01

    Purpose: Degradation of image quality caused by cardiac and respiratory motions hampers the diagnostic quality of cardiac PET. It has been shown that improved diagnostic accuracy of myocardial defect can be achieved by tagged MR (tMR) based PET motion correction using simultaneous PET-MR. However, one major hurdle for the adoption of tMR-based PET motion correction in the PET-MR routine is the long acquisition time needed for the collection of fully sampled tMR data. In this work, the authors propose an accelerated tMR acquisition strategy using parallel imaging and/or compressed sensing and assess the impact on the tMR-based motion corrected PET using phantom and patient data. Methods: Fully sampled tMR data were acquired simultaneously with PET list-mode data on two simultaneous PET-MR scanners for a cardiac phantom and a patient. Parallel imaging and compressed sensing were retrospectively performed by GRAPPA and kt-FOCUSS algorithms with various acceleration factors. Motion fields were estimated using nonrigid B-spline image registration from both the accelerated and fully sampled tMR images. The motion fields were incorporated into a motion corrected ordered subset expectation maximization reconstruction algorithm with motion-dependent attenuation correction. Results: Although tMR acceleration introduced image artifacts into the tMR images for both phantom and patient data, motion corrected PET images yielded similar image quality as those obtained using the fully sampled tMR images for low to moderate acceleration factors (<4). Quantitative analysis of myocardial defect contrast over ten independent noise realizations showed similar results. It was further observed that although the image quality of the motion corrected PET images deteriorates for high acceleration factors, the images were still superior to the images reconstructed without motion correction. Conclusions: Accelerated tMR images obtained with more than 4 times acceleration can still provide

  14. Accelerated acquisition of tagged MRI for cardiac motion correction in simultaneous PET-MR: Phantom and patient studies

    SciTech Connect

    Huang, Chuan; Petibon, Yoann; Ouyang, Jinsong; El Fakhri, Georges; Reese, Timothy G.; Ahlman, Mark A.; Bluemke, David A.

    2015-02-15

    Purpose: Degradation of image quality caused by cardiac and respiratory motions hampers the diagnostic quality of cardiac PET. It has been shown that improved diagnostic accuracy of myocardial defect can be achieved by tagged MR (tMR) based PET motion correction using simultaneous PET-MR. However, one major hurdle for the adoption of tMR-based PET motion correction in the PET-MR routine is the long acquisition time needed for the collection of fully sampled tMR data. In this work, the authors propose an accelerated tMR acquisition strategy using parallel imaging and/or compressed sensing and assess the impact on the tMR-based motion corrected PET using phantom and patient data. Methods: Fully sampled tMR data were acquired simultaneously with PET list-mode data on two simultaneous PET-MR scanners for a cardiac phantom and a patient. Parallel imaging and compressed sensing were retrospectively performed by GRAPPA and kt-FOCUSS algorithms with various acceleration factors. Motion fields were estimated using nonrigid B-spline image registration from both the accelerated and fully sampled tMR images. The motion fields were incorporated into a motion corrected ordered subset expectation maximization reconstruction algorithm with motion-dependent attenuation correction. Results: Although tMR acceleration introduced image artifacts into the tMR images for both phantom and patient data, motion corrected PET images yielded similar image quality as those obtained using the fully sampled tMR images for low to moderate acceleration factors (<4). Quantitative analysis of myocardial defect contrast over ten independent noise realizations showed similar results. It was further observed that although the image quality of the motion corrected PET images deteriorates for high acceleration factors, the images were still superior to the images reconstructed without motion correction. Conclusions: Accelerated tMR images obtained with more than 4 times acceleration can still provide

  15. ENHANCEMENTS TO NATURAL ATTENUATION: SELECTED CASE STUDIES

    SciTech Connect

    Vangelas, K; W. H. Albright, W; E. S. Becvar, E; C. H. Benson, C; T. O. Early, T; E. Hood, E; P. M. Jardine, P; M. Lorah, M; E. Majche, E; D. Major, D; W. J. Waugh, W; G. Wein, G; O. R. West, O

    2007-05-15

    In 2003 the US Department of Energy (DOE) embarked on a project to explore an innovative approach to remediation of subsurface contaminant plumes that focused on introducing mechanisms for augmenting natural attenuation to achieve site closure. Termed enhanced attenuation (EA), this approach has drawn its inspiration from the concept of monitored natural attenuation (MNA).

  16. Acetaminophen Attenuates Lipid Peroxidation in Children Undergoing Cardiopulmonary Bypass

    PubMed Central

    Simpson, Scott A.; Zaccagni, Hayden; Bichell, David P.; Christian, Karla G.; Mettler, Bret A.; Donahue, Brian S.; Roberts, L. Jackson; Pretorius, Mias

    2014-01-01

    Objective Hemolysis, occurring during cardiopulmonary bypass (CPB), is associated with lipid peroxidation and postoperative acute kidney injury (AKI). Acetaminophen (ApAP) inhibits lipid peroxidation catalyzed by hemeproteins and in an animal model attenuated rhabdomyolysis-induced AKI. This pilot study tests the hypothesis that ApAP attenuates lipid peroxidation in children undergoing CPB. Design Single-center prospective randomized double-blinded study. Setting University-affiliated pediatric hospital. Patients Thirty children undergoing elective surgical correction of a congenital heart defect. Interventions Patients were randomized to ApAP (OFIRMEV® (acetaminophen) injection, Cadence Pharmaceuticals, San Diego, CA) or placebo every 6 hours for 4 doses starting before the onset of CPB. Measurement and Main Results Markers of hemolysis, lipid peroxidation (isofurans and F2-isoprostanes) and AKI were measured throughout the perioperative period. CPB was associated with a significant increase in free hemoglobin (from a pre-bypass level of 9.8±6.2 mg/dl to a peak of 201.5±42.6 mg/dl post-bypass). Plasma and urine isofuran and F2-isoprostane concentrations increased significantly during surgery. The magnitude of increase in plasma isofurans was greater than the magnitude of increase in plasma F2-isoprostanes. ApAP attenuated the increase in plasma isofurans compared to placebo (P=0.02 for effect of study drug). There was no significant effect of ApAP on plasma F2-isoprostanes or urinary markers of lipid peroxidation. ApAP did not affect postoperative creatinine, urinary neutrophil gelatinase-associated lipocalin or prevalence of AKI. Conclusion CPB in children is associated with hemolysis and lipid peroxidation. ApAP attenuated the increase in plasma isofuran concentrations. Future studies are needed to establish whether other therapies that attenuate or prevent the effects of free hemoglobin result in more effective inhibition of lipid peroxidation in patients

  17. SEISMIC ATTENUATION FOR RESERVOIR CHARACTERIZATION

    SciTech Connect

    Joel Walls; M.T. Taner; Gary Mavko; Jack Dvorkin

    2002-01-01

    In Section 1 of this first report we will describe the work we are doing to collect and analyze rock physics data for the purpose of modeling seismic attenuation from other measurable quantities such as porosity, water saturation, clay content and net stress. This work, and other empirical methods to be presented later, will form the basis for ''Q pseudo-well modeling'', which is a key part of this project. In Section 2 of this report, we will show the fundamentals of a new method to extract Q, dispersion, and attenuation from field seismic data. The method is called Gabor-Morlet time-frequency decomposition. This technique has a number of advantages including greater stability and better time resolution than spectral ratio methods.

  18. Chlorine signal attenuation in concrete.

    PubMed

    Naqvi, A A; Maslehuddin, M; ur-Rehman, Khateeb; Al-Amoudi, O S B

    2015-11-01

    The intensity of prompt gamma-rays was measured at various depths in chlorine-contaminated silica fume (SF) concrete slab specimens using a portable neutron generator-based prompt gamma-ray setup. The intensity of 6.11 MeV chlorine gamma-rays was measured from the chloride-contaminated slab at distances of 15.25, 20.25, 25.25, 30.25 and 35.25 cm from the neutron target in SF cement concrete slab specimens. Due to attenuation of the thermal neutron flux and the emitted gamma-ray intensity in SF cement concrete at various depths, the measured intensity of chlorine gamma-rays decreases non-linearly with increasing depth in the concrete. Good agreement was noted between the experimental results and the results of Monte Carlo simulation. This study has provided useful experimental data for evaluating chloride contamination in SF concrete using the gamma-ray attenuation method. PMID:26218450

  19. Non-rigid dual respiratory and cardiac motion correction methods after, during, and before image reconstruction for 4D cardiac PET

    NASA Astrophysics Data System (ADS)

    Feng, Tao; Wang, Jizhe; Fung, George; Tsui, Benjamin

    2016-01-01

    Respiratory motion (RM) and cardiac motion (CM) degrade the quality and resolution of cardiac PET scans. We have developed non-rigid motion estimation methods to estimate both RM and CM based on 4D cardiac gated PET data alone, and compensate for the dual respiratory and cardiac (R&C) motions after (MCAR), during (MCDR), and before (MCBR) image reconstruction. In all three R&C motion correction methods, the attenuation-activity mismatch effect was modeled by transforming the attenuation maps with the estimated RM. The difference between using activity-preserving and non-activity-preserving models in R&C correction was also studied. Realistic Monte Carlo simulated 4D cardiac PET data using the 4D XCAT phantom and accurate models of the scanner design parameters and performance characteristics at different noise levels were employed as the known truth and for method development and evaluation. Results from the simulation study suggested that all three dual R&C motion correction methods provide substantial improvement in the quality of 4D cardiac gated PET images as compared with no motion correction. Specifically, the MCDR method yields the best performance for all noise levels compared with the MCAR and MCBR methods. While MCBR reduces computational time dramatically, the resultant 4D cardiac gated PET images have overall inferior image quality when compared to those from the MCAR and MCDR approaches in the ‘almost’ noise-free case. Also, the MCBR method has better noise handling properties when compared with MCAR and provides better quantitative results in high noise cases. When the goal is to reduce scan time or patient radiation dose, MCDR and MCBR provide a good compromise between image quality and computational times.

  20. Accurate transition rates for intercombination lines of singly ionized nitrogen

    SciTech Connect

    Tayal, S. S.

    2011-01-15

    The transition energies and rates for the 2s{sup 2}2p{sup 2} {sup 3}P{sub 1,2}-2s2p{sup 3} {sup 5}S{sub 2}{sup o} and 2s{sup 2}2p3s-2s{sup 2}2p3p intercombination transitions have been calculated using term-dependent nonorthogonal orbitals in the multiconfiguration Hartree-Fock approach. Several sets of spectroscopic and correlation nonorthogonal functions have been chosen to describe adequately term dependence of wave functions and various correlation corrections. Special attention has been focused on the accurate representation of strong interactions between the 2s2p{sup 3} {sup 1,3}P{sub 1}{sup o} and 2s{sup 2}2p3s {sup 1,3}P{sub 1}{sup o}levels. The relativistic corrections are included through the one-body mass correction, Darwin, and spin-orbit operators and two-body spin-other-orbit and spin-spin operators in the Breit-Pauli Hamiltonian. The importance of core-valence correlation effects has been examined. The accuracy of present transition rates is evaluated by the agreement between the length and velocity formulations combined with the agreement between the calculated and measured transition energies. The present results for transition probabilities, branching fraction, and lifetimes have been compared with previous calculations and experiments.

  1. Accurate ab initio vibrational energies of methyl chloride

    SciTech Connect

    Owens, Alec; Yurchenko, Sergei N.; Yachmenev, Andrey; Tennyson, Jonathan; Thiel, Walter

    2015-06-28

    Two new nine-dimensional potential energy surfaces (PESs) have been generated using high-level ab initio theory for the two main isotopologues of methyl chloride, CH{sub 3}{sup 35}Cl and CH{sub 3}{sup 37}Cl. The respective PESs, CBS-35{sup  HL}, and CBS-37{sup  HL}, are based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set (CBS) limit, and incorporate a range of higher-level (HL) additive energy corrections to account for core-valence electron correlation, higher-order coupled cluster terms, scalar relativistic effects, and diagonal Born-Oppenheimer corrections. Variational calculations of the vibrational energy levels were performed using the computer program TROVE, whose functionality has been extended to handle molecules of the form XY {sub 3}Z. Fully converged energies were obtained by means of a complete vibrational basis set extrapolation. The CBS-35{sup  HL} and CBS-37{sup  HL} PESs reproduce the fundamental term values with root-mean-square errors of 0.75 and 1.00 cm{sup −1}, respectively. An analysis of the combined effect of the HL corrections and CBS extrapolation on the vibrational wavenumbers indicates that both are needed to compute accurate theoretical results for methyl chloride. We believe that it would be extremely challenging to go beyond the accuracy currently achieved for CH{sub 3}Cl without empirical refinement of the respective PESs.

  2. Accurate ab initio vibrational energies of methyl chloride.

    PubMed

    Owens, Alec; Yurchenko, Sergei N; Yachmenev, Andrey; Tennyson, Jonathan; Thiel, Walter

    2015-06-28

    Two new nine-dimensional potential energy surfaces (PESs) have been generated using high-level ab initio theory for the two main isotopologues of methyl chloride, CH3 (35)Cl and CH3 (37)Cl. The respective PESs, CBS-35( HL), and CBS-37( HL), are based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set (CBS) limit, and incorporate a range of higher-level (HL) additive energy corrections to account for core-valence electron correlation, higher-order coupled cluster terms, scalar relativistic effects, and diagonal Born-Oppenheimer corrections. Variational calculations of the vibrational energy levels were performed using the computer program TROVE, whose functionality has been extended to handle molecules of the form XY 3Z. Fully converged energies were obtained by means of a complete vibrational basis set extrapolation. The CBS-35( HL) and CBS-37( HL) PESs reproduce the fundamental term values with root-mean-square errors of 0.75 and 1.00 cm(-1), respectively. An analysis of the combined effect of the HL corrections and CBS extrapolation on the vibrational wavenumbers indicates that both are needed to compute accurate theoretical results for methyl chloride. We believe that it would be extremely challenging to go beyond the accuracy currently achieved for CH3Cl without empirical refinement of the respective PESs. PMID:26133427

  3. Accurate ab initio vibrational energies of methyl chloride

    NASA Astrophysics Data System (ADS)

    Owens, Alec; Yurchenko, Sergei N.; Yachmenev, Andrey; Tennyson, Jonathan; Thiel, Walter

    2015-06-01

    Two new nine-dimensional potential energy surfaces (PESs) have been generated using high-level ab initio theory for the two main isotopologues of methyl chloride, CH335Cl and CH337Cl. The respective PESs, CBS-35 HL, and CBS-37 HL, are based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set (CBS) limit, and incorporate a range of higher-level (HL) additive energy corrections to account for core-valence electron correlation, higher-order coupled cluster terms, scalar relativistic effects, and diagonal Born-Oppenheimer corrections. Variational calculations of the vibrational energy levels were performed using the computer program TROVE, whose functionality has been extended to handle molecules of the form XY 3Z. Fully converged energies were obtained by means of a complete vibrational basis set extrapolation. The CBS-35 HL and CBS-37 HL PESs reproduce the fundamental term values with root-mean-square errors of 0.75 and 1.00 cm-1, respectively. An analysis of the combined effect of the HL corrections and CBS extrapolation on the vibrational wavenumbers indicates that both are needed to compute accurate theoretical results for methyl chloride. We believe that it would be extremely challenging to go beyond the accuracy currently achieved for CH3Cl without empirical refinement of the respective PESs.

  4. Accurate thermoelastic tensor and acoustic velocities of NaCl

    NASA Astrophysics Data System (ADS)

    Marcondes, Michel L.; Shukla, Gaurav; da Silveira, Pedro; Wentzcovitch, Renata M.

    2015-12-01

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  5. Accurate forced-choice recognition without awareness of memory retrieval.

    PubMed

    Voss, Joel L; Baym, Carol L; Paller, Ken A

    2008-06-01

    Recognition confidence and the explicit awareness of memory retrieval commonly accompany accurate responding in recognition tests. Memory performance in recognition tests is widely assumed to measure explicit memory, but the generality of this assumption is questionable. Indeed, whether recognition in nonhumans is always supported by explicit memory is highly controversial. Here we identified circumstances wherein highly accurate recognition was unaccompanied by hallmark features of explicit memory. When memory for kaleidoscopes was tested using a two-alternative forced-choice recognition test with similar foils, recognition was enhanced by an attentional manipulation at encoding known to degrade explicit memory. Moreover, explicit recognition was most accurate when the awareness of retrieval was absent. These dissociations between accuracy and phenomenological features of explicit memory are consistent with the notion that correct responding resulted from experience-dependent enhancements of perceptual fluency with specific stimuli, the putative mechanism for perceptual priming effects in implicit memory tests. This mechanism may contribute to recognition performance in a variety of frequently employed testing circumstances. Our results thus argue for a novel view of recognition, in that analyses of its neurocognitive foundations must take into account the potential for both (1) recognition mechanisms allied with implicit memory and (2) recognition mechanisms allied with explicit memory. PMID:18519546

  6. Can clinicians accurately assess esophageal dilation without fluoroscopy?

    PubMed

    Bailey, A D; Goldner, F

    1990-01-01

    This study questioned whether clinicians could determine the success of esophageal dilation accurately without the aid of fluoroscopy. Twenty patients were enrolled with the diagnosis of distal esophageal stenosis, including benign peptic stricture (17), Schatzki's ring (2), and squamous cell carcinoma of the esophagus (1). Dilation attempts using only Maloney dilators were monitored fluoroscopically by the principal investigator, the physician and patient being unaware of the findings. Physicians then predicted whether or not their dilations were successful, and they examined various features to determine their usefulness in predicting successful dilation. They were able to predict successful dilation accurately in 97% of the cases studied; however, their predictions of unsuccessful dilation were correct only 60% of the time. Features helpful in predicting passage included easy passage of the dilator (98%) and the patient feeling the dilator in the stomach (95%). Excessive resistance suggesting unsuccessful passage was an unreliable feature and was often due to the dilator curling in the stomach. When Maloney dilators are used to dilate simple distal strictures, if the physician predicts successful passage, he is reliably accurate without the use of fluoroscopy; however, if unsuccessful passage is suspected, fluoroscopy must be used for confirmation. PMID:2210278

  7. Accurate thermoelastic tensor and acoustic velocities of NaCl

    SciTech Connect

    Marcondes, Michel L.; Shukla, Gaurav; Silveira, Pedro da; Wentzcovitch, Renata M.

    2015-12-15

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  8. Measuring work stress among correctional staff: a Rasch measurement approach.

    PubMed

    Higgins, George E; Tewksbury, Richard; Denney, Andrew

    2012-01-01

    Today, the amount of stress that correctional staff endure at work is an important issue. Research has addressed this issue, but has yielded no consensus as to a properly calibrated measure of perceptions of work stress for correctional staff. Using data from a non-random sample of correctional staff (n = 228), the Rasch model was used to assess whether a specific measure of work stress would fit the model. Results show that three items, rather than six, accurately represented correctional staff perceptions of work stress. PMID:23270982

  9. Method of absorbance correction in a spectroscopic heating value sensor

    SciTech Connect

    Saveliev, Alexei; Jangale, Vilas Vyankatrao; Zelepouga, Sergeui; Pratapas, John

    2013-09-17

    A method and apparatus for absorbance correction in a spectroscopic heating value sensor in which a reference light intensity measurement is made on a non-absorbing reference fluid, a light intensity measurement is made on a sample fluid, and a measured light absorbance of the sample fluid is determined. A corrective light intensity measurement at a non-absorbing wavelength of the sample fluid is made on the sample fluid from which an absorbance correction factor is determined. The absorbance correction factor is then applied to the measured light absorbance of the sample fluid to arrive at a true or accurate absorbance for the sample fluid.
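
    As an illustration of the correction scheme described above, a minimal Python sketch is given below. It assumes the correction is applied as a subtractive baseline derived from the non-absorbing wavelength; the exact form of the patented correction factor may differ, and all function and variable names here are hypothetical.

      import numpy as np

      def corrected_absorbance(i_ref, i_sample, i_ref_na, i_sample_na):
          # Apparent absorbance of the sample fluid at the absorbing wavelength,
          # referenced against the non-absorbing reference fluid.
          a_measured = -np.log10(i_sample / i_ref)
          # Absorbance at a wavelength where the sample does not absorb, treated
          # here (an assumption) as a scattering/turbidity baseline.
          a_offset = -np.log10(i_sample_na / i_ref_na)
          # Assumed corrective step: subtract the baseline to estimate the true absorbance.
          return a_measured - a_offset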

  10. Attenuation of the Squared Canonical Correlation Coefficient under Varying Estimates of Score Reliability

    ERIC Educational Resources Information Center

    Wilson, Celia M.

    2010-01-01

    Research pertaining to the distortion of the squared canonical correlation coefficient has traditionally been limited to the effects of sampling error and associated correction formulas. The purpose of this study was to compare the degree of attenuation of the squared canonical correlation coefficient under varying conditions of score reliability.…

  11. Mobile image based color correction using deblurring

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Xu, Chang; Boushey, Carol; Zhu, Fengqing; Delp, Edward J.

    2015-03-01

    Dietary intake, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs for prevention of many chronic diseases such as obesity and cancer. The goal of the Technology Assisted Dietary Assessment (TADA) System, developed at Purdue University, is to automatically identify and quantify foods and beverages consumed by utilizing food images acquired with a mobile device. Color correction serves as a critical step to ensure accurate food identification and volume estimation. We make use of a specifically designed color checkerboard (i.e. a fiducial marker) to calibrate the imaging system so that the variations of food appearance under different lighting conditions can be determined. In this paper, we propose an image quality enhancement technique by combining image de-blurring and color correction. The contribution consists of introducing an automatic camera shake removal method using a saliency map and improving the polynomial color correction model using the LMS color space.
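
    For context, the kind of least-squares color-correction fit that such a model builds on can be sketched as follows (Python). The first-order-plus-offset model and all names are illustrative assumptions, not the TADA implementation; the LMS-space refinement mentioned above would apply the same fit after an RGB-to-LMS transform.

      import numpy as np

      def fit_color_correction(measured_rgb, reference_rgb):
          # measured_rgb, reference_rgb: (N, 3) arrays of patch colors read from the
          # fiducial checkerboard and their known reference values.
          X = np.hstack([measured_rgb, np.ones((measured_rgb.shape[0], 1))])  # add offset term
          M, *_ = np.linalg.lstsq(X, reference_rgb, rcond=None)               # (4, 3) matrix
          return M

      def apply_color_correction(rgb_pixels, M):
          # rgb_pixels: (N, 3) array of pixels to be corrected.
          X = np.hstack([rgb_pixels, np.ones((rgb_pixels.shape[0], 1))])
          return X @ M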

  12. Ricean Bias Correction in Linear Polarization Observation

    NASA Astrophysics Data System (ADS)

    Sohn, Bong Won

    2011-12-01

    I developed an enhanced correction method for the Ricean bias that occurs in linear polarization measurements. Two known methods for Ricean bias correction are reviewed. In the low signal-to-noise regime, the method based on the mode of the distribution gives a better representation of the fractional polarization, although accurate estimation of the noise level, i.e., the σ of the polarized flux, is important. The maximum likelihood method is the better choice in the high signal-to-noise regime. I suggest a hybrid method that uses the mode-based correction at low signal-to-noise and the maximum likelihood method at high signal-to-noise. A modified correction coefficient for the mode solution is proposed. The impact on the depolarization measure analysis is discussed.
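
    A hedged sketch of such a hybrid debiasing scheme is shown below (Python). The specific closed forms and the switching threshold are common textbook choices, not the modified coefficient proposed in the paper.

      import numpy as np

      def debias_polarization(p_obs, sigma, snr_switch=3.0):
          p_obs = np.asarray(p_obs, dtype=float)
          snr = p_obs / sigma
          # Mode-style correction, used at low signal-to-noise.
          low = np.sqrt(np.clip(p_obs**2 - sigma**2, 0.0, None))
          # Asymptotic (maximum-likelihood-like) correction, used at high signal-to-noise.
          high = p_obs - sigma**2 / (2.0 * np.maximum(p_obs, 1e-30))
          return np.where(snr < snr_switch, low, high)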

  13. moco: Fast Motion Correction for Calcium Imaging.

    PubMed

    Dubbs, Alexander; Guevara, James; Yuste, Rafael

    2016-01-01

    Motion correction is the first step in a pipeline of algorithms to analyze calcium imaging videos and extract biologically relevant information, for example the network structure of the neurons therein. Fast motion correction is especially critical for closed-loop activity-triggered stimulation experiments, where accurate detection and targeting of specific cells is necessary. We introduce a novel motion-correction algorithm which uses a Fourier-transform approach, and a combination of judicious downsampling and the accelerated computation of many L2 norms using dynamic programming and two-dimensional, FFT-accelerated convolutions, to enhance its efficiency. Its accuracy is comparable to that of established community-used algorithms, and it is more stable to large translational motions. It is programmed in Java and is compatible with ImageJ. PMID:26909035

  14. moco: Fast Motion Correction for Calcium Imaging

    PubMed Central

    Dubbs, Alexander; Guevara, James; Yuste, Rafael

    2016-01-01

    Motion correction is the first step in a pipeline of algorithms to analyze calcium imaging videos and extract biologically relevant information, for example the network structure of the neurons therein. Fast motion correction is especially critical for closed-loop activity-triggered stimulation experiments, where accurate detection and targeting of specific cells is necessary. We introduce a novel motion-correction algorithm which uses a Fourier-transform approach, and a combination of judicious downsampling and the accelerated computation of many L2 norms using dynamic programming and two-dimensional, FFT-accelerated convolutions, to enhance its efficiency. Its accuracy is comparable to that of established community-used algorithms, and it is more stable to large translational motions. It is programmed in Java and is compatible with ImageJ. PMID:26909035
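
    The core Fourier step of such an approach can be sketched in a few lines (Python is used here for brevity, although moco itself is written in Java): estimate a rigid translation from the peak of an FFT-based cross-correlation and undo it. This is a generic illustration, not the moco code.

      import numpy as np

      def estimate_translation(frame, template):
          # Circular cross-correlation via the FFT; the peak gives the shift of
          # `frame` relative to `template` (integer-pixel accuracy only).
          xcorr = np.fft.ifft2(np.fft.fft2(frame) * np.conj(np.fft.fft2(template))).real
          dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
          # Map shifts into a signed range so large displacements wrap correctly.
          if dy > frame.shape[0] // 2:
              dy -= frame.shape[0]
          if dx > frame.shape[1] // 2:
              dx -= frame.shape[1]
          return dy, dx

      def correct_frame(frame, dy, dx):
          # Undo the estimated translation.
          return np.roll(np.roll(frame, -dy, axis=0), -dx, axis=1)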

  15. Improved target recognition with live atmospheric correction

    NASA Astrophysics Data System (ADS)

    Archer, Cynthia; Morgenstern, James

    2013-05-01

    Hyperspectral airborne sensing systems frequently employ spectral signature databases to detect materials. To achieve high detection and low false alarm rates, it is critical to retrieve accurate reflectance values from the camera's digital number (dn) output. A one-time camera calibration converts dn values to reflectance. However, changes in solar angle and atmospheric conditions distort the reflected energy, reducing the detection performance of the system. Changes in solar angle and atmospheric conditions introduce both additive (offset) and multiplicative (gain) effects for each waveband. A gain and offset correction can mitigate these effects. Correction methods based on radiative transfer models require equipment to measure solar angle and atmospheric conditions. Other methods use known reference materials in the scene to calculate the correction, but require an operator to identify the location of these materials. Our unmanned airborne vehicle application can rely on neither additional equipment nor operator intervention. Applicable automated correction approaches typically analyze gross scene statistics to find the gain and offset values. Airborne hyperspectral systems have high ground resolution but limited fields of view, so an individual frame does not include all the variation necessary to accurately calculate global statistics. In the present work we describe a novel approach to the automatic estimation of atmospheric and solar effects from the hyperspectral data. Our approach is based on Hough transform matching of background spectral signatures with materials extracted from the scene. Scene materials are identified with low-complexity agglomerative clustering. Detection results with data gathered from recent field tests are shown.
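
    Whichever route is used to estimate the coefficients, the per-band gain and offset correction itself is a simple linear operation; a minimal sketch (Python, hypothetical names) follows.

      import numpy as np

      def apply_gain_offset(dn_cube, gain, offset):
          # dn_cube: (rows, cols, bands) array of digital numbers.
          # gain, offset: 1-D per-band coefficients, however they were estimated
          # (radiative transfer, in-scene references, or automated scene matching).
          gain = np.asarray(gain)
          offset = np.asarray(offset)
          return dn_cube * gain[np.newaxis, np.newaxis, :] + offset[np.newaxis, np.newaxis, :]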

  16. 75 FR 68409 - Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-08

    ..., the Presidential Determination number should read ``2010-14'' (Presidential Sig.) [FR Doc. C1-2010... Migration Needs Resulting From Flooding In Pakistan Correction In Presidential document 2010-27673...

  17. Corrected Age for Preemies

    MedlinePlus


  18. Correcting Hubble Vision.

    ERIC Educational Resources Information Center

    Shaw, John M.; Sheahen, Thomas P.

    1994-01-01

    Describes the theory behind the workings of the Hubble Space Telescope, the spherical aberration in the primary mirror that caused a reduction in image quality, and the corrective device that compensated for the error. (JRH)

  19. Results of the SDCS (Special Data Collection System) attenuation experiment. Technical report

    SciTech Connect

    Der, Z.A.; McElfresh, T.W.; O'Donnell, A.

    1981-10-30

    Investigation of teleseismic arrivals at test sites in the western United States (WUS), a site on the Canadian shield and two sites in the northeastern United States revealed marked differences in mantle attenuation among these sites. All sites in the WUS show high attenuation in the underlying mantle, the sites in the northeastern U.S. appear to be intermediate between the WUS and the shield sites. This pattern fits well into the results of broader regional studies of amplitude anomalies, and spectral variations in both P and S waves. The high frequency content of teleseismic arrivals cannot be reconciled with the results of long period attenuation studies unless a frequency dependence of Q is assumed in the Earth. Preliminary curves for t vs. frequency are presented for shield and shield-to-tectonic type paths. These results demonstrate that yield estimates of explosions in different tectonic environments have to be corrected for mantle attenuation.

  20. A prototype piecewise-linear dynamic attenuator.

    PubMed

    Hsieh, Scott S; Peng, Mark V; May, Christopher A; Shunhavanich, Picha; Fleischmann, Dominik; Pelc, Norbert J

    2016-07-01

    The piecewise-linear dynamic attenuator has been proposed as a mechanism in CT scanning for personalizing the x-ray illumination on a patient- and application-specific basis. Previous simulations have shown benefits in image quality, scatter, and dose objectives. We report on the first prototype implementation. This prototype is reduced in scale and speed and is integrated into a tabletop CT system with a smaller field of view (25 cm) and longer scan time (42 s) compared to a clinical system. Stainless steel wedges were machined and affixed to linear actuators, which were in turn held secure by a frame built using rapid prototyping technologies. The actuators were computer-controlled, with characteristic noise of about 100 microns. Simulations suggest that in a clinical setting, the impact of actuator noise could lead to artifacts of only 1 HU. Ring artifacts were minimized by careful design of the wedges. A water beam hardening correction was applied and the scan was collimated to reduce scatter. We scanned a 16 cm water cylinder phantom as well as an anthropomorphic pediatric phantom. The artifacts present in reconstructed images are comparable to artifacts normally seen with this tabletop system. Compared to a flat-field reference scan, increased detectability at reduced dose is shown and streaking is reduced. Artifacts are modest in our images and further refinement is possible. Issues of mechanical speed and stability in the challenging clinical CT environment will be addressed in a future design. PMID:27284705

  1. A prototype piecewise-linear dynamic attenuator

    NASA Astrophysics Data System (ADS)

    Hsieh, Scott S.; Peng, Mark V.; May, Christopher A.; Shunhavanich, Picha; Fleischmann, Dominik; Pelc, Norbert J.

    2016-07-01

    The piecewise-linear dynamic attenuator has been proposed as a mechanism in CT scanning for personalizing the x-ray illumination on a patient- and application-specific basis. Previous simulations have shown benefits in image quality, scatter, and dose objectives. We report on the first prototype implementation. This prototype is reduced in scale and speed and is integrated into a tabletop CT system with a smaller field of view (25 cm) and longer scan time (42 s) compared to a clinical system. Stainless steel wedges were machined and affixed to linear actuators, which were in turn held secure by a frame built using rapid prototyping technologies. The actuators were computer-controlled, with characteristic noise of about 100 microns. Simulations suggest that in a clinical setting, the impact of actuator noise could lead to artifacts of only 1 HU. Ring artifacts were minimized by careful design of the wedges. A water beam hardening correction was applied and the scan was collimated to reduce scatter. We scanned a 16 cm water cylinder phantom as well as an anthropomorphic pediatric phantom. The artifacts present in reconstructed images are comparable to artifacts normally seen with this tabletop system. Compared to a flat-field reference scan, increased detectability at reduced dose is shown and streaking is reduced. Artifacts are modest in our images and further refinement is possible. Issues of mechanical speed and stability in the challenging clinical CT environment will be addressed in a future design.

  2. Adaptable DC offset correction

    NASA Technical Reports Server (NTRS)

    Golusky, John M. (Inventor); Muldoon, Kelly P. (Inventor)

    2009-01-01

    Methods and systems for adaptable DC offset correction are provided. An exemplary adaptable DC offset correction system evaluates an incoming baseband signal to determine an appropriate DC offset removal scheme; removes a DC offset from the incoming baseband signal based on the appropriate DC offset scheme in response to the evaluated incoming baseband signal; and outputs a reduced DC baseband signal in response to the DC offset removed from the incoming baseband signal.
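
    One simple DC-offset removal scheme of the kind such a system might select is a one-pole tracker that follows the slowly varying offset and subtracts it. The sketch below (Python, assumed parameters) is illustrative only and does not reproduce the patented adaptive selection logic.

      import numpy as np

      def remove_dc_offset(baseband, alpha=0.995):
          y = np.empty(len(baseband), dtype=float)
          dc = 0.0
          for i, x in enumerate(baseband):
              dc = alpha * dc + (1.0 - alpha) * x   # track the slowly varying offset
              y[i] = x - dc                          # output the reduced-DC sample
          return y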

  3. Evaluation of Monitoring Approaches for Natural Attenuation

    NASA Astrophysics Data System (ADS)

    Roll, L. L.; Labolle, E. M.; Fogg, G. E.

    2008-12-01

    Monitored natural attenuation (MNA) can be a useful alternative to active remediation; however, firm conclusions regarding the effectiveness of MNA may be elusive because of multiple processes that can produce similar, apparent trends in chemical concentrations in the heterogeneous subsurface. Current monitoring approaches need to be critically evaluated for typical field settings, such as heterogeneous alluvial aquifer systems, because spatially varying aquifer properties create non-uniform flow fields that greatly influence transport processes, producing complex plume behavior that may not be adequately depicted by monitoring networks. Highly resolved simulations of flow and conservative transport in a typical alluvial aquifer system facilitate a critical review of three monitoring approaches: estimation of mass balance from sampling along the plume centerline, estimation of mass balance from fine-grid sampling, and estimation of mass flux from sampling along cross sections. The simulation procedure involves generation of unconditional transition-probability fields of hydrofacies distributions, simulation of steady-state flow, and simulation of conservative transport using a highly accurate random walk particle method (RWHET). The results elucidate limitations and potential pitfalls of the monitoring methods and of the use of simple models in typically heterogeneous systems. For example, simulations show that, because of the system complexity, apparent concentration trends in space and time can be falsely attributed to biodegradation when none is occurring if simplistic models are used to interpret the data. Measured concentrations alone are likely insufficient to judge the effectiveness of MNA.

  4. Mill profiler machines soft materials accurately

    NASA Technical Reports Server (NTRS)

    Rauschl, J. A.

    1966-01-01

    Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.

  5. Remote balance weighs accurately amid high radiation

    NASA Technical Reports Server (NTRS)

    Eggenberger, D. N.; Shuck, A. B.

    1969-01-01

    Commercial beam-type balance, modified and outfitted with electronic controls and digital readout, can be remotely controlled for use in high radiation environments. This allows accurate weighing of breeder-reactor fuel pieces when they are radioactively hot.

  6. Implication of seismic attenuation for gas hydrate resource characterization, Mallik, Mackenzie Delta, Canada

    NASA Astrophysics Data System (ADS)

    Bellefleur, G.; Riedel, M.; Brent, T.; Wright, F.; Dallimore, S. R.

    2007-10-01

    Wave attenuation is an important physical property of hydrate-bearing sediments that is rarely taken into account in site characterization with seismic data. We present a field example showing improved images of hydrate-bearing sediments on seismic data after compensation of attenuation effects. Compressional quality factors estimated from zero-offset Vertical Seismic Profiling data acquired at Mallik, Northwest Territories, Canada, demonstrate significant wave attenuation for hydrate-bearing sediments. These results are in agreement with previous attenuation estimates obtained from sonic logs and crosshole data at different frequency intervals. The application of an inverse Q-filter to compensate attenuation effects of permafrost and hydrate-bearing sediments improved the resolution of surface 3D seismic data and its correlation with log data, particularly for the shallowest gas hydrate interval. Compensation of the attenuation effects of the permafrost likely explains most of the improvements for the shallow gas hydrate zone. Our results show that characterization of the Mallik gas hydrates with seismic data not corrected for attenuation would tend to overestimate thicknesses and lateral extent of hydrate-bearing strata and hence, the volume of hydrates in place.

  7. Understanding the Code: keeping accurate records.

    PubMed

    Griffith, Richard

    2015-10-01

    In his continuing series looking at the legal and professional implications of the Nursing and Midwifery Council's revised Code of Conduct, Richard Griffith discusses the elements of accurate record keeping under Standard 10 of the Code. This article considers the importance of accurate record keeping for the safety of patients and protection of district nurses. The legal implications of records are explained along with how district nurses should write records to ensure these legal requirements are met. PMID:26418404

  8. Natural and enhanced attenuation of metals

    SciTech Connect

    Rouse, J.V.; Pyrih, R.Z.

    1996-12-31

    The ability of natural earthen materials to attenuate the movement of contamination can be quantified in relatively simple geochemical experiments. In addition, the ability of subsurface material to attenuate potential contaminants can be enhanced through modifications to geochemical parameters such as pH or redox conditions. Such enhanced geochemical attenuation has been demonstrated at a number of sites to be a cost-effective alternative to conventional pump and treat operations. This paper describes the natural attenuation reactions which occur in the subsurface, and the way to quantify such attenuation. It also introduces the concept of enhanced geochemical attenuation, wherein naturally-occurring geochemical reactions can be used to achieve in situ fixation. The paper presents examples where such natural and enhanced attenuation have been implemented as a part of an overall remedy.

  9. Computing the Seismic Attenuation in Complex Porous Materials

    NASA Astrophysics Data System (ADS)

    Masson, Yder Jean

    produces maps of the spatial distribution of Young's modulus. These maps are then used in combination with the aforementioned numerical methods to compute accurately the attenuation as a function of frequency associated with real rock samples.

  10. Two-dimensional acoustic attenuation mapping of high-temperature interstitial ultrasound lesions

    NASA Astrophysics Data System (ADS)

    Tyréus, Per Daniel; Diederich, Chris

    2004-02-01

    Acoustic attenuation change in biological tissues with temperature and time is a critical parameter for interstitial ultrasound thermal therapy treatment planning and applicator design. Earlier studies have not fully explored the effects on attenuation of temperatures (75-95 °C) and times (5-15 min) common in interstitial ultrasound treatments. A scanning transmission ultrasound attenuation measurement system was devised and used to measure attenuation changes due to these types of thermal exposures. To validate the approach and to loosely define expected values, attenuation changes in degassed ex vivo bovine liver, bovine brain and chicken muscle were measured after 10 min exposures in a water bath to temperatures up to 90 °C. Maximum attenuation increases of approximately seven, four and two times the values at 37 °C were measured for the three tissue models at 5 MHz. By using the system to scan over lesions produced using interstitial ultrasound applicators, 2D contour maps of attenuation were produced. Attenuation profiles measured through the centrelines of lesions showed that attenuation was highest close to the applicator and decreased with radial distance, as expected with decreasing thermal exposure. Attenuation values measured in profiles through lesions were also shown to decrease with reduced power to the applicator. Attenuation increases in 2D maps of interstitial ultrasound lesions in ex vivo chicken breast, bovine liver and bovine brain were correlated with visible tissue coagulation. While regions of visible coagulation corresponded well to contours of attenuation increase in liver and chicken, no lesion was visible under the same experimental conditions in brain, due primarily to the heterogeneity of the tissue. Acoustic and biothermal simulations were employed to show that attenuation models taking into account these attenuation changes at higher temperatures and longer times were better able to fit experimental data than previous models. These

  11. Extremely Accurate On-Orbit Position Accuracy using TDRSS

    NASA Technical Reports Server (NTRS)

    Stocklin, Frank; Toral, Marco; Bar-Sever, Yoaz; Rush, John

    2006-01-01

    NASA is planning to launch a new service for Earth satellites providing them with precise GPS differential corrections and other ancillary information enabling decimeter-level orbit determination accuracy and nanosecond time-transfer accuracy, onboard, in real time. The TDRSS Augmentation Service for Satellites (TASS) will broadcast its message on the S-band multiple access forward channel of NASA's Tracking and Data Relay Satellite System (TDRSS). The satellite's phased-array antenna has been configured to provide a wide beam, extending coverage up to 1000 km altitude over the poles. Global coverage will be ensured with broadcast from three or more TDRSS satellites. The GPS differential corrections are provided by the NASA Global Differential GPS (GDGPS) System, developed and operated by JPL. The GDGPS System employs a global ground network of more than 70 GPS receivers to monitor the GPS constellation in real time. The system provides real-time estimates of the GPS satellite states, as well as many other real-time products such as differential corrections, global ionospheric maps, and integrity monitoring. The unique multiply redundant architecture of the GDGPS System ensures very high reliability, with 99.999% availability demonstrated since the inception of the system in early 2000. The estimated real-time GPS orbit and clock states provided by the GDGPS system are accurate to better than 20 cm 3D RMS, and have been demonstrated to support sub-decimeter real-time positioning and orbit determination for a variety of terrestrial, airborne, and spaceborne applications. In addition to the GPS differential corrections, TASS will provide real-time Earth orientation and solar flux information that enables precise onboard knowledge of the Earth-fixed position of the spacecraft, and precise orbit prediction and planning capabilities. TASS will also provide 5-second alarms for GPS integrity failures based on the unique GPS integrity monitoring service of the GDGPS System.

  12. Accurate equilibrium structures of fluoro- and chloroderivatives of methane

    NASA Astrophysics Data System (ADS)

    Vogt, Natalja; Demaison, Jean; Rudolph, Heinz Dieter

    2014-11-01

    This work is a systematic study of molecular structure of fluoro-, chloro-, and fluorochloromethanes. For the first time, the accurate ab initio structure is computed for 10 molecules (CF4, CClF3, CCl2F2, CCl3F, CHClF2, CHCl2F, CH2F2, CH2ClF, CH2Cl2, and CCl4) at the coupled cluster level of electronic structure theory including single and double excitations augmented by a perturbational estimate of the effects of connected triple excitations [CCSD(T)] with all electrons being correlated and Gaussian basis sets of at least quadruple-ζ quality. Furthermore, when possible, namely for the molecules CH2F2, CH2Cl2, CH2ClF, CHClF2, and CCl2F2, accurate semi-experimental equilibrium (rSEe) structure has also been determined. This is achieved through a least-squares structural refinement procedure based on the equilibrium rotational constants of all available isotopomers, determined by correcting the experimental ground-state rotational constants with computed ab initio vibration-rotation interaction constants and electronic g-factors. The computed and semi-experimental equilibrium structures are in excellent agreement with each other, but the rSEe structure is generally more accurate, in particular for the CF and CCl bond lengths. The carbon-halogen bond length is discussed within the framework of the ligand close-packing model as a function of the atomic charges. For this purpose, the accurate equilibrium structures of some other molecules with alternative ligands, such as CH3Li, CF3CCH, and CF3CN, are also computed.

  13. Geological Corrections in Gravimetry

    NASA Astrophysics Data System (ADS)

    Mikuška, J.; Marušiak, I.

    2015-12-01

    Applying corrections for the known geology to gravity data can be traced back to the first quarter of the 20th century. Later on, mostly in areas with sedimentary cover, at local and regional scales, the correction known as gravity stripping has been in use since the mid 1960s, provided that there was enough geological information. Stripping at regional to global scales became possible after the release of the CRUST 2.0 and, later, the CRUST 1.0 models in the years 2000 and 2013, respectively. The latter model, especially, provides quite a new view on the relevant geometries and on the topographic and crustal densities, as well as on the crust/mantle density contrast. Thus, the isostatic corrections, which have often been used in the past, can now be replaced by procedures working with independent information interpreted primarily from seismic studies. We have developed software for performing geological corrections in the space domain, based on a priori geometry and density grids which can be of either rectangular or spherical/ellipsoidal type, with cells in the shape of rectangles, tesseroids, or triangles. It enables us to calculate the required gravitational effects not only in the form of surface maps or profiles but, for instance, also along vertical lines, which can shed some additional light on the nature of the geological correction. The software can work at a variety of scales and considers the input information to an optional distance from the calculation point up to the antipodes. Our main objective is to treat the geological correction as an alternative to accounting for the topography with varying densities, since the bottoms of the topographic masses, namely the geoid or ellipsoid, generally do not represent geological boundaries. We would also like to call attention to possible distortions of the corrected gravity anomalies. This work was supported by the Slovak Research and Development Agency under the contract APVV-0827-12.

  14. DNA barcode data accurately assign higher spider taxa.

    PubMed

    Coddington, Jonathan A; Agnarsson, Ingi; Cheng, Ren-Chung; Čandek, Klemen; Driskell, Amy; Frick, Holger; Gregorič, Matjaž; Kostanjšek, Rok; Kropf, Christian; Kweskin, Matthew; Lokovšek, Tjaša; Pipan, Miha; Vidergar, Nina; Kuntner, Matjaž

    2016-01-01

    The use of unique DNA sequences as a method for taxonomic identification is no longer fundamentally controversial, even though debate continues on the best markers, methods, and technology to use. Although both existing databanks such as GenBank and BOLD, as well as reference taxonomies, are imperfect, in best case scenarios "barcodes" (whether single or multiple, organelle or nuclear, loci) clearly are an increasingly fast and inexpensive method of identification, especially as compared to manual identification of unknowns by increasingly rare expert taxonomists. Because most species on Earth are undescribed, a complete reference database at the species level is impractical in the near term. The question therefore arises whether unidentified species can, using DNA barcodes, be accurately assigned to more inclusive groups such as genera and families-taxonomic ranks of putatively monophyletic groups for which the global inventory is more complete and stable. We used a carefully chosen test library of CO1 sequences from 49 families, 313 genera, and 816 species of spiders to assess the accuracy of genus and family-level assignment. We used BLAST queries of each sequence against the entire library and got the top ten hits. The percent sequence identity was reported from these hits (PIdent, range 75-100%). Accurate assignment of higher taxa (PIdent above which errors totaled less than 5%) occurred for genera at PIdent values >95 and families at PIdent values ≥ 91, suggesting these as heuristic thresholds for accurate generic and familial identifications in spiders. Accuracy of identification increases with numbers of species/genus and genera/family in the library; above five genera per family and fifteen species per genus all higher taxon assignments were correct. We propose that using percent sequence identity between conventional barcode sequences may be a feasible and reasonably accurate method to identify animals to family/genus. However, the quality of the

  15. DNA barcode data accurately assign higher spider taxa

    PubMed Central

    Coddington, Jonathan A.; Agnarsson, Ingi; Cheng, Ren-Chung; Čandek, Klemen; Driskell, Amy; Frick, Holger; Gregorič, Matjaž; Kostanjšek, Rok; Kropf, Christian; Kweskin, Matthew; Lokovšek, Tjaša; Pipan, Miha; Vidergar, Nina

    2016-01-01

    The use of unique DNA sequences as a method for taxonomic identification is no longer fundamentally controversial, even though debate continues on the best markers, methods, and technology to use. Although both existing databanks such as GenBank and BOLD, as well as reference taxonomies, are imperfect, in best case scenarios “barcodes” (whether single or multiple, organelle or nuclear, loci) clearly are an increasingly fast and inexpensive method of identification, especially as compared to manual identification of unknowns by increasingly rare expert taxonomists. Because most species on Earth are undescribed, a complete reference database at the species level is impractical in the near term. The question therefore arises whether unidentified species can, using DNA barcodes, be accurately assigned to more inclusive groups such as genera and families—taxonomic ranks of putatively monophyletic groups for which the global inventory is more complete and stable. We used a carefully chosen test library of CO1 sequences from 49 families, 313 genera, and 816 species of spiders to assess the accuracy of genus and family-level assignment. We used BLAST queries of each sequence against the entire library and got the top ten hits. The percent sequence identity was reported from these hits (PIdent, range 75–100%). Accurate assignment of higher taxa (PIdent above which errors totaled less than 5%) occurred for genera at PIdent values >95 and families at PIdent values ≥ 91, suggesting these as heuristic thresholds for accurate generic and familial identifications in spiders. Accuracy of identification increases with numbers of species/genus and genera/family in the library; above five genera per family and fifteen species per genus all higher taxon assignments were correct. We propose that using percent sequence identity between conventional barcode sequences may be a feasible and reasonably accurate method to identify animals to family/genus. However, the quality of
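
    The reported thresholds translate directly into a simple assignment rule; a sketch (Python, hypothetical function names) is given below, taking the percent identity of the top BLAST hit as input.

      def assign_higher_taxon(pident, genus_threshold=95.0, family_threshold=91.0):
          # Thresholds follow the heuristic values reported above for spider CO1 barcodes.
          if pident > genus_threshold:
              return "genus-level assignment trusted"
          if pident >= family_threshold:
              return "family-level assignment trusted"
          return "no higher-taxon assignment"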

  16. Atmospheric degradation correction of terahertz beams using multiscale signal restoration.

    PubMed

    Ryu, Choonwoo; Kong, Seong G

    2010-02-10

    We present atmospheric degradation correction of terahertz (THz) beams based on multiscale signal decomposition and a combination of a Wiener deconvolution filter and artificial neural networks. THz beams suffer from strong attenuation by water molecules in the air. The proposed signal restoration approach finds the filter coefficients from a pair of reference signals previously measured from low-humidity conditions and current background air signals. Experimental results with two material samples of different chemical compositions demonstrate that the multiscale signal restoration technique is effective in correcting atmospheric degradation compared to individual and non-multiscale approaches. PMID:20154764
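
    The Wiener-deconvolution component of such a restoration can be sketched as follows (Python). Treating a single low-humidity reference trace as the transfer-function estimate, and the value of the regularization constant, are simplifying assumptions for illustration; the multiscale decomposition and neural-network stages described above are omitted.

      import numpy as np

      def wiener_deconvolve(measured, reference, noise_to_signal=1e-2):
          # measured and reference are equal-length time-domain THz traces; the
          # low-humidity reference serves here as the assumed atmospheric response.
          m = np.fft.rfft(measured)
          h = np.fft.rfft(reference)
          wiener = np.conj(h) / (np.abs(h) ** 2 + noise_to_signal)
          return np.fft.irfft(wiener * m, n=len(measured))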

  17. SEISMIC ATTENUATION FOR RESERVOIR CHARACTERIZATION

    SciTech Connect

    Joel Walls; M.T. Taner; Naum Derzhi; Gary Mavko; Jack Dvorkin

    2003-04-01

    In this report we will show some new Q-related seismic attributes on the Burlington-Seitel data set. One example will be called the Energy Absorption Attribute (EAA) and is based on a spectral analysis. The EAA algorithm is designed to detect a sudden increase in the rate of exponential decay in the relatively higher-frequency portion of the spectrum. In addition, we will show results from a hybrid attribute that combines attenuation with relative acoustic impedance to give a better indication of commercial gas saturation.

  18. Imaging Rayleigh wave attenuation with USArray

    NASA Astrophysics Data System (ADS)

    Bao, Xueyang; Dalton, Colleen A.; Jin, Ge; Gaherty, James B.; Shen, Yang

    2016-04-01

    The EarthScope USArray provides an opportunity to obtain detailed images of the continental upper mantle at an unprecedented scale. The majority of mantle models derived from USArray data to date contain spatial variations in seismic-wave speed; however, in many cases these data sets do not by themselves allow a non-unique interpretation. Joint interpretation of seismic attenuation and velocity models can improve upon the interpretations based only on velocity and provide important constraints on the temperature, composition, melt content, and volatile content of the mantle. The surface-wave amplitudes that constrain upper-mantle attenuation are sensitive to factors in addition to attenuation, including the earthquake source excitation, focusing and defocusing by elastic structure, and local site amplification. Because of the difficulty of isolating attenuation from these other factors, little is known about the attenuation structure of the North American upper mantle. In this study, Rayleigh wave travel time and amplitude in the period range 25-100 s are measured using an interstation cross-correlation technique, which takes advantage of waveform similarity at nearby stations. Several estimates of Rayleigh wave attenuation and site amplification are generated at each period, using different approaches to separate the effects of attenuation and local site amplification on amplitude. It is assumed that focusing and defocusing effects can be described by the Laplacian of the travel-time field. All approaches identify the same large-scale patterns in attenuation, including areas where the attenuation values are likely contaminated by unmodelled focusing and defocusing effects. Regionally averaged attenuation maps are constructed after removal of the contaminated attenuation values, and the variations in intrinsic shear attenuation that are suggested by these Rayleigh wave attenuation maps are explored.

  19. Imaging Rayleigh wave attenuation with USArray

    NASA Astrophysics Data System (ADS)

    Bao, Xueyang; Dalton, Colleen A.; Jin, Ge; Gaherty, James B.; Shen, Yang

    2016-07-01

    The EarthScope USArray provides an opportunity to obtain detailed images of the continental upper mantle at an unprecedented scale. The majority of mantle models derived from USArray data to date contain spatial variations in seismic-wave speed; however, in many cases these data sets do not by themselves allow a non-unique interpretation. Joint interpretation of seismic attenuation and velocity models can improve upon the interpretations based only on velocity and provide important constraints on the temperature, composition, melt content, and volatile content of the mantle. The surface wave amplitudes that constrain upper-mantle attenuation are sensitive to factors in addition to attenuation, including the earthquake source excitation, focusing and defocusing by elastic structure, and local site amplification. Because of the difficulty of isolating attenuation from these other factors, little is known about the attenuation structure of the North American upper mantle. In this study, Rayleigh wave traveltime and amplitude in the period range 25-100 s are measured using an interstation cross-correlation technique, which takes advantage of waveform similarity at nearby stations. Several estimates of Rayleigh wave attenuation and site amplification are generated at each period, using different approaches to separate the effects of attenuation and local site amplification on amplitude. It is assumed that focusing and defocusing effects can be described by the Laplacian of the traveltime field. All approaches identify the same large-scale patterns in attenuation, including areas where the attenuation values are likely contaminated by unmodelled focusing and defocusing effects. Regionally averaged attenuation maps are constructed after removal of the contaminated attenuation values, and the variations in intrinsic shear attenuation that are suggested by these Rayleigh wave attenuation maps are explored.
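
    For orientation, the standard relation linking a surface-wave amplitude-decay coefficient to Q is alpha = pi * f / (Q * U), where U is the group velocity; the one-liner below (Python) makes that conversion explicit and is included only as background, not as part of the authors' measurement procedure.

      import numpy as np

      def attenuation_coefficient(freq_hz, q, group_velocity_km_s):
          # Amplitude decays as exp(-alpha * distance); these units give alpha in 1/km.
          return np.pi * freq_hz / (q * group_velocity_km_s)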

  20. Effect of scatter correction on the compartmental measurement of striatal and extrastriatal dopamine D2 receptors using [123I]epidepride SPET.

    PubMed

    Fujita, Masahiro; Varrone, Andrea; Kim, Kyeong Min; Watabe, Hiroshi; Zoghbi, Sami S; Seneca, Nicholas; Tipre, Dnyanesh; Seibyl, John P; Innis, Robert B; Iida, Hidehiro

    2004-05-01

    Prior studies with anthropomorphic phantoms and single, static in vivo brain images have demonstrated that scatter correction significantly improves the accuracy of regional quantitation of single-photon emission tomography (SPET) brain images. Since the regional distribution of activity changes following a bolus injection of a typical neuroreceptor ligand, we examined the effect of scatter correction on the compartmental modeling of serial dynamic images of striatal and extrastriatal dopamine D2 receptors using [123I]epidepride. Eight healthy human subjects [age 30 ± 8 (range 22-46) years] participated in a study with a bolus injection of 373 ± 12 (354-389) MBq [123I]epidepride and data acquisition over a period of 14 h. A transmission scan was obtained in each study for attenuation and scatter correction. Distribution volumes were calculated by means of compartmental nonlinear least-squares analysis using a metabolite-corrected arterial input function and brain data processed with scatter correction using narrow-beam geometry μ (SC) and without scatter correction using broad-beam μ (NoSC). Effects of SC were markedly different among brain regions. SC increased activities in the putamen and thalamus after 1-1.5 h, while it decreased activity during the entire experiment in the temporal cortex and cerebellum. Compared with NoSC, SC significantly increased the specific distribution volume in the putamen (58%, P=0.0001) and thalamus (23%, P=0.0297). Compared with NoSC, SC made the regional distribution of the specific distribution volume closer to that of [18F]fallypride. It is concluded that SC is required for accurate quantification of distribution volumes of receptor ligands in SPET studies. PMID:14730406

  1. Peteye detection and correction

    NASA Astrophysics Data System (ADS)

    Yen, Jonathan; Luo, Huitao; Tretter, Daniel

    2007-01-01

    Redeyes are caused by the camera flash light reflecting off the retina. Peteyes refer to similar artifacts in the eyes of other mammals caused by camera flash. In this paper we present a peteye removal algorithm for detecting and correcting peteye artifacts in digital images. Peteye removal for animals is significantly more difficult than redeye removal for humans, because peteyes can be any of a variety of colors, and human face detection cannot be used to localize the animal eyes. In many animals, including dogs and cats, the retina has a special reflective layer that can cause a variety of peteye colors, depending on the animal's breed, age, or fur color, etc. This makes the peteye correction more challenging. We have developed a semi-automatic algorithm for peteye removal that can detect peteyes based on the cursor position provided by the user and correct them by neutralizing the colors with glare reduction and glint retention.

  2. Aureolegraph internal scattering correction.

    PubMed

    DeVore, John; Villanucci, Dennis; LePage, Andrew

    2012-11-20

    Two methods of determining instrumental scattering for correcting aureolegraph measurements of particulate solar scattering are presented. One involves subtracting measurements made with and without an external occluding ball and the other is a modification of the Langley Plot method and involves extrapolating aureolegraph measurements collected through a large range of solar zenith angles. Examples of internal scattering correction determinations using the latter method show similar power-law dependencies on scattering, but vary by roughly a factor of 8 and suggest that changing aerosol conditions during the determinations render this method problematic. Examples of corrections of scattering profiles using the former method are presented for a range of atmospheric particulate layers from aerosols to cumulus and cirrus clouds. PMID:23207299

  3. Partial Volume Correction in Quantitative Amyloid Imaging

    PubMed Central

    Su, Yi; Blazey, Tyler M.; Snyder, Abraham Z.; Raichle, Marcus E.; Marcus, Daniel S.; Ances, Beau M.; Bateman, Randall J.; Cairns, Nigel J.; Aldea, Patricia; Cash, Lisa; Christensen, Jon J.; Friedrichsen, Karl; Hornbeck, Russ C.; Farrar, Angela M.; Owen, Christopher J.; Mayeux, Richard; Brickman, Adam M.; Klunk, William; Price, Julie C.; Thompson, Paul M.; Ghetti, Bernardino; Saykin, Andrew J.; Sperling, Reisa A.; Johnson, Keith A.; Schofield, Peter R.; Buckles, Virginia; Morris, John C.; Benzinger, Tammie L. S.

    2014-01-01

    Amyloid imaging is a valuable tool for research and diagnosis in dementing disorders. As positron emission tomography (PET) scanners have limited spatial resolution, measured signals are distorted by partial volume effects. Various techniques have been proposed for correcting partial volume effects, but there is no consensus as to whether these techniques are necessary in amyloid imaging, and, if so, how they should be implemented. We evaluated a two-component partial volume correction technique and a regional spread function technique using both simulated and human Pittsburgh compound B (PiB) PET imaging data. Both correction techniques compensated for partial volume effects and yielded improved detection of subtle changes in PiB retention. However, the regional spread function technique was more accurate in application to simulated data. Because PiB retention estimates depend on the correction technique, standardization is necessary to compare results across groups. Partial volume correction has sometimes been avoided because it increases the sensitivity to inaccuracy in image registration and segmentation. However, our results indicate that appropriate PVC may enhance our ability to detect changes in amyloid deposition. PMID:25485714

  4. Partial volume correction in quantitative amyloid imaging.

    PubMed

    Su, Yi; Blazey, Tyler M; Snyder, Abraham Z; Raichle, Marcus E; Marcus, Daniel S; Ances, Beau M; Bateman, Randall J; Cairns, Nigel J; Aldea, Patricia; Cash, Lisa; Christensen, Jon J; Friedrichsen, Karl; Hornbeck, Russ C; Farrar, Angela M; Owen, Christopher J; Mayeux, Richard; Brickman, Adam M; Klunk, William; Price, Julie C; Thompson, Paul M; Ghetti, Bernadino; Saykin, Andrew J; Sperling, Reisa A; Johnson, Keith A; Schofield, Peter R; Buckles, Virginia; Morris, John C; Benzinger, Tammie L S

    2015-02-15

    Amyloid imaging is a valuable tool for research and diagnosis in dementing disorders. As positron emission tomography (PET) scanners have limited spatial resolution, measured signals are distorted by partial volume effects. Various techniques have been proposed for correcting partial volume effects, but there is no consensus as to whether these techniques are necessary in amyloid imaging, and, if so, how they should be implemented. We evaluated a two-component partial volume correction technique and a regional spread function technique using both simulated and human Pittsburgh compound B (PiB) PET imaging data. Both correction techniques compensated for partial volume effects and yielded improved detection of subtle changes in PiB retention. However, the regional spread function technique was more accurate in application to simulated data. Because PiB retention estimates depend on the correction technique, standardization is necessary to compare results across groups. Partial volume correction has sometimes been avoided because it increases the sensitivity to inaccuracy in image registration and segmentation. However, our results indicate that appropriate PVC may enhance our ability to detect changes in amyloid deposition. PMID:25485714
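
    A generic two-component (Meltzer-style) correction, one of the two techniques compared above, can be sketched as follows (Python with SciPy; the mask threshold, recovery floor, and Gaussian PSF model are assumptions). The regional-spread-function method is a different, region-based formulation and is not shown.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def two_component_pvc(pet, brain_mask, fwhm_mm, voxel_mm):
          # Smooth the binary brain-tissue mask with the scanner point-spread function
          # to obtain a voxel-wise recovery coefficient, then divide it out.
          sigma_vox = fwhm_mm / (2.355 * voxel_mm)
          recovery = gaussian_filter(brain_mask.astype(float), sigma_vox)
          recovery = np.maximum(recovery, 0.1)          # avoid amplifying noise at the edges
          return np.where(brain_mask > 0, pet / recovery, 0.0)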

  5. Optical Correction of Aphakia in Children

    PubMed Central

    Baradaran-Rafii, Alireza; Shirzadeh, Ebrahim; Eslani, Medi; Akbari, Mitra

    2014-01-01

    There are several reasons for which the correction of aphakia differs between children and adults. First, a child's eye is still growing during the first few years of life, and during early childhood the refractive elements of the eye undergo radical changes. Second, the immature visual system in young children puts them at risk of developing amblyopia if visual input is defocused or unequal between the two eyes. Third, the incidence of many complications that carry acceptable risks in adults is unacceptable in children. The optical correction of aphakia in children has changed dramatically; however, accurate optical rehabilitation and postoperative supervision are more difficult in pediatric cases than in adults. Treatment and optical rehabilitation in pediatric aphakic patients remain a challenge for ophthalmologists. The aim of this review is to cover issues regarding optical correction of pediatric aphakia: kinds of optical correction, indications, timing of intraocular lens (IOL) implantation, types of IOLs, site of implantation, IOL power calculation and selection, and complications of IOL implantation in pediatric patients, and finally to determine the preferred choice of optical correction. However, treatment of pediatric aphakia is one step on the long road to visual rehabilitation, not the end of the journey. PMID:24982736

  6. A study of the acoustical radiation force considering attenuation

    NASA Astrophysics Data System (ADS)

    Wu, RongRong; Liu, XiaoZhou; Gong, XiuFen

    2013-07-01

    The acoustical tweezer is a primary application of the radiation force of a sound field. When a focused ultrasound beam passes through a micro-particle, such as a cell or other living biological specimen, the particle can be manipulated accurately without physical contact or invasion, owing to the three-dimensional acoustical trapping force. Based on the ray acoustics approach in the Mie regime, this work discusses the effects of Gaussian focused ultrasound on the particle, studies the acoustical trapping force on spherical Mie particles at any position in the field, and presents numerical calculations of the two-dimensional acoustical radiation force. This article also analyzes the conditions for acoustical trapping, and discusses the impact of the initial position and size of the particle on the magnitude of the acoustical radiation force. Furthermore, this paper considers ultrasonic attenuation within the particle in the two-dimensional case, studies its effect on the acoustical trapping force, and modifies the calculation accordingly.

  7. X-ray attenuation properties of stainless steel (u)

    SciTech Connect

    Wang, Lily L; Berry, Phillip C

    2009-01-01

    Stainless steel vessels are used to enclose solid materials for studying x-ray radiolysis that involves gas release from the materials. Commercially available stainless steel components are easily adapted to provide either static or dynamic conditions for monitoring the gas evolved from the solid materials during and after the x-ray irradiation. Experimental data published on the x-ray attenuation properties of stainless steel, however, are very scarce, especially over a wide range of x-ray energies. The objective of this work was to obtain experimental data that will be used to determine how a poly-energetic x-ray beam is attenuated by the stainless steel container wall. The data will also be used in conjunction with MCNP (Monte Carlo N-Particle) modeling to develop an accurate method for determining the energy absorbed in known solid samples contained in stainless steel vessels. In this study, experiments to measure the attenuation properties of stainless steel were performed for a range of bremsstrahlung x-ray beams with maximum energies ranging from 150 keV to 10 MeV. Bremsstrahlung x-ray beams of these energies are commonly used in radiography of engineering and weapon components. The weapon surveillance community has a great interest in understanding how the x-rays used in radiography affect the short-term and long-term properties of weapon materials.
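
    The underlying transmission relation is the Beer-Lambert law. The sketch below (Python) evaluates it for a monoenergetic beam and then as a spectrum-weighted average, which is the effect a poly-energetic bremsstrahlung beam introduces and the measurements above are designed to capture; the normalization and input names are illustrative assumptions.

      import numpy as np

      def transmitted_fraction(mass_atten_cm2_per_g, density_g_per_cm3, thickness_cm):
          # Monoenergetic Beer-Lambert transmission through a uniform stainless steel wall.
          return np.exp(-mass_atten_cm2_per_g * density_g_per_cm3 * thickness_cm)

      def polyenergetic_transmission(energies, spectrum, mass_atten, density, thickness):
          # Spectrum-weighted transmission for a poly-energetic beam (assumed normalization).
          weights = spectrum / np.trapz(spectrum, energies)
          return np.trapz(weights * np.exp(-mass_atten * density * thickness), energies)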

  8. Fracture prediction using prestack Q calculation and attenuation anisotropy

    NASA Astrophysics Data System (ADS)

    An, Yong

    2015-09-01

    The analysis of fractured reservoirs is very important to hydrocarbon exploration. The quality factor Q is a parameter used to characterize the attenuation of seismic waves in subsurface media. Q not only reflects the inherent properties of the medium but also is used to make predictions regarding reservoir fractures. Compared with poststack seismic data, prestack seismic data contain detailed stratigraphic information of seismic attributes and data inversion in reservoirs. The extraction of absorption parameters from prestack data improves the accuracy of attenuation estimates. In this study, I present a new method for calculating Q based on the modified S transform (MST) using common midpoint (CMP) preprocessed gathers. First, I use the MST with adjustable time-frequency resolution to carry out a high-precision time-frequency analysis of prestack CMP gathers and derive the calculation formula for the improved S-transform-based frequency spectrum ratio method. Then, I use the energy density ratio to calculate the slope of the frequency spectrum ratio instead of the conventional amplitude ratio. Thus, I establish the relation between the slope of the spectrum ratio and offset, as well as eliminate the offset effect by multichannel linear fitting, obtaining accurate Q values from seismic prestack data. Finally, I use the proposed prestack Q extraction method to study the fractured reservoir in the Qianjin buried hill and P-wave absorption and attenuation anisotropy, with good results for fracture characterization.
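
    For reference, the classical spectral-ratio estimate that this kind of workflow generalizes is ln(A_deep(f)/A_shallow(f)) = -pi * f * dt / Q + const; a minimal sketch follows (Python). It omits the modified S transform, the energy-density ratio, and the multichannel offset fitting described above.

      import numpy as np

      def q_from_spectral_ratio(freqs, spec_shallow, spec_deep, delta_t):
          # Fit the log spectral ratio against frequency; the slope equals -pi * delta_t / Q.
          ratio = np.log(spec_deep / spec_shallow)
          slope, _ = np.polyfit(freqs, ratio, 1)
          return -np.pi * delta_t / slope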

  9. Automatic correction of dental artifacts in PET/MRI.

    PubMed

    Ladefoged, Claes N; Andersen, Flemming L; Keller, Sune H; Beyer, Thomas; Law, Ian; Højgaard, Liselotte; Darkner, Sune; Lauze, Francois

    2015-04-01

    A challenge when using current magnetic resonance (MR)-based attenuation correction in positron emission tomography/MR imaging (PET/MRI) is that the MRIs can have a signal void around the dental fillings that is segmented as artificial air-regions in the attenuation map. For artifacts connected to the background, we propose an extension to an existing active contour algorithm to delineate the outer contour using the nonattenuation corrected PET image and the original attenuation map. We propose a combination of two different methods for differentiating the artifacts within the body from the anatomical air-regions by first using a template of artifact regions, and second, representing the artifact regions with a combination of active shape models and k-nearest-neighbors. The accuracy of the combined method has been evaluated using 25 18F-fluorodeoxyglucose PET/MR patients. Results showed that the approach was able to correct an average of 97 ± 3% of the artifact areas. PMID:26158104

  10. Automatic correction of dental artifacts in PET/MRI

    PubMed Central

    Ladefoged, Claes N.; Andersen, Flemming L.; Keller, Sune H.; Beyer, Thomas; Law, Ian; Højgaard, Liselotte; Darkner, Sune; Lauze, Francois

    2015-01-01

    A challenge when using current magnetic resonance (MR)-based attenuation correction in positron emission tomography/MR imaging (PET/MRI) is that the MRIs can have a signal void around the dental fillings that is segmented as artificial air-regions in the attenuation map. For artifacts connected to the background, we propose an extension to an existing active contour algorithm to delineate the outer contour using the nonattenuation corrected PET image and the original attenuation map. We propose a combination of two different methods for differentiating the artifacts within the body from the anatomical air-regions by first using a template of artifact regions, and second, representing the artifact regions with a combination of active shape models and k-nearest-neighbors. The accuracy of the combined method has been evaluated using 25 18F-fluorodeoxyglucose PET/MR patients. Results showed that the approach was able to correct an average of 97 ± 3% of the artifact areas. PMID:26158104

  11. Refraction corrections for surveying

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1979-01-01

    Optical measurements of range and elevation angle are distorted by the earth's atmosphere. High precision refraction correction equations are presented which are ideally suited for surveying because their inputs are optically measured range and optically measured elevation angle. The outputs are true straight line range and true geometric elevation angle. The 'short distances' used in surveying allow the calculations of true range and true elevation angle to be quickly made using a programmable pocket calculator. Topics covered include the spherical form of Snell's Law; ray path equations; and integrating the equations. Short-, medium-, and long-range refraction corrections are presented in tables.

  12. Correction coil cable

    DOEpatents

    Wang, S.T.

    1994-11-01

    A wire cable assembly adapted for the winding of electrical coils is taught. A primary intended use is in particle tube assemblies for the Superconducting Super Collider. The correction coil cables have wires collected in a wire array with a center rib sandwiched therebetween to form a core assembly. The core assembly is surrounded by an assembly housing having an inner spiral wrap and a counter-wound outer spiral wrap. An alternate embodiment of the invention is rolled into a keystoned shape to improve radial alignment of the correction coil cable on a particle tube in a particle tube assembly. 7 figs.

  13. Target Mass Corrections Revisited

    SciTech Connect

    W. Melnitchouk; F. Steffens

    2006-03-07

    We propose a new implementation of target mass corrections to nucleon structure functions which, unlike existing treatments, has the correct kinematic threshold behavior at finite Q{sup 2} in the x {yields} 1 limit. We illustrate the differences between the new approach and existing prescriptions by considering specific examples for the F{sub 2} and F{sub L} structure functions, and discuss the broader implications of our results, which call into question the notion of universal parton distribution at finite Q{sup 2}.

  14. Beam Hardening Corrections in Quantitative Computed Tomography

    SciTech Connect

    Vedula, Venumadhav; Venugopal, Manoharan; Raghu, C.; Pandey, Pramod

    2007-03-21

    Volumetric computed tomography (VCT) is an emerging 3D NDE inspection technique that gives the highest throughput and better image quality. Industrial components in general demand higher x-ray energies for inspection, for which polychromatic x-ray sources are commonly used. The polychromatic nature of the x-rays gives rise to non-linear effects in the VCT projection measurements, known as beam hardening (BH) effects. BH produces prominent artifacts in the reconstructed images, thereby deteriorating the image quality. Quantitative analyses such as density quantification and dimensional analysis become difficult in the presence of these artifacts. This paper describes BH correction using a preprocessing technique for homogeneous materials. Selection of the effective energy, at which the monoenergetic linear attenuation coefficient of a particular material equals that of the polyenergetic beam, is critical for BH correction. Various methods to determine the effective energy and their consequences for the quantitative measurements have been investigated in the present study. In this paper, BH corrections for heterogeneous materials have also been explored.
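
    A minimal sketch of the kind of preprocessing (linearization) correction described above for a single homogeneous material, assuming a made-up spectrum and water attenuation values rather than calibrated data: simulate the polychromatic projection p(L) over a range of path lengths L, fit a polynomial that maps p back to the monochromatic value at the chosen effective energy, and apply that mapping to measured projections.

      import numpy as np

      # Hypothetical polychromatic spectrum and water attenuation values (1/cm);
      # these numbers are placeholders, not a calibrated spectrum.
      energies = np.array([40.0, 60.0, 80.0, 100.0])     # keV
      weights = np.array([0.2, 0.4, 0.3, 0.1])           # normalized spectral weights
      mu_water = np.array([0.27, 0.21, 0.18, 0.17])      # linear attenuation, 1/cm

      def poly_projection(length_cm):
          """Polychromatic projection value -ln(I/I0) for a water path length."""
          return -np.log(np.sum(weights * np.exp(-mu_water * length_cm)))

      # Pick an effective attenuation coefficient (here simply the spectrum-weighted
      # mean) and fit a polynomial that linearizes the polychromatic projections.
      mu_eff = float(np.sum(weights * mu_water))
      lengths = np.linspace(0.0, 40.0, 81)               # cm
      p_poly = np.array([poly_projection(L) for L in lengths])
      coeffs = np.polyfit(p_poly, mu_eff * lengths, deg=3)

      def correct(p_measured):
          """Apply the beam-hardening precorrection to measured projection values."""
          return np.polyval(coeffs, p_measured)

      print(correct(poly_projection(20.0)), mu_eff * 20.0)   # nearly equal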

  15. Scatter corrections for cone beam optical CT

    NASA Astrophysics Data System (ADS)

    Olding, Tim; Holmes, Oliver; Schreiner, L. John

    2009-05-01

    Cone beam optical computed tomography (OptCT) employing the VISTA scanner (Modus Medical, London, ON) has been shown to have significant promise for fast, three-dimensional imaging of polymer gel dosimeters. One distinct challenge with this approach arises from the combination of the cone beam geometry, a diffuse light source, and the scattering polymer gel media, which all contribute scatter signal that perturbs the accuracy of the scanner. Beam stop array (BSA), beam pass array (BPA) and anti-scatter polarizer correction methodologies have been employed to remove scatter signal from OptCT data. These approaches are investigated through the use of well-characterized phantom scattering solutions and irradiated polymer gel dosimeters. BSA-corrected scatter solutions show good agreement in attenuation coefficient with the optically absorbing dye solutions, with considerable reduction of the scatter-induced cupping artifact at high scattering concentrations. The application of BSA scatter corrections to a polymer gel dosimeter led to an overall reduction in the fraction of pixels failing the (3%, 3 mm) gamma criterion from 7.8% to 0.15%.
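
    A schematic sketch of the beam stop array idea referred to above (not the VISTA-specific processing), using SciPy interpolation: pixels shadowed by opaque stops record scatter only, so interpolating those samples across the detector yields a scatter map that can be subtracted from the open-field projection. Array sizes, the synthetic scatter field, and the stop layout below are illustrative assumptions.

      import numpy as np
      from scipy.interpolate import griddata

      def bsa_scatter_correct(open_proj, bsa_proj, stop_rows, stop_cols):
          """Subtract a scatter estimate obtained with a beam stop array.

          open_proj : projection acquired without the stops
          bsa_proj  : projection acquired with the stops in place; pixels behind
                      the stops see (approximately) scatter only
          stop_rows, stop_cols : detector indices of the stop shadows
          """
          samples = bsa_proj[stop_rows, stop_cols]
          rows, cols = np.mgrid[0:open_proj.shape[0], 0:open_proj.shape[1]]
          scatter_map = griddata(np.column_stack([stop_rows, stop_cols]), samples,
                                 (rows, cols), method="cubic",
                                 fill_value=float(samples.mean()))
          return np.clip(open_proj - scatter_map, 0.0, None)

      # Toy usage with a smooth synthetic scatter field on a 64 x 64 detector.
      ny, nx = 64, 64
      rows, cols = np.mgrid[0:ny, 0:nx]
      scatter = 0.2 * np.exp(-((rows - 32) ** 2 + (cols - 32) ** 2) / 800.0)
      primary = np.ones((ny, nx))
      stop_r, stop_c = np.mgrid[4:ny:8, 4:nx:8]
      # Here the pure scatter field stands in for the with-stops acquisition.
      corrected = bsa_scatter_correct(primary + scatter, scatter,
                                      stop_r.ravel(), stop_c.ravel())
      print(np.abs(corrected - primary).max())   # residual, well below the 0.2 scatter peak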

  16. Accurate Determination of Conformational Transitions in Oligomeric Membrane Proteins

    PubMed Central

    Sanz-Hernández, Máximo; Vostrikov, Vitaly V.; Veglia, Gianluigi; De Simone, Alfonso

    2016-01-01

    The structural dynamics governing collective motions in oligomeric membrane proteins play key roles in vital biomolecular processes at cellular membranes. In this study, we present a structural refinement approach that combines solid-state NMR experiments and molecular simulations to accurately describe concerted conformational transitions identifying the overall structural, dynamical, and topological states of oligomeric membrane proteins. The accuracy of the structural ensembles generated with this method is shown to reach the statistical error limit, and is further demonstrated by correctly reproducing orthogonal NMR data. We demonstrate the accuracy of this approach by characterising the pentameric state of phospholamban, a key player in the regulation of calcium uptake in the sarcoplasmic reticulum, and by probing its dynamical activation upon phosphorylation. Our results underline the importance of using an ensemble approach to characterise the conformational transitions that are often responsible for the biological function of oligomeric membrane protein states. PMID:26975211

  17. Neutron supermirrors: an accurate theory for layer thickness computation

    NASA Astrophysics Data System (ADS)

    Bray, Michael

    2001-11-01

    We present a new theory for the computation of Super-Mirror stacks, using accurate formulas derived from the classical optics field. Approximations are introduced into the computation, but at a later stage than existing theories, providing a more rigorous treatment of the problem. The final result is a continuous thickness stack, whose properties can be determined at the outset of the design. We find that the well-known fourth power dependence of number of layers versus maximum angle is (of course) asymptotically correct. We find a formula giving directly the relation between desired reflectance, maximum angle, and number of layers (for a given pair of materials). Note: The author of this article, a classical opticist, has limited knowledge of the Neutron world, and begs forgiveness for any shortcomings, erroneous assumptions and/or misinterpretation of previous authors' work on the subject.

  18. A fast and accurate FPGA based QRS detection system.

    PubMed

    Shukla, Ashish; Macchiarulo, Luca

    2008-01-01

    An accurate Field Programmable Gate Array (FPGA) based ECG analysis system is described in this paper. The design, based on a popular software-based QRS detection algorithm, calculates the threshold value for the next peak detection cycle from the median of eight previously detected peaks. The hardware design has accuracy in excess of 96% in detecting the beats correctly when tested with a subset of five 30 minute data records obtained from the MIT-BIH Arrhythmia database. The design, implemented using a proprietary design tool (System Generator), is an extension of our previous work and uses 76% of the resources available in a small-sized FPGA device (Xilinx Spartan xc3s500), has a higher detection accuracy as compared to our previous design, and takes almost half the analysis time in comparison to the software-based approach. PMID:19163797

  19. Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry.

    PubMed

    Fuchs, Franz G; Hjelmervik, Jon M

    2016-02-01

    A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire OpenGL pipeline is used in a multi-stage algorithm leveraging techniques from surface rendering, order-independent transparency, as well as theory and numerical methods for ordinary differential equations. We showcase the efficiency of our approach on different models relevant to industry, ranging from quality inspection of the parametrization of the geometry, to stress analysis in linear elasticity, to visualization of computational fluid dynamics results. PMID:26731454

  20. Second-order accurate finite volume method for well-driven flows

    NASA Astrophysics Data System (ADS)

    Dotlić, M.; Vidović, D.; Pokorni, B.; Pušić, M.; Dimkić, M.

    2016-02-01

    We consider a finite volume method for a well-driven fluid flow in a porous medium. Due to the singularity of the well, modeling in the near-well region with standard numerical schemes results in a completely wrong total well flux and an inaccurate hydraulic head. Local grid refinement can help, but it comes at computational cost. In this article we propose two methods to address the well singularity. In the first method the flux through well faces is corrected using a logarithmic function, in a way related to the Peaceman model. Coupling this correction with a non-linear second-order accurate two-point scheme gives a greatly improved total well flux, but the resulting scheme is still inconsistent. In the second method fluxes in the near-well region are corrected by representing the hydraulic head as a sum of a logarithmic and a linear function. This scheme is second-order accurate.
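
    For context on the logarithmic flux correction mentioned above, the snippet below shows the classic Peaceman-type well model on which such corrections are based (a generic illustration, not the authors' second-order scheme): the well-to-cell flux is proportional to the head difference divided by ln(r_eq/r_w), with equivalent radius r_eq ≈ 0.2 h for an isotropic square cell of size h. All numerical values are placeholders.

      import math

      def peaceman_well_flux(transmissivity, h, r_well, head_cell, head_well):
          """Cell-to-well volumetric flow rate (m^3/s) for transmissivity T (m^2/s),
          using the Peaceman equivalent radius r_eq ~ 0.2*h for a square cell."""
          r_eq = 0.2 * h
          well_index = 2.0 * math.pi * transmissivity / math.log(r_eq / r_well)
          return well_index * (head_cell - head_well)

      # Example with made-up numbers: T = 1e-3 m^2/s, 10 m cell, 0.1 m well radius.
      print(peaceman_well_flux(1e-3, 10.0, 0.1, head_cell=25.0, head_well=20.0))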

  1. Out-of-field activity in the estimation of mean lung attenuation coefficient in PET/MR

    NASA Astrophysics Data System (ADS)

    Berker, Yannick; Salomon, André; Kiessling, Fabian; Schulz, Volkmar

    2014-01-01

    In clinical PET/MR, photon attenuation is a source of potentially severe image artifacts. Correction approaches include those based on MR image segmentation, in which image voxels are classified and assigned predefined attenuation coefficients to obtain an attenuation map. In whole-body imaging, however, mean lung attenuation coefficients (LAC) can vary by a factor of 2, and the choice of inappropriate mean LAC can have significant impact on PET quantification. Previously, we proposed a method combining MR image segmentation, tissue classification and Maximum Likelihood reconstruction of Attenuation and Activity (MLAA) to estimate mean LAC values. In this work, we quantify the influence of out-of-field (OOF) accidental coincidences when acquiring data in a single bed position. We therefore carried out GATE simulations of realistic, whole-body activity and attenuation distributions derived from data of three patients. A bias of 15% was found and significantly reduced by removing OOF accidentals from our data, suggesting that OOF accidentals are the major contributor to the bias. We found approximately equal contributions from OOF scatter and OOF randoms, and present results after correction of the bias by rescaling of results. Results using temporal subsets suggest that 30-second acquisitions may be sufficient for estimating mean LAC with less than 5% uncertainty if the mean bias can be corrected for.

  2. Nicotine-induced impulsive action: sensitization and attenuation by mecamylamine

    PubMed Central

    Kirshenbaum, Ari P.; Jackson, Eric R.; Brown, Seth J.; Fuchs, Jason R.; Miltner, Betsie C.; Doughty, Adam H.

    2011-01-01

    A conjunctive variable-interval differential-reinforcement-of-low-rate (VI-DRL, n= 18) responding schedule and a stop-signal task (n= 18) were used to evaluate the disinhibiting effects of nicotine on response withholding in rats. Sucrose solution was used to reinforce responding, and after a stable baseline was achieved under saline-administration conditions, 0.3 mg/kg nicotine was delivered before each session. Experiment 1 showed that repeated, but not the initial, administration of nicotine decreased performance on both tasks, and the effect of sensitization followed a similar timeline; 10 consecutive doses resulted in poorer proportion-correct VI-DRL trials and percent correct stop trials than the initial dose of nicotine. Furthermore, sensitization to 0.3 mg/kg nicotine decreased performance regardless of whether a spaced or consecutive-dosing regimen was followed. Experiment 2 was designed to test whether mecamylamine hydrochloride (0.1–1.0 mg/kg) could attenuate the effects of repeated 0.3 mg/kg nicotine administration, and the degree to which mecamylamine attenuation of the effect of nicotine to produce impulsive action was relative to dose. Results from experiment 2 showed that response disinhibition, as evaluated using the VI-DRL and stop-signal tasks, is related in a systematic manner to nicotinic-acetylcholine receptor activation. PMID:21448062

  3. A highly accurate interatomic potential for argon

    NASA Astrophysics Data System (ADS)

    Aziz, Ronald A.

    1993-09-01

    A modified potential based on the individually damped model of Douketis, Scoles, Marchetti, Zen, and Thakkar [J. Chem. Phys. 76, 3057 (1982)] is presented which fits, within experimental error, the accurate ultraviolet (UV) vibration-rotation spectrum of argon determined by UV laser absorption spectroscopy by Herman, LaRocque, and Stoicheff [J. Chem. Phys. 89, 4535 (1988)]. Other literature potentials fail to do so. The potential also is shown to predict a large number of other properties and is probably the most accurate characterization of the argon interaction constructed to date.

  4. Assessing aerobic natural attenuation of trichloroethene at four DOE sites

    SciTech Connect

    Koelsch, Michael C.; Starr, Robert C.; Sorenson, Jr., Kent S.

    2005-03-01

    A 3-year Department of Energy Environmental Science Management Program (EMSP) project is currently investigating natural attenuation of trichloroethene (TCE) in aerobic groundwater. This presentation summarizes the results of a screening process to identify TCE plumes at DOE facilities that are suitable for assessing the rate of TCE cometabolism under aerobic conditions. In order to estimate aerobic degradation rates, plumes had to meet the following criteria: TCE must be present in aerobic groundwater, a conservative co-contaminant must be present and have approximately the same source as TCE, and the groundwater velocity must be known. A total of 127 TCE plumes were considered across 24 DOE sites. The four sites retained for the assessment were: (1) Brookhaven National Laboratory, OU III; (2) Paducah Gaseous Diffusion Plant, Northwest Plume; (3) Rocky Flats Environmental Technology Site, Industrialized Area--Southwest Plume and 903 Pad South Plume; and (4) Savannah River Site, A/M Area Plume. For each of these sites, a co-contaminant derived from the same source area as TCE was used as a nonbiodegrading tracer. The tracer determined the extent to which concentration decreases in the plume can be accounted for solely by abiotic processes such as dispersion and dilution. Any concentration decreases not accounted for by these processes must be explained by some other natural attenuation mechanism. Thus, ''half-lives'' presented herein are in addition to attenuation that occurs due to hydrologic mechanisms. This ''tracer-corrected method'' has previously been used at the DOE's Idaho National Engineering and Environmental Laboratory in conjunction with other techniques to document the occurrence of intrinsic aerobic cometabolism. Application of this method to other DOE sites is the first step to determining whether this might be a significant natural attenuation mechanism on a broader scale. Application of the tracer-corrected method to data from the Brookhaven

  5. ASSESSING AEROBIC NATURAL ATTENUATION OF TRICHLOROETHENE AT FOUR DOE SITES

    SciTech Connect

    Michael C. Koelsch; Robert C. Starr; Kent S. Sorenson, Jr.

    2005-03-01

    A 3-year Department of Energy Environmental Science Management Program (EMSP) project is currently investigating natural attenuation of trichloroethene (TCE) in aerobic groundwater. This presentation summarizes the results of a screening process to identify TCE plumes at DOE facilities that are suitable for assessing the rate of TCE cometabolism under aerobic conditions. In order to estimate aerobic degradation rates, plumes had to meet the following criteria: TCE must be present in aerobic groundwater, a conservative co-contaminant must be present and have approximately the same source as TCE, and the groundwater velocity must be known. A total of 127 TCE plumes were considered across 24 DOE sites. The four sites retained for the assessment were: (1) Brookhaven National Laboratory, OU III; (2) Paducah Gaseous Diffusion Plant, Northwest Plume; (3) Rocky Flats Environmental Technology Site, Industrialized Area--Southwest Plume and 903 Pad South Plume; and (4) Savannah River Site, A/M Area Plume. For each of these sites, a co-contaminant derived from the same source area as TCE was used as a nonbiodegrading tracer. The tracer determined the extent to which concentration decreases in the plume can be accounted for solely by abiotic processes such as dispersion and dilution. Any concentration decreases not accounted for by these processes must be explained by some other natural attenuation mechanism. Thus, ''half-lives'' presented herein are in addition to attenuation that occurs due to hydrologic mechanisms. This ''tracer-corrected method'' has previously been used at the DOE's Idaho National Engineering and Environmental Laboratory in conjunction with other techniques to document the occurrence of intrinsic aerobic cometabolism. Application of this method to other DOE sites is the first step to determining whether this might be a significant natural attenuation mechanism on a broader scale. Application of the tracer-corrected method to data from the Brookhaven

  6. The Digital Correction Unit: A data correction/compaction chip

    SciTech Connect

    MacKenzie, S.; Nielsen, B.; Paffrath, L.; Russell, J.; Sherden, D.

    1986-10-01

    The Digital Correction Unit (DCU) is a semi-custom CMOS integrated circuit which corrects and compacts data for the SLD experiment. It performs a piece-wise linear correction to data, and implements two separate compaction algorithms. This paper describes the basic functionality of the DCU and its correction and compaction algorithms.

  7. Refraction corrections for surveying

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1980-01-01

    Optical measurements of range and elevation angles are distorted by refraction of Earth's atmosphere. Theoretical discussion of effect, along with equations for determining exact range and elevation corrections, is presented in report. Potentially useful in optical site surveying and related applications, analysis is easily programmed on pocket calculator. Input to equation is measured range and measured elevation; output is true range and true elevation.

  8. Correction and Communicative Activity.

    ERIC Educational Resources Information Center

    Williams, Huw P.

    1980-01-01

    In classes where the communicative approach to language teaching is taken and where learners are asked to form groups in order to communicate, the teacher should be ready to respond to requests, give immediate correction, and use a monitoring sheet to note errors. The sheet can also be used for individual students. (PJM)

  9. Writing: Revisions and Corrections

    ERIC Educational Resources Information Center

    Kohl, Herb

    1978-01-01

    A fifth grader wanted to know what he had to do to get all his ideas the way he wanted them in his story writing "and" have the spelling, punctuation and quotation marks correctly styled. His teacher encouraged him to think about writing as a process and provided the student with three steps as guidelines for effective writing. (Author/RK)

  10. Counselor Education for Corrections.

    ERIC Educational Resources Information Center

    Parsigian, Linda

    Counselor education programs most often prepare their graduates to work in either a school setting, anywhere from the elementary level through higher education, or a community agency. There is little indication that counselor education programs have seriously undertaken the task of training counselors to enter the correctional field. If…

  11. Exposure Corrections for Macrophotography

    ERIC Educational Resources Information Center

    Nikolic, N. M.

    1976-01-01

    Describes a method for determining the exposure correction factors in close-up photography and macrophotography. The method eliminates all calculations during picture-taking, and allows the use of a light meter to obtain the proper f-stop/exposure time combinations. (Author/MLH)

  12. Calculation Of Pneumatic Attenuation In Pressure Sensors

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.

    1991-01-01

    Errors caused by attenuation of air-pressure waves in narrow tubes calculated by method based on fundamental equations of flow. Changes in ambient pressure transmitted along narrow tube to sensor. Attenuation of high-frequency components of pressure wave calculated from wave equation derived from Navier-Stokes equations of viscous flow in tube. Developed to understand and compensate for frictional attenuation in narrow tubes used to connect aircraft pressure sensors with pressure taps on affected surfaces.

  13. Improving attenuation predictions in heterogeneous porous media using laboratory data

    NASA Astrophysics Data System (ADS)

    Masson, Y.; Pride, S. R.

    2009-12-01

    Over the past decade, significant efforts have been made to accurately model the dispersion and attenuation of seismic waves propagating in heterogeneous porous media. Different analytical models such as patchy saturation, squirt flow, or double porosity as well as some numerical approaches have been proposed. All these approaches account for losses due to wave induced flow occurring at the mesoscopic scale; i.e., much smaller than the wavelength but greater than the grain size. Some models, such as squirt flow, can explain attenuation levels observed in lab experiments while others, like the double porosity model, are better at explaining the attenuation measured in the seismic band of frequencies. Numerical methods are more general and can be used in any frequency range. However, to correctly use the predictive power of these modeling methods, it is crucial to have a good knowledge of the nature of the heterogeneities present within the propagating medium. Among other parameters such as the porosity or the permeability, it is important to have a good idea of the spatial distribution of the elastic properties within the propagating medium, since the elastic moduli of the frame of grains determine how much the fluid pressure changes and how much mesoscopic flow occurs. These fluctuations in the elastic moduli in rocks remain largely unknown at the mesoscopic scale. To fill this gap, we developed a micro-indenter able to map the elastic moduli of rock samples with a sub-millimetric resolution. Various maps of the elastic properties obtained from different rock samples will be presented, as well as the attenuation level estimated from these data.

  14. Quasars as very-accurate clock synchronizers

    NASA Technical Reports Server (NTRS)

    Hurd, W. J.; Goldstein, R. M.

    1975-01-01

    Quasars can be employed to synchronize global data communications, geophysical measurements, and atomic clocks. The technique is potentially two to three orders of magnitude better than the presently used Moon-bounce system. Comparisons between quasar and clock pulses are used to develop correction or synchronization factors for station clocks.

  15. Gradient Artefact Correction and Evaluation of the EEG Recorded Simultaneously with fMRI Data Using Optimised Moving-Average.

    PubMed

    Ferreira, José L; Wu, Yan; Besseling, René M H; Lamerichs, Rolf; Aarts, Ronald M

    2016-01-01

    Over the past years, coregistered EEG-fMRI has emerged as a powerful tool for neurocognitive research and correlated studies, mainly because of the possibility of integrating the high temporal resolution of the EEG with the high spatial resolution of fMRI. However, additional work remains to be done in order to improve the quality of the EEG signal recorded simultaneously with fMRI data, in particular regarding the occurrence of the gradient artefact. We devised and presented in this paper a novel approach for gradient artefact correction based upon optimised moving-average filtering (OMA). OMA makes use of the iterative application of a moving-average filter, which allows estimation and cancellation of the gradient artefact by integration. Additionally, OMA is capable of performing the attenuation of the periodic artefact activity without accurate information about MRI triggers. By using our proposed approach, it is possible to achieve a better balance than the slice-average subtraction as performed by the established AAS method, regarding EEG signal preservation together with effective suppression of the gradient artefact. Since the stochastic nature of the EEG signal complicates the assessment of EEG preservation after application of the gradient artefact correction, we also propose a simple and effective method to account for it. PMID:27446943
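
    To make the comparison above concrete, the sketch below implements the slice-average (AAS-style) subtraction baseline that OMA is compared against, not the OMA algorithm itself: artefact epochs of one known gradient period are averaged over a sliding window to form a template, which is subtracted epoch by epoch. Sampling rate, period, and signals are synthetic assumptions.

      import numpy as np

      def aas_subtract(signal, period, n_avg=20):
          """Sliding average-artefact subtraction: average n_avg neighbouring
          artefact epochs of length `period` (in samples) and subtract the
          resulting template from each epoch."""
          n_epochs = len(signal) // period
          raw = signal[:n_epochs * period].reshape(n_epochs, period)
          cleaned = raw.copy()
          for i in range(n_epochs):
              lo = max(0, min(i - n_avg // 2, n_epochs - n_avg))
              template = raw[lo:lo + n_avg].mean(axis=0)   # built from uncorrected epochs
              cleaned[i] -= template
          out = signal.astype(float).copy()
          out[:n_epochs * period] = cleaned.ravel()
          return out

      # Synthetic test: a slow "EEG" component plus a strong periodic artefact.
      rng = np.random.default_rng(0)
      period, n_periods = 250, 80                     # samples per gradient period
      pattern = 500e-6 * rng.standard_normal(period)  # fixed artefact waveform
      artefact = np.tile(pattern, n_periods)
      t = np.arange(period * n_periods) / 5000.0      # assumed 5 kHz sampling
      eeg = 10e-6 * np.sin(2 * np.pi * 10.0 * t)
      cleaned = aas_subtract(eeg + artefact, period)
      print(np.std(cleaned - eeg), np.std(artefact))  # residual is far below the artefact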

  16. Gradient Artefact Correction and Evaluation of the EEG Recorded Simultaneously with fMRI Data Using Optimised Moving-Average

    PubMed Central

    Wu, Yan; Besseling, René M. H.; Lamerichs, Rolf; Aarts, Ronald M.

    2016-01-01

    Over the past years, coregistered EEG-fMRI has emerged as a powerful tool for neurocognitive research and correlated studies, mainly because of the possibility of integrating the high temporal resolution of the EEG with the high spatial resolution of fMRI. However, additional work remains to be done in order to improve the quality of the EEG signal recorded simultaneously with fMRI data, in particular regarding the occurrence of the gradient artefact. We devised and presented in this paper a novel approach for gradient artefact correction based upon optimised moving-average filtering (OMA). OMA makes use of the iterative application of a moving-average filter, which allows estimation and cancellation of the gradient artefact by integration. Additionally, OMA is capable of performing the attenuation of the periodic artefact activity without accurate information about MRI triggers. By using our proposed approach, it is possible to achieve a better balance than the slice-average subtraction as performed by the established AAS method, regarding EEG signal preservation together with effective suppression of the gradient artefact. Since the stochastic nature of the EEG signal complicates the assessment of EEG preservation after application of the gradient artefact correction, we also propose a simple and effective method to account for it. PMID:27446943

  17. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

    Thoriated-tungsten is pointed accurately and quickly by using sodium nitrite. Point produced is smooth and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces time and cost of preparing tungsten electrodes.

  18. Atmospheric attenuation calibrations of surface weather observations

    NASA Technical Reports Server (NTRS)

    Sanii, Babak

    2001-01-01

    A correlation between near-IR atmospheric attenuation measurements made by the Atmospheric Visibility Monitor (AVM) at the Table Mountain Facility and airport surface weather observations at Edwards Air Force Base has been performed. High correlations (over 0.93) exist between the simultaneous Edwards observed sky cover and the average AVM measured attenuations over the course of the 10 months analyzed. The statistical relationship between the data sets allows the determination of coarse attenuation statistics from the surface observations, suggesting that such statistics may be extrapolated from any surface weather observation site. Furthermore, a superior technique for converting AVM images to attenuation values by way of MODTRAN predictions has been demonstrated.

  19. Differential dust attenuation in CALIFA galaxies

    NASA Astrophysics Data System (ADS)

    Vale Asari, N.; Cid Fernandes, R.; Amorim, A. L.; Lacerda, E. A. D.; Schlickmann, M.; Wild, V.; Kennicutt, R. C.

    2016-06-01

    Dust attenuation has long been treated as a simple parameter in SED fitting. Real galaxies are, however, much more complicated: The measured dust attenuation is not a simple function of the dust optical depth, but depends strongly on galaxy inclination and the relative distribution of stars and dust. We study the nebular and stellar dust attenuation in CALIFA galaxies, and propose some empirical recipes to make the dust treatment more realistic in spectral synthesis codes. By adding optical recombination emission lines, we find better constraints for differential attenuation. Those recipes can be applied to unresolved galaxy spectra, and lead to better recovered star formation rates.

  20. Informatics-based, highly accurate, noninvasive prenatal paternity testing

    PubMed Central

    Ryan, Allison; Baner, Johan; Demko, Zachary; Hill, Matthew; Sigurjonsson, Styrmir; Baird, Michael L.; Rabinowitz, Matthew

    2013-01-01

    Purpose: The aim of the study was to evaluate the diagnostic accuracy of an informatics-based, noninvasive, prenatal paternity test using array-based single-nucleotide polymorphism measurements of cell-free DNA isolated from maternal plasma. Methods: Blood samples were taken from 21 adult pregnant women (with gestational ages between 6 and 21 weeks), and a genetic sample was taken from the corresponding biological fathers. Paternity was confirmed by genetic testing of the infant, products of conception, control of fertilization, and/or preimplantation genetic diagnosis during in vitro fertilization. Parental DNA samples and maternal plasma cell-free DNA were amplified and analyzed using a HumanCytoSNP-12 array. An informatics-based method measured single-nucleotide polymorphism data, confirming or rejecting paternity. Each plasma sample with a sufficient fetal cell-free DNA fraction was independently tested against the confirmed father and 1,820 random, unrelated males. Results: One of the 21 samples had insufficient fetal cell-free DNA. The test correctly confirmed paternity for the remaining 20 samples (100%) when tested against the biological father, with P values of <10−4. For the 36,400 tests using an unrelated male as the alleged father, 99.95% (36,382) correctly excluded paternity and 0.05% (18) were indeterminate. There were no miscalls. Conclusion: A noninvasive paternity test using informatics-based analysis of single-nucleotide polymorphism array measurements accurately determined paternity early in pregnancy. PMID:23258349

  1. Highly Accurate Inverse Consistent Registration: A Robust Approach

    PubMed Central

    Reuter, Martin; Rosas, H. Diana; Fischl, Bruce

    2010-01-01

    The registration of images is a task that is at the core of many applications in computer vision. In computational neuroimaging where the automated segmentation of brain structures is frequently used to quantify change, a highly accurate registration is necessary for motion correction of images taken in the same session, or across time in longitudinal studies where changes in the images can be expected. This paper, inspired by Nestares and Heeger (2000), presents a method based on robust statistics to register images in the presence of differences, such as jaw movement, differential MR distortions and true anatomical change. The approach we present guarantees inverse consistency (symmetry), can deal with different intensity scales and automatically estimates a sensitivity parameter to detect outlier regions in the images. The resulting registrations are highly accurate due to their ability to ignore outlier regions and show superior robustness with respect to noise, to intensity scaling and outliers when compared to state-of-the-art registration tools such as FLIRT (in FSL) or the coregistration tool in SPM. PMID:20637289

  2. Accurate phylogenetic classification of DNA fragments based onsequence composition

    SciTech Connect

    McHardy, Alice C.; Garcia Martin, Hector; Tsirigos, Aristotelis; Hugenholtz, Philip; Rigoutsos, Isidore

    2006-05-01

    Metagenome studies have retrieved vast amounts of sequence out of a variety of environments, leading to novel discoveries and great insights into the uncultured microbial world. Except for very simple communities, diversity makes sequence assembly and analysis a very challenging problem. To understand the structure and function of microbial communities, a taxonomic characterization of the obtained sequence fragments is highly desirable, yet currently limited mostly to those sequences that contain phylogenetic marker genes. We show that for clades at the rank of domain down to genus, sequence composition allows the very accurate phylogenetic characterization of genomic sequence. We developed a composition-based classifier, PhyloPythia, for de novo phylogenetic sequence characterization and have trained it on a data set of 340 genomes. By extensive evaluation experiments we show that the method is accurate across all taxonomic ranks considered, even for sequences that originate from novel organisms and are as short as 1 kb. Application to two metagenome datasets obtained from samples of phosphorus-removing sludge showed that the method allows the accurate classification at genus level of most sequence fragments from the dominant populations, while at the same time correctly characterizing even larger parts of the samples at higher taxonomic levels.
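
    As a toy illustration of composition-based classification of the kind described above (not PhyloPythia itself), the sketch below maps sequences to k-mer frequency vectors and assigns a fragment the majority label of its nearest training profiles; the function names and any training data supplied to them are hypothetical.

      import itertools
      import numpy as np

      def kmer_profile(seq, k=4):
          """Normalized k-mer frequency vector of a DNA sequence."""
          kmers = ["".join(p) for p in itertools.product("ACGT", repeat=k)]
          index = {km: i for i, km in enumerate(kmers)}
          counts = np.zeros(len(kmers))
          for i in range(len(seq) - k + 1):
              j = index.get(seq[i:i + k].upper())
              if j is not None:                 # skip k-mers containing N etc.
                  counts[j] += 1
          total = counts.sum()
          return counts / total if total else counts

      def classify_fragment(fragment, train_seqs, train_labels, k=4, n_neighbors=3):
          """Majority label among the nearest training profiles in k-mer space."""
          query = kmer_profile(fragment, k)
          profiles = np.array([kmer_profile(s, k) for s in train_seqs])
          order = np.argsort(np.linalg.norm(profiles - query, axis=1))[:n_neighbors]
          votes = [train_labels[i] for i in order]
          return max(set(votes), key=votes.count)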

  3. Accurate estimation of sigma(exp 0) using AIRSAR data

    NASA Technical Reports Server (NTRS)

    Holecz, Francesco; Rignot, Eric

    1995-01-01

    During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data as well as estimation of geophysical parameters from SAR data have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of the aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.

  4. Accurate Determination of Membrane Dynamics with Line-Scan FCS

    PubMed Central

    Ries, Jonas; Chiantia, Salvatore; Schwille, Petra

    2009-01-01

    Here we present an efficient implementation of line-scan fluorescence correlation spectroscopy (i.e., one-dimensional spatio-temporal image correlation spectroscopy) using a commercial laser scanning microscope, which allows the accurate measurement of diffusion coefficients and concentrations in biological lipid membranes within seconds. Line-scan fluorescence correlation spectroscopy is a calibration-free technique. Therefore, it is insensitive to optical artifacts, saturation, or incorrect positioning of the laser focus. In addition, it is virtually unaffected by photobleaching. Correction schemes for residual inhomogeneities and depletion of fluorophores due to photobleaching extend the applicability of line-scan fluorescence correlation spectroscopy to more demanding systems. This technique enabled us to measure accurate diffusion coefficients and partition coefficients of fluorescent lipids in phase-separating supported bilayers of three commonly used raft-mimicking compositions. Furthermore, we probed the temperature dependence of the diffusion coefficient in several model membranes, and in human embryonic kidney cell membranes not affected by temperature-induced optical aberrations. PMID:19254560

  5. Anisotropic Turbulence Modeling for Accurate Rod Bundle Simulations

    SciTech Connect

    Baglietto, Emilio

    2006-07-01

    An improved anisotropic eddy viscosity model has been developed for accurate predictions of the thermal hydraulic performances of nuclear reactor fuel assemblies. The proposed model adopts a non-linear formulation of the stress-strain relationship in order to include the reproduction of the anisotropic phenomena, and in combination with an optimized low-Reynolds-number formulation based on Direct Numerical Simulation (DNS) to produce correct damping of the turbulent viscosity in the near wall region. This work underlines the importance of accurate anisotropic modeling to faithfully reproduce the scale of the turbulence driven secondary flows inside the bundle subchannels, by comparison with various isothermal and heated experimental cases. The very low scale secondary motion is responsible for the increased turbulence transport which produces a noticeable homogenization of the velocity distribution and consequently of the circumferential cladding temperature distribution, which is of main interest in bundle design. Various fully developed bare bundles test cases are shown for different geometrical and flow conditions, where the proposed model shows clearly improved predictions, in close agreement with experimental findings, for regular as well as distorted geometries. Finally the applicability of the model for practical bundle calculations is evaluated through its application in the high-Reynolds form on coarse grids, with excellent results. (author)

  6. Quality metric for accurate overlay control in <20nm nodes

    NASA Astrophysics Data System (ADS)

    Klein, Dana; Amit, Eran; Cohen, Guy; Amir, Nuriel; Har-Zvi, Michael; Huang, Chin-Chou Kevin; Karur-Shanmugam, Ramkumar; Pierson, Bill; Kato, Cindy; Kurita, Hiroyuki

    2013-04-01

    The semiconductor industry is moving toward 20nm nodes and below. As the Overlay (OVL) budget is getting tighter at these advanced nodes, the importance in the accuracy in each nanometer of OVL error is critical. When process owners select OVL targets and methods for their process, they must do it wisely; otherwise the reported OVL could be inaccurate, resulting in yield loss. The same problem can occur when the target sampling map is chosen incorrectly, consisting of asymmetric targets that will cause biased correctable terms and a corrupted wafer. Total measurement uncertainty (TMU) is the main parameter that process owners use when choosing an OVL target per layer. Going towards the 20nm nodes and below, TMU will not be enough for accurate OVL control. KLA-Tencor has introduced a quality score named `Qmerit' for its imaging based OVL (IBO) targets, which is obtained on the-fly for each OVL measurement point in X & Y. This Qmerit score will enable the process owners to select compatible targets which provide accurate OVL values for their process and thereby improve their yield. Together with K-T Analyzer's ability to detect the symmetric targets across the wafer and within the field, the Archer tools will continue to provide an independent, reliable measurement of OVL error into the next advanced nodes, enabling fabs to manufacture devices that meet their tight OVL error budgets.

  7. Isomerism of Cyanomethanimine: Accurate Structural, Energetic, and Spectroscopic Characterization.

    PubMed

    Puzzarini, Cristina

    2015-11-25

    The structures, relative stabilities, and rotational and vibrational parameters of the Z-C-, E-C-, and N-cyanomethanimine isomers have been evaluated using state-of-the-art quantum-chemical approaches. Equilibrium geometries have been calculated by means of a composite scheme based on coupled-cluster calculations that accounts for the extrapolation to the complete basis set limit and core-correlation effects. The latter approach is proved to provide molecular structures with an accuracy of 0.001-0.002 Å and 0.05-0.1° for bond lengths and angles, respectively. Systematically extrapolated ab initio energies, accounting for electron correlation through coupled-cluster theory, including up to single, double, triple, and quadruple excitations, and corrected for core-electron correlation and anharmonic zero-point vibrational energy, have been used to accurately determine relative energies and the Z-E isomerization barrier with an accuracy of about 1 kJ/mol. Vibrational and rotational spectroscopic parameters have been investigated by means of hybrid schemes that allow us to obtain rotational constants accurate to about a few megahertz and vibrational frequencies with a mean absolute error of ∼1%. Where available, for all properties considered, a very good agreement with experimental data has been observed. PMID:26529434
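
    One standard ingredient of such composite schemes is the two-point complete-basis-set extrapolation of the correlation energy, E(X) = E_CBS + A X^-3 for consecutive cardinal numbers X; the sketch below implements that formula with invented energies, not values computed for cyanomethanimine.

      def cbs_two_point(e_x, e_y, x, y):
          """Two-point X**-3 extrapolation: solve E(X) = E_CBS + A/X**3 for E_CBS
          from correlation energies e_x, e_y at cardinal numbers x < y."""
          return (y ** 3 * e_y - x ** 3 * e_x) / (y ** 3 - x ** 3)

      # Hypothetical correlation energies (hartree) in cc-pVTZ (X=3) and cc-pVQZ (X=4).
      print(cbs_two_point(-0.512345, -0.525678, 3, 4))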

  8. Methods for correcting microwave scattering and emission measurements for atmospheric effects

    NASA Technical Reports Server (NTRS)

    Komen, M. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. Algorithms were developed to permit correction of scattering coefficient and brightness temperature for the Skylab S193 Radscat for the effects of cloud attenuation. These algorithms depend upon a measurement of the vertically polarized excess brightness temperature at 50 deg incidence angle. This excess temperature is converted to an equivalent 50 deg attenuation, which may then be used to estimate the horizontally polarized excess brightness temperature and reduced scattering coefficient at 50 deg. For angles other than 50 deg, the correction also requires use of the variation of emissivity with salinity and water temperature.

  9. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy

    PubMed Central

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T.; Cerutti, Francesco; Chin, Mary P. W.; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G.; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R.; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both 4He and 12C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions lead to the excellent agreement of calculated depth–dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is capable of importing also radiotherapy treatment data described in DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features. PMID:27242956

  10. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy.

    PubMed

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T; Cerutti, Francesco; Chin, Mary P W; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both (4)He and (12)C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions lead to the excellent agreement of calculated depth-dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is capable of importing also radiotherapy treatment data described in DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features. PMID:27242956

  11. Empirical beam hardening correction (EBHC) for CT

    SciTech Connect

    Kyriakou, Yiannis; Meyer, Esther; Prell, Daniel; Kachelriess, Marc

    2010-10-15

    Purpose: Due to x-ray beam polychromaticity and scattered radiation, attenuation measurements tend to be underestimated. Cupping and beam hardening artifacts become apparent in the reconstructed CT images. If only one material such as water, for example, is present, these artifacts can be reduced by precorrecting the rawdata. Higher order beam hardening artifacts, as they result when a mixture of materials such as water and bone, or water and bone and iodine is present, require an iterative beam hardening correction where the image is segmented into different materials and those are forward projected to obtain new rawdata. Typically, the forward projection must correctly model the beam polychromaticity and account for all physical effects, including the energy dependence of the assumed materials in the patient, the detector response, and others. We propose a new algorithm that does not require any knowledge about spectra or attenuation coefficients and that does not need to be calibrated. The proposed method corrects beam hardening in single energy CT data. Methods: The only a priori knowledge entering EBHC is the segmentation of the object into different materials. Materials other than water are segmented from the original image, e.g., by using simple thresholding. Then, a (monochromatic) forward projection of these other materials is performed. The measured rawdata and the forward projected material-specific rawdata are monomially combined (e.g., multiplied or squared) and reconstructed to yield a set of correction volumes. These are then linearly combined and added to the original volume. The combination weights are determined to maximize the flatness of the new and corrected volume. EBHC is evaluated using data acquired with a modern cone-beam dual-source spiral CT scanner (Somatom Definition Flash, Siemens Healthcare, Forchheim, Germany), with a modern dual-source micro-CT scanner (TomoScope Synergy Twin, CT Imaging GmbH, Erlangen, Germany), and with a modern

  12. Licking Microstructure Reveals Rapid Attenuation of Neophobia

    PubMed Central

    Monk, Kevin J.; Rubin, Benjamin D.

    2014-01-01

    Many animals hesitate when initially consuming a novel food and increase their consumption of that food between the first and second sessions of access—a process termed attenuation of neophobia (AN). AN has received attention as a model of learning and memory; it has been suggested that plasticity resulting from an association of the novel tastant with “safe outcome” results in a change in the neural response to the tastant during the second session, such that consumption increases. Most studies have reported that AN emerges only an hour or more after the end of the first exposure to the tastant, consistent with what is known of learning-related plasticity. But these studies have typically measured consumption, rather than real-time behavior, and thus the possibility exists that a more rapidly developing AN remains to be discovered. Here, we tested this possibility, examining both consumption and individual lick times in a novel variant of a brief-access task (BAT). When quantified in terms of consumption, data from the BAT accorded well with the results of a classic one-bottle task—both revealed neophobia/AN specific to higher concentrations (for instance, 28mM) of saccharin. An analysis of licking microstructure, however, additionally revealed a real-time correlate of neophobia—an explicit tendency, similarly specific for 28-mM saccharin, to cut short the initial bout of licks in a single trial (compared with water). This relative hesitancy (i.e., the shortness of the first lick bout to 28-mM saccharin compared with water) that constitutes neophobia not only disappeared between sessions but also gradually declined in magnitude across session 1. These data demonstrate that the BAT accurately measures AN, and that aspects of AN—and the processes underlying familiarization—begin within minutes of the very first taste. PMID:24363269

  13. Underwing compression vortex attenuation device

    NASA Technical Reports Server (NTRS)

    Patterson, James C., Jr. (Inventor)

    1993-01-01

    A vortex attenuation device is presented which dissipates a lift-induced vortex generated by a lifting aircraft wing. The device consists of a positive pressure gradient producing means in the form of a compression panel attached to the lower surface of the wing and facing perpendicular to the airflow across the wing. The panel is located between the midpoint of the local wing chord and the trailing edge in the chord-wise direction and at a point which is approximately 55 percent of the wing span as measured from the fuselage center line in the spanwise direction. When deployed in flight, this panel produces a positive pressure gradient aligned with the final roll-up of the total vortex system which interrupts the axial flow in the vortex core and causes the vortex to collapse.

  14. SEISMIC ATTENUATION FOR RESERVOIR CHARACTERIZATION

    SciTech Connect

    Joel Walls; M.T. Taner; Naum Derzhi; Gary Mavko; Jack Dvorkin

    2002-10-01

    RSI has access to two synthetic seismic programs: the Osiris seismic modeling system provided by Odegaard (Osiris) and a synthetic seismic program, developed by SRB, implementing the Kennett method for normal incidence. Achieving virtually identical synthetic seismic traces from these different programs serves as cross-validation for both. The subsequent experiments have been performed with the Kennett normal incidence code because: we have access to the source code, which allowed us to easily control computational parameters and integrate the synthetics computations with our graphical and I/O systems; and this code allows us to perform computations and displays on a PC in a MatLab or Octave environment, which is faster and more convenient. The normal incidence model allows us to exclude from the synthetic traces some of the physical effects that take place in 3-D models (like inhomogeneous waves) but have no relevance to the topic of our investigation, which is attenuation effects on seismic reflection and transmission.

  15. Robust determination of mass attenuation coefficients of materials with unknown thickness and density

    NASA Astrophysics Data System (ADS)

    Kurudirek, M.; Medhat, M. E.

    2014-07-01

    An alternative approach is used to measure normalized mass attenuation coefficients (µ/ρ) of materials with unknown thickness and density. The adopted procedure is based on the use of simultaneous emission of Kα and Kβ X-ray lines as well as gamma peaks from radioactive sources in transmission geometry. 109Cd and 60Co radioactive sources were used for the purpose of the investigation. It has been observed that using the simultaneous X- and/or gamma rays of different energy allows accurate determination of relative mass attenuation coefficients by eliminating the dependence of µ/ρ on thickness and density of the material.
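
    A minimal numerical check of why the simultaneous transmission of two energies removes the unknown areal density: with T = exp[-(µ/ρ)ρt] at each energy, the ratio ln(T1)/ln(T2) equals (µ/ρ)1/(µ/ρ)2 independent of thickness and density. The coefficients and areal density below are arbitrary placeholders.

      import numpy as np

      def relative_mass_attenuation(trans_1, trans_2):
          """(mu/rho)_1 / (mu/rho)_2 from transmissions measured at two energies
          through the same (unknown) areal density rho*t."""
          return np.log(trans_1) / np.log(trans_2)

      # Self-check with made-up coefficients and an arbitrary areal density.
      mu_rho_1, mu_rho_2, rho_t = 0.35, 0.20, 2.7      # cm^2/g, cm^2/g, g/cm^2
      t1, t2 = np.exp(-mu_rho_1 * rho_t), np.exp(-mu_rho_2 * rho_t)
      print(relative_mass_attenuation(t1, t2), mu_rho_1 / mu_rho_2)   # both 1.75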

  16. Reconstruction of bremsstrahlung spectra from attenuation data using generalized simulated annealing.

    PubMed

    Menin, O H; Martinez, A S; Costa, A M

    2016-05-01

    A generalized simulated annealing algorithm, combined with a suitable smoothing regularization function, is used to solve the inverse problem of X-ray spectrum reconstruction from attenuation data. The approach is to set the initial acceptance and visitation temperatures and to standardize the terms of the objective function so as to automate the algorithm to accommodate different spectral ranges. Experiments with both numerical and measured attenuation data are presented. Results show that the algorithm reconstructs spectral shapes accurately. It should be noted that in this algorithm the regularization function was formulated to guarantee a smooth spectrum; thus, the presented technique does not apply to X-ray spectra where characteristic radiation is present. PMID:26943902
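
    A schematic version of this inverse problem, using SciPy's generalized simulated annealing routine (dual_annealing) in place of the authors' implementation: recover a discretized spectrum w from transmission data d_j = Σ_i w_i exp(-µ_i t_j) under a smoothness penalty. The energy grid, attenuation curve, and filter thicknesses below are synthetic assumptions.

      import numpy as np
      from scipy.optimize import dual_annealing

      # Synthetic setup: coarse energy grid, made-up attenuation curve mu(E) of an
      # aluminium-like filter, and a set of added filter thicknesses.
      energies = np.linspace(20.0, 100.0, 9)            # keV
      mu = 5.0 * (energies / 20.0) ** -2.5              # 1/cm, placeholder curve
      thicknesses = np.linspace(0.0, 3.0, 15)           # cm

      def transmission(weights):
          w = np.abs(weights) + 1e-12
          w /= w.sum()
          return np.array([np.sum(w * np.exp(-mu * t)) for t in thicknesses])

      w_true = np.exp(-0.5 * ((energies - 55.0) / 15.0) ** 2)   # "true" spectrum
      data = transmission(w_true)

      def objective(w, lam=1e-3):
          """Data misfit plus a smoothing regularization term on the spectrum."""
          return np.sum((transmission(w) - data) ** 2) + lam * np.sum(np.diff(w) ** 2)

      result = dual_annealing(objective, bounds=[(0.0, 1.0)] * energies.size,
                              maxiter=300, seed=1)
      w_est = np.abs(result.x) / np.abs(result.x).sum()
      print(np.round(w_est, 3))
      print(np.round(w_true / w_true.sum(), 3))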

  17. Accuracies and Contrasts of Models of the Diffusion-Weighted-Dependent Attenuation of the MRI Signal at Intermediate b-values

    PubMed Central

    Nicolas, Renaud; Sibon, Igor; Hiba, Bassem

    2015-01-01

    The diffusion-weighted-dependent attenuation of the MRI signal E(b) is extremely sensitive to microstructural features. The aim of this study was to determine which mathematical model of the E(b) signal most accurately describes it in the brain. The models compared were the monoexponential model, the stretched exponential model, the truncated cumulant expansion (TCE) model, the biexponential model, and the triexponential model. Acquisition was performed with nine b-values up to 2500 s/mm2 in 12 healthy volunteers. The goodness-of-fit was studied with F-tests and with the Akaike information criterion. Tissue contrasts were differentiated with a multiple comparison corrected nonparametric analysis of variance. F-test showed that the TCE model was better than the biexponential model in gray and white matter. Corrected Akaike information criterion showed that the TCE model has the best accuracy and produced the most reliable contrasts in white matter among all models studied. In conclusion, the TCE model was found to be the best model to infer the microstructural properties of brain tissue. PMID:26106263
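
    A small sketch of fitting the truncated cumulant expansion (diffusional kurtosis) signal model discussed above, E(b) = S0 exp(-bD + b²D²K/6), to a set of b-values up to 2500 s/mm² with SciPy; the D and K values used to generate the synthetic data are assumptions, not the study's measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      def tce_signal(b, s0, d, k):
          """Truncated cumulant expansion (kurtosis) model of the DW signal."""
          return s0 * np.exp(-b * d + (b * d) ** 2 * k / 6.0)

      b = np.linspace(0.0, 2500.0, 9)                  # s/mm^2, nine b-values
      d_true, k_true = 0.8e-3, 0.9                     # assumed D (mm^2/s) and K
      rng = np.random.default_rng(0)
      signal = tce_signal(b, 1.0, d_true, k_true) + 0.005 * rng.standard_normal(b.size)

      popt, _ = curve_fit(tce_signal, b, signal, p0=[1.0, 1e-3, 0.5])
      print(popt)    # recovered [S0, D, K]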

  18. Reanalysis of S-to-P amplitude ratios for gross attenuation structure, Long Valley caldera, California

    SciTech Connect

    Sanders, C.O.

    1993-12-01

    Because of the strong interest in the magmatism and volcanism at Long Valley caldera, eastern California, and because of recent significant improvements in our knowledge of the caldera velocity structure and earthquake locations, I have reanalyzed the local-earthquake S-to-P amplitude-ratio data of Sanders (1984) for the gross three-dimensional attenuation structure of the upper 10 km of Long Valley caldera. The primary goals of the analysis are to provide more accurate constraints on the depths of the attenuation anomalies using improved knowledge of the ray locations and an objective inversion procedure. The new image of the high S wave attenuation anomaly in the west-central caldera suggests that the top of the principal anomaly is at 7-km depth, which is 2 km deeper than previously determined. Because of poor resolution in much of the region, some of the data remain unsatisfied by the final attenuation model. These unmodeled data may imply unresolved attenuation anomalies, perhaps small anomalies in the kilometer or two just above the central-caldera anomaly and perhaps a larger anomaly at about 7-km depth in the northwest caldera or somewhere beneath the Mono Craters. The central-caldera S wave attenuation anomaly has a location similar to mapped regions of low teleseismic P wave velocity, crustal inflation, reduced density, and aseismicity, strongly suggesting a magmatic association.

  19. A blind deconvolution method for attenuative materials based on asymmetrical Gaussian model.

    PubMed

    Jin, Haoran; Chen, Jian; Yang, Keji

    2016-08-01

    During propagation in attenuative materials, ultrasonic waves are distorted by frequency-dependent acoustic attenuation. As a result, reference signals for blind deconvolution in attenuative materials are asymmetrical and should be accurately estimated by considering attenuation. In this study, an asymmetrical Gaussian model is established to estimate the reference signals from these materials, and a blind deconvolution method based on this model is proposed. Based on the symmetrical Gaussian model, the asymmetrical one is formulated by adding an asymmetrical coefficient. Upon establishing the model, the reference signal for blind deconvolution is determined via maximum likelihood estimation, and the blind deconvolution is implemented with an orthogonal matching pursuit algorithm. To verify the feasibility of the established model, spectra of ultrasonic signals from attenuative polyethylene plates with different thicknesses are measured and estimated. The proposed blind deconvolution method is applied to the A-scan signal and B-scan image from attenuative materials. Results demonstrate that the proposed method is capable of separating overlapping echoes and therefore achieves a high temporal resolution. PMID:27586747
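
    One common way to introduce such an asymmetry coefficient into a Gaussian-envelope echo model is sketched below in Python; the parameterization is illustrative (not necessarily the paper's exact formulation), and the maximum-likelihood estimation and orthogonal-matching-pursuit deconvolution steps are not shown.

      import numpy as np

      def asymmetric_gaussian_echo(t, tau, fc, alpha, r, beta=1.0, phi=0.0):
          """Gaussian-envelope ultrasonic echo with an asymmetry coefficient r.
          For r = 0 this reduces to the usual symmetric Gaussian echo; r != 0 makes
          the envelope decay at different rates before and after the arrival time
          tau, mimicking distortion from frequency-dependent attenuation.
          (Illustrative parameterization, not necessarily the paper's.)
          """
          dt = t - tau
          bandwidth = alpha * (1.0 + r * np.sign(dt))   # different width on each side
          envelope = beta * np.exp(-bandwidth * dt ** 2)
          return envelope * np.cos(2.0 * np.pi * fc * dt + phi)

      # Example: 5 MHz echo sampled at 100 MHz, with a lengthened tail (r < 0)
      t = np.arange(0.0, 4e-6, 1e-8)
      echo = asymmetric_gaussian_echo(t, tau=2e-6, fc=5e6, alpha=2e12, r=-0.3)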

  20. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  1. Applying Source and Path Corrections to Improve Discrimination in China,

    SciTech Connect

    Hartse, H. E.; Taylor, S. R.; Phillips, W. S.; Randall, G. E.

    1997-01-01

    Monitoring the Comprehensive Test Ban Treaty (CTBT) to magnitude levels below 4.0 will require use of regional seismic data recorded at distances of less than 2000 km. To improve regional discriminant performance we tested three different methods of correcting for path effects; the third method also includes a correction for source scaling. We used regional broadband recordings from stations in and near China. Our first method removes trends between phase ratios and physical parameters associated with each event-station path. This approach requires knowledge of the physical parameters along an event-station path, such as topography, basin thickness, and crustal thickness. Our second approach is somewhat more empirical. We examine spatial distributions of phase amplitudes after subtracting event magnitude and correcting for path distance. For a given station, phase, and frequency band, we grid and then smooth the magnitude-corrected and distance-corrected amplitudes to create a map representing a correction surface. We reference these maps to correct phase amplitudes prior to forming discrimination ratios. Our third approach is the most complicated, but also the most rigorous. For a given station and phase, we invert the spectra of a number of well-recorded earthquakes for source and path parameters. We then use the values obtained from the inversion to correct phase amplitudes for the effects of source size, distance, and attenuation. Finally, the amplitude residuals are gridded and smoothed to create a correction surface representing secondary path effects. We find that simple ratio-parameter corrections can improve discrimination performance along some paths (such as Kazakh Test Site (KTS) to WMQ), but for other paths (such as Lop Nor to AAK) the corrections are not beneficial. Our second method, the empirical path correction surfaces, improves discrimination performance for Lop Nor to AAK paths. Our third method, combined source and path corrections, has only
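
    The gridding-and-smoothing step of the second method can be illustrated with the minimal NumPy/SciPy sketch below; the bin edges, smoothing width, and normalized-convolution handling of empty cells are assumptions for illustration, not the authors' implementation.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def correction_surface(lon, lat, residual, lon_edges, lat_edges, sigma=1.0):
          """Grid magnitude- and distance-corrected amplitude residuals and smooth
          them into a correction surface (illustrative stand-in for the paper's
          gridding/smoothing; empty cells borrow from neighbours via the smoothing).
          """
          sums, _, _ = np.histogram2d(lon, lat, bins=[lon_edges, lat_edges], weights=residual)
          counts, _, _ = np.histogram2d(lon, lat, bins=[lon_edges, lat_edges])
          smooth_sum = gaussian_filter(sums, sigma)     # weighted (normalized) smoothing:
          smooth_cnt = gaussian_filter(counts, sigma)   # smooth sums and counts separately
          with np.errstate(invalid="ignore", divide="ignore"):
              surface = np.where(smooth_cnt > 0, smooth_sum / smooth_cnt, 0.0)
          return surface  # subtract the surface value at each event to correct amplitudes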

  2. Correction coil cable

    DOEpatents

    Wang, Sou-Tien

    1994-11-01

    A wire cable assembly (10, 310) adapted for the winding of electrical coils is taught. A primary intended use is in particle tube assemblies (532) for the superconducting super collider. The correction coil cables (10, 310) have wires (14, 314) collected in wire arrays (12, 312) with a center rib (16, 316) sandwiched therebetween to form a core assembly (18, 318). The core assembly (18, 318) is surrounded by an assembly housing (20, 320) having an inner spiral wrap (22, 322) and a counter-wound outer spiral wrap (24, 324). An alternate embodiment (410) of the invention is rolled into a keystoned shape to improve radial alignment of the correction coil cable (410) on a particle tube (733) in a particle tube assembly (732).

  3. CTI Correction Code

    NASA Astrophysics Data System (ADS)

    Massey, Richard; Stoughton, Chris; Leauthaud, Alexie; Rhodes, Jason; Koekemoer, Anton; Ellis, Richard; Shaghoulian, Edgar

    2013-07-01

    Charge Transfer Inefficiency (CTI) due to radiation damage above the Earth's atmosphere creates spurious trailing in images from Charge-Coupled Device (CCD) imaging detectors. Radiation damage also creates unrelated warm pixels, which can be used to measure CTI. This code provides pixel-based correction for CTI and has proven effective on Hubble Space Telescope Advanced Camera for Surveys raw images, successfully reducing the CTI trails by a factor of ~30 everywhere in the CCD and at all flux levels. The core is written in Java for speed, and a front-end user interface is provided in IDL. The code operates on raw data by returning individual electrons to the pixels from which they were unintentionally dragged during readout. Correction takes about 25 minutes per ACS exposure, but is trivially parallelisable to multiple processors.

  4. Feedback about more accurate versus less accurate trials: differential effects on self-confidence and activation.

    PubMed

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-06-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705

  5. Error-correction coding

    NASA Technical Reports Server (NTRS)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  6. Correction and updating.

    PubMed

    1994-03-01

    In the heading of David Cassidy's review of The Private Lives of Albert Einstein (18 February, p. 997) the price of the book as sold by its British publisher, Faber and Faber, was given incorrectly; the correct price is £15.99. The book is also to be published in the United States by St. Martin's Press, New York, in April, at a price of $23.95. PMID:17817438

  7. Improving Earthquake-Explosion Discrimination using Attenuation Models of the Crust and Upper Mantle

    SciTech Connect

    Pasyanos, M E; Walter, W R; Matzel, E M; Rodgers, A J; Ford, S R; Gok, R; Sweeney, J J

    2009-07-06

    In the past year, we have made significant progress on developing and calibrating methodologies to improve earthquake-explosion discrimination using high-frequency regional P/S amplitude ratios. Closely-spaced earthquakes and explosions generally discriminate easily using this method, as demonstrated by recordings of explosions from test sites around the world. In relatively simple geophysical regions such as the continental parts of the Yellow Sea and Korean Peninsula (YSKP) we have successfully used a 1-D Magnitude and Distance Amplitude Correction methodology (1-D MDAC) to extend the regional P/S technique over large areas. However in tectonically complex regions such as the Middle East, or the mixed oceanic-continental paths for the YSKP the lateral variations in amplitudes are not well predicted by 1-D corrections and 1-D MDAC P/S discrimination over broad areas can perform poorly. We have developed a new technique to map 2-D attenuation structure in the crust and upper mantle. We retain the MDAC source model and geometrical spreading formulation and use the amplitudes of the four primary regional phases (Pn, Pg, Sn, Lg), to develop a simultaneous multi-phase approach to determine the P-wave and S-wave attenuation of the lithosphere. The methodology allows solving for attenuation structure in different depth layers. Here we show results for the P and S-wave attenuation in crust and upper mantle layers. When applied to the Middle East, we find variations in the attenuation quality factor Q that are consistent with the complex tectonics of the region. For example, provinces along the tectonically-active Tethys collision zone (e.g. Turkish Plateau, Zagros) have high attenuation in both the crust and upper mantle, while the stable outlying regions like the Indian Shield generally have low attenuation. In the Arabian Shield, however, we find that the low attenuation in this Precambrian crust is underlain by a high-attenuation upper mantle similar to the nearby Red
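
    Schematically, amplitude models of this family combine a source spectrum, geometrical spreading, a path attenuation term, and a site term, as written below; the exact MDAC source and spreading expressions are not reproduced here.

      % Schematic regional-phase amplitude model (the precise source spectrum and
      % spreading terms are those of the MDAC formulation, not reproduced here):
      \[
        A_{ij}(f) = S_{i}(f)\, G(r_{ij})\,
        \exp\!\left(-\frac{\pi f\, r_{ij}}{Q(f)\, v}\right) P_{j}(f),
      \]
      % S_i(f): source spectrum of event i;  G(r_{ij}): geometrical spreading to
      % station j;  Q(f): path-averaged quality factor;  v: phase velocity;
      % P_j(f): site/receiver term.  Taking logarithms makes the inversion for 1/Q
      % over discretized path segments linear in the attenuation parameters.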

  8. Corrections to fundamental constants from photoelectric observations of lunar occultations

    NASA Astrophysics Data System (ADS)

    Rossello, G.

    1982-12-01

    A catalog of photoelectric occultations, which are more accurate than visual observations, is presented along with an analysis of the occultations intended to correct the FK4 stellar reference frame and lunar theory constants. A constant correction of +0.87 ± 0.06 at the epoch 1969.0 to the FK4 system is consistent with those obtained by other authors, and the corrections to the semidiameter and parallactic inequality are in accord with values recently obtained by Morrison and Appleby (1981).

  9. A self-correcting procedure for computational liquid metal magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Araseki, Hideo; Kotake, Shoji

    1994-02-01

    This paper describes a new application of the self-correcting procedure to computational liquid metal magnetohydrodynamics. In this procedure, the conservation law of the electric current density incorporated in a Poisson equation for the scalar potential plays an important role of correcting this potential. This role is similar to that of the conservation law of mass in a Poisson equation for the pressure. Some numerical results show that the proposed self-correcting procedure can provide a more accurate numerical solution of the electric current density than the existing solution procedure.
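
    In the usual low-magnetic-Reynolds-number (inductionless) formulation of liquid metal MHD, the conservation law referred to here takes the form below; the paper's discrete correction step is built on the same relation.

      \begin{align*}
        \mathbf{J} &= \sigma\left(-\nabla\phi + \mathbf{u}\times\mathbf{B}\right), \\
        \nabla\cdot\mathbf{J} &= 0
        \;\;\Longrightarrow\;\;
        \nabla^{2}\phi = \nabla\cdot\left(\mathbf{u}\times\mathbf{B}\right),
      \end{align*}
      % so enforcing charge conservation through this Poisson equation corrects the
      % scalar potential \phi, in analogy with the pressure Poisson equation that
      % enforces mass conservation.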

  10. New model accurately predicts reformate composition

    SciTech Connect

    Ancheyta-Juarez, J.; Aguilar-Rodriguez, E. )

    1994-01-31

    Although naphtha reforming is a well-known process, the evolution of catalyst formulation, as well as new trends in gasoline specifications, have led to rapid evolution of the process, including: reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.

  11. Accurate colorimetric feedback for RGB LED clusters

    NASA Astrophysics Data System (ADS)

    Man, Kwong; Ashdown, Ian

    2006-08-01

    We present an empirical model of LED emission spectra that is applicable to both InGaN and AlInGaP high-flux LEDs, and which accurately predicts their relative spectral power distributions over a wide range of LED junction temperatures. We further demonstrate with laboratory measurements that changes in LED spectral power distribution with temperature can be accurately predicted with first- or second-order equations. This provides the basis for a real-time colorimetric feedback system for RGB LED clusters that can maintain the chromaticity of white light at constant intensity to within +/-0.003 Δuv over a range of 45 degrees Celsius, and to within 0.01 Δuv when dimmed over an intensity range of 10:1.

  12. Accurate mask model for advanced nodes

    NASA Astrophysics Data System (ADS)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle

    2014-07-01

    Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates for the optical model's imprecision on top of modeling resist development. The optical model imprecision may result from mask topography effects and from real mask information, including mask e-beam writing and mask process contributions. For advanced technology nodes, significant progress has been made in modeling mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from the standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask model, enabling its implementation. The study covers the different effects that should be embedded in the mask model as well as the experiments required to model them.

  13. Accurate guitar tuning by cochlear implant musicians.

    PubMed

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  14. Two highly accurate methods for pitch calibration

    NASA Astrophysics Data System (ADS)

    Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.

    2009-11-01

    Together with profile, helix, and tooth thickness, pitch is one of the most important parameters in an involute gear measurement evaluation. In principle, coordinate measuring machines (CMMs) and CNC-controlled gear measuring machines, as a variant of the CMM, are suited for these kinds of gear measurements. The Japan National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) and the German national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), have each independently developed highly accurate pitch calibration methods applicable to CMMs or gear measuring machines. Both calibration methods are based on the so-called closure technique, which allows the separation of the systematic errors of the measurement device from the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.

  15. Accurate modeling of parallel scientific computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Townsend, James C.

    1988-01-01

    Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
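
    A minimal version of such a performance model, shown below purely for illustration (the paper's model is more detailed), predicts the per-iteration time of a bulk-synchronous computation from per-processor work and boundary-exchange costs; the slowest processor sets the pace.

      def predicted_iteration_time(cells_per_proc, boundary_cells_per_proc,
                                   t_compute, t_comm):
          """Illustrative bulk-synchronous performance model (not the paper's exact
          model): each processor computes its cells, then exchanges its partition
          boundary; the maximum over processors gives the iteration time.
          """
          return max(n * t_compute + b * t_comm
                     for n, b in zip(cells_per_proc, boundary_cells_per_proc))

      # Example: a 1-D grid of 1000 cells split unevenly over 4 processors
      print(predicted_iteration_time([300, 300, 250, 150], [2, 2, 2, 1],
                                     t_compute=1e-6, t_comm=5e-5))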

  16. Accurate Guitar Tuning by Cochlear Implant Musicians

    PubMed Central

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  17. Interventions to Correct Misinformation About Tobacco Products

    PubMed Central

    Cappella, Joseph N.; Maloney, Erin; Ophir, Yotam; Brennan, Emily

    2016-01-01

    In 2006, the U.S. District Court held that tobacco companies had “falsely and fraudulently” denied: tobacco causes lung cancer; environmental smoke endangers children’s respiratory systems; nicotine is highly addictive; low tar cigarettes were less harmful when they were not; they marketed to children; they manipulated nicotine delivery to enhance addiction; and they concealed and destroyed evidence to prevent accurate public knowledge. The courts required the tobacco companies to repair this misinformation. Several studies evaluated types of corrective statements (CS). We argue that most CS proposed (“simple CS’s”) will fall prey to “belief echoes” leaving affective remnants of the misinformation untouched while correcting underlying knowledge. Alternative forms for CS (“enhanced CS’s”) are proposed that include narrative forms, causal linkage, and emotional links to the receiver. PMID:27135046

  18. Prospects for aberration corrected electron precession.

    PubMed

    Own, C S; Sinkler, W; Marks, L D

    2007-01-01

    Recent developments in aberration control in the TEM have yielded a tremendous enhancement of direct imaging capabilities for studying atomic structures. However, aberration correction also has substantial benefits for achieving ultra-resolution in the TEM through reciprocal space techniques. Several tools are available that allow very accurate detection of the electron distribution in surfaces allowing precise atomic-scale characterization through statistical inversion techniques from diffraction data. The precession technique now appears to extend this capability to the bulk. This article covers some of the progress in this area and details requirements for a next-generation analytical diffraction instrument. An analysis of the contributions offered by aberration correction for precision electron precession is included. PMID:17207934

  19. An accurate registration technique for distorted images

    NASA Technical Reports Server (NTRS)

    Delapena, Michele; Shaw, Richard A.; Linde, Peter; Dravins, Dainis

    1990-01-01

    Accurate registration of International Ultraviolet Explorer (IUE) images is crucial because the variability of the geometrical distortions that are introduced by the SEC-Vidicon cameras ensures that raw science images are never perfectly aligned with the Intensity Transfer Functions (ITFs) (i.e., graded floodlamp exposures that are used to linearize and normalize the camera response). A technique for precisely registering IUE images which uses a cross correlation of the fixed pattern that exists in all raw IUE images is described.
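
    The core of such a registration step can be sketched as an FFT-based cross-correlation that locates the shift maximizing the match of the shared fixed pattern; the Python snippet below is illustrative only and omits the sub-pixel refinement and distortion modeling a real IUE pipeline would require.

      import numpy as np

      def cross_correlation_shift(image, reference):
          """Estimate the integer (dy, dx) shift aligning `image` to `reference`
          via FFT-based circular cross-correlation of a shared fixed pattern.
          (Illustrative; the actual IUE registration is more elaborate.)
          """
          f1 = np.fft.fft2(image - image.mean())
          f2 = np.fft.fft2(reference - reference.mean())
          cc = np.fft.ifft2(f1 * np.conj(f2)).real          # circular cross-correlation
          peak = np.unravel_index(np.argmax(cc), cc.shape)
          # Convert wrap-around peak indices to signed shifts
          return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, cc.shape))

      # Example: a synthetic 64x64 pattern shifted by (3, -5) pixels
      rng = np.random.default_rng(1)
      ref = rng.normal(size=(64, 64))
      img = np.roll(np.roll(ref, 3, axis=0), -5, axis=1)
      print(cross_correlation_shift(img, ref))              # expected (3, -5)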

  20. Accurate maser positions for MALT-45

    NASA Astrophysics Data System (ADS)

    Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven

    2013-10-01

    MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.