Attenuation correction of emission PET images with average CT: Interpolation from breath-hold CT
NASA Astrophysics Data System (ADS)
Huang, Tzung-Chi; Zhang, Geoffrey; Chen, Chih-Hao; Yang, Bang-Hung; Wu, Nien-Yun; Wang, Shyh-Jen; Wu, Tung-Hsin
2011-05-01
Misregistration resulting from the difference in temporal resolution between PET and CT scans occurs frequently in PET/CT imaging and distorts tumor quantification in PET. Several studies have reported that using respiration cine average CT (CACT) for PET attenuation correction effectively improves this misalignment. However, the radiation dose to the patient from a four-dimensional CT scan is relatively high. In this study, we propose a method to interpolate respiratory CT images over a respiratory cycle from inhalation and exhalation breath-hold CT images and to use the average of the generated CT set for PET attenuation correction, thereby reducing the radiation dose to the patient. Six cancer patients with lesions at various sites underwent routine free-breathing helical CT (HCT), respiration CACT, interpolated average CT (IACT), and 18F-FDG PET. Deformable image registration was used to interpolate the middle phases of a respiratory cycle from the end-inspiration and end-expiration breath-hold CT scans. The average CT image was calculated from the eight interpolated CT image sets of the middle respiratory phases and the two original inspiration and expiration CT images. The PET images were then reconstructed with attenuation correction based on each of the three methods: HCT, CACT, and IACT. Using either CACT or IACT for attenuation correction improved the misalignment in PET/CT. The difference in tumor standard uptake value (SUV) between PET images was largest between HCT- and CACT-based correction and smallest between CACT- and IACT-based correction. Besides yielding an improvement in tumor quantification similar to CACT, using IACT for PET attenuation correction reduces the radiation dose to the patient.
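The phase-interpolation idea above can be sketched in a few lines: given a deformation vector field (DVF) from deformable registration of the two breath-hold scans, intermediate phases are generated by scaling the field, and the IACT is the mean over all phases. This is a minimal 2D NumPy/SciPy sketch, not the authors' implementation; the linear scaling of the DVF and the function names are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def intermediate_phase(inhale, dvf, t):
    """Warp the inhale image by a fraction t of the full inhale-to-exhale
    displacement field (dvf has shape (2, H, W))."""
    h, w = inhale.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    coords = np.stack([yy + t * dvf[0], xx + t * dvf[1]])
    return map_coordinates(inhale, coords, order=1, mode="nearest")

def interpolated_average_ct(inhale, exhale, dvf, n_mid=8):
    """IACT: average of the two breath-hold images plus n_mid linearly
    spaced intermediate phases (8 + 2 images, matching the abstract)."""
    ts = np.linspace(0.0, 1.0, n_mid + 2)[1:-1]   # interior phase fractions
    phases = [inhale] + [intermediate_phase(inhale, dvf, t) for t in ts] + [exhale]
    return np.mean(phases, axis=0)
```

With a zero displacement field the intermediate phases reduce to copies of the inhale image, which makes the averaging behavior easy to check.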
Reich, H; Moens, Y; Braun, C; Kneissl, S; Noreikat, K; Reske, A
2014-12-01
Quantitative computed tomographic analysis (qCTA) is an accurate but time-intensive method used to quantify the volume, mass, and aeration of the lungs. The aim of this study was to validate a time-efficient interpolation technique for applying qCTA in ponies. Forty-one thoracic computed tomography (CT) scans obtained from eight anaesthetised ponies positioned in dorsal recumbency were included. Total lung volume and mass and their distribution into four compartments (non-aerated, poorly aerated, normally aerated, and hyperaerated, defined based on attenuation in Hounsfield units) were determined for the entire lung from all 5 mm thick CT images, 59 (55-66) per animal. An interpolation technique validated for use in humans was then applied to calculate qCTA results for lung volumes and masses from only 10, 12, and 14 selected CT images per scan. The time required for both procedures was recorded. Results were compared statistically using the Bland-Altman approach. The bias ± 2 SD for total lung volume calculated from interpolation of 10, 12, and 14 CT images was -1.2 ± 5.8%, 0.1 ± 3.5%, and 0.0 ± 2.5%, respectively. The corresponding results for total lung mass were -1.1 ± 5.9%, 0.0 ± 3.5%, and 0.0 ± 3.0%. The average time for analysis of one thoracic CT scan using the interpolation method was 1.5-2 h, compared to 8 h for analysis of all images of one complete thoracic CT scan. The calculation of pulmonary qCTA data by interpolation from 12 CT images was applicable to equine lung CT scans and reduced the time required for analysis by 75%.
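The subset-slice estimation described above can be illustrated as follows: per-slice voxel counts for one aeration compartment are computed on a few evenly spaced slices and linearly interpolated across the remaining slice positions. This is a hedged NumPy sketch, not the validated human technique itself; the HU thresholds are the commonly used aeration boundaries and may differ from those in the study.

```python
import numpy as np

# Commonly used aeration compartments in Hounsfield units (assumed thresholds)
COMPARTMENTS = {
    "hyperaerated": (-1000, -900),
    "normally":     (-900, -500),
    "poorly":       (-500, -100),
    "nonaerated":   (-100, 100),
}

def compartment_volume_full(ct, voxel_vol, lo, hi):
    """Reference method: compartment volume from every slice of the scan."""
    return np.count_nonzero((ct >= lo) & (ct < hi)) * voxel_vol

def compartment_volume_interp(ct, voxel_vol, lo, hi, n_slices=12):
    """Estimate the same volume from n_slices evenly spaced slices,
    linearly interpolating the per-slice voxel counts in between."""
    z = ct.shape[0]
    idx = np.linspace(0, z - 1, n_slices).round().astype(int)
    counts = [np.count_nonzero((ct[i] >= lo) & (ct[i] < hi)) for i in idx]
    all_counts = np.interp(np.arange(z), idx, counts)
    return all_counts.sum() * voxel_vol
```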
Directional sinogram interpolation for motion weighted 4D cone-beam CT reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Hua; Kruis, Matthijs; Sonke, Jan-Jakob
2017-03-01
The image quality of respiratory-sorted four-dimensional (4D) cone-beam (CB) computed tomography (CT) is often limited by streak artifacts due to insufficient projections. A motion weighted reconstruction (MWR) method is proposed to decrease streak artifacts and improve image quality. First, respiration-correlated CBCT projections were interpolated by directional sinogram interpolation (DSI) to generate additional CB projections for each phase and subsequently reconstructed. Second, local motion was estimated by deformable image registration of the interpolated 4D CBCT. Third, a regular 3D FDK CBCT was reconstructed from the non-interpolated projections. Finally, weights were assigned to each voxel based on the local motion and used to combine the 3D FDK CBCT and the interpolated 4D CBCT into the final 4D image. The MWR method was compared with regular 4D CBCT scans as well as McKinnon and Bates (MKB) based reconstructions. Comparisons were made in terms of (1) the steepness of a profile extracted at the boundary of a region-of-interest (ROI), (2) the contrast-to-noise ratio (CNR) inside certain ROIs, and (3) the root-mean-square error (RMSE) between the planning CT and CBCT inside a homogeneous moving region, for both a phantom and four patient scans. In a 4D phantom, RMSE was reduced by 24.7% and 38.7% for MKB and MWR, respectively, compared to conventional 4D CBCT. Meanwhile, interpolation-induced blur was minimal in static regions for MWR-based reconstructions. In regions with considerable respiratory motion, image blur with MWR was less than with the MKB and 3D Feldkamp (FDK) methods. In the lung cancer patients, average CNRs of MKB, DSI, and MWR improved by factors of 1.7, 2.8, and 3.5, respectively, relative to 4D FDK. MWR effectively reduces RMSE in 4D cone-beam CT and improves image quality in both static and moving regions compared to the 4D FDK and MKB methods.
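The final weighting step can be sketched as a per-voxel linear blend driven by the local motion estimate: static voxels keep the sharp 3D FDK value, while strongly moving voxels take their value from the interpolated 4D phase volume. The ramp limits `m_lo`/`m_hi` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def motion_weighted_combine(fdk3d, interp4d, motion, m_lo=0.0, m_hi=5.0):
    """Blend a streaky-but-sharp 3D FDK volume with a smooth interpolated
    4D phase volume, voxel by voxel. `motion` holds the local motion
    magnitude (e.g. in mm); the linear ramp between m_lo and m_hi is an
    assumed weighting scheme."""
    w = np.clip((motion - m_lo) / (m_hi - m_lo), 0.0, 1.0)
    return w * interp4d + (1.0 - w) * fdk3d
```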
Walimbe, Vivek; Shekhar, Raj
2006-12-01
We present an algorithm for automatic elastic registration of three-dimensional (3D) medical images. Our algorithm initially recovers the global spatial mismatch between the reference and floating images, followed by hierarchical octree-based subdivision of the reference image and independent registration of the floating image with the individual subvolumes of the reference image at each hierarchical level. Global as well as local registrations use the six-parameter full rigid-body transformation model and are based on maximization of normalized mutual information (NMI). To ensure robustness of the subvolume registration with low voxel counts, we calculate NMI using a combination of current and prior mutual histograms. To generate a smooth deformation field, we perform direct interpolation of six-parameter rigid-body subvolume transformations obtained at the last subdivision level. Our interpolation scheme involves scalar interpolation of the 3D translations and quaternion interpolation of the 3D rotational pose. We analyzed the performance of our algorithm through experiments involving registration of synthetically deformed computed tomography (CT) images. Our algorithm is general and can be applied to image pairs of any two modalities of most organs. We have demonstrated successful registration of clinical whole-body CT and positron emission tomography (PET) images using this algorithm. The registration accuracy for this application was evaluated, based on validation using expert-identified anatomical landmarks in 15 CT-PET image pairs. The algorithm's performance was comparable to the average accuracy observed for three expert-determined registrations in the same 15 image pairs.
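The transform-interpolation scheme described above combines scalar interpolation of the translations with quaternion interpolation of the rotational pose; the rotational part is commonly done with spherical linear interpolation (slerp), which is assumed here as the concrete method. A minimal NumPy sketch, with quaternions in (w, x, y, z) order:

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = np.dot(q0, q1)
    if dot < 0.0:                 # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:              # nearly parallel: fall back to lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def interp_rigid(t0, q0, t1, q1, t):
    """Blend two six-parameter rigid-body transforms: scalar interpolation
    of the 3D translations, slerp of the rotational pose."""
    trans = (1 - t) * np.asarray(t0, float) + t * np.asarray(t1, float)
    return trans, slerp(q0, q1, t)
```

Halfway between the identity and a 90-degree rotation about z, slerp yields the expected 45-degree rotation.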
View-interpolation of sparsely sampled sinogram using convolutional neural network
NASA Astrophysics Data System (ADS)
Lee, Hoyeon; Lee, Jongha; Cho, Suengryong
2017-02-01
Sparse-view sampling and its associated iterative image reconstruction in computed tomography have been actively investigated. The sparse-view CT technique is a viable option for low-dose CT, particularly in cone-beam CT (CBCT) applications, when combined with advanced iterative image reconstruction, although varying degrees of image artifacts may remain. One of the artifacts that may occur in sparse-view CT is streaking in the reconstructed images. An alternative approach to sparse-view CT imaging uses interpolation methods to fill in the missing view data and then reconstructs the image with an analytic reconstruction algorithm. In this study, we developed an interpolation method using a convolutional neural network (CNN), one of the widely used deep-learning methods, to estimate the missing projection data, and compared its performance with that of other interpolation techniques.
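As a point of comparison for the CNN, the classical baseline fills in missing views by interpolating along the angular direction of the sinogram. A simple per-detector-bin linear version might look like this (a sketch of the baseline idea, not the paper's comparison code):

```python
import numpy as np

def interpolate_views(sparse_sino, sparse_angles, full_angles):
    """Fill in missing projection views by per-detector-bin linear
    interpolation along the angular axis. sparse_sino has shape
    (n_sparse_views, n_bins); angles must be increasing."""
    n_bins = sparse_sino.shape[1]
    full = np.empty((len(full_angles), n_bins))
    for b in range(n_bins):
        full[:, b] = np.interp(full_angles, sparse_angles, sparse_sino[:, b])
    return full
```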
Bricault, Ivan; Ferretti, Gilbert
2005-01-01
While multislice spiral computed tomography (CT) scanners are provided by all major manufacturers, their specific interpolation algorithms have been rarely evaluated. Because the results published so far relate to distinct particular cases and differ significantly, there are contradictory recommendations about the choice of pitch in clinical practice. In this paper, we present a new tool for the evaluation of multislice spiral CT z-interpolation algorithms, and apply it to the four-slice case. Our software is based on the computation of a "Weighted Radiation Profile" (WRP), and compares WRP to an expected ideal profile in terms of widening and heterogeneity. It provides a unique scheme for analyzing a large variety of spiral CT acquisition procedures. Freely chosen parameters include: number of detector rows, detector collimation, nominal slice width, helical pitch, and interpolation algorithm with any filter shape and width. Moreover, it is possible to study any longitudinal and off-isocenter positions. Theoretical and experimental results show that WRP, more than Slice Sensitivity Profile (SSP), provides a comprehensive characterization of interpolation algorithms. WRP analysis demonstrates that commonly "preferred helical pitches" are actually nonoptimal regarding the formerly distinguished z-sampling gap reduction criterion. It is also shown that "narrow filter" interpolation algorithms do not enable a general preferred pitch discussion, since they present poor properties with large longitudinal and off-center variations. In the more stable case of "wide filter" interpolation algorithms, SSP width or WRP widening are shown to be almost constant. Therefore, optimal properties should no longer be sought in terms of these criteria. On the contrary, WRP heterogeneity is related to variable artifact phenomena and can pertinently characterize optimal pitches. In particular, the exemplary interpolation properties of pitch = 1 "wide filter" mode are demonstrated.
Visualization of scoliotic spine using ultrasound-accessible skeletal landmarks
NASA Astrophysics Data System (ADS)
Church, Ben; Lasso, Andras; Schlenger, Christopher; Borschneck, Daniel P.; Mousavi, Parvin; Fichtinger, Gabor; Ungi, Tamas
2017-03-01
PURPOSE: Ultrasound imaging is an attractive alternative to X-ray for scoliosis diagnosis and monitoring due to its safety and low cost. The transverse processes, as skeletal landmarks, are accessible by ultrasound and are sufficient for quantifying scoliosis, but they do not provide an informative visualization of the spine. METHODS: We created a method for visualization of the scoliotic spine using a 3D transform field, resulting from thin-plate spline interpolation of a landmark-based registration between the transverse processes localized in both the patient's ultrasound and an average healthy spine model. Additional anchor points were computationally generated to control the thin-plate spline interpolation, in order to obtain a transform field that accurately represents the deformation of the patient's spine. The transform field is applied to the average spine model, resulting in a 3D surface model depicting the patient's spine. For validation, we used ground-truth CT from pediatric scoliosis patients, in which we reconstructed the bone surface and localized the transverse processes. We warped the average spine model and analyzed the match between the patient's bone surface and the warped spine. RESULTS: Visual inspection revealed accurate rendering of the scoliotic spine. Notable misalignments occurred mainly in the anterior-posterior direction and at the first and last vertebrae, which is immaterial for scoliosis quantification. The average Hausdorff distance computed for 4 patients was 2.6 mm. CONCLUSIONS: Compared to ground-truth CT, we achieved a qualitatively accurate and intuitive visualization depicting the 3D deformation of the patient's spine.
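The landmark-driven warp can be prototyped with SciPy's RBF interpolator, whose `thin_plate_spline` kernel implements thin-plate spline interpolation: fitting it to matched landmark pairs yields a transform that can then be applied to every vertex of the average spine model. A sketch under the assumption that landmarks and vertices are given as N×3 arrays (not the authors' implementation):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_warp(model_landmarks, patient_landmarks, model_vertices):
    """Thin-plate spline transform taking average-model landmark positions
    to the patient's ultrasound-localized transverse processes, applied to
    the vertices of the average spine surface model."""
    tps = RBFInterpolator(model_landmarks, patient_landmarks,
                          kernel="thin_plate_spline")
    return tps(model_vertices)
```

With zero smoothing (the default), the fitted spline reproduces the landmark correspondences exactly, which is the interpolation property the method relies on.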
Veldkamp, Wouter J H; Joemai, Raoul M S; van der Molen, Aart J; Geleijns, Jacob
2010-02-01
Metal prostheses cause artifacts in computed tomography (CT) images. The purpose of this work was to design an efficient and accurate metal segmentation in raw data to achieve artifact suppression and to improve CT image quality for patients with metal hip or shoulder prostheses. The artifact suppression technique incorporates two steps: metal object segmentation in raw data and replacement of the segmented region by new values using an interpolation scheme, followed by addition of the scaled metal signal intensity. Segmentation of metal is performed directly in sinograms, making it efficient and different from current methods that perform segmentation in reconstructed images in combination with Radon transformations. Metal signal segmentation is achieved by using a Markov random field model (MRF). Three interpolation methods are applied and investigated. To provide a proof of concept, CT data of five patients with metal implants were included in the study, as well as CT data of a PMMA phantom with Teflon, PVC, and titanium inserts. Accuracy was determined quantitatively by comparing mean Hounsfield (HU) values and standard deviation (SD) as a measure of distortion in phantom images with titanium (original and suppressed) and without titanium insert. Qualitative improvement was assessed by comparing uncorrected clinical images with artifact suppressed images. Artifacts in CT data of a phantom and five patients were automatically suppressed. The general visibility of structures clearly improved. In phantom images, the technique showed reduced SD close to the SD for the case where titanium was not inserted, indicating improved image quality. HU values in corrected images were different from expected values for all interpolation methods. Subtle differences between interpolation methods were found. The new artifact suppression design is efficient, for instance, in terms of preserving spatial resolution, as it is applied directly to original raw data. 
It successfully reduced artifacts in CT images of five patients and in phantom images. Sophisticated interpolation methods are needed to obtain reliable HU values close to the prosthesis.
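The segmentation-plus-interpolation step can be illustrated in sinogram space: once the metal trace is masked (here the mask is simply given, standing in for the MRF segmentation), each projection row is repaired by linear interpolation from the nearest unmasked detector bins, the simplest of the interpolation schemes compared in the study.

```python
import numpy as np

def inpaint_metal_trace(sino, metal_mask):
    """Replace the metal trace in each projection row by linear
    interpolation from the nearest unmasked detector bins.
    sino: (n_views, n_bins); metal_mask: boolean, same shape."""
    out = sino.astype(float).copy()
    bins = np.arange(sino.shape[1])
    for r in range(sino.shape[0]):
        bad = metal_mask[r]
        if bad.any():
            out[r, bad] = np.interp(bins[bad], bins[~bad], sino[r, ~bad])
    return out
```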
Performance evaluations of demons and free form deformation algorithms for the liver region.
Wang, Hui; Gong, Guanzhong; Wang, Hongjun; Li, Dengwang; Yin, Yong; Lu, Jie
2014-04-01
We investigated the influence of breathing motion on radiation therapy using four-dimensional computed tomography (4D-CT) and showed that registration of 4D-CT images is important. The demons algorithm in two interpolation modes was compared to the free-form deformation (FFD) model algorithm for registering the different phase images of 4D-CT in tumor tracking, using iodipin for verification. Linear interpolation was used in both mode 1 and mode 2: mode 1 set outside pixels to the nearest pixel, while mode 2 set outside pixels to zero. We used normalized mutual information (NMI), sum of squared differences, modified Hausdorff distance, and registration speed to evaluate the performance of each algorithm. The average NMI after demons registration in mode 1 improved by 1.76% and 4.75% compared to mode 2 and the FFD model algorithm, respectively. Further, the modified Hausdorff distance did not differ between demons modes 1 and 2, but mode 1 was 15.2% lower than FFD. Finally, the demons algorithm was by far the fastest. The demons algorithm in mode 1 was therefore found to be much more suitable for the registration of 4D-CT images. Subtraction of the floating images from the reference image before and after registration by demons further verified that the influence of breathing motion cannot be ignored and that the demons registration method is feasible.
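The NMI criterion used above can be estimated from a joint intensity histogram: NMI(A,B) = (H(A) + H(B)) / H(A,B), where higher values indicate better alignment. A small NumPy sketch (the bin count is an arbitrary choice):

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI from a joint intensity histogram of two images; equals 2 for
    identical images and approaches 1 for independent ones."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    h_xy = -np.sum(pxy[nz] * np.log(pxy[nz]))       # joint entropy
    h_x = -np.sum(px[px > 0] * np.log(px[px > 0]))  # marginal entropies
    h_y = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return (h_x + h_y) / h_xy
```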
SU-F-I-08: CT Image Ring Artifact Reduction Based On Prior Image
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, C; Qi, H; Chen, Z
Purpose: In a computed tomography (CT) system, CT images are reconstructed with ring artifacts when some adjacent detector bins do not work. The ring artifacts severely degrade CT image quality. We present a CT ring artifact reduction method based on projection data correction, which aims to estimate the missing projection data accurately and thus remove the ring artifacts from CT images. Methods: The method consists of ten steps: 1) identification of abnormal pixel lines in the projection sinogram; 2) linear interpolation within the pixel lines of the projection sinogram; 3) FBP reconstruction using the interpolated projection data; 4) filtering of the FBP image using a mean filter; 5) forward projection of the filtered FBP image; 6) subtraction of the forward projection from the original projection; 7) linear interpolation of the abnormal pixel line area in the subtraction projection; 8) addition of the interpolated subtraction projection to the forward projection; 9) FBP reconstruction using the corrected projection data; 10) return to step 4 until the pre-set iteration number is reached. The method is validated on simulated and real data to restore missing projection data and reconstruct ring-artifact-free CT images. Results: We studied the impact of the number of dead detector bins on the accuracy of missing data estimation in the projection sinogram. For the simulated case with a 256 × 256 Shepp-Logan phantom, three iterations are sufficient to restore the projection data and reconstruct ring-artifact-free images when the dead bin rate is under 30%. The dead-bin-induced artifacts are substantially reduced. More iterations are needed to reconstruct satisfactory images as the dead bin rate increases. Similar results were found for a real head phantom case. Conclusion: A practical CT image ring artifact correction scheme based on projection data is developed. This method can produce ring-artifact-free CT images feasibly and effectively.
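Steps 1-2 of the scheme can be sketched as follows: dead bins appear as abnormal columns in the sinogram, flagged by comparing each column's mean to a median-smoothed column profile, and then bridged by per-view linear interpolation. The detection rule and threshold are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np
from scipy.ndimage import median_filter

def find_dead_bins(sino, thresh=0.5):
    """Step 1 (sketch): flag detector bins whose column mean deviates
    strongly from a median-filtered version of the column profile."""
    col = sino.mean(axis=0)
    smooth = median_filter(col, size=5, mode="nearest")
    return np.abs(col - smooth) > thresh * np.abs(smooth).mean()

def correct_dead_bins(sino, dead):
    """Step 2: linear interpolation across the flagged bins, per view."""
    out = sino.astype(float).copy()
    bins = np.arange(sino.shape[1])
    out[:, dead] = np.array([np.interp(bins[dead], bins[~dead], row[~dead])
                             for row in sino])
    return out
```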
Umehara, Kensuke; Ota, Junko; Ishida, Takayuki
2017-10-18
In this study, the super-resolution convolutional neural network (SRCNN) scheme, an emerging deep-learning-based super-resolution method, was applied and evaluated for enhancing image resolution in chest CT images using a post-processing approach. For evaluation, 89 chest CT cases were sampled from The Cancer Imaging Archive. The 89 CT cases were divided randomly into 45 training cases and 44 external test cases. The SRCNN was trained using the training dataset. With the trained SRCNN, a high-resolution image was reconstructed from a low-resolution image, which was down-sampled from an original test image. For quantitative evaluation, two image quality metrics were measured and compared to those of conventional linear interpolation methods. The image restoration quality of the SRCNN scheme was significantly higher than that of the linear interpolation methods (p < 0.001 or p < 0.05). The high-resolution image reconstructed by the SRCNN scheme was highly restored and comparable to the original reference image, in particular for ×2 magnification. These results indicate that the SRCNN scheme significantly outperforms the linear interpolation methods for enhancing image resolution in chest CT images. The results also suggest that SRCNN may become a potential solution for generating high-resolution CT images from standard CT images.
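Quantitative comparisons of this kind rely on image quality metrics such as the peak signal-to-noise ratio; a standard definition is sketched below (the abstract does not name its two metrics, so PSNR here is an illustrative assumption, and SSIM is more involved and omitted).

```python
import numpy as np

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio in dB between a reference image and a
    reconstruction; higher is better. data_range defaults to the
    reference image's intensity span."""
    reference = np.asarray(reference, float)
    test = np.asarray(test, float)
    if data_range is None:
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```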
Franco, Ademir; Thevissen, Patrick; Coudyzer, Walter; Develter, Wim; Van de Voorde, Wim; Oyen, Raymond; Vandermeulen, Dirk; Jacobs, Reinhilde; Willems, Guy
2013-05-01
Virtual autopsy is a medical imaging technique using full-body computed tomography (CT) that allows noninvasive and permanent observation of all body parts. For dental identification, clinically and radiologically observed ante-mortem (AM) and post-mortem (PM) oral identifiers are compared. This study aimed to verify whether PM dental charting can be performed on virtual reconstructions of full-body CTs using the Interpol dental codes. A sample of 103 PM full-body CTs was collected from the forensic autopsy files of the Department of Forensic Medicine, University Hospitals, KU Leuven, Belgium. For validation purposes, 3 of these bodies underwent a complete dental autopsy, a dental radiological examination, and a full-body CT examination. The bodies were scanned in a Siemens Definition Flash CT scanner (Siemens Medical Solutions, Germany). The images were examined at 8- and 12-bit screen resolution as three-dimensional (3D) reconstructions and as axial, coronal, and sagittal slices. InSpace(®) (Siemens Medical Solutions, Germany) software was used for 3D reconstruction. The dental identifiers were charted on pink PM Interpol forms (F1, F2), using the related dental codes. Optimal dental charting was obtained by combining observations on 3D reconstructions and CT slices. It was not feasible to differentiate between different kinds of dental restoration materials. The 12-bit resolution enabled collection of more detailed evidence, mainly related to positions within a tooth. Oral identifiers not implemented in the Interpol dental coding were observed. Amongst these, the observed 3D morphological features of dental and maxillofacial structures are important identifiers. The latter may become particularly relevant in the future, not only because of their inherent spatial features, but also because of increasing preventive dental treatment and the decreasing application of dental restorations. In conclusion, PM full-body CT examinations need to be implemented in the PM dental charting protocols, and the Interpol dental codes should be adapted accordingly.
NASA Astrophysics Data System (ADS)
Ota, Junko; Umehara, Kensuke; Ishimaru, Naoki; Ohno, Shunsuke; Okamoto, Kentaro; Suzuki, Takanori; Shirai, Naoki; Ishida, Takayuki
2017-02-01
As the capability of high-resolution displays grows, high-resolution images are often required in computed tomography (CT). However, acquiring high-resolution images takes a higher radiation dose and a longer scanning time. In this study, we applied the sparse-coding-based super-resolution (ScSR) method to generate high-resolution images without increasing the radiation dose. We prepared an over-complete dictionary that learned the mapping between low- and high-resolution patches and sought a sparse representation of each patch of the low-resolution input. These coefficients were used to generate the high-resolution output. For evaluation, 44 CT cases were used as the test dataset. We up-sampled images by a factor of 2 or 4 and compared the image quality of the ScSR scheme with that of bilinear and bicubic interpolation, the traditional interpolation schemes. We also compared the image quality obtained with three learning datasets: 45 CT images, 91 non-medical images, and 93 chest radiographs were used for dictionary preparation, respectively. Image quality was evaluated by measuring the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The differences in PSNR and SSIM between the ScSR method and the interpolation methods were statistically significant. Visual assessment confirmed that the ScSR method generated sharp high-resolution images, whereas the conventional interpolation methods generated over-smoothed images. Among the three training datasets, there was no significant difference between the CT, CXR, and non-medical datasets. These results suggest that ScSR provides a robust approach for up-sampling CT images and yields substantially higher image quality in the enlarged images.
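The conventional baselines mentioned above map onto spline orders in SciPy's `zoom`: order 1 gives bilinear and order 3 gives bicubic interpolation. A minimal sketch of the up-sampling step the ScSR scheme is compared against (illustrative, not the study's code):

```python
import numpy as np
from scipy.ndimage import zoom

def upsample(img, factor, order):
    """Conventional interpolation baseline: order=1 is bilinear,
    order=3 is bicubic (cubic spline) interpolation."""
    return zoom(np.asarray(img, float), factor, order=order)
```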
Accurate and efficient seismic data interpolation in the principal frequency wavenumber domain
NASA Astrophysics Data System (ADS)
Wang, Benfeng; Lu, Wenkai
2017-12-01
Seismic data irregularity, caused by economic limitations, acquisition environmental constraints, or bad-trace elimination, can degrade the performance of subsequent multi-channel algorithms such as surface-related multiple elimination (SRME), even though some of these can partly overcome irregularity defects. Accurate interpolation to provide the necessary complete data is therefore a prerequisite, but its wide application is constrained by the large computational burden for huge data volumes, especially in 3D exploration. For accurate and efficient interpolation, the curvelet transform (CT) based projection onto convex sets (POCS) method in the principal frequency wavenumber (PFK) domain is introduced. The complex-valued principal frequency components can characterize the original signal with high accuracy while being at most half its size, which provides a reasonable efficiency improvement. The irregularity of the observed data is transformed into incoherent noise in the PFK domain, and curvelet coefficients may be sparser when the CT is performed on PFK-domain data, enhancing the interpolation accuracy. The performance of the POCS-based algorithms using the complex-valued CT in the time-space (TX), principal frequency space, and PFK domains is compared. Numerical examples on synthetic and field data demonstrate the validity and effectiveness of the proposed method. With a smaller computational burden, the proposed method achieves a better interpolation result, and it can easily be extended to higher dimensions.
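The POCS iteration itself is compact: alternate between thresholding in a sparsifying transform domain and re-inserting the observed traces. The sketch below substitutes a 2D FFT for the curvelet transform and uses an assumed decaying-threshold schedule, so it illustrates the structure of the method rather than the paper's exact algorithm.

```python
import numpy as np

def pocs_interpolate(observed, mask, n_iter=100):
    """POCS trace interpolation: threshold in a sparsifying transform
    domain (2D FFT here, standing in for the curvelet transform), then
    re-insert the observed samples. mask is 1.0 where a trace was
    acquired and 0.0 where it is missing."""
    x = observed.copy()
    tau0 = np.abs(np.fft.fft2(observed)).max()
    for k in range(n_iter):
        coeff = np.fft.fft2(x)
        tau = tau0 * (1.0 - (k + 1) / n_iter) ** 3    # assumed decay schedule
        coeff[np.abs(coeff) < tau] = 0.0              # keep strong coefficients
        recon = np.real(np.fft.ifft2(coeff))
        x = mask * observed + (1.0 - mask) * recon    # enforce data consistency
    return x
```

For a signal that is genuinely sparse in the transform domain (a single plane wave below), randomly decimated traces are recovered almost exactly while the acquired samples are preserved by construction.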
Relationship between noise, dose, and pitch in cardiac multi-detector row CT.
Primak, Andrew N; McCollough, Cynthia H; Bruesewitz, Michael R; Zhang, Jie; Fletcher, Joel G
2006-01-01
In spiral computed tomography (CT), dose is always inversely proportional to pitch. However, the relationship between noise and pitch (and hence noise and dose) depends on the scanner type (single vs. multi-detector row) and the reconstruction mode (cardiac vs. noncardiac). In single detector row spiral CT, noise is independent of pitch. Conversely, in noncardiac multi-detector row CT, noise depends on pitch because the spiral interpolation algorithm makes use of redundant data from different detector rows, decreasing noise for pitch values less than 1 (and increasing noise for pitch values greater than 1). However, in cardiac spiral CT, redundant data cannot be used because such data averaging would degrade the temporal resolution. Therefore, the behavior of noise versus pitch returns to the single detector row paradigm, with noise being independent of pitch. Consequently, since faster rotation times require lower pitch values in cardiac multi-detector row CT, dose is increased without a commensurate decrease in noise. Thus, the use of faster rotation times will improve temporal resolution, not alter noise, and increase dose. For a particular application, the higher dose resulting from faster rotation speeds should be justified by the clinical benefits of the improved temporal resolution. RSNA, 2006
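The stated relationships can be captured in a toy helper, with values normalized to pitch = 1. The sqrt(pitch) scaling for noncardiac noise is an idealization inferred from noise ∝ 1/sqrt(dose) together with dose ∝ 1/pitch, not a figure from the article.

```python
import math

def dose_and_noise(pitch, cardiac):
    """Relative dose and relative image noise versus pitch for
    multi-detector row spiral CT (normalized to pitch = 1).
    Noncardiac reconstruction reuses redundant rays, so noise scales
    roughly as sqrt(pitch); cardiac reconstruction cannot, so noise is
    pitch-independent. An idealized model."""
    dose = 1.0 / pitch
    noise = 1.0 if cardiac else math.sqrt(pitch)
    return dose, noise
```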
NASA Astrophysics Data System (ADS)
Elfarnawany, Mai; Alam, S. Riyahi; Agrawal, Sumit K.; Ladak, Hanif M.
2017-02-01
Cochlear implant surgery is a hearing restoration procedure for patients with profound hearing loss. In this surgery, an electrode is inserted into the cochlea to stimulate the auditory nerve and restore the patient's hearing. Clinical computed tomography (CT) images are used for planning and evaluation of electrode placement, but their low resolution limits the visualization of internal cochlear structures. Therefore, high-resolution micro-CT images are used to develop atlas-based segmentation methods to extract these nonvisible anatomical features in clinical CT images. Accurate registration of the high- and low-resolution CT images is a prerequisite for reliable atlas-based segmentation. In this study, we evaluate and compare different non-rigid B-spline registration parameters using micro-CT and clinical CT images of five cadaveric human cochleae. The varied registration parameters are the cost function (normalized correlation (NC), mutual information (MI), and mean squared error (MSE)), the interpolation method (linear, windowed-sinc, and B-spline), and the sampling percentage (1%, 10%, and 100%). We compare the registration results visually and quantitatively using the Dice similarity coefficient (DSC), Hausdorff distance (HD), and absolute percentage error in cochlear volume. Using the MI or MSE cost functions and linear or windowed-sinc interpolation resulted in visually undesirable deformation of internal cochlear structures. Quantitatively, the transforms using a 100% sampling percentage yielded the highest DSC and smallest HD (0.828 ± 0.021 and 0.25 ± 0.09 mm, respectively). Therefore, B-spline registration with the NC cost function, B-spline interpolation, and a 100% sampling percentage can be the foundation for developing an optimized atlas-based segmentation algorithm of intracochlear structures in clinical CT images.
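The two overlap metrics used for the quantitative comparison are straightforward to compute; a NumPy/SciPy sketch of the Dice similarity coefficient for binary masks and the symmetric Hausdorff distance for point sets:

```python
import numpy as np
from scipy.spatial.distance import cdist

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two point sets (n x d arrays):
    the largest nearest-neighbour distance in either direction."""
    d = cdist(pts_a, pts_b)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```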
de Bakker, Chantal M. J.; Altman, Allison R.; Li, Connie; Tribble, Mary Beth; Lott, Carina; Tseng, Wei-Ju; Liu, X. Sherry
2016-01-01
In vivo μCT imaging allows for high-resolution, longitudinal evaluation of bone properties. Based on this technology, several recent studies have developed in vivo dynamic bone histomorphometry techniques that utilize registered μCT images to identify regions of bone formation and resorption, allowing for longitudinal assessment of bone remodeling. However, this analysis requires a direct voxel-by-voxel subtraction between image pairs, necessitating rotation of the images into the same coordinate system, which introduces interpolation errors. We developed a novel image transformation scheme, matched-angle transformation (MAT), whereby the interpolation errors are minimized by equally rotating both the follow-up and baseline images instead of the standard of rotating one image while the other remains fixed. This new method greatly reduced interpolation biases caused by the standard transformation. Additionally, our study evaluated the reproducibility and precision of bone remodeling measurements made via in vivo dynamic bone histomorphometry. Although bone remodeling measurements showed moderate baseline noise, precision was adequate to measure physiologically relevant changes in bone remodeling, and measurements had relatively good reproducibility, with intra-class correlation coefficients of 0.75-0.95. This indicates that, when used in conjunction with MAT, in vivo dynamic histomorphometry provides a reliable assessment of bone remodeling. PMID:26786342
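The core idea of MAT is that each image should be resampled exactly once, with matched interpolation error, rather than leaving the baseline untouched and rotating the follow-up by the full registration angle. A 2D sketch with SciPy; the sign conventions, the use of linear interpolation, and restricting to a pure rotation are all simplifying assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

def standard_transform(baseline, followup, angle):
    """Standard approach: the baseline stays fixed and only the follow-up
    is rotated (and therefore interpolated) by the full angle, which
    biases the voxel-wise subtraction."""
    return baseline, rotate(followup, angle, reshape=False, order=1)

def matched_angle_transform(baseline, followup, angle):
    """MAT: split the rotation equally so both images are resampled once
    each, toward a common halfway orientation, matching their
    interpolation errors before subtraction."""
    return (rotate(baseline, -angle / 2.0, reshape=False, order=1),
            rotate(followup, angle / 2.0, reshape=False, order=1))
```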
Liu, Jiamin; Kabadi, Suraj; Van Uitert, Robert; Petrick, Nicholas; Deriche, Rachid; Summers, Ronald M.
2011-01-01
Purpose: Surface curvatures are important geometric features for the computer-aided analysis and detection of polyps in CT colonography (CTC). However, the general kernel approach for curvature computation can yield erroneous results for small polyps and for polyps that lie on haustral folds. These erroneous curvatures reduce the performance of polyp detection. This paper presents an analysis of interpolation's effect on curvature estimation for thin structures and its application to computer-aided detection of small polyps in CTC. Methods: The authors demonstrated that a simple technique, image interpolation, can improve the accuracy of curvature estimation for thin structures and thus significantly improve the sensitivity of small polyp detection in CTC. Results: The experiments showed that the merits of interpolation included more accurate curvature values for simulated data and isolation of polyps near folds for clinical data. After testing on a large clinical data set, it was observed that linear, quadratic B-spline, and cubic B-spline interpolations all significantly improved the sensitivity of small polyp detection. Conclusions: Image interpolation can improve the accuracy of curvature estimation for thin structures and thus improve the computer-aided detection of small polyps in CTC. PMID:21859029
Application of Time-Frequency Domain Transform to Three-Dimensional Interpolation of Medical Images.
Lv, Shengqing; Chen, Yimin; Li, Zeyu; Lu, Jiahui; Gao, Mingke; Lu, Rongrong
2017-11-01
Medical image three-dimensional (3D) interpolation is an important means to improve the image effect in 3D reconstruction. In image processing, the time-frequency domain transform is an efficient method. In this article, several time-frequency domain transform methods are applied and compared in 3D interpolation. And a Sobel edge detection and 3D matching interpolation method based on wavelet transform is proposed. We combine wavelet transform, traditional matching interpolation methods, and Sobel edge detection together in our algorithm. What is more, the characteristics of wavelet transform and Sobel operator are used. They deal with the sub-images of wavelet decomposition separately. Sobel edge detection 3D matching interpolation method is used in low-frequency sub-images under the circumstances of ensuring high frequency undistorted. Through wavelet reconstruction, it can get the target interpolation image. In this article, we make 3D interpolation of the real computed tomography (CT) images. Compared with other interpolation methods, our proposed method is verified to be effective and superior.
Morsbach, Fabian; Bickelhaupt, Sebastian; Wanner, Guido A; Krauss, Andreas; Schmidt, Bernhard; Alkadhi, Hatem
2013-07-01
To assess the value of iterative frequency split-normalized (IFS) metal artifact reduction (MAR) for computed tomography (CT) of hip prostheses. This study had institutional review board and local ethics committee approval. First, a hip phantom with steel and titanium prostheses that had inlays of water, fat, and contrast media in the pelvis was used to optimize the IFS algorithm. Second, 41 consecutive patients with hip prostheses who were undergoing CT were included. Data sets were reconstructed with filtered back projection (FBP), the IFS algorithm, and a linear interpolation MAR algorithm. Two blinded, independent readers evaluated axial, coronal, and sagittal CT reformations for overall image quality, image quality of pelvic organs, and assessment of pelvic abnormalities. CT attenuation and image noise were measured. Statistical analysis included the Friedman test, Wilcoxon signed-rank test, and Levene test. Ex vivo experiments yielded an optimized IFS algorithm using a threshold of 2200 HU with four iterations for both steel and titanium prostheses. Measurements of CT attenuation of the inlays were significantly (P < .001) more accurate for IFS than for FBP. In patients, the best overall and pelvic organ image quality was found in all reformations with IFS (P < .001). Pelvic abnormalities in 11 of 41 patients (27%) were diagnosed with significantly (P = .002) higher confidence on the basis of IFS images. CT attenuation of bladder (P < .001) and muscle (P = .043) was significantly less variable with IFS than with FBP and linear interpolation MAR. In comparison with FBP and linear interpolation MAR, noise with IFS was similar close to and far from the prosthesis (P = .295). The IFS algorithm for CT image reconstruction significantly reduces metal artifacts from hip prostheses, improves the reliability of CT number measurements, and improves confidence in depicting pelvic abnormalities.
Directional view interpolation for compensation of sparse angular sampling in cone-beam CT.
Bertram, Matthias; Wiegert, Jens; Schafer, Dirk; Aach, Til; Rose, Georg
2009-07-01
In flat detector cone-beam computed tomography and related applications, sparse angular sampling frequently leads to characteristic streak artifacts. To overcome this problem, it has been suggested to generate additional views by means of interpolation. The practicality of this approach is investigated in combination with a dedicated method for angular interpolation of 3-D sinogram data. For this purpose, a novel dedicated shape-driven directional interpolation algorithm based on a structure tensor approach is developed. Quantitative evaluation shows that this method clearly outperforms conventional scene-based interpolation schemes. Furthermore, the image quality trade-offs associated with the use of interpolated intermediate views are systematically evaluated for simulated and clinical cone-beam computed tomography data sets of the human head. It is found that utilization of directionally interpolated views significantly reduces streak artifacts and noise, at the expense of small introduced image blur.
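A structure tensor supplies the local orientation that such a shape-driven directional interpolator follows. The sketch below is a hypothetical, much-simplified version (one global tensor average over a patch rather than the per-pixel smoothed tensors a real implementation would use); it recovers the known orientation of a synthetic striped patch:

```python
import numpy as np

def structure_tensor_orientation(img):
    """Dominant orientation (radians) from the patch-averaged structure tensor."""
    gy, gx = np.gradient(img)                      # axis 0 is y, axis 1 is x
    # Average the tensor components over the whole patch (crude smoothing)
    jxx, jxy, jyy = (gx * gx).mean(), (gx * gy).mean(), (gy * gy).mean()
    # Orientation of the dominant eigenvector of [[jxx, jxy], [jxy, jyy]]
    return 0.5 * np.arctan2(2 * jxy, jxx - jyy)

# Synthetic patch with structures at a known 30-degree gradient orientation
yy, xx = np.mgrid[0:64, 0:64]
theta = np.deg2rad(30.0)
img = np.sin(0.5 * (xx * np.cos(theta) + yy * np.sin(theta)))

est_deg = np.rad2deg(structure_tensor_orientation(img))
print(est_deg)   # close to 30 degrees
```

A directional interpolation scheme would then blend sinogram samples along (rather than across) this estimated orientation to avoid smearing edges.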
Deep learning methods for CT image-domain metal artifact reduction
NASA Astrophysics Data System (ADS)
Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Shan, Hongming; Claus, Bernhard; Jin, Yannan; De Man, Bruno; Wang, Ge
2017-09-01
Artifacts resulting from metal objects have been a persistent problem in CT images over the last four decades. A common approach to overcome their effects is to replace corrupt projection data with values synthesized from an interpolation scheme or by reprojection of a prior image. State-of-the-art correction methods, such as the interpolation- and normalization-based algorithm NMAR, often do not produce clinically satisfactory results. Residual image artifacts remain in challenging cases and even new artifacts can be introduced by the interpolation scheme. Metal artifacts continue to be a major impediment, particularly in radiation and proton therapy planning as well as orthopedic imaging. A new solution to the long-standing metal artifact reduction (MAR) problem is deep learning, which has been successfully applied to medical image processing and analysis tasks. In this study, we combine a convolutional neural network (CNN) with the state-of-the-art NMAR algorithm to reduce metal streaks in critical image regions. Training data was synthesized from CT simulation scans of a phantom derived from real patient images. The CNN is able to map metal-corrupted images to artifact-free monoenergetic images to achieve additional correction on top of NMAR for improved image quality. Our results indicate that deep learning is a novel tool to address CT reconstruction challenges, and may enable more accurate tumor volume estimation for radiation therapy planning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, Kerstin; Schwemmer, Chris; Hornegger, Joachim
2013-03-15
Purpose: For interventional cardiac procedures, anatomical and functional information about the cardiac chambers is of major interest. With the technology of angiographic C-arm systems it is possible to reconstruct intraprocedural three-dimensional (3D) images from 2D rotational angiographic projection data (C-arm CT). However, 3D reconstruction of a dynamic object is a fundamental problem in C-arm CT reconstruction. The 2D projections are acquired over a scan time of several seconds, thus the projection data show different states of the heart. A standard FDK reconstruction algorithm would use all acquired data for a filtered backprojection and result in a motion-blurred image. In this approach, a motion compensated reconstruction algorithm requiring knowledge of the 3D heart motion is used. The motion is estimated from a previously presented 3D dynamic surface model. This dynamic surface model results in a sparse motion vector field (MVF) defined at control points. In order to perform a motion compensated reconstruction, a dense motion vector field is required. The dense MVF is generated by interpolation of the sparse MVF. Therefore, the influence of different motion interpolation methods on the reconstructed image quality is evaluated. Methods: Four different interpolation methods, thin-plate splines (TPS), Shepard's method, a smoothed weighting function, and a simple averaging, were evaluated. The reconstruction quality was measured on phantom data, a porcine model as well as on in vivo clinical data sets. As a quality index, the 2D overlap of the forward projected motion compensated reconstructed ventricle and the segmented 2D ventricle blood pool was quantitatively measured with the Dice similarity coefficient and the mean deviation between extracted ventricle contours. For the phantom data set, the normalized root mean square error (nRMSE) and the universal quality index (UQI) were also evaluated in 3D image space.
Results: The quantitative evaluation of all experiments showed that TPS interpolation provided the best results. The quantitative results in the phantom experiments showed comparable nRMSE of ≈0.047 ± 0.004 for the TPS and Shepard's method. Only slightly inferior results for the smoothed weighting function and the linear approach were achieved. The UQI resulted in a value of ≈99% for all four interpolation methods. On clinical human data sets, the best results were clearly obtained with the TPS interpolation. The mean contour deviation between the TPS reconstruction and the standard FDK reconstruction improved in the three human cases by 1.52, 1.34, and 1.55 mm. The Dice coefficient showed less sensitivity with respect to variations in the ventricle boundary. Conclusions: In this work, the influence of different motion interpolation methods on left ventricle motion compensated tomographic reconstructions was investigated. The best quantitative reconstruction results on phantom, porcine, and human clinical data sets were achieved with the TPS approach. In general, the framework of motion estimation using a surface model and motion interpolation to a dense MVF provides the ability for tomographic reconstruction using a motion compensation technique.
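Thin-plate spline interpolation of a sparse field reduces to one linear solve. The following sketch (plain NumPy, with invented control points standing in for one component of a sparse MVF) uses the standard radial basis U(r) = r² log r plus an affine part; it is illustrative only, not the evaluated pipeline:

```python
import numpy as np

def tps_fit(pts, vals):
    """Solve for thin-plate spline weights interpolating vals at 2-D pts."""
    n = len(pts)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    # U(r) = r^2 log r, written as 0.5 * d2 * log(d2); zero on the diagonal
    K = np.where(d2 > 0, 0.5 * d2 * np.log(np.maximum(d2, 1e-300)), 0.0)
    P = np.hstack([np.ones((n, 1)), pts])          # affine terms 1, x, y
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    return np.linalg.solve(A, np.concatenate([vals, np.zeros(3)]))

def tps_eval(pts, w, q):
    """Evaluate the fitted spline at query points q."""
    n = len(pts)
    d2 = ((q[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    U = np.where(d2 > 0, 0.5 * d2 * np.log(np.maximum(d2, 1e-300)), 0.0)
    return U @ w[:n] + w[n] + q @ w[n + 1:]

# Sparse "control point" samples (e.g. one component of a cardiac motion field)
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
vals = np.array([0.0, 1.0, 1.0, 2.0, 0.4])
w = tps_fit(pts, vals)
print(np.allclose(tps_eval(pts, w, pts), vals))    # True: passes through samples
```

The same solve, applied per vector component over a dense voxel grid of query points, produces the dense MVF needed by the motion compensated reconstruction.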
NASA Astrophysics Data System (ADS)
He, Nana; Zhang, Xiaolong; Zhao, Juanjuan; Zhao, Huilan; Qiang, Yan
2017-07-01
While the popular thin layer scanning technology of spiral CT has helped to improve diagnoses of lung diseases, the large volumes of scanning images produced by the technology also dramatically increase the load on physicians in lesion detection. Computer-aided diagnosis techniques such as lesion segmentation in thin CT sequences have been developed to address this issue, but it remains a challenge to achieve high segmentation efficiency and accuracy without much manual intervention. In this paper, we present our research on automated segmentation of lung parenchyma with an improved geodesic active contour model, the geodesic active contour model based on similarity (GACBS). Combining a spectral clustering algorithm based on the Nyström method (SCN) with GACBS, this algorithm first extracts key image slices, then uses these slices to generate initial contours of the pulmonary parenchyma of un-segmented slices with an interpolation algorithm, and finally segments the lung parenchyma of the un-segmented slices. Experimental results show that the segmentation results generated by our method are close to those of manual segmentation, with an average volume overlap ratio of 91.48%.
Generation and analysis of clinically relevant breast imaging x-ray spectra.
Hernandez, Andrew M; Seibert, J Anthony; Nosratieh, Anita; Boone, John M
2017-06-01
The purpose of this work was to develop and make available x-ray spectra for some of the most widely used digital mammography (DM), breast tomosynthesis (BT), and breast CT (bCT) systems in North America. The Monte Carlo code MCNP6 was used to simulate minimally filtered (only beryllium) x-ray spectra at 8 tube potentials from 20 to 49 kV for DM/BT, and 9 tube potentials from 35 to 70 kV for bCT. Vendor-specific anode compositions, effective anode angles, focal spot sizes, source-to-detector distances, and beryllium filtration were simulated. For each 0.5 keV energy bin in all simulated spectra, the fluence was interpolated using cubic splines across the range of simulated tube potentials to produce spectra in 1 kV increments from 20 to 49 kV for DM/BT and from 35 to 70 kV for bCT. The HVL of simulated spectra with conventional filtration (at 35 kV for DM/BT and 49 kV for bCT) was used to assess spectral differences resulting from variations in: (a) focal spot size (0.1 and 0.3 mm IEC), (b) solid angle at the detector (i.e., small and large FOV size), and (c) geometrical specifications for vendors that employ the same anode composition. Averaged across all DM/BT vendors, variations in focal spot and FOV size resulted in HVL differences of 2.2% and 0.9%, respectively. Comparing anode compositions separately, the HVL differences for Mo (GE, Siemens) and W (Hologic, Philips, and Siemens) spectra were 0.3% and 0.6%, respectively. Both the commercial Koning and prototype "Doheny" (UC Davis) bCT systems utilize W anodes with a 0.3 mm focal spot. Averaged across both bCT systems, variations in FOV size resulted in a 2.2% difference in HVL. In addition, the Koning spectrum was slightly harder than Doheny with a 4.2% difference in HVL. Therefore to reduce redundancy, a generic DM/BT system and a generic bCT system were used to generate the new spectra reported herein. 
The spectral models for application to DM/BT were dubbed the Molybdenum, Rhodium, and Tungsten Anode Spectral Models using Interpolating Cubic Splines (MASMICSM-T, RASMICSM-T, and TASMICSM-T; subscript "M-T" indicating mammography and tomosynthesis). When compared against reference models (MASMIPM, RASMIPM, and TASMIPM; subscript "M" indicating mammography), the new spectral models were in close agreement, with mean differences of 1.3%, -1.3%, and -3.3%, respectively, across tube potential comparisons of 20, 30, and 40 kV with conventional filtration. TASMICSbCT-generated bCT spectra were also in close agreement with the reference TASMIP model, with a mean difference of -0.8% across tube potential comparisons of 35, 49, and 70 kV with 1.5 mm Al filtration. The Mo, Rh, and W anode spectra for application in DM and BT (MASMICSM-T, RASMICSM-T, and TASMICSM-T) and the W anode spectra for bCT (TASMICSbCT) as described in this study should be useful for individuals interested in modeling the performance of modern breast x-ray imaging systems, including dual-energy mammography, which extends to 49 kV. These new spectra are tabulated in spreadsheet form and are made available to any interested party. © 2017 American Association of Physicists in Medicine.
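The per-bin interpolation step can be sketched directly: fit a natural cubic spline through the fluence of one energy bin at the coarsely simulated tube potentials, then evaluate it in 1 kV steps. The fluence numbers below are invented placeholders, not values from the published models:

```python
import numpy as np

def natural_cubic_spline(xk, yk, xq):
    """Natural cubic spline through (xk, yk), evaluated at xq (xk sorted)."""
    n = len(xk)
    h = np.diff(xk)
    # Solve for second derivatives M with natural end conditions M[0] = M[-1] = 0
    A = np.zeros((n, n)); rhs = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = h[i - 1], 2 * (h[i - 1] + h[i]), h[i]
        rhs[i] = 6 * ((yk[i + 1] - yk[i]) / h[i] - (yk[i] - yk[i - 1]) / h[i - 1])
    M = np.linalg.solve(A, rhs)
    i = np.clip(np.searchsorted(xk, xq) - 1, 0, n - 2)   # interval of each query
    t = xq - xk[i]
    b = (yk[i + 1] - yk[i]) / h[i] - h[i] * (2 * M[i] + M[i + 1]) / 6
    return yk[i] + b * t + M[i] / 2 * t**2 + (M[i + 1] - M[i]) / (6 * h[i]) * t**3

# Hypothetical fluence of ONE 0.5 keV energy bin, simulated at a coarse kV grid
kv_sim = np.array([20.0, 26.0, 32.0, 38.0, 44.0, 49.0])
flu_sim = np.array([0.0, 1.2, 3.1, 5.9, 9.4, 12.8])      # illustrative numbers only
kv_all = np.arange(20.0, 50.0)                           # 1 kV increments, 20-49 kV
flu_all = natural_cubic_spline(kv_sim, flu_sim, kv_all)
print(np.allclose(flu_all[[0, 6, 12]], flu_sim[:3]))     # True: knots reproduced
```

Repeating this fit independently for every energy bin yields a full spectrum at any intermediate tube potential.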
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yu, E-mail: yuzhang@smu.edu.cn, E-mail: qianjinfeng08@gmail.com; Wu, Xiuxiu; Yang, Wei
2014-11-01
Purpose: The use of 4D computed tomography (4D-CT) of the lung is important in lung cancer radiotherapy for tumor localization and treatment planning. Sometimes, dense sampling is not acquired along the superior–inferior direction. This disadvantage results in an interslice thickness that is much greater than the in-plane voxel resolutions. Isotropic resolution is necessary for multiplanar display, but the commonly used interpolation operation blurs images. This paper presents a super-resolution (SR) reconstruction method to enhance 4D-CT resolution. Methods: The authors assume that the low-resolution images of different phases at the same position can be regarded as input “frames” to reconstruct high-resolution images. The SR technique is used to recover high-resolution images. Specifically, the Demons deformable registration algorithm is used to estimate the motion field between different “frames.” Then, the projection onto convex sets approach is implemented to reconstruct high-resolution lung images. Results: The performance of the SR algorithm is evaluated using both simulated and real datasets. Their method can generate clearer lung images and enhance image structure compared with cubic spline interpolation and the back projection (BP) method. Quantitative analysis shows that the proposed algorithm decreases the root mean square error by 40.8% relative to cubic spline interpolation and 10.2% versus BP. Conclusions: A new algorithm has been developed to improve the resolution of 4D-CT. The algorithm outperforms the cubic spline interpolation and BP approaches by producing images with markedly improved structural clarity and greatly reduced artifacts.
Liu, Xinming; Shaw, Chris C; Wang, Tianpeng; Chen, Lingyun; Altunbas, Mustafa C; Kappadath, S Cheenu
2006-02-28
We developed and investigated a scanning sampled measurement (SSM) technique for scatter measurement and correction in cone beam breast CT imaging. A cylindrical polypropylene phantom (water equivalent) was mounted on a rotating table in a stationary gantry experimental cone beam breast CT imaging system. A 2-D array of lead beads, spaced ~1 cm apart and slightly tilted vertically, was placed between the object and the x-ray source. A series of projection images was acquired as the phantom was rotated 1 degree per projection view and the lead bead array shifted vertically from one projection view to the next. A series of lead bars was also placed at the phantom edge to produce better scatter estimation across the phantom edges. Image signals in the lead bead/bar shadows were used to obtain sampled scatter measurements, which were then interpolated to form an estimated scatter distribution across the projection images. The image data behind the lead bead/bar shadows were restored by interpolating image data from two adjacent projection views to form beam-block-free projection images. The estimated scatter distribution was then subtracted from the corresponding restored projection image to obtain the scatter-removed projection images. Our preliminary experiment demonstrated that it is feasible to implement the SSM technique for scatter estimation and correction in cone beam breast CT imaging. Scatter correction was successfully performed on all projection images using the scatter distribution interpolated from SSM and the restored projection image data. The scatter-corrected projection data yielded elevated CT numbers and largely reduced cupping effects.
NASA Astrophysics Data System (ADS)
Schneider, Wilfried; Bortfeld, Thomas; Schlegel, Wolfgang
2000-02-01
We describe a new method to convert CT numbers into the mass density and elemental weights of tissues, required as input for dose calculations with Monte Carlo codes such as EGS4. As a first step, we calculate the CT numbers for 71 human tissues. To reduce the effort for the necessary fits of the CT numbers to mass density and elemental weights, we establish four sections on the CT number scale, each confined by selected tissues. Within each section, the mass density and elemental weights of the selected tissues are interpolated. For this purpose, functional relationships between the CT number and each of the tissue parameters, valid for media which are composed of only two components in varying proportions, are derived. Compared with conventional data fits, no loss of accuracy is incurred when using the interpolation functions. Assuming plausible values for the deviations between calculated and measured CT numbers, the mass density can be determined with an accuracy better than 0.04 g cm-3. The weights of phosphorus and calcium can be determined with maximum uncertainties of 1 and 2.3 percentage points (pp), respectively. Similar values can be achieved for hydrogen (0.8 pp) and nitrogen (3 pp). For carbon and oxygen weights, errors of up to 14 pp can occur. The influence of the elemental weights on the results of Monte Carlo dose calculations is investigated and discussed.
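Within each section of the CT number scale, a tissue parameter is obtained by interpolating between the section-defining tissues. A minimal sketch of that lookup follows; the (CT number, density) anchor points here are rough illustrative values, not the fitted values or functional forms from the paper:

```python
import numpy as np

# Hypothetical (CT number [HU], mass density [g/cm^3]) anchors defining sections;
# real values must come from fits like those described in the abstract.
hu_knots = np.array([-1000.0, -100.0, 0.0, 100.0, 1600.0])
rho_knots = np.array([0.00121, 0.93, 1.00, 1.07, 1.96])

def hu_to_density(hu):
    """Piecewise-linear interpolation of mass density between section anchors."""
    return np.interp(hu, hu_knots, rho_knots)

print(hu_to_density(0.0))    # 1.0 (water, by construction of the anchors)
print(hu_to_density(50.0))   # halfway between the 0 HU and 100 HU anchors
```

The same pattern, one interpolation table per parameter (density, each elemental weight), turns a CT volume into the material description a Monte Carlo dose engine needs.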
Automatic cable artifact removal for cardiac C-arm CT imaging
NASA Astrophysics Data System (ADS)
Haase, C.; Schäfer, D.; Kim, M.; Chen, S. J.; Carroll, J.; Eshuis, P.; Dössel, O.; Grass, M.
2014-03-01
Cardiac C-arm computed tomography (CT) imaging using interventional C-arm systems can be applied in various areas of interventional cardiology ranging from structural heart disease and electrophysiology interventions to valve procedures in hybrid operating rooms. In contrast to conventional CT systems, the reconstruction field of view (FOV) of C-arm systems is limited to a region of interest in cone-beam (along the patient axis) and fan-beam (in the transaxial plane) direction. Hence, highly X-ray opaque objects (e.g. cables from the interventional setup) outside the reconstruction field of view, yield streak artifacts in the reconstruction volume. To decrease the impact of these streaks a cable tracking approach on the 2D projection sequences with subsequent interpolation is applied. The proposed approach uses the fact that the projected position of objects outside the reconstruction volume depends strongly on the projection perspective. By tracking candidate points over multiple projections only objects outside the reconstruction volume are segmented in the projections. The method is quantitatively evaluated based on 30 simulated CT data sets. The 3D root mean square deviation to a reference image could be reduced for all cases by an average of 50 % (min 16 %, max 76 %). Image quality improvement is shown for clinical whole heart data sets acquired on an interventional C-arm system.
High order cell-centered scheme totally based on cell average
NASA Astrophysics Data System (ADS)
Liu, Ze-Yu; Cai, Qing-Dong
2018-05-01
This work clarifies the concept of the cell average by pointing out the differences between the cell average and the cell centroid value, which are the averaged cell-centered value and the pointwise cell-centered value, respectively. An interpolation based on cell averages is constructed, and a high-order QUICK-like numerical scheme is designed for this interpolation. A new approach to error analysis, similar to Taylor expansion, is introduced in this work.
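The distinction is easy to check numerically: for a non-linear function, the cell average and the centroid (midpoint) value differ at second order in the cell width. A minimal verification for f(x) = x², where the difference is exactly (h²/24)·f'':

```python
import numpy as np

# For f(x) = x^2 on a cell [a, b], the cell average exceeds the pointwise
# value at the cell centroid by (h**2 / 24) * f'' = h**2 / 12 (since f'' = 2).
a, b = 0.0, 1.0
h = b - a
cell_average = (b**3 - a**3) / (3 * h)     # (1/h) * integral of x^2 over [a, b]
centroid_value = ((a + b) / 2) ** 2        # f evaluated at the cell centre
print(cell_average - centroid_value)       # ≈ 1/12, not zero
```

A scheme that treats the cell average as if it were the centroid value therefore silently commits an O(h²) error, which is exactly the confusion the abstract sets out to clarify.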
Yan, Hao; Mou, Xuanqin; Tang, Shaojie; Xu, Qiong; Zankl, Maria
2010-11-07
Scatter correction is an open problem in x-ray cone beam (CB) CT. The measurement of scatter intensity with a moving beam stop array (BSA) is a promising technique that offers a low patient dose and accurate scatter measurement. However, when restoring the blocked primary fluence behind the BSA, spatial interpolation cannot restore the high-frequency part well, causing streaks in the reconstructed image. To address this problem, we derive a projection correlation (PC) to exploit the redundancy (over-determined information) in neighbouring CB views. PC indicates that the main high-frequency information is contained in neighbouring angular projections, rather than in the current projection itself, which provides a guiding principle for high-frequency information restoration. On this basis, we present the projection correlation based view interpolation (PC-VI) algorithm and validate that it outperforms the use of spatial interpolation alone. The PC-VI based moving BSA method is then developed. In this method, PC-VI is employed instead of spatial interpolation, and new moving modes are designed, which greatly improve the performance of the moving BSA method in terms of reliability and practicability. Evaluation is performed on a high-resolution voxel-based human phantom, realistically including the entire procedure of scatter measurement with a moving BSA, simulated by analytical ray tracing plus Monte Carlo simulation with EGSnrc. With the proposed method, we obtain visually artefact-free images approaching the ideal correction. Compared with the spatial interpolation based method, the relative mean square error is reduced by a factor of 6.05-15.94 for different slices. PC-VI performs well in CB redundancy mining; therefore, it has further potential in CBCT studies.
Directional sinogram interpolation for sparse angular acquisition in cone-beam computed tomography.
Zhang, Hua; Sonke, Jan-Jakob
2013-01-01
Cone-beam (CB) computed tomography (CT) is widely used in the field of medical imaging for guidance. Inspired by Bertram's directional interpolation (BDI) method, directional sinogram interpolation (DSI) was implemented to generate additional CB projections by optimized (iterative) double-orientation estimation in sinogram space and directional interpolation. A new CBCT was subsequently reconstructed with the Feldkamp algorithm using both the original and interpolated CB projections. The proposed method was evaluated on both phantom and clinical data, and image quality was assessed by the correlation ratio (CR) between the interpolated image and a gold standard reconstructed from the fully measured projections. Additionally, streak artifact reduction and image blur were assessed. In a CBCT reconstructed from 40 acquired projections over an arc of 360 degrees, streak artifacts dropped 20.7% and 6.7% in a thorax phantom when our method was compared to the linear interpolation (LI) and BDI methods. Meanwhile, image blur was assessed with a head-and-neck phantom, where the image blur of DSI was 20.1% and 24.3% less than that of LI and BDI. When our method was compared to the LI and BDI methods, CR increased by 4.4% and 3.1%. Streak artifacts of sparsely acquired CBCT were decreased by our method, and the image blur induced by interpolation was kept below that of the other interpolation methods.
Space-time interpolation of satellite winds in the tropics
NASA Astrophysics Data System (ADS)
Patoux, Jérôme; Levy, Gad
2013-09-01
A space-time interpolator for creating average geophysical fields from satellite measurements is presented and tested. It is designed for optimal spatiotemporal averaging of heterogeneous data. While it is illustrated with satellite surface wind measurements in the tropics, the methodology can be useful for interpolating, analyzing, and merging a wide variety of heterogeneous and satellite data in the atmosphere and ocean over the entire globe. The spatial and temporal ranges of the interpolator are determined by averaging satellite and in situ measurements over increasingly larger space and time windows and matching the corresponding variability at each scale. This matching provides a relationship between temporal and spatial ranges, but does not provide a unique pair of ranges as a solution to all averaging problems. The pair of ranges most appropriate for a given application can be determined by performing a spectral analysis of the interpolated fields and choosing the smallest values that remove any or most of the aliasing due to the uneven sampling by the satellite. The methodology is illustrated with the computation of average divergence fields over the equatorial Pacific Ocean from SeaWinds-on-QuikSCAT surface wind measurements, for which 72 h and 510 km are suggested as optimal interpolation windows. It is found that the wind variability is reduced over the cold tongue and enhanced over the Pacific warm pool, consistent with the notion that the unstably stratified boundary layer has generally more variable winds and more gustiness than the stably stratified boundary layer. It is suggested that the spectral analysis optimization can be used for any process where time-space correspondence can be assumed.
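As a much-simplified stand-in for such an interpolator, one can average all samples falling inside a space-time window centred on the query point. The sketch below uses a plain box window with half-widths of 36 h and 255 km (so the full windows match the 72 h and 510 km quoted in the abstract); the scattered samples themselves are invented:

```python
import numpy as np

half_t, half_x = 36.0, 255.0   # half-windows: 72 h and 510 km full widths

# Hypothetical scattered samples: time [h], position [km], wind component [m/s]
t = np.array([0.0, 10.0, 30.0, 50.0, 80.0, 100.0])
x = np.array([0.0, 100.0, 300.0, 120.0, 40.0, 500.0])
v = np.array([5.0, 6.0, 4.0, 7.0, 5.0, 3.0])

def window_average(t0, x0):
    """Box-window space-time average of v centred on (t0, x0)."""
    sel = (np.abs(t - t0) <= half_t) & (np.abs(x - x0) <= half_x)
    return v[sel].mean() if sel.any() else np.nan

print(window_average(10.0, 100.0))   # averages the three samples in the window
```

The interpolator in the paper additionally matches the two window sizes to the observed spatial and temporal variability and weights samples smoothly, rather than using a hard box.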
The Choice of Spatial Interpolation Method Affects Research Conclusions
NASA Astrophysics Data System (ADS)
Eludoyin, A. O.; Ijisesan, O. S.; Eludoyin, O. M.
2017-12-01
Studies from developing countries using spatial interpolation in geographical information systems (GIS) are few and recent. Many of these studies have adopted interpolation procedures including kriging, moving average, inverse distance weighting (IDW), and nearest point without due regard for their uncertainties. This study compared the results of modelled representations of popular interpolation procedures from two commonly used GIS software packages (ILWIS and ArcGIS) at the Obafemi Awolowo University, Ile-Ife, Nigeria. Data used were concentrations of selected biochemical variables (BOD5, COD, SO4, NO3, pH, suspended and dissolved solids) in Ere stream at Ayepe-Olode, in southwest Nigeria. Water samples were collected using a depth-integrated grab sampling approach at three locations (upstream, downstream, and along a palm oil effluent discharge point in the stream); four stations were sited at each location (Figure 1). Data were first subjected to examination of their spatial distributions and associated variogram parameters (nugget, sill, and range), using PAleontological STatistics (PAST3), before the mean values were interpolated in the selected GIS software for the variables using each of kriging (simple), moving average, and nearest point approaches. Further, the determined variogram parameters were substituted for the default values in the selected software, and the results were compared. The study showed that the different point interpolation methods did not produce similar results. For example, whereas conductivity was interpolated to vary as 120.1-219.5 µS cm-1 with kriging interpolation, it varied as 105.6-220.0 µS cm-1 and 135.0-173.9 µS cm-1 with nearest point and moving average interpolations, respectively (Figure 2).
It also showed that whereas the computed variogram model produced the best-fit lines (with the least associated error, SSerror) with a Gaussian model, a spherical model was assumed by default for all the distributions in the software, such that the nugget was assumed to be 0.00 when it was rarely so (Figure 3). The study concluded that the choice of interpolation procedure may affect decisions and conclusions drawn from modelling.
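The conclusion above, that different point-interpolation procedures yield different values at the same location, can be illustrated with a minimal sketch. The station layout and conductivity values below are hypothetical stand-ins, not the study's data:

```python
import numpy as np

def idw(points, values, query, power=2.0):
    """Inverse distance weighting at a single query point."""
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d == 0):                 # query coincides with a sample
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power               # closer samples weigh more
    return float(np.sum(w * values) / np.sum(w))

def nearest(points, values, query):
    """Nearest-point interpolation."""
    d = np.linalg.norm(points - query, axis=1)
    return float(values[np.argmin(d)])

# Hypothetical conductivity samples (uS/cm) at four stations
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
cond = np.array([120.0, 220.0, 150.0, 180.0])
q = np.array([0.4, 0.4])
print(idw(pts, cond, q), nearest(pts, cond, q))
```

The two estimates at the same point differ, which is exactly why interpolated maps from different procedures (or different software defaults) can lead to different conclusions.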
Reducing Interpolation Artifacts for Mutual Information Based Image Registration
Soleimani, H.; Khosravifard, M.A.
2011-01-01
Medical image registration methods that use mutual information as the similarity measure have improved over recent decades. Mutual information, a basic concept of information theory, indicates the dependency between two random variables (or two images). Evaluating the mutual information of two images requires their joint probability distribution. Several interpolation methods, such as partial volume (PV) and bilinear, are used to estimate the joint probability distribution. Both methods introduce artifacts into the mutual information function. The Partial Volume-Hanning window (PVH) and Generalized Partial Volume (GPV) methods were introduced to remove such artifacts. In this paper we show that the acceptable performance of these methods is not due to their kernel function but to the number of pixels that participate in the interpolation. Since using more pixels requires a more complex and time-consuming interpolation process, we propose a new interpolation method that uses only four pixels (the same as PV and bilinear interpolation) and removes most of the artifacts. Experimental results on the registration of computed tomography (CT) images show the superiority of the proposed scheme. PMID:22606673
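The quantity at the heart of this abstract, mutual information estimated from a joint histogram, can be sketched as follows. This is a generic histogram-based estimator (the bin count and test images are arbitrary choices, not the paper's); PV-style interpolation would differ only in how fractional intensities are distributed across neighboring bins:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI of two equally shaped images via their joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()                 # joint probability
    px = p.sum(axis=1, keepdims=True)     # marginal of a
    py = p.sum(axis=0, keepdims=True)     # marginal of b
    nz = p > 0                            # avoid log(0)
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(mutual_information(img, img))                    # self-MI: large
print(mutual_information(img, rng.random((64, 64))))   # much smaller
```

A registration loop would evaluate this measure at each candidate transform; the interpolation artifacts the paper discusses appear as spurious local extrema of this function at grid-aligned transforms.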
Interpolation of property-values between electron numbers is inconsistent with ensemble averaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miranda-Quintana, Ramón Alain; Department of Chemistry and Chemical Biology, McMaster University, Hamilton, Ontario L8S 4M1; Ayers, Paul W.
2016-06-28
In this work we explore the physical foundations of models that study the variation of the ground state energy with respect to the number of electrons (E vs. N models), in terms of general grand-canonical (GC) ensemble formulations. In particular, we focus on E vs. N models that interpolate the energy between states with integer numbers of electrons. We show that if the interpolation of the energy corresponds to a GC ensemble, it is not differentiable. Conversely, if the interpolation is smooth, then it cannot be formulated as any GC ensemble. This proves that interpolation of electronic properties between integer electron numbers is inconsistent with any form of ensemble averaging. This emphasizes the role of derivative discontinuities and the critical role of a subsystem's surroundings in determining its properties.
Okizaki, Atsutaka; Nakayama, Michihiro; Nakajima, Kaori; Takahashi, Koji
2017-12-01
Positron emission tomography (PET) has become a useful and important technique in oncology. However, the spatial resolution of PET is not high, so small abnormalities can sometimes be overlooked. To address this problem, we devised a novel algorithm, the iterative modified bicubic interpolation method (IMBIM), which generates high-resolution, high-contrast images. The purpose of this study was to investigate the utility of IMBIM for clinical FDG positron emission tomography/X-ray computed tomography (PET/CT) imaging. We evaluated PET images from 1435 patients with malignant tumors and compared the contrast (uptake ratio of abnormal lesions to background) in high-resolution images produced by the standard bicubic interpolation method (SBIM) and by IMBIM. In addition to the contrast analysis, 340 of the 1435 patients were selected for visual evaluation by nuclear medicine physicians to investigate lesion detectability. Abnormal uptakes on the images were categorized as either absolutely abnormal or equivocal findings. The average contrast with IMBIM was significantly higher than that with SBIM (P < .001). The improvements were prominent with large matrix sizes and small lesions. SBIM images showed abnormalities in 198 of 340 lesions (58.2%), while IMBIM indicated abnormalities in 312 (91.8%), a statistically significant improvement in lesion detectability (P < .001). In conclusion, IMBIM generates high-resolution images with improved contrast and, therefore, may facilitate more accurate diagnoses in clinical practice. Copyright © 2017 The Authors. Published by Wolters Kluwer Health, Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C; Han, M; Baek, J
Purpose: To investigate the detectability of a small target in different slice directions of a volumetric cone beam CT image and its impact on dose reduction. Methods: Analytic projection data of a sphere object (1 mm diameter, 0.2/cm attenuation coefficient) were generated and reconstructed by the FDK algorithm. We compared the detectability of the small target across four different backprojection methods: Hanning-weighted ramp filter with linear interpolation (RECON1), Hanning-weighted ramp filter with Fourier interpolation (RECON2), ramp filter with linear interpolation (RECON3), and ramp filter with Fourier interpolation (RECON4). For noise simulation, 200 photons per measurement were used, and the noise-only data were reconstructed using the FDK algorithm. For each reconstructed volume, axial and coronal slices were extracted and detection-SNR was calculated using a channelized Hotelling observer (CHO) with dense difference-of-Gaussian (D-DOG) channels. Results: Detection-SNR of the coronal images varies across backprojection methods, while the axial images have similar detection-SNR. The detection-SNR² ratios of coronal to axial images in RECON1 and RECON2 are 1.33 and 1.15, implying that the coronal image has better detectability than the axial image. In other words, using coronal slices for small-target detection can reduce the patient dose by about 33% and 15% compared to using axial slices in RECON1 and RECON2. Conclusion: In this work, we investigated the slice-direction-dependent detectability of a volumetric cone beam CT image. RECON1 and RECON2 produced the highest detection-SNR, with better detectability in coronal slices. These results indicate that it is more beneficial to use coronal slices to improve the detectability of a small target in a volumetric cone beam CT image.
This research was supported by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the IT Consilience Creative Program (NIPA-2014-H0201-14-1002) supervised by the NIPA (National IT Industry Promotion Agency). The authors declare no conflict of interest in relation to the work in this abstract.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, M; Baek, J
2016-06-15
Purpose: To investigate slice-direction-dependent detectability in cone beam CT images with anatomical background. Methods: We generated 3D anatomical background images using a breast anatomy model. To generate the 3D breast anatomy, we filtered 3D Gaussian noise with a square root of 1/f³ power spectrum, and then assigned the attenuation coefficients of glandular (0.8 cm⁻¹) and adipose (0.46 cm⁻¹) tissues based on voxel values. Projections were acquired by forward projection, and quantum noise was added to the projection data. The projection data were reconstructed by the FDK algorithm. We compared the detectability of a 3 mm spherical signal in images reconstructed with four different backprojection methods: Hanning-weighted ramp filter with linear interpolation (RECON1), Hanning-weighted ramp filter with Fourier interpolation (RECON2), ramp filter with linear interpolation (RECON3), and ramp filter with Fourier interpolation (RECON4). We computed the task SNR of the spherical signal in transverse and longitudinal planes using a channelized Hotelling observer with Laguerre-Gauss channels. Results: The transverse plane has similar task SNR values across backprojection methods, while the longitudinal plane has a maximum task SNR in RECON1. For all backprojection methods, the longitudinal plane has higher task SNR than the transverse plane. Conclusion: In this work, we investigated detectability for different slice directions in cone beam CT images with anatomical background. The longitudinal plane has a higher task SNR than the transverse plane, and backprojection with the Hanning-weighted ramp filter and linear interpolation (i.e., RECON1) produced the highest task SNR among the four backprojection methods.
This research was supported by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the IT Consilience Creative Program (IITP-2015-R0346-15-1008) supervised by the IITP (Institute for Information & Communications Technology Promotion), by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the MSIP (2015R1C1A1A01052268), and by the framework of the international cooperation program managed by the NRF (NRF-2015K2A1A2067635).
Topics in the two-dimensional sampling and reconstruction of images. [in remote sensing
NASA Technical Reports Server (NTRS)
Schowengerdt, R.; Gray, S.; Park, S. K.
1984-01-01
Mathematical analysis of image sampling and interpolative reconstruction is summarized and extended to two dimensions for application to data acquired from satellite sensors such as the Thematic Mapper and SPOT. It is shown that sample-scene phase influences the reconstruction of sampled images, adds considerable blur to the average system point spread function, and decreases the average system modulation transfer function. It is also determined that the parametric bicubic interpolator with alpha = -0.5 is more radiometrically accurate than the conventional bicubic interpolator with alpha = -1, at no additional cost. Finally, the parametric bicubic interpolator is found to be suitable for adaptive implementation by relating the alpha parameter to the local frequency content of an image.
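The parametric bicubic interpolator discussed above is commonly written as Keys' cubic convolution kernel with free parameter alpha. The sketch below shows the kernel and a one-dimensional interpolation step built on it; the function names are ours, and only the kernel formula is taken as given:

```python
import numpy as np

def cubic_kernel(s, a=-0.5):
    """Keys' parametric cubic convolution kernel (a = alpha)."""
    s = np.abs(np.asarray(s, dtype=float))
    out = np.zeros_like(s)
    m1 = s < 1
    m2 = (s >= 1) & (s < 2)
    out[m1] = (a + 2) * s[m1]**3 - (a + 3) * s[m1]**2 + 1
    out[m2] = a * s[m2]**3 - 5 * a * s[m2]**2 + 8 * a * s[m2] - 4 * a
    return out

def interp1d_cubic(y, x, a=-0.5):
    """Interpolate samples y (at integer positions) at fractional x."""
    i = int(np.floor(x))
    offsets = np.arange(i - 1, i + 3)            # four-sample support
    idx = np.clip(offsets, 0, len(y) - 1)        # clamp at the edges
    w = cubic_kernel(x - offsets, a)             # weights sum to 1
    return float(np.dot(y[idx], w))
```

With a = -0.5 the kernel matches the quadratic Taylor expansion of the underlying signal, which is the source of the radiometric accuracy advantage over a = -1; the cost per output sample (four multiplies per axis) is identical, hence "at no additional cost".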
DCT based interpolation filter for motion compensation in HEVC
NASA Astrophysics Data System (ADS)
Alshin, Alexander; Alshina, Elena; Park, Jeong Hoon; Han, Woo-Jin
2012-10-01
The High Efficiency Video Coding (HEVC) draft standard has the challenging goal of doubling coding efficiency relative to H.264/AVC. Many aspects of the traditional hybrid coding framework were improved during development of the new standard. Motion-compensated prediction, in particular the interpolation filter, is one area that was improved significantly over H.264/AVC. This paper presents the details of the interpolation filter design of the draft HEVC standard. The coding efficiency improvements over the H.264/AVC interpolation filter are studied, and experimental results are presented showing a 4.0% average bitrate reduction for the luma component and an 11.3% average bitrate reduction for the chroma components. The coding efficiency gains are significant for some video sequences and can reach up to 21.7%.
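As a concrete illustration of the filter family described above, the sketch below applies the 8-tap coefficients widely quoted for the HEVC luma half-sample position. The helper and its edge handling are our own simplification for illustration, not the draft's normative process:

```python
import numpy as np

# 8-tap DCT-based interpolation filter coefficients widely quoted for
# the HEVC luma half-sample position; they sum to 64, i.e. a 6-bit
# normalization shift.
HALF_PEL = np.array([-1, 4, -11, 40, 40, -11, 4, -1], dtype=np.int64)

def interp_half_pel(row):
    """Half-sample values between consecutive integer luma samples
    of one row (edge-replicated padding for the borders)."""
    padded = np.pad(row.astype(np.int64), 3, mode='edge')
    out = np.correlate(padded, HALF_PEL, mode='valid')
    return (out + 32) >> 6             # round to nearest, divide by 64
```

On a flat region the filter reproduces the sample value exactly, and on a linear ramp it lands on the midpoints, which is the minimal sanity check for any motion-compensation interpolator.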
Comparison of interpolation functions to improve a rebinning-free CT-reconstruction algorithm.
de las Heras, Hugo; Tischenko, Oleg; Xu, Yuan; Hoeschen, Christoph
2008-01-01
The robust algorithm OPED for the reconstruction of images from Radon data has recently been developed. It reconstructs an image from parallel data within a special scanning geometry that requires no rebinning but only a simple reordering, so that the acquired fan data can be used directly for the reconstruction. However, if the number of rays per fan view is increased, empty cells appear in the sinogram. These cells need to be filled by interpolation before the reconstruction can be carried out. The present paper analyzes linear interpolation, cubic splines, and parametric (or "damped") splines for this interpolation task. The reconstruction accuracy of the resulting images was measured by the normalized mean square error (NMSE), the Hilbert angle, and the mean relative error. The spatial resolution was measured by the modulation transfer function (MTF). Cubic splines were confirmed to be the most suitable method: the reconstructed images resulting from cubic spline interpolation show a significantly lower NMSE than those from linear interpolation and have the largest MTF at all frequencies. Parametric splines proved advantageous only for small sinograms (below 50 fan views).
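The core comparison in this abstract, filling empty sinogram cells by linear versus cubic-spline interpolation, can be sketched on a single detector row. The smooth test profile and gap pattern below are hypothetical, chosen only to make the accuracy gap visible:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def fill_empty_cells(angles, profile, empty):
    """Fill empty sinogram cells along one row by linear and
    cubic-spline interpolation from the filled cells."""
    known = ~empty
    linear = np.interp(angles[empty], angles[known], profile[known])
    cubic = CubicSpline(angles[known], profile[known])(angles[empty])
    return linear, cubic

# Smooth hypothetical projection profile with every fifth cell empty
angles = np.linspace(0.0, np.pi, 60)
profile = np.sin(angles) * 50.0
empty = np.zeros(60, dtype=bool)
empty[5:55:5] = True

lin, cub = fill_empty_cells(angles, profile, empty)
```

On smooth data the cubic spline's error is orders of magnitude below linear interpolation's, consistent with the NMSE ranking reported above; the paper's "damped" parametric splines trade some of this accuracy for robustness on very small sinograms.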
INTERPOL's Surveillance Network in Curbing Transnational Terrorism
Gardeazabal, Javier; Sandler, Todd
2015-01-01
Abstract This paper investigates the role that International Criminal Police Organization (INTERPOL) surveillance—the Mobile INTERPOL Network Database (MIND) and the Fixed INTERPOL Network Database (FIND)—played in the War on Terror since its inception in 2005. MIND/FIND surveillance allows countries to screen people and documents systematically at border crossings against INTERPOL databases on terrorists, fugitives, and stolen and lost travel documents. Such documents have been used in the past by terrorists to transit borders. By applying methods developed in the treatment‐effects literature, this paper establishes that countries adopting MIND/FIND experienced fewer transnational terrorist attacks than they would have had they not adopted MIND/FIND. Our estimates indicate that, on average, from 2008 to 2011, adopting and using MIND/FIND results in 0.5 fewer transnational terrorist incidents each year per 100 million people. Thus, a country like France with a population just above 64 million people in 2008 would have 0.32 fewer transnational terrorist incidents per year owing to its use of INTERPOL surveillance. This amounts to a sizeable average proportional reduction of about 30 percent.
Li, Zeyu; Chen, Yimin; Zhao, Yan; Zhu, Lifeng; Lv, Shengqing; Lu, Jiahui
2016-08-01
Interpolation of computed tomography angiography (CTA) images enables 3D reconstruction while reducing examination cost and radiation dose. However, most image interpolation algorithms cannot achieve both automation and accuracy. This study presents a new edge-matching interpolation algorithm based on wavelet decomposition of CTA, comprising mark, scale, and calculation (MSC) steps. Using real clinical image data, it mainly describes how to search for the proportional factor and use the root-mean-square operator to find a mean value. Furthermore, we re-synthesize the high-frequency and low-frequency parts of the processed image by the inverse wavelet transform to obtain the final interpolated image. MSC can make up for the shortcomings of conventional computed tomography (CT) and magnetic resonance imaging (MRI) examinations. The radiation absorbed and the examination time with the proposed synthesized images were significantly reduced. In clinical application, it can help doctors find hidden lesions in time, while patients bear less economic burden and absorb less radiation.
On the paradoxical evolution of the number of photons in a new model of interpolating Hamiltonians
NASA Astrophysics Data System (ADS)
Valverde, Clodoaldo; Baseia, Basílio
2018-01-01
We introduce a new Hamiltonian model that interpolates between the Jaynes-Cummings model (JCM) and other Hamiltonians of this type. It works with two interpolating parameters, rather than one as is traditional. Taking advantage of this greater degree of freedom, we can perform continuous interpolation between the various types of these Hamiltonians. As applications, we discuss a paradox raised in the literature and compare the time evolution of the photon statistics obtained in the various interpolating models. The role played by the average excitation in these comparisons is also highlighted.
Processing of CT sinograms acquired using a VRX detector
NASA Astrophysics Data System (ADS)
Jordan, Lawrence M.; DiBianca, Frank A.; Zou, Ping; Laughter, Joseph S.; Zeman, Herbert D.
2000-04-01
A 'variable resolution x-ray detector' (VRX) capable of resolving beyond 100 cycles/mm in a single dimension has been proposed by DiBianca et al. The use of detectors of this design for computed tomography (CT) imaging requires novel preprocessing of the data to correct for the detector's non-uniform imaging characteristics over its range of view. This paper describes algorithms developed specifically to adjust VRX data for varying magnification, source-to-detector range, and beam obliquity, and to sharpen reconstructions by deconvolving the ray impulse function. The preprocessing also incorporates nonlinear interpolation of VRX raw data into canonical CT sinogram formats.
TU-CD-BRA-01: A Novel 3D Registration Method for Multiparametric Radiological Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akhbardeh, A; Parekth, VS; Jacobs, MA
2015-06-15
Purpose: Multiparametric and multimodality radiological imaging methods, such as magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET), provide multiple types of tissue contrast and anatomical information for clinical diagnosis. However, these radiological modalities are acquired with very different technical parameters, e.g., field of view (FOV), matrix size, and scan planes, which can make registering the different data sets challenging. Therefore, we developed a hybrid registration method based on 3D wavelet transformation and 3D interpolation that performs 3D resampling and rotation of the target radiological images without loss of information. Methods: T1-weighted, T2-weighted, diffusion-weighted imaging (DWI), dynamic contrast-enhanced (DCE) MRI, and PET/CT were used in the registration algorithm, drawn from breast and prostate data at 3T MRI and from multimodality (PET/CT) cases. The hybrid registration scheme consists of several steps to reslice and match each modality using a combination of 3D wavelets, interpolation, and affine registration. First, orthogonal reslicing equalizes the FOV, matrix sizes, and number of slices using the wavelet transformation. Second, angular resampling of the target data matches it to the reference data. Finally, using the optimized angles from resampling, 3D registration by similarity transformation (scaling and translation) between the reference and resliced target volumes is performed. After registration, the mean square error (MSE) and Dice similarity (DS) between the reference and registered target volumes were calculated. Results: The 3D registration method registered synthetic and clinical data with significant improvement (p<0.05) of the overlap between anatomical structures. After transforming and deforming the synthetic data, the MSE and Dice similarity were 0.12 and 0.99.
The average improvement of the MSE was 62% (0.27 to 0.10) in breast and 63% (0.13 to 0.04; p<0.05) in prostate. The Dice similarity improved by 8% (0.91 to 0.99) in breast and by 89% (0.01 to 0.90; p<0.05) in prostate. Conclusion: Our 3D wavelet hybrid registration approach registered diverse breast and prostate data from different radiological images (MR/PET/CT) with high accuracy.
Atlas-based whole-body segmentation of mice from low-contrast Micro-CT data.
Baiker, Martin; Milles, Julien; Dijkstra, Jouke; Henning, Tobias D; Weber, Axel W; Que, Ivo; Kaijzel, Eric L; Löwik, Clemens W G M; Reiber, Johan H C; Lelieveldt, Boudewijn P F
2010-12-01
This paper presents a fully automated method for atlas-based whole-body segmentation in non-contrast-enhanced Micro-CT data of mice. The position and posture of mice in such studies may vary to a large extent, complicating data comparison in cross-sectional and follow-up studies. Moreover, Micro-CT typically yields only poor soft-tissue contrast for abdominal organs. To overcome these challenges, we propose a method that divides the problem into an atlas constrained registration based on high-contrast organs in Micro-CT (skeleton, lungs and skin), and a soft tissue approximation step for low-contrast organs. We first present a modification of the MOBY mouse atlas (Segars et al., 2004) by partitioning the skeleton into individual bones, by adding anatomically realistic joint types and by defining a hierarchical atlas tree description. The individual bones as well as the lungs of this adapted MOBY atlas are then registered one by one traversing the model tree hierarchy. To this end, we employ the Iterative Closest Point method and constrain the Degrees of Freedom of the local registration, dependent on the joint type and motion range. This atlas-based strategy renders the method highly robust to exceptionally large postural differences among scans and to moderate pathological bone deformations. The skin of the torso is registered by employing a novel method for matching distributions of geodesic distances locally, constrained by the registered skeleton. Because of the absence of image contrast between abdominal organs, they are interpolated from the atlas to the subject domain using Thin-Plate-Spline approximation, defined by correspondences on the already established registration of high-contrast structures (bones, lungs and skin). We extensively evaluate the proposed registration method, using 26 non-contrast-enhanced Micro-CT datasets of mice, and the skin registration and organ interpolation, using contrast-enhanced Micro-CT datasets of 15 mice. 
The posture and shape varied significantly among the animals, and the data were acquired in vivo. After registration, the mean Euclidean distance was less than two voxel dimensions for the skeleton and the lungs, and less than one voxel dimension for the skin. Dice coefficients of volume overlap between manually segmented and interpolated skeleton and organs varied between 0.47 ± 0.08 for the kidneys and 0.73 ± 0.04 for the brain. These experiments demonstrate the method's effectiveness in overcoming exceptionally large variations in posture, yielding acceptable approximation accuracy even in the absence of soft-tissue contrast in in vivo Micro-CT data, without requiring user initialization. Copyright 2010 Elsevier B.V. All rights reserved.
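The thin-plate-spline step above, interpolating low-contrast organs from landmark correspondences on the registered high-contrast structures, can be sketched with SciPy's thin-plate-spline RBF interpolator. The landmarks and noise level below are synthetic placeholders, not the MOBY atlas data:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical landmark correspondences established on the registered
# high-contrast structures: atlas coordinates -> subject coordinates.
rng = np.random.default_rng(1)
atlas_pts = rng.random((20, 3))
subject_pts = atlas_pts + 0.05 * rng.standard_normal((20, 3))

# Thin-plate-spline warp (default smoothing=0 interpolates exactly);
# vector-valued, so one call maps all three output coordinates.
tps = RBFInterpolator(atlas_pts, subject_pts, kernel='thin_plate_spline')

# Carry an atlas organ point into the subject domain.
organ_pt = np.array([[0.4, 0.5, 0.6]])
mapped = tps(organ_pt)
```

Because the warp reproduces the correspondences exactly while varying smoothly in between, organ surfaces with no image contrast of their own inherit a plausible deformation from the surrounding bones, lungs, and skin.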
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chao, M; Yuan, Y; Lo, Y
Purpose: To develop a novel strategy to extract lung tumor motion from cone beam CT (CBCT) projections using an active contour model with the respiration interpolated from diaphragm motion. Methods: Tumor tracking on CBCT projections was accomplished with templates derived from the planning CT (pCT). There are three major steps in the proposed algorithm: 1) The pCT was modified to form two CT sets, a tumor-removed pCT and a tumor-only pCT; the respective digitally reconstructed radiographs, DRRtr and DRRto, following the same geometry as the CBCT projections, were generated correspondingly. 2) The DRRtr was rigidly registered with the CBCT projections on a frame-by-frame basis. Difference images between the CBCT projections and the registered DRRtr were generated, in which tumor visibility was appreciably enhanced. 3) An active contour method was applied to track the tumor motion on the tumor-enhanced projections, with DRRto as templates to initialize the tracking, while the respiratory motion was compensated for by interpolating the diaphragm motion estimated by our novel constrained linear regression approach. CBCT and pCT from five patients undergoing stereotactic body radiotherapy were included, in addition to scans of a Quasar phantom programmed with known motion. Manual tumor tracking was performed on the CBCT projections and compared to the automatic tracking to evaluate the algorithm's accuracy. Results: The phantom study showed that the error between the automatic tracking and the ground truth was within 0.2 mm. For the patients, the discrepancy between the calculation and the manual tracking was between 1.4 and 2.2 mm, depending on the location and shape of the lung tumor. Similar patterns were observed in the frequency domain. Conclusion: The new algorithm demonstrated the feasibility of tracking the lung tumor in noisy CBCT projections, providing a potential solution to better motion management for lung radiation therapy.
Prakosa, A.; Malamas, P.; Zhang, S.; Pashakhanloo, F.; Arevalo, H.; Herzka, D. A.; Lardo, A.; Halperin, H.; McVeigh, E.; Trayanova, N.; Vadakkumpadan, F.
2014-01-01
Patient-specific modeling of ventricular electrophysiology requires an interpolated reconstruction of the 3-dimensional (3D) geometry of the patient ventricles from the low-resolution (Lo-res) clinical images. The goal of this study was to implement a processing pipeline for obtaining the interpolated reconstruction, and thoroughly evaluate the efficacy of this pipeline in comparison with alternative methods. The pipeline implemented here involves contouring the epi- and endocardial boundaries in Lo-res images, interpolating the contours using the variational implicit functions method, and merging the interpolation results to obtain the ventricular reconstruction. Five alternative interpolation methods, namely linear, cubic spline, spherical harmonics, cylindrical harmonics, and shape-based interpolation were implemented for comparison. In the thorough evaluation of the processing pipeline, Hi-res magnetic resonance (MR), computed tomography (CT), and diffusion tensor (DT) MR images from numerous hearts were used. Reconstructions obtained from the Hi-res images were compared with the reconstructions computed by each of the interpolation methods from a sparse sample of the Hi-res contours, which mimicked Lo-res clinical images. Qualitative and quantitative comparison of these ventricular geometry reconstructions showed that the variational implicit functions approach performed better than others. Additionally, the outcomes of electrophysiological simulations (sinus rhythm activation maps and pseudo-ECGs) conducted using models based on the various reconstructions were compared. These electrophysiological simulations demonstrated that our implementation of the variational implicit functions-based method had the best accuracy. PMID:25148771
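One of the alternative methods compared above, shape-based interpolation, admits a compact sketch: convert each binary cross-section to a signed distance map, blend, and re-threshold. The circular cross-sections below are hypothetical stand-ins for ventricular contours on adjacent low-resolution slices:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Positive inside the binary shape, negative outside."""
    return distance_transform_edt(mask) - distance_transform_edt(~mask)

def shape_based_interp(mask_a, mask_b, t=0.5):
    """Interpolate a binary cross-section between two slices by
    blending signed distance maps and re-thresholding."""
    d = (1.0 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
    return d > 0

# Two hypothetical circular cross-sections of one structure
yy, xx = np.mgrid[0:64, 0:64]
r2 = (xx - 32) ** 2 + (yy - 32) ** 2
slice_a = r2 < 10 ** 2
slice_b = r2 < 20 ** 2
middle = shape_based_interp(slice_a, slice_b)   # close to radius 15
```

Unlike per-pixel linear blending of the masks, this construction always yields a crisp intermediate shape; the variational-implicit-functions approach favored by the study generalizes the same idea to a single smooth implicit surface fitted to all contours at once.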
Stevensson, Baltzar; Edén, Mattias
2011-03-28
We introduce a novel interpolation strategy, based on nonequispaced fast transforms involving spherical harmonics or Wigner functions, for efficient calculation of powder spectra in (nuclear) magnetic resonance spectroscopy. The fast Wigner transform (FWT) interpolation minimizes the time-consuming calculation stages by sampling over a small number of Gaussian spherical quadrature (GSQ) orientations, which are exploited to determine the spectral frequencies and amplitudes of a 10-70 times larger GSQ set. This results in almost the same orientational averaging accuracy as if the expanded grid were utilized explicitly in an order-of-magnitude slower computation. FWT interpolation is applicable to spectral simulations involving any time-independent or time-dependent and noncommuting spin Hamiltonian. We further show that merging FWT interpolation with the well-established ASG procedure of Alderman, Solum, and Grant [J. Chem. Phys. 84, 3717 (1986)] speeds up simulations by 2-7 times relative to ASG alone (besides greatly extending its scope of application), and by one to two orders of magnitude compared to direct orientational averaging in the absence of interpolation. Demonstrations of efficient spectral simulations are given for several magic-angle spinning scenarios in NMR, encompassing half-integer quadrupolar spins and homonuclear dipolar-coupled ¹³C systems.
NASA Astrophysics Data System (ADS)
Kim, Juhye; Nam, Haewon; Lee, Rena
2015-07-01
In CT (computed tomography) images, metal materials such as dental fillings or surgical clips can cause metal artifacts and degrade image quality. In severe cases, this may lead to misdiagnosis. In this research, we developed a new MAR (metal artifact reduction) algorithm using an edge-preserving filter and the MATLAB program (MathWorks, version R2012a). The proposed algorithm consists of six steps: image reconstruction from projection data, metal segmentation, forward projection, interpolation, application of an edge-preserving smoothing filter, and new image reconstruction. For the evaluation of the proposed algorithm, we obtained both numerical simulation data and data from a Rando phantom. In the numerical simulation data, four metal regions were added to the Shepp-Logan phantom to produce metal artifacts. The projection data of the metal-inserted Rando phantom were obtained with a prototype CBCT scanner manufactured by the medical engineering and medical physics (MEMP) laboratory research group in medical science at Ewha Womans University. The proposed algorithm was then applied, and the results were compared with the original image (with metal artifacts, without correction) and with a corrected image based on linear interpolation. Both visual and quantitative evaluations were performed. Compared with the original image with metal artifacts and with the image corrected by linear interpolation, both the numerical and the experimental phantom data demonstrated that the proposed algorithm reduced the metal artifacts. In conclusion, the evaluation in this research showed that the proposed algorithm outperformed the interpolation-based MAR algorithm. If optimization and a stability evaluation of the proposed algorithm can be performed, the developed algorithm is expected to be an effective tool for eliminating metal artifacts, even in commercial CT systems.
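The baseline the study compares against, linear interpolation across the metal trace in the sinogram, can be sketched directly. The toy sinogram and mask below are illustrative, not the phantom data; the paper's own method additionally applies an edge-preserving filter before the final reconstruction:

```python
import numpy as np

def interpolate_metal_trace(sinogram, metal_trace):
    """Replace metal-corrupted detector bins in each projection row by
    linear interpolation from the neighboring uncorrupted bins."""
    corrected = sinogram.astype(float).copy()
    bins = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):
        bad = metal_trace[i]
        if bad.any() and not bad.all():
            corrected[i, bad] = np.interp(bins[bad], bins[~bad],
                                          corrected[i, ~bad])
    return corrected
```

Reconstructing from the corrected sinogram removes the bright streaks radiating from the metal, at the cost of some blur around the metal region, which is the secondary artifact the edge-preserving filtering step is meant to suppress.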
Uncertainty in coal property valuation in West Virginia: A case study
Hohn, M.E.; McDowell, R.R.
2001-01-01
Interpolated grids of coal bed thickness are being considered for use in a proposed method of coal taxation in the state of West Virginia (United States). To assess the origin and magnitude of possible inaccuracies in calculated coal tonnage, we used conditional simulation to generate equiprobable realizations of net coal thickness for two coals on a 7.5-minute topographic quadrangle, and for a third coal on a second quadrangle. The coals differed in average thickness and in the proportion of original coal removed by erosion; all three coals crop out in the study area. Coal tonnage was calculated for each realization and for each interpolated grid over actual and artificial property parcels, and the differences were summarized as graphs of the percent difference between tonnage calculated from the grid and average tonnage from the simulations. Coal in an individual parcel was considered minable for valuation purposes if its average thickness exceeded 30 inches. The results show that over 75% of the parcels are classified correctly as minable or unminable based on interpolated grids of coal bed thickness. Although between 80 and 90% of the tonnages differ by less than 20% between interpolated and simulated values, a nonlinear conditional bias might exist in the estimation of coal tonnage from interpolated thickness, such that tonnage is underestimated where coal is thin and overestimated where coal is thick. The largest percent differences occur for parcels that are small in area, although, because of the small quantities of coal in question, the bias is small on an absolute scale for these parcels. For a given parcel size, the maximum apparent overestimation of coal tonnage occurs in parcels with an average coal bed thickness near the minable cutoff of 30 in. The conditional bias in tonnage for parcels with coal thickness exceeding the cutoff by 10 in. or more is constant for two of the three coals studied, and increases slightly with average thickness for the third coal. © 2001 International Association for Mathematical Geology.
Restoring method for missing data of spatial structural stress monitoring based on correlation
NASA Astrophysics Data System (ADS)
Zhang, Zeyu; Luo, Yaozhi
2017-07-01
Long-term monitoring of spatial structures is of great importance for a full understanding of their performance and safety. Gaps in the monitoring data record affect data analysis and safety assessment of the structure. Based on long-term monitoring data from the steel structure of the Hangzhou Olympic Center Stadium, the correlation between stress changes at the measuring points is studied, and an interpolation method for missing stress data is proposed. To fit the correlation, stress data from correlated measuring points are selected over the three months of the season in which the data are missing. Daytime and nighttime data are fitted separately for interpolation. For simple linear regression, when a single point's correlation coefficient is 0.9 or more, the average interpolation error is about 5%. For multiple linear regression, interpolation accuracy does not increase significantly once more than six correlated points are used. The stress baseline value of each construction step should be calculated before interpolating missing data from the construction stage; here the average error is within 10%. The interpolation error for continuous missing data is slightly larger than for discrete missing data. The data missing rate for this method should preferably not exceed 30%. Finally, a measuring point's missing monitoring data are restored to verify the validity of the method.
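The regression-based gap filling described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the helper `restore_missing` and the sensor data are hypothetical:

```python
import numpy as np

def restore_missing(ref, target, missing_idx):
    """Fill target[missing_idx] from a correlated reference point using a
    simple linear regression target ~ a*ref + b fitted on the samples
    where target is observed."""
    mask = np.ones(len(target), dtype=bool)
    mask[missing_idx] = False
    a, b = np.polyfit(ref[mask], target[mask], 1)   # least-squares line
    filled = target.copy()
    filled[missing_idx] = a * ref[missing_idx] + b
    return filled

# Demo: the target record is an exact affine function of the reference
# record, so the gap is recovered exactly.
ref = np.linspace(0.0, 10.0, 50)
target = 2.0 * ref + 1.0
target_gappy = target.copy()
gap = np.arange(20, 25)
target_gappy[gap] = np.nan
restored = restore_missing(ref, target_gappy, gap)
```

In practice, the fit would be computed separately for daytime and nighttime data, as the abstract describes.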
An integral conservative gridding-algorithm using Hermitian curve interpolation.
Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K
2008-11-07
The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. 
It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of x-ray CT-images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
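The core idea above, interpolating the integrated data rather than the samples themselves, can be illustrated with a minimal sketch. The paper uses a parametrized Hermitian interpolation curve; for brevity this sketch interpolates the cumulative integral linearly, which already demonstrates the conservation property:

```python
import numpy as np

def conservative_rebin(edges_src, values, edges_dst):
    """Re-bin histogrammed values onto new bin edges by interpolating the
    cumulative integral and differencing, so the total is conserved."""
    cum = np.concatenate(([0.0], np.cumsum(values)))   # integral up to each edge
    cum_dst = np.interp(edges_dst, edges_src, cum)
    return np.diff(cum_dst)

src = np.linspace(0.0, 1.0, 11)    # 10 coarse bins
vals = np.arange(1.0, 11.0)        # coarse-bin contents
dst = np.linspace(0.0, 1.0, 29)    # 28 finer bins
rebinned = conservative_rebin(src, vals, dst)
```

Replacing the linear interpolant of the cumulative curve with a monotone Hermite curve, as in the paper, suppresses the overshoot and undershoot that high-order polynomials introduce while keeping the integral exact.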
Objective Interpolation of Scatterometer Winds
NASA Technical Reports Server (NTRS)
Tang, Wenquing; Liu, W. Timothy
1996-01-01
Global wind fields are produced by successive corrections that use measurements by the European Remote Sensing Satellite (ERS-1) scatterometer. The methodology is described. The wind fields at 10-meter height provided by the European Center for Medium-Range Weather Forecasting (ECMWF) are used to initialize the interpolation process. The interpolated wind field product ERSI is evaluated in terms of its improvement over the initial guess field (ECMWF) and the bin-averaged ERS-1 wind field (ERSB). Spatial and temporal differences between ERSI, ECMWF and ERSB are presented and discussed.
Fundamental techniques for resolution enhancement of average subsampled images
NASA Astrophysics Data System (ADS)
Shen, Day-Fann; Chiu, Chui-Wen
2012-07-01
Although single image resolution enhancement, otherwise known as super-resolution, is widely regarded as an ill-posed inverse problem, we re-examine the fundamental relationship between a high-resolution (HR) image acquisition module and its low-resolution (LR) counterpart. Analysis shows that partial HR information is attenuated, but still exists, in the LR version after the fundamental averaging-and-subsampling process. As a result, we propose a modified Laplacian filter (MLF) and an intensity correction process (ICP) as the pre- and post-process, respectively, around an interpolation algorithm, to partially restore the attenuated information in a super-resolution (SR) enhanced image. Experiments show that the proposed MLF and ICP provide significant and consistent quality improvements on all 10 test images with three well-known interpolation methods: bilinear, bi-cubic, and the SR graphical user interface program provided by Ecole Polytechnique Federale de Lausanne. The proposed MLF and ICP are simple to implement and generally applicable to all average-subsampled LR images. MLF and ICP, separately or together, can be integrated into most interpolation methods that attempt to restore the original HR contents. Finally, the idea of MLF and ICP can also be applied to average-subsampled one-dimensional signals.
[An Improved Spectral Quaternion Interpolation Method of Diffusion Tensor Imaging].
Xu, Yonghong; Gao, Shangce; Hao, Xiaofei
2016-04-01
Diffusion tensor imaging (DTI) is a rapidly developing magnetic resonance imaging technology of recent years. Diffusion tensor interpolation is a very important procedure in DTI image processing. The traditional spectral quaternion interpolation method revises the direction of the interpolated tensor and can preserve tensor anisotropy, but it does not revise the size of the tensors. The present study puts forward an improved spectral quaternion interpolation method on the basis of the traditional one. First, we decomposed the diffusion tensors, with the direction of each tensor represented by a quaternion. Then we revised the size and direction of the tensor separately according to the situation. Finally, we acquired the tensor at the interpolation point by calculating the weighted average. We compared the improved method with the spectral quaternion method and the Log-Euclidean method on simulated data and real data. The results showed that the improved method could not only keep the monotonicity of the fractional anisotropy (FA) and the determinant of the tensors, but also preserve tensor anisotropy. In conclusion, the improved method provides an important interpolation method for diffusion tensor image processing.
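The Log-Euclidean method used as a comparison baseline above can be sketched as follows (an illustrative numpy implementation, not the paper's code; the improved spectral quaternion method itself is more involved):

```python
import numpy as np

def logm_spd(A):
    """Matrix logarithm of a symmetric positive-definite tensor."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.log(w)) @ V.T

def expm_sym(A):
    """Matrix exponential of a symmetric matrix."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.exp(w)) @ V.T

def log_euclidean_interp(T1, T2, t):
    """Log-Euclidean interpolation between two diffusion tensors:
    linear interpolation in the matrix-log domain."""
    return expm_sym((1.0 - t) * logm_spd(T1) + t * logm_spd(T2))

# Midpoint of two prolate tensors with swapped principal directions.
T1 = np.diag([3.0, 1.0, 1.0])
T2 = np.diag([1.0, 3.0, 1.0])
Tm = log_euclidean_interp(T1, T2, 0.5)
```

A known property of this scheme, relevant to the comparison in the abstract, is that the determinant interpolates geometrically between the endpoint determinants.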
Luo, Qiang; Yan, Zhuangzhi; Gu, Dongxing; Cao, Lei
This paper proposed an image interpolation algorithm based on bilinear interpolation and a color correction algorithm based on polynomial regression, implemented on an FPGA, addressing the limited number of imaging pixels and the color distortion of the ultra-thin electronic endoscope. Simulation results showed that the proposed algorithm realized real-time display of 1280 x 720@60Hz HD video and, using the X-rite color checker as the color standard, reduced the average color difference by about 30% compared with that before color correction.
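A bilinear interpolation upscaler of the kind the paper implements in FPGA logic can be sketched in software. This is illustrative only; `bilinear_upscale` and its edge-handling choices are assumptions, not the paper's implementation:

```python
import numpy as np

def bilinear_upscale(img, scale):
    """Upscale a 2-D grayscale image by an integer factor using bilinear
    interpolation with half-pixel centre alignment and edge replication."""
    h, w = img.shape
    ys = (np.arange(h * scale) + 0.5) / scale - 0.5
    xs = (np.arange(w * scale) + 0.5) / scale - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None]   # vertical blend weights
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :]   # horizontal blend weights
    tl = img[np.ix_(y0, x0)]        # top-left neighbours
    tr = img[np.ix_(y0, x0 + 1)]    # top-right
    bl = img[np.ix_(y0 + 1, x0)]    # bottom-left
    br = img[np.ix_(y0 + 1, x0 + 1)]
    top = tl * (1.0 - wx) + tr * wx
    bot = bl * (1.0 - wx) + br * wx
    return top * (1.0 - wy) + bot * wy

out = bilinear_upscale(np.ones((4, 4)), 2)
```

Each output pixel is a weighted blend of its four nearest input pixels, which maps naturally onto a small fixed-point multiply-add datapath in hardware.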
MR Image Based Approach for Metal Artifact Reduction in X-Ray CT
2013-01-01
For decades, computed tomography (CT) images have been widely used to discover valuable anatomical information. Metallic implants such as dental fillings cause severe streaking artifacts which significantly degrade the quality of CT images. In this paper, we propose a new method for metal-artifact reduction using complementary magnetic resonance (MR) images. The method exploits the possibilities which arise from the use of emerging trimodality systems. The proposed algorithm corrects reconstructed CT images: the projection data affected by dental fillings are detected, and the missing projections are replaced with data obtained from a corresponding MR image. A simulation study was conducted in order to compare the reconstructed images with images reconstructed through linear interpolation, a common metal-artifact reduction technique. The results show that the proposed method is successful in reducing severe metal artifacts without introducing a significant amount of secondary artifacts. PMID:24302860
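The linear-interpolation baseline that the proposed method is compared against can be sketched as follows (illustrative; the function name and the per-row treatment are assumptions, and a real pipeline would detect the metal trace automatically):

```python
import numpy as np

def interpolate_metal_trace(sinogram, metal_mask):
    """Replace metal-affected detector bins in each projection row by
    linear interpolation from the nearest unaffected neighbours."""
    out = sinogram.astype(float).copy()
    cols = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):
        bad = metal_mask[i]
        if bad.any():
            out[i, bad] = np.interp(cols[bad], cols[~bad], sinogram[i, ~bad])
    return out

# One projection row with two corrupted bins.
sino = np.array([[1.0, 2.0, 0.0, 0.0, 5.0, 6.0]])
mask = np.array([[False, False, True, True, False, False]])
fixed = interpolate_metal_trace(sino, mask)
```

The proposed method replaces this generic interpolation step with projections synthesized from the registered MR image, which is why it produces fewer secondary artifacts.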
Spatio-temporal interpolation of precipitation during monsoon periods in Pakistan
NASA Astrophysics Data System (ADS)
Hussain, Ijaz; Spöck, Gunter; Pilz, Jürgen; Yu, Hwa-Lung
2010-08-01
Spatio-temporal estimation of precipitation over a region is essential to the modeling of hydrologic processes for water resources management. The changing magnitude and space-time heterogeneity of rainfall observations make space-time estimation of precipitation a challenging task. In this paper we propose a Box-Cox-transformed hierarchical Bayesian multivariate spatio-temporal interpolation method for the skewed response variable. The proposed method is applied to estimate space-time monthly precipitation in the monsoon periods during 1974-2000; the 27-year monthly average precipitation data were obtained from 51 stations in Pakistan. The results of the transformed hierarchical Bayesian multivariate spatio-temporal interpolation are compared to those of non-transformed hierarchical Bayesian interpolation using cross-validation. The software developed by [11] is used for Bayesian non-stationary multivariate space-time interpolation. It is observed that the transformed hierarchical Bayesian method provides greater accuracy than the non-transformed hierarchical Bayesian method.
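The Box-Cox transform used above to handle the skewed precipitation response has a simple closed form; the following sketch (illustrative, not the authors' code) shows the forward and inverse transforms:

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox transform of a positive, right-skewed variable such as
    monthly precipitation; lam = 0 reduces to the log transform."""
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0 else (y ** lam - 1.0) / lam

def inv_boxcox(z, lam):
    """Back-transform interpolated values to the original scale."""
    z = np.asarray(z, dtype=float)
    return np.exp(z) if lam == 0 else (lam * z + 1.0) ** (1.0 / lam)

# Round trip on skewed, strictly positive data.
rain = np.array([0.5, 2.0, 10.0, 45.0])
back = inv_boxcox(boxcox(rain, 0.5), 0.5)
```

Interpolation is carried out on the transformed, approximately Gaussian scale, and predictions are back-transformed to millimetres of precipitation.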
Allen, Robert C; Rutan, Sarah C
2011-10-31
Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow for alignment of retention times from different injections. Five different interpolation methods were investigated: linear interpolation followed by cross-correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis (PARAFAC) to determine the relative area of each peak in each injection. A calibration curve was generated for the simulated data set, and the standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. The Gaussian fitting interpolation technique resulted in the lowest standard error of prediction and average relative standard deviation for the simulated data. Upon applying the interpolation techniques to the experimental data, however, most of the methods did not produce statistically different relative peak areas from one another. Even so, performance was improved relative to the PARAFAC results obtained when analyzing the unaligned data. Copyright © 2011 Elsevier B.V. All rights reserved.
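Gaussian-fitting interpolation of an undersampled peak can be sketched with Caruana's log-parabola method, which recovers the peak parameters from a quadratic fit to the log of the signal. This is an illustrative stand-in; the paper does not specify its fitting procedure, and real chromatographic data would need baseline handling:

```python
import numpy as np

def gaussian_peak_fit(t, y):
    """Recover (amplitude, centre, sigma) of a sampled Gaussian peak by
    fitting a parabola to log(y) (Caruana's method). The fitted model can
    then be evaluated on an arbitrarily fine retention-time grid."""
    c2, c1, c0 = np.polyfit(t, np.log(y), 2)   # log of a Gaussian is a parabola
    sigma = np.sqrt(-1.0 / (2.0 * c2))
    mu = c1 * sigma ** 2
    amp = np.exp(c0 + mu ** 2 / (2.0 * sigma ** 2))
    return amp, mu, sigma

# An undersampled Gaussian peak (9 points across the first dimension).
t = np.linspace(-2.0, 4.0, 9)
y = 3.0 * np.exp(-(t - 1.0) ** 2 / (2.0 * 0.8 ** 2))
amp, mu, sigma = gaussian_peak_fit(t, y)
```

Because the model is continuous, the fitted peak can be resampled at any density before alignment, which is the role interpolation plays in the study.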
An efficient interpolation filter VLSI architecture for HEVC standard
NASA Astrophysics Data System (ADS)
Zhou, Wei; Zhou, Xin; Lian, Xiaocong; Liu, Zhenyu; Liu, Xiaoxiang
2015-12-01
The next-generation video coding standard, High-Efficiency Video Coding (HEVC), is especially efficient for coding high-resolution video such as 8K ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost, as it consumes more than 40% of the total encoding time and thus results in high computational complexity. With the aim of supporting 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. First, a new interpolation filter algorithm based on an 8-pixel interpolation unit is proposed. It saves 19.7% of processing time on average with acceptable coding quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused interpolation data path, an efficient memory organization, and a reconfigurable pipelined interpolation filter engine, is presented to reduce the hardware implementation area and achieve high throughput. The final VLSI implementation requires only 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel or quarter-pixel interpolation, which reduces the area cost by about 131,040 bits of RAM. The processing latency of the proposed VLSI architecture supports real-time processing of 4:2:0 format 7680 × 4320@78fps video sequences.
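For reference, HEVC's luma half-sample interpolation uses a symmetric 8-tap filter whose coefficients come from the standard; the sketch below (illustrative, unrelated to the paper's VLSI data path) applies it to a 1-D pixel row:

```python
import numpy as np

# HEVC luma half-sample interpolation taps (they sum to 64, i.e. unity gain
# after the normalization step in the standard).
HALF_PEL_TAPS = np.array([-1, 4, -11, 40, 40, -11, 4, -1], dtype=float)

def half_pel_interp(row):
    """Half-sample interpolation of a 1-D pixel row (valid region only)."""
    return np.convolve(row, HALF_PEL_TAPS[::-1], mode='valid') / 64.0

# A flat row stays flat because the taps sum to 64.
half = half_pel_interp(np.full(12, 7.0))
```

The 8-tap span of this filter is what makes the memory organization and data-path reuse in the proposed architecture non-trivial: each interpolated pixel needs eight neighbouring reference pixels.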
Ehrhardt, J; Säring, D; Handels, H
2007-01-01
Modern tomographic imaging devices enable the acquisition of spatial and temporal image sequences. However, the spatial and temporal resolution of such devices is limited, and image interpolation techniques are therefore needed to represent images at a desired level of discretization. This paper presents a method for structure-preserving interpolation between neighboring slices in temporal or spatial image sequences. In a first step, the spatiotemporal velocity field between image slices is determined using an optical flow-based registration method in order to establish spatial correspondence between adjacent slices. An iterative algorithm is applied using the spatial and temporal image derivatives and a spatiotemporal smoothing step. Afterwards, the calculated velocity field is used to generate an interpolated image at the desired time by averaging intensities between corresponding points. Three quantitative measures are defined to evaluate the performance of the interpolation method. The behavior and capability of the algorithm are demonstrated on synthetic images. A population of 17 temporal and spatial image sequences is utilized to compare the optical flow-based interpolation method to linear and shape-based interpolation. The quantitative results show that the optical flow-based method outperforms linear and shape-based interpolation with statistical significance. The interpolation method presented is able to generate image sequences with the spatial or temporal resolution needed for image comparison, analysis or visualization tasks. Quantitative and qualitative measures extracted from synthetic phantoms and medical image data show that the new method has clear advantages over linear and shape-based interpolation.
Xiao, Yong; Gu, Xiaomin; Yin, Shiyang; Shao, Jingli; Cui, Yali; Zhang, Qiulan; Niu, Yong
2016-01-01
Based on geo-statistical theory and the ArcGIS geo-statistical module, data from 30 groundwater level observation wells were used to estimate the decline of the groundwater level in the Beijing piedmont. Seven different interpolation methods (inverse distance weighted interpolation, global polynomial interpolation, local polynomial interpolation, tension spline interpolation, ordinary Kriging interpolation, simple Kriging interpolation and universal Kriging interpolation) were used to interpolate the groundwater level between 2001 and 2013. Cross-validation, absolute error and the coefficient of determination (R2) were applied to evaluate the accuracy of the different methods. The results show that the simple Kriging method gave the best fit. The analysis of spatial and temporal variability suggests that the nugget effects from 2001 to 2013 were increasing, which means the spatial correlation weakened gradually under the influence of human activities. The spatial variability in the middle area of the alluvial-proluvial fan is relatively higher than in the top and bottom areas. Owing to changes in land use, the groundwater level also shows temporal variation: the average decline rate of the groundwater level between 2007 and 2013 increased compared with 2001-2006. Urban development and population growth cause over-exploitation in residential and industrial areas. The decline rate of the groundwater level in residential, industrial and river areas is relatively high, while the decrease in farmland area and the development of water-saving irrigation have reduced the quantity of water used by agriculture, so the decline rate of the groundwater level in agricultural areas is not significant.
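Inverse distance weighted interpolation, the first of the seven methods compared, can be sketched as follows (illustrative numpy code; the `power` and `eps` choices are assumptions, and the Kriging variants that performed best additionally model the spatial covariance):

```python
import numpy as np

def idw(xy_obs, z_obs, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighted interpolation: each query point receives a
    distance-weighted average of the observed groundwater levels. `eps`
    avoids division by zero when a query coincides with a well."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * z_obs).sum(axis=1) / w.sum(axis=1)

# Three observation wells; querying at a well reproduces its value.
xy_obs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
z_obs = np.array([10.0, 20.0, 30.0])
z_at_obs = idw(xy_obs, z_obs, xy_obs)
```

Unlike Kriging, IDW ignores the spatial correlation structure (the variogram), which is why the nugget-effect analysis in the study is only possible with the geostatistical methods.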
An automatic approach for 3D registration of CT scans
NASA Astrophysics Data System (ADS)
Hu, Yang; Saber, Eli; Dianat, Sohail; Vantaram, Sreenath Rao; Abhyankar, Vishwas
2012-03-01
CT (Computed tomography) is a widely employed imaging modality in the medical field. Normally, a volume of CT scans is prescribed by a doctor when a specific region of the body (typically neck to groin) is suspected of being abnormal. The doctors are required to make professional diagnoses based upon the obtained datasets. In this paper, we propose a registration algorithm that automatically aligns corresponding scans from 'Study' to 'Atlas', helping healthcare personnel with this task. The proposed algorithm brings both 'Atlas' and 'Study' to the same resolution through 3D interpolation. After retrieving the scanned slice volume in the 'Study' and the corresponding volume in the original 'Atlas' dataset, a 3D cross-correlation method is used to identify and register the various body parts.
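FFT-based 3-D cross-correlation for recovering an integer translation between two volumes can be sketched as follows (illustrative; circular boundaries are assumed, and the paper's pipeline additionally includes the 3D interpolation step not shown here):

```python
import numpy as np

def find_shift_3d(vol_a, vol_b):
    """Estimate the integer translation that aligns vol_b to vol_a via
    FFT-based 3-D cross-correlation (assumes circular boundaries)."""
    corr = np.fft.ifftn(np.fft.fftn(vol_a) * np.conj(np.fft.fftn(vol_b))).real
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    # map peak positions past the midpoint to negative shifts
    return tuple(int(i) if i <= n // 2 else int(i) - n
                 for i, n in zip(idx, corr.shape))

# Demo: a volume rolled by a known offset is recovered exactly.
rng = np.random.default_rng(0)
a = rng.standard_normal((6, 7, 8))
b = np.roll(a, (2, -3, 1), axis=(0, 1, 2))
shift = find_shift_3d(a, b)
```

Computing the correlation in the frequency domain costs O(N log N) rather than the O(N²) of a direct search over all offsets, which matters for full CT volumes.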
TH-A-18C-04: Ultrafast Cone-Beam CT Scatter Correction with GPU-Based Monte Carlo Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Y; Southern Medical University, Guangzhou; Bai, T
2014-06-15
Purpose: Scatter artifacts severely degrade the image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to finish the whole process, including both scatter correction and reconstruction, automatically within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using raw projection data; 2) rigid registration of the planning CT to the FDK result; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using a GPU to accelerate the MC photon simulations, we also use a small number of photons and a down-sampled CT image in the simulation to further reduce computation time. A novel denoising algorithm is used to eliminate MC scatter noise caused by the low photon numbers. The method is validated on head-and-neck cases with simulated and clinical data. Results: We have studied the impact of photon histories and volume down-sampling factors on the accuracy of scatter estimation. Fourier analysis shows that scatter images calculated at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated case with a resolution of 512×512×100, we simulated 10M photons per angle. The total computation time is 23.77 seconds on an Nvidia GTX Titan GPU. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Similar results were found for a real patient case. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. The whole process of scatter correction and reconstruction is accomplished within 30 seconds.
This study is supported in part by NIH (1R01CA154747-01) and The Core Technology Research in Strategic Emerging Industry, Guangdong, China (2011A081402003).
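Step 4 of the workflow above, interpolating scatter estimates from sparse view angles to all projection angles, can be sketched per detector pixel (illustrative; the shapes, angle units and the periodic linear interpolation are assumptions, since scatter varies smoothly with gantry angle):

```python
import numpy as np

def interp_scatter(sparse_angles, sparse_scatter, all_angles):
    """Linearly interpolate per-pixel scatter estimates from sparse gantry
    angles (degrees) to all projection angles, periodic over 360 degrees.
    sparse_scatter has shape (n_sparse_angles, n_pixels)."""
    out = np.empty((len(all_angles), sparse_scatter.shape[1]))
    for p in range(sparse_scatter.shape[1]):
        out[:, p] = np.interp(all_angles, sparse_angles,
                              sparse_scatter[:, p], period=360.0)
    return out

sparse_angles = np.array([0.0, 90.0, 180.0, 270.0])
sparse_scatter = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [3.0, 4.0]])
all_angles = np.arange(0.0, 360.0, 45.0)
full = interp_scatter(sparse_angles, sparse_scatter, all_angles)
```

This angular interpolation is what allows the expensive MC simulation to run at only 31 angles while still correcting every projection.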
Precipitation interpolation in mountainous areas
NASA Astrophysics Data System (ADS)
Kolberg, Sjur
2015-04-01
Different precipitation interpolation techniques as well as external drift covariates are tested and compared over a 26,000 km2 mountainous area in Norway, using daily data from 60 stations. The main method of assessment is cross-validation. Annual precipitation in the area varies from below 500 mm to more than 2000 mm. The data were corrected for wind-driven undercatch according to operational standards. While temporal evaluation produces seemingly acceptable at-station correlation values (on average around 0.6), the average daily spatial correlation is less than 0.1. When bias is also penalised, Nash-Sutcliffe R2 values are negative for spatial correspondence, and around 0.15 for temporal. Despite largely violated assumptions, plain Kriging produces better results than simple inverse distance weighting. More surprisingly, the presumably 'worst-case' benchmark of no interpolation at all, simply averaging all 60 stations for each day, actually outperformed the standard interpolation techniques. For logistical reasons, high altitudes are under-represented in the gauge network. The possible effect of this was investigated by a) fitting a precipitation lapse rate as an external drift, and b) applying a linear model of orographic enhancement (Smith and Barstad, 2004). These techniques improved the results only marginally. The gauge density in the region is one gauge per 433 km2, higher than the overall density of the Norwegian national network. Admittedly the cross-validation technique reduces the effective gauge density; still, the results suggest that we are far from able to provide hydrological models with adequate data for their main driving force.
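The Nash-Sutcliffe R2 used above penalises bias as well as lack of correlation; a minimal sketch of the statistic (the standard definition, not tied to this study's data):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 minus the ratio of squared model error
    to the variance of the observations. 1 is perfect; 0 means no better
    than predicting the observed mean; negative means worse than the mean."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([1.0, 3.0, 2.0, 5.0])
```

A negative spatial Nash-Sutcliffe value, as reported in the abstract, therefore means the interpolated field predicts a station's daily precipitation worse than simply using the observed mean would.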
Lutkenhaus, Lotte J; Visser, Jorrit; de Jong, Rianne; Hulshof, Maarten C C M; Bel, Arjan
2015-07-01
To account for variable bladder size during bladder cancer radiotherapy, a daily plan selection strategy was implemented. The aim of this study was to calculate the actually delivered dose using an adaptive strategy, compared to a non-adaptive approach. Ten patients were treated to the bladder and lymph nodes with an adaptive full-bladder strategy. Interpolated delineations of bladder and tumor between a full and an empty bladder CT scan resulted in five PTVs, for which VMAT plans were created. Daily cone beam CT (CBCT) scans were used for plan selection. Bowel, rectum and target volumes were delineated on these CBCTs, and the delivered dose to these was calculated using both the adaptive plan and a non-adaptive plan. Target coverage for the lymph nodes improved using the adaptive strategy. The full-bladder strategy spared the healthy part of the bladder from a high dose. Average bowel cavity V30Gy and V40Gy were significantly reduced, by 60 and 69 ml, respectively (p<0.01). Other parameters for bowel and rectum remained unchanged. Daily plan selection compared to a non-adaptive strategy yielded similar bladder coverage and improved coverage for the lymph nodes, with a significant reduction in bowel cavity V30Gy and V40Gy only, while other sparing was limited. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Investigating different computed tomography techniques for internal target volume definition.
Yoganathan, S A; Maria Das, K J; Subramanian, V Siva; Raj, D Gowtham; Agarwal, Arpita; Kumar, Shaleen
2017-01-01
The aim of this work was to evaluate various computed tomography (CT) techniques, namely fast CT, slow CT, breath-hold (BH) CT, full-fan cone beam CT (FF-CBCT), half-fan CBCT (HF-CBCT), and average CT, for delineation of the internal target volume (ITV). In addition, these ITVs were compared against four-dimensional CT (4DCT) ITVs. Three-dimensional target motion was simulated using a dynamic thorax phantom with a target insert of 3 cm diameter for ten respiration datasets. CT images were acquired using a commercially available multislice CT scanner, and the CBCT images were acquired using an On-Board Imager. Average CT was generated by averaging the 10 phases of 4DCT. ITVs were delineated for each CT by contouring the volume of the target ball; 4DCT ITVs were generated by merging the target volumes of all 10 phases. In the case of BH-CT, the ITV was derived as the Boolean union of the 0% phase, 50% phase, and fast CT target volumes. ITVs determined by all CT and CBCT scans were significantly smaller (P < 0.05) than the 4DCT ITV, whereas there was no significant difference between average CT and 4DCT ITVs (P = 0.17). Fast CT had the maximum deviation (-46.1% ± 20.9%), followed by slow CT (-34.3% ± 11.0%) and FF-CBCT scans (-26.3% ± 8.7%), whereas HF-CBCT scans (-12.9% ± 4.4%) and BH-CT scans (-11.1% ± 8.5%) resulted in almost similar deviations. Average CT had the least deviation (-4.7% ± 9.8%). Compared with 4DCT, all the CT techniques underestimated the ITV. In the absence of 4DCT, HF-CBCT target volumes with an appropriate margin may be a reasonable approach for defining the ITV.
Dynamic graphs, community detection, and Riemannian geometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bakker, Craig; Halappanavar, Mahantesh; Visweswara Sathanur, Arun
A community is a subset of a wider network whose members are more strongly connected to each other than to the rest of the network. In this paper, we consider the problem of identifying and tracking communities in graphs that change over time (dynamic community detection) and present a framework based on Riemannian geometry to aid in this task. Our framework currently supports several important operations, such as interpolating between and averaging over graph snapshots. We compare these Riemannian methods with entry-wise linear interpolation and find that the Riemannian methods are generally better suited to dynamic community detection. Next steps with the Riemannian framework include developing higher-order interpolation methods (e.g., the analogues of polynomial and spline interpolation) and a Riemannian least-squares regression method for working with noisy data.
Prell, Daniel; Kyriakou, Yiannis; Beister, Marcel; Kalender, Willi A
2009-11-07
Metallic implants generate streak-like artifacts in flat-detector computed tomography (FD-CT) reconstructed volumetric images. This study presents a novel method for reducing these disturbing artifacts by inserting discarded information into the original raw data using a three-step correction procedure and working directly with each detector element. Computation times are minimized by implementing the correction process completely on graphics processing units (GPUs). First, the original volume is corrected using a three-dimensional interpolation scheme in the raw data domain, followed by a second reconstruction. This metal artifact-reduced volume is then segmented into three materials, i.e. air, soft tissue and bone, using a threshold-based algorithm. Subsequently, a forward projection of the obtained tissue-class model substitutes the missing or corrupted attenuation values directly for each flat-detector element that contains attenuation values corresponding to metal parts, followed by a final reconstruction. Experiments using tissue-equivalent phantoms showed a significant reduction of the metal artifacts caused by the implants (deviations of CT values after correction, compared to measurements without metallic inserts, typically reduced to below 20 HU; differences in image noise to below 5 HU) and no significant resolution losses, even in areas close to the inserts. To cover a variety of different cases, cadaver measurements and clinical images in the knee, head and spine regions were used to investigate the effectiveness and applicability of our method. A comparison to a three-dimensional interpolation correction showed that the new approach outperformed interpolation schemes. Correction times are minimized, and initial and corrected images are made available at almost the same time (12.7 s for the initial reconstruction and 46.2 s for the final corrected image, compared to 114.1 s and 355.1 s on central processing units (CPUs)).
Monte Carlo modeling of a conventional X-ray computed tomography scanner for gel dosimetry purposes.
Hayati, Homa; Mesbahi, Asghar; Nazarpoor, Mahmood
2016-01-01
Our purpose in the current study was to model an X-ray CT scanner with the Monte Carlo (MC) method for gel dosimetry. A conventional CT scanner with a single detector array was modeled using the MCNPX MC code. The MC-calculated photon fluence in the detector arrays was used for image reconstruction of a simple water phantom as well as of a polyacrylamide polymer gel (PAG) used in radiation therapy. Image reconstruction was performed with the filtered back-projection method using a Hann filter and spline interpolation. Using the MC results, we obtained the dose-response curve for images of the irradiated gel at different absorbed doses. A spatial resolution of about 2 mm was found for our simulated MC model. The MC-based CT images of the PAG gel showed a reliable increase in CT number with increasing absorbed dose for the studied gel. Our results also showed that the current MC model of a CT scanner can be used for further studies of the parameters that influence the usability and reliability of X-ray CT gel dosimetry, such as the photon energy spectra and exposure techniques.
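Filtered back-projection with a Hann filter apodizes the ramp filter in the frequency domain; the following sketch (illustrative, not the study's reconstruction code) builds that filter and applies it to one projection:

```python
import numpy as np

def hann_ramp_filter(n_det):
    """Frequency response of the FBP ramp filter apodized by a Hann window,
    which suppresses high-frequency noise at the cost of some resolution."""
    f = np.fft.fftfreq(n_det)
    ramp = np.abs(f)
    hann = 0.5 * (1.0 + np.cos(np.pi * f / f.max()))  # Hann apodization
    return ramp * hann

def filter_projection(proj):
    """Filter one projection row in the frequency domain."""
    return np.fft.ifft(np.fft.fft(proj) * hann_ramp_filter(len(proj))).real

# A constant (DC-only) projection is nulled by the ramp filter.
flat = filter_projection(np.ones(64))
```

After this filtering step, each projection is back-projected across the image grid; the spline interpolation mentioned in the abstract is applied during that back-projection.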
Micro-Computed Tomography Evaluation of Human Fat Grafts in Nude Mice
Chung, Michael T.; Hyun, Jeong S.; Lo, David D.; Montoro, Daniel T.; Hasegawa, Masakazu; Levi, Benjamin; Januszyk, Michael; Longaker, Michael T.
2013-01-01
Background Although autologous fat grafting has revolutionized the field of soft tissue reconstruction and augmentation, long-term maintenance of fat grafts is unpredictable. Recent studies have reported survival rates of fat grafts to vary anywhere between 10% and 80% over time. The present study evaluated the long-term viability of human fat grafts in a murine model using a novel imaging technique allowing for in vivo volumetric analysis. Methods Human fat grafts were prepared from lipoaspirate samples using the Coleman technique. Fat was injected subcutaneously into the scalp of 10 adult Crl:NU-Foxn1nu CD-1 male mice. Micro-computed tomography (CT) was performed immediately following injection and then weekly thereafter. Fat volume was rendered by reconstructing a three-dimensional (3D) surface through cubic-spline interpolation. Specimens were also harvested at various time points and sections were prepared and stained with hematoxylin and eosin (H&E), for macrophages using CD68 and for the cannabinoid receptor 1 (CB1). Finally, samples were explanted at 8- and 12-week time points to validate calculated micro-CT volumes. Results Weekly CT scanning demonstrated progressive volume loss over the time course. However, volumetric analysis at the 8- and 12-week time points stabilized, showing an average of 62.2% and 60.9% survival, respectively. Gross analysis showed the fat graft to be healthy and vascularized. H&E analysis and staining for CD68 showed minimal inflammatory reaction with viable adipocytes. Immunohistochemical staining with anti-human CB1 antibodies confirmed human origin of the adipocytes. Conclusions Studies assessing the fate of autologous fat grafts in animals have focused on nonimaging modalities, including histological and biochemical analyses, which require euthanasia of the animals. In this study, we have demonstrated the ability to employ micro-CT for 3D reconstruction and volumetric analysis of human fat grafts in a mouse model. 
Importantly, this model provides a platform for subsequent study of fat manipulation and soft tissue engineering. PMID:22916732
BEM-based simulation of lung respiratory deformation for CT-guided biopsy.
Chen, Dong; Chen, Weisheng; Huang, Lipeng; Feng, Xuegang; Peters, Terry; Gu, Lixu
2017-09-01
Accurate, real-time prediction of lung and lung tumor deformation during respiration is an important consideration when performing a peripheral biopsy procedure. However, most existing work has focused on offline whole-lung simulation using 4D image data, which is not applicable to real-time image-guided biopsy with limited image resources. In this paper, we propose a patient-specific biomechanical model based on the boundary element method (BEM), computed from CT images, to estimate the respiratory motion of the local target lesion region, vessel tree, and lung surface for real-time biopsy guidance. This approach pre-computes the various BEM parameters to meet the requirements of real-time lung motion simulation. The boundary condition at the end-inspiratory phase is obtained using a nonparametric discrete registration with convex optimization, and the motion of internal tissue is simulated by a tetrahedron-based interpolation method that depends on expert-determined feature points on the vessel tree model. A reference needle is tracked to update the simulated lung motion during biopsy guidance. We evaluated the model by applying it to respiratory motion estimation in ten patients, using the average symmetric surface distance (ASSD) and the mean target registration error (TRE) as evaluation metrics. Results reveal that it is possible to predict lung motion with an ASSD of [Formula: see text] mm and a mean TRE of at most [Formula: see text] mm over the entire respiratory cycle. In the CT-/electromagnetic-guided biopsy experiment, the whole process was assisted by our BEM model, and the final puncture errors in two studies were 3.1 and 2.0 mm, respectively. These results show that both the simulation accuracy and the real-time performance meet the demands of clinical biopsy guidance.
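The tetrahedron-based interpolation step can be sketched with barycentric weights: a point inside a tetrahedron inherits a weighted average of the four vertex displacements. The mesh, displacements, and query point below are toy values (not patient data), so this is a minimal sketch of the idea rather than the authors' implementation.

```python
import numpy as np

def barycentric_weights(p, verts):
    """Barycentric coordinates of point p inside the tetrahedron verts (4x3)."""
    T = np.column_stack([verts[1] - verts[0],
                         verts[2] - verts[0],
                         verts[3] - verts[0]])
    lam = np.linalg.solve(T, p - verts[0])
    return np.concatenate([[1.0 - lam.sum()], lam])  # weights sum to 1

def interpolate_displacement(p, verts, disp):
    """Interpolate the four vertex displacement vectors (4x3) at point p."""
    return barycentric_weights(p, verts) @ disp

verts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
disp = np.array([[0.0, 0, 1], [0, 0, 2], [0, 0, 3], [0, 0, 4]])  # toy motion (mm)
u = interpolate_displacement(verts.mean(axis=0), verts, disp)    # at the centroid
```

At the centroid all four weights are 0.25, so the interpolated displacement is the mean of the vertex displacements.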
Noninvasive coronary artery angiography using electron beam computed tomography
NASA Astrophysics Data System (ADS)
Rumberger, John A.; Rensing, Benno J.; Reed, Judd E.; Ritman, Erik L.; Sheedy, Patrick F., II
1996-04-01
Electron beam computed tomography (EBCT), also known as ultrafast-CT or cine-CT, uses a unique scanning architecture which allows for multiple high-spatial-resolution electrocardiographically triggered images of the beating heart. A recent study has demonstrated the feasibility of qualitative comparisons between EBCT-derived 3D coronary angiograms and invasive angiography. Stenoses of the proximal portions of the left anterior descending and right coronary arteries were readily identified, but description of atherosclerotic narrowing in the left circumflex artery (and distal epicardial disease) was not possible with any degree of confidence. Although these preliminary studies support the notion that this approach has potential, the images overall were suboptimal for clinical application as an adjunct to invasive angiography. Furthermore, these studies did not examine different methods of EBCT scan acquisition, tomographic slice thicknesses, extent of scan overlap, or other segmentation, thresholding, and interpolation algorithms. Our laboratory has initiated investigation of these aspects and limitations of EBCT coronary angiography. Specific areas of research include defining effects of cardiac orientation; defining the effects of tomographic slice thickness and intensity (gradient-based) versus positional (shape-based) interpolation; and defining applicability of imaging each of the major epicardial coronary arteries for quantitative definition of vessel size, cross-sectional area, taper, and discrete vessel narrowing.
Accelerated Compressed Sensing Based CT Image Reconstruction.
Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R; Paul, Narinder S; Cobbold, Richard S C
2015-01-01
In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier-based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization.
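The sparsity-promoting reconstruction at the heart of CS methods can be illustrated, far more simply than the pseudopolar approach above, with plain ISTA (iterative soft thresholding) on a toy underdetermined system; the matrix sizes, penalty weight, and iteration count below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 32, 64
A = rng.standard_normal((m, n)) / np.sqrt(m)   # underdetermined "projection" operator
x_true = np.zeros(n)
x_true[[5, 20, 40]] = [1.0, -1.0, 1.0]         # sparse "image"
y = A @ x_true                                  # noiseless measurements

lam = 0.05                                      # l1 penalty weight
t = 1.0 / np.linalg.norm(A, 2) ** 2             # step size 1/L, L = Lipschitz const.
x_hat = np.zeros(n)
for _ in range(3000):                           # ISTA iterations
    z = x_hat - t * A.T @ (A @ x_hat - y)       # gradient step on 0.5*||Ax - y||^2
    x_hat = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)  # soft threshold
```

With far fewer measurements than unknowns, the l1 penalty still recovers the support and approximate values of the sparse signal, which is the principle that lets CS reconstruct from a limited number of projections.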
NASA Astrophysics Data System (ADS)
Lange, Thomas; Wörz, Stefan; Rohr, Karl; Schlag, Peter M.
2009-02-01
The qualitative and quantitative comparison of pre- and postoperative image data is an important possibility to validate surgical procedures, in particular, if computer assisted planning and/or navigation is performed. Due to deformations after surgery, partially caused by the removal of tissue, a non-rigid registration scheme is a prerequisite for a precise comparison. Interactive landmark-based schemes are a suitable approach, if high accuracy and reliability are difficult to achieve by automatic registration approaches. Incorporation of a priori knowledge about the anatomical structures to be registered may help to reduce interaction time and improve accuracy. For pre- and postoperative CT data of oncological liver resections, the intrahepatic vessels are suitable anatomical structures. In addition to using branching landmarks for registration, we here introduce quasi-landmarks at vessel segments with high localization precision perpendicular to the vessels and low precision along the vessels. A comparison of interpolating thin-plate splines (TPS), interpolating Gaussian elastic body splines (GEBS) and approximating GEBS on landmarks at vessel branchings as well as approximating GEBS on the introduced vessel segment landmarks is performed. It turns out that the segment landmarks provide registration accuracies as good as branching landmarks and can improve accuracy if combined with branching landmarks. For a low number of landmarks, segment landmarks are even superior.
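Landmark-driven interpolating TPS registration of the kind compared above can be sketched with SciPy's `RBFInterpolator` (thin-plate-spline kernel). The landmarks below are synthetic, and the deformation is a pure translation so the expected displacement is known in advance; this illustrates the interpolation scheme, not the authors' GEBS implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
src = rng.uniform(0.0, 100.0, (20, 3))        # landmark positions, pre-op CT (mm)
true_shift = np.array([2.0, -1.0, 0.5])
dst = src + true_shift                         # post-op positions (pure translation)

# thin-plate-spline interpolation of the landmark displacements
tps = RBFInterpolator(src, dst - src, kernel="thin_plate_spline")
disp = tps(np.array([[50.0, 50.0, 50.0]]))[0]  # displacement at an arbitrary point
```

Because the TPS model contains an affine polynomial term, a pure translation of the landmarks is reproduced exactly everywhere, which makes it a convenient sanity check.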
NASA Astrophysics Data System (ADS)
Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M.; Asma, Evren; Kinahan, Paul E.; De Man, Bruno
2015-09-01
For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction, by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-squares-error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction, with little or no measurable effect on the PET image. 
For the four ultra-low dose levels simulated, sparse view protocols with 41 and 24 views best balanced the tradeoff between electronic noise and aliasing artifacts. In terms of lesion activity error and ensemble RMSE of the PET images, these two protocols, when combined with MBIR, are able to provide results that are comparable to the baseline full dose CT scan. View interpolation significantly improves the performance of FDK reconstruction but was not necessary for MBIR. With the more technically feasible continuous exposure data acquisition, the CT images show an increase in azimuthal blur compared to tube pulsing. However, this blurring generally does not have a measurable impact on PET reconstructed images. Our simulations demonstrated that ultra-low-dose CT-based attenuation correction can be achieved at dose levels on the order of 0.044 mAs with little impact on PET image quality. Highly sparse 41- or 24-view ultra-low dose CT scans are feasible for PET attenuation correction, providing the best tradeoff between electronic noise and view aliasing artifacts. The continuous exposure acquisition mode could potentially be implemented in current commercially available scanners, thus enabling sparse view data acquisition without requiring x-ray tubes capable of operating in a pulsing mode.
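Sinogram view interpolation of the kind evaluated above can be sketched as one-dimensional interpolation along the view-angle axis, independently for each detector bin. The toy sinogram below is a smooth analytic function rather than simulated NCAT data, so the interpolation error can be checked against ground truth; the 41- and 984-view counts echo the protocols above but the rest is illustrative.

```python
import numpy as np

def interpolate_views(sparse_sino, sparse_angles, full_angles):
    """Estimate missing projection views by linear interpolation along the
    view-angle axis, per detector bin. sparse_sino: (n_views, n_detectors)."""
    full = np.empty((len(full_angles), sparse_sino.shape[1]))
    for d in range(sparse_sino.shape[1]):
        full[:, d] = np.interp(full_angles, sparse_angles, sparse_sino[:, d])
    return full

sparse_angles = np.linspace(0.0, np.pi, 41)    # a 41-view sparse acquisition
full_angles = np.linspace(0.0, np.pi, 984)     # the full 984-view grid
det = np.linspace(-1.0, 1.0, 16)
truth = lambda ang: np.cos(ang)[:, None] * (1.0 - det**2)[None, :]
full_sino = interpolate_views(truth(sparse_angles), sparse_angles, full_angles)
max_err = np.abs(full_sino - truth(full_angles)).max()
```

For data this smooth in angle, linear view interpolation is accurate to well under 1%; real sinograms have sharper angular structure, which is why interpolation helped FDK but was unnecessary for a model-based reconstruction.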
[An Improved Cubic Spline Interpolation Method for Removing Electrocardiogram Baseline Drift].
Wang, Xiangkui; Tang, Wenpu; Zhang, Lai; Wu, Minghu
2016-04-01
The selection of fiducial points has an important effect on electrocardiogram (ECG) denoising with cubic spline interpolation. An improved cubic spline interpolation algorithm for suppressing ECG baseline drift is presented in this paper. First, the first-order derivative of the original ECG signal is calculated, and the maximum and minimum points of each beat are obtained and treated as the positions of the fiducial points. The original ECG is then fed into a high-pass filter with a 1.5 Hz cutoff frequency. The difference between the original and the filtered ECG at the fiducial points is taken as the amplitude of the fiducial points. Cubic spline interpolation is then fitted through the fiducial points, and the fitted curve is the baseline drift curve. For the two simulated test cases, the correlation coefficients between the curve fitted by the presented algorithm and the simulated curve were increased by 0.242 and 0.13 compared with those from the traditional cubic spline interpolation algorithm. For the clinical baseline drift data, the average correlation coefficient of the presented algorithm reached 0.972.
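The pipeline described above (high-pass filter, fiducial amplitudes, spline through the fiducials) can be sketched as follows. For brevity the fiducial points are taken at fixed intervals rather than from per-beat derivative extrema, and both the "ECG" and the drift are synthetic, so this approximates the method rather than reimplementing it.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import butter, filtfilt

fs = 360.0                                    # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
drift = 0.5 * np.sin(2 * np.pi * 0.25 * t)    # simulated baseline wander
clean = np.sin(2 * np.pi * 5.0 * t)           # stand-in for the ECG itself
sig = clean + drift

# high-pass filter with 1.5 Hz cutoff, as in the paper
b, a = butter(2, 1.5 / (fs / 2), btype="high")
filtered = filtfilt(b, a, sig)

# fiducial points: here simply one sample every 0.5 s (the paper derives
# them from per-beat extrema of the first derivative instead)
fid = np.arange(0, len(t), int(0.5 * fs))
amp = sig[fid] - filtered[fid]                # estimated baseline at fiducials
est_drift = CubicSpline(t[fid], amp)(t)       # spline = baseline drift curve

corr = np.corrcoef(est_drift, drift)[0, 1]    # agreement with the true drift
```

Subtracting `est_drift` from `sig` then yields the drift-corrected signal.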
NASA Astrophysics Data System (ADS)
Felkins, Joseph; Holley, Adam
2017-09-01
Determining the average lifetime of a neutron gives information about the fundamental parameters of interactions resulting from the charged weak current. It is also an input for calculations of the abundance of light elements in the early cosmos, which are also directly measured. Experimentalists have devised two major approaches to measuring the lifetime of the neutron: the beam experiment and the bottle experiment. For the bottle experiment, I have designed a computational algorithm based on a numerical technique that interpolates magnetic field values between measured points. This algorithm produces interpolated fields that satisfy the Maxwell-Heaviside equations for use in a simulation that will investigate the rate of depolarization in the magnetic traps used for bottle experiments, such as the UCN τ experiment at Los Alamos National Lab. I will present how UCN depolarization can cause a systematic error in experiments like UCN τ. I will then describe the technique that I use for the interpolation, and will discuss how the accuracy of the interpolation changes with the number of measured points and the volume of the interpolated region. Supported by NSF Grant 1553861.
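A Maxwell-consistent interpolation (what satisfying the Maxwell-Heaviside equations requires in a current-free region) can be built by fitting measured field vectors to gradients of harmonic polynomials, since each such gradient field is automatically curl-free and divergence-free. The basis, sample field, and point counts below are a minimal illustration, not the UCN τ algorithm itself.

```python
import numpy as np

def basis_grads(p):
    """Gradients of the harmonic polynomials {x, y, z, xy, xz, yz,
    x^2 - y^2, y^2 - z^2}, each a curl- and divergence-free field."""
    x, y, z = p
    return np.array([
        [1, 0, 0], [0, 1, 0], [0, 0, 1],
        [y, x, 0], [z, 0, x], [0, z, y],
        [2 * x, -2 * y, 0], [0, 2 * y, -2 * z],
    ], float)

def fit(points, fields):
    """Least-squares fit of basis coefficients to measured field vectors."""
    A = np.vstack([basis_grads(p).T for p in points])   # (3 * n_pts, 8)
    b = np.concatenate(fields)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef

def evaluate(coef, p):
    return coef @ basis_grads(p)

rng = np.random.default_rng(1)
c_true = np.array([0.1, -0.2, 0.3, 0.5, 0.0, 0.0, 0.25, 0.0])
pts = rng.uniform(-1, 1, (12, 3))                       # "measured" locations
B = [evaluate(c_true, p) for p in pts]                  # synthetic field data
c_fit = fit(pts, B)
B_new = evaluate(c_fit, np.array([0.3, -0.4, 0.2]))     # interpolated field
```

Because the interpolant is a linear combination of physically valid fields, the interpolated field obeys div B = 0 and curl B = 0 everywhere by construction, not just at the measured points.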
NASA Astrophysics Data System (ADS)
Xie, Lizhe; Hu, Yining; Chen, Yang; Shi, Luyao
2015-03-01
Projection and back-projection are the most computationally expensive parts of Computed Tomography (CT) reconstruction, and parallelization strategies using GPU computing techniques have been introduced to accelerate them. In this paper we present a new parallelization scheme for both projection and back-projection, based on the CUDA technology developed by NVIDIA Corporation. Instead of building a complex model, we aimed to optimize the existing algorithm and make it suitable for CUDA implementation so as to gain fast computation speed. Besides making use of the texture fetching operation, which provides faster interpolation, we fixed the number of samples in the computation of the projection to ensure the synchronization of blocks and threads, thus preventing the latency caused by inconsistent computational complexity. Experimental results demonstrate the computational efficiency and imaging quality of the proposed method.
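The fixed-sample-count idea can be sketched on the CPU with NumPy, with bilinear interpolation standing in for a CUDA texture fetch; the geometry and sizes are illustrative. Because every ray uses the same number of samples, all GPU threads would perform the same amount of work, avoiding divergence-induced latency.

```python
import numpy as np

def forward_project(img, angle, n_det=64, n_samples=128):
    """Parallel-beam forward projection with a fixed number of samples per
    ray. Bilinear interpolation stands in for a GPU texture fetch."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    det = np.linspace(-c, c, n_det)         # detector offsets
    s = np.linspace(-c, c, n_samples)       # sample positions along each ray
    step = s[1] - s[0]
    ct, st = np.cos(angle), np.sin(angle)
    proj = np.zeros(n_det)
    for i, d in enumerate(det):
        x = c + d * ct - s * st
        y = c + d * st + s * ct
        inside = (x >= 0) & (x <= n - 1) & (y >= 0) & (y <= n - 1)
        x0 = np.clip(np.floor(x).astype(int), 0, n - 2)
        y0 = np.clip(np.floor(y).astype(int), 0, n - 2)
        fx, fy = x - x0, y - y0
        v = (img[y0, x0] * (1 - fx) * (1 - fy) + img[y0, x0 + 1] * fx * (1 - fy)
             + img[y0 + 1, x0] * (1 - fx) * fy + img[y0 + 1, x0 + 1] * fx * fy)
        proj[i] = np.sum(v * inside) * step  # Riemann sum of the line integral
    return proj

# sanity check on a centered disk phantom: projections at 0 and 90 degrees
# must agree by rotational symmetry
n = 64
yy, xx = np.mgrid[0:n, 0:n]
c0 = (n - 1) / 2.0
disk = ((xx - c0) ** 2 + (yy - c0) ** 2 <= 20.0 ** 2).astype(float)
p0 = forward_project(disk, 0.0)
p90 = forward_project(disk, np.pi / 2)
```

In a CUDA kernel each (angle, detector) pair would map to one thread, with `img` bound to a texture so the hardware performs the bilinear interpolation.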
Sawicki, Lino M; Grueneisen, Johannes; Buchbender, Christian; Schaarschmidt, Benedikt M; Gomez, Benedikt; Ruhlmann, Verena; Umutlu, Lale; Antoch, Gerald; Heusch, Philipp
2016-01-01
The lower detection rate of (18)F-FDG PET/MRI than (18)F-FDG PET/CT regarding small lung nodules should be considered in the staging of malignant tumors. The purpose of this study was to evaluate the outcome of these small lung nodules missed by (18)F-FDG PET/MRI. Fifty-one oncologic patients (mean age ± SD, 56.6 ± 14.0 y; 29 women, 22 men; tumor stages, I [n = 7], II [n = 7], III [n = 9], IV [n = 28]) who underwent (18)F-FDG PET/CT and subsequent (18)F-FDG PET/MRI on the same day were retrospectively enrolled. Images were analyzed by 2 interpreters in random order and separate sessions with a minimum of 4 wk apart. A maximum of 10 lung nodules was identified for each patient on baseline imaging. The presence, size, and presence of focal tracer uptake was noted for each lung nodule detected on (18)F-FDG PET/CT and (18)F-FDG PET/MRI using a postcontrast T1-weighted 3-dimensional gradient echo volume-interpolated breath-hold examination sequence with fat suppression as morphologic dataset. Follow-up CT or (18)F-FDG PET/CT (mean time to follow-up, 11 mo; range, 3-35 mo) was used as a reference standard to define each missed nodule as benign or malignant based on changes in size and potential new tracer uptake. Nodule-to-nodule comparison between baseline and follow-up was performed using descriptive statistics. Out of 134 lung nodules found on (18)F-FDG PET/CT, (18)F-FDG PET/MRI detected 92 nodules. Accordingly, 42 lung nodules (average size ± SD, 3.9 ± 1.3 mm; range, 2-7 mm) were missed by (18)F-FDG PET/MRI. None of the missed lung nodules presented with focal tracer uptake on baseline imaging or follow-up (18)F-FDG PET/CT. Thirty-three out of 42 missed lung nodules (78.6%) in 26 patients were rated benign, whereas 9 nodules (21.4%) in 4 patients were rated malignant. As a result, 1 patient required upstaging from tumor stage I to IV. 
Although most small lung nodules missed on (18)F-FDG PET/MRI were found to be benign, there was a relevant number of undetected metastases. However, in patients with advanced tumor stages the clinical impact remains controversial as upstaging is usually more relevant in lower stages. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
3D temporal subtraction on multislice CT images using nonlinear warping technique
NASA Astrophysics Data System (ADS)
Ishida, Takayuki; Katsuragawa, Shigehiko; Kawashita, Ikuo; Kim, Hyounseop; Itai, Yoshinori; Awai, Kazuo; Li, Qiang; Doi, Kunio
2007-03-01
The detection of very subtle lesions and/or lesions overlapped with vessels on CT images is a time-consuming and difficult task for radiologists. In this study, we have developed a 3D temporal subtraction method to enhance interval changes between previous and current multislice CT images based on a nonlinear image warping technique. Our method provides a subtraction CT image which is obtained by subtraction of a previous CT image from a current CT image. Reduction of misregistration artifacts is important in the temporal subtraction method. Therefore, our computerized method includes global and local image matching techniques for accurate registration of current and previous CT images. For global image matching, we selected the corresponding previous section image for each current section image by using 2D cross-correlation between a blurred low-resolution current CT image and a blurred previous CT image. For local image matching, we applied the 3D template matching technique with translation and rotation of volumes of interest (VOIs) which were selected in the current and the previous CT images. The local shift vector for each VOI pair was determined when the cross-correlation value became the maximum in the 3D template matching. The local shift vectors at all voxels were determined by interpolation of shift vectors of VOIs, and then the previous CT image was nonlinearly warped according to the shift vector for each voxel. Finally, the warped previous CT image was subtracted from the current CT image. The 3D temporal subtraction method was applied to 19 clinical cases. The normal background structures such as vessels, ribs, and heart were removed without large misregistration artifacts. Thus, interval changes due to lung diseases were clearly enhanced as white shadows on subtraction CT images.
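The local-matching step can be sketched as exhaustive 3D template matching by normalized cross-correlation (translation only; the paper also rotates the VOIs). The volume below is synthetic noise, so the correct shift is known in advance.

```python
import numpy as np

def best_shift(current_voi, previous_vol, corner, search=2):
    """Exhaustive 3D template matching: return the integer shift of the VOI
    (placed with its corner at `corner` in the previous volume) that
    maximizes normalized cross-correlation."""
    sz = current_voi.shape
    cur = (current_voi - current_voi.mean()).ravel()
    cur /= np.linalg.norm(cur) + 1e-12
    best, best_cc = None, -np.inf
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                z0, y0, x0 = corner[0] + dz, corner[1] + dy, corner[2] + dx
                patch = previous_vol[z0:z0 + sz[0], y0:y0 + sz[1], x0:x0 + sz[2]]
                p = (patch - patch.mean()).ravel()
                p /= np.linalg.norm(p) + 1e-12
                cc = float(cur @ p)              # normalized cross-correlation
                if cc > best_cc:
                    best_cc, best = cc, (dz, dy, dx)
    return best, best_cc

rng = np.random.default_rng(0)
vol = rng.standard_normal((20, 20, 20))          # "previous" volume
voi = vol[9:14, 7:12, 10:15]                     # "current" VOI: shifted copy
shift, cc = best_shift(voi, vol, corner=(8, 8, 8))
```

The recovered per-VOI shifts would then be interpolated to every voxel to drive the nonlinear warp, as described above.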
Mapping Atmospheric Moisture Climatologies across the Conterminous United States
Daly, Christopher; Smith, Joseph I.; Olson, Keith V.
2015-01-01
Spatial climate datasets of 1981–2010 long-term mean monthly average dew point and minimum and maximum vapor pressure deficit were developed for the conterminous United States at 30-arcsec (~800m) resolution. Interpolation of long-term averages (twelve monthly values per variable) was performed using PRISM (Parameter-elevation Relationships on Independent Slopes Model). Surface stations available for analysis numbered only 4,000 for dew point and 3,500 for vapor pressure deficit, compared to 16,000 for previously-developed grids of 1981–2010 long-term mean monthly minimum and maximum temperature. Therefore, a form of Climatologically-Aided Interpolation (CAI) was used, in which the 1981–2010 temperature grids were used as predictor grids. For each grid cell, PRISM calculated a local regression function between the interpolated climate variable and the predictor grid. Nearby stations entering the regression were assigned weights based on the physiographic similarity of the station to the grid cell that included the effects of distance, elevation, coastal proximity, vertical atmospheric layer, and topographic position. Interpolation uncertainties were estimated using cross-validation exercises. Given that CAI interpolation was used, a new method was developed to allow uncertainties in predictor grids to be accounted for in estimating the total interpolation error. Local land use/land cover properties had noticeable effects on the spatial patterns of atmospheric moisture content and deficit. An example of this was relatively high dew points and low vapor pressure deficits at stations located in or near irrigated fields. The new grids, in combination with existing temperature grids, enable the user to derive a full suite of atmospheric moisture variables, such as minimum and maximum relative humidity, vapor pressure, and dew point depression, with accompanying assumptions. 
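The local regression idea can be sketched as weighted least squares of the moisture variable on the predictor-grid value, with station weights decaying with distance and elevation difference. The weight scales and station data below are illustrative placeholders, not PRISM's actual physiographic-similarity functions.

```python
import numpy as np

def local_estimate(pred, obs, dist_km, dz_m, pred_at_cell):
    """Weighted linear regression of the moisture variable on the predictor
    (e.g. a temperature climatology). Stations that are far away or at very
    different elevations get little weight; scales here are made up."""
    w = 1.0 / (1.0 + (dist_km / 50.0) ** 2) / (1.0 + (np.abs(dz_m) / 200.0) ** 2)
    X = np.column_stack([np.ones_like(pred), pred])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ obs)   # weighted LS fit
    return beta[0] + beta[1] * pred_at_cell

# synthetic check: stations obeying an exact linear relation
pred = np.array([5.0, 8.0, 12.0, 15.0, 20.0])    # predictor (e.g. Tmin, deg C)
obs = 2.0 + 0.8 * pred + 0.1 * pred              # dew point, exactly 2 + 0.9*pred
est = local_estimate(pred, obs,
                     dist_km=np.array([5.0, 20.0, 40.0, 60.0, 90.0]),
                     dz_m=np.array([10.0, -50.0, 120.0, 300.0, -20.0]),
                     pred_at_cell=10.0)
```

With an exactly linear station relation the weights do not matter and the estimate equals the trend value; with real, noisy stations the weighting is what localizes the regression to physiographically similar neighbors.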
All of these grids are available online at http://prism.oregonstate.edu, and include 800-m and 4-km resolution data, images, metadata, pedigree information, and station inventory files. PMID:26485026
NASA Astrophysics Data System (ADS)
Perčec Tadić, M.
2010-09-01
The increased availability of satellite products of high spatial and temporal resolution, together with developing user support, encourages climatologists to use these data in research and practice. Since climatologists are mainly interested in monthly or even annual averages or aggregates, this high temporal resolution, and hence large amount of data, can be challenging for less experienced users. Even if an attempt is made to aggregate, e.g., the 15-minute (temporal) MODIS LST (land surface temperature) to a daily temperature average, the development of the algorithm is not straightforward and should be done by experts. The recent development of many temporally aggregated products on daily, multi-day, or even monthly scales substantially decreases the amount of satellite data that needs to be processed and raises the possibility of developing various climatological applications. Here we present an attempt to incorporate the MODIS satellite MOD11C3 product (Wan, 2009), a monthly CMG (climate modelling grid, 0.05 degree latitude/longitude) LST, as a predictor in the geostatistical interpolation of climatological data in Croatia. While in previous applications, e.g. the Climate Atlas of Croatia (Zaninović et al. 2008), static predictors such as a digital elevation model, distance to the sea, latitude, and longitude were used for the interpolation of monthly, seasonal, and annual 30-year averages (reference climatology), here the monthly MOD11C3 is used to support the interpolation of individual monthly averages in a regression kriging framework. We believe this can be a valuable showcase of incorporating remotely sensed data into climatological applications, especially in areas that are under-sampled by conventional observations. Zaninović K, Gajić-Čapka M, Perčec Tadić M et al (2008) Klimatski atlas Hrvatske / Climate atlas of Croatia 1961-1990, 1971-2000. Meteorological and Hydrological Service of Croatia, Zagreb, pp 200. 
Wan Z, 2009: Collection-5 MODIS Land Surface Temperature Products Users' Guide, ICESS, University of California, Santa Barbara, pp 30.
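Regression kriging of the kind described can be sketched as a trend regression on the satellite predictor plus interpolation of the station residuals; for brevity, inverse-distance weighting stands in for the kriging of residuals, and all station and grid values below are synthetic.

```python
import numpy as np

def regression_kriging_idw(pred_st, obs_st, coords_st, pred_grid, coords_grid, p=2):
    """Trend = linear regression of station observations on the satellite
    predictor; station residuals are spread to grid cells by inverse-distance
    weighting (a simplification of the kriging step)."""
    X = np.column_stack([np.ones_like(pred_st), pred_st])
    beta, *_ = np.linalg.lstsq(X, obs_st, rcond=None)
    resid = obs_st - X @ beta
    out = np.empty(len(pred_grid))
    for i in range(len(pred_grid)):
        d = np.linalg.norm(coords_st - coords_grid[i], axis=1)
        w = 1.0 / np.maximum(d, 1e-9) ** p           # IDW weights
        out[i] = beta[0] + beta[1] * pred_grid[i] + (w @ resid) / w.sum()
    return out

# synthetic check: station temperatures exactly linear in the LST predictor
pred_st = np.array([280.0, 285.0, 290.0, 295.0])     # MODIS LST at stations (K)
obs_st = 1.0 + 0.8 * pred_st                          # station air temperature
coords_st = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
pred_grid = np.array([282.0, 293.0])                  # LST at two grid cells
coords_grid = np.array([[0.5, 0.5], [0.2, 0.8]])
est = regression_kriging_idw(pred_st, obs_st, coords_st, pred_grid, coords_grid)
```

With real data the residuals carry the spatial structure the trend misses, and a fitted variogram (rather than IDW) would control how far each station's residual influence extends.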
An empirical model of diagnostic x-ray attenuation under narrow-beam geometry.
Mathieu, Kelsey B; Kappadath, S Cheenu; White, R Allen; Atkinson, E Neely; Cody, Dianna D
2011-08-01
The purpose of this study was to develop and validate a mathematical model to describe narrow-beam attenuation of kilovoltage x-ray beams for the intended applications of half-value layer (HVL) and quarter-value layer (QVL) estimations, patient organ shielding, and computer modeling. An empirical model, which uses the Lambert W function and represents a generalized Lambert-Beer law, was developed. To validate this model, transmission of diagnostic energy x-ray beams was measured over a wide range of attenuator thicknesses [0.49-33.03 mm Al on a computed tomography (CT) scanner, 0.09-1.93 mm Al on two mammography systems, and 0.1-0.45 mm Cu and 0.49-14.87 mm Al using general radiography]. Exposure measurements were acquired under narrow-beam geometry using standard methods, including the appropriate ionization chamber, for each radiographic system. Nonlinear regression was used to find the best-fit curve of the proposed Lambert W model to each measured transmission versus attenuator thickness data set. In addition to validating the Lambert W model, we also assessed the performance of two-point Lambert W interpolation compared to traditional methods for estimating the HVL and QVL [i.e., semilogarithmic (exponential) and linear interpolation]. The Lambert W model was validated for modeling attenuation versus attenuator thickness with respect to the data collected in this study (R2 > 0.99). Furthermore, Lambert W interpolation was more accurate and less sensitive to the choice of interpolation points used to estimate the HVL and/or QVL than the traditional methods of semilogarithmic and linear interpolation. The proposed Lambert W model accurately describes attenuation of both monoenergetic radiation and (kilovoltage) polyenergetic beams (under narrow-beam geometry).
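The paper's exact parameterization is not reproduced here; as an illustration, one generalized Lambert-Beer law whose solution involves the Lambert W function is ln T + cT = c - mu*x, which gives T(x) = W(c*exp(c - mu*x))/c (so T(0) = 1, and c = 0 recovers plain exponential attenuation). Under that assumed form, the two-point idea (fit the model from two transmission measurements, then invert for the HVL) works as follows, with made-up parameter values.

```python
import numpy as np
from scipy.special import lambertw

def transmission(x, mu, c):
    """Solve ln T + c*T = c - mu*x for T via the Lambert W function
    (an assumed generalized Lambert-Beer form, not the paper's exact model)."""
    return lambertw(c * np.exp(c - mu * x)).real / c

# two "measured" transmission points generated from known parameters
mu_true, c_true = 0.5, 0.3
x1, x2 = 1.0, 3.0
T1 = transmission(x1, mu_true, c_true)
T2 = transmission(x2, mu_true, c_true)

# two-point fit: ln Ti = c*(1 - Ti) - mu*xi is linear in (c, mu)
M = np.array([[1 - T1, -x1], [1 - T2, -x2]])
c_fit, mu_fit = np.linalg.solve(M, np.array([np.log(T1), np.log(T2)]))

# HVL from the fitted model: setting T = 1/2 gives mu*x = 0.5*c + ln 2
hvl = (0.5 * c_fit + np.log(2.0)) / mu_fit
```

Unlike semilogarithmic interpolation, the fitted model captures beam-hardening-like curvature, which is why a two-point fit of this family can estimate the HVL from measurements that do not bracket T = 0.5 tightly.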
An empirical model of diagnostic x-ray attenuation under narrow-beam geometry
Mathieu, Kelsey B.; Kappadath, S. Cheenu; White, R. Allen; Atkinson, E. Neely; Cody, Dianna D.
2011-01-01
Purpose: The purpose of this study was to develop and validate a mathematical model to describe narrow-beam attenuation of kilovoltage x-ray beams for the intended applications of half-value layer (HVL) and quarter-value layer (QVL) estimations, patient organ shielding, and computer modeling. Methods: An empirical model, which uses the Lambert W function and represents a generalized Lambert-Beer law, was developed. To validate this model, transmission of diagnostic energy x-ray beams was measured over a wide range of attenuator thicknesses [0.49–33.03 mm Al on a computed tomography (CT) scanner, 0.09–1.93 mm Al on two mammography systems, and 0.1–0.45 mm Cu and 0.49–14.87 mm Al using general radiography]. Exposure measurements were acquired under narrow-beam geometry using standard methods, including the appropriate ionization chamber, for each radiographic system. Nonlinear regression was used to find the best-fit curve of the proposed Lambert W model to each measured transmission versus attenuator thickness data set. In addition to validating the Lambert W model, we also assessed the performance of two-point Lambert W interpolation compared to traditional methods for estimating the HVL and QVL [i.e., semilogarithmic (exponential) and linear interpolation]. Results: The Lambert W model was validated for modeling attenuation versus attenuator thickness with respect to the data collected in this study (R2 > 0.99). Furthermore, Lambert W interpolation was more accurate and less sensitive to the choice of interpolation points used to estimate the HVL and∕or QVL than the traditional methods of semilogarithmic and linear interpolation. Conclusions: The proposed Lambert W model accurately describes attenuation of both monoenergetic radiation and (kilovoltage) polyenergetic beams (under narrow-beam geometry). PMID:21928626
NASA Astrophysics Data System (ADS)
Nishimaru, Eiji; Ichikawa, Katsuhiro; Okita, Izumi; Ninomiya, Yuuji; Tomoshige, Yukihiro; Kurokawa, Takehiro; Ono, Yutaka; Nakamura, Yuko; Suzuki, Masayuki
2008-03-01
Recently, several kinds of post-processing image filters that reduce the noise of computed tomography (CT) images have been proposed. However, these filters are designed mostly for adults and are not very effective for small (< 20 cm) display fields of view (FOV), so they cannot be used for pediatric body images (e.g., premature babies and infant children). We have developed a new noise reduction filter algorithm for pediatric body CT images. This algorithm is based on 3D post-processing in which the output pixel values are calculated by nonlinear interpolation in the z-direction on the original volumetric data sets. Because no in-plane (axial plane) processing is needed, the in-plane spatial resolution does not change. In phantom studies, our algorithm reduced the standard deviation (SD) of the noise by up to 40% without affecting the spatial resolution in the x-y plane or along the z-axis, and improved the CNR by up to 30%. This newly developed filter algorithm should be useful for diagnosis and radiation dose reduction in pediatric body CT.
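The abstract does not specify the nonlinear z-direction kernel, but the key design point, filtering only across slices so each axial plane is spatially untouched, can be sketched with a stand-in nonlinear operator (a z-direction median):

```python
import numpy as np

def z_only_denoise(vol, half_width=1):
    """Reduce noise using only neighbouring slices along z, leaving each
    axial (x-y) plane spatially untouched. The paper's nonlinear kernel
    is not given in the abstract, so a z-direction median is used here
    purely as a stand-in nonlinear operator."""
    nz = vol.shape[0]
    out = np.empty_like(vol)
    for z in range(nz):
        lo, hi = max(0, z - half_width), min(nz, z + half_width + 1)
        out[z] = np.median(vol[lo:hi], axis=0)  # per-pixel, across slices only
    return out

# A single-voxel noise spike is removed without blurring its x-y plane.
vol = np.zeros((3, 4, 4))
vol[1, 2, 2] = 5.0
den = z_only_denoise(vol)
print(den[1, 2, 2])  # 0.0
```

Because every output pixel depends only on the same (x, y) position in adjacent slices, in-plane resolution is preserved by construction.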
Improved Image Quality in Head and Neck CT Using a 3D Iterative Approach to Reduce Metal Artifact.
Wuest, W; May, M S; Brand, M; Bayerl, N; Krauss, A; Uder, M; Lell, M
2015-10-01
Metal artifacts from dental fillings and other devices degrade image quality and may compromise the detection and evaluation of lesions in the oral cavity and oropharynx by CT. The aim of this study was to evaluate the effect of iterative metal artifact reduction on CT of the oral cavity and oropharynx. Data from 50 consecutive patients with metal artifacts from dental hardware were reconstructed with standard filtered back-projection, linear interpolation metal artifact reduction (LIMAR), and iterative metal artifact reduction. The image quality of sections that contained metal was analyzed for the severity of artifacts and diagnostic value. A total of 455 sections (mean ± standard deviation, 9.1 ± 4.1 sections per patient) contained metal and were evaluated with each reconstruction method. Sections without metal were not affected by the algorithms and demonstrated image quality identical to each other. Of these sections, 38% were considered nondiagnostic with filtered back-projection, 31% with LIMAR, and only 7% with iterative metal artifact reduction. Thirty-three percent of the sections had poor image quality with filtered back-projection, 46% with LIMAR, and 10% with iterative metal artifact reduction. Thirteen percent of the sections with filtered back-projection, 17% with LIMAR, and 22% with iterative metal artifact reduction were of moderate image quality, 16% of the sections with filtered back-projection, 5% with LIMAR, and 30% with iterative metal artifact reduction were of good image quality, and 1% of the sections with LIMAR and 31% with iterative metal artifact reduction were of excellent image quality. Iterative metal artifact reduction yields the highest image quality in comparison with filtered back-projection and linear interpolation metal artifact reduction in patients with metal hardware in the head and neck area. © 2015 by American Journal of Neuroradiology.
Savi, Tadeja; Miotto, Andrea; Petruzzellis, Francesco; Losso, Adriano; Pacilè, Serena; Tromba, Giuliana; Mayr, Stefan; Nardini, Andrea
2017-11-01
Vulnerability curves (VCs) are a useful tool to investigate the susceptibility of plants to drought-induced hydraulic failure, and several experimental techniques have been used for their measurement. The validity of the bench dehydration method coupled to hydraulic measurements, considered as a 'golden standard', has been recently questioned calling for its validation with non-destructive methods. We compared the VCs of a herbaceous crop plant (Helianthus annuus) obtained during whole-plant dehydration followed by i) hydraulic flow measurements in stem segments (classical destructive method) or by ii) in vivo micro-CT observations of stem xylem conduits in intact plants. The interpolated P50 values (xylem water potential inducing 50% loss of hydraulic conductance) were -1.74 MPa and -0.87 MPa for the hydraulic and the micro-CT VC, respectively. Interpolated P20 values were similar, while P50 and P80 were significantly different, as evidenced by non-overlapping 95% confidence intervals. Our results did not support the tension-cutting artefact, as no overestimation of vulnerability was observed when comparing the hydraulic VC to that obtained with in vivo imaging. After one scan, 25% of plants showed signs of x-ray induced damage, while three successive scans caused the formation of a circular brownish scar in all tested plants. Our results support the validity of hydraulic measurements of samples excised under tension provided standard sampling and handling protocols are followed, but also show that caution is needed when investigating vital plant processes with x-ray imaging. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
A method for deriving a 4D-interpolated balanced planning target for mobile tumor radiotherapy.
Roland, Teboh; Hales, Russell; McNutt, Todd; Wong, John; Simari, Patricio; Tryggestad, Erik
2012-01-01
Tumor control and normal tissue toxicity are strongly correlated to the tumor and normal tissue volumes receiving high prescribed dose levels in the course of radiotherapy. Planning target definition is, therefore, crucial to ensure favorable clinical outcomes. This is especially important for stereotactic body radiation therapy of lung cancers, characterized by high fractional doses and steep dose gradients. The shift in recent years from population-based to patient-specific treatment margins, as facilitated by the emergence of 4D medical imaging capabilities, is a major improvement. The commonly used motion-encompassing, or internal-target volume (ITV), target definition approach provides a high likelihood of coverage for the mobile tumor but inevitably exposes healthy tissue to high prescribed dose levels. The goal of this work was to generate an interpolated balanced planning target that takes into account both tumor coverage and normal tissue sparing from high prescribed dose levels, thereby improving on the ITV approach. For each 4DCT dataset, 4D deformable image registration was used to derive two bounding targets, namely, a 4D-intersection and a 4D-composite target which minimized normal tissue exposure to high prescribed dose levels and maximized tumor coverage, respectively. Through definition of an "effective overlap volume histogram" the authors derived an "interpolated balanced planning target" intended to balance normal tissue sparing from prescribed doses with tumor coverage. To demonstrate the dosimetric efficacy of the interpolated balanced planning target, the authors performed 4D treatment planning based on deformable image registration of 4D-CT data for five previously treated lung cancer patients. Two 4D plans were generated per patient, one based on the interpolated balanced planning target and the other based on the conventional ITV target. 
Plans were compared for tumor coverage and the degree of normal tissue sparing resulting from the new approach was quantified. Analysis of the 4D dose distributions from all five patients showed that while achieving tumor coverage comparable to the ITV approach, the new planning target definition resulted in reductions of lung V(10), V(20), and V(30) of 6.3% ± 1.7%, 10.6% ± 3.9%, and 12.9% ± 5.5%, respectively, as well as reductions in mean lung dose, mean dose to the GTV-ring and mean heart dose of 8.8% ± 2.5%, 7.2% ± 2.5%, and 10.6% ± 3.6%, respectively. The authors have developed a simple and systematic approach to generate a 4D-interpolated balanced planning target volume that implicitly incorporates the dynamics of respiratory-organ motion without requiring 4D-dose computation or optimization. Preliminary results based on 4D-CT data of five previously treated lung patients showed that this new planning target approach may improve normal tissue sparing without sacrificing tumor coverage.
Bettens, Ryan P A
2003-01-15
Collins' method of interpolating a potential energy surface (PES) from quantum chemical calculations for reactive systems (Jordan, M. J. T.; Thompson, K. C.; Collins, M. A. J. Chem. Phys. 1995, 102, 5647. Thompson, K. C.; Jordan, M. J. T.; Collins, M. A. J. Chem. Phys. 1998, 108, 8302. Bettens, R. P. A.; Collins, M. A. J. Chem. Phys. 1999, 111, 816) has been applied to a bound state problem. The interpolation method has been combined for the first time with quantum diffusion Monte Carlo calculations to obtain an accurate ground state zero-point energy, the vibrationally averaged rotational constants, and the vibrationally averaged internal coordinates. In particular, the system studied was fluoromethane using a composite method approximating the QCISD(T)/6-311++G(2df,2p) level of theory. The approach adopted in this work (a) is fully automated, (b) is fully ab initio, (c) includes all nine nuclear degrees of freedom, (d) requires no assumption of the functional form of the PES, (e) possesses the full symmetry of the system, (f) does not involve fitting any parameters of any kind, and (g) is generally applicable to any system amenable to quantum chemical calculations and Collins' interpolation method. The calculated zero-point energy agrees to within 0.2% of its current best estimate. A0 and B0 are within 0.9 and 0.3%, respectively, of experiment.
Larsson, Anne; Johansson, Adam; Axelsson, Jan; Nyholm, Tufve; Asklund, Thomas; Riklund, Katrine; Karlsson, Mikael
2013-02-01
The aim of this study was to evaluate MR-based attenuation correction of PET emission data of the head, based on a previously described technique that calculates substitute CT (sCT) images from a set of MR images. Images from eight patients, examined with (18)F-FLT PET/CT and MRI, were included. sCT images were calculated and co-registered to the corresponding CT images, and transferred to the PET/CT scanner for reconstruction. The new reconstructions were then compared with the originals. The effect of replacing bone with soft tissue in the sCT-images was also evaluated. The average relative difference between the sCT-corrected PET images and the CT-corrected PET images was 1.6% for the head and 1.9% for the brain. The average standard deviations of the relative differences within the head were relatively high, at 13.2%, primarily because of large differences in the nasal septa region. For the brain, the average standard deviation was lower, 4.1%. The global average difference in the head when replacing bone with soft tissue was 11%. The method presented here has a high rate of accuracy, but high-precision quantitative imaging of the nasal septa region is not possible at the moment.
Azad Henareh Khalyani; William A. Gould; Eric Harmsen; Adam Terando; Maya Quinones; Jaime A. Collazo
2016-01-01
Assessment of the effects of CT dose in averaged x-ray CT images of a dose-sensitive polymer gel
NASA Astrophysics Data System (ADS)
Kairn, T.; Kakakhel, M. B.; Johnston, H.; Jirasek, A.; Trapp, J. V.
2015-01-01
The signal-to-noise ratio achievable in x-ray computed tomography (CT) images of polymer gels can be increased by averaging over multiple scans of each sample. However, repeated scanning delivers a small additional dose to the gel which may compromise the accuracy of the dose measurement. In this study, a NIPAM-based polymer gel was irradiated and then CT scanned 25 times, with the resulting data used to derive an averaged image and a "zero-scan" image of the gel. Comparison between these two results and the first scan of the gel showed that the averaged and zero-scan images provided better contrast, higher contrast-to-noise and higher signal-to-noise than the initial scan. The pixel values (Hounsfield units, HU) in the averaged image were not noticeably elevated, compared to the zero-scan result, and the gradients used in the linear extrapolation of the zero-scan images were small and symmetrically distributed around zero. These results indicate that the averaged image was not artificially lightened by the small additional dose delivered during CT scanning. This work demonstrates the broader usefulness of the zero-scan method as a means to verify the dosimetric accuracy of gel images derived from averaged x-ray CT data.
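The zero-scan idea, fit pixel value against scan number and extrapolate back to the intercept before any imaging dose, can be sketched per pixel with a single linear fit over the scan stack (a minimal NumPy sketch; the function name is illustrative):

```python
import numpy as np

def zero_scan_image(scans):
    """Fit pixel value versus scan number with a straight line and
    extrapolate back to 'scan zero', i.e. the image before any CT dose
    was delivered; also return the per-pixel gradients, which the study
    checked were small and centred on zero."""
    n, ny, nx = scans.shape
    coeffs = np.polyfit(np.arange(n, dtype=float), scans.reshape(n, -1), 1)
    slope, intercept = coeffs  # highest order first
    return intercept.reshape(ny, nx), slope.reshape(ny, nx)

# Synthetic stack: HU drifts upward by 0.5 per scan from a base of 100.
scans = 100.0 + 0.5 * np.arange(5)[:, None, None] * np.ones((5, 2, 2))
zero, grad = zero_scan_image(scans)
print(round(float(zero[0, 0]), 6), round(float(grad[0, 0]), 6))  # 100.0 0.5
```

Averaging the same stack would instead report the mid-scan value, which is why the comparison of averaged versus zero-scan HU values is informative about dose-induced lightening.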
Enhancement of low sampling frequency recordings for ECG biometric matching using interpolation.
Sidek, Khairul Azami; Khalil, Ibrahim
2013-01-01
Electrocardiogram (ECG) based biometric matching suffers from high misclassification error with lower sampling frequency data. This situation may lead to an unreliable and vulnerable identity authentication process in high security applications. In this paper, quality enhancement techniques for ECG data with low sampling frequency have been proposed for person identification, based on piecewise cubic Hermite interpolation (PCHIP) and piecewise cubic spline interpolation (SPLINE). A total of 70 ECG recordings from 4 different public ECG databases with 2 different sampling frequencies were applied for development and performance comparison purposes. An analytical method was used for feature extraction. The ECG recordings were segmented into two parts: the enrolment and recognition datasets. Three biometric matching methods, namely Cross Correlation (CC), Percent Root-Mean-Square Deviation (PRD) and Wavelet Distance Measurement (WDM), were used for performance evaluation before and after applying the interpolation techniques. Results of the experiments suggest that biometric matching with interpolated ECG data on average achieved a higher matching percentage, by up to 4% for CC, 3% for PRD and 94% for WDM, compared with the existing method using the original lower sampling frequency recordings. Moreover, increasing the sample size from 56 to 70 subjects improves the results of the experiment by 4% for CC, 14.6% for PRD and 0.3% for WDM. Furthermore, higher classification accuracy of up to 99.1% for PCHIP and 99.2% for SPLINE with interpolated ECG data, compared with up to 97.2% without interpolation, verifies the study claim that applying interpolation techniques enhances the quality of the ECG data. Crown Copyright © 2012. Published by Elsevier Ireland Ltd. All rights reserved.
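The two interpolation techniques compared in the study are standard SciPy interpolators; a minimal upsampling sketch (function name and signature are illustrative, and the study's feature extraction and matching pipeline is not reproduced):

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

def upsample_ecg(sig, fs_in, fs_out, kind="pchip"):
    """Resample a low-sampling-frequency ECG segment to a higher rate with
    PCHIP or cubic-spline interpolation, the two techniques compared in
    the study. Returns the new time axis and the interpolated signal."""
    t_in = np.arange(len(sig)) / fs_in
    t_out = np.arange(0.0, t_in[-1], 1.0 / fs_out)
    f = PchipInterpolator(t_in, sig) if kind == "pchip" else CubicSpline(t_in, sig)
    return t_out, f(t_out)

# A smooth 5 Hz test wave sampled at 128 Hz, upsampled to 512 Hz.
fs_in, fs_out = 128, 512
sig = np.sin(2 * np.pi * 5 * np.arange(fs_in) / fs_in)
t_out, up = upsample_ecg(sig, fs_in, fs_out)
print(np.max(np.abs(up - np.sin(2 * np.pi * 5 * t_out))) < 0.01)  # True
```

PCHIP is shape-preserving (no overshoot between samples), which is often preferred around sharp QRS complexes, while the cubic spline is smoother; the abstract reports both improved matching accuracy.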
GPU-based Branchless Distance-Driven Projection and Backprojection
Liu, Rui; Fu, Lin; De Man, Bruno; Yu, Hengyong
2017-01-01
Projection and backprojection operations are essential in a variety of image reconstruction and physical correction algorithms in CT. The distance-driven (DD) projection and backprojection are widely used for their highly sequential memory access pattern and low arithmetic cost. However, a typical DD implementation has an inner loop that adjusts the calculation depending on the relative position between voxel and detector cell boundaries. The irregularity of the branch behavior makes it inefficient to be implemented on massively parallel computing devices such as graphics processing units (GPUs). Such irregular branch behaviors can be eliminated by factorizing the DD operation as three branchless steps: integration, linear interpolation, and differentiation, all of which are highly amenable to massive vectorization. In this paper, we implement and evaluate a highly parallel branchless DD algorithm for 3D cone beam CT. The algorithm utilizes the texture memory and hardware interpolation on GPUs to achieve fast computational speed. The developed branchless DD algorithm achieved 137-fold speedup for forward projection and 188-fold speedup for backprojection relative to a single-thread CPU implementation. Compared with a state-of-the-art 32-thread CPU implementation, the proposed branchless DD achieved 8-fold acceleration for forward projection and 10-fold acceleration for backprojection. GPU based branchless DD method was evaluated by iterative reconstruction algorithms with both simulation and real datasets. It obtained visually identical images as the CPU reference algorithm. PMID:29333480
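The three branchless steps generalize beyond projection geometry; a one-dimensional sketch of the integrate-interpolate-differentiate factorization (plain NumPy, not the paper's GPU texture-memory implementation) is:

```python
import numpy as np

def branchless_rebin(values, src_edges, dst_edges):
    """Rebin per-bin integrated values onto new bin boundaries using the
    three branchless steps described in the paper: integration (cumulative
    sum), linear interpolation of the integral at destination boundaries,
    and differentiation. No per-boundary branching on the relative position
    of source and destination edges is needed, which is what makes the
    factorization GPU-friendly."""
    # 1) integration: running integral sampled at the source boundaries
    integral = np.concatenate(([0.0], np.cumsum(values)))
    # 2) linear interpolation of the integral at destination boundaries
    at_dst = np.interp(dst_edges, src_edges, integral)
    # 3) differentiation: recover per-destination-bin content
    return np.diff(at_dst)

vals = np.array([1.0, 2.0, 3.0, 4.0])
src = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
print(branchless_rebin(vals, src, np.array([0.0, 2.0, 4.0])))  # [3. 7.]
```

Because step 2 is plain linear interpolation of a monotone cumulative array, it maps directly onto GPU hardware interpolation, and total mass is conserved whenever the destination grid covers the source grid.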
Nam, Haewon
2017-01-01
We propose a novel metal artifact reduction (MAR) algorithm for CT images that completes a corrupted sinogram along the metal trace region. When metal implants are located inside the field of view, they create a barrier to the transmitted X-ray beam due to the high attenuation of metals, which significantly degrades image quality. To fill in the metal trace region efficiently, the proposed algorithm uses multiple prior images with residual error compensation in sinogram space. The multiple prior images are generated by applying a recursive active contour (RAC) segmentation algorithm to the pre-corrected image acquired by MAR with linear interpolation, where the number of prior images is controlled by RAC depending on the object complexity. A sinogram basis is then acquired by forward projection of the prior images, and the metal trace region of the original sinogram is replaced by a linear combination of the prior-image sinograms. An additional correction in the metal trace region is then performed to compensate for the residual errors caused by non-ideal data acquisition conditions. The performance of the proposed MAR algorithm was compared with MAR with linear interpolation and the normalized MAR algorithm using simulated and experimental data. The results show that the proposed algorithm outperforms the other MAR algorithms, especially when the object is complex with multiple bone objects. PMID:28604794
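The pre-correction step this method builds on, MAR with linear interpolation, simply bridges the metal-trace detector bins of each sinogram row from the nearest uncorrupted neighbours. A minimal sketch of that step (not the authors' full prior-image pipeline):

```python
import numpy as np

def limar_fill(row, metal_mask):
    """Fill the metal-trace detector bins of one sinogram row by linear
    interpolation from the nearest uncorrupted bins on either side: the
    classical linear-interpolation MAR step used here only to build the
    pre-corrected image that seeds the prior-image segmentation."""
    out = row.astype(float).copy()
    cols = np.arange(row.size)
    out[metal_mask] = np.interp(cols[metal_mask], cols[~metal_mask], row[~metal_mask])
    return out

row = np.arange(10.0)          # an ideal ramp of line integrals
mask = np.zeros(10, bool)
mask[4:7] = True               # bins corrupted by the metal trace
row_corrupt = row.copy()
row_corrupt[mask] = 1e6        # metal drives the measured values far too high
print(limar_fill(row_corrupt, mask))  # restores the ramp values 4., 5., 6.
```

The weakness of this step, and the motivation for the prior-image completion above, is that a straight-line bridge ignores any real structure (e.g. bone) crossed by the metal trace.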
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diot, Q; Kavanagh, B; Miften, M
2014-06-15
Purpose: To propose a quantitative method using lung deformations to differentiate between radiation-induced fibrosis and potential airway stenosis with distal atelectasis in patients treated with stereotactic body radiation therapy (SBRT) for lung tumors. Methods: Twenty-four lung patients with large radiation-induced density increases outside the high-dose region had their pre- and post-treatment CT scans manually registered. They received SBRT treatments at our institution between 2002 and 2009 in 3 or 5 fractions, to a median total dose of 54 Gy (range, 30–60). At least 50 anatomical landmarks inside the lung (airway branches) were paired for the pre- and post-treatment scans to guide the deformable registration of the lung structure, which was then interpolated to the whole lung using splines. Local volume changes between the planning and follow-up scans were calculated using the deformation field Jacobian. Hyperdense regions were classified as atelectatic or fibrotic based on correlations between regional density increases and significant volume contractions compared to the surrounding tissues. Results: Of the 24 patients, only 7 demonstrated a volume contraction at least one σ larger than the remaining lung average. Because they did not receive high doses, these shrunken hyperdense regions were likely showing distal atelectasis resulting from radiation-induced airway stenosis rather than conventional fibrosis. On average, the hyperdense regions extended 9.2 cm farther than the GTV contours, but not significantly more than the 8.6 cm for the other patients (p>0.05), indicating that a large offset between the radiation and hyperdense region centers is not a good surrogate for atelectasis. Conclusion: A method based on the relative comparison of volume changes between different dates was developed to identify lung regions potentially experiencing distal atelectasis. Such a tool is essential for studying which lung structures need to be avoided to prevent atelectasis and limit lung function loss.
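The local volume change used here comes from the Jacobian determinant of the deformation: J = det(I + ∇u) for displacement field u, with J < 1 marking contraction. A minimal NumPy sketch (unit voxel spacing is an assumption; the registration itself is not shown):

```python
import numpy as np

def local_volume_change(disp):
    """Local volume change J - 1 from a displacement field `disp` of shape
    (3, nz, ny, nx): J = det(I + grad u) per voxel. J < 1 flags regional
    contraction, the signature used to mark candidate atelectasis.
    np.gradient assumes unit voxel spacing here."""
    # grads[i, j] = d u_i / d x_j at every voxel
    grads = np.stack([np.stack(np.gradient(disp[i]), axis=0) for i in range(3)])
    F = np.eye(3).reshape(3, 3, 1, 1, 1) + grads         # deformation gradient
    J = np.linalg.det(np.moveaxis(F, (0, 1), (-2, -1)))  # det over last two axes
    return J - 1.0

# Uniform 10% stretch along z: J - 1 = 0.1 everywhere.
disp = np.zeros((3, 4, 4, 4))
disp[0] = 0.1 * np.arange(4)[:, None, None]
print(np.allclose(local_volume_change(disp), 0.1))  # True
```

In practice the per-region mean of J - 1 would be compared against the surrounding-lung average, as in the one-σ criterion of the abstract.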
Spatial uncertainty analysis: Propagation of interpolation errors in spatially distributed models
Phillips, D.L.; Marks, D.G.
1996-01-01
In simulation modelling, it is desirable to quantify model uncertainties and provide not only point estimates for output variables but confidence intervals as well. Spatially distributed physical and ecological process models are becoming widely used, with runs being made over a grid of points that represent the landscape. This requires input values at each grid point, which often have to be interpolated from irregularly scattered measurement sites, e.g., weather stations. Interpolation introduces spatially varying errors which propagate through the model. We extended established uncertainty analysis methods to a spatial domain for quantifying spatial patterns of input variable interpolation errors and how they propagate through a model to affect the uncertainty of the model output. We applied this to a model of potential evapotranspiration (PET) as a demonstration. We modelled PET for three time periods in 1990 as a function of temperature, humidity, and wind on a 10-km grid across the U.S. portion of the Columbia River Basin. Temperature, humidity, and wind speed were interpolated using kriging from 700–1000 supporting data points. Kriging standard deviations (SD) were used to quantify the spatially varying interpolation uncertainties. For each of 5693 grid points, 100 Monte Carlo simulations were done, using the kriged values of temperature, humidity, and wind, plus random error terms determined by the kriging SDs and the correlations of interpolation errors among the three variables. For the spring season example, kriging SDs averaged 2.6 °C for temperature, 8.7% for relative humidity, and 0.38 m s-1 for wind. The resultant PET estimates had coefficients of variation (CVs) ranging from 14% to 27% for the 10-km grid cells. Maps of PET means and CVs showed the spatial patterns of PET with a measure of its uncertainty due to interpolation of the input variables.
This methodology should be applicable to a variety of spatially distributed models using interpolated inputs.
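The per-grid-point Monte Carlo step can be sketched compactly: perturb the kriged inputs by their kriging SDs and summarize the spread of the model output. A minimal sketch (the cross-variable error correlations used in the study are omitted for brevity, and the model is a hypothetical stand-in, not the study's PET formulation):

```python
import numpy as np

def propagate_interp_error(mean_t, sd_t, mean_rh, sd_rh, model, n=100, seed=0):
    """Monte Carlo propagation of kriging (interpolation) uncertainty at one
    grid point: draw inputs around their kriged means with the kriging SDs,
    evaluate the model, and report the output mean and coefficient of
    variation. `model` is any f(temperature, relative_humidity)."""
    rng = np.random.default_rng(seed)
    t = rng.normal(mean_t, sd_t, n)
    rh = rng.normal(mean_rh, sd_rh, n)
    out = model(t, rh)
    return out.mean(), out.std() / abs(out.mean())  # mean, CV

# A toy linear evapotranspiration stand-in: zero input SD -> zero output CV.
toy_pet = lambda t, rh: 0.5 * t - 0.05 * rh
mean, cv = propagate_interp_error(20.0, 0.0, 50.0, 0.0, toy_pet)
print(mean, cv)  # 7.5 0.0
```

Repeating this at every grid cell yields the CV maps described in the abstract; adding correlated error terms would require sampling from a joint (multivariate normal) error model instead of independent draws.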
NASA Astrophysics Data System (ADS)
Mahmoudabadi, H.; Briggs, G.
2016-12-01
Gridded data sets, such as geoid models or datum shift grids, are commonly used in coordinate transformation algorithms. Grid files typically contain known or measured values at regular fixed intervals. The process of computing a value at an unknown location from the values in the grid data set is called "interpolation". Generally, interpolation methods predict a value at a given point by computing a weighted average of the known values in the neighborhood of the point. Geostatistical kriging is a widely used interpolation method for irregular networks. Kriging interpolation first analyzes the spatial structure of the input data, then generates a general model to describe spatial dependencies. This model is used to calculate values at unsampled locations by finding the direction, shape, size, and weight of neighborhood points. Because it is based on a linear formulation for the best estimation, kriging is the optimal interpolation method in statistical terms. The kriging algorithm produces an unbiased prediction and can also calculate the spatial distribution of uncertainty, allowing the interpolation error at any particular point to be estimated. Kriging is nevertheless not widely used in geospatial applications today, especially applications that run on low-power devices or deal with large data files, because of the computational power and memory requirements of standard kriging techniques. In this paper, improvements are introduced in a directional kriging implementation by taking advantage of the structure of the grid files. The regular spacing of points simplifies finding the neighborhood points and computing their pairwise distances, reducing the complexity and improving the execution time of the kriging algorithm. The proposed method also iteratively loads small portions of the area of interest in different directions to reduce the amount of memory required, which makes the technique feasible on almost any computer processor. Comparison between kriging and other standard interpolation methods demonstrated more accurate estimates in less dense data files.
Interpolating precipitation and its relation to runoff and non-point source pollution.
Chang, Chia-Ling; Lo, Shang-Lien; Yu, Shaw-L
2005-01-01
When rainfall spatially varies, complete rainfall data for each region with different rainfall characteristics are very important. Numerous interpolation methods have been developed for estimating unknown spatial characteristics. However, no interpolation method is suitable for all circumstances. In this study, several methods, including the arithmetic average method, the Thiessen Polygons method, the traditional inverse distance method, and the modified inverse distance method, were used to interpolate precipitation. The modified inverse distance method considers not only horizontal distances but also differences between the elevations of the region with no rainfall records and of its surrounding rainfall stations. The results show that when the spatial variation of rainfall is strong, choosing a suitable interpolation method is very important. If the rainfall is uniform, the precipitation estimated using any interpolation method would be quite close to the actual precipitation. When rainfall is heavy in locations with high elevation, the rainfall changes with the elevation. In this situation, the modified inverse distance method is much more effective than any other method discussed herein for estimating the rainfall input for WinVAST to estimate runoff and non-point source pollution (NPSP). When the spatial variation of rainfall is random, regardless of the interpolation method used to yield rainfall input, the estimation errors of runoff and NPSP are large. Moreover, the relationship between the relative error of the predicted runoff and predicted pollutant loading of SS is high. However, the pollutant concentration is affected by both runoff and pollutant export, so the relationship between the relative error of the predicted runoff and the predicted pollutant concentration of SS may be unstable.
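The modified inverse distance idea, weighting each gauge not only by horizontal distance but also by the elevation difference from the ungauged point, can be sketched directly. A minimal version (the relative weighting of elevation, `elev_weight`, is an assumed tuning parameter, not a value from the paper):

```python
import numpy as np

def modified_idw(x, y, elev, stations, power=2.0, elev_weight=1.0):
    """Inverse-distance precipitation estimate that augments the horizontal
    distance with the elevation difference between the ungauged point and
    each rain gauge, sketching the 'modified inverse distance' idea from
    the study. `stations` is a list of (x, y, elevation, precipitation)."""
    sx, sy, se, sp = (np.asarray(c, float) for c in zip(*stations))
    # Effective 3D distance: horizontal distance plus scaled elevation gap.
    d = np.hypot(np.hypot(sx - x, sy - y), elev_weight * (se - elev))
    if np.any(d == 0.0):
        return float(sp[d == 0.0][0])   # exactly at a gauge
    w = d ** -power
    return float(np.sum(w * sp) / np.sum(w))

# Hypothetical gauges: (x_km, y_km, elevation_m, precipitation_mm).
gauges = [(0.0, 0.0, 100.0, 10.0), (10.0, 0.0, 300.0, 30.0)]
print(modified_idw(5.0, 0.0, 200.0, gauges))  # 20.0 (equidistant in 3D)
```

With `elev_weight=0` this reduces to the traditional inverse distance method; increasing it lets high-elevation gauges dominate estimates at high-elevation points, matching the orographic-rainfall case where the paper found the modification most effective.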
Influence of survey strategy and interpolation model on DEM quality
NASA Astrophysics Data System (ADS)
Heritage, George L.; Milan, David J.; Large, Andrew R. G.; Fuller, Ian C.
2009-11-01
Accurate characterisation of morphology is critical to many studies in the field of geomorphology, particularly those dealing with changes over time. Digital elevation models (DEMs) are commonly used to represent morphology in three dimensions. The quality of the DEM is largely a function of the accuracy of individual survey points, field survey strategy, and the method of interpolation. Recommendations concerning field survey strategy and appropriate methods of interpolation are currently lacking. Furthermore, the majority of studies to date consider error to be uniform across a surface. This study quantifies survey strategy and interpolation error for a gravel bar on the River Nent, Blagill, Cumbria, UK. Five sampling strategies were compared: (i) cross section; (ii) bar outline only; (iii) bar and chute outline; (iv) bar and chute outline with spot heights; and (v) aerial LiDAR equivalent, derived from degraded terrestrial laser scan (TLS) data. Digital elevation models were then produced using five different common interpolation algorithms. Each resultant DEM was differenced from a terrestrial laser scan of the gravel bar surface in order to define the spatial distribution of vertical and volumetric error. Overall, triangulation with linear interpolation (TIN) or point kriging appeared to provide the best interpolators for the bar surface. The lowest error on average was found for the simulated aerial LiDAR survey strategy, regardless of interpolation technique. However, comparably low errors were also found for the bar-chute-spot sampling strategy when TINs or point kriging was used as the interpolator. The magnitude of the errors between survey strategies exceeded those found between interpolation techniques for a given survey strategy.
Strong relationships between local surface topographic variation (as defined by the standard deviation of vertical elevations in a 0.2-m diameter moving window), and DEM errors were also found, with much greater errors found at slope breaks such as bank edges. A series of curves are presented that demonstrate these relationships for each interpolation and survey strategy. The simulated aerial LiDAR data set displayed the lowest errors across the flatter surfaces; however, sharp slope breaks are better modelled by the morphologically based survey strategy. The curves presented have general application to spatially distributed data of river beds and may be applied to standard deviation grids to predict spatial error within a surface, depending upon sampling strategy and interpolation algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meier, J; Lopez, B; Mawlawi, O
2016-06-15
Purpose: To quantify the impact of 4D PET/CT on PERCIST metrics in lung and liver tumors in NSCLC and colorectal cancer patients. Methods: 32 patients presenting lung or liver tumors of 1–3 cm size affected by respiratory motion were scanned on a GE Discovery 690 PET/CT. The bed position with lesion(s) affected by motion was acquired in a 12-minute PET list mode and unlisted into 8 bins with respiratory gating. Three different CT maps were used for attenuation correction: a clinical helical CT (CT-clin), an average CT (CT-ave), and an 8-phase 4D CINE CT (CT-cine). All reconstructions were 3D OSEM, 2 iterations, 24 subsets, 6.4 Gaussian filtration, 192×192 matrix, non-TOF, and non-PSF. Reconstructions using CT-clin and CT-ave used only 3 out of the 12 minutes of the data (clinical protocol); all 12 minutes were used for the CT-cine reconstruction. The percent change of SUVbw-peak and SUVbw-max was calculated between PET-CTclin and PET-CTave. The same percent change was also calculated between PET-CTclin and PET-CTcine in each of the 8 bins and in the average of all bins. A 30% difference from PET-CTclin classified lesions as progressive metabolic disease (PMD), using both the maximum bin value and the average of the eight bin values. Results: 30 lesions in 25 patients were evaluated. Using the bin with the maximum SUVbw-peak and SUVbw-max difference, 4 and 13 lesions were classified as PMD, respectively. Using the average bin values for SUVbw-peak and SUVbw-max, 3 and 6 lesions were classified as PMD, respectively. Using PET-CTave values for SUVbw-peak and SUVbw-max, 4 and 3 lesions were classified as PMD, respectively. Conclusion: These results suggest that response evaluation in 4D PET/CT is dependent on the SUV measurement (SUVpeak vs. SUVmax), the number of bins (single or average), and the CT map used for attenuation correction.
Ruth, Veikko; Kolditz, Daniel; Steiding, Christian; Kalender, Willi A
2017-06-01
The performance of metal artifact reduction (MAR) methods in x-ray computed tomography (CT) suffers from incorrect identification of metallic implants in the artifact-affected volumetric images. The aim of this study was to investigate potential improvements of state-of-the-art MAR methods by using prior information on the geometry and material of the implant. The influence of a novel prior knowledge-based segmentation (PS) compared with threshold-based segmentation (TS) on 2 MAR methods (linear interpolation [LI] and normalized-MAR [NORMAR]) was investigated. The segmentation is the initial step of both MAR methods. Prior knowledge-based segmentation uses 3-dimensional registered computer-aided design (CAD) data as prior knowledge to estimate the correct position and orientation of the metallic objects. Threshold-based segmentation uses an adaptive threshold to identify metal. Subsequently, for LI and NORMAR, the selected voxels are projected into the raw data domain to mark metal areas. Attenuation values in these areas are replaced by different interpolation schemes followed by a second reconstruction. Finally, the previously selected metal voxels are replaced by the metal voxels determined by PS or TS in the initial reconstruction. First, we investigated in an elaborate phantom study whether knowledge of the exact implant shape, extracted from the CAD data provided by the manufacturer of the implant, can improve the MAR result. Second, the leg of a human cadaver was scanned using a clinical CT system before and after the implantation of an artificial knee joint. The results were compared regarding segmentation accuracy, CT number accuracy, and the restoration of distorted structures. The use of PS improved the efficacy of LI and NORMAR compared with TS. Artifacts caused by insufficient segmentation were reduced, and additional information was made available within the projection data.
The estimation of the implant shape was more exact and did not depend on a threshold value. Consequently, the visibility of structures was improved when comparing the new approach to the standard method. This was further confirmed by improved CT value accuracy and reduced image noise. The PS approach based on prior implant information provides image quality superior to that of TS-based MAR, especially when the shape of the metallic implant is complex. The new approach can be useful for improving MAR methods and dose calculations within radiation therapy based on MAR-corrected CT images.
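The in-painting at the heart of the LI step can be sketched on a toy sinogram: bins flagged as metal in each projection row are replaced by linear interpolation along the detector axis. The array sizes and the metal trace below are invented, and this is only the interpolation step, not the full NORMAR or prior-registration pipeline:

```python
import numpy as np

def interpolate_metal_trace(sinogram, metal_mask):
    """Replace metal-flagged bins in each projection row by linear
    interpolation along the detector axis (the LI step of MAR)."""
    corrected = sinogram.astype(float).copy()
    bins = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):
        bad = metal_mask[i]
        if bad.any() and not bad.all():
            corrected[i, bad] = np.interp(bins[bad], bins[~bad], sinogram[i, ~bad])
    return corrected

# Toy sinogram: a smooth ramp per row, with a bright metal trace inserted.
sino = np.tile(np.linspace(1.0, 2.0, 64), (8, 1))
mask = np.zeros_like(sino, dtype=bool)
mask[:, 30:34] = True
sino_metal = sino.copy()
sino_metal[mask] = 50.0  # implausibly high attenuation from metal

fixed = interpolate_metal_trace(sino_metal, mask)
print(np.abs(fixed - sino).max())  # interpolation restores the linear ramp
```

The PS-versus-TS comparison in the paper only changes how `metal_mask` is produced; the replacement step itself is the same.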
Anatomical decomposition in dual energy chest digital tomosynthesis
NASA Astrophysics Data System (ADS)
Lee, Donghoon; Kim, Ye-seul; Choi, Sunghoon; Lee, Haenghwa; Choi, Seungyeon; Kim, Hee-Joung
2016-03-01
Lung cancer is the leading cause of cancer death worldwide, and the early diagnosis of lung cancer has recently become more important. For early lung cancer screening, computed tomography (CT) has been used as the gold standard [1]. The major advantage of CT is that it is not susceptible to misdiagnosis caused by anatomical overlap, but it carries an extremely high radiation dose and cost compared to chest radiography. Chest digital tomosynthesis (CDT) is a recently introduced modality for lung cancer screening with a relatively low radiation dose compared to CT [2], and it also shows high sensitivity and specificity by preventing the anatomical overlap that occurs in chest radiography. A dual energy material decomposition method has been proposed for better detection of pulmonary nodules as a means of reducing anatomical noise [3]. In this study, the possibility of material decomposition in CDT was tested in a simulation study and in actual experiments using a prototype CDT system. Furthermore, organ absorbed dose and effective dose were compared with single energy CDT. The GATE v6 (Geant4 application for tomographic emission) and TASMIP (tungsten anode spectral model using interpolating polynomials) codes were used for the simulation study; the simulated cylindrical phantom contained four inner beads filled with spine-, rib-, muscle- and lung-equivalent materials. The patient dose was estimated with the PCXMC 1.5 Monte Carlo simulation tool [4]. The tomosynthesis scan was performed with a linear movement, and 21 projection images were obtained over a 30° angular range at 1.5° angular intervals. The prototype CDT system had the same geometry as the simulation study and was composed of an E7869X (Toshiba, Japan) x-ray tube and an FDX3543RPW (Toshiba, Japan) detector. The resulting images showed that dual energy reconstruction clearly visualizes the lung field by removing overlying bony structures.
Furthermore, dual energy CDT effectively enhanced spine bone hidden by the heart. The effective dose in dual energy CDT was slightly higher than in single energy CDT, while still only about 10% of that of an average thoracic CT [5]. Dual energy tomosynthesis is a new technique; therefore, there is little guidance for its integration into clinical practice, and this study can be used to improve the diagnostic efficiency of lung field screening using CDT.
Spatial interpolation of solar global radiation
NASA Astrophysics Data System (ADS)
Lussana, C.; Uboldi, F.; Antoniazzi, C.
2010-09-01
Solar global radiation is defined as the radiant flux incident onto an area element of the terrestrial surface. Direct knowledge of it plays a crucial role in many applications, from agrometeorology to environmental meteorology. The ARPA Lombardia meteorological network includes about one hundred pyranometers, mostly distributed on the southern side of the Alps and in the centre of the Po Plain. A statistical interpolation method based on an implementation of Optimal Interpolation is applied to the hourly averages of the solar global radiation observations measured by the ARPA Lombardia network. The background field is obtained using SMARTS (the Simple Model of the Atmospheric Radiative Transfer of Sunshine; Gueymard, 2001). The model is initialised assuming clear-sky conditions, and it takes into account the solar position and orography-related effects (shade and reflection). The interpolation of pyranometric observations introduces into the analysis fields information about cloud presence and influence. A particular effort is devoted to preventing observations affected by large errors of different kinds (representativity errors, systematic errors, gross errors) from entering the analysis procedure. The inclusion of direct cloud information from satellite observations is also planned.
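The Optimal Interpolation update behind such an analysis can be written generically as x_a = x_b + BH^T(HBH^T + R)^(-1)(y - Hx_b). The toy 1D sketch below is not the ARPA Lombardia implementation; the grid, covariances, and observations are all invented:

```python
import numpy as np

# Generic Optimal Interpolation update: analysis = background + gain * innovation.
# Toy 1D domain; all numbers are illustrative assumptions.
n, m = 50, 5                       # grid points, observations
x = np.linspace(0.0, 1.0, n)
xb = np.zeros(n)                   # background (e.g. a clear-sky model field)
obs_idx = np.array([5, 15, 25, 35, 45])
y = np.ones(m)                     # observations (e.g. cloud-affected radiation)

H = np.zeros((m, n)); H[np.arange(m), obs_idx] = 1.0    # observation operator
L = 0.1                                                  # correlation length
B = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * L**2))  # background covariance
R = 0.01 * np.eye(m)                                     # observation-error covariance

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)             # gain matrix
xa = xb + K @ (y - H @ xb)                               # analysis field
print(xa[obs_idx])   # the analysis is drawn close to the observations
```

Quality control of the kind described in the abstract (rejecting observations with gross or representativity errors) happens before this update, by screening the innovations y - Hx_b.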
NASA Astrophysics Data System (ADS)
Vogelsang, R.; Hoheisel, C.
1987-02-01
Molecular-dynamics (MD) calculations are reported for three thermodynamic states of a Lennard-Jones fluid. Systems of 2048 particles and 10^5 integration steps were used. The transverse current autocorrelation function, Ct(k,t), has been determined for wave vectors in the range 0.5<||k||σ<1.5. Ct(k,t) was fitted by hydrodynamic-type functions. The fits returned k-dependent decay times and shear viscosities which showed a systematic behavior as a function of k. Extrapolation to the hydrodynamic region at k=0 gave shear viscosity coefficients in good agreement with direct Green-Kubo results obtained in previous work. The two-exponential model fit for the memory function proposed by other authors does not provide a reasonable description of the MD results, as the fit parameters show no systematic wave-vector dependence, although the Ct(k,t) functions are somewhat better fitted. Similarly, the semiempirical interpolation formula for the decay time based on the viscoelastic concept proposed by Akcasu and Daniels fails to reproduce the correct k dependence for the wavelength range investigated herein.
A rigid motion correction method for helical computed tomography (CT)
NASA Astrophysics Data System (ADS)
Kim, J.-H.; Nuyts, J.; Kyme, A.; Kuncic, Z.; Fulton, R.
2015-03-01
We propose a method to compensate for six degree-of-freedom rigid motion in helical CT of the head. The method is demonstrated in simulations and in helical scans performed on a 16-slice CT scanner. Scans of a Hoffman brain phantom were acquired while an optical motion tracking system recorded the motion of the bed and the phantom. Motion correction was performed by restoring projection consistency using data from the motion tracking system, and reconstructing with an iterative fully 3D algorithm. Motion correction accuracy was evaluated by comparing reconstructed images with a stationary reference scan. We also investigated the effects on accuracy of tracker sampling rate, measurement jitter, interpolation of tracker measurements, and the synchronization of motion data and CT projections. After optimization of these aspects, motion corrected images corresponded remarkably closely to images of the stationary phantom with correlation and similarity coefficients both above 0.9. We performed a simulation study using volunteer head motion and found similarly that our method is capable of compensating effectively for realistic human head movements. To the best of our knowledge, this is the first practical demonstration of generalized rigid motion correction in helical CT. Its clinical value, which we have yet to explore, may be significant. For example it could reduce the necessity for repeat scans and resource-intensive anesthetic and sedation procedures in patient groups prone to motion, such as young children. It is not only applicable to dedicated CT imaging, but also to hybrid PET/CT and SPECT/CT, where it could also ensure an accurate CT image for lesion localization and attenuation correction of the functional image data.
In vitro evaluation of the imaging accuracy of C-arm conebeam CT in cerebral perfusion imaging
Ganguly, A.; Fieselmann, A.; Boese, J.; Rohkohl, C.; Hornegger, J.; Fahrig, R.
2012-01-01
Purpose: The authors have developed a method to enable cerebral perfusion CT imaging using C-arm based conebeam CT (CBCT). This allows intraprocedural monitoring of brain perfusion during treatment of stroke. Briefly, the technique consists of acquiring multiple scans (each comprising six sweeps) at different time delays with respect to the start of the x-ray contrast agent injection. The projections are then reconstructed into angular blocks and interpolated at desired time points. The authors have previously demonstrated its feasibility in vivo using an animal model. In this paper, the authors describe an in vitro technique to evaluate the accuracy of their method for measuring the relevant temporal signals. Methods: The authors’ evaluation method is based on the concept that any temporal signal can be represented by a Fourier series of weighted sinusoids. A sinusoidal phantom was developed by varying the concentration of iodine in successive steps of a sine wave, each step corresponding to a different dilution of iodine contrast solution contained in partitions along a cylinder. By translating the phantom along its axis at different velocities, sinusoidal signals at different frequencies were generated. Using their image acquisition and reconstruction algorithm, these sinusoidal signals were imaged with a C-arm system and the 3D volumes were reconstructed. The average value in a slice was plotted as a function of time. The phantom was also imaged using a clinical CT system with 0.5 s rotation. C-arm CBCT results using 6, 3, 2, and 1 scan sequences were compared to those obtained using CT. Data were compared for linear velocities of the phantom ranging from 0.6 to 1 cm/s. This covers temporal frequencies up to 0.16 Hz, a range within which 99% of the spectral energy of all temporal signals in cerebral perfusion imaging is contained.
Results: The errors in measurement of temporal frequencies are mostly below 2% for all multiscan sequences. For single scan sequences, the errors increase sharply beyond 0.10 Hz. The amplitude errors increase with frequency and with decrease in the number of scans used. Conclusions: Our multiscan perfusion CT approach allows low errors in signal frequency measurement. Increasing the number of scans reduces the amplitude errors. A two-scan sequence appears to offer the best compromise between accuracy and the associated total x-ray and iodine dose. PMID:23127059
Localization accuracy of sphere fiducials in computed tomography images
NASA Astrophysics Data System (ADS)
Kobler, Jan-Philipp; Díaz Díaz, Jesus; Fitzpatrick, J. Michael; Lexow, G. Jakob; Majdani, Omid; Ortmaier, Tobias
2014-03-01
In recent years, bone-attached robots and microstereotactic frames have attracted increasing interest due to the promising targeting accuracy they provide. Such devices attach to a patient's skull via bone anchors, which are used as landmarks during intervention planning as well. However, as simulation results reveal, the performance of such mechanisms is limited by errors occurring during the localization of their bone anchors in preoperatively acquired computed tomography images. Therefore, it is desirable to identify the most suitable fiducials as well as the most accurate method for fiducial localization. We present experimental results of a study focusing on the fiducial localization error (FLE) of spheres. Two phantoms equipped with fiducials made from ferromagnetic steel and titanium, respectively, are used to compare two clinically available imaging modalities (multi-slice CT (MSCT) and cone-beam CT (CBCT)), three localization algorithms as well as two methods for approximating the FLE. Furthermore, the impact of cubic interpolation applied to the images is investigated. Results reveal that, generally, the achievable localization accuracy in CBCT image data is significantly higher compared to MSCT imaging. The lowest FLEs (approx. 40 μm) are obtained using spheres made from titanium, CBCT imaging, template matching based on cross correlation for localization, and interpolating the images by a factor of sixteen. Nevertheless, the achievable localization accuracy of spheres made from steel is only slightly inferior. The outcomes of the presented study will be valuable considering the optimization of future microstereotactic frame prototypes as well as the operative workflow.
Lee, Terrie M.; Fouad, Geoffrey G.
2014-01-01
In Florida’s karst terrain, where groundwater and surface waters interact, a mapping time series of the potentiometric surface in the Upper Floridan aquifer offers a versatile metric for assessing the hydrologic condition of both the aquifer and overlying streams and wetlands. Long-term groundwater monitoring data were used to generate a monthly time series of potentiometric surfaces in the Upper Floridan aquifer over a 573-square-mile area of west-central Florida between January 2000 and December 2009. Recorded groundwater elevations were collated for 260 groundwater monitoring wells in the Northern Tampa Bay area, and a continuous time series of daily observations was created for 197 of the wells by estimating missing daily values through regression relations with other monitoring wells. Kriging was used to interpolate the monthly average potentiometric-surface elevation in the Upper Floridan aquifer over a decade. The mapping time series gives spatial and temporal coherence to groundwater monitoring data collected continuously over the decade by three different organizations, but at various frequencies. Further, the mapping time series describes the potentiometric surface beneath parts of six regionally important stream watersheds and 11 municipal well fields that collectively withdraw about 90 million gallons per day from the Upper Floridan aquifer. Monthly semivariogram models were developed using monthly average groundwater levels at wells. Kriging was used to interpolate the monthly average potentiometric-surface elevations and to quantify the uncertainty in the interpolated elevations. Drawdown of the potentiometric surface within well fields was likely the cause of a characteristic decrease and then increase in the observed semivariance with increasing lag distance. This characteristic made the hole-effect model appropriate for describing the monthly semivariograms and the interpolated surfaces.
Spatial variance reflected in the monthly semivariograms decreased markedly between 2002 and 2003, timing that coincided with decreases in well-field pumping. Cross-validation results suggest that the kriging interpolation may smooth over the drawdown of the potentiometric surface near production wells. The groundwater monitoring network of 197 wells yielded an average kriging error in the potentiometric-surface elevations of 2 feet or less over approximately 70 percent of the map area. Additional data collection within the existing monitoring network of 260 wells and near selected well fields could reduce the error in individual months. Reducing the kriging error in other areas would require adding new monitoring wells. Potentiometric-surface elevations fluctuated by as much as 30 feet over the study period, and the spatially averaged elevation for the entire surface rose by about 2 feet over the decade. Monthly potentiometric-surface elevations describe the lateral groundwater flow patterns in the aquifer and are usable at a variety of spatial scales to describe vertical groundwater recharge and discharge conditions for overlying surface-water features.
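Ordinary kriging with a hole-effect semivariogram can be sketched compactly. The model form, the toy well data, and the parameters below are assumptions for illustration, not the study's calibrated monthly models:

```python
import numpy as np

def hole_effect(h, sill=1.0, a=1.0):
    """Hole-effect semivariogram: the semivariance dips and recovers with
    lag distance, the behavior seen around pumped well fields."""
    h = np.asarray(h, dtype=float)
    return sill * (1.0 - np.sinc(h / a))   # np.sinc(x) = sin(pi*x)/(pi*x)

def ordinary_kriging(xy, z, xy0, model):
    """Ordinary kriging prediction and kriging variance at points xy0."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    # Kriging system with a Lagrange multiplier for the unbiasedness constraint.
    A = np.ones((n + 1, n + 1)); A[:n, :n] = model(d); A[-1, -1] = 0.0
    preds, variances = [], []
    for p in xy0:
        b = np.ones(n + 1)
        b[:n] = model(np.linalg.norm(xy - p, axis=1))
        w = np.linalg.solve(A, b)
        preds.append(w[:n] @ z)
        variances.append(w @ b)            # kriging (error) variance
    return np.array(preds), np.array(variances)

# Toy monthly-average water levels at monitoring wells (feet).
rng = np.random.default_rng(1)
wells = rng.uniform(0, 10, size=(40, 2))
levels = 50.0 + 0.5 * wells[:, 0]          # gentle regional gradient
grid = np.array([[5.0, 5.0], [2.0, 8.0]])
zhat, kvar = ordinary_kriging(wells, levels, grid, hole_effect)
print(zhat)   # interpolated potentiometric-surface elevations
```

The kriging variance returned alongside each prediction is the quantity the study maps as interpolation uncertainty across the monitoring network.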
Landmark-Based 3D Elastic Registration of Pre- and Postoperative Liver CT Data
NASA Astrophysics Data System (ADS)
Lange, Thomas; Wörz, Stefan; Rohr, Karl; Schlag, Peter M.
The qualitative and quantitative comparison of pre- and postoperative image data is an important way to validate computer-assisted surgical procedures. Due to deformations after surgery, a non-rigid registration scheme is a prerequisite for a precise comparison. Interactive landmark-based schemes are a suitable approach. Incorporating a priori knowledge about the anatomical structures to be registered may help to reduce interaction time and improve accuracy. For pre- and postoperative CT data of oncological liver resections, the intrahepatic vessels are suitable anatomical structures. In addition to using landmarks at vessel branchings, we introduce quasi-landmarks at vessel segments with anisotropic localization precision. An experimental comparison of interpolating thin-plate splines (TPS) and Gaussian elastic body splines (GEBS), as well as approximating GEBS, on both types of landmarks is performed.
NASA Astrophysics Data System (ADS)
He, Dianning; Zamora, Marta; Oto, Aytekin; Karczmar, Gregory S.; Fan, Xiaobing
2017-09-01
Differences between region-of-interest (ROI) and pixel-by-pixel analysis of dynamic contrast enhanced (DCE) MRI data were investigated in this study with computer simulations and pre-clinical experiments. ROIs were simulated with 10, 50, 100, 200, 400, and 800 different pixels. For each pixel, a contrast agent concentration as a function of time, C(t), was calculated using the Tofts DCE-MRI model with randomly generated physiological parameters (K trans and v e) and the Parker population arterial input function. The average C(t) for each ROI was calculated and then K trans and v e for the ROI were extracted. The simulations were run 100 times for each ROI with new K trans and v e generated. In addition, white Gaussian noise at 3, 6, and 12 dB signal-to-noise ratios was added to each C(t). For pre-clinical experiments, Copenhagen rats (n = 6) with implanted prostate tumors in the hind limb were used in this study. The DCE-MRI data were acquired with a temporal resolution of ~5 s in a 4.7 T animal scanner, before, during, and after a bolus injection (<5 s) of Gd-DTPA, for a total imaging duration of ~10 min. K trans and v e were calculated in two ways: (i) by fitting C(t) for each pixel, and then averaging the pixel values over the entire ROI, and (ii) by averaging C(t) over the entire ROI, and then fitting the averaged C(t) to extract K trans and v e. The simulation results showed that in heterogeneous ROIs, the pixel-by-pixel averaged K trans was ~25% to ~50% larger (p < 0.01) than the ROI-averaged K trans. At higher noise levels, the pixel-averaged K trans was greater than the ‘true’ K trans, but the ROI-averaged K trans was lower than the ‘true’ K trans. The ROI-averaged K trans was closer to the true K trans than the pixel-averaged K trans at high noise levels. In pre-clinical experiments, the pixel-by-pixel averaged K trans was ~15% larger than the ROI-averaged K trans.
Overall, with the Tofts model, the extracted physiological parameters from the pixel-by-pixel averages were larger than the ROI averages. These differences were dependent on the heterogeneity of the ROI.
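The order-of-operations effect can be reproduced with the standard Tofts model. The sketch below uses a mono-exponential AIF, which gives a closed-form tissue curve, instead of the Parker population AIF, and all parameter ranges are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

# Tofts model with a mono-exponential AIF Cp(t) = A*exp(-m*t): the tissue
# curve then has a closed form (a simplified stand-in for the Parker AIF).
A_aif, m_aif = 5.0, 0.5

def tofts(t, ktrans, ve):
    kep = ktrans / ve
    denom = kep - m_aif
    denom = np.where(np.abs(denom) < 1e-8, 1e-8, denom)  # guard kep == m
    return ktrans * A_aif * (np.exp(-m_aif * t) - np.exp(-kep * t)) / denom

t = np.linspace(0.01, 10, 120)
rng = np.random.default_rng(2)

# Heterogeneous "ROI": each pixel has its own Ktrans and ve (invented ranges).
ktrans_px = rng.uniform(0.1, 0.9, 50)
ve_px = rng.uniform(0.2, 0.5, 50)
curves = np.array([tofts(t, k, v) for k, v in zip(ktrans_px, ve_px)])

def fit(curve):
    p, _ = curve_fit(tofts, t, curve, p0=[0.3, 0.3],
                     bounds=([0.01, 0.05], [2.0, 1.0]))
    return p

# (i) fit each pixel, then average the parameters over the ROI
pixel_avg = np.mean([fit(c)[0] for c in curves])
# (ii) average the curves over the ROI, then fit once
roi_avg = fit(curves.mean(axis=0))[0]
print(pixel_avg, roi_avg)   # the two orders of averaging generally differ
```

Because the model is nonlinear in K trans, fitting the mean curve is not the same as averaging the per-pixel fits, which is the core of the discrepancy the study quantifies.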
Simulation of spatiotemporal CT data sets using a 4D MRI-based lung motion model.
Marx, Mirko; Ehrhardt, Jan; Werner, René; Schlemmer, Heinz-Peter; Handels, Heinz
2014-05-01
Four-dimensional CT imaging is widely used to account for motion-related effects during radiotherapy planning of lung cancer patients. However, 4D CT often contains motion artifacts, cannot be used to measure motion variability, and leads to higher dose exposure. In this article, we propose using 4D MRI to acquire motion information for the radiotherapy planning process. From the 4D MRI images, we derive a time-continuous model of the average patient-specific respiratory motion, which is then applied to simulate 4D CT data based on a static 3D CT. The idea of the motion model is to represent the average lung motion over a respiratory cycle by cyclic B-spline curves. The model generation consists of motion field estimation in the 4D MRI data by nonlinear registration, assigning respiratory phases to the motion fields, and applying a B-spline approximation on a voxel-by-voxel basis to describe the average voxel motion over a breathing cycle. To simulate a patient-specific 4D CT based on a static CT of the patient, a multi-modal registration strategy is introduced to transfer the motion model from MRI to the static CT coordinates. Differences between model-based estimated and measured motion vectors are on average 1.39 mm for amplitude-based binning of the 4D MRI data of three patients. In addition, the MRI-to-CT registration strategy is shown to be suitable for the model transformation. The application of our 4D MRI-based motion model for simulating 4D CT images provides advantages over standard 4D CT (less motion artifacts, radiation-free). This makes it interesting for radiotherapy planning.
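The cyclic, time-continuous motion curve for a single voxel can be sketched with a periodic cubic spline, a stand-in for the paper's cyclic B-spline approximation; the phase samples below are toy values:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Phase-indexed displacement of one voxel over a breathing cycle (toy values,
# e.g. superior-inferior motion in mm at the sampled respiratory phases).
phase = np.linspace(0.0, 1.0, 9)            # 0..1 spans one full cycle
disp = np.array([0.0, 2.1, 4.8, 6.5, 7.0, 6.2, 4.1, 1.8, 0.0])  # periodic

# Periodic cubic spline: a time-continuous motion curve that closes smoothly,
# standing in for the per-voxel cyclic B-spline approximation.
motion = CubicSpline(phase, disp, bc_type='periodic')

# Evaluate at an arbitrary phase, e.g. to warp a static CT to that phase.
print(motion(0.37))
# The curve is cyclic: value and first derivative match at phase 0 and 1.
print(motion(0.0) - motion(1.0), motion(0.0, 1) - motion(1.0, 1))
```

In the paper this is done voxel-by-voxel on registration-derived motion fields, and the resulting model is transferred to the static CT coordinates before simulating the 4D CT.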
A multiresolution hierarchical classification algorithm for filtering airborne LiDAR data
NASA Astrophysics Data System (ADS)
Chen, Chuanfa; Li, Yanyan; Li, Wei; Dai, Honglei
2013-08-01
We presented a multiresolution hierarchical classification (MHC) algorithm for differentiating ground from non-ground LiDAR point clouds based on point residuals from an interpolated raster surface. MHC includes three levels of hierarchy, with the cell resolution and residual threshold increasing simultaneously from the low to the high level of the hierarchy. At each level, the surface is iteratively interpolated towards the ground using thin plate spline (TPS) until no more ground points are classified, and the classified ground points are used to update the surface in the next iteration. Fifteen groups of benchmark datasets, provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) commission, were used to compare the performance of MHC with that of 17 other published filtering methods. Results indicated that MHC, with an average total error of 4.11% and an average Cohen’s kappa coefficient of 86.27%, performs better than all the other filtering methods.
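One hierarchy level of such residual-based TPS filtering might look like the following single-resolution sketch. It uses a thin-plate-spline RBF interpolant and invented toy terrain, and is not the authors' implementation:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_ground_filter(xy, z, seed_idx, threshold, iterations=5):
    """Iteratively grow the ground set: interpolate a TPS surface from the
    current ground points and accept points whose residual from the surface
    is below the threshold (a sketch of one MHC hierarchy level)."""
    ground = np.zeros(len(z), dtype=bool)
    ground[seed_idx] = True
    for _ in range(iterations):
        surf = RBFInterpolator(xy[ground], z[ground],
                               kernel='thin_plate_spline', smoothing=1e-3)
        resid = z - surf(xy)
        new_ground = resid < threshold          # low residuals are ground-like
        if np.array_equal(new_ground, ground):
            break
        ground = new_ground
    return ground

# Toy point cloud: gently undulating terrain plus a few raised "objects".
rng = np.random.default_rng(3)
xy = rng.uniform(0, 10, size=(300, 2))
terrain = 0.2 * np.sin(xy[:, 0]) + 0.1 * xy[:, 1]
z = terrain.copy()
obj = rng.choice(300, size=30, replace=False)
z[obj] += 3.0                                   # vegetation/buildings

# Seed with the lowest point in each coarse 2 x 2 cell (minimum grid).
cell = (xy // 2).astype(int)
seeds = []
for c in np.unique(cell, axis=0):
    idx = np.where((cell == c).all(axis=1))[0]
    seeds.append(idx[np.argmin(z[idx])])

is_ground = tps_ground_filter(xy, z, seeds, threshold=0.3)
print(is_ground.sum(), "points classified as ground")
```

The full MHC algorithm repeats this at three levels, refining the cell resolution and relaxing the residual threshold as the hierarchy ascends.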
NASA Technical Reports Server (NTRS)
Dupuis, L. R.; Scoggins, J. R.
1979-01-01
Results of analyses revealed that nonlinear changes or differences formed centers or systems that were mesosynoptic in nature. These systems correlated well in space with upper-level short waves, frontal zones, and radar-observed convection, and were very systematic in time and space. Many of the centers of differences were well established in the vertical, extending up to the tropopause. Statistical analysis showed that, on average, nonlinear changes were larger in convective areas than in nonconvective regions. Errors often exceeding 100 percent were made by assuming variables to change linearly through a 12-h period in areas of thunderstorms, indicating that these nonlinear changes are important in the development of severe weather. Linear changes, however, accounted for more and more of an observed change as the time interval (within the 12-h interpolation period) increased, implying that the accuracy of linear interpolation increased over larger time intervals.
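The size of the linear-assumption error is easy to illustrate: for a variable with a mid-period surge, interpolating linearly between the 12-h endpoint values misses nearly the entire change. The series below is a toy construction, not the study's sounding data:

```python
import numpy as np

# Toy variable with a nonlinear change over a 12-h period (a mid-period
# convective surge), sampled hourly as "truth".
t = np.arange(0.0, 12.1, 1.0)
truth = 10.0 + 8.0 * np.exp(-((t - 6.0) ** 2) / 4.0)   # peaks at t = 6 h

# Linear assumption: interpolate between the 0-h and 12-h values only.
linear = np.interp(t, [t[0], t[-1]], [truth[0], truth[-1]])

err = truth - linear
print("peak error of the linear assumption:", err.max())  # nearly the full surge
```

With equal endpoint values, the linear estimate is essentially flat, so at the surge peak the error approaches 100 percent of the observed change, which matches the scale of error the study reports in thunderstorm areas.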
Fine-granularity inference and estimations to network traffic for SDN.
Jiang, Dingde; Huo, Liuwei; Li, Ya
2018-01-01
An end-to-end network traffic matrix is significantly helpful for network management and for Software Defined Networks (SDN). However, the end-to-end network traffic matrix's inferences and estimations are a challenging problem. Moreover, attaining the traffic matrix in high-speed networks for SDN is a prohibitive challenge. This paper investigates how to estimate and recover the end-to-end network traffic matrix in fine time granularity from the sampled traffic traces, which is a hard inverse problem. Different from previous methods, the fractal interpolation is used to reconstruct the finer-granularity network traffic. Then, the cubic spline interpolation method is used to obtain the smooth reconstruction values. To attain an accurate the end-to-end network traffic in fine time granularity, we perform a weighted-geometric-average process for two interpolation results that are obtained. The simulation results show that our approaches are feasible and effective.
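The reconstruction pipeline can be sketched on a toy series. The midpoint-displacement scheme below stands in for the paper's fractal interpolation, and the 0.5 weight in the geometric average is a guess, not the authors' calibrated value:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def midpoint_fractal(coarse, levels, h=0.8, seed=0):
    """Simple midpoint-displacement refinement as a stand-in for fractal
    interpolation: each level halves the spacing and perturbs midpoints."""
    rng = np.random.default_rng(seed)
    y = np.asarray(coarse, dtype=float)
    scale = 0.25 * np.std(y)
    for _ in range(levels):
        mid = 0.5 * (y[:-1] + y[1:]) + rng.normal(0, scale, len(y) - 1)
        out = np.empty(2 * len(y) - 1)
        out[0::2], out[1::2] = y, mid
        y, scale = out, scale * 2 ** (-h)     # roughness decays per level
    return y

# Coarse sampled traffic volumes (toy values, strictly positive).
coarse_t = np.arange(0, 9)
coarse_v = np.array([40.0, 55.0, 48.0, 70.0, 90.0, 85.0, 60.0, 52.0, 45.0])

levels = 2                                    # 4x finer time granularity
fine_t = np.linspace(coarse_t[0], coarse_t[-1], (len(coarse_v) - 1) * 2**levels + 1)
frac = midpoint_fractal(coarse_v, levels)            # rough reconstruction
smooth = CubicSpline(coarse_t, coarse_v)(fine_t)     # smooth reconstruction

# Weighted geometric average of the two reconstructions (weight is a guess).
w = 0.5
recon = np.clip(frac, 1e-6, None) ** w * np.clip(smooth, 1e-6, None) ** (1 - w)
print(recon[:5])
```

The geometric average keeps the fractal reconstruction's fine-scale variability while the spline term damps its random excursions, which is the intent of the combination step described in the abstract.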
Neumann, Jan-Oliver; Giese, Henrik; Biller, Armin; Nagel, Armin M; Kiening, Karl
2015-01-01
Magnetic resonance imaging (MRI) is replacing computed tomography (CT) as the main imaging modality for stereotactic transformations. MRI is prone to spatial distortion artifacts, which can lead to inaccuracy in stereotactic procedures. Modern MRI systems provide distortion correction algorithms that may ameliorate this problem. This study investigates the different options of distortion correction using standard 1.5-, 3- and 7-tesla MRI scanners. A phantom was mounted on a stereotactic frame. One CT scan and three MRI scans were performed. At all three field strengths, two 3-dimensional sequences, volumetric interpolated breath-hold examination (VIBE) and magnetization-prepared rapid acquisition with gradient echo, were acquired, and automatic distortion correction was performed. Global stereotactic transformation of all 13 datasets was performed and two stereotactic planning workflows (MRI only vs. CT/MR image fusion) were subsequently analysed. Distortion correction on the 1.5- and 3-tesla scanners caused a considerable reduction in positional error. The effect was more pronounced when using the VIBE sequences. By using co-registration (CT/MR image fusion), even a lower positional error could be obtained. In ultra-high-field (7 T) MR imaging, distortion correction introduced even higher errors. However, the accuracy of non-corrected 7-tesla sequences was comparable to CT/MR image fusion 3-tesla imaging. MRI distortion correction algorithms can reduce positional errors by up to 60%. For stereotactic applications of utmost precision, we recommend a co-registration to an additional CT dataset. © 2015 S. Karger AG, Basel.
Performance of Statistical Temporal Downscaling Techniques of Wind Speed Data Over Aegean Sea
NASA Astrophysics Data System (ADS)
Gokhan Guler, Hasan; Baykal, Cuneyt; Ozyurt, Gulizar; Kisacik, Dogan
2016-04-01
Wind speed data is a key input for many meteorological and engineering applications. Many institutions provide wind speed data with temporal resolutions ranging from one hour to twenty-four hours. Higher temporal resolution is generally required for some applications, such as reliable wave hindcasting studies. One solution to generate wind data at high sampling frequencies is to use statistical downscaling techniques to interpolate values at the finer sampling intervals from the available data. In this study, the major aim is to assess the temporal downscaling performance of nine statistical interpolation techniques by quantifying the inherent uncertainty due to the selection of different techniques. For this purpose, hourly 10-m wind speed data from 227 data points over the Aegean Sea between 1979 and 2010, with a spatial resolution of approximately 0.3 degrees, are analyzed from the National Centers for Environmental Prediction (NCEP) Climate Forecast System Reanalysis database. Additionally, hourly 10-m wind speed data from two in-situ measurement stations between June 2014 and June 2015 are considered to understand the effect of dataset properties on the uncertainty introduced by the interpolation technique. The nine statistical interpolation techniques are: w0 (left constant) interpolation, w6 (right constant) interpolation, averaging step function interpolation, linear interpolation, 1D Fast Fourier Transform interpolation, 2nd and 3rd degree Lagrange polynomial interpolation, cubic spline interpolation, and piecewise cubic Hermite interpolating polynomials. The original data is down-sampled to 6 hours (i.e. wind speeds at the 0th, 6th, 12th and 18th hours of each day are selected), the 6-hourly data is then temporally downscaled to hourly data (i.e. the wind speeds at each hour between the intervals are computed) using the nine interpolation techniques, and finally the original data is compared with the temporally downscaled data.
A penalty point system based on coefficient of variation of the root mean square error, normalized mean absolute error, and prediction skill is used to rank the nine interpolation techniques according to their performance. Thus, the error originating from the temporal downscaling technique is quantified, which is an important output for determining wind and wave modelling uncertainties, and the performance of these techniques is demonstrated over the Aegean Sea, indicating spatial trends and discussing relevance to data type (i.e., reanalysis data or in-situ measurements). Furthermore, the bias introduced by the best temporal downscaling technique is discussed. Preliminary results show that, overall, piecewise cubic Hermite interpolating polynomials have the highest performance for temporally downscaling wind speed data for both reanalysis data and in-situ measurements over the Aegean Sea. However, it is observed that cubic spline interpolation performs much better along the Aegean coastline, where the data points are close to the land. Acknowledgement: This research was partly supported by TUBITAK Grant number 213M534 under the Turkish-Russian joint research grant with RFBR and the CoCoNET (Towards Coast to Coast Network of Marine Protected Areas Coupled by Wind Energy Potential) project funded by the European Union FP7/2007-2013 program.
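As a rough illustration of the comparison described above, the sketch below (not from the paper; the synthetic wind series and its parameters are assumptions) downsamples an hourly series to 6-hourly values and temporally downscales it back with two of the nine techniques, piecewise cubic Hermite (PCHIP) and cubic spline, scoring each by RMSE against the original:

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

# Synthetic hourly wind series over ~4 days (illustrative, not real data).
hours = np.arange(91)
wind = 8 + 3 * np.sin(2 * np.pi * hours / 24) + 0.5 * np.sin(2 * np.pi * hours / 7)

# Downsample to 6-hourly values (0th, 6th, 12th, 18th hours of each day)...
knots, obs = hours[::6], wind[::6]

# ...then temporally downscale back to hourly with two of the techniques.
pchip = PchipInterpolator(knots, obs)(hours)
spline = CubicSpline(knots, obs)(hours)

def rmse(est, ref):
    return float(np.sqrt(np.mean((est - ref) ** 2)))

print(f"PCHIP RMSE:  {rmse(pchip, wind):.3f} m/s")
print(f"Spline RMSE: {rmse(spline, wind):.3f} m/s")
```

Both interpolants reproduce the 6-hourly knots exactly; they differ only in the hours between knots, which is precisely what the penalty point system in the study scores.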
Tri-linear interpolation-based cerebral white matter fiber imaging
Jiang, Shan; Zhang, Pengfei; Han, Tong; Liu, Weihua; Liu, Meixia
2013-01-01
Diffusion tensor imaging is a unique method for visualizing white matter fibers three-dimensionally, non-invasively and in vivo, and is therefore an important tool for observing and researching neural regeneration. Different diffusion tensor imaging-based fiber tracking methods have already been investigated, but for clinical applications the computation needs to be faster, the tracked fibers longer and smoother, and the details clearer. This study proposed a new fiber tracking strategy based on tri-linear interpolation. We selected a patient with acute infarction of the right basal ganglia and designed experiments based on either the tri-linear interpolation algorithm or the tensorline algorithm. Fiber tracking in the same region of interest (genu of the corpus callosum) was performed separately. The validity of the tri-linear interpolation algorithm was verified by quantitative analysis, and its feasibility in clinical diagnosis was confirmed by comparing the tracking results with the disease condition of the patient as well as the actual brain anatomy. Statistical results showed that the maximum length and average length of the white matter fibers tracked by the tri-linear interpolation algorithm were significantly longer. The tracking images indicated that this method obtains smoother tracked fibers, more obvious orientation and clearer details. Tracked fiber abnormalities were in good agreement with the actual condition of the patient, and the tracking displayed fibers that passed through the corpus callosum, consistent with the anatomical structures of the brain. Therefore, the tri-linear interpolation algorithm can achieve a clear, anatomically correct and reliable tracking result. PMID:25206524
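The core operation the entry relies on, sampling a gridded field at off-grid positions during tracking, can be sketched as plain tri-linear interpolation (a generic illustration, not the authors' implementation; in fiber tracking it would be applied per tensor component):

```python
import numpy as np

def trilinear(volume, x, y, z):
    """Tri-linearly interpolate a 3-D array at fractional coordinate
    (x, y, z): a weighted sum of the eight surrounding voxels, with
    weights given by products of distances to the cell faces."""
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    xd, yd, zd = x - x0, y - y0, z - z0
    c = volume[x0:x0 + 2, y0:y0 + 2, z0:z0 + 2]   # the 2x2x2 corner cube
    w = np.array([1 - xd, xd])[:, None, None] * \
        np.array([1 - yd, yd])[None, :, None] * \
        np.array([1 - zd, zd])[None, None, :]
    return float((c * w).sum())

# Toy volume whose value is linear in the coordinates: v = 9x + 3y + z.
vol = np.arange(27, dtype=float).reshape(3, 3, 3)
print(trilinear(vol, 0.5, 0.5, 0.5))  # midpoint of the first cell -> 6.5
```

Because the toy volume is linear in each coordinate, tri-linear interpolation reproduces it exactly, which makes the sketch easy to verify by hand.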
Spatial interpolation schemes of daily precipitation for hydrologic modeling
Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.
2012-01-01
Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before the amount of precipitation is estimated separately on wet days. This process generated the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time-step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by the three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin, and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error than the directly interpolated inputs. © 2011 Springer-Verlag.
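A minimal sketch of the two-step idea on synthetic data (the predictors, coefficients, and log-normal wet-day amounts are assumptions, not the paper's setup): occurrence is modeled with logistic regression, and amounts are modeled with a separate regression fitted on wet days only:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)

# Toy predictors (e.g., elevation and a neighbour-precipitation index).
X = rng.normal(size=(500, 2))
occurrence = (X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)
amount = np.where(occurrence == 1,
                  np.exp(X[:, 0] + rng.normal(scale=0.3, size=500)), 0.0)

# Step 1: model wet/dry occurrence.
occ_model = LogisticRegression().fit(X, occurrence)

# Step 2: model log-amounts on wet days only.
wet = occurrence == 1
amt_model = LinearRegression().fit(X[wet], np.log(amount[wet]))

# Expected precipitation = P(wet) * predicted wet-day amount.
p_wet = occ_model.predict_proba(X)[:, 1]
est = p_wet * np.exp(amt_model.predict(X))
```

Separating occurrence from amount is what lets the scheme reproduce the intermittent (many-zeros) character of daily precipitation that a single regression smooths away.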
Dose calculation with respiration-averaged CT processed from cine CT without a respiratory surrogate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riegel, Adam C.; Ahmad, Moiz; Sun Xiaojun
2008-12-15
Dose calculation for thoracic radiotherapy is commonly performed on a free-breathing helical CT despite artifacts caused by respiratory motion. Four-dimensional computed tomography (4D-CT) is one method to incorporate motion information into the treatment planning process. Some centers now use the respiration-averaged CT (RACT), the pixel-by-pixel average of the ten phases of 4D-CT, for dose calculation. This method, while sparing the tedious task of 4D dose calculation, still requires 4D-CT technology. The authors have recently developed a means to reconstruct RACT directly from the unsorted cine CT data from which 4D-CT is formed, bypassing the need for a respiratory surrogate. Using RACT from cine CT for dose calculation may be a means to incorporate motion information into dose calculation without performing 4D-CT. The purpose of this study was to determine whether RACT from cine CT can be substituted for RACT from 4D-CT for the purposes of dose calculation, and whether increasing the cine duration can decrease differences between the dose distributions. Cine CT data and corresponding 4D-CT simulations for 23 patients with at least two breathing cycles per cine duration were retrieved. RACT was generated four ways: first, from the ten phases of 4D-CT; second, from 1 breathing cycle of images; third, from 1.5 breathing cycles of images; and fourth, from 2 breathing cycles of images. The clinical treatment plan was transferred to each RACT and dose was recalculated. Dose planes were exported at orthogonal planes through the isocenter (coronal, sagittal, and transverse orientations). The resulting dose distributions were compared using the gamma (γ) index within the planning target volume (PTV). Failure criteria were set to 2%/1 mm. A follow-up study with 50 additional lung cancer patients was performed to increase the sample size. The same dose recalculation and analysis was performed.
In the primary patient group, 22 of 23 patients had 100% of points within the PTV pass the γ criteria. The average maximum and mean γ indices were very low (well below 1), indicating good agreement between dose distributions. Increasing the cine duration generally increased the dose agreement. In the follow-up study, 49 of 50 patients had 100% of points within the PTV pass the γ criteria. The average maximum and mean γ indices were again well below 1, indicating good agreement. Dose calculation on RACT from cine CT is negligibly different from dose calculation on RACT from 4D-CT. Differences can be decreased further by increasing the cine duration of the cine CT scan.
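The RACT itself is just a pixel-by-pixel average over the respiratory phases; a minimal sketch (array shapes and HU values below are illustrative):

```python
import numpy as np

def respiration_averaged_ct(phases):
    """Pixel-by-pixel average of a stack of CT phase images.
    `phases` has shape (n_phases, ny, nx), values in HU."""
    return np.asarray(phases, dtype=float).mean(axis=0)

# Ten synthetic 4D-CT phases of a tiny 4x4 slice, HU drifting with phase.
phases = np.stack([np.full((4, 4), -1000.0 + 10 * i) for i in range(10)])
ract = respiration_averaged_ct(phases)
```

With unsorted cine data, the same average is taken over all frames acquired at a couch position rather than over ten sorted phases, which is why no respiratory surrogate is needed.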
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ouyang, L; Yan, H; Jia, X
2014-06-01
Purpose: A moving blocker based strategy has shown promising results for scatter correction in cone-beam computed tomography (CBCT). Different parameters of the system design affect its performance in scatter estimation and image reconstruction accuracy. The goal of this work is to optimize the geometric design of the moving blocker system. Methods: In the moving blocker system, a blocker consisting of lead strips is inserted between the x-ray source and the imaging object and moves back and forth along the rotation axis during CBCT acquisition. A CT image of an anthropomorphic pelvic phantom was used in the simulation study. Scatter signal was simulated by Monte Carlo calculation with various combinations of the lead strip width and the gap between neighboring lead strips, ranging from 4 mm to 80 mm (projected at the detector plane). Scatter signal in the unblocked region was estimated by cubic B-spline interpolation from the blocked region. Scatter estimation accuracy was quantified as relative root mean squared error by comparing the interpolated scatter to the Monte Carlo simulated scatter. CBCT was reconstructed by total variation minimization from the unblocked region, under various combinations of the lead strip width and gap. Reconstruction accuracy in each condition was quantified as the CT number error compared to a CBCT reconstructed from unblocked full projection data. Results: Scatter estimation error varied from 0.5% to 2.6% as the lead strip width and the gap varied from 4 mm to 80 mm. CT number error in the reconstructed CBCT images varied from 12 to 44. The highest reconstruction accuracy is achieved when the blocker lead strip width is 8 mm and the gap is 48 mm. Conclusions: Accurate scatter estimation can be achieved over a large range of combinations of lead strip width and gap. However, image reconstruction accuracy is greatly affected by the geometric design of the blocker.
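The interpolation step can be sketched in 1-D: sample the scatter signal behind the blocker strips and interpolate across the unblocked gaps. In this sketch a 1-D cubic spline stands in for the paper's 2-D cubic B-spline, and the synthetic Gaussian scatter profile and strip spacing are assumptions:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Detector coordinate (mm) and a synthetic, slowly varying scatter profile.
u = np.arange(0, 400, 2.0)
scatter_true = 100 * np.exp(-((u - 200) / 150) ** 2)

# Suppose a lead strip blocks the primary signal every 56 mm (e.g. 8 mm
# strips with 48 mm gaps at the detector); scatter is read behind each
# strip centre, where only scattered photons reach the detector.
strip_centres = np.arange(4, 400, 56.0)
measured = 100 * np.exp(-((strip_centres - 200) / 150) ** 2)

# Spline interpolation recovers scatter in the unblocked regions.
est = CubicSpline(strip_centres, measured)(u)
rrmse = np.sqrt(np.mean((est - scatter_true) ** 2)) / scatter_true.mean()
print(f"relative RMSE: {rrmse:.3%}")
```

Because scatter varies slowly across the detector, even widely spaced samples interpolate it accurately, which is consistent with the low (0.5%–2.6%) estimation errors the study reports.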
Xu, Yonghong; Gao, Xiaohuan; Wang, Zhengxi
2014-04-01
Missing data represent a general problem in many scientific fields, especially in medical survival analysis. Interpolation is one of the important methods for dealing with censored data. However, most interpolation methods replace the censored data with exact data, which distorts the real distribution of the censored data and reduces the probability of the real data falling into the interpolation data. In order to solve this problem, we propose in this paper a nonparametric method of estimating the survival function of right-censored and interval-censored data and compare its performance to the SC (self-consistent) algorithm. Compared to the average interpolation and the nearest-neighbor interpolation methods, the proposed method replaces the right-censored data with interval-censored data, greatly improving the probability of the real data falling into the imputation interval. It then builds on empirical distribution theory to estimate the survival function of right-censored and interval-censored data. The results of numerical examples and a real breast cancer data set demonstrated that the proposed method had higher accuracy and better robustness across different proportions of censored data. This paper provides a good method for comparing the performance of clinical treatments by estimating the survival data of patients, which offers some help to medical survival data analysis.
NASA Astrophysics Data System (ADS)
Fredriksen, H. B.; Løvsletten, O.; Rypdal, M.; Rypdal, K.
2014-12-01
Several research groups around the world collect instrumental temperature data and combine them in different ways to obtain global gridded temperature fields. The three best-known datasets are HadCRUT4, produced by the Climatic Research Unit and the Met Office Hadley Centre in the UK, one produced by NASA GISS, and one produced by NOAA. Recently, Berkeley Earth has also developed a gridded dataset. All four will be compared in our analysis. The statistical properties we focus on are the standard deviation and the Hurst exponent. These two parameters are sufficient to describe the temperatures as long-range memory stochastic processes; the standard deviation describes the general fluctuation level, while the Hurst exponent relates the strength of the long-term variability to the strength of the short-term variability. A higher Hurst exponent means that the slow variations are stronger compared to the fast ones, and that the autocovariance function has a stronger tail. Hence the Hurst exponent gives us information about the persistence, or memory, of the process. We make use of these data to show that data averaged over a larger area exhibit higher Hurst exponents and lower variance than data averaged over a smaller area, which provides information about the relationship between temporal and spatial correlations of the temperature fluctuations. Interpolation in space has some similarities with averaging over space, although interpolation is weighted more towards the measurement locations. We demonstrate that the degree of spatial interpolation used can explain some differences observed between the variances and memory exponents computed from the various datasets.
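One standard way to estimate the Hurst exponent of a time series is the aggregated-variance method, sketched below on white noise (an illustration of the concept, not the authors' estimator): for a long-range dependent series, the variance of block means scales as m**(2H - 2), so H falls out of a log-log fit.

```python
import numpy as np

def hurst_aggvar(x, block_sizes=(2, 4, 8, 16, 32)):
    """Aggregated-variance Hurst estimate: regress log Var(block means)
    on log(block size); the slope equals 2H - 2."""
    x = np.asarray(x, dtype=float)
    logm, logv = [], []
    for m in block_sizes:
        n = len(x) // m
        means = x[: n * m].reshape(n, m).mean(axis=1)
        logm.append(np.log(m))
        logv.append(np.log(means.var()))
    slope = np.polyfit(logm, logv, 1)[0]
    return 1 + slope / 2

rng = np.random.default_rng(1)
white = rng.normal(size=4096)
h = hurst_aggvar(white)  # near 0.5 for uncorrelated noise
```

For white noise the block-mean variance falls as 1/m (slope −1, H = 0.5); persistent series decay more slowly, giving H > 0.5, which is the signature the abstract links to spatial averaging.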
EDI OCT evaluation of choroidal thickness in Stargardt disease
Sodi, Andrea; Bacherini, Daniela; Caporossi, Orsola; Murro, Vittoria; Mucciolo, Dario Pasquale; Cipollini, Francesca; Passerini, Ilaria; Virgili, Gianni; Rizzo, Stanislao
2018-01-01
Purpose: Choroidal thickness (CT) evaluation with EDI-OCT in Stargardt disease (STGD), considering its possible association with some clinical features of the disease. Methods: CT was evaluated in 41 STGD patients and in 70 controls. Measurements were performed at the subfoveal position and at 1000 μm nasally and temporally. Average CT values in the STGD and control groups were first compared by means of Student's t-test. Then, the possible association between CT and some clinical features was evaluated by means of linear regression analysis. The clinical parameters considered were: age, age of onset, duration of the disease, visual acuity, foveal thickness, Fishman clinical phenotype, visual field loss and ERG response. Results: Average CT was not significantly different between controls and STGD patients. In the STGD group the correlation between CT and age (r = 0.22, p = 0.033) and age of onset (r = 0.05, p = 0.424) was modest, while that of CT with disease duration (r = 0.30, p<0.001) was moderate. CT and foveal thickness were also significantly but modestly correlated (r = 0.15, p = 0.033). Conclusion: In our series average CT is not significantly changed in STGD in comparison with controls. Nevertheless, choroidal thinning may be identified in the more advanced stages of the disease. PMID:29304098
Chirindel, Alin; Adebahr, Sonja; Schuster, Daniel; Schimek-Jasch, Tanja; Schanne, Daniel H; Nemer, Ursula; Mix, Michael; Meyer, Philipp; Grosu, Anca-Ligia; Brunner, Thomas; Nestle, Ursula
2015-06-01
Evaluation of the effect of co-registered 4D-(18)FDG-PET/CT on SBRT target delineation in patients with central versus peripheral lung tumors. Analysis of internal target volume (ITV) delineation of central and peripheral lung lesions in 21 SBRT patients. Manual delineation was performed by 4 observers in 2 contouring phases: on respiratory-gated 4DCT with the diagnostic 3DPET available alongside (CT-ITV), and on co-registered 4DPET/CT (PET/CT-ITV). Comparative analysis of volumes and inter-reader agreement. 11 peripheral and 10 central lesions were evaluated. In peripheral lesions, the average CT-ITV was 6.2 cm(3) and PET/CT-ITV 8.6 cm(3), corresponding to a mean change in hypothetical radius of 2 mm. For both CT-ITVs and PET/CT-ITVs, inter-reader agreement was good and unchanged (0.733 and 0.716; p=0.58). All PET/CT-ITVs stayed within the PTVs derived from CT-ITVs. In central lesions, average CT-ITVs were 42.1 cm(3) and PET/CT-ITVs 44.2 cm(3), without significant overall volume changes. Inter-reader agreement improved significantly (0.665 and 0.750; p<0.05). 2/10 PET/CT-ITVs exceeded the PTVs derived from CT-ITVs by >1 ml on average across all observers. The addition of co-registered 4DPET data to 4DCT-based target volume delineation for SBRT of centrally located lung tumors increases inter-observer agreement and may help to avoid geographic misses. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Time Series of Greenland Ice-Sheet Elevations and Mass Changes from ICESat 2003-2009
NASA Astrophysics Data System (ADS)
Zwally, H. J.; Li, J.; Medley, B.; Robbins, J. W.; Yi, D.
2015-12-01
We follow the repeat-track analysis (RTA) of ICESat surface-elevation data by a second stage that adjusts the measured elevations on repeat passes to the reference track taking into account the cross-track slope (αc), in order to construct elevation time series. αc are obtained from RTA simultaneous solutions for αc, dh/dt, and h0. The height measurements on repeat tracks are initially interpolated to uniform along-track reference points (every 172 m) and times (ti) giving the h(xi,ti) used in the RTA solutions. The xi are the cross-track spacings from the reference track and i is the laser campaign index. The adjusted elevation measurements at the along-track reference points are hr(ti) = h(xi,ti) - xi tan(αc) - h0. The hr(ti) time series are averaged over 50 km cells creating H(ti) series and further averaged (weighted by cell area) to H(t) time series over drainage systems (DS), elevation bands, regions, and the entire ice sheet. Temperature-driven changes in the rate of firn compaction, CT(t), are calculated for 50 km cells with our firn-compaction model giving I(t) = H(t) - CT(t) - B(t) where B(t) is the vertical motion of the bedrock. During 2003 to 2009, the average dCT(t)/dt in the accumulation zone is -5 cm/yr, which amounts to a -75 km3/yr correction to ice volume change estimates. The I(t) are especially useful for studying the seasonal cycle of mass gains and losses and interannual variations. The H(t) for the ablation zone are fitted with a multi-variate function with a linear component describing the upward component of ice flow plus winter accumulation (fall through spring) and a portion of a sine function describing the superimposed summer melting. During fall to spring the H(t) indicate that the upward motion of the ice flow is at a rate of 1 m/yr, giving an annual mass gain of 180 Gt/yr in the ablation zone. 
The summer loss from surface melting in the high-melt summer of 2005 is 350 Gt/yr, giving a net surface loss of 170 Gt/yr from the ablation zone for 2005. During 2003-2008, the H(t) for the ablation zone show accelerations of the mass losses in the northwest DS8 and in the west-central DS7 (including Jacobshavn glacier) and offsetting decelerations of the mass losses in the east-central DS3 and southeast DS4, much of which occurred in 2008 possibly due to an eastward shift in the surface mass balance.
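The cross-track adjustment quoted in the entry above, hr(ti) = h(xi,ti) - xi tan(αc) - h0, reduces to a one-liner; the sketch below (synthetic numbers, chosen for illustration) shows that after adjustment a simple linear fit recovers dh/dt:

```python
import numpy as np

def adjust_to_reference_track(h, x, alpha_c, h0):
    """Adjust repeat-pass elevations h(x_i, t_i) to the reference track:
    h_r(t_i) = h(x_i, t_i) - x_i * tan(alpha_c) - h0,
    where x_i is the cross-track offset and alpha_c the cross-track slope."""
    return h - x * np.tan(alpha_c) - h0

# Illustrative repeat passes: surface rises 0.1 m/yr, cross-track slope
# 0.002 rad, offsets of tens of metres between passes.
t = np.array([0.0, 1.0, 2.0])                   # campaign times (yr)
x = np.array([-80.0, 20.0, 60.0])               # cross-track offsets (m)
alpha_c, h0 = 0.002, 5.0
h = h0 + 0.1 * t + x * np.tan(alpha_c)          # measured elevations
hr = adjust_to_reference_track(h, x, alpha_c, h0)
dhdt = np.polyfit(t, hr, 1)[0]                  # recovered trend (m/yr)
```

Without the adjustment, the cross-track slope term x·tan(αc) would alias into the elevation time series; removing it is what makes the cell-averaged H(t) series meaningful.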
TU-AB-BRA-02: An Efficient Atlas-Based Synthetic CT Generation Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, X
2016-06-15
Purpose: A major obstacle for MR-only radiotherapy is the need to generate an accurate synthetic CT (sCT) from MR image(s) of a patient for the purposes of dose calculation and DRR generation. We propose here an accurate and efficient atlas-based sCT generation method, whose computation speed is largely independent of the number of atlases used. Methods: Atlas-based sCT generation requires a set of atlases with co-registered CT and MR images. Unlike existing methods that align each atlas to the new patient independently, we first create an average atlas and pre-align every atlas to the average atlas space. When a new patient arrives, we compute only one deformable image registration to align the patient MR image to the average atlas, which indirectly aligns the patient to all pre-aligned atlases. A patch-based non-local weighted fusion is performed in the average atlas space to generate the sCT for the patient, which is then warped back to the original patient space. We further adapt a PatchMatch algorithm that can quickly find top matches between patches of the patient image and all atlas images, which makes the patch fusion step also independent of the number of atlases used. Results: Nineteen brain tumour patients with both CT and T1-weighted MR images were used as testing data and a leave-one-out validation was performed. Each generated sCT was compared against the original CT image of the same patient on a voxel-by-voxel basis. The proposed method produces a mean absolute error (MAE) of 98.6±26.9 HU overall. The accuracy is comparable with a conventional implementation scheme, but the computation time is reduced from over an hour to four minutes. Conclusion: An average atlas space patch fusion approach can produce highly accurate sCT estimations very efficiently. Further validation on dose computation accuracy and using a larger patient cohort is warranted. The author is a full-time employee of Elekta, Inc.
An efficient Monte Carlo-based algorithm for scatter correction in keV cone-beam CT
NASA Astrophysics Data System (ADS)
Poludniowski, G.; Evans, P. M.; Hansen, V. N.; Webb, S.
2009-06-01
A new method is proposed for scatter-correction of cone-beam CT images. A coarse reconstruction is used in initial iteration steps. Modelling of the x-ray tube spectra and detector response is included in the algorithm. Photon diffusion inside the imaging subject is calculated using the Monte Carlo method. Photon scoring at the detector is calculated using forced detection to a fixed set of node points. The scatter profiles are then obtained by linear interpolation. The algorithm is referred to as the coarse reconstruction and fixed detection (CRFD) technique. Scatter predictions are quantitatively validated against a widely used general-purpose Monte Carlo code: BEAMnrc/EGSnrc (NRCC, Canada). Agreement is excellent. The CRFD algorithm was applied to projection data acquired with a Synergy XVI CBCT unit (Elekta Limited, Crawley, UK), using RANDO and Catphan phantoms (The Phantom Laboratory, Salem NY, USA). The algorithm was shown to be effective in removing scatter-induced artefacts from CBCT images, and took as little as 2 min on a desktop PC. Image uniformity was greatly improved, as was CT-number accuracy in reconstructions. This latter improvement was less marked where the expected CT-number of a material was very different to the background material in which it was embedded.
Deep learning methods to guide CT image reconstruction and reduce metal artifacts
NASA Astrophysics Data System (ADS)
Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Zhou, Ye; Zhang, Junping; Wang, Ge
2017-03-01
The rapidly-rising field of machine learning, including deep learning, has inspired applications across many disciplines. In medical imaging, deep learning has been primarily used for image processing and analysis. In this paper, we integrate a convolutional neural network (CNN) into the computed tomography (CT) image reconstruction process. Our first task is to monitor the quality of CT images during iterative reconstruction and decide when to stop the process according to an intelligent numerical observer instead of using a traditional stopping rule, such as a fixed error threshold or a maximum number of iterations. After training on ground truth images, the CNN was successful in guiding an iterative reconstruction process to yield high-quality images. Our second task is to improve a sinogram to correct for artifacts caused by metal objects. A large number of interpolation and normalization-based schemes were introduced for metal artifact reduction (MAR) over the past four decades. The NMAR algorithm is considered a state-of-the-art method, although residual errors often remain in the reconstructed images, especially in cases of multiple metal objects. Here we merge NMAR with deep learning in the projection domain to achieve additional correction in critical image regions. Our results indicate that deep learning can be a viable tool to address CT reconstruction challenges.
Wolthaus, J W H; Sonke, J J; van Herk, M; Damen, E M F
2008-09-01
Lower lobe lung tumors move with amplitudes of up to 2 cm due to respiration. To reduce respiration imaging artifacts in planning CT scans, 4D imaging techniques are used. Currently, we use a single (midventilation) frame of the 4D data set for clinical delineation of structures and radiotherapy planning. A single frame, however, often contains artifacts due to breathing irregularities, and is noisier than a conventional CT scan since the exposure per frame is lower. Moreover, the tumor may be displaced from the mean tumor position due to hysteresis. The aim of this work is to develop a framework for the acquisition of a good-quality scan representing all scanned anatomy in the mean position by averaging transformed (deformed) CT frames, i.e., canceling out motion. A nonrigid registration method is necessary since motion varies over the lung. 4D and inspiration breath-hold (BH) CT scans were acquired for 13 patients. An iterative multiscale motion estimation technique was applied to the 4D CT scan, similar to optical flow but using image phase (gray-value transitions from bright to dark and vice versa) instead. From the derived (4D) deformation vector field (DVF), the local mean position in the respiratory cycle was computed, and the 4D DVF was modified to deform all structures of the original 4D CT scan to this mean position. A 3D midposition (MidP) CT scan was then obtained by (arithmetic or median) averaging of the deformed 4D CT scan. Image registration accuracy, tumor shape deviation with respect to the BH CT scan, and noise were determined to evaluate the image fidelity of the MidP CT scan and the performance of the technique. The accuracy of the deformable image registration method used was comparable to established automated locally rigid registration and to manual landmark registration (average difference to both methods < 0.5 mm in all directions) for the tumor region.
From visual assessment, the registration was good for the clearly visible features (e.g., tumor and diaphragm). The shape of the tumor, with respect to that of the BH CT scan, was better represented by the MidP reconstructions than by any of the 4D CT frames (including MidV; "shape differences" were reduced by 66%). The MidP scans contained about one-third the noise of individual 4D CT scan frames. We implemented an accurate method to estimate the motion of structures in a 4D CT scan. Subsequently, a novel method to create a midposition CT scan (time-weighted average of the anatomy) for treatment planning with reduced noise and artifacts was introduced. Tumor shape and position in the MidP CT scan represent those of the BH CT scan better than the MidV CT scan does and, therefore, the MidP scan was found to be appropriate for treatment planning.
Rtop - an R package for interpolation along the stream network
NASA Astrophysics Data System (ADS)
Skøien, J. O.
2009-04-01
Geostatistical methods have been used only to a limited extent for estimation along stream networks, with a few exceptions (Gottschalk, 1993; Gottschalk, et al., 2006; Sauquet, et al., 2000; Skøien, et al., 2006). Interpolation of runoff characteristics is more complicated than for the traditional random variables estimated by geostatistical methods, as the measurements have a more complicated support and many catchments are nested. Skøien et al. (2006) presented the Top-kriging model, which takes these effects into account for interpolation of stream flow characteristics (exemplified by the 100-year flood). The method has here been implemented as a package in the statistical environment R (R Development Core Team, 2004). By taking advantage of the existing methods in R for working with spatial objects, and the extensive possibilities for visualizing the results, this makes it considerably easier to apply the method to new data sets than earlier implementations of the method. Gottschalk, L. 1993. Interpolation of runoff applying objective methods. Stochastic Hydrology and Hydraulics, 7, 269-281. Gottschalk, L., I. Krasovskaia, E. Leblois, and E. Sauquet. 2006. Mapping mean and variance of runoff in a river basin. Hydrology and Earth System Sciences, 10, 469-484. R Development Core Team. 2004. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. Sauquet, E., L. Gottschalk, and E. Leblois. 2000. Mapping average annual runoff: a hierarchical approach applying a stochastic interpolation scheme. Hydrological Sciences Journal, 45 (6), 799-815. Skøien, J. O., R. Merz, and G. Blöschl. 2006. Top-kriging - geostatistics on stream networks. Hydrology and Earth System Sciences, 10, 277-287.
Stream Kriging: Incremental and recursive ordinary Kriging over spatiotemporal data streams
NASA Astrophysics Data System (ADS)
Zhong, Xu; Kealy, Allison; Duckham, Matt
2016-05-01
Ordinary Kriging is widely used for geospatial interpolation and estimation. Due to the O(n³) time complexity of solving the system of linear equations, ordinary Kriging for a large set of source points is computationally intensive. Conducting real-time Kriging interpolation over continuously varying spatiotemporal data streams can therefore be especially challenging. This paper develops and tests two new strategies for improving the performance of an ordinary Kriging interpolator adapted to a stream-processing environment. These strategies rely on the expectation that, over time, source data points will frequently refer to the same spatial locations (for example, where static sensor nodes generate repeated observations of a dynamic field). First, an incremental strategy improves efficiency in cases where a relatively small proportion of previously processed spatial locations are absent from the source points at any given iteration. Second, a recursive strategy improves efficiency in cases where there is substantial overlap between the sets of spatial locations of source points at the current and previous iterations. These two strategies are evaluated in terms of their computational efficiency in comparison to the standard ordinary Kriging algorithm. The results show that the two strategies can reduce the time taken to perform the interpolation by up to 90%, and approach an average-case time complexity of O(n²) when most but not all source points refer to the same locations over time. By combining the approaches developed in this paper with existing heuristic ordinary Kriging algorithms, the conclusions indicate how further efficiency gains could potentially be accrued. The work ultimately contributes to the development of online ordinary Kriging interpolation algorithms, capable of real-time spatial interpolation with large streaming data sets.
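For reference, the baseline O(n³) step that both strategies try to avoid repeating is the solve of the ordinary Kriging system. A minimal sketch follows (the exponential covariance model and its parameters are assumptions, not the paper's choices):

```python
import numpy as np

def ordinary_kriging(coords, values, target, range_=1.0, sill=1.0):
    """Plain ordinary Kriging with an exponential covariance model.
    Solving the (n+1)x(n+1) linear system below is the O(n^3) step;
    the extra row/column enforces that the weights sum to one."""
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = sill * np.exp(-d / range_)     # source-source covariances
    A[n, n] = 0.0                              # Lagrange multiplier slot
    d0 = np.linalg.norm(coords - target, axis=-1)
    b = np.append(sill * np.exp(-d0 / range_), 1.0)
    w = np.linalg.solve(A, b)[:n]              # Kriging weights
    return float(w @ values)

coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
values = np.array([1.0, 2.0, 3.0, 4.0])
est = ordinary_kriging(coords, values, np.array([0.5, 0.5]))
```

When source locations repeat across stream iterations, most of the matrix A is unchanged, which is exactly the redundancy the incremental and recursive strategies exploit.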
2D to 3D fusion of echocardiography and cardiac CT for TAVR and TAVI image guidance.
Khalil, Azira; Faisal, Amir; Lai, Khin Wee; Ng, Siew Cheok; Liew, Yih Miin
2017-08-01
This study proposed a registration framework to fuse 2D echocardiography images of the aortic valve with a preoperative cardiac CT volume. The registration facilitates the fusion of CT and echocardiography to aid the diagnosis of aortic valve diseases and provide surgical guidance during transcatheter aortic valve replacement and implantation. The image registration framework consists of two major steps: temporal synchronization and spatial registration. Temporal synchronization allows time stamping of echocardiography time series data to identify frames that are at a similar cardiac phase to the CT volume. Spatial registration is an intensity-based normalized mutual information method applied with a pattern search optimization algorithm to produce an interpolated cardiac CT image that matches the echocardiography image. Our proposed registration method has been applied to the short-axis "Mercedes Benz" sign view of the aortic valve and the long-axis parasternal view of echocardiography images from ten patients. The accuracy of our fully automated registration method was 0.81 ± 0.08 in terms of Dice coefficient and 1.30 ± 0.13 mm in terms of Hausdorff distance for short-axis aortic valve view registration, whereas for long-axis parasternal view registration it was 0.79 ± 0.02 and 1.19 ± 0.11 mm, respectively. This accuracy is comparable to gold standard manual registration by an expert. There was no significant difference in aortic annulus diameter measurement between the automatically and manually registered CT images. Without the use of optical tracking, we have shown the applicability of this technique for effective fusion of echocardiography with a preoperative CT volume to potentially facilitate catheter-based surgery.
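The similarity metric named above, normalized mutual information, can be sketched from a joint intensity histogram (a generic formulation; the bin count and the NMI = (H(a)+H(b))/H(a,b) convention are assumptions, and the pattern-search optimizer is omitted):

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI(a, b) = (H(a) + H(b)) / H(a, b) from a joint intensity
    histogram: 1.0 for independent images, up to 2.0 for identical ones."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))
    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

rng = np.random.default_rng(2)
img = rng.normal(size=(64, 64))
nmi_same = normalized_mutual_information(img, img)
nmi_diff = normalized_mutual_information(img, rng.normal(size=(64, 64)))
```

In registration, an optimizer (here, pattern search in the study) adjusts the CT slice parameters to maximize this score against the echocardiography frame, since NMI rises as the intensity distributions become jointly predictable.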
On the interpolation of volumetric water content in research catchments
NASA Astrophysics Data System (ADS)
Dlamini, Phesheya; Chaplot, Vincent
Digital Soil Mapping (DSM) is widely used in the environmental sciences because of its accuracy and efficiency in producing soil maps compared to traditional soil mapping. Numerous studies have investigated how the sampling density and the interpolation process of data points affect the prediction quality. While the interpolation process is straightforward for primary attributes such as soil gravimetric water content (θg) and soil bulk density (ρb), the DSM of volumetric water content (θv), the product of θg and ρb, may either involve direct interpolation of θv (approach 1) or independent interpolation of ρb and θg data points and subsequent multiplication of the ρb and θg maps (approach 2). The main objective of this study was to compare the accuracy of these two mapping approaches for θv. A 23 ha grassland catchment in KwaZulu-Natal, South Africa was selected for this study. A total of 317 data points were randomly selected and sampled during the dry season in the topsoil (0-0.05 m) for θg and ρb estimation. Data points were interpolated following approaches 1 and 2, using inverse distance weighting with 3 or 12 neighboring points (IDW3; IDW12), regular spline with tension (RST) and ordinary kriging (OK). Based on an independent validation set of 70 data points, OK was the best interpolator for ρb (mean absolute error, MAE, of 0.081 g cm⁻³), while θg was best estimated using IDW12 (MAE = 1.697%) and θv by IDW3 (MAE = 1.814%). Approach 1 was found to underestimate θv. Approach 2 tended to overestimate θv, but reduced the prediction bias by an average of 37%, while only improving the prediction accuracy by 1.3% compared to approach 1.
Such a benefit of approach 2 (i.e., the subsequent multiplication of interpolated maps of primary variables) was unexpected, considering that a higher sampling density (∼14 data points ha⁻¹ in the present study) tends to minimize the differences between interpolation techniques and approaches. In the context of much lower sampling densities, as generally encountered in environmental studies, one can thus expect approach 2 to yield significantly greater accuracy than approach 1. Approach 2 seems promising and can be further tested for DSM of other secondary variables.
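The two mapping approaches can be sketched with a toy IDW interpolator. The coordinates, soil values, and the simple k-nearest-neighbor IDW below are illustrative assumptions; the study's actual data and GIS implementation are not reproduced.

```python
import numpy as np

def idw(xy, z, grid, k=3, power=2.0):
    """Inverse distance weighting with k nearest neighbours (IDW-k)."""
    out = np.empty(len(grid))
    for i, g in enumerate(grid):
        d = np.linalg.norm(xy - g, axis=1)
        idx = np.argsort(d)[:k]
        if d[idx[0]] < 1e-12:              # exact hit on a data point
            out[i] = z[idx[0]]
            continue
        w = 1.0 / d[idx] ** power
        out[i] = np.sum(w * z[idx]) / np.sum(w)
    return out

xy = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.2]], float)
rho_b = np.array([1.10, 1.25, 1.05, 1.30, 1.15])    # bulk density, g cm^-3
theta_g = np.array([18.0, 22.0, 15.0, 25.0, 20.0])  # gravimetric water, %
grid = np.array([[0.5, 0.5]])

tv1 = idw(xy, rho_b * theta_g, grid)                  # approach 1: interpolate θv directly
tv2 = idw(xy, rho_b, grid) * idw(xy, theta_g, grid)   # approach 2: multiply the maps
```

Because a weighted mean of products generally differs from the product of weighted means, the two approaches yield different maps wherever ρb and θg co-vary, which is the effect the study quantifies.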
Flow-covariate prediction of stream pesticide concentrations.
Mosquin, Paul L; Aldworth, Jeremy; Chen, Wenlin
2018-01-01
Potential peak functions (e.g., maximum rolling averages over a given duration) of annual pesticide concentrations in the aquatic environment are important exposure parameters (or target quantities) for ecological risk assessments. These target quantities require accurate concentration estimates on nonsampled days in a monitoring program. We examined stream flow as a covariate via universal kriging to improve predictions of maximum m-day (m = 1, 7, 14, 30, 60) rolling averages and the 95th percentiles of atrazine concentration in streams where data were collected every 7 or 14 d. The universal kriging predictions were evaluated against the target quantities calculated directly from the daily (or near daily) measured atrazine concentration at 32 sites (89 site-yr) as part of the Atrazine Ecological Monitoring Program in the US corn belt region (2008-2013) and 4 sites (62 site-yr) in Ohio by the National Center for Water Quality Research (1993-2008). Because stream flow data are strongly skewed to the right, 3 transformations of the flow covariate were considered: log transformation, short-term flow anomaly, and normalized Box-Cox transformation. The normalized Box-Cox transformation resulted in predictions of the target quantities that were comparable to those obtained from log-linear interpolation (i.e., linear interpolation on the log scale) for 7-d sampling. However, the predictions appeared to be negatively affected by variability in regression coefficient estimates across different sample realizations of the concentration time series. Therefore, revised models incorporating seasonal covariates and partially or fully constrained regression parameters were investigated, and they were found to provide much improved predictions in comparison with those from log-linear interpolation for all rolling average measures. Environ Toxicol Chem 2018;37:260-273. © 2017 SETAC.
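Once a daily concentration series is in hand, the target quantities themselves are simple. A minimal sketch of the maximum m-day rolling average, on illustrative values rather than monitoring data:

```python
import numpy as np

def max_rolling_average(daily, m):
    """Maximum m-day rolling average of a daily concentration series."""
    kernel = np.ones(m) / m
    # "valid" mode keeps only full m-day windows
    return float(np.convolve(daily, kernel, mode="valid").max())

conc = np.array([1.0, 3.0, 8.0, 6.0, 2.0, 1.0, 0.5])
peak_1day = max_rolling_average(conc, 1)   # 8.0, the single-day peak
peak_3day = max_rolling_average(conc, 3)   # (3 + 8 + 6) / 3
```

The paper's contribution is in estimating the daily series on nonsampled days (via universal kriging with a flow covariate) before such window statistics are taken.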
Analysis of rainfall distribution in Kelantan river basin, Malaysia
NASA Astrophysics Data System (ADS)
Che Ros, Faizah; Tosaka, Hiroyuki
2018-03-01
Using rain gauges alone as input carries great uncertainty in runoff estimation, especially when the area is large and rainfall is measured and recorded at irregularly spaced gauging stations. Spatial interpolation is therefore the key to obtaining a continuous and orderly rainfall distribution at unknown points as input to the rainfall-runoff processes in distributed and semi-distributed numerical modelling. It is crucial to study and predict the behaviour of rainfall and river runoff to reduce flood damage in the affected areas along the Kelantan river, and a good knowledge of rainfall distribution is essential in early flood prediction studies. Forty-six rainfall stations and their daily time series were used to interpolate gridded rainfall surfaces using inverse-distance weighting (IDW), inverse-distance and elevation weighting (IDEW) methods and average rainfall distribution. Sensitivity analyses for the distance and elevation parameters were conducted to assess the variation produced. The accuracy of the interpolated datasets was examined using cross-validation assessment.
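One plausible form of IDEW adds an elevation-difference penalty to the usual inverse-distance weight. The combination rule and exponents below are assumptions for illustration; the paper's exact formulation may differ.

```python
import numpy as np

def idew(xy, elev, rain, target_xy, target_elev, p=2.0, q=1.0):
    """Inverse-distance and elevation weighting (sketch): weights decay
    with horizontal distance (power p) and with elevation difference
    (power q)."""
    d = np.linalg.norm(xy - target_xy, axis=1)
    w = (1.0 / (d ** p + 1e-9)) / (1.0 + np.abs(elev - target_elev)) ** q
    return float(np.sum(w * rain) / np.sum(w))

# Two stations equidistant from the target but at different elevations
xy = np.array([[0.0, 1.0], [0.0, -1.0]])
elev = np.array([50.0, 800.0])          # m a.s.l.
rain = np.array([10.0, 30.0])           # mm/day
at_valley = idew(xy, elev, rain, np.array([0.0, 0.0]), target_elev=50.0)
```

With equal horizontal distances, the estimate is pulled strongly toward the station at the target's elevation, which is the orographic effect IDEW is meant to capture; setting q = 0 recovers plain IDW.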
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Q; School of Nuclear Science and Technology, Hefei, Anhui; Anhui Medical University, Hefei, Anhui
Purpose: The purpose of this work was to develop a registration framework, based on the software platform of ARTS-IGRT and implemented in C++ with ITK libraries, to register CT images and CBCT images. ARTS-IGRT is part of our self-developed accurate radiation planning system ARTS. Methods: Mutual information (MI) registration treats each voxel equally, yet different voxels, even those of the same intensity, should be treated differently in the registration procedure. A similarity measure was therefore proposed that combines the spatial importance of each voxel, calculated from its self-information, with MI (S-MI). For lung registration, first, a global alignment method was adopted to minimize the margin error and align the two images as a whole; the result obtained at the low resolution level was then interpolated to become the initial condition for the higher-resolution computation. Second, the new similarity measure S-MI was used to quantify how close the two input image volumes were to each other. Finally, the Demons model was applied to compute the deformable map. Results: The registration tools were tested on head-neck and lung images with an average region of 128x128x49 voxels. The rigid registration took approximately 2 min and converged 10% faster than the traditional MI algorithm, with accuracy reaching 1 mm for head-neck images. For lung images, the improved symmetric Demons registration completed in an average of 5 min on a 2.4 GHz dual-core CPU. Conclusion: A registration framework was developed to correct patient setup by registering the planning CT volume data with the daily reconstructed 3D CBCT data. The experiments showed that the spatial MI algorithm is suitable for head-neck images, while the improved Demons deformable registration is better suited to lung images; rigid alignment should be applied before deformable registration to obtain a more accurate result.
Supported by National Natural Science Foundation of China (NO.81101132) and Natural Science Foundation of Anhui Province (NO.11040606Q55)
Scatter measurement and correction method for cone-beam CT based on single grating scan
NASA Astrophysics Data System (ADS)
Huang, Kuidong; Shi, Wenlong; Wang, Xinyu; Dong, Yin; Chang, Taoqi; Zhang, Hua; Zhang, Dinghua
2017-06-01
In cone-beam computed tomography (CBCT) systems based on flat-panel detector imaging, the presence of scatter significantly reduces the quality of slices. Based on the concept of collimation, this paper presents a scatter measurement and correction method based on a single grating scan. First, according to the characteristics of CBCT imaging, the single-grating scan method and the design requirements of the grating are analyzed and determined. Second, by analyzing the composition of object projection images and object-and-grating projection images, a processing method for the scatter image at a single projection angle is proposed. In addition, to avoid an additional scan, an angle interpolation method for scatter images is proposed to reduce scan cost. Finally, the experimental results show that the scatter images obtained by this method are accurate and reliable, and the effect of scatter correction is obvious. When the additional object-and-grating projection images are collected and interpolated at intervals of 30 deg, the scatter correction error of slices can still be controlled within 3%.
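The angle interpolation step can be sketched as a pixel-wise linear blend between the scatter images measured at the two nearest sampled angles. This is a minimal sketch with synthetic constant images; the paper's interpolation scheme may be more elaborate.

```python
import numpy as np

def interpolate_scatter(angles, images, angle):
    """Pixel-wise linear interpolation of scatter images measured at sparse
    projection angles (e.g. 30 deg apart) to an intermediate angle."""
    i = np.clip(np.searchsorted(angles, angle), 1, len(angles) - 1)
    a0, a1 = angles[i - 1], angles[i]
    t = (angle - a0) / (a1 - a0)          # blend factor between neighbours
    return (1 - t) * images[i - 1] + t * images[i]

angles = np.array([0.0, 30.0, 60.0])                         # measured angles
imgs = np.stack([np.full((4, 4), v) for v in (10.0, 20.0, 40.0)])
mid = interpolate_scatter(angles, imgs, 15.0)   # halfway between 10 and 20
```

The interpolated scatter image would then be subtracted from the corresponding object projection before reconstruction.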
Context dependent anti-aliasing image reconstruction
NASA Technical Reports Server (NTRS)
Beaudet, Paul R.; Hunt, A.; Arlia, N.
1989-01-01
Image reconstruction has mostly been confined to context-free linear processes; the traditional continuum interpretation of digital array data uses a linear interpolator with or without an enhancement filter. Here, anti-aliasing context-dependent interpretation techniques are investigated for image reconstruction. Pattern classification is applied to each neighborhood to assign it a context class; a different interpolator/filter is applied to neighborhoods of differing context. It is shown how the context-dependent interpolation is computed through ensemble average statistics using high-resolution training imagery from which the lower-resolution image array data are obtained (simulation). A quadratic least squares (LS) context-free image quality model is described from which the context-dependent interpolation coefficients are derived. It is shown how ensembles of high-resolution images can be used to capture the a priori spatial character of different context classes. As a consequence, a priori information such as the translational invariance of edges along the edge direction, edge discontinuity, and the character of corners is captured and can be used to interpret image array data with greater spatial resolution than would be expected from the Nyquist limit. A Gibbs-like artifact associated with this super-resolution is discussed. More realistic context-dependent image quality models are needed, and a suggestion is made for using a quality model that is now finding application in data compression.
Increasing the speed of medical image processing in MatLab®
Bister, M; Yap, CS; Ng, KH; Tok, CH
2007-01-01
MatLab® has often been considered an excellent environment for fast algorithm development but is generally perceived as slow, and hence unfit for routine medical image processing, where large data sets are now common, e.g., high-resolution CT image sets with typically hundreds of 512x512 slices. Yet, with proper programming practices – vectorization, pre-allocation and specialization – applications in MatLab® can run as fast as in C. In this article, this point is illustrated with fast implementations of bilinear interpolation, watershed segmentation and volume rendering. PMID:21614269
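The same vectorization idea carries over to other array languages. Below is a loop-free bilinear interpolation sketch in NumPy, an analogue of the article's MatLab® example rather than its actual code: all points are sampled with whole-array operations instead of a per-point loop.

```python
import numpy as np

def bilinear(img, ys, xs):
    """Vectorized bilinear interpolation: sample `img` at fractional
    (row, column) coordinates with no Python-level loop over points."""
    y0 = np.clip(np.floor(ys).astype(int), 0, img.shape[0] - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, img.shape[1] - 2)
    dy = ys - y0                           # fractional offsets in the cell
    dx = xs - x0
    return ((1 - dy) * (1 - dx) * img[y0, x0]
            + (1 - dy) * dx * img[y0, x0 + 1]
            + dy * (1 - dx) * img[y0 + 1, x0]
            + dy * dx * img[y0 + 1, x0 + 1])

img = np.array([[0.0, 10.0], [20.0, 30.0]])
centre = bilinear(img, np.array([0.5]), np.array([0.5]))   # 15.0 at the cell centre
corner = bilinear(img, np.array([0.0]), np.array([1.0]))   # 10.0, an exact grid value
```

Because the indexing arrays `y0` and `x0` are fancy-indexed all at once, the per-slice cost stays in optimized array code, which is the point the article makes for MatLab®.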
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, M; Fan, T; Duan, J
2015-06-15
Purpose: To prospectively assess the potential utility of texture analysis for differentiation of central lung cancer from atelectasis. Methods: Consecutive central lung cancer patients who were referred for CT imaging and PET-CT were enrolled. A radiation oncologist delineated the tumor and atelectasis on the fused imaging based on the CT and PET-CT images. Texture parameters (such as energy, correlation, sum average, difference average and difference entropy) were obtained to quantitatively discriminate tumor from atelectasis based on the gray level co-occurrence matrix (GLCM). Results: The texture analysis showed that the parameters correlation and sum average differed with statistical significance (P<0.05). Conclusion: The results of this study indicate that texture analysis may be useful for the differentiation of central lung cancer and atelectasis.
The segmentation of bones in pelvic CT images based on extraction of key frames.
Yu, Hui; Wang, Haijun; Shi, Yao; Xu, Ke; Yu, Xuyao; Cao, Yuzhen
2018-05-22
Bone segmentation is important in computed tomography (CT) imaging of the pelvis, which assists physicians in the early diagnosis of pelvic injury, in planning operations, and in evaluating the effects of surgical treatment. This study developed a new algorithm for the accurate, fast, and efficient segmentation of the pelvis. The proposed method consists of two main parts: the extraction of key frames and the segmentation of pelvic CT images. Key frames were extracted based on pixel difference, mutual information and normalized correlation coefficient. In the pelvis segmentation phase, skeleton extraction from CT images and a marker-based watershed algorithm were combined to segment the pelvis. To meet the requirements of clinical application, a physician's judgment is needed; the proposed methodology is therefore semi-automated. In this paper, 5 sets of CT data were used to test the overlapping area, and 15 CT images were used to determine the average deviation distance. The average overlapping area of the 5 sets was greater than 94%, and the minimum average deviation distance was approximately 0.58 pixels. In addition, the key frame extraction efficiency and the running time of the proposed method were evaluated on 20 sets of CT data. For each set, approximately 13% of the images were selected as key frames, and the average processing time was approximately 2 min (the time for manual marking was not included). The proposed method is able to achieve accurate, fast, and efficient segmentation of pelvic CT image sequences. Segmentation results not only provide an important reference for early diagnosis and decisions regarding surgical procedures, they also offer more accurate data for medical image registration, recognition and 3D reconstruction.
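A minimal sketch of the key-frame idea using the normalized correlation coefficient (one of the three measures listed above): a slice becomes a new key frame when it no longer resembles the last key frame. The threshold and the selection rule are illustrative assumptions, not the paper's calibrated procedure.

```python
import numpy as np

def ncc(a, b):
    """Normalized correlation coefficient between two images."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

def key_frames(volume, threshold=0.95):
    """Start a new key frame whenever the NCC against the last key frame
    drops below `threshold`; the first slice is always a key frame."""
    keys = [0]
    for i in range(1, len(volume)):
        if ncc(volume[keys[-1]], volume[i]) < threshold:
            keys.append(i)
    return keys

rng = np.random.default_rng(1)
a = rng.random((32, 32))
b = rng.random((32, 32))                  # anatomically "different" slice
volume = np.stack([a, a + 0.01, b, b])    # two near-duplicates, then a change
keys = key_frames(volume)                 # [0, 2]
```

Selecting roughly 13% of slices this way is what lets the subsequent watershed segmentation run on a reduced set of representative frames.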
Eiber, Matthias; Martinez-Möller, Axel; Souvatzoglou, Michael; Holzapfel, Konstantin; Pickhard, Anja; Löffelbein, Dennys; Santi, Ivan; Rummeny, Ernst J; Ziegler, Sibylle; Schwaiger, Markus; Nekolla, Stephan G; Beer, Ambros J
2011-09-01
In this study, the potential contribution of Dixon-based MR imaging with a rapid low-resolution breath-hold sequence, which is a technique used for MR-based attenuation correction (AC) for MR/positron emission tomography (PET), was evaluated for anatomical correlation of PET-positive lesions on a 3T clinical scanner compared to low-dose CT. This technique is also used in a recently installed fully integrated whole-body MR/PET system. Thirty-five patients routinely scheduled for oncological staging underwent (18)F-fluorodeoxyglucose (FDG) PET/CT and a 2-point Dixon 3-D volumetric interpolated breath-hold examination (VIBE) T1-weighted MR sequence on the same day. Two PET data sets reconstructed using attenuation maps from low-dose CT (PET(AC_CT)) or simulated MR-based segmentation (PET(AC_MR)) were evaluated for focal PET-positive lesions. The certainty for the correlation with anatomical structures was judged in the low-dose CT and Dixon-based MRI on a 4-point scale (0-3). In addition, the standardized uptake values (SUVs) for PET(AC_CT) and PET(AC_MR) were compared. Statistically, no significant difference could be found concerning anatomical localization for all 81 PET-positive lesions in low-dose CT compared to Dixon-based MR (mean 2.51 ± 0.85 and 2.37 ± 0.87, respectively; p = 0.1909). CT tended to be superior for small lymph nodes, bone metastases and pulmonary nodules, while Dixon-based MR proved advantageous for soft tissue pathologies like head/neck tumours and liver metastases. For the PET(AC_CT)- and PET(AC_MR)-based SUVs (mean 6.36 ± 4.47 and 6.31 ± 4.52, respectively) a nearly complete concordance with a highly significant correlation was found (r = 0.9975, p < 0.0001). Dixon-based MR imaging for MR AC allows for anatomical allocation of PET-positive lesions similar to low-dose CT in conventional PET/CT. Thus, this approach appears to be useful for future MR/PET for body regions not fully covered by diagnostic MRI due to potential time constraints.
NASA Technical Reports Server (NTRS)
Wielicki, Bruce A. (Principal Investigator); Barkstrom, Bruce R. (Principal Investigator); Baum, Bryan A.; Charlock, Thomas P.; Green, Richard N.; Lee, Robert B., III; Minnis, Patrick; Smith, G. Louis; Coakley, J. A.; Randall, David R.
1995-01-01
The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and the Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 4 details the advanced CERES techniques for computing surface and atmospheric radiative fluxes (using the coincident CERES cloud property and top-of-the-atmosphere (TOA) flux products) and for averaging the cloud properties and TOA, atmospheric, and surface radiative fluxes over various temporal and spatial scales. CERES attempts to match the observed TOA fluxes with radiative transfer calculations that use as input the CERES cloud products and NOAA National Meteorological Center analyses of temperature and humidity. Slight adjustments in the cloud products are made to obtain agreement of the calculated and observed TOA fluxes. The computed products include shortwave and longwave fluxes from the surface to the TOA. The CERES instantaneous products are averaged on a 1.25-deg latitude-longitude grid, then interpolated to produce global, synoptic maps of TOA fluxes and cloud properties by using 3-hourly, normalized radiances from geostationary meteorological satellites. Surface and atmospheric fluxes are computed by using these interpolated quantities. Clear-sky and total fluxes and cloud properties are then averaged over various scales.
Silkwood, Justin D; Matthews, Kenneth L; Shikhaliev, Polad M
2013-05-01
Photon counting spectral (PCS) computed tomography (CT) shows promise for breast imaging. Issues with current photon-counting detectors include low count rate capability, artifacts resulting from nonuniform count rate across the field of view, and suboptimal spectral information. These issues are addressed in part by using tissue-equivalent adaptive filtration of the x-ray beam. The purpose of this study was to investigate the effect of adaptive filtration on different aspects of PCS breast CT. The theoretical formulation for the filter shape was derived for different filter materials and evaluated by simulation, and an experimental prototype of the filter was fabricated from a tissue-like material (acrylic). The PCS CT images of a glandular breast phantom with adipose and iodine contrast elements were simulated at 40, 60, 90, and 120 kVp tube voltages, with and without the adaptive filter. The CT numbers, CT noise, and contrast-to-noise ratio (CNR) were compared for spectral CT images acquired with and without adaptive filters. A similar comparison was made for material-decomposed PCS CT images. The adaptive filter improved the uniformity of CT numbers, CT noise, and CNR in both ordinary and material-decomposed PCS CT images. At the same tube output, the average CT noise with the adaptive filter, although uniform, was higher than the average noise without it, due to x-ray absorption by the filter. Increasing the tube output, so that the average skin exposure with the adaptive filter was the same as without it, made the noise with the adaptive filter comparable to or lower than that without it. Similar effects were observed when energy weighting was applied, and when material decompositions were performed using energy-selective CT data. An adaptive filter decreases the count rate requirements on photon counting detectors, which enables PCS breast CT based on commercially available detector technologies.
Adaptive filter also improves image quality in PCS breast CT by decreasing beam hardening artifacts and by eliminating spatial nonuniformities of CT numbers, noise, and CNR.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siversson, Carl, E-mail: carl.siversson@med.lu.se; Nordström, Fredrik; Department of Radiation Physics, Skåne University Hospital, Lund 214 28
2015-10-15
Purpose: In order to enable a magnetic resonance imaging (MRI) only workflow in radiotherapy treatment planning, methods are required for generating Hounsfield unit (HU) maps (i.e., synthetic computed tomography, sCT) for dose calculations directly from MRI. The Statistical Decomposition Algorithm (SDA) is a method for automatically generating sCT images from a single MR image volume, based on automatic tissue classification in combination with a model trained using a multimodal template material. This study compares dose calculations between sCT generated by the SDA and conventional CT in the male pelvic region. Methods: The study comprised ten prostate cancer patients, for whom a 3D T2-weighted MRI and a conventional planning CT were acquired. For each patient, sCT images were generated from the acquired MRI using the SDA. In order to decouple the effect of variations in patient geometry between imaging modalities from the effect of uncertainties in the SDA, the conventional CT was nonrigidly registered to the MRI to ensure that their geometries were well aligned. For each patient, a volumetric modulated arc therapy plan was created for the registered CT (rCT) and recalculated for both the sCT and the conventional CT. The results were evaluated using several methods, including mean absolute error (MAE), a set of dose-volume histogram parameters, and a restrictive gamma criterion (2% local dose/1 mm). Results: The MAE within the body contour was 36.5 ± 4.1 (1 s.d.) HU between sCT and rCT. The average mean absorbed dose difference to target was 0.0% ± 0.2% (1 s.d.) between sCT and rCT, whereas it was −0.3% ± 0.3% (1 s.d.) between CT and rCT. The average gamma pass rate was 99.9% for sCT vs rCT, whereas it was 90.3% for CT vs rCT. Conclusions: The SDA enables a highly accurate MRI only workflow in prostate radiotherapy planning.
The dosimetric uncertainties originating from the SDA appear negligible and are notably lower than the uncertainties introduced by variations in patient geometry between imaging sessions.
GMI-IPS: Python Processing Software for Aircraft Campaigns
NASA Technical Reports Server (NTRS)
Damon, M. R.; Strode, S. A.; Steenrod, S. D.; Prather, M. J.
2018-01-01
NASA's Atmospheric Tomography Mission (ATom) seeks to understand the impact of anthropogenic air pollution on gases in the Earth's atmosphere. Four flight campaigns are being deployed on a seasonal basis to establish a continuous global-scale data set intended to improve the representation of chemically reactive gases in global atmospheric chemistry models. The Global Modeling Initiative (GMI) is creating chemical transport simulations on a global scale for each of the ATom flight campaigns. To meet the computational demands required to translate the GMI simulation data to grids associated with the flights from the ATom campaigns, the GMI ICARTT Processing Software (GMI-IPS) has been developed and is providing key functionality for data processing and analysis in this ongoing effort. The GMI-IPS is written in Python and provides computational kernels for data interpolation and visualization tasks on GMI simulation data. A key feature of the GMI-IPS is its ability to read ICARTT files, a text-based file format for airborne instrument data, and extract the required flight information that defines regional and temporal grid parameters associated with an ATom flight. Perhaps most importantly, the GMI-IPS creates ICARTT files containing GMI simulated data, which are used in collaboration with ATom instrument teams and other modeling groups. The initial main task of the GMI-IPS is to interpolate GMI model data to the finer temporal resolution (1-10 seconds) of a given flight. The model data includes basic fields such as temperature and pressure, but the main focus of this effort is to provide species concentrations of chemical gases for ATom flights. The software, which uses parallel computation techniques for data-intensive tasks, linearly interpolates each of the model fields to the time resolution of the flight. The temporally interpolated data is then saved to disk, and is used to create additional derived quantities.
In order to translate the GMI model data to the spatial grid of the flight path as defined by the pressure, latitude, and longitude points at each flight time record, a weighted average is then calculated from the nearest neighbors in two dimensions (latitude, longitude). Using SciPy's RegularGridInterpolator, interpolation functions are generated for the GMI model grid and the calculated weighted averages. The flight path points are then extracted from the ATom ICARTT instrument file, and are sent to the multi-dimensional interpolating functions to generate GMI field quantities along the spatial path of the flight. The interpolated field quantities are then written to an ICARTT data file, which is stored for further manipulation. The GMI-IPS is aware of a generic ATom ICARTT header format, containing basic information for all flight campaigns. The GMI-IPS includes logic to edit metadata for the derived field quantities, as well as modify the generic header data such as processing dates and associated instrument files. The ICARTT interpolated data is then appended to the modified header data, and the ICARTT processing is complete for the given flight and ready for collaboration. The output ICARTT data adheres to the ICARTT file format standards V1.1. The visualization component of the GMI-IPS uses Matplotlib extensively and has several functions ranging in complexity. First, it creates a model background curtain for the flight (time versus model eta levels) with the interpolated flight data superimposed on the curtain. Secondly, it creates a time-series plot of the interpolated flight data. Lastly, the visualization component creates averaged 2D model slices (longitude versus latitude) with overlaid flight track circles at key pressure levels. The GMI-IPS consists of a handful of classes and supporting functionality that have been generalized to be compatible with any ICARTT file that adheres to the base class definition.
The base class represents a generic ICARTT entry, only defining a single time entry and 3D spatial positioning parameters. Other classes inherit from this base class; several classes for input ICARTT instrument files, which contain the necessary flight positioning information as a basis for data processing, as well as other classes for output ICARTT files, which contain the interpolated model data. Utility classes provide functionality for routine procedures such as: comparing field names among ICARTT files, reading ICARTT entries from a data file and storing them in data structures, and returning a reduced spatial grid based on a collection of ICARTT entries. Although the GMI-IPS is compatible with GMI model data, it can be adapted with reasonable effort for any simulation that creates Hierarchical Data Format (HDF) files. The same can be said of its adaptability to ICARTT files outside of the context of the ATom mission. The GMI-IPS contains just under 30,000 lines of code, eight classes, and a dozen drivers and utility programs. It is maintained with GIT source code management and has been used to deliver processed GMI model data for the ATom campaigns that have taken place to date.
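The spatial sampling step described above can be sketched with SciPy's RegularGridInterpolator on a toy model grid. The grid shape, the synthetic "temperature" field, and the flight path below are illustrative assumptions, not GMI data.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical model grid: pressure level x latitude x longitude
pres = np.array([500.0, 850.0, 1000.0])          # hPa, ascending for SciPy
lat = np.linspace(-90.0, 90.0, 19)
lon = np.linspace(-180.0, 180.0, 37)
P, La, _ = np.meshgrid(pres, lat, lon, indexing="ij")
field = 200.0 + 0.05 * P + 0.1 * La              # toy temperature field, K

interp = RegularGridInterpolator((pres, lat, lon), field)

# One (pressure, latitude, longitude) triple per flight time record
path = np.array([[900.0, 10.0, 20.0],
                 [700.0, 12.0, 25.0]])
samples = interp(path)   # model field sampled along the flight track
```

In the GMI-IPS workflow, the sampled values would then be written out as an ICARTT data file alongside the flight's time records.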
The Control Based on Internal Average Kinetic Energy in Complex Environment for Multi-robot System
NASA Astrophysics Data System (ADS)
Yang, Mao; Tian, Yantao; Yin, Xianghua
In this paper, a reference trajectory is designed to minimize the energy consumed by a multi-robot system, using nonlinear programming and cubic spline interpolation. The control strategy is composed of two levels: the lower level is a simple PD control, and the upper level is based on the internal average kinetic energy of the multi-robot system in a complex environment with velocity damping. Simulation tests verify the effectiveness of this control strategy.
Kole, J S; Beekman, F J
2006-02-21
Statistical reconstruction methods offer possibilities to improve image quality as compared with analytical methods, but current reconstruction times prohibit routine application in clinical and micro-CT. In particular, for cone-beam x-ray CT, the use of graphics hardware has been proposed to accelerate the forward and back-projection operations, in order to reduce reconstruction times. In the past, wide application of this texture hardware mapping approach was hampered owing to limited intrinsic accuracy. Recently, however, floating point precision has become available in the latest generation commodity graphics cards. In this paper, we utilize this feature to construct a graphics hardware accelerated version of the ordered subset convex reconstruction algorithm. The aims of this paper are (i) to study the impact of using graphics hardware acceleration for statistical reconstruction on the reconstructed image accuracy and (ii) to measure the speed increase one can obtain by using graphics hardware acceleration. We compare the unaccelerated algorithm with the graphics hardware accelerated version, and for the latter we consider two different interpolation techniques. A simulation study of a micro-CT scanner with a mathematical phantom shows that at almost preserved reconstructed image accuracy, speed-ups of a factor 40 to 222 can be achieved, compared with the unaccelerated algorithm, and depending on the phantom and detector sizes. Reconstruction from physical phantom data reconfirms the usability of the accelerated algorithm for practical cases.
Field Monitoring Shows Smaller Sediment Deficit to the Louisiana Coast
NASA Astrophysics Data System (ADS)
Sanks, K. M.; Shaw, J.
2017-12-01
Current reports suggest that the Louisiana Coast will undergo significant drowning due to high subsidence rates and low sediment supply. One report suggests that sediment supply is just 30% of the amount necessary to sustain the current land area (Blum & Roberts, 2009). A novel dataset (CRMS) put together by the USGS and Louisiana's Coastal Protection and Restoration Authority provides direct measurements of sediment accumulation, subsidence rates, and sediment characteristics along the Louisiana Coast over the past 10 years (Jankowski et al., 2017). By interpolating bulk density, percent organic matter, and vertical accretion rates across the coast (274 sites), a more accurate estimate of sediment accumulation, both organic and inorganic, can be determined. Preliminary interpolation shows that an average of 53 MT organic and 132 MT inorganic sediment accumulates on coastal marshes each year. Assuming an average 9 mm/yr subsidence rate (Nienhuis et al., 2017) and 3 mm/yr sea-level rise (Blum & Roberts, 2009), this accumulation results in only a 12 MT/yr, or 6.5%, sediment deficit. Assuming a fluvial sediment discharge of 205 MT/yr, 64% of sediment is being trapped on the delta top. Although the sediment load estimates (MT/yr) may be slightly liberal due to interpolation over water, the fraction sediment deficit is unlikely to significantly change. These results suggest that even if current subsidence rates and sea level rise do not change, the gap between accommodation and accumulation may not be as dire as previously thought.
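The budget quoted above can be checked with back-of-envelope arithmetic. The values are those reported in the abstract; the deficit fraction is taken relative to the interpolated accumulation.

```python
# All values in MT/yr, as quoted in the abstract above
organic, inorganic = 53.0, 132.0
accumulation = organic + inorganic      # 185 MT/yr retained on coastal marshes
deficit = 12.0                          # accommodation minus accumulation
fluvial = 205.0                         # fluvial sediment discharge

deficit_frac = deficit / accumulation   # ~6.5% sediment deficit
trapped_frac = inorganic / fluvial      # ~64% of the fluvial load trapped
```

Both ratios reproduce the figures in the abstract, which is the sense in which the field-monitoring deficit is far smaller than the 70% shortfall implied by Blum & Roberts (2009).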
Interpolation on the manifold of K component GMMs.
Kim, Hyunwoo J; Adluru, Nagesh; Banerjee, Monami; Vemuri, Baba C; Singh, Vikas
2015-12-01
Probability density functions (PDFs) are fundamental objects in mathematics with numerous applications in computer vision, machine learning and medical imaging. The feasibility of basic operations such as computing the distance between two PDFs and estimating a mean of a set of PDFs is a direct function of the representation we choose to work with. In this paper, we study the Gaussian mixture model (GMM) representation of the PDFs motivated by its numerous attractive features: (1) GMMs are arguably more interpretable than, say, square-root parameterizations; (2) the model complexity can be explicitly controlled by the number of components; and (3) they are already widely used in many applications. The main contributions of this paper are numerical algorithms to enable basic operations on such objects that strictly respect their underlying geometry. For instance, when operating with a set of K component GMMs, a first order expectation is that the result of simple operations like interpolation and averaging should provide an object that is also a K component GMM. The literature provides very little guidance on enforcing such requirements systematically. It turns out that these tasks are important internal modules for analysis and processing of a field of ensemble average propagators (EAPs), common in diffusion weighted magnetic resonance imaging. We provide proof of principle experiments showing how the proposed algorithms for interpolation can facilitate statistical analysis of such data, essential to many neuroimaging studies. Separately, we also derive interesting connections of our algorithm with functional spaces of Gaussians, that may be of independent interest.
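As a baseline for why such algorithms are needed, a naive Euclidean interpolation of two K-component GMMs (blending weights, means and covariances component-wise) does return a K-component GMM, but only under the strong assumption that components are already matched in order, and it ignores the geometry the paper's method respects; a sketch:

```python
import numpy as np

def interpolate_gmms(w1, mu1, cov1, w2, mu2, cov2, t):
    """Component-wise linear interpolation between two K-component GMMs.
    Assumes component k of the first GMM corresponds to component k of
    the second (a strong assumption that the paper's method avoids)."""
    w = (1.0 - t) * w1 + t * w2
    mu = (1.0 - t) * mu1 + t * mu2
    cov = (1.0 - t) * cov1 + t * cov2   # convex blend of PSD matrices stays PSD
    return w / w.sum(), mu, cov

# Two 1D, 2-component GMMs (weights, means, covariances)
w1, w2 = np.array([0.3, 0.7]), np.array([0.6, 0.4])
mu1, mu2 = np.array([[0.0], [2.0]]), np.array([[1.0], [3.0]])
cov1 = cov2 = np.array([[[1.0]], [[1.0]]])
w, mu, cov = interpolate_gmms(w1, mu1, cov1, w2, mu2, cov2, 0.5)
```

The paper's contribution is precisely to replace this Euclidean blend with operations that respect the manifold structure while still returning a K-component GMM.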
Bukaveckas, P.A.; Likens, G.E.; Winter, T.C.; Buso, D.C.
1998-01-01
Calculation of chemical flux rates for streams requires integration of continuous measurements of discharge with discrete measurements of solute concentrations. We compared two commonly used methods for interpolating chemistry data (time-averaging and flow-weighting) to determine whether discrepancies between the two methods were large relative to other sources of error in estimating flux rates. Flux rates of dissolved Si and SO4^2- were calculated from 10 years of data (1981-1990) for the NW inlet and Outlet of Mirror Lake and for a 40-day period (March 22 to April 30, 1993) during which we augmented our routine (weekly) chemical monitoring with collection of daily samples. The time-averaging method yielded higher estimates of solute flux during high-flow periods if no chemistry samples were collected corresponding to peak discharge. Concentration-discharge relationships should be used to interpolate stream chemistry during changing flow conditions if chemical changes are large. Caution should be used in choosing the appropriate time-scale over which data are pooled to derive the concentration-discharge regressions because the model parameters (slope and intercept) were found to be sensitive to seasonal and inter-annual variation. Both methods approximated solute flux to within 2-10% for a range of solutes that were monitored during the intensive sampling period. Our results suggest that errors arising from interpolation of stream chemistry data are small compared with other sources of error in developing watershed mass balances.
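The two interpolation methods being compared can be sketched as follows, with hypothetical discharge and concentration values chosen so that a dilution event at peak flow goes unsampled (illustrative units only):

```python
def time_averaged_flux(discharge, sample_conc):
    """Mean of the discrete concentration samples applied to total discharge."""
    c_mean = sum(sample_conc) / len(sample_conc)
    return c_mean * sum(discharge)

def flow_weighted_flux(discharge, conc):
    """Concentration paired with discharge on every time step."""
    return sum(q * c for q, c in zip(discharge, conc))

# A storm peak on day 3 that sparse sampling misses: concentration dilutes at high flow
discharge = [1.0, 1.0, 10.0, 1.0, 1.0]   # e.g. m^3/day
conc      = [5.0, 5.0, 2.0, 5.0, 5.0]    # e.g. g/m^3, true daily values
samples   = [5.0, 5.0]                   # only low-flow days were sampled

flux_ta = time_averaged_flux(discharge, samples)  # 5 * 14 = 70 g
flux_fw = flow_weighted_flux(discharge, conc)     # 4*5 + 10*2 = 40 g
```

Here the time-averaged estimate (70) exceeds the flow-weighted one (40) because the diluted peak-flow concentration was never sampled, matching the high-flow bias described in the abstract.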
A comparison of three methods to assess body composition.
Tewari, Nilanjana; Awad, Sherif; Macdonald, Ian A; Lobo, Dileep N
2018-03-01
The aim of this study was to compare the accuracy of measurements of body composition made using dual x-ray absorptiometry (DXA), analysis of computed tomography (CT) scans at the L3 vertebral level, and bioelectrical impedance analysis (BIA). DXA, CT, and BIA were performed in 47 patients recruited from two clinical trials investigating metabolic changes associated with major abdominal surgery or neoadjuvant chemotherapy for esophagogastric cancer. DXA was performed the week before surgery and before and after commencement of neoadjuvant chemotherapy. BIA was performed at the same time points and used with standard equations to calculate fat-free mass (FFM). Analysis of CT scans performed within 3 mo of the study was used to estimate FFM and fat mass (FM). There was good correlation between FM on DXA and CT (r^2 = 0.6632; P < 0.0001) and FFM on DXA and CT (r^2 = 0.7634; P < 0.0001), as well as FFM on DXA and BIA (r^2 = 0.6275; P < 0.0001). Correlation between FFM on CT and BIA also was significant (r^2 = 0.2742; P < 0.0001). On Bland-Altman analysis, average bias for FM on DXA and CT was 0.2564 with 95% limits of agreement (LOA) of -9.451 to 9.964. For FFM on DXA and CT, average bias was -0.1477, with LOA of -8.621 to 8.325. For FFM on DXA and BIA, average bias was -3.792, with LOA of -15.52 to 7.936. For FFM on CT and BIA, average bias was -2.661, with LOA of -22.71 to 17.39. Although a systematic error underestimating FFM was demonstrated with BIA, it may be a useful modality to quantify body composition in the clinical situation. Copyright © 2017 Elsevier Inc. All rights reserved.
Carvajal, Guido; Roser, David J; Sisson, Scott A; Keegan, Alexandra; Khan, Stuart J
2017-02-01
Chlorine disinfection of biologically treated wastewater is practiced in many locations prior to environmental discharge or beneficial reuse. The effectiveness of chlorine disinfection processes may be influenced by several factors, such as pH, temperature, ionic strength, organic carbon concentration, and suspended solids. We investigated the use of Bayesian multilayer perceptron (BMLP) models as efficient and practical tools for compiling and analysing free chlorine and monochloramine virus disinfection performance as a multivariate problem. Corresponding to their relative susceptibility, Adenovirus 2 was used to assess disinfection by monochloramine and Coxsackievirus B5 was used for free chlorine. A BMLP model was constructed to relate key disinfection conditions (CT, pH, turbidity) to observed Log Reduction Values (LRVs) for these viruses at constant temperature. The models proved valuable for incorporating uncertainty into the estimation of chlor(am)ination performance and for interpolating between operating conditions. Various types of queries could be performed with this model, including identification of the target CT for a particular combination of LRV, pH and turbidity. Similarly, it was possible to derive achievable LRVs for combinations of CT, pH and turbidity. These queries yielded probability density functions for the target variable, reflecting the uncertainty in the model parameters and the variability of the input variables. Disinfection efficacy was greatly affected by pH, and to a lesser extent by turbidity, for both disinfectants. Non-linear relationships were observed between pH and target CT, and between turbidity and target CT, with compound effects on target CT also evident. This work demonstrated that BMLP models have considerable ability to improve the resolution and understanding of the multivariate relationships between operational parameters and disinfection outcomes for wastewater treatment. Copyright © 2016 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balasubramoniam, A; Bednarek, D; Rudin, S
Purpose: To create 4D parametric images using biplane Digital Subtraction Angiography (DSA) sequences co-registered with the 3D vascular geometry obtained from Cone Beam-CT (CBCT). Methods: We investigated a method to derive multiple 4D Parametric Imaging (PI) maps using only one CBCT acquisition. During this procedure a 3D-DSA geometry is stored and used subsequently for all 4D images. Each time a biplane DSA is acquired, we calculate 2D parametric maps of Bolus Arrival Time (BAT), Mean Transit Time (MTT) and Time to Peak (TTP). Arterial segments which are nearly parallel with one of the biplane imaging planes in the 2D parametric maps are co-registered with the 3D geometry. Because the points chosen for co-registration on the vasculature are discrete, the values in the remaining vascular network are found using spline interpolation. To evaluate the method we used a patient CT volume data set for 3D printing a neurovascular phantom containing a complete Circle of Willis. We connected the phantom to a flow loop with a peristaltic pump, simulating physiological flow conditions. Contrast media was injected with an automatic injector at 10 ml/sec. Images were acquired with a Toshiba Infinix C-arm and 4D parametric image maps of the vasculature were calculated. Results: 4D BAT, MTT, and TTP parametric image maps of the Circle of Willis were derived. We generated color-coded 3D geometries which avoided artifacts due to vessel overlap or foreshortening in the projection direction. Conclusion: The software was tested successfully and multiple 4D parametric images were obtained from biplane DSA sequences without the need to acquire additional 3D-DSA runs. This can benefit the patient by reducing the contrast media and the radiation dose normally associated with these procedures. Partial support from NIH Grant R01-EB002873 and Toshiba Medical Systems Corp.
Patient dose estimation from CT scans at the Mexican National Neurology and Neurosurgery Institute
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alva-Sánchez, Héctor, E-mail: halva@ciencias.unam.mx; Reynoso-Mejía, Alberto; Casares-Cruz, Katiuzka
In the radiology department of the Mexican National Institute of Neurology and Neurosurgery, a dedicated institute in Mexico City, on average 19.3 computed tomography (CT) examinations are performed daily on hospitalized patients for neurological disease diagnosis, control scans and follow-up imaging. The purpose of this work was to estimate the effective dose received by hospitalized patients who underwent a diagnostic CT scan using typical effective dose values for all CT types and to obtain the estimated effective dose distributions received by surgical and non-surgical patients. Effective patient doses were estimated from values per study type reported in the applications guide provided by the scanner manufacturer. This retrospective study included all hospitalized patients who underwent a diagnostic CT scan between 1 January 2011 and 31 December 2012. A total of 8777 CT scans were performed in this two-year period. Simple brain scan was the CT type performed the most (74.3%) followed by contrasted brain scan (6.1%) and head angiotomography (5.7%). The average number of CT scans per patient was 2.83; the average effective dose per patient was 7.9 mSv; the mean estimated radiation dose was significantly higher for surgical (9.1 mSv) than non-surgical patients (6.0 mSv). Three percent of the patients had 10 or more brain CT scans and exceeded the organ radiation dose threshold set by the International Commission on Radiological Protection for deterministic effects of the eye-lens. Although radiation patient doses from CT scans were in general relatively low, 187 patients received a high effective dose (>20 mSv) and 3% might develop cataract from cumulative doses to the eye lens.
Kawahara, Daisuke; Ozawa, Shuichi; Yokomachi, Kazushi; Tanaka, Sodai; Higaki, Toru; Fujioka, Chikako; Suzuki, Tatsuhiko; Tsuneda, Masato; Nakashima, Takeo; Ohno, Yoshimi; Nagata, Yasushi
2018-02-01
To evaluate the accuracy of raw-data-based effective atomic number (Zeff) values and monochromatic CT numbers for contrast material of varying iodine concentrations, obtained using dual-energy CT. We used a tissue characterization phantom and varying concentrations of iodinated contrast medium. A comparison between the theoretical values of Zeff and that provided by the manufacturer was performed. The measured and theoretical monochromatic CT numbers at 40-130 keV were compared. The average difference between the Zeff values of lung (inhale) inserts in the tissue characterization phantom was 81.3% and the average Zeff difference was within 8.4%. The average difference between the Zeff values of the varying concentrations of iodinated contrast medium was within 11.2%. For the varying concentrations of iodinated contrast medium, the differences between the measured and theoretical monochromatic CT values increased with decreasing monochromatic energy. The Zeff and monochromatic CT numbers in the tissue characterization phantom were reasonably accurate. The accuracy of the raw-data-based Zeff values was higher than that of image-based Zeff values in the tissue-equivalent phantom. The accuracy of Zeff values in the contrast medium was in good agreement within the maximum SD found in the iodine concentration range of clinical dynamic CT imaging. Moreover, the optimum monochromatic energy for human tissue and iodinated contrast medium was found to be 70 keV. Advances in knowledge: The accuracy of the Zeff values and monochromatic CT numbers of the contrast medium created by raw-data-based, dual-energy CT could be sufficient in clinical conditions.
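For context, a common theoretical definition of the effective atomic number is the power-law form Zeff = (Σ fi Zi^m)^(1/m) over electron fractions fi; a sketch for water with the often-quoted exponent m = 2.94 (the exponent, and whether it matches the scanner's raw-data-based definition, are assumptions here):

```python
def z_eff(electron_fractions, z_values, m=2.94):
    """Power-law effective atomic number from electron fractions."""
    return sum(f * z**m for f, z in zip(electron_fractions, z_values)) ** (1.0 / m)

# Water: 10 electrons per molecule, 2 from H (Z=1) and 8 from O (Z=8)
zeff_water = z_eff([0.2, 0.8], [1, 8])   # ~7.4
```

The familiar value of about 7.4 for water falls out, which is a convenient sanity check for any Zeff implementation before applying it to iodinated contrast mixtures.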
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, G; Tyagi, N; Deasy, J
2015-06-15
Purpose: Cine 2DMRI is useful in MR-guided radiotherapy but it lacks volumetric information. We explore the feasibility of estimating time-resolved (TR) 4DMRI based on cine 2DMRI and respiratory-correlated (RC) 4DMRI through simulation. Methods: We hypothesize that a volumetric image during free breathing can be approximated by interpolation among 3DMRI image sets generated from a RC-4DMRI. Two patients' RC-4DMRI with 4 or 5 phases were used to generate additional 3DMRI by interpolation. For each patient, six libraries were created, containing a total of 5 to 35 3DMRI image sets, by 0–6 equi-spaced tri-linear interpolations between adjacent phases and between the full-inhalation and full-exhalation phases. Sagittal cine 2DMRI were generated from reference 3DMRIs created from separate, unique interpolations from the original RC-4DMRI. To test whether accurate 3DMRI could be generated through rigid registration of the cine 2DMRI to the 3DMRI libraries, each sagittal 2DMRI was registered to sagittal cuts at the same location in the 3DMRI within each library to identify the two best matches: one with greater lung volume and one with smaller. A final interpolation between the corresponding 3DMRIs was then performed to produce the first-order-approximation (FOA) 3DMRI. The quality and performance of the FOA as a function of library size were assessed using both the difference in lung volume and the average voxel intensity difference between the FOA and the reference 3DMRI. Results: The discrepancy between the FOA and reference 3DMRI decreases as the library size increases. The 3D lung volume difference decreases from 5–15% to 1–2% as the library size increases from 5 to 35 image sets. The average difference in lung voxel intensity decreases from 7–8 to 5–6 on a lung intensity scale of 0–135. Conclusion: This study indicates that the quality of the FOA 3DMRI improves with increasing 3DMRI library size. On-going investigations will test this approach using actual cine 2DMRI and introduce a higher-order approximation for further improvement. This study is in part supported by NIH (U54CA137788 and U54CA132378).
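The final interpolation step can be sketched as a lung-volume-weighted linear blend of the two best-matching library volumes (a first-order approximation; using lung volume as the blending coordinate follows the abstract, while the arrays and volume values below are illustrative):

```python
import numpy as np

def first_order_approx(vol_lo, lv_lo, vol_hi, lv_hi, lv_target):
    """Blend two 3DMRI volumes linearly, weighted by lung volume,
    to approximate the volume at an intermediate breathing state."""
    t = (lv_target - lv_lo) / (lv_hi - lv_lo)
    return (1.0 - t) * vol_lo + t * vol_hi

vol_exhale = np.zeros((4, 4, 4))   # toy match with smaller lung volume
vol_inhale = np.ones((4, 4, 4))    # toy match with greater lung volume
# target lung volume 3.5 L between matches at 3.0 L and 5.0 L
foa = first_order_approx(vol_exhale, 3.0, vol_inhale, 5.0, 3.5)
```

A larger library narrows the gap between the two bracketing matches, which is consistent with the reported improvement of the FOA as library size grows.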
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, C; Yin, Y
2015-06-15
Purpose: A method using four-dimensional (4D) PET/CT in the design of radiation treatment planning was proposed, and the target volume and radiation dose distribution changes relative to standard three-dimensional (3D) PET/CT were examined. Methods: A target deformable registration method was used by which the patient's whole respiration process was considered and the effect of respiration motion was minimized when designing the radiotherapy plan. The gross tumor volume of a non-small-cell lung cancer was contoured on the 4D FDG-PET/CT and 3D PET/CT scans by use of two different techniques: manual contouring by an experienced radiation oncologist using a predetermined protocol, and an automatic technique using a constant threshold of standardized uptake value (SUV) greater than 2.5. The target volume and radiotherapy dose distribution between VOL3D and VOL4D were analyzed. Results: Over all phases, the average automatically and manually contoured GTV volumes were 18.61 cm3 (range, 16.39–22.03 cm3) and 31.29 cm3 (range, 30.11–35.55 cm3), respectively. The automatically and manually contoured merged IGTV volumes were 27.82 cm3 and 49.37 cm3, respectively. For the manual contours, compared to the 3D plan, the mean doses for the left, right, and total lung in the 4D plan decreased by an average of 21.55%, 15.17% and 15.86%, respectively. The maximum dose to the spinal cord decreased by an average of 2.35%. For the automatic contours, the mean doses for the left, right, and total lung decreased by an average of 23.48%, 16.84% and 17.44%, respectively. The maximum dose to the spinal cord decreased by an average of 1.68%. Conclusion: In comparison to 3D PET/CT, 4D PET/CT may better define the extent of moving tumors and reduce the contoured tumor volume, thereby optimizing radiation treatment planning for lung tumors.
NASA Astrophysics Data System (ADS)
Parris, B. A.; Egbert, G. D.; Key, K.; Livelybrooks, D.
2016-12-01
Magnetotellurics (MT) is an electromagnetic technique used to model the Earth's interior electrical conductivity structure. MT data can be analyzed using iterative, linearized inversion techniques to generate models imaging, in particular, conductive partial melts and aqueous fluids that play critical roles in subduction zone processes and volcanism. For example, the Magnetotelluric Observations of Cascadia using a Huge Array (MOCHA) experiment provides amphibious data useful for imaging subducted fluids from trench to mantle wedge corner. When using MOD3DEM (Egbert et al. 2012), a finite difference inversion package, we have encountered problems inverting seafloor stations in particular, due to the strong conductivity gradients nearby. As a work-around, we have found that denser, finer model grids near the land-sea interface produce better inversions, as characterized by reduced data residuals. This is thought to be partly due to the improved ability to capture topography and bathymetry. We are experimenting with improved interpolation schemes that more accurately track EM fields across cell boundaries, with an eye to enhancing the accuracy of the simulated responses and, thus, inversion results. We are adapting how MOD3DEM interpolates EM fields in two ways. The first seeks to improve the weighting functions for interpolants to better address current continuity across grid boundaries. Electric fields are interpolated using a tri-linear spline technique, where the eight nearest electric field estimates are each given weights determined by the technique, a kind of weighted average. We are modifying these weights to include cross-boundary conductivity ratios to better model current continuity.
We are also adapting some of the techniques discussed in Shantsev et al. (2014) to enhance the accuracy of the interpolated fields calculated by our forward solver, as well as to better approximate the sensitivities passed to the software's Jacobian that are used to generate a new forward model during each iteration of the inversion.
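The motivation for conductivity-aware weights can be seen in a 1D sketch: across a conductivity jump the normal current density J = σE is continuous while E is not, so interpolating J and converting back behaves far better than interpolating E directly (the linear blend of σ is an illustrative assumption, not the actual MOD3DEM scheme):

```python
def interp_E_current_continuous(e1, sigma1, e2, sigma2, t):
    """Interpolate the electric field across a conductivity boundary by
    interpolating the (continuous) current density J = sigma * E."""
    j = (1.0 - t) * sigma1 * e1 + t * sigma2 * e2
    sigma = (1.0 - t) * sigma1 + t * sigma2   # assumed linear conductivity blend
    return j / sigma

# Seawater (~3.3 S/m) against a resistive seafloor (~0.01 S/m):
# fields chosen so the normal current density matches on both sides
e_sea, e_floor = 1.0, 330.0
e_mid = interp_E_current_continuous(e_sea, 3.3, e_floor, 0.01, 0.5)
```

Averaging E directly at the midpoint would give about 165, two orders of magnitude away from the current-consistent value near 2, which is the kind of error the conductivity-ratio weighting is meant to suppress.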
NASA Technical Reports Server (NTRS)
Hovis, Jeffrey S.; Brundidge, Kenneth C.
1987-01-01
A method of interpolating atmospheric soundings while reducing the errors associated with simple time interpolation was developed. The purpose of this was to provide a means to determine atmospheric stability at times between standard soundings and to relate changes in stability to intensity changes in an MCC. Four MCC cases were chosen for study with this method with four stability indices being included. The discussion centers on three aspects for each stability parameter examined: the stability field in the vicinity of the storm and its changes in structure and magnitude during the lifetime of the storm, the average stability within the storm boundary as a function of time and its relation to storm intensity, and the apparent flux of stability parameter into the storm as a consequence of low-level storm relative flow. It was found that the results differed among the four stability parameters, sometimes in a conflicting fashion. Thus, an interpretation of how the storm intensity is related to the changing environmental stability depends upon the particular index utilized. Some explanation for this problem is offered.
Comparison of volatility function technique for risk-neutral densities estimation
NASA Astrophysics Data System (ADS)
Bahaludin, Hafizah; Abdullah, Mimi Hafizah
2017-08-01
Volatility function techniques using an interpolation approach play an important role in extracting the risk-neutral density (RND) of options. The aim of this study is to compare the performance of two interpolation approaches, namely smoothing splines and fourth-order polynomials, in extracting the RND. The implied volatilities of options with respect to strike price/delta are interpolated to obtain a well-behaved density. The statistical analysis and forecast accuracy are tested using the moments of the distribution. The difference between the first moment of the distribution and the price of the underlying asset at maturity is used as an input to analyze forecast accuracy. RNDs are extracted from Dow Jones Industrial Average (DJIA) index options with a one-month constant maturity for the period from January 2011 until December 2015. The empirical results suggest that estimating the RND with a fourth-order polynomial is more appropriate than with a smoothing spline, as the fourth-order polynomial gives the lowest mean square error (MSE). The results can help market participants capture market expectations of future developments of the underlying asset.
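A minimal sketch of the fourth-order-polynomial route: fit the smile, reprice calls with Black-Scholes, then apply the Breeden-Litzenberger second derivative numerically (the smile shape, grid and parameters below are made up for illustration):

```python
import math
import numpy as np

def bs_call(S, K, T, r, sigma):
    """Black-Scholes call price."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

S0, r, T = 100.0, 0.0, 1.0 / 12.0
K = np.arange(60.0, 160.5, 0.5)
iv_obs = 0.20 + 1e-5 * (K - 100.0) ** 2        # hypothetical observed smile
coef = np.polyfit(K - 100.0, iv_obs, 4)        # fourth-order polynomial fit (centered)
iv_fit = np.polyval(coef, K - 100.0)
C = np.array([bs_call(S0, k, T, r, s) for k, s in zip(K, iv_fit)])
h = K[1] - K[0]
rnd = (C[2:] - 2.0 * C[1:-1] + C[:-2]) / h**2  # Breeden-Litzenberger density
```

On this grid the recovered density integrates to approximately one and peaks near the forward price, which is the basic well-behavedness check before computing the moments used in the study.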
Design of an essentially non-oscillatory reconstruction procedure on finite-element type meshes
NASA Technical Reports Server (NTRS)
Abgrall, R.
1991-01-01
An essentially non-oscillatory reconstruction for functions defined on finite-element type meshes was designed. Two related problems are studied: the interpolation of possibly unsmooth multivariate functions on arbitrary meshes, and the reconstruction of a function from its averages in the control volumes surrounding the nodes of the mesh. Concerning the first problem, we study the behavior of the highest coefficients of the Lagrange interpolation of functions that may admit discontinuities along locally regular curves. This enables us to choose the best stencil for the interpolation. The choice of the smallest possible number of stencils is also addressed. Concerning the reconstruction problem, because of the very nature of the mesh, the only method that may work is the so-called reconstruction-via-deconvolution method. Unfortunately, it is well suited only for regular meshes, as we show, but we also show how to overcome this difficulty. The global method has the expected order of accuracy, but is conservative only up to a high-order quadrature formula. Some numerical examples are given which demonstrate the efficiency of the method.
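In one dimension, the stencil-selection idea can be sketched on a uniform grid: start from the two points bracketing the target cell and grow the stencil toward whichever side has the smaller undivided difference, so that the stencil avoids crossing a discontinuity (illustrative only; the paper's setting is multivariate and unstructured):

```python
import numpy as np

def eno_stencil(f, i, order):
    """Leftmost index of an ENO stencil of `order` + 1 points for the
    cell [i, i + 1], grown away from discontinuities."""
    l = i
    for k in range(2, order + 1):
        # undivided differences of the left-extended vs. right-extended candidates
        left = abs(np.diff(f[l - 1:l + k], n=k)[0]) if l - 1 >= 0 else np.inf
        right = abs(np.diff(f[l:l + k + 1], n=k)[0]) if l + k < len(f) else np.inf
        if left < right:
            l -= 1
    return l

f = np.array([0.0] * 5 + [1.0] * 5)   # jump between indices 4 and 5
left_index = eno_stencil(f, 3, 3)     # 4-point stencil for the cell [3, 4]
```

For the step function the returned stencil lies entirely to the left of the jump, which is exactly the behavior that keeps the interpolant non-oscillatory near discontinuities.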
A wavelet-based adaptive fusion algorithm of infrared polarization imaging
NASA Astrophysics Data System (ADS)
Yang, Wei; Gu, Guohua; Chen, Qian; Zeng, Haifang
2011-08-01
The purpose of infrared polarization imaging is to highlight man-made targets against a complex natural background. Because infrared polarization images can distinguish targets from background using different features, this paper presents a wavelet-based infrared polarization image fusion algorithm. The method mainly processes the high-frequency portion of the signal; for the low-frequency portion, the conventional weighted-average method is applied. The high-frequency part is processed as follows: first, the high-frequency information of the source images is extracted by wavelet transform; then the signal strength within a 3×3 window is calculated, and the ratio of regional signal intensities between the source images is used as a matching measure. The extraction method and decision mode for the details are determined by a decision-making module, and the fusion result depends closely on the threshold set in this module. In place of the commonly used empirical approach, a quadratic interpolation optimization algorithm is proposed to obtain the threshold: the endpoints and midpoint of the threshold search interval are set as initial interpolation nodes, and the minimum of the quadratic interpolant is computed; the best threshold is obtained by comparing successive minima. A series of image quality evaluations shows that the method improves the fusion effect, and that it is effective not only for individual images but also across a large number of images.
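The threshold search can be sketched as one step of quadratic interpolation: fit a parabola through the search-interval endpoints and midpoint of the objective and take its vertex as the next threshold estimate (the objective below is a hypothetical stand-in for the fusion-quality score):

```python
def quadratic_vertex(x1, f1, x2, f2, x3, f3):
    """Vertex (minimiser) of the parabola through three points."""
    num = (x2 - x1) ** 2 * (f2 - f3) - (x2 - x3) ** 2 * (f2 - f1)
    den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
    return x2 - 0.5 * num / den

def objective(t):
    # hypothetical fusion-quality cost as a function of the threshold
    return (t - 0.3) ** 2 + 1.0

a, b = 0.0, 1.0                   # threshold search interval
m = 0.5 * (a + b)                 # midpoint node
t_best = quadratic_vertex(a, objective(a), m, objective(m), b, objective(b))
```

For an exactly quadratic objective the vertex lands on the minimiser in a single step; in practice the three nodes are updated and the step is iterated until the minima stop improving.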
Applicability of Hydrologic Modeling to Tactical Military Decision Making
1991-03-01
the continental United States. Table 4-4. Coefficient Ranges:
Location | Range of Ct | Average Ct | Range of Cp | Average Cp
Appalachian | 1.8-2.2 | 2.0... | |
Mountainous Watersheds | --- | 1.2 | |
Foothills Areas | --- | 0.7 | |
Valley Areas | --- | 0.4 | |
Eastern Nebraska | 0.4-1.0 | 0.8 | 0.5-1.0 | 0.8
Corps of Engineers | 0.4-8.0 | | 0.3-0.9 | ---
...enemy to cover covert guerrilla operations. b. Friendly Forces. Forces should be prepared to operate in a wet environment. c. Attachments and Detachments
Clinical evaluation of respiration-induced attenuation uncertainties in pulmonary 3D PET/CT.
Kruis, Matthijs F; van de Kamer, Jeroen B; Vogel, Wouter V; Belderbos, José Sa; Sonke, Jan-Jakob; van Herk, Marcel
2015-12-01
In contemporary positron emission tomography (PET)/computed tomography (CT) scanners, PET attenuation correction is performed by means of a CT-based attenuation map. Respiratory motion can, however, induce offsets between the PET and CT data. Studies have demonstrated that these offsets can cause errors in quantitative PET measures. The purpose of this study is to quantify the effects of respiration-induced CT differences on the attenuation correction of pulmonary 18F-fluorodeoxyglucose (FDG) 3D PET/CT in a patient population and to investigate contributing factors. For 32 lung cancer patients, 3D-CT, 4D-PET and 4D-CT data were acquired. The 4D FDG PET data were attenuation corrected (AC) using a free-breathing 3D-CT (3D-AC), the end-inspiration CT (EI-AC), the end-expiration CT (EE-AC) or phase-by-phase (P-AC). After reconstruction and AC, the 4D-PET data were averaged. In the 4Davg data, we measured the maximum standardised uptake value (SUVmax) in the tumour, the mean SUV (SUVmean) in a lung volume of interest (VOI) and the SUVmean in a muscle VOI. On the 4D-CT, we measured the lung volume differences and CT number changes between inhale and exhale in the lung VOI. Compared to P-AC, we found -2.3% (range -9.7% to 1.2%) lower tumour SUVmax with EI-AC and 2.0% (range -0.9% to 9.5%) higher SUVmax with EE-AC. No differences in the muscle SUV were found. The use of 3D-AC led to respiration-induced SUVmax differences of up to 20% compared to the use of P-AC. SUVmean differences in the lung VOI between EI-AC and EE-AC correlated with average CT differences in this region (ρ = 0.83). SUVmax differences in the tumour correlated with the volume changes of the lungs (ρ = -0.55) and the motion amplitude of the tumour (ρ = 0.53), both as measured on the 4D-CT. Respiration-induced CT variations in clinical data can, in extreme cases, lead to effects larger than 10% in SUV from PET attenuation correction. These differences were case specific and correlated with differences in CT number in the lungs.
Behavior of Compact Toroid Injected into C-2U Confinement Vessel
NASA Astrophysics Data System (ADS)
Matsumoto, Tadafumi; Roche, T.; Allrey, I.; Sekiguchi, J.; Asai, T.; Conroy, M.; Gota, H.; Granstedt, E.; Hooper, C.; Kinley, J.; Valentine, T.; Waggoner, W.; Binderbauer, M.; Tajima, T.; the TAE Team
2016-10-01
The compact toroid (CT) injector system has been developed for particle refueling on the C-2U device. A CT is formed by a magnetized coaxial plasma gun (MCPG) and the typical ejected CT/plasmoid parameters are as follows: average velocity 100 km/s, average electron density 1.9 × 10^15 cm^-3, electron temperature 30-40 eV, mass 12 μg. To refuel particles into the FRC plasma, the CT must penetrate the transverse magnetic field that surrounds the FRC. The kinetic energy density of the CT should be higher than the magnetic energy density of the axial magnetic field, i.e., ρv^2/2 ≥ B^2/(2μ0), where ρ, v, and B are the mass density, velocity, and surrounding magnetic field, respectively. Also, the penetrating CT's trajectory is deflected by the transverse magnetic field (Bz ~1 kG). Thus, we have to estimate the CT's energy and track the CT trajectory inside the magnetic field, for which we adopted a fast-framing camera on C-2U: the framing rate is up to 1.25 MHz for 120 frames. Employing the camera, we clearly captured the CT/plasmoid trajectory. Comparisons between the fast-framing camera and some other diagnostics, as well as CT injection results on C-2U, will be presented.
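With the quoted parameters, the penetration condition can be evaluated directly; a sketch assuming a hydrogen plasma, so that the mass density follows from the electron density (that assumption and the SI conversions are the only additions here):

```python
import math

mu0 = 4.0e-7 * math.pi            # vacuum permeability, H/m
m_p = 1.673e-27                   # proton mass, kg (hydrogen plasma assumed)
n_e = 1.9e15 * 1.0e6              # electron density: 1.9e15 cm^-3 -> m^-3
v = 100.0e3                       # CT velocity: 100 km/s in m/s

rho = n_e * m_p                   # mass density, kg/m^3
ke_density = 0.5 * rho * v**2     # kinetic energy density rho*v^2/2, J/m^3
# field whose magnetic energy density B^2/(2*mu0) equals the kinetic energy density
B_max = math.sqrt(2.0 * mu0 * ke_density)   # ~0.2 T, i.e. ~2 kG
```

The resulting threshold of roughly 2 kG sits comfortably above the ~1 kG transverse field, consistent with the CT being able to penetrate under these conditions.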
Bergeron, Catherine; Fleet, Richard; Tounkara, Fatoumata Korika; Lavallée-Bourget, Isabelle; Turgeon-Pelchat, Catherine
2017-12-28
Rural emergency departments (EDs) are an important gateway to care for the 20% of Canadians who reside in rural areas. Less than 15% of Canadian rural EDs have access to a computed tomography (CT) scanner. We hypothesized that a significant proportion of inter-facility transfers from rural hospitals without CT scanners are for CT imaging. Our objective was to assess inter-facility transfers for CT imaging in a rural ED without a CT scanner. We selected a rural ED that offers 24/7 medical care with admission beds but no CT scanner. Descriptive statistics were collected from 2010 to 2015 on total ED visits and inter-facility transfers. Data were accessible through hospital and government databases. Between 2010 and 2014, there were respectively 13,531, 13,524, 13,827, 12,883, and 12,942 ED visits, with an average of 444 inter-facility transfers. An average of 33% (148/444) of inter-facility transfers were to a rural referral centre with a CT scanner, 84% of these being for a CT scan. Inter-facility transfers incur costs and potential delays in patient diagnosis and management, yet current databases could not capture transfer times. Acquiring a CT scanner may represent a reasonable opportunity for the selected rural hospital considering the number of required transfers.
Guerrisi, A; Marin, D; Laghi, A; Di Martino, M; Iafrate, F; Iannaccone, R; Catalano, C; Passariello, R
2010-08-01
The aim of this study was to assess the accuracy of translucency rendering (TR) in computed tomographic (CT) colonography without cathartic preparation using primary 3D reading. From 350 patients with 482 endoscopically verified polyps, 50 pathologically proven polyps and 50 pseudopolyps were retrospectively examined. For faecal tagging, all patients ingested 140 ml of orally administered iodinated contrast agent (diatrizoate meglumine and diatrizoate sodium) with meals 48 h prior to the CT colonography examination and 2 h prior to scanning. CT colonography was performed using a 64-section CT scanner. Colonoscopy with segmental unblinding was performed within 2 weeks after CT. Three independent radiologists retrospectively evaluated TR CT colonographic images using a dedicated software package (V3D-Colon System). To enable size-dependent statistical analysis, lesions were stratified into the following size categories: small (< or =5 mm), intermediate (6-9 mm), and large (> or =10 mm). Overall average TR sensitivity for polyp characterisation was 96.6%, and overall average specificity for pseudopolyp characterisation was 91.3%. Overall average diagnostic accuracy (area under the curve) of TR for characterising colonic lesions was 0.97. TR is an accurate tool that facilitates interpretation of images obtained with a primary 3D analysis, thus enabling easy differentiation of polyps from pseudopolyps.
Average M shell fluorescence yields for elements with 70≤Z≤92
NASA Astrophysics Data System (ADS)
Kahoul, A.; Deghfel, B.; Aylikci, V.; Aylikci, N. K.; Nekkab, M.
2015-03-01
Theoretical, experimental and analytical methods for the calculation of the average M-shell fluorescence yield (ω̄M) of different elements are important because of the large number of applications in various areas of physical chemistry and medical research. In this paper, the bulk of the average M-shell fluorescence yield measurements reported in the literature, covering the period 1955 to 2005, are interpolated using an analytical function to deduce the empirical average M-shell fluorescence yield in the atomic-number range 70≤Z≤92. The results were compared with the theoretical and fitted values reported by other authors. Reasonable agreement was typically obtained between our results and those of other works.
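The interpolation idea can be sketched as a least-squares fit of a low-order polynomial in Z. The data points below are hypothetical illustrations, not the measurements compiled in the paper, and the quadratic form is an assumption:

```python
import numpy as np

# Hypothetical digitized (Z, average M-shell yield) pairs; the actual
# compilation in the paper spans 1955-2005 and 70 <= Z <= 92.
z = np.array([70, 74, 78, 82, 86, 90, 92])
omega = np.array([0.012, 0.017, 0.023, 0.029, 0.037, 0.046, 0.051])

# Fit a low-order polynomial in Z as a simple analytical interpolating function.
coeffs = np.polyfit(z, omega, deg=2)
fit = np.poly1d(coeffs)

# Deduce an empirical yield at an intermediate atomic number.
print(round(float(fit(80)), 4))
```

For production use one would weight the fit by the experimental uncertainties, which the sketch omits.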
Bayat, J; Hashemi, S H; Khoshbakht, K; Deihimfard, R; Shahbazi, A; Momeni-Vesalian, R
2015-07-01
Soil samples at two depths were collected and analyzed to determine the concentrations of 16 polycyclic aromatic hydrocarbons (PAHs), organic carbon, and soil pH. The Σ16PAHs were 0.13 to 3.92 mg kg(-1) at depth 1 and 0.21 to 50.32 mg kg(-1) at depth 2. The averages of the PAH compounds indicate that the area is contaminated with oil, and this pollution was greater at depth 2. Interpolation maps showed that the southern region, especially at depth 2, has been contaminated more by anthropogenic activity. The diagnostic ratios indicate several sources of pollution of the agricultural soil. A comparison of average PAHs and standard values revealed that higher molecular weight compounds in the topsoil (InP and BghiP) and subsoil (BaA, BkF, BaP, DBA, and BghiP) exceed standard values for farmland. The pH interpolation map for both depths showed that most of the area has alkaline soil from long-term irrigation with untreated urban wastewater.
Application of spatial methods to identify areas with lime requirement in eastern Croatia
NASA Astrophysics Data System (ADS)
Bogunović, Igor; Kisic, Ivica; Mesic, Milan; Zgorelec, Zeljka; Percin, Aleksandra; Pereira, Paulo
2016-04-01
With acid soils making up more than 50% of all agricultural land in Croatia, soil acidity is recognized as a big problem. Low soil pH leads to a series of negative phenomena in plant production, and therefore liming, recommended on the basis of soil analysis, is a compulsory measure for the reclamation of acid soils. The need for liming is often erroneously determined only on the basis of soil pH, because the determination of cation exchange capacity, hydrolytic acidity and base saturation is a major cost to producers. Therefore, in Croatia, as in some other countries, the amount of liming material needed to ameliorate acid soils is calculated from their hydrolytic acidity. The purpose of this study was to test several interpolation methods to identify the best spatial predictor of hydrolytic acidity, and to determine the possibility of using multivariate geostatistics to reduce the number of samples needed for the determination of hydrolytic acidity, all with the aim that the accuracy of the spatial distribution of the liming requirement is not significantly reduced. Soil pH (in KCl) and hydrolytic acidity (Y1) were determined in 1004 samples (0-30 cm) randomly collected in agricultural fields near Orahovica in eastern Croatia. This study tested 14 univariate interpolation models (part of the ArcGIS software package) in order to provide the most accurate spatial map of hydrolytic acidity on the basis of: all samples (Y1 100%), and datasets with 15% (Y1 85%), 30% (Y1 70%) and 50% fewer samples (Y1 50%). In parallel to the univariate interpolation methods, the precision of the spatial distribution of Y1 was tested by the co-kriging method with exchangeable acidity (pH in KCl) as a covariate. The soils of the studied area had an average pH (KCl) of 4.81, while the average Y1 was 10.52 cmol+ kg-1.
These data suggest that liming is a necessary agrotechnical measure for soil conditioning. The results show that ordinary kriging was the most accurate univariate interpolation method, with the smallest error (RMSE) in all four datasets, while the least precise were the Radial Basis Functions (Thin Plate Spline and Inverse Multiquadratic). Furthermore, a trend of increasing error (RMSE) with a reduced number of samples is noticeable for the most accurate univariate interpolation model: 3.096 (Y1 100%), 3.258 (Y1 85%), 3.317 (Y1 70%), 3.546 (Y1 50%). The best-fit semivariograms show a strong spatial dependence in Y1 100% (nugget/sill 20.19%) and Y1 85% (nugget/sill 23.83%), while a further reduction of the number of samples resulted in moderate spatial dependence (Y1 70%: 35.85% and Y1 50%: 32.01%). The co-kriging method reduced the RMSE compared with the univariate interpolation methods for each dataset: 2.054, 1.731 and 1.734 for Y1 85%, Y1 70% and Y1 50%, respectively. The results show the possibility of reducing sampling costs by using the co-kriging method, which is useful from a practical viewpoint. Halving the number of samples for the determination of hydrolytic acidity, in interaction with soil pH, provides higher precision for variable liming than the univariate interpolation methods applied to the entire dataset. These data provide new opportunities to reduce costs in practical plant production in Croatia.
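The strong/moderate classification of spatial dependence from the semivariogram nugget-to-sill ratio can be sketched with the commonly cited thresholds (below 25% strong, 25-75% moderate, otherwise weak); whether the authors used exactly these cut-offs is an assumption:

```python
def spatial_dependence(nugget, sill):
    """Classify spatial dependence from the nugget-to-sill ratio,
    using the widely used thresholds (<25% strong, 25-75% moderate,
    otherwise weak); the exact cut-offs are an assumption here."""
    ratio = 100.0 * nugget / sill
    if ratio < 25:
        return "strong"
    elif ratio <= 75:
        return "moderate"
    return "weak"

# Ratios reported in the abstract, already expressed as percentages
# (so sill is passed as 100):
for label, ratio in [("Y1 100%", 20.19), ("Y1 85%", 23.83),
                     ("Y1 70%", 35.85), ("Y1 50%", 32.01)]:
    print(label, spatial_dependence(ratio, 100.0))
```

Run on the abstract's ratios, this reproduces the reported pattern: strong dependence for the two largest datasets and moderate dependence after further sample reduction.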
Automated movement correction for dynamic PET/CT images: evaluation with phantom and patient data.
Ye, Hu; Wong, Koon-Pong; Wardak, Mirwais; Dahlbom, Magnus; Kepe, Vladimir; Barrio, Jorge R; Nelson, Linda D; Small, Gary W; Huang, Sung-Cheng
2014-01-01
Head movement during dynamic brain PET/CT imaging results in mismatch between the CT and the dynamic PET images. It can cause artifacts in CT-based attenuation-corrected PET images, thus affecting both the qualitative and quantitative aspects of the dynamic PET images and the derived parametric images. In this study, we developed an automated retrospective image-based movement correction (MC) procedure. The MC method first registered the CT image to each dynamic PET frame, then re-reconstructed the PET frames with CT-based attenuation correction, and finally re-aligned all the PET frames to the same position. We evaluated the MC method's performance on the Hoffman phantom and on dynamic FDDNP and FDG PET/CT images of patients with neurodegenerative disease or with poor compliance. Dynamic FDDNP PET/CT images (65 min) were obtained from 12 patients and dynamic FDG PET/CT images (60 min) from 6 patients. Logan analysis with the cerebellum as the reference region was used to generate regional distribution volume ratios (DVR) for the FDDNP scans before and after MC. For the FDG studies, the image-derived input function was used to generate parametric images of the FDG uptake constant (Ki) before and after MC. The phantom study showed high registration accuracy between PET and CT and improved PET images after MC. In the patient study, head movement was observed in all subjects, especially in late PET frames, with an average displacement of 6.92 mm. The z-direction translation (average maximum = 5.32 mm) and x-axis rotation (average maximum = 5.19 degrees) occurred most frequently. Image artifacts were significantly diminished after MC. There were significant differences (P<0.05) in the FDDNP DVR and FDG Ki values in the parietal and temporal regions after MC. In conclusion, MC applied to dynamic brain FDDNP and FDG PET/CT scans could improve the qualitative and quantitative aspects of the images of both tracers.
NASA Astrophysics Data System (ADS)
Haberlandt, Uwe
2007-01-01
The methods kriging with external drift (KED) and indicator kriging with external drift (IKED) are used for the spatial interpolation of hourly rainfall from rain gauges using additional information from radar, daily precipitation of a denser network, and elevation. The techniques are illustrated using data from the storm period of the 10th to the 13th of August 2002 that led to the extreme flood event in the Elbe river basin in Germany. Cross-validation is applied to compare the interpolation performance of the KED and IKED methods using different additional information with the univariate reference methods nearest neighbour (NN) or Thiessen polygons, inverse square distance weighting (IDW), ordinary kriging (OK) and ordinary indicator kriging (IK). Special attention is given to the analysis of the impact of semivariogram estimation on interpolation performance. Hourly and average semivariograms are inferred from daily, hourly and radar data, considering either isotropic or anisotropic behaviour and using automatic and manual fitting procedures. The multivariate methods KED and IKED clearly outperform the univariate ones, with the most important additional information being radar, followed by precipitation from the daily network and elevation, which plays only a secondary role here. The best performance is achieved when all additional information is used simultaneously with KED. The indicator-based kriging methods provide, in some cases, smaller root mean square errors than the methods that use the original data, but at the expense of a significant loss of variance. The impact of the semivariogram on interpolation performance is not very high. The best results are obtained using an automatic fitting procedure with isotropic variograms from either hourly or radar data.
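Of the univariate reference methods above, inverse square distance weighting (IDW) is the simplest to sketch. A minimal implementation with hypothetical gauge coordinates and readings (all names and values are illustrative only):

```python
import numpy as np

def idw(xy_obs, z_obs, xy_target, power=2.0):
    """Inverse distance weighting, one of the univariate reference
    interpolators (IDW) compared against kriging in the study."""
    d = np.linalg.norm(xy_obs - xy_target, axis=1)
    if np.any(d == 0):                       # exact hit on a gauge
        return float(z_obs[np.argmin(d)])
    w = 1.0 / d ** power                     # power=2 -> inverse *square* distance
    return float(np.sum(w * z_obs) / np.sum(w))

# Hypothetical hourly rain-gauge readings (x, y in km; z in mm/h):
gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
rain = np.array([2.0, 4.0, 6.0, 8.0])
print(idw(gauges, rain, np.array([5.0, 5.0])))   # equidistant target -> plain mean
```

Unlike KED, IDW cannot exploit secondary information such as radar or elevation, which is the gap the multivariate methods close.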
Klén, Riku; Noponen, Tommi; Koikkalainen, Juha; Lötjönen, Jyrki; Thielemans, Kris; Hoppela, Erika; Sipilä, Hannu; Teräs, Mika; Knuuti, Juhani
2016-09-01
Dual gating is a method of dividing the data of a cardiac PET scan into smaller bins according to the respiratory motion and the ECG of the patient. It reduces the undesirable motion artefacts in images, but produces several images for interpretation and decreases the quality of single images. By using motion-correction techniques, the motion artefacts in the dual-gated images can be corrected and the images can be combined into a single motion-free image with good statistics. The aim of the present study is to develop and evaluate motion-correction methods for cardiac PET studies. We have developed and compared two different methods: computed tomography (CT)/PET-based and CT-only methods. The methods were implemented and tested with a cardiac phantom and three patient datasets. In both methods, anatomical information of CT images is used to create models for the cardiac motion. In the patient study, the CT-only method reduced motion (measured as the centre of mass of the myocardium) on average 43%, increased the contrast-to-noise ratio on average 6.0% and reduced the target size on average 10%. Slightly better figures (51, 6.9 and 28%) were obtained with the CT/PET-based method. Even better results were obtained in the phantom study for both the CT-only method (57, 68 and 43%) and the CT/PET-based method (61, 74 and 52%). We conclude that using anatomical information of CT for motion correction of cardiac PET images, both respiratory and pulsatile motions can be corrected with good accuracy.
Suzuki, Kenji; Epstein, Mark L.; Kohlbrenner, Ryan; Garg, Shailesh; Hori, Masatoshi; Oto, Aytekin; Baron, Richard L.
2014-01-01
OBJECTIVE The purpose of this study was to evaluate automated CT volumetry in the assessment of living-donor livers for transplant and to compare this technique with software-aided interactive volumetry and manual volumetry. MATERIALS AND METHODS Hepatic CT scans of 18 consecutively registered prospective liver donors were obtained under a liver transplant protocol. Automated liver volumetry was developed on the basis of 3D active-contour segmentation. To establish reference standard liver volumes, a radiologist manually traced the contour of the liver on each CT slice. We compared the results obtained with automated and interactive volumetry with those obtained with the reference standard for this study, manual volumetry. RESULTS The average interactive liver volume was 1553 ± 343 cm3, and the average automated liver volume was 1520 ± 378 cm3. The average manual volume was 1486 ± 343 cm3. Both interactive and automated volumetric results had excellent agreement with manual volumetric results (intraclass correlation coefficients, 0.96 and 0.94). The average user time for automated volumetry was 0.57 ± 0.06 min/case, whereas those for interactive and manual volumetry were 27.3 ± 4.6 and 39.4 ± 5.5 min/case, the difference being statistically significant (p < 0.05). CONCLUSION Both interactive and automated volumetry are accurate for measuring liver volume with CT, but automated volumetry is substantially more efficient. PMID:21940543
NASA Astrophysics Data System (ADS)
Schäfer, D.; Lin, M.; Rao, P. P.; Loffroy, R.; Liapi, E.; Noordhoek, N.; Eshuis, P.; Radaelli, A.; Grass, M.; Geschwind, J.-F. H.
2012-03-01
C-arm based tomographic 3D imaging is applied in an increasing number of minimally invasive procedures. Owing to the limited acquisition speed for a complete projection data set required for tomographic reconstruction, breathing motion is a potential source of artifacts. This is the case for patients who cannot comply with breathing commands (e.g. due to anesthesia), so intra-scan motion estimation and compensation are required. Here, a scheme for projection-based local breathing motion estimation is combined with an anatomy-adapted interpolation strategy and subsequent motion-compensated filtered back projection. The breathing motion vector is measured as a displacement vector on the projections of a tomographic short-scan acquisition using the diaphragm as a landmark. Scaling of the displacement to the acquisition iso-center and anatomy-adapted volumetric motion vector field interpolation deliver a 3D motion vector per voxel. Motion-compensated filtered back projection incorporates this motion vector field in the image reconstruction process. The approach is applied in animal experiments on a flat-panel C-arm system, delivering improved image quality (lower artifact levels, improved tumor delineation) in 3D liver tumor imaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polan, D; Kamp, J; Lee, JY
Purpose: To perform validation and commissioning of a commercial deformable image registration (DIR) algorithm (Velocity, Varian Medical Systems) for numerous clinical sites using single and multi-modality images. Methods: In this retrospective study, the DIR algorithm was evaluated for 10 patients in each of the following body sites: head and neck (HN), prostate, liver, and gynecological (GYN). HN DIRs were evaluated from planning (p)CT to re-pCT and pCTs to daily CBCTs using Dice similarity coefficients (DSC) of corresponding anatomical structures. Prostate DIRs were evaluated from pCT to CBCTs using DSC and target registration error (TRE) of implanted RF beacons within the prostate. Liver DIRs were evaluated from pMR to pCT using DSC and TRE of vessel bifurcations. GYN DIRs were evaluated between fractionated brachytherapy MRIs using DSC of corresponding anatomical structures. Results: Analysis to date has given average DSCs for HN pCT-to-(re)pCT DIR for the brainstem, cochleas, constrictors, spinal canal, cord, esophagus, larynx, parotids, and submandibular glands as 0.88, 0.65, 0.67, 0.91, 0.77, 0.69, 0.77, 0.87, and 0.71, respectively. Average DSCs for HN pCT-to-CBCT DIR for the constrictors, spinal canal, esophagus, larynx, parotids, and submandibular glands were 0.64, 0.90, 0.62, 0.82, 0.75, and 0.69, respectively. For prostate pCT-to-CBCT DIR the DSC for the bladder, femoral heads, prostate, and rectum were 0.71, 0.82, 0.69, and 0.61, respectively. Average TRE using implanted beacons was 3.35 mm. For liver pCT-to-pMR, the average liver DSC was 0.94 and TRE was 5.26 mm. For GYN MR-to-MR DIR the DSC for the bladder, sigmoid colon, GTV, and rectum were 0.79, 0.58, 0.67, and 0.76, respectively. Conclusion: The Velocity DIR algorithm has been evaluated over a number of anatomical sites.
This work documents the uncertainties in the DIR during the commissioning process so that they can be accounted for in the development of downstream clinical processes. This work was supported in part by a co-development agreement with Varian Medical Systems.
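The Dice similarity coefficient used throughout this evaluation is straightforward to compute on binary structure masks; a minimal sketch with toy masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0          # both empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy "structures": two overlapping 6x6 squares on a 10x10 grid.
ref = np.zeros((10, 10), dtype=bool); ref[2:8, 2:8] = True   # 36 voxels
reg = np.zeros((10, 10), dtype=bool); reg[3:9, 3:9] = True   # 36 voxels
print(round(float(dice(ref, reg)), 3))   # 25 overlapping voxels -> 2*25/72 ≈ 0.694
```

In practice the masks would be the rasterized contours of the same structure on the fixed image and on the deformed image.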
NASA Astrophysics Data System (ADS)
Martinez, G.; Vanderlinden, K.; Ordóñez, R.; Muriel, J. L.
2009-04-01
Soil organic carbon (SOC) spatial characterization is necessary to evaluate under what circumstances soil acts as a source or sink of carbon dioxide. However, at the field or catchment scale it is hard to accurately characterize its spatial distribution, since large numbers of soil samples are necessary. As an alternative, near-surface geophysical sensor-based information can improve the spatial estimation of soil properties at these scales. Electromagnetic induction (EMI) sensors provide non-invasive and non-destructive measurements of the soil apparent electrical conductivity (ECa), which under non-saline conditions depends on clay content, water content and SOC, among other properties that determine the electromagnetic behavior of the soil. This study deals with the possible use of ECa-derived maps to improve SOC spatial estimation by Simple Kriging with varying local means (SKlm). Field work was carried out in a vertisol in SW Spain. The field is part of a long-term tillage experiment set up in 1982 with three replicates of conventional tillage (CT) and Direct Drilling (DD) plots with unitary dimensions of 15x65 m. Shallow and deep (up to 0.8 m depth) apparent electrical conductivity (ECas and ECad, respectively) were measured using the EM38-DD EMI sensor. Soil samples were taken from the upper horizon and analyzed for their SOC content. Correlation coefficients of ECas and ECad with SOC were low (0.331 and 0.175) due to the small range of SOC values and possibly also to the different support of the ECa and SOC data. Especially the ECas values were higher in the DD plots. The normalized ECa difference (ΔECa), calculated as the difference between the normalized ECas and ECad values, distinguished clearly between the CT and DD plots, with the DD plots showing positive ΔECa values and the CT plots negative values. The field was stratified using fuzzy k-means (FKM) classification of ΔECa (FKM1), and of ECas and ECad (FKM2).
The FKM1 map mainly showed the difference between the CT and DD plots, while the FKM2 map showed both the CT-DD differences and topography-associated features. Using the FKM1 and FKM2 maps as secondary information accounted for 30% of the total SOC variability, whereas plot and management average SOC explained 44 and 41%, respectively. Cross-validation of SKlm using FKM2 reduced the RMSE by 8% and increased the efficiency index by almost 70% as compared to Ordinary Kriging. This work shows how ECa can improve the spatial characterization of SOC, despite the low correlation with SOC and the small size of the plots used in this study.
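One plausible reading of the normalized ECa difference is a difference of z-scored shallow and deep readings; the exact normalization used by the authors is not stated, so this is an assumption, and the transect values below are hypothetical:

```python
import numpy as np

def normalized_difference(ecas, ecad):
    """ΔECa sketched as z-scored shallow readings minus z-scored deep
    readings; the normalization choice is an assumption."""
    zs = (ecas - ecas.mean()) / ecas.std()
    zd = (ecad - ecad.mean()) / ecad.std()
    return zs - zd

# Hypothetical transect (mS/m): first three points in a DD plot, last
# three in a CT plot; ECas is clearly higher under DD, ECad is similar.
ecas = np.array([30.0, 30.0, 30.0, 10.0, 10.0, 10.0])
ecad = np.array([21.0, 19.0, 20.0, 21.0, 19.0, 20.0])
delta = normalized_difference(ecas, ecad)
print(delta[:3].mean() > delta[3:].mean())   # True: DD plots higher on average
```

With ECas carrying the management signal and ECad mostly noise, ΔECa is higher on average over the DD plots, mirroring the sign pattern the abstract reports.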
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riegel, Adam C. B.A.; Chang, Joe Y.; Vedam, Sastry S.
2009-02-01
Purpose: To determine whether cine computed tomography (CT) can serve as an alternative to four-dimensional (4D)-CT by providing tumor motion information and producing equivalent target volumes when used to contour in radiotherapy planning without a respiratory surrogate. Methods and Materials: Cine CT images from a commercial CT scanner were used to form maximum intensity projection and respiratory-averaged CT image sets. These image sets then were used together to define the targets for radiotherapy. Phantoms oscillating under irregular motion were used to assess the differences between contouring using cine CT and 4D-CT. We also retrospectively reviewed the image sets for 26 patients (27 lesions) at our institution who had undergone stereotactic radiotherapy for Stage I non-small-cell lung cancer. The patients were included if the tumor motion was >1 cm. The lesions were first contoured using maximum intensity projection and respiratory-averaged CT image sets processed from cine CT and then with 4D-CT maximum intensity projection and 10-phase image sets. The mean ratios of the volume magnitude were compared with intraobserver variation, the mean centroid shifts were calculated, and the volume overlap was assessed with the normalized Dice similarity coefficient index. Results: The phantom studies demonstrated that cine CT captured a greater extent of irregular tumor motion than did 4D-CT, producing a larger tumor volume. The patient studies demonstrated that the gross tumor defined using cine CT imaging was similar to, or slightly larger than, that defined using 4D-CT. Conclusion: The results of our study have shown that cine CT is a promising alternative to 4D-CT for stereotactic radiotherapy planning.
Research on interpolation methods in medical image processing.
Pan, Mei-Sen; Yang, Xiao-Li; Tang, Jing-Tian
2012-04-01
Image interpolation is widely used in the field of medical image processing. In this paper, interpolation methods are divided into three groups: filter interpolation, ordinary interpolation and general partial volume interpolation. Some commonly used filter methods for image interpolation are reviewed, but their interpolation effects need to be further improved. In analyzing and discussing ordinary interpolation, many asymmetrical kernel interpolation methods are proposed; compared with symmetrical kernel ones, the former have some advantages. After analyzing the partial volume and generalized partial volume estimation interpolations, the new concept and constraint conditions of general partial volume interpolation are defined, and several new partial volume interpolation functions are derived. By performing experiments of image scaling, rotation and self-registration, the interpolation methods mentioned in this paper are compared in terms of entropy, peak signal-to-noise ratio, cross entropy, normalized cross-correlation coefficient and running time. Among the filter interpolation methods, the median and B-spline filter interpolations have relatively better interpolation performance. Among the ordinary interpolation methods, on the whole, the symmetrical cubic kernel interpolations demonstrate a strong advantage, especially the symmetrical cubic B-spline interpolation; however, they are very time-consuming. As for the general partial volume interpolation methods, in terms of the total error of image self-registration, the symmetrical interpolations provide a certain superiority; but considering processing efficiency, the asymmetrical interpolations are better.
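Peak signal-to-noise ratio, one of the comparison metrics above, can be sketched as follows (toy images; an 8-bit peak of 255 is assumed):

```python
import numpy as np

def psnr(original, processed, peak=255.0):
    """Peak signal-to-noise ratio in dB, a standard fidelity metric for
    comparing interpolation kernels after scaling/rotation round-trips."""
    mse = np.mean((np.asarray(original, float) - np.asarray(processed, float)) ** 2)
    if mse == 0:
        return float("inf")                  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

img = np.full((8, 8), 100.0)
noisy = img.copy()
noisy[0, 0] += 64.0          # one corrupted pixel: MSE = 64^2 / 64 = 64
print(round(psnr(img, noisy), 2))   # 10*log10(255^2/64) ≈ 30.07
```

Higher PSNR after a scale-then-unscale round trip indicates a better-behaved interpolation kernel.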
Surface smoothing and template partitioning for cranial implant CAD
NASA Astrophysics Data System (ADS)
Min, Kyoung-june; Dean, David
2005-04-01
Employing patient-specific prefabricated implants can be an effective treatment for large cranial defects (i.e., >25 cm2). We have previously demonstrated the use of Computer Aided Design (CAD) software that starts with the patient's 3D head CT scan. A template is accurately matched to the pre-detected skull defect margin. For unilateral cranial defects the template is derived from a left-to-right mirrored skull image. However, two problems arise: (1) slice edge artifacts generated during isosurface polygonalization are inherited by the final implant; and (2) partitioning (i.e., cookie-cutting) the implant surface from the mirrored skull image usually results in curvature discontinuities across the interface between the patient's defect and the implant. To solve these problems, we introduce a novel space curve-to-surface partitioning algorithm following a ray-casting surface re-sampling and smoothing procedure. Specifically, the ray-cast re-sampling is followed by bilinear interpolation and low-pass filtering. The resulting surface has a highly regular grid-like topological structure of quadrilaterally arranged triangles. Then, we replace the regions to be partitioned with predefined sets of triangular elements, thereby cutting the template surface to accurately fit the defect margin at high resolution and without surface curvature discontinuities. Comparisons of the CAD implants for five patients against the manually generated implants that the patients actually received show an average implant-patient gap of 0.45 mm for the former and 2.96 mm for the latter. Also, the average maximum normalized curvature of the interfacing surfaces was smoother for the former (0.043) than for the latter (0.097). This indicates that the CAD implants would provide a significantly better fit.
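The bilinear interpolation step applied to the ray-cast resampled grid follows the standard four-neighbor weighting; a minimal sketch on a tiny 2D grid:

```python
def bilinear(img, x, y):
    """Bilinear interpolation at fractional coordinates (x, y): a weighted
    average of the four surrounding grid samples; img is a 2D list/array
    indexed as img[row][col]."""
    x0, y0 = int(x), int(y)
    x1, y1 = x0 + 1, y0 + 1
    fx, fy = x - x0, y - y0
    return (img[y0][x0] * (1 - fx) * (1 - fy) +
            img[y0][x1] * fx * (1 - fy) +
            img[y1][x0] * (1 - fx) * fy +
            img[y1][x1] * fx * fy)

grid = [[0.0, 10.0],
        [20.0, 30.0]]
print(bilinear(grid, 0.5, 0.5))   # midpoint of the four corners -> 15.0
```

In the surface pipeline the "pixels" would be ray-cast depth samples rather than intensities, but the weighting is identical.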
NASA Astrophysics Data System (ADS)
Wang, Lei; Moltz, Jan H.; Bornemann, Lars; Hahn, Horst K.
2012-03-01
Precise size measurement of enlarged lymph nodes is a significant indicator for diagnosing malignancy, follow-up and therapy monitoring of cancer diseases. The presence of diverse sizes and shapes, inhomogeneous enhancement and the adjacency of neighboring structures with similar intensities make the segmentation task challenging. We present a semi-automatic approach requiring minimal user interaction to segment enlarged lymph nodes quickly and robustly. First, a stroke approximating the largest diameter of a specific lymph node is drawn manually, from which a volume of interest (VOI) is determined. Second, based on the statistical analysis of the intensities in the dilated stroke area, a region-growing procedure is applied within the VOI to create an initial segmentation of the target lymph node. Third, a rotatable spiral-scanning technique is proposed to resample the 3D boundary surface of the lymph node to a 2D boundary contour in a transformed polar image. The boundary contour is found by seeking the optimal path in the 2D polar image with a dynamic programming algorithm and is eventually transformed back to 3D. Ultimately, the boundary surface of the lymph node is determined using an interpolation scheme followed by post-processing steps. To test the robustness and efficiency of our method, a quantitative evaluation was conducted with a dataset of 315 lymph nodes acquired from 79 patients with lymphoma and melanoma. Compared to the reference segmentations, an average Dice coefficient of 0.88 with a standard deviation of 0.08, and an average absolute surface distance of 0.54 mm with a standard deviation of 0.48 mm, were achieved.
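The dynamic-programming search for the optimal boundary path in the 2D polar image can be sketched as a column-by-column minimal-cost path; the one-bin smoothness constraint between neighbouring angle columns is an assumption, not a detail given in the abstract:

```python
import numpy as np

def optimal_boundary(cost):
    """Minimal-cost path through a polar image (rows = radial bins,
    columns = angles): one radius per angle, changing by at most one
    radial bin between neighbouring angles (assumed smoothness constraint)."""
    n_r, n_a = cost.shape
    acc = cost.astype(float).copy()              # accumulated cost
    back = np.zeros((n_r, n_a), dtype=int)       # backtracking pointers
    for j in range(1, n_a):
        for i in range(n_r):
            lo, hi = max(0, i - 1), min(n_r, i + 2)
            k = lo + int(np.argmin(acc[lo:hi, j - 1]))
            acc[i, j] += acc[k, j - 1]
            back[i, j] = k
    path = [int(np.argmin(acc[:, -1]))]          # best end point
    for j in range(n_a - 1, 0, -1):
        path.append(int(back[path[-1], j]))
    return path[::-1]                            # radius index per angle column

# Toy polar cost image: a strong (cheap) edge along radial bin 2.
c = np.full((5, 6), 5.0)
c[2, :] = 1.0
print(optimal_boundary(c))   # [2, 2, 2, 2, 2, 2]
```

In the real method the cost image would be derived from image gradients along each spiral-scanning ray, and the recovered radii are mapped back to a 3D boundary surface.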
Interpolation/extrapolation technique with application to hypervelocity impact of space debris
NASA Technical Reports Server (NTRS)
Rule, William K.
1992-01-01
A new technique for the interpolation/extrapolation of engineering data is described. The technique easily allows for the incorporation of additional independent variables, and the most suitable data in the database are automatically used for each prediction. The technique provides diagnostics for assessing the reliability of the prediction. Two sets of predictions made for known 5-degree-of-freedom, 15-parameter functions using the new technique produced an average coefficient of determination of 0.949. Here, the technique is applied to the prediction of damage to the Space Station from hypervelocity impact of space debris. A new set of impact data is presented for this purpose. Reasonable predictions for bumper damage were obtained, but predictions of pressure wall and multilayer insulation damage were poor.
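The quoted coefficient of determination can be computed directly from observed values and model predictions; a minimal sketch (names are illustrative):

```python
def r_squared(observed, predicted):
    """Coefficient of determination R^2: 1 - SS_res / SS_tot, where SS_res
    is the residual sum of squares and SS_tot the total sum of squares
    about the mean of the observations."""
    mean = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```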
Divgi, Chaitanya R.; Uzzo, Robert G.; Gatsonis, Constantine; Bartz, Roman; Treutner, Silke; Yu, Jian Qin; Chen, David; Carrasquillo, Jorge A.; Larson, Steven; Bevan, Paul; Russo, Paul
2013-01-01
Purpose A clinical study to characterize renal masses with positron emission tomography/computed tomography (PET/CT) was undertaken. Patients and Methods This was an open-label multicenter study of iodine-124 (124I) -girentuximab PET/CT in patients with renal masses who were scheduled for resection. PET/CT and contrast-enhanced CT (CECT) of the abdomen were performed 2 to 6 days after intravenous 124I-girentuximab administration and before resection of the renal mass(es). Images were interpreted centrally by three blinded readers for each imaging modality. Tumor histology was determined by a blinded central pathologist. The primary end points—average sensitivity and specificity for clear cell renal cell carcinoma (ccRCC)—were compared between the two modalities. Agreement between and within readers was assessed. Results 124I-girentuximab was well tolerated. In all, 195 patients had complete data sets (histopathologic diagnosis and PET/CT and CECT results) available. The average sensitivity was 86.2% (95% CI, 75.3% to 97.1%) for PET/CT and 75.5% (95% CI, 62.6% to 88.4%) for CECT (P = .023). The average specificity was 85.9% (95% CI, 69.4% to 99.9%) for PET/CT and 46.8% (95% CI, 18.8% to 74.7%) for CECT (P = .005). Inter-reader agreement was high (κ range, 0.87 to 0.92 for PET/CT; 0.67 to 0.76 for CECT), as was intrareader agreement (range, 87% to 100% for PET/CT; 73.7% to 91.3% for CECT). Conclusion This study represents (to the best of our knowledge) the first clinical validation of a molecular imaging biomarker for malignancy. 124I-girentuximab PET/CT can accurately and noninvasively identify ccRCC, with potential utility for designing best management approaches for patients with renal masses. PMID:23213092
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Z; Greskovich, J; Xia, P
Purpose: To generate virtual phantoms with clinically relevant deformation and use them to objectively evaluate geometric and dosimetric uncertainties of deformable image registration (DIR) algorithms. Methods: Ten lung cancer patients undergoing adaptive 3DCRT planning were selected. For each patient, a pair of planning CT (pCT) and replanning CT (rCT) scans was used as the basis for virtual phantom generation. Manually adjusted meshes were created for selected ROIs (e.g. PTV, lungs, spinal cord, esophagus, and heart) on pCT and rCT. The mesh vertices were input into a thin-plate spline algorithm to generate a reference displacement vector field (DVF). The reference DVF was used to deform pCT to generate a simulated replanning CT (srCT) that was closely matched to rCT. Three DIR algorithms (Demons, B-Spline, and intensity-based) were applied to these ten virtual phantoms. The images, ROIs, and doses were mapped from pCT to srCT using the DVFs computed by these three DIRs and compared to those mapped using the reference DVF. Results: The average Dice coefficients for selected ROIs were from 0.85 to 0.96 for Demons, from 0.86 to 0.97 for intensity-based, and from 0.76 to 0.95 for B-Spline. The average Hausdorff distances for selected ROIs were from 2.2 to 5.4 mm for Demons, from 2.3 to 6.8 mm for intensity-based, and from 2.4 to 11.4 mm for B-Spline. The average absolute dose errors for selected ROIs were from 0.2 to 0.6 Gy for Demons, from 0.1 to 0.5 Gy for intensity-based, and from 0.5 to 1.5 Gy for B-Spline. Conclusion: Virtual phantoms were modeled after patients with lung cancer and were clinically relevant for adaptive radiotherapy treatment replanning. Virtual phantoms with known DVFs serve as references and can provide a fair comparison when evaluating different DIRs. Demons and intensity-based DIRs were shown to have smaller geometric and dosimetric uncertainties than B-Spline.
Z Shen: None; K Bzdusek: an employee of Philips Healthcare; J Greskovich: None; P Xia: received research grants from Philips Healthcare and Siemens Healthcare.
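The geometric metrics reported above (Dice coefficient and Hausdorff distance) can be computed with straightforward set operations; a brute-force sketch for small ROIs, illustrative rather than taken from the study:

```python
def dice(a, b):
    """Dice overlap of two sets of voxel indices: 2|A∩B| / (|A|+|B|)."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets (brute force,
    O(|a|*|b|); fine for illustration, too slow for full CT volumes)."""
    def d(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5
    def directed(s, t):
        return max(min(d(p, q) for q in t) for p in s)
    return max(directed(a, b), directed(b, a))
```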
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fried, David V.; Graduate School of Biomedical Sciences, The University of Texas Health Science Center at Houston, Houston, Texas; Tucker, Susan L.
2014-11-15
Purpose: To determine whether pretreatment CT texture features can improve patient risk stratification beyond conventional prognostic factors (CPFs) in stage III non-small cell lung cancer (NSCLC). Methods and Materials: We retrospectively reviewed 91 cases with stage III NSCLC treated with definitive chemoradiation therapy. All patients underwent pretreatment diagnostic contrast-enhanced computed tomography (CE-CT) followed by 4-dimensional CT (4D-CT) for treatment simulation. We used the average-CT and expiratory (T50-CT) images from the 4D-CT along with the CE-CT for texture extraction. Histogram, gradient, co-occurrence, gray tone difference, and filtration-based techniques were used for texture feature extraction. Penalized Cox regression implementing cross-validation was used for covariate selection and modeling. Models incorporating texture features from the three image types and CPFs were compared to models incorporating CPFs alone for overall survival (OS), local-regional control (LRC), and freedom from distant metastases (FFDM). Predictive Kaplan-Meier curves were generated using leave-one-out cross-validation. Patients were stratified based on whether their predicted outcome was above or below the median. Reproducibility of texture features was evaluated using test-retest scans from independent patients and quantified using concordance correlation coefficients (CCC). We compared models incorporating the reproducibility seen on test-retest scans to our original models and determined the classification reproducibility. Results: Models incorporating both texture features and CPFs demonstrated a significant improvement in risk stratification compared to models using CPFs alone for OS (P=.046), LRC (P=.01), and FFDM (P=.005). The average CCCs were 0.89, 0.91, and 0.67 for texture features extracted from the average-CT, T50-CT, and CE-CT, respectively.
Incorporating reproducibility within our models yielded 80.4% (±3.7% SD), 78.3% (±4.0% SD), and 78.8% (±3.9% SD) classification reproducibility in terms of OS, LRC, and FFDM, respectively. Conclusions: Pretreatment tumor texture may provide prognostic information beyond that obtained from CPFs. Models incorporating feature reproducibility achieved classification rates of ∼80%. External validation would be required to establish texture as a prognostic factor.
Jain, Sunil
2008-01-01
Our objective was to assess and validate low-dose computed tomography (CT) scanogram as a post-operative imaging modality to measure the mechanical axis after navigated total knee replacement. A prospective study was performed to compare intra-operative and post-operative mechanical axis after navigated total knee replacements. All consecutive patients who underwent navigated total knee replacement between May and December 2006 were included. The intra-operative final axis was recorded, and post-operatively a CT scanogram of lower limbs was performed. The mechanical axis was measured and compared against the intra-operative measurement. There were 15 patients ranging in age from 57 to 80 (average 70) years. The average final intra-operative axis was 0.56° varus (4° varus to 1.5° valgus) and post-operative CT scanogram axis was 0.52° varus (3.1° varus to 1.8° valgus). The average deviation from final axes to CT scanogram axes was 0.12° valgus with a correlation coefficient of 0.9. Our study suggests that CT scanogram is an imaging modality with reasonable accuracy for measuring mechanical axis despite significantly low radiation. It also confirms a high level of correlation between intra-operative and post-operative mechanical axis after navigated total knee replacement. PMID:18696064
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carranza, C; Lipnharski, I; Quails, N
Purpose: This retrospective study analyzes the exposure history of emergency department (ED) patients undergoing head and cervical spine trauma computed tomography (CT) studies. This study investigated dose levels received by trauma patients and addressed potential concerns regarding radiation dose. Methods: Under proper IRB approval, a cohort of 300 head and cervical spine trauma CT scans received in the ED was studied. The radiological image viewing software of the hospital was used to view patient images and image data. The following parameters were extracted: the imaging history of patients and the reported dose metrics from the scanner, including the volumetric CT Dose Index (CTDIvol) and Dose Length Product (DLP). A postmortem subject was scanned using the same scan techniques utilized in a standard clinical head and cervical spine trauma CT protocol with 120 kVp and 280 mAs. The CTDIvol was recorded for the subject and the organ doses were measured using optically stimulated luminescent (OSL) dosimeters. Typical organ doses to the brain, thyroid, lens, salivary glands, and skin, based on the cadaver studies, were then calculated and reported for the cohort. Results: The CTDIvol reported by the CT scanner was 25.5 mGy for the postmortem subject. The average CTDIvol from the patient cohort was 34.1 mGy. From these metrics, typical average organ doses in mGy were found to be: brain (44.57), thyroid (33.40), lens (82.45), salivary glands (61.29), and skin (47.50). The imaging history of the cohort showed that trauma patients received an average of 26.1 scans over a lifetime. Conclusion: The average number of scans received by trauma ED patients shows that radiation doses in trauma patients may be a concern. Available dose tracking software would be helpful to track doses in trauma ED patients, highlighting the importance of minimizing unnecessary scans and keeping doses ALARA.
Technical note: RabbitCT--an open platform for benchmarking 3D cone-beam reconstruction algorithms.
Rohkohl, C; Keck, B; Hofmann, H G; Hornegger, J
2009-09-01
Fast 3D cone beam reconstruction is mandatory for many clinical workflows. For that reason, researchers and industry work hard on hardware-optimized 3D reconstruction. Backprojection is a major component of many reconstruction algorithms that require a projection of each voxel onto the projection data, including data interpolation, before updating the voxel value. This step is the bottleneck of most reconstruction algorithms and the focus of optimization in recent publications. A crucial limitation, however, of these publications is that the presented results are not comparable to each other. This is mainly due to variations in data acquisitions, preprocessing, and chosen geometries and the lack of a common publicly available test dataset. The authors provide such a standardized dataset that allows for substantial comparison of hardware accelerated backprojection methods. They developed an open platform RabbitCT (www.rabbitCT.com) for worldwide comparison in backprojection performance and ranking on different architectures using a specific high resolution C-arm CT dataset of a rabbit. This includes a sophisticated benchmark interface, a prototype implementation in C++, and image quality measures. At the time of writing, six backprojection implementations are already listed on the website. Optimizations include multithreading using Intel threading building blocks and OpenMP, vectorization using SSE, and computation on the GPU using CUDA 2.0. There is a need for objectively comparing backprojection implementations for reconstruction algorithms. RabbitCT aims to provide a solution to this problem by offering an open platform with fair chances for all participants. The authors are looking forward to a growing community and await feedback regarding future evaluations of novel software- and hardware-based acceleration schemes.
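Voxel-driven backprojection, the bottleneck this benchmark targets, amounts to projecting each voxel onto the detector and interpolating the projection data before accumulating. A deliberately naive 2D parallel-beam sketch for illustration (RabbitCT itself benchmarks 3D cone-beam backprojection with projection matrices, which this simplification omits):

```python
import math

def backproject(sinogram, angles, size):
    """Naive 2D parallel-beam backprojection: for each pixel, compute its
    detector coordinate at each angle, linearly interpolate the 1D detector
    row, and accumulate. Assumes the detector centre at index len(row)//2
    and unit detector spacing (illustrative conventions)."""
    recon = [[0.0] * size for _ in range(size)]
    half = size // 2
    for row, theta in zip(sinogram, angles):
        c, s = math.cos(theta), math.sin(theta)
        centre = len(row) // 2
        for y in range(size):
            for x in range(size):
                # signed distance of the pixel from the rotation centre,
                # projected onto the detector axis
                t = (x - half) * c + (y - half) * s + centre
                i = int(math.floor(t))
                if 0 <= i < len(row) - 1:
                    f = t - i
                    recon[y][x] += row[i] * (1 - f) + row[i + 1] * f
    return recon
```

Hardware-accelerated entries on the platform optimize exactly this inner loop (vectorization, multithreading, GPU kernels) while keeping the numerical result comparable.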
Admiraal, Marjan A; Schuring, Danny; Hurkmans, Coen W
2008-01-01
The purpose of this study was to determine the 4D accumulated dose delivered to the CTV in stereotactic radiotherapy of lung tumours, for treatments planned on an average CT using an ITV derived from the Maximum Intensity Projection (MIP) CT. For 10 stage I lung cancer patients, treatment plans were generated based on 4D-CT images. From the 4D-CT scan, 10 time-sorted breathing phases were derived, along with the average CT and the MIP. The ITV with a margin of 0 mm was used as a PTV to study a worst-case scenario in which the differences between 3D planning and 4D dose accumulation will be largest. Dose calculations were performed on the average CT. Dose prescription was 60 Gy to 95% of the PTV, and at least 54 Gy should be received by 99% of the PTV. Plans were generated using the inverse planning module of the Pinnacle³ treatment planning system. The plans consisted of nine coplanar beams with two segments each. After optimisation, the treatment plan was transferred to all breathing phases and the delivered dose per phase was calculated using an elastic body spline model available in our research version of Pinnacle (8.1r). Then, the cumulative dose to the CTV over all breathing phases was calculated and compared to the dose distribution of the original treatment plan. Although location, tumour size and breathing-induced tumour movement varied widely between patients, the PTV planning criteria could always be achieved without compromising organs-at-risk criteria. After 4D dose calculations, only very small differences between the initial planned PTV coverage and resulting CTV coverage were observed. For all patients, the dose delivered to 99% of the CTV exceeded 54 Gy. For nine out of 10 patients the criterion was also met that the volume of the CTV receiving at least the prescribed dose was more than 95%.
When the target dose is prescribed to the ITV (PTV=ITV) and dose calculations are performed on the average CT, the cumulative CTV dose compares well to the planned dose to the ITV. Thus, the concept of treatment plan optimisation and evaluation based on the average CT and the ITV is a valid approach in stereotactic lung treatment. Even with a zero ITV to PTV margin, no significantly different dose coverage of the CTV arises from the breathing motion induced dose variation over time.
NASA Astrophysics Data System (ADS)
Olav Skøien, Jon; Laaha, Gregor; Koffler, Daniel; Blöschl, Günter; Pebesma, Edzer; Parajka, Juraj; Viglione, Alberto
2013-04-01
Geostatistical methods have been applied only to a limited extent for spatial interpolation in applications where the observations have an irregular support, such as runoff characteristics or population health data. Several studies have shown the potential of such methods (Gottschalk 1993, Sauquet et al. 2000, Gottschalk et al. 2006, Skøien et al. 2006, Goovaerts 2008), but these developments have so far not led to easily accessible, versatile, easy to apply and open source software. Based on the top-kriging approach suggested by Skøien et al. (2006), we will here present the package rtop, which has been implemented in the statistical environment R (R Core Team 2012). Taking advantage of the existing methods in R for analysis of spatial objects (Bivand et al. 2008), and the extensive possibilities for visualizing the results, rtop makes it easy to apply geostatistical interpolation methods when observations have a non-point spatial support. Although the package is flexible regarding data input, the main application so far has been for interpolation along river networks. We will present some examples showing how the package can easily be used for such interpolation. The model will soon be uploaded to CRAN, but is in the meantime also available from R-forge and can be installed by: > install.packages("rtop", repos="http://R-Forge.R-project.org") Bivand, R.S., Pebesma, E.J. & Gómez-Rubio, V., 2008. Applied spatial data analysis with r: Springer. Goovaerts, P., 2008. Kriging and semivariogram deconvolution in the presence of irregular geographical units. Mathematical Geosciences, 40 (1), 101-128. Gottschalk, L., 1993. Interpolation of runoff applying objective methods. Stochastic Hydrology and Hydraulics, 7, 269-281. Gottschalk, L., Krasovskaia, I., Leblois, E. & Sauquet, E., 2006. Mapping mean and variance of runoff in a river basin. Hydrology and Earth System Sciences, 10, 469-484. R Core Team, 2012. R: A language and environment for statistical computing. 
Vienna, Austria, ISBN 3-900051-07-0. Sauquet, E., Gottschalk, L. & Leblois, E., 2000. Mapping average annual runoff: A hierarchical approach applying a stochastic interpolation scheme. Hydrological Sciences Journal, 45 (6), 799-815. Skøien, J.O., Merz, R. & Blöschl, G., 2006. Top-kriging - geostatistics on stream networks. Hydrology and Earth System Sciences, 10, 277-287.
Average M shell fluorescence yields for elements with 70≤Z≤92
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kahoul, A., E-mail: ka-abdelhalim@yahoo.fr; LPMRN laboratory, Department of Materials Science, Faculty of Sciences and Technology, Mohamed El Bachir El Ibrahimi University, Bordj-Bou-Arreridj 34030; Deghfel, B.
2015-03-30
The theoretical, experimental and analytical methods for the calculation of the average M-shell fluorescence yield (ω̄_M) of different elements are very important because of the large number of their applications in various areas of physical chemistry and medical research. In this paper, the bulk of the average M-shell fluorescence yield measurements reported in the literature, covering the period 1955 to 2005, are interpolated using an analytical function to deduce the empirical average M-shell fluorescence yield in the atomic number range 70≤Z≤92. The results were compared with the theoretical and fitted values reported by other authors. Reasonable agreement was typically obtained between our results and other works.
[Research on Kalman interpolation prediction model based on micro-region PM2.5 concentration].
Wang, Wei; Zheng, Bin; Chen, Binlin; An, Yaoming; Jiang, Xiaoming; Li, Zhangyong
2018-02-01
In recent years, the pollution problem of particulate matter, especially PM2.5, has become increasingly serious and has attracted worldwide attention. In this paper, a Kalman prediction model combined with cubic spline interpolation is proposed and applied to predict the concentration of PM2.5 in the micro-regional environment of a campus, to produce interpolated maps of PM2.5 concentration, and to simulate the spatial distribution of PM2.5. The experimental data come from the environmental information monitoring system set up by our laboratory. The predicted and actual PM2.5 concentration values were compared using a Wilcoxon signed-rank test: the two-sided asymptotic significance probability was 0.527, much greater than the significance level α = 0.05. The mean absolute error (MAE) of the Kalman prediction model was 1.8 μg/m³, the mean relative error (MRE) was 6%, and the correlation coefficient R was 0.87. Thus, the Kalman prediction model predicts PM2.5 concentration better than back propagation (BP) prediction and support vector machine (SVM) prediction. In addition, combining the Kalman prediction model with the spline interpolation method allows the spatial distribution and local pollution characteristics of PM2.5 to be simulated.
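The Kalman prediction component can be sketched for a scalar concentration series with a random-walk state model; the noise variances below are assumed for illustration, not taken from the paper:

```python
def kalman_1d(measurements, q=0.5, r=4.0, x0=0.0, p0=10.0):
    """Scalar Kalman filter with a random-walk state model, producing
    filtered estimates of a noisy concentration series.

    q: process noise variance, r: measurement noise variance,
    x0/p0: initial state estimate and variance (all assumed values)."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                 # predict: state unchanged, uncertainty grows
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with measurement z
        p = (1 - k) * p
        estimates.append(x)
    return estimates
```

The filtered point estimates at monitoring stations would then be spread over the map by the cubic spline interpolation step described in the abstract.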
Lee, J Y; Shank, B; Bonfiglio, P; Reid, A
1984-10-01
Sequential changes in lung density measured by CT are potentially sensitive and convenient monitors of lung abnormalities following total body irradiation (TBI). Methods have been developed to compare pre- and post-TBI CT of lung. The average local features of a cross-sectional lung slice are extracted from three peripheral regions of interest in the anterior, posterior, and lateral portions of the CT image. Also, density profiles across a specific region may be obtained. These may be compared first for verification of patient position and breathing status and then for changes between pre- and post-TBI. These may also be compared with radiation dose profiles through the lung. A preliminary study on 21 leukemia patients undergoing total body irradiation indicates the following: (a) Density gradients of patients' lungs in the antero-posterior direction show a marked heterogeneity before and after transplantation compared with normal lungs. The patients with departures from normal density gradients pre-TBI correlate with later pulmonary complications. (b) Measurements of average peripheral lung densities have demonstrated that the average lung density in the younger age group is substantially higher: pre-TBI, the average CT number (1,000 scale) is -638 +/- 39 Hounsfield unit (HU) for 0-10 years old and -739 +/- 53 HU for 21-40 years old. (c) Density profiles showed no post-TBI regional changes in lung density corresponding to the dose profile across the lung, so no differentiation of a radiation-specific effect has yet been possible. Computed tomographic density profiles in the antero-posterior direction are successfully used to verify positioning of the CT slice and the breathing level of the lung.
Comparison of volumetric breast density estimations from mammography and thorax CT
NASA Astrophysics Data System (ADS)
Geeraert, N.; Klausz, R.; Cockmartin, L.; Muller, S.; Bosmans, H.; Bloch, I.
2014-08-01
Breast density has become an important issue in current breast cancer screening, both as a recognized risk factor for breast cancer and by decreasing screening efficiency through the masking effect. Different qualitative and quantitative methods have been proposed to evaluate area-based breast density and volumetric breast density (VBD). We propose a validation method comparing the computation of VBD obtained from digital mammographic images (VBDMX) with the computation of VBD from thorax CT images (VBDCT). We computed VBDMX by applying a conversion function to the pixel values in the mammographic images, based on models determined from images of breast equivalent material. VBDCT is computed from the average Hounsfield Unit (HU) over the manually delineated breast volume in the CT images. This average HU is then compared to the HU of adipose and fibroglandular tissues from patient images. The VBDMX method was applied to 663 mammographic patient images taken on two Siemens Inspiration systems (hospL) and one GE Senographe Essential system (hospJ). For the comparison study, we collected images from patients who had a thorax CT and a mammography screening exam within the same year. In total, thorax CT images corresponding to 40 breasts (hospL) and 47 breasts (hospJ) were retrieved. Averaged over the 663 mammographic images, the median VBDMX was 14.7%. The density distribution and the inverse correlation between VBDMX and breast thickness were found as expected. The average difference between VBDMX and VBDCT is smaller for hospJ (4%) than for hospL (10%). This study shows the possibility to compare VBDMX with the VBD from thorax CT exams, without additional examinations. In spite of the limitations caused by poorly defined breast limits, the calibration of mammographic images to local VBD provides opportunities for further quantitative evaluations.
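Comparing the average HU over the breast volume with adipose and fibroglandular reference values amounts to a two-compartment linear mixing model; a minimal sketch with illustrative (not the paper's) reference HU values:

```python
def vbd_from_hu(mean_hu, hu_adipose=-100.0, hu_fibroglandular=40.0):
    """Volumetric breast density as the fibroglandular volume fraction
    implied by the mean HU over the delineated breast volume, assuming a
    two-compartment linear mix of adipose and fibroglandular tissue.
    The reference HU values are illustrative defaults, not the study's."""
    frac = (mean_hu - hu_adipose) / (hu_fibroglandular - hu_adipose)
    return min(1.0, max(0.0, frac))  # clamp to the physical range [0, 1]
```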
Image quality and stability of image-guided radiotherapy (IGRT) devices: A comparative study.
Stock, Markus; Pasler, Marlies; Birkfellner, Wolfgang; Homolka, Peter; Poetter, Richard; Georg, Dietmar
2009-10-01
Our aim was to implement standards for quality assurance of IGRT devices used in our department and to compare their performances with that of a CT simulator. We investigated image quality parameters for three devices over a period of 16 months. A multislice CT was used as a benchmark and results related to noise, spatial resolution, low contrast visibility (LCV) and uniformity were compared with a cone beam CT (CBCT) at a linac and simulator. All devices performed well in terms of LCV and, in fact, exceeded vendor specifications. MTF was comparable between CT and linac CBCT. Integral nonuniformity was, on average, 0.002 for the CT and 0.006 for the linac CBCT. Uniformity, LCV and MTF varied depending on the protocols used for the linac CBCT. Contrast-to-noise ratio was an average of 51% higher for the CT than for the linac and simulator CBCT. No significant time trend was observed and tolerance limits were implemented. Reasonable differences in image quality between CT and CBCT were observed. Further research and development are necessary to increase image quality of commercially available CBCT devices in order for them to serve the needs for adaptive and/or online planning.
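Integral nonuniformity is conventionally computed from ROI mean values across a uniform phantom slice as (max − min) / (max + min); a sketch assuming positive (offset) pixel values, since the study does not state its exact convention:

```python
def integral_nonuniformity(roi_means):
    """Integral nonuniformity over ROI mean pixel values from a uniform
    phantom slice: (max - min) / (max + min). Assumes offset, positive
    pixel values (e.g. HU + 1000); the study's convention is an assumption."""
    vmax, vmin = max(roi_means), min(roi_means)
    return (vmax - vmin) / (vmax + vmin)
```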
Giannitto, Caterina; Campoleoni, Mauro; Maccagnoni, Sara; Angileri, Alessio Salvatore; Grimaldi, Maria Carmela; Giannitto, Nino; De Piano, Francesca; Ancona, Eleonora; Biondetti, Pietro Raimondo; Esposito, Andrea Alessandro
2018-03-01
To determine the frequency of unindicated CT phases and the resultant excess of absorbed radiation doses to the uterus and ovaries in women of reproductive age who have undergone CT for non-traumatic abdomino-pelvic emergencies. We reviewed all abdomino-pelvic CT examinations in women of reproductive age (40 years or less) between 1 June 2012 and 31 January 2015. We evaluated the appropriateness of each CT phase on the basis of clinical indications, according to ACR appropriateness criteria and evidence-based data from the literature. The doses to the uterus and ovaries for each phase were calculated with the CTEXPO software, taking into consideration the size-specific dose estimate (SSDE) after measuring the size of every single patient. The final cohort was composed of 76 female patients with an average age of 30 years (range 19 to 40). In total, 197 CT phases were performed, with an average of 2.6 phases per patient. Of these, 93 (47%) were unindicated, with an average of 1.2 inappropriate phases per patient. Unindicated scans were most frequent for appendicitis and unlocalized abdominal pain. The excesses of mean radiation doses to the uterus and ovaries due to unindicated phases were 38 and 33 mSv per patient, respectively. In our experience, unindicated additional CT phases were numerous, with a significant excess radiation dose and no associated clinical benefit. This excess of radiation could have been avoided by widespread adoption of the ACR appropriateness criteria and evidence-based data from the literature.
Radiography versus computed tomography for displacement assessment in calcaneal fractures.
Ogawa, Brent K; Charlton, Timothy P; Thordarson, David B
2009-10-01
Coronal computed tomography (CT) scans are commonly used in fracture classification systems for calcaneus fractures. However, they may not accurately reflect the amount of fracture displacement. The purpose of this paper was to determine whether lateral radiographs provide superior assessment of the displacement of the posterior facet compared to coronal CT scans. Lateral radiographs of calcaneus fractures were compared with CT coronal images of the posterior facet in 30 displaced intra-articular calcaneus fractures. The average patient age was 39 years. Using a Picture Archiving and Communication System (PACS), measurements were obtained to quantify the amount of displacement on the lateral radiograph and compared with the amount of depression on corresponding coronal CT scans. On lateral radiographs, the angle of the depressed portion of the posterior facet relative to the undersurface of the calcaneus averaged 28.2 degrees; Bohler's angle averaged 12.7 degrees. These numbers were poorly correlated (r = 0.25). In corresponding CT images from posterior to anterior, the difference in the amount of displacement of the lateral portion of the displaced articular facet versus the nondisplaced medial, constant fragment was minimal and consistently underestimated the amount of displacement. Underestimation of the amount of depression and rotation of the posterior facet fragment was seen on the coronal CT scan. We attribute this finding to the combined rotation and depression of the posterior facet, which may not be measured accurately with the typical semicoronal CT orientation. While sagittal reconstructed images would show this depression better, if they are unavailable we recommend using lateral radiographs to better gauge the amount of fracture displacement.
Generation of Fullspan Leading-Edge 3D Ice Shapes for Swept-Wing Aerodynamic Testing
NASA Technical Reports Server (NTRS)
Camello, Stephanie C.; Lee, Sam; Lum, Christopher; Bragg, Michael B.
2016-01-01
The deleterious effect of ice accretion on aircraft is often assessed through dry-air flight and wind tunnel testing with artificial ice shapes. This paper describes a method to create fullspan swept-wing artificial ice shapes from partial-span ice segments acquired in the NASA Glenn Icing Research Tunnel for aerodynamic wind-tunnel testing. Full-scale ice accretion segments were laser scanned from the Inboard, Midspan, and Outboard wing station models of the 65% scale Common Research Model (CRM65) aircraft configuration. These were interpolated and extrapolated using a weighted averaging method to generate fullspan ice shapes from the root to the tip of the CRM65 wing. The results showed that this interpolation method was able to preserve many of the highly three-dimensional features typically found on swept-wing ice accretions. The interpolated fullspan ice shapes were then scaled to fit the leading edge of an 8.9% scale version of the CRM65 wing for aerodynamic wind-tunnel testing. Reduced-fidelity versions of the fullspan ice shapes were also created in which most of the local three-dimensional features were removed. The fullspan artificial ice shapes and the reduced-fidelity versions were manufactured using stereolithography.
Gutierrez-Corea, Federico-Vladimir; Manso-Callejo, Miguel-Angel; Moreno-Regidor, María-Pilar; Velasco-Gómez, Jesús
2014-01-01
This study was motivated by the need to improve densification of Global Horizontal Irradiance (GHI) observations, increasing the number of surface weather stations that observe it, using sensors with a sub-hour periodicity and examining the methods of spatial GHI estimation (by interpolation) with that periodicity in other locations. The aim of the present research project is to analyze the goodness of 15-minute GHI spatial estimations for five methods in the territory of Spain (three geo-statistical interpolation methods, one deterministic method and the HelioSat2 method, which is based on satellite images). The research concludes that, when the work area has adequate station density, the best method for estimating GHI every 15 min is Regression Kriging interpolation using GHI estimated from satellite images as one of the input variables. On the contrary, when station density is low, the best method is estimating GHI directly from satellite images. A comparison between the GHI observed by volunteer stations and the estimation model applied concludes that 67% of the volunteer stations analyzed present values within the margin of error (average of ±2 standard deviations). PMID:24732102
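A deterministic spatial interpolator of the kind compared in this study can be sketched as inverse-distance weighting of station observations; the abstract does not name its deterministic method, so this is purely illustrative:

```python
def idw(points, values, target, power=2.0):
    """Inverse-distance-weighted estimate at `target` from scattered
    station observations. `points` are (x, y) station coordinates,
    `values` the observed GHI at each station (illustrative stand-in
    for the study's unnamed deterministic interpolator)."""
    num = den = 0.0
    for (x, y), v in zip(points, values):
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0:
            return v  # target coincides with a station
        w = d2 ** (-power / 2)
        num += w * v
        den += w
    return num / den
```

Geostatistical alternatives such as Regression Kriging additionally model spatial covariance and, as in the study, can take satellite-derived GHI as an auxiliary variable.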
Moho map of South America from receiver functions and surface waves
NASA Astrophysics Data System (ADS)
Lloyd, Simon; van der Lee, Suzan; FrançA, George Sand; AssumpçãO, Marcelo; Feng, Mei
2010-11-01
We estimate crustal structure and thickness of South America north of roughly 40°S. To this end, we analyzed receiver functions from 20 relatively new temporary broadband seismic stations deployed across eastern Brazil. In the analysis we include teleseismic and some regional events, particularly for stations that recorded few suitable earthquakes. We first estimate crustal thickness and average Poisson's ratio using two different stacking methods. We then combine the new crustal constraints with results from previous receiver function studies. To interpolate the crustal thickness between the station locations, we jointly invert these Moho point constraints, Rayleigh wave group velocities, and regional S and Rayleigh waveforms for a continuous map of Moho depth. The new tomographic Moho map suggests that Moho depth and Moho relief vary slightly with age within the Precambrian crust. Whether or not a positive correlation between crustal thickness and geologic age is derived from the pre-interpolation point constraints depends strongly on the selected subset of receiver functions. This implies that using only pre-interpolation point constraints (receiver functions) inadequately samples the spatial variation in geologic age. The new Moho map also reveals an anomalously deep Moho beneath the oldest core of the Amazonian Craton.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, D; Chen, X; Li, X
2016-06-15
Purpose: To investigate the feasibility of assessing treatment response using CTs acquired during delivery of radiation therapy (RT) for esophageal cancer. Methods: Daily CTs acquired using a CT-on-Rails during routine CT-guided RT for 20 patients with stage II to IV esophageal cancer were analyzed. All patients were treated with combined chemotherapy and IMRT of 45–50 Gy in 25 fractions, and were followed up for two years. Contours of GTV, spinal cord, and non-specified tissue (NST) irradiated with low dose were generated on each daily CT. A series of CT-texture metrics, including the Hounsfield Unit (HU) histogram, mean HU, standard deviation (STD), entropy, and energy, were obtained in these contours on each daily CT. The changes of these metrics and GTV volume during RT delivery were calculated and correlated with treatment outcome. Results: Changes in CT texture (e.g., HU histogram) in GTV and spinal cord (but not in NST) were observed during RT delivery and increased consistently with radiation dose. For the 20 cases studied, the mean HU in GTV was reduced on average by 4.0 HU from the first to the last fraction; 8 patients (responders) had larger reductions in GTV mean HU (average 7.8 HU) with an average GTV reduction of 51%, and showed consistent increases in GTV STD and entropy with radiation dose. The remaining 12 patients (non-responders) had smaller reductions in GTV mean HU (average 1.5 HU) and almost no change in STD and entropy. Of the 8 responders, 2 experienced complete response, 7 (88%) survived and 1 died. In contrast, of the 12 non-responders, 4 (33%) survived and 8 died. Conclusion: Radiation can induce changes in CT texture in tumor (e.g., mean HU) during the delivery of RT for esophageal cancer. If validated with more data, such changes may be used for early prediction of RT response for esophageal cancer.
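The first-order histogram metrics named above (mean HU, STD, entropy, energy) can be computed directly from the ROI's HU values; the bin count and HU range below are illustrative choices, not taken from the study.

```python
import numpy as np

def texture_metrics(hu_values, bins=64, hu_range=(-200, 200)):
    """First-order texture metrics from the HU histogram of an ROI.

    Returns mean HU, standard deviation, Shannon entropy (bits), and
    energy (sum of squared bin probabilities) of the histogram."""
    hu = np.asarray(hu_values, dtype=float)
    hist, _ = np.histogram(hu, bins=bins, range=hu_range)
    p = hist / hist.sum()
    p_nz = p[p > 0]                         # avoid log(0)
    entropy = -np.sum(p_nz * np.log2(p_nz))
    energy = np.sum(p ** 2)
    return {"mean_hu": hu.mean(), "std": hu.std(),
            "entropy": entropy, "energy": energy}
```

A perfectly uniform ROI has zero entropy and unit energy; spreading HU values across bins raises entropy and lowers energy, which matches the responder trend reported above.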
Jiang, Hao; Zhao, Dehua; Cai, Ying; An, Shuqing
2012-01-01
In previous attempts to identify aquatic vegetation from remotely-sensed images using classification trees (CT), the images used to apply CT models to different times or locations necessarily originated from the same satellite sensor as that from which the original images used in model development came, greatly limiting the application of CT. We have developed an effective normalization method to improve the robustness of CT models when applied to images originating from different sensors and dates. A total of 965 ground-truth samples of aquatic vegetation types were obtained in 2009 and 2010 in Taihu Lake, China. Using relevant spectral indices (SI) as classifiers, we manually developed a stable CT model structure and then applied a standard CT algorithm to obtain quantitative (optimal) thresholds from 2009 ground-truth data and images from Landsat7-ETM+, HJ-1B-CCD, Landsat5-TM and ALOS-AVNIR-2 sensors. Optimal CT thresholds produced average classification accuracies of 78.1%, 84.7% and 74.0% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. However, the optimal CT thresholds for different sensor images differed from each other, with an average relative variation (RV) of 6.40%. We developed and evaluated three new approaches to normalizing the images. The best-performing method (Method of 0.1% index scaling) normalized the SI images using tailored percentages of extreme pixel values. Using the images normalized by Method of 0.1% index scaling, CT models for a particular sensor in which thresholds were replaced by those from the models developed for images originating from other sensors provided average classification accuracies of 76.0%, 82.8% and 68.9% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. Applying the CT models developed for normalized 2009 images to 2010 images resulted in high classification (78.0%–93.3%) and overall (92.0%–93.1%) accuracies. 
Our results suggest that Method of 0.1% index scaling provides a feasible way to apply CT models directly to images from sensors or time periods that differ from those of the images used to develop the original models.
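The "Method of 0.1% index scaling" is described only as normalizing the SI images using tailored percentages of extreme pixel values; a plausible sketch, assuming a clip-and-rescale between the 0.1st and 99.9th percentiles:

```python
import numpy as np

def percentile_scale(image, p=0.1):
    """Rescale a spectral-index image to [0, 1] using the p-th and
    (100 - p)-th percentiles as the extremes, clipping values beyond
    them. The exact normalization details are assumptions."""
    lo, hi = np.percentile(image, [p, 100.0 - p])
    scaled = (np.asarray(image, float) - lo) / (hi - lo)
    return np.clip(scaled, 0.0, 1.0)
```

Because the extremes are defined in percentile terms, the same CT thresholds become portable across sensors with different radiometric ranges.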
3d expansions of 5d instanton partition functions
NASA Astrophysics Data System (ADS)
Nieri, Fabrizio; Pan, Yiwen; Zabzine, Maxim
2018-04-01
We propose a set of novel expansions of Nekrasov's instanton partition functions. Focusing on 5d supersymmetric pure Yang-Mills theory with unitary gauge group on C²_{q,t⁻¹} × S¹, we show that the instanton partition function admits expansions in terms of partition functions of unitary gauge theories living on the 3d subspaces C_q × S¹, C_{t⁻¹} × S¹ and their intersection along S¹. These new expansions are natural from the BPS/CFT viewpoint, as they can be matched with W_{q,t} correlators involving an arbitrary number of screening charges of two kinds. Our constructions generalize and interpolate existing results in the literature.
Validation of a 4D-PET Maximum Intensity Projection for Delineation of an Internal Target Volume
DOE Office of Scientific and Technical Information (OSTI.GOV)
Callahan, Jason, E-mail: jason.callahan@petermac.org; Kron, Tomas; Peter MacCallum Department of Oncology, The University of Melbourne, Melbourne
2013-07-15
Purpose: The delineation of internal target volumes (ITVs) in radiation therapy of lung tumors is currently performed by use of either free-breathing (FB) {sup 18}F-fluorodeoxyglucose-positron emission tomography-computed tomography (FDG-PET/CT) or 4-dimensional (4D)-CT maximum intensity projection (MIP). In this report we validate the use of 4D-PET-MIP for the delineation of target volumes in both a phantom and patients. Methods and Materials: A phantom with 3 hollow spheres was prepared, surrounded first by air and then by water. The spheres and water background were filled with a mixture of {sup 18}F and radiographic contrast medium. A 4D-PET/CT scan was performed of the phantom while moving in 4 different breathing patterns using a programmable motion device. Nine patients with an FDG-avid lung tumor who underwent FB and 4D-PET/CT and had >5 mm of tumor motion were included for analysis. The 3 spheres and patient lesions were contoured by 2 contouring methods (40% of maximum and PET edge) on the FB-PET, FB-CT, 4D-PET, 4D-PET-MIP, and 4D-CT-MIP. The concordance between the different contoured volumes was calculated using a Dice coefficient (DC). The difference in lung tumor volumes between FB-PET and 4D-PET volumes was also measured. Results: The average DC in the phantom using 40% and PET edge, respectively, was lowest for FB-PET/CT (DCAir = 0.72/0.67, DCBackground = 0.63/0.62) and highest for 4D-PET/CT-MIP (DCAir = 0.84/0.83, DCBackground = 0.78/0.73). The average DC in the 9 patients using 40% and PET edge, respectively, was also lowest for FB-PET/CT (DC = 0.45/0.44) and highest for 4D-PET/CT-MIP (DC = 0.72/0.73). In the 9 lesions, the target volumes of the FB-PET using 40% and PET edge, respectively, were on average 40% and 45% smaller than the 4D-PET-MIP. Conclusion: A 4D-PET-MIP produces volumes with the highest concordance with 4D-CT-MIP across multiple breathing patterns and lesion sizes in both a phantom and among patients.
Free-breathing PET/CT consistently underestimates the ITV when compared with 4D-PET/CT for a lesion affected by respiration.
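The concordance measure used throughout, the Dice coefficient, is a standard overlap metric and can be computed directly from two binary masks:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks:
    DC = 2 |A ∩ B| / (|A| + |B|).  1.0 means identical volumes."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```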
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Z; Koyfman, S; Xia, P
2015-06-15
Purpose: To evaluate geometric and dosimetric uncertainties of CT-CBCT deformable image registration (DIR) algorithms using digital phantoms generated from real patients. Methods: We selected ten H&N cancer patients with adaptive IMRT. For each patient, a planning CT (CT1), a replanning CT (CT2), and a pretreatment CBCT (CBCT1) were used as the basis for digital phantom creation. Manually adjusted meshes were created for selected ROIs (e.g. PTVs, brainstem, spinal cord, mandible, and parotids) on CT1 and CT2. The mesh vertices were input into a thin-plate spline algorithm to generate a reference displacement vector field (DVF). The reference DVF was applied to CBCT1 to create a simulated mid-treatment CBCT (CBCT2). The CT-CBCT digital phantom consisted of CT1 and CBCT2, which were linked by the reference DVF. Three DIR algorithms (Demons, B-Spline, and intensity-based) were applied to these ten digital phantoms. The images, ROIs, and volumetric doses were mapped from CT1 to CBCT2 using the DVFs computed by these three DIRs and compared to those mapped using the reference DVF. Results: The average Dice coefficients for selected ROIs were from 0.83 to 0.94 for Demons, from 0.82 to 0.95 for B-Spline, and from 0.67 to 0.89 for intensity-based DIR. The average Hausdorff distances for selected ROIs were from 2.4 to 6.2 mm for Demons, from 1.8 to 5.9 mm for B-Spline, and from 2.8 to 11.2 mm for intensity-based DIR. The average absolute dose errors for selected ROIs were from 0.7 to 2.1 Gy for Demons, from 0.7 to 2.9 Gy for B-Spline, and from 1.3 to 4.5 Gy for intensity-based DIR. Conclusion: Using clinically realistic CT-CBCT digital phantoms, Demons and B-Spline were shown to have similar geometric and dosimetric uncertainties while intensity-based DIR had the worst uncertainties. CT-CBCT DIR has the potential to provide accurate CBCT-based dose verification for H&N adaptive radiotherapy.
Z Shen: None; K Bzdusek: an employee of Philips Healthcare; S Koyfman: None; P Xia: received research grants from Philips Healthcare and Siemens Healthcare.
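Alongside the Dice coefficient, the study reports Hausdorff distances between mapped and reference ROIs. A brute-force sketch for two point clouds (for large surface meshes a spatial index such as scipy.spatial.cKDTree would be preferable):

```python
import numpy as np

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two point clouds,
    e.g. surface vertices of a DIR-mapped and a reference ROI:
    the worst-case nearest-neighbour distance in either direction."""
    a = np.asarray(points_a, float)
    b = np.asarray(points_b, float)
    # Pairwise Euclidean distances, shape (len(a), len(b))
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```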
Interactive lung segmentation in abnormal human and animal chest CT scans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kockelkorn, Thessa T. J. P., E-mail: thessa@isi.uu.nl; Viergever, Max A.; Schaefer-Prokop, Cornelia M.
2014-08-15
Purpose: Many medical image analysis systems require segmentation of the structures of interest as a first step. For scans with gross pathology, automatic segmentation methods may fail. The authors’ aim is to develop a versatile, fast, and reliable interactive system to segment anatomical structures. In this study, this system was used for segmenting lungs in challenging thoracic computed tomography (CT) scans. Methods: In volumetric thoracic CT scans, the chest is segmented and divided into 3D volumes of interest (VOIs), containing voxels with similar densities. These VOIs are automatically labeled as either lung tissue or nonlung tissue. The automatic labeling results can be corrected using an interactive or a supervised interactive approach. When using the supervised interactive system, the user is shown the classification results per slice, whereupon he/she can adjust incorrect labels. The system is retrained continuously, taking the corrections and approvals of the user into account. In this way, the system learns to make a better distinction between lung tissue and nonlung tissue. When using the interactive framework without supervised learning, the user corrects all incorrectly labeled VOIs manually. Both interactive segmentation tools were tested on 32 volumetric CT scans of pigs, mice and humans, containing pulmonary abnormalities. Results: On average, supervised interactive lung segmentation took under 9 min of user interaction. Algorithm computing time was 2 min on average, but can easily be reduced. On average, 2.0% of all VOIs in a scan had to be relabeled. Lung segmentation using the interactive segmentation method took on average 13 min and involved relabeling 3.0% of all VOIs on average. The resulting segmentations correspond well to manual delineations of eight axial slices per scan, with an average Dice similarity coefficient of 0.933.
Conclusions: The authors have developed two fast and reliable methods for interactive lung segmentation in challenging chest CT images. Neither system requires prior knowledge of the scans under consideration, and both work on a variety of scans.
Impact of Anatomical Location on Value of CT-PET Co-Registration for Delineation of Lung Tumors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fitton, Isabelle; Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam; Steenbakkers, Roel J.H.M.
2008-04-01
Purpose: To derive guidelines for the need to use positron emission tomography (PET) for delineation of the primary tumor (PT) according to its anatomical location in the lung. Methods and Materials: In 22 patients with non-small-cell lung cancer, thoracic X-ray computed tomography (CT) and PET were performed. Eleven radiation oncologists delineated the PT on the CT and on the CT-PET registered scans. The PTs were classified into two groups. In Group I patients, the PT was surrounded by lung or visceral pleura, without venous invasion, and without extension to the chest wall or to the mediastinum over more than one quarter of its surface. In Group II patients, the PT invaded the hilar region, heart, great vessels, pericardium, or mediastinum over more than one quarter of its surface and/or was associated with atelectasis. A comparison of interobserver variability for each group was performed and expressed as a local standard deviation. Results: The comparison of delineations showed a good reproducibility for Group I, with an average SD of 0.4 cm on CT and an average SD of 0.3 cm on CT-PET (p = 0.1628). There was a significant improvement with CT-PET for Group II, with an average SD of 1.3 cm on CT and SD of 0.4 cm on CT-PET (p = 0.0003). The improvement was mainly located at the atelectasis/tumor interface. At the tumor/lung and tumor/hilum interfaces, the observer variation was similar with both modalities. Conclusions: Using PET for PT delineation is mandatory to decrease interobserver variability in the hilar region, heart, great vessels, pericardium, mediastinum, and/or the region associated with atelectasis; however, it is not essential for delineation of a PT surrounded by lung or visceral pleura, without venous invasion or extension to the chest wall.
Madrzak, Dorota; Mikołajczak, Renata; Kamiński, Grzegorz
2016-01-01
The aim of this study was the assessment of the utility of somatostatin receptor scintigraphy (SRS) by SPECT imaging using 99mTc-EDDA/HYNIC-Tyr3-octreotide (99mTc-EDDA/HYNIC-TOC) in patients with neuroendocrine neoplasm (NEN) or suspected NEN referred to the Nuclear Medicine Dept. of Voivodship Specialty Center in Rzeszow. A selected group of patients was also referred to 68Ga PET/CT. The question posed was what proportion of patients would have their management changed by 68Ga PET/CT. The distribution of somatostatin receptors was imaged using 99mTc-EDDA/HYNIC-TOC in 61 planar and SPECT studies between 13/05/2010 and 04/02/2013 in the Nuclear Medicine Dept. of Voivodship Specialty Center in Rzeszow. The patient age was within a range of 17-80, with an average age of 57.6. The average age of women (65% of patients overall) was 55.6 and the average age of men (35% of patients overall) was 61.4. In 46 participants (75% of the study group) who underwent SRS, NEN was documented using pathology tests. Selected patients were referred to PET/CT with 68Ga-labeled somatostatin analogs, DOTATATE or DOTANOC. This study group consisted of 14 female and 10 male participants with an age range of 35-77 and an average age of 55.5 years. Patients were classified into 3 groups, as follows: detection - referral due to clinical symptoms and/or biochemical markers (CgA-Chromogranin A, IAA-indoleacetic acid) with the aim of primary diagnosis; staging - referral with the aim of assessment of tumor spread; and follow-up - assessment of the therapy. Out of 61 patients, 24 underwent both 99mTc-EDDA/HYNIC-Tyr3-octreotide SPECT and 68Ga PET/CT. The result of PET/CT was used as the basis for further evaluation. The patients were therefore divided into groups: true positive (TP; confirmed presence of tissue somatostatin receptors with 68Ga PET/CT) and true negative (TN; 68Ga PET/CT did not detect any changes, the results were comparable, and they had the same influence on the treatment protocol).
In the case of SPECT, the results were assigned as follows: TP or TN (in cases where the results were confirmed by 68Ga PET/CT), FP (the patient's scintigraphy demonstrated a focal change by SPECT but not PET/CT) and FN (99mTc-EDDA/HYNIC-Tyr3-octreotide SPECT failed to demonstrate any abnormalities; however, the treatment protocol was changed after PET/CT). The accuracy of SPECT diagnosis was found to be as high as 91.6%. In only 8.4% of patients did the additional PET/CT with a 68Ga-labeled somatostatin analog change the treatment protocol.
Why GPS makes distances bigger than they are
Ranacher, Peter; Brunauer, Richard; Trutschnig, Wolfgang; Van der Spek, Stefan; Reich, Siegfried
2016-01-01
ABSTRACT Global navigation satellite systems such as the Global Positioning System (GPS) is one of the most important sensors for movement analysis. GPS is widely used to record the trajectories of vehicles, animals and human beings. However, all GPS movement data are affected by both measurement and interpolation errors. In this article we show that measurement error causes a systematic bias in distances recorded with a GPS; the distance between two points recorded with a GPS is – on average – bigger than the true distance between these points. This systematic ‘overestimation of distance’ becomes relevant if the influence of interpolation error can be neglected, which in practice is the case for movement sampled at high frequencies. We provide a mathematical explanation of this phenomenon and illustrate that it functionally depends on the autocorrelation of GPS measurement error (C). We argue that C can be interpreted as a quality measure for movement data recorded with a GPS. If there is a strong autocorrelation between any two consecutive position estimates, they have very similar error. This error cancels out when average speed, distance or direction is calculated along the trajectory. Based on our theoretical findings we introduce a novel approach to determine C in real-world GPS movement data sampled at high frequencies. We apply our approach to pedestrian trajectories and car trajectories. We found that the measurement error in the data was strongly spatially and temporally autocorrelated and give a quality estimate of the data. Most importantly, our findings are not limited to GPS alone. The systematic bias and its implications are bound to occur in any movement data collected with absolute positioning if interpolation error can be neglected. PMID:27019610
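The claimed effect, that weakly autocorrelated position error inflates recorded path length while strongly autocorrelated error largely cancels, can be reproduced with a toy simulation using AR(1) noise; all parameters below are illustrative, not from the article:

```python
import numpy as np

def path_length_ratio(n=20000, step=1.0, sigma=0.5, rho=0.0, seed=0):
    """Simulate a straight walk sampled with Gaussian position noise whose
    lag-1 autocorrelation is rho (AR(1) model per axis), and return the
    ratio recorded path length / true path length."""
    rng = np.random.default_rng(seed)
    true = np.stack([np.arange(n) * step, np.zeros(n)], axis=1)
    innov = rng.normal(0.0, sigma, size=(n, 2))
    noise = np.zeros_like(true)
    noise[0] = innov[0]
    for i in range(1, n):
        # AR(1) update; scaling keeps the stationary std equal to sigma
        noise[i] = rho * noise[i - 1] + np.sqrt(1.0 - rho**2) * innov[i]
    recorded = true + noise
    rec_len = np.linalg.norm(np.diff(recorded, axis=0), axis=1).sum()
    return rec_len / ((n - 1) * step)
```

With rho near 1, consecutive fixes share almost the same error, so the per-segment error shrinks and the ratio approaches 1, mirroring the article's interpretation of C as a quality measure.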
Covariance Function for Nearshore Wave Assimilation Systems
2018-01-30
covariance can be modeled by a parameterized Gaussian function, for nearshore wave assimilation applications, the covariance function depends primarily on...case of missing values at the compiled time series, the gaps were filled by weighted interpolation. The weights depend on the number of the...averaging, in order to create the continuous time series, filters out the dependency on the instantaneous meteorological and oceanographic conditions
Site index charts for Douglas-fir in the Pacific Northwest.
Grover A. Choate; Floyd A. Johnson
1958-01-01
Charts in this report can be used to estimate site index for Douglas-fir from stand age and from average total height of dominant and codominant trees. Table 1 and figure 2 in USDA Technical Bulletin 201 have been used for this purpose in the past. However, the table requires time-consuming interpolation and the figure gives only rough approximations.
Time series inversion of spectra from ground-based radiometers
NASA Astrophysics Data System (ADS)
Christensen, O. M.; Eriksson, P.
2013-02-01
Retrieving time series of atmospheric constituents from ground-based spectrometers often requires different temporal averaging depending on the altitude region in focus. This can lead to several datasets existing for one instrument which complicates validation and comparisons between instruments. This paper puts forth a possible solution by incorporating the temporal domain into the maximum a posteriori (MAP) retrieval algorithm. The state vector is increased to include measurements spanning a time period, and the temporal correlations between the true atmospheric states are explicitly specified in the a priori uncertainty matrix. This allows the MAP method to effectively select the best temporal smoothing for each altitude, removing the need for several datasets to cover different altitudes. The method is compared to traditional averaging of spectra using a simulated retrieval of water vapour in the mesosphere. The simulations show that the method offers a significant advantage compared to the traditional method, extending the sensitivity an additional 10 km upwards without reducing the temporal resolution at lower altitudes. The method is also tested on the OSO water vapour microwave radiometer confirming the advantages found in the simulation. Additionally, it is shown how the method can interpolate data in time and provide diagnostic values to evaluate the interpolated data.
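One common way to "explicitly specify temporal correlations in the a priori uncertainty matrix" is an exponential correlation kernel; the kernel form and parameters below are assumptions, since the abstract does not give them:

```python
import numpy as np

def temporal_covariance(times, sigma, tau):
    """A priori covariance with exponential temporal correlation:
    S_ij = sigma^2 * exp(-|t_i - t_j| / tau),
    where tau sets how strongly atmospheric states at nearby
    measurement times are assumed to resemble each other."""
    t = np.asarray(times, float)
    return sigma**2 * np.exp(-np.abs(t[:, None] - t[None, :]) / tau)
```

In a MAP retrieval this block would sit on the (time × time) part of the a priori covariance, letting the inversion trade temporal smoothing against altitude sensitivity instead of pre-averaging spectra.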
Dynamic Development of Regional Cortical Thickness and Surface Area in Early Childhood.
Lyall, Amanda E; Shi, Feng; Geng, Xiujuan; Woolson, Sandra; Li, Gang; Wang, Li; Hamer, Robert M; Shen, Dinggang; Gilmore, John H
2015-08-01
Cortical thickness (CT) and surface area (SA) are altered in many neuropsychiatric disorders and are correlated with cognitive functioning. Little is known about how these components of cortical gray matter develop in the first years of life. We studied the longitudinal development of regional CT and SA expansion in healthy infants from birth to 2 years. CT and SA have distinct and heterogeneous patterns of development that are exceptionally dynamic; overall CT increases by an average of 36.1%, while cortical SA increases 114.6%. By age 2, CT is on average 97% of adult values, compared with SA, which is 69%. This suggests that early identification, prevention, and intervention strategies for neuropsychiatric illness need to be targeted to this period of rapid postnatal brain development, and that SA expansion is the principal driving factor in cortical volume after 2 years of age.
Comparing CT perfusion with oxygen partial pressure in a rabbit VX2 soft-tissue tumor model.
Sun, Chang-Jin; Li, Chao; Lv, Hai-Bo; Zhao, Cong; Yu, Jin-Ming; Wang, Guang-Hui; Luo, Yun-Xiu; Li, Yan; Xiao, Mingyong; Yin, Jun; Lang, Jin-Yi
2014-01-01
The aim of this study was to evaluate the oxygen partial pressure in a rabbit VX2 tumor model using 64-slice perfusion CT and to compare the results with those obtained using the oxygen microelectrode method. Perfusion CT was performed on 45 successfully constructed rabbit models of a VX2 brain tumor. The perfusion values of the brain tumor region of interest, the blood volume (BV), the time to peak (TTP) and the peak enhancement intensity (PEI) were measured. The results were compared with the partial pressure of oxygen (PO2) of that region of interest obtained using the oxygen microelectrode method. The perfusion values of the brain tumor region of interest in the 45 rabbit models ranged from 1.3 to 127.0 ml/min/ml (average, 21.1 ± 26.7 ml/min/ml); BV ranged from 1.2 to 53.5 ml/100g (average, 22.2 ± 13.7 ml/100g); PEI ranged from 8.7 to 124.6 HU (average, 43.5 ± 28.7 HU); and TTP ranged from 8.2 to 62.3 s (average, 38.8 ± 14.8 s). The PO2 in the corresponding region ranged from 0.14 to 47 mmHg (average, 16 ± 14.8 mmHg). The perfusion CT values positively correlated with the tumor PO2, which can be used for evaluating tumor hypoxia in clinical practice.
Cros, Maria; Geleijns, Jacob; Joemai, Raoul M S; Salvadó, Marçal
2016-01-01
The purpose of this study was to estimate the patient dose from perfusion CT examinations of the brain, lung tumors, and the liver on a cone-beam 320-MDCT scanner using a Monte Carlo simulation and the recommendations of the International Commission on Radiological Protection (ICRP). A Monte Carlo simulation based on the Electron Gamma Shower Version 4 package code was used to calculate organ doses and the effective dose in the reference computational phantoms for an adult man and adult woman as published by the ICRP. Three perfusion CT acquisition protocols--brain, lung tumor, and liver perfusion--were evaluated. Additionally, dose assessments were performed for the skin and for the eye lens. Conversion factors were obtained to estimate effective doses and organ doses from the volume CT dose index and dose-length product. The sex-averaged effective doses were approximately 4 mSv for perfusion CT of the brain and were between 23 and 26 mSv for the perfusion CT body protocols. The eye lens dose from the brain perfusion CT examination was approximately 153 mGy. The sex-averaged peak entrance skin dose (ESD) was 255 mGy for the brain perfusion CT studies, 157 mGy for the lung tumor perfusion CT studies, and 172 mGy for the liver perfusion CT studies. The perfusion CT protocols for imaging the brain, lung tumors, and the liver performed on a 320-MDCT scanner yielded patient doses that are safely below the threshold doses for deterministic effects. The eye lens dose, peak ESD, and effective doses can be estimated for other clinical perfusion CT examinations from the conversion factors that were derived in this study.
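The conversion-factor approach described above reduces to a multiplication of the scanner-reported dose-length product by a protocol-specific factor; the factor value in the example below is a placeholder for illustration, not one of the published factors:

```python
def effective_dose(dlp_mgy_cm, k_msv_per_mgy_cm):
    """Estimate effective dose (mSv) from the dose-length product
    (mGy·cm) using a protocol- and scanner-specific conversion
    factor k, in the spirit of the factors derived in the study.
    The caller must supply a k appropriate to the protocol."""
    return dlp_mgy_cm * k_msv_per_mgy_cm

# Hypothetical example: a DLP of 1000 mGy·cm with k = 0.004 mSv/(mGy·cm)
dose = effective_dose(1000.0, 0.004)
```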
NASA Astrophysics Data System (ADS)
Manzke, R.; Zagorchev, L.; d'Avila, A.; Thiagalingam, A.; Reddy, V. Y.; Chan, R. C.
2007-03-01
Catheter-based ablation in the left atrium and pulmonary veins (LAPV) for treatment of atrial fibrillation in cardiac electrophysiology (EP) is complex and requires knowledge of heart chamber anatomy. Electroanatomical mapping (EAM) is typically used to define cardiac structures by combining electromagnetic spatial catheter localization with surface models which interpolate the anatomy between EAM point locations in 3D. Recently, the incorporation of pre-operative volumetric CT or MR data sets has allowed for more detailed maps of LAPV anatomy to be used intra-operatively. Pre-operative data sets are, however, only a rough guide since they can be acquired several days to weeks prior to the EP intervention. Due to positional and physiological changes, the intra-operative cardiac anatomy can be different from that depicted in the pre-operative data. We present an application of contrast-enhanced rotational X-ray imaging for CT-like reconstruction of 3D LAPV anatomy during the intervention itself. Depending on the heart size, a single or two selective contrast-enhanced rotational acquisitions are performed and CT-like volumes are reconstructed with 3D filtered back projection. In the case of dual injection, the two volumes depicting the left and right portions of the LAPV are registered and fused. The data sets are visualized and segmented intra-procedurally to provide anatomical data and surface models for intervention guidance. Our results from animal and human experiments indicate that the anatomical information from intra-operative CT-like reconstructions compares favorably with pre-acquired imaging data and can be of sufficient quality for intra-operative guidance.
A 4DCT imaging-based breathing lung model with relative hysteresis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.
To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce a smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry. - Highlights: • We developed a breathing human lung CFD model based on 4D-dynamic CT images. • The 4DCT-based breathing lung model is able to capture lung relative hysteresis. • A new boundary condition for lung model based on one static CT image was proposed. • The difference between lung models based on 4D and static CT images was quantified.
Prell, D; Kalender, W A; Kyriakou, Y
2010-12-01
The purpose of this study was to develop, implement and evaluate a dedicated metal artefact reduction (MAR) method for flat-detector CT (FDCT). The algorithm uses the multidimensional raw data space to calculate surrogate attenuation values for the original metal traces in the raw data domain. The metal traces are detected automatically by a three-dimensional, threshold-based segmentation algorithm in an initial reconstructed image volume, based on twofold histogram information for calculating appropriate metal thresholds. These thresholds are combined with constrained morphological operations in the projection domain. A subsequent reconstruction of the modified raw data yields an artefact-reduced image volume that is further processed by a combining procedure that reinserts the missing metal information. For image quality assessment, measurements on semi-anthropomorphic phantoms containing metallic inserts were evaluated in terms of CT value accuracy, image noise and spatial resolution before and after correction. Measurements of the same phantoms without prostheses were used as ground truth for comparison. Cadaver measurements were performed on complex and realistic cases and to determine the influences of our correction method on the tissue surrounding the prostheses. The results showed a significant reduction of metal-induced streak artefacts (CT value differences were reduced to below 22 HU and image noise reduction of up to 200%). The cadaver measurements showed excellent results for imaging areas close to the implant and exceptional artefact suppression in these areas. Furthermore, measurements in the knee and spine regions confirmed the superiority of our method to standard one-dimensional, linear interpolation.
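The baseline the authors compare against — standard one-dimensional linear interpolation across the metal trace in each projection — can be sketched as follows. This is a hedged, simplified illustration (function and variable names are hypothetical), not the authors' FDCT algorithm:

```python
def interpolate_metal_trace(row, mask):
    """Replace masked (metal) detector samples in one projection row by
    linear interpolation between the nearest unmasked neighbours."""
    out = list(row)
    n = len(row)
    i = 0
    while i < n:
        if mask[i]:
            # find the end of this masked run
            j = i
            while j < n and mask[j]:
                j += 1
            left = out[i - 1] if i > 0 else (out[j] if j < n else 0.0)
            right = out[j] if j < n else left
            span = j - i + 1
            for k in range(i, j):
                w = (k - i + 1) / span
                out[k] = (1 - w) * left + w * right
            i = j
        else:
            i += 1
    return out
```

Applied row by row over the sinogram, this removes the metal trace before reconstruction; the paper's contribution is a multidimensional, automatically segmented replacement for this 1D scheme.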
WE-FG-207B-04: Noise Suppression for Energy-Resolved CT Via Variance Weighted Non-Local Filtration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harms, J; Zhu, L
Purpose: The photon starvation problem is exacerbated in energy-resolved CT, since the detected photons are shared by multiple energy channels. Using pixel similarity-based non-local filtration, we aim to produce accurate and high-resolution energy-resolved CT images with significantly reduced noise. Methods: Averaging CT images reconstructed from different energy channels reduces noise at the price of losing spectral information, while conventional denoising techniques inevitably degrade image resolution. Inspired by the fact that CT images of the same object at different energies share the same structures, we aim to reduce noise in energy-resolved CT by averaging only pixels of similar materials - a non-local filtration technique. For each CT image, an empirical exponential model is used to calculate the material similarity between two pixels based on their CT values, and the similarity values are organized in matrix form. A final similarity matrix is generated by averaging these similarity matrices, with weights inversely proportional to the estimated total noise variance in the sinogram of the different energy channels. Noise suppression is achieved for each energy channel by multiplying the image vector by the similarity matrix. Results: Multiple scans on a tabletop CT system are used to simulate 6-channel energy-resolved CT, with tube voltages ranging from 75 to 125 kVp. On a low-dose acquisition at 15 mA of the Catphan©600 phantom, our method achieves the same image spatial resolution as a high-dose scan at 80 mA with a noise standard deviation (STD) lower by a factor of >2. Compared with another non-local noise suppression algorithm (ndiNLM), the proposed algorithm obtains images with substantially improved resolution at the same level of noise reduction. Conclusion: We propose a noise-suppression method for energy-resolved CT.
Our method takes full advantage of the additional structural information provided by energy-resolved CT and preserves image values at each energy level. Research reported in this publication was supported by the National Institute Of Biomedical Imaging And Bioengineering of the National Institutes of Health under Award Number R21EB019597. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
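The core filtration step described above can be sketched in a few lines. This is a minimal illustration with hypothetical parameter names (kernel width `h`, toy pixel vectors), not the authors' implementation: per-channel similarity matrices from an exponential kernel on CT values are averaged with inverse-variance weights, then applied to each channel's image vector:

```python
import math

def similarity_matrix(vals, h):
    """Exponential similarity on CT values: S_ij = exp(-(v_i - v_j)^2 / h^2),
    row-normalised so that filtration preserves uniform regions."""
    n = len(vals)
    s = [[math.exp(-((vals[i] - vals[j]) ** 2) / h ** 2) for j in range(n)]
         for i in range(n)]
    return [[x / sum(row) for x in row] for row in s]

def filter_channels(channels, variances, h=50.0):
    """Average per-channel similarity matrices with weights inversely
    proportional to the estimated noise variance, then multiply each
    channel's image vector by the averaged matrix."""
    n = len(channels[0])
    wsum = sum(1.0 / v for v in variances)
    avg = [[0.0] * n for _ in range(n)]
    for ch, var in zip(channels, variances):
        s = similarity_matrix(ch, h)
        for i in range(n):
            for j in range(n):
                avg[i][j] += s[i][j] / (var * wsum)
    return [[sum(avg[i][j] * ch[j] for j in range(n)) for i in range(n)]
            for ch in channels]
```

Because each row of the averaged similarity matrix sums to one, uniform regions pass through unchanged, while noisy pixels are averaged with materially similar pixels only.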
Ghotbi, Nader; Iwanaga, Masako; Ohtsuru, Akira; Ogawa, Yoji; Yamashita, Shunichi
2007-01-01
The use of Positron Emission Tomography (PET) or PET/CT for voluntary cancer screening of asymptomatic individuals is becoming common in Japan, though the utility of such screening is still controversial. This study estimated the general test validity and effective radiation dose for PET/CT cancer screening of healthy Japanese people by evaluating four standard indices (sensitivity, specificity, and positive/negative predictive values), incorporating cancer prevalence, based on published literature and simulated Japanese data. CT and FDG-related dosage data were gathered from the literature and then extrapolated to the scan parameters at a model PET center. We estimated that the positive predictive value was only 3.3% for PET/CT voluntary cancer screening of asymptomatic Japanese individuals aged 50-59 years, whose average cancer prevalence is 0.5%. The total effective radiation dose of a single whole-body PET/CT scan was estimated to be 6.34 to 9.48 mSv for the average Japanese individual of 60 kg body weight. With PET/CT cancer screening in Japan, many healthy volunteers screened as false positives are exposed to at least 6.34 mSv without receiving any real benefit. More evaluation of the justification for applying PET/CT to healthy people is necessary.
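The low positive predictive value at low prevalence follows directly from Bayes' rule. A minimal sketch (the sensitivity and specificity values below are illustrative assumptions, not the paper's exact figures):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' rule:
    P(disease | positive) = TP / (TP + FP)."""
    tp = sensitivity * prevalence
    fp = (1.0 - specificity) * (1.0 - prevalence)
    return tp / (tp + fp)

# At 0.5% prevalence, even a fairly specific test yields a PPV of a
# few percent, consistent in magnitude with the 3.3% reported above.
print(ppv(0.9, 0.87, 0.005))
```

This is why false positives dominate when screening a low-prevalence population.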
Wu, C; de Jong, J R; Gratama van Andel, H A; van der Have, F; Vastenhouw, B; Laverman, P; Boerman, O C; Dierckx, R A J O; Beekman, F J
2011-09-21
Attenuation of photon flux on trajectories between the source and pinhole apertures affects the quantitative accuracy of reconstructed single-photon emission computed tomography (SPECT) images. We propose a Chang-based non-uniform attenuation correction (NUA-CT) for small-animal SPECT/CT with focusing pinhole collimation, and compare its quantitative accuracy with uniform Chang correction based on (i) body outlines extracted from x-ray CT (UA-CT) and (ii) body contours hand-drawn on the images obtained with three integrated optical cameras (UA-BC). Measurements in phantoms and rats containing known activities of isotopes were conducted for evaluation. In (125)I, (201)Tl, (99m)Tc and (111)In phantom experiments, average relative errors compared to the gold standards measured in a dose calibrator were reduced to 5.5%, 6.8%, 4.9% and 2.8%, respectively, with NUA-CT. In animal studies, these errors were 2.1%, 3.3%, 2.0% and 2.0%, respectively. Differences in accuracy on average between the results of NUA-CT, UA-CT and UA-BC were less than 2.3% in phantom studies and 3.1% in animal studies, except for (125)I (3.6% and 5.1%, respectively). All methods tested provide reasonable attenuation correction and result in high quantitative accuracy. NUA-CT shows superior accuracy except for (125)I, where other factors may have more impact on quantitative accuracy than the selected attenuation correction.
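The Chang-type correction underlying these methods can be sketched on a 2D attenuation map: the correction factor at each voxel is the reciprocal of the transmission factor averaged over projection directions. The sketch below is a deliberate simplification (only four axis-aligned rays, hypothetical function names), not the authors' pinhole-geometry implementation:

```python
import math

def chang_factor(mu, i, j, pixel_size=1.0):
    """First-order Chang correction factor at voxel (i, j) of a 2D
    attenuation map `mu` (units: 1/pixel_size), averaging transmission
    exp(-line integral) over four axis-aligned rays."""
    rows, cols = len(mu), len(mu[0])
    paths = [
        sum(mu[i][c] for c in range(j, cols)),   # ray toward +x
        sum(mu[i][c] for c in range(0, j + 1)),  # ray toward -x
        sum(mu[r][j] for r in range(i, rows)),   # ray toward +y
        sum(mu[r][j] for r in range(0, i + 1)),  # ray toward -y
    ]
    avg_transmission = sum(math.exp(-p * pixel_size) for p in paths) / 4.0
    return 1.0 / avg_transmission
```

Multiplying each reconstructed voxel by its factor compensates, to first order, for photons lost to attenuation; the non-uniform variant uses the measured CT-derived μ map rather than a constant value inside the body outline.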
NASA Astrophysics Data System (ADS)
Lindley, S. J.; Walsh, T.
There are many modelling methods dedicated to the estimation of spatial patterns in pollutant concentrations, each with its distinctive advantages and disadvantages. The derivation of a surface of air quality values from monitoring data alone requires the conversion of point-based data from a limited number of monitoring stations to a continuous surface using interpolation. Since interpolation techniques estimate data at un-sampled points from calculated relationships between data measured at a number of known sample points, they are subject to some uncertainty, both in the values estimated and in their spatial distribution. These uncertainties, which are incorporated into many empirical and semi-empirical mapping methodologies, should be recognised in any further usage of the data, and also in assessing the extent of an exceedance of an air quality standard and the degree of exposure this may represent. There is a wide range of available interpolation techniques, and differences in their characteristics result in variations in the output surfaces estimated from the same set of input points. The work presented in this paper examines these uncertainties through the application of a number of interpolation techniques available in standard GIS packages to a case study nitrogen dioxide data set for the Greater Manchester conurbation in northern England. The implications of the use of different techniques are discussed through application to hourly concentrations during an air quality episode and to annual average concentrations in 2001. The concentration patterns show considerable differences in the estimated spatial pattern of maxima, reflecting the combined effects of chemical processes, topography and meteorology. In the case of air quality episodes, the considerable spatial variability of concentrations results in large uncertainties in the surfaces produced, but these uncertainties vary widely from area to area.
In view of the uncertainties associated with classical techniques, research is ongoing to develop alternative methods, which should in time improve the suite of tools available to air quality managers.
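Among the simplest of the interpolation techniques available in standard GIS packages is inverse distance weighting. A minimal sketch (monitoring stations as hypothetical `(x, y, value)` triples):

```python
def idw(stations, x, y, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) from monitoring
    stations given as (sx, sy, value) triples."""
    num = den = 0.0
    for sx, sy, value in stations:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return value  # query point coincides with a monitor
        w = d2 ** (-power / 2.0)
        num += w * value
        den += w
    return num / den
```

Evaluating this function over a grid of query points yields the continuous surface; different `power` values (like the different GIS methods compared in the paper) produce visibly different surfaces from the same inputs.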
Automatic co-segmentation of lung tumor based on random forest in PET-CT images
NASA Astrophysics Data System (ADS)
Jiang, Xueqing; Xiang, Dehui; Zhang, Bin; Zhu, Weifang; Shi, Fei; Chen, Xinjian
2016-03-01
In this paper, a fully automatic method is proposed to segment the lung tumor in clinical 3D PET-CT images. The proposed method effectively combines PET and CT information to make full use of the high contrast of PET images and the superior spatial resolution of CT images. Our approach consists of three main parts: (1) initial segmentation, in which spines are removed in CT images and initial connected regions are obtained by thresholding-based segmentation in PET images; (2) coarse segmentation, in which a monotonic downhill function is applied to rule out structures which have standardized uptake values (SUV) similar to the lung tumor but do not satisfy a monotonic property in PET images; (3) fine segmentation, in which the random forest method is applied to accurately segment the lung tumor by extracting effective features from PET and CT images simultaneously. We validated our algorithm on a dataset consisting of 24 3D PET-CT images from different patients with non-small cell lung cancer (NSCLC). The average TPVF, FPVF and accuracy rate (ACC) were 83.65%, 0.05% and 99.93%, respectively. The correlation analysis shows that our segmented lung tumor volumes have a strong correlation (average 0.985) with ground truth 1 and ground truth 2 labeled by a clinical expert.
Image quality and stability of image-guided radiotherapy (IGRT) devices: A comparative study
Stock, Markus; Pasler, Marlies; Birkfellner, Wolfgang; Homolka, Peter; Poetter, Richard; Georg, Dietmar
2010-01-01
Introduction: Our aim was to implement standards for quality assurance of the IGRT devices used in our department and to compare their performance with that of a CT simulator. Materials and methods: We investigated image quality parameters for three devices over a period of 16 months. A multislice CT was used as a benchmark, and results for noise, spatial resolution, low contrast visibility (LCV) and uniformity were compared with a cone beam CT (CBCT) at a linac and a simulator. Results: All devices performed well in terms of LCV and, in fact, exceeded vendor specifications. MTF was comparable between CT and linac CBCT. Integral non-uniformity was, on average, 0.002 for the CT and 0.006 for the linac CBCT. Uniformity, LCV and MTF varied depending on the protocols used for the linac CBCT. The contrast-to-noise ratio was on average 51% higher for the CT than for the linac and simulator CBCT. No significant time trend was observed, and tolerance limits were implemented. Discussion: Reasonable differences in image quality between CT and CBCT were observed. Further research and development are necessary to increase the image quality of commercially available CBCT devices in order for them to serve the needs of adaptive and/or online planning. PMID:19695725
Improvement of cardiac CT reconstruction using local motion vector fields.
Schirra, Carsten Oliver; Bontus, Claas; van Stevendaal, Udo; Dössel, Olaf; Grass, Michael
2009-03-01
The motion of the heart is a major challenge for cardiac imaging using CT. A novel approach to decrease motion blur and to improve the signal-to-noise ratio is motion-compensated reconstruction, which takes motion vector fields into account in order to correct for motion. The presented work deals with the determination of local motion vector fields from high contrast objects and their utilization within motion-compensated filtered back projection reconstruction. Image registration is applied during the quiescent cardiac phases. Temporal interpolation in parameter space is used to estimate motion during strong motion phases. The resulting motion vector fields are used during image reconstruction. The method is assessed using a software phantom and several clinical cases for calcium scoring. As a criterion for reconstruction quality, calcium volume scores were derived from both gated cardiac reconstruction and motion-compensated reconstruction throughout the cardiac phases, using low-pitch helical cone beam CT acquisitions. The presented technique is a robust method to determine and utilize local motion vector fields. Motion-compensated reconstruction using the derived motion vector fields leads to superior image quality compared to gated reconstruction. As a result, the gating window can be enlarged significantly, resulting in increased SNR, while reliable Hounsfield units are achieved due to the reduced level of motion artefacts. The enlargement of the gating window can be translated into reduced dose requirements.
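The temporal interpolation of motion between quiescent phases can be sketched as linear interpolation between two estimated vector fields. This is a minimal, hypothetical illustration (the paper interpolates in registration parameter space, which is more general):

```python
def interp_mvf(t0, mvf0, t1, mvf1, t):
    """Linear temporal interpolation of a motion vector field, given as
    a flat list of per-voxel (dx, dy, dz) displacement vectors."""
    w = (t - t0) / (t1 - t0)
    return [tuple(a + w * (b - a) for a, b in zip(v0, v1))
            for v0, v1 in zip(mvf0, mvf1)]
```

The interpolated field at the reconstruction time point is then used to shift back-projected contributions during filtered back projection.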
Spatiotemporal Interpolation Methods for Solar Event Trajectories
NASA Astrophysics Data System (ADS)
Filali Boubrahimi, Soukaina; Aydin, Berkay; Schuh, Michael A.; Kempton, Dustin; Angryk, Rafal A.; Ma, Ruizhe
2018-05-01
This paper introduces four spatiotemporal interpolation methods that enrich complex, evolving region trajectories that are reported from a variety of ground-based and space-based solar observatories every day. Our interpolation module takes an existing solar event trajectory as its input and generates an enriched trajectory with any number of additional time–geometry pairs created by the most appropriate method. To this end, we designed four different interpolation techniques: MBR-Interpolation (Minimum Bounding Rectangle Interpolation), CP-Interpolation (Complex Polygon Interpolation), FI-Interpolation (Filament Polygon Interpolation), and Areal-Interpolation, which are presented here in detail. These techniques leverage k-means clustering, centroid shape signature representation, dynamic time warping, linear interpolation, and shape buffering to generate the additional polygons of an enriched trajectory. Using ground-truth objects, interpolation effectiveness is evaluated through a variety of measures based on several important characteristics that include spatial distance, area overlap, and shape (boundary) similarity. To our knowledge, this is the first research effort of this kind that attempts to address the broad problem of spatiotemporal interpolation of solar event trajectories. We conclude with a brief outline of future research directions and opportunities for related work in this area.
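The simplest of the four techniques, MBR-Interpolation, can be sketched as linear interpolation of the corners of the minimum bounding rectangles between two reported time points. This is a hedged simplification (the paper's version operates on full region geometries), with hypothetical names:

```python
def interp_mbr(t0, mbr0, t1, mbr1, t):
    """Linearly interpolate minimum bounding rectangles, each given as
    (x_min, y_min, x_max, y_max), between times t0 and t1."""
    w = (t - t0) / (t1 - t0)
    return tuple(a + w * (b - a) for a, b in zip(mbr0, mbr1))
```

Calling this for any intermediate timestamp yields the additional time-geometry pairs that enrich a sparse trajectory.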
A bivariate rational interpolation with a bi-quadratic denominator
NASA Astrophysics Data System (ADS)
Duan, Qi; Zhang, Huanling; Liu, Aikui; Li, Huaigu
2006-10-01
In this paper a new rational interpolation with a bi-quadratic denominator is developed to create a space surface using only values of the function being interpolated. The interpolation function has a simple and explicit rational mathematical representation. When the knots are equally spaced, the interpolating function can be expressed in matrix form, and this form has a symmetry property. The concept of integral weight coefficients of the interpolation is given, which describes the "weight" of the interpolation points in the local interpolating region.
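In general terms, such an interpolant is a ratio of bivariate polynomials whose denominator is quadratic in each variable. The schematic form below is generic (the paper's specific basis functions and coefficient formulas are not reproduced here):

```latex
% Schematic form of a bivariate rational interpolant with a
% bi-quadratic denominator: Q has degree two in each of x and y,
% and must remain positive on the interpolating region.
R(x, y) = \frac{P(x, y)}{Q(x, y)},
\qquad
Q(x, y) = \sum_{r=0}^{2}\sum_{s=0}^{2} b_{rs}\, x^{r} y^{s},
\qquad
Q(x, y) > 0 .
```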
Improved quantitation and reproducibility in multi-PET/CT lung studies by combining CT information.
Holman, Beverley F; Cuplov, Vesna; Millner, Lynn; Endozo, Raymond; Maher, Toby M; Groves, Ashley M; Hutton, Brian F; Thielemans, Kris
2018-06-05
Matched attenuation maps are vital for obtaining accurate and reproducible kinetic and static parameter estimates from PET data. With increased interest in PET/CT imaging of diffuse lung diseases for assessing disease progression and treatment effectiveness, understanding the extent of the effect of respiratory motion and establishing methods for correction are becoming more important. In a previous study, we showed that using the wrong attenuation map leads to large errors due to density mismatches in the lung, especially in dynamic PET scans. Here, we extend this work to the case where the study is sub-divided into several scans, e.g. for patient comfort, each with its own CT (cine-CT and 'snapshot' CT). A method to combine multi-CT information into a combined-CT was then developed, which averages the CT information from each study section to produce composite CT images with a lung density more representative of that in the PET data. This combined-CT was applied to nine patients with idiopathic pulmonary fibrosis, imaged with dynamic 18F-FDG PET/CT, to determine the improvement in the precision of the parameter estimates. Using XCAT simulations, errors in the influx rate constant were found to be as high as 60% in multi-PET/CT studies. Analysis of patient data identified displacements between study sections in the time activity curves, which led to an average standard error in the estimates of the influx rate constant of 53% with conventional methods. This reduced to within 5% after use of combined-CTs for attenuation correction of the study sections. Use of combined-CTs to reconstruct the sections of a multi-PET/CT study, as opposed to using the individually acquired CTs at each study stage, produces more precise parameter estimates and may improve discrimination between diseased and normal lung.
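The combined-CT averaging step can be sketched as a voxel-wise mean over the co-registered per-section CT volumes. A minimal sketch using nested lists (hypothetical data layout; real volumes would be arrays):

```python
def combined_ct(volumes):
    """Voxel-wise mean of co-registered CT volumes (nested lists of
    identical shape [slices][rows][cols])."""
    n = float(len(volumes))
    slices = len(volumes[0])
    rows = len(volumes[0][0])
    cols = len(volumes[0][0][0])
    return [
        [
            [sum(vol[k][i][j] for vol in volumes) / n for j in range(cols)]
            for i in range(rows)
        ]
        for k in range(slices)
    ]
```

Reconstructing every study section with the same averaged map removes the section-to-section density mismatch that produces the displaced time activity curves described above.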
A comparison of linear interpolation models for iterative CT reconstruction.
Hahn, Katharina; Schöndube, Harald; Stierstorfer, Karl; Hornegger, Joachim; Noo, Frédéric
2016-12-01
Recent reports indicate that model-based iterative reconstruction methods may improve image quality in computed tomography (CT). One difficulty with these methods is the number of options available to implement them, including the selection of the forward projection model and the penalty term. Currently, the literature is fairly scarce in terms of guidance regarding this selection step, whereas these options impact image quality. Here, the authors investigate the merits of three forward projection models that rely on linear interpolation: the distance-driven method, Joseph's method, and the bilinear method. The authors' selection is motivated by three factors: (1) in CT, linear interpolation is often seen as a suitable trade-off between discretization errors and computational cost, (2) the first two methods are popular with manufacturers, and (3) the third method enables assessing the importance of a key assumption in the other methods. One approach to evaluate forward projection models is to inspect their effect on discretized images, as well as the effect of their transpose on data sets, but significance of such studies is unclear since the matrix and its transpose are always jointly used in iterative reconstruction. Another approach is to investigate the models in the context they are used, i.e., together with statistical weights and a penalty term. Unfortunately, this approach requires the selection of a preferred objective function and does not provide clear information on features that are intrinsic to the model. The authors adopted the following two-stage methodology. First, the authors analyze images that progressively include components of the singular value decomposition of the model in a reconstructed image without statistical weights and penalty term. Next, the authors examine the impact of weights and penalty on observed differences. 
Image quality metrics were investigated for 16 different fan-beam imaging scenarios that enabled probing various aspects of all models. The metrics include a surrogate for computational cost, as well as bias, noise, and an estimation task, all at matched resolution. The analysis revealed fundamental differences in terms of both bias and noise. Task-based assessment appears to be required to appreciate the differences in noise; the estimation task the authors selected showed that these differences balance out to yield similar performance. Some scenarios highlighted merits for the distance-driven method in terms of bias, but with an increase in computational cost. Three combinations of statistical weights and penalty term showed that the observed differences remain the same, but a strong edge-preserving penalty can dramatically reduce the magnitude of these differences. In many scenarios, Joseph's method seems to offer an interesting compromise between accuracy and computational cost. The distance-driven method offers the possibility to reduce bias, but with an increase in computational cost. The bilinear method indicated that a key assumption in the other two methods is highly robust. Last, a strong edge-preserving penalty can act as a compensator for insufficiencies in the forward projection model, bringing all models to similar levels in the most challenging imaging scenarios. The authors also find that their evaluation methodology helps in appreciating how the model, statistical weights, and penalty term interact.
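Joseph's method, one of the three compared models, can be sketched in 2D for rays whose dominant direction is x: step one column at a time and linearly interpolate between the two vertically adjacent pixels. This is a hedged, boundary-simplified illustration, not a production projector:

```python
import math

def joseph_ray(img, y0, slope):
    """Line integral through a 2D image for the ray y = y0 + slope * x,
    with |slope| <= 1 so x is the dominant axis (Joseph's method):
    at each column, linearly interpolate between neighbouring rows."""
    ny, nx = len(img), len(img[0])
    length = math.sqrt(1.0 + slope * slope)  # path length per column step
    total = 0.0
    for x in range(nx):
        y = y0 + slope * x
        iy = int(math.floor(y))
        frac = y - iy
        if 0 <= iy < ny - 1:
            total += ((1.0 - frac) * img[iy][x] + frac * img[iy + 1][x]) * length
        elif iy == ny - 1 and frac == 0.0:
            total += img[iy][x] * length  # ray grazes the last row exactly
    return total
```

The distance-driven method replaces the point-sampled interpolation with overlap-weighted pixel/detector footprints, which is where its bias advantage and extra cost come from.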
Mapping wildfire effects on Ca2+ and Mg2+ released from ash. A microplot analysis.
NASA Astrophysics Data System (ADS)
Pereira, Paulo; Úbeda, Xavier; Martin, Deborah
2010-05-01
Wildland fires have important implications for ecosystem dynamics. Their effects depend on many biophysical factors, mainly the burned species, the affected ecosystem, the amount and spatial distribution of fuel, relative humidity, slope, aspect and residence time. These parameters are heterogeneous across the landscape, producing a complex mosaic of severities. Fire impacts can change rapidly even over short distances, producing high spatial variation at the microplot scale. After a fire, the most visible residue is ash, whose physical and chemical properties are of major importance because the majority of the nutrients available to plants reside there. It is therefore important to study ash characteristics in order to determine the type and amount of elements available to plants. This study focuses on the spatial variability of two nutrients essential to plant growth, Ca2+ and Mg2+, released from ash after a wildfire at the microplot scale. Because fire impacts are highly variable even over small distances, mapping the effects of fire on the release of these elements is difficult, and it is a priority to identify the least biased interpolation method in order to predict the variables under study with the greatest accuracy. The aim of this study is to map the effects of wildfire on these elements released from ash at the microplot scale, testing several interpolation methods. Sixteen interpolation techniques were tested: Inverse Distance Weighting (IDW) with weights of 1, 2, 3, 4 and 5; Local Polynomial with powers of 1 (LP1) and 2 (LP2); Polynomial Regression (PR); and the Radial Basis Functions Spline With Tension (SPT), Completely Regularized Spline (CRS), Multiquadratic (MTQ), Inverse Multiquadratic (IMTQ) and Thin Plate Spline (TPS).
Geostatistical methods from the kriging family were also tested, namely Ordinary Kriging (OK), Simple Kriging (SK) and Universal Kriging (UK). The interpolation techniques were assessed through the Mean Error (ME) and Root Mean Square Error (RMSE) obtained from a cross-validation procedure applied to all methods. The fire occurred in Portugal, near an urban area; inside the affected area we designed a grid of 9 x 27 m and collected 40 samples. Before modelling the data, we tested their normality with the Shapiro-Wilk test. Since the distributions of Ca2+ and Mg2+ did not follow a Gaussian distribution, the data were transformed logarithmically (Ln); after this transformation the data respected normality, and the spatial distribution was modelled with the transformed data. On average across the entire plot, the ash slurries contained 4371.01 mg/l of Ca2+, with a high coefficient of variation (CV%) of 54.05%. Of all the tested methods, LP1 was the least biased and hence the most accurate for interpolating this element; the most biased was LP2. For Mg2+, considering the entire plot, the ash released on average 1196.01 mg/l in solution, with a CV% of 52.36%, similar to that identified for Ca2+. The best interpolator in this case was SK, and the most biased were LP1 and TPS. Comparing all methods for both elements, the quality of the interpolations was higher for Ca2+. These results allow us to conclude that, to achieve the best prediction, it is necessary to test a wide range of interpolation methods. Better accuracy will permit a more precise understanding of where the studied elements are more available and accessible for plant growth and ecosystem recovery. The spatial pattern of both nutrients is related to ash pH and burn severity, evaluated from ash colour and CaCO3 content. These aspects will also be discussed in the work.
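The leave-one-out cross-validation used to compute ME and RMSE works with any of the interpolators compared above. A minimal sketch using an IDW estimator as the plug-in method (helper names are hypothetical):

```python
def idw_estimate(samples, x, y, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) from (sx, sy, v) samples."""
    num = den = 0.0
    for sx, sy, v in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return v
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den

def loo_cv(samples, estimator=idw_estimate):
    """Leave-one-out cross-validation: predict each sample from the
    others, then return Mean Error (bias) and RMSE (accuracy)."""
    errors = []
    for k, (x, y, v) in enumerate(samples):
        rest = samples[:k] + samples[k + 1:]
        errors.append(estimator(rest, x, y) - v)
    me = sum(errors) / len(errors)
    rmse = (sum(e * e for e in errors) / len(errors)) ** 0.5
    return me, rmse
```

Running `loo_cv` with each candidate interpolator on the same 40 samples, and ranking by |ME| and RMSE, is exactly the selection procedure the study applies.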
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, M; Rosica, D; Agarwal, V
Purpose: Two separate low-dose CT scans are usually performed for attenuation correction of rest and stress N-13 ammonia PET/CT myocardial perfusion imaging (PET/CT). We utilize an automatic exposure control (AEC) technique to reduce CT radiation dose while maintaining perfusion image quality. Our goal is to assess the reproducibility of the displayed CT dose index (CTDI) on same-day repeat CT scans (CT1 and CT2). Methods: We retrospectively reviewed CT images of PET/CT studies performed on the same day. Low-dose CT utilized an AEC technique based on tube current modulation called Smart-mA. The scan parameters were 64 × 0.625 mm collimation, 5 mm slice thickness, 0.984 pitch, 1-sec rotation time, 120 kVp, and noise index 50 with a range of 10-200 mA. The scan length matched the PET field of view (FOV), with the heart near the middle of the axial FOV. We identified the reference slice number (RS) for an anatomical landmark (the carina) and used it to estimate the axial shift between the two CTs. For patient size, we measured an effective diameter on the reference slice. The effect of patient positioning on CTDI was evaluated using the table height. We calculated the absolute percent difference of the CTDI (%diff) to estimate reproducibility. Results: The study included 168 adults with an average body-mass index of 31.72 ± 9.10 kg/m² and an effective diameter of 32.72 ± 4.60 cm. The average CTDI was 1.95 ± 1.40 mGy for CT1 and 1.97 ± 1.42 mGy for CT2. The mean %diff was 7.8 ± 6.8%. Linear regression analysis showed a significant correlation between table height and %diff CTDI (r=0.82, p<0.001). Conclusion: We have shown for the first time in human subjects, using two same-day CT images, that the AEC technique in low-dose CT is reproducible within 10% and depends significantly on patient centering.
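The %diff reproducibility metric can be sketched as follows. The abstract does not state the normalisation, so the mean of the two scans is assumed here:

```python
def abs_percent_diff(ctdi1, ctdi2):
    """Absolute percent difference between two CTDI values,
    normalised by their mean (assumed normalisation)."""
    mean = 0.5 * (ctdi1 + ctdi2)
    return abs(ctdi1 - ctdi2) / mean * 100.0
```

For the average CTDI values reported above (1.95 and 1.97 mGy) this gives roughly a 1% difference, well within the 10% reproducibility bound.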
Aw-Zoretic, J; Seth, D; Katzman, G; Sammet, S
2014-10-01
The purpose of this review is to determine the averaged effective dose and lifetime attributable risk from multiple head computed tomography (CT) dose data in children with ventriculoperitoneal shunts (VPS). A total of 422 paediatric head CT exams performed between October 2008 and January 2011 were retrospectively reviewed. The CT dose data were weighted with the latest ICRP 103 conversion factor to obtain the effective dose per study, and the averaged effective dose was calculated. Estimates of the lifetime attributable risk were also calculated from the averaged effective dose using a conversion factor from the latest BEIR VII report. Our study found the highest effective doses in neonates and the lowest effective doses in the 10-18 years age group. We estimated a 0.007% potential increased risk in neonates and a 0.001% potential increased risk in teenagers over the base risk. Multiple head CTs in children equate to a slight potential increase in lifetime attributable risk over the baseline risk for cancer, slightly higher in neonates relative to teenagers. The potential risks versus clinical benefit must be assessed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chugh, Brige Paul; Krishnan, Kalpagam; Liu, Jeff
2014-08-15
Integration of biological conductivity information provided by Electrical Impedance Tomography (EIT) with anatomical information provided by Computed Tomography (CT) imaging could improve the ability to characterize tissues in clinical applications. In this paper, we report the results of a study comparing the fusion of EIT with CT using three different image fusion algorithms, namely weighted averaging, wavelet fusion, and ROI indexing. The ROI indexing method of fusion involves segmenting the regions of interest from the CT image and replacing their pixels with the pixels of the EIT image. The three algorithms were applied to CT and EIT images of an anthropomorphic phantom constructed from five acrylic contrast targets of varying diameter embedded in a base of gelatin bolus. Imaging performance was assessed using Detectability and the Structural Similarity Index Measure (SSIM). Wavelet fusion and ROI indexing resulted in lower Detectability (by 35% and 47%, respectively) yet higher SSIM (by 66% and 73%, respectively) than weighted averaging. Our results suggest that wavelet fusion and ROI indexing yielded more consistent and optimal fusion performance than weighted averaging.
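Two of the three compared fusion rules are simple enough to sketch directly on co-registered 2D images (hypothetical `alpha` weight; nested lists stand in for image arrays):

```python
def weighted_average_fusion(ct, eit, alpha=0.5):
    """Pixel-wise weighted average of co-registered CT and EIT images."""
    return [[alpha * c + (1.0 - alpha) * e for c, e in zip(cr, er)]
            for cr, er in zip(ct, eit)]

def roi_index_fusion(ct, eit, roi_mask):
    """ROI indexing: keep CT pixels everywhere except inside the
    segmented ROI, where the EIT pixels are substituted."""
    return [[e if m else c for c, e, m in zip(cr, er, mr)]
            for cr, er, mr in zip(ct, eit, roi_mask)]
```

ROI indexing preserves CT anatomy outside the targets while injecting conductivity contrast inside them, which is consistent with its higher SSIM but lower Detectability relative to global averaging.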
NASA Astrophysics Data System (ADS)
Zwart, Christine M.; Venkatesan, Ragav; Frakes, David H.
2012-10-01
Interpolation is an essential and broadly employed function of signal processing. Accordingly, considerable development has focused on advancing interpolation algorithms toward optimal accuracy. Such development has motivated a clear shift in the state of the art from classical interpolation to more intelligent and resourceful approaches, for example registration-based interpolation. As a natural result, many of the most accurate current algorithms are highly complex, specific, and computationally demanding. However, the diverse hardware destinations for interpolation algorithms present unique constraints that often preclude use of the most accurate available options. For example, while computationally demanding interpolators may be suitable for highly equipped image processing platforms (e.g., computer workstations and clusters), only more efficient interpolators may be practical for less well equipped platforms (e.g., smartphones and tablet computers). The latter examples of consumer electronics present a design tradeoff in this regard: high accuracy interpolation benefits the consumer experience, but computing capabilities are limited. It follows that interpolators with favorable combinations of accuracy and efficiency are of great practical value to the consumer electronics industry. We address multidimensional interpolation-based image processing problems that are common to consumer electronic devices through a decomposition approach. The multidimensional problems are first broken down into multiple, independent, one-dimensional (1-D) interpolation steps that are then executed with a newly modified registration-based one-dimensional control grid interpolator. The proposed approach, decomposed multidimensional control grid interpolation (DMCGI), combines the accuracy of registration-based interpolation with the simplicity, flexibility, and computational efficiency of a 1-D interpolation framework.
Results demonstrate that DMCGI provides improved interpolation accuracy (and other benefits) in image resizing, color sample demosaicing, and video deinterlacing applications, at a computational cost that is manageable or reduced in comparison to popular alternatives.
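The decomposition can be sketched in a few lines: a 2-D resize becomes two passes of independent 1-D interpolations, one along each axis. Plain linear `np.interp` stands in here for the paper's modified registration-based 1-D control grid interpolator; only the decomposition structure is illustrated.

```python
import numpy as np

def interp_rows(img, new_w):
    """Independent 1-D linear interpolation of every row."""
    w = img.shape[1]
    x_old = np.linspace(0.0, 1.0, w)
    x_new = np.linspace(0.0, 1.0, new_w)
    return np.stack([np.interp(x_new, x_old, row) for row in img])

def resize_separable(img, new_h, new_w):
    """2-D resize decomposed into two passes of 1-D row interpolations,
    mirroring the DMCGI decomposition (with np.interp standing in for
    the modified 1-D control grid interpolator)."""
    tmp = interp_rows(img, new_w)        # interpolate along x
    return interp_rows(tmp.T, new_h).T   # then along y

img = np.arange(16, dtype=float).reshape(4, 4)
out = resize_separable(img, 8, 8)
```

Because each 1-D pass is independent of the others, the passes parallelize trivially, which is the efficiency argument made above.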
NASA Astrophysics Data System (ADS)
Xu, Zhuo; Sopher, Daniel; Juhlin, Christopher; Han, Liguo; Gong, Xiangbo
2018-04-01
In towed marine seismic data acquisition, a gap between the source and the nearest recording channel is typical. Therefore, extrapolation of the missing near-offset traces is often required to avoid unwanted effects in subsequent data processing steps. However, most existing interpolation methods perform poorly when extrapolating traces. Interferometric interpolation methods are one class of methods that have been developed for filling in trace gaps in shot gathers. Interferometry-type interpolation methods differ from conventional interpolation methods in that they utilize information from several adjacent shot records to fill in the missing traces. In this study, we aim to improve upon the results generated by conventional time-space domain interferometric interpolation by performing interferometric interpolation in the Radon domain, in order to overcome the effects of irregular data sampling and limited source-receiver aperture. We apply both time-space and Radon-domain interferometric interpolation methods to the Sigsbee2B synthetic dataset and a real towed marine dataset from the Baltic Sea, with the primary aim of improving the image of the seabed through extrapolation into the near-offset gap. Radon-domain interferometric interpolation performs better at interpolating the missing near-offset traces than conventional interferometric interpolation when applied to data with irregular geometry and limited source-receiver aperture. We also compare the interferometric interpolated results with those obtained using solely Radon transform (RT) based interpolation and show that interferometry-type interpolation performs better than solely RT-based interpolation when extrapolating the missing near-offset traces. After data processing, we show that the image of the seabed is improved by performing interferometry-type interpolation, especially when Radon-domain interferometric interpolation is applied.
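The Radon-domain processing rests on the linear Radon (slant-stack) transform; a minimal discrete shift-and-sum version (nearest-sample shifts, not the authors' implementation) shows how a linear event focuses at its slowness:

```python
import numpy as np

def slant_stack(data, offsets, dt, slownesses):
    """Discrete linear Radon (tau-p) transform by shift-and-sum.

    data: (n_traces, n_samples) shot gather, offsets in metres,
    dt sample interval in seconds, slownesses in s/m.
    """
    n_tr, n_t = data.shape
    out = np.zeros((len(slownesses), n_t))
    for ip, p in enumerate(slownesses):
        for tr in range(n_tr):
            shift = int(round(p * offsets[tr] / dt))  # linear moveout, in samples
            if 0 <= shift < n_t:
                out[ip, : n_t - shift] += data[tr, shift:]
    return out

# Synthetic gather: one linear event with slowness 0.0004 s/m
dt, offsets = 0.004, np.arange(10) * 100.0
gather = np.zeros((10, 100))
for tr, x in enumerate(offsets):
    gather[tr, int(round(0.0004 * x / dt))] = 1.0

panel = slant_stack(gather, offsets, dt, [0.0002, 0.0004, 0.0006])
```

At the correct slowness all ten traces stack coherently into a single tau sample, while mismatched slownesses leave the energy scattered; this focusing is what makes trace reconstruction in the Radon domain attractive.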
Iatrogenic radiation exposure to patients with early onset spine and chest wall deformities.
Khorsand, Derek; Song, Kit M; Swanson, Jonathan; Alessio, Adam; Redding, Gregory; Waldhausen, John
2013-08-01
Retrospective cohort series. Characterize the average iatrogenic radiation dose to a cohort of children with thoracic insufficiency syndrome (TIS) during assessment and treatment at a single center with vertically expandable prosthetic titanium rib. Children with TIS undergo extensive evaluations to characterize their deformity. No standardized radiographical evaluation exists, but all reports use extensive imaging. The source and level of radiation these patients receive are not currently known. We evaluated a retrospective consecutive cohort of 62 children who had surgical treatment of TIS at our center from 2001-2011. Typical care included obtaining serial radiographs, spine and chest computed tomographic (CT) scans, ventilation/perfusion scans, and magnetic resonance images. Epochs of treatment were divided into the time from initial evaluation to the end of initial vertically expandable prosthetic titanium rib implantation, with each subsequent epoch delineated by the next surgical intervention. The effective dose for each examination was estimated in millisieverts (mSv). Effective doses for plain radiographs were calculated from reference values. Effective dose was estimated directly for CT scans from 2007 onward, and the average effective dose from 2007-2011 was used for scans before 2007. Effective dose from fluoroscopy was estimated directly. A cohort of 62 children had a total of 447 procedures. There were a total of 290 CT scans, 4293 radiographs, 147 magnetic resonance images, and 134 ventilation/perfusion scans. The average accumulated effective dose was 59.6 mSv for children who had completed all treatment, 13.0 mSv up to initial surgery, and 3.2 mSv for each subsequent epoch of treatment. CT scans accounted for 74% of the total radiation dose. Children managed for TIS using a consistent protocol received iatrogenic radiation doses that were, on average, 4 times the estimated average US background radiation exposure of 3 mSv/yr.
NASA Technical Reports Server (NTRS)
Edwards, M. H.; Arvidson, R. E.; Guinness, E. A.
1984-01-01
The problem of displaying information on the seafloor morphology is attacked by utilizing digital image processing techniques to generate images for Seabeam data covering three young seamounts on the eastern flank of the East Pacific Rise. Errors in locations between crossing tracks are corrected by interactively identifying features and translating tracks relative to a control track. Spatial interpolation techniques using moving averages are used to interpolate between gridded depth values to produce images in shaded relief and color-coded forms. The digitally processed images clarify the structural control on seamount growth and clearly show the lateral extent of volcanic materials, including the distribution and fault control of subsidiary volcanic constructional features. The image presentations also clearly show artifacts related to both residual navigational errors and to depth or location differences that depend on ship heading relative to slope orientation in regions with steep slopes.
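A moving-average gridder of the kind described can be sketched as follows (a simple fixed-radius average; the search radius and grid are illustrative assumptions):

```python
import numpy as np

def grid_moving_average(x, y, z, xi, yi, radius):
    """Grid scattered soundings by averaging all points within `radius`
    of each node (a simple stand-in for the moving-average spatial
    interpolation applied to the gridded Seabeam depths)."""
    grid = np.full((len(yi), len(xi)), np.nan)
    for j, gy in enumerate(yi):
        for i, gx in enumerate(xi):
            near = (x - gx) ** 2 + (y - gy) ** 2 <= radius ** 2
            if near.any():
                grid[j, i] = z[near].mean()
    return grid

# Flat seafloor at 2500 m depth: every covered node should recover that value
x = np.array([0.0, 1.0, 0.0, 1.0, 0.5])
y = np.array([0.0, 0.0, 1.0, 1.0, 0.5])
z = np.full(5, 2500.0)
dem = grid_moving_average(x, y, z, np.linspace(0, 1, 4), np.linspace(0, 1, 4), 0.8)
```

Shaded-relief and color-coded renderings such as those described would then be derived from the gradients and values of `dem`.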
Xu, Wenzhao; Collingsworth, Paris D.; Bailey, Barbara; Carlson Mazur, Martha L.; Schaeffer, Jeff; Minsker, Barbara
2017-01-01
This paper proposes a geospatial analysis framework and software to interpret water-quality sampling data from towed undulating vehicles in near-real time. The framework includes data quality assurance and quality control processes, automated kriging interpolation along undulating paths, and local hotspot and cluster analyses. These methods are implemented in an interactive Web application developed using the Shiny package in the R programming environment to support near-real time analysis along with 2- and 3-D visualizations. The approach is demonstrated using historical sampling data from an undulating vehicle deployed at three rivermouth sites in Lake Michigan during 2011. The normalized root-mean-square error (NRMSE) of the interpolation averages approximately 10% in 3-fold cross validation. The results show that the framework can be used to track river plume dynamics and provide insights on mixing, which could be related to wind and seiche events.
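The reported skill score can be reproduced in miniature: a k-fold cross-validated NRMSE of an interpolator over scattered samples. Inverse-distance weighting stands in here for the framework's kriging, purely to keep the sketch self-contained:

```python
import numpy as np

def idw(xk, yk, zk, xq, yq, power=2.0):
    """Inverse-distance-weighted prediction (a simple stand-in for the
    kriging interpolator used in the framework)."""
    d = np.hypot(xq[:, None] - xk[None, :], yq[:, None] - yk[None, :])
    w = 1.0 / np.maximum(d, 1e-12) ** power
    return (w * zk).sum(axis=1) / w.sum(axis=1)

def nrmse_kfold(x, y, z, k=3):
    """k-fold cross-validated RMSE, normalised by the data range,
    in the style of the score reported for the interpolation."""
    idx = np.arange(len(z))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        pred = idw(x[train], y[train], z[train], x[fold], y[fold])
        errs.append(pred - z[fold])
    rmse = np.sqrt(np.mean(np.concatenate(errs) ** 2))
    return rmse / (z.max() - z.min())

# Smooth synthetic "plume" sampled at 60 scattered locations
rng = np.random.default_rng(0)
x, y = rng.random(60), rng.random(60)
z = x + y
score = nrmse_kfold(x, y, z)
```

For a smooth field the score should be small; the ~10% figure quoted above is specific to the towed-vehicle data, not to this synthetic example.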
Shang, Songhao
2012-01-01
Crop water requirement is essential for agricultural water management, but it is usually only available for crop growing stages. However, crop water requirement values at monthly or weekly scales are more useful for water management. A method was proposed to downscale crop coefficient and water requirement from growing-stage to substage scales, based on the interpolation of accumulated crop and reference evapotranspiration calculated from their values in growing stages. The proposed method was compared with two straightforward methods, that is, direct interpolation of crop evapotranspiration and of crop coefficient, assuming that stage-average values occur in the middle of the stage. These methods were tested with a simulated daily crop evapotranspiration series. Results indicate that the proposed method is more reliable, with the downscaled crop evapotranspiration series very close to the simulated one. PMID:22619572
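The method can be sketched as follows: accumulate the stage totals, interpolate the cumulative curves at substage times, and difference. Linear interpolation of the cumulative curve (an assumption for this sketch; the paper's interpolant may differ) automatically conserves the stage totals:

```python
import numpy as np

def downscale_kc(stage_days, etc_stage, et0_stage, sub_days):
    """Downscale stage-wise crop ET to substages by interpolating the
    accumulated ET curves and differencing (the paper's idea, sketched).

    stage_days: length of each growing stage (days);
    etc_stage, et0_stage: stage totals of crop and reference ET (mm).
    Returns the substage crop coefficients Kc = ETc / ET0.
    """
    t = np.concatenate([[0.0], np.cumsum(stage_days)])
    cum_etc = np.concatenate([[0.0], np.cumsum(etc_stage)])
    cum_et0 = np.concatenate([[0.0], np.cumsum(et0_stage)])
    tq = np.arange(0.0, t[-1] + sub_days, sub_days)   # substage boundaries
    etc_sub = np.diff(np.interp(tq, t, cum_etc))      # substage ETc totals
    et0_sub = np.diff(np.interp(tq, t, cum_et0))      # substage ET0 totals
    return etc_sub / et0_sub

# Two 30-day stages downscaled to 10-day substages (illustrative numbers)
kc = downscale_kc([30, 30], [60, 90], [60, 60], 10.0)
```

With a smoother interpolant for the cumulative curves, the substage coefficients would vary gradually within each stage instead of being piecewise constant.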
Ball-morph: definition, implementation, and comparative evaluation.
Whited, Brian; Rossignac, Jaroslaw Jarek
2011-06-01
We define b-compatibility for planar curves and propose three ball morphing techniques between pairs of b-compatible curves. Ball-morphs use the automatic ball-map correspondence, proposed by Chazal et al., from which we derive different vertex trajectories (linear, circular, and parabolic). All three morphs are symmetric, meeting both curves with the same angle, which is a right angle for the circular and parabolic. We provide simple constructions for these ball-morphs and compare them to each other and other simple morphs (linear-interpolation, closest-projection, curvature-interpolation, Laplace-blending, and heat-propagation) using six cost measures (travel-distance, distortion, stretch, local acceleration, average squared mean curvature, and maximum squared mean curvature). The results depend heavily on the input curves. Nevertheless, we found that the linear ball-morph has consistently the shortest travel-distance and the circular ball-morph has the least amount of distortion.
Applications of Space-Filling-Curves to Cartesian Methods for CFD
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J.; Berger, Marsha J.; Murman, Scott M.
2003-01-01
The proposed paper presents a variety of novel uses of Space-Filling-Curves (SFCs) for Cartesian mesh methods in CFD. While these techniques will be demonstrated using non-body-fitted Cartesian meshes, most are applicable on general body-fitted meshes, both structured and unstructured. We demonstrate the use of a single O(N log N) SFC-based reordering to produce single-pass (O(N)) algorithms for mesh partitioning, multigrid coarsening, and inter-mesh interpolation. The inter-mesh interpolation operator has many practical applications, including warm starts on modified geometry, or as an inter-grid transfer operator on remeshed regions in moving-body simulations. Exploiting the compact construction of these operators, we further show that these algorithms are highly amenable to parallelization. Examples using the SFC-based mesh partitioner show nearly linear speedup to 512 CPUs even when using multigrid as a smoother. Partition statistics are presented showing that the SFC partitions are, on average, within 10% of ideal even with only around 50,000 cells in each subdomain. The inter-mesh interpolation operator also has linear asymptotic complexity and can be used to map a solution with N unknowns to another mesh with M unknowns with O(max(M,N)) operations. This capability is demonstrated both on moving-body simulations and in mapping solutions to perturbed meshes for finite-difference-based gradient design methods.
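The reordering at the heart of these algorithms can be sketched with a Morton (Z-order) key, one common SFC choice (the paper's curve may differ): sorting cells by the interleaved-bit key linearizes the mesh, and partitioning reduces to cutting the sorted list into equal runs.

```python
def morton2d(i, j, bits=16):
    """Interleave the bits of cell indices (i, j) to get the cell's key
    on a Z-order space-filling curve. Sorting cells by this key gives
    an O(N log N) SFC reordering on which single-pass partitioning,
    coarsening and interpolation algorithms can be built."""
    code = 0
    for b in range(bits):
        code |= ((i >> b) & 1) << (2 * b)       # x bit -> even position
        code |= ((j >> b) & 1) << (2 * b + 1)   # y bit -> odd position
    return code

# Linearize a 4x4 block of cells; partitioning = cutting this list into runs
cells = [(i, j) for i in range(4) for j in range(4)]
ordered = sorted(cells, key=lambda c: morton2d(*c))
```

Because nearby keys correspond to spatially nearby cells, equal-length runs of the sorted list form compact subdomains, which is why the partition quality stays close to ideal.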
Design of an essentially non-oscillatory reconstruction procedure in finite-element type meshes
NASA Technical Reports Server (NTRS)
Abgrall, Remi
1992-01-01
An essentially non-oscillatory reconstruction for functions defined on finite-element type meshes is designed. Two related problems are studied: the interpolation of possibly unsmooth multivariate functions on arbitrary meshes, and the reconstruction of a function from its averages in the control volumes surrounding the nodes of the mesh. Concerning the first problem, the behavior of the highest coefficients of two polynomial interpolations of a function that may admit discontinuities along locally regular curves is studied: the Lagrange interpolation, and an approximation such that the mean of the polynomial on any control volume is equal to that of the function to be approximated. This enables the best stencil for the approximation to be chosen. The choice of the smallest possible number of stencils is addressed. Concerning the reconstruction problem, two methods were studied: one based on an adaptation of the so-called reconstruction-via-deconvolution method to irregular meshes, and one that relies on the approximation of the mean as defined above. The first method is conservative up to a quadrature formula, and the second one is exactly conservative. The two methods have the expected order of accuracy, but the second one is much less expensive than the first. Some numerical examples are given which demonstrate the efficiency of the reconstruction.
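The stencil-selection idea carries over from the classic 1-D ENO procedure, sketched below (the paper's contribution is the unstructured, finite-element mesh version of this choice):

```python
import numpy as np

def eno_stencil(u, i, order=3):
    """1-D ENO stencil selection: starting from cell i, grow the stencil
    one cell at a time toward the side whose divided difference is
    smaller in magnitude, so the stencil avoids crossing a discontinuity.
    Assumes i is far enough from the array ends."""
    left = i
    for k in range(2, order + 1):
        dl = np.diff(u[left - 1 : left + k - 1], n=k - 1)[0]  # extend left
        dr = np.diff(u[left : left + k], n=k - 1)[0]          # extend right
        if abs(dl) < abs(dr):
            left -= 1
    return left, left + order - 1

# A step: stencils on either side of the jump stay one-sided
u = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0])
```

Evaluating `eno_stencil(u, 3)` and `eno_stencil(u, 4)` shows both stencils staying entirely on their own side of the discontinuity, which is exactly the non-oscillatory property.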
Braunschweig, Carol A; Sheean, Patricia M; Peterson, Sarah J; Gomez Perez, Sandra; Freels, Sally; Troy, Karen L; Ajanaku, Folabomi C; Patel, Ankur; Sclamberg, Joy S; Wang, Zebin
2014-09-01
Assessment of nutritional status in intensive care unit (ICU) patients is limited. Computed tomography (CT) scans that include the first to fifth lumbar region, frequently completed for diagnostic purposes in ICU populations, can be used to quantify fat and lean body mass (LBM) depots. The purpose of this study was to assess whether these scans could measure change in skeletal muscle (SKM), visceral adipose (VAT), and intermuscular adipose (IMAT) tissue and to examine the association between the amount of energy and protein received and changes in these depots. Cross-sectional area of SKM, VAT, and IMAT from CT scans at the third lumbar region was quantified at 2 time points (CT1 and CT2). Change scores between CT1 and CT2 for each of these depots and the percentage of estimated energy/protein needs received were determined in 33 adults with acute respiratory failure. Descriptive statistics and multiple regression were used to evaluate the influence of baseline characteristics and the percentage of energy/protein needs received between CT1 and CT2 on the percentage change/day in SKM, IMAT, and VAT. Participants were on average (SD) 59.7 (16) years old and received 41% of estimated energy and 57% of protein needs. The average time between CT1 and CT2 was 10 (5) days. SKM declined 0.49%/day (men P = .07, women P = .09), and the percentage of energy needs received reduced this loss (β = 0.024, P = .03). No change in VAT or IMAT occurred. CT scans can be used to assess change in body composition in ICU patients and may assist in detecting the causal link between nutritional support and outcomes in future clinical trials. © 2013 American Society for Parenteral and Enteral Nutrition.
Mann, Steve D.; Perez, Kristy L.; McCracken, Emily K. E.; Shah, Jainil P.; Wong, Terence Z.; Tornai, Martin P.
2012-01-01
A pilot study is underway to quantify in vivo the uptake and distribution of Tc-99m Sestamibi in subjects without previous history of breast cancer using a dedicated SPECT-CT breast imaging system. Subjects undergoing diagnostic parathyroid imaging studies were consented and imaged as part of this IRB-approved breast imaging study. For each of the seven subjects, one randomly selected breast was imaged prone-pendant using the dedicated, compact breast SPECT-CT system underneath the shielded patient support. Iteratively reconstructed and attenuation and/or scatter corrected images were coregistered; CT images were segmented into glandular and fatty tissue by three different methods; the average concentration of Sestamibi was determined from the SPECT data using the CT-based segmentation and previously established quantification techniques. Very minor differences between the segmentation methods were observed, and the results indicate an average image-based in vivo Sestamibi concentration of 0.10 ± 0.16 μCi/mL with no preferential uptake by glandular or fatty tissues. PMID:22956950
Pediatric chest and abdominopelvic CT: organ dose estimation based on 42 patient models.
Tian, Xiaoyu; Li, Xiang; Segars, W Paul; Paulson, Erik K; Frush, Donald P; Samei, Ehsan
2014-02-01
To estimate organ dose from pediatric chest and abdominopelvic computed tomography (CT) examinations and evaluate the dependency of organ dose coefficients on patient size and CT scanner models. The institutional review board approved this HIPAA-compliant study and did not require informed patient consent. A validated Monte Carlo program was used to perform simulations in 42 pediatric patient models (age range, 0-16 years; weight range, 2-80 kg; 24 boys, 18 girls). Multidetector CT scanners were modeled on those from two commercial manufacturers (LightSpeed VCT, GE Healthcare, Waukesha, Wis; SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). Organ doses were estimated for each patient model for routine chest and abdominopelvic examinations and were normalized by volume CT dose index (CTDI(vol)). The relationships between CTDI(vol)-normalized organ dose coefficients and average patient diameters were evaluated across scanner models. For organs within the image coverage, CTDI(vol)-normalized organ dose coefficients largely showed a strong exponential relationship with the average patient diameter (R(2) > 0.9). The average percentage differences between the two scanner models were generally within 10%. For distributed organs and organs on the periphery of or outside the image coverage, the differences were generally larger (average, 3%-32%) mainly because of the effect of overranging. It is feasible to estimate patient-specific organ dose for a given examination with the knowledge of patient size and the CTDI(vol). These CTDI(vol)-normalized organ dose coefficients enable one to readily estimate patient-specific organ dose for pediatric patients in clinical settings. This dose information, and, as appropriate, attendant risk estimations, can provide more substantive information for the individual patient for both clinical and research applications and can yield more expansive information on dose profiles across patient populations within a practice. 
© RSNA, 2013.
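The reported size dependence suggests a simple two-parameter exponential fit; the sketch below uses illustrative synthetic coefficients, not the paper's fitted values:

```python
import numpy as np

def fit_dose_coefficient(diameter_cm, coeff):
    """Fit the exponential model h(d) = exp(a + b*d) reported for
    CTDIvol-normalised organ dose coefficients versus average patient
    diameter, via least squares on log(h). Returns (a, b)."""
    b, a = np.polyfit(diameter_cm, np.log(coeff), 1)
    return a, b

def organ_dose(diameter_cm, ctdi_vol_mgy, a, b):
    """Patient-specific organ dose estimate: the scanner-reported
    CTDIvol times the fitted size-dependent coefficient."""
    return ctdi_vol_mgy * np.exp(a + b * diameter_cm)

# Synthetic coefficients generated from known (a, b) = (0.5, -0.05)
d = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
h = np.exp(0.5 - 0.05 * d)
a, b = fit_dose_coefficient(d, h)
```

In practice the two model inputs, average patient diameter and CTDIvol, are both readily available in the clinic, which is what makes this parameterization convenient.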
Intra- and Interobserver Variability of Cochlear Length Measurements in Clinical CT.
Iyaniwura, John E; Elfarnawany, Mai; Riyahi-Alam, Sadegh; Sharma, Manas; Kassam, Zahra; Bureau, Yves; Parnes, Lorne S; Ladak, Hanif M; Agrawal, Sumit K
2017-07-01
The cochlear A-value measurement exhibits significant inter- and intraobserver variability, and its accuracy is dependent on the visualization method in clinical computed tomography (CT) images of the cochlea. An accurate estimate of the cochlear duct length (CDL) can be used to determine electrode choice, and frequency map the cochlea based on the Greenwood equation. Studies have described estimating the CDL using a single A-value measurement, however the observer variability has not been assessed. Clinical and micro-CT images of 20 cadaveric cochleae were acquired. Four specialists measured A-values on clinical CT images using both standard views and multiplanar reconstructed (MPR) views. Measurements were repeated to assess for intraobserver variability. Observer variabilities were evaluated using intra-class correlation and absolute differences. Accuracy was evaluated by comparison to the gold standard micro-CT images of the same specimens. Interobserver variability was good (average absolute difference: 0.77 ± 0.42 mm) using standard views and fair (average absolute difference: 0.90 ± 0.31 mm) using MPR views. Intraobserver variability had an average absolute difference of 0.31 ± 0.09 mm for the standard views and 0.38 ± 0.17 mm for the MPR views. MPR view measurements were more accurate than standard views, with average relative errors of 9.5 and 14.5%, respectively. There was significant observer variability in A-value measurements using both the standard and MPR views. Creating the MPR views increased variability between experts, however MPR views yielded more accurate results. Automated A-value measurement algorithms may help to reduce variability and increase accuracy in the future.
AMIDE: a free software tool for multimodality medical image analysis.
Loening, Andreas Markus; Gambhir, Sanjiv Sam
2003-07-01
Amide's a Medical Image Data Examiner (AMIDE) has been developed as a user-friendly, open-source software tool for displaying and analyzing multimodality volumetric medical images. Central to the package's abilities to simultaneously display multiple data sets (e.g., PET, CT, MRI) and regions of interest is the on-demand data reslicing implemented within the program. Data sets can be freely shifted, rotated, viewed, and analyzed with the program automatically handling interpolation as needed from the original data. Validation has been performed by comparing the output of AMIDE with that of several existing software packages. AMIDE runs on UNIX, Macintosh OS X, and Microsoft Windows platforms, and it is freely available with source code under the terms of the GNU General Public License.
Ben-Shlomo, A; Cohen, D; Bruckheimer, E; Bachar, G N; Konstantinovsky, R; Birk, E; Atar, E
2016-05-01
To compare the effective doses of needle biopsies between cone-beam C-arm CT (CBCT) and CT, based on dose measurements and simulations using adult and pediatric phantoms. Effective doses were calculated and compared based on measurements and Monte Carlo simulations of CT- and CBCT-guided biopsy procedures of the lungs, liver, and kidney using pediatric and adult phantoms. The effective doses for pediatric and adult phantoms, using our standard protocols for upper, middle, and lower lung, liver, and kidney biopsies, were significantly lower under CBCT guidance than under CT. The average effective dose for a 5-year-old for these five biopsies was 0.36 ± 0.05 mSv with the standard CBCT exposure protocols and 2.13 ± 0.26 mSv with CT. The adult average effective dose for the five biopsies was 1.63 ± 0.22 mSv with the standard CBCT protocols and 8.22 ± 1.02 mSv using CT. The CT effective dose was higher than that of the CBCT protocols for the child and adult phantoms by 803 and 590% for the upper lung, 639 and 525% for the mid-lung, and 461 and 251% for the lower lung, respectively. Similarly, the effective dose was higher by 691 and 762% for liver and 513 and 608% for kidney biopsies. Based on measurements and simulations with pediatric and adult phantoms, radiation effective doses during image-guided needle biopsies of the lung, liver, and kidney are significantly lower with CBCT than with CT.
Assessing stapes piston position using computed tomography: a cadaveric study.
Hahn, Yoav; Diaz, Rodney; Hartman, Jonathan; Bobinski, Matthew; Brodie, Hilary
2009-02-01
Temporal bone computed tomographic (CT) scanning in the postoperative stapedotomy patient is inaccurate in assessing stapes piston position within the vestibule. Poststapedotomy patients that have persistent vertigo may undergo CT scanning to assess the position of the stapes piston within the vestibule to rule out overly deep insertion. Vertigo is a recognized complication of the deep piston, and CT evaluation is often recommended. The accuracy of CT scan in this setting is unestablished. Stapedotomy was performed on 12 cadaver ears, and stainless steel McGee pistons were placed. The cadaver heads were then scanned using a fine-cut temporal bone protocol. Temporal bone dissection was performed with microscopic measurement of the piston depth in the vestibule. These values were compared with depth of intravestibular penetration measured on CT scan by 4 independent measurements. The intravestibular penetration as assessed by computed tomography was consistently greater than the value found on cadaveric anatomic dissection. The radiographic bias was greater when piston location within the vestibule was shallower. The axial CT scan measurement was 0.53 mm greater, on average, than the anatomic measurement. On average, the coronal CT measurement was 0.68 mm greater than the anatomic measurement. The degree of overestimation of penetration, however, was highly inconsistent. Standard temporal bone CT scan is neither an accurate nor precise examination of stapes piston depth within the vestibule. We found that CT measurement consistently overstated intravestibular piston depth. Computed tomography is not a useful study in the evaluation of piston depth for poststapedectomy vertigo and is of limited value in this setting.
Pathak, A K; Dutta, Narayan; Pattanaik, A K; Chaturvedi, V B; Sharma, K
2017-12-01
The study examined the effect of condensed tannin (CT)-containing Ficus infectoria and Psidium guajava leaf meal mixture (LMM) supplementation on nutrient metabolism, methane emission, and performance of lambs. Twenty-four lambs of ~6 months of age (average body weight 10.1±0.60 kg) were randomly divided into 4 dietary treatments (CT-0, CT-1, CT-1.5, and CT-2, containing 0, 1.0, 1.5, and 2.0 percent CT through LMM, respectively) consisting of 6 lambs each in a completely randomized design. All the lambs were offered a basal diet of wheat straw ad libitum and oat hay (100 g/d), along with the required amount of concentrate mixture to meet their nutrient requirements, for a period of 6 months. After 3 months of experimental feeding, a metabolism trial of 6 days' duration was conducted on all 24 lambs to determine nutrient digestibility and nitrogen balance. Urinary excretion of purine derivatives and microbial protein synthesis were determined using high-performance liquid chromatography. The respiration chamber study was started in the middle of the 5th month of the experimental feeding trial. Whole energy balance trials were conducted on individual lambs, one after the other, in an open-circuit respiration calorimeter. Intake of dry matter and organic matter (g/d) was significantly (p<0.05) higher in CT-1.5 than in the control. Digestibility of various nutrients did not differ among treatments. Nitrogen retention and microbial nitrogen synthesis (g/d) were significantly (p<0.01) higher in the CT-1.5 and CT-2 groups relative to CT-0. Total body weight gain (kg) and average daily gain (g) were significantly (linear, p<0.01) higher in CT-1.5, followed by CT-1 and CT-0, respectively. Feed conversion ratio (FCR) of lambs was significantly (linear, p<0.01) better in CT-1.5, followed by CT-2 and CT-0, respectively. Total wool yield (g; g/d) was linearly (p<0.05) higher for CT-1.5 than CT-0.
Methane emission was linearly decreased (p<0.05) in the CT groups, and the reduction was highest (p<0.01) in CT-2, followed by CT-1.5 and CT-1. Methane energy (kcal/d) was also linearly decreased (p<0.05) in the CT groups. CT supplementation at 1% to 2% of the diet through Ficus infectoria and Psidium guajava LMM significantly improved nitrogen metabolism, growth performance, wool yield, and FCR, and reduced methane emission by lambs.
Bhojani, Naeem; Paonessa, Jessica E; El Tayeb, Marawan M; Williams, James C; Hameed, Tariq A; Lingeman, James E
2018-04-03
To compare the sensitivity of noncontrast computed tomography (CT) with endoscopy for detection of renal calculi. Imaging modalities for detection of nephrolithiasis have centered on abdominal x-ray, ultrasound, and noncontrast CT. Sensitivities of 58%-62% (abdominal x-ray), 45% (ultrasound), and 95%-100% (CT) have been previously reported. However, these results have never been correlated with endoscopic findings. Idiopathic calcium oxalate stone formers with symptomatic calculi requiring ureteroscopy were studied. At the time of surgery, the number and the location of all calculi within the kidney were recorded followed by basket retrieval. Each calculus was measured and sent for micro-CT and infrared spectrophotometry. All CT scans were reviewed by the same genitourinary radiologist who was blinded to the endoscopic findings. The radiologist reported on the number, location, and size of each calculus. Eighteen renal units were studied in 11 patients. Average time from CT scan to ureteroscopy was 28.6 days. The mean number of calculi identified per kidney was 9.2 ± 6.1 for endoscopy and 5.9 ± 4.1 for CT (P <.004). The mean size of total renal calculi (sum of the longest stone diameters) per kidney was 22.4 ± 17.1 mm and 18.2 ± 13.2 mm for endoscopy and CT, respectively (P = .06). CT scan underreports the number of renal calculi, probably missing some small stones and being unable to distinguish those lying in close proximity to one another. However, the total stone burden seen by CT is, on average, accurate when compared with that found on endoscopic examination. Copyright © 2018 Elsevier Inc. All rights reserved.
Mendelsohn, Daniel; Strelzow, Jason; Dea, Nicolas; Ford, Nancy L; Batke, Juliet; Pennington, Andrew; Yang, Kaiyun; Ailon, Tamir; Boyd, Michael; Dvorak, Marcel; Kwon, Brian; Paquette, Scott; Fisher, Charles; Street, John
2016-03-01
Imaging modalities used to visualize spinal anatomy intraoperatively include X-ray studies, fluoroscopy, and computed tomography (CT). All of these emit ionizing radiation. Radiation emitted to the patient and the surgical team when performing surgeries using intraoperative CT-based spine navigation was compared. This is a retrospective cohort case-control study. Seventy-three patients underwent CT-navigated spinal instrumentation and 73 matched controls underwent spinal instrumentation with conventional fluoroscopy. Effective doses of radiation to the patient when the surgical team was inside and outside of the room were analyzed. The number of postoperative imaging investigations between navigated and non-navigated cases was compared. Intraoperative X-ray imaging, fluoroscopy, and CT dosages were recorded and standardized to effective doses. The number of postoperative imaging investigations was compared with the matched cohort of surgical cases. A literature review identified historical radiation exposure values for fluoroscopy-guided spinal instrumentation. The 73 navigated operations involved an average of 5.44 levels of instrumentation. Thoracic and lumbar instrumentations had higher radiation emission from all modalities (CT, X-ray imaging, and fluoroscopy) compared with cervical cases (6.93 millisieverts [mSv] vs. 2.34 mSv). Major deformity and degenerative cases involved more radiation emission than trauma or oncology cases (7.05 mSv vs. 4.20 mSv). On average, the total radiation dose to the patient was 8.7 times the radiation emitted while the surgical team was inside the operating room. Total radiation exposure to the patient was 2.77 times the values reported in the literature for thoracolumbar instrumentations performed without navigation. In comparison, the radiation emitted to the patient while the surgical team was inside the operating room was 2.50 times lower than for non-navigated thoracolumbar instrumentations.
The average total radiation exposure to the patient was 5.69 mSv, a value less than a single routine lumbar CT scan (7.5 mSv). The average radiation exposure to the patient in the present study was approximately one quarter the recommended annual occupational radiation exposure. Navigation did not reduce the number of postoperative X-rays or CT scans obtained. Intraoperative CT navigation increases the radiation exposure to the patient and reduces the radiation exposure to the surgeon when compared with values reported in the literature. Intraoperative CT navigation improves the accuracy of spine instrumentation with acceptable patient radiation exposure and reduced surgical team exposure. Surgeons should be aware of the implications of radiation exposure to both the patient and the surgical team when using intraoperative CT navigation. Copyright © 2016 Elsevier Inc. All rights reserved.
Schulte, Friederike A; Lambers, Floor M; Mueller, Thomas L; Stauber, Martin; Müller, Ralph
2014-04-01
Time-lapsed in vivo micro-computed tomography is a powerful tool to analyse longitudinal changes in the bone micro-architecture. Registration can overcome problems associated with spatial misalignment between scans; however, it requires image interpolation which might affect the outcome of a subsequent bone morphometric analysis. The impact of the interpolation error itself, though, has not been quantified to date. Therefore, the purpose of this ex vivo study was to elaborate the effect of different interpolator schemes [nearest neighbour, tri-linear and B-spline (BSP)] on bone morphometric indices. None of the interpolator schemes led to significant differences between interpolated and non-interpolated images, with the lowest interpolation error found for BSPs (1.4%). Furthermore, depending on the interpolator, the processing order of registration, Gaussian filtration and binarisation played a role. Independent from the interpolator, the present findings suggest that the evaluation of bone morphometry should be done with images registered using greyscale information.
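The effect being quantified can be reproduced on synthetic data: resample a smooth "volume" at sub-voxel positions with each interpolator and compare against the analytic ground truth (a SciPy-based sketch, not the study's registration pipeline or bone data):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def interp_errors(n=24, offset=0.37):
    """Resample a smooth synthetic volume at sub-voxel positions with
    nearest-neighbour (order 0), tri-linear (order 1) and cubic B-spline
    (order 3) interpolation, returning each scheme's mean relative
    error (%) against the analytic ground truth."""
    x = np.linspace(0.0, 2.0 * np.pi, n)
    f = lambda a, b, c: 2.0 + np.sin(a) * np.cos(b) + np.sin(c)
    X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
    vol = f(X, Y, Z)
    # query the volume at interior positions shifted by a fraction of a voxel
    idx = np.arange(2, n - 3) + offset
    I, J, K = np.meshgrid(idx, idx, idx, indexing="ij")
    step = x[1] - x[0]
    truth = f(I * step, J * step, K * step)
    errs = {}
    for name, order in [("nearest", 0), ("trilinear", 1), ("bspline", 3)]:
        est = map_coordinates(vol, [I, J, K], order=order)
        errs[name] = np.mean(np.abs(est - truth)) / np.mean(truth) * 100.0
    return errs
```

On smooth data the B-spline error is smallest, consistent with the ~1.4% figure reported above being achieved by B-spline interpolation; real micro-CT noise and the processing order would modify the absolute numbers.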
Groundwater contaminant plume maps and volumes, 100-K and 100-N Areas, Hanford Site, Washington
Johnson, Kenneth H.
2016-09-27
This study provides an independent estimate of the areal and volumetric extent of groundwater contaminant plumes that are affected by waste disposal in the 100-K and 100-N Areas (study area) along the Columbia River Corridor of the Hanford Site. The Hanford Natural Resource Trustee Council requested that the U.S. Geological Survey perform this interpolation to assess the accuracy of delineations previously conducted by the U.S. Department of Energy and its contractors, in order to assure that the Natural Resource Damage Assessment could rely on these analyses. This study is based on previously existing chemical (or radionuclide) sampling and analysis data downloaded from publicly available Hanford Site Internet sources, geostatistically selected and interpreted as representative of current (from 2009 through part of 2012) but average conditions for groundwater contamination in the study area. The study is limited in scope to five contaminants—hexavalent chromium, tritium, nitrate, strontium-90, and carbon-14—all detected at concentrations greater than regulatory limits in the past. All recent analytical concentrations (or activities) for each contaminant, adjusted for radioactive decay, non-detections, and co-located wells, were log-transformed, and these transformed values were averaged for each well location. The log-normally transformed well averages were spatially interpolated on a 50 × 50-meter (m) grid extending across the combined 100-N and 100-K Areas study area, limited to avoid unrepresentative extrapolation, using the minimum-curvature geostatistical interpolation method provided by SURFER® data analysis software. Plume extents were interpreted by contouring the interpolated log-normally transformed data, again using SURFER®, along lines of equal contaminant concentration at an appropriate established regulatory concentration. Total areas for each plume were calculated as an indicator of relative environmental damage.
These plume extents are shown graphically and in tabular form for comparison to previous estimates. Plume data also were interpolated to a finer grid (10 × 10 m) for some processing, particularly to estimate volumes of contaminated groundwater. However, hydrogeologic transport modeling was not considered for the interpolation. The compilation of plume extents for each contaminant also allowed estimates of overlap of the plumes, or areas with more than one contaminant above regulatory standards. A mapping of saturated aquifer thickness also was derived across the 100-K and 100-N study area, based on the vertical difference between the groundwater level (water table) at the top and the altitude of the top of the Ringold Upper Mud geologic unit, considered the bottom of the uppermost unconfined aquifer. Saturated thickness was calculated for each cell in the finer (10 × 10 m) grid. The summation of the cells' saturated thickness values within each polygon of plume regulatory exceedance provided an estimate of the total volume of contaminated aquifer, and the results also were checked using a SURFER® volumetric integration procedure. The total volume of contaminated groundwater in each plume was derived by multiplying the aquifer saturated-thickness volume by a locally representative value of porosity (0.3). Estimates of the uncertainty of the plume delineation also are presented. “Upper limit” plume delineations were calculated for each contaminant using the same procedure as the “average” plume extent, except with values at each well set at a 95-percent upper confidence limit around the log-normally transformed mean concentrations, based on the standard error for the distribution of the mean value in that well; “lower limit” plumes were calculated at a 5-percent confidence limit around the geometric mean.
These upper- and lower-limit estimates are considered unrealistic because the statistics were increased or decreased at each well simultaneously and were not adjusted for correlation among the well distributions (i.e., it is not realistic that all wells would be high simultaneously). Sources of the variability in the distributions used in the upper- and lower-extent maps include time-varying concentrations and analytical errors. The plume delineations developed in this study are similar to the previous plume descriptions developed by the U.S. Department of Energy and its contractors. The differences are primarily due to data selection and interpolation methodology. The differences in delineated plumes are not sufficient to result in the Hanford Natural Resource Trustee Council adjusting its understanding of contaminant impact or remediation.
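The volume calculation described above (summing per-cell saturated thickness over the plume footprint, scaling by cell area and porosity) reduces to a few lines. The grid values below are illustrative only, not Hanford data; the function and variable names are ours.

```python
import numpy as np

# Hypothetical 10 m x 10 m grid: saturated thickness (m) per cell and a
# boolean mask of cells where the interpolated concentration exceeds the
# regulatory limit.
CELL_AREA_M2 = 10.0 * 10.0
POROSITY = 0.3  # locally representative porosity value used in the study

def contaminated_water_volume(thickness_m, exceeds_limit):
    """Total contaminated groundwater volume (m^3): sum over plume cells of
    saturated thickness x cell area, then scale by porosity."""
    aquifer_volume = np.sum(thickness_m[exceeds_limit]) * CELL_AREA_M2
    return aquifer_volume * POROSITY

thickness = np.full((4, 4), 2.0)   # uniform 2 m saturated thickness
plume = np.zeros((4, 4), dtype=bool)
plume[0, :] = True                 # 4 cells above the regulatory limit
print(contaminated_water_volume(thickness, plume))  # 4 * 100 * 2 * 0.3 = 240.0
```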
Does Iterative Reconstruction Lower CT Radiation Dose: Evaluation of 15,000 Examinations
Noël, Peter B.; Renger, Bernhard; Fiebich, Martin; Münzel, Daniela; Fingerle, Alexander A.; Rummeny, Ernst J.; Dobritz, Martin
2013-01-01
Purpose: Evaluation of 15,000 computed tomography (CT) examinations to investigate whether iterative reconstruction (IR) sustainably reduces radiation exposure. Method and Materials: Information from 15,000 CT examinations was collected, including all aspects of the exams such as scan parameters, patient information, and reconstruction instructions. The examinations were acquired between January 2010 and December 2012; after 15 months, a first-generation IR algorithm was installed. To collect the necessary information from PACS, RIS, MPPS and structured reports, a dose monitoring system was developed. To harvest all available information, an optical character recognition system was integrated, for example to collect information from the screenshot CT dose report. The tool transfers all data to a database for further processing, such as the calculation of effective dose and organ doses. To evaluate whether IR provides a sustainable dose reduction, the effective dose values were statistically analyzed with respect to protocol type, diagnostic indication, and patient population. Results: IR has the potential to reduce radiation dose significantly. Before the clinical introduction of IR the average effective dose was 10.1 ± 7.8 mSv, and with IR 8.9 ± 7.1 mSv (p = 0.01). Especially in CTA, with the possibility to use kV-reduction protocols, such as aortic CTAs (before IR: average 14.2 ± 7.8 mSv, median 11.4 mSv; with IR: average 9.9 ± 7.4 mSv, median 7.4 mSv) or pulmonary CTAs (before IR: average 9.7 ± 6.2 mSv, median 7.7 mSv; with IR: average 6.4 ± 4.7 mSv, median 4.8 mSv), the dose reduction effect is significant (p = 0.01). By contrast, for unenhanced low-dose cranial scans (for example, of the sinuses) the reduction is not significant (before IR: average 6.6 ± 5.8 mSv, median 3.9 mSv; with IR: average 6.0 ± 3.1 mSv, median 3.2 mSv). Conclusion: The dose aspect remains a priority in CT research. Iterative reconstruction algorithms sustainably and significantly reduce radiation dose in the clinical routine.
Our results illustrate that not only in studies with a limited number of patients but also in the clinical routine, IR provides long-term dose savings. PMID:24303035
Does iterative reconstruction lower CT radiation dose: evaluation of 15,000 examinations.
Noël, Peter B; Renger, Bernhard; Fiebich, Martin; Münzel, Daniela; Fingerle, Alexander A; Rummeny, Ernst J; Dobritz, Martin
2013-01-01
Evaluation of 15,000 computed tomography (CT) examinations to investigate whether iterative reconstruction (IR) sustainably reduces radiation exposure. Information from 15,000 CT examinations was collected, including all aspects of the exams such as scan parameters, patient information, and reconstruction instructions. The examinations were acquired between January 2010 and December 2012; after 15 months, a first-generation IR algorithm was installed. To collect the necessary information from PACS, RIS, MPPS and structured reports, a dose monitoring system was developed. To harvest all available information, an optical character recognition system was integrated, for example to collect information from the screenshot CT dose report. The tool transfers all data to a database for further processing, such as the calculation of effective dose and organ doses. To evaluate whether IR provides a sustainable dose reduction, the effective dose values were statistically analyzed with respect to protocol type, diagnostic indication, and patient population. IR has the potential to reduce radiation dose significantly. Before the clinical introduction of IR the average effective dose was 10.1 ± 7.8 mSv, and with IR 8.9 ± 7.1 mSv (p = 0.01). Especially in CTA, with the possibility to use kV-reduction protocols, such as aortic CTAs (before IR: average 14.2 ± 7.8 mSv, median 11.4 mSv; with IR: average 9.9 ± 7.4 mSv, median 7.4 mSv) or pulmonary CTAs (before IR: average 9.7 ± 6.2 mSv, median 7.7 mSv; with IR: average 6.4 ± 4.7 mSv, median 4.8 mSv), the dose reduction effect is significant (p = 0.01). By contrast, for unenhanced low-dose cranial scans (for example, of the sinuses) the reduction is not significant (before IR: average 6.6 ± 5.8 mSv, median 3.9 mSv; with IR: average 6.0 ± 3.1 mSv, median 3.2 mSv). The dose aspect remains a priority in CT research. Iterative reconstruction algorithms sustainably and significantly reduce radiation dose in the clinical routine.
Our results illustrate that not only in studies with a limited number of patients but also in the clinical routine, IR provides long-term dose savings.
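The headline comparisons can be checked arithmetically. The helper below simply computes the percent reduction between two reported group means; the inputs are the abstract's published values, and the function name is ours.

```python
def relative_reduction(before_mean, after_mean):
    """Percent reduction in mean effective dose after introducing IR."""
    return 100.0 * (before_mean - after_mean) / before_mean

print(round(relative_reduction(10.1, 8.9), 1))   # overall cohort: 11.9 (% lower)
print(round(relative_reduction(14.2, 9.9), 1))   # aortic CTA: 30.3
print(round(relative_reduction(9.7, 6.4), 1))    # pulmonary CTA: 34.0
```

The CTA protocols, where kV-reduction is possible, show roughly three times the relative saving of the overall cohort, matching the abstract's emphasis.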
Airborne laser scanning for forest health status assessment and radiative transfer modelling
NASA Astrophysics Data System (ADS)
Novotny, Jan; Zemek, Frantisek; Pikl, Miroslav; Janoutova, Ruzena
2013-04-01
Structural parameters of forest stands/ecosystems are an important complementary source of information to spectral signatures obtained from airborne imaging spectroscopy when quantitative assessment of forest stands is the focus, such as estimation of forest biomass, biochemical properties (e.g. chlorophyll/water content), etc. The parameterization of radiative transfer (RT) models used in the latter case requires the three-dimensional spatial distribution of green foliage and woody biomass. Airborne LiDAR data acquired over forest sites carry this kind of 3D information. The main objective of the study was to compare the results from several approaches to interpolation of the digital elevation model (DEM) and digital surface model (DSM). We worked with airborne LiDAR data of varying density (TopEye Mk II 1,064 nm instrument, 1-5 points/m2) acquired over the Norway spruce forests situated in the Beskydy Mountains, the Czech Republic. Three different interpolation algorithms of increasing complexity were tested: i/ a nearest neighbour approach implemented in the BCAL software package (Idaho Univ.); ii/ averaging and linear interpolation techniques used in the OPALS software (Vienna Univ. of Technology); iii/ an active contour technique implemented in the TreeVis software (Univ. of Freiburg). We defined two spatial resolutions for the resulting coupled raster DEM and DSM outputs, 0.4 m and 1 m, calculated by each algorithm. The grids correspond to the same spatial resolutions of the hyperspectral imagery data for which the DEMs were used in a/ geometrical correction and b/ building complex tree models for radiative transfer modelling. We applied two types of analyses when comparing results from the different interpolations/raster resolutions: 1/ comparing the calculated DEMs or DSMs between themselves; 2/ comparison with field data: DEM with measurements from a reference GPS, DSM with field tree allometric measurements, where tree height was calculated as DSM-DEM.
The results of the analyses show that: 1/ averaging techniques tend to underestimate the tree height, and the generated surface does not follow the first LiDAR echoes for either the 1 m or the 0.4 m pixel size; 2/ we did not find any significant difference between tree heights calculated by the nearest neighbour algorithm and the active contour technique for the 1 m pixel output, but the difference increased with finer resolution (0.4 m); 3/ the accuracy of the DEMs calculated by the tested algorithms is similar.
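The underestimation by averaging surfaces has a simple per-cell cause that can be sketched directly: a mean over all echoes in a grid cell mixes crown-interior and canopy-top returns, while a maximum follows the first (highest) echoes. The echo heights below are invented for illustration.

```python
# One grid cell containing simulated LiDAR echo heights (m): a few returns
# near the tree top plus returns from deeper in the crown.
echoes = [18.7, 18.2, 17.9, 12.4, 9.8, 3.1]   # illustrative values
ground = 0.0                                   # DEM height for this cell

dsm_mean = sum(echoes) / len(echoes)  # averaging-style surface
dsm_max = max(echoes)                 # surface following the first echoes

height_mean = dsm_mean - ground
height_max = dsm_max - ground
print(height_mean < height_max)  # True: averaging underestimates tree height
```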
NASA Astrophysics Data System (ADS)
Meier, Walter Neil
This thesis demonstrates the applicability of data assimilation methods to improve observed and modeled ice motion fields and shows the effects of assimilated motion on Arctic processes important to the global climate and of practical concern to human activities. Ice motions derived from 85 GHz and 37 GHz SSM/I imagery and estimated from two-dimensional dynamic-thermodynamic sea ice models are compared to buoy observations. Mean error, error standard deviation, and correlation with buoys are computed for the model domain. SSM/I motions generally have a lower bias, but higher error standard deviations and lower correlation with buoys than model motions. There are notable variations in the statistics depending on the region of the Arctic, season, and ice characteristics. Assimilation methods are investigated and blending and optimal interpolation strategies are implemented. Blending assimilation improves error statistics slightly, but the effect of the assimilation is reduced by noise in the SSM/I motions, so blending is not an effective method to improve ice motion estimates. However, optimal interpolation assimilation reduces motion errors by 25-30% relative to modeled motions and 40-45% relative to SSM/I motions. Optimal interpolation assimilation is beneficial in all regions, seasons and ice conditions, and is particularly effective in regimes where modeled and SSM/I errors are high. Assimilation alters annual average motion fields. Modeled ice products of ice thickness, ice divergence, Fram Strait ice volume export, transport across the Arctic and interannual basin averages are also influenced by assimilated motions. Assimilation improves estimates of pollutant transport and corrects synoptic-scale errors in the motion fields caused by incorrect forcings or errors in model physics. The portability of the optimal interpolation assimilation method is demonstrated by implementing the strategy in an ice thickness distribution (ITD) model.
This research presents an innovative method of combining a new data set of SSM/I-derived ice motions with three different sea ice models via two data assimilation methods. The work described here is the first example of assimilating remotely-sensed data within high-resolution and detailed dynamic-thermodynamic sea ice models. The results demonstrate that assimilation is a valuable resource for determining accurate ice motion in the Arctic.
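The core of optimal interpolation can be sketched for a single scalar motion component: the analysis weights the model estimate and the observation inversely by their error variances. This is a minimal one-point sketch, not the thesis's full multivariate scheme; the numbers and variances below are assumptions.

```python
def optimal_interpolation(model, obs, var_model, var_obs):
    """Scalar optimal-interpolation analysis. The gain tends to 1 when the
    model error dominates (trust the observation) and to 0 when the
    observation error dominates (trust the model)."""
    gain = var_model / (var_model + var_obs)
    return model + gain * (obs - model)

# Illustrative ice-motion components (km/day); error variances are assumed.
u_model, u_ssmi = 4.0, 6.0
analysis = optimal_interpolation(u_model, u_ssmi, var_model=1.0, var_obs=1.0)
print(analysis)  # equal error variances -> midpoint, 5.0
```

With equal variances the analysis splits the difference; in regimes where both model and SSM/I errors are large but uncorrelated, this variance-weighted combination is exactly why the assimilated field outperforms either input alone.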
Time series inversion of spectra from ground-based radiometers
NASA Astrophysics Data System (ADS)
Christensen, O. M.; Eriksson, P.
2013-07-01
Retrieving time series of atmospheric constituents from ground-based spectrometers often requires different temporal averaging depending on the altitude region in focus. This can lead to several datasets existing for one instrument, which complicates validation and comparisons between instruments. This paper puts forth a possible solution by incorporating the temporal domain into the maximum a posteriori (MAP) retrieval algorithm. The state vector is increased to include measurements spanning a time period, and the temporal correlations between the true atmospheric states are explicitly specified in the a priori uncertainty matrix. This allows the MAP method to effectively select the best temporal smoothing for each altitude, removing the need for several datasets to cover different altitudes. The method is compared to traditional averaging of spectra using a simulated retrieval of water vapour in the mesosphere. The simulations show that the method offers a significant advantage compared to the traditional method, extending the sensitivity an additional 10 km upwards without reducing the temporal resolution at lower altitudes. The method is also tested on the Onsala Space Observatory (OSO) water vapour microwave radiometer confirming the advantages found in the simulation. Additionally, it is shown how the method can interpolate data in time and provide diagnostic values to evaluate the interpolated data.
Preprocessing the Nintendo Wii Board Signal to Derive More Accurate Descriptors of Statokinesigrams.
Audiffren, Julien; Contal, Emile
2016-08-01
During the past few years, the Nintendo Wii Balance Board (WBB) has been used in postural control research as an affordable but less reliable replacement for laboratory-grade force platforms. However, the WBB suffers from some limitations, such as lower accuracy and an inconsistent sampling rate. In this study, we focus on the latter, namely the non-uniform acquisition frequency. We show that this problem, combined with the poor signal-to-noise ratio of the WBB, can drastically decrease the quality of the obtained information if not handled properly. We propose a new resampling method, Sliding Window Average with Relevance Interval Interpolation (SWARII), specifically designed with the WBB in mind, for which we provide an open-source implementation. We compare it with several existing methods commonly used in postural control, both on synthetic and experimental data. The results show that some methods, such as linear and piecewise-constant interpolation, should definitely be avoided, particularly when the resulting signal is differentiated, which is necessary to estimate speed, an important feature in postural control. Other methods, such as averaging on sliding windows or SWARII, perform significantly better on the synthetic dataset and produce results more similar to the laboratory-grade AMTI force plate (AFP) during experiments. Those methods should be preferred when resampling data collected from a WBB.
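The sliding-window idea can be sketched in a few lines. Note this is a simplified window-average resampler in the spirit of SWARII, not the authors' full relevance-interval scheme; all names and data below are ours.

```python
import numpy as np

def sliding_window_resample(t, y, t_out, half_width):
    """Resample non-uniformly sampled data onto a uniform grid: each output
    sample averages all input samples within +/- half_width of the output
    time; empty windows fall back to linear interpolation. A simplified
    sketch of sliding-window averaging, without SWARII's relevance-interval
    weighting."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    out = np.empty(len(t_out))
    for i, tc in enumerate(t_out):
        mask = np.abs(t - tc) <= half_width
        out[i] = y[mask].mean() if mask.any() else np.interp(tc, t, y)
    return out

# Irregularly timed board samples (s): a roughly constant signal plus noise.
t = [0.00, 0.03, 0.11, 0.12, 0.24, 0.31]
y = [1.0, 1.2, 0.8, 1.1, 0.9, 1.0]
resampled = sliding_window_resample(t, y, np.arange(0.0, 0.4, 0.1), 0.05)
print(resampled)
```

Averaging within each window suppresses high-frequency noise, which matters most when the signal is later differentiated to estimate speed.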
Preprocessing the Nintendo Wii Board Signal to Derive More Accurate Descriptors of Statokinesigrams
Audiffren, Julien; Contal, Emile
2016-01-01
During the past few years, the Nintendo Wii Balance Board (WBB) has been used in postural control research as an affordable but less reliable replacement for laboratory-grade force platforms. However, the WBB suffers from some limitations, such as lower accuracy and an inconsistent sampling rate. In this study, we focus on the latter, namely the non-uniform acquisition frequency. We show that this problem, combined with the poor signal-to-noise ratio of the WBB, can drastically decrease the quality of the obtained information if not handled properly. We propose a new resampling method, Sliding Window Average with Relevance Interval Interpolation (SWARII), specifically designed with the WBB in mind, for which we provide an open-source implementation. We compare it with several existing methods commonly used in postural control, both on synthetic and experimental data. The results show that some methods, such as linear and piecewise-constant interpolation, should definitely be avoided, particularly when the resulting signal is differentiated, which is necessary to estimate speed, an important feature in postural control. Other methods, such as averaging on sliding windows or SWARII, perform significantly better on the synthetic dataset and produce results more similar to the laboratory-grade AMTI force plate (AFP) during experiments. Those methods should be preferred when resampling data collected from a WBB. PMID:27490545
Target volume and artifact evaluation of a new data-driven 4D CT.
Martin, Rachael; Pan, Tinsu
Four-dimensional computed tomography (4D CT) is often used to define the internal gross target volume (IGTV) for radiation therapy of lung cancer. Traditionally, this technique requires the use of an external motion surrogate; however, a new, image-data-driven 4D CT has become available. This study aims to describe this data-driven 4D CT and compare target contours created with it to those created using standard 4D CT. Cine CT data of 35 patients undergoing stereotactic body radiation therapy were collected and sorted into phases using standard and data-driven 4D CT. IGTV contours were drawn using a semiautomated method on maximum intensity projection images of both 4D CT methods. Errors resulting from the reproducibility of the method were characterized. A comparison of phase image artifacts was made using a normalized cross-correlation method that assigned a score from +1 (data-driven "better") to -1 (standard "better"). The volume difference between the data-driven and standard IGTVs was not significant (data-driven was 2.1 ± 1.0% smaller, P = .08). The Dice similarity coefficient showed good similarity between the contours (0.949 ± 0.006). The mean surface separation was 0.4 ± 0.1 mm and the Hausdorff distance was 3.1 ± 0.4 mm. An average artifact score of +0.37 indicated that the data-driven method had significantly fewer and/or less severe artifacts than the standard method (P = 1.5 × 10⁻⁵ for difference from 0). On average, the difference between IGTVs derived from data-driven and standard 4D CT was not clinically relevant or statistically significant, suggesting data-driven 4D CT can be used in place of standard 4D CT without adjustments to IGTVs. The relatively large differences in some patients were usually attributed to limitations in automatic contouring or differences in artifacts. Artifact reduction and setup simplicity suggest a clinical advantage to data-driven 4D CT. Published by Elsevier Inc.
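The Dice similarity coefficient used to compare the contours has a standard definition, 2|A ∩ B| / (|A| + |B|), which can be computed directly on binary masks. The masks below are invented toy data, not the study's contours.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A intersect B| / (|A| + |B|); 1.0 for identical non-empty masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Hypothetical flattened IGTV voxel masks from the two sorting methods.
standard = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
data_driven = np.array([0, 1, 1, 1, 1, 1, 1, 0, 0, 0])
print(dice(standard, data_driven))  # 2*3 / (4+6) = 0.6
```

A Dice value of 0.949, as reported, indicates near-complete overlap between the two IGTV definitions.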
NASA Astrophysics Data System (ADS)
Maspero, Matteo; van den Berg, Cornelis A. T.; Landry, Guillaume; Belka, Claus; Parodi, Katia; Seevinck, Peter R.; Raaymakers, Bas W.; Kurz, Christopher
2017-12-01
A magnetic resonance (MR)-only radiotherapy workflow can reduce cost, radiation exposure and uncertainties introduced by CT-MRI registration. A crucial prerequisite is generating the so-called pseudo-CT (pCT) images for accurate dose calculation and planning. Many pCT generation methods have been proposed in the scope of photon radiotherapy. This work aims at verifying for the first time whether a commercially available photon-oriented pCT generation method can be employed for accurate intensity-modulated proton therapy (IMPT) dose calculation. A retrospective study was conducted on ten prostate cancer patients. For pCT generation from MR images, a commercial solution for creating bulk-assigned pCTs, called MR for Attenuation Correction (MRCAT), was employed. The assigned pseudo-Hounsfield Unit (HU) values were adapted to yield increased agreement with the reference CT in terms of proton range. Internal air cavities were copied from the CT to minimise inter-scan differences. CT- and MRCAT-based dose calculations for opposing-beam IMPT plans were compared by gamma analysis and evaluation of clinically relevant target and organ-at-risk dose volume histogram (DVH) parameters. The proton range in beam's eye view (BEV) was compared using single field uniform dose (SFUD) plans. On average, a (2%, 2 mm) gamma pass rate of 98.4% was obtained using a 10% dose threshold after adaptation of the pseudo-HU values. Mean differences between CT- and MRCAT-based dose in the DVH parameters were below 1 Gy (<1.5%). The median proton range difference was 0.1 mm, with on average 96% of all BEV dose profiles showing a range agreement better than 3 mm. Results suggest that accurate MR-based proton dose calculation using an automatic commercial bulk-assignment pCT generation method, originally designed for photon radiotherapy, is feasible following adaptation of the assigned pseudo-HU values.
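The (2%, 2 mm) gamma criterion combines a dose-difference tolerance and a distance-to-agreement tolerance into a single pass/fail index per point. The sketch below is a simplified 1D global gamma analysis on a synthetic profile, not the software or data used in the study; all names are ours.

```python
import numpy as np

def gamma_pass_rate(ref, evl, x, dose_tol=0.02, dist_tol=2.0, threshold=0.1):
    """1D global gamma sketch: for each reference point above the dose
    threshold, gamma is the minimum over evaluated points of
    sqrt((dose diff / dose_tol)^2 + (distance / dist_tol)^2); a point
    passes if gamma <= 1. dose_tol is a fraction of the reference maximum;
    dist_tol and x are in mm. Returns the pass rate in percent."""
    ref, evl, x = map(np.asarray, (ref, evl, x))
    d_max = ref.max()
    keep = ref > threshold * d_max
    dd = (evl[None, :] - ref[keep, None]) / (dose_tol * d_max)
    dx = (x[None, :] - x[keep, None]) / dist_tol
    gamma = np.sqrt(dd ** 2 + dx ** 2).min(axis=1)
    return 100.0 * np.mean(gamma <= 1.0)

x = np.arange(0.0, 50.0, 1.0)              # positions in mm
ref = np.exp(-((x - 25.0) / 10.0) ** 2)    # synthetic profile, not patient data
print(gamma_pass_rate(ref, ref, x))        # identical doses -> 100.0
```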
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, I; Hossain, S; Algan, O
Purpose: To quantitatively investigate positioning and dosimetric uncertainties due to 4D-CT intra-phase motion in the internal target volume (ITV) associated with respiratory-gated radiation therapy for patients set up with image-guided radiation therapy (IGRT) using free-breathing or average-phase CT images. Methods: A lung phantom with an embedded tissue-equivalent target was imaged with CT while stationary and while moving. Four sets of structures were outlined: (a) the actual target on CT images of the stationary target, (b) the ITV on CT images of the freely moving phantom, (c) ITVs from the ten different phases (10%-100%), and (d) the ITV on CT images generated from combining three phases: 40%-50%-60%. The variations in volume, length and center position of the ITVs and their effects on dosimetry during dose delivery for patients set up with image guidance were investigated. Results: Intra-phase motion due to breathing affects the volume, center position and length of the ITVs from different respiratory phases. The ITVs vary by about 10% from one phase to another. The largest ITV is measured on the free-breathing CT images and the smallest on the stationary CT images. The ITV lengths vary by about 4 mm and may shrink or elongate depending on the motion phase. The center position of the ITV varies between the different motion phases, shifting up to 10 mm from the stationary position, which is nearly equal to the motion amplitude. This causes systematic shifts during dose delivery with beam gating using certain phases (40%-50%-60%) for patients set up with IGRT using free-breathing or average-phase CT images. The dose coverage of the ITV depends on the margins used for the treatment planning volume, where margins larger than the motion amplitudes are needed to ensure dose coverage of the ITV. Conclusion: Volume, length, and center position of the ITVs change between the different motion phases.
Large systematic shifts are induced by respiratory gating with ITVs on certain phases when patients are set up with IGRT using free-breathing or average-phase CT images.
Alberich-Bayarri, A; Martí-Bonmatí, L; Sanz-Requena, R; Sánchez-González, J; Hervás Briz, V; García-Martí, G; Pérez, M Á
2014-01-01
We used an animal model to analyze the reproducibility and accuracy of certain imaging biomarkers of bone quality in comparison to a gold standard of computed microtomography (μCT). We used magnetic resonance (MR) imaging and μCT to study the metaphyses of 5 sheep tibiae. The MR images (3 T) were acquired with a T1-weighted gradient echo sequence and an isotropic spatial resolution of 180 μm. The μCT images were acquired using a scanner with a spatial resolution of 7.5 μm isotropic voxels. In the preparation of the images, we applied equalization, interpolation, and thresholding algorithms. In the quantitative analysis, we calculated the percentage of bone volume (BV/TV), the trabecular thickness (Tb.Th), the trabecular separation (Tb.Sp), the trabecular number (Tb.N), the 2D fractal dimension (D(2D)), the 3D fractal dimension (D(3D)), and the elastic modulus in the three spatial directions (Ex, Ey and Ez). The morphometric and mechanical quantification of trabecular bone by MR was very reproducible, with percentages of variation below 9% for all the parameters. Its accuracy compared to the gold standard (μCT) was high, with errors less than 15% for BV/TV, D(2D), D(3D), and E(app)x, E(app)y and E(app)z. Our experimental results in animals confirm that the parameters BV/TV, D(2D), D(3D), and E(app)x, E(app)y and E(app)z obtained by MR have excellent reproducibility and accuracy and can be used as imaging biomarkers for the quality of trabecular bone. Copyright © 2013 SERAM. Published by Elsevier España. All rights reserved.
Deep-learning derived features for lung nodule classification with limited datasets
NASA Astrophysics Data System (ADS)
Thammasorn, P.; Wu, W.; Pierce, L. A.; Pipavath, S. N.; Lampe, P. D.; Houghton, A. M.; Haynor, D. R.; Chaovalitwongse, W. A.; Kinahan, P. E.
2018-02-01
Only a few percent of indeterminate nodules found in lung CT images are cancer. However, enabling earlier diagnosis is important to avoid invasive procedures or long-term surveillance for those benign nodules. We are evaluating a classification framework using radiomics features derived with a machine learning approach from a small data set of indeterminate CT lung nodule images. We used a retrospective analysis of 194 cases with pulmonary nodules in CT images with or without contrast enhancement from lung cancer screening clinics. The nodules were contoured by a radiologist and texture features of the lesions were calculated. In addition, semantic features describing shape were categorized. We also explored a Multiband network, a feature derivation path that uses a modified convolutional neural network (CNN) with a Triplet Network. This was trained to create discriminative feature representations useful for variable-sized nodule classification. The diagnostic accuracy was evaluated for multiple machine learning algorithms using texture, shape, and CNN features. In the CT contrast-enhanced group, the texture or semantic shape features yielded an overall diagnostic accuracy of 80%. Use of a standard deep learning network in the framework for feature derivation yielded features that substantially underperformed compared to texture and/or semantic features. However, the proposed Multiband approach of feature derivation produced results similar in diagnostic accuracy to the texture and semantic features. While the Multiband feature derivation approach did not outperform the texture and/or semantic features, its equivalent performance indicates promise for future improvements to increase diagnostic accuracy. Importantly, the Multiband approach adapts readily to different-size lesions without interpolation, and performed well with a relatively small amount of training data.
Van Doormaal, Mark; Zhou, Yu-Qing; Zhang, Xiaoli; Steinman, David A; Henkelman, R Mark
2014-10-01
Mouse models are an important way of exploring relationships between blood hemodynamics and eventual plaque formation. We have developed a mouse model of aortic regurgitation (AR) that produces large changes in plaque burden with changes in hemodynamics [Zhou et al., 2010, "Aortic Regurgitation Dramatically Alters the Distribution of Atherosclerotic Lesions and Enhances Atherogenesis in Mice," Arterioscler. Thromb. Vasc. Biol., 30(6), pp. 1181-1188]. In this paper, we explore the amount of detail needed for realistic computational fluid dynamics (CFD) calculations in this experimental model. The CFD calculations use inputs based on experimental measurements from ultrasound (US), micro computed tomography (CT), and both anatomical magnetic resonance imaging (MRI) and phase-contrast MRI (PC-MRI). The adequacy of five different levels of model complexity ((a) subject-specific CT data from a single mouse; (b) subject-specific CT centerlines with radii from US; (c) same as (b) but with MRI-derived centerlines; (d) average CT centerlines with averaged vessel radii and branching vessels; and (e) same as (d) but with averaged MRI centerlines) is evaluated by demonstrating their impact on relative residence time (RRT) outputs. The paper concludes by demonstrating the necessity of subject-specific geometry and recommends for inputs the use of CT or anatomical MRI for establishing the aortic centerlines, M-mode US for scaling the aortic diameters, and a combination of PC-MRI and Doppler US for estimating the spatial and temporal characteristics of the input wave forms.
Tishchenko, Oksana; Truhlar, Donald G
2010-02-28
This paper describes and illustrates a way to construct multidimensional representations of reactive potential energy surfaces (PESs) by a multiconfiguration Shepard interpolation (MCSI) method based only on gradient information, that is, without using any Hessian information from electronic structure calculations. MCSI, which is called multiconfiguration molecular mechanics (MCMM) in previous articles, is a semiautomated method designed for constructing full-dimensional PESs for subsequent dynamics calculations (classical trajectories, full quantum dynamics, or variational transition state theory with multidimensional tunneling). The MCSI method is based on Shepard interpolation of Taylor series expansions of the coupling term of a 2 × 2 electronically diabatic Hamiltonian matrix, with the diagonal elements representing nonreactive analytical PESs for reactants and products. In contrast to the previously developed method, these expansions are truncated in the present version at first order, and, therefore, no input of electronic structure Hessians is required. The accuracy of the interpolated energies is evaluated for two test reactions, namely, the reaction OH + H(2) → H(2)O + H and the hydrogen atom abstraction from a model of alpha-tocopherol by methyl radical. The latter reaction involves 38 atoms and a 108-dimensional PES. The mean unsigned errors averaged over a wide range of representative nuclear configurations (corresponding to an energy range of 19.5 kcal/mol in the former case and 32 kcal/mol in the latter) are found to be within 1 kcal/mol for both reactions, based on 13 gradients in one case and 11 in the other. The gradient-based MCMM method can be applied for efficient representations of multidimensional PESs in cases where analytical electronic structure Hessians are too expensive or unavailable, and it provides new opportunities to employ high-level electronic structure calculations for dynamics at an affordable cost.
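The weighting idea underlying Shepard interpolation can be sketched in its classic zeroth-order form: scattered values are combined with inverse-distance weights so that the interpolant reproduces the data points exactly. Note MCSI applies this weighting to Taylor expansions of a diabatic coupling term, not directly to energies as in this simplified sketch; the points and energies below are invented.

```python
import numpy as np

def shepard(points, values, q, p=4):
    """Classic Shepard (inverse-distance-weighted) interpolation: a sketch
    of the weighting scheme, applied here directly to scattered energies."""
    points = np.asarray(points, float)
    values = np.asarray(values, float)
    d = np.linalg.norm(points - np.asarray(q, float), axis=1)
    if np.any(d == 0):                 # query exactly at a data point
        return float(values[np.argmin(d)])
    w = d ** -p                        # closer points dominate
    return float(np.sum(w * values) / np.sum(w))

pts = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]   # illustrative geometry points
e = [0.0, 10.0, 20.0]                        # illustrative energies, kcal/mol
print(shepard(pts, e, [0.0, 0.0]))           # reproduces the data point: 0.0
print(0.0 < shepard(pts, e, [0.5, 0.5]) < 20.0)  # True: stays within the data range
```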
Using rainfall radar data to improve interpolated maps of dose rate in the Netherlands.
Hiemstra, Paul H; Pebesma, Edzer J; Heuvelink, Gerard B M; Twenhöfel, Chris J W
2010-12-01
The radiation monitoring network in the Netherlands is designed to detect and track increased radiation levels, dose rate more specifically, at 10-minute intervals. The network consists of 153 monitoring stations. Washout of radon progeny by rainfall is the most important cause of natural variations in dose rate. The increase in dose rate at a given time is a function of the amount of progeny decaying, which in turn is a balance between deposition of progeny by rainfall and radioactive decay. The increase in progeny is closely related to the average rainfall intensity over the last 2.5 h. We included decay of progeny by using a weighted averaged rainfall intensity, where the weight decreases back in time. The decrease in weight is related to the half-life of radon progeny. In this paper we show, for a rainstorm on the 20th of July 2007, that weighted averaged rainfall intensity estimated from rainfall radar images, collected every 5 min, performs much better as a predictor of increases in dose rate than the non-averaged rainfall intensity. In addition, we show through cross-validation that including weighted averaged rainfall intensity in an interpolated map using universal kriging (UK) does not necessarily lead to a more accurate map. This might be attributed to the high density of monitoring stations in comparison to the spatial extent of a typical rain event. Reducing the network density improved the accuracy of the map when universal kriging was used instead of ordinary kriging (no trend). Consequently, in a less dense network the positive influence of including a trend is likely to increase. Furthermore, we suspect that UK better reproduces the sharp boundaries present in rainfall maps, but that the lack of short-distance monitoring station pairs prevents cross-validation from revealing this effect. Copyright © 2010 Elsevier B.V. All rights reserved.
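The weighted averaged rainfall intensity described above amounts to an exponentially decaying weighted mean, where the weight halves for every half-life going back in time. A minimal sketch, assuming 5-minute radar samples over the 2.5 h window; the 40-minute effective half-life and the function name are illustrative assumptions, not the paper's fitted values:

```python
def weighted_avg_intensity(intensities, dt_minutes=5.0, half_life_minutes=40.0):
    """Exponentially weighted average of rainfall intensity.

    intensities: samples ordered oldest -> newest (e.g., 5-min radar images
    over the last 2.5 h). Weights decay going back in time with the given
    half-life, mimicking radioactive decay of deposited radon progeny.
    """
    n = len(intensities)
    # weight halves for every half_life_minutes of elapsed time
    weights = [0.5 ** ((n - 1 - i) * dt_minutes / half_life_minutes)
               for i in range(n)]
    return sum(w * x for w, x in zip(weights, intensities)) / sum(weights)
```

For constant rainfall the weighted mean equals the plain mean; for a burst of rain, recent samples dominate, matching the physical picture that recently deposited progeny have not yet decayed.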
Esophageal motion during radiotherapy: quantification and margin implications.
Cohen, R J; Paskalev, K; Litwin, S; Price, R A; Feigenberg, S J; Konski, A A
2010-08-01
The purpose was to evaluate interfraction and intrafraction esophageal motion in the right-left (RL) and anterior-posterior (AP) directions using computed tomography (CT) in esophageal cancer patients. Eight patients underwent CT simulation and CT-on-rails imaging before and after radiotherapy. Interfraction displacement was defined as differences between pretreatment and simulation images. Intrafraction displacement was defined as differences between pretreatment and posttreatment images. Images were fused using bone registration, adjusted to the carina. The mean, average of the absolute, and range of esophageal motion were calculated in the RL and AP directions, above and below the carina. Thirty-one CT image sets were obtained. The incidence of esophageal interfraction motion ≥5 mm was 24% and ≥10 mm was 3%; intrafraction motion ≥5 mm was 13% and ≥10 mm was 4%. The average RL motion was 1.8 ± 5.1 mm, favoring leftward movement, and the average AP motion was 0.6 ± 4.8 mm, favoring posterior movement. Average absolute motion was 4.2 mm or less in the RL and AP directions. Motion was greatest in the RL direction above the carina. Coverage of 95% of esophageal mobility requires 12 mm left, 8 mm right, 10 mm posterior, and 9 mm anterior margins. In all directions, the average of the absolute interfraction and intrafraction displacement was 4.2 mm or less. These results support a 12 mm left, 8 mm right, 10 mm posterior, and 9 mm anterior margin for the internal target volume (ITV) and can guide margins for future intensity modulated radiation therapy (IMRT) trials to account for organ motion and setup error in three-dimensional planning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitsuyoshi, Takamasa; Nakamura, Mitsuhiro, E-mail: m_nkmr@kuhp.kyoto-u.ac.jp; Matsuo, Yukinori
The purpose of this article is to quantitatively evaluate differences in dose distributions calculated using various computed tomography (CT) datasets, dose-calculation algorithms, and prescription methods in stereotactic body radiotherapy (SBRT) for patients with early-stage lung cancer. Data on 29 patients with early-stage lung cancer treated with SBRT were retrospectively analyzed. Averaged CT (Ave-CT) and expiratory CT (Ex-CT) images were reconstructed for each patient using 4-dimensional CT data. Dose distributions were initially calculated using the Ave-CT images and recalculated (in the same monitor units [MUs]) by employing Ex-CT images with the same beam arrangements. The dose-volume parameters, including D95, D90, D50, and D2 of the planning target volume (PTV), were compared between the 2 image sets. To explore the influence of dose-calculation algorithms and prescription methods on the differences in dose distributions evident between Ave-CT and Ex-CT images, we calculated dose distributions using the following 3 different algorithms: x-ray Voxel Monte Carlo (XVMC), Acuros XB (AXB), and the anisotropic analytical algorithm (AAA). We also used 2 different dose-prescription methods; the isocenter prescription and the PTV periphery prescription methods. All differences in PTV dose-volume parameters calculated using Ave-CT and Ex-CT data were within 3 percentage points (%pts) employing the isocenter prescription method, and within 1.5%pts using the PTV periphery prescription method, irrespective of which of the 3 algorithms (XVMC, AXB, and AAA) was employed. The frequencies of dose-volume parameters differing by >1%pt when the XVMC and AXB were used were greater than those associated with the use of the AAA, regardless of the dose-prescription method employed. 
All differences in PTV dose-volume parameters calculated using Ave-CT and Ex-CT data on patients who underwent lung SBRT were within 3%pts, regardless of the dose-calculation algorithm or the dose-prescription method employed.
Vedadi, Farhang; Shirani, Shahram
2014-01-01
A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.
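The trellis search the authors describe, choosing one interpolation function per missing-pixel position so that the total sequence cost is minimized, is a standard Viterbi recursion over states. The sketch below is a hedged stand-in: the uniform switching penalty replaces the paper's parameter-free probabilistic transition model, and all names and the cost model are illustrative.

```python
import numpy as np

def viterbi_interp(costs, switch_penalty=1.0):
    """Minimum-cost sequence of interpolation functions (states).

    costs[t, s]: local data-fit cost of using interpolation function s at
    missing-pixel position t. A constant penalty is charged for switching
    state between consecutive positions (a simplification of the paper's
    transition probabilities). Returns the optimal state sequence.
    """
    T, S = costs.shape
    best = costs[0].copy()                 # best cost ending in each state
    back = np.zeros((T, S), dtype=int)     # backpointers
    for t in range(1, T):
        # trans[i, j]: cost of being in state i at t-1 and moving to j
        trans = best[:, None] + switch_penalty * (1.0 - np.eye(S))
        back[t] = np.argmin(trans, axis=0)
        best = trans[back[t], np.arange(S)] + costs[t]
    path = [int(np.argmin(best))]
    for t in range(T - 1, 0, -1):          # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

With directional interpolators as states, the soft decision emerges naturally: a locally ambiguous pixel inherits the direction favored by its neighbors unless its own data-fit cost strongly disagrees.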
Effective and organ doses from common CT examinations in one general hospital in Tehran, Iran
NASA Astrophysics Data System (ADS)
Khoramian, Daryoush; Hashemi, Bijan
2017-09-01
Purpose: It is well known that the main portion of artificial ionizing radiation exposure to humans results from X-ray imaging techniques. However, reports from various countries have indicated that most of the cumulative dose from artificial sources is due to CT examinations. Hence, assessing doses resulting from CT examinations is highly recommended by national and international radiation protection agencies. The aim of this research was to estimate the effective and organ doses in an average human, according to the ICRP 103 and ICRP 60 tissue weighting factors, for six common protocols of a multi-detector CT (MDCT) machine in a comprehensive training general hospital in Tehran, Iran. Methods: To calculate the patients' effective dose, the CT-Expo 2.2 software was used. Organ/tissue and effective doses were determined for about 20 patients per protocol (122 patients in total) for each of six typical CT protocols: head, neck, chest, abdomen-pelvis, pelvis, and spine exams. In addition, the CT dose index (CTDI) was measured in the standard 16 and 32 cm phantoms using a calibrated pencil ionization chamber for the six protocols, with the average CT scan parameters used in the hospital, and compared with the CTDI values displayed on the machine console. Results: The values of the effective dose based on the ICRP 103 tissue weighting factors were 0.6, 2.0, 3.2, 4.2, 2.8, and 3.9 mSv, and based on the ICRP 60 tissue weighting factors were 0.9, 1.4, 3, 7.9, 4.8 and 5.1 mSv, for the head, neck, chest, abdomen-pelvis, pelvis, and spine CT exams, respectively. Relative differences between these values were -22, 21, 23, -6, -31 and 16 percent for the head, neck, chest, abdomen-pelvis, pelvis, and spine CT exams, respectively. The average CTDIvol calculated for each protocol was 27.32 ± 0.9, 18.08 ± 2.0, 7.36 ± 2.6, 8.84 ± 1.7, 9.13 ± 1.5, and 10.42 ± 0.8 mGy for the head, neck, chest, abdomen-pelvis, pelvis, and spine CT exams, respectively. 
Conclusions: The highest organ doses delivered by the various CT exams were received by the brain (15.5 mSv), thyroid (19.0 mSv), lungs (9.3 mSv), bladder (9.9 mSv), bladder (10.4 mSv), and stomach (10.9 mSv) in the head, neck, chest, abdomen-pelvis, pelvis, and spine exams, respectively. Except for the neck and spine CT exams, which showed higher effective doses than those reported in the Netherlands, the other exams indicated lower values than those reported by other countries.
Hoffman, John M; Noo, Frédéric; Young, Stefano; Hsieh, Scott S; McNitt-Gray, Michael
2018-06-01
To facilitate investigations into the impacts of acquisition and reconstruction parameters on quantitative imaging, radiomics and CAD using CT imaging, we previously released an open source implementation of a conventional weighted filtered backprojection reconstruction called FreeCT_wFBP. Our purpose was to extend that work by providing an open-source implementation of a model-based iterative reconstruction method using coordinate descent optimization, called FreeCT_ICD. Model-based iterative reconstruction offers the potential for substantial radiation dose reduction, but can impose substantial computational processing and storage requirements. FreeCT_ICD is an open source implementation of a model-based iterative reconstruction method that provides a reasonable tradeoff between these requirements. This was accomplished by adapting a previously proposed method that allows the system matrix to be stored with a reasonable memory requirement. The method amounts to describing the attenuation coefficient using rotating slices that follow the helical geometry. In the initially-proposed version, the rotating slices are themselves described using blobs. We have replaced this description by a unique model that relies on tri-linear interpolation together with the principles of Joseph's method. This model offers an improvement in memory requirement while still allowing highly accurate reconstruction for conventional CT geometries. The system matrix is stored column-wise and combined with an iterative coordinate descent (ICD) optimization. The result is FreeCT_ICD, which is a reconstruction program developed on the Linux platform using C++ libraries and the open source GNU GPL v2.0 license. The software is capable of reconstructing raw projection data of helical CT scans. 
In this work, the software has been described and evaluated by reconstructing datasets exported from a clinical scanner which consisted of an ACR accreditation phantom dataset and a clinical pediatric thoracic scan. For the ACR phantom, image quality was comparable to clinical reconstructions as well as reconstructions using open-source FreeCT_wFBP software. The pediatric thoracic scan also yielded acceptable results. In addition, we did not observe any deleterious impact in image quality associated with the utilization of rotating slices. These evaluations also demonstrated reasonable tradeoffs in storage requirements and computational demands. FreeCT_ICD is an open-source implementation of a model-based iterative reconstruction method that extends the capabilities of previously released open source reconstruction software and provides the ability to perform vendor-independent reconstructions of clinically acquired raw projection data. This implementation represents a reasonable tradeoff between storage and computational requirements and has demonstrated acceptable image quality in both simulated and clinical image datasets. This article is protected by copyright. All rights reserved.
Strobel, Klaus; Rüdy, Matthias; Treyer, Valerie; Veit-Haibach, Patrick; Burger, Cyrill; Hany, Thomas F
2007-07-01
The relative advantage of fully 3-D versus 2-D mode for whole-body imaging is currently the focus of considerable expert debate. The nature of 3-D PET acquisition for FDG PET/CT theoretically allows a shorter scan time and improved efficiency of FDG use compared with the standard 2-D acquisition. We therefore objectively and subjectively compared standard 2-D and fully 3-D reconstructed data for FDG PET/CT on a research PET/CT system. In a total of 36 patients (mean age 58.9 years, range 17.3-78.9 years; 21 male, 15 female) referred for known or suspected malignancy, FDG PET/CT was performed using a research PET/CT system with advanced detector technology offering improved sensitivity and spatial resolution. After a 45 min uptake period, a low-dose CT (40 mAs) from head to thigh was performed, followed by 2-D PET (emission 3 min per field) and 3-D PET (emission 1.5 min per field), each with a seven-slice overlap to cover the identical anatomical region. Acquisition time was therefore 50% less (seven fields; 21 min vs. 10.5 min). PET data were acquired in a randomized fashion, so in 50% of the cases the 2-D data were acquired first. CT data were used for attenuation correction. 2-D (OSEM) and 3-D PET images were iteratively reconstructed. Subjective analysis of 2-D and 3-D images was performed by two readers in a blinded, randomized fashion evaluating the following criteria: sharpness of organs (liver, chest wall/lung), overall image quality, and the detectability and dignity of each identified lesion. Objective analysis of PET data was performed by measuring the maximum standardized uptake value normalized to lean body mass (SUV(max,LBM)) of identified lesions. On average, per patient, the SUV(max) was 7.86 (SD 7.79) for 2-D and 6.96 (SD 5.19) for 3-D. On a lesion basis, the average SUV(max) was 7.65 (SD 7.79) for 2-D and 6.75 (SD 5.89) for 3-D. 
The difference in SUV (3-D minus 2-D) on a paired t-test was significant, averaging -0.956 per measured lesion (P=0.002) and -0.884 per patient (P<0.05). With 3-D, the SUV(max) decreased by an average of 5.2% for each lesion and 6.0% for each patient. Subjective analysis showed fair inter-observer agreement regarding detectability (kappa=0.24 for 3-D and 0.36 for 2-D) and dignity (kappa=0.44 for 3-D and 0.4 for 2-D) of the lesions. There was no significant diagnostic difference between 3-D and 2-D. In only one patient was a satellite liver metastasis of a colon cancer missed in 3-D and detected only in 2-D. On average, the overall image quality of 3-D images was equal (in 24%) or inferior (in 76%) to that of 2-D. A possible major advantage of 3-D data acquisition is faster patient throughput, with a 50% reduction in scan time. The fully 3-D reconstruction technique has overcome the technical drawbacks of the current 3-D imaging technique. In our limited number of patients there was no significant diagnostic difference between 2-D and fully 3-D.
Ferrero, Andrea; Montoya, Juan C; Vaughan, Lisa E; Huang, Alice E; McKeag, Ian O; Enders, Felicity T; Williams, James C; McCollough, Cynthia H
2016-12-01
Previous studies have demonstrated a qualitative relationship between stone fragility and internal stone morphology. The goal of this study was to quantify morphologic features from dual-energy computed tomography (CT) images and assess their relationship to stone fragility. Thirty-three calcified urinary stones were scanned with micro-CT. Next, they were placed within torso-shaped water phantoms and scanned with the dual-energy CT stone composition protocol in routine use at our institution. Mixed low- and high-energy images were used to measure volume, surface roughness, and 12 metrics describing internal morphology for each stone. The ratios of low- to high-energy CT numbers were also measured. Subsequent to imaging, stone fragility was measured by disintegrating each stone in a controlled ex vivo experiment using an ultrasonic lithotripter and recording the time to comminution. A multivariable linear regression model was developed to predict time to comminution. The average stone volume was 300 mm³ (range: 134-674 mm³). The average comminution time measured ex vivo was 32 seconds (range: 7-115 seconds). Stone volume, dual-energy CT number ratio, and surface roughness were found to have the best combined predictive ability to estimate comminution time (adjusted R² = 0.58). The predictive ability of mixed dual-energy CT images, without use of the dual-energy CT number ratio, to estimate comminution time was slightly inferior, with an adjusted R² of 0.54. Dual-energy CT number ratios, volume, and morphologic metrics may provide a method for predicting stone fragility, as measured by time to comminution from ultrasonic lithotripsy. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
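The adjusted R² reported for the regression model corrects the ordinary R² for the number of predictors, which matters here with three predictors and 33 stones. A minimal sketch of the computation by ordinary least squares; the function name is illustrative and the paper's actual model-fitting software is not specified:

```python
import numpy as np

def adjusted_r2(y, X):
    """Fit y ~ X by ordinary least squares and return adjusted R^2.

    X (n samples x p predictors) excludes the intercept column, which is
    added here. Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1).
    """
    n, p = X.shape
    A = np.column_stack([np.ones(n), X])          # add intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # OLS fit
    resid = y - A @ coef
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
```

Adding a weak predictor can raise plain R² while lowering the adjusted value, which is why the paper's comparison of models with and without the dual-energy CT number ratio is reported on the adjusted scale.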
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ben-Shlomo, A.; Cohen, D.; Bruckheimer, E.
Purpose: To compare the effective doses of needle biopsies, based on dose measurements and simulations using adult and pediatric phantoms, between cone beam C-arm CT (CBCT) and CT. Method: Effective doses were calculated and compared based on measurements and Monte Carlo simulations of CT- and CBCT-guided biopsy procedures of the lungs, liver, and kidney using pediatric and adult phantoms. Results: The effective doses for pediatric and adult phantoms, using our standard protocols for upper, middle and lower lung, liver, and kidney biopsies, were significantly lower under CBCT guidance than CT. The average effective dose for a 5-year-old for these five biopsies was 0.36 ± 0.05 mSv with the standard CBCT exposure protocols and 2.13 ± 0.26 mSv with CT. The adult average effective dose for the five biopsies was 1.63 ± 0.22 mSv with the standard CBCT protocols and 8.22 ± 1.02 mSv using CT. The CT effective dose was higher than that of the CBCT protocols for the child and adult phantoms by 803% and 590% for the upper lung, 639% and 525% for the mid-lung, and 461% and 251% for the lower lung, respectively. Similarly, the effective dose was higher by 691% and 762% for liver and 513% and 608% for kidney biopsies. Conclusions: Based on measurements and simulations with pediatric and adult phantoms, radiation effective doses during image-guided needle biopsies of the lung, liver, and kidney are significantly lower with CBCT than with CT.
Ferrero, Andrea; Montoya, Juan C.; Vaughan, Lisa E.; Huang, Alice E.; McKeag, Ian O.; Enders, Felicity T.; Williams, James C.; McCollough, Cynthia H.
2016-01-01
Rationale and Objectives: Previous studies have demonstrated a qualitative relationship between stone fragility and internal stone morphology. The goal of this study was to quantify morphological features from dual-energy CT images and assess their relationship to stone fragility. Materials and Methods: Thirty-three calcified urinary stones were scanned with micro-CT. Next, they were placed within torso-shaped water phantoms and scanned with the dual-energy CT stone composition protocol in routine use at our institution. Mixed low- and high-energy images were used to measure volume, surface roughness, and 12 metrics describing internal morphology for each stone. The ratios of low- to high-energy CT numbers were also measured. Subsequent to imaging, stone fragility was measured by disintegrating each stone in a controlled ex vivo experiment using an ultrasonic lithotripter and recording the time to comminution. A multivariable linear regression model was developed to predict time to comminution. Results: The average stone volume was 300 mm³ (range 134–674 mm³). The average comminution time measured ex vivo was 32 s (range 7–115 s). Stone volume, dual-energy CT number ratio and surface roughness were found to have the best combined predictive ability to estimate comminution time (adjusted R² = 0.58). The predictive ability of mixed dual-energy CT images, without use of the dual-energy CT number ratio, to estimate comminution time was slightly inferior, with an adjusted R² of 0.54. Conclusion: Dual-energy CT number ratios, volume, and morphological metrics may provide a method for predicting stone fragility, as measured by time to comminution from ultrasonic lithotripsy. PMID:27717761
Multislice spiral CT simulator for dynamic cardiopulmonary studies
NASA Astrophysics Data System (ADS)
De Francesco, Silvia; Ferreira da Silva, Augusto M.
2002-04-01
We have developed a multi-slice spiral CT simulator modeling the acquisition process of a real tomograph over a 4-dimensional phantom (4D MCAT) of the human thorax. The simulator allows us to visually characterize artifacts due to insufficient temporal sampling and to evaluate a priori the quality of the images obtained in cardiopulmonary studies (with both single-/multi-slice and ECG-gated acquisition processes). The simulating environment allows for both conventional and spiral scanning modes and includes a model of noise in the acquisition process. In the case of spiral scanning, reconstruction facilities include longitudinal interpolation methods (360LI and 180LI, both for single- and multi-slice), after which the section is reconstructed through filtered backprojection (FBP). The reconstructed images/volumes are affected by distortion due to insufficient temporal sampling of the moving object. The simulating environment allows us to investigate the nature of this distortion and to characterize it qualitatively and quantitatively (using, for example, Herman's measures). Much of our work is focused on the determination of adequate temporal sampling and sinogram regularization techniques. At the moment, the simulator is limited to multi-slice tomographs; extension to cone-beam or area detectors is planned as the next development step.
EOS Interpolation and Thermodynamic Consistency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gammel, J. Tinka
2015-11-16
As discussed in LA-UR-08-05451, the current interpolator used by Grizzly, OpenSesame, EOSPAC, and similar routines is the rational function interpolator from Kerley. While the rational function interpolator is well-suited for interpolation on sparse grids with logarithmic spacing and preserves monotonicity in 1-D, it has some known problems.
Effect of interpolation on parameters extracted from seating interface pressure arrays.
Wininger, Michael; Crane, Barbara
2014-01-01
Interpolation is a common data processing step in the study of interface pressure data collected at the wheelchair seating interface. However, there has been no focused study on the effect of interpolation on features extracted from these pressure maps, nor on whether these parameters are sensitive to the manner in which the interpolation is implemented. Here, two different interpolation paradigms, bilinear versus bicubic spline, are tested for their influence on parameters extracted from pressure array data and compared against a conventional low-pass filtering operation. Additionally, the effects of tandem filtering and interpolation, as well as of the interpolation degree (interpolating to 2, 4, and 8 times the sampling density), were analyzed. The following recommendations are made regarding approaches that minimized distortion of features extracted from the pressure maps: (1) filter before interpolating (strong effect); (2) use cubic rather than linear interpolation (slight effect); and (3) the choice among interpolation degrees of 2, 4, and 8 times makes a negligible difference. We invite other investigators to perform similar benchmark analyses on their own data in the interest of establishing a community consensus of best practices in pressure array data processing.
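The filter-then-interpolate ordering recommended above can be illustrated with a separable bilinear up-sampler and a simple box filter: applying the two operations in opposite orders to the same pressure map generally yields different values for extracted features such as peak pressure. All function names below are illustrative, and the 3×3 box filter is a stand-in for the study's actual low-pass filter:

```python
import numpy as np

def upsample_bilinear(img, factor):
    """Bilinear up-sampling of a 2-D array by an integer factor,
    implemented as separable 1-D linear interpolation (np.interp)."""
    h, w = img.shape
    yi = np.linspace(0, h - 1, h * factor)
    xi = np.linspace(0, w - 1, w * factor)
    # interpolate along rows first, then along columns
    rows = np.stack([np.interp(xi, np.arange(w), r) for r in img])
    return np.stack([np.interp(yi, np.arange(h), c) for c in rows.T]).T

def smooth(img):
    """3x3 box filter with edge padding: a minimal low-pass stand-in."""
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
```

On a map with a sharp pressure peak, `upsample_bilinear(smooth(img), 2)` and `smooth(upsample_bilinear(img, 2))` disagree, because the filter's support shrinks relative to the feature once the grid is densified, which is the kind of order sensitivity the study quantifies.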
Wood, T J; Moore, C S; Stephens, A; Saunderson, J R; Beavis, A W
2015-09-01
Given the increasing use of computed tomography (CT) in the UK over the last 30 years, it is essential to ensure that all imaging protocols are optimised to keep radiation doses as low as reasonably practicable, consistent with the intended clinical task. However, the complexity of modern CT equipment can make this task difficult to achieve in practice. Recent results of local patient dose audits have shown discrepancies between two Philips CT scanners that use the DoseRight 2.0 automatic exposure control (AEC) system in the 'automatic' mode of operation. The use of this system can result in drifting dose and image quality performance over time as it is designed to evolve based on operator technique. The purpose of this study was to develop a practical technique for configuring examination protocols on four CT scanners that use the DoseRight 2.0 AEC system in the 'manual' mode of operation. This method used a uniform phantom to generate reference images which form the basis for how the AEC system calculates exposure factors for any given patient. The results of this study have demonstrated excellent agreement in the configuration of the CT scanners in terms of average patient dose and image quality when using this technique. This work highlights the importance of CT protocol harmonisation in a modern Radiology department to ensure both consistent image quality and radiation dose. Following this study, the average radiation dose for a range of CT examinations has been reduced without any negative impact on clinical image quality.
Quantitative and qualitative computed tomographic characteristics of bronchiectasis in 12 dogs.
Cannon, Matthew S; Johnson, Lynelle R; Pesavento, Patricia A; Kass, Philip H; Wisner, Erik R
2013-01-01
Bronchiectasis is an irreversible dilatation of the bronchi resulting from chronic airway inflammation. In people, computed tomography (CT) has been described as the noninvasive gold standard for diagnosing bronchiectasis. In dogs, normal CT bronchoarterial ratios have been described as <2.0. The purpose of this retrospective study was to describe quantitative and qualitative CT characteristics of bronchiectasis in a cohort of dogs with confirmed disease. Inclusion criteria for the study were thoracic radiography, thoracic CT, and a diagnosis of bronchiectasis based on bronchoscopy and/or histopathology. For each included dog, a single observer measured CT bronchoarterial ratios at 6 lobar locations. Qualitative thoracic radiography and CT characteristics were recorded by consensus opinion of two board-certified veterinary radiologists. Twelve dogs met inclusion criteria. The mean bronchoarterial ratio from 28 bronchiectatic lung lobes was 2.71 ± 0.80 (range 1.4 to 4.33), and 23/28 measurements were >2.0. Averaged bronchoarterial ratios from bronchiectatic lung lobes were significantly larger (P < 0.01) than averaged ratios from nonbronchiectatic lung lobes. Qualitative CT characteristics of bronchiectasis included lack of peripheral airway tapering (12/12), lobar consolidation (11/12), bronchial wall thickening (7/12), and bronchial lumen occlusion (4/12). Radiographs detected lack of airway tapering in 7/12 dogs. In conclusion, the most common CT characteristics of bronchiectasis were dilatation, a lack of peripheral airway tapering, and lobar consolidation. Lack of peripheral airway tapering was not visible in thoracic radiographs for some dogs. For some affected dogs, bronchoarterial ratios were less than published normal values. © 2013 Veterinary Radiology & Ultrasound.
Patel, S; McLaughlin, J M
1999-05-01
To measure and compare central corneal thickness (CT) and intraocular pressure (IOP) in keratoconus and post-keratoplasty subjects and examine the CT-IOP relationship. 22 keratoconus patients (category I: six female, sixteen male; average age 27.0, range 12-47 years) and 19 post-keratoplasty patients (category II: ten female, nine male; average age 34.6, range 16-54 years) without other anterior segment conditions were recruited. Only one, non-contact-lens-wearing, eye of each patient was included for analysis. The cornea was anaesthetised with non-preserved 0.4% benoxinate hydrochloride. Using a randomised approach, CT was measured using a standard ultrasonic pachymeter. IOP was then measured using a standard Goldmann tonometer. At all times the tonometrist remained unaware of the corneal thickness values. The mean (± s.d.) values for CT and IOP, respectively, in the two categories were: (I) 445 (45) μm and 9.8 (2.3) mmHg; (II) 564 (44) μm and 15.8 (3.9) mmHg. Differences between I and II for both CT and IOP were significant (t-test, p = 0.01). Within each category, a significant correlation between CT and IOP was not found. Pooling all pairs of data (n = 41), a significant relationship between CT and IOP was detected (r = 0.635, p = 0.0001). The results confirm the hypothesis that an eye with a thicker cornea tends to present with a higher measured IOP. In the management of keratoconus and other corneal surgical procedures, changes in CT will contribute to any apparent changes in measured IOP.
Kornerup, Josefine S; Brodin, Patrik; Birk Christensen, Charlotte; Björk-Eriksson, Thomas; Kiil-Berthelsen, Anne; Borgwardt, Lise; Munck Af Rosenschöld, Per
2015-04-01
PET/CT may be more helpful than CT alone for radiation therapy planning, but the added risk due to higher doses of ionizing radiation is unknown. To estimate the risk of cancer induction and mortality attributable to the [F-18]2-fluoro-2-deoxyglucose (FDG) PET and CT scans used for radiation therapy planning in children with cancer, and compare to the risks attributable to the cancer treatment. Organ doses and effective doses were estimated for 40 children (2-18 years old) who had been scanned using PET/CT as part of radiation therapy planning. The risk of inducing secondary cancer was estimated using the models in BEIR VII. The prognosis of an induced cancer was taken into account and the reduction in life expectancy, in terms of life years lost, was estimated for the diagnostics and compared to the life years lost attributable to the therapy. Multivariate linear regression was performed to find predictors for a high contribution to life years lost from the radiation therapy planning diagnostics. The mean contribution from PET to the effective dose from one PET/CT scan was 24% (range: 7-64%). The average proportion of life years lost attributable to the nuclear medicine dose component from one PET/CT scan was 15% (range: 3-41%). The ratio of life years lost from the radiation therapy planning PET/CT scans and that of the cancer treatment was on average 0.02 (range: 0.01-0.09). Female gender was associated with increased life years lost from the scans (P < 0.001). Using FDG-PET/CT instead of CT only when defining the target volumes for radiation therapy of children with cancer does not notably increase the number of life years lost attributable to diagnostic examinations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, L; Tan, S; Lu, W
Purpose: PET images are usually blurred due to the finite spatial resolution, while CT images suffer from low contrast. Segmenting a tumor from either a single PET or a single CT image is thus challenging. To make full use of the complementary information between PET and CT, we propose a novel variational method for simultaneous PET image restoration and PET/CT image co-segmentation. Methods: The proposed model was constructed based on the Γ-convergence approximation of the Mumford-Shah (MS) segmentation model for PET/CT co-segmentation. Moreover, a PET de-blurring process was integrated into the MS model to improve the segmentation accuracy. An interaction edge constraint term over the two modalities was specially designed to share the complementary information. The energy functional was iteratively optimized using an alternate minimization (AM) algorithm. The performance of the proposed method was validated on ten lung cancer cases and five esophageal cancer cases. The ground truth was manually delineated by an experienced radiation oncologist using the complementary visual features of PET and CT. The segmentation accuracy was evaluated by the Dice similarity index (DSI) and volume error (VE). Results: The proposed method achieved the expected restoration result for the PET image and satisfactory segmentation results for both PET and CT images. For the lung cancer dataset, the average DSI (0.72) was 0.17 and 0.40 higher than that of single PET and single CT segmentation, respectively. For the esophageal cancer dataset, the average DSI (0.85) was 0.07 and 0.43 higher than that of single PET and single CT segmentation, respectively. Conclusion: The proposed method takes full advantage of the complementary information from PET and CT images. This work was supported in part by the National Cancer Institute Grant R01CA172638. Shan Tan and Laquan Li were supported in part by the National Natural Science Foundation of China, under Grant Nos. 60971112 and 61375018.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michalak, Gregory; Grimes, Joshua; Fletcher, Joel
2016-01-15
Purpose: The purpose of this study was to evaluate, over a wide range of phantom sizes, CT number stability achieved using two techniques for generating dual-energy computed tomography (DECT) virtual monoenergetic images. Methods: Water phantoms ranging in lateral diameter from 15 to 50 cm and containing a CT number test object were scanned on a DSCT scanner using both single-energy (SE) and dual-energy (DE) techniques. The SE tube potentials were 70, 80, 90, 100, 110, 120, 130, 140, and 150 kV; the DE tube potential pairs were 80/140, 70/150Sn, 80/150Sn, 90/150Sn, and 100/150Sn kV (Sn denotes that the 150 kV beam was filtered with a 0.6 mm tin filter). Virtual monoenergetic images at energies ranging from 40 to 140 keV were produced from the DECT data using two algorithms, monoenergetic (mono) and monoenergetic plus (mono+). Particularly in large phantoms, water CT number errors and/or artifacts were observed; thus, datasets with water CT numbers outside ±10 HU or with noticeable artifacts were excluded from the study. CT numbers were measured to determine CT number stability across all phantom sizes. Results: Data exclusions were generally limited to cases when a SE or DE technique with a tube potential of less than 90 kV was used to scan a phantom larger than 30 cm. The 90/150Sn DE technique provided the most accurate water background over the large range of phantom sizes evaluated. Mono and mono+ provided equally improved CT number stability as a function of phantom size compared to SE; the average deviation in CT number was only 1.4% using 40 keV and 1.8% using 70 keV, while SE had an average deviation of 11.8%. Conclusions: The authors' report demonstrates, across all phantom sizes, the improvement in CT number stability achieved with mono and mono+ relative to SE.
Michalak, Gregory; Grimes, Joshua; Fletcher, Joel; Halaweish, Ahmed; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia
2016-01-01
The purpose of this study was to evaluate, over a wide range of phantom sizes, CT number stability achieved using two techniques for generating dual-energy computed tomography (DECT) virtual monoenergetic images. Water phantoms ranging in lateral diameter from 15 to 50 cm and containing a CT number test object were scanned on a DSCT scanner using both single-energy (SE) and dual-energy (DE) techniques. The SE tube potentials were 70, 80, 90, 100, 110, 120, 130, 140, and 150 kV; the DE tube potential pairs were 80/140, 70/150Sn, 80/150Sn, 90/150Sn, and 100/150Sn kV (Sn denotes that the 150 kV beam was filtered with a 0.6 mm tin filter). Virtual monoenergetic images at energies ranging from 40 to 140 keV were produced from the DECT data using two algorithms, monoenergetic (mono) and monoenergetic plus (mono+). Particularly in large phantoms, water CT number errors and/or artifacts were observed; thus, datasets with water CT numbers outside ±10 HU or with noticeable artifacts were excluded from the study. CT numbers were measured to determine CT number stability across all phantom sizes. Data exclusions were generally limited to cases when a SE or DE technique with a tube potential of less than 90 kV was used to scan a phantom larger than 30 cm. The 90/150Sn DE technique provided the most accurate water background over the large range of phantom sizes evaluated. Mono and mono+ provided equally improved CT number stability as a function of phantom size compared to SE; the average deviation in CT number was only 1.4% using 40 keV and 1.8% using 70 keV, while SE had an average deviation of 11.8%. The authors' report demonstrates, across all phantom sizes, the improvement in CT number stability achieved with mono and mono+ relative to SE.
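For context, image-domain virtual monoenergetic images are often approximated as a per-voxel weighted blend of the low- and high-kV acquisitions, with the weight chosen for the target keV. The sketch below illustrates only that general blending idea; it is not the vendor's mono/mono+ algorithm, whose weights derive from basis material decomposition.

```python
import numpy as np

def virtual_mono(img_low, img_high, w):
    """Blend the low- and high-kV images, with weight w on the low-kV image.

    Choosing w per target keV yields a family of monoenergetic-like images;
    how w maps to keV is left to the caller (vendor-specific in practice).
    """
    return w * np.asarray(img_low, float) + (1.0 - w) * np.asarray(img_high, float)
```

For example, `virtual_mono(low, high, 0.5)` simply averages the two acquisitions, while weights outside [0, 1] extrapolate toward lower or higher effective energies.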
Crowdsourcing lung nodules detection and annotation
NASA Astrophysics Data System (ADS)
Boorboor, Saeed; Nadeem, Saad; Park, Ji Hwan; Baker, Kevin; Kaufman, Arie
2018-03-01
We present crowdsourcing as an additional modality to aid radiologists in the diagnosis of lung cancer from clinical chest computed tomography (CT) scans. More specifically, a complete workflow is introduced which can help maximize the sensitivity of lung nodule detection by utilizing the collective intelligence of the crowd. We combine the concept of overlapping thin-slab maximum intensity projections (TS-MIPs) and cine viewing to render short videos that can be outsourced as an annotation task to the crowd. These videos are generated by linearly interpolating overlapping TS-MIPs of CT slices through the depth of each quadrant of a patient's lung. The resultant videos are outsourced to an online community of non-expert users who, after a brief tutorial, annotate suspected nodules in these video segments. Using our crowdsourcing workflow, we achieved a lung nodule detection sensitivity of over 90% for 20 patient CT datasets (containing 178 lung nodules with sizes between 1 and 30 mm), and only 47 false positives from a total of 1021 annotations on nodules of all sizes (96% sensitivity for nodules > 4 mm). These results show that crowdsourcing can be a robust and scalable modality to aid radiologists in screening for lung cancer, directly or in combination with computer-aided detection (CAD) algorithms. For CAD algorithms, the presented workflow can provide highly accurate training data to overcome the high false-positive rate (per scan) problem. We also provide, for the first time, an analysis of nodule size and position which can help improve CAD algorithms.
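The TS-MIP and cine rendering step described above can be sketched as follows; the slab thickness, step size, and frame count are illustrative parameters, not the paper's settings.

```python
import numpy as np

def ts_mips(volume, slab=5, step=2):
    """Overlapping thin-slab maximum intensity projections along axis 0
    of a CT volume (slices, rows, cols)."""
    return np.stack([volume[s:s + slab].max(axis=0)
                     for s in range(0, volume.shape[0] - slab + 1, step)])

def interpolated_frames(mips, n_between=3):
    """Linearly blend consecutive MIPs into a smooth video-like sequence."""
    frames = []
    for a, b in zip(mips[:-1], mips[1:]):
        for t in np.linspace(0.0, 1.0, n_between, endpoint=False):
            frames.append((1.0 - t) * a + t * b)
    frames.append(mips[-1])
    return np.stack(frames)
```

Because adjacent slabs overlap, each nodule appears in several consecutive projections, which is what lets the blended frames play as a continuous fly-through of the lung quadrant.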
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brady, S; Kaufman, R
Purpose: To analyze CT radiation dosimetry trends in a pediatric population imaged with modern (2004-2013) CT technology. Methods: The institutional review board approved this retrospective review. Two cohorts of pediatric patients that received CT scans for treatment or surveillance of Wilms tumor (n=73) or neuroblastoma (n=74) from 2004-2013 were included in this study. Patients were scanned during this time period on a GE Ultra (8 slice; 2004-2007), a GE VCT (2008-2011), or a GE VCT-XTe (2011-2013). Each patient's individual or combined chest, abdomen, and pelvic CT exams (n=4138) were loaded onto a PACS workstation (Intelerad, Canada) and measured to calculate their effective diameter and SSDE. Patient SSDE was used to estimate patient organ dosimetry based on previously published data. Patients' organ dosimetry was sorted by gender, weight, age, scan protocol (i.e., chest, abdomen, or pelvis), and CT scanner technology and averaged accordingly to calculate population-averaged absolute and effective dose values. Results: Patient radiation dose burden calculated for all genders, weights, and ages decreased at a rate of 0.2 mSv/year (4.2 mGy/year; average organ dose) from 2004-2013; overall levels decreased by 50% from 3.0 mSv (60.0 mGy) to 1.5 mSv (25.9 mGy). Patient dose decreased at equal rates for both males and females, and for individual scan protocols. The greatest dose savings was found for patients 0-4 years old (65%), followed by 5-9 years old (45%), 10-14 years old (30%), and >14 years old (21%). Conclusion: Assuming a linear no-threshold model, there will always be a potential risk of cancer induction from CT. However, as demonstrated among these patient populations, effective and organ dose has decreased over the last decade; thus, the potential risk of long-term side effects from pediatric CT examinations has also been reduced.
CBF measured by Xe-CT: Approach to analysis and normal values
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yonas, H.; Darby, J.M.; Marks, E.C.
1991-09-01
Normal reference values and a practical approach to CBF analysis are needed for routine clinical analysis and interpretation of xenon-enhanced computed tomography (CT) CBF studies. The authors measured CBF in 67 normal individuals with the GE 9800 CT scanner adapted for CBF imaging with stable Xe. CBF values for vascular territories were systematically analyzed using the clustering of contiguous 2-cm circular regions of interest (ROIs) placed within the cortical mantle and basal ganglia. Mixed cortical flows averaged 51 ± 10 ml/100 g/min. High and low flow compartments, sampled by placing 5-mm circular ROIs in regions containing the highest and lowest flow values in each hemisphere, averaged 84 ± 14 and 20 ± 5 ml/100 g/min, respectively. Mixed cortical flow values as well as values within the high flow compartment demonstrated a significant decline with age; however, there were no significant age-related changes in the low flow compartment. The clustering of systematically placed cortical and subcortical ROIs has provided a normative database for Xe-CT CBF and a flexible and uncomplicated method for the analysis of CBF maps generated by Xe-enhanced CT.
NASA Astrophysics Data System (ADS)
Zhang, Ruoqiao; Alessio, Adam M.; Pierce, Larry A.; Byrd, Darrin W.; Lee, Tzu-Cheng; De Man, Bruno; Kinahan, Paul E.
2017-03-01
Due to the wide variability of intra-patient respiratory motion patterns, traditional short-duration cine CT used in respiratory gated PET/CT may be insufficient to match the PET scan data, resulting in suboptimal attenuation correction that eventually compromises the PET quantitative accuracy. Thus, extending the duration of cine CT can be beneficial to address this data mismatch issue. In this work, we propose to use a long-duration cine CT for respiratory gated PET/CT, whose cine acquisition time is ten times longer than a traditional short-duration cine CT. We compare the proposed long-duration cine CT with the traditional short-duration cine CT through numerous phantom simulations with 11 respiratory traces measured during patient PET/CT scans. Experimental results show that the long-duration cine CT reduces the motion mismatch between PET and CT by 41% and improves the overall reconstruction accuracy by 42% on average, as compared to the traditional short-duration cine CT. The long-duration cine CT also reduces artifacts in PET images caused by misalignment and mismatch between adjacent slices in phase-gated CT images. The improvement in motion matching between PET and CT by extending the cine duration depends on the patient, with potentially greater benefits for patients with irregular breathing patterns or larger diaphragm movements.
A new tissue segmentation method to calculate 3D dose in small animal radiation therapy.
Noblet, C; Delpon, G; Supiot, S; Potiron, V; Paris, F; Chiavassa, S
2018-02-26
In pre-clinical animal experiments, radiation is usually delivered with kV photon beams, in contrast to the MV beams used in clinical irradiation, because of the small size of the animals. At this medium energy range, however, the contribution of the photoelectric effect to absorbed dose is significant. Accurate dose calculation therefore requires a more detailed tissue definition, because both density (ρ) and elemental composition (Zeff) affect the dose distribution. Moreover, when applied to cone beam CT (CBCT) acquisitions, the stoichiometric calibration of HU becomes inefficient, as it is designed for highly collimated fan beam CT acquisitions. In this study, we propose an automatic tissue segmentation method for CBCT imaging that assigns both density (ρ) and elemental composition (Zeff) in small animal dose calculation. The method is based on the relationship found between the CBCT number and the ρZeff product computed from known materials. Monte Carlo calculations were performed to evaluate the impact of ρZeff variation on the absorbed dose in tissues. These results led to the creation of a tissue database composed of artificial tissues interpolated from tissue values published by the ICRU. The ρZeff method was validated by measuring transmitted doses through tissue substitute cylinders and a mouse with EBT3 film. Measurements were compared to the results of the Monte Carlo calculations. The study of the impact of ρZeff variation over the range of materials, from ρZeff = 2 g·cm-3 (lung) to 27 g·cm-3 (cortical bone), led to the creation of 125 artificial tissues. For tissue substitute cylinders, the use of the ρZeff method led to maximal and average relative differences between the Monte Carlo results and the EBT3 measurements of 3.6% and 1.6%. An equivalent comparison for the mouse gave maximal and average relative differences of 4.4% and 1.2% inside the 80% isodose area. Gamma analysis led to a 94.9% success rate in the 10% isodose area with 4% and 0.3 mm criteria in dose and distance. Our new tissue segmentation method was developed for 40 kVp CBCT images. Both density and elemental composition are assigned to each voxel by using the relationship between HU and the product ρZeff. The method, validated by comparing measurements and calculations, enables more accurate small animal dose distributions to be calculated on low energy CBCT images.
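A minimal sketch of the assignment idea described above: convert each CBCT number to a ρZeff value through a monotonic calibration, then pick the nearest entry from a tissue table. Both the linear calibration coefficients and the three table entries below are invented placeholders, not the paper's calibration or its 125-tissue database.

```python
# (rho*Zeff, density g/cm^3, Zeff) -- hypothetical entries for illustration;
# the real database interpolates 125 artificial tissues from ICRU values.
tissue_table = [
    (2.0,  0.26, 7.7),   # lung-like
    (7.4,  1.00, 7.4),   # water-like soft tissue
    (27.0, 1.92, 14.0),  # cortical-bone-like
]

def assign_tissue(hu, a=0.02, b=7.4):
    """Map a CBCT number to rho*Zeff via a linear model (hypothetical
    coefficients a, b) and return the nearest tissue's (density, Zeff)."""
    rho_zeff = a * hu + b
    _, rho, zeff = min(tissue_table, key=lambda t: abs(t[0] - rho_zeff))
    return rho, zeff
```

Assigning both ρ and Zeff per voxel, rather than density alone, is what lets the subsequent Monte Carlo dose calculation account for the photoelectric contribution at kV energies.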
Fusion of sensor geometry into additive strain fields measured with sensing skin
NASA Astrophysics Data System (ADS)
Downey, Austin; Sadoughi, Mohammadkazem; Laflamme, Simon; Hu, Chao
2018-07-01
Recently, numerous studies have been conducted on flexible skin-like membranes for the cost-effective monitoring of large-scale structures. The authors have proposed a large-area electronic consisting of a soft elastomeric capacitor (SEC) that transduces a structure's strain into a measurable change in capacitance. Arranged in a network configuration, SECs deployed onto the surface of a structure could be used to reconstruct strain maps. Several regression methods have recently been developed with the purpose of reconstructing such maps, but all of these algorithms assumed that each SEC-measured strain was located at the sensor's geometric center. This assumption may not be realistic, since an SEC measures the average strain value of the whole area covered by the sensor. One solution is to reduce the size of each SEC, but this would also increase the number of sensors needed to cover the large-scale structure, thereby increasing the power and data acquisition requirements. Instead, this study proposes an algorithm that accounts for the sensor's strain-averaging feature by adjusting the strain measurements and constructing a full-field strain map using the kriging interpolation method. The proposed algorithm fuses the geometry of an SEC sensor into the strain map reconstruction in order to adaptively adjust the kriging-estimated average strain of the area monitored by the sensor to the sensor's signal. Results show that by considering the sensor geometry, in addition to the sensor signal and location, the proposed strain map adjustment algorithm is capable of producing more accurate full-field strain maps than the traditional spatial interpolation method that considers only signal and location.
Research on optimal DEM cell size for 3D visualization of loess terraces
NASA Astrophysics Data System (ADS)
Zhao, Weidong; Tang, Guo'an; Ji, Bin; Ma, Lei
2009-10-01
In order to represent complex artificial terrains such as the loess terraces of Shanxi Province in northwest China, the authors put forward a new 3D visualization method, the Terraces Elevation Incremental Visual Method (TEIVM). 406 elevation points and 14 enclosed constrained lines were sampled according to the TIN-based Sampling Method (TSM) and the DEM Elevation Points and Lines Classification (DEPLC). The elevation points and constrained lines were used to construct Constrained Delaunay Triangulated Irregular Networks (CD-TINs) of the loess terraces. To visualize the loess terraces well with an optimal combination of cell size and Elevation Increment Value (EIV), the CD-TINs were converted to a grid-based DEM (G-DEM) using different combinations of cell size and EIV with the Bilinear Interpolation Method (BIM). Our case study shows that the new method visualizes the terrace steps very well when the combination of cell size and EIV is reasonable; the optimal combination is a cell size of 1 m and an EIV of 6 m. The results also show that the cell size should be smaller than half of both the average terrace width and the average vertical offset of the terrace steps in order to represent the planar shapes of the terrace surfaces and steps well, while the EIV should be larger than 4.6 times the average terrace height. The TEIVM and the results above are of great significance for the highly refined visualization of artificial terrains such as loess terraces.
Automated algorithm for mapping regions of cold-air pooling in complex terrain
NASA Astrophysics Data System (ADS)
Lundquist, Jessica D.; Pepin, Nicholas; Rochford, Caitlin
2008-11-01
In complex terrain, air in contact with the ground becomes cooled from radiative energy loss on a calm clear night and, being denser than the free atmosphere at the same elevation, sinks to valley bottoms. Cold-air pooling (CAP) occurs where this cooled air collects on the landscape. This article focuses on identifying locations on a landscape subject to considerably lower minimum temperatures than the regional average during conditions of clear skies and weak synoptic-scale winds, providing a simple automated method to map locations where cold air is likely to pool. Digital elevation models of regions of complex terrain were used to derive surfaces of local slope, curvature, and percentile elevation relative to surrounding terrain. Each pixel was classified as prone to CAP, not prone to CAP, or exhibiting no signal, based on the criterion that CAP occurs in regions with flat slopes in local depressions or valleys (negative curvature and low percentile). Along-valley changes in the topographic amplification factor (TAF) were then calculated to determine whether the cold air in the valley was likely to drain or pool. Results were checked against distributed temperature measurements in Loch Vale, Rocky Mountain National Park, Colorado; in the Eastern Pyrenees, France; and in Yosemite National Park, Sierra Nevada, California. Using CAP classification to interpolate temperatures across complex terrain resulted in improvements in root-mean-square errors compared to more basic interpolation techniques at most sites within the three areas examined, with average error reductions of up to 3°C at individual sites and about 1°C averaged over all sites in the study areas.
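The three-way pixel classification described above can be sketched as follows; the flatness and percentile thresholds are illustrative, not the calibrated values used in the study, and the TAF drainage check is omitted.

```python
import numpy as np

def classify_cap(slope, curvature, percentile, flat_deg=5.0, pct_low=0.3):
    """Classify DEM pixels as +1 (CAP-prone), -1 (not prone), or 0 (no signal).

    CAP-prone: flat slope AND a local depression (negative curvature) AND a
    low elevation percentile relative to the surrounding terrain.
    Not prone: steep slope, or a high elevation percentile (ridges/summits).
    """
    slope = np.asarray(slope, float)
    prone = (slope < flat_deg) & (np.asarray(curvature) < 0) \
            & (np.asarray(percentile) < pct_low)
    not_prone = (slope >= flat_deg) | (np.asarray(percentile) > 1 - pct_low)
    out = np.zeros_like(slope, dtype=int)
    out[not_prone] = -1
    out[prone] = 1  # prone criteria take precedence where masks overlap
    return out
```

Pixels that satisfy neither rule are left at 0 ("no signal"), matching the abstract's three-class scheme; a temperature interpolation can then treat the +1 pixels separately from the regional lapse rate.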
Assignment of boundary conditions in embedded ground water flow models
Leake, S.A.
1998-01-01
Many small-scale ground water models are too small to incorporate distant aquifer boundaries. If a larger-scale model exists for the area of interest, flow and head values can be specified for boundaries in the smaller-scale model using values from the larger-scale model. Flow components along rows and columns of a large-scale block-centered finite-difference model can be interpolated to compute horizontal flow across any segment of a perimeter of a small-scale model. Head at cell centers of the larger-scale model can be interpolated to compute head at points on a model perimeter. Simple linear interpolation is proposed for horizontal interpolation of horizontal-flow components. Bilinear interpolation is proposed for horizontal interpolation of head values. The methods of interpolation provided satisfactory boundary conditions in tests using models of hypothetical aquifers.
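A minimal sketch of the bilinear head interpolation proposed above, assuming a uniform grid with cell centers at integer multiples of the spacing (the real models use block-centered finite-difference geometry, so the offsets would differ):

```python
import numpy as np

def bilinear_head(h, x, y, dx=1.0, dy=1.0):
    """Interpolate head at point (x, y) from a 2-D array of cell-center
    heads h, with cell centers assumed at (j*dx, i*dy)."""
    j, i = int(x // dx), int(y // dy)          # lower-left cell center
    tx, ty = x / dx - j, y / dy - i            # fractional offsets in [0, 1)
    return ((1 - tx) * (1 - ty) * h[i, j] + tx * (1 - ty) * h[i, j + 1]
            + (1 - tx) * ty * h[i + 1, j] + tx * ty * h[i + 1, j + 1])
```

Evaluating this at each point along the small-scale model's perimeter yields the specified-head boundary values; the flow components would instead use the simple linear interpolation the abstract describes.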
Comparison of air-kerma strength determinations for HDR (192)Ir sources.
Rasmussen, Brian E; Davis, Stephen D; Schmidt, Cal R; Micka, John A; Dewerd, Larry A
2011-12-01
To perform a comparison of the interim air-kerma strength standard for high dose rate (HDR) (192)Ir brachytherapy sources maintained by the University of Wisconsin Accredited Dosimetry Calibration Laboratory (UWADCL) with measurements of the various source models using modified techniques from the literature. The current interim standard was established by Goetsch et al. in 1991 and has remained unchanged to date. The improved, laser-aligned seven-distance apparatus of the University of Wisconsin Medical Radiation Research Center (UWMRRC) was used to perform air-kerma strength measurements of five different HDR (192)Ir source models. The results of these measurements were compared with those from well chambers traceable to the original standard. Alternative methodologies for interpolating the (192)Ir air-kerma calibration coefficient from the NIST air-kerma standards at (137)Cs and 250 kVp x rays (M250) were investigated and intercompared. As part of the interpolation method comparison, the Monte Carlo code EGSnrc was used to calculate updated values of A(wall) for the Exradin A3 chamber used for air-kerma strength measurements. The effects of air attenuation and scatter, room scatter, as well as the solution method were investigated in detail. The average measurements when using the inverse N(K) interpolation method for the Classic Nucletron, Nucletron microSelectron, VariSource VS2000, GammaMed Plus, and Flexisource were found to be 0.47%, -0.10%, -1.13%, -0.20%, and 0.89% different than the existing standard, respectively. A further investigation of the differences observed between the sources was performed using MCNP5 Monte Carlo simulations of each source model inside a full model of an HDR 1000 Plus well chamber. 
Although the differences between the source models were found to be statistically significant, the equally weighted average difference between the seven-distance measurements and the well chambers was 0.01%, confirming that it is not necessary to update the current standard maintained at the UWADCL.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhn, M., E-mail: kuhnm@mit.edu; Hashimoto, S.; Sato, K.
The oxygen nonstoichiometry of La0.6Sr0.4CoO3-δ has been the topic of various reports in the literature, but has been exclusively measured at high oxygen partial pressures, pO2, and/or elevated temperatures. For applications of La0.6Sr0.4CoO3-δ, such as solid oxide fuel cell cathodes or oxygen permeation membranes, knowledge of the oxygen nonstoichiometry and thermo-chemical stability over a wide range of pO2 is crucial, as localized low pO2 could trigger failure of the material and device. By employing coulometric titration combined with thermogravimetry, the oxygen nonstoichiometry of La0.6Sr0.4CoO3-δ was measured at high and intermediate pO2 until the material decomposed (at log(pO2/bar) ≈ -4.5 at 1073 K). For a gradually reduced sample, an offset in oxygen content suggests that La0.6Sr0.4CoO3-δ forms a 'super-reduced' solid solution before decomposing. When the sample underwent alternate reduction-oxidation, a hysteresis-like pO2 dependence of the oxygen content in the decomposition pO2 range was attributed to the reversible formation of ABO3 and A2BO4 phases. Reduction enthalpy and entropy were determined for the single-phase region and confirmed interpolated values from the literature. Graphical abstract: Oxygen nonstoichiometry (shown as 3-δ) of La0.6Sr0.4CoO3-δ as a function of pO2 at 773-1173 K. The experimental data were obtained by thermogravimetric analysis (TG) and coulometric titration (measured either by a simple reduction (CT1) or a 'two-step-forward one-step-back' reduction-oxidation (CT2) procedure). D1 and D2 denote the decomposition pO2. The solid lines are the fit to the thermogravimetry and CT1 data. The dashed lines represent the non-equilibrium region where the sample shows a super-reduced state.
Highlights: Oxygen nonstoichiometry of La0.6Sr0.4CoO3-δ at intermediate temperatures and pO2. Experimental confirmation of previously interpolated reduction enthalpy. Decomposition pO2 assessed by coulometric titration. Hysteresis-like pO2 dependence of oxygen content at decomposition pO2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schadewaldt, N; Schulz, H; Helle, M
2014-06-01
Purpose: To analyze the effect of computing radiation dose on automatically generated MR-based simulated CT images compared to true patient CTs. Methods: Six prostate cancer patients received a regular planning CT for RT planning as well as a conventional 3D fast-field dual-echo scan on a Philips 3.0T Achieva, adding approximately 2 min of scan time to the clinical protocol. Simulated CTs (simCT) were synthesized by assigning known average CT values to the tissue classes air, water, fat, cortical and cancellous bone. For this, Dixon reconstruction of the nearly out-of-phase (echo 1) and in-phase images (echo 2) allowed for water and fat classification. Model-based bone segmentation was performed on a combination of the Dixon images. A subsequent automatic threshold divided the bone into cortical and cancellous classes. For validation, the simCT was registered to the true CT and clinical treatment plans were re-computed on the simCT in Pinnacle3. To differentiate effects related to the 5 tissue classes from changes in the patient anatomy not compensated by rigid registration, we also calculated the dose on a stratified CT, where HU values are sorted into the same 5 tissue classes as the simCT. Results: Dose and volume parameters for the PTV and risk organs as used for the clinical approval were compared. All deviations are below 1.1%, except the anal sphincter mean dose, which is at most 2.2%, but well below the clinical acceptance threshold. Average deviations are below 0.4% for the PTV and risk organs and 1.3% for the anal sphincter. The deviations of the stratified CT are in the same range as for the simCT. All plans would have passed clinical acceptance thresholds on the simulated CT images. Conclusion: This study demonstrated the clinical usability of MR-based dose calculation with the presented Dixon acquisition and subsequent fully automatic image processing. N. Schadewaldt, H. Schulz, M. Helle and S. Renisch are employed by Philips Technologie Innovative Technologies, a subsidiary of Royal Philips NV.
Globally-Gridded Interpolated Night-Time Marine Air Temperatures 1900-2014
NASA Astrophysics Data System (ADS)
Junod, R.; Christy, J. R.
2016-12-01
Over the past century, climate records have pointed to an increase in global near-surface average temperature. Near-surface air temperature over the oceans is a relatively unused parameter in understanding the current state of climate, but it is useful as an independent temperature metric over the oceans and serves as a geographical and physical complement to near-surface air temperature over land. Though versions of this dataset exist (i.e. HadMAT1 and HadNMAT2), it has been strongly recommended that various groups generate climate records independently. This University of Alabama in Huntsville (UAH) study began with the construction of monthly night-time marine air temperature (UAHNMAT) values from the early twentieth century through to the present era. Data from the International Comprehensive Ocean and Atmosphere Data Set (ICOADS) were used to compile a time series of gridded UAHNMAT (20S-70N). This time series was homogenized to correct for the many biases such as increasing ship height, solar deck heating, etc. The time series of UAHNMAT, once adjusted to a standard reference height, is gridded to 1.25° pentad grid boxes and interpolated using the kriging technique. This study will present results that quantify the variability and trends and compare them to the trends of other related datasets, including HadNMAT2 and sea-surface temperatures (HadISST & ERSSTv4).
Applications of Space-Filling-Curves to Cartesian Methods for CFD
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.; Murman, S. M.; Berger, M. J.
2003-01-01
This paper presents a variety of novel uses of space-filling curves (SFCs) for Cartesian mesh methods in CFD. While these techniques will be demonstrated using non-body-fitted Cartesian meshes, many are applicable on general body-fitted meshes, both structured and unstructured. We demonstrate the use of a single Θ(N log N) SFC-based reordering to produce single-pass (Θ(N)) algorithms for mesh partitioning, multigrid coarsening, and inter-mesh interpolation. The inter-mesh interpolation operator has many practical applications, including warm starts on modified geometry, or as an inter-grid transfer operator on remeshed regions in moving-body simulations. Exploiting the compact construction of these operators, we further show that these algorithms are highly amenable to parallelization. Examples using the SFC-based mesh partitioner show nearly linear speedup to 640 CPUs even when using multigrid as a smoother. Partition statistics are presented showing that the SFC partitions are, on average, within 15% of ideal even with only around 50,000 cells in each sub-domain. The inter-mesh interpolation operator also has linear asymptotic complexity and can be used to map a solution with N unknowns to another mesh with M unknowns with Θ(M + N) operations. This capability is demonstrated both on moving-body simulations and in mapping solutions to perturbed meshes for control surface deflection or finite-difference-based gradient design methods.
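As an illustration of the reorder-then-partition idea (using the Morton/Z-order curve, one of the SFCs commonly used for this purpose; the paper's specific curve and partitioning details are not reproduced here): sort cells by their SFC key in Θ(N log N), then cut the sorted list into equal chunks in a single Θ(N) pass, so each partition occupies a spatially compact run of the curve.

```python
def morton_key(x, y, bits=16):
    """Interleave the bits of integer cell coordinates x and y (Z-order)."""
    key = 0
    for b in range(bits):
        key |= ((x >> b) & 1) << (2 * b) | ((y >> b) & 1) << (2 * b + 1)
    return key

def partition(cells, n_parts):
    """Partition a list of (x, y) integer cell centers into n_parts
    contiguous chunks of the Morton-ordered cell list."""
    order = sorted(cells, key=lambda c: morton_key(*c))
    size = -(-len(order) // n_parts)  # ceiling division
    return [order[i:i + size] for i in range(0, len(order), size)]
```

The same sorted ordering can serve the other single-pass uses named above (coarsening, inter-mesh interpolation), since cells that are close on the curve tend to be close in space.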
Intrafractional gastric motion and interfractional stomach deformity using CT images.
Watanabe, Miho; Isobe, Koichi; Uno, Takashi; Harada, Rintarou; Kobayashi, Hiroyuki; Ueno, Naoyuki; Ito, Hisao
2011-01-01
To evaluate the intra- and interfractional gastric motion using repeated CT scans, six consecutive patients with gastric lymphoma treated at our institution between 2006 and 2008 were included in this study. We performed a simulation and delivered RT before lunch after an overnight fast to minimize the stomach volume. These patients underwent repeated CT scanning at mild inhale and exhale before their course of treatment. The repeated CT scans were matched on bony anatomy to the planning scan. The center of the stomach was determined in the X (lateral), Y (superior-inferior), and Z (ventro-dorsal) coordinate system to evaluate the intra- and interfractional motion of the stomach on each CT scan. We then calculated the treatment margins. Each patient was evaluated four to five times before their course of RT. The average intrafractional motions were -12.1, 2.4 and 4.6 mm in the superior-inferior (SI), lateral (LAT), and ventro-dorsal (VD) directions. The average interfractional motions of the center of the stomach were -4.1, 1.9 and 1.5 mm in the SI, LAT and VD directions. The average vector length was 13.0 mm. The systematic and random errors in the SI direction were 5.1 and 4.6 mm, respectively. The corresponding figures in the LAT and VD directions were 10.9, 5.4, 10.0, and 6.5 mm, respectively. Thus, margins of 15.9, 31.0 and 29.6 mm are required in the SI, LAT, and VD directions, respectively. We have demonstrated that not only intrafractional stomach motion but also interfractional motion is considerable.
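The reported margins are consistent, to within rounding, with the widely used van Herk recipe M = 2.5Σ + 0.7σ, which combines the systematic (Σ) and random (σ) errors:

```python
def van_herk_margin(systematic, random_err):
    """CTV-to-PTV margin recipe M = 2.5*Sigma + 0.7*sigma (van Herk)."""
    return 2.5 * systematic + 0.7 * random_err

# Systematic (Sigma) and random (sigma) errors from the abstract, in mm.
errors = {"SI": (5.1, 4.6), "LAT": (10.9, 5.4), "VD": (10.0, 6.5)}
for axis, (sig, rnd) in errors.items():
    print(f"{axis}: {van_herk_margin(sig, rnd):.2f} mm")
# SI 15.97, LAT 31.03, VD 29.55 -- matching the abstract's
# 15.9, 31.0 and 29.6 mm to within rounding.
```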
Computerised Axial Tomography (CAT)
1990-06-01
commercial market. EMI, which had originally counted on being the only firm selling CT units, subsequently increased its production in order to overtake... According to a rough estimate, the number of CT scanners at present working in Italy is about 18 units installed, apart from those in the large cities such as... head scanners and 198 total body scanners): among other things, they emphasise that a CT unit works, on average, for 5.4 days in the week and
Werner, Matthias K; Parker, J Anthony; Kolodny, Gerald M; English, Jeffrey R; Palmer, Matthew R
2009-12-01
The aim of this study was to evaluate prospectively the effects of respiratory gating during FDG PET/CT on the determination of lesion size and the measurement of tracer uptake in patients with pulmonary nodules in a clinical setting. Eighteen patients with known pulmonary nodules (nine women, nine men; mean age, 61.4 years) underwent conventional FDG PET/CT and respiratory-gated PET acquisitions during their scheduled staging examinations. Maximum, minimum, and average standardized uptake values (SUVs) and lesion size and volume were determined with and without respiratory gating. The results were then compared using the two-tailed Student's t test and the nonparametric Wilcoxon's test to assess the effects of respiratory gating on PET acquisitions. Respiratory gating reduced the measured area of lung lesions by 15.5%, the axial dimension by 10.3%, and the volume by 44.5% (p = 0.014, p = 0.007, and p = 0.025, respectively). The lesion volumes in gated studies were closer to those assessed by standard CT (difference decreased by 126.6%, p = 0.025). Respiratory gating increased the measured maximum SUV by 22.4% and average SUV by 13.3% (p < 0.001 and p = 0.002). Our findings suggest that the use of PET respiratory gating in PET/CT results in lesion volumes closer to those assessed by CT and improved measurements of tracer uptake for lesions in the lungs.
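The standardized uptake values compared above normalize the measured activity concentration by the injected dose per unit body weight. A minimal sketch with hypothetical numbers (the study's actual doses and patient weights are not given in the abstract):

```python
def suv(activity_bq_per_ml, injected_dose_bq, body_weight_g):
    """Body-weight-normalized standardized uptake value.

    SUV = tissue activity concentration / (injected dose / body weight),
    assuming a tissue density of 1 g/mL so the result is dimensionless.
    """
    return activity_bq_per_ml / (injected_dose_bq / body_weight_g)

# Hypothetical numbers: 370 MBq injected into a 70 kg patient, with a
# lesion voxel measuring 21 kBq/mL decay-corrected activity.
print(round(suv(21_000, 370e6, 70_000), 2))  # -> 3.97
```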
Research progress and hotspot analysis of spatial interpolation
NASA Astrophysics Data System (ADS)
Jia, Li-juan; Zheng, Xin-qi; Miao, Jin-li
2018-02-01
In this paper, the literature related to spatial interpolation published between 1982 and 2017 and indexed in the Web of Science core database is used as the data source, and a visualization analysis is carried out on the co-country network, co-category network, co-citation network and keyword co-occurrence network. It is found that spatial interpolation research has experienced three stages: slow development, steady development and rapid development. Eleven clustering groups show strong cross effects, converging mainly on spatial interpolation theory, practical applications and case studies, and the accuracy and efficiency of spatial interpolation. Finding the optimal spatial interpolation method is the frontier and hot spot of the research. Spatial interpolation research has formed a theoretical basis and a research-system framework; it is strongly interdisciplinary and is widely used in various fields.
The abundances of hydrogen, helium, oxygen, and iron accelerated in large solar particle events
NASA Technical Reports Server (NTRS)
Mazur, J. E.; Mason, G. M.; Klecker, B.; Mcguire, R. E.
1993-01-01
Energy spectra measured in 10 large flares with the University of Maryland/Max-Planck-Institut sensors on ISEE I and Goddard Space Flight Center sensors on IMP 8 allowed us to determine the average H, He, O, and Fe abundances as functions of energy in the range of about 0.3-80 MeV/nucleon. Model fits to the spectra of individual events using the predictions of a steady state stochastic acceleration model with rigidity-dependent diffusion provided a means of interpolating small portions of the energy spectra not measured with the instrumentation. Particles with larger mass-to-charge ratios were relatively less abundant at higher energies in the flare-averaged composition. The Fe/O enhancement at low SEP energies was less than the Fe/O ratios observed in He-3-rich flares. Unlike the SEP composition averaged above 5 MeV/nucleon, the average SEP abundances above 0.3 MeV/nucleon were similar to the average solar wind.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, H; Leszczynski, K; Lee, Y
Purpose: To evaluate MR-only treatment planning for brain Stereotactic Ablative Radiotherapy (SABR) based on pseudo-CT (pCT) generation using one set of T1-weighted MRI. Methods: T1-weighted MR and CT images from 12 patients who were eligible for brain SABR were retrospectively acquired for this study. MR-based pCT was generated using a newly in-house developed algorithm based on MR tissue segmentation and voxel-based electron density (ED) assignment (pCTv). pCTs using bulk density assignment (pCTb, where bone and soft tissue were assigned 800 HU and 0 HU, respectively) and water density assignment (pCTw, where all tissues were assigned 0 HU) were generated for comparison of ED assignment techniques. The pCTs were registered with CTs, and contours of radiation targets and Organs-at-Risk (OARs) from clinical CT-based plans were copied to co-registered pCTs. Volumetric Modulated Arc Therapy (VMAT) plans were independently created for pCTv and CT using the same optimization settings and a prescription (50 Gy/10 fractions) to the planning-target-volume (PTV) mean dose. pCTv-based plans and CT-based plans were compared using dosimetry parameters and monitor units (MUs). Beam fluence maps of CT-based plans were transferred to co-registered pCTs, and dose was recalculated on the pCTs. Dose distribution agreement between pCT and CT plans was quantified using Gamma analysis (2%/2mm, 1%/1mm with a 10% cut-off threshold) in axial, coronal and sagittal planes across the PTV. Results: The average differences of PTV mean and maximum doses, and monitor units, between independently created pCTv-based and CT-based plans were 0.5%, 1.5% and 1.1%, respectively. Gamma analysis of dose distributions of the pCTs and the CT calculated using the same fluence map resulted in average agreements of 92.6%/79.1%/52.6% with the 1%/1mm criterion, and 98.7%/97.4%/71.5% with the 2%/2mm criterion, for pCTv/CT, pCTb/CT and pCTw/CT, respectively.
Conclusion: Plans produced on voxel-based pCT are dosimetrically more similar to CT plans than those based on bulk-assignment pCTs. MR-only treatment planning using voxel-based pCT generated from T1-weighted MRI may be feasible.
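The 2%/2mm and 1%/1mm gamma analyses used above score each reference point by its best combined dose-difference/distance agreement with the evaluated distribution. A simplified 1-D global gamma sketch (clinical implementations work in 2-D/3-D with sub-voxel interpolation):

```python
import numpy as np

def gamma_1d(dose_ref, dose_eval, x, dd=0.02, dta=2.0, cutoff=0.1):
    """1-D global gamma analysis (sketch of a 2%/2mm-style comparison).

    dd: dose-difference criterion as a fraction of the reference maximum;
    dta: distance-to-agreement criterion in the units of x (e.g. mm).
    Returns the pass rate over points above the cutoff threshold.
    """
    d_max = dose_ref.max()
    gammas = []
    for xi, di in zip(x, dose_ref):
        if di < cutoff * d_max:
            continue  # low-dose points are excluded, as in the abstract
        dist2 = ((x - xi) / dta) ** 2
        dose2 = ((dose_eval - di) / (dd * d_max)) ** 2
        gammas.append(np.sqrt(np.min(dist2 + dose2)))
    return np.mean(np.array(gammas) <= 1.0)

x = np.linspace(0, 50, 101)           # positions in mm
ref = np.exp(-((x - 25) / 10) ** 2)   # toy reference dose profile
ev = np.exp(-((x - 25.4) / 10) ** 2)  # same profile, shifted 0.4 mm
# A 0.4 mm shift is well within the 2 mm DTA criterion, so every
# point passes.
print(gamma_1d(ref, ev, x))  # -> 1.0
```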
Pediatric Chest and Abdominopelvic CT: Organ Dose Estimation Based on 42 Patient Models
Tian, Xiaoyu; Li, Xiang; Segars, W. Paul; Paulson, Erik K.; Frush, Donald P.
2014-01-01
Purpose To estimate organ dose from pediatric chest and abdominopelvic computed tomography (CT) examinations and evaluate the dependency of organ dose coefficients on patient size and CT scanner models. Materials and Methods The institutional review board approved this HIPAA–compliant study and did not require informed patient consent. A validated Monte Carlo program was used to perform simulations in 42 pediatric patient models (age range, 0–16 years; weight range, 2–80 kg; 24 boys, 18 girls). Multidetector CT scanners were modeled on those from two commercial manufacturers (LightSpeed VCT, GE Healthcare, Waukesha, Wis; SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). Organ doses were estimated for each patient model for routine chest and abdominopelvic examinations and were normalized by volume CT dose index (CTDIvol). The relationships between CTDIvol-normalized organ dose coefficients and average patient diameters were evaluated across scanner models. Results For organs within the image coverage, CTDIvol-normalized organ dose coefficients largely showed a strong exponential relationship with the average patient diameter (R2 > 0.9). The average percentage differences between the two scanner models were generally within 10%. For distributed organs and organs on the periphery of or outside the image coverage, the differences were generally larger (average, 3%–32%) mainly because of the effect of overranging. Conclusion It is feasible to estimate patient-specific organ dose for a given examination with the knowledge of patient size and the CTDIvol. These CTDIvol-normalized organ dose coefficients enable one to readily estimate patient-specific organ dose for pediatric patients in clinical settings. 
This dose information, and, as appropriate, attendant risk estimations, can provide more substantive information for the individual patient for both clinical and research applications and can yield more expansive information on dose profiles across patient populations within a practice. © RSNA, 2013 PMID:24126364
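The exponential relationship between CTDIvol-normalized dose coefficients and average patient diameter can be fitted in log space, where it becomes linear. A sketch with synthetic data (the coefficients a and b below are made up, not the study's fitted values):

```python
import numpy as np

# Hypothetical (diameter_cm, CTDIvol-normalized organ dose) pairs
# following the exponential form h(d) = a * exp(-b * d) reported
# in the study.
d = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
h = 2.2 * np.exp(-0.04 * d)

# Fit in log space: ln h = ln a - b*d is linear in d.
b_neg, ln_a = np.polyfit(d, np.log(h), 1)
a, b = np.exp(ln_a), -b_neg
print(round(a, 2), round(b, 2))  # -> 2.2 0.04
```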
[Research on fast implementation method of image Gaussian RBF interpolation based on CUDA].
Chen, Hao; Yu, Haizhong
2014-04-01
Image interpolation is often required during medical image processing and analysis. Although interpolation based on the Gaussian radial basis function (GRBF) has high precision, its long calculation time still limits its application in the field of image interpolation. To overcome this problem, a method of two-dimensional and three-dimensional medical image GRBF interpolation based on the compute unified device architecture (CUDA) is proposed in this paper. According to the single instruction multiple threads (SIMT) execution model of CUDA, various optimizing measures such as coalesced access and shared memory are adopted in this study. To eliminate the edge distortion of image interpolation, a natural suture algorithm is utilized in overlapping regions while adopting a data-space strategy of separating 2D images into blocks or dividing 3D images into sub-volumes. While keeping a high interpolation precision, the 2D and 3D medical image GRBF interpolation achieved great acceleration in each basic computing step. The experiments showed that the efficiency of image GRBF interpolation on the CUDA platform was markedly improved compared with CPU calculation. The present method is of considerable reference value in the application field of image interpolation.
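A CPU reference version of GRBF interpolation in one dimension shows the two dense kernel steps (system solve and evaluation) that a CUDA implementation parallelizes; the kernel width sigma and the test data here are illustrative:

```python
import numpy as np

def grbf_interpolate(x_obs, y_obs, x_new, sigma=1.0):
    """Gaussian radial basis function interpolation (CPU reference).

    Solves Phi w = y for the RBF weights, then evaluates the expansion
    at x_new. Both steps are dense kernel evaluations, which is what
    makes the method expensive on CPU and a good fit for GPUs.
    """
    phi = lambda r: np.exp(-(r ** 2) / (2 * sigma ** 2))
    Phi = phi(np.abs(x_obs[:, None] - x_obs[None, :]))
    w = np.linalg.solve(Phi, y_obs)
    K = phi(np.abs(np.atleast_1d(x_new)[:, None] - x_obs[None, :]))
    return K @ w

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 4.0, 9.0])
# An RBF interpolant passes exactly through its data points.
print(np.round(grbf_interpolate(x, y, x), 6))  # -> [0. 1. 4. 9.]
```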
NASA Technical Reports Server (NTRS)
Jayroe, R. R., Jr.
1976-01-01
Geographical correction effects on LANDSAT image data are identified, using the nearest neighbor, bilinear interpolation and bicubic interpolation techniques. Potential impacts of registration on image compression and classification are explored.
Classical and neural methods of image sequence interpolation
NASA Astrophysics Data System (ADS)
Skoneczny, Slawomir; Szostakowski, Jaroslaw
2001-08-01
An image interpolation problem is often encountered in many areas. Some examples are interpolation in the coding/decoding process for transmission purposes, reconstruction of a full frame from two interlaced sub-frames in normal TV or HDTV, or reconstruction of missing frames in old, damaged cinematic sequences. In this paper an overview of interframe interpolation methods is presented. Both direct and motion-compensated interpolation techniques are illustrated with examples. The methodology can be either classical or based on neural networks, depending on the demands of the specific interpolation problem.
Wiesmüller, Marco; Quick, Harald H; Navalpakkam, Bharath; Lell, Michael M; Uder, Michael; Ritt, Philipp; Schmidt, Daniela; Beck, Michael; Kuwert, Torsten; von Gall, Carl C
2013-01-01
PET/MR hybrid scanners have recently been introduced, but not yet validated. The aim of this study was to compare the PET components of a PET/CT hybrid system and of a simultaneous whole-body PET/MR hybrid system with regard to reproducibility of lesion detection and quantitation of tracer uptake. A total of 46 patients underwent a whole-body PET/CT scan 1 h after injection and an average of 88 min later a second scan using a hybrid PET/MR system. The radioactive tracers used were (18)F-deoxyglucose (FDG), (18)F-ethylcholine (FEC) and (68)Ga-DOTATATE (Ga-DOTATATE). The PET images from PET/CT (PET(CT)) and from PET/MR (PET(MR)) were analysed for tracer-positive lesions. Regional tracer uptake in these foci was quantified using volumes of interest, and maximal and average standardized uptake values (SUV(max) and SUV(avg), respectively) were calculated. Of the 46 patients, 43 were eligible for comparison and statistical analysis. All lesions except one identified by PET(CT) were identified by PET(MR) (99.2 %). In 38 patients (88.4 %), the same number of foci were identified by PET(CT) and by PET(MR). In four patients, more lesions were identified by PET(MR) than by PET(CT), in one patient PET(CT) revealed an additional focus compared to PET(MR). The mean SUV(max) and SUV(avg) of all lesions determined by PET(MR) were by 21 % and 11 % lower, respectively, than the values determined by PET(CT) (p < 0.05), and a strong correlation between these variables was identified (Spearman rho 0.835; p < 0.01). PET/MR showed equivalent performance in terms of qualitative lesion detection to PET/CT. The differences demonstrated in quantitation of tracer uptake between PET(CT) and PET(MR) were minor, but statistically significant. Nevertheless, a more detailed study of the quantitative accuracy of PET(MR) and the factors governing it is needed to ultimately assess its accuracy in measuring tissue tracer concentrations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, H; Lee, Y; Ruschin, M
2015-06-15
Purpose: Automatically derive electron density of tissues using MR images and generate a pseudo-CT for MR-only treatment planning of brain tumours. Methods: 20 stereotactic radiosurgery (SRS) patients' T1-weighted MR images and CT images were retrospectively acquired. First, a semi-automated tissue segmentation algorithm was developed to differentiate tissues with similar MR intensities and large differences in electron densities. The method started with approximately 12 slices of manually contoured spatial regions containing sinuses and airways; then air, bone, brain, cerebrospinal fluid (CSF) and eyes were automatically segmented using edge detection and anatomical information including location, shape, tissue uniformity and relative intensity distribution. Next, soft tissues (muscle and fat) were segmented based on their relative intensity histograms. Finally, intensities of voxels in each segmented tissue were mapped into their electron density range to generate the pseudo-CT by linearly fitting their relative intensity histograms. Co-registered CT was used as the ground truth. The bone segmentations of pseudo-CT were compared with those of co-registered CT obtained using a 300 HU threshold. The average distances between voxels on the external edges of the skull in pseudo-CT and CT were calculated in the three axial, coronal and sagittal slices with the largest skull width. The mean absolute electron density (in Hounsfield units) difference of voxels in each segmented tissue was calculated. Results: The average distance between voxels on the external skull in pseudo-CT and CT was 0.6±1.1 mm (mean±1SD). The mean absolute electron density differences for bone, brain, CSF, muscle and fat were 78±114, 21±8, 14±29, 57±37, and 31±63 HU, respectively. Conclusion: The semi-automated MR electron density mapping technique was developed using T1-weighted MR images.
The generated pseudo-CT is comparable to CT in terms of the anatomical position of tissues and the similarity of electron density assignment. This method can allow MR-only treatment planning.
A 4D global respiratory motion model of the thorax based on CT images: A proof of concept.
Fayad, Hadi; Gilles, Marlene; Pan, Tinsu; Visvikis, Dimitris
2018-05-17
Respiratory motion reduces the sensitivity and specificity of medical images, especially in the thoracic and abdominal areas. It may affect applications such as cancer diagnostic imaging and/or radiation therapy (RT). Solutions to this issue include modeling of the respiratory motion in order to optimize both diagnostic and therapeutic protocols. Personalized motion modeling requires patient-specific four-dimensional (4D) imaging, which in the case of 4D computed tomography (4D CT) acquisition is associated with an increased dose. The goal of this work was to develop a global respiratory motion model capable of relating external patient surface motion to internal structure motion without the need for a patient-specific 4D CT acquisition. The proposed global model is based on principal component analysis and can be adjusted to a given patient anatomy using only one or two static CT images in conjunction with respiration-synchronized patient external surface motion. It is based on the relation between the internal motion, described using deformation fields obtained by registering 4D CT images, and patient surface maps obtained either from optical imaging devices or extracted from CT image-based patient skin segmentation. 4D CT images of six patients were used to generate the global motion model, which was validated by adapting it to four different patients having skin-segmented surfaces and two other patients having time-of-flight camera acquired surfaces. The reproducibility of the proposed model was also assessed on two patients with two 4D CT series acquired within 2 weeks of each other. Profile comparison shows the efficacy of the global respiratory motion model and an improvement when using two CT images to adapt the model. This was confirmed by the correlation coefficient, with a mean correlation of 0.9 and 0.95 when using one or two CT images respectively, comparing acquired to model-generated 4D CT images.
For the four patients with segmented surfaces, expert validation indicates an error of 2.35 ± 0.26 mm, compared to 6.07 ± 0.76 mm when using a simple interpolation between full inspiration (FI) and full expiration (FE) CT only, i.e., without specific modeling of the respiratory motion. For the two patients with acquired surfaces, this error was 2.48 ± 0.18 mm. In terms of reproducibility, model error changes of 0.12 and 0.17 mm were measured for the two patients concerned. The framework for the derivation of a global respiratory motion model was developed. Only one or two static CT images and the associated patient surface motion, as a surrogate measure, are needed to personalize the model. The model's accuracy and reproducibility were assessed by comparing acquired vs model-generated 4D CT images. Future work will consist of assessing the proposed model extensively for radiotherapy applications. © 2018 American Association of Physicists in Medicine.
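The principal-component step of such a motion model extracts a few dominant motion modes from the registration-derived deformation fields; surrogate surface motion is then regressed against the per-phase mode weights. A toy sketch of the decomposition step only, with made-up numbers standing in for flattened deformation fields:

```python
import numpy as np

# Toy model-building step: each row is a flattened deformation field
# from one respiratory phase (values are made up for illustration).
fields = np.array([[0.0, 0.0, 0.0],
                   [1.0, 2.0, 1.0],
                   [2.0, 4.0, 2.0],
                   [3.0, 6.0, 3.0]])

mean = fields.mean(axis=0)
# PCA via SVD of the centered data; rows of Vt are motion modes.
U, S, Vt = np.linalg.svd(fields - mean, full_matrices=False)
scores = (fields - mean) @ Vt.T  # per-phase weight of each mode

# This toy motion is one-dimensional, so a single mode captures
# essentially all of the variance; a real model keeps the first few
# modes and ties their weights to the external surface surrogate.
explained = S[0] ** 2 / np.sum(S ** 2)
print(round(explained, 3))  # -> 1.0
```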
Ding, Qian; Wang, Yong; Zhuang, Dafang
2018-04-15
The appropriate spatial interpolation methods must be selected to analyze the spatial distributions of Potentially Toxic Elements (PTEs), which is a precondition for evaluating PTE pollution. The accuracy and effect of different spatial interpolation methods, which include inverse distance weighting interpolation (IDW) (power = 1, 2, 3), radial basis function interpolation (RBF) (basis function: thin-plate spline (TPS), spline with tension (ST), completely regularized spline (CRS), multiquadric (MQ) and inverse multiquadric (IMQ)) and ordinary kriging interpolation (OK) (semivariogram model: spherical, exponential, gaussian and linear), were compared using 166 unevenly distributed soil PTE samples (As, Pb, Cu and Zn) in the Suxian District, Chenzhou City, Hunan Province as the study subject. The reasons for the accuracy differences of the interpolation methods and the uncertainties of the interpolation results are discussed, then several suggestions for improving the interpolation accuracy are proposed, and the direction of pollution control is determined. The results of this study are as follows: (i) RBF-ST and OK (exponential) are the optimal interpolation methods for As and Cu, and the optimal interpolation method for Pb and Zn is RBF-IMQ. (ii) The interpolation uncertainty is positively correlated with the PTE concentration, and higher uncertainties are primarily distributed around mines, which is related to the strong spatial variability of PTE concentrations caused by human interference. (iii) The interpolation accuracy can be improved by increasing the sample size around the mines, introducing auxiliary variables in the case of incomplete sampling and adopting the partition prediction method. (iv) It is necessary to strengthen the prevention and control of As and Pb pollution, particularly in the central and northern areas. 
The results of this study can provide an effective reference for the optimization of interpolation methods and parameters for unevenly distributed soil PTE data in mining areas. Copyright © 2018 Elsevier Ltd. All rights reserved.
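Among the compared families, IDW is the simplest: each prediction is a distance-weighted average of the samples, with the power parameter controlling how local the surface is. A minimal sketch (the sample values are illustrative, not the soil PTE data):

```python
import numpy as np

def idw(xy_obs, z_obs, xy_new, power=2):
    """Inverse distance weighting: weights fall off as 1/distance^power.

    A larger power makes the surface more local (closer to nearest
    neighbor); the paper compares power = 1, 2, 3 against RBF and
    kriging variants.
    """
    d = np.linalg.norm(xy_obs - xy_new, axis=1)
    if np.any(d == 0):               # exact hit on a sample point
        return z_obs[np.argmin(d)]
    w = 1.0 / d ** power
    return np.sum(w * z_obs) / np.sum(w)

xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([10.0, 20.0, 30.0, 40.0])
# The center of the square is equidistant from all four samples,
# so IDW returns their plain average.
print(round(idw(xy, z, np.array([0.5, 0.5])), 6))  # -> 25.0
```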
Price, Jeff
1995-01-01
These maps show changes in the distribution and abundance patterns of some North American birds for the last 20 years. For each species there are four maps, each representing the average distribution and abundance pattern over the five-year periods 1970-1974, 1975-1979, 1980-1984, and 1985-1989. The maps are based on data collected by the USFWS/CWS Breeding Bird Survey (BBS). Only BBS routes that were run at least once during each of the five-year periods were used (about 1300 routes). The maps were created in the software package Surfer using a kriging technique to interpolate mean relative abundances for areas where no routes were run. On each map, a portion of northeast Canada was blanked out because there were not enough routes to allow for adequate interpolation. All of the maps in this presentation use the same color scale (shown below). The minimum value mapped was 0.5 birds per route, which represents the edge of the species range.
Cyr, Marilyn; Kopala-Sibley, Daniel C; Lee, Seonjoo; Chen, Chen; Stefan, Mihaela; Fontaine, Martine; Terranova, Kate; Berner, Laura A; Marsh, Rachel
2017-10-01
Cross-sectional data suggest functional and anatomical disturbances in inferior and orbital frontal regions in bulimia nervosa (BN). Using longitudinal data, we investigated whether reduced cortical thickness (CT) in these regions arises early and persists over adolescence in BN, independent of symptom remission, and whether CT reductions are markers of BN symptoms. A total of 33 adolescent females with BN symptoms (BN or other specified feeding or eating disorder) and 28 healthy adolescents participated in this study. Anatomical magnetic resonance imaging and clinical data were acquired at 3 time points within 2-year intervals over adolescence, with 31% average attrition between assessments. Using a region-of-interest approach, we assessed group differences in CT at baseline and over time, and tested whether between- and within-subject variations in CT were associated with the frequency of BN symptoms. Reduced CT in the right inferior frontal gyrus persisted over adolescence in BN compared to healthy adolescents, even in those who achieved full or partial remission. Within the BN group, between-subject variations in CT in the inferior and orbital frontal regions were inversely associated with specific BN symptoms, suggesting, on average over time, greater CT reductions in individuals with more frequent BN symptoms. Reduced CT in inferior frontal regions may contribute to illness persistence into adulthood. Reductions in the thickness of the inferior and orbital frontal regions may be markers of specific BN symptoms. Because our sample size precluded correcting for multiple comparisons, these findings should be replicated in a larger sample. Future study of functional changes in associated fronto-striatal circuits could identify potential circuit-based intervention targets. Copyright © 2017 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.
Radiation Dose Reduction by Indication-Directed Focused z-Direction Coverage for Neck CT.
Parikh, A K; Shah, C C
2016-06-01
The American College of Radiology-American Society of Neuroradiology-Society for Pediatric Radiology Practice Parameter for a neck CT suggests that coverage should be from the sella to the aortic arch. It also recommends using CT scans judiciously to achieve the clinical objective. Our purpose was to analyze the potential dose reduction by decreasing the scan length of a neck CT and to assess for any clinically relevant information that might be missed from this modified approach. This retrospective study included 126 children who underwent a neck CT between August 1, 2013, and September 30, 2014. Alteration of the scan length for the modified CT was suggested on the topographic image on the basis of the indication of the study, with the reader blinded to the images and the report. The CT dose index volume of the original scan was multiplied by the new scan length to calculate the dose-length product of the modified study. The effective dose was calculated for the original and modified studies by using age-based conversion factors from the American Association of Physicists in Medicine Report No. 96. Decreasing the scan length resulted in an average estimated dose reduction of 47%. The average reduction in scan length was 10.4 cm, decreasing the overall coverage by 48%. The change in scan length did not result in any missed findings that altered management. Of the 27 abscesses in this study, none extended to the mediastinum. All of the lesions in question were completely covered. Decreasing the scan length of a neck CT according to the indication provides a significant savings in radiation dose, while not altering diagnostic ability or management. © 2016 by American Journal of Neuroradiology.
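The dose arithmetic described above follows E = CTDIvol × L × k, i.e. the dose-length product times an age- and region-specific conversion factor from AAPM Report 96. A sketch with hypothetical per-scan values chosen to match the abstract's average scan-length reduction (10.4 cm, 48% of coverage); the k-factor is an example, not the study's value:

```python
def effective_dose_msv(ctdivol_mgy, scan_length_cm, k_factor):
    """Effective dose estimate: E = CTDIvol * L * k = DLP * k.

    k is the age- and region-specific conversion factor in
    mSv/(mGy*cm). All numeric inputs below are illustrative.
    """
    dlp = ctdivol_mgy * scan_length_cm
    return dlp * k_factor

# Hypothetical pediatric neck scan: same CTDIvol, coverage shortened
# by 10.4 cm as in the abstract (21.7 cm -> 11.3 cm).
k = 0.0079  # example conversion factor, mSv/(mGy*cm)
full = effective_dose_msv(5.0, 21.7, k)
short = effective_dose_msv(5.0, 11.3, k)
print(round(100 * (1 - short / full)))  # percent dose saved -> 48
```

Since CTDIvol and k are unchanged, the dose saving equals the fractional reduction in scan length, which is why the 48% coverage reduction translates almost directly into the reported ~47% dose reduction.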
Brodin, N P; Björk-Eriksson, T; Birk Christensen, C; Kiil-Berthelsen, A; Aznar, M C; Hollensen, C; Markova, E; Munck af Rosenschöld, P
2015-01-01
Objective: To investigate the impact of including fluorine-18 fludeoxyglucose (18F-FDG) positron emission tomography (PET) scanning in the planning of paediatric radiotherapy (RT). Methods: Target volumes were first delineated without and subsequently re-delineated with access to 18F-FDG PET scan information, on duplicate CT sets. RT plans were generated for three-dimensional conformal photon RT (3DCRT) and intensity-modulated proton therapy (IMPT). The results were evaluated by comparison of target volumes, target dose coverage parameters, normal tissue complication probability (NTCP) and estimated risk of secondary cancer (SC). Results: Considerable deviations between CT- and PET/CT-guided target volumes were seen in 3 out of the 11 patients studied. However, averaging over the whole cohort, CT or PET/CT guidance introduced no significant difference in the shape or size of the target volumes, target dose coverage, irradiated volumes, estimated NTCP or SC risk, neither for IMPT nor 3DCRT. Conclusion: Our results imply that the inclusion of PET/CT scans in the RT planning process could have considerable impact for individual patients. There were no general trends of increasing or decreasing irradiated volumes, suggesting that the long-term morbidity of RT in childhood would on average remain largely unaffected. Advances in knowledge: 18F-FDG PET-based RT planning does not systematically change NTCP or SC risk for paediatric cancer patients compared with CT only. 3 out of 11 patients had a distinct change of target volumes when PET-guided planning was introduced. Dice and mismatch metrics are not sufficient to assess the consequences of target volume differences in the context of RT. PMID:25494657
Treglia, Giorgio; Taralli, Silvia; Salsano, Marco; Muoio, Barbara; Sadeghi, Ramin; Giovanella, Luca
2014-06-01
The aim of the study was to meta-analyze published data about prevalence and malignancy risk of focal colorectal incidentalomas (FCIs) detected by Fluorine-18-Fluorodeoxyglucose positron emission tomography or positron emission tomography/computed tomography ((18)F-FDG-PET or PET/CT). A comprehensive computer literature search of studies published through July 31(st) 2012 regarding FCIs detected by (18)F-FDG-PET or PET/CT was performed. Pooled prevalence of patients with FCIs and risk of malignant or premalignant FCIs after colonoscopy or histopathology verification were calculated. Furthermore, separate calculations for geographic areas were performed. Finally, average standardized uptake values (SUV) in malignant, premalignant and benign FCIs were reported. Thirty-two studies comprising 89,061 patients evaluated by (18)F-FDG-PET or PET/CT were included. The pooled prevalence of FCIs detected by (18)F-FDG-PET or PET/CT was 3.6% (95% confidence interval [95% CI]: 2.6-4.7%). Overall, 1,044 FCIs detected by (18)F-FDG-PET or PET/CT underwent colonoscopy or histopathology evaluation. Pooled risk of malignant or premalignant lesions was 68% (95% CI: 60-75%). Risk of malignant and premalignant FCIs in Asia-Oceania was lower compared to that of Europe and America. A significant overlap in average SUV was found between malignant, premalignant and benign FCIs. FCIs are observed in a not negligible number of patients who undergo (18)F-FDG-PET or PET/CT studies with a high risk of malignant or premalignant lesions. SUV is not reliable as a tool to differentiate between malignant, premalignant and benign FCIs. Further investigation is warranted whenever FCIs are detected by (18)F-FDG-PET or PET/CT.
The Effect of Herbaceous Legumes in Feed on In-Vitro Digestibility
NASA Astrophysics Data System (ADS)
Ratnawaty, S.; Hartutik; Chuzaemi, S.
2018-02-01
This study was carried out to evaluate the in-vitro digestibility of herbaceous legumes in feed. The materials used were three herbaceous legumes, namely Clitoria ternatea Q5455 (CT Q5455), Clitoria ternatea cv. Milgarra (CT cv. Milgarra), and Stylosanthes seabrana (S. seabrana). The treatments were P0 = 100% grass; P1 = 50% grass + 50% CT Q5455; P2 = 50% grass + 50% CT cv. Milgarra; P3 = 50% grass + 50% S. seabrana. The results showed that the treatments had a significant effect (P < 0.05) on dry matter (DM) digestibility. The highest DM digestibility was in P1 (60.35%) and P3 (60.22%). Among the raw materials, DM digestibility was highest in CT cv. Milgarra (73.49%) and lowest in S. seabrana (63.90%). The treatments had a highly significant effect (P < 0.01) on organic matter (OM) digestibility. The highest OM digestibility was in P1 (63.04%) and P3 (61.89%). Among the raw materials, OM digestibility was highest in CT cv. Milgarra (73.90%) and lowest in S. seabrana (63.85%). The treatments had a significant effect (P < 0.05) on crude protein (CP) digestibility; the average CP digestibility of the feed was similar across treatments except in CT Q5455 (67.25%). The treatments also had a significant effect (P < 0.05) on total digestible nutrients (TDN). The highest TDN was in P1 (66.19%) and the lowest in P0 (51.38%). Among the raw materials, average TDN was highest in CT cv. Milgarra (77.59%) and lowest in S. seabrana (67.04%).
NASA Astrophysics Data System (ADS)
Li, Xiang; Samei, Ehsan; Segars, W. Paul; Sturgeon, Gregory M.; Colsher, James G.; Frush, Donald P.
2010-04-01
Radiation-dose awareness and optimization in CT can greatly benefit from a dose-reporting system that provides radiation dose and cancer risk estimates specific to each patient and each CT examination. Recently, we reported a method for estimating patient-specific dose from pediatric chest CT. The purpose of this study is to extend that effort to patient-specific risk estimation and to a population of pediatric CT patients. Our study included thirty pediatric CT patients (16 males and 14 females; 0-16 years old), for whom full-body computer models were recently created based on the patients' clinical CT data. Using a validated Monte Carlo program, organ dose received by the thirty patients from a chest scan protocol (LightSpeed VCT, 120 kVp, 1.375 pitch, 40-mm collimation, pediatric body scan field-of-view) was simulated and used to estimate patient-specific effective dose. Risks of cancer incidence were calculated for radiosensitive organs using gender-, age-, and tissue-specific risk coefficients and were used to derive patient-specific effective risk. The thirty patients had normalized effective dose of 3.7-10.4 mSv/100 mAs and normalized effective risk of 0.5-5.8 cases/1000 exposed persons/100 mAs. Normalized lung dose and risk of lung cancer correlated strongly with average chest diameter (correlation coefficient: r = -0.98 to -0.99). Normalized effective risk also correlated strongly with average chest diameter (r = -0.97 to -0.98). These strong correlations can be used to estimate patient-specific dose and risk prior to or after an imaging study to potentially guide healthcare providers in justifying CT examinations and to guide individualized protocol design and optimization.
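The reported strong negative correlations between normalized dose and average chest diameter suggest a simple log-linear (exponential) model for pre-scan dose estimation. The sketch below fits such a model to hypothetical diameter/dose pairs; the numbers are illustrative only, not the study's data.

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def fit_exponential(diam_cm, dose):
    """Least-squares fit of dose = exp(a + b*d) via the log-linear form."""
    logs = [math.log(v) for v in dose]
    n = len(diam_cm)
    mx, my = sum(diam_cm) / n, sum(logs) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(diam_cm, logs))
         / sum((x - mx) ** 2 for x in diam_cm))
    return my - b * mx, b

# illustrative diameter (cm) vs. normalized effective dose (mSv/100 mAs)
diam = [12.0, 14.0, 16.0, 18.0, 20.0, 22.0, 24.0]
dose = [10.4, 9.0, 7.8, 6.7, 5.8, 5.0, 4.3]
a, b = fit_exponential(diam, dose)
predict = lambda d: math.exp(a + b * d)   # dose estimate for a new patient
```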
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, C; Yin, Y
Purpose: The purpose of this study was to compare radiation therapy treatment plans that spare active bone marrow versus whole pelvic bone marrow using 18F-FLT PET/CT imaging. Methods: We have developed an IMRT planning methodology to incorporate functional PET imaging using 18F-FLT PET/CT scans. Plans were generated for two cervical cancer patients: in one plan, the pelvic active bone marrow region, defined by SUV > 2, was incorporated as an avoidance region; in the other, the whole pelvic bone marrow was used. Dose objectives were set to reduce the volume of active bone marrow and whole bone marrow irradiated. The volumes receiving 10 Gy (V10) and 20 Gy (V20) for active bone marrow were evaluated. Results: Active bone marrow regions identified by 18F-FLT with an SUV > 2 represented an average of 48.0% of the total osseous pelvis for the two cases studied. Improved dose-volume histograms for identified bone marrow SUV volumes and decreases in V10 (average 18%) and V20 (average 14%) were achieved without clinically significant changes to PTV or OAR doses. Conclusion: Incorporation of 18F-FLT PET/CT in IMRT planning provides a methodology to reduce radiation dose to active bone marrow without compromising PTV or OAR dose objectives in cervical cancer.
Effect of the precipitation interpolation method on the performance of a snowmelt runoff model
NASA Astrophysics Data System (ADS)
Jacquin, Alexandra
2014-05-01
Uncertainties on the spatial distribution of precipitation seriously affect the reliability of the discharge estimates produced by watershed models. Although there is abundant research evaluating the goodness of fit of precipitation estimates obtained with different gauge interpolation methods, few studies have focused on the influence of the interpolation strategy on the response of watershed models. The relevance of this choice may be even greater in the case of mountain catchments, because of the influence of orography on precipitation. This study evaluates the effect of the precipitation interpolation method on the performance of conceptual type snowmelt runoff models. The HBV Light model version 4.0.0.2, operating at daily time steps, is used as a case study. The model is applied in Aconcagua at Chacabuquito catchment, located in the Andes Mountains of Central Chile. The catchment's area is 2110 km² and elevation ranges from 950 m.a.s.l. to 5930 m.a.s.l. The local meteorological network is sparse, with all precipitation gauges located below 3000 m.a.s.l. Precipitation amounts corresponding to different elevation zones are estimated through areal averaging of precipitation fields interpolated from gauge data. Interpolation methods applied include kriging with external drift (KED), optimal interpolation method (OIM), Thiessen polygons (TP), multiquadratic functions fitting (MFF) and inverse distance weighting (IDW). Both KED and OIM are able to account for the existence of a spatial trend in the expectation of precipitation. By contrast, TP, MFF and IDW, traditional methods widely used in engineering hydrology, cannot explicitly incorporate this information. Preliminary analysis confirmed that these methods notably underestimate precipitation in the study catchment, while KED and OIM are able to reduce the bias; this analysis also revealed that OIM provides more reliable estimations than KED in this region. 
Using input precipitation obtained by each method, HBV parameters are calibrated with respect to Nash-Sutcliffe efficiency. The performance of HBV in the study catchment is not satisfactory. Although volumetric errors are modest, efficiency values are lower than 70%. Discharge estimates resulting from the application of TP, MFF and IDW obtain similar model efficiencies and volumetric errors. These error statistics moderately improve if KED or OIM are used instead. Even though the quality of precipitation estimates of distinct interpolation methods is dissimilar, the results of this study show that these differences do not necessarily produce noticeable changes in HBV's model performance statistics. This situation arises because the calibration of the model parameters allows some degree of compensation of deficient areal precipitation estimates, mainly through the adjustment of model simulated evaporation and glacier melt, as revealed by the analysis of water balances. In general, even if there is a good agreement between model estimated and observed discharge, this information is not sufficient to assert that the internal hydrological processes of the catchment are properly simulated by a watershed model. Other calibration criteria should be incorporated if a more reliable representation of these processes is desired. Acknowledgements: This research was funded by FONDECYT, Research Project 1110279. The HBV Light software used in this study was kindly provided by J. Seibert, Department of Geography, University of Zürich.
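Of the interpolation methods compared, inverse distance weighting (IDW) is the simplest to sketch. Assuming hypothetical gauge coordinates and daily totals, areal precipitation for an elevation zone can be estimated by averaging IDW estimates over grid cells:

```python
import math

def idw(gauges, values, point, power=2.0):
    """Inverse distance weighting estimate at `point` from gauge data."""
    num = den = 0.0
    for (gx, gy), v in zip(gauges, values):
        d = math.hypot(point[0] - gx, point[1] - gy)
        if d < 1e-9:
            return v                      # exactly at a gauge
        w = d ** -power
        num += w * v
        den += w
    return num / den

# hypothetical gauges (km coordinates) and daily totals (mm)
gauges = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
precip = [12.0, 8.0, 20.0]
# areal average over 1-km cells covering a 10 km x 10 km elevation zone
cells = [(x + 0.5, y + 0.5) for x in range(10) for y in range(10)]
areal = sum(idw(gauges, precip, c) for c in cells) / len(cells)
```

Because the IDW estimate is a convex combination of gauge values, it can never exceed the observed range, which is one reason it underestimates orographic precipitation above the highest gauge.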
Maximum likelihood convolutional decoding (MCD) performance due to system losses
NASA Technical Reports Server (NTRS)
Webster, L.
1976-01-01
A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.
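The averaging step described, integrating a conditional bit-error-rate curve over the noisy-reference phase-error distribution, can be sketched numerically. The code below assumes a Gaussian phase-error density and a hypothetical tabulated BER curve, with linear interpolation standing in for the paper's high-rate interpolation scheme:

```python
import math

def average_ber(phase_pts, ber_pts, sigma, n=2001):
    """Average a tabulated conditional BER curve over a zero-mean Gaussian
    carrier phase-error density, using linear interpolation between table
    points and trapezoidal quadrature over +/- 4 sigma."""
    def interp(x):
        if x <= phase_pts[0]:
            return ber_pts[0]            # clamp outside the table
        if x >= phase_pts[-1]:
            return ber_pts[-1]
        for i in range(len(phase_pts) - 1):
            if phase_pts[i] <= x <= phase_pts[i + 1]:
                t = (x - phase_pts[i]) / (phase_pts[i + 1] - phase_pts[i])
                return ber_pts[i] + t * (ber_pts[i + 1] - ber_pts[i])
    lo, hi = -4.0 * sigma, 4.0 * sigma
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):                   # trapezoidal rule
        phi = lo + i * h
        pdf = math.exp(-0.5 * (phi / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
        weight = 0.5 if i in (0, n - 1) else 1.0
        total += weight * interp(phi) * pdf
    return total * h
```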
Behavior under Uncertainty and Its Implications for Policy.
1983-02-01
remarkably prescient paper of Dupuit, 1884, its theoretical development came much later, after the "marginal revolution" of the 1870’s, and its...be worth the same amount. Frequently, indeed, we extrapolate, or interpolate; if it can be shown that the average individual will pay $1,000 a year ...mostly because of the so-called income effects, a point on which Walras already criti- cized Dupuit. But in this paper , I will not be concerned with
Ng, Yee-Hong; Bettens, Ryan P A
2016-03-03
Using the method of modified Shepard's interpolation to construct potential energy surfaces of the H2O, O3, and HCOOH molecules, we compute vibrationally averaged isotropic nuclear shielding constants ⟨σ⟩ of the three molecules via quantum diffusion Monte Carlo (QDMC). The QDMC results are compared to that of second-order perturbation theory (PT), to see if second-order PT is adequate for obtaining accurate values of nuclear shielding constants of molecules with large amplitude motions. ⟨σ⟩ computed by the two approaches differ for the hydrogens and carbonyl oxygen of HCOOH, suggesting that for certain molecules such as HCOOH, where large displacements away from equilibrium occur (internal OH rotation), ⟨σ⟩ of experimental quality may only be obtainable with the use of more sophisticated and accurate methods, such as quantum diffusion Monte Carlo. The approach of modified Shepard's interpolation is also extended to construct shielding constants σ surfaces of the three molecules. By using a σ surface with the equilibrium geometry as a single data point to compute isotropic nuclear shielding constants for each descendant in the QDMC ensemble representing the ground state wave function, we reproduce the results obtained through ab initio computed σ to within statistical noise. Development of such an approach could thereby alleviate the need for any future costly ab initio σ calculations.
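Modified Shepard's interpolation blends local "nodal" Taylor expansions around each data point with weights that decay away from the point. A minimal 1-D sketch with linear nodal functions follows (the PES work uses higher-order expansions in many dimensions; this only illustrates the blending idea):

```python
import numpy as np

def modified_shepard(pts, vals, query, radius):
    """1-D modified Shepard's interpolation: blend local linear 'nodal
    functions' Q_k fitted around each data point, using Franke-Little
    weights that vanish beyond `radius` of each point."""
    pts = np.asarray(pts, float)
    vals = np.asarray(vals, float)
    slopes = np.empty_like(vals)
    for k, xk in enumerate(pts):                 # local least-squares fits
        near = np.abs(pts - xk) < radius
        A = np.vstack([np.ones(near.sum()), pts[near] - xk]).T
        coef, *_ = np.linalg.lstsq(A, vals[near], rcond=None)
        slopes[k] = coef[1]
    out = []
    for x in np.atleast_1d(np.asarray(query, float)):
        d = np.abs(x - pts)
        if d.min() < 1e-12:                      # exactly at a data point
            out.append(vals[d.argmin()])
            continue
        w = (np.clip(radius - d, 0.0, None) / (radius * d)) ** 2
        if w.sum() == 0.0:                       # outside every support
            out.append(vals[d.argmin()])
            continue
        q = vals + slopes * (x - pts)            # nodal functions at x
        out.append(float(np.dot(w, q) / w.sum()))
    return np.array(out)
```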
Selection of Optimal Auxiliary Soil Nutrient Variables for Cokriging Interpolation
Song, Genxin; Zhang, Jing; Wang, Ke
2014-01-01
In order to explore the selection of the best auxiliary variables (BAVs) when using the Cokriging method for soil attribute interpolation, this paper investigated the selection of BAVs from terrain parameters, soil trace elements, and soil nutrient attributes when applying Cokriging interpolation to soil nutrients (organic matter, total N, available P, and available K). In total, 670 soil samples were collected in Fuyang, and the nutrient and trace element attributes of the soil samples were determined. Based on the spatial autocorrelation of soil attributes, the Digital Elevation Model (DEM) data for Fuyang was combined to explore the correlation among terrain parameters, trace elements, and soil nutrient attributes. Variables with a high correlation to soil nutrient attributes were selected as BAVs for Cokriging interpolation of soil nutrients, and variables with poor correlation were selected as poor auxiliary variables (PAVs). The results of Cokriging interpolations using BAVs and PAVs were then compared. The results indicated that Cokriging interpolation with BAVs yielded more accurate results than Cokriging interpolation with PAVs (the mean absolute errors of the BAV interpolation results for organic matter, total N, available P, and available K were 0.020, 0.002, 7.616, and 12.4702, respectively, and the mean absolute errors of the PAV interpolation results were 0.052, 0.037, 15.619, and 0.037, respectively). The results indicated that Cokriging interpolation with BAVs can significantly improve the accuracy of Cokriging interpolation for soil nutrient attributes. This study provides meaningful guidance and reference for the selection of auxiliary parameters for the application of Cokriging interpolation to soil nutrient attributes. PMID:24927129
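The BAV/PAV selection step amounts to ranking candidate covariates by their correlation with the target soil attribute. A minimal sketch with hypothetical sample values (the variable names and numbers are illustrative, not the Fuyang data):

```python
import math

def pearson(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def select_auxiliaries(target, candidates, k=2):
    """Rank candidate covariates by |Pearson r| with the target soil
    attribute; the top-k become the BAVs, the rest PAVs."""
    scored = sorted(((abs(pearson(target, v)), name)
                     for name, v in candidates.items()), reverse=True)
    return [name for _, name in scored[:k]]

# hypothetical sample data: organic matter vs. candidate auxiliaries
om = [1.2, 2.3, 3.1, 4.0, 5.2, 6.1]
candidates = {
    "total_N": [0.11, 0.20, 0.28, 0.37, 0.50, 0.58],  # tracks OM closely
    "slope":   [5.0, 3.0, 6.0, 2.0, 7.0, 1.0],        # weakly related
    "Zn":      [3.0, 3.1, 2.9, 3.2, 3.0, 3.1],        # nearly constant
}
bavs = select_auxiliaries(om, candidates, k=1)
```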
Monotonicity preserving splines using rational cubic Timmer interpolation
NASA Astrophysics Data System (ADS)
Zakaria, Wan Zafira Ezza Wan; Alimin, Nur Safiyah; Ali, Jamaludin Md
2017-08-01
In scientific application and Computer Aided Design (CAD), users usually need to generate a spline passing through a given set of data, which preserves certain shape properties of the data such as positivity, monotonicity or convexity. The required curve has to be a smooth shape-preserving interpolant. In this paper a rational cubic spline in Timmer representation is developed to generate interpolant that preserves monotonicity with visually pleasing curve. To control the shape of the interpolant three parameters are introduced. The shape parameters in the description of the rational cubic interpolant are subjected to monotonicity constrained. The necessary and sufficient conditions of the rational cubic interpolant are derived and visually the proposed rational cubic Timmer interpolant gives very pleasing results.
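The monotonicity constraints described serve the same goal as the classic Fritsch-Carlson construction for cubic Hermite splines. As a point of comparison (a standard monotone scheme, not the paper's rational cubic Timmer form), the slope limiting can be sketched as:

```python
def pchip_slopes(x, y):
    """Fritsch-Carlson style monotone derivative estimates: zero at local
    extrema, weighted harmonic mean of adjacent secants elsewhere."""
    n = len(x)
    d = [(y[i + 1] - y[i]) / (x[i + 1] - x[i]) for i in range(n - 1)]
    m = [0.0] * n
    m[0], m[-1] = d[0], d[-1]                 # simple one-sided endpoints
    for i in range(1, n - 1):
        if d[i - 1] * d[i] <= 0.0:
            m[i] = 0.0
        else:
            w1 = 2.0 * (x[i + 1] - x[i]) + (x[i] - x[i - 1])
            w2 = (x[i + 1] - x[i]) + 2.0 * (x[i] - x[i - 1])
            m[i] = (w1 + w2) / (w1 / d[i - 1] + w2 / d[i])
    return m

def eval_hermite(x, y, m, t):
    """Evaluate the cubic Hermite interpolant with node slopes m at t."""
    i = len(x) - 2 if t >= x[-1] else max(j for j in range(len(x) - 1) if x[j] <= t)
    h = x[i + 1] - x[i]
    s = (t - x[i]) / h
    h00 = (1.0 + 2.0 * s) * (1.0 - s) ** 2    # Hermite basis functions
    h10 = s * (1.0 - s) ** 2
    h01 = s * s * (3.0 - 2.0 * s)
    h11 = s * s * (s - 1.0)
    return h00 * y[i] + h10 * h * m[i] + h01 * y[i + 1] + h11 * h * m[i + 1]
```

The rational form with shape parameters gives the designer extra freedom to tighten or relax the curve between the data, which plain cubic Hermite slope limiting does not offer.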
Quadratic canonical transformation theory and higher order density matrices.
Neuscamman, Eric; Yanai, Takeshi; Chan, Garnet Kin-Lic
2009-03-28
Canonical transformation (CT) theory provides a rigorously size-extensive description of dynamic correlation in multireference systems, with an accuracy superior to and cost scaling lower than complete active space second order perturbation theory. Here we expand our previous theory by investigating (i) a commutator approximation that is applied at quadratic, as opposed to linear, order in the effective Hamiltonian, and (ii) incorporation of the three-body reduced density matrix in the operator and density matrix decompositions. The quadratic commutator approximation improves CT's accuracy when used with a single-determinant reference, repairing the previous formal disadvantage of the single-reference linear CT theory relative to singles and doubles coupled cluster theory. Calculations on the BH and HF binding curves confirm this improvement. In multireference systems, the three-body reduced density matrix increases the overall accuracy of the CT theory. Tests on the H(2)O and N(2) binding curves yield results highly competitive with expensive state-of-the-art multireference methods, such as the multireference Davidson-corrected configuration interaction (MRCI+Q), averaged coupled pair functional, and averaged quadratic coupled cluster theories.
Jordan, Yusef J; Lightfoote, Johnson B; Jordan, John E
2009-04-01
To evaluate the economic impact and diagnostic utility of computed tomography (CT) in the management of emergency department (ED) patients presenting with headache and nonfocal physical examinations. Computerized medical records from 2 major community hospitals were retrospectively reviewed for patients presenting with headache over a 2.5-year period (2003-2006). A model was developed to assess test outcomes, CT result costs, and average institutional costs of the ED visit. The binomial probabilistic distribution of expected maximum cases was also calculated. Of the 5510 patient records queried, 882 (16%) met the above criteria. Two hundred eighty-one patients demonstrated positive CT findings (31.8%), but only 9 (1.02%) demonstrated clinically significant results (requiring a change in management). Most positive studies were incidental, including old infarcts, chronic ischemic changes, encephalomalacia, and sinusitis. The average cost of the head CT exam and ED visit was $764 (2006 dollars). This was approximately 3 times the cost of a routine outpatient visit (plus CT) for headache ($253). The incremental cost per clinically significant case detected in the ED was $50,078. The calculated expected maximum number of clinically significant positive cases was almost 50% lower than what was actually detected. Our results indicate that emergent CT imaging of nonfocal headache yields a low percentage of positive clinically significant results, and has limited cost efficacy. Since the use of CT for imaging patients with headache in the ED is widespread, the economic implications are considerable. Health policy reforms are indicated to better direct utilization in these patients.
NASA Astrophysics Data System (ADS)
Lee, Duhgoon; Nam, Woo Hyun; Lee, Jae Young; Ra, Jong Beom
2011-01-01
In order to utilize both ultrasound (US) and computed tomography (CT) images of the liver concurrently for medical applications such as diagnosis and image-guided intervention, non-rigid registration between these two types of images is an essential step, as local deformation between US and CT images exists due to the different respiratory phases involved and due to the probe pressure that occurs in US imaging. This paper introduces a voxel-based non-rigid registration algorithm between the 3D B-mode US and CT images of the liver. In the proposed algorithm, to improve the registration accuracy, we utilize the surface information of the liver and gallbladder in addition to the information of the vessels inside the liver. For an effective correlation between US and CT images, we treat those anatomical regions separately according to their characteristics in US and CT images. Based on a novel objective function using a 3D joint histogram of the intensity and gradient information, vessel-based non-rigid registration is followed by surface-based non-rigid registration in sequence, which improves the registration accuracy. The proposed algorithm is tested for ten clinical datasets and quantitative evaluations are conducted. Experimental results show that the registration error between anatomical features of US and CT images is less than 2 mm on average, even with local deformation due to different respiratory phases and probe pressure. In addition, the lesion registration error is less than 3 mm on average with a maximum of 4.5 mm that is considered acceptable for clinical applications.
LIP: The Livermore Interpolation Package, Version 1.4
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fritsch, F N
2011-07-06
This report describes LIP, the Livermore Interpolation Package. Because LIP is a stand-alone version of the interpolation package in the Livermore Equation of State (LEOS) access library, the initials LIP alternatively stand for the 'LEOS Interpolation Package'. LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since the package need not be restricted to equation of state data, which uses variables ρ (density) and T (temperature). LIP is primarily concerned with the interpolation of two-dimensional data on a rectangular mesh. The interpolation methods provided include piecewise bilinear, reduced (12-term) bicubic, and bicubic Hermite (biherm). There is a monotonicity-preserving variant of the latter, known as bimond. For historical reasons, there is also a biquadratic interpolator, but this option is not recommended for general use. A birational method was added at version 1.3. In addition to direct interpolation of two-dimensional data, LIP includes a facility for inverse interpolation (at present, only in the second independent variable). For completeness, however, the package also supports a compatible one-dimensional interpolation capability. Parametric interpolation of points on a two-dimensional curve can be accomplished by treating the components as a pair of one-dimensional functions with a common independent variable. LIP has an object-oriented design, but it is implemented in ANSI Standard C for efficiency and compatibility with existing applications. First, a 'LIP interpolation object' is created and initialized with the data to be interpolated. Then the interpolation coefficients for the selected method are computed and added to the object. Since version 1.1, LIP has options to instead estimate derivative values or merely store data in the object. (These are referred to as 'partial setup' options.) 
It is then possible to pass the object to functions that interpolate or invert the interpolant at an arbitrary number of points. The first section of this report describes the overall design of the package, including both forward and inverse interpolation. Sections 2-6 describe each interpolation method in detail. The software that implements this design is summarized function-by-function in Section 7. For a complete example of package usage, refer to Section 8. The report concludes with a few brief notes on possible software enhancements. For guidance on adding other functional forms to LIP, refer to Appendix B. The reader who is primarily interested in using LIP to solve a problem should skim Section 1, then skip to Sections 7.1-4. Finally, jump ahead to Section 8 and study the example. The remaining sections can be referred to in case more details are desired. Changes since version 1.1 of this document include the new Section 3.2.1 that discusses derivative estimation and new Section 6 that discusses the birational interpolation method. Section numbers following the latter have been modified accordingly.
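The piecewise bilinear option, the simplest of the methods LIP provides, can be sketched for a rectangular mesh as follows (a generic illustration, not LIP's C implementation):

```python
import bisect

def bilinear(xg, yg, f, x, y):
    """Piecewise-bilinear interpolation on a rectangular mesh;
    f[i][j] holds the value at (xg[i], yg[j])."""
    # locate the cell, clamping to the mesh for boundary points
    i = min(max(bisect.bisect_right(xg, x) - 1, 0), len(xg) - 2)
    j = min(max(bisect.bisect_right(yg, y) - 1, 0), len(yg) - 2)
    tx = (x - xg[i]) / (xg[i + 1] - xg[i])
    ty = (y - yg[j]) / (yg[j + 1] - yg[j])
    return ((1 - tx) * (1 - ty) * f[i][j]
            + tx * (1 - ty) * f[i + 1][j]
            + (1 - tx) * ty * f[i][j + 1]
            + tx * ty * f[i + 1][j + 1])
```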
LIP: The Livermore Interpolation Package, Version 1.3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fritsch, F N
2011-01-04
This report describes LIP, the Livermore Interpolation Package. Because LIP is a stand-alone version of the interpolation package in the Livermore Equation of State (LEOS) access library, the initials LIP alternatively stand for the 'LEOS Interpolation Package'. LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since the package need not be restricted to equation of state data, which uses variables ρ (density) and T (temperature). LIP is primarily concerned with the interpolation of two-dimensional data on a rectangular mesh. The interpolation methods provided include piecewise bilinear, reduced (12-term) bicubic, and bicubic Hermite (biherm). There is a monotonicity-preserving variant of the latter, known as bimond. For historical reasons, there is also a biquadratic interpolator, but this option is not recommended for general use. A birational method was added at version 1.3. In addition to direct interpolation of two-dimensional data, LIP includes a facility for inverse interpolation (at present, only in the second independent variable). For completeness, however, the package also supports a compatible one-dimensional interpolation capability. Parametric interpolation of points on a two-dimensional curve can be accomplished by treating the components as a pair of one-dimensional functions with a common independent variable. LIP has an object-oriented design, but it is implemented in ANSI Standard C for efficiency and compatibility with existing applications. First, a 'LIP interpolation object' is created and initialized with the data to be interpolated. Then the interpolation coefficients for the selected method are computed and added to the object. Since version 1.1, LIP has options to instead estimate derivative values or merely store data in the object. (These are referred to as 'partial setup' options.) 
It is then possible to pass the object to functions that interpolate or invert the interpolant at an arbitrary number of points. The first section of this report describes the overall design of the package, including both forward and inverse interpolation. Sections 2-6 describe each interpolation method in detail. The software that implements this design is summarized function-by-function in Section 7. For a complete example of package usage, refer to Section 8. The report concludes with a few brief notes on possible software enhancements. For guidance on adding other functional forms to LIP, refer to Appendix B. The reader who is primarily interested in using LIP to solve a problem should skim Section 1, then skip to Sections 7.1-4. Finally, jump ahead to Section 8 and study the example. The remaining sections can be referred to in case more details are desired. Changes since version 1.1 of this document include the new Section 3.2.1 that discusses derivative estimation and new Section 6 that discusses the birational interpolation method. Section numbers following the latter have been modified accordingly.
Evaluation of simethicone-coated cellulose as a negative oral contrast agent for abdominal CT.
Sahani, Dushyant V; Jhaveri, Kartik S; D'souza, Roy V; Varghese, Jose C; Halpern, Elkan; Harisinghani, Mukesh G; Hahn, Peter F; Saini, Sanjay
2003-05-01
Because of the increased clinical use of computed tomography (CT) for imaging the abdominal vasculature and urinary tract, there is a need for negative contrast agents. The authors undertook this study to assess the suitability of simethicone-coated cellulose (SCC), which is approved for use as an oral contrast agent in sonography, for use as a negative oral contrast agent in abdominal CT. This prospective study involved 40 adult patients scheduled to undergo abdominal CT for the evaluation of hematuria. Prior to scanning, 20 subjects received 800 mL of SCC and 20 received 800 mL of water as an oral contrast agent. Imaging was performed with a multi-detector row helical scanner in two phases, according to the abdominal CT protocol used for hematuria evaluation at the authors' institution. The first, "early" phase began an average of 15 minutes after the ingestion of contrast material; the second, "late" phase began an average of 45 minutes after the ingestion of contrast material. Blinded analysis was performed by three abdominal radiologists separately, using a three-point scale (0 = poor, 1 = acceptable, 2 = excellent) to assess the effectiveness of SCC for marking the proximal, middle, and distal small bowel. Average scores for enhancement with SCC and with water were obtained and compared. Statistical analysis was performed with a Wilcoxon signed-rank test. SCC was assigned higher mean scores than water for enhancement in each segment of the bowel, both on early-phase images (0.8-1.35 for SCC vs 0.6-1.1 for water) and on late-phase images (1.1-1.4 vs 0.81-0.96). Bowel marking with SCC, particularly in the jejunum and ileum, also was rated better than that with water in a high percentage of patients. The differences between the scores for water and for SCC, however, were not statistically significant (P > .05). SCC is effective as a negative oral contrast agent for small bowel marking at CT.
Feasibility of Pathology-Correlated Lung Imaging for Accurate Target Definition of Lung Tumors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stroom, Joep; Blaauwgeers, Hans; Baardwijk, Angela van
2007-09-01
Purpose: To accurately define the gross tumor volume (GTV) and clinical target volume (GTV plus microscopic disease spread) for radiotherapy, the pretreatment imaging findings should be correlated with the histopathologic findings. In this pilot study, we investigated the feasibility of pathology-correlated imaging for lung tumors, taking into account lung deformations after surgery. Methods and Materials: High-resolution multislice computed tomography (CT) and positron emission tomography (PET) scans were obtained for 5 patients who had non-small-cell lung cancer (NSCLC) before lobectomy. At the pathologic examination, the involved lung lobes were inflated with formalin, sectioned in parallel slices, and photographed, and microscopic sections were obtained. The GTVs were delineated for CT and autocontoured at the 42% PET level, and both were compared with the histopathologic volumes. The CT data were subsequently reformatted in the direction of the macroscopic sections, and the corresponding fiducial points in both images were compared. Hence, the lung deformations were determined to correct the distances of microscopic spread. Results: In 4 of 5 patients, the GTV(CT) was, on average, 4 cm³ (≈53%) too large. In contrast, for 1 patient (with lymphangitis carcinomatosa), the GTV(CT) was 16 cm³ (≈40%) too small. The GTV(PET) was too small for the same patient. Regarding deformations, the volume of the well-inflated lung lobes on pathologic examination was still, on average, only 50% of the lobe volume on CT. Consequently, the observed average maximal distance of microscopic spread (5 mm) might, in vivo, be as large as 9 mm. Conclusions: Our results have shown that pathology-correlated lung imaging is feasible and can be used to improve target definition. Ignoring deformations of the lung might result in underestimation of the microscopic spread.
NASA Astrophysics Data System (ADS)
Bagheri, H.; Sadjadi, S. Y.; Sadeghian, S.
2013-09-01
One of the most significant tools in many engineering projects is three-dimensional modelling of the Earth, which has many applications in Geospatial Information Systems (GIS), e.g. creating Digital Terrain Models (DTM). DTMs have numerous applications in science, engineering, design, and project administration. One of the most significant steps in the DTM technique is the interpolation of elevations to create a continuous surface. There are several methods of interpolation, whose results vary with the environmental conditions and input data. The interpolation methods used in this study, consisting of polynomials and the Inverse Distance Weighting (IDW) method, were optimised with Genetic Algorithms (GA). In this paper, Artificial Intelligence (AI) techniques such as GA and Neural Networks (NN) are applied to the samples to optimise the interpolation methods and the production of a Digital Elevation Model (DEM). The aim is to evaluate the accuracy of the interpolation methods. Universal interpolation over entire neighbouring regions can be suggested for larger regions, which can be divided into smaller regions. The results obtained from applying GA and ANN individually are compared with the typical interpolation methods for generating elevations. The results showed that AI methods have high potential for the interpolation of elevations, and that the IDW method optimised with GA could estimate elevations with high precision.
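The combination described, IDW whose distance exponent is tuned by a genetic algorithm against sample elevations, can be sketched with a toy real-coded GA and leave-one-out error as the fitness. The elevation samples below are hypothetical, and the GA operators are a generic choice rather than the paper's configuration:

```python
import math
import random

def idw_loo_rmse(pts, z, power):
    """Leave-one-out RMSE of IDW elevation estimates for a given exponent."""
    sq_err = 0.0
    for k in range(len(pts)):
        num = den = 0.0
        for j in range(len(pts)):
            if j == k:
                continue
            d = math.hypot(pts[k][0] - pts[j][0], pts[k][1] - pts[j][1])
            w = d ** -power
            num += w * z[j]
            den += w
        sq_err += (num / den - z[k]) ** 2
    return math.sqrt(sq_err / len(pts))

def ga_optimise_power(pts, z, gens=30, size=20, seed=1):
    """Toy real-coded GA (tournament selection, blend crossover, Gaussian
    mutation, elitist survival) searching the IDW exponent in [0.5, 4]."""
    rng = random.Random(seed)
    fitness = lambda p: -idw_loo_rmse(pts, z, p)
    pop = [rng.uniform(0.5, 4.0) for _ in range(size)]
    for _ in range(gens):
        children = []
        for _ in range(size):
            a = max(rng.sample(pop, 3), key=fitness)     # tournament winners
            b = max(rng.sample(pop, 3), key=fitness)
            child = 0.5 * (a + b) + rng.gauss(0.0, 0.1)  # blend + mutation
            children.append(min(4.0, max(0.5, child)))
        pop = sorted(pop + children, key=fitness)[-size:]  # keep the best
    return max(pop, key=fitness)

# hypothetical elevation samples on a 3x3 grid (metres)
pts = [(x, y) for x in range(3) for y in range(3)]
z = [100.0, 101.0, 103.0, 102.0, 103.0, 105.0, 105.0, 106.0, 108.0]
best_power = ga_optimise_power(pts, z)
```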
Novel view synthesis by interpolation over sparse examples
NASA Astrophysics Data System (ADS)
Liang, Bodong; Chung, Ronald C.
2006-01-01
Novel view synthesis (NVS) is an important problem in image rendering. It involves synthesizing an image of a scene at any specified (novel) viewpoint, given some images of the scene at a few sample viewpoints. The general understanding is that the solution should bypass explicit 3-D reconstruction of the scene. As it is, the problem has a natural tie to interpolation, even though mainstream efforts on the problem have adopted other formulations. Interpolation is about finding the output of a function f(x) for any specified input x, given a few input-output pairs {(xi,fi):i=1,2,3,...,n} of the function. If the input x is the viewpoint, and f(x) is the image, the interpolation problem becomes exactly NVS. We treat the NVS problem using the interpolation formulation. In particular, we adopt the example-based interpolation (EBI) mechanism, an established mechanism for interpolating or learning functions from examples. EBI has all the desirable properties of a good interpolation: all given input-output examples are satisfied exactly, and the interpolation is smooth with minimum oscillations between the examples. We point out that EBI, however, has difficulty in interpolating certain classes of functions, including the image function in the NVS problem. We propose an extension of the mechanism for overcoming the limitation. We also present how the extended interpolation mechanism could be used to synthesize images at novel viewpoints. Real image results show that the mechanism has promising performance, even with very few example images.
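An interpolation mechanism with EBI's stated properties, exact at every example and smooth in between, can be illustrated with Gaussian radial basis functions (a generic stand-in for the paper's specific mechanism):

```python
import numpy as np

def rbf_interpolant(X, f, width):
    """Build a Gaussian radial-basis-function interpolant: it reproduces
    every example (x_i, f_i) exactly and varies smoothly in between."""
    X = np.asarray(X, float)
    f = np.asarray(f, float)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. dists
    K = np.exp(-d2 / (2.0 * width ** 2))                  # kernel matrix
    w = np.linalg.solve(K, f)                             # exact-fit weights
    def predict(x):
        k = np.exp(-((X - np.asarray(x, float)) ** 2).sum(-1)
                   / (2.0 * width ** 2))
        return float(k @ w)
    return predict
```

In the NVS setting the inputs would be viewpoint parameters and the outputs image quantities, with one such interpolant per output dimension.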
Sachpekidis, Christos; Hillengass, Jens; Goldschmidt, Hartmut; Anwar, Hoda; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia
2017-01-01
A renewed interest has recently developed in the highly sensitive bone-seeking radiopharmaceutical 18F-NaF. The aim of the present study is to evaluate the potential utility of quantitative analysis of 18F-NaF dynamic PET/CT data in differentiating malignant from benign degenerative lesions in multiple myeloma (MM). 80 MM patients underwent whole-body PET/CT and dynamic PET/CT scanning of the pelvis with 18F-NaF. PET/CT data evaluation was based on visual (qualitative) assessment, semi-quantitative (SUV) calculations, and absolute quantitative estimations after application of a 2-tissue compartment model and a non-compartmental approach leading to the extraction of fractal dimension (FD). In total, 263 MM lesions were demonstrated on 18F-NaF PET/CT. Semi-quantitative and quantitative evaluations were performed for 25 MM lesions as well as for 25 benign degenerative and traumatic lesions. Mean SUVaverage for MM lesions was 11.9 and mean SUVmax was 23.2. Respectively, SUVaverage and SUVmax for degenerative lesions were 13.5 and 20.2. Kinetic analysis of 18F-NaF revealed the following mean values for MM lesions: K1 = 0.248 (1/min), k3 = 0.359 (1/min), influx (Ki) = 0.107 (1/min), FD = 1.382, while the respective values for degenerative lesions were: K1 = 0.169 (1/min), k3 = 0.422 (1/min), influx (Ki) = 0.095 (1/min), FD = 1.411. No statistically significant differences between MM and benign degenerative disease regarding SUVaverage, SUVmax, K1, k3 and influx (Ki) were demonstrated. FD was significantly higher in degenerative than in malignant lesions. The present findings show that quantitative analysis of 18F-NaF PET data cannot differentiate malignant from benign degenerative lesions in MM patients, supporting previously published results, which reflect the limited role of 18F-NaF PET/CT in the diagnostic workup of MM.
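For the irreversible 2-tissue compartment model used here, the net influx constant combines the rate constants as Ki = K1·k3/(k2 + k3). The abstract does not report k2, so the value below is a hypothetical one chosen only to be consistent with the reported Ki:

```python
def influx_ki(K1, k2, k3):
    """Net influx rate constant of an irreversible 2-tissue compartment
    model: Ki = K1 * k3 / (k2 + k3) (all rates in 1/min)."""
    return K1 * k3 / (k2 + k3)

# K1 and k3 are the reported means for MM lesions; k2 is NOT reported in
# the abstract -- 0.473 1/min is a hypothetical value chosen to reproduce
# the reported Ki of about 0.107 1/min.
print(round(influx_ki(0.248, 0.473, 0.359), 3))
```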
Estimation of the weighted CTDI∞ for multislice CT examinations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Xinhua; Zhang Da; Liu, Bob
2012-02-15
Purpose: The aim of this study was to examine the variations of CT dose index (CTDI) efficiencies, ε(CTDI100) = CTDI100/CTDI∞, with bowtie filters and CT scanner types. Methods: This was an extension of our previous study [Li, Zhang, and Liu, Phys. Med. Biol. 56, 5789-5803 (2011)]. A validated Monte Carlo program was used to calculate ε(CTDI100) on a Siemens Somatom Definition scanner. The ε(CTDI100) dependencies on tube voltages and beam widths were tested in previous studies. The influences of different bowtie filters and CT scanner types were examined in this work. The authors tested the variations of ε(CTDI100) with bowtie filters on the Siemens Definition scanner. The authors also analyzed the published CTDI measurements of four independent studies on five scanners of four models from three manufacturers. Results: On the Siemens Definition scanner, the difference in ε(CTDIw) between using the head and body bowtie filters was 2.5% (maximum) in the CT scans of the 32-cm phantom, and 1.7% (maximum) in the CT scans of the 16-cm phantom. Compared with CTDIw, the weighted CTDI∞ increased by 30.5% (on average) in the 32-cm phantom, and by 20.0% (on average) in the 16-cm phantom. These results were approximately the same for 80-140 kV and 1-40 mm beam widths (4.2% maximum deviation). The differences in ε(CTDI100) between the simulations and the direct measurements of four previous studies were 1.3%-5.0% at the center/periphery of the 16-cm/32-cm phantom (on average). Conclusions: Compared with CTDIvol, the equilibrium dose for large scan lengths is 30.5% higher in the 32-cm phantom, and is 20.0% higher in the 16-cm phantom. The relative increases are practically independent of tube voltages (80-140 kV), beam widths (up to 4 cm), and the CT scanners covered in this study.
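The weighted CTDI underlying these comparisons is the standard one-third center, two-thirds periphery combination, and the efficiency ε relates the 100-mm measurement to the equilibrium value. A minimal sketch, with hypothetical phantom measurements and an efficiency implied by the reported 30.5% increase:

```python
def ctdi_w(center, periphery):
    """Weighted CTDI: one-third center plus two-thirds periphery (mGy)."""
    return center / 3.0 + 2.0 * periphery / 3.0

def ctdi_infinity(ctdi100_w, efficiency):
    """Recover the equilibrium (infinite-length) weighted CTDI from the
    measured CTDI100 and its efficiency eps = CTDI100 / CTDI_inf."""
    return ctdi100_w / efficiency

# Hypothetical 100-mm chamber measurements (mGy) in a 32-cm body phantom
w = ctdi_w(10.0, 20.0)            # ~16.7 mGy
# A 30.5% increase of CTDI_inf over CTDIw corresponds to eps ~ 1/1.305
print(round(ctdi_infinity(w, 1 / 1.305), 2))
```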
Cho, Hyo-Min; Ding, Huanjun; Barber, William C; Iwanczyk, Jan S; Molloi, Sabee
2015-07-01
To investigate the feasibility of detecting breast microcalcification (μCa) with a dedicated breast computed tomography (CT) system based on energy-resolved photon-counting silicon (Si) strip detectors. The proposed photon-counting breast CT system and a bench-top prototype photon-counting breast CT system were simulated using a simulation package written in matlab to determine the smallest detectable μCa. A 14 cm diameter cylindrical phantom made of breast tissue with 20% glandularity was used to simulate an average-sized breast. Five different size groups of calcium carbonate grains, from 100 to 180 μm in diameter, were simulated inside of the cylindrical phantom. The images were acquired with a mean glandular dose (MGD) in the range of 0.7-8 mGy. A total of 400 images were used to perform a reader study. Another simulation study was performed using a 1.6 cm diameter cylindrical phantom to validate the experimental results from a bench-top prototype breast CT system. In the experimental study, a bench-top prototype CT system was constructed using a tungsten anode x-ray source and a single-line 256-pixel Si strip photon-counting detector with a pixel pitch of 100 μm. Calcium carbonate grains, with diameters in the range of 105-215 μm, were embedded in a cylindrical plastic resin phantom to simulate μCas. The physical phantoms were imaged at 65 kVp with an entrance exposure in the range of 0.6-8 mGy. A total of 500 images were used to perform another reader study. The images were displayed in random order to three blinded observers, who were asked to give a 4-point confidence rating on each image regarding the presence of μCa. The μCa detectability for each image was evaluated by using the average area under the receiver operating characteristic curve (AUC) across the readers.
The simulation results using a 14 cm diameter breast phantom showed that the proposed photon-counting breast CT system can achieve high detection accuracy with an average AUC greater than 0.89 ± 0.07 for μCas larger than 120 μm in diameter at a MGD of 3 mGy. The experimental results using a 1.6 cm diameter breast phantom showed that the prototype system can achieve an average AUC greater than 0.98 ± 0.01 for μCas larger than 140 μm in diameter using an entrance exposure of 1.2 mGy. The proposed photon-counting breast CT system based on a Si strip detector can potentially offer superior image quality to detect μCa with a lower dose level than a standard two-view mammography.
Lung lobe modeling and segmentation with individualized surface meshes
NASA Astrophysics Data System (ADS)
Blaffert, Thomas; Barschdorf, Hans; von Berg, Jens; Dries, Sebastian; Franz, Astrid; Klinder, Tobias; Lorenz, Cristian; Renisch, Steffen; Wiemker, Rafael
2008-03-01
An automated segmentation of lung lobes in thoracic CT images is of interest for various diagnostic purposes like the quantification of emphysema or the localization of tumors within the lung. Although the separating lung fissures are visible in modern multi-slice CT-scanners, their contrast in the CT-image often does not separate the lobes completely. This makes it impossible to build a reliable segmentation algorithm without additional information. Our approach uses general anatomical knowledge represented in a geometrical mesh model to construct a robust lobe segmentation, which even gives reasonable estimates of lobe volumes if fissures are not visible at all. The paper describes the generation of the lung model mesh including lobes by an average volume model, its adaptation to individual patient data using a special fissure feature image, and a performance evaluation over a test data set showing an average segmentation accuracy of 1 to 3 mm.
SAR image formation with azimuth interpolation after azimuth transform
Doerry, Armin W.; Martin, Grant D.; Holzrichter, Michael W. [Albuquerque, NM]
2008-07-08
Two-dimensional SAR data can be processed into a rectangular grid format by subjecting the SAR data to a Fourier transform operation, and thereafter to a corresponding interpolation operation. Because the interpolation operation follows the Fourier transform operation, the interpolation operation can be simplified, and the effect of interpolation errors can be diminished. This provides for the possibility of both reducing the re-grid processing time, and improving the image quality.
Geiger, Daniel; Bae, Won C.; Statum, Sheronda; Du, Jiang; Chung, Christine B.
2014-01-01
Objective Temporomandibular dysfunction involves osteoarthritis of the TMJ, including degeneration and morphologic changes of the mandibular condyle. The purpose of this study was to determine the accuracy of novel 3D-UTE MRI versus micro-CT (μCT) for quantitative evaluation of mandibular condyle morphology. Materials & Methods Nine TMJ condyle specimens were harvested from cadavers (2M, 3F; age 85 ± 10 yrs., mean ± SD). 3D-UTE MRI (TR = 50 ms, TE = 0.05 ms, 104 μm isotropic voxel) was performed using a 3-T MR scanner, and μCT (18 μm isotropic voxel) was performed. MR datasets were spatially registered with the μCT dataset. Two observers segmented bony contours of the condyles. Fibrocartilage was segmented on the MR dataset. Using a custom program, bone and fibrocartilage surface coordinates, Gaussian curvature, volume of segmented regions and fibrocartilage thickness were determined for quantitative evaluation of joint morphology. Agreement between techniques (MRI vs. μCT) and observers (MRI vs. MRI) for Gaussian curvature, mean curvature and segmented volume of the bone was determined using intraclass correlation coefficient (ICC) analyses. Results Between MRI and μCT, the average deviation of surface coordinates was 0.19 ± 0.15 mm, slightly higher than the spatial resolution of MRI. Average deviation of the Gaussian curvature and volume of segmented regions, from MRI to μCT, was 5.7 ± 6.5% and 6.6 ± 6.2%, respectively. ICC coefficients (MRI vs. μCT) for Gaussian curvature, mean curvature and segmented volumes were respectively 0.892, 0.893 and 0.972. Between observers (MRI vs. MRI), the ICC coefficients were 0.998, 0.999 and 0.997, respectively. Fibrocartilage thickness was 0.55 ± 0.11 mm, as previously described in the literature for grossly normal TMJ samples. Conclusion 3D-UTE MR quantitative evaluation of TMJ condyle morphology ex vivo, including surface, curvature and segmented volume, shows high correlation against μCT and between observers.
In addition, UTE MRI allows quantitative evaluation of the fibrocartilaginous condylar component. PMID:24092237
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santoro, J. P.; McNamara, J.; Yorke, E.
2012-10-15
Purpose: There is increasingly widespread usage of cone-beam CT (CBCT) for guiding radiation treatment in advanced-stage lung tumors, but difficulties associated with daily CBCT in conventionally fractionated treatments include imaging dose to the patient, increased workload and longer treatment times. Respiration-correlated cone-beam CT (RC-CBCT) can improve localization accuracy in mobile lung tumors, but further increases the time and workload for conventionally fractionated treatments. This study investigates whether RC-CBCT-guided correction of systematic tumor deviations in standard fractionated lung tumor radiation treatments is more effective than 2D image-based correction of skeletal deviations alone. A second study goal compares respiration-correlated vs respiration-averaged images for determining tumor deviations. Methods: Eleven stage II-IV nonsmall cell lung cancer patients are enrolled in an IRB-approved prospective off-line protocol using RC-CBCT guidance to correct for systematic errors in GTV position. Patients receive a respiration-correlated planning CT (RCCT) at simulation, daily kilovoltage RC-CBCT scans during the first week of treatment and weekly scans thereafter. Four types of correction methods are compared: (1) systematic error in gross tumor volume (GTV) position, (2) systematic error in skeletal anatomy, (3) daily skeletal corrections, and (4) weekly skeletal corrections. The comparison is in terms of a weighted average of the residual GTV deviations measured from the RC-CBCT scans, representing the estimated residual deviation over the treatment course. In the second study goal, GTV deviations computed from matching RCCT and RC-CBCT are compared to deviations computed from matching respiration-averaged images consisting of a CBCT reconstructed using all projections and an average-intensity-projection CT computed from the RCCT.
Results: Of the eleven patients in the GTV-based systematic correction protocol, two required no correction, seven required a single correction, one required two corrections, and one required three corrections. Mean residual GTV deviation (3D distance) following GTV-based systematic correction (mean ± 1 standard deviation 4.8 ± 1.5 mm) is significantly lower than for systematic skeletal-based (6.5 ± 2.9 mm, p = 0.015) and weekly skeletal-based correction (7.2 ± 3.0 mm, p = 0.001), but is not significantly lower than daily skeletal-based correction (5.4 ± 2.6 mm, p = 0.34). In two cases, first-day CBCT images reveal tumor changes (one showing tumor growth, the other a large tumor displacement) that are not readily observed in radiographs. Differences in computed GTV deviations between respiration-correlated and respiration-averaged images are 0.2 ± 1.8 mm in the superior-inferior direction and of similar magnitude in the other directions. Conclusions: An off-line protocol to correct GTV-based systematic error in locally advanced lung tumor cases can be effective at reducing tumor deviations, although the findings need confirmation with larger patient statistics. In some cases, a single cone-beam CT can be useful for assessing tumor changes early in treatment, if more than a few days elapse between simulation and the start of treatment. Tumor deviations measured with respiration-averaged CT and CBCT images are consistent with those measured with respiration-correlated images; the respiration-averaged method is more easily implemented in the clinic.
Arellano, Ronald S; Garcia, Rodrigo G; Gervais, Debra A; Mueller, Peter R
2009-12-01
The objective of this study was to evaluate the effectiveness of CT-guided injection of 5% dextrose in water solution (D5W) into the retroperitoneum to displace organs adjacent to renal cell carcinoma. An interventional radiology database was searched to identify the cases of patients who underwent CT-guided percutaneous radiofrequency ablation of biopsy-proven renal cell carcinoma in which D5W was injected into the retroperitoneal space to displace structures away from the targeted renal tumor. The number of organs displaced and the distance between the renal tumor and adjacent organs before and after displacement with D5W were assessed. The cases of 135 patients with 139 biopsy-proven renal cell carcinomas who underwent 154 percutaneous CT-guided radiofrequency ablation procedures were found in the search. Thirty-one patients with 33 renal cell carcinomas underwent 36 ablation procedures after injection of D5W into the retroperitoneal space. Fifty-five organs were displaced away from renal cell carcinoma with this technique. The average distance between adjacent structures and renal cell carcinomas before displacement was 0.36 cm (range, 0.1-1.0 cm). The average distance between structures and adjacent renal cell carcinomas after displacement was 1.94 cm (range, 1.1-4.3 cm) (p < 0.0001). The average volume of D5W used to achieve organ displacement was 273.5 mL. No complications were associated with this technique. CT-guided injection of D5W into the retroperitoneum is an effective method for displacing vital structures away from renal cell carcinoma.
Hu, Peijun; Wu, Fa; Peng, Jialin; Bao, Yuanyuan; Chen, Feng; Kong, Dexing
2017-03-01
Multi-organ segmentation from CT images is an essential step for computer-aided diagnosis and surgery planning. However, manual delineation of the organs by radiologists is tedious, time-consuming and poorly reproducible. Therefore, we propose a fully automatic method for the segmentation of multiple organs from three-dimensional abdominal CT images. The proposed method employs deep fully convolutional neural networks (CNNs) for organ detection and segmentation, which is further refined by a time-implicit multi-phase evolution method. Firstly, a 3D CNN is trained to automatically localize and delineate the organs of interest with a probability prediction map. The learned probability map provides both subject-specific spatial priors and initialization for subsequent fine segmentation. Then, for the refinement of the multi-organ segmentation, image intensity models, probability priors as well as a disjoint region constraint are incorporated into a unified energy functional. Finally, a novel time-implicit multi-phase level-set algorithm is utilized to efficiently optimize the proposed energy functional model. Our method has been evaluated on 140 abdominal CT scans for the segmentation of four organs (liver, spleen and both kidneys). With respect to the ground truth, average Dice overlap ratios for the liver, spleen and both kidneys are 96.0, 94.2 and 95.4%, respectively, and average symmetric surface distance is less than 1.3 mm for all the segmented organs. The average computation time for a CT volume is 125 s. The achieved accuracy compares well to state-of-the-art methods with much higher efficiency. A fully automatic method for multi-organ segmentation from abdominal CT images was developed and evaluated. The results demonstrated its potential in clinical usage with high effectiveness, robustness and efficiency.
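The Dice overlap ratio used to report the segmentation accuracy above can be computed from binary masks as follows (the toy masks are illustrative, not the study's data):

```python
import numpy as np

def dice(seg, ref):
    """Dice similarity coefficient 2|A ∩ B| / (|A| + |B|) for binary masks."""
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    inter = np.logical_and(seg, ref).sum()
    return 2.0 * inter / (seg.sum() + ref.sum())

# Toy 2-D masks: a 2x2 square vs. an overlapping 2x3 rectangle
a = np.zeros((4, 4), bool); a[1:3, 1:3] = True   # 4 voxels
b = np.zeros((4, 4), bool); b[1:3, 1:4] = True   # 6 voxels, 4 shared
print(dice(a, b))  # 2*4 / (4 + 6) = 0.8
```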
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, R; Ding, C; Jiang, S
Purpose: Spine SRS/SAbR treatment plans typically require very steep dose gradients to meet spinal cord constraints, and it is crucial that the dose distribution be accurate. However, these plans are typically calculated on helical free-breathing CT scans, which often contain motion artifacts. While the spine itself does not exhibit much intra-fraction motion, tissues around the spine, particularly the liver, do move with respiration. We investigated the dosimetric effect of liver motion on dose distributions calculated on helical free-breathing CT scans for spine SAbR delivered to the T and L spine. Methods: We took 5 spine SAbR plans and used density overrides to simulate an average reconstruction CT image set, which would more closely represent the patient anatomy during treatment. The value used for the density override was 0.66 g/cc. All patients were planned using our standard beam arrangement, which consists of 13 coplanar step-and-shoot IMRT beams. The original plan was recalculated with the same MU on the “average” scan, and target coverage and spinal cord dose were compared to the original plan. Results: The average changes in minimum PTV dose, PTV coverage, max cord dose and volume of cord receiving 10 Gy were 0.6%, 0.8%, 0.3% and 4.4% (0.012 cc), respectively. Conclusion: SAbR spine plans are surprisingly robust relative to surrounding organ motion due to respiration. Motion artifacts in helical planning CT scans do not cause clinically significant differences when these plans are re-calculated on pseudo-average CT reconstructions. This is likely due to the beam arrangement used, because only three beams pass through the liver and only one beam passes completely through the density override. The effect of respiratory motion on VMAT plans for spine SAbR is being evaluated.
Khanna, Ryan; McDevitt, Joseph L; Abecassis, Zachary A; Smith, Zachary A; Koski, Tyler R; Fessler, Richard G; Dahdaleh, Nader S
2016-10-01
Minimally invasive transforaminal lumbar interbody fusion (TLIF) has undergone significant evolution since its conception as a fusion technique to treat lumbar spondylosis. Minimally invasive TLIF is commonly performed using intraoperative two-dimensional fluoroscopic x-rays. However, intraoperative computed tomography (CT)-based navigation during minimally invasive TLIF is gaining popularity for improvements in visualizing anatomy and reducing intraoperative radiation to surgeons and operating room staff. This is the first study to compare clinical outcomes and cost between these 2 imaging techniques during minimally invasive TLIF. For comparison, 28 patients who underwent single-level minimally invasive TLIF using fluoroscopy were matched to 28 patients undergoing single-level minimally invasive TLIF using CT navigation based on race, sex, age, smoking status, payer type, and medical comorbidities (Charlson Comorbidity Index). The minimum follow-up time was 6 months. The 2 groups were compared in regard to clinical outcomes and hospital reimbursement from the payer perspective. Average surgery time, anesthesia time, and hospital length of stay were similar for both groups, but average estimated blood loss was lower in the fluoroscopy group compared with the CT navigation group (154 mL vs. 262 mL; P = 0.016). Oswestry Disability Index, back visual analog scale, and leg visual analog scale scores similarly improved in both groups (P > 0.05) at 6-month follow-up. Cost analysis showed that average hospital payments were similar in the fluoroscopy versus the CT navigation groups ($32,347 vs. $32,656; P = 0.925) as well as payments for the operating room (P = 0.868). Single-level minimally invasive TLIF performed with fluoroscopy versus CT navigation showed similar clinical outcomes and cost at 6 months. Copyright © 2016 Elsevier Inc. All rights reserved.
3-d interpolation in object perception: evidence from an objective performance paradigm.
Kellman, Philip J; Garrigan, Patrick; Shipley, Thomas F; Yin, Carol; Machado, Liana
2005-06-01
Object perception requires interpolation processes that connect visible regions despite spatial gaps. Some research has suggested that interpolation may be a 3-D process, but objective performance data and evidence about the conditions leading to interpolation are needed. The authors developed an objective performance paradigm for testing 3-D interpolation and tested a new theory of 3-D contour interpolation, termed 3-D relatability. The theory indicates, for a given edge, which orientations and positions of other edges in space may be connected to it by interpolation. Results of 5 experiments showed that processing of orientation relations in 3-D relatable displays was superior to processing in 3-D nonrelatable displays and that these effects depended on object formation. 3-D interpolation and 3-D relatability are discussed in terms of their implications for computational and neural models of object perception, which have typically been based on 2-D-orientation-sensitive units. ((c) 2005 APA, all rights reserved).
Diaphragm motion quantification in megavoltage cone-beam CT projection images.
Chen, Mingqing; Siochi, R Alfredo
2010-05-01
To quantify diaphragm motion in megavoltage (MV) cone-beam computed tomography (CBCT) projections. User identified ipsilateral hemidiaphragm apex (IHDA) positions in two full exhale and inhale frames were used to create bounding rectangles in all other frames of a CBCT scan. The bounding rectangle was enlarged to create a region of interest (ROI). ROI pixels were associated with a cost function: The product of image gradients and a gradient direction matching function for an ideal hemidiaphragm determined from 40 training sets. A dynamic Hough transform (DHT) models a hemidiaphragm as a contour made of two parabola segments with a common vertex (the IHDA). The images within the ROIs are transformed into Hough space where a contour's Hough value is the sum of the cost function over all contour pixels. Dynamic programming finds the optimal trajectory of the common vertex in Hough space subject to motion constraints between frames, and an active contour model further refines the result. Interpolated ray tracing converts the positions to room coordinates. Root-mean-square (RMS) distances between these positions and those resulting from an expert's identification of the IHDA were determined for 21 Siemens MV CBCT scans. Computation time on a 2.66 GHz CPU was 30 s. The average craniocaudal RMS error was 1.38 +/- 0.67 mm. While much larger errors occurred in a few near-sagittal frames on one patient's scans, adjustments to algorithm constraints corrected them. The DHT based algorithm can compute IHDA trajectories immediately prior to radiation therapy on a daily basis using localization MVCBCT projection data. This has potential for calibrating external motion surrogates against diaphragm motion.
Zhang, Yakun; Li, Xiang; Segars, W. Paul; Samei, Ehsan
2014-01-01
Purpose: Given the radiation concerns inherent to the x-ray modalities, accurately estimating the radiation doses that patients receive during different imaging modalities is crucial. This study estimated organ doses, effective doses, and risk indices for the three clinical chest x-ray imaging techniques (chest radiography, tomosynthesis, and CT) using 59 anatomically variable voxelized phantoms and Monte Carlo simulation methods. Methods: A total of 59 computational anthropomorphic male and female extended cardiac-torso (XCAT) adult phantoms were used in this study. Organ doses and effective doses were estimated for a clinical radiography system with the capability of conducting chest radiography and tomosynthesis (Definium 8000, VolumeRAD, GE Healthcare) and a clinical CT system (LightSpeed VCT, GE Healthcare). A Monte Carlo dose simulation program (PENELOPE, version 2006, Universitat de Barcelona, Spain) was used to mimic these two clinical systems. The Duke University (Durham, NC) technique charts were used to determine the clinical techniques for the radiographic modalities. An exponential relationship between CTDIvol and patient diameter was used to determine the absolute dose values for CT. The simulations of the two clinical systems compute organ and tissue doses, which were then used to calculate effective dose and risk index. The calculation of the two dose metrics used the tissue weighting factors from ICRP Publication 103 and BEIR VII report. Results: The average effective dose of the chest posteroanterior examination was found to be 0.04 mSv, which was 1.3% that of the chest CT examination. The average effective dose of the chest tomosynthesis examination was found to be about ten times that of the chest posteroanterior examination and about 12% that of the chest CT examination. 
With increasing patient average chest diameter, both the effective dose and risk index for CT increased considerably in an exponential fashion, while these two dose metrics only increased slightly for radiographic modalities and for chest tomosynthesis. Effective and organ doses normalized to mAs all illustrated an exponential decrease with increasing patient size. As a surface organ, breast doses had less correlation with body size than that of lungs or liver. Conclusions: Patient body size has a much greater impact on radiation dose of chest CT examinations than chest radiography and tomosynthesis. The size of a patient should be considered when choosing the best thoracic imaging modality. PMID:24506654
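The effective dose reported in this study is, per ICRP Publication 103, a weighted sum of organ equivalent doses, E = Σ_T w_T · H_T. The sketch below uses a small subset of the ICRP 103 tissue weighting factors and hypothetical organ doses, not values from the study:

```python
# Subset of ICRP Publication 103 tissue weighting factors (the full set
# sums to 1 over all tissues; only a few organs are included here)
W_T = {"lung": 0.12, "breast": 0.12, "stomach": 0.12,
       "liver": 0.04, "thyroid": 0.04, "bone_surface": 0.01}

def effective_dose(organ_doses_mSv):
    """Effective dose E = sum over tissues of w_T * H_T (result in mSv)."""
    return sum(W_T[t] * h for t, h in organ_doses_mSv.items())

# Hypothetical equivalent doses (mSv) from a chest exam simulation
print(round(effective_dose({"lung": 0.15, "breast": 0.10, "liver": 0.05}), 3))
```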
Shahedi, Maysam; Halicek, Martin; Guo, Rongrong; Zhang, Guoyi; Schuster, David M; Fei, Baowei
2018-06-01
Prostate segmentation in computed tomography (CT) images is useful for treatment planning and procedure guidance such as external beam radiotherapy and brachytherapy. However, because of the low soft-tissue contrast of CT images, manual segmentation of the prostate is a time-consuming task with high interobserver variation. In this study, we proposed a semiautomated, three-dimensional (3D) segmentation for prostate CT images using shape and texture analysis, and we evaluated the method against manual reference segmentations. The prostate gland usually has a globular shape with a smoothly curved surface, and its shape can be accurately modeled or reconstructed from a limited number of well-distributed surface points. In a training dataset, using the prostate gland centroid point as the origin of a coordinate system, we defined an intersubject correspondence between the prostate surface points based on their spherical coordinates. We applied this correspondence to generate a point distribution model for prostate shape using principal component analysis and to study the local texture difference between prostate and nonprostate tissue close to the different prostate surface subregions. We used the learned shape and texture characteristics of the prostate in CT images and then combined them with user inputs to segment a new image. We trained our segmentation algorithm using 23 CT images and tested the algorithm on two sets of 10 nonbrachytherapy and 37 post-low-dose-rate brachytherapy CT images. We used a set of error metrics to evaluate the segmentation results using two experts' manual reference segmentations. For both nonbrachytherapy and post-brachytherapy image sets, the average measured Dice similarity coefficient (DSC) was 88% and the average mean absolute distance (MAD) was 1.9 mm. The average measured differences between the two experts on both datasets were 92% (DSC) and 1.1 mm (MAD).
The proposed, semiautomatic segmentation algorithm showed a fast, robust, and accurate performance for 3D prostate segmentation of CT images, specifically when no previous, intrapatient information, that is, previously segmented images, was available. The accuracy of the algorithm is comparable to the best performance results reported in the literature and approaches the interexpert variability observed in manual segmentation. © 2018 American Association of Physicists in Medicine.
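The spherical-coordinate correspondence described above can be sketched by expressing each surface point relative to the gland centroid; the point set below is a toy example, not patient data:

```python
import numpy as np

def spherical_correspondence(points):
    """Map surface points to spherical coordinates about their centroid,
    giving an intersubject correspondence keyed on (theta, phi)."""
    p = np.asarray(points, float)
    p = p - p.mean(axis=0)                              # centroid as origin
    r = np.linalg.norm(p, axis=1)
    theta = np.arccos(np.clip(p[:, 2] / r, -1.0, 1.0))  # polar angle
    phi = np.arctan2(p[:, 1], p[:, 0])                  # azimuth
    return r, theta, phi

# Toy "surface": the six unit-axis points; their centroid is the origin
pts = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
r, theta, phi = spherical_correspondence(pts)
print(r)  # all radii 1.0
```

Corresponding (theta, phi) bins across subjects then index matched surface points for the principal component shape model.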
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernandez, A; Boone, J
Purpose: To implement a 3D beam modulation filter (3D-BMF) in dedicated breast CT (bCT) and develop a method for conforming the patient’s breast to a pre-defined shape, optimizing the effects of the filter. This work expands on previous work reporting the methodology for designing a 3D-BMF that can spare unnecessary dose and improve signal equalization at the detector by preferentially filtering the beam in the thinner anterior and peripheral breast regions. Methods: Effective diameter profiles were measured for 219 segmented bCT images, grouped into volume quintiles, and averaged within each group to represent the range of breast sizes found clinically. These profiles were then used to generate five size-specific computational phantoms and fabricate five size-specific UHMW phantoms. Each computational phantom was utilized for designing a size-specific 3D-BMF using previously reported methods. Glandular dose values and projection images were simulated in MCNP6 with and without the 3D-BMF using the system specifications of our prototype bCT scanner “Doheny”. Lastly, thermoplastic was molded around each of the five phantom sizes and used to produce a series of breast immobilizers for use in conforming the patient’s breast during bCT acquisition. Results: After incorporating the 3D-BMF, MC simulations estimated an 80% average reduction in the detector dynamic range requirements across all phantom sizes. The glandular dose was reduced on average 57% after normalizing by the number of quanta reaching the detector under the thickest region of the breast. Conclusion: A series of bCT-derived breast phantoms were used to design size-specific 3D-BMFs and breast immobilizers that can be used on the bCT platform to conform the patient’s breast and therefore optimally exploit the benefits of the 3D-BMF. Current efforts are focused on fabricating several prototype 3D-BMFs and performing phantom scans on Doheny for MC simulation validation and image quality analysis.
Research reported in this paper was supported in part by the National Cancer Institute of the National Institutes of Health under award R01CA181081. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
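The quintile-grouping step described above can be sketched as follows; the array names, sizes, and the volume surrogate are illustrative assumptions, not the paper's actual data:

```python
import numpy as np

# Hypothetical per-breast effective-diameter profiles (cm per axial slice).
rng = np.random.default_rng(0)
n_breasts, n_slices = 219, 50
profiles = rng.uniform(8, 18, size=(n_breasts, n_slices))
volumes = profiles.sum(axis=1)  # crude per-breast volume surrogate

# Split into volume quintiles and average the profiles within each quintile.
edges = np.quantile(volumes, [0.2, 0.4, 0.6, 0.8])
groups = np.digitize(volumes, edges)  # quintile index 0..4
mean_profiles = np.array([profiles[groups == q].mean(axis=0) for q in range(5)])
print(mean_profiles.shape)  # (5, 50): one representative profile per quintile
```

Each of the five averaged profiles would then seed one size-specific computational phantom.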
Naeger, D M; Chang, S D; Kolli, P; Shah, V; Huang, W; Thoeni, R F
2011-01-01
Objective: The study compared the sensitivity, specificity, confidence and interpretation time of readers of differing experience in diagnosing acute appendicitis with contrast-enhanced CT using neutral vs positive oral contrast agents. Methods: Contrast-enhanced CT for right lower quadrant or right flank pain was performed in 200 patients with neutral and 200 with positive oral contrast including 199 with proven acute appendicitis and 201 with other diagnoses. Test set disease prevalence was 50%. Two experienced gastrointestinal radiologists, one fellow and two first-year residents blindly assessed all studies for appendicitis (2000 readings) and assigned confidence scores (1=poor to 4=excellent). Receiver operating characteristic (ROC) curves were generated. Total interpretation time was recorded. Each reader's interpretation with the two agents was compared using standard statistical methods. Results: Average reader sensitivity was found to be 96% (range 91–99%) with positive and 95% (89–98%) with neutral oral contrast; specificity was 96% (92–98%) and 94% (90–97%). For each reader, no statistically significant difference was found between the two agents (sensitivities p-values >0.6; specificities p-values >0.08), in the area under the ROC curve (range 0.95–0.99) or in average interpretation times. In cases without appendicitis, positive oral contrast demonstrated improved appendix identification (average 90% vs 78%) and higher confidence scores for three readers. Average interpretation times showed no statistically significant differences between the agents. Conclusion: Neutral vs positive oral contrast does not affect the accuracy of contrast-enhanced CT for diagnosing acute appendicitis. Although positive oral contrast might help to identify normal appendices, we continue to use neutral oral contrast given its other potential benefits. PMID:20959365
Chen, Jiang-Hong; Jin, Er-Hu; He, Wen; Zhao, Li-Qin
2014-01-01
Objective: To reduce radiation dose while maintaining image quality in low-dose chest computed tomography (CT) by combining adaptive statistical iterative reconstruction (ASIR) and automatic tube current modulation (ATCM). Methods: Patients undergoing cancer screening (n = 200) were subjected to 64-slice multidetector chest CT scanning with ASIR and ATCM. Patients were divided into groups 1, 2, 3, and 4 (n = 50 each), with a noise index (NI) of 15, 20, 30, and 40, respectively. Each image set was reconstructed with 4 ASIR levels (0% ASIR, 30% ASIR, 50% ASIR, and 80% ASIR) in each group. Two radiologists assessed subjective image noise, image artifacts, and visibility of the anatomical structures. Objective image noise and signal-to-noise ratio (SNR) were measured, and effective dose (ED) was recorded. Results: Increased NI was associated with increased subjective and objective image noise results (P<0.001), and SNR decreased with increasing NI (P<0.001). These values improved with increased ASIR levels (P<0.001). Images from all 4 groups were clinically diagnosable. Images with NI = 30 and 50% ASIR had average subjective image noise scores and nearly average anatomical structure visibility scores, with a mean objective image noise of 23.42 HU. The EDs for groups 1, 2, 3 and 4 were 2.79±1.17, 1.69±0.59, 0.74±0.29, and 0.37±0.22 mSv, respectively. Compared to group 1 (NI = 15), the ED reductions were 39.43%, 73.48%, and 86.74% for groups 2, 3, and 4, respectively. Conclusions: Using NI = 30 with 50% ASIR in the chest CT protocol, we obtained average or above-average image quality but a reduced ED. PMID:24691208
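The reported effective-dose reductions follow directly from the group means; a quick check of that arithmetic:

```python
# Reproducing the reported effective-dose (ED) reductions from the group means.
ed = {15: 2.79, 20: 1.69, 30: 0.74, 40: 0.37}  # noise index -> mean ED (mSv)

baseline = ed[15]
for ni, dose in ed.items():
    reduction = 100 * (baseline - dose) / baseline
    print(f"NI={ni}: ED={dose} mSv, reduction={reduction:.2f}%")
# NI=20 -> 39.43%, NI=30 -> 73.48%, NI=40 -> 86.74%, matching the abstract.
```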
Hany, Thomas F; Gharehpapagh, Esmaiel; Kamel, Ehab M; Buck, Alfred; Himms-Hagen, Jean; von Schulthess, Gustav K
2002-10-01
Increased symmetrical fluorine-18 fluorodeoxyglucose (FDG) uptake in the cervical and thoracic spine region is well known and has been attributed to muscular uptake. The purpose of this study was to re-evaluate this FDG uptake pattern by means of co-registered positron emission tomography (PET) and computed tomography (CT) imaging, which allowed exact localisation of this uptake. Between April and November 2001, 638 consecutive patients referred for PET/CT were imaged on an in-line PET/CT system (GEMS). This system combines an advanced GE PET scanner and a multirow-detector computer tomograph (Lightspeed, GEMS). The examination included PET with FDG and one CT acquisition with 80 mA. For CT, the following parameters were used: 140 kV, 80 mA, reconstructed slice thickness 5 mm, scan length 867 mm, AT 22.5 s. CT data were used for attenuation correction as well as image co-registration. Image analysis was performed on an Entegra work-station (ELGEMS). All patients with symmetrical uptake within the neck, thorax and shoulder regions were selected and the exact localisation of uptake determined (muscle, bone, fatty tissue or articulation). In 17 of the 638 patients (2.5%), increased, symmetrical FDG uptake in the shoulder region in a typical pattern was found. If extensive, this pattern included FDG activity comparable to brain activity in the lower cervical spine, the shoulder region and the upper thoracic spine in the costovertebral region. A less extensive pattern only involved intermediate FDG uptake in the lower cervical spine and shoulder region or in the shoulder region alone. In seven female patients (average 32.3 years), the extensive uptake pattern was seen. The average body mass index (BMI) was 19.0 (range 16.8-23.4). In the other ten patients (two male, eight female, average age 37.1 years), the average BMI was 22.7 (18.7-27.7). 
In all patients, the soft tissue uptake was clearly localised within the fatty tissue of the shoulders as demonstrated by PET/CT co-registration. The uptake in the region of the thoracic spine was localised in the region of the costovertebral joints. Symmetrical FDG uptake in the shoulder, neck and thoracic spine region is probably related to uptake in adipose tissue, especially in underweight patients. Hypothetically, this FDG uptake could represent activated brown adipose tissue during increased sympathetic nerve system (SNS) activity due to cold stress.
Natural Language-based Machine Learning Models for the Annotation of Clinical Radiology Reports.
Zech, John; Pain, Margaret; Titano, Joseph; Badgeley, Marcus; Schefflein, Javin; Su, Andres; Costa, Anthony; Bederson, Joshua; Lehar, Joseph; Oermann, Eric Karl
2018-05-01
Purpose: To compare different methods for generating features from radiology reports and to develop a method to automatically identify findings in these reports. Materials and Methods: In this study, 96,303 head computed tomography (CT) reports were obtained. The linguistic complexity of these reports was compared with that of alternative corpora. Head CT reports were preprocessed, and machine-analyzable features were constructed by using bag-of-words (BOW), word embedding, and latent Dirichlet allocation-based approaches. Ultimately, 1004 head CT reports were manually labeled for findings of interest by physicians, and a subset of these were deemed critical findings. Lasso logistic regression was used to train models for physician-assigned labels on 602 of 1004 head CT reports (60%) using the constructed features, and the performance of these models was validated on a held-out 402 of 1004 reports (40%). Models were scored by area under the receiver operating characteristic curve (AUC), and aggregate AUC statistics were reported for (a) all labels, (b) critical labels, and (c) the presence of any critical finding in a report. Sensitivity, specificity, accuracy, and F1 score were reported for the best-performing model's (a) predictions of all labels and (b) identification of reports containing critical findings. Results: The best-performing model (BOW with unigrams, bigrams, and trigrams plus average word embedding vectors) had a held-out AUC of 0.966 for identifying the presence of any critical head CT finding and an average 0.957 AUC across all head CT findings. Sensitivity and specificity for identifying the presence of any critical finding were 92.59% (175 of 189) and 89.67% (191 of 213), respectively. Average sensitivity and specificity across all findings were 90.25% (1898 of 2103) and 91.72% (18,351 of 20,007), respectively.
Simpler BOW methods achieved results competitive with those of more sophisticated approaches, with an average AUC for presence of any critical finding of 0.951 for unigram BOW versus 0.966 for the best-performing model. The Yule I of the head CT corpus was 34, markedly lower than that of the Reuters corpus (103) or I2B2 discharge summaries (271), indicating lower linguistic complexity. Conclusion: Automated methods can be used to identify findings in radiology reports. The success of this approach benefits from the standardized language of these reports. With this method, a large labeled corpus can be generated for applications such as deep learning. © RSNA, 2018. Online supplemental material is available for this article.
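A minimal sketch of the described n-gram BOW plus lasso logistic regression pipeline, using scikit-learn; the toy reports and labels are invented for illustration and do not reproduce the study's corpus or label set:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy head CT report snippets; 1 = critical finding present.
reports = [
    "no acute intracranial hemorrhage",
    "acute subdural hematoma with midline shift",
    "no evidence of acute infarct or hemorrhage",
    "large acute intraparenchymal hemorrhage",
] * 10
labels = [0, 1, 0, 1] * 10

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 3)),                   # unigrams to trigrams
    LogisticRegression(penalty="l1", solver="liblinear"),  # lasso-style sparsity
)
model.fit(reports, labels)
print(model.predict(["acute hematoma with midline shift"]))
```

In the study this was trained on 602 labeled reports per finding and scored by held-out AUC.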
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi Linxi; Vedantham, Srinivasan; Karellas, Andrew
Purpose: To determine the mean and range of location-averaged breast skin thickness using high-resolution dedicated breast CT for use in Monte Carlo-based estimation of normalized glandular dose coefficients. Methods: This study retrospectively analyzed image data from a clinical study investigating dedicated breast CT. An algorithm similar to that described by Huang et al. [“The effect of skin thickness determined using breast CT on mammographic dosimetry,” Med. Phys. 35(4), 1199–1206 (2008)] was used to determine the skin thickness in 137 dedicated breast CT volumes from 136 women. The location-averaged mean breast skin thickness for each breast was estimated, and the study population mean and range were determined. Pathology results were available for 132 women and were used to investigate whether the distribution of location-averaged mean breast skin thickness varied with pathology. The effect of surface fitting to account for breast curvature was also studied. Results: The study mean (± interbreast SD) for breast skin thickness was 1.44 ± 0.25 mm (range: 0.87–2.34 mm), which was in excellent agreement with Huang et al. Based on pathology, pair-wise statistical analysis (Mann-Whitney test) indicated that at the 0.05 significance level there was no significant difference in the location-averaged mean breast skin thickness distributions between the groups: benign vs malignant (p = 0.223), benign vs hyperplasia (p = 0.651), hyperplasia vs malignant (p = 0.229), and malignant vs nonmalignant (p = 0.172). Conclusions: Considering that this study used a different clinical prototype system and the study participants were from a different geographical location, the observed agreement between the two studies suggests that the choice of a 1.45 mm thick skin layer comprising the epidermis and the dermis for breast dosimetry is appropriate.
While some benign and malignant conditions could cause skin thickening, in this study cohort the location-averaged mean breast skin thickness distributions did not differ significantly with pathology. The study also underscored the importance of considering breast curvature in estimating breast skin thickness.
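The pair-wise Mann-Whitney comparison used above can be illustrated with SciPy; the samples below are synthetic draws around the reported 1.44 ± 0.25 mm, and the group sizes are assumptions, not the study's actual counts:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Synthetic skin-thickness samples (mm) for two pathology groups.
rng = np.random.default_rng(1)
benign = rng.normal(1.44, 0.25, size=60)
malignant = rng.normal(1.44, 0.25, size=60)

stat, p = mannwhitneyu(benign, malignant, alternative="two-sided")
print(f"U={stat:.0f}, p={p:.3f}")  # both samples share a parent distribution,
                                   # so typically p > 0.05 here
```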
Estimating the cost of informal caregiving for elderly patients with cancer.
Hayman, J A; Langa, K M; Kabeto, M U; Katz, S J; DeMonner, S M; Chernew, M E; Slavin, M B; Fendrick, A M
2001-07-01
As the United States population ages, the increasing prevalence of cancer is likely to result in higher direct medical and nonmedical costs. Although estimates of the associated direct medical costs exist, very little information is available regarding the prevalence, time, and cost associated with informal caregiving for elderly cancer patients. To estimate these costs, we used data from the first wave (1993) of the Asset and Health Dynamics (AHEAD) Study, a nationally representative longitudinal survey of people aged 70 or older. Using a multivariable, two-part regression model to control for differences in health and functional status, social support, and sociodemographics, we estimated the probability of receiving informal care, the average weekly number of caregiving hours, and the average annual caregiving cost per case (assuming an average hourly wage of $8.17) for subjects who reported no history of cancer (NC), having a diagnosis of cancer but not receiving treatment for their cancer in the last year (CNT), and having a diagnosis of cancer and receiving treatment in the last year (CT). Of the 7,443 subjects surveyed, 6,422 (86%) reported NC, 718 (10%) reported CNT, and 303 (4%) reported CT. Whereas the adjusted probability of informal caregiving for those respondents reporting NC and CNT was 26%, it was 34% for those reporting CT (P <.05). Those subjects reporting CT received an average of 10.0 hours of informal caregiving per week, as compared with 6.9 and 6.8 hours for those who reported NC and CNT, respectively (P <.05). Accordingly, cancer treatment was associated with an incremental increase of 3.1 hours per week, which translates into an additional average yearly cost of $1,200 per patient and just over $1 billion nationally. Informal caregiving costs are substantial and should be considered when estimating the cost of cancer treatment in the elderly.
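The unadjusted arithmetic behind the incremental cost is easy to reproduce; note that the paper's roughly $1,200 figure comes from the adjusted two-part regression model, so it differs slightly from this naive product:

```python
# Back-of-the-envelope incremental caregiving cost (unadjusted).
extra_hours_per_week = 10.0 - 6.9  # CT group vs NC group weekly hours
hourly_wage = 8.17                 # assumed average wage from the study
annual_cost = extra_hours_per_week * hourly_wage * 52
print(f"${annual_cost:,.0f} per patient per year")  # roughly $1,300 unadjusted;
# the adjusted model estimate reported in the abstract is about $1,200
```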
Low agreement of visual rating for detailed quantification of pulmonary emphysema in whole-lung CT.
Mascalchi, Mario; Diciotti, Stefano; Sverzellati, Nicola; Camiciottoli, Gianna; Ciccotosto, Cesareo; Falaschi, Fabio; Zompatori, Maurizio
2012-02-01
Multidetector spiral computed tomography (CT) has opened the possibility of quantitative evaluation of emphysema extent in the whole lung. Visual assessment can be used for this purpose, but its reproducibility has not been established. The aim was to assess agreement of detailed assessment of pulmonary emphysema on whole-lung CT using a visual scale. Thirty patients with chronic obstructive pulmonary disease underwent whole-lung inspiratory CT. Four chest radiologists rated the same 22 ± 2 thin sections using a visual scale that defines a range of emphysema extent between 0 and 100. Two of them repeated the rating two months later. Inter- and intra-operator agreement was evaluated with the Bland and Altman method. In addition, the percentage of emphysema at -950 Hounsfield units in the whole lung was determined using fully automated, commercially available software for 3D densitometry. In three of six inter-operator pairs and in one of two intra-operator pairs, the Kendall τ test showed a significant correlation between the difference and the average magnitude of visual scores. Among different operators, the half-width of the 95% limits of agreement (95% LoA) was wide, ranging from 14.2 to 27.7 for an average visual score of 20 and from 18.5 to 36.8 for an average visual score of 80. Within the same operator, the half-width of the 95% LoA ranged from 10.9 to 21.0 for an average visual score of 20 and from 25.1 to 30.1 for an average visual score of 80. The visual scores of the four radiologists were correlated with the results of densitometry (P < 0.001; r = 0.65–0.81). The inter- and intra-operator agreement of detailed assessment of emphysema in the whole lung using a visual scale is low and decreases with increasing emphysema extent.
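The Bland-Altman 95% limits of agreement used above are straightforward to compute; the rater scores below are synthetic values on the study's 0-100 visual scale:

```python
import numpy as np

# Synthetic emphysema scores from two raters on the same 30 sections.
rng = np.random.default_rng(2)
true_extent = rng.uniform(0, 80, size=30)
rater_a = true_extent + rng.normal(0, 8, size=30)
rater_b = true_extent + rng.normal(0, 8, size=30)

diff = rater_a - rater_b
bias = diff.mean()
loa_half_width = 1.96 * diff.std(ddof=1)  # half-width of the 95% LoA
print(f"bias={bias:.1f}, 95% LoA = {bias:.1f} +/- {loa_half_width:.1f}")
```

A wide half-width, as in the study, means two raters can disagree by tens of points on the same section.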
SU-G-IeP2-10: Lens Dose Reduction by Patient Position Modification During Neck CT Exams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosher, E; Lee, C; Butman, J
Purpose: Irradiation of the lens during a neck CT may increase a patient’s risk of developing cataracts later in life. Radiologists and technologists at the National Institutes of Health Clinical Center (NIHCC) have developed new CT imaging protocols that include a reduced scan range and modified neck positioning using a head tilt. This study evaluated the efficacy of this protocol in reducing lens dose. Methods: We retrieved CT images of five male patients who had two sets of CT images: before and after the implementation of the new protocol. The lens doses before the new protocol were calculated using an in-house CT dose calculator, the National Cancer Institute dosimetry system for CT (NCICT), which includes computational human phantoms with no head tilt. We also calculated the lens dose for the patient CT scans conducted after the new protocol by using an adult male computational phantom with the neck position deformed to match the angle of the head tilt. We also calculated the doses to other radiosensitive organs, including the globes of the eye, brain, pituitary gland, and salivary glands, before and after head tilt. Results: Our dose calculations demonstrated that modifying neck position reduced dose to the lens by 89% on average (range: 86–96%). Globe, brain, pituitary, and salivary gland doses also decreased by an average of 65% (51–95%), 38% (−8–66%), 34% (−43–84%), and 14% (13–14%), respectively. The new protocol resulted in a nearly ten-fold decrease in lens dose. Conclusion: The use of a head tilt and scan range reduction is an easy and effective method to reduce radiation exposure to the lens and other radiosensitive organs, while still allowing for the inclusion of critical neck structures in the CT image. We are expanding our study to a total of 10 males and 10 females.
NASA Astrophysics Data System (ADS)
Kyselý, Jan; Plavcová, Eva
2010-12-01
The study compares daily maximum (Tmax) and minimum (Tmin) temperatures in two data sets interpolated from irregularly spaced meteorological stations to a regular grid: the European gridded data set (E-OBS), produced from a relatively sparse network of stations available in the European Climate Assessment and Dataset (ECA&D) project, and a data set gridded onto the same grid from a high-density network of stations in the Czech Republic (GriSt). We show that large differences exist between the two gridded data sets, particularly for Tmin. The errors tend to be larger in tails of the distributions. In winter, temperatures below the 10% quantile of Tmin, which is still far from the very tail of the distribution, are too warm by almost 2°C in E-OBS on average. A large bias is found also for the diurnal temperature range. Comparison with simple average series from stations in two regions reveals that differences between GriSt and the station averages are minor relative to differences between E-OBS and either of the two data sets. The large deviations between the two gridded data sets affect conclusions concerning validation of temperature characteristics in regional climate model (RCM) simulations. The bias of the E-OBS data set and limitations with respect to its applicability for evaluating RCMs stem primarily from (1) insufficient density of information from station observations used for the interpolation, including the fact that the stations available may not be representative for a wider area, and (2) inconsistency between the radii of the areal average values in high-resolution RCMs and E-OBS. Further increases in the amount and quality of station data available within ECA&D and used in the E-OBS data set are essentially needed for more reliable validation of climate models against recent climate on a continental scale.
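The kind of cold-tail bias described above (E-OBS too warm near the 10% quantile of Tmin) can be quantified at matched grid points; the series below are synthetic, with the bias magnitude an illustrative assumption:

```python
import numpy as np

# Synthetic winter Tmin series at matched grid points: GriSt as the dense
# reference, E-OBS as a warm-biased counterpart.
rng = np.random.default_rng(3)
grist = rng.normal(-4.0, 6.0, size=5000)
eobs = grist + rng.normal(1.0, 1.5, size=5000)

q10_grist = np.quantile(grist, 0.10)
q10_eobs = np.quantile(eobs, 0.10)
print(f"bias at the 10% quantile of Tmin: {q10_eobs - q10_grist:+.2f} degC")
```

Comparing quantiles rather than means is what exposes the tail errors the study emphasizes.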
Spotting L3 slice in CT scans using deep convolutional network and transfer learning.
Belharbi, Soufiane; Chatelain, Clément; Hérault, Romain; Adam, Sébastien; Thureau, Sébastien; Chastan, Mathieu; Modzelewski, Romain
2017-08-01
In this article, we present a complete automated system for spotting a particular slice in a 3D computed tomography exam (CT scan). Our approach does not require any assumptions about which part of the patient's body is covered by the scan. It relies on an original machine learning regression approach. Our models are learned via transfer learning, exploiting deep architectures pre-trained on the ImageNet database, and therefore require very little annotation for training. The whole pipeline consists of three steps: i) conversion of the CT scans into Maximum Intensity Projection (MIP) images, ii) prediction from a Convolutional Neural Network (CNN) applied in a sliding-window fashion over the MIP image, and iii) robust analysis of the prediction sequence to predict the height of the desired slice within the whole CT scan. Our approach is applied to the detection of the third lumbar vertebra (L3) slice, which has been found to be representative of whole-body composition. Our system is evaluated on a database collected in our clinical center, containing 642 CT scans from different patients. We obtained an average localization error of 1.91 ± 2.69 slices (less than 5 mm) in an average time of less than 2.5 s per CT scan, allowing integration of the proposed system into daily clinical routines. Copyright © 2017 Elsevier Ltd. All rights reserved.
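Step (i) of the pipeline, the MIP conversion, reduces to a maximum over one spatial axis; the volume size and axis choice below are illustrative assumptions:

```python
import numpy as np

# Collapse a CT volume (slices x rows x cols, Hounsfield units) into a
# Maximum Intensity Projection along the anterior-posterior direction.
volume = np.random.default_rng(4).integers(-1000, 1500, size=(60, 128, 128))
mip = volume.max(axis=1)  # one 2D image summarizing the whole scan
print(mip.shape)  # (60, 128)
```

The CNN then slides over this single 2D image instead of the full 3D volume, which is what makes the regression cheap.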
Optical tomography by means of regularized MLEM
NASA Astrophysics Data System (ADS)
Majer, Charles L.; Urbanek, Tina; Peter, Jörg
2015-09-01
To solve the inverse problem involved in fluorescence-mediated tomography, a regularized maximum likelihood expectation maximization (MLEM) reconstruction strategy is proposed. This technique has recently been applied to reconstruct galaxy clusters in astronomy and is adopted here. The MLEM algorithm is implemented as a Richardson-Lucy (RL) scheme and includes entropic regularization and a floating default prior. Hence, the strategy is very robust against measurement noise and also avoids converging into noise patterns. Normalized Gaussian filtering with fixed standard deviation is applied for the floating default kernel. The reconstruction strategy is investigated using the XFM-2 homogeneous mouse phantom (Caliper LifeSciences Inc., Hopkinton, MA) with known optical properties. Prior to optical imaging, X-ray CT tomographic data of the phantom were acquired to provide structural context. Phantom inclusions were filled with fluorochrome (Cy5.5), and optical data were acquired at 60 projections over 360°. Fluorochrome excitation was accomplished by scanning laser point illumination in transmission mode (laser opposite to camera). Following data acquisition, a 3D triangulated mesh is derived from the reconstructed CT data, which is then matched with the optical projection images through 2D linear interpolation, correlation, and Fourier transformation in order to assess translational and rotational deviations between the optical and CT imaging systems. Preliminary results indicate that the proposed regularized MLEM algorithm, when driven with a constant initial condition, yields reconstructed images that tend to be smoother in comparison to classical MLEM without regularization. Once the floating default prior is included, this bias is significantly reduced.
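A regularized RL iteration with a "floating default" can be sketched in one dimension; this is a generic illustration, not the authors' exact algorithm, and the kernel widths, mixing weight, and source positions are assumptions:

```python
import numpy as np

def gaussian_kernel(sigma, radius=6):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

psf = gaussian_kernel(2.0)                      # assumed system blur
truth = np.zeros(64)
truth[20], truth[40] = 5.0, 3.0                 # two point-like sources
measured = np.convolve(truth, psf, mode="same") + 1e-3

estimate = np.ones_like(measured)               # constant initial condition
beta, prior_kernel = 0.1, gaussian_kernel(3.0)  # regularization weight (assumed)
for _ in range(100):
    forward = np.convolve(estimate, psf, mode="same") + 1e-12
    estimate = estimate * np.convolve(measured / forward, psf[::-1], mode="same")
    default = np.convolve(estimate, prior_kernel, mode="same")  # floating default
    estimate = (1 - beta) * estimate + beta * default           # pull toward prior

peak = int(np.argmax(estimate))
print(peak)  # strongest source recovered near index 20
```

The pull toward a smoothed copy of the current estimate stands in for the paper's entropic prior: it keeps the positive multiplicative RL update from locking onto noise patterns.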
Motion vector field upsampling for improved 4D cone-beam CT motion compensation of the thorax
NASA Astrophysics Data System (ADS)
Sauppe, Sebastian; Rank, Christopher M.; Brehm, Marcus; Paysan, Pascal; Seghers, Dieter; Kachelrieß, Marc
2017-03-01
To improve the accuracy of motion vector fields (MVFs) required for respiratory motion-compensated (MoCo) CT image reconstruction without increasing the computational complexity of the MVF estimation approach, we propose an MVF upsampling method that is able to reduce the motion blurring in reconstructed 4D images. While respiratory gating improves the temporal resolution, it leads to sparse-view sampling artifacts. MoCo image reconstruction has the potential to remove all motion artifacts while simultaneously making use of 100% of the raw data. However, the MVF accuracy is still below the temporal resolution of the CBCT data acquisition. Increasing the number of motion bins would increase reconstruction time and amplify sparse-view artifacts, but not necessarily the accuracy of the MVFs. Therefore, we propose a new method to upsample estimated MVFs and use those for MoCo. To estimate the MVFs, a modified version of the Demons algorithm is used. Our proposed method is able to interpolate the original MVFs up to the point where each projection has its own individual MVF. To validate the method, we use an artificially deformed clinical CT scan with the breathing pattern of a real patient, and patient data acquired with a TrueBeam™ 4D CBCT system (Varian Medical Systems). We evaluate our method for different numbers of respiratory bins, each with different upsampling factors. Employing our upsampling method, motion blurring in the reconstructed 4D images, induced by irregular breathing and the limited temporal resolution of phase-correlated images, is substantially reduced.
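The core idea of giving every projection its own MVF can be sketched with simple linear temporal interpolation between phase bins; this is a simplified stand-in for the paper's method, with bin counts and field sizes as illustrative assumptions:

```python
import numpy as np

# Phase-binned MVFs: (bin, y, x, component), components = (dy, dx).
rng = np.random.default_rng(5)
n_bins, ny, nx = 10, 32, 32
mvf_bins = rng.normal(0, 2, size=(n_bins, ny, nx, 2))

def mvf_at_phase(phase):
    """Interpolate an MVF at a continuous respiratory phase in [0, 1)."""
    pos = phase * n_bins
    i0 = int(pos) % n_bins
    i1 = (i0 + 1) % n_bins  # wrap around: breathing is cyclic
    w = pos - int(pos)
    return (1 - w) * mvf_bins[i0] + w * mvf_bins[i1]

# One upsampled MVF per projection angle (600 projections assumed).
phases = np.linspace(0, 1, 600, endpoint=False)
per_projection = np.stack([mvf_at_phase(p) for p in phases])
print(per_projection.shape)  # (600, 32, 32, 2)
```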
NASA Astrophysics Data System (ADS)
Sramek, Benjamin Koerner
The ability to deliver conformal dose distributions in radiation therapy through intensity modulation and the potential for tumor dose escalation to improve treatment outcome has necessitated an increase in localization accuracy of inter- and intra-fractional patient geometry. Megavoltage cone-beam CT imaging using the treatment beam and onboard electronic portal imaging device is one option currently being studied for implementation in image-guided radiation therapy. However, routine clinical use is predicated upon continued improvements in image quality and patient dose delivered during acquisition. The formal statement of hypothesis for this investigation was that the conformity of planned to delivered dose distributions in image-guided radiation therapy could be further enhanced through the application of kilovoltage scatter correction and intermediate view estimation techniques to megavoltage cone-beam CT imaging, and that normalized dose measurements could be acquired and inter-compared between multiple imaging geometries. 
The specific aims of this investigation were to: (1) incorporate the Feldkamp, Davis and Kress filtered backprojection algorithm into a program to reconstruct a voxelized linear attenuation coefficient dataset from a set of acquired megavoltage cone-beam CT projections, (2) characterize the effects on megavoltage cone-beam CT image quality resulting from the application of Intermediate View Interpolation and Intermediate View Reprojection techniques to limited-projection datasets, (3) incorporate the Scatter and Primary Estimation from Collimator Shadows (SPECS) algorithm into megavoltage cone-beam CT image reconstruction and determine the set of SPECS parameters which maximize image quality and quantitative accuracy, and (4) evaluate the normalized axial dose distributions received during megavoltage cone-beam CT image acquisition using radiochromic film and thermoluminescent dosimeter measurements in anthropomorphic pelvic and head and neck phantoms. The conclusions of this investigation were: (1) the implementation of intermediate view estimation techniques in megavoltage cone-beam CT produced improvements in image quality, with the largest impact occurring for smaller numbers of initially acquired projections, (2) the SPECS scatter correction algorithm could be successfully incorporated into projection data acquired using an electronic portal imaging device during megavoltage cone-beam CT image reconstruction, (3) a large range of SPECS parameters was shown to reduce cupping artifacts as well as improve reconstruction accuracy, with application to anthropomorphic phantom geometries improving the percent difference in reconstructed electron density for soft tissue from -13.6% to -2.0% and for cortical bone from -9.7% to 1.4%, (4) dose measurements in the anthropomorphic phantoms showed consistent agreement between planar measurements using radiochromic film and point measurements using thermoluminescent dosimeters, and (5) a comparison of normalized dose measurements acquired with radiochromic film to those calculated using multiple treatment planning systems, accelerator-detector combinations, patient geometries and accelerator outputs produced relatively good agreement.
Interactive semiautomatic contour delineation using statistical conditional random fields framework.
Hu, Yu-Chi; Grossberg, Michael D; Wu, Abraham; Riaz, Nadeem; Perez, Carmen; Mageras, Gig S
2012-07-01
Contouring a normal anatomical structure during radiation treatment planning requires significant time and effort. The authors present a fast and accurate semiautomatic contour delineation method to reduce the time and effort required of expert users. Following an initial segmentation on one CT slice, the user marks the target organ and nontarget pixels with a few simple brush strokes. The algorithm calculates statistics from this information that, in turn, determines the parameters of an energy function containing both boundary and regional components. The method uses a conditional random field graphical model to define the energy function to be minimized for obtaining an estimated optimal segmentation, and a graph partition algorithm to efficiently solve the energy function minimization. Organ boundary statistics are estimated from the segmentation and propagated to subsequent images; regional statistics are estimated from the simple brush strokes that are either propagated or redrawn as needed on subsequent images. This greatly reduces the user input needed and speeds up segmentations. The proposed method can be further accelerated with graph-based interpolation of alternating slices in place of user-guided segmentation. CT images from phantom and patients were used to evaluate this method. The authors determined the sensitivity and specificity of organ segmentations using physician-drawn contours as ground truth, as well as the predicted-to-ground truth surface distances. Finally, three physicians evaluated the contours for subjective acceptability. Interobserver and intraobserver analysis was also performed and Bland-Altman plots were used to evaluate agreement. Liver and kidney segmentations in patient volumetric CT images show that boundary samples provided on a single CT slice can be reused through the entire 3D stack of images to obtain accurate segmentation. 
In liver, our method has better sensitivity and specificity (0.925 and 0.995) than region growing (0.897 and 0.995) and level set methods (0.912 and 0.985), as well as a shorter mean predicted-to-ground truth distance (2.13 mm) compared to region growing (4.58 mm) and level set methods (8.55 mm and 4.74 mm). Similar results are observed in kidney segmentation. Physician evaluation of ten liver cases showed that 83% of contours did not need any modification, while 6% of contours needed modification as assessed by two or more evaluators. In interobserver and intraobserver analysis, Bland-Altman plots showed our method to have better repeatability than the manual method, while delineation time was 15% faster on average. Our method achieves high accuracy in liver and kidney segmentation and considerably reduces the time and labor required for contour delineation. Since it extracts purely statistical information from the samples interactively specified by expert users, the method avoids heuristic assumptions commonly used by other methods. In addition, the method can be extended to 3D directly without modification, because the underlying graphical framework and graph partition optimization fit naturally with the image grid structure.
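The energy formulation described above can be illustrated with a minimal sketch. The unary (regional) costs and pairwise weight `w` below are hypothetical placeholders, not the authors' model; in the paper they are estimated from brush-stroke statistics and the energy is minimized with a graph partition algorithm rather than merely evaluated:

```python
import numpy as np

# Toy conditional-random-field energy for a binary labeling on a 4-connected
# grid: unary (regional) costs plus a Potts-style pairwise (boundary) penalty
# that charges w for every pair of neighboring pixels with different labels.
def crf_energy(labels, unary, w=1.0):
    h, wd = labels.shape
    # sum the regional cost of the chosen label at each pixel
    e = unary[np.arange(h)[:, None], np.arange(wd)[None, :], labels].sum()
    e += w * np.sum(labels[1:, :] != labels[:-1, :])   # vertical neighbors
    e += w * np.sum(labels[:, 1:] != labels[:, :-1])   # horizontal neighbors
    return float(e)

labels = np.array([[0, 0], [0, 1]])   # a 2x2 labeling with one "organ" pixel
unary = np.zeros((2, 2, 2))           # (row, col, label) -> regional cost
e = crf_energy(labels, unary, w=1.0)  # only the two boundary edges contribute
```

A solver would search over labelings for the minimum of this energy; here the function only scores one candidate labeling.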
Narrowband signal detection in the SETI field test
NASA Technical Reports Server (NTRS)
Cullers, D. Kent; Deans, Stanley R.
1986-01-01
Various methods for detecting narrow-band signals are evaluated. The characteristics of synchronized and unsynchronized pulses are examined. Synchronous, square law, regular pulse, and the general form detections are discussed. The CW, single pulse, synchronous, and four pulse detections are analyzed in terms of false alarm rate and threshold relative to average noise power. Techniques for saving memory and retaining sensitivity are described. Consideration is given to nondrifting CW detection, asynchronous pulse detection, interpolative and extrapolative pulse detectors, and finite and infinite pulses.
A Linear Algebraic Approach to Teaching Interpolation
ERIC Educational Resources Information Center
Tassa, Tamir
2007-01-01
A novel approach for teaching interpolation in the introductory course in numerical analysis is presented. The interpolation problem is viewed as a problem in linear algebra, whence the various forms of interpolating polynomial are seen as different choices of a basis to the subspace of polynomials of the corresponding degree. This approach…
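The linear-algebra view can be made concrete with the monomial basis, where interpolation reduces to solving a Vandermonde system (a generic sketch, not taken from the article; other forms such as Lagrange or Newton correspond to other bases for the same polynomial subspace):

```python
import numpy as np

# Interpolating p(x) = c0 + c1*x + c2*x^2 through three points means solving
# the Vandermonde system V c = y in the monomial basis {1, x, x^2}.
x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 3.0, 7.0])        # samples of 1 + x + x^2
V = np.vander(x, increasing=True)    # columns: 1, x, x^2
c = np.linalg.solve(V, y)            # c = [1, 1, 1]
```

Changing the basis changes `V` (e.g. to the identity for the Lagrange basis) but not the interpolating polynomial itself.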
Psychoemotional Features of a Doubtful Disorder: Functional Dyspepsia
Dragos, D; Ionescu, O; Micut, R; Ojog, DG; Tanasescu, MD
2012-01-01
Objective. To delineate the psychological profile of individuals prone to FD-like symptoms (FDLS). Method. A triple questionnaire of 614 items (including psychological and medical ones) was given to 10192 respondents, the results were analyzed by means of Cronbach alpha, and Chi square test, together with an ad-hoc designed method that implied ranking and outliers detecting. Results and conclusions. FDLS appears to be an accompanying feature of many (if not most) human emotions and are more frequent in anxious, timid, pessimistic, discontent, irascible, tense, success-doubting, unexpected-dreading individuals, bothered by persistent thoughts and tormented by the professional requirements and the lack of time. A higher degree of specificity might have: chiefly fear of failure, susceptibility, and tension, secondarily emotivity, fear of unpredictable events, sense of insufficient time, preoccupation with authority factors, and tendency to endure unacceptable situations, and also faulty patience and lack of punctuality. Rumination appears to be the psychological tendency most strongly associated with FD. Nocturnal epigastric pain seems to indicate a submissive nature but a rather responsibilities-free childhood, while early satiety is associated with inclination to work and responsibility and preoccupation with self-image. The superposition of FD symptoms with biliary and esophageal symptoms cast a doubt over the distinctness and even the materiality of the various functional digestive disorders. Abbreviations: ChiSq = chi-square; CrA = Cronbach alpha; OdRa = odds ratio; OdRaCL = OdRa confidence limits; E = exponential (for the sake of legibility we have used the exponential notation throughout this article; i.e. 
4E-28 = 4×10-28); ErrProb = probability of error; SS = statistically significant; SD = standard deviation; a / m = the calculations were done by taking into account the average/ maximal score; P / M = psychological / medical category; PaMm / PmMa / PmMm / PaMa = the calculations were done by taking into account the average score for the PsyCt and the maximal score for the MedCt / the maximal score for PsyCt and the average score for the MedCt / and the maximal score for both / and the average score for both; R = the calculations were done for the FD_res category. FD = functional dyspepsia; FD_res / FD_ext = restricted / extended variant of the group of FD items; FDCt = FD category; FDLS = FD-like symptoms; MedCt / MedIt = medical category / item; PsyCt / PsyIt = psychological category / item; PMID:23144666
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamamoto, T; Boone, J; Kent, M
Purpose: Pulmonary perfusion imaging has provided significant insights into pulmonary diseases, and can be useful in radiotherapy. The purpose of this study was to prospectively establish proof-of-principle in a canine model for single-energy CT-based perfusion imaging, which has the potential for widespread clinical implementation. Methods: Single-energy CT perfusion imaging is based on: (1) acquisition of inspiratory breath-hold CT scans before and after intravenous injection of iodinated contrast medium, (2) deformable image registration (DIR) of the two CT image data sets, and (3) subtraction of the pre-contrast image from the post-contrast image, yielding a map of Hounsfield unit (HU) enhancement. These subtraction image data sets hypothetically represent perfused blood volume, a surrogate for perfusion. In an IACUC-approved clinical trial, we acquired pre- and post-contrast CT scans in the prone posture for six anesthetized, mechanically-ventilated dogs. The elastix algorithm was used for DIR. The registration accuracy was quantified using the target registration errors (TREs) for 50 pulmonary landmarks in each dog. The gradient of HU enhancement between gravity-dependent (ventral) and non-dependent (dorsal) regions was evaluated to quantify the known effect of gravity, i.e., greater perfusion in ventral regions. Results: The lung volume difference between the two scans was 4.3±3.5% on average (range 0.3%–10.1%). DIR demonstrated an average TRE of 0.7±1.0 mm. HU enhancement in lung parenchyma was 34±10 HU on average and varied considerably between individual dogs, indicating the need for improvement of the contrast injection protocol. HU enhancement in ventral (gravity-dependent) regions was found to be greater than in dorsal regions. A population average ventral-to-dorsal gradient of HU enhancement was strong (R² = 0.94) and statistically significant (p < 0.01).
Conclusion: This canine study demonstrated relatively accurate DIR and a strong ventral-to-dorsal gradient of HU enhancement, providing proof-of-principle for single-energy CT pulmonary perfusion imaging. This ongoing study will enroll more dogs and investigate the physiological significance. This study was supported by a Philips Healthcare/Radiological Society of North America (RSNA) Research Seed Grant (RSD1458).
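The core subtraction step of the method above can be sketched with toy HU arrays. The values are invented, and the sketch assumes the two images have already been aligned by deformable registration:

```python
import numpy as np

# Pre- and post-contrast HU values for already-registered voxels (toy data);
# rows stand in for gravity-dependent vs. non-dependent regions.
pre  = np.array([[-800.0, -750.0],
                 [-820.0, -700.0]])
post = np.array([[-770.0, -700.0],
                 [-800.0, -660.0]])

enhancement = post - pre              # HU-enhancement map, perfusion surrogate
row_means = enhancement.mean(axis=1)  # regional averages, e.g. ventral/dorsal
```

In the study, a regression of such regional averages against position yields the ventral-to-dorsal gradient used to confirm the gravity effect.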
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arai, K; Tohoku University Graduate School of Medicine, Sendai, Miyagi; Kadoya, N
Purpose: The aim of this study was to confirm On-Board Imager cone-beam computed tomography (CBCT) using a histogram-matching algorithm as a useful method for proton dose calculation in head and neck radiotherapy. Methods: We studied one head and neck phantom and ten patients with head and neck cancer treated using intensity-modulated radiation therapy (IMRT) and proton beam therapy. We modified Hounsfield unit (HU) values of CBCT (mCBCT) using a histogram-matching algorithm. In order to evaluate the accuracy of the proton dose calculation, we compared dose differences in dosimetric parameters (Dmean) for the clinical target volume (CTV), planning target volume (PTV), and left parotid, and in proton ranges (PR), between the planning CT (reference) and CBCT or mCBCT, as well as gamma passing rates of CBCT and mCBCT. To minimize the effect of organ deformation, we also performed image registration. Results: For patients, the average differences in Dmean for CTV, PTV, and left parotid between planning CT and CBCT were 1.63 ± 2.34%, 3.30 ± 1.02%, and 5.42 ± 3.06%, respectively. Similarly, the average differences between planning CT and mCBCT were 0.20 ± 0.19%, 0.58 ± 0.43%, and 3.53 ± 2.40%, respectively. The average differences in PR between planning CT and CBCT or mCBCT of a 50° beam for ten patients were 2.1 ± 2.1 mm and 0.3 ± 0.5 mm, respectively. Similarly, the average differences in PR of a 120° beam were 2.9 ± 2.6 mm and 1.1 ± 0.9 mm, respectively. The average dose and PR differences of mCBCT were smaller than those of CBCT. Additionally, the average gamma passing rates of mCBCT were larger than those of CBCT. Conclusion: We evaluated the accuracy of the proton dose calculation in CBCT and mCBCT with image registration for ten patients. Our results showed that HU modification using a histogram-matching algorithm could improve the accuracy of the proton dose calculation.
Evaluation of a semiautomated lung mass calculation technique for internal dosimetry applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Busse, Nathan; Erwin, William; Pan, Tinsu
2013-12-15
Purpose: The authors sought to evaluate a simple, semiautomated lung mass estimation method using computed tomography (CT) scans obtained using a variety of acquisition techniques and reconstruction parameters for mass correction of medical internal radiation dose-based internal radionuclide radiation absorbed dose estimates. Methods: CT scans of 27 patients with lung cancer undergoing stereotactic body radiation therapy treatment planning with PET/CT were analyzed retrospectively. For each patient, free-breathing (FB) and respiratory-gated 4DCT scans were acquired. The 4DCT scans were sorted into ten respiratory phases, representing one complete respiratory cycle. An average CT reconstruction was derived from the ten-phase reconstructions. Midexpiration breath-hold CT scans were acquired in the same session for many patients. Deep inspiration breath-hold diagnostic CT scans of many of the patients were obtained from different scanning sessions at similar time points to evaluate the effect of contrast administration and maximum inspiration breath-hold. Lung mass estimates were obtained using all CT scan types, and intercomparisons made to assess lung mass variation according to scan type. Lung mass estimates using the FB CT scans from PET/CT examinations of another group of ten male and ten female patients who were 21–30 years old and did not have lung disease were calculated and compared with reference lung mass values. To evaluate the effect of varying CT acquisition and reconstruction parameters on lung mass estimation, an anthropomorphic chest phantom was scanned and reconstructed with different CT parameters. CT images of the lungs were segmented using the OsiriX MD software program with a seed point of about −850 HU and an interval of 1000. Lung volume, and mean lung, tissue, and air HUs were recorded for each scan. Lung mass was calculated by assuming each voxel was a linear combination of only air and tissue.
The specific gravity of lung volume was calculated using the formula (lung HU − air HU)/(tissue HU − air HU), and mass = specific gravity × total volume × 1.04 g/cm³. Results: The range of calculated lung masses was 0.51–1.29 kg. The average male and female lung masses during FB CT were 0.80 and 0.71 kg, respectively. The calculated lung mass varied across the respiratory cycle but changed to a lesser degree than did lung volume measurements (7.3% versus 15.4%). Lung masses calculated using deep inspiration breath-hold and average CT were significantly larger (p < 0.05) than were some masses calculated using respiratory-phase and FB CT. Increased voxel size and smooth reconstruction kernels led to high lung mass estimates owing to partial volume effects. Conclusions: Organ mass correction is an important component of patient-specific internal radionuclide dosimetry. Lung mass calculation necessitates scan-based density correction to account for volume changes owing to respiration. The range of lung masses in the authors’ patient population represents lung doses for the same absorbed energy differing from 25% below to 64% above the dose found using reference phantom organ masses. With proper management of acquisition parameters and selection of FB or midexpiration breath-hold scans, lung mass estimates with about 10% population precision may be achieved.
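The paper's two-compartment mass formula is simple enough to state directly in code. The HU and volume inputs below are made-up illustrative values, not measurements from the study:

```python
def lung_mass_kg(lung_hu, air_hu, tissue_hu, volume_cm3):
    """Lung mass assuming each voxel is a linear mix of air and tissue:
    specific gravity = (lung HU - air HU) / (tissue HU - air HU),
    mass = specific gravity * total volume * 1.04 g/cm^3."""
    sg = (lung_hu - air_hu) / (tissue_hu - air_hu)
    return sg * volume_cm3 * 1.04 / 1000.0   # grams -> kilograms

# Hypothetical inputs: mean lung -750 HU, air -1000 HU, tissue 40 HU, 4 L lungs
mass = lung_mass_kg(-750.0, -1000.0, 40.0, 4000.0)   # -> 1.0 kg
```

The sensitivity of the estimate to voxel size and reconstruction kernel enters through the mean HU values, which is why the paper stresses acquisition-parameter management.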
EOS MLS Level 1B Data Processing Software. Version 3
NASA Technical Reports Server (NTRS)
Perun, Vincent S.; Jarnot, Robert F.; Wagner, Paul A.; Cofield, Richard E., IV; Nguyen, Honghanh T.; Vuu, Christina
2011-01-01
This software is an improvement on Version 2, which was described in EOS MLS Level 1B Data Processing, Version 2.2, NASA Tech Briefs, Vol. 33, No. 5 (May 2009), p. 34. It accepts the EOS MLS Level 0 science/engineering data and the EOS Aura spacecraft ephemeris/attitude data, and produces calibrated instrument radiances and associated engineering and diagnostic data. This version makes the code more robust, improves calibration, provides more diagnostic outputs, defines the Galactic core more finely, and fixes the equator crossing. The Level 1 processing software manages several different tasks. It qualifies each data quantity using instrument configuration and checksum data, as well as data transmission quality flags. Statistical tests are applied for data quality and reasonableness. The instrument engineering data (e.g., voltages, currents, temperatures, and encoder angles) are calibrated by the software, and the filter channel space reference measurements are interpolated onto the times of each limb measurement, with the interpolated values differenced from the measurements. Filter channel calibration target measurements are interpolated onto the times of each limb measurement and are used to compute radiometric gain. The total signal power in each digital autocorrelator spectrometer (DACS) is determined and analyzed during each data integration. The software converts each DACS data integration from an autocorrelation measurement in the time domain into a spectral measurement in the frequency domain, and separately estimates the spectrally smoothly varying and spectrally averaged components of the limb port signal arising from antenna emission and scattering effects. Limb radiances are also calibrated.
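The interpolate-and-difference calibration step can be sketched with synthetic numbers; the times and counts below are invented and stand in for space-reference and limb measurements:

```python
import numpy as np

# Reference (baseline) measurements are interpolated onto the limb-measurement
# times and then differenced from the limb measurements, as in the
# radiometric calibration step described above.
t_ref  = np.array([0.0, 10.0, 20.0])    # times of space-reference looks
v_ref  = np.array([1.00, 1.20, 1.10])   # reference counts at those times
t_limb = np.array([2.0, 5.0, 15.0])     # times of limb measurements
v_limb = np.array([3.50, 3.80, 3.60])   # limb counts

baseline = np.interp(t_limb, t_ref, v_ref)  # linear interpolation in time
signal = v_limb - baseline                  # background-subtracted limb signal
```

Gain calibration follows the same interpolation pattern, using the calibration-target looks instead of the space-reference looks.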
Ameliorating slice gaps in multislice magnetic resonance images: an interpolation scheme.
Kashou, Nasser H; Smith, Mark A; Roberts, Cynthia J
2015-01-01
Standard two-dimensional (2D) magnetic resonance imaging (MRI) clinical acquisition protocols utilize orthogonal plane images which contain slice gaps (SG). The purpose of this work is to introduce a novel interpolation method for these orthogonal plane MRI 2D datasets. Three goals can be achieved: (1) increasing the resolution based on a priori knowledge of the scanning protocol, (2) ameliorating the loss of data as a result of SG and (3) reconstructing a three-dimensional (3D) dataset from 2D images. MRI data was collected using a 3T GE scanner and simulated using Matlab. The procedure for validating the MRI data combination algorithm was performed using a Shepp-Logan and a Gaussian phantom in both 2D and 3D of varying matrix sizes (64-512), as well as on one MRI dataset of a human brain and on an American College of Radiology magnetic resonance accreditation phantom. The squared error and mean squared error were computed in comparing this scheme to common interpolating functions employed in MR consoles and workstations. The mean structural similarity index was computed in 2D as a means of qualitative image assessment. Additionally, MRI scans were used for qualitative assessment of the method. This new scheme was consistently more accurate than upsampling each orientation separately and averaging the upsampled data. An efficient new interpolation approach to resolve SG was developed. This scheme effectively fills in the missing data points by using orthogonal plane images. To date, there have been few attempts to combine the information of three MRI plane orientations using brain images. This has specific applications for clinical MRI, functional MRI, diffusion-weighted imaging/diffusion tensor imaging and MR angiography, where 2D slice acquisitions are used. In these cases, the 2D data can be combined using our method in order to obtain a 3D volume.
NASA Astrophysics Data System (ADS)
Đurđević, Boris; Jug, Irena; Jug, Danijel; Vukadinović, Vesna; Bogunović, Igor; Brozović, Bojana; Stipešević, Bojan
2017-04-01
Soil organic matter (SOM) plays a crucial role in soil health and productivity and is one of the key indicators for determining soil degradation and soil suitability for crop production. The continuing decline of organic matter in agroecosystem soils, due to inappropriate agricultural practice (burning and removal of crop residue, overgrazing, inappropriate tillage, etc.) and environmental conditions (climate change, extreme weather, erosion), leads to devastating soil degradation processes and decreases soil productivity. The main objective of this research is to compare three interpolation methods (Inverse Distance Weighting, IDW; Ordinary Kriging, OK; and Empirical Bayesian Kriging, EBK) and identify the best spatial predictor in order to ensure detailed analysis of the agricultural land in Osijek-Baranja County, Croatia. A total of 9,099 soil samples were compiled from the 0-30 cm layer and analyzed in the laboratory. The average SOM value in the study area was 2.66%, while 70.7% of samples had SOM below 3%. Among the applied methods, the lowest root mean square error was recorded for Empirical Bayesian Kriging, which assessed soil organic matter most accurately. The main advantage of EBK is that the process of creating a valid kriging model is automated, eliminating manual parameter adjustment, which reduced the uncertainty of the EBK model. The interpolation and visualization of the data showed that 85.7% of agricultural land in Osijek-Baranja County has SOM content lower than 3%, which may indicate some form of soil degradation process. By using interpolation methods combined with visualization of the data, we can detect problematic areas much more easily and, with additional analysis, suggest measures to repair degraded soils.
This kind of approach to problem solving in agriculture can be applied under various agroecological conditions and can significantly facilitate and accelerate the decision-making process, thus directly affecting the profitability and sustainability of agricultural production.
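Of the three methods compared above, IDW is the simplest to sketch. The sample coordinates and SOM percentages below are invented, and the power parameter is the common default of 2:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse Distance Weighting: each sample's weight falls off as 1/d^power,
    so nearby samples dominate the estimate at the query point."""
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if np.any(d == 0.0):                  # query coincides with a sample point
        return float(z_known[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * z_known) / np.sum(w))

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # sample locations
som = np.array([2.0, 3.0, 4.0])                       # % SOM at each sample
est = idw(pts, som, np.array([0.5, 0.5]))             # equidistant -> mean
```

Unlike the kriging variants, IDW provides no variance estimate, which is part of why EBK's automated, uncertainty-aware fitting performed best in the study.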
High-resolution daily gridded datasets of air temperature and wind speed for Europe
NASA Astrophysics Data System (ADS)
Brinckmann, S.; Krähenmann, S.; Bissolli, P.
2015-08-01
New high-resolution datasets for near-surface daily air temperature (minimum, maximum and mean) and daily mean wind speed for Europe (the CORDEX domain) are provided for the period 2001-2010 for the purpose of regional model validation in the framework of DecReg, a sub-project of the German MiKlip project, which aims to develop decadal climate predictions. The main input data sources are hourly SYNOP observations, partly supplemented by station data from the ECA&D dataset (http://www.ecad.eu). These data are quality tested to eliminate erroneous data and various kinds of inhomogeneities. Grids at a resolution of 0.044° (about 5 km) are derived by spatial interpolation of these station data over the CORDEX area. For temperature interpolation a modified version of the regression kriging method developed by Krähenmann et al. (2011) is used. First, predictor fields of altitude, continentality and zonal mean temperature are chosen for a regression applied to monthly station data. The residuals of the monthly regression and the deviations of the daily data from the monthly averages are interpolated using simple kriging in a second and third step. For wind speed a new method based on the concept used for temperature was developed, involving predictor fields of exposure, roughness length, coastal distance and ERA-Interim reanalysis wind speed at 850 hPa. Interpolation uncertainty is estimated by means of the kriging variance and regression uncertainties. Furthermore, to assess the quality of the final daily grid data, cross validation is performed. Explained variance ranges from 70 to 90% for monthly temperature and from 50 to 60% for monthly wind speed. The resulting RMSE for the final daily grid data amounts to 1-2 °C for daily temperature parameters and 1-1.5 m s⁻¹ for daily mean wind speed (depending on season and parameter). The datasets presented in this article are published at http://dx.doi.org/10.5676/DWD_CDC/DECREG0110v1.
NASA Astrophysics Data System (ADS)
Kumenko, A. I.; Kostyukov, V. N.; Kuz'minykh, N. Yu.
2016-10-01
To visualize the physical processes that occur in the journal bearings of the shafting of power-generating turbosets, a technique for preliminary calculation of a set of journal bearing characteristics over the domain of possible movements (DPM) of the rotor journals is proposed. The technique is based on interpolation of the oil film characteristics and is designed for use in the real-time diagnostic system COMPACS®. According to this technique, for each journal bearing, the domain of possible movement of the shaft journal is computed, then the domain is triangulated and the corresponding mesh is constructed. At each node of the mesh, all characteristics of the journal bearing required by the diagnostic system are calculated. Via shaft-position sensors, the system measures, in online mode, the instantaneous location of the shaft journal in the bearing and determines the averaged static position of the journals (the pivoting vector). Afterwards, continuous interpolation over the triangulation is performed, which allows real-time calculation of the static and dynamic forces that act on the rotor journal, the flow rate and temperature of the lubricant, and the friction power losses. Use of the proposed method on a running turboset enables diagnosing the technical condition of the shafting support system and promptly identifying the defects that determine the vibrational state and the overall reliability of the turboset. The authors report a number of examples of constructing the DPM and computing the basic static characteristics for elliptical journal bearings typical of large-scale power turbosets. To illustrate the interpolation method, the traditional approach to calculation of bearing properties is applied, based on a two-dimensional isothermal Reynolds equation that accounts for the mobility of the boundary of the oil film continuity.
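The per-triangle interpolation step can be sketched with barycentric coordinates. The triangle and nodal values below are invented; in the paper, the nodal values are the precomputed bearing characteristics and the query point is the measured journal position:

```python
import numpy as np

def bary_interp(p, tri, vals):
    """Linear interpolation of nodal values inside one mesh triangle:
    solve for the barycentric coordinates of p, then take the
    coordinate-weighted sum of the vertex values."""
    A = np.vstack([tri.T, np.ones(3)])   # rows: x-coords, y-coords, ones
    b = np.array([p[0], p[1], 1.0])
    lam = np.linalg.solve(A, b)          # barycentric weights, sum to 1
    return float(lam @ vals)

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # one mesh triangle
vals = np.array([10.0, 20.0, 30.0])     # a bearing characteristic at the nodes
x = bary_interp([0.25, 0.25], tri, vals)  # -> 17.5
```

A real-time system would first locate which triangle of the DPM mesh contains the measured journal position, then apply this interpolation for each stored characteristic.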
Chatterson, Leslie C; Leswick, David A; Fladeland, Derek A; Hunt, Megan M; Webster, Stephen; Lim, Hyun
2014-07-01
Custom bismuth-antimony shields were previously shown to reduce fetal dose by 53% on an 8DR (detector row) CT scanner without dynamic adaptive section collimation (DASC), automatic tube current modulation (ATCM) or adaptive statistical iterative reconstruction (ASiR). The purpose of this study is to compare the effective maternal and average fetal organ dose reduction with and without bismuth-antimony shields on a 64DR CT scanner using DASC, ATCM and ASiR during maternal CTPA. A phantom with a gravid prosthesis and a bismuth-antimony shield were used. Thermoluminescent dosimeters (TLDs) measured fetal radiation dose. The average fetal organ dose and effective maternal dose were determined at 100 kVp, scanning from the lung apices to the diaphragm utilizing DASC, ATCM and ASiR on a 64DR CT scanner, with and without shielding, in the first and third trimesters. Isolated assessment of DASC was done by comparing a new 8DR scan without DASC to a similar scan on the 64DR with DASC. Average third-trimester unshielded fetal dose was reduced from 0.22 mGy ± 0.02 on the 8DR to 0.13 mGy ± 0.03 with the conservative 64DR protocol that included 30% ASiR, DASC and ATCM (42% reduction, P<0.01). Use of a shield further reduced average third-trimester fetal dose to 0.04 mGy ± 0.01 (69% reduction, P<0.01). The average fetal organ dose reduction attributable to DASC alone was modest (6% reduction, from 0.17 mGy ± 0.02 to 0.16 mGy ± 0.02, P=0.014). First-trimester fetal organ dose on the 8DR protocol was 0.07 mGy ± 0.03. This was reduced to 0.05 mGy ± 0.03 on the 64DR protocol without shielding (30% reduction, P=0.009). Shields further reduced this dose to below accurately detectable levels. Effective maternal dose was reduced from 4.0 mSv on the 8DR to 2.5 mSv on the 64DR scanner using the conservative protocol (38% dose reduction). ASiR, ATCM and DASC combined significantly reduce effective maternal and fetal organ dose during CTPA.
Shields continue to be an effective means of fetal dose reduction. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Optimizing radiologist e-prescribing of CT oral contrast agent using a protocoling portal.
Wasser, Elliot J; Galante, Nicholas J; Andriole, Katherine P; Farkas, Cameron; Khorasani, Ramin
2013-12-01
The purpose of this study is to quantify the time expenditure associated with radiologist ordering of CT oral contrast media when using an integrated protocoling portal and to determine radiologists' perceptions of the ordering process. This prospective study was performed at a large academic tertiary care facility. Detailed timing information for CT inpatient oral contrast orders placed via the computerized physician order entry (CPOE) system was gathered over a 14-day period. Analyses evaluated the amount of physician time required for each component of the ordering process. Radiologists' perceptions of the ordering process were assessed by survey. Descriptive statistics and chi-square analysis were performed. A total of 96 oral contrast agent orders were placed by 13 radiologists during the study period. The average time necessary to create a protocol for each case was 40.4 seconds (average range by subject, 20.0-130.0 seconds; SD, 37.1 seconds), and the average total time to create and sign each contrast agent order was 27.2 seconds (range, 10.0-50.0 seconds; SD, 22.4 seconds). Overall, 52.5% (21/40) of survey respondents indicated that radiologist entry of oral contrast agent orders improved patient safety. A minority of respondents (15% [6/40]) indicated that contrast agent order entry was either very or extremely disruptive to workflow. Radiologist e-prescribing of CT oral contrast agents using CPOE can be embedded in a protocol workflow. Integration of health IT tools can help to optimize user acceptance and adoption.
NASA Astrophysics Data System (ADS)
Lee, Haeil; Lee, Hansang; Park, Minseok; Kim, Junmo
2017-03-01
Lung cancer is the most common cause of cancer-related death. To diagnose lung cancers in early stages, numerous studies and approaches have been developed for cancer screening with computed tomography (CT) imaging. In recent years, convolutional neural networks (CNN) have become one of the most common and reliable techniques in computer aided detection (CADe) and diagnosis (CADx), achieving state-of-the-art performance on various tasks. In this study, we propose a CNN classification system for false positive reduction of initially detected lung nodule candidates. First, image patches of lung nodule candidates are extracted from CT scans to train a CNN classifier. To reflect the volumetric contextual information of lung nodules in a 2D image patch, we propose weighted average image patch (WAIP) generation by averaging multiple slice images of lung nodule candidates. Moreover, to emphasize the central slices of lung nodules, slice images are locally weighted according to a Gaussian distribution and averaged to generate the 2D WAIP. With these extracted patches, a 2D CNN is trained to classify WAIPs of lung nodule candidates into positive and negative labels. We used the LUNA16 public challenge database to validate the performance of our approach for false positive reduction in lung CT nodule classification. Experiments show our approach improves the classification accuracy of lung nodules compared to a baseline 2D CNN with patches from a single slice image.
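The WAIP step, Gaussian-weighting the slices about the nodule's central slice before averaging, can be sketched as follows. The value of sigma is an assumed parameter, not taken from the paper:

```python
import numpy as np

def waip(stack, sigma=1.0):
    """Weighted average image patch: collapse a (slices, H, W) stack to a 2D
    patch using Gaussian weights centered on the middle slice, so central
    slices contribute the most."""
    n = stack.shape[0]
    z = np.arange(n) - (n - 1) / 2.0      # slice offsets from the center
    w = np.exp(-z**2 / (2.0 * sigma**2))  # Gaussian weight per slice
    w /= w.sum()                          # normalize so weights sum to 1
    return np.tensordot(w, stack, axes=1) # weighted sum over the slice axis

patch = waip(np.ones((5, 32, 32)))        # constant stack -> constant patch
```

The resulting 2D patch is then fed to the CNN in place of a single central slice, giving it some volumetric context at 2D cost.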
Cost-effectiveness analysis of PET-CT-guided management for locally advanced head and neck cancer.
Smith, A F; Hall, P S; Hulme, C T; Dunn, J A; McConkey, C C; Rahman, J K; McCabe, C; Mehanna, H
2017-11-01
A recent large United Kingdom (UK) clinical trial demonstrated that positron-emission tomography-computed tomography (PET-CT)-guided administration of neck dissection (ND) in patients with advanced head and neck cancer after primary chemo-radiotherapy treatment produces similar survival outcomes to planned ND (standard care) and is cost-effective over a short-term horizon. Further assessment of long-term outcomes is required to inform a robust adoption decision. Here we present results of a lifetime cost-effectiveness analysis of PET-CT-guided management from a UK secondary care perspective. Initial 6-month cost and health outcomes were derived from trial data; subsequent incidence of recurrence and mortality was simulated using a de novo Markov model. Health benefit was measured in quality-adjusted life years (QALYs) and costs reported in 2015 British pounds. Model parameters were derived from trial data and published literature. Sensitivity analyses were conducted to assess the impact of uncertainty and broader National Health Service (NHS) and personal social services (PSS) costs on the results. PET-CT management produced an average per-person lifetime cost saving of £1485 and an additional 0.13 QALYs. At a £20,000 willingness-to-pay per additional QALY threshold, there was a 75% probability that PET-CT was cost-effective, and the results remained cost-effective over the majority of sensitivity analyses. When adopting a broader NHS and PSS perspective, PET-CT management produced an average saving of £700 and had an 81% probability of being cost-effective. This analysis indicates that PET-CT-guided management is cost-effective in the long-term and supports the case for wide-scale adoption. Copyright © 2017 Elsevier Ltd. All rights reserved.
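The de novo Markov model can be illustrated by a minimal cohort simulation. The three states, transition probabilities, and utilities below are hypothetical placeholders, not the trial's parameters, and discounting and costs are omitted; the sketch only shows how QALYs accumulate over cycles:

```python
import numpy as np

# Hypothetical 3-state annual-cycle Markov cohort model:
# states are well / recurrence / dead, with illustrative probabilities.
P = np.array([[0.92, 0.05, 0.03],
              [0.00, 0.80, 0.20],
              [0.00, 0.00, 1.00]])       # each row sums to 1
utility = np.array([0.85, 0.60, 0.00])   # QALY weight accrued per state-year
state = np.array([1.0, 0.0, 0.0])        # whole cohort starts disease-free

qalys = 0.0
for _ in range(40):                      # lifetime horizon in 1-year cycles
    qalys += float(state @ utility)      # QALYs accrued this cycle
    state = state @ P                    # advance the cohort one cycle
```

A full cost-effectiveness model would attach per-state costs, discount both streams, and compare the two strategies' totals to obtain the incremental cost per QALY.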
Piñeiro-Vázquez, A T; Canul-Solis, J R; Alayón-Gamboa, J A; Chay-Canul, A J; Ayala-Burgos, A J; Solorio-Sánchez, F J; Aguilar-Pérez, C F; Ku-Vera, J C
2017-02-01
The aim of the experiment was to assess the effect of condensed tannins (CT) on feed intake, dry matter digestibility, nitrogen balance, supply of microbial protein to the small intestine and energy utilization in cattle fed a basal ration of Pennisetum purpureum grass. Five heifers (Bos taurus × Bos indicus) with an average live weight of 295 ± 19 kg were allotted to five treatments consisting of increasing levels of CT (0, 1, 2, 3 and 4% CT/kg DM) in a 5 × 5 Latin square design. Dry matter intake (DMI) was similar (p > 0.05) between treatments containing 0, 1, 2 and 3% CT/kg DM and was reduced (p < 0.05) at 4% CT (5.71 kg DM/day) with respect to that observed at 0% CT (6.65 kg DM/day). Nitrogen balance, purine derivative excretion in urine, microbial protein synthesis and efficiency of synthesis of microbial nitrogen in the rumen were not affected (p ≥ 0.05) by the increase in the level of condensed tannins in the ration. Energy loss as CH₄ was on average 2.7% of the gross energy consumed daily. Metabolizable energy intake was 49.06 MJ/day in cattle fed low-quality tropical grass with a DMI of 6.27 kg/day. It is concluded that CT concentrations between 2 and 3% of ration DM reduced energy loss as CH₄ by 31.3% and 47.6%, respectively, without affecting intake of dry and organic matter; however, digestibility of dry and organic matter is negatively affected. Journal of Animal Physiology and Animal Nutrition © 2016 Blackwell Verlag GmbH.
Mohamed, Abdallah S R; Cardenas, Carlos E; Garden, Adam S; Awan, Musaddiq J; Rock, Crosby D; Westergaard, Sarah A; Brandon Gunn, G; Belal, Abdelaziz M; El-Gowily, Ahmed G; Lai, Stephen Y; Rosenthal, David I; Fuller, Clifton D; Aristophanous, Michalis
2017-08-01
To identify the radio-resistant subvolumes in pretreatment FDG-PET by mapping the spatial location of the origin of tumor recurrence after IMRT for head-and-neck squamous cell cancer to the pretreatment FDG-PET/CT. Patients with local/regional recurrence after IMRT with available FDG-PET/CT and post-failure CT were included. For each patient, both pre-therapy PET/CT and recurrence CT were co-registered with the planning CT (pCT). A 4-mm radius was added to the centroid of mapped recurrence growth target volumes (rGTV's) to create recurrence nidus-volumes (NVs). The overlap between boost-tumor-volumes (BTV) representing different SUV thresholds/margins combinations and NVs was measured. Forty-seven patients were eligible. Forty-two (89.4%) had type A central high dose failure. Twenty-six (48%) of type A rGTVs were at the primary site and 28 (52%) were at the nodal site. The mean dose of type A rGTVs was 71Gy. BTV consisting of 50% of the maximum SUV plus 10mm margin was the best subvolume for dose boosting due to high coverage of primary site NVs (92.3%), low average relative volume to CTV1 (41%), and least average percent voxels outside CTV1 (19%). The majority of loco-regional recurrences originate in the regions of central-high-dose. When correlated with pretreatment FDG-PET, the majority of recurrences originated in an area that would be covered by additional 10mm margin on the volume of 50% of the maximum FDG uptake. Copyright © 2017 Elsevier B.V. All rights reserved.
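The boost-target-volume construction described above, thresholding at a fraction of the maximum SUV, can be sketched in a few lines; the array values and the 50% threshold below are illustrative, not the study's data:

```python
def btv_mask(suv, frac=0.5):
    """Boost target volume: voxels at or above frac * SUVmax."""
    mx = max(max(row) for row in suv)
    return [[v >= frac * mx for v in row] for row in suv]

# toy 2D SUV slice (hypothetical values)
suv = [[1, 2, 3, 2],
       [2, 6, 8, 3],
       [1, 7, 9, 2],
       [1, 2, 3, 1]]
mask = btv_mask(suv)
covered = sum(v for row in mask for v in row)
print(covered)  # -> 4 voxels in the 50%-of-SUVmax subvolume
```

A real implementation would additionally dilate this mask by the 10 mm margin and measure its overlap with the recurrence nidus volumes.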
Measurement of cardiac output from dynamic pulmonary circulation time CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yee, Seonghwan, E-mail: Seonghwan.Yee@Beaumont.edu; Scalzetti, Ernest M.
Purpose: To introduce a method of estimating cardiac output from the dynamic pulmonary circulation time CT that is primarily used to determine the optimal time window of CT pulmonary angiography (CTPA). Methods: Dynamic pulmonary circulation time CT series, acquired for eight patients, were retrospectively analyzed. The dynamic CT series was acquired, prior to the main CTPA, in cine mode (1 frame/s) for a single slice at the level of the main pulmonary artery covering the cross sections of the ascending aorta (AA) and descending aorta (DA) during the infusion of iodinated contrast. The time series of contrast changes obtained for DA, which is downstream of AA, was assumed to be related to the time series for AA by convolution with a delay function. The delay time constant in the delay function, representing the average time interval between the cross sections of AA and DA, was determined by least-squares error fitting between the convolved AA time series and the DA time series. The cardiac output was then calculated by dividing the volume of the aortic arch between the cross sections of AA and DA (estimated from the single-slice CT image) by the average time interval, and multiplying the result by a correction factor. Results: The mean cardiac output value for six patients was 5.11 l/min (with a standard deviation of 1.57 l/min), which is in good agreement with literature values; the data for the other two patients were too noisy for processing. Conclusions: The dynamic single-slice pulmonary circulation time CT series can also be used to estimate cardiac output.
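A minimal sketch of the delay-fit idea, assuming the delay function acts as a pure time shift, with synthetic enhancement curves and a hypothetical arch volume (the paper's correction factor is omitted):

```python
def delay_fit(aa, da, dt=1.0, max_shift=10):
    """Return the delay (in seconds) minimizing the squared error
    between the shifted AA curve and the DA curve."""
    best_tau, best_err = 0.0, float("inf")
    for s in range(max_shift + 1):
        # shift AA forward by s samples and compare to DA
        err = sum((aa[i - s] - da[i]) ** 2 for i in range(s, len(da)))
        if err < best_err:
            best_tau, best_err = s * dt, err
    return best_tau

# synthetic enhancement curves sampled at 1 frame/s: DA is AA delayed by 3 s
aa = [0, 0, 10, 40, 80, 60, 30, 10, 5, 0, 0, 0, 0]
da = [0, 0, 0, 0, 0, 10, 40, 80, 60, 30, 10, 5, 0]
tau = delay_fit(aa, da)             # average AA-to-DA transit time
arch_volume_ml = 250.0              # hypothetical inter-aortic arch volume
cardiac_output_l_min = arch_volume_ml / tau * 60 / 1000
print(tau, round(cardiac_output_l_min, 2))  # -> 3.0 5.0
```

In the paper the fit uses a full convolution kernel rather than a pure shift, but the division of arch volume by transit time is the same final step.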
Interpolation bias for the inverse compositional Gauss-Newton algorithm in digital image correlation
NASA Astrophysics Data System (ADS)
Su, Yong; Zhang, Qingchuan; Xu, Xiaohai; Gao, Zeren; Wu, Shangquan
2018-01-01
It is believed that the classic forward additive Newton-Raphson (FA-NR) algorithm and the recently introduced inverse compositional Gauss-Newton (IC-GN) algorithm give rise to roughly equal interpolation bias. Questioning the correctness of this statement, this paper presents a thorough analysis of interpolation bias for the IC-GN algorithm. A theoretical model is built to analytically characterize the dependence of interpolation bias on the speckle image, the target image interpolation, and the reference image gradient estimation. The interpolation biases of the FA-NR algorithm and the IC-GN algorithm can be significantly different, with a relative difference that can exceed 80%. For the IC-GN algorithm, the choice of gradient estimator can strongly affect the interpolation bias; the relative difference can reach 178%. Since the mean bias errors are insensitive to image noise, the proposed theoretical model remains valid in the presence of noise. To provide more implementation details, source codes are uploaded as a supplement.
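Interpolation bias arises because the interpolant systematically deviates from the true intensity at sub-pixel positions. The following toy example (a 1-D sinusoid with linear interpolation, not the paper's speckle model or its bias estimator) shows the error vanishing at integer positions and peaking near the half-pixel shift:

```python
import math

def linear_interp(samples, x):
    """Linearly interpolate integer-sampled data at real position x."""
    i = int(math.floor(x))
    f = x - i
    return (1 - f) * samples[i] + f * samples[i + 1]

# integer-sampled sinusoid (a stand-in for a speckle intensity profile)
signal = lambda x: math.sin(0.8 * x)
samples = [signal(i) for i in range(20)]

# interpolation error as a function of the sub-pixel position
errors = []
for k in range(11):
    shift = k / 10.0
    x = 5 + shift
    errors.append(abs(linear_interp(samples, x) - signal(x)))

# error is zero at integer positions and largest near the half-pixel shift
print(errors[0], errors[5], errors[10])
```

Higher-order interpolants (bicubic, B-spline) shrink this systematic error but do not remove its dependence on the sub-pixel phase, which is the effect the paper's model characterizes for the IC-GN algorithm.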
Gradient-based interpolation method for division-of-focal-plane polarimeters.
Gao, Shengkui; Gruev, Viktor
2013-01-14
Recent advancements in nanotechnology and nanofabrication have allowed for the emergence of the division-of-focal-plane (DoFP) polarization imaging sensors. These sensors capture polarization properties of the optical field at every imaging frame. However, the DoFP polarization imaging sensors suffer from large registration error as well as reduced spatial-resolution output. These drawbacks can be improved by applying proper image interpolation methods for the reconstruction of the polarization results. In this paper, we present a new gradient-based interpolation method for DoFP polarimeters. The performance of the proposed interpolation method is evaluated against several previously published interpolation methods by using visual examples and root mean square error (RMSE) comparison. We found that the proposed gradient-based interpolation method can achieve better visual results while maintaining a lower RMSE than other interpolation methods under various dynamic ranges of a scene ranging from dim to bright conditions.
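As a rough sketch of DoFP demosaicking (plain neighbor-averaging interpolation, not the gradient-based method of the paper), the missing polarizer channels at a pixel can be estimated from same-orientation neighbors and combined into Stokes parameters; the 2x2 orientation layout and scene values below are illustrative:

```python
import math

# orientation mosaic repeating every 2x2 pixels: [[0, 45], [90, 135]] degrees
ANGLES = [[0, 45], [90, 135]]

def ideal_intensity(theta_deg, s0, s1, s2):
    """Intensity behind a linear polarizer at theta for Stokes (s0, s1, s2)."""
    t = math.radians(2 * theta_deg)
    return 0.5 * (s0 + s1 * math.cos(t) + s2 * math.sin(t))

# simulate a uniform, partially polarized scene on a 4x4 DoFP sensor
S0, S1, S2 = 2.0, 1.0, 0.0
raw = [[ideal_intensity(ANGLES[r % 2][c % 2], S0, S1, S2)
        for c in range(4)] for r in range(4)]

def interp_channel(raw, r, c, angle):
    """Average the nearest same-orientation neighbors within a 3x3 window."""
    vals = [raw[rr][cc]
            for rr in range(max(0, r - 1), min(len(raw), r + 2))
            for cc in range(max(0, c - 1), min(len(raw[0]), c + 2))
            if ANGLES[rr % 2][cc % 2] == angle]
    return sum(vals) / len(vals)

# reconstruct all four channels at pixel (1, 1), then the polarization state
i0 = interp_channel(raw, 1, 1, 0)
i45 = interp_channel(raw, 1, 1, 45)
i90 = interp_channel(raw, 1, 1, 90)
i135 = raw[1][1]                      # measured directly at this pixel
s0_est = 0.5 * (i0 + i45 + i90 + i135)
dolp = ((i0 - i90) ** 2 + (i45 - i135) ** 2) ** 0.5 / s0_est
print(round(dolp, 3))  # -> 0.5 (degree of linear polarization)
```

The gradient-based method in the paper improves on this by steering the averaging along image edges, which reduces the registration error the abstract describes.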
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, H; Chen, J
Purpose: Metal objects create severe artifacts in kilo-voltage (kV) CT image reconstructions due to the high attenuation coefficients of high atomic number objects. Most techniques devised to reduce this artifact use a two-step approach, which does not reliably yield high-quality reconstructed images. Thus, for accuracy and simplicity, this work presents a one-step reconstruction method based on a modified penalized weighted least-squares (PWLS) technique. Methods: Existing techniques for metal artifact reduction mostly adopt a two-step approach, conducting an additional reconstruction with projection data modified from the initial reconstruction. This procedure does not consistently perform well due to the uncertainties in manipulating the metal-contaminated projection data by thresholding and linear interpolation. This study proposes a one-step reconstruction process using a new PWLS operation with total-variation (TV) minimization, without manipulating the projection data. The PWLS for CT reconstruction has been investigated using a pre-defined weight based on the variance of the projection datum at each detector bin. It works well when reconstructing CT images from metal-free projection data, but it does not appropriately penalize metal-contaminated projection data. The proposed work defines the weight at each projection element under the assumption of a Poisson random variable. This small modification using element-wise penalization has a large impact on reducing metal artifacts. For evaluation, the proposed technique was assessed with two noisy, metal-contaminated digital phantoms, against the existing PWLS with TV minimization and the two-step approach. Results: Visual inspection showed that the proposed PWLS with TV minimization greatly improved metal artifact reduction relative to the other techniques.
Numerically, the new approach lowered the normalized root-mean-square error by about 30% and 60% for the two cases, respectively, compared to the two-step method. Conclusion: The new PWLS operation shows promise for improving metal artifact reduction in CT imaging, as well as simplifying the reconstruction procedure.
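The element-wise Poisson weighting idea can be illustrated simply: if detected counts follow N = I0·exp(-p) for line integral p, the variance of the estimated line integral is roughly 1/N, so the PWLS weight is proportional to N, which automatically down-weights metal-contaminated rays. A sketch with hypothetical values:

```python
import math

def pwls_weights(projections, i0=1e5):
    """Element-wise PWLS weights ~ detected counts under a Poisson model."""
    return [i0 * math.exp(-p) for p in projections]

# line integrals: soft tissue (~2), bone (~5), and a metal-contaminated ray (~12)
p = [2.0, 5.0, 12.0]
w = pwls_weights(p)
# the metal ray receives a weight many orders of magnitude smaller
print([round(x, 1) for x in w])
```

In the full method these weights enter the data-fidelity term of the PWLS objective alongside the TV penalty; the sketch only shows how the Poisson assumption yields the element-wise penalization.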
Xu, Yuan; Bai, Ti; Yan, Hao; Ouyang, Luo; Pompos, Arnold; Wang, Jing; Zhou, Linghong; Jiang, Steve B.; Jia, Xun
2015-01-01
Cone-beam CT (CBCT) has become the standard image guidance tool for patient setup in image-guided radiation therapy. However, due to its large illumination field, scattered photons severely degrade its image quality. While kernel-based scatter correction methods have been used routinely in the clinic, it is still desirable to develop Monte Carlo (MC) simulation-based methods due to their accuracy. However, the high computational burden of the MC method has prevented routine clinical application. This paper reports our recent development of a practical method of MC-based scatter estimation and removal for CBCT. In contrast with conventional MC approaches that estimate scatter signals using a scatter-contaminated CBCT image, our method used a planning CT image for MC simulation, which has the advantages of accurate image intensity and absence of image truncation. In our method, the planning CT was first rigidly registered with the CBCT. Scatter signals were then estimated via MC simulation. After scatter signals were removed from the raw CBCT projections, a corrected CBCT image was reconstructed. The entire workflow was implemented on a GPU platform for high computational efficiency. Strategies such as projection denoising, CT image downsampling, and interpolation along the angular direction were employed to further enhance the calculation speed. We studied the impact of key parameters in the workflow on the resulting accuracy and efficiency, based on which the optimal parameter values were determined. Our method was evaluated in numerical simulation, phantom, and real patient cases. In the simulation cases, our method reduced mean HU errors from 44 HU to 3 HU and from 78 HU to 9 HU in the full-fan and the half-fan cases, respectively. In both the phantom and the patient cases, image artifacts caused by scatter, such as ring artifacts around the bowtie area, were reduced. 
With all the techniques employed, we achieved a computation time of less than 30 sec, including the time for both the scatter estimation and CBCT reconstruction steps. The efficacy of our method and its high computational efficiency make it attractive for clinical use. PMID:25860299
3-D Interpolation in Object Perception: Evidence from an Objective Performance Paradigm
ERIC Educational Resources Information Center
Kellman, Philip J.; Garrigan, Patrick; Shipley, Thomas F.; Yin, Carol; Machado, Liana
2005-01-01
Object perception requires interpolation processes that connect visible regions despite spatial gaps. Some research has suggested that interpolation may be a 3-D process, but objective performance data and evidence about the conditions leading to interpolation are needed. The authors developed an objective performance paradigm for testing 3-D…
NASA Technical Reports Server (NTRS)
Jasinski, Michael F.; Borak, Jordan S.
2008-01-01
Many earth science modeling applications employ continuous input data fields derived from satellite data. Environmental factors, sensor limitations and algorithmic constraints lead to data products of inherently variable quality. This necessitates interpolation of one form or another in order to produce high quality input fields free of missing data. The present research tests several interpolation techniques as applied to satellite-derived leaf area index, an important quantity in many global climate and ecological models. The study evaluates and applies a variety of interpolation techniques for the Moderate Resolution Imaging Spectroradiometer (MODIS) Leaf-Area Index Product over the time period 2001-2006 for a region containing the conterminous United States. Results indicate that the accuracy of an individual interpolation technique depends upon the underlying land cover. Spatial interpolation provides better results in forested areas, while temporal interpolation performs more effectively over non-forest cover types. Combination of spatial and temporal approaches offers superior interpolative capabilities to any single method, and in fact, generation of continuous data fields requires a hybrid approach such as this.
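The temporal component of such gap-filling can be sketched as linear interpolation across missing samples in a per-pixel LAI time series (values below are illustrative; a full hybrid method would fall back to spatial neighbors where temporal anchors are missing):

```python
def temporal_fill(series):
    """Linearly interpolate None gaps in a time series; ends are held constant."""
    out = list(series)
    n = len(out)
    for i in range(n):
        if out[i] is None:
            # nearest valid samples before and after the gap
            lo = next((j for j in range(i - 1, -1, -1) if out[j] is not None), None)
            hi = next((j for j in range(i + 1, n) if out[j] is not None), None)
            if lo is not None and hi is not None:
                f = (i - lo) / (hi - lo)
                out[i] = out[lo] + f * (out[hi] - out[lo])
            elif lo is not None:
                out[i] = out[lo]
            elif hi is not None:
                out[i] = out[hi]
    return out

# hypothetical 8-day LAI composites for one pixel, with cloud-induced gaps
lai = [1.2, None, None, 3.0, 3.4, None, 2.6]
filled = temporal_fill(lai)
print(filled)
```

Spatial interpolation would instead average valid same-date neighbors; the study's finding is that which of the two works better depends on the underlying land cover.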
Studies of a Next-Generation Silicon-Photomultiplier-Based Time-of-Flight PET/CT System.
Hsu, David F C; Ilan, Ezgi; Peterson, William T; Uribe, Jorge; Lubberink, Mark; Levin, Craig S
2017-09-01
This article presents system performance studies for the Discovery MI PET/CT system, a new time-of-flight system based on silicon photomultipliers. System performance and clinical imaging were compared between this next-generation system and other commercially available PET/CT and PET/MR systems, as well as between different reconstruction algorithms. Methods: Spatial resolution, sensitivity, noise-equivalent counting rate, scatter fraction, counting rate accuracy, and image quality were characterized with the National Electrical Manufacturers Association NU-2 2012 standards. Energy resolution and coincidence time resolution were measured. Tests were conducted independently on two Discovery MI scanners installed at Stanford University and Uppsala University, and the results were averaged. Back-to-back patient scans were also performed between the Discovery MI, Discovery 690 PET/CT, and SIGNA PET/MR systems. Clinical images were reconstructed using both ordered-subset expectation maximization and Q.Clear (block-sequential regularized expectation maximization with point-spread function modeling) and were examined qualitatively. Results: The averaged full widths at half maximum (FWHMs) of the radial/tangential/axial spatial resolution reconstructed with filtered backprojection at 1, 10, and 20 cm from the system center were, respectively, 4.10/4.19/4.48 mm, 5.47/4.49/6.01 mm, and 7.53/4.90/6.10 mm. The averaged sensitivity was 13.7 cps/kBq at the center of the field of view. The averaged peak noise-equivalent counting rate was 193.4 kcps at 21.9 kBq/mL, with a scatter fraction of 40.6%. The averaged contrast recovery coefficients for the image-quality phantom were 53.7, 64.0, 73.1, 82.7, 86.8, and 90.7 for the 10-, 13-, 17-, 22-, 28-, and 37-mm-diameter spheres, respectively. The average photopeak energy resolution was 9.40% FWHM, and the average coincidence time resolution was 375.4 ps FWHM. 
Clinical image comparisons between the PET/CT systems demonstrated the high quality of the Discovery MI. Comparisons between the Discovery MI and SIGNA showed a similar spatial resolution and overall imaging performance. Lastly, the results indicated significantly enhanced image quality and contrast-to-noise performance for Q.Clear, compared with ordered-subset expectation maximization. Conclusion: Excellent performance was achieved with the Discovery MI, including 375 ps FWHM coincidence time resolution and sensitivity of 14 cps/kBq. Comparisons between reconstruction algorithms and other multimodal silicon photomultiplier and non-silicon photomultiplier PET detector system designs indicated that performance can be substantially enhanced with this next-generation system. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
Real-time Interpolation for True 3-Dimensional Ultrasound Image Volumes
Ji, Songbai; Roberts, David W.; Hartov, Alex; Paulsen, Keith D.
2013-01-01
We compared trilinear interpolation to voxel nearest neighbor and distance-weighted algorithms for fast and accurate processing of true 3-dimensional ultrasound (3DUS) image volumes. In this study, the computational efficiency and interpolation accuracy of the 3 methods were compared on the basis of a simulated 3DUS image volume, 34 clinical 3DUS image volumes from 5 patients, and 2 experimental phantom image volumes. We show that trilinear interpolation improves interpolation accuracy over both the voxel nearest neighbor and distance-weighted algorithms yet achieves real-time computational performance that is comparable to the voxel nearest neighbor algorithm (1–2 orders of magnitude faster than the distance-weighted algorithm) as well as the fastest pixel-based algorithms for processing tracked 2-dimensional ultrasound images (0.035 seconds per 2-dimensional cross-sectional image [76,800 pixels interpolated, or 0.46 ms/1000 pixels] and 1.05 seconds per full volume with a 1-mm3 voxel size [4.6 million voxels interpolated, or 0.23 ms/1000 voxels]). On the basis of these results, trilinear interpolation is recommended as a fast and accurate interpolation method for rectilinear sampling of 3DUS image acquisitions, which is required to facilitate subsequent processing and display during operating room procedures such as image-guided neurosurgery. PMID:21266563
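A minimal trilinear interpolation sketch (nested-list volume, no bounds checking) illustrates the weighting over the 8 surrounding voxels; for a locally linear field it reproduces values exactly:

```python
def trilinear(vol, x, y, z):
    """Trilinearly interpolate a 3D volume vol[i][j][k] at real (x, y, z)."""
    xi, yi, zi = int(x), int(y), int(z)
    fx, fy, fz = x - xi, y - yi, z - zi
    val = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                # weight of each corner voxel is the product of axis fractions
                w = ((fx if dx else 1 - fx) *
                     (fy if dy else 1 - fy) *
                     (fz if dz else 1 - fz))
                val += w * vol[xi + dx][yi + dy][zi + dz]
    return val

# linear test field vol[i][j][k] = i + 2j + 3k: interpolation is exact
vol = [[[i + 2 * j + 3 * k for k in range(3)]
        for j in range(3)] for i in range(3)]
print(trilinear(vol, 0.5, 0.25, 0.75))  # -> 3.25
```

Nearest-neighbor sampling replaces the 8-term sum with a single voxel lookup, which is why it is faster but less accurate, as the study quantifies.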
Real-time interpolation for true 3-dimensional ultrasound image volumes.
Ji, Songbai; Roberts, David W; Hartov, Alex; Paulsen, Keith D
2011-02-01
We compared trilinear interpolation to voxel nearest neighbor and distance-weighted algorithms for fast and accurate processing of true 3-dimensional ultrasound (3DUS) image volumes. In this study, the computational efficiency and interpolation accuracy of the 3 methods were compared on the basis of a simulated 3DUS image volume, 34 clinical 3DUS image volumes from 5 patients, and 2 experimental phantom image volumes. We show that trilinear interpolation improves interpolation accuracy over both the voxel nearest neighbor and distance-weighted algorithms yet achieves real-time computational performance that is comparable to the voxel nearest neighbor algorithm (1-2 orders of magnitude faster than the distance-weighted algorithm) as well as the fastest pixel-based algorithms for processing tracked 2-dimensional ultrasound images (0.035 seconds per 2-dimensional cross-sectional image [76,800 pixels interpolated, or 0.46 ms/1000 pixels] and 1.05 seconds per full volume with a 1-mm3 voxel size [4.6 million voxels interpolated, or 0.23 ms/1000 voxels]). On the basis of these results, trilinear interpolation is recommended as a fast and accurate interpolation method for rectilinear sampling of 3DUS image acquisitions, which is required to facilitate subsequent processing and display during operating room procedures such as image-guided neurosurgery.
NASA Astrophysics Data System (ADS)
Ohmer, Marc; Liesch, Tanja; Goeppert, Nadine; Goldscheider, Nico
2017-11-01
The selection of the best possible method to interpolate a continuous groundwater surface from point data of groundwater levels is a controversial issue. In the present study, four deterministic and five geostatistical interpolation methods (global polynomial interpolation, local polynomial interpolation, inverse distance weighting, radial basis function, simple, ordinary, universal and empirical Bayesian Kriging, and co-Kriging) and six error statistics (ME, MAE, MAPE, RMSE, RMSSE, Pearson R) were examined for a Jurassic karst aquifer and a Quaternary alluvial aquifer. We investigated the possible propagation of uncertainty from the chosen interpolation method into the calculation of the estimated vertical groundwater exchange between the aquifers. Furthermore, we validated the results with eco-hydrogeological data, including a comparison between calculated groundwater depths and the geographic locations of karst springs, wetlands and surface waters. These results show that calculated inter-aquifer exchange rates based on different interpolations of groundwater potentials may vary greatly depending on the chosen interpolation method (by a factor of >10). Therefore, the choice of an interpolation method should be made with care, taking different error measures as well as additional data for plausibility control into account. The most accurate results were obtained with co-Kriging incorporating secondary data (e.g. topography, river levels).
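One of the error statistics used for such comparisons, the cross-validated RMSE, can be sketched for a simple IDW interpolator (hypothetical well coordinates and heads; a real study would run this for every method and several error measures):

```python
import math

def idw(points, x, y, power=2.0):
    """Inverse distance weighting prediction at (x, y) from (px, py, value) points."""
    num = den = 0.0
    for px, py, v in points:
        d = math.hypot(x - px, y - py)
        if d == 0:
            return v          # exact hit on an observation
        w = d ** -power
        num += w * v
        den += w
    return num / den

def loo_rmse(points, power=2.0):
    """Leave-one-out cross-validation RMSE for the IDW interpolator."""
    errs = []
    for i, (x, y, v) in enumerate(points):
        rest = points[:i] + points[i + 1:]
        errs.append(idw(rest, x, y, power) - v)
    return math.sqrt(sum(e * e for e in errs) / len(errs))

# hypothetical groundwater observations: (x, y, head in m a.s.l.)
wells = [(0, 0, 310.0), (1, 0, 308.5), (0, 1, 309.2),
         (1, 1, 307.8), (2, 1, 306.9), (1, 2, 307.1)]
print(round(loo_rmse(wells, power=1.0), 3), round(loo_rmse(wells, power=2.0), 3))
```

Comparing such scores across methods (and power parameters) is the quantitative part; the study's point is that the winner should also be checked against independent eco-hydrogeological evidence.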
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernandez, A; Boone, J
Purpose: To estimate normalized mean glandular dose coefficients for dedicated breast CT (DgN-CT) using breast-CT-derived phantoms and compare them to estimates using cylindrical phantoms. Methods: Segmented breast CT (bCT) volume data sets (N=219) were used to measure effective diameter profiles and were grouped into quintiles by volume. The profiles were averaged within each quintile to represent the range of breast sizes found clinically. These profiles were then used to generate five voxelized computational phantoms (V1, V2, V3, V4, V5 for the small to large phantom sizes, respectively), which were loaded into the MCNP6 lattice geometry to simulate normalized mean glandular dose coefficients (DgN-CT) using the system specifications of the Doheny-prototype bCT scanner in our laboratory. The DgN-CT coefficients derived from the bCT-derived breast-shaped phantoms were compared to those generated using a simpler cylindrical phantom with a constant volume and the following constraints: (1) length = 1.5 × radius; (2) radius determined at the chest wall (Rcw); and (3) radius determined at the phantom center-of-mass (Rcm). Results: The change in DgN-CT coefficients, averaged across all phantom sizes, was -0.5%, 19.8%, and 1.3% for constraints 1-3, respectively. This suggests that the cylindrical assumption is a good approximation if the radius is taken at the breast center-of-mass, but using the radius at the chest wall results in an underestimation of the glandular dose. Conclusion: The DgN-CT coefficients for bCT-derived phantoms were compared against those for a cylindrical phantom and proved to be essentially equivalent when the cylinder dimensions satisfied L = 1.5 × r or when the radius was set to Rcm.
While this suggests that for dosimetry applications a patient's breast can be approximated as a cylinder (if the correct radius is applied), this assumes a homogeneous composition of breast tissue, and the results may be different if the realistic heterogeneous distribution of glandular tissue is considered. Research reported in this paper was supported in part by the National Cancer Institute of the National Institutes of Health under award R01CA181081. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
An Earth longwave radiation climate model
NASA Technical Reports Server (NTRS)
Yang, S. K.
1984-01-01
An Earth outgoing longwave radiation (OLWR) climate model was constructed for radiation budget study. Required information is provided by an empirical 100 mb water vapor mixing ratio equation and a mixing ratio interpolation scheme. Cloud top temperature is adjusted so that the calculation agrees with NOAA scanning radiometer measurements. Both clear-sky and cloudy-sky cases are calculated and discussed for global average, zonal average and world-wide distributed cases. The results agree well with the satellite observations. The clear-sky case shows that the OLWR field is highly modulated by water vapor, especially in the tropics. The strongest longitudinal variation occurs in the tropics. This variation can be mostly explained by the strong water vapor gradient. Although in the zonal average case the tropics have a minimum in OLWR, the minimum is essentially contributed by a few very low flux regions, such as the Amazon, Indonesia and the Congo.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noid, G; Tai, A; Liu, Y
Purpose: It is desirable to increase CT soft-tissue contrast to improve delineation of the tumor target and/or surrounding organs at risk (OAR) in RT planning and delivery guidance. The purpose of this work is to investigate the use of monoenergetic decompositions obtained from dual-energy (DE) CT to improve soft-tissue contrast. Methods: CT data were acquired for 5 prostate and 5 pancreas patients and a phantom with a CT scanner (Definition AS Open, Siemens) using both sequential DE protocols and standard protocols. For the DE protocols, the scanner rapidly performs two acquisitions at 80 kVp and 140 kVp. The CT numbers of soft-tissue inserts in the phantom (CTED/Gammex) were measured across the spectrum of available monoenergetic decompositions (40 to 140 keV) and compared to the standard protocol (120 kVp, 0.6 pitch, 18 mGy CTDIvol). Contrast, defined as the difference in the average CT number between target and OAR, was measured for all subjects and compared between the DE and standard protocols. Results: Monoenergetic decompositions of the phantom demonstrate an enhancement of soft-tissue contrast as the energy is decreased. For instance, relative to the 120 kVp scans, the liver ED insert increased in CT number by 25 HU while the adipose ED insert decreased by 50 HU. The lowest-energy decompositions featured the highest contrast between target and OAR. For every patient, the contrast increased by decomposing at 40 keV. The average increase in contrast at 40 keV relative to a 120 kVp scan was 25.05±17.28 HU for prostate patients and 19.21±17.39 HU for pancreas patients. Conclusion: Low-energy monoenergetic decompositions from dual-energy CT substantially increase soft-tissue contrast. At the lowest achievable monoenergetic decompositions the maximum soft-tissue contrast is achieved and the delineation of target and OAR is improved. Thus it is beneficial to use DECT in radiation oncology. Supported by Siemens.
Li, Lixin; Losser, Travis; Yorke, Charles; Piltner, Reinhard
2014-09-03
Epidemiological studies have identified associations between mortality and changes in concentration of particulate matter. These studies have highlighted the public concerns about health effects of particulate air pollution. Modeling fine particulate matter PM2.5 exposure risk and monitoring day-to-day changes in PM2.5 concentration is a critical step for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods, in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year of 2009, at both the census block group level and county level. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce the spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are calculated with the help of a factor under the assumption that spatial and temporal dimensions are equally important when interpolating a continuous changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores computational issues (computer processing speed) faced during implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, named k-d tree, are adapted in this paper to address the computational challenges. Significant computational improvement has been achieved. 
Finally, a web-based spatiotemporal IDW-based interpolation application is designed and implemented where users can visualize and animate spatiotemporal interpolation results.
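The extension approach described above can be sketched by treating time as a third coordinate scaled by a factor c before applying ordinary IDW (the readings and the choice c = 1 below are illustrative):

```python
import math

def st_idw(samples, x, y, t, c=1.0, power=2.0):
    """Space-time IDW: time is an extra axis scaled by factor c
    (the 'extension approach'), so space and time are interpolated jointly."""
    num = den = 0.0
    for sx, sy, stime, v in samples:
        d = math.sqrt((x - sx) ** 2 + (y - sy) ** 2 + (c * (t - stime)) ** 2)
        if d == 0:
            return v          # exact hit on an observation
        w = d ** -power
        num += w * v
        den += w
    return num / den

# hypothetical PM2.5 readings: (x, y, day, ug/m3)
obs = [(0, 0, 1, 8.0), (1, 0, 1, 9.5), (0, 1, 2, 7.0), (1, 1, 2, 10.5)]
# estimate at the space-time midpoint of the four observations
est = st_idw(obs, 0.5, 0.5, 1.5, c=1.0)
print(round(est, 2))  # -> 8.75 (all four points equidistant, so a plain mean)
```

The factor c encodes the paper's assumption about the relative importance of the spatial and temporal dimensions; cross-validation over c and the IDW power is what the study automates at scale.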
Li, Lixin; Losser, Travis; Yorke, Charles; Piltner, Reinhard
2014-01-01
Epidemiological studies have identified associations between mortality and changes in concentration of particulate matter. These studies have highlighted the public concerns about health effects of particulate air pollution. Modeling fine particulate matter PM2.5 exposure risk and monitoring day-to-day changes in PM2.5 concentration is a critical step for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods, in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year of 2009, at both the census block group level and county level. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce the spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are calculated with the help of a factor under the assumption that spatial and temporal dimensions are equally important when interpolating a continuous changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores computational issues (computer processing speed) faced during implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, named k-d tree, are adapted in this paper to address the computational challenges. Significant computational improvement has been achieved. 
Finally, a web-based spatiotemporal IDW-based interpolation application is designed and implemented where users can visualize and animate spatiotemporal interpolation results. PMID:25192146
Guberina, Nika; Forsting, Michael; Ringelstein, Adrian
2017-06-15
To evaluate the dose-reduction potential with different lens protectors for patients undergoing cranial computed tomography (CT) scans. Eye lens dose was assessed in vitro (α-Al2O3:C thermoluminescence dosemeters) using an Alderson-Rando phantom® in cranial CT protocols at different CT scanners (SOMATOM-Definition-AS+®(CT1) and SOMATOM-Definition-Flash® (CT2)) using two different lens-protection systems (Somatex® (SOM) and Medical Imaging Systems® (MIS)). Summarised percentage of the transmitted photons: (1) CT1 (a) unenhanced CT (nCT) with gantry angulation: SOM = 103%, MIS = 111%; (2) CT2 (a) nCT without gantry angulation: SOM = 81%, MIS = 91%; (b) CT angiography (CTA) with automatic dose-modulation technique: SOM = 39%, MIS = 74%; (c) CTA without dose-modulation technique: SOM = 22%, MIS = 48%; (d) CT perfusion: SOM = 44%, MIS = 69%. SOM showed a higher dose-reduction potential than MIS maintaining equal image quality. Lens-protection systems are most effective in CTA protocols without dose-reduction techniques. Lens-protection systems lower the average eye lens dose during CT scans up to 1/3 (MIS) and 2/3 (SOM), respectively, if the eye lens is exposed to the direct beam of radiation. Considering both the CT protocol and the material of lens protectors, they seem to be mandatory for reducing the radiation exposure of the eye lens. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, S; Hoffman, J; McNitt-Gray, M
Purpose: Iterative reconstruction methods show promise for improving image quality and lowering the dose in helical CT. We aim to develop a novel model-based reconstruction method that offers potential for dose reduction with reasonable computation speed and storage requirements for vendor-independent reconstruction from clinical data on a normal desktop computer. Methods: In 2012, Xu proposed reconstructing on rotating slices to exploit helical symmetry and reduce the storage requirements for the CT system matrix. Inspired by this concept, we have developed a novel reconstruction method incorporating the stored-system-matrix approach together with iterative coordinate-descent (ICD) optimization. A penalized-least-squares objective function with a quadratic penalty term is solved analytically voxel-by-voxel, sequentially iterating along the axial direction first, followed by the transaxial direction. Eight in-plane (transaxial) neighbors are used for the ICD algorithm. The forward problem is modeled via a unique approach that combines the principle of Joseph's method with trilinear B-spline interpolation to enable accurate reconstruction with low storage requirements. Iterations are accelerated with multi-CPU OpenMP libraries. For preliminary evaluations, we reconstructed (1) a simulated 3D ellipse phantom and (2) an ACR accreditation phantom dataset exported from a clinical scanner (Definition AS, Siemens Healthcare). Image quality was evaluated in the resolution module. Results: Image quality was excellent for the ellipse phantom. For the ACR phantom, image quality was comparable to clinical reconstructions and reconstructions using the open-source FreeCT-wFBP software. Also, we did not observe any deleterious impact associated with the utilization of rotating slices. The system matrix storage requirement was only 4.5 GB, and reconstruction time was 50 seconds per iteration.
Conclusion: Our reconstruction method shows potential for furthering research in low-dose helical CT, in particular as part of our ongoing development of an acquisition/reconstruction pipeline for generating images under a wide range of conditions. Our algorithm will be made available open-source as “FreeCT-ICD”. NIH U01 CA181156; Disclosures (McNitt-Gray): Institutional research agreement, Siemens Healthcare; Past recipient, research grant support, Siemens Healthcare; Consultant, Toshiba America Medical Systems; Consultant, Samsung Electronics.
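The voxel-by-voxel analytic update described in the abstract above can be sketched in simplified form. This is a minimal sketch, not the FreeCT-ICD implementation: a plain ridge penalty stands in for the paper's 8-neighbor quadratic penalty, and all names are illustrative.

```python
import numpy as np

def icd_penalized_ls(A, b, lam, iters=100):
    """Iterative coordinate descent (ICD) for the penalized least-squares
    objective ||Ax - b||^2 + lam * ||x||^2. Each coordinate is updated with
    its exact analytic minimizer in turn, keeping a running residual.
    (A ridge penalty stands in for the paper's neighborhood penalty.)"""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros(A.shape[1])
    r = A @ x - b                    # running residual Ax - b
    col_sq = (A ** 2).sum(axis=0)    # precomputed ||a_j||^2
    for _ in range(iters):
        for j in range(A.shape[1]):
            # Exact 1D minimizer along coordinate j (quadratic objective)
            step = (A[:, j] @ r + lam * x[j]) / (col_sq[j] + lam)
            x[j] -= step
            r -= step * A[:, j]
    return x
```

Because each one-dimensional subproblem is quadratic, the per-voxel update is closed-form, which is what makes the approach fast once the system matrix is stored.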
NASA Astrophysics Data System (ADS)
Dormer, James D.; Halicek, Martin; Ma, Ling; Reilly, Carolyn M.; Schreibmann, Eduard; Fei, Baowei
2018-02-01
Cardiovascular disease is a leading cause of death in the United States. The identification of cardiac diseases on conventional three-dimensional (3D) CT can have many clinical applications. An automated method that can distinguish between healthy and diseased hearts could improve diagnostic speed and accuracy when the only modality available is conventional 3D CT. In this work, we proposed and implemented convolutional neural networks (CNNs) to identify diseased hearts on CT images. Six patients with healthy hearts and six with previous cardiovascular disease events received chest CT. After the left atrium for each heart was segmented, 2D and 3D patches were created. A subset of the patches was then used to train separate convolutional neural networks using leave-one-out cross-validation of patient pairs. The results of the two neural networks were compared, with 3D patches producing the higher testing accuracy. The full list of 3D patches from the left atrium was then classified using the optimal 3D CNN model, and receiver operating characteristic (ROC) curves were produced. The final average area under the curve (AUC) from the ROC curves was 0.840 +/- 0.065 and the average accuracy was 78.9% +/- 5.9%. This demonstrates that the CNN-based method is capable of distinguishing healthy hearts from those with previous cardiovascular disease.
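The AUC reported above can be computed directly from classifier scores via the rank-sum identity: it is the probability that a randomly chosen positive case outscores a randomly chosen negative one. A minimal sketch, not tied to the authors' CNN pipeline:

```python
def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve via the rank-sum identity: the fraction
    of (positive, negative) pairs where the positive case scores higher,
    counting ties as half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

This O(n*m) pairwise form is handy for small validation sets; for large ones the same quantity is computed from ranks.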
Variability of dental cone beam CT grey values for density estimations
Pauwels, R; Nackaerts, O; Bellaiche, N; Stamatakis, H; Tsiklakis, K; Walker, A; Bosmans, H; Bogaerts, R; Jacobs, R; Horner, K
2013-01-01
Objective The aim of this study was to investigate the use of dental cone beam CT (CBCT) grey values for density estimations by calculating the correlation with multislice CT (MSCT) values and the grey value error after recalibration. Methods A polymethyl methacrylate (PMMA) phantom was developed containing inserts of different density: air, PMMA, hydroxyapatite (HA) 50 mg cm−3, HA 100, HA 200 and aluminium. The phantom was scanned on 13 CBCT devices and 1 MSCT device. Correlation between CBCT grey values and CT numbers was calculated, and the average error of the CBCT values was estimated in the medium-density range after recalibration. Results Pearson correlation coefficients ranged between 0.7014 and 0.9996 in the full-density range and between 0.5620 and 0.9991 in the medium-density range. The average error of CBCT voxel values in the medium-density range was between 35 and 1562. Conclusion Even though most CBCT devices showed a good overall correlation with CT numbers, large errors can be seen when using the grey values in a quantitative way. Although it could be possible to obtain pseudo-Hounsfield units from certain CBCTs, alternative methods of assessing bone tissue should be further investigated. Advances in knowledge The suitability of dental CBCT for density estimations was assessed, involving a large number of devices and protocols. The possibility for grey value calibration was thoroughly investigated. PMID:23255537
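The two quantities reported above, the Pearson correlation between CBCT grey values and CT numbers and the residual error after a linear recalibration, can be sketched as follows. The functions are generic; any insert values fed in are illustrative placeholders, not the study's measurements.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between paired value series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

def recalibration_error(grey, hu):
    """Fit a linear grey-value -> HU mapping and return the mean absolute
    residual, analogous to the average grey value error after recalibration."""
    grey = np.asarray(grey, dtype=float)
    hu = np.asarray(hu, dtype=float)
    slope, intercept = np.polyfit(grey, hu, 1)
    return float(np.abs(slope * grey + intercept - hu).mean())
```

A high Pearson r with a large recalibration error is exactly the situation the abstract warns about: good rank agreement does not imply quantitatively usable grey values.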
NASA Astrophysics Data System (ADS)
Joshi, K. D.; Marchant, T. E.; Moore, C. J.
2017-03-01
A shading correction algorithm for the improvement of cone-beam CT (CBCT) images (Phys. Med. Biol. 53 5719-33) has been further developed, optimised and validated extensively using 135 clinical CBCT images of patients undergoing radiotherapy treatment of the pelvis, lungs and head and neck. An automated technique has been developed to efficiently analyse the large number of clinical images. Small regions of similar tissue (for example fat tissue) are automatically identified using CT images. The same regions on the corresponding CBCT image are analysed to ensure that they do not contain pixels representing multiple types of tissue. The mean value of all selected pixels and the non-uniformity, defined as the median absolute deviation of the mean values in each small region, are calculated. Comparisons between CT and raw and corrected CBCT images are then made. Analysis of fat regions in pelvis images shows an average difference in mean pixel value between CT and CBCT of 136.0 HU in raw CBCT images, which is reduced to 2.0 HU after the application of the shading correction algorithm. The average difference in non-uniformity of fat pixels is reduced from 33.7 in raw CBCT to 2.8 in shading-corrected CBCT images. Similar results are obtained in the analysis of lung and head and neck images.
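The non-uniformity measure defined above, the median absolute deviation of the per-region mean pixel values, is straightforward to compute once the region means are available; a minimal sketch:

```python
import numpy as np

def non_uniformity(region_means):
    """Median absolute deviation (MAD) of per-region mean pixel values,
    the non-uniformity measure described in the abstract above."""
    m = np.asarray(region_means, dtype=float)
    return float(np.median(np.abs(m - np.median(m))))
```

The MAD is preferred over the standard deviation here because a few regions corrupted by residual shading artefacts do not dominate the statistic.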
Szabo, Bence T; Aksoy, Seçil; Repassy, Gabor; Csomo, Krisztian; Dobo-Nagy, Csaba; Orhan, Kaan
2017-06-09
The aim of this study was to compare the paranasal sinus volumes obtained by manual and semiautomatic imaging software programs using both CT and CBCT imaging. 121 computed tomography (CT) and 119 cone beam computed tomography (CBCT) examinations were selected from the databases of the authors' institutes. The Digital Imaging and Communications in Medicine (DICOM) images were imported into 3-dimensional imaging software, in which hand mode and semiautomatic tracing methods were used to measure the volumes of both maxillary sinuses and the sphenoid sinus. The determined volumetric means were compared to previously published averages. Isometric CBCT-based volume determination results were closer to the real volume conditions, whereas the non-isometric CT-based volume measurements defined coherently lower volumes. By comparing the 2 volume measurement modes, the values gained from hand mode were closer to the literature data. Furthermore, CBCT-based image measurement results corresponded to the known averages. Our results suggest that CBCT images provide reliable volumetric information that can be depended on for artificial organ construction, and which may aid the guidance of the operator prior to or during the intervention.
Ghosh, Payel; Chandler, Adam G; Altinmakas, Emre; Rong, John; Ng, Chaan S
2016-01-01
The aim of this study was to investigate the feasibility of shuttle-mode computed tomography (CT) technology for body perfusion applications by quantitatively assessing and correcting motion artifacts. Noncontrast shuttle-mode CT scans (10 phases, 2 nonoverlapping bed locations) were acquired from 4 patients on a GE 750HD CT scanner. Shuttling effects were quantified using Euclidean distances (between-phase and between-bed locations) of corresponding fiducial points on the shuttle and reference phase scans (prior to shuttle mode). Motion correction with nonrigid registration was evaluated using sum-of-squares differences and distances between centers of segmented volumes of interest on shuttle and reference images. Fiducial point analysis showed an average shuttling motion of 0.85 ± 1.05 mm (between-bed) and 1.18 ± 1.46 mm (between-phase), respectively. The volume-of-interest analysis of the nonrigid registration results showed improved sum-of-squares differences from 2950 to 597, between-bed distance from 1.64 to 1.20 mm, and between-phase distance from 2.64 to 1.33 mm, respectively, averaged over all cases. Shuttling effects introduced during shuttle-mode CT acquisitions can be computationally corrected for body perfusion applications.
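The fiducial-point analysis above amounts to averaging Euclidean distances between corresponding 3D points on two scans; a minimal sketch:

```python
import numpy as np

def mean_fiducial_displacement(points_a, points_b):
    """Average Euclidean distance (e.g. in mm) between corresponding
    fiducial points, given as N x 3 coordinate arrays, as used to
    quantify shuttling motion between scans."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    return float(np.linalg.norm(a - b, axis=1).mean())
```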
Validation of a deformable image registration technique for cone beam CT-based dose verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moteabbed, M., E-mail: mmoteabbed@partners.org; Sharp, G. C.; Wang, Y.
2015-01-15
Purpose: As radiation therapy evolves toward more adaptive techniques, image guidance plays an increasingly important role, not only in patient setup but also in monitoring the delivered dose and adapting the treatment to patient changes. This study aimed to validate a method for evaluation of delivered intensity modulated radiotherapy (IMRT) dose based on multimodal deformable image registration (DIR) for prostate treatments. Methods: A pelvic phantom was scanned with CT and cone-beam computed tomography (CBCT). Both images were digitally deformed using two realistic patient-based deformation fields. The original CT was then registered to the deformed CBCT resulting in a secondary deformed CT. The registration quality was assessed as the ability of the DIR method to recover the artificially induced deformations. The primary and secondary deformed CT images as well as vector fields were compared to evaluate the efficacy of the registration method and its suitability for dose calculation. PLASTIMATCH, a free and open-source software package, was used for deformable image registration. A B-spline algorithm with optimized parameters was used to achieve the best registration quality. Geometric image evaluation was performed through voxel-based Hounsfield unit (HU) and vector field comparison. For dosimetric evaluation, IMRT treatment plans were created and optimized on the original CT image and recomputed on the two warped images to be compared. The dose volume histograms were compared for the warped structures that were identical in both warped images. This procedure was repeated for the phantom with full, half full, and empty bladder. Results: The results indicated mean HU differences of up to 120 between registered and ground-truth deformed CT images. However, when the CBCT intensities were calibrated using a region of interest (ROI)-based calibration curve, these differences were reduced by up to 60%.
Similarly, the mean differences in average vector field lengths decreased from 10.1 to 2.5 mm when CBCT was calibrated prior to registration. The results showed no dependence on the level of bladder filling. In comparison with the dose calculated on the primary deformed CT, differences in mean dose averaged over all organs were 0.2% and 3.9% for dose calculated on the secondary deformed CT with and without CBCT calibration, respectively, and 0.5% for dose calculated directly on the calibrated CBCT, for the full-bladder scenario. Gamma analysis for the distance to agreement of 2 mm and 2% of prescribed dose indicated a pass rate of 100% for both cases involving calibrated CBCT and on average 86% without CBCT calibration. Conclusions: Using deformable registration on the planning CT images to evaluate the IMRT dose based on daily CBCTs was found feasible. The proposed method will provide an accurate dose distribution using planning CT and pretreatment CBCT data, avoiding the additional uncertainties introduced by CBCT inhomogeneity and artifacts. This is a necessary initial step toward future image-guided adaptive radiotherapy of the prostate.
SU-E-P-49: Evaluation of Image Quality and Radiation Dose of Various Unenhanced Head CT Protocols
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, L; Khan, M; Alapati, K
2015-06-15
Purpose: To evaluate the diagnostic value of various unenhanced head CT protocols and predict an acceptable radiation dose level for head CT exams. Methods: Our retrospective analysis included 3 groups, 20 patients per group, who underwent clinical routine unenhanced adult head CT examination. All exams were performed axially with 120 kVp. Three protocols, 380 mAs without iterative reconstruction and automAs, 340 mAs with iterative reconstruction without automAs, and 340 mAs with iterative reconstruction and automAs, were applied to each group of patients respectively. The images were reconstructed with H30, J30 for brain window and H60, J70 for bone window. Images acquired with the three protocols were randomized and blindly reviewed by three radiologists. A 5-point scale was used to rate each exam. The percentage of exams scored above 3 and the average scores of each protocol were calculated for each reviewer and tissue type. Results: For protocols without automAs, the average scores of bone window with iterative reconstruction were higher than those without iterative reconstruction for each reviewer, although the radiation dose was 10 percent lower. 100 percent of exams were scored 3 or higher and the average scores were above 4 for both brain and bone reconstructions. The CTDIvols were 64.4 and 57.8 mGy for 380 and 340 mAs, respectively. With automAs, the radiation dose varied with head size, resulting in an average CTDIvol of 47.5 mGy, ranging between 39.5 and 56.5 mGy. 93 and 98 percent of exams were scored greater than 3 for brain and bone windows, respectively. The diagnostic confidence level and image quality of exams with automAs were less than those without automAs for each reviewer. Conclusion: According to these results, the mAs was reduced to 300 with automAs off for head CT exams. The radiation dose was 20 percent lower than for the original protocol and the CTDIvol was reduced to 51.2 mGy.
Hermite-Birkhoff interpolation in the nth roots of unity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cavaretta, A.S. Jr.; Sharma, A.; Varga, R.S.
1980-06-01
Consider, as nodes for polynomial interpolation, the nth roots of unity. For a sufficiently smooth function f(z), we require a polynomial p(z) to interpolate f and certain of its derivatives at each node. It is shown that the so-called Polya conditions, which are necessary for unique interpolation, are in this setting also sufficient.
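At the nth roots of unity, plain Lagrange interpolation (the simplest instance of the Hermite-Birkhoff problem above, with no derivative conditions) has a closed form: the coefficients of the unique degree < n interpolant are a scaled discrete Fourier transform of the nodal values. A small numerical sketch of this well-known identity:

```python
import numpy as np

def interp_roots_of_unity(values):
    """Coefficients c_0..c_{n-1} of the unique polynomial of degree < n
    interpolating the given values at the nodes z_k = exp(2*pi*i*k/n).
    At these nodes, Lagrange interpolation reduces to a scaled DFT:
    c = fft(values) / n under NumPy's sign convention."""
    v = np.asarray(values, dtype=complex)
    return np.fft.fft(v) / len(v)

# Example: interpolating f(z) = z**2 at the 4th roots of unity
# should recover the coefficient vector [0, 0, 1, 0].
nodes = np.exp(2j * np.pi * np.arange(4) / 4)
coeffs = interp_roots_of_unity(nodes ** 2)
```

This is why roots of unity are such convenient nodes: uniqueness and an O(n log n) construction come for free, which is part of the appeal of the setting studied in the abstract.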
Empirical performance of interpolation techniques in risk-neutral density (RND) estimation
NASA Astrophysics Data System (ADS)
Bahaludin, H.; Abdullah, M. H.
2017-03-01
The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. Firstly, the empirical performance is evaluated by using statistical analysis based on the implied mean and the implied variance of RND. Secondly, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial and smoothing spline. The results of the LOOCV pricing error show that fourth-order polynomial interpolation provides the best fit to option prices, with the lowest error.
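The LOOCV pricing error proposed above can be sketched for polynomial interpolation across strikes: each option is held out in turn, the polynomial is refit on the rest, and the held-out price is predicted. Names and data below are illustrative, and the smoothing-spline variant is omitted.

```python
import numpy as np

def loocv_pricing_error(strikes, prices, degree):
    """Leave-one-out cross-validation RMS pricing error for polynomial
    interpolation of option prices across strikes (illustrative sketch)."""
    strikes = np.asarray(strikes, dtype=float)
    prices = np.asarray(prices, dtype=float)
    sq_errs = []
    for i in range(len(strikes)):
        mask = np.arange(len(strikes)) != i   # hold out option i
        coef = np.polyfit(strikes[mask], prices[mask], degree)
        sq_errs.append((np.polyval(coef, strikes[i]) - prices[i]) ** 2)
    return float(np.sqrt(np.mean(sq_errs)))
```

Comparing this error across candidate degrees (e.g. 2 vs. 4) is the selection criterion the study proposes.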
4D CT amplitude binning for the generation of a time-averaged 3D mid-position CT scan
NASA Astrophysics Data System (ADS)
Kruis, Matthijs F.; van de Kamer, Jeroen B.; Belderbos, José S. A.; Sonke, Jan-Jakob; van Herk, Marcel
2014-09-01
The purpose of this study was to develop a method to use amplitude binned 4D-CT (A-4D-CT) data for the construction of mid-position CT data and to compare the results with data created from phase-binned 4D-CT (P-4D-CT) data. For the latter purpose we have developed two measures which describe the regularity of the 4D data and we have tried to correlate these measures with the regularity of the external respiration signal. 4D-CT data was acquired for 27 patients on a combined PET-CT scanner. The 4D data were reconstructed twice, using phase and amplitude binning. The 4D frames of each dataset were registered using a quadrature-based optical flow method. After registration the deformation vector field was repositioned to the mid-position. Since amplitude-binned 4D data does not provide temporal information, we corrected the mid-position for the occupancy of the bins. We quantified the differences between the two mid-position datasets in terms of tumour offset and amplitude differences. Furthermore, we measured the standard deviation of the image intensity over the respiration after registration (σregistration) and the regularity of the deformation vector field (the mean Jacobian variation, written Δ|J| here) to quantify the quality of the 4D-CT data. These measures were correlated to the regularity of the external respiration signal (σsignal). The two irregularity measures, Δ|J| and σregistration, were dependent on each other (p < 0.0001, R2 = 0.80 for P-4D-CT, R2 = 0.74 for A-4D-CT). For all datasets amplitude binning resulted in lower Δ|J| and σregistration, and large decreases led to visible quality improvements in the mid-position data. The quantity of artefact decrease was correlated to the irregularity of the external respiratory signal. The average tumour offset between the phase and amplitude binned mid-position without occupancy correction was 0.42 mm in the caudal direction (10.6% of the amplitude).
After correction this was reduced to 0.16 mm in the caudal direction (4.1% of the amplitude). Similar relative offsets were found at the diaphragm. We have devised a method to use amplitude-binned 4D-CT to construct a motion model and generate a mid-position planning CT for radiotherapy treatment purposes. We compared the systematic offset of this mid-position model with a motion model derived from P-4D-CT. We found that the A-4D-CT led to a decrease of local artefacts and that this decrease was correlated to the irregularity of the external respiration signal.
Globally optimal tumor segmentation in PET-CT images: a graph-based co-segmentation method.
Han, Dongfeng; Bayouth, John; Song, Qi; Taurani, Aakant; Sonka, Milan; Buatti, John; Wu, Xiaodong
2011-01-01
Tumor segmentation in PET and CT images is notoriously challenging due to the low spatial resolution in PET and low contrast in CT images. In this paper, we have proposed a general framework to use both PET and CT images simultaneously for tumor segmentation. Our method utilizes the strength of each imaging modality: the superior contrast of PET and the superior spatial resolution of CT. We formulate this problem as a Markov Random Field (MRF) based segmentation of the image pair with a regularized term that penalizes the segmentation difference between PET and CT. Our method simulates the clinical practice of delineating tumor simultaneously using both PET and CT, and is able to concurrently segment tumor from both modalities, achieving globally optimal solutions in low-order polynomial time by a single maximum flow computation. The method was evaluated on clinically relevant tumor segmentation problems. The results showed that our method can effectively make use of both PET and CT image information, yielding segmentation accuracy of 0.85 in Dice similarity coefficient and an average median Hausdorff distance (HD) of 6.4 mm, a 10% (resp., 16%) improvement compared to the graph cuts method solely using the PET (resp., CT) images.
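The Dice similarity coefficient used for evaluation above is twice the overlap divided by the combined size of the two segmentations; a minimal sketch on binary masks:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient 2*|A ∩ B| / (|A| + |B|) between two
    binary segmentation masks; two empty masks are treated as identical."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    total = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / total if total else 1.0
```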
Deformable planning CT to cone-beam CT image registration in head-and-neck cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou Jidong; Guerrero, Mariana; Chen, Wenjuan
2011-04-15
Purpose: The purpose of this work was to implement and validate a deformable CT to cone-beam computed tomography (CBCT) image registration method in head-and-neck cancer to eventually facilitate automatic target delineation on CBCT. Methods: Twelve head-and-neck cancer patients underwent a planning CT and weekly CBCT during the 5-7 week treatment period. The 12 planning CT images (moving images) of these patients were registered to their weekly CBCT images (fixed images) via the symmetric force Demons algorithm and using a multiresolution scheme. Histogram matching was used to compensate for the intensity difference between the two types of images. Using nine known anatomic points as registration targets, the accuracy of the registration was evaluated using the target registration error (TRE). In addition, region-of-interest (ROI) contours drawn on the planning CT were morphed to the CBCT images and the volume overlap index (VOI) between registered contours and manually delineated contours was evaluated. Results: The mean TRE value of the nine target points was less than 3.0 mm, the slice thickness of the planning CT. Of the 369 target points evaluated for registration accuracy, the average TRE value was 2.6±0.6 mm. The mean TRE for bony tissue targets was 2.4±0.2 mm, while the mean TRE for soft tissue targets was 2.8±0.2 mm. The average VOI between the registered and manually delineated ROI contours was 76.2±4.6%, which is consistent with that reported in previous studies. Conclusions: The authors have implemented and validated a deformable image registration method to register planning CT images to weekly CBCT images in head-and-neck cancer cases. The accuracy of the TRE values suggests that they can be used as a promising tool for automatic target delineation on CBCT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Y; Turian, J; Templeton, A
Purpose: PET/CT provides important functional information for radiotherapy targeting of cervical cancer. However, repeated PET/CT procedures for external beam and subsequent brachytherapy expose patients to additional radiation and are not cost effective. Our goal is to investigate the possibility of propagating PET-active volumes for brachytherapy procedures through deformable image registration (DIR) of earlier PET/CT and ultimately to minimize the number of PET/CT image sessions required. Methods: Nine cervical cancer patients each received their brachytherapy preplanning PET/CT at the end of EBRT with a Syed template in place. The planning PET/CT was acquired on the day of brachytherapy treatment with the actual applicator (Syed or Tandem and Ring) and rigidly registered. The PET/CT images were then deformably registered creating a third (deformed) image set for target prediction. Regions of interest with standardized uptake values (SUV) greater than 65% of maximum SUV were contoured as target volumes in all three sets of PET images. The predictive value of the registered images was evaluated by comparing the preplanning and deformed PET volumes with the planning PET volume using Dice's coefficient (DC) and center-of-mass (COM) displacement. Results: The average DCs were 0.12±0.14 and 0.19±0.16 for rigid and deformable predicted target volumes, respectively. The average COM displacements were 1.9±0.9 cm and 1.7±0.7 cm for rigid and deformable registration, respectively. The DCs were improved by deformable registration, however, both were lower than published data for DIR in other modalities and clinical sites. Anatomical changes caused by different brachytherapy applicators could have posed a challenge to the DIR algorithm. The physiological change from interstitial needle placement may also contribute to lower DC.
Conclusion: The clinical use of DIR in PET/CT for cervical cancer brachytherapy appears to be limited by applicator choice and requires further investigation.
SU-E-J-101: Improved CT to CBCT Deformable Registration Accuracy by Incorporating Multiple CBCTs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Godley, A; Stephans, K; Olsen, L Sheplan
2015-06-15
Purpose: Combining prior day CBCT contours with STAPLE was previously shown to improve automated prostate contouring. These accurate STAPLE contours are now used to guide the planning CT to pre-treatment CBCT deformable registration. Methods: Six IGRT prostate patients with daily kilovoltage CBCT had their original planning CT and 9 CBCTs contoured by the same physician. These physician contours for the planning CT and each prior CBCT are deformed to match the current CBCT anatomy, producing multiple contour sets. These sets are then combined using STAPLE into one optimal set (e.g. for day 3 CBCT, combine contours produced using the plan plus day 1 and 2 CBCTs). STAPLE computes a probabilistic estimate of the true contour from this collection of contours by maximizing sensitivity and specificity. The deformation field from planning CT to CBCT registration is then refined by matching its deformed contours to the STAPLE contours. ADMIRE (Elekta Inc.) was used for this. The refinement does not force perfect agreement of the contours, typically Dice’s Coefficient (DC) of > 0.9 is obtained, and the image difference metric remains in the optimization of the deformable registration. Results: The average DC between physician delineated CBCT contours and deformed planning CT contours for the bladder, rectum and prostate was 0.80, 0.79 and 0.75, respectively. The accuracy significantly improved to 0.89, 0.84 and 0.84 (P<0.001 for all) when using the refined deformation field. The average time to run STAPLE with five scans and refine the planning CT deformation was 66 seconds on a Tesla K20c GPU. Conclusion: Accurate contours generated from multiple CBCTs provided guidance for CT to CBCT deformable registration, significantly improving registration accuracy as measured by contour DC. A more accurate deformation field is now available for transferring dose or electron density to the CBCT for adaptive planning. Research grant from Elekta.
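STAPLE itself is an expectation-maximization algorithm that weights each input contour by its estimated sensitivity and specificity; as a crude illustration of the underlying idea of fusing multiple deformed contour sets into one estimate, a per-voxel majority vote can stand in. This sketch is a simplified stand-in, not STAPLE:

```python
import numpy as np

def majority_vote(masks):
    """Fuse multiple binary contour masks by per-voxel majority vote.
    (STAPLE proper is an EM algorithm that additionally estimates each
    input's sensitivity and specificity; this is a simplified stand-in.)"""
    stack = np.asarray(masks, dtype=bool)
    # A voxel is foreground if strictly more than half the inputs agree.
    return stack.sum(axis=0) * 2 > len(stack)
```

Unlike the vote, STAPLE down-weights consistently unreliable inputs, which matters when some prior-day contours are poorly deformed.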
NASA Astrophysics Data System (ADS)
Castillo, Richard; Castillo, Edward; McCurdy, Matthew; Gomez, Daniel R.; Block, Alec M.; Bergsma, Derek; Joy, Sarah; Guerrero, Thomas
2012-04-01
To determine the spatial overlap agreement between four-dimensional computed tomography (4D CT) ventilation and single photon emission computed tomography (SPECT) perfusion hypo-functioning pulmonary defect regions in a patient population with malignant airway stenosis. Treatment planning 4D CT images were obtained retrospectively for ten lung cancer patients with radiographically demonstrated airway obstruction due to gross tumor volume. Each patient also received a SPECT perfusion study within one week of the planning 4D CT, and prior to the initiation of treatment. Deformable image registration was used to map corresponding lung tissue elements between the extreme component phase images, from which quantitative three-dimensional (3D) images representing the local pulmonary specific ventilation were constructed. Semi-automated segmentation of the percentile perfusion distribution was performed to identify regional defects distal to the known obstructing lesion. Semi-automated segmentation was similarly performed by multiple observers to delineate corresponding defect regions depicted on 4D CT ventilation. Normalized Dice similarity coefficient (NDSC) indices were determined for each observer between SPECT perfusion and 4D CT ventilation defect regions to assess spatial overlap agreement. Tidal volumes determined from 4D CT ventilation were evaluated versus measurements obtained from lung parenchyma segmentation. Linear regression resulted in a linear fit with slope = 1.01 (R2 = 0.99). Respective values for the average DSC, NDSC at 1 mm and NDSC at 2 mm for all cases and multiple observers were 0.78, 0.88 and 0.99, indicating that, on average, spatial overlap agreement between ventilation and perfusion defect regions was comparable to the threshold for agreement within 1-2 mm uncertainty. Corresponding coefficients of variation for all metrics were similarly in the range of 0.10%-19%.
This study is the first to quantitatively assess 3D spatial overlap agreement between clinically acquired SPECT perfusion and specific ventilation from 4D CT. Results suggest high correlation between methods within the sub-population of lung cancer patients with malignant airway stenosis.
Sarno, Antonio; Mettivier, Giovanni; Tucciariello, Raffaele M; Bliznakova, Kristina; Boone, John M; Sechopoulos, Ioannis; Di Lillo, Francesca; Russo, Paolo
2018-06-07
In cone-beam computed tomography dedicated to the breast (BCT), the mean glandular dose (MGD) is the dose metric of reference, evaluated from the measured air kerma by means of normalized glandular dose coefficients (DgN_CT). This work aimed at computing, for a simple breast model, a set of DgN_CT values for monoenergetic and polyenergetic X-ray beams, and at validating the results vs. those for patient-specific digital phantoms from BCT scans. We developed a Monte Carlo code for calculation of monoenergetic DgN_CT coefficients (energy range 4.25-82.25 keV). The pendant breast was modelled as a cylinder of a homogeneous mixture of adipose and glandular tissue with glandular fractions by mass of 0.1%, 14.3%, 25%, 50% or 100%, enveloped by a 1.45 mm-thick skin layer. The breast diameter ranged between 8 cm and 18 cm. Then, polyenergetic DgN_CT coefficients were analytically derived for 49-kVp W-anode spectra (half value layer 1.25-1.50 mm Al), as in a commercial BCT scanner. We compared the homogeneous models to 20 digital phantoms produced from classified 3D breast images. The polyenergetic DgN_CT values were 13% lower than the most recently published data. The comparison vs. patient-specific breast phantoms showed that the homogeneous cylindrical model leads to a DgN_CT percentage difference between -15% and +27%, with an average overestimation of 8%. A dataset of monoenergetic and polyenergetic DgN_CT coefficients for BCT was provided. Patient-specific breast models showed a different volume distribution of glandular dose and determined a DgN_CT 8% lower, on average, than the homogeneous breast model. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
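The relation described above, MGD obtained from measured air kerma via a normalized glandular dose coefficient, with the polyenergetic coefficient formed by spectrum-weighting monoenergetic values, can be sketched as follows. The weighting scheme and all names are simplifying assumptions for illustration, not the authors' exact derivation:

```python
import numpy as np

def mean_glandular_dose(air_kerma_mgy, spectrum_weights, dgn_mono):
    """MGD (mGy) from measured air kerma: combine monoenergetic DgN_CT
    coefficients into a single polyenergetic coefficient by spectrum
    weighting, then scale the air kerma. The simple weighted average
    used here is an illustrative simplification."""
    w = np.asarray(spectrum_weights, dtype=float)
    dgn_poly = float((w * np.asarray(dgn_mono, dtype=float)).sum() / w.sum())
    return air_kerma_mgy * dgn_poly
```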
Integrating TITAN2D Geophysical Mass Flow Model with GIS
NASA Astrophysics Data System (ADS)
Namikawa, L. M.; Renschler, C.
2005-12-01
TITAN2D simulates geophysical mass flows over natural terrain using depth-averaged granular flow models and requires spatially distributed parameter values to solve differential equations. Since the main task of a Geographical Information System (GIS) is the integration and manipulation of data covering a geographic region, the use of a GIS for the implementation of simulations of complex, physically-based models such as TITAN2D seems a natural choice. However, simulation of geophysical flows requires computationally intensive operations that need unique optimizations, such as adaptive grids and parallel processing. Thus a GIS developed for general use cannot provide an effective environment for complex simulations, and the solution is to develop a linkage between the GIS and the simulation model. The present work presents the solution used for TITAN2D, where the data structure of a GIS is accessed by simulation code through an Application Program Interface (API). GRASS is an open-source GIS with published data formats; thus the GRASS data structure was selected. TITAN2D requires elevation, slope, curvature, and base material information to be computed at every cell. Results from simulation are visualized by a system developed to handle the large amount of output data and to support a realistic dynamic 3-D display of flow dynamics, which requires elevation and texture, usually from a remote sensor image. Data required by the simulation is in raster format, using regular rectangular grids. The GRASS format for regular grids is based on a data file (binary file storing data either uncompressed or compressed by grid row), a header file (text file with information about georeferencing, data extents, and grid cell resolution), and support files (text files with information about the color table and category names). The implemented API provides access to original data (elevation, base material, and texture from imagery) and slope and curvature derived from elevation data.
From several existing methods to estimate slope and curvature from elevation, the selected one is based on estimation by a third-order finite difference method, which has been shown to perform better than, or with minimal difference from, more computationally expensive methods. Derivatives are estimated using a weighted sum of the 8 grid neighbor values. The method was implemented, and simulation results were compared to derivatives estimated by a simplified version of the method (using only 4 neighbor cells); the full method proved to perform better. TITAN2D uses an adaptive mesh grid, where resolution (grid cell size) is not constant, and the visualization tools also use textures with varying resolutions for efficient display. The API supports different resolutions, applying bilinear interpolation when elevation, slope and curvature are required at a resolution higher (smaller cell size) than the original, and using a nearest-cell approach for elevations with lower resolution (larger cell size) than the original. For material information the nearest-neighbor method is used, since interpolation on categorical data has no meaning. The low-fidelity character of visualization allows the use of the nearest-neighbor method for texture. Bilinear interpolation estimates the value at a point as the distance-weighted average of the values at the closest four cell centers, and its interpolation performance is only slightly inferior to that of more computationally expensive methods such as bicubic interpolation and kriging.
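The bilinear interpolation described above, the distance-weighted average of the four closest cell centers, can be sketched for an interior point of a regular grid (boundary handling omitted for brevity):

```python
import numpy as np

def bilinear(grid, x, y):
    """Bilinear interpolation of a 2D grid at fractional coordinates
    (x, y): the distance-weighted average of the four nearest cell
    centers. Assumes (x, y) lies in the grid interior so that indices
    (x0+1, y0+1) are valid."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    g = np.asarray(grid, dtype=float)
    return ((1 - dx) * (1 - dy) * g[y0, x0]
            + dx * (1 - dy) * g[y0, x0 + 1]
            + (1 - dx) * dy * g[y0 + 1, x0]
            + dx * dy * g[y0 + 1, x0 + 1])
```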
Dewland, Thomas A; Wintermark, Max; Vaysman, Anna; Smith, Lisa M; Tong, Elizabeth; Vittinghoff, Eric; Marcus, Gregory M
2013-01-01
Left atrial (LA) tissue characteristics may play an important role in atrial fibrillation (AF) induction and perpetuation. Although frequently used in clinical practice, computed tomography (CT) has not been employed to describe differences in LA wall properties between AF patients and controls. We sought to noninvasively characterize AF-associated differences in LA tissue using CT. CT images of the LA were obtained in 98 consecutive patients undergoing AF ablation and in 89 controls. A custom software algorithm was used to measure wall thickness and density in four prespecified regions of the LA. On average, LA walls were thinner (-15.5%, 95% confidence interval [CI] -23.2 to -7.8%, P < 0.001) and demonstrated significantly lower density (-19.7 Hounsfield Units [HU], 95% CI -27.0 to -12.5 HU, P < 0.001) in AF patients compared to controls. In linear mixed models adjusting for demographics, clinical variables, and other CT measurements, the average LA, interatrial septum, LA appendage, and anterior walls remained significantly thinner in AF patients. After adjusting for the same potential confounders, history of AF was associated with reduced density in the LA anterior wall and increased density below the right inferior pulmonary vein and in the LA appendage. Application of an automated measurement algorithm to CT imaging of the atrium identified significant thinning of the LA wall and regional alterations in tissue density in patients with a history of AF. These findings suggest differences in LA tissue composition can be noninvasively identified and quantified using CT. ©2012, The Authors. Journal compilation ©2012 Wiley Periodicals, Inc.
Contrast Enhancement of the Right Ventricle during Coronary CT Angiography--Is It Necessary?
Kok, Madeleine; Kietselaer, Bas L J H; Mihl, Casper; Altintas, Sibel; Nijssen, Estelle C; Wildberger, Joachim E; Das, Marco
2015-01-01
It is unclear whether prolonged contrast media injection, used to improve right ventricular visualization during coronary CT angiography, leads to increased detection of right ventricle pathology. The purpose of this study was to evaluate right ventricle enhancement and the subsequent detection of right ventricle disease during coronary CT angiography. 472 consecutive patients referred for screening coronary CT angiography were retrospectively evaluated. Every patient underwent multidetector-row CT of the coronary arteries (128 × 0.6 mm collimation, 100-120 kV, rotation time 0.28 s, reference mAs 350) and received an individualized (P3T) bolus injection of iodinated contrast medium (300 mgI/ml). Patient data were analyzed to assess right ventricle enhancement (HU) and right ventricle pathology. Image quality was defined as good when right ventricle enhancement was >200 HU, moderate when 140-200 HU, and poor when <140 HU. Good image quality was found in 372 patients, moderate in 80 patients, and poor in 20 patients. Mean enhancement of the right ventricle cavity was 268 HU ± 102. Patients received an average bolus of 108 ± 24 ml at an average peak flow rate of 6.1 ± 2.2 ml/s. Right ventricle pathology (dilatation) was found in only three out of 472 patients (0.63%); no other right ventricle pathology was detected. The dilatation observed in these three cases may have been picked up even without dedicated enhancement of the right ventricle. Based on our findings, right ventricle enhancement can be omitted during screening coronary CT angiography.
Results of a Multi-Institutional Benchmark Test for Cranial CT/MR Image Registration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ulin, Kenneth; Urie, Marcia M., E-mail: murie@qarc.or; Cherlow, Joel M.
2010-08-01
Purpose: Variability in computed tomography/magnetic resonance imaging (CT/MR) cranial image registration was assessed using a benchmark case developed by the Quality Assurance Review Center to credential institutions for participation in Children's Oncology Group Protocol ACNS0221 for treatment of pediatric low-grade glioma. Methods and Materials: Two DICOM image sets, an MR and a CT of the same patient, were provided to each institution. A small target in the posterior occipital lobe was readily visible on two slices of the MR scan and not visible on the CT scan. Each institution registered the two scans using whatever software system and method it ordinarily uses for such a case. The target volume was then contoured on the two MR slices, and the coordinates of the center of the corresponding target in the CT coordinate system were reported. The average of all submissions was used to determine the true center of the target. Results: Results are reported from 51 submissions representing 45 institutions and 11 software systems. The average error in the position of the center of the target was 1.8 mm (1 standard deviation = 2.2 mm). The least variation in position was in the lateral direction. Manual registration gave significantly better results than did automatic registration (p = 0.02). Conclusion: When MR and CT scans of the head are registered with currently available software, there is an inherent uncertainty of approximately 2 mm (1 standard deviation), which should be considered when defining planning target volumes and PRVs for organs at risk on registered image sets.
Perez-Calatayud, Jose; Ballester, Facundo; Das, Rupak K; Dewerd, Larry A; Ibbott, Geoffrey S; Meigooni, Ali S; Ouhib, Zoubir; Rivard, Mark J; Sloboda, Ron S; Williamson, Jeffrey F
2012-05-01
Recommendations of the American Association of Physicists in Medicine (AAPM) and the European Society for Radiotherapy and Oncology (ESTRO) on dose calculations for high-energy (average energy higher than 50 keV) photon-emitting brachytherapy sources are presented, including the physical characteristics of specific (192)Ir, (137)Cs, and (60)Co source models. This report has been prepared by the High Energy Brachytherapy Source Dosimetry (HEBD) Working Group. This report includes considerations in the application of the TG-43U1 formalism to high-energy photon-emitting sources with particular attention to phantom size effects, interpolation accuracy dependence on dose calculation grid size, and dosimetry parameter dependence on source active length. Consensus datasets for commercially available high-energy photon sources are provided, along with recommended methods for evaluating these datasets. Recommendations on dosimetry characterization methods, mainly using experimental procedures and Monte Carlo, are established and discussed. Also included are methodological recommendations on detector choice, detector energy response characterization and phantom materials, and measurement specification methodology. Uncertainty analyses are discussed and recommendations for high-energy sources without consensus datasets are given. Recommended consensus datasets for high-energy sources have been derived for sources that were commercially available as of January 2010. Data are presented according to the AAPM TG-43U1 formalism, with modified interpolation and extrapolation techniques of the AAPM TG-43U1S1 report for the 2D anisotropy function and radial dose function.
NASA Astrophysics Data System (ADS)
Karczewicz, Marta; Chen, Peisong; Joshi, Rajan; Wang, Xianglin; Chien, Wei-Jung; Panchal, Rahul; Coban, Muhammed; Chong, In Suk; Reznik, Yuriy A.
2011-01-01
This paper describes the video coding technology proposal submitted by Qualcomm Inc. in response to a joint call for proposals (CfP) issued by ITU-T SG16 Q.6 (VCEG) and ISO/IEC JTC1/SC29/WG11 (MPEG) in January 2010. The proposed video codec follows a hybrid coding approach based on temporal prediction, followed by transform, quantization, and entropy coding of the residual. Some of its key features are extended block sizes (up to 64x64), recursive integer transforms, single-pass switched interpolation filters with offsets (single-pass SIFO), mode-dependent directional transform (MDDT) for intra-coding, luma and chroma high-precision filtering, geometry motion partitioning, and adaptive motion vector resolution. It also incorporates internal bit-depth increase (IBDI) and modified quadtree-based adaptive loop filtering (QALF). Simulation results are presented for a variety of bit rates, resolutions, and coding configurations to demonstrate the high compression efficiency achieved by the proposed video codec at a moderate level of encoding and decoding complexity. For the random access hierarchical B configuration (HierB), the proposed video codec achieves an average BD-rate reduction of 30.88% compared to the H.264/AVC alpha anchor. For the low-delay hierarchical P (HierP) configuration, the proposed video codec achieves average BD-rate reductions of 32.96% and 48.57% compared to the H.264/AVC beta and gamma anchors, respectively.
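The BD-rate figures quoted are instances of the standard Bjontegaard metric, which fits a cubic polynomial to each codec's rate-distortion curve in log-rate/PSNR space and averages the log-rate gap over the overlapping quality range. A sketch of that computation (function name and test points are illustrative, not taken from the proposal):

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard delta rate: average bitrate difference (%) between two
    rate-distortion curves. Fits cubic polynomials to log-rate as a
    function of PSNR and integrates the gap over the common PSNR range.
    A negative result means the test codec saves bitrate."""
    p_a = np.polyfit(psnr_anchor, np.log(rate_anchor), 3)
    p_t = np.polyfit(psnr_test, np.log(rate_test), 3)
    lo = max(min(psnr_anchor), min(psnr_test))   # overlapping PSNR range
    hi = min(max(psnr_anchor), max(psnr_test))
    ia, it = np.polyint(p_a), np.polyint(p_t)
    avg_log_diff = (np.polyval(it, hi) - np.polyval(it, lo)
                    - np.polyval(ia, hi) + np.polyval(ia, lo)) / (hi - lo)
    return (np.exp(avg_log_diff) - 1.0) * 100.0
```

For example, a test codec that needs 10% less bitrate than the anchor at every quality point yields a BD-rate of -10%.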
Cheng, Christopher P; Parker, David; Taylor, Charles A
2002-09-01
Arterial wall shear stress is hypothesized to be an important factor in the localization of atherosclerosis. Current methods to compute wall shear stress from magnetic resonance imaging (MRI) data do not account for flow profiles characteristic of pulsatile flow in noncircular vessel lumens. We describe a method to quantify wall shear stress in large blood vessels by differentiating velocity interpolation functions defined using cine phase-contrast MRI data on a band of elements in the neighborhood of the vessel wall. Validation was performed with software phantoms and an in vitro flow phantom. At an image resolution corresponding to in vivo imaging data of the human abdominal aorta, time-averaged, spatially averaged wall shear stress for steady and pulsatile flow were determined to be within 16% and 23% of the analytic solution, respectively. These errors were reduced to 5% and 8% with doubling in image resolution. For the pulsatile software phantom, the oscillation in shear stress was predicted to within 5%. The mean absolute error of circumferentially resolved shear stress for the nonaxisymmetric phantom decreased from 28% to 15% with a doubling in image resolution. The irregularly shaped phantom and in vitro investigation demonstrated convergence of the calculated values with increased image resolution. We quantified the shear stress at the supraceliac and infrarenal regions of a human abdominal aorta to be 3.4 and 2.3 dyn/cm2, respectively.
Specific features of the flow structure in a reactive type turbine stage
NASA Astrophysics Data System (ADS)
Chernikov, V. A.; Semakina, E. Yu.
2017-04-01
The results of experimental studies of the gas dynamics for a reactive type turbine stage are presented. The objective of the studies is the measurement of the 3D flow fields in reference cross sections, experimental determination of the stage characteristics, and analysis of the flow structure for detecting the sources of kinetic energy losses. The integral characteristics of the studied stage are obtained by averaging the results of traversing the 3D flow over the area of the reference cross sections before and behind the stage. The averaging is performed using the conservation equations for mass, total energy flux, angular momentum with respect to the axis z of the turbine, entropy flow, and the radial projection of the momentum flux equation. The flow parameter distributions along the channel height behind the stage are obtained in the same way. More thorough analysis of the flow structure is performed after interpolation of the experimentally measured point parameter values and 3D flow velocities behind the stage. The obtained continuous velocity distributions in the absolute and relative coordinate systems are presented in the form of vector fields. The coordinates of the centers and the vectors of secondary vortices are determined using the results of point measurements of velocity vectors in the cross section behind the turbine stage and their subsequent interpolation. The approach to analysis of experimental data on aerodynamics of the turbine stage applied in this study allows one to find the detailed space structure of the working medium flow, including secondary coherent vortices at the root and peripheral regions of the air-gas part of the stage. The measured 3D flow parameter fields and their interpolation, on the one hand, point to possible sources of increased power losses, and, on the other hand, may serve as the basis for detailed testing of CFD models of the flow using both integral and local characteristics. 
Statistical comparison of the numerical and experimental results in terms of local characteristics yields a quantitative estimate of their agreement.
An interpolation method for stream habitat assessments
Sheehan, Kenneth R.; Welsh, Stuart A.
2015-01-01
Interpolation of stream habitat can be very useful for habitat assessment. Using a small number of habitat samples to predict the habitat of larger areas can reduce time and labor costs as long as it provides accurate estimates of habitat. The spatial correlation of stream habitat variables such as substrate and depth improves the accuracy of interpolated data. Several geographical information system interpolation methods (natural neighbor, inverse distance weighted, ordinary kriging, spline, and universal kriging) were used to predict substrate and depth within a 210.7-m2 section of a second-order stream based on 2.5% and 5.0% sampling of the total area. Depth and substrate were recorded for the entire study site and compared with the interpolated values to determine the accuracy of the predictions. In all instances, the 5% interpolations were more accurate for both depth and substrate than the 2.5% interpolations, achieving accuracies up to 95% and 92%, respectively. Interpolations of depth based on 2.5% sampling attained accuracies of 49–92%, whereas those based on 5% sampling attained accuracies of 57–95%. Natural neighbor interpolation was more accurate than the inverse distance weighted, ordinary kriging, spline, and universal kriging approaches. Our findings demonstrate the effective use of minimal amounts of small-scale data for the interpolation of habitat over large areas of a stream channel. Use of this method will provide time and cost savings in the assessment of large sections of rivers, as well as functional maps to aid the habitat-based management of aquatic species.
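Of the methods compared, inverse distance weighting is the simplest to state: the prediction at a query point is a weighted average of the sampled values, with weights inversely proportional to a power of distance. A minimal sketch (not the authors' GIS implementation; the power parameter is a common default):

```python
import numpy as np

def idw(sample_xy, sample_vals, query_xy, power=2.0):
    """Inverse-distance-weighted interpolation at a single query point.
    Weights are 1 / distance**power, so nearby samples dominate."""
    d = np.linalg.norm(sample_xy - query_xy, axis=1)
    if np.any(d == 0):              # query coincides with a sample point
        return sample_vals[np.argmin(d)]
    w = 1.0 / d ** power
    return np.sum(w * sample_vals) / np.sum(w)
```

A query midway between two samples with equal distances returns their plain average, and a query at a sample location returns the observed value exactly, a property shared by all the exact interpolators listed in the study.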
A rational interpolation method to compute frequency response
NASA Technical Reports Server (NTRS)
Kenney, Charles; Stubberud, Stephen; Laub, Alan J.
1993-01-01
A rational interpolation method for approximating a frequency response is presented. The method is based on a product formulation of finite differences, thereby avoiding the numerical problems incurred by near-equal-valued subtraction. Also, resonant pole and zero cancellation schemes are developed that increase the accuracy and efficiency of the interpolation method. Selection techniques of interpolation points are also discussed.
Conflict Prediction Through Geo-Spatial Interpolation of Radicalization in Syrian Social Media
2015-09-24
TRAC-M-TM-15-031, September 2015. Conflict Prediction Through Geo-Spatial Interpolation of Radicalization in Syrian Social Media. Authors: MAJ Adam Haupt and Dr. Camber Warren. TRAC Project Code 060114.
Mathematical modelling of scanner-specific bowtie filters for Monte Carlo CT dosimetry
NASA Astrophysics Data System (ADS)
Kramer, R.; Cassola, V. F.; Andrade, M. E. A.; de Araújo, M. W. C.; Brenner, D. J.; Khoury, H. J.
2017-02-01
The purpose of bowtie filters in CT scanners is to homogenize the x-ray intensity measured by the detectors in order to improve the image quality and at the same time to reduce the dose to the patient because of the preferential filtering near the periphery of the fan beam. For CT dosimetry, especially for Monte Carlo calculations of organ and tissue absorbed doses to patients, it is important to take the effect of bowtie filters into account. However, the material composition and dimensions of these filters are proprietary. Consequently, a method for bowtie filter simulation independent of access to proprietary data and/or to a specific scanner would be of interest to many researchers involved in CT dosimetry. This study presents such a method based on the weighted computed tomography dose index, CTDIw, defined in two cylindrical PMMA phantoms of 16 cm and 32 cm diameter. With an EGSnrc-based Monte Carlo (MC) code, ratios CTDIw/CTDI100,a were calculated for a specific CT scanner using PMMA bowtie filter models based on sigmoid Boltzmann functions combined with a scanner filter factor (SFF), which is modified during the calculations until the calculated MC ratio CTDIw/CTDI100,a matches the ratio determined by measurements or found in publications for that specific scanner. Once the scanner-specific value of the SFF has been found, the bowtie filter algorithm can be used in any MC code to perform CT dosimetry for that specific scanner. The bowtie filter model proposed here was validated for CTDIw/CTDI100,a considering 11 different CT scanners and for CTDI100,c, CTDI100,p and their ratio considering 4 different CT scanners. Additionally, comparisons were made for lateral dose profiles free in air and using computational anthropomorphic phantoms. CTDIw/CTDI100,a determined with this new method agreed on average within 0.89% (max. 3.4%) and 1.64% (max.
4.5%) with corresponding data published by CTDosimetry (www.impactscan.org) for the CTDI HEAD and BODY phantoms, respectively. Comparison with results calculated using proprietary data for the PHILIPS Brilliance 64 scanner showed agreement on average within 2.5% (max. 5.8%) and with data measured for that scanner within 2.1% (max. 3.7%). Ratios of CTDI100,c/CTDI100, p for this study and corresponding data published by CTDosimetry (www.impactscan.org) agree on average within about 11% (max. 28.6%). Lateral dose profiles calculated with the proposed bowtie filter and with proprietary data agreed within 2% (max. 5.9%), and both calculated data agreed within 5.4% (max. 11.2%) with measured results. Application of the proposed bowtie filter and of the exactly modelled filter to human phantom Monte Carlo calculations show agreement on the average within less than 5% (max. 7.9%) for organ and tissue absorbed doses.
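The abstract does not disclose the filter geometry (it is proprietary), so purely as an illustration of the kind of sigmoid Boltzmann thickness profile described, one might model PMMA thickness as a function of fan angle, thin at the beam center and rising toward the periphery, with a scanner filter factor scaling the profile. All parameter names and values below are hypothetical, not the paper's:

```python
import math

def bowtie_thickness(fan_angle_deg, t_max_cm=3.0, half_width_deg=20.0,
                     slope_deg=5.0, sff=1.0):
    """Hypothetical bowtie PMMA thickness as a Boltzmann sigmoid of fan
    angle: near zero at the beam center, approaching t_max_cm toward the
    fan-beam edge. sff (scanner filter factor) scales the steepness."""
    a = abs(fan_angle_deg)                      # profile is symmetric
    return t_max_cm / (1.0 + math.exp(-sff * (a - half_width_deg) / slope_deg))

def transmission(fan_angle_deg, mu_pmma_cm=0.18, **kw):
    """Monoenergetic primary-beam transmission through the filter
    (illustrative attenuation coefficient)."""
    return math.exp(-mu_pmma_cm * bowtie_thickness(fan_angle_deg, **kw))
```

The qualitative behavior matches the stated purpose of a bowtie filter: transmission is highest on the central ray and falls off toward the periphery of the fan beam.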
Minimal norm constrained interpolation. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Irvine, L. D.
1985-01-01
In computational fluid dynamics and in CAD/CAM, a physical boundary is usually known only discretely and most often must be approximated. An acceptable approximation preserves the salient features of the data, such as convexity and concavity. In this dissertation, a smooth interpolant which is locally concave where the data are concave and locally convex where the data are convex is described. The interpolant is found by posing and solving a minimization problem whose solution is a piecewise cubic polynomial. The problem is solved indirectly by using the Peano kernel theorem to recast it into an equivalent minimization problem having the second derivative of the interpolant as the solution. This approach leads to the solution of a nonlinear system of equations. It is shown that Newton's method is an exceptionally attractive and efficient method for solving the nonlinear system of equations. Examples of shape-preserving interpolants, as well as convergence results obtained by using Newton's method, are also shown. A FORTRAN program to compute these interpolants is listed. The problem of computing the interpolant of minimal norm from a convex cone in a normed dual space is also discussed. An extension of de Boor's work on minimal norm unconstrained interpolation is presented.
Coelho, Antonio Augusto Rodrigues
2016-01-01
This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system where membership functions act as kernel functions of an interpolator. Conjunction of membership functions in a unitary hypercube space enables multivariable interpolation in N dimensions. Membership functions act as interpolation kernels, such that the choice of membership functions determines the interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline, or Lanczos interpolator, to name a few. The proposed interpolator is presented as a solution to the modeling problem of static nonlinearities, since it is capable of modeling both a function and its inverse function. Three study cases from the literature are presented: a single-input single-output (SISO) system, a MISO system, and a MIMO system. Good results are obtained regarding performance metrics such as set-point tracking, control variation, and robustness. The results demonstrate the applicability of the proposed method in modeling Hammerstein nonlinearities and their inverse functions for implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC). PMID:27657723
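The "membership functions as interpolation kernels" idea can be illustrated in one dimension: with triangular membership functions on a uniform knot grid, the normalized weighted sum of a Takagi-Sugeno system reduces to piecewise-linear interpolation. A minimal sketch under those assumptions (names illustrative, not the FLHI implementation):

```python
import numpy as np

def kernel_interp_1d(knots, values, x):
    """1-D kernel-based fuzzy interpolation sketch: triangular membership
    functions centered on uniformly spaced knots act as the kernel, and
    the normalized weighted sum reproduces linear interpolation."""
    knots = np.asarray(knots, float)
    spacing = knots[1] - knots[0]
    # triangular membership of x in each knot's fuzzy set, in [0, 1]
    memberships = np.clip(1.0 - np.abs(x - knots) / spacing, 0.0, 1.0)
    return np.dot(memberships, values) / np.sum(memberships)
```

Swapping the triangular kernel for a cubic or Lanczos kernel changes the interpolation characteristics without changing the surrounding machinery, which is the flexibility the paper attributes to FLHI.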
Missing RRI interpolation for HRV analysis using locally-weighted partial least squares regression.
Kamata, Keisuke; Fujiwara, Koichi; Yamakawa, Toshiki; Kano, Manabu
2016-08-01
The R-R interval (RRI) fluctuation in electrocardiogram (ECG) is called heart rate variability (HRV). Since HRV reflects autonomic nervous function, HRV-based health monitoring services, such as stress estimation, drowsy driving detection, and epileptic seizure prediction, have been proposed. In these HRV-based health monitoring services, precise R wave detection from ECG is required; however, R waves cannot always be detected due to ECG artifacts. Missing RRI data should be interpolated appropriately for HRV analysis. The present work proposes a missing RRI interpolation method utilizing just-in-time (JIT) modeling. The proposed method adopts locally weighted partial least squares (LW-PLS) for RRI interpolation, which is a well-known JIT modeling method used in the field of process control. The usefulness of the proposed method was demonstrated through a case study of real RRI data collected from healthy persons. The proposed JIT-based interpolation method improved the interpolation accuracy in comparison with a static interpolation method.
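The "static interpolation method" used as the baseline is not specified in the abstract; a common simple choice is to interpolate missing RRIs over beat times from the surrounding detected beats. A sketch of that baseline (names and the linear scheme are assumptions, not the paper's method):

```python
import numpy as np

def interpolate_missing_rri(t_beats, rri_ms):
    """Fill missing RRI samples (marked NaN, e.g. where an R wave was
    not detected) by static linear interpolation over beat times. This
    is the kind of static baseline a model-based (JIT) interpolator
    would be compared against."""
    t = np.asarray(t_beats, float)
    r = np.asarray(rri_ms, float)
    good = ~np.isnan(r)
    # keep observed RRIs untouched; interpolate only the gaps
    return np.where(good, r, np.interp(t, t[good], r[good]))
```

For a gap between beats with RRIs of 800 ms and 820 ms, the filled value is the midpoint, 810 ms; a model-based method can instead exploit the local dynamics around the gap.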
Image interpolation by adaptive 2-D autoregressive modeling and soft-decision estimation.
Zhang, Xiangjun; Wu, Xiaolin
2008-06-01
The challenge of image interpolation is to preserve spatial details. We propose a soft-decision interpolation technique that estimates missing pixels in groups rather than one at a time. The new technique learns and adapts to varying scene structures using a 2-D piecewise autoregressive model. The model parameters are estimated in a moving window in the input low-resolution image. The pixel structure dictated by the learnt model is enforced by the soft-decision estimation process onto a block of pixels, including both observed and estimated. The result is equivalent to that of a high-order adaptive nonseparable 2-D interpolation filter. This new image interpolation approach preserves spatial coherence of interpolated images better than the existing methods, and it produces the best results so far over a wide range of scenes in both PSNR measure and subjective visual quality. Edges and textures are well preserved, and common interpolation artifacts (blurring, ringing, jaggies, zippering, etc.) are greatly reduced.
Enhancement of panoramic image resolution based on swift interpolation of Bezier surface
NASA Astrophysics Data System (ADS)
Xiao, Xiao; Yang, Guo-guang; Bai, Jian
2007-01-01
A panoramic annular lens projects the view of the entire 360 degrees around the optical axis onto an annular plane based on flat cylinder perspective. Due to the infinite depth of field and the linear mapping relationship between object and image, panoramic imaging systems play important roles in applications such as robot vision, surveillance, and virtual reality. An annular image needs to be unwrapped to a conventional rectangular image without distortion, for which an interpolation algorithm is necessary. Although cubic spline interpolation can enhance the resolution of the unwrapped image, it takes too much time to be applied in practice. This paper adopts an interpolation method based on Bezier surfaces and proposes a swift interpolation algorithm for panoramic images that takes the characteristics of the panoramic image into account. The results indicate that the resolution of the image is well enhanced compared with images produced by cubic spline and bilinear interpolation, while the time consumed is reduced by 78% compared with cubic spline interpolation.
Interpolation problem for the solutions of linear elasticity equations based on monogenic functions
NASA Astrophysics Data System (ADS)
Grigor'ev, Yuri; Gürlebeck, Klaus; Legatiuk, Dmitrii
2017-11-01
Interpolation is an important tool for many practical applications, and very often it is beneficial to interpolate not with a simple basis system, but rather with solutions of a certain differential equation, e.g. the elasticity equation. A typical example of this type of interpolation is given by the collocation methods widely used in practice. It is known that interpolation theory is fully developed in the framework of classical complex analysis. However, in quaternionic analysis, which shows many analogies to complex analysis, the situation is more complicated due to the non-commutative multiplication. Thus, a fundamental theorem of algebra is not available, and standard tools from linear algebra cannot be applied in the usual way. To overcome these problems, a special system of monogenic polynomials, the so-called Pseudo Complex Polynomials, sharing some properties of complex powers, is used. In this paper, we present an approach to deal with the interpolation problem, where solutions of the elasticity equations in three dimensions are used as an interpolation basis.
Moore, Christopher L.; Daniels, Brock; Singh, Dinesh; Luty, Seth; Gunabushanam, Gowthaman; Ghita, Monica; Molinaro, Annette; Gross, Cary P.
2016-01-01
Purpose To determine if a reduced-dose computed tomography (CT) protocol could effectively help to identify patients in the emergency department (ED) with moderate to high likelihood of calculi who would require urologic intervention within 90 days. Materials and Methods The study was approved by the institutional review board and written informed consent with HIPAA authorization was obtained. This was a prospective, single-center study of patients in the ED with moderate to high likelihood of ureteral stone undergoing CT imaging. Objective likelihood of ureteral stone was determined by using the previously derived and validated STONE clinical prediction rule, which includes five elements: sex, timing, origin, nausea, and erythrocytes. All patients with high STONE score (STONE score, 10–13) underwent reduced-dose CT, while those with moderate likelihood of ureteral stone (moderate STONE score, 6–9) underwent reduced-dose CT or standard CT based on clinician discretion. Patients were followed to 90 days after initial imaging for clinical course and for the primary outcome of any intervention. Statistics are primarily descriptive and are reported as percentages, sensitivities, and specificities with 95% confidence intervals. Results There were 264 participants enrolled and 165 reduced-dose CTs performed; of these participants, 108 underwent reduced-dose CT alone with complete follow-up. Overall, 46 of 264 (17.4%) of patients underwent urologic intervention, and 25 of 108 (23.1%) patients who underwent reduced-dose CT underwent a urologic intervention; all were correctly diagnosed on the clinical report of the reduced-dose CT (sensitivity, 100%; 95% confidence interval: 86.7%, 100%). The average dose-length product for all standard-dose CTs was 857 mGy · cm ± 395 compared with 101 mGy · cm ± 39 for all reduced-dose CTs (average dose reduction, 88.2%). 
There were five interventions for nonurologic causes, three of which were urgent and none of which were missed when reduced-dose CT was performed. Conclusion A CT protocol with over 85% dose reduction can be used in patients with moderate to high likelihood of ureteral stone to safely and effectively identify patients in the ED who will require urologic intervention. PMID:26943230
Funk, Chris; Peterson, Pete; Landsfeld, Martin; Pedreros, Diego; Verdin, James; Shukla, Shraddhanand; Husak, Gregory; Rowland, James; Harrison, Laura; Hoell, Andrew; Michaelsen, Joel
2015-01-01
The Climate Hazards group Infrared Precipitation with Stations (CHIRPS) dataset builds on previous approaches to ‘smart’ interpolation techniques and high resolution, long period of record precipitation estimates based on infrared Cold Cloud Duration (CCD) observations. The algorithm i) is built around a 0.05° climatology that incorporates satellite information to represent sparsely gauged locations, ii) incorporates daily, pentadal, and monthly 1981-present 0.05° CCD-based precipitation estimates, iii) blends station data to produce a preliminary information product with a latency of about 2 days and a final product with an average latency of about 3 weeks, and iv) uses a novel blending procedure incorporating the spatial correlation structure of CCD-estimates to assign interpolation weights. We present the CHIRPS algorithm, global and regional validation results, and show how CHIRPS can be used to quantify the hydrologic impacts of decreasing precipitation and rising air temperatures in the Greater Horn of Africa. Using the Variable Infiltration Capacity model, we show that CHIRPS can support effective hydrologic forecasts and trend analyses in southeastern Ethiopia.
NASA Astrophysics Data System (ADS)
Sauer, Roger A.
2013-08-01
Recently an enriched contact finite element formulation has been developed that substantially increases the accuracy of contact computations while keeping the additional numerical effort at a minimum (Sauer, Int J Numer Meth Eng 87:593-616, 2011). Two enrichment strategies were proposed, one based on local p-refinement using Lagrange interpolation and one based on Hermite interpolation that produces C1-smoothness on the contact surface. Both classes, which were initially considered for the frictionless Signorini problem, are extended here to friction and contact between deformable bodies. For this, a symmetric contact formulation is used that allows the unbiased treatment of both contact partners. This paper also proposes a post-processing scheme for contact quantities like the contact pressure. The scheme, which provides a more accurate representation than the raw data, is based on an averaging procedure that is inspired by mortar formulations. The properties of the enrichment strategies and the corresponding post-processing scheme are illustrated by several numerical examples considering sliding and peeling contact in the presence of large deformations.
Does preprocessing change nonlinear measures of heart rate variability?
Gomes, Murilo E D; Guimarães, Homero N; Ribeiro, Antônio L P; Aguirre, Luis A
2002-11-01
This work investigated whether methods used to produce a uniformly sampled heart rate variability (HRV) time series significantly change the deterministic signature underlying the dynamics of such signals and some nonlinear measures of HRV. Two preprocessing methods were used: convolution of inverse interval function values with a rectangular window, and cubic polynomial interpolation. The HRV time series were obtained from 33 Wistar rats submitted to autonomic blockade protocols and from 17 healthy adults. The analysis of determinism was carried out by the method of surrogate data sets and by nonlinear autoregressive moving average modelling and prediction. The scaling exponents alpha, alpha(1) and alpha(2) derived from detrended fluctuation analysis were calculated from the raw HRV time series and the respective preprocessed signals. It was shown that cubic interpolation of HRV time series did not significantly change any nonlinear characteristic studied in this work, while the convolution method affected only the alpha(1) index. The results suggest that preprocessed time series may be used to study HRV in the field of nonlinear dynamics.
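To make the resampling step concrete: the sketch below converts an RR-interval series into a uniformly sampled tachogram using a cubic (Catmull-Rom) interpolant. It illustrates cubic resampling in general, not the authors' code; the function names and the choice of Catmull-Rom basis are our own assumptions.

```python
def catmull_rom(p0, p1, p2, p3, u):
    """Cubic Catmull-Rom blend between p1 and p2 for u in [0, 1]."""
    return 0.5 * (2 * p1
                  + (p2 - p0) * u
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * u * u
                  + (3 * p1 - 3 * p2 + p3 - p0) * u ** 3)

def resample_tachogram(rr, dt):
    """Resample an RR-interval series (seconds) onto a uniform grid.

    Beat k is placed at the cumulative time of the first k+1 intervals;
    a cubic curve through neighbouring beats is evaluated every dt
    seconds.  Edge beats are clamped.
    """
    t, acc = [], 0.0
    for x in rr:
        acc += x
        t.append(acc)
    out, tau, i = [], t[0], 0
    while tau <= t[-1]:
        while i < len(t) - 2 and t[i + 1] < tau:
            i += 1
        u = (tau - t[i]) / (t[i + 1] - t[i])
        p = lambda j: rr[min(max(j, 0), len(rr) - 1)]
        out.append(catmull_rom(p(i - 1), p(i), p(i + 1), p(i + 2), u))
        tau += dt
    return out
```

A constant RR series is reproduced exactly, since the Catmull-Rom weights sum to one at every u.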
Maury, Augusto; Revilla, Reynier I
2015-08-01
Cosmic rays (CRs) occasionally affect charge-coupled device (CCD) detectors, introducing large spikes with very narrow bandwidth into the spectrum. These CR features can distort the chemical information expressed by the spectra. Consequently, we propose here an algorithm to identify and remove significant spikes in a single Raman spectrum. An autocorrelation analysis is first carried out to accentuate the CR features as outliers. Subsequently, with an adequate selection of the threshold, a discrete wavelet transform filter is used to identify CR spikes. Identified data points are then replaced by interpolated values using the weighted-average interpolation technique. This approach only modifies the data in a close vicinity of the CRs. Additionally, robust wavelet transform parameters are proposed (a desirable property for automation) after optimizing them through application of the method to a large number of spectra. However, this algorithm, like all single-spectrum analysis procedures, is limited to cases in which CRs have much narrower bandwidth than the Raman bands. This might not be the case when low-resolution Raman instruments are used.
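The replacement step, interpolating over flagged points with a distance-weighted average of clean neighbours, can be sketched as follows. The detection here is a simplified second-difference z-score standing in for the paper's autocorrelation/wavelet pipeline; the names and thresholds are illustrative assumptions.

```python
def remove_spikes(y, threshold=3.0, radius=3):
    """Replace narrow spikes by a distance-weighted average of nearby
    clean points.  Detection uses a z-score on the second difference
    (a crude stand-in for the wavelet-based step described above; a
    very large spike inflates the global std and can mask itself)."""
    n = len(y)
    d2 = [0.0] * n
    for i in range(1, n - 1):
        d2[i] = y[i - 1] - 2 * y[i] + y[i + 1]
    mean = sum(d2) / n
    std = (sum((v - mean) ** 2 for v in d2) / n) ** 0.5 or 1.0
    bad = {i for i in range(n) if abs(d2[i] - mean) > threshold * std}
    out = list(y)
    for i in bad:
        num = den = 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            if j not in bad:
                w = 1.0 / abs(j - i)   # weight falls off with distance
                num += w * y[j]
                den += w
        if den:
            out[i] = num / den
    return out
```

Only flagged samples are modified, matching the "close vicinity" behaviour described in the abstract.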
An assessment of air pollutant exposure methods in Mexico City, Mexico.
Rivera-González, Luis O; Zhang, Zhenzhen; Sánchez, Brisa N; Zhang, Kai; Brown, Daniel G; Rojas-Bracho, Leonora; Osornio-Vargas, Alvaro; Vadillo-Ortega, Felipe; O'Neill, Marie S
2015-05-01
Geostatistical interpolation methods to estimate individual exposure to outdoor air pollutants can be used in pregnancy cohorts where personal exposure data are not collected. Our objectives were to a) develop four assessment methods (citywide average (CWA); nearest monitor (NM); inverse distance weighting (IDW); and ordinary kriging (OK)), and b) compare daily metrics and cross-validations of the interpolation models. We obtained 2008 hourly data from Mexico City's outdoor air monitoring network for PM10, PM2.5, O3, CO, NO2, and SO2 and constructed daily exposure metrics for 1,000 simulated individual locations across five populated geographic zones. Descriptive statistics from all methods were calculated for dry and wet seasons, and by zone. We also evaluated the IDW and OK methods' ability to predict measured concentrations at monitors using cross-validation and a coefficient of variation (COV). All methods were performed using SAS 9.3, except ordinary kriging, which was modeled using R's gstat package. Overall, mean concentrations and standard deviations were similar among the different methods for each pollutant. Correlations between methods were generally high (r=0.77 to 0.99). However, the ranges of estimated concentrations determined by NM, IDW, and OK were wider than the ranges for CWA. Root mean square errors for OK were consistently equal to or lower than those for the IDW method. OK standard errors varied considerably between pollutants, and the computed COVs ranged from 0.46 (least error) for SO2 and PM10 to 3.91 (most error) for PM2.5. OK predicted concentrations measured at the monitors better than IDW and NM. Given the similarity in results for the exposure methods, OK is preferred because this method alone provides predicted standard errors, which can be incorporated in statistical models.
The daily estimated exposures calculated using these different exposure methods provide flexibility to evaluate multiple windows of exposure during pregnancy, not just trimester or pregnancy-long exposures. Many studies evaluating associations between outdoor air pollution and adverse pregnancy outcomes rely on outdoor air pollution monitoring data linked to information gathered from large birth registries, and often lack residence location information needed to estimate individual exposure. This study simulated 1,000 residential locations to evaluate four air pollution exposure assessment methods, and describes possible exposure misclassification from using spatial averaging versus geostatistical interpolation models. An implication of this work is that policies to reduce air pollution and exposure among pregnant women based on epidemiologic literature should take into account possible error in estimates of effect when spatial averages alone are evaluated.
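Of the four exposure methods, inverse distance weighting is the simplest to state explicitly; a minimal sketch (our own, with hypothetical names, not the SAS implementation used in the study) is:

```python
def idw(monitors, point, power=2.0):
    """Inverse-distance-weighted estimate at `point` from a list of
    (x, y, value) monitor tuples.  A point that coincides with a
    monitor returns that monitor's value exactly."""
    num = den = 0.0
    for x, y, v in monitors:
        d2 = (x - point[0]) ** 2 + (y - point[1]) ** 2
        if d2 == 0.0:
            return v
        w = d2 ** (-power / 2.0)  # 1 / distance**power
        num += w * v
        den += w
    return num / den
```

Unlike kriging, IDW provides no prediction standard error, which is the trade-off noted in the conclusion above.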
Comparing interpolation techniques for annual temperature mapping across Xinjiang region
NASA Astrophysics Data System (ADS)
Ren-ping, Zhang; Jing, Guo; Tian-gang, Liang; Qi-sheng, Feng; Aimaiti, Yusupujiang
2016-11-01
Interpolating climatic variables such as temperature is challenging due to the highly variable nature of meteorological processes and the difficulty of establishing a representative network of stations. In this paper, based on monthly temperature data obtained from 154 official meteorological stations in the Xinjiang region and surrounding areas, we compared five spatial interpolation techniques: inverse distance weighting (IDW), ordinary kriging, cokriging, thin-plate smoothing splines (ANUSPLIN), and empirical Bayesian kriging (EBK). Error metrics were used to validate interpolations against independent data. Results indicated that ANUSPLIN performed better than the other four interpolation methods.
Sandia Unstructured Triangle Tabular Interpolation Package v 0.1 beta
DOE Office of Scientific and Technical Information (OSTI.GOV)
2013-09-24
The software interpolates tabular data, such as for equations of state, provided on an unstructured triangular grid. In particular, interpolation occurs in a two-dimensional space by looking up the triangle in which the desired evaluation point resides and then performing a linear interpolation over the n-tuples associated with the nodes of the chosen triangle. The interface to the interpolation routines allows for automated conversion of units from those tabulated to the desired output units. When multiple tables are included in a data file, new tables may be generated by on-the-fly mixing of the provided tables.
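The lookup-then-interpolate scheme described above amounts to barycentric (linear) interpolation within the containing triangle. The sketch below is our own minimal version under assumed data structures; the package's actual interface is not shown in the text.

```python
def barycentric(tri, p):
    """Barycentric coordinates of point p in triangle tri = 3 (x, y) pairs."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    l1 = ((y2 - y3) * (p[0] - x3) + (x3 - x2) * (p[1] - y3)) / det
    l2 = ((y3 - y1) * (p[0] - x3) + (x1 - x3) * (p[1] - y3)) / det
    return l1, l2, 1.0 - l1 - l2

def interp_tri(triangles, values, p):
    """Find the triangle containing p (brute-force scan for brevity)
    and linearly interpolate the nodal values; `values` holds one
    3-tuple of nodal data per triangle."""
    for tri, val in zip(triangles, values):
        lam = barycentric(tri, p)
        if all(c >= -1e-12 for c in lam):  # inside (or on edge of) tri
            return sum(c * v for c, v in zip(lam, val))
    raise ValueError("point outside the tabulated domain")
```

A production table lookup would use a spatial index rather than a linear scan, but the interpolation formula is the same.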
High degree interpolation polynomial in Newton form
NASA Technical Reports Server (NTRS)
Tal-Ezer, Hillel
1988-01-01
Polynomial interpolation is an essential subject in numerical analysis. Dealing with a real interval, it is well known that even if f(x) is an analytic function, interpolating at equally spaced points can diverge. On the other hand, interpolating at the zeroes of the corresponding Chebyshev polynomial will converge. Using the Newton formula, this result of convergence is true only on the theoretical level. It is shown that the algorithm which computes the divided differences is numerically stable only if: (1) the interpolating points are arranged in a different order, and (2) the size of the interval is 4.
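For reference, the divided-difference table and Newton-form evaluation discussed above can be written compactly as follows. This is a textbook sketch, not Tal-Ezer's algorithm; the stability issues the abstract raises concern the ordering of the points and the interval size, which this naive version does not address.

```python
def divided_differences(xs, ys):
    """Build Newton divided-difference coefficients in place:
    c[j] = f[x0, ..., xj]."""
    c = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c

def newton_eval(xs, c, x):
    """Evaluate the Newton-form polynomial by a Horner-like scheme."""
    acc = c[-1]
    for i in range(len(c) - 2, -1, -1):
        acc = acc * (x - xs[i]) + c[i]
    return acc
```

On data from a degree-2 polynomial the table's cubic coefficient vanishes and the interpolant reproduces the polynomial exactly.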
Quasi interpolation with Voronoi splines.
Mirzargar, Mahsa; Entezari, Alireza
2011-12-01
We present a quasi interpolation framework that attains the optimal approximation-order of Voronoi splines for reconstruction of volumetric data sampled on general lattices. The quasi interpolation framework of Voronoi splines provides an unbiased reconstruction method across various lattices. Therefore this framework allows us to analyze and contrast the sampling-theoretic performance of general lattices, using signal reconstruction, in an unbiased manner. Our quasi interpolation methodology is implemented as an efficient FIR filter that can be applied online or as a preprocessing step. We present visual and numerical experiments that demonstrate the improved accuracy of reconstruction across lattices, using the quasi interpolation framework. © 2011 IEEE
Technical note: estimating absorbed doses to the thyroid in CT.
Huda, Walter; Magill, Dennise; Spampinato, Maria V
2011-06-01
To describe a method for estimating absorbed doses to the thyroid in patients undergoing neck CT examinations. Thyroid doses in anthropomorphic phantoms were obtained for all 23 scanner dosimetry data sets in the ImPACT CT patient dosimetry calculator. Values of relative thyroid dose [R(thy)(L)], defined as the thyroid dose for a given scan length (L) divided by the corresponding thyroid dose for a whole-body scan, were determined for neck CT scans. Ratios [D'(thy)] of the maximum thyroid dose to the corresponding CTDI(vol) were obtained for two phantom diameters. The mass-equivalent water cylinder of any patient can be derived from the neck cross-sectional area and the corresponding average Hounsfield unit, and compared to the 16.5-cm diameter water cylinder that models the ImPACT anthropomorphic phantom neck. Published values of relative doses in water cylinders of varying diameter were used to adjust thyroid doses in the anthropomorphic phantom to those of any sized patient. Relative thyroid doses R(thy)(L) increase to unity with increasing scan length, with very small differences between scanners. A 10-cm scan centered on the thyroid would result in a dose that is nearly 90% of the thyroid dose from a whole-body scan when performed using constant radiographic techniques. At 120 kV, the average value of D'(thy) for the 16-cm diameter phantom was 1.17 +/- 0.05 and was independent of CT vendor, year of CT scanner, and choice of x-ray tube voltage. The corresponding average value of D'(thy) in the 32-cm diameter phantom was 2.28 +/- 0.22 and showed marked variations depending on vendor, year of introduction into clinical practice, and x-ray tube voltage. At 120 kV, a neck equivalent to a 10-cm diameter cylinder of water would have thyroid doses 36% higher than those in the ImPACT phantom, whereas a neck equivalent to a 25-cm diameter cylinder would have thyroid doses 35% lower.
Patient thyroid doses can be estimated by taking into account the amount of radiation used to perform the CT examination (CTDI(vol)) and accounting for scan length and patient anatomy (i.e., neck diameter) at the thyroid location.
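Read as a formula, the estimation procedure chains the quoted factors multiplicatively. The sketch below is our reading of that structure, with the 16-cm-phantom defaults taken from the abstract; it is not the paper's exact formula.

```python
def thyroid_dose_estimate(ctdi_vol_mGy, r_thy=0.90, d_prime=1.17,
                          size_factor=1.0):
    """Rough thyroid-dose estimate following the structure described
    above: D_thy ~= D'(thy) * CTDIvol * R(thy)(L) * size factor.

    Defaults: d_prime = 1.17 (16-cm phantom, 120 kV) and r_thy = 0.90
    (10-cm scan centered on the thyroid), both quoted in the abstract.
    size_factor adjusts for neck diameter (e.g. ~1.36 for a 10-cm
    equivalent neck, ~0.65 for 25 cm, per the abstract's figures)."""
    return d_prime * ctdi_vol_mGy * r_thy * size_factor
```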
Imaging a moving lung tumor with megavoltage cone beam computed tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gayou, Olivier, E-mail: ogayou@wpahs.org; Colonias, Athanasios
2015-05-15
Purpose: Respiratory motion may affect the accuracy of image guidance of radiation treatment of lung cancer. A cone beam computed tomography (CBCT) image spans several breathing cycles, resulting in a blurred object with a theoretical size equal to the sum of tumor size and breathing motion. However, several factors may affect this theoretical relationship. The objective of this study was to analyze the effect of tumor motion on megavoltage (MV)-CBCT images, by comparing target sizes on simulation and pretreatment images of a large cohort of lung cancer patients. Methods: Ninety-three MV-CBCT images from 17 patients were analyzed. Internal target volumes were contoured on each MV-CBCT dataset [internal target volume (ITV{sub CB})]. Their extent in each dimension was compared to that of two volumes contoured on simulation 4-dimensional computed tomography (4D-CT) images: the combination of the tumor contours of each phase of the 4D-CT (ITV{sub 4D}) and the volume contoured on the average CT calculated from the 4D-CT phases (ITV{sub ave}). Tumor size and breathing amplitude were assessed by contouring the tumor on each CBCT raw projection where it could be unambiguously identified. The effect of breathing amplitude on the quality of the MV-CBCT image reconstruction was analyzed. Results: The mean differences between the sizes of ITV{sub CB} and ITV{sub 4D} were −1.6 ± 3.3 mm (p < 0.001), −2.4 ± 3.1 mm (p < 0.001), and −7.2 ± 5.3 mm (p < 0.001) in the anterior/posterior (AP), left/right (LR), and superior/inferior (SI) directions, respectively, showing that MV-CBCT underestimates the full target size. The corresponding mean differences between ITV{sub CB} and ITV{sub ave} were 0.3 ± 2.6 mm (p = 0.307), 0.0 ± 2.4 mm (p = 0.86), and −4.0 ± 4.3 mm (p < 0.001), indicating that the average CT image is more representative of what is visible on MV-CBCT in the AP and LR directions.
In the SI direction, differences between ITV{sub CB} and ITV{sub ave} could be separated into two groups based on tumor motion: −3.2 ± 3.2 mm for tumor motion less than 15 mm and −10.9 ± 6.3 mm for tumor motion greater than 15 mm. Deviations of measured target extents from their theoretical values derived from tumor size and motion were correlated with motion amplitude similarly for both MV-CBCT and average CT images, suggesting that the two images were subject to similar motion artifacts for motion less than 15 mm. Conclusions: MV-CBCT images are affected by tumor motion and tend to under-represent the full target volume. For tumor motion up to 15 mm, the volume contoured on average CT is comparable to that contoured on the MV-CBCT. Therefore, the average CT should be used in image registration for localization purposes, and the standard 5 mm PTV margin seems adequate. For tumor motion greater than 15 mm, an additional setup margin may need to be used to account for the increased uncertainty in tumor localization.
Automatic mediastinal lymph node detection in chest CT
NASA Astrophysics Data System (ADS)
Feuerstein, Marco; Deguchi, Daisuke; Kitasaka, Takayuki; Iwano, Shingo; Imaizumi, Kazuyoshi; Hasegawa, Yoshinori; Suenaga, Yasuhito; Mori, Kensaku
2009-02-01
Computed tomography (CT) of the chest is a very common staging investigation for the assessment of mediastinal, hilar, and intrapulmonary lymph nodes in the context of lung cancer. In the current clinical workflow, the detection and assessment of lymph nodes is usually performed manually, which can be error-prone and time-consuming. We therefore propose a method for the automatic detection of mediastinal, hilar, and intrapulmonary lymph node candidates in contrast-enhanced chest CT. Based on the segmentation of important mediastinal anatomy (bronchial tree, aortic arch) and making use of anatomical knowledge, we utilize Hessian eigenvalues to detect lymph node candidates. As lymph nodes can be characterized as blob-like structures of varying size and shape within a specific intensity interval, we can utilize these characteristics to reduce the number of false positive candidates significantly. We applied our method to five cases with suspected lung cancer. The processing time of our algorithm did not exceed 6 minutes, and we achieved an average sensitivity of 82.1% and an average precision of 13.3%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yakun; Li, Xiang; Segars, W. Paul
2014-02-15
Purpose: Given the radiation concerns inherent to the x-ray modalities, accurately estimating the radiation doses that patients receive during different imaging modalities is crucial. This study estimated organ doses, effective doses, and risk indices for the three clinical chest x-ray imaging techniques (chest radiography, tomosynthesis, and CT) using 59 anatomically variable voxelized phantoms and Monte Carlo simulation methods. Methods: A total of 59 computational anthropomorphic male and female extended cardiac-torso (XCAT) adult phantoms were used in this study. Organ doses and effective doses were estimated for a clinical radiography system with the capability of conducting chest radiography and tomosynthesis (Definium 8000, VolumeRAD, GE Healthcare) and a clinical CT system (LightSpeed VCT, GE Healthcare). A Monte Carlo dose simulation program (PENELOPE, version 2006, Universitat de Barcelona, Spain) was used to mimic these two clinical systems. The Duke University (Durham, NC) technique charts were used to determine the clinical techniques for the radiographic modalities. An exponential relationship between CTDI{sub vol} and patient diameter was used to determine the absolute dose values for CT. The simulations of the two clinical systems compute organ and tissue doses, which were then used to calculate effective dose and risk index. The calculation of the two dose metrics used the tissue weighting factors from ICRP Publication 103 and the BEIR VII report. Results: The average effective dose of the chest posteroanterior examination was found to be 0.04 mSv, which was 1.3% that of the chest CT examination. The average effective dose of the chest tomosynthesis examination was found to be about ten times that of the chest posteroanterior examination and about 12% that of the chest CT examination.
With increasing patient average chest diameter, both the effective dose and risk index for CT increased considerably in an exponential fashion, while these two dose metrics only increased slightly for the radiographic modalities and for chest tomosynthesis. Effective and organ doses normalized to mAs all illustrated an exponential decrease with increasing patient size. As a surface organ, breast doses had less correlation with body size than those of the lungs or liver. Conclusions: Patient body size has a much greater impact on the radiation dose of chest CT examinations than of chest radiography and tomosynthesis. The size of a patient should be considered when choosing the best thoracic imaging modality.
NASA Astrophysics Data System (ADS)
Bogusz, Janusz; Kłos, Anna; Grzempowski, Piotr; Kontny, Bernard
2014-06-01
The paper presents the results of testing various methods of interpolating permanent stations' velocity residua in a regular grid, which constitutes a continuous model of the velocity field in the territory of Poland. Three software packages were used in the research from the point of view of interpolation: GMT (The Generic Mapping Tools), Surfer and ArcGIS. The following methods were tested: Nearest Neighbor, Triangulation (TIN), Spline Interpolation, Surface, Inverse Distance to a Power, Minimum Curvature and Kriging. The presented research used the absolute velocity values expressed in the ITRF2005 reference frame and the intraplate velocities related to the NUVEL model of over 300 permanent reference stations of the EPN and ASG-EUPOS networks covering the area of Europe. Interpolation for the area of Poland was done using data from the whole of Europe to make the results at the borders of the interpolation area reliable. As a result of this research, an optimum method for such data interpolation was developed. All the mentioned methods were tested for being local or global, for the possibility of computing errors of the interpolated values, for explicitness and fidelity of the interpolation functions, and for the smoothing mode. In the authors' opinion, the best data interpolation method is Kriging with the linear semivariogram model run in the Surfer programme, because it allows for the computation of errors in the interpolated values and it is a global method (it distorts the results the least). Alternately, it is acceptable to use the Minimum Curvature method. Empirical analysis of the interpolation results obtained by means of the two methods showed that the results are identical. The tests were conducted using the intraplate velocities of the European sites.
Statistics in the form of computing the minimum, maximum and mean values of the interpolated North and East components of the velocity residuum were prepared for all the tested methods, and each of the resulting continuous velocity fields was visualized by means of the GMT programme. The interpolated components of the velocities and their residua are presented in the form of tables and bar diagrams.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hwang, T; Koo, T
Purpose: To quantitatively investigate the planar dose difference and the γ value between the reference fluence map with 1 mm detector-to-detector distance and fluence maps with lower spatial resolution for head and neck intensity modulated radiation therapy (IMRT). Methods: For ten head and neck cancer patients, IMRT quality assurance (QA) beams were generated by the commercial radiation treatment planning system Pinnacle3 (ver. 8.0.d, Philips Medical System, Madison, WI). For each beam, ten fluence maps (detector-to-detector distance: 1 mm to 10 mm in 1 mm steps) were generated. The fluence maps with detector-to-detector distance larger than 1 mm were interpolated using MATLAB (R2014a, The MathWorks, Natick, MA) by four different interpolation methods: bilinear, cubic spline, bicubic, and nearest-neighbor interpolation. These interpolated fluence maps were compared with the reference one using the γ value (criteria: 3%, 3 mm) and the relative dose difference. Results: As the detector-to-detector distance increases, the dose difference between the two maps increases. For fluence maps of the same resolution, cubic spline and bicubic interpolation are almost equally the best methods, while nearest-neighbor interpolation is the worst. For example, for 5 mm distance fluence maps, γ≤1 rates are 98.12±2.28%, 99.48±0.66%, 99.45±0.65%, and 82.23±0.48% for bilinear, cubic spline, bicubic, and nearest-neighbor interpolation, respectively. For 7 mm distance fluence maps, γ≤1 rates are 90.87±5.91%, 90.22±6.95%, 91.79±5.97%, and 71.93±4.92% for bilinear, cubic spline, bicubic, and nearest-neighbor interpolation, respectively.
Conclusion: We recommend that a 2-dimensional detector array with high spatial resolution be used as an IMRT QA tool and that measured fluence maps be interpolated using cubic spline or bicubic interpolation for head and neck IMRT delivery. This work was supported by the Radiation Technology R&D program through the National Research Foundation of Korea, funded by the Ministry of Science, ICT & Future Planning (No. 2013M2A2A7038291).
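For illustration, bilinear upsampling of a coarse detector grid, the simplest of the four interpolation methods compared above, can be sketched as follows. This is our own minimal version, not the MATLAB code used in the study.

```python
def bilinear(grid, x, y):
    """Bilinear interpolation on a 2-D grid (row index y, column
    index x), with unit spacing between detector samples."""
    i = min(int(y), len(grid) - 2)
    j = min(int(x), len(grid[0]) - 2)
    u, v = y - i, x - j
    return ((1 - u) * (1 - v) * grid[i][j]
            + (1 - u) * v * grid[i][j + 1]
            + u * (1 - v) * grid[i + 1][j]
            + u * v * grid[i + 1][j + 1])

def upsample(grid, factor):
    """Refine a coarse fluence map by an integer factor, evaluating
    the bilinear surface on the finer grid."""
    rows = (len(grid) - 1) * factor + 1
    cols = (len(grid[0]) - 1) * factor + 1
    return [[bilinear(grid, c / factor, r / factor) for c in range(cols)]
            for r in range(rows)]
```

Cubic-spline and bicubic upsampling follow the same pattern with higher-order kernels, which is what gives them the higher γ pass rates reported above.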
A Lagrangian dynamic subgrid-scale model of turbulence
NASA Technical Reports Server (NTRS)
Meneveau, C.; Lund, T. S.; Cabot, W.
1994-01-01
A new formulation of the dynamic subgrid-scale model is tested in which the error associated with the Germano identity is minimized over flow pathlines rather than over directions of statistical homogeneity. This procedure allows the application of the dynamic model with averaging to flows in complex geometries that do not possess homogeneous directions. The characteristic Lagrangian time scale over which the averaging is performed is chosen such that the model is purely dissipative, guaranteeing numerical stability when coupled with the Smagorinsky model. The formulation is tested successfully in forced and decaying isotropic turbulence and in fully developed and transitional channel flow. In homogeneous flows, the results are similar to those of the volume-averaged dynamic model, while in channel flow, the predictions are superior to those of the plane-averaged dynamic model. The relationship between the averaged terms in the model and vortical structures (worms) that appear in the LES is investigated. Computational overhead is kept small (about 10 percent above the CPU requirements of the volume or plane-averaged dynamic model) by using an approximate scheme to advance the Lagrangian tracking through first-order Euler time integration and linear interpolation in space.
Efficient Craig Interpolation for Linear Diophantine (Dis)Equations and Linear Modular Equations
2008-02-01
The use of Craig interpolants has enabled the development of powerful hardware and software model checking techniques. Efficient algorithms are known for computing interpolants in rational and real linear arithmetic. We focus on subsets of integer linear arithmetic. Our main results are polynomial time algorithms for computing interpolants for conjunctions of linear diophantine equations, linear modular equations (linear congruences), and linear diophantine disequations. We show the utility of the proposed interpolation algorithms for discovering modular/divisibility predicates.
Interpolating Non-Parametric Distributions of Hourly Rainfall Intensities Using Random Mixing
NASA Astrophysics Data System (ADS)
Mosthaf, Tobias; Bárdossy, András; Hörning, Sebastian
2015-04-01
The correct spatial interpolation of hourly rainfall intensity distributions is of great importance for stochastic rainfall models. Poorly interpolated distributions may lead to over- or underestimation of rainfall and consequently to wrong estimates in downstream applications such as hydrological or hydraulic models. By analyzing the spatial relation of empirical rainfall distribution functions, a persistent order of the quantile values over a wide range of non-exceedance probabilities is observed. As the order remains similar, the interpolation weights of quantile values for one non-exceedance probability can be applied to the other probabilities. This assumption enables the use of kernel-smoothed distribution functions for interpolation purposes. Comparing the order of hourly quantile values across gauges with the order of their daily quantile values for equal probabilities yields high correlations. The hourly quantile values also correlate strongly with elevation. The incorporation of these two covariates into the interpolation is therefore tested. As only positive interpolation weights for the quantile values assure a monotonically increasing distribution function, the use of geostatistical methods like kriging is problematic; employing kriging with external drift to incorporate secondary information is not applicable. Nonetheless, it would be fruitful to make use of covariates. To overcome this shortcoming, a new random mixing approach for spatial random fields is applied. Within the mixing process, hourly quantile values are treated as equality constraints and correlations with elevation are included as relationship constraints. To profit from the dependence on daily quantile values, distribution functions of daily gauges are used to set up lower-equal and greater-equal constraints at their locations. In this way the denser daily gauge network can be included in the interpolation of the hourly distribution functions.
The applicability of this new interpolation procedure will be shown for around 250 hourly rainfall gauges in the German federal state of Baden-Württemberg. The performance of the random mixing technique within the interpolation is compared to applicable kriging methods. Additionally, the interpolation of kernel smoothed distribution functions is compared with the interpolation of fitted parametric distributions.
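The core assumption, that one set of non-negative interpolation weights can be reused across all non-exceedance probabilities, can be sketched as follows. The names are illustrative and the random-mixing machinery itself is not shown.

```python
def interpolate_quantiles(station_quantiles, weights):
    """Interpolate a full distribution at an ungauged site by applying
    one set of non-negative station weights to the quantile values at
    every non-exceedance probability (the 'persistent order'
    assumption from the text).  Weights are normalized to sum to one,
    which keeps the result a valid, monotone quantile function."""
    probs = sorted(next(iter(station_quantiles.values())))
    total = sum(weights.values())
    return {p: sum(w * station_quantiles[s][p]
                   for s, w in weights.items()) / total
            for p in probs}
```

Because each interpolated quantile is a convex combination of monotone quantile functions, the interpolated distribution is monotone as well.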
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, X; Schott, D; Song, Y
Purpose: In an effort toward early assessment of treatment response, we investigate radiation-induced changes in the CT number histogram of the GTV during the delivery of chemoradiation therapy (CRT) for pancreatic cancer. Methods: Diagnostic-quality CT data acquired daily during routine CT-guided CRT using a CT-on-rails for 20 pancreatic head cancer patients were analyzed. All patients were treated with a radiation dose of 50.4 Gy in 28 fractions. On each daily CT set, the contours of the pancreatic head and the spinal cord were delineated. The Hounsfield unit (HU) histograms in these contours were extracted and processed using MATLAB. Eight parameters of the histogram, including the mean HU over all voxels, peak position, volume, standard deviation (SD), skewness, kurtosis, energy, and entropy, were calculated for each fraction. Significance was assessed using paired two-tailed t-tests and correlations were analyzed using Spearman rank correlation tests. Results: In general, the HU histogram in the pancreatic head (but not in the spinal cord) changed during the CRT delivery. Changes from the first to the last fraction in mean HU in the pancreatic head ranged from −13.4 to 3.7 HU with an average of −4.4 HU, which was significant (P<0.001). Among the other quantities, the volume decreased, the skewness increased (less skewed), and the kurtosis decreased (less sharp) during the CRT delivery. The changes of mean HU, volume, skewness, and kurtosis became significant after two weeks of treatment. Patient pathological response status was associated with the change of SD (ΔSD), i.e., ΔSD = 1.85 (average of 7 patients) for good response and −0.08 (average of 6 patients) for moderate and poor response. Conclusion: Significant changes in the HU histogram and histogram-based metrics (e.g., mean HU, skewness, and kurtosis) in the tumor were observed during the course of chemoradiation therapy for pancreatic cancer. These changes may potentially be used for early assessment of treatment response.
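The first-order histogram metrics tracked above (mean, SD, skewness, kurtosis) can be computed from a voxel list as in this sketch; it is our own illustration (the study used MATLAB), using population moments and Fisher (excess) kurtosis.

```python
def histogram_metrics(hu):
    """Mean, SD, skewness, and excess kurtosis of a list of voxel HU
    values, via the central moments m2, m3, m4."""
    n = len(hu)
    mean = sum(hu) / n
    m2 = sum((v - mean) ** 2 for v in hu) / n
    m3 = sum((v - mean) ** 3 for v in hu) / n
    m4 = sum((v - mean) ** 4 for v in hu) / n
    sd = m2 ** 0.5
    skew = m3 / sd ** 3 if sd else 0.0
    kurt = m4 / m2 ** 2 - 3.0 if sd else 0.0  # 0 for a Gaussian
    return {"mean": mean, "sd": sd, "skewness": skew, "kurtosis": kurt}
```

A symmetric distribution gives zero skewness; "less sharp" in the results above corresponds to decreasing kurtosis.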
SU-E-I-27: Estimating KERMA Area Product for CT Localizer Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ogden, K; Greene-Donnelly, K; Bennett, R
2015-06-15
Purpose: To estimate the free-in-air KERMA-area product (KAP) incident on patients from CT localizer scans for common CT exams. Methods: In-plane beam intensity profiles were measured in localizer acquisition mode using OSLs for a 64-slice MDCT scanner (Lightspeed VCT, GE Medical Systems, Waukesha, WI). The z-axis beam width was measured as a function of distance from isocenter. The beam profile and width were used to calculate a weighted-average air KERMA per unit mAs as a function of intercepted x-axis beam width for objects symmetric about the localizer centerline. Patient areas were measured using manually drawn regions and divided by localizer length to determine average width. Data were collected for 50 head exams (lateral localizer only), 15 head/neck exams, 50 chest exams, and 50 abdomen/pelvis exams. Mean patient widths and acquisition techniques were used to calculate the weighted-average free-in-air KERMA, which was multiplied by the patient area to estimate KAP. Results: The scan technique was 120 kV tube voltage, 10 mA tube current, and a table speed of 10 cm/s. The mean ± standard deviation values of KAP were 120 ± 11.6, 469 ± 62.6, 518 ± 45, and 763 ± 93 mGy·cm{sup 2} for head, head/neck, chest, and abdomen/pelvis exams, respectively. For studies with AP and lateral localizers, the AP/lateral area ratios were 1.20, 1.33, and 1.24 for the head/neck, chest, and abdomen/pelvis exams, respectively. However, the AP/lateral KAP ratios were 1.12, 1.08, and 1.07, respectively. Conclusion: Calculation of KAP for CT localizers is complicated by the non-uniform intensity profile and z-axis beam width. KAP values are similar to those for simple radiographic exams such as a chest radiograph and represent a small fraction of the x-ray exposure at CT. However, as CT doses are reduced, the localizer contribution will become a more significant fraction of the total exposure.
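Read as arithmetic, the KAP estimate is a product of the weighted-average air KERMA per mAs, the delivered mAs, and the irradiated patient area. The sketch below is our reading of that structure; the simple product form, units, and names are illustrative assumptions, not the paper's exact calculation.

```python
def localizer_kap(kerma_per_mAs_uGy, mA, length_cm,
                  table_speed_cm_s, mean_width_cm):
    """Rough KERMA-area product of a CT localizer scan.

    KAP ~= (weighted-average air KERMA per mAs) * mAs * area, where
    mAs = tube current * scan time and area = mean patient width *
    localizer length.  Returns uGy*cm^2."""
    scan_time_s = length_cm / table_speed_cm_s
    mAs = mA * scan_time_s
    kerma_uGy = kerma_per_mAs_uGy * mAs
    area_cm2 = mean_width_cm * length_cm
    return kerma_uGy * area_cm2
```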
SU-E-J-221: A Novel Expansion Method for MRI Based Target Delineation in Prostate Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruiz, B; East Carolina University, Greenville, NC; Feng, Y
Purpose: To compare a novel bladder/rectum carveout expansion method on MRI delineated prostate to standard CT and expansion based methods for maintaining prostate coverage while providing superior bladder and rectal sparing. Methods: Ten prostate cases were planned to include four trials: MRI vs CT delineated prostate/proximal seminal vesicles, and each image modality compared to both standard expansions (8mm 3D expansion and 5mm posterior, i.e. ∼8mm) and carveout method expansions (5mm 3D expansion, 4mm posterior for GTV-CTV excluding expansion into bladder/rectum followed by additional 5mm 3D expansion to PTV, i.e. ∼1cm). All trials were planned to a total dose of 7920 cGy via IMRT. Evaluation and comparison were made using the following criteria: QUANTEC constraints for bladder/rectum including analysis of low dose regions, changes in PTV volume, total control points, and maximum hot spot. Results: The ∼8mm MRI expansion consistently produced the best plan, with the lowest total control points and the best bladder/rectum sparing. However, this scheme had the smallest prostate (average 22.9% reduction) and subsequent PTV volume, consistent with prior literature. ∼1cm MRI had an average PTV volume comparable to ∼8mm CT at 3.79% difference. Bladder QUANTEC constraints were on average less for the ∼1cm MRI as compared to the ∼8mm CT and observed as statistically significant with a 2.64% reduction in V65. Rectal constraints appeared to follow the same trend. Case-by-case analysis showed variation in rectal V30, with MRI delineated prostate being most favorable regardless of expansion type. ∼1cm MRI and ∼8mm CT had comparable plan quality. Conclusion: MRI delineated prostate with standard expansions had the smallest PTV, leading to margins that may be too tight. The bladder/rectum carveout expansion method on MRI delineated prostate was found to be superior to standard CT based methods in terms of bladder and rectal sparing while maintaining prostate coverage.
Continued investigation is warranted for further validation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Q; Stanford University School of Medicine, Stanford, CA; Liu, H
Purpose: Spectral CT enabled by an energy-resolved photon-counting detector outperforms conventional CT in terms of material discrimination, contrast resolution, etc. One reconstruction method for spectral CT is to generate a color image from a reconstructed component in each energy channel. However, given the radiation dose, the number of photons in each channel is limited, which will result in strong noise in each channel and affect the final color reconstruction. Here we propose a novel dictionary learning method for spectral CT that combines the dictionary-based sparse representation method and a patch-based low-rank constraint to simultaneously improve the reconstruction in each channel and to address the inter-channel correlations to further improve the reconstruction. Methods: The proposed method has two important features: (1) guarantee of the patch-based sparsity in each energy channel, which is the result of the dictionary-based sparse representation constraint; (2) the explicit consideration of the correlations among different energy channels, which is realized by a patch-by-patch nuclear-norm-based low-rank constraint. For each channel, the dictionary consists of two sub-dictionaries. One is learned from the average of the images in all energy channels, and the other is learned from the average of the images in all energy channels except the current channel. With the average operation to reduce noise, these two dictionaries can effectively preserve the structural details and get rid of artifacts caused by noise. Combining them together can express all structural information in the current channel. Results: Dictionary learning based methods can obtain better results than FBP and the TV-based method. With the low-rank constraint, the image quality can be further improved in the channel with more noise. The final color result by the proposed method has the best visual quality.
Conclusion: The proposed method can effectively improve the image quality of low-dose spectral CT. This work is partially supported by the National Natural Science Foundation of China (No. 61302136), and the Natural Science Basic Research Plan in Shaanxi Province of China (No. 2014JQ8317).
Ando, Kei; Imagama, Shiro; Ito, Zenya; Kobayashi, Kazuyoshi; Ukai, Junichi; Muramoto, Akio; Shinjo, Ryuichi; Matsumoto, Tomohiro; Nakashima, Hiroaki; Ishiguro, Naoki
2014-05-01
Retrospective clinical study. To investigate, using multislice CT images, how thoracic ossification of the posterior longitudinal ligament (OPLL) changes with time after thoracic posterior fusion surgery. Few studies have evaluated thoracic OPLL preoperatively and postoperatively using computed tomography (CT). The subjects included 19 patients (7 men and 12 women) with an average age at surgery of 52 years (38-66 y) who underwent indirect posterior decompression with corrective fusion and instrumentation at our institute. The minimum follow-up period was 1 year, and averaged 3 years 10 months (12-120 mo). Using CT images, we investigated fusion range, preoperative and postoperative Cobb angles of thoracic fusion levels, intraoperative and postoperative blood loss, operative time, hyperintense areas on preoperative MRI of the thoracic spine, and thickness of the OPLL on the reconstructed sagittal multislice CT images taken before the operation and at 3 months, 6 months, and 1 year after surgery. The basic fusion area was 3 vertebrae above and below the OPLL lesion. The mean operative time was 7 hours and 48 min (4 h 39 min-10 h 28 min), and blood loss was 1631 mL (160-11,731 mL). Intramedullary signal intensity change on magnetic resonance images was observed at the most severe ossification area in 18 patients. Interestingly, the rostral and caudal ossification regions of the OPLLs, as seen on sagittal CT images, were discontinuous across the disk space in all patients. Postoperatively, the discontinuous segments connected in all patients, without progression of OPLL thickness, by 5.1 months on average. All patients needing surgery had discontinuity across the disk space between the rostral and caudal ossified lesions as seen on CT. This discontinuity was considered to be the main reason for the myelopathy because a high-intensity area on magnetic resonance imaging was seen in 18 of 19 patients at the same level.
Rigid fixation with instrumentation may allow the discontinuous segments to connect in patients without a concomitant thickening of the OPLL.
Gu, Yuhua; Kumar, Virendra; Hall, Lawrence O; Goldgof, Dmitry B; Li, Ching-Yen; Korn, René; Bendtsen, Claus; Velazquez, Emmanuel Rios; Dekker, Andre; Aerts, Hugo; Lambin, Philippe; Li, Xiuli; Tian, Jie; Gatenby, Robert A; Gillies, Robert J
2012-01-01
A single click ensemble segmentation (SCES) approach based on an existing "Click&Grow" algorithm is presented. The SCES approach requires only one operator-selected seed point, compared with the multiple operator inputs typically needed. This facilitates processing large numbers of cases. The approach was evaluated on a set of 129 CT lung tumor images using a similarity index (SI). The average SI is above 93% using 20 different start seeds, showing stability. The average SI between 2 different readers was 79.53%. We then compared the SCES algorithm with the two readers, the level set algorithm, and the skeleton graph cut algorithm, obtaining average SIs of 78.29%, 77.72%, 63.77%, and 63.76%, respectively. We can conclude that the newly developed automatic lung lesion segmentation algorithm is stable, accurate and automated. PMID:23459617
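The similarity index is not defined within the abstract itself; a common choice for comparing binary segmentations is the Dice-style overlap 2|A∩B|/(|A|+|B|), sketched here under that assumption:

```python
import numpy as np

def similarity_index(mask_a, mask_b):
    """Dice-style similarity index between two binary segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

m1 = np.array([[0, 1, 1], [0, 1, 0]])
m2 = np.array([[0, 1, 0], [0, 1, 1]])
si = similarity_index(m1, m2)  # overlap 2, mask sizes 3 and 3 -> 2/3
```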
A novel method for interactive multi-objective dose-guided patient positioning
NASA Astrophysics Data System (ADS)
Haehnle, Jonas; Süss, Philipp; Landry, Guillaume; Teichert, Katrin; Hille, Lucas; Hofmaier, Jan; Nowak, Dimitri; Kamp, Florian; Reiner, Michael; Thieke, Christian; Ganswindt, Ute; Belka, Claus; Parodi, Katia; Küfer, Karl-Heinz; Kurz, Christopher
2017-01-01
In intensity-modulated radiation therapy (IMRT), 3D in-room imaging data is typically utilized for accurate patient alignment on the basis of anatomical landmarks. In the presence of non-rigid anatomical changes, it is often not obvious which patient position is most suitable. Thus, dose-guided patient alignment is an interesting approach to use available in-room imaging data for up-to-date dose calculation, aimed at finding the position that yields the optimal dose distribution. This contribution presents the first implementation of dose-guided patient alignment as a multi-criteria optimization problem. User-defined clinical objectives are employed for setting up a multi-objective problem. Using pre-calculated dose distributions at a limited number of patient shifts and dose interpolation, a continuous space of Pareto-efficient patient shifts becomes accessible. Pareto sliders facilitate interactive browsing of the possible shifts with real-time dose display to the user. Dose interpolation accuracy is validated and the potential of multi-objective dose-guided positioning demonstrated for three head and neck (H&N) and three prostate cancer patients. Dose-guided positioning is compared to replanning for all cases. A delineated replanning CT served as surrogate for in-room imaging data. Dose interpolation accuracy was high. Using a 2% dose difference criterion, a median pass-rate of 95.7% for H&N and 99.6% for prostate cases was determined in a comparison to exact dose calculations. For all patients, dose-guided positioning made it possible to find a clinically preferable dose distribution compared to bony anatomy based alignment. For all H&N cases, mean dose to the spared parotid glands was below 26 Gy (up to 27.5 Gy with bony alignment) and clinical target volume (CTV) V95% was above 99.1% (compared to 95.1%). For all prostate patients, CTV V95% was above 98.9% (compared to 88.5%) and rectum V50Gy was below 50% (compared to 56.1%).
Replanning yielded improved results for the H&N cases. For the prostate cases, differences from dose-guided positioning were minor.
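The continuous space of shifts becomes accessible because the dose at an intermediate shift can be interpolated voxel-wise from the pre-calculated distributions. A one-dimensional sketch of that idea, using simple linear blending between two pre-computed shifts (the paper's actual interpolation scheme may differ):

```python
import numpy as np

def interp_dose(dose_a, dose_b, t):
    """Voxel-wise linear interpolation between dose distributions
    pre-calculated at two patient shifts; t=0 gives shift A, t=1 shift B."""
    return (1.0 - t) * np.asarray(dose_a) + t * np.asarray(dose_b)

d_a = np.array([60.0, 40.0])   # dose (Gy) in two voxels at shift A
d_b = np.array([62.0, 36.0])   # dose in the same voxels at shift B
d_mid = interp_dose(d_a, d_b, 0.5)
```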
Contrast-guided image interpolation.
Wei, Zhe; Ma, Kai-Kuang
2013-11-01
In this paper, a contrast-guided image interpolation method is proposed that incorporates contrast information into the image interpolation process. Given the image under interpolation, four binary contrast-guided decision maps (CDMs) are generated and used to guide the interpolation filtering through two sequential stages: 1) the 45° and 135° CDMs for interpolating the diagonal pixels and 2) the 0° and 90° CDMs for interpolating the row and column pixels. After applying edge detection to the input image, the generation of a CDM lies in evaluating those nearby non-edge pixels of each detected edge for re-classifying them possibly as edge pixels. This decision is realized by solving two generalized diffusion equations over the computed directional variation (DV) fields using a derived numerical approach to diffuse or spread the contrast boundaries or edges, respectively. The amount of diffusion or spreading is proportional to the amount of local contrast measured at each detected edge. The diffused DV fields are then thresholded for yielding the binary CDMs, respectively. Therefore, the decision bands with variable widths will be created on each CDM. The two CDMs generated in each stage will be exploited as the guidance maps to conduct the interpolation process: for each declared edge pixel on the CDM, a 1-D directional filtering will be applied to estimate its associated to-be-interpolated pixel along the direction as indicated by the respective CDM; otherwise, a 2-D directionless or isotropic filtering will be used instead to estimate the associated missing pixels for each declared non-edge pixel. Extensive simulation results have clearly shown that the proposed contrast-guided image interpolation is superior to other state-of-the-art edge-guided image interpolation methods. In addition, the computational complexity is relatively low when compared with existing methods; hence, it is fairly attractive for real-time image applications.
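The core decision in each stage (1-D directional filtering on declared edge pixels, isotropic filtering elsewhere) can be sketched for the diagonal-pixel stage of a 2x upscale. This is a heavily simplified illustration of that decision rule, not the paper's full CDM pipeline:

```python
import numpy as np

def interp_diagonal(img, cdm45):
    """Estimate each 'missing' diagonal pixel of a 2x upscale.
    cdm45 marks positions classified as 45-degree edges: there we average
    only the two neighbors aligned with the edge; elsewhere all four
    surrounding pixels (isotropic). Single stage, 45-degree map only."""
    h, w = img.shape
    out = np.zeros((h - 1, w - 1))
    for i in range(h - 1):
        for j in range(w - 1):
            tl, tr = img[i, j], img[i, j + 1]
            bl, br = img[i + 1, j], img[i + 1, j + 1]
            if cdm45[i, j]:          # edge along 45 degrees: aligned pair only
                out[i, j] = (tr + bl) / 2.0
            else:                    # non-edge: isotropic 4-neighbor mean
                out[i, j] = (tl + tr + bl + br) / 4.0
    return out

quad = np.array([[0.0, 1.0], [1.0, 0.0]])    # a 45-degree bright line
edge_case = interp_diagonal(quad, np.array([[True]]))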
Ung, K A; White, R; Mathlum, M; Mak-Hau, V; Lynch, R
2014-01-01
In post-prostatectomy radiotherapy to the prostatic bed, a consistent bladder volume is essential to maintain the position of the treatment target volume. We assessed the differences between bladder volume readings from a portable bladder scanner (BS-V) and those obtained from planning CT (CT-V) or cone-beam CT (CBCT-V). Interfraction bladder volume variation was also determined. BS-V was recorded before and after planning CT or CBCT. The percentage differences between the readings using the two imaging modalities, standard deviations and 95% confidence intervals were determined. Data were analysed for the whole patient cohort and separately for the older BladderScan™ BVI3000 and newer BVI9400 model. Interfraction bladder volume variation was determined from the percentage difference between the CT-V and CBCT-V. Treatment duration, incorporating the time needed for BS and CBCT, was recorded. Fourteen patients were enrolled, producing 133 data sets for analysis. BS-V was taken using the BVI9400 in four patients (43 data sets). The mean BS-V was 253.2 mL, and the mean CT-V or CBCT-V was 199 cm³. The mean percentage difference between the two modalities was 19.7% (SD 42.2; 95% CI 12.4 to 26.9). The BVI9400 model produced more consistent readings, with a mean percentage difference of -6.2% (SD 27.8; 95% CI -14.7 to -2.4%). The mean percentage difference between CT-V and CBCT-V was 31.3% (range -48% to 199.4%). Treatment duration from time of first BS reading to CBCT was, on average, 12 min (range 6-27). The BS produces bladder volume readings with an average 19.7% difference from CT-V or CBCT-V and can potentially be used to screen for large interfraction bladder volume variations in radiotherapy to the prostatic bed. The observed interfraction bladder volume variation suggests the need to improve bladder volume consistency. Incorporating the BS into practice is feasible. © 2014 The Royal Australian and New Zealand College of Radiologists.
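The headline 19.7% figure is a mean of per-measurement percentage differences. A sketch of one such comparison, assuming the convention (BS − reference)/reference × 100 (the paper's exact sign convention is not stated):

```python
def pct_diff(bs_v, ref_v):
    """Percentage difference of a bladder-scanner reading from the
    CT/CBCT reference volume, assumed to be (BS - ref) / ref * 100."""
    return (bs_v - ref_v) / ref_v * 100.0

# Illustrative: a 240 mL scanner reading vs a 200 mL CBCT volume -> +20%
d = pct_diff(240.0, 200.0)
```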
MacFarlane, Michael; Wong, Daniel; Hoover, Douglas A; Wong, Eugene; Johnson, Carol; Battista, Jerry J; Chen, Jeff Z
2018-03-01
In this work, we propose a new method of calibrating cone beam computed tomography (CBCT) data sets for radiotherapy dose calculation and plan assessment. The motivation for this patient-specific calibration (PSC) method is to develop an efficient, robust, and accurate CBCT calibration process that is less susceptible to deformable image registration (DIR) errors. Instead of mapping the CT numbers voxel-by-voxel with traditional DIR calibration methods, the PSC method generates correlation plots between deformably registered planning CT and CBCT voxel values, for each image slice. A linear calibration curve specific to each slice is then obtained by least-squares fitting, and applied to the CBCT slice's voxel values. This allows each CBCT slice to be corrected using DIR without altering the patient geometry through regional DIR errors. A retrospective study was performed on 15 head-and-neck cancer patients, each having routine CBCTs and a middle-of-treatment re-planning CT (reCT). The original treatment plan was re-calculated on the patient's reCT image set (serving as the gold standard) as well as the image sets produced by voxel-to-voxel DIR, density-overriding, and the new PSC calibration methods. Dose accuracy of each calibration method was compared to the reference reCT data set using common dose-volume metrics and 3D gamma analysis. A phantom study was also performed to assess the accuracy of the DIR and PSC CBCT calibration methods compared with planning CT. Compared with the gold standard using reCT, the average dose metric differences were ≤ 1.1% for all three methods (PSC: -0.3%; DIR: -0.7%; density-override: -1.1%). The average gamma pass rates with thresholds 3%, 3 mm were also similar among the three techniques (PSC: 95.0%; DIR: 96.1%; density-override: 94.4%). An automated patient-specific calibration method was developed which yielded strong dosimetric agreement with the results obtained using a re-planning CT for head-and-neck patients.
© 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
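The slice-wise calibration described above amounts to a least-squares straight-line fit between registered voxel values, applied back to the CBCT slice. A minimal NumPy sketch of that idea (illustrative; the published implementation details may differ):

```python
import numpy as np

def psc_calibrate(cbct, deformed_ct):
    """Patient-specific calibration sketch: for each axial slice, fit a
    straight line mapping CBCT voxel values to deformably registered
    planning-CT values (least squares), then rescale that CBCT slice.
    The CBCT geometry is kept; only voxel values change."""
    out = np.empty(cbct.shape, dtype=float)
    for k in range(cbct.shape[0]):                  # slice-by-slice
        slope, intercept = np.polyfit(cbct[k].ravel().astype(float),
                                      deformed_ct[k].ravel().astype(float), 1)
        out[k] = slope * cbct[k] + intercept
    return out

# Toy check: if the true relation is exactly linear, the fit recovers it.
cbct = np.arange(8.0).reshape(2, 2, 2)
ref = 2.0 * cbct + 5.0
calibrated = psc_calibrate(cbct, ref)
```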
Brady, Samuel L; Moore, Bria M; Yee, Brian S; Kaufman, Robert A
2014-01-01
To determine a comprehensive method for the implementation of adaptive statistical iterative reconstruction (ASIR) for maximal radiation dose reduction in pediatric computed tomography (CT) without changing the magnitude of noise in the reconstructed image or the contrast-to-noise ratio (CNR) in the patient. The institutional review board waived the need to obtain informed consent for this HIPAA-compliant quality analysis. Chest and abdominopelvic CT images obtained before ASIR implementation (183 patient examinations; mean patient age, 8.8 years ± 6.2 [standard deviation]; range, 1 month to 27 years) were analyzed for image noise and CNR. These measurements were used in conjunction with noise models derived from anthropomorphic phantoms to establish new beam current-modulated CT parameters to implement 40% ASIR at 120 and 100 kVp without changing noise texture or magnitude. Image noise was assessed in images obtained after ASIR implementation (492 patient examinations; mean patient age, 7.6 years ± 5.4; range, 2 months to 28 years) the same way it was assessed in the pre-ASIR analysis. Dose reduction was determined by comparing size-specific dose estimates in the pre- and post-ASIR patient cohorts. Data were analyzed with paired t tests. With 40% ASIR implementation, the average relative dose reduction for chest CT was 39% (2.7/4.4 mGy), with a maximum reduction of 72% (5.3/18.8 mGy). The average relative dose reduction for abdominopelvic CT was 29% (4.8/6.8 mGy), with a maximum reduction of 64% (7.6/20.9 mGy). Beam current modulation was unnecessary for patients weighing 40 kg or less. The difference between 0% and 40% ASIR noise magnitude was less than 1 HU, with statistically nonsignificant increases in patient CNR at 100 kVp of 8% (15.3/14.2; P = .41) for chest CT and 13% (7.8/6.8; P = .40) for abdominopelvic CT. 
Radiation dose reduction at pediatric CT was achieved when 40% ASIR was implemented as a dose reduction tool only; no net change to the magnitude of noise in the reconstructed image or the patient CNR occurred. © RSNA, 2013.
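The quoted reductions pair post- and pre-ASIR size-specific dose estimates; for example, the chest-CT average of 39% corresponds to 4.4 mGy falling to 2.7 mGy. A one-line sketch of that ratio (function name is illustrative):

```python
def relative_dose_reduction(post_mgy, pre_mgy):
    """Relative reduction in size-specific dose estimate: (pre - post) / pre.
    Values below mirror the abstract's chest-CT average (4.4 -> 2.7 mGy)."""
    return (pre_mgy - post_mgy) / pre_mgy

r = relative_dose_reduction(2.7, 4.4)   # about 0.39, i.e. ~39%
```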
The Interpolation Theory of Radial Basis Functions
NASA Astrophysics Data System (ADS)
Baxter, Brad
2010-06-01
In this dissertation, it is first shown that, when the radial basis function is a p-norm and 1 < p < 2, interpolation is always possible when the points are all different and there are at least two of them. We then show that interpolation is not always possible when p > 2. Specifically, for every p > 2, we construct a set of different points in some Rd for which the interpolation matrix is singular. The greater part of this work investigates the sensitivity of radial basis function interpolants to changes in the function values at the interpolation points. Our early results show that it is possible to recast the work of Ball, Narcowich and Ward in the language of distributional Fourier transforms in an elegant way. We then use this language to study the interpolation matrices generated by subsets of regular grids. In particular, we are able to extend the classical theory of Toeplitz operators to calculate sharp bounds on the spectra of such matrices. Applying our understanding of these spectra, we construct preconditioners for the conjugate gradient solution of the interpolation equations. Our main result is that the number of steps required to achieve solution of the linear system to within a required tolerance can be independent of the number of interpolation points. The Toeplitz structure allows us to use fast Fourier transform techniques, which implies that the total number of operations is a multiple of n log n, where n is the number of interpolation points. Finally, we use some of our methods to study the behaviour of the multiquadric when its shape parameter increases to infinity. We find a surprising link with the sinus cardinalis or sinc function of Whittaker. Consequently, it can be highly useful to use a large shape parameter when approximating band-limited functions.
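The p-norm interpolant from the first result can be sketched directly: build the matrix A with entries ||x_i − x_j||_p and solve for coefficients. This is a toy illustration of the existence result for 1 < p < 2, not the dissertation's Toeplitz or preconditioning machinery:

```python
import numpy as np

def pnorm_rbf_interpolant(points, values, p=1.5):
    """Radial basis interpolation where distances are measured in the
    p-norm (1 < p < 2); the interpolation matrix is then nonsingular
    for distinct points. Returns a callable interpolant s(x)."""
    pts = np.asarray(points, dtype=float)
    diff = pts[:, None, :] - pts[None, :, :]
    A = np.sum(np.abs(diff) ** p, axis=-1) ** (1.0 / p)   # A_ij = ||x_i - x_j||_p
    coeffs = np.linalg.solve(A, np.asarray(values, dtype=float))

    def s(x):
        d = np.sum(np.abs(pts - np.asarray(x, dtype=float)) ** p, axis=-1) ** (1.0 / p)
        return float(d @ coeffs)
    return s

s = pnorm_rbf_interpolant([[0, 0], [1, 0], [0, 1]], [1.0, 2.0, 3.0])
```

By construction, the interpolant reproduces the given values at the data points.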
Medical image processing on the GPU - past, present and future.
Eklund, Anders; Dufort, Paul; Forsberg, Daniel; LaConte, Stephen M
2013-12-01
Graphics processing units (GPUs) are used today in a wide range of applications, mainly because they can dramatically accelerate parallel computing, are affordable and energy efficient. In the field of medical imaging, GPUs are in some cases crucial for enabling practical use of computationally demanding algorithms. This review presents the past and present work on GPU accelerated medical image processing, and is meant to serve as an overview and introduction to existing GPU implementations. The review covers GPU acceleration of basic image processing operations (filtering, interpolation, histogram estimation and distance transforms), the most commonly used algorithms in medical imaging (image registration, image segmentation and image denoising) and algorithms that are specific to individual modalities (CT, PET, SPECT, MRI, fMRI, DTI, ultrasound, optical imaging and microscopy). The review ends by highlighting some future possibilities and challenges. Copyright © 2013 Elsevier B.V. All rights reserved.
Medical image enhancement using resolution synthesis
NASA Astrophysics Data System (ADS)
Wong, Tak-Shing; Bouman, Charles A.; Thibault, Jean-Baptiste; Sauer, Ken D.
2011-03-01
We introduce a post-processing approach to improve the quality of CT reconstructed images. The scheme is adapted from the resolution-synthesis (RS) interpolation algorithm. In this approach, we consider the input image, scanned at a particular dose level, as a degraded version of a high quality image scanned at a high dose level. Image enhancement is achieved by predicting the high quality image by classification based linear regression. To improve the robustness of our scheme, we also apply the minimum description length principle to determine the optimal number of predictors to use in the scheme, and ridge regression to regularize the design of the predictors. Experimental results show that our scheme is effective in reducing the noise in images reconstructed from filtered back projection without significant loss of image details. Alternatively, our scheme can also be applied to reduce dose while maintaining image quality at an acceptable level.
NASA Astrophysics Data System (ADS)
Han, Tao; Chen, Lingyun; Lai, Chao-Jen; Liu, Xinming; Shen, Youtao; Zhong, Yuncheng; Ge, Shuaiping; Yi, Ying; Wang, Tianpeng; Shaw, Chris C.
2009-02-01
Images of mastectomy breast specimens have been acquired with a bench top experimental Cone beam CT (CBCT) system. The resulting images have been segmented to model an uncompressed breast for simulation of various CBCT techniques. To further simulate conventional or tomosynthesis mammographic imaging for comparison with the CBCT technique, a deformation technique was developed to convert the CT data for an uncompressed breast to a compressed breast without altering the breast volume or regional breast density. With this technique, 3D breast deformation is separated into two 2D deformations in coronal and axial views. To preserve the total breast volume and regional tissue composition, each 2D deformation step was achieved by altering the square pixels into rectangular ones with the pixel areas unchanged and resampling with the original square pixels using bilinear interpolation. The compression was modeled by first stretching the breast in the superior-inferior direction in the coronal view. The image data were first deformed by distorting the voxels with a uniform distortion ratio. These deformed data were then deformed again using distortion ratios varying with the breast thickness and re-sampled. The deformation procedures were then applied in the axial view to stretch the breast in the chest-wall-to-nipple direction while shrinking it in the mediolateral direction; the data were then re-sampled and converted into data for uniform cubic voxels. Threshold segmentation was applied to the final deformed image data to obtain the 3D compressed breast model. Our results show that the original segmented CBCT image data were successfully converted into those for a compressed breast with the same volume and regional density preserved. Using this compressed breast model, conventional and tomosynthesis mammograms were simulated for comparison with CBCT.
Deformable known component model-based reconstruction for coronary CT angiography
NASA Astrophysics Data System (ADS)
Zhang, X.; Tilley, S.; Xu, S.; Mathews, A.; McVeigh, E. R.; Stayman, J. W.
2017-03-01
Purpose: Atherosclerosis detection remains challenging in coronary CT angiography for patients with cardiac implants. Pacing electrodes of a pacemaker or lead components of a defibrillator can create substantial blooming and streak artifacts in the heart region, severely hindering the visualization of a plaque of interest. We present a novel reconstruction method that incorporates a deformable model for metal leads to eliminate metal artifacts and improve anatomy visualization even near the boundary of the component. Methods: The proposed reconstruction method, referred to as STF-dKCR, includes a novel parameterization of the component that integrates deformation, a 3D-2D preregistration process that estimates component shape and position, and a polyenergetic forward model for x-ray propagation through the component where the spectral properties are jointly estimated. The methodology was tested on physical data of a cardiac phantom acquired on a CBCT testbench. The phantom included a simulated vessel, a metal wire emulating a pacing lead, and a small Teflon sphere attached to the vessel wall, mimicking a calcified plaque. The proposed method was also compared to the traditional FBP reconstruction and an interpolation-based metal correction method (FBP-MAR). Results: Metal artifacts present in the standard FBP reconstruction were significantly reduced in both FBP-MAR and STF-dKCR, yet only the STF-dKCR approach significantly improved the visibility of the small Teflon target (within 2 mm of the metal wire). The attenuation of the Teflon bead improved to 0.0481 mm⁻¹ with STF-dKCR from 0.0166 mm⁻¹ with FBP and from 0.0301 mm⁻¹ with FBP-MAR, much closer to the expected 0.0414 mm⁻¹. Conclusion: The proposed method has the potential to improve plaque visualization in coronary CT angiography in the presence of wire-shaped metal components.
Improving image quality in laboratory x-ray phase-contrast imaging
NASA Astrophysics Data System (ADS)
De Marco, F.; Marschner, M.; Birnbacher, L.; Viermetz, M.; Noël, P.; Herzen, J.; Pfeiffer, F.
2017-03-01
Grating-based X-ray phase-contrast (gbPC) is known to provide significant benefits for biomedical imaging. To investigate these benefits, a high-sensitivity gbPC micro-CT setup for small (≈ 5 cm) biological samples has been constructed. Unfortunately, high differential-phase sensitivity leads to an increased magnitude of data processing artifacts, limiting the quality of tomographic reconstructions. Most importantly, processing of phase-stepping data with incorrect stepping positions can introduce artifacts resembling Moiré fringes to the projections. Additionally, the focal spot size of the X-ray source limits the resolution of tomograms. Here we present a set of algorithms to minimize artifacts, increase resolution and improve the visual impression of projections and tomograms from the examined setup. We assessed two algorithms for artifact reduction: Firstly, a correction algorithm exploiting correlations of the artifacts and differential-phase data was developed and tested. Artifacts were reliably removed without compromising image data. Secondly, we implemented a new algorithm for flat-field selection, which was shown to exclude flat-fields with strong artifacts. Both procedures successfully improved the image quality of projections and tomograms. Deconvolution of all projections of a CT scan can minimize blurring introduced by the finite size of the X-ray source focal spot. Application of the Richardson-Lucy deconvolution algorithm to gbPC-CT projections resulted in an improved resolution of phase-contrast tomograms. Additionally, we found that nearest-neighbor interpolation of projections can improve the visual impression of very small features in phase-contrast tomograms. In conclusion, we achieved an increase in image resolution and quality for the investigated setup, which may lead to an improved detection of very small sample features, thereby maximizing the setup's utility.
Investigations of interpolation errors of angle encoders for high precision angle metrology
NASA Astrophysics Data System (ADS)
Yandayan, Tanfer; Geckeler, Ralf D.; Just, Andreas; Krause, Michael; Asli Akgoz, S.; Aksulu, Murat; Grubert, Bernd; Watanabe, Tsukasa
2018-06-01
Interpolation errors at small angular scales are caused by the subdivision of the angular interval between adjacent grating lines into smaller intervals when radial gratings are used in angle encoders. They are often a major error source in precision angle metrology, and better approaches for determining them at low levels of uncertainty are needed. Extensive investigations of interpolation errors of different angle encoders with various interpolators and interpolation schemes were carried out by adapting the shearing method to the calibration of autocollimators with angle encoders. Results from laboratories with advanced angle metrology capabilities, acquired using four different high-precision angle encoder/interpolator/rotary-table combinations, are presented. State-of-the-art uncertainties down to 1 milliarcsec (5 nrad) were achieved for the determination of the interpolation errors using the shearing method, which provides simultaneous access to the angle deviations of the autocollimator and of the angle encoder. Compared to the calibration and measurement capabilities (CMC) of the participants for autocollimators, the use of the shearing technique represents a substantial improvement in the uncertainty by a factor of up to 5, in addition to the precise determination of interpolation errors or their residuals (when compensated). A discussion of the results is carried out in conjunction with the equipment used.
Pearce, Mark A
2015-08-01
EBSDinterp is a graphic user interface (GUI)-based MATLAB® program to perform microstructurally constrained interpolation of nonindexed electron backscatter diffraction data points. The area available for interpolation is restricted using variations in pattern quality or band contrast (BC). Areas of low BC are not available for interpolation, and therefore cannot be erroneously filled by adjacent grains "growing" into them. Points with the most indexed neighbors are interpolated first and the required number of neighbors is reduced with each successive round until a minimum number of neighbors is reached. Further iterations allow more data points to be filled by reducing the BC threshold. This method ensures that the best quality points (those with high BC and most neighbors) are interpolated first, and that the interpolation is restricted to grain interiors before adjacent grains are grown together to produce a complete microstructure. The algorithm is implemented through a GUI, taking advantage of MATLAB®'s parallel processing toolbox to perform the interpolations rapidly so that a variety of parameters can be tested to ensure that the final microstructures are robust and artifact-free. The software is freely available through the CSIRO Data Access Portal (doi:10.4225/08/5510090C6E620) as both a compiled Windows executable and as source code.
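The priority scheme above (fill the best-supported points first, restricted by a band-contrast threshold) can be sketched on a toy phase map. This is a simplified 4-neighbor Python sketch of the idea, not the MATLAB implementation:

```python
import numpy as np

def fill_unindexed(phase, bc, bc_threshold):
    """Sketch of neighbor-prioritized interpolation: unindexed points
    (phase 0) with band contrast above threshold take the most common
    phase of their indexed 4-neighbors; points with the most indexed
    neighbors are filled first, iterating until no more can be filled."""
    phase = phase.copy()
    h, w = phase.shape
    for needed in (4, 3, 2, 1):             # require many neighbors first
        changed = True
        while changed:
            changed = False
            for i in range(h):
                for j in range(w):
                    if phase[i, j] != 0 or bc[i, j] < bc_threshold:
                        continue            # already indexed, or low BC
                    nb = [phase[a, b]
                          for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                          if 0 <= a < h and 0 <= b < w and phase[a, b] != 0]
                    if len(nb) >= needed:
                        phase[i, j] = max(set(nb), key=nb.count)
                        changed = True
    return phase

phase = np.array([[1, 0], [1, 0]])          # two unindexed points
bc = np.array([[9.0, 9.0], [9.0, 1.0]])     # one has low band contrast
filled = fill_unindexed(phase, bc, 5.0)
```

Lowering the threshold in later passes, as the abstract describes, would allow the remaining low-BC point to fill.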
The natural neighbor series manuals and source codes
NASA Astrophysics Data System (ADS)
Watson, Dave
1999-05-01
This software series is concerned with reconstruction of spatial functions by interpolating a set of discrete observations having two or three independent variables. There are three components in this series: (1) nngridr: an implementation of natural neighbor interpolation, 1994; (2) modemap: an implementation of natural neighbor interpolation on the sphere, 1998; and (3) orebody: an implementation of natural neighbor isosurface generation (publication incomplete). Interpolation is important to geologists because it can offer graphical insights into significant geological structure and behavior which, although inherent in the data, may not be otherwise apparent. It is also the first step in numerical integration, which provides a primary avenue to detailed quantification of the observed spatial function. Interpolation is implemented by selecting a surface-generating rule that controls the form of a `bridge' built across the interstices between adjacent observations. The cataloging and classification of the many such rules that have been reported is a subject in itself (Watson, 1992), and the merits of various approaches have been debated at length. However, for practical purposes, interpolation methods are usually judged on how satisfactorily they handle problematic data sets. Sparse scattered data or traverse data, especially if the functional values are highly variable, generally test interpolation methods most severely; but one method, natural neighbor interpolation, usually produces preferable results for such data.
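Natural neighbor (Sibson) interpolation weights each surrounding site by the fraction of the query point's Voronoi cell that it "steals" from that site's cell when the query is inserted. A simple way to illustrate this, without computing Voronoi geometry explicitly, is a Monte Carlo estimate of those stolen fractions; the sketch below takes that approach. It is an illustrative approximation only (the function name is invented, and sampling is confined to the sites' bounding box, so it is only reasonable for queries inside the convex hull of the data).

```python
import numpy as np

def natural_neighbor(query, sites, values, n_samples=200_000, rng=None):
    """Monte Carlo estimate of Sibson natural-neighbour interpolation at `query`.

    Sibson weights are the fractions of the inserted point's Voronoi cell
    'stolen' from each original site.  We estimate them by sampling points
    uniformly in the sites' bounding box and attributing each sample that
    falls in the query's new cell to its former nearest site (the donor).
    """
    rng = np.random.default_rng(rng)
    sites = np.asarray(sites, float)
    values = np.asarray(values, float)
    query = np.asarray(query, float)
    lo, hi = sites.min(0), sites.max(0)
    samples = rng.uniform(lo, hi, size=(n_samples, sites.shape[1]))
    d_sites = np.linalg.norm(samples[:, None, :] - sites[None, :, :], axis=2)
    nearest = d_sites.argmin(1)              # donor cell before insertion
    d_query = np.linalg.norm(samples - query, axis=1)
    stolen = d_query < d_sites.min(1)        # samples now inside the query's cell
    if not stolen.any():                     # query coincides with a site
        return float(values[np.linalg.norm(sites - query, axis=1).argmin()])
    w = np.bincount(nearest[stolen], minlength=len(sites)).astype(float)
    w /= w.sum()                             # Sibson weights (Monte Carlo estimate)
    return float(w @ values)
```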
Interpolation for de-Dopplerisation
NASA Astrophysics Data System (ADS)
Graham, W. R.
2018-05-01
'De-Dopplerisation' is one aspect of a problem frequently encountered in experimental acoustics: deducing an emitted source signal from received data. It is necessary when source and receiver are in relative motion, and requires interpolation of the measured signal. This introduces error. In acoustics, typical current practice is to employ linear interpolation and reduce error by over-sampling. In other applications, more advanced approaches with better performance have been developed. Associated with this work is a large body of theoretical analysis, much of which is highly specialised. Nonetheless, a simple and compact performance metric is available: the Fourier transform of the 'kernel' function underlying the interpolation method. Furthermore, in the acoustics context, it is a more appropriate indicator than other, more abstract, candidates. On this basis, interpolators from three families previously identified as promising (piecewise-polynomial, windowed-sinc, and B-spline-based) are compared. The results show that significant improvements over linear interpolation can straightforwardly be obtained. The recommended approach is B-spline-based interpolation, which performs best irrespective of accuracy specification. Its only drawback is a pre-filtering requirement, which represents an additional implementation cost compared to other methods. If this cost is unacceptable, and aliasing errors (on re-sampling) up to approximately 1% can be tolerated, a family of piecewise-cubic interpolators provides the best alternative.
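The gap between linear and B-spline interpolation is easy to demonstrate on a resampling task. The sketch below is an illustrative comparison, not the paper's benchmark: it resamples a sine sampled at 8 points per period at off-grid positions using `scipy.ndimage.map_coordinates`, whose `order=3` mode implements cubic B-spline interpolation including the pre-filtering step mentioned above (`prefilter=True` is the default).

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Sample a sine at 8 points per period, then resample at off-grid positions
# with (a) linear and (b) cubic B-spline interpolation.
n, cycles = 256, 32                          # 8 samples per period
t = np.arange(n)
signal = np.sin(2 * np.pi * cycles * t / n)
coords = np.linspace(4.25, n - 5.25, 1000)   # off-grid query positions, away from edges
exact = np.sin(2 * np.pi * cycles * coords / n)

err_lin = np.abs(map_coordinates(signal, [coords], order=1) - exact).max()
err_bsp = np.abs(map_coordinates(signal, [coords], order=3) - exact).max()
print(f"max error  linear: {err_lin:.3e}   cubic B-spline: {err_bsp:.3e}")
```

On this signal the cubic B-spline error is far below the linear one, consistent with the abstract's conclusion that B-spline interpolation performs best once the pre-filter cost is accepted.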
Quantum realization of the nearest neighbor value interpolation method for INEQR
NASA Astrophysics Data System (ADS)
Zhou, RiGui; Hu, WenWen; Luo, GaoFeng; Liu, XingAo; Fan, Ping
2018-07-01
This paper presents the nearest neighbor value (NNV) interpolation algorithm for the improved novel enhanced quantum representation of digital images (INEQR). Interpolation is necessary in image scaling because the number of pixels increases or decreases. The difference between the proposed scheme and nearest neighbor interpolation is that the missing pixel value is estimated from the nearest value rather than the nearest distance. Firstly, a sequence of quantum operations is predefined, such as cyclic shift transformations and the basic arithmetic operations. Then, the feasibility of the nearest neighbor value interpolation method for quantum images in the INEQR representation is proven using the previously designed quantum operations. Furthermore, a quantum image scaling algorithm, in the form of circuits implementing NNV interpolation for INEQR, is constructed for the first time. The merit of the proposed INEQR circuits lies in their low complexity, which is achieved by utilizing the unique properties of quantum superposition and entanglement. Finally, simulation experiments with different classical (i.e., conventional, non-quantum) images and scaling ratios are carried out on a classical computer using MATLAB 2014b; the results demonstrate that the proposed interpolation method achieves higher performance in terms of resolution than nearest neighbor and bilinear interpolation.
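For orientation, the classical baseline that such quantum scaling circuits reproduce is plain nearest-neighbor upscaling: each output pixel copies one input pixel, selected by an index shift, which is loosely analogous to the cyclic shift transformations applied to the position qubits of the INEQR state. The sketch below is that classical baseline only (the function name is invented, and the NNV variant's value-guided selection rule is not reproduced here).

```python
import numpy as np

def nn_upscale(img, ry, rx):
    """Classical nearest-neighbour upscaling by integer ratios (ry, rx).

    Each output pixel (i, j) copies input pixel (i // ry, j // rx), i.e.
    the source index is obtained by a simple shift of the output index.
    """
    h, w = img.shape
    rows = np.arange(h * ry) // ry       # source row for every output row
    cols = np.arange(w * rx) // rx       # source column for every output column
    return img[np.ix_(rows, cols)]
```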
Ding, Kai; Cao, Kunlin; Fuld, Matthew K.; Du, Kaifang; Christensen, Gary E.; Hoffman, Eric A.; Reinhardt, Joseph M.
2012-01-01
Purpose: Regional lung volume change as a function of lung inflation serves as an index of parenchymal and airway status as well as an index of regional ventilation and can be used to detect pathologic changes over time. In this paper, the authors propose a new regional measure of lung mechanics—the specific air volume change by corrected Jacobian. The authors compare this new measure, along with two existing registration-based measures of lung ventilation, to a regional ventilation measurement derived from xenon-CT (Xe-CT) imaging. Methods: 4DCT and Xe-CT datasets from four adult sheep are used in this study. Nonlinear, 3D image registration is applied to register an image acquired near end inspiration to an image acquired near end expiration. Approximately 200 annotated anatomical points are used as landmarks to evaluate registration accuracy. Three different registration-based measures of regional lung mechanics are derived and compared: the specific air volume change calculated from the Jacobian (SAJ); the specific air volume change calculated by the corrected Jacobian (SACJ); and the specific air volume change by intensity change (SAI). The authors show that the commonly used SAI measure can be derived from the direct SAJ measure by using the air-tissue mixture model and assuming there is no tissue volume change between the end-inspiration and end-expiration datasets. All three ventilation measures are evaluated by comparison to Xe-CT estimates of regional ventilation. Results: After registration, the mean registration error is on the order of 1 mm. For cubical regions of interest (ROIs) of size 20 mm × 20 mm × 20 mm, the SAJ and SACJ measures show significantly higher correlation (linear regression, average r2 = 0.75 and r2 = 0.82) with the Xe-CT based measure of specific ventilation (sV) than the SAI measure.
For ROIs in slabs along the ventral-dorsal vertical direction with size 150 mm × 8 mm × 40 mm, the SAJ, SACJ, and SAI measures all show high correlation (linear regression, average r2 = 0.88, r2 = 0.92, and r2 = 0.87) with the Xe-CT based sV, without significant differences between the three methods. The authors demonstrate a linear relationship between the difference of specific air volume change and the difference of tissue volume in all four animals (linear regression, average r2 = 0.86). Conclusions: Given a deformation field from an image registration algorithm, significant differences between the SAJ, SACJ, and SAI measures were found at a regional level when compared to the Xe-CT sV in the four sheep studied. The SACJ measure introduced here provides better correlations with Xe-CT based sV than the SAJ and SAI measures, thus providing an improved surrogate for regional ventilation. PMID:22894434
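The common core of the Jacobian-based measures is the voxel-wise volume change of the registration transform T(x) = x + u(x), given by J(x) = det(I + grad u), with J − 1 the fractional volume change. The sketch below computes only that core from a displacement field; it is a minimal illustration, not the paper's method (the SAJ/SACJ measures additionally weight the volume change by the local air fraction estimated from CT Hounsfield units, which is omitted here, and the function name is invented).

```python
import numpy as np

def jacobian_volume_change(u, spacing=(1.0, 1.0, 1.0)):
    """Voxel-wise volume change J(x) - 1 from a displacement field u.

    u : array of shape (3, nz, ny, nx), displacement in physical units.
    The transform is T(x) = x + u(x); its Jacobian is J = det(I + grad u),
    computed here with central finite differences via np.gradient.
    """
    grads = np.empty((3, 3) + u.shape[1:])
    for i in range(3):
        for j in range(3):                   # d u_i / d x_j
            grads[i, j] = np.gradient(u[i], spacing[j], axis=j)
    eye = np.eye(3).reshape(3, 3, 1, 1, 1)
    F = eye + grads                          # deformation gradient at each voxel
    J = np.linalg.det(np.moveaxis(F, (0, 1), (-2, -1)))
    return J - 1.0                           # fractional volume change
```

As a sanity check, a uniform 10% isotropic expansion should give J − 1 = 1.1³ − 1 ≈ 0.331 everywhere.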