Science.gov

Sample records for 3d ct images

  1. Validation of 3D ultrasound: CT registration of prostate images

    NASA Astrophysics Data System (ADS)

    Firle, Evelyn A.; Wesarg, Stefan; Karangelis, Grigoris; Dold, Christian

    2003-05-01

    Worldwide, 20% of men are expected to develop prostate cancer at some time in their lives. In addition to surgery, the traditional treatment for cancer, radiation treatment is becoming more popular. For prostate cancer, the most relevant radiation treatment is the brachytherapy procedure. For the safe delivery of that therapy, imaging is critically important. In cases where a CT device is available, combining the information provided by CT and 3D ultrasound (U/S) images offers advantages in recognizing the borders of the lesion and delineating the region of treatment. For these applications the CT and U/S scans should be registered and fused in a multi-modal dataset. The purpose of the present development is a registration tool (registration, fusion and validation) for available CT volumes with 3D U/S images of the same anatomical region, i.e. the prostate. The combination of these two imaging modalities interlinks the advantages of high-resolution CT imaging and low-cost real-time U/S imaging and offers a multi-modality imaging environment for further target and anatomy delineation. This tool has been integrated into the visualization software "InViVo", which has been developed over several years at Fraunhofer IGD in Darmstadt.
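
    The abstract does not state which similarity metric drives the CT-U/S registration; mutual information is a common choice for multimodal alignment. The sketch below, assuming two already resampled volumes of identical shape, illustrates only that metric and is not the InViVo implementation.

```python
# Minimal sketch: mutual information between two co-sampled volumes, a common
# similarity metric for multimodal (e.g. CT / 3D ultrasound) registration.
# Illustrative only; not the registration method used in "InViVo".
import numpy as np

def mutual_information(vol_a, vol_b, bins=64):
    """Mutual information of two volumes of identical shape (in bits)."""
    hist_2d, _, _ = np.histogram2d(vol_a.ravel(), vol_b.ravel(), bins=bins)
    p_ab = hist_2d / hist_2d.sum()          # joint probability
    p_a = p_ab.sum(axis=1, keepdims=True)   # marginal over b
    p_b = p_ab.sum(axis=0, keepdims=True)   # marginal over a
    nz = p_ab > 0                           # avoid log(0)
    return float(np.sum(p_ab[nz] * np.log2(p_ab[nz] / (p_a @ p_b)[nz])))

# Toy usage: MI of a volume with a noisy copy of itself is higher than with
# an unrelated volume, which is what a registration optimizer exploits.
ct = np.random.rand(32, 32, 32)
us_aligned = ct + 0.05 * np.random.rand(32, 32, 32)
us_random = np.random.rand(32, 32, 32)
print(mutual_information(ct, us_aligned) > mutual_information(ct, us_random))
```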

  2. Image fusion in craniofacial virtual reality modeling based on CT and 3dMD photogrammetry.

    PubMed

    Xin, Pengfei; Yu, Hongbo; Cheng, Huanchong; Shen, Shunyao; Shen, Steve G F

    2013-09-01

    The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and a 3dMD stereophotogrammetric facial surface. A CT scan and stereophotography were performed. The 3D CT models were reconstructed with Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed with 3dMD patient software. All 3D CT models were exported in Stereo Lithography file format, and the 3dMD model was exported in Virtual Reality Modeling Language file format. Image registration and fusion were performed in Mimics software. A genetic algorithm was used for precise image fusion alignment with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft tissue model and the 3dMD facial surface were also analyzed. The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and bone structures. Image registration errors in the virtual face are mainly located in the bilateral cheeks and eyeballs, where the errors exceed 1.5 mm. However, the image fusion of the whole point cloud sets of CT and 3dMD is acceptable, with a minimum error of less than 1 mm. The ease of use and high reliability of CT-3dMD image fusion allows the 3D virtual head to serve as an accurate, realistic, and widely applicable tool, and is of great benefit to virtual face modeling.

  3. Segmentation of the ovine lung in 3D CT Images

    NASA Astrophysics Data System (ADS)

    Shi, Lijun; Hoffman, Eric A.; Reinhardt, Joseph M.

    2004-04-01

    Pulmonary CT images can provide detailed information about the regional structure and function of the respiratory system. Prior to any of these analyses, however, the lungs must be identified in the CT data sets. A popular animal model for understanding lung physiology and pathophysiology is the sheep. In this paper we describe a lung segmentation algorithm for CT images of sheep. The algorithm has two main steps. The first step is lung extraction, which identifies the lung region using a technique based on optimal thresholding and connected components analysis. The second step is lung separation, which separates the left lung from the right lung by identifying the central fissure using an anatomy-based method incorporating dynamic programming and a line filter algorithm. The lung segmentation algorithm has been validated by comparing our automatic method to manual analysis for five pulmonary CT datasets. The RMS error between the computer-defined and manually-traced boundaries is 0.96 mm. The segmentation requires approximately 10 minutes for a 512x512x400 dataset on a PC workstation (2.40 GHz CPU, 2.0 GB RAM), while it takes a human observer approximately two hours to accomplish the same task.
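
    As a rough illustration of the lung-extraction step (thresholding plus connected-component analysis), the following sketch keeps the largest low-density components inside the body; the threshold value and border handling are assumptions, and the fissure-based left/right separation is not reproduced.

```python
# Minimal sketch of the lung-extraction step: threshold the CT volume at an
# assumed air/tissue cutoff and keep the largest connected low-density
# components not touching the volume border (i.e. not the background air).
import numpy as np
from scipy import ndimage

def extract_lungs(ct_hu, air_threshold=-400):
    """ct_hu: 3D array in Hounsfield units. Returns a boolean lung mask."""
    low_density = ct_hu < air_threshold              # air-like voxels
    labels, n = ndimage.label(low_density)           # connected components
    if n == 0:
        return np.zeros_like(low_density, dtype=bool)
    # Discard components touching the first/last slices (background air).
    border_labels = np.unique(np.concatenate([
        labels[0].ravel(), labels[-1].ravel(),
        labels[:, 0].ravel(), labels[:, -1].ravel()]))
    sizes = ndimage.sum(low_density, labels, index=np.arange(1, n + 1))
    for lab in border_labels:
        if lab > 0:
            sizes[lab - 1] = 0
    keep = np.argsort(sizes)[-2:] + 1                # two largest: the lungs
    return np.isin(labels, keep[sizes[keep - 1] > 0])

mask = extract_lungs(np.random.randint(-1000, 400, size=(60, 128, 128)))
print(mask.shape, mask.dtype)
```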

  4. Calculation of strain images of a breast-mimicking phantom from 3D CT image data.

    PubMed

    Kim, Jae G; Aowlad Hossain, A B M; Shin, Jong H; Lee, Soo Y

    2012-09-01

    Elastography is a medical imaging modality used to visualize the elasticity of soft tissues. Ultrasound and MRI have been used almost exclusively for elastography of soft tissues since they can detect minute tissue displacements on the order of micrometers. It is known that ultrasound and MRI elastography show cancerous tissues with much higher contrast than conventional ultrasound and MRI. To evaluate the possibility of combining elastography with x-ray imaging, we have calculated strain images of a breast-mimicking phantom from its 3D CT image data. We first simulated the x-ray elastography using a FEM model which incorporated both the elasticity and x-ray attenuation behaviors of breast tissues. After validating the x-ray elastography scheme by simulation, we made a breast-mimicking phantom that contained a hard inclusion against a soft background. With a micro-CT, we took 3D images of the phantom twice, changing the compressing force applied to the phantom. From the two 3D phantom images taken at the two different compression ratios, we calculated the displacement vector maps representing the compression-induced pixel displacements. In calculating the displacement vectors, we tracked the movements of image feature patterns from the less-compressed-phantom images to the more-compressed-phantom images using a 3D image correlation technique. We obtained strain images of the phantom by differentiating the displacement vector maps. The FEM simulation has shown that x-ray strain imaging is possible by tracking image feature patterns in the 3D CT images of the breast-mimicking phantom. The experimental displacement and strain images of the breast-mimicking phantom, obtained from the 3D micro-CT images taken with 0%-3% compression ratios, show behaviors similar to the FEM simulation results. The contrast and noise performance of the strain images improves as the phantom compression ratio increases. We have experimentally shown that we can improve x-ray strain image quality by applying 3D
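
    The last step described above, differentiating the displacement maps to obtain strain, can be sketched as follows with a synthetic displacement field standing in for the image-correlation output; the voxel size and compression values are illustrative.

```python
# Minimal sketch of the final step described above: given a displacement
# field (here synthetic, standing in for the 3D image-correlation result
# between the less- and more-compressed scans), the normal strain along the
# compression axis is the spatial derivative of that displacement component.
import numpy as np

def axial_strain(displacement_z, voxel_size_mm=1.0):
    """displacement_z: 3D array of z-displacements (mm) on the voxel grid."""
    # d(u_z)/dz, differentiated along axis 0 (the compression direction).
    return np.gradient(displacement_z, voxel_size_mm, axis=0)

# Synthetic example on a 1 mm grid: a hard inclusion deforms less than the
# soft background, so it appears as a region of lower strain magnitude.
z = np.arange(100, dtype=float).reshape(-1, 1, 1)
u_z = -0.02 * z * np.ones((100, 64, 64))          # uniform 2% compression
u_z[40:60, 24:40, 24:40] *= 0.3                   # stiff inclusion moves less
strain = axial_strain(u_z)
print(strain[50, 32, 32], strain[10, 32, 32])     # inclusion vs background
```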

  5. Algorithm of pulmonary emphysema extraction using thoracic 3D CT images

    NASA Astrophysics Data System (ADS)

    Saita, Shinsuke; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Ohmatsu, Hironobu; Tominaga, Keigo; Eguchi, Kenji; Moriyama, Noriyuki

    2007-03-01

    Recently, due to aging and smoking, the number of emphysema patients has been increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desirable. We describe a quantitative algorithm for extracting emphysematous lesions and quantitatively evaluating their distribution patterns using low-dose thoracic 3-D CT images. The algorithm identifies lung anatomies and extracts low attenuation areas (LAA) as emphysematous lesion candidates. Applying the algorithm to thoracic 3-D CT images and to follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.
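
    Low attenuation areas are commonly extracted as voxels below a fixed Hounsfield threshold within the lung mask; the sketch below uses an assumed -950 HU cutoff (not necessarily the authors' value) and reports the result as a percentage of lung volume.

```python
# Minimal sketch of low attenuation area (LAA) extraction: within a given
# lung mask, voxels below an attenuation threshold are flagged as candidate
# emphysematous lesions and summarized as a percentage (LAA%). The -950 HU
# cutoff is a commonly used value, assumed here, not taken from the paper.
import numpy as np

def laa_percentage(ct_hu, lung_mask, threshold_hu=-950):
    """Fraction of lung voxels below threshold_hu, as a percentage."""
    lung_voxels = ct_hu[lung_mask]
    if lung_voxels.size == 0:
        return 0.0
    return 100.0 * np.count_nonzero(lung_voxels < threshold_hu) / lung_voxels.size

# Toy usage with a synthetic volume and mask.
ct = np.full((40, 64, 64), -850, dtype=np.int16)
ct[10:20, 20:40, 20:40] = -980                     # emphysema-like region
mask = np.zeros_like(ct, dtype=bool)
mask[5:35, 10:54, 10:54] = True
print(f"LAA% = {laa_percentage(ct, mask):.1f}")
```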

  6. Computation of tooth axes of existent and missing teeth from 3D CT images.

    PubMed

    Wang, Yang; Wu, Lin; Guo, Huayan; Qiu, Tiantian; Huang, Yuanliang; Lin, Bin; Wang, Lisheng

    2015-12-01

    The orientations of tooth axes are important quantitative information used in dental diagnosis and surgery planning. However, their computation is a complex problem, and existing methods have their respective limitations. This paper proposes new methods to compute 3D tooth axes from 3D CT images for existent teeth with a single root or multiple roots, and to estimate 3D tooth axes from 3D CT images for missing teeth. The tooth axis of a single-root tooth is determined by segmenting the pulp cavity of the tooth and computing the principal direction of the pulp cavity, while the estimation of the tooth axes of missing teeth is modeled as an interpolation problem of quaternions along a 3D curve. The proposed methods can either avoid the difficult tooth segmentation problem or improve on the limitations of existing methods. Their effectiveness and practicality are demonstrated by experimental results on different clinical 3D CT images.
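
    For the single-root case, the principal direction of a segmented pulp cavity can be obtained from the eigen-decomposition of its voxel-coordinate covariance; the sketch below assumes a binary pulp-cavity mask is already available and does not reproduce the quaternion interpolation used for missing teeth.

```python
# Minimal sketch of the single-root case: given a binary mask of the
# segmented pulp cavity, the tooth axis is estimated as the principal
# direction (largest-eigenvalue eigenvector) of the voxel coordinates.
import numpy as np

def principal_axis(mask, spacing=(1.0, 1.0, 1.0)):
    """mask: 3D boolean array. Returns (centroid, unit axis) in mm."""
    coords = np.argwhere(mask) * np.asarray(spacing)   # voxel -> mm
    centroid = coords.mean(axis=0)
    cov = np.cov((coords - centroid).T)                # 3x3 covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, np.argmax(eigvals)]              # dominant direction
    return centroid, axis / np.linalg.norm(axis)

# Toy usage: an elongated blob along the first (z) index axis should yield
# an axis close to (1, 0, 0) in (z, y, x) order.
mask = np.zeros((60, 30, 30), dtype=bool)
mask[10:50, 13:17, 13:17] = True
centroid, axis = principal_axis(mask, spacing=(0.2, 0.2, 0.2))
print(centroid, axis)
```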

  7. In vivo 3D PIXE-micron-CT imaging of Drosophila melanogaster using a contrast agent

    NASA Astrophysics Data System (ADS)

    Matsuyama, Shigeo; Hamada, Naoki; Ishii, Keizo; Nozawa, Yuichiro; Ohkura, Satoru; Terakawa, Atsuki; Hatori, Yoshinobu; Fujiki, Kota; Fujiwara, Mitsuhiro; Toyama, Sho

    2015-04-01

    In this study, we developed a three-dimensional (3D) computed tomography (CT) in vivo imaging system for imaging small insects with micrometer resolution. The 3D CT imaging system, referred to as 3D PIXE-micron-CT (PIXEμCT), uses characteristic X-rays produced by ion microbeam bombardment of a metal target. PIXEμCT was used to observe the body organs and internal structure of a living Drosophila melanogaster. Although the organs of the thorax were clearly imaged, the digestive organs in the abdominal cavity could not initially be clearly discerned, with the exception of the rectum and the Malpighian tubule. To enhance the abdominal images, a barium sulfate powder radiocontrast agent was added. For the first time, 3D images of the ventriculus of a living D. melanogaster were obtained. Our results showed that PIXEμCT can provide in vivo 3D-CT images that correctly reflect the structure of individual living organs, which is expected to be very useful in biological research.

  8. Accuracy of volume measurement using 3D ultrasound and development of CT-3D US image fusion algorithm for prostate cancer radiotherapy

    SciTech Connect

    Baek, Jihye; Huh, Jangyoung; Hyun An, So; Oh, Yoonjin; Kim, Myungsoo; Kim, DongYoung; Chung, Kwangzoo; Cho, Sungho; Lee, Rena

    2013-02-15

    Purpose: To evaluate the accuracy of measuring volumes using three-dimensional ultrasound (3D US), and to verify the feasibility of replacing CT-MR fusion images with CT-3D US in radiotherapy treatment planning. Methods: Phantoms, consisting of water, contrast agent, and agarose, were manufactured. The volume was measured using 3D US, CT, and MR devices. CT-3D US and MR-3D US image fusion software was developed using the Insight Toolkit library in order to acquire three-dimensional fusion images. The quality of the image fusion was evaluated using metric values and fusion images. Results: Volume measurement using 3D US shows a 2.8 ± 1.5% error, compared with a 4.4 ± 3.0% error for CT and a 3.1 ± 2.0% error for MR. The results imply that volume measurement using 3D US devices has an accuracy similar to that of CT and MR. Three-dimensional image fusion of CT-3D US and MR-3D US was successfully performed using phantom images. Moreover, MR-3D US image fusion was performed using human bladder images. Conclusions: 3D US could be used in the volume measurement of human bladders and prostates. CT-3D US image fusion could be used for monitoring the target position in each fraction of external beam radiation therapy. Moreover, the feasibility of replacing CT-MR image fusion with CT-3D US in radiotherapy treatment planning was verified.
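
    The volume-accuracy comparison reduces to counting voxels in a segmentation mask and comparing against the known phantom volume; the sketch below uses an illustrative spherical phantom rather than the study's water/agarose phantoms.

```python
# Minimal sketch of the volume-accuracy evaluation: the volume of a
# segmented structure is the voxel count times the voxel volume, and the
# percentage error is taken against the known phantom volume. Numbers and
# names here are illustrative, not the study's data.
import numpy as np

def segmented_volume_ml(mask, spacing_mm):
    """mask: boolean array; spacing_mm: (dz, dy, dx). Returns volume in mL."""
    voxel_volume_mm3 = float(np.prod(spacing_mm))
    return np.count_nonzero(mask) * voxel_volume_mm3 / 1000.0  # mm^3 -> mL

def percent_error(measured_ml, true_ml):
    return 100.0 * abs(measured_ml - true_ml) / true_ml

# Toy usage: a 20 mm radius sphere on a 0.5 mm grid vs its analytic volume.
grid = np.indices((100, 100, 100)) * 0.5                  # coordinates in mm
dist = np.sqrt(((grid - 25.0) ** 2).sum(axis=0))
mask = dist <= 20.0
true_ml = 4.0 / 3.0 * np.pi * 20.0 ** 3 / 1000.0
print(percent_error(segmented_volume_ml(mask, (0.5, 0.5, 0.5)), true_ml))
```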

  9. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3D imaging for libraries and museums. (LRW)

  11. 3D CT-Video Fusion for Image-Guided Bronchoscopy

    PubMed Central

    Higgins, William E.; Helferty, James P.; Lu, Kongkuo; Merritt, Scott A.; Rai, Lav; Yu, Kun-Chang

    2008-01-01

    Bronchoscopic biopsy of the central-chest lymph nodes is an important step for lung-cancer staging. Before bronchoscopy, the physician first visually assesses a patient’s three-dimensional (3D) computed tomography (CT) chest scan to identify suspect lymph-node sites. Next, during bronchoscopy, the physician guides the bronchoscope to each desired lymph-node site. Unfortunately, the physician has no link between the 3D CT image data and the live video stream provided during bronchoscopy. Thus, the physician must essentially perform biopsy blindly, and the skill levels between different physicians differ greatly. We describe an approach that enables synergistic fusion between the 3D CT data and the bronchoscopic video. Both the integrated planning and guidance system and the internal CT-video registration and fusion methods are described. Phantom, animal, and human studies illustrate the efficacy of the methods. PMID:18096365

  12. 3D CT-video fusion for image-guided bronchoscopy.

    PubMed

    Higgins, William E; Helferty, James P; Lu, Kongkuo; Merritt, Scott A; Rai, Lav; Yu, Kun-Chang

    2008-04-01

    Bronchoscopic biopsy of the central-chest lymph nodes is an important step for lung-cancer staging. Before bronchoscopy, the physician first visually assesses a patient's three-dimensional (3D) computed tomography (CT) chest scan to identify suspect lymph-node sites. Next, during bronchoscopy, the physician guides the bronchoscope to each desired lymph-node site. Unfortunately, the physician has no link between the 3D CT image data and the live video stream provided during bronchoscopy. Thus, the physician must essentially perform biopsy blindly, and the skill levels between different physicians differ greatly. We describe an approach that enables synergistic fusion between the 3D CT data and the bronchoscopic video. Both the integrated planning and guidance system and the internal CT-video registration and fusion methods are described. Phantom, animal, and human studies illustrate the efficacy of the methods.

  13. Technical note: cone beam CT imaging for 3D image guided brachytherapy for gynecological HDR brachytherapy.

    PubMed

    Reniers, Brigitte; Verhaegen, Frank

    2011-05-01

    This paper focuses on a novel image guidance technique for gynecological brachytherapy treatment. The present standard technique is orthogonal x-ray imaging to reconstruct the 3D position of the applicator when the availability of CT or MR is limited. Our purpose is to introduce 3D planning in the brachytherapy suite using a cone beam CT (CBCT) scanner dedicated to brachytherapy. This would avoid moving the patient between the imaging and treatment procedures, which may cause applicator motion. It could be used to replace the x-ray images or to verify the treatment position immediately prior to dose delivery. The sources of CBCT imaging artifacts in the case of brachytherapy were identified and removed where possible. The image quality was further improved by modifying the x-ray tube voltage and the compensator bowtie filter, and by optimizing technical parameters such as the detector gain or tube current. The image quality was adequate to reconstruct the applicators in the treatment planning system. The positioning of points A and the localization of the organ-at-risk (OAR) ICRU points are easily achieved. This allows identification of cases where the rectum has moved with respect to the ICRU point, which would require asymmetrical source loading. Better visualization is a first step toward better sparing of the OAR. Treatment planning for gynecological brachytherapy is aided by CBCT images. CBCT presents advantages over CT: acquisition in the treatment room and in the treatment position, owing to the larger clearance of the CBCT, thereby reducing the problems associated with moving patients between rooms.

  14. Inter-plane artifact suppression in tomosynthesis using 3D CT image data

    PubMed Central

    2011-01-01

    Background Despite its superb lateral resolution, flat-panel-detector (FPD) based tomosynthesis suffers from low contrast and inter-plane artifacts caused by incomplete cancellation of the projection components stemming from outside the focal plane. The incomplete cancellation of the projection components, mostly due to the limited scan angle in the conventional tomosynthesis scan geometry, often makes the image contrast too low to differentiate the malignant tissues from the background tissues with confidence. Methods In this paper, we propose a new method to suppress the inter-plane artifacts in FPD-based tomosynthesis. If 3D whole volume CT images are available before the tomosynthesis scan, the CT image data can be incorporated into the tomosynthesis image reconstruction to suppress the inter-plane artifacts, hence, improving the image contrast. In the proposed technique, the projection components stemming from outside the region-of-interest (ROI) are subtracted from the measured tomosynthesis projection data to suppress the inter-plane artifacts. The projection components stemming from outside the ROI are calculated from the 3D whole volume CT images which usually have lower lateral resolution than the tomosynthesis images. The tomosynthesis images are reconstructed from the subtracted projection data which account for the x-ray attenuation through the ROI. After verifying the proposed method by simulation, we have performed both CT scan and tomosynthesis scan on a phantom and a sacrificed rat using a FPD-based micro-CT. Results We have measured contrast-to-noise ratio (CNR) from the tomosynthesis images which is an indicator of the residual inter-plane artifacts on the focal-plane image. In both cases of the simulation and experimental imaging studies of the contrast evaluating phantom, CNRs have been significantly improved by the proposed method. In the rat imaging also, we have observed better visual contrast from the tomosynthesis images reconstructed by

  15. Inter-plane artifact suppression in tomosynthesis using 3D CT image data.

    PubMed

    Kim, Jae G; Jin, Seung O; Cho, Min H; Lee, Soo Y

    2011-12-10

    Despite its superb lateral resolution, flat-panel-detector (FPD) based tomosynthesis suffers from low contrast and inter-plane artifacts caused by incomplete cancellation of the projection components stemming from outside the focal plane. The incomplete cancellation of the projection components, mostly due to the limited scan angle in the conventional tomosynthesis scan geometry, often makes the image contrast too low to differentiate the malignant tissues from the background tissues with confidence. In this paper, we propose a new method to suppress the inter-plane artifacts in FPD-based tomosynthesis. If 3D whole volume CT images are available before the tomosynthesis scan, the CT image data can be incorporated into the tomosynthesis image reconstruction to suppress the inter-plane artifacts, hence, improving the image contrast. In the proposed technique, the projection components stemming from outside the region-of-interest (ROI) are subtracted from the measured tomosynthesis projection data to suppress the inter-plane artifacts. The projection components stemming from outside the ROI are calculated from the 3D whole volume CT images which usually have lower lateral resolution than the tomosynthesis images. The tomosynthesis images are reconstructed from the subtracted projection data which account for the x-ray attenuation through the ROI. After verifying the proposed method by simulation, we have performed both CT scan and tomosynthesis scan on a phantom and a sacrificed rat using a FPD-based micro-CT. We have measured contrast-to-noise ratio (CNR) from the tomosynthesis images which is an indicator of the residual inter-plane artifacts on the focal-plane image. In both cases of the simulation and experimental imaging studies of the contrast evaluating phantom, CNRs have been significantly improved by the proposed method. In the rat imaging also, we have observed better visual contrast from the tomosynthesis images reconstructed by the proposed method. The
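
    The core subtraction idea can be sketched under a deliberately simplified parallel-beam geometry, with one-axis sums standing in for the real FPD projection model: the prior CT volume outside the ROI is forward-projected and removed from the measured projection.

```python
# Minimal sketch of the subtraction idea, under a deliberately simplified
# parallel-beam assumption (line integrals along one axis); the actual FPD
# tomosynthesis geometry and reconstruction are not modeled here.
import numpy as np

def corrected_projection(measured_proj, prior_ct_mu, roi_slices, axis=0):
    """Subtract the projection of everything outside the ROI.

    measured_proj : 2D projection (line integrals of attenuation).
    prior_ct_mu   : 3D attenuation volume from the prior CT, co-registered.
    roi_slices    : slice along `axis` bounding the region of interest.
    """
    outside = prior_ct_mu.copy()
    index = [slice(None)] * prior_ct_mu.ndim
    index[axis] = roi_slices
    outside[tuple(index)] = 0.0                     # zero out the ROI
    return measured_proj - outside.sum(axis=axis)   # keep only ROI component

# Toy usage: with a perfect prior, the corrected projection equals the
# projection of the ROI alone.
mu = np.random.rand(64, 32, 32)
full_proj = mu.sum(axis=0)
roi = slice(20, 44)
print(np.allclose(corrected_projection(full_proj, mu, roi), mu[roi].sum(axis=0)))
```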

  16. TBIdoc: 3D content-based CT image retrieval system for traumatic brain injury

    NASA Astrophysics Data System (ADS)

    Li, Shimiao; Gong, Tianxia; Wang, Jie; Liu, Ruizhe; Tan, Chew Lim; Leong, Tze Yun; Pang, Boon Chuan; Lim, C. C. Tchoyoson; Lee, Cheng Kiang; Tian, Qi; Zhang, Zhuo

    2010-03-01

    Traumatic brain injury (TBI) is a major cause of death and disability. Computed tomography (CT) scanning is widely used in the diagnosis of TBI. Nowadays, a large amount of TBI CT data is stored in hospital radiology departments. Such data and the associated patient information contain valuable information for clinical diagnosis and outcome prediction. However, current hospital database systems do not provide an efficient and intuitive tool for doctors to search for cases relevant to the study at hand. In this paper, we present the TBIdoc system: a content-based image retrieval (CBIR) system which works on TBI CT images. In this web-based system, the user can query by uploading CT image slices from one study; the retrieval result is a list of TBI cases ranked according to their 3D visual similarity to the query case. Specifically, TBI CT images often present diffuse or focal lesions. In the TBIdoc system, these pathological image features are represented as bin-based binary feature vectors. We use the Jaccard-Needham measure as the similarity measurement. Based on these, we propose a 3D similarity measure for computing the similarity score between two series of CT slices. nDCG is used to evaluate the system performance, which shows that the system produces satisfactory retrieval results. The system is expected to improve current hospital data management in TBI and to give better support to the clinical decision-making process. It may also contribute to computer-aided education in TBI.
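
    The Jaccard-Needham measure named above is the Jaccard coefficient between binary feature vectors; the sketch below shows the metric and a toy ranking step, while the bin-based lesion features and the inter-slice 3D aggregation are not reproduced.

```python
# Minimal sketch of the similarity measurement named above: the Jaccard
# coefficient between two bin-based binary feature vectors.
import numpy as np

def jaccard_similarity(a, b):
    """a, b: boolean feature vectors of equal length."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.count_nonzero(a | b)
    if union == 0:
        return 1.0                     # both empty: treat as identical
    return np.count_nonzero(a & b) / union

query = np.array([1, 0, 1, 1, 0, 0, 1], dtype=bool)
case_a = np.array([1, 0, 1, 0, 0, 0, 1], dtype=bool)
case_b = np.array([0, 1, 0, 0, 1, 1, 0], dtype=bool)
# Ranking candidate cases by similarity to the query, as a CBIR system would.
print(sorted({"case_a": jaccard_similarity(query, case_a),
              "case_b": jaccard_similarity(query, case_b)}.items(),
             key=lambda kv: kv[1], reverse=True))
```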

  17. Improvement of image quality and dose management in CT fluoroscopy by iterative 3D image reconstruction.

    PubMed

    Grosser, Oliver S; Wybranski, Christian; Kupitz, Dennis; Powerski, Maciej; Mohnike, Konrad; Pech, Maciej; Amthauer, Holger; Ricke, Jens

    2017-09-01

    The objective of this study was to assess the influence of an iterative CT reconstruction algorithm (IA), newly available for CT fluoroscopy (CTF), on image noise, readers' confidence and effective dose compared to filtered back projection (FBP). Data from 165 patients (FBP/IA = 82/74) with CTF in the thorax, abdomen and pelvis were included. Noise was analysed in a large-diameter vessel. The impact of the reconstruction and of the variables influencing noise and effective dose (e.g. X-ray tube current I) was analysed by ANOVA and a pairwise t-test with Bonferroni-Holm correction. Noise and readers' confidence were evaluated by three readers. Noise was significantly influenced by reconstruction, I, body region and circumference (all p ≤ 0.0002). IA reduced the noise significantly compared to FBP (p = 0.02). The effect varied across body regions and circumferences (p ≤ 0.001). The effective dose was influenced by the reconstruction, body region, interventional procedure and I (all p ≤ 0.02). The inter-rater reliability for noise and readers' confidence was good (W ≥ 0.75, p < 0.0001). Noise and readers' confidence were significantly better with AIDR 3D compared to FBP (p ≤ 0.03). Overall, IA yielded a significant reduction of the median effective dose. CTF reconstruction with IA showed a significant reduction in noise and effective dose while readers' confidence increased. • CTF is performed for image guidance in interventional radiology. • Patient exposure was estimated from the DLP documented by the CT. • Iterative CT reconstruction is appropriate for reducing image noise in CTF. • Using iterative CT reconstruction, the effective dose was significantly reduced in abdominal interventions.

  18. Evaluation of accuracy of 3D reconstruction images using multi-detector CT and cone-beam CT

    PubMed Central

    Kim, Mija; YI, Won-Jin; Heo, Min-Suk; Lee, Sam-Sun; Choi, Soon-Chul

    2012-01-01

    Purpose This study was performed to determine the accuracy of linear measurements on three-dimensional (3D) images using multi-detector computed tomography (MDCT) and cone-beam computed tomography (CBCT). Materials and Methods MDCT and CBCT were performed on 24 dry skulls. Twenty-one measurements were taken on the dry skulls using a digital caliper. Both types of CT data were imported into OnDemand software, and identification of landmarks on the 3D surface rendering images and calculation of linear measurements were performed. Reproducibility of the measurements was assessed using repeated measures ANOVA and ICC, and the measurements were statistically compared using a Student t-test. Results All assessments, both the direct measurements and the image-based measurements on the 3D CT surface rendering images using MDCT and CBCT, showed no statistically significant difference in the ICC examination. The measurements showed no differences between the direct measurements on the dry skulls and the image-based measurements on the 3D CT surface rendering images (P>.05). Conclusion Three-dimensional reconstructed surface rendering images using MDCT and CBCT would be appropriate for 3D measurements. PMID:22474645

  19. 3D-SIFT-Flow for atlas-based CT liver image segmentation

    SciTech Connect

    Xu, Yan; Xu, Chenchao; Kuang, Xiao; Wang, Hongkai; Chang, Eric I-Chao; Huang, Weimin; Fan, Yubo

    2016-05-15

    Purpose: In this paper, the authors proposed a new 3D registration algorithm, 3D scale-invariant feature transform (SIFT)-Flow, for multiatlas-based liver segmentation in computed tomography (CT) images. Methods: In the registration work, the authors developed a new registration method that takes advantage of dense correspondence using the informative and robust SIFT feature. The authors computed the dense SIFT features for the source image and the target image and designed an objective function to obtain the correspondence between these two images. The labeling of the source image was then mapped to the target image according to this correspondence, resulting in accurate segmentation. In the fusion work, the 2D-based nonparametric label transfer method was extended to 3D for fusing the registered 3D atlases. Results: Compared with existing registration algorithms, 3D-SIFT-Flow has a particular advantage in matching anatomical structures (such as the liver) that exhibit large variation/deformation. The authors observed consistent improvement over widely adopted state-of-the-art registration methods such as ELASTIX, ANTS, and multiatlas fusion methods such as joint label fusion. Experimental results of liver segmentation on the MICCAI 2007 Grand Challenge are encouraging, e.g., a Dice overlap ratio of 96.27% ± 0.96% by our method compared with the previous state-of-the-art result of 94.90% ± 2.86%. Conclusions: Experimental results show that 3D-SIFT-Flow is robust for segmenting the liver from CT images, which has large tissue deformation and blurry boundaries, and that 3D label transfer is effective and efficient for improving the registration accuracy.

  20. 3D-SIFT-Flow for atlas-based CT liver image segmentation.

    PubMed

    Xu, Yan; Xu, Chenchao; Kuang, Xiao; Wang, Hongkai; Chang, Eric I-Chao; Huang, Weimin; Fan, Yubo

    2016-05-01

    In this paper, the authors proposed a new 3D registration algorithm, 3D scale-invariant feature transform (SIFT)-Flow, for multiatlas-based liver segmentation in computed tomography (CT) images. In the registration work, the authors developed a new registration method that takes advantage of dense correspondence using the informative and robust SIFT feature. The authors computed the dense SIFT features for the source image and the target image and designed an objective function to obtain the correspondence between these two images. The labeling of the source image was then mapped to the target image according to this correspondence, resulting in accurate segmentation. In the fusion work, the 2D-based nonparametric label transfer method was extended to 3D for fusing the registered 3D atlases. Compared with existing registration algorithms, 3D-SIFT-Flow has a particular advantage in matching anatomical structures (such as the liver) that exhibit large variation/deformation. The authors observed consistent improvement over widely adopted state-of-the-art registration methods such as ELASTIX, ANTS, and multiatlas fusion methods such as joint label fusion. Experimental results of liver segmentation on the MICCAI 2007 Grand Challenge are encouraging, e.g., a Dice overlap ratio of 96.27% ± 0.96% by our method compared with the previous state-of-the-art result of 94.90% ± 2.86%. Experimental results show that 3D-SIFT-Flow is robust for segmenting the liver from CT images, which has large tissue deformation and blurry boundaries, and that 3D label transfer is effective and efficient for improving the registration accuracy.
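
    The reported accuracy figure is a Dice overlap ratio; the sketch below shows that metric on binary masks (the 3D-SIFT-Flow registration and label fusion themselves are not reproduced, and the masks are synthetic).

```python
# Minimal sketch of the evaluation metric quoted above: the Dice overlap
# ratio between a segmented liver mask and its reference.
import numpy as np

def dice_coefficient(segmentation, reference):
    """Dice overlap of two boolean masks of equal shape, in [0, 1]."""
    seg, ref = np.asarray(segmentation, bool), np.asarray(reference, bool)
    denom = seg.sum() + ref.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(seg, ref).sum() / denom

# Toy usage: two overlapping boxes.
ref = np.zeros((50, 50, 50), bool)
ref[10:40, 10:40, 10:40] = True
seg = np.zeros_like(ref)
seg[12:42, 10:40, 10:40] = True
print(f"Dice = {dice_coefficient(seg, ref):.4f}")
```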

  1. 3D segmentation and image annotation for quantitative diagnosis in lung CT images with pulmonary lesions

    NASA Astrophysics Data System (ADS)

    Li, Suo; Zhu, Yanjie; Sun, Jianyong; Zhang, Jianguo

    2013-03-01

    Pulmonary nodules and ground glass opacities are highly significant findings in high-resolution computed tomography (HRCT) of patients with pulmonary lesions. The appearance of pulmonary nodules and ground glass opacities is related to different lung diseases. Pertinent segmentation methods and quantitative analysis, tailored to the characteristics of the lesions, are helpful for controlling and treating diseases at an earlier and potentially more curable stage. Currently, most studies have focused on two-dimensional quantitative analysis of these kinds of diseases. Compared to two-dimensional images, three-dimensional quantitative analysis can take full advantage of the isotropic image data acquired by thin-slice HRCT and has better quantitative precision for clinical diagnosis. This presentation describes a computer-aided diagnosis component that segments the 3D disease areas of nodules and ground glass opacities in lung CT images and uses AIML (Annotation and Image Markup Language) to annotate the segmented 3D pulmonary lesions with quantitative measurement information, which may provide more features and information to radiologists in clinical diagnosis.

  2. [Design of a 3D afterloading brachytherapy simulation system based on CT images].

    PubMed

    Yu, Hui; Xu, Hai-Rong; Zhang, Shu-Xu; Shi, Yu-Sheng; Qian, Jian-Yang

    2008-03-01

    To design a new afterloading brachytherapy simulation system based on CT images. This paper mainly focuses on an anthropomorphic pelvic phantom fitted with three applicator pipelines and a nasopharyngeal carcinoma case fitted with two pipelines. Microsoft Visual C++ was used to parse the CT images, to reconstruct the pipelines in the body of the phantom or the patient, and to give the three-dimensional coordinates of the dwell points. The dose distribution displayed on the CT images was computed using dose calculation methods for a single afterloading source together with dose optimization methods. VTK technology was used for the 3D display in the system. According to the reference points specified by doctors, the system can inversely calculate the dwell times of the dwell points in the pipelines and obtain a satisfactory dose distribution on the CT images. Besides, it can show the 3D relationship between the dose volume and the normal tissues. This system overcomes some deficiencies of the 2D afterloading brachytherapy simulation systems based on X-ray films which are widely used in China. It provides a 3D display of the dose distribution for clinicians. At present, the system is being tested in the clinic.

  3. Imaging Properties of 3D Printed Materials: Multi-Energy CT of Filament Polymers.

    PubMed

    Shin, James; Sandhu, Ranjit S; Shih, George

    2017-02-06

    Clinical applications of 3D printing are increasingly commonplace, as is the frequency with which 3D printed objects appear in imaging studies. Although there is general familiarity with the imaging appearance of the traditional materials comprising common surgical hardware and medical devices, comparatively less is known about the appearance of the 3D printing materials available in the consumer market. This work, detailing the CT appearance of a selected number of common filament polymer classes, is an initial effort to catalog these data and to provide for accurate interpretation of imaging studies that incidentally or intentionally include fabricated objects. Furthermore, this information can inform the design of image-realistic tissue-mimicking phantoms for a variety of applications, with clear candidate material analogs for bone, soft tissue, water, and fat attenuation.

  4. Segmentation of brain blood vessels using projections in 3-D CT angiography images.

    PubMed

    Babin, Danilo; Vansteenkiste, Ewout; Pizurica, Aleksandra; Philips, Wilfried

    2011-01-01

    Segmenting cerebral blood vessels is of great importance in diagnostic and clinical applications, especially in quantitative diagnostics and surgery on aneurysms and arteriovenous malformations (AVM). Segmentation of CT angiography images requires algorithms that are robust to high-intensity noise while being able to segment low-contrast vessels. Because of this, most of the existing methods require user intervention. In this work we propose an automatic algorithm for efficient segmentation of 3-D CT angiography images of cerebral blood vessels. Our method is robust to high-intensity noise and is able to accurately segment blood vessels with a wide range of luminance values, as well as low-contrast vessels.

  5. Computer-aided diagnosis for osteoporosis using chest 3D CT images

    NASA Astrophysics Data System (ADS)

    Yoneda, K.; Matsuhiro, M.; Suzuki, H.; Kawata, Y.; Niki, N.; Nakano, Y.; Ohmatsu, H.; Kusumoto, M.; Tsuchida, T.; Eguchi, K.; Kaneko, M.

    2016-03-01

    About 13 million people in Japan suffer from osteoporosis, making it one of the problems of an aging society. Early detection and treatment are necessary to prevent osteoporosis. Multi-slice CT technology has been improving three-dimensional (3-D) image analysis, with higher body-axis resolution and shorter scan times. The 3-D image analysis of multi-slice CT images of the thoracic vertebrae can support the diagnosis of osteoporosis and, at the same time, can be used for lung cancer diagnosis, which may lead to earlier detection. We develop an automatic extraction and partitioning algorithm for the spinal column based on the analysis of vertebral body structure, and an analysis algorithm for the vertebral body using shape analysis and bone density measurement for the diagnosis of osteoporosis. The osteoporosis diagnosis support system achieved a high extraction rate of the thoracic vertebrae in both normal-dose and low-dose scans.

  6. SU-E-J-209: Verification of 3D Surface Registration Between Stereograms and CT Images

    SciTech Connect

    Han, T; Gifford, K; Smith, B; Salehpour, M

    2014-06-01

    Purpose: Stereography can provide a visualization of the skin surface for radiation therapy patients. The aim of this study was to verify the registration algorithm of a commercial image analysis software package, 3dMDVultus, for the fusion of stereograms and CT images. Methods: CT and stereographic scans were acquired of a head phantom and a deformable phantom. CT images were imported into 3dMDVultus and the surface contours were generated by threshold segmentation. Stereograms were reconstructed in 3dMDVultus. The resulting surfaces were registered with the Vultus algorithm and then exported to in-house registration software and compared with four algorithms: rigid, affine, non-rigid iterative closest point (ICP) and b-spline. The RMS error (root-mean-square of the residual surface point distances) between the registered CT and stereogram surfaces was calculated and analyzed. Results: For the head phantom, the maximum RMS error between the registered CT surface and the stereogram was 6.6 mm for the Vultus algorithm, whereas the mean RMS error was 0.7 mm. For the deformable phantom, the maximum RMS error was 16.2 mm for the Vultus algorithm, whereas the mean RMS error was 4.4 mm. Non-rigid ICP demonstrated the best registration accuracy, with mean RMS errors within 1 mm for both phantoms. Conclusion: The accuracy of the registration algorithm in 3dMDVultus was verified; its RMS error exceeded 2 mm for the deformable case. Non-rigid ICP and b-spline algorithms improve the registration accuracy for both phantoms, especially the deformable one. For patients whose body habitus deforms during radiation therapy, more advanced non-rigid algorithms need to be used.
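
    The RMS error reported above can be computed as the root-mean-square of nearest-neighbour distances between the two registered surface point clouds; the sketch below illustrates the metric only and is not the 3dMDVultus or in-house registration code.

```python
# Minimal sketch of the reported error metric: root-mean-square of the
# nearest-neighbour distances from one registered surface point cloud to
# the other.
import numpy as np
from scipy.spatial import cKDTree

def rms_surface_distance(points_a, points_b):
    """points_a, points_b: (N, 3) arrays of surface points in mm."""
    distances, _ = cKDTree(points_b).query(points_a)   # nearest neighbours
    return float(np.sqrt(np.mean(distances ** 2)))

# Toy usage: a point cloud versus a slightly perturbed copy of itself.
rng = np.random.default_rng(0)
surface_ct = rng.uniform(0, 100, size=(2000, 3))
surface_stereo = surface_ct + rng.normal(scale=0.5, size=(2000, 3))
print(f"RMS = {rms_surface_distance(surface_stereo, surface_ct):.2f} mm")
```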

  7. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation

    PubMed Central

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-01-01

    Cardiac computed tomography (CCT) is a reliable and accurate tool for the diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low-dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated in a clinical study of 14 patients. Its performance was compared to 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual, quantitative and qualitative assessment results show that the proposed approach leads to effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard-dose CT (SDCT) images. PMID:26980176

  8. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation.

    PubMed

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-03-16

    Cardiac computed tomography (CCT) is a reliable and accurate tool for the diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low-dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated in a clinical study of 14 patients. Its performance was compared to 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual, quantitative and qualitative assessment results show that the proposed approach leads to effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard-dose CT (SDCT) images.

  9. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation

    NASA Astrophysics Data System (ADS)

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-03-01

    Cardiac computed tomography (CCT) is a reliable and accurate tool for the diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low-dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated in a clinical study of 14 patients. Its performance was compared to 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual, quantitative and qualitative assessment results show that the proposed approach leads to effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard-dose CT (SDCT) images.

  10. 3D visualization of biomedical CT images based on OpenGL and VRML techniques

    NASA Astrophysics Data System (ADS)

    Yin, Meng; Luo, Qingming; Xia, Fuhua

    2002-04-01

    Current high-performance computers and advanced image processing capabilities have allowed three-dimensional visualization of biomedical computed tomographic (CT) images to greatly facilitate research in biomedical engineering. To keep pace with Internet-based technology, where 3D data are typically stored and processed on powerful servers accessible via TCP/IP, isosurface results should be broadly applicable in medical visualization. Furthermore, this project is intended as a future part of the PACS system our lab is working on. In this system we therefore use the 3D file format VRML 2.0, which is used through the Web interface for manipulating 3D models. In this program we generate and modify triangular isosurface meshes with the marching cubes algorithm. We then use OpenGL and MFC techniques to render the isosurface and manipulate the voxel data. This software is well suited to the visualization of volumetric data. The drawbacks are that 3D image processing on personal computers is rather slow and the set of tools for 3D visualization is limited. However, these limitations have not affected the applicability of this platform for the tasks needed in elementary laboratory experiments or data preprocessing.
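
    The isosurface-generation step can be sketched with a marching cubes call; scikit-image is used here as a stand-in for the OpenGL/MFC/VTK pipeline described above, and the scalar field and threshold are illustrative.

```python
# Minimal sketch of the isosurface-generation step: marching cubes on a
# volume at a chosen threshold, producing a triangular mesh that could then
# be exported (e.g. as VRML or STL) or rendered with VTK/OpenGL.
import numpy as np
from skimage import measure

def isosurface(volume, level):
    """Return (vertices, faces) of the triangular mesh at the given level."""
    verts, faces, normals, values = measure.marching_cubes(volume, level=level)
    return verts, faces

# Toy usage: a sphere-like scalar field.
z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
volume = 1.0 - np.sqrt(x**2 + y**2 + z**2)
verts, faces = isosurface(volume, level=0.5)
print(verts.shape, faces.shape)
```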

  11. Automatic cerebrospinal fluid segmentation in non-contrast CT images using a 3D convolutional network

    NASA Astrophysics Data System (ADS)

    Patel, Ajay; van de Leemput, Sil C.; Prokop, Mathias; van Ginneken, Bram; Manniesing, Rashindra

    2017-03-01

    Segmentation of anatomical structures is fundamental in the development of computer aided diagnosis systems for cerebral pathologies. Manual annotations are laborious, time consuming and subject to human error and observer variability. Accurate quantification of cerebrospinal fluid (CSF) can be employed as a morphometric measure for diagnosis and patient outcome prediction. However, segmenting CSF in non-contrast CT images is complicated by low soft tissue contrast and image noise. In this paper we propose a state-of-the-art method using a multi-scale three-dimensional (3D) fully convolutional neural network (CNN) to automatically segment all CSF within the cranial cavity. The method is trained on a small dataset comprised of four manually annotated cerebral CT images. Quantitative evaluation of a separate test dataset of four images shows a mean Dice similarity coefficient of 0.87 ± 0.01 and a mean absolute volume difference of 4.77 ± 2.70%. The average prediction time was 68 seconds. Our method allows for fast and fully automated 3D segmentation of cerebral CSF in non-contrast CT, and shows promising results despite a limited amount of training data.

  12. Pore detection in Computed Tomography (CT) soil 3D images using singularity map analysis

    NASA Astrophysics Data System (ADS)

    Sotoca, Juan J. Martin; Tarquis, Ana M.; Saa Requejo, Antonio; Grau, Juan B.

    2016-04-01

    X-ray Computed Tomography (CT) images have significantly helped the study of the internal soil structure. This technique has two main advantages: 1) it is a non-invasive technique, i.e., it doesn't modify the internal soil structure, and 2) it provides good resolution. The major disadvantage is that these images sometimes have low contrast at the solid/pore interface. One of the main problems in analyzing soil structure through CT images is segmenting them into solid and pore space. To do so, we have different segmentation techniques at our disposal, mainly based on thresholding methods in which global or local thresholds are calculated to separate pore space from solid space. The aim of this presentation is to develop the fractal approach to soil structure using "singularity maps" and the "Concentration-Area (CA) method". We will establish an analogy between mineralization processes in ore deposits and morphogenesis processes in soils. Resulting from this analogy, a new 3D segmentation method is proposed, the "3D Singularity-CA" method. A comparison with traditional 3D segmentation methods will be performed to show the main differences among them.

  13. 3D SPECT/CT fusion using image data projection of bone SPECT onto 3D volume-rendered CT images: feasibility and clinical impact in the diagnosis of bone metastasis.

    PubMed

    Ogata, Yuji; Nakahara, Tadaki; Ode, Kenichi; Matsusaka, Yohji; Katagiri, Mari; Iwabuchi, Yu; Itoh, Kazunari; Ichimura, Akira; Jinzaki, Masahiro

    2017-05-01

    We developed a method of projecting bone SPECT image data onto 3D volume-rendered CT images for 3D SPECT/CT fusion. The aims of our study were to evaluate its feasibility and clinical usefulness. Whole-body bone scintigraphy (WB) and SPECT/CT scans were performed in 318 cancer patients using a dedicated SPECT/CT system. Volume data of bone SPECT and CT were fused to obtain 2D SPECT/CT images. To generate our 3D SPECT/CT images, colored voxel data of bone SPECT were projected onto the corresponding locations of the volume-rendered CT data after a semi-automatic bone extraction. The resultant 3D images were then blended with conventional volume-rendered CT images, making it possible to grasp the three-dimensional relationship between bone metabolism and anatomy. WB and SPECT (WB + SPECT), 2D SPECT/CT fusion, and 3D SPECT/CT fusion were evaluated by two independent reviewers in the diagnosis of bone metastasis. The inter-observer variability and diagnostic accuracy of these three image sets were investigated using a four-point diagnostic scale. Increased bone metabolism was found in 744 metastatic sites and 1002 benign changes. On a per-lesion basis, inter-observer agreement in the diagnosis of bone metastasis was 0.72 for WB + SPECT, 0.90 for 2D SPECT/CT, and 0.89 for 3D SPECT/CT. Receiver operating characteristic analyses of the diagnostic accuracy for bone metastasis showed that WB + SPECT, 2D SPECT/CT, and 3D SPECT/CT had areas under the curve of 0.800, 0.983, and 0.983 for reader 1, and 0.865, 0.992, and 0.993 for reader 2, respectively (WB + SPECT vs. 2D or 3D SPECT/CT, p < 0.001; 2D vs. 3D SPECT/CT, n.s.). The durations of interpretation of the WB + SPECT, 2D SPECT/CT, and 3D SPECT/CT images were 241 ± 75, 225 ± 73, and 182 ± 71 s for reader 1 and 207 ± 72, 190 ± 73, and 179 ± 73 s for reader 2, respectively. As a result, it took less time to read 3D SPECT/CT images than 2D SPECT/CT (p < 0.0001) or WB

  14. CT image artifacts from brachytherapy seed implants: A postprocessing 3D adaptive median filter

    SciTech Connect

    Basran, Parminder S.; Robertson, Andrew; Wells, Derek

    2011-02-15

    Purpose: To design a postprocessing 3D adaptive median filter that minimizes streak artifacts and improves soft-tissue contrast in postoperative CT images of brachytherapy seed implantations. Methods: The filter works by identifying voxels that are likely streaks and estimating a more representative voxel intensity, using voxel intensities in adjacent CT slices and applying a median filter over voxels not identified as seeds. Median values are computed over a 5x5x5 mm region of interest (ROI) within the CT volume. An acrylic phantom simulating a clinical seed implant arrangement and containing nonradioactive seeds was created. Low-contrast subvolumes of tissue-like material were also embedded in the phantom. Pre- and postprocessed image quality metrics were compared using the standard deviation of ROIs between the seeds, the CT numbers of the low-contrast ROIs embedded within the phantom, the signal-to-noise ratio (SNR), and the contrast-to-noise ratio (CNR) of the low-contrast ROIs. The method was demonstrated on a clinical postimplant CT dataset. Results: After the filter was applied, the standard deviation of CT values in streak artifact regions was significantly reduced from 76.5 to 7.2 HU. Within the observable low-contrast plugs, the mean of all ROI standard deviations was significantly reduced from 60.5 to 3.9 HU, SNR significantly increased from 2.3 to 22.4, and CNR significantly increased from 0.2 to 4.1 (all P<0.01). The mean CT number in the low-contrast plugs remained within 5 HU of the original values. Conclusion: An efficient postprocessing filter has been developed that does not require access to projection data and can be applied irrespective of CT scan parameters, provided the slice thickness and spacing are 3 mm or less.
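
    The general mechanism, replacing streak-like voxels with a local 3D median while leaving seed voxels untouched, can be sketched as follows; the seed and streak detection below are crude threshold stand-ins, not the authors' scheme, and the neighbourhood is defined in voxels rather than millimetres.

```python
# Minimal sketch of the general idea: replace voxels flagged as streak
# artifacts with the median of a 3D neighbourhood, while leaving voxels
# identified as seeds untouched. The streak/seed identification here is a
# crude threshold stand-in, not the authors' detection scheme.
import numpy as np
from scipy import ndimage

def adaptive_median_filter(ct_hu, seed_threshold=2000, streak_deviation=100,
                           size=5):
    """ct_hu: 3D CT volume in HU. Returns a filtered copy."""
    seeds = ct_hu >= seed_threshold                    # metal seed voxels
    median = ndimage.median_filter(ct_hu, size=size)   # 3D median (size^3)
    streaks = (np.abs(ct_hu - median) > streak_deviation) & ~seeds
    filtered = ct_hu.copy()
    filtered[streaks] = median[streaks]                # replace streaks only
    return filtered

# Toy usage on a synthetic volume with a bright "seed" and a streak-like line.
vol = np.full((40, 40, 40), 40.0)
vol[20, 20, 20] = 3000.0                               # seed
vol += np.random.normal(scale=5.0, size=vol.shape)
vol[20, :, 25] += 300.0                                # streak-like line
out = adaptive_median_filter(vol)
print(out[20, 20, 20] >= 2000, np.abs(out[20, 10, 25] - 40) < 50)
```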

  15. Segmentation of bone structures in 3D CT images based on continuous max-flow optimization

    NASA Astrophysics Data System (ADS)

    Pérez-Carrasco, J. A.; Acha-Piñero, B.; Serrano, C.

    2015-03-01

    In this paper an algorithm for the automatic segmentation of bone structures in 3D CT images has been implemented. Automatic segmentation of bone structures is of special interest to radiologists and surgeons when analyzing bone diseases or planning surgical interventions. The task is complicated because bones usually present intensities that overlap with those of surrounding tissues. This overlapping is mainly due to the composition of bone and to the presence of diseases such as osteoarthritis, osteoporosis, etc. Moreover, segmentation of bone structures is a very time-consuming task due to the 3D nature of the data. Usually, this segmentation is performed manually or with algorithms using simple techniques such as thresholding, which provide poor results. In this paper gray-level information and 3D statistical information are combined and used as input to a continuous max-flow algorithm. Twenty CT images have been tested and different coefficients have been computed to assess the performance of our implementation. Dice and sensitivity values above 0.91 and 0.97, respectively, were obtained. A comparison with level sets and thresholding techniques has been carried out, and our results outperformed them in terms of accuracy.

  16. Accuracy of 3D volumetric image registration based on CT, MR and PET/CT phantom experiments.

    PubMed

    Li, Guang; Xie, Huchen; Ning, Holly; Citrin, Deborah; Capala, Jacek; Maass-Moreno, Roberto; Guion, Peter; Arora, Barbara; Coleman, Norman; Camphausen, Kevin; Miller, Robert W

    2008-07-09

    Registration is critical for image-based treatment planning and image-guided treatment delivery. Although automatic registration is available, manual, visual-based image fusion using three orthogonal planar views (3P) is always employed clinically to verify and adjust an automatic registration result. However, the 3P fusion can be time consuming, observer dependent, as well as prone to errors, owing to the incomplete 3-dimensional (3D) volumetric image representations. It is also limited to single-pixel precision (the screen resolution). The 3D volumetric image registration (3DVIR) technique was developed to overcome these shortcomings. This technique introduces a 4th dimension in the registration criteria beyond the image volume, offering both visual and quantitative correlation of corresponding anatomic landmarks within the two registration images, facilitating a volumetric image alignment, and minimizing potential registration errors. The 3DVIR combines image classification in real-time to select and visualize a reliable anatomic landmark, rather than using all voxels for alignment. To determine the detection limit of the visual and quantitative 3DVIR criteria, slightly misaligned images were simulated and presented to eight clinical personnel for interpretation. Both of the criteria produce a detection limit of 0.1 mm and 0.1 degree. To determine the accuracy of the 3DVIR method, three imaging modalities (CT, MR and PET/CT) were used to acquire multiple phantom images with known spatial shifts. Lateral shifts were applied to these phantoms with displacement intervals of 5.0 ± 0.1 mm. The accuracy of the 3DVIR technique was determined by comparing the image shifts determined through registration to the physical shifts made experimentally. The registration accuracy, together with precision, was found to be: 0.02 ± 0.09 mm for CT/CT images, 0.03 ± 0.07 mm for MR/MR images, and 0.03 ± 0.35 mm for PET/CT images. This accuracy is consistent with the detection limit

  17. 3D segmentation of the true and false lumens on CT aortic dissection images

    NASA Astrophysics Data System (ADS)

    Fetnaci, Nawel; Łubniewski, Paweł; Miguel, Bruno; Lohou, Christophe

    2013-03-01

    Our work is related to aortic dissections, which are a medical emergency and can quickly lead to death. In this paper, we want to retrieve from CT images the false and the true lumens, which are features of aortic dissection. Our aim is to provide a 3D view of the lumens, which is difficult to obtain either by volume rendering or by other visualization tools that only give the outer contour of the aorta directly, or by other segmentation methods, because they mainly segment either only the outer contour of the aorta or other connected arteries and organs as well. In our work, we need to segment the two lumens separately; this segmentation allows us to distinguish them automatically, facilitate the landing of the aortic prosthesis, propose a virtual 3D navigation and perform quantitative analysis. We chose to segment these data using a deformable model based on the fast marching method. In the classical fast marching approach, a speed function is used to control the front propagation of a deforming curve. The speed function is based only on the image gradient. In our CT images, due to the low resolution, the fast marching front propagates from one lumen to the other; the gradient data are therefore insufficient for accurate segmentation results. In this paper, we adapt the fast marching method, in particular by modifying the speed function, and we succeed in segmenting the two lumens separately.

  18. Reconstructing 3D x-ray CT images of polymer gel dosimeters using the zero-scan method

    NASA Astrophysics Data System (ADS)

    Kakakhel, M. B.; Kairn, T.; Kenny, J.; Trapp, J. V.

    2013-06-01

    In this study x-ray CT has been used to produce a 3D image of an irradiated PAGAT gel sample, with noise-reduction achieved using the 'zero-scan' method. The gel was repeatedly CT scanned and a linear fit to the varying Hounsfield unit of each pixel in the 3D volume was evaluated across the repeated scans, allowing a zero-scan extrapolation of the image to be obtained. To minimise heating of the CT scanner's x-ray tube, this study used a large slice thickness (1 cm), to provide image slices across the irradiated region of the gel, and a relatively small number of CT scans (63), to extrapolate the zero-scan image. The resulting set of transverse images shows reduced noise compared to images from the initial CT scan of the gel, without being degraded by the additional radiation dose delivered to the gel during the repeated scanning. The full, 3D image of the gel has a low spatial resolution in the longitudinal direction, due to the selected scan parameters. Nonetheless, important features of the dose distribution are apparent in the 3D x-ray CT scan of the gel. The results of this study demonstrate that the zero-scan extrapolation method can be applied to the reconstruction of multiple x-ray CT slices, to provide useful 2D and 3D images of irradiated dosimetry gels.
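
    The zero-scan extrapolation amounts to a per-voxel straight-line fit of Hounsfield value against scan number, keeping the intercept; the sketch below runs on synthetic data standing in for the repeated gel scans.

```python
# Minimal sketch of the zero-scan extrapolation: fit a straight line to each
# voxel's Hounsfield value as a function of scan number across the repeated
# scans, and take the intercept (scan 0) as the noise-reduced estimate.
import numpy as np

def zero_scan_image(scans):
    """scans: array of shape (n_scans, z, y, x). Returns the extrapolated volume."""
    n_scans = scans.shape[0]
    scan_index = np.arange(n_scans, dtype=float)
    flat = scans.reshape(n_scans, -1)                  # (n_scans, n_voxels)
    # One polyfit over all voxels at once: row 0 = slopes, row 1 = intercepts.
    slope, intercept = np.polyfit(scan_index, flat, deg=1)
    return intercept.reshape(scans.shape[1:])

# Toy usage: a true volume plus per-scan drift and noise; the intercept
# recovers the true values better than any single noisy scan.
rng = np.random.default_rng(1)
true_vol = rng.uniform(0, 60, size=(8, 32, 32))
drift = 0.4 * np.arange(20).reshape(-1, 1, 1, 1)
scans = true_vol + drift + rng.normal(scale=3.0, size=(20, 8, 32, 32))
estimate = zero_scan_image(scans)
print(np.abs(estimate - true_vol).mean(), np.abs(scans[0] - true_vol).mean())
```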

  19. Micro-CT images reconstruction and 3D visualization for small animal studying

    NASA Astrophysics Data System (ADS)

    Gong, Hui; Liu, Qian; Zhong, Aijun; Ju, Shan; Fang, Quan; Fang, Zheng

    2005-01-01

    A small-animal x-ray micro computed tomography (micro-CT) system has been constructed to screen laboratory small animals and organs. The micro-CT system consists of dual fiber-optic taper-coupled CCD detectors with a field-of-view of 25x50 mm2, a microfocus x-ray source, and a rotational subject holder. For accurate localization of the rotation center, the coincidence between the axis of rotation and the center of the image was studied by calibration with a polymethylmethacrylate cylinder. Feldkamp's filtered back-projection cone-beam algorithm is adopted for three-dimensional reconstruction because the effective cone-beam angle of the micro-CT system is 5.67°. A 200x1024x1024 micro-CT data matrix is obtained with a magnification of 1.77 and a pixel size of 31x31 μm2. In our reconstruction software, the output image size of the micro-CT slice data, the magnification factor and the sample rotation angle can be modified to trade off computational efficiency against the reconstruction region. The reconstructed image matrix data are processed and visualized with the Visualization Toolkit (VTK). Data parallelism in VTK is used for surface rendering of the reconstructed data in order to improve computing speed. The computing time for processing a 512x512x512 matrix dataset is about 1/20 of that of the serial program when 30 CPUs are used. The voxel size is 54x54x108 μm3. Reconstruction and 3D visualization images of a laboratory rat ear are presented.
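
    A minimal 2D parallel-beam sketch (not the Feldkamp cone-beam code itself) illustrating filtered back-projection with scikit-image; for small cone angles such as the 5.67° quoted above, the FDK algorithm reduces approximately to weighted slice-by-slice filtered back-projection.

      import numpy as np
      from skimage.data import shepp_logan_phantom
      from skimage.transform import radon, iradon, rescale

      phantom = rescale(shepp_logan_phantom(), 0.5)           # small test slice
      angles = np.linspace(0.0, 180.0, 360, endpoint=False)   # projection angles (degrees)
      sinogram = radon(phantom, theta=angles)                 # forward projection
      recon = iradon(sinogram, theta=angles)                  # ramp-filtered back-projection
      print("RMS error:", np.sqrt(np.mean((recon - phantom) ** 2)))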

  20. Automatic seed picking for brachytherapy postimplant validation with 3D CT images.

    PubMed

    Zhang, Guobin; Sun, Qiyuan; Jiang, Shan; Yang, Zhiyong; Ma, Xiaodong; Jiang, Haisong

    2017-06-22

    Postimplant validation is an indispensable part of the brachytherapy technique. It provides the necessary feedback to ensure the quality of the operation. The ability to pick implanted seeds relates directly to the accuracy of validation. To address this, an automatic approach is proposed for picking implanted brachytherapy seeds in 3D CT images. In order to pick the seed configuration (location and orientation) efficiently, the approach starts with the segmentation of seeds from CT images using a thresholding filter based on the gray-level histogram. Through filtering and denoising, touching seeds and single seeds are classified. The true novelty of this approach lies in the application of Canny edge detection and an improved concave-point matching algorithm to separate touching seeds. Through the computation of image moments, the seed configuration can be determined efficiently. Finally, two different experiments are designed to verify the performance of the proposed approach: (1) a physical phantom with 60 model seeds, and (2) patient data from 16 cases. Through assessment of the validated results by a medical physicist, the proposed method exhibited promising results. The phantom experiment demonstrates that the errors of seed location and orientation are within ([Formula: see text]) mm and ([Formula: see text])[Formula: see text], respectively. In addition, most seed location and orientation errors are within 0.8 mm and 3.5[Formula: see text], respectively, in all cases. The average processing time for seed picking is 8.7 s per 100 seeds. In this paper, an automatic, efficient and robust approach, performed on CT images, is proposed to determine the implanted seed location as well as orientation in a 3D workspace. The experiments with phantom and patient data confirm its good performance.
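
    A minimal sketch (hypothetical HU threshold and array names, not the authors' full pipeline) of the basic steps: threshold the bright metal seeds, label connected components, and use image moments (the inertia tensor of each component) to obtain each seed's centroid and principal axis.

      import numpy as np
      from scipy import ndimage

      def pick_seeds(ct_hu, hu_threshold=2000.0):
          binary = ct_hu > hu_threshold                    # seeds are strongly attenuating
          labels, n = ndimage.label(binary)
          seeds = []
          for i in range(1, n + 1):
              coords = np.argwhere(labels == i).astype(float)
              centroid = coords.mean(axis=0)
              centered = coords - centroid
              cov = centered.T @ centered / len(coords)    # second central moments
              evals, evecs = np.linalg.eigh(cov)
              orientation = evecs[:, np.argmax(evals)]     # long axis of the seed
              seeds.append((centroid, orientation))
          return seeds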

  1. Integration of 3D anatomical data obtained by CT imaging and 3D optical scanning for computer aided implant surgery

    PubMed Central

    2011-01-01

    Background A precise placement of dental implants is a crucial step to optimize both prosthetic aspects and functional constraints. In this context, the use of virtual guiding systems has been recognized as a fundamental tool to control the ideal implant position. In particular, complex periodontal surgeries can be performed using preoperative planning based on CT data. The critical point of the procedure is the loss of accuracy in transferring the CT planning information to the surgical field through custom-made stereolithographic surgical guides. Methods In this work, a novel methodology is proposed for monitoring the loss of accuracy in transferring CT dental information into the periodontal surgical field. The methodology is based on integrating 3D data of anatomical (impression and cast) and preoperative (radiographic template) models, obtained by both CT and optical scanning processes. Results A clinical case of a fully edentulous jaw patient has been used as a test case to assess the accuracy of the various steps involved in manufacturing surgical guides. In particular, a surgical guide has been designed to place implants in the bone structure of the patient. The analysis of the results has allowed the clinician to monitor the errors introduced at each step of manufacturing the physical templates. Conclusions The use of an optical scanner, which has a higher resolution and accuracy than CT scanning, has proven to be a valid support for controlling the precision of the various physical models adopted and for pointing out possible error sources. A case study regarding a fully edentulous patient has confirmed the feasibility of the proposed methodology. PMID:21338504

  2. A universal approach for automatic organ segmentations on 3D CT images based on organ localization and 3D GrabCut

    NASA Astrophysics Data System (ADS)

    Zhou, Xiangrong; Ito, Takaaki; Zhou, Xinxin; Chen, Huayue; Hara, Takeshi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Hoshi, Hiroaki; Fujita, Hiroshi

    2014-03-01

    This paper describes a universal approach to the automatic segmentation of different internal organ and tissue regions in three-dimensional (3D) computerized tomography (CT) scans. The proposed approach combines object localization, a probabilistic atlas, and 3D GrabCut techniques to achieve automatic and quick segmentation. The proposed method first detects a tight 3D bounding box that contains the target organ region in the CT images and then estimates the prior probability of each voxel inside the bounding box belonging to the organ region or the background, based on a dynamically generated probabilistic atlas. Finally, the target organ region is separated from the background by using an improved 3D GrabCut algorithm. A machine-learning method is used to train a detector that localizes the 3D bounding box of the target organ using template matching on a selected feature space. A content-based image retrieval method is used for online generation of a patient-specific probabilistic atlas for the target organ based on a database. A 3D GrabCut algorithm is used for the final organ segmentation by iteratively estimating the CT number distributions of the target organ and background using a graph-cuts algorithm. We applied this approach to localize and segment twelve major organ and tissue regions independently based on a database that includes 1300 torso CT scans. In our experiments, we randomly selected numerous CT scans and manually annotated nine principal types of internal organ regions for performance evaluation. Preliminary results showed the feasibility and efficiency of the proposed approach for addressing automatic organ segmentation issues on CT images.
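
    A minimal 2D sketch using OpenCV's grabCut (not the authors' improved 3D version) showing the GrabCut step: a bounding box initializes the foreground/background models and graph cuts iteratively separate the organ from the background. The file name and box coordinates are hypothetical.

      import cv2
      import numpy as np

      img = cv2.imread("ct_slice.png")                   # 8-bit, 3-channel slice (hypothetical file)
      mask = np.zeros(img.shape[:2], np.uint8)
      bgd_model = np.zeros((1, 65), np.float64)
      fgd_model = np.zeros((1, 65), np.float64)
      rect = (80, 60, 200, 180)                          # detected bounding box (x, y, w, h)
      cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)
      organ = ((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)).astype(np.uint8)  # organ pixels = 1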

  3. A positioning QA procedure for 2D/2D (kV/MV) and 3D/3D (CT/CBCT) image matching for radiotherapy patient setup.

    PubMed

    Guan, Huaiqun; Hammoud, Rabih; Yin, Fang-Fang

    2009-10-06

    A positioning QA procedure for Varian's 2D/2D (kV/MV) and 3D/3D (planCT/CBCT) matching was developed. The procedure was to check: (1) the coincidence of the on-board imager (OBI), portal imager (PI), and cone beam CT (CBCT) isocenters (digital graticules) with the linac's isocenter (to a pre-specified accuracy); and (2) that the positioning difference detected by 2D/2D (kV/MV) and 3D/3D (planCT/CBCT) matching can be reliably transferred to couch motion. A cube phantom with a 2 mm metal ball (bb) at the center was used. The bb was used to define the isocenter. Two additional bbs were placed on two phantom surfaces in order to define a spatial location 1.5 cm anterior, 1.5 cm inferior, and 1.5 cm right of the isocenter. An axial scan of the phantom was acquired from a multislice CT simulator. The phantom was set at the linac's isocenter using the room lasers; either AP MV and right-lateral kV images or CBCT images were taken for 2D/2D or 3D/3D matching, respectively. For 2D/2D, the accuracy of each device's isocenter was obtained by checking the distance between the central bb and the digital graticule. Then the central bb in the orthogonal DRRs was manually moved to overlay the off-axis bbs in the kV/MV images. For 3D/3D, the CBCT was first matched to the planCT to check the isocenter difference between the two CTs. Manual shifts were then made by moving the CBCT such that the point defined by the two off-axis bbs overlaid the central bb in the planCT. (The planCT cannot be moved in the current version of OBI 1.4.) The manual shifts were then applied to remotely move the couch. The room laser was used to check the accuracy of the couch movement. For Trilogy (or Ix-21) linacs, the coincidence of the imager and linac isocenters was better than 1 mm (or 1.5 mm). The couch shift accuracy was better than 2 mm.

  4. Automatic 3D pulmonary nodule detection in CT images: A survey.

    PubMed

    Valente, Igor Rafael S; Cortez, Paulo César; Neto, Edson Cavalcanti; Soares, José Marques; de Albuquerque, Victor Hugo C; Tavares, João Manuel R S

    2016-02-01

    This work presents a systematic review of techniques for the 3D automatic detection of pulmonary nodules in computerized-tomography (CT) images. Its main goals are to analyze the latest technology being used for the development of computational diagnostic tools to assist in the acquisition, storage and, mainly, processing and analysis of the biomedical data. Also, this work identifies the progress made so far, evaluates the challenges to be overcome, and provides an analysis of future prospects. As far as the authors know, this is the first time that a review is devoted exclusively to automated 3D techniques for the detection of pulmonary nodules from lung CT images, which makes this work of noteworthy value. The research covered the works published in the Web of Science, PubMed, Science Direct and IEEEXplore up to December 2014. Each retrieved work that referred to automated 3D segmentation of the lungs was individually analyzed to identify its objective, methodology and results. Based on the analysis of the selected works, several studies were seen to be useful for the construction of medical diagnostic aid tools. However, there are certain aspects that still require attention, such as increasing algorithm sensitivity, reducing the number of false positives, improving and optimizing the detection of nodules of different kinds, sizes and shapes, and, finally, the ability to integrate with Electronic Medical Record Systems and Picture Archiving and Communication Systems. Based on this analysis, we can say that further research is needed to develop current techniques and that new algorithms are needed to overcome the identified drawbacks. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  5. Calcification detection of abdominal aorta in CT images and 3D visualization in VR devices.

    PubMed

    Garcia-Berna, Jose A; Sanchez-Gomez, Juan M; Hermanns, Judith; Garcia-Mateos, Gines; Fernandez-Aleman, Jose L

    2016-08-01

    Automatic calcification detection in the abdominal aorta consists of a set of computer vision techniques to quantify the amount of calcium found around this artery. Knowing that information, it is possible to perform statistical studies that relate vascular diseases to the presence of calcium in these structures. To facilitate the detection in CT images, a contrast agent is usually injected into the circulatory system of the patients to distinguish the aorta from other body tissues and organs. This contrast agent increases the absorption of X-rays by blood, making the measurement of calcifications easier. Based on this idea, a new system has been developed that detects and tracks the aorta and estimates the calcium found surrounding it. In addition, the system is complemented with a 3D visualization mode of the image set designed for the new generation of immersive VR devices.
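
    A minimal sketch (hypothetical masks and threshold, not the authors' system) of quantifying calcium around a tracked aorta: restrict attention to a thin shell around the vessel and count voxels above a calcification threshold. The 130 HU value is commonly used for calcium scoring on non-contrast CT; in contrast-enhanced scans a higher, scan-specific threshold would typically be needed.

      import numpy as np
      from scipy import ndimage

      def calcium_volume(ct_hu, aorta_mask, voxel_volume_mm3, shell_voxels=3, hu_thresh=130):
          """Volume (mm^3) of calcified voxels in a shell around the segmented aorta."""
          shell = ndimage.binary_dilation(aorta_mask, iterations=shell_voxels) & ~aorta_mask
          calcified = shell & (ct_hu >= hu_thresh)
          return calcified.sum() * voxel_volume_mm3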

  6. Feature extraction, analysis, and 3D visualization of local lung regions in volumetric CT images

    NASA Astrophysics Data System (ADS)

    Delegacz, Andrzej; Lo, Shih-Chung B.; Freedman, Matthew T.; Mun, Seong K.

    2001-05-01

    The purpose of the work was to develop image functions for volumetric segmentation, feature extraction, and enhanced 3D visualization of local regions using CT datasets of human lungs. The system is aimed at assisting the radiologist in the analysis of lung nodules. Volumetric datasets consisting of 30-50 thoracic helical low-dose CT slices were used in the study. The 3D topological characteristics of local structures including bronchi, blood vessels, and nodules were computed and evaluated. When the location of a region of interest is identified, the computer automatically computes the size, surface area, and normalized shape index of the suspected lesion. The developed system also allows the user to perform interactive operations for the evaluation of lung regions and structures through a user-friendly interface. These functions provide the user with a powerful tool to observe and investigate clinically interesting regions through unconventional radiographic viewings and analyses. The developed functions can also be used to view and analyze a patient's lung abnormalities in surgical planning applications. Additionally, we see the possibility of using the system as a teaching tool for correlating lung anatomy.
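
    A minimal sketch (hypothetical mask and spacing, not necessarily the normalized shape index used by the authors) of one such shape feature: sphericity, which compares the surface area of the segmented lesion to that of a sphere of equal volume (1.0 for a perfect sphere).

      import numpy as np
      from skimage import measure

      def sphericity(mask, spacing=(1.0, 1.0, 1.0)):
          """mask: 3D binary lesion mask; spacing: voxel size in mm."""
          verts, faces, _, _ = measure.marching_cubes(mask.astype(np.uint8), level=0.5,
                                                      spacing=spacing)
          area = measure.mesh_surface_area(verts, faces)       # lesion surface area (mm^2)
          volume = mask.sum() * np.prod(spacing)               # lesion volume (mm^3)
          return (np.pi ** (1.0 / 3.0)) * (6.0 * volume) ** (2.0 / 3.0) / area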

  7. Reconstruction of 4D-CT from a Single Free-Breathing 3D-CT by Spatial-Temporal Image Registration

    PubMed Central

    Wu, Guorong; Wang, Qian; Lian, Jun; Shen, Dinggang

    2011-01-01

    In the radiation therapy of lung cancer, a free-breathing 3D-CT image is usually acquired on the treatment day for image-guided patient setup, by registering it with the free-breathing 3D-CT image acquired on the planning day. In this way, the optimal dose plan computed on the planning day can be transferred onto the treatment day for cancer radiotherapy. However, patient setup based on the simple registration of the free-breathing 3D-CT images of the planning and treatment days may mislead the radiotherapy, since the free-breathing 3D-CT is actually a mixed-phase image, with different slices often acquired at different respiratory phases. Moreover, a 4D-CT that is generally acquired on the planning day for improvement of dose planning is often ignored for guiding patient setup on the treatment day. To overcome these limitations, we present a novel two-step method to reconstruct the 4D-CT from a single free-breathing 3D-CT of the treatment day, by utilizing the 4D-CT model built on the planning day. Specifically, in the first step, we propose a new spatial-temporal registration algorithm to align all phase images of the 4D-CT acquired on the planning day, for building a 4D-CT model with temporal correspondences established among all respiratory phases. In the second step, we first determine the optimal phase for each slice of the free-breathing (mixed-phase) 3D-CT of the treatment day by comparing it with the 4D-CT of the planning day, and thus obtain a sequence of partial 3D-CT images for the treatment day, each with incomplete image information in certain slices; we then reconstruct a complete 4D-CT for the treatment day by warping the 4D-CT of the planning day (with complete information) to the sequence of partial 3D-CT images of the treatment day, under the guidance of the 4D-CT model built on the planning day. We have comprehensively evaluated our 4D-CT model building algorithm on a public lung image database, achieving the best registration
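
    A minimal sketch (hypothetical arrays, a deliberately simplified similarity measure) of the phase-selection step described above: for each slice of the free-breathing 3D-CT, pick the respiratory phase of the planning 4D-CT whose corresponding slice is most similar, here by the smallest sum of squared differences.

      import numpy as np

      def best_phase_per_slice(free_breathing_ct, planning_4dct):
          """free_breathing_ct: (nz, ny, nx); planning_4dct: (n_phases, nz, ny, nx)."""
          phases = []
          for z in range(free_breathing_ct.shape[0]):
              ssd = ((planning_4dct[:, z] - free_breathing_ct[z]) ** 2).sum(axis=(1, 2))
              phases.append(int(np.argmin(ssd)))       # most similar phase for this slice
          return phases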

  8. Construction of Realistic Liver Phantoms from Patient Images using 3D Printer and Its Application in CT Image Quality Assessment

    PubMed Central

    Leng, Shuai; Yu, Lifeng; Vrieze, Thomas; Kuhlmann, Joel; Chen, Baiyu; McCollough, Cynthia H.

    2016-01-01

    The purpose of this study is to use 3D printing techniques to construct a realistic liver phantom with a heterogeneous background and anatomic structures from patient CT images, and to use the phantom to assess image quality with filtered backprojection and iterative reconstruction algorithms. Patient CT images were segmented into liver tissues, contrast-enhanced vessels, and liver lesions using commercial software, based on which stereolithography (STL) files were created and sent to a commercial 3D printer. A 3D liver phantom was printed after assigning different printing materials to each object to simulate the appropriate attenuation of each segmented object. As high-opacity materials are not available for the printer, we printed hollow vessels and filled them with iodine solutions of adjusted concentration to represent enhancement levels in contrast-enhanced liver scans. The printed phantom was then placed in a 35×26 cm oblong-shaped water phantom and scanned repeatedly at 4 dose levels. Images were reconstructed using standard filtered backprojection and an iterative reconstruction algorithm with 3 different strength settings. A heterogeneous liver background was observed in the CT images, and the difference in CT numbers between lesions and background was representative of low-contrast lesions in liver CT studies. CT numbers in vessels filled with iodine solutions represented the enhancement of liver arteries and veins. Images were run through a Channelized Hotelling model observer with Gabor channels and ROC analysis was performed. The AUC values showed performance improvement using the iterative reconstruction algorithm and the amount of improvement increased with strength setting. PMID:27721555
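
    A minimal sketch (synthetic inputs, hypothetical channel matrix, not the authors' observer code) of a channelized Hotelling observer: images are reduced to a few channel outputs, a Hotelling template is formed from the training statistics, and the AUC of the resulting decision variable is estimated nonparametrically.

      import numpy as np

      def cho_auc(signal_imgs, noise_imgs, channels):
          """signal/noise_imgs: (n, npix) flattened images; channels: (npix, nchan), e.g. Gabor."""
          vs = signal_imgs @ channels                     # channel outputs, signal present
          vn = noise_imgs @ channels                      # channel outputs, signal absent
          s = vs.mean(axis=0) - vn.mean(axis=0)           # mean signal in channel space
          k = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
          w = np.linalg.solve(k, s)                       # Hotelling template
          ts, tn = vs @ w, vn @ w                         # decision variables
          # Wilcoxon/Mann-Whitney estimate of the area under the ROC curve
          return np.mean(ts[:, None] > tn[None, :]) + 0.5 * np.mean(ts[:, None] == tn[None, :])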

  9. Construction of Realistic Liver Phantoms from Patient Images using 3D Printer and Its Application in CT Image Quality Assessment.

    PubMed

    Leng, Shuai; Yu, Lifeng; Vrieze, Thomas; Kuhlmann, Joel; Chen, Baiyu; McCollough, Cynthia H

    2015-01-01

    The purpose of this study is to use 3D printing techniques to construct a realistic liver phantom with a heterogeneous background and anatomic structures from patient CT images, and to use the phantom to assess image quality with filtered backprojection and iterative reconstruction algorithms. Patient CT images were segmented into liver tissues, contrast-enhanced vessels, and liver lesions using commercial software, based on which stereolithography (STL) files were created and sent to a commercial 3D printer. A 3D liver phantom was printed after assigning different printing materials to each object to simulate the appropriate attenuation of each segmented object. As high-opacity materials are not available for the printer, we printed hollow vessels and filled them with iodine solutions of adjusted concentration to represent enhancement levels in contrast-enhanced liver scans. The printed phantom was then placed in a 35×26 cm oblong-shaped water phantom and scanned repeatedly at 4 dose levels. Images were reconstructed using standard filtered backprojection and an iterative reconstruction algorithm with 3 different strength settings. A heterogeneous liver background was observed in the CT images, and the difference in CT numbers between lesions and background was representative of low-contrast lesions in liver CT studies. CT numbers in vessels filled with iodine solutions represented the enhancement of liver arteries and veins. Images were run through a Channelized Hotelling model observer with Gabor channels and ROC analysis was performed. The AUC values showed performance improvement using the iterative reconstruction algorithm and the amount of improvement increased with strength setting.

  10. Construction of realistic liver phantoms from patient images using 3D printer and its application in CT image quality assessment

    NASA Astrophysics Data System (ADS)

    Leng, Shuai; Yu, Lifeng; Vrieze, Thomas; Kuhlmann, Joel; Chen, Baiyu; McCollough, Cynthia H.

    2015-03-01

    The purpose of this study is to use 3D printing techniques to construct a realistic liver phantom with a heterogeneous background and anatomic structures from patient CT images, and to use the phantom to assess image quality with filtered back-projection and iterative reconstruction algorithms. Patient CT images were segmented into liver tissues, contrast-enhanced vessels, and liver lesions using commercial software, based on which stereolithography (STL) files were created and sent to a commercial 3D printer. A 3D liver phantom was printed after assigning different printing materials to each object to simulate the appropriate attenuation of each segmented object. As high-opacity materials are not available for the printer, we printed hollow vessels and filled them with iodine solutions of adjusted concentration to represent enhancement levels in contrast-enhanced liver scans. The printed phantom was then placed in a 35×26 cm oblong-shaped water phantom and scanned repeatedly at 4 dose levels. Images were reconstructed using standard filtered back-projection and an iterative reconstruction algorithm with 3 different strength settings. A heterogeneous liver background was observed in the CT images, and the difference in CT numbers between lesions and background was representative of low-contrast lesions in liver CT studies. CT numbers in vessels filled with iodine solutions represented the enhancement of liver arteries and veins. Images were run through a Channelized Hotelling model observer with Gabor channels and ROC analysis was performed. The AUC values showed performance improvement using the iterative reconstruction algorithm and the amount of improvement increased with strength setting.

  11. Reliability analysis of Cobb angle measurements of congenital scoliosis using X-ray and 3D-CT images.

    PubMed

    Tauchi, Ryoji; Tsuji, Taichi; Cahill, Patrick J; Flynn, John M; Glotzbecker, Michael; El-Hawary, Ron; Heflin, John A; Imagama, Shiro; Joshi, Ajeya P; Nohara, Ayato; Ramirez, Norman; Roye, David P; Saito, Toshiki; Sawyer, Jeffrey R; Smith, John T; Kawakami, Noriaki

    2016-01-01

    Therapeutic decisions for congenital scoliosis rely on Cobb angle measurements on consecutive radiographs. There have been no studies documenting the variability of measuring the Cobb angle using 3D-CT images in children with congenital scoliosis. The purpose of this study was to compare the reliability and measurement errors of X-ray images with those of 3D-CT images. The X-ray and 3D-CT images of 20 patients diagnosed with congenital scoliosis were used to assess the reliability of the digital 3D-CT images for the measurement of the Cobb angle. Thirteen observers performed the measurements, and each image was analyzed by each observer twice with a minimum interval of 1 week between measurements. The intraobserver variation was expressed as the mean absolute difference (MAD) and standard deviation (SD) between measurements and the intraclass correlation coefficient (IaCC) of the measurements. In addition, the interobserver variation was expressed as the MAD and interclass correlation coefficient (IeCC). The average MAD and SD were 4.5° and 3.2° for the X-ray method and 3.7° and 2.6° for the 3D-CT method. The intraobserver and interobserver correlation coefficients were excellent for both methods (X-ray: IaCC 0.835-0.994, IeCC 0.847; 3D-CT: IaCC 0.819-0.996, IeCC 0.893). There was no significant difference in MAD between X-ray and 3D-CT images in measuring each type of congenital scoliosis by each observer. Cobb angle measurements in patients with congenital scoliosis using X-ray images in the frontal plane could be reproduced with almost the same measurement variance (3°-4° measurement error) using 3D-CT images. This suggests that X-ray images are clinically useful for assessing any type of congenital scoliosis as far as measuring the Cobb angle alone is concerned. However, since 3D-CT can provide more detailed images of the anterior and posterior components of malformed vertebrae, the volume of information that can be obtained by evaluating them has

  12. Novel and powerful 3D adaptive crisp active contour method applied in the segmentation of CT lung images.

    PubMed

    Rebouças Filho, Pedro Pedrosa; Cortez, Paulo César; da Silva Barros, Antônio C; C Albuquerque, Victor Hugo; R S Tavares, João Manuel

    2017-01-01

    The World Health Organization (WHO) estimates that 300 million people have asthma and 210 million people have Chronic Obstructive Pulmonary Disease (COPD), and predicts that COPD will become the third major cause of death worldwide in 2030. Computational vision systems are commonly used in pulmonology to address the task of image segmentation, which is essential for accurate medical diagnoses. Segmentation defines the regions of the lungs in CT images of the thorax that must be further analyzed by the system or by a specialist physician. This work proposes a novel and powerful technique named 3D Adaptive Crisp Active Contour Method (3D ACACM) for the segmentation of CT lung images. The method starts with a sphere within the lung to be segmented that is deformed by forces acting on it towards the lung borders. This process is performed iteratively in order to minimize an energy function associated with the 3D deformable model used. In the experimental assessment, the 3D ACACM is compared against three approaches commonly used in this field: automatic 3D region growing, a level-set algorithm based on coherent propagation, and semi-automatic segmentation by an expert using the 3D OsiriX toolbox. When applied to 40 chest CT scans, the 3D ACACM achieved an average F-measure of 99.22%, demonstrating its superiority and its ability to segment lungs in CT images.

  13. 3D motion artifact compensation in CT image with depth camera

    NASA Astrophysics Data System (ADS)

    Ko, Youngjun; Baek, Jongduk; Shim, Hyunjung

    2015-02-01

    Computed tomography (CT) is a medical imaging technology that uses computer-processed X-ray projections to acquire tomographic images, or slices, of specific organs of the body. Motion artifacts caused by patient motion are a common problem in CT systems and may introduce undesirable distortions in CT images. This paper analyzes the critical problems underlying motion artifacts and proposes a new CT system for motion artifact compensation. We employ depth cameras to capture the patient motion and account for it in the CT image reconstruction. In this way, we achieve a significant improvement in motion artifact compensation that is not possible with previous techniques.

  14. Piecewise-diffeomorphic image registration: application to the motion estimation between 3D CT lung images with sliding conditions.

    PubMed

    Risser, Laurent; Vialard, François-Xavier; Baluwala, Habib Y; Schnabel, Julia A

    2013-02-01

    In this paper, we propose a new strategy for modelling sliding conditions when registering 3D images in a piecewise-diffeomorphic framework. More specifically, our main contribution is the development of a mathematical formalism to perform Large Deformation Diffeomorphic Metric Mapping registration with sliding conditions. We also show how to adapt this formalism to the LogDemons diffeomorphic registration framework. We finally show how to apply this strategy to estimate the respiratory motion between 3D CT pulmonary images. Quantitative tests are performed on 2D and 3D synthetic images, as well as on real 3D lung images from the MICCAI EMPIRE10 challenge. Results show that our strategy estimates accurate mappings of entire 3D thoracic image volumes that exhibit a sliding motion, as opposed to conventional registration methods which are not capable of capturing discontinuous deformations at the thoracic cage boundary. They also show that although the deformations are not smooth across the location of sliding conditions, they are almost always invertible in the whole image domain. This would be helpful for radiotherapy planning and delivery. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.

  15. Three-dimensional analysis of alveolar bone resorption by image processing of 3-D dental CT images

    NASA Astrophysics Data System (ADS)

    Nagao, Jiro; Kitasaka, Takayuki; Mori, Kensaku; Suenaga, Yasuhito; Yamada, Shohzoh; Naitoh, Munetaka

    2006-03-01

    We have developed a novel system that provides total support for the assessment of alveolar bone resorption caused by periodontitis, based on three-dimensional (3-D) dental CT images. In spite of the difficulty of perceiving the complex 3-D shape of resorption, dentists assessing resorption location and severity have been relying on two-dimensional radiography and probing, which merely provides one-dimensional information (depth) about resorption shape. However, there has been little work on assisting assessment of the disease with 3-D image processing and visualization techniques. This work provides quantitative evaluation results and figures for our system, which measures the three-dimensional shape and spread of resorption. It has the following functions: (1) it measures the depth of resorption by virtually simulating probing in the 3-D CT images, with the advantages over the conventional examination that the measurement is not obstructed by teeth on the inter-proximal sides and that much smaller measurement intervals can be used; (2) it visualizes the distribution of the depth with movies and graphs; (3) it produces a quantitative index and an intuitive visual representation of the spread of resorption in the inter-radicular region in terms of area; and (4) it calculates the volume of resorption, as another severity index, in the inter-radicular region and the region outside it. Experimental results on two cases of 3-D dental CT images, and a comparison of the results with the clinical examination results and experts' measurements of the corresponding patients, confirmed that the proposed system gives satisfactory results, including 0.1 to 0.6 mm of resorption measurement (probing) error and a fairly intuitive presentation of measurement and calculation results.

  16. Pulmonary nodule classification based on CT density distribution using 3D thoracic CT images

    NASA Astrophysics Data System (ADS)

    Kawata, Yoshiki; Niki, Noboru; Ohamatsu, Hironobu; Kusumoto, Masahiko; Kakinuma, Ryutaro; Mori, Kiyoshi; Yamada, Kozo; Nishiyama, Hiroyuki; Eguchi, Kenji; Kaneko, Masahiro; Moriyama, Noriyuki

    2004-04-01

    Computer-aided diagnosis (CAD) has been investigated to provide physicians with quantitative information, such as estimates of the malignant likelihood, to aid in the classification of abnormalities detected at lung cancer screening. The purpose of this study is to develop a method for classifying nodule density patterns that provides information with respect to nodule status, such as lesion stage. This method consists of three steps: nodule segmentation, histogram analysis of the CT density inside the nodule, and classification of nodules into five types based on histogram patterns. In this paper, we introduce a two-dimensional (2-D) joint histogram of the distance from the nodule center and the CT density inside the nodule, and explore numerical features describing the shape and position of the joint histogram.
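
    A minimal sketch (hypothetical arrays and bin settings) of the 2-D joint histogram described above: for every voxel inside the segmented nodule, its distance from the nodule center is binned against its CT density.

      import numpy as np
      from scipy import ndimage

      def joint_histogram(ct_hu, nodule_mask, n_dist_bins=20, n_hu_bins=32):
          center = np.array(ndimage.center_of_mass(nodule_mask))
          idx = np.argwhere(nodule_mask)                        # voxel coordinates inside nodule
          dist = np.linalg.norm(idx - center, axis=1)           # distance from nodule center
          hu = ct_hu[nodule_mask]                               # CT density of the same voxels
          hist, dist_edges, hu_edges = np.histogram2d(dist, hu,
                                                      bins=(n_dist_bins, n_hu_bins))
          return hist / hist.sum(), dist_edges, hu_edges        # normalized joint histogram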

  17. Patient specific respiratory motion modeling using a limited number of 3D lung CT images.

    PubMed

    Cui, Xueli; Gao, Xin; Xia, Wei; Liu, Yangchuan; Liang, Zhiyuan

    2014-01-01

    To build a patient-specific respiratory motion model with a low dose, a novel method was proposed that uses a limited number of 3D lung CT volumes together with an external respiratory signal. 4D lung CT volumes were acquired for patients with external markers placed on the upper abdominal surface. Meanwhile, the 3D coordinates of these markers were measured as the external respiratory signal. A sequential correspondence between the 4D lung CT and the external respiratory signal was built using the distance correlation method, and a 3D displacement over time for every registration control point in the CT volumes was obtained by 4D lung CT deformable registration. Temporal fitting was performed for every registration control point displacement and for the external respiratory signal in the anterior-posterior direction, respectively, to obtain their fitting curves. Finally, a linear regression was used to fit the corresponding samples of the control point displacement fitting curves and the external respiratory signal fitting curve to complete the pulmonary respiration modeling. Compared to a B-spline-based method using the respiratory signal phase, the proposed method is highly advantageous as it offers comparable modeling accuracy and target modeling error (TME) while requiring 70% fewer 3D lung CT volumes. When using a similar amount of 3D lung CT data, the mean TME of the proposed method is smaller than the mean TMEs of the PCA (principal component analysis)-based methods. The results indicate that the proposed method is successful in striking a balance between modeling accuracy and the number of 3D lung CT volumes.
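
    A minimal sketch (hypothetical variable names, an ordinary least-squares fit rather than the authors' full fitting pipeline) of the final modeling step: relate the external respiratory signal to the displacement of one registration control point so that displacement can later be predicted from the signal alone.

      import numpy as np

      def fit_motion_model(signal_ap, displacement):
          """signal_ap: (n,) external signal; displacement: (n, 3) control point motion (mm)."""
          design = np.column_stack([signal_ap, np.ones_like(signal_ap)])   # slope + offset
          coeffs, *_ = np.linalg.lstsq(design, displacement, rcond=None)   # shape (2, 3)
          return coeffs

      def predict_displacement(coeffs, signal_value):
          return coeffs[0] * signal_value + coeffs[1]       # predicted 3D displacement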

  18. Adapted morphing model for 3D volume reconstruction applied to abdominal CT images

    NASA Astrophysics Data System (ADS)

    Fadeev, Aleksey; Eltonsy, Nevine; Tourassi, Georgia; Martin, Robert; Elmaghraby, Adel

    2005-04-01

    The purpose of this study was to develop a 3D volume reconstruction model for volume rendering and to apply this model to abdominal CT data. The model development includes two steps: (1) interpolation of the given data for a complete 3D model, and (2) visualization. First, CT slices are interpolated using a special morphing algorithm. The main idea of this algorithm is to take a region from one CT slice and locate its most probable correspondence in the adjacent CT slice. The algorithm determines the transformation function of the region between two adjacent CT slices and interpolates the data accordingly. The most probable correspondence of a region is obtained using correlation analysis between the given region and regions of the adjacent CT slice. By applying this technique recursively, taking progressively smaller subregions within a region, a high-quality, accurate interpolation is obtained. The main advantages of this morphing algorithm are 1) its applicability not only to parallel planes like CT slices but also to general configurations of planes in 3D space, and 2) its fully automated nature, since, unlike most morphing techniques, it does not require control points to be specified by a user. Subsequently, to visualize the data, a specialized volume rendering card (TeraRecon VolumePro 1000) was used. To represent the data in 3D space, special software was developed to convert the interpolated CT slices to 3D objects compatible with the VolumePro card. Visual comparison between the proposed model and linear interpolation clearly demonstrates the superiority of the proposed model.
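
    A minimal sketch (hypothetical slices, patch position and size) of the correspondence step: normalized cross-correlation locates, in the adjacent slice, the region most similar to a patch taken from the current slice.

      import numpy as np
      from skimage.feature import match_template

      def find_correspondence(slice_a, slice_b, r0, c0, size=32):
          patch = slice_a[r0:r0 + size, c0:c0 + size]           # region taken from slice A
          ncc = match_template(slice_b, patch)                   # correlation map over slice B
          r1, c1 = np.unravel_index(np.argmax(ncc), ncc.shape)   # best-matching position
          return (r1, c1), ncc.max()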

  19. Clinical Application of Solid Model Based on Trabecular Tibia Bone CT Images Created by 3D Printer.

    PubMed

    Cho, Jaemo; Park, Chan-Soo; Kim, Yeoun-Jae; Kim, Kwang Gi

    2015-07-01

    The aim of this work is to use a 3D solid model to predict the mechanical loads associated with human bone fracture risk under bone disease conditions according to biomechanical engineering parameters. We used specialized image processing tools for image segmentation and three-dimensional (3D) reconstruction to generate meshes, which are necessary for the production of a solid model with a 3D printer from computed tomography (CT) images of the human tibia's trabecular and cortical bones. We examined the defect mechanisms of the tibia's trabecular bone. Image processing tools and segmentation techniques were used to analyze bone structures and produce a solid model with a 3D printer. Nowadays, bio-imaging (CT and magnetic resonance imaging) devices are able to display and reconstruct 3D anatomical details, and diagnostics are becoming increasingly vital to the quality of patient treatment planning and clinical treatment. Furthermore, radiographic images are being used to study biomechanical systems with several aims, namely, to describe and simulate the mechanical behavior of certain anatomical systems, to analyze pathological bone conditions, to study tissue structure and properties, and to create a solid model using a 3D printer to support surgical planning and reduce experimental costs. Research using image processing tools and segmentation techniques to analyze bone structures and produce solid models with a 3D printer is therefore rapidly becoming very important.

  20. Clinical Application of Solid Model Based on Trabecular Tibia Bone CT Images Created by 3D Printer

    PubMed Central

    Cho, Jaemo; Park, Chan-Soo; Kim, Yeoun-Jae

    2015-01-01

    Objectives The aim of this work is to use a 3D solid model to predict the mechanical loads associated with human bone fracture risk under bone disease conditions according to biomechanical engineering parameters. Methods We used specialized image processing tools for image segmentation and three-dimensional (3D) reconstruction to generate meshes, which are necessary for the production of a solid model with a 3D printer from computed tomography (CT) images of the human tibia's trabecular and cortical bones. We examined the defect mechanisms of the tibia's trabecular bone. Results Image processing tools and segmentation techniques were used to analyze bone structures and produce a solid model with a 3D printer. Conclusions Nowadays, bio-imaging (CT and magnetic resonance imaging) devices are able to display and reconstruct 3D anatomical details, and diagnostics are becoming increasingly vital to the quality of patient treatment planning and clinical treatment. Furthermore, radiographic images are being used to study biomechanical systems with several aims, namely, to describe and simulate the mechanical behavior of certain anatomical systems, to analyze pathological bone conditions, to study tissue structure and properties, and to create a solid model using a 3D printer to support surgical planning and reduce experimental costs. Research using image processing tools and segmentation techniques to analyze bone structures and produce solid models with a 3D printer is therefore rapidly becoming very important. PMID:26279958

  1. Registration of 2D x-ray images to 3D MRI by generating pseudo-CT data

    NASA Astrophysics Data System (ADS)

    van der Bom, M. J.; Pluim, J. P. W.; Gounis, M. J.; van de Kraats, E. B.; Sprinkhuizen, S. M.; Timmer, J.; Homan, R.; Bartels, L. W.

    2011-02-01

    Spatial and soft tissue information provided by magnetic resonance imaging can be very valuable during image-guided procedures, where usually only real-time two-dimensional (2D) x-ray images are available. Registration of 2D x-ray images to three-dimensional (3D) magnetic resonance imaging (MRI) data, acquired prior to the procedure, can provide optimal information to guide the procedure. However, registering x-ray images to MRI data is not a trivial task because of their fundamental difference in tissue contrast. This paper presents a technique that generates pseudo-computed tomography (CT) data from multi-spectral MRI acquisitions that is sufficiently similar to real CT data to enable registration of x-ray to MRI with accuracy comparable to registration of x-ray to CT. The method is based on a k-nearest-neighbors (kNN) regression strategy which labels voxels of MRI data with CT Hounsfield Units. The regression method uses multi-spectral MRI intensities and intensity gradients as features to discriminate between various tissue types. The efficacy of using pseudo-CT data for registration of x-ray to MRI was tested on ex vivo animal data. 2D-3D registration experiments using CT and pseudo-CT data of multiple subjects were performed with a commonly used 2D-3D registration algorithm. On average, the median target registration error for registration of two x-ray images to MRI data was approximately 1 mm larger than for x-ray to CT registration. The authors have shown that pseudo-CT data generated from multi-spectral MRI facilitate registration of MRI to x-ray images. From the experiments it could be concluded that the accuracy achieved was comparable to that of registering x-ray images to CT data.
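
    A minimal sketch (hypothetical feature arrays and function names) of the kNN-regression idea: voxels described by multi-spectral MR intensities and gradient magnitudes are assigned Hounsfield Units learned from co-registered MR/CT training data.

      import numpy as np
      from sklearn.neighbors import KNeighborsRegressor

      def train_pseudo_ct(mr_features_train, hu_train, k=5):
          """mr_features_train: (n_voxels, n_features); hu_train: (n_voxels,) CT values in HU."""
          model = KNeighborsRegressor(n_neighbors=k, weights="distance")
          return model.fit(mr_features_train, hu_train)

      def predict_pseudo_ct(model, mr_features, volume_shape):
          """Label each voxel of a new MR volume with an estimated HU value."""
          return model.predict(mr_features).reshape(volume_shape)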

  2. Adaptive Iterative Dose Reduction Using Three Dimensional Processing (AIDR3D) Improves Chest CT Image Quality and Reduces Radiation Exposure

    PubMed Central

    Yamashiro, Tsuneo; Miyara, Tetsuhiro; Honda, Osamu; Kamiya, Hisashi; Murata, Kiyoshi; Ohno, Yoshiharu; Tomiyama, Noriyuki; Moriya, Hiroshi; Koyama, Mitsuhiro; Noma, Satoshi; Kamiya, Ayano; Tanaka, Yuko; Murayama, Sadayuki

    2014-01-01

    Objective To assess the advantages of Adaptive Iterative Dose Reduction using Three Dimensional Processing (AIDR3D) for image quality improvement and dose reduction for chest computed tomography (CT). Methods Institutional Review Boards approved this study and informed consent was obtained. Eighty-eight subjects underwent chest CT at five institutions using identical scanners and protocols. During a single visit, each subject was scanned using different tube currents: 240, 120, and 60 mA. Scan data were converted to images using AIDR3D and a conventional reconstruction mode (without AIDR3D). Using a 5-point scale from 1 (non-diagnostic) to 5 (excellent), three blinded observers independently evaluated image quality for three lung zones, four patterns of lung disease (nodule/mass, emphysema, bronchiolitis, and diffuse lung disease), and three mediastinal measurements (small structure visibility, streak artifacts, and shoulder artifacts). Differences in these scores were assessed by Scheffe's test. Results At each tube current, scans using AIDR3D had higher scores than those without AIDR3D, which were significant for lung zones (p<0.0001) and all mediastinal measurements (p<0.01). For lung diseases, significant improvements with AIDR3D were frequently observed at 120 and 60 mA. Scans with AIDR3D at 120 mA had significantly higher scores than those without AIDR3D at 240 mA for lung zones and mediastinal streak artifacts (p<0.0001), and slightly higher or equal scores for all other measurements. Scans with AIDR3D at 60 mA were also judged superior or equivalent to those without AIDR3D at 120 mA. Conclusion For chest CT, AIDR3D provides better image quality and can reduce radiation exposure by 50%. PMID:25153797

  3. Measurement of spiculation index in 3D for solitary pulmonary nodules in volumetric lung CT images

    NASA Astrophysics Data System (ADS)

    Dhara, Ashis Kumar; Mukhopadhyay, Sudipta; Alam, Naved; Khandelwal, Niranjan

    2013-02-01

    In this paper a differential geometry based method is proposed for calculating the surface spiculation of solitary pulmonary nodules (SPN) in 3D from lung CT images. Spiculation is an important shape feature of SPN that assists the radiologist in assessing malignancy. The performance of a Computer-Aided Diagnostic (CAD) system depends on the accurate estimation of features like spiculation. In the proposed method, the peaks of the spicules are identified using the Gaussian and mean curvatures calculated at each surface point of the segmented SPN. Once the peak point of a spicule is identified, its nearest valley points are determined. The cross-sectional area of the best-fitted plane passing through the valley points forms the base of that spicule. The solid angle subtended by the spicule base at the peak point and the distance of the peak point from the nodule base are taken as the measures of spiculation. The spiculation index (SI) for a particular SPN is the weighted combination of all the spicules present in that SPN. The proposed method is validated on 95 SPN from the Imaging Database Resources Initiative (IDRI) public database. It achieved 87.4% accuracy in calculating the quantified spiculation index compared with the spiculation index provided by radiologists in the IDRI database.
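
    A minimal sketch (hypothetical height-map patch, finite differences rather than the authors' mesh-based computation) of the two curvature quantities the method relies on: for a surface given locally as z = f(x, y), the Gaussian curvature K and mean curvature H follow from the first and second derivatives.

      import numpy as np

      def gaussian_and_mean_curvature(z, spacing=1.0):
          """z: 2D height map of a local surface patch; returns (K, H) per grid point."""
          zy, zx = np.gradient(z, spacing)
          zyy, zyx = np.gradient(zy, spacing)
          zxy, zxx = np.gradient(zx, spacing)
          denom = 1.0 + zx ** 2 + zy ** 2
          K = (zxx * zyy - zxy ** 2) / denom ** 2
          H = (zxx * (1 + zy ** 2) - 2 * zx * zy * zxy + zyy * (1 + zx ** 2)) / (2 * denom ** 1.5)
          return K, H     # cap-like spicule peaks correspond to elliptical points (K > 0)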

  4. Efficient 3D rigid-body registration of micro-MR and micro-CT trabecular bone images

    NASA Astrophysics Data System (ADS)

    Rajapakse, C. S.; Magland, J.; Wehrli, S. L.; Zhang, X. H.; Liu, X. S.; Guo, X. E.; Wehrli, F. W.

    2008-03-01

    Registration of 3D images acquired from different imaging modalities, such as micro-magnetic resonance imaging (µMRI) and micro-computed tomography (µCT), is of interest in a number of medical imaging applications. Most general-purpose multimodality registration algorithms tend to be computationally intensive and do not take advantage of the shape of the imaging volume. Multimodality trabecular bone (TB) images of cylindrical cores, for example, tend to be misaligned along and around the axial direction more than around other directions. Additionally, TB images acquired by µMRI can differ substantially from those acquired by µCT due to apparent trabecular thickening from magnetic susceptibility boundary effects and a non-linear intensity correspondence. However, they share very similar contrast characteristics since the images essentially represent a binary tomographic system. The directional misalignment and the fundamental similarities of the two types of images can be exploited to achieve fast 3D registration. Here we present an intensity cross-correlation based 3D registration algorithm for registering 3D specimen images of cylindrical cores of cadaveric TB acquired by µMRI and µCT, in the context of finite-element modeling to assess the bone's mechanical constants. The algorithm achieves the desired registration by first coarsely approximating the three translational and three rotational parameters required to align the µMR images to the µCT scan coordinate frame, and then fine-tuning the parameters in the neighborhood of the approximate solution. The algorithm described here is suitable for 3D rigid-body image registration applications where through-plane rotations are known to be relatively small. The accuracy of the technique is constrained by the image resolution and the in-plane angular increments used.
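
    A minimal sketch (hypothetical volumes, not the authors' algorithm) of an intensity cross-correlation step for the translational part of such an alignment; scikit-image's FFT-based phase correlation recovers the 3D shift between two similar volumes, which could then seed a search over the remaining rotational parameters.

      import numpy as np
      from scipy import ndimage
      from skimage.registration import phase_cross_correlation

      def coarse_translation(fixed, moving, upsample=10):
          """Estimate and apply the 3D translation that best aligns `moving` to `fixed`."""
          shift, error, _ = phase_cross_correlation(fixed, moving,
                                                    upsample_factor=upsample)
          aligned = ndimage.shift(moving, shift)         # apply the recovered shift
          return shift, aligned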

  5. Therapeutic response assessment using 3D ultrasound for hepatic metastasis from colorectal cancer: Application of a personalized, 3D-printed tumor model using CT images

    PubMed Central

    Choi, Ye Ra; Park, Sang Joon; Hur, Bo Yun; Han, Joon Koo

    2017-01-01

    Background & aims To evaluate the accuracy and reliability of three-dimensional ultrasound (3D US) for response evaluation of hepatic metastasis from colorectal cancer (CRC) using a personalized 3D-printed tumor model. Methods Twenty patients with liver metastasis from CRC who underwent baseline and post-chemotherapy CT were retrospectively included. Personalized 3D-printed tumor models were fabricated from the CT data. Two radiologists measured the volume of each 3D-printed model using 3D US. With CT as a reference, we compared the difference between CT and US tumor volumes. The response evaluation was based on the Response Evaluation Criteria in Solid Tumors (RECIST) criteria. Results 3D US tumor volume showed no significant difference from CT volume (7.18 ± 5.44 mL, 8.31 ± 6.32 mL vs 7.42 ± 5.76 mL in CT, p>0.05). 3D US provided a high correlation coefficient with CT (r = 0.953, r = 0.97) as well as a high inter-observer intraclass correlation (0.978; 0.958–0.988). Regarding response, 3D US was in agreement with CT in 17 and 18 out of 20 patients for observers 1 and 2, respectively, with excellent agreement (κ = 0.961). Conclusions 3D US tumor volume using a personalized 3D-printed model is an accurate and reliable method for response evaluation in comparison with CT tumor volume. PMID:28797089

  6. Automatic Segmentation of 3D Micro-CT Coronary Vascular Images

    SciTech Connect

    Lee,J.; Beighley, P.; Ritman, E.; Smith, N.

    2007-01-01

    Although there are many algorithms available in the literature aimed at segmentation and model reconstruction of 3D angiographic images, many are focused on characterizing only a part of the vascular network. This study is motivated by the recent emerging prospects of whole-organ simulations in coronary hemodynamics, autoregulation and tissue oxygen delivery, for which anatomically accurate vascular meshes of extended scale are highly desirable. The key requirements of a reconstruction technique for this purpose are automation of processing and sub-voxel accuracy. We have designed a vascular reconstruction algorithm which satisfies these two criteria. It combines automatic seeding and tracking of vessels with radius detection based on active contours. The method was first examined through a series of tests on synthetic data, for accuracy in reproduced topology and morphology of the network, and was shown to exhibit errors of less than 0.5 voxel for centerline and radius detection, and 3 degrees for initial seed directions. The algorithm was then applied to real-world data of the full rat coronary structure acquired using a micro-CT scanner at 20 µm voxel size. For this, a further validation of radius quantification was carried out against a partially rescanned portion of the network at 8 µm voxel size, which estimated less than 10% radius error in vessels larger than 2 voxels in radius.

  7. Automatic segmentation of 3D micro-CT coronary vascular images.

    PubMed

    Lee, Jack; Beighley, Patricia; Ritman, Erik; Smith, Nicolas

    2007-12-01

    Although there are many algorithms available in the literature aimed at segmentation and model reconstruction of 3D angiographic images, many are focused on characterizing only a part of the vascular network. This study is motivated by the recent emerging prospects of whole-organ simulations in coronary hemodynamics, autoregulation and tissue oxygen delivery, for which anatomically accurate vascular meshes of extended scale are highly desirable. The key requirements of a reconstruction technique for this purpose are automation of processing and sub-voxel accuracy. We have designed a vascular reconstruction algorithm which satisfies these two criteria. It combines automatic seeding and tracking of vessels with radius detection based on active contours. The method was first examined through a series of tests on synthetic data, for accuracy in reproduced topology and morphology of the network, and was shown to exhibit errors of less than 0.5 voxel for centerline and radius detection, and 3 degrees for initial seed directions. The algorithm was then applied to real-world data of the full rat coronary structure acquired using a micro-CT scanner at 20 µm voxel size. For this, a further validation of radius quantification was carried out against a partially rescanned portion of the network at 8 µm voxel size, which estimated less than 10% radius error in vessels larger than 2 voxels in radius.

  8. A strain energy filter for 3D vessel enhancement with application to pulmonary CT images.

    PubMed

    Xiao, Changyan; Staring, Marius; Shamonin, Denis; Reiber, Johan H C; Stolk, Jan; Stoel, Berend C

    2011-02-01

    The traditional Hessian-related vessel filters often have difficulty detecting complex structures like bifurcations due to an over-simplified cylindrical model. To solve this problem, we present a shape-tuned strain energy density function to measure vessel likelihood in 3D medical images. This method is initially inspired by established stress-strain principles in mechanics. By considering the Hessian matrix as a stress tensor, the three invariants from orthogonal tensor decomposition are used independently or in combination to formulate distinctive functions for vascular shape discrimination, brightness contrast and structure strength measurement. Moreover, a mathematical description of the Hessian eigenvalues for general vessel shapes is obtained, based on an intensity continuity assumption, and a relative Hessian strength term is presented to ensure the dominance of second-order derivatives as well as to suppress undesired step edges. Finally, we adopt a multi-scale scheme to find an optimal solution through scale space. The proposed method is validated in experiments with a digital phantom and non-contrast-enhanced pulmonary CT data. It is shown that our model performs more effectively in enhancing vessel bifurcations and preserving details, compared to three existing filters.
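
    A minimal sketch (hypothetical volume and scales) of the classical Hessian-based vesselness that filters of this family build on; scikit-image's Frangi filter is used here purely for illustration, not the authors' strain-energy formulation.

      import numpy as np
      from skimage.filters import frangi

      def enhance_vessels(ct_volume, scales=(1, 2, 3)):
          """Multi-scale tubular-structure enhancement of a 3D CT volume."""
          vol = ct_volume.astype(float)
          vol = (vol - vol.min()) / (vol.ptp() + 1e-9)            # normalize intensities
          return frangi(vol, sigmas=scales, black_ridges=False)   # bright vessels on dark background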

  9. Automated 3D closed surface segmentation: application to vertebral body segmentation in CT images.

    PubMed

    Liu, Shuang; Xie, Yiting; Reeves, Anthony P

    2016-05-01

    A fully automated segmentation algorithm, progressive surface resolution (PSR), is presented in this paper to determine the closed surface of approximately convex blob-like structures that are common in biomedical imaging. The PSR algorithm was applied to the cortical surface segmentation of 460 vertebral bodies on 46 low-dose chest CT images, which can potentially be used for automated bone mineral density measurement and compression fracture detection. The target surface is realized by a closed triangular mesh, which guarantees the enclosure. The surface vertices of the triangular mesh representation are constrained along radial trajectories that are uniformly distributed in 3D angle space. The segmentation is accomplished by determining, for each radial trajectory, the location of its intersection with the target surface. The surface is first initialized based on an input high-confidence boundary image and then resolved progressively based on a dynamic attraction map, in order of decreasing degree of evidence regarding the target surface location. In the visual evaluation, the algorithm achieved acceptable segmentation for 99.35% of vertebral bodies. Quantitative evaluation was performed on 46 vertebral bodies and achieved an overall mean Dice coefficient of 0.939 (max = 0.957, min = 0.906, standard deviation = 0.011) using manual annotations as the ground truth. Both visual and quantitative evaluations demonstrate encouraging performance of the PSR algorithm. This novel surface resolution strategy provides uniform angular resolution for the segmented surface, with computational complexity and runtime that scale linearly with the total number of vertices of the triangular mesh representation.

  10. Real-time respiratory phase matching between 2D fluoroscopic images and 3D CT images for precise percutaneous lung biopsy.

    PubMed

    Weon, Chijun; Kim, Mina; Park, Chang Min; Ra, Jong Beom

    2017-08-20

    A 3D CT image is used along with real-time 2D fluoroscopic images in the state-of-the-art cone-beam CT system to guide percutaneous lung biopsy (PLB). To improve the guiding accuracy by compensating for respiratory motion, we propose an algorithm for real-time matching of 2D fluoroscopic images to multiple 3D CT images of different respiratory phases that is robust to the small movement and deformation due to cardiac motion. Based on the transformations obtained from non-rigid registration between two 3D CT images acquired at expiratory and inspiratory phases, we first generate sequential 3D CT images (or a 4D CT image) and the corresponding 2D digitally reconstructed radiographs (DRRs) of vessels. We then determine the 3D CT image corresponding to each real-time 2D fluoroscopic image by matching the fluoroscopic image to a 2D DRR. Quantitative evaluations performed with 20 clinical datasets show that registration errors of anatomical features between a 2D fluoroscopic image and its matched 2D DRR are less than 3 mm on average. Registration errors of a target lesion are determined to be roughly 3 mm on average for 10 datasets. We propose a real-time matching algorithm to compensate for respiratory motion between a 2D fluoroscopic image and 3D CT images of the lung, regardless of cardiac motion, based on a newly improved matching measure. The proposed algorithm can improve the accuracy of a guiding system for PLB by providing 3D images precisely registered to 2D fluoroscopic images in real time, without time-consuming respiratory-gated or cardiac-gated CT images. This article is protected by copyright. All rights reserved.

  11. 3D texture analysis of solitary pulmonary nodules using co-occurrence matrix from volumetric lung CT images

    NASA Astrophysics Data System (ADS)

    Dhara, Ashis Kumar; Mukhopadhyay, Sudipta; Khandelwal, Niranjan

    2013-02-01

    In this paper we investigate a new approach to texture feature extraction using co-occurrence matrices computed from volumetric lung CT images. Traditionally, texture analysis is performed in 2D, which is suitable for images acquired with 2D imaging modalities. 3D imaging modalities allow texture analysis of the full 3D object, and 3D texture features represent a 3D object more realistically. In this work, Haralick's texture features are extended to 3D and computed from volumetric data considering 26 neighbors. The optimal texture features to characterize the internal structure of Solitary Pulmonary Nodules (SPN) are selected based on area under the curve (AUC) values of the ROC curve and p-values from a 2-tailed Student's t-test. The selected 3D texture features representing SPN can be used in efficient Computer-Aided Diagnostic (CAD) design, which plays an important role in fast and accurate lung cancer screening. The reduced number of input features to the CAD system decreases the computational time and the classification errors caused by irrelevant features. In the present work, SPN are classified from Ground Glass Nodules (GGN) using an Artificial Neural Network (ANN) classifier, considering the top five 3D texture features and the top five 2D texture features separately. The classification is performed on 92 SPN and 25 GGN from the Image Database Resource Initiative (IDRI) public database, and the classification accuracies using 3D texture features and 2D texture features are 97.17% and 89.1%, respectively.
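
    A small illustration of a 3D gray-level co-occurrence matrix and two Haralick features (contrast and energy); looping over the 13 unique offsets covers the 26-voxel neighborhood mentioned above. This is a generic sketch rather than the authors' code, and the quantization to 32 gray levels is an assumption.

```python
import numpy as np

def glcm_3d(vol, offset, levels=32):
    """Symmetric gray-level co-occurrence matrix for one 3D offset.
    `vol` is quantized to `levels` gray levels; `offset` is e.g. (1, 0, 0)."""
    q = np.floor(levels * (vol - vol.min()) / (np.ptp(vol) + 1e-8)).astype(int)
    q = np.clip(q, 0, levels - 1)
    dz, dy, dx = offset
    a = q[max(0, -dz):q.shape[0] - max(0, dz),
          max(0, -dy):q.shape[1] - max(0, dy),
          max(0, -dx):q.shape[2] - max(0, dx)]
    b = q[max(0, dz):q.shape[0] - max(0, -dz),
          max(0, dy):q.shape[1] - max(0, -dy),
          max(0, dx):q.shape[2] - max(0, -dx)]
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)   # count co-occurring level pairs
    glcm += glcm.T                               # make the matrix symmetric
    return glcm / glcm.sum()

def haralick_contrast_energy(p):
    """Two classic Haralick features from a normalized co-occurrence matrix."""
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()
    energy = (p ** 2).sum()
    return contrast, energy
```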

  12. Algorithm of pulmonary emphysema extraction using thoracic 3-D CT images

    NASA Astrophysics Data System (ADS)

    Saita, Shinsuke; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Ohmatsu, Hironobu; Tominaga, Keigo; Eguchi, Kenji; Moriyama, Noriyuki

    2008-03-01

    The number of emphysema patients tends to increase due to aging and smoking. Emphysema destroys the alveoli and the damage cannot be repaired, so early detection is essential. The CT value of lung tissue decreases as the lung structure is destroyed, falling below that of normal lung; such low-density absorption regions are referred to as Low Attenuation Areas (LAA). Conventionally, LAA has been extracted by simple thresholding. However, CT values fluctuate with the measurement conditions and contain various bias components such as inspiration, expiration and congestion, and these bias components must be considered when extracting LAA. We propose an LAA extraction algorithm that removes these bias components. The algorithm was applied to a phantom image and then to low-dose CT scans (normal: 30 cases, obstructive lung disease: 26 cases), from which we extracted early-stage LAA and quantitatively analyzed the lung lobes using lung structure.
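
    For reference, the conventional simple-thresholding baseline mentioned above can be written in a few lines; the -950 HU cutoff is a commonly used LAA threshold (an assumption here, not a value taken from this abstract), and the bias-component removal that is the paper's actual contribution is not reproduced.

```python
import numpy as np

def laa_percentage(ct_hu, lung_mask, threshold_hu=-950):
    """Percentage of lung voxels below the LAA threshold (e.g. -950 HU)."""
    lung_vox = ct_hu[lung_mask > 0]
    return 100.0 * float((lung_vox < threshold_hu).sum()) / max(lung_vox.size, 1)
```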

  13. Clinical significance of creative 3D-image fusion across multimodalities [PET+CT+MR] based on characteristic coregistration.

    PubMed

    Peng, Matthew Jian-qiao; Ju, Xiangyang; Khambay, Balvinder S; Ayoub, Ashraf F; Chen, Chin-Tu; Bai, Bo

    2012-03-01

    To investigate a 2-dimensional (2D) registration approach based on characteristic localization to achieve 3-dimensional (3D) fusion of PET, CT and MR images one by one. A cube-oriented scheme of "9-point & 3-plane" for co-registration design was verified to be geometrically practical. After acquiring DICOM data of PET/CT/MR (directed by the radiotracer 18F-FDG etc.), through 3D reconstruction and virtual dissection, human internal feature points were sorted and combined with preselected external feature points for the matching process. Following the procedure of feature extraction and image mapping, "picking points to form planes" and "picking planes for segmentation" were executed. Eventually, image fusion was implemented on a real-time workstation (Mimics) based on auto-fusion techniques, the so-called "information exchange" and "signal overlay". The 2D and 3D images fused across the modalities [CT+MR], [PET+MR], [PET+CT] and [PET+CT+MR] were tested on data from patients suffering from tumors. Complementary 2D/3D images simultaneously presenting metabolic activities and anatomic structures were created, with detection rates of 70%, 56%, 54% (or 98%) and 44%, respectively, with no statistically significant difference among them. Given that no hybrid detector integrating the triple modality [PET+CT+MR] is currently available internationally, this sort of multimodality fusion is doubtlessly an essential complement to the existing capabilities of single-modality imaging.
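
    The abstract does not spell out how the matched feature points drive the fusion; below is a generic least-squares rigid alignment (Kabsch/SVD) of corresponding landmark points, which is the standard building block for point-based co-registration schemes such as a "9-point" design. The function name and the assumption of known point correspondences are illustrative.

```python
import numpy as np

def rigid_landmark_registration(src, dst):
    """Least-squares rigid transform (R, t) mapping `src` points onto `dst`.
    src, dst: (N, 3) arrays of corresponding landmarks."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # avoid a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```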

  14. Self-calibration of cone-beam CT geometry using 3D-2D image registration.

    PubMed

    Ouadah, S; Stayman, J W; Gang, G J; Ehtiati, T; Siewerdsen, J H

    2016-04-07

    Robotic C-arms are capable of complex orbits that can increase field of view, reduce artifacts, improve image quality, and/or reduce dose; however, it can be challenging to obtain accurate, reproducible geometric calibration required for image reconstruction for such complex orbits. This work presents a method for geometric calibration for an arbitrary source-detector orbit by registering 2D projection data to a previously acquired 3D image. It also yields a method by which calibration of simple circular orbits can be improved. The registration uses a normalized gradient information similarity metric and the covariance matrix adaptation-evolution strategy optimizer for robustness against local minima and changes in image content. The resulting transformation provides a 'self-calibration' of system geometry. The algorithm was tested in phantom studies using both a cone-beam CT (CBCT) test-bench and a robotic C-arm (Artis Zeego, Siemens Healthcare) for circular and non-circular orbits. Self-calibration performance was evaluated in terms of the full-width at half-maximum (FWHM) of the point spread function in CBCT reconstructions, the reprojection error (RPE) of steel ball bearings placed on each phantom, and the overall quality and presence of artifacts in CBCT images. In all cases, self-calibration improved the FWHM-e.g. on the CBCT bench, FWHM  =  0.86 mm for conventional calibration compared to 0.65 mm for self-calibration (p  <  0.001). Similar improvements were measured in RPE-e.g. on the robotic C-arm, RPE  =  0.73 mm for conventional calibration compared to 0.55 mm for self-calibration (p  <  0.001). Visible improvement was evident in CBCT reconstructions using self-calibration, particularly about high-contrast, high-frequency objects (e.g. temporal bone air cells and a surgical needle). The results indicate that self-calibration can improve even upon systems with presumably accurate geometric calibration and is
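
    A sketch of a gradient-based similarity between a measured 2D projection and a DRR, in the spirit of the normalized gradient information metric used above; the exact metric and the CMA-ES search over the projection geometry are not reproduced, so this is an illustrative stand-in rather than the authors' implementation.

```python
import numpy as np

def gradient_correlation(fixed, moving, eps=1e-8):
    """Gradient-based similarity between two 2D images (projection vs. DRR):
    the Pearson correlation of the gradient images, averaged over both axes."""
    def corr(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return (a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + eps)
    g0_f, g1_f = np.gradient(fixed)
    g0_m, g1_m = np.gradient(moving)
    return 0.5 * (corr(g0_f, g0_m) + corr(g1_f, g1_m))
```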

  15. Self-calibration of cone-beam CT geometry using 3D-2D image registration

    NASA Astrophysics Data System (ADS)

    Ouadah, S.; Stayman, J. W.; Gang, G. J.; Ehtiati, T.; Siewerdsen, J. H.

    2016-04-01

    Robotic C-arms are capable of complex orbits that can increase field of view, reduce artifacts, improve image quality, and/or reduce dose; however, it can be challenging to obtain accurate, reproducible geometric calibration required for image reconstruction for such complex orbits. This work presents a method for geometric calibration for an arbitrary source-detector orbit by registering 2D projection data to a previously acquired 3D image. It also yields a method by which calibration of simple circular orbits can be improved. The registration uses a normalized gradient information similarity metric and the covariance matrix adaptation-evolution strategy optimizer for robustness against local minima and changes in image content. The resulting transformation provides a ‘self-calibration’ of system geometry. The algorithm was tested in phantom studies using both a cone-beam CT (CBCT) test-bench and a robotic C-arm (Artis Zeego, Siemens Healthcare) for circular and non-circular orbits. Self-calibration performance was evaluated in terms of the full-width at half-maximum (FWHM) of the point spread function in CBCT reconstructions, the reprojection error (RPE) of steel ball bearings placed on each phantom, and the overall quality and presence of artifacts in CBCT images. In all cases, self-calibration improved the FWHM—e.g. on the CBCT bench, FWHM  =  0.86 mm for conventional calibration compared to 0.65 mm for self-calibration (p  <  0.001). Similar improvements were measured in RPE—e.g. on the robotic C-arm, RPE  =  0.73 mm for conventional calibration compared to 0.55 mm for self-calibration (p  <  0.001). Visible improvement was evident in CBCT reconstructions using self-calibration, particularly about high-contrast, high-frequency objects (e.g. temporal bone air cells and a surgical needle). The results indicate that self-calibration can improve even upon systems with presumably accurate geometric calibration and is

  16. Supervised recursive segmentation of volumetric CT images for 3D reconstruction of lung and vessel tree.

    PubMed

    Li, Xuanping; Wang, Xue; Dai, Yixiang; Zhang, Pengbo

    2015-12-01

    Three-dimensional reconstruction of the lung and vessel tree is of great significance for 3D observation and quantitative analysis of lung diseases. This paper presents non-sheltered 3D models of the lung and vessel tree based on a supervised semi-3D lung tissue segmentation method. A recursive strategy based on geometric active contours is proposed, instead of the "coarse-to-fine" framework in the existing literature, to extract lung tissues from the volumetric CT slices. In this model, the segmentation of the current slice is supervised by the result of the previous slice, exploiting the slight changes of lung tissue between adjacent slices. Through this mechanism, lung tissues in all the slices are segmented quickly and accurately. The serious problems of left and right lung fusion caused by partial volume effects, and of segmenting pleural nodules, can also be settled during the semi-3D process. The proposed scheme is evaluated on fifteen scans, from eight healthy participants and seven participants suffering from early-stage lung tumors. The results validate the good performance of the proposed method compared with the "coarse-to-fine" framework. The segmented datasets are used to reconstruct the non-sheltered 3D models of the lung and vessel tree.
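
    A much simplified stand-in for the supervised slice-to-slice idea (not the geometric active contour model of the paper): each slice is thresholded for air-like voxels and only the connected components overlapping the previous slice's lung mask are kept, so the previous result supervises the current one. The -400 HU threshold and function names are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def propagate_slice(prev_mask, ct_slice, air_hu=-400):
    """Segment the lung in the current slice, supervised by the previous mask:
    threshold air-like voxels, then keep only the connected components that
    overlap the previous slice's lung mask."""
    candidate = ct_slice < air_hu
    labels, _ = ndimage.label(candidate)
    keep = np.unique(labels[prev_mask & candidate])
    keep = keep[keep > 0]
    return np.isin(labels, keep)

def segment_volume(ct, first_mask):
    """Run the recursive slice-by-slice propagation through the whole volume."""
    masks = [first_mask]
    for z in range(1, ct.shape[0]):
        masks.append(propagate_slice(masks[-1], ct[z]))
    return np.stack(masks)
```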

  17. A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images

    PubMed Central

    Yang, Qiyao; Wang, Zhiguo; Zhang, Guoxu

    2017-01-01

    The PET and CT fusion image, combining anatomical and functional information, has important clinical value. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithreaded registration method based on contour point clouds for 3D whole-body PET and CT images. Firstly, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are proposed to preprocess the CT and PET images, respectively. Next, a new automated trunk slice extraction method is presented for extracting feature point clouds. Finally, the multithreaded Iterative Closest Point algorithm is adopted to drive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246~286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method, with a lower negative normalized correlation (NC = −0.933) on feature images and a smaller Euclidean distance error (ED = 2.826) on landmark points, outperforming the source data (NC = −0.496, ED = 25.847) and the compared method (NC = −0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one. PMID:28316979
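
    A minimal rigid ICP loop over two contour point clouds, illustrating the nearest-neighbour matching and transform re-estimation that ICP alternates between; the paper's multithreaded, affine variant and its preprocessing (GFS, DTD, trunk-slice extraction) are not shown, so names and parameters here are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_rigid(source, target, n_iter=50):
    """Minimal rigid ICP: repeatedly match each source point to its nearest
    target point and re-estimate the best rigid transform (Kabsch)."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    for _ in range(n_iter):
        _, idx = tree.query(src)
        matched = target[idx]
        src_c, dst_c = src.mean(0), matched.mean(0)
        H = (src - src_c).T @ (matched - dst_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = dst_c - R @ src_c
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```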

  18. High precision localization of intracerebral hemorrhage based on 3D MPR on head CT images

    NASA Astrophysics Data System (ADS)

    Sun, Jianyong; Hou, Xiaoshuai; Sun, Shujie; Zhang, Jianguo

    2017-03-01

    The key step in minimally invasive intracerebral hemorrhage surgery is precisely locating the hematoma in the brain before and during surgery, which can significantly improve the success rate of hematoma puncture. We designed a 3D computerized surgical planning (CSP) workstation to precisely locate brain hematomas based on the Multi-Planar Reconstruction (MPR) visualization technique. We used ten patients' CT/MR studies to verify our CSP intracerebral hemorrhage localization method. Based on the doctors' assessment and comparison with the results of manual measurements, the output of the CSP workstation for hematoma surgery is more precise and reliable than the manual procedure.

  19. 3D electron density imaging using single scattered x rays with application to breast CT and mammographic screening

    NASA Astrophysics Data System (ADS)

    van Uytven, Eric Peter

    Screening mammography is the current standard for detecting breast cancer. However, its fundamental disadvantage is that it projects a 3D object into a 2D image. Small lesions are difficult to detect when superimposed over layers of normal tissue. Commercial computed tomography (CT) produces a true 3D image yet has a limited role in mammography due to relatively low resolution and contrast. With the intent of enhancing mammography and breast CT, we have developed an algorithm which can produce 3D electron density images using a single projection. Imaging an object with x rays produces a characteristic scattered photon spectrum at the detector plane. A known incident beam spectrum, beam shape, and arbitrary 3D matrix of electron density values enable a theoretical scattered photon distribution to be calculated. An iterative minimization algorithm is used to adjust the electron density voxel matrix to reduce the differences between the theoretical and the experimentally measured distributions. The object is characterized by the converged electron density image. This technique has been validated in simulation using data produced by the EGSnrc Monte Carlo code system. At both mammographic and CT energies, a scanning polychromatic pencil beam was used to image breast tissue phantoms containing lesion-like inhomogeneities. The resulting Monte Carlo data are processed using a Nelder-Mead iterative algorithm (MATLAB) to produce the 3D matrix of electron density values. The resulting images have confirmed the ability of the algorithm to detect various 1x1x2.5 mm3 lesions with calcification content as low as 0.5% (p<0.005) at a dose comparable to mammography.

  20. Regularization Designs for Uniform Spatial Resolution and Noise Properties in Statistical Image Reconstruction for 3D X-ray CT

    PubMed Central

    Cho, Jang Hwan; Fessler, Jeffrey A.

    2014-01-01

    Statistical image reconstruction methods for X-ray computed tomography (CT) provide improved spatial resolution and noise properties over conventional filtered back-projection (FBP) reconstruction, along with other potential advantages such as reduced patient dose and artifacts. Conventional regularized image reconstruction leads to spatially variant spatial resolution and noise characteristics because of interactions between the system models and the regularization. Previous regularization design methods aiming to solve such issues mostly rely on circulant approximations of the Fisher information matrix that are very inaccurate for undersampled geometries like short-scan cone-beam CT. This paper extends the regularization method proposed in [1] to 3D cone-beam CT by introducing a hypothetical scanning geometry that helps address the sampling properties. The proposed regularization designs were compared with the original method in [1] with both phantom simulation and clinical reconstruction in 3D axial X-ray CT. The proposed regularization methods yield improved spatial resolution or noise uniformity in statistical image reconstruction for short-scan axial cone-beam CT. PMID:25361500

  1. Regularization designs for uniform spatial resolution and noise properties in statistical image reconstruction for 3-D X-ray CT.

    PubMed

    Cho, Jang Hwan; Fessler, Jeffrey A

    2015-02-01

    Statistical image reconstruction methods for X-ray computed tomography (CT) provide improved spatial resolution and noise properties over conventional filtered back-projection (FBP) reconstruction, along with other potential advantages such as reduced patient dose and artifacts. Conventional regularized image reconstruction leads to spatially variant spatial resolution and noise characteristics because of interactions between the system models and the regularization. Previous regularization design methods aiming to solve such issues mostly rely on circulant approximations of the Fisher information matrix that are very inaccurate for undersampled geometries like short-scan cone-beam CT. This paper extends the regularization method proposed in [1] to 3-D cone-beam CT by introducing a hypothetical scanning geometry that helps address the sampling properties. The proposed regularization designs were compared with the original method in [1] with both phantom simulation and clinical reconstruction in 3-D axial X-ray CT. The proposed regularization methods yield improved spatial resolution or noise uniformity in statistical image reconstruction for short-scan axial cone-beam CT.

  2. US-CT 3D dual imaging by mutual display of the same sections for depicting minor changes in hepatocellular carcinoma.

    PubMed

    Fukuda, Hiroyuki; Ito, Ryu; Ohto, Masao; Sakamoto, Akio; Otsuka, Masayuki; Togawa, Akira; Miyazaki, Masaru; Yamagata, Hitoshi

    2012-09-01

    The purpose of this study was to evaluate the usefulness of ultrasound-computed tomography (US-CT) 3D dual imaging for the detection of small extranodular growths of hepatocellular carcinoma (HCC). The clinical and pathological profiles of 10 patients with single nodular type HCC with extranodular growth who underwent a hepatectomy were evaluated using two-dimensional (2D) ultrasonography (US), three-dimensional (3D) US, 3D computed tomography (CT) and 3D US-CT dual images. Raw 3D data were converted to DICOM (Digital Imaging and Communication in Medicine) data using Echo to CT (Toshiba Medical Systems Corp., Tokyo, Japan), and the 3D DICOM data were directly transferred to the image analysis system (ZioM900, ZIOSOFT Inc., Tokyo, Japan). By inputting the angles (x, y, z) of the 3D CT volume data into the ZioM900, multiplanar reconstruction (MPR) images of the 3D CT data were displayed in a manner such that they resembled the conventional US images. Eleven extranodular growths were detected pathologically in the 10 cases. 2D US was capable of depicting only 2 of the 11 extranodular growths. 3D CT was capable of depicting 4 of the 11 extranodular growths. On the other hand, 3D US was capable of depicting 10 of the 11 extranodular growths, and the 3D US-CT dual images, which enable the dual analysis of the CT and US planes, revealed all 11 extranodular growths. In conclusion, US-CT 3D dual imaging may be useful for the detection of small extranodular growths.

  3. Sparse representation-based volumetric super-resolution algorithm for 3D CT images of reservoir rocks

    NASA Astrophysics Data System (ADS)

    Li, Zhengji; Teng, Qizhi; He, Xiaohai; Yue, Guihua; Wang, Zhengyong

    2017-09-01

    The parameter evaluation of reservoir rocks can help us to identify components and calculate permeability and other parameters, and it plays an important role in the petroleum industry. Until now, computed tomography (CT) has remained an irreplaceable way to acquire the microstructure of reservoir rocks. During evaluation and analysis, large samples and high-resolution images are required in order to obtain accurate results. Owing to the inherent limitations of CT, however, a large field of view results in low-resolution images, and high-resolution images entail a smaller field of view. Our method is a promising solution to these data collection limitations. In this study, a framework for sparse representation-based 3D volumetric super-resolution is proposed to enhance the resolution of 3D voxel images of reservoirs scanned with CT. A single reservoir structure and its downgraded model are divided into a large number of 3D cube pairs of voxels, and these cube pairs are used to calculate two overcomplete dictionaries and the sparse-representation coefficients in order to estimate the high-frequency component. Furthermore, to obtain better results, a new feature extraction method that combines BM4D with a Laplacian filter is introduced. In addition, we conducted a visual evaluation of the method, and used the PSNR and FSIM to evaluate it quantitatively.

  4. Combining population and patient-specific characteristics for prostate segmentation on 3D CT images

    NASA Astrophysics Data System (ADS)

    Ma, Ling; Guo, Rongrong; Tian, Zhiqiang; Venkataraman, Rajesh; Sarkar, Saradwata; Liu, Xiabi; Tade, Funmilayo; Schuster, David M.; Fei, Baowei

    2016-03-01

    Prostate segmentation on CT images is a challenging task. In this paper, we explore population and patient-specific characteristics for the segmentation of the prostate on CT images. Because population learning does not consider inter-patient variations and because patient-specific learning may not perform well for different patients, we combine the population and patient-specific information to improve segmentation performance. Specifically, we train a population model based on the population data and train a patient-specific model based on the manual segmentation of three slices of the new patient. We compute the similarity between the two models to estimate the influence of applicable population knowledge on the specific patient. By combining the patient-specific knowledge with this influence, we can capture the population and patient-specific characteristics to calculate the probability of a pixel belonging to the prostate. Finally, we smooth the prostate surface according to the prostate-density value of the pixels in the distance transform image. We conducted leave-one-out validation experiments on a set of CT volumes from 15 patients. Manual segmentation results from a radiologist serve as the gold standard for the evaluation. Experimental results show that our method achieved an average DSC of 85.1% as compared to the manual segmentation gold standard. This method outperformed the population learning method and the patient-specific learning approach alone. The CT segmentation method can have various applications in prostate cancer diagnosis and therapy.
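
    The combination step described above can be illustrated with a toy probability-map blend; the similarity-weighted form below is an assumption standing in for the paper's actual combination rule, and all names are hypothetical.

```python
import numpy as np

def combined_probability(pop_prob, pat_prob, similarity):
    """Blend population and patient-specific prostate probability maps.
    `similarity` in [0, 1] weights how much the population model applies to
    this patient (a stand-in for the model-similarity term in the abstract)."""
    return similarity * pop_prob + (1.0 - similarity) * pat_prob

def segment(pop_prob, pat_prob, similarity, threshold=0.5):
    """Binary prostate mask from the combined probability map."""
    return combined_probability(pop_prob, pat_prob, similarity) >= threshold
```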

  5. Combining Population and Patient-Specific Characteristics for Prostate Segmentation on 3D CT Images

    PubMed Central

    Ma, Ling; Guo, Rongrong; Tian, Zhiqiang; Venkataraman, Rajesh; Sarkar, Saradwata; Liu, Xiabi; Tade, Funmilayo; Schuster, David M.; Fei, Baowei

    2016-01-01

    Prostate segmentation on CT images is a challenging task. In this paper, we explore population and patient-specific characteristics for the segmentation of the prostate on CT images. Because population learning does not consider inter-patient variations and because patient-specific learning may not perform well for different patients, we combine the population and patient-specific information to improve segmentation performance. Specifically, we train a population model based on the population data and train a patient-specific model based on the manual segmentation of three slices of the new patient. We compute the similarity between the two models to estimate the influence of applicable population knowledge on the specific patient. By combining the patient-specific knowledge with this influence, we can capture the population and patient-specific characteristics to calculate the probability of a pixel belonging to the prostate. Finally, we smooth the prostate surface according to the prostate-density value of the pixels in the distance transform image. We conducted leave-one-out validation experiments on a set of CT volumes from 15 patients. Manual segmentation results from a radiologist serve as the gold standard for the evaluation. Experimental results show that our method achieved an average DSC of 85.1% as compared to the manual segmentation gold standard. This method outperformed the population learning method and the patient-specific learning approach alone. The CT segmentation method can have various applications in prostate cancer diagnosis and therapy. PMID:27660382

  6. Combining Population and Patient-Specific Characteristics for Prostate Segmentation on 3D CT Images.

    PubMed

    Ma, Ling; Guo, Rongrong; Tian, Zhiqiang; Venkataraman, Rajesh; Sarkar, Saradwata; Liu, Xiabi; Tade, Funmilayo; Schuster, David M; Fei, Baowei

    2016-02-27

    Prostate segmentation on CT images is a challenging task. In this paper, we explore population and patient-specific characteristics for the segmentation of the prostate on CT images. Because population learning does not consider inter-patient variations and because patient-specific learning may not perform well for different patients, we combine the population and patient-specific information to improve segmentation performance. Specifically, we train a population model based on the population data and train a patient-specific model based on the manual segmentation of three slices of the new patient. We compute the similarity between the two models to estimate the influence of applicable population knowledge on the specific patient. By combining the patient-specific knowledge with this influence, we can capture the population and patient-specific characteristics to calculate the probability of a pixel belonging to the prostate. Finally, we smooth the prostate surface according to the prostate-density value of the pixels in the distance transform image. We conducted leave-one-out validation experiments on a set of CT volumes from 15 patients. Manual segmentation results from a radiologist serve as the gold standard for the evaluation. Experimental results show that our method achieved an average DSC of 85.1% as compared to the manual segmentation gold standard. This method outperformed the population learning method and the patient-specific learning approach alone. The CT segmentation method can have various applications in prostate cancer diagnosis and therapy.

  7. Quantitative 3D micro-CT imaging of the human feto-placental vasculature in intrauterine growth restriction.

    PubMed

    Langheinrich, A C; Vorman, S; Seidenstücker, J; Kampschulte, M; Bohle, R M; Wienhard, J; Zygmunt, M

    2008-11-01

    Placental vascular development matches fetal growth and development. Quantification of the feto-placental vasculature in placentas from pregnancies complicated by intrauterine growth restriction (IUGR) has revealed confounding results. Therefore, the feto-placental vascular volume in IUGR placentas was assessed by 3D micro-computed tomography (micro-CT). Placental samples from IUGR (n=24) and healthy control placentas (n=40) were perfused in situ with Microfil or BaSO4, and randomly chosen samples were scanned by micro-CT. Using the 3D images, we quantified the feto-placental vascular volume fraction (VVF). A subanalysis was performed at three different levels, reaching from the chorionic plate artery (level A) to intermediate arteries (level B) and the capillary system (level C). Results were complemented by histology. The significance of differences in vascular volume measurements was tested with analysis of variance (ANOVA). Microfil-perfused placentas showed a total vascular volume fraction of 20.5+/-0.9% in healthy controls. In contrast, the VVF decreased to 7.9+/-0.9% (p<0.001) in IUGR placentas. Significant differences were found between Microfil- and BaSO4-perfused placentas in the vascular volume fraction using micro-CT and histology. Micro-CT demonstrated localized concentric luminal encroachments in the intermediate arteries of placentas complicated by IUGR. Micro-CT imaging is feasible for quantitative analysis of the feto-placental vascular tree in healthy controls and in pregnancies complicated by IUGR.

  8. Bone canalicular network segmentation in 3D nano-CT images through geodesic voting and image tessellation

    NASA Astrophysics Data System (ADS)

    Zuluaga, Maria A.; Orkisz, Maciej; Dong, Pei; Pacureanu, Alexandra; Gouttenoire, Pierre-Jean; Peyrin, Françoise

    2014-05-01

    Recent studies have emphasized the role of the bone lacuno-canalicular network (LCN) in the understanding of bone diseases such as osteoporosis. However, suitable methods to investigate this structure are lacking. The aim of this paper is to introduce a methodology to segment the LCN from three-dimensional (3D) synchrotron radiation nano-CT images. Segmentation of such structures is challenging due to several factors, such as limited contrast and signal-to-noise ratio, partial volume effects and the huge amount of data that needs to be processed, which restrains user interaction. We use an approach based on minimum-cost paths and geodesic voting, for which we propose a fully automatic initialization scheme based on a tessellation of the image domain. The centroids of pre-segmented lacunæ are used as Voronoi-tessellation seeds and as start-points of a fast-marching front propagation, whereas the end-points are distributed in the vicinity of each Voronoi-region boundary. This initialization scheme was devised to cope with complex biological structures involving cells interconnected by multiple thread-like, branching processes, whereas the seminal geodesic-voting method only copes with tree-like structures. Our method has been assessed quantitatively on phantom data and qualitatively on real datasets, demonstrating its feasibility. To the best of our knowledge, the presented 3D renderings of lacunæ interconnected by their canaliculi were achieved for the first time.

  9. Local plate/rod descriptors of 3D trabecular bone micro-CT images from medial axis topologic analysis

    SciTech Connect

    Peyrin, Francoise; Attali, Dominique; Chappard, Christine; Benhamou, Claude Laurent

    2010-08-15

    Purpose: Trabecular bone microarchitecture is made of a complex network of plate and rod structures evolving with age and disease. The purpose of this article is to propose a new 3D local analysis method for the quantitative assessment of parameters related to the geometry of trabecular bone microarchitecture. Methods: The method is based on the topologic classification of the medial axis of the 3D image into branches, rods, and plates. Thanks to the reversibility of the medial axis, the classification is next extended to the whole 3D image. Finally, the percentages of rods and plates as well as their mean thicknesses are calculated. The method was applied both to simulated test images and to 3D micro-CT images of human trabecular bone. Results: The classification of simulated phantoms made of plates and rods shows that the maximum error in the quantitative percentages of plates and rods is less than 6% and smaller than with the structure model index (SMI). Micro-CT images of human femoral bone taken in osteoporosis and early or advanced osteoarthritis were analyzed. Despite the large physiological variability, the present method avoids the underestimation of rods observed with other local methods. The relative percentages of rods and plates were not significantly different between the osteoarthritis and osteoporotic groups, whereas their absolute percentages were related to an increase in rod and plate thicknesses in advanced osteoarthritis, together with higher relative and absolute numbers of nodes. Conclusions: The proposed method is model-independent, robust to surface irregularities, and enables geometrical characterization of not only skeletal structures but entire 3D images. Its application provided more accurate results than the standard SMI on simple simulated phantoms, but the discrepancy observed on the advanced osteoarthritis group raises questions that will require further investigations. The systematic use of such a local method in the characterization of

  10. Efficient 3D texture feature extraction from CT images for computer-aided diagnosis of pulmonary nodules

    NASA Astrophysics Data System (ADS)

    Han, Fangfang; Wang, Huafeng; Song, Bowen; Zhang, Guopeng; Lu, Hongbing; Moore, William; Liang, Zhengrong; Zhao, Hong

    2014-03-01

    Texture features from chest CT images have become an important and efficient factor for the malignancy assessment of pulmonary nodules in Computer-Aided Diagnosis (CADx). In this paper, we focus on extracting as few efficient texture features as needed, which can be combined with other classical features (e.g. size, shape, growth rate, etc.) for assisting lung nodule diagnosis. Based on a typical texture-feature calculation algorithm, namely the Haralick features obtained from gray-tone spatial-dependence matrices, we calculated two-dimensional (2D) and three-dimensional (3D) Haralick features from the CT images of 905 nodules. All of the CT images were downloaded from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), which is the largest public chest database. The 3D Haralick feature model with thirteen directions contains more information from the relationships among neighboring voxels of different slices than the 2D features computed from only four directions. After comparing the efficiencies of 2D and 3D Haralick features applied to the diagnosis of nodules, a principal component analysis (PCA) algorithm was used to extract as few efficient texture features as needed. To achieve an objective assessment of the texture features, a support vector machine classifier was trained and tested repeatedly, one hundred times, and the statistical results of the classification experiments were described by an average receiver operating characteristic (ROC) curve. The mean value (0.8776) of the area under the ROC curves in our experiments shows that the two extracted 3D Haralick projected features have the potential to assist the classification of benign and malignant nodules.
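
    A sketch of the evaluation pipeline described above (PCA-reduced features, an SVM classifier, and the ROC AUC averaged over one hundred repeated splits), using scikit-learn; the feature matrix, the split protocol and all parameter values are assumptions for illustration, not the study's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def repeated_svm_auc(features, labels, n_components=2, n_repeats=100):
    """PCA-reduced texture features classified with an SVM; the AUC is
    averaged over repeated random train/test splits."""
    clf = make_pipeline(StandardScaler(), PCA(n_components=n_components),
                        SVC(kernel='rbf', probability=True))
    splitter = StratifiedShuffleSplit(n_splits=n_repeats, test_size=0.3,
                                      random_state=0)
    aucs = []
    for train, test in splitter.split(features, labels):
        clf.fit(features[train], labels[train])
        scores = clf.predict_proba(features[test])[:, 1]
        aucs.append(roc_auc_score(labels[test], scores))
    return float(np.mean(aucs))
```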

  11. Twin robotic x-ray system for 2D radiographic and 3D cone-beam CT imaging

    NASA Astrophysics Data System (ADS)

    Fieselmann, Andreas; Steinbrener, Jan; Jerebko, Anna K.; Voigt, Johannes M.; Scholz, Rosemarie; Ritschl, Ludwig; Mertelmeier, Thomas

    2016-03-01

    In this work, we provide an initial characterization of a novel twin robotic X-ray system. This system is equipped with two motor-driven telescopic arms carrying X-ray tube and flat-panel detector, respectively. 2D radiographs and fluoroscopic image sequences can be obtained from different viewing angles. Projection data for 3D cone-beam CT reconstruction can be acquired during simultaneous movement of the arms along dedicated scanning trajectories. We provide an initial evaluation of the 3D image quality based on phantom scans and clinical images. Furthermore, initial evaluation of patient dose is conducted. The results show that the system delivers high image quality for a range of medical applications. In particular, high spatial resolution enables adequate visualization of bone structures. This system allows 3D X-ray scanning of patients in standing and weight-bearing position. It could enable new 2D/3D imaging workflows in musculoskeletal imaging and improve diagnosis of musculoskeletal disorders.

  12. A novel 3D-printed phantom insert for 4D PET/CT imaging and simultaneous integrated boost radiotherapy.

    PubMed

    Cerviño, Laura; Soultan, Dima; Cornell, Mariel; Yock, Adam; Pettersson, Niclas; Song, William Y; Aguilera, Joseph; Advani, Sunil; Murphy, James; Hoh, Carl; James, Claude; Paravati, Anthony; Coope, Robin; Gill, Bradford; Moiseenko, Vitali

    2017-10-01

    To construct a 3D-printed phantom insert designed to mimic the variable PET tracer uptake seen in lung tumor volumes and a matching dosimetric insert to be used in simultaneous integrated boost (SIB) phantom studies, and to evaluate the design through end-to-end tests. A set of phantom inserts was designed and manufactured for a realistic representation of gated radiotherapy steps from 4D PET/CT scanning to dose delivery. A cylindrical phantom (φ80 × 120 mm) holds inserts for PET/CT scanning. The novel 3D printed insert dedicated to 4D PET/CT mimics high PET tracer uptake in the core and low uptake in the periphery. This insert is a variable-density porous cylinder (φ44.5 × 70.0 mm) of ABS-P430 thermoplastic, 3D printed by fused deposition modeling, with an inner (φ11 × 42 mm) cylindrical void. The square pores (1.8 × 1.8 mm² each) fill 50% of the outer volume, resulting in a 2:1 PET tracer concentration ratio in the void volume with respect to the porous volume. A matching cylindrical phantom insert is dedicated to validating gated radiotherapy. It contains eight peripheral holes and one central hole, matching the locations of the porous part and the void part of the 3D printed insert, respectively. These holes accommodate adaptors for a Farmer-type ion chamber and cell vials. End-to-end tests were designed for imaging, planning, and dose measurements, and were performed from 4D PET/CT scanning to transferring data to the planning system, target volume delineation, and dose measurements. 4D PET/CT scans of the phantom were acquired at different respiratory motion patterns and gating windows. A measured 2:1 18F-FDG concentration ratio between the inner void and the outer porous volume matched the 3D printed design. Measured dose in the dosimetric insert agreed well with the planned dose on the imaging insert, within 3% for the static phantom and within 5% for most breathing patterns. The novel 3D printed phantom insert mimics the variable PET tracer uptake typical of tumors

  13. 2D and 3D Terahertz Imaging and X-Rays CT for Sigillography Study

    NASA Astrophysics Data System (ADS)

    Fabre, M.; Durand, R.; Bassel, L.; Recur, B.; Balacey, H.; Bou Sleiman, J.; Perraud, J.-B.; Mounaix, P.

    2017-04-01

    Seals are part of our cultural heritage but the study of these objects is limited because of their fragility. Terahertz and X-Ray imaging are used to analyze a collection of wax seals from the fourteenth to eighteenth centuries. In this work, both techniques are compared in order to discuss their advantages and limits and their complementarity for conservation state study of the samples. Thanks to 3D analysis and reconstructions, defects and fractures are detected with an estimation of their depth position. The path from the parchment tongue inside the seals is also detected.

  14. 2D and 3D Terahertz Imaging and X-Rays CT for Sigillography Study

    NASA Astrophysics Data System (ADS)

    Fabre, M.; Durand, R.; Bassel, L.; Recur, B.; Balacey, H.; Bou Sleiman, J.; Perraud, J.-B.; Mounaix, P.

    2017-01-01

    Seals are part of our cultural heritage but the study of these objects is limited because of their fragility. Terahertz and X-Ray imaging are used to analyze a collection of wax seals from the fourteenth to eighteenth centuries. In this work, both techniques are compared in order to discuss their advantages and limits and their complementarity for conservation state study of the samples. Thanks to 3D analysis and reconstructions, defects and fractures are detected with an estimation of their depth position. The path from the parchment tongue inside the seals is also detected.

  15. Iterative mesh transformation for 3D segmentation of livers with cancers in CT images.

    PubMed

    Lu, Difei; Wu, Yin; Harris, Gordon; Cai, Wenli

    2015-07-01

    Segmentation of diseased liver remains a challenging task in clinical applications due to the high inter-patient variability in liver shapes, sizes and pathologies caused by cancers or other liver diseases. In this paper, we present a multi-resolution mesh segmentation algorithm for 3D segmentation of livers, called iterative mesh transformation that deforms the mesh of a region-of-interest (ROI) in a progressive manner by iterations between mesh transformation and contour optimization. Mesh transformation deforms the 3D mesh based on the deformation transfer model that searches the optimal mesh based on the affine transformation subjected to a set of constraints of targeting vertices. Besides, contour optimization searches the optimal transversal contours of the ROI by applying the dynamic-programming algorithm to the intersection polylines of the 3D mesh on 2D transversal image planes. The initial constraint set for mesh transformation can be defined by a very small number of targeting vertices, namely landmarks, and progressively updated by adding the targeting vertices selected from the optimal transversal contours calculated in contour optimization. This iterative 3D mesh transformation constrained by 2D optimal transversal contours provides an efficient solution to a progressive approximation of the mesh of the targeting ROI. Based on this iterative mesh transformation algorithm, we developed a semi-automated scheme for segmentation of diseased livers with cancers using as little as five user-identified landmarks. The evaluation study demonstrates that this semi-automated liver segmentation scheme can achieve accurate and reliable segmentation results with significant reduction of interaction time and efforts when dealing with diseased liver cases.

  16. Iterative Mesh Transformation for 3D Segmentation of Livers with Cancers in CT Images

    PubMed Central

    Lu, Difei; Wu, Yin; Harris, Gordon; Cai, Wenli

    2015-01-01

    Segmentation of diseased liver remains a challenging task in clinical applications due to the high inter-patient variability in liver shapes, sizes and pathologies caused by cancers or other liver diseases. In this paper, we present a multi-resolution mesh segmentation algorithm for 3D segmentation of livers, called iterative mesh transformation that deforms the mesh of a region-of-interest (ROI) in a progressive manner by iterations between mesh transformation and contour optimization. Mesh transformation deforms the 3D mesh based on the deformation transfer model that searches the optimal mesh based on the affine transformation subjected to a set of constraints of targeting vertices. Besides, contour optimization searches the optimal transversal contours of the ROI by applying the dynamic-programming algorithm to the intersection polylines of the 3D mesh on 2D transversal image planes. The initial constraint set for mesh transformation can be defined by a very small number of targeting vertices, namely landmarks, and progressively updated by adding the targeting vertices selected from the optimal transversal contours calculated in contour optimization. This iterative 3D mesh transformation constrained by 2D optimal transversal contours provides an efficient solution to a progressive approximation of the mesh of the targeting ROI. Based on this iterative mesh transformation algorithm, we developed a semi-automated scheme for segmentation of diseased livers with cancers using as little as five user-identified landmarks. The evaluation study demonstrates that this semiautomated liver segmentation scheme can achieve accurate and reliable segmentation results with significant reduction of interaction time and efforts when dealing with diseased liver cases. PMID:25728595

  17. Geodesic Distance Algorithm for Extracting the Ascending Aorta from 3D CT Images

    PubMed Central

    Jang, Yeonggul; Jung, Ho Yub; Hong, Youngtaek; Cho, Iksung; Shim, Hackjoon; Chang, Hyuk-Jae

    2016-01-01

    This paper presents a method for the automatic 3D segmentation of the ascending aorta from coronary computed tomography angiography (CCTA). The segmentation is performed in three steps. First, the initial seed points are selected by minimizing a newly proposed energy function across the Hough circles. Second, the ascending aorta is segmented by geodesic distance transformation. Third, the seed points are effectively transferred through the next axial slice by a novel transfer function. Experiments are performed using a database composed of 10 patients' CCTA images. For the experiment, the ground truths are annotated manually on the axial image slices by a medical expert. A comparative evaluation with state-of-the-art commercial aorta segmentation algorithms shows that our approach is computationally more efficient and accurate under the DSC (Dice Similarity Coefficient) measurements. PMID:26904151
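
    The seed-selection step above builds on Hough circles; the sketch below finds the strongest circular cross-section on an axial slice with scikit-image's circular Hough transform, a simplified stand-in for the paper's energy-minimizing seed selection. The radius range and the Canny edge-detection parameters are assumptions.

```python
import numpy as np
from skimage import feature, transform

def aorta_seed(ct_slice, radii=np.arange(10, 25)):
    """Find the strongest circular cross-section (candidate ascending aorta)
    on one axial slice; returns (row, col, radius) in pixels."""
    edges = feature.canny(ct_slice.astype(float), sigma=2.0)
    hspaces = transform.hough_circle(edges, radii)
    _, cx, cy, rad = transform.hough_circle_peaks(hspaces, radii,
                                                  total_num_peaks=1)
    return int(cy[0]), int(cx[0]), int(rad[0])
```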

  18. A visual data-mining approach using 3D thoracic CT images for classification between benign and malignant pulmonary nodules

    NASA Astrophysics Data System (ADS)

    Kawata, Yoshiki; Niki, Noboru; Ohamatsu, Hironobu; Kusumoto, Masahiko; Kakinuma, Ryutaro; Mori, Kiyoshi; Yamada, K.; Nishiyama, Hiroyuki; Eguchi, Kenji; Kaneko, Masahiro; Moriyama, Noriyuki

    2003-05-01

    This paper presents a visual data-mining approach to assist physicians in the classification between benign and malignant pulmonary nodules. This approach retrieves and displays nodules which exhibit morphological and internal profiles consistent with the nodule in question. It uses a three-dimensional (3-D) CT image database of pulmonary nodules for which the diagnosis is known. The central module of this approach performs analysis of the query nodule image and extraction of the features of interest: the shape, surrounding structure, and internal structure of the nodules. The nodule shape is characterized by principal axes, while the surrounding and internal structure is represented by the distribution pattern of CT density and 3-D curvature indexes. The nodule representation is then used with a similarity measure such as a correlation coefficient. For each query case, we sort all the nodules in the database from most to least similar. By applying the retrieval method to our database, we demonstrate its feasibility for searching similar 3-D nodule images.

  19. Pancreas segmentation from 3D abdominal CT images using patient-specific weighted subspatial probabilistic atlases

    NASA Astrophysics Data System (ADS)

    Karasawa, Kenichi; Oda, Masahiro; Hayashi, Yuichiro; Nimura, Yukitaka; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Rueckert, Daniel; Mori, Kensaku

    2015-03-01

    Abdominal organ segmentation from CT volumes is now widely used in computer-aided diagnosis and surgery assistance systems. Among the abdominal organs, the pancreas is especially difficult to segment because of large individual differences in its shape and position. In this paper, we propose a new pancreas segmentation method for 3D abdominal CT volumes using patient-specific weighted subspatial probabilistic atlases. First of all, we perform normalization of organ shapes in the training volumes and the input volume. We extract the Volume Of Interest (VOI) of the pancreas from the training volumes and the input volume. We divide each training VOI and the input VOI into cubic regions. We use a nonrigid registration method to register these cubic regions of the training VOIs to the corresponding regions of the input VOI. Based on the registration results, we calculate similarities between each cubic region of a training VOI and the corresponding region of the input VOI. We select the cubic regions of the training volumes having the top N similarities in each cubic region. We subspatially construct probabilistic atlases weighted by the similarities in each cubic region. After integrating these probabilistic atlases of the cubic regions into one, we perform a rough-to-precise segmentation of the pancreas using the atlas. The results of the experiments showed that utilizing the training volumes having the top N similarities in each cubic region led to good pancreas segmentation results. The Jaccard index and the average surface distance of the results were 58.9% and 2.04 mm on average, respectively.

  20. 3D segmentation of abdominal aorta from CT-scan and MR images.

    PubMed

    Duquette, Anthony Adam; Jodoin, Pierre-Marc; Bouchot, Olivier; Lalande, Alain

    2012-06-01

    We designed a generic method for segmenting the aneurysmal sac of an abdominal aortic aneurysm (AAA) from both multi-slice MR and CT-scan examinations. It is a semi-automatic method requiring little human intervention, based on graph cut theory, to segment the lumen interface and the aortic wall of AAAs. Our segmentation method works independently on MRI and CT-scan volumes and has been tested on a 44-patient dataset and 10 synthetic images. Segmentation and maximum diameter estimation were compared to manual tracing by 4 experts. An inter-observer study was performed in order to measure the variability range of a human observer. Based on three metrics (the maximum aortic diameter, the volume overlap and the Hausdorff distance), the variability of the results obtained with our method is shown to be similar to that of a human operator, both for the lumen interface and the aortic wall. As will be shown, the average distance obtained with our method is less than one standard deviation away from each expert, both for healthy subjects and for patients with AAA. Our semi-automatic method provides reliable contours of the abdominal aorta from CT-scan or MRI, allowing rapid and reproducible evaluations of AAA.
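
    Two of the three evaluation metrics named above (volume overlap and Hausdorff distance) have standard definitions that can be computed directly; the snippet below is a generic implementation of those definitions, not code from the study, and the maximum-diameter estimation is not shown.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Volume overlap (Dice coefficient) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + 1e-8)

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two contours/surfaces given as
    (N, 3) arrays of point coordinates (e.g. in mm)."""
    return max(directed_hausdorff(points_a, points_b)[0],
               directed_hausdorff(points_b, points_a)[0])
```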

  1. A user-friendly nano-CT image alignment and 3D reconstruction platform based on LabVIEW

    NASA Astrophysics Data System (ADS)

    Wang, Sheng-Hao; Zhang, Kai; Wang, Zhi-Li; Gao, Kun; Wu, Zhao; Zhu, Pei-Ping; Wu, Zi-Yu

    2015-01-01

    X-ray computed tomography at the nanometer scale (nano-CT) offers a wide range of applications in scientific and industrial areas. Here we describe a reliable, user-friendly, and fast software package based on LabVIEW that allows us to perform all procedures after the acquisition of raw projection images in order to obtain the inner structure of the investigated sample. A suitable image alignment process addressing misalignment problems among image series, due to mechanical manufacturing errors, thermal expansion, and other external factors, has been implemented, together with a novel fast parallel-beam 3D reconstruction procedure developed ad hoc to perform the tomographic reconstruction. We obtained remarkably improved reconstruction results at the Beijing Synchrotron Radiation Facility after the image calibration, confirming the fundamental role of this image alignment procedure, which minimizes the unwanted blur and additional streaking artifacts that are otherwise present in reconstructed slices. Moreover, this nano-CT image alignment and its associated 3D reconstruction procedure are fully based on LabVIEW routines, significantly reducing the data post-processing cycle and thus making the users' work faster and easier during experimental runs.

  2. Towards real-time 3D US to CT bone image registration using phase and curvature feature based GMM matching.

    PubMed

    Brounstein, Anna; Hacihaliloglu, Ilker; Guy, Pierre; Hodgson, Antony; Abugharbieh, Rafeef

    2011-01-01

    In order to use pre-operatively acquired computed tomography (CT) scans to guide surgical tool movements in orthopaedic surgery, the CT scan must first be registered to the patient's anatomy. Three-dimensional (3D) ultrasound (US) could potentially be used for this purpose if the registration process could be made sufficiently automatic, fast and accurate, but existing methods have difficulties meeting one or more of these criteria. We propose a near-real-time US-to-CT registration method that matches point clouds extracted from local phase images with points selected in part on the basis of local curvature. The point clouds are represented as Gaussian Mixture Models (GMM) and registration is achieved by minimizing the statistical dissimilarity between the GMMs using an L2 distance metric. We present quantitative and qualitative results on both phantom and clinical pelvis data and show a mean registration time of 2.11 s with a mean accuracy of 0.49 mm.
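
    The L2 distance between two Gaussian mixtures built on point clouds has a closed form when the components are isotropic with a shared width; the sketch below implements that objective, which a registration would then minimize over transform parameters (e.g. with scipy.optimize). It is a generic formulation, not the authors' code, and sigma is an illustrative parameter.

```python
import numpy as np

def gauss_overlap(pts_a, pts_b, sigma):
    """Integral of the product of two isotropic GMMs (equal weights, shared
    sigma) whose components are centered on pts_a and pts_b."""
    d = pts_a.shape[1]
    diff = pts_a[:, None, :] - pts_b[None, :, :]
    sq = (diff ** 2).sum(-1)
    norm = (4.0 * np.pi * sigma ** 2) ** (-d / 2.0)
    return norm * np.exp(-sq / (4.0 * sigma ** 2)).sum() / (len(pts_a) * len(pts_b))

def gmm_l2_distance(pts_a, pts_b, sigma=1.0):
    """L2 distance between the two GMMs: ||f||^2 - 2<f, g> + ||g||^2."""
    return (gauss_overlap(pts_a, pts_a, sigma)
            - 2.0 * gauss_overlap(pts_a, pts_b, sigma)
            + gauss_overlap(pts_b, pts_b, sigma))
```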

  3. 3D printing for orthopedic applications: from high resolution cone beam CT images to life size physical models

    NASA Astrophysics Data System (ADS)

    Jackson, Amiee; Ray, Lawrence A.; Dangi, Shusil; Ben-Zikri, Yehuda K.; Linte, Cristian A.

    2017-03-01

    With increasing resolution in image acquisition, this project explores the capability of 3D printing to faithfully reproduce the detail and features depicted in medical images. To improve the safety and efficiency of orthopedic surgery and spatial conceptualization in training and education, this project focused on generating virtual models of orthopedic anatomy from clinical-quality computed tomography (CT) image datasets and manufacturing life-size physical models of the anatomy using 3D printing tools. Beginning with raw micro-CT data, several image segmentation techniques, including thresholding, edge recognition, and region-growing algorithms available in packages such as ITK-SNAP, MITK, or Mimics, were utilized to separate bone from the surrounding soft tissue. After converting the resulting data to a standard 3D printing format, stereolithography (STL), the STL file was edited using Meshlab, Netfabb, and Meshmixer. The editing process was necessary to ensure a fully connected surface (no loose elements), positive volume with manifold geometry (geometry possible in the 3D physical world), and a single, closed shell. The resulting surface was then imported into slicing software to scale and orient it for printing on a Flashforge Creator Pro. In printing, relationships between orientation, print bed volume, model quality, material use and cost, and print time were considered. We generated anatomical models of the hand, elbow, knee, ankle, and foot from low-dose high-resolution cone-beam CT images acquired using the soon-to-be-released scanner developed by Carestream, as well as scaled models of the skeletal anatomy of the arm and leg, together with life-size models of the hand and foot.
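
    As an illustration of the image-to-printable-surface step described above, the sketch below extracts a bone isosurface with marching cubes (scikit-image) and writes a plain ASCII STL file; the threshold and voxel spacing are assumptions, and the mesh-repair stage (manifoldness, closed shells) handled in Meshlab/Netfabb/Meshmixer is not reproduced.

```python
import numpy as np
from skimage import measure

def bone_surface_to_stl(ct_volume, path, bone_hu=300, spacing=(1.0, 1.0, 1.0)):
    """Extract a bone isosurface from a CT volume with marching cubes and
    write it as an ASCII STL file (no external STL library required)."""
    verts, faces, _, _ = measure.marching_cubes(ct_volume, level=bone_hu,
                                                spacing=spacing)
    with open(path, 'w') as f:
        f.write('solid bone\n')
        for tri in faces:
            a, b, c = verts[tri]
            n = np.cross(b - a, c - a)
            n = n / (np.linalg.norm(n) + 1e-12)
            f.write(f'facet normal {n[0]} {n[1]} {n[2]}\n outer loop\n')
            for v in (a, b, c):
                f.write(f'  vertex {v[0]} {v[1]} {v[2]}\n')
            f.write(' endloop\nendfacet\n')
        f.write('endsolid bone\n')
```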

  4. 3D photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Carson, Jeffrey J. L.; Roumeliotis, Michael; Chaudhary, Govind; Stodilka, Robert Z.; Anastasio, Mark A.

    2010-06-01

    Our group has concentrated on the development of a 3D photoacoustic imaging system for biomedical imaging research. The technology employs a sparse parallel detection scheme and specialized reconstruction software to obtain 3D optical images using a single laser pulse. With this technology we have been able to capture 3D movies of translating point targets and rotating line targets. The current limitation of our 3D photoacoustic imaging approach is its inability to reconstruct complex objects in the field of view. This is primarily due to the relatively small number of projections used to reconstruct objects. However, in many photoacoustic imaging situations, only a few objects may be present in the field of view and these objects may have very high contrast compared to the background. That is, the objects have sparse properties. Therefore, our work had two objectives: (i) to utilize mathematical tools to evaluate 3D photoacoustic imaging performance, and (ii) to test image reconstruction algorithms that prefer sparseness in the reconstructed images. Our approach was to utilize singular value decomposition techniques to study the imaging operator of the system and evaluate the complexity of objects that could potentially be reconstructed. We also compared the performance of two image reconstruction algorithms (algebraic reconstruction and l1-norm techniques) at reconstructing objects of increasing sparseness. We observed that for a 15-element detection scheme, the number of measurable singular vectors representative of the imaging operator was consistent with the demonstrated ability to reconstruct point and line targets in the field of view. We also observed that the l1-norm reconstruction technique, which is known to prefer sparseness in reconstructed images, was superior to the algebraic reconstruction technique. Based on these findings, we concluded (i) that singular value decomposition of the imaging operator provides valuable insight into the capabilities of

  5. Efficient and robust 3D CT image reconstruction based on total generalized variation regularization using the alternating direction method.

    PubMed

    Chen, Jianlin; Wang, Linyuan; Yan, Bin; Zhang, Hanming; Cheng, Genyang

    2015-01-01

    Iterative reconstruction algorithms for computed tomography (CT) using total variation regularization, based on a piecewise-constant assumption, can produce accurate, robust, and stable results. Nonetheless, this approach is often subject to staircase artefacts and the loss of fine details. To overcome these shortcomings, we introduce a family of image regularization penalties called total generalized variation (TGV) for the effective production of high-quality images from incomplete or noisy projection data in 3D reconstruction. We propose a new, fast alternating direction minimization algorithm to solve CT image reconstruction problems with TGV regularization. Based on the theory of sparse-view image reconstruction and the augmented Lagrangian framework, the TGV regularization term is introduced into the CT reconstruction problem and split into three independent subproblems by introducing auxiliary variables. The algorithm applies a local linearization and proximity technique to make FFT-based calculation of the analytical solutions in the frequency domain feasible, thereby significantly reducing its complexity. Experiments with various 3D datasets corresponding to incomplete projection data demonstrate the advantage of the proposed algorithm in preserving fine details and overcoming the staircase effect. The computation cost also suggests that the proposed algorithm is applicable to, and effective for, CBCT imaging. Its computational efficiency and achievable resolution should be investigated further in application-oriented research.
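
    A flavour of the algorithmic ingredients mentioned above (variable splitting, an augmented-Lagrangian alternating-direction scheme, and FFT-based solution of the quadratic subproblem under periodic boundary conditions) can be given with a much simpler relative of TGV: plain total-variation denoising of a 2D image. This is only a sketch, not the paper's TGV reconstruction algorithm; the penalty weight, penalty parameter, and iteration count are arbitrary choices.

```python
import numpy as np

def tv_denoise_admm(b, lam=0.1, rho=1.0, iters=100):
    """2D total-variation denoising, min_x 0.5||x-b||^2 + lam*||grad x||_1, via ADMM.
    The x-update is solved exactly in the Fourier domain because periodic finite
    differences are convolutions."""
    grad = lambda x: np.stack([np.roll(x, -1, 0) - x, np.roll(x, -1, 1) - x])
    grad_T = lambda p: (np.roll(p[0], 1, 0) - p[0]) + (np.roll(p[1], 1, 1) - p[1])

    # Fourier transfer function of (I + rho * D^T D) for periodic forward differences.
    ky = 2 - 2 * np.cos(2 * np.pi * np.fft.fftfreq(b.shape[0]))[:, None]
    kx = 2 - 2 * np.cos(2 * np.pi * np.fft.fftfreq(b.shape[1]))[None, :]
    denom = 1.0 + rho * (ky + kx)

    x = b.copy()
    z = np.zeros((2,) + b.shape)
    u = np.zeros_like(z)
    for _ in range(iters):
        rhs = b + rho * grad_T(z - u)
        x = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))   # exact quadratic solve
        dx = grad(x)
        z = np.sign(dx + u) * np.maximum(np.abs(dx + u) - lam / rho, 0.0)  # shrinkage
        u = u + dx - z
    return x

# Noisy piecewise-constant phantom: TV removes noise while keeping the edge sharp.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
noisy = img + 0.2 * np.random.default_rng(1).standard_normal(img.shape)
clean = tv_denoise_admm(noisy, lam=0.15)
```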

  6. Tracking time interval changes of pulmonary nodules on follow-up 3D CT images via image-based risk score of lung cancer

    NASA Astrophysics Data System (ADS)

    Kawata, Y.; Niki, N.; Ohmatsu, H.; Kusumoto, M.; Tsuchida, T.; Eguchi, K.; Kaneko, M.; Moriyama, N.

    2013-03-01

    In this paper, we present a computer-aided follow-up (CAF) scheme to support physicians in tracking interval changes of pulmonary nodules on three-dimensional (3D) CT images and in deciding treatment strategies without under- or over-treatment. Our scheme analyzes CT histograms to evaluate the volumetric distribution of CT values within pulmonary nodules. A variational Bayesian mixture modeling framework translates the image-derived features into an image-based risk score for predicting patient recurrence-free survival. By applying our scheme to follow-up 3D CT images of pulmonary nodules, we demonstrate the potential usefulness of the CAF scheme, which provides trajectories that characterize the interval changes of pulmonary nodules over time.
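
    The histogram-analysis step above, fitting a mixture model to the distribution of CT values inside a nodule, can be sketched with scikit-learn's variational Bayesian Gaussian mixture. This is an illustrative stand-in for the paper's modeling framework, not a reproduction of it; the number of components and the synthetic HU values are assumptions.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Synthetic CT values (HU) inside a part-solid nodule: a ground-glass component
# around -600 HU and a solid component around 40 HU.
rng = np.random.default_rng(0)
hu = np.concatenate([rng.normal(-600, 80, 1500), rng.normal(40, 60, 500)])[:, None]

# Variational Bayesian mixture: superfluous components get weights close to zero,
# so the effective number of tissue classes is inferred from the data.
vbgmm = BayesianGaussianMixture(n_components=5, weight_concentration_prior=1e-2,
                                max_iter=500, random_state=0).fit(hu)

# Per-component volume fractions and mean densities, usable as image-derived
# features for a downstream recurrence-risk model.
for w, mu in sorted(zip(vbgmm.weights_, vbgmm.means_.ravel()), reverse=True):
    if w > 0.01:
        print(f"weight {w:.2f}  mean {mu:7.1f} HU")
```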

  7. Automated torso organ segmentation from 3D CT images using structured perceptron and dual decomposition

    NASA Astrophysics Data System (ADS)

    Nimura, Yukitaka; Hayashi, Yuichiro; Kitasaka, Takayuki; Mori, Kensaku

    2015-03-01

    This paper presents a method for torso organ segmentation from abdominal CT images using a structured perceptron and dual decomposition. Many methods have been proposed for automated extraction of organ regions from volumetric medical images; however, their empirical parameters must be tuned to obtain precise organ regions. This paper proposes an organ segmentation method using structured output learning. Our method utilizes a graphical model and binary features that represent the relationship between voxel intensities and organ labels. We optimize the weights of the graphical model with a structured perceptron and estimate the best organ labels for a given image by dynamic programming and dual decomposition. The experimental results revealed that the proposed method can extract organ regions automatically using structured output learning. The error of organ label estimation was 4.4%. The DICE coefficients of the left lung, right lung, heart, liver, spleen, pancreas, left kidney, right kidney, and gallbladder were 0.91, 0.95, 0.77, 0.81, 0.74, 0.08, 0.83, 0.84, and 0.03, respectively.
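
    The learning scheme above, a structured perceptron whose inference step is exact dynamic programming, can be illustrated on a 1D chain of voxels with two labels. Dual decomposition, which the paper uses to split the 3D graph into tractable subproblems, is omitted here, and the feature design and data are invented for the sketch.

```python
import numpy as np

def viterbi(unary, pairwise):
    """Exact MAP labelling of a chain: maximize sum_t unary[t, l_t] + pairwise[l_t, l_{t+1}]."""
    T, L = unary.shape
    score = unary[0].copy()
    back = np.zeros((T, L), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + pairwise + unary[t][None, :]   # (previous, current)
        back[t] = np.argmax(cand, axis=0)
        score = np.max(cand, axis=0)
    labels = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):
        labels.append(int(back[t, labels[-1]]))
    return np.array(labels[::-1])

def train_structured_perceptron(X, Y, n_labels=2, epochs=20, lr=1.0):
    """X: list of (T,) intensity sequences, Y: list of (T,) ground-truth label sequences."""
    w_unary = np.zeros((2, n_labels))        # weights for features [intensity, bias]
    w_pair = np.zeros((n_labels, n_labels))  # transition (smoothness) weights
    for _ in range(epochs):
        for x, y in zip(X, Y):
            feats = np.stack([x, np.ones_like(x)], axis=1)       # (T, 2)
            y_hat = viterbi(feats @ w_unary, w_pair)
            if np.array_equal(y_hat, y):
                continue
            # Perceptron update: add features of the truth, subtract those of the prediction.
            for t in range(len(x)):
                w_unary[:, y[t]] += lr * feats[t]
                w_unary[:, y_hat[t]] -= lr * feats[t]
                if t > 0:
                    w_pair[y[t - 1], y[t]] += lr
                    w_pair[y_hat[t - 1], y_hat[t]] -= lr
    return w_unary, w_pair

# Toy data: label 1 where the "intensity" is high, with blocky (smooth) label regions.
rng = np.random.default_rng(0)
Y = [np.repeat(rng.integers(0, 2, 5), 8) for _ in range(30)]
X = [y + 0.4 * rng.standard_normal(y.shape) for y in Y]
w_u, w_p = train_structured_perceptron(X, Y)
```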

  8. Parametric modeling of the intervertebral disc space in 3D: application to CT images of the lumbar spine.

    PubMed

    Korez, Robert; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž

    2014-10-01

    Gradual degeneration of intervertebral discs of the lumbar spine is one of the most common causes of low back pain. Although conservative treatment for low back pain may provide relief to most individuals, surgical intervention may be required for individuals with significant continuing symptoms, which is usually performed by replacing the degenerated intervertebral disc with an artificial implant. For designing implants with good bone contact and continuous force distribution, the morphology of the intervertebral disc space and vertebral body endplates is of considerable importance. In this study, we propose a method for parametric modeling of the intervertebral disc space in three dimensions (3D) and show its application to computed tomography (CT) images of the lumbar spine. The initial 3D model of the intervertebral disc space is generated according to the superquadric approach and therefore represented by a truncated elliptical cone, which is initialized by parameters obtained from 3D models of adjacent vertebral bodies. In an optimization procedure, the 3D model of the intervertebral disc space is incrementally deformed by adding parameters that provide a more detailed morphometric description of the observed shape, and aligned to the observed intervertebral disc space in the 3D image. By applying the proposed method to CT images of 20 lumbar spines, the shape and pose of each of the 100 intervertebral disc spaces were represented by a 3D parametric model. The resulting mean (± standard deviation) accuracy of modeling was 1.06 ± 0.98 mm in terms of radial Euclidean distance against manually defined ground truth points, with the corresponding success rate of 93% (i.e. 93 out of 100 intervertebral disc spaces were modeled successfully). As the resulting 3D models provide a description of the shape of intervertebral disc spaces in a complete parametric form, morphometric analysis was straightforwardly enabled and allowed the computation of the corresponding
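
    The initial model above is a superquadric (a truncated elliptical cone); its standard implicit inside-outside function, which the optimization then deforms and aligns to the image, is easy to write down. The specific semi-axis lengths and shape exponents below are arbitrary examples, not values taken from the paper.

```python
import numpy as np

def superquadric_inout(x, y, z, a=(20.0, 15.0, 5.0), e=(0.5, 1.0)):
    """Standard superquadric inside-outside function F(x, y, z).
    F < 1 inside, F = 1 on the surface, F > 1 outside.
    a = (a1, a2, a3) are the semi-axis lengths, e = (e1, e2) the shape exponents;
    small exponents give box-like shapes, e = 1 gives an ellipsoid."""
    a1, a2, a3 = a
    e1, e2 = e
    r = (np.abs(x / a1) ** (2.0 / e2) + np.abs(y / a2) ** (2.0 / e2)) ** (e2 / e1)
    return r + np.abs(z / a3) ** (2.0 / e1)

# Voxelize a disc-space-like slab: flat (box-like) in z, rounded in the x-y plane.
zz, yy, xx = np.mgrid[-10:11, -30:31, -30:31].astype(float)
inside = superquadric_inout(xx, yy, zz, a=(25.0, 18.0, 6.0), e=(0.3, 1.0)) <= 1.0
print("voxels inside the model:", int(inside.sum()))
```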

  9. Estimation of aortic valve leaflets from 3D CT images using local shape dictionaries and linear coding

    NASA Astrophysics Data System (ADS)

    Liang, Liang; Martin, Caitlin; Wang, Qian; Sun, Wei; Duncan, James

    2016-03-01

    Aortic valve (AV) disease is a significant cause of morbidity and mortality. The preferred treatment modality for severe AV disease is surgical resection and replacement of the native valve with either a mechanical or tissue prosthetic. In order to develop effective and long-lasting treatment methods, computational analyses, e.g., structural finite element (FE) and computational fluid dynamic simulations, are very effective for studying valve biomechanics. These computational analyses are based on mesh models of the aortic valve, which are usually constructed from 3D CT images through many hours of manual annotation; an automatic valve shape reconstruction method is therefore desired. In this paper, we present a method for estimating the aortic valve shape, represented by triangle meshes, from 3D cardiac CT images. We propose a pipeline for aortic valve shape estimation that includes novel algorithms for building local shape dictionaries and for building landmark detectors and curve detectors using local shape dictionaries. The method is evaluated on a real patient image dataset using a leave-one-out approach and achieves an average accuracy of 0.69 mm. The work will facilitate automatic patient-specific computational modeling of the aortic valve.
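
    The "linear coding over local shape dictionaries" idea above, representing a local descriptor as a sparse linear combination of dictionary atoms collected from training shapes, can be sketched with orthogonal matching pursuit. The dictionary here is random; it stands in for descriptors extracted from annotated training valves and is not the paper's actual encoder.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 200))      # 200 local shape atoms, 64-dimensional descriptors
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms

# A query descriptor generated from 3 atoms plus a little noise.
truth = np.zeros(200)
truth[[5, 42, 170]] = [1.2, -0.7, 0.9]
query = D @ truth + 0.01 * rng.standard_normal(64)

# Sparse code: which atoms (and with what weights) explain the query descriptor.
code = orthogonal_mp(D, query, n_nonzero_coefs=3)
print("selected atoms:", np.flatnonzero(code))
# The reconstruction D @ code could then be used to score candidate landmark locations.
```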

  10. Anatomy of hepatic arteriolo-portal venular shunts evaluated by 3D micro-CT imaging.

    PubMed

    Kline, Timothy L; Knudsen, Bruce E; Anderson, Jill L; Vercnocke, Andrew J; Jorgensen, Steven M; Ritman, Erik L

    2014-06-01

    The liver differs from other organs in that two vascular systems deliver its blood - the hepatic artery and the portal vein. However, how the two systems interact is not fully understood. We therefore studied the microvascular geometry of rat liver hepatic artery and portal vein injected with the contrast polymer Microfil(®). Intact isolated rat livers were imaged by micro-CT and anatomic evidence for hepatic arteriolo-portal venular shunts occurring between hepatic artery and portal vein branches was found. Simulations were performed to rule out the possibility of the observed shunts being artifacts resulting from image blurring. In addition, in the case of specimens where only the portal vein was injected, only the portal vein was opacified, whereas in hepatic artery injections, both the hepatic artery and portal vein were opacified. We conclude that mixing of the hepatic artery and portal vein blood can occur proximal to the sinusoidal level, and that the hepatic arteriolo-portal venular shunts may function as a one-way valve-like mechanism, allowing flow only from the hepatic artery to the portal vein (and not the other way around).

  11. Thin-slice three-dimensional (3D) reconstruction versus CT 3D reconstruction of human breast cancer

    PubMed Central

    Zhang, Yi; Zhou, Yan; Yang, Xinhua; Tang, Peng; Qiu, Quanguang; Liang, Yong; Jiang, Jun

    2013-01-01

    Background & objectives: With improvements in the early diagnosis of breast cancer, breast conserving therapy (BCT) is being used increasingly. Precise preoperative evaluation of the incision margin is, therefore, very important. Using three-dimensional (3D) images in the preoperative evaluation for breast conserving surgery has considerable significance, but the 3D CT reconstruction currently in common use has problems in accurately displaying breast cancer. Thin-slice 3D reconstruction is also now widely used to delineate organs and tissues of breast cancers. This study was aimed at comparing 3D CT with thin-slice 3D reconstruction in breast cancer patients to find a better technique for accurate evaluation of breast cancer. Methods: 16-slice spiral CT scans and 3D reconstructions were performed on 15 breast cancer patients. All patients had been treated with modified radical mastectomy; 2D and 3D images of breasts and tumours were obtained. The specimens were fixed and sliced at 2 mm thickness to obtain serial thin-slice images, which were reconstructed using 3D DOCTOR software to obtain 3D images. Results: Compared with 2D CT images, thin-slice images showed more clearly the morphological characteristics of the tumour, the breast tissues and the margins of different tissues in each slice. After 3D reconstruction, the tumour shapes obtained by the two reconstruction methods were basically the same, but the thin-slice 3D reconstruction showed the tumour margins more clearly. Interpretation & conclusions: Compared with 3D CT reconstruction, thin-slice 3D reconstruction of the breast tumour gave clearer images, which could provide guidance for the observation and application of CT 3D reconstructed images and contribute to the accurate evaluation of tumours using CT imaging technology. PMID:23481052

  12. Extraction of 3D Femur Neck Trabecular Bone Architecture from Clinical CT Images in Osteoporotic Evaluation: a Novel Framework.

    PubMed

    Sapthagirivasan, V; Anburajan, M; Janarthanam, S

    2015-08-01

    The early detection of osteoporosis risk enhances the lifespan and quality of life of an individual. A reasonable in-vivo assessment of trabecular bone strength at the proximal femur helps to evaluate fracture risk and, hence, to understand the associated structural dynamics in osteoporosis. The main aim of our study was to develop a framework to automatically determine trabecular bone strength from clinical femur CT images and thereby to estimate its correlation with BMD. All 50 studied South Indian female subjects, aged 30 to 80 years, underwent CT and DXA measurements at the right femur region. Initially, the original CT slices were intensified and an active contour model was utilised for the extraction of the neck region. After processing through a novel process called the trabecular enrichment approach (TEA), three-dimensional (3D) trabecular features were extracted. The extracted 3D trabecular features, such as volume fraction (VF), solidity of delta points (SDP) and boundness, demonstrated a significant correlation with femoral neck bone mineral density (r = 0.551, r = 0.432, r = 0.552, respectively) at p < 0.001. High area under the curve values were observed for the extracted features (VF: 85.3%, 95% CI: 68.2-100%; SDP: 82.1%, 95% CI: 65.1-98.9%; boundness: 90.4%, 95% CI: 78.7-100%). The findings suggest that the proposed framework with the TEA method would be useful for spotting women vulnerable to osteoporotic risk.

  13. Geometry-based vs. intensity-based medical image registration: A comparative study on 3D CT data.

    PubMed

    Savva, Antonis D; Economopoulos, Theodore L; Matsopoulos, George K

    2016-02-01

    Spatial alignment of Computed Tomography (CT) data sets is often required in numerous medical applications and it is usually achieved by applying conventional exhaustive registration techniques, which are mainly based on the intensity of the subject data sets. Those techniques consider the full range of data points composing the data, thus negatively affecting the required processing time. Alternatively, alignment can be performed using the correspondence of extracted data points from both sets. Moreover, various geometrical characteristics of those data points can be used, instead of their chromatic properties, for uniquely characterizing each point, by forming a specific geometrical descriptor. This paper presents a comparative study reviewing variations of geometry-based, descriptor-oriented registration techniques, as well as conventional, exhaustive, intensity-based methods for aligning three-dimensional (3D) CT data pairs. In this context, three general image registration frameworks were examined: a geometry-based methodology featuring three distinct geometrical descriptors, an intensity-based methodology using three different similarity metrics, as well as the commonly used Iterative Closest Point algorithm. All techniques were applied on a total of thirty 3D CT data pairs with both known and unknown initial spatial differences. After an extensive qualitative and quantitative assessment, it was concluded that the proposed geometry-based registration framework performed similarly to the examined exhaustive registration techniques. In addition, geometry-based methods dramatically improved processing time over conventional exhaustive registration. Copyright © 2015 Elsevier Ltd. All rights reserved.
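
    One of the baselines compared above, the Iterative Closest Point algorithm, is compact enough to sketch: alternate nearest-neighbour correspondence search with a closed-form (SVD/Kabsch) rigid update. This is a generic textbook ICP, not the specific implementation evaluated in the paper, and the synthetic point sets are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Closed-form (Kabsch/SVD) rotation R and translation t minimizing ||R src + t - dst||."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(moving, fixed, iters=50):
    """Basic point-to-point ICP aligning 'moving' onto 'fixed' (both N x 3 arrays)."""
    tree = cKDTree(fixed)
    R_total, t_total = np.eye(3), np.zeros(3)
    current = moving.copy()
    for _ in range(iters):
        _, idx = tree.query(current)                      # closest-point correspondences
        R, t = best_rigid_transform(current, fixed[idx])  # incremental rigid update
        current = current @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Toy example: recover a known 10-degree rotation and a small translation.
rng = np.random.default_rng(0)
moving = rng.standard_normal((500, 3))
angle = np.deg2rad(10.0)
R_gt = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                 [np.sin(angle),  np.cos(angle), 0.0],
                 [0.0, 0.0, 1.0]])
t_gt = np.array([1.0, 0.5, -0.2])
fixed = moving @ R_gt.T + t_gt
R_est, t_est = icp(moving, fixed)
print("rotation error:", np.linalg.norm(R_est - R_gt),
      "translation error:", np.linalg.norm(t_est - t_gt))
```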

  14. Automated quantitative Rb-82 3D PET/CT myocardial perfusion imaging: normal limits and correlation with invasive coronary angiography.

    PubMed

    Nakazato, Ryo; Berman, Daniel S; Dey, Damini; Le Meunier, Ludovic; Hayes, Sean W; Fermin, Jimmy S; Cheng, Victor Y; Thomson, Louise E J; Friedman, John D; Germano, Guido; Slomka, Piotr J

    2012-04-01

    We aimed to characterize normal limits and to determine the diagnostic accuracy for an automated quantification of 3D 82-Rubidium (Rb-82) PET/CT myocardial perfusion imaging (MPI). We studied 125 consecutive patients undergoing Rb-82 PET/CT MPI, including patients with suspected coronary artery disease (CAD) and invasive coronary angiography, and 42 patients with a low likelihood (LLk) of CAD. Normal limits for perfusion and function were derived from LLk patients. QPET software was used to quantify perfusion abnormality at rest and stress expressed as total perfusion deficit (TPD). Relative perfusion databases did not differ in any of the 17 segments between males and females. The areas under the receiver operating characteristic curve for detection of CAD were 0.86 for identification of ≥50% and ≥70% stenosis. The sensitivity/specificity was 86%/86% for detecting ≥50% stenosis and 93%/77% for ≥70% stenosis, respectively. In regard to normal limits, mean rest and stress left ventricular ejection fraction (LVEF) were 67% ± 10% and 75% ± 9%, respectively. Mean transient ischemic dilation ratio was 1.06 ± 0.14 and mean increase in LVEF with stress was 7.4% ± 6.1% (95th percentile of 0%). Normal limits have been established for 3D Rb-82 PET/CT analysis with QPET software. Fully automated quantification of myocardial perfusion PET data shows high diagnostic accuracy for detecting obstructive CAD.

  15. 3D image analysis of plants using electron tomography and micro-CT.

    PubMed

    Mineyuki, Yoshinobu

    2014-11-01

    help to promote MT bundling. Cell plate attachment to the parental wall leads to the fusion of the newly formed middle lamellae in the cell plate to the middle lamella of parental cell wall, and a three-way junction is created. Air space develops from the three-way junction. To determine 3D arrangement of cells and air spaces, we used X-ray micro-CT at the SPring-8 synchrotron radiation facility. Using micro-CT available in BL20XU (8 keV, 0.2 µm/pixel), we were able to elucidate ∼90% of the cortical cell outlines in the hypocotyl-radicle axis of arabidopsis seeds [4] and to analyze cell geometrical properties. As the strength of the system X-ray is too strong for seed survival, we used another beam line BL20B2 (10-15 keV, 2.4-2.7 µm/pixel) to examine air space development during seed imbibition [4,5]. Using this system, we were able to detect air space development at the early imbibition stages of seeds without causing damage during seed germination. Acknowledgment: The author would like to thank Dr. Ichirou Karahara (Univ. Toyama), Dr. L. Andrew Staehelin (Univ. Colorado), Ms. Naoko Kajimura, Dr. Akio Takaoka (Osaka Univ.), Dr. Kazuyo Misaki, Dr. Shigenobu Yonemura (RIKEN CDB), Dr. Kazuyoshi Murata (NIP), Dr. Kentaro Uesugi, Dr. Akihisa Takeuchi, Dr. Yoshio Suzuki (JASRI), Dr. Miyuki Takeuchi, Dr. Daisuke Tamaoki, Dr. Daisuke Yamauchi, and Ms. Aki Fukuda (Univ. Hyogo) for their collaborations in the work presented here. © The Author 2014. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  16. CT Image Sequence Analysis for Object Recognition - A Rule-Based 3-D Computer Vision System

    Treesearch

    Dongping Zhu; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman

    1991-01-01

    Research is now underway to create a vision system for hardwood log inspection using a knowledge-based approach. In this paper, we present a rule-based, 3-D vision system for locating and identifying wood defects using topological, geometric, and statistical attributes. A number of different features can be derived from the 3-D input scenes. These features and evidence...

  17. Automated detection of retinal cell nuclei in 3D micro-CT images of zebrafish using support vector machine classification

    NASA Astrophysics Data System (ADS)

    Ding, Yifu; Tavolara, Thomas; Cheng, Keith

    2016-03-01

    Our group is developing a method to examine biological specimens in cellular detail using synchrotron microCT. The method can acquire 3D images of tissue at micrometer-scale resolutions, allowing individual cell types to be visualized in the context of the entire specimen. For model organism research, this tool will enable the rapid characterization of tissue architecture and cellular morphology from every organ system. This characterization is critical for proposed and ongoing "phenome" projects that aim to phenotype whole-organism mutants and diseased tissues from different organisms, including humans. With the envisioned collection of hundreds to thousands of images for a phenome project, it is important to develop quantitative image analysis tools for the automated scoring of organism phenotypes across organ systems. Here we present a first step towards that goal, demonstrating the use of support vector machines (SVM) in detecting retinal cell nuclei in 3D images of wild-type zebrafish. In addition, we apply the SVM classifier to a mutant zebrafish to examine whether SVMs can be used to capture phenotypic differences in these images. The long-term goal of this work is to allow cellular and tissue morphology to be characterized quantitatively for many organ systems, at the level of the whole organism.
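
    The detection step above, classifying candidate image patches as nucleus versus background with a support vector machine, can be sketched with scikit-learn. The feature extraction here (raw flattened patch intensities) is a placeholder for whatever descriptors were actually used, and the data are synthetic.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_patch(is_nucleus):
    """Synthetic 9x9x9 micro-CT patch: nuclei appear as dark blobs on a brighter background."""
    patch = rng.normal(0.6, 0.1, (9, 9, 9))
    if is_nucleus:
        zz, yy, xx = np.mgrid[-4:5, -4:5, -4:5]
        patch -= 0.4 * np.exp(-(zz**2 + yy**2 + xx**2) / 8.0)
    return patch.ravel()

X = np.array([make_patch(i % 2 == 0) for i in range(400)])
y = np.array([i % 2 == 0 for i in range(400)], dtype=int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
# In the full pipeline the classifier would be applied to candidate locations across
# the 3D volume, keeping detections above a score threshold as nuclei.
```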

  18. An innovative strategy for the identification and 3D reconstruction of pancreatic cancer from CT images.

    PubMed

    Marconi, S; Pugliese, L; Del Chiaro, M; Pozzi Mucelli, R; Auricchio, F; Pietrabissa, A

    2016-09-01

    We propose an innovative tool for 3D reconstruction of Pancreatic Ductal AdenoCarcinoma from Multi-Detector Computed Tomography. The tumor mass is discriminated from healthy tissue, and the resulting segmentation labels are rendered preserving information on different hypodensity levels. The final 3D virtual model also includes the pancreas and the main peri-pancreatic vessels, and it is suitable for 3D printing. We performed a preliminary evaluation of the tool's effectiveness by presenting ten cases of Pancreatic Ductal AdenoCarcinoma processed with the tool to an expert radiologist, who could correct the result of the discrimination. In seven of the ten cases, the 3D reconstruction was accepted without any modification, while in three cases only 1.88, 5.13, and 5.70%, respectively, of the segmentation labels were modified, preliminarily demonstrating the high effectiveness of the tool.

  19. Adaptive iterative dose reduction (AIDR) 3D in low dose CT abdomen-pelvis: Effects on image quality and radiation exposure

    NASA Astrophysics Data System (ADS)

    Ang, W. C.; Hashim, S.; Karim, M. K. A.; Bahruddin, N. A.; Salehhon, N.; Musa, Y.

    2017-05-01

    The widespread use of computed tomography (CT) has increased medical radiation exposure and cancer risk. We aimed to evaluate the impact of AIDR 3D in CT abdomen-pelvis examinations in terms of image quality and radiation dose at a low-dose (LD) setting compared to the standard dose (STD) with filtered back projection (FBP) reconstruction. We retrospectively reviewed the images of 40 patients who underwent CT abdomen-pelvis using an 80-slice CT scanner. Group 1 patients (n=20, mean age 41 ± 17 years) were scanned at LD with AIDR 3D reconstruction and Group 2 patients (n=20, mean age 52 ± 21 years) were scanned at STD using FBP reconstruction. Objective image noise was assessed by region of interest (ROI) measurements in the liver and aorta as the standard deviation (SD) of the attenuation value (Hounsfield Unit, HU), while subjective image quality was evaluated by two radiologists. Statistical analysis was used to compare the scan length, CT dose index volume (CTDIvol) and image quality of both patient groups. Although both groups had similar mean scan lengths, the CTDIvol decreased significantly, by 38%, in LD CT compared to STD CT (p<0.05). Objective and subjective image quality were statistically improved with AIDR 3D (p<0.05). In conclusion, AIDR 3D enables a significant dose reduction of 38% with superior image quality in LD CT abdomen-pelvis.

  20. Automated assessment of breast tissue density in non-contrast 3D CT images without image segmentation based on a deep CNN

    NASA Astrophysics Data System (ADS)

    Zhou, Xiangrong; Kano, Takuya; Koyasu, Hiromi; Li, Shuo; Zhou, Xinxin; Hara, Takeshi; Matsuo, Masayuki; Fujita, Hiroshi

    2017-03-01

    This paper describes a novel approach for the automatic assessment of breast density in non-contrast three-dimensional computed tomography (3D CT) images. The proposed approach trains and uses a deep convolutional neural network (CNN) from scratch to classify breast tissue density directly from CT images without segmenting the anatomical structures, a step that creates a bottleneck in conventional approaches. Our scheme determines breast density in a 3D breast region by decomposing the 3D region into several radial 2D sections from the nipple, and measuring the distribution of breast tissue densities on each 2D section from different orientations. The whole scheme is designed as a compact network without the need for post-processing and provides high robustness and computational efficiency in clinical settings. We applied this scheme to a dataset of 463 non-contrast CT scans obtained from 30- to 45-year-old women in Japan. The density of breast tissue in each CT scan was assigned to one of four categories (glandular tissue within the breast <25%, 25%-50%, 50%-75%, and >75%) by a radiologist as ground truth. We used 405 CT scans for training the deep CNN and the remaining 58 CT scans for testing its performance. The experimental results demonstrated that the findings of the proposed approach and those of the radiologist were the same in 72% of the CT scans among the training samples and 76% among the testing samples. These results demonstrate the potential use of deep CNNs for assessing breast tissue density in non-contrast 3D CT images.
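
    The classification network above is described only at a high level; as a hedged sketch, a compact 2D CNN of the kind that could score each radial section into one of the four density categories might look like the following. The layer sizes, channel counts, 2D input size, and the per-section averaging rule are assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class BreastDensityCNN(nn.Module):
    """Tiny CNN that maps one radial 2D section (1 x 64 x 64) to 4 density classes."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = BreastDensityCNN()
sections = torch.randn(8, 1, 64, 64)            # 8 radial sections from one breast region
logits = model(sections)
# One simple way to obtain a per-breast decision: average the per-section scores.
breast_class = logits.mean(dim=0).argmax().item()
print("predicted density category:", breast_class)
```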

  1. Registration of 2D C-Arm and 3D CT Images for a C-Arm Image-Assisted Navigation System for Spinal Surgery

    PubMed Central

    Chang, Chih-Ju; Lin, Geng-Li; Tse, Alex; Chu, Hong-Yu; Tseng, Ching-Shiow

    2015-01-01

    C-Arm image-assisted surgical navigation systems have been broadly applied to spinal surgery. However, accurate path planning on the C-Arm AP-view image is difficult. This research studies 2D-3D image registration methods to obtain the optimum transformation matrix between the C-Arm and CT image frames. Through the transformation matrix, the surgical path planned on preoperative CT images can be transformed and displayed on the C-Arm images for surgical guidance. The positions of surgical instruments are also displayed on both CT and C-Arm images in real time. Five similarity measures for 2D-3D image registration, including Normalized Cross-Correlation, Gradient Correlation, Pattern Intensity, Gradient Difference Correlation, and Mutual Information, combined with three optimization methods, including Powell's method, the Downhill simplex algorithm, and a genetic algorithm, are applied to evaluate their performance in convergence range, efficiency, and accuracy. Experimental results show that the combination of the Normalized Cross-Correlation measure with the Downhill simplex algorithm obtains the maximum correlation and similarity between C-Arm and Digitally Reconstructed Radiograph (DRR) images. Spine saw bones are used in the experiment to evaluate 2D-3D image registration accuracy. The average error in displacement is 0.22 mm. The success rate is approximately 90% and the average registration time is 16 seconds. PMID:27018859
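
    Of the similarity measures compared above, normalized cross-correlation is the simplest to write down; the 2D-3D registration loop repeatedly renders a DRR from the CT at a candidate pose and scores it against the C-arm image with a measure such as this. The DRR renderer and the optimizer are omitted and only named in comments as hypothetical placeholders; the arrays below are synthetic.

```python
import numpy as np

def normalized_cross_correlation(drr, xray):
    """NCC between a digitally reconstructed radiograph and a C-arm image.
    Returns a value in [-1, 1]; 1 means a perfect linear intensity relationship."""
    a = drr.astype(float).ravel()
    b = xray.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Inside the registration loop (sketch, with hypothetical helpers):
#   pose = optimizer step (e.g. downhill simplex over 6 rigid parameters)
#   drr  = render_drr(ct_volume, pose)                 # hypothetical DRR renderer
#   cost = -normalized_cross_correlation(drr, c_arm_image)
rng = np.random.default_rng(0)
img = rng.random((256, 256))
print(normalized_cross_correlation(img, 0.8 * img + 5.0))   # ~1.0: linearly related images
```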

  2. Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images*

    PubMed Central

    Botta, F; Mairani, A; Hobbs, R F; Vergara Gil, A; Pacilio, M; Parodi, K; Cremonesi, M; Coca Pérez, M A; Di Dia, A; Ferrari, M; Guerriero, F; Battistoni, G; Pedroli, G; Paganelli, G; Torres Aroche, L A; Sgouros, G

    2014-01-01

    Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed to the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested, performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, by simulating 10^8 primary particles a 2% average difference with respect to the kernel convolution method was achieved; such difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3–4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air. The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image

  3. Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images

    NASA Astrophysics Data System (ADS)

    Botta, F.; Mairani, A.; Hobbs, R. F.; Vergara Gil, A.; Pacilio, M.; Parodi, K.; Cremonesi, M.; Coca Pérez, M. A.; Di Dia, A.; Ferrari, M.; Guerriero, F.; Battistoni, G.; Pedroli, G.; Paganelli, G.; Torres Aroche, L. A.; Sgouros, G.

    2013-11-01

    Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed to the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested, performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, by simulating 10^8 primary particles a 2% average difference with respect to the kernel convolution method was achieved; such difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3-4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air. The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image

  4. Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images.

    PubMed

    Botta, F; Mairani, A; Hobbs, R F; Vergara Gil, A; Pacilio, M; Parodi, K; Cremonesi, M; Coca Pérez, M A; Di Dia, A; Ferrari, M; Guerriero, F; Battistoni, G; Pedroli, G; Paganelli, G; Torres Aroche, L A; Sgouros, G

    2013-11-21

    Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed to the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested, performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, by simulating 10^8 primary particles a 2% average difference with respect to the kernel convolution method was achieved; such difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3–4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air. The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image

  5. Classification of micro-CT images using 3D characterization of bone canal patterns in human osteogenesis imperfecta

    NASA Astrophysics Data System (ADS)

    Abidin, Anas Z.; Jameson, John; Molthen, Robert; Wismüller, Axel

    2017-03-01

    Few studies have analyzed the microstructural properties of bone in cases of Osteogenesis Imperfecta (OI), or 'brittle bone disease'. Current approaches mainly focus on bone mineral density measurements as an indirect indicator of bone strength and quality. It has been shown that bone strength depends not only on composition but also on structural organization. This study aims to characterize the 3D structure of cortical bone in high-resolution micro-CT images. A total of 40 bone fragments from 28 subjects (13 with OI and 15 healthy controls) were imaged by micro-tomography using a synchrotron light source (SRµCT). Minkowski functionals - volume, surface, curvature, and Euler characteristic - describing the topological organization of the bone were computed from the images. The features were used in a machine learning task to classify between healthy and OI bone. The best classification performance (mean AUC of 0.96) was achieved with a combined 4-dimensional feature of all Minkowski functionals. Individually, the best feature performance was seen using curvature (mean AUC of 0.85), which characterizes the edges within a binary object. These results show that quantitative analysis of cortical bone microstructure, in a computer-aided diagnostics framework, can be used to distinguish between healthy and OI bone with high accuracy.
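
    Three of the four Minkowski functionals used above (volume, surface area, and the Euler characteristic) can be estimated directly from a binary micro-CT volume with scikit-image; the integral mean curvature needs a dedicated estimator and is omitted here. This is a generic sketch on a synthetic volume, not the study's feature-extraction code, and the voxel spacing is an assumption.

```python
import numpy as np
from skimage import measure

def minkowski_features(mask, spacing=(1.0, 1.0, 1.0)):
    """Volume, surface area and Euler characteristic of a 3D binary bone mask."""
    voxel_volume = np.prod(spacing)
    volume = mask.sum() * voxel_volume
    verts, faces, _, _ = measure.marching_cubes(mask.astype(np.float32),
                                                level=0.5, spacing=spacing)
    surface = measure.mesh_surface_area(verts, faces)
    euler = measure.euler_number(mask, connectivity=3)   # 26-connectivity in 3D
    return volume, surface, euler

# Synthetic "cortical bone with canals": a solid block with two tunnels through it.
mask = np.ones((40, 40, 40), dtype=bool)
mask[:, 18:22, 18:22] = False
mask[18:22, :, 8:12] = False
vol, surf, chi = minkowski_features(mask, spacing=(0.01, 0.01, 0.01))  # assumed 10 µm voxels
print(f"volume={vol:.3f} mm^3  surface={surf:.3f} mm^2  Euler characteristic={chi}")
```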

  6. A fast rigid-registration method of inferior limb X-ray image and 3D CT images for TKA surgery

    NASA Astrophysics Data System (ADS)

    Ito, Fumihito; Prima, O. D. A.; Uwano, Ikuko; Ito, Kenzo

    2010-03-01

    In this paper, we propose a fast rigid-registration method for inferior limb X-ray films (two-dimensional Computed Radiography (CR) images) and three-dimensional Computed Tomography (CT) images for Total Knee Arthroplasty (TKA) surgery planning. The position of each bone, such as the femur and tibia (shin bone), differs slightly between the X-ray film and the 3D CT images, and care must be taken in how the two images are used, since the X-ray film is captured in the standing position while the 3D CT is captured in the decubitus (face-up) position. Conventional registration mainly uses a cross-correlation function between the two images together with optimization techniques, which takes enormous calculation time and is difficult to use in interactive operations. To solve these problems, we calculate the center lines (bone axes) of the femur and tibia (shin bone) automatically, and use them as initial positions for the registration. We evaluate our registration method using three patients' image data, and compare the proposed method with a conventional registration that uses the down-hill simplex algorithm. The down-hill simplex method is an optimization algorithm that requires only function evaluations and does not need the calculation of derivatives. Our registration method is more effective than the down-hill simplex method in computational time and convergence stability. We have developed an implant simulation system on a personal computer to support the surgeon in preoperative planning of TKA. Our registration method is implemented in the simulation system, and the user can manipulate 2D/3D translucent templates of implant components on the X-ray film and 3D CT images.

  7. Effects of x-ray and CT image enhancements on the robustness and accuracy of a rigid 3D/2D image registration.

    PubMed

    Kim, Jinkoo; Yin, Fang-Fang; Zhao, Yang; Kim, Jae Ho

    2005-04-01

    A rigid body three-dimensional/two-dimensional (3D/2D) registration method has been implemented using mutual information, gradient ascent, and 3D texturemap-based digitally reconstructed radiographs. Nine combinations of commonly used x-ray and computed tomography (CT) image enhancement methods, including window leveling, histogram equalization, and adaptive histogram equalization, were examined to assess their effects on accuracy and robustness of the registration method. From a set of experiments using an anthropomorphic chest phantom, we were able to draw several conclusions. First, the CT and x-ray preprocessing combination with the widest attraction range was the one that linearly stretched the histograms onto the entire display range on both CT and x-ray images. The average attraction ranges of this combination were 71.3 mm and 61.3 deg in the translation and rotation dimensions, respectively, and the average errors were 0.12 deg and 0.47 mm. Second, the combination of the CT image with tissue and bone information and the x-ray images with adaptive histogram equalization also showed subvoxel accuracy, and was the best in the translation dimensions. However, its attraction ranges were the smallest among the examined combinations (on average 36 mm and 19 deg). Last, the bone-only information on the CT image did not show convergence to the correct registration.

  8. Effects of x-ray and CT image enhancements on the robustness and accuracy of a rigid 3D/2D image registration

    SciTech Connect

    Kim, Jinkoo; Yin Fangfang; Zhao Yang; Kim, Jae Ho

    2005-04-01

    A rigid body three-dimensional/two-dimensional (3D/2D) registration method has been implemented using mutual information, gradient ascent, and 3D texturemap-based digitally reconstructed radiographs. Nine combinations of commonly used x-ray and computed tomography (CT) image enhancement methods, including window leveling, histogram equalization, and adaptive histogram equalization, were examined to assess their effects on accuracy and robustness of the registration method. From a set of experiments using an anthropomorphic chest phantom, we were able to draw several conclusions. First, the CT and x-ray preprocessing combination with the widest attraction range was the one that linearly stretched the histograms onto the entire display range on both CT and x-ray images. The average attraction ranges of this combination were 71.3 mm and 61.3 deg in the translation and rotation dimensions, respectively, and the average errors were 0.12 deg and 0.47 mm. Second, the combination of the CT image with tissue and bone information and the x-ray images with adaptive histogram equalization also showed subvoxel accuracy, and was the best in the translation dimensions. However, its attraction ranges were the smallest among the examined combinations (on average 36 mm and 19 deg). Last, the bone-only information on the CT image did not show convergence to the correct registration.

  9. 3D imaging of lung tissue by confocal microscopy and micro-CT

    NASA Astrophysics Data System (ADS)

    Kriete, Andres; Breithecker, Andreas; Rau, Wigbert D.

    2001-07-01

    Two complementary techniques for the imaging of tissue subunits are discussed. A computer-guided light microscopic imaging technique is described first, which confocally resolves thick serial sections axially. The lateral area of interest is increased by scanning a mosaic of images in each plane. Subsequently, all images are fused digitally to form a highly resolved volume exhibiting the fine structure of complete respiratory units of the lung. The second technique is based on microtomography. This method allows imaging of volumes up to 3 x 3 x 3 cm at a resolution of up to 7 microns. Due to the lack of strong density differences, a contrast enhancement procedure is introduced which makes this technique applicable for the imaging of lung tissue. The imaging, visualization and analysis described here are parts of an ongoing project to model the structure and simulate the function of tissue subunits and complete organs.

  10. Automated incision line determination for virtual unfolded view generation of the stomach from 3D abdominal CT images

    NASA Astrophysics Data System (ADS)

    Suito, Tomoaki; Oda, Masahiro; Kitasaka, Takayuki; Iinuma, Gen; Misawa, Kazunari; Nawano, Shigeru; Mori, Kensaku

    2012-03-01

    In this paper, we propose an automated incision line determination method for virtual unfolded view generation of the stomach from 3D abdominal CT images. Previous virtual unfolding methods for the stomach required many manual operations, such as determination of the incision line, which places a heavy burden on the operator. In general, an incision line along the greater curvature of the stomach is used for making pathological specimens. In our method, an incision line is automatically determined by projecting a centerline of the stomach onto the gastric surface from a projection line. The projection line is determined using the positions of the cardia and the pylorus, which can easily be specified by two mouse clicks. The process of our method is performed as follows. We extract the stomach region using thresholding and labeling processes. We apply a thinning process to the stomach region, and then extract the longest line from the result of the thinning process. We obtain a centerline of the stomach region by smoothing the longest line with a Bezier curve. The incision line is calculated by projecting the centerline onto the gastric surface from the projection line. We applied the proposed method to 19 cases of CT images and automatically determined incision lines. Experimental results showed our method was able to determine incision lines along the greater curvature for most of the 19 cases.

  11. Prior Image Constrained Compressed Sensing Metal Artifact Reduction (PICCS-MAR): 2D and 3D Image Quality Improvement with Hip Prostheses at CT Colonography

    PubMed Central

    Bannas, Peter; Li, Yinsheng; Motosugi, Utaroh; Li, Ke; Lubner, Meghan; Chen, Guang-Hong; Pickhardt, Perry J.

    2015-01-01

    Purpose: To assess the effect of the prior-image-constrained-compressed-sensing-based metal-artefact-reduction (PICCS-MAR) algorithm on streak artefact reduction and 2D and 3D image quality improvement in patients with total hip arthroplasty (THA) undergoing CT colonography (CTC). Material and Methods: PICCS-MAR was applied to filtered-back-projection (FBP)-reconstructed DICOM CTC images in 52 patients with THA (unilateral, n=30; bilateral, n=22). For FBP and PICCS-MAR series, ROI measurements of CT numbers were obtained at predefined levels for fat, muscle, air, and the most severe artefact. Two radiologists independently reviewed 2D and 3D CTC images and graded artefacts and image quality using a five-point scale (1=severe streak/no diagnostic confidence, 5=no streak/excellent image quality, high confidence). Results were compared using paired and unpaired t-tests, Wilcoxon signed-rank and Mann-Whitney tests. Results: Streak artefacts and image quality scores for FBP versus PICCS-MAR 2D images (median: 1 vs. 3 and 2 vs. 3, respectively) and 3D images (median: 2 vs. 4 and 3 vs. 4, respectively) showed significant improvement after PICCS-MAR (all P<.001). PICCS-MAR significantly improved the accuracy of mean CT numbers for fat, muscle and the area with the most severe artefact (all P<.001). Conclusion: PICCS-MAR substantially reduces streak artefacts related to THA on DICOM images, thereby enhancing visualization of anatomy on 2D and 3D CTC images and increasing diagnostic confidence. PMID:26521266

  12. Prior Image Constrained Compressed Sensing Metal Artifact Reduction (PICCS-MAR): 2D and 3D Image Quality Improvement with Hip Prostheses at CT Colonography.

    PubMed

    Bannas, Peter; Li, Yinsheng; Motosugi, Utaroh; Li, Ke; Lubner, Meghan; Chen, Guang-Hong; Pickhardt, Perry J

    2016-07-01

    To assess the effect of the prior-image-constrained-compressed-sensing-based metal-artefact-reduction (PICCS-MAR) algorithm on streak artefact reduction and 2D and 3D-image quality improvement in patients with total hip arthroplasty (THA) undergoing CT colonography (CTC). PICCS-MAR was applied to filtered-back-projection (FBP)-reconstructed DICOM CTC-images in 52 patients with THA (unilateral, n = 30; bilateral, n = 22). For FBP and PICCS-MAR series, ROI-measurements of CT-numbers were obtained at predefined levels for fat, muscle, air, and the most severe artefact. Two radiologists independently reviewed 2D and 3D CTC-images and graded artefacts and image quality using a five-point-scale (1 = severe streak/no-diagnostic confidence, 5 = no streak/excellent image-quality, high-confidence). Results were compared using paired and unpaired t-tests and Wilcoxon signed-rank and Mann-Whitney-tests. Streak artefacts and image quality scores for FBP versus PICCS-MAR 2D-images (median: 1 vs. 3 and 2 vs. 3, respectively) and 3D images (median: 2 vs. 4 and 3 vs. 4, respectively) showed significant improvement after PICCS-MAR (all P < 0.001). PICCS-MAR significantly improved the accuracy of mean CT numbers for fat, muscle and the area with the most severe artefact (all P < 0.001). PICCS-MAR substantially reduces streak artefacts related to THA on DICOM images, thereby enhancing visualization of anatomy on 2D and 3D CTC images and increasing diagnostic confidence. • PICCS-MAR significantly reduces streak artefacts associated with total hip arthroplasty on 2D and 3D CTC. • PICCS-MAR significantly improves 2D and 3D CTC image quality and diagnostic confidence. • PICCS-MAR can be applied retrospectively to DICOM images from single-kVp CT.

  13. Three-dimensional image technology in forensic anthropology: Assessing the validity of biological profiles derived from CT-3D images of the skeleton

    NASA Astrophysics Data System (ADS)

    Garcia de Leon Valenzuela, Maria Julia

    This project explores the reliability of building a biological profile for an unknown individual based on three-dimensional (3D) images of the individual's skeleton. 3D imaging technology has been widely researched for medical and engineering applications, and it is increasingly being used as a tool for anthropological inquiry. While the question of whether a biological profile can be derived from 3D images of a skeleton with the same accuracy as achieved when using dry bones has been explored, bigger sample sizes, a standardized scanning protocol and more interobserver error data are needed before 3D methods can become widely and confidently used in forensic anthropology. 3D images of Computed Tomography (CT) scans were obtained from 130 innominate bones from Boston University's skeletal collection (School of Medicine). For each bone, both 3D images and original bones were assessed using the Phenice and Suchey-Brooks methods. Statistical analysis was used to determine the agreement between 3D image assessment versus traditional assessment. A pool of six individuals with varying experience in the field of forensic anthropology scored a subsample (n = 20) to explore interobserver error. While a high agreement was found for age and sex estimation for specimens scored by the author, the interobserver study shows that observers found it difficult to apply standard methods to 3D images. Higher levels of experience did not result in higher agreement between observers, as would be expected. Thus, a need for training in 3D visualization before applying anthropological methods to 3D bones is suggested. Future research should explore interobserver error using a larger sample size in order to test the hypothesis that training in 3D visualization will result in a higher agreement between scores. The need for the development of a standard scanning protocol focusing on the optimization of 3D image resolution is highlighted. Applications for this research include the possibility

  14. Deep learning of the sectional appearances of 3D CT images for anatomical structure segmentation based on an FCN voting method.

    PubMed

    Zhou, Xiangrong; Takayama, Ryosuke; Wang, Song; Hara, Takeshi; Fujita, Hiroshi

    2017-07-21

    We propose a single network trained by pixel-to-label deep learning to address the general issue of automatic multiple organ segmentation in three-dimensional (3D) computed tomography (CT) images. Our method can be described as a voxel-wise multiple-class classification scheme for automatically assigning labels to each pixel/voxel in a 2D/3D CT image. We simplify the segmentation algorithms of anatomical structures (including multiple organs) in a CT image (generally in 3D) to a majority voting scheme over the semantic segmentation of multiple 2D slices drawn from different viewpoints with redundancy. The proposed method inherits the spirit of fully convolutional networks (FCNs) that consist of "convolution" and "deconvolution" layers for 2D semantic image segmentation, and expands the core structure with 3D-2D-3D transformations to adapt to 3D CT image segmentation. All parameters in the proposed network are trained pixel-to-label from a small number of CT cases with human annotations as the ground truth. The proposed network naturally fulfills the requirements of multiple organ segmentations in CT cases of different sizes that cover arbitrary scan regions without any adjustment. The proposed network was trained and validated using the simultaneous segmentation of 19 anatomical structures in the human torso, including 17 major organs and two special regions (lumen and content inside of stomach). Some of these structures have never been reported in previous research on CT segmentation. A database consisting of 240 (95% for training and 5% for testing) 3D CT scans, together with their manually annotated ground-truth segmentations, was used in our experiments. The results show that the 19 structures of interest were segmented with acceptable accuracy (88.1% and 87.9% voxels in the training and testing datasets, respectively, were labeled correctly) against the ground truth. We propose a single network based on pixel-to-label deep learning to address the challenging
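
    The fusion step described above, combining 2D semantic segmentations of slices drawn from different viewpoints into a single 3D label volume by voxel-wise majority voting, can be sketched as follows. The per-view label volumes here are random stand-ins for actual FCN outputs already resampled back onto the CT grid.

```python
import numpy as np

def majority_vote(label_volumes, n_labels):
    """Fuse per-view 3D label volumes into one segmentation by voxel-wise majority vote."""
    votes = np.zeros(label_volumes[0].shape + (n_labels,), dtype=np.int32)
    for lab in label_volumes:
        votes += (lab[..., None] == np.arange(n_labels))   # one-hot vote of this view
    return votes.argmax(axis=-1)

# Three "views" (e.g. axial, coronal, sagittal FCN predictions) over a toy 3D grid.
rng = np.random.default_rng(0)
shape, n_labels = (16, 16, 16), 4
views = [rng.integers(0, n_labels, shape) for _ in range(3)]
fused = majority_vote(views, n_labels)
print(fused.shape, fused.min(), fused.max())
```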

  15. Porosity imaged by a vector projection algorithm correlates with fractal dimension measured on 3D models obtained by microCT.

    PubMed

    Chappard, Daniel; Stancu, Izabela-Cristina

    2015-04-01

    Porosity is an important factor to consider in a large variety of materials. Porosity can be visualized in bone or 3D synthetic biomaterials by microcomputed tomography (microCT). Blocks of porous poly(2-hydroxyethyl methacrylate) were prepared with polystyrene beads of different diameters (500, 850, 1160 and 1560 μm) and analysed by microCT. On each 2D binarized microCT section, pixels of the pores which belong to the same image column received the same pseudo-colour according to a look-up table. The same colour was applied to the same column of a frontal plane image which was constructed line by line from all images of the microCT stack. The fractal dimension Df of the frontal plane image was measured as well as the descriptors of the 3D models (porosity, 3D fractal dimension D3D, thickness, density and separation of material walls). Porosity, thickness, Df and D3D increased with the size of the porogen beads. A linear correlation was observed between Df and D3D. This method provides quantitative and qualitative analysis of porosity on a single frontal plane image of a porous object. © 2014 The Authors Journal of Microscopy © 2014 Royal Microscopical Society.

  16. High-accuracy 3D image-based registration of endoscopic video to C-arm cone-beam CT for image-guided skull base surgery

    NASA Astrophysics Data System (ADS)

    Mirota, Daniel J.; Uneri, Ali; Schafer, Sebastian; Nithiananthan, Sajendra; Reh, Douglas D.; Gallia, Gary L.; Taylor, Russell H.; Hager, Gregory D.; Siewerdsen, Jeffrey H.

    2011-03-01

    Registration of endoscopic video to preoperative CT facilitates high-precision surgery of the head, neck, and skull-base. Conventional video-CT registration is limited by the accuracy of the tracker and does not use the underlying video or CT image data. A new image-based video registration method has been developed to overcome the limitations of conventional tracker-based registration. This method adds to a navigation system based on intraoperative C-arm cone-beam CT (CBCT), in turn providing high-accuracy registration of video to the surgical scene. The resulting registration enables visualization of the CBCT and planning data within the endoscopic video. The system incorporates a mobile C-arm, integrated with an optical tracking system, video endoscopy, deformable registration of preoperative CT with intraoperative CBCT, and 3D visualization. As in the tracker-based approach, in the image-based video-CBCT registration the endoscope is first localized with the optical tracking system, followed by a direct 3D image-based registration of the video to the CBCT. In this way, the system achieves video-CBCT registration that is both fast and accurate. Application in skull-base surgery demonstrates overlay of critical structures (e.g., carotid arteries) and surgical targets with sub-mm accuracy. Phantom and cadaver experiments show consistent improvement of target registration error (TRE) in video overlay over conventional tracker-based registration, e.g., 0.92 mm versus 1.82 mm for image-based and tracker-based registration, respectively. The proposed method represents a two-fold advance: first, through registration of video to up-to-date intraoperative CBCT, and second, through direct 3D image-based video-CBCT registration, which together provide more confident visualization of target and normal tissues within up-to-date images.

  17. Automatic segmentation of solitary pulmonary nodules based on local intensity structure analysis and 3D neighborhood features in 3D chest CT images

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Kitasaka, Takayuki; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku

    2012-03-01

    This paper presents a solitary pulmonary nodule (SPN) segmentation method based on local intensity structure analysis and neighborhood feature analysis in chest CT images. Automated segmentation of SPNs is desirable for a chest computer-aided detection/diagnosis (CAD) system, since an SPN may indicate an early stage of lung cancer. Due to the similar intensities of SPNs and other chest structures such as blood vessels, many false positives (FPs) are generated by nodule detection methods. To reduce such FPs, we introduce two features that analyze the relation between each segmented nodule candidate and its neighborhood region. The proposed method utilizes a blob-like structure enhancement (BSE) filter based on Hessian analysis to augment blob-like structures as initial nodule candidates. Then a fine segmentation is performed to obtain a more accurate region for each nodule candidate. FP reduction is mainly addressed by investigating two neighborhood features, based on the volume ratio and the eigenvectors of the Hessian, that are calculated from the neighborhood region of each nodule candidate. We evaluated the proposed method using 40 chest CT images, including 20 standard-dose CT images randomly chosen from a local database and 20 low-dose CT images randomly chosen from a public database (LIDC). The experimental results revealed that the average TP rate of the proposed method was 93.6% with 12.3 FPs/case.
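
    The candidate-generation stage above relies on Hessian eigenvalue analysis: at a bright, roughly spherical structure, all three eigenvalues of the smoothed Hessian are negative and of similar magnitude. The sketch below uses a simplified, generic blobness measure, not the exact BSE filter of the paper, and the smoothing scale and toy volume are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blob_enhancement(volume, sigma=2.0):
    """Simplified Hessian-based blob filter: large response where all three
    Hessian eigenvalues are negative (a bright, roughly spherical structure)."""
    H = np.empty(volume.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            order = [0, 0, 0]
            order[i] += 1
            order[j] += 1
            # Second derivative of the Gaussian-smoothed volume along axes i and j.
            H[..., i, j] = gaussian_filter(volume, sigma, order=order)
    eig = np.linalg.eigvalsh(H)                       # ascending eigenvalues per voxel
    all_negative = eig[..., 2] < 0                    # even the largest eigenvalue is negative
    return np.where(all_negative, np.abs(eig).prod(axis=-1) ** (1.0 / 3.0), 0.0)

# Toy volume: a bright Gaussian blob ("nodule") next to a bright tube ("vessel").
zz, yy, xx = np.mgrid[:48, :48, :48].astype(float)
vol = np.exp(-((zz - 16)**2 + (yy - 16)**2 + (xx - 16)**2) / 18.0)
vol += np.exp(-((yy - 34)**2 + (xx - 34)**2) / 8.0)   # line structure along the z axis
resp = blob_enhancement(vol, sigma=2.0)
print("peak response at:", np.unravel_index(resp.argmax(), resp.shape))  # near (16, 16, 16)
```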

  18. Development and application of local 3-D x-ray CT reconstruction software for imaging critical regions in large ceramic turbine rotors

    SciTech Connect

    Sivers, E.A.; Holloway, D.L.; Ellingson, W.A.; Ling, J.

    1992-12-31

    Current 3-D X-ray CT imaging technology is limited in some cases by the size and sensitivity of the X-ray detector. This limitation can be overcome to some degree by the use of region-of-interest (ROI) reconstruction software when only part of a larger object need be examined. However, images produced from ROI data often exhibit severe density shading if they are reconstructed by unaltered 3-D X-ray CT algorithms (called Global methods here). These density artifacts can be so severe that low-contrast features are hidden. Time-consuming methods introduced previously to remedy these artifacts require specialized processing to replace or approximate the missing data outside the desired volume. Although these methods are required for true densitometry measurements, in many NDT applications only the detection of internal features or relative density variations is required. In such cases, the use of Local (or Lambda) X-ray CT, which produces an "edge-enhanced" reconstruction and requires only minor modifications of the standard 3-D X-ray CT algorithm, is recommended. Since the primary difference between Global and Local CT concerns the design of the convolution filter, two versions of a Local CT filter are discussed here. These two filters are used in a Local CT implementation to reconstruct 3-D X-ray CT data. For comparison, Global CT using the Shepp-Logan variation of the fan-beam convolution filter is used to reconstruct the same data. This comparison shows the relative merits of Local and Global CT for fairly noisy scans of large, green Si3N4 pressure-slip-cast parts. The Feldkamp modification of fan-beam CT reconstruction is used in the reconstructions. In each case, real-number, reconstructed images are scaled linearly to optimize the available grey-scale levels in the images presented here.

  19. Automated torso organ segmentation from 3D CT images using conditional random field

    NASA Astrophysics Data System (ADS)

    Nimura, Yukitaka; Hayashi, Yuichiro; Kitasaka, Takayuki; Misawa, Kazunari; Mori, Kensaku

    2016-03-01

    This paper presents a segmentation method for torso organs using a conditional random field (CRF) on medical images. Many methods have been proposed for automated extraction of organ regions from volumetric medical images; however, their empirical parameters must be tuned to obtain precise organ regions. In this paper, we propose an organ segmentation method using structured output learning based on a probabilistic graphical model. The proposed method utilizes a CRF on a three-dimensional grid as the probabilistic graphical model, with binary features that represent the relationship between voxel intensities and organ labels. We optimize the weight parameters of the CRF using a stochastic gradient descent algorithm and estimate organ labels for a given image by maximum a posteriori (MAP) estimation. The experimental results revealed that the proposed method can extract organ regions automatically using structured output learning. The error of organ label estimation was 6.6%. The Dice coefficients of the right lung, left lung, heart, liver, spleen, right kidney, and left kidney were 0.94, 0.92, 0.65, 0.67, 0.36, 0.38, and 0.37, respectively.
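
    The grid-CRF formulation can be sketched as a unary term per voxel plus a pairwise term over the 6-neighborhood. The Python fragment below is a hedged illustration: the intensity-label features and weights are made up, and MAP inference is approximated with a simple iterated-conditional-modes (ICM) sweep rather than the exact inference and SGD training used in the paper.

        # Sketch of a grid-CRF energy for voxel labeling with approximate MAP by ICM.
        # Feature design and weights are illustrative only; e.g. w = {0: (-900.0, 1e-4),
        # 1: (40.0, 1e-4)} for an air-like and a soft-tissue-like label.
        import numpy as np

        def unary_cost(volume, label, w):
            mean, weight = w[label]          # how well an intensity fits this label
            return weight * (volume - mean) ** 2

        def icm_map(volume, labels, w, pairwise=1.0, sweeps=3):
            lab = np.zeros(volume.shape, dtype=int)
            for _ in range(sweeps):
                best_cost, best_lab = None, None
                for l in labels:
                    cost_l = unary_cost(volume, l, w).astype(np.float64)
                    # Potts pairwise term: penalize disagreement with the 6-neighborhood.
                    for ax in range(3):
                        for shift in (-1, 1):
                            cost_l = cost_l + pairwise * (np.roll(lab, shift, axis=ax) != l)
                    if best_cost is None:
                        best_cost, best_lab = cost_l, np.full(volume.shape, l)
                    else:
                        better = cost_l < best_cost
                        best_cost = np.where(better, cost_l, best_cost)
                        best_lab = np.where(better, l, best_lab)
                lab = best_lab
            return lab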

  20. Improving image-guided radiation therapy of lung cancer by reconstructing 4D-CT from a single free-breathing 3D-CT on the treatment day.

    PubMed

    Wu, Guorong; Lian, Jun; Shen, Dinggang

    2012-12-01

    One of the major challenges of lung cancer radiation therapy is how to reduce the margin of the treatment field while also managing geometric uncertainty from respiratory motion. To this end, 4D-CT imaging has been widely used for treatment planning by providing the full range of respiratory motion for both tumor and normal structures. However, due to the considerable radiation dose and the limits on resources and time, typically only a free-breathing 3D-CT image is acquired on the treatment day for image-guided patient setup, which is often determined by fusing the free-breathing treatment-day and planning-day 3D-CT images. Since individual slices of the two free-breathing 3D-CTs may be acquired at different respiratory phases, the two images often look different, which makes the registration very challenging. This uncertainty of pretreatment patient setup requires a generous margin of the radiation field in order to cover the tumor sufficiently during treatment. To solve this problem, our main idea is to reconstruct the 4D-CT (with the full range of tumor motion) from a single free-breathing 3D-CT acquired on the treatment day. We first build a super-resolution 4D-CT model from a low-resolution 4D-CT on the planning day, with the temporal correspondences also established across respiratory phases. Next, we propose a 4D-to-3D image registration method to warp the 4D-CT model to the treatment-day 3D-CT while also accommodating the new motion detected on the treatment-day 3D-CT. In this way, we can more precisely localize the moving tumor on the treatment day. Specifically, since the free-breathing 3D-CT is actually a mixed-phase image where different slices are often acquired at different respiratory phases, we first determine the optimal phase for each local image patch in the free-breathing 3D-CT to obtain a sequence of partial 3D-CT images (with incomplete image data at each phase) for the treatment day. Then we reconstruct a new 4D-CT for the treatment day by
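
    The patch-wise phase-selection step can be sketched as choosing, for each patch of the mixed-phase scan, the planning-day phase whose image is most similar. The Python fragment below is a simplified illustration assuming the planning 4D-CT phases have already been resampled to the treatment-day geometry; normalized cross-correlation is an assumed similarity choice, not necessarily the paper's.

        # Sketch of patch-wise phase selection for a mixed-phase free-breathing 3D-CT.
        import numpy as np

        def ncc(a, b):
            a = (a - a.mean()) / (a.std() + 1e-6)
            b = (b - b.mean()) / (b.std() + 1e-6)
            return float((a * b).mean())

        def select_phases(free_breathing, planning_4d, patch=16):
            """planning_4d: list of 3D volumes (one per phase), aligned to the
            free-breathing scan. Returns a per-patch phase index map."""
            Z, Y, X = free_breathing.shape
            phase_map = np.zeros((Z // patch, Y // patch, X // patch), dtype=int)
            for i in range(phase_map.shape[0]):
                for j in range(phase_map.shape[1]):
                    for k in range(phase_map.shape[2]):
                        sl = (slice(i*patch, (i+1)*patch),
                              slice(j*patch, (j+1)*patch),
                              slice(k*patch, (k+1)*patch))
                        scores = [ncc(free_breathing[sl], phase[sl]) for phase in planning_4d]
                        phase_map[i, j, k] = int(np.argmax(scores))
            return phase_map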

  1. New 3D Bolton standards: coregistration of biplane x rays and 3D CT

    NASA Astrophysics Data System (ADS)

    Dean, David; Subramanyan, Krishna; Kim, Eun-Kyung

    1997-04-01

    The Bolton Standards 'normative' cohort (16 males, 16 females) has been invited back to the Bolton-Brush Growth Study Center for new biorthogonal plain-film head x-rays and 3D (three-dimensional) head CT scans. A set of 29 3D landmarks was identified on both the biplane head films and the 3D CT images. The current 3D CT image is then superimposed onto the landmarks collected from the current biplane head films. Three post-doctoral fellows have collected 37 3D landmarks from the Bolton Standards' 40- to 70-year-old biplane head films. These films were captured annually during the participants' growing period (ages 3 - 18). Using 29 of these landmarks, the current 3D CT image is next warped (via thin-plate spline) to landmarks taken from each participant's 18th-year biplane head films, a process that is successively reiterated back to age 3. This process is demonstrated here for one of the Bolton Standards. The outer skull surfaces will be extracted from each warped 3D CT image and an average will be generated for each age/sex group. The resulting longitudinal series of average 'normative' bony skull surface images may be useful for craniofacial patient diagnosis, treatment planning, stereotactic procedures, and outcomes assessment.
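
    A thin-plate-spline warp driven by landmark correspondences can be sketched with SciPy's radial basis function interpolator. This is a generic TPS mapping under assumed, synthetic landmark arrays; it is not the study's own implementation.

        # Sketch of a 3D thin-plate-spline landmark warp using SciPy.
        import numpy as np
        from scipy.interpolate import RBFInterpolator

        def tps_warp(src_landmarks, dst_landmarks, points):
            """Map arbitrary 3D points with the TPS defined by landmark pairs."""
            warp = RBFInterpolator(src_landmarks, dst_landmarks,
                                   kernel='thin_plate_spline')
            return warp(points)

        # Example with synthetic landmarks (29 correspondences, as in the study design).
        rng = np.random.default_rng(0)
        src = rng.uniform(0, 200, size=(29, 3))
        dst = src + rng.normal(0, 2, size=(29, 3))       # small simulated deformation
        surface_points = rng.uniform(0, 200, size=(1000, 3))
        warped = tps_warp(src, dst, surface_points)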

  2. Automated segmentation of 3D anatomical structures on CT images by using a deep convolutional network based on end-to-end learning approach

    NASA Astrophysics Data System (ADS)

    Zhou, Xiangrong; Takayama, Ryosuke; Wang, Song; Zhou, Xinxin; Hara, Takeshi; Fujita, Hiroshi

    2017-02-01

    We have proposed an end-to-end learning approach that trains a deep convolutional neural network (CNN) for automatic CT image segmentation, accomplishing a voxel-wise multi-class classification that directly maps each voxel of a 3D CT image to an anatomical label. The novelties of our proposed method were (1) transforming anatomical structure segmentation on 3D CT images into majority voting over the results of 2D semantic segmentation on a number of 2D slices from different image orientations, and (2) using "convolution" and "deconvolution" networks to achieve the conventional "coarse recognition" and "fine extraction" functions, integrated into a compact all-in-one deep CNN for CT image segmentation. The advantage compared to previous works was the capability to accomplish real-time segmentation of 2D slices of arbitrary CT scan range (e.g., body, chest, abdomen) and produce correspondingly sized output. In this paper, we propose an improvement of our approach by adding an organ localization module to limit the CT image range used for training and testing the deep CNNs. A database consisting of 240 3D CT scans and human-annotated ground truth was used for training (228 cases) and testing (the remaining 12 cases). We applied the improved method to segment the pancreas and left kidney regions, respectively. The preliminary results showed that the accuracy of the segmentation results improved significantly (the Jaccard index increased by 34% for the pancreas and by 8% for the kidney relative to our previous results). The effectiveness and usefulness of the proposed improvement for CT image segmentation were confirmed.
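
    The fusion step, combining per-orientation 2D predictions into one 3D label volume, reduces to a per-voxel majority vote. The sketch below assumes three label volumes (axial, coronal, and sagittal predictions) already resampled into a common geometry; variable names are illustrative.

        # Sketch of fusing per-orientation predictions by per-voxel majority voting.
        import numpy as np

        def majority_vote(pred_axial, pred_coronal, pred_sagittal, n_labels):
            stack = np.stack([pred_axial, pred_coronal, pred_sagittal])   # (3, Z, Y, X)
            counts = np.zeros((n_labels,) + pred_axial.shape, dtype=np.int32)
            for lab in range(n_labels):
                counts[lab] = (stack == lab).sum(axis=0)
            return counts.argmax(axis=0)     # ties resolve toward the smaller label id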

  3. Multimodal-3D imaging based on μMRI and μCT techniques bridges the gap with histology in visualization of the bone regeneration process.

    PubMed

    Sinibaldi, R; Conti, A; Sinjari, B; Spadone, S; Pecci, R; Palombo, M; Komlev, V S; Ortore, M G; Tromba, G; Capuani, S; De Luca, F; Caputi, S; Traini, T; Della Penna, S

    2017-06-07

    Bone repair/regeneration is usually investigated through x-ray computed microtomography (μCT) supported by histology of extracted samples, to analyze biomaterial structure and new bone formation processes. Magnetic resonance microimaging (μMRI) shows richer tissue contrast than μCT, albeit at lower resolution, and could be combined with μCT in the perspective of conducting non-destructive 3D investigations of bone. A pipeline designed to combine μMRI and μCT images of bone samples is described here and applied to samples of human jawbone cores extracted after bone grafting. We optimized the co-registration procedure between μCT and μMRI images to avoid bias due to the different resolutions and contrasts. Furthermore, we used adaptive multivariate clustering, which groups homologous voxels in the co-registered images, to visualize different tissue types within a fused 3D metastructure. The tissue grouping matched the 2D histology applied to only one slice, thus extending the histology labelling into 3D. Specifically, in all samples we could separate and map two types of regenerated bone, calcified tissue, soft tissues and/or fat, and marrow space. Remarkably, μMRI and μCT alone were not able to separate the two types of regenerated bone. Finally, we computed the volumes of each tissue in the 3D metastructures, which might be exploited in quantitative simulations. The 3D metastructure obtained through our pipeline represents a first step to bridge the gap between the quality of information obtained from 2D optical microscopy and the 3D mapping of bone tissue heterogeneity, and could allow researchers and clinicians to non-destructively characterize and follow up bone regeneration. This article is protected by copyright. All rights reserved.

  4. Automatic localization of target vertebrae in spine surgery using fast CT-to-fluoroscopy (3D-2D) image registration

    NASA Astrophysics Data System (ADS)

    Otake, Y.; Schafer, S.; Stayman, J. W.; Zbijewski, W.; Kleinszig, G.; Graumann, R.; Khanna, A. J.; Siewerdsen, J. H.

    2012-02-01

    Localization of target vertebrae is an essential step in minimally invasive spine surgery, with conventional methods relying on "level counting", i.e., manual counting of vertebrae under fluoroscopy starting from readily identifiable anatomy (e.g., the sacrum). The approach requires an undesirable amount of radiation and time, and is prone to counting errors due to the similar appearance of vertebrae in projection images; wrong-level surgery occurs in about 1 of every 3000 cases. This paper proposes a method to automatically localize target vertebrae in x-ray projections using 3D-2D registration between preoperative CT (in which vertebrae are preoperatively labeled) and intraoperative fluoroscopy. The registration uses an intensity-based approach with a gradient-based similarity metric and the CMA-ES algorithm for optimization. Digitally reconstructed radiographs (DRRs) and a robust similarity metric are computed on the GPU to accelerate the process. Evaluation in clinical CT data included 5,000 PA and LAT projections randomly perturbed to simulate human variability in the setup of a mobile intraoperative C-arm. The method demonstrated 100% success for the PA view (projection error: 0.42mm) and 99.8% success for the LAT view (projection error: 0.37mm). The initial GPU implementation provided automatic target localization within about 3 sec, with further improvement underway via multi-GPU computation. The ability to automatically label vertebrae in fluoroscopy promises to streamline surgical workflow, improve patient safety, and reduce wrong-site surgeries, especially in large patients for whom manual methods are time consuming and error prone.
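
    The core optimization loop, adjusting a 6-DOF pose so a DRR of the CT matches the fluoroscopic image, can be sketched with the Python cma package. The render_drr projector is a hypothetical placeholder, gradient correlation stands in for the paper's similarity metric, and the step size is illustrative.

        # Sketch of intensity-based 3D-2D registration with CMA-ES ('cma' package).
        import numpy as np
        import cma

        def gradient_correlation(a, b):
            """Similarity between two 2D images from correlation of their gradients."""
            score = 0.0
            for axis in (0, 1):
                ga, gb = np.gradient(a, axis=axis), np.gradient(b, axis=axis)
                ga, gb = ga - ga.mean(), gb - gb.mean()
                score += (ga * gb).sum() / (np.linalg.norm(ga) * np.linalg.norm(gb) + 1e-9)
            return score / 2.0

        def register(ct_volume, fluoro, render_drr, pose0):
            """pose0: initial [tx, ty, tz, rx, ry, rz]; returns the optimized pose."""
            def cost(pose):
                drr = render_drr(ct_volume, pose)           # hypothetical projector
                return -gradient_correlation(drr, fluoro)   # CMA-ES minimizes
            es = cma.CMAEvolutionStrategy(pose0, 5.0)       # 5 mm/deg initial step size
            es.optimize(cost)
            return es.result.xbest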

  5. Volume of myocardium perfused by coronary artery branches as estimated from 3D micro-CT images of rat hearts

    NASA Astrophysics Data System (ADS)

    Lund, Patricia E.; Naessens, Lauren C.; Seaman, Catherine A.; Reyes, Denise A.; Ritman, Erik L.

    2000-04-01

    Average myocardial perfusion is remarkably consistent throughout the heart wall under resting conditions, and the velocity of blood flow is fairly reproducible from artery to artery. Based on these observations, and the fact that flow through an artery is the product of arterial cross-sectional area and blood flow velocity, we would expect the volume of myocardium perfused to be proportional to the cross-sectional area of the coronary artery perfusing that volume of myocardium. This relationship has been confirmed by others in pigs, dogs and humans. To test the body-size dependence of this relationship we used the hearts of rats from 3 through 25 weeks of age. The coronary arteries were infused with radiopaque Microfil polymer and the hearts were scanned in a micro-CT scanner. Using these 3D images we measured the volume of myocardium and the cross-sectional area of the artery that perfused that volume of myocardium. The average constant of proportionality was found to be 0.15 ± 0.08 cm3/mm2. Our data showed no statistically significant difference in the constant of proportionality among rat hearts of different ages, nor between the left and right coronary arteries. This constant is smaller than that observed in large animals and humans, but this difference is consistent with the body-mass dependence of metabolic rate.

  6. Automatic registration between 3D intra-operative ultrasound and pre-operative CT images of the liver based on robust edge matching

    NASA Astrophysics Data System (ADS)

    Nam, Woo Hyun; Kang, Dong-Goo; Lee, Duhgoon; Lee, Jae Young; Ra, Jong Beom

    2012-01-01

    The registration of a three-dimensional (3D) ultrasound (US) image with a computed tomography (CT) or magnetic resonance image is beneficial in various clinical applications such as diagnosis and image-guided intervention of the liver. However, conventional methods usually require a time-consuming and inconvenient manual process for pre-alignment, and the success of this process strongly depends on the proper selection of initial transformation parameters. In this paper, we present an automatic feature-based affine registration procedure of 3D intra-operative US and pre-operative CT images of the liver. In the registration procedure, we first segment vessel lumens and the liver surface from a 3D B-mode US image. We then automatically estimate an initial registration transformation by using the proposed edge matching algorithm. The algorithm finds the most likely correspondences between the vessel centerlines of both images in a non-iterative manner based on a modified Viterbi algorithm. Finally, the registration is iteratively refined on the basis of the global affine transformation by jointly using the vessel and liver surface information. The proposed registration algorithm is validated on synthesized datasets and 20 clinical datasets, through both qualitative and quantitative evaluations. Experimental results show that automatic registration can be successfully achieved between 3D B-mode US and CT images even with a large initial misalignment.
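
    The centerline correspondence step can be illustrated with a small Viterbi-style dynamic program. This Python sketch assigns each ultrasound centerline point to a CT centerline point by trading off point-to-point distance against smoothness of the assignment; the cost terms and the sequential point ordering are assumptions standing in for the paper's modified Viterbi algorithm.

        # Sketch of a Viterbi-style DP matching of US vessel centerline points to CT ones.
        import numpy as np

        def match_centerlines(us_pts, ct_pts, smooth_weight=1.0):
            n, m = len(us_pts), len(ct_pts)
            dist = np.linalg.norm(us_pts[:, None, :] - ct_pts[None, :, :], axis=2)  # (n, m)
            cost = np.full((n, m), np.inf)
            back = np.zeros((n, m), dtype=int)
            cost[0] = dist[0]
            for i in range(1, n):
                for j in range(m):
                    # Transition penalty discourages jumps along the CT centerline.
                    trans = cost[i - 1] + smooth_weight * np.abs(np.arange(m) - j)
                    back[i, j] = int(np.argmin(trans))
                    cost[i, j] = dist[i, j] + trans[back[i, j]]
            # Backtrack the best assignment.
            path = [int(np.argmin(cost[-1]))]
            for i in range(n - 1, 0, -1):
                path.append(back[i, path[-1]])
            return path[::-1]     # path[i] = index of CT point matched to us_pts[i]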

  7. A novel 3D graph cut based co-segmentation of lung tumor on PET-CT images with Gaussian mixture models

    NASA Astrophysics Data System (ADS)

    Yu, Kai; Chen, Xinjian; Shi, Fei; Zhu, Weifang; Zhang, Bin; Xiang, Dehui

    2016-03-01

    Positron Emission Tomography (PET) and Computed Tomography (CT) have been widely used in clinical practice for radiation therapy. Most existing methods use only one image modality, either PET or CT, and therefore suffer from the low spatial resolution of PET or the low contrast of CT. In this paper, a novel 3D graph cut method is proposed that integrates Gaussian Mixture Models (GMMs) into the graph cut framework. We also employed the random walk method as an initialization step to provide object seeds and improve the graph cut based segmentation on PET and CT images. The constructed graph consists of two sub-graphs and special links between the sub-graphs that penalize differences between the segmentations of the two modalities. Finally, the segmentation problem is solved by the max-flow/min-cut method. The proposed method was tested on 20 patients' PET-CT images, and the experimental results demonstrated the accuracy and efficiency of the proposed algorithm.
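
    A co-segmentation graph of this form can be sketched with the PyMaxflow library: one grid sub-graph per modality, GMM-based unary terms, intra-modality smoothness, and inter-modality links that penalize label disagreement. The unary costs, weights, and seed handling below are illustrative assumptions, and the source/sink polarity convention is glossed over; this is not the paper's exact construction.

        # Sketch of a PET-CT co-segmentation graph using PyMaxflow and scikit-learn GMMs.
        import numpy as np
        import maxflow
        from sklearn.mixture import GaussianMixture

        def gmm_neg_log_likelihood(image, seeds_fg, seeds_bg, n_components=2):
            """Fit one GMM to foreground seeds and one to background seeds; return
            per-voxel negative log-likelihoods (fg_cost, bg_cost)."""
            fg = GaussianMixture(n_components).fit(image[seeds_fg].reshape(-1, 1))
            bg = GaussianMixture(n_components).fit(image[seeds_bg].reshape(-1, 1))
            flat = image.reshape(-1, 1)
            return (-fg.score_samples(flat).reshape(image.shape),
                    -bg.score_samples(flat).reshape(image.shape))

        def cosegment(pet, ct, seeds_fg, seeds_bg, smooth=1.0, link=2.0):
            g = maxflow.Graph[float]()
            pet_nodes = g.add_grid_nodes(pet.shape)
            ct_nodes = g.add_grid_nodes(ct.shape)
            for img, nodes in ((pet, pet_nodes), (ct, ct_nodes)):
                fg_cost, bg_cost = gmm_neg_log_likelihood(img, seeds_fg, seeds_bg)
                g.add_grid_edges(nodes, smooth)                 # intra-modality smoothness
                g.add_grid_tedges(nodes, fg_cost, bg_cost)      # unary terms
            # Inter-modality links penalize different labels at corresponding voxels.
            for p, c in zip(pet_nodes.ravel(), ct_nodes.ravel()):
                g.add_edge(int(p), int(c), link, link)
            g.maxflow()
            return g.get_grid_segments(pet_nodes), g.get_grid_segments(ct_nodes)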

  8. Automatic organ localizations on 3D CT images by using majority-voting of multiple 2D detections based on local binary patterns and Haar-like features

    NASA Astrophysics Data System (ADS)

    Zhou, Xiangrong; Yamaguchi, Shoutarou; Zhou, Xinxin; Chen, Huayue; Hara, Takeshi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Fujita, Hiroshi

    2013-02-01

    This paper describes an approach for fast and automatic localization of different inner organ regions on 3D CT scans. The proposed approach combines object detection and majority voting to achieve robust and quick organ localization. The basic idea of the proposed method is to detect a number of 2D partial appearances of a 3D target region on CT images from multiple body directions, at multiple image scales, and using multiple feature spaces, and then to vote all the 2D detection results back into the 3D image space to statistically decide one 3D bounding box of the target organ. Ensemble learning was used to train the multiple 2D detectors based on template matching in local binary pattern and Haar-like feature spaces. Collaborative voting was used to decide the corner coordinates of the 3D bounding box of the target organ region, based on the coordinate histograms of the detection results in the three body directions. Since the architecture of the proposed method (multiple independent detections connected to a majority vote) naturally fits the parallel computing paradigm and multi-core CPU hardware, the proposed algorithm easily achieves high computational efficiency for organ localization on a whole-body CT scan using general-purpose computers. We applied this approach to the localization of 12 kinds of major organ regions independently on 1,300 torso CT scans. In our experiments, we randomly selected 300 CT scans (with human-indicated organ and tissue locations) for training and then applied the proposed approach with the training results to localize each of the target regions on the other 1,000 CT scans for performance testing. The experimental results demonstrated the ability of the proposed approach to automatically locate different kinds of organs on whole-body CT scans.

  9. Detecting Radiation-Induced Injury Using Rapid 3D Variogram Analysis of CT Images of Rat Lungs

    PubMed Central

    Jacob, Richard E.; Murphy, Mark K.; Creim, Jeffrey A.; Carson, James P.

    2014-01-01

    Rationale and Objectives: To investigate the ability of variogram analysis of octree-decomposed CT images and volume change maps to detect radiation-induced damage in rat lungs. Materials and Methods: The lungs of female Sprague-Dawley rats were exposed to one of five absorbed doses (0, 6, 9, 12, or 15 Gy) of gamma radiation from a Co-60 source. At 6 months post-exposure, pulmonary function tests were performed and 4DCT images were acquired using a respiratory-gated microCT scanner. Volume change maps were then calculated from the 4DCT images. Octree decomposition was performed on the CT images and volume change maps, and variogram analysis was applied to the decomposed images. Correlations of measured parameters with dose were evaluated. Results: The effects of irradiation were not detectable from measured parameters, indicating only mild lung damage. Additionally, there were no significant correlations of pulmonary function results or CT densitometry with radiation dose. However, the variogram analysis did detect a significant correlation with dose in both the CT images (r=−0.57, p=0.003) and the volume change maps (r=−0.53, p=0.008). Conclusion: This is the first study to utilize variogram analysis of lung images to assess pulmonary damage in a model of radiation injury. Results show that this approach is more sensitive for detecting radiation damage than conventional measures such as pulmonary function tests or CT densitometry. PMID:24029058
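
    An empirical semivariogram over the decomposed blocks can be sketched as half the mean squared difference of block statistics, binned by separation distance. The Python fragment below assumes block centroids and per-block values have already been extracted from the octree decomposition; the binning choices are illustrative.

        # Sketch of an empirical semivariogram over octree-block statistics.
        import numpy as np

        def semivariogram(centers, values, bin_width=5.0, max_lag=100.0):
            """centers: (N, 3) block centroids; values: (N,) block statistics."""
            bins = np.arange(0.0, max_lag + bin_width, bin_width)
            gamma = np.zeros(len(bins) - 1)
            counts = np.zeros(len(bins) - 1, dtype=int)
            for i in range(len(values)):
                d = np.linalg.norm(centers[i + 1:] - centers[i], axis=1)
                sq = 0.5 * (values[i + 1:] - values[i]) ** 2
                idx = np.digitize(d, bins) - 1
                ok = (idx >= 0) & (idx < len(gamma))
                np.add.at(gamma, idx[ok], sq[ok])
                np.add.at(counts, idx[ok], 1)
            return bins[:-1] + bin_width / 2, gamma / np.maximum(counts, 1)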

  10. Fusion of cone-beam CT and 3D photographic images for soft tissue simulation in maxillofacial surgery

    NASA Astrophysics Data System (ADS)

    Chung, Soyoung; Kim, Joojin; Hong, Helen

    2016-03-01

    During maxillofacial surgery, prediction of the facial outcome after surgery is a main concern for both surgeons and patients. However, registration of facial CBCT images and 3D photographic images is difficult because regions around the eyes and mouth are affected by facial expressions and because registration is slow due to the dense point clouds on the surfaces. Therefore, we propose a framework for the fusion of facial CBCT images and 3D photographs using skin segmentation and two-stage surface registration. Our method is composed of three major steps. First, to obtain a CBCT skin surface for registration with the 3D photographic surface, the skin is automatically segmented from the CBCT images and the skin surface is generated by surface modeling. Second, to roughly align the scale and orientation of the CBCT skin surface and the 3D photographic surface, point-based registration with four corresponding landmarks located around the mouth is performed. Finally, to merge the CBCT skin surface and the 3D photographic surface, Gaussian-weight-based surface registration is performed within a narrow band of the 3D photographic surface.
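
    The rough alignment step, a similarity transform estimated from four corresponding landmarks, can be sketched with a least-squares (Umeyama-style) fit. The landmark arrays are placeholders and the paper does not specify this exact estimator, so treat it as one reasonable realization of point-based registration.

        # Sketch of a similarity transform (scale, rotation, translation) from landmarks.
        import numpy as np

        def similarity_from_landmarks(src, dst):
            """src, dst: (N, 3) corresponding landmarks. Returns (scale, R, t)."""
            mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
            src_c, dst_c = src - mu_s, dst - mu_d
            U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
            d = np.sign(np.linalg.det(U @ Vt))           # guard against reflections
            D = np.diag([1.0, 1.0, d])
            R = U @ D @ Vt
            scale = np.trace(np.diag(S) @ D) / (src_c ** 2).sum()
            t = mu_d - scale * R @ mu_s
            return scale, R, t

        # Apply to any surface point p of the CBCT skin: p_new = scale * (R @ p) + t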

  11. Iodine and freeze-drying enhanced high-resolution MicroCT imaging for reconstructing 3D intraneural topography of human peripheral nerve fascicles.

    PubMed

    Yan, Liwei; Guo, Yongze; Qi, Jian; Zhu, Qingtang; Gu, Liqiang; Zheng, Canbin; Lin, Tao; Lu, Yutong; Zeng, Zitao; Yu, Sha; Zhu, Shuang; Zhou, Xiang; Zhang, Xi; Du, Yunfei; Yao, Zhi; Lu, Yao; Liu, Xiaolin

    2017-08-01

    The precise annotation and accurate identification of the topography of fascicles to the end organs are prerequisites for studying human peripheral nerves. In this study, we present a feasible imaging method that acquires 3D high-resolution (HR) topography of peripheral nerve fascicles using an iodine and freeze-drying (IFD) micro-computed tomography (microCT) method to greatly increase the contrast of fascicle images. The enhanced microCT imaging method can facilitate the reconstruction of high-contrast HR fascicle images, fascicle segmentation and extraction, feature analysis, and the tracing of fascicle topography to end organs, which define fascicle functions. The complex intraneural aggregation and distribution of fascicles is typically assessed using histological techniques or MR imaging to acquire coarse axial three-dimensional (3D) maps. However, the disadvantages of histological techniques (static, axial manual registration, and data instability) and MR imaging (low-resolution) limit these applications in reconstructing the topography of nerve fascicles. Thus, enhanced microCT is a new technique for acquiring 3D intraneural topography of the human peripheral nerve fascicles both to improve our understanding of neurobiological principles and to guide accurate repair in the clinic. Additionally, 3D microstructure data can be used as a biofabrication model, which in turn can be used to fabricate scaffolds to repair long nerve gaps. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. The Impact of Different Levels of Adaptive Iterative Dose Reduction 3D on Image Quality of 320-Row Coronary CT Angiography: A Clinical Trial

    PubMed Central

    Feger, Sarah; Rief, Matthias; Zimmermann, Elke; Martus, Peter; Schuijf, Joanne Désirée; Blobel, Jörg; Richter, Felicitas; Dewey, Marc

    2015-01-01

    Purpose: The aim of this study was the systematic image quality evaluation of coronary CT angiography (CTA), reconstructed with the 3 different levels of adaptive iterative dose reduction (AIDR 3D) and compared to filtered back projection (FBP) with quantum denoising software (QDS). Methods: Standard-dose CTA raw data of 30 patients with a mean radiation dose of 3.2 ± 2.6 mSv were reconstructed using AIDR 3D mild, standard, and strong, and compared to FBP/QDS. Objective image quality comparison (signal, noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), contour sharpness) was performed using 21 measurement points per patient, including measurements in each coronary artery from proximal to distal. Results: Objective image quality parameters improved with increasing levels of AIDR 3D. Noise was lowest in AIDR 3D strong (p≤0.001 at 20/21 measurement points; compared with FBP/QDS). Signal and contour sharpness analysis showed no significant difference between the reconstruction algorithms for most measurement points. The best coronary SNR and CNR were achieved with AIDR 3D strong. No loss of SNR or CNR in distal segments was seen with AIDR 3D as compared to FBP. Conclusions: On standard-dose coronary CTA images, AIDR 3D strong showed higher objective image quality than FBP/QDS without reducing contour sharpness. Trial Registration: Clinicaltrials.gov NCT00967876 PMID:25945924

  13. The effect of spatial micro-CT image resolution and surface complexity on the morphological 3D analysis of open porous structures

    SciTech Connect

    Pyka, Grzegorz; Kerckhofs, Greet

    2014-01-15

    In material science, microfocus X-ray computed tomography (micro-CT) is one of the most popular non-destructive techniques to visualise and quantify the internal structure of materials in 3D. Despite constant system improvements, state-of-the-art micro-CT images can still contain several artefacts typical of X-ray CT imaging that hinder further image-based processing and structural and quantitative analysis. For example, spatial resolution is crucial for appropriate characterisation, as the voxel size strongly influences the partial volume effect. However, defining an adequate image resolution is not trivial, and understanding the correlation between scan parameters such as voxel size and the structural properties is crucial for comprehensive material characterisation using micro-CT. Therefore, the objective of this study was to evaluate the influence of spatial image resolution on the micro-CT based morphological analysis of three-dimensional (3D) open porous structures with high surface complexity. In particular, the correlation between local surface properties and the accuracy of the micro-CT-based macro-morphology of 3D open porous Ti6Al4V structures produced by selective laser melting (SLM) was investigated, revealing for rough surfaces a strong dependence of the resulting structural characteristics on the scan resolution. Reducing the surface complexity by chemical etching decreased the sensitivity of the overall morphological analysis to the spatial image resolution and increased the detection limit. This study showed that scan settings and image processing parameters need to be customized to the material properties, the morphological parameters under investigation, and the desired final characteristics (in relation to the intended functional use). Customization of the scan resolution can increase the reliability of the micro-CT based analysis and at the same time reduce its operating costs.

  14. SU-C-201-06: Utility of Quantitative 3D SPECT/CT Imaging in Patient Specific Internal Dosimetry of 153-Samarium with GATE Monte Carlo Package

    SciTech Connect

    Fallahpoor, M; Abbasi, M; Sen, A; Parach, A; Kalantari, F

    2015-06-15

    Purpose: Patient-specific 3-dimensional (3D) internal dosimetry in targeted radionuclide therapy is essential for efficient treatment. Two major steps are needed to achieve reliable results: 1) generating quantitative 3D images of the radionuclide distribution and attenuation coefficients, and 2) using a reliable method for dose calculation based on the activity and attenuation maps. In this research, internal dosimetry for 153-Samarium (153-Sm) was performed using SPECT/CT images coupled with the GATE Monte Carlo package. Methods: A 50-year-old woman with bone metastases from breast cancer was prescribed 153-Sm treatment (gamma: 103 keV; beta: 0.81 MeV). A SPECT/CT scan was performed with a Siemens Symbia T scanner. SPECT and CT images were registered using the default registration software. SPECT quantification was achieved by compensating for all image-degrading factors, including body attenuation, Compton scattering, and collimator-detector response (CDR). The triple energy window method was used to estimate and eliminate the scattered photons. Iterative ordered-subsets expectation maximization (OSEM) with correction for attenuation and distance-dependent CDR was used for image reconstruction. Bilinear energy mapping was used to convert Hounsfield units in the CT image to an attenuation map. Organ borders were defined by itk-SNAP segmentation on the CT image. GATE was then used for the internal dose calculation. The Specific Absorbed Fractions (SAFs) and S-values were reported according to the MIRD schema. Results: The results showed that the largest SAFs and S-values are in osseous organs, as expected. The S-value for the lung is the highest after the spine, which can be important in 153-Sm therapy. Conclusion: We presented the utility of SPECT/CT images and Monte Carlo simulation for patient-specific dosimetry as a reliable and accurate method. It has several advantages over template-based or simplified dose estimation methods. With the advent of high-speed computers, Monte Carlo can be used for treatment planning
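
    The Hounsfield-unit-to-attenuation conversion can be sketched as a two-segment (bilinear) mapping. The slopes and reference attenuation coefficients below are illustrative placeholders, not calibrated values for 103 keV; a clinical implementation would use scanner- and energy-specific calibration.

        # Sketch of a bilinear HU-to-attenuation conversion for an attenuation map.
        import numpy as np

        def hu_to_mu(hu, mu_water=0.15, mu_bone=0.25):
            """hu: CT volume in Hounsfield units; returns linear attenuation in 1/cm.
            mu_water and mu_bone are placeholder coefficients at the SPECT energy."""
            hu = np.asarray(hu, dtype=np.float32)
            mu = np.where(
                hu <= 0,
                mu_water * (hu + 1000.0) / 1000.0,              # air-to-water segment
                mu_water + hu * (mu_bone - mu_water) / 1000.0,  # water-to-bone segment
            )
            return np.clip(mu, 0.0, None)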

  15. Correlative 3D-imaging of Pipistrellus penis micromorphology: Validating quantitative microCT images with undecalcified serial ground section histomorphology.

    PubMed

    Herdina, Anna Nele; Plenk, Hanns; Benda, Petr; Lina, Peter H C; Herzig-Straschil, Barbara; Hilgers, Helge; Metscher, Brian D

    2015-06-01

    Detailed knowledge of histomorphology is a prerequisite for the understanding of function, variation, and development. In bats, as in other mammals, penis and baculum morphology are important in species discrimination and phylogenetic studies. In this study, nondestructive 3D-microtomographic (microCT, µCT) images of bacula and iodine-stained penes of Pipistrellus pipistrellus were correlated with light microscopic images from undecalcified surface-stained ground sections of three of these penes of P. pipistrellus (1 juvenile). The results were then compared with µCT-images of bacula of P. pygmaeus, P. hanaki, and P. nathusii. The Y-shaped baculum in all studied Pipistrellus species has a proximal base with two club-shaped branches, a long slender shaft, and a forked distal tip. The branches contain a medullary cavity of variable size, which tapers into a central canal of variable length in the proximal baculum shaft. Both are surrounded by a lamellar and a woven bone layer and contain fatty marrow and blood vessels. The distal shaft consists of woven bone only, without a vascular canal. The proximal ends of the branches are connected with the tunica albuginea of the corpora cavernosa via entheses. In the penis shaft, the corpus spongiosum-surrounded urethra lies in a ventral grove of the corpora cavernosa, and continues in the glans under the baculum. The glans penis predominantly comprises an enlarged corpus spongiosum, which surrounds urethra and baculum. In the 12 studied juvenile and subadult P. pipistrellus specimens the proximal branches of the baculum were shorter and without marrow cavity, while shaft and distal tip appeared already fully developed. The present combination with light microscopic images from one species enabled a more reliable interpretation of histomorphological structures in the µCT-images from all four Pipistrellus species. © 2015 Wiley Periodicals, Inc.

  16. Evaluation of the combined effects of target size, respiratory motion and background activity on 3D and 4D PET/CT images

    NASA Astrophysics Data System (ADS)

    Park, Sang-June; Ionascu, Dan; Killoran, Joseph; Mamede, Marcelo; Gerbaudo, Victor H.; Chin, Lee; Berbeco, Ross

    2008-07-01

    Gated (4D) PET/CT has the potential to greatly improve the accuracy of radiotherapy at treatment sites where internal organ motion is significant. However, the best methodology for applying 4D-PET/CT to target definition is not currently well established. With the goal of better understanding how to best apply 4D information to radiotherapy, initial studies were performed to investigate the effect of target size, respiratory motion and target-to-background activity concentration ratio (TBR) on 3D (ungated) and 4D PET images. Using a PET/CT scanner with 4D (gating) capability, a full 3D-PET scan corrected with a 3D attenuation map from the 3D-CT scan and a respiratory-gated (4D) PET scan corrected with corresponding attenuation maps from the 4D-CT were performed by imaging spherical targets (0.5-26.5 mL) filled with 18F-FDG in a dynamic thorax phantom and a NEMA IEC body phantom at different TBRs (infinite, 8 and 4). To simulate respiratory motion, the phantoms were driven sinusoidally in the superior-inferior direction with amplitudes of 0, 1 and 2 cm and a period of 4.5 s. Recovery coefficients were determined on the PET images. In addition, gating methods using different numbers of gating bins (1-20 bins) were evaluated with respect to image noise and temporal resolution. For this evaluation, the volume recovery coefficient, signal-to-noise ratio and contrast-to-noise ratio were calculated as a function of the number of gating bins. Moreover, the optimum thresholds which give accurate moving target volumes were obtained for 3D and 4D images. The partial volume effect and the signal loss in the 3D-PET images, due to the limited PET resolution and the respiratory motion, respectively, were measured. The results show that signal loss depends on both the amplitude and the pattern of respiratory motion. However, 4D-PET successfully recovers most of the loss induced by the respiratory motion. The 5-bin gating method gives the best temporal resolution with acceptable image noise. The results based on the 4D

  17. An Optimized Spline-Based Registration of a 3D CT to a Set of C-Arm Images.

    PubMed

    Jonić, S; Thévenaz, P; Zheng, G; Nolte, L-P; Unser, M

    2006-01-01

    We have developed an algorithm for the rigid-body registration of a CT volume to a set of C-arm images. The algorithm uses a gradient-based iterative minimization of a least-squares measure of dissimilarity between the C-arm images and projections of the CT volume. To compute projections, we use a novel method for fast integration of the volume along rays. To improve robustness and speed, we take advantage of a coarse-to-fine processing of the volume/image pyramids. To compute the projections of the volume, the gradient of the dissimilarity measure, and the multiresolution data pyramids, we use a continuous image/volume model based on cubic B-splines, which ensures a high interpolation accuracy and a gradient of the dissimilarity measure that is well defined everywhere. We show the performance of our algorithm on a human spine phantom, where the true alignment is determined using a set of fiducial markers.

  18. Detecting Radiation-Induced Injury Using Rapid 3D Variogram Analysis of CT Images of Rat Lungs

    SciTech Connect

    Jacob, Rick E.; Murphy, Mark K.; Creim, Jeffrey A.; Carson, James P.

    2013-10-01

    A new heterogeneity analysis approach to discern radiation-induced lung damage was tested on CT images of irradiated rats. The method, combining octree decomposition with variogram analysis, demonstrated a significant correlation with radiation exposure levels, whereas conventional measurements and pulmonary function tests did not. The results suggest the new approach may be highly sensitive for assessing even subtle radiation-induced changes.

  19. Heterodyne 3D ghost imaging

    NASA Astrophysics Data System (ADS)

    Yang, Xu; Zhang, Yong; Yang, Chenghua; Xu, Lu; Wang, Qiang; Zhao, Yuan

    2016-06-01

    Conventional three-dimensional (3D) ghost imaging measures the range of a target based on a pulse time-of-flight measurement. Due to the limited sampling rate of the data acquisition system, the range resolution of conventional 3D ghost imaging is usually low. To remove the effect of the sampling rate on the range resolution of 3D ghost imaging, a heterodyne 3D ghost imaging (HGI) system is presented in this study. The source of HGI is a continuous-wave laser instead of a pulsed laser. Both the temporal correlation and the spatial correlation of light are utilized to obtain the range image of the target. Through theoretical analysis and numerical simulations, it is demonstrated that HGI can obtain high-range-resolution images with a low sampling rate.

  20. Determination of 3D location and rotation of lumbar vertebrae in CT images by symmetry-based auto-registration

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaž; Likar, Boštjan; Pernuš, Franjo

    2007-03-01

    Quantitative measurement of vertebral rotation is important in surgical planning, analysis of surgical results, and monitoring of the progression of spinal deformities. However, many established and newly developed techniques for measuring axial vertebral rotation do not exploit three-dimensional (3D) information, which may result in virtual axial rotation because of the sagittal and coronal rotation of vertebrae. We propose a novel automatic approach to the measurement of the location and rotation of vertebrae in 3D without prior volume reformation, identification of appropriate cross-sections or aid by statistical models. The vertebra under investigation is encompassed by a mask in the form of an elliptical cylinder in 3D, defined by its center of rotation and the rotation angles. We exploit the natural symmetry of the vertebral body, vertebral column and vertebral canal by dividing the vertebral mask by its mid-axial, mid-sagittal and mid-coronal plane, so that the obtained volume pairs contain symmetrical parts of the observed anatomy. Mirror volume pairs are then simultaneously registered to each other by robust rigid auto-registration, using the weighted sum of absolute differences between the intensities of the corresponding volume pairs as the similarity measure. The method was evaluated on 50 lumbar vertebrae from normal and scoliotic computed tomography (CT) spinal scans, showing relatively large capture ranges and distinctive maxima at the correct locations and rotation angles. The proposed method may aid the measurement of the dimensions of vertebral pedicles, foraminae and canal, and may be a valuable tool for clinical evaluation of the spinal deformities in 3D.
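
    The core of the symmetry-based registration is a similarity score between a masked sub-volume and its mirror image about a candidate symmetry plane. The Python sketch below scores one mid-sagittal candidate using a weighted sum of absolute differences; resampling the volume into the candidate coordinate frame and searching over plane parameters are omitted, and the uniform weighting is an assumption.

        # Sketch of a mirror-symmetry score for one candidate mid-sagittal plane.
        import numpy as np

        def sagittal_asymmetry(volume, weights=None):
            """volume: sub-volume already resampled so the candidate mid-sagittal
            plane is its central x-slice; lower scores mean more symmetric."""
            mirrored = volume[:, :, ::-1]
            diff = np.abs(volume - mirrored)
            if weights is None:
                weights = np.ones_like(diff)
            return float((weights * diff).sum() / weights.sum())

        # A full search would evaluate this score over candidate locations and
        # rotation angles (e.g., with a general-purpose optimizer) and keep the best.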

  1. Cardiac image reconstruction on a 16-slice CT scanner using a retrospectively ECG-gated multicycle 3D back-projection algorithm

    NASA Astrophysics Data System (ADS)

    Shechter, Gilad; Naveh, Galit; Altman, Ami; Proksa, Roland M.; Grass, Michael

    2003-05-01

    Fast 16-slice spiral CT delivers superior cardiac visualization in comparison to older generation 2- to 8-slice scanners due to the combination of high temporal resolution along with isotropic spatial resolution and large coverage. The large beam opening of such scanners necessitates the use of adequate algorithms to avoid cone beam artifacts. We have developed a multi-cycle phase selective 3D back projection reconstruction algorithm that provides excellent temporal and spatial resolution for 16-slice CT cardiac images free of cone beam artifacts.

  2. MicroCT for comparative morphology: simple staining methods allow high-contrast 3D imaging of diverse non-mineralized animal tissues.

    PubMed

    Metscher, Brian D

    2009-06-22

    Comparative, functional, and developmental studies of animal morphology require accurate visualization of three-dimensional structures, but few widely applicable methods exist for non-destructive whole-volume imaging of animal tissues. Quantitative studies in particular require accurately aligned and calibrated volume images of animal structures. X-ray microtomography (microCT) has the potential to produce quantitative 3D images of small biological samples, but its widespread use for non-mineralized tissues has been limited by the low x-ray contrast of soft tissues. Although osmium staining and a few other techniques have been used for contrast enhancement, generally useful methods for microCT imaging for comparative morphology are still lacking. Several very simple and versatile staining methods are presented for microCT imaging of animal soft tissues, along with advice on tissue fixation and sample preparation. The stains, based on inorganic iodine and phosphotungstic acid, are easier to handle and much less toxic than osmium, and they produce high-contrast x-ray images of a wide variety of soft tissues. The breadth of possible applications is illustrated with a few microCT images of model and non-model animals, including volume and section images of vertebrates, embryos, insects, and other invertebrates. Each image dataset contains x-ray absorbance values for every point in the imaged volume, and objects as small as individual muscle fibers and single blood cells can be resolved in their original locations and orientations within the sample. With very simple contrast staining, microCT imaging can produce quantitative, high-resolution, high-contrast volume images of animal soft tissues, without destroying the specimens and with possibilities of combining with other preparation and imaging methods. Such images are expected to be useful in comparative, developmental, functional, and quantitative studies of morphology.

  3. 3-D threat image projection

    NASA Astrophysics Data System (ADS)

    Yildiz, Yesna O.; Abraham, Douglas Q.; Agaian, Sos; Panetta, Karen

    2008-02-01

    Automated Explosive Detection Systems utilizing Computed Tomography perform a series of X-ray scans of passenger bags being checked in at the airport, and produce various 2-D projection images and 3-D volumetric images of each bag. The determination as to whether a passenger bag contains an explosive and needs to be searched manually is made by trained Transportation Security Administration screeners following an approved protocol. In order to keep the screeners vigilant with regard to screening quality, the Transportation Security Administration has mandated the use of Threat Image Projection on 2-D projection X-ray screening equipment used at all US airports. These algorithms insert artificial visual threats into images of normal passenger bags in order to test the screeners' efficiency and quality in detecting threats. This technology for 2-D X-ray systems is proven and is widespread among multiple manufacturers of X-ray projection systems. Until now, Threat Image Projection has been unsuccessful at being introduced into 3-D Automated Explosive Detection Systems, for numerous reasons. The failure of these prior attempts is mainly due to imaging cues that the screeners pick up on, which make it easy for them to discern the presence of the threat image, thus defeating the intended purpose. This paper presents a novel approach for 3-D Threat Image Projection for 3-D Automated Explosive Detection Systems. The method presented here is a projection-based approach where both the threat object and the bag remain in projection sinogram space. Novel approaches have been developed for projection-based object segmentation, projection-based streak reduction used for threat object isolation along with scan orientation independence, and projection-based streak generation for an overall realistic 3-D image. The algorithms are prototyped in MatLab and C++ and demonstrate non-discernible 3-D threat

  4. SU-E-T-296: Dosimetric Analysis of Small Animal Image-Guided Irradiator Using High Resolution Optical CT Imaging of 3D Dosimeters

    SciTech Connect

    Na, Y; Qian, X; Wuu, C; Adamovics, J

    2015-06-15

    Purpose: To verify the dosimetric characteristics of a small animal image-guided irradiator using high-resolution optical CT imaging of 3D dosimeters. Methods: PRESAGE 3D dosimeters were used to determine the dosimetric characteristics of a small animal image-guided irradiator and were compared with EBT2 films. Cylindrical PRESAGE dosimeters with 7cm height and 6cm diameter were placed along the central axis of the beam. The films were positioned between 6×6cm2 plastic water phantom cubes perpendicular to the beam direction at multiple depths. The PRESAGE dosimeters and EBT2 films were then irradiated with the irradiator beams at 220kVp and 13mA. Each of the irradiated PRESAGE dosimeters, named PA1, PA2, PB1, and PB2, was independently scanned using a high-resolution single-laser-beam optical CT scanner. The transverse images were reconstructed with a high-resolution 0.1mm pixel size. A commercial Epson Expression 10000XL flatbed scanner was used for readout of the irradiated EBT2 films at a 0.4mm pixel resolution. PDD curves and beam profiles were measured for the irradiated PRESAGE dosimeters and EBT2 films. Results: The PDD agreement between the irradiated PRESAGE dosimeters PA1, PA2, PB1, and PB2 and the EBT2 films was 1.7, 2.3, 1.9, and 1.9%, respectively, for the multiple depths at 1, 5, 10, 15, 20, 30, 40 and 50mm. The FWHM measurements for each PRESAGE dosimeter and the film agreed within 0.5, 1.1, 0.4, and 1.7%, respectively, at 30mm depth. Both PDD and FWHM measurements for the PRESAGE dosimeters and the films agreed overall within 2%. The 20%–80% penumbral widths of each PRESAGE dosimeter and the film at a given depth were found to be 0.97, 0.91, 0.79, 0.88, and 0.37mm, respectively. Conclusion: The dosimetric characteristics of a small animal image-guided irradiator have been demonstrated with measurements of PRESAGE dosimeters and EBT2 film. With the high resolution and accuracy obtained from this 3D dosimetry system, precise targeting small animal irradiation can be

  5. Ct3d: tracking microglia motility in 3D using a novel cosegmentation approach.

    PubMed

    Xiao, Hang; Li, Ying; Du, Jiulin; Mosig, Axel

    2011-02-15

    Cell tracking is an important method to quantitatively analyze time-lapse microscopy data. While numerous methods and tools exist for tracking cells in 2D time-lapse images, only a few, very application-specific tracking tools are available for 3D time-lapse images, which are of high relevance in immunoimaging, in particular for studying the motility of microglia in vivo. We introduce a novel algorithm for tracking cells in 3D time-lapse microscopy data, based on computing cosegmentations between component trees representing individual time frames using so-called tree assignments. For the first time, our method allows tracking of microglia in three-dimensional confocal time-lapse microscopy images. We also evaluate our method on synthetically generated data, demonstrating that our algorithm is robust even in the presence of different types of inhomogeneous background noise. Our algorithm is implemented in the ct3d package, which is available under http://www.picb.ac.cn/patterns/Software/ct3d; supplementary videos are available from http://www.picb.ac.cn/patterns/Supplements/ct3d.

  6. Development of a 3D CT scanner using cone beam

    NASA Astrophysics Data System (ADS)

    Endo, Masahiro; Kamagata, Nozomu; Sato, Kazumasa; Hattori, Yuichi; Kobayashi, Shigeo; Mizuno, Shinichi; Jimbo, Masao; Kusakabe, Masahiro

    1995-05-01

    In order to acquire 3D data of high-contrast objects such as bone, lung, and vessels enhanced by contrast media for use in 3D image processing, we have developed a 3D CT scanner using a cone-beam x ray. The 3D CT scanner consists of a gantry and a patient couch. The gantry consists of an x-ray tube designed for cone-beam CT and a large-area two-dimensional detector mounted on a single frame and rotated around the object in 12 seconds. The large-area detector consists of a fluorescent plate and a charge-coupled-device video camera. The detection area was 600 mm × 450 mm, capable of covering the entire chest. While the x-ray tube was rotated around the object, pulsed x-ray exposures were made 30 times per second and 360 projection images were collected in a 12-second scan. A 256 × 256 × 256 matrix image (1.25 mm × 1.25 mm × 1.25 mm voxels) was reconstructed by a high-speed reconstruction engine. The reconstruction time was approximately 6 minutes. Cylindrical water phantoms, anesthetized rabbits with or without contrast media, and a Japanese macaque were scanned with the 3D CT scanner. The results seem promising because they show high spatial resolution in all three directions, though several points remain to be improved. Possible improvements are discussed.

  7. Improved Image Quality in Head and Neck CT Using a 3D Iterative Approach to Reduce Metal Artifact.

    PubMed

    Wuest, W; May, M S; Brand, M; Bayerl, N; Krauss, A; Uder, M; Lell, M

    2015-10-01

    Metal artifacts from dental fillings and other devices degrade image quality and may compromise the detection and evaluation of lesions in the oral cavity and oropharynx by CT. The aim of this study was to evaluate the effect of iterative metal artifact reduction on CT of the oral cavity and oropharynx. Data from 50 consecutive patients with metal artifacts from dental hardware were reconstructed with standard filtered back-projection, linear interpolation metal artifact reduction (LIMAR), and iterative metal artifact reduction. The image quality of sections that contained metal was analyzed for the severity of artifacts and diagnostic value. A total of 455 sections (mean ± standard deviation, 9.1 ± 4.1 sections per patient) contained metal and were evaluated with each reconstruction method. Sections without metal were not affected by the algorithms and demonstrated image quality identical to each other. Of the sections containing metal, 38% were considered nondiagnostic with filtered back-projection, 31% with LIMAR, and only 7% with iterative metal artifact reduction. Poor image quality was found in 33% of the sections with filtered back-projection, 46% with LIMAR, and 10% with iterative metal artifact reduction. Moderate image quality was found in 13% of the sections with filtered back-projection, 17% with LIMAR, and 22% with iterative metal artifact reduction; good image quality in 16% with filtered back-projection, 5% with LIMAR, and 30% with iterative metal artifact reduction; and excellent image quality in 1% with LIMAR and 31% with iterative metal artifact reduction. Iterative metal artifact reduction yields the highest image quality in comparison with filtered back-projection and linear interpolation metal artifact reduction in patients with metal hardware in the head and neck area. © 2015 by American Journal of Neuroradiology.

  8. Imaging the Aqueous Humor Outflow Pathway in Human Eyes by Three-dimensional Micro-computed Tomography (3D micro-CT)

    SciTech Connect

    C Hann; M Bentley; A Vercnocke; E Ritman; M Fautsch

    2011-12-31

    The site of outflow resistance leading to elevated intraocular pressure in primary open-angle glaucoma is believed to be located in the region of Schlemm's canal inner wall endothelium, its basement membrane and the adjacent juxtacanalicular tissue. Evidence also suggests collector channels and intrascleral vessels may have a role in intraocular pressure in both normal and glaucoma eyes. Traditional imaging modalities limit the ability to view both proximal and distal portions of the trabecular outflow pathway as a single unit. In this study, we examined the effectiveness of three-dimensional micro-computed tomography (3D micro-CT) as a potential method to view the trabecular outflow pathway. Two normal human eyes were used: one immersion fixed in 4% paraformaldehyde and one with anterior chamber perfusion at 10 mmHg followed by perfusion fixation in 4% paraformaldehyde/2% glutaraldehyde. Both eyes were postfixed in 1% osmium tetroxide and scanned with 3D micro-CT at 2 µm or 5 µm voxel resolution. In the immersion fixed eye, 24 collector channels were identified with an average orifice size of 27.5 ± 5 µm. In comparison, the perfusion fixed eye had 29 collector channels with a mean orifice size of 40.5 ± 13 µm. Collector channels were not evenly dispersed around the circumference of the eye. There was no significant difference in the length of Schlemm's canal in the immersed versus the perfused eye (33.2 versus 35.1 mm). Structures, locations and size measurements identified by 3D micro-CT were confirmed by correlative light microscopy. These findings confirm 3D micro-CT can be used effectively for the non-invasive examination of the trabecular meshwork, Schlemm's canal, collector channels and intrascleral vasculature that comprise the distal outflow pathway. This imaging modality will be useful for non-invasive study of the role of the trabecular outflow pathway as a whole unit.

  9. Skeletal dosimetry in the MAX06 and the FAX06 phantoms for external exposure to photons based on vertebral 3D-microCT images

    NASA Astrophysics Data System (ADS)

    Kramer, R.; Khoury, H. J.; Vieira, J. W.; Kawrakow, I.

    2006-12-01

    3D-microCT images of vertebral bodies from three different individuals have been segmented into trabecular bone, bone marrow and bone surface cells (BSC), and then introduced into the spongiosa voxels of the MAX06 and the FAX06 phantoms, in order to calculate the equivalent dose to the red bone marrow (RBM) and the BSC in the marrow cavities of trabecular bone with the EGSnrc Monte Carlo code from whole-body exposure to external photon radiation. The MAX06 and the FAX06 phantoms consist of about 150 million 1.2 mm cubic voxels each, a part of which are spongiosa voxels surrounded by cortical bone. In order to use the segmented 3D-microCT images for skeletal dosimetry, spongiosa voxels in the MAX06 and the FAX06 phantom were replaced at runtime by so-called micro matrices representing segmented trabecular bone, marrow and BSC in 17.65, 30 and 60 µm cubic voxels. The 3D-microCT image-based RBM and BSC equivalent doses for external exposure to photons presented here for the first time for complete human skeletons are in agreement with the results calculated with the three correction factor method and the fluence-to-dose response functions for the same phantoms taking into account the conceptual differences between the different methods. Additionally the microCT image-based results have been compared with corresponding data from earlier studies for other human phantoms. This article is dedicated to Prof. Dr Guenter Drexler from the Laboratório de Ciências Radiológicas, State University of Rio de Janeiro, on the occasion of his 70th birthday.

  10. An Optimized Spline-Based Registration of a 3D CT to a Set of C-Arm Images

    PubMed Central

    Thévenaz, P.; Zheng, G.; Nolte, L. -P.; Unser, M.

    2006-01-01

    We have developed an algorithm for the rigid-body registration of a CT volume to a set of C-arm images. The algorithm uses a gradient-based iterative minimization of a least-squares measure of dissimilarity between the C-arm images and projections of the CT volume. To compute projections, we use a novel method for fast integration of the volume along rays. To improve robustness and speed, we take advantage of a coarse-to-fine processing of the volume/image pyramids. To compute the projections of the volume, the gradient of the dissimilarity measure, and the multiresolution data pyramids, we use a continuous image/volume model based on cubic B-splines, which ensures a high interpolation accuracy and a gradient of the dissimilarity measure that is well defined everywhere. We show the performance of our algorithm on a human spine phantom, where the true alignment is determined using a set of fiducial markers. PMID:23165033

  11. MTF characterization in 2D and 3D for a high resolution, large field of view flat panel imager for cone beam CT

    NASA Astrophysics Data System (ADS)

    Shah, Jainil; Mann, Steve D.; Tornai, Martin P.; Richmond, Michelle; Zentai, George

    2014-03-01

    The 2D and 3D modulation transfer functions (MTFs) of a custom made, large 40x30cm2 area, 600-micron CsI-TFT based flat panel imager having 127-micron pixellation, along with the micro-fiber scintillator structure, were characterized in detail using various techniques. The larger area detector yields a reconstructed FOV of 25cm diameter with an 80cm SID in CT mode. The MTFs were determined with 1x1 (intrinsic) binning. The 2D MTFs were determined using a 50.8 micron tungsten wire and a solid lead edge, and the 3D MTF was measured using a custom made phantom consisting of three nearly orthogonal 50.8 micron tungsten wires suspended in an acrylic cubic frame. The 2D projection data was reconstructed using an iterative OSC algorithm using 16 subsets and 5 iterations. As additional verification of the resolution, along with scatter, the Catphan® phantom was also imaged and reconstructed with identical parameters. The measured 2D MTF was ~4% using the wire technique and ~1% using the edge technique at the 3.94 lp/mm Nyquist cut-off frequency. The average 3D MTF measured along the wires was ~8% at the Nyquist. At 50% MTF, the resolutions were 1.2 and 2.1 lp/mm in 2D and 3D, respectively. In the Catphan® phantom, the 1.7 lp/mm bars were easily observed. Lastly, the 3D MTF measured on the three wires has an observed 5.9% RMSD, indicating that the resolution of the imaging system is uniform and spatially independent. This high performance detector is integrated into a dedicated breast SPECT-CT imaging system.
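
    The edge technique mentioned above is, in outline, the classic ESF-to-MTF chain: differentiate the edge-spread function to obtain the line-spread function, then take the magnitude of its Fourier transform and normalize at zero frequency. The snippet below is a minimal, hedged sketch of that chain with a synthetic edge sampled at the detector's 127-micron pitch; the windowing choice and edge profile are illustrative, not taken from the paper.

```python
import numpy as np

def mtf_from_edge(esf, pixel_pitch_mm):
    """Presampled MTF from an edge-spread function (edge technique).

    esf            : 1-D profile across a sharp edge
    pixel_pitch_mm : sample spacing of the profile in mm
    """
    lsf = np.gradient(esf)                     # line-spread function
    lsf = lsf * np.hanning(lsf.size)           # taper to limit noise leakage
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                              # normalise to 1 at f = 0
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)   # lp/mm
    return freqs, mtf

if __name__ == "__main__":
    # synthetic blurred edge sampled at the 127-micron pixel pitch
    x = np.arange(-64, 64) * 0.127
    esf = 1.0 / (1.0 + np.exp(-x / 0.15))
    f, m = mtf_from_edge(esf, 0.127)
    nyquist = 1.0 / (2 * 0.127)                # ~3.94 lp/mm
    print("MTF at Nyquist ~", np.interp(nyquist, f, m))
```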

  12. Fully Automatic Localization and Segmentation of 3D Vertebral Bodies from CT/MR Images via a Learning-Based Method

    PubMed Central

    Chu, Chengwen; Belavý, Daniel L.; Armbrecht, Gabriele; Bansmann, Martin; Felsenberg, Dieter; Zheng, Guoyan

    2015-01-01

    In this paper, we address the problems of fully automatic localization and segmentation of 3D vertebral bodies from CT/MR images. We propose a learning-based, unified random forest regression and classification framework to tackle these two problems. More specifically, in the first stage, the localization of 3D vertebral bodies is solved with random forest regression where we aggregate the votes from a set of randomly sampled image patches to get a probability map of the center of a target vertebral body in a given image. The resultant probability map is then further regularized by Hidden Markov Model (HMM) to eliminate potential ambiguity caused by the neighboring vertebral bodies. The output from the first stage allows us to define a region of interest (ROI) for the segmentation step, where we use random forest classification to estimate the likelihood of a voxel in the ROI being foreground or background. The estimated likelihood is combined with the prior probability, which is learned from a set of training data, to get the posterior probability of the voxel. The segmentation of the target vertebral body is then done by a binary thresholding of the estimated probability. We evaluated the present approach on two openly available datasets: 1) 3D T2-weighted spine MR images from 23 patients and 2) 3D spine CT images from 10 patients. Taking manual segmentation as the ground truth (each MR image contains at least 7 vertebral bodies from T11 to L5 and each CT image contains 5 vertebral bodies from L1 to L5), we evaluated the present approach with leave-one-out experiments. Specifically, for the T2-weighted MR images, we achieved for localization a mean error of 1.6 mm, and for segmentation a mean Dice metric of 88.7% and a mean surface distance of 1.5 mm, respectively. For the CT images we achieved for localization a mean error of 1.9 mm, and for segmentation a mean Dice metric of 91.0% and a mean surface distance of 0.9 mm, respectively. PMID:26599505
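
    The final segmentation step described above reduces to combining a per-voxel foreground likelihood with a learned prior and thresholding the resulting posterior. A minimal numpy sketch of that combination follows; the likelihood and prior arrays are synthetic placeholders (the paper obtains them from random-forest classification and training data, which is not reproduced here).

```python
import numpy as np

def segment_roi(likelihood_fg, prior_fg, threshold=0.5):
    """Combine a per-voxel foreground likelihood (e.g. from a random-forest
    classifier) with a learned prior probability and binarise the posterior.
    Both inputs are arrays of values in [0, 1] over the region of interest."""
    post_fg = likelihood_fg * prior_fg
    post_bg = (1.0 - likelihood_fg) * (1.0 - prior_fg)
    posterior = post_fg / (post_fg + post_bg + 1e-12)
    return posterior > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    likelihood = rng.random((16, 16, 16))
    prior = np.full((16, 16, 16), 0.3)       # e.g. a vertebra shape prior
    mask = segment_roi(likelihood, prior)
    print("foreground voxels:", int(mask.sum()))
```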

  13. Fully Automatic Localization and Segmentation of 3D Vertebral Bodies from CT/MR Images via a Learning-Based Method.

    PubMed

    Chu, Chengwen; Belavý, Daniel L; Armbrecht, Gabriele; Bansmann, Martin; Felsenberg, Dieter; Zheng, Guoyan

    2015-01-01

    In this paper, we address the problems of fully automatic localization and segmentation of 3D vertebral bodies from CT/MR images. We propose a learning-based, unified random forest regression and classification framework to tackle these two problems. More specifically, in the first stage, the localization of 3D vertebral bodies is solved with random forest regression where we aggregate the votes from a set of randomly sampled image patches to get a probability map of the center of a target vertebral body in a given image. The resultant probability map is then further regularized by Hidden Markov Model (HMM) to eliminate potential ambiguity caused by the neighboring vertebral bodies. The output from the first stage allows us to define a region of interest (ROI) for the segmentation step, where we use random forest classification to estimate the likelihood of a voxel in the ROI being foreground or background. The estimated likelihood is combined with the prior probability, which is learned from a set of training data, to get the posterior probability of the voxel. The segmentation of the target vertebral body is then done by a binary thresholding of the estimated probability. We evaluated the present approach on two openly available datasets: 1) 3D T2-weighted spine MR images from 23 patients and 2) 3D spine CT images from 10 patients. Taking manual segmentation as the ground truth (each MR image contains at least 7 vertebral bodies from T11 to L5 and each CT image contains 5 vertebral bodies from L1 to L5), we evaluated the present approach with leave-one-out experiments. Specifically, for the T2-weighted MR images, we achieved for localization a mean error of 1.6 mm, and for segmentation a mean Dice metric of 88.7% and a mean surface distance of 1.5 mm, respectively. For the CT images we achieved for localization a mean error of 1.9 mm, and for segmentation a mean Dice metric of 91.0% and a mean surface distance of 0.9 mm, respectively.

  14. Automatic segmentation of colon in 3D CT images and removal of opacified fluid using cascade feed forward neural network.

    PubMed

    Gayathri Devi, K; Radhakrishnan, R

    2015-01-01

    Colon segmentation is an essential step in the development of computer-aided diagnosis systems based on computed tomography (CT) images. Detection of polyps lying on the colon wall is much needed in medical imaging for the diagnosis of colorectal cancer. The proposed work focuses on designing an efficient automatic colon segmentation algorithm for abdominal slices containing colon, partial volume effect, bowel, and lung. The challenge lies in determining the exact colon in slices affected by the partial volume effect. In this work, an adaptive thresholding technique is proposed for segmenting air pockets; a machine-learning-based cascade feed-forward neural network, enhanced with boundary detection algorithms, is used to differentiate the lung segments and the opacified fluid sedimented along the colon wall, while bowel is rejected using a slice-difference removal method. The proposed neural network is trained with a Bayesian regularization algorithm to determine the partial volume effect. Experiments conducted on CT database images yielded 98% accuracy with a minimal error rate. The main contribution of this work is the exploitation of a neural network algorithm for the removal of opacified fluid to attain the desired colon segmentation result.

  15. TU-F-12A-05: Sensitivity of Textural Features to 3D Vs. 4D FDG-PET/CT Imaging in NSCLC Patients

    SciTech Connect

    Yang, F; Nyflot, M; Bowen, S; Kinahan, P; Sandison, G

    2014-06-15

    Purpose: Neighborhood Gray-level difference matrices (NGLDM) based texture parameters extracted from conventional (3D) 18F-FDG PET scans in patients with NSCLC have been previously shown to associate with response to chemoradiation and poorer patient outcome. However, the change in these parameters when utilizing respiratory-correlated (4D) FDG-PET scans has not yet been characterized for NSCLC. The objective of this study was to assess the extent to which NGLDM-based texture parameters on 4D PET images vary with reference to values derived from 3D scans in NSCLC. Methods: Eight patients with newly diagnosed NSCLC treated with concomitant chemoradiotherapy were included in this study. 4D PET scans were reconstructed with OSEM-IR in 5 respiratory phase-binned images and corresponding CT data of each phase were employed for attenuation correction. NGLDM-based texture features, consisting of coarseness, contrast, busyness, complexity and strength, were evaluated for gross tumor volumes defined on 3D/4D PET scans by radiation oncologists. Variation of the obtained texture parameters over the respiratory cycle was examined with respect to values extracted from 3D scans. Results: Differences between texture parameters derived from 4D scans at different respiratory phases and those extracted from 3D scans ranged from −30% to 13% for coarseness, −12% to 40% for contrast, −5% to 50% for busyness, −7% to 38% for complexity, and −43% to 20% for strength. Furthermore, no evident correlations were observed between respiratory phase and 4D scan texture parameters. Conclusion: Results of the current study showed that NGLDM-based texture parameters varied considerably based on choice of 3D PET and 4D PET reconstruction of NSCLC patient images, indicating that standardized image acquisition and analysis protocols need to be established for clinical studies, especially multicenter clinical trials, intending to validate prognostic values of texture features for NSCLC.
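
    For readers unfamiliar with NGLDM features, the sketch below computes one of them (coarseness, in the Amadasun-King sense) for a 2-D region of interest: each grey level accumulates the absolute difference between a pixel and the mean of its neighbours, and coarseness is the inverse of the probability-weighted sum. This is a generic numpy/scipy illustration, not the texture code used in the study, and the quantisation to 16 levels is an assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ngldm_coarseness(image, levels=16):
    """NGLDM coarseness of a 2-D image (grey levels quantised to `levels`)."""
    img = np.floor(image / image.max() * (levels - 1)).astype(int)
    # mean of the 8 neighbours = (9 * box mean - centre) / 8
    box = uniform_filter(img.astype(float), size=3, mode="reflect")
    nbr_mean = (9.0 * box - img) / 8.0
    s = np.zeros(levels)           # accumulated grey-level differences
    n = np.zeros(levels)           # occurrence counts per grey level
    for i in range(levels):
        sel = img == i
        n[i] = sel.sum()
        s[i] = np.abs(i - nbr_mean[sel]).sum()
    p = n / n.sum()
    return 1.0 / (np.sum(p * s) + 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    roi = rng.random((64, 64))     # stand-in for a tumour ROI
    print("coarseness:", ngldm_coarseness(roi))
```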

  16. 3D dosimetry estimation for selective internal radiation therapy (SIRT) using SPECT/CT images: a phantom study

    NASA Astrophysics Data System (ADS)

    Debebe, Senait A.; Franquiz, Juan; McGoron, Anthony J.

    2015-03-01

    Selective Internal Radiation Therapy (SIRT) is a common way to treat liver cancer that cannot be treated surgically. SIRT involves administration of Yttrium - 90 (90Y) microspheres via the hepatic artery after a diagnostic procedure using 99mTechnetium (Tc)-macroaggregated albumin (MAA) to detect extrahepatic shunting to the lung or the gastrointestinal tract. Accurate quantification of radionuclide administered to patients and radiation dose absorbed by different organs is of importance in SIRT. Accurate dosimetry for SIRT allows optimization of dose delivery to the target tumor and may allow for the ability to assess the efficacy of the treatment. In this study, we proposed a method that can efficiently estimate radiation absorbed dose from 90Y bremsstrahlung SPECT/CT images of liver and the surrounding organs. Bremsstrahlung radiation from 90Y was simulated using the Compton window of 99mTc (78keV at 57%). 99mTc images acquired at the photopeak energy window were used as a standard to examine the accuracy of dosimetry prediction by the simulated bremsstrahlung images. A Liqui-Phil abdominal phantom with liver, stomach and two tumor inserts was imaged using a Philips SPECT/CT scanner. The Dose Point Kernel convolution method was used to find the radiation absorbed dose at a voxel level for a three dimensional dose distribution. This method will allow for a complete estimate of the distribution of radiation absorbed dose by tumors, liver, stomach and other surrounding organs at the voxel level. The method provides a quantitative predictive method for SIRT treatment outcome and administered dose response for patients who undergo the treatment.
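
    The Dose Point Kernel convolution mentioned above amounts to convolving a (cumulated) activity map with a per-decay dose kernel. The following is a toy sketch of that operation with synthetic arrays and an invented isotropic kernel; it ignores decay integration, calibration and density scaling, so the numbers are meaningless beyond showing the mechanics.

```python
import numpy as np
from scipy.signal import fftconvolve

def dose_from_activity(activity_bq, kernel_gy_per_decay, frame_time_s):
    """Voxel-level absorbed dose by convolving a cumulated-activity map with a
    dose point kernel (DPK).

    activity_bq         : 3-D activity map (Bq per voxel), e.g. from SPECT/CT
    kernel_gy_per_decay : 3-D DPK (Gy per decay), centred in its own array
    frame_time_s        : crude stand-in for the time-integration step
    """
    cumulated = activity_bq * frame_time_s      # decays per voxel (toy model)
    return fftconvolve(cumulated, kernel_gy_per_decay, mode="same")

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    act = rng.random((32, 32, 32)) * 1e4
    # toy isotropic kernel falling off with distance from the centre voxel
    z, y, x = np.indices((9, 9, 9)) - 4
    kernel = 1e-12 / (np.sqrt(x**2 + y**2 + z**2) + 0.5) ** 2
    dose = dose_from_activity(act, kernel, frame_time_s=3600.0)
    print("max voxel dose [Gy]:", dose.max())
```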

  17. Comparative evaluation of a novel 3D segmentation algorithm on in-treatment radiotherapy cone beam CT images

    NASA Astrophysics Data System (ADS)

    Price, Gareth; Moore, Chris

    2007-03-01

    Image segmentation and delineation is at the heart of modern radiotherapy, where the aim is to deliver as high a radiation dose as possible to a cancerous target whilst sparing the surrounding healthy tissues. This, of course, requires that a radiation oncologist dictates both where the tumour and any nearby critical organs are located. As well as in treatment planning, delineation is of vital importance in image guided radiotherapy (IGRT): organ motion studies demand that features across image databases are accurately segmented, whilst if on-line adaptive IGRT is to become a reality, speedy and correct target identification is a necessity. Recently, much work has been put into the development of automatic and semi-automatic segmentation tools, often using prior knowledge to constrain some grey level, or derivative thereof, interrogation algorithm. It is hoped that such techniques can be applied to organ at risk and tumour segmentation in radiotherapy. In this work, however, we make the assumption that grey levels do not necessarily determine a tumour's extent, especially in CT where the attenuation coefficient can often vary little between cancerous and normal tissue. In this context we present an algorithm that generates a discontinuity free delineation surface driven by user placed, evidence based support points. In regions of sparse user supplied information, prior knowledge, in the form of a statistical shape model, provides guidance. A small case study is used to illustrate the method. Multiple observers (between 3 and 7) used both the presented tool and a commercial manual contouring package to delineate the bladder on a serially imaged (10 cone beam CT volumes ) prostate patient. A previously presented shape analysis technique is used to quantitatively compare the observer variability.

  18. Computer-aided diagnosis: a 3D segmentation method for lung nodules in CT images by use of a spiral-scanning technique

    NASA Astrophysics Data System (ADS)

    Wang, Jiahui; Engelmann, Roger; Li, Qiang

    2008-03-01

    Lung nodule segmentation in computed tomography (CT) plays an important role in computer-aided detection, diagnosis, and quantification systems for lung cancer. In this study, we developed a simple but accurate nodule segmentation method in three-dimensional (3D) CT. First, a volume of interest (VOI) was determined at the location of a nodule. We then transformed the VOI into a two-dimensional (2D) image by use of a "spiral-scanning" technique, in which a radial line originating from the center of the VOI spirally scanned the VOI. The voxels scanned by the radial line were arranged sequentially to form a transformed 2D image. Because the surface of a nodule in 3D image became a curve in the transformed 2D image, the spiral-scanning technique considerably simplified our segmentation method and enabled us to obtain accurate segmentation results. We employed a dynamic programming technique to delineate the "optimal" outline of a nodule in the 2D image, which was transformed back into the 3D image space to provide the interior of the nodule. The proposed segmentation method was trained on the first and was tested on the second Lung Image Database Consortium (LIDC) datasets. An overlap between nodule regions provided by computer and by the radiologists was employed as a performance metric. The experimental results on the LIDC database demonstrated that our segmentation method provided relatively robust and accurate segmentation results with mean overlap values of 66% and 64% for the nodules in the first and second LIDC datasets, respectively, and would be useful for the quantification, detection, and diagnosis of lung cancer.
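
    The spiral-scanning transform can be pictured as sweeping a radial line over the sphere along a spiral and stacking the sampled profiles into columns of a 2-D image, so that the nodule surface becomes a curve. Below is a simplified, hedged re-implementation of that idea for a synthetic spherical nodule; the number of rays, the turns of the spiral, and the interpolation order are arbitrary choices, not the authors' parameters.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def spiral_transform(voi, n_rays=256, n_radial=32):
    """Unwrap a cubic VOI into a 2-D image: a radial line from the VOI centre
    sweeps the sphere along a spiral, and each ray becomes one column."""
    centre = (np.array(voi.shape) - 1) / 2.0
    radii = np.linspace(0, min(voi.shape) / 2.0 - 1, n_radial)

    # spiral over the unit sphere: polar angle runs 0..pi, azimuth winds around
    t = np.linspace(0, 1, n_rays)
    theta = np.pi * t
    phi = 2 * np.pi * 8 * t                     # 8 turns (illustrative)

    out = np.zeros((n_radial, n_rays))
    for j in range(n_rays):
        direction = np.array([np.sin(theta[j]) * np.cos(phi[j]),
                              np.sin(theta[j]) * np.sin(phi[j]),
                              np.cos(theta[j])])
        pts = centre[:, None] + direction[:, None] * radii[None, :]
        out[:, j] = map_coordinates(voi.astype(float), pts, order=1)
    return out

if __name__ == "__main__":
    zz, yy, xx = np.indices((64, 64, 64)) - 31.5
    nodule = (np.sqrt(xx**2 + yy**2 + zz**2) < 12).astype(float)  # toy sphere
    img2d = spiral_transform(nodule)
    print(img2d.shape)     # the spherical surface appears as a curve here
```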

  19. Evaluation of z-axis resolution and image noise for nonconstant velocity spiral CT data reconstructed using a weighted 3D filtered backprojection (WFBP) reconstruction algorithm.

    PubMed

    Christner, Jodie A; Stierstorfer, Karl; Primak, Andrew N; Eusemann, Christian D; Flohr, Thomas G; McCollough, Cynthia H

    2010-02-01

    To determine the constancy of z-axis spatial resolution, CT number, image noise, and the potential for image artifacts for nonconstant velocity spiral CT data reconstructed using a flexibly weighted 3D filtered backprojection (WFBP) reconstruction algorithm. A WFBP reconstruction algorithm was used to reconstruct stationary (axial, pitch=0), constant velocity spiral (pitch = 0.35-1.5) and nonconstant velocity spiral CT data acquired using a 128 x 0.6 mm acquisition mode (38.4 mm total detector length, z-flying focal spot technique), and a gantry rotation time of 0.30 s. Nonconstant velocity scans used the system's periodic spiral mode, where the table moved in and out of the gantry in a cyclical manner. For all scan types, the volume CTDI was 10 mGy. Measurements of CT number, image noise, and the slice sensitivity profile were made for all scan types as a function of the nominal slice width, table velocity, and position within the scan field of view. A thorax phantom was scanned using all modes and reconstructed transverse and coronal plane images were compared. Negligible differences in slice thickness, CT number, noise, or artifacts were found between scan modes for data taken at two positions within the scan field of view. For nominal slices of 1.0-3.0 mm, FWHM values of the slice sensitivity profiles were essentially independent of the scan type. For periodic spiral scans, FWHM values measured at the center of the scan range were indistinguishable from those taken 5 mm from one end of the scan range. All CT numbers were within +/- 5 HU, and CT number and noise values were similar for all scan modes assessed. A slight increase in noise and artifact level was observed 5 mm from the start of the scan on the first pass of the periodic spiral. On subsequent passes, noise and artifact level in the transverse and coronal plane images were the same for all scan modes. Nonconstant velocity periodic spiral scans can achieve z-axis spatial resolution, CT number accuracy

  20. A fast experimental beam hardening correction method for accurate bone mineral measurements in 3D μCT imaging system.

    PubMed

    Koubar, Khodor; Bekaert, Virgile; Brasse, David; Laquerriere, Patrice

    2015-06-01

    Bone mineral density plays an important role in the determination of bone strength and fracture risk. Consequently, it is very important to obtain accurate bone mineral density measurements. The microcomputerized tomography system provides 3D information about the architectural properties of bone. The accuracy of quantitative analysis is decreased by the presence of artefacts in the reconstructed images, mainly beam hardening artefacts (such as cupping artefacts). In this paper, we introduce a new beam hardening correction method based on a post-reconstruction technique that uses experimentally calculated off-line water and bone linearization curves, aiming to take into account the nonhomogeneity of the scanned animal. In order to evaluate the mass correction rate, a calibration line was established to convert the reconstructed linear attenuation coefficients into bone masses. The correction method was then applied to a multimaterial cylindrical phantom and to mouse skeleton images. Mass correction rates of up to 18% between uncorrected and corrected images were obtained, together with a marked improvement in the calculated mouse femur mass. The results were also compared with those obtained using the simple water linearization technique, which does not take the nonhomogeneity of the object into account.
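
    A water-linearization curve of the kind referred to above maps measured (beam-hardened) projection values back onto values that are linear in traversed water thickness. The sketch below fits such a curve as a low-order polynomial from synthetic calibration data; the assumed reference attenuation coefficient, the polynomial degree, and the cupping model are placeholders, and the paper's combined water-and-bone, inhomogeneity-aware correction is not reproduced.

```python
import numpy as np

def fit_linearization(thickness_cm, measured_projection, mu_ref=0.2):
    """Fit an experimental water-linearisation curve: a polynomial mapping the
    measured (beam-hardened) projection values onto values that are linear in
    water thickness, ideal = mu_ref * thickness (mu_ref is an assumed 1/cm)."""
    ideal = mu_ref * thickness_cm
    return np.poly1d(np.polyfit(measured_projection, ideal, deg=3))

if __name__ == "__main__":
    # synthetic calibration: projections through known water thicknesses,
    # with a quadratic saturation standing in for beam hardening
    thickness = np.linspace(0.1, 8.0, 20)                  # cm
    measured = 0.2 * thickness - 0.004 * thickness ** 2
    correct = fit_linearization(thickness, measured)
    raw = 0.2 * 5.0 - 0.004 * 5.0 ** 2                     # projection through 5 cm
    print("raw:", raw, "corrected:", float(correct(raw)), "ideal:", 0.2 * 5.0)
```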

  1. SU-F-P-32: A Phantom Study of Accuracy of Four-Dimensional Cone-Beam CT (4D-CBCT) Vs. Three-Dimensional Cone Beam CT (3D-CBCT) in Image Guided Radiotherapy

    SciTech Connect

    He, R; Morris, B; Duggar, N; Markovich, A; Standford, J; Lu, J; Yang, C

    2016-06-15

    Purpose: Elekta's Symmetry™ 4D IGRT system, which offers a 4D CBCT registration option, has been installed at our institution. This study evaluates the accuracy of the 4D CBCT system using the CIRS 4D motion phantom and performs a feasibility study on implementing 4D-CBCT as image guidance for SBRT treatment. Methods: The 3D and 4D CT image data sets were acquired with the CIRS motion phantom on a Philips large-bore CT simulator. The motion was set to 0.5 cm in the superior-inferior direction with a 6-second cycle time. The 4D CT data were sorted into 10 phases. One identifiable part of the 4D CT QA insert from the CIRS phantom was used as the target. The ITV was drawn based on the maximum intensity projection (MIP) and transferred as a planning structure into the 4D CBCT system. Then 3D CBCT and 4D CBCT images were taken and registered with the free-breathing (3D), MIP (4D) and average intensity projection (AIP, 4D) reference data sets. The couch shifts (X, Y, Z) were recorded and compared. Results: Table 1 lists the twelve couch shifts based on the registration of the MIP, AIP and free-breathing CT data sets with the 3D CBCT and 4D CBCT for both whole-body and local registration. X, Y and Z represent couch shifts in the right-left, superior-inferior and anterior-posterior directions. The largest differences, 0.73 cm and 0.57 cm, were noted when registering the free-breathing CT data with the 4D CBCT and 3D CBCT data. Figures 1 and 2 present the shift analysis graphically, and Fig. 3 shows the registration. Conclusion: Significant differences exist in the shifts along the direction of target motion. Further investigations are ongoing.

  2. Rigid model-based 3D segmentation of the bones of joints in MR and CT images for motion analysis

    PubMed Central

    Liu, Jiamin; Udupa, Jayaram K.; Saha, Punam K.; Odhner, Dewey; Hirsch, Bruce E.; Siegler, Sorin; Simon, Scott; Winkelstein, Beth A.

    2008-01-01

    There are several medical application areas that require the segmentation and separation of the component bones of joints in a sequence of images of the joint acquired under various loading conditions, our own target area being joint motion analysis. This is a challenging problem due to the proximity of bones at the joint, partial volume effects, and other imaging modality-specific factors that confound boundary contrast. In this article, a two-step model-based segmentation strategy is proposed that utilizes the unique context of the current application wherein the shape of each individual bone is preserved in all scans of a particular joint while the spatial arrangement of the bones alters significantly among bones and scans. In the first step, a rigid deterministic model of the bone is generated from a segmentation of the bone in the image corresponding to one position of the joint by using the live wire method. Subsequently, in other images of the same joint, this model is used to search for the same bone by minimizing an energy function that utilizes both boundary- and region-based information. An evaluation of the method by utilizing a total of 60 data sets on MR and CT images of the ankle complex and cervical spine indicates that the segmentations agree very closely with the live wire segmentations, yielding true positive and false positive volume fractions in the range 89%–97% and 0.2%–0.7%. The method requires 1–2 minutes of operator time and 6–7 min of computer time per data set, which makes it significantly more efficient than live wire—the method currently available for the task that can be used routinely. PMID:18777924

  3. Rigid model-based 3D segmentation of the bones of joints in MR and CT images for motion analysis.

    PubMed

    Liu, Jiamin; Udupa, Jayaram K; Saha, Punam K; Odhner, Dewey; Hirsch, Bruce E; Siegler, Sorin; Simon, Scott; Winkelstein, Beth A

    2008-08-01

    There are several medical application areas that require the segmentation and separation of the component bones of joints in a sequence of images of the joint acquired under various loading conditions, our own target area being joint motion analysis. This is a challenging problem due to the proximity of bones at the joint, partial volume effects, and other imaging modality-specific factors that confound boundary contrast. In this article, a two-step model-based segmentation strategy is proposed that utilizes the unique context of the current application wherein the shape of each individual bone is preserved in all scans of a particular joint while the spatial arrangement of the bones alters significantly among bones and scans. In the first step, a rigid deterministic model of the bone is generated from a segmentation of the bone in the image corresponding to one position of the joint by using the live wire method. Subsequently, in other images of the same joint, this model is used to search for the same bone by minimizing an energy function that utilizes both boundary- and region-based information. An evaluation of the method by utilizing a total of 60 data sets on MR and CT images of the ankle complex and cervical spine indicates that the segmentations agree very closely with the live wire segmentations, yielding true positive and false positive volume fractions in the range 89%-97% and 0.2%-0.7%. The method requires 1-2 minutes of operator time and 6-7 min of computer time per data set, which makes it significantly more efficient than live wire-the method currently available for the task that can be used routinely.

  4. Issues involved in the quantitative 3D imaging of proton doses using optical CT and chemical dosimeters

    NASA Astrophysics Data System (ADS)

    Doran, Simon; Gorjiara, Tina; Kacperek, Andrzej; Adamovics, John; Kuncic, Zdenka; Baldock, Clive

    2015-01-01

    Dosimetry of proton beams using 3D imaging of chemical dosimeters is complicated by a variation with proton linear energy transfer (LET) of the dose-response (the so-called ‘quenching effect’). Simple theoretical arguments lead to the conclusion that the total absorbed dose from multiple irradiations with different LETs cannot be uniquely determined from post-irradiation imaging measurements on the dosimeter. Thus, a direct inversion of the imaging data is not possible and the proposition is made to use a forward model based on appropriate output from a planning system to predict the 3D response of the dosimeter. In addition to the quenching effect, it is well known that chemical dosimeters have a non-linear response at high doses. To the best of our knowledge it has not yet been determined how this phenomenon is affected by LET. The implications for dosimetry of a number of potential scenarios are examined. Dosimeter response as a function of depth (and hence LET) was measured for four samples of the radiochromic plastic PRESAGE®, using an optical computed tomography readout and entrance doses of 2.0 Gy, 4.0 Gy, 7.8 Gy and 14.7 Gy, respectively. The dosimeter response was separated into two components, a single-exponential low-LET response and a LET-dependent quenching. For the particular formulation of PRESAGE® used, deviations from linearity of the dosimeter response became significant for doses above approximately 16 Gy. In a second experiment, three samples were each irradiated with two separate beams of 4 Gy in various different configurations. On the basis of the previous characterizations, two different models were tested for the calculation of the combined quenching effect from two contributions with different LETs. It was concluded that a linear superposition model with separate calculation of the quenching for each irradiation did not match the measured result where two beams overlapped. A second model, which used the concept of an

  5. Method and phantom to study combined effects of in-plane (x,y) and z-axis resolution for 3D CT imaging.

    PubMed

    Goodenough, David; Levy, Josh; Kristinsson, Smari; Fredriksson, Jesper; Olafsdottir, Hildur; Healy, Austin

    2016-09-01

    Increasingly, the advent of multislice CT scanners, volume CT scanners, and total body spiral acquisition modes has led to the use of Multi Planar Reconstruction and 3D datasets. In considering 3D resolution properties of a CT system it is important to note that both the in-plane (x,y) and z-axis (slice thickness) influence the visualization and detection of objects within the scanned volume. This study investigates ways to consider both the in-plane resolution and the z-axis resolution in a single phantom wherein analytic or visualized analysis can yield information on these combined effects. A new phantom called the "Wave Phantom" is developed that can be used to sample the 3D resolution properties of a CT image, including in-plane (x,y) and z-axis information. The key development in this Wave Phantom is the incorporation of a z-axis aspect of a more traditional step (bar) resolution gauge phantom. The phantom can be examined visually wherein a cutoff level may be seen; and/or the analytic analysis of the various characteristics of the waveform profile by including amplitude, frequency, and slope (rate of climb) of the peaks, can be extracted from the Wave Pattern using mathematical analysis such as the Fourier transform. The combined effect of changes in in-plane resolution and z-axis (thickness), are shown, as well as the effect of changes in either in-plane resolution, or z-axis thickness. Examples of visual images of the Wave pattern as well as the analytic characteristics of the various harmonics of a periodic Wave pattern resulting from changes in resolution filter and/or slice thickness, and position in the field of view are shown. The Wave Phantom offers a promising way to investigate 3D resolution results from combined effect of in-plane (x-y) and z-axis resolution as contrasted to the use of simple 2D resolution gauges that need to be used with separate measures of z-axis dependency, such as angled ramps. It offers both a visual pattern as well as a
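
    The harmonic analysis alluded to above can be done directly with a Fourier transform of a line profile through the periodic wave pattern: blurring from the reconstruction filter or slice thickness suppresses the higher harmonics first. The snippet below shows that measurement on a synthetic square-wave profile; it is an illustration of the principle only, not the phantom's analysis software.

```python
import numpy as np

def harmonic_amplitudes(profile, n_harmonics=5):
    """Amplitude of the fundamental and the next harmonics of a periodic
    profile (e.g. a line profile through the wave pattern), via the FFT."""
    profile = profile - profile.mean()
    spectrum = np.abs(np.fft.rfft(profile)) / profile.size * 2
    fundamental = np.argmax(spectrum[1:]) + 1            # strongest non-DC bin
    return spectrum[[k * fundamental for k in range(1, n_harmonics + 1)]]

if __name__ == "__main__":
    x = np.linspace(0, 4 * np.pi, 512)
    # a noisy square-ish wave: blurring would reduce the higher harmonics first
    profile = np.sign(np.sin(x)) + 0.05 * np.random.default_rng(4).normal(size=x.size)
    print(harmonic_amplitudes(profile))
```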

  6. Method and phantom to study combined effects of in-plane (x,y) and z-axis resolution for 3D CT imaging.

    PubMed

    Goodenough, David; Levy, Josh; Kristinsson, Smari; Fredriksson, Jesper; Olafsdottir, Hildur; Healy, Austin

    2016-09-08

    Increasingly, the advent of multislice CT scanners, volume CT scanners, and total body spiral acquisition modes has led to the use of Multi Planar Reconstruction and 3D datasets. In considering 3D resolution properties of a CT system it is important to note that both the in-plane (x,y) and z-axis (slice thickness) influence the visualization and detection of objects within the scanned volume. This study investigates ways to consider both the in-plane resolution and the z-axis resolution in a single phantom wherein analytic or visualized analysis can yield information on these combined effects. A new phantom called the "Wave Phantom" is developed that can be used to sample the 3D resolution properties of a CT image, including in-plane (x,y) and z-axis information. The key development in this Wave Phantom is the incorporation of a z-axis aspect of a more traditional step (bar) resolution gauge phantom. The phantom can be examined visually wherein a cutoff level may be seen; and/or the analytic analysis of the various characteristics of the waveform profile by including amplitude, frequency, and slope (rate of climb) of the peaks, can be extracted from the Wave Pattern using mathematical analysis such as the Fourier transform. The combined effect of changes in in-plane resolution and z-axis (thickness), are shown, as well as the effect of changes in either in-plane resolution, or z-axis thickness. Examples of visual images of the Wave pattern as well as the analytic characteristics of the various harmonics of a periodic Wave pattern resulting from changes in resolution filter and/or slice thickness, and position in the field of view are shown. The Wave Phantom offers a promising way to investigate 3D resolution results from combined effect of in-plane (x-y) and z-axis resolution as contrasted to the use of simple 2D resolution gauges that need to be used with separate measures of z-axis dependency, such as angled ramps. It offers both a visual pattern as well as a

  7. Glasses-free 3D viewing systems for medical imaging

    NASA Astrophysics Data System (ADS)

    Magalhães, Daniel S. F.; Serra, Rolando L.; Vannucci, André L.; Moreno, Alfredo B.; Li, Li M.

    2012-04-01

    In this work we show two different glasses-free 3D viewing systems for medical imaging: a stereoscopic system that employs a vertically dispersive holographic screen (VDHS) and a multi-autostereoscopic system, both used to produce 3D MRI/CT images. We describe how to obtain a VDHS in holographic plates optimized for this application, with field of view of 7 cm to each eye and focal length of 25 cm, showing images done with the system. We also describe a multi-autostereoscopic system, presenting how it can generate 3D medical imaging from viewpoints of a MRI or CT image, showing results of a 3D angioresonance image.

  8. A validated methodology for the 3D reconstruction of cochlea geometries using human microCT images

    NASA Astrophysics Data System (ADS)

    Sakellarios, A. I.; Tachos, N. S.; Rigas, G.; Bibas, T.; Ni, G.; Böhnke, F.; Fotiadis, D. I.

    2017-05-01

    Accurate reconstruction of the inner ear is a prerequisite for the modelling and understanding of the inner ear mechanics. In this study, we present a semi-automated methodology for accurate reconstruction of the major inner ear structures (scalae, basilar membrane, stapes and semicircular canals). For this purpose, high resolution microCT images of a human specimen were used. The segmentation methodology is based on an iterative level set algorithm which provides the borders of the structures of interest. An enhanced coupled level set method which allows the simultaneous multiple image labeling without any overlapping regions has been developed for this purpose. The marching cube algorithm was applied in order to extract the surface from the segmented volume. The reconstructed geometries are then post-processed to improve the basilar membrane geometry to realistically represent physiologic dimensions. The final reconstructed model is compared to the available data from the literature. The results show that our generated inner ear structures are in good agreement with the published ones, while our approach is the most realistic in terms of the basilar membrane thickness and width reconstruction.

  9. Development of a Hausdorff distance based 3D quantification technique to evaluate the CT imaging system impact on depiction of lesion morphology

    NASA Astrophysics Data System (ADS)

    Sahbaee, Pooyan; Robins, Marthony; Solomon, Justin; Samei, Ehsan

    2016-04-01

    The purpose of this study was to develop a 3D quantification technique to assess the impact of imaging system on depiction of lesion morphology. Regional Hausdorff Distance (RHD) was computed from two 3D volumes: virtual mesh models of synthetic nodules or "virtual nodules" and CT images of physical nodules or "physical nodules". The method can be described in the following steps. First, the synthetic nodule was inserted into an anthropomorphic Kyoto thorax phantom and scanned in a Siemens scanner (Flash). Then, the nodule was segmented from the image. Second, in order to match the orientation of the nodule, the digital models of the "virtual" and "physical" nodules were both geometrically translated to the origin. Then, the "physical" nodule was gradually rotated in 10-degree increments. Third, the Hausdorff Distance was calculated from each pair of "virtual" and "physical" nodules. The minimum HD value represented the best-matching pair. Finally, the 3D RHD map and the distribution of RHD were computed for the matched pair. The technique was scalarized using the FWHM of the RHD distribution. The analysis was conducted for various shapes (spherical, lobular, elliptical, and spiculated) of nodules. The calculated FWHM values of RHD distribution for the 8-mm spherical, lobular, elliptical, and spiculated "virtual" and "physical" nodules were 0.23, 0.42, 0.33, and 0.49, respectively.
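
    A per-vertex distance map of the kind described above can be assembled from nearest-neighbour distances between the two surfaces, with the directed Hausdorff distance as its maximum and the FWHM of its histogram as the scalar summary. The sketch below does this for synthetic point clouds using a k-d tree; the mesh handling, phantom insertion and orientation search of the actual study are not reproduced.

```python
import numpy as np
from scipy.spatial import cKDTree

def regional_hausdorff(virtual_pts, physical_pts):
    """Nearest-surface distances from 'virtual' nodule vertices to the
    'physical' (imaged) nodule surface -- the regional Hausdorff distance map."""
    dists, _ = cKDTree(physical_pts).query(virtual_pts)
    return dists

def fwhm_of_distribution(values, bins=64):
    """Full width at half maximum of the histogram of RHD values."""
    hist, edges = np.histogram(values, bins=bins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    above = centres[hist >= hist.max() / 2.0]
    return above.max() - above.min()

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    sphere = rng.normal(size=(2000, 3))
    sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)
    virtual = 4.0 * sphere                                    # 8-mm sphere (radius 4)
    physical = 4.0 * sphere + rng.normal(scale=0.2, size=sphere.shape)  # "imaged" copy
    rhd = regional_hausdorff(virtual, physical)
    print("directed Hausdorff:", rhd.max(), "FWHM:", fwhm_of_distribution(rhd))
```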

  10. Effective incorporation of spatial information in a mutual information based 3D-2D registration of a CT volume to X-ray images.

    PubMed

    Zheng, Guoyan

    2008-01-01

    This paper addresses the problem of estimating the 3D rigid pose of a CT volume of an object from its 2D X-ray projections. We use maximization of mutual information, an accurate similarity measure for multi-modal and mono-modal image registration tasks. However, it is known that the standard mutual information measure only takes intensity values into account without considering spatial information and its robustness is questionable. In this paper, instead of directly maximizing mutual information, we propose to use a variational approximation derived from the Kullback-Leibler bound. Spatial information is then incorporated into this variational approximation using a Markov random field model. The newly derived similarity measure has a least-squares form and can be effectively minimized by a multi-resolution Levenberg-Marquardt optimizer. Experimental results are presented on X-ray and CT datasets of a plastic phantom and a cadaveric spine segment.
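
    For context, the baseline the paper starts from is the standard mutual information computed from the joint intensity histogram; the snippet below shows only that baseline (the variational Kullback-Leibler approximation and the Markov random field term are not reproduced here). The bin count and the synthetic images are arbitrary.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Standard mutual information from the joint intensity histogram of two
    images of the same size (higher when the images are well aligned)."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    drr = rng.random((128, 128))
    xray_aligned = drr + 0.05 * rng.normal(size=drr.shape)
    xray_shifted = np.roll(xray_aligned, 12, axis=0)
    print("aligned MI   :", mutual_information(drr, xray_aligned))
    print("misaligned MI:", mutual_information(drr, xray_shifted))
```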

  11. The image variations in mastoid segment of facial nerve and sinus tympani in congenital aural atresia by HRCT and 3D VR CT.

    PubMed

    Wang, Zhen; Hou, Qian; Wang, Pu; Sun, Zhaoyong; Fan, Yue; Wang, Yun; Xue, Huadan; Jin, Zhengyu; Chen, Xiaowei

    2015-09-01

    To find the variations of middle ear structures including the spatial pattern of the mastoid segment of the facial nerve and the shapes of the sinus tympani in patients with congenital aural atresia (CAA) by using high-resolution (HR) CT and 3D volume rendered (VR) CT images. HRCT was performed in 25 patients with congenital aural atresia including six bilateral atresia patients (n=25, 21 males, 4 females, mean age 13.8 years, range 6-19). Along the long axis of the posterior semicircular canal ampulla, the oblique axial multiplanar reconstruction (MPR) was set to view the depiction of the round window and the mastoid segment of the facial nerve. A volume rendering technique was used to demonstrate the morphologic features. HRCT and 3D VR findings in atresia ears were compared with those in the 19 normal ears of the unilateral atresia patients. On the basic plane, the horizontal line distances between the mastoid segment of the facial nerve and the round window (h-RF) in atresia ears were significantly decreased compared to the control ears (P<0.05). There was a significant negative correlation between the sinus tympani area (a-ST) and the distance between the horizontal lines of FN and RW midpoint (h-RF) (P<0.05). The mean area of the sinus tympani in the atresia group was larger (P<0.05). The shapes of the sinus tympani were classified into three categories: the cup-shaped, the pear-shaped and the boot-shaped. Area measurement indicated that the boot-shaped sinus tympani was a special variation with a large area, which appeared only in the CAA group. There was a significant difference between the area of the boot-shaped group and the other two groups (P<0.05). The morphologic differences of the ST and other middle ear structures can also be observed visually in 3D VR CT images. HRCT and 3D VR CT could provide a better understanding of different kinds of variations in the mastoid segment of the facial nerve and sinus tympani in CAA ears. And it may further help surgeons to make the correct decision

  12. Design, fabrication, and implementation of voxel-based 3D printed textured phantoms for task-based image quality assessment in CT

    NASA Astrophysics Data System (ADS)

    Solomon, Justin; Ba, Alexandre; Diao, Andrew; Lo, Joseph; Bier, Elianna; Bochud, François; Gehm, Michael; Samei, Ehsan

    2016-03-01

    In x-ray computed tomography (CT), task-based image quality studies are typically performed using uniform background phantoms with low-contrast signals. Such studies may have limited clinical relevancy for modern non-linear CT systems due to possible influence of background texture on image quality. The purpose of this study was to design and implement anatomically informed textured phantoms for task-based assessment of low-contrast detection. Liver volumes were segmented from 23 abdominal CT cases. The volumes were characterized in terms of texture features from gray-level co-occurrence and run-length matrices. Using a 3D clustered lumpy background (CLB) model, a fitting technique based on a genetic optimization algorithm was used to find the CLB parameters that were most reflective of the liver textures, accounting for CT system factors of spatial blurring and noise. With the modeled background texture as a guide, a cylinder phantom (165 mm in diameter and 30 mm height) was designed, containing 20 low-contrast spherical signals (6 mm in diameter at targeted contrast levels of ~3.2, 5.2, 7.2, 10, and 14 HU, 4 repeats per signal). The phantom was voxelized and input into a commercial multi-material 3D printer (Object Connex 350), with custom software for voxel-based printing. Using principles of digital half-toning and dithering, the 3D printer was programmed to distribute two base materials (VeroWhite and TangoPlus, nominal voxel size of 42x84x30 microns) to achieve the targeted spatial distribution of x-ray attenuation properties. The phantom was used for task-based image quality assessment of a clinically available iterative reconstruction algorithm (Sinogram Affirmed Iterative Reconstruction, SAFIRE) using a channelized Hotelling observer paradigm. Images of the textured phantom and a corresponding uniform phantom were acquired at six dose levels and observer model performance was estimated for each condition (5 contrasts x 6 doses x 2 reconstructions x 2
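
    The clustered lumpy background idea referred to above builds texture by scattering Gaussian blobs around randomly placed cluster centres. Below is a much-simplified 2-D version with invented parameters, meant only to convey the construction; the study fits a full 3-D CLB model to liver texture and prints it with a voxel-based multi-material process, none of which is reflected here.

```python
import numpy as np

def lumpy_background(shape=(256, 256), n_clusters=30, blobs_per_cluster=8,
                     cluster_sigma=10.0, blob_sigma=3.0, amplitude=10.0, seed=0):
    """Simplified 2-D clustered lumpy background: Gaussian blobs scattered
    around Poisson-distributed cluster centres."""
    rng = np.random.default_rng(seed)
    img = np.zeros(shape)
    yy, xx = np.indices(shape)
    for _ in range(rng.poisson(n_clusters)):
        cy, cx = rng.uniform(0, shape[0]), rng.uniform(0, shape[1])
        for _ in range(rng.poisson(blobs_per_cluster)):
            by = cy + rng.normal(scale=cluster_sigma)
            bx = cx + rng.normal(scale=cluster_sigma)
            img += amplitude * np.exp(-((yy - by) ** 2 + (xx - bx) ** 2)
                                      / (2 * blob_sigma ** 2))
    return img

if __name__ == "__main__":
    bg = lumpy_background()
    print(bg.shape, float(bg.mean()), float(bg.std()))
```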

  13. Diffusible iodine-based contrast-enhanced computed tomography (diceCT): an emerging tool for rapid, high-resolution, 3-D imaging of metazoan soft tissues.

    PubMed

    Gignac, Paul M; Kley, Nathan J; Clarke, Julia A; Colbert, Matthew W; Morhardt, Ashley C; Cerio, Donald; Cost, Ian N; Cox, Philip G; Daza, Juan D; Early, Catherine M; Echols, M Scott; Henkelman, R Mark; Herdina, A Nele; Holliday, Casey M; Li, Zhiheng; Mahlow, Kristin; Merchant, Samer; Müller, Johannes; Orsbon, Courtney P; Paluh, Daniel J; Thies, Monte L; Tsai, Henry P; Witmer, Lawrence M

    2016-06-01

    Morphologists have historically had to rely on destructive procedures to visualize the three-dimensional (3-D) anatomy of animals. More recently, however, non-destructive techniques have come to the forefront. These include X-ray computed tomography (CT), which has been used most commonly to examine the mineralized, hard-tissue anatomy of living and fossil metazoans. One relatively new and potentially transformative aspect of current CT-based research is the use of chemical agents to render visible, and differentiate between, soft-tissue structures in X-ray images. Specifically, iodine has emerged as one of the most widely used of these contrast agents among animal morphologists due to its ease of handling, cost effectiveness, and differential affinities for major types of soft tissues. The rapid adoption of iodine-based contrast agents has resulted in a proliferation of distinct specimen preparations and scanning parameter choices, as well as an increasing variety of imaging hardware and software preferences. Here we provide a critical review of the recent contributions to iodine-based, contrast-enhanced CT research to enable researchers just beginning to employ contrast enhancement to make sense of this complex new landscape of methodologies. We provide a detailed summary of recent case studies, assess factors that govern success at each step of the specimen storage, preparation, and imaging processes, and make recommendations for standardizing both techniques and reporting practices. Finally, we discuss potential cutting-edge applications of diffusible iodine-based contrast-enhanced computed tomography (diceCT) and the issues that must still be overcome to facilitate the broader adoption of diceCT going forward.

  14. A case of boomerang dysplasia with a novel causative mutation in filamin B: identification of typical imaging findings on ultrasonography and 3D-CT imaging.

    PubMed

    Tsutsumi, Seiji; Maekawa, Ayako; Obata, Miyuki; Morgan, Timothy; Robertson, Stephen P; Kurachi, Hirohisa

    2012-01-01

    Boomerang dysplasia is a rare lethal osteochondrodysplasia characterized by disorganized mineralization of the skeleton, leading to complete nonossification of some limb bones and vertebral elements, and a boomerang-like aspect to some of the long tubular bones. Like many short-limbed skeletal dysplasias with accompanying thoracic hypoplasia, the potential lethality of the phenotype can be difficult to ascertain prenatally. We report a case of boomerang dysplasia prenatally diagnosed by use of ultrasonography and 3D-CT imaging, and identified a novel mutation in the gene encoding the cytoskeletal protein filamin B (FLNB) postmortem. Findings that aided the radiological diagnosis of this condition in utero included absent ossification of two out of three long bones in each limb and elements of the vertebrae and a boomerang-like shape to the ulnae. The identified mutation is the third described for this disorder and is predicted to lead to amino acid substitution in the actin-binding domain of the filamin B molecule.

  15. Comparison of the effect of simple and complex acquisition trajectories on the 2D SPR and 3D voxelized differences for dedicated breast CT imaging

    NASA Astrophysics Data System (ADS)

    Shah, Jainil P.; Mann, Steve D.; McKinley, Randolph L.; Tornai, Martin P.

    2014-03-01

    The 2D scatter-to-primary (SPR) ratios and 3D voxelized difference volumes were characterized for a cone beam breast CT scanner capable of arbitrary (non-traditional) 3D trajectories. The CT system uses a 30x30cm2 flat panel imager with 197 micron pixellation and a rotating tungsten anode x-ray source with 0.3mm focal spot, with an SID of 70cm. Data were acquired for two cylindrical phantoms (12.5cm and 15cm diameter) filled with three different combinations of water and methanol yielding a range of uniform densities. Projections were acquired with two acquisition trajectories: 1) simple-circular azimuthal orbit with fixed tilt; and 2) saddle orbit following a +/-15° sinusoidal trajectory around the object. Projection data were acquired in 2x2 binned mode. Projections were scatter corrected using a beam stop array method, and the 2D SPR was measured on the projections. The scatter corrected and uncorrected data were then reconstructed individually using an iterative ordered subsets convex algorithm, and the 3D difference volumes were calculated as the absolute difference between the two. Results indicate that the 2D SPR is ~7-15% higher on projections with greatest tilt for the saddle orbit, due to the longer x-ray path length through the volume, compared to the 0° tilt projections. Additionally, the 2D SPR increases with object diameter as well as density. The 3D voxelized difference volumes are an estimate of the scatter contribution to the reconstructed attenuation coefficients on a voxel level. They help visualize minor deficiencies and artifacts in the volumes due to correction methods.

  16. WE-G-18A-04: 3D Dictionary Learning Based Statistical Iterative Reconstruction for Low-Dose Cone Beam CT Imaging

    SciTech Connect

    Bai, T; Yan, H; Shi, F; Jia, X; Jiang, Steve B.; Lou, Y; Xu, Q; Mou, X

    2014-06-15

    Purpose: To develop a 3D dictionary learning based statistical reconstruction algorithm on graphic processing units (GPU), to improve the quality of low-dose cone beam CT (CBCT) imaging with high efficiency. Methods: A 3D dictionary containing 256 small volumes (atoms) of 3x3x3 voxels was trained from a high quality volume image. During reconstruction, we utilized a Cholesky decomposition based orthogonal matching pursuit algorithm to find a sparse representation on this dictionary basis of each patch in the reconstructed image, in order to regularize the image quality. To accelerate the time-consuming sparse coding in the 3D case, we implemented our algorithm in a parallel fashion by taking advantage of the tremendous computational power of GPU. Evaluations are performed based on a head-neck patient case. FDK reconstruction with the full dataset of 364 projections is used as the reference. We compared the proposed 3D dictionary learning based method with a tight frame (TF) based one using a data subset of 121 projections. Image quality under different resolutions in the z-direction, with and without statistical weighting, is also studied. Results: Compared to the TF-based CBCT reconstruction, our experiments indicated that 3D dictionary learning based CBCT reconstruction is able to recover finer structures, to remove more streaking artifacts, and is less susceptible to blocky artifacts. It is also observed that the statistical reconstruction approach is sensitive to inconsistency between the forward and backward projection operations in parallel computing. Using a high spatial resolution along the z direction helps improve the algorithm's robustness. Conclusion: The 3D dictionary learning based CBCT reconstruction algorithm is able to sense the structural information while suppressing noise, and hence to achieve high quality reconstruction. The GPU realization of the whole algorithm offers a significant efficiency enhancement, making this algorithm more feasible for potential
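
    The per-patch regularisation step described above is sparse coding on a fixed dictionary with orthogonal matching pursuit (OMP). The sketch below runs that step on vectorised 3x3x3 patches using scikit-learn's generic OMP solver with a random dictionary; the paper's learned 256-atom dictionary, Cholesky-based OMP and GPU implementation are not reproduced, and the sparsity level is an assumption.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def sparse_code_patches(patches, dictionary, n_nonzero=5):
    """Sparse-code image patches on a fixed dictionary with OMP.

    patches    : (n_patches, patch_dim) array of vectorised patches
    dictionary : (patch_dim, n_atoms) array with unit-norm columns (atoms)
    """
    codes = orthogonal_mp(dictionary, patches.T, n_nonzero_coefs=n_nonzero)
    reconstructed = (dictionary @ codes).T
    return codes.T, reconstructed

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    atoms = rng.normal(size=(27, 256))              # 3x3x3 patches, 256 atoms
    atoms /= np.linalg.norm(atoms, axis=0)
    noisy_patches = rng.normal(size=(100, 27))
    codes, recon = sparse_code_patches(noisy_patches, atoms)
    print(codes.shape, int(np.count_nonzero(codes[0])))   # at most 5 per patch
```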

  17. 3D skeletal uptake of (18)F sodium fluoride in PET/CT images is associated with overall survival in patients with prostate cancer.

    PubMed

    Lindgren Belal, Sarah; Sadik, May; Kaboteh, Reza; Hasani, Nezar; Enqvist, Olof; Svärm, Linus; Kahl, Fredrik; Simonsen, Jane; Poulsen, Mads H; Ohlsson, Mattias; Høilund-Carlsen, Poul F; Edenbrandt, Lars; Trägårdh, Elin

    2017-12-01

    Sodium fluoride (NaF) positron emission tomography combined with computed tomography (PET/CT) has been shown to be more sensitive than the whole-body bone scan in the detection of skeletal uptake due to metastases in prostate cancer. We aimed to calculate a 3D index for NaF PET/CT and investigate its correlation to the bone scan index (BSI) and overall survival (OS) in a group of patients with prostate cancer. NaF PET/CT and bone scans were studied in 48 patients with prostate cancer. Automated segmentation of the thoracic and lumbar spines, sacrum, pelvis, ribs, scapulae, clavicles, and sternum was performed in the CT images. Hotspots in the PET images were selected using both a manual and an automated method. The volume of each hotspot localized in the skeleton in the corresponding CT image was calculated. Two PET/CT indices, based on manual (manual PET index) and automatic segmenting using a threshold of SUV 15 (automated PET15 index), were calculated by dividing the sum of all hotspot volumes by the volume of all segmented bones. BSI values were obtained using software for automated calculation. BSI, manual PET index, and automated PET15 index were all significantly associated with OS and concordance indices were 0.68, 0.69, and 0.70, respectively. The median BSI was 0.39 and patients with a BSI >0.39 had a significantly shorter median survival time than patients with a BSI <0.39 (2.3 years vs not reached after 5 years of follow-up [p = 0.01]). The median manual PET index was 0.53 and patients with a manual PET index >0.53 had a significantly shorter median survival time than patients with a manual PET index <0.53 (2.5 years vs not reached after 5 years of follow-up [p < 0.001]). The median automated PET15 index was 0.11 and patients with an automated PET15 index >0.11 had a significantly shorter median survival time than patients with an automated PET15 index <0.11 (2.3 years vs not reached after 5 years of follow-up [p < 0.001]). PET/CT indices
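
    The automated PET15 index described above is, at its core, a ratio of hotspot volume to segmented bone volume. A toy numpy version with synthetic SUV and skeleton arrays follows; the CT-based bone segmentation and the SUV threshold of 15 are taken from the abstract, everything else is a placeholder.

```python
import numpy as np

def automated_pet_index(suv_volume, skeleton_mask, suv_threshold=15.0):
    """Automated PET index: total volume of skeletal hotspots above an SUV
    threshold divided by the total segmented bone volume (voxel counts stand
    in for volumes because the voxel size cancels out)."""
    hotspot_voxels = np.count_nonzero((suv_volume >= suv_threshold) & skeleton_mask)
    bone_voxels = np.count_nonzero(skeleton_mask)
    return hotspot_voxels / bone_voxels

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    suv = rng.gamma(shape=2.0, scale=3.0, size=(64, 64, 64))
    bones = rng.random((64, 64, 64)) < 0.15        # toy skeleton segmentation
    print("automated PET15 index:", automated_pet_index(suv, bones))
```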

  18. An automatic approach for 3D registration of CT scans

    NASA Astrophysics Data System (ADS)

    Hu, Yang; Saber, Eli; Dianat, Sohail; Vantaram, Sreenath Rao; Abhyankar, Vishwas

    2012-03-01

    CT (Computed tomography) is a widely employed imaging modality in the medical field. Normally, a volume of CT scans is prescribed by a doctor when a specific region of the body (typically neck to groin) is suspected of being abnormal. The doctors are required to make professional diagnoses based upon the obtained datasets. In this paper, we propose an automatic registration algorithm that helps healthcare personnel to automatically align corresponding scans from 'Study' to 'Atlas'. The proposed algorithm is capable of aligning both 'Atlas' and 'Study' into the same resolution through 3D interpolation. After retrieving the scanned slice volume in the 'Study' and the corresponding volume in the original 'Atlas' dataset, a 3D cross correlation method is used to identify and register various body parts.
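
    One simple way to realise the volume-matching step described above is cross-correlation evaluated in the Fourier domain (phase correlation), which recovers an integer 3-D translation directly from the correlation peak. The sketch below demonstrates this on a synthetic volume pair; it is a generic stand-in, not the authors' pipeline, and handles translation only (no resolution matching or body-part identification).

```python
import numpy as np

def phase_correlation_shift(atlas, study):
    """Estimate the integer 3-D translation of 'Study' relative to 'Atlas' via
    cross-correlation computed in the Fourier domain (phase correlation)."""
    cross_power = np.fft.fftn(study) * np.conj(np.fft.fftn(atlas))
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.fft.ifftn(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # peaks in the upper half of an axis correspond to negative shifts
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

if __name__ == "__main__":
    rng = np.random.default_rng(11)
    atlas = rng.random((40, 40, 40))
    study = np.roll(atlas, shift=(3, -2, 5), axis=(0, 1, 2))  # known displacement
    print("estimated shift:", phase_correlation_shift(atlas, study))  # -> (3, -2, 5)
```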

  19. 3D mapping of water in oolithic limestone at atmospheric and vacuum saturation using X-ray micro-CT differential imaging

    SciTech Connect

    Boone, M.A.; De Kock, T.; Bultreys, T.; De Schutter, G.; Vontobel, P.; Van Hoorebeke, L.; Cnudde, V.

    2014-11-15

    Determining the distribution of fluids in porous sedimentary rocks is of great importance in many geological fields. However, this is not straightforward, especially in the case of complex sedimentary rocks like limestone, where a multidisciplinary approach is often needed to capture its broad, multimodal pore size distribution and complex pore geometries. This paper focuses on the porosity and fluid distribution in two varieties of Massangis limestone, a widely used natural building stone from the southeast part of the Paris basin (France). The Massangis limestone shows locally varying post-depositional alterations, resulting in different types of pore networks and very different water distributions within the limestone. Traditional techniques for characterizing the porosity and pore size distribution are compared with state-of-the-art neutron radiography and X-ray computed microtomography to visualize the distribution of water inside the limestone at different imbibition conditions. X-ray computed microtomography images have the great advantage to non-destructively visualize and analyze the pore space inside of a rock, but are often limited to the larger macropores in the rock due to resolution limitations. In this paper, differential imaging is successfully applied to the X-ray computed microtomography images to obtain sub-resolution information about fluid occupancy and to map the fluid distribution in three dimensions inside the scanned limestone samples. The detailed study of the pore space with differential imaging allows understanding the difference in the water uptake behavior of the limestone, a primary factor that affects the weathering of the rock. - Highlights: • The water distribution in a limestone was visualized in 3D with micro-CT. • Differential imaging allowed to map both macro and microporous zones in the rock. • The 3D study of the pore space clarified the difference in water uptake behavior. • Trapped air is visualized in the moldic

  20. ACM-based automatic liver segmentation from 3-D CT images by combining multiple atlases and improved mean-shift techniques.

    PubMed

    Ji, Hongwei; He, Jiangping; Yang, Xin; Deklerck, Rudi; Cornelis, Jan

    2013-05-01

    In this paper, we present an autocontext model (ACM)-based automatic liver segmentation algorithm, which combines ACM, multiatlases, and mean-shift techniques to segment liver from 3-D CT images. Our algorithm is a learning-based method and can be divided into two stages. At the first stage, i.e., the training stage, ACM is performed to learn a sequence of classifiers in each atlas space (based on each atlas and other aligned atlases). With the use of multiple atlases, multiple sequences of ACM-based classifiers are obtained. At the second stage, i.e., the segmentation stage, the test image will be segmented in each atlas space by applying each sequence of ACM-based classifiers. The final segmentation result will be obtained by fusing segmentation results from all atlas spaces via a multiclassifier fusion technique. Specifically, in order to speed up segmentation, given a test image, we first use an improved mean-shift algorithm to perform over-segmentation and then implement the region-based image labeling instead of the original inefficient pixel-based image labeling. The proposed method is evaluated on the datasets of the MICCAI 2007 liver segmentation challenge. The experimental results show that the average volume overlap error and the average surface distance achieved by our method are 8.3% and 1.5 mm, respectively, which are comparable to the results reported in the existing state-of-the-art work on liver segmentation.
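
    The multi-classifier fusion at the end of the pipeline can be as simple as a per-voxel majority vote over the segmentations produced in the different atlas spaces. The toy sketch below shows that vote on synthetic binary masks; the ACM classifiers, atlas alignment and mean-shift over-segmentation are not reproduced.

```python
import numpy as np

def majority_vote_fusion(label_maps):
    """Fuse binary liver segmentations obtained in different atlas spaces by a
    per-voxel majority vote -- a simple stand-in for multi-classifier fusion."""
    stack = np.stack(label_maps).astype(np.int8)
    return stack.sum(axis=0) > (stack.shape[0] / 2)

if __name__ == "__main__":
    rng = np.random.default_rng(9)
    truth = rng.random((32, 32, 32)) > 0.7
    # three noisy atlas-space segmentations of the same test image
    atlases = [truth ^ (rng.random(truth.shape) > 0.95) for _ in range(3)]
    fused = majority_vote_fusion(atlases)
    print("voxel agreement with truth:", float((fused == truth).mean()))
```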

  1. Performance of adaptive iterative dose reduction 3D integrated with automatic tube current modulation in radiation dose and image noise reduction compared with filtered-back projection for 80-kVp abdominal CT: Anthropomorphic phantom and patient study.

    PubMed

    Chen, Chien-Ming; Lin, Yang-Yu; Hsu, Ming-Yi; Hung, Chien-Fu; Liao, Ying-Lan; Tsai, Hui-Yu

    2016-09-01

    To evaluate the performance of Adaptive Iterative Dose Reduction 3D (AIDR 3D) and compare it with filtered back projection (FBP) regarding radiation dose and image quality for 80-kVp abdominal CT. An abdominal phantom underwent four CT acquisitions and reconstruction algorithms (FBP; AIDR 3D mild, standard, and strong). Sixty-three patients underwent unenhanced liver CT with FBP and standard-level AIDR 3D. Further post-acquisition reconstruction with strong-level AIDR 3D was made. Patients were divided into two groups (< 29 cm and ≧ 29 cm) based on the abdominal effective diameter (Deff) at the T12 level. Quantitative (attenuation, noise, and signal-to-noise ratio) and qualitative (image quality, noise, sharpness, and artifact) analyses by two readers were performed, and interobserver agreement was calculated. Strong-level AIDR 3D reduced radiation dose by 72% in the phantom and 47.1% in the patient study compared with FBP. There was no difference in mean attenuations. Image noise was the lowest and signal-to-noise ratio the highest using strong-level AIDR 3D in both patient groups. For Deff < 29 cm, image sharpness of FBP was significantly different from that of AIDR 3D (P<0.05). For Deff ≧ 29 cm, image quality of AIDR 3D was significantly more favorable than FBP (P<0.05). Interobserver agreement was substantial. Integrated AIDR 3D allows for an automatic reduction in radiation dose and maintenance of image quality compared with FBP. Using AIDR 3D reconstruction, patients with larger abdominal circumference could be imaged at 80 kVp.
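
    The quantitative comparison above (attenuation, noise, and signal-to-noise ratio in matched regions of interest) can be sketched as follows. This is a hedged illustration, not the study's analysis code; roi_noise_and_snr and its arguments are hypothetical names.

      import numpy as np

      def roi_noise_and_snr(image_hu, roi_mask):
          """image_hu: 2-D/3-D array of CT numbers (HU); roi_mask: boolean array."""
          values = image_hu[roi_mask]
          mean_hu = values.mean()                 # attenuation in the ROI
          noise = values.std(ddof=1)              # image noise (SD of HU values)
          return mean_hu, noise, mean_hu / noise  # signal-to-noise ratio

      # The same ROI would be evaluated on the FBP and AIDR 3D reconstructions
      # of one slice to compare noise and SNR between the algorithms.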

  2. 3D inpatient dose reconstruction from the PET-CT imaging of 90Y microspheres for metastatic cancer to the liver: Feasibility study

    SciTech Connect

    Fourkal, E.; Veltchev, I.; Lin, M.; Meyer, J.; Koren, S.; Doss, M.; Yu, J. Q.

    2013-08-15

    Purpose: The introduction of radioembolization with microspheres represents a significant step forward in the treatment of patients with metastatic disease to the liver. This technique uses semiempirical formulae based on body surface area or liver and target volumes to calculate the required total activity for a given patient. However, this treatment modality lacks extremely important information, which is the three-dimensional (3D) dose delivered by microspheres to different organs after their administration. The absence of this information dramatically limits the clinical efficacy of this modality, specifically the predictive power of the treatment. Therefore, the aim of this study is to develop a 3D dose calculation technique that is based on the PET imaging of the infused microspheres. Methods: The Fluka Monte Carlo code was used to calculate the voxel dose kernel for a 90Y source with voxel size equal to that of the PET scan. The measured PET activity distribution was converted to total activity distribution for the subsequent convolution with the voxel dose kernel to obtain the 3D dose distribution. In addition, dose-volume histograms were generated to analyze the dose to the tumor and critical structures. Results: The 3D inpatient dose distribution can be reconstructed from the PET data of a patient scanned after the infusion of microspheres. A total of seven patients have been analyzed so far using the proposed reconstruction method. Four patients underwent treatment with SIR-Spheres for liver metastases from colorectal cancer and three patients were treated with Therasphere for hepatocellular cancer. A total of 14 target tumors were contoured on post-treatment PET-CT scans for dosimetric evaluation. Mean prescription activity was 1.7 GBq (range: 0.58–3.8 GBq). The resulting mean maximum measured dose to targets was 167 Gy (range: 71–311 Gy). Mean minimum dose to 70% of target (D70) was 68 Gy (range: 25–155 Gy). Mean minimum dose to 90% of target

  3. 3D inpatient dose reconstruction from the PET-CT imaging of 90Y microspheres for metastatic cancer to the liver: feasibility study.

    PubMed

    Fourkal, E; Veltchev, I; Lin, M; Koren, S; Meyer, J; Doss, M; Yu, J Q

    2013-08-01

    The introduction of radioembolization with microspheres represents a significant step forward in the treatment of patients with metastatic disease to the liver. This technique uses semiempirical formulae based on body surface area or liver and target volumes to calculate the required total activity for a given patient. However, this treatment modality lacks extremely important information, which is the three-dimensional (3D) dose delivered by microspheres to different organs after their administration. The absence of this information dramatically limits the clinical efficacy of this modality, specifically the predictive power of the treatment. Therefore, the aim of this study is to develop a 3D dose calculation technique that is based on the PET imaging of the infused microspheres. The Fluka Monte Carlo code was used to calculate the voxel dose kernel for 90Y source with voxel size equal to that of the PET scan. The measured PET activity distribution was converted to total activity distribution for the subsequent convolution with the voxel dose kernel to obtain the 3D dose distribution. In addition, dose-volume histograms were generated to analyze the dose to the tumor and critical structures. The 3D inpatient dose distribution can be reconstructed from the PET data of a patient scanned after the infusion of microspheres. A total of seven patients have been analyzed so far using the proposed reconstruction method. Four patients underwent treatment with SIR-Spheres for liver metastases from colorectal cancer and three patients were treated with Therasphere for hepatocellular cancer. A total of 14 target tumors were contoured on post-treatment PET-CT scans for dosimetric evaluation. Mean prescription activity was 1.7 GBq (range: 0.58-3.8 GBq). The resulting mean maximum measured dose to targets was 167 Gy (range: 71-311 Gy). Mean minimum dose to 70% of target (D70) was 68 Gy (range: 25-155 Gy). Mean minimum dose to 90% of target (D90) was 53 Gy (range: 13-125 Gy). A
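
    The core dose-reconstruction step, convolving the measured activity map with a precomputed voxel dose kernel, can be sketched as below. This is a simplified, hypothetical illustration: the kernel would come from the FLUKA Monte Carlo calculation described above, and the time integral here assumes a permanent 90Y implant decaying with its physical half-life.

      import numpy as np
      from scipy.signal import fftconvolve

      Y90_HALF_LIFE_S = 64.1 * 3600.0                    # physical half-life of 90Y

      def dose_from_activity(activity_bq, kernel_gy_per_decay):
          """activity_bq: 3-D activity map (Bq per voxel) from the PET scan.
          kernel_gy_per_decay: small 3-D dose kernel centred on the source voxel."""
          # Time-integrated number of decays per voxel for a permanent implant:
          decays = activity_bq * Y90_HALF_LIFE_S / np.log(2.0)
          # 3-D dose map (Gy) as the convolution of decays with the voxel kernel.
          return fftconvolve(decays, kernel_gy_per_decay, mode="same")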

  4. A Semi-Automatic Method to Extract Canal Pathways in 3D Micro-CT Images of Octocorals

    PubMed Central

    Morales Pinzón, Alfredo; Orkisz, Maciej; Rodríguez Useche, Catalina María; Torres González, Juan Sebastián; Teillaud, Stanislas; Sánchez, Juan Armando; Hernández Hoyos, Marcela

    2014-01-01

    The long-term goal of our study is to understand the internal organization of the octocoral stem canals, as well as their physiological and functional role in the growth of the colonies, and finally to assess the influence of climatic changes on this species. Here we focus on imaging tools, namely acquisition and processing of three-dimensional high-resolution images, with emphasis on automated extraction of canal pathways. Our aim was to evaluate the feasibility of the whole process, to point out and solve – if possible – technical problems related to the specimen conditioning, to determine the best acquisition parameters and to develop necessary image-processing algorithms. The pathways extracted are expected to facilitate the structural analysis of the colonies, namely to help observing the distribution, formation and number of canals along the colony. Five volumetric images of Muricea muricata specimens were successfully acquired by X-ray computed tomography with spatial resolution ranging from 4.5 to 25 micrometers. The success mainly depended on specimen immobilization. More than of the canals were successfully detected and tracked by the image-processing method developed. Thus obtained three-dimensional representation of the canal network was generated for the first time without the need of histological or other destructive methods. Several canal patterns were observed. Although most of them were simple, i.e. only followed the main branch or “turned” into a secondary branch, many others bifurcated or fused. A majority of bifurcations were observed at branching points. However, some canals appeared and/or ended anywhere along a branch. At the tip of a branch, all canals fused into a unique chamber. Three-dimensional high-resolution tomographic imaging gives a non-destructive insight to the coral ultrastructure and helps understanding the organization of the canal network. Advanced image-processing techniques greatly reduce human observer's effort and

  5. A semi-automatic method to extract canal pathways in 3D micro-CT images of Octocorals.

    PubMed

    Morales Pinzón, Alfredo; Orkisz, Maciej; Rodríguez Useche, Catalina María; Torres González, Juan Sebastián; Teillaud, Stanislas; Sánchez, Juan Armando; Hernández Hoyos, Marcela

    2014-01-01

    The long-term goal of our study is to understand the internal organization of the octocoral stem canals, as well as their physiological and functional role in the growth of the colonies, and finally to assess the influence of climatic changes on this species. Here we focus on imaging tools, namely acquisition and processing of three-dimensional high-resolution images, with emphasis on automated extraction of canal pathways. Our aim was to evaluate the feasibility of the whole process, to point out and solve - if possible - technical problems related to the specimen conditioning, to determine the best acquisition parameters and to develop necessary image-processing algorithms. The pathways extracted are expected to facilitate the structural analysis of the colonies, namely to help observing the distribution, formation and number of canals along the colony. Five volumetric images of Muricea muricata specimens were successfully acquired by X-ray computed tomography with spatial resolution ranging from 4.5 to 25 micrometers. The success mainly depended on specimen immobilization. More than [Formula: see text] of the canals were successfully detected and tracked by the image-processing method developed. Thus obtained three-dimensional representation of the canal network was generated for the first time without the need of histological or other destructive methods. Several canal patterns were observed. Although most of them were simple, i.e. only followed the main branch or "turned" into a secondary branch, many others bifurcated or fused. A majority of bifurcations were observed at branching points. However, some canals appeared and/or ended anywhere along a branch. At the tip of a branch, all canals fused into a unique chamber. Three-dimensional high-resolution tomographic imaging gives a non-destructive insight to the coral ultrastructure and helps understanding the organization of the canal network. Advanced image-processing techniques greatly reduce human observer

  6. Significance of functional hepatic resection rate calculated using 3D CT/99mTc-galactosyl human serum albumin single-photon emission computed tomography fusion imaging

    PubMed Central

    Tsuruga, Yosuke; Kamiyama, Toshiya; Kamachi, Hirofumi; Shimada, Shingo; Wakayama, Kenji; Orimo, Tatsuya; Kakisaka, Tatsuhiko; Yokoo, Hideki; Taketomi, Akinobu

    2016-01-01

    AIM: To evaluate the usefulness of the functional hepatic resection rate (FHRR) calculated using 3D computed tomography (CT)/99mTc-galactosyl-human serum albumin (GSA) single-photon emission computed tomography (SPECT) fusion imaging for surgical decision making. METHODS: We enrolled 57 patients who underwent bi- or trisectionectomy at our institution between October 2013 and March 2015. Of these, 26 patients presented with hepatocellular carcinoma, 12 with hilar cholangiocarcinoma, six with intrahepatic cholangiocarcinoma, four with liver metastasis, and nine with other diseases. All patients preoperatively underwent three-phase dynamic multidetector CT and 99mTc-GSA scintigraphy. We compared the parenchymal hepatic resection rate (PHRR) with the FHRR, which was defined as the resection volume counts per total liver volume counts on 3D CT/99mTc-GSA SPECT fusion images. RESULTS: In total, 50 patients underwent bisectionectomy and seven underwent trisectionectomy. Biliary reconstruction was performed in 15 patients, including hepatopancreatoduodenectomy in two. FHRR and PHRR were 38.6 ± 19.9 and 44.5 ± 16.0, respectively; FHRR was strongly correlated with PHRR. The regression coefficient for FHRR on PHRR was 1.16 (P < 0.0001). The ratio of FHRR to PHRR for patients with preoperative therapies (transcatheter arterial chemoembolization, radiation, radiofrequency ablation, etc.), large tumors with a volume of > 1000 mL, and/or macroscopic vascular invasion was significantly smaller than that for patients without these factors (0.73 ± 0.19 vs 0.82 ± 0.18, P < 0.05). Postoperative hyperbilirubinemia was observed in six patients. Major morbidities (Clavien-Dindo grade ≥ 3) occurred in 17 patients (29.8%). There was no case of surgery-related death. CONCLUSION: Our results suggest that FHRR is an important deciding factor for major hepatectomy, because FHRR and PHRR may be discrepant owing to insufficient hepatic inflow and congestion in patients with preoperative

  7. Significance of functional hepatic resection rate calculated using 3D CT/(99m)Tc-galactosyl human serum albumin single-photon emission computed tomography fusion imaging.

    PubMed

    Tsuruga, Yosuke; Kamiyama, Toshiya; Kamachi, Hirofumi; Shimada, Shingo; Wakayama, Kenji; Orimo, Tatsuya; Kakisaka, Tatsuhiko; Yokoo, Hideki; Taketomi, Akinobu

    2016-05-07

    To evaluate the usefulness of the functional hepatic resection rate (FHRR) calculated using 3D computed tomography (CT)/(99m)Tc-galactosyl-human serum albumin (GSA) single-photon emission computed tomography (SPECT) fusion imaging for surgical decision making. We enrolled 57 patients who underwent bi- or trisectionectomy at our institution between October 2013 and March 2015. Of these, 26 patients presented with hepatocellular carcinoma, 12 with hilar cholangiocarcinoma, six with intrahepatic cholangiocarcinoma, four with liver metastasis, and nine with other diseases. All patients preoperatively underwent three-phase dynamic multidetector CT and (99m)Tc-GSA scintigraphy. We compared the parenchymal hepatic resection rate (PHRR) with the FHRR, which was defined as the resection volume counts per total liver volume counts on 3D CT/(99m)Tc-GSA SPECT fusion images. In total, 50 patients underwent bisectionectomy and seven underwent trisectionectomy. Biliary reconstruction was performed in 15 patients, including hepatopancreatoduodenectomy in two. FHRR and PHRR were 38.6 ± 19.9 and 44.5 ± 16.0, respectively; FHRR was strongly correlated with PHRR. The regression coefficient for FHRR on PHRR was 1.16 (P < 0.0001). The ratio of FHRR to PHRR for patients with preoperative therapies (transcatheter arterial chemoembolization, radiation, radiofrequency ablation, etc.), large tumors with a volume of > 1000 mL, and/or macroscopic vascular invasion was significantly smaller than that for patients without these factors (0.73 ± 0.19 vs 0.82 ± 0.18, P < 0.05). Postoperative hyperbilirubinemia was observed in six patients. Major morbidities (Clavien-Dindo grade ≥ 3) occurred in 17 patients (29.8%). There was no case of surgery-related death. Our results suggest that FHRR is an important deciding factor for major hepatectomy, because FHRR and PHRR may be discrepant owing to insufficient hepatic inflow and congestion in patients with preoperative therapies, macroscopic vascular
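
    The two rates compared above can be written down directly: PHRR is a pure volume ratio, while FHRR weights every voxel by its 99mTc-GSA SPECT counts so that poorly functioning parenchyma contributes less. The sketch below is illustrative only; the array names are hypothetical.

      import numpy as np

      def resection_rates(liver_mask, resection_mask, spect_counts):
          """Boolean masks and SPECT counts defined on one common voxel grid."""
          phrr = resection_mask.sum() / liver_mask.sum()      # parenchymal rate
          fhrr = spect_counts[resection_mask].sum() / spect_counts[liver_mask].sum()
          return 100.0 * phrr, 100.0 * fhrr                   # as percentages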

  8. Comprehensive Non-Destructive Conservation Documentation of Lunar Samples Using High-Resolution Image-Based 3D Reconstructions and X-Ray CT Data

    NASA Technical Reports Server (NTRS)

    Blumenfeld, E. H.; Evans, C. A.; Oshel, E. R.; Liddle, D. A.; Beaulieu, K.; Zeigler, R. A.; Hanna, R. D.; Ketcham, R. A.

    2015-01-01

    Established contemporary conservation methods within the fields of Natural and Cultural Heritage encourage an interdisciplinary approach to preservation of heritage material (both tangible and intangible) that holds "Outstanding Universal Value" for our global community. NASA's lunar samples were acquired from the moon for the primary purpose of intensive scientific investigation. These samples, however, also invoke cultural significance, as evidenced by the millions of people per year that visit lunar displays in museums and heritage centers around the world. Being both scientifically and culturally significant, the lunar samples require a unique conservation approach. Government mandate dictates that NASA's Astromaterials Acquisition and Curation Office develop and maintain protocols for "documentation, preservation, preparation and distribution of samples for research, education and public outreach" for both current and future collections of astromaterials. Documentation, considered the first stage within the conservation methodology, has evolved many new techniques since curation protocols for the lunar samples were first implemented, and the development of new documentation strategies for current and future astromaterials is beneficial to keeping curation protocols up to date. We have developed and tested a comprehensive non-destructive documentation technique using high-resolution image-based 3D reconstruction and X-ray CT (XCT) data in order to create interactive 3D models of lunar samples that would ultimately be served to both researchers and the public. These data enhance preliminary scientific investigations including targeted sample requests, and also provide a new visual platform for the public to experience and interact with the lunar samples. We intend to serve these data as they are acquired on NASA's Astromaterials Acquisition and Curation website at http://curator.jsc.nasa.gov/. Providing 3D interior and exterior documentation of astromaterial

  9. True 3d Images and Their Applications

    NASA Astrophysics Data System (ADS)

    Wang, Zheng

    2012-07-01

    A true 3D image is a geo-referenced image. Besides its radiometric information, it also has true 3D ground coordinates (XYZ) for every pixel. For a true 3D image, especially a true 3D oblique image, true 3D coordinates are available not only for building roofs and/or open ground, but also for all other visible objects on the ground, such as visible building walls/windows and even trees. The true 3D image breaks the 2D barrier of traditional orthophotos by introducing the third dimension (elevation) into the image. From a true 3D image, for example, people can read not only a building's location (XY) but also its height (Z). True 3D images will fundamentally change, if not revolutionize, the way people display, view, extract, use, and represent geospatial information from imagery. In many areas, true 3D images can have a profound impact on how geospatial information is represented, how true 3D ground modeling is performed, and how real-world scenes are presented. This paper first gives a definition and description of a true 3D image, followed by a brief review of the key advancements in geospatial technologies that have made the creation of true 3D images possible. Next, the paper introduces what a true 3D image is made of. Then, the paper discusses some possible contributions and impacts that true 3D images can make to geospatial information fields. Finally, the paper presents a list of the benefits of having and using true 3D images and the applications of true 3D images in a couple of 3D city modeling projects.

  10. 3D carotid plaque MR Imaging

    PubMed Central

    Parker, Dennis L.

    2015-01-01

    SYNOPSIS There has been significant progress in 3D carotid plaque magnetic resonance imaging techniques in recent years. 3D plaque imaging clearly represents the future of clinical use. With effective flow suppression techniques, a choice of different contrast-weighted acquisitions, and time-efficient imaging approaches, 3D plaque imaging offers flexible imaging-plane and view-angle analysis, large coverage, multi-vascular-bed capability, and can even be used for fast screening. PMID:26610656

  11. Reconstruction-based 3D/2D image registration.

    PubMed

    Tomazevic, Dejan; Likar, Bostjan; Pernus, Franjo

    2005-01-01

    In this paper we present a novel 3D/2D registration method, where first, a 3D image is reconstructed from a few 2D X-ray images and next, the preoperative 3D image is brought into the best possible spatial correspondence with the reconstructed image by optimizing a similarity measure. Because the quality of the reconstructed image is generally low, we introduce a novel asymmetric mutual information similarity measure, which is able to cope with low image quality as well as with different imaging modalities. The novel 3D/2D registration method has been evaluated using standardized evaluation methodology and publicly available 3D CT, 3DRX, and MR and 2D X-ray images of two spine phantoms, for which gold standard registrations were known. In terms of robustness, reliability and capture range the proposed method outperformed the gradient-based method and the method based on digitally reconstructed radiographs (DRRs).
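
    The similarity measure being optimized is based on mutual information; the sketch below shows the standard histogram-based form for two overlapping images (the paper's asymmetric variant differs, so this is only the baseline it builds on). Names are illustrative.

      import numpy as np

      def mutual_information(img_a, img_b, bins=64):
          """Standard mutual information from the joint intensity histogram."""
          hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
          pxy = hist / hist.sum()                    # joint probability
          px = pxy.sum(axis=1, keepdims=True)        # marginal of img_a
          py = pxy.sum(axis=0, keepdims=True)        # marginal of img_b
          nz = pxy > 0                               # avoid log(0)
          return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))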

  12. WE-AB-204-03: A Novel 3D Printed Phantom for 4D PET/CT Imaging and SIB Radiotherapy Verification

    SciTech Connect

    Soultan, D; Murphy, J; Moiseenko, V; Cervino, L; Gill, B

    2015-06-15

    Purpose: To construct and test a 3D printed phantom designed to mimic variable PET tracer uptake seen in lung tumor volumes. To assess segmentation accuracy of sub-volumes of the phantom following 4D PET/CT scanning with ideal and patient-specific respiratory motion. To plan, deliver and verify delivery of PET-driven, gated, simultaneous integrated boost (SIB) radiotherapy plans. Methods: A set of phantoms and inserts were designed and manufactured for a realistic representation of lung cancer gated radiotherapy steps from 4D PET/CT scanning to dose delivery. A cylindrical phantom (40 × 120 mm) holds inserts for PET/CT scanning. The novel 3D printed insert dedicated to 4D PET/CT mimics high PET tracer uptake in the core and lower uptake in the periphery. This insert is a variable-density porous cylinder (22.12 × 70 mm) of ABS-P430 thermoplastic, 3D printed on a uPrint SE Plus, with an inner void volume (5.5 × 42 mm). The square pores (1.8 × 1.8 mm² each) fill 50% of the outer volume, resulting in a 2:1 SUV ratio of PET tracer in the void volume with respect to the porous volume. A cylindrical phantom of matching size is dedicated to validating gated radiotherapy. It contains eight peripheral holes matching the location of the porous part of the 3D printed insert, and one central hole. These holes accommodate adaptors for a Farmer-type ion chamber and cell vials. Results: End-to-end tests were performed from 4D PET/CT scanning to transferring data to the planning system and target volume delineation. 4D PET/CT scans of the phantom were acquired with different respiratory motion patterns and gating windows. A measured 2:1 18F-FDG SUV ratio between the inner void and outer volume matched the 3D printed design. Conclusion: The novel 3D printed phantom mimics variable PET tracer uptake typical of tumors. The obtained 4D PET/CT scans are suitable for segmentation, treatment planning and delivery in SIB gated treatments of NSCLC.

  13. Effective incorporating spatial information in a mutual information based 3D-2D registration of a CT volume to X-ray images.

    PubMed

    Zheng, Guoyan

    2010-10-01

    This paper addresses the problem of estimating the 3D rigid poses of a CT volume of an object from its 2D X-ray projection(s). We use maximization of mutual information, an accurate similarity measure for multi-modal and mono-modal image registration tasks. However, it is known that the standard mutual information measures only take intensity values into account without considering spatial information and their robustness is questionable. In this paper, instead of directly maximizing mutual information, we propose to use a variational approximation derived from the Kullback-Leibler bound. Spatial information is then incorporated into this variational approximation using a Markov random field model. The newly derived similarity measure has a least-squares form and can be effectively minimized by a multi-resolution Levenberg-Marquardt optimizer. Experiments were conducted on datasets from two applications: (a) intra-operative patient pose estimation from a limited number (e.g. 2) of calibrated fluoroscopic images, and (b) post-operative cup orientation estimation from a single standard X-ray radiograph with/without gonadal shielding. The experiment on intra-operative patient pose estimation showed a mean target registration accuracy of 0.8mm and a capture range of 11.5mm, while the experiment on estimating the post-operative cup orientation from a single X-ray radiograph showed a mean accuracy below 2 degrees for both anteversion and inclination. More importantly, results from both experiments demonstrated that the newly derived similarity measures were robust to occlusions in the X-ray image(s).

  14. From medical imaging data to 3D printed anatomical models.

    PubMed

    Bücking, Thore M; Hill, Emma R; Robertson, James L; Maneas, Efthymios; Plumb, Andrew A; Nikitichev, Daniil I

    2017-01-01

    Anatomical models are important training and teaching tools in the clinical environment and are routinely used in medical imaging research. Advances in segmentation algorithms and increased availability of three-dimensional (3D) printers have made it possible to create cost-efficient patient-specific models without expert knowledge. We introduce a general workflow that can be used to convert volumetric medical imaging data (as generated by Computed Tomography (CT)) to 3D printed physical models. This process is broken up into three steps: image segmentation, mesh refinement and 3D printing. To lower the barrier to entry and provide the best options when aiming to 3D print an anatomical model from medical images, we provide an overview of relevant free and open-source image segmentation tools as well as 3D printing technologies. We demonstrate the utility of this streamlined workflow by creating models of ribs, liver, and lung using a Fused Deposition Modelling 3D printer.
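
    The segmentation-to-mesh step of this workflow can be sketched with open-source tools. The example below is a minimal illustration (not from the paper), assuming scikit-image for surface extraction and the numpy-stl package for export; a simple HU threshold stands in for a proper segmentation, and the CT voxel spacing gives the mesh real-world dimensions.

      import numpy as np
      from skimage import measure
      from stl import mesh              # provided by the numpy-stl package

      def ct_to_stl(volume_hu, threshold_hu, voxel_spacing_mm, out_path="model.stl"):
          # Extract an isosurface at the chosen HU threshold (e.g. bone ~ 300 HU).
          verts, faces, _, _ = measure.marching_cubes(
              volume_hu, level=threshold_hu, spacing=voxel_spacing_mm)
          # Pack the triangles into an STL mesh and write it for 3D printing.
          solid = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
          solid.vectors[:] = verts[faces]            # (n_faces, 3, 3) triangle corners
          solid.save(out_path)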

  15. From medical imaging data to 3D printed anatomical models

    PubMed Central

    Hill, Emma R.; Robertson, James L.; Maneas, Efthymios; Plumb, Andrew A.; Nikitichev, Daniil I.

    2017-01-01

    Anatomical models are important training and teaching tools in the clinical environment and are routinely used in medical imaging research. Advances in segmentation algorithms and increased availability of three-dimensional (3D) printers have made it possible to create cost-efficient patient-specific models without expert knowledge. We introduce a general workflow that can be used to convert volumetric medical imaging data (as generated by Computed Tomography (CT)) to 3D printed physical models. This process is broken up into three steps: image segmentation, mesh refinement and 3D printing. To lower the barrier to entry and provide the best options when aiming to 3D print an anatomical model from medical images, we provide an overview of relevant free and open-source image segmentation tools as well as 3D printing technologies. We demonstrate the utility of this streamlined workflow by creating models of ribs, liver, and lung using a Fused Deposition Modelling 3D printer. PMID:28562693

  16. A new 3-D diagnosis strategy for duodenal malignant lesions using multidetector row CT, CT virtual duodenoscopy, duodenography, and 3-D multicholangiography.

    PubMed

    Sata, N; Endo, K; Shimura, K; Koizumi, M; Nagai, H

    2007-01-01

    Recent advances in multidetector row computed tomography (MD-CT) technology provide new opportunities for clinical diagnoses of various diseases. Here we assessed CT virtual duodenoscopy, duodenography, and three-dimensional (3D) multicholangiography created by MD-CT for clinical diagnosis of duodenal malignant lesions. The study involved seven cases of periduodenal carcinoma (four ampullary carcinomas, two duodenal carcinomas, one pancreatic carcinoma). Biliary contrast medium was administered intravenously, followed by intravenous administration of an anticholinergic agent and oral administration of effervescent granules for expanding the upper gastrointestinal tract. Following intravenous administration of a nonionic contrast medium, an upper abdominal MD-CT scan was performed in the left lateral position. Scan data were processed on a workstation to create CT virtual duodenoscopy, duodenography, 3D multicholangiography, and various postprocessing images, which were then evaluated for their effectiveness as preoperative diagnostic tools. Carcinoma location and extent were clearly demonstrated as defects or colored low-density areas in 3-D multicholangiography images and as protruding lesions in virtual duodenography and duodenoscopy images. These findings were confirmed using multiplanar or curved planar reformation images. In conclusion, CT virtual duodenoscopy, duodenography, 3-D multicholangiography, and various images created by MD-CT alone provided necessary and adequate preoperative diagnostic information.

  17. Self-Calibration of Cone-Beam CT Geometry Using 3D-2D Image Registration: Development and Application to Task-Based Imaging with a Robotic C-Arm

    PubMed Central

    Ouadah, S.; Stayman, J. W.; Gang, G.; Uneri, A.; Ehtiati, T.; Siewerdsen, J. H.

    2015-01-01

    Purpose Robotic C-arm systems are capable of general noncircular orbits whose trajectories can be driven by the particular imaging task. However, obtaining accurate calibrations for reconstruction in such geometries can be a challenging problem. This work proposes a method to perform a unique geometric calibration of an arbitrary C-arm orbit by registering 2D projections to a previously acquired 3D image to determine the transformation parameters representing the system geometry. Methods Experiments involved a cone-beam CT (CBCT) bench system, a robotic C-arm, and three phantoms. A robust 3D-2D registration process was used to compute the 9-degree-of-freedom (DOF) transformation between each projection and an existing 3D image by maximizing normalized gradient information with a digitally reconstructed radiograph (DRR) of the 3D volume. The quality of the resulting “self-calibration” was evaluated in terms of the agreement with an established calibration method using a BB phantom as well as image quality in the resulting CBCT reconstruction. Results The self-calibration yielded CBCT images without significant difference in spatial resolution from the standard (“true”) calibration methods (p-value >0.05 for all three phantoms), and the differences between CBCT images reconstructed using the “self” and “true” calibration methods were on the order of 10⁻³ mm⁻¹. Maximum error in magnification was 3.2%, and back-projection ray placement was within 0.5 mm. Conclusion The proposed geometric “self” calibration provides a means for 3D imaging on general non-circular orbits in CBCT systems for which a geometric calibration is either not available or not reproducible. The method forms the basis of advanced “task-based” 3D imaging methods now in development for robotic C-arms. PMID:26388661

  18. Self-calibration of cone-beam CT geometry using 3D-2D image registration: development and application to task-based imaging with a robotic C-arm

    NASA Astrophysics Data System (ADS)

    Ouadah, S.; Stayman, J. W.; Gang, G.; Uneri, A.; Ehtiati, T.; Siewerdsen, J. H.

    2015-03-01

    Purpose: Robotic C-arm systems are capable of general noncircular orbits whose trajectories can be driven by the particular imaging task. However, obtaining accurate calibrations for reconstruction in such geometries can be a challenging problem. This work proposes a method to perform a unique geometric calibration of an arbitrary C-arm orbit by registering 2D projections to a previously acquired 3D image to determine the transformation parameters representing the system geometry. Methods: Experiments involved a cone-beam CT (CBCT) bench system, a robotic C-arm, and three phantoms. A robust 3D-2D registration process was used to compute the 9-degree-of-freedom (DOF) transformation between each projection and an existing 3D image by maximizing normalized gradient information with a digitally reconstructed radiograph (DRR) of the 3D volume. The quality of the resulting "self-calibration" was evaluated in terms of the agreement with an established calibration method using a BB phantom as well as image quality in the resulting CBCT reconstruction. Results: The self-calibration yielded CBCT images without significant difference in spatial resolution from the standard ("true") calibration methods (p-value >0.05 for all three phantoms), and the differences between CBCT images reconstructed using the "self" and "true" calibration methods were on the order of 10⁻³ mm⁻¹. Maximum error in magnification was 3.2%, and back-projection ray placement was within 0.5 mm. Conclusion: The proposed geometric "self" calibration provides a means for 3D imaging on general noncircular orbits in CBCT systems for which a geometric calibration is either not available or not reproducible. The method forms the basis of advanced "task-based" 3D imaging methods now in development for robotic C-arms.
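
    The registration relies on comparing measured projections with digitally reconstructed radiographs (DRRs) of the prior 3D image. A minimal, parallel-beam DRR can be sketched as a sum of attenuation values along the ray direction; the actual method projects through the full 9-DOF C-arm geometry, so this is only the underlying idea, with illustrative names.

      import numpy as np

      def parallel_beam_drr(mu_volume, axis=0, voxel_size_mm=1.0):
          """mu_volume: 3-D array of linear attenuation coefficients (1/mm)."""
          line_integrals = mu_volume.sum(axis=axis) * voxel_size_mm
          return np.exp(-line_integrals)            # transmitted intensity I/I0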

  19. Computer Assisted Cancer Device - 3D Imaging

    DTIC Science & Technology

    2006-10-01

    tomosynthesis images of the breast. iCAD has identified several sources of 3D tomosynthesis data, and has begun adapting its image analysis...collaborative relationships with major manufacturers of tomosynthesis equipment. iCAD believes that tomosynthesis, a 3D breast imaging technique...purported advantages of tomosynthesis relative to conventional mammography include: improved lesion visibility, improved lesion detectability and

  20. Evaluation of 1D, 2D and 3D nodule size estimation by radiologists for spherical and non-spherical nodules through CT thoracic phantom imaging

    NASA Astrophysics Data System (ADS)

    Petrick, Nicholas; Kim, Hyun J. Grace; Clunie, David; Borradaile, Kristin; Ford, Robert; Zeng, Rongping; Gavrielides, Marios A.; McNitt-Gray, Michael F.; Fenimore, Charles; Lu, Z. Q. John; Zhao, Binsheng; Buckler, Andrew J.

    2011-03-01

    The purpose of this work was to estimate bias in measuring the size of spherical and non-spherical lesions by radiologists using three sizing techniques under a variety of simulated lesion and reconstruction slice thickness conditions. We designed a reader study in which six radiologists estimated the size of 10 synthetic nodules of various sizes, shapes and densities embedded within a realistic anthropomorphic thorax phantom from CT scan data. In this manuscript we report preliminary results for the first four readers (Readers 1-4). Two repeat CT scans of the phantom containing each nodule were acquired using a Philips 16-slice scanner at 0.8 and 5 mm slice thicknesses. The readers measured the sizes of all nodules for each of the 40 resulting scans (10 nodules × 2 slice thicknesses × 2 repeat scans) using three sizing techniques (1D longest in-slice dimension; 2D area from longest in-slice dimension and corresponding longest perpendicular dimension; 3D semi-automated volume) in each of 2 reading sessions. The normalized size was estimated for each sizing method and an inter-comparison of bias among methods was performed. The overall relative biases (standard deviation) of the 1D, 2D and 3D methods for the four-reader subset (Readers 1-4) were -13.4 (20.3), -15.3 (28.4) and 4.8 (21.2) percentage points, respectively. The relative bias for the 3D volume sizing method was statistically lower than either the 1D or 2D method (p<0.001 for 1D vs. 3D and 2D vs. 3D).
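
    The bias figures quoted above are relative errors with respect to the known phantom sizes, expressed in percentage points. A minimal sketch of that computation (illustrative names, not the study's code) is:

      import numpy as np

      def relative_bias_percent(measured, truth):
          """measured: reader size estimates; truth: matching true nodule sizes."""
          rel_error = (np.asarray(measured) - np.asarray(truth)) / np.asarray(truth)
          return 100.0 * rel_error.mean(), 100.0 * rel_error.std(ddof=1)

      # e.g. repeated 1D longest-diameter estimates of a 10 mm nodule
      bias, spread = relative_bias_percent([8.6, 9.1, 8.9], [10.0, 10.0, 10.0])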

  1. Digital holography and 3-D imaging.

    PubMed

    Banerjee, Partha; Barbastathis, George; Kim, Myung; Kukhtarev, Nickolai

    2011-03-01

    This feature issue on Digital Holography and 3-D Imaging comprises 15 papers on digital holographic techniques and applications, computer-generated holography and encryption techniques, and 3-D display. It is hoped that future work in the area leads to innovative applications of digital holography and 3-D imaging to biology and sensing, and to the development of novel nonlinear dynamic digital holographic techniques.

  2. 3D ultrafast ultrasound imaging in vivo

    NASA Astrophysics Data System (ADS)

    Provost, Jean; Papadacci, Clement; Esteban Arango, Juan; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

  3. 3D ultrafast ultrasound imaging in vivo.

    PubMed

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-07

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

  4. The effect of activity outside the field of view on image quality for a 3D LSO-based whole body PET/CT scanner.

    PubMed

    Matheoud, R; Secco, C; Della Monica, P; Leva, L; Sacchetti, G; Inglese, E; Brambilla, M

    2009-10-07

    The purpose of this study was to quantify the influence of outside field of view (FOV) activity concentration (Ac,out) on the noise equivalent count rate (NECR), scatter fraction (SF) and image quality of a 3D LSO whole-body PET/CT scanner. The contrast-to-noise ratio (CNR) was the figure of merit used to characterize the image quality of PET scans. A modified International Electrotechnical Commission (IEC) phantom was used to obtain SF and counting rates similar to those found in average patients. A scatter phantom was positioned at the end of the modified IEC phantom to simulate an activity that extends beyond the scanner. The modified IEC phantom was filled with 18F (11 kBq/mL) and the spherical targets, with internal diameter (ID) ranging from 10 to 37 mm, had a target-to-background ratio of 10. PET images were acquired with background activity concentrations in the FOV (Ac,bkg) of about 11, 9.2, 6.6, 5.2 and 3.5 kBq/mL. The emission scan duration (ESD) was set to 1, 2, 3 and 4 min. The tube inside the scatter phantom was filled with activities to provide Ac,out in the whole scatter phantom of zero, half, one, two and four times that of the modified IEC phantom. Plots of CNR versus the various parameters are provided. Multiple linear regression was employed to study the effects of Ac,out on CNR, adjusted for the presence of variables (sphere ID, Ac,bkg and ESD) related to CNR. The presence of outside FOV activity at the same concentration as the one inside the FOV reduces peak NECR by 30%. The increase in SF is marginal (1.2%). CNR diminishes significantly with increasing outside FOV activity, in the range explored. ESD and Ac,out have a similar weight in accounting for CNR variance. Thus, an experimental law that adjusts the scan duration to the outside FOV activity can be devised. Recovery of CNR loss due to elevated Ac,out seems feasible by modulating the ESD in individual bed positions according to Ac,out.
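
    The contrast-to-noise ratio used as the figure of merit above has a simple form: the contrast between a hot sphere and the background divided by the background noise. The sketch below uses one common definition; the study's exact formula may differ, and the names are illustrative.

      import numpy as np

      def contrast_to_noise_ratio(image, sphere_mask, background_mask):
          sphere_mean = image[sphere_mask].mean()
          bkg_mean = image[background_mask].mean()
          bkg_sd = image[background_mask].std(ddof=1)      # background noise
          return (sphere_mean - bkg_mean) / bkg_sd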

  5. 3D Ultrasound Can Contribute to Planning CT to Define the Target for Partial Breast Radiotherapy

    SciTech Connect

    Berrang, Tanya S.; Truong, Pauline T. Popescu, Carmen; Drever, Laura; Kader, Hosam A.; Hilts, Michelle L.; Mitchell, Tracy; Soh, S.Y.; Sands, Letricia; Silver, Stuart; Olivotto, Ivo A.

    2009-02-01

    Purpose: The role of three-dimensional breast ultrasound (3D US) in planning partial breast radiotherapy (PBRT) is unknown. This study evaluated the accuracy of coregistration of 3D US to planning computerized tomography (CT) images, the seroma contouring consistency of radiation oncologists using the two imaging modalities and the clinical situations in which US was associated with improved contouring consistency compared to CT. Materials and Methods: Twenty consecutive women with early-stage breast cancer were enrolled prospectively after breast-conserving surgery. Subjects underwent 3D US at CT simulation for adjuvant RT. Three radiation oncologists independently contoured the seroma on separate CT and 3D US image sets. Seroma clarity, seroma volumes, and interobserver contouring consistency were compared between the imaging modalities. Associations between clinical characteristics and seroma clarity were examined using Pearson correlation statistics. Results: 3D US and CT coregistration was accurate to within 2 mm or less in 19/20 (95%) cases. CT seroma clarity was reduced with dense breast parenchyma (p = 0.035), small seroma volume (p < 0.001), and small volume of excised breast tissue (p = 0.01). US seroma clarity was not affected by these factors (p = NS). US was associated with improved interobserver consistency compared with CT in 8/20 (40%) cases. Of these 8 cases, 7 had low CT seroma clarity scores and 4 had heterogeneously to extremely dense breast parenchyma. Conclusion: 3D US can be a useful adjunct to CT in planning PBRT. Radiation oncologists were able to use US images to contour the seroma target, with improved interobserver consistency compared with CT in cases with dense breast parenchyma and poor CT seroma clarity.

  6. Volumetric image display for complex 3D data visualization

    NASA Astrophysics Data System (ADS)

    Tsao, Che-Chih; Chen, Jyh Shing

    2000-05-01

    A volumetric image display (VID) is a new display technology capable of displaying computer-generated 3D images in a volumetric space. Many viewers can walk around the display and see the image from all directions simultaneously without wearing any glasses. The image is real and possesses all major physiological and psychological depth cues. Due to the volumetric nature of its image, the VID can provide the most natural human-machine interface in operations involving 3D data manipulation and 3D target monitoring. The technology creates volumetric 3D images by projecting a series of profiling images distributed in space; these form a volumetric image because of the after-image effect of human eyes. Exemplary applications in biomedical image visualization were tested on a prototype display, using different methods to display a data set from CT scans. The features of this display technology make it most suitable for applications that require quick understanding of 3D relations, need frequent spatial interaction with the 3D images, or involve time-varying 3D data. It can also be useful for group discussion and decision making.

  7. 3D Backscatter Imaging System

    NASA Technical Reports Server (NTRS)

    Turner, D. Clark (Inventor); Whitaker, Ross (Inventor)

    2016-01-01

    Systems and methods for imaging an object using backscattered radiation are described. The imaging system comprises both a radiation source for irradiating an object that is rotationally movable about the object, and a detector for detecting backscattered radiation from the object that can be disposed on substantially the same side of the object as the source and which can be rotationally movable about the object. The detector can be separated into multiple detector segments with each segment having a single line of sight projection through the object and so detects radiation along that line of sight. Thus, each detector segment can isolate the desired component of the backscattered radiation. By moving independently of each other about the object, the source and detector can collect multiple images of the object at different angles of rotation and generate a three dimensional reconstruction of the object. Other embodiments are described.

  8. Test of 3D CT reconstructions by EM + TV algorithm from undersampled data

    SciTech Connect

    Evseev, Ivan; Ahmann, Francielle; Silva, Hamilton P. da

    2013-05-06

    Computerized tomography (CT) plays an important role in medical imaging for diagnosis and therapy. However, CT imaging is associated with ionizing radiation exposure of patients. Therefore, dose reduction is an essential issue in CT. In 2011, the Expectation Maximization and Total Variation Based Model for CT Reconstruction (EM+TV) was proposed. This method can reconstruct a better image using fewer CT projections in comparison with the usual filtered back projection (FBP) technique. Thus, it could significantly reduce the overall dose of radiation in CT. This work reports the results of an independent numerical simulation for cone-beam CT geometry with alternative virtual phantoms. As in the original report, 3D CT images of 128 × 128 × 128 virtual phantoms were reconstructed. It was not possible to implement phantoms with larger dimensions because of the slowness of code execution, even on a Core i7 CPU.

  9. 3D Ultrafast Ultrasound Imaging In Vivo

    PubMed Central

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-01-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative real-time imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in three dimensions based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32×32 matrix-array probe. Its capability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3-D Shear-Wave Imaging, 3-D Ultrafast Doppler Imaging and finally 3D Ultrafast combined Tissue and Flow Doppler. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3-D Ultrafast Doppler was used to obtain 3-D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, for the first time, the complex 3-D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, and the 3-D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3-D Ultrafast Ultrasound Imaging for the 3-D real-time mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. PMID:25207828

  10. Fast 3D multiple fan-beam CT systems

    NASA Astrophysics Data System (ADS)

    Kohlbrenner, Adrian; Haemmerle, Stefan; Laib, Andres; Koller, Bruno; Ruegsegger, Peter

    1999-09-01

    Two fast, CCD-based three-dimensional CT scanners for in vivo applications have been developed. One is designed for small laboratory animals and has a voxel size of 20 micrometers, while the other, having a voxel size of 80 micrometers, is used for human examinations. Both instruments make use of a novel multiple fan-beam technique: radiation from a line-focus X-ray tube is divided into a stack of fan-beams by a 28-micrometer-pitch foil collimator. The resulting wedge-shaped X-ray field is the key to the instrument's high scanning speed and allows the sample to be positioned close to the X-ray source, which makes it possible to build compact CT systems. In contrast to cone-beam scanners, the multiple fan-beam scanner relies on standard fan-beam algorithms, thereby eliminating inaccuracies in the reconstruction process. The projections from one single rotation are acquired within 2 min and are subsequently reconstructed into a 1024 × 1024 × 255 voxel array. Hence a single rotation about the sample delivers a 3D image containing a quarter of a billion voxels. Such volumetric images are 6.6 mm in height and can be stacked on top of each other. An area CCD sensor bonded to a fiber-optic light guide acts as a detector. Since no image intensifier, conventional optics or tapers are used throughout the system, the image is virtually distortion free. The scanner's high scanning speed and high resolution at moderately low radiation dose are the basis for reliable time-serial measurements and analyses.

  11. SU-C-BRB-06: Utilizing 3D Scanner and Printer for Dummy Eye-Shield: Artifact-Free CT Images of Tungsten Eye-Shield for Accurate Dose Calculation

    SciTech Connect

    Park, J; Lee, J; Kim, H; Kim, I; Ye, S

    2015-06-15

    Purpose: To evaluate the effect of a tungsten eye-shield on the dose distribution of a patient. Methods: A 3D scanner was used to extract the dimension and shape of a tungsten eye-shield in the STL format. Scanned data was transferred into a 3D printer. A dummy eye shield was then produced using bio-resin (3D systems, VisiJet M3 Proplast). For a patient with mucinous carcinoma, the planning CT was obtained with the dummy eye-shield placed on the patient’s right eye. Field shaping of 6 MeV was performed using a patient-specific cerrobend block on the 15 × 15 cm² applicator. The gantry angle was 330° to cover the planning target volume near the lens. EGS4/BEAMnrc was commissioned using our measurement data from a Varian 21EX. For the CT-based dose calculation using EGS4/DOSXYZnrc, the CT images were converted to a phantom file through the ctcreate program. The phantom file had the same resolution as the planning CT images. By assigning the CT numbers of the dummy eye-shield region to 17000, the real dose distributions below the tungsten eye-shield were calculated in EGS4/DOSXYZnrc. In the TPS, the CT number of the dummy eye-shield region was assigned to the maximum allowable CT number (3000). Results: As compared to the maximum dose, the MC dose on the right lens or below the eye-shield area was less than 2%, while the corresponding TPS-calculated dose was an unrealistic value of approximately 50%. Conclusion: Utilizing a 3D scanner and a 3D printer, a dummy eye-shield for electron treatment can be easily produced. The artifact-free CT images were successfully incorporated into the CT-based Monte Carlo simulations. The developed method was useful in predicting the realistic dose distributions around the lens blocked with the tungsten shield.

  12. Ultrafast 3D imaging by holography

    NASA Astrophysics Data System (ADS)

    Awatsuji, Yasuhiro

    2017-02-01

    As an ultrafast 3D imaging technique, an improved light-in-flight recording by holography using a femtosecond laser is presented. To record a 3D image of light propagation, a voluminous light-scattering medium is introduced into the light-in-flight recording by holography. A mode-locked Ti:Sapphire laser is employed as the optical source. To generate the 3D image of propagating light, a voluminous light-scattering medium made of gelatin jelly is set in the optical path of the object wave of the holography. A 3D motion picture of the propagation of a femtosecond light pulse was achieved over 260 ps with 220 fs temporal resolution. Digital recording of the 3D image of light propagation is also presented. To record the 3D image of the light propagation, digital holography is combined with light-in-flight recording by holography using a voluminous light-scattering medium. The hologram is recorded with an image sensor such as a CCD image sensor. The image of the light is reconstructed from the digitally recorded hologram by computer. To obtain the motion picture of the 3D image of the light propagation, a set of hologram pieces of 512 × 512 pixels each is extracted from the whole area of the digitally recorded hologram. The position of the extracted piece on the recorded hologram is shifted, piece by piece, along the direction in which the reference optical pulse swept across the image sensor. The set of pieces is reconstructed sequentially, and the 3D digital motion picture of the propagation of the femtosecond light pulse is thus obtained. The recordable time of the motion picture was 60 ps.

  13. Three-Dimensional Mapping of Soil Chemical Characteristics at Micrometric Scale by Combining 2D SEM-EDX Data and 3D X-Ray CT Images

    PubMed Central

    Hapca, Simona; Baveye, Philippe C.; Wilson, Clare; Lark, Richard Murray; Otten, Wilfred

    2015-01-01

    There is currently a significant need to improve our understanding of the factors that control a number of critical soil processes by integrating physical, chemical and biological measurements on soils at microscopic scales to help produce 3D maps of the related properties. Because of technological limitations, most chemical and biological measurements can be carried out only on exposed soil surfaces or 2-dimensional cuts through soil samples. Methods need to be developed to produce 3D maps of soil properties based on spatial sequences of 2D maps. In this general context, the objective of the research described here was to develop a method to generate 3D maps of soil chemical properties at the microscale by combining 2D SEM-EDX data with 3D X-ray computed tomography images. A statistical approach using the regression tree method and ordinary kriging applied to the residuals was developed and applied to predict the 3D spatial distribution of carbon, silicon, iron, and oxygen at the microscale. The spatial correlation between the X-ray grayscale intensities and the chemical maps made it possible to use a regression-tree model as an initial step to predict the 3D chemical composition. For chemical elements, e.g., iron, that are sparsely distributed in a soil sample, the regression-tree model provides a good prediction, explaining as much as 90% of the variability in some of the data. However, for chemical elements that are more homogenously distributed, such as carbon, silicon, or oxygen, the additional kriging of the regression tree residuals improved significantly the prediction with an increase in the R2 value from 0.221 to 0.324 for carbon, 0.312 to 0.423 for silicon, and 0.218 to 0.374 for oxygen, respectively. The present research develops for the first time an integrated experimental and theoretical framework, which combines geostatistical methods with imaging techniques to unveil the 3-D chemical structure of soil at very fine scales. The methodology presented

  14. Three-Dimensional Mapping of Soil Chemical Characteristics at Micrometric Scale by Combining 2D SEM-EDX Data and 3D X-Ray CT Images.

    PubMed

    Hapca, Simona; Baveye, Philippe C; Wilson, Clare; Lark, Richard Murray; Otten, Wilfred

    2015-01-01

    There is currently a significant need to improve our understanding of the factors that control a number of critical soil processes by integrating physical, chemical and biological measurements on soils at microscopic scales to help produce 3D maps of the related properties. Because of technological limitations, most chemical and biological measurements can be carried out only on exposed soil surfaces or 2-dimensional cuts through soil samples. Methods need to be developed to produce 3D maps of soil properties based on spatial sequences of 2D maps. In this general context, the objective of the research described here was to develop a method to generate 3D maps of soil chemical properties at the microscale by combining 2D SEM-EDX data with 3D X-ray computed tomography images. A statistical approach using the regression tree method and ordinary kriging applied to the residuals was developed and applied to predict the 3D spatial distribution of carbon, silicon, iron, and oxygen at the microscale. The spatial correlation between the X-ray grayscale intensities and the chemical maps made it possible to use a regression-tree model as an initial step to predict the 3D chemical composition. For chemical elements, e.g., iron, that are sparsely distributed in a soil sample, the regression-tree model provides a good prediction, explaining as much as 90% of the variability in some of the data. However, for chemical elements that are more homogeneously distributed, such as carbon, silicon, or oxygen, the additional kriging of the regression tree residuals significantly improved the prediction, with an increase in the R2 value from 0.221 to 0.324 for carbon, 0.312 to 0.423 for silicon, and 0.218 to 0.374 for oxygen, respectively. The present research develops for the first time an integrated experimental and theoretical framework, which combines geostatistical methods with imaging techniques to unveil the 3-D chemical structure of soil at very fine scales. The methodology presented
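
    The two-step estimation summarized above (a regression tree capturing the CT-grayscale trend, followed by spatial interpolation of its residuals) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name and data layout are hypothetical, and scipy's RBFInterpolator is used as a simple stand-in for the ordinary kriging of residuals described in the abstract.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor
    from scipy.interpolate import RBFInterpolator

    def predict_chemistry_3d(coords_known, ct_known, edx_known, coords_all, ct_all):
        """Predict one element's 3D concentration field from CT grayscale and sparse EDX data."""
        # Step 1: regression tree linking CT grayscale intensity to measured concentration.
        tree = DecisionTreeRegressor(max_depth=6, min_samples_leaf=20)
        tree.fit(ct_known.reshape(-1, 1), edx_known)
        # Step 2: spatial interpolation of the tree residuals (stand-in for ordinary kriging).
        residuals = edx_known - tree.predict(ct_known.reshape(-1, 1))
        interp = RBFInterpolator(coords_known, residuals, neighbors=50, smoothing=1.0)
        # Combine the grayscale trend with the spatially interpolated residuals.
        return tree.predict(ct_all.reshape(-1, 1)) + interp(coords_all)

    # Synthetic usage example (shapes only illustrate the intended data layout).
    rng = np.random.default_rng(0)
    coords_known = rng.uniform(0, 1, (500, 3))   # voxel coordinates of the 2D SEM-EDX cuts
    ct_known = rng.uniform(0, 255, 500)          # CT grayscale at those voxels
    edx_known = 0.1 * ct_known + rng.normal(0, 2, 500)
    coords_all = rng.uniform(0, 1, (2000, 3))    # all voxels to be mapped
    ct_all = rng.uniform(0, 255, 2000)
    field = predict_chemistry_3d(coords_known, ct_known, edx_known, coords_all, ct_all)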

  15. A 3D-printed anatomical pancreas and kidney phantom for optimizing SPECT/CT reconstruction settings in beta cell imaging using (111)In-exendin.

    PubMed

    Woliner-van der Weg, Wietske; Deden, Laura N; Meeuwis, Antoi P W; Koenrades, Maaike; Peeters, Laura H C; Kuipers, Henny; Laanstra, Geert Jan; Gotthardt, Martin; Slump, Cornelis H; Visser, Eric P

    2016-12-01

    Quantitative single photon emission computed tomography (SPECT) is challenging, especially for pancreatic beta cell imaging with (111)In-exendin due to high uptake in the kidneys versus much lower uptake in the nearby pancreas. Therefore, we designed a three-dimensionally (3D) printed phantom representing the pancreas and kidneys to mimic the human situation in beta cell imaging. The phantom was used to assess the effect of different reconstruction settings on the quantification of the pancreas uptake for two different, commercially available software packages. 3D-printed, hollow pancreas and kidney compartments were inserted into the National Electrical Manufacturers Association (NEMA) NU2 image quality phantom casing. These organs and the background compartment were filled with activities simulating relatively high and low pancreatic (111)In-exendin uptake for, respectively, healthy humans and type 1 diabetes patients. Images were reconstructed using Siemens Flash 3D and Hermes Hybrid Recon, with varying numbers of iterations and subsets and corrections. Images were visually assessed on homogeneity and artefacts, and quantitatively by the pancreas-to-kidney activity concentration ratio. Phantom images were similar to clinical images and showed comparable artefacts. All corrections were required to clearly visualize the pancreas. Increased numbers of subsets and iterations improved the quantitative performance but decreased homogeneity both in the pancreas and the background. Based on the phantom analyses, the Hybrid Recon reconstruction with 6 iterations and 16 subsets was found to be most suitable for clinical use. This work strongly contributed to quantification of pancreatic (111)In-exendin uptake. It showed how clinical images of (111)In-exendin can be interpreted and enabled selection of the most appropriate protocol for clinical use.
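
    The figure of merit used above, the pancreas-to-kidney activity concentration ratio, reduces to a simple computation once organ masks are available. The sketch below uses hypothetical array and mask names; in the phantom study the masks would follow from the known 3D-printed geometry or a co-registered CT.

    import numpy as np

    def activity_concentration_ratio(spect, pancreas_mask, kidney_mask, voxel_volume_ml):
        """Pancreas-to-kidney activity concentration ratio from a reconstructed SPECT volume."""
        pancreas_conc = spect[pancreas_mask].sum() / (pancreas_mask.sum() * voxel_volume_ml)
        kidney_conc = spect[kidney_mask].sum() / (kidney_mask.sum() * voxel_volume_ml)
        return pancreas_conc / kidney_conc

    # Synthetic example: much higher simulated uptake in the "kidney" region, as in the phantom.
    vol = np.zeros((32, 32, 32))
    pancreas = np.zeros_like(vol, dtype=bool); pancreas[5:10, 5:10, 5:10] = True
    kidney = np.zeros_like(vol, dtype=bool); kidney[20:28, 20:28, 20:28] = True
    vol[pancreas] = 1.0
    vol[kidney] = 25.0
    print(activity_concentration_ratio(vol, pancreas, kidney, voxel_volume_ml=0.064))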

  16. Comparison of physical quality assurance between Scanora 3D and 3D Accuitomo 80 dental CT scanners.

    PubMed

    Ali, Ahmed S; Fteita, Dareen; Kulmala, Jarmo

    2015-01-01

    The use of cone beam computed tomography (CBCT) in dentistry has proven to be useful in the diagnosis and treatment planning of several oral and maxillofacial diseases. The quality of the resulting image is dictated by many factors related to the patient, unit, and operator. In this work, two dental CBCT units, namely Scanora 3D and 3D Accuitomo 80, were assessed and compared in terms of quantitative effective dose delivered to specific locations in a dosimetry phantom. Resolution and contrast were evaluated only in the 3D Accuitomo 80, using special quality assurance phantoms. Scanora 3D, with a shorter radiation time, showed lower dose values compared to 3D Accuitomo 80 (mean 0.33 mSv, SD±0.16 vs. 0.18 mSv, SD±0.1). Using a paired t-test, no significant difference was found between the two scan sessions of the Accuitomo (p>0.05), while the difference was highly significant for the Scanora (p>0.05). The modulation transfer function value (at 2 lp/mm), in both measurements, was found to be 4.4%. The contrast assessment of 3D Accuitomo 80 in the two measurements showed few differences; for example, the grayscale values were the same (SD=0) while the noise level was slightly different (SD=0 and 0.67, respectively). The radiation dose values of these two CBCT units are significantly lower than those encountered in systemic CT scans. However, the dose seems to be affected more by changing the field of view than by the voltage or amperage. The low doses came at the expense of the image quality produced, which was still acceptable. Although the spatial resolution and contrast were inferior to the medical images produced by systemic CT units, the present results recommend adopting CBCT in maxillofacial imaging because of the low radiation dose and adequate image quality.

  17. Comparison of physical quality assurance between Scanora 3D and 3D Accuitomo 80 dental CT scanners

    PubMed Central

    Ali, Ahmed S.; Fteita, Dareen; Kulmala, Jarmo

    2015-01-01

    Background The use of cone beam computed tomography (CBCT) in dentistry has proven to be useful in the diagnosis and treatment planning of several oral and maxillofacial diseases. The quality of the resulting image is dictated by many factors related to the patient, unit, and operator. Materials and methods In this work, two dental CBCT units, namely Scanora 3D and 3D Accuitomo 80, were assessed and compared in terms of quantitative effective dose delivered to specific locations in a dosimetry phantom. Resolution and contrast were evaluated only in the 3D Accuitomo 80, using special quality assurance phantoms. Results Scanora 3D, with a shorter radiation time, showed lower dose values compared to 3D Accuitomo 80 (mean 0.33 mSv, SD±0.16 vs. 0.18 mSv, SD±0.1). Using a paired t-test, no significant difference was found between the two scan sessions of the Accuitomo (p>0.05), while the difference was highly significant for the Scanora (p>0.05). The modulation transfer function value (at 2 lp/mm), in both measurements, was found to be 4.4%. The contrast assessment of 3D Accuitomo 80 in the two measurements showed few differences; for example, the grayscale values were the same (SD=0) while the noise level was slightly different (SD=0 and 0.67, respectively). Conclusions The radiation dose values of these two CBCT units are significantly lower than those encountered in systemic CT scans. However, the dose seems to be affected more by changing the field of view than by the voltage or amperage. The low doses came at the expense of the image quality produced, which was still acceptable. Although the spatial resolution and contrast were inferior to the medical images produced by systemic CT units, the present results recommend adopting CBCT in maxillofacial imaging because of the low radiation dose and adequate image quality. PMID:26091832
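
    The paired comparison between the two scan sessions mentioned above is a standard paired t-test; a minimal sketch is given below. The dose values are synthetic placeholders, not the study's measurements.

    import numpy as np
    from scipy import stats

    session_1 = np.array([0.30, 0.35, 0.28, 0.40, 0.33, 0.31])  # mSv, hypothetical point doses
    session_2 = np.array([0.31, 0.34, 0.29, 0.41, 0.32, 0.30])  # mSv, hypothetical repeat scan

    t_stat, p_value = stats.ttest_rel(session_1, session_2)
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # p > 0.05 would indicate no significant difference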

  18. 3D ultrasound imaging for prosthesis fabrication and diagnostic imaging

    SciTech Connect

    Morimoto, A.K.; Bow, W.J.; Strong, D.S.

    1995-06-01

    The fabrication of a prosthetic socket for a below-the-knee amputee requires knowledge of the underlying bone structure in order to provide pressure relief for sensitive areas and support for load bearing areas. The goal is to enable the residual limb to bear pressure with greater ease and utility. Conventional methods of prosthesis fabrication are based on limited knowledge about the patient's underlying bone structure. A 3D ultrasound imaging system was developed at Sandia National Laboratories. The imaging system provides information about the location of the bones in the residual limb along with the shape of the skin surface. Computer-aided design (CAD) software can use these data to design prosthetic sockets for amputees. Ultrasound was selected as the imaging modality. A computer model was developed to analyze the effect of the various scanning parameters and to assist in the design of the overall system. The 3D ultrasound imaging system combines off-the-shelf technology for image capturing, custom hardware, and control and image processing software to generate two types of image data -- volumetric and planar. Both volumetric and planar images reveal definition of skin and bone geometry, with planar images providing details on muscle fascial planes, muscle/fat interfaces, and blood vessel definition. The 3D ultrasound imaging system was tested on 9 unilateral below-the-knee amputees. Image data were acquired from both the sound limb and the residual limb. The imaging system was operated in both volumetric and planar formats. An x-ray CT (Computed Tomography) scan was performed on each amputee for comparison. Results of the test indicate beneficial use of ultrasound to generate databases for fabrication of prostheses at a lower cost and with better initial fit as compared to manually fabricated prostheses.

  19. 3D imaging in forensic odontology.

    PubMed

    Evans, Sam; Jones, Carl; Plassmann, Peter

    2010-06-16

    This paper describes the investigation of a new 3D capture method for acquiring and subsequently forensically analysing bite mark injuries on human skin. When documenting bite marks with standard 2D cameras, errors in photographic technique can occur if best practice is not followed. Subsequent forensic analysis of the mark is problematic when a 3D structure is recorded into a 2D space. Although strict guidelines (BAFO) exist, these are time-consuming to follow and, due to their complexity, may produce errors. A 3D image capture and processing system might avoid the problems resulting from the 2D reduction process, simplifying the guidelines and reducing errors. As a proposed solution, a series of experiments is described in this paper to demonstrate that a 3D system has the potential to produce suitable results. The experiments tested the precision and accuracy of the traditional 2D and 3D methods. A 3D image capture device minimises the amount of angular distortion; therefore such a system has the potential to create more robust forensic evidence for use in courts. A first set of experiments tested and demonstrated which method of forensic analysis creates the least amount of intra-operator error. A second set tested and demonstrated which method of image capture creates the least amount of inter-operator error and visual distortion. In a third set, the effects of angular distortion on 2D and 3D methods of image capture were evaluated.

  20. Nonlaser-based 3D surface imaging

    SciTech Connect

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J.

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of the objects of a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images captured with CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
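
    For the stereo-vision branch, depth follows from the disparity between the two CCD views via Z = f·B/d. The sketch below uses OpenCV block matching on a rectified image pair; the file names and calibration values are placeholders, and the real system's matching algorithm may differ.

    import cv2
    import numpy as np

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified left view
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # rectified right view

    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # StereoBM output is fixed-point

    focal_length_px = 800.0  # hypothetical focal length in pixels
    baseline_m = 0.12        # hypothetical camera baseline in metres

    valid = disparity > 0
    depth = np.zeros_like(disparity)
    depth[valid] = focal_length_px * baseline_m / disparity[valid]  # depth map in metres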

  1. 3-D printouts of the tracheobronchial tree generated from CT images as an aid to management in a case of tracheobronchial chondromalacia caused by relapsing polychondritis.

    PubMed

    Tam, Matthew David; Laycock, Stephen David; Jayne, David; Babar, Judith; Noble, Brendon

    2013-08-01

    This report concerns a 67-year-old male patient with known advanced relapsing polychondritis complicated by tracheobronchial chondromalacia, who is increasingly symptomatic; therapeutic options such as tracheostomy and stenting procedures are being considered. The DICOM files from the patient's dynamic chest CT, in its inspiratory and expiratory phases, were used to generate stereolithography (STL) files and hence print 3-D models of the patient's trachea and central airways. The 4 full-sized models allowed better understanding of the extent and location of any stenosis or malacic change and should aid any planned future stenting procedures. The future possibility of using the models as scaffolding to generate a new cartilaginous upper airway using regenerative medical techniques is also discussed.

  2. 3-D printouts of the tracheobronchial tree generated from CT images as an aid to management in a case of tracheobronchial chondromalacia caused by relapsing polychondritis

    PubMed Central

    Tam, Matthew David; Laycock, Stephen David; Jayne, David; Babar, Judith; Noble, Brendon

    2013-01-01

    This report concerns a 67-year-old male patient with known advanced relapsing polychondritis complicated by tracheobronchial chondromalacia, who is increasingly symptomatic; therapeutic options such as tracheostomy and stenting procedures are being considered. The DICOM files from the patient’s dynamic chest CT, in its inspiratory and expiratory phases, were used to generate stereolithography (STL) files and hence print 3-D models of the patient’s trachea and central airways. The 4 full-sized models allowed better understanding of the extent and location of any stenosis or malacic change and should aid any planned future stenting procedures. The future possibility of using the models as scaffolding to generate a new cartilaginous upper airway using regenerative medical techniques is also discussed. PMID:24421951
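
    The CT-to-printout pipeline described in this report can be sketched with open-source tools: segment the airway from the CT volume, extract a triangle surface with marching cubes, and export it as STL for the printer. The library choices (SimpleITK, scikit-image, trimesh), the file names and the simple air threshold are assumptions for illustration, not the authors' exact workflow.

    import SimpleITK as sitk
    import numpy as np
    from skimage import measure
    import trimesh

    image = sitk.ReadImage("chest_ct.nii.gz")        # a DICOM series can be read similarly
    volume = sitk.GetArrayFromImage(image)           # array ordered (z, y, x)
    spacing = image.GetSpacing()[::-1]               # reorder spacing to match (z, y, x)

    # Crude air threshold as a stand-in for a proper airway segmentation.
    airway = volume < -900                           # Hounsfield units, hypothetical cut-off

    verts, faces, normals, _ = measure.marching_cubes(airway.astype(np.uint8), level=0.5,
                                                      spacing=spacing)
    mesh = trimesh.Trimesh(vertices=verts, faces=faces)
    mesh.export("trachea_model.stl")                 # STL file ready for 3-D printing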

  3. 3D statistical shape models incorporating 3D random forest regression voting for robust CT liver segmentation

    NASA Astrophysics Data System (ADS)

    Norajitra, Tobias; Meinzer, Hans-Peter; Maier-Hein, Klaus H.

    2015-03-01

    During image segmentation, 3D Statistical Shape Models (SSM) usually conduct a limited search for target landmarks within one-dimensional search profiles perpendicular to the model surface. In addition, landmark appearance is modeled only locally based on linear profiles and weak learners, altogether leading to segmentation errors from landmark ambiguities and limited search coverage. We present a new method for 3D SSM segmentation based on 3D Random Forest Regression Voting. For each surface landmark, a Random Regression Forest is trained that learns a 3D spatial displacement function between the corresponding reference landmark and a set of surrounding sample points, based on an infinite set of non-local randomized 3D Haar-like features. Landmark search is then conducted omni-directionally within 3D search spaces, where voxelwise forest predictions on landmark position contribute to a common voting map which reflects the overall position estimate. Segmentation experiments were conducted on a set of 45 CT volumes of the human liver, of which 40 images were randomly chosen for training and 5 for testing. Without parameter optimization, using a simple candidate selection and a single resolution approach, excellent results were achieved, while faster convergence and better concavity segmentation were observed, altogether underlining the potential of our approach in terms of increased robustness from distinct landmark detection and from better search coverage.
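
    The regression-voting idea can be illustrated with a heavily simplified sketch: for one landmark, a forest regresses the 3D offset from a sample point to the landmark, each sample point casts a vote at its predicted position, and the vote map's maximum is the position estimate. The feature extraction is reduced to a placeholder and all names and parameters are illustrative; this is not the authors' implementation.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def features_at(volume, points):
        """Placeholder for randomized 3D Haar-like features sampled around each point."""
        pts = points.astype(int)
        return volume[pts[:, 0], pts[:, 1], pts[:, 2]].reshape(-1, 1)

    def train_landmark_forest(volume, sample_points, landmark):
        offsets = landmark[None, :] - sample_points              # displacements to the landmark
        forest = RandomForestRegressor(n_estimators=50)
        forest.fit(features_at(volume, sample_points), offsets)  # multi-output regression
        return forest

    def vote_for_landmark(volume, forest, sample_points):
        votes = np.zeros(volume.shape)
        predicted = sample_points + forest.predict(features_at(volume, sample_points))
        for p in np.round(predicted).astype(int):
            if np.all(p >= 0) and np.all(p < np.array(volume.shape)):
                votes[tuple(p)] += 1                              # accumulate voxel-wise votes
        return np.unravel_index(np.argmax(votes), votes.shape)    # landmark position estimate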

  4. Development of 3D CAD/FEM Analysis System for Natural Teeth and Jaw Bone Constructed from X-Ray CT Images

    PubMed Central

    Hasegawa, Aki; Shinya, Akikazu; Nakasone, Yuji; Lassila, Lippo V. J.; Vallittu, Pekka K.; Shinya, Akiyoshi

    2010-01-01

    A three-dimensional finite element model of the lower first premolar, with the three layers of enamel, dentin, and pulp, and the mandible, with the two layers of cortical and cancellous bones, was directly constructed from noninvasively acquired CT images. This model was used to develop a system to analyze the stresses on the teeth and supporting bone structure during occlusion based on the finite element method and to examine the possibility of mechanical simulation. PMID:20706535

  5. A dataset of fishes in and around Inle Lake, an ancient lake of Myanmar, with DNA barcoding, photo images and CT/3D models

    PubMed Central

    Kano, Yuichi; Musikasinthorn, Prachya; Iwata, Akihisa; Tun, Sein; Yun, LKC; Win, Seint Seint; Matsui, Shoko; Tabata, Ryoichi; Yamasaki, Takeshi

    2016-01-01

    Abstract Background Inle (Inlay) Lake, an ancient lake of Southeast Asia, is located at the eastern part of Myanmar, surrounded by the Shan Mountains. Detailed information on fish fauna in and around the lake has long been unknown, although its outstanding endemism was reported a century ago. New information Based on the fish specimens collected from markets, rivers, swamps, ponds and ditches around Inle Lake as well as from the lake itself from 2014 to 2016, we recorded a total of 948 occurrence data (2120 individuals), belonging to 10 orders, 19 families, 39 genera and 49 species. Amongst them, 13 species of 12 genera are endemic or nearly endemic to the lake system and 17 species of 16 genera are suggested as non-native. The data are all accessible from the document “A dataset of Inle Lake fish fauna and its distribution (http://ipt.pensoft.net/resource.do?r=inle_fish_2014-16)”, as well as DNA barcoding data (mitochondrial COI) for all species being available from the DDBJ/EMBL/GenBank (Accession numbers: LC189568–LC190411). Live photographs of almost all the individuals and CT/3D model data of several specimens are also available at the graphical fish biodiversity database (http://ffish.asia/INLE2016; http://ffish.asia/INLE2016-3D). The information can benefit the clarification, public concern and conservation of the fish biodiversity in the region. PMID:27932926

  6. Automated 3D vascular segmentation in CT hepatic venography

    NASA Astrophysics Data System (ADS)

    Fetita, Catalin; Lucidarme, Olivier; Preteux, Francoise

    2005-08-01

    In the framework of preoperative evaluation of the hepatic venous anatomy in living-donor liver transplantation or oncologic resections, this paper proposes an automated approach for the 3D segmentation of the liver vascular structure from 3D CT hepatic venography data. The developed segmentation approach takes into account the specificities of the anatomical structures in terms of spatial location, connectivity and morphometric properties. It implements basic and advanced morphological operators (closing, geodesic dilation, gray-level reconstruction, sup-constrained connection cost) in mono- and multi-resolution filtering schemes in order to achieve an automated 3D reconstruction of the opacified hepatic vessels. A thorough investigation of the venous anatomy, including morphometric parameter estimation, is then possible via computer-vision 3D rendering, interaction and navigation capabilities.
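
    The kind of gray-level morphological filtering listed above can be sketched with scikit-image: a grayscale closing followed by gray-level reconstruction (repeated geodesic dilation of a lowered marker under the image), whose difference keeps bright, connected, opacified structures. This is a generic illustration of those operators, not the paper's filtering scheme; the parameters are arbitrary.

    from skimage import morphology

    def enhance_opacified_vessels(ct_volume, closing_radius=2, marker_offset=200.0):
        ct = ct_volume.astype(float)
        closed = morphology.closing(ct, morphology.ball(closing_radius))
        marker = closed - marker_offset                    # lowered copy used as the marker
        reconstructed = morphology.reconstruction(marker, closed, method="dilation")
        # Bright structures taller than the offset stand out in the difference (h-dome response).
        return closed - reconstructed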

  7. Miniaturized 3D microscope imaging system

    NASA Astrophysics Data System (ADS)

    Lan, Yung-Sung; Chang, Chir-Weei; Sung, Hsin-Yueh; Wang, Yen-Chang; Chang, Cheng-Yi

    2015-05-01

    We designed and assembled a portable 3-D miniature microscopic imaging system with a size of 35 × 35 × 105 mm³. By integrating a microlens array (MLA) into the optical train of a handheld microscope, the image of a biological specimen can be captured in a single shot for ease of use. With the light-field raw data and reconstruction program, the focal plane can be changed digitally and the 3-D image can be reconstructed after the image is taken. To localize an object in a 3-D volume, an automated data analysis algorithm that precisely determines depth position is needed. The ability to create focal stacks from a single image allows moving specimens to be recorded. Applying the light field microscope algorithm to these focal stacks, a set of cross sections is produced, which can be visualized using 3-D rendering. Furthermore, we have developed a series of design rules in order to enhance pixel usage efficiency and reduce the crosstalk between microlenses to obtain good image quality. In this paper, we demonstrate a handheld light field microscope (HLFM) that distinguishes two different-color fluorescence particles separated by a cover glass within a 600 µm range, and show its focal stacks and 3-D positions.

  8. [Spiral computerized tomography with tridimensional reconstruction (spiral 3D CT) in the study of maxillofacial pathology].

    PubMed

    Mevio, E; Calabrò, P; Preda, L; Di Maggio, E M; Caprotti, A

    1995-12-01

    Three-dimensional computer reconstruction of CT scans provides head and neck surgeons with an exciting interactive display of clinical anatomy. The 3D CT reconstruction of complex maxillofacial anatomic parts permits a more specific preoperative analysis and surgical planning. Its delineation of disease extension aids the surgeon in developing his own mental three-dimensional image of the regional morphology. Three-dimensional CT permits a clearer perception of the extent of fracture comminution and the resulting displacement of fragments. In the case of maxillofacial tumors, 3D images provide a very clear picture of the extent of erosion involving the adjacent critical organs. Three-dimensional imaging with first-generation 3D scanners did have some limitations, such as long reconstruction times and inadequate resolution. Subsequent generations, in particular spiral 3D CT, have eliminated these drawbacks. Furthermore, costs are comparable with those of other computer reconstruction technologies that might provide similar images. Representative cases demonstrating the use of 3D CT in maxillofacial surgery and its benefits in planning surgery are discussed.

  9. 3D integral imaging with optical processing

    NASA Astrophysics Data System (ADS)

    Martínez-Corral, Manuel; Martínez-Cuenca, Raúl; Saavedra, Genaro; Javidi, Bahram

    2008-04-01

    Integral imaging (InI) systems are imaging devices that provide auto-stereoscopic images of 3D intensity objects. Since the birth of this technology, InI systems have satisfactorily overcome many of their initial drawbacks. Basically, two kinds of procedures have been used: digital and optical. The "3D Imaging and Display Group" at the University of Valencia, with the essential collaboration of Prof. Javidi, has centered its efforts on 3D InI with optical processing. Among other achievements, our Group has proposed annular amplitude modulation for enlargement of the depth of field, dynamic focusing for reduction of the facet-braiding effect, and the TRES and MATRES devices to enlarge the viewing angle.

  10. Imaging detection of new HCCs in cirrhotic patients treated with different techniques: Comparison of conventional US, spiral CT, and 3-dimensional contrast-enhanced US with the Navigator technique (Nav 3D CEUS)☆

    PubMed Central

    Giangregorio, F.; Comparato, G.; Marinone, M.G.; Di Stasi, M.; Sbolli, G.; Aragona, G.; Tansini, P.; Fornari, F.

    2009-01-01

    Introduction The commercially available Navigator system© (Esaote, Italy) allows easy 3D reconstruction of a single 2D acquisition of contrast-enhanced US (CEUS) imaging of the whole liver (with volumetric correction provided by the electromagnetic device of the Navigator©). The aim of our study was to compare the efficacy of this panoramic technique (Nav 3D CEUS) with that of conventional US and spiral CT in the detection of new hepatic lesions in patients treated for hepatocellular carcinoma (HCC). Materials and methods From November 2006 to May 2007, we performed conventional US, Nav 3D CEUS, and spiral CT on 72 cirrhotic patients previously treated for 1 or more HCCs (M/F: 38/34; all HCV-positive; Child: A/B 58/14) (1 examination: 48 patients; 2 examinations: 20 patients; 3 examinations: 4 patients). Nav 3D CEUS was performed with SonoVue© (Bracco, Milan, Italy) as a contrast agent and Technos MPX© scanner (Esaote, Genoa, Italy). Sensitivity, specificity, diagnostic accuracy, and positive and negative predictive values (PPV and NPV, respectively) were evaluated. Differences between the techniques were assessed with the chi-square test (SPSS release-15). Results Definitive diagnoses (based on spiral CT and additional follow-up) were: 6 cases of local recurrence (LocRecs) in 4 patients, 49 new nodules >2 cm from a treated nodule (NewNods) in 34 patients, and 10 cases of multinodular recurrence consisting of 4 or more nodules (NewMulti). The remaining 24 patients (22 treated for 1–3 nodules, 2 treated for >3 nodules) remained recurrence-free. Conventional US correctly detected 29/49 NewNods, 9/10 NewMultis, and 3/6 LocRecs (sensitivity: 59.2%; specificity: 100%; diagnostic accuracy: 73.6%; PPV: 100%; NPV: 70.1%). Spiral CT detected 42/49 NewNods plus 1 that was a false positive, 9/10 NewMultis, and all 6 LocRecs (sensitivity: 85.7%; specificity: 95.7%; diagnostic accuracy: 90.9%; PPV: 97.7%; NPV: 75.9%). 3D NAV results were: 46N (+9 multinodularN and 6 LR

  11. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received
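
    The transform stage of such a compressor can be illustrated with PyWavelets: a multi-level 3D discrete wavelet decomposition of a hyperspectral cube, whose coefficients a coder would then quantize and entropy-code. This sketch covers only the wavelet step; ICER-3D's context modeling, entropy coding and error-containment partitioning are not reproduced, and the crude thresholding below is purely illustrative.

    import numpy as np
    import pywt

    cube = np.random.rand(32, 64, 64)          # (bands, rows, cols), synthetic stand-in data

    # 3D multi-level discrete wavelet transform of the cube.
    coeffs = pywt.wavedecn(cube, wavelet="db2", level=3)

    # Crude "compression": zero out small detail coefficients, then reconstruct.
    arr, slices = pywt.coeffs_to_array(coeffs)
    arr[np.abs(arr) < 0.1 * np.abs(arr).max()] = 0.0
    reconstructed = pywt.waverecn(pywt.array_to_coeffs(arr, slices, output_format="wavedecn"),
                                  wavelet="db2")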

  12. Acquisition and applications of 3D images

    NASA Astrophysics Data System (ADS)

    Sterian, Paul; Mocanu, Elena

    2007-08-01

    The moiré fringes method and its analysis, up to medical and entertainment applications, are discussed in this paper. We describe the procedure of capturing 3D images with an Inspeck camera, a real-time 3D shape acquisition system based on structured-light techniques. The method is a high-resolution one. After processing the images on a computer, we can use the data to create fashionable laser-engraved objects with a Q-switched Nd:YAG laser. In the medical field, we mention plastic surgery and the replacement of X-ray imaging, especially in pediatric use.

  13. [Volumetric CT scanning: 2D and 3D reconstruction].

    PubMed

    Ferretti, G-R; Jankowski, A

    2010-12-01

    This review aims to present the 2D and 3D reconstructions derived from high-resolution volume CT acquisitions and to illustrate their thoracic applications, as well as the advantages and limitations of these techniques. We also present new applications for computer-assisted detection (CAD) and tools for the quantification of pulmonary lesions. Copyright © 2010 SPLF. Published by Elsevier Masson SAS. All rights reserved.

  14. Active segmentation of 3D axonal images.

    PubMed

    Muralidhar, Gautam S; Gopinath, Ajay; Bovik, Alan C; Ben-Yakar, Adela

    2012-01-01

    We present an active contour framework for segmenting neuronal axons in 3D confocal microscopy data. Our work is motivated by the need to conduct high-throughput experiments involving microfluidic devices and femtosecond lasers to study the genetic mechanisms behind nerve regeneration and repair. While most applications of active contours have focused on segmenting closed regions in 2D medical and natural images, there have not been many applications focused on segmenting open-ended curvilinear structures in 2D or higher dimensions. The active contour framework we present here ties together a well-known 2D active contour model [5] with the physics of projection imaging geometry to yield a segmented axon in 3D. Qualitative results illustrate the promise of our approach for segmenting neuronal axons in 3D confocal microscopy data.
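
    A minimal 2D sketch of the active-contour step such a framework builds on is given below, using scikit-image's snake implementation with an open, end-anchored initialization along an axon-like ridge. The parameter values are illustrative, and the projection-geometry lift to 3D described in the abstract is not shown.

    import numpy as np
    from skimage import filters, segmentation

    def fit_axon_snake(image, start, end, n_points=100):
        """Fit an open active contour between two user-selected end points."""
        init = np.linspace(start, end, n_points)             # straight-line initialization (row, col)
        smoothed = filters.gaussian(image, sigma=2, preserve_range=True)
        snake = segmentation.active_contour(smoothed, init,
                                            alpha=0.01, beta=1.0, gamma=0.01,
                                            boundary_condition="fixed")  # keep both ends anchored
        return snake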

  15. Simulation and experimental studies of three-dimensional (3D) image reconstruction from insufficient sampling data based on compressed-sensing theory for potential applications to dental cone-beam CT

    NASA Astrophysics Data System (ADS)

    Je, U. K.; Lee, M. S.; Cho, H. S.; Hong, D. K.; Park, Y. O.; Park, C. K.; Cho, H. M.; Choi, S. I.; Woo, T. H.

    2015-06-01

    In practical applications of three-dimensional (3D) tomographic imaging, there are often challenges for image reconstruction from insufficient sampling data. In computed tomography (CT), for example, image reconstruction from sparse views and/or limited-angle (<360°) views would enable fast scanning with reduced imaging doses to the patient. In this study, we investigated and implemented a reconstruction algorithm based on the compressed-sensing (CS) theory, which exploits the sparseness of the gradient image with substantially high accuracy, for potential applications to low-dose, highly accurate dental cone-beam CT (CBCT). We performed systematic simulation work to investigate the image characteristics and also performed experimental work by applying the algorithm to a commercially available dental CBCT system to demonstrate its effectiveness for image reconstruction in insufficient sampling problems. We successfully reconstructed CBCT images of superior accuracy from insufficient sampling data and evaluated the reconstruction quality quantitatively. Both the simulation and experimental demonstrations of the CS-based reconstruction from insufficient data indicate that the CS-based algorithm can be applied directly to current dental CBCT systems for reducing the imaging doses and further improving the image quality.
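
    In the same spirit, a compressed-sensing-flavoured reconstruction from sparse views can be sketched in 2D by alternating a back-projection-based data update with total-variation denoising, which promotes a sparse gradient image. The sketch uses scikit-image's Radon transform as a stand-in projector and illustrative parameters; it is not the authors' algorithm and makes no claim about convergence behaviour.

    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon, rescale
    from skimage.restoration import denoise_tv_chambolle

    phantom = rescale(shepp_logan_phantom(), 0.5)           # small 2D test object
    angles = np.linspace(0.0, 180.0, 30, endpoint=False)    # sparse-view acquisition
    sinogram = radon(phantom, theta=angles)

    recon = np.zeros_like(phantom)
    for _ in range(20):
        # Data-fidelity step: back-project the residual between measured and simulated data.
        residual = sinogram - radon(recon, theta=angles)
        recon += 0.1 * iradon(residual, theta=angles, filter_name=None)
        # Sparsity step: total-variation denoising of the current estimate.
        recon = denoise_tv_chambolle(recon, weight=0.02)
        recon = np.clip(recon, 0, None)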

  16. 3-D imaging of the CNS.

    PubMed

    Runge, V M; Gelblum, D Y; Wood, M L

    1990-01-01

    3-D gradient echo techniques, and in particular FLASH, represent a significant advance in MR imaging strategy, allowing thin-section, high-resolution imaging through a large region of interest. Anatomical areas of application include the brain, spine, and extremities, although the majority of work to date has been performed in the brain. Superior T1 contrast, and thus sensitivity to the presence of Gd-DTPA, is achieved with 3-D FLASH when compared to the 2-D spin echo technique. There is marked arterial and venous enhancement following Gd-DTPA administration on 3-D FLASH, a less common finding with 2-D spin echo. Enhancement of the falx and tentorium is also more prominent. From a single data acquisition, requiring less than 11 min of scan time, high-resolution reformatted sagittal, coronal, and axial images can be obtained, in addition to sections in any arbitrary plane. Tissue segmentation techniques can be applied and lesions displayed in three dimensions. These results may lead to the replacement of 2-D spin echo with 3-D FLASH for high-resolution T1-weighted MR imaging of the CNS, particularly in the study of mass lesions and structural anomalies. The application of similar T2-weighted gradient echo techniques may follow; however, the signal-to-noise ratio which can be achieved remains a potential limitation.

  17. 3-D Image of Vesta Eastern Hemisphere

    NASA Image and Video Library

    2012-01-23

    This anaglyph shows the topography of Vesta's eastern hemisphere; equatorial troughs are visible around asteroid Vesta's equator, and north of these troughs there are a number of highly degraded, old, large craters. You need 3-D glasses to view this image.

  18. A minimally interactive method to segment enlarged lymph nodes in 3D thoracic CT images using a rotatable spiral-scanning technique

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Moltz, Jan H.; Bornemann, Lars; Hahn, Horst K.

    2012-03-01

    Precise size measurement of enlarged lymph nodes is a significant indicator for diagnosing malignancy and for follow-up and therapy monitoring of cancer diseases. The presence of diverse sizes and shapes, inhomogeneous enhancement, and adjacency to neighboring structures with similar intensities makes the segmentation task challenging. We present a semi-automatic approach requiring minimal user interaction to segment enlarged lymph nodes quickly and robustly. First, a stroke approximating the largest diameter of a specific lymph node is drawn manually, from which a volume of interest (VOI) is determined. Second, based on statistical analysis of the intensities in the dilated stroke area, a region growing procedure is applied within the VOI to create an initial segmentation of the target lymph node. Third, a rotatable spiral-scanning technique is proposed to resample the 3D boundary surface of the lymph node into a 2D boundary contour in a transformed polar image. The boundary contour is found by seeking the optimal path in the 2D polar image with a dynamic programming algorithm and is eventually transformed back to 3D. Ultimately, the boundary surface of the lymph node is determined using an interpolation scheme followed by post-processing steps. To test the robustness and efficiency of our method, a quantitative evaluation was conducted on a dataset of 315 lymph nodes acquired from 79 patients with lymphoma and melanoma. Compared to the reference segmentations, an average Dice coefficient of 0.88 with a standard deviation of 0.08, and an average absolute surface distance of 0.54 mm with a standard deviation of 0.48 mm, were achieved.
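
    The dynamic-programming step mentioned above amounts to finding a minimum-cost path through the polar (angle × radius) image, one radius per angle, with the radius allowed to change by at most one pixel between neighbouring angles. A minimal sketch is given below; the cost image itself (e.g. a gradient-based boundary measure) is assumed to be given.

    import numpy as np

    def optimal_boundary_path(cost):
        """cost: 2D array (n_angles, n_radii); returns one radius index per angle."""
        n_angles, n_radii = cost.shape
        acc = np.full(cost.shape, np.inf)
        back = np.zeros(cost.shape, dtype=int)
        acc[0] = cost[0]
        for a in range(1, n_angles):
            for r in range(n_radii):
                lo, hi = max(0, r - 1), min(n_radii, r + 2)    # smoothness constraint
                prev = int(np.argmin(acc[a - 1, lo:hi])) + lo
                acc[a, r] = cost[a, r] + acc[a - 1, prev]
                back[a, r] = prev
        path = np.zeros(n_angles, dtype=int)
        path[-1] = int(np.argmin(acc[-1]))
        for a in range(n_angles - 2, -1, -1):                  # backtrack the optimal path
            path[a] = back[a + 1, path[a + 1]]
        return path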

  19. Walker Ranch 3D seismic images

    DOE Data Explorer

    Robert J. Mellors

    2016-03-01

    Amplitude images (both vertical and depth slices) extracted from 3D seismic reflection survey over area of Walker Ranch area (adjacent to Raft River). Crossline spacing of 660 feet and inline of 165 feet using a Vibroseis source. Processing included depth migration. Micro-earthquake hypocenters on images. Stratigraphic information and nearby well tracks added to images. Images are embedded in a Microsoft Word document with additional information. Exact location and depth restricted for proprietary reasons. Data collection and processing funded by Agua Caliente. Original data remains property of Agua Caliente.

  20. Getting in touch--3D printing in forensic imaging.

    PubMed

    Ebert, Lars Chr; Thali, Michael J; Ross, Steffen

    2011-09-10

    With the increasing use of medical imaging in forensics, as well as the technological advances in rapid prototyping, we suggest combining these techniques to generate displays of forensic findings. We used computed tomography (CT), CT angiography, magnetic resonance imaging (MRI) and surface scanning with photogrammetry in conjunction with segmentation techniques to generate 3D polygon meshes. Based on these data sets, a 3D printer created colored models of the anatomical structures. Using this technique, we could create models of bone fractures, vessels, cardiac infarctions, ruptured organs as well as bitemark wounds. The final models are anatomically accurate, fully colored representations of bones, vessels and soft tissue, and they demonstrate radiologically visible pathologies. The models are more easily understood by laypersons than volume rendering or 2D reconstructions. Therefore, they are suitable for presentations in courtrooms and for educational purposes. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  1. Computer-assisted quantification of the skull deformity for craniosynostosis from 3D head CT images using morphological descriptor and hierarchical classification

    NASA Astrophysics Data System (ADS)

    Lee, Min Jin; Hong, Helen; Shim, Kyu Won; Kim, Yong Oock

    2017-03-01

    This paper proposes morphological descriptors representing the degree of skull deformity in craniosynostosis in head CT images and a hierarchical classifier model distinguishing among normal skulls and different types of craniosynostosis. First, to compare a deformity surface model with a mean normal surface model, mean normal surface models are generated for each age range, and the mean normal surface model is deformed to the deformity surface model via multi-level, three-stage registration. Second, four shape features, including local distance and area ratio indices, are extracted for each of the five cranial bones. Finally, a hierarchical SVM classifier is proposed to distinguish between normal and deformed skulls. As a result, the proposed method showed improved classification results compared to the traditional cranial index. Our method can be used for the early diagnosis, surgical planning and postsurgical assessment of craniosynostosis as well as quantitative analysis of skull deformity.
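
    The two-stage classification can be sketched as follows: a first SVM separates normal from deformed skulls, and a second SVM assigns a craniosynostosis type only to the cases labelled as deformed. Feature vectors (e.g. the per-bone distance and area-ratio indices) and labels are assumed to be given; the class structure and kernels are illustrative, not the paper's exact configuration.

    import numpy as np
    from sklearn.svm import SVC

    class HierarchicalSkullClassifier:
        def __init__(self):
            self.stage1 = SVC(kernel="rbf")   # normal (0) vs deformity (1)
            self.stage2 = SVC(kernel="rbf")   # craniosynostosis subtype, deformed cases only

        def fit(self, features, is_deformed, subtype):
            self.stage1.fit(features, is_deformed)
            deformed = is_deformed == 1
            self.stage2.fit(features[deformed], subtype[deformed])
            return self

        def predict(self, features):
            pred = np.full(len(features), -1)                 # -1 encodes "normal"
            deformed = self.stage1.predict(features) == 1
            if deformed.any():
                pred[deformed] = self.stage2.predict(features[deformed])
            return pred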

  2. Development of CT and 3D-CT Using Flat Panel Detector Based Real-Time Digital Radiography System

    SciTech Connect

    Ravindran, V. R.; Sreelakshmi, C.; Vibin

    2008-09-26

    The application of Digital Radiography in the Nondestructive Evaluation (NDE) of space vehicle components is a recent development in India. A Real-time DR system based on an amorphous silicon Flat Panel Detector was developed for the NDE of solid rocket motors at the Rocket Propellant Plant of VSSC a few years ago. The technique has been successfully established for the nondestructive evaluation of solid rocket motors. The DR images recorded for a few solid rocket specimens are presented in the paper. The Real-time DR system is capable of generating sufficient digital X-ray image data, with object rotation, for CT image reconstruction. In this paper the indigenous development of CT imaging based on the Real-time DR system for solid rocket motors is presented. Studies are also carried out to generate a 3D-CT image from a set of adjacent CT images of the rocket motor. The capability of revealing the spatial location and characterisation of defects is demonstrated by the CT and 3D-CT images generated.

  3. Development of CT and 3D-CT Using Flat Panel Detector Based Real-Time Digital Radiography System

    NASA Astrophysics Data System (ADS)

    Ravindran, V. R.; Sreelakshmi, C.; Vibin, Vibin

    2008-09-01

    The application of Digital Radiography in the Nondestructive Evaluation (NDE) of space vehicle components is a recent development in India. A Real-time DR system based on an amorphous silicon Flat Panel Detector was developed for the NDE of solid rocket motors at the Rocket Propellant Plant of VSSC a few years ago. The technique has been successfully established for the nondestructive evaluation of solid rocket motors. The DR images recorded for a few solid rocket specimens are presented in the paper. The Real-time DR system is capable of generating sufficient digital X-ray image data, with object rotation, for CT image reconstruction. In this paper the indigenous development of CT imaging based on the Real-time DR system for solid rocket motors is presented. Studies are also carried out to generate a 3D-CT image from a set of adjacent CT images of the rocket motor. The capability of revealing the spatial location and characterisation of defects is demonstrated by the CT and 3D-CT images generated.

  4. Backhoe 3D "gold standard" image

    NASA Astrophysics Data System (ADS)

    Gorham, LeRoy; Naidu, Kiranmai D.; Majumder, Uttam; Minardi, Michael A.

    2005-05-01

    ViSUAl-D (VIsual Sar Using ALl Dimensions), a 2004 DARPA/IXO seedling effort, is developing a capability for reliable, high-confidence ID from standoff ranges. Recent conflicts have demonstrated that the warfighter would greatly benefit from the ability to ID targets beyond visual and electro-optical ranges [1]. Forming optical-quality SAR images while exploiting full polarization, wide angles, and large bandwidth would be key evidence that such a capability is achievable. Using data generated by the Xpatch EM scattering code, ViSUAl-D investigates all degrees of freedom available to the radar designer, including 6 GHz bandwidth, full polarization and angle sampling over 2π steradians (upper hemisphere), in order to produce a "literal" image or representation of the target. This effort includes the generation of a "Gold Standard" image that can be produced at X-band utilizing all available target data. This "Gold Standard" image of the backhoe will serve as a test bed for future, more relevant military targets and their image development. The seedling team produced a public-release data set, which was released at the 2004 SPIE conference, as well as a 3D "Gold Standard" backhoe image using a 3D image formation algorithm. This paper describes the full backhoe data set, the image formation algorithm, the visualization process and the resulting image.

  5. Tilted planes in 3D image analysis

    NASA Astrophysics Data System (ADS)

    Pargas, Roy P.; Staples, Nancy J.; Malloy, Brian F.; Cantrell, Ken; Chhatriwala, Murtuza

    1998-03-01

    Reliable 3D whole-body scanners which output digitized 3D images of a complete human body are now commercially available. This paper describes a software package, called 3DM, being developed by researchers at Clemson University, which manipulates and extracts measurements from such images. The focus of this paper is on tilted planes, a 3DM tool which allows a user to define a plane through a scanned image, tilt it in any direction, and effectively define three disjoint regions on the image: the points on the plane and the points on either side of the plane. With tilted planes, the user can accurately take measurements required in applications such as apparel manufacturing. The user can manually segment the body rather precisely. Tilted planes assist the user in analyzing the form of the body and classifying the body in terms of body shape. Finally, tilted planes allow the user to eliminate extraneous and unwanted points often generated by a 3D scanner. This paper describes the user interface for tilted planes, the equations defining the plane as the user moves it through the scanned image, an overview of the algorithms, and the interaction of the tilted plane feature with other tools in 3DM.
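
    The tilted-plane partition itself reduces to a signed-distance test: given the plane's point and normal, each scanned point lies on the plane (within a tolerance) or in one of the two half-spaces. A minimal sketch, with illustrative names and tolerance, is shown below.

    import numpy as np

    def partition_by_plane(points, plane_point, plane_normal, tolerance=1e-3):
        n = plane_normal / np.linalg.norm(plane_normal)
        signed_dist = (points - plane_point) @ n       # signed distance of each point to the plane
        on_plane = np.abs(signed_dist) <= tolerance
        above = signed_dist > tolerance
        below = signed_dist < -tolerance
        return on_plane, above, below

    # Tilting the plane is just a matter of changing the normal vector.
    pts = np.random.rand(1000, 3)
    on_plane, above, below = partition_by_plane(pts, np.array([0.5, 0.5, 0.5]),
                                                np.array([0.2, 0.0, 1.0]))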

  6. Computational modeling of flow-induced shear stresses within 3D salt-leached porous scaffolds imaged via micro-CT.

    PubMed

    Voronov, Roman; Vangordon, Samuel; Sikavitsas, Vassilios I; Papavassiliou, Dimitrios V

    2010-05-07

    Flow-induced shear stresses have been found to be a stimulatory factor for pre-osteoblastic cells seeded in 3D porous scaffolds and cultured under continuous flow perfusion. However, due to the complex internal structure of porous scaffolds, analytical estimation of the local shear forces is impractical. The primary goal of this work is to investigate the shear stress distributions within poly(L-lactic acid) scaffolds via computation. The scaffolds used in this study were prepared via salt leaching with various geometric characteristics (80-95% porosity and 215-402.5 µm average pore size). High-resolution micro-computed tomography is used to obtain their 3D structure. Flow of osteogenic media through the scaffolds is modeled via the lattice Boltzmann method. It is found that the surface stress distributions within the scaffolds are characterized by long tails to the right (a positive skewness). Their shape is not strongly dependent on the scaffold manufacturing parameters, but the magnitudes of the stresses are. Correlations are prepared for the estimation of the average surface shear stress experienced by the cells within the scaffolds and of the probability density function of the surface stresses. Though the manufacturing technique does not appear to affect the shape of the shear stress distributions, the presence of manufacturing defects is found to be significant: defects create areas of high flow and high stress along their periphery. The results of this study are applicable to other polymer systems provided that they are manufactured by a similar salt leaching technique, while the imaging/modeling approach is applicable to all scaffolds relevant to tissue engineering. Copyright 2010 Elsevier Ltd. All rights reserved.
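
    The statistical summary described above (a right-skewed surface stress distribution and its probability density function) is straightforward to compute once the wall shear stress samples are available from the flow solution. The sketch below uses a synthetic placeholder array in place of the lattice Boltzmann output.

    import numpy as np
    from scipy import stats

    wall_shear = np.random.lognormal(mean=-3.0, sigma=0.8, size=100_000)  # Pa, placeholder samples

    mean_stress = wall_shear.mean()
    skewness = stats.skew(wall_shear)                  # positive skew: long tail to the right
    pdf, bin_edges = np.histogram(wall_shear, bins=100, density=True)
    print(f"mean = {mean_stress:.4f} Pa, skewness = {skewness:.2f}")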

  7. Reliability of the Planned Pedicle Screw Trajectory versus the Actual Pedicle Screw Trajectory using Intra-operative 3D CT and Image Guidance

    PubMed Central

    Ledonio, Charles G.; Hunt, Matthew A.; Siddiq, Farhan; Polly, David W.

    2016-01-01

    Background Technological advances, including navigation, have been made to improve the safety and accuracy of pedicle screw fixation. We evaluated the accuracy of virtual screw placement (Stealth projection) compared to actual screw placement (intra-operative O-arm) and examined for differences based on the distance from the reference frame. Methods A retrospective evaluation of prospectively collected data was conducted from January 2013 to September 2013. We evaluated thoracic and lumbosacral pedicle screws placed using the intraoperative O-arm and Stealth navigation by obtaining virtual screw projections and intraoperative O-arm images after screw placement. The screw trajectory angle to the midsagittal line and superior endplate was compared in the axial and sagittal views, respectively. Percent error and paired t-test statistics were then calculated. Results Thirty-one patients with 240 pedicle screws were analyzed. The mean angular difference between the virtual and actual image for all screws was 2.17° ± 2.20° on axial images and 2.16° ± 2.24° on sagittal images. There was excellent agreement between actual and virtual pedicle screw trajectories in the axial and sagittal planes, with ICC = 0.99 (95%CI: 0.992-0.995) (p<0.001) and ICC = 0.81 (95%CI: 0.759-0.855) (p<0.001), respectively. When comparing thoracic and lumbar screws, there was a significant difference in the sagittal angulation between the two distributions. No statistically significant differences were found based on distance from the reference frame. Conclusion The virtual projection view is clinically accurate compared to the actual placement on intra-operative CT in both the axial and sagittal views. There is slight imprecision (~2°) in the axial and sagittal planes and a minor difference between the sagittal thoracic and lumbar angulation, although these did not affect clinical outcomes. In general, we find pedicle screw placement using intraoperative cone beam CT and navigation to be accurate and reliable, and as such

  8. Aortic valve and ascending aortic root modeling from 3D and 3D+t CT

    NASA Astrophysics Data System (ADS)

    Grbic, Saša; Ionasec, Razvan I.; Zäuner, Dominik; Zheng, Yefeng; Georgescu, Bogdan; Comaniciu, Dorin

    2010-02-01

    Aortic valve disorders are the most frequent form of valvular heart disorders (VHD), affecting nearly 3% of the global population. A large fraction of these are aortic root diseases, such as aortic root aneurysm, often requiring surgical procedures (valve-sparing) as a treatment. Visual non-invasive assessment techniques could assist in the pre-selection of adequate patients, in planning procedures and in their subsequent evaluation. However, state-of-the-art approaches model only a rather short part of the aortic root, insufficient to assist the physician during intervention planning. In this paper we propose a novel approach for morphological and functional quantification of both the aortic valve and the ascending aortic root. A novel physiological shape model is introduced, consisting of the aortic valve root, leaflets and the ascending aortic root. The model parameters are hierarchically estimated using robust and fast learning-based methods. Experiments performed on 63 CT sequences (630 volumes) and 20 single-phase CT volumes demonstrated an accuracy of 1.45 mm and a runtime of 30 seconds (3D+t) for this approach. To the best of our knowledge, this is the first time a complete model of the aortic valve (including leaflets) and the ascending aortic root, estimated from CT, has been proposed.

  9. Multimodal 3D PET/CT system for bronchoscopic procedure planning

    NASA Astrophysics Data System (ADS)

    Cheirsilp, Ronnarit; Higgins, William E.

    2013-02-01

    Integrated positron emission tomography (PET) / computed-tomography (CT) scanners give 3D multimodal data sets of the chest. Such data sets offer the potential for more complete and specific identification of suspect lesions and lymph nodes for lung-cancer assessment. This in turn enables better planning of staging bronchoscopies. The richness of the data, however, makes the visualization and planning process difficult. We present an integrated multimodal 3D PET/CT system that enables efficient region identification and bronchoscopic procedure planning. The system first invokes a series of automated 3D image-processing methods that construct a 3D chest model. Next, the user interacts with a set of interactive multimodal graphical tools that facilitate procedure planning for specific regions of interest (ROIs): 1) an interactive region candidate list that enables efficient ROI viewing in all tools; 2) a virtual PET-CT bronchoscopy rendering with SUV quantitative visualization to give a "fly through" endoluminal view of prospective ROIs; 3) transverse, sagittal, coronal multi-planar reformatted (MPR) views of the raw CT, PET, and fused CT-PET data; and 4) interactive multimodal volume/surface rendering to give a 3D perspective of the anatomy and candidate ROIs. In addition the ROI selection process is driven by a semi-automatic multimodal method for region identification. In this way, the system provides both global and local information to facilitate more specific ROI identification and procedure planning. We present results to illustrate the system's function and performance.

  10. Recent progress in 3-D imaging of sea freight containers

    SciTech Connect

    Fuchs, Theobald; Schön, Tobias; Sukowski, Frank; Dittmann, Jonas; Hanke, Randolf

    2015-03-31

    The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea-freight container takes several hours. Of course, this is too slow to be applied to a large number of containers. However, the benefits of a 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without being confronted with legal complications, high time consumption or risks for the security personnel during a manual inspection. Recently, distinct progress has been made in the field of reconstruction from projections with only a relatively low number of angular positions. Instead of today's 500 to 1000 rotational steps, as needed for conventional CT reconstruction techniques, this new class of algorithms offers the potential to reduce the number of projection angles by approximately a factor of 10. The main drawback of these advanced iterative methods is their high demand for numerical processing. But as computational power is getting steadily cheaper, there will be practical applications of these complex algorithms in the foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects, scanning a sea-freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution, depending on different numbers of projections.

  11. A statistical description of 3D lung texture from CT data

    NASA Astrophysics Data System (ADS)

    Chaisaowong, Kraisorn; Paul, Andreas

    2015-03-01

    A method is described to create a statistical description of 3D lung texture from CT data. Second-order statistics, i.e. the gray level co-occurrence matrix (GLCM), are applied to characterize the texture of the lung by defining the joint probability distribution of pixel pairs. The required GLCM was extended to three-dimensional image regions to deal with CT volume data. For a fine-scale lung segmentation, both the 3D GLCM of the lung and that of the thorax without the lung are required. Once the co-occurrence densities are measured, the 3D models of the joint probability density function, for each direction describing the involved voxel pairs and for each class (lung or thorax), are estimated using mixtures of Gaussians through the expectation-maximization algorithm. This leads to a feature space that describes the 3D lung texture.
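
    A 3D co-occurrence matrix for a single voxel-pair offset can be sketched directly in NumPy by counting co-occurring quantized gray levels and normalizing to a joint probability; matrices for several offsets/directions would be accumulated to build the full texture description. The function name, number of gray levels and offset are illustrative.

    import numpy as np

    def glcm_3d(volume, offset=(0, 0, 1), levels=32):
        """Joint probability P(i, j) of quantized gray levels at the given 3D voxel offset."""
        v = volume.astype(float)
        q = np.floor((v - v.min()) / (v.ptp() + 1e-12) * (levels - 1)).astype(int)
        dz, dy, dx = offset
        a = q[max(0, -dz):q.shape[0] - max(0, dz),
              max(0, -dy):q.shape[1] - max(0, dy),
              max(0, -dx):q.shape[2] - max(0, dx)]
        b = q[max(0, dz):q.shape[0] - max(0, -dz),
              max(0, dy):q.shape[1] - max(0, -dy),
              max(0, dx):q.shape[2] - max(0, -dx)]
        glcm = np.zeros((levels, levels))
        np.add.at(glcm, (a.ravel(), b.ravel()), 1)     # count co-occurring level pairs
        return glcm / glcm.sum()                       # normalize to a joint probability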

  12. Feasibility of CT-based intraoperative 3D stereotactic image-guided navigation in the upper cervical spine of children 10 years of age or younger: initial experience.

    PubMed

    Kovanda, Timothy J; Ansari, Shaheryar F; Qaiser, Rabia; Fulkerson, Daniel H

    2015-07-24

    OBJECT Rigid screw fixation may be technically difficult in the upper cervical spine of young children. Intraoperative stereotactic navigation may potentially assist a surgeon in precise placement of screws in anatomically challenging locations. Navigation may also assist in defining abnormal anatomy. The object of this study was to evaluate the authors' initial experience with the feasibility and accuracy of this technique, both for resection and for screw placement in the upper cervical spine in younger children. METHODS Eight consecutive pediatric patients 10 years of age or younger underwent upper cervical spine surgery aided by image-guided navigation. The demographic, surgical, and clinical data were recorded. Screw position was evaluated with either an intraoperative or immediately postoperative CT scan. RESULTS One patient underwent navigation purely for guidance of bony resection. A total of 14 navigated screws were placed in the other 7 patients, including 5 C-2 pedicle screws. All 14 screws were properly positioned, defined as the screw completely contained within the cortical bone in the expected trajectory. There were no immediate complications associated with navigation. CONCLUSIONS Image-guided navigation is feasible within the pediatric cervical spine and may be a useful surgical tool for placing screws in a patient with small, often difficult bony anatomy. The authors describe their experience with their first 8 pediatric patients who underwent navigation in cervical spine surgery. The authors highlight differences in technique compared with similar navigation in adults.

  13. [Accuracy of morphological simulation for orthognathic surgery. Assessment of a 3D image fusion software].

    PubMed

    Terzic, A; Schouman, T; Scolozzi, P

    2013-08-06

    CT/CBCT data allow for 3D reconstruction of the skeletal and untextured soft tissue volume. 3D stereophotogrammetry technology has strongly improved the quality of facial soft tissue surface texture. The combination of these two technologies allows for an accurate and complete reconstruction. The 3D virtual head may be used for orthognathic surgical planning, virtual surgery, and morphological simulation obtained with software dedicated to the fusion of 3D photogrammetric and radiological images. The imaging material includes a multi-slice CT scan or broad-field CBCT scan and a 3D photogrammetric camera. The image processing protocol includes the following steps: 1) pre- and postoperative CT/CBCT scan and 3D photogrammetric image acquisition; 2) 3D image segmentation and fusion of the untextured CT/CBCT skin with the preoperative textured facial soft tissue surface of the 3D photogrammetric scan; 3) image fusion of the pre- and postoperative CT/CBCT data sets, virtual osteotomies, and 3D photogrammetric soft tissue virtual simulation; 4) fusion of the virtually simulated 3D photogrammetric and real postoperative images, and assessment of accuracy using a color-coded scale to measure the differences between the two surfaces. Copyright © 2013. Published by Elsevier Masson SAS.

  14. Computerized analysis of pelvic incidence from 3D images

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaž; Janssen, Michiel M. A.; Pernuš, Franjo; Castelein, René M.; Viergever, Max A.

    2012-02-01

    The sagittal alignment of the pelvis can be evaluated by the angle of pelvic incidence (PI), which is constant for an arbitrary subject position and orientation and can therefore be compared among subjects in standing, sitting or supine position. In this study, PI was measured from three-dimensional (3D) computed tomography (CT) images of normal subjects that were acquired in the supine position. A novel computerized method, based on image processing techniques, was developed to automatically determine the anatomical references required to measure PI, i.e. the centers of the femoral heads in 3D, and the center and inclination of the sacral endplate in 3D. Multiplanar image reformation was applied to obtain perfect sagittal views with all anatomical structures completely in line with the hip axis, from which PI was calculated. The resulting PI (mean ± standard deviation) was equal to 46.6° ± 9.2° for male subjects (N = 189), 47.6° ± 10.7° for female subjects (N = 181), and 47.1° ± 10.0° for all subjects (N = 370). The obtained measurements of PI from 3D images were not biased by acquisition projection or structure orientation, because all anatomical structures were completely in line with the hip axis. The performed measurements in 3D therefore represent PI according to the actual geometrical relationships among the anatomical structures of the sacrum, pelvis and hips, as observed from perfect sagittal views.
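
    Once the anatomical references mentioned above are available, the angle itself is a small geometric computation: the angle between the sacral endplate normal and the line from the endplate centre to the hip axis. The sketch below assumes those landmarks are already extracted; all coordinates are illustrative, not study data.

      import numpy as np

      def pelvic_incidence(fem_head_left, fem_head_right, endplate_center, endplate_normal):
          """PI = angle between the endplate normal and the line endplate centre -> hip axis midpoint."""
          hip_axis_mid = (np.asarray(fem_head_left, float) + np.asarray(fem_head_right, float)) / 2.0
          to_hip = hip_axis_mid - np.asarray(endplate_center, float)
          n = np.asarray(endplate_normal, float)
          cosang = abs(np.dot(to_hip, n)) / (np.linalg.norm(to_hip) * np.linalg.norm(n))
          return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

      # Hypothetical landmark coordinates in millimetres:
      pi_deg = pelvic_incidence([-90, 0, 0], [90, 0, 0], [0, 60, 80], [0, -0.35, 0.94])
      print(f"pelvic incidence = {pi_deg:.1f} degrees")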

  15. 3D Imaging with Holographic Tomography

    NASA Astrophysics Data System (ADS)

    Sheppard, Colin J. R.; Kou, Shan Shan

    2010-04-01

    There are two main types of tomography that enable the 3D internal structures of objects to be reconstructed from scattered data. The commonly known computerized tomography (CT) gives good results in the x-ray wavelength range, where the filtered back-projection algorithm and Radon transform can be used. These techniques rely on the Fourier projection-slice theorem, where rays are considered to propagate straight through the object. Another type of tomography, called `diffraction tomography', applies in optics and acoustics, where diffraction and scattering effects must be taken into account. The latter proves to be a more difficult problem, as light no longer travels straight through the sample. Holographic tomography is a popular way of performing diffraction tomography, and there has recently been active experimental research on reconstructing complex refractive index data using this approach. However, there are two distinct ways of doing tomography: either by rotation of the object or by rotation of the illumination while fixing the detector. The difference between these two setups is intuitive but needs to be quantified. From a Fourier optics and information transformation point of view, we use 3D transfer function analysis to quantitatively describe how spatial frequencies of the object are mapped to the Fourier domain. We first employ a paraxial treatment by calculating the Fourier transform of the defocused OTF. The shape of the calculated 3D CTF for tomography by scanning the illumination in one direction only takes on a form that we might call a 'peanut', compared to the case of object rotation, where a diablo is formed; the peanut exhibits significant differences and non-isotropy. In particular, there is a line singularity along one transverse direction. Under high numerical aperture conditions, the paraxial treatment is not accurate, and so we make use of 3D analytical geometry to calculate the behaviour in the non-paraxial case. This time, we
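
    For reference, the Fourier projection-slice theorem invoked above can be stated compactly in its standard 2D form (this is textbook material, not a result from the paper):

      % The 1D Fourier transform of the parallel projection of f at angle \theta
      % equals a central slice of the 2D Fourier transform of f along that direction.
      \[
        p_\theta(s) = \int_{-\infty}^{\infty} f(s\cos\theta - t\sin\theta,\; s\sin\theta + t\cos\theta)\, dt
      \]
      \[
        \hat{p}_\theta(\omega) = \int_{-\infty}^{\infty} p_\theta(s)\, e^{-i\omega s}\, ds
        \;=\; \hat{f}(\omega\cos\theta,\; \omega\sin\theta)
      \]

    Diffraction tomography replaces these straight central slices with arcs (Ewald-sphere sections), which is why the transfer-function shapes discussed in the abstract differ from the straight-ray CT case.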

  16. Automated curved planar reformation of 3D spine images

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaz; Likar, Bostjan; Pernus, Franjo

    2005-10-01

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient or sufficiently detailed diagnostic information, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon, spine, etc.). Therefore, not all of the important details can be shown simultaneously in any planar cross-section. To overcome this problem, reformatted images in the coordinate system of the inspected structure must be created. This operation is usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, which is based on the image transformation from the standard image-based to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined on the curve that represents the vertebral column, and the rotation of the vertebrae around the spine curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image analysis based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. The method performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR benefits from reduced structural complexity in favour of improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, reformatted images may prove useful for segmentation and other image analysis tasks.
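
    A compact way to picture the reformation step is the sketch below: with the spine curve modelled as polynomials x(z), y(z), the volume is resampled on a curved sheet that follows the curve. The volume, polynomial coefficients and sampling width are illustrative assumptions, not the paper's optimised model.

      import numpy as np
      from scipy.ndimage import map_coordinates

      def curved_reformation(volume, x_coeff, y_coeff, half_width=40):
          """Resample `volume` (z, y, x) on a sheet following the curve (x(z), y(z))."""
          nz = volume.shape[0]
          z = np.arange(nz, dtype=float)
          xc = np.polyval(x_coeff, z)                  # curve position in each axial slice
          yc = np.polyval(y_coeff, z)
          u = np.arange(-half_width, half_width + 1, dtype=float)   # in-plane offset
          Z, U = np.meshgrid(z, u, indexing="ij")
          coords = np.stack([Z, yc[:, None] + U, np.broadcast_to(xc[:, None], Z.shape)])
          return map_coordinates(volume, coords, order=1, mode="nearest")

      vol = np.random.default_rng(1).normal(size=(120, 256, 256))   # stand-in CT volume
      cpr = curved_reformation(vol, x_coeff=[0.0005, -0.05, 128.0], y_coeff=[-0.001, 0.2, 120.0])
      print(cpr.shape)   # one straightened 2D image, (120, 81)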

  17. Feasibility of 3D harmonic contrast imaging.

    PubMed

    Voormolen, M M; Bouakaz, A; Krenning, B J; Lancée, C T; ten Cate, F J; de Jong, N

    2004-04-01

    Improved endocardial border delineation with the application of contrast agents should allow for less complex and faster tracing algorithms for left ventricular volume analysis. We developed a fast rotating phased array transducer for 3D imaging of the heart with harmonic capabilities making it suitable for contrast imaging. In this study the feasibility of 3D harmonic contrast imaging is evaluated in vitro. A commercially available tissue mimicking flow phantom was used in combination with Sonovue. Backscatter power spectra from a tissue and contrast region of interest were calculated from recorded radio frequency data. The spectra and the extracted contrast to tissue ratio from these spectra were used to optimize the excitation frequency, the pulse length and the receive filter settings of the transducer. Frequencies ranging from 1.66 to 2.35 MHz and pulse lengths of 1.5, 2 and 2.5 cycles were explored. An increase of more than 15 dB in the contrast to tissue ratio was found around the second harmonic compared with the fundamental level at an optimal excitation frequency of 1.74 MHz and a pulse length of 2.5 cycles. Using the optimal settings for 3D harmonic contrast recordings volume measurements of a left ventricular shaped agar phantom were performed. Without contrast the extracted volume data resulted in a volume error of 1.5%, with contrast an accuracy of 3.8% was achieved. The results show the feasibility of accurate volume measurements from 3D harmonic contrast images. Further investigations will include the clinical evaluation of the presented technique for improved assessment of the heart.

  18. Practical pseudo-3D registration for large tomographic images

    NASA Astrophysics Data System (ADS)

    Liu, Xuan; Laperre, Kjell; Sasov, Alexander

    2014-09-01

    Image registration is a powerful tool in various tomographic applications. Our main focus is on microCT applications in which samples/animals can be scanned multiple times under different conditions or at different time points. For this purpose, a registration tool capable of handling fairly large volumes has been developed, using a novel pseudo-3D method to achieve fast and interactive registration with simultaneous 3D visualization. To reduce computation complexity in 3D registration, we decompose it into several 2D registrations, which are applied to the orthogonal views (transaxial, sagittal and coronal) sequentially and iteratively. After registration in each view, the next view is retrieved with the new transformation matrix for registration. This reduces the computation complexity significantly. For rigid transform, we only need to search for 3 parameters (2 shifts, 1 rotation) in each of the 3 orthogonal views instead of 6 (3 shifts, 3 rotations) for full 3D volume. In addition, the amount of voxels involved is also significantly reduced. For the proposed pseudo-3D method, image-based registration is employed, with Sum of Square Difference (SSD) as the similarity measure. The searching engine is Powell's conjugate direction method. In this paper, only rigid transform is used. However, it can be extended to affine transform by adding scaling and possibly shearing to the transform model. We have noticed that more information can be used in the 2D registration if Maximum Intensity Projections (MIP) or Parallel Projections (PP) is used instead of the orthogonal views. Also, other similarity measures, such as covariance or mutual information, can be easily incorporated. The initial evaluation on microCT data shows very promising results. Two application examples are shown: dental samples before and after treatment and structural changes in materials before and after compression. Evaluation on registration accuracy between pseudo-3D method and true 3D method has
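
    One 2D step of the scheme described above can be sketched as follows: a rigid (two shifts, one rotation) registration of a single orthogonal view, driven by the sum of squared differences and Powell's method; the full method repeats this over the transaxial, sagittal and coronal views. The test images here are synthetic stand-ins.

      import numpy as np
      from scipy.ndimage import rotate, shift
      from scipy.optimize import minimize

      def transform2d(img, tx, ty, angle_deg):
          out = rotate(img, angle_deg, reshape=False, order=1, mode="nearest")
          return shift(out, (ty, tx), order=1, mode="nearest")

      def ssd(params, fixed, moving):
          tx, ty, angle = params
          return float(np.sum((fixed - transform2d(moving, tx, ty, angle)) ** 2))

      rng = np.random.default_rng(2)
      fixed = rng.normal(size=(64, 64))
      fixed[20:44, 20:44] += 4.0                       # a bright block to register on
      moving = transform2d(fixed, 3.0, -2.0, 5.0)      # known misalignment

      res = minimize(ssd, x0=np.zeros(3), args=(fixed, moving), method="Powell")
      print("parameters that re-align the view:", np.round(res.x, 2))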

  19. 3D imaging system for biometric applications

    NASA Astrophysics Data System (ADS)

    Harding, Kevin; Abramovich, Gil; Paruchura, Vijay; Manickam, Swaminathan; Vemury, Arun

    2010-04-01

    There is a growing interest in the use of 3D data for many new applications beyond traditional metrology areas. In particular, using 3D data to obtain shape information of both people and objects for applications ranging from identification to game inputs does not require high degrees of calibration or resolutions in the tens of micron range, but does require a means to quickly and robustly collect data in the millimeter range. Systems using methods such as structured light or stereo have seen wide use in measurements, but due to the use of a triangulation angle, and thus the need for a separated second viewpoint, may not be practical for looking at a subject 10 meters away. Even when working close to a subject, such as capturing hands or fingers, the triangulation angle causes occlusions and shadows, and requires a physically large system that may get in the way. This paper will describe methods to collect medium-resolution 3D data, plus high-resolution 2D images, using a line-of-sight approach. The methods use no moving parts and as such are robust to movement (for portability), reliable, and potentially very fast at capturing 3D data. This paper will describe the optical methods considered, variations on these methods, and present experimental data obtained with the approach.

  20. Two-alternative forced-choice evaluation of 3D CT angiograms

    NASA Astrophysics Data System (ADS)

    Habets, Damiaan F.; Chapman, Brian E.; Fox, Allan J.; Hyde, Derek E.; Holdsworth, David W.

    2001-06-01

    This study describes the development and evaluation of an appropriate methodology to study observer performance when comparing 2D and 3D angiographic techniques. 3D-CT angiograms were obtained from patients with cerebral aneurysms or occlusive carotid artery disease, and perspective rendering of these 3D data was performed to produce maximum intensity projections (MIP) at view angles identical to digital subtraction angiography (DSA) images. Two-alternative forced-choice methodology (2AFC) was then used to determine the percent correct (Pc), which is equivalent to the area Az under the receiver-operating characteristic (ROC) curve. In a comparison of CTA MIP images and DSA images of the intracranial vasculature, the average value of Pc was 0.90 ± 0.03. Perspective reprojection produces digitally reconstructed radiographs (DRRs) with image quality that is nearly equivalent to conventional DSA, with the additional clinical advantage of providing digitally reconstructed images at an unlimited number of viewing angles.
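
    The 2AFC figure of merit itself reduces to a simple ranking statistic: the fraction of signal/noise image pairs in which the signal-present image receives the higher score, which equals the area under the ROC curve. A small sketch with synthetic rating scores as stand-ins for observer responses:

      import numpy as np

      def percent_correct(signal_scores, noise_scores):
          s = np.asarray(signal_scores, float)[:, None]
          n = np.asarray(noise_scores, float)[None, :]
          return float(np.mean((s > n) + 0.5 * (s == n)))   # ties count as half

      rng = np.random.default_rng(3)
      pc = percent_correct(rng.normal(1.5, 1.0, 200), rng.normal(0.0, 1.0, 200))
      print(f"Pc (= area under the ROC curve) = {pc:.2f}")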

  1. 3-D Reconstruction From 2-D Radiographic Images and Its Application to Clinical Veterinary Medicine

    NASA Astrophysics Data System (ADS)

    Hamamoto, Kazuhiko; Sato, Motoyoshi

    3D imaging techniques are very important and indispensable in diagnosis. The mainstream approach is one in which the 3D image is reconstructed from a set of slice images, as in X-ray CT and MRI. However, these systems require large space and high costs. On the other hand, a low-cost, small-sized 3D imaging system is needed in clinical veterinary medicine, for example for diagnosis in an X-ray car or in pasture areas. We propose a novel 3D imaging technique using 2-D X-ray radiographic images. This system can be realized much more cheaply than X-ray CT and makes it possible to obtain 3D images in an X-ray car or with portable X-ray equipment. In this paper, a 3D visualization technique from 2-D radiographic images is proposed and several reconstructions are shown. These reconstructions are evaluated by veterinarians.

  2. 3-D Imaging Based, Radiobiological Dosimetry

    PubMed Central

    Sgouros, George; Frey, Eric; Wahl, Richard; He, Bin; Prideaux, Andrew; Hobbs, Robert

    2008-01-01

    Targeted radionuclide therapy holds promise as a new treatment against cancer. Advances in imaging are making it possible to evaluate the spatial distribution of radioactivity in tumors and normal organs over time. Matched anatomical imaging such as combined SPECT/CT and PET/CT has also made it possible to obtain tissue density information in conjunction with the radioactivity distribution. Coupled with sophisticated iterative reconstruction algorithms, these advances have made it possible to perform highly patient-specific dosimetry that also incorporates radiobiological modeling. Such sophisticated dosimetry techniques are still in the research investigation phase. Given the attendant logistical and financial costs, a demonstrated improvement in patient care will be a prerequisite for the adoption of such highly patient-specific internal dosimetry methods. PMID:18662554

  3. Signal subspace registration of 3D images

    NASA Astrophysics Data System (ADS)

    Soumekh, Mehrdad

    1998-06-01

    This paper addresses the problem of fusing the information content of two uncalibrated sensors. This problem arises in registering images of a scene when it is viewed via two different sensory systems, or detecting change in a scene when it is viewed at two different time points by a sensory system (or via two different sensory systems or observation channels). We are concerned with sensory systems which have not only a relative shift, scaling and rotational calibration error, but also an unknown point spread function (that is time-varying for a single sensor, or different for two sensors). By modeling one image in terms of an unknown linear combination of the other image, its powers and their spatially-transformed (shift, rotation and scaling) versions, a signal subspace processing is developed for fusing uncalibrated sensors. Numerical results with realistic 3D magnetic resonance images of a patient with multiple sclerosis, which are acquired at two different time points, are provided.

  4. Pattern based 3D image Steganography

    NASA Astrophysics Data System (ADS)

    Thiyagarajan, P.; Natarajan, V.; Aghila, G.; Prasanna Venkatesan, V.; Anitha, R.

    2013-03-01

    This paper proposes a new high-capacity steganographic scheme using 3D geometric models. The novel algorithm re-triangulates a part of a triangle mesh and embeds the secret information into the newly added positions of the triangle mesh. Up to nine bits of secret data can be embedded into the vertices of a triangle without causing any changes in the visual quality or the geometric properties of the cover image. Experimental results show that the proposed algorithm is secure, with high capacity and a low distortion rate. The algorithm also resists uniform affine transformations such as cropping, rotation and scaling. The performance of the method is also compared with other existing 3D steganography algorithms.

  5. A comparison of 3D poly(ε-caprolactone) tissue engineering scaffolds produced with conventional and additive manufacturing techniques by means of quantitative analysis of SR μ-CT images

    NASA Astrophysics Data System (ADS)

    Brun, F.; Intranuovo, F.; Mohammadi, S.; Domingos, M.; Favia, P.; Tromba, G.

    2013-07-01

    The technique used to produce a 3D tissue engineering (TE) scaffold is of fundamental importance in order to guarantee its proper morphological characteristics. An accurate assessment of the resulting structural properties is therefore crucial in order to evaluate the effectiveness of the produced scaffold. Synchrotron radiation (SR) computed microtomography (μ-CT) combined with further image analysis seems to be one of the most effective techniques to this aim. However, a quantitative assessment of the morphological parameters directly from the reconstructed images is a non-trivial task. This study considers two different poly(ε-caprolactone) (PCL) scaffolds fabricated with a conventional technique (Solvent Casting Particulate Leaching, SCPL) and an additive manufacturing (AM) technique (BioCell Printing), respectively. With the first technique it is possible to produce scaffolds with random, non-regular, rounded pore geometry. The AM technique is instead able to produce scaffolds with square-shaped interconnected pores of regular dimension. Therefore, the final morphology of the AM scaffolds can be predicted, and the resulting model can be used for the validation of the applied imaging and image analysis protocols. We report here an SR μ-CT image analysis approach that is able to effectively and accurately reveal the differences in the pore- and throat-size distributions as well as the connectivity of both AM and SCPL scaffolds.
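
    One common way to obtain pore-size distributions of the kind compared above is a morphological granulometry: the binarised pore phase is opened with balls of increasing radius and the volume removed at each radius is recorded. The sketch below uses a synthetic volume and small radii as illustrative assumptions; it is not the paper's exact analysis pipeline.

      import numpy as np
      from scipy import ndimage

      def ball(radius):
          g = np.mgrid[-radius:radius + 1, -radius:radius + 1, -radius:radius + 1]
          return (g ** 2).sum(axis=0) <= radius ** 2

      def granulometry(pores, radii):
          opened = [ndimage.binary_opening(pores, structure=ball(r)).sum() for r in radii]
          removed = -np.diff([pores.sum()] + opened)     # voxels removed at each radius
          return removed / max(int(pores.sum()), 1)      # fraction of pore volume per size class

      rng = np.random.default_rng(4)
      pores = ndimage.binary_dilation(rng.random((60, 60, 60)) > 0.97, iterations=2)
      print(dict(zip([1, 2, 3], np.round(granulometry(pores, [1, 2, 3]), 3))))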

  6. Repositioning accuracy of two different mask systems-3D revisited: Comparison using true 3D/3D matching with cone-beam CT

    SciTech Connect

    Boda-Heggemann, Judit . E-mail: judit.boda-heggemann@radonk.ma.uni-heidelberg.de; Walter, Cornelia; Rahn, Angelika; Wertz, Hansjoerg; Loeb, Iris; Lohr, Frank; Wenz, Frederik

    2006-12-01

    Purpose: The repositioning accuracy of mask-based fixation systems has been assessed with two-dimensional/two-dimensional or two-dimensional/three-dimensional (3D) matching. We analyzed the accuracy of commercially available head mask systems, using true 3D/3D matching, with X-ray volume imaging and cone-beam CT. Methods and Materials: Twenty-one patients receiving radiotherapy (intracranial/head-and-neck tumors) were evaluated (14 patients with rigid and 7 with thermoplastic masks). X-ray volume imaging was analyzed online and offline separately for the skull and neck regions. Translation/rotation errors of the target isocenter were analyzed. Four patients were treated to neck sites. For these patients, repositioning was aided by additional body tattoos. A separate analysis of the setup error on the basis of the registration of the cervical vertebra was performed. The residual error after correction and intrafractional motility were calculated. Results: The mean length of the displacement vector for rigid masks was 0.312 ± 0.152 cm (intracranial) and 0.586 ± 0.294 cm (neck). For the thermoplastic masks, the value was 0.472 ± 0.174 cm (intracranial) and 0.726 ± 0.445 cm (neck). Rigid masks with body tattoos had a displacement vector length in the neck region of 0.35 ± 0.197 cm. The intracranial residual error and intrafractional motility after X-ray volume imaging correction for rigid masks was 0.188 ± 0.074 cm, and was 0.134 ± 0.14 cm for thermoplastic masks. Conclusions: The results of our study have demonstrated that rigid masks have a high intracranial repositioning accuracy per se. Given the small residual error and intrafractional movement, thermoplastic masks may also be used for high-precision treatments when combined with cone-beam CT. The neck region repositioning accuracy was worse than the intracranial accuracy in both cases. However, body tattoos and image guidance improved the accuracy. Finally, the combination of both mask

  7. Preoperative dual-phase 3D CT angiography assessment of the right hepatic artery before gastrectomy.

    PubMed

    Yamashita, Keishi; Sakuramoto, Shinichi; Mieno, Hiroaki; Shibata, Tomotaka; Nemoto, Masayuki; Katada, Natsuya; Kikuchi, Shiro; Watanabe, Masahiko

    2014-10-01

    In the current study, we evaluated the efficacy of dual-phase three-dimensional (3D) CT angiography (CTA) in the assessment of the vascular anatomy, especially the right hepatic artery (RHA), before gastrectomy. The study initially included 714 consecutive patients being treated for gastric cancer. A dual-phase contrast-enhanced CT scan using 32-multi detector-row CT was performed for all patients. Among the 714 patients, 3D CTA clearly identified anomalies with the RHA arising from the superior mesenteric artery (SMA) in 49 cases (6.9 %). In Michels' classification type IX, the common hepatic artery (CHA) originates only from the SMA. Such cases exhibit defective anatomy for the CHA in conjunction with the celiac-splenic artery system, resulting in direct exposure of the portal vein beneath the #8a lymph node station, which was retrospectively confirmed by video in laparoscopic gastrectomy cases. Fused images of 3D angiography and venography were obtained and could have predicted the risk preoperatively; the surgical findings confirmed their usefulness. Preoperative evaluations using 3D CTA can provide more accurate information about the vessel anatomy. The fused images from 3D CTA have the potential to reduce the intraoperative risks for injuries to critical vessels, such as the portal vein, during gastrectomy.

  8. 3D seismic image processing for interpretation

    NASA Astrophysics Data System (ADS)

    Wu, Xinming

    Extracting fault, unconformity, and horizon surfaces from a seismic image is useful for interpretation of geologic structures and stratigraphic features. Although interpretation of these surfaces has been automated to some extent by others, significant manual effort is still required for extracting each type of these geologic surfaces. I propose methods to automatically extract all the fault, unconformity, and horizon surfaces from a 3D seismic image. To a large degree, these methods just involve image processing or array processing which is achieved by efficiently solving partial differential equations. For fault interpretation, I propose a linked data structure, which is simpler than triangle or quad meshes, to represent a fault surface. In this simple data structure, each sample of a fault corresponds to exactly one image sample. Using this linked data structure, I extract complete and intersecting fault surfaces without holes from 3D seismic images. I use the same structure in subsequent processing to estimate fault slip vectors. I further propose two methods, using precomputed fault surfaces and slips, to undo faulting in seismic images by simultaneously moving fault blocks and faults themselves. For unconformity interpretation, I first propose a new method to compute an unconformity likelihood image that highlights both the termination areas and the corresponding parallel unconformities and correlative conformities. I then extract unconformity surfaces from the likelihood image and use these surfaces as constraints to more accurately estimate seismic normal vectors that are discontinuous near the unconformities. Finally, I use the estimated normal vectors, with the unconformities as constraints, to compute a flattened image, in which seismic reflectors are all flat and vertical gaps correspond to the unconformities. Horizon extraction is straightforward after computing a map of image flattening; we can first extract horizontal slices in the flattened space

  9. High-pitch spiral CT with 3D reformation: an alternative choice for imaging vascular anomalies with affluent blood flow in the head and neck of infants and children

    PubMed Central

    Li, H-O; Huo, R; Xu, G-Q; Duan, Y-H; Nie, P; Ji, X-P; Cheng, Z-P; Xu, Z-D

    2015-01-01

    Objective: To evaluate the feasibility of high-pitch spiral CT in imaging vascular anomalies (VAs) with affluent blood flow in the head and neck of infants and children. Methods: For patients with suspected VAs and affluent blood flow pre-detected by ultrasound, CT was performed with high-pitch mode, individualized low-dose scan protocol and three-dimensional (3D) reformation. A five-point scale was used for image quality evaluation. Diagnostic accuracy was calculated with clinical diagnosis with/without pathological results as the reference standard. Radiation exposure and single-phase scan time were recorded. Treatment strategies were formulated based on CT images and results and were monitored through follow-up results. Results: 20 lesions were identified in 15 patients (median age of 11 months). The mean score of image quality was 4.13 ± 0.74. 7 patients (7/15, 46.67%) were diagnosed with haemangiomas, 6 patients (6/15, 40%) were diagnosed with venous malformations and 2 patients (2/15, 13.33%) were diagnosed with arteriovenous malformations. The average effective radiation doses of a single phase and of the total procedure were 0.27 ± 0.08 and 0.86 ± 0.21 mSv. The average scanning time of a single phase was 0.46 ± 0.09 s. After treatment, 13 patients (13/15, 86.67%) achieved excellent results, and 2 patients (2/15, 13.33%) showed good results in follow-up visits. Conclusion: High-pitch spiral CT with an individualized low-dose scan protocol and 3D reformation is an effective modality for imaging VAs with affluent blood flow in the head and neck of infants and children when vascular details are needed and ultrasound and MRI could not provide the complete information. Advances in knowledge: This study proposes an alternative modality for imaging VAs with affluent blood flow. PMID:26055504

  10. 3D goes digital: from stereoscopy to modern 3D imaging techniques

    NASA Astrophysics Data System (ADS)

    Kerwien, N.

    2014-11-01

    In the 19th century, English physicist Charles Wheatstone discovered stereopsis, the basis for 3D perception. His construction of the first stereoscope established the foundation for stereoscopic 3D imaging. Since then, many optical instruments were influenced by these basic ideas. In recent decades, the advent of digital technologies revolutionized 3D imaging. Powerful readily available sensors and displays combined with efficient pre- or post-processing enable new methods for 3D imaging and applications. This paper draws an arc from basic concepts of 3D imaging to modern digital implementations, highlighting instructive examples from its 175 years of history.

  11. The Diagnostic Radiological Utilization Of 3-D Display Images

    NASA Astrophysics Data System (ADS)

    Cook, Larry T.; Dwyer, Samuel J.; Preston, David F.; Batnitzky, Solomon; Lee, Kyo R.

    1984-10-01

    In the practice of radiology, computer graphics systems have become an integral part of the use of computed tomography (CT), nuclear medicine (NM), magnetic resonance imaging (MRI), digital subtraction angiography (DSA) and ultrasound. Gray scale computerized display systems are used to display, manipulate, and record scans in all of these modalities. As the use of these imaging systems has spread, various applications involving digital image manipulation have also been widely accepted in the radiological community. We discuss one of the more esoteric of such applications, namely, the reconstruction of 3-D structures from plane section data, such as CT scans. Our technique is based on the acquisition of contour data from successive sections, the definition of the implicit surface defined by such contours, and the application of the appropriate computer graphics hardware and software to present reasonably pleasing pictures.

  12. 3D strain measurement in soft tissue: demonstration of a novel inverse finite element model algorithm on MicroCT images of a tissue phantom exposed to negative pressure wound therapy.

    PubMed

    Wilkes, R; Zhao, Y; Cunningham, K; Kieswetter, K; Haridas, B

    2009-07-01

    This study describes a novel system for acquiring the 3D strain field in soft tissue at sub-millimeter spatial resolution during negative pressure wound therapy (NPWT). Recent research in advanced wound treatment modalities theorizes that microdeformations induced by the application of sub-atmospheric (negative) pressure through V.A.C. GranuFoam Dressing, a reticulated open-cell polyurethane foam (ROCF), is instrumental in regulating the mechanobiology of granulation tissue formation [Saxena, V., Hwang, C.W., Huang, S., Eichbaum, Q., Ingber, D., Orgill, D.P., 2004. Vacuum-assisted closure: Microdeformations of wounds and cell proliferation. Plast. Reconstr. Surg. 114, 1086-1096]. While the clinical response is unequivocal, measurement of deformations at the wound-dressing interface has not been possible due to the inaccessibility of the wound tissue beneath the sealed dressing. Here we describe the development of a bench-test wound model for microcomputed tomography (microCT) imaging of deformation induced by NPWT and an algorithm set for quantifying the 3D strain field at sub-millimeter resolution. Microdeformations induced in the tissue phantom revealed average tensile strains of 18%-23% at sub-atmospheric pressures of -50 to -200 mmHg (-6.7 to -26.7 kPa). The compressive strains (22%-24%) and shear strains (20%-23%) correlate with 2D FEM studies of microdeformational wound therapy in the reference cited above. We anticipate that strain signals quantified using this system can then be used in future research aimed at correlating the effects of mechanical loading on the phenotypic expression of dermal fibroblasts in acute and chronic ulcer models. Furthermore, the method developed here can be applied to continuum deformation analysis in other contexts, such as 3D cell culture via confocal microscopy, full scale CT and MRI imaging, and in machine vision.
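
    The step from a measured displacement field to strain values of the kind quoted above is a numerical differentiation; a minimal sketch using the small-strain tensor e_ij = (du_i/dx_j + du_j/dx_i)/2 on a voxel grid is shown below. The displacement field here is synthetic; in the study it comes from the inverse finite element analysis of the microCT images. At the roughly 20% deformations reported above, a finite-strain measure (e.g. Green-Lagrange) would be preferable; the small-strain form keeps the sketch short.

      import numpy as np

      def small_strain(u, spacing=1.0):
          """u: displacement field (3, nz, ny, nx) -> strain tensor field (3, 3, nz, ny, nx)."""
          grad = np.stack([np.stack(np.gradient(u[i], spacing), axis=0) for i in range(3)])
          return 0.5 * (grad + grad.transpose(1, 0, 2, 3, 4))

      z, y, x = np.meshgrid(np.arange(32), np.arange(32), np.arange(32), indexing="ij")
      u = np.stack([0.02 * z, np.zeros_like(z, float), np.zeros_like(z, float)])  # uniform 2% stretch along z
      eps = small_strain(u)
      print("e_zz ~", round(float(eps[0, 0].mean()), 4))   # ~0.02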

  13. Novel 3D stereoscopic imaging technology

    NASA Astrophysics Data System (ADS)

    Faris, Sadeg M.

    1994-04-01

    Numerous 3-D stereoscopic techniques have been explored. These previous techniques have had shortcomings precluding them from making stereoscopic imaging pervasive in mainstream applications. In the last decade, several enabling technologies have emerged and have become available and affordable. They make it possible now to realize the near-ideal stereoscopic imaging technology that can be made available to the masses making possible the inevitable transition from flat imaging to stereoscopic imaging. The ideal stereoscopic technology must meet four important criteria: (1) high stereoscopic image quality; (2) affordability; (3) compatibility with existing infrastructure, e.g., NTSC video, PC, and other devices; and (4) general purpose characteristics, e.g., the ability to produce electronic displays, hard-copy printing and capturing stereoscopic images on film and stored electronically. In section 2, an overview of prior art technologies is given highlighting their advantages and disadvantages. In section 3, the novel µPol™ stereoscopic technology is described making the case that it meets the four criteria for realizing the inevitable transition from flat to stereoscopic imaging for mass applications.

  14. 3D GPR Imaging of Wooden Logs

    NASA Astrophysics Data System (ADS)

    Halabe, Udaya B.; Pyakurel, Sandeep

    2007-03-01

    There has been a lack of an effective NDE technique to locate internal defects within wooden logs. The few available elastic wave propagation based techniques are limited to predicting E values. Other techniques such as X-rays have not been very successful in detecting internal defects in logs. If defects such as embedded metals could be identified before the sawing process, the saw mills could significantly increase their production by reducing the probability of damage to the saw blade and the associated downtime and the repair cost. Also, if the internal defects such as knots and decayed areas could be identified in logs, the sawing blade can be oriented to exclude the defective portion and optimize the volume of high valued lumber that can be obtained from the logs. In this research, GPR has been successfully used to locate internal defects (knots, decays and embedded metals) within the logs. This paper discusses GPR imaging and mapping of the internal defects using both 2D and 3D interpretation methodology. Metal pieces were inserted in a log and the reflection patterns from these metals were interpreted from the radargrams acquired using 900 MHz antenna. Also, GPR was able to accurately identify the location of knots and decays. Scans from several orientations of the log were collected to generate 3D cylindrical volume. The actual location of the defects showed good correlation with the interpreted defects in the 3D volume. The time/depth slices from 3D cylindrical volume data were useful in understanding the extent of defects inside the log.

  15. Crouzon syndrome associated with acanthosis nigricans: prenatal 2D and 3D ultrasound findings and postnatal 3D CT findings

    PubMed Central

    Nørgaard, Pernille; Hagen, Casper Petri; Hove, Hanne; Dunø, Morten; Nissen, Kamilla Rothe; Kreiborg, Sven; Jørgensen, Finn Stener

    2012-01-01

    Crouzon syndrome with acanthosis nigricans (CAN) is a very rare condition with an approximate prevalence of 1 per 1 million newborns. We add the first report on prenatal 2D and 3D ultrasound findings in CAN. In addition we present the postnatal 3D CT findings. The diagnosis was confirmed by molecular testing. PMID:23986840

  16. Crouzon syndrome associated with acanthosis nigricans: prenatal 2D and 3D ultrasound findings and postnatal 3D CT findings.

    PubMed

    Nørgaard, Pernille; Hagen, Casper Petri; Hove, Hanne; Dunø, Morten; Nissen, Kamilla Rothe; Kreiborg, Sven; Jørgensen, Finn Stener

    2012-01-01

    Crouzon syndrome with acanthosis nigricans (CAN) is a very rare condition with an approximate prevalence of 1 per 1 million newborns. We add the first report on prenatal 2D and 3D ultrasound findings in CAN. In addition we present the postnatal 3D CT findings. The diagnosis was confirmed by molecular testing.

  17. Novel Approaches in 3D Sensing, Imaging, and Visualization

    NASA Astrophysics Data System (ADS)

    Schulein, Robert; Daneshpanah, M.; Cho, M.; Javidi, B.

    Three-dimensional (3D) imaging systems are being researched extensively for purposes of sensing and visualization in fields as diverse as defense, medical, art, and entertainment. When compared to traditional 2D imaging techniques, 3D imaging offers advantages in ranging, robustness to scene occlusion, and target recognition performance. Amongst the myriad 3D imaging techniques, 3D multiperspective imaging technologies have received recent attention due to the technologies' relatively low cost, scalability, and passive sensing capabilities. Multiperspective 3D imagers collect 3D scene information by recording 2D intensity information from multiple perspectives, thus retaining both ray intensity and angle information. Three novel developments in 3D sensing, imaging, and visualization systems are presented: 3D imaging with axially distributed sensing, 3D optical profilometry, and occluded 3D object tracking.

  18. 3D Image Fusion to Localise Intercostal Arteries During TEVAR.

    PubMed

    Koutouzi, G; Sandström, C; Skoog, P; Roos, H; Falkenberg, M

    2017-01-01

    Preservation of intercostal arteries during thoracic aortic procedures reduces the risk of post-operative paraparesis. The origins of the intercostal arteries are visible on pre-operative computed tomography angiography (CTA), but rarely on intra-operative angiography. The purpose of this report is to suggest an image fusion technique for intra-operative localisation of the intercostal arteries during thoracic endovascular repair (TEVAR). The ostia of the intercostal arteries are identified and manually marked with rings on the pre-operative CTA. The optimal distal landing site in the descending aorta is determined and marked, allowing enough length for an adequate seal and attachment without covering more intercostal arteries than necessary. After 3D/3D fusion of the pre-operative CTA with an intra-operative cone-beam CT (CBCT), the markings are overlaid on the live fluoroscopy screen for guidance. The accuracy of the overlay is confirmed with digital subtraction angiography (DSA) and the overlay is adjusted when needed. Stent graft deployment is guided by the markings. The initial experience of this technique in seven patients is presented. 3D image fusion was feasible in all cases. Follow-up CTA after 1 month revealed that all intercostal arteries planned for preservation, were patent. None of the patients developed signs of spinal cord ischaemia. 3D image fusion can be used to localise the intercostal arteries during TEVAR. This may preserve some intercostal arteries and reduce the risk of post-operative spinal cord ischaemia.

  19. Digimouse: a 3D whole body mouse atlas from CT and cryosection data

    NASA Astrophysics Data System (ADS)

    Dogdas, Belma; Stout, David; Chatziioannou, Arion F.; Leahy, Richard M.

    2007-02-01

    We have constructed a three-dimensional (3D) whole body mouse atlas from coregistered x-ray CT and cryosection data of a normal nude male mouse. High quality PET, x-ray CT and cryosection images were acquired post mortem from a single mouse placed in a stereotactic frame with fiducial markers visible in all three modalities. The image data were coregistered to a common coordinate system using the fiducials and resampled to an isotropic 0.1 mm voxel size. Using interactive editing tools we segmented and labelled whole brain, cerebrum, cerebellum, olfactory bulbs, striatum, medulla, masseter muscles, eyes, lachrymal glands, heart, lungs, liver, stomach, spleen, pancreas, adrenal glands, kidneys, testes, bladder, skeleton and skin surface. The final atlas consists of the 3D volume, in which the voxels are labelled to define the anatomical structures listed above, with coregistered PET, x-ray CT and cryosection images. To illustrate use of the atlas we include simulations of 3D bioluminescence and PET image reconstruction. Optical scatter and absorption values are assigned to each organ to simulate realistic photon transport within the animal for bioluminescence imaging. Similarly, 511 keV photon attenuation values are assigned to each structure in the atlas to simulate realistic photon attenuation in PET. The Digimouse atlas and data are available at http://neuroimage.usc.edu/Digimouse.html.

  20. Digimouse: a 3D whole body mouse atlas from CT and cryosection data.

    PubMed

    Dogdas, Belma; Stout, David; Chatziioannou, Arion F; Leahy, Richard M

    2007-02-07

    We have constructed a three-dimensional (3D) whole body mouse atlas from coregistered x-ray CT and cryosection data of a normal nude male mouse. High quality PET, x-ray CT and cryosection images were acquired post mortem from a single mouse placed in a stereotactic frame with fiducial markers visible in all three modalities. The image data were coregistered to a common coordinate system using the fiducials and resampled to an isotropic 0.1 mm voxel size. Using interactive editing tools we segmented and labelled whole brain, cerebrum, cerebellum, olfactory bulbs, striatum, medulla, masseter muscles, eyes, lachrymal glands, heart, lungs, liver, stomach, spleen, pancreas, adrenal glands, kidneys, testes, bladder, skeleton and skin surface. The final atlas consists of the 3D volume, in which the voxels are labelled to define the anatomical structures listed above, with coregistered PET, x-ray CT and cryosection images. To illustrate use of the atlas we include simulations of 3D bioluminescence and PET image reconstruction. Optical scatter and absorption values are assigned to each organ to simulate realistic photon transport within the animal for bioluminescence imaging. Similarly, 511 keV photon attenuation values are assigned to each structure in the atlas to simulate realistic photon attenuation in PET. The Digimouse atlas and data are available at http://neuroimage.usc.edu/Digimouse.html.

  1. Thoracic cavity definition for 3D PET/CT analysis and visualization.

    PubMed

    Cheirsilp, Ronnarit; Bascom, Rebecca; Allen, Thomas W; Higgins, William E

    2015-07-01

    X-ray computed tomography (CT) and positron emission tomography (PET) serve as the standard imaging modalities for lung-cancer management. CT gives anatomical details on diagnostic regions of interest (ROIs), while PET gives highly specific functional information. During the lung-cancer management process, a patient receives a co-registered whole-body PET/CT scan pair and a dedicated high-resolution chest CT scan. With these data, multimodal PET/CT ROI information can be gleaned to facilitate disease management. Effective image segmentation of the thoracic cavity, however, is needed to focus attention on the central chest. We present an automatic method for thoracic cavity segmentation from 3D CT scans. We then demonstrate how the method facilitates 3D ROI localization and visualization in patient multimodal imaging studies. Our segmentation method draws upon digital topological and morphological operations, active-contour analysis, and key organ landmarks. Using a large patient database, the method showed high agreement to ground-truth regions, with a mean coverage=99.2% and leakage=0.52%. Furthermore, it enabled extremely fast computation. For PET/CT lesion analysis, the segmentation method reduced ROI search space by 97.7% for a whole-body scan, or nearly 3 times greater than that achieved by a lung mask. Despite this reduction, we achieved 100% true-positive ROI detection, while also reducing the false-positive (FP) detection rate by >5 times over that achieved with a lung mask. Finally, the method greatly improved PET/CT visualization by eliminating false PET-avid obscurations arising from the heart, bones, and liver. In particular, PET MIP views and fused PET/CT renderings depicted unprecedented clarity of the lesions and neighboring anatomical structures truly relevant to lung-cancer assessment.

  2. Thoracic Cavity Definition for 3D PET/CT Analysis and Visualization

    PubMed Central

    Cheirsilp, Ronnarit; Bascom, Rebecca; Allen, Thomas W.; Higgins, William E.

    2015-01-01

    X-ray computed tomography (CT) and positron emission tomography (PET) serve as the standard imaging modalities for lung-cancer management. CT gives anatomical detail on diagnostic regions of interest (ROIs), while PET gives highly specific functional information. During the lung-cancer management process, a patient receives a co-registered whole-body PET/CT scan pair and a dedicated high-resolution chest CT scan. With these data, multimodal PET/CT ROI information can be gleaned to facilitate disease management. Effective image segmentation of the thoracic cavity, however, is needed to focus attention on the central chest. We present an automatic method for thoracic cavity segmentation from 3D CT scans. We then demonstrate how the method facilitates 3D ROI localization and visualization in patient multimodal imaging studies. Our segmentation method draws upon digital topological and morphological operations, active-contour analysis, and key organ landmarks. Using a large patient database, the method showed high agreement to ground-truth regions, with a mean coverage = 99.2% and leakage = 0.52%. Furthermore, it enabled extremely fast computation. For PET/CT lesion analysis, the segmentation method reduced ROI search space by 97.7% for a whole-body scan, or nearly 3 times greater than that achieved by a lung mask. Despite this reduction, we achieved 100% true-positive ROI detection, while also reducing the false-positive (FP) detection rate by >5 times over that achieved with a lung mask. Finally, the method greatly improved PET/CT visualization by eliminating false PET-avid obscurations arising from the heart, bones, and liver. In particular, PET MIP views and fused PET/CT renderings depicted unprecedented clarity of the lesions and neighboring anatomical structures truly relevant to lung-cancer assessment. PMID:25957746

  3. Method of Individual Adjustment for 3D CT Analysis: Linear Measurement

    PubMed Central

    Choi, Dong Hun; Lee, Jeong Woo; Yang, Jung Dug; Chung, Ho Yun; Cho, Byung Chae

    2016-01-01

    Introduction. We aim to regularize measurement values in three-dimensional (3D) computed tomography (CT) reconstructed images for higher-precision 3D analysis, focusing on length-based 3D cephalometric examinations. Methods. We measure the linear distances between points on different skull models using Vernier calipers (real values). We use 10 differently tilted CT scans for 3D CT reconstruction of the models and measure the same linear distances from the picture archiving and communication system (PACS). In both cases, each measurement is performed three times by three doctors, yielding nine measurements. The real values are compared with the PACS values. Each PACS measurement is revised based on the display field of view (DFOV) values and compared with the real values. Results. The real values and the PACS measurement changes according to tilt value have no significant correlations (p > 0.05). However, significant correlations appear between the real values and DFOV-adjusted PACS measurements (p < 0.001). Hence, we obtain a correlation expression that can yield real physical values from PACS measurements. The DFOV value intervals for various age groups are also verified. Conclusion. Precise confirmation of individual preoperative length and precise analysis of postoperative improvements through 3D analysis is possible, which is helpful for facial-bone-surgery symmetry correction. PMID:28070517

  4. Method of Individual Adjustment for 3D CT Analysis: Linear Measurement.

    PubMed

    Kim, Dong Kyu; Choi, Dong Hun; Lee, Jeong Woo; Yang, Jung Dug; Chung, Ho Yun; Cho, Byung Chae; Choi, Kang Young

    2016-01-01

    Introduction. We aim to regularize measurement values in three-dimensional (3D) computed tomography (CT) reconstructed images for higher-precision 3D analysis, focusing on length-based 3D cephalometric examinations. Methods. We measure the linear distances between points on different skull models using Vernier calipers (real values). We use 10 differently tilted CT scans for 3D CT reconstruction of the models and measure the same linear distances from the picture archiving and communication system (PACS). In both cases, each measurement is performed three times by three doctors, yielding nine measurements. The real values are compared with the PACS values. Each PACS measurement is revised based on the display field of view (DFOV) values and compared with the real values. Results. The real values and the PACS measurement changes according to tilt value have no significant correlations (p > 0.05). However, significant correlations appear between the real values and DFOV-adjusted PACS measurements (p < 0.001). Hence, we obtain a correlation expression that can yield real physical values from PACS measurements. The DFOV value intervals for various age groups are also verified. Conclusion. Precise confirmation of individual preoperative length and precise analysis of postoperative improvements through 3D analysis is possible, which is helpful for facial-bone-surgery symmetry correction.
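
    The correlation expression referred to above amounts to a least-squares line that maps DFOV-adjusted PACS measurements to physical lengths. A minimal sketch with made-up paired measurements (not the study's data):

      import numpy as np

      pacs_adjusted = np.array([31.2, 45.8, 52.4, 60.1, 72.9, 88.3])   # mm, after DFOV scaling
      caliper       = np.array([30.9, 45.5, 52.0, 59.8, 72.5, 87.6])   # mm, real (caliper) values

      slope, intercept = np.polyfit(pacs_adjusted, caliper, deg=1)
      r = np.corrcoef(pacs_adjusted, caliper)[0, 1]
      print(f"real ~ {slope:.3f} * PACS + {intercept:.2f} mm   (r = {r:.4f})")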

  5. 3-D SAR image formation from sparse aperture data using 3-D target grids

    NASA Astrophysics Data System (ADS)

    Bhalla, Rajan; Li, Junfei; Ling, Hao

    2005-05-01

    The performance of ATR systems can potentially be improved by using three-dimensional (3-D) SAR images instead of the traditional two-dimensional SAR images or one-dimensional range profiles. 3-D SAR image formation of targets from radar backscattered data collected on wide angle, sparse apertures has been identified by AFRL as fundamental to building an object detection and recognition capability. A set of data has been released as a challenge problem. This paper describes a technique based on the concept of 3-D target grids aimed at the formation of 3-D SAR images of targets from sparse aperture data. The 3-D target grids capture the 3-D spatial and angular scattering properties of the target and serve as matched filters for SAR formation. The results of 3-D SAR formation using the backhoe public release data are presented.

  6. A fast 3D region growing approach for CT angiography applications

    NASA Astrophysics Data System (ADS)

    Ye, Zhen; Lin, Zhongmin; Lu, Cheng-chang

    2004-05-01

    Region growing is one of the most popular methods for low-level image segmentation. Much research on region growing has focused on the definition of the homogeneity criterion or the growing and merging criteria. However, one disadvantage of conventional region growing is redundancy: it requires large memory usage, and its computational efficiency is very low, especially for 3D images. To overcome this problem, a non-recursive, single-pass 3D region growing algorithm named SymRG is implemented and successfully applied to 3D CT angiography (CTA) applications for vessel segmentation and bone removal. The method consists of three steps: segmenting one-dimensional regions of each row; merging regions across adjacent rows to obtain the region segmentation of each slice; and merging regions across adjacent slices to obtain the final region segmentation of the 3D image. To improve the segmentation speed for very large 3D CTA volumes, the algorithm is applied repeatedly to newly updated local cubes. The next cube can be estimated by checking isolated segmented regions on all 6 faces of the current local cube. This local non-recursive 3D region-growing algorithm is memory-efficient and computation-efficient. Clinical testing of this algorithm on brain CTA shows that the technique can effectively remove the whole skull and most of the bones of the skull base, and reveal the cerebral vascular structures clearly.
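
    The row-then-slice merging idea can be illustrated compactly for a single 2D slice: 1D runs are extracted per row and merged with overlapping runs of the previous row through a union-find structure; the slice-to-slice step of the 3D algorithm is analogous. This is a sketch of the strategy, not the authors' SymRG implementation.

      import numpy as np

      def find(parent, i):                       # union-find lookup with path halving
          while parent[i] != i:
              parent[i] = parent[parent[i]]
              i = parent[i]
          return i

      def runs_of_row(row, threshold):           # 1D segmentation of one row
          mask = row >= threshold
          edges = np.flatnonzero(np.diff(np.concatenate(([0], mask.view(np.int8), [0]))))
          return list(zip(edges[0::2], edges[1::2]))        # (start, stop) pairs, stop exclusive

      def label_slice(img, threshold):
          parent, labels, prev = [], np.zeros(img.shape, dtype=int), []
          for y in range(img.shape[0]):
              cur = []
              for start, stop in runs_of_row(img[y], threshold):
                  parent.append(len(parent))                # new provisional region
                  lab = len(parent) - 1
                  for ps, pstop, plab in prev:              # merge with overlapping runs above
                      if start < pstop and ps < stop:
                          ra, rb = find(parent, lab), find(parent, plab)
                          parent[max(ra, rb)] = min(ra, rb)
                  labels[y, start:stop] = lab + 1
                  cur.append((start, stop, lab))
              prev = cur
          flat = labels.ravel()                             # resolve provisional labels to roots
          flat[flat > 0] = [find(parent, l - 1) + 1 for l in flat[flat > 0]]
          return labels

      demo = (np.random.default_rng(5).random((8, 12)) > 0.6).astype(float)
      print(label_slice(demo, 0.5))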

  7. Microstructure analysis of the secondary pulmonary lobules by 3D synchrotron radiation CT

    NASA Astrophysics Data System (ADS)

    Fukuoka, Y.; Kawata, Y.; Niki, N.; Umetani, K.; Nakano, Y.; Ohmatsu, H.; Moriyama, N.; Itoh, H.

    2014-03-01

    Recognition of abnormalities related to the lobular anatomy has become increasingly important in the diagnosis and differential diagnosis of lung abnormalities in clinical routine CT examinations. This paper aims at a 3-D microstructural analysis of the pulmonary acinus, with isotropic spatial resolution in the range of several micrometers, by using micro CT. Previously, we demonstrated the ability of synchrotron radiation micro CT (SRμCT) using an offset scan mode in microstructural analysis of the whole secondary pulmonary lobule. In this paper, we present a semiautomatic method to segment the acinar and subacinar airspaces from the secondary pulmonary lobule and to track small vessels running inside alveolar walls in the human acinus imaged by SRμCT. The method begins with segmentation of tissues such as the pleural surface, interlobular septa, alveolar walls, or vessels using a threshold technique and 3-D connected component analysis. 3-D airspaces separated by tissue are then constructed, representing the branching patterns of airways and airspaces distal to the terminal bronchiole. A graph-partitioning approach isolates acini whose stems are interactively defined as the terminal bronchioles in the secondary pulmonary lobule. Finally, we perform vessel tracking using a non-linear state space which captures both the smoothness of the trajectories and the intensity coherence along vessel orientations. Results demonstrate that the proposed method can extract several acinar airspaces from the 3-D SRμCT image of a secondary pulmonary lobule and that the extracted acinar airspaces enable an accurate quantitative description of the anatomy of the human acinus, for interpretation of the basic unit of pulmonary structure and function.
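
    The first steps described above (thresholding plus 3-D connected component analysis) can be sketched with standard tools; the volume and threshold below are synthetic stand-ins, and the graph-partitioning and vessel-tracking stages are not modelled here.

      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(6)
      volume = ndimage.gaussian_filter(rng.normal(size=(64, 64, 64)), sigma=2)  # stand-in image block

      airspace = volume < volume.mean()                      # voxels below the threshold = air
      structure = ndimage.generate_binary_structure(3, 1)    # 6-connectivity in 3-D
      labels, n = ndimage.label(airspace, structure=structure)
      sizes = ndimage.sum(airspace, labels, index=np.arange(1, n + 1))

      print(f"{n} connected airspaces; largest = {int(max(sizes))} voxels")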

  8. Imaging hypoxia using 3D photoacoustic spectroscopy

    NASA Astrophysics Data System (ADS)

    Stantz, Keith M.

    2010-02-01

    Purpose: The objective is to develop a multivariate in vivo hemodynamic model of tissue oxygenation (MiHMO2) based on 3D photoacoustic spectroscopy. Introduction: Low oxygen levels, or hypoxia, deprive cancer cells of oxygen and confer resistance to irradiation, some chemotherapeutic drugs, and oxygen-dependent therapies (phototherapy), leading to treatment failure and poor disease-free and overall survival. For example, clinical studies show that patients with breast carcinomas, cervical cancer, and head and neck carcinomas (HNC) are more likely to suffer local recurrence and metastasis if their tumors are hypoxic. A novel method to non-invasively measure tumor hypoxia, identify its type, and monitor its heterogeneity is devised by measuring tumor hemodynamics, MiHMO2. Material and Methods: Simulations are performed to compare tumor pO2 levels and hypoxia based on physiology - perfusion, fractional plasma volume, fractional cellular volume - and on hemoglobin status - oxygen saturation and hemoglobin concentration - based on in vivo measurements of breast, prostate, and ovarian tumors. Simulations of MiHMO2 are performed to assess the influence of scanner resolution and of different mathematical models of oxygen delivery. Results: The sensitivity of pO2 and hypoxic fraction to photoacoustic scanner resolution and their dependence on model complexity will be presented using hemodynamic parameters for different tumors. Conclusions: Photoacoustic CT spectroscopy provides a unique ability to monitor hemodynamic and cellular physiology in tissue, which can be used to longitudinally monitor tumor oxygenation and its response to anti-angiogenic therapies.

  9. High-Resolution Imaged-Based 3D Reconstruction Combined with X-Ray CT Data Enables Comprehensive Non-Destructive Documentation and Targeted Research of Astromaterials

    NASA Technical Reports Server (NTRS)

    Blumenfeld, E. H.; Evans, C. A.; Oshel, E. R.; Liddle, D. A.; Beaulieu, K.; Zeigler, R. A.; Righter, K.; Hanna, R. D.; Ketcham, R. A.

    2014-01-01

    Providing web-based data of complex and sensitive astromaterials (including meteorites and lunar samples) in novel formats enhances existing preliminary examination data on these samples and supports targeted sample requests and analyses. We have developed and tested a rigorous protocol for collecting highly detailed imagery of meteorites and complex lunar samples in non-contaminating environments. These data are reduced to create interactive 3D models of the samples. We intend to provide these data as they are acquired on NASA's Astromaterials Acquisition and Curation website at http://curator.jsc.nasa.gov/.

  10. Photogrammetric 3D reconstruction using mobile imaging

    NASA Astrophysics Data System (ADS)

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android Application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be calibrated directly on that device using standard calibration algorithms of photogrammetry and computer vision. Due to the still limited computing resources on mobile devices, a client-server handshake using Dropbox transfers the photos to the server to run AndroidSfM for the pose estimation of all photos by Structure-from-Motion and, thereafter, uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.

  11. Imaging a Sustainable Future in 3D

    NASA Astrophysics Data System (ADS)

    Schuhr, W.; Lee, J. D.; Kanngieser, E.

    2012-07-01

    It is the intention of this paper to contribute to a sustainable future by providing objective object information based on 3D photography, as well as promoting 3D photography not only for scientists but also for amateurs. Due to the presentation of this article by CIPA Task Group 3 on "3D Photographs in Cultural Heritage", the presented samples are masterpieces of historic as well as of current 3D photography concentrating on cultural heritage. In addition to a report on exemplary access to international archives of 3D photographs, samples of new 3D photographs taken with modern 3D cameras, by means of a ground-based high-resolution XLITE staff camera, and from a captive balloon and civil drone platforms are dealt with. To advise on optimally suited 3D methodology, as well as to catch new trends in 3D, an updated synoptic overview of 3D visualization technology, even claiming completeness, has been carried out as a result of a systematic survey. In this respect, e.g., today's lasered crystals might be "early bird" products in 3D, which, due to their lack of resolution, contrast and color, recall the early stage of the invention of photography.

  12. "High-precision, reconstructed 3D model" of skull scanned by conebeam CT: Reproducibility verified using CAD/CAM data.

    PubMed

    Katsumura, Seiko; Sato, Keita; Ikawa, Tomoko; Yamamura, Keiko; Ando, Eriko; Shigeta, Yuko; Ogawa, Takumi

    2016-01-01

    Computed tomography (CT) scanning has recently been introduced into forensic medicine and dentistry. However, the presence of metal restorations in the dentition can adversely affect the quality of three-dimensional reconstruction from CT scans. In this study, we aimed to evaluate the reproducibility of a "high-precision, reconstructed 3D model" obtained from a conebeam CT scan of dentition, a method that might be particularly helpful in forensic medicine. We took conebeam CT and helical CT images of three dry skulls marked with 47 measuring points; reconstructed three-dimensional images; and measured the distances between the points in the 3D images with a computer-aided design/computer-aided manufacturing (CAD/CAM) marker. We found that in comparison with the helical CT, conebeam CT is capable of reproducing measurements closer to those obtained from the actual samples. In conclusion, our study indicated that the image-reproduction from a conebeam CT scan was more accurate than that from a helical CT scan. Furthermore, the "high-precision reconstructed 3D model" facilitates reliable visualization of full-sized oral and maxillofacial regions in both helical and conebeam CT scans.

  13. Insight into 3D micro-CT data: exploring segmentation algorithms through performance metrics.

    PubMed

    Perciano, Talita; Ushizima, Daniela; Krishnan, Harinarayan; Parkinson, Dilworth; Larson, Natalie; Pelt, Daniël M; Bethel, Wes; Zok, Frank; Sethian, James

    2017-09-01

    Three-dimensional (3D) micro-tomography (µ-CT) has proven to be an important imaging modality in industry and scientific domains. Understanding the properties of material structure and behavior has produced many scientific advances. An important component of the 3D µ-CT pipeline is image partitioning (or image segmentation), a step that is used to separate various phases or components in an image. Image partitioning schemes require specific rules for different scientific fields, but a common strategy consists of devising metrics to quantify performance and accuracy. The present article proposes a set of protocols to systematically analyze and compare the results of unsupervised classification methods used for segmentation of synchrotron-based data. The proposed dataflow for Materials Segmentation and Metrics (MSM) provides 3D micro-tomography image segmentation algorithms, such as statistical region merging (SRM), k-means algorithm and parallel Markov random field (PMRF), while offering different metrics to evaluate segmentation quality, confidence and conformity with standards. Both experimental and synthetic data are assessed, illustrating quantitative results through the MSM dashboard, which can return sample information such as media porosity and permeability. The main contributions of this work are: (i) to deliver tools to improve material design and quality control; (ii) to provide datasets for benchmarking and reproducibility; (iii) to yield good practices in the absence of standards or ground-truth for ceramic composite analysis.
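
    The abstract does not give the MSM implementation, so the following is only a generic Python sketch of one of the named unsupervised methods (k-means clustering of grey values) together with one derived quantity (porosity); the scikit-learn call and parameter choices are assumptions:

        import numpy as np
        from sklearn.cluster import KMeans

        def kmeans_segment(volume, n_phases=2, seed=0):
            """Cluster voxel grey values into material phases (unsupervised)."""
            X = volume.reshape(-1, 1).astype(np.float64)
            km = KMeans(n_clusters=n_phases, n_init=10, random_state=seed).fit(X)
            labels = km.labels_.reshape(volume.shape)
            # treat the cluster with the lowest mean grey value as the pore phase
            pore_label = int(np.argmin(km.cluster_centers_.ravel()))
            return labels, pore_label

        def porosity(labels, pore_label):
            """Fraction of voxels assigned to the pore phase."""
            return float(np.mean(labels == pore_label))

        toy = np.random.rand(32, 32, 32)
        labels, pore = kmeans_segment(toy)
        print("porosity:", porosity(labels, pore))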

  14. Application of 3D surface imaging in breast cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Alderliesten, Tanja; Sonke, Jan-Jakob; Betgen, Anja; Honnef, Joeri; van Vliet-Vroegindeweij, Corine; Remeijer, Peter

    2012-02-01

    Purpose: Accurate dose delivery in deep-inspiration breath-hold (DIBH) radiotherapy for patients with breast cancer relies on precise treatment setup and monitoring of the depth of the breath hold. This study entailed performance evaluation of a 3D surface imaging system for image guidance in DIBH radiotherapy by comparison with cone-beam computed tomography (CBCT). Materials and Methods: Fifteen patients, treated with DIBH radiotherapy after breast-conserving surgery, were included. The performance of surface imaging was compared to the use of CBCT for setup verification. Retrospectively, breast surface registrations were performed for CBCT to planning CT as well as for a 3D surface, captured concurrently with CBCT, to planning CT. The resulting setup errors were compared with linear regression analysis. For the differences between setup errors, group mean, systematic and random errors were calculated. Furthermore, a residual error after registration (RRE) was assessed for both systems by investigating the root-mean-square distance between the planning CT surface and the registered CBCT/captured surface. Results: Good correlation between setup errors was found: R2=0.82, 0.86, 0.82 in the left-right, cranio-caudal and anterior-posterior directions, respectively. Systematic and random errors were <=0.16 cm and <=0.13 cm in all directions, respectively. RRE values for surface imaging and CBCT were on average 0.18 versus 0.19 cm, with standard deviations of 0.10 and 0.09 cm, respectively. Wilcoxon signed-rank testing showed that CBCT registrations resulted in higher RRE values than surface imaging registrations (p=0.003). Conclusion: This performance evaluation study shows very promising results
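
    As a rough illustration of how group mean, systematic and random errors are typically derived from per-patient setup-error differences, here is a hedged Python sketch; the estimator conventions (SD of patient means for the systematic error, RMS of patient SDs for the random error) follow common practice and are not stated in the abstract:

        import numpy as np

        def population_setup_errors(diffs_per_patient):
            """Group mean M, systematic error Sigma and random error sigma.

            diffs_per_patient : list of 1-D arrays, one per patient, holding the
                                per-fraction differences (surface imaging minus CBCT)
                                along a single direction, in cm.
            """
            patient_means = np.array([np.mean(d) for d in diffs_per_patient])
            patient_sds = np.array([np.std(d, ddof=1) for d in diffs_per_patient])
            M = patient_means.mean()                    # group mean
            Sigma = patient_means.std(ddof=1)           # systematic error
            sigma = np.sqrt(np.mean(patient_sds ** 2))  # random error (RMS of SDs)
            return M, Sigma, sigma

        # toy data: 15 patients, 5 fractions each
        rng = np.random.default_rng(1)
        diffs = [rng.normal(0.02, 0.1, 5) for _ in range(15)]
        print(population_setup_errors(diffs))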

  15. 3D imaging reconstruction and impacted third molars: case reports

    PubMed Central

    Tuzi, Andrea; Di Bari, Roberto; Cicconetti, Andrea

    2012-01-01

    Summary There is a debate in the literature about the need for Computed Tomography (CT) before removing third molars, even if positive radiographic signs are present. In a few cases, the third molar is so close to the inferior alveolar nerve that its extraction might expose patients to the risk of post-operative neuro-sensitive alterations of the skin and the mucosa of the homolateral lower lip and chin. Thus, injury to the inferior alveolar nerve may represent a serious, though infrequent, neurologic complication in the surgery of third molars, making a careful pre-operative evaluation of their anatomical relationship with the inferior alveolar nerve by means of radiographic imaging techniques necessary. This contribution presents two case reports showing positive radiographic signs, which are the hallmarks of a possible close relationship between the inferior alveolar nerve and the third molars. We aim at better defining the relationship between third molars and the mandibular canal using Dental CT Scan, DICOM image acquisition and 3D reconstruction with dedicated software. From our study we deduce that 3D images are not indispensable, but they can provide very useful assistance in the most complicated cases. PMID:23386934

  16. 3D documentation and visualization of external injury findings by integration of simple photography in CT/MRI data sets (IprojeCT).

    PubMed

    Campana, Lorenzo; Breitbeck, Robert; Bauer-Kreuz, Regula; Buck, Ursula

    2016-05-01

    This study evaluated the feasibility of documenting patterned injury using three dimensions and true colour photography without complex 3D surface documentation methods. This method is based on a generated 3D surface model using radiologic slice images (CT) while the colour information is derived from photographs taken with commercially available cameras. The external patterned injuries were documented in 16 cases using digital photography as well as highly precise photogrammetry-supported 3D structured light scanning. The internal findings of these deceased were recorded using CT and MRI. For registration of the internal with the external data, two different types of radiographic markers were used and compared. The 3D surface model generated from CT slice images was linked with the photographs, and thereby digital true-colour 3D models of the patterned injuries could be created (Image projection onto CT/IprojeCT). In addition, these external models were merged with the models of the somatic interior. We demonstrated that 3D documentation and visualization of external injury findings by integration of digital photography in CT/MRI data sets is suitable for the 3D documentation of individual patterned injuries to a body. Nevertheless, this documentation method is not a substitution for photogrammetry and surface scanning, especially when the entire bodily surface is to be recorded in three dimensions including all external findings, and when precise data is required for comparing highly detailed injury features with the injury-inflicting tool.
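
    The abstract mentions registering the photographic and CT data via radiographic markers; a common way to compute such a marker-based rigid alignment is the Kabsch/Procrustes solution sketched below in Python (the function and toy data are illustrative, not the IprojeCT implementation):

        import numpy as np

        def rigid_register(markers_photo, markers_ct):
            """Least-squares rigid transform (rotation R, translation t) mapping
            marker coordinates from the photographic frame to the CT frame
            (Kabsch/Procrustes solution, no scaling)."""
            P = np.asarray(markers_photo, dtype=float)
            Q = np.asarray(markers_ct, dtype=float)
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cP).T @ (Q - cQ)                  # cross-covariance matrix
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T                         # proper rotation (det = +1)
            t = cQ - R @ cP
            return R, t

        # toy check: recover a known rotation/translation from 4 markers
        rng = np.random.default_rng(0)
        P = rng.random((4, 3))
        R_true, _ = np.linalg.qr(rng.random((3, 3)))
        if np.linalg.det(R_true) < 0:
            R_true[:, 0] *= -1                          # ensure a proper rotation
        Q = P @ R_true.T + np.array([1.0, 2.0, 3.0])
        R, t = rigid_register(P, Q)
        print(np.allclose(P @ R.T + t, Q, atol=1e-8))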

  17. Ames Lab 101: Real-Time 3D Imaging

    ScienceCinema

    Zhang, Song

    2016-07-12

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  18. Ames Lab 101: Real-Time 3D Imaging

    SciTech Connect

    Zhang, Song

    2010-01-01

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  19. 3D imaging of soil pore network: two different approaches

    NASA Astrophysics Data System (ADS)

    Matrecano, M.; Di Matteo, B.; Mele, G.; Terribile, F.

    2009-04-01

    Pore geometry imaging and its quantitative description is a key factor for advances in the knowledge of physical, chemical and biological soil processes. For many years, photos of flattened surfaces of undisturbed soil samples impregnated with fluorescent resin and of soil thin sections under the microscope have been the only way available for exploring pore architecture at different scales. Earlier 3D representations of the internal structure of the soil based on non-destructive methods were obtained using medical tomographic systems (NMR and X-ray CT). However, images provided by such equipment show strong limitations in terms of spatial resolution. In the last decade very good results have been obtained using imaging from very expensive systems based on synchrotron radiation. More recently, X-ray Micro-Tomography has become the most widely applied technique, showing the best compromise between cost, resolution and size of the images. Conversely, the conceptually simpler but destructive method of "serial sectioning" has been progressively neglected because of technical problems in sample preparation and the time needed to obtain an adequate number of serial sections for correct 3D reconstruction of soil pore geometry. In this work a comparison between the two methods above has been carried out in order to define their advantages and shortcomings and to point out their different potential. A cylindrical undisturbed soil sample, 6.5 cm in diameter and 6.5 cm in height, from an Ap horizon of an alluvial soil showing vertic characteristics, was reconstructed using both a desktop X-ray micro-tomograph Skyscan 1172 and the new automatic serial sectioning system SSAT (Sequential Section Automatic Tomography) set up at CNR ISAFOM in Ercolano (Italy) with the aim of overcoming most of the typical limitations of such a technique. The best image resolution was 7.5 µm per voxel using X-ray Micro CT, while 20 µm was the best value using the serial sectioning

  20. Needle placement for piriformis injection using 3-D imaging.

    PubMed

    Clendenen, Steven R; Candler, Shawn A; Osborne, Michael D; Palmer, Scott C; Duench, Stephanie; Glynn, Laura; Ghazi, Salim M

    2013-01-01

    Piriformis syndrome is a pain syndrome originating in the buttock and is attributed to 6% - 8% of patients referred for the treatment of back and leg pain. The treatment of piriformis syndrome using fluoroscopy, computed tomography (CT), electromyography (EMG), and ultrasound (US) has become standard practice. The treatment of piriformis syndrome has evolved to include fluoroscopy and EMG with CT guidance. We present a case study of 5 successful piriformis injections using 3-D computer-assisted electromagnetic needle tracking coupled with ultrasound. A 6-degree-of-freedom electromagnetic position tracker was attached to the ultrasound probe, which allowed the system to detect the position and orientation of the probe in the magnetic field. The tracked ultrasound probe was used to find the posterior superior iliac spine. Subsequently, 3 points were captured to register the ultrasound image with the CT or magnetic resonance image scan. Moreover, after the registration was obtained, the navigation system visualized the tracked needle relative to the CT scan in real time using 2 orthogonal multi-planar reconstructions centered at the tracked needle tip. For comparison, a recent study reported that fluoroscopically guided injections achieved only 30% accuracy, roughly one third of that achieved with ultrasound guidance. This novel technique exhibited an accurate needle guidance injection precision of 98% while advancing to the piriformis muscle and avoiding the sciatic nerve. The mean (± SD) procedure time was 19.08 (± 4.9) minutes. This technique allows for electromagnetic instrument tip tracking with real-time 3-D guidance to the selected target. As with any new technique, a learning curve is expected; however, this technique could offer an alternative, minimizing radiation exposure.

  1. [3D display of sequential 2D medical images].

    PubMed

    Lu, Yisong; Chen, Yazhu

    2003-12-01

    A detailed review is given in this paper on various current 3D display methods for sequential 2D medical images and the new development in 3D medical image display. True 3D display, surface rendering, volume rendering, 3D texture mapping and distributed collaborative rendering are discussed in depth. For two kinds of medical applications: Real-time navigation system and high-fidelity diagnosis in computer aided surgery, different 3D display methods are presented.

  2. Find your way with X-Ray: Using microCT to correlate in vivo imaging with 3D electron microscopy.

    PubMed

    Karreman, Matthia A; Ruthensteiner, Bernhard; Mercier, Luc; Schieber, Nicole L; Solecki, Gergely; Winkler, Frank; Goetz, Jacky G; Schwab, Yannick

    2017-01-01

    Combining in vivo imaging with electron microscopy (EM) uniquely allows monitoring rare and critical events in living tissue, followed by their high-resolution visualization in their native context. A major hurdle, however, is to keep track of the region of interest (ROI) when moving from intravital microscopy (IVM) to EM. Here, we present a workflow that relies on correlating IVM and microscopic X-ray computed tomography to predict the position of the ROI inside the EM-processed sample. The ROI can then be accurately and quickly targeted using ultramicrotomy and imaged using EM. We outline how this procedure is used to retrieve and image tumor cells arrested in the vasculature of the mouse brain. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Progress in 3D imaging and display by integral imaging

    NASA Astrophysics Data System (ADS)

    Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.

    2009-05-01

    Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic that is attracting important research efforts. As their main added value, 3D monitors should provide the observers with different perspectives of a 3D scene by simply varying the head position. Three-dimensional imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology for providing 3D viewing to audiences of more than one person. Due to its advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since integral imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been the realization of a thorough study of the principles that govern its operation. It is remarkable that some of these principles have been recognized and characterized by our group. Other contributions of our research have addressed some of the classical limitations of InI systems, like the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, and the limited range of viewing angles of InI monitors.

  4. Clinical application of modern imaging technology: 3D information acquiring and image processing

    NASA Astrophysics Data System (ADS)

    Wang, Dezong

    1994-05-01

    In current clinical practice, B-mode ultrasound, X-ray, X-CT and MRI images are widely used. All of these are 2D images in which the 3D information is blended together, and this blended information can lead doctors astray. If the images are processed, mistakes will be reduced. In this paper the processing methods for 2D images are described and examples of clinical applications are given. The methods for acquiring 3D information from 2D images are explained. The stereo image of a liver and its cancer is shown. Methods for calculating the areas and volumes of the liver and the cancer are provided.
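
    A simple Python sketch of the kind of volume calculation mentioned above (voxel counting over segmented slices); array names and spacings are illustrative assumptions:

        import numpy as np

        def organ_volume_ml(mask, pixel_spacing_mm, slice_thickness_mm):
            """Volume of a segmented structure (e.g. liver or tumour) from a stack
            of 2-D segmentation masks.

            mask               : 3-D boolean array (slices, rows, cols), True inside organ
            pixel_spacing_mm   : (row_spacing, col_spacing) in mm
            slice_thickness_mm : slice spacing in mm
            """
            voxel_mm3 = pixel_spacing_mm[0] * pixel_spacing_mm[1] * slice_thickness_mm
            return mask.sum() * voxel_mm3 / 1000.0      # mm^3 -> millilitres

        mask = np.zeros((40, 128, 128), dtype=bool)
        mask[5:35, 30:90, 30:90] = True                 # toy "organ"
        print(organ_volume_ml(mask, (0.7, 0.7), 5.0), "ml")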

  5. A CCD-based optical CT scanner for high-resolution 3D imaging of radiation dose distributions: equipment specifications, optical simulations and preliminary results.

    PubMed

    Doran, S J; Koerkamp, K K; Bero, M A; Jenneson, P; Morton, E J; Gilboy, W B

    2001-12-01

    Methods based on magnetic resonance imaging for the measurement of three-dimensional distributions of radiation dose are highly developed. However, relatively little work has been done on optical computed tomography (OCT). This paper describes a new OCT scanner based on a broad beam light source and a two-dimensional charge-coupled device (CCD) detector. A number of key design features are discussed including the light source; the scanning tank, turntable and stepper motor control; the diffuser screen onto which images are projected and the detector. It is shown that the non-uniform pixel sensitivity of the low-cost CCD detector used and the granularity of the diffuser screen lead to a serious ring artefact in the reconstructed images. Methods are described for eliminating this. The problems arising from reflection and refraction at the walls of the gel container are explained. Optical ray-tracing simulations are presented for cylindrical containers with a variety of radii and verified experimentally. Small changes in the model parameters lead to large variations in the signal intensity observed in the projection data. The effect of imperfect containers on data quality is discussed and a method based on a 'correction scan' is shown to be successful in correcting many of the related image artefacts. The results of two tomography experiments are presented. In the first experiment, a radiochromic Fricke gel sample was exposed four times in different positions to a 100 kVp x-ray beam perpendicular to the plane of imaging. Images of absorbed dose with a slice thickness of 140 microm were acquired, with 'true' in-plane resolution of 560 x 560 microm2 at the edge of the 72 mm field of view and correspondingly higher resolution at the centre. The nominal doses measured correlated well with the known exposure times. The second experiment demonstrated the well known phenomenon of diffusion in the dosemeter gels and yielded a value of (0.12 +/- 0.02) mm2 s(-1) for the diffusion
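
    A hedged Python sketch of how a 'correction scan' can be used to convert raw CCD projections into optical-density projections (Beer-Lambert log ratio); the exact correction applied by the authors is not specified in the abstract:

        import numpy as np

        def optical_density_projection(raw, reference, dark=None, eps=1e-6):
            """Convert a CCD projection of the irradiated gel into an optical-density
            projection using a pre-irradiation 'correction scan' of the same container.

            raw       : 2-D projection of the irradiated gel
            reference : matching projection of the un-irradiated gel (correction scan)
            dark      : optional dark-field frame (sensor offset)
            """
            raw = raw.astype(float)
            reference = reference.astype(float)
            if dark is not None:
                raw = raw - dark
                reference = reference - dark
            # Beer-Lambert: OD = log10(I_ref / I), clipped to avoid log of zero
            return np.log10(np.clip(reference, eps, None) / np.clip(raw, eps, None))

        ref = np.full((256, 256), 1000.0)
        irr = ref * np.exp(-0.3)                        # toy attenuated projection
        print(optical_density_projection(irr, ref).mean())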

  6. 3D Convolutional Neural Network for Automatic Detection of Lung Nodules in Chest CT

    PubMed Central

    Hamidian, Sardar; Sahiner, Berkman; Petrick, Nicholas; Pezeshk, Aria

    2017-01-01

    Deep convolutional neural networks (CNNs) form the backbone of many state-of-the-art computer vision systems for classification and segmentation of 2D images. The same principles and architectures can be extended to three dimensions to obtain 3D CNNs that are suitable for volumetric data such as CT scans. In this work, we train a 3D CNN for automatic detection of pulmonary nodules in chest CT images using volumes of interest extracted from the LIDC dataset. We then convert the 3D CNN which has a fixed field of view to a 3D fully convolutional network (FCN) which can generate the score map for the entire volume efficiently in a single pass. Compared to the sliding window approach for applying a CNN across the entire input volume, the FCN leads to a nearly 800-fold speed-up, and thereby fast generation of output scores for a single case. This screening FCN is used to generate difficult negative examples that are used to train a new discriminant CNN. The overall system consists of the screening FCN for fast generation of candidate regions of interest, followed by the discrimination CNN. PMID:28845077
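
    A toy PyTorch sketch of the idea described above: a small 3-D CNN whose classification head is written convolutionally, so that applying it to a larger volume yields a dense score map instead of a single score. Layer sizes and shapes are illustrative, not those of the paper:

        import torch
        import torch.nn as nn

        class Small3DCNN(nn.Module):
            """Toy 3-D CNN for nodule/non-nodule scoring of CT volumes of interest
            (e.g. 32x32x32 voxels); layer sizes are illustrative only."""
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool3d(2),                    # 32 -> 16
                    nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool3d(2),                    # 16 -> 8
                )
                # "fully convolutional" head: a kernel covering the whole 8^3 feature
                # map acts like a fully connected layer on the training-size input
                self.classifier = nn.Sequential(
                    nn.Conv3d(32, 64, kernel_size=8), nn.ReLU(),
                    nn.Conv3d(64, 2, kernel_size=1),
                )

            def forward(self, x):
                return self.classifier(self.features(x))

        net = Small3DCNN()
        voi = torch.randn(1, 1, 32, 32, 32)             # one volume of interest
        print(net(voi).shape)                           # torch.Size([1, 2, 1, 1, 1])
        whole = torch.randn(1, 1, 64, 64, 64)           # larger volume -> score map
        print(net(whole).shape)                         # torch.Size([1, 2, 9, 9, 9])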

  7. 3D convolutional neural network for automatic detection of lung nodules in chest CT

    NASA Astrophysics Data System (ADS)

    Hamidian, Sardar; Sahiner, Berkman; Petrick, Nicholas; Pezeshk, Aria

    2017-03-01

    Deep convolutional neural networks (CNNs) form the backbone of many state-of-the-art computer vision systems for classification and segmentation of 2D images. The same principles and architectures can be extended to three dimensions to obtain 3D CNNs that are suitable for volumetric data such as CT scans. In this work, we train a 3D CNN for automatic detection of pulmonary nodules in chest CT images using volumes of interest extracted from the LIDC dataset. We then convert the 3D CNN which has a fixed field of view to a 3D fully convolutional network (FCN) which can generate the score map for the entire volume efficiently in a single pass. Compared to the sliding window approach for applying a CNN across the entire input volume, the FCN leads to a nearly 800-fold speed-up, and thereby fast generation of output scores for a single case. This screening FCN is used to generate difficult negative examples that are used to train a new discriminant CNN. The overall system consists of the screening FCN for fast generation of candidate regions of interest, followed by the discrimination CNN.

  8. Infrastructure for 3D Imaging Test Bed

    DTIC Science & Technology

    2007-05-11

    (c.) Real-time detection and analysis of human gait: using a video camera we capture the walking human silhouette for pattern modeling and gait analysis. Fig. 5 shows the scanning result, which is fed into a Geomagic software tool for 3D meshing. [Fig. 5: 3D scanning result]

  9. 3D ultrasound imaging in image-guided intervention.

    PubMed

    Fenster, Aaron; Bax, Jeff; Neshat, Hamid; Cool, Derek; Kakani, Nirmal; Romagnoli, Cesare

    2014-01-01

    Ultrasound imaging is used extensively in diagnosis and image-guidance for interventions of human diseases. However, conventional 2D ultrasound suffers from limitations since it can only provide 2D images of 3-dimensional structures in the body. Thus, measurement of organ size is variable, and guidance of interventions is limited, as the physician is required to mentally reconstruct the 3-dimensional anatomy using 2D views. Over the past 20 years, a number of 3-dimensional ultrasound imaging approaches have been developed. We have developed an approach that is based on a mechanical mechanism to move any conventional ultrasound transducer while 2D images are collected rapidly and reconstructed into a 3D image. In this presentation, 3D ultrasound imaging approaches will be described for use in image-guided interventions.

  10. Performance of a commercial optical CT scanner and polymer gel dosimeters for 3-D dose verification.

    PubMed

    Xu, Y; Wuu, Cheng-Shie; Maryanski, Marek J

    2004-11-01

    Performance analysis of a commercial three-dimensional (3-D) dose mapping system based on optical CT scanning of polymer gels is presented. The system consists of BANG 3 polymer gels (MGS Research, Inc., Madison, CT), OCTOPUS laser CT scanner (MGS Research, Inc., Madison, CT), and an in-house developed software for optical CT image reconstruction and 3-D dose distribution comparison between the gel, film measurements and the radiation therapy treatment plans. Various sources of image noise (digitization, electronic, optical, and mechanical) generated by the scanner as well as optical uniformity of the polymer gel are analyzed. The performance of the scanner is further evaluated in terms of the reproducibility of the data acquisition process, the uncertainties at different levels of reconstructed optical density per unit length and the effects of scanning parameters. It is demonstrated that for BANG 3 gel phantoms held in cylindrical plastic containers, the relative dose distribution can be reproduced by the scanner with an overall uncertainty of about 3% within approximately 75% of the radius of the container. In regions located closer to the container wall, however, the scanner generates erroneous optical density values that arise from the reflection and refraction of the laser rays at the interface between the gel and the container. The analysis of the accuracy of the polymer gel dosimeter is exemplified by the comparison of the gel/OCT-derived dose distributions with those from film measurements and a commercial treatment planning system (Cadplan, Varian Corporation, Palo Alto, CA) for a 6 cm x 6 cm single field of 6 MV x rays and a 3-D conformal radiotherapy (3DCRT) plan. The gel measurements agree with the treatment plans and the film measurements within the "3%-or-2 mm" criterion throughout the usable, artifact-free central region of the gel volume. Discrepancies among the three data sets are analyzed.
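
    A brute-force Python sketch of a global gamma analysis of the '3%-or-2 mm' type used above; the neighbourhood search, wrap-around handling and toy data are simplifications and not the authors' analysis code:

        import numpy as np

        def gamma_pass_rate(dose_eval, dose_ref, spacing_mm, dd=0.03, dta_mm=2.0,
                            dose_threshold=0.05):
            """Global gamma pass rate between an evaluated dose grid (gel/optical CT)
            and a reference grid (plan or film) of identical shape and spacing."""
            ref_max = dose_ref.max()
            mask = dose_ref > dose_threshold * ref_max      # ignore low-dose voxels
            search = int(np.ceil(dta_mm / min(spacing_mm))) # search radius in voxels
            gamma2 = np.full(dose_ref.shape, np.inf)
            for dz in range(-search, search + 1):
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        dist2 = ((dz * spacing_mm[0]) ** 2 + (dy * spacing_mm[1]) ** 2
                                 + (dx * spacing_mm[2]) ** 2)
                        if dist2 > dta_mm ** 2:
                            continue
                        shifted = np.roll(dose_eval, (dz, dy, dx), axis=(0, 1, 2))
                        g2 = ((shifted - dose_ref) / (dd * ref_max)) ** 2 + dist2 / dta_mm ** 2
                        gamma2 = np.minimum(gamma2, g2)
            return float(np.mean(gamma2[mask] <= 1.0))

        ref = np.random.rand(20, 20, 20) * 2.0
        noisy = ref * (1 + 0.01 * np.random.randn(*ref.shape))
        print(gamma_pass_rate(noisy, ref, spacing_mm=(1.0, 1.0, 1.0)))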

  11. Performance of a commercial optical CT scanner and polymer gel dosimeters for 3-D dose verification

    SciTech Connect

    Xu, Y.; Wuu, C.-S.; Maryanski, Marek J.

    2004-11-01

    Performance analysis of a commercial three-dimensional (3-D) dose mapping system based on optical CT scanning of polymer gels is presented. The system consists of BANG®3 polymer gels (MGS Research, Inc., Madison, CT), the OCTOPUS™ laser CT scanner (MGS Research, Inc., Madison, CT), and an in-house developed software for optical CT image reconstruction and 3-D dose distribution comparison between the gel, film measurements and the radiation therapy treatment plans. Various sources of image noise (digitization, electronic, optical, and mechanical) generated by the scanner as well as optical uniformity of the polymer gel are analyzed. The performance of the scanner is further evaluated in terms of the reproducibility of the data acquisition process, the uncertainties at different levels of reconstructed optical density per unit length and the effects of scanning parameters. It is demonstrated that for BANG®3 gel phantoms held in cylindrical plastic containers, the relative dose distribution can be reproduced by the scanner with an overall uncertainty of about 3% within approximately 75% of the radius of the container. In regions located closer to the container wall, however, the scanner generates erroneous optical density values that arise from the reflection and refraction of the laser rays at the interface between the gel and the container. The analysis of the accuracy of the polymer gel dosimeter is exemplified by the comparison of the gel/OCT-derived dose distributions with those from film measurements and a commercial treatment planning system (Cadplan, Varian Corporation, Palo Alto, CA) for a 6 cm x 6 cm single field of 6 MV x rays and a 3-D conformal radiotherapy (3DCRT) plan. The gel measurements agree with the treatment plans and the film measurements within the '3%-or-2 mm' criterion throughout the usable, artifact-free central region of the gel volume. Discrepancies among the three data sets are analyzed.

  12. Automatic 3D-to-2D registration for CT and dual-energy digital radiography for calcification detection

    SciTech Connect

    Chen Xiang; Gilkeson, Robert C.; Fei, Baowei

    2007-12-15

    We are investigating three-dimensional (3D) to two-dimensional (2D) registration methods for computed tomography (CT) and dual-energy digital radiography (DEDR). CT is an established tool for the detection of cardiac calcification. DEDR could be a cost-effective alternative screening tool. In order to utilize CT as the 'gold standard' to evaluate the capability of DEDR images for the detection and localization of calcium, we developed an automatic, intensity-based 3D-to-2D registration method for 3D CT volumes and 2D DEDR images. To generate digitally reconstructed radiography (DRR) from the CT volumes, we developed several projection algorithms using the fast shear-warp method. In particular, we created a Gaussian-weighted projection for this application. We used normalized mutual information (NMI) as the similarity measurement. Simulated projection images from CT values were fused with the corresponding DEDR images to evaluate the localization of cardiac calcification. The registration method was evaluated by digital phantoms, physical phantoms, and clinical data sets. The results from the digital phantoms show that the success rate is 100% with a translation difference of less than 0.8 mm and a rotation difference of less than 0.2°. For physical phantom images, the registration accuracy is 0.43±0.24 mm. Color overlay and 3D visualization of clinical images show that the two images registered well. The NMI values between the DRR and DEDR images improved from 0.21±0.03 before registration to 0.25±0.03 after registration. Registration errors measured from anatomic markers decreased from 27.6±13.6 mm before registration to 2.5±0.5 mm after registration. Our results show that the automatic 3D-to-2D registration is accurate and robust. This technique can provide a useful tool for correlating DEDR with CT images for screening coronary artery calcification.
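
    A short Python sketch of a joint-histogram normalized mutual information measure of the kind used as the similarity metric above; several NMI normalizations exist and the one shown (Studholme's) may differ from the paper's, so the numeric scale will not match the reported values:

        import numpy as np

        def normalized_mutual_information(img_a, img_b, bins=64):
            """NMI = (H(A) + H(B)) / H(A, B) from a joint grey-level histogram,
            e.g. between a DRR and a DEDR image."""
            hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
            p_ab = hist_2d / hist_2d.sum()
            p_a = p_ab.sum(axis=1)
            p_b = p_ab.sum(axis=0)
            nz = p_ab > 0
            h_ab = -np.sum(p_ab[nz] * np.log(p_ab[nz]))
            h_a = -np.sum(p_a[p_a > 0] * np.log(p_a[p_a > 0]))
            h_b = -np.sum(p_b[p_b > 0] * np.log(p_b[p_b > 0]))
            return (h_a + h_b) / h_ab

        a = np.random.rand(256, 256)
        b = 0.5 * a + 0.5 * np.random.rand(256, 256)    # partially correlated image
        print(normalized_mutual_information(a, b))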

  13. 3D segmentation of lung CT data with graph-cuts: analysis of parameter sensitivities

    NASA Astrophysics Data System (ADS)

    Cha, Jung won; Dunlap, Neal; Wang, Brian; Amini, Amir

    2016-03-01

    Lung boundary image segmentation is important for many tasks including, for example, the development of radiation treatment plans for subjects with thoracic malignancies. In this paper, we describe a method and parameter settings for accurate 3D lung boundary segmentation based on graph-cuts from X-ray CT data. Even though several researchers have previously used graph-cuts for image segmentation, to date no systematic studies have been performed regarding the range of parameters that gives accurate results. The energy function in the graph-cuts algorithm requires 3 suitable parameter settings: K, a large constant for assigning seed points, c, the similarity coefficient for n-links, and λ, the terminal coefficient for t-links. We analyzed the parameter sensitivity with four lung data sets from subjects with lung cancer using error metrics. Large values of K created artifacts on segmented images, and values of c much larger than λ upset the balance between the boundary term and the data term in the energy function, leading to unacceptable segmentation results. For a range of parameter settings, we performed 3D image segmentation, and in each case compared the results with the expert-delineated lung boundaries. We used simple 6-neighborhood systems for the n-links in 3D. The 3D image segmentation took 10 minutes for a 512x512x118 ~ 512x512x190 lung CT image volume. Our results indicate that the graph-cuts algorithm was more sensitive to the K and λ parameter settings than to the c parameter, and furthermore that amongst the range of parameters tested, K=5 and λ=0.5 yielded good results.
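
    To make the roles of K, c and λ concrete, here is a hedged Python sketch of Boykov-Jolly-style edge capacities; the exact functional forms are assumptions (only one neighbour direction is shown), and in practice the resulting n-link and t-link capacities would be handed to a max-flow/min-cut solver:

        import numpy as np

        def graphcut_weights(img, fg_seeds, bg_seeds, K=5.0, c=1.0, lam=0.5, sigma=30.0):
            """Illustrative n-link and t-link capacities.

            img      : 2-D/3-D image (e.g. a CT slice or volume)
            fg_seeds : boolean mask of user-marked lung (object) voxels
            bg_seeds : boolean mask of user-marked background voxels
            K        : large constant attached to seed t-links
            c        : similarity coefficient scaling the boundary (n-link) term
            lam      : terminal coefficient scaling the regional (t-link) term
            """
            # boundary term between neighbours along the last axis only (illustration)
            diff = np.diff(img.astype(float), axis=-1)
            n_links = c * np.exp(-(diff ** 2) / (2.0 * sigma ** 2))

            # regional term from simple intensity likelihoods of the seed populations
            mu_fg, mu_bg = img[fg_seeds].mean(), img[bg_seeds].mean()
            t_source = lam * (img - mu_bg) ** 2 / (2.0 * sigma ** 2)
            t_sink = lam * (img - mu_fg) ** 2 / (2.0 * sigma ** 2)
            t_source[fg_seeds], t_sink[fg_seeds] = K, 0.0   # hard seed constraints
            t_source[bg_seeds], t_sink[bg_seeds] = 0.0, K
            return n_links, t_source, t_sink

        img = np.random.rand(64, 64) * 100
        fg = np.zeros_like(img, dtype=bool); fg[28:36, 28:36] = True
        bg = np.zeros_like(img, dtype=bool); bg[:4, :] = True
        n_links, t_src, t_snk = graphcut_weights(img, fg, bg)
        print(n_links.shape, t_src.shape, t_snk.shape)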

  14. [3D virtual imaging of the upper airways].

    PubMed

    Ferretti, G; Coulomb, M

    2000-04-01

    The different three-dimensional reconstructions of the upper airways that can be obtained with spiral computed tomography (CT) are presented here. The parameters that are indispensable for obtaining spiral CT images that are as realistic as possible are recalled, together with the advantages and disadvantages of the different techniques. Multislice reconstruction (MSR) produces slices in different planes of space with the high contrast of CT slices. They provide information similar to that obtained for the rare indications for thoracic MRI. Thick-slice reconstructions with maximum intensity projection (MIP) or minimum intensity projection (minIP) give projection views whose contrast can be modified by selecting the more dense (MIP) or less dense (minIP) voxels. They find their application in the exploration of the upper airways. Surface and volume external 3D reconstructions can be obtained. They give an overall view of the upper airways, similar to a bronchogram. Virtual endoscopy reproduces real endoscopic images but cannot provide information on the appearance of the mucosa, nor biopsy specimens. It offers possible applications for preparing, guiding and controlling interventional fibroscopy procedures.
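
    A minimal Python/NumPy sketch of the MIP and minIP projections described above, with an optional thick-slab range; purely illustrative:

        import numpy as np

        def mip_minip(volume, axis=0, slab=None):
            """Maximum- and minimum-intensity projections of a CT volume.

            volume : 3-D array of HU values (slices, rows, cols)
            axis   : projection direction
            slab   : optional (start, stop) slice range for a thick-slab projection
            """
            if slab is not None:
                volume = np.take(volume, np.arange(slab[0], slab[1]), axis=axis)
            return volume.max(axis=axis), volume.min(axis=axis)

        ct = np.random.randint(-1000, 400, size=(120, 256, 256))
        mip, minip = mip_minip(ct, axis=0, slab=(40, 80))
        print(mip.shape, minip.shape)   # minIP highlights low-density airways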

  15. Description of patellar movement by 3D parameters obtained from dynamic CT acquisition

    NASA Astrophysics Data System (ADS)

    de Sá Rebelo, Marina; Moreno, Ramon Alfredo; Gobbi, Riccardo Gomes; Camanho, Gilberto Luis; de Ávila, Luiz Francisco Rodrigues; Demange, Marco Kawamura; Pecora, Jose Ricardo; Gutierrez, Marco Antonio

    2014-03-01

    The patellofemoral joint is critical in the biomechanics of the knee. Patellofemoral instability is one condition that generates pain and functional impairment and often requires surgery as part of orthopedic treatment. The analysis of patellofemoral dynamics has been performed with several medical image modalities. The clinical parameters assessed are mainly based on 2D measurements, such as the patellar tilt angle and the lateral shift, among others. Besides, the acquisition protocols are mostly performed with the leg held static at fixed angles. The use of a helical multi-slice CT scanner can allow the capture and display of the joint's movement performed actively by the patient. However, the orthopedic applications of this scanner have not yet been standardized or widespread. In this work we present a method to evaluate the biomechanics of the patellofemoral joint during active contraction using multi-slice CT images. This approach can greatly improve the analysis of patellar instability by displaying the physiology during muscle contraction. The movement was evaluated by computing its 3D displacements and rotations between different knee angles. The first processing step registered the images at both angles based on the femur's position. The transformation matrix of the patella was then calculated from the images, which provided the rotations and translations performed by the patella from its position in the first image to its position in the second image. Analysis of these parameters for all frames provided real 3D information about the patellar displacement.
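
    A small Python sketch of extracting translations and rotation angles from a 4x4 patellar transformation matrix; the Euler convention used here (R = Rz·Ry·Rx) is an assumption for illustration only:

        import numpy as np

        def decompose_rigid(T):
            """Split a 4x4 homogeneous transform (computed after registering the two
            acquisitions on the femur) into translations and rotation angles (deg)."""
            R = T[:3, :3]
            t = T[:3, 3]
            ry = np.arcsin(-R[2, 0])                    # angles for R = Rz * Ry * Rx
            rx = np.arctan2(R[2, 1], R[2, 2])
            rz = np.arctan2(R[1, 0], R[0, 0])
            return t, np.degrees([rx, ry, rz])

        T = np.eye(4)
        theta = np.radians(10.0)                        # toy 10-degree rotation about z
        T[:3, :3] = [[np.cos(theta), -np.sin(theta), 0],
                     [np.sin(theta),  np.cos(theta), 0],
                     [0, 0, 1]]
        T[:3, 3] = [2.0, -1.5, 0.5]
        print(decompose_rigid(T))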

  16. Quantitative 3D Optical Imaging: Applications in Dosimetry and Biophysics

    NASA Astrophysics Data System (ADS)

    Thomas, Andrew Stephen

    Optical-CT has been shown to be a potentially useful imaging tool for the two very different spheres of biologists and radiation therapy physicists, but it has yet to live up to that potential. In radiation therapy, researchers have used optical-CT for the readout of 3D dosimeters, but it is yet to be a clinically relevant tool as the technology is too slow to be considered practical. Biologists have used the technique for structural imaging, but have struggled with emission tomography as the reality of photon attenuation for both excitation and emission has made the images quantitatively irrelevant. Dosimetry. The DLOS (Duke Large field of view Optical-CT Scanner) was designed and constructed to make 3D dosimetry utilizing optical-CT a fast and practical tool while maintaining the accuracy of readout of the previous, slower readout technologies. Upon construction/optimization/implementation of several components including a diffuser, band-pass filter, registration mount and fluid filtration system, the dosimetry system provides high quality data comparable to or exceeding that of commercial products. In addition, a stray light correction algorithm was tested and implemented. The DLOS in combination with the 3D dosimeter it was designed for, PRESAGE™, then underwent rigorous commissioning and benchmarking tests validating its performance against gold standard data including a set of 6 irradiations. DLOS commissioning tests resulted in sub-mm isotropic spatial resolution (MTF >0.5 for frequencies of 1.5 lp/mm) and a dynamic range of ~60 dB. Flood field uniformity was 10% and stable after 45 minutes. Stray light proved to be small, due to telecentricity, but even the residual can be removed through deconvolution. Benchmarking tests showed the mean 3D passing gamma rate (3%, 3 mm, 5% dose threshold) over the 6 benchmark data sets was 97.3% +/- 0.6% (range 96%-98%) for scans totaling ~10 minutes, indicating excellent ability to perform 3D dosimetry while improving the speed of

  17. Augmented reality 3D display based on integral imaging

    NASA Astrophysics Data System (ADS)

    Deng, Huan; Zhang, Han-Le; He, Min-Yang; Wang, Qiong-Hua

    2017-02-01

    Integral imaging (II) is a good candidate for augmented reality (AR) display, since it provides various physiological depth cues so that viewers can freely change the accommodation and convergence between the virtual three-dimensional (3D) images and the real-world scene without feeling any visual discomfort. We propose two AR 3D display systems based on the theory of II. In the first AR system, a micro II display unit reconstructs a micro 3D image, and the micro 3D image is magnified by a convex lens. The lateral and depth distortions of the magnified 3D image are analyzed and resolved by pitch scaling and depth scaling. The magnified 3D image and the real 3D scene are overlapped by using a half-mirror to realize AR 3D display. The second AR system uses a micro-lens array holographic optical element (HOE) as an image combiner. The HOE is a volume holographic grating which functions as a micro-lens array for Bragg-matched light, and as a transparent glass for Bragg-mismatched light. A reference beam can reproduce a virtual 3D image from one side, and a reference beam with conjugated phase can reproduce a second 3D image from the other side of the micro-lens array HOE, giving the system a double-sided 3D display capability.

  18. Evaluation of a System for High-Accuracy 3D Image-Based Registration of Endoscopic Video to C-Arm Cone-Beam CT for Image-Guided Skull Base Surgery

    PubMed Central

    Mirota, Daniel J.; Uneri, Ali; Schafer, Sebastian; Nithiananthan, Sajendra; Reh, Douglas D.; Ishii, Masaru; Gallia, Gary L.; Taylor, Russell H.; Hager, Gregory D.; Siewerdsen, Jeffrey H.

    2014-01-01

    The safety of endoscopic skull base surgery can be enhanced by accurate navigation in preoperative computed tomography (CT) or, more recently, intraoperative cone-beam CT (CBCT). The ability to register real-time endoscopic video with CBCT offers an additional advantage by rendering information directly within the visual scene to account for intraoperative anatomical change. However, tracker localization error (~1-2 mm) limits the accuracy with which video and tomographic images can be registered. This paper reports the first implementation of image-based video-CBCT registration, conducts a detailed quantitation of the dependence of registration accuracy on system parameters, and demonstrates improvement in registration accuracy achieved by the image-based approach. Performance was evaluated as a function of parameters intrinsic to the image-based approach, including system geometry, CBCT image quality, and computational runtime. Overall system performance was evaluated in a cadaver study simulating transsphenoidal skull base tumor excision. Results demonstrated significant improvement (p < 0.001) in registration accuracy with a mean reprojection distance error of 1.28 mm for the image-based approach versus 1.82 mm for the conventional tracker-based method. Image-based registration was highly robust against CBCT image quality factors of noise and resolution, permitting integration with low-dose intraoperative CBCT. PMID:23372078

  19. Evaluation of a system for high-accuracy 3D image-based registration of endoscopic video to C-arm cone-beam CT for image-guided skull base surgery.

    PubMed

    Mirota, Daniel J; Uneri, Ali; Schafer, Sebastian; Nithiananthan, Sajendra; Reh, Douglas D; Ishii, Masaru; Gallia, Gary L; Taylor, Russell H; Hager, Gregory D; Siewerdsen, Jeffrey H

    2013-07-01

    The safety of endoscopic skull base surgery can be enhanced by accurate navigation in preoperative computed tomography (CT) or, more recently, intraoperative cone-beam CT (CBCT). The ability to register real-time endoscopic video with CBCT offers an additional advantage by rendering information directly within the visual scene to account for intraoperative anatomical change. However, tracker localization error (~1-2 mm) limits the accuracy with which video and tomographic images can be registered. This paper reports the first implementation of image-based video-CBCT registration, conducts a detailed quantitation of the dependence of registration accuracy on system parameters, and demonstrates improvement in registration accuracy achieved by the image-based approach. Performance was evaluated as a function of parameters intrinsic to the image-based approach, including system geometry, CBCT image quality, and computational runtime. Overall system performance was evaluated in a cadaver study simulating transsphenoidal skull base tumor excision. Results demonstrated significant improvement in registration accuracy with a mean reprojection distance error of 1.28 mm for the image-based approach versus 1.82 mm for the conventional tracker-based method. Image-based registration was highly robust against CBCT image quality factors of noise and resolution, permitting integration with low-dose intraoperative CBCT.

  20. Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration

    NASA Astrophysics Data System (ADS)

    Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.

    2012-02-01

    The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in terms of obtaining accurate measures, especially in providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to the X-ray plates. Accurate results were reported using postoperative CT. However, its extensive usage in clinical routine is hampered by its major requirement of having CT scans of individual patients, which are not available for most ACL reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the proposed method, the alignment is achieved by using a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation maximization (EM) algorithm. Compared with the ground truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results with errors of (-0.43° +/- 1.19°, 0.45° +/- 2.17°, 0.23° +/- 1.05°) and (0.03 +/- 0.55, -0.03 +/- 0.54, -2.73 +/- 1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with a high accuracy of 0.53 +/- 0.30 mm distance error.

  1. Objective and subjective comparison of standard 2-D and fully 3-D reconstructed data on a PET/CT system.

    PubMed

    Strobel, Klaus; Rüdy, Matthias; Treyer, Valerie; Veit-Haibach, Patrick; Burger, Cyrill; Hany, Thomas F

    2007-07-01

    The relative advantage of fully 3-D versus 2-D mode for whole-body imaging is currently the focus of considerable expert debate. The nature of 3-D PET acquisition for FDG PET/CT theoretically allows a shorter scan time and more efficient use of FDG than standard 2-D acquisition. We therefore objectively and subjectively compared standard 2-D and fully 3-D reconstructed data for FDG PET/CT on a research PET/CT system. In a total of 36 patients (mean 58.9 years, range 17.3-78.9 years; 21 male, 15 female) referred for known or suspected malignancy, FDG PET/CT was performed using a research PET/CT system with advanced detector technology with improved sensitivity and spatial resolution. After 45 min uptake, a low-dose CT (40 mAs) from head to thigh was performed, followed by 2-D PET (emission 3 min per field) and 3-D PET (emission 1.5 min per field), both with a seven-slice overlap to cover the identical anatomical region. Acquisition time was therefore 50% less (seven fields; 21 min vs. 10.5 min). PET data were acquired in a randomized fashion, so in 50% of the cases 2-D data were acquired first. CT data were used for attenuation correction. 2-D (OSEM) and 3-D PET images were iteratively reconstructed. Subjective analysis of 2-D and 3-D images was performed by two readers in a blinded, randomized fashion evaluating the following criteria: sharpness of organs (liver, chest wall/lung), overall image quality and detectability and dignity of each identified lesion. Objective analysis of PET data was investigated measuring maximum standard uptake value with lean body mass (SUV(max,LBM)) of identified lesions. On average, per patient, the SUV(max) was 7.86 (SD 7.79) for 2-D and 6.96 (SD 5.19) for 3-D. On a lesion basis, the average SUV(max) was 7.65 (SD 7.79) for 2-D and 6.75 (SD 5.89) for 3-D. The absolute difference on a paired t-test of SUV 3-D-2-D based on each measured lesion was significant with an average of -0.956 (P=0.002) and an average of -0.884 on a
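
    A minimal Python sketch of the per-lesion paired comparison of SUVmax described above, using scipy's paired t-test; the numbers are toy values, not study data:

        import numpy as np
        from scipy import stats

        # per-lesion SUVmax pairs measured on the 2-D and 3-D reconstructions
        # (toy values; the study reports lesion means of 7.65 for 2-D and 6.75 for 3-D)
        suv_2d = np.array([4.2, 9.1, 12.5, 3.8, 7.7, 15.2, 6.4, 8.9])
        suv_3d = np.array([3.9, 8.2, 11.1, 3.6, 7.0, 13.8, 5.9, 8.1])

        diff = suv_3d - suv_2d                          # paired differences (3-D minus 2-D)
        t_stat, p_value = stats.ttest_rel(suv_3d, suv_2d)
        print("mean difference:", diff.mean(), " p =", p_value)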

  2. Radon transform based automatic metal artefacts generation for 3D threat image projection

    NASA Astrophysics Data System (ADS)

    Megherbi, Najla; Breckon, Toby P.; Flitton, Greg T.; Mouton, Andre

    2013-10-01

    Threat Image Projection (TIP) plays an important role in aviation security. In order to evaluate human security screeners in determining threats, TIP systems project images of realistic threat items into the images of the passenger baggage being scanned. In this proof-of-concept paper, we propose a 3D TIP method which can be integrated within new 3D Computed Tomography (CT) screening systems. In order to make the threat items appear as if they were genuinely located in the scanned bag, appropriate CT metal artefacts are generated in the resulting TIP images according to the scan orientation, the passenger bag content and the material of the inserted threat items. This process is performed in the projection domain using a novel methodology based on the Radon Transform. The results obtained using challenging 3D CT baggage images are very promising in terms of plausibility and realism.
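
    A simplified Python sketch of projection-domain threat insertion using the Radon transform (scikit-image); the purely additive sinogram model shown here ignores the beam-hardening physics that the paper exploits to synthesise realistic metal artefacts:

        import numpy as np
        from skimage.transform import radon, iradon

        def insert_threat_in_projection_domain(bag_slice, threat_slice, theta=None):
            """Insert a threat item into a baggage CT slice by adding its forward
            projections to the bag's sinogram and reconstructing; working in the
            projection domain is what allows scan-consistent artefacts to appear."""
            if theta is None:
                theta = np.linspace(0.0, 180.0, max(bag_slice.shape), endpoint=False)
            sino_bag = radon(bag_slice, theta=theta)
            sino_threat = radon(threat_slice, theta=theta)
            combined = sino_bag + sino_threat           # simple additive model
            return iradon(combined, theta=theta)

        bag = np.zeros((128, 128)); bag[40:90, 30:100] = 0.2       # toy suitcase
        threat = np.zeros((128, 128)); threat[60:70, 60:70] = 2.0  # dense toy item
        tip_slice = insert_threat_in_projection_domain(bag, threat)
        print(tip_slice.shape)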

  3. 3D seismic imaging, example of 3D area in the middle of Banat

    NASA Astrophysics Data System (ADS)

    Antic, S.

    2009-04-01

    3D seismic imaging was carried out on a 3D seismic volume situated in the middle of the Banat region in Serbia. The 3D area is about 300 square kilometres. The aim of the 3D investigation was to define geological structures and tectonics, especially in the Mesozoic complex. The investigation targets are located at depths from 2000 to 3000 m. There are a number of wells in this area, but they are not deep enough to help in the interpretation. It was necessary to get a better seismic image in the deeper area. Acquisition parameters were satisfactory (good quality of input parameters, length of input data was 5 s, fold was up to 4000%) and the preprocessed data were satisfactory. GeoDepth is an integrated system for 3D velocity model building and for 3D seismic imaging. Input data for 3D seismic imaging consist of preprocessed data sorted into CMP gathers and RMS stacking velocity functions. Other types of input data are geological information derived from well data, time-migrated images and time-migrated maps. The workflow for this job was: loading and quality control of the input data (CMP gathers and velocity), creating the initial RMS Velocity Volume, PSTM, updating the RMS Velocity Volume, PSTM, building the Initial Interval Velocity Model, PSDM, updating the Interval Velocity Model, PSDM. In the first stage the aim is to derive an initial velocity model that is as simple as possible. The higher-frequency velocity changes are obtained in the updating stage. The next step, after running PSTM, is the time-to-depth conversion. After the model is built, we generate a 3D interval velocity volume and run 3D pre-stack depth migration. The main method for updating velocities is 3D tomography. The criteria used in velocity model determination are based on the flatness of pre-stack migrated gathers or the quality of the stacked image. The standard processing ended with poststack 3D time migration. Prestack depth migration is one of the most powerful tools available to the interpreter to develop an accurate velocity model and get

  4. 3D Soil Images Structure Quantification using Relative Entropy

    NASA Astrophysics Data System (ADS)

    Tarquis, A. M.; Gonzalez-Nieto, P. L.; Bird, N. R. A.

    2012-04-01

    Soil voids manifest the cumulative effect of local pedogenic processes and ultimately influence soil behavior - especially as it pertains to aeration and hydrophysical properties. Because of the relatively weak attenuation of X-rays by air, compared with liquids or solids, non-disruptive CT scanning has become a very attractive tool for generating three-dimensional imagery of soil voids. One of the main steps involved in this analysis is the thresholding required to transform the original (greyscale) images into the type of binary representation (e.g., pores in white, solids in black) needed for fractal analysis or simulation with Lattice-Boltzmann models (Baveye et al., 2010). The objective of the current work is to apply an innovative approach to quantifying soil voids and pore networks in original X-ray CT imagery using Relative Entropy (Bird et al., 2006; Tarquis et al., 2008). These will be illustrated using typical imagery representing contrasting soil structures. Particular attention will be given to the need to consider the full 3D context of the CT imagery, as well as scaling issues, in the application and interpretation of this index.
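
    A generic Python sketch of a relative-entropy (Kullback-Leibler) index on local porosities computed from a thresholded CT volume; the actual index of Bird et al./Tarquis et al. differs in detail, so the binomial reference distribution and box size used here are assumptions for illustration:

        import numpy as np
        from scipy.stats import entropy, binom

        def local_porosity_relative_entropy(binary_pores, box=8, bins=16):
            """KL divergence between the observed distribution of local porosities
            (in non-overlapping boxes) and the distribution expected if pores were
            spread at random at the same overall porosity."""
            nz, ny, nx = (s // box for s in binary_pores.shape)
            vol = binary_pores[:nz * box, :ny * box, :nx * box].astype(float)
            # porosity of each box
            local = vol.reshape(nz, box, ny, box, nx, box).mean(axis=(1, 3, 5)).ravel()
            edges = np.linspace(0.0, 1.0, bins + 1)
            p_obs, _ = np.histogram(local, bins=edges)
            p_obs = p_obs / p_obs.sum()
            # reference: binomial local-porosity distribution at the overall porosity
            phi = vol.mean()
            k = np.arange(box ** 3 + 1)
            pmf = binom.pmf(k, box ** 3, phi)
            p_ref, _ = np.histogram(k / box ** 3, bins=edges, weights=pmf)
            p_ref = p_ref / p_ref.sum()
            eps = 1e-12
            return entropy(p_obs + eps, p_ref + eps)    # KL(observed || reference)

        pores = np.random.rand(64, 64, 64) < 0.3        # toy random pore structure
        print(local_porosity_relative_entropy(pores))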

  5. 3D iterative full and half scan reconstruction in CT architectures with distributed sources

    NASA Astrophysics Data System (ADS)

    Iatrou, M.; De Man, B.; Beque, D.; Yin, Z.; Khare, K.; Benson, T. M.

    2008-03-01

    In 3rd-generation CT systems, projection data, generated by X-rays emitted from a single source and passing through the imaged object, are acquired by a single detector covering the entire field of view (FOV). Novel CT system architectures employing distributed sources [1,2] could extend the axial coverage, while removing cone-beam artifacts and improving spatial resolution and dose. The sources can be distributed in plane and/or in the longitudinal direction. We investigate statistical iterative reconstruction of multi-axial data, acquired with simulated CT systems with multiple sources distributed along the in-plane and longitudinal directions. The current study explores the feasibility of 3D iterative Full and Half Scan reconstruction methods for CT systems with two different architectures. In the first architecture the sources are distributed in the longitudinal direction, and in the second architecture the sources are distributed both longitudinally and trans-axially. We used Penalized Weighted Least Squares Transmission Reconstruction (PWLSTR) and incorporated a projector-backprojector model matching the simulated architectures. The proposed approaches minimize artifacts related to the proposed geometries. The reconstructed images show that the investigated architectures can achieve good image quality for very large coverage without severe cone-beam artifacts.

  6. High-Performance 3D Image Processing Architectures for Image-Guided Interventions

    DTIC Science & Technology

    2008-01-01

    D. J. Hawkes, "Voxel-based 2-D/3-D registration of fluoroscopy images and CT scans for image-guided surgery," IEEE Transactions on Information...guided minimally invasive surgery," Surgical Innovation, (in preparation), 2008. • O. Dandekar, W. Plishker, S. S. Bhattacharyya, and R. Shekhar... surgeries, biopsies, and therapies, have the potential to improve patient care by enabling new and faster procedures, minimizing unintended damage

  7. Research of range-gated 3D imaging technology

    NASA Astrophysics Data System (ADS)

    Yang, Haitao; Zhao, Hongli; Youchen, Fan

    2016-10-01

    Laser image data-based target recognition technology is one of the key technologies of laser active imaging systems. This paper discusses the status quo of 3-D imaging development at home and abroad, analyzes the current technological bottlenecks, and describes a prototype range-gated system built to obtain a set of range-gated slice images; the 3-D images of the target were then constructed by the binary method and the centroid method, respectively, and by constructing different numbers of slice images we explored the relationship between the number of images and the reconstruction accuracy in the 3-D image reconstruction process. The experiment analyzed the impact of the two algorithms, the binary method and the centroid method, on the results of 3-D image reconstruction. In the binary method, a comparative analysis was made of the impact of different threshold values on the results of reconstruction, where 0.1, 0.2, 0.3 and adaptive threshold values were selected for 3-D reconstruction of the slice images. In the centroid method, 15, 10, 6, 3, and 2 images were respectively used to realize 3-D reconstruction. Experimental results showed that, with the same number of slice images, the accuracy of the centroid method was higher than that of the binary method, and the binary method depended strongly on the selection of the threshold; as the number of slice images dwindled, the accuracy of images reconstructed by the centroid method continued to decrease, and at least three slice images were required in order to obtain one 3-D image.
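
    A small Python sketch contrasting the centroid method with the binary method for building a depth map from range-gated slice images; the gate ranges and data are toy values, not those of the prototype system:

        import numpy as np

        def centroid_depth_map(slices, gate_ranges_m):
            """Centroid (range-weighted mean) depth estimate from a stack of
            range-gated slice images; the binary alternative would instead assign
            each pixel the range of the first slice whose intensity exceeds a
            chosen threshold.

            slices        : array (n_slices, rows, cols) of gated intensity images
            gate_ranges_m : array (n_slices,) of the range associated with each gate
            """
            slices = np.asarray(slices, dtype=float)
            r = np.asarray(gate_ranges_m, dtype=float)[:, None, None]
            weight_sum = slices.sum(axis=0)
            depth = (slices * r).sum(axis=0) / np.where(weight_sum > 0, weight_sum, np.nan)
            return depth                                # metres; NaN where no return

        gates = np.linspace(100.0, 114.0, 15)           # 15 gates, 1 m apart
        stack = np.random.rand(15, 64, 64)
        print(np.nanmean(centroid_depth_map(stack, gates)))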

  8. 3D x-ray reconstruction using lightfield imaging

    NASA Astrophysics Data System (ADS)

    Saha, Sajib; Tahtali, Murat; Lambert, Andrew; Pickering, Mark R.

    2014-09-01

    Existing Computed Tomography (CT) systems require projections over a full 360° rotation. Using the principles of lightfield imaging, as few as 4 projections can be sufficient under ideal conditions when the object is illuminated with multiple point X-ray sources. The concept was presented in a previous work with synthetically sampled data from a synthetic phantom. Application to real data requires precise calibration of the physical setup. The current work presents the calibration procedures along with experimental findings for the reconstruction of a physical 3D phantom consisting of simple geometric shapes. The crucial part of this process is to determine the effective distances of the X-ray paths, which are difficult or impossible to obtain by direct measurement. Instead, they are calculated by tracking the positions of fiducial markers under prescribed source and object movements. Iterative algorithms are used for the reconstruction. A customized backprojection is used to provide a better initial guess for the iterative algorithms to start from.

  9. Deformable 3D-2D registration for CT and its application to low dose tomographic fluoroscopy.

    PubMed

    Flach, Barbara; Brehm, Marcus; Sawall, Stefan; Kachelrieß, Marc

    2014-12-21

    Many applications in medical imaging include image registration for matching of images from the same or different modalities. In the case of full data sampling, the respective reconstructed images are usually of such good image quality that standard deformable volume-to-volume (3D-3D) registration approaches can be applied. But research in temporally correlated image reconstruction and dose reduction increases the number of cases where rawdata are available from only a few projection angles. Here, deteriorated image quality leads to unacceptable deformable volume-to-volume registration results. Therefore, a registration approach is required that is robust against a decreasing number of projections defining the target position. We propose a deformable volume-to-rawdata (3D-2D) registration method that aims at finding a displacement vector field maximizing the alignment of a CT volume and the acquired rawdata based on the sum of squared differences in the rawdata domain. The registration is constrained by a regularization term in accordance with a fluid-based diffusion. Both cost function components, the rawdata fidelity and the regularization term, are optimized in an alternating manner. The matching criterion is optimized by a conjugate gradient descent for nonlinear functions, while the regularization is realized by convolution of the vector fields with Gaussian kernels. We validate the proposed method and compare it to the demons algorithm, a well-known 3D-3D registration method. The comparison is done for a range of 4-60 target projections using datasets from low dose tomographic fluoroscopy as an application example. The results show a high correlation to the ground truth target position without introducing artifacts even in the case of very few projections. In particular, the matching in the rawdata domain is improved compared to the 3D-3D registration for the investigated range. The proposed volume-to-rawdata registration increases the robustness regarding sparse
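
    A much simplified 2D analogue of the volume-to-rawdata idea is sketched below: the cost compares forward projections of the warped image with the measured rawdata (sum of squared differences), and each update of the displacement field is smoothed with a Gaussian kernel as a fluid-type regularization. The projector here is skimage's radon transform and all parameter names are assumptions; the actual method operates on 3D volumes with the scanner geometry.

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates, gaussian_filter
    from skimage.transform import radon

    def warp(image, u):
        """Deform `image` by a displacement field u of shape (2, H, W)."""
        yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]].astype(float)
        return map_coordinates(image, [yy + u[0], xx + u[1]], order=1, mode="nearest")

    def rawdata_ssd(image, u, sinogram, theta):
        """SSD between projections of the warped image and the measured rawdata."""
        proj = radon(warp(image, u), theta=theta, circle=False)
        return 0.5 * np.sum((proj - sinogram) ** 2)

    def fluid_smooth(update, sigma=2.0):
        """Fluid-type regularization: Gaussian-smooth each displacement component."""
        return np.stack([gaussian_filter(c, sigma) for c in update])
    ```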

  10. Deformable 3D-2D registration for CT and its application to low dose tomographic fluoroscopy

    NASA Astrophysics Data System (ADS)

    Flach, Barbara; Brehm, Marcus; Sawall, Stefan; Kachelrieß, Marc

    2014-12-01

    Many applications in medical imaging include image registration for matching of images from the same or different modalities. In the case of full data sampling, the respective reconstructed images are usually of such good image quality that standard deformable volume-to-volume (3D-3D) registration approaches can be applied. But research in temporally correlated image reconstruction and dose reduction increases the number of cases where rawdata are available from only a few projection angles. Here, deteriorated image quality leads to unacceptable deformable volume-to-volume registration results. Therefore, a registration approach is required that is robust against a decreasing number of projections defining the target position. We propose a deformable volume-to-rawdata (3D-2D) registration method that aims at finding a displacement vector field maximizing the alignment of a CT volume and the acquired rawdata based on the sum of squared differences in the rawdata domain. The registration is constrained by a regularization term in accordance with a fluid-based diffusion. Both cost function components, the rawdata fidelity and the regularization term, are optimized in an alternating manner. The matching criterion is optimized by a conjugate gradient descent for nonlinear functions, while the regularization is realized by convolution of the vector fields with Gaussian kernels. We validate the proposed method and compare it to the demons algorithm, a well-known 3D-3D registration method. The comparison is done for a range of 4-60 target projections using datasets from low dose tomographic fluoroscopy as an application example. The results show a high correlation to the ground truth target position without introducing artifacts even in the case of very few projections. In particular, the matching in the rawdata domain is improved compared to the 3D-3D registration for the investigated range. The proposed volume-to-rawdata registration increases the robustness regarding sparse

  11. 3D Imaging by Mass Spectrometry: A New Frontier

    PubMed Central

    Seeley, Erin H.; Caprioli, Richard M.

    2012-01-01

    Summary Imaging mass spectrometry can generate three-dimensional volumes showing molecular distributions in an entire organ or animal through registration and stacking of serial tissue sections. Here we review the current state of 3D imaging mass spectrometry as well as provide insights and perspectives on the process of generating 3D mass spectral data along with a discussion of the process necessary to generate a 3D image volume. PMID:22276611

  12. 3D Reconstruction from a Single Image

    DTIC Science & Technology

    2008-08-01

    ...accurately learn 3D priors using a single camera and the Radon transform. While we could certainly use this method in the work here presented (the

  13. Development of 3D-CT System Using MIRRORCLE-6X

    NASA Astrophysics Data System (ADS)

    Sasaki, M.; Takaku, J.; Hirai, T.; Yamada, H.

    2007-03-01

    The technique of computed tomography (CT) has been used in various fields, such as medical imaging, non-destructive testing (NDT), baggage checking, etc. A 3D-CT system based on the portable synchrotron "MIRRORCLE" series will be a novel instrument for these fields. The hard x-rays generated from the "MIRRORCLE" have a wide energy spectrum. Light and thin materials create absorption and refraction contrast in x-ray images by the lower energy component (< 60 keV), and heavy and thick materials create absorption contrast by the higher energy component. In addition, images with higher resolutions can be obtained using "MIRRORCLE", which has a small source size of micron order. Thus, high resolution 3D-CT images of specimens containing both light and heavy materials can be obtained using "MIRRORCLE" and a 2D detector with a wide dynamic range. In this paper, the development and output of a 3D-CT system using the "MIRRORCLE-6X" and a flat panel detector are reported. A 3D image of a piece of concrete was obtained. The detector was a flat panel detector (VARIAN, PAXSCAN2520) with 254 μm pixel size. The object and the detector were set at 50 cm and 250 cm, respectively, from the x-ray source, so that the magnification was 5x. The x-ray source was a 50 μm Pt rod. The rotation stage and the detector were remote-controlled by a computer, using control software originally written in LabVIEW and Visual Basic. The exposure time was about 20 minutes. The reconstruction calculation was based on the Feldkamp algorithm, and the pixel size was 50 μm. We could observe sub-mm holes and density differences in the object. Thus, the "MIRRORCLE-CV" with 1 MeV electron energy, which is based on the same x-ray generation principle, will be an excellent x-ray source for medical diagnostics and NDT.
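
    The quoted geometry can be checked with the standard point-source projection relation; the snippet below only reuses the numbers from this record (50 cm source-to-object, 250 cm source-to-detector, 254 μm detector pixels) and is not specific to MIRRORCLE.

    ```python
    source_to_object_cm = 50.0
    source_to_detector_cm = 250.0
    detector_pixel_um = 254.0

    magnification = source_to_detector_cm / source_to_object_cm   # 5.0, i.e. 5x
    effective_pixel_um = detector_pixel_um / magnification        # ~50.8 um at the object,
    print(magnification, effective_pixel_um)                      # consistent with the 50 um voxels
    ```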

  14. Development of 3D-CT System Using MIRRORCLE-6X

    SciTech Connect

    Sasaki, M.; Yamada, H.; Takaku, J.; Hirai, T.

    2007-03-30

    The technique of computed tomography (CT) has been used in various fields, such as medical imaging, non-destructive testing (NDT), baggage checking, etc. A 3D-CT system based on the portable synchrotron 'MIRRORCLE' series will be a novel instrument for these fields. The hard x-rays generated from the 'MIRRORCLE' have a wide energy spectrum. Light and thin materials create absorption and refraction contrast in x-ray images by the lower energy component (< 60 keV), and heavy and thick materials create absorption contrast by the higher energy component. In addition, images with higher resolutions can be obtained using 'MIRRORCLE', which has a small source size of micron order. Thus, high resolution 3D-CT images of specimens containing both light and heavy materials can be obtained using 'MIRRORCLE' and a 2D detector with a wide dynamic range. In this paper, the development and output of a 3D-CT system using the 'MIRRORCLE-6X' and a flat panel detector are reported. A 3D image of a piece of concrete was obtained. The detector was a flat panel detector (VARIAN, PAXSCAN2520) with 254 μm pixel size. The object and the detector were set at 50 cm and 250 cm, respectively, from the x-ray source, so that the magnification was 5x. The x-ray source was a 50 μm Pt rod. The rotation stage and the detector were remote-controlled by a computer, using control software originally written in LabVIEW and Visual Basic. The exposure time was about 20 minutes. The reconstruction calculation was based on the Feldkamp algorithm, and the pixel size was 50 μm. We could observe sub-mm holes and density differences in the object. Thus, the 'MIRRORCLE-CV' with 1 MeV electron energy, which is based on the same x-ray generation principle, will be an excellent x-ray source for medical diagnostics and NDT.

  15. 3D patient-specific model of the tibia from CT for orthopedic use

    PubMed Central

    González-Carbonell, Raide A.; Ortiz-Prado, Armando; Jacobo-Armendáriz, Victor H.; Cisneros-Hidalgo, Yosbel A.; Alpízar-Aguirre, Armando

    2015-01-01

    Objectives A 3D patient-specific model of the tibia is used to determine the torque needed to initiate the correction of tibial torsion. Methods The finite element method is used for the biomechanical modeling of the tibia. The geometric model of the tibia is obtained from CT images. The tibia is modeled as an anisotropic material with non-homogeneous mechanical properties. Conclusions The maximum stress is located in the tibial diaphysis (shaft). Both meshes yield similar stress and displacement results. For this patient-specific model, the torque must be greater than 30 Nm to initiate the correction of the tibial torsion deformity. PMID:25829755

  16. 3D patient-specific model of the tibia from CT for orthopedic use.

    PubMed

    González-Carbonell, Raide A; Ortiz-Prado, Armando; Jacobo-Armendáriz, Victor H; Cisneros-Hidalgo, Yosbel A; Alpízar-Aguirre, Armando

    2015-03-01

    A 3D patient-specific model of the tibia is used to determine the torque needed to initiate the correction of tibial torsion. The finite element method is used for the biomechanical modeling of the tibia. The geometric model of the tibia is obtained from CT images. The tibia is modeled as an anisotropic material with non-homogeneous mechanical properties. The maximum stress is located in the tibial diaphysis (shaft). Both meshes yield similar stress and displacement results. For this patient-specific model, the torque must be greater than 30 Nm to initiate the correction of the tibial torsion deformity.

  17. Radiation dose reduction for coronary artery calcium scoring at 320-detector CT with adaptive iterative dose reduction 3D.

    PubMed

    Tatsugami, Fuminari; Higaki, Toru; Fukumoto, Wataru; Kaichi, Yoko; Fujioka, Chikako; Kiguchi, Masao; Yamamoto, Hideya; Kihara, Yasuki; Awai, Kazuo

    2015-06-01

    To assess the possibility of reducing the radiation dose for coronary artery calcium (CAC) scoring by using adaptive iterative dose reduction 3D (AIDR 3D) on a 320-detector CT scanner. Fifty-four patients underwent routine- and low-dose CT for CAC scoring. Low-dose CT was performed at one-third of the tube current used for routine-dose CT. Routine-dose CT was reconstructed with filtered back projection (FBP) and low-dose CT was reconstructed with AIDR 3D. We compared the calculated Agatston, volume, and mass scores of these images. The overall percentage differences in the Agatston, volume, and mass scores between the routine- and low-dose CT studies were 15.9, 11.6, and 12.6%, respectively. There were no significant differences between the routine- and low-dose CT studies irrespective of the scoring algorithm applied. The CAC measurements of the two protocols were highly correlated with respect to the Agatston (r = 0.996), volume (r = 0.996), and mass scores (r = 0.997; p < 0.001 for all); the Bland-Altman limits of agreement were -37.4 to 51.4, -31.2 to 36.4 and -30.3 to 40.9%, respectively, suggesting that AIDR 3D is a good alternative to FBP. The mean effective radiation doses for routine- and low-dose CT were 2.2 and 0.7 mSv, respectively. The use of AIDR 3D made it possible to reduce the radiation dose for CAC scoring by 67% without impairing the quantification of coronary calcification.
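
    For reference, a per-slice Agatston score can be sketched as below, assuming an axial HU image `slice_hu`, the pixel area in mm^2, the conventional 130 HU threshold, and density weights 1-4; the lesion grouping and the 1 mm^2 minimum area follow common practice and are not taken from the study above.

    ```python
    import numpy as np
    from scipy import ndimage

    def agatston_slice_score(slice_hu, pixel_area_mm2, min_area_mm2=1.0):
        weights = [(400, 4), (300, 3), (200, 2), (130, 1)]   # peak HU threshold -> weight
        labels, n = ndimage.label(slice_hu >= 130)
        score = 0.0
        for lesion in range(1, n + 1):
            region = labels == lesion
            area = region.sum() * pixel_area_mm2
            if area < min_area_mm2:
                continue                                     # ignore sub-millimetre specks
            peak = slice_hu[region].max()
            weight = next(w for t, w in weights if peak >= t)
            score += area * weight
        return score

    # The total Agatston score is the sum of slice scores over the calcium scan.
    ```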

  18. 3D Imaging Millimeter Wave Circular Synthetic Aperture Radar

    PubMed Central

    Zhang, Renyuan; Cao, Siyang

    2017-01-01

    In this paper, a new millimeter wave 3D imaging radar is proposed. The user only needs to move the radar along a circular track, and high-resolution 3D images can be generated. The proposed radar uses its own movement to synthesize a large aperture in both the azimuth and elevation directions. It uses the inverse Radon transform to reconstruct the 3D image. To improve the sensing result, a compressed sensing approach is further investigated. Simulation and experimental results further illustrate the design. Because only a single transceiver circuit is needed, the result is a light, affordable and high-resolution 3D mmWave imaging radar. PMID:28629140

  19. Registration uncertainties between 3D cone beam computed tomography and different reference CT datasets in lung stereotactic body radiation therapy.

    PubMed

    Oechsner, Markus; Chizzali, Barbara; Devecka, Michal; Combs, Stephanie Elisabeth; Wilkens, Jan Jakob; Duma, Marciana Nona

    2016-10-26

    The aim of this study was to analyze differences in couch shifts (setup errors) resulting from image registration of different CT datasets with free breathing cone beam CTs (FB-CBCT). Both automatic and manual image registrations were performed, and the registration results were correlated with tumor characteristics. FB-CBCT image registration was performed for 49 patients with lung lesions using slow planning CT (PCT), average intensity projection (AIP), maximum intensity projection (MIP) and mid-ventilation CTs (MidV) as reference images. Both automatic and manual image registrations were applied. Shift differences were evaluated between the registered CT datasets for automatic and manual registration, respectively. Furthermore, differences between automatic and manual registration were analyzed for the same CT datasets. The registration results were statistically analyzed and correlated with tumor characteristics (3D tumor motion, tumor volume, superior-inferior (SI) distance, tumor environment). Median 3D shift differences over all patients were between 0.5 mm (AIPvsMIP) and 1.9 mm (MIPvsPCT and MidVvsPCT) for the automatic registration and between 1.8 mm (AIPvsPCT) and 2.8 mm (MIPvsPCT and MidVvsPCT) for the manual registration. For some patients, large shift differences (>5.0 mm) were found (maximum 10.5 mm, automatic registration). Comparing automatic vs manual registrations for the same reference CTs, ∆AIP achieved the smallest (1.1 mm) and ∆MIP the largest (1.9 mm) median 3D shift differences. The standard deviation (variability) of the 3D shift differences was also smallest for ∆AIP (1.1 mm). Significant correlations (p < 0.01) between the 3D shift difference and 3D tumor motion (AIPvsMIP, MIPvsMidV) and SI distance (AIPvsMIP) (automatic), and also for 3D tumor motion (∆PCT, ∆MidV; automatic vs manual), were found. Using different CT datasets for image registration with FB-CBCTs can result in different 3D couch shifts. Manual registrations

  20. Optical 3D imaging and visualization of concealed objects

    NASA Astrophysics Data System (ADS)

    Berginc, G.; Bellet, J.-B.; Berechet, I.; Berechet, S.

    2016-09-01

    This paper gives new insights into optical 3D imagery. We explore the advantages of laser imagery in forming a three-dimensional image of the scene. 3D laser imaging can be used for three-dimensional medical imaging and surveillance because of its ability to identify tumors or concealed objects. We consider the problem of 3D reconstruction based upon 2D angle-dependent laser images. The objective of this new 3D laser imaging is to provide users with a complete 3D reconstruction of objects from a limited number of available 2D images. The 2D laser data used in this paper come from simulations based on the calculation of laser interactions with the different meshed objects of the scene of interest, or from experimental 2D laser images. We show that combining the Radon transform on 2D laser images with the Maximum Intensity Projection can generate 3D views of the considered scene from which we can extract the 3D concealed object in real time. With different original numerical and experimental examples, we investigate the effects of the input contrasts. We show the robustness and stability of the method. We have developed a new patented method of 3D laser imaging based on three-dimensional reflective tomographic reconstruction algorithms and an associated visualization method. In this paper we present the global 3D reconstruction and visualization procedures.

  1. Measurable realistic image-based 3D mapping

    NASA Astrophysics Data System (ADS)

    Liu, W.; Wang, J.; Wang, J. J.; Ding, W.; Almagbile, A.

    2011-12-01

    Maps with 3D visual models are becoming a remarkable feature of 3D map services. High-resolution image data are obtained for the construction of 3D visualized models. The 3D map not only provides the capabilities of 3D measurement and knowledge mining, but also provides a virtual experience of places of interest, as demonstrated in Google Earth. Applications of 3D maps are expanding into the areas of architecture, property management, and urban environment monitoring. However, the reconstruction of high quality 3D models is time consuming, and requires robust hardware and powerful software to handle the enormous amount of data. This is especially true for the automatic generation of 3D models and the representation of complicated surfaces, which still need improvements in visualisation techniques. The shortcoming of 3D model-based maps is the limited coverage of detail, since a user can only view and measure objects that are already modelled in the virtual environment. This paper proposes and demonstrates a 3D map concept that is realistic and image-based, and that enables geometric measurements and geo-location services. Additionally, image-based 3D maps provide more detailed information about the real world than 3D model-based maps. The image-based 3D maps use geo-referenced stereo images or panoramic images. The geometric relationships between objects in the images can be resolved from the geometric model of the stereo images. The panoramic function makes 3D maps more interactive for users and also creates an interesting immersive experience. Unmeasurable image-based 3D maps already exist, such as Google Street View, but they only provide virtual experiences in the form of photos; topographic and terrain attributes, such as shapes and heights, are omitted. This paper also discusses the potential for using a low cost land Mobile Mapping System (MMS) to implement realistic image-based 3D mapping, and evaluates the positioning accuracy that a measurable

  2. 3D cardiac motion reconstruction from CT data and tagged MRI.

    PubMed

    Wang, Xiaoxu; Mihalef, Viorel; Qian, Zhen; Voros, Szilard; Metaxas, Dimitris

    2012-01-01

    In this paper we present a novel method for left ventricle (LV) endocardium motion reconstruction using high resolution CT data and tagged MRI. High resolution CT data provide anatomic details of the LV endocardial surface, such as the papillary muscles and trabeculae carneae. Tagged MRI provides better temporal resolution. The combination of these two imaging techniques can give us a better understanding of left ventricle motion. The high resolution CT images are segmented with the mean shift method to generate the LV endocardium mesh. The meshless deformable model, built from the high resolution endocardial surface extracted from the CT data, is fitted to the tagged MRI of the same phase. 3D deformation of the myocardium is computed with Lagrangian dynamics and local Laplacian deformation. The segmented inner surface of the left ventricle is compared with a picture of the inner heart surface and shows high agreement. The papillary muscles are attached to the inner surface at their roots. The free wall of the left ventricle inner surface is covered with trabeculae carneae. The deformation of the heart wall and the papillary muscles over the first half of the cardiac cycle is presented. The motion reconstruction results closely match the live heart video.

  3. 3D Dose Verification Using Tomotherapy CT Detector Array

    SciTech Connect

    Sheng Ke; Jones, Ryan; Yang Wensha; Saraiya, Siddharth; Schneider, Bernard; Chen Quan; Sobering, Geoff; Olivera, Gustavo; Read, Paul

    2012-02-01

    Purpose: To evaluate a three-dimensional dose verification method based on the exit dose using the onboard detector of tomotherapy. Methods and Materials: The study included 347 treatment fractions from 24 patients, including 10 prostate, 5 head and neck (HN), and 9 spinal stereotactic body radiation therapy (SBRT) cases. Detector sinograms were retrieved and back-projected to calculate the entrance fluence, which was then forward-projected onto the CT images to calculate the verification dose; this was compared with ion chamber and film measurements in the QA plans and with the planning dose in patient plans. Results: Root mean square (RMS) errors of 2.0%, 2.2%, and 2.0% were observed comparing the dose verification (DV) and the ion chamber measured point dose in the phantom plans for HN, prostate, and spinal SBRT patients, respectively. When cumulative dose in the entire treatment is considered, for HN patients, the error of the mean dose to the planning target volume (PTV) varied from 1.47% to 5.62% with a RMS error of 3.55%. For prostate patients, the error of the mean dose to the prostate target volume varied from -5.11% to 3.29%, with a RMS error of 2.49%. The RMS errors of the maximum doses to the bladder and the rectum were 2.34% (-4.17% to 2.61%) and 2.64% (-4.54% to 3.94%), respectively. For the nine spinal SBRT patients, the RMS error of the minimum dose to the PTV was 2.43% (-5.39% to 2.48%). The RMS error of the maximum dose to the spinal cord was 1.05% (-2.86% to 0.89%). Conclusions: An excellent agreement was observed between the measurement and the verification dose. In the patient treatments, the agreement in doses to the majority of PTVs and organs at risk is within 5% for the cumulative treatment course doses. The dosimetric error depends strongly on the error in multileaf collimator leaf opening time, with a sensitivity correlating with the gantry rotation period.

  4. Light field display and 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Iwane, Toru

    2016-06-01

    Light field optics and its applications have become rather popular these days. With light field optics, real 3D space can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data. In the latter procedure, 3D information is encoded onto a plane as 2D data by a lens array plate. This transformation is reversible, and the acquired light field data can be decoded again into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on a chosen point after the picture has been taken), the light-field camera's most popular function, is a kind of sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper I first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. Second, I explain our data processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is characterized by a few features: the reconstructed image has finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens array plate with the flat display on which the light field data are displayed.
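
    The refocusing operation described here can be illustrated with a shift-and-add sketch in the spatial (real) domain, assuming `elemental` holds the elemental images as an array of shape (U, V, H, W) and that refocusing to a chosen depth corresponds to a pixel shift proportional to each lens's offset from the array centre; this linear-shift model is an assumption for illustration only.

    ```python
    import numpy as np

    def refocus(elemental, shift_per_view):
        """Average the elemental images after shifting each one according to its (u, v) offset."""
        U, V, H, W = elemental.shape
        cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
        out = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                dy = int(round((u - cu) * shift_per_view))
                dx = int(round((v - cv) * shift_per_view))
                out += np.roll(elemental[u, v], shift=(dy, dx), axis=(0, 1))
        return out / (U * V)
    ```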

  5. 3D Imaging with Structured Illumination for Advanced Security Applications

    SciTech Connect

    Birch, Gabriel Carisle; Dagel, Amber Lynn; Kast, Brian A.; Smith, Collin S.

    2015-09-01

    Three-dimensional (3D) information in a physical security system is a highly useful discriminator. The two-dimensional data from an imaging system fail to provide target distance and the three-dimensional motion vector, which can be used to reduce nuisance alarm rates and increase system effectiveness. However, 3D imaging devices designed primarily for use in physical security systems are uncommon. This report discusses an architecture favorable to physical security systems: an inexpensive snapshot 3D imaging system utilizing a simple illumination system. The method of acquiring 3D data, tests to understand the illumination design, and possible software modifications to maximize information gathering capability are discussed.

  6. 3D photon counting integral imaging with unknown sensor positions.

    PubMed

    Xiao, Xiao; Javidi, Bahram

    2012-05-01

    Photon counting techniques have been introduced with integral imaging for three-dimensional (3D) imaging applications. Previous reports in this area assumed a priori knowledge of exact sensor positions for 3D image reconstruction, which may be difficult to satisfy in certain applications. In this paper, we extend the photon counting 3D imaging system to situations where the sensor positions are unknown. To estimate sensor positions in photon counting integral imaging, scene details of the photon counting images are needed for image correspondence matching. Therefore, an iterative method based on the total variation maximum a posteriori expectation maximization (MAP-EM) algorithm is used to restore the photon counting images. Experimental results are presented to show the feasibility of the method. To the best of our knowledge, this is the first report on 3D photon counting integral imaging with unknown sensor positions. © 2012 Optical Society of America
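
    The photon-starved regime can be mimicked with a Poisson model, and a total-variation filter gives a rough feel for the restoration step; the snippet below uses skimage's Chambolle TV denoiser as a simple stand-in for the MAP-EM restoration in the paper, and the expected photon count is an arbitrary assumption.

    ```python
    import numpy as np
    from skimage import data, img_as_float
    from skimage.restoration import denoise_tv_chambolle

    scene = img_as_float(data.camera())
    expected_photons = 0.05                          # mean photons per pixel (sparse regime)
    rate = scene / scene.mean() * expected_photons
    photon_img = np.random.poisson(rate).astype(float)

    restored = denoise_tv_chambolle(photon_img, weight=0.3)
    ```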

  7. On Alternative Approaches to 3D Image Perception: Monoscopic 3D Techniques

    NASA Astrophysics Data System (ADS)

    Blundell, Barry G.

    2015-06-01

    In the eighteenth century, techniques that enabled a strong sense of 3D perception to be experienced without recourse to binocular disparities (arising from the spatial separation of the eyes) underpinned the first significant commercial sales of 3D viewing devices and associated content. However following the advent of stereoscopic techniques in the nineteenth century, 3D image depiction has become inextricably linked to binocular parallax and outside the vision science and arts communities relatively little attention has been directed towards earlier approaches. Here we introduce relevant concepts and terminology and consider a number of techniques and optical devices that enable 3D perception to be experienced on the basis of planar images rendered from a single vantage point. Subsequently we allude to possible mechanisms for non-binocular parallax based 3D perception. Particular attention is given to reviewing areas likely to be thought-provoking to those involved in 3D display development, spatial visualization, HCI, and other related areas of interdisciplinary research.

  8. Augmented Reality Imaging System: 3D Viewing of a Breast Cancer.

    PubMed

    Douglas, David B; Boone, John M; Petricoin, Emanuel; Liotta, Lance; Wilson, Eugene

    2016-01-01

    To display images of breast cancer from a dedicated breast CT using Depth 3-Dimensional (D3D) augmented reality. A case of breast cancer imaged using contrast-enhanced breast CT (Computed Tomography) was viewed with the augmented reality imaging system, which uses a head display unit (HDU) and a joystick control interface. The augmented reality system demonstrated 3D viewing of the breast mass with head position tracking, stereoscopic depth perception, focal point convergence, and a 3D cursor; the joystick enabled a fly-through with visualization of the spiculations extending from the breast cancer. The augmented reality system provided 3D visualization of the breast cancer with depth perception and visualization of the mass's spiculations. The augmented reality system should be further researched to determine its utility in clinical practice.

  9. Augmented Reality Imaging System: 3D Viewing of a Breast Cancer

    PubMed Central

    Douglas, David B.; Boone, John M.; Petricoin, Emanuel; Liotta, Lance; Wilson, Eugene

    2016-01-01

    Objective To display images of breast cancer from a dedicated breast CT using Depth 3-Dimensional (D3D) augmented reality. Methods A case of breast cancer imaged using contrast-enhanced breast CT (Computed Tomography) was viewed with the augmented reality imaging system, which uses a head display unit (HDU) and a joystick control interface. Results The augmented reality system demonstrated 3D viewing of the breast mass with head position tracking, stereoscopic depth perception, focal point convergence, and a 3D cursor; the joystick enabled a fly-through with visualization of the spiculations extending from the breast cancer. Conclusion The augmented reality system provided 3D visualization of the breast cancer with depth perception and visualization of the mass's spiculations. The augmented reality system should be further researched to determine its utility in clinical practice. PMID:27774517

  10. Value of 3-D CT in classifying acetabular fractures during orthopedic residency training.

    PubMed

    Garrett, Jeffrey; Halvorson, Jason; Carroll, Eben; Webb, Lawrence X

    2012-05-01

    The complex anatomy of the pelvis and acetabulum has historically made classification and interpretation of acetabular fractures difficult for orthopedic trainees. The addition of 3-dimensional (3-D) computed tomography (CT) has gained popularity in the preoperative planning, identification, and teaching of acetabular fractures given their complexity. Therefore, the authors examined the value of 3-D CT compared with conventional radiography in classifying acetabular fractures at different levels of orthopedic training. Their hypothesis was that 3-D CT would improve correct identification of acetabular fractures compared with conventional radiography. The classic Letournel fracture pattern classification system was presented in quiz format to 57 orthopedic residents and 20 fellowship-trained orthopedic traumatologists. A case consisted of (1) plain radiographs and 2-dimensional axial CT scans or (2) 3-D CT scans. All levels of training showed significant improvement in classifying acetabular fractures with 3-D vs 2-D CT, with the greatest benefit from 3-D CT found in junior residents (postgraduate years 1-3). Three-dimensional CT scans can be an effective educational tool for understanding the complex spatial anatomy of the pelvis, learning acetabular fracture patterns, and correctly applying a widely accepted fracture classification system.

  11. Evaluation of endoscopic entire 3D image acquisition of the digestive tract using a stereo endoscope

    NASA Astrophysics Data System (ADS)

    Yoshimoto, Kayo; Watabe, Kenji; Fujinaga, Tetsuji; Iijima, Hideki; Tsujii, Masahiko; Takahashi, Hideya; Takehara, Tetsuo; Yamada, Kenji

    2017-02-01

    Because the view angle of the endoscope is narrow, it is difficult to capture the whole image of the digestive tract at once. If there are two or more lesions in the digestive tract, it is hard to understand the 3D positional relationship among them. Virtual endoscopy using CT is the current standard method to obtain a whole view of the digestive tract. Because virtual endoscopy is designed to detect surface irregularities, it cannot detect lesions that lack irregularity, including early cancer. In this study, we propose a method of endoscopic entire 3D image acquisition of the digestive tract using a stereo endoscope. The method is as follows: 1) capture sequential images of the digestive tract by moving the endoscope, 2) reconstruct the 3D surface pattern for each frame from the stereo images, 3) estimate the position of the endoscope by image analysis, 4) reconstitute the entire image of the digestive tract by combining the 3D surface patterns. To confirm the validity of this method, we experimented with a straight tube inside which circles were placed at equal distances of 20 mm. We captured sequential images, and the reconstituted image of the tube showed that the distance between adjacent circles was 20.2 ± 0.3 mm (n=7). The results suggest that this method of endoscopic entire 3D image acquisition may help us understand the 3D positional relationship among lesions, such as early esophageal cancer, that cannot be detected by virtual endoscopy using CT.

  12. Development of a 3D CT-scanner using a cone beam and video-fluoroscopic system.

    PubMed

    Endo, M; Yoshida, K; Kamagata, N; Satoh, K; Okazaki, T; Hattori, Y; Kobayashi, S; Jimbo, M; Kusakabe, M; Tateno, Y

    1998-01-01

    We describe the design and implementation of a system that acquires three-dimensional (3D) data of high-contrast objects such as bone, lung, and blood vessels (enhanced by contrast agent). This 3D computed tomography (CT) system is based on a cone beam and video-fluoroscopic system and yields data that is amenable to 3D image processing. An X-ray tube and a large area two-dimensional detector were mounted on a single frame and rotated around objects in 12 seconds. The large area detector consisted of a fluorescent plate and a charge coupled device (CCD) video camera. While the X-ray tube was rotated around the object, a pulsed X-ray was generated (30 pulses per second) and 360 projected images were collected in a 12-second scan. A 256 x 256 x 256 matrix image was reconstructed using a high-speed parallel processor. Reconstruction required approximately 6 minutes. Two volunteers underwent scans of the head or chest. High-contrast objects such as bronchial, vascular, and mediastinal structures in the thorax, or bones and air cavities in the head were delineated in a "real" 3D format. Our 3D CT-scanner appears to produce data useful for clinical imaging and 3D image processing.

  13. 3D augmented reality with integral imaging display

    NASA Astrophysics Data System (ADS)

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.

  14. 3D Forward and Back-Projection for X-Ray CT Using Separable Footprints

    PubMed Central

    Long, Yong; Fessler, Jeffrey A.; Balter, James M.

    2010-01-01

    Iterative methods for 3D image reconstruction have the potential to improve image quality over conventional filtered back projection (FBP) in X-ray computed tomography (CT). However, the computational burden of 3D cone-beam forward and back-projectors is one of the greatest challenges facing practical adoption of iterative methods for X-ray CT. Moreover, projector accuracy is also important for iterative methods. This paper describes two new separable footprint (SF) projector methods that approximate the voxel footprint functions as 2D separable functions. Because of the separability of these footprint functions, calculating their integrals over a detector cell is greatly simplified and can be implemented efficiently. The SF-TR projector uses trapezoid functions in the transaxial direction and rectangular functions in the axial direction, whereas the SF-TT projector uses trapezoid functions in both directions. Simulations and experiments showed that both SF projector methods are more accurate than the distance-driven (DD) projector, which is a current state-of-the-art method in the field. The SF-TT projector is more accurate than the SF-TR projector for rays associated with large cone angles. The SF-TR projector has a computation speed similar to that of the DD projector, and the SF-TT projector is about two times slower. PMID:20529732

  15. 3-D interactive visualisation tools for Hi spectral line imaging

    NASA Astrophysics Data System (ADS)

    van der Hulst, J. M.; Punzo, D.; Roerdink, J. B. T. M.

    2017-06-01

    Upcoming HI surveys will deliver such large datasets that automated processing using the full 3-D information to find and characterize HI objects is unavoidable. Full 3-D visualization is an essential tool for enabling qualitative and quantitative inspection and analysis of the 3-D data, which is often complex in nature. Here we present SlicerAstro, an open-source extension of 3DSlicer, a multi-platform open source software package for visualization and medical image processing, which we developed for the inspection and analysis of HI spectral line data. We describe its initial capabilities, including 3-D filtering, 3-D selection and comparative modelling.

  16. Real-time 3D dose imaging in water phantoms: reconstruction from simultaneous EPID-Cherenkov 3D imaging (EC3D)

    NASA Astrophysics Data System (ADS)

    Bruza, P.; Andreozzi, J. M.; Gladstone, D. J.; Jarvis, L. A.; Rottmann, J.; Pogue, B. W.

    2017-05-01

    Combination of electronic portal imaging device (EPID) transmission imaging with frontal Cherenkov imaging enabled real-time 3D dosimetry of clinical X-ray beams in water phantoms. The EPID provides a 2D transverse distribution of attenuation which can be back-projected to estimate accumulated dose, while the Cherenkov image provides an accurate lateral view of the dose versus depth. Assuming homogeneous density and composition of the phantom, both images can be linearly combined into a true 3D distribution of the deposited dose. We describe the algorithm for volumetric dose reconstruction, and demonstrate the results of a volumetric modulated arc therapy (VMAT) 3D dosimetry.
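
    One simple way such a transverse map and a lateral depth profile could be combined for a homogeneous water phantom is a separable product, sketched below; this form is an illustrative assumption and not necessarily the exact EC3D reconstruction used by the authors.

    ```python
    import numpy as np

    def combine_epid_cherenkov(epid_xy, depth_profile_z, dose_at_ref=1.0):
        """epid_xy: 2D transverse map (x, y); depth_profile_z: 1D dose-versus-depth curve."""
        epid_norm = epid_xy / epid_xy.max()
        pdd_norm = depth_profile_z / depth_profile_z.max()
        # Broadcast to (x, y, z): each transverse pixel scaled by the depth-dose curve.
        return dose_at_ref * epid_norm[:, :, None] * pdd_norm[None, None, :]
    ```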

  17. 3D lesion insertion in digital breast tomosynthesis images

    NASA Astrophysics Data System (ADS)

    Vaz, Michael S.; Besnehard, Quentin; Marchessoux, Cédric

    2011-03-01

    Digital breast tomosynthesis (DBT) is a new volumetric breast cancer screening modality. It is based on the principles of computed tomography (CT) and shows promise for improving sensitivity and specificity compared to digital mammography, which is the current standard protocol. A barrier to critically evaluating any new modality, including DBT, is the lack of patient data from which statistically significant conclusions can be drawn; such studies require large numbers of images from both diseased and healthy patients. Since the number of detected lesions is low in relation to the entire breast cancer screening population, there is a particular need to acquire or otherwise create diseased patient data. To meet this challenge, we propose a method to insert 3D lesions in the DBT images of healthy patients, such that the resulting images appear qualitatively faithful to the modality and could be used in future clinical trials or virtual clinical trials (VCTs). The method facilitates direct control of lesion placement and lesion-to-background contrast and is agnostic to the DBT reconstruction algorithm employed.

  18. Semi-automatic 3D segmentation of costal cartilage in CT data from Pectus Excavatum patients

    NASA Astrophysics Data System (ADS)

    Barbosa, Daniel; Queirós, Sandro; Rodrigues, Nuno; Correia-Pinto, Jorge; Vilaça, J.

    2015-03-01

    One of the current frontiers in the clinical management of Pectus Excavatum (PE) patients is the prediction of the surgical outcome prior to the intervention. This can be done through computerized simulation of the Nuss procedure, which requires an anatomically correct representation of the costal cartilage. To this end, we take advantage of the costal cartilage tubular structure to detect it through multi-scale vesselness filtering. This information is then used in an interactive 2D initialization procedure which uses anatomical maximum intensity projections of 3D vesselness feature images to efficiently initialize the 3D segmentation process. We identify the cartilage tissue centerlines in these projected 2D images using a livewire approach. We finally refine the 3D cartilage surface through region-based sparse field level-sets. We have tested the proposed algorithm in 6 noncontrast CT datasets from PE patients. A good segmentation performance was found against reference manual contouring, with an average Dice coefficient of 0.75 ± 0.04 and an average mean surface distance of 1.69 ± 0.30 mm. The proposed method requires roughly 1 minute for the interactive initialization step, which can positively contribute to an extended use of this tool in clinical practice, since current manual delineation of the costal cartilage can take up to an hour.
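
    The multi-scale vesselness step can be approximated with scikit-image's Frangi filter, used here as a generic tubular-structure enhancer; the soft-tissue window and scale range below are placeholder assumptions, not the authors' settings.

    ```python
    import numpy as np
    from skimage.filters import frangi

    def cartilage_vesselness(ct_volume_hu, sigmas=(1, 2, 3, 4)):
        """Enhance bright tubular structures (e.g. costal cartilage) in a 3D CT volume."""
        windowed = np.clip(ct_volume_hu, -100, 300)      # soft-tissue window
        windowed = (windowed + 100) / 400.0              # scale to [0, 1]
        return frangi(windowed, sigmas=sigmas, black_ridges=False)
    ```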

  19. 3D Biometrics for Hindfoot Alignment Using Weightbearing CT.

    PubMed

    Lintz, François; Welck, Matthew; Bernasconi, Alessio; Thornton, James; Cullen, Nicholas P; Singh, Dishan; Goldberg, Andy

    2017-06-01

    Hindfoot alignment on 2D radiographs can present anatomical and operator-related bias. In this study, software designed for weightbearing computed tomography (WBCT) was used to calculate a new 3D biometric tool: the Foot and Ankle Offset (FAO). We described the distribution of FAO in a series of data sets from clinically normal, varus, and valgus cases, hypothesizing that FAO values would be significantly different in the 3 groups. In this retrospective cohort study, 135 data sets (57 normal, 38 varus, 40 valgus) from WBCT (PedCAT; CurveBeam LLC, Warrington, PA) were obtained from a specialized foot and ankle unit. 3D coordinates of specific anatomical landmarks (weightbearing points of the calcaneus, of the first and fifth metatarsal heads and the highest and centermost point on the talar dome) were collected. These data were processed with the TALAS system (CurveBeam), which resulted in an FAO value for each case. Intraobserver and interobserver reliability were also assessed. In normal cases, the mean value for FAO was 2.3% ± 2.9%, whereas in varus and valgus cases, the mean was -11.6% ± 6.9% and 11.4% ± 5.7%, respectively, with a statistically significant difference among groups (P < .001). The distribution of the normal population was Gaussian. The inter- and intraobserver reliabilities were 0.99 ± 0.00 and 0.97 ± 0.02, respectively. Conclusions: This pilot study suggests that the FAO is an efficient tool for measuring hindfoot alignment using WBCT. Previously published research in this field has looked at WBCT by adapting 2D biometrics. The present study introduces the concept of 3D biometrics and describes an efficient, semiautomatic tool for measuring hindfoot alignment. Level III, retrospective comparative study.

  20. Dedicated 3D photoacoustic breast imaging

    PubMed Central

    Kruger, Robert A.; Kuzmiak, Cherie M.; Lam, Richard B.; Reinecke, Daniel R.; Del Rio, Stephen P.; Steed, Doreen

    2013-01-01

    Purpose: To report the design and imaging methodology of a photoacoustic scanner dedicated to imaging hemoglobin distribution throughout a human breast. Methods: The authors developed a dedicated breast photoacoustic mammography (PAM) system using a spherical detector aperture based on our previous photoacoustic tomography scanner. The system uses 512 detectors with rectilinear scanning. The scan shape is a spiral pattern whose radius varies from 24 to 96 mm, thereby allowing a field of view that accommodates a wide range of breast sizes. The authors measured the contrast-to-noise ratio (CNR) using a target comprised of 1-mm dots printed on clear plastic. Each dot absorption coefficient was approximately the same as a 1-mm thickness of whole blood at 756 nm, the output wavelength of the Alexandrite laser used by this imaging system. The target was immersed in varying depths of an 8% solution of stock Liposyn II-20%, which mimics the attenuation of breast tissue (1.1 cm−1). The spatial resolution was measured using a 6 μm-diameter carbon fiber embedded in agar. The breasts of four healthy female volunteers, spanning a range of breast size from a brassiere C cup to a DD cup, were imaged using a 96-mm spiral protocol. Results: The CNR target was clearly visualized to a depth of 53 mm. Spatial resolution, which was estimated from the full width at half-maximum of a profile across the PAM image of a carbon fiber, was 0.42 mm. In the four human volunteers, the vasculature was well visualized throughout the breast tissue, including to the chest wall. Conclusions: CNR, lateral field-of-view and penetration depth of our dedicated PAM scanning system is sufficient to image breasts as large as 1335 mL, which should accommodate up to 90% of the women in the United States. PMID:24320471

  1. 3-D capacitance density imaging system

    DOEpatents

    Fasching, G.E.

    1988-03-18

    A three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns generated in the bed at each level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved. 7 figs.

  2. 3-D seismic imaging of complex geologies

    SciTech Connect

    Womble, D.E.; Dosanjh, S.S.; VanDyke, J.P.; Oldfield, R.A.; Greenberg, D.S.

    1995-02-01

    We present three codes for the Intel Paragon that address the problem of three-dimensional seismic imaging of complex geologies. The first code models acoustic wave propagation and can be used to generate data sets to calibrate and validate seismic imaging codes. This code reported the fastest timings for acoustic wave propagation codes at a recent SEG (Society of Exploration Geophysicists) meeting. The second code implements a Kirchhoff method for pre-stack depth migration. Development of this code is almost complete, and preliminary results are presented. The third code implements a wave equation approach to seismic migration and is a Paragon implementation of a code from the ARCO Seismic Benchmark Suite.

  3. 3D model-based still image object categorization

    NASA Astrophysics Data System (ADS)

    Petre, Raluca-Diana; Zaharia, Titus

    2011-09-01

    This paper proposes a novel recognition algorithm for the semantic labeling of 2D objects present in still images. The principle consists of matching unknown 2D objects with categorized 3D models in order to transfer the semantics of the matched 3D model to the image object. We tested our new recognition framework by using the MPEG-7 and Princeton 3D model databases to label unknown images randomly selected from the web. The results obtained show promising performance, with recognition rates up to 84%, which opens interesting perspectives in terms of semantic metadata extraction from still images/videos.

  4. Knowledge-Based Analysis And Understanding Of 3D Medical Images

    NASA Astrophysics Data System (ADS)

    Dhawan, Atam P.; Juvvadi, Sridhar

    1988-06-01

    The anatomical three-dimensional (3D) medical imaging modalities, such as X-ray CT and MRI, have been well recognized in the diagnostic radiology for several years while the nuclear medicine modalities, such as PET, have just started making a strong impact through functional imaging. Though PET images provide the functional information about the human organs, they are hard to interpret because of the lack of anatomical information. Our objective is to develop a knowledge-based biomedical image analysis system which can interpret the anatomical images (such as CT). The anatomical information thus obtained can then be used in analyzing PET images of the same patient. This will not only help in interpreting PET images but it will also provide a means of studying the correlation between the anatomical and functional imaging. This paper presents the preliminary results of the knowledge based biomedical image analysis system for interpreting CT images of the chest.

  5. Respiratory blur in 3D coronary MR imaging.

    PubMed

    Wang, Y; Grist, T M; Korosec, F R; Christy, P S; Alley, M T; Polzin, J A; Mistretta, C A

    1995-04-01

    3D MR imaging of coronary arteries has the potential to provide both high resolution and high signal-to-noise ratio, but it is very susceptible to respiratory artifacts, especially respiratory blurring. Resolution loss caused by respiratory blurring in 3D coronary imaging is analyzed theoretically and verified experimentally. Under normal respiration, the width for any Gaussian point spread function is increased to a new value that is at least several millimeters (about 3-4 mm). In vivo studies were performed to compare respiratory pseudo-gated 3D acquisition with breath-hold 2D acquisition. On average, the overall quality of a pseudo-gated 3D image is worse than that of the corresponding breath-hold 2D image (P = 0.005). In most cases, respiratory blur caused coronary arteries in pseudo-gated 3D data to have lower resolution than in breath-hold 2D data.

  6. Image performance evaluation of a 3D surgical imaging platform

    NASA Astrophysics Data System (ADS)

    Petrov, Ivailo E.; Nikolov, Hristo N.; Holdsworth, David W.; Drangova, Maria

    2011-03-01

    The O-arm (Medtronic Inc.) is a multi-dimensional surgical imaging platform. The purpose of this study was to perform a quantitative evaluation of the imaging performance of the O-arm in an effort to understand its potential for future nonorthopedic applications. Performance of the reconstructed 3D images was evaluated, using a custom-built phantom, in terms of resolution, linearity, uniformity and geometrical accuracy. Both the standard (SD, 13 s) and high definition (HD, 26 s) modes were evaluated, with the imaging parameters set to image the head (120 kVp, 100 mAs and 150 mAs, respectively). For quantitative noise characterization, the images were converted to Hounsfield units (HU) off-line. Measurement of the modulation transfer function revealed a limiting resolution (at the 10% level) of 1.0 mm−1 in the axial dimension. Image noise varied between 15 and 19 HU for the HD and SD modes, respectively. Image intensities varied linearly over the measured range, up to 1300 HU. Geometric accuracy was maintained in all three dimensions over the field of view. The present study has evaluated the performance characteristics of the O-arm, and demonstrates feasibility for use in interventional applications and quantitative imaging tasks outside those currently targeted by the manufacturer. Further improvements to the reconstruction algorithms may further enhance performance for lower-contrast applications.

  7. Critical comparison of 3D imaging approaches

    SciTech Connect

    Bennett, C L

    1999-06-03

    Currently three imaging spectrometer architectures, tunable filter, dispersive, and Fourier transform, are viable for imaging the universe in three dimensions. There are domains of greatest utility for each of these architectures. The optimum choice among the various alternative architectures is dependent on the nature of the desired observations, the maturity of the relevant technology, and the character of the backgrounds. The domain appropriate for each of the alternatives is delineated; both for instruments having ideal performance as well as for instrumentation based on currently available technology. The environment and science objectives for the Next Generation Space Telescope will be used as a specific representative case to provide a basis for comparison of the various alternatives.

  8. Iliosacral screw insertion using CT-3D-fluoroscopy matching navigation.

    PubMed

    Takao, Masaki; Nishii, Takashi; Sakai, Takashi; Yoshikawa, Hideki; Sugano, Nobuhiko

    2014-06-01

    Percutaneous iliosacral screw insertion requires substantial experience and detailed anatomical knowledge to find the proper entry point and trajectory even with the use of a navigation system. Our hypothesis was that three-dimensional (3D) fluoroscopic navigation combined with a preoperative computed tomography (CT)-based plan could enable surgeons to perform safe and reliable iliosacral screw insertion. The purpose of the current study is two-fold: (1) to demonstrate the navigation accuracy for sacral fractures and sacroiliac dislocations on widely displaced cadaveric pelves; and (2) to report the technical and clinical aspects of percutaneous iliosacral screw insertion using the CT-3D-fluoroscopy matching navigation system. We simulated three types of posterior pelvic ring disruptions with vertical displacements of 0, 1, 2 and 3cm using cadaveric pelvic rings. A total of six fiducial markers were fixed to the anterior surface of the sacrum. Target registration error over the sacrum was assessed with the fluoroscopic imaging centre on the second sacral vertebral body. Six patients with pelvic ring fractures underwent percutaneous iliosacral screw placement using the CT-3D-fluoroscopy matching navigation. Three pelvic ring fractures were classified as type B2 and three were classified as type C1 according to the AO-OTA classification. Iliosacral screws for the S1 and S2 vertebra were inserted. The mean target registration error over the sacrum was 1.2mm (0.5-1.9mm) in the experimental study. Fracture type and amount of vertical displacement did not affect the target registration error. All 12 screws were positioned correctly in the clinical series. There were no postoperative complications including nerve palsy. The mean deviation between the planned and the inserted screw position was 2.5mm at the screw entry point, 1.8mm at the area around the nerve root tunnels and 2.2mm at the tip of the screw. The CT-3D-fluoroscopy matching navigation system was accurate and

  9. Speckle Research for 3D Imaging LADAR

    DTIC Science & Technology

    2011-03-24

    computing systems. Four major research projects are (1) study of speckle patterns including metrology for small pixels on photodetector arrays. (2) Theory...radars (LADAR) as well as related basic studies of novel integrated imaging and computing systems. Four major research projects are (1) study of...the depth of field through unbalanced OPD, OSA annual meeting, Rochester NY (2008) 3. Nicholas George and Wanli Chi, Emerging integrated computational

  10. Acoustic 3D imaging of dental structures

    SciTech Connect

    Lewis, D.K.; Hume, W.R.; Douglass, G.D.

    1997-02-01

    Our goal for the first year of this three-dimensional elastodynamic imaging project was to determine how to combine flexible, individually addressable acoustic arrays; preprocessing of array source signals; spectral extrapolation of received signals; acoustic tomography codes; and acoustic propagation modeling codes. We investigated flexible, individually addressable acoustic array materials to find the best match in power, sensitivity, and cost, and settled on PVDF sheet arrays and 3-1 composite material.

  11. 3D scene reconstruction from multi-aperture images

    NASA Astrophysics Data System (ADS)

    Mao, Miao; Qin, Kaihuai

    2014-04-01

    With the development of virtual reality, there is a growing demand for 3D modeling of real scenes. This paper proposes a novel 3D scene reconstruction framework based on multi-aperture images. Our framework consists of four parts. Firstly, images with different apertures are captured via a programmable aperture. Secondly, we use the SIFT method for feature point matching. Then we exploit binocular stereo vision to calculate camera parameters and the 3D positions of the matching points, forming a sparse 3D scene model. Finally, we apply patch-based multi-view stereo to obtain a dense 3D scene model. Experimental results show that our method is practical and effective for reconstructing dense 3D scenes.
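
    A compressed sketch of the middle two steps of such a pipeline (SIFT matching with a ratio test, then triangulation of the matched points into a sparse cloud) using OpenCV is shown below. The file names, projection matrices, and the 0.7 ratio threshold are placeholder assumptions; the programmable-aperture capture and the patch-based dense stage are not reproduced.

```python
import cv2
import numpy as np

# Placeholder inputs: two of the multi-aperture captures and their 3x4 projection
# matrices from calibration; all names and values here are illustrative.
img1 = cv2.imread("aperture_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("aperture_b.png", cv2.IMREAD_GRAYSCALE)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

# Step 2: SIFT keypoints and descriptor matching with a ratio test.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]

# Step 3: triangulate the matched points into a sparse 3D model.
pts1 = np.float32([kp1[m.queryIdx].pt for m in good]).T   # 2 x N
pts2 = np.float32([kp2[m.trainIdx].pt for m in good]).T
pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)
sparse_cloud = (pts4d[:3] / pts4d[3]).T                   # N x 3 points
print(sparse_cloud.shape)
```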

  12. Polarimetric 3D integral imaging in photon-starved conditions.

    PubMed

    Carnicer, Artur; Javidi, Bahram

    2015-03-09

    We develop a method for obtaining 3D polarimetric integral images from elemental images recorded under low-light illumination conditions. Since photon-counting images are very sparse, calculation of the Stokes parameters and the degree of polarization must be handled carefully. In our approach, polarimetric 3D integral images are generated using maximum likelihood estimation and subsequently reconstructed by means of a total variation denoising filter. In this way, the polarimetric results are comparable to those obtained under conventional illumination conditions. We also show that polarimetric information retrieved from photon-starved images can be used in 3D object recognition problems. To the best of our knowledge, this is the first report on 3D polarimetric photon-counting integral imaging.
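
    As a rough illustration of the post-processing described, the sketch below forms the linear Stokes images from four analyzer orientations, applies total-variation denoising, and computes the degree of linear polarization. The photon-counting maximum-likelihood stage is reduced here to plain intensity images, and the analyzer angles, TV weight, and synthetic data are assumptions rather than the paper's settings.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def stokes_and_dop(i0, i45, i90, i135, tv_weight=0.1):
    """Linear Stokes images and degree of linear polarization from four analyzer
    orientations (0, 45, 90, 135 degrees); inputs are 2D intensity arrays."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    # Total-variation denoising of each Stokes image, since sparse photon-counting
    # estimates are noisy; the weight is an illustrative value.
    s0, s1, s2 = (denoise_tv_chambolle(s, weight=tv_weight) for s in (s0, s1, s2))
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)
    return s0, s1, s2, dolp

# Illustrative synthetic photon-count images standing in for reconstructed data.
rng = np.random.default_rng(1)
imgs = [rng.poisson(5, (128, 128)).astype(float) for _ in range(4)]
_, _, _, dolp = stokes_and_dop(*imgs)
print(dolp.mean())
```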

  13. 3D nonrigid medical image registration using a new information theoretic measure

    NASA Astrophysics Data System (ADS)

    Li, Bicao; Yang, Guanyu; Coatrieux, Jean Louis; Li, Baosheng; Shu, Huazhong

    2015-11-01

    This work presents a novel method for the nonrigid registration of medical images based on the Arimoto entropy, a generalization of the Shannon entropy. The proposed method employs the Jensen-Arimoto divergence as a similarity metric to measure the statistical dependence between medical images. Free-form deformations are adopted as the transformation model, and Parzen window estimation is applied to compute the probability distributions. A penalty term is incorporated into the objective function to smooth the nonrigid transformation. The goal of registration is to minimize an objective function consisting of a dissimilarity term and a penalty term, which reaches its minimum when the two deformed images are well aligned; the limited-memory BFGS method is used for the optimization and yields the optimal geometric transformation. To validate the performance of the proposed method, experiments on both simulated 3D brain MR images and real 3D thoracic CT data sets were designed and carried out with the open-source elastix package. For the simulated experiments, the registration errors of 3D brain MR images with various magnitudes of known deformations and different levels of noise were measured. For the real-data tests, four 4D thoracic CT data sets from four patients were selected to assess the registration performance of the method, each comprising ten 3D CT images covering an entire respiration cycle. These results were compared with the normalized cross-correlation and mutual information methods and show a slight but genuine improvement in registration accuracy.
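
    For orientation, the sketch below evaluates a Jensen-type divergence built on the Arimoto entropy between two intensity histograms, using the Jensen-Shannon style construction. Whether this exactly matches the paper's weighting, its Parzen-window estimates, or its choice of the order alpha is an assumption, and the free-form-deformation optimization itself is not reproduced.

```python
import numpy as np

def arimoto_entropy(p, alpha=1.5):
    """Arimoto entropy H_alpha(p) = alpha/(1-alpha) * ((sum p_i^alpha)^(1/alpha) - 1);
    it tends to the Shannon entropy (in nats) as alpha -> 1."""
    p = p[p > 0]
    return alpha / (1.0 - alpha) * (np.power(np.sum(p ** alpha), 1.0 / alpha) - 1.0)

def jensen_arimoto(img_a, img_b, alpha=1.5, bins=64):
    """Jensen-type divergence based on the Arimoto entropy, evaluated on normalized
    intensity histograms (a simple stand-in for Parzen-window density estimates)."""
    lo = min(img_a.min(), img_b.min())
    hi = max(img_a.max(), img_b.max())
    p, _ = np.histogram(img_a, bins=bins, range=(lo, hi))
    q, _ = np.histogram(img_b, bins=bins, range=(lo, hi))
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)
    return arimoto_entropy(m, alpha) - 0.5 * (arimoto_entropy(p, alpha)
                                              + arimoto_entropy(q, alpha))

# Toy check: the divergence should shrink as the two volumes become more alike.
rng = np.random.default_rng(2)
a = rng.normal(0, 1, (64, 64, 64))
print(jensen_arimoto(a, a + rng.normal(0, 0.1, a.shape)),
      jensen_arimoto(a, rng.normal(2, 1, a.shape)))
```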

  14. 3D nonrigid medical image registration using a new information theoretic measure.

    PubMed

    Li, Bicao; Yang, Guanyu; Coatrieux, Jean Louis; Li, Baosheng; Shu, Huazhong

    2015-11-21

    This work presents a novel method for the nonrigid registration of medical images based on the Arimoto entropy, a generalization of the Shannon entropy. The proposed method employs the Jensen-Arimoto divergence as a similarity metric to measure the statistical dependence between medical images. Free-form deformations are adopted as the transformation model, and Parzen window estimation is applied to compute the probability distributions. A penalty term is incorporated into the objective function to smooth the nonrigid transformation. The goal of registration is to minimize an objective function consisting of a dissimilarity term and a penalty term, which reaches its minimum when the two deformed images are well aligned; the limited-memory BFGS method is used for the optimization and yields the optimal geometric transformation. To validate the performance of the proposed method, experiments on both simulated 3D brain MR images and real 3D thoracic CT data sets were designed and carried out with the open-source elastix package. For the simulated experiments, the registration errors of 3D brain MR images with various magnitudes of known deformations and different levels of noise were measured. For the real-data tests, four 4D thoracic CT data sets from four patients were selected to assess the registration performance of the method, each comprising ten 3D CT images covering an entire respiration cycle. These results were compared with the normalized cross-correlation and mutual information methods and show a slight but genuine improvement in registration accuracy.

  15. Phase Sensitive Cueing for 3D Objects in Overhead Images

    SciTech Connect

    Paglieroni, D

    2005-02-04

    Locating specific 3D objects in overhead images is an important problem in many remote sensing applications. 3D objects may contain either one connected component or multiple disconnected components. Solutions must accommodate images acquired with diverse sensors at various times of the day, in various seasons of the year, or under various weather conditions. Moreover, the appearance of a 3D object in an overhead image depends strongly on the object's physical dimensions, its position and orientation, the image's spatial resolution, and the imaging geometry (e.g., obliqueness). This paper describes a two-stage computer-assisted approach for locating 3D objects in overhead images. In the matching stage, the computer matches models of 3D objects to overhead images: the strongest degree of match over all object orientations is computed at each pixel, and unambiguous local maxima of the degree of match as a function of pixel location are then found. In the cueing stage, the computer sorts image thumbnails in descending order of figure-of-merit and presents them to human analysts for visual inspection and interpretation. The figure-of-merit associated with an image thumbnail is computed from the degrees of match, at the unambiguous local maxima lying within the thumbnail, to the 3D object model. This form of computer assistance is invaluable when most of the relevant thumbnails are highly ranked, and the inspection time needed for the highly ranked thumbnails is much less than for the images as a whole.
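
    A stripped-down 2D sketch of the two stages might look as follows: normalized cross-correlation of a rotated object footprint gives a degree-of-match map, local maxima of that map are kept, and thumbnail tiles are ranked by the strongest maximum they contain. The synthetic image, the rectangular footprint, the angle step, and the tile size are all illustrative assumptions; the actual system matches full 3D object models under the imaging geometry rather than a flat template.

```python
import numpy as np
import cv2
from scipy.ndimage import maximum_filter, rotate

def match_over_orientations(image, template, angles=range(0, 180, 15)):
    """Best normalized cross-correlation at each pixel over a set of template rotations."""
    best = np.full(image.shape, -1.0, dtype=np.float32)
    for a in angles:
        t = rotate(template, a, reshape=False, order=1).astype(np.float32)
        score = cv2.matchTemplate(image, t, cv2.TM_CCOEFF_NORMED)
        # Pad the valid-correlation map back to image size before taking the maximum.
        pad_y = image.shape[0] - score.shape[0]
        pad_x = image.shape[1] - score.shape[1]
        score = np.pad(score, ((pad_y // 2, pad_y - pad_y // 2),
                               (pad_x // 2, pad_x - pad_x // 2)), constant_values=-1.0)
        best = np.maximum(best, score)
    return best

def cue_thumbnails(best, thumb=128, nms_size=15):
    """Rank thumbnail tiles by figure-of-merit: the strongest local maximum of the
    degree-of-match map falling inside each tile."""
    peaks = (best == maximum_filter(best, size=nms_size)) & (best > 0)
    fom = {}
    for y, x in zip(*np.nonzero(peaks)):
        tile = (y // thumb, x // thumb)
        fom[tile] = max(fom.get(tile, -1.0), float(best[y, x]))
    return sorted(fom.items(), key=lambda kv: kv[1], reverse=True)

# Illustrative inputs only: a synthetic overhead image and a small rectangular footprint.
image = np.random.rand(512, 512).astype(np.float32)
template = np.ones((16, 40), dtype=np.float32)
ranked = cue_thumbnails(match_over_orientations(image, template))
print(ranked[:5])
```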

  16. 3D laser imaging for concealed object identification

    NASA Astrophysics Data System (ADS)

    Berechet, Ion; Berginc, Gérard; Berechet, Stefan

    2014-09-01

    This paper deals with new optical non-conventional 3D laser imaging. Optical non-conventional imaging explores the advantages of laser imaging to form a three-dimensional image of the scene. 3D laser imaging can be used for three-dimensional medical imaging, topography, surveillance, and robotic vision because of its ability to detect and recognize objects. In this paper, we present 3D laser imaging for concealed object identification. The objective of this new 3D laser imaging is to provide the user with a complete 3D reconstruction of the concealed object from available 2D data that are limited in number and of low representativeness. The 2D laser data used in this paper come from simulations, based on the calculation of the laser interactions with the different interfaces of the scene of interest, and from experimental results. We show global 3D reconstruction procedures capable of separating objects from foliage and reconstructing a three-dimensional image of the considered object. We present examples of reconstruction and completion of three-dimensional images, and we analyse the different parameters of the identification process such as resolution, the camouflage scenario, noise impact, and degree of lacunarity.

  17. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance.

    PubMed

    Dibildox, Gerardo; Baka, Nora; Punt, Mark; Aben, Jean-Paul; Schultz, Carl; Niessen, Wiro; van Walsum, Theo

    2014-09-01

    The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P>0.1) but did improve robustness with regard to the initialization of the 3D models. The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.
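
    In the spirit of the point-set approach described, the toy sketch below rigidly aligns two 3D point sets by minimizing the negative log-likelihood of one set under an isotropic Gaussian mixture centred on the transformed other set. The orientation terms, bifurcation weighting, and temporal ECG alignment are omitted, and all numerical settings (sigma, optimizer, point counts) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def gmm_neg_log_likelihood(params, src, dst, sigma=2.0):
    """Negative log-likelihood of the XA points (dst) under an isotropic GMM whose
    components sit on the rigidly transformed CTA points (src); constant factors
    of the Gaussian are dropped since they do not affect the optimum."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    moved = src @ R.T + t
    d2 = np.sum((dst[:, None, :] - moved[None, :, :]) ** 2, axis=2)   # (M, N)
    comp = np.exp(-d2 / (2.0 * sigma ** 2)) / len(src)
    return -np.sum(np.log(comp.sum(axis=1) + 1e-300))

def register_rigid_gmm(src, dst):
    res = minimize(gmm_neg_log_likelihood, x0=np.zeros(6), args=(src, dst),
                   method="Nelder-Mead", options={"maxiter": 2000, "xatol": 1e-4})
    return Rotation.from_rotvec(res.x[:3]).as_matrix(), res.x[3:]

# Illustrative centreline-like point sets (mm), not patient data.
rng = np.random.default_rng(3)
cta = rng.uniform(-30, 30, size=(200, 3))
R_true = Rotation.from_rotvec([0.05, -0.02, 0.08]).as_matrix()
t_true = np.array([2.0, -1.0, 0.5])
xa = cta @ R_true.T + t_true + rng.normal(0, 0.5, cta.shape)

R_est, t_est = register_rigid_gmm(cta, xa)
err = np.linalg.norm((cta @ R_est.T + t_est) - (cta @ R_true.T + t_true), axis=1)
print(err.mean())
```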

  18. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance

    SciTech Connect

    Dibildox, Gerardo; Baka, Nora; van Walsum, Theo; Punt, Mark; Aben, Jean-Paul; Schultz, Carl; Niessen, Wiro

    2014-09-15

    Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regard to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.

  19. 3D-LSI technology for image sensor

    NASA Astrophysics Data System (ADS)

    Motoyoshi, Makoto; Koyanagi, Mitsumasa

    2009-03-01

    Recently, the development of three-dimensional large-scale integration (3D-LSI) technologies has accelerated, advancing from the research or limited-production level to serious investigation for mass production. By separating 3D-LSI technology into elementary technologies such as (1) through-silicon via (TSV) formation, (2) bump formation, (3) wafer thinning, (4) chip/wafer alignment, and (5) chip/wafer stacking, and then reconstructing the entire process and structure, many methods of realizing 3D-LSI devices can be developed. However, by considering a specific application, the supply chain of base wafers, and the purpose of 3D integration, a few suitable combinations can be identified. In this paper, we focus on the application of 3D-LSI technologies to image sensors. We describe the process and structure of the chip-size package (CSP), developed on the basis of current and advanced 3D-LSI technologies, to be used in CMOS image sensors. Using current LSI technologies, CSPs for 1.3M-, 2M-, and 5M-pixel CMOS image sensors were successfully fabricated without any performance degradation. 3D-LSI devices can potentially be employed in high-performance focal-plane-array image sensors. We propose a high-speed image sensor with an optical fill factor of 100% to be developed using next-generation 3D-LSI technology and fabricated using micro (μ)-bumps and micro (μ)-TSVs.

  20. A 3D Level Set Method for Microwave Breast Imaging

    PubMed Central

    Colgan, Timothy J.; Hagness, Susan C.; Van Veen, Barry D.

    2015-01-01

    Objective: Conventional inverse-scattering algorithms for microwave breast imaging result in moderate resolution images with blurred boundaries between tissues. Recent 2D numerical microwave imaging studies demonstrate that the use of a level set method preserves dielectric boundaries, resulting in a more accurate, higher resolution reconstruction of the dielectric properties distribution. Previously proposed level set algorithms are computationally expensive and thus impractical in 3D. In this paper we present a computationally tractable 3D microwave imaging algorithm based on level sets. Methods: We reduce the computational cost of the level set method using a Jacobian matrix, rather than an adjoint method, to calculate Fréchet derivatives. We demonstrate the feasibility of 3D imaging using simulated array measurements from 3D numerical breast phantoms. We evaluate performance by comparing full 3D reconstructions to those from a conventional microwave imaging technique. We also quantitatively assess the efficacy of our algorithm in evaluating breast density. Results: Our reconstructions of 3D numerical breast phantoms improve upon those of a conventional microwave imaging technique. The density estimates from our level set algorithm are more accurate than those of conventional microwave imaging, and the accuracy is greater than that reported for mammographic density estimation. Conclusion: Our level set method leads to a feasible level of computational complexity for full 3D imaging, and reconstructs the heterogeneous dielectric properties distribution of the breast more accurately than conventional microwave imaging methods. Significance: 3D microwave breast imaging using a level set method is a promising low-cost, non-ionizing alternative to current breast imaging techniques. PMID:26011863
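
    To make the level-set parameterization concrete, the toy 2D sketch below maps a level-set function to a two-phase permittivity map through a smoothed Heaviside and takes one descent step on it. The electromagnetic forward model and the Jacobian-based Fréchet derivatives of the paper are replaced by a placeholder misfit gradient, and every numerical value is an illustrative assumption.

```python
import numpy as np

def smoothed_heaviside(phi, width=1.0):
    """Smooth Heaviside used to map the level-set function to material properties."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / width))

def dielectric_from_levelset(phi, eps_in=50.0, eps_out=10.0):
    """Two-phase relative permittivity: eps_in where phi > 0, eps_out elsewhere."""
    h = smoothed_heaviside(phi)
    return eps_in * h + eps_out * (1.0 - h)

def level_set_step(phi, misfit_grad_wrt_eps, eps_in=50.0, eps_out=10.0, dt=0.1, width=1.0):
    """One descent step on phi; the chain rule through the smoothed Heaviside gives
    d(misfit)/d(phi) = d(misfit)/d(eps) * (eps_in - eps_out) * delta(phi)."""
    delta = (1.0 / np.pi) * width / (phi ** 2 + width ** 2)   # derivative of the Heaviside
    return phi - dt * misfit_grad_wrt_eps * (eps_in - eps_out) * delta

# Toy setup: a circular initial guess on a 2D grid and a placeholder misfit gradient
# standing in for the Jacobian-based derivative of the data misfit.
y, x = np.mgrid[-32:32, -32:32].astype(float)
phi = 20.0 - np.sqrt(x ** 2 + y ** 2)          # signed distance to a circle of radius 20
fake_grad = np.where(x > 0, 1.0, -1.0)
phi = level_set_step(phi, fake_grad)
print(dielectric_from_levelset(phi).mean())
```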