Science.gov

Sample records for 3-d image diagnosis

  1. 3D thermography imaging standardization technique for inflammation diagnosis

    NASA Astrophysics Data System (ADS)

    Ju, Xiangyang; Nebel, Jean-Christophe; Siebert, J. Paul

    2005-01-01

We develop a 3D thermography imaging standardization technique to allow quantitative data analysis. Medical digital infrared thermal imaging is a very sensitive and reliable means of graphically mapping and displaying skin surface temperature. It allows doctors to visualise in colour and quantify temperature changes of the skin surface. The spectrum of colours indicates both hot and cold responses, which may co-exist if the pain associated with an inflammatory focus excites an increase in sympathetic activity. However, because thermography provides only qualitative diagnostic information, it has not gained acceptance in the medical and veterinary communities as a necessary or effective tool for inflammation and tumor detection. Here, our technique is based on the combination of a visual 3D imaging technique and a thermal imaging technique, which maps the 2D thermography images onto a 3D anatomical model. We then rectify the 3D thermogram into a view-independent thermogram and conform it to a standard shape template. The combination of these imaging facilities allows the generation of combined 3D and thermal data from which thermal signatures can be quantified.
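    As a rough illustration of the 2D-to-3D mapping step described above, the sketch below projects the vertices of a 3D anatomical mesh through an assumed pinhole thermal-camera model and samples a per-vertex temperature from the 2D thermogram. The camera intrinsics, pose, mesh, and function names are hypothetical placeholders, not the authors' calibration or registration pipeline.

```python
# Hedged sketch: project 3D mesh vertices into an assumed calibrated thermal camera
# and sample per-vertex temperatures. The camera matrix and mesh are illustrative
# placeholders, not the authors' actual pipeline.
import numpy as np

def map_thermogram_to_mesh(vertices, K, R, t, thermal_image):
    """vertices: (N, 3) mesh points in world coords; K: 3x3 intrinsics;
    R, t: world-to-camera rotation/translation; thermal_image: 2D temperature map."""
    cam = (R @ vertices.T + t.reshape(3, 1))           # world -> camera coordinates
    uv = (K @ cam).T                                    # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]                         # perspective divide -> pixels
    h, w = thermal_image.shape
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    return thermal_image[v, u]                          # one temperature per vertex

# Toy example: 3 vertices, identity pose, 4x4 thermogram in degrees Celsius.
K = np.array([[2.0, 0, 2.0], [0, 2.0, 2.0], [0, 0, 1.0]])
verts = np.array([[0.0, 0.0, 1.0], [0.5, 0.0, 1.0], [0.0, 0.5, 1.0]])
thermo = np.linspace(30.0, 37.5, 16).reshape(4, 4)
print(map_thermogram_to_mesh(verts, K, np.eye(3), np.zeros(3), thermo))
```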

  2. 3D imaging of telomeres and nuclear architecture: An emerging tool of 3D nano-morphology-based diagnosis.

    PubMed

    Knecht, Hans; Mai, Sabine

    2011-04-01

    Patient samples are evaluated by experienced pathologists whose diagnosis guides treating physicians. Pathological diagnoses are complex and often assisted by the application of specific tissue markers. However, cases still exist where pathologists cannot distinguish between closely related entities or determine the aggressiveness of the disease they identify under the microscope. This is due to the absence of reliable markers that define diagnostic subgroups in several cancers. Three-dimensional (3D) imaging of nuclear telomere signatures is emerging as a new tool that may change this situation offering new opportunities to the patients. This article will review current and future avenues in the assessment of diagnostic patient samples.

  3. 3D segmentation and image annotation for quantitative diagnosis in lung CT images with pulmonary lesions

    NASA Astrophysics Data System (ADS)

    Li, Suo; Zhu, Yanjie; Sun, Jianyong; Zhang, Jianguo

    2013-03-01

Pulmonary nodules and ground glass opacities are highly significant findings in high-resolution computed tomography (HRCT) of patients with pulmonary lesions. The appearances of pulmonary nodules and ground glass opacities show a relationship with different lung diseases. According to the corresponding characteristics of the lesions, pertinent segmentation methods and quantitative analysis are helpful for controlling and treating diseases at an earlier and potentially more curable stage. Currently, most studies have focused on two-dimensional quantitative analysis of these kinds of diseases. Compared to two-dimensional images, three-dimensional quantitative analysis can take full advantage of the isotropic image data acquired by thin-slice HRCT and has better quantitative precision for clinical diagnosis. This presentation describes a computer-aided diagnosis component that segments 3D disease areas of nodules and ground glass opacities in lung CT images and uses AIML (Annotation and Image Markup Language) to annotate the segmented 3D pulmonary lesions with quantitative measurement information, which may provide more features and information to radiologists in clinical diagnosis.

  4. Computer-aided diagnosis for osteoporosis using chest 3D CT images

    NASA Astrophysics Data System (ADS)

    Yoneda, K.; Matsuhiro, M.; Suzuki, H.; Kawata, Y.; Niki, N.; Nakano, Y.; Ohmatsu, H.; Kusumoto, M.; Tsuchida, T.; Eguchi, K.; Kaneko, M.

    2016-03-01

Osteoporosis affects about 13 million people in Japan and is one of the problems of an aging society. In order to prevent osteoporosis, early detection and treatment are necessary. Multi-slice CT technology has been improving three-dimensional (3-D) image analysis with higher body-axis resolution and shorter scan times. 3-D image analysis using multi-slice CT images of the thoracic vertebrae can be used to support the diagnosis of osteoporosis and, at the same time, for lung cancer diagnosis, which may lead to early detection. We developed an automatic extraction and partitioning algorithm for the spinal column based on analysis of vertebral body structure, together with a vertebral body analysis algorithm using shape analysis and bone density measurement for the diagnosis of osteoporosis. The osteoporosis diagnosis support system achieved a high extraction rate of the thoracic vertebrae at both normal and low doses.

  5. Added Value of 3D Proton-Density Weighted Images in Diagnosis of Intracranial Arterial Dissection

    PubMed Central

    Kim, Jin Woo; Kim, Young Dae; Lee, Seung-Koo; Lim, Soo Mee; Oh, Se Won

    2016-01-01

Background An early and reliable diagnosis of intracranial arterial dissection is important to reduce the risk of neurological complications. The purpose of this study was to assess the clinical usefulness of three-dimensional high-resolution MRI (3D-HR-MRI), including pre- and post-contrast T1-weighted volumetric isotropic turbo spin echo acquisition with improved motion-sensitized driven equilibrium preparation (3D-iMSDE-T1) and proton-density weighted imaging (3D-PD), in detecting dissection and to evaluate the added value of 3D-PD in diagnosing intracranial arterial dissection. Methods We retrospectively recruited patients who underwent 3D-HR-MRI with clinical suspicion of arterial dissection. Among them, we selected patients who were diagnosed with definite dissection according to the Spontaneous Cervicocephalic Arterial Dissections Study criteria. For each patient, the presence of an intimal flap, intramural hematoma, and vessel dilatation was evaluated independently by two neuroradiologists on each sequence. Interobserver agreement was assessed. Results Seventeen patients (mean age: 41 ± 10 [SD] years; 13 men) were diagnosed with definite dissection. Intimal flaps were more frequently detected on 3D-PD (88.2%, 15/17) than on 3D-iMSDE-T1 (29.4%, 5/17) and post-contrast 3D-iMSDE-T1 (35.3%, 6/17; P = 0.006 and P = 0.004, respectively). No significant difference was found in the detection rate of intramural hematomas (59–71%) and vascular dilatations (47%) on each sequence. Interobserver agreement for detection of dissection findings was almost perfect (k = 0.84–1.00), except for detection of intimal flaps on pre-contrast 3D-iMSDE-T1 (k = 0.62). After addition of 3D-PD to pre- and post-contrast 3D-iMSDE-T1, more patients were diagnosed with definite dissection on the initial MRI (88.2% vs. 47.1%; P = 0.039). Conclusions The intimal flap might be better visualized on the 3D-PD sequence than on the 3D-iMSDE-T1 sequences, allowing diagnosis of

  6. Application of FLASH-3D dynamic contrast-enhanced imaging for diagnosis of endometrial carcinoma.

    PubMed

    Du, Lixin; Li, Xiaohu; Qiu, Xixiong; Liu, Xiaolei; Wang, Yuli; Yu, Yongqiang

    2016-10-01

    To investigate the application and value of fast low-angle shot three-dimensional (FLASH-3D) dynamic contrast-enhanced MRI for the pre-operative staging of endometrial carcinoma. This prospective study enrolled 48 patients with complete clinical data and pathologically confirmed endometrial carcinoma from July 2012 to March 2014. After routine MRI examination, subjects underwent FLASH-3D dynamic contrast-enhanced examination. The dynamically enhanced features of the uterine wall and tumours were analyzed. FLASH-3D pre-operative staging and findings in relation to myometrial invasion were compared with post-operative pathological results in a double-blind manner. There were 48 cases of pathologically proven endometrial carcinoma, including 34 patients with Stage I (Stage Ia 22 cases and Stage Ib 12 cases), 9 with Stage II, 3 with Stage III and 2 with Stage IV. The staging accuracy for endometrial carcinoma was 81% (39/48) using FLASH-3D dynamic contrast-enhanced sequences. The sensitivity, specificity and accuracy for the determination of deep myometrial invasion were 84%, 90% and 88%, respectively. There was no significant difference compared with the results of post-operative pathology (p > 0.05). FLASH-3D dynamic contrast-enhanced imaging may be valuable for the early diagnosis and pre-operative staging of endometrial carcinoma. Its high accuracy for assessing deep myometrial invasion makes FLASH-3D imaging an important tool for selecting the optimal therapeutic protocol and for prognosis estimation. FLASH-3D can significantly improve the accurate assessment of the depth of tumour invasion into the myometrium and may thus help to guide clinical surgical choices and post-operative evaluation. FLASH-3D is thus a promising technique for the routine examination of female pelvic tumours.

  7. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

Discusses 3-D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3-D imaging for libraries and museums. (LRW)

8. Automated 3D ultrasound image segmentation for assistant diagnosis of breast cancer

    NASA Astrophysics Data System (ADS)

    Wang, Yuxin; Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A.; Du, Sidan; Yuan, Jie; Wang, Xueding; Carson, Paul L.

    2016-04-01

    Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer.
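    For readers who want to reproduce the kind of volume-agreement check described above, the sketch below computes overlap ratios between an automated and a manual binary mask for one tissue class. The abstract does not state whether "overlap ratio" refers to Dice or Jaccard, so both are shown; the masks here are synthetic toys, not whole-breast ultrasound data.

```python
# Hedged sketch of a per-tissue overlap check between an automated and a manual
# 3D segmentation. Dice and Jaccard are both shown since the abstract does not
# specify which definition of "overlap ratio" is used.
import numpy as np

def overlap_ratios(auto_mask, manual_mask):
    """auto_mask, manual_mask: boolean 3D arrays for one tissue class."""
    inter = np.logical_and(auto_mask, manual_mask).sum()
    union = np.logical_or(auto_mask, manual_mask).sum()
    dice = 2.0 * inter / (auto_mask.sum() + manual_mask.sum())
    jaccard = inter / union
    return dice, jaccard

# Toy example: two overlapping 'cyst/mass' blobs in a small volume.
vol = np.zeros((20, 20, 20), dtype=bool)
auto, manual = vol.copy(), vol.copy()
auto[5:15, 5:15, 5:15] = True
manual[7:17, 5:15, 5:15] = True
print(overlap_ratios(auto, manual))
```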

9. Craniosynostosis: prenatal diagnosis by 2D/3D ultrasound, magnetic resonance imaging and computed tomography.

    PubMed

    Helfer, Talita Micheletti; Peixoto, Alberto Borges; Tonni, Gabriele; Araujo Júnior, Edward

    2016-09-01

Craniosynostosis is defined as the process of premature fusion of one or more of the cranial sutures. It is a common condition that occurs in about 1 in 2,000 live births. Craniosynostosis may be classified as primary or secondary. It is also classified as nonsyndromic or syndromic. According to suture commitment, craniosynostosis may affect a single suture or multiple sutures. There is a wide range of syndromes involving craniosynostosis, the most common being Apert, Pfeiffer, Crouzon, Saethre-Chotzen and Muenke syndromes. The underlying etiology of nonsyndromic craniosynostosis is unknown. Mutations in the fibroblast growth factor (FGF) signalling pathway play a crucial role in the etiology of craniosynostosis syndromes. The prenatal ultrasound detection rate of craniosynostosis is low. Nowadays, different methods can be applied for prenatal diagnosis of craniosynostosis, such as two-dimensional (2D) and three-dimensional (3D) ultrasound, magnetic resonance imaging (MRI), computed tomography (CT) scan and, finally, molecular diagnosis. The presence of craniosynostosis may affect the birthing process. Fetuses with craniosynostosis also have higher rates of perinatal complications. In order to avoid the risks of untreated craniosynostosis, children are usually treated surgically soon after postnatal diagnosis.

10. 3D photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Carson, Jeffrey J. L.; Roumeliotis, Michael; Chaudhary, Govind; Stodilka, Robert Z.; Anastasio, Mark A.

    2010-06-01

Our group has concentrated on the development of a 3D photoacoustic imaging system for biomedical imaging research. The technology employs a sparse parallel detection scheme and specialized reconstruction software to obtain 3D optical images using a single laser pulse. With this technology we have been able to capture 3D movies of translating point targets and rotating line targets. The current limitation of our 3D photoacoustic imaging approach is its inability to reconstruct complex objects in the field of view. This is primarily due to the relatively small number of projections used to reconstruct objects. However, in many photoacoustic imaging situations, only a few objects may be present in the field of view and these objects may have very high contrast compared to the background; that is, the objects have sparse properties. Therefore, our work had two objectives: (i) to utilize mathematical tools to evaluate 3D photoacoustic imaging performance, and (ii) to test image reconstruction algorithms that prefer sparseness in the reconstructed images. Our approach was to utilize singular value decomposition techniques to study the imaging operator of the system and evaluate the complexity of objects that could potentially be reconstructed. We also compared the performance of two image reconstruction algorithms (algebraic reconstruction and l1-norm techniques) at reconstructing objects of increasing sparseness. We observed that for a 15-element detection scheme, the number of measurable singular vectors representative of the imaging operator was consistent with the demonstrated ability to reconstruct point and line targets in the field of view. We also observed that the l1-norm reconstruction technique, which is known to prefer sparseness in reconstructed images, was superior to the algebraic reconstruction technique. Based on these findings, we concluded (i) that singular value decomposition of the imaging operator provides valuable insight into the capabilities of
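    A minimal sketch of the two analysis ideas mentioned above: inspecting an imaging operator with a singular value decomposition, and comparing a plain least-squares solve with an l1-regularized (sparsity-preferring) solve. The operator below is a random stand-in rather than the authors' 15-element system, and the l1 solver is a generic ISTA iteration, not their reconstruction code.

```python
# Hedged sketch: count significant singular values of a toy imaging operator and
# compare least-squares with an l1-regularized solve (ISTA) on a sparse object.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200))                    # measurements x voxels (toy operator)
x_true = np.zeros(200); x_true[[20, 90, 150]] = [1.0, -0.5, 0.8]   # sparse object
y = A @ x_true

s = np.linalg.svd(A, compute_uv=False)
print("singular values above 1% of max:", int((s > 0.01 * s[0]).sum()))

x_ls = np.linalg.lstsq(A, y, rcond=None)[0]           # minimum-norm least squares

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                     # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L               # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
    return x

x_l1 = ista(A, y)
print("LS error:", np.linalg.norm(x_ls - x_true), "l1 error:", np.linalg.norm(x_l1 - x_true))
```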

11. A new approach of building 3D visualization framework for multimodal medical images display and computed assisted diagnosis

    NASA Astrophysics Data System (ADS)

    Li, Zhenwei; Sun, Jianyong; Zhang, Jianguo

    2012-02-01

As more and more CT/MR studies are scanned with larger volumes of data, more and more radiologists and clinicians would like to use PACS workstations to display and manipulate these larger image data sets with 3D rendering features. In this paper, we propose a design method and implementation strategy to develop a 3D image display component with not only standard 3D display functions but also multi-modal medical image fusion and computer-assisted diagnosis of coronary heart disease. The 3D component has been integrated into the PACS display workstation of Shanghai Huadong Hospital, and clinical practice showed that it is easy for radiologists and physicians to use 3D functions such as multi-modality (e.g. CT, MRI, PET, SPECT) visualization, registration and fusion, and quantitative lesion measurement. The users were satisfied with the rendering speed and quality of the 3D reconstruction. The advantages of the component include low computer hardware requirements, easy integration, reliable performance and a comfortable application experience. With this system, radiologists and clinicians can manipulate 3D images easily and use the advanced visualization tools to facilitate their work on a PACS display workstation at any time.

12. Efficient 3D texture feature extraction from CT images for computer-aided diagnosis of pulmonary nodules

    NASA Astrophysics Data System (ADS)

    Han, Fangfang; Wang, Huafeng; Song, Bowen; Zhang, Guopeng; Lu, Hongbing; Moore, William; Liang, Zhengrong; Zhao, Hong

    2014-03-01

Texture features from chest CT images have become an important and efficient factor in computer-aided diagnosis (CADx) for the malignancy assessment of pulmonary nodules. In this paper, we focus on extracting as few efficient texture features as needed, which can be combined with other classical features (e.g. size, shape, growth rate, etc.) to assist lung nodule diagnosis. Based on a typical texture feature calculation algorithm, namely Haralick features derived from gray-tone spatial-dependence matrices, we calculated two-dimensional (2D) and three-dimensional (3D) Haralick features from the CT images of 905 nodules. All of the CT images were downloaded from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), which is the largest public chest database. The 3D Haralick feature model, computed along thirteen directions, contains more information about the relationships among neighboring voxels across slices than the 2D features computed along only four directions. After comparing the efficiency of 2D and 3D Haralick features for nodule diagnosis, a principal component analysis (PCA) algorithm was used to extract as few efficient texture features as needed. To achieve an objective assessment of the texture features, a support vector machine classifier was trained and tested repeatedly one hundred times, and the statistical results of the classification experiments were described by an average receiver operating characteristic (ROC) curve. The mean value (0.8776) of the area under the ROC curves in our experiments shows that the two extracted 3D Haralick projected features have the potential to assist the classification of benign and malignant nodules.
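    A hedged sketch of the evaluation pipeline described above: project a precomputed texture-feature matrix with PCA, then repeatedly train and test an SVM and average the ROC AUC. The feature values and labels below are random placeholders rather than real LIDC-IDRI Haralick features, and the number of repetitions is reduced from one hundred to keep the demo fast.

```python
# Hedged sketch: PCA-reduced texture features, repeated SVM train/test splits,
# and an average ROC AUC. Data are synthetic stand-ins, not LIDC-IDRI features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.standard_normal((905, 13 * 4))        # 905 nodules x (13 directions x 4 statistics)
y = (X[:, 0] + 0.5 * X[:, 5] + rng.standard_normal(905) > 0).astype(int)  # toy labels

X_pca = PCA(n_components=2).fit_transform(X)  # keep two projected features

aucs = []
for seed in range(20):                        # reduced from 100 repetitions
    X_tr, X_te, y_tr, y_te = train_test_split(X_pca, y, test_size=0.3, random_state=seed)
    clf = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
    aucs.append(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
print("mean AUC over repeated splits:", np.mean(aucs))
```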

13. Grebe dysplasia - prenatal diagnosis based on rendered 3-D ultrasound images of fetal limbs.

    PubMed

    Goncalves, Luis F; Berger, Julie A; Macknis, Jacqueline K; Bauer, Samuel T; Bloom, David A

    2017-01-01

    Grebe dysplasia is a rare skeletal dysplasia characterized by severe acromesomelic shortening of the long bones in a proximal to distal gradient of severity, with bones of the hands and feet more severely affected than those of the forearms and legs, which in turn are more severely affected than the humeri and femora. In addition, the bones of the lower extremities tend to be more severely affected than the bones of the upper extremities. Despite the severe skeletal deformities, the condition is not lethal and surviving individuals can have normal intelligence. Herein we report a case of Grebe dysplasia diagnosed at 20 weeks of gestation. Rendered 3-D ultrasound images of the fetal limbs, particularly of the characteristic tiny and globular-looking fingers and toes, were instrumental in accurately characterizing the phenotype prenatally.

14. 2D and 3D Refraction Based X-ray Imaging Suitable for Clinical and Pathological Diagnosis

    NASA Astrophysics Data System (ADS)

    Ando, Masami; Bando, Hiroko; Chen, Zhihua; Chikaura, Yoshinori; Choi, Chang-Hyuk; Endo, Tokiko; Esumi, Hiroyasu; Gang, Li; Hashimoto, Eiko; Hirano, Keiichi; Hyodo, Kazuyuki; Ichihara, Shu; Jheon, SangHoon; Kim, HongTae; Kim, JongKi; Kimura, Tatsuro; Lee, ChangHyun; Maksimenko, Anton; Ohbayashi, Chiho; Park, SungHwan; Shimao, Daisuke; Sugiyama, Hiroshi; Tang, Jintian; Ueno, Ei; Yamasaki, Katsuhito; Yuasa, Tetsuya

    2007-01-01

The first observation of micropapillary (MP) breast cancer by x-ray dark-field imaging (XDFI) and the first observation of the 3D x-ray internal structure of another breast cancer, ductal carcinoma in situ (DCIS), are reported. The specimen size for the sheet-shaped MP was 26 mm × 22 mm × 2.8 mm, and that for the rod-shaped DCIS was 3.6 mm in diameter and 4.7 mm in height. The experiment was performed at the Photon Factory, KEK: High Energy Accelerator Research Organization. We achieved a high-contrast x-ray image by adopting a thickness-controlled transmission-type angular analyzer that passes only the refraction components from the object for 2D imaging. This provides a high-contrast image of cancer-cell nests, cancer cells and stroma. For x-ray 3D imaging, a new refraction-based reconstruction algorithm for x-ray CT was created. The angular information was acquired by x-ray optics diffraction-enhanced imaging (DEI). Nine hundred data sets were acquired for each reconstruction. A reconstructed CT image may include ductus lactiferi, microcalcification and the breast gland. This modality has the possibility of opening up new clinical and pathological diagnosis using x-rays, offering more precise inspection and detection of early signs of breast cancer.

15. 2D and 3D Refraction Based X-ray Imaging Suitable for Clinical and Pathological Diagnosis

    SciTech Connect

    Ando, Masami; Bando, Hiroko; Ueno, Ei

    2007-01-19

The first observation of micropapillary (MP) breast cancer by x-ray dark-field imaging (XDFI) and the first observation of the 3D x-ray internal structure of another breast cancer, ductal carcinoma in situ (DCIS), are reported. The specimen size for the sheet-shaped MP was 26 mm × 22 mm × 2.8 mm, and that for the rod-shaped DCIS was 3.6 mm in diameter and 4.7 mm in height. The experiment was performed at the Photon Factory, KEK: High Energy Accelerator Research Organization. We achieved a high-contrast x-ray image by adopting a thickness-controlled transmission-type angular analyzer that passes only the refraction components from the object for 2D imaging. This provides a high-contrast image of cancer-cell nests, cancer cells and stroma. For x-ray 3D imaging, a new refraction-based reconstruction algorithm for x-ray CT was created. The angular information was acquired by x-ray optics diffraction-enhanced imaging (DEI). Nine hundred data sets were acquired for each reconstruction. A reconstructed CT image may include ductus lactiferi, microcalcification and the breast gland. This modality has the possibility of opening up new clinical and pathological diagnosis using x-rays, offering more precise inspection and detection of early signs of breast cancer.

16. 3D papillary image capturing by the stereo fundus camera system for clinical diagnosis on retina and optic nerve

    NASA Astrophysics Data System (ADS)

    Motta, Danilo A.; Serillo, André; de Matos, Luciana; Yasuoka, Fatima M. M.; Bagnato, Vanderlei S.; Carvalho, Luis A. V.

    2014-03-01

Glaucoma is the second leading cause of blindness in the world, and this number tends to increase with the rising life expectancy of the population. Glaucoma is an eye condition that leads to damage of the optic nerve. This nerve carries visual information from the eye to the brain, so if it is damaged, the patient's visual quality is compromised. In the majority of cases the damage to the optic nerve is irreversible and is caused by increased intraocular pressure. One of the main diagnostic challenges is detecting the disease, because no symptoms are present in the initial stage; when it is detected, it is already in an advanced stage. Currently the evaluation of the optic disc is made with sophisticated fundus cameras, which are inaccessible to the majority of the Brazilian population. The purpose of this project is to develop a specific fundus camera, without fluorescein angiography or a red-free system, to acquire 3D images of the optic disc region. The innovation is a new, simplified design of a stereo-optical system that enables 3D image capture and, at the same time, quantitative measurement of the excavation and topography of the optic nerve, something traditional fundus cameras do not do. Dedicated hardware and software are developed for this ophthalmic instrument to permit quick capture and printing of high-resolution 3D images and videos of the optic disc region (20° field of view) in mydriatic and non-mydriatic modes.

17. Heterodyne 3D ghost imaging

    NASA Astrophysics Data System (ADS)

    Yang, Xu; Zhang, Yong; Yang, Chenghua; Xu, Lu; Wang, Qiang; Zhao, Yuan

    2016-06-01

Conventional three-dimensional (3D) ghost imaging measures the range of a target based on a pulse flight-time measurement method. Due to the limited sampling rate of the data acquisition system, the range resolution of conventional 3D ghost imaging is usually low. In order to remove the effect of the sampling rate on the range resolution of 3D ghost imaging, a heterodyne 3D ghost imaging (HGI) system is presented in this study. The source of HGI is a continuous-wave laser instead of a pulsed laser. Both the temporal correlation and the spatial correlation of light are utilized to obtain the range image of the target. Through theoretical analysis and numerical simulations, it is demonstrated that HGI can obtain high range resolution images with a low sampling rate.

18. 3D SPECT/CT fusion using image data projection of bone SPECT onto 3D volume-rendered CT images: feasibility and clinical impact in the diagnosis of bone metastasis.

    PubMed

    Ogata, Yuji; Nakahara, Tadaki; Ode, Kenichi; Matsusaka, Yohji; Katagiri, Mari; Iwabuchi, Yu; Itoh, Kazunari; Ichimura, Akira; Jinzaki, Masahiro

    2017-05-01

We developed a method of projecting bone SPECT image data onto 3D volume-rendered CT images for 3D SPECT/CT fusion. The aims of our study were to evaluate its feasibility and clinical usefulness. Whole-body bone scintigraphy (WB) and SPECT/CT scans were performed in 318 cancer patients using a dedicated SPECT/CT system. Volume data of bone SPECT and CT were fused to obtain 2D SPECT/CT images. To generate our 3D SPECT/CT images, colored voxel data of bone SPECT were projected onto the corresponding locations of the volume-rendered CT data after semi-automatic bone extraction. The resultant 3D images were then blended with conventional volume-rendered CT images, allowing the reader to grasp the three-dimensional relationship between bone metabolism and anatomy. WB and SPECT (WB + SPECT), 2D SPECT/CT fusion, and 3D SPECT/CT fusion were evaluated by two independent reviewers in the diagnosis of bone metastasis. The inter-observer variability and diagnostic accuracy of these three image sets were investigated using a four-point diagnostic scale. Increased bone metabolism was found in 744 metastatic sites and 1002 benign changes. On a per-lesion basis, inter-observer agreements in the diagnosis of bone metastasis were 0.72 for WB + SPECT, 0.90 for 2D SPECT/CT, and 0.89 for 3D SPECT/CT. Receiver operating characteristic analyses for the diagnostic accuracy of bone metastasis showed that WB + SPECT, 2D SPECT/CT, and 3D SPECT/CT had an area under the curve of 0.800, 0.983, and 0.983 for reader 1, and 0.865, 0.992, and 0.993 for reader 2, respectively (WB + SPECT vs. 2D or 3D SPECT/CT, p < 0.001; 2D vs. 3D SPECT/CT, n.s.). The durations of interpretation of WB + SPECT, 2D SPECT/CT, and 3D SPECT/CT images were 241 ± 75, 225 ± 73, and 182 ± 71 s for reader 1 and 207 ± 72, 190 ± 73, and 179 ± 73 s for reader 2, respectively. As a result, it took less time to read 3D SPECT/CT images than 2D SPECT/CT (p < 0.0001) or WB
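    As a simplified illustration of the final blending step described above, the sketch below overlays a color-mapped SPECT uptake image onto a grayscale volume-rendered CT image within a bone mask. The rendering, bone extraction, and color mapping used in the study are not specified in the abstract, so all choices here (function name, crude "hot" colormap, alpha weighting) are assumptions.

```python
# Hedged sketch of a SPECT-on-CT blend: gray rendered CT plus a colored uptake
# overlay restricted to bone, weighted by uptake. Arrays are toy placeholders.
import numpy as np

def blend_spect_on_ct(ct_render, spect_proj, bone_mask, alpha=0.5):
    """ct_render: (H, W) grayscale in [0, 1]; spect_proj: (H, W) uptake in [0, 1];
    bone_mask: (H, W) bool. Returns an (H, W, 3) RGB fusion image."""
    rgb = np.repeat(ct_render[..., None], 3, axis=2)          # gray CT as RGB
    hot = np.stack([spect_proj, spect_proj ** 2, spect_proj ** 4], axis=2)  # crude "hot" colormap
    w = alpha * spect_proj[..., None] * bone_mask[..., None]  # blend only over bone, scaled by uptake
    return (1.0 - w) * rgb + w * hot

ct = np.tile(np.linspace(0, 1, 64), (64, 1))                  # toy rendered CT
spect = np.zeros((64, 64)); spect[20:30, 20:30] = 1.0         # focal uptake region
mask = np.ones((64, 64), dtype=bool)
fused = blend_spect_on_ct(ct, spect, mask)
print(fused.shape, fused.min(), fused.max())
```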

19. High speed 3D endoscopic optical frequency domain imaging probe for lung cancer diagnosis

    NASA Astrophysics Data System (ADS)

    Li, Jianan; Feroldi, Fabio; Mo, Jianhua; Helderman, Frank; de Groot, Mattijs; de Boer, Johannes F.

    2013-06-01

We present a miniature motorized endoscopic probe for Optical Frequency Domain Imaging with an outer diameter of 1.65 mm and a rotation speed of 3,000 - 12,500 rpm. The probe has a motorized distal end, which provides a significant advantage over proximally driven probes since it does not require a drive shaft to transfer the rotational torque to the distal end of the probe and functions without a fiber rotary junction. The probe has a focal Full Width at Half Maximum of 9.6 μm and a working distance of 0.47 mm. We analyzed the non-uniform rotation distortion and found a location fluctuation of only 1.87° in repeated measurements of the same object. The probe was integrated in a high-speed Optical Frequency Domain Imaging setup at 1310 nm. We demonstrated its performance by imaging pig bronchial tissue ex vivo and goat lung in vivo.

  1. 3-D threat image projection

    NASA Astrophysics Data System (ADS)

    Yildiz, Yesna O.; Abraham, Douglas Q.; Agaian, Sos; Panetta, Karen

    2008-02-01

Automated Explosive Detection Systems utilizing Computed Tomography perform a series of X-ray scans of passenger bags being checked in at the airport, and produce various 2-D projection images and 3-D volumetric images of the bag. The determination as to whether the passenger bag contains an explosive and needs to be searched manually is performed by trained Transportation Security Administration screeners following an approved protocol. In order to keep the screeners vigilant with regard to screening quality, the Transportation Security Administration has mandated the use of Threat Image Projection on 2-D projection X-ray screening equipment used at all US airports. These algorithms insert artificial visual threats into images of normal passenger bags in order to test the screeners with regard to their screening efficiency and their screening quality at determining threats. This technology for 2-D X-ray systems is proven and is widespread amongst multiple manufacturers of X-ray projection systems. Until now, Threat Image Projection has been unsuccessful at being introduced into 3-D Automated Explosive Detection Systems for numerous reasons. The failure of these prior attempts is mainly due to imaging cues that the screeners pick up on, making it easy for the screeners to discern the presence of the threat image and thus defeating the intended purpose. This paper presents a novel approach for 3-D Threat Image Projection for 3-D Automated Explosive Detection Systems. The method presented here is a projection-based approach where both the threat object and the bag remain in projection sinogram space. Novel approaches have been developed for projection-based object segmentation, projection-based streak reduction used for threat object isolation along with scan orientation independence, and projection-based streak generation for an overall realistic 3-D image. The algorithms are prototyped in MATLAB and C++ and demonstrate non-discernible 3-D threat

  2. Do medical images aid understanding and recall of medical information? An experimental study comparing the experience of viewing no image, a 2D medical image and a 3D medical image alongside a diagnosis.

    PubMed

    Phelps, Emma Elizabeth; Wellings, Richard; Griffiths, Frances; Hutchinson, Charles; Kunar, Melina

    2017-06-01

    This study compared the experience of viewing 3D medical images, 2D medical images and no image presented alongside a diagnosis. We conducted two laboratory experiments, each with 126 healthy participants. Participants heard three diagnoses; one accompanied by 3D medical images, one accompanied by 2D medical images and one with no image. Participants completed a questionnaire after each diagnosis rating their experience. In Experiment 2, half of the participants were informed that image interpretation can be susceptible to errors. Participants preferred to view 3D images alongside a diagnosis (p<0.001) and reported greater understanding (p<0.001), perceived accuracy (p<0.001) and increased trust (p<0.001) when the diagnosis was accompanied by an image compared to no image. There was no significant difference in trust between participants who were informed of errors within image interpretation and those who were not. When presented alongside a diagnosis, medical images may aid patient understanding, recall and trust in medical information. Medical images may be a powerful resource for patients that could be utilised by clinicians during consultations. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  3. [3D display of sequential 2D medical images].

    PubMed

    Lu, Yisong; Chen, Yazhu

    2003-12-01

A detailed review is given in this paper of various current 3D display methods for sequential 2D medical images and of new developments in 3D medical image display. True 3D display, surface rendering, volume rendering, 3D texture mapping and distributed collaborative rendering are discussed in depth. For two kinds of medical applications, real-time navigation systems and high-fidelity diagnosis in computer-aided surgery, different 3D display methods are presented.

  4. Diagnosis of intracranial hemorrhagic lesions: comparison between 3D-SWAN (3D T2*-weighted imaging with multi-echo acquisition) and 2D-T2*-weighted imaging.

    PubMed

    Hayashida, Yoshiko; Kakeda, Shingo; Hiai, Yasuhiro; Ide, Satoshi; Ogasawara, Atsushi; Ooki, Hodaka; Watanabe, Keita; Nishimura, Joji; Ohnari, Norihiro; Korogi, Yukunori

    2014-03-01

3D susceptibility-weighted angiography (SWAN) can produce high-resolution images that yield excellent susceptibility-weighted contrast at a relatively short acquisition time. To compare SWAN and 2D T2*-weighted gradient-echo images (T2*-WI) for their sensitivity in the depiction of cerebral hemorrhagic lesions, we subjected 75 patients with suspected cerebral hemorrhagic lesions to SWAN and T2*-WI at 3T. We first measured the contrast-to-noise ratio (CNR) using an agar phantom that contained different concentrations of superparamagnetic iron oxide (SPIO). The acquisition time for SWAN and T2*-WI was similar (182 vs. 196 s). Neuroradiologists compared the two imaging methods for lesion detectability and conspicuity. The CNR of the phantom was higher on SWAN images. Of the 75 patients, 50 were found to have a total of 278 cerebral hemorrhagic lesions (microbleeds, n = 229 [82.4%]; intracerebral hemorrhage, n = 18 [6.5%]; superficial siderosis, n = 13 [4.7%]; axonal injuries, n = 8 [2.9%]; subarachnoid hemorrhage [SAH] or brain contusion, n = 3 each [1.0%]; subdural hematoma, n = 2 [0.7%]; cavernous hemangioma or dural arteriovenous fistula, n = 1 each [0.4%]). In none of the lesions was the SWAN sequence inferior to T2*-WI with respect to lesion detectability and conspicuity. In fact, SWAN yielded better lesion conspicuity in patients with superficial siderosis and SAH: it detected significantly more lesions than T2*-WI (P < 0.01) and it was particularly useful for the detection of microbleeds and lesions near the skull base. SWAN is equal or superior to standard T2*-WI for the diagnosis of various cerebral hemorrhagic lesions. Because its acquisition time is reasonable, it may replace T2*-WI.

  5. True 3d Images and Their Applications

    NASA Astrophysics Data System (ADS)

Wang, Zheng

    2012-07-01

A true 3D image is a geo-referenced image. Besides having its radiometric information, it also has true 3D ground coordinates (XYZ) for every pixel. A true 3D image, especially a true 3D oblique image, has true 3D coordinates not only for building roofs and/or open ground, but also for all other visible objects on the ground, such as visible building walls/windows and even trees. The true 3D image breaks the 2D barrier of traditional orthophotos by introducing the third dimension (elevation) into the image. From a true 3D image, for example, people will be able to read not only a building's location (XY) but also its height (Z). True 3D images will fundamentally change, if not revolutionize, the way people display, view, extract, use, and represent geospatial information from imagery. In many areas, true 3D images can make a profound impact on how geospatial information is represented, how true 3D ground modeling is performed, and how real-world scenes are presented. This paper first gives a definition and description of a true 3D image, followed by a brief review of the key advancements in geospatial technologies that have made the creation of true 3D images possible. Next, the paper introduces what a true 3D image is made of. Then, the paper discusses some possible contributions and impacts that true 3D images can make on geospatial information fields. At the end, the paper presents a list of the benefits of having and using true 3D images and the applications of true 3D images in a couple of 3D city modeling projects.

  6. Prenatal diagnosis of a patent urachus cyst with the use of 2D, 3D, 4D ultrasound and fetal magnetic resonance imaging.

    PubMed

    Fuchs, F; Picone, O; Levaillant, J M; Mabille, M; Mas, A E; Frydman, R; Senat, M V

    2008-01-01

Patent urachus cyst is a rare umbilical anomaly, which is poorly detected prenatally and frequently confused with pseudo bladder exstrophy or omphalocele. A 27-year-old woman was referred to our prenatal diagnosis centre at 18 weeks of gestation after diagnosis of a megabladder and 2 umbilical cord cysts. Subsequent 2D, 3D and 4D ultrasound examinations and fetal magnetic resonance imaging (MRI) revealed a typical umbilical cyst and an extra-abdominal cyst communicating with the vertex of the fetal bladder through a small channel that increased in size when the fetus voided urine. Termination of pregnancy occurred at 31 weeks because of associated cerebral septal agenesis, and autopsy confirmed the prenatal diagnosis of urachus cyst. Few cases of urachus cyst diagnosed prenatally are reported in the literature, but none were associated with other extra-abdominal disorders and none used 3D, 4D and fetal MRI. Our case illustrates the value of 3D and 4D ultrasound examinations in prenatal diagnosis. This could help pediatric surgeons explain neonatal surgical repair and plastic reconstruction to the couple during the prenatal period.

  7. Computer-aided diagnosis: a 3D segmentation method for lung nodules in CT images by use of a spiral-scanning technique

    NASA Astrophysics Data System (ADS)

    Wang, Jiahui; Engelmann, Roger; Li, Qiang

    2008-03-01

    Lung nodule segmentation in computed tomography (CT) plays an important role in computer-aided detection, diagnosis, and quantification systems for lung cancer. In this study, we developed a simple but accurate nodule segmentation method in three-dimensional (3D) CT. First, a volume of interest (VOI) was determined at the location of a nodule. We then transformed the VOI into a two-dimensional (2D) image by use of a "spiral-scanning" technique, in which a radial line originating from the center of the VOI spirally scanned the VOI. The voxels scanned by the radial line were arranged sequentially to form a transformed 2D image. Because the surface of a nodule in 3D image became a curve in the transformed 2D image, the spiral-scanning technique considerably simplified our segmentation method and enabled us to obtain accurate segmentation results. We employed a dynamic programming technique to delineate the "optimal" outline of a nodule in the 2D image, which was transformed back into the 3D image space to provide the interior of the nodule. The proposed segmentation method was trained on the first and was tested on the second Lung Image Database Consortium (LIDC) datasets. An overlap between nodule regions provided by computer and by the radiologists was employed as a performance metric. The experimental results on the LIDC database demonstrated that our segmentation method provided relatively robust and accurate segmentation results with mean overlap values of 66% and 64% for the nodules in the first and second LIDC datasets, respectively, and would be useful for the quantification, detection, and diagnosis of lung cancer.
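    The sketch below illustrates the spiral-scanning idea under stated assumptions: rays from the VOI center sweep a spherical spiral, and each ray's samples form one column of the transformed 2D image, so a spherical nodule boundary appears as a roughly flat band. The number of rays, radii, and spiral turns are arbitrary choices, and the dynamic-programming boundary search used in the paper is not included.

```python
# Hedged sketch of a spiral-scanning transform: each spiral ray from the VOI
# center becomes one column of a 2D image (nearest-neighbor sampling).
import numpy as np

def spiral_unwrap(voi, n_rays=180, n_radii=30):
    """voi: cubic 3D array; returns an (n_radii, n_rays) transformed 2D image."""
    c = (np.array(voi.shape) - 1) / 2.0
    r_max = voi.shape[0] / 2.0 - 1
    out = np.zeros((n_radii, n_rays))
    for j in range(n_rays):
        t = j / (n_rays - 1)
        phi = np.arccos(1 - 2 * t)             # polar angle sweeps 0..pi
        theta = 2 * np.pi * 8 * t              # azimuth winds 8 turns -> spiral on sphere
        d = np.array([np.sin(phi) * np.cos(theta), np.sin(phi) * np.sin(theta), np.cos(phi)])
        for i, r in enumerate(np.linspace(0, r_max, n_radii)):
            z, y, x = np.round(c + r * d).astype(int)   # nearest-neighbor sample
            out[i, j] = voi[z, y, x]
    return out

# Toy VOI: a bright sphere ("nodule") in a dark background.
zz, yy, xx = np.mgrid[0:41, 0:41, 0:41]
voi = ((zz - 20) ** 2 + (yy - 20) ** 2 + (xx - 20) ** 2 < 8 ** 2).astype(float)
img2d = spiral_unwrap(voi)
print(img2d.shape)   # the sphere's surface appears as a flat band near row 12
```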

  8. 3D carotid plaque MR Imaging

    PubMed Central

    Parker, Dennis L.

    2015-01-01

SYNOPSIS There has been significant progress in 3D carotid plaque magnetic resonance imaging techniques in recent years. 3D plaque imaging clearly represents the future of clinical use. With effective flow suppression techniques, a choice of different contrast-weighted acquisitions, and time-efficient imaging approaches, 3D plaque imaging offers flexible imaging plane and view-angle analysis, large coverage, multi-vascular-bed capability, and can even be used for fast screening. PMID:26610656

  9. Computer Assisted Cancer Device - 3D Imaging

    DTIC Science & Technology

    2006-10-01

tomosynthesis images of the breast. iCAD has identified several sources of 3D tomosynthesis data, and has begun adapting its image analysis...collaborative relationships with major manufacturers of tomosynthesis equipment. iCAD believes that tomosynthesis, a 3D breast imaging technique...purported advantages of tomosynthesis relative to conventional mammography include improved lesion visibility, improved lesion detectability and

  10. 3D ultrasound imaging in image-guided intervention.

    PubMed

    Fenster, Aaron; Bax, Jeff; Neshat, Hamid; Cool, Derek; Kakani, Nirmal; Romagnoli, Cesare

    2014-01-01

    Ultrasound imaging is used extensively in diagnosis and image-guidance for interventions of human diseases. However, conventional 2D ultrasound suffers from limitations since it can only provide 2D images of 3-dimensional structures in the body. Thus, measurement of organ size is variable, and guidance of interventions is limited, as the physician is required to mentally reconstruct the 3-dimensional anatomy using 2D views. Over the past 20 years, a number of 3-dimensional ultrasound imaging approaches have been developed. We have developed an approach that is based on a mechanical mechanism to move any conventional ultrasound transducer while 2D images are collected rapidly and reconstructed into a 3D image. In this presentation, 3D ultrasound imaging approaches will be described for use in image-guided interventions.

  11. Digital holography and 3-D imaging.

    PubMed

    Banerjee, Partha; Barbastathis, George; Kim, Myung; Kukhtarev, Nickolai

    2011-03-01

    This feature issue on Digital Holography and 3-D Imaging comprises 15 papers on digital holographic techniques and applications, computer-generated holography and encryption techniques, and 3-D display. It is hoped that future work in the area leads to innovative applications of digital holography and 3-D imaging to biology and sensing, and to the development of novel nonlinear dynamic digital holographic techniques.

  12. 3D ultrafast ultrasound imaging in vivo

    NASA Astrophysics Data System (ADS)

    Provost, Jean; Papadacci, Clement; Esteban Arango, Juan; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32  ×  32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra—and inter-observer variability.

  13. 3D ultrafast ultrasound imaging in vivo.

    PubMed

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-07

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32  ×  32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra--and inter-observer variability.

  14. 3D Backscatter Imaging System

    NASA Technical Reports Server (NTRS)

    Turner, D. Clark (Inventor); Whitaker, Ross (Inventor)

    2016-01-01

    Systems and methods for imaging an object using backscattered radiation are described. The imaging system comprises both a radiation source for irradiating an object that is rotationally movable about the object, and a detector for detecting backscattered radiation from the object that can be disposed on substantially the same side of the object as the source and which can be rotationally movable about the object. The detector can be separated into multiple detector segments with each segment having a single line of sight projection through the object and so detects radiation along that line of sight. Thus, each detector segment can isolate the desired component of the backscattered radiation. By moving independently of each other about the object, the source and detector can collect multiple images of the object at different angles of rotation and generate a three dimensional reconstruction of the object. Other embodiments are described.

  15. 3D Ultrafast Ultrasound Imaging In Vivo

    PubMed Central

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-01-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative real-time imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in three dimensions based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32×32 matrix-array probe. Its capability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3-D Shear-Wave Imaging, 3-D Ultrafast Doppler Imaging and finally 3D Ultrafast combined Tissue and Flow Doppler. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3-D Ultrafast Doppler was used to obtain 3-D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, for the first time, the complex 3-D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, and the 3-D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3-D Ultrafast Ultrasound Imaging for the 3-D real-time mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. PMID:25207828

  16. Magnetic resonance imaging-targeted, 3D transrectal ultrasound-guided fusion biopsy for prostate cancer: Quantifying the impact of needle delivery error on diagnosis

    SciTech Connect

    Martin, Peter R.; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.

    2014-07-15

Purpose: Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided “fusion” prostate biopsy intends to reduce the ∼23% false negative rate of clinical two-dimensional TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsies continue to yield false negatives. Therefore, the authors propose to investigate how biopsy system needle delivery error affects the probability of sampling each tumor, by accounting for uncertainties due to guidance system error, image registration error, and irregular tumor shapes. Methods: T2-weighted, dynamic contrast-enhanced T1-weighted, and diffusion-weighted prostate MRI and 3D TRUS images were obtained from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D tumor surfaces that were registered to the 3D TRUS images using an iterative closest point prostate surface-based method to yield 3D binary images of the suspicious regions in the TRUS context. The probability P of obtaining a sample of tumor tissue in one biopsy core was calculated by integrating a 3D Gaussian distribution over each suspicious region domain. Next, the authors performed an exhaustive search to determine the maximum root mean squared error (RMSE, in mm) of a biopsy system that gives P ≥ 95% for each tumor sample, and then repeated this procedure for equal-volume spheres corresponding to each tumor sample. Finally, the authors investigated the effect of probe-axis-direction error on measured tumor burden by studying the relationship between the error and estimated percentage of core involvement. Results: Given a 3.5 mm RMSE for contemporary fusion biopsy systems, P ≥ 95% for 21 out of 81 tumors. The authors determined that for a biopsy system with 3.5 mm RMSE, one cannot expect to sample tumors of approximately 1 cm³ or smaller with 95% probability with only one biopsy core. The predicted maximum RMSE giving P ≥ 95% for each
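    To make the probability calculation above concrete, the sketch below estimates, by Monte Carlo sampling, the chance that a needle tip with Gaussian-distributed delivery error lands inside an equal-volume sphere target. How the study maps a single RMSE value to per-axis standard deviations, and its handling of core geometry, registration error, and irregular contours, are not reproduced here, so these numbers will not match the paper's thresholds.

```python
# Hedged sketch: Monte Carlo estimate of the probability that a Gaussian-distributed
# needle sample lands inside a spherical target; a point-target simplification only.
import numpy as np

def hit_probability(tumor_radius_mm, rmse_mm, n_samples=200_000, seed=0):
    sigma = rmse_mm / np.sqrt(3.0)                       # one possible per-axis convention
    rng = np.random.default_rng(seed)
    pts = rng.normal(scale=sigma, size=(n_samples, 3))   # simulated needle-tip offsets
    return float((np.linalg.norm(pts, axis=1) <= tumor_radius_mm).mean())

# Equal-volume sphere radius for an assumed target volume of 1000 mm^3 (~6.2 mm).
r = (3.0 * 1000.0 / (4.0 * np.pi)) ** (1.0 / 3.0)
for rmse in (2.0, 3.5, 5.0):
    print(f"RMSE {rmse} mm -> P(hit) ~ {hit_probability(r, rmse):.3f}")
```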

  17. Ultrafast 3D imaging by holography

    NASA Astrophysics Data System (ADS)

    Awatsuji, Yasuhiro

    2017-02-01

As an ultrafast 3D imaging technique, an improved light-in-flight recording by holography using a femtosecond laser is presented. To record a 3D image of light propagation, a voluminous light-scattering medium is introduced into light-in-flight recording by holography. A mode-locked Ti:Sapphire laser is employed as the optical source. To generate the 3D image of the propagating light, a voluminous light-scattering medium made of gelatin jelly is set in the optical path of the object wave of the holographic setup. A 3D motion picture of the propagation of a femtosecond light pulse was achieved over 260 ps with 220 fs temporal resolution. Digital recording of the 3D image of light propagation is also presented. To record the 3D image of the light propagation, digital holography is combined with light-in-flight recording by holography using a voluminous light-scattering medium. The hologram is recorded with an image sensor such as a CCD image sensor. The image of the light is reconstructed from the digitally recorded hologram by computer. To obtain the motion picture of the 3D image of the light propagation, a set of hologram pieces consisting of 512 × 512 pixels each is extracted from the whole area of the digitally recorded hologram. The position of the extracted piece on the recorded hologram is shifted, piece by piece, along the direction in which the reference optical pulse swept across the image sensor. The pieces are reconstructed sequentially, and a 3D digital motion picture of the propagation of the femtosecond light pulse is thereby obtained. The recordable time of the motion picture was 60 ps.

  18. 3D imaging in forensic odontology.

    PubMed

    Evans, Sam; Jones, Carl; Plassmann, Peter

    2010-06-16

This paper describes the investigation of a new 3D capture method for acquiring and subsequently forensically analysing bite mark injuries on human skin. When documenting bite marks with standard 2D cameras, errors in photographic technique can occur if best practice is not followed. Subsequent forensic analysis of the mark is problematic when a 3D structure is recorded in a 2D space. Although strict guidelines (BAFO) exist, these are time-consuming to follow and, due to their complexity, may produce errors. A 3D image capture and processing system might avoid the problems resulting from the 2D reduction process, simplifying the guidelines and reducing errors. Proposed solution: a series of experiments is described in this paper to demonstrate that a 3D system has the potential to produce suitable results. The experiments tested the precision and accuracy of the traditional 2D and 3D methods. A 3D image capture device minimises the amount of angular distortion, therefore such a system has the potential to create more robust forensic evidence for use in courts. A first set of experiments tested and demonstrated which method of forensic analysis creates the least amount of intra-operator error. A second set tested and demonstrated which method of image capture creates the least amount of inter-operator error and visual distortion. In a third set, the effects of angular distortion on 2D and 3D methods of image capture were evaluated.

  19. Nonlaser-based 3D surface imaging

    SciTech Connect

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J.

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capturing, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
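As a concrete illustration of the stereo-vision route mentioned above, the snippet below computes a disparity map for a rectified CCD camera pair with OpenCV block matching and converts it to depth; the synthetic image pair and the calibration values (focal length, baseline) are assumptions for illustration, not data from the original work.

```python
import cv2
import numpy as np

# Synthetic stand-ins for a rectified camera pair: the "right" view is the
# "left" view shifted horizontally, which yields a roughly constant disparity.
left = (np.random.rand(240, 320) * 255).astype(np.uint8)
right = np.roll(left, -8, axis=1)

# Block-matching stereo: disparity is inversely proportional to depth.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# With focal length f (pixels) and baseline B (metres), depth Z = f * B / disparity.
f, B = 800.0, 0.12   # assumed calibration values
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f * B / disparity[valid]
print(float(np.median(disparity[valid])))
```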

  20. Midbrain segmentation in transcranial 3D ultrasound for Parkinson diagnosis.

    PubMed

    Ahmadi, Seyed-Ahmad; Baust, Maximilian; Karamalis, Athanasios; Plate, Annika; Boetzel, Kai; Klein, Tassilo; Navab, Nassir

    2011-01-01

    Ultrasound examination of the human brain through the temporal bone window, also called transcranial ultrasound (TC-US), is a completely non-invasive and cost-efficient technique, which has established itself for the differential diagnosis of Parkinson's Disease (PD) in the past decade. The method requires spatial analysis of ultrasound hyperechogenicities produced by pathological changes within the Substantia Nigra (SN), which belongs to the basal ganglia within the midbrain. Related work on computer-aided PD diagnosis shows the urgent need for an accurate and robust segmentation of the midbrain from 3D TC-US, which is an extremely difficult task due to the poor image quality of TC-US. In contrast to the 2D segmentations of earlier approaches, we develop the first method for semi-automatic midbrain segmentation from 3D TC-US and demonstrate its potential benefit on a database of 11 diagnosed Parkinson patients and 11 healthy controls.

  1. Miniaturized 3D microscope imaging system

    NASA Astrophysics Data System (ADS)

    Lan, Yung-Sung; Chang, Chir-Weei; Sung, Hsin-Yueh; Wang, Yen-Chang; Chang, Cheng-Yi

    2015-05-01

    We designed and assembled a portable 3-D miniature microscopic imaging system with a size of 35 × 35 × 105 mm³. By integrating a microlens array (MLA) into the optical train of a handheld microscope, the image of a biological specimen can be captured in a single shot for ease of use. With the light field raw data and software, the focal plane can be changed digitally and the 3-D image can be reconstructed after the image has been taken. To localize an object in a 3-D volume, an automated data analysis algorithm to precisely determine depth position is needed. The ability to create focal stacks from a single image allows moving specimens to be recorded. Applying the light field microscope algorithm to these focal stacks produces a set of cross sections, which can be visualized using 3-D rendering. Furthermore, we have developed a series of design rules to enhance pixel usage efficiency and reduce the crosstalk between microlenses in order to obtain good image quality. In this paper, we demonstrate a handheld light field microscope (HLFM) that distinguishes two different-color fluorescence particles separated by a cover glass over a 600 µm range, show its focal stacks, and give their 3-D positions.

  2. 3D integral imaging with optical processing

    NASA Astrophysics Data System (ADS)

    Martínez-Corral, Manuel; Martínez-Cuenca, Raúl; Saavedra, Genaro; Javidi, Bahram

    2008-04-01

    Integral imaging (InI) systems are imaging devices that provide auto-stereoscopic images of 3D intensity objects. Since the birth of this new technology, InI systems have satisfactorily overcome many of their initial drawbacks. Basically, two kinds of procedures have been used: digital and optical procedures. The "3D Imaging and Display Group" at the University of Valencia, with the essential collaboration of Prof. Javidi, has centered its efforts on 3D InI with optical processing. Among other achievements, our Group has proposed annular amplitude modulation for enlargement of the depth of field, dynamic focusing for reduction of the facet-braiding effect, and the TRES and MATRES devices to enlarge the viewing angle.

  3. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received
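The 3D wavelet front end described above can be illustrated with a short sketch; this uses PyWavelets as an assumed dependency and a crude coefficient-thresholding step as a stand-in for lossy coding, so it shows only the decomposition idea, not the ICER-3D context modeler or entropy coder.

```python
import numpy as np
import pywt

# Toy hyperspectral cube: (bands, rows, cols).
cube = np.random.rand(32, 128, 128).astype(np.float32)

# 3D multilevel wavelet decomposition exploiting spatial and spectral correlation.
coeffs = pywt.wavedecn(cube, wavelet="db2", level=3)

# Crude "lossy" step for illustration only: zero out small detail coefficients.
arr, slices = pywt.coeffs_to_array(coeffs)
threshold = 0.05 * np.abs(arr).max()
arr[np.abs(arr) < threshold] = 0.0
reconstructed = pywt.waverecn(
    pywt.array_to_coeffs(arr, slices, output_format="wavedecn"), wavelet="db2")

# Compare against the original (reconstruction may be padded slightly larger).
print(np.abs(cube - reconstructed[:32, :128, :128]).max())
```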

  4. Acquisition and applications of 3D images

    NASA Astrophysics Data System (ADS)

    Sterian, Paul; Mocanu, Elena

    2007-08-01

    The moiré fringe method and its analysis, up to medical and entertainment applications, are discussed in this paper. We describe the procedure for capturing 3D images with an Inspeck camera, which is a real-time 3D shape acquisition system based on structured light techniques. The method is a high-resolution one. After processing the images on a computer, we can use the data for creating fashionable objects by laser-engraving them with a Q-switched Nd:YAG. In the medical field we mention plastic surgery and the replacement of X-ray imaging, especially in pediatric use.

  5. Active segmentation of 3D axonal images.

    PubMed

    Muralidhar, Gautam S; Gopinath, Ajay; Bovik, Alan C; Ben-Yakar, Adela

    2012-01-01

    We present an active contour framework for segmenting neuronal axons in 3D confocal microscopy data. Our work is motivated by the need to conduct high-throughput experiments involving microfluidic devices and femtosecond lasers to study the genetic mechanisms behind nerve regeneration and repair. While most applications of active contours have focused on segmenting closed regions in 2D medical and natural images, there have not been many applications that have focused on segmenting open-ended curvilinear structures in 2D or higher dimensions. The active contour framework we present here ties together a well-known 2D active contour model [5] with the physics of projection imaging geometry to yield a segmented axon in 3D. Qualitative results illustrate the promise of our approach for segmenting neuronal axons in 3D confocal microscopy data.

  6. 3-D imaging of the CNS.

    PubMed

    Runge, V M; Gelblum, D Y; Wood, M L

    1990-01-01

    3-D gradient echo techniques, and in particular FLASH, represent a significant advance in MR imaging strategy, allowing thin-section, high-resolution imaging through a large region of interest. Anatomical areas of application include the brain, spine, and extremities, although the majority of work to date has been performed in the brain. Superior T1 contrast, and thus sensitivity to the presence of Gd-DTPA, is achieved with 3-D FLASH when compared to the 2-D spin echo technique. There is marked arterial and venous enhancement following Gd-DTPA administration on 3-D FLASH, a less common finding with 2-D spin echo. Enhancement of the falx and tentorium is also more prominent. From a single data acquisition, requiring less than 11 min of scan time, high-resolution reformatted sagittal, coronal, and axial images can be obtained, in addition to sections in any arbitrary plane. Tissue segmentation techniques can be applied and lesions displayed in three dimensions. These results may lead to the replacement of 2-D spin echo with 3-D FLASH for high-resolution T1-weighted MR imaging of the CNS, particularly in the study of mass lesions and structural anomalies. The application of similar T2-weighted gradient echo techniques may follow; however, the signal-to-noise ratio that can be achieved remains a potential limitation.

  7. 3-D Image of Vesta Eastern Hemisphere

    NASA Image and Video Library

    2012-01-23

    This anaglyph shows the topography of Vesta's eastern hemisphere; equatorial troughs are visible around asteroid Vesta's equator, and north of these troughs there are a number of highly degraded, old, large craters. You need 3-D glasses to view this image.

  8. 3D reconstruction, visualization, and measurement of MRI images

    NASA Astrophysics Data System (ADS)

    Pandya, Abhijit S.; Patel, Pritesh P.; Desai, Mehul B.; Desai, Paramtap

    1999-03-01

    This paper primarily focuses on taking 2D medical image data, which often come from magnetic resonance imaging, and reconstructing them into 3D volumetric images. Clinical diagnosis and therapy planning using 2D medical images can become a torturous problem for a physician. For example, our 2D breast images of a patient mimic a breast carcinoma. In reality, the patient has 'fat necrosis', a benign breast lump. Physicians need powerful, accurate and interactive 3D visualization systems to extract anatomical details and examine the root cause of the problem. Our proposal overcomes the above-mentioned limitations through the development of volume rendering algorithms and extensive use of parallel, distributed and neural network computing strategies. MRI coupled with 3D imaging provides a reliable method for quantifying 'fat necrosis' characteristics and progression. Our 3D interactive application enables a physician to compute spatial measurements and quantitative evaluations and, from a general point of view, use all 3D interactive tools that can help to plan a complex surgical operation. The capability of our medical imaging application can be extended to reconstruct and visualize 3D volumetric brain images. Our application promises to be an important tool in neurological surgery planning, time and cost reduction.

  9. Walker Ranch 3D seismic images

    DOE Data Explorer

    Robert J. Mellors

    2016-03-01

    Amplitude images (both vertical and depth slices) extracted from 3D seismic reflection survey over area of Walker Ranch area (adjacent to Raft River). Crossline spacing of 660 feet and inline of 165 feet using a Vibroseis source. Processing included depth migration. Micro-earthquake hypocenters on images. Stratigraphic information and nearby well tracks added to images. Images are embedded in a Microsoft Word document with additional information. Exact location and depth restricted for proprietary reasons. Data collection and processing funded by Agua Caliente. Original data remains property of Agua Caliente.

  10. Backhoe 3D "gold standard" image

    NASA Astrophysics Data System (ADS)

    Gorham, LeRoy; Naidu, Kiranmai D.; Majumder, Uttam; Minardi, Michael A.

    2005-05-01

    ViSUAl-D (VIsual Sar Using ALl Dimensions), a 2004 DARPA/IXO seedling effort, is developing a capability for reliable high-confidence ID from standoff ranges. Recent conflicts have demonstrated that the warfighter would greatly benefit from the ability to ID targets beyond visual and electro-optical ranges [1]. Forming optical-quality SAR images while exploiting full polarization, wide angles, and large bandwidth would be key evidence that such a capability is achievable. Using data generated by the Xpatch EM scattering code, ViSUAl-D investigates all degrees of freedom available to the radar designer, including 6 GHz bandwidth, full polarization and angle sampling over 2π steradians (upper hemisphere), in order to produce a "literal" image or representation of the target. This effort includes the generation of a "Gold Standard" image that can be produced at X-band utilizing all available target data. This "Gold Standard" image of the backhoe will serve as a test bed for future, more relevant military targets and their image development. The seedling team produced a public release data set, which was released at the 2004 SPIE conference, as well as a 3D "Gold Standard" backhoe image using a 3D image formation algorithm. This paper describes the full backhoe data set, the image formation algorithm, the visualization process and the resulting image.

  11. Tilted planes in 3D image analysis

    NASA Astrophysics Data System (ADS)

    Pargas, Roy P.; Staples, Nancy J.; Malloy, Brian F.; Cantrell, Ken; Chhatriwala, Murtuza

    1998-03-01

    Reliable 3D whole-body scanners which output digitized 3D images of a complete human body are now commercially available. This paper describes a software package, called 3DM, being developed by researchers at Clemson University, which manipulates and extracts measurements from such images. The focus of this paper is on tilted planes, a 3DM tool which allows a user to define a plane through a scanned image, tilt it in any direction, and effectively define three disjoint regions on the image: the points on the plane and the points on either side of the plane. With tilted planes, the user can accurately take measurements required in applications such as apparel manufacturing. The user can manually segment the body rather precisely. Tilted planes assist the user in analyzing the form of the body and classifying the body in terms of body shape. Finally, tilted planes allow the user to eliminate extraneous and unwanted points often generated by a 3D scanner. This paper describes the user interface for tilted planes, the equations defining the plane as the user moves it through the scanned image, an overview of the algorithms, and the interaction of the tilted plane feature with other tools in 3DM.
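The three-way split described above comes down to signed distances from each scanned point to the plane. A minimal sketch, with a hypothetical helper `split_by_plane` and arbitrary example values, could look like this:

```python
import numpy as np

def split_by_plane(points, plane_point, plane_normal, tol=1e-6):
    """Partition scanned 3D points into three sets relative to a tilted plane:
    on the plane, on its positive side, and on its negative side."""
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    signed = (points - np.asarray(plane_point, dtype=float)) @ n  # signed distances
    on = np.abs(signed) <= tol
    return points[on], points[signed > tol], points[signed < -tol]

# Toy body scan: random points; plane through (0, 0, 0.5) tilted toward +x.
cloud = np.random.rand(1000, 3)
on_plane, above, below = split_by_plane(cloud, plane_point=(0, 0, 0.5),
                                        plane_normal=(0.3, 0.0, 1.0))
print(len(on_plane), len(above), len(below))
```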

  12. Feasibility of 3D harmonic contrast imaging.

    PubMed

    Voormolen, M M; Bouakaz, A; Krenning, B J; Lancée, C T; ten Cate, F J; de Jong, N

    2004-04-01

    Improved endocardial border delineation with the application of contrast agents should allow for less complex and faster tracing algorithms for left ventricular volume analysis. We developed a fast-rotating phased array transducer for 3D imaging of the heart with harmonic capabilities, making it suitable for contrast imaging. In this study the feasibility of 3D harmonic contrast imaging is evaluated in vitro. A commercially available tissue-mimicking flow phantom was used in combination with SonoVue. Backscatter power spectra from a tissue and a contrast region of interest were calculated from recorded radio frequency data. The spectra, and the contrast-to-tissue ratio extracted from these spectra, were used to optimize the excitation frequency, the pulse length and the receive filter settings of the transducer. Frequencies ranging from 1.66 to 2.35 MHz and pulse lengths of 1.5, 2 and 2.5 cycles were explored. An increase of more than 15 dB in the contrast-to-tissue ratio was found around the second harmonic compared with the fundamental level at an optimal excitation frequency of 1.74 MHz and a pulse length of 2.5 cycles. Using the optimal settings for 3D harmonic contrast recordings, volume measurements of a left-ventricular-shaped agar phantom were performed. Without contrast, the extracted volume data resulted in a volume error of 1.5%; with contrast, an accuracy of 3.8% was achieved. The results show the feasibility of accurate volume measurements from 3D harmonic contrast images. Further investigations will include the clinical evaluation of the presented technique for improved assessment of the heart.
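The contrast-to-tissue ratio used above is simply the ratio of band power in the contrast and tissue regions of interest, evaluated around the harmonic of interest. Below is a minimal sketch using Welch power spectra; the sampling rate, band limits and random stand-in RF data are assumptions, not values from the study.

```python
import numpy as np
from scipy.signal import welch

fs = 40e6    # assumed RF sampling rate (Hz)
f0 = 1.74e6  # transmit frequency reported above

def band_power(rf_lines, f_lo, f_hi):
    """Mean power of RF lines (one line per row) inside a frequency band."""
    f, pxx = welch(rf_lines, fs=fs, nperseg=256, axis=-1)
    band = (f >= f_lo) & (f <= f_hi)
    return pxx[:, band].mean()

# Stand-ins for RF data from the contrast and tissue regions of interest.
rf_contrast = np.random.randn(64, 2048)
rf_tissue = np.random.randn(64, 2048)

# Contrast-to-tissue ratio (dB) in a band around the second harmonic.
lo, hi = 2 * f0 - 0.5e6, 2 * f0 + 0.5e6
ctr_db = 10 * np.log10(band_power(rf_contrast, lo, hi) / band_power(rf_tissue, lo, hi))
print(f"CTR around {2 * f0 / 1e6:.2f} MHz: {ctr_db:.1f} dB")
```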

  13. 3D imaging system for biometric applications

    NASA Astrophysics Data System (ADS)

    Harding, Kevin; Abramovich, Gil; Paruchura, Vijay; Manickam, Swaminathan; Vemury, Arun

    2010-04-01

    There is a growing interest in the use of 3D data for many new applications beyond traditional metrology areas. In particular, using 3D data to obtain shape information of both people and objects for applications ranging from identification to game inputs does not require high degrees of calibration or resolutions in the tens-of-microns range, but does require a means to quickly and robustly collect data in the millimeter range. Systems using methods such as structured light or stereo have seen wide use in measurements, but due to the use of a triangulation angle, and thus the need for a separated second viewpoint, may not be practical for looking at a subject 10 meters away. Even when working close to a subject, such as capturing hands or fingers, the triangulation angle causes occlusions, shadows, and a physically large system that may get in the way. This paper will describe methods to collect medium-resolution 3D data, plus high-resolution 2D images, using a line-of-sight approach. The methods use no moving parts and as such are robust to movement (for portability), reliable, and potentially very fast at capturing 3D data. This paper will describe the optical methods considered, variations on these methods, and present experimental data obtained with the approach.

  14. 3D quantitative analysis of brain SPECT images

    NASA Astrophysics Data System (ADS)

    Loncaric, Sven; Ceskovic, Ivan; Petrovic, Ratimir; Loncaric, Srecko

    2001-07-01

    The main purpose of this work is to develop a computer-based technique for quantitative analysis of 3-D brain images obtained by single photon emission computed tomography (SPECT). In particular, the volume and location of the ischemic lesion and penumbra are important for early diagnosis and treatment of infarcted regions of the brain. SPECT imaging is typically used as a diagnostic tool to assess the size and location of the ischemic lesion. The segmentation method presented in this paper utilizes a 3-D deformable model in order to determine the size and location of the regions of interest. The evolution of the model is computed using a level-set implementation of the algorithm. In addition to the 3-D deformable model, the method utilizes edge detection and region growing for pre-processing. Initial experimental results have shown that the method is useful for SPECT image analysis.

  15. Signal subspace registration of 3D images

    NASA Astrophysics Data System (ADS)

    Soumekh, Mehrdad

    1998-06-01

    This paper addresses the problem of fusing the information content of two uncalibrated sensors. This problem arises in registering images of a scene when it is viewed via two different sensory systems, or detecting change in a scene when it is viewed at two different time points by a sensory system (or via two different sensory systems or observation channels). We are concerned with sensory systems which have not only a relative shift, scaling and rotational calibration error, but also an unknown point spread function (that is time-varying for a single sensor, or different for two sensors). By modeling one image in terms of an unknown linear combination of the other image, its powers and their spatially-transformed (shift, rotation and scaling) versions, a signal subspace processing is developed for fusing uncalibrated sensors. Numerical results with realistic 3D magnetic resonance images of a patient with multiple sclerosis, which are acquired at two different time points, are provided.

  16. Medical image segmentation using 3D MRI data

    NASA Astrophysics Data System (ADS)

    Voronin, V.; Marchuk, V.; Semenishchev, E.; Cen, Yigang; Agaian, S.

    2017-05-01

    Precise segmentation of three-dimensional (3D) magnetic resonance imaging (MRI) images can be a very useful computer-aided diagnosis (CAD) tool in clinical routines. Accurate automatic extraction of a 3D component from images obtained by magnetic resonance imaging (MRI) is a challenging segmentation problem due to the small size of the objects of interest (e.g., blood vessels, bones) in each 2D MRA slice and the complex surrounding anatomical structures. Our objective is to develop a specific segmentation scheme for accurately extracting parts of bones from MRI images. In this paper, we use a segmentation algorithm to extract the parts of bones from magnetic resonance imaging (MRI) data sets based on a modified active contour method. As a result, the proposed method demonstrates good accuracy in comparison with existing segmentation approaches on real MRI data.

  17. Pattern based 3D image Steganography

    NASA Astrophysics Data System (ADS)

    Thiyagarajan, P.; Natarajan, V.; Aghila, G.; Prasanna Venkatesan, V.; Anitha, R.

    2013-03-01

    This paper proposes a new high-capacity steganographic scheme using 3D geometric models. The novel algorithm re-triangulates a part of a triangle mesh and embeds the secret information into the newly added vertex positions of the triangle mesh. Up to nine bits of secret data can be embedded into the vertices of a triangle without causing any changes in the visual quality or the geometric properties of the cover image. Experimental results show that the proposed algorithm is secure, with high capacity and a low distortion rate. Our algorithm also resists uniform affine transformations such as cropping, rotation and scaling. The performance of the method is also compared with other existing 3D steganography algorithms.

  18. 3D seismic image processing for interpretation

    NASA Astrophysics Data System (ADS)

    Wu, Xinming

    Extracting fault, unconformity, and horizon surfaces from a seismic image is useful for interpretation of geologic structures and stratigraphic features. Although interpretation of these surfaces has been automated to some extent by others, significant manual effort is still required for extracting each type of these geologic surfaces. I propose methods to automatically extract all the fault, unconformity, and horizon surfaces from a 3D seismic image. To a large degree, these methods just involve image processing or array processing which is achieved by efficiently solving partial differential equations. For fault interpretation, I propose a linked data structure, which is simpler than triangle or quad meshes, to represent a fault surface. In this simple data structure, each sample of a fault corresponds to exactly one image sample. Using this linked data structure, I extract complete and intersecting fault surfaces without holes from 3D seismic images. I use the same structure in subsequent processing to estimate fault slip vectors. I further propose two methods, using precomputed fault surfaces and slips, to undo faulting in seismic images by simultaneously moving fault blocks and faults themselves. For unconformity interpretation, I first propose a new method to compute an unconformity likelihood image that highlights both the termination areas and the corresponding parallel unconformities and correlative conformities. I then extract unconformity surfaces from the likelihood image and use these surfaces as constraints to more accurately estimate seismic normal vectors that are discontinuous near the unconformities. Finally, I use the estimated normal vectors and use the unconformities as constraints to compute a flattened image, in which seismic reflectors are all flat and vertical gaps correspond to the unconformities. Horizon extraction is straightforward after computing a map of image flattening; we can first extract horizontal slices in the flattened space

  19. 3D goes digital: from stereoscopy to modern 3D imaging techniques

    NASA Astrophysics Data System (ADS)

    Kerwien, N.

    2014-11-01

    In the 19th century, English physicist Charles Wheatstone discovered stereopsis, the basis for 3D perception. His construction of the first stereoscope established the foundation for stereoscopic 3D imaging. Since then, many optical instruments were influenced by these basic ideas. In recent decades, the advent of digital technologies revolutionized 3D imaging. Powerful readily available sensors and displays combined with efficient pre- or post-processing enable new methods for 3D imaging and applications. This paper draws an arc from basic concepts of 3D imaging to modern digital implementations, highlighting instructive examples from its 175 years of history.

  20. Antenatal Diagnosis of a Large Immature Abdominal Wall Teratoma by 2D-3D Ultrasound Using HDlive and Magnetic Resonance Imaging.

    PubMed

    Werner, Heron; Mocarzel, Carolina; Sá, Renato Augusto; Tonni, Gabriele; Novoa Y Novoa, Victoria Arruga; Avvad-Portari, Elyzabeth; Bonasoni, Paola; Araujo Júnior, Edward

    2016-01-01

    We describe the first case of a prenatally detected teratoma of the fetal abdominal wall using ultrasound and fetal magnetic resonance imaging (MRI). A heterogeneous mass, partly solid and cystic, originating from the anterior abdominal wall of the fetus close to an omphalocele sac was detected by means of 2D/3D ultrasound and MRI. Amniodrainage was performed and, due to signs of impending fetal risk, an emergency Cesarean section was performed. A bulky, crumbly and bleeding tumoral mass was confirmed at delivery. Ligation of the artery supplying the tumor was complicated by uncontrollable hemorrhage and early neonatal death. Pathology identified the tumor as an immature teratoma of the anterior fetal abdominal wall. 2D/3D ultrasound, especially using the HDlive application, and MRI allowed accurate detection and characterization of this congenital tumor.

  1. Novel 3D stereoscopic imaging technology

    NASA Astrophysics Data System (ADS)

    Faris, Sadeg M.

    1994-04-01

    Numerous 3-D stereoscopic techniques have been explored. These previous techniques have had shortcomings precluding them from making stereoscopic imaging pervasive in mainstream applications. In the last decade, several enabling technologies have emerged and have become available and affordable. They now make it possible to realize a near-ideal stereoscopic imaging technology that can be made available to the masses, enabling the inevitable transition from flat imaging to stereoscopic imaging. The ideal stereoscopic technology must meet four important criteria: (1) high stereoscopic image quality; (2) affordability; (3) compatibility with existing infrastructure, e.g., NTSC video, PCs, and other devices; and (4) general-purpose characteristics, e.g., the ability to produce electronic displays and hard-copy prints, and to capture stereoscopic images on film and store them electronically. In section 2, an overview of prior-art technologies is given, highlighting their advantages and disadvantages. In section 3, the novel µPol™ stereoscopic technology is described, making the case that it meets the four criteria for realizing the inevitable transition from flat to stereoscopic imaging for mass applications.

  2. 3D GPR Imaging of Wooden Logs

    NASA Astrophysics Data System (ADS)

    Halabe, Udaya B.; Pyakurel, Sandeep

    2007-03-01

    There has been a lack of an effective NDE technique to locate internal defects within wooden logs. The few available techniques based on elastic wave propagation are limited to predicting E values. Other techniques such as X-rays have not been very successful in detecting internal defects in logs. If defects such as embedded metals could be identified before the sawing process, saw mills could significantly increase their production by reducing the probability of damage to the saw blade and the associated downtime and repair cost. Also, if internal defects such as knots and decayed areas could be identified in logs, the sawing blade can be oriented to exclude the defective portion and optimize the volume of high-value lumber that can be obtained from the logs. In this research, GPR has been successfully used to locate internal defects (knots, decays and embedded metals) within logs. This paper discusses GPR imaging and mapping of internal defects using both 2D and 3D interpretation methodology. Metal pieces were inserted in a log and the reflection patterns from these metals were interpreted from the radargrams acquired using a 900 MHz antenna. GPR was also able to accurately identify the location of knots and decays. Scans from several orientations of the log were collected to generate a 3D cylindrical volume. The actual locations of the defects showed good correlation with the interpreted defects in the 3D volume. The time/depth slices from the 3D cylindrical volume data were useful in understanding the extent of defects inside the log.

  3. Novel Approaches in 3D Sensing, Imaging, and Visualization

    NASA Astrophysics Data System (ADS)

    Schulein, Robert; Daneshpanah, M.; Cho, M.; Javidi, B.

    Three-dimensional (3D) imaging systems are being researched extensively for purposes of sensing and visualization in fields as diverse as defense, medical, art, and entertainment. When compared to traditional 2D imaging techniques, 3D imaging offers advantages in ranging, robustness to scene occlusion, and target recognition performance. Amongst the myriad 3D imaging techniques, 3D multiperspective imaging technologies have received recent attention due to the technologies' relatively low cost, scalability, and passive sensing capabilities. Multiperspective 3D imagers collect 3D scene information by recording 2D intensity information from multiple perspectives, thus retaining both ray intensity and angle information. Three novel developments in 3D sensing, imaging, and visualization systems are presented: 3D imaging with axially distributed sensing, 3D optical profilometry, and occluded 3D object tracking.

  4. Image of OCT denoising and 3D reconstructing method

    NASA Astrophysics Data System (ADS)

    Yan, Xue-tao; Yang, Jun; Liu, Zhi-hai; Yuan, Li-bo

    2007-11-01

    Optical coherence tomography (OCT) is a novel tomography method that provides non-contact, non-invasive in vivo tomograms with high resolution and high speed; it has therefore become an important direction in biomedical imaging. However, when an OCT system is applied to a specimen, noise and distortion appear because the speed of the system is limited, so the image needs reconstruction. This article studies an OCT 3-D reconstruction method that includes the necessary image preprocessing steps of denoising, recovery, and segmentation. For highly scattering media such as skin specimens, it uses photon transport properties and develops denoising and recovery algorithms based on an optical model of photon propagation in biological tissue to remove speckle from the skin image before 3-D reconstruction. It proposes a dynamic average background estimation algorithm based on time-domain estimation; this method combines time-domain estimation with frequency-domain filtering to remove image noise effectively. In addition, it constructs a noise model for image recovery to avoid longitudinal distortion, depth-dependent amplitude distortion, and image blurring. Through comparison and discussion, the algorithms are improved and optimized to enhance image quality. The article also optimizes the iterative reconstruction algorithm by improving its convergence speed and realizes 3-D reconstruction of OCT specimen data, opening the door to further analysis and diagnosis of diseases.
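The background-estimation idea described above, a time-domain running average combined with a frequency-domain filter, can be sketched roughly as follows; the helper `denoise_bscan`, the window size and the cutoff are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d
from scipy.signal import butter, filtfilt

def denoise_bscan(bscan, bg_window=31, cutoff=0.25):
    """Estimate a slowly varying background along each A-scan with a running
    average (time-domain), subtract it, then low-pass filter the residual
    across A-scans (frequency-domain) to suppress speckle-like noise."""
    # bscan: 2D array with depth along axis 0 and one A-scan per column.
    background = uniform_filter1d(bscan, size=bg_window, axis=0)   # running average
    residual = bscan - background
    b, a = butter(4, cutoff)                      # normalized cutoff (Nyquist = 1)
    smoothed = filtfilt(b, a, residual, axis=1)   # filter laterally, across A-scans
    return background + smoothed

denoised = denoise_bscan(np.random.rand(512, 256))
print(denoised.shape)
```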

  5. 3-D SAR image formation from sparse aperture data using 3-D target grids

    NASA Astrophysics Data System (ADS)

    Bhalla, Rajan; Li, Junfei; Ling, Hao

    2005-05-01

    The performance of ATR systems can potentially be improved by using three-dimensional (3-D) SAR images instead of the traditional two-dimensional SAR images or one-dimensional range profiles. 3-D SAR image formation of targets from radar backscattered data collected on wide angle, sparse apertures has been identified by AFRL as fundamental to building an object detection and recognition capability. A set of data has been released as a challenge problem. This paper describes a technique based on the concept of 3-D target grids aimed at the formation of 3-D SAR images of targets from sparse aperture data. The 3-D target grids capture the 3-D spatial and angular scattering properties of the target and serve as matched filters for SAR formation. The results of 3-D SAR formation using the backhoe public release data are presented.

  6. [3D imaging benefits in clinical practice of orthodontics].

    PubMed

    Frèrejouand, Emmanuel

    2016-12-01

    3D imaging possibilities have opened up in the last few years in the orthodontic field. In 2016, 3D imaging can be used to improve diagnosis and treatment planning by using digital set-ups combined with CBCT. It is relevant for updating orthodontic mechanics by creating visible or invisible customised appliances, and it forms the basis of numerous scientific studies. The author explains the progress 3D imaging brings to diagnosis and clinical work, but also highlights the requirements it creates. The daily use of these processes in orthodontic clinical practice needs to be regulated with regard to the benefit/risk ratio and patient satisfaction. Mastering the digital workflow created by these techniques requires changes in the habits of the orthodontist and his staff. © EDP Sciences, SFODF, 2016.

  7. Photogrammetric 3D reconstruction using mobile imaging

    NASA Astrophysics Data System (ADS)

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be calibrated directly on the device using standard calibration algorithms of photogrammetry and computer vision. Because computing resources on mobile devices are still limited, a client-server handshake using Dropbox transfers the photos to the server, which runs AndroidSfM for the pose estimation of all photos by Structure-from-Motion and thereafter uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.
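The pose-estimation step that Structure-from-Motion performs for each photo pair can be illustrated with OpenCV; this is a generic two-view sketch with assumed intrinsics and placeholder file names, not the AndroidSfM code.

```python
import cv2
import numpy as np

# Assumed camera intrinsics from the calibration step (placeholder values).
K = np.array([[1500.0, 0.0, 960.0],
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])

def relative_pose(img1, img2, K=K):
    """Estimate the relative pose (R, t) between two overlapping grayscale
    photos: detect ORB features, match them, fit an essential matrix with
    RANSAC and decompose it -- the per-pair step of an SfM pipeline."""
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t

# Usage (file names are placeholders):
# R, t = relative_pose(cv2.imread("photo_1.jpg", 0), cv2.imread("photo_2.jpg", 0))
```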

  8. 3-D Ultrasound Vascularity Assessment for Breast Cancer Diagnosis

    DTIC Science & Technology

    1998-09-01

    [Only fragments of this DTIC report record are recoverable: contract DAMD17-96-C-6061, title "3-D Ultrasound Vascularity Assessment for Breast Cancer Diagnosis", principal investigator Paul L. Carson, reporting period ending 31 Aug 98. The recoverable abstract fragments mention Doppler color-echo evaluation of solid neoplasms of the breast and the use of NPDM in conjunction with mean grey-scale measures.]

  9. Imaging a Sustainable Future in 3D

    NASA Astrophysics Data System (ADS)

    Schuhr, W.; Lee, J. D.; Kanngieser, E.

    2012-07-01

    It is the intention of this paper to contribute to a sustainable future by providing objective object information based on 3D photography, as well as by promoting 3D photography not only for scientists but also for amateurs. Because this article is presented by CIPA Task Group 3 on "3D Photographs in Cultural Heritage", the presented samples are masterpieces of historic as well as of current 3D photography concentrating on cultural heritage. In addition to a report on exemplary access to international archives of 3D photographs, samples of new 3D photographs taken with modern 3D cameras, with a ground-based high-resolution XLITE staff camera, from a captive balloon, and from civil drone platforms are dealt with. To advise on the best-suited 3D methodology, as well as to catch new trends in 3D, an updated synoptic overview of 3D visualization technology, even claiming completeness, has been carried out as a result of a systematic survey. In this respect, for example, today's lasered crystals might be "early bird" products in 3D which, due to their lack of resolution, contrast and color, are reminiscent of the stage of the invention of photography.

  10. Ames Lab 101: Real-Time 3D Imaging

    ScienceCinema

    Zhang, Song

    2016-07-12

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  11. Ames Lab 101: Real-Time 3D Imaging

    SciTech Connect

    Zhang, Song

    2010-01-01

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  12. 3-D Reconstruction From 2-D Radiographic Images and Its Application to Clinical Veterinary Medicine

    NASA Astrophysics Data System (ADS)

    Hamamoto, Kazuhiko; Sato, Motoyoshi

    3D imaging techniques are very important and indispensable in diagnosis. The mainstream approach is one in which a 3D image is reconstructed from a set of slice images, as in X-ray CT and MRI. However, these systems require large spaces and high costs. On the other hand, a low-cost, small 3D imaging system is needed in clinical veterinary medicine, for example for diagnosis in an X-ray car or in pasture areas. We propose a novel 3D imaging technique using 2-D X-ray radiographic images. This system can be realized at lower cost than X-ray CT and makes it possible to obtain 3D images in an X-ray car or with portable X-ray equipment. In this paper, a 3D visualization technique from 2-D radiographic images is proposed and several reconstructions are shown. These reconstructions are evaluated by veterinarians.

  13. Progress in 3D imaging and display by integral imaging

    NASA Astrophysics Data System (ADS)

    Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.

    2009-05-01

    Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic that is attracting important research efforts. As their main added value, 3D monitors should provide observers with different perspectives of a 3D scene simply by varying the head position. Three-dimensional imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology for 3D viewing by audiences of more than one person. Due to its advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since integral imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been the realization of a thorough study of the principles that govern its operation. It is remarkable that some of these principles have been recognized and characterized by our group. Other contributions of our research have addressed overcoming some of the classical limitations of InI systems, such as the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, and the limited range of viewing angles of InI monitors.

  14. Infrastructure for 3D Imaging Test Bed

    DTIC Science & Technology

    2007-05-11

    [Only fragments of this DTIC report are recoverable. They describe (c) real-time detection and analysis of human gait, in which a video camera captures walking human silhouettes for pattern modeling and gait analysis, and a 3D scanning result (Fig. 5) that is fed into a Geomagic software tool for 3D meshing.]

  15. 3D Gabor wavelet based vessel filtering of photoacoustic images.

    PubMed

    Haq, Israr Ul; Nagoaka, Ryo; Makino, Takahiro; Tabata, Takuya; Saijo, Yoshifumi

    2016-08-01

    Filtering and segmentation of vasculature is an important issue in medical imaging. The visualization of vasculature is crucial for early diagnosis and therapy in numerous medical applications. This paper investigates the use of Gabor wavelets to enhance vasculature while eliminating the noise due to the size, sensitivity and aperture of the detector in 3D Optical Resolution Photoacoustic Microscopy (OR-PAM). A detailed multi-scale analysis of wavelet filtering and a Hessian-based method is carried out for extracting vessels of different sizes, since blood vessels usually vary within a range of radii. The proposed algorithm first enhances the vasculature in the image, and then tubular structures are classified by eigenvalue decomposition of the local Hessian matrix at each voxel in the image. The algorithm is tested on non-invasive experiments and shows appreciable results in enhancing vasculature in photoacoustic images.
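The Hessian-eigenvalue step described above is closely related to the Frangi vesselness filter; the snippet below uses scikit-image's implementation as an assumed, off-the-shelf stand-in (not the authors' Gabor-wavelet pipeline) on a random volume in place of real OR-PAM data.

```python
import numpy as np
from skimage.filters import frangi

# Stand-in for a 3D photoacoustic intensity volume.
volume = np.random.rand(32, 64, 64)

# Multi-scale Hessian-based vesselness; sigmas span the expected range of
# vessel radii in voxels, and bright vessels are assumed (black_ridges=False).
vesselness = frangi(volume, sigmas=range(1, 4), black_ridges=False)

# Simple segmentation of tubular structures by thresholding the response.
vessels = vesselness > 0.5 * vesselness.max()
print(vessels.sum(), "voxels classified as vessel")
```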

  16. Augmented reality 3D display based on integral imaging

    NASA Astrophysics Data System (ADS)

    Deng, Huan; Zhang, Han-Le; He, Min-Yang; Wang, Qiong-Hua

    2017-02-01

    Integral imaging (II) is a good candidate for augmented reality (AR) display, since it provides various physiological depth cues so that viewers can freely change the accommodation and convergence between the virtual three-dimensional (3D) images and the real-world scene without feeling any visual discomfort. We propose two AR 3D display systems based on the theory of II. In the first AR system, a micro II display unit reconstructs a micro 3D image, and the micro 3D image is magnified by a convex lens. The lateral and depth distortions of the magnified 3D image are analyzed and resolved by pitch scaling and depth scaling. The magnified 3D image and the real 3D scene are overlapped by using a half-mirror to realize AR 3D display. The second AR system uses a micro-lens array holographic optical element (HOE) as an image combiner. The HOE is a volume holographic grating which functions as a micro-lens array for Bragg-matched light and as transparent glass for Bragg-mismatched light. A reference beam can reproduce a virtual 3D image from one side, and a reference beam with conjugated phase can reproduce a second 3D image from the other side of the micro-lens array HOE, which provides a double-sided 3D display capability.

  17. Glasses-free 3D viewing systems for medical imaging

    NASA Astrophysics Data System (ADS)

    Magalhães, Daniel S. F.; Serra, Rolando L.; Vannucci, André L.; Moreno, Alfredo B.; Li, Li M.

    2012-04-01

    In this work we show two different glasses-free 3D viewing systems for medical imaging: a stereoscopic system that employs a vertically dispersive holographic screen (VDHS) and a multi-autostereoscopic system, both used to produce 3D MRI/CT images. We describe how to obtain a VDHS in holographic plates optimized for this application, with a field of view of 7 cm for each eye and a focal length of 25 cm, and show images produced with the system. We also describe a multi-autostereoscopic system, presenting how it can generate 3D medical imaging from viewpoints of an MRI or CT image, and show results for a 3D angioresonance image.

  18. 3D seismic imaging, example of 3D area in the middle of Banat

    NASA Astrophysics Data System (ADS)

    Antic, S.

    2009-04-01

    3D seismic imaging was carried out on a 3D seismic volume situated in the middle of the Banat region in Serbia. The 3D area is about 300 km². The aim of the 3D investigation was to define geological structures and tectonics, especially in the Mesozoic complex. The investigation targets are located at depths from 2000 to 3000 m. There are a number of wells in this area, but they are not deep enough to help in the interpretation. It was necessary to obtain a better seismic image of the deeper area. The acquisition parameters were satisfactory (good quality of input parameters, input data length of 5 s, fold up to 4000%) and the preprocessed data were satisfactory. GeoDepth is an integrated system for 3D velocity model building and for 3D seismic imaging. The input data for 3D seismic imaging consist of preprocessed data sorted into CMP gathers and RMS stacking velocity functions. Other types of input data are geological information derived from well data, time-migrated images and time-migrated maps. The workflow for this job was: loading and quality control of the input data (CMP gathers and velocities), creating an initial RMS velocity volume, PSTM, updating the RMS velocity volume, PSTM, building the initial interval velocity model, PSDM, and updating the interval velocity model, PSDM. In the first stage the attempt is to derive an initial velocity model that is as simple as possible. The higher-frequency velocity changes are obtained in the updating stage. The next step, after running PSTM, is the time-to-depth conversion. After the model is built, we generate a 3D interval velocity volume and run 3D pre-stack depth migration. The main method for updating velocities is 3D tomography. The criteria used in velocity model determination are based on the flatness of pre-stack migrated gathers or the quality of the stacked image. The standard processing ended with post-stack 3D time migration. Pre-stack depth migration is one of the most powerful tools available to the interpreter to develop an accurate velocity model and get

  19. 3D freehand ultrasound for medical assistance in diagnosis and treatment of breast cancer: preliminary results

    NASA Astrophysics Data System (ADS)

    Torres, Fabian; Fanti, Zian; Arambula Cosío, F.

    2013-11-01

    Image-guided interventions allow the physician better planning and visualization of a procedure. 3D freehand ultrasound is a non-invasive and low-cost imaging tool that can be used to assist medical procedures, including the diagnosis and treatment of breast cancer. Common medical practices involve large needles to obtain an accurate diagnosis and treatment of breast cancer. In this study we propose the use of 3D freehand ultrasound for planning and guiding such procedures as core needle biopsy and radiofrequency ablation. The proposed system will help the physician to identify the lesion area, using image-processing techniques on the 3D freehand ultrasound images, and to guide the needle to this area using the position and orientation information of the surgical tools. We believe that this system can improve the accuracy and efficiency of these procedures.

  20. Feature detection on 3D images of dental imprints

    NASA Astrophysics Data System (ADS)

    Mokhtari, Marielle; Laurendeau, Denis

    1994-09-01

    A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The positions of the feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure which tracks the position of local minima at four different scales and produces a final map of the positions of the minima. Experimental results of the application of the watershed algorithm to actual 3D images of dental imprints are presented for molars, premolars and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.
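A rough sketch of the first stage of such a procedure, detecting local minima of the imprint's depth map at several scales before tracking them across scales, might look like the following; the scales, neighborhood size and helper name are assumptions, and this is not the authors' watershed implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, minimum_filter

def minima_per_scale(depth_map, sigmas=(1, 2, 4, 8), neighborhood=7):
    """Find local minima of a dental-imprint depth map at four Gaussian scales;
    the full algorithm then tracks these minima across scales to build the
    final map of candidate feature points."""
    minima = {}
    for sigma in sigmas:
        smoothed = gaussian_filter(depth_map, sigma)
        is_min = smoothed == minimum_filter(smoothed, size=neighborhood)
        minima[sigma] = np.argwhere(is_min)  # (row, col) candidate feature points
    return minima

candidates = minima_per_scale(np.random.rand(256, 256))
for sigma, pts in candidates.items():
    print(f"sigma={sigma}: {len(pts)} candidate minima")
```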

  1. 3D ultrasound imaging for prosthesis fabrication and diagnostic imaging

    SciTech Connect

    Morimoto, A.K.; Bow, W.J.; Strong, D.S.

    1995-06-01

    The fabrication of a prosthetic socket for a below-the-knee amputee requires knowledge of the underlying bone structure in order to provide pressure relief for sensitive areas and support for load-bearing areas. The goal is to enable the residual limb to bear pressure with greater ease and utility. Conventional methods of prosthesis fabrication are based on limited knowledge about the patient's underlying bone structure. A 3D ultrasound imaging system was developed at Sandia National Laboratories. The imaging system provides information about the location of the bones in the residual limb along with the shape of the skin surface. Computer assisted design (CAD) software can use these data to design prosthetic sockets for amputees. Ultrasound was selected as the imaging modality. A computer model was developed to analyze the effect of the various scanning parameters and to assist in the design of the overall system. The 3D ultrasound imaging system combines off-the-shelf technology for image capturing, custom hardware, and control and image processing software to generate two types of image data -- volumetric and planar. Both volumetric and planar images reveal definition of skin and bone geometry, with planar images providing details on muscle fascial planes, muscle/fat interfaces, and blood vessel definition. The 3D ultrasound imaging system was tested on 9 unilateral below-the-knee amputees. Image data were acquired from both the sound limb and the residual limb. The imaging system was operated in both volumetric and planar formats. An x-ray CT (Computed Tomography) scan was performed on each amputee for comparison. Results of the test indicate beneficial use of ultrasound to generate databases for fabrication of prostheses at a lower cost and with better initial fit as compared to manually fabricated prostheses.

  2. Research of range-gated 3D imaging technology

    NASA Astrophysics Data System (ADS)

    Yang, Haitao; Zhao, Hongli; Youchen, Fan

    2016-10-01

    Laser image data-based target recognition technology is one of the key technologies of laser active imaging systems. This paper discusses the status of 3-D imaging development at home and abroad, analyzes the current technological bottlenecks, and describes a prototype range-gated system built to obtain a set of range-gated slice images. The 3-D images of the target are then constructed by the binary method and the centroid method, respectively, and by using different numbers of slice images the relationship between the number of images and the reconstruction accuracy in the 3-D image reconstruction process is explored. The experiment analyzed the impact of the two algorithms, the binary method and the centroid method, on the results of 3-D image reconstruction. For the binary method, a comparative analysis was made of the impact of different threshold values on the reconstruction results, where threshold values of 0.1, 0.2, 0.3 and an adaptive threshold were selected for 3-D reconstruction of the slice images. For the centroid method, 15, 10, 6, 3, and 2 images were respectively used to realize 3-D reconstruction. Experimental results showed that, with the same number of slice images, the accuracy of the centroid method was higher than that of the binary algorithm, and that the binary algorithm depended strongly on the selection of the threshold; as the number of slice images decreased, the accuracy of images reconstructed by the centroid method also decreased, and at least three slice images were required to obtain one 3-D image.
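The two reconstruction ideas compared above can be sketched per pixel from a stack of gated slice images: the centroid method takes the intensity-weighted mean of the gate depths, while the binary method thresholds each slice and takes the first gate that responds. The helper below and its threshold and gate positions are assumptions for illustration.

```python
import numpy as np

def depth_from_slices(slices, gate_depths, threshold=0.2):
    """Reconstruct a depth map from range-gated slices (n_slices x rows x cols)
    acquired at the gate depths in `gate_depths`, by both methods."""
    slices = np.asarray(slices, dtype=float)
    depths = np.asarray(gate_depths, dtype=float)

    # Centroid method: intensity-weighted mean of the gate depths per pixel.
    weights = slices.sum(axis=0)
    centroid_depth = np.einsum("k,kij->ij", depths, slices) / np.maximum(weights, 1e-12)

    # Binary method: threshold each slice and take the first gate that responds.
    binary = slices > threshold * slices.max()
    first_hit = np.argmax(binary, axis=0)          # index of first True along the stack
    binary_depth = depths[first_hit]
    binary_depth[~binary.any(axis=0)] = np.nan     # pixels with no return at all

    return centroid_depth, binary_depth

slices = np.random.rand(15, 64, 64)
gates = np.linspace(10.0, 24.0, 15)   # assumed gate positions (metres)
centroid_map, binary_map = depth_from_slices(slices, gates)
print(centroid_map.shape, binary_map.shape)
```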

  3. 3D Imaging by Mass Spectrometry: A New Frontier

    PubMed Central

    Seeley, Erin H.; Caprioli, Richard M.

    2012-01-01

    Imaging mass spectrometry can generate three-dimensional volumes showing molecular distributions in an entire organ or animal through registration and stacking of serial tissue sections. Here we review the current state of 3D imaging mass spectrometry as well as provide insights and perspectives on the process of generating 3D mass spectral data, along with a discussion of the process necessary to generate a 3D image volume. PMID:22276611

  4. 3D Reconstruction from a Single Image

    DTIC Science & Technology

    2008-08-01

    [Only a fragment of this DTIC report record is recoverable: the work, from the Institute for Mathematics and Its Applications, University of Minnesota, Minneapolis, discusses accurately learning 3D priors using a single camera and the Radon transform.]

  5. 3D Imaging with Holographic Tomography

    NASA Astrophysics Data System (ADS)

    Sheppard, Colin J. R.; Kou, Shan Shan

    2010-04-01

    There are two main types of tomography that enable the 3D internal structures of objects to be reconstructed from scattered data. The commonly known computerized tomography (CT) gives good results in the x-ray wavelength range, where the filtered back-projection theorem and Radon transform can be used. These techniques rely on the Fourier projection-slice theorem, where rays are considered to propagate straight through the object. Another type of tomography, called `diffraction tomography', applies in optics and acoustics, where diffraction and scattering effects must be taken into account. The latter proves to be a more difficult problem, as light no longer travels straight through the sample. Holographic tomography is a popular way of performing diffraction tomography, and there has recently been active experimental research on reconstructing complex refractive index data using this approach. However, there are two distinct ways of doing tomography: either by rotation of the object or by rotation of the illumination while fixing the detector. The difference between these two setups is intuitive but needs to be quantified. From a Fourier optics and information transformation point of view, we use 3D transfer function analysis to quantitatively describe how spatial frequencies of the object are mapped to the Fourier domain. We first employ a paraxial treatment by calculating the Fourier transform of the defocused OTF. The shape of the calculated 3D CTF for tomography by scanning the illumination in one direction only takes on a form that we might call a 'peanut', compared to the case of object rotation, where a diablo is formed; the peanut exhibits significant differences and non-isotropy. In particular, there is a line singularity along one transverse direction. Under high numerical aperture conditions the paraxial treatment is not accurate, and so we make use of 3D analytical geometry to calculate the behaviour in the non-paraxial case. This time, we

  6. Reconstruction-based 3D/2D image registration.

    PubMed

    Tomazevic, Dejan; Likar, Bostjan; Pernus, Franjo

    2005-01-01

    In this paper we present a novel 3D/2D registration method, in which first a 3D image is reconstructed from a few 2D X-ray images and next the preoperative 3D image is brought into the best possible spatial correspondence with the reconstructed image by optimizing a similarity measure. Because the quality of the reconstructed image is generally low, we introduce a novel asymmetric mutual information similarity measure, which is able to cope with low image quality as well as with different imaging modalities. The novel 3D/2D registration method has been evaluated using a standardized evaluation methodology and publicly available 3D CT, 3DRX, and MR and 2D X-ray images of two spine phantoms, for which gold standard registrations were known. In terms of robustness, reliability and capture range, the proposed method outperformed the gradient-based method and the method based on digitally reconstructed radiographs (DRRs).
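The intensity-based similarity idea above can be illustrated with plain mutual information estimated from a joint histogram; this is a generic sketch (not the authors' asymmetric measure), with the helper name and bin count chosen for illustration.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Estimate the mutual information between two images from their joint
    intensity histogram: MI = sum p(x,y) * log(p(x,y) / (p(x) * p(y)))."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

a = np.random.rand(128, 128)
print(mutual_information(a, a))                          # high: image with itself
print(mutual_information(a, np.random.rand(128, 128)))   # near zero: unrelated images
```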

  7. 3D Imaging Millimeter Wave Circular Synthetic Aperture Radar

    PubMed Central

    Zhang, Renyuan; Cao, Siyang

    2017-01-01

    In this paper, a new millimeter wave 3D imaging radar is proposed. The user just needs to move the radar along a circular track, and a high-resolution 3D image can be generated. The proposed radar uses its own movement to synthesize a large aperture in both the azimuth and elevation directions. It utilizes the inverse Radon transform to resolve the 3D image. To improve the sensing result, a compressed sensing approach is further investigated. Simulation and experimental results further illustrate the design. Because only a single transceiver circuit is needed, the result is a light, affordable and high-resolution 3D mmWave imaging radar. PMID:28629140

  8. From medical imaging data to 3D printed anatomical models.

    PubMed

    Bücking, Thore M; Hill, Emma R; Robertson, James L; Maneas, Efthymios; Plumb, Andrew A; Nikitichev, Daniil I

    2017-01-01

    Anatomical models are important training and teaching tools in the clinical environment and are routinely used in medical imaging research. Advances in segmentation algorithms and increased availability of three-dimensional (3D) printers have made it possible to create cost-efficient patient-specific models without expert knowledge. We introduce a general workflow that can be used to convert volumetric medical imaging data (as generated by Computed Tomography (CT)) to 3D printed physical models. This process is broken up into three steps: image segmentation, mesh refinement and 3D printing. To lower the barrier to entry and provide the best options when aiming to 3D print an anatomical model from medical images, we provide an overview of relevant free and open-source image segmentation tools as well as 3D printing technologies. We demonstrate the utility of this streamlined workflow by creating models of ribs, liver, and lung using a Fused Deposition Modelling 3D printer.
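
    A minimal sketch of the three-step workflow (segmentation, surface extraction, export), assuming NumPy and scikit-image are available; the global threshold, the synthetic sphere and the bare-bones ASCII STL writer are simplifications for illustration, not the toolchain recommended in the paper.

```python
# Sketch of the image-to-print pipeline described above: threshold
# segmentation, surface extraction (marching cubes), and ASCII STL export.
import numpy as np
from skimage import measure

def volume_to_stl(volume, threshold, path, spacing=(1.0, 1.0, 1.0)):
    # 1. Segmentation: simple global threshold (real workflows are more careful).
    mask = volume > threshold
    # 2. Mesh extraction: marching cubes on the binary mask.
    verts, faces, _, _ = measure.marching_cubes(mask.astype(np.float32),
                                                level=0.5, spacing=spacing)
    # 3. Export: minimal ASCII STL writer (no mesh refinement step here).
    with open(path, "w") as f:
        f.write("solid model\n")
        for tri in faces:
            p0, p1, p2 = verts[tri]
            n = np.cross(p1 - p0, p2 - p0)
            n = n / (np.linalg.norm(n) + 1e-12)
            f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n    outer loop\n")
            for p in (p0, p1, p2):
                f.write(f"      vertex {p[0]} {p[1]} {p[2]}\n")
            f.write("    endloop\n  endfacet\n")
        f.write("endsolid model\n")

# Example with a synthetic sphere standing in for a CT volume.
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume_to_stl((x**2 + y**2 + z**2 < 20**2).astype(float), 0.5, "sphere.stl")
```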

  9. From medical imaging data to 3D printed anatomical models

    PubMed Central

    Hill, Emma R.; Robertson, James L.; Maneas, Efthymios; Plumb, Andrew A.; Nikitichev, Daniil I.

    2017-01-01

    Anatomical models are important training and teaching tools in the clinical environment and are routinely used in medical imaging research. Advances in segmentation algorithms and increased availability of three-dimensional (3D) printers have made it possible to create cost-efficient patient-specific models without expert knowledge. We introduce a general workflow that can be used to convert volumetric medical imaging data (as generated by Computed Tomography (CT)) to 3D printed physical models. This process is broken up into three steps: image segmentation, mesh refinement and 3D printing. To lower the barrier to entry and provide the best options when aiming to 3D print an anatomical model from medical images, we provide an overview of relevant free and open-source image segmentation tools as well as 3D printing technologies. We demonstrate the utility of this streamlined workflow by creating models of ribs, liver, and lung using a Fused Deposition Modelling 3D printer. PMID:28562693

  10. Optical 3D imaging and visualization of concealed objects

    NASA Astrophysics Data System (ADS)

    Berginc, G.; Bellet, J.-B.; Berechet, I.; Berechet, S.

    2016-09-01

    This paper gives new insights on optical 3D imagery. In this paper we explore the advantages of laser imagery to form a three-dimensional image of the scene. 3D laser imaging can be used for three-dimensional medical imaging and surveillance because of its ability to identify tumors or concealed objects. We consider the problem of 3D reconstruction based upon 2D angle-dependent laser images. The objective of this new 3D laser imaging is to provide users with a complete 3D reconstruction of objects from available 2D data limited in number. The 2D laser data used in this paper come from simulations that are based on the calculation of the laser interactions with the different meshed objects of the scene of interest, or from experimental 2D laser images. We show that combining the Radon transform on 2D laser images with the Maximum Intensity Projection can generate 3D views of the considered scene from which we can extract the 3D concealed object in real time. With different original numerical or experimental examples, we investigate the effects of the input contrasts. We show the robustness and the stability of the method. We have developed a new patented method of 3D laser imaging based on three-dimensional reflective tomographic reconstruction algorithms and an associated visualization method. In this paper we present the global 3D reconstruction and visualization procedures.
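
    The Maximum Intensity Projection step mentioned above reduces, per viewing axis, to keeping the brightest voxel along that axis; a minimal NumPy sketch with an arbitrary toy volume is shown below.

```python
# Maximum Intensity Projection (MIP): collapse a reconstructed 3D volume
# to a 2D view by keeping the brightest voxel along the viewing axis.
import numpy as np

def mip(volume, axis=0):
    """Return the maximum intensity projection of `volume` along `axis`."""
    return volume.max(axis=axis)

# Toy volume with one bright voxel; the MIP makes it visible in the 2D view.
vol = np.zeros((16, 16, 16))
vol[8, 3, 12] = 1.0
print(mip(vol, axis=0)[3, 12])   # -> 1.0
```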

  11. Measurable realistic image-based 3D mapping

    NASA Astrophysics Data System (ADS)

    Liu, W.; Wang, J.; Wang, J. J.; Ding, W.; Almagbile, A.

    2011-12-01

    Maps with 3D visual models are becoming a remarkable feature of 3D map services. High-resolution image data is obtained for the construction of 3D visualized models. The 3D map not only provides the capabilities of 3D measurement and knowledge mining, but also provides a virtual experience of places of interest, as demonstrated in Google Earth. Applications of 3D maps are expanding into the areas of architecture, property management, and urban environment monitoring. However, the reconstruction of high quality 3D models is time consuming, and requires robust hardware and powerful software to handle the enormous amount of data. This is especially true for the automatic generation of 3D models and the representation of complicated surfaces, which still need improvements in visualisation techniques. The shortcoming of 3D model-based maps is the limitation of detailed coverage, since a user can only view and measure objects that are already modelled in the virtual environment. This paper proposes and demonstrates a 3D map concept that is realistic and image-based, and that enables geometric measurements and geo-location services. Additionally, image-based 3D maps provide more detailed information of the real world than 3D model-based maps. The image-based 3D maps use geo-referenced stereo images or panoramic images. The geometric relationships between objects in the images can be resolved from the geometric model of the stereo images. The panoramic function makes 3D maps more interactive with users and also creates an immersive experience. Unmeasurable image-based 3D maps already exist, such as Google Street View, but they only provide virtual experiences in the form of photos; topographic and terrain attributes, such as shapes and heights, are omitted. This paper also discusses the potential for using a low cost land Mobile Mapping System (MMS) to implement realistic image 3D mapping, and evaluates the positioning accuracy that a measureable

  12. Image segmentation and 3D visualization for MRI mammography

    NASA Astrophysics Data System (ADS)

    Li, Lihua; Chu, Yong; Salem, Angela F.; Clark, Robert A.

    2002-05-01

    MRI mammography has a number of advantages, including the tomographic, and therefore three-dimensional (3-D), nature of the images. It allows the application of MRI mammography to breasts with dense tissue, post-operative scarring, and silicone implants. However, due to the vast quantity of images and the subtlety of differences between MR sequences, there is a need for reliable computer-aided diagnosis to reduce the radiologist's workload. The purpose of this work was to develop automatic breast/tissue segmentation and visualization algorithms to aid physicians in detecting and observing abnormalities in the breast. Two segmentation algorithms were developed: one for breast segmentation, the other for glandular tissue segmentation. In breast segmentation, the MRI image is first segmented using an adaptive growing clustering method. Two tracing algorithms were then developed to refine the breast-air and chest-wall boundaries of the breast. The glandular tissue segmentation was performed using an adaptive thresholding method, in which the threshold value was spatially adapted using a sliding window. The 3D visualization of the segmented 2D slices of MRI mammography was implemented in the IDL environment. The breast and glandular tissue rendering, slicing and animation were displayed.
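
    A hedged sketch of a spatially adaptive, sliding-window threshold of the kind described for the glandular tissue segmentation; the local-mean rule, window size and offset below are assumptions for illustration, not the parameters used in the study.

```python
# Spatially adaptive thresholding with a sliding window: each pixel is
# compared against the local mean of its neighbourhood (illustrative only).
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_threshold(image, window=31, offset=0.0):
    """Binary mask where pixels exceed their local-window mean plus offset."""
    local_mean = uniform_filter(image.astype(float), size=window)
    return image > (local_mean + offset)

# Usage sketch on a random image standing in for an MR slice.
rng = np.random.default_rng(1)
slice_2d = rng.normal(size=(128, 128))
mask = adaptive_threshold(slice_2d, window=25, offset=0.1)
print(mask.dtype, mask.mean())
```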

  13. Light field display and 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Iwane, Toru

    2016-06-01

    Light field optics and its applications have become rather popular these days. With light field optics, real 3D space can be described in a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data. In the latter procedure, we can say that 3D information is encoded onto a plane as 2D data by a lens array plate. This transformation is reversible, and the acquired light field data can be decoded again into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on your favorite point after taking a picture), the light-field camera's most popular function, is a kind of sectioning process from encoded 3D data (light field data) to a 2D image. In this paper, I first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. Second, I explain our data processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is characterized by a few features: the reconstructed image is of finer resolution than the density of the arrayed lenses, and it is not necessary to adjust the lens array plate to the flat display on which the light field data is displayed.
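
    Refocusing from light field data is often illustrated as shift-and-add over sub-aperture views; the sketch below (the array layout, integer shifts and the alpha parameter are assumptions) is a generic version of that sectioning idea, not the author's real-domain implementation.

```python
# Minimal shift-and-add refocusing over sub-aperture views of a light field:
# shifting each view in proportion to its (u, v) offset and averaging focuses
# the synthetic image at a chosen depth (layout and units are assumptions).
import numpy as np

def refocus(light_field, alpha):
    """light_field: array (U, V, H, W); alpha: shift per unit lens offset."""
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# Usage sketch with a random light field of 5x5 views.
lf = np.random.default_rng(2).random((5, 5, 64, 64))
refocused = refocus(lf, alpha=1.5)
print(refocused.shape)
```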

  14. 3D Imaging with Structured Illumination for Advanced Security Applications

    SciTech Connect

    Birch, Gabriel Carisle; Dagel, Amber Lynn; Kast, Brian A.; Smith, Collin S.

    2015-09-01

    Three-dimensional (3D) information is a highly useful discriminator in a physical security system. The two-dimensional data from an imaging system fails to provide target distance and a three-dimensional motion vector, which can be used to reduce nuisance alarm rates and increase system effectiveness. However, 3D imaging devices designed primarily for use in physical security systems are uncommon. This report discusses an architecture favorable to physical security systems: an inexpensive snapshot 3D imaging system utilizing a simple illumination system. The method of acquiring 3D data, tests to understand illumination design, and software modifications possible to maximize information gathering capability are discussed.

  15. 3D photon counting integral imaging with unknown sensor positions.

    PubMed

    Xiao, Xiao; Javidi, Bahram

    2012-05-01

    Photon counting techniques have been introduced with integral imaging for three-dimensional (3D) imaging applications. The previous reports in this area assumed a priori knowledge of exact sensor positions for 3D image reconstruction, which may be difficult to satisfy in certain applications. In this paper, we extend the photon counting 3D imaging system to situations where sensor positions are unknown. To estimate sensor positions in photon counting integral imaging, scene details of photon counting images are needed for image correspondences matching. Therefore, an iterative method based on the total variation maximum a posteriori expectation maximization (MAP-EM) algorithm is used to restore photon counting images. Experimental results are presented to show the feasibility of the method. To the best of our knowledge, this is the first report on 3D photon counting integral imaging with unknown sensor positions. © 2012 Optical Society of America
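
    The photon-starved regime the paper addresses can be simulated by drawing Poisson counts whose mean follows the normalized irradiance of a well-lit elemental image; the expected photon budget below is an arbitrary illustrative choice, not a value from the paper.

```python
# Simulate a photon-starved elemental image by drawing Poisson counts whose
# mean follows the normalized irradiance (expected total photons is arbitrary).
import numpy as np

def photon_count_image(irradiance, expected_photons=500, seed=0):
    rng = np.random.default_rng(seed)
    rate = expected_photons * irradiance / irradiance.sum()
    return rng.poisson(rate)

img = np.random.default_rng(3).random((128, 128))
sparse = photon_count_image(img, expected_photons=2000)
print(sparse.sum(), (sparse == 0).mean())   # few photons, mostly empty pixels
```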

  16. On Alternative Approaches to 3D Image Perception: Monoscopic 3D Techniques

    NASA Astrophysics Data System (ADS)

    Blundell, Barry G.

    2015-06-01

    In the eighteenth century, techniques that enabled a strong sense of 3D perception to be experienced without recourse to binocular disparities (arising from the spatial separation of the eyes) underpinned the first significant commercial sales of 3D viewing devices and associated content. However following the advent of stereoscopic techniques in the nineteenth century, 3D image depiction has become inextricably linked to binocular parallax and outside the vision science and arts communities relatively little attention has been directed towards earlier approaches. Here we introduce relevant concepts and terminology and consider a number of techniques and optical devices that enable 3D perception to be experienced on the basis of planar images rendered from a single vantage point. Subsequently we allude to possible mechanisms for non-binocular parallax based 3D perception. Particular attention is given to reviewing areas likely to be thought-provoking to those involved in 3D display development, spatial visualization, HCI, and other related areas of interdisciplinary research.

  17. Volumetric image display for complex 3D data visualization

    NASA Astrophysics Data System (ADS)

    Tsao, Che-Chih; Chen, Jyh Shing

    2000-05-01

    A volumetric image display (VID) is a new display technology capable of displaying computer-generated 3D images in a volumetric space. Many viewers can walk around the display and see the image from all directions simultaneously without wearing any glasses. The image is real and possesses all major elements of both physiological and psychological depth cues. Due to the volumetric nature of its image, the VID can provide the most natural human-machine interface in operations involving 3D data manipulation and 3D target monitoring. The technology creates volumetric 3D images by projecting a series of profiling images distributed in space, which form a volumetric image because of the after-image effect of human eyes. Exemplary applications in biomedical image visualization were tested on a prototype display, using different methods to display a data set from CT scans. The features of this display technology make it most suitable for applications that require quick understanding of 3D relations, need frequent spatial interactions with the 3D images, or involve time-varying 3D data. It can also be useful for group discussion and decision making.

  18. 3D augmented reality with integral imaging display

    NASA Astrophysics Data System (ADS)

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.

  19. 3-D interactive visualisation tools for Hi spectral line imaging

    NASA Astrophysics Data System (ADS)

    van der Hulst, J. M.; Punzo, D.; Roerdink, J. B. T. M.

    2017-06-01

    Upcoming HI surveys will deliver such large datasets that automated processing using the full 3-D information to find and characterize HI objects is unavoidable. Full 3-D visualization is an essential tool for enabling qualitative and quantitative inspection and analysis of the 3-D data, which is often complex in nature. Here we present SlicerAstro, an open-source extension of 3DSlicer, a multi-platform open source software package for visualization and medical image processing, which we developed for the inspection and analysis of HI spectral line data. We describe its initial capabilities, including 3-D filtering, 3-D selection and comparative modelling.

  20. Real-time 3D dose imaging in water phantoms: reconstruction from simultaneous EPID-Cherenkov 3D imaging (EC3D)

    NASA Astrophysics Data System (ADS)

    Bruza, P.; Andreozzi, J. M.; Gladstone, D. J.; Jarvis, L. A.; Rottmann, J.; Pogue, B. W.

    2017-05-01

    Combination of electronic portal imaging device (EPID) transmission imaging with frontal Cherenkov imaging enabled real-time 3D dosimetry of clinical X-ray beams in water phantoms. The EPID provides a 2D transverse distribution of attenuation which can be back-projected to estimate accumulated dose, while the Cherenkov image provides an accurate lateral view of the dose versus depth. Assuming homogeneous density and composition of the phantom, both images can be linearly combined into a true 3D distribution of the deposited dose. We describe the algorithm for volumetric dose reconstruction, and demonstrate the results of a volumetric modulated arc therapy (VMAT) 3D dosimetry.
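
    One simple way to picture the fusion of the two views is a separable estimate that scales the EPID transverse map by a normalized depth-dose curve derived from the Cherenkov image; the sketch below rests on that separability assumption and on synthetic inputs, and is not the authors' reconstruction algorithm.

```python
# Hedged sketch: fuse an EPID transverse map T(x, y) with a Cherenkov-derived
# percent-depth-dose curve P(z) into a separable 3D estimate D(x, y, z).
# The separability assumption is a simplification, not the authors' method.
import numpy as np

def fuse_epid_cherenkov(transverse_map, depth_dose):
    pdd = depth_dose / depth_dose.max()           # normalize depth curve
    return transverse_map[None, :, :] * pdd[:, None, None]

# Toy inputs: a flat 10x10 cm field and a crude build-up + attenuation curve.
T = np.ones((100, 100))
z = np.linspace(0, 20, 200)                       # depth in cm (illustrative)
P = np.exp(-0.05 * z) * (1 - np.exp(-2.0 * z))
dose_3d = fuse_epid_cherenkov(T, P)
print(dose_3d.shape)                              # (200, 100, 100)
```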

  1. Imaging hypoxia using 3D photoacoustic spectroscopy

    NASA Astrophysics Data System (ADS)

    Stantz, Keith M.

    2010-02-01

    Purpose: The objective is to develop a multivariate in vivo hemodynamic model of tissue oxygenation (MiHMO2) based on 3D photoacoustic spectroscopy. Introduction: Low oxygen levels, or hypoxia, deprive cancer cells of oxygen and confer resistance to irradiation, some chemotherapeutic drugs, and oxygen-dependent therapies (phototherapy), leading to treatment failure and poor disease-free and overall survival. For example, clinical studies show that patients with breast carcinomas, cervical cancer, and head and neck carcinomas (HNC) are more likely to suffer local recurrence and metastasis if their tumors are hypoxic. A novel method to non-invasively measure tumor hypoxia, identify its type, and monitor its heterogeneity is devised by measuring tumor hemodynamics, MiHMO2. Material and Methods: Simulations are performed to compare tumor pO2 levels and hypoxia based on physiology - perfusion, fractional plasma volume, fractional cellular volume - and hemoglobin status - oxygen saturation and hemoglobin concentration - based on in vivo measurements of breast, prostate, and ovarian tumors. Simulations of MiHMO2 are performed to assess the influence of scanner resolution and different mathematical models of oxygen delivery. Results: The sensitivity of pO2 and hypoxic fraction to photoacoustic scanner resolution and the dependencies on model complexity will be presented using hemodynamic parameters for different tumors. Conclusions: Photoacoustic CT spectroscopy provides a unique ability to monitor hemodynamic and cellular physiology in tissue, which can be used to longitudinally monitor tumor oxygenation and its response to anti-angiogenic therapies.
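
    Hemoglobin status of the kind used in such models can be obtained by linear spectral unmixing; the toy two-wavelength example below uses placeholder extinction coefficients (not tabulated values) and then forms the oxygen saturation as sO2 = HbO2/(HbO2 + Hb).

```python
# Toy two-wavelength linear unmixing of oxy-/deoxyhemoglobin from measured
# absorption, then oxygen saturation sO2 = HbO2 / (HbO2 + Hb).
# The extinction coefficients below are placeholders, not tabulated values.
import numpy as np

E = np.array([[0.6, 1.4],    # wavelength 1: [eps_HbO2, eps_Hb]  (placeholder)
              [1.1, 0.8]])   # wavelength 2: [eps_HbO2, eps_Hb]  (placeholder)

def unmix_so2(mu_a):
    """mu_a: absorption coefficients at the two wavelengths for one voxel."""
    c_hbo2, c_hb = np.linalg.solve(E, mu_a)
    return c_hbo2 / (c_hbo2 + c_hb)

# Voxel whose true concentrations are (HbO2, Hb) = (0.7, 0.3) -> sO2 = 0.7.
print(round(unmix_so2(E @ np.array([0.7, 0.3])), 3))
```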

  2. Dedicated 3D photoacoustic breast imaging

    PubMed Central

    Kruger, Robert A.; Kuzmiak, Cherie M.; Lam, Richard B.; Reinecke, Daniel R.; Del Rio, Stephen P.; Steed, Doreen

    2013-01-01

    Purpose: To report the design and imaging methodology of a photoacoustic scanner dedicated to imaging hemoglobin distribution throughout a human breast. Methods: The authors developed a dedicated breast photoacoustic mammography (PAM) system using a spherical detector aperture based on our previous photoacoustic tomography scanner. The system uses 512 detectors with rectilinear scanning. The scan shape is a spiral pattern whose radius varies from 24 to 96 mm, thereby allowing a field of view that accommodates a wide range of breast sizes. The authors measured the contrast-to-noise ratio (CNR) using a target comprised of 1-mm dots printed on clear plastic. Each dot absorption coefficient was approximately the same as a 1-mm thickness of whole blood at 756 nm, the output wavelength of the Alexandrite laser used by this imaging system. The target was immersed in varying depths of an 8% solution of stock Liposyn II-20%, which mimics the attenuation of breast tissue (1.1 cm−1). The spatial resolution was measured using a 6 μm-diameter carbon fiber embedded in agar. The breasts of four healthy female volunteers, spanning a range of breast size from a brassiere C cup to a DD cup, were imaged using a 96-mm spiral protocol. Results: The CNR target was clearly visualized to a depth of 53 mm. Spatial resolution, which was estimated from the full width at half-maximum of a profile across the PAM image of a carbon fiber, was 0.42 mm. In the four human volunteers, the vasculature was well visualized throughout the breast tissue, including to the chest wall. Conclusions: CNR, lateral field-of-view and penetration depth of our dedicated PAM scanning system is sufficient to image breasts as large as 1335 mL, which should accommodate up to 90% of the women in the United States. PMID:24320471

  3. 3-D capacitance density imaging system

    DOEpatents

    Fasching, G.E.

    1988-03-18

    A three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns generated in the bed at each level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved. 7 figs.

  4. 3-D seismic imaging of complex geologies

    SciTech Connect

    Womble, D.E.; Dosanjh, S.S.; VanDyke, J.P.; Oldfield, R.A.; Greenberg, D.S.

    1995-02-01

    We present three codes for the Intel Paragon that address the problem of three-dimensional seismic imaging of complex geologies. The first code models acoustic wave propagation and can be used to generate data sets to calibrate and validate seismic imaging codes. This code reported the fastest timings for acoustic wave propagation codes at a recent SEG (Society of Exploration Geophysicists) meeting. The second code implements a Kirchhoff method for pre-stack depth migration. Development of this code is almost complete, and preliminary results are presented. The third code implements a wave equation approach to seismic migration and is a Paragon implementation of a code from the ARCO Seismic Benchmark Suite.

  5. 3D model-based still image object categorization

    NASA Astrophysics Data System (ADS)

    Petre, Raluca-Diana; Zaharia, Titus

    2011-09-01

    This paper proposes a novel recognition scheme for the semantic labeling of 2D objects present in still images. The principle consists of matching unknown 2D objects with categorized 3D models in order to transfer the semantics of the 3D models to the image objects. We tested our new recognition framework by using the MPEG-7 and Princeton 3D model databases to label unknown images randomly selected from the web. The results obtained show promising performance, with recognition rates of up to 84%, which opens interesting perspectives in terms of semantic metadata extraction from still images/videos.

  6. Respiratory blur in 3D coronary MR imaging.

    PubMed

    Wang, Y; Grist, T M; Korosec, F R; Christy, P S; Alley, M T; Polzin, J A; Mistretta, C A

    1995-04-01

    3D MR imaging of coronary arteries has the potential to provide both high resolution and high signal-to-noise ratio, but it is very susceptible to respiratory artifacts, especially respiratory blurring. Resolution loss caused by respiratory blurring in 3D coronary imaging is analyzed theoretically and verified experimentally. Under normal respiration, the width for any Gaussian point spread function is increased to a new value that is at least several millimeters (about 3-4 mm). In vivo studies were performed to compare respiratory pseudo-gated 3D acquisition with breath-hold 2D acquisition. On average, the overall quality of a pseudo-gated 3D image is worse than that of the corresponding breath-hold 2D image (P = 0.005). In most cases, respiratory blur caused coronary arteries in pseudo-gated 3D data to have lower resolution than in breath-hold 2D data.

  7. Image performance evaluation of a 3D surgical imaging platform

    NASA Astrophysics Data System (ADS)

    Petrov, Ivailo E.; Nikolov, Hristo N.; Holdsworth, David W.; Drangova, Maria

    2011-03-01

    The O-arm (Medtronic Inc.) is a multi-dimensional surgical imaging platform. The purpose of this study was to perform a quantitative evaluation of the imaging performance of the O-arm in an effort to understand its potential for future nonorthopedic applications. Performance of the reconstructed 3D images was evaluated, using a custom-built phantom, in terms of resolution, linearity, uniformity and geometrical accuracy. Both the standard (SD, 13 s) and high definition (HD, 26 s) modes were evaluated, with the imaging parameters set to image the head (120 kVp, 100 mAs and 150 mAs, respectively). For quantitative noise characterization, the images were converted to Hounsfield units (HU) off-line. Measurement of the modulation transfer function revealed a limiting resolution (at 10% level) of 1.0 mm^-1 in the axial dimension. Image noise varied between 15 and 19 HU for the HD and SD modes, respectively. Image intensities varied linearly over the measured range, up to 1300 HU. Geometric accuracy was maintained in all three dimensions over the field of view. The present study has evaluated the performance characteristics of the O-arm, and demonstrates feasibility for use in interventional applications and quantitative imaging tasks outside those currently targeted by the manufacturer. Further improvements to the reconstruction algorithms may further enhance performance for lower-contrast applications.
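
    The modulation transfer function measurement can be sketched as the normalized Fourier magnitude of a line spread function, read off at the 10% level; the Gaussian LSF and 0.2 mm pixel pitch below are synthetic assumptions, not O-arm data.

```python
# Sketch: MTF as the magnitude of the Fourier transform of a line spread
# function (LSF), and the spatial frequency where it drops to 10%.
import numpy as np

pixel_mm = 0.2
x = (np.arange(256) - 128) * pixel_mm
lsf = np.exp(-0.5 * (x / 0.35) ** 2)              # synthetic LSF, sigma = 0.35 mm

mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                                      # normalize to 1 at DC
freqs = np.fft.rfftfreq(x.size, d=pixel_mm)        # cycles per mm

cutoff = freqs[np.argmax(mtf < 0.10)]              # first frequency below 10%
print(f"10% MTF cutoff ~ {cutoff:.2f} mm^-1")
```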

  8. Critical comparison of 3D imaging approaches

    SciTech Connect

    Bennett, C L

    1999-06-03

    Currently three imaging spectrometer architectures, tunable filter, dispersive, and Fourier transform, are viable for imaging the universe in three dimensions. There are domains of greatest utility for each of these architectures. The optimum choice among the various alternative architectures is dependent on the nature of the desired observations, the maturity of the relevant technology, and the character of the backgrounds. The domain appropriate for each of the alternatives is delineated; both for instruments having ideal performance as well as for instrumentation based on currently available technology. The environment and science objectives for the Next Generation Space Telescope will be used as a specific representative case to provide a basis for comparison of the various alternatives.

  9. 3-D Imaging Based, Radiobiological Dosimetry

    PubMed Central

    Sgouros, George; Frey, Eric; Wahl, Richard; He, Bin; Prideaux, Andrew; Hobbs, Robert

    2008-01-01

    Targeted radionuclide therapy holds promise as a new treatment against cancer. Advances in imaging are making it possible to evaluate the spatial distribution of radioactivity in tumors and normal organs over time. Matched anatomical imaging, such as combined SPECT/CT and PET/CT, has also made it possible to obtain tissue density information in conjunction with the radioactivity distribution. Coupled with sophisticated iterative reconstruction algorithms, these advances have made it possible to perform highly patient-specific dosimetry that also incorporates radiobiological modeling. Such sophisticated dosimetry techniques are still in the research investigation phase. Given the attendant logistical and financial costs, a demonstrated improvement in patient care will be a prerequisite for the adoption of such highly patient-specific internal dosimetry methods. PMID:18662554

  10. Speckle Research for 3D Imaging LADAR

    DTIC Science & Technology

    2011-03-24

    This report covers the study of speckle patterns for 3D imaging laser radars (LADAR), including metrology for small pixels on photodetector arrays, as well as related basic studies of novel integrated imaging and computing systems; the record also lists related publications on extending the depth of field through unbalanced OPD (OSA Annual Meeting, Rochester, NY, 2008) and work by Nicholas George and Wanli Chi on integrated computational imaging.

  11. Acoustic 3D imaging of dental structures

    SciTech Connect

    Lewis, D.K.; Hume, W.R.; Douglass, G.D.

    1997-02-01

    Our goal for the first year of this three-dimensional acoustic imaging project was to determine how to combine flexible, individually addressable acoustic arrays; preprocessing of array source signals; spectral extrapolation of received signals; acoustic tomography codes; and acoustic propagation modeling codes. We investigated flexible, individually addressable acoustic array materials to find the best match in power, sensitivity and cost, and settled on PVDF sheet arrays and 3-1 composite material.

  12. 3D scene reconstruction from multi-aperture images

    NASA Astrophysics Data System (ADS)

    Mao, Miao; Qin, Kaihuai

    2014-04-01

    With the development of virtual reality, there is a growing demand for 3D modeling of real scenes. This paper proposes a novel 3D scene reconstruction framework based on multi-aperture images. Our framework consists of four parts. First, images with different apertures are captured via a programmable aperture. Second, we use the SIFT method for feature point matching. Then we exploit binocular stereo vision to calculate camera parameters and the 3D positions of matching points, forming a sparse 3D scene model. Finally, we apply patch-based multi-view stereo to obtain a dense 3D scene model. Experimental results show that our method is practical and effective for reconstructing dense 3D scenes.
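
    The feature-matching step can be sketched with SIFT keypoints and Lowe's ratio test, assuming OpenCV 4.4 or later with SIFT available; the ratio threshold and file names are illustrative.

```python
# Hedged sketch of the feature-matching step: SIFT keypoints plus a ratio test,
# as commonly done before stereo triangulation (assumes OpenCV >= 4.4).
import cv2

def match_sift(img1_gray, img2_gray, ratio=0.75):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1_gray, None)
    kp2, des2 = sift.detectAndCompute(img2_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]          # Lowe's ratio test
    pts1 = [kp1[m.queryIdx].pt for m in good]
    pts2 = [kp2[m.trainIdx].pt for m in good]
    return pts1, pts2

# Usage: pts1, pts2 = match_sift(cv2.imread("a.png", 0), cv2.imread("b.png", 0))
```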

  13. Polarimetric 3D integral imaging in photon-starved conditions.

    PubMed

    Carnicer, Artur; Javidi, Bahram

    2015-03-09

    We develop a method for obtaining 3D polarimetric integral images from elemental images recorded in low light illumination conditions. Since photon-counting images are very sparse, calculation of the Stokes parameters and the degree of polarization should be handled carefully. In our approach, polarimetric 3D integral images are generated using the Maximum Likelihood Estimation and subsequently reconstructed by means of a Total Variation Denoising filter. In this way, polarimetric results are comparable to those obtained in conventional illumination conditions. We also show that polarimetric information retrieved from photon starved images can be used in 3D object recognition problems. To the best of our knowledge, this is the first report on 3D polarimetric photon counting integral imaging.
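
    The linear Stokes parameters and degree of linear polarization follow from four polarizer-angle images by the standard per-pixel formulas sketched below; this omits the photon-counting MLE and total-variation denoising steps that the paper applies first.

```python
# Per-pixel linear Stokes parameters and degree of linear polarization from
# four polarizer-angle images (0, 45, 90, 135 degrees). Standard formulas only.
import numpy as np

def linear_stokes(i0, i45, i90, i135, eps=1e-12):
    s0 = i0 + i90
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)
    return s0, s1, s2, dolp

# Toy example: fully polarized light at 0 degrees gives DoLP = 1.
i0, i45, i90, i135 = (np.full((4, 4), v) for v in (1.0, 0.5, 0.0, 0.5))
print(linear_stokes(i0, i45, i90, i135)[3].mean())   # ~1.0
```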

  14. Phase Sensitive Cueing for 3D Objects in Overhead Images

    SciTech Connect

    Paglieroni, D

    2005-02-04

    Locating specific 3D objects in overhead images is an important problem in many remote sensing applications. 3D objects may contain either one connected component or multiple disconnected components. Solutions must accommodate images acquired with diverse sensors at various times of the day, in various seasons of the year, or under various weather conditions. Moreover, the physical manifestation of a 3D object with fixed physical dimensions in an overhead image is highly dependent on object physical dimensions, object position/orientation, image spatial resolution, and imaging geometry (e.g., obliqueness). This paper describes a two-stage computer-assisted approach for locating 3D objects in overhead images. In the matching stage, the computer matches models of 3D objects to overhead images. The strongest degree of match over all object orientations is computed at each pixel. Unambiguous local maxima in the degree of match as a function of pixel location are then found. In the cueing stage, the computer sorts image thumbnails in descending order of figure-of-merit and presents them to human analysts for visual inspection and interpretation. The figure-of-merit associated with an image thumbnail is computed from the degrees of match to a 3D object model associated with unambiguous local maxima that lie within the thumbnail. This form of computer assistance is invaluable when most of the relevant thumbnails are highly ranked, and the amount of inspection time needed is much less for the highly ranked thumbnails than for images as a whole.

  15. 3D laser imaging for concealed object identification

    NASA Astrophysics Data System (ADS)

    Berechet, Ion; Berginc, Gérard; Berechet, Stefan

    2014-09-01

    This paper deals with new optical non-conventional 3D laser imaging. Optical non-conventional imaging explores the advantages of laser imaging to form a three-dimensional image of the scene. 3D laser imaging can be used for three-dimensional medical imaging, topography, surveillance and robotic vision because of its ability to detect and recognize objects. In this paper, we present 3D laser imaging for concealed object identification. The objective of this new 3D laser imaging is to provide the user with a complete 3D reconstruction of the concealed object from available 2D data that are limited in number and of low representativeness. The 2D laser data used in this paper come from simulations based on the calculation of the laser interactions with the different interfaces of the scene of interest, and from experimental results. We show the global 3D reconstruction procedures capable of separating objects from foliage and reconstructing a three-dimensional image of the considered object. In this paper, we present examples of reconstruction and completion of three-dimensional images, and we analyse the different parameters of the identification process, such as resolution, camouflage scenario, noise impact and degree of lacunarity.

  16. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance.

    PubMed

    Dibildox, Gerardo; Baka, Nora; Punt, Mark; Aben, Jean-Paul; Schultz, Carl; Niessen, Wiro; van Walsum, Theo

    2014-09-01

    The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P>0.1) but did improve robustness with regards to the initialization of the 3D models. The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.
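
    At the core of such rigid point-set registration is a closed-form rotation/translation update; the Kabsch/SVD sketch below assumes known correspondences and is only a building block, not the authors' GMM-based estimation with soft correspondences and orientation weighting.

```python
# Closed-form rigid alignment (Kabsch/SVD) of corresponding 3D points --
# the kind of rigid update used inside probabilistic point-set registration.
# Correspondences are assumed known here; the paper estimates them via GMMs.
import numpy as np

def rigid_align(src, dst):
    """Return R (3x3) and t (3,) minimizing ||R @ src.T + t - dst.T||."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Toy check: recover a known rotation + translation.
rng = np.random.default_rng(4)
src = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_align(src, dst)
print(np.allclose(R, R_true), np.allclose(t, [1.0, -2.0, 0.5]))
```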

  17. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance

    SciTech Connect

    Dibildox, Gerardo Baka, Nora; Walsum, Theo van; Punt, Mark; Aben, Jean-Paul; Schultz, Carl; Niessen, Wiro

    2014-09-15

    Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regards to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.

  18. 3D-LSI technology for image sensor

    NASA Astrophysics Data System (ADS)

    Motoyoshi, Makoto; Koyanagi, Mitsumasa

    2009-03-01

    Recently, the development of three-dimensional large-scale integration (3D-LSI) technologies has accelerated, advancing from the research or limited-production level to a stage of investigation that might lead to mass production. By separating 3D-LSI technology into elementary technologies such as (1) through-silicon via (TSV) formation, (2) bump formation, (3) wafer thinning, (4) chip/wafer alignment, and (5) chip/wafer stacking, and then reconstructing the entire process and structure, many methods to realize 3D-LSI devices can be developed. However, by considering a specific application, the supply chain of base wafers, and the purpose of 3D integration, a few suitable combinations can be identified. In this paper, we focus on the application of 3D-LSI technologies to image sensors. We describe the process and structure of the chip size package (CSP), developed on the basis of current and advanced 3D-LSI technologies, to be used in CMOS image sensors. Using current LSI technologies, CSPs for 1.3 M, 2 M, and 5 M pixel CMOS image sensors were successfully fabricated without any performance degradation. 3D-LSI devices can potentially be employed in high-performance focal-plane-array image sensors. We propose a high-speed image sensor with an optical fill factor of 100% to be developed using next-generation 3D-LSI technology and fabricated using micro(μ)-bumps and micro(μ)-TSVs.

  19. A 3D Level Set Method for Microwave Breast Imaging

    PubMed Central

    Colgan, Timothy J.; Hagness, Susan C.; Van Veen, Barry D.

    2015-01-01

    Objective Conventional inverse-scattering algorithms for microwave breast imaging result in moderate resolution images with blurred boundaries between tissues. Recent 2D numerical microwave imaging studies demonstrate that the use of a level set method preserves dielectric boundaries, resulting in a more accurate, higher resolution reconstruction of the dielectric properties distribution. Previously proposed level set algorithms are computationally expensive and thus impractical in 3D. In this paper we present a computationally tractable 3D microwave imaging algorithm based on level sets. Methods We reduce the computational cost of the level set method using a Jacobian matrix, rather than an adjoint method, to calculate Frechet derivatives. We demonstrate the feasibility of 3D imaging using simulated array measurements from 3D numerical breast phantoms. We evaluate performance by comparing full 3D reconstructions to those from a conventional microwave imaging technique. We also quantitatively assess the efficacy of our algorithm in evaluating breast density. Results Our reconstructions of 3D numerical breast phantoms improve upon those of a conventional microwave imaging technique. The density estimates from our level set algorithm are more accurate than those of conventional microwave imaging, and the accuracy is greater than that reported for mammographic density estimation. Conclusion Our level set method leads to a feasible level of computational complexity for full 3D imaging, and reconstructs the heterogeneous dielectric properties distribution of the breast more accurately than conventional microwave imaging methods. Significance 3D microwave breast imaging using a level set method is a promising low-cost, non-ionizing alternative to current breast imaging techniques. PMID:26011863

  20. Morphometrics, 3D Imaging, and Craniofacial Development

    PubMed Central

    Hallgrimsson, Benedikt; Percival, Christopher J.; Green, Rebecca; Young, Nathan M.; Mio, Washington; Marcucio, Ralph

    2017-01-01

    Recent studies have shown how volumetric imaging and morphometrics can add significantly to our understanding of morphogenesis, the developmental basis for variation and the etiology of structural birth defects. On the other hand, the complex questions and diverse imaging data in developmental biology present morphometrics with more complex challenges than applications in virtually any other field. Meeting these challenges is necessary in order to understand the mechanistic basis for variation in complex morphologies. This chapter reviews the methods and theory that enable the application of modern landmark-based morphometrics to developmental biology and craniofacial development, in particular. We discuss the theoretical foundations of morphometrics as applied to development and review the basic approaches to the quantification of morphology. Focusing on geometric morphometrics, we discuss the principal statistical methods for quantifying and comparing morphological variation and covariation structure within and among groups. Finally, we discuss the future directions for morphometrics in developmental biology that will be required for approaches that enable quantitative integration across the genotype-phenotype map. PMID:26589938

  1. Dynamic 3D ultrasound and MR image registration of the beating heart.

    PubMed

    Huang, Xishi; Hill, Nicholas A; Ren, Jing; Guiraudon, Gerard; Boughner, Derek; Peters, Terry M

    2005-01-01

    Real-time three-dimensional ultrasound (RT3D US) is an ideal imaging modality for the diagnosis of cardiac disease. RT3D US is a flexible, inexpensive, non-invasive tool that provides important diagnostic information related to cardiac function. Unfortunately, RT3D US suffers from inherent shortcomings, such as low signal-to-noise ratio and limited field of view, producing images that are difficult to interpret. Multi-modal dynamic cardiac image registration is a well-recognized approach that compensates for these deficiencies while retaining the advantages of RT3D US imaging. The clinical application of multi-modal image registration methods is difficult, and there are a number of implementation issues to be resolved. In this work, we present a method for the rapid registration of RT3D US images of the beating heart to high-resolution magnetic resonance (MR) images. This method was validated using a volunteer image set. Validation results demonstrate that this approach can achieve rapid registration of images of the beating heart, with fiducial landmark and registration errors of 1.25 +/- 0.63 mm and 1.76 mm, respectively. This technique can potentially be used to improve the diagnosis of cardiac disease by augmenting RT3D US images with high-resolution MR images and to facilitate intra-operative image fusion for minimally invasive cardio-thoracic surgical navigation.

  2. Potential Cost Savings with 3D Printing Combined With 3D Imaging and CPLM for Fleet Maintenance and Revitalization

    DTIC Science & Technology

    2014-05-01

    Potential Cost Savings with 3D Printing Combined With 3D Imaging and CPLM for Fleet Maintenance and Revitalization, David N. Ford, 2014. Research context - problem: the learning curve savings forecast in the SHIPMAIN maintenance initiative have not materialized; the report examines additive manufacturing (3D printing) combined with 3D imaging and CPLM for fleet maintenance and revitalization.

  3. MR image denoising method for brain surface 3D modeling

    NASA Astrophysics Data System (ADS)

    Zhao, De-xin; Liu, Peng-jie; Zhang, De-gan

    2014-11-01

    Three-dimensional (3D) modeling of medical images is a critical part of surgical simulation. In this paper, we focus on magnetic resonance (MR) image denoising for brain model reconstruction and present a practical solution. We attempt to remove the noise present in the MR imaging signal while preserving the image characteristics. A wavelet-based adaptive curve shrinkage function is presented in the spherical coordinate system. Comparative experiments show that the denoising method preserves image details better and enhances the coefficients of contours. Using these denoised images, the brain 3D visualization is given through a surface triangle mesh model, which demonstrates the effectiveness of the proposed method.
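
    As a baseline for comparison, generic wavelet soft-thresholding of a 2D slice can be sketched with PyWavelets; the wavelet, decomposition level and universal threshold below are conventional choices, not the paper's adaptive curve-shrinkage in spherical coordinates.

```python
# Generic wavelet soft-threshold denoising of a 2D MR slice (a baseline;
# the paper's adaptive curve shrinkage in spherical coordinates is different).
import numpy as np
import pywt

def wavelet_denoise(image, wavelet="db4", level=3, sigma=None):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    if sigma is None:   # robust noise estimate from the finest diagonal band
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(image.size))          # universal threshold
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(new_coeffs, wavelet)

# Usage sketch on a noisy synthetic slice.
rng = np.random.default_rng(5)
clean = np.outer(np.hanning(128), np.hanning(128))
noisy = clean + 0.05 * rng.normal(size=clean.shape)
print(np.abs(wavelet_denoise(noisy) - clean).mean() < np.abs(noisy - clean).mean())
```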

  4. Computation of tooth axes of existent and missing teeth from 3D CT images.

    PubMed

    Wang, Yang; Wu, Lin; Guo, Huayan; Qiu, Tiantian; Huang, Yuanliang; Lin, Bin; Wang, Lisheng

    2015-12-01

    The orientations of tooth axes are important quantitative information used in dental diagnosis and surgery planning. However, their computation is a complex problem, and existing methods have their respective limitations. This paper proposes new methods to compute 3D tooth axes from 3D CT images for existent teeth with a single root or multiple roots, and to estimate 3D tooth axes from 3D CT images for missing teeth. The tooth axis of a single-root tooth is determined by segmenting the pulp cavity of the tooth and computing the principal direction of the pulp cavity, and the estimation of the tooth axes of missing teeth is modeled as an interpolation problem of quaternions along a 3D curve. The proposed methods either avoid the difficult tooth segmentation problem or improve on the limitations of existing methods. Their effectiveness and practicality are demonstrated by experimental results on different clinical 3D CT images.
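
    The principal direction of a segmented pulp cavity can be computed by PCA on its voxel coordinates, as sketched below; the voxel spacing and the synthetic blob used for the check are illustrative assumptions.

```python
# Principal direction of a segmented pulp cavity: PCA on the voxel coordinates
# (the eigenvector of the largest eigenvalue of their covariance matrix).
import numpy as np

def principal_axis(mask, spacing=(1.0, 1.0, 1.0)):
    """mask: boolean 3D array of the segmented pulp cavity."""
    coords = np.argwhere(mask) * np.asarray(spacing)      # voxel -> mm
    centered = coords - coords.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    return eigvecs[:, np.argmax(eigvals)]                 # unit axis estimate

# Toy check: a blob elongated along the first array axis yields ~[1, 0, 0].
mask = np.zeros((40, 20, 20), dtype=bool)
mask[5:35, 8:12, 8:12] = True
print(np.round(np.abs(principal_axis(mask)), 2))          # ~[1, 0, 0]
```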

  5. 3D quantitative phase imaging of neural networks using WDT

    NASA Astrophysics Data System (ADS)

    Kim, Taewoo; Liu, S. C.; Iyer, Raj; Gillette, Martha U.; Popescu, Gabriel

    2015-03-01

    White-light diffraction tomography (WDT) is a recently developed 3D imaging technique based on a quantitative phase imaging system called spatial light interference microscopy (SLIM). The technique has achieved a sub-micron resolution in all three directions with high sensitivity granted by the low-coherence of a white-light source. Demonstrations of the technique on single cell imaging have been presented previously; however, imaging on any larger sample, including a cluster of cells, has not been demonstrated using the technique. Neurons in an animal body form a highly complex and spatially organized 3D structure, which can be characterized by neuronal networks or circuits. Currently, the most common method of studying the 3D structure of neuron networks is by using a confocal fluorescence microscope, which requires fluorescence tagging with either transient membrane dyes or after fixation of the cells. Therefore, studies on neurons are often limited to samples that are chemically treated and/or dead. WDT presents a solution for imaging live neuron networks with a high spatial and temporal resolution, because it is a 3D imaging method that is label-free and non-invasive. Using this method, a mouse or rat hippocampal neuron culture and a mouse dorsal root ganglion (DRG) neuron culture have been imaged in order to see the extension of processes between the cells in 3D. Furthermore, the tomogram is compared with a confocal fluorescence image in order to investigate the 3D structure at synapses.

  6. Accommodation response measurements for integral 3D image

    NASA Astrophysics Data System (ADS)

    Hiura, H.; Mishina, T.; Arai, J.; Iwadate, Y.

    2014-03-01

    We measured accommodation responses under integral photography (IP), binocular stereoscopic, and real object display conditions, each under both binocular and monocular viewing. The equipment we used was an optometric device and a 3D display. We developed the 3D display for IP and binocular stereoscopic images; it comprises a high-resolution liquid crystal display (LCD) and a high-density lens array. The LCD has a resolution of 468 dpi and a diagonal size of 4.8 inches. The high-density lens array comprises 106 x 69 micro lenses that have a focal length of 3 mm and a diameter of 1 mm. The lenses are arranged in a honeycomb pattern. The 3D display was positioned 60 cm from the observer under the IP and binocular stereoscopic display conditions. Under these conditions, the target was presented at eight depth positions relative to the 3D display: 15, 10, and 5 cm in front of the 3D display, on the 3D display panel, and 5, 10, 15 and 30 cm behind the 3D display. Under the real object display condition, the target was displayed on the 3D display panel, and the 3D display was placed at the eight positions. The results suggest that the IP image induced more natural accommodation responses than the binocular stereoscopic image. The accommodation responses to the IP image were weaker than those to a real object; however, they showed a similar tendency to those of the real object under the two viewing conditions. Therefore, IP can induce accommodation to the depth positions of 3D images.

  7. Compression of M-FISH images using 3D SPIHT

    NASA Astrophysics Data System (ADS)

    Wu, Qiang; Xiong, Zixiang; Castleman, Kenneth R.

    2001-12-01

    With the recent development of the use of digital media for cytogenetic imaging applications, efficient compression techniques are highly desirable to accommodate the rapid growth of image data. This paper introduces a lossy to lossless coding technique for compression of multiplex fluorescence in situ hybridization (M-FISH) images, based on 3-D set partitioning in hierarchical trees (3-D SPIHT). Using a lifting-based integer wavelet decomposition, the 3-D SPIHT achieves both embedded coding and substantial improvement in lossless compression over the Lempel-Ziv (WinZip) coding which is the current method for archiving M-FISH images. The lossy compression performance of the 3-D SPIHT is also significantly better than that of the 2-D based JPEG-2000.

  8. 3D Whole Heart Imaging for Congenital Heart Disease

    PubMed Central

    Greil, Gerald; Tandon, Animesh (Aashoo); Silva Vieira, Miguel; Hussain, Tarique

    2017-01-01

    Three-dimensional (3D) whole heart techniques form a cornerstone in cardiovascular magnetic resonance imaging of congenital heart disease (CHD). It offers significant advantages over other CHD imaging modalities and techniques: no ionizing radiation; ability to be run free-breathing; ECG-gated dual-phase imaging for accurate measurements and tissue properties estimation; and higher signal-to-noise ratio and isotropic voxel resolution for multiplanar reformatting assessment. However, there are limitations, such as potentially long acquisition times with image quality degradation. Recent advances in and current applications of 3D whole heart imaging in CHD are detailed, as well as future directions. PMID:28289674

  9. Image based 3D city modeling : Comparative study

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-06-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to the urban area. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image-based approaches are used for virtual 3D city model generation: the first is sketch-based modeling, the second is procedural grammar based modeling, the third is close range photogrammetry based modeling, and the fourth is mainly based on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively. These packages have different approaches and methods suitable for image-based 3D city modeling. A literature study shows that, to date, no complete comparative study is available on creating complete 3D city models from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches. The comparative study is mainly based on data acquisition methods, data processing techniques and output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India). This 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors and work experiences. This research work also gives a brief introduction to, and the strengths and weaknesses of, these four image-based techniques, together with practical comments on what can and cannot be done with each package. Finally, the study concludes that each package has its own advantages and limitations, and the choice of software depends on the user requirements of the 3D project. For a normal visualization project, SketchUp is a good option. For 3D documentation records, Photomodeler gives good results. For large city

  10. A colour image reproduction framework for 3D colour printing

    NASA Astrophysics Data System (ADS)

    Xiao, Kaida; Sohiab, Ali; Sun, Pei-li; Yates, Julian M.; Li, Changjun; Wuerger, Sophie

    2016-10-01

    In this paper, current technologies for full colour 3D printing are introduced. A framework for the colour image reproduction process for 3D colour printing is proposed, with a special focus on colour management for 3D printed objects. Two approaches, colorimetric colour reproduction and spectral-based colour reproduction, are proposed in order to faithfully reproduce colours in 3D objects. Two key studies, colour reproduction for soft tissue prostheses and colour uniformity correction across different orientations, are described subsequently. The results clearly show that, by applying the proposed colour image reproduction framework, the performance of colour reproduction can be significantly enhanced. With post colour corrections, a further improvement in the colour process is achieved for 3D printed objects.

  11. Imaging fault zones using 3D seismic image processing techniques

    NASA Astrophysics Data System (ADS)

    Iacopini, David; Butler, Rob; Purves, Steve

    2013-04-01

    Significant advances in the structural analysis of deep-water structures, salt tectonics and extensional rift basins come from descriptions of fault system geometries imaged in 3D seismic data. However, even where seismic data are excellent, in most cases the trajectory of thrust faults is highly conjectural, and significant uncertainty still exists as to the patterns of deformation that develop between the main fault segments, and even as to the fault architectures themselves. Moreover, structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behavior. For example, almost all interpretations of thrust geometries on seismic data rely on theoretical "end-member" behaviors, where concepts such as strain localization or multilayer mechanics are simply avoided. Yet analogue outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. In order to fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007). These signal processing techniques, recently developed and applied especially by the oil industry, use variations in the amplitude and phase of the seismic wavelet. These seismic attributes improve the signal interpretation and are calculated and applied to the entire 3D seismic dataset. In this contribution we will show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems to indicate how 3D seismic image processing methods can not only better constrain the geometrical interpretations of the faults but also begin to map both strain and damage through the amplitude/phase properties of the seismic signal. This is done by quantifying and delineating the short-range anomalies in the intensity of reflector amplitudes

  12. Digital holography and 3D imaging: introduction to feature issue.

    PubMed

    Kim, Myung K; Hayasaki, Yoshio; Picart, Pascal; Rosen, Joseph

    2013-01-01

    This feature issue of Applied Optics on Digital Holography and 3D Imaging is the sixth of an approximately annual series. Forty-seven papers are presented, covering a wide range of topics in phase-shifting methods, low coherence methods, particle analysis, biomedical imaging, computer-generated holograms, integral imaging, and many others.

  13. EISCAT Aperture Synthesis Imaging (EASI_3D) for the EISCAT_3D Project

    NASA Astrophysics Data System (ADS)

    La Hoz, Cesar; Belyey, Vasyl

    2012-07-01

    Aperture Synthesis Imaging Radar (ASIR) is one of the technologies adopted by the EISCAT_3D project to endow it with imaging capabilities in three dimensions, including sub-beam resolution. Complemented by pulse compression, it will provide 3-dimensional images of certain types of incoherent scatter radar targets resolved to about 100 metres at 100 km range, depending on the signal-to-noise ratio. This ability will open new research opportunities to map small structures associated with non-homogeneous, unstable processes such as aurora, summer and winter polar radar echoes (PMSE and PMWE), Naturally Enhanced Ion Acoustic Lines (NEIALs), structures excited by HF ionospheric heating, meteors, space debris, and others. The underlying physico-mathematical principles of the technique are the same as those employed in radio astronomy to image stellar objects; both require sophisticated inversion techniques to obtain reliable images.

  14. Optical 3D watermark based digital image watermarking for telemedicine

    NASA Astrophysics Data System (ADS)

    Li, Xiao Wei; Kim, Seok Tae

    2013-12-01

    The region of interest (ROI) of a medical image is an area containing important diagnostic information and must be stored without any distortion. This paper presents a 3D-watermark-based medical image watermarking scheme in which the watermark is embedded in the non-ROI part of the image while the ROI is preserved. A 3D watermark object is first decomposed into a 2D elemental image array (EIA) by a lenslet array, and the 2D elemental image array data are then embedded into the host image. Watermark extraction is the inverse of the embedding process: from the extracted EIA, the 3D watermark can be reconstructed by the computational integral imaging reconstruction (CIIR) technique. Because the EIA is composed of a number of elemental images, each possessing its own perspective of the 3D watermark object, the 3D virtual watermark can be successfully reconstructed even when the embedded watermark data are badly damaged. Furthermore, using CAT with various rule-number parameters, many channels can be obtained for embedding, so the method overcomes the weak point of having only one transform plane in traditional watermarking methods. The effectiveness of the proposed watermarking scheme is demonstrated with the aid of experimental results.

  15. Lossless Compression of Medical Images Using 3D Predictors.

    PubMed

    Lucas, Luis; Rodrigues, Nuno; Cruz, Luis; Faria, Sergio

    2017-06-09

    This paper describes a highly efficient method for lossless compression of volumetric sets of medical images, such as CT or MRI scans. The proposed method, referred to as 3D-MRP, is based on the principle of minimum rate predictors (MRP), one of the state-of-the-art lossless compression technologies presented in the data compression literature. The main features of the proposed method include the use of 3D predictors, 3D-block octree partitioning and classification, volume-based optimisation, and support for 16 bit-depth images. Experimental results demonstrate the efficiency of the 3D-MRP algorithm for the compression of volumetric sets of medical images, achieving gains above 15% and 12% for 8 bit- and 16 bit-depth contents, respectively, when compared to JPEG-LS, JPEG2000, CALIC and HEVC, as well as other proposals based on the MRP algorithm.

  16. DCT and DST Based Image Compression for 3D Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-03-01

    This paper introduces a new method for 2D image compression whose quality is demonstrated through accurate 3D reconstruction using structured light techniques and 3D reconstruction from multiple viewpoints. The method is based on two discrete transforms: (1) a one-dimensional Discrete Cosine Transform (DCT) is applied to each row of the image; (2) the output from the previous step is transformed again by a one-dimensional Discrete Sine Transform (DST), applied to each column of data, generating new sets of high-frequency components, followed by quantization of the higher frequencies. The output is then divided into two parts: the low-frequency components are compressed by arithmetic coding and the high-frequency ones by an efficient minimization encoding algorithm. At the decompression stage, a binary search algorithm is used to recover the original high-frequency components. The technique is demonstrated by compressing 2D images at compression ratios of up to 99%. The decompressed images, which include images with structured light patterns for 3D reconstruction and images from multiple viewpoints, are of high perceptual quality, yielding accurate 3D reconstruction. Perceptual assessment and objective compression quality are compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results show that the proposed compression method is superior to both JPEG and JPEG2000 for 3D reconstruction, with perceptual quality equivalent to JPEG2000.
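
    A minimal sketch of the two-transform front end described above: a 1D DCT along each row followed by a 1D DST along each column, then uniform quantization. The quantization step and the orthonormal norms are illustrative assumptions; the entropy-coding and minimization-encoding stages of the paper are omitted.

    ```python
    # Row-wise DCT followed by column-wise DST, with uniform quantization.
    import numpy as np
    from scipy.fft import dct, dst, idct, idst

    def forward_transform(image, q_step=16.0):
        rows = dct(image.astype(float), axis=1, norm='ortho')   # 1D DCT along each row
        coeffs = dst(rows, axis=0, norm='ortho')                 # 1D DST along each column
        return np.round(coeffs / q_step)                         # uniform quantization

    def inverse_transform(q_coeffs, q_step=16.0):
        coeffs = q_coeffs * q_step                                # dequantize
        rows = idst(coeffs, axis=0, norm='ortho')
        return idct(rows, axis=1, norm='ortho')
    ```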

  17. Progresses in 3D integral imaging with optical processing

    NASA Astrophysics Data System (ADS)

    Martínez-Corral, Manuel; Martínez-Cuenca, Raúl; Saavedra, Genaro; Navarro, Héctor; Pons, Amparo; Javidi, Bahram

    2008-11-01

    Integral imaging is a promising technique for the acquisition and auto-stereoscopic display of 3D scenes with full parallax and without the need for any additional devices such as special glasses. First suggested by Lippmann at the beginning of the 20th century, integral imaging is based on the intersection of ray cones emitted by a collection of 2D elemental images that store the 3D information of the scene. This paper is devoted to the study, from the ray-optics point of view, of the optical effects of integral imaging systems and their interaction with the observer.

  18. 3D Subharmonic Ultrasound Imaging In Vitro and In Vivo

    PubMed Central

    Eisenbrey, John R.; Sridharan, Anush; Machado, Priscilla; Zhao, Hongjia; Halldorsdottir, Valgerdur G.; Dave, Jaydev K.; Liu, Ji-Bin; Park, Suhyun; Dianis, Scott; Wallace, Kirk; Thomenius, Kai E.; Forsberg, F.

    2012-01-01

    Rationale and Objectives: While contrast-enhanced ultrasound imaging techniques such as harmonic imaging (HI) have evolved to reduce tissue signals using the nonlinear properties of the contrast agent, levels of background suppression have been mixed. Subharmonic imaging (SHI) offers near-complete tissue suppression by centering the receive bandwidth at half the transmitting frequency. In this work we demonstrate the feasibility of 3D SHI and compare it to 3D HI. Materials and Methods: 3D HI and SHI were implemented on a Logiq 9 ultrasound scanner (GE Healthcare, Milwaukee, Wisconsin) with a 4D10L probe. Four-cycle SHI was implemented to transmit at 5.8 MHz and receive at 2.9 MHz, while 2-cycle HI was implemented to transmit at 5 MHz and receive at 10 MHz. The ultrasound contrast agent Definity (Lantheus Medical Imaging, North Billerica, MA) was imaged within a flow phantom and the lower pole of two canine kidneys in both HI and SHI modes. Contrast-to-tissue ratios (CTR) and rendered images were compared offline. Results: SHI resulted in significant improvement in CTR levels relative to HI both in vitro (12.11±0.52 vs. 2.67±0.77, p<0.001) and in vivo (5.74±1.92 vs. 2.40±0.48, p=0.04). Rendered 3D SHI images provided better tissue suppression and a greater overall view of vessels in a flow phantom and canine renal vasculature. Conclusions: The successful implementation of SHI in 3D allows imaging of vascular networks over a heterogeneous sample volume and should improve future diagnostic accuracy. Additionally, 3D SHI provides improved CTR values relative to 3D HI. PMID:22464198

  19. Low Dose, Low Energy 3d Image Guidance during Radiotherapy

    NASA Astrophysics Data System (ADS)

    Moore, C. J.; Marchant, T.; Amer, A.; Sharrock, P.; Price, P.; Burton, D.

    2006-04-01

    Patient kilo-voltage X-ray cone beam volumetric imaging for radiotherapy was first demonstrated on an Elekta Synergy mega-voltage X-ray linear accelerator. Subsequently low dose, reduced profile reconstruction imaging was shown to be practical for 3D geometric setup registration to pre-treatment planning images without compromising registration accuracy. Reconstruction from X-ray profiles gathered between treatment beam deliveries was also introduced. The innovation of zonal cone beam imaging promises significantly reduced doses to patients and improved soft tissue contrast in the tumour target zone. These developments coincided with the first dynamic 3D monitoring of continuous body topology changes in patients, at the moment of irradiation, using a laser interferometer. They signal the arrival of low dose, low energy 3D image guidance during radiotherapy itself.

  20. Accelerated 3D catheter visualization from triplanar MR projection images.

    PubMed

    Schirra, Carsten Oliver; Weiss, Steffen; Krueger, Sascha; Caulfield, Denis; Pedersen, Steen F; Razavi, Reza; Kozerke, Sebastian; Schaeffter, Tobias

    2010-07-01

    One major obstacle for MR-guided catheterizations is long acquisition times associated with visualizing interventional devices. Therefore, most techniques presented hitherto rely on single-plane imaging to visualize the catheter. Recently, accelerated three-dimensional (3D) imaging based on compressed sensing has been proposed to reduce acquisition times. However, frame rates with this technique remain low, and the 3D reconstruction problem yields a considerable computational load. In X-ray angiography, it is well understood that the shape of interventional devices can be derived in 3D space from a limited number of projection images. In this work, this fact is exploited to develop a method for 3D visualization of active catheters from multiplanar two-dimensional (2D) projection MR images. This is favorable to 3D MRI as the overall number of acquired profiles, and consequently the acquisition time, is reduced. To further reduce measurement times, compressed sensing is employed. Furthermore, a novel single-channel catheter design is presented that combines a solenoidal tip coil in series with a single-loop antenna, enabling simultaneous tip tracking and shape visualization. The tracked tip and catheter properties provide constraints for compressed sensing reconstruction and subsequent 2D/3D curve fitting. The feasibility of the method is demonstrated in phantoms and in an in vivo pig experiment.

  1. Prostate Mechanical Imaging: 3-D Image Composition and Feature Calculations

    PubMed Central

    Egorov, Vladimir; Ayrapetyan, Suren; Sarvazyan, Armen P.

    2008-01-01

    We have developed a method and a device entitled prostate mechanical imager (PMI) for the real-time imaging of prostate using a transrectal probe equipped with a pressure sensor array and position tracking sensor. PMI operation is based on measurement of the stress pattern on the rectal wall when the probe is pressed against the prostate. Temporal and spatial changes in the stress pattern provide information on the elastic structure of the gland and allow two-dimensional (2-D) and three-dimensional (3-D) reconstruction of prostate anatomy and assessment of prostate mechanical properties. The data acquired allow the calculation of prostate features such as size, shape, nodularity, consistency/hardness, and mobility. The PMI prototype has been validated in laboratory experiments on prostate phantoms and in a clinical study. The results obtained on model systems and in vivo images from patients prove that PMI has potential to become a diagnostic tool that could largely supplant DRE through its higher sensitivity, quantitative record storage, ease-of-use and inherent low cost. PMID:17024836

  2. Wave-CAIPI for highly accelerated 3D imaging.

    PubMed

    Bilgic, Berkin; Gagoski, Borjan A; Cauley, Stephen F; Fan, Audrey P; Polimeni, Jonathan R; Grant, P Ellen; Wald, Lawrence L; Setsompop, Kawin

    2015-06-01

    To introduce the wave-CAIPI (controlled aliasing in parallel imaging) acquisition and reconstruction technique for highly accelerated 3D imaging with negligible g-factor and artifact penalties. The wave-CAIPI 3D acquisition involves playing sinusoidal gy and gz gradients during the readout of each kx encoding line while modifying the 3D phase encoding strategy to incur interslice shifts as in 2D-CAIPI acquisitions. The resulting acquisition spreads the aliasing evenly in all spatial directions, thereby taking full advantage of 3D coil sensitivity distribution. By expressing the voxel spreading effect as a convolution in image space, an efficient reconstruction scheme that does not require data gridding is proposed. Rapid acquisition and high-quality image reconstruction with wave-CAIPI is demonstrated for high-resolution magnitude and phase imaging and quantitative susceptibility mapping. Wave-CAIPI enables full-brain gradient echo acquisition at 1 mm isotropic voxel size and R = 3 × 3 acceleration with maximum g-factors of 1.08 at 3T and 1.05 at 7T. Relative to the other advanced Cartesian encoding strategies (2D-CAIPI and bunched phase encoding) wave-CAIPI yields up to two-fold reduction in maximum g-factor for nine-fold acceleration at both field strengths. Wave-CAIPI allows highly accelerated 3D acquisitions with low artifact and negligible g-factor penalties, and may facilitate clinical application of high-resolution volumetric imaging. © 2014 Wiley Periodicals, Inc.

  3. Building 3D scenes from 2D image sequences

    NASA Astrophysics Data System (ADS)

    Cristea, Paul D.

    2006-05-01

    Sequences of 2D images, taken by a single moving video receptor, can be fused to generate a 3D representation. This dynamic stereopsis exists in birds and reptiles, whereas the static binocular stereopsis is common in mammals, including humans. Most multimedia computer vision systems for stereo image capture, transmission, processing, storage and retrieval are based on the concept of binocularity. As a consequence, their main goal is to acquire, conserve and enhance pairs of 2D images able to generate a 3D visual perception in a human observer. Stereo vision in birds is based on the fusion of images captured by each eye, with previously acquired and memorized images from the same eye. The process goes on simultaneously and conjointly for both eyes and generates an almost complete all-around visual field. As a consequence, the baseline distance is no longer fixed, as in the case of binocular 3D view, but adjustable in accordance with the distance to the object of main interest, allowing a controllable depth effect. Moreover, the synthesized 3D scene can have a better resolution than each individual 2D image in the sequence. Compression of 3D scenes can be achieved, and stereo transmissions with lower bandwidth requirements can be developed.

  4. Exposing digital image forgeries by 3D reconstruction technology

    NASA Astrophysics Data System (ADS)

    Wang, Yongqiang; Xu, Xiaojing; Li, Zhihui; Liu, Haizhen; Li, Zhigang; Huang, Wei

    2009-11-01

    Digital images are easy to tamper with and edit owing to the availability of powerful image processing and editing software. In particular, when a forgery is produced by photographing a staged scene, no manipulation is made after the picture is taken, so the usual methods, such as digital watermarks and statistical correlation techniques, can hardly detect the traces of image tampering. According to the characteristics of such image forgeries, this paper presents a method based on 3D reconstruction technology that detects forgeries by examining the dimensional relationships of the objects appearing in the image. The detection method consists of three steps. In the first step, the parameters of the images are calibrated and each crucial object in the image is selected and matched. In the second step, the 3D coordinates of each object are calculated by bundle adjustment. In the final step, the dimensional relationships of the objects are analysed. Experiments were designed to test this detection method: the 3D reconstruction of the scene and the 3D reconstruction of the forged image were computed independently. Test results show that the fabricated content in digital forgeries can be identified intuitively by this method.

  5. 3-D diagnosis-assisted management of anomalous mandibular molar

    PubMed Central

    Mittal, Neelam; Narang, Isha

    2012-01-01

    This case report describes the successful non-surgical endodontic management of carious exposed three-rooted mandibular molar with four root canals detected on the pre-operative radiograph taken with 20 degrees mesial angulation and confirmed with a 64-slice helical computed tomography scan-assisted 3-D-reconstructed images. Access cavity shape was modified to locate the extra canal with respect to the distolingual root in the left mandibular first molar. Copious irrigation was accomplished with 5.25% sodium hypochlorite and 17% EDTA. Biomechanical preparation was done using protapers. Calcium hydroxide dressing was done for 1 week. The tooth was obturated using gutta percha and AH 26 root canal sealer, and it was permanently restored with composite. Clinical examination on follow-up visits revealed no sensitivity to percussion and palpation in the left mandibular first molar. Thorough knowledge of root canal variations and use of advanced diagnostic modalities lead to successful non-surgical management of the complex cases. PMID:22629067

  6. A 3D surface imaging system for assessing human obesity

    NASA Astrophysics Data System (ADS)

    Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.

    2009-08-01

    The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisitions. The portability of the system was created via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment also were developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.

  7. Smooth 2D manifold extraction from 3D image stack

    PubMed Central

    Shihavuddin, Asm; Basu, Sreetama; Rexhepaj, Elton; Delestro, Felipe; Menezes, Nikita; Sigoillot, Séverine M; Del Nery, Elaine; Selimi, Fekrije; Spassky, Nathalie; Genovesio, Auguste

    2017-01-01

    Three-dimensional fluorescence microscopy followed by image processing is routinely used to study biological objects at various scales such as cells and tissue. However, maximum intensity projection, the most broadly used rendering tool, extracts a discontinuous layer of voxels, obliviously creating important artifacts and possibly misleading interpretation. Here we propose smooth manifold extraction, an algorithm that produces a continuous focused 2D extraction from a 3D volume, hence preserving local spatial relationships. We demonstrate the usefulness of our approach by applying it to various biological applications using confocal and wide-field microscopy 3D image stacks. We provide a parameter-free ImageJ/Fiji plugin that allows 2D visualization and interpretation of 3D image stacks with maximum accuracy. PMID:28561033
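
    For contrast with the smooth-manifold approach above, a maximum intensity projection is the baseline the authors criticise: each (x, y) pixel may come from a different z slice, which is where the discontinuity artifacts arise. A minimal sketch, with an assumed stack layout:

    ```python
    # Maximum intensity projection (MIP) of a 3D stack, assumed to have shape (z, y, x).
    import numpy as np

    def max_intensity_projection(stack):
        mip = stack.max(axis=0)          # brightest voxel along z for each (y, x)
        z_index = stack.argmax(axis=0)   # depth map: which slice each pixel came from
        return mip, z_index              # abrupt jumps in z_index are the source of artifacts
    ```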

  8. 3D Image Reconstruction: Determination of Pattern Orientation

    SciTech Connect

    Blankenbecler, Richard

    2003-03-13

    The problem of determining the Euler angles of a randomly oriented 3-D object from its 2-D Fraunhofer diffraction patterns is discussed. This problem arises in the reconstruction of a positive semi-definite 3-D object using oversampling techniques. In such a problem, the data consist of a measured set of magnitudes from 2-D tomographic images of the object at several unknown orientations. After the orientation angles are determined, the object itself can be reconstructed by a variety of methods using oversampling, the magnitude data from the 2-D images, physical constraints on the image, and iteration to determine the phases.

  9. Visualization and Analysis of 3D Microscopic Images

    PubMed Central

    Long, Fuhui; Zhou, Jianlong; Peng, Hanchuan

    2012-01-01

    In a wide range of biological studies, it is highly desirable to visualize and analyze three-dimensional (3D) microscopic images. In this primer, we first introduce several major methods for visualizing typical 3D images and related multi-scale, multi-time-point, multi-color data sets. Then, we discuss three key categories of image analysis tasks, namely segmentation, registration, and annotation. We demonstrate how to pipeline these visualization and analysis modules using examples of profiling the single-cell gene-expression of C. elegans and constructing a map of stereotyped neurite tracts in a fruit fly brain. PMID:22719236

  10. Visualization and analysis of 3D microscopic images.

    PubMed

    Long, Fuhui; Zhou, Jianlong; Peng, Hanchuan

    2012-01-01

    In a wide range of biological studies, it is highly desirable to visualize and analyze three-dimensional (3D) microscopic images. In this primer, we first introduce several major methods for visualizing typical 3D images and related multi-scale, multi-time-point, multi-color data sets. Then, we discuss three key categories of image analysis tasks, namely segmentation, registration, and annotation. We demonstrate how to pipeline these visualization and analysis modules using examples of profiling the single-cell gene-expression of C. elegans and constructing a map of stereotyped neurite tracts in a fruit fly brain.

  11. 3D Image Display Courses for Information Media Students.

    PubMed

    Yanaka, Kazuhisa; Yamanouchi, Toshiaki

    2016-01-01

    Three-dimensional displays are used extensively in movies and games. These displays are also essential in mixed reality, where virtual and real spaces overlap. Therefore, engineers and creators should be trained to master 3D display technologies. For this reason, the Department of Information Media at the Kanagawa Institute of Technology has launched two 3D image display courses specifically designed for students who aim to become information media engineers and creators.

  12. Chemistry of wood in 3D: new infrared imaging

    Treesearch

    Barbara L. Illman; Julia Sedlmair; Miriam Unger; Casey Crooks; Marli Oliveira; Carol Hirschmugl

    2015-01-01

    Chemical detection, mapping and imaging in three dimensions will help refine our understanding of wood properties and durability. We describe here a pioneering infrared method to create visual 3D images of the chemicals in wood, providing for the first time, spatial and architectural information at the cellular level without liquid extraction or prior fixation....

  13. 3D frequency-domain ultrasound waveform tomography breast imaging

    NASA Astrophysics Data System (ADS)

    Sandhu, Gursharan Yash; West, Erik; Li, Cuiping; Roy, Olivier; Duric, Neb

    2017-03-01

    Frequency-domain ultrasound waveform tomography is a promising method for the visualization and characterization of breast disease. It has previously been shown to accurately reconstruct the sound speed distributions of breasts of varying densities. The reconstructed images show detailed morphological and quantitative information that can help differentiate different types of breast disease including benign and malignant lesions. The attenuation properties of an ex vivo phantom have also been assessed. However, the reconstruction algorithms assumed a 2D geometry while the actual data acquisition process was not. Although clinically useful sound speed images can be reconstructed assuming this mismatched geometry, artifacts from the reconstruction process exist within the reconstructed images. This is especially true for registration across different modalities and when the 2D assumption is violated. For example, this happens when a patient's breast is rapidly sloping. It is also true for attenuation imaging where energy lost or gained out of the plane gets transformed into artifacts within the image space. In this paper, we will briefly review ultrasound waveform tomography techniques, give motivation for pursuing the 3D method, discuss the 3D reconstruction algorithm, present the results of 3D forward modeling, show the mismatch that is induced by the violation of 3D modeling via numerical simulations, and present a 3D inversion of a numerical phantom.

  14. Gastric Contraction Imaging System Using a 3-D Endoscope.

    PubMed

    Yoshimoto, Kayo; Yamada, Kenji; Watabe, Kenji; Takeda, Maki; Nishimura, Takahiro; Kido, Michiko; Nagakura, Toshiaki; Takahashi, Hideya; Nishida, Tsutomu; Iijima, Hideki; Tsujii, Masahiko; Takehara, Tetsuo; Ohno, Yuko

    2014-01-01

    This paper presents a gastric contraction imaging system for assessment of gastric motility using a 3-D endoscope. Gastrointestinal diseases are mainly based on morphological abnormalities. However, gastrointestinal symptoms are sometimes apparent without visible abnormalities. One of the major factors for these diseases is abnormal gastrointestinal motility. For assessment of gastric motility, a gastric motility imaging system is needed. To assess the dynamic motility of the stomach, the proposed system measures 3-D gastric contractions derived from a 3-D profile of the stomach wall obtained with a developed 3-D endoscope. After obtaining contraction waves, their frequency, amplitude, and speed of propagation can be calculated using a Gaussian function. The proposed system was evaluated for 3-D measurements of several objects with known geometries. The results showed that the surface profiles could be obtained with an error of [Formula: see text] of the distance between two different points on images. Subsequently, we evaluated the validity of a prototype system using a wave simulated model. In the experiment, the amplitude and position of waves could be measured with 1-mm accuracy. The present results suggest that the proposed system can measure the speed and amplitude of contractions. This system has low invasiveness and can assess the motility of the stomach wall directly in a 3-D manner. Our method can be used for examination of gastric morphological and functional abnormalities.
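
    Where the abstract mentions calculating wave frequency, amplitude and propagation speed with a Gaussian function, a minimal sketch of that fitting step is shown below. The synthetic profile, parameter names and initial guesses are hypothetical; tracking the fitted centre across frames would yield the propagation speed.

    ```python
    # Fit a Gaussian to one contraction profile to recover its amplitude and position.
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, amplitude, centre, width, offset):
        return amplitude * np.exp(-((x - centre) ** 2) / (2.0 * width ** 2)) + offset

    # `profile` stands in for the stomach-wall displacement sampled along the wall (mm).
    x = np.linspace(0.0, 100.0, 200)
    profile = gaussian(x, 3.0, 40.0, 8.0, 20.0) + 0.1 * np.random.randn(x.size)

    p0 = [profile.max() - profile.min(),      # amplitude guess
          x[np.argmax(profile)],              # centre guess
          5.0,                                # width guess (mm)
          profile.min()]                      # baseline guess
    params, _ = curve_fit(gaussian, x, profile, p0=p0)
    amplitude, centre, width, offset = params
    # Repeating the fit on successive frames and differencing `centre` gives the wave speed.
    ```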

  15. 3D image analysis of abdominal aortic aneurysm

    NASA Astrophysics Data System (ADS)

    Subasic, Marko; Loncaric, Sven; Sorantin, Erich

    2001-07-01

    In this paper we propose a technique for 3-D segmentation of abdominal aortic aneurysms (AAA) from computed tomography angiography (CTA) images. The output data (a 3-D model) from the proposed method can be used for measurement of aortic shape and dimensions. Knowledge of aortic shape and size is very important in the planning of minimally invasive procedures, that is, for selection of an appropriate stent graft device for treatment of AAA. The technique is based on a 3-D deformable model and utilizes the level-set algorithm for implementation of the method. The method performs 3-D segmentation of CTA images and extracts a 3-D model of the aortic wall. Once the 3-D model of the aortic wall is available, it is easy to perform all the measurements required for appropriate stent graft selection. The method proposed in this paper uses the level-set algorithm for deformable models instead of the classical snake algorithm. The main advantage of the level-set algorithm is that it enables easy segmentation of complex structures, surpassing most of the drawbacks of the classical approach. We have extended the deformable model to incorporate a priori knowledge about the shape of the AAA, which helps direct the evolution of the deformable model to correctly segment the aorta. The algorithm has been implemented in the IDL and C languages. Experiments have been performed using real patient CTA images and have shown good results.
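
    A minimal, region-based level-set sketch in the spirit of the method described above; the authors' implementation (in IDL/C, with prior shape knowledge and full 3-D evolution) is considerably more elaborate. The 2-D image, initialization, weights and iteration count here are illustrative assumptions.

    ```python
    # Chan-Vese-style level-set evolution on a 2D image; {phi > 0} is the segmented region.
    import numpy as np

    def level_set_segment(image, n_iter=200, dt=0.5):
        img = (image - image.min()) / (np.ptp(image) + 1e-9)   # normalise intensities to [0, 1]
        y, x = np.mgrid[:img.shape[0], :img.shape[1]]
        cy, cx = np.array(img.shape) / 2.0
        phi = 20.0 - np.hypot(y - cy, x - cx)                  # initial contour: a circle

        for _ in range(n_iter):
            inside, outside = phi > 0, phi <= 0
            c_in = img[inside].mean() if inside.any() else 0.0
            c_out = img[outside].mean() if outside.any() else 1.0
            # Region force: positive (grow) where a pixel resembles the interior mean.
            force = (img - c_out) ** 2 - (img - c_in) ** 2
            gy, gx = np.gradient(phi)
            phi += dt * force * np.hypot(gy, gx)               # advect phi along its normal
        return phi > 0
    ```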

  16. 2D/3D Image Registration using Regression Learning

    PubMed Central

    Chou, Chen-Rui; Frederick, Brandon; Mageras, Gig; Chang, Sha; Pizer, Stephen

    2013-01-01

    In computer vision and image analysis, image registration between 2D projections and a 3D image that achieves high accuracy and near real-time computation is challenging. In this paper, we propose a novel method that can rapidly detect an object’s 3D rigid motion or deformation from a 2D projection image or a small set thereof. The method is called CLARET (Correction via Limited-Angle Residues in External Beam Therapy) and consists of two stages: registration preceded by shape space and regression learning. In the registration stage, linear operators are used to iteratively estimate the motion/deformation parameters based on the current intensity residue between the target projection(s) and the digitally reconstructed radiograph(s) (DRRs) of the estimated 3D image. The method determines the linear operators via a two-step learning process. First, it builds a low-order parametric model of the image region’s motion/deformation shape space from its prior 3D images. Second, using learning-time samples produced from the 3D images, it formulates the relationships between the model parameters and the co-varying 2D projection intensity residues by multi-scale linear regressions. The calculated multi-scale regression matrices yield the coarse-to-fine linear operators used in estimating the model parameters from the 2D projection intensity residues in the registration. The method’s application to Image-guided Radiation Therapy (IGRT) requires only a few seconds and yields good results in localizing a tumor under rigid motion in the head and neck and under respiratory deformation in the lung, using one treatment-time imaging 2D projection or a small set thereof. PMID:24058278

  17. 2D/3D Image Registration using Regression Learning.

    PubMed

    Chou, Chen-Rui; Frederick, Brandon; Mageras, Gig; Chang, Sha; Pizer, Stephen

    2013-09-01

    In computer vision and image analysis, image registration between 2D projections and a 3D image that achieves high accuracy and near real-time computation is challenging. In this paper, we propose a novel method that can rapidly detect an object's 3D rigid motion or deformation from a 2D projection image or a small set thereof. The method is called CLARET (Correction via Limited-Angle Residues in External Beam Therapy) and consists of two stages: registration preceded by shape space and regression learning. In the registration stage, linear operators are used to iteratively estimate the motion/deformation parameters based on the current intensity residue between the target projection(s) and the digitally reconstructed radiograph(s) (DRRs) of the estimated 3D image. The method determines the linear operators via a two-step learning process. First, it builds a low-order parametric model of the image region's motion/deformation shape space from its prior 3D images. Second, using learning-time samples produced from the 3D images, it formulates the relationships between the model parameters and the co-varying 2D projection intensity residues by multi-scale linear regressions. The calculated multi-scale regression matrices yield the coarse-to-fine linear operators used in estimating the model parameters from the 2D projection intensity residues in the registration. The method's application to Image-guided Radiation Therapy (IGRT) requires only a few seconds and yields good results in localizing a tumor under rigid motion in the head and neck and under respiratory deformation in the lung, using one treatment-time imaging 2D projection or a small set thereof.
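
    The core regression-learning idea can be sketched in a few lines: from training pairs of parameter offsets and the projection-intensity residues they produce, learn a linear operator that maps a residue back to a parameter update. Shapes and data below are hypothetical, and the paper's multi-scale, coarse-to-fine scheme is not reproduced.

    ```python
    # Learn a linear operator mapping projection residues to motion/deformation parameters.
    import numpy as np

    def learn_linear_operator(param_samples, residue_samples):
        """param_samples: (n_samples, n_params); residue_samples: (n_samples, n_pixels)."""
        # Solve residue @ A ~= params in the least-squares sense.
        A, *_ = np.linalg.lstsq(residue_samples, param_samples, rcond=None)
        return A                                   # shape (n_pixels, n_params)

    def estimate_update(current_residue, A):
        """One registration iteration: parameter update estimated from the current residue."""
        return current_residue @ A
    ```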

  18. 3-D Terahertz Synthetic-Aperture Imaging and Spectroscopy

    NASA Astrophysics Data System (ADS)

    Henry, Samuel C.

    Terahertz (THz) wavelengths have attracted recent interest in multiple disciplines within engineering and science. Situated between the infrared and the microwave region of the electromagnetic spectrum, THz energy can propagate through non-polar materials such as clothing or packaging layers. Moreover, many chemical compounds, including explosives and many drugs, reveal strong absorption signatures in the THz range. For these reasons, THz wavelengths have great potential for non-destructive evaluation and explosive detection. Three-dimensional (3-D) reflection imaging with considerable depth resolution is also possible using pulsed THz systems. While THz imaging (especially 3-D) systems typically operate in transmission mode, reflection offers the most practical configuration for standoff detection, especially for objects with high water content (like human tissue) which are opaque at THz frequencies. In this research, reflection-based THz synthetic-aperture (SA) imaging is investigated as a potential imaging solution. THz SA imaging results presented in this dissertation are unique in that a 2-D planar synthetic array was used to generate a 3-D image without relying on a narrow time-window for depth isolation [Shen, 2005]. Novel THz chemical detection techniques are developed and combined with broadband THz SA capabilities to provide concurrent 3-D spectral imaging. All algorithms are tested with various objects and pressed pellets using a pulsed THz time-domain system in the Northwest Electromagnetics and Acoustics Research Laboratory (NEAR-Lab).

  19. Clinical Application of 3D-FIESTA Image in Patients with Unilateral Inner Ear Symptom.

    PubMed

    Oh, Jae Ho; Chung, Jae Ho; Min, Hyun Jung; Cho, Seok Hyun; Park, Chul Won; Lee, Seung Hwan

    2013-12-01

    Unilateral auditory dysfunction such as tinnitus and hearing loss could be a warning sign of a retrocochlear lesion. Auditory brainstem response (ABR) and internal auditory canal magnetic resonance image (MRI) are suggested as novel diagnostic tools for retrocochlear lesions. However, the high cost of MRI and the low sensitivity of the ABR test could be an obstacle when assessing patients with unilateral ear symptoms. The purpose of this study was to introduce the clinical usefulness of three-dimensional fast imaging employing steady-state acquisition (3D-FIESTA) MRI in patients with unilateral ear symptoms. Two hundred and fifty-three patients with unilateral tinnitus or unilateral hearing loss who underwent 3D-FIESTA temporal bone MRI as a screening test were enrolled. We reviewed the abnormal findings in the 3D-FIESTA images and ear symptoms using the medical records. In patients with unilateral ear symptoms, 51.0% of the patients had tinnitus and 32.8% patients were assessed to have sudden sensory neural hearing loss. With 3D-FIESTA imaging, twelve patients were diagnosed with acoustic neuroma, four with enlarged vestibular aqueduct syndrome, and two with posterior inferior cerebellar artery aneurysm. Inner ear anomalies and vestibulocochlear nerve aplasia could be diagnosed with 3D-FIESTA imaging. 3D-FIESTA imaging is a highly sensitive method for the diagnosis of cochlear or retrocochlear lesions. 3D-FIESTA imaging is a useful screening tool for patients with unilateral ear symptoms.

  20. Towards 3D ultrasound image based soft tissue tracking: a transrectal ultrasound prostate image alignment system.

    PubMed

    Baumann, Michael; Mozer, Pierre; Daanen, Vincent; Troccaz, Jocelyne

    2007-01-01

    The emergence of real-time 3D ultrasound (US) makes it possible to consider image-based tracking of subcutaneous soft tissue targets for computer guided diagnosis and therapy. We propose a 3D transrectal US based tracking system for precise prostate biopsy sample localisation. The aim is to improve sample distribution, to enable targeting of unsampled regions for repeated biopsies, and to make post-interventional quality controls possible. Since the patient is not immobilized, since the prostate is mobile and due to the fact that probe movements are only constrained by the rectum during biopsy acquisition, the tracking system must be able to estimate rigid transformations that are beyond the capture range of common image similarity measures. We propose a fast and robust multi-resolution attribute-vector registration approach that combines global and local optimization methods to solve this problem. Global optimization is performed on a probe movement model that reduces the dimensionality of the search space and thus renders optimization efficient. The method was tested on 237 prostate volumes acquired from 14 different patients for 3D to 3D and 3D to orthogonal 2D slices registration. The 3D-3D version of the algorithm converged correctly in 96.7% of all cases in 6.5s with an accuracy of 1.41mm (r.m.s.) and 3.84mm (max). The 3D to slices method yielded a success rate of 88.9% in 2.3s with an accuracy of 1.37mm (r.m.s.) and 4.3mm (max).

  1. Computerized analysis of pelvic incidence from 3D images

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaž; Janssen, Michiel M. A.; Pernuš, Franjo; Castelein, René M.; Viergever, Max A.

    2012-02-01

    The sagittal alignment of the pelvis can be evaluated by the angle of pelvic incidence (PI), which is constant for an arbitrary subject position and orientation and can therefore be compared among subjects in standing, sitting or supine position. In this study, PI was measured from three-dimensional (3D) computed tomography (CT) images of normal subjects that were acquired in supine position. A novel computerized method, based on image processing techniques, was developed to automatically determine the anatomical references required to measure PI, i.e. the centers of the femoral heads in 3D, and the center and inclination of the sacral endplate in 3D. Multiplanar image reformation was applied to obtain perfect sagittal views with all anatomical structures completely in line with the hip axis, from which PI was calculated. The resulting PI (mean ± standard deviation) was equal to 46.6° ± 9.2° for male subjects (N = 189), 47.6° ± 10.7° for female subjects (N = 181), and 47.1° ± 10.0° for all subjects (N = 370). The obtained measurements of PI from 3D images were not biased by acquisition projection or structure orientation, because all anatomical structures were completely in line with the hip axis. The performed measurements in 3D therefore represent PI according to the actual geometrical relationships among the anatomical structures of the sacrum, pelvis and hips, as observed from the perfect sagittal views.
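
    Once the anatomical references are available, the geometric step reduces to an angle between two vectors: the sacral-endplate normal and the line from the endplate centre to the hip axis (midpoint of the two femoral-head centres). A sketch with made-up coordinates in millimetres; this is an illustration of the standard PI definition, not the authors' code.

    ```python
    # Pelvic incidence from 3D landmarks: angle between endplate normal and hip-axis direction.
    import numpy as np

    def pelvic_incidence(endplate_centre, endplate_normal, femoral_head_left, femoral_head_right):
        hip_axis_centre = (femoral_head_left + femoral_head_right) / 2.0
        to_hips = hip_axis_centre - endplate_centre
        cos_pi = np.dot(endplate_normal, to_hips) / (
            np.linalg.norm(endplate_normal) * np.linalg.norm(to_hips))
        return np.degrees(np.arccos(np.clip(cos_pi, -1.0, 1.0)))

    pi_angle = pelvic_incidence(
        np.array([0.0, 0.0, 0.0]),        # sacral endplate centre (example values)
        np.array([0.0, 0.3, 1.0]),        # endplate normal (need not be unit length)
        np.array([-80.0, 60.0, -40.0]),   # left femoral head centre
        np.array([80.0, 60.0, -40.0]))    # right femoral head centre
    ```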

  2. 3D image analysis of abdominal aortic aneurysm

    NASA Astrophysics Data System (ADS)

    Subasic, Marko; Loncaric, Sven; Sorantin, Erich

    2002-05-01

    This paper presents a method for 3-D segmentation of abdominal aortic aneurysms from computed tomography angiography images. The proposed method is automatic and requires minimal user assistance. Segmentation is performed in two steps: first the inner and then the outer aortic border is segmented. These two steps differ because the image conditions differ at the two aortic borders. The outputs of these two segmentations give a complete 3-D model of the abdominal aorta. Such a 3-D model is used in measurements of the aneurysm area. The deformable model is implemented using the level-set algorithm owing to its ability to describe, in a natural manner, the complex shapes that frequently occur in pathology. In the segmentation of the outer aortic boundary we introduced some knowledge-based preprocessing to enhance and reconstruct the low-contrast aortic boundary. The method has been implemented in the IDL and C languages. Experiments have been performed using real patient CTA images and have shown good results.

  3. Interactive visualization of multiresolution image stacks in 3D.

    PubMed

    Trotts, Issac; Mikula, Shawn; Jones, Edward G

    2007-04-15

    Conventional microscopy, electron microscopy, and imaging techniques such as MRI and PET commonly generate large stacks of images of the sectioned brain. In other domains, such as neurophysiology, variables such as space or time are also varied along a stack axis. Digital image sizes have been progressively increasing and in virtual microscopy, it is now common to work with individual image sizes that are several hundred megapixels and several gigabytes in size. The interactive visualization of these high-resolution, multiresolution images in 2D has been addressed previously [Sullivan, G., and Baker, R., 1994. Efficient quad-tree coding of images and video. IEEE Trans. Image Process. 3 (3), 327-331]. Here, we describe a method for interactive visualization of multiresolution image stacks in 3D. The method, characterized as quad-tree based multiresolution image stack interactive visualization using a texel projection based criterion, relies on accessing and projecting image tiles from multiresolution image stacks in such a way that, from the observer's perspective, image tiles all appear approximately the same size even though they are accessed from different tiers within the images comprising the stack. This method enables efficient navigation of high-resolution image stacks. We implement this method in a program called StackVis, which is a Windows-based, interactive 3D multiresolution image stack visualization system written in C++ and using OpenGL. It is freely available at http://brainmaps.org.
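
    The texel-projection criterion can be illustrated with a small helper that picks, per tile, the resolution tier whose texels project to roughly one screen pixel at the current viewing distance. The pinhole-projection model and parameter names below are assumptions for illustration, not the StackVis implementation.

    ```python
    # Choose a multiresolution tier so that projected texels stay near one screen pixel.
    def choose_tier(texel_size_mm, distance_mm, focal_px, n_tiers):
        """Coarsest tier whose texels, when projected, still cover no more than ~1 pixel."""
        projected_px = focal_px * texel_size_mm / distance_mm   # full-resolution texel on screen
        tier = 0
        while tier + 1 < n_tiers and projected_px * 2 ** (tier + 1) <= 1.0:
            tier += 1                                            # each tier doubles the texel size
        return tier

    # Doubling the viewing distance moves a tile one tier coarser.
    print(choose_tier(0.25, 250.0, 1000.0, 6))   # close  -> tier 0 (full resolution)
    print(choose_tier(0.25, 1000.0, 1000.0, 6))  # far    -> tier 2
    ```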

  4. Episcopic 3D Imaging Methods: Tools for Researching Gene Function

    PubMed Central

    Weninger, Wolfgang J; Geyer, Stefan H

    2008-01-01

    This work aims at describing episcopic 3D imaging methods and at discussing how these methods can contribute to researching the genetic mechanisms driving embryogenesis and tissue remodelling, and the genesis of pathologies. Several episcopic 3D imaging methods exist. The most advanced are capable of generating high-resolution volume data (voxel sizes from 0.5x0.5x1 µm upwards) of small to large embryos of model organisms and tissue samples. Besides anatomy and tissue architecture, gene expression and gene product patterns can be analyzed three-dimensionally in their precise anatomical and histological context with the aid of whole mount in situ hybridization or whole mount immunohistochemical staining techniques. Episcopic 3D imaging techniques were and are employed for analyzing the precise morphological phenotype of experimentally malformed, randomly produced, or genetically engineered embryos of biomedical model organisms. It has been shown that episcopic 3D imaging is also suited to describing the spatial distribution of genes and gene products during embryogenesis, and that it can be used for analyzing tissue samples of adult model animals and humans. The latter offers the possibility of using episcopic 3D imaging techniques for researching the causality and treatment of pathologies or for staging cancer. Such applications, however, are not yet routine and currently only preliminary results are available. We conclude that, although episcopic 3D imaging is in its very beginnings, it represents an upcoming methodology, which in the short term will become an indispensable tool for researching the genetic regulation of embryo development as well as the genesis of malformations and diseases. PMID:19452045

  5. Disocclusion of 3d LIDAR Point Clouds Using Range Images

    NASA Astrophysics Data System (ADS)

    Biasutti, P.; Aujol, J.-F.; Brédif, M.; Bugeau, A.

    2017-05-01

    This paper proposes a novel framework for the disocclusion of mobile objects in 3D LiDAR scenes acquired via street-based Mobile Mapping Systems (MMS). Most of the existing lines of research tackle this problem directly in 3D space. This work promotes an alternative approach that uses a 2D range-image representation of the 3D point cloud, taking advantage of the fact that the problem of disocclusion has been intensively studied in the 2D image processing community over the past decade. First, the point cloud is turned into a 2D range image by exploiting the sensor's topology. Using the range image, a semi-automatic segmentation procedure based on depth histograms is performed in order to select the occluding object to be removed. A variational image inpainting technique is then used to reconstruct the area occluded by that object. Finally, the range image is unprojected as a 3D point cloud. Experiments on real data prove the effectiveness of this procedure both in terms of accuracy and speed.
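
    The first step of the pipeline, projecting an (x, y, z) point cloud into a 2D range image indexed by azimuth and elevation, can be sketched as follows. The angular resolutions are assumptions; a real MMS pipeline would use the sensor's own scan topology as the paper does.

    ```python
    # Project a point cloud into a range image on an azimuth/elevation grid.
    import numpy as np

    def points_to_range_image(points, h_res_deg=0.2, v_res_deg=0.4):
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        r = np.sqrt(x ** 2 + y ** 2 + z ** 2)                       # range of each return
        azimuth = np.degrees(np.arctan2(y, x))                      # (-180, 180]
        elevation = np.degrees(np.arcsin(z / np.maximum(r, 1e-9)))
        col = ((azimuth + 180.0) / h_res_deg).astype(int)
        row = ((elevation - elevation.min()) / v_res_deg).astype(int)
        image = np.full((row.max() + 1, col.max() + 1), np.nan)
        image[row, col] = r                                         # one range value per pixel
        return image
    ```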

  6. Proposed traceable structural resolution protocols for 3D imaging systems

    NASA Astrophysics Data System (ADS)

    MacKinnon, David; Beraldin, J.-Angelo; Cournoyer, Luc; Carrier, Benjamin; Blais, François

    2009-08-01

    A protocol for determining structural resolution using a potentially-traceable reference material is proposed. Where possible, terminology was selected to conform to those published in ISO JCGM 200:2008 (VIM) and ASTM E 2544-08 documents. The concepts of resolvability and edge width are introduced to more completely describe the ability of an optical non-contact 3D imaging system to resolve small features. A distinction is made between 3D range cameras, that obtain spatial data from the total field of view at once, and 3D range scanners, that accumulate spatial data for the total field of view over time. The protocol is presented through the evaluation of a 3D laser line range scanner.

  7. Image quality enhancement and computation acceleration of 3D holographic display using a symmetrical 3D GS algorithm.

    PubMed

    Zhou, Pengcheng; Bi, Yong; Sun, Minyuan; Wang, Hao; Li, Fang; Qi, Yan

    2014-09-20

    The 3D Gerchberg-Saxton (GS) algorithm can be used to compute a computer-generated hologram (CGH) to produce a 3D holographic display. But, using the 3D GS method, there exists a serious distortion in reconstructions of binary input images. We have eliminated the distortion and improved the image quality of the reconstructions by a maximum of 486%, using a symmetrical 3D GS algorithm that is developed based on a traditional 3D GS algorithm. In addition, the hologram computation speed has been accelerated by 9.28 times, which is significant for real-time holographic displays.
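
    For readers unfamiliar with the underlying iteration, a minimal classical (2D) Gerchberg-Saxton loop is sketched below; it is the building block that the 3D and "symmetrical" variants above extend, not the authors' algorithm itself. The target image, random initial phase and iteration count are illustrative.

    ```python
    # Classical Gerchberg-Saxton: iterate between hologram plane and image plane.
    import numpy as np

    def gerchberg_saxton(target_amplitude, n_iter=50):
        phase = np.exp(1j * 2 * np.pi * np.random.rand(*target_amplitude.shape))
        field = target_amplitude * phase
        for _ in range(n_iter):
            hologram = np.fft.ifft2(field)
            hologram = np.exp(1j * np.angle(hologram))               # phase-only constraint
            field = np.fft.fft2(hologram)
            field = target_amplitude * np.exp(1j * np.angle(field))  # impose target amplitude
        return np.angle(hologram)                                    # phase pattern to display
    ```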

  8. 3D image display of fetal ultrasonic images by thin shell

    NASA Astrophysics Data System (ADS)

    Wang, Shyh-Roei; Sun, Yung-Nien; Chang, Fong-Ming; Jiang, Ching-Fen

    1999-05-01

    Owing to its convenience and non-invasiveness, ultrasound has become an essential tool in obstetrics for the diagnosis of fetal abnormality during pregnancy. However, the noisy and blurry nature of ultrasound data makes rendering the data a challenge in comparison with MRI and CT images. In addition to the speckle noise, unwanted objects usually occlude the target to be observed. In this paper, we propose a new system that can effectively suppress the speckle noise, extract the target object, and clearly render the 3D fetal image in almost real time from 3D ultrasound image data. The system is based on a deformable model that detects contours of the object according to the local image features of the ultrasound data. In addition, in order to accelerate rendering, a thin shell is defined from the detected contours to separate the observed organ from unrelated structures. In this way, we can support quick 3D display of ultrasound, and efficient visualization of 3D fetal ultrasound thus becomes possible.

  9. Photogrammetric 3d Building Reconstruction from Thermal Images

    NASA Astrophysics Data System (ADS)

    Maset, E.; Fusiello, A.; Crosilla, F.; Toldo, R.; Zorzetto, D.

    2017-08-01

    This paper addresses the problem of 3D building reconstruction from thermal infrared (TIR) images. We show that a commercial Computer Vision software can be used to automatically orient sequences of TIR images taken from an Unmanned Aerial Vehicle (UAV) and to generate 3D point clouds, without requiring any GNSS/INS data about position and attitude of the images nor camera calibration parameters. Moreover, we propose a procedure based on Iterative Closest Point (ICP) algorithm to create a model that combines high resolution and geometric accuracy of RGB images with the thermal information deriving from TIR images. The process can be carried out entirely by the aforesaid software in a simple and efficient way.
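
    The ICP alignment step mentioned above can be sketched as a point-to-point variant: nearest-neighbour matching followed by a closed-form rigid update via SVD (Kabsch). The clouds, iteration count and lack of convergence checks are simplifying assumptions; this is not the software pipeline used in the paper.

    ```python
    # Minimal point-to-point ICP with an SVD-based rigid update.
    import numpy as np
    from scipy.spatial import cKDTree

    def icp(source, target, n_iter=30):
        """Align `source` (N,3) to `target` (M,3); returns the transformed source points."""
        src = source.copy()
        tree = cKDTree(target)
        for _ in range(n_iter):
            _, idx = tree.query(src)                 # closest target point for each source point
            matched = target[idx]
            mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
            H = (src - mu_s).T @ (matched - mu_t)    # cross-covariance
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T                           # optimal rotation (Kabsch)
            if np.linalg.det(R) < 0:                 # guard against reflections
                Vt[-1, :] *= -1
                R = Vt.T @ U.T
            t = mu_t - R @ mu_s
            src = src @ R.T + t                      # apply the incremental rigid transform
        return src
    ```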

  10. Efficiency analysis for 3D filtering of multichannel images

    NASA Astrophysics Data System (ADS)

    Kozhemiakin, Ruslan A.; Rubel, Oleksii; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2016-10-01

    Modern remote sensing systems mostly acquire images that are multichannel (dual- or multi-polarization, multi- and hyperspectral), in which noise, usually with different characteristics, is present in all components. If the noise is intensive, it is desirable to remove (suppress) it before applying methods of image classification, interpretation, and information extraction. This can be done using one of two approaches: component-wise or vectorial (3D) filtering. The second approach has shown higher efficiency when there is essential correlation between multichannel image components, as often happens for multichannel remote sensing data of different origins. Within the class of 3D filtering techniques, there are many possibilities and variations. In this paper, we consider filtering based on the discrete cosine transform (DCT) and pay attention to two aspects of processing. First, we study in detail what changes in DCT coefficient statistics take place for 3D denoising compared to component-wise processing. Second, we analyze how the selection of component images united into a 3D data array influences the efficiency of filtering, and whether the observed tendencies can be exploited in the processing of images with a rather large number of channels.
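
    The 3D DCT filtering idea can be sketched as a joint transform of co-registered channels followed by hard thresholding of small coefficients. The threshold rule (a multiple of the noise standard deviation) and the whole-array transform are simplifications; practical DCT filters operate block-wise with overlapping blocks.

    ```python
    # Joint 3D DCT denoising of a multichannel image by hard thresholding.
    import numpy as np
    from scipy.fft import dctn, idctn

    def dct3d_denoise(channels, sigma, k=2.7):
        """channels: array of shape (n_channels, height, width) with additive noise of std `sigma`."""
        coeffs = dctn(channels, norm='ortho')            # 3D DCT over channel, y and x axes
        coeffs[np.abs(coeffs) < k * sigma] = 0.0         # suppress coefficients dominated by noise
        return idctn(coeffs, norm='ortho')
    ```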

  11. Reconstruction of pediatric 3D blood vessel images from biplane angiograms

    NASA Astrophysics Data System (ADS)

    Oishi, Satoru; Nishiki, Masayuki; Asahina, Hiroshi; Tanabe, Chiharu; Yasunaga, Kunihiro; Nakamura, Hiroharu

    1996-04-01

    In pediatric cardiac angiography, there are several peculiarities, such as limitations on both the X-ray dose and the amount of contrast medium, in comparison with conventional angiography. Because of these peculiarities, catheter examinations are accomplished in a short time with a biplane X-ray apparatus. Thus, it is often difficult to determine the 3D structures of blood vessels, especially those of pediatric anomalies. A new 3D reconstruction method based on selective biplane angiography was therefore developed in order to support diagnosis and surgical planning. The method is composed of a particular reconstruction step and a composition step: individual 3D images are reconstructed in the particular reconstruction, and all 3D images are composed into a standard coordinate system in the composition. The method was applied to phantom images and clinical images for evaluation. The 3D image of the clinical data was reconstructed accurately, as its structures compared well with the real structures described in the operative findings. 3D visualization based on this method is helpful for the diagnosis and surgical planning of complicated anomalies in pediatric cardiology.

  12. 3D EFT imaging with planar electrode array: Numerical simulation

    NASA Astrophysics Data System (ADS)

    Tuykin, T.; Korjenevsky, A.

    2010-04-01

    Electric field tomography (EFT) is a new modality of quasistatic electromagnetic sounding of conductive media that has recently been investigated theoretically and realized experimentally. The results demonstrated so far pertain to 2D imaging with circular or linear arrays of electrodes (and the linear array provides rather poor imaging quality). In many applications 3D imaging is essential or can significantly increase the value of the investigation. In this report we present the first results of numerical simulation of an EFT imaging system with a planar array of electrodes, which allows 3D visualization of the subsurface conductivity distribution. The geometry of the system is similar to that of our EIT breast imaging system, which provides 3D conductivity imaging in the form of a set of cross-sections at different depths from the surface. The EFT principle of operation and reconstruction approach differ significantly from those of the EIT system, so the results of numerical simulation are important to estimate whether comparable imaging quality is possible with the new contactless method. The EFT forward problem is solved using the finite-difference time-domain (FDTD) method for an 8×8 array of square electrodes. The calculated measurement results are then used to reconstruct conductivity distributions by filtered backprojection along electric field lines. The reconstructed images of simple test objects are presented.

  13. [Pathophysiological diagnosis of facial paralysis using 3-D MRI].

    PubMed

    Ishihara, T; Hirata, K; Yuki, N; Sato, T

    2001-04-01

    Bilateral facial paralysis (facial diplegia) is often observed in Guillain-Barré syndrome (GBS) and Fisher's syndrome (FS). We observed injured facial nerves using three-dimensional (3-D) MRI in facial diplegia due to GBS and its variants, and examined the function of the blood-nerve barrier and the clinical use of 3-D MRI for detecting injured facial nerves. In the four patients with GBS and its variants (three cases of GBS, one case of FS), routine brain MRI did not show any abnormal findings, whereas contrast-enhanced 3-D MRI revealed Gd-enhancement of the facial nerves. On the other hand, only one of twelve cases of Bell's palsy showed enhancement on contrast-enhanced 3-D MRI. It may therefore be presumed that the significantly higher rate of visualization in facial paralysis due to GBS and its variants than in Bell's palsy is attributable to a difference in the mechanism of injury or to the greater severity of the disease. In conclusion, observation of the facial nerves using 3-D MRI was very useful for assessing the condition of facial diplegia in GBS and its variants.

  14. 3-D Display Of Magnetic Resonance Imaging Of The Spine

    NASA Astrophysics Data System (ADS)

    Nelson, Alan C.; Kim, Yongmin; Haralick, Robert M.; Anderson, Paul A.; Johnson, Roger H.; DeSoto, Larry A.

    1988-06-01

    The original data are produced through standard magnetic resonance imaging (MRI) procedures with a surface coil applied to the lower back of a normal human subject. The 3-D spine image data consist of twenty-six contiguous slices with 256 x 256 pixels per slice. Two methods for visualization of the 3-D spine are explored. One method utilizes a varifocal mirror system, which creates a true 3-D virtual picture of the object. The other method uses a standard high-resolution monitor to simultaneously show the three orthogonal sections that intersect at any user-selected point within the object volume. We discuss the application of these systems in the assessment of low back pain.

  15. Integration of real-time 3D image acquisition and multiview 3D display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Li, Wei; Wang, Jingyi; Liu, Yongchun

    2014-03-01

    Seamless integration of 3D acquisition and 3D display systems offers an enhanced experience in the 3D visualization of real-world objects or scenes. The vivid representation of captured 3D objects displayed on a glasses-free 3D display screen can bring a realistic viewing experience to viewers, as if they were viewing a real-world scene. Although technologies for 3D acquisition and 3D display have advanced rapidly in recent years, little effort has been devoted to studying the seamless integration of these two different aspects of 3D technology. In this paper, we describe our recent progress on integrating a light-field 3D acquisition system and an autostereoscopic multiview 3D display for real-time light-field capture and display. The paper focuses on both the architectural design and the implementation of the hardware and software of this integrated 3D system. A prototype of the integrated 3D system is built to demonstrate the real-time 3D acquisition and 3D display capability of the proposed system.

  16. A new 3-D diagnosis strategy for duodenal malignant lesions using multidetector row CT, CT virtual duodenoscopy, duodenography, and 3-D multicholangiography.

    PubMed

    Sata, N; Endo, K; Shimura, K; Koizumi, M; Nagai, H

    2007-01-01

    Recent advances in multidetector row computed tomography (MD-CT) technology provide new opportunities for the clinical diagnosis of various diseases. Here we assessed CT virtual duodenoscopy, duodenography, and three-dimensional (3-D) multicholangiography created by MD-CT for the clinical diagnosis of duodenal malignant lesions. The study involved seven cases of periduodenal carcinoma (four ampullary carcinomas, two duodenal carcinomas, one pancreatic carcinoma). Biliary contrast medium was administered intravenously, followed by intravenous administration of an anticholinergic agent and oral administration of effervescent granules to distend the upper gastrointestinal tract. Following intravenous administration of a nonionic contrast medium, an upper abdominal MD-CT scan was performed in the left lateral position. Scan data were processed on a workstation to create CT virtual duodenoscopy, duodenography, 3-D multicholangiography, and various post-processing images, which were then evaluated for their effectiveness as preoperative diagnostic tools. Carcinoma location and extent were clearly demonstrated as defects or colored low-density areas in 3-D multicholangiography images and as protruding lesions in virtual duodenography and duodenoscopy images. These findings were confirmed using multiplanar or curved planar reformation images. In conclusion, CT virtual duodenoscopy, duodenography, 3-D multicholangiography, and the various other images created by MD-CT alone provided necessary and adequate preoperative diagnostic information.

  17. Automated curved planar reformation of 3D spine images

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaz; Likar, Bostjan; Pernus, Franjo

    2005-10-01

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as those obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient diagnostic information, because they cannot follow curved anatomical structures (e.g. arteries, colon, spine), and therefore not all of the important details can be shown simultaneously in any single planar cross-section. To overcome this problem, reformatted images must be created in the coordinate system of the inspected structure, an operation usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, based on a transformation from the standard image-based coordinate system to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined from the curve that represents the vertebral column and from the rotation of the vertebrae around that curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image-analysis-based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. It performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR trades structural complexity for improved perception of spinal features: the reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, the reformatted images may prove useful for segmentation and other image analysis tasks.
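
    As a minimal sketch of the spine-curve modelling step described above, the snippet below fits polynomial models x(z), y(z) to vertebral-body centroids and derives the local tangents along which reformation planes could be placed; the centroid values, polynomial degree and coordinate convention are illustrative assumptions, not taken from the paper.

      import numpy as np

      # Hypothetical vertebral-body centroids (x, y, z) in image coordinates [mm].
      centroids = np.array([
          [10.2, 48.1, 0.0], [11.0, 46.5, 25.0], [12.5, 44.0, 50.0],
          [14.1, 42.2, 75.0], [15.0, 41.8, 100.0], [15.4, 42.5, 125.0],
      ])

      # Model the spine curve as polynomials x(z), y(z) along the cranio-caudal axis.
      deg = 3
      z = centroids[:, 2]
      px = np.polyfit(z, centroids[:, 0], deg)
      py = np.polyfit(z, centroids[:, 1], deg)

      # Resample the curve densely; these points define the spine-based axis along
      # which curved planar reformation (CPR) cross-sections would be extracted.
      z_dense = np.linspace(z.min(), z.max(), 200)
      curve = np.stack([np.polyval(px, z_dense), np.polyval(py, z_dense), z_dense], axis=1)

      # Tangent vectors give the local orientation of the reformation planes.
      tangents = np.gradient(curve, z_dense, axis=0)
      tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
      print(curve.shape, tangents.shape)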

  18. 3D imaging lidar for lunar robotic exploration

    NASA Astrophysics Data System (ADS)

    Hussein, Marwan W.; Tripp, Jeffrey W.

    2009-05-01

    Part of the requirements of the future Constellation program is to optimize lunar surface operations and reduce hazards to astronauts. Toward this end, many robotic platforms, rovers in particular, are being sought to carry out a multitude of missions involving surveys of potential EVA sites, surface reconnaissance, path planning, and obstacle detection and classification. 3D imaging lidar technology provides an enabling capability that allows fast, accurate and detailed collection of three-dimensional information about the rover's environment. The lidar images the region of interest by scanning a laser beam and measuring the pulse time-of-flight and the bearing; the accumulated set of laser ranges and bearings constitutes the three-dimensional image. As part of the ongoing NASA Ames Research Center activities in lunar robotics, the utility of 3D imaging lidar was evaluated by testing Optech's ILRIS-3D lidar on board the K-10 Red rover during the recent Human-Robotic Systems (HRS) field trials in Moses Lake, WA. This paper examines the results of the ILRIS-3D trials, presents the data obtained, and discusses its application in lunar surface robotic surveying and scouting.
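
    As a simple illustration of how the accumulated laser ranges and bearings form a 3-D image, the sketch below converts range, azimuth and elevation samples into Cartesian points; the angle convention and scan pattern are assumptions for illustration, not the ILRIS-3D specification.

      import numpy as np

      def ranges_to_points(ranges_m, azimuth_rad, elevation_rad):
          """Convert time-of-flight ranges and beam bearings to 3-D points.

          Assumes x forward, y left, z up, with azimuth measured in the x-y plane
          and elevation measured from that plane (a common, but not universal, convention).
          """
          cos_el = np.cos(elevation_rad)
          x = ranges_m * cos_el * np.cos(azimuth_rad)
          y = ranges_m * cos_el * np.sin(azimuth_rad)
          z = ranges_m * np.sin(elevation_rad)
          return np.stack([x, y, z], axis=-1)

      # Example: a small synthetic scan pattern.
      az, el = np.meshgrid(np.radians(np.arange(-20, 20, 0.5)),
                           np.radians(np.arange(-10, 10, 0.5)))
      r = np.full_like(az, 15.0)           # 15 m to a hypothetical flat target
      cloud = ranges_to_points(r, az, el)
      print(cloud.shape)                   # (40, 80, 3) grid of 3-D points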

  19. 3D FaceCam: a fast and accurate 3D facial imaging device for biometrics applications

    NASA Astrophysics Data System (ADS)

    Geng, Jason; Zhuang, Ping; May, Patrick; Yi, Steven; Tunnell, David

    2004-08-01

    Human faces are fundamentally three-dimensional (3D) objects, and each face has its unique 3D geometric profile. The 3D geometric features of a human face can be used, together with its 2D texture, for rapid and accurate face recognition. Due to the lack of low-cost, robust 3D sensors and effective 3D facial recognition (FR) algorithms, almost all existing FR systems use 2D face images. Genex has developed 3D solutions that overcome the inherent problems in 2D while also addressing limitations in other 3D alternatives. One important aspect of our solution is a unique 3D camera (the 3D FaceCam) that combines multiple imaging sensors within a single compact device to provide instantaneous, ear-to-ear coverage of a human face. This 3D camera uses three high-resolution CCD sensors and a color-encoded pattern projection system. The RGB color information from each pixel is used to compute the range data and generate an accurate 3D surface map. The imaging system uses no moving parts and combines multiple 3D views to provide detailed and complete 3D coverage of the entire face. Images are captured within a fraction of a second and full-frame 3D data is produced within a few seconds. The described method provides much better data coverage and accuracy in areas with sharp features or fine detail (such as the nose and eyes). Using this 3D data, we have been able to demonstrate that a 3D approach can significantly improve the performance of facial recognition. We have conducted tests in which we varied the lighting conditions and the angle of image acquisition in the "field." These tests have shown that matching results are significantly improved when enrolling a 3D image rather than a single 2D image. With its 3D solutions, Genex is working toward unlocking the promise of powerful 3D FR and transferring FR from a lab technology into a real-world biometric solution.

  20. 3D cryo-section/imaging of blood vessel lesions for validation of MRI data

    NASA Astrophysics Data System (ADS)

    Salvado, Olivier; Roy, Debashish; Heinzel, Meredith; McKinley, Eliot; Wilson, David

    2006-03-01

    Vascular disease is a leading cause of death and disability in the western world. Diagnosis and staging of atherosclerosis is a challenge, especially with regard to the identification of plaque vulnerability. We are developing imaging methods based upon MRI and intravascular microcoils. In order to rigorously validate our MRI imaging methods and algorithms, we have developed a new cryo-imaging system that allows one to alternately section and image the block face of tissue. We obtain 3D pathology of vessel segments excised from cadavers and characterize the tissues of atheroma using episcopic autofluorescence and bright-field microscopy images. After embedding the vessel, the block is frozen, and block-face microscopic images are taken every 200 μm with an image resolution of 30 μm × 30 μm. The series of images is then corrected for uneven illumination and serially registered to one another, and the 3D vessel segment is reconstructed. Some sections are recovered and processed with histological staining for validation. Eight tissue types can be readily identified from the cryo-images: necrotic core, calcification, lipid pool, media, adventitia, fibrosis, thrombus, and normal intima. Since the whole vessel segment is available, we can register the 3D data to images from MR or other modalities for validation. In addition, visualization tools such as multi-planar reformatting and 3D rendering can be used to study 3D plaque morphology in microscopic detail.

  1. Practical pseudo-3D registration for large tomographic images

    NASA Astrophysics Data System (ADS)

    Liu, Xuan; Laperre, Kjell; Sasov, Alexander

    2014-09-01

    Image registration is a powerful tool in various tomographic applications. Our main focus is on microCT applications in which samples/animals can be scanned multiple times under different conditions or at different time points. For this purpose, a registration tool capable of handling fairly large volumes has been developed, using a novel pseudo-3D method to achieve fast and interactive registration with simultaneous 3D visualization. To reduce the computational complexity of 3D registration, we decompose it into several 2D registrations, which are applied to the orthogonal views (transaxial, sagittal and coronal) sequentially and iteratively. After registration in each view, the next view is retrieved with the updated transformation matrix for registration. This reduces the computational complexity significantly. For a rigid transform, we only need to search for 3 parameters (2 shifts, 1 rotation) in each of the 3 orthogonal views instead of 6 (3 shifts, 3 rotations) for the full 3D volume. In addition, the number of voxels involved is also significantly reduced. For the proposed pseudo-3D method, image-based registration is employed, with the Sum of Squared Differences (SSD) as the similarity measure and Powell's conjugate direction method as the search engine. In this paper, only the rigid transform is used; however, the approach can be extended to an affine transform by adding scaling and possibly shearing to the transform model. We have noticed that more information can be used in the 2D registrations if Maximum Intensity Projections (MIP) or Parallel Projections (PP) are used instead of the orthogonal views. Also, other similarity measures, such as covariance or mutual information, can easily be incorporated. Initial evaluation on microCT data shows very promising results. Two application examples are shown: dental samples before and after treatment, and structural changes in materials before and after compression. Evaluation on registration accuracy between the pseudo-3D method and a true 3D method has
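
    A minimal sketch of one 2D sub-step of such a pseudo-3D scheme is shown below: a single orthogonal view is registered with a rigid transform (2 shifts, 1 rotation) using SSD as the similarity measure and Powell's method as the search engine. The use of scipy and the toy images are illustrative assumptions, not the authors' implementation.

      import numpy as np
      from scipy import ndimage
      from scipy.optimize import minimize

      def ssd_rigid_2d(params, fixed, moving):
          """Sum of squared differences after rotating/shifting the moving slice."""
          shift_y, shift_x, angle_deg = params
          warped = ndimage.rotate(moving, angle_deg, reshape=False, order=1)
          warped = ndimage.shift(warped, (shift_y, shift_x), order=1)
          return np.sum((fixed - warped) ** 2)

      def register_view(fixed, moving, x0=(0.0, 0.0, 0.0)):
          """Estimate (shift_y, shift_x, rotation) for one orthogonal view."""
          res = minimize(ssd_rigid_2d, x0, args=(fixed, moving), method="Powell")
          return res.x

      # Toy example: a shifted/rotated copy of a smooth random slice.
      rng = np.random.default_rng(0)
      fixed = ndimage.gaussian_filter(rng.random((64, 64)), 2)
      moving = ndimage.shift(ndimage.rotate(fixed, -3.0, reshape=False), (2.0, -1.5))
      print(register_view(fixed, moving))   # should roughly recover (-2, 1.5, 3)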

  2. Optimizing 3D image quality and performance for stereoscopic gaming

    NASA Astrophysics Data System (ADS)

    Flack, Julien; Sanderson, Hugh; Pegg, Steven; Kwok, Simon; Paterson, Daniel

    2009-02-01

    The successful introduction of stereoscopic TV systems, such as Samsung's 3D Ready Plasma, requires high-quality 3D content to be commercially available to the consumer. Console and PC games provide the most readily accessible source of high-quality 3D content. This paper describes innovative developments in a generic, PC-based game driver architecture that addresses the two key issues affecting 3D gaming: quality and speed. At the heart of the quality issue are the same considerations that studios face when producing stereoscopic renders of CG movies: how best to perform the mapping from a geometric CG environment into the stereoscopic display volume. The major difference is that for game drivers this mapping cannot be choreographed by hand but must be calculated automatically in real time without significant impact on performance. Performance is a critical issue in gaming: stereoscopic gaming has traditionally meant rendering the scene twice, with the associated performance overhead. An alternative approach is to render the scene from one virtual camera position and use information from the z-buffer to generate a stereo pair using Depth-Image-Based Rendering (DIBR). We analyze this trade-off in more detail and provide results relating to both 3D image quality and render performance.
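
    A toy sketch of the depth-image-based rendering idea is given below: a second view is synthesised by shifting pixels horizontally according to a disparity derived from the z-buffer. The linear disparity model and the crude hole filling are simplifying assumptions, not the driver's actual algorithm.

      import numpy as np

      def render_second_view(image, depth, max_disparity=16):
          """Warp a single rendered view into a stereo partner using per-pixel depth.

          image: (H, W, 3) colour buffer; depth: (H, W) normalised z-buffer in [0, 1]
          (0 = near, 1 = far). Nearer pixels receive larger horizontal disparity.
          """
          h, w = depth.shape
          disparity = np.round((1.0 - depth) * max_disparity).astype(int)
          out = np.zeros_like(image)
          filled = np.zeros((h, w), dtype=bool)
          for y in range(h):
              for x in range(w):
                  xt = x - disparity[y, x]
                  if 0 <= xt < w:
                      out[y, xt] = image[y, x]
                      filled[y, xt] = True
          # Crude hole filling: propagate the last filled pixel from the left.
          for y in range(h):
              for x in range(1, w):
                  if not filled[y, x]:
                      out[y, x] = out[y, x - 1]
          return out

      # Example with random data standing in for a rendered frame and its z-buffer.
      rng = np.random.default_rng(1)
      frame = rng.integers(0, 255, (120, 160, 3), dtype=np.uint8)
      zbuf = rng.random((120, 160))
      right_eye = render_second_view(frame, zbuf)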

  3. Semi-implicit finite volume scheme for image processing in 3D cylindrical geometry

    NASA Astrophysics Data System (ADS)

    Mikula, Karol; Sgallari, Fiorella

    2003-12-01

    Nowadays, 3D echocardiography is a well-known technique in medical diagnosis. Inexpensive echocardiographic acquisition devices are used to scan 2D slices rotated about a prescribed axis, so the discrete 3D image information is given on a cylindrical grid. Usually, this original discrete image intensity function is interpolated onto a uniform rectangular grid, and numerical schemes for 3D image processing operations (e.g. nonlinear smoothing) in uniform rectangular geometry are then applied. However, due to the generally large amount of noise present in echocardiographic images, the interpolation step can yield undesirable results. In this paper, we avoid this step and suggest a 3D finite volume method for selective image smoothing directly in the cylindrical image geometry. Specifically, we study a semi-implicit 3D cylindrical finite volume scheme for solving a Perona-Malik-type nonlinear diffusion equation and apply the scheme to 3D cylindrical echocardiographic images. The L∞-stability and convergence of the scheme to the weak solution of the regularized Perona-Malik equation are proved.
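
    For orientation, the sketch below applies plain explicit Perona-Malik diffusion on a uniform 2D grid; it illustrates the type of nonlinear smoothing being solved, but not the semi-implicit cylindrical finite-volume discretisation that is the contribution of the paper (time step, conductance constant and boundary handling are illustrative choices).

      import numpy as np

      def perona_malik(u, n_steps=20, dt=0.15, K=0.1):
          """Explicit Perona-Malik diffusion, conductance g(s) = 1 / (1 + (s/K)^2)."""
          u = u.astype(float).copy()
          def g(d):
              return 1.0 / (1.0 + (d / K) ** 2)
          for _ in range(n_steps):
              # Differences to the four neighbours (periodic boundaries via np.roll,
              # used only for brevity in this toy example).
              dn = np.roll(u, -1, axis=0) - u
              ds = np.roll(u, 1, axis=0) - u
              de = np.roll(u, -1, axis=1) - u
              dw = np.roll(u, 1, axis=1) - u
              u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
          return u

      # Toy noisy image: a bright square plus noise.
      rng = np.random.default_rng(2)
      img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0
      noisy = img + 0.2 * rng.standard_normal(img.shape)
      smoothed = perona_malik(noisy)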

  4. 3-D object-oriented image analysis of geophysical data

    NASA Astrophysics Data System (ADS)

    Fadel, I.; Kerle, N.; van der Meijde, M.

    2014-07-01

    Geophysical data are the main source of information about the subsurface. Geophysical techniques are, however, highly non-unique in determining specific physical parameters and boundaries of subsurface objects. To obtain actual physical information, an inversion process is often applied, in which measurements at or above the Earth's surface are inverted into a 2- or 3-D spatial distribution of the physical property in the subsurface. Interpreting these models in terms of structural objects related to physical processes requires a priori knowledge and expert analysis, which is susceptible to subjective choices and is therefore often non-repeatable. In this research, we implemented a recently introduced object-based approach to interpret the 3-D inversion results of a single geophysical technique, using the available a priori information and the physical and geometrical characteristics of the interpreted objects. The introduced methodology is semi-automatic and repeatable, and allows the extraction of subsurface structures using 3-D object-oriented image analysis (3-D OOA) in an objective, knowledge-based classification scheme. The approach allows a semi-objective setting of thresholds that can be tested and, if necessary, changed in a very fast and efficient way; these changes require only adjusting the thresholds used in a so-called ruleset, which is composed of algorithms that extract objects from a 3-D data cube. The approach was tested on a synthetic model based on a priori knowledge of objects present in the study area (Tanzania). Object characteristics and thresholds were well defined in a 3-D histogram of velocity versus depth, and objects were fully retrieved. The real-model results showed how 3-D OOA can deal with realistic 3-D subsurface conditions, in which boundaries become fuzzy, object extents become unclear and model characteristics vary with depth due to differing physical conditions. As expected, the 3-D histogram of the real data was

  5. 3D Image Reconstruction: Hamiltonian Method for Phase Recovery

    SciTech Connect

    Blankenbecler, Richard

    2003-03-13

    The problem of reconstructing a positive semi-definite 3-D image from measurements of the magnitude of its 2-D Fourier transform at a series of orientations is explored. The phase of the Fourier transform is not measured. The algorithm developed here utilizes a Hamiltonian, or cost function, whose minimum provides the solution to the stated problem. The energy function includes both data and physical constraints on the charge distribution or image.

  6. Noninvasive computational imaging of cardiac electrophysiology for 3-D infarct.

    PubMed

    Wang, Linwei; Wong, Ken C L; Zhang, Heye; Liu, Huafeng; Shi, Pengcheng

    2011-04-01

    Myocardial infarction (MI) creates electrophysiologically altered substrates that are responsible for ventricular arrhythmias, such as tachycardia and fibrillation. The presence, size, location, and composition of infarct scar carry significant prognostic and therapeutic implications for individual subjects. We have developed a statistical, physiological-model-constrained framework that uses noninvasive body-surface-potential (BSP) data and tomographic images to estimate subject-specific transmembrane-potential (TMP) dynamics inside the 3-D myocardium. In this paper, we adapt this framework for noninvasive imaging, detection, and quantification of 3-D scar mass in post-MI patients: the framework requires no prior knowledge of MI and converges to final subject-specific TMP estimates after several passes of estimation with intermediate feedback; based on the primary features of the estimated spatiotemporal TMP dynamics, we provide 3-D imaging of scar tissue and quantitative evaluation of scar location and extent. Phantom experiments were performed on a computational model of realistic heart-torso geometry, considering 87 transmural infarct scars of different sizes and locations inside the myocardium, and 12 compact infarct scars (extent between 10% and 30%) at different transmural depths. Real-data experiments were carried out on BSP and magnetic resonance imaging (MRI) data from four post-MI patients, validated against gold standards and existing results. This framework shows the unique advantage of noninvasive, quantitative, computational imaging of subject-specific TMP dynamics and infarct mass of the 3-D myocardium, with the potential to reflect details of the spatial structure and tissue composition/heterogeneity of 3-D infarct scar.

  7. Refraction Correction in 3D Transcranial Ultrasound Imaging

    PubMed Central

    Lindsey, Brooks D.; Smith, Stephen W.

    2014-01-01

    We present the first correction of refraction in three-dimensional (3D) ultrasound imaging using an iterative approach that traces propagation paths through a two-layer planar tissue model, applying Snell’s law in 3D. This approach is applied to real-time 3D transcranial ultrasound imaging by precomputing delays offline for several skull thicknesses, allowing the user to switch between three sets of delays for phased array imaging at the push of a button. Simulations indicate that refraction correction may be expected to increase sensitivity, reduce beam steering errors, and partially restore lost spatial resolution, with the greatest improvements occurring at the largest steering angles. Distorted images of cylindrical lesions were created by imaging through an acrylic plate in a tissue-mimicking phantom. As a result of correcting for refraction, lesions were restored to 93.6% of their original diameter in the lateral direction and 98.1% of their original shape along the long axis of the cylinders. In imaging two healthy volunteers, the mean brightness increased by 8.3% and showed no spatial dependency. PMID:24275538
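
    The core geometric step, refracting a ray direction at a planar interface with Snell's law in 3D, can be sketched in vector form as follows; the sound speeds and interface normal below are placeholders rather than values from the paper.

      import numpy as np

      def refract(d, n, c1, c2):
          """Refract a unit ray direction d at an interface with unit normal n.

          c1, c2 are the sound speeds on the incident and transmitted sides. For
          acoustics Snell's law reads sin(t)/sin(i) = c2/c1, i.e. an effective index
          ratio eta = c2/c1. Returns None beyond the critical angle.
          """
          d = d / np.linalg.norm(d)
          n = n / np.linalg.norm(n)
          if np.dot(d, n) > 0:          # make the normal face the incoming ray
              n = -n
          eta = c2 / c1
          cos_i = -np.dot(n, d)
          sin_t2 = eta ** 2 * (1.0 - cos_i ** 2)
          if sin_t2 > 1.0:
              return None               # total internal reflection
          cos_t = np.sqrt(1.0 - sin_t2)
          return eta * d + (eta * cos_i - cos_t) * n

      # Example: ray passing from soft tissue into a faster, skull-like layer.
      d_in = np.array([0.2, 0.0, 1.0])
      normal = np.array([0.0, 0.0, 1.0])        # planar layer perpendicular to z
      print(refract(d_in, normal, c1=1540.0, c2=2800.0))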

  8. 3D Imaging of Density Gradients Using Plenoptic BOS

    NASA Astrophysics Data System (ADS)

    Klemkowsky, Jenna; Clifford, Chris; Fahringer, Timothy; Thurow, Brian

    2016-11-01

    The combination of background oriented schlieren (BOS) and a plenoptic camera, termed Plenoptic BOS, is explored through two proof-of-concept experiments. The motivation of this work is to provide a 3D technique capable of observing density disturbances. BOS uses the relationship between density and refractive-index gradients to observe an apparent shift in a patterned background through image comparison. Conventional BOS systems acquire a single line-of-sight measurement and require complex configurations to obtain 3D measurements, which are not always compatible with experimental facilities. Plenoptic BOS exploits the plenoptic camera's ability to generate multiple perspective views and refocused images from a single raw plenoptic image during post-processing. Using these capabilities for BOS provides multiple line-of-sight measurements of density disturbances, which can be used collectively to generate refocused BOS images. Such refocused images allow the position of density disturbances to be determined qualitatively and quantitatively: the image with the sharpest density-gradient signature corresponds to a specific depth. These results provide motivation to advance Plenoptic BOS with the ultimate goal of reconstructing a 3D density field.

  9. Preliminary comparison of 3D synthetic aperture imaging with Explososcan

    NASA Astrophysics Data System (ADS)

    Rasmussen, Morten Fischer; Hansen, Jens Munk; Férin, Guillaume; Dufait, Rémi; Jensen, Jørgen Arendt

    2012-03-01

    Explososcan is the 'gold standard' for real-time 3D medical ultrasound imaging. In this paper, 3D synthetic aperture imaging is compared to Explososcan by simulation of 3D point spread functions. The simulations mimic a 32×32 element prototype transducer, a dense matrix phased array with a pitch of 300 μm made by Vermon. For both imaging techniques, 289 emissions are used to image a volume spanning 60° in both the azimuth and elevation directions and 150 mm in depth, resulting in a frame rate of 18 Hz for both techniques. The implemented synthetic aperture technique reduces the number of transmit channels from 1024 to 256 compared to Explososcan. In terms of FWHM, Explososcan and synthetic aperture imaging were found to perform similarly; at 90 mm depth, Explososcan's FWHM is 7% better than that of synthetic aperture. Synthetic aperture improved the cystic resolution, which expresses the ability to detect anechoic cysts in a uniformly scattering medium, at all depths except at Explososcan's focus point, and reduced the cyst radius R20dB at 90 mm depth by 48%. Synthetic aperture imaging was thus shown to reduce the number of transmit channels by a factor of four and still, in general, improve imaging quality.

  10. An automated 3D reconstruction method of UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel, fully automated 3D reconstruction approach based on images from a low-altitude unmanned aerial vehicle system (UAV) is presented, which requires neither prior camera calibration nor any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV; the image topology map can significantly reduce the running time of feature matching by limiting the combinations of images to be matched. A high-resolution digital surface model of the study area is produced from the UAV point clouds by constructing a triangulated irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction from low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, especially for rapid response and precise modelling in disaster emergencies.

  11. 3D Image Fusion to Localise Intercostal Arteries During TEVAR.

    PubMed

    Koutouzi, G; Sandström, C; Skoog, P; Roos, H; Falkenberg, M

    2017-01-01

    Preservation of intercostal arteries during thoracic aortic procedures reduces the risk of post-operative paraparesis. The origins of the intercostal arteries are visible on pre-operative computed tomography angiography (CTA), but rarely on intra-operative angiography. The purpose of this report is to suggest an image fusion technique for intra-operative localisation of the intercostal arteries during thoracic endovascular repair (TEVAR). The ostia of the intercostal arteries are identified and manually marked with rings on the pre-operative CTA. The optimal distal landing site in the descending aorta is determined and marked, allowing enough length for an adequate seal and attachment without covering more intercostal arteries than necessary. After 3D/3D fusion of the pre-operative CTA with an intra-operative cone-beam CT (CBCT), the markings are overlaid on the live fluoroscopy screen for guidance. The accuracy of the overlay is confirmed with digital subtraction angiography (DSA), and the overlay is adjusted when needed. Stent graft deployment is guided by the markings. The initial experience with this technique in seven patients is presented. 3D image fusion was feasible in all cases. Follow-up CTA after 1 month revealed that all intercostal arteries planned for preservation were patent. None of the patients developed signs of spinal cord ischaemia. 3D image fusion can be used to localise the intercostal arteries during TEVAR; this may preserve some intercostal arteries and reduce the risk of post-operative spinal cord ischaemia.

  12. 3D ultrasound image segmentation using wavelet support vector machines

    PubMed Central

    Akbari, Hamed; Fei, Baowei

    2012-01-01

    Purpose: Transrectal ultrasound (TRUS) imaging is clinically used in prostate biopsy and therapy. Segmentation of the prostate on TRUS images has many applications. In this study, a three-dimensional (3D) segmentation method for TRUS images of the prostate is presented for 3D ultrasound-guided biopsy. Methods: This segmentation method utilizes a statistical shape, texture information, and intensity profiles. A set of wavelet support vector machines (W-SVMs) is applied to the images at various subregions of the prostate. The W-SVMs are trained to adaptively capture the features of the ultrasound images in order to differentiate the prostate and nonprostate tissue. This method consists of a set of wavelet transforms for extraction of prostate texture features and a kernel-based support vector machine to classify the textures. The voxels around the surface of the prostate are labeled in sagittal, coronal, and transverse planes. The weight functions are defined for each labeled voxel on each plane and on the model at each region. In the 3D segmentation procedure, the intensity profiles around the boundary between the tentatively labeled prostate and nonprostate tissue are compared to the prostate model. Consequently, the surfaces are modified based on the model intensity profiles. The segmented prostate is updated and compared to the shape model. These two steps are repeated until they converge. Manual segmentation of the prostate serves as the gold standard and a variety of methods are used to evaluate the performance of the segmentation method. Results: The results from 40 TRUS image volumes of 20 patients show that the Dice overlap ratio is 90.3% ± 2.3% and that the sensitivity is 87.7% ± 4.9%. Conclusions: The proposed method provides a useful tool in our 3D ultrasound image-guided prostate biopsy and can also be applied to other applications in the prostate. PMID:22755682

  13. 3-D segmentation of human sternum in lung MDCT images.

    PubMed

    Pazokifard, Banafsheh; Sowmya, Arcot

    2013-01-01

    A novel, fully automatic algorithm is presented for accurate 3-D segmentation of the human sternum in lung multidetector computed tomography (MDCT) images. The segmentation result is refined by employing active contours to remove calcified costal cartilage attached to the sternum. For each dataset, the costal notches (sternocostal joints) are localized in 3-D by using a sternum mask and the positions of the costal notches on it as a reference. The proposed sternum segmentation algorithm was tested on 16 complete lung MDCT datasets; comparison of the segmentation results to the reference delineation provided by a radiologist shows high sensitivity (92.49%) and specificity (99.51%) and a small mean distance (dmean = 1.07 mm). The overall average Euclidean distance error for 3-D costal notch positioning is 4.2 mm.

  14. 1024 pixels single photon imaging array for 3D ranging

    NASA Astrophysics Data System (ADS)

    Bellisai, S.; Guerrieri, F.; Tisa, S.; Zappa, F.; Tosi, A.; Giudice, A.

    2011-01-01

    Three-dimensional (3D) acquisition systems are driving applications in many research fields. Nowadays, 3D acquisition systems are used in numerous applications, such as the cinema industry and automotive active-safety systems. Depending on the application, systems offer different features, for example colour sensitivity, two-dimensional image resolution, distance measurement accuracy and acquisition frame rate. The system we developed acquires 3D movies using indirect time-of-flight (iTOF), starting from the phase-delay measurement of sinusoidally modulated light. The system acquires live movies with a frame rate of up to 50 frames/s at distances ranging from 10 cm up to 7.5 m.
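
    The indirect time-of-flight principle reduces, per pixel, to converting the measured phase delay of the modulated light into a distance; a minimal sketch is given below, with the modulation frequency chosen only for illustration (an assumed 20 MHz happens to give an unambiguous range near the stated 7.5 m).

      import numpy as np

      C = 299_792_458.0  # speed of light [m/s]

      def itof_distance(phase_rad, f_mod_hz):
          """Distance from the phase delay of sinusoidally modulated light.

          d = c * phi / (4 * pi * f_mod); unambiguous up to c / (2 * f_mod).
          """
          return C * phase_rad / (4.0 * np.pi * f_mod_hz)

      def unambiguous_range(f_mod_hz):
          return C / (2.0 * f_mod_hz)

      f_mod = 20e6                                   # assumed 20 MHz modulation
      phases = np.array([0.1, 1.0, 3.0])             # per-pixel phase delays [rad]
      print(itof_distance(phases, f_mod))            # distances in metres
      print(unambiguous_range(f_mod))                # ~7.5 m unambiguous range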

  15. 3D image registration using a fast noniterative algorithm.

    PubMed

    Zhilkin, P; Alexander, M E

    2000-11-01

    This note describes the implementation of a three-dimensional (3D) registration algorithm, generalizing a previous 2D version [Alexander, Int J Imaging Systems and Technology 1999;10:242-57]. The algorithm solves an integrated form of the linearized image matching equation over a set of 3D rectangular sub-volumes ('patches') in the image domain. This integrated form avoids numerical instabilities due to differentiation of a noisy image over a lattice and, in addition, renders the algorithm robust to noise. Registration is implemented by first convolving the unregistered images with a set of computationally fast [O(N)] filters, providing four bandpass images for each input image, and then integrating the image matching equation over the given patch. Each filter and each patch together provide an independent set of constraints on the displacement field, which is derived by solving a set of linear regression equations. Furthermore, the filters are implemented at a variety of spatial scales, enabling registration parameters at one scale to be used as an input approximation for deriving refined values of those parameters at a finer scale of resolution. This hierarchical procedure is necessary to avoid false matches. Both downsampled and oversampled (undecimated) filtering are implemented. Although the former is computationally fast, it lacks the translation invariance of the latter. Oversampling is required for the accurate interpolation used in intermediate stages of the algorithm to reconstruct the partially registered image from the unregistered image; however, downsampling is useful, and computationally efficient, for preliminary stages of registration when large mismatches are present. The 3D registration algorithm was implemented using a 12-parameter affine model for the displacement: u(x) = Ax + b. Linear interpolation was used throughout. Accuracy and timing results for registering various multislice images, obtained by scanning a melon and human volunteers in various
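
    The displacement model quoted above, u(x) = Ax + b, is linear in its 12 parameters. Independently of the filter-based estimation described in the note, the sketch below illustrates recovering A and b by ordinary least squares from point-wise displacements; the synthetic data are assumptions for the example.

      import numpy as np

      def fit_affine_3d(x, u):
          """Fit u(x) = A x + b from N points x (N, 3) and displacements u (N, 3).

          Solved as a linear least-squares problem in the 12 affine parameters.
          """
          n = x.shape[0]
          design = np.hstack([x, np.ones((n, 1))])              # (N, 4)
          params, *_ = np.linalg.lstsq(design, u, rcond=None)   # (4, 3)
          A = params[:3].T                                      # (3, 3)
          b = params[3]                                         # (3,)
          return A, b

      # Synthetic check: random points displaced by a known affine transform.
      rng = np.random.default_rng(3)
      A_true = np.eye(3) + 0.05 * rng.standard_normal((3, 3))
      b_true = np.array([1.0, -2.0, 0.5])
      x = rng.random((200, 3)) * 100.0
      u = x @ A_true.T + b_true
      A_est, b_est = fit_affine_3d(x, u)
      print(np.allclose(A_est, A_true), np.allclose(b_est, b_true))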

  16. Tipping solutions: emerging 3D nano-fabrication/ -imaging technologies

    NASA Astrophysics Data System (ADS)

    Seniutinas, Gediminas; Balčytis, Armandas; Reklaitis, Ignas; Chen, Feng; Davis, Jeffrey; David, Christian; Juodkazis, Saulius

    2017-06-01

    The evolution of optical microscopy from an imaging technique into a tool for materials modification and fabrication is now being repeated with other characterization techniques, including scanning electron microscopy (SEM), focused ion beam (FIB) milling/imaging, and atomic force microscopy (AFM). Fabrication and in situ imaging of materials undergoing three-dimensional (3D) nano-structuring within a 1-100 nm resolution window are required for future device manufacturing. This level of precision is critical for enabling the cross-over between different device platforms (e.g. from electronics to micro-/nano-fluidics and/or photonics) within future devices that will interface with biological and molecular systems in a 3D fashion. Prospective trends in electron-, ion-, and nano-tip-based fabrication techniques are presented.

  17. 3-D Ultrasound Vascularity Assessment for Breast Cancer Diagnosis

    DTIC Science & Technology

    1999-09-01

    "truth" data in classifying a given case as benign or malignant. Discrimination thresholds are determined using a linear Bayesian classifier for the two...somewhat intuitive. The diagonal line represents a Bayesian discrimination line, equally weighing the cost of false-positive and false-negative...threshold is lowered to produce a sensitivity of 100%. Using a 3D linear Bayesian discrimination scheme (which produces a threshold plane), the average

  18. Combined registration of 3D tibia and femur implant models in 3D magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Englmeier, Karl-Hans; Siebert, Markus; von Eisenhart-Rothe, Ruediger; Graichen, Heiko

    2008-03-01

    The most frequent reasons for revision of total knee arthroplasty are loosening and abnormal axial alignment, leading to unphysiological kinematics of the knee implant. To obtain information about the postoperative kinematics of the implant, it is essential to determine the position and orientation of the tibial and femoral prostheses. We therefore developed a registration method for fitting 3D CAD models of knee joint prostheses into a 3D MR image. This rigid registration is the basis for a quantitative analysis of the kinematics of knee implants. First, the surface data of the prosthesis models are converted into a voxel representation; a recursive algorithm determines all boundary voxels of the original triangular surface data. Second, an initial preconfiguration of the implants by the user is required: the user performs a rough placement of both prosthesis models so that the subsequent fine matching process has a reasonable starting point. An automated gradient-based fine matching process then determines the best absolute position and orientation: this iterative process changes all 6 parameters (3 rotational and 3 translational) of a model by a minimal amount until a maximum of the matching function is reached. To examine the spread of the final registration solutions, the interobserver variability was measured in a group of testers. This variability, calculated as the relative standard deviation, improved from about 50% (purely manual registration) to 0.5% (rough manual preconfiguration followed by the automatic fine matching process).

  19. Large distance 3D imaging of hidden objects

    NASA Astrophysics Data System (ADS)

    Rozban, Daniel; Aharon Akram, Avihai; Kopeika, N. S.; Abramovich, A.; Levanon, Assaf

    2014-06-01

    Imaging systems in millimeter waves (MMW) are required for applications in medicine, communications, homeland security, and space technology, because there is no known ionization hazard for biological tissue and atmospheric attenuation in this range of the spectrum is low compared to that of infrared and optical rays. The lack of an inexpensive room-temperature detector has made it difficult to provide a suitable real-time implementation for the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system with a single detector. The system presented here proposes to employ the chirp radar method with a Glow Discharge Detector (GDD) focal plane array (an FPA of plasma-based detectors) using heterodyne detection. The intensity at each pixel of the GDD FPA yields the usual 2D image, while the value of the IF frequency yields the range information at each pixel, enabling 3D MMW imaging. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of inexpensive detectors. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.
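
    For the chirp (FMCW) ranging principle referred to above, the per-pixel range follows directly from the measured intermediate frequency; a minimal sketch with illustrative sweep parameters (not those of the described system) is given below.

      C = 299_792_458.0  # speed of light [m/s]

      def fmcw_range(f_if_hz, sweep_bandwidth_hz, sweep_time_s):
          """Target range from the beat/IF frequency of a linear chirp (FMCW) radar.

          R = c * f_if * T / (2 * B), since the round-trip delay 2R/c maps to a
          frequency offset f_if = (B / T) * (2R / c).
          """
          return C * f_if_hz * sweep_time_s / (2.0 * sweep_bandwidth_hz)

      # Illustrative chirp: 2 GHz bandwidth swept in 1 ms.
      B, T = 2e9, 1e-3
      for f_if in (67e3, 133e3):          # hypothetical IF readings at two pixels
          print(f"{f_if/1e3:.0f} kHz -> {fmcw_range(f_if, B, T):.2f} m")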

  20. Interactive 2D to 3D stereoscopic image synthesis

    NASA Astrophysics Data System (ADS)

    Feldman, Mark H.; Lipton, Lenny

    2005-03-01

    Advances in stereoscopic display technologies, graphics card devices, and digital imaging algorithms have opened up new possibilities for synthesizing stereoscopic images. The power of today's DirectX/OpenGL-optimized graphics cards, together with the new and creative imaging tools found in software products such as Adobe Photoshop, provides a powerful environment for converting planar drawings and photographs into stereoscopic images. The basis for such a creative process is the focus of this paper. This article presents a novel technique, which uses advanced imaging features and custom Windows-based software built on the DirectX 9 API to provide the user with an interactive stereo image synthesizer. By creating an accurate and interactive world scene with movable, flexible, depth-map-altered textured surfaces and perspective stereoscopic cameras with both visible frustums and zero-parallax planes, a user can precisely model a virtual three-dimensional representation of a real-world scene. Current versions of Adobe Photoshop provide a creative user with a rich assortment of tools for highlighting elements of a 2D image, simulating hidden areas, and creatively shaping them for a 3D scene representation. The technique described has been implemented as a Photoshop plug-in and thus allows a seamless transition of these 2D image elements into 3D surfaces, which are subsequently rendered to create stereoscopic views.

  1. 3D imaging of the mesospheric emissive layer

    NASA Astrophysics Data System (ADS)

    Nadjib Kouahla, Mohamed; Faivre, Michael; Moreels, Guy; Clairemidi, Jacques; Mougin-Sisini, Davy; Meriwether, John W.; Lehmacher, Gerald A.; Vidal, Erick; Veliz, Oskar

    A new and original stereo-imaging method is introduced to measure the altitude of the OH airglow layer and provide a 3D map of the altitude of the layer centroid. Near-IR photographs of the layer are taken at two sites 645 km apart. Each photograph is processed to invert the perspective effect and provide a satellite-type view of the layer. When superposed, the two views present a common diamond-shaped area. Pairs of matched points that correspond to a physical emissive point in the common area are identified by calculating a normalized cross-correlation coefficient. This method is suitable for obtaining 3D representations in the case of low-contrast objects. An observational campaign was conducted in July 2006 in Peru. The images were taken simultaneously at Cerro Cosmos (12° 09' 08.2" S, 75° 33' 49.3" W, altitude 4630 m) close to Huancayo and Cerro Verde Tellolo (16° 33' 17.6" S, 71° 39' 59.4" W, altitude 2330 m) close to Arequipa. 3D maps of the layer surface are retrieved and compared with pseudo-relief intensity maps of the same region. The mean altitude of the emission barycenter is located at 87.1 km on July 26 and 89.5 km on July 28. Comparable wavy relief features appear in the 3D and intensity maps.
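
    The point-matching step relies on a normalized cross-correlation coefficient between patches of the two satellite-type views; a minimal sketch of that measure is given below, with patch extraction and the search strategy omitted.

      import numpy as np

      def ncc(patch_a, patch_b):
          """Normalized cross-correlation coefficient between two equal-size patches.

          Returns a value in [-1, 1]; values near 1 indicate a likely match.
          """
          a = patch_a.astype(float).ravel()
          b = patch_b.astype(float).ravel()
          a -= a.mean()
          b -= b.mean()
          denom = np.linalg.norm(a) * np.linalg.norm(b)
          if denom == 0:
              return 0.0
          return float(np.dot(a, b) / denom)

      # Toy check: a patch correlates strongly with a brightness-scaled copy of
      # itself and weakly with unrelated noise.
      rng = np.random.default_rng(4)
      p = rng.random((15, 15))
      print(ncc(p, 1.7 * p + 0.2))          # close to 1.0
      print(ncc(p, rng.random((15, 15))))   # close to 0.0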

  2. Automated reconstruction of 3D scenes from sequences of images

    NASA Astrophysics Data System (ADS)

    Pollefeys, M.; Koch, R.; Vergauwen, M.; Van Gool, L.

    Modelling of 3D objects from image sequences is a challenging problem and has been an important research topic in the areas of photogrammetry and computer vision for many years. In this paper, a system is presented which automatically extracts a textured 3D surface model from a sequence of images of a scene. The system can deal with unknown camera settings. In addition, the parameters of this camera are allowed to change during acquisition (e.g., by zooming or focusing). No prior knowledge about the scene is necessary to build the 3D models. Therefore, this system offers a high degree of flexibility. The system is based on state-of-the-art algorithms recently developed in computer vision. The 3D modelling task is decomposed into a number of successive steps. Gradually, more knowledge of the scene and the camera setup is retrieved. At this point, the obtained accuracy is not yet at the level required for most metrology applications, but the visual quality is very convincing. This system has been applied to a number of applications in archaeology. The Roman site of Sagalassos (southwest Turkey) was used as a test case to illustrate the potential of this new approach.

  3. 3D DC/IP BOREHOLE-TO-BOREHOLE IMAGING

    NASA Astrophysics Data System (ADS)

    Milkereit, B.; Qian, W.; Bongajum, E. L.

    2009-12-01

    Our goal is the development of robust 3D DC/IP imaging technology for rock mass characterization. This work focuses on the use of multi-electrode array surface and borehole electric methods to build 3D conductivity and chargeability earth models. Over the past 3 years, we have carried out field projects to evaluate the use of cross-borehole electrical methods for imaging subsurface conductive zones and to quantify chargeability effects. Several single-borehole vertical resistivity profile (VRP), borehole-to-borehole, and borehole-to-surface resistivity tomography (BRT) survey tests have been conducted successfully. The multichannel borehole DC/IP resistivity data acquisition system consists of multiple borehole cables, each with 24 electrodes that may act as either source or receiver. When a constant injection voltage is applied between electrodes, the boreholes need to be water-filled so that the electrode array couples to the rock formation. The borehole cable design allows a seamless integration of borehole and surface measurements with or without simultaneous readings from surface electrodes. The system has the capacity to acquire more than 1000 full-waveform resistance and chargeability readings per hour. We established a multi-step procedure for data acquisition, processing and interpretation. For the borehole-to-borehole application, we have successfully mapped conductive zones between boreholes up to 350 m apart. Using at least two boreholes helps to constrain the direction (azimuth) of the imaged conductive zones. Borehole resistivity tomography test surveys were conducted to map three-dimensional massive sulfide zones between boreholes in the Sudbury area. Both surface and in-mine borehole acquisition geometries were tested. The 3D conductivity model for massive sulfides was derived from a four-borehole acquisition geometry. We continue to utilize the 3D IP (induced polarization) information in the inversion process and develop new 3D tomographic inversion

  4. Machine learning-based 3-D geometry reconstruction and modeling of aortic valve deformation using 3-D computed tomography images.

    PubMed

    Liang, Liang; Kong, Fanwei; Martin, Caitlin; Pham, Thuy; Wang, Qian; Duncan, James; Sun, Wei

    2017-05-01

    To conduct patient-specific computational modeling of the aortic valve, the 3-D anatomic geometry of an individual patient's aortic valve needs to be reconstructed from clinical 3-D cardiac images. Currently, most computational studies involve manual heart valve geometry reconstruction and manual finite element (FE) model generation, which is both time-consuming and prone to human error. A seamless computational modeling framework that can automate this process based on machine learning algorithms is desirable, as it can not only eliminate human errors and ensure the consistency of the modeling results but also allow fast feedback to clinicians and permit a future population-based probabilistic analysis of large patient cohorts. In this study, we developed a novel computational modeling method to automatically reconstruct the 3-D geometry of the aortic valve from computed tomographic images. The reconstructed valve geometries have built-in mesh correspondence, which facilitates the subsequent FE modeling. The proposed method was evaluated by comparing the reconstructed geometries from 10 patients with those manually created by human experts, and a mean discrepancy of 0.69 mm was obtained. Based on these reconstructed geometries, FE models of the valve leaflets were developed, and aortic valve closure from end systole to mid-diastole was simulated for 7 patients and validated by comparing the deformed geometries with those manually created by human experts, yielding a mean discrepancy of 1.57 mm. The proposed method offers great potential to streamline the computational modeling process and enables the development of a preoperative planning system for aortic valve disease diagnosis and treatment. Copyright © 2016 John Wiley & Sons, Ltd.

  5. Linear tracking for 3-D medical ultrasound imaging.

    PubMed

    Huang, Qing-Hua; Yang, Zhao; Hu, Wei; Jin, Lian-Wen; Wei, Gang; Li, Xuelong

    2013-12-01

    As its clinical applications grow, 3-D ultrasound imaging is undergoing rapid technical development. Compared with 2-D ultrasound imaging, 3-D ultrasound imaging can provide improved qualitative and quantitative information for various clinical applications. In this paper, we propose a novel tracking method for a freehand 3-D ultrasound imaging system with improved portability, reduced degrees of freedom, and lower cost. We designed a sliding track with an attached linear position sensor that transmits positional data via a Bluetooth-based wireless communication module, resulting in a wireless spatial tracking modality. A traditional 2-D ultrasound probe fixed to the position sensor on the sliding track was used to obtain real-time B-scans, and the positions of the B-scans were acquired simultaneously while moving the probe along the track in a freehand manner. In the experiments, the proposed method was applied to ultrasound phantoms and real human tissues. The results demonstrate that the new system outperforms a previously developed freehand system based on a traditional six-degree-of-freedom spatial sensor in both phantom and in vivo studies, indicating its merit in clinical applications for human tissues and organs.

  6. 3D imaging: how to achieve highest accuracy

    NASA Astrophysics Data System (ADS)

    Luhmann, Thomas

    2011-07-01

    The generation of 3D information from images is a key technology in many different areas, e.g. 3D modeling and representation of architectural or heritage objects, human body motion tracking and scanning, 3D scene analysis of traffic scenes, industrial applications and many more. The basic concepts rely on mathematical representations of central perspective viewing as they are widely known from photogrammetry and computer vision. The objectives of these methods range from high-precision, well-structured measurements in (industrial) photogrammetry to fully automated, unstructured applications in computer vision. Accuracy and precision are critical issues for the 3D measurement of industrial, engineering or medical objects. As the state of the art, photogrammetric multi-view measurements achieve relative precisions on the order of 1:100000 to 1:200000, and relative accuracies with respect to retraceable lengths on the order of 1:50000 to 1:100000 of the largest object diameter. In order to obtain these figures, a number of influencing parameters have to be optimized, among them: the physical representation of the object surface (targets, texture), illumination and light sources, imaging sensors, cameras and lenses, calibration strategies (camera model), orientation strategies (bundle adjustment), image processing of homologous features (target measurement, stereo and multi-image matching), and the representation of object or workpiece coordinate systems and object scale. The paper discusses the above-mentioned parameters and offers strategies for obtaining the highest accuracy in object space. Practical examples of high-quality stereo camera measurements and multi-image applications are used to demonstrate the relevance of high accuracy in different applications, ranging from medical navigation to static and dynamic industrial measurements. In addition, standards for accuracy verification are presented and demonstrated by practical examples.

  7. Quantitative 3-D imaging topogrammetry for telemedicine applications

    NASA Technical Reports Server (NTRS)

    Altschuler, Bruce R.

    1994-01-01

    The technology to reliably transmit high-resolution visual imagery over short to medium distances in real time has led to serious consideration of the use of telemedicine, telepresence, and telerobotics in the delivery of health care. These concepts may involve, and evolve toward: consultation from remote expert teaching centers; diagnosis; triage; real-time remote advice to the surgeon; and real-time remote surgical instrument manipulation (telerobotics with virtual reality). Further extrapolation leads to teledesign and telereplication of spare surgical parts through quantitative teleimaging of 3-D surfaces tied to CAD/CAM devices and an artificially intelligent archival database of 'normal' shapes. The ability to generate 'topograms', or 3-D surface numerical tables of coordinate values, capable of creating computer-generated virtual holographic-like displays, machine part replication, and statistical diagnostic shape assessment is critical to the progression of telemedicine. Any virtual reality simulation will remain in the 'video-game' realm until realistic dimensional and spatial relational inputs from real measurements in vivo during surgeries are added to an ever-growing statistical data archive. The challenges of managing and interpreting this 3-D database, which would include radiographic and surface quantitative data, are considerable. As technology drives toward dynamic and continuous 3-D surface measurements, presenting millions of X, Y, Z data points per second of flexing, stretching, moving human organs, the knowledge base and interpretive capabilities of 'brilliant robots' working as a surgeon's tireless assistants become imaginable. The brilliant robot would 'see' what the surgeon sees--and more, for the robot could quantify its 3-D sensing and would 'see' in a wider spectral range than humans, and could zoom its 'eyes' from the macro world to long-distance microscopy. Unerring robot hands could rapidly perform machine-aided suturing with

  9. Sound-speed image reconstruction in sparse-aperture 3-D ultrasound transmission tomography.

    PubMed

    Jirík, Radovan; Peterlík, Igor; Ruiter, Nicole; Fousek, Jan; Dapp, Robin; Zapf, Michael; Jan, Jirí

    2012-02-01

    This paper focuses on sound-speed image reconstruction in 3-D ultrasound transmission tomography. Along with ultrasound reflectivity and the attenuation coefficient, sound speed is an important parameter related to the type and pathological state of the imaged tissue, which matters for the intended application, breast cancer diagnosis. In contrast to 2-D ultrasound transmission tomography systems, a 3-D system can provide isotropic spatial resolution in the x-, y-, and z-directions in the reconstructed 3-D images of ultrasound parameters. Several challenges must, however, be addressed for 3-D systems, namely a sparse transducer distribution, low signal-to-noise ratio, and higher computational complexity. These issues are addressed here in terms of sound-speed image reconstruction, using edge-preserving regularized algebraic reconstruction in combination with synthetic aperture focusing. The critical points of the implementation are also discussed, because they are crucial to enabling a complete 3-D image reconstruction. The methods were tested on a synthetic data set and on phantom data sets measured with the Karlsruhe 3-D ultrasound computer tomography (USCT) I prototype. The sound-speed estimates in the reconstructed volumes agreed with the reference values. The breast-phantom outlines and the lesion-mimicking objects were also detectable in the resulting sound-speed volumes.
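
    As a highly simplified illustration of the algebraic reconstruction underlying the sound-speed image, the sketch below runs a Kaczmarz-style loop on a toy linear system of ray path lengths and times of flight; the edge-preserving regularisation and synthetic aperture focusing of the actual method are not reproduced, and the ray geometry is random.

      import numpy as np

      def art_reconstruct(A, b, n_iters=200, relax=0.5):
          """Kaczmarz-type algebraic reconstruction of a slowness image.

          A[i, j]: length of ray i inside voxel j; b[i]: measured time of flight;
          returns x[j], the slowness (1 / sound speed) of voxel j.
          """
          x = np.zeros(A.shape[1])
          row_norms = np.sum(A ** 2, axis=1)
          for _ in range(n_iters):
              for i in range(A.shape[0]):
                  if row_norms[i] == 0:
                      continue
                  residual = b[i] - A[i] @ x
                  x += relax * residual * A[i] / row_norms[i]
          return x

      # Toy problem: random ray geometry over a 10-voxel "volume".
      rng = np.random.default_rng(5)
      A = rng.random((40, 10))                         # ray/voxel intersection lengths
      x_true = 1.0 / rng.uniform(1400.0, 1600.0, 10)   # slowness for 1400-1600 m/s
      b = A @ x_true                                   # noise-free times of flight
      x_rec = art_reconstruct(A, b)
      print(np.max(np.abs(1.0 / x_rec - 1.0 / x_true)))  # near zero for this toy case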

  10. Medical image retrieval system using multiple features from 3D ROIs

    NASA Astrophysics Data System (ADS)

    Lu, Hongbing; Wang, Weiwei; Liao, Qimei; Zhang, Guopeng; Zhou, Zhiming

    2012-02-01

    Compared with retrieval using global image features, features extracted from regions of interest (ROIs) that reflect the distribution patterns of abnormalities would benefit content-based medical image retrieval (CBMIR) systems more. Currently, most CBMIR systems have been designed for 2D ROIs, which cannot comprehensively reflect 3D anatomical features and the regional distribution of lesions. To further improve retrieval accuracy, we propose a retrieval method with 3D features, including geometric features such as the Shape Index (SI) and Curvedness (CV) as well as texture features derived from the 3D Gray-Level Co-occurrence Matrix, all extracted from 3D ROIs and built on our previous 2D medical image retrieval system. The system was evaluated with 20 volumetric CT datasets for colon polyp detection. Preliminary experiments indicate that integrating morphological features with texture features can greatly improve retrieval performance. The retrieval results using features extracted from 3D ROIs accorded better with the diagnoses from optical colonoscopy than those based on features from 2D ROIs. With the test image database, the average accuracy rate for the 3D retrieval method was 76.6%, indicating its potential value in clinical application.
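
    The two geometric features named above are standard functions of the principal curvatures; a minimal sketch of their computation from k1 and k2 follows (estimation of the curvatures from the 3D ROI itself is not shown, and the Shape Index convention used here is Koenderink's [-1, 1] range, which may differ from the paper's).

      import numpy as np

      def shape_index(k1, k2):
          """Koenderink Shape Index in [-1, 1] (cap = +1, cup = -1, saddle = 0)."""
          k_max = np.maximum(k1, k2)
          k_min = np.minimum(k1, k2)
          # Guard the denominator; umbilic points (k_max == k_min) map to +/-1.
          denom = np.where(k_max == k_min, 1.0, k_min - k_max)
          si = (2.0 / np.pi) * np.arctan((k_min + k_max) / denom)
          return np.where(k_max == k_min, np.sign(k_max), si)

      def curvedness(k1, k2):
          """Curvedness, the overall magnitude of surface bending."""
          return np.sqrt((k1 ** 2 + k2 ** 2) / 2.0)

      # Example: a sphere-like cap (k1 = k2 > 0) versus a saddle (k1 = -k2).
      k1 = np.array([0.8, 0.8])
      k2 = np.array([0.8, -0.8])
      print(shape_index(k1, k2))   # -> [ 1.  0.]
      print(curvedness(k1, k2))    # -> [0.8 0.8]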

  11. 3D scene reconstruction based on 3D laser point cloud combining UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen

    2016-03-01

    Capturing and modeling 3D information of the built environment is a major challenge. A number of techniques and technologies are now in use, including GPS, photogrammetry, and remote sensing. The experiment uses multi-source data fusion for 3D scene reconstruction based on the principles of 3D laser scanning, taking the laser point cloud data as the basis, a Digital Ortho-photo Map as auxiliary data, and 3ds Max software as the basic tool for building the three-dimensional scene. The article covers data acquisition, data preprocessing, and 3D scene construction. The results show that the reconstructed 3D scene is realistic and that its accuracy meets the needs of 3D scene construction.

  12. Validation of 3D ultrasound: CT registration of prostate images

    NASA Astrophysics Data System (ADS)

    Firle, Evelyn A.; Wesarg, Stefan; Karangelis, Grigoris; Dold, Christian

    2003-05-01

    Worldwide, 20% of men are expected to develop prostate cancer at some point in their lives. In addition to surgery, the traditional treatment for cancer, radiation treatment is becoming more popular. The most interesting radiation treatment for prostate cancer is the brachytherapy procedure, and imaging is critically important for its safe delivery. In cases where a CT device is available, combining the information provided by CT and 3D ultrasound (U/S) images offers advantages in recognizing the borders of the lesion and delineating the region of treatment. For these applications the CT and U/S scans should be registered and fused into a multi-modal dataset. The purpose of the present development is a registration tool (registration, fusion and validation) for available CT volumes with 3D U/S images of the same anatomical region, i.e. the prostate. The combination of these two imaging modalities interlinks the advantages of high-resolution CT imaging and low-cost real-time U/S imaging and offers a multi-modality imaging environment for further target and anatomy delineation. This tool has been integrated into the visualization software "InViVo", which has been developed over several years at Fraunhofer IGD in Darmstadt.

  13. Image Appraisal for 2D and 3D Electromagnetic Inversion

    SciTech Connect

    Alumbaugh, D.L.; Newman, G.A.

    1999-01-28

    Linearized methods are presented for appraising image resolution and parameter accuracy in images generated with two and three dimensional non-linear electromagnetic inversion schemes. When direct matrix inversion is employed, the model resolution and posterior model covariance matrices can be directly calculated. A method to examine how the horizontal and vertical resolution varies spatially within the electromagnetic property image is developed by examining the columns of the model resolution matrix. Plotting the square root of the diagonal of the model covariance matrix yields an estimate of how errors in the inversion process such as data noise and incorrect a priori assumptions about the imaged model map into parameter error. This type of image is shown to be useful in analyzing spatial variations in the image sensitivity to the data. A method is analyzed for statistically estimating the model covariance matrix when the conjugate gradient method is employed rather than a direct inversion technique (for example in 3D inversion). A method for calculating individual columns of the model resolution matrix using the conjugate gradient method is also developed. Examples of the image analysis techniques are provided on 2D and 3D synthetic cross well EM data sets, as well as a field data set collected at the Lost Hills Oil Field in Central California.
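
    For the direct-inversion case, the quantities described above follow from the regularized generalized inverse; the sketch below uses simple damping and uncorrelated data noise as simplifying assumptions (not the paper's actual regularization), with all names illustrative:

        import numpy as np

        def appraise_linearized(J, sigma_d=1.0, lam=1e-2):
            """Model resolution matrix and posterior parameter errors for a damped
            linearized inverse problem.

            J       : (n_data, n_params) Jacobian / sensitivity matrix
            sigma_d : data standard deviation (uncorrelated noise assumed)
            lam     : damping parameter
            """
            JtJ = J.T @ J
            Ginv = np.linalg.inv(JtJ + lam * np.eye(J.shape[1]))  # regularized inverse
            G_dagger = Ginv @ J.T                                  # generalized inverse
            R = G_dagger @ J                                       # model resolution matrix
            C = sigma_d ** 2 * (G_dagger @ G_dagger.T)             # posterior model covariance
            param_error = np.sqrt(np.diag(C))  # how data noise maps into parameter error
            return R, param_error

    Columns of R can then be reshaped onto the imaging grid to visualize how horizontal and vertical resolution vary spatially, and param_error can be imaged in the same way.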

  14. Getting in touch--3D printing in forensic imaging.

    PubMed

    Ebert, Lars Chr; Thali, Michael J; Ross, Steffen

    2011-09-10

    With the increasing use of medical imaging in forensics, as well as the technological advances in rapid prototyping, we suggest combining these techniques to generate displays of forensic findings. We used computed tomography (CT), CT angiography, magnetic resonance imaging (MRI) and surface scanning with photogrammetry in conjunction with segmentation techniques to generate 3D polygon meshes. Based on these data sets, a 3D printer created colored models of the anatomical structures. Using this technique, we could create models of bone fractures, vessels, cardiac infarctions, ruptured organs as well as bitemark wounds. The final models are anatomically accurate, fully colored representations of bones, vessels and soft tissue, and they demonstrate radiologically visible pathologies. The models are more easily understood by laypersons than volume rendering or 2D reconstructions. Therefore, they are suitable for presentations in courtrooms and for educational purposes.

  15. Automated Recognition of 3D Features in GPIR Images

    NASA Technical Reports Server (NTRS)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature-extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a

  16. Discrete Method of Images for 3D Radio Propagation Modeling

    NASA Astrophysics Data System (ADS)

    Novak, Roman

    2016-09-01

    Discretization by rasterization is introduced into the method of images (MI) in the context of 3D deterministic radio propagation modeling as a way to exploit spatial coherence of electromagnetic propagation for fine-grained parallelism. Traditional algebraic treatment of bounding regions and surfaces is replaced by computer graphics rendering of 3D reflections and double refractions while building the image tree. The visibility of reception points and surfaces is also resolved by shader programs. The proposed rasterization is shown to be of comparable run time to that of the fundamentally parallel shooting and bouncing rays. The rasterization does not affect the signal evaluation backtracking step, thus preserving its advantage over the brute force ray-tracing methods in terms of accuracy. Moreover, the rendering resolution may be scaled back for a given level of scenario detail with only marginal impact on the image tree size. This allows selection of scene optimized execution parameters for faster execution, giving the method a competitive edge. The proposed variant of MI can be run on any GPU that supports real-time 3D graphics.

  17. 3D tongue motion from tagged and cine MR images.

    PubMed

    Xing, Fangxu; Woo, Jonghye; Murano, Emi Z; Lee, Junghoon; Stone, Maureen; Prince, Jerry L

    2013-01-01

    Understanding the deformation of the tongue during human speech is important for head and neck surgeons and speech and language scientists. Tagged magnetic resonance (MR) imaging can be used to image 2D motion, and data from multiple image planes can be combined via post-processing to yield estimates of 3D motion. However, lacking boundary information, this approach suffers from inaccurate estimates near the tongue surface. This paper describes a method that combines two sources of information to yield improved estimation of 3D tongue motion. The method uses the harmonic phase (HARP) algorithm to extract motion from tags and diffeomorphic demons to provide surface deformation. It then uses an incompressible deformation estimation algorithm to incorporate both sources of displacement information to form an estimate of the 3D whole tongue motion. Experimental results show that use of combined information improves motion estimation near the tongue surface, a problem that has previously been reported as problematic in HARP analysis, while preserving accurate internal motion estimates. Results on both normal and abnormal tongue motions are shown.

  18. Digital acquisition system for high-speed 3-D imaging

    NASA Astrophysics Data System (ADS)

    Yafuso, Eiji

    1997-11-01

    High-speed digital three-dimensional (3-D) imagery is possible using multiple independent charge-coupled device (CCD) cameras with sequentially triggered acquisition and individual field storage capability. The system described here utilizes sixteen independent cameras, providing versatility in configuration and image acquisition. By aligning the cameras in nearly coincident lines-of-sight, a sixteen-frame two-dimensional (2-D) sequence can be captured. The delays can be individually adjusted to yield a greater number of acquired frames during the more rapid segments of the event. Additionally, individual integration periods may be adjusted to ensure adequate radiometric response while minimizing image blur. An alternative alignment and triggering scheme arranges the cameras into two angularly separated banks of eight cameras each. By simultaneously triggering correlated stereo pairs, an eight-frame sequence of stereo images may be captured. In the first alignment scheme the camera lines-of-sight cannot be made precisely coincident. Thus representation of the data as a monocular sequence introduces the issue of independent camera coordinate registration with the real scene. This issue arises more significantly using the stereo pair method to reconstruct quantitative 3-D spatial information of the event as a function of time. The principal development here will be the derivation and evaluation of a solution transform and its inverse for the digital data which will yield a 3-D spatial mapping as a function of time.

  19. Automated Identification of Fiducial Points on 3D Torso Images

    PubMed Central

    Kawale, Manas M; Reece, Gregory P; Crosby, Melissa A; Beahm, Elisabeth K; Fingeret, Michelle C; Markey, Mia K; Merchant, Fatima A

    2013-01-01

    Breast reconstruction is an important part of the breast cancer treatment process for many women. Recently, 2D and 3D images have been used by plastic surgeons for evaluating surgical outcomes. Distances between different fiducial points are frequently used as quantitative measures for characterizing breast morphology. Fiducial points can be directly marked on subjects for direct anthropometry, or can be manually marked on images. This paper introduces novel algorithms to automate the identification of fiducial points in 3D images. Automating the process will make measurements of breast morphology more reliable, reducing the inter- and intra-observer bias. Algorithms to identify three fiducial points, the nipples, sternal notch, and umbilicus, are described. The algorithms used for localization of these fiducial points are formulated using a combination of surface curvature and 2D color information. Comparison of the 3D co-ordinates of automatically detected fiducial points and those identified manually, and geodesic distances between the fiducial points are used to validate algorithm performance. The algorithms reliably identified the location of all three of the fiducial points. We dedicate this article to our late colleague and friend, Dr. Elisabeth K. Beahm. Elisabeth was both a talented plastic surgeon and physician-scientist; we deeply miss her insight and her fellowship. PMID:25288903

  20. Target penetration of laser-based 3D imaging systems

    NASA Astrophysics Data System (ADS)

    Cheok, Geraldine S.; Saidi, Kamel S.; Franaszek, Marek

    2009-01-01

    The ASTM E57.02 Test Methods Subcommittee is developing a test method to evaluate the ranging performance of a 3D imaging system. The test method will involve either measuring the distance between two targets or between an instrument and a target. The first option is necessary because some instruments cannot be centered over a point and will require registration of the instrument coordinate frame into the target coordinate frame. The disadvantage of this option is that registration will introduce an additional error into the measurements. The advantage of this option is that this type of measurement, relative measurement, is what is typically used in field applications. A potential target geometry suggested for the test method is a planar target. The ideal target material would be diffuse, have uniform reflectivity for wavelengths between 500 nm and 1600 nm (wavelengths of most commercially-available 3D imaging systems), and have minimal or no penetration of the laser into the material. A possible candidate material for the target is Spectralon. However, several users have found that there is some penetration into the Spectralon by a laser, and this is confirmed by the material manufacturer. The effect of this penetration on the range measurement is unknown. This paper will present an attempt to quantify the laser penetration depth into the Spectralon material for four 3D imaging systems.

  1. Joint calibration of 3D resist image and CDSEM

    NASA Astrophysics Data System (ADS)

    Chou, C. S.; He, Y. Y.; Tang, Y. P.; Chang, Y. T.; Huang, W. C.; Liu, R. G.; Gau, T. S.

    2013-04-01

    Traditionally, an optical proximity correction model is to evaluate the resist image at a specific depth within the photoresist and then extract the resist contours from the image. Calibration is generally implemented by comparing resist contours with the critical dimensions (CD). The wafer CD is usually collected by a scanning electron microscope (SEM), which evaluates the CD based on some criterion that is a function of gray level, differential signal, threshold or other parameters set by the SEM. However, the criterion does not reveal which depth the CD is obtained at. This depth inconsistency between modeling and SEM makes the model calibration difficult for low k1 images. In this paper, the vertical resist profile is obtained by modifying the model from planar (2D) to quasi-3D approach and comparing the CD from this new model with SEM CD. For this quasi-3D model, the photoresist diffusion along the depth of the resist is considered and the 3D photoresist contours are evaluated. The performance of this new model is studied and is better than the 2D model.

  2. Validation of image processing tools for 3-D fluorescence microscopy.

    PubMed

    Dieterlen, Alain; Xu, Chengqi; Gramain, Marie-Pierre; Haeberlé, Olivier; Colicchio, Bruno; Cudel, Christophe; Jacquey, Serge; Ginglinger, Emanuelle; Jung, Georges; Jeandidier, Eric

    2002-04-01

    3-D optical fluorescence microscopy has nowadays become an efficient tool for the volumetric investigation of living biological samples. Using the optical sectioning technique, a stack of 2-D images is obtained. However, due to the nature of the system optical transfer function and non-optimal experimental conditions, acquired raw data usually suffer from some distortions. In order to carry out biological analysis, raw data have to be restored by deconvolution. System identification by the point-spread function provides knowledge of the actual system and experimental parameters, which is necessary to restore the raw data; it is furthermore helpful for refining the experimental protocol. In order to facilitate the use of image processing techniques, a multi-platform-compatible software package called VIEW3D has been developed. It integrates a set of tools for the analysis of fluorescence images from 3-D wide-field or confocal microscopy. A number of regularisation parameters for data restoration are determined automatically. Common geometrical measurements and morphological descriptors of fluorescent sites are also implemented to facilitate the characterisation of biological samples. An example of this method concerning cytogenetics is presented.
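
    As a hedged illustration of PSF-based restoration of an optical-sectioning stack (not VIEW3D's own regularized algorithm), a plain Richardson-Lucy deconvolution can be written with NumPy and SciPy; the array names and iteration count are assumptions:

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy_3d(stack, psf, n_iters=25):
            """Richardson-Lucy deconvolution of a 3-D fluorescence stack with a known PSF."""
            psf = psf / psf.sum()
            psf_mirror = psf[::-1, ::-1, ::-1]
            estimate = np.full_like(stack, stack.mean(), dtype=float)
            for _ in range(n_iters):
                blurred = fftconvolve(estimate, psf, mode='same') + 1e-12
                estimate *= fftconvolve(stack / blurred, psf_mirror, mode='same')
            return estimate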

  3. Integral imaging based 3D display of holographic data.

    PubMed

    Yöntem, Ali Özgür; Onural, Levent

    2012-10-22

    We propose a method and present applications of this method that converts a diffraction pattern into an elemental image set in order to display them on an integral imaging based display setup. We generate elemental images based on diffraction calculations as an alternative to commonly used ray tracing methods. Ray tracing methods do not accommodate the interference and diffraction phenomena. Our proposed method enables us to obtain elemental images from a holographic recording of a 3D object/scene. The diffraction pattern can be either numerically generated data or digitally acquired optical data. The method shows the connection between a hologram (diffraction pattern) and an elemental image set of the same 3D object. We showed three examples, one of which is the digitally captured optical diffraction tomography data of an epithelium cell. We obtained optical reconstructions with our integral imaging display setup where we used a digital lenslet array. We also obtained numerical reconstructions, again by using the diffraction calculations, for comparison. The digital and optical reconstruction results are in good agreement.

  4. Automated spatial alignment of 3D torso images.

    PubMed

    Bose, Arijit; Shah, Shishir K; Reece, Gregory P; Crosby, Melissa A; Beahm, Elisabeth K; Fingeret, Michelle C; Markey, Mia K; Merchant, Fatima A

    2011-01-01

    This paper describes an algorithm for automated spatial alignment of three-dimensional (3D) surface images in order to achieve a pre-defined orientation. Surface images of the torso are acquired from breast cancer patients undergoing reconstructive surgery to facilitate objective evaluation of breast morphology pre-operatively (for treatment planning) and/or post-operatively (for outcome assessment). Based on the viewing angle of the multiple cameras used for stereophotography, the orientation of the acquired torso in the images may vary from the normal upright position. Consequently, when translating this data into a standard 3D framework for visualization and analysis, the co-ordinate geometry differs from the upright position making robust and standardized comparison of images impractical. Moreover, manual manipulation and navigation of images to the desired upright position is subject to user bias. Automating the process of alignment and orientation removes operator bias and permits robust and repeatable adjustment of surface images to a pre-defined or desired spatial geometry.
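
    The paper's own alignment algorithm is not reproduced here; one common, fully automatic way to remove orientation bias is to rotate the surface point cloud so that its principal axes coincide with the coordinate axes, as sketched below (the choice of a PCA-based rotation and all names are assumptions for illustration):

        import numpy as np

        def align_to_principal_axes(points):
            """Rotate an (N, 3) surface point cloud into its principal-axis frame."""
            centered = points - points.mean(axis=0)
            _, _, vt = np.linalg.svd(centered, full_matrices=False)  # rows of vt = principal axes
            R = vt.copy()
            if np.linalg.det(R) < 0:  # keep a right-handed (proper) rotation
                R[2] *= -1
            return centered @ R.T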

  5. Fast 3D fluid registration of brain magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Leporé, Natasha; Chou, Yi-Yu; Lopez, Oscar L.; Aizenstein, Howard J.; Becker, James T.; Toga, Arthur W.; Thompson, Paul M.

    2008-03-01

    Fluid registration is widely used in medical imaging to track anatomical changes, to correct image distortions, and to integrate multi-modality data. Fluid mappings guarantee that the template image deforms smoothly into the target, without tearing or folding, even when large deformations are required for accurate matching. Here we implemented an intensity-based fluid registration algorithm, accelerated by using a filter designed by Bro-Nielsen and Gramkow. We validated the algorithm on 2D and 3D geometric phantoms using the mean square difference between the final registered image and target as a measure of the accuracy of the registration. In tests on phantom images with different levels of overlap, varying amounts of Gaussian noise, and different intensity gradients, the fluid method outperformed a more commonly used elastic registration method, both in terms of accuracy and in avoiding topological errors during deformation. We also studied the effect of varying the viscosity coefficients in the viscous fluid equation, to optimize registration accuracy. Finally, we applied the fluid registration algorithm to a dataset of 2D binary corpus callosum images and 3D volumetric brain MRIs from 14 healthy individuals to assess its accuracy and robustness.

  6. Modelling of image-catheter motion for 3-D IVUS.

    PubMed

    Rosales, Misael; Radeva, Petia; Rodriguez-Leor, Oriol; Gil, Debora

    2009-02-01

    Three-dimensional intravascular ultrasound (IVUS) allows visualization and volumetric measurement of coronary lesions through an exploration of the cross sections and longitudinal views of arteries. However, the visualization and subsequent morpho-geometric measurements in IVUS longitudinal cuts are subject to distortion caused by periodic image/vessel motion around the IVUS catheter. Usually, ECG-gating and image-gated approaches are proposed to overcome the image motion artifact, which slows the pullback acquisition or disregards part of the IVUS data. In this paper, we argue that the image motion is due to 3-D vessel geometry as well as cardiac dynamics, and propose a dynamic model based on the tracking of an elliptical vessel approximation to recover the rigid transformation and align IVUS images without losing any IVUS data. We report an extensive validation with synthetic simulated data and in vivo IVUS sequences of 30 patients, achieving an average reduction of the image artifact of 97% in synthetic data and 79% in real data. Our study shows that IVUS alignment improves longitudinal analysis of the IVUS data and is a necessary step towards accurate reconstruction and volumetric measurements of 3-D IVUS.

  7. Objective breast symmetry evaluation using 3-D surface imaging.

    PubMed

    Eder, Maximilian; Waldenfels, Fee V; Swobodnik, Alexandra; Klöppel, Markus; Pape, Ann-Kathrin; Schuster, Tibor; Raith, Stefan; Kitzler, Elena; Papadopulos, Nikolaos A; Machens, Hans-Günther; Kovacs, Laszlo

    2012-04-01

    This study develops an objective breast symmetry evaluation using 3-D surface imaging (Konica-Minolta V910(®) scanner) by superimposing the mirrored left breast over the right and objectively determining the mean 3-D contour difference between the 2 breast surfaces. 3 observers analyzed the evaluation protocol precision using 2 dummy models (n = 60), 10 test subjects (n = 300), clinically tested it on 30 patients (n = 900) and compared it to established 2-D measurements on 23 breast reconstructive patients using the BCCT.core software (n = 690). Mean 3-D evaluation precision, expressed as the coefficient of variation (VC), was 3.54 ± 0.18 for all human subjects without significant intra- and inter-observer differences (p > 0.05). The 3-D breast symmetry evaluation is observer independent, significantly more precise (p < 0.001) than the BCCT.core software (VC = 6.92 ± 0.88) and may play a part in an objective surgical outcome analysis after incorporation into clinical practice.
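
    A simplified sketch of the superimposition idea, assuming the two breast surfaces are given as point clouds in a common coordinate frame with the sagittal mirror plane at x = 0 (the clinical protocol and the scanner software involve registration steps not shown here; names are illustrative):

        import numpy as np
        from scipy.spatial import cKDTree

        def mean_contour_difference(right_pts, left_pts, sagittal_x=0.0):
            """Mirror the left surface across the sagittal plane and compute the
            mean 3-D distance to the right surface."""
            mirrored = left_pts.copy()
            mirrored[:, 0] = 2.0 * sagittal_x - mirrored[:, 0]  # reflect x across the plane
            distances, _ = cKDTree(right_pts).query(mirrored)   # nearest-neighbour distances
            return distances.mean()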

  8. A hybrid framework for 3D medical image segmentation.

    PubMed

    Chen, Ting; Metaxas, Dimitris

    2005-12-01

    In this paper we propose a novel hybrid 3D segmentation framework which combines Gibbs models, marching cubes and deformable models. In the framework, first we construct a new Gibbs model whose energy function is defined on a high order clique system. The new model includes both region and boundary information during segmentation. Next we improve the original marching cubes method to construct 3D meshes from Gibbs models' output. The 3D mesh serves as the initial geometry of the deformable model. Then we deform the deformable model using external image forces so that the model converges to the object surface. We run the Gibbs model and the deformable model recursively by updating the Gibbs model's parameters using the region and boundary information in the deformable model segmentation result. In our approach, the hybrid combination of region-based methods and boundary-based methods results in improved segmentations of complex structures. The benefit of the methodology is that it produces high quality segmentations of 3D structures using little prior information and minimal user intervention. The modules in this segmentation methodology are developed within the context of the Insight ToolKit (ITK). We present experimental segmentation results of brain tumors and evaluate our method by comparing experimental results with expert manual segmentations. The evaluation results show that the methodology achieves high quality segmentation results with computational efficiency. We also present segmentation results of other clinical objects to illustrate the strength of the methodology as a generic segmentation framework.

  9. Pavement cracking measurements using 3D laser-scan images

    NASA Astrophysics Data System (ADS)

    Ouyang, W.; Xu, B.

    2013-10-01

    Pavement condition surveying is vital for pavement maintenance programs that ensure ride quality and traffic safety. This paper first introduces an automated pavement inspection system which uses a three-dimensional (3D) camera and a structured laser light to acquire dense transverse profiles of a pavement lane surface when it carries a moving vehicle. After the calibration, the 3D system can yield a depth resolution of 0.5 mm and a transverse resolution of 1.56 mm per pixel at 1.4 m camera height from the ground. The scanning rate of the camera can be set to its maximum of 5000 lines per second, allowing the density of scanned profiles to vary with the vehicle's speed. The paper then illustrates the algorithms that utilize 3D information to detect pavement distress, such as transverse, longitudinal and alligator cracking, and presents the field tests on the system's repeatability when scanning a sample pavement in multiple runs at the same vehicle speed, at different vehicle speeds and under different weather conditions. The results show that this dedicated 3D system can capture accurate pavement images that detail surface distress, and obtain consistent crack measurements in repeated tests and under different driving and lighting conditions.

  10. Triangulation Based 3D Laser Imaging for Fracture Orientation Analysis

    NASA Astrophysics Data System (ADS)

    Mah, J.; Claire, S.; Steve, M.

    2009-05-01

    Laser imaging has recently been identified as a potential tool for rock mass characterization. This contribution focuses on the application of triangulation based, short-range laser imaging to determine fracture orientation and surface texture. This technology measures the distance to the target by triangulating the projected and reflected laser beams, and also records the reflection intensity. In this study, we acquired 3D laser images of rock faces using the Laser Camera System (LCS), a portable instrument developed by Neptec Design Group (Ottawa, Canada). The LCS uses an infrared laser beam and is immune to the lighting conditions. The maximum image resolution is 1024 x 1024 volumetric image elements. Depth resolution is 0.5 mm at 5 m. An above ground field trial was conducted at a blocky road cut with well defined joint sets (Kingston, Ontario). An underground field trial was conducted at the Inco 175 Ore body (Sudbury, Ontario) where images were acquired in the dark and the joint set features were more subtle. At each site, from a distance of 3 m away from the rock face, a grid of six images (approximately 1.6 m by 1.6 m) was acquired at maximum resolution with 20% overlap between adjacent images. This corresponds to a density of 40 image elements per square centimeter. Polyworks, a high density 3D visualization software tool, was used to align and merge the images into a single digital triangular mesh. The conventional method of determining fracture orientations is by manual measurement using a compass. In order to be accepted as a substitute for this method, the LCS should be capable of performing at least to the capabilities of manual measurements. To compare fracture orientation estimates derived from the 3D laser images to manual measurements, 160 inclinometer readings were taken at the above ground site. Three prominent joint sets (strike/dip: 236/09, 321/89, 325/01) were identified by plotting the joint poles on a stereonet. Underground, two main joint

  11. 3D imaging of soil pore network: two different approaches

    NASA Astrophysics Data System (ADS)

    Matrecano, M.; Di Matteo, B.; Mele, G.; Terribile, F.

    2009-04-01

    Pore geometry imaging and its quantitative description are key factors for advancing the knowledge of physical, chemical and biological soil processes. For many years, photographs of flattened surfaces of undisturbed soil samples impregnated with fluorescent resin, and of soil thin sections under the microscope, were the only means available for exploring pore architecture at different scales. Earlier 3D representations of the internal structure of the soil based on non-destructive methods were obtained using medical tomographic systems (NMR and X-ray CT). However, images provided by such equipment show strong limitations in terms of spatial resolution. In the last decade, very good results were obtained using imaging from very expensive systems based on synchrotron radiation. More recently, X-ray micro-tomography has become the most widely applied technique, offering the best compromise between cost, resolution and image size. Conversely, the conceptually simpler but destructive method of "serial sectioning" has been progressively neglected because of technical problems in sample preparation and the time needed to obtain an adequate number of serial sections for correct 3D reconstruction of soil pore geometry. In this work, the two methods above are compared in order to define their advantages and shortcomings and to point out their different potential. A cylindrical undisturbed soil sample, 6.5 cm in diameter and 6.5 cm in height, from an Ap horizon of an alluvial soil showing vertic characteristics, was reconstructed using both a desktop X-ray micro-tomograph (Skyscan 1172) and the new automatic serial sectioning system SSAT (Sequential Section Automatic Tomography) set up at CNR ISAFOM in Ercolano (Italy), with the aim of overcoming most of the typical limitations of this technique. The best image resolution was 7.5 µm per voxel using X-ray micro-CT, while 20 µm was the best value using the serial sectioning

  12. Virtual image display as a backlight for 3D.

    PubMed

    Travis, Adrian; MacCrann, Niall; Emerton, Neil; Kollin, Joel; Georgiou, Andreas; Lanier, Jaron; Bathiche, Stephen

    2013-07-29

    We describe a device which has the potential to be used both as a virtual image display and as a backlight. The pupil of the emitted light fills the device approximately to its periphery and the collimated emission can be scanned both horizontally and vertically in the manner needed to illuminate an eye in any position. The aim is to reduce the power needed to illuminate a liquid crystal panel but also to enable a smooth transition from 3D to a virtual image as the user nears the screen.

  13. 3D CARS image reconstruction and pattern recognition on SHG images

    NASA Astrophysics Data System (ADS)

    Medyukhina, Anna; Vogler, Nadine; Latka, Ines; Dietzek, Benjamin; Cicchi, Riccardo; Pavone, Francesco S.; Popp, Jürgen

    2012-06-01

    Nonlinear optical imaging techniques based e.g. on coherent anti-Stokes Raman scattering (CARS) or second-harmonic generation (SHG) show great potential for in-vivo investigations of tissue. While the microspectroscopic imaging tools are established, automated data evaluation of nonlinear optical images, i.e. image pattern recognition and automated image classification, still bears great potential for future developments towards objective clinical diagnosis. This contribution details the capability of nonlinear microscopy for both 3D visualization of human tissues and automated discrimination between healthy and diseased patterns using ex-vivo human skin samples. By means of CARS image alignment we show how to obtain a quasi-3D model of a skin biopsy, which allows us to trace the tissue structure in different projections. Furthermore, the potential of automated pattern and organization recognition to distinguish between healthy and keloidal skin tissue is discussed. A first classification algorithm employs the intrinsic geometrical features of collagen, which can be efficiently visualized by SHG microscopy. The shape of the collagen pattern allows conclusions about the physiological state of the skin, as the typical wavy collagen structure of healthy skin is disturbed e.g. in keloid formation. Based on the different collagen patterns a quantitative score characterizing the collagen waviness - and hence reflecting the physiological state of the tissue - is obtained. Further, two additional scoring methods for collagen organization, respectively based on a statistical analysis of the mutual organization of fibers and on FFT, are presented.

  14. Stereotactic mammography imaging combined with 3D US imaging for image guided breast biopsy

    SciTech Connect

    Surry, K. J. M.; Mills, G. R.; Bevan, K.; Downey, D. B.; Fenster, A.

    2007-11-15

    Stereotactic X-ray mammography (SM) and ultrasound (US) guidance are both commonly used for breast biopsy. While SM provides three-dimensional (3D) targeting information and US provides real-time guidance, both have limitations. SM is a long and uncomfortable procedure and the US guided procedure is inherently two dimensional (2D), requiring a skilled physician for both safety and accuracy. The authors developed a 3D US-guided biopsy system to be integrated with, and to supplement SM imaging. Their goal is to be able to biopsy a larger percentage of suspicious masses using US, by clarifying ambiguous structures with SM imaging. Features from SM and US guided biopsy were combined, including breast stabilization, a confined needle trajectory, and dual modality imaging. The 3D US guided biopsy system uses a 7.5 MHz breast probe and is mounted on an upright SM machine for preprocedural imaging. Intraprocedural targeting and guidance was achieved with real-time 2D and near real-time 3D US imaging. Postbiopsy 3D US imaging allowed for confirmation that the needle was penetrating the target. The authors evaluated 3D US-guided biopsy accuracy of their system using test phantoms. To use mammographic imaging information, they registered the SM and 3D US coordinate systems. The 3D positions of targets identified in the SM images were determined with a target localization error (TLE) of 0.49 mm. The z component (x-ray tube to image) of the TLE dominated with a TLE_z of 0.47 mm. The SM system was then registered to 3D US, with a fiducial registration error (FRE) and target registration error (TRE) of 0.82 and 0.92 mm, respectively. Analysis of the FRE and TRE components showed that these errors were dominated by inaccuracies in the z component with a FRE_z of 0.76 mm and a TRE_z of 0.85 mm. A stereotactic mammography and 3D US guided breast biopsy system should include breast compression for stability and safety and dual modality imaging for target localization

  15. Automatic structural matching of 3D image data

    NASA Astrophysics Data System (ADS)

    Ponomarev, Svjatoslav; Lutsiv, Vadim; Malyshev, Igor

    2015-10-01

    A new image matching technique is described. It is implemented as an object-independent hierarchical structural juxtaposition algorithm based on an alphabet of simple object-independent contour structural elements. The structural matching applied implements an optimized method of walking through a truncated tree of all possible juxtapositions of two sets of structural elements. The algorithm was initially developed for dealing with 2D images such as the aerospace photographs, and it turned out to be sufficiently robust and reliable for matching successfully the pictures of natural landscapes taken in differing seasons from differing aspect angles by differing sensors (the visible optical, IR, and SAR pictures, as well as the depth maps and geographical vector-type maps). At present (in the reported version), the algorithm is enhanced based on additional use of information on third spatial coordinates of observed points of object surfaces. Thus, it is now capable of matching the images of 3D scenes in the tasks of automatic navigation of extremely low flying unmanned vehicles or autonomous terrestrial robots. The basic principles of 3D structural description and matching of images are described, and the examples of image matching are presented.

  16. Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering

    NASA Astrophysics Data System (ADS)

    Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.

    2016-06-01

    This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combination of effects from the manual editing of images' radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5 m, 10 m and 14 m, respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results from the application of image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of images, followed by the creation of three improved models, and c) the assessment of results by comparing the visual and the geometric quality of the improved models versus the reference one. Finally, the selected technique is tested on two other data sets in order to examine its appropriateness for different depths (at 10 m and 14 m) and different objects (part of a wreck and a small boat's wreck) in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.

  17. Quantification of thyroid volume using 3-D ultrasound imaging.

    PubMed

    Kollorz, E K; Hahn, D A; Linke, R; Goecke, T W; Hornegger, J; Kuwert, T

    2008-04-01

    Ultrasound (US) is among the most popular diagnostic techniques today. It is non-invasive, fast, comparably cheap, and does not require ionizing radiation. US is commonly used to examine the size and structure of the thyroid gland. In clinical routine, thyroid imaging is usually performed by means of 2-D US. Conventional approaches for measuring the volume of the thyroid gland or its nodules may therefore be inaccurate due to the lack of 3-D information. This work reports a semi-automatic segmentation approach for the classification and analysis of the thyroid gland based on 3-D US data. The images are scanned in 3-D, pre-processed, and segmented. Several pre-processing methods and an extension of a commonly used geodesic active contour level set formulation are discussed in detail. The results obtained by this approach are compared to manual interactive segmentations by a medical expert in five representative patients. Our work proposes a novel framework for the volumetric quantification of thyroid gland lobes, which may also be expanded to other parenchymatous organs.

  18. 3D imaging of biological specimen using MS.

    PubMed

    Fletcher, John S

    2015-01-01

    Imaging MS can provide unique information about the distribution of native and non-native compounds in biological specimen. MALDI MS and secondary ion MS are the two most commonly applied imaging MS techniques and can provide complementary information about a sample. MALDI offers access to high mass species such as proteins while secondary ion MS can operate at higher spatial resolution and provide information about lower mass species including elemental signals. Imaging MS is not limited to two dimensions and different approaches have been developed that allow 3D molecular images to be generated of chemicals in whole organs down to single cells. Resolution in the z-dimension is often higher than in x and y, so such analysis offers the potential for probing the distribution of drug molecules and studying drug action by MS with a much higher precision - possibly even organelle level.

  19. 3D Lunar Terrain Reconstruction from Apollo Images

    NASA Technical Reports Server (NTRS)

    Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.

    2009-01-01

    Generating accurate three-dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high resolution scanned images from the Apollo 15 mission.

  20. 3D super-resolution imaging with blinking quantum dots

    PubMed Central

    Wang, Yong; Fruhwirth, Gilbert; Cai, En; Ng, Tony; Selvin, Paul R.

    2013-01-01

    Quantum dots are promising candidates for single molecule imaging due to their exceptional photophysical properties, including their intense brightness and resistance to photobleaching. They are also notorious for their blinking. Here we report a novel way to take advantage of quantum dot blinking to develop an imaging technique in three-dimensions with nanometric resolution. We first applied this method to simulated images of quantum dots, and then to quantum dots immobilized on microspheres. We achieved imaging resolutions (FWHM) of 8–17 nm in the x-y plane and 58 nm (on coverslip) or 81 nm (deep in solution) in the z-direction, approximately 3–7 times better than what has been achieved previously with quantum dots. This approach was applied to resolve the 3D distribution of epidermal growth factor receptor (EGFR) molecules at, and inside of, the plasma membrane of resting basal breast cancer cells. PMID:24093439

  1. Phase Sensitive Cueing for 3D Objects in Overhead Images

    SciTech Connect

    Paglieroni, D W; Eppler, W G; Poland, D N

    2005-02-18

    A 3D solid model-aided object cueing method that matches phase angles of directional derivative vectors at image pixels to phase angles of vectors normal to projected model edges is described. It is intended for finding specific types of objects at arbitrary position and orientation in overhead images, independent of spatial resolution, obliqueness, acquisition conditions, and type of imaging sensor. It is shown that the phase similarity measure can be efficiently evaluated over all combinations of model position and orientation using the FFT. The highest degree of similarity over all model orientations is captured in a match surface of similarity values vs. model position. Unambiguous peaks in this surface are sorted in descending order of similarity value, and the small image thumbnails that contain them are presented to human analysts for inspection in sorted order.
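
    A hedged sketch of evaluating a phase-similarity score over all model positions with the FFT, covering translation only (the orientation loop and the paper's exact similarity measure are omitted; names are illustrative):

        import numpy as np

        def fft_match_surface(image_phase, model_phase, model_support):
            """Match surface (similarity vs. model position) via circular cross-correlation.

            image_phase   : 2D array of directional-derivative phase angles (radians)
            model_phase   : 2D array of projected model edge-normal angles, zero-padded to image size
            model_support : binary mask marking pixels covered by projected model edges
            """
            img_vec = np.exp(1j * image_phase)
            mod_vec = model_support * np.exp(1j * model_phase)
            corr = np.fft.ifft2(np.fft.fft2(img_vec) * np.conj(np.fft.fft2(mod_vec)))
            return np.real(corr)  # sums cos(phase difference) over the model footprint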

  2. Registration of real-time 3-D ultrasound images of the heart for novel 3-D stress echocardiography.

    PubMed

    Shekhar, Raj; Zagrodsky, Vladimir; Garcia, Mario J; Thomas, James D

    2004-09-01

    Stress echocardiography is a routinely used clinical procedure to diagnose cardiac dysfunction by comparing wall motion information in prestress and poststress ultrasound images. Incomplete data, complicated imaging protocols and misaligned prestress and poststress views, however, are known limitations of conventional stress echocardiography. We discuss how the first two limitations are overcome via the use of real-time three-dimensional (3-D) ultrasound imaging, an emerging modality, and have called the new procedure "3-D stress echocardiography." We also show that the problem of misaligned views can be solved by registration of prestress and poststress 3-D image sequences. Such images are misaligned because of variations in placing the ultrasound transducer and stress-induced anatomical changes. We have developed a technique to temporally align 3-D images of the two sequences first and then to spatially register them to rectify probe placement error while preserving the stress-induced changes. The 3-D spatial registration is mutual information-based. Image registration used in conjunction with 3-D stress echocardiography can potentially improve the diagnostic accuracy of stress testing.
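
    The mutual-information similarity that drives such a registration can be computed from the joint intensity histogram of the overlapping voxels; a minimal NumPy sketch (the bin count and names are assumptions, and the optimization over spatial transforms is omitted):

        import numpy as np

        def mutual_information(a, b, bins=64):
            """Mutual information between two same-shape image volumes."""
            hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            pxy = hist / hist.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))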

  3. Diagnosis and control of 3D elastic mechanical structures

    NASA Astrophysics Data System (ADS)

    Krajcin, Idriz; Soeffker, Dirk

    2005-05-01

    In this paper, a model-based approach for fault detection and vibration control of flexible structures is proposed and applied to 3D-structures. Faults like cracks or impacts acting on a flexible structure are considered as unknown inputs acting on the structure. The Proportional-Integral-Observer (PI-Observer) is used to estimate the system states as well as unknown inputs acting on a system. Also the effects of structural changes are understood as external effects (related to the unchanged structure) and are considered as fictitious external forces or moments. The paper deals with the design of the PI-Observer for practical applications when measurement noise and model uncertainties are present and shows its performance in experimental results. As examples, impacts acting upon a one side clamped elastic beam and on a thin plate structure are estimated using displacement or strain measurements. To control the vibration of the flexible plate, two piezoelectric patches bonded on the structure are used as actuators. The control algorithm introduced in this contribution contains a state feedback control and additionally a disturbance rejection. The disturbances are estimated using the PI-Observer. Experimental results show the performance and the robustness properties of the control strategy for the vibration control of a very thin plate.
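
    For reference, a commonly cited textbook form of the PI-Observer augments a Luenberger observer with an integral estimate of the unknown input; the symbols below are generic (state x, input u, measurement y, unknown input f with distribution matrix N, observer gains L_1 and L_2) and are not taken from the paper:

        \dot{\hat{x}} = A\hat{x} + Bu + N\hat{f} + L_1\,(y - C\hat{x})
        \dot{\hat{f}} = L_2\,(y - C\hat{x})

    The estimate \hat{f} then plays the role of the fictitious external forces or moments used to represent faults or structural changes.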

  4. Clinical Application of 3D-FIESTA Image in Patients with Unilateral Inner Ear Symptom

    PubMed Central

    Oh, Jae Ho; Chung, Jae Ho; Min, Hyun Jung; Cho, Seok Hyun; Park, Chul Won

    2013-01-01

    Background and Objectives Unilateral auditory dysfunction such as tinnitus and hearing loss could be a warning sign of a retrocochlear lesion. Auditory brainstem response (ABR) testing and internal auditory canal magnetic resonance imaging (MRI) are suggested as diagnostic tools for retrocochlear lesions. However, the high cost of MRI and the low sensitivity of the ABR test could be an obstacle when assessing patients with unilateral ear symptoms. The purpose of this study was to introduce the clinical usefulness of three-dimensional fast imaging employing steady-state acquisition (3D-FIESTA) MRI in patients with unilateral ear symptoms. Subjects and Methods Two hundred and fifty-three patients with unilateral tinnitus or unilateral hearing loss who underwent 3D-FIESTA temporal bone MRI as a screening test were enrolled. We reviewed the abnormal findings in the 3D-FIESTA images and ear symptoms using the medical records. Results In patients with unilateral ear symptoms, 51.0% of the patients had tinnitus and 32.8% were assessed to have sudden sensorineural hearing loss. With 3D-FIESTA imaging, twelve patients were diagnosed with acoustic neuroma, four with enlarged vestibular aqueduct syndrome, and two with posterior inferior cerebellar artery aneurysm. Inner ear anomalies and vestibulocochlear nerve aplasia could be diagnosed with 3D-FIESTA imaging. Conclusions 3D-FIESTA imaging is a highly sensitive method for the diagnosis of cochlear or retrocochlear lesions. 3D-FIESTA imaging is a useful screening tool for patients with unilateral ear symptoms. PMID:24653918

  5. 3D endoscopic imaging using structured illumination technique (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Le, Hanh N. D.; Nguyen, Hieu; Wang, Zhaoyang; Kang, Jin U.

    2017-02-01

    Surgeons have been relying increasingly on minimally invasive surgical guidance techniques, not only to reduce surgical trauma but also to achieve accurate and objective surgical risk evaluations. A typical minimally invasive surgical guidance system provides visual assistance in the two-dimensional anatomy and pathology of internal organs within a limited field of view. In this work, we propose and implement a structured illumination endoscope that provides simple, inexpensive 3D endoscopic imaging and produces high-resolution 3D imagery for use in a surgical guidance system. The system is calibrated and validated for quantitative depth measurement on both a calibrated target and a human subject. It exhibits a depth of field of 20 mm, a depth resolution of 0.2 mm and a relative accuracy of 0.1%. The demonstrated setup affirms the feasibility of using the structured illumination endoscope for depth quantification and for assisting medical diagnostic assessments.

  6. The 3D model control of image processing

    NASA Technical Reports Server (NTRS)

    Nguyen, An H.; Stark, Lawrence

    1989-01-01

    Telerobotics studies remote control of distant robots by a human operator using supervisory or direct control. Even if the robot manipulator has vision or other senses, problems arise involving control, communications, and delay. The communication delays that may be expected with telerobots working in space stations while being controlled from an Earth lab have led to a number of experiments attempting to circumvent the problem. This delay in communication is a main motivating factor in moving from well understood instantaneous hands-on manual control to less well understood supervisory control; the ultimate step would be the realization of a fully autonomous robot. The 3-D model control plays a crucial role in resolving many conflicting image processing problems that are inherent in the bottom-up approach of most current machine vision processes. The 3-D model control approach is also capable of providing the necessary visual feedback information for both the control algorithms and the human operator.

  7. Evaluation of Kinect 3D Sensor for Healthcare Imaging.

    PubMed

    Pöhlmann, Stefanie T L; Harkness, Elaine F; Taylor, Christopher J; Astley, Susan M

    2016-01-01

    Microsoft Kinect is a three-dimensional (3D) sensor originally designed for gaming that has received growing interest as a cost-effective and safe device for healthcare imaging. Recent applications of Kinect in health monitoring, screening, rehabilitation, assistance systems, and intervention support are reviewed here. The suitability of available technologies for healthcare imaging applications is assessed. The performance of Kinect I, based on structured light technology, is compared with that of the more recent Kinect II, which uses time-of-flight measurement, under conditions relevant to healthcare applications. The accuracy, precision, and resolution of 3D images generated with Kinect I and Kinect II are evaluated using flat cardboard models representing different skin colors (pale, medium, and dark) at distances ranging from 0.5 to 1.2 m and measurement angles of up to 75°. Both sensors demonstrated high accuracy (majority of measurements <2 mm) and precision (mean point to plane error <2 mm) at an average resolution of at least 390 points per cm². Kinect I is capable of imaging at shorter measurement distances, but Kinect II enables structures angled at over 60° to be evaluated. Kinect II showed significantly higher precision and Kinect I showed significantly higher resolution (both p < 0.001). The choice of object color can influence measurement range and precision. Although Kinect is not a medical imaging device, both sensor generations show performance adequate for a range of healthcare imaging applications. Kinect I is more appropriate for short-range imaging and Kinect II is more appropriate for imaging highly curved surfaces such as the face or breast.
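
    The precision figure quoted above (mean point-to-plane error) can in principle be reproduced by fitting a least-squares plane to the points returned for the flat target and averaging the orthogonal residuals; a minimal sketch with illustrative names:

        import numpy as np

        def point_to_plane_error(points):
            """Mean absolute distance of (N, 3) sensor points to their best-fit plane."""
            centered = points - points.mean(axis=0)
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            normal = vt[-1]  # direction of least variance = plane normal
            return np.mean(np.abs(centered @ normal))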

  8. 3D Imaging of the OH mesospheric emissive layer

    NASA Astrophysics Data System (ADS)

    Kouahla, M. N.; Moreels, G.; Faivre, M.; Clairemidi, J.; Meriwether, J. W.; Lehmacher, G. A.; Vidal, E.; Veliz, O.

    2010-01-01

    A new and original stereo imaging method is introduced to measure the altitude of the OH nightglow layer and provide a 3D perspective map of the altitude of the layer centroid. Near-IR photographs of the OH layer are taken at two sites separated by a 645 km distance. Each photograph is processed in order to provide a satellite view of the layer. When superposed, the two views present a common diamond-shaped area. Pairs of matched points that correspond to a physical emissive point in the common area are identified by calculating a normalized cross-correlation coefficient (NCC). This method is suitable for obtaining 3D representations in the case of low-contrast objects. An observational campaign was conducted in July 2006 in Peru. The images were taken simultaneously at Cerro Cosmos (12°09′08.2″ S, 75°33′49.3″ W, altitude 4630 m) close to Huancayo and Cerro Verde Tellolo (16°33′17.6″ S, 71°39′59.4″ W, altitude 2272 m) close to Arequipa. 3D maps of the layer surface were retrieved and compared with pseudo-relief intensity maps of the same region. The mean altitude of the emission barycenter is located at 86.3 km on July 26. Comparable relief wavy features appear in the 3D and intensity maps. It is shown that the vertical amplitude of the wave system varies as exp(Δz/2H) within the altitude range Δz = 83.5-88.0 km, H being the scale height. The oscillatory kinetic energy at the altitude of the OH layer lies between 3 × 10^-4 and 5.4 × 10^-4 J/m³, which is 2-3 times smaller than the values derived from partial radio wave at 52°N latitude.
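
    The pairing of matched points relies on a normalized cross-correlation coefficient between local patches around candidate points; a minimal sketch of that score (patch extraction and the search over candidates are omitted; names are illustrative):

        import numpy as np

        def ncc(patch_a, patch_b):
            """Normalized cross-correlation coefficient between two same-size image patches."""
            a = patch_a - patch_a.mean()
            b = patch_b - patch_b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
            return float((a * b).sum() / denom)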

  9. Imaging PVC gas pipes using 3-D GPR

    SciTech Connect

    Bradford, J.; Ramaswamy, M.; Peddy, C.

    1996-11-01

    Over the years, many enhancements have been made by the oil and gas industry to improve the quality of seismic images. The GPR project at GTRI borrows heavily from these technologies in order to produce 3-D GPR images of PVC gas pipes. As will be demonstrated, improvements in GPR data acquisition, 3-D processing and visualization schemes yield good images of PVC pipes in the subsurface. Data have been collected in cooperation with the local gas company and at a test facility in Texas. Surveys were conducted over both a metal pipe and PVC pipes of diameters ranging from 1/2 in. to 4 in. at depths from 1 ft to 3 ft in different soil conditions. The metal pipe produced very good reflections and was used to fine tune and optimize the processing run stream. It was found that the following steps significantly improve the overall image: (1) Statics for drift and topography compensation, (2) Deconvolution, (3) Filtering and automatic gain control, (4) Migration for focusing and resolution, and (5) Visualization optimization. The processing flow implemented is relatively straightforward, simple to execute and robust under varying conditions. Future work will include testing resolution limits, effects of soil conditions, and leak detection.

  10. 3D seismic imaging on massively parallel computers

    SciTech Connect

    Womble, D.E.; Ober, C.C.; Oldfield, R.

    1997-02-01

    The ability to image complex geologies such as salt domes in the Gulf of Mexico and thrusts in mountainous regions is a key to reducing the risk and cost associated with oil and gas exploration. Imaging these structures, however, is computationally expensive. Datasets can be terabytes in size, and the processing time required for the multiple iterations needed to produce a velocity model can take months, even with the massively parallel computers available today. Some algorithms, such as 3D finite-difference prestack depth migration, remain beyond the capacity of production seismic processing. Massively parallel processors (MPPs) and algorithms research are the tools that will enable this project to provide new seismic processing capabilities to the oil and gas industry. The goals of this work are to (1) develop finite-difference algorithms for 3D prestack depth migration; (2) develop efficient computational approaches for seismic imaging and for processing terabyte datasets on massively parallel computers; and (3) develop a modular, portable, seismic imaging code.

  11. 3D printed microfluidic mixer for point-of-care diagnosis of anemia.

    PubMed

    Plevniak, Kimberly; Campbell, Matthew; Mei He

    2016-08-01

    3D printing has emerged as a fabrication tool for prototyping and manufacturing. We demonstrate 3D microfluidic simulation-guided computer design and 3D-printed prototyping for quick-turnaround development of microfluidic 3D mixers, which allow fast self-mixing of reagents with blood through capillary force. Combined with a smartphone, point-of-care diagnosis of anemia from finger-prick blood has been successfully implemented and showed results consistent with clinical measurements. With its 3D fabrication flexibility and smartphone compatibility, this work presents a novel diagnostic strategy for advancing personalized medicine and mobile healthcare.

  12. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a
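
    The mean-subtraction method can be summarized as a simple per-plane operation on the spatially low-pass subband of the 3D wavelet decomposition. The sketch below assumes the subband is stored as a NumPy array of shape (bands, rows, cols) and is purely illustrative; the subtracted means would themselves have to be encoded alongside the subband for reconstruction.

      import numpy as np

      def mean_subtract(lowpass_subband):
          """Subtract the mean of each spatial plane (one plane per spectral index) of a
          spatially low-pass subband shaped (bands, rows, cols). Returns the zero-mean
          data together with the per-plane means needed at reconstruction time."""
          means = lowpass_subband.mean(axis=(1, 2), keepdims=True)
          return lowpass_subband - means, means.squeeze()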

  13. 3D geometry-based quantification of colocalizations in multichannel 3D microscopy images of human soft tissue tumors.

    PubMed

    Wörz, Stefan; Sander, Petra; Pfannmöller, Martin; Rieker, Ralf J; Joos, Stefan; Mechtersheimer, Gunhild; Boukamp, Petra; Lichter, Peter; Rohr, Karl

    2010-08-01

    We introduce a new model-based approach for automatic quantification of colocalizations in multichannel 3D microscopy images. The approach uses different 3D parametric intensity models in conjunction with a model fitting scheme to localize and quantify subcellular structures with high accuracy. The central idea is to determine colocalizations between different channels based on the estimated geometry of the subcellular structures as well as to differentiate between different types of colocalizations. A statistical analysis was performed to assess the significance of the determined colocalizations. This approach was used to successfully analyze about 500 three-channel 3D microscopy images of human soft tissue tumors and controls.

  14. DeepEM3D: approaching human-level performance on 3D anisotropic EM image segmentation.

    PubMed

    Zeng, Tao; Wu, Bian; Ji, Shuiwang

    2017-08-15

    Progress in 3D electron microscopy (EM) imaging has greatly facilitated neuroscience research in high-throughput data acquisition. Correspondingly, high-throughput automated image analysis methods are necessary to work on par with the speed of data being produced. One such example is the need for automated EM image segmentation for neurite reconstruction. However, the efficiency and reliability of current methods are still lagging far behind human performance. Here, we propose DeepEM3D, a deep learning method for segmenting 3D anisotropic brain electron microscopy images. In this method, the deep learning model can efficiently build feature representation and incorporate sufficient multi-scale contextual information. We propose employing a combination of novel boundary map generation methods with optimized model ensembles to address the inherent challenges of segmenting anisotropic images. We evaluated our method by participating in the 3D segmentation of neurites in EM images (SNEMI3D) challenge. Our submission is ranked #1 on the current leaderboard as of Oct 15, 2016. More importantly, our result was very close to human-level performance in terms of the challenge evaluation metric: namely, a Rand error of 0.06015 versus the human value of 0.05998. The code is available at https://github.com/divelab/deepem3d/. Contact: sji@eecs.wsu.edu. Supplementary data are available at Bioinformatics online.

  15. 3-D Ultrasound Vascularity Assessment for Breast Cancer Diagnosis

    DTIC Science & Technology

    2000-09-01


  16. The Diagnostic Radiological Utilization Of 3-D Display Images

    NASA Astrophysics Data System (ADS)

    Cook, Larry T.; Dwyer, Samuel J.; Preston, David F.; Batnitzky, Solomon; Lee, Kyo R.

    1984-10-01

    In the practice of radiology, computer graphics systems have become an integral part of the use of computed tomography (CT), nuclear medicine (NM), magnetic resonance imaging (MRI), digital subtraction angiography (DSA) and ultrasound. Gray scale computerized display systems are used to display, manipulate, and record scans in all of these modalities. As the use of these imaging systems has spread, various applications involving digital image manipulation have also been widely accepted in the radiological community. We discuss one of the more esoteric of such applications, namely, the reconstruction of 3-D structures from plane section data, such as CT scans. Our technique is based on the acquisition of contour data from successive sections, the definition of the implicit surface defined by such contours, and the application of the appropriate computer graphics hardware and software to present reasonably pleasing pictures.

  17. Image segmentation to inspect 3-D object sizes

    NASA Astrophysics Data System (ADS)

    Hsu, Jui-Pin; Fuh, Chiou-Shann

    1996-01-01

    Object size inspection is an important task with various applications in computer vision; for example, stone-breaking machines can be controlled automatically and perform better if the sizes of the stones to be broken can be predicted. An algorithm is proposed for image segmentation in size inspection of almost round stones with high or low texture. Although our experiments focus on stones, the algorithm can be applied to other 3-D objects. We use one fixed camera and four light sources at four different positions, switched on one at a time, to take four images. We then compute the image differences and binarize them to extract edges. We explain, step by step, the photographing, the edge extraction, the noise removal, and the edge gap filling. Experimental results are presented.
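
    The differencing-and-binarization step lends itself to a very small sketch: images of the same scene lit from different directions are subtracted pairwise, the differences are accumulated, and a threshold yields a binary edge map. This is an assumed, simplified illustration of that idea, not the authors' exact scheme (which also includes noise removal and edge-gap filling).

      import numpy as np

      def difference_edges(images, threshold):
          """Accumulate pairwise absolute differences of same-scene images taken under
          different light sources, then binarize to obtain candidate edge pixels."""
          edges = np.zeros(images[0].shape, dtype=float)
          for i in range(len(images)):
              for j in range(i + 1, len(images)):
                  diff = np.abs(images[i].astype(float) - images[j].astype(float))
                  edges = np.maximum(edges, diff)
          return (edges > threshold).astype(np.uint8)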

  18. Density-tapered spiral arrays for ultrasound 3-D imaging.

    PubMed

    Ramalli, Alessandro; Boni, Enrico; Savoia, Alessandro Stuart; Tortoli, Piero

    2015-08-01

    The current high interest in 3-D ultrasound imaging is pushing the development of 2-D probes with a challenging number of active elements. The most popular approach to limit this number is the sparse array technique, which designs the array layout by means of complex optimization algorithms. These algorithms are typically constrained by a few steering conditions, and, as such, cannot guarantee uniform side-lobe performance at all angles. The performance may be improved by the ungridded extensions of the sparse array technique, but this result is achieved at the expense of a further complication of the optimization process. In this paper, a method to design the layout of large circular arrays with a limited number of elements according to Fermat's spiral seeds and spatial density modulation is proposed and shown to be suitable for application to 3-D ultrasound imaging. This deterministic, aperiodic, and balanced positioning procedure attempts to guarantee uniform performance over a wide range of steering angles. The capabilities of the method are demonstrated by simulating and comparing the performance of spiral and dense arrays. A good trade-off for small vessel imaging is found, e.g., in the 60λ spiral array with 1.0λ elements and Blackman density tapering window. Here, the grating lobe level is -16 dB, the lateral resolution is lower than 6λ, the depth of field is 120λ, and the average contrast is 10.3 dB, while the sensitivity remains in a 5 dB range for a wide selection of steering angles. The simulation results may represent a reference guide to the design of spiral sparse array probes for different application fields.
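
    The basic Fermat-spiral placement behind such layouts can be sketched in a few lines: element n is placed at a radius proportional to the square root of n and at an azimuth incremented by the golden angle. The snippet below is an assumed illustration of the untapered spiral only; the density tapering described in the paper (e.g., a Blackman window) would replace the uniform square-root radial mapping with one derived from the desired radial element density.

      import numpy as np

      GOLDEN_ANGLE = np.pi * (3.0 - np.sqrt(5.0))  # about 137.5 degrees, in radians

      def fermat_spiral_layout(n_elements, aperture_radius):
          """Return an (n_elements, 2) array of (x, y) element positions placed on a
          Fermat spiral filling a circular aperture with uniform areal density."""
          n = np.arange(n_elements)
          r = aperture_radius * np.sqrt((n + 0.5) / n_elements)
          theta = n * GOLDEN_ANGLE
          return np.column_stack((r * np.cos(theta), r * np.sin(theta)))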

  19. 3-D imaging and illustration of mouse intestinal neurovascular complex.

    PubMed

    Fu, Ya-Yuan; Peng, Shih-Jung; Lin, Hsin-Yao; Pasricha, Pankaj J; Tang, Shiue-Cheng

    2013-01-01

    Because of the dispersed nature of nerves and blood vessels, standard histology cannot provide a global and associated observation of the enteric nervous system (ENS) and vascular network. We prepared transparent mouse intestine and combined vessel painting and three-dimensional (3-D) neurohistology for joint visualization of the ENS and vasculature. Cardiac perfusion of the fluorescent wheat germ agglutinin (vessel painting) was used to label the ileal blood vessels. The pan-neuronal marker PGP9.5, sympathetic neuronal marker tyrosine hydroxylase (TH), serotonin, and glial markers S100B and GFAP were used as the immunostaining targets of neural tissues. The fluorescently labeled specimens were immersed in the optical clearing solution to improve photon penetration for 3-D confocal microscopy. Notably, we simultaneously revealed the ileal microstructure, vasculature, and innervation with micrometer-level resolution. Four examples are given: 1) the morphology of the TH-labeled sympathetic nerves: sparse in epithelium, perivascular at the submucosa, and intraganglionic at myenteric plexus; 2) distinct patterns of the extrinsic perivascular and intrinsic pericryptic innervation at the submucosal-mucosal interface; 3) different associations of serotonin cells with the mucosal neurovascular elements in the villi and crypts; and 4) the periganglionic capillary network at the myenteric plexus and its contact with glial fibers. Our 3-D imaging approach provides a useful tool to simultaneously reveal the nerves and blood vessels in a space continuum for panoramic illustration and analysis of the neurovascular complex to better understand the intestinal physiology and diseases.

  20. Effective classification of 3D image data using partitioning methods

    NASA Astrophysics Data System (ADS)

    Megalooikonomou, Vasileios; Pokrajac, Dragoljub; Lazarevic, Aleksandar; Obradovic, Zoran

    2002-03-01

    We propose partitioning-based methods to facilitate the classification of 3-D binary image data sets of regions of interest (ROIs) with highly non-uniform distributions. The first method is based on recursive dynamic partitioning of a 3-D volume into a number of 3-D hyper-rectangles. For each hyper-rectangle, we consider, as a potential attribute, the number of voxels (volume elements) that belong to ROIs. A hyper-rectangle is partitioned only if the corresponding attribute does not have high discriminative power, determined by statistical tests, but it is still sufficiently large for further splitting. The final discriminative hyper-rectangles form new attributes that are further employed in neural network classification models. The second method is based on maximum likelihood employing non-spatial (k-means) and spatial DBSCAN clustering algorithms to estimate the parameters of the underlying distributions. The proposed methods were experimentally evaluated on mixtures of Gaussian distributions, on realistic lesion-deficit data generated by a simulator conforming to a clinical study, and on synthetic fractal data. Both proposed methods have provided good classification on Gaussian mixtures and on realistic data. However, the experimental results on fractal data indicated that the clustering-based methods were only slightly better than random guess, while the recursive partitioning provided significantly better classification accuracy.
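
    The first method described above reduces each 3-D binary ROI volume to a vector of voxel counts over recursively refined hyper-rectangles. The sketch below is an assumed, simplified illustration in which a fixed recursion depth stands in for the paper's statistical test of discriminative power; it is not the authors' implementation.

      import numpy as np

      def partition_counts(volume, max_depth=3, min_size=4, depth=0):
          """Recursively split a 3-D binary ROI mask into hyper-rectangles and return
          the ROI-voxel count of each final box as a flat list of attributes."""
          count = int(volume.sum())
          if depth == max_depth or min(volume.shape) < 2 * min_size:
              return [count]
          axis = int(np.argmax(volume.shape))        # split along the longest axis
          mid = volume.shape[axis] // 2
          halves = np.split(volume, [mid], axis=axis)
          attrs = []
          for half in halves:
              attrs.extend(partition_counts(half, max_depth, min_size, depth + 1))
          return attrs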

  1. 3D-LZ helicopter ladar imaging system

    NASA Astrophysics Data System (ADS)

    Savage, James; Harrington, Walter; McKinley, R. Andrew; Burns, H. N.; Braddom, Steven; Szoboszlay, Zoltan

    2010-04-01

    A joint-service team led by the Air Force Research Laboratory's Munitions and Sensors Directorates completed a successful flight test demonstration of the 3D-LZ Helicopter LADAR Imaging System. This was a milestone demonstration in the development of technology solutions for a problem known as "helicopter brownout", the loss of situational awareness caused by swirling sand during approach and landing. The 3D-LZ LADAR was developed by H.N. Burns Engineering and integrated with the US Army Aeroflightdynamics Directorate's Brown-Out Symbology System aircraft state symbology aboard a US Army EH-60 Black Hawk helicopter. The combination of these systems provided an integrated degraded visual environment landing solution with landing zone situational awareness as well as aircraft guidance and obstacle avoidance information. Pilots from the U.S. Army, Air Force, Navy, and Marine Corps achieved a 77% landing rate in full brownout conditions at a test range at Yuma Proving Ground, Arizona. This paper will focus on the LADAR technology used in 3D-LZ and the results of this milestone demonstration.

  2. Low cost 3D scanning process using digital image processing

    NASA Astrophysics Data System (ADS)

    Aguilar, David; Romero, Carlos; Martínez, Fernando

    2017-02-01

    This paper presents the design and construction of a low-cost 3D scanner able to digitize solid objects through contactless data acquisition using active object reflection. 3D scanners are used in applications such as science, engineering, and entertainment; they are classified into contact and contactless scanners, the latter being the most commonly used but also expensive. This low-cost prototype performs a vertical scan of the object using a fixed camera and a moving horizontal laser line, which is deformed according to the 3-dimensional surface of the solid. Digital image processing is used to analyze the deformation detected by the camera, allowing the 3D coordinates to be determined by triangulation. The resulting information is processed by a Matlab script, which gives the user a point cloud corresponding to each horizontal scan. The results show acceptable quality and significant detail of the digitized objects, making this prototype (built on a LEGO Mindstorms NXT kit) a versatile and cheap tool that can be used for many applications, mainly by engineering students.

  3. Needle tip visibility in 3D ultrasound images

    NASA Astrophysics Data System (ADS)

    Arif, Muhammad; Moelker, Adriaan; van Walsum, Theo

    2017-03-01

    Needle visibility is of crucial importance for ultrasound-guided interventional procedures. However, several factors, such as shadowing by bone or gas and tissue echogenic properties similar to those of needles, may compromise needle visibility. Additionally, a small angle between the ultrasound beam and the needle, as well as small-gauge needles, may reduce visibility. Variation in needle tip design may also affect visibility. Whereas several studies have investigated needle visibility in 2D ultrasound imaging, no data are available for 3D ultrasound imaging, a modality with great potential for image-guided interventions. In this study, we evaluated needle visibility using a 3D ultrasound transducer. We examined different needles in a tissue-mimicking liver phantom at three angles (20°, 55° and 90°) and quantified their visibility. The liver phantom was made from a 5% polyvinyl alcohol solution containing 1% silica gel particles to act as ultrasound scattering particles. We used four needles: two biopsy needles (Quick Core 14G and 18G), one ablation needle (radiofrequency ablation, 17G), and one initial puncture needle (IP needle, 17G). Needle visibility was quantified by calculating the contrast-to-noise ratio. The results showed that visibility was similar for all needles at large angles, whereas differences in visibility were more prominent at smaller angles. Furthermore, visibility increased as the angle between the ultrasound beam and the needle increased.
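
    The contrast-to-noise ratio used to quantify needle visibility can be illustrated with a short snippet. Definitions of CNR vary (some use a pooled standard deviation); the version below, which normalizes by the background standard deviation, is an assumed example rather than the exact metric used in the study.

      import numpy as np

      def contrast_to_noise_ratio(image, needle_mask, background_mask):
          """CNR between a needle region and a nearby background region of an
          ultrasound image, with both regions given as boolean masks."""
          needle = image[needle_mask].astype(float)
          background = image[background_mask].astype(float)
          return abs(needle.mean() - background.mean()) / background.std()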

  4. Ultra-realistic 3-D imaging based on colour holography

    NASA Astrophysics Data System (ADS)

    Bjelkhagen, H. I.

    2013-02-01

    A review of recent progress in colour holography is provided, together with new applications. Colour holography recording techniques in silver-halide emulsions are discussed. Both analogue (mainly Denisyuk) colour holograms and digitally-printed colour holograms are described, along with their recent improvements. An alternative to silver-halide materials, the panchromatic photopolymer materials such as the DuPont and Bayer photopolymers, is also covered. The light sources used to illuminate the recorded holograms are very important for obtaining ultra-realistic 3-D images. In particular, new light sources based on RGB LEDs are described; they show improved image quality over today's commonly used halogen lights. Recent work in colour holography by holographers and companies in different countries around the world is included. Recording and displaying ultra-realistic 3-D images with perfect colour rendering depends strongly on the correct recording technique using the optimal recording laser wavelengths and the availability of improved panchromatic recording materials, combined with new display light sources.

  5. 3D imaging reconstruction and impacted third molars: case reports

    PubMed Central

    Tuzi, Andrea; Di Bari, Roberto; Cicconetti, Andrea

    2012-01-01

    Summary There is debate in the literature about the need for Computed Tomography (CT) before removing third molars, even if positive radiographic signs are present. In a few cases, the third molar is so close to the inferior alveolar nerve that its extraction might expose patients to the risk of post-operative neuro-sensitive alterations of the skin and mucosa of the homolateral lower lip and chin. Thus, injury of the inferior alveolar nerve may represent a serious, though infrequent, neurologic complication of third molar surgery, making a careful pre-operative evaluation of the anatomical relationship with the inferior alveolar nerve by means of radiographic imaging techniques necessary. This contribution presents two case reports showing positive radiographic signs, which are the hallmarks of a possible close relationship between the inferior alveolar nerve and the third molars. We aim to better define the relationship between third molars and the mandibular canal using dental CT scans, DICOM image acquisition and 3D reconstruction with dedicated software. From our study we conclude that 3D images are not indispensable, but they can provide very welcome assistance in the most complicated cases. PMID:23386934

  6. Precise 3D image alignment in micro-axial tomography.

    PubMed

    Matula, P; Kozubek, M; Staier, F; Hausmann, M

    2003-02-01

    Micro (micro-) axial tomography is a challenging technique in microscopy which improves quantitative imaging especially in cytogenetic applications by means of defined sample rotation under the microscope objective. The advantage of micro-axial tomography is an effective improvement of the precision of distance measurements between point-like objects. Under certain circumstances, the effective (3D) resolution can be improved by optimized acquisition depending on subsequent, multi-perspective image recording of the same objects followed by reconstruction methods. This requires, however, a very precise alignment of the tilted views. We present a novel feature-based image alignment method with a precision better than the full width at half maximum of the point spread function. The features are the positions (centres of gravity) of all fluorescent objects observed in the images (e.g. cell nuclei, fluorescent signals inside cell nuclei, fluorescent beads, etc.). Thus, real alignment precision depends on the localization precision of these objects. The method automatically determines the corresponding objects in subsequently tilted perspectives using a weighted bipartite graph. The optimum transformation function is computed in a least squares manner based on the coordinates of the centres of gravity of the matched objects. The theoretically feasible precision of the method was calculated using computer-generated data and confirmed by tests on real image series obtained from data sets of 200 nm fluorescent nano-particles. The advantages of the proposed algorithm are its speed and accuracy, which means that if enough objects are included, the real alignment precision is better than the axial localization precision of a single object. The alignment precision can be assessed directly from the algorithm's output. Thus, the method can be applied not only for image alignment and object matching in tilted view series in order to reconstruct (3D) images, but also to validate the
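
    The alignment pipeline described above (matching object centroids across tilted views, then estimating a least-squares transformation) can be sketched with standard tools. The snippet below uses SciPy's linear_sum_assignment as a stand-in for the weighted bipartite graph matching and a rigid (rotation plus translation) least-squares fit; the paper's transformation model and weighting may differ, so treat this as an assumed illustration.

      import numpy as np
      from scipy.optimize import linear_sum_assignment
      from scipy.spatial.distance import cdist

      def match_and_align(points_a, points_b):
          """Match 3-D centroids from two views (Nx3 and Mx3 arrays) by minimum-cost
          assignment, then estimate the least-squares rigid transform (R, t) that
          maps the matched points_a onto points_b."""
          cost = cdist(points_a, points_b)          # pairwise Euclidean distances
          ia, ib = linear_sum_assignment(cost)      # one-to-one matching
          a, b = points_a[ia], points_b[ib]
          ca, cb = a.mean(axis=0), b.mean(axis=0)
          u, _, vt = np.linalg.svd((a - ca).T @ (b - cb))
          d = np.sign(np.linalg.det(vt.T @ u.T))    # guard against reflections
          r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
          t = cb - r @ ca
          return ia, ib, r, t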

  7. 3D laser optoacoustic ultrasonic imaging system for preclinical research

    NASA Astrophysics Data System (ADS)

    Ermilov, Sergey A.; Conjusteau, André; Hernandez, Travis; Su, Richard; Nadvoretskiy, Vyacheslav; Tsyboulski, Dmitri; Anis, Fatima; Anastasio, Mark A.; Oraevsky, Alexander A.

    2013-03-01

    In this work, we introduce a novel three-dimensional imaging system for in vivo high-resolution anatomical and functional whole-body visualization of small animal models, developed for preclinical and other types of biomedical research. The system (LOUIS-3DM) combines multi-wavelength optoacoustic tomography and ultrawide-band laser ultrasound tomography to obtain coregistered maps of tissue optical absorption and acoustic properties, displayed within the skin outline of the studied animal. The most promising applications of the LOUIS-3DM include 3D angiography, cancer research, and longitudinal studies of the biological distribution of optoacoustic contrast agents (carbon nanotubes, metal plasmonic nanoparticles, etc.).

  8. 3-D Imaging of Partly Concealed Targets by Laser Radar

    DTIC Science & Technology

    2005-10-01

    A laser in the green wavelength region was used for illumination.

  9. 3D imaging of neutron tracks using confocal microscopy

    NASA Astrophysics Data System (ADS)

    Gillmore, Gavin; Wertheim, David; Flowers, Alan

    2016-04-01

    Neutron detection and neutron flux assessment are important aspects in monitoring nuclear energy production. Neutron flux measurements can also provide information on potential biological damage from exposure. In addition to the applications for neutron measurement in nuclear energy, neutron detection has been proposed as a method of enhancing neutrino detectors and cosmic ray flux has also been assessed using ground-level neutron detectors. Solid State Nuclear Track Detectors (or SSNTDs) have been used extensively to examine cosmic rays, long-lived radioactive elements, radon concentrations in buildings and the age of geological samples. Passive SSNTDs consisting of a CR-39 plastic are commonly used to measure radon because they respond to incident charged particles such as alpha particles from radon gas in air. They have a large dynamic range and a linear flux response. We have previously applied confocal microscopy to obtain 3D images of alpha particle tracks in SSNTDs from radon track monitoring (1). As a charged particle traverses through the polymer it creates an ionisation trail along its path. The trail or track is normally enhanced by chemical etching to better expose radiation damage, as the damaged area is more sensitive to the etchant than the bulk material. Particle tracks in CR-39 are usually assessed using 2D optical microscopy. In this study 6 detectors were examined using an Olympus OLS4100 LEXT 3D laser scanning confocal microscope (Olympus Corporation, Japan). The detectors had been etched for 2 hours 50 minutes at 85 °C in 6.25M NaOH. Post etch the plastics had been treated with a 10 minute immersion in a 2% acetic acid stop bath, followed by rinsing in deionised water. The detectors examined had been irradiated with a 2 mSv neutron dose from an Am(Be) neutron source (producing roughly 20 tracks per mm²). We were able to successfully acquire 3D images of neutron tracks in the detectors studied. The range of track diameter observed was between 4

  10. 3D Multispectral Light Propagation Model For Subcutaneous Veins Imaging

    SciTech Connect

    Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William

    2008-01-01

    In this paper, we describe a new 3D light propagation model aimed at understanding the effects of various physiological properties on subcutaneous vein imaging. In particular, we build upon the well known MCML (Monte Carlo Multi Layer) code and present a tissue model that improves upon the current state-of-the-art by: incorporating physiological variation, such as melanin concentration, fat content, and layer thickness; including veins of varying depth and diameter; using curved surfaces from real arm shapes; and modeling the vessel wall interface. We describe our model, present results from the Monte Carlo modeling, and compare these results with those obtained with other Monte Carlo methods.

  11. 3D painting documentation: evaluation of conservation conditions with 3D imaging and ranging techniques

    NASA Astrophysics Data System (ADS)

    Abate, D.; Menna, F.; Remondino, F.; Gattari, M. G.

    2014-06-01

    The monitoring of paintings, both on canvas and on wooden supports, is a crucial issue for the preservation and conservation of this kind of artwork. Many environmental factors (e.g. humidity, temperature, illumination, etc.), as well as bad conservation practices (e.g. wrong restorations, inappropriate locations, etc.), can compromise the material conditions over time and deteriorate an artwork. The article presents an on-going project realized by a multidisciplinary team composed of the ENEA UTICT 3D GraphLab, the 3D Optical Metrology Unit of the Bruno Kessler Foundation and the Soprintendenza per i Beni Storico Artistici ed Etnoantropologici of Bologna (Italy). The goal of the project is the multi-temporal 3D documentation and monitoring of paintings - currently in a poor state of conservation - and the provision of metrics to quantify deformations and damage.

  12. Quantitative 3D Optical Imaging: Applications in Dosimetry and Biophysics

    NASA Astrophysics Data System (ADS)

    Thomas, Andrew Stephen

    Optical-CT has been shown to be a potentially useful imaging tool for the two very different spheres of biologists and radiation therapy physicists, but it has yet to live up to that potential. In radiation therapy, researchers have used optical-CT for the readout of 3D dosimeters, but it has yet to be a clinically relevant tool as the technology is too slow to be considered practical. Biologists have used the technique for structural imaging, but have struggled with emission tomography as the reality of photon attenuation for both excitation and emission has made the images quantitatively irrelevant. Dosimetry. The DLOS (Duke Large field of view Optical-CT Scanner) was designed and constructed to make 3D dosimetry utilizing optical-CT a fast and practical tool while maintaining the accuracy of readout of the previous, slower readout technologies. Upon construction/optimization/implementation of several components, including a diffuser, band pass filter, registration mount and fluid filtration system, the dosimetry system provides high quality data comparable to or exceeding that of commercial products. In addition, a stray light correction algorithm was tested and implemented. The DLOS, in combination with the 3D dosimeter it was designed for, PRESAGE™, then underwent rigorous commissioning and benchmarking tests validating its performance against gold standard data, including a set of 6 irradiations. DLOS commissioning tests resulted in sub-mm isotropic spatial resolution (MTF >0.5 for frequencies of 1.5 lp/mm) and a dynamic range of ~60 dB. Flood field uniformity was 10% and stable after 45 minutes. Stray light proved to be small, due to telecentricity, but even the residual can be removed through deconvolution. Benchmarking tests showed the mean 3D passing gamma rate (3%, 3 mm, 5% dose threshold) over the 6 benchmark data sets was 97.3% +/- 0.6% (range 96%-98%) for scans totaling ~10 minutes, indicating excellent ability to perform 3D dosimetry while improving the speed of

  13. Application of 3D surface imaging in breast cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Alderliesten, Tanja; Sonke, Jan-Jakob; Betgen, Anja; Honnef, Joeri; van Vliet-Vroegindeweij, Corine; Remeijer, Peter

    2012-02-01

    Purpose: Accurate dose delivery in deep-inspiration breath-hold (DIBH) radiotherapy for patients with breast cancer relies on precise treatment setup and monitoring of the depth of the breath hold. This study entailed performance evaluation of a 3D surface imaging system for image guidance in DIBH radiotherapy by comparison with cone-beam computed tomography (CBCT). Materials and Methods: Fifteen patients, treated with DIBH radiotherapy after breast-conserving surgery, were included. The performance of surface imaging was compared to the use of CBCT for setup verification. Retrospectively, breast surface registrations were performed for CBCT to planning CT as well as for a 3D surface, captured concurrently with CBCT, to planning CT. The resulting setup errors were compared with linear regression analysis. For the differences between setup errors, group mean, systematic and random errors were calculated. Furthermore, a residual error after registration (RRE) was assessed for both systems by investigating the root-mean-square distance between the planning CT surface and registered CBCT/captured surface. Results: Good correlation between setup errors was found: R² = 0.82, 0.86, 0.82 in the left-right, cranio-caudal and anterior-posterior directions, respectively. Systematic and random errors were ≤0.16 cm and ≤0.13 cm in all directions, respectively. RRE values for surface imaging and CBCT were on average 0.18 versus 0.19 cm with a standard deviation of 0.10 and 0.09 cm, respectively. Wilcoxon signed-ranks testing showed that CBCT registrations resulted in higher RRE values than surface imaging registrations (p = 0.003). Conclusion: This performance evaluation study shows very promising results
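
    The residual error after registration (RRE) reported above is a root-mean-square distance between two registered surfaces. One common way to compute such a metric, assuming both surfaces are available as 3-D point clouds, is a nearest-neighbour query against the reference surface; the snippet below is an assumed illustration, not the exact definition used in the study.

      import numpy as np
      from scipy.spatial import cKDTree

      def rms_surface_distance(reference_points, registered_points):
          """Root-mean-square nearest-neighbour distance from a registered surface to a
          reference surface, with both surfaces given as Nx3 point clouds."""
          tree = cKDTree(reference_points)
          distances, _ = tree.query(registered_points)
          return float(np.sqrt(np.mean(distances ** 2)))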

  14. Development of a 3-D Measuring System for Upper Limb Movements Using Image Processing

    NASA Astrophysics Data System (ADS)

    Ogata, Kohichi; Toume, Tadashi; Nakanishi, Ryoji

    This paper describes a 3-D motion capture system for the quantitative evaluation of a finger-nose test using image processing. In the field of clinical medicine, qualitative and quantitative evaluation of voluntary movements is necessary for correct diagnosis of disorders. For this purpose, we have developed a 3-D measuring system with a multi-camera system. The configuration of the system is described and examples of movement data are shown for normal subjects and patients. In the finger-nose test at a fast trial speed, a discriminant analysis using Mahalanobis generalized distances shows a discriminant rate of 93% between normal subjects and spinocerebellar degeneration (SCD) patients.
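
    Discriminant analysis with Mahalanobis generalized distances amounts to comparing, for each feature vector, its distance to each class mean under a common covariance estimate. The snippet below is a minimal, assumed illustration of that rule; the feature set and covariance estimation used in the study are not reproduced here.

      import numpy as np

      def mahalanobis_classify(x, mean_a, mean_b, pooled_cov):
          """Assign feature vector x to class 'A' or 'B' by comparing Mahalanobis
          generalized distances computed with a pooled covariance matrix."""
          inv_cov = np.linalg.inv(pooled_cov)
          d_a = (x - mean_a) @ inv_cov @ (x - mean_a)
          d_b = (x - mean_b) @ inv_cov @ (x - mean_b)
          return "A" if d_a < d_b else "B"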

  15. Recent progress in 3-D imaging of sea freight containers

    SciTech Connect

    Fuchs, Theobald Schön, Tobias Sukowski, Frank; Dittmann, Jonas; Hanke, Randolf

    2015-03-31

    The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea-freight container takes several hours. Of course, this is too slow to be applied to a large number of containers. However, the benefits of a 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without being confronted with legal complications or high time consumption and risks for the security personnel during a manual inspection. Recently, distinct progress was made in the field of reconstruction from projections with only a relatively low number of angular positions. Instead of today's 500 to 1000 rotational steps, as needed for conventional CT reconstruction techniques, this new class of algorithms offers the potential to reduce the number of projection angles by approximately a factor of 10. The main drawback of these advanced iterative methods is their high computational cost. But as computational power is getting steadily cheaper, there will be practical applications of these complex algorithms in the foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects, scanning a sea-freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution as a function of the number of projections.
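
    Iterative few-view reconstruction schemes of the kind discussed above update a volume estimate from the mismatch between measured and simulated projections. The snippet below sketches SIRT (simultaneous iterative reconstruction technique) for a dense system matrix as one representative example; it is an assumed illustration, not necessarily the algorithm evaluated in the paper.

      import numpy as np

      def sirt(A, b, n_iter=50):
          """SIRT reconstruction: A is a (rays x voxels) system matrix, b the measured
          projection data; returns a non-negative voxel vector."""
          x = np.zeros(A.shape[1])
          row_sums = A.sum(axis=1)
          col_sums = A.sum(axis=0)
          row_sums[row_sums == 0] = 1.0
          col_sums[col_sums == 0] = 1.0
          for _ in range(n_iter):
              residual = (b - A @ x) / row_sums      # normalized projection mismatch
              x += (A.T @ residual) / col_sums       # back-project and update
              x = np.clip(x, 0.0, None)              # enforce non-negativity
          return x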

  16. 3-D Imaging and Simulation for Nephron Sparing Surgical Training.

    PubMed

    Ahmadi, Hamed; Liu, Jen-Jane

    2016-08-01

    Minimally invasive partial nephrectomy (MIPN) is now considered the procedure of choice for small renal masses, largely based on functional advantages over traditional open surgery. Lack of haptic feedback, the need for spatial understanding of tumor borders, and advanced operative techniques to minimize ischemia time or achieve zero-ischemia PN are among the factors that make MIPN a technically demanding operation with a steep learning curve for inexperienced surgeons. Surgical simulation has emerged as a useful training adjunct in residency programs to facilitate the acquisition of these complex operative skills in the setting of restricted work hours and limited operating room time and autonomy. However, the majority of available surgical simulators focus on basic surgical skills, and procedure-specific simulation is needed for optimal surgical training. Advances in 3-dimensional (3-D) imaging have also enhanced the surgeon's ability to localize tumors intraoperatively. This article focuses on recent procedure-specific simulation models for laparoscopic and robotic-assisted PN and advanced 3-D imaging techniques as part of preoperative and, in some cases, intraoperative surgical planning.

  17. 3-D visualization and animation technologies in anatomical imaging

    PubMed Central

    McGhee, John

    2010-01-01

    This paper explores a 3-D computer artist’s approach to the creation of three-dimensional computer-generated imagery (CGI) derived from clinical scan data. Interpretation of scientific imagery, such as magnetic resonance imaging (MRI), is restricted to the eye of the trained medical practitioner in a clinical or scientific context. In the research work described here, MRI data are visualized and interpreted by a 3-D computer artist using the tools of the digital animator to navigate image complexity and widen interaction. In this process, the artefact moves across disciplines; it is no longer tethered to its diagnostic origins. It becomes an object that has visual attributes such as light, texture and composition, and a visual aesthetic of its own. The introduction of these visual attributes provides a platform for improved accessibility by a lay audience. The paper argues that this more artisan approach to clinical data visualization has a potential real-world application as a communicative tool for clinicians and patients during consultation. PMID:20002229

  18. 3-D visualization and animation technologies in anatomical imaging.

    PubMed

    McGhee, John

    2010-02-01

    This paper explores a 3-D computer artist's approach to the creation of three-dimensional computer-generated imagery (CGI) derived from clinical scan data. Interpretation of scientific imagery, such as magnetic resonance imaging (MRI), is restricted to the eye of the trained medical practitioner in a clinical or scientific context. In the research work described here, MRI data are visualized and interpreted by a 3-D computer artist using the tools of the digital animator to navigate image complexity and widen interaction. In this process, the artefact moves across disciplines; it is no longer tethered to its diagnostic origins. It becomes an object that has visual attributes such as light, texture and composition, and a visual aesthetic of its own. The introduction of these visual attributes provides a platform for improved accessibility by a lay audience. The paper argues that this more artisan approach to clinical data visualization has a potential real-world application as a communicative tool for clinicians and patients during consultation.

  19. Imaging Shallow Salt With 3D Refraction Migration

    NASA Astrophysics Data System (ADS)

    Vanschuyver, C. J.; Hilterman, F. J.

    2005-05-01

    In offshore West Africa, numerous salt walls are within 200 m of sea level. Because of the shallowness of these salt walls, reflections from the salt top can be difficult to map, making it impossible to build an accurate velocity model for subsequent pre-stack depth migration. An accurate definition of salt boundaries is critical to any depth model where salt is present. Unfortunately, when a salt body is very shallow, the reflection from the upper interface can be obscured due to large offsets between the source and near receivers and also due to interference from multiples and other near-surface noise events. A new method is described that uses 3D migration of the refraction waveforms and is simplified by several constraints in the model definition. The azimuth and dip of the refractor are found by imaging with Kirchhoff theory. A Kirchhoff migration is performed in which the traveltime values are adjusted to use the CMP refraction traveltime equation. The sediment and salt velocities are assumed to be known, so that once the image time is specified, the dip and azimuth of the refraction path can be found. The resulting 3D refraction migrations are in excellent depth agreement with available well control. In addition, the refraction migration time picks of deeper salt events are in agreement with time picks of the same events on the reflection migration.

  20. Experiments on terahertz 3D scanning microscopic imaging

    NASA Astrophysics Data System (ADS)

    Zhou, Yi; Li, Qi

    2016-10-01

    Compared with visible light and infrared, terahertz (THz) radiation can penetrate nonpolar and nonmetallic materials. There are currently many studies of THz coaxial transmission confocal microscopy, but few studies of THz dual-axis reflective confocal microscopy have been reported. In this paper, we utilized a dual-axis reflective confocal scanning microscope working at 2.52 THz. In contrast with the THz coaxial transmission confocal microscope, the microscope adopted in this paper attains higher axial resolution at the expense of reduced lateral resolution, offering better 3D imaging capability. Objects such as the Chinese characters "Zhong-Hua" written on paper with a pencil and a combined sheet-metal sample with three layers were scanned. The experimental results indicate that the system can extract the two Chinese characters "Zhong" and "Hua," or the three layers of the combined sheet metal. It can be predicted that the microscope can be applied to biology, medicine and other fields in the future due to its favorable 3D imaging capability.

  1. Abdominal aortic aneurysm imaging with 3-D ultrasound: 3-D-based maximum diameter measurement and volume quantification.

    PubMed

    Long, A; Rouet, L; Debreuve, A; Ardon, R; Barbe, C; Becquemin, J P; Allaire, E

    2013-08-01

    The clinical reliability of 3-D ultrasound imaging (3-DUS) in quantification of abdominal aortic aneurysm (AAA) was evaluated. B-mode and 3-DUS images of AAAs were acquired for 42 patients. AAAs were segmented. A 3-D-based maximum diameter (Max3-D) and partial volume (Vol30) were defined and quantified. Comparisons between 2-D (Max2-D) and 3-D diameters and between orthogonal acquisitions were performed. Intra- and inter-observer reproducibility was evaluated. Intra- and inter-observer coefficients of repeatability (CRs) were less than 5.18 mm for Max3-D. Intra-observer and inter-observer CRs were respectively less than 6.16 and 8.71 mL for Vol30. The mean of normalized errors of Vol30 was around 7%. Correlation between Max2-D and Max3-D was 0.988 (p < 0.0001). Max3-D and Vol30 were not influenced by a probe rotation of 90°. Use of 3-DUS to quantify AAA is a new approach in clinical practice. The present study proposed and evaluated dedicated parameters. Their reproducibility makes the technique clinically reliable.

  2. High Resolution 3D Radar Imaging of Comet Interiors

    NASA Astrophysics Data System (ADS)

    Asphaug, E. I.; Gim, Y.; Belton, M.; Brophy, J.; Weissman, P. R.; Heggy, E.

    2012-12-01

    Knowing the interiors of comets and other primitive bodies is fundamental to our understanding of how planets formed. We have developed a Discovery-class mission formulation, Comet Radar Explorer (CORE), based on the use of previously flown planetary radar sounding techniques, with the goal of obtaining high resolution 3D images of the interior of a small primitive body. We focus on the Jupiter-Family Comets (JFCs) as these are among the most primitive bodies reachable by spacecraft. Scattered in from far beyond Neptune, they are ultimate targets of a cryogenic sample return mission according to the Decadal Survey. Other suitable targets include primitive NEOs, Main Belt Comets, and Jupiter Trojans. The approach is optimal for small icy bodies ~3-20 km diameter with spin periods faster than about 12 hours, since (a) navigation is relatively easy, (b) radar penetration is global for decameter wavelengths, and (c) repeated overlapping ground tracks are obtained. The science mission can be as short as ~1 month for a fast-rotating JFC. Bodies smaller than ~1 km can be globally imaged, but the navigation solutions are less accurate and the relative resolution is coarse. Larger comets are more interesting, but radar signal is unlikely to be reflected from depths greater than ~10 km. So, JFCs are excellent targets for a variety of reasons. We furthermore focus on the use of Solar Electric Propulsion (SEP) to rendezvous shortly after the comet's perihelion. This approach leaves us with ample power for science operations under dormant conditions beyond ~2-3 AU. This leads to a natural mission approach of distant observation, followed by closer inspection, terminated by a dedicated radar mapping orbit. Radar reflections are obtained from a polar orbit about the icy nucleus, which spins underneath. Echoes are obtained from a sounder operating at dual frequencies 5 and 15 MHz, with 1 and 10 MHz bandwidths respectively. The dense network of echoes is used to obtain global 3D

  3. Object Segmentation and Ground Truth in 3D Embryonic Imaging

    PubMed Central

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C.

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets. PMID:27332860

  4. Object Segmentation and Ground Truth in 3D Embryonic Imaging.

    PubMed

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets.

  5. Automated 3D ultrasound image segmentation to aid breast cancer image interpretation.

    PubMed

    Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A; Yuan, Jie; Wang, Xueding; Carson, Paul L

    2016-02-01

    Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Automated 3D Ultrasound Image Segmentation to Aid Breast Cancer Image Interpretation

    PubMed Central

    Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A.; Yuan, Jie; Wang, Xueding; Carson, Paul L.

    2015-01-01

    Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer. PMID:26547117

  7. Volume estimation of tonsil phantoms using an oral camera with 3D imaging

    PubMed Central

    Das, Anshuman J.; Valdez, Tulio A.; Vargas, Jose Arbouin; Saksupapchon, Punyapat; Rachapudi, Pushyami; Ge, Zhifei; Estrada, Julio C.; Raskar, Ramesh

    2016-01-01

    Three-dimensional (3D) visualization of oral cavity and oropharyngeal anatomy may play an important role in the evaluation for obstructive sleep apnea (OSA). Although computed tomography (CT) and magnetic resonance imaging (MRI) are capable of providing 3D anatomical descriptions, this type of technology is not readily available in a clinic setting. Current imaging of the oropharynx is performed using a light source and tongue depressors. For better assessment of the inferior pole of the tonsils and the tongue base, flexible laryngoscopes are required, which provide only a two-dimensional (2D) rendering. As a result, clinical diagnosis is generally subjective in tonsillar hypertrophy, where current physical examination has limitations. In this report, we designed a hand-held portable oral camera with 3D imaging capability to reconstruct the anatomy of the oropharynx in tonsillar hypertrophy, where the tonsils become enlarged and can lead to increased airway resistance. We were able to precisely reconstruct the 3D shape of the tonsils and, from that, estimate the airway obstruction percentage and the volume of the tonsils in realistic 3D-printed models. Our results correlate well with Brodsky's classification of tonsillar hypertrophy as well as intraoperative volume estimations. PMID:27446667

  8. 2D and 3D visualization methods of endoscopic panoramic bladder images

    NASA Astrophysics Data System (ADS)

    Behrens, Alexander; Heisterklaus, Iris; Müller, Yannick; Stehle, Thomas; Gross, Sebastian; Aach, Til

    2011-03-01

    While several mosaicking algorithms have been developed to compose endoscopic images of the internal urinary bladder wall into panoramic images, the quantitative evaluation of these output images in terms of geometrical distortion has often not been discussed. However, the visualization of the distortion level is highly desired for an objective image-based medical diagnosis. Thus, we present in this paper a method to create quality maps from the characteristics of the transformation parameters applied to the endoscopic images during the registration step of the mosaicking algorithm. For a global first-view impression, the quality maps are laid over the panoramic image and highlight image regions in pseudo-colors according to their local distortions. This illustration then helps surgeons easily identify geometrically distorted structures in the panoramic image, allowing more objective interpretation of tumor tissue shape and size. Aside from introducing quality maps in 2-D, we also discuss a visualization method to map panoramic images onto a 3-D spherical bladder model. Reference points are manually selected by the surgeon in the panoramic image and the 3-D model. Then the panoramic image is mapped by the Hammer-Aitoff equal-area projection onto the 3-D surface using texture mapping. Finally, the textured bladder model can be freely moved in a virtual environment for inspection. Using a two-hemisphere bladder representation, references between panoramic image regions and their corresponding space coordinates within the bladder model are reconstructed. This additional spatial 3-D information thus assists the surgeon in navigation, documentation, and surgical planning.
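
    The Hammer-Aitoff equal-area projection used for the texture mapping has a simple closed form. The snippet below evaluates the forward projection from spherical coordinates to planar texture coordinates; how the resulting (x, y) values are normalized into texture space depends on the rendering pipeline, so this is an assumed illustration only.

      import numpy as np

      def hammer_aitoff(lon, lat):
          """Forward Hammer equal-area projection. lon in [-pi, pi] and lat in
          [-pi/2, pi/2] (radians) are mapped to planar (x, y) coordinates."""
          denom = np.sqrt(1.0 + np.cos(lat) * np.cos(lon / 2.0))
          x = 2.0 * np.sqrt(2.0) * np.cos(lat) * np.sin(lon / 2.0) / denom
          y = np.sqrt(2.0) * np.sin(lat) / denom
          return x, y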

  9. Using 3-D OFEM for movement correction and quantitative evaluation in dynamic cardiac NH3 PET images

    NASA Astrophysics Data System (ADS)

    Lin, Hong-Dun; Yang, Bang-Hung; Chen, Chih-Hao; Wu, Liang-Chih; Liu, Ren-Shyan; Chung, Being-Tau; Lin, Kang-Ping

    2005-04-01

    Various forms of cardiac pathology, such as myocardial ischemia and infarction, can be characterized with 13NH3-PET images. In clinical practice, a polar map (bullseye image) is derived by combining images from multiple planes around the myocardium, so that information on the entire myocardium can be displayed in a single image for diagnosis. However, image artifacts arising from body movement or breathing motion during the acquisition period can blur the apparent extent of myocardial disorder in the bullseye image. In this study, a 3-D motion and movement correction method is developed to solve this artifact problem and improve the accuracy of the diagnostic bullseye image. The proposed method is based on a 3-D optical flow estimation method (OFEM) and is combined with a dynamic imaging protocol that acquires serial PET images (5 frames) during the latter half of the imaging period. The 3-D OFEM assigns to each image point a 3-D flow velocity associated with the non-rigid motion of the time-varying brightness in a sequence of images. These vectors relate corresponding image positions between frames and are used for motion correction. To validate the performance of the proposed method, 10 normal and 20 abnormal whole-body dynamic PET imaging studies were used; the results show that the bullseye images generated from the corrected images present clear and well-defined tissue regions for clinical diagnosis.
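
    Once a per-voxel displacement field has been estimated by the optical flow step, motion correction amounts to resampling each PET frame along that field. The snippet below shows only this resampling step, using SciPy's map_coordinates; the flow estimation itself is not reproduced, and the array layout is an assumption for illustration.

      import numpy as np
      from scipy.ndimage import map_coordinates

      def warp_volume(frame, flow):
          """Resample a 3-D PET frame along a per-voxel displacement field.
          'frame' has shape (z, y, x); 'flow' has shape (3, z, y, x) and holds the
          displacement of each voxel along the z, y and x axes."""
          grid = np.indices(frame.shape).astype(float)
          coords = grid + flow            # where each output voxel samples the input
          return map_coordinates(frame, coords, order=1, mode="nearest")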

  10. Complex Resistivity 3D Imaging for Ground Reinforcement Site

    NASA Astrophysics Data System (ADS)

    Son, J.; Kim, J.; Park, S.

    2012-12-01

    The induced polarization (IP) method is used for mineral exploration and is generally classified into two categories, time-domain and frequency-domain methods. The frequency-domain IP method measures amplitude and phase relative to the transmitted currents, and is often called spectral induced polarization (SIP) when measurements are made over a wide frequency band. Our research group has been studying modeling and inversion algorithms for the complex resistivity method for several years and recently started to apply the method to various field problems. We have completed the development of 2D/3D modeling and inversion programs and are developing another algorithm to use wide-band data jointly. Until now, the complex resistivity (CR) method has mainly been used for surface or tomographic surveys in mineral exploration. From this experience, we find that the resistivity section from the CR method is very similar to that of the conventional resistivity method, and that interpretation of the phase section generally matches the geological information of the survey area well. However, because most survey areas have very tough and complex terrain, 2D surveys and interpretation are generally used. In this study, we introduce a case study of a 3D CR survey conducted at a site where ground reinforcement was carried out to prevent subsidence. Data were acquired with the Zeta system, a complex resistivity measurement system produced by Zonge Co., using 8 frequencies from 0.125 to 16 Hz. A 2D survey was conducted along 6 lines with 5 m dipole spacing and 20 electrodes; each line is 95 m long. Of the 8 frequencies, data below 1 Hz were used considering their quality, and 3D inversion was conducted with the 6-line data. First, a 2D interpretation was made with the acquired data and its results were compared with those of a resistivity survey. The resulting resistivity image sections of the CR and resistivity methods were very similar, and anomalies in the phase image section showed good agreement

  11. 3D Image Analysis of Geomaterials using Confocal Microscopy

    NASA Astrophysics Data System (ADS)

    Mulukutla, G.; Proussevitch, A.; Sahagian, D.

    2009-05-01

    Confocal microscopy is one of the most significant advances in optical microscopy of the last century. It is widely used in the biological sciences, but its application to geomaterials lags due to a number of technical problems. In principle, the technique can perform non-invasive imaging of a laser-illuminated, fluorescing sample using a unique optical sectioning capability that rejects out-of-focus light at the confocal aperture. Fluorescence in geomaterials is commonly induced by impregnating the sample with epoxy doped with a fluorochrome, which enables discrimination of features such as void space or material boundaries. For many geomaterials, however, this method cannot be used because they do not fluoresce naturally and because epoxy cannot reach inaccessible parts of the sample due to lack of permeability. As a result, confocal images of most geomaterials that have not undergone extensive sample preparation are of poor quality and lack the image and edge contrast needed to apply commonly used segmentation techniques for quantitative study of features such as vesicularity and internal structure. In our present work, we are developing a methodology for quantitative 3D analysis of confocal images of geomaterials with minimal prior sample preparation and no added fluorescence. Two sample geomaterials, a volcanic melt sample and a crystal chip containing fluid inclusions, are used to assess the feasibility of the method. The step-by-step image analysis includes image filtration to enhance edges or material interfaces and is based on two segmentation techniques: geodesic active contours and region competition. Both techniques have been applied extensively to the analysis of medical MRI images to segment anatomical structures. Preliminary analysis suggests that there is distortion in the
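
    As a hedged sketch of one of the two named segmentation techniques, the snippet below runs a morphological geodesic active contour from scikit-image on a single 2-D confocal slice; the file name, pre-filtering, seed placement, and parameter values are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from skimage import io
from skimage.filters import gaussian
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

# Hypothetical file name standing in for one confocal optical section.
img = io.imread("confocal_slice.tif").astype(float)
img = gaussian(img, sigma=1.0)          # mild pre-filtering to boost edge contrast

# Edge-stopping map: small values near strong gradients (material interfaces).
gimg = inverse_gaussian_gradient(img, alpha=100.0, sigma=2.0)

# Initial level set: a filled disk placed roughly inside the feature of interest.
init = np.zeros(img.shape, dtype=np.int8)
rr, cc = np.ogrid[:img.shape[0], :img.shape[1]]
center = (img.shape[0] // 2, img.shape[1] // 2)
init[(rr - center[0]) ** 2 + (cc - center[1]) ** 2 < 40 ** 2] = 1

# Evolve the contour toward the edges; balloon > 0 lets it expand in flat regions.
seg = morphological_geodesic_active_contour(gimg, 200, init_level_set=init,
                                            smoothing=1, balloon=1)
```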

  12. High Time Resolution Photon Counting 3D Imaging Sensors

    NASA Astrophysics Data System (ADS)

    Siegmund, O.; Ertley, C.; Vallerga, J.

    2016-09-01

    Novel sealed tube microchannel plate (MCP) detectors using next generation cross strip (XS) anode readouts and high performance electronics have been developed to provide photon counting imaging sensors for astronomy and high time resolution 3D remote sensing. 18 mm aperture sealed tubes with MCPs and high efficiency Super-GenII or GaAs photocathodes have been implemented to access the visible/NIR regimes for ground based research, astronomical and space sensing applications. The cross strip anode readouts in combination with PXS-II high speed event processing electronics can process high single photon counting event rates at >5 MHz (~80 ns dead-time per event), and time stamp events to better than 25 ps. Furthermore, we are developing a high speed ASIC version of the electronics for low power/low mass spaceflight applications. For a GaAs tube the peak quantum efficiency has degraded from 30% (at 560 - 850 nm) to 25% over 4 years, but for Super-GenII tubes the peak quantum efficiency of 17% (peak at 550 nm) has remained unchanged for over 7 years. The Super-GenII tubes have a uniform spatial resolution of <30 μm FWHM (~1 x 10^6 gain) and single event timing resolution of 100 ps (FWHM). The relatively low MCP gain photon counting operation also permits longer overall sensor lifetimes and high local counting rates. Using the high timing resolution, we have demonstrated 3D object imaging with laser pulse (630 nm, 45 ps jitter Pilas laser) reflections in single photon counting mode with spatial and depth sensitivity of the order of a few millimeters. A 50 mm Planacon sealed tube was also constructed, using atomic layer deposited microchannel plates which potentially offer better overall sealed tube lifetime, quantum efficiency and gain stability. This tube achieves standard bialkali quantum efficiency levels, is stable, and has been coupled to the PXS-II electronics and used to detect and image fast laser pulse signals.

  13. SU-F-I-14: 3D Breast Digital Phantom for XACT Imaging

    SciTech Connect

    Tang, S; Laaroussi, R; Chen, J; Samant, P; Xiang, L; Chen, Y; Ahmad, S; Yang, K

    2016-06-15

    Purpose: X-ray induced acoustic computed tomography (XACT) is a new imaging modality which combines X-ray contrast and high ultrasonic resolution in a single modality. Using XACT in breast imaging, a 3D breast volume can be imaged with only a single X-ray pulse, which could dramatically reduce the imaging dose for patients undergoing breast cancer screening and diagnosis. A 3D digital phantom that contains both the X-ray and acoustic properties of different tissue types is needed for developing and optimizing an XACT system. The purpose of this study is to offer a realistic breast digital phantom as a valuable tool for improving breast XACT imaging techniques, potentially leading to better diagnostic outcomes. Methods: A series of breast CT images along the coronal plane from a patient with breast calcifications is used as the source images. An HU-value-based segmentation algorithm is employed to classify breast tissues into five categories, namely skin tissue, fat tissue, glandular tissue, chest bone, and calcifications. For each pixel, dose-related parameters, such as material composition and density, and acoustic parameters, such as the frequency-dependent acoustic attenuation coefficient and bandwidth, are assigned based on tissue type. Other parameters used in sound propagation, including the sound speed, thermal expansion coefficient, and heat capacity, are also assigned to each tissue. Results: A series of 2D tissue-type images is acquired first, and the 3D digital breast phantom is obtained using commercial 3D reconstruction software. Given specific settings, including dose deposition and ultrasound center frequency, the X-ray induced initial pressure rise can be calculated accordingly. Conclusion: The proposed 3D breast digital phantom represents a realistic breast anatomic structure and provides a valuable tool for developing and evaluating system performance for XACT.
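
    The abstract does not publish its HU cut-offs or property tables; a minimal sketch of what such an HU-threshold labeling step could look like, with purely illustrative thresholds and placeholder property values, is shown below.

```python
import numpy as np

def label_tissues(hu_volume):
    """Toy HU-value-based labeling into the tissue classes named above.

    Thresholds are illustrative placeholders, not the study's values.
    0 = background/air, 1 = fat, 2 = glandular, 3 = bone, 4 = calcification.
    (Skin is usually separated morphologically, e.g. by distance from the surface.)
    """
    labels = np.zeros(hu_volume.shape, dtype=np.uint8)
    labels[(hu_volume > -200) & (hu_volume <= -20)] = 1
    labels[(hu_volume > -20) & (hu_volume <= 100)] = 2
    labels[(hu_volume > 100) & (hu_volume <= 300)] = 3
    labels[hu_volume > 300] = 4
    return labels

# Acoustic properties assigned per label (placeholder values, m/s).
SOUND_SPEED = {0: 343.0, 1: 1440.0, 2: 1550.0, 3: 3000.0, 4: 3500.0}
```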

  14. Fast 3-d tomographic microwave imaging for breast cancer detection.

    PubMed

    Grzegorczyk, Tomasz M; Meaney, Paul M; Kaufman, Peter A; diFlorio-Alexander, Roberta M; Paulsen, Keith D

    2012-08-01

    Microwave breast imaging (using electromagnetic waves of frequencies around 1 GHz) has mostly remained at the research level for the past decade, gaining little clinical acceptance. The major hurdles limiting patient use are both at the hardware level (challenges in collecting accurate and noncorrupted data) and software level (often plagued by unrealistic reconstruction times in the tens of hours). In this paper we report improvements that address both issues. First, the hardware is able to measure signals down to levels compatible with sub-centimeter image resolution while keeping an exam time under 2 min. Second, the software overcomes the enormous time burden and produces similarly accurate images in less than 20 min. The combination of the new hardware and software allows us to produce and report here the first clinical 3-D microwave tomographic images of the breast. Two clinical examples are selected out of 400+ exams conducted at the Dartmouth Hitchcock Medical Center (Lebanon, NH). The first example demonstrates the potential usefulness of our system for breast cancer screening while the second example focuses on therapy monitoring.

  15. MIMO based 3D imaging system at 360 GHz

    NASA Astrophysics Data System (ADS)

    Herschel, R.; Nowok, S.; Zimmermann, R.; Lang, S. A.; Pohl, N.

    2016-05-01

    A MIMO radar imaging system at 360 GHz is presented as part of the comprehensive approach of the European FP7 project TeraSCREEN, which uses multiple frequency bands for active and passive imaging. The MIMO system consists of 16 transmitter and 16 receiver antennas within a single array. Using a bandwidth of 30 GHz, a range resolution of up to 5 mm is obtained. With the 16×16 MIMO system, 256 different azimuth bins can be distinguished. Mechanical beam steering is used to measure 130 different elevation angles, where the angular resolution is obtained by a focusing elliptical mirror. With this system a high-resolution 3D image can be generated at 4 frames per second, each frame containing 16 million points. The principle of the system is presented starting from the functional structure, covering the hardware design and including the digital image generation. This is supported by simulated data and discussed using experimental results from a preliminary 90 GHz system, underlining the feasibility of the approach.
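
    Assuming the standard radar range-resolution relation, the quoted 5 mm figure follows directly from the 30 GHz bandwidth:

```latex
\Delta R = \frac{c}{2B} = \frac{3\times 10^{8}\ \mathrm{m/s}}{2 \cdot 30\times 10^{9}\ \mathrm{Hz}} = 5\ \mathrm{mm}.
```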

  16. Fast 3-D Tomographic Microwave Imaging for Breast Cancer Detection

    PubMed Central

    Meaney, Paul M.; Kaufman, Peter A.; diFlorio-Alexander, Roberta M.; Paulsen, Keith D.

    2013-01-01

    Microwave breast imaging (using electromagnetic waves of frequencies around 1 GHz) has mostly remained at the research level for the past decade, gaining little clinical acceptance. The major hurdles limiting patient use are both at the hardware level (challenges in collecting accurate and noncorrupted data) and software level (often plagued by unrealistic reconstruction times in the tens of hours). In this paper we report improvements that address both issues. First, the hardware is able to measure signals down to levels compatible with sub-centimeter image resolution while keeping an exam time under 2 min. Second, the software overcomes the enormous time burden and produces similarly accurate images in less than 20 min. The combination of the new hardware and software allows us to produce and report here the first clinical 3-D microwave tomographic images of the breast. Two clinical examples are selected out of 400+ exams conducted at the Dartmouth Hitchcock Medical Center (Lebanon, NH). The first example demonstrates the potential usefulness of our system for breast cancer screening while the second example focuses on therapy monitoring. PMID:22562726

  17. Research of Fast 3D Imaging Based on Multiple Mode

    NASA Astrophysics Data System (ADS)

    Chen, Shibing; Yan, Huimin; Ni, Xuxiang; Zhang, Xiuda; Wang, Yu

    2016-02-01

    Three-dimensional (3D) imaging has received increasingly broad attention and is now widely used. Considerable effort has been devoted to 3D imaging methods and systems in order to meet requirements for speed and accuracy. In this article, we realize a fast, high-quality stereo matching algorithm on a field programmable gate array (FPGA) using the combination of a time-of-flight (TOF) camera and a binocular camera. Images captured by the two cameras share the same spatial resolution, letting us use the depth maps taken by the TOF camera to provide an initial disparity. With the depth map constraining the stereo pairs during matching, the expected disparity of each pixel is limited to a narrow search range. Meanwhile, using FPGA (Altera Cyclone IV series) concurrent computing, we configure a multi-core image matching system and thus perform stereo matching on an embedded system. The simulation results demonstrate that the approach speeds up stereo matching, increases matching reliability and stability, realizes embedded computation, and expands the application range.
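
    A hedged sketch of the core idea, TOF-seeded block matching over a narrow disparity range, is given below in plain Python (an FPGA implementation would pipeline this per pixel). The SAD cost, window size, and search radius are illustrative assumptions rather than the paper's parameters.

```python
import numpy as np

def tof_guided_disparity(left, right, tof_disparity, radius=3, block=5):
    """Block matching (SAD) with the search window centred on the TOF-derived disparity.

    left, right: rectified grayscale images (same resolution as the TOF depth map).
    tof_disparity: per-pixel initial disparity computed from the TOF depth
                   (e.g. d = f * baseline / Z); names here are illustrative.
    radius: half-width of the narrow search range around the initial disparity.
    """
    h, w = left.shape
    half = block // 2
    disp = np.zeros(left.shape, dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.float32)
            d0 = int(round(tof_disparity[y, x]))
            best, best_d = np.inf, d0
            for d in range(max(0, d0 - radius), d0 + radius + 1):
                if x - d - half < 0:
                    continue
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(np.float32)
                cost = np.abs(ref - cand).sum()          # sum of absolute differences
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp
```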

  18. Fast 3D subsurface imaging with stepped-frequency GPR

    NASA Astrophysics Data System (ADS)

    Masarik, Matthew P.; Burns, Joseph; Thelen, Brian T.; Sutter, Lena

    2015-05-01

    This paper investigates an algorithm for forming 3D images of the subsurface using stepped-frequency GPR data. The algorithm is specifically designed for a handheld GPR and therefore accounts for the irregular sampling pattern in the data and the spatially-variant air-ground interface by estimating an effective "ground-plane" and then registering the data to the plane. The algorithm efficiently solves the 4th-order polynomial for the Snell reflection points using a fully vectorized iterative scheme. The forward operator is implemented efficiently using an accelerated nonuniform FFT (Greengard and Lee, 2004); the adjoint operator is implemented efficiently using an interpolation step coupled with an upsampled FFT. The imaging is done as a linearized version of the full inverse problem, which is regularized using a sparsity constraint to reduce sidelobes and thereby improve image localization. Applying an appropriate sparsity constraint, the algorithm is able to eliminate most of the surrounding clutter and sidelobes, while still rendering valuable image properties such as shape and size. The algorithm is applied to simulated data, controlled experimental data (made available by Dr. Waymond Scott, Georgia Institute of Technology), and government-provided data with irregular sampling and air-ground interface.
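
    As a hedged sketch of sparsity-regularized linearized inversion (with a dense matrix standing in for the NUFFT-based forward/adjoint operators actually used), iterative soft-thresholding looks like this:

```python
import numpy as np

def ista(A, y, lam, n_iter=200):
    """Iterative soft-thresholding for min_x 0.5*||A x - y||^2 + lam*||x||_1.

    A is a stand-in for the linearized GPR forward operator; in practice the
    products A x and A^T r would be applied via the (adjoint) NUFFT.
    """
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the data-fit gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x
```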

  19. Image sequence coding using 3D scene models

    NASA Astrophysics Data System (ADS)

    Girod, Bernd

    1994-09-01

    The implicit and explicit use of 3D models for image sequence coding is discussed. For implicit use, a 3D model can be incorporated into motion compensating prediction. A scheme that estimates the displacement vector field with a rigid body motion constraint by recovering epipolar lines from an unconstrained displacement estimate and then repeating block matching along the epipolar line is proposed. Experimental results show that an improved displacement vector field can be obtained with a rigid body motion constraint. As an example for explicit use, various results with a facial animation model for videotelephony are discussed. A 13 X 16 B-spline mask can be adapted automatically to individual faces and is used to generate facial expressions based on FACS. A depth-from-defocus range camera suitable for real-time facial motion tracking is described. Finally, the real-time facial animation system `Traugott' is presented that has been used to generate several hours of broadcast video. Experiments suggest that a videophone system based on facial animation might require a transmission bitrate of 1 kbit/s or below.

  20. 3D Chemical and Elemental Imaging by STXM Spectrotomography

    SciTech Connect

    Wang, J.; Karunakaran, C.; Lu, Y.; Hormes, J.; Hitchcock, A. P.; Prange, A.; Franz, B.; Harkness, T.; Obst, M.

    2011-09-09

    Spectrotomography based on the scanning transmission x-ray microscope (STXM) at the 10ID-1 spectromicroscopy beamline of the Canadian Light Source was used to study two selected unicellular microorganisms. Spatial distributions of sulphur globules, calcium, protein, and polysaccharide in sulphur-metabolizing bacteria (Allochromatium vinosum) were determined at the S 2p, C 1s, and Ca 2p edges. 3D chemical mapping showed that the sulphur globules are located inside the bacteria with a strong spatial correlation with calcium ions (it is most probably calcium carbonate from the medium; however, with STXM the distribution and localization in the cell can be made visible, which is very interesting for a biologist) and polysaccharide-rich polymers, suggesting an influence of the organic components on the formation of the sulphur and calcium deposits. A second study investigated copper accumulating in yeast cells (Saccharomyces cerevisiae) treated with copper sulphate. 3D elemental imaging at the Cu 2p edge showed that Cu(II) is reduced to Cu(I) on the yeast cell wall. A novel needle-like wet cell sample holder for STXM spectrotomography studies of fully hydrated samples is discussed.

  1. 3D Chemical and Elemental Imaging by STXM Spectrotomography

    NASA Astrophysics Data System (ADS)

    Wang, J.; Hitchcock, A. P.; Karunakaran, C.; Prange, A.; Franz, B.; Harkness, T.; Lu, Y.; Obst, M.; Hormes, J.

    2011-09-01

    Spectrotomography based on the scanning transmission x-ray microscope (STXM) at the 10ID-1 spectromicroscopy beamline of the Canadian Light Source was used to study two selected unicellular microorganisms. Spatial distributions of sulphur globules, calcium, protein, and polysaccharide in sulphur-metabolizing bacteria (Allochromatium vinosum) were determined at the S 2p, C 1s, and Ca 2p edges. 3D chemical mapping showed that the sulphur globules are located inside the bacteria with a strong spatial correlation with calcium ions (it is most probably calcium carbonate from the medium; however, with STXM the distribution and localization in the cell can be made visible, which is very interesting for a biologist) and polysaccharide-rich polymers, suggesting an influence of the organic components on the formation of the sulphur and calcium deposits. A second study investigated copper accumulating in yeast cells (Saccharomyces cerevisiae) treated with copper sulphate. 3D elemental imaging at the Cu 2p edge showed that Cu(II) is reduced to Cu(I) on the yeast cell wall. A novel needle-like wet cell sample holder for STXM spectrotomography studies of fully hydrated samples is discussed.

  2. 3D x-ray reconstruction using lightfield imaging

    NASA Astrophysics Data System (ADS)

    Saha, Sajib; Tahtali, Murat; Lambert, Andrew; Pickering, Mark R.

    2014-09-01

    Existing Computed Tomography (CT) systems require projections over a full 360° rotation. Using the principles of lightfield imaging, as few as 4 projections can be sufficient under ideal conditions when the object is illuminated with multiple-point X-ray sources. The concept was presented in a previous work with synthetically sampled data from a synthetic phantom. Application to real data requires precise calibration of the physical setup. This current work presents the calibration procedures along with experimental findings for the reconstruction of a physical 3D phantom consisting of simple geometric shapes. The crucial part of this process is to determine the effective distances of the X-ray paths, which are impossible or very difficult to obtain by direct measurement. Instead, they are calculated by tracking the positions of fiducial markers under prescribed source and object movements. Iterative algorithms are used for the reconstruction. Customized backprojection is used to provide a better initial guess for the iterative algorithms to start with.

  3. Automatic airline baggage counting using 3D image segmentation

    NASA Astrophysics Data System (ADS)

    Yin, Deyu; Gao, Qingji; Luo, Qijun

    2017-06-01

    The number of bags needs to be checked automatically during baggage self-check-in. A fast airline baggage counting method is proposed in this paper using image segmentation of a height map projected from the scanned baggage 3D point cloud. There is a height drop at the actual edge of a bag, so the edges can be detected by an edge detection operator. Closed edge chains are then formed by linking the edge lines through morphological processing. Finally, the number of connected regions segmented by the closed chains is taken as the baggage count. A multi-bag experiment performed under different placement modes demonstrates the validity of the method.
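
    A hedged sketch of the described pipeline using scikit-image is shown below; the gradient threshold, structuring-element size, minimum region area, and minimum height are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from skimage import filters, measure, morphology

def count_bags(height_map, drop_thresh=0.05, min_area=2000, min_height=0.03):
    """Count bags in a height map projected from the scanned 3D point cloud.

    height_map: 2-D array of heights above the belt (assumed in metres).
    """
    edges = filters.sobel(height_map) > drop_thresh                 # height-drop edges
    closed = morphology.binary_closing(edges, morphology.disk(5))   # close edge chains
    regions = measure.label(~closed)                                # regions enclosed by chains
    count = 0
    for r in measure.regionprops(regions):
        # Keep regions that are large enough and actually elevated above the belt.
        if r.area >= min_area and height_map[regions == r.label].mean() > min_height:
            count += 1
    return count
```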

  4. 3D imaging of semiconductor components by discrete laminography

    SciTech Connect

    Batenburg, K. J.; Palenstijn, W. J.; Sijbers, J.

    2014-06-19

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.

  5. 3D imaging of semiconductor components by discrete laminography

    NASA Astrophysics Data System (ADS)

    Batenburg, K. J.; Palenstijn, W. J.; Sijbers, J.

    2014-06-01

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.

  6. Quantitative Multiscale Cell Imaging in Controlled 3D Microenvironments

    PubMed Central

    Welf, Erik S.; Driscoll, Meghan K.; Dean, Kevin M.; Schäfer, Claudia; Chu, Jun; Davidson, Michael W.; Lin, Michael Z.; Danuser, Gaudenz; Fiolka, Reto

    2016-01-01

    The microenvironment determines cell behavior, but the underlying molecular mechanisms are poorly understood because quantitative studies of cell signaling and behavior have been challenging due to insufficient spatial and/or temporal resolution and limitations on microenvironmental control. Here we introduce microenvironmental selective plane illumination microscopy (meSPIM) for imaging and quantification of intracellular signaling and submicrometer cellular structures as well as large-scale cell morphological and environmental features. We demonstrate the utility of this approach by showing that the mechanical properties of the microenvironment regulate the transition of melanoma cells from actin-driven protrusion to blebbing, and we present tools to quantify how cells manipulate individual collagen fibers. We leverage the nearly isotropic resolution of meSPIM to quantify the local concentration of actin and phosphatidylinositol 3-kinase signaling on the surfaces of cells deep within 3D collagen matrices and track the many small membrane protrusions that appear in these more physiologically relevant environments. PMID:26906741

  7. Unsupervised fuzzy segmentation of 3D magnetic resonance brain images

    NASA Astrophysics Data System (ADS)

    Velthuizen, Robert P.; Hall, Lawrence O.; Clarke, Laurence P.; Bensaid, Amine M.; Arrington, J. A.; Silbiger, Martin L.

    1993-07-01

    Unsupervised fuzzy methods are proposed for segmentation of 3D Magnetic Resonance images of the brain. Fuzzy c-means (FCM) has shown promising results for segmentation of single slices. FCM has been investigated for volume segmentations, both by combining results of single slices and by segmenting the full volume. Different strategies and initializations have been tried. In particular, two approaches have been used: (1) a method by which, iteratively, the furthest sample is split off to form a new cluster center, and (2) the traditional FCM in which the membership grade matrix is initialized in some way. Results have been compared with volume segmentations by k-means and with two supervised methods, k-nearest neighbors and region growing. Results of individual segmentations are presented as well as comparisons on the application of the different methods to a number of tumor patient data sets.
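
    For reference, a plain NumPy implementation of the standard fuzzy c-means iteration (not the paper's specific initialization variants) is sketched below; feature dimensions and stopping tolerance are illustrative.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, eps=1e-5):
    """Plain fuzzy c-means on feature vectors X (n_samples x n_features).

    For MR brain segmentation, X would hold per-voxel intensities (e.g. T1/T2/PD)
    flattened from a slice or volume. Returns (cluster centers, membership matrix).
    """
    n = X.shape[0]
    U = np.random.dirichlet(np.ones(c), size=n)        # initial membership matrix (n x c)
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))         # u_ik proportional to d_ik^(-2/(m-1))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < eps:
            U = U_new
            break
        U = U_new
    return centers, U
```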

  8. 3D and multispectral imaging for subcutaneous veins detection.

    PubMed

    Paquit, Vincent C; Tobin, Kenneth W; Price, Jeffery R; Mèriaudeau, Fabrice

    2009-07-06

    The first and perhaps most important phase of a surgical procedure is the insertion of an intravenous (IV) catheter. Currently, this is performed manually by trained personnel. In some visions of future operating rooms, however, this process is to be replaced by an automated system. Experiments to determine the best NIR wavelengths to optimize vein contrast across physiological differences such as skin tone and/or the presence of hair on the arm or wrist surface are presented. For illumination, our system is composed of a mercury arc lamp coupled to a 10 nm band-pass spectrometer. A structured lighting system is also coupled to our multispectral system in order to provide 3D information on the patient's arm orientation. Images of each patient's arm are captured under every possible combination of illuminants, and the combination of wavelengths that maximizes vein contrast for a given subject is determined using linear discriminant analysis.

  9. A new 3D texture feature based computer-aided diagnosis approach to differentiate pulmonary nodules

    NASA Astrophysics Data System (ADS)

    Han, Fangfang; Wang, Huafeng; Song, Bowen; Zhang, Guopeng; Lu, Hongbing; Moore, William; Zhao, Hong; Liang, Zhengrong

    2013-02-01

    Distinguishing malignant pulmonary nodules from benign ones is of great importance in computer-aided diagnosis of lung diseases. Compared to many previous methods based on assessing nodule shape or growth, the proposed three-dimensional (3D) texture feature based approach extracts fifty kinds of 3D textural features from gray-level, gradient, and curvature co-occurrence matrices and other derivatives of the nodule volume data. To evaluate the presented approach, the Lung Image Database Consortium public database was downloaded. Each case in the database contains an annotation file indicating the diagnosis results from up to four radiologists. To mitigate the partial-volume effect, interpolation was applied to volumes with slice thickness greater than 1 mm, and the downloaded datasets were categorized into five groups for validation: one group with slice thickness below 1 mm, and two thickness ranges (1 mm to 1.25 mm, and greater than 1.25 mm), each split into two groups, one with interpolation and the other without. Since the support vector machine is based on statistical learning theory and aims at predicting future data, it was chosen as the classifier to perform the differentiation task. Performance was measured by the area under the receiver operating characteristic curve (AUC). From 284 nodules (122 malignant and 162 benign), the validation experiments reported a mean AUC of 0.9051 with a standard deviation of 0.0397, averaged over 100 randomizations.
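
    The exact validation split is not specified in the abstract; as a hedged sketch, the snippet below assumes a stratified 70/30 split repeated over 100 randomizations, with random placeholder data standing in for the 3-D texture features.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# X: n_nodules x n_features matrix, y: 1 = malignant, 0 = benign.
# Placeholder random data stands in for the fifty 3-D texture features.
rng = np.random.default_rng(0)
X = rng.normal(size=(284, 50))
y = np.concatenate([np.ones(122), np.zeros(162)]).astype(int)

aucs = []
for seed in range(100):                        # 100 randomizations, as in the abstract
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=seed)
    clf = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
    aucs.append(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

print(f"AUC mean = {np.mean(aucs):.4f}, std = {np.std(aucs):.4f}")
```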

  10. Needle placement for piriformis injection using 3-D imaging.

    PubMed

    Clendenen, Steven R; Candler, Shawn A; Osborne, Michael D; Palmer, Scott C; Duench, Stephanie; Glynn, Laura; Ghazi, Salim M

    2013-01-01

    Piriformis syndrome is a pain syndrome originating in the buttock and is attributed to 6% - 8% of patients referred for the treatment of back and leg pain. Treatment of piriformis syndrome using fluoroscopy, computed tomography (CT), electromyography (EMG), and ultrasound (US) has become standard practice, and has evolved to include fluoroscopy and EMG with CT guidance. We present a case study of 5 successful piriformis injections using 3-D computer-assisted electromagnetic needle tracking coupled with ultrasound. A 6-degree-of-freedom electromagnetic position tracker was attached to the ultrasound probe, allowing the system to detect the position and orientation of the probe in the magnetic field. The tracked ultrasound probe was used to find the posterior superior iliac spine. Subsequently, 3 points were captured to register the ultrasound image with the CT or magnetic resonance image scan. After registration was obtained, the navigation system visualized the tracked needle relative to the CT scan in real time using 2 orthogonal multi-planar reconstructions centered at the tracked needle tip. By comparison, a recent study revealed that fluoroscopically guided injections had 30% accuracy, while ultrasound-guided injections roughly tripled that accuracy. This novel technique exhibited an accurate needle-guidance precision of 98% while advancing to the piriformis muscle and avoiding the sciatic nerve. The mean (± SD) procedure time was 19.08 (± 4.9) minutes. This technique allows for electromagnetic instrument tip tracking with real-time 3-D guidance to the selected target. As with any new technique, a learning curve is expected; however, this technique could offer an alternative that minimizes radiation exposure.

  11. Military efforts in nanosensors, 3D printing, and imaging detection

    NASA Astrophysics Data System (ADS)

    Edwards, Eugene; Booth, Janice C.; Roberts, J. Keith; Brantley, Christina L.; Crutcher, Sihon H.; Whitley, Michael; Kranz, Michael; Seif, Mohamed; Ruffin, Paul

    2017-04-01

    A team of researchers and support organizations, affiliated with the Army Aviation and Missile Research, Development, and Engineering Center (AMRDEC), has initiated multidiscipline efforts to develop nano-based structures and components for advanced weaponry, aviation, and autonomous air/ground systems applications. The main objective of this research is to exploit unique phenomena for the development of novel technology to enhance warfighter capabilities and produce precision weaponry. The key technology areas that the authors are exploring include nano-based sensors, analysis of 3D printing constituents, and nano-based components for imaging detection. By integrating nano-based devices, structures, and materials into weaponry, the Army can revolutionize existing (and future) weaponry systems by significantly reducing the size, weight, and cost. The major research thrust areas include the development of carbon nanotube sensors to detect rocket motor off-gassing; the application of current methodologies to assess materials used for 3D printing; and the assessment of components to improve imaging seekers. The status of current activities, associated with these key areas and their implementation into AMRDEC's research, is outlined in this paper. Section #2 outlines output data, graphs, and overall evaluations of carbon nanotube sensors placed on a 16 element chip and exposed to various environmental conditions. Section #3 summarizes the experimental results of testing various materials and resulting components that are supplementary to additive manufacturing/fused deposition modeling (FDM). Section #4 recapitulates a preliminary assessment of the optical and electromechanical components of seekers in an effort to propose components and materials that can work more effectively.

  12. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation

    PubMed Central

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-01-01

    Cardiac computed tomography (CCT) is a reliable and accurate tool for the diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low-dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated in a clinical study of 14 patients, and its performance was compared to 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual, quantitative, and qualitative assessment results show that the proposed approach leads to effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard-dose CT (SDCT) images. PMID:26980176
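
    To illustrate the general idea (not the paper's 3-D SR method, which operates on 3-D patches), the sketch below performs slice-wise sparse-representation denoising with scikit-learn's 2-D patch utilities; patch size, dictionary size, and sparsity level are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

def sparse_denoise_slice(noisy, patch_size=(8, 8), n_atoms=64, n_nonzero=4):
    """Slice-wise sparse-representation denoising of a low-dose CT image."""
    # All overlapping patches are needed to reconstruct the full slice.
    patches = extract_patches_2d(noisy, patch_size)
    data = patches.reshape(patches.shape[0], -1)
    mean = data.mean(axis=1, keepdims=True)
    data = data - mean                                   # remove per-patch DC offset
    # Learn the dictionary on a random subset of patches to keep training fast.
    rng = np.random.default_rng(0)
    train = data[rng.choice(len(data), size=min(20000, len(data)), replace=False)]
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=n_nonzero,
                                       random_state=0).fit(train)
    code = dico.transform(data)                          # sparse codes for all patches
    denoised = (code @ dico.components_ + mean).reshape(patches.shape)
    return reconstruct_from_patches_2d(denoised, noisy.shape)
```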

  13. Efficient curve-skeleton computation for the analysis of biomedical 3d images - biomed 2010.

    PubMed

    Brun, Francesco; Dreossi, Diego

    2010-01-01

    Advances in three-dimensional (3D) biomedical imaging techniques, such as magnetic resonance (MR) and computed tomography (CT), make it easy to reconstruct high-quality 3D models of portions of the human body and other biological specimens. A major challenge lies in the quantitative analysis of the resulting models, which allows a more comprehensive characterization of the object under investigation. An interesting approach is based on curve-skeleton (or medial axis) extraction, which gives basic information concerning the topology and geometry. Curve-skeletons have been applied in the analysis of vascular networks and the diagnosis of tracheal stenoses, as well as to compute 3D flight paths in virtual endoscopy. However, curve-skeleton computation is a crucial task. An effective skeletonization algorithm was introduced by N. Cornea in [1], but it lacks computational efficiency. As advances in imaging techniques keep increasing the resolution of 3D images, efficient algorithms are needed in order to analyze significant Volumes of Interest (VOIs). In the present paper an improved skeletonization algorithm based on the idea proposed in [1] is presented, together with a computational comparison between the original and the proposed method. The obtained results show that the proposed method yields a significant computational improvement, making the skeleton representation more appealing for biomedical image analysis applications.
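
    As a hedged sketch of the operation being optimized (not the authors' algorithm), a 3-D curve-skeleton can be computed with scikit-image on a binary volume; the placeholder tube below only serves to make the snippet runnable.

```python
import numpy as np
from skimage.morphology import skeletonize

# binary_volume: a 3-D boolean array, e.g. a segmented vascular network from CT/MR.
# A thick L-shaped tube is used here as a placeholder so the sketch is runnable.
binary_volume = np.zeros((64, 64, 64), dtype=bool)
binary_volume[10:54, 28:36, 28:36] = True
binary_volume[46:54, 10:36, 28:36] = True

# Recent scikit-image releases dispatch skeletonize() to a 3-D (Lee) method for
# volumetric input; older releases expose the same operation as skeletonize_3d().
skeleton = skeletonize(binary_volume)
print("skeleton voxels:", int(skeleton.sum()))
```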

  14. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation.

    PubMed

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-03-16

    Cardiac computed tomography (CCT) is a reliable and accurate tool for the diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low-dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated in a clinical study of 14 patients, and its performance was compared to 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual, quantitative, and qualitative assessment results show that the proposed approach leads to effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard-dose CT (SDCT) images.

  15. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation

    NASA Astrophysics Data System (ADS)

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-03-01

    Cardiac computed tomography (CCT) is a reliable and accurate tool for the diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low-dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated in a clinical study of 14 patients, and its performance was compared to 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual, quantitative, and qualitative assessment results show that the proposed approach leads to effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard-dose CT (SDCT) images.

  16. GPU-accelerated denoising of 3D magnetic resonance images

    SciTech Connect

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
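
    The GPU kernels themselves are not reproduced here; as a hedged sketch of the parameter-selection idea, the snippet below sweeps bilateral-filter parameters on one MR slice with scikit-image's CPU filter and ranks them by MSE and structural similarity against a reference image. Parameter grids are illustrative assumptions.

```python
import numpy as np
from skimage.restoration import denoise_bilateral
from skimage.metrics import mean_squared_error, structural_similarity

def sweep_bilateral(noisy_slice, reference_slice, win_sizes=(3, 5, 7, 9),
                    sigma_colors=(0.05, 0.1, 0.2), sigma_spatials=(1, 2, 4)):
    """Grid-search bilateral-filter parameters on one slice using MSE and SSIM."""
    results = []
    data_range = reference_slice.max() - reference_slice.min()
    for w in win_sizes:
        for sc in sigma_colors:
            for ss in sigma_spatials:
                den = denoise_bilateral(noisy_slice, win_size=w,
                                        sigma_color=sc, sigma_spatial=ss)
                results.append((w, sc, ss,
                                mean_squared_error(reference_slice, den),
                                structural_similarity(reference_slice, den,
                                                      data_range=data_range)))
    # Return the best parameter set by structural similarity (higher is better).
    return max(results, key=lambda r: r[-1])
```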

  17. 3D lesion insertion in digital breast tomosynthesis images

    NASA Astrophysics Data System (ADS)

    Vaz, Michael S.; Besnehard, Quentin; Marchessoux, Cédric

    2011-03-01

    Digital breast tomosynthesis (DBT) is a new volumetric breast cancer screening modality. It is based on the principles of computed tomography (CT) and shows promise for improving sensitivity and specificity compared to digital mammography, which is the current standard protocol. A barrier to critically evaluating any new modality, including DBT, is the lack of patient data from which statistically significant conclusions can be drawn; such studies require large numbers of images from both diseased and healthy patients. Since the number of detected lesions is low in relation to the entire breast cancer screening population, there is a particular need to acquire or otherwise create diseased patient data. To meet this challenge, we propose a method to insert 3D lesions in the DBT images of healthy patients, such that the resulting images appear qualitatively faithful to the modality and could be used in future clinical trials or virtual clinical trials (VCTs). The method facilitates direct control of lesion placement and lesion-to-background contrast and is agnostic to the DBT reconstruction algorithm employed.

  18. HIPERCIR: a low-cost high-performance 3D radiology image analysis system

    NASA Astrophysics Data System (ADS)

    Blanquer, Ignacio; Hernandez, Vincente; Ramirez, Javier; Vidal, Antonio M.; Alcaniz-Raya, Mariano L.; Grau Colomer, Vincente; Monserrat, Carlos A.; Concepcion, Luis; Marti-Bonmati, Luis

    1999-07-01

    Clinics currently have to deal with hundreds of 3D images a day, and processing and visualizing them with affordable systems is costly and slow. The present work describes the features of an integrated parallel computing software package developed at the Universidad Politecnica de Valencia (UPV) under the European project HIPERCIR, which aims to reduce the time and requirements for processing and visualizing 3D images with low-cost solutions, such as networks of PCs running standard operating systems. HIPERCIR is targeted at radiology departments of hospitals and radiology system providers, giving them a tool to ease day-to-day diagnosis. The project is being developed by a consortium of medical image processing and parallel computing experts from the Computing Systems Department of the UPV, biomedical software experts, and radiology and tomography clinical experts.

  19. Silhouette-based approach of 3D image reconstruction for automated image acquisition using robotic arm

    NASA Astrophysics Data System (ADS)

    Azhar, N.; Saad, W. H. M.; Manap, N. A.; Saad, N. M.; Syafeeza, A. R.

    2017-06-01

    This study presents an approach to 3D image reconstruction using an autonomous robotic arm for the image acquisition process. A low-cost automated imaging platform is created using a pair of G15 servo motors connected in series to an Arduino UNO as the main microcontroller. Two sets of sequential images were obtained using different projection angles of the camera. A silhouette-based approach is used for 3D reconstruction from the sequential images captured from several different angles of the object. In addition, an analysis of the effect of the number of sequential images on the accuracy of the 3D model reconstruction was carried out with a fixed camera projection angle. The factors affecting the 3D reconstruction are discussed, and the overall result of the analysis is summarized for the imaging platform prototype.
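
    A hedged, toy sketch of silhouette (visual-hull) carving is given below; it assumes the per-view camera projection matrices are known from the fixed arm geometry and is not the paper's actual pipeline.

```python
import numpy as np

def carve_visual_hull(silhouettes, projections, grid_min, grid_max, n=64):
    """Minimal silhouette carving over a cubic voxel grid.

    silhouettes: list of binary masks (H x W); projections: matching list of 3x4
    camera projection matrices (assumed known). Voxels are assumed to lie in
    front of every camera. Returns a boolean occupancy volume of shape (n, n, n).
    """
    axes = [np.linspace(grid_min[i], grid_max[i], n) for i in range(3)]
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    pts = np.stack([X, Y, Z, np.ones_like(X)], axis=-1).reshape(-1, 4)  # homogeneous
    occupied = np.ones(pts.shape[0], dtype=bool)
    for mask, P in zip(silhouettes, projections):
        uvw = pts @ P.T                                  # project voxel centres
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
        inside = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
        keep = np.zeros_like(occupied)
        keep[inside] = mask[v[inside], u[inside]] > 0
        occupied &= keep                                 # carve voxels outside any silhouette
    return occupied.reshape(n, n, n)
```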

  20. High resolution 3D imaging of synchrotron generated microbeams

    SciTech Connect

    Gagliardi, Frank M.; Cornelius, Iwan; Blencowe, Anton; Franich, Rick D.; Geso, Moshi

    2015-12-15

    Purpose: Microbeam radiation therapy (MRT) techniques are under investigation at synchrotrons worldwide. Favourable outcomes from animal and cell culture studies have proven the efficacy of MRT. The aim of MRT researchers currently is to progress to human clinical trials in the near future. The purpose of this study was to demonstrate the high resolution and 3D imaging of synchrotron generated microbeams in PRESAGE® dosimeters using laser fluorescence confocal microscopy. Methods: Water equivalent PRESAGE® dosimeters were fabricated and irradiated with microbeams on the Imaging and Medical Beamline at the Australian Synchrotron. Microbeam arrays comprised of microbeams 25–50 μm wide with 200 or 400 μm peak-to-peak spacing were delivered as single, cross-fire, multidirectional, and interspersed arrays. Imaging of the dosimeters was performed using a NIKON A1 laser fluorescence confocal microscope. Results: The spatial fractionation of the MRT beams was clearly visible in 2D and up to 9 mm in depth. Individual microbeams were easily resolved with the full width at half maximum of microbeams measured on images with resolutions of as low as 0.09 μm/pixel. Profiles obtained demonstrated the change of the peak-to-valley dose ratio for interspersed MRT microbeam arrays and subtle variations in the sample positioning by the sample stage goniometer were measured. Conclusions: Laser fluorescence confocal microscopy of MRT irradiated PRESAGE® dosimeters has been validated in this study as a high resolution imaging tool for the independent spatial and geometrical verification of MRT beam delivery.

  1. Filters in 2D and 3D Cardiac SPECT Image Processing.

    PubMed

    Lyra, Maria; Ploussi, Agapi; Rouchota, Maritina; Synefia, Stella

    2014-01-01

    Nuclear cardiac imaging is a noninvasive, sensitive method providing information on cardiac structure and physiology. Single photon emission tomography (SPECT) evaluates myocardial perfusion, viability, and function and is widely used in clinical routine. The quality of the tomographic image is key to accurate diagnosis. Image filtering, a mathematical processing step, compensates for loss of detail in an image while reducing image noise; it can improve image resolution and limit image degradation. SPECT images are then reconstructed either by the filtered back projection (FBP) analytical technique or iteratively, by algebraic methods. The aim of this study is to review filters in cardiac 2D, 3D, and 4D SPECT applications and how they affect image quality, mirroring the diagnostic accuracy of SPECT images. Several filters, including the Hanning, Butterworth, and Parzen filters, were evaluated in combination with the two reconstruction methods as well as with a specified MatLab program. Results showed that for both 3D and 4D cardiac SPECT the Butterworth filter, for different critical frequencies and orders, produced the best results. Between the two reconstruction methods, the iterative one may be more appropriate for cardiac SPECT, since it improves lesion detectability due to the significant improvement of image contrast.
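
    For reference, a minimal frequency-domain Butterworth low-pass filter applied to a single SPECT slice could look like the sketch below; the critical frequency and order are illustrative, not the study's settings.

```python
import numpy as np

def butterworth_lowpass(image, cutoff=0.25, order=5):
    """Apply a Butterworth low-pass filter to a 2-D slice in the frequency domain.

    cutoff is the critical frequency in cycles/pixel (Nyquist = 0.5);
    order controls the steepness of the roll-off.
    """
    fy = np.fft.fftfreq(image.shape[0])
    fx = np.fft.fftfreq(image.shape[1])
    f = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)     # radial spatial frequency
    H = 1.0 / (1.0 + (f / cutoff) ** (2 * order))        # Butterworth frequency response
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))
```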

  2. Filters in 2D and 3D Cardiac SPECT Image Processing

    PubMed Central

    Ploussi, Agapi; Synefia, Stella

    2014-01-01

    Nuclear cardiac imaging is a noninvasive, sensitive method providing information on cardiac structure and physiology. Single photon emission tomography (SPECT) evaluates myocardial perfusion, viability, and function and is widely used in clinical routine. The quality of the tomographic image is key to accurate diagnosis. Image filtering, a mathematical processing step, compensates for loss of detail in an image while reducing image noise; it can improve image resolution and limit image degradation. SPECT images are then reconstructed either by the filtered back projection (FBP) analytical technique or iteratively, by algebraic methods. The aim of this study is to review filters in cardiac 2D, 3D, and 4D SPECT applications and how they affect image quality, mirroring the diagnostic accuracy of SPECT images. Several filters, including the Hanning, Butterworth, and Parzen filters, were evaluated in combination with the two reconstruction methods as well as with a specified MatLab program. Results showed that for both 3D and 4D cardiac SPECT the Butterworth filter, for different critical frequencies and orders, produced the best results. Between the two reconstruction methods, the iterative one may be more appropriate for cardiac SPECT, since it improves lesion detectability due to the significant improvement of image contrast. PMID:24804144

  3. Micro-optical system based 3D imaging for full HD depth image capturing

    NASA Astrophysics Data System (ADS)

    Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan

    2012-03-01

    A 20 MHz-switching high-speed image shutter device for 3D image capture and its application to a system prototype are presented. For 3D image capture, the system uses the time-of-flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, a so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure with diffractive mirrors and an optical resonance cavity, which maximizes the magnitude of optical modulation. The optical shutter device is specially designed and fabricated with low resistance-capacitance cell structures having a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The proposed optical shutter device enables capture of a full HD depth image with depth accuracy on the millimeter scale, which is the largest depth-image resolution among state-of-the-art systems, which have been limited to VGA. The 3D camera prototype realizes a color/depth concurrent sensing optical architecture to capture 14 Mp color and full HD depth images simultaneously. The resulting high-definition color/depth images and their capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, 3D camera system prototype, and image test results.
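
    For context, assuming the standard continuous-wave TOF relations (not necessarily the exact signal processing of this system), depth follows from the measured phase shift of the 20 MHz modulation, with a corresponding unambiguous range of

```latex
d = \frac{c\,\Delta\varphi}{4\pi f_{\mathrm{mod}}}, \qquad
d_{\max} = \frac{c}{2 f_{\mathrm{mod}}} = \frac{3\times 10^{8}\ \mathrm{m/s}}{2 \cdot 20\ \mathrm{MHz}} = 7.5\ \mathrm{m}.
```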

  4. A survey among Brazilian thoracic surgeons about the use of preoperative 2D and 3D images

    PubMed Central

    Cipriano, Federico Enrique Garcia; Arcêncio, Livia; Dessotte, Lycio Umeda; Rodrigues, Alfredo José; Vicente, Walter Villela de Andrade

    2016-01-01

    Background To describe how thoracic surgeons use 2D/3D medical imaging for surgical planning, clinical practice, and teaching in thoracic surgery, and to assess the initial and final preferences of Brazilian thoracic surgeons for 2D and 3D image models before and after acquiring theoretical knowledge on the generation, manipulation, and interactive viewing of 3D images. Methods A descriptive cross-sectional survey of data provided by Brazilian thoracic surgeons (members of the Brazilian Society of Thoracic Surgery) who responded to the online questionnaire via the internet on their computers or personal devices. Results Of the 395 invitations distributed by email and viewed, 107 surgeons completed the survey. There was no statistically significant difference when comparing 2D images vs. 3D model images for the following purposes: diagnosis, assessment of the extent of disease, preoperative surgical planning, communication among physicians, resident training, and undergraduate medical education. Regarding the type of tomographic image display routinely used in clinical practice (2D, 3D, or combined 2D-3D) versus the one preferred by the surgeon at the end of the questionnaire: for exclusive use of 2D images, the initial choice was 50.47% and the final preference 14.02%; for use of 3D models in combination with 2D images, the initial choice was 48.60% and the final preference 85.05%. There was a significant change in the final selection of 3D models used together with 2D images (P<0.0001). Conclusions There is a lack of knowledge of 3D imaging, as well as of its use and interactive manipulation in dedicated 3D applications, with a consequent lack of uniformity in surgical planning based on CT images. These findings confirm a shift in thoracic surgeons' preference from 2D views toward 3D imaging technologies. PMID:27621874

  5. Performance assessment of 3D surface imaging technique for medical imaging applications

    NASA Astrophysics Data System (ADS)

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

    Recent developments in optical 3D surface imaging technologies provide better ways to digitize a 3D surface and its motion in real time. The non-invasive 3D surface imaging approach has great potential for many medical imaging applications, such as motion monitoring in radiotherapy and pre/post evaluation in plastic surgery and dermatology, to name a few. Various commercial 3D surface imaging systems have appeared on the market with different dimensions, speeds, and accuracies. For clinical applications, accuracy, reproducibility, and robustness across widely heterogeneous skin colors, tones, textures, shape properties, and ambient lighting are crucial. To date, no systematic approach for evaluating the performance of different 3D surface imaging systems exists. In this paper, we present a systematic performance assessment approach for 3D surface imaging systems for medical applications. We use this approach to examine a new real-time surface imaging system we developed, dubbed the "Neo3D Camera", for image-guided radiotherapy (IGRT). The assessments include accuracy, field of view, coverage, repeatability, speed, and sensitivity to environment, texture, and color.

  6. A Simple Quality Assessment Index for Stereoscopic Images Based on 3D Gradient Magnitude

    PubMed Central

    Wang, Shanshan; Shao, Feng; Li, Fucui; Yu, Mei; Jiang, Gangyi

    2014-01-01

    We present a simple quality assessment index for stereoscopic images based on 3D gradient magnitude. To be more specific, we construct 3D volume from the stereoscopic images across different disparity spaces and calculate pointwise 3D gradient magnitude similarity (3D-GMS) along three horizontal, vertical, and viewpoint directions. Then, the quality score is obtained by averaging the 3D-GMS scores of all points in the 3D volume. Experimental results on four publicly available 3D image quality assessment databases demonstrate that, in comparison with the most related existing methods, the devised algorithm achieves high consistency alignment with subjective assessment. PMID:25133265
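
    A minimal NumPy sketch of the pointwise gradient-magnitude similarity over a 3-D volume is shown below; the stability constant is borrowed from 2-D GMS work and is not necessarily the authors' choice.

```python
import numpy as np

def gms_3d(vol_ref, vol_dist, c=0.0026):
    """Mean pointwise gradient-magnitude similarity between two 3-D volumes.

    Gradients are taken along the horizontal, vertical and viewpoint axes;
    c is a small constant for numerical stability.
    """
    g_ref = np.sqrt(sum(g ** 2 for g in np.gradient(vol_ref)))
    g_dist = np.sqrt(sum(g ** 2 for g in np.gradient(vol_dist)))
    gms = (2.0 * g_ref * g_dist + c) / (g_ref ** 2 + g_dist ** 2 + c)
    return gms.mean()
```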

  7. Automated segmentation and geometrical modeling of the tricuspid aortic valve in 3D echocardiographic images.

    PubMed

    Pouch, Alison M; Wang, Hongzhi; Takabe, Manabu; Jackson, Benjamin M; Sehgal, Chandra M; Gorman, Joseph H; Gorman, Robert C; Yushkevich, Paul A

    2013-01-01

    The aortic valve has been described with variable anatomical definitions, and the consistency of 2D manual measurement of valve dimensions in medical image data has been questionable. Given the importance of image-based morphological assessment in the diagnosis and surgical treatment of aortic valve disease, there is considerable need to develop a standardized framework for 3D valve segmentation and shape representation. Towards this goal, this work integrates template-based medial modeling and multi-atlas label fusion techniques to automatically delineate and quantitatively describe aortic leaflet geometry in 3D echocardiographic (3DE) images, a challenging task that has been explored only to a limited extent. The method makes use of expert knowledge of aortic leaflet image appearance, generates segmentations with consistent topology, and establishes a shape-based coordinate system on the aortic leaflets that enables standardized automated measurements. In this study, the algorithm is evaluated on 11 3DE images of normal human aortic leaflets acquired at mid systole. The clinical relevance of the method is its ability to capture leaflet geometry in 3DE image data with minimal user interaction while producing consistent measurements of 3D aortic leaflet geometry.

  8. 3D Slicer as an Image Computing Platform for the Quantitative Imaging Network

    PubMed Central

    Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V.; Pieper, Steve; Kikinis, Ron

    2012-01-01

    Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of the clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to reproducibility and efficiency of the quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of the new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm, and providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications. To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future

  9. 3D Seismic Imaging over a Potential Collapse Structure

    NASA Astrophysics Data System (ADS)

    Gritto, Roland; O'Connell, Daniel; Elobaid Elnaiem, Ali; Mohamed, Fathelrahman; Sadooni, Fadhil

    2016-04-01

    The Middle-East has seen a recent boom in construction including the planning and development of complete new sub-sections of metropolitan areas. Before planning and construction can commence, however, the development areas need to be investigated to determine their suitability for the planned project. Subsurface parameters such as the type of material (soil/rock), thickness of top soil or rock layers, and depth and elastic parameters of the basement, for example, comprise important information needed before a decision concerning the suitability of the site for construction can be made. A similar problem arises in environmental impact studies, when subsurface parameters are needed to assess the geological heterogeneity of the subsurface. Environmental impact studies are typically required for each construction project, particularly at the scale of the aforementioned building boom in the Middle East. The current study was conducted in Qatar at the location of a future highway interchange to evaluate a suite of 3D seismic techniques in their effectiveness to interrogate the subsurface for the presence of karst-like collapse structures. The survey comprised an area of approximately 10,000 m² and consisted of 550 source and 192 receiver locations. The seismic source was an accelerated weight drop, while the geophones consisted of 3-component 10 Hz velocity sensors. To date, we have analyzed over 100,000 P-wave phase arrivals and performed high-resolution 3D tomographic imaging of the shallow subsurface. Furthermore, dispersion analysis of recorded surface waves will be performed to obtain S-wave velocity profiles of the subsurface. Both results, in conjunction with density estimates, will be utilized to determine the elastic moduli of the subsurface rock layers.

  10. Venus Topography in 3D: Imaging of Coronae and Chasmata

    NASA Astrophysics Data System (ADS)

    Jurdy, D. M.; Stefanick, M.; Stoddard, P. R.

    2006-12-01

    Venus' surface hosts hundreds of circular to elongate features, ranging from 60 to 2600 km in diameter and averaging somewhat over 200 km. These enigmatic structures have been classified as "coronae" and attributed to either tectono-volcanic or impact-related mechanisms. A linear to arcuate system of chasmata - rugged zones containing some of Venus' deepest troughs - extends for thousands of kilometers. They have extreme relief, with elevations changing as much as 7 km over just 30 km of distance. The 54,464 km-long Venus chasmata system defined in great detail by Magellan can be fit by great circle arcs at the 89.6% level, and when corrected for the smaller size of the planet, the total length of the chasmata system measures within 2.7% of the length of Earth's spreading ridges. The relatively young Beta-Atla-Themis region (BAT), within 30° of the equator from 180-300° longitude, has the planet's strongest geoid highs and profuse volcanism. This BAT region, the intersection of three rift zones, also has a high concentration of coronae, with individual coronae closely associated with the chasmata system. The chasmata with the greatest relief on Venus show linear rifting that prevailed in the latest stage of tectonic deformation. For a three-dimensional view of Venus' surface, we spread out the Magellan topography on a flat surface using a Mercator projection to preserve shape. Next, we illuminate the surface with beams at a 45° angle from the left (or right) to simulate mid-afternoon (or mid-morning). Finally, we view the surface with the two eyes looking through orange and azure colored filters, respectively. This gives a 3D view of tectonic features in the BAT area. The 3D images clearly show coronae sharing boundaries with the chasmata. This suggests that the processes of rifting and corona-formation occur together. It seems unlikely that impact craters would create this pattern.
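
    As a rough illustration of how such a red/cyan stereo view can be produced from a gridded, Mercator-projected topography map, the sketch below shifts each pixel in proportion to its elevation to form a left/right pair and packs the pair into separate color channels. The parallax scaling and the use of normalized elevation as brightness (in place of the oblique 45° illumination described above) are simplifying assumptions, not the authors' processing chain.

        import numpy as np

        def anaglyph_from_dem(dem, parallax_scale=3.0):
            """Build a red/cyan anaglyph from a 2D elevation grid.

            Horizontal parallax is made proportional to elevation, so high terrain
            appears to stand out when viewed through colored filters.
            """
            h, w = dem.shape
            norm = (dem - dem.min()) / (np.ptp(dem) + 1e-9)      # 0..1 elevation
            shift = (parallax_scale * norm).astype(int)          # per-pixel parallax

            cols = np.arange(w)
            left = np.zeros_like(norm)
            right = np.zeros_like(norm)
            for i in range(h):
                left[i] = norm[i, np.clip(cols + shift[i], 0, w - 1)]
                right[i] = norm[i, np.clip(cols - shift[i], 0, w - 1)]

            rgb = np.stack([left, right, right], axis=-1)        # R = left eye, G/B = right eye
            return (255 * rgb).astype(np.uint8)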

  11. Advanced 3D polarimetric flash ladar imaging through foliage

    NASA Astrophysics Data System (ADS)

    Murray, James T.; Moran, Steven E.; Roddier, Nicolas; Vercillo, Richard; Bridges, Robert; Austin, William

    2003-08-01

    High-resolution three-dimensional flash ladar technologies are under development that enable remote identification of vehicles and armament hidden by heavy tree canopies. We have developed a sensor architecture and design that employs a 3D flash ladar receiver to address this mission. The receiver captures 128×128×>30 three-dimensional images for each laser pulse fired. The voxel size of the image is 3"×3"×4" at the target location. A novel signal-processing algorithm has been developed that achieves sub-voxel (sub-inch) range precision estimates of target locations within each pixel. Polarization discrimination is implemented to augment the target-to-foliage contrast. When employed, this method improves the range resolution of the system beyond the classical limit (based on pulsewidth and detection bandwidth). Experiments performed with a 6 ns transmitter pulsewidth demonstrate 1-inch range resolution of a tank-like target occluded by foliage and a range precision of 0.3" for unoccluded targets.

  12. Enhanced 3D fluorescence live cell imaging on nanoplasmonic substrate

    NASA Astrophysics Data System (ADS)

    Ranjan Gartia, Manas; Hsiao, Austin; Sivaguru, Mayandi; Chen, Yi; Logan Liu, G.

    2011-09-01

    We have created a randomly distributed nanocone substrate on silicon coated with silver for surface-plasmon-enhanced fluorescence detection and 3D cell imaging. Optical characterization of the nanocone substrate showed it can support several plasmonic modes (in the 300-800 nm wavelength range) that can be coupled to a fluorophore on the surface of the substrate, which gives rise to the enhanced fluorescence. Spectral analysis suggests that a nanocone substrate can create more excitons and a shorter lifetime in the model fluorophore Rhodamine 6G (R6G) due to plasmon resonance energy transfer from the nanocone substrate to the nearby fluorophore. We observed three-dimensional fluorescence enhancement on our substrate, as shown by confocal fluorescence imaging of Chinese hamster ovary (CHO) cells grown on the substrate. The fluorescence intensity from the fluorophores bound on the cell membrane was amplified more than 100-fold as compared to that on a glass substrate. We believe that strong scattering within the nanostructured area coupled with random scattering inside the cell resulted in the observed three-dimensional enhancement in fluorescence with higher photostability on the substrate surface.

  13. Autostereoscopic 3D visualization and image processing system for neurosurgery.

    PubMed

    Meyer, Tobias; Kuß, Julia; Uhlemann, Falk; Wagner, Stefan; Kirsch, Matthias; Sobottka, Stephan B; Steinmeier, Ralf; Schackert, Gabriele; Morgenstern, Ute

    2013-06-01

    A demonstrator system for planning neurosurgical procedures was developed based on commercial hardware and software. The system combines an easy-to-use environment for surgical planning with high-end visualization and the opportunity to analyze data sets for research purposes. The demonstrator system is based on the software AMIRA. Specific algorithms for segmentation, elastic registration, and visualization have been implemented and adapted to the clinical workflow. Modules from AMIRA and the image processing library Insight Segmentation and Registration Toolkit (ITK) can be combined to solve various image processing tasks. Customized modules tailored to specific clinical problems can easily be implemented using the AMIRA application programming interface and a self-developed framework for ITK filters. Visualization is done via autostereoscopic displays, which provide a 3D impression without viewing aids. A Spaceball device allows a comfortable, intuitive way of navigation in the data sets. Via an interface to a neurosurgical navigation system, the demonstrator system can be used intraoperatively. The precision, applicability, and benefit of the demonstrator system for planning of neurosurgical interventions and for neurosurgical research were successfully evaluated by neurosurgeons using phantom and patient data sets.

  14. 3D crack aperture distribution from a nuclear imaging method

    NASA Astrophysics Data System (ADS)

    Sardini, Paul; Kuva, Jukka; Siitari-Kauppi, Marja; Bonnet, Marine; Hellmuth, Karl-Heinz

    2017-04-01

    Cracks in solid rocks are multi-scale entities because of their spatial, length and aperture distributions. Aperture distributions of cracks are not well known because their full aperture range (<0.1 µm to >1 mm) is not accessible using common imaging techniques, such as SEM or X-ray computed micro-tomography. Knowing the aperture distribution of cracks is, however, highly relevant to understanding flow in rocks. In crystalline rocks, the lack of knowledge about the crack aperture distribution prevents a clear understanding of the relationship between porosity and permeability. A nuclear imaging method based on the full saturation of connected rock porosity by a 14C-doped resin (the 14C-PMMA method) allows the connected microcrack network to be detected using autoradiography. Although cracks are detected only on 2D sections, an estimate of the 3D aperture distribution of these cracks is possible. To this end, a set of "artificial crack" standards was prepared and investigated. These standards consisted of a PMMA layer of known thickness between two glass plates. Analysis of experimental autoradiographic profiles around these artificial cracks allows determination of their aperture. This methodology was then applied to different rock samples, mainly granitic ones.

  15. Multiframe image point matching and 3-d surface reconstruction.

    PubMed

    Tsai, R Y

    1983-02-01

    This paper presents two new methods, the Joint Moment Method (JMM) and the Window Variance Method (WVM), for image matching and 3-D object surface reconstruction using multiple perspective views. The viewing positions and orientations for these perspective views are known a priori, as is usually the case for such applications as robotics and industrial vision as well as close range photogrammetry. Like the conventional two-frame correlation method, the JMM and WVM require finding the extrema of 1-D curves, which are proved to theoretically approach a delta function exponentially as the number of frames increases for the JMM and are much sharper than the two-frame correlation function for both the JMM and the WVM, even when the image point to be matched cannot be easily distinguished from some of the other points. The theoretical findings have been supported by simulations. It is also proved that JMM and WVM are not sensitive to certain radiometric effects. If the same window size is used, the computational complexity for the proposed methods is about n - 1 times that for the two-frame method where n is the number of frames. Simulation results show that the JMM and WVM require smaller windows than the two-frame correlation method with better accuracy, and therefore may even be more computationally feasible than the latter since the computational complexity increases quadratically as a function of the window size.
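
    A hedged sketch of the window-variance idea follows: for one pixel in the reference frame and each candidate disparity, the corresponding windows are collected from all n frames and scored by how little they vary; the minimum-variance candidate is taken as the match. The simple linear-shift projection model below stands in for the known camera geometry and is an assumption for illustration only.

        import numpy as np

        def window_variance_match(frames, row, col, disparities, half=3):
            """Score candidate matches for pixel (row, col) of frame 0 across n frames.

            frames: list of 2D grayscale arrays from known viewpoints.
            disparities: candidate per-frame horizontal shifts (assumed linear in the
            frame index, standing in for the true known geometry).
            Returns the disparity whose windows vary least across all frames.
            """
            best_d, best_var = None, np.inf
            for d in disparities:
                windows = []
                for k, f in enumerate(frames):
                    c = int(round(col + k * d))            # simplistic projection model
                    win = f[row - half:row + half + 1, c - half:c + half + 1]
                    if win.shape != (2 * half + 1, 2 * half + 1):
                        break                               # window fell outside the image
                    windows.append(win.astype(float))
                else:
                    # Mean per-pixel variance across frames within the window.
                    var = np.var(np.stack(windows), axis=0).mean()
                    if var < best_var:
                        best_var, best_d = var, d
            return best_d, best_var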

  16. [3D virtual imaging of the upper airways].

    PubMed

    Ferretti, G; Coulomb, M

    2000-04-01

    The different three-dimensional reconstructions of the upper airways that can be obtained with spiral computed tomography (CT) are presented here. The parameters indispensable for obtaining spiral CT images that are as realistic as possible are recalled, together with the advantages and disadvantages of the different techniques. Multislice reconstruction (MSR) produces slices in different planes of space with the high contrast of CT slices. They provide information similar to that obtained for the rare indications for thoracic MRI. Thick-slice reconstructions with maximum intensity projection (MIP) or minimum intensity projection (minIP) give projection views where the contrast can be modified by selecting the denser (MIP) or less dense (minIP) voxels. They find application in the exploration of the upper airways. Surface and volume external 3D reconstructions can be obtained. They give an overall view of the upper airways, similar to a bronchogram. Virtual endoscopy reproduces real endoscopic images but cannot provide information on the appearance of the mucosa or yield biopsy specimens. It offers possible applications for preparing, guiding and controlling interventional fibroscopy procedures.
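
    In voxel terms, MIP and minIP projections are simply maxima or minima taken along the viewing axis of the CT volume; a minimal NumPy sketch (the choice of projection axis is an assumption):

        import numpy as np

        def mip(volume, axis=0):
            """Maximum intensity projection along the chosen axis (highlights dense structures)."""
            return volume.max(axis=axis)

        def minip(volume, axis=0):
            """Minimum intensity projection along the same axis (highlights air-filled airways)."""
            return volume.min(axis=axis)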

  17. 3D-3D registration of partial capitate bones using spin-images

    NASA Astrophysics Data System (ADS)

    Breighner, Ryan; Holmes, David R.; Leng, Shuai; An, Kai-Nan; McCollough, Cynthia; Zhao, Kristin

    2013-03-01

    It is often necessary to register partial objects in medical imaging. Due to limited field of view (FOV), the entirety of an object cannot always be imaged. This study presents a novel application of an existing registration algorithm to this problem. The spin-image algorithm [1] creates pose-invariant representations of global shape with respect to individual mesh vertices. These 'spin-images' are then compared for two different poses of the same object to establish correspondences and subsequently determine the relative orientation of the poses. In this study, the spin-image algorithm is applied to 4DCT-derived capitate bone surfaces to assess the relative accuracy of registration with various amounts of geometry excluded. The limited longitudinal coverage of the 4DCT technique (38.4 mm, [2]) results in partial views of the capitate when imaging wrist motions. This study assesses the ability of the spin-image algorithm to register partial bone surfaces by artificially restricting the capitate geometry available for registration. Under IRB approval, standard static CT and 4DCT scans were obtained on a patient. The capitate was segmented from the static CT and from one phase of the 4DCT in which the whole bone was available. Spin-image registration was performed between the static CT and the 4DCT. Distal portions of the 4DCT capitate (10-70%) were then progressively removed and the registration was repeated. Registration accuracy was evaluated by angular errors and the percentage of sub-resolution fitting. It was determined that 60% of the distal capitate could be omitted without appreciable effect on registration accuracy using the spin-image algorithm (angular error < 1.5 degrees, sub-resolution fitting < 98.4%).
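
    The spin-image of [1] maps every other surface point into a 2D histogram indexed by its radial and axial coordinates relative to an oriented basis vertex. A minimal sketch follows; the bin size and image extent are illustrative assumptions. Correspondences between two poses can then be proposed by correlating the spin images of candidate vertex pairs.

        import numpy as np

        def spin_image(vertex, normal, points, bin_size=1.0, image_width=16):
            """Compute a spin image for one oriented vertex of a surface mesh.

            vertex: (3,) basis point; normal: (3,) unit surface normal at that point;
            points: (N, 3) remaining mesh vertices.
            """
            d = points - vertex
            beta = d @ normal                                   # signed distance along the normal
            alpha = np.sqrt(np.maximum(np.sum(d * d, axis=1) - beta**2, 0.0))  # radial distance

            img = np.zeros((image_width, image_width))
            i = ((image_width / 2) - beta / bin_size).astype(int)   # rows: axial coordinate
            j = (alpha / bin_size).astype(int)                      # cols: radial coordinate
            keep = (i >= 0) & (i < image_width) & (j >= 0) & (j < image_width)
            np.add.at(img, (i[keep], j[keep]), 1)               # accumulate point counts per bin
            return img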

  18. Accurate 3D kinematic measurement of temporomandibular joint using X-ray fluoroscopic images

    NASA Astrophysics Data System (ADS)

    Yamazaki, Takaharu; Matsumoto, Akiko; Sugamoto, Kazuomi; Matsumoto, Ken; Kakimoto, Naoya; Yura, Yoshiaki

    2014-04-01

    Accurate measurement and analysis of the 3D kinematics of the temporomandibular joint (TMJ) is very important for assisting clinical diagnosis and treatment in prosthodontics, orthodontics, and oral surgery. This study presents a new 3D kinematic measurement technique for the TMJ using X-ray fluoroscopic images, which can easily obtain TMJ kinematic data during natural motion. In vivo kinematics of the TMJ (maxilla and mandible) is determined using feature-based 2D/3D registration, which uses bead silhouettes on the fluoroscopic images and 3D surface bone models with beads. The 3D surface models of the maxilla and mandible with beads were created from CT scan data of the subject wearing a mouthpiece with seven strategically placed beads. To validate the accuracy of pose estimation for the maxilla and mandible, a computer simulation test was performed using five patterns of synthetic tantalum bead silhouette images. In the clinical application, dynamic movement during jaw opening and closing was recorded, and the relative pose of the mandible with respect to the maxilla was determined. The computer simulation test showed that the root mean square errors were well below 1.0 mm and 1.0 degree. In the clinical application, during jaw opening from 0.0 to 36.8 degrees of rotation, the mandibular condyle exhibited 19.8 mm of anterior sliding relative to the maxillary articular fossa, and these measurements were clinically consistent with previous reports. Consequently, the present technique appears suitable for 3D TMJ kinematic analysis.

  19. Complex adaptation-based LDR image rendering for 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming brightness dimming in the 3D display mode. The 3D surround provides varying conditions for image quality, illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost on bright and dark areas. Thus, an enhanced image mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround combining global and local adaptation. Evaluating local image rendering in view of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.

  20. Frames-Based Denoising in 3D Confocal Microscopy Imaging.

    PubMed

    Konstantinidis, Ioannis; Santamaria-Pang, Alberto; Kakadiaris, Ioannis

    2005-01-01

    In this paper, we propose a novel denoising method for 3D confocal microscopy data based on robust edge detection. Our approach relies on the construction of a non-separable frame system in 3D that incorporates the Sobel operator in dual spatial directions. This multidirectional set of digital filters is capable of robustly detecting edge information by ensemble thresholding of the filtered data. We demonstrate the application of our method to both synthetic and real confocal microscopy data by comparing it to denoising methods based on separable 3D wavelets and 3D median filtering, and report very encouraging results.
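
    As a greatly simplified stand-in for the frame-based scheme (separable Sobel filters instead of the paper's non-separable frame system), the sketch below detects strong 3D edges by thresholding the combined Sobel responses and median-filters only the non-edge voxels; the quantile threshold and filter size are illustrative assumptions.

        import numpy as np
        from scipy import ndimage

        def edge_preserving_denoise(volume, edge_quantile=0.9, size=3):
            """Median-filter a 3D volume everywhere except at strong 3D edges."""
            # 3D Sobel responses along the three spatial axes.
            responses = np.stack([ndimage.sobel(volume.astype(float), axis=a) for a in range(3)])
            edge_strength = np.sqrt((responses ** 2).sum(axis=0))
            edge_mask = edge_strength > np.quantile(edge_strength, edge_quantile)

            smoothed = ndimage.median_filter(volume, size=size)
            # Keep original values at edges, smoothed values elsewhere.
            return np.where(edge_mask, volume, smoothed)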

  1. Proximal femoral focal deficiency of the fetus - early 3D/4D prenatal ultrasound diagnosis.

    PubMed

    Kudla, Marek J; Beczkowska-Kielek, Aleksandra; Kutta, Katarzyna; Partyka-Lasota, Justyna

    2016-09-01

    Proximal Femoral Focal Deficiency (PFFD) is a rare congenital syndrome of unknown etiology. Additional disorders can be present up to 70% of PFFD cases. Management (including termination) depends on the severity of the malformation. We present a case of a 32-year-old woman referred for routine ultrasound examination in the 12th week of pregnancy. Detailed 3D/4D evaluation revealed asymmetry of lower limbs and diagnosis of isolated PFFD was established. Parents were fully informed and decided to continue the pregnancy. We stress here the importance of early 3D/4D ultrasound diagnosis. Our paper presents the earliest case where the diagnosis of PFFD was established with 3D/4D ultrasound.

  2. Analysis and dynamic 3D visualization of cerebral blood flow combining 3D and 4D MR image sequences

    NASA Astrophysics Data System (ADS)

    Forkert, Nils Daniel; Säring, Dennis; Fiehler, Jens; Illies, Till; Möller, Dietmar; Handels, Heinz

    2009-02-01

    In this paper we present a method for the dynamic visualization of cerebral blood flow. Spatio-temporal 4D magnetic resonance angiography (MRA) image datasets and 3D MRA datasets with high spatial resolution were acquired for the analysis of arteriovenous malformations (AVMs). One of the main tasks is to combine the information from the 3D and 4D MRA image sequences. Initially, the vessel system is segmented in the 3D MRA dataset and a 3D surface model is generated. Then, temporal intensity curves are analyzed voxelwise in the 4D MRA image sequences. A curve fit of the temporal intensity curves to a patient-individual reference curve is used to extract the bolus arrival times in the 4D MRA sequences. After non-linear registration of both MRA datasets, the extracted hemodynamic information is transferred to the surface model, where the time points of inflow can be visualized, color-coded, dynamically over time. The dynamic visualizations computed using the curve-fitting method for the estimation of the bolus arrival times were rated superior to those computed using conventional approaches for bolus arrival time estimation. In summary, the suggested procedure allows a dynamic visualization of the individual hemodynamic situation and supports better understanding during the visual evaluation of cerebral vascular diseases.
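
    The arrival-time extraction step can be sketched as fitting a shifted, scaled copy of the patient-specific reference curve to each voxel's temporal intensity curve and reading off the fitted shift. The model parameterization and starting values below are illustrative assumptions.

        import numpy as np
        from scipy.interpolate import interp1d
        from scipy.optimize import curve_fit

        def bolus_arrival_time(t, voxel_curve, ref_t, ref_curve):
            """Estimate the bolus arrival time of one voxel from a 4D MRA series.

            The voxel's temporal intensity curve is fitted to a time-shifted, scaled
            copy of a patient-specific reference curve; the fitted shift is taken as
            the arrival time.
            """
            ref = interp1d(ref_t, ref_curve, bounds_error=False, fill_value=0.0)

            def model(time, shift, scale, offset):
                return scale * ref(time - shift) + offset

            p0 = [0.0, 1.0, float(np.min(voxel_curve))]   # initial guess: no shift, unit scale
            (shift, scale, offset), _ = curve_fit(model, t, voxel_curve, p0=p0, maxfev=5000)
            return shift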

  3. 3D printed auto-mixing chip enables rapid smartphone diagnosis of anemia.

    PubMed

    Plevniak, Kimberly; Campbell, Matthew; Myers, Timothy; Hodges, Abby; He, Mei

    2016-09-01

    Clinical diagnosis requiring central facilities and site visits can be burdensome for patients in resource-limited or rural areas. Therefore, development of a low-cost test that utilizes smartphone data collection and transmission would beneficially enable disease self-management and point-of-care (POC) diagnosis. In this paper, we introduce a low-cost iPOC(3D) diagnostic strategy which integrates 3D design and printing of a microfluidic POC device with smartphone-based disease diagnosis in one process as a stand-alone system, offering strong adaptability for establishing diagnostic capacity in resource-limited areas and low-income countries. We employ smartphone output (AutoCAD 360 app) and readout (a color-scale analytical app written in-house) functionalities for rapid 3D printing of microfluidic auto-mixers and colorimetric detection of blood hemoglobin levels. Auto-mixing of reagents with blood via capillary force has been demonstrated in 1 second without the requirement of external pumps. We employed this iPOC(3D) system for point-of-care diagnosis of anemia using a training set of patients (n_anemia = 16 and n_healthy = 6), which showed consistent measurements of blood hemoglobin levels (AUC = 0.97) and comparable diagnostic sensitivity and specificity, compared with a standard clinical hematology analyzer. Capable of 3D fabrication flexibility and smartphone compatibility, this work presents a novel diagnostic strategy for advancing personalized medicine and mobile healthcare.
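
    On the readout side, a colorimetric estimate of this kind typically averages the color in the detection chamber and maps it through a calibration fitted against a reference analyzer. The sketch below is only illustrative: the region of interest, the choice of the red channel as the color-scale readout, and the linear calibration constants are all hypothetical.

        import numpy as np

        def hemoglobin_from_image(rgb_image, roi, slope, intercept):
            """Estimate blood hemoglobin (g/dL) from a smartphone photo of the chip.

            rgb_image: HxWx3 uint8 array; roi: (row_slice, col_slice) of the detection
            chamber; slope/intercept: linear calibration fitted beforehand against a
            reference hematology analyzer (hypothetical values supplied by the user).
            """
            patch = rgb_image[roi].astype(float)
            readout = patch[..., 0].mean()    # mean red-channel intensity as the color-scale value
            return slope * readout + intercept

        # Hypothetical usage:
        # hb = hemoglobin_from_image(img, (slice(100, 160), slice(200, 260)), -0.05, 25.0)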

  4. Imaging articular cartilage defects with 3D fat-suppressed echo planar imaging: comparison with conventional 3D fat-suppressed gradient echo sequence and correlation with histology.

    PubMed

    Trattnig, S; Huber, M; Breitenseher, M J; Trnka, H J; Rand, T; Kaider, A; Helbich, T; Imhof, H; Resnick, D

    1998-01-01

    Our goal was to shorten examination time in articular cartilage imaging by use of a recently developed 3D multishot echo planar imaging (EPI) sequence with fat suppression (FS). We performed comparisons with 3D FS GE sequence using histology as the standard of reference. Twenty patients with severe gonarthrosis who were scheduled for total knee replacement underwent MRI prior to surgery. Hyaline cartilage was imaged with a 3D FS EPI and a 3D FS GE sequence. Signal intensities of articular structures were measured, and contrast-to-noise (C/N) ratios were calculated. Each knee was subdivided into 10 cartilage surfaces. From a total of 188 (3D EPI sequence) and 198 (3D GE sequence) cartilage surfaces, 73 and 79 histologic specimens could be obtained and analyzed. MR grading of cartilage lesions on both sequences was based on a five grade classification scheme and compared with histologic grading. The 3D FS EPI sequence provided a high C/N ratio between cartilage and subchondral bone similar to that of the 3D FS GE sequence. The C/N ratio between cartilage and effusion was significantly lower on the 3D EPI sequence due to higher signal intensity of fluid. MR grading of cartilage abnormalities using 3D FS EPI and 3D GE sequence correlated well with histologic grading. 3D FS EPI sequence agreed within one grade in 69 of 73 (94.5%) histologically proven cartilage lesions and 3D FS GE sequence agreed within one grade in 76 of 79 (96.2%) lesions. The gradings were identical in 38 of 73 (52.1%) and in 46 of 79 (58.3%) cases, respectively. The difference between the sensitivities was statistically not significant. The 3D FS EPI sequence is comparable with the 3D FS GE sequence in the noninvasive evaluation of advanced cartilage abnormalities but reduces scan time by a factor of 4.

  5. 3D Soil Images Structure Quantification using Relative Entropy

    NASA Astrophysics Data System (ADS)

    Tarquis, A. M.; Gonzalez-Nieto, P. L.; Bird, N. R. A.

    2012-04-01

    Soil voids manifest the cumulative effect of local pedogenic processes and ultimately influence soil behavior - especially as it pertains to aeration and hydrophysical properties. Because of the relatively weak attenuation of X-rays by air, compared with liquids or solids, non-disruptive CT scanning has become a very attractive tool for generating three-dimensional imagery of soil voids. One of the main steps involved in this analysis is the thresholding required to transform the original (greyscale) images into the type of binary representation (e.g., pores in white, solids in black) needed for fractal analysis or simulation with Lattice-Boltzmann models (Baveye et al., 2010). The objective of the current work is to apply an innovative approach to quantifying soil voids and pore networks in original X-ray CT imagery using Relative Entropy (Bird et al., 2006; Tarquis et al., 2008). These will be illustrated using typical imagery representing contrasting soil structures. Particular attention will be given to the need to consider the full 3D context of the CT imagery, as well as scaling issues, in the application and interpretation of this index.

  6. Accurate Diagnosis of Severe Hypospadias Using 2D and 3D Ultrasounds

    PubMed Central

    López Ramón y Cajal, Carlos; Marín Ortiz, Elena; Sarmiento Carrera, Nerea

    2016-01-01

    Hypospadias is the most common urogenital anomaly in male neonates, but its prenatal diagnosis is often missed before birth. We present the prenatal diagnosis of a severe penoscrotal hypospadias using 2D and 3D ultrasound. 3D sonography allowed the best evaluation of the genitals and their anatomical relations. This detailed study allowed us to show the findings to the parents and the pediatric surgeon and to provide the best information about the prognosis and surgical treatment. PMID:27774326

  7. Comparison of bootstrap resampling methods for 3-D PET imaging.

    PubMed

    Lartizien, C; Aubin, J-B; Buvat, I

    2010-07-01

    Two groups of bootstrap methods have been proposed to estimate the statistical properties of positron emission tomography (PET) images by generating multiple statistically equivalent data sets from few data samples. The first group generates resampled data based on a parametric approach assuming that data from which resampling is performed follows a Poisson distribution while the second group consists of nonparametric approaches. These methods either require a unique original sample or a series of statistically equivalent data that can be list-mode files or sinograms. Previous reports regarding these bootstrap approaches suggest different results. This work compares the accuracy of three of these bootstrap methods for 3-D PET imaging based on simulated data. Two methods are based on a unique file, namely a list-mode based nonparametric (LMNP) method and a sinogram based parametric (SP) method. The third method is a sinogram-based nonparametric (SNP) method. Another original method (extended LMNP) was also investigated, which is an extension of the LMNP methods based on deriving a resampled list-mode file by drawings events from multiple original list-mode files. Our comparison is based on the analysis of the statistical moments estimated on the repeated and resampled data. This includes the probability density function and the moments of order 1 and 2. Results show that the two methods based on multiple original data (SNP and extended LMNP) are the only methods that correctly estimate the statistical parameters. Performances of the LMNP and SP methods are variable. Simulated data used in this study were characterized by a high noise level. Differences among the tested strategies might be reduced with clinical data sets with lower noise.
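
    Schematically, the two families of resampling compared here can be sketched as follows: the parametric (sinogram-based) variant redraws each bin from a Poisson law with the measured count as its mean, while the nonparametric (list-mode) variant redraws events with replacement from one or several original list-mode files. This is a simplified illustration, not the exact implementations evaluated in the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def bootstrap_sinogram_parametric(sinogram):
            """SP-style resample: draw each bin from a Poisson law whose mean is the
            measured count (assumes Poisson-distributed data)."""
            return rng.poisson(sinogram)

        def bootstrap_listmode_nonparametric(events, n_events=None):
            """LMNP-style resample: redraw events with replacement from one (or, for
            the extended variant, several concatenated) list-mode arrays.

            events: (N, k) NumPy array, one row per recorded coincidence event.
            """
            n = len(events) if n_events is None else n_events
            idx = rng.integers(0, len(events), size=n)
            return events[idx]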

  8. Automated 3D renal segmentation based on image partitioning

    NASA Astrophysics Data System (ADS)

    Yeghiazaryan, Varduhi; Voiculescu, Irina D.

    2016-03-01

    Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context and still comes at considerable expense of user time. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The resulting segmentations identify kidneys, but the approach can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation compared against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data with 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients and the true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large amplitude of the Hounsfield unit distribution in the scan and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also provide a light level of manual editing.
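
    The volume-based and size-based measures used for this kind of evaluation are straightforward to compute from binary masks; a short NumPy sketch:

        import numpy as np

        def overlap_metrics(auto_mask, gold_mask):
            """Volume-overlap measures between an automated and a manual segmentation.

            Both inputs are boolean 3D arrays of the same shape.
            """
            auto, gold = auto_mask.astype(bool), gold_mask.astype(bool)
            inter = np.logical_and(auto, gold).sum()
            union = np.logical_or(auto, gold).sum()
            return {
                "dice": 2.0 * inter / (auto.sum() + gold.sum()),
                "jaccard": inter / union,
                "true_positive_volume_fraction": inter / gold.sum(),
                "relative_volume_difference": (auto.sum() - gold.sum()) / gold.sum(),
            }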

  9. TBIdoc: 3D content-based CT image retrieval system for traumatic brain injury

    NASA Astrophysics Data System (ADS)

    Li, Shimiao; Gong, Tianxia; Wang, Jie; Liu, Ruizhe; Tan, Chew Lim; Leong, Tze Yun; Pang, Boon Chuan; Lim, C. C. Tchoyoson; Lee, Cheng Kiang; Tian, Qi; Zhang, Zhuo

    2010-03-01

    Traumatic brain injury (TBI) is a major cause of death and disability. Computed tomography (CT) is widely used in the diagnosis of TBI. Nowadays, large amounts of TBI CT data are stored in hospital radiology departments. Such data and the associated patient information contain valuable information for clinical diagnosis and outcome prediction. However, current hospital database systems do not provide an efficient and intuitive tool for doctors to search for cases relevant to the case under study. In this paper, we present the TBIdoc system: a content-based image retrieval (CBIR) system which works on TBI CT images. In this web-based system, a user can query by uploading CT image slices from one study; the retrieval result is a list of TBI cases ranked according to their 3D visual similarity to the query case. Specifically, TBI CT images often present diffuse or focal lesions. In the TBIdoc system, these pathological image features are represented as bin-based binary feature vectors. We use the Jaccard-Needham measure as the similarity measurement. Based on these, we propose a 3D similarity measure for computing the similarity score between two series of CT slices. nDCG is used to evaluate the system performance, which shows that the system produces satisfactory retrieval results. The system is expected to improve current hospital data management in TBI and to give better support to the clinical decision-making process. It may also contribute to computer-aided education in TBI.
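
    The Jaccard-Needham measure on binary feature vectors is simple to express; the per-series 3D score below, which keeps the best-matching candidate slice for each query slice and averages, is an illustrative assumption rather than the exact TBIdoc formulation.

        import numpy as np

        def jaccard_needham(a, b):
            """Similarity between two binary feature vectors."""
            a, b = np.asarray(a, bool), np.asarray(b, bool)
            union = np.logical_or(a, b).sum()
            if union == 0:
                return 1.0
            return np.logical_and(a, b).sum() / union

        def series_similarity(query_slices, case_slices):
            """Illustrative 3D score between two series of per-slice feature vectors:
            average, over the query slices, of each slice's best match in the case."""
            best = [max(jaccard_needham(q, c) for c in case_slices) for q in query_slices]
            return float(np.mean(best))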

  10. Calculation of strain images of a breast-mimicking phantom from 3D CT image data.

    PubMed

    Kim, Jae G; Aowlad Hossain, A B M; Shin, Jong H; Lee, Soo Y

    2012-09-01

    Elastography is a medical imaging modality to visualize the elasticity of soft tissues. Ultrasound and MRI have been used almost exclusively for elastography of soft tissues since they can detect the tissues' minute displacements, which are on the order of micrometers. It is known that ultrasound and MRI elastography show cancerous tissues with much higher contrast than conventional ultrasound and MRI. To evaluate the possibility of combining elastography with x-ray imaging, we have calculated strain images of a breast-mimicking phantom from its 3D CT image data. We first simulated x-ray elastography using a FEM model which incorporated both the elasticity and the x-ray attenuation behaviors of breast tissues. After validating the x-ray elastography scheme by simulation, we made a breast-mimicking phantom that contained a hard inclusion against a soft background. With a micro-CT, we imaged the phantom twice, changing the compressive force applied to the phantom. From the two 3D phantom images taken at two different compression ratios, we calculated the displacement vector maps that represent the compression-induced pixel displacements. In calculating the displacement vectors, we tracked the movements of image feature patterns from the less-compressed-phantom images to the more-compressed-phantom images using a 3D image correlation technique. We obtained strain images of the phantom by differentiating the displacement vector maps. The FEM simulation showed that x-ray strain imaging is possible by tracking image feature patterns in the 3D CT images of the breast-mimicking phantom. The experimental displacement and strain images of the breast-mimicking phantom, obtained from the 3D micro-CT images taken at 0%-3% compression ratios, show behavior similar to the FEM simulation results. The contrast and noise performance of the strain images improves as the phantom compression ratio increases. We have experimentally shown that we can improve x-ray strain image quality by applying 3D
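
    The final differentiation step, turning a displacement map into a strain image, amounts to taking the spatial derivative of the displacement along the compression axis; a minimal sketch (the compression axis and voxel spacing are assumptions):

        import numpy as np

        def normal_strain_z(displacement_z, voxel_spacing_z=1.0):
            """Axial normal strain from a compression-direction displacement map.

            displacement_z: 3D array of z-displacements (in mm) obtained, e.g., by
            correlation-based feature tracking between the two CT volumes.
            """
            # Strain is the spatial derivative of displacement along the compression axis.
            return np.gradient(displacement_z, voxel_spacing_z, axis=0)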

  11. Deformable M-Reps for 3D Medical Image Segmentation

    PubMed Central

    Pizer, Stephen M.; Fletcher, P. Thomas; Joshi, Sarang; Thall, Andrew; Chen, James Z.; Fridman, Yonatan; Fritsch, Daniel S.; Gash, Graham; Glotzer, John M.; Jiroutek, Michael R.; Lu, Conglin; Muller, Keith E.; Tracton, Gregg; Yushkevich, Paul; Chaney, Edward L.

    2013-01-01

    M-reps (formerly called DSLs) are a multiscale medial means for modeling and rendering 3D solid geometry. They are particularly well suited to model anatomic objects and in particular to capture prior geometric information effectively in deformable models segmentation approaches. The representation is based on figural models, which define objects at coarse scale by a hierarchy of figures – each figure generally a slab representing a solid region and its boundary simultaneously. This paper focuses on the use of single figure models to segment objects of relatively simple structure. A single figure is a sheet of medial atoms, which is interpolated from the model formed by a net, i.e., a mesh or chain, of medial atoms (hence the name m-reps), each atom modeling a solid region via not only a position and a width but also a local figural frame giving figural directions and an object angle between opposing, corresponding positions on the boundary implied by the m-rep. The special capability of an m-rep is to provide spatial and orientational correspondence between an object in two different states of deformation. This ability is central to effective measurement of both geometric typicality and geometry to image match, the two terms of the objective function optimized in segmentation by deformable models. The other ability of m-reps central to effective segmentation is their ability to support segmentation at multiple levels of scale, with successively finer precision. Objects modeled by single figures are segmented first by a similarity transform augmented by object elongation, then by adjustment of each medial atom, and finally by displacing a dense sampling of the m-rep implied boundary. While these models and approaches also exist in 2D, we focus on 3D objects. The segmentation of the kidney from CT and the hippocampus from MRI serve as the major examples in this paper. The accuracy of segmentation as compared to manual, slice-by-slice segmentation is reported. PMID

  12. 3D-imaging of the knee with an optimized 3D-FSE-sequence and a 15-channel knee-coil.

    PubMed

    Notohamiprodjo, Mike; Horng, Annie; Kuschel, Bernhard; Paul, Dominik; Li, Guobin; Raya, Jose G; Reiser, Maximilian F; Glaser, Christian

    2012-11-01

    To evaluate the clinical usefulness of an optimized 3D-Fast-Spin-Echo-sequence (3D-SPACE) in combination with a 15-channel knee-coil for 3D-imaging of the knee at 3T. 15 volunteers and 50 consecutive patients were examined at 3 T with fat-saturated moderately T2-weighted 3D-SPACE (Voxel-size (VS): 0.6 mm×0.5 mm×0.5 mm/acquisition-time (AT) 10:44 min) using a 15-channel knee-coil. Flip angle optimization and radial k-space reordering were applied. Signal- and contrast-to-noise-ratios (SNR, CNR) were compared to non-optimized 3D-SPACE (8-channel knee-coil) and conventional 2D-FSE (VS: 0.4 mm×0.4 mm×3 mm/total AT: 12 min). Two radiologists independently rated depiction of internal knee structures and assessed detection and depiction of cartilage and meniscus abnormalities compared to conventional 2D-FSE-sequences. Sensitivity and specificity were calculated for a subgroup with arthroscopy as reference standard. Statistical analysis was performed with paired t-tests, confidence intervals and weighted-κ-coefficients. SNR and CNR particularly of fluid/cartilage of optimized 3D-SPACE were significantly higher (p<0.05) than of the non-optimized 3D-sequence and conventional 2D-sequence. Blurring and image inhomogeneity were reduced in the optimized sequence. The thin slice-thickness was beneficial for depiction of problematical anatomical structures such as meniscal roots. 3D-SPACE showed significantly higher diagnostic confidence (p<0.05) for diagnosis of cartilage lesions of the femoral trochlea. Overall sensitivity and specificity of 3D-SPACE and 2D-FSE for cartilage lesions was 82.3%/80.2% and 79.4%/84.2% and 100%/86.4% and 92.3%/81.8% for meniscus lesions. Optimized 3D-SPACE provides significantly higher signal and contrast compared to conventional 2D-FSE, particularly for fluid and cartilage, leading to improved diagnostic confidence, particularly in problematic areas, such as the femoral trochlea. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  13. Database guided detection of anatomical landmark points in 3D images of the heart

    NASA Astrophysics Data System (ADS)

    Karavides, Thomas; Esther Leung, K. Y.; Paclik, Pavel; Hendriks, Emile A.; Bosch, Johan G.

    2010-03-01

    Automated landmark detection may prove invaluable in the analysis of real-time three-dimensional (3D) echocardiograms. By detecting 3D anatomical landmark points, the standard anatomical views can be extracted automatically in apically acquired 3D ultrasound images of the left ventricle, for better standardization of visualization and objective diagnosis. Furthermore, the landmarks can serve as an initialization for other analysis methods, such as segmentation. The described algorithm applies landmark detection in perpendicular planes of the 3D dataset. The landmark detection exploits a large database of expert annotated images, using an extensive set of Haar features for fast classification. The detection is performed using two cascades of Adaboost classifiers in a coarse to fine scheme. The method is evaluated by measuring the distance of detected and manually indicated landmark points in 25 patients. The method can detect landmarks accurately in the four-chamber (apex: 7.9+/-7.1mm, septal mitral valve point: 5.6+/-2.7mm lateral mitral valve point: 4.0+/-2.6mm) and two-chamber view (apex: 7.1+/-6.7mm, anterior mitral valve point: 5.8+/-3.5mm, inferior mitral valve point: 4.5+/-3.1mm). The results compare well to those reported by others.
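
    Generic Haar-like feature machinery of the kind used by such cascades rests on the integral image, which turns any rectangle sum into a four-lookup operation. The sketch below shows one two-rectangle feature; the actual feature set and AdaBoost cascade of the paper are not reproduced.

        import numpy as np

        def integral_image(img):
            """Summed-area table: cumulative sums along both image axes."""
            return img.astype(float).cumsum(axis=0).cumsum(axis=1)

        def box_sum(ii, r0, c0, r1, c1):
            """Sum of pixels in the rectangle [r0, r1) x [c0, c1) using the integral image."""
            total = ii[r1 - 1, c1 - 1]
            if r0 > 0:
                total -= ii[r0 - 1, c1 - 1]
            if c0 > 0:
                total -= ii[r1 - 1, c0 - 1]
            if r0 > 0 and c0 > 0:
                total += ii[r0 - 1, c0 - 1]
            return total

        def haar_two_rect_vertical(ii, r, c, h, w):
            """Two-rectangle Haar-like feature: top half of the window minus bottom half."""
            half = h // 2
            return box_sum(ii, r, c, r + half, c + w) - box_sum(ii, r + half, c, r + h, c + w)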

  14. Dual optimization based prostate zonal segmentation in 3D MR images.

    PubMed

    Qiu, Wu; Yuan, Jing; Ukwatta, Eranga; Sun, Yue; Rajchl, Martin; Fenster, Aaron

    2014-05-01

    Efficient and accurate segmentation of the prostate and two of its clinically meaningful sub-regions: the central gland (CG) and peripheral zone (PZ), from 3D MR images, is of great interest in image-guided prostate interventions and diagnosis of prostate cancer. In this work, a novel multi-region segmentation approach is proposed to simultaneously segment the prostate and its two major sub-regions from only a single 3D T2-weighted (T2w) MR image, which makes use of the prior spatial region consistency and incorporates a customized prostate appearance model into the segmentation task. The formulated challenging combinatorial optimization problem is solved by means of convex relaxation, for which a novel spatially continuous max-flow model is introduced as the dual optimization formulation to the studied convex relaxed optimization problem with region consistency constraints. The proposed continuous max-flow model derives an efficient duality-based algorithm that enjoys numerical advantages and can be easily implemented on GPUs. The proposed approach was validated using 18 3D prostate T2w MR images with a body-coil and 25 images with an endo-rectal coil. Experimental results demonstrate that the proposed method is capable of efficiently and accurately extracting both the prostate zones: CG and PZ, and the whole prostate gland from the input 3D prostate MR images, with a mean Dice similarity coefficient (DSC) of 89.3±3.2% for the whole gland (WG), 82.2±3.0% for the CG, and 69.1±6.9% for the PZ in 3D body-coil MR images; 89.2±3.3% for the WG, 83.0±2.4% for the CG, and 70.0±6.5% for the PZ in 3D endo-rectal coil MR images. In addition, the experiments of intra- and inter-observer variability introduced by user initialization indicate a good reproducibility of the proposed approach in terms of volume difference (VD) and coefficient-of-variation (CV) of DSC. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. 3D kinematics of the tarsal joints from magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Hirsch, Bruce E.; Udupa, Jayaram K.; Okereke, Enyi; Hillstrom, Howard J.; Siegler, Sorin; Ringleb, Stacie I.; Imhauser, Carl W.

    2001-09-01

    We have developed a method for analyzing motion at skeletal joints based on the 3D reconstruction of magnetic resonance (MR) image data. Since the information about each voxel in MR images includes its location in the scanner, it follows that information is available for each organ whose 3D surface is computed from a series of MR slices. In addition, there is information on the shape and orientation of each organ, and the contact areas of adjacent bones. By collecting image data in different positions we can calculate the motion of the individual bones. We have used this method to study human foot bones, in order to understand normal and abnormal foot function. It has been used to evaluate patients with tarsal coalitions, various forms of pes planus, ankle sprains, and several other conditions. A newly described feature of this system is the ability to visualize the contact area at a joint, as determined by the region of minimum distance. The display of contact area helps understand abnormal joint function. Also, the use of 3D imaging reveals motions in joints which cannot otherwise be visualized, such as the subtalar joint, for more accurate diagnosis of joint injury.
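
    The contact-area idea, flagging the surface region of one bone that lies within a small distance of the other, can be sketched with a k-d tree nearest-neighbor query; the distance threshold below is an illustrative value, not one taken from the paper.

        import numpy as np
        from scipy.spatial import cKDTree

        def contact_region(vertices_a, vertices_b, threshold_mm=1.5):
            """Flag vertices of bone A lying within a distance threshold of bone B.

            vertices_a, vertices_b: (N, 3) and (M, 3) surface-vertex coordinates in mm.
            Returns per-vertex nearest distances and a boolean mask of the contact area.
            """
            distances, _ = cKDTree(vertices_b).query(vertices_a)
            return distances, distances < threshold_mm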

  16. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    SciTech Connect

    Wong, S.T.C.

    1997-02-01

    The purpose of this presentation is to discuss the computer processing techniques applied to tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electron and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now entering surgical practice. In this presentation, we discuss only computer-display-based approaches to volumetric medical visualization. That is, we assume that the available display device is two-dimensional (2D) in nature and that all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies such as holography and virtual reality that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.

  17. Computer-aided diagnosis of pulmonary nodules on CT scans: segmentation and classification using 3D active contours.

    PubMed

    Way, Ted W; Hadjiiski, Lubomir M; Sahiner, Berkman; Chan, Heang-Ping; Cascade, Philip N; Kazerooni, Ella A; Bogot, Naama; Zhou, Chuan

    2006-07-01

    We are developing a computer-aided diagnosis (CAD) system to classify malignant and benign lung nodules found on CT scans. A fully automated system was designed to segment the nodule from its surrounding structured background in a local volume of interest (VOI) and to extract image features for classification. Image segmentation was performed with a three-dimensional (3D) active contour (AC) method. A data set of 96 lung nodules (44 malignant, 52 benign) from 58 patients was used in this study. The 3D AC model is based on two-dimensional AC with the addition of three new energy components to take advantage of 3D information: (1) 3D gradient, which guides the active contour to seek the object surface, (2) 3D curvature, which imposes a smoothness constraint in the z direction, and (3) mask energy, which penalizes contours that grow beyond the pleura or thoracic wall. The search for the best energy weights in the 3D AC model was guided by a simplex optimization method. Morphological and gray-level features were extracted from the segmented nodule. The rubber band straightening transform (RBST) was applied to the shell of voxels surrounding the nodule. Texture features based on run-length statistics were extracted from the RBST image. A linear discriminant analysis classifier with stepwise feature selection was designed using a second simplex optimization to select the most effective features. Leave-one-case-out resampling was used to train and test the CAD system. The system achieved a test area under the receiver operating characteristic curve (A(z)) of 0.83 +/- 0.04. Our preliminary results indicate that use of the 3D AC model and the 3D texture features surrounding the nodule is a promising approach to the segmentation and classification of lung nodules with CAD. The segmentation performance of the 3D AC model trained with our data set was evaluated with 23 nodules available in the Lung Image Database Consortium (LIDC). The lung nodule volumes segmented by the 3D

  19. TU-CD-BRA-01: A Novel 3D Registration Method for Multiparametric Radiological Images

    SciTech Connect

    Akhbardeh, A; Parekth, VS; Jacobs, MA

    2015-06-15

    Purpose: Multiparametric and multimodality radiological imaging methods, such as magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET), provide multiple types of tissue contrast and anatomical information for clinical diagnosis. However, these radiological modalities are acquired using very different technical parameters, e.g., field of view (FOV), matrix size, and scan planes, which can lead to challenges in registering the different data sets. Therefore, we developed a hybrid registration method based on 3D wavelet transformation and 3D interpolation that performs 3D resampling and rotation of the target radiological images without loss of information. Methods: T1-weighted, T2-weighted, diffusion-weighted imaging (DWI), dynamic contrast-enhanced (DCE) MRI and PET/CT were used in the registration algorithm for breast and prostate data at 3T MRI and for multimodality (PET/CT) cases. The hybrid registration scheme consists of several steps to reslice and match each modality using a combination of 3D wavelets, interpolations, and affine registration steps. First, orthogonal reslicing is performed to equalize FOV, matrix sizes and the number of slices using the wavelet transformation. Second, angular resampling of the target data is performed to match the reference data. Finally, using the optimized angles from resampling, 3D registration is performed using a similarity transformation (scaling and translation) between the reference and the resliced target volume. After registration, the mean square error (MSE) and Dice similarity (DS) between the reference and registered target volumes were calculated. Results: The 3D registration method registered synthetic and clinical data with a significant improvement (p<0.05) in the overlap between anatomical structures. After transforming and deforming the synthetic data, the MSE and Dice similarity were 0.12 and 0.99. The average improvement of the MSE in breast was 62% (0.27 to 0.10) and prostate was

  20. Fast 3D T2-weighted imaging using variable flip angle transition into driven equilibrium (3D T2-TIDE) balanced SSFP for prostate imaging at 3T.

    PubMed

    Srinivasan, Subashini; Wu, Holden H; Sung, Kyunghyun; Margolis, Daniel J A; Ennis, Daniel B

    2015-08-01

    Three-dimensional (3D) T2-weighted fast spin echo (FSE) imaging of the prostate currently requires long acquisition times. Our objective was to develop a fast 3D T2-weighted sequence for prostate imaging at 3T using a variable flip angle transition into driven equilibrium (T2-TIDE) scheme. 3D T2-TIDE uses interleaved spiral-out phase encode ordering to efficiently sample the ky-kz phase encodes and also uses the transient balanced steady-state free precession signal to acquire the center of k-space for T2-weighted imaging. Bloch simulations and images from 10 healthy subjects were acquired to evaluate the performance of 3D T2-TIDE compared to 3D FSE. 3D T2-TIDE images were acquired in 2:54 minutes compared to 7:02 minutes for 3D FSE with identical imaging parameters. The signal-to-noise ratio (SNR) efficiency was significantly higher for 3D T2-TIDE compared to 3D FSE in nearly all tissues, including periprostatic fat (45 ± 12 vs. 31 ± 7, P < 0.01), gluteal fat (48 ± 8 vs. 41 ± 10, P = 0.12), right peripheral zone (20 ± 4 vs. 16 ± 8, P = 0.12), left peripheral zone (17 ± 2 vs. 12 ± 3, P < 0.01), and anterior fibromuscular stroma (12 ± 4 vs. 4 ± 2, P < 0.01). 3D T2-TIDE images of the prostate can be acquired quickly with SNR efficiency that exceeds that of 3D FSE. © 2014 Wiley Periodicals, Inc.

  1. 3D shape analysis for early diagnosis of malignant lung nodules.

    PubMed

    El-Baz, Ayman; Nitzken, Matthew; Elnakib, Ahmed; Khalifa, Fahmi; Gimel'farb, Georgy; Falk, Robert; El-Ghar, Mohamed Abou

    2011-01-01

    An alternative method of diagnosing malignant lung nodules by their shape, rather than conventional growth rate, is proposed. The 3D surfaces of the detected lung nodules are delineated by spherical harmonic analysis that represents a 3D surface of the lung nodule supported by the unit sphere with a linear combination of special basis functions, called Spherical Harmonics (SHs). The proposed 3D shape analysis is carried out in five steps: (i) 3D lung nodule segmentation with a deformable 3D boundary controlled by a new prior visual appearance model; (ii) 3D Delaunay triangulation to construct a 3D mesh model of the segmented lung nodule surface; (iii) mapping this model to the unit sphere; (iv) computing the SHs for the surface; and (v) determining the number of the SHs to delineate the lung nodule. We describe the lung nodule shape complexity with a new shape index, the estimated number of the SHs, and use it for the K-nearest classification into malignant and benign lung nodules. Preliminary experiments on 327 lung nodules (153 malignant and 174 benign) resulted in a classification accuracy of 93.6%, showing that the proposed method is a promising supplement to current technologies for the early diagnosis of lung cancer.
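
    As a rough sketch of steps (iv) and (v) of this pipeline, the snippet below fits spherical-harmonic coefficients to a radial surface function sampled on the unit sphere and returns the smallest harmonic degree that reproduces the surface within a tolerance; the least-squares fit and the tolerance are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.special import sph_harm

def sh_design_matrix(theta, phi, l_max):
    """Real-valued spherical-harmonic basis evaluated at (polar, azimuth) samples."""
    cols = []
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            y = sph_harm(abs(m), l, phi, theta)  # scipy order: (m, n, azimuth, polar)
            cols.append(np.real(y) if m >= 0 else np.imag(y))
    return np.column_stack(cols)

def shape_index(radii, theta, phi, l_max=15, tol=1e-2):
    """Smallest SH degree that approximates the radial surface function r(theta, phi)
    within 'tol' RMS error -- used here as a stand-in for the shape complexity index."""
    for l in range(1, l_max + 1):
        A = sh_design_matrix(theta, phi, l)
        coef, *_ = np.linalg.lstsq(A, radii, rcond=None)
        if np.sqrt(np.mean((A @ coef - radii) ** 2)) < tol:
            return l
    return l_max
```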

  2. 3D shape analysis for early diagnosis of malignant lung nodules.

    PubMed

    El-Bazl, Ayman; Nitzken, Matthew; Khalifa, Fahmi; Elnakib, Ahmed; Gimel'farb, Georgy; Falk, Robert; El-Ghar, Mohammed Abo

    2011-01-01

    An alternative method for diagnosing malignant lung nodules by their shape rather than conventional growth rate is proposed. The 3D surfaces of the detected lung nodules are delineated by spherical harmonic analysis, which represents a 3D surface of the lung nodule supported by the unit sphere with a linear combination of special basis functions, called spherical harmonics (SHs). The proposed 3D shape analysis is carried out in five steps: (i) 3D lung nodule segmentation with a deformable 3D boundary controlled by two probabilistic visual appearance models (the learned prior and the estimated current appearance one); (ii) 3D Delaunay triangulation to construct a 3D mesh model of the segmented lung nodule surface; (iii) mapping this model to the unit sphere; (iv) computing the SHs for the surface, and (v) determining the number of the SHs to delineate the lung nodule. We describe the lung nodule shape complexity with a new shape index, the estimated number of the SHs, and use it for the K-nearest classification to distinguish malignant and benign lung nodules. Preliminary experiments on 327 lung nodules (153 malignant and 174 benign) resulted in 93.6% correct classification (at a 95% confidence interval), showing that the proposed method is a promising supplement to current technologies for the early diagnosis of lung cancer.

  3. Automatic cerebrospinal fluid segmentation in non-contrast CT images using a 3D convolutional network

    NASA Astrophysics Data System (ADS)

    Patel, Ajay; van de Leemput, Sil C.; Prokop, Mathias; van Ginneken, Bram; Manniesing, Rashindra

    2017-03-01

    Segmentation of anatomical structures is fundamental in the development of computer aided diagnosis systems for cerebral pathologies. Manual annotations are laborious, time consuming and subject to human error and observer variability. Accurate quantification of cerebrospinal fluid (CSF) can be employed as a morphometric measure for diagnosis and patient outcome prediction. However, segmenting CSF in non-contrast CT images is complicated by low soft tissue contrast and image noise. In this paper we propose a state-of-the-art method using a multi-scale three-dimensional (3D) fully convolutional neural network (CNN) to automatically segment all CSF within the cranial cavity. The method is trained on a small dataset comprised of four manually annotated cerebral CT images. Quantitative evaluation of a separate test dataset of four images shows a mean Dice similarity coefficient of 0.87 +/- 0.01 and mean absolute volume difference of 4.77 +/- 2.70 %. The average prediction time was 68 seconds. Our method allows for fast and fully automated 3D segmentation of cerebral CSF in non-contrast CT, and shows promising results despite a limited amount of training data.
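
    A minimal 3D fully convolutional network of the kind described in this record can be sketched in PyTorch as follows; the layer widths, patch size and single training step are illustrative assumptions and do not reproduce the multi-scale architecture used in the paper.

```python
import torch
import torch.nn as nn

class Small3DSegNet(nn.Module):
    """Tiny 3D fully convolutional network: three 3x3x3 convolutional layers
    followed by a 1x1x1 classification layer producing per-voxel CSF scores."""
    def __init__(self, in_channels=1, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(32, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Conv3d(32, n_classes, kernel_size=1)

    def forward(self, x):
        return self.classifier(self.features(x))

# One training step on a random patch (shapes are placeholders).
net = Small3DSegNet()
ct_patch = torch.randn(1, 1, 32, 64, 64)          # (batch, channel, z, y, x)
labels = torch.randint(0, 2, (1, 32, 64, 64))     # voxel-wise CSF annotation
loss = nn.CrossEntropyLoss()(net(ct_patch), labels)
loss.backward()
```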

  4. Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration

    NASA Astrophysics Data System (ADS)

    Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.

    2012-02-01

    The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in terms of obtaining accurate measures, especially in providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to X-ray plates. Accurate results have been reported using postoperative CT. However, its extensive usage in clinical routine is hampered by the requirement of CT scans for individual patients, which are not available for most ACL reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the proposed method, the alignment is achieved by using a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation maximization (EM) algorithm. Compared with the ground truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results with errors of (-0.43°+/-1.19°, 0.45°+/-2.17°, 0.23°+/-1.05°) and (0.03+/-0.55, -0.03+/-0.54, -2.73+/-1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with a high accuracy of 0.53+/-0.30 mm distance error.

  5. 3-D Imaging Systems for Agricultural Applications-A Review.

    PubMed

    Vázquez-Arellano, Manuel; Griepentrog, Hans W; Reiser, David; Paraforos, Dimitris S

    2016-04-29

    Increasing resource efficiency through the automation of agriculture requires more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state of the art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have in providing information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this review consists of an overview of the different optical 3-D vision techniques, based on their basic principles; afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  6. 3-D Imaging Systems for Agricultural Applications—A Review

    PubMed Central

    Vázquez-Arellano, Manuel; Griepentrog, Hans W.; Reiser, David; Paraforos, Dimitris S.

    2016-01-01

    Increasing resource efficiency through the automation of agriculture requires more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state of the art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have in providing information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this review consists of an overview of the different optical 3-D vision techniques, based on their basic principles; afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  7. 3D tomographic breast imaging in-vivo using a handheld optical imager

    NASA Astrophysics Data System (ADS)

    Erickson, Sarah J.; Martinez, Sergio; Gonzalez, Jean; Roman, Manuela; Nunez, Annie; Godavarty, Anuradha

    2011-02-01

    Hand-held optical imagers are currently being developed toward clinical imaging of breast tissue. However, the hand-held optical devices developed to date are not able to coregister the images to the tissue geometry for 3D tomography. We have developed a hand-held optical imager which has demonstrated automated coregistered imaging and 3D tomography in phantoms, and validated coregistered imaging in normal human subjects. Herein, automated coregistered imaging is performed in a normal human subject with a 0.45 cm3 spherical target filled with 1 μM indocyanine green (a fluorescent contrast agent) placed superficially underneath the flap of the breast tissue. The coregistered image data are used in an approximate extended Kalman filter (AEKF) based reconstruction algorithm to recover the 3D location of the target within the breast tissue geometry. The results demonstrate the feasibility of performing 3D tomographic imaging and recovering a fluorescent target in the breast tissue of a human subject for the first time using a hand-held optical imager. The significance of this work is toward clinical imaging of breast tissue for cancer diagnostics and therapy monitoring.

  8. Implementation of 3D prostrate ring-scanning mechanism for NIR diffuse optical imaging phantom validation

    NASA Astrophysics Data System (ADS)

    Yu, Jhao-Ming; Chen, Liang-Yu; Pan, Min-Cheng; Hsu, Ya-Fen; Pan, Min-Chun

    2015-03-01

    Diffuse optical imaging (DOI), which provides functional information about tissues, has drawn great attention over the last two decades. Near-infrared (NIR) DOI systems, composed of a scanning bench, an opto-electrical measurement module, system control, and data processing and image reconstruction schemes, are developed for the screening and diagnosis of breast tumors. In most systems, a scanning bench with a fixed source-and-detector configuration limits the computed image resolution to some extent. To cope with this issue, we propose, design and implement 3D prostrate ring-scanning equipment for NIR DOI with flexible combinations of illumination and detection and with radial, circular and vertical movement, without the hard compression of breast tissue required by imaging systems that use or incorporate an X-ray mammographic bench. In particular, a rotation-sliding-and-moving mechanism was designed to guide the movement of the source and detection channels. Following our previous justification of synthesized image reconstruction, in this paper validation using varied phantoms is further conducted, and 3D image reconstruction of their absorption and scattering coefficients is illustrated through computation with our in-house coded schemes. The source and detection NIR data are acquired to reconstruct the 3D images through operation of the scanning bench in the vertical, radial and circular directions. Rather than a fixed configuration, the proposed screening/diagnosing equipment offers flexibility for optical-channel expansion with a compromise among construction cost, operation time, and spatial resolution of the reconstructed μa and μs' images.

  9. Clinical application of modern imaging technology: 3D information acquiring and image processing

    NASA Astrophysics Data System (ADS)

    Wang, Dezong

    1994-05-01

    In current clinical practice, images from B-mode ultrasound, X-ray, X-CT and MRI are widely used. All of these are 2D pictures in which the 3D information is blended together, and this blended information can lead doctors astray. If the images are processed, such mistakes can be reduced. In this paper the processing methods for 2D images are described and examples of clinical applications are given. The methods for acquiring 3D information from 2D images are explained. A stereo image of the liver and cancer is shown, and methods for calculating the areas and volumes of the liver and cancer are provided.

  10. Clinical application of 3D arterial spin-labeled brain perfusion imaging for Alzheimer disease: comparison with brain perfusion SPECT.

    PubMed

    Takahashi, H; Ishii, K; Hosokawa, C; Hyodo, T; Kashiwagi, N; Matsuki, M; Ashikaga, R; Murakami, T

    2014-05-01

    Alzheimer disease is the most common neurodegenerative disorder with dementia, and a practical and economic biomarker for diagnosis of Alzheimer disease is needed. Three-dimensional arterial spin-labeling, with its high signal-to-noise ratio, enables measurement of cerebral blood flow precisely without any extrinsic tracers. We evaluated the performance of 3D arterial spin-labeling compared with SPECT, and demonstrated the 3D arterial spin-labeled imaging characteristics in the diagnosis of Alzheimer disease. This study included 68 patients with clinically suspected Alzheimer disease who underwent both 3D arterial spin-labeling and SPECT imaging. Two readers independently assessed both images. Kendall W coefficients of concordance (K) were computed, and receiver operating characteristic analyses were performed for each reader. The differences between the images in regional perfusion distribution were evaluated by means of statistical parametric mapping, and the incidence of hypoperfusion of the cerebral watershed area, referred to as "borderzone sign" in the 3D arterial spin-labeled images, was determined. Readers showed K = 0.82/0.73 for SPECT/3D arterial spin-labeled imaging, and the respective areas under the receiver operating characteristic curve were 0.82/0.69 for reader 1 and 0.80/0.69 for reader 2. Statistical parametric mapping showed that the perisylvian and medial parieto-occipital perfusion in the arterial spin-labeled images was significantly higher than that in the SPECT images. Borderzone sign was observed on 3D arterial spin-labeling in 70% of patients misdiagnosed with Alzheimer disease. The diagnostic performance of 3D arterial spin-labeling and SPECT for Alzheimer disease was almost equivalent. Three-dimensional arterial spin-labeled imaging was more influenced by hemodynamic factors than was SPECT imaging. © 2014 by American Journal of Neuroradiology.

  11. High-resolution, low-dose phase contrast X-ray tomography for 3D diagnosis of human breast cancers

    PubMed Central

    Zhao, Yunzhe; Brun, Emmanuel; Coan, Paola; Huang, Zhifeng; Sztrókay, Aniko; Diemoz, Paul Claude; Liebhardt, Susanne; Mittone, Alberto; Gasilov, Sergei; Miao, Jianwei; Bravin, Alberto

    2012-01-01

    Mammography is the primary imaging tool for screening and diagnosis of human breast cancers, but ∼10–20% of palpable tumors are not detectable on mammograms and only about 40% of biopsied lesions are malignant. Here we report a high-resolution, low-dose phase contrast X-ray tomographic method for 3D diagnosis of human breast cancers. By combining phase contrast X-ray imaging with an image reconstruction method known as equally sloped tomography, we imaged a human breast in three dimensions and identified a malignant cancer with a pixel size of 92 μm and a radiation dose less than that of dual-view mammography. According to a blind evaluation by five independent radiologists, our method can reduce the radiation dose and acquisition time by ∼74% relative to conventional phase contrast X-ray tomography, while maintaining high image resolution and image contrast. These results demonstrate that high-resolution 3D diagnostic imaging of human breast cancers can, in principle, be performed at clinically compatible doses. PMID:23091003

  12. Intervertebral disc segmentation in MR images with 3D convolutional networks

    NASA Astrophysics Data System (ADS)

    Korez, Robert; Ibragimov, Bulat; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž

    2017-02-01

    The vertebral column is a complex anatomical construct, composed of vertebrae and intervertebral discs (IVDs) supported by ligaments and muscles. During life, all components undergo degenerative changes, which may in some cases cause severe, chronic and debilitating low back pain. The main diagnostic challenge is to locate the pain generator, and degenerated IVDs have been identified to act as such. Accurate and robust segmentation of IVDs is therefore a prerequisite for computer-aided diagnosis and quantification of IVD degeneration, and can be also used for computer-assisted planning and simulation in spinal surgery. In this paper, we present a novel fully automated framework for supervised segmentation of IVDs from three-dimensional (3D) magnetic resonance (MR) spine images. By considering global intensity appearance and local shape information, a landmark-based approach is first used for the detection of IVDs in the observed image, which then initializes the segmentation of IVDs by coupling deformable models with convolutional networks (ConvNets). For this purpose, a 3D ConvNet architecture was designed that learns rich high-level appearance representations from a training repository of IVDs, and then generates spatial IVD probability maps that guide deformable models towards IVD boundaries. By applying the proposed framework to 15 3D MR spine images containing 105 IVDs, quantitative comparison of the obtained against reference IVD segmentations yielded an overall mean Dice coefficient of 92.8%, mean symmetric surface distance of 0.4 mm and Hausdorff surface distance of 3.7 mm.
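
    The surface-based evaluation metrics reported in this record (mean symmetric surface distance and Hausdorff distance) can be computed from two binary segmentations roughly as follows; the boundary extraction and brute-force distance computation are simplifications, and the voxel spacing is a placeholder.

```python
import numpy as np
from scipy.spatial.distance import cdist, directed_hausdorff

def surface_points(mask):
    """Boundary-voxel coordinates of a binary 3D mask (6-neighbourhood erosion)."""
    m = mask.astype(bool)
    eroded = np.zeros_like(m)
    eroded[1:-1, 1:-1, 1:-1] = (m[1:-1, 1:-1, 1:-1]
        & m[:-2, 1:-1, 1:-1] & m[2:, 1:-1, 1:-1]
        & m[1:-1, :-2, 1:-1] & m[1:-1, 2:, 1:-1]
        & m[1:-1, 1:-1, :-2] & m[1:-1, 1:-1, 2:])
    return np.argwhere(m & ~eroded).astype(float)

def surface_metrics(seg, ref, spacing=(1.0, 1.0, 1.0)):
    """Mean symmetric surface distance and Hausdorff distance (in mm) between
    an automatic and a reference segmentation (e.g. of an IVD)."""
    a = surface_points(seg) * spacing
    b = surface_points(ref) * spacing
    d = cdist(a, b)
    mssd = 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
    hd = max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])
    return mssd, hd
```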

  13. 3-D Adaptive Sparsity Based Image Compression with Applications to Optical Coherence Tomography

    PubMed Central

    Fang, Leyuan; Li, Shutao; Kang, Xudong; Izatt, Joseph A.; Farsiu, Sina

    2015-01-01

    We present a novel general-purpose compression method for tomographic images, termed 3D adaptive sparse representation based compression (3D-ASRC). In this paper, we focus on applications of 3D-ASRC for the compression of ophthalmic 3D optical coherence tomography (OCT) images. The 3D-ASRC algorithm exploits correlations among adjacent OCT images to improve compression performance, yet is sensitive to preserving their differences. Due to the inherent denoising mechanism of the sparsity based 3D-ASRC, the quality of the compressed images is often better than that of the raw images they are based on. Experiments on clinical-grade retinal OCT images demonstrate the superiority of the proposed 3D-ASRC over other well-known compression methods. PMID:25561591
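
    Although not the 3D-ASRC algorithm itself, the general idea of sparsity-based compression can be illustrated with a patch-based dictionary-learning sketch in scikit-learn; the patch size, dictionary size and sparsity level below are arbitrary assumptions, and the random array stands in for an OCT B-scan.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                               reconstruct_from_patches_2d)

bscan = np.random.rand(128, 128)                 # placeholder OCT B-scan
patches = extract_patches_2d(bscan, (8, 8))
X = patches.reshape(len(patches), -1)
mean = X.mean(axis=1, keepdims=True)             # per-patch DC removed before coding

# Learn a small dictionary on a subset of patches, then code every patch with
# only a few non-zero coefficients (OMP); the codes plus the dictionary form
# the "compressed" representation in this toy example.
dico = DictionaryLearning(n_components=32, transform_algorithm='omp',
                          transform_n_nonzero_coefs=4, max_iter=10, random_state=0)
dico.fit((X - mean)[::20])
codes = dico.transform(X - mean)

recon_patches = (codes @ dico.components_ + mean).reshape(patches.shape)
reconstructed = reconstruct_from_patches_2d(recon_patches, bscan.shape)
```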

  14. A software tool for automatic classification and segmentation of 2D/3D medical images

    NASA Astrophysics Data System (ADS)

    Strzelecki, Michal; Szczypinski, Piotr; Materka, Andrzej; Klepaczko, Artur

    2013-02-01

    Modern medical diagnosis utilizes techniques of visualization of human internal organs (CT, MRI) or of its metabolism (PET). However, evaluation of acquired images made by human experts is usually subjective and qualitative only. Quantitative analysis of MR data, including tissue classification and segmentation, is necessary to perform e.g. attenuation compensation, motion detection, and correction of partial volume effect in PET images, acquired with PET/MR scanners. This article presents briefly a MaZda software package, which supports 2D and 3D medical image analysis aiming at quantification of image texture. MaZda implements procedures for evaluation, selection and extraction of highly discriminative texture attributes combined with various classification, visualization and segmentation tools. Examples of MaZda application in medical studies are also provided.

  15. Dense 3D Point Cloud Generation from UAV Images from Image Matching and Global Optimization

    NASA Astrophysics Data System (ADS)

    Rhee, S.; Kim, T.

    2016-06-01

    3D spatial information from unmanned aerial vehicles (UAV) images is usually provided in the form of 3D point clouds. For various UAV applications, it is important to generate dense 3D point clouds automatically from over the entire extent of UAV images. In this paper, we aim to apply image matching for generation of local point clouds over a pair or group of images and global optimization to combine local point clouds over the whole region of interest. We tried to apply two types of image matching, an object space-based matching technique and an image space-based matching technique, and to compare the performance of the two techniques. The object space-based matching used here sets a list of candidate height values for a fixed horizontal position in the object space. For each height, its corresponding image point is calculated and similarity is measured by grey-level correlation. The image space-based matching used here is a modified relaxation matching. We devised a global optimization scheme for finding optimal pairs (or groups) to apply image matching, defining local match region in image- or object- space, and merging local point clouds into a global one. For optimal pair selection, tiepoints among images were extracted and stereo coverage network was defined by forming a maximum spanning tree using the tiepoints. From experiments, we confirmed that through image matching and global optimization, 3D point clouds were generated successfully. However, results also revealed some limitations. In case of image-based matching results, we observed some blanks in 3D point clouds. In case of object space-based matching results, we observed more blunders than image-based matching ones and noisy local height variations. We suspect these might be due to inaccurate orientation parameters. The work in this paper is still ongoing. We will further test our approach with more precise orientation parameters.

  16. A novel 3D shape descriptor for automatic retrieval of anatomical structures from medical images

    NASA Astrophysics Data System (ADS)

    Nunes, Fátima L. S.; Bergamasco, Leila C. C.; Delmondes, Pedro H.; Valverde, Miguel A. G.; Jackowski, Marcel P.

    2017-03-01

    Content-based image retrieval (CBIR) aims at retrieving from a database objects that are similar to an object provided by a query, by taking into consideration a set of extracted features. While CBIR has been widely applied in the two-dimensional image domain, the retrieval of 3D objects from medical image datasets using CBIR remains to be explored. In this context, the development of descriptors that can capture information specific to organs or structures is desirable. In this work, we focus on the retrieval of two anatomical structures commonly imaged by Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) techniques, the left ventricle of the heart and blood vessels. Towards this aim, we developed the Area-Distance Local Descriptor (ADLD), a novel 3D local shape descriptor that employs mesh geometry information, namely facet area and distance from centroid to surface, to identify shape changes. Because ADLD only considers surface meshes extracted from volumetric medical images, it substantially diminishes the amount of data to be analyzed. A 90% precision rate was obtained when retrieving both convex (left ventricle) and non-convex structures (blood vessels), allowing for detection of abnormalities associated with changes in shape. Thus, ADLD has the potential to aid in the diagnosis of a wide range of vascular and cardiac diseases.
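
    The two mesh quantities named in this record (facet area and distance from the centroid to the surface) can be computed from a triangular mesh as in the sketch below; the joint-histogram summary and the normalisation are assumptions made for illustration, not the published ADLD definition.

```python
import numpy as np

def adld_like_descriptor(vertices, faces, n_bins=16):
    """Descriptor in the spirit of ADLD: per-facet area and centroid-to-facet
    distance, summarised as a joint histogram (the binning is an assumption)."""
    v = np.asarray(vertices, dtype=float)            # (n_vertices, 3)
    f = np.asarray(faces, dtype=int)                 # (n_faces, 3) vertex indices
    tri = v[f]                                       # (n_faces, 3, 3)
    # Facet areas via the cross product of two edge vectors.
    areas = 0.5 * np.linalg.norm(
        np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0]), axis=1)
    # Distance from the global mesh centroid to each facet centre.
    centroid = v.mean(axis=0)
    dists = np.linalg.norm(tri.mean(axis=1) - centroid, axis=1)
    hist, _, _ = np.histogram2d(areas / areas.max(), dists / dists.max(),
                                bins=n_bins, range=[[0, 1], [0, 1]])
    return (hist / hist.sum()).ravel()               # scale-normalised feature vector

# Two structures could then be compared by a histogram distance, e.g.
# np.linalg.norm(adld_like_descriptor(v1, f1) - adld_like_descriptor(v2, f2)).
```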

  17. Segmented images and 3D images for studying the anatomical structures in MRIs

    NASA Astrophysics Data System (ADS)

    Lee, Yong Sook; Chung, Min Suk; Cho, Jae Hyun

    2004-05-01

    For identifying pathological findings in MRIs, the anatomical structures in MRIs should be identified in advance. For studying the anatomical structures in MRIs, an educational tool that includes the horizontal, coronal and sagittal MRIs of the entire body, corresponding segmented images, 3D images, and browsing software is necessary. Such an educational tool, however, is hard to obtain. Therefore, in this research, such an educational tool, which helps medical students and doctors study the anatomical structures in MRIs, was made as follows. A healthy, young Korean male adult with a standard body shape was selected. Six hundred thirteen horizontal MRIs of the entire body were scanned and input into a personal computer. Sixty anatomical structures in the horizontal MRIs were segmented to make horizontal segmented images. Coronal and sagittal MRIs and coronal and sagittal segmented images were made. 3D images of the anatomical structures in the segmented images were reconstructed by a surface rendering method. Browsing software for the MRIs, segmented images, and 3D images was developed. This educational tool, which includes horizontal, coronal and sagittal MRIs of the entire body, corresponding segmented images, 3D images, and browsing software, is expected to help medical students and doctors study anatomical structures in MRIs.

  18. Imaging 3D strain field monitoring during hydraulic fracturing processes

    NASA Astrophysics Data System (ADS)

    Chen, Rongzhang; Zaghloul, Mohamed A. S.; Yan, Aidong; Li, Shuo; Lu, Guanyi; Ames, Brandon C.; Zolfaghari, Navid; Bunger, Andrew P.; Li, Ming-Jun; Chen, Kevin P.

    2016-05-01

    In this paper, we present a distributed fiber optic sensing scheme to study 3D strain fields inside concrete cubes during the hydraulic fracturing process. Optical fibers embedded in concrete were used to monitor the 3D strain field build-up under external hydraulic pressures. High-spatial-resolution strain fields were interrogated via in-fiber Rayleigh backscattering with 1-cm spatial resolution using optical frequency domain reflectometry. The fiber optic sensor scheme presented in this paper provides scientists and engineers with a unique laboratory tool for understanding hydraulic fracturing processes in various rock formations and their impacts on the environment.

  19. Digital diagnosis of medical images

    NASA Astrophysics Data System (ADS)

    Heinonen, Tomi; Kuismin, Raimo; Jormalainen, Raimo; Dastidar, Prasun; Frey, Harry; Eskola, Hannu

    2001-08-01

    The popularity of digital imaging devices and PACS installations has increased during the last years. Still, images are analyzed and diagnosed using conventional techniques. Our research group began to study the requirements for digital image diagnostic methods to be applied together with PACS systems. The research was focused on various image analysis procedures (e.g., segmentation, volumetry, 3D visualization, image fusion, anatomic atlas) that could be useful in medical diagnosis. We have developed Image Analysis software (www.medimag.net) to enable several image-processing applications in medical diagnosis, such as volumetry, multimodal visualization, and 3D visualization. We have also developed a commercial scalable image archive system (ActaServer, supports DICOM) based on component technology (www.acta.fi), and several telemedicine applications. All of the software and systems operate in an NT environment and are in clinical use in several hospitals. The analysis software has been applied in clinical work and utilized in numerous patient cases (500 patients). This method has been used in the diagnosis, therapy and follow-up of various diseases of the central nervous system (CNS), respiratory system (RS) and human reproductive system (HRS). In many of these diseases, e.g. Systemic Lupus Erythematosus (CNS), nasal airway diseases (RS) and ovarian tumors (HRS), these methods have been used for the first time in clinical work. According to our results, digital diagnosis improves diagnostic capabilities, and together with PACS installations it will become a standard tool during the next decade by enabling more accurate diagnosis and patient follow-up.

  20. Implementation of wireless 3D stereo image capture system and 3D exaggeration algorithm for the region of interest

    NASA Astrophysics Data System (ADS)

    Ham, Woonchul; Song, Chulgyu; Lee, Kangsan; Badarch, Luubaatar

    2015-05-01

    In this paper, we introduce a mobile embedded system implemented for capturing stereo images based on two CMOS camera modules. We use WinCE as the operating system and capture the stereo image by using a device driver for the CMOS camera interface and DirectDraw API functions. We also comment on the GPU hardware and CUDA programming used to implement the 3D exaggeration algorithm for the ROI (region of interest) by adjusting and synthesizing the disparity values of the ROI in real time. We comment on the aperture pattern used for deblurring of the CMOS camera module, based on the Kirchhoff diffraction formula, and clarify why a sharper and clearer image can be obtained by blocking some portion of the aperture or by geometric sampling. The synthesized stereo image is monitored in real time on a shutter-glass-type three-dimensional LCD monitor, and the disparity values of each segment are analyzed to demonstrate the validity of the emphasizing effect on the ROI.

  1. Robust Reconstruction and Generalized Dual Hahn Moments Invariants Extraction for 3D Images

    NASA Astrophysics Data System (ADS)

    Mesbah, Abderrahim; Zouhri, Amal; El Mallahi, Mostafa; Zenkouar, Khalid; Qjidaa, Hassan

    2017-03-01

    In this paper, we introduce a new set of 3D weighted dual Hahn moments which are orthogonal on a non-uniform lattice and whose polynomials are numerically stable with respect to scale, consequently producing a set of weighted orthonormal polynomials. The dual Hahn polynomials are the general case of the Tchebichef and Krawtchouk polynomials, and the orthogonality of dual Hahn moments eliminates the numerical approximations. The computational aspects and symmetry property of 3D weighted dual Hahn moments are discussed in detail. To address their lack of invariance for large 3D images, which leads to overflow issues, a generalized version of these moments, denoted 3D generalized weighted dual Hahn moment invariants (3D-GWDHMIs), is presented, expressed as linear combinations of regular geometric moments. For 3D pattern recognition, a generalized expression of 3D weighted dual Hahn moment invariants under translation, scaling and rotation transformations has been proposed, providing a new set of 3D-GWDHMIs. In experimental studies, the local and global reconstruction capability of the 3D-WDHMs on noise-free and noisy 3D images has been compared with that of other orthogonal moments such as 3D Tchebichef and 3D Krawtchouk moments using the Princeton Shape Benchmark database. For pattern recognition using the 3D-GWDHMIs as 3D object descriptors, the experimental results confirm that the proposed algorithm is more robust than other orthogonal moments for pattern classification of 3D images with and without noise.

  2. 3D dentofacial photogrammetry reference values: a novel approach to orthodontic diagnosis.

    PubMed

    Masoud, Mohamed I; Bansal, Neetu; C Castillo, Jose; Manosudprasit, Amornrut; Allareddy, Veerasathpurush; Haghi, Arshan; Hawkins, Hannah C; Otárola-Castillo, Erik

    2017-04-01

    Orthodontic diagnostic standards generally use the cranial base as a reference and rely on samples selected by orthodontists. The purpose of this study was to provide male and female standards for a novel non-radiographic approach for orthodontic diagnosis that utilizes 3D dentofacial photogrammetry using the eyes and natural head orientation as references instead of the cranial base. One hundred and eighty females and 200 males between the ages of 18 and 35 years from 2 modeling agencies were orthodontically screened for near ideal occlusion. Subjects that met the inclusion criteria were rated by a sample of 40 lay people for attractiveness on a visual analogue scale. The final sample that had 3D facial and dental imaging included 49 subjects (25 males and 24 females) with near ideal occlusion who were considered attractive by the public. Inter- and intra-examiner ICCs were greater than 0.8 for both landmarking and indexing. Relative to a coronal plane contacting the pupils (MC), the mean sagittal position of the alar curvature (representing the nasomaxillary complex) was 14.36 ± 3.08 mm in males and 12.4 ± 3.58 mm in females. The sagittal position of soft tissue pogonion relative to the pupils was 14.84 ± 3.63 mm in males and 12.78 ± 5.68 mm in females. The angle between the alar curvature and pogonion relative to the pupils was 9° in males and 10° in females. With the exception of the occlusal plane, which was steeper in females, no ratios or angular facial measurements showed a significant gender difference. Relative to MC, males had more proclined upper incisors (20° vs 16°) and more retroclined lower incisors (27° vs 31°; P > 0.05). A Procrustes ANOVA and permutation test showed that the shapes of males and females are different enough to be considered two distinct populations. 1. When using the proposed method for orthodontic diagnosis, male and female patients should be compared to their respective dentofacial standards. 2. Validation of the proposed method

  3. Free segmentation in rendered 3D images through synthetic impulse response in integral imaging

    NASA Astrophysics Data System (ADS)

    Martínez-Corral, M.; Llavador, A.; Sánchez-Ortiga, E.; Saavedra, G.; Javidi, B.

    2016-06-01

    Integral Imaging is a technique that has the capability of providing not only the spatial, but also the angular information of three-dimensional (3D) scenes. Important applications include 3D display and digital post-processing, for example depth reconstruction from integral images. In this contribution we propose a new reconstruction method that takes into account the integral image and a simplified version of the impulse response function (IRF) of the integral imaging (InI) system to perform a two-dimensional (2D) deconvolution. The IRF of an InI system has a periodic structure that depends directly on the axial position of the object. Considering different periods of the IRF, we recover the depth information of the 3D scene by deconvolution. An advantage of our method is that it is possible to obtain nonconventional reconstructions by considering alternative synthetic impulse responses. Our experiments show the feasibility of the proposed method.
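
    A simplified version of the depth-dependent deconvolution idea can be sketched with an FFT-based Wiener filter: sweeping the period of a synthetic impulse response and deconvolving the integral image once per candidate period. The periodic Gaussian-spot impulse response, the candidate periods and the noise-to-signal constant below are assumptions, not the IRF model used by the authors.

```python
import numpy as np

def wiener_deconvolve(image, psf, nsr=1e-2):
    """FFT-based Wiener deconvolution of a 2D image with a given PSF."""
    H = np.fft.fft2(psf, s=image.shape)
    G = np.fft.fft2(image)
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F))

def periodic_psf(shape, period, sigma=1.0):
    """Toy periodic impulse response: a comb of Gaussian spots whose period
    stands in for the (depth-dependent) period of the InI system's IRF."""
    psf = np.zeros(shape)
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    for cy in range(0, shape[0], period):
        for cx in range(0, shape[1], period):
            psf += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

# Deconvolve a placeholder integral image at several candidate IRF periods; the
# sharpest result would indicate the axial position of the object.
integral_image = np.random.rand(256, 256)
depth_slices = {p: wiener_deconvolve(integral_image, periodic_psf((256, 256), p))
                for p in (16, 24, 32)}
```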

  4. Dual-view integral imaging 3D display using polarizer parallax barriers.

    PubMed

    Wu, Fei; Wang, Qiong-Hua; Luo, Cheng-Gao; Li, Da-Hai; Deng, Huan

    2014-04-01

    We propose a dual-view integral imaging (DVII) 3D display using polarizer parallax barriers (PPBs). The DVII 3D display consists of a display panel, a microlens array, and two PPBs. The elemental images (EIs) displayed on the left and right half of the display panel are captured from two different 3D scenes, respectively. The lights emitted from two kinds of EIs are modulated by the left and right half of the microlens array to present two different 3D images, respectively. A prototype of the DVII 3D display is developed, and the experimental results agree well with the theory.

  5. Estimating Density Gradients and Drivers from 3D Ionospheric Imaging

    NASA Astrophysics Data System (ADS)

    Datta-Barua, S.; Bust, G. S.; Curtis, N.; Reynolds, A.; Crowley, G.

    2009-12-01

    The transition regions at the edges of the ionospheric storm-enhanced density (SED) are important for a detailed understanding of the mid-latitude physical processes occurring during major magnetic storms. At the boundary, the density gradients are evidence of the drivers that link the larger processes of the SED, with its connection to the plasmasphere and prompt-penetration electric fields, to the smaller irregularities that result in scintillations. For this reason, we present our estimates of both the plasma variation with horizontal and vertical spatial scale of 10 - 100 km and the plasma motion within and along the edges of the SED. To estimate the density gradients, we use Ionospheric Data Assimilation Four-Dimensional (IDA4D), a mature data assimilation algorithm that has been developed over several years and applied to investigations of polar cap patches and space weather storms [Bust and Crowley, 2007; Bust et al., 2007]. We use the density specification produced by IDA4D with a new tool for deducing ionospheric drivers from 3D time-evolving electron density maps, called Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). The EMPIRE technique has been tested on simulated data from TIMEGCM-ASPEN and on IDA4D-based density estimates with ongoing validation from Arecibo ISR measurements [Datta-Barua et al., 2009a; 2009b]. We investigate the SED that formed during the geomagnetic super storm of November 20, 2003. We run IDA4D at low-resolution continent-wide, and then re-run it at high (~10 km horizontal and ~5-20 km vertical) resolution locally along the boundary of the SED, where density gradients are expected to be highest. We input the high-resolution estimates of electron density to EMPIRE to estimate the ExB drifts and field-aligned plasma velocities along the boundaries of the SED. We expect that these drivers contribute to the density structuring observed along the SED during the storm. Bust, G. S. and G. Crowley (2007

  6. Monocular 3D display unit using soft actuator for parallax image shift

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Kodama, Yuuki

    2010-11-01

    The human vision system has visual functions for viewing 3D images with a correct depth. These functions are called accommodation, vergence and binocular stereopsis. Most 3D display systems utilize binocular stereopsis. The authors have developed a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth. This vision unit needs image-shift optics for generating monocular parallax images, but the conventional image-shift mechanism is heavy because of its linear actuator system. To address this problem, we developed a light-weight 3D vision unit for presenting monocular stereoscopic images using a soft linear actuator made of a polypyrrole film.

  7. Evolution of 3D surface imaging systems in facial plastic surgery.

    PubMed

    Tzou, Chieh-Han John; Frey, Manfred

    2011-11-01

    Recent advancements in computer technologies have propelled the development of 3D imaging systems. 3D surface-imaging is taking surgeons to a new level of communication with patients; moreover, it provides quick and standardized image documentation. This article recounts the chronologic evolution of 3D surface imaging, and summarizes the current status of today's facial surface capturing technology. This article also discusses current 3D surface imaging hardware and software, and their different techniques, technologies, and scientific validation, which provides surgeons with the background information necessary for evaluating the systems and knowledge about the systems they might incorporate into their own practice.

  8. 3D fingerprint imaging system based on full-field fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

    2014-01-01

    As a unique, unchangeable and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on 2D features of the fingerprint. However, the fingerprint is a 3D biological characteristic. The mapping from 3D to 2D loses one dimension of information and causes nonlinear distortion of the captured fingerprint. Therefore, it is becoming more and more important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system based on a fringe projection technique is presented to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers is projected onto the finger surface. From another viewpoint, the fringe patterns deformed by the finger surface are captured by a CCD camera. 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper studies a prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, the hardware design of the 3D imaging system, the 3D calibration of the system, and the software development. Experiments are carried out by acquiring several 3D fingerprint data sets. The experimental results demonstrate the feasibility of the proposed 3D fingerprint imaging system.
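
    The core fringe-analysis step in systems of this kind can be illustrated with the standard N-step phase-shifting formula, which recovers the wrapped phase of the deformed fringes; the synthetic fringe frequency and the toy "finger ridge" phase below are placeholders, and the real system's calibration and phase-unwrapping steps are omitted.

```python
import numpy as np

def wrapped_phase(images):
    """Wrapped phase from N equally phase-shifted sinusoidal fringe images
    (standard N-step phase-shifting formula)."""
    n = len(images)
    deltas = 2 * np.pi * np.arange(n) / n
    num = sum(img * np.sin(d) for img, d in zip(images, deltas))
    den = sum(img * np.cos(d) for img, d in zip(images, deltas))
    return np.arctan2(-num, den)

# Simulate four phase-shifted fringes deformed by a hypothetical finger surface.
h, w = 240, 320
x = np.arange(w)
surface_phase = 2 * np.pi * 0.02 * np.exp(-((x - w / 2) ** 2) / (2 * 60 ** 2))
fringe_freq = 1 / 16                              # fringes per pixel (assumed)
frames = [np.tile(0.5 + 0.5 * np.cos(2 * np.pi * fringe_freq * x
                                     + surface_phase + 2 * np.pi * k / 4), (h, 1))
          for k in range(4)]
phi = wrapped_phase(frames)                       # wrapped phase map, to be unwrapped
```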

  9. Display of travelling 3D scenes from single integral-imaging capture

    NASA Astrophysics Data System (ADS)

    Martinez-Corral, Manuel; Dorado, Adrian; Hong, Seok-Min; Sola-Pikabea, Jorge; Saavedra, Genaro

    2016-06-01

    Integral imaging (InI) is a 3D auto-stereoscopic technique that captures and displays 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, as well as choosing, at will, the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate a sequence of images that simulates a camera travelling through the scene from a single integral image. The application of this method permits to improve the quality of 3D display images and videos.

  10. Lensfree diffractive tomography for the imaging of 3D cell cultures

    PubMed Central

    Momey, F.; Berdeu, A.; Bordy, T.; Dinten, J.-M.; Marcel, F. Kermarrec; Picollet-D’hahan, N.; Gidrol, X.; Allier, C.

    2016-01-01

    New microscopes are needed to help realize the full potential of 3D organoid culture studies. In order to image large volumes of 3D organoid cultures while preserving the ability to catch every single cell, we propose a new imaging platform based on lensfree microscopy. We have built a lensfree diffractive tomography setup performing multi-angle acquisitions of 3D organoid cultures embedded in Matrigel and developed a dedicated 3D holographic reconstruction algorithm based on the Fourier diffraction theorem. With this new imaging platform, we have been able to reconstruct a 3D volume as large as 21.5 mm3 of a 3D organoid culture of prostatic RWPE1 cells, showing the ability of these cells to assemble into an intricate 3D cellular network at the mesoscopic scale. Importantly, comparisons with 2D images show that it is possible to resolve single cells isolated from the main cellular structure with our lensfree diffractive tomography setup. PMID:27231600

  11. Monopulse radar 3-D imaging and application in terminal guidance radar

    NASA Astrophysics Data System (ADS)

    Xu, Hui; Qin, Guodong; Zhang, Lina

    2007-11-01

    Monopulse radar 3-D imaging integrates ISAR, monopulse angle measurement and 3-D imaging processing to obtain a 3-D image that reflects the real size of a target; any two of the three measurement parameters, namely the azimuth difference beam, the elevation difference beam and the radial range, can be used to form a 3-D image of a 3-D object. The basic principles of monopulse radar 3-D imaging are briefly introduced, the effects of target attitude changes (including yaw, pitch, roll and movement of the target itself) on 3-D imaging and 3-D motion compensation based on the chirp rate μ and Doppler frequency fd are analyzed, and the application of monopulse radar 3-D imaging to terminal guidance radars is forecast. The computer simulation results show that monopulse radar 3-D imaging has clear advantages in distinguishing a target from overside interference and in precisely attacking a vital part of a target, and is of great importance for terminal guidance radars.

  12. Role of 3D power Doppler sonography in early prenatal diagnosis of Galen vein aneurysm

    PubMed Central

    Ergenoğlu, Mete Ahmet; Yeniel, Ahmet Özgür; Akdemir, Ali; Akercan, Fuat; Karadadaş, Nedim

    2013-01-01

    Vein of Galen aneurysm malformation (VGAM) is a rare congenital vascular anomaly. Although the cause of VGAM remains to be elucidated, the current hypothesis is persistence of the embryonic vascular supply, which leads to progressive enlargement and formation of the aneurysmal component of a typical VGAM. Here, we present a 36-year-old woman at 23 weeks’ gestation (gravida 3, para 2) who was evaluated using 3D power Doppler sonography for the prenatal diagnosis of a vein of Galen aneurysm. Investigation using 3D power Doppler sonography allowed for a non-invasive yet diffuse and detailed prenatal assessment of VGAM. Thus, we suggest that prenatal sonography with 3D power Doppler may be an option in cases of VGAM. PMID:24592100

  13. CBCT-based 3D MRA and angiographic image fusion and MRA image navigation for neuro interventions

    PubMed Central

    Zhang, Qiang; Zhang, Zhiqiang; Yang, Jiakang; Sun, Qi; Luo, Yongchun; Shan, Tonghui; Zhang, Hao; Han, Jingfeng; Liang, Chunyang; Pan, Wenlong; Gu, Chuanqi; Mao, Gengsheng; Xu, Ruxiang

    2016-01-01

    Abstract Digital subtracted angiography (DSA) remains the gold standard for diagnosis of cerebral vascular diseases and provides intraprocedural guidance. This practice involves extensive usage of x-ray and iodinated contrast medium, which can induce side effects. In this study, we examined the accuracy of 3-dimensional (3D) registration of magnetic resonance angiography (MRA) and DSA imaging for cerebral vessels, and tested the feasibility of using preprocedural MRA for real-time guidance during endovascular procedures. Twenty-three patients with suspected intracranial arterial lesions were enrolled. The contrast medium-enhanced 3D DSA of target vessels were acquired in 19 patients during endovascular procedures, and the images were registered with preprocedural MRA for fusion accuracy evaluation. Low-dose noncontrasted 3D angiography of the skull was performed in the other 4 patients, and registered with the MRA. The MRA was overlaid afterwards with 2D live fluoroscopy to guide endovascular procedures. The 3D registration of the MRA and angiography demonstrated a high accuracy for vessel lesion visualization in all 19 patients examined. Moreover, MRA of the intracranial vessels, registered to the noncontrasted 3D angiography in the 4 patients, provided real-time 3D roadmap to successfully guide the endovascular procedures. Radiation dose to patients and contrast medium usage were shown to be significantly reduced. Three-dimensional MRA and angiography fusion can accurately generate cerebral vasculature images to guide endovascular procedures. The use of the fusion technology could enhance clinical workflow while minimizing contrast medium usage and radiation dose, and hence lowering procedure risks and increasing treatment safety. PMID:27512846

  14. Volumetric medical image compression using 3D listless embedded block partitioning.

    PubMed

    Senapati, Ranjan K; Prasad, P M K; Swain, Gandharba; Shankar, T N

    2016-01-01

    This paper presents a listless variant of a modified three-dimensional (3D) block coding algorithm suitable for medical image compression. A higher degree of correlation is achieved by using a 3D hybrid transform. The 3D hybrid transform is performed by a wavelet transform in the spatial dimension and a Karhunen-Loève transform in the spectral dimension. The 3D transformed coefficients are arranged in a one-dimensional (1D) fashion, following the hierarchical nature of the wavelet-coefficient distribution strategy. A novel listless block coding algorithm is applied to the mapped 1D coefficients, which are encoded in an ordered-bit-plane fashion. The algorithm originates from the most significant bit plane and terminates at the least significant bit plane to generate an embedded bit stream, as in 3D-SPIHT. The proposed algorithm is called 3D hierarchical listless block (3D-HLCK) coding, which exhibits better compression performance than that exhibited by 3D-SPIHT. Further, it is highly competitive with some of the state-of-the-art 3D wavelet coders for a wide range of bit rates for magnetic resonance, digital imaging and communications in medicine (DICOM), and angiogram images. 3D-HLCK provides rate and resolution scalability similar to those provided by 3D-SPIHT and 3D-SPECK. In addition, a significant memory reduction is achieved owing to the listless nature of 3D-HLCK.
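
    The 3D hybrid transform described in this record (spatial wavelet plus spectral Karhunen-Loève transform) can be sketched with NumPy and PyWavelets as below; the slice-wise 2D wavelet, the decomposition level, the wavelet name and the toy volume are assumptions about one possible arrangement of the two transforms, and the subsequent bit-plane coding stage is not shown.

```python
import numpy as np
import pywt

volume = np.random.rand(16, 128, 128)            # hypothetical (slices, rows, cols)

# Spectral decorrelation (Karhunen-Loève transform) across the slice dimension.
X = volume.reshape(volume.shape[0], -1)          # one row per slice
Xc = X - X.mean(axis=1, keepdims=True)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc))    # (slices, slices) covariance
eigen_slices = (eigvecs.T @ Xc).reshape(volume.shape)

# Spatial wavelet transform of each decorrelated slice; the resulting
# coefficients would then be reordered and bit-plane coded (as in 3D-SPIHT).
coeffs = [pywt.wavedec2(s, 'bior4.4', level=3) for s in eigen_slices]
```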

  15. Image guidance of breast cancer surgery using 3-D ultrasound images and augmented reality visualization.

    PubMed

    Sato, Y; Nakamoto, M; Tamaki, Y; Sasama, T; Sakita, I; Nakajima, Y; Monden, M; Tamura, S

    1998-10-01

    This paper describes augmented reality visualization for the guidance of breast-conservative cancer surgery using ultrasonic images acquired in the operating room just before surgical resection. By combining an optical three-dimensional (3-D) position sensor, the position and orientation of each ultrasonic cross section are precisely measured to reconstruct geometrically accurate 3-D tumor models from the acquired ultrasonic images. Similarly, the 3-D position and orientation of a video camera are obtained to integrate video and ultrasonic images in a geometrically accurate manner. Superimposing the 3-D tumor models onto live video images of the patient's breast enables the surgeon to perceive the exact 3-D position of the tumor, including irregular cancer invasions which cannot be perceived by touch, as if it were visible through the breast skin. Using the resultant visualization, the surgeon can determine the region for surgical resection in a more objective and accurate manner, thereby minimizing the risk of a relapse and maximizing breast conservation. The system was shown to be effective in experiments using phantom and clinical data.

  16. 3D Image Reconstructions and the Nyquist-Shannon Theorem

    NASA Astrophysics Data System (ADS)

    Ficker, T.; Martišek, D.

    2015-09-01

    Fracture surfaces are occasionally modelled by Fourier's two-dimensional series that can be converted into digital 3D reliefs mapping the morphology of solid surfaces. Such digital replicas may suffer from various artefacts when processed inconveniently. Spatial aliasing is one of those artefacts that may devalue Fourier's replicas. According to the Nyquist-Shannon sampling theorem the spatial aliasing occurs when Fourier's frequencies exceed the Nyquist critical frequency. In the present paper it is shown that the Nyquist frequency is not the only critical limit determining aliasing artefacts but there are some other frequencies that intensify aliasing phenomena and form an infinite set of points at which numerical results abruptly and dramatically change their values. This unusual type of spatial aliasing is explored and some consequences for 3D computer reconstructions are presented.
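
    The basic Nyquist aliasing effect underlying this discussion is easy to reproduce numerically: a Fourier component above the Nyquist critical frequency folds back to a lower apparent frequency after sampling. The sample count and frequency below are arbitrary illustrative values.

```python
import numpy as np

# A surface profile sampled at N points has a Nyquist critical frequency of
# N/2 cycles over the sampled interval; components above it alias back into
# lower frequencies and corrupt the reconstructed relief.
N = 64                                   # samples along one direction of the relief
x = np.arange(N) / N
f_true = 37                              # cycles per interval, above N/2 = 32
profile = np.sin(2 * np.pi * f_true * x)

spectrum = np.abs(np.fft.rfft(profile))
f_apparent = int(np.argmax(spectrum))    # frequency actually seen after sampling
print(f_apparent)                        # 27 == N - f_true: the aliased frequency
```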

  17. "Phase-Enhanced" 3D Snapshot ISAR Imaging and Interferometric SAR

    DTIC Science & Technology

    2009-12-28

    ...inverse synthetic aperture radar (ISAR) images based on recent developments in high resolution spectral estimation theory. Because this technique requires... the radar sensor. This report develops a framework based on 3D snapshot imaging of sparse angle sectors of ISAR data, extending these results to 3D

  18. Benchmark datasets for 3D MALDI- and DESI-imaging mass spectrometry.

    PubMed

    Oetjen, Janina; Veselkov, Kirill; Watrous, Jeramie; McKenzie, James S; Becker, Michael; Hauberg-Lotte, Lena; Kobarg, Jan Hendrik; Strittmatter, Nicole; Mróz, Anna K; Hoffmann, Franziska; Trede, Dennis; Palmer, Andrew; Schiffler, Stefan; Steinhorst, Klaus; Aichler, Michaela; Goldin, Robert; Guntinas-Lichius, Orlando; von Eggeling, Ferdinand; Thiele, Herbert; Maedler, Kathrin; Walch, Axel; Maass, Peter; Dorrestein, Pieter C; Takats, Zoltan; Alexandrov, Theodore

    2015-01-01

    Three-dimensional (3D) imaging mass spectrometry (MS) is an analytical chemistry technique for the 3D molecular analysis of a tissue specimen, entire organ, or microbial colonies on an agar plate. 3D-imaging MS has unique advantages over existing 3D imaging techniques, offers novel perspectives for understanding the spatial organization of biological processes, and has growing potential to be introduced into routine use in both biology and medicine. Owing to the sheer quantity of data generated, the visualization, analysis, and interpretation of 3D imaging MS data remain a significant challenge. Bioinformatics research in this field is hampered by the lack of publicly available benchmark datasets needed to evaluate and compare algorithms. High-quality 3D imaging MS datasets from different biological systems at several labs were acquired, supplied with overview images and scripts demonstrating how to read them, and deposited into MetaboLights, an open repository for metabolomics data. 3D imaging MS data were collected from five samples using two types of 3D imaging MS. 3D matrix-assisted laser desorption/ionization imaging (MALDI) MS data were collected from murine pancreas, murine kidney, human oral squamous cell carcinoma, and interacting microbial colonies cultured in Petri dishes. 3D desorption electrospray ionization (DESI) imaging MS data were collected from a human colorectal adenocarcinoma. With the aim to stimulate computational research in the field of computational 3D imaging MS, selected high-quality 3D imaging MS datasets are provided that could be used by algorithm developers as benchmark datasets.

  19. 360 degree realistic 3D image display and image processing from real objects

    NASA Astrophysics Data System (ADS)

    Luo, Xin; Chen, Yue; Huang, Yong; Tan, Xiaodi; Horimai, Hideyoshi

    2016-12-01

    A 360-degree realistic 3D image display system based on a direct light scanning method, the so-called Holo-Table, is introduced in this paper. High-density directional continuous 3D motion images can be displayed easily with only one spatial light modulator. Using a holographic screen as the beam deflector, a 360-degree full horizontal viewing angle was achieved. As an accompanying part of the system, a CMOS-camera-based image acquisition platform was built to feed the display engine, which can perform full 360-degree continuous imaging of a sample at the center. Customized image processing techniques such as scaling, rotation, and format transformation were also developed and embedded into the system control software platform. Finally, several samples were imaged to demonstrate the capability of our system.

  20. Segmentation of Skin Tumors in High-Frequency 3-D Ultrasound Images.

    PubMed

    Sciolla, Bruno; Cowell, Lester; Dambry, Thibaut; Guibert, Benoît; Delachartre, Philippe

    2017-01-01

    High-frequency 3-D ultrasound imaging is an informative tool for diagnosis, surgery planning and skin lesion examination. The purpose of this article was to describe a semi-automated segmentation tool providing easy access to the extent, shape and volume of a lesion. We propose an adaptive log-likelihood level-set segmentation procedure using non-parametric estimates of the intensity distribution. The algorithm has a single parameter to control the smoothness of the contour, and we describe how a fixed value yields satisfactory segmentation results with an average Dice coefficient of D = 0.76. The algorithm is implemented on a grid, which increases the speed by a factor of 100 compared with a standard pixelwise segmentation. We compare the method with parametric methods making the hypothesis of Rayleigh or Nakagami distributed signals, and illustrate that our method has greater robustness with similar computational speed. Benchmarks are made on realistic synthetic ultrasound images and a data set of nine clinical 3-D images acquired with a 50-MHz imaging system. The proposed algorithm is suitable for use in a clinical context as a post-processing tool.

  1. 3D imaging of myocardial perfusion and coronary tree morphology from a single rotational angiogram

    NASA Astrophysics Data System (ADS)

    Lauritsch, Günter; Rohkohl, Christopher; Hornegger, Joachim; Sinha, Anil-Martin; Brachmann, Johannes; Rieber, Johannes; Rittger, Harald

    2011-03-01

    Diagnosis and treatment of coronary heart disease are performed in the catheter laboratory using an angiographic X-ray C-arm system. The morphology of the coronary tree and potentially ischemic lesions are determined in 2D projection views. The hemodynamic impact of a lesion would be valuable information for treatment decisions. Using other modalities for functional imaging disrupts the clinical workflow, since the patient has to be transferred from the catheter laboratory to another scanner, and back to the catheter laboratory for treatment. In this work a novel technology is used for simultaneous 3D imaging of first-pass perfusion and the morphology of the coronary tree from a single rotational angiogram. A selective, single shot of less than 20 ml of contrast agent injected directly into the coronaries is sufficient for proper contrast resolution. Due to the long acquisition time, cardiac motion has to be considered. A novel reconstruction technique for estimation and compensation of cardiac motion from the acquired projection data is used. The overlay of the 3D structure of the coronary tree on the perfusion image shows the correlation between myocardial areas and the coronary sections supplying those regions. In a case example, scar lesions caused by a prior myocardial infarction are investigated. A first-pass perfusion defect is found, which is validated by a late-enhancement magnetic resonance image. No ischemic defects are found. The non-vital regions are still supplied by the coronary vasculature.

  2. Proximal pulmonary vein stenosis detection in pediatric patients: value of multiplanar and 3-D VR imaging evaluation.

    PubMed

    Lee, Edward Y; Jenkins, Kathy J; Muneeb, Muhammad; Marshall, Audrey C; Tracy, Donald A; Zurakowski, David; Boiselle, Phillip M

    2013-08-01

    One of the important benefits of using multidetector computed tomography (MDCT) is its capability to generate high-quality two-dimensional (2-D) multiplanar (MPR) and three-dimensional (3-D) images from volumetric and isotropic axial CT data. However, to the best of our knowledge, no results have been published on the potential diagnostic role of multiplanar and 3-D volume-rendered (VR) images in detecting pulmonary vein stenosis, a condition in which MDCT has recently assumed a role as the initial noninvasive imaging modality of choice. The purpose of this study was to compare diagnostic accuracy and interpretation time of axial, multiplanar and 3-D VR images for detection of proximal pulmonary vein stenosis in children, and to assess the potential added diagnostic value of multiplanar and 3-D VR images. We used our hospital information system to identify all consecutive children (< 18 years of age) with proximal pulmonary vein stenosis who had both a thoracic MDCT angiography study and a catheter-based conventional angiography within 2 months from June 2005 to February 2012. Two experienced pediatric radiologists independently reviewed each MDCT study for the presence of proximal pulmonary vein stenosis defined as ≥ 50% of luminal narrowing on axial, multiplanar and 3-D VR images. Final diagnosis was confirmed by angiographic findings. Diagnostic accuracy was compared using the z-test. Confidence level of diagnosis (scale 1-5, 5 = highest), perceived added diagnostic value (scale 1-5, 5 = highest), and interpretation time of multiplanar or 3-D VR images were compared using paired t-tests. Interobserver agreement was measured using the chance-corrected kappa coefficient. The final study population consisted of 28 children (15 boys and 13 girls; mean age: 5.2 months). Diagnostic accuracy based on 116 individual pulmonary veins for detection of proximal pulmonary vein stenosis was 72.4% (84 of 116) for axial MDCT images, 77.5% (90 of 116 cases) for

  3. Accuracy of volume measurement using 3D ultrasound and development of CT-3D US image fusion algorithm for prostate cancer radiotherapy

    SciTech Connect

    Baek, Jihye; Huh, Jangyoung; Hyun An, So; Oh, Yoonjin; Kim, Myungsoo; Kim, DongYoung; Chung, Kwangzoo; Cho, Sungho; Lee, Rena

    2013-02-15

    Purpose: To evaluate the accuracy of measuring volumes using three-dimensional ultrasound (3D US), and to verify the feasibility of replacing CT-MR fusion images with CT-3D US in radiotherapy treatment planning. Methods: Phantoms, consisting of water, contrast agent, and agarose, were manufactured. The volume was measured using 3D US, CT, and MR devices. CT-3D US and MR-3D US image fusion software was developed using the Insight Toolkit library in order to acquire three-dimensional fusion images. The quality of the image fusion was evaluated using metric values and fusion images. Results: Volume measurement using 3D US shows a 2.8 ± 1.5% error, compared with a 4.4 ± 3.0% error for CT and a 3.1 ± 2.0% error for MR. The results imply that volume measurement using the 3D US devices has an accuracy level similar to that of CT and MR. Three-dimensional image fusion of CT-3D US and MR-3D US was successfully performed using phantom images. Moreover, MR-3D US image fusion was performed using human bladder images. Conclusions: 3D US could be used in the volume measurement of human bladders and prostates. CT-3D US image fusion could be used to monitor the target position in each fraction of external beam radiation therapy. Moreover, the feasibility of replacing CT-MR image fusion with CT-3D US in radiotherapy treatment planning was verified.

  4. The application of transabdominal 3D ultrasound for the diagnosis of gastric varices: a preliminary study.

    PubMed

    Maruyama, Hitoshi; Kamezaki, Hidehiro; Kondo, Takayuki; Sekimoto, Tadashi; Shimada, Taro; Takahashi, Masanori; Yokosuka, Osamu

    2013-09-01

    The aim of this study was to determine the feasibility of using transabdominal three-dimensional (3D) colour Doppler ultrasound as a non-invasive tool to demonstrate and quantify gastric varices. A phantom study compared the 3D water flow volume data in a hose with the actual volume inside the hose at three different flow velocities. The prospective clinical study examined the reliability and reproducibility of 3D volume data for gastric varices (mild 28, moderate 26, large 8) in 62 patients. The 3D images were acquired using colour Doppler with both convex and micro-convex probes. The phantom study showed a 12.4-17.6% difference between the 3D data and the actual volume, with no difference between the two types of probes or the three velocities. The detectability of gastric varices was identical between the two probes (54/62, 87.1%). However, the scanning efficiency was significantly greater for the micro-convex probe (66.9 ± 14.1%) than for the convex probe (57.3 ± 14%, p=0.012). Body mass index was the only factor that had a significant relationship with the detectability of varices. The mean volume (mL) of the 3D signal was 0.82 ± 0.74 for mild varices, 5.48 ± 3.84 for moderate varices, and 10.63 ± 6.67 for large varices, with significant differences between grades. The intra-/inter-rater reliability was excellent. The 3D colour Doppler ultrasound method is reliable and reproducible in the quantitative assessment of vascular volume and is applicable for grading gastric varices. This study suggests the practical usefulness of 3D ultrasonography as an alternative to endoscopy. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  5. [Accuracy of morphological simulation for orthognathic surgery. Assessment of a 3D image fusion software].

    PubMed

    Terzic, A; Schouman, T; Scolozzi, P

    2013-08-06

    The CT/CBCT data allow for 3D reconstruction of the skeletal and untextured soft tissue volume. 3D stereophotogrammetry technology has strongly improved the quality of facial soft tissue surface texture. The combination of these two technologies allows for an accurate and complete reconstruction. The 3D virtual head may be used for orthognathic surgical planning, virtual surgery, and morphological simulation obtained with software dedicated to the fusion of 3D photogrammetric and radiological images. The imaging material includes a multi-slice CT scan or broad-field CBCT scan and a 3D photogrammetric camera. The operative image processing protocol includes the following steps: 1) pre- and postoperative CT/CBCT scan and 3D photogrammetric image acquisition; 2) 3D image segmentation and fusion of the untextured CT/CBCT skin with the preoperative textured facial soft tissue surface of the 3D photogrammetric scan; 3) image fusion of the pre- and postoperative CT/CBCT data sets, virtual osteotomies, and 3D photogrammetric soft tissue virtual simulation; 4) fusion of the virtual simulated 3D photogrammetric and real postoperative images, and assessment of accuracy using a color-coded scale to measure the differences between the two surfaces. Copyright © 2013. Published by Elsevier Masson SAS.

  6. Raman molecular chemical imaging: 3D Raman using deconvolution

    NASA Astrophysics Data System (ADS)

    Maier, John S.; Treado, Patrick J.

    2004-12-01

    Chemical imaging is a powerful technique combining molecular spectroscopy and digital imaging for rapid, non-invasive and reagentless analysis of materials, including biological cells and tissues. Raman chemical imaging is suited to the characterization of the molecular composition and structure of biomaterials at submicron spatial resolution (< 250 nm). As a result, Raman imaging has potential as a routine tool for the assessment of cells and subcellular components. In this presentation, we discuss Raman chemical imaging and spectroscopy of single human cells obtained from a culture line. Rapid three-dimensional Raman imaging is shown using deconvolution to improve image quality.

  7. Feasibility study: real-time 3-D ultrasound imaging of the brain.

    PubMed

    Smith, Stephen W; Chu, Kengyeh; Idriss, Salim F; Ivancevich, Nikolas M; Light, Edward D; Wolf, Patrick D

    2004-10-01

    We tested the feasibility of real-time, 3-D ultrasound (US) imaging in the brain. The 3-D scanner uses a matrix phased-array transducer of 512 transmit channels and 256 receive channels operating at 2.5 MHz with a 15-mm diameter footprint. The real-time system scans a 65 degrees pyramid, producing up to 30 volumetric scans per second, and features up to five image planes as well as 3-D rendering, 3-D pulsed-wave and color Doppler. In a human subject, the real-time 3-D scans produced simultaneous transcranial horizontal (axial), coronal and sagittal image planes and real-time volume-rendered images of the gross anatomy of the brain. In a transcranial sheep model, we obtained real-time 3-D color flow Doppler scans and perfusion images using bolus injection of contrast agents into the internal carotid artery.

  8. Increasing the depth of field in Multiview 3D images

    NASA Astrophysics Data System (ADS)

    Lee, Beom-Ryeol; Son, Jung-Young; Yano, Sumio; Jung, Ilkwon

    2016-06-01

    A super-multiview condition simulator which can project up to four different view images to each eye is introduced. This simulator shows that, for images having both disparity and perspective, the depth of field (DOF) extends beyond the default DOF value as the number of simultaneously but separately projected view images to each eye increases. The DOF range can be extended to nearly 2 diopters with four simultaneous view images. However, the DOF increments for the image with both disparity and perspective are not as prominent as those for the image with disparity only.

  9. Segmentation of the common carotid artery with active shape models from 3D ultrasound images

    NASA Astrophysics Data System (ADS)

    Yang, Xin; Jin, Jiaoying; He, Wanji; Yuchi, Ming; Ding, Mingyue

    2012-03-01

    Carotid atherosclerosis is a major cause of stroke, a leading cause of death and disability. In this paper, we develop and evaluate a new segmentation method for outlining both the lumen and adventitia (inner and outer walls) of the common carotid artery (CCA) from three-dimensional ultrasound (3D US) images for carotid atherosclerosis diagnosis and evaluation. The data set consists of sixty-eight 3D US volumes (17 patients × 2 sides × 2 time points) acquired from the left and right carotid arteries of seventeen patients (eight treated with 80 mg atorvastatin and nine with placebo), who had carotid stenosis of 60% or more, at baseline and after three months of treatment. We investigate the use of Active Shape Models (ASMs) to segment the CCA inner and outer walls after statin therapy. The proposed method was evaluated with respect to expert manually outlined boundaries as a surrogate for ground truth. For the lumen and adventitia segmentations, respectively, the algorithm yielded Dice Similarity Coefficients (DSC) of 93.6% ± 2.6% and 91.8% ± 3.5%, mean absolute distances (MAD) of 0.28 ± 0.17 mm and 0.34 ± 0.19 mm, and maximum absolute distances (MAXD) of 0.87 ± 0.37 mm and 0.74 ± 0.49 mm. The proposed algorithm took 4.4 ± 0.6 min to segment a single 3D US image, compared to 11.7 ± 1.2 min for manual segmentation. Therefore, the method could promote the translation of carotid 3D US to clinical care for fast, safe, and economical monitoring of atherosclerotic disease progression and regression during therapy.
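
    The MAD and MAXD figures quoted above are surface-distance measures between the algorithm and manual boundaries. A minimal sketch of how such distances can be computed from two binary masks is given below; it extracts boundary voxels by binary erosion and compares them with pairwise Euclidean distances, and the function names, voxel spacing, and toy masks are assumptions for illustration rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import cdist

def surface_points(mask, spacing=(1.0, 1.0, 1.0)):
    """Physical coordinates of the boundary voxels of a binary mask."""
    mask = mask.astype(bool)
    boundary = mask & ~binary_erosion(mask)
    return np.argwhere(boundary) * np.asarray(spacing)

def surface_distances(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean (MAD) and maximum (MAXD) absolute surface distance."""
    pa = surface_points(mask_a, spacing)
    pb = surface_points(mask_b, spacing)
    d = cdist(pa, pb)                      # pairwise distances between boundary points
    a_to_b, b_to_a = d.min(axis=1), d.min(axis=0)
    mad = 0.5 * (a_to_b.mean() + b_to_a.mean())
    maxd = max(a_to_b.max(), b_to_a.max())
    return mad, maxd

# Toy example: two slightly shifted spheres on a 0.1 mm isotropic grid.
z, y, x = np.ogrid[:48, :48, :48]
auto = (z - 24)**2 + (y - 24)**2 + (x - 24)**2 < 12**2
manual = (z - 25)**2 + (y - 24)**2 + (x - 23)**2 < 12**2
print(surface_distances(auto, manual, spacing=(0.1, 0.1, 0.1)))
```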

  10. Seeing More Is Knowing More: V3D Enables Real-Time 3D Visualization and Quantitative Analysis of Large-Scale Biological Image Data Sets

    NASA Astrophysics Data System (ADS)

    Peng, Hanchuan; Long, Fuhui

    Everyone understands that seeing more is knowing more. However, for large-scale 3D microscopic image analysis, it has not been an easy task to efficiently visualize, manipulate and understand high-dimensional data in 3D, 4D or 5D spaces. We developed a new 3D+ image visualization and analysis platform, V3D, to meet this need. The V3D system provides 3D visualization of gigabyte-sized microscopy image stacks in real time on current laptops and desktops. V3D streamlines the online analysis, measurement and proofreading of complicated image patterns by combining ergonomic functions for selecting a location in an image directly in 3D space and for displaying biological measurements, such as from fluorescent probes, using overlaid surface objects. V3D runs on all major computer platforms and can be enhanced by software plug-ins to address specific biological problems. To demonstrate this extensibility, we built a V3D-based application, V3D-Neuron, to reconstruct complex 3D neuronal structures from high-resolution brain images. V3D-Neuron can precisely digitize the morphology of a single neuron in a fruitfly brain in minutes, with about a 17-fold improvement in reliability and tenfold savings in time compared with other neuron reconstruction tools. Using V3D-Neuron, we demonstrate the feasibility of building a high-resolution 3D digital atlas of neurite tracts in the fruitfly brain. V3D can be easily extended using a simple-to-use and comprehensive plugin interface.

  11. Fast fully 3-D image reconstruction in PET using planograms.

    PubMed

    Brasse, D; Kinahan, P E; Clackdoyle, R; Defrise, M; Comtat, C; Townsend, D W

    2004-04-01

    We present a method of performing fast and accurate three-dimensional (3-D) backprojection using only Fourier transform operations for line-integral data acquired by planar detector arrays in positron emission tomography. This approach is a 3-D extension of the two-dimensional (2-D) linogram technique of Edholm. By using a special choice of parameters to index a line of response (LOR) for a pair of planar detectors, rather than the conventional parameters used to index a LOR for a circular tomograph, all the LORs passing through a point in the field of view (FOV) lie on a 2-D plane in the four-dimensional (4-D) data space. Thus, backprojection of all the LORs passing through a point in the FOV corresponds to integration of a 2-D plane through the 4-D "planogram." The key step is that the integration along a set of parallel 2-D planes through the planogram, that is, backprojection of a plane of points, can be replaced by a 2-D section through the origin of the 4-D Fourier transform of the data. Backprojection can be performed as a sequence of Fourier transform operations, for faster implementation. In addition, we derive the central-section theorem for planogram format data, and also derive a reconstruction filter for both backprojection-filtering and filtered-backprojection reconstruction algorithms. With software-based Fourier transform calculations we provide preliminary comparisons of planogram backprojection to standard 3-D backprojection and demonstrate a reduction in computation time by a factor of approximately 15.
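
    The key idea that projection and backprojection can be carried out with Fourier transform operations rests on the central-section theorem. The snippet below is a simplified 2-D analogue (plain NumPy, synthetic data) showing that the 1-D Fourier transform of a parallel projection equals a central slice of the object's 2-D Fourier transform; the full 4-D planogram machinery of the paper is not reproduced.

```python
import numpy as np

# 2-D analogue of the central-section theorem underlying planogram reconstruction:
# the 1-D Fourier transform of a parallel projection equals a central slice of
# the object's 2-D Fourier transform.
rng = np.random.default_rng(0)
image = rng.random((128, 128))

projection = image.sum(axis=0)            # line integrals along the row axis (0-degree projection)
ft_projection = np.fft.fft(projection)    # 1-D FT of the projection

ft_image = np.fft.fft2(image)             # 2-D FT of the object
central_slice = ft_image[0, :]            # slice through the origin of frequency space

print(np.allclose(ft_projection, central_slice))   # True
```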

  12. Space Radar Image of Mammoth, California in 3-D

    NASA Image and Video Library

    1999-01-27

    This is a three-dimensional perspective of Mammoth Mountain, California. This view was constructed by overlaying a NASA Spaceborne Imaging Radar-C SIR-C radar image on a U.S. Geological Survey digital elevation map.

  13. Holographic imaging of 3D objects on dichromated polymer systems

    NASA Astrophysics Data System (ADS)

    Lemelin, Guylain; Jourdain, Anne; Manivannan, Gurusamy; Lessard, Roger A.

    1996-01-01

    Conventional volume transmission holograms of a 3D scene were recorded on dichromated poly(acrylic acid) (DCPAA) films under 488 nm light. The holographic characterization and quality of reconstruction have been studied by varying influencing parameters such as the concentration of dichromate and electron donor, and the molecular weight of the polymer matrix. Ammonium and potassium dichromate have been employed to sensitize the poly(acrylic acid) matrix. The recorded hologram can be efficiently reconstructed either with red light or with low energy in the blue region without any post-exposure thermal or chemical processing.

  14. 3-D Velocity Measurement of Natural Convection Using Image Processing

    NASA Astrophysics Data System (ADS)

    Shinoki, Masatoshi; Ozawa, Mamoru; Okada, Toshifumi; Kimura, Ichiro

    This paper describes a quantitative three-dimensional measurement method for the flow field of a rotating Rayleigh-Benard convection in a cylindrical cell heated from below and cooled from above. A correlation method for two-dimensional measurement was extended to a spatio-temporal correlation method. Erroneous vectors, which often appear in correlation methods, were successfully removed using a Hopfield neural network. As a result, the calculated 3-D velocity vector distribution corresponded well to the observed temperature distribution. Consequently, a simultaneous three-dimensional measurement system for temperature and flow field was developed.
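
    The 2-D building block of such correlation-based velocimetry is estimating the displacement of a particle pattern between two interrogation windows from the peak of their cross-correlation. The sketch below (NumPy FFT, synthetic frames) shows this step; the names and sizes are illustrative assumptions, and the spatio-temporal extension and Hopfield-based outlier removal are not reproduced.

```python
import numpy as np

def correlation_displacement(window_a, window_b):
    """Estimate the integer pixel shift between two interrogation windows
    via FFT-based cross-correlation."""
    a = window_a - window_a.mean()
    b = window_b - window_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap the peak indices to signed shifts.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

# Toy example: a particle pattern displaced by (3, -2) pixels between frames.
rng = np.random.default_rng(1)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, shift=(3, -2), axis=(0, 1))
print(correlation_displacement(frame_a, frame_b))   # expected (3, -2)
```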

  15. Interpolation of 3-D binary images based on morphological skeletonization.

    PubMed

    Chatzis, V; Pitas, I

    2000-07-01

    In this paper, the morphological skeleton interpolation (MSI) algorithm is presented. It is an efficient, shape-based interpolation method used for interpolating slices in a three-dimensional (3-D) binary object. It is based on morphological skeletonization, which is used for two-dimensional (2-D) slice representation. The proposed morphological skeleton matching process provides translation, rotation, and scaling information at the same time. The interpolated slices preserve the shape of the original object slices, when the slices have similar shapes. It can also modify the shape of an object when the successive slices do not have similar shapes. Applications on artificial and real data are also presented.
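
    The general idea of shape-based slice interpolation can be illustrated with a simpler distance-transform variant: convert each binary slice to a signed distance map, blend the maps, and re-threshold. The sketch below is that simplified variant (SciPy/NumPy, toy discs), not the morphological-skeleton matching described in the paper.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance map: positive inside the object, negative outside."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return inside - outside

def interpolate_slice(slice_a, slice_b, t=0.5):
    """Shape-based interpolation between two binary slices:
    blend the signed distance maps and re-threshold."""
    d = (1 - t) * signed_distance(slice_a) + t * signed_distance(slice_b)
    return d > 0

# Toy example: interpolate between a small and a large disc.
y, x = np.ogrid[:64, :64]
small = (y - 32)**2 + (x - 32)**2 < 10**2
large = (y - 32)**2 + (x - 32)**2 < 20**2
mid = interpolate_slice(small, large, 0.5)
print(small.sum(), mid.sum(), large.sum())   # interpolated area lies between the two
```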

  16. 3-D imaging in post-traumatic malformation and eruptive disturbance in permanent incisors: a case report.

    PubMed

    Sahai, Sharad; Kaveriappa, Sushma; Arora, Honey; Aggarwal, Bharat

    2011-12-01

    Injury to the primary dentition is one of the common problems of childhood. Disturbances during crown development of the permanent teeth result in morphologic alterations. This case report highlights the role of 3-D imaging when conventional dental radiographs are not sufficient to answer clinical questions regarding future eruptive disturbances. 3-D imaging can often provide a definitive diagnosis and improve treatment planning after early injuries to the deciduous dentition. The current status of multislice computed tomography (CT) and cone beam CT (CBCT) as diagnostic tools in the pediatric dental population is also discussed briefly.

  17. Saliency Detection of Stereoscopic 3D Images with Application to Visual Discomfort Prediction

    NASA Astrophysics Data System (ADS)

    Li, Hong; Luo, Ting; Xu, Haiyong

    2017-06-01

    Visual saliency detection is potentially useful for a wide range of applications in image processing and computer vision fields. This paper proposes a novel bottom-up saliency detection approach for stereoscopic 3D (S3D) images based on regional covariance matrix. As for S3D saliency detection, besides the traditional 2D low-level visual features, additional 3D depth features should also be considered. However, only limited efforts have been made to investigate how different features (e.g. 2D and 3D features) contribute to the overall saliency of S3D images. The main contribution of this paper is that we introduce a nonlinear feature integration descriptor, i.e., regional covariance matrix, to fuse both 2D and 3D features for S3D saliency detection. The regional covariance matrix is shown to be effective for nonlinear feature integration by modelling the inter-correlation of different feature dimensions. Experimental results demonstrate that the proposed approach outperforms several existing relevant models including 2D extended and pure 3D saliency models. In addition, we also experimentally verified that the proposed S3D saliency map can significantly improve the prediction accuracy of experienced visual discomfort when viewing S3D images.
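
    The regional covariance descriptor at the heart of the method summarizes a region by the covariance matrix of its per-pixel feature vectors, so that 2D and 3D (depth) features can be fused nonlinearly. The sketch below is a generic illustration of that descriptor and of an affine-invariant (Förstner-style) dissimilarity between two regions; the feature channels, masks, and distance choice are assumptions for illustration and do not reproduce the paper's saliency pipeline.

```python
import numpy as np

def region_covariance(feature_maps, region_mask):
    """Covariance matrix of per-pixel feature vectors inside a region.
    feature_maps: (n_features, H, W); region_mask: boolean (H, W)."""
    f = feature_maps[:, region_mask]          # (n_features, n_pixels)
    return np.cov(f)                          # (n_features, n_features)

def covariance_distance(c1, c2):
    """Affine-invariant style dissimilarity between covariance descriptors,
    based on the generalized eigenvalues of the two matrices."""
    eigvals = np.linalg.eigvals(np.linalg.solve(c1, c2)).real
    eigvals = np.clip(eigvals, 1e-12, None)
    return np.sqrt(np.sum(np.log(eigvals) ** 2))

# Toy example with three feature channels (intensity, horizontal/vertical gradients).
rng = np.random.default_rng(0)
img = rng.random((64, 64))
gy, gx = np.gradient(img)
features = np.stack([img, gx, gy])
mask_a = np.zeros((64, 64), bool); mask_a[:32, :32] = True
mask_b = np.zeros((64, 64), bool); mask_b[32:, 32:] = True
dist = covariance_distance(region_covariance(features, mask_a),
                           region_covariance(features, mask_b))
print(f"descriptor distance = {dist:.3f}")
```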

  18. Infrared imaging of the polymer 3D-printing process

    NASA Astrophysics Data System (ADS)

    Dinwiddie, Ralph B.; Kunc, Vlastimil; Lindal, John M.; Post, Brian; Smith, Rachel J.; Love, Lonnie; Duty, Chad E.

    2014-05-01

    Both mid-wave and long-wave IR cameras are used to measure various temperature profiles in thermoplastic parts as they are printed. Two significantly different 3D printers are used in this study. The first is a small-scale commercially available Solidoodle 3 printer, which prints parts with layer thicknesses on the order of 125 μm. The second printer used is a "Big Area Additive Manufacturing" (BAAM) 3D printer developed at Oak Ridge National Laboratory. The BAAM prints parts with a layer thickness of 4.06 mm. Of particular interest is the temperature of the previously deposited layer as the new hot layer is about to be extruded onto it. The two layers are expected to have a stronger bond if the temperature of the substrate layer is above the glass transition temperature. This paper describes the measurement technique and results for a study of temperature decay and substrate layer temperature for ABS thermoplastic with and without the addition of chopped carbon fibers.

  19. 3D fluorescence anisotropy imaging using selective plane illumination microscopy

    PubMed Central

    Hedde, Per Niklas; Ranjit, Suman; Gratton, Enrico

    2015-01-01

    Fluorescence anisotropy imaging is a popular method to visualize changes in organization and conformation of biomolecules within cells and tissues. In such an experiment, depolarization effects resulting from differences in orientation, proximity and rotational mobility of fluorescently labeled molecules are probed with high spatial resolution. Fluorescence anisotropy is typically imaged using laser scanning and epifluorescence-based approaches. Unfortunately, those techniques are limited in either axial resolution, image acquisition speed, or by photobleaching. In the last decade, however, selective plane illumination microscopy has emerged as the preferred choice for three-dimensional time lapse imaging combining axial sectioning capability with fast, camera-based image acquisition, and minimal light exposure. We demonstrate how selective plane illumination microscopy can be utilized for three-dimensional fluorescence anisotropy imaging of live cells. We further examined the formation of focal adhesions by three-dimensional time lapse anisotropy imaging of CHO-K1 cells expressing an EGFP-paxillin fusion protein. PMID:26368202

  20. Reconstruction of 3d Digital Image of Weepingforsythia Pollen

    NASA Astrophysics Data System (ADS)

    Liu, Dongwu; Chen, Zhiwei; Xu, Hongzhi; Liu, Wenqi; Wang, Lina

    Confocal microscopy, which is a major advance upon normal light microscopy, has been used in a number of scientific fields. By confocal microscopy techniques, cells and tissues can be visualized at depth, and three-dimensional images created. Compared with conventional microscopes, the confocal microscope improves the resolution of images by eliminating out-of-focus light. Moreover, the confocal microscope has a higher level of sensitivity due to highly sensitive light detectors and the ability to accumulate images captured over time. In the present study, a series of Weeping Forsythia pollen digital images (35 images in total) was acquired with a confocal microscope, and the three-dimensional digital image of the pollen was reconstructed. Our results indicate that it is straightforward to analyze the three-dimensional digital image of the pollen with a confocal microscope and the probe acridine orange (AO).

  1. Prostate cancer diagnosis using deep learning with 3D multiparametric MRI

    NASA Astrophysics Data System (ADS)

    Liu, Saifeng; Zheng, Huaixiu; Feng, Yesu; Li, Wei

    2017-03-01

    A novel deep learning architecture (XmasNet) based on convolutional neural networks was developed for the classification of prostate cancer lesions, using the 3D multiparametric MRI data provided by the PROSTATEx challenge. End-to-end training was performed for XmasNet, with data augmentation done through 3D rotation and slicing in order to incorporate the 3D information of the lesion. XmasNet outperformed traditional machine learning models based on engineered features, for both training and test data. For the test data, XmasNet outperformed 69 methods from 33 participating groups and achieved the second highest AUC (0.84) in the PROSTATEx challenge. This study shows the great potential of deep learning for cancer imaging.
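
    The abstract mentions data augmentation through 3D rotation and slicing to expose the network to the lesion's 3D context. The sketch below illustrates that kind of augmentation step on a placeholder volume (SciPy/NumPy); the patch size, angle range, and function names are assumptions, and the actual XmasNet architecture and training code are not reproduced.

```python
import numpy as np
from scipy.ndimage import rotate

def augment_lesion_volume(volume, rng, max_angle=15.0, n_slices=3):
    """Toy augmentation in the spirit of the description: rotate the 3-D lesion
    volume about the slice axis, then extract neighbouring 2-D slices as
    separate training samples."""
    angle = rng.uniform(-max_angle, max_angle)
    rotated = rotate(volume, angle, axes=(1, 2), reshape=False, order=1)
    centre = rotated.shape[0] // 2
    half = n_slices // 2
    return [rotated[centre + k] for k in range(-half, half + 1)]

rng = np.random.default_rng(42)
lesion = rng.random((16, 64, 64)).astype(np.float32)   # placeholder mpMRI patch
samples = augment_lesion_volume(lesion, rng)
print(len(samples), samples[0].shape)    # 3 augmented 2-D slices of shape (64, 64)
```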

  2. Image fusion in craniofacial virtual reality modeling based on CT and 3dMD photogrammetry.

    PubMed

    Xin, Pengfei; Yu, Hongbo; Cheng, Huanchong; Shen, Shunyao; Shen, Steve G F

    2013-09-01

    The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and a 3dMD stereophotogrammetric facial surface. A CT scan and stereophotography were performed. The 3D CT models were reconstructed with Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed with 3dMD patient software. All 3D CT models were exported in Stereo Lithography file format, and the 3dMD model was exported in Virtual Reality Modeling Language file format. Image registration and fusion were performed in Mimics software. A genetic algorithm was used for precise image fusion alignment with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft tissue model and the 3dMD facial surface were also analyzed. The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and bone structures. Image registration errors in the virtual face are mainly located in the bilateral cheeks and eyeballs, where the errors exceed 1.5 mm. However, the image fusion of the whole point cloud sets of CT and 3dMD is acceptable, with a minimum error of less than 1 mm. The ease of use and high reliability of CT-3dMD image fusion allow the 3D virtual head to be an accurate, realistic, and widespread tool, and are of great benefit to virtual face modeling.

  3. 3-D segmentation of articular cartilages by graph cuts using knee MR images from osteoarthritis initiative

    NASA Astrophysics Data System (ADS)

    Shim, Hackjoon; Lee, Soochan; Kim, Bohyeong; Tao, Cheng; Chang, Samuel; Yun, Il Dong; Lee, Sang Uk; Kwoh, Kent; Bae, Kyongtae

    2008-03-01

    Knee osteoarthritis is the most common debilitating health condition affecting the elderly population. MR imaging of the knee is highly sensitive for diagnosis and evaluation of the extent of knee osteoarthritis. Quantitative analysis of the progression of osteoarthritis is commonly based on segmentation and measurement of articular cartilage from knee MR images. Segmentation of the knee articular cartilage, however, is extremely laborious and technically demanding, because the cartilage has a complex geometry and is thin and small in size. To improve the precision and efficiency of cartilage segmentation, we have applied a semi-automated segmentation method that is based on an s/t graph cut algorithm. The cost function was defined by integrating regional and boundary cues. While regional cues can encode any intensity distributions of the two regions, "object" (cartilage) and "background" (the rest), boundary cues are based on the intensity differences between neighboring pixels. For three-dimensional (3-D) segmentation, hard constraints are also specified in a 3-D way, facilitating user interaction. When our proposed semi-automated method was tested on clinical patients' MR images (160 slices, 0.7 mm slice thickness), a considerable amount of segmentation time was saved with improved efficiency, compared to a manual segmentation approach.
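
    The s/t graph-cut formulation combines per-pixel regional costs (how well a pixel fits the object or background intensity model) with boundary costs between neighbours, and solves for a globally optimal binary labelling via max-flow. The sketch below shows that structure on a toy 2-D image; it assumes the third-party PyMaxflow package and simple Gaussian regional terms, and it is not the paper's cost function or its 3-D hard-constraint handling.

```python
import numpy as np
import maxflow  # third-party PyMaxflow package (assumed available)

def graphcut_segment(img, mu_obj, mu_bkg, sigma=0.1, smoothness=2.0):
    """Minimal s/t graph-cut sketch: Gaussian regional terms plus a constant
    boundary (smoothness) term between grid neighbours."""
    # Regional cues: negative log-likelihood under simple Gaussian intensity models.
    d_obj = 0.5 * ((img - mu_obj) / sigma) ** 2   # cost of labelling a pixel "object"
    d_bkg = 0.5 * ((img - mu_bkg) / sigma) ** 2   # cost of labelling a pixel "background"

    g = maxflow.Graph[float]()
    nodeids = g.add_grid_nodes(img.shape)
    g.add_grid_edges(nodeids, smoothness)        # boundary term (penalty for label changes)
    g.add_grid_tedges(nodeids, d_obj, d_bkg)     # terminal edges carrying the regional costs
    g.maxflow()
    # Boolean segmentation map (True where the object model wins under this capacity assignment).
    return g.get_grid_segments(nodeids)

# Toy 2-D example: a bright square on a dark, noisy background.
rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.1, (64, 64))
img[20:44, 20:44] += 0.6
seg = graphcut_segment(img, mu_obj=0.8, mu_bkg=0.2)
print(seg.sum(), "pixels labelled as object")
```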

  4. Assessment of anterior cruciate ligament reconstruction using 3D ultrashort echo-time MR imaging.

    PubMed

    Rahmer, Jürgen; Börnert, Peter; Dries, Sebastian P M

    2009-02-01

    This work demonstrates the potential of ultrashort TE (UTE) imaging for visualizing graft material and fixation elements after surgical repair of soft tissue trauma such as ligament or meniscal injury. Three asymptomatic patients with anterior cruciate ligament (ACL) reconstruction using different graft fixation methods were imaged at 1.5T using a 3D UTE sequence. Conventional multislice turbo spin-echo (TSE) measurements were performed for comparison. 3D UTE imaging yields high signal from tendon graft material at isotropic spatial resolution, thus facilitating direct positive-contrast graft visualization. Furthermore, metal and biopolymer graft fixation elements are clearly depicted due to the high contrast between the signal-void implants and the graft material. Thus, the ability of UTE MRI to visualize short-T2 tissues such as tendons, ligaments, or tendon grafts can provide additional information about the status of the graft and its fixation after cruciate ligament repair. UTE MRI can therefore potentially support diagnosis when problems occur or persist after surgical procedures involving short-T2 tissues and implants.

  5. D3D augmented reality imaging system: proof of concept in mammography.

    PubMed

    Douglas, David B; Petricoin, Emanuel F; Liotta, Lance; Wilson, Eugene

    2016-01-01

    The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called "depth 3-dimensional (D3D) augmented reality". A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice.

  6. D3D augmented reality imaging system: proof of concept in mammography

    PubMed Central

    Douglas, David B; Petricoin, Emanuel F; Liotta, Lance; Wilson, Eugene

    2016-01-01

    Purpose The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called “depth 3-dimensional (D3D) augmented reality”. Materials and methods A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. Results The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. Conclusion The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice. PMID:27563261

  7. Dual-Color 3D Superresolution Microscopy by Combined Spectral-Demixing and Biplane Imaging

    PubMed Central

    Winterflood, Christian M.; Platonova, Evgenia; Albrecht, David; Ewers, Helge

    2015-01-01

    Multicolor three-dimensional (3D) superresolution techniques allow important insight into the relative organization of cellular structures. While a number of innovative solutions have emerged, multicolor 3D techniques still face significant technical challenges. In this Letter we provide a straightforward approach to single-molecule localization microscopy imaging in three dimensions and two colors. We combine biplane imaging and spectral-demixing, which eliminates a number of problems, including color cross-talk, chromatic aberration effects, and problems with color registration. We present 3D dual-color images of nanoscopic structures in hippocampal neurons with a 3D compound resolution routinely achieved only in a single color. PMID:26153696

  8. Diattenuation of brain tissue and its impact on 3D polarized light imaging.

    PubMed

    Menzel, Miriam; Reckfort, Julia; Weigand, Daniel; Köse, Hasan; Amunts, Katrin; Axer, Markus

    2017-07-01

    3D-polarized light imaging (3D-PLI) reconstructs nerve fibers in histological brain sections by measuring their birefringence. This study investigates another effect caused by the optical anisotropy of brain tissue - diattenuation. Based on numerical and experimental studies and a complete analytical description of the optical system, the diattenuation was determined to be below 4 % in rat brain tissue. It was demonstrated that the diattenuation effect has negligible impact on the fiber orientations derived by 3D-PLI. The diattenuation signal, however, was found to highlight different anatomical structures that cannot be distinguished with current imaging techniques, which makes Diattenuation Imaging a promising extension to 3D-PLI.

  9. 3D image reconstruction algorithms for cryo-electron-microscopy images of virus particles

    NASA Astrophysics Data System (ADS)

    Doerschuk, Peter C.; Johnson, John E.

    2000-11-01

    A statistical model for the object and the complete image formation process in cryo electron microscopy of viruses is presented. Using this model, maximum likelihood reconstructions of the 3D structure of viruses are computed using the expectation maximization algorithm and an example based on Cowpea mosaic virus is provided.

  10. Fast non local means denoising for 3D MR images.

    PubMed

    Coupé, Pierrick; Yger, Pierre; Barillot, Christian

    2006-01-01

    One critical issue in the context of image restoration is the problem of noise removal while keeping the integrity of relevant image information. Denoising is a crucial step to increase image conspicuity and to improve the performance of all the processing steps needed for quantitative imaging analysis. The method proposed in this paper is based on an optimized version of the Non-Local (NL) Means algorithm. This approach uses the natural redundancy of information in the image to remove the noise. Tests were carried out on synthetic datasets and on real 3T MR images. The results show that the NL-means approach outperforms other classical denoising methods, such as anisotropic diffusion filtering and total variation.
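
    A convenient way to see the behaviour described here is scikit-image's implementation of non-local means, which also accepts 3-D grayscale volumes. The snippet below is an illustrative sketch on a synthetic noisy phantom; the volume, noise level, and patch parameters are assumptions, not the authors' optimized implementation.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

# Hypothetical noisy 3-D MR-like volume: a simple box phantom plus Gaussian noise.
rng = np.random.default_rng(0)
volume = np.zeros((32, 64, 64), dtype=np.float32)
volume[8:24, 16:48, 16:48] = 1.0
noisy = volume + rng.normal(0, 0.2, volume.shape).astype(np.float32)

sigma = float(np.mean(estimate_sigma(noisy)))        # rough noise level estimate
denoised = denoise_nl_means(noisy, patch_size=3, patch_distance=5,
                            h=0.8 * sigma, fast_mode=True)

# Mean absolute error to the clean phantom, before and after denoising.
print(float(np.abs(noisy - volume).mean()), float(np.abs(denoised - volume).mean()))
```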

  11. 3-D Target Location from Stereoscopic SAR Images

    SciTech Connect

    DOERRY,ARMIN W.

    1999-10-01

    SAR range-Doppler images are inherently 2-dimensional. Targets with a height offset lay over onto offset range and azimuth locations. Just which image locations are laid upon depends on the imaging geometry, including depression angle, squint angle, and target bearing. This is the well-known layover phenomenon. Images formed with different aperture geometries will exhibit different layover characteristics. These differences can be exploited to ascertain target height information in a stereoscopic manner. Depending on the imaging geometries, height accuracy can be on the order of horizontal position accuracies, thereby rivaling the best IFSAR capabilities in fine-resolution SAR images. All that is required for this to work are two distinct passes with suitably different geometries from any plain old SAR.

  12. 3D integral imaging using diffractive Fresnel lens arrays.

    PubMed

    Hain, Mathias; von Spiegel, Wolff; Schmiedchen, Marc; Tschudi, Theo; Javidi, Bahram

    2005-01-10

    We present experimental results with binary amplitude Fresnel lens arrays and binary phase Fresnel lens arrays used to implement integral imaging systems. Their optical performance is compared with high quality refractive microlens arrays and pinhole arrays in terms of image quality, color distortion and contrast. Additionally, we show the first experimental results of lens arrays with different focal lengths in integral imaging, and discuss their ability to simultaneously increase both the depth of focus and the field of view.

  13. Multi-layer 3D imaging using a few viewpoint images and depth map

    NASA Astrophysics Data System (ADS)

    Suginohara, Hidetsugu; Sakamoto, Hirotaka; Yamanaka, Satoshi; Suyama, Shiro; Yamamoto, Hirotsugu

    2015-03-01

    In this paper, we propose a new method that makes multi-layer images from a few viewpoint images to display a 3D image on an autostereoscopic display that has multiple display screens in the depth direction. We iterate simple "Shift and Subtraction" processes to make each layer image alternately. An image made in accordance with the depth map, like volume slicing by gradations, is used as the initial solution of the iteration process. Through experiments using a prototype of two stacked LCDs, we confirmed that three viewpoint images were sufficient to make multi-layer images for displaying a 3D image. Limiting the number of viewpoint images narrows the viewing area that allows stereoscopic viewing. To broaden the viewing area, we track the head motion of the viewer and update the screen images in real time so that the viewer can maintain a correct stereoscopic view within a ±20-degree area. In addition, we render pseudo multi-viewpoint images using the depth map, so motion parallax can be generated at the same time.

  14. Efficient RPG detection in noisy 3D image data

    NASA Astrophysics Data System (ADS)

    Pipitone, Frank

    2011-06-01

    We address the automatic detection of ambush weapons such as rocket-propelled grenades (RPGs) from range data, which might be derived from multiple-camera stereo with textured illumination or by other means. We describe our initial work in a new project involving the efficient acquisition of 3D scene data as well as discrete point invariant techniques to perform real-time search for threats to a convoy. The shapes of the jump boundaries in the scene are exploited in this paper, rather than on-surface points, due to the large error typical of depth measurement at long range and the relatively high resolution obtainable in the transverse direction. We describe examples of the generation of a novel range-scaled chain code for detecting and matching jump boundaries.

  15. High-Performance 3D Image Processing Architectures for Image-Guided Interventions

    DTIC Science & Technology

    2008-01-01

    (Fragmentary record.) The recoverable text consists of citation fragments, including: D. J. Hawkes, "Voxel-based 2-D/3-D registration of fluoroscopy images and CT scans for image-guided surgery," IEEE Transactions on Information...; and O. Dandekar, W. Plishker, S. S. Bhattacharyya, and R. Shekhar, "...guided minimally invasive surgery," Surgical Innovation (in preparation), 2008. The record also notes that image-guided surgeries, biopsies, and therapies have the potential to improve patient care by enabling new and faster procedures while minimizing unintended damage.

  16. Research in Image Understanding as Applied to 3-D Microwave Tomographic Imaging with Near Optical Resolution.

    DTIC Science & Technology

    1986-03-10

    (Fragmentary record; the retrievable text includes table-of-contents fragments such as "Optical Implementation of the Hopfield Model".) The recoverable abstract text states that the techniques can be employed in future broad-band imaging radar networks capable of providing 3-D projective or tomographic images of remote aerospace targets, and that the results of this effort are expected to show how to achieve centimeter resolution on remote aerospace objects cost-effectively using microwave...

  17. Realization of real-time interactive 3D image holographic display [Invited].

    PubMed

    Chen, Jhen-Si; Chu, Daping

    2016-01-20

    Realization of a 3D image holographic display supporting real-time interaction requires fast actions in data uploading, hologram calculation, and image projection. These three key elements will be reviewed and discussed, while algorithms of rapid hologram calculation will be presented with the corresponding results. Our vision of interactive holographic 3D displays will be discussed.

  18. 3-D Image of Grooves and Wrinkles in the South Polar Region

    NASA Image and Video Library

    2011-09-27

    NASA's Dawn spacecraft obtained this 3-D image of asteroid Vesta with its framing camera on Aug. 23 and 28, 2011, at a distance of 1,700 miles (2,740 kilometers). You will need 3D glasses to view this image.

  19. A review of 3D/2D registration methods for image-guided interventions.

    PubMed

    Markelj, P; Tomaževič, D; Likar, B; Pernuš, F

    2012-04-01

    Registration of pre- and intra-interventional data is one of the key technologies for image-guided radiation therapy, radiosurgery, minimally invasive surgery, endoscopy, and interventional radiology. In this paper, we survey those 3D/2D data registration methods that utilize 3D computer tomography or magnetic resonance images as the pre-interventional data and 2D X-ray projection images as the intra-interventional data. The 3D/2D registration methods are reviewed with respect to image modality, image dimensionality, registration basis, geometric transformation, user interaction, optimization procedure, subject, and object of registration.

  20. Clinical Study of 3D Imaging and 3D Printing Technique for Patient-Specific Instrumentation in Total Knee Arthroplasty.

    PubMed

    Qiu, Bing; Liu, Fei; Tang, Bensen; Deng, Biyong; Liu, Fang; Zhu, Weimin; Zhen, Dong; Xue, Mingyuan; Zhang, Mingjiao

    2017-01-25

    Patient-specific instrumentation (PSI) was designed to improve the accuracy of preoperative planning and postoperative prosthesis positioning in total knee arthroplasty (TKA). However, a better understanding needs to be achieved due to the subtle nature of PSI systems. In this study, a 3D printing technique based on computed tomography (CT) image data has been utilized for optimal control of the surgical parameters. Two groups of TKA cases were randomly selected as a PSI group and a control group, with no significant difference in age or sex (p > 0.05). The PSI group was treated with 3D printed cutting guides, whereas the control group was treated with conventional instrumentation (CI). By evaluating the proximal osteotomy amount, distal osteotomy amount, valgus angle, external rotation angle, and tibial posterior slope angle of the patients, it was found that the preoperative quantitative assessment and intraoperative changes can be controlled with PSI, whereas CI relies on experience. In terms of postoperative parameters, such as the hip-knee-ankle (HKA), frontal femoral component (FFC), frontal tibial component (FTC), and lateral tibial component (LTC) angles, there is a significant improvement in achieving the desired implant position (p < 0.05). Derived from the morphology of the patients' knees, the PSI represents the convergence of congruent designs with current personalized treatment tools. The PSI can achieve less deviation in extremity alignment and greater accuracy of prosthesis implantation compared with the control method, which indicates potential for optimal HKA, FFC, and FTC angles.

  1. Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation.

    PubMed

    Wang, Junchen; Suenaga, Hideyuki; Liao, Hongen; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro

    2015-03-01

    Autostereoscopic 3D image overlay for augmented reality (AR) based surgical navigation has been studied and reported many times. For the purpose of surgical overlay, the 3D image is expected to have the same geometric shape as the original organ and to be transformable to a specified location for image overlay. However, how to generate a 3D image with high geometric fidelity and how to quantitatively evaluate the 3D image's geometric accuracy have not been addressed. This paper proposes a graphics processing unit (GPU) based computer-generated integral imaging pipeline for real-time autostereoscopic 3D display, and an automatic closed-loop 3D image calibration paradigm for displaying undistorted 3D images. Based on the proposed methods, a novel AR device for 3D image surgical overlay is presented, which mainly consists of a 3D display, an AR window, a stereo camera for 3D measurement, and a workstation for information processing. The evaluation of the 3D image rendering performance with 2560×1600 elemental image resolution shows rendering speeds of 50-60 frames per second (fps) for surface models, and 5-8 fps for large medical volumes. The evaluation of the undistorted 3D image after calibration yields sub-millimeter geometric accuracy. A phantom experiment simulating oral and maxillofacial surgery was also performed to evaluate the proposed AR overlay device in terms of image registration accuracy, 3D image overlay accuracy, and the visual effects of the overlay. The experimental results show satisfactory image registration and image overlay accuracy, and confirm the system's usability.

  2. A single-imager, single-lens video camera prototype for 3D imaging

    NASA Astrophysics Data System (ADS)

    Christopher, Lauren A.; Li, Weixu

    2012-03-01

    A new method for capturing 3D video from a single imager and lens is introduced. The benefit of this method is that it does not have the calibration and alignment issues associated with binocular 3D video cameras. It also does not require special ranging transmitters and sensors. Because it is a single lens/imager system, it is also less expensive than either binocular or ranging cameras. Our system outputs a 2D image and an associated depth image using the combination of a microfluidic lens and a Depth from Defocus (DfD) algorithm. The lens is capable of changing focus to obtain two images at the normal video frame rate. The Depth from Defocus algorithm uses the in-focus and out-of-focus images to infer depth. We performed our experiments on synthetic images and on a real-aperture CMOS imager with a microfluidic lens. On synthetic images, we found an improvement in mean squared error compared to the literature on a limited test set. On camera images, our research showed that DfD combined with edge detection and segmentation provided subjective improvements in the resulting depth images.
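
    A toy illustration of the depth-from-defocus cue is to compare local sharpness between the two focus settings: regions that are sharper in the near-focused frame are closer to the near focal plane. The sketch below (SciPy/NumPy, synthetic images) is an illustrative stand-in for this idea; the window size, sharpness measure, and test images are assumptions and do not reproduce the camera's actual DfD algorithm.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter, gaussian_filter

def relative_depth_map(img_near, img_far, window=9, eps=1e-6):
    """Ratio of local Laplacian energy: close to 1 where the scene is sharper in
    the near-focused frame, close to 0 where it is sharper in the far-focused frame."""
    sharp_near = uniform_filter(laplace(img_near) ** 2, window)
    sharp_far = uniform_filter(laplace(img_far) ** 2, window)
    return sharp_near / (sharp_near + sharp_far + eps)

# Hypothetical focus pair: left half sharp in the "near" frame, right half in the "far" frame.
rng = np.random.default_rng(0)
scene = rng.random((128, 128))
blurred = gaussian_filter(scene, 2.0)
img_near = scene.copy(); img_near[:, 64:] = blurred[:, 64:]
img_far = scene.copy();  img_far[:, :64] = blurred[:, :64]

cue = relative_depth_map(img_near, img_far)
print(cue[:, :64].mean(), cue[:, 64:].mean())   # higher mean on the left half
```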

  3. Bi-sided integral imaging with 2D/3D convertibility using scattering polarizer.

    PubMed

    Yeom, Jiwoon; Hong, Keehoon; Park, Soon-gi; Hong, Jisoo; Min, Sung-Wook; Lee, Byoungho

    2013-12-16

    We propose a two-dimensional (2D) and three-dimensional (3D) convertible bi-sided integral imaging system. The proposed system uses the polarization state of the projected light to switch its operation mode between 2D and 3D modes. By using an optical module composed of two scattering polarizers and one linear polarizer, the proposed integral imaging system simultaneously provides 3D images with 2D background images for observers located at the front and the rear of the system. The occlusion effect between 2D images and 3D images is realized by using a compensation mask for the 2D images and the elemental images. The principle of the proposed system is experimentally verified.

  4. Evaluation of stereoscopic 3D displays for image analysis tasks

    NASA Astrophysics Data System (ADS)

    Peinsipp-Byma, E.; Rehfeld, N.; Eck, R.

    2009-02-01

    In many application domains the analysis of aerial or satellite images plays an important role. The use of stereoscopic display technologies can enhance the image analyst's ability to detect or identify certain objects of interest, which results in higher performance. Changing image acquisition from analog to digital techniques entailed a change in stereoscopic visualization techniques. Recently, different kinds of digital stereoscopic display techniques at affordable prices have appeared on the market. At Fraunhofer IITB, usability tests were carried out to find out (1) with which of these commercially available stereoscopic display techniques image analysts achieve the best performance and (2) which of these techniques achieve high acceptance. First, image analysts were interviewed to define typical image analysis tasks that were expected to be solved with higher performance using stereoscopic display techniques. Next, observer experiments were carried out in which image analysts had to solve the defined tasks with different visualization techniques. Based on the experimental results (performance parameters and qualitative subjective evaluations of the display techniques used), two of the examined stereoscopic display technologies were found to be very good and appropriate.

  5. Registration of multi-view apical 3D echocardiography images

    NASA Astrophysics Data System (ADS)

    Mulder, H. W.; van Stralen, M.; van der Zwaan, H. B.; Leung, K. Y. E.; Bosch, J. G.; Pluim, J. P. W.

    2011-03-01

    Real-time three-dimensional echocardiography (RT3DE) is a non-invasive method to visualize the heart. However, it suffers from non-uniform image quality and a limited field of view. Image quality can be improved by fusion of multiple echocardiography images, and successful registration of the images is essential for such fusion. Therefore, this study examines the performance of different methods for intrasubject registration of multi-view apical RT3DE images. A total of 14 data sets were annotated by two observers who indicated the position of the apex and four points on the mitral valve ring. These annotations were used to evaluate registration. Multi-view end-diastolic (ED) as well as end-systolic (ES) images were rigidly registered in a multi-resolution strategy. The performance of single-frame and multi-frame registration was examined; multi-frame registration optimizes the metric for several time frames simultaneously. Furthermore, the suitability of mutual information (MI) as a similarity measure was compared to normalized cross-correlation (NCC). For initialization of the registration, a transformation that describes the probe movement was obtained by manually registering five representative data sets. It was found that multi-frame registration can improve registration results with respect to single-frame registration. Additionally, NCC outperformed MI as a similarity measure. If NCC was optimized in a multi-frame registration strategy including ED and ES time frames, the performance of the automatic method was comparable to that of manual registration. In conclusion, automatic registration of RT3DE images performs as well as manual registration. As registration precedes image fusion, this method can contribute to improved quality of echocardiography images.
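
    The normalized cross-correlation metric favoured in this study rewards intensity patterns that match up to a linear intensity change, which suits same-modality multi-view echocardiography. The sketch below (plain NumPy, random placeholder volumes) shows the single-frame metric and a multi-frame variant that averages it over several time frames under one shared transform; all names and data are illustrative assumptions, not the study's registration framework.

```python
import numpy as np

def normalized_cross_correlation(vol_a, vol_b):
    """NCC between two (already resampled) image volumes; 1.0 means identical
    up to a linear intensity change."""
    a = vol_a - vol_a.mean()
    b = vol_b - vol_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def multi_frame_ncc(frames_a, frames_b):
    """Multi-frame variant: average the metric over several time frames
    (e.g., ED and ES) evaluated under one shared spatial transform."""
    return float(np.mean([normalized_cross_correlation(a, b)
                          for a, b in zip(frames_a, frames_b)]))

rng = np.random.default_rng(0)
ed = rng.random((32, 32, 32))
es = rng.random((32, 32, 32))
# Second "view": linearly rescaled ED frame and a slightly noisy ES frame.
print(multi_frame_ncc([ed, es], [2.0 * ed + 1.0, es + rng.normal(0, 0.1, es.shape)]))
```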

  6. 3-D ultrafast Doppler imaging applied to the noninvasive mapping of blood vessels in vivo.

    PubMed

    Provost, Jean; Papadacci, Clement; Demene, Charlie; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2015-08-01

    Ultrafast Doppler imaging was introduced as a technique to quantify blood flow in an entire 2-D field of view, expanding the field of application of ultrasound imaging to the highly sensitive anatomical and functional mapping of blood vessels. We have recently developed 3-D ultrafast ultrasound imaging, a technique that can produce thousands of ultrasound volumes per second, based on a 3-D plane and diverging wave emissions, and demonstrated its clinical feasibility in human subjects in vivo. In this study, we show that noninvasive 3-D ultrafast power Doppler, pulsed Doppler, and color Doppler im