Science.gov

Sample records for 3d segmentation algorithm

  1. Interactive algorithms for the segmentation and quantitation of 3-D MRI brain scans.

    PubMed

    Freeborough, P A; Fox, N C; Kitney, R I

    1997-05-01

    Interactive algorithms are an attractive approach to the accurate segmentation of 3D brain scans as they potentially improve the reliability of fully automated segmentation while avoiding the labour intensiveness and inaccuracies of manual segmentation. We present a 3D image analysis package (MIDAS) with a novel architecture enabling highly interactive segmentation algorithms to be implemented as add-on modules. Interactive methods based on intensity thresholding, region growing and the constrained application of morphological operators are also presented. The methods involve the application of constraints and freedoms on the algorithms coupled with real-time visualisation of the effect. This methodology has been applied to the segmentation, visualisation and measurement of the whole brain and a small irregular neuroanatomical structure, the hippocampus. We demonstrate reproducible and anatomically accurate segmentations of these structures. The efficacy of one method in measuring volume loss (atrophy) of the hippocampus in Alzheimer's disease is shown and compared to conventional methods.
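
    As a rough illustration of the intensity-constrained region growing mentioned above (a minimal sketch, not the MIDAS add-on modules themselves), the following NumPy code grows a 6-connected region from a seed voxel subject to an intensity window; the function name, the window bounds and the connectivity choice are illustrative assumptions.

```python
from collections import deque
import numpy as np

def region_grow_3d(volume, seed, low, high):
    """Grow a region from `seed` over 6-connected voxels whose intensity lies in [low, high].
    The seed voxel is assumed to satisfy the intensity constraint."""
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        if mask[z, y, x]:
            continue
        mask[z, y, x] = True
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]
                    and low <= volume[nz, ny, nx] <= high):
                queue.append((nz, ny, nx))
    return mask
```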

  2. Alignment, segmentation and 3-D reconstruction of serial sections based on automated algorithm

    NASA Astrophysics Data System (ADS)

    Bian, Weiguo; Tang, Shaojie; Xu, Qiong; Lian, Qin; Wang, Jin; Li, Dichen

    2012-12-01

    A well-defined three-dimensional (3-D) reconstruction of bone-cartilage transitional structures is crucial for osteochondral restoration. This paper presents an accurate, computationally efficient and fully automated algorithm for the alignment and segmentation of two-dimensional (2-D) serial sections to construct a 3-D model of bone-cartilage transitional structures. The entire system comprises five components: (1) image harvest, (2) image registration, (3) image segmentation, (4) 3-D reconstruction and visualization, and (5) evaluation. A computer program was developed in the Matlab environment for the automatic alignment and segmentation of serial sections. The automatic alignment algorithm is based on cross-correlation of the positions of anatomical feature points in two sequential sections. A method combining automatic segmentation with image thresholding was applied to capture the regions and structures of interest. An SEM micrograph and a 3-D model reconstructed directly in a digital microscope were used to evaluate the reliability and accuracy of this strategy. The morphology of the 3-D model constructed from serial sections is consistent with the SEM micrograph and the digital-microscope 3-D model.
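
    As a simplified stand-in for the alignment step (the authors cross-correlate anatomical feature-point positions; the sketch below instead uses whole-image phase correlation, assuming scikit-image and SciPy are available), consecutive sections can be registered by translation as follows. File names and the registration choice are illustrative.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.registration import phase_cross_correlation

def align_stack(sections):
    """Translate each 2-D section so it lines up with the previous one; returns the aligned stack."""
    aligned = [sections[0]]
    for moving in sections[1:]:
        # Estimate the translation between the previous (aligned) section and the current one.
        shift, _, _ = phase_cross_correlation(aligned[-1], moving)
        aligned.append(ndi.shift(moving, shift))
    return np.stack(aligned)
```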

  3. A Segmentation Algorithm for X-ray 3D Angiography and Vessel Catheterization

    SciTech Connect

    Franchi, Danilo; Rosa, Luigi; Placidi, Giuseppe

    2008-11-06

    Vessel catheterization is a clinical procedure usually performed by a specialist under X-ray fluoroscopic guidance with contrast media. In the present paper, we present a simple and efficient algorithm for vessel segmentation which allows vessel separation and extraction from the background (noise and signal coming from other organs). This would reduce the number of projections (X-ray scans) needed to reconstruct a complete and accurate 3D vascular model, and hence the radiological risk, in particular for the patient. In what follows, the algorithm is described and some preliminary experimental results are reported illustrating the behaviour of the proposed method.

  4. 3-D Ultrasound Segmentation of the Placenta Using the Random Walker Algorithm: Reliability and Agreement.

    PubMed

    Stevenson, Gordon N; Collins, Sally L; Ding, Jane; Impey, Lawrence; Noble, J Alison

    2015-12-01

    Volumetric segmentation of the placenta using 3-D ultrasound is currently performed clinically to investigate correlation between organ volume and fetal outcome or pathology. Previously, interpolative or semi-automatic contour-based methodologies were used to provide volumetric results. We describe the validation of an original random walker (RW)-based algorithm against manual segmentation and an existing semi-automated method, virtual organ computer-aided analysis (VOCAL), using initialization time, inter- and intra-observer variability of volumetric measurements and quantification accuracy (with respect to manual segmentation) as metrics of success. Both semi-automatic methods require initialization. Therefore, the first experiment compared initialization times. Initialization was timed by one observer using 20 subjects. This revealed significant differences (p < 0.001) in time taken to initialize the VOCAL method compared with the RW method. In the second experiment, 10 subjects were used to analyze intra-/inter-observer variability between two observers. Bland-Altman plots were used to analyze variability, combined with intra- and inter-observer variability measured by intra-class correlation coefficients, which were reported for all three methods. Intra-class correlation coefficient values for intra-observer variability were higher for the RW method than for VOCAL, and both were similar to manual segmentation. Inter-observer variability was 0.94 (0.88, 0.97), 0.91 (0.81, 0.95) and 0.80 (0.61, 0.90) for manual, RW and VOCAL, respectively. Finally, a third observer with no prior ultrasound experience was introduced and volumetric differences from manual segmentation were reported. Dice similarity coefficients for observers 1, 2 and 3 were respectively 0.84 ± 0.12, 0.94 ± 0.08 and 0.84 ± 0.11, and the mean was 0.87 ± 0.13. The RW algorithm was found to provide results concordant with those for manual segmentation and to outperform VOCAL in aspects of observer variability.
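
    A hedged sketch of random-walker segmentation on a 3-D volume using scikit-image is shown below; the file name, seed coordinates and parameter values are toy placeholders, not the initialization protocol used in the paper.

```python
import numpy as np
from skimage.segmentation import random_walker

volume = np.load("placenta_3d.npy")          # hypothetical pre-loaded 3-D ultrasound array
labels = np.zeros(volume.shape, dtype=np.uint8)
labels[60:65, 100:105, 100:105] = 1          # foreground (placenta) seeds, illustrative coordinates
labels[0:5, 0:5, 0:5] = 2                    # background seeds, illustrative coordinates

# beta controls how strongly intensity differences penalize the walk; 130 is the library default.
segmentation = random_walker(volume, labels, beta=130, mode="cg")
placenta_mask = segmentation == 1
```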

  5. Comparative Local Quality Assessment of 3D Medical Image Segmentations with Focus on Statistical Shape Model-Based Algorithms.

    PubMed

    Landesberger, Tatiana von; Basgier, Dennis; Becker, Meike

    2016-12-01

    The quality of automatic 3D medical segmentation algorithms needs to be assessed on test datasets comprising several 3D images (i.e., instances of an organ). The experts need to compare the segmentation quality across the dataset in order to detect systematic segmentation problems. However, such comparative evaluation is not supported well by current methods. We present a novel system for assessing and comparing segmentation quality in a dataset with multiple 3D images. The data are analyzed and visualized in several views. We detect and show regions with systematic segmentation quality characteristics. For this purpose, we extended a hierarchical clustering algorithm with a connectivity criterion. We combine quality values across the dataset to determine regions with characteristic segmentation quality across instances. Using our system, the experts can also identify 3D segmentations with extraordinary quality characteristics. While we focus on algorithms based on statistical shape models, our approach can also be applied to cases where landmark correspondences among instances can be established. We applied our approach to three real datasets: liver, cochlea and facial nerve. The segmentation experts were able to identify organ regions with systematic segmentation characteristics as well as to detect outlier instances.
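
    In the spirit of the connectivity-constrained hierarchical clustering mentioned above (this is only an illustrative sketch with scikit-learn, not the authors' extended algorithm), per-vertex quality values can be clustered while restricting merges to spatially adjacent surface vertices. The file names and parameter values are assumptions.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import kneighbors_graph

vertices = np.load("mean_shape_vertices.npy")   # hypothetical (N, 3) surface vertex positions
quality = np.load("per_vertex_quality.npy")     # hypothetical (N, k) quality values across instances

# Restrict merges to spatially neighbouring vertices so clusters stay connected on the surface.
connectivity = kneighbors_graph(vertices, n_neighbors=8, include_self=False)
clusters = AgglomerativeClustering(n_clusters=12, connectivity=connectivity,
                                   linkage="ward").fit_predict(quality)
```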

  6. Streaming level set algorithm for 3D segmentation of confocal microscopy images.

    PubMed

    Gouaillard, Alexandre; Mosaliganti, Kishore; Gelas, Arnaud; Souhait, Lydie; Obholzer, Nikolaus; Megason, Sean

    2009-01-01

    We present a high-performance variant of the popular geodesic active contours, which are used for splitting cell clusters in microscopy images. Previously, we implemented a linear pipelined version that incorporates as many cues as possible into a suitable level-set speed function so that an evolving contour exactly segments a cell/nucleus blob. We use image gradients, distance maps, multiple channel information and a shape model to drive the evolution. We also developed a dedicated seeding strategy that uses the spatial coherency of the data to generate an overcomplete set of seeds, along with a quality metric that is used to decide which seed should be used for a given cell. However, the computational performance of any level-set methodology is quite poor when applied to thousands of 3D datasets, each containing thousands of cells. Such datasets are common in confocal microscopy. In this work, we explore methods to stream the algorithm in shared-memory, multi-core environments. By partitioning the input and output using spatial data structures we ensure the spatial coherency needed by our seeding algorithm and drastically improve speed without memory overhead. Our results show speed-ups of up to a factor of six.
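
    For orientation, here is a rough (non-streaming) geodesic active contour on a single 3-D volume using scikit-image morphological snakes; the seeding, file name and parameter values are illustrative assumptions and not the pipeline described in the paper.

```python
import numpy as np
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient)

volume = np.load("confocal_stack.npy")        # hypothetical 3-D intensity array
gimage = inverse_gaussian_gradient(volume)    # edge-stopping "speed" image

# Initial level set: a small box around a detected seed (coordinates are placeholders).
init = np.zeros(volume.shape, dtype=np.int8)
init[40:50, 100:110, 100:110] = 1

# 150 iterations; balloon > 0 inflates the contour until it is stopped by strong edges.
cell_mask = morphological_geodesic_active_contour(gimage, 150, init_level_set=init,
                                                  smoothing=1, balloon=1, threshold=0.7)
```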

  7. Fast algorithm for optimal graph-Laplacian based 3D image segmentation

    NASA Astrophysics Data System (ADS)

    Harizanov, S.; Georgiev, I.

    2016-10-01

    In this paper we propose an iterative steepest-descent-type algorithm that is observed to converge towards the exact solution of the ℓ0 discrete optimization problem related to graph-Laplacian based image segmentation. Such an algorithm allows for significant additional improvements in segmentation quality once the minimizer of the associated relaxed ℓ1 continuous optimization problem is computed, unlike the standard strategy of simply hard-thresholding the latter. Convergence analysis of the algorithm is not a subject of this work. Instead, various numerical experiments confirming the practical value of the algorithm are documented.

  8. A fully-automatic locally adaptive thresholding algorithm for blood vessel segmentation in 3D digital subtraction angiography.

    PubMed

    Boegel, Marco; Hoelter, Philip; Redel, Thomas; Maier, Andreas; Hornegger, Joachim; Doerfler, Arnd

    2015-01-01

    Subarachnoid hemorrhage due to a ruptured cerebral aneurysm is still a devastating disease. Planning of endovascular aneurysm therapy is increasingly based on hemodynamic simulations necessitating reliable vessel segmentation and accurate assessment of vessel diameters. In this work, we propose a fully-automatic, locally adaptive, gradient-based thresholding algorithm. Our approach consists of two steps. First, we estimate the parameters of a global thresholding algorithm using an iterative process. Then, a locally adaptive version of the approach is applied using the estimated parameters. We evaluated both methods on 8 clinical 3D DSA cases. Additionally, we propose a way to select a reference segmentation based on 2D DSA measurements. For large vessels such as the internal carotid artery, our results show very high sensitivity (97.4%), precision (98.7%) and Dice coefficient (98.0%) with respect to our reference segmentation. Similar results (sensitivity: 95.7%, precision: 88.9% and Dice coefficient: 90.7%) are achieved for smaller vessels of approximately 1 mm diameter.
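
    The two-step idea can be illustrated with a simple sketch: an ISODATA-style iterative global threshold followed by a locally adaptive threshold built from windowed means. This is an assumption-laden stand-in (the paper's estimation is gradient-based), and the function names, window size and blending weight are invented for illustration.

```python
import numpy as np
from scipy import ndimage as ndi

def iterative_global_threshold(volume, tol=1e-3):
    """ISODATA-style iteration: threshold converges to the midpoint of the two class means."""
    t = volume.mean()
    while True:
        below, above = volume[volume < t], volume[volume >= t]
        t_new = 0.5 * (below.mean() + above.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

def locally_adaptive_mask(volume, window=15, offset_scale=0.5):
    """Blend the global estimate with a local mean so dim, thin vessels are not lost."""
    t_global = iterative_global_threshold(volume)
    local_mean = ndi.uniform_filter(volume.astype(np.float32), size=window)
    local_threshold = offset_scale * t_global + (1.0 - offset_scale) * local_mean
    return volume > local_threshold
```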

  9. Liver Tumor Segmentation from MR Images Using 3D Fast Marching Algorithm and Single Hidden Layer Feedforward Neural Network.

    PubMed

    Le, Trong-Ngoc; Bao, Pham The; Huynh, Hieu Trung

    2016-01-01

    Objective. Our objective is to develop a computerized scheme for liver tumor segmentation in MR images. Materials and Methods. Our proposed scheme consists of four main stages. Firstly, the region of interest (ROI) image which contains the liver tumor region in the T1-weighted MR image series was extracted by using seed points. The noise in this ROI image was reduced and the boundaries were enhanced. A 3D fast marching algorithm was applied to generate the initial labeled regions which are considered as teacher regions. A single hidden layer feedforward neural network (SLFN), which was trained by a noniterative algorithm, was employed to classify the unlabeled voxels. Finally, the postprocessing stage was applied to extract and refine the liver tumor boundaries. The liver tumors determined by our scheme were compared with those manually traced by a radiologist, used as the "ground truth." Results. The study was evaluated on two datasets of 25 tumors from 16 patients. The proposed scheme obtained the mean volumetric overlap error of 27.43% and the mean percentage volume error of 15.73%. The mean of the average surface distance, the root mean square surface distance, and the maximal surface distance were 0.58 mm, 1.20 mm, and 6.29 mm, respectively.
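
    A sketch of the initial-labelling step with the scikit-fmm package (assumed installed) is given below: a front propagates from a seed at a speed that drops near strong gradients, and early-arrival voxels form the "teacher" region. The file name, seed coordinates, speed function and arrival-time cutoff are illustrative, not the authors' implementation.

```python
import numpy as np
import skfmm

roi = np.load("roi_t1.npy")                        # hypothetical denoised, edge-enhanced ROI
phi = np.ones(roi.shape)
phi[64, 120, 118] = -1                             # placeholder tumor seed voxel

# Slow the front near strong gradients so it tends to stop at tumor boundaries.
gz, gy, gx = np.gradient(roi.astype(np.float32))
speed = 1.0 / (1.0 + np.sqrt(gz**2 + gy**2 + gx**2))

arrival = skfmm.travel_time(phi, speed)
teacher_region = arrival < 15.0                    # arrival-time cutoff chosen for illustration
```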

  11. Comparative evaluation of a novel 3D segmentation algorithm on in-treatment radiotherapy cone beam CT images

    NASA Astrophysics Data System (ADS)

    Price, Gareth; Moore, Chris

    2007-03-01

    Image segmentation and delineation is at the heart of modern radiotherapy, where the aim is to deliver as high a radiation dose as possible to a cancerous target whilst sparing the surrounding healthy tissues. This, of course, requires that a radiation oncologist dictates both where the tumour and any nearby critical organs are located. As well as in treatment planning, delineation is of vital importance in image guided radiotherapy (IGRT): organ motion studies demand that features across image databases are accurately segmented, whilst if on-line adaptive IGRT is to become a reality, speedy and correct target identification is a necessity. Recently, much work has been put into the development of automatic and semi-automatic segmentation tools, often using prior knowledge to constrain an interrogation algorithm based on grey levels or derivatives thereof. It is hoped that such techniques can be applied to organ-at-risk and tumour segmentation in radiotherapy. In this work, however, we make the assumption that grey levels do not necessarily determine a tumour's extent, especially in CT where the attenuation coefficient can often vary little between cancerous and normal tissue. In this context we present an algorithm that generates a discontinuity-free delineation surface driven by user-placed, evidence-based support points. In regions of sparse user-supplied information, prior knowledge, in the form of a statistical shape model, provides guidance. A small case study is used to illustrate the method. Multiple observers (between 3 and 7) used both the presented tool and a commercial manual contouring package to delineate the bladder on a serially imaged (10 cone beam CT volumes) prostate patient. A previously presented shape analysis technique is used to quantitatively compare the observer variability.

  12. Active segmentation of 3D axonal images.

    PubMed

    Muralidhar, Gautam S; Gopinath, Ajay; Bovik, Alan C; Ben-Yakar, Adela

    2012-01-01

    We present an active contour framework for segmenting neuronal axons in 3D confocal microscopy data. Our work is motivated by the need to conduct high-throughput experiments involving microfluidic devices and femtosecond lasers to study the genetic mechanisms behind nerve regeneration and repair. While most applications of active contours have focused on segmenting closed regions in 2D medical and natural images, few have addressed segmenting open-ended curvilinear structures in 2D or higher dimensions. The active contour framework we present here ties together a well-known 2D active contour model [5] with the physics of projection imaging geometry to yield a segmented axon in 3D. Qualitative results illustrate the promise of our approach for segmenting neuronal axons in 3D confocal microscopy data.

  13. Freehand 3D ultrasound breast tumor segmentation

    NASA Astrophysics Data System (ADS)

    Liu, Qi; Ge, Yinan; Ou, Yue; Cao, Biao

    2007-12-01

    It is very important for physicians to accurately determine breast tumor location, size and shape in ultrasound images. The precision of breast tumor volume quantification relies on accurate segmentation of the images. Given the known location and orientation of the ultrasound probe, we propose using freehand three-dimensional (3D) ultrasound to acquire original images of the breast tumor and the surrounding tissues in real time. After preprocessing with anisotropic diffusion filtering, segmentation is performed slice by slice in the image stack based on the level set method. For the segmentation of each slice, the user can adjust the parameters to fit the requirements of the specific image in order to obtain a satisfactory result. Through the quantification procedure, the user can observe how the tumor size varies across the images in the stack. Surface rendering and interpolation are used to reconstruct the 3D breast tumor image, and the breast tumor volume is constructed from the segmented contours in the stack of images. After the segmentation, the volume of the breast tumor in the 3D image data can be obtained.

  14. 3D model retrieval method based on mesh segmentation

    NASA Astrophysics Data System (ADS)

    Gan, Yuanchao; Tang, Yan; Zhang, Qingchen

    2012-04-01

    In the process of feature description and extraction, current 3D model retrieval algorithms focus on the global features of 3D models but ignore the combination of global and local features of the model. For this reason, they perform less effectively on models with similar global shape but different local shape. This paper proposes a novel algorithm for 3D model retrieval based on mesh segmentation. The key idea is to extract the structure feature and the local shape feature of 3D models, and then to compare the similarities of the two characteristics and the total similarity between the models. A system that realizes this approach was built and tested on a database of 200 objects and achieves the expected results. The results show that the proposed algorithm effectively improves precision and recall.

  15. Needle segmentation using 3D Hough transform in 3D TRUS guided prostate transperineal therapy

    SciTech Connect

    Qiu Wu; Yuchi Ming; Ding Mingyue; Tessier, David; Fenster, Aaron

    2013-04-15

    Purpose: Prostate adenocarcinoma is the most common noncutaneous malignancy in American men with over 200 000 new cases diagnosed each year. Prostate interventional therapy, such as cryotherapy and brachytherapy, is an effective treatment for prostate cancer. Its success relies on the correct needle implant position. This paper proposes a robust and efficient needle segmentation method, which acts as an aid to localize the needle in three-dimensional (3D) transrectal ultrasound (TRUS) guided prostate therapy. Methods: The procedure of locating the needle in a 3D TRUS image is a three-step process. First, the original 3D ultrasound image containing a needle is cropped; the cropped image is then converted to a binary format based on its histogram. Second, a 3D Hough transform based needle segmentation method is applied to the 3D binary image in order to locate the needle axis. The position of the needle endpoint is finally determined by an optimal threshold based analysis of the intensity probability distribution. The overall efficiency is improved through implementing a coarse-fine searching strategy. The proposed method was validated in tissue-mimicking agar phantoms, chicken breast phantoms, and 3D TRUS patient images from prostate brachytherapy and cryotherapy procedures by comparison to the manual segmentation. The robustness of the proposed approach was tested by means of varying parameters such as needle insertion angle, needle insertion length, binarization threshold level, and cropping size. Results: The validation results indicate that the proposed Hough transform based method is accurate and robust, with an achieved endpoint localization accuracy of 0.5 mm for agar phantom images, 0.7 mm for chicken breast phantom images, and 1 mm for in vivo patient cryotherapy and brachytherapy images. The mean execution time of needle segmentation algorithm was 2 s for a 3D TRUS image with size of 264 × 376 × 630 voxels.

  16. MRCK_3D contact detection algorithm

    SciTech Connect

    Rougier, Esteban; Munjiza, Antonio

    2010-01-01

    Large-scale Combined Finite-Discrete Element Methods (FEM-DEM) and Discrete Element Methods (DEM) simulations involving contact of a large number of separate bodies need an efficient, robust and flexible contact detection algorithm. In this work the MRCK_3D search algorithm is outlined and its main CPU performance is evaluated. One of the most important aspects of this newly developed search algorithm is that it is applicable to systems consisting of many bodies of different shapes and sizes.

  17. Unsupervised noise removal algorithms for 3-D confocal fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Roysam, Badrinath; Bhattacharjya, Anoop K.; Srinivas, Chukka; Szarowski, Donald H.; Turner, James N.

    1992-06-01

    Fast algorithms are presented for effective removal of the noise artifact in 3-D confocal fluorescence microscopy images of extended spatial objects such as neurons. The algorithms are unsupervised in the sense that they automatically estimate and adapt to the spatially and temporally varying noise level in the microscopy data. An important feature of the algorithms is that a 3-D segmentation of the field emerges jointly with the intensity estimate. The role of the segmentation is to limit any smoothing to the interiors of regions and hence avoid the blurring that is associated with conventional noise removal algorithms. Fast computation is achieved by parallel computation methods, rather than by algorithmic or modelling compromises. The noise removal proceeds iteratively, starting from a set of approximate user-supplied or default initial guesses of the underlying random process parameters. An expectation maximization algorithm is used to obtain a more precise characterization of these parameters, which are then input to a hierarchical estimation algorithm. This algorithm computes a joint solution of the related problems of intensity estimation, segmentation, and boundary-surface estimation subject to a combination of stochastic priors and syntactic pattern constraints. Three-dimensional stereoscopic renderings of processed 3-D images of murine hippocampal neurons are presented to demonstrate the effectiveness of the method. The processed images exhibit increased contrast and significant smoothing and reduction of the background intensity while avoiding any blurring of the neuronal structures.

  18. Automated 3D renal segmentation based on image partitioning

    NASA Astrophysics Data System (ADS)

    Yeghiazaryan, Varduhi; Voiculescu, Irina D.

    2016-03-01

    Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context and still comes at a vast expense of user time. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The resulting segmentations identify kidneys, but the approach can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation compared against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data with 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients and true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large amplitude of the Hounsfield unit distribution in the scan and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also provide a light level of manual editing.
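
    The overlap measures quoted here are straightforward to compute from binary masks; the small helper below (assuming boolean NumPy arrays; the function name is illustrative) returns the Dice and Jaccard coefficients, the true positive volume fraction and the relative volume difference.

```python
import numpy as np

def overlap_metrics(auto_mask, manual_mask):
    """Volume-overlap measures between an automatic and a manual (gold standard) boolean mask."""
    auto, manual = auto_mask.astype(bool), manual_mask.astype(bool)
    intersection = np.logical_and(auto, manual).sum()
    union = np.logical_or(auto, manual).sum()
    dice = 2.0 * intersection / (auto.sum() + manual.sum())
    jaccard = intersection / union
    true_positive_volume_fraction = intersection / manual.sum()
    relative_volume_difference = (auto.sum() - manual.sum()) / manual.sum()
    return dice, jaccard, true_positive_volume_fraction, relative_volume_difference
```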

  19. An adaptive 3D region growing algorithm to automatically segment and identify thoracic aorta and its centerline using computed tomography angiography scans

    NASA Astrophysics Data System (ADS)

    Ferreira, F.; Dehmeshki, J.; Amin, H.; Dehkordi, M. E.; Belli, A.; Jouannic, A.; Qanadli, S.

    2010-03-01

    Thoracic Aortic Aneurysm (TAA) is a localized swelling of the thoracic aorta. The progressive growth of an aneurysm may eventually cause a rupture if not diagnosed or treated. This necessitates accurate measurement, which in turn calls for accurate segmentation of the aneurysm regions. Computer Aided Detection (CAD) is a tool to automatically detect and segment the TAA in computed tomography angiography (CTA) images. A fundamental step in developing such a system is a robust method for detecting the main vessel and measuring its diameters. In this paper we propose a novel adaptive method to simultaneously segment the thoracic aorta and identify its centerline. For this purpose, an adaptive parametric 3D region growing is proposed in which the seed is automatically selected through the detection of the celiac artery and the parameters of the method are re-estimated while the region grows through the aorta. At each phase of region growing the initial centerline of the aorta is also identified and modified through the process. Thus the proposed method simultaneously detects the aorta and identifies its centerline. The method has been applied to CT images from 20 patients, with good agreement with the visual assessment of two radiologists.

  20. Object Segmentation and Ground Truth in 3D Embryonic Imaging

    PubMed Central

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C.

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets. PMID:27332860

  2. Image segmentation to inspect 3-D object sizes

    NASA Astrophysics Data System (ADS)

    Hsu, Jui-Pin; Fuh, Chiou-Shann

    1996-01-01

    Object size inspection is an important task with various applications in computer vision, for example the automatic control of stone-breaking machines, which perform better if the sizes of the stones to be broken can be predicted. An algorithm is proposed for image segmentation in size inspection of almost-round stones with high or low texture. Although our experiments focus on stones, the algorithm can be applied to other 3-D objects. We use one fixed camera and four light sources at four different positions, switched on one at a time, to take four images. We then compute the image differences and binarize them to extract edges. We explain, step by step, the photographing, the edge extraction, the noise removal, and the edge gap filling. Experimental results are presented.

  3. 3D Mesh Segmentation Based on Markov Random Fields and Graph Cuts

    NASA Astrophysics Data System (ADS)

    Shi, Zhenfeng; Le, Dan; Yu, Liyang; Niu, Xiamu

    3D mesh segmentation has become an important research field in computer graphics during the past few decades. Many geometry-based and semantics-oriented approaches for 3D mesh segmentation have been presented. However, only a few algorithms based on Markov Random Fields (MRF) have been presented for 3D object segmentation. In this letter, we present a definition of mesh segmentation as a labeling problem. Inspired by the capability of MRF to combine the geometric and topological information of a 3D mesh, we propose a novel 3D mesh segmentation model based on MRF and Graph Cuts. Experimental results show that our MRF-based scheme achieves an effective segmentation.

  4. A hybrid framework for 3D medical image segmentation.

    PubMed

    Chen, Ting; Metaxas, Dimitris

    2005-12-01

    In this paper we propose a novel hybrid 3D segmentation framework which combines Gibbs models, marching cubes and deformable models. In the framework, first we construct a new Gibbs model whose energy function is defined on a high order clique system. The new model includes both region and boundary information during segmentation. Next we improve the original marching cubes method to construct 3D meshes from Gibbs models' output. The 3D mesh serves as the initial geometry of the deformable model. Then we deform the deformable model using external image forces so that the model converges to the object surface. We run the Gibbs model and the deformable model recursively by updating the Gibbs model's parameters using the region and boundary information in the deformable model segmentation result. In our approach, the hybrid combination of region-based methods and boundary-based methods results in improved segmentations of complex structures. The benefit of the methodology is that it produces high quality segmentations of 3D structures using little prior information and minimal user intervention. The modules in this segmentation methodology are developed within the context of the Insight ToolKit (ITK). We present experimental segmentation results of brain tumors and evaluate our method by comparing experimental results with expert manual segmentations. The evaluation results show that the methodology achieves high quality segmentation results with computational efficiency. We also present segmentation results of other clinical objects to illustrate the strength of the methodology as a generic segmentation framework.
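
    The "Gibbs output to initial mesh" step can be illustrated with scikit-image's marching cubes, as a hedged sketch only: the paper's pipeline is built within ITK, and the file name and level value below are assumptions.

```python
import numpy as np
from skimage import measure

gibbs_labels = np.load("gibbs_segmentation.npy")   # hypothetical binary output of the Gibbs model
# Extract the iso-surface of the binary volume; the mesh would initialize the deformable model.
verts, faces, normals, values = measure.marching_cubes(gibbs_labels.astype(np.float32), level=0.5)
```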

  5. 3D Building Models Segmentation Based on K-Means++ Cluster Analysis

    NASA Astrophysics Data System (ADS)

    Zhang, C.; Mao, B.

    2016-10-01

    3D mesh model segmentation has drawn increasing attention in the digital geometry processing field in recent years. The original 3D mesh model needs to be divided into separate meaningful parts or surface patches based on certain standards to support reconstruction, compression, texture mapping, model retrieval and so on. Segmentation is therefore a key problem in 3D mesh model processing. In this paper, we propose a method to segment Collada (a type of mesh model) 3D building models into meaningful parts using cluster analysis. Common clustering methods segment 3D mesh models with K-means, whose performance heavily depends on the randomized initial seed points (i.e., centroids); different randomized centroids can give quite different results. Therefore, we improved the existing method and used the K-means++ clustering algorithm to solve this problem. Our experiments show that K-means++ improves both the speed and the accuracy of K-means and achieves good and meaningful results.
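
    An illustrative k-means++ segmentation of a triangle mesh is sketched below, assuming the mesh is already available as NumPy arrays of vertices and faces (Collada parsing is omitted); the per-face features, file names and cluster count are assumptions, not the paper's feature design.

```python
import numpy as np
from sklearn.cluster import KMeans

vertices = np.load("building_vertices.npy")    # hypothetical (V, 3) array
faces = np.load("building_faces.npy")          # hypothetical (F, 3) vertex-index array

tri = vertices[faces]                          # (F, 3, 3) triangle corner coordinates
centroids = tri.mean(axis=1)
normals = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-12

features = np.hstack([centroids, normals])     # simple per-face feature vector
labels = KMeans(n_clusters=8, init="k-means++", n_init=10).fit_predict(features)
```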

  6. 3D Clumped Cell Segmentation Using Curvature Based Seeded Watershed

    PubMed Central

    Atta-Fosu, Thomas; Guo, Weihong; Jeter, Dana; Mizutani, Claudia M.; Stopczynski, Nathan; Sousa-Neves, Rui

    2017-01-01

    Image segmentation is an important process that separates objects from the background and also from each other. Applied to cells, the results can be used for cell counting which is very important in medical diagnosis and treatment, and biological research that is often used by scientists and medical practitioners. Segmenting 3D confocal microscopy images containing cells of different shapes and sizes is still challenging as the nuclei are closely packed. The watershed transform provides an efficient tool in segmenting such nuclei provided a reasonable set of markers can be found in the image. In the presence of low-contrast variation or excessive noise in the given image, the watershed transform leads to over-segmentation (a single object is overly split into multiple objects). The traditional watershed uses the local minima of the input image and will characteristically find multiple minima in one object unless they are specified (marker-controlled watershed). An alternative to using the local minima is by a supervised technique called seeded watershed, which supplies single seeds to replace the minima for the objects. Consequently, the accuracy of a seeded watershed algorithm relies on the accuracy of the predefined seeds. In this paper, we present a segmentation approach based on the geometric morphological properties of the ‘landscape’ using curvatures. The curvatures are computed as the eigenvalues of the Shape matrix, producing accurate seeds that also inherit the original shape of their respective cells. We compare with some popular approaches and show the advantage of the proposed method. PMID:28280723
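
    A simplified marker-controlled 3-D watershed with scikit-image is sketched below; the seeds come from distance-transform peaks rather than the curvature-based seeds proposed in the paper, so this only illustrates the general seeded-watershed workflow. The file name and parameters are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

nuclei = np.load("nuclei_mask.npy").astype(bool)   # hypothetical binary foreground of clumped nuclei
distance = ndi.distance_transform_edt(nuclei)

# One marker per distance-transform peak, used in place of the curvature-derived seeds.
peaks = peak_local_max(distance, labels=nuclei, min_distance=5)
markers = np.zeros(nuclei.shape, dtype=np.int32)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

labels = watershed(-distance, markers, mask=nuclei)
```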

  7. 3D CT spine data segmentation and analysis of vertebrae bone lesions.

    PubMed

    Peter, R; Malinsky, M; Ourednicek, P; Jan, J

    2013-01-01

    A method is presented aiming at detecting and classifying bone lesions in 3D CT data of human spine, via Bayesian approach utilizing Markov random fields. A developed algorithm for necessary segmentation of individual possibly heavily distorted vertebrae based on 3D intensity modeling of vertebra types is presented as well.

  8. Segmentation of 3D objects using live wire

    NASA Astrophysics Data System (ADS)

    Falcao, Alexandre X.; Udupa, Jayaram K.

    1997-04-01

    We have been developing user-steered image segmentation methods for situations which require considerable user assistance in object definition. In such situations, our segmentation methods aim (1) to provide effective control to the user on the segmentation process while it is being executed and (2) to minimize the total user time required in the process. In the past, we have presented two paradigms, referred to as live wire and live lane, for segmenting 3D/4D object boundaries in a slice-by-slice fashion. In this paper, we introduce a 3D extension of the live wire approach which can further reduce the time spent by the user in the segmentation process. In 2D live wire, given a slice, for two specified points (pixel vertices) on the boundary of the object, the best boundary segment (as a set of oriented pixel edges) is the minimum-cost path between the two points. This segment is found via dynamic programming in real time as the user anchors the first point and moves the cursor to indicate the second point. A complete 2D boundary in this slice is identified as a set of consecutive boundary segments forming a 'closed,' 'connected,' 'oriented' contour. The strategy of the 3D extension is that, first, users specify contours via live-wiring on a few orthogonal slices. If these slices are selected strategically, then we have a sufficient number of points on the 3D boundary of the object to do live-wiring automatically on all axial slices of the 3D scene. Based on several validation studies involving segmentation of the bones of the foot in MR images, we found that the 3D extension of live wire is statistically significantly (p less than 0.0001) more repeatable and 2 - 6 times faster (p less than 0.01) than the 2D live wire method and 3 - 15 times faster than manual tracing.

  9. An algorithm for segmenting range imagery

    SciTech Connect

    Roberts, R.S.

    1997-03-01

    This report describes the technical accomplishments of the FY96 Cross Cutting and Advanced Technology (CC&AT) project at Los Alamos National Laboratory. The project focused on developing algorithms for segmenting range images. The image segmentation algorithm developed during the project is described here. In addition to segmenting range images, the algorithm can fuse multiple range images thereby providing true 3D scene models. The algorithm has been incorporated into the Rapid World Modelling System at Sandia National Laboratory.

  10. Random Walk Based Segmentation for the Prostate on 3D Transrectal Ultrasound Images.

    PubMed

    Ma, Ling; Guo, Rongrong; Tian, Zhiqiang; Venkataraman, Rajesh; Sarkar, Saradwata; Liu, Xiabi; Nieh, Peter T; Master, Viraj V; Schuster, David M; Fei, Baowei

    2016-02-27

    This paper proposes a new semi-automatic segmentation method for the prostate on 3D transrectal ultrasound images (TRUS) by combining the region and classification information. We use a random walk algorithm to express the region information efficiently and flexibly because it can avoid segmentation leakage and shrinking bias. We further use the decision tree as the classifier to distinguish the prostate from the non-prostate tissue because of its fast speed and superior performance, especially for a binary classification problem. Our segmentation algorithm is initialized with the user roughly marking the prostate and non-prostate points on the mid-gland slice which are fitted into an ellipse for obtaining more points. Based on these fitted seed points, we run the random walk algorithm to segment the prostate on the mid-gland slice. The segmented contour and the information from the decision tree classification are combined to determine the initial seed points for the other slices. The random walk algorithm is then used to segment the prostate on the adjacent slice. We propagate the process until all slices are segmented. The segmentation method was tested in 32 3D transrectal ultrasound images. Manual segmentation by a radiologist serves as the gold standard for the validation. The experimental results show that the proposed method achieved a Dice similarity coefficient of 91.37±0.05%. The segmentation method can be applied to 3D ultrasound-guided prostate biopsy and other applications.

  13. Segmentation of brain blood vessels using projections in 3-D CT angiography images.

    PubMed

    Babin, Danilo; Vansteenkiste, Ewout; Pizurica, Aleksandra; Philips, Wilfried

    2011-01-01

    Segmenting cerebral blood vessels is of great importance in diagnostic and clinical applications, especially in quantitative diagnostics and surgery on aneurysms and arteriovenous malformations (AVM). Segmentation of CT angiography images requires algorithms robust to high-intensity noise that are also able to segment low-contrast vessels. Because of this, most existing methods require user intervention. In this work we propose an automatic algorithm for efficient segmentation of 3-D CT angiography images of cerebral blood vessels. Our method is robust to high-intensity noise and is able to accurately segment blood vessels with a wide range of luminance values, as well as low-contrast vessels.

  14. 3D segmentations of neuronal nuclei from confocal microscope image stacks.

    PubMed

    Latorre, Antonio; Alonso-Nanclares, Lidia; Muelas, Santiago; Peña, José-María; Defelipe, Javier

    2013-01-01

    In this paper, we present an algorithm to create 3D segmentations of neuronal cells from stacks of previously segmented 2D images. The idea behind this proposal is to provide a general method to reconstruct 3D structures from 2D stacks, regardless of how these 2D stacks have been obtained. The algorithm not only reuses the information obtained in the 2D segmentation, but also attempts to correct some typical mistakes made by 2D segmentation algorithms (for example, under-segmentation of tightly coupled clusters of cells). We have tested our algorithm in a real scenario: the segmentation of neuronal nuclei in different layers of the rat cerebral cortex. Several representative images from different layers of the cerebral cortex have been considered and several 2D segmentation algorithms have been compared. Furthermore, the algorithm has also been compared with the traditional 3D watershed algorithm, and the results obtained here show better performance in terms of correctly identified neuronal nuclei.
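
    The basic idea of stitching per-slice 2D labels into 3D objects can be sketched as overlap-based linking between consecutive slices; the function below is a toy stand-in (the actual algorithm also corrects 2D under-segmentation, which is not shown), and the overlap threshold is an illustrative assumption.

```python
import numpy as np

def link_slices(label_stack, min_overlap=10):
    """label_stack: (Z, Y, X) integer array with labels independent per slice.
    Returns a volume in which overlapping 2D regions share a global 3D label."""
    out = np.zeros_like(label_stack)
    next_id = 1
    for z in range(label_stack.shape[0]):
        for lab in np.unique(label_stack[z]):
            if lab == 0:
                continue
            region = label_stack[z] == lab
            if z > 0:
                # Inherit the label of the most-overlapping object in the previous slice.
                prev_ids, counts = np.unique(out[z - 1][region], return_counts=True)
                prev_ids, counts = prev_ids[prev_ids > 0], counts[prev_ids > 0]
                if counts.size and counts.max() >= min_overlap:
                    out[z][region] = prev_ids[counts.argmax()]
                    continue
            out[z][region] = next_id
            next_id += 1
    return out
```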

  16. Automated 3D vascular segmentation in CT hepatic venography

    NASA Astrophysics Data System (ADS)

    Fetita, Catalin; Lucidarme, Olivier; Preteux, Francoise

    2005-08-01

    In the framework of preoperative evaluation of the hepatic venous anatomy in living-donor liver transplantation or oncologic resections, this paper proposes an automated approach for the 3D segmentation of the liver vascular structure from 3D CT hepatic venography data. The developed segmentation approach takes into account the specificities of anatomical structures in terms of spatial location, connectivity and morphometric properties. It implements basic and advanced morphological operators (closing, geodesic dilation, gray-level reconstruction, sup-constrained connection cost) in mono- and multi-resolution filtering schemes in order to achieve an automated 3D reconstruction of the opacified hepatic vessels. A thorough investigation of the venous anatomy, including morphometric parameter estimation, is then possible via computer-vision 3D rendering, interaction and navigation capabilities.

  17. Unsupervised fuzzy segmentation of 3D magnetic resonance brain images

    NASA Astrophysics Data System (ADS)

    Velthuizen, Robert P.; Hall, Lawrence O.; Clarke, Laurence P.; Bensaid, Amine M.; Arrington, J. A.; Silbiger, Martin L.

    1993-07-01

    Unsupervised fuzzy methods are proposed for segmentation of 3D Magnetic Resonance images of the brain. Fuzzy c-means (FCM) has shown promising results for segmentation of single slices. FCM has been investigated for volume segmentations, both by combining results of single slices and by segmenting the full volume. Different strategies and initializations have been tried. In particular, two approaches have been used: (1) a method by which, iteratively, the furthest sample is split off to form a new cluster center, and (2) the traditional FCM in which the membership grade matrix is initialized in some way. Results have been compared with volume segmentations by k-means and with two supervised methods, k-nearest neighbors and region growing. Results of individual segmentations are presented as well as comparisons on the application of the different methods to a number of tumor patient data sets.
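
    For reference, the standard fuzzy c-means (FCM) updates on voxel features can be written compactly in NumPy; the sketch below uses a random membership initialization and fixed iteration count, and does not reproduce the splitting-based initialization strategies discussed in the paper.

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """x: (N,) or (N, d) feature array; returns memberships U of shape (N, c) and the cluster centers."""
    x = x.reshape(len(x), -1).astype(np.float64)
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)                         # rows sum to one
    for _ in range(n_iter):
        w = u ** m                                            # fuzzified memberships
        centers = (w.T @ x) / w.sum(axis=0)[:, None]          # (c, d) weighted cluster means
        dist = np.linalg.norm(x[:, None, :] - centers[None], axis=2) + 1e-12
        u = 1.0 / (dist ** (2.0 / (m - 1.0)))                 # u_ic proportional to d_ic^(-2/(m-1))
        u /= u.sum(axis=1, keepdims=True)
    return u, centers
```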

  18. Midbrain segmentation in transcranial 3D ultrasound for Parkinson diagnosis.

    PubMed

    Ahmadi, Seyed-Ahmad; Baust, Maximilian; Karamalis, Athanasios; Plate, Annika; Boetzel, Kai; Klein, Tassilo; Navab, Nassir

    2011-01-01

    Ultrasound examination of the human brain through the temporal bone window, also called transcranial ultrasound (TC-US), is a completely non-invasive and cost-efficient technique, which has established itself for differential diagnosis of Parkinson's Disease (PD) in the past decade. The method requires spatial analysis of ultrasound hyperechogenicities produced by pathological changes within the Substantia Nigra (SN), which belongs to the basal ganglia within the midbrain. Related work on computer aided PD diagnosis shows the urgent need for an accurate and robust segmentation of the midbrain from 3D TC-US, which is an extremely difficult task due to poor image quality of TC-US. In contrast to 2D segmentations within earlier approaches, we develop the first method for semi-automatic midbrain segmentation from 3D TC-US and demonstrate its potential benefit on a database of 11 diagnosed Parkinson patients and 11 healthy controls.

  19. Volumetric CT-based segmentation of NSCLC using 3D-Slicer

    PubMed Central

    Velazquez, Emmanuel Rios; Parmar, Chintan; Jermoumi, Mohammed; Mak, Raymond H.; van Baardwijk, Angela; Fennessy, Fiona M.; Lewis, John H.; De Ruysscher, Dirk; Kikinis, Ron; Lambin, Philippe; Aerts, Hugo J. W. L.

    2013-01-01

    Accurate volumetric assessment in non-small cell lung cancer (NSCLC) is critical for adequately informing treatments. In this study we assessed the clinical relevance of a semiautomatic computed tomography (CT)-based segmentation method using a competitive region-growing based algorithm, implemented in the free and publicly available 3D-Slicer software platform. We compared the 3D-Slicer volumes segmented by three independent observers, who segmented the primary tumour of 20 NSCLC patients twice, to manual slice-by-slice delineations of five physicians. Furthermore, we compared all tumour contours to the macroscopic diameter of the tumour in pathology, considered as the “gold standard”. The 3D-Slicer segmented volumes demonstrated high agreement (overlap fractions > 0.90), lower volume variability (p = 0.0003) and smaller uncertainty areas (p = 0.0002), compared to manual slice-by-slice delineations. Furthermore, 3D-Slicer segmentations showed a strong correlation to pathology (r = 0.89, 95%CI, 0.81–0.94). Our results show that semiautomatic 3D-Slicer segmentations can be used for accurate contouring and are more stable than manual delineations. Therefore, 3D-Slicer can be employed as a starting point for treatment decisions or for high-throughput data mining research, such as Radiomics, where manual delineation often represents a time-consuming bottleneck. PMID:24346241

  1. Chest wall segmentation in automated 3D breast ultrasound scans.

    PubMed

    Tan, Tao; Platel, Bram; Mann, Ritse M; Huisman, Henkjan; Karssemeijer, Nico

    2013-12-01

    In this paper, we present an automatic method to segment the chest wall in automated 3D breast ultrasound images. Determining the location of the chest wall in automated 3D breast ultrasound images is necessary in computer-aided detection systems to remove automatically detected cancer candidates beyond the chest wall, and it can be of great help for inter- and intra-modal image registration. We show that the visible part of the chest wall in an automated 3D breast ultrasound image can be accurately modeled by a cylinder. We fit the surface of our cylinder model to a set of automatically detected rib-surface points. The detection of the rib-surface points is done by a classifier using features representing local image intensity patterns and presence of rib shadows. Due to attenuation of the ultrasound signal, a clear shadow is visible behind the ribs. Evaluation of our segmentation method is done by computing the distance of manually annotated rib points to the surface of the automatically detected chest wall. We examined the performance on images obtained with the two most common 3D breast ultrasound devices on the market. In a dataset of 142 images, the average mean distance of the annotated points to the segmented chest wall was 5.59 ± 3.08 mm.

  2. A high capacity 3D steganography algorithm.

    PubMed

    Chao, Min-Wen; Lin, Chao-hung; Yu, Cheng-Wei; Lee, Tong-Yee

    2009-01-01

    In this paper, we present a very high-capacity and low-distortion 3D steganography scheme. Our steganography approach is based on a novel multilayered embedding scheme to hide secret messages in the vertices of 3D polygon models. Experimental results show that the cover model distortion is very small as the number of hiding layers ranges from 7 to 13 layers. To the best of our knowledge, this novel approach can provide much higher hiding capacity than other state-of-the-art approaches, while obeying the low distortion and security basic requirements for steganography on 3D models.

  3. 3D Brain Segmentation Using Dual-Front Active Contours with Optional User Interaction

    PubMed Central

    Yezzi, Anthony; Cohen, Laurent D.

    2006-01-01

    Important attributes of 3D brain cortex segmentation algorithms include robustness, accuracy, computational efficiency, and facilitation of user interaction, yet few algorithms incorporate all of these traits. Manual segmentation is highly accurate but tedious and laborious. Most automatic techniques, while less demanding on the user, are much less accurate. It would be useful to employ a fast automatic segmentation procedure to do most of the work but still allow an expert user to interactively guide the segmentation to ensure an accurate final result. We propose a novel 3D brain cortex segmentation procedure utilizing dual-front active contours which minimize image-based energies in a manner that yields flexibly global minimizers based on active regions. Region-based information and boundary-based information may be combined flexibly in the evolution potentials for accurate segmentation results. The resulting scheme is not only more robust but much faster and allows the user to guide the final segmentation through simple mouse clicks which add extra seed points. Due to the flexibly global nature of the dual-front evolution model, single mouse clicks yield corrections to the segmentation that extend far beyond their initial locations, thus minimizing the user effort. Results on 15 simulated and 20 real 3D brain images demonstrate the robustness, accuracy, and speed of our scheme compared with other methods. PMID:23165037

  4. Multi-camera sensor system for 3D segmentation and localization of multiple mobile robots.

    PubMed

    Losada, Cristina; Mazo, Manuel; Palazuelos, Sira; Pizarro, Daniel; Marrón, Marta

    2010-01-01

    This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras are placed in fixed positions within the environment (intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of segmentation boundaries and depth, are repeated until convergence.

  5. 3D modeling of geological anomalies based on segmentation of multiattribute fusion

    NASA Astrophysics Data System (ADS)

    Liu, Zhi-Ning; Song, Cheng-Yun; Li, Zhi-Yong; Cai, Han-Peng; Yao, Xing-Miao; Hu, Guang-Min

    2016-09-01

    3D modeling of geological bodies based on 3D seismic data is used to define the shape and volume of the bodies, which can then be directly applied to reservoir prediction, reserve estimation, and exploration. However, multiattribute data are not used effectively in 3D modeling. To solve this problem, we propose a novel method for building 3D models of geological anomalies based on the segmentation of multiattribute fusion. First, we divide the seismic attributes into edge- and region-based seismic attributes. Then, the segmentation model incorporating the edge- and region-based models is constructed within a level-set-based framework. Finally, the marching cubes algorithm is adopted to extract the zero level set from the segmentation results and build the 3D model of the geological anomaly. By combining the edge- and region-based attributes to build the segmentation model, we satisfy the independence requirement and avoid the problem of a single seismic attribute providing insufficient data to capture the boundaries of geological anomalies. We apply the proposed method to seismic data from the Sichuan Basin in southwestern China and obtain 3D models of caves and channels. Compared with 3D models obtained from single seismic attributes, the results are in better agreement with reality.
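
    For the final step described above, extracting the zero level set of the converged segmentation function and turning it into a surface model, a minimal sketch using the marching cubes routine from scikit-image is given below; the variable phi and the input file name are hypothetical stand-ins for the authors' data.

        import numpy as np
        from skimage import measure

        # Hypothetical input: signed segmentation function on the seismic grid,
        # negative inside the geological anomaly and positive outside.
        phi = np.load("levelset_result.npy")
        verts, faces, normals, values = measure.marching_cubes(phi, level=0.0)
        print(f"surface mesh: {len(verts)} vertices, {len(faces)} triangles")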

  6. Segmentation and length measurement of the abdominal blood vessels in 3-D MRI images.

    PubMed

    Babin, Danilo; Vansteenkiste, Ewout; Pizurica, Aleksandra; Philips, Wilfried

    2009-01-01

    In diagnosing diseases and planning surgeries, the structure and length of blood vessels are of great importance. In this research we develop a novel method for the segmentation of 2-D and 3-D images, with an application to blood vessel length measurement in 3-D abdominal MRI images. Our approach is robust to noise and does not require contrast-enhanced images for segmentation. We use an effective algorithm for skeletonization, graph construction and shortest path estimation to measure the length of the blood vessels of interest.
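
    The length-measurement step described above can be illustrated with the following sketch: given an already-computed binary vessel skeleton, it treats the skeleton voxels as a 26-neighbourhood graph and measures the geodesic length between two end points with Dijkstra's algorithm. This is not the authors' implementation, and the voxel spacing is a placeholder.

        import heapq
        from itertools import product
        import numpy as np

        def vessel_length(skeleton, start, end, spacing=(1.0, 1.0, 1.0)):
            """skeleton: 3D boolean array; start/end: voxel index tuples."""
            offsets = [o for o in product((-1, 0, 1), repeat=3) if o != (0, 0, 0)]
            steps = {o: float(np.linalg.norm(np.array(o) * spacing)) for o in offsets}
            dist = {start: 0.0}
            heap = [(0.0, start)]
            while heap:
                d, v = heapq.heappop(heap)
                if v == end:
                    return d                      # physical length (e.g. in mm)
                if d > dist.get(v, np.inf):
                    continue
                for o in offsets:
                    n = (v[0] + o[0], v[1] + o[1], v[2] + o[2])
                    if all(0 <= n[i] < skeleton.shape[i] for i in range(3)) and skeleton[n]:
                        nd = d + steps[o]
                        if nd < dist.get(n, np.inf):
                            dist[n] = nd
                            heapq.heappush(heap, (nd, n))
            return None                           # end point not reachable on the skeleton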

  7. Dynamic deformable models for 3D MRI heart segmentation

    NASA Astrophysics Data System (ADS)

    Zhukov, Leonid; Bao, Zhaosheng; Gusikov, Igor; Wood, John; Breen, David E.

    2002-05-01

    Automated or semiautomated segmentation of medical images decreases interstudy variation, observer bias, and postprocessing time, as well as providing clinically relevant quantitative data. In this paper we present a new dynamic deformable modeling approach to 3D segmentation. It utilizes recently developed dynamic remeshing techniques and curvature estimation methods to produce high-quality meshes. The approach has been implemented in an interactive environment that allows a user to specify an initial model and identify key features in the data. These features act as hard constraints that the model must not pass through as it deforms. We have employed the method to perform semi-automatic segmentation of heart structures from cine MRI data.

  8. Automated 3D ultrasound image segmentation for assistant diagnosis of breast cancer

    NASA Astrophysics Data System (ADS)

    Wang, Yuxin; Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A.; Du, Sidan; Yuan, Jie; Wang, Xueding; Carson, Paul L.

    2016-04-01

    Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer.
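
    For reference, a minimal sketch of overlap-ratio and Dice-style metrics of the kind used above to compare automated and manual tissue volumes is given below; the exact definitions used by the authors are not stated in the abstract, so these formulas are assumptions.

        import numpy as np

        def overlap_ratio(auto_mask, manual_mask):
            """Intersection over union of two boolean 3D masks."""
            inter = np.logical_and(auto_mask, manual_mask).sum()
            union = np.logical_or(auto_mask, manual_mask).sum()
            return inter / union if union else 1.0

        def dice(auto_mask, manual_mask):
            """Dice similarity coefficient of two boolean 3D masks."""
            inter = np.logical_and(auto_mask, manual_mask).sum()
            total = auto_mask.sum() + manual_mask.sum()
            return 2.0 * inter / total if total else 1.0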

  9. Automated 3D ultrasound image segmentation to aid breast cancer image interpretation.

    PubMed

    Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A; Yuan, Jie; Wang, Xueding; Carson, Paul L

    2016-02-01

    Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer.

  10. Automated 3D Ultrasound Image Segmentation to Aid Breast Cancer Image Interpretation

    PubMed Central

    Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A.; Yuan, Jie; Wang, Xueding; Carson, Paul L.

    2015-01-01

    Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer. PMID:26547117

  11. Efficient segmentation of 3D fluoroscopic datasets from mobile C-arm

    NASA Astrophysics Data System (ADS)

    Styner, Martin A.; Talib, Haydar; Singh, Digvijay; Nolte, Lutz-Peter

    2004-05-01

    The emerging mobile fluoroscopic 3D technology linked with a navigation system combines the advantages of CT-based and C-arm-based navigation. The intra-operative, automatic segmentation of 3D fluoroscopy datasets enables the combined visualization of surgical instruments and anatomical structures for enhanced planning, surgical eye-navigation and landmark digitization. We performed a thorough evaluation of several segmentation algorithms using a large set of data from different anatomical regions and man-made phantom objects. The analyzed segmentation methods include automatic thresholding, morphological operations, an adapted region growing method and an implicit 3D geodesic snake method. In regard to computational efficiency, all methods performed within acceptable limits on a standard desktop PC (30 sec-5 min). In general, the best results were obtained with datasets from long bones, followed by extremities. The segmentations of spine, pelvis and shoulder datasets were generally of poorer quality. As expected, the threshold-based methods produced the worst results. The combined thresholding and morphological operations methods were considered appropriate for a smaller set of clean images. The region growing method performed generally much better in regard to computational efficiency and segmentation correctness, especially for datasets of joints, and lumbar and cervical spine regions. The less efficient implicit snake method was able to additionally remove wrongly segmented skin tissue regions. This study presents a step towards efficient intra-operative segmentation of 3D fluoroscopy datasets, but there is room for improvement. Next, we plan to study model-based approaches for datasets from the knee and hip joint region, which would then be applied to all anatomical regions in our continuing development of an ideal segmentation procedure for 3D fluoroscopic images.
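
    The simplest pipeline evaluated above, intensity thresholding followed by morphological clean-up and extraction of the largest connected component, can be sketched as follows with scipy.ndimage; the threshold value and the number of morphological iterations are placeholders, not values from the study.

        import numpy as np
        from scipy import ndimage

        def threshold_morphology_segment(volume, threshold=1200, iterations=2):
            mask = volume > threshold                                   # intensity threshold
            mask = ndimage.binary_opening(mask, iterations=iterations)  # remove small speckle
            mask = ndimage.binary_closing(mask, iterations=iterations)  # fill small gaps
            labels, n = ndimage.label(mask)                             # connected components
            if n == 0:
                return mask
            sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
            return labels == (np.argmax(sizes) + 1)                     # keep the largest component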

  12. An Automatic Registration Algorithm for 3D Maxillofacial Model

    NASA Astrophysics Data System (ADS)

    Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng

    2016-09-01

    3D image registration aims at aligning two 3D data sets in a common coordinate system, which has been widely used in computer vision, pattern recognition and computer assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for 3D maxillofacial model registration, including facial surface and skull models. Our proposed registration algorithm can achieve a good alignment between partial and whole maxillofacial models in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.
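
    Step (3) above, the ICP refinement, can be sketched as follows: nearest-neighbour correspondences from a k-d tree and a closed-form rigid update via SVD (the Kabsch solution). The coarse SAC-IA alignment is assumed to have been applied already; this is an illustrative sketch, not the authors' code.

        import numpy as np
        from scipy.spatial import cKDTree

        def icp_refine(source, target, iterations=30):
            """source, target: (N, 3) and (M, 3) point arrays; returns the aligned source."""
            tree = cKDTree(target)
            src = source.copy()
            for _ in range(iterations):
                _, idx = tree.query(src)                 # closest target point per source point
                tgt = target[idx]
                mu_s, mu_t = src.mean(axis=0), tgt.mean(axis=0)
                H = (src - mu_s).T @ (tgt - mu_t)        # cross-covariance of centred points
                U, _, Vt = np.linalg.svd(H)
                R = Vt.T @ U.T                           # optimal rotation (Kabsch)
                if np.linalg.det(R) < 0:                 # avoid a reflection
                    Vt[-1, :] *= -1
                    R = Vt.T @ U.T
                t = mu_t - R @ mu_s
                src = src @ R.T + t
            return src

    In practice the loop would also be stopped early once the mean correspondence distance stops decreasing.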

  13. Segmentation of whole cells and cell nuclei from 3-D optical microscope images using dynamic programming.

    PubMed

    McCullough, D P; Gudla, P R; Harris, B S; Collins, J A; Meaburn, K J; Nakaya, M A; Yamaguchi, T P; Misteli, T; Lockett, S J

    2008-05-01

    Communications between cells in large part drive tissue development and function, as well as disease-related processes such as tumorigenesis. Understanding the mechanistic bases of these processes necessitates quantifying specific molecules in adjacent cells or cell nuclei of intact tissue. However, a major restriction on such analyses is the lack of an efficient method that correctly segments each object (cell or nucleus) from 3-D images of an intact tissue specimen. We report a highly reliable and accurate semi-automatic algorithmic method for segmenting fluorescence-labeled cells or nuclei from 3-D tissue images. Segmentation begins with semi-automatic, 2-D object delineation in a user-selected plane, using dynamic programming (DP) to locate the border with an accumulated intensity per unit length greater than any other possible border around the same object. Then the two surfaces of the object in planes above and below the selected plane are found using an algorithm that combines DP and combinatorial searching. Following segmentation, any perceived errors can be interactively corrected. Segmentation accuracy is not significantly affected by intermittent labeling of object surfaces, diffuse surfaces, or spurious signals away from surfaces. The unique strength of the segmentation method was demonstrated on a variety of biological tissue samples where all cells, including irregularly shaped cells, were accurately segmented based on visual inspection.
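
    The dynamic-programming idea described above can be illustrated in simplified form: in an unwrapped (angle x radius) edge-strength image, find the radius per angle that maximises accumulated intensity while limiting radius jumps between neighbouring angles. This drops the authors' per-unit-length normalisation and the 3-D surface search, so it is only a sketch.

        import numpy as np

        def dp_border(cost, max_jump=2):
            """cost: (n_angles, n_radii) array; returns one radius index per angle."""
            n_a, n_r = cost.shape
            acc = np.full((n_a, n_r), -np.inf)        # accumulated intensity
            back = np.zeros((n_a, n_r), dtype=int)    # backpointers for path recovery
            acc[0] = cost[0]
            for a in range(1, n_a):
                for r in range(n_r):
                    lo, hi = max(0, r - max_jump), min(n_r, r + max_jump + 1)
                    prev = int(np.argmax(acc[a - 1, lo:hi])) + lo
                    acc[a, r] = acc[a - 1, prev] + cost[a, r]
                    back[a, r] = prev
            path = [int(np.argmax(acc[-1]))]
            for a in range(n_a - 1, 0, -1):           # trace the optimal border backwards
                path.append(back[a, path[-1]])
            return np.array(path[::-1])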

  14. Iterative Mesh Transformation for 3D Segmentation of Livers with Cancers in CT Images

    PubMed Central

    Lu, Difei; Wu, Yin; Harris, Gordon; Cai, Wenli

    2015-01-01

    Segmentation of diseased liver remains a challenging task in clinical applications due to the high inter-patient variability in liver shapes, sizes and pathologies caused by cancers or other liver diseases. In this paper, we present a multi-resolution mesh segmentation algorithm for 3D segmentation of livers, called iterative mesh transformation, that deforms the mesh of a region-of-interest (ROI) in a progressive manner by iterations between mesh transformation and contour optimization. Mesh transformation deforms the 3D mesh based on the deformation transfer model that searches the optimal mesh based on the affine transformation subjected to a set of constraints of targeting vertices. In addition, contour optimization searches the optimal transversal contours of the ROI by applying the dynamic-programming algorithm to the intersection polylines of the 3D mesh on 2D transversal image planes. The initial constraint set for mesh transformation can be defined by a very small number of targeting vertices, namely landmarks, and progressively updated by adding the targeting vertices selected from the optimal transversal contours calculated in contour optimization. This iterative 3D mesh transformation constrained by 2D optimal transversal contours provides an efficient solution to a progressive approximation of the mesh of the targeting ROI. Based on this iterative mesh transformation algorithm, we developed a semi-automated scheme for segmentation of diseased livers with cancers using as few as five user-identified landmarks. The evaluation study demonstrates that this semiautomated liver segmentation scheme can achieve accurate and reliable segmentation results with significant reduction of interaction time and efforts when dealing with diseased liver cases. PMID:25728595

  15. Iterative mesh transformation for 3D segmentation of livers with cancers in CT images.

    PubMed

    Lu, Difei; Wu, Yin; Harris, Gordon; Cai, Wenli

    2015-07-01

    Segmentation of diseased liver remains a challenging task in clinical applications due to the high inter-patient variability in liver shapes, sizes and pathologies caused by cancers or other liver diseases. In this paper, we present a multi-resolution mesh segmentation algorithm for 3D segmentation of livers, called iterative mesh transformation, that deforms the mesh of a region-of-interest (ROI) in a progressive manner by iterations between mesh transformation and contour optimization. Mesh transformation deforms the 3D mesh based on the deformation transfer model that searches the optimal mesh based on the affine transformation subjected to a set of constraints of targeting vertices. In addition, contour optimization searches the optimal transversal contours of the ROI by applying the dynamic-programming algorithm to the intersection polylines of the 3D mesh on 2D transversal image planes. The initial constraint set for mesh transformation can be defined by a very small number of targeting vertices, namely landmarks, and progressively updated by adding the targeting vertices selected from the optimal transversal contours calculated in contour optimization. This iterative 3D mesh transformation constrained by 2D optimal transversal contours provides an efficient solution to a progressive approximation of the mesh of the targeting ROI. Based on this iterative mesh transformation algorithm, we developed a semi-automated scheme for segmentation of diseased livers with cancers using as few as five user-identified landmarks. The evaluation study demonstrates that this semi-automated liver segmentation scheme can achieve accurate and reliable segmentation results with significant reduction of interaction time and efforts when dealing with diseased liver cases.

  16. Segmentation of bone structures in 3D CT images based on continuous max-flow optimization

    NASA Astrophysics Data System (ADS)

    Pérez-Carrasco, J. A.; Acha-Piñero, B.; Serrano, C.

    2015-03-01

    In this paper an algorithm to carry out the automatic segmentation of bone structures in 3D CT images has been implemented. Automatic segmentation of bone structures is of special interest for radiologists and surgeons to analyze bone diseases or to plan some surgical interventions. This task is very complicated as bones usually present intensities overlapping with those of surrounding tissues. This overlapping is mainly due to the composition of bones and to the presence of some diseases such as Osteoarthritis, Osteoporosis, etc. Moreover, segmentation of bone structures is a very time-consuming task due to the 3D essence of the bones. Usually, this segmentation is performed manually or with algorithms using simple techniques such as thresholding, thus providing poor results. In this paper gray-level information and 3D statistical information have been combined and used as input to a continuous max-flow algorithm. Twenty CT images have been tested and different coefficients have been computed to assess the performance of our implementation. Dice and sensitivity values above 0.91 and 0.97, respectively, were obtained. A comparison with level sets and thresholding techniques has been carried out and our results outperformed them in terms of accuracy.

  17. The iterative image foresting transform and its application to user-steered 3D segmentation

    NASA Astrophysics Data System (ADS)

    Falcao, Alexandre X.; Bergo, Felipe P. G.

    2003-05-01

    Segmentation and 3D visualization at interactive speeds are highly desirable for routine use in clinical settings. We address this problem in the framework of the image foresting transform (IFT) - a graph-based approach to the design of image processing operators. In this paper we introduce the iterative image foresting transform (IFT+), which computes sequences of IFTs in a differential way, present the general IFT+ algorithm, and instantiate it as a watershed transform. The IFT+-watershed transform is evaluated in the context of interactive segmentation, where the user makes corrections by adding/removing scene regions with mouse clicks. The IFT+-watershed requires time proportional to the number of voxels in the modified regions, while the conventional algorithm computes one watershed transform over the entire scene for each iteration. The IFT+-watershed is 5.75 times faster than the watershed and considerably reduces the user's waiting time in segmentation with 3D visualization, from 17.7 to 3.16 seconds. These results were obtained on a 1.5 GHz Pentium-IV PC over 10 MR scenes of the head, requiring 12 to 28 corrections to segment cerebellum, pons-medulla, ventricle, and the rest of the brain, simultaneously. These results indicate that the IFT+ is a significant contribution toward interactive segmentation and 3D visualization.

  18. Multiscale 3-D shape representation and segmentation using spherical wavelets.

    PubMed

    Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen

    2007-04-01

    This paper presents a novel multiscale shape representation and segmentation algorithm based on the spherical wavelet transform. This work is motivated by the need to compactly and accurately encode variations at multiple scales in the shape representation in order to drive the segmentation and shape analysis of deep brain structures, such as the caudate nucleus or the hippocampus. Our proposed shape representation can be optimized to compactly encode shape variations in a population at the needed scale and spatial locations, enabling the construction of more descriptive, nonglobal, nonuniform shape probability priors to be included in the segmentation and shape analysis framework. In particular, this representation addresses the shortcomings of techniques that learn a global shape prior at a single scale of analysis and cannot represent fine, local variations in a population of shapes in the presence of a limited dataset. Specifically, our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population. We further refine the shape representation by separating into groups wavelet coefficients that describe independent global and/or local biological variations in the population, using spectral graph partitioning. We then learn a prior probability distribution induced over each group to explicitly encode these variations at different scales and spatial locations. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior for segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to two different brain structures, the caudate nucleus and the hippocampus, of interest in the study of schizophrenia. We show: 1) a reconstruction task of a test set to validate the expressiveness of

  19. 3D segmentation of lung CT data with graph-cuts: analysis of parameter sensitivities

    NASA Astrophysics Data System (ADS)

    Cha, Jung won; Dunlap, Neal; Wang, Brian; Amini, Amir

    2016-03-01

    Lung boundary image segmentation is important for many tasks including, for example, development of radiation treatment plans for subjects with thoracic malignancies. In this paper, we describe a method and parameter settings for accurate 3D lung boundary segmentation based on graph-cuts from X-ray CT data. Even though several researchers have previously used graph-cuts for image segmentation, to date, no systematic studies have been performed regarding the range of parameters that give accurate results. The energy function in the graph-cuts algorithm requires three suitable parameter settings: K, a large constant for assigning seed points, c, the similarity coefficient for n-links, and λ, the terminal coefficient for t-links. We analyzed the parameter sensitivity with four lung data sets from subjects with lung cancer using error metrics. Large values of K created artifacts on segmented images, and a value of c much larger than λ upset the balance between the boundary term and the data term in the energy function, leading to unacceptable segmentation results. For a range of parameter settings, we performed 3D image segmentation, and in each case compared the results with the expert-delineated lung boundaries. We used simple 6-neighborhood systems for n-links in 3D. The 3D image segmentation took 10 minutes for a 512x512x118 ~ 512x512x190 lung CT image volume. Our results indicate that the graph-cuts algorithm was more sensitive to the K and λ parameter settings than to the c parameter and, furthermore, that amongst the range of parameters tested, K=5 and λ=0.5 yielded good results.
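
    To make the role of the three parameters concrete, the sketch below builds Boykov-Jolly style energy terms for a 3D volume: n-link weights that decay with the squared intensity difference of neighbouring voxels scaled by c, t-link weights scaled by λ, and a large constant K for hard seed points. These functional forms are common choices and only assumptions about the paper's exact definitions; the max-flow solver itself is omitted.

        import numpy as np

        def nlink_weights(volume, c=1.0):
            """Weights for the 6-neighbourhood along each axis of a 3D CT volume."""
            weights = []
            for axis in range(3):
                diff = np.diff(volume.astype(float), axis=axis)
                weights.append(np.exp(-c * diff ** 2))   # similar neighbours -> strong link
            return weights

        def tlink_weights(volume, fg_seeds, bg_seeds, lam=0.5, K=5.0):
            """Data term from simple Gaussian intensity models of the seed regions."""
            fg_mu, fg_sd = volume[fg_seeds].mean(), volume[fg_seeds].std() + 1e-6
            bg_mu, bg_sd = volume[bg_seeds].mean(), volume[bg_seeds].std() + 1e-6
            to_src = lam * ((volume - bg_mu) ** 2) / (2 * bg_sd ** 2)  # cost of labelling background
            to_snk = lam * ((volume - fg_mu) ** 2) / (2 * fg_sd ** 2)  # cost of labelling foreground
            to_src[fg_seeds], to_snk[fg_seeds] = K, 0.0                # hard foreground seeds
            to_src[bg_seeds], to_snk[bg_seeds] = 0.0, K                # hard background seeds
            return to_src, to_snk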

  20. MRI Slice Segmentation and 3D Modelling of Temporomandibular Joint Measured by Microscopic Coil

    NASA Astrophysics Data System (ADS)

    Smirg, O.; Liberda, O.; Smekal, Z.; Sprlakova-Pukova, A.

    2012-01-01

    The paper focuses on the segmentation of magnetic resonance imaging (MRI) slices and 3D modelling of the temporomandibular joint disc in order to help physicians diagnose patients with dysfunction of the temporomandibular joint (TMJ). The TMJ is one of the most complex joints in the human body. The most common joint dysfunction is due to the disc. The disc is a soft tissue, which in principle cannot be diagnosed by the CT method. Therefore, a 3D model is made from the MRI slices, which can image soft tissues. For the segmentation of the disc in individual slices a new method is developed based on spatial distribution and anatomical TMJ structure with automatic thresholding. The thresholding is controlled by a genetic algorithm. The 3D model is realized using the marching cube method.

  1. A 3-D liver segmentation method with parallel computing for selective internal radiation therapy.

    PubMed

    Goryawala, Mohammed; Guillen, Magno R; Cabrerizo, Mercedes; Barreto, Armando; Gulec, Seza; Barot, Tushar C; Suthar, Rekha R; Bhatt, Ruchir N; Mcgoron, Anthony; Adjouadi, Malek

    2012-01-01

    This study describes a new 3-D liver segmentation method in support of the selective internal radiation treatment as a treatment for liver tumors. This 3-D segmentation is based on coupling a modified k-means segmentation method with a special localized contouring algorithm. In the segmentation process, five separate regions are identified on the computerized tomography image frames. The merit of the proposed method lies in its potential to provide fast and accurate liver segmentation and 3-D rendering as well as in delineating tumor region(s), all with minimal user interaction. Leveraging of multicore platforms is shown to speed up the processing of medical images considerably, making this method more suitable in clinical settings. Experiments were performed to assess the effect of parallelization using up to 442 slices. Empirical results, using a single workstation, show a reduction in processing time from 4.5 h to almost 1 h for a 78% gain. Most important is the accuracy achieved in estimating the volumes of the liver and tumor region(s), yielding an average error of less than 2% in volume estimation over volumes generated on the basis of the current manually guided segmentation processes. Results were assessed using the analysis of variance statistical analysis.
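
    The clustering core of the method described above can be illustrated by a plain intensity k-means into five regions, processed slice by slice (the slice loop is where a multicore implementation would farm work out to parallel workers). The localized contouring step is omitted, and everything except the number of regions is an assumption.

        import numpy as np

        def kmeans_intensity(volume, k=5, iterations=20):
            """Cluster voxel intensities of a (Z, Y, X) CT volume into k regions."""
            data = volume.astype(float)
            centers = np.linspace(data.min(), data.max(), k)   # spread initial centres
            for _ in range(iterations):
                # Assignment is done slice by slice; each slice could be handed to a
                # separate worker process for the multicore speed-up noted above.
                labels = np.stack([np.argmin(np.abs(s[..., None] - centers), axis=-1)
                                   for s in data])
                centers = np.array([data[labels == j].mean() if np.any(labels == j)
                                    else centers[j] for j in range(k)])
            return labels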

  2. Automatic 3D kidney segmentation based on shape constrained GC-OAAM

    NASA Astrophysics Data System (ADS)

    Chen, Xinjian; Summers, Ronald M.; Yao, Jianhua

    2011-03-01

    The kidney can be classified into three main tissue types: renal cortex, renal medulla and renal pelvis (or collecting system). Dysfunction of different renal tissue types may cause different kidney diseases. Therefore, accurate and efficient segmentation of kidney into different tissue types plays a very important role in clinical research. In this paper, we propose an automatic 3D kidney segmentation method which segments the kidney into the three different tissue types: renal cortex, medulla and pelvis. The proposed method synergistically combines active appearance model (AAM), live wire (LW) and graph cut (GC) methods, GC-OAAM for short. Our method consists of two main steps. First, a pseudo 3D segmentation method is employed for kidney initialization in which the segmentation is performed slice-by-slice via a multi-object oriented active appearance model (OAAM) method. An improved iterative model refinement algorithm is proposed for the AAM optimization, which synergistically combines the AAM and LW method. Multi-object strategy is applied to help the object initialization. The 3D model constraints are applied to the initialization result. Second, the object shape information generated from the initialization step is integrated into the GC cost computation. A multi-label GC method is used to segment the kidney into cortex, medulla and pelvis. The proposed method was tested on 19 clinical arterial phase CT data sets. The preliminary results showed the feasibility and efficiency of the proposed method.

  3. Semi-automatic segmentation for 3D motion analysis of the tongue with dynamic MRI.

    PubMed

    Lee, Junghoon; Woo, Jonghye; Xing, Fangxu; Murano, Emi Z; Stone, Maureen; Prince, Jerry L

    2014-12-01

    Dynamic MRI has been widely used to track the motion of the tongue and measure its internal deformation during speech and swallowing. Accurate segmentation of the tongue is a prerequisite step to define the target boundary and constrain the tracking to tissue points within the tongue. Segmentation of 2D slices or 3D volumes is challenging because of the large number of slices and time frames involved in the segmentation, as well as the incorporation of numerous local deformations that occur throughout the tongue during motion. In this paper, we propose a semi-automatic approach to segment 3D dynamic MRI of the tongue. The algorithm steps include seeding a few slices at one time frame, propagating seeds to the same slices at different time frames using deformable registration, and random walker segmentation based on these seed positions. This method was validated on the tongue of five normal subjects carrying out the same speech task with multi-slice 2D dynamic cine-MR images obtained at three orthogonal orientations and 26 time frames. The resulting semi-automatic segmentations of a total of 130 volumes showed an average dice similarity coefficient (DSC) score of 0.92 with less segmented volume variability between time frames than in manual segmentations.
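
    The final step of the pipeline above, random walker segmentation of one time frame from propagated seeds, can be sketched with the scikit-image implementation as shown below; the input file names, the label convention and the beta value are placeholders.

        import numpy as np
        from skimage.segmentation import random_walker

        # Hypothetical inputs: one cine-MR volume and an integer seed volume of the
        # same shape, with 1 = tongue seeds, 2 = background seeds, 0 = unlabelled.
        cine_frame = np.load("frame_t.npy")
        seeds = np.load("seeds_t.npy")

        tongue_labels = random_walker(cine_frame, seeds, beta=130)
        tongue_mask = tongue_labels == 1          # final tongue segmentation for this frame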

  4. Correlation-based discrimination between cardiac tissue and blood for segmentation of 3D echocardiographic images

    NASA Astrophysics Data System (ADS)

    Saris, Anne E. C. M.; Nillesen, Maartje M.; Lopata, Richard G. P.; de Korte, Chris L.

    2013-03-01

    Automated segmentation of 3D echocardiographic images in patients with congenital heart disease is challenging, because the boundary between blood and cardiac tissue is poorly defined in some regions. Cardiologists mentally incorporate movement of the heart, using temporal coherence of structures to resolve ambiguities. Therefore, we investigated the merit of temporal cross-correlation for automated segmentation over the entire cardiac cycle. Optimal settings for maximum cross-correlation (MCC) calculation, based on a 3D cross-correlation based displacement estimation algorithm, were determined to obtain the best contrast between blood and myocardial tissue over the entire cardiac cycle. The resulting envelope-based as well as RF-based MCC values were used as an additional external force in a deformable model approach, to segment the left-ventricular cavity in the entire systolic phase. MCC values were tested against, and combined with, adaptively filtered, demodulated RF-data. Segmentation results were compared with manually segmented volumes using a 3D Dice Similarity Index (3DSI). Results in 3D pediatric echocardiographic image sequences (n = 4) demonstrate that incorporation of temporal information improves segmentation. The use of MCC values, either alone or in combination with adaptively filtered, demodulated RF-data, resulted in an increase of the 3DSI in 75% of the cases (average 3DSI increase: 0.71 to 0.82). Results might be further improved by optimizing MCC-contrast locally, in regions with low blood-tissue contrast. Reducing the underestimation of the endocardial volume due to the MCC processing scheme (choice of window size) and the consequent border misalignment could also lead to more accurate segmentations. Furthermore, increasing the frame rate will also increase MCC-contrast and thus improve segmentation.

  5. Initialisation of 3D level set for hippocampus segmentation from volumetric brain MR images

    NASA Astrophysics Data System (ADS)

    Hajiesmaeili, Maryam; Dehmeshki, Jamshid; Bagheri Nakhjavanlo, Bashir; Ellis, Tim

    2014-04-01

    Shrinkage of the hippocampus is a primary biomarker for Alzheimer's disease and can be measured through accurate segmentation of brain MR images. The paper will describe the problem of initialisation of a 3D level set algorithm for hippocampus segmentation that must cope with some challenging characteristics, such as small size, wide range of intensities, narrow width, and shape variation. In addition, MR images require bias correction, to account for additional inhomogeneity associated with the scanner technology. Due to these inhomogeneities, using a single initialisation seed region inside the hippocampus is prone to failure. Alternative initialisation strategies are explored, such as using multiple initialisations in different sections (such as the head, body and tail) of the hippocampus. The Dice metric is used to validate our segmentation results with respect to ground truth for a dataset of 25 MR images. Experimental results indicate significant improvement in segmentation performance using the multiple-initialisation technique, yielding more accurate segmentation results for the hippocampus.

  6. Graph-based segmentation for RGB-D data using 3-D geometry enhanced superpixels.

    PubMed

    Yang, Jingyu; Gan, Ziqiao; Li, Kun; Hou, Chunping

    2015-05-01

    With the advances of depth sensing technologies, color image plus depth information (referred to as RGB-D data hereafter) is more and more popular for comprehensive description of 3-D scenes. This paper proposes a two-stage segmentation method for RGB-D data: 1) oversegmentation by 3-D geometry enhanced superpixels and 2) graph-based merging with label cost from superpixels. In the oversegmentation stage, 3-D geometrical information is reconstructed from the depth map. Then, a K-means-like clustering method is applied to the RGB-D data for oversegmentation using an 8-D distance metric constructed from both color and 3-D geometrical information. In the merging stage, treating each superpixel as a node, a graph-based model is set up to relabel the superpixels into semantically-coherent segments. In the graph-based model, RGB-D proximity, texture similarity, and boundary continuity are incorporated into the smoothness term to exploit the correlations of neighboring superpixels. To obtain a compact labeling, the label term is designed to penalize labels linking to similar superpixels that likely belong to the same object. Both the proposed 3-D geometry enhanced superpixel clustering method and the graph-based merging method from superpixels are evaluated by qualitative and quantitative results. By the fusion of color and depth information, the proposed method achieves superior segmentation performance over several state-of-the-art algorithms.

  7. Ultrafast superpixel segmentation of large 3D medical datasets

    NASA Astrophysics Data System (ADS)

    Leblond, Antoine; Kauffmann, Claude

    2016-03-01

    Even with recent hardware improvements, superpixel segmentation of large 3D medical images at interactive speed (<500 ms) remains a challenge. We will describe methods to achieve such performance using a GPU based hybrid framework implementing wavefront propagation and cellular automata resolution. Tasks will be scheduled in blocks (work units) using a wavefront propagation strategy, therefore allowing sparse scheduling. Because work units are designed to be spatially cohesive, the fast Thread Group Shared Memory can be used and reused through a Gauss-Seidel-like acceleration. The work unit partitioning scheme will however vary on odd- and even-numbered iterations to reduce convergence barriers. Synchronization will be ensured by an 8-step 3D variant of the traditional Red Black Ordering scheme. An attack model and early termination will also be described and implemented as additional acceleration techniques. Using our hybrid framework and typical operating parameters, we were able to compute the superpixels of a high-resolution 512x512x512 aortic angioCT scan in 283 ms using an AMD R9 290X GPU. We achieved a 22.3X speed-up factor compared to the published reference GPU implementation.

  8. Automated Reconstruction Algorithm for Identification of 3D Architectures of Cribriform Ductal Carcinoma In Situ

    PubMed Central

    Norton, Kerri-Ann; Namazi, Sameera; Barnard, Nicola; Fujibayashi, Mariko; Bhanot, Gyan; Ganesan, Shridar; Iyatomi, Hitoshi; Ogawa, Koichi; Shinbrot, Troy

    2012-01-01

    Ductal carcinoma in situ (DCIS) is a pre-invasive carcinoma of the breast that exhibits several distinct morphologies but the link between morphology and patient outcome is not clear. We hypothesize that different mechanisms of growth may still result in similar 2D morphologies, which may look different in 3D. To elucidate the connection between growth and 3D morphology, we reconstruct the 3D architecture of cribriform DCIS from resected patient material. We produce a fully automated algorithm that aligns, segments, and reconstructs 3D architectures from microscopy images of 2D serial sections from human specimens. The alignment algorithm is based on normalized cross correlation; the segmentation algorithm uses histogram equalization, Otsu's thresholding, and morphology techniques to segment the duct and cribra. The reconstruction method combines these images in 3D. We show that two distinct 3D architectures are indeed found in samples whose 2D histological sections are similarly identified as cribriform DCIS. These differences in architecture support the hypothesis that luminal spaces may form due to different mechanisms, either isolated cell death or merging fronds, leading to the different architectures. We find that out of 15 samples, 6 were found to have ‘bubble-like’ cribra, 6 were found to have ‘tube-like’ cribra and 3 were ‘unknown.’ We propose that the 3D architectures found, ‘bubbles’ and ‘tubes’, account for some of the heterogeneity of the disease and may be prognostic indicators of different patient outcomes. PMID:22970156
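
    Two of the building blocks named above, alignment of consecutive serial sections by cross-correlation and duct segmentation with Otsu's threshold, can be sketched as follows. The FFT-based correlation and the shift convention are simplifications of the authors' normalized cross-correlation step, not their code.

        import numpy as np
        from scipy.signal import correlate
        from skimage.filters import threshold_otsu

        def estimate_shift(fixed, moving):
            """Estimate the in-plane translation (dy, dx) between two adjacent sections."""
            f = fixed.astype(float) - fixed.mean()
            m = moving.astype(float) - moving.mean()
            xc = correlate(f, m, mode="same", method="fft")   # cross-correlation surface
            peak = np.unravel_index(np.argmax(xc), xc.shape)  # location of best match
            center = np.array(xc.shape) // 2
            return np.array(peak) - center                    # offset of the peak from the centre

        def duct_mask(section):
            """Foreground mask of the duct using Otsu's global threshold."""
            return section > threshold_otsu(section)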

  9. 3D Materials image segmentation by 2D propagation: a graph-cut approach considering homomorphism.

    PubMed

    Waggoner, Jarrell; Zhou, Youjie; Simmons, Jeff; De Graef, Marc; Wang, Song

    2013-12-01

    Segmentation propagation, similar to tracking, is the problem of transferring a segmentation of an image to a neighboring image in a sequence. This problem is of particular importance to materials science, where the accurate segmentation of a series of 2D serial-sectioned images of multiple, contiguous 3D structures has important applications. Such structures may have distinct shape, appearance, and topology, which can be considered to improve segmentation accuracy. For example, some materials images may have structures with a specific shape or appearance in each serial section slice, which only changes minimally from slice to slice, and some materials may exhibit specific inter-structure topology that constrains their neighboring relations. Some of these properties have been individually incorporated to segment specific materials images in prior work. In this paper, we develop a propagation framework for materials image segmentation where each propagation is formulated as an optimal labeling problem that can be efficiently solved using the graph-cut algorithm. Our framework makes three key contributions: 1) a homomorphic propagation approach, which considers the consistency of region adjacency in the propagation; 2) incorporation of shape and appearance consistency in the propagation; and 3) a local non-homomorphism strategy to handle newly appearing and disappearing substructures during this propagation. To show the effectiveness of our framework, we conduct experiments on various 3D materials images, and compare the performance against several existing image segmentation methods.

  10. 3-D segmentation of articular cartilages by graph cuts using knee MR images from osteoarthritis initiative

    NASA Astrophysics Data System (ADS)

    Shim, Hackjoon; Lee, Soochan; Kim, Bohyeong; Tao, Cheng; Chang, Samuel; Yun, Il Dong; Lee, Sang Uk; Kwoh, Kent; Bae, Kyongtae

    2008-03-01

    Knee osteoarthritis is the most common debilitating health condition affecting the elderly population. MR imaging of the knee is highly sensitive for diagnosis and evaluation of the extent of knee osteoarthritis. Quantitative analysis of the progression of osteoarthritis is commonly based on segmentation and measurement of articular cartilage from knee MR images. Segmentation of the knee articular cartilage, however, is extremely laborious and technically demanding, because the cartilage is of complex geometry and thin and small in size. To improve precision and efficiency of the segmentation of the cartilage, we have applied a semi-automated segmentation method that is based on an s/t graph cut algorithm. The cost function was defined integrating regional and boundary cues. While regional cues can encode any intensity distributions of two regions, "object" (cartilage) and "background" (the rest), boundary cues are based on the intensity differences between neighboring pixels. For three-dimensional (3-D) segmentation, hard constraints are also specified in 3-D way facilitating user interaction. When our proposed semi-automated method was tested on clinical patients' MR images (160 slices, 0.7 mm slice thickness), a considerable amount of segmentation time was saved with improved efficiency, compared to a manual segmentation approach.

  11. Sloped Terrain Segmentation for Autonomous Drive Using Sparse 3D Point Cloud

    PubMed Central

    Cho, Seoungjae; Kim, Jonghyun; Ikram, Warda; Cho, Kyungeun; Sim, Sungdae

    2014-01-01

    A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach to minimize the comparison between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame. PMID:25093204
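
    A much-simplified sketch of the voxelisation and lowermost-heightmap idea described above is given below: LiDAR points are quantised into voxels, the lowest occupied height is kept per (x, y) column, and a point is called ground if it lies close to that height. The voxel size and height tolerance are placeholders, and the voxel-group counting and neighbour-comparison optimisations from the paper are omitted.

        import numpy as np

        def segment_ground(points, voxel=0.3, tol=0.25):
            """points: (N, 3) LiDAR points in metres; returns a boolean ground mask per point."""
            idx = np.floor(points / voxel).astype(int)          # voxel index of each point
            idx -= idx.min(axis=0)                              # shift indices to be non-negative
            nx, ny = idx[:, 0].max() + 1, idx[:, 1].max() + 1
            heightmap = np.full((nx, ny), np.inf)
            # Lowermost heightmap: lowest z value observed in each (x, y) column.
            np.minimum.at(heightmap, (idx[:, 0], idx[:, 1]), points[:, 2])
            ground_height = heightmap[idx[:, 0], idx[:, 1]]
            return points[:, 2] - ground_height < tol           # points near the lowest height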

  12. A universal approach for automatic organ segmentations on 3D CT images based on organ localization and 3D GrabCut

    NASA Astrophysics Data System (ADS)

    Zhou, Xiangrong; Ito, Takaaki; Zhou, Xinxin; Chen, Huayue; Hara, Takeshi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Hoshi, Hiroaki; Fujita, Hiroshi

    2014-03-01

    This paper describes a universal approach to automatic segmentation of different internal organ and tissue regions in three-dimensional (3D) computerized tomography (CT) scans. The proposed approach combines object localization, a probabilistic atlas, and 3D GrabCut techniques to achieve automatic and quick segmentation. The proposed method first detects a tight 3D bounding box that contains the target organ region in CT images and then estimates the prior of each pixel inside the bounding box belonging to the organ region or background based on a dynamically generated probabilistic atlas. Finally, the target organ region is separated from the background by using an improved 3D GrabCut algorithm. A machine-learning method is used to train a detector to localize the 3D bounding box of the target organ using template matching on a selected feature space. A content-based image retrieval method is used for online generation of a patient-specific probabilistic atlas for the target organ based on a database. A 3D GrabCut algorithm is used for final organ segmentation by iteratively estimating the CT number distributions of the target organ and backgrounds using a graph-cuts algorithm. We applied this approach to localize and segment twelve major organ and tissue regions independently based on a database that includes 1300 torso CT scans. In our experiments, we randomly selected numerous CT scans and manually input nine principal types of inner organ regions for performance evaluation. Preliminary results showed the feasibility and efficiency of the proposed approach for addressing automatic organ segmentation issues on CT images.

  13. Segmentation of Skin Tumors in High-Frequency 3-D Ultrasound Images.

    PubMed

    Sciolla, Bruno; Cowell, Lester; Dambry, Thibaut; Guibert, Benoît; Delachartre, Philippe

    2017-01-01

    High-frequency 3-D ultrasound imaging is an informative tool for diagnosis, surgery planning and skin lesion examination. The purpose of this article was to describe a semi-automated segmentation tool providing easy access to the extent, shape and volume of a lesion. We propose an adaptive log-likelihood level-set segmentation procedure using non-parametric estimates of the intensity distribution. The algorithm has a single parameter to control the smoothness of the contour, and we describe how a fixed value yields satisfactory segmentation results with an average Dice coefficient of D = 0.76. The algorithm is implemented on a grid, which increases the speed by a factor of 100 compared with a standard pixelwise segmentation. We compare the method with parametric methods making the hypothesis of Rayleigh or Nakagami distributed signals, and illustrate that our method has greater robustness with similar computational speed. Benchmarks are made on realistic synthetic ultrasound images and a data set of nine clinical 3-D images acquired with a 50-MHz imaging system. The proposed algorithm is suitable for use in a clinical context as a post-processing tool.

  14. Segmentation of the common carotid artery with active shape models from 3D ultrasound images

    NASA Astrophysics Data System (ADS)

    Yang, Xin; Jin, Jiaoying; He, Wanji; Yuchi, Ming; Ding, Mingyue

    2012-03-01

    Carotid atherosclerosis is a major cause of stroke, a leading cause of death and disability. In this paper, we develop and evaluate a new segmentation method for outlining both lumen and adventitia (inner and outer walls) of the common carotid artery (CCA) from three-dimensional ultrasound (3D US) images for carotid atherosclerosis diagnosis and evaluation. The data set consists of sixty-eight 3D US volumes (17 patients × 2 arteries × 2 time points) acquired from the left and right carotid arteries of seventeen patients (eight treated with 80 mg atorvastatin and nine with placebo), who had carotid stenosis of 60% or more, at baseline and after three months of treatment. We investigate the use of Active Shape Models (ASMs) to segment CCA inner and outer walls after statin therapy. The proposed method was evaluated with respect to expert manually outlined boundaries as a surrogate for ground truth. For the lumen and adventitia segmentations, respectively, the algorithm yielded Dice Similarity Coefficients (DSC) of 93.6% +/- 2.6% and 91.8% +/- 3.5%, mean absolute distances (MAD) of 0.28 +/- 0.17 mm and 0.34 +/- 0.19 mm, and maximum absolute distances (MAXD) of 0.87 +/- 0.37 mm and 0.74 +/- 0.49 mm. The proposed algorithm took 4.4 +/- 0.6 min to segment a single 3D US image, compared to 11.7 +/- 1.2 min for manual segmentation. Therefore, the method would promote the translation of carotid 3D US to clinical care for the fast, safe and economical monitoring of atherosclerotic disease progression and regression during therapy.

  15. A spherical harmonics intensity model for 3D segmentation and 3D shape analysis of heterochromatin foci.

    PubMed

    Eck, Simon; Wörz, Stefan; Müller-Ott, Katharina; Hahn, Matthias; Biesdorf, Andreas; Schotta, Gunnar; Rippe, Karsten; Rohr, Karl

    2016-08-01

    The genome is partitioned into regions of euchromatin and heterochromatin. The organization of heterochromatin is important for the regulation of cellular processes such as chromosome segregation and gene silencing, and their misregulation is linked to cancer and other diseases. We present a model-based approach for automatic 3D segmentation and 3D shape analysis of heterochromatin foci from 3D confocal light microscopy images. Our approach employs a novel 3D intensity model based on spherical harmonics, which analytically describes the shape and intensities of the foci. The model parameters are determined by fitting the model to the image intensities using least-squares minimization. To characterize the 3D shape of the foci, we exploit the computed spherical harmonics coefficients and determine a shape descriptor. We applied our approach to 3D synthetic image data as well as real 3D static and real 3D time-lapse microscopy images, and compared the performance with that of previous approaches. It turned out that our approach yields accurate 3D segmentation results and performs better than previous approaches. We also show that our approach can be used for quantifying 3D shape differences of heterochromatin foci.

  16. Demonstration of a 3D vision algorithm for space applications

    NASA Technical Reports Server (NTRS)

    Defigueiredo, Rui J. P. (Editor)

    1987-01-01

    This paper reports an extension of the MIAG algorithm for recognition and motion parameter determination of general 3-D polyhedral objects based on model matching techniques and using movement invariants as features of object representation. Results of tests conducted on the algorithm under conditions simulating space conditions are presented.

  17. 3D statistical shape models incorporating 3D random forest regression voting for robust CT liver segmentation

    NASA Astrophysics Data System (ADS)

    Norajitra, Tobias; Meinzer, Hans-Peter; Maier-Hein, Klaus H.

    2015-03-01

    During image segmentation, 3D Statistical Shape Models (SSM) usually conduct a limited search for target landmarks within one-dimensional search profiles perpendicular to the model surface. In addition, landmark appearance is modeled only locally based on linear profiles and weak learners, altogether leading to segmentation errors from landmark ambiguities and limited search coverage. We present a new method for 3D SSM segmentation based on 3D Random Forest Regression Voting. For each surface landmark, a Random Regression Forest is trained that learns a 3D spatial displacement function between the according reference landmark and a set of surrounding sample points, based on an infinite set of non-local randomized 3D Haar-like features. Landmark search is then conducted omni-directionally within 3D search spaces, where voxelwise forest predictions on landmark position contribute to a common voting map which reflects the overall position estimate. Segmentation experiments were conducted on a set of 45 CT volumes of the human liver, of which 40 images were randomly chosen for training and 5 for testing. Without parameter optimization, using a simple candidate selection and a single resolution approach, excellent results were achieved, while faster convergence and better concavity segmentation were observed, altogether underlining the potential of our approach in terms of increased robustness from distinct landmark detection and from better search coverage.
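
    The regression-voting idea described above can be sketched for a single landmark as follows: a random forest learns the 3D displacement from a sample point to the landmark, and at test time the predicted displacements are accumulated into a voting map whose maximum gives the landmark estimate. Feature extraction (the 3D Haar-like features) is left abstract, and scikit-learn and all variable names are assumptions rather than the authors' implementation.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        def train_landmark_forest(train_features, train_displacements):
            """train_displacements: (N, 3) offsets from each sample point to the landmark."""
            forest = RandomForestRegressor(n_estimators=50)
            forest.fit(train_features, train_displacements)
            return forest

        def vote_landmark(forest, sample_points, sample_features, volume_shape):
            """Accumulate per-sample landmark predictions into a voting map."""
            votes = np.zeros(volume_shape)
            predicted = sample_points + forest.predict(sample_features)  # landmark estimates
            for p in np.round(predicted).astype(int):
                if np.all(p >= 0) and np.all(p < volume_shape):
                    votes[tuple(p)] += 1                                  # one vote per sample
            return np.unravel_index(np.argmax(votes), volume_shape)       # most-voted voxel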

  18. Segmentation of vascular structures and hematopoietic cells in 3D microscopy images and quantitative analysis

    NASA Astrophysics Data System (ADS)

    Mu, Jian; Yang, Lin; Kamocka, Malgorzata M.; Zollman, Amy L.; Carlesso, Nadia; Chen, Danny Z.

    2015-03-01

    In this paper, we present image processing methods for quantitative study of how the bone marrow microenvironment changes (characterized by altered vascular structure and hematopoietic cell distribution) caused by diseases or various factors. We develop algorithms that automatically segment vascular structures and hematopoietic cells in 3-D microscopy images, perform quantitative analysis of the properties of the segmented vascular structures and cells, and examine how such properties change. In processing images, we apply local thresholding to segment vessels, and add post-processing steps to deal with imaging artifacts. We propose an improved watershed algorithm that relies on both intensity and shape information and can separate multiple overlapping cells better than common watershed methods. We then quantitatively compute various features of the vascular structures and hematopoietic cells, such as the branches and sizes of vessels and the distribution of cells. In analyzing vascular properties, we provide algorithms for pruning fake vessel segments and branches based on vessel skeletons. Our algorithms can segment vascular structures and hematopoietic cells with good quality. We use our methods to quantitatively examine the changes in the bone marrow microenvironment caused by the deletion of Notch pathway. Our quantitative analysis reveals property changes in samples with deleted Notch pathway. Our tool is useful for biologists to quantitatively measure changes in the bone marrow microenvironment, for developing possible therapeutic strategies to help the bone marrow microenvironment recovery.

  19. Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras

    PubMed Central

    Morris, Mark; Sellers, William I.

    2015-01-01

    Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include regression models, which have limited accuracy; geometric models with lengthy measuring procedures; or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras and 3D point cloud data generated using structure from motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in the literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints. PMID:25780778
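
    As a concrete illustration of the convex-hull step, the sketch below estimates mass, centre of mass and the inertia tensor of one segment from its hulled point cloud, assuming a uniform tissue density; the density value, the Monte-Carlo sampling and all names are assumptions, not the authors' implementation.

```python
# Minimal sketch: body segment parameters from the convex hull of a point cloud,
# assuming uniform density (value illustrative).
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

def segment_parameters(points_xyz, density=1050.0):
    """points_xyz: (N, 3) points of one segment in metres.
    Returns mass (kg), centre of mass (m) and inertia tensor (kg m^2)."""
    hull = ConvexHull(points_xyz)
    mass = density * hull.volume
    # Monte-Carlo estimate of COM and inertia by sampling points inside the hull
    tri = Delaunay(points_xyz[hull.vertices])
    lo, hi = points_xyz.min(axis=0), points_xyz.max(axis=0)
    samples = np.random.default_rng(0).uniform(lo, hi, size=(200_000, 3))
    inside = samples[tri.find_simplex(samples) >= 0]
    com = inside.mean(axis=0)
    r = inside - com
    S = (mass / len(inside)) * np.einsum('ni,nj->ij', r, r)  # second moments
    inertia = np.trace(S) * np.eye(3) - S
    return mass, com, inertia
```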

  20. Quasi-3D Algorithm in Multi-scale Modeling Framework

    NASA Astrophysics Data System (ADS)

    Jung, J.; Arakawa, A.

    2008-12-01

    As discussed in the companion paper by Arakawa and Jung, the Quasi-3D (Q3D) Multi-scale Modeling Framework (MMF) is a 4D estimation/prediction framework that combines a GCM with a 3D anelastic vector vorticity equation model (VVM) applied to a Q3D network of horizontal grid points. This paper presents an outline of the recently revised Q3D algorithm and a highlight of the results obtained by application of the algorithm to an idealized model setting. The Q3D network of grid points consists of two sets of grid-point arrays perpendicular to each other. For a scalar variable, for example, each set consists of three parallel rows of grid points. Principal and supplementary predictions are made on the central and the two adjacent rows, respectively. The supplementary prediction is to allow the principal prediction to be three-dimensional to at least second-order accuracy. To accommodate a higher-order accuracy and to make the supplementary predictions formally three-dimensional, a few rows of ghost points are added at each side of the array. Values at these ghost points are diagnostically determined by a combination of statistical estimation and extrapolation. The basic structure of the estimation algorithm is determined in view of the global stability of Q3D advection. The algorithm is calibrated using the statistics of past data at and near the intersections of the two sets of grid-point arrays. Since the CRM in the Q3D MMF extends beyond individual GCM boxes, the CRM can be a GCM by itself. However, it is better to couple the CRM with the GCM because (1) the CRM is a Q3D CRM based on a highly anisotropic network of grid points and (2) coupling with a GCM makes it more straightforward to inherit our experience with the conventional GCMs. In the coupled system we have selected, prediction of thermodynamic variables is almost entirely done by the Q3D CRM with no direct forcing by the GCM. The coupling of the dynamics between the two components is through mutual

  1. Three dimensional level set based semiautomatic segmentation of atherosclerotic carotid artery wall volume using 3D ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Hossain, Md. Murad; AlMuhanna, Khalid; Zhao, Limin; Lal, Brajesh K.; Sikdar, Siddhartha

    2014-03-01

    3D segmentation of carotid plaque from ultrasound (US) images is challenging due to image artifacts and poor boundary definition. Semiautomatic segmentation algorithms for calculating vessel wall volume (VWV) have been proposed for the common carotid artery (CCA) but they have not been applied to plaques in the internal carotid artery (ICA). In this work, we describe a 3D segmentation algorithm that is robust to shadowing and missing boundaries. Our algorithm uses a distance regularized level set method with edge- and region-based energy to segment the adventitial wall boundary (AWB) and lumen-intima boundary (LIB) of plaques in the CCA, ICA and external carotid artery (ECA). The algorithm is initialized by manually placing points on the boundary of a subset of transverse slices with an interslice distance of 4 mm. We propose a novel user-defined stopping-surface-based energy to prevent leaking of the evolving surface across poorly defined boundaries. Validation was performed against manual segmentation using 3D US volumes acquired from five asymptomatic patients with carotid stenosis using a linear 4D probe. A pseudo gold-standard boundary was formed from manual segmentation by three observers. The Dice similarity coefficient (DSC), Hausdorff distance (HD) and modified HD (MHD) were used to compare the algorithm results against the pseudo gold-standard on 1205 cross-sectional slices of 5 3D US image sets. The algorithm showed good agreement with the pseudo gold standard boundary with mean DSC of 93.3% (AWB) and 89.82% (LIB); mean MHD of 0.34 mm (AWB) and 0.24 mm (LIB); mean HD of 1.27 mm (AWB) and 0.72 mm (LIB). The proposed 3D semiautomatic segmentation is the first step towards full characterization of 3D plaque progression and longitudinal monitoring.

  2. Deep Learning Segmentation of Optical Microscopy Images Improves 3D Neuron Reconstruction.

    PubMed

    Li, Rongjian; Zeng, Tao; Peng, Hanchuan; Ji, Shuiwang

    2017-03-08

    Digital reconstruction, or tracing, of 3-dimensional (3D) neuron structure from microscopy images is a critical step toward reverse engineering the wiring and anatomy of a brain. Despite a number of prior attempts, this task remains very challenging, especially when images are contaminated by noise or have discontinuous segments of neurite patterns. An approach for addressing such problems is to identify the locations of neuronal voxels using image segmentation methods prior to applying tracing or reconstruction techniques. This preprocessing step is expected to remove noise in the data, thereby leading to improved reconstruction results. In this work, we propose to use 3D convolutional neural networks (CNNs) for segmenting the neuronal microscopy images. Specifically, we designed a novel CNN architecture that takes volumetric images as the inputs and their voxel-wise segmentation maps as the outputs. The developed architecture allows us to train and predict using large microscopy images in an end-to-end manner. We evaluated the performance of our model on a variety of challenging 3D microscopy images from different organisms. Results showed that the proposed methods improved the tracing performance significantly when combined with different reconstruction algorithms.
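
    A minimal end-to-end voxel-wise 3D CNN can be sketched in PyTorch as below; the architecture, input shapes and hyperparameters are illustrative and not the network proposed in the paper.

```python
# Minimal sketch of a voxel-wise 3D CNN segmenter; illustrative architecture only.
import torch
import torch.nn as nn

class VoxelSegNet3D(nn.Module):
    """Maps a volume (B, 1, D, H, W) to per-voxel foreground logits of the same size."""
    def __init__(self, channels=(16, 32, 32)):
        super().__init__()
        layers, in_ch = [], 1
        for out_ch in channels:
            layers += [nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
                       nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True)]
            in_ch = out_ch
        layers.append(nn.Conv3d(in_ch, 1, kernel_size=1))    # voxel-wise logits
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

# one training step against a voxel-wise binary label map
model = VoxelSegNet3D()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
volume = torch.randn(1, 1, 32, 64, 64)                       # stand-in microscopy volume
labels = (torch.rand(1, 1, 32, 64, 64) > 0.9).float()
loss = nn.BCEWithLogitsLoss()(model(volume), labels)
loss.backward()
optimizer.step()
```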

  3. GPS 3-D cockpit displays: Sensors, algorithms, and flight testing

    NASA Astrophysics Data System (ADS)

    Barrows, Andrew Kevin

    Tunnel-in-the-Sky 3-D flight displays have been investigated for several decades as a means of enhancing aircraft safety and utility. However, high costs have prevented commercial development and seriously hindered research into their operational benefits. The rapid development of Differential Global Positioning Systems (DGPS), inexpensive computing power, and ruggedized displays is now changing this situation. A low-cost prototype system was built and flight tested to investigate implementation and operational issues. The display provided an "out the window" 3-D perspective view of the world, letting the pilot see the horizon, runway, and desired flight path even in instrument flight conditions. The flight path was depicted as a tunnel through which the pilot flew the airplane, while predictor symbology provided guidance to minimize path-following errors. Positioning data was supplied by various DGPS sources, including the Stanford Wide Area Augmentation System (WAAS) testbed. A combination of GPS and low-cost inertial sensors provided vehicle heading, pitch, and roll information. Architectural and sensor fusion tradeoffs made during system implementation are discussed. Computational algorithms used to provide guidance on curved paths over the earth geoid are outlined along with display system design issues. It was found that current technology enables low-cost Tunnel-in-the-Sky display systems with a target cost of $20,000 for large-scale commercialization. Extensive testing on Piper Dakota and Beechcraft Queen Air aircraft demonstrated enhanced accuracy and operational flexibility on a variety of complex flight trajectories. These included curved and segmented approaches, traffic patterns flown on instruments, and skywriting by instrument reference. Overlays to existing instrument approaches at airports in California and Alaska were flown and compared with current instrument procedures. These overlays demonstrated improved utility and situational awareness for

  4. Automated three-dimensional choroidal vessel segmentation of 3D 1060 nm OCT retinal data

    PubMed Central

    Kajić, Vedran; Esmaeelpour, Marieh; Glittenberg, Carl; Kraus, Martin F.; Honegger, Joachim; Othara, Richu; Binder, Susanne; Fujimoto, James G.; Drexler, Wolfgang

    2012-01-01

    A fully automated, robust vessel segmentation algorithm has been developed for choroidal OCT, employing multiscale 3D edge filtering and projection of “probability cones” to determine the vessel “core”, even in the tomograms with low signal-to-noise ratio (SNR). Based on the ideal vessel response after registration and multiscale filtering, with computed depth related SNR, the vessel core estimate is dilated to quantify the full vessel diameter. As a consequence, various statistics can be computed using the 3D choroidal vessel information, such as ratios of inner (smaller) to outer (larger) choroidal vessels or the absolute/relative volume of choroid vessels. Choroidal vessel quantification can be displayed in various forms, focused and averaged within a special region of interest, or analyzed as the function of image depth. In this way, the proposed algorithm enables unique visualization of choroidal watershed zones, as well as the vessel size reduction when investigating the choroid from the sclera towards the retinal pigment epithelium (RPE). To the best of our knowledge, this is the first time that an automatic choroidal vessel segmentation algorithm is successfully applied to 1060 nm 3D OCT of healthy and diseased eyes. PMID:23304653
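
    A multiscale Hessian-based (Frangi-style) vesselness response of the kind used for 3D tube enhancement can be sketched as follows; the parameters and the exact response function are illustrative and will differ from the published filter.

```python
# Minimal sketch of multiscale Hessian/Frangi-style vesselness for a 3D volume.
import numpy as np
from scipy import ndimage as ndi

def vesselness_3d(volume, sigmas=(1.0, 2.0, 4.0), alpha=0.5, beta=0.5, c=15.0):
    response = np.zeros(volume.shape)
    for s in sigmas:
        # scale-normalised second derivatives (Hessian entries)
        H = np.empty(volume.shape + (3, 3))
        for i in range(3):
            for j in range(i, 3):
                order = [0, 0, 0]; order[i] += 1; order[j] += 1
                d = s ** 2 * ndi.gaussian_filter(volume, s, order=order)
                H[..., i, j] = H[..., j, i] = d
        ev = np.linalg.eigvalsh(H.reshape(-1, 3, 3))
        ev = ev[np.arange(len(ev))[:, None], np.argsort(np.abs(ev), axis=1)]
        l1, l2, l3 = ev[:, 0], ev[:, 1], ev[:, 2]
        Ra = np.abs(l2) / (np.abs(l3) + 1e-10)                 # plate vs. line
        Rb = np.abs(l1) / np.sqrt(np.abs(l2 * l3) + 1e-10)     # blob vs. line
        S = np.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)               # structureness
        v = ((1 - np.exp(-Ra ** 2 / (2 * alpha ** 2)))
             * np.exp(-Rb ** 2 / (2 * beta ** 2))
             * (1 - np.exp(-S ** 2 / (2 * c ** 2))))
        v[(l2 > 0) | (l3 > 0)] = 0          # keep bright tubes on a dark background
        response = np.maximum(response, v.reshape(volume.shape))
    return response
```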

  5. Novel and powerful 3D adaptive crisp active contour method applied in the segmentation of CT lung images.

    PubMed

    Rebouças Filho, Pedro Pedrosa; Cortez, Paulo César; da Silva Barros, Antônio C; C Albuquerque, Victor Hugo; R S Tavares, João Manuel

    2017-01-01

    The World Health Organization estimates that 300 million people have asthma, 210 million people have Chronic Obstructive Pulmonary Disease (COPD), and, according to WHO, COPD will become the third major cause of death worldwide in 2030. Computational vision systems are commonly used in pulmonology to address the task of image segmentation, which is essential for accurate medical diagnoses. Segmentation defines the regions of the lungs in CT images of the thorax that must be further analyzed by the system or by a specialist physician. This work proposes a novel and powerful technique named 3D Adaptive Crisp Active Contour Method (3D ACACM) for the segmentation of CT lung images. The method starts with a sphere within the lung to be segmented that is deformed by forces acting on it towards the lung borders. This process is performed iteratively in order to minimize an energy function associated with the 3D deformable model used. In the experimental assessment, the 3D ACACM is compared against three approaches commonly used in this field: the automatic 3D Region Growing, the level-set algorithm based on coherent propagation and the semi-automatic segmentation by an expert using the 3D OsiriX toolbox. When applied to 40 CT scans of the chest, the 3D ACACM achieved an average F-measure of 99.22%, demonstrating its superiority and competence in segmenting lungs in CT images.

  6. Improving Semantic Updating Method on 3d City Models Using Hybrid Semantic-Geometric 3d Segmentation Technique

    NASA Astrophysics Data System (ADS)

    Sharkawi, K.-H.; Abdul-Rahman, A.

    2013-09-01

    to LoD4. The accuracy and structural complexity of the 3D objects increases with the LoD level, where LoD0 is the simplest LoD (2.5D; Digital Terrain Model (DTM) + building or roof print) while LoD4 is the most complex LoD (architectural details with interior structures). Semantic information is one of the main components in CityGML and 3D City Models, and provides important information for any analyses. However, more often than not, the semantic information is not available for the 3D city model due to the unstandardized modelling process. One of the examples is where a building is normally generated as one object (without specific feature layers such as Roof, Ground floor, Level 1, Level 2, Block A, Block B, etc). This research attempts to develop a method to improve the semantic data updating process by segmenting the 3D building into simpler parts which will make it easier for the users to select and update the semantic information. The methodology is implemented for 3D buildings in LoD2 where the buildings are generated without architectural details but with distinct roof structures. This paper also introduces a hybrid semantic-geometric 3D segmentation method that deals with hierarchical segmentation of a 3D building based on its semantic value and surface characteristics, fitted by one of the predefined primitives. For future work, the segmentation method will be implemented as part of the change detection module that can detect any changes to the 3D buildings, store and retrieve semantic information of the changed structures, automatically update the 3D models and visualize the results in a user-friendly graphical user interface (GUI).

  7. Computerized Liver Volumetry on MRI by Using 3D Geodesic Active Contour Segmentation

    PubMed Central

    Huynh, Hieu Trung; Karademir, Ibrahim; Oto, Aytekin; Suzuki, Kenji

    2014-01-01

    OBJECTIVE Our purpose was to develop an accurate automated 3D liver segmentation scheme for measuring liver volumes on MRI. SUBJECTS AND METHODS Our scheme for MRI liver volumetry consisted of three main stages. First, the preprocessing stage was applied to T1-weighted MRI of the liver in the portal venous phase to reduce noise and produce the boundary-enhanced image. This boundary-enhanced image was used as a speed function for a 3D fast-marching algorithm to generate an initial surface that roughly approximated the shape of the liver. A 3D geodesic-active-contour segmentation algorithm refined the initial surface to precisely determine the liver boundaries. The liver volumes determined by our scheme were compared with those manually traced by a radiologist, used as the reference standard. RESULTS The two volumetric methods reached excellent agreement (intraclass correlation coefficient, 0.98) without statistical significance (p = 0.42). The average (± SD) accuracy was 99.4% ± 0.14%, and the average Dice overlap coefficient was 93.6% ± 1.7%. The mean processing time for our automated scheme was 1.03 ± 0.13 minutes, whereas that for manual volumetry was 24.0 ± 4.4 minutes (p < 0.001). CONCLUSION The MRI liver volumetry based on our automated scheme agreed excellently with reference-standard volumetry, and it required substantially less completion time. PMID:24370139
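
    The fast-marching plus geodesic-active-contour pipeline can be sketched with SimpleITK; in this simplified version the fast-marching initial surface is replaced by a signed distance map around a single seed sphere, and the file path, seed location and all parameter values are illustrative assumptions rather than the published scheme.

```python
# Minimal sketch: boundary-enhanced speed image + geodesic active contour
# refinement in SimpleITK; the seed-sphere initialisation stands in for the
# paper's fast-marching initial surface.
import numpy as np
import SimpleITK as sitk

def liver_gac(mri_path, seed_zyx, radius_mm=20.0):
    img = sitk.Cast(sitk.ReadImage(mri_path), sitk.sitkFloat32)

    # boundary-enhanced image: gradient magnitude mapped through a sigmoid
    grad = sitk.GradientMagnitudeRecursiveGaussian(img, sigma=1.5)
    speed = sitk.Sigmoid(grad, alpha=-2.0, beta=10.0,
                         outputMaximum=1.0, outputMinimum=0.0)

    # rough initial surface: signed distance to a small sphere around the seed
    arr = np.zeros(sitk.GetArrayFromImage(img).shape, dtype=np.uint8)
    arr[seed_zyx] = 1
    seed_img = sitk.GetImageFromArray(arr)
    seed_img.CopyInformation(img)
    init = sitk.SignedMaurerDistanceMap(seed_img, insideIsPositive=False,
                                        squaredDistance=False,
                                        useImageSpacing=True) - radius_mm

    gac = sitk.GeodesicActiveContourLevelSetImageFilter()
    gac.SetPropagationScaling(1.0)
    gac.SetCurvatureScaling(0.5)
    gac.SetAdvectionScaling(1.0)
    gac.SetNumberOfIterations(500)
    gac.SetMaximumRMSError(0.01)
    level_set = gac.Execute(sitk.Cast(init, sitk.sitkFloat32), speed)
    return sitk.BinaryThreshold(level_set, -1e6, 0.0, 1, 0)   # negative = inside
```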

  8. Multiple footprint stereo algorithms for 3D display content generation

    NASA Astrophysics Data System (ADS)

    Boughorbel, Faysal

    2007-02-01

    This research focuses on the conversion of stereoscopic video material into an image + depth format which is suitable for rendering on the multiview auto-stereoscopic displays of Philips. The recent interest shown in the movie industry for 3D significantly increased the availability of stereo material. In this context the conversion from stereo to the input formats of 3D displays becomes an important task. In this paper we present a stereo algorithm that uses multiple footprints generating several depth candidates for each image pixel. We characterize the various matching windows and we devise a robust strategy for extracting high quality estimates from the resulting depth candidates. The proposed algorithm is based on a surface filtering method that employs simultaneously the available depth estimates in a small local neighborhood while ensuring correct depth discontinuities by the inclusion of image constraints. The resulting high-quality image-aligned depth maps proved an excellent match with our 3D displays.

  9. Improved hybrid optimization algorithm for 3D protein structure prediction.

    PubMed

    Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang

    2014-07-01

    A new improved hybrid optimization algorithm, PGATS, based on the toy off-lattice model, is presented for dealing with three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), the genetic algorithm (GA), and tabu search (TS). In addition, several improvement strategies are adopted: a stochastic disturbance factor is introduced into the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are replaced with a random linear method; and the tabu search algorithm is improved by appending a mutation operator. Through the combination of these strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be cast as a global optimization problem with many extrema and many parameters; this is the theoretical principle behind the hybrid optimization algorithm proposed in this paper. The algorithm combines local and global search, overcoming the shortcomings of a single algorithm and exploiting the advantages of each. The method is verified on the standard benchmark Fibonacci sequences and on real protein sequences. Experiments show that the proposed method outperforms the individual algorithms in the accuracy of the calculated protein sequence energy values, proving it an effective way to predict protein structures.
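
    For reference, one common (Stillinger-style) form of the 3D AB off-lattice energy that such hybrid optimizers minimise is sketched below; the exact constants and angle convention used in the paper may differ, so this is illustrative only.

```python
# Minimal sketch of the AB off-lattice ("toy") model energy: bond-angle bending
# plus a Lennard-Jones-like pair term whose strength depends on residue types.
# Constants follow a common Stillinger-style formulation and are assumptions here.
import numpy as np

def ab_energy(coords, sequence):
    """coords: (N, 3) residue positions with unit bond lengths; sequence: 'A'/'B' string."""
    coords = np.asarray(coords, dtype=float)
    n = len(sequence)
    bonds = np.diff(coords, axis=0)                        # consecutive unit bond vectors
    cos_theta = np.sum(bonds[:-1] * bonds[1:], axis=1)
    e_angle = 0.25 * np.sum(1.0 - cos_theta)
    e_pair = 0.0
    for i in range(n - 2):
        for j in range(i + 2, n):
            c = (1.0 if sequence[i] == sequence[j] == 'A'
                 else 0.5 if sequence[i] == sequence[j] == 'B' else -0.5)
            r = np.linalg.norm(coords[i] - coords[j])
            e_pair += 4.0 * (r ** -12 - c * r ** -6)
    return e_angle + e_pair
```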

  10. Automatic segmentation and analysis of fibrin networks in 3D confocal microscopy images

    NASA Astrophysics Data System (ADS)

    Liu, Xiaomin; Mu, Jian; Machlus, Kellie R.; Wolberg, Alisa S.; Rosen, Elliot D.; Xu, Zhiliang; Alber, Mark S.; Chen, Danny Z.

    2012-02-01

    Fibrin networks are a major component of blood clots and provide structural support to growing clots. Abnormal fibrin networks that are too rigid or too unstable can promote cardiovascular problems and/or bleeding. However, current biological studies of fibrin networks rarely perform quantitative analysis of their structural properties (e.g., the density of branch points) due to the massive branching structures of the networks. In this paper, we present a new approach for segmenting and analyzing fibrin networks in 3D confocal microscopy images. We first identify the target fibrin network by applying the 3D region growing method with global thresholding. We then produce a one-voxel wide centerline for each fiber segment along which the branch points and other structural information of the network can be obtained. Branch points are identified by a novel approach based on the outer medial axis. Cells within the fibrin network are segmented by a new algorithm that combines cluster detection and surface reconstruction based on the α-shape approach. Our algorithm has been evaluated on computer phantom images of fibrin networks for identifying branch points. Experiments on z-stack images of different types of fibrin networks yielded results that are consistent with biological observations.
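
    The centerline and branch-point step can be illustrated with a generic skeleton-based sketch; this uses plain 3D thinning and neighbour counting rather than the paper's outer-medial-axis approach, and assumes a recent scikit-image whose skeletonize handles 3-D input.

```python
# Minimal sketch: one-voxel-wide centerlines of a binary fibrin network and
# candidate branch points found by counting skeleton neighbours.
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import skeletonize

def branch_points(binary_network_3d):
    skel = skeletonize(binary_network_3d).astype(np.uint8)     # centerlines
    kernel = np.ones((3, 3, 3), dtype=np.uint8)
    kernel[1, 1, 1] = 0
    neighbours = ndi.convolve(skel, kernel, mode='constant') * skel
    # skeleton voxels with three or more skeleton neighbours branch the network
    return np.argwhere(neighbours >= 3), skel
```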

  11. Image quality enhancement and computation acceleration of 3D holographic display using a symmetrical 3D GS algorithm.

    PubMed

    Zhou, Pengcheng; Bi, Yong; Sun, Minyuan; Wang, Hao; Li, Fang; Qi, Yan

    2014-09-20

    The 3D Gerchberg-Saxton (GS) algorithm can be used to compute a computer-generated hologram (CGH) to produce a 3D holographic display. However, the 3D GS method produces serious distortion in reconstructions of binary input images. We have eliminated the distortion and improved the image quality of the reconstructions by a maximum of 486%, using a symmetrical 3D GS algorithm developed from the traditional 3D GS algorithm. In addition, the hologram computation speed has been accelerated by 9.28 times, which is significant for real-time holographic displays.
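
    The underlying GS iteration is compact; the sketch below shows the classic 2D phase-retrieval loop for a phase-only hologram, from which the paper's symmetrical 3D variant departs, so it is illustrative only.

```python
# Minimal sketch of the classic Gerchberg-Saxton loop for a phase-only CGH (2D).
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=50):
    """Return a hologram-plane phase whose far field approximates the target amplitude."""
    rng = np.random.default_rng(0)
    field = target_amplitude * np.exp(1j * 2 * np.pi * rng.random(target_amplitude.shape))
    for _ in range(iterations):
        holo = np.fft.ifft2(field)                  # back-propagate to hologram plane
        holo = np.exp(1j * np.angle(holo))          # keep phase only (unit amplitude)
        recon = np.fft.fft2(holo)                   # propagate to the image plane
        field = target_amplitude * np.exp(1j * np.angle(recon))  # enforce target amplitude
    return np.angle(holo)
```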

  12. Brain tumor segmentation in 3D MRIs using an improved Markov random field model

    NASA Astrophysics Data System (ADS)

    Yousefi, Sahar; Azmi, Reza; Zahedi, Morteza

    2011-10-01

    Markov Random Field (MRF) models have recently been suggested for MRI brain segmentation by a large number of researchers. By exploiting Markovianity, which represents the local property, MRF models are able to solve a global optimization problem locally. However, they still carry a heavy computational burden, especially when they use stochastic relaxation schemes such as Simulated Annealing (SA). In this paper, a new 3D MRF model is put forward to speed up convergence. The search procedure of SA is fairly localized, which prevents it from exploring a wide diversity of solutions and leads to several limitations. In comparison, the Genetic Algorithm (GA) has a good capability for global search but is weak at hill climbing. Our proposed algorithm combines SA with an improved GA (IGA) to optimize the solution, which reduces the computation time. Moreover, the proposed algorithm outperforms the traditional 2D MRF in the quality of the solution.

  13. A 3D Interactive Multi-object Segmentation Tool using Local Robust Statistics Driven Active Contours

    PubMed Central

    Gao, Yi; Kikinis, Ron; Bouix, Sylvain; Shenton, Martha; Tannenbaum, Allen

    2012-01-01

    Extracting anatomically and functionally significant structures is one of the important tasks both for the theoretical study of medical image analysis and for the clinical and practical community. In the past, much work has been dedicated only to algorithmic development. Nevertheless, for clinical end users, a well-designed algorithm with interactive software is necessary for an algorithm to be used in their daily work. Furthermore, the software should preferably be open source so that it can be used and validated not only by the authors but also by the entire community. Therefore, the contribution of the present work is twofold. First, we propose a new robust-statistics-based conformal metric and a conformal-area-driven multiple active contour framework to simultaneously extract multiple targets from MR and CT medical imagery in 3D. Second, an open-source, graphically interactive 3D segmentation tool based on the aforementioned contour evolution is implemented and is publicly available for end users on multiple platforms. In using this software for the segmentation task, the process is initiated by user-drawn strokes (seeds) in the target region of the image. Then, local robust statistics are used to describe the object features, and such features are learned adaptively from the seeds under a non-parametric estimation scheme. Subsequently, several active contours evolve simultaneously, with their interactions motivated by the principles of action and reaction. This not only guarantees mutual exclusiveness among the contours, but also no longer relies on the assumption that the multiple objects fill the entire image domain, which was tacitly or explicitly assumed in many previous works. In doing so, the contours interact and converge to equilibrium at the positions of the desired multiple objects. Furthermore, with the aim of not only validating the algorithm and the software, but also demonstrating how the tool is to be used, we

  14. Conservative Patch Algorithm and Mesh Sequencing for PAB3D

    NASA Technical Reports Server (NTRS)

    Pao, S. P.; Abdol-Hamid, K. S.

    2005-01-01

    A mesh-sequencing algorithm and a conservative patched-grid-interface algorithm (hereafter the patch algorithm) have been incorporated into the PAB3D code, which is a computer program that solves the Navier-Stokes equations for the simulation of subsonic, transonic, or supersonic flows surrounding an aircraft or other complex aerodynamic shapes. These algorithms are efficient, flexible, and have added tremendously to the capabilities of PAB3D. The mesh-sequencing algorithm makes it possible to perform preliminary computations using only a fraction of the grid cells (provided the original cell count is divisible by an integer) along any grid coordinate axis, independently of the other axes. The patch algorithm addresses another critical need in multi-block grid situations where the cell faces of adjacent grid blocks may not coincide, leading to errors in calculating fluxes of conserved physical quantities across interfaces between the blocks. The patch algorithm, based on the Stokes integral formulation of the applicable conservation laws, effectively matches each of the interfacial cells on one side of the block interface to the corresponding fractional cell area pieces on the other side. This approach is comprehensive and unified such that all interface topology is automatically processed without user intervention. This algorithm is implemented in a preprocessing code that creates a cell-by-cell database that will maintain flux conservation at any level of full or reduced grid density as the user may choose by way of the mesh-sequencing algorithm. These two algorithms have enhanced the numerical accuracy of the code, reduced the time and effort for grid preprocessing, and provided users with the flexibility of performing computations at any desired full or reduced grid resolution to suit their specific computational requirements.

  15. Algorithms for Haptic Rendering of 3D Objects

    NASA Technical Reports Server (NTRS)

    Basdogan, Cagatay; Ho, Chih-Hao; Srinavasan, Mandayam

    2003-01-01

    Algorithms have been developed to provide haptic rendering of three-dimensional (3D) objects in virtual (that is, computationally simulated) environments. The goal of haptic rendering is to generate tactual displays of the shapes, hardnesses, surface textures, and frictional properties of 3D objects in real time. Haptic rendering is a major element of the emerging field of computer haptics, which invites comparison with computer graphics. We have already seen various applications of computer haptics in the areas of medicine (surgical simulation, telemedicine, haptic user interfaces for blind people, and rehabilitation of patients with neurological disorders), entertainment (3D painting, character animation, morphing, and sculpting), mechanical design (path planning and assembly sequencing), and scientific visualization (geophysical data analysis and molecular manipulation).

  16. Efficient global optimization based 3D carotid AB-LIB MRI segmentation by simultaneously evolving coupled surfaces.

    PubMed

    Ukwatta, Eranga; Yuan, Jing; Rajchl, Martin; Fenster, Aaron

    2012-01-01

    Magnetic resonance (MR) imaging of carotid atherosclerosis biomarkers is increasingly being investigated for the risk assessment of vulnerable plaques. A fast and robust 3D segmentation of the carotid adventitia (AB) and lumen-intima (LIB) boundaries can greatly alleviate the measurement burden of generating quantitative imaging biomarkers in clinical research. In this paper, we propose a novel global optimization-based approach to segment the carotid AB and LIB from 3D T1-weighted black blood MR images, by simultaneously evolving two coupled surfaces with enforcement of anatomical consistency of the AB and LIB. We show that the evolution of two surfaces at each discrete time-frame can be optimized exactly and globally by means of convex relaxation. Our continuous max-flow based algorithm is implemented on GPUs to achieve high computational performance. The experimental results from 16 carotid MR images show that the algorithm obtained high agreement with manual segmentations and achieved high repeatability in segmentation.

  17. Semi-automatic 3D segmentation of costal cartilage in CT data from Pectus Excavatum patients

    NASA Astrophysics Data System (ADS)

    Barbosa, Daniel; Queirós, Sandro; Rodrigues, Nuno; Correia-Pinto, Jorge; Vilaça, J.

    2015-03-01

    One of the current frontiers in the clinical management of Pectus Excavatum (PE) patients is the prediction of the surgical outcome prior to the intervention. This can be done through computerized simulation of the Nuss procedure, which requires an anatomically correct representation of the costal cartilage. To this end, we take advantage of the costal cartilage tubular structure to detect it through multi-scale vesselness filtering. This information is then used in an interactive 2D initialization procedure which uses anatomical maximum intensity projections of 3D vesselness feature images to efficiently initialize the 3D segmentation process. We identify the cartilage tissue centerlines in these projected 2D images using a livewire approach. We finally refine the 3D cartilage surface through region-based sparse field level-sets. We have tested the proposed algorithm in 6 noncontrast CT datasets from PE patients. A good segmentation performance was found against reference manual contouring, with an average Dice coefficient of 0.75 ± 0.04 and an average mean surface distance of 1.69 ± 0.30 mm. The proposed method requires roughly 1 minute for the interactive initialization step, which can positively contribute to an extended use of this tool in clinical practice, since current manual delineation of the costal cartilage can take up to an hour.

  18. Automated segmentation and geometrical modeling of the tricuspid aortic valve in 3D echocardiographic images.

    PubMed

    Pouch, Alison M; Wang, Hongzhi; Takabe, Manabu; Jackson, Benjamin M; Sehgal, Chandra M; Gorman, Joseph H; Gorman, Robert C; Yushkevich, Paul A

    2013-01-01

    The aortic valve has been described with variable anatomical definitions, and the consistency of 2D manual measurement of valve dimensions in medical image data has been questionable. Given the importance of image-based morphological assessment in the diagnosis and surgical treatment of aortic valve disease, there is considerable need to develop a standardized framework for 3D valve segmentation and shape representation. Towards this goal, this work integrates template-based medial modeling and multi-atlas label fusion techniques to automatically delineate and quantitatively describe aortic leaflet geometry in 3D echocardiographic (3DE) images, a challenging task that has been explored only to a limited extent. The method makes use of expert knowledge of aortic leaflet image appearance, generates segmentations with consistent topology, and establishes a shape-based coordinate system on the aortic leaflets that enables standardized automated measurements. In this study, the algorithm is evaluated on 11 3DE images of normal human aortic leaflets acquired at mid systole. The clinical relevance of the method is its ability to capture leaflet geometry in 3DE image data with minimal user interaction while producing consistent measurements of 3D aortic leaflet geometry.
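
    As a simple illustration of the label-fusion step, the sketch below performs plain majority voting over atlas label maps already warped into the target image space; the paper's multi-atlas fusion is more sophisticated, so this is a generic baseline rather than the published method.

```python
# Minimal sketch of multi-atlas majority-vote label fusion in the target space.
import numpy as np

def majority_vote_fusion(warped_atlas_labels, n_labels):
    """warped_atlas_labels: list of (D, H, W) integer label volumes, already registered."""
    votes = np.zeros((n_labels,) + warped_atlas_labels[0].shape, dtype=np.int32)
    for labels in warped_atlas_labels:
        for l in range(n_labels):
            votes[l] += (labels == l)
    # ties are broken in favour of the lower label index by argmax
    return votes.argmax(axis=0)
```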

  19. Using 3-D shape models to guide segmentation of MR brain images.

    PubMed Central

    Hinshaw, K. P.; Brinkley, J. F.

    1997-01-01

    Accurate segmentation of medical images poses one of the major challenges in computer vision. Approaches that rely solely on intensity information frequently fail because similar intensity values appear in multiple structures. This paper presents a method for using shape knowledge to guide the segmentation process, applying it to the task of finding the surface of the brain. A 3-D model that includes local shape constraints is fitted to an MR volume dataset. The resulting low-resolution surface is used to mask out regions far from the cortical surface, enabling an isosurface extraction algorithm to isolate a more detailed surface boundary. The surfaces generated by this technique are comparable to those achieved by other methods, without requiring user adjustment of a large number of ad hoc parameters. PMID:9357670

  20. Swarm Intelligence Integrated Graph-Cut for Liver Segmentation from 3D-CT Volumes

    PubMed Central

    Eapen, Maya; Korah, Reeba; Geetha, G.

    2015-01-01

    The segmentation of organs in CT volumes is a prerequisite for diagnosis and treatment planning. In this paper, we focus on liver segmentation from contrast-enhanced abdominal CT volumes, a challenging task due to intensity overlapping, blurred edges, large variability in liver shape, and complex background with cluttered features. The algorithm integrates multiple discriminative cues (i.e., prior domain information, an intensity model, and regional characteristics of the liver) in a graph-cut image segmentation framework. The paper proposes a swarm intelligence inspired edge-adaptive weight function for regulating the energy minimization of the traditional graph-cut model. The model is validated both qualitatively (by clinicians and radiologists) and quantitatively on publicly available computed tomography (CT) datasets (MICCAI 2007 liver segmentation challenge, 3D-IRCAD). Quantitative evaluation of segmentation results is performed using liver volume calculations, and mean scores of 80.8% and 82.5% on the MICCAI and IRCAD datasets, respectively, are obtained. The experimental results illustrate the efficiency and effectiveness of the proposed method. PMID:26689833

  1. Blood Pool Segmentation Results in Superior Virtual Cardiac Models than Myocardial Segmentation for 3D Printing.

    PubMed

    Farooqi, Kanwal M; Lengua, Carlos Gonzalez; Weinberg, Alan D; Nielsen, James C; Sanz, Javier

    2016-08-01

    The method of cardiac magnetic resonance (CMR) three-dimensional (3D) image acquisition and post-processing which should be used to create optimal virtual models for 3D printing has not been studied systematically. Patients (n = 19) who had undergone CMR including both 3D balanced steady-state free precession (bSSFP) imaging and contrast-enhanced magnetic resonance angiography (MRA) were retrospectively identified. Post-processing for the creation of virtual 3D models involved using both myocardial (MS) and blood pool (BP) segmentation, resulting in four groups: Group 1-bSSFP/MS, Group 2-bSSFP/BP, Group 3-MRA/MS and Group 4-MRA/BP. The models created were assessed by two raters for overall quality (1-poor; 2-good; 3-excellent) and ability to identify predefined vessels (1-5: superior vena cava, inferior vena cava, main pulmonary artery, ascending aorta and at least one pulmonary vein). A total of 76 virtual models were created from 19 patient CMR datasets. The mean overall quality scores for Raters 1/2 were 1.63 ± 0.50/1.26 ± 0.45 for Group 1, 2.12 ± 0.50/2.26 ± 0.73 for Group 2, 1.74 ± 0.56/1.53 ± 0.61 for Group 3 and 2.26 ± 0.65/2.68 ± 0.48 for Group 4. The numbers of identified vessels for Raters 1/2 were 4.11 ± 1.32/4.05 ± 1.31 for Group 1, 4.90 ± 0.46/4.95 ± 0.23 for Group 2, 4.32 ± 1.00/4.47 ± 0.84 for Group 3 and 4.74 ± 0.56/4.63 ± 0.49 for Group 4. Models created using BP segmentation (Groups 2 and 4) received significantly higher ratings than those created using MS for both overall quality and number of vessels visualized (p < 0.05), regardless of the acquisition technique. There were no significant differences between Groups 1 and 3. The ratings for Raters 1 and 2 had good correlation for overall quality (ICC = 0.63) and excellent correlation for the total number of vessels visualized (ICC = 0.77). The intra-rater reliability was good for Rater A (ICC = 0.65). Three models were successfully printed

  2. 3D ultrasound image segmentation using multiple incomplete feature sets

    NASA Astrophysics Data System (ADS)

    Fan, Liexiang; Herrington, David M.; Santago, Peter, II

    1999-05-01

    We use three features, namely intensity, texture and motion, to obtain robust results for the segmentation of intracoronary ultrasound images. Using a parameterized equation to describe the lumen-plaque and media-adventitia boundaries, we formulate the segmentation as a parameter estimation problem through a cost functional based on the posterior probability, which handles the incompleteness of the features in ultrasound images by employing outlier detection.

  3. Active surface model improvement by energy function optimization for 3D segmentation.

    PubMed

    Azimifar, Zohreh; Mohaddesi, Mahsa

    2015-04-01

    This paper proposes an optimized and efficient active surface model by improving the energy functions, searching method, neighborhood definition and resampling criterion. Extracting an accurate surface of the desired object from a number of 3D images using active surface and deformable models plays an important role in computer vision, especially in medical image processing. Different powerful segmentation algorithms have been suggested to address the limitations associated with model initialization, poor convergence to surface concavities and slow convergence rate. This paper proposes a method to improve one of the strongest recent segmentation algorithms, namely the Decoupled Active Surface (DAS) method. We consider the gradient of a wavelet edge-extracted image and local phase coherence as external energy to extract more information from images, and we use a curvature integral as internal energy to focus on extracting high-curvature regions. Similarly, we use resampling of points and a line search for point selection to improve the accuracy of the algorithm. We further employ an estimation of the desired object as an initialization for the active surface model. A number of tests and experiments have been done, and the results show improvements in the extracted surface accuracy and computational time of the presented algorithm compared with the best recent active surface models.

  4. Automated detection, 3D segmentation and analysis of high resolution spine MR images using statistical shape models

    NASA Astrophysics Data System (ADS)

    Neubert, A.; Fripp, J.; Engstrom, C.; Schwarz, R.; Lauer, L.; Salvado, O.; Crozier, S.

    2012-12-01

    Recent advances in high resolution magnetic resonance (MR) imaging of the spine provide a basis for the automated assessment of intervertebral disc (IVD) and vertebral body (VB) anatomy. High resolution three-dimensional (3D) morphological information contained in these images may be useful for early detection and monitoring of common spine disorders, such as disc degeneration. This work proposes an automated approach to extract the 3D segmentations of lumbar and thoracic IVDs and VBs from MR images using statistical shape analysis and registration of grey level intensity profiles. The algorithm was validated on a dataset of volumetric scans of the thoracolumbar spine of asymptomatic volunteers obtained on a 3T scanner using the relatively new 3D T2-weighted SPACE pulse sequence. Manual segmentations and expert radiological findings of early signs of disc degeneration were used in the validation. There was good agreement between manual and automated segmentation of the IVD and VB volumes with the mean Dice scores of 0.89 ± 0.04 and 0.91 ± 0.02 and mean absolute surface distances of 0.55 ± 0.18 mm and 0.67 ± 0.17 mm respectively. The method compares favourably to existing 3D MR segmentation techniques for VBs. This is the first time IVDs have been automatically segmented from 3D volumetric scans and shape parameters obtained were used in preliminary analyses to accurately classify (100% sensitivity, 98.3% specificity) disc abnormalities associated with early degenerative changes.

  5. Deformable templates guided discriminative models for robust 3D brain MRI segmentation.

    PubMed

    Liu, Cheng-Yi; Iglesias, Juan Eugenio; Tu, Zhuowen

    2013-10-01

    Automatically segmenting anatomical structures from 3D brain MRI images is an important task in neuroimaging. One major challenge is to design and learn effective image models accounting for the large variability in anatomy and data acquisition protocols. A deformable template is a type of generative model that attempts to explicitly match an input image with a template (atlas), and thus, they are robust against global intensity changes. On the other hand, discriminative models combine local image features to capture complex image patterns. In this paper, we propose a robust brain image segmentation algorithm that fuses together deformable templates and informative features. It takes advantage of the adaptation capability of the generative model and the classification power of the discriminative models. The proposed algorithm achieves both robustness and efficiency, and can be used to segment brain MRI images with large anatomical variations. We perform an extensive experimental study on four datasets of T1-weighted brain MRI data from different sources (1,082 MRI scans in total) and observe consistent improvement over the state-of-the-art systems.

  6. A parallel algorithm for solving the 3d Schroedinger equation

    SciTech Connect

    Strickland, Michael; Yager-Elorriaga, David

    2010-08-20

    We describe a parallel algorithm for solving the time-independent 3d Schroedinger equation using the finite difference time domain (FDTD) method. We introduce an optimized parallelization scheme that reduces communication overhead between computational nodes. We demonstrate that the compute time, t, scales inversely with the number of computational nodes as t ∝ N_nodes^(−0.95 ± 0.04). This makes it possible to solve the 3d Schroedinger equation on extremely large spatial lattices using a small computing cluster. In addition, we present a new method for precisely determining the energy eigenvalues and wavefunctions of quantum states based on a symmetry constraint on the FDTD initial condition. Finally, we discuss the usage of multi-resolution techniques in order to speed up convergence on extremely large lattices.
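
    The relaxation idea behind such FDTD solvers can be illustrated with a small serial sketch that evolves a trial wavefunction in imaginary time toward the ground state (hbar = m = 1, periodic boundaries); the grid, step sizes and example potential are illustrative, and the paper's parallel domain decomposition is not reproduced.

```python
# Minimal serial sketch: imaginary-time finite-difference relaxation of a 3D
# wavefunction toward the ground state on a cubic lattice of spacing a.
import numpy as np

def ground_state(V, a=0.2, dtau=2e-3, steps=4000):
    def H(psi):                                    # discrete Hamiltonian, -0.5*Lap + V
        lap = sum(np.roll(psi, s, ax) for ax in range(3) for s in (1, -1)) - 6.0 * psi
        return -0.5 * lap / a ** 2 + V * psi
    psi = np.random.default_rng(0).random(V.shape)
    for _ in range(steps):
        psi -= dtau * H(psi)                       # imaginary-time (descent) step
        psi /= np.sqrt(np.sum(psi ** 2) * a ** 3)  # renormalise each step
    return np.sum(psi * H(psi)) * a ** 3, psi      # energy expectation, wavefunction

# coarse demo: 3D harmonic oscillator, exact ground-state energy 1.5
x = np.arange(-16, 16) * 0.2
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
energy, _ = ground_state(0.5 * (X ** 2 + Y ** 2 + Z ** 2))
```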

  7. Automated bone segmentation from large field of view 3D MR images of the hip joint

    NASA Astrophysics Data System (ADS)

    Xia, Ying; Fripp, Jurgen; Chandra, Shekhar S.; Schwarz, Raphael; Engstrom, Craig; Crozier, Stuart

    2013-10-01

    Accurate bone segmentation in the hip joint region from magnetic resonance (MR) images can provide quantitative data for examining pathoanatomical conditions such as femoroacetabular impingement through to varying stages of osteoarthritis to monitor bone and associated cartilage morphometry. We evaluate two state-of-the-art methods (multi-atlas and active shape model (ASM) approaches) on bilateral MR images for automatic 3D bone segmentation in the hip region (proximal femur and innominate bone). Bilateral MR images of the hip joints were acquired at 3T from 30 volunteers. Image sequences included water-excitation dual echo steady state (FOV 38.6 × 24.1 cm, matrix 576 × 360, thickness 0.61 mm) in all subjects and multi-echo data image combination (FOV 37.6 × 23.5 cm, matrix 576 × 360, thickness 0.70 mm) for a subset of eight subjects. Following manual segmentation of femoral (head-neck, proximal-shaft) and innominate (ilium+ischium+pubis) bone, automated bone segmentation proceeded via two approaches: (1) multi-atlas segmentation incorporating non-rigid registration and (2) an advanced ASM-based scheme. Mean inter- and intra-rater reliability Dice's similarity coefficients (DSC) for manual segmentation of femoral and innominate bone were (0.970, 0.963) and (0.971, 0.965). Compared with manual data, mean DSC values for femoral and innominate bone volumes using automated multi-atlas and ASM-based methods were (0.950, 0.922) and (0.946, 0.917), respectively. Both approaches delivered accurate (high DSC values) segmentation results; notably, ASM data were generated in substantially less computational time (12 min versus 10 h). Both automated algorithms provided accurate 3D bone volumetric descriptions for MR-based measures in the hip region. The highly computationally efficient ASM-based approach is more likely suitable for future clinical applications such as extracting bone-cartilage interfaces for potential cartilage segmentation.

  8. Vessel segmentation in 3D spectral OCT scans of the retina

    NASA Astrophysics Data System (ADS)

    Niemeijer, Meindert; Garvin, Mona K.; van Ginneken, Bram; Sonka, Milan; Abràmoff, Michael D.

    2008-03-01

    The latest generation of spectral optical coherence tomography (OCT) scanners is able to image 3D cross-sectional volumes of the retina at a high resolution and high speed. These scans offer a detailed view of the structure of the retina. Automated segmentation of the vessels in these volumes may lead to more objective diagnosis of retinal vascular diseases, including hypertensive retinopathy and retinopathy of prematurity. Additionally, vessel segmentation can allow color fundus images to be registered to these 3D volumes, possibly leading to a better understanding of the structure and localization of retinal structures and lesions. In this paper we present a method for automatically segmenting the vessels in a 3D OCT volume. First, the retina is automatically segmented into multiple layers, using simultaneous segmentation of their boundary surfaces in 3D. Next, a 2D projection of the vessels is produced by only using information from certain segmented layers. Finally, a supervised, pixel classification based vessel segmentation approach is applied to the projection image. We compared the influence of two methods for the projection on the performance of the vessel segmentation on 10 optic nerve head centered 3D OCT scans. The method was trained on 5 independent scans. Using ROC analysis, our proposed vessel segmentation system obtains an area under the curve of 0.970 when compared with the segmentation of a human observer.

  9. Segmented Domain Decomposition Multigrid For 3-D Turbomachinery Flows

    NASA Technical Reports Server (NTRS)

    Celestina, M. L.; Adamczyk, J. J.; Rubin, S. G.

    2001-01-01

    A Segmented Domain Decomposition Multigrid (SDDMG) procedure was developed for three-dimensional viscous flow problems as they apply to turbomachinery flows. The procedure divides the computational domain into a coarse mesh comprised of uniformly spaced cells. To resolve smaller length scales such as the viscous layer near a surface, segments of the coarse mesh are subdivided into a finer mesh. This is repeated until adequate resolution of the smallest relevant length scale is obtained. Multigrid is used to communicate information between the different grid levels. To test the procedure, simulation results will be presented for a compressor and turbine cascade. These simulations are intended to show the ability of the present method to generate grid independent solutions. Comparisons with data will also be presented. These comparisons will further demonstrate the usefulness of the present work for they allow an estimate of the accuracy of the flow modeling equations independent of error attributed to numerical discretization.

  10. Segmentation and interpretation of 3D protein images

    SciTech Connect

    Leherte, L.; Baxter, K.; Glasgow, J.; Fortier, S.

    1994-12-31

    The segmentation and interpretation of three-dimensional images of proteins is considered. A topological approach is used to represent a protein structure as a spanning tree of critical points, where each critical point corresponds to a residue or the connectivity between residues. The critical points are subsequently analyzed to recognize secondary structure motifs within the protein. Results of applying the approach to ideal and experimental images of proteins at medium resolution are presented.

  11. 3D Hail Size Distribution Interpolation/Extrapolation Algorithm

    NASA Technical Reports Server (NTRS)

    Lane, John

    2013-01-01

    Radar data can usually detect hail; however, it is difficult for present day radar to accurately discriminate between hail and rain. Local ground-based hail sensors are much better at detecting hail against a rain background, and when incorporated with radar data, provide a much better local picture of a severe rain or hail event. The previous disdrometer interpolation/extrapolation algorithm described a method to interpolate horizontally between multiple ground sensors (a minimum of three) and extrapolate vertically. This work is a modification to that approach that generates a purely extrapolated 3D spatial distribution when using a single sensor.

  12. Irregular Grid Generation and Rapid 3D Color Display Algorithm

    SciTech Connect

    Wilson D. Chin, Ph.D.

    2000-05-10

    Computationally efficient and fast methods for irregular grid generation are developed to accurately characterize wellbore and fracture boundaries, and farfield reservoir boundaries, in oil and gas petroleum fields. Advanced reservoir simulation techniques are developed for oilfields described by such "boundary conforming" mesh systems. Very rapid, three-dimensional color display algorithms are also developed that allow users to "interrogate" 3D earth cubes using "slice, rotate, and zoom" functions. Based on expert system ideas, the new methods operate much faster than existing display methodologies and do not require sophisticated computer hardware or software. They are designed to operate with PC based applications.

  13. Shape representation for efficient landmark-based segmentation in 3-d.

    PubMed

    Ibragimov, Bulat; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž

    2014-04-01

    In this paper, we propose a novel approach to landmark-based shape representation that is based on transportation theory, where landmarks are considered as sources and destinations, all possible landmark connections as roads, and established landmark connections as goods transported via these roads. Landmark connections, which are selectively established, are identified through their statistical properties describing the shape of the object of interest, and indicate the least costly roads for transporting goods from sources to destinations. From such a perspective, we introduce three novel shape representations that are combined with an existing landmark detection algorithm based on game theory. To reduce computational complexity, which results from the extension from 2-D to 3-D segmentation, landmark detection is augmented by a concept known in game theory as strategy dominance. The novel shape representations, game-theoretic landmark detection and strategy dominance are combined into a segmentation framework that was evaluated on 3-D computed tomography images of lumbar vertebrae and femoral heads. The best shape representation yielded symmetric surface distance of 0.75 mm and 1.11 mm, and Dice coefficient of 93.6% and 96.2% for lumbar vertebrae and femoral heads, respectively. By applying strategy dominance, the computational costs were further reduced for up to three times.

  14. Automatic segmentation and 3D feature extraction of protein aggregates in Caenorhabditis elegans

    NASA Astrophysics Data System (ADS)

    Rodrigues, Pedro L.; Moreira, António H. J.; Teixeira-Castro, Andreia; Oliveira, João; Dias, Nuno; Rodrigues, Nuno F.; Vilaça, João L.

    2012-03-01

    In recent years, it has become increasingly clear that neurodegenerative diseases involve protein aggregation, a process often used as a disease progression readout and to develop therapeutic strategies. This work presents an image processing tool to automatically segment, classify and quantify these aggregates and the whole 3D body of the nematode Caenorhabditis elegans. A total of 150 data set images, containing different slices, were captured with a confocal microscope from animals of distinct genetic conditions. Because of the animals' transparency, most of the slice pixels appeared dark, hampering direct reconstruction of the body volume. Therefore, for each data set, all slices were stacked into one single 2D image in order to determine a volume approximation. The gradient of this image was input to an anisotropic diffusion algorithm that uses Tukey's biweight as the edge-stopping function. The histogram median of this outcome was used to dynamically determine a thresholding level, which allows the determination of a smoothed exterior contour of the worm and of the medial axis of the worm body by thinning its skeleton. Based on this exterior contour diameter and the medial animal axis, random 3D points were then calculated to produce a volume mesh approximation. The protein aggregations were subsequently segmented based on an iso-value and blended with the resulting volume mesh. The results obtained were consistent with qualitative observations in the literature, allowing unbiased, reliable and high-throughput quantification of protein aggregates. This may lead to significant improvements in treatment planning and intervention for neurodegenerative diseases.
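
    The robust diffusion step can be sketched directly: a Perona-Malik-type update in which the conductance is Tukey's biweight, applied here to a 2D image such as the stacked-slice image described above; sigma, the time step and the iteration count are illustrative choices.

```python
# Minimal sketch of anisotropic diffusion with Tukey's biweight edge-stopping
# function (robust diffusion in the sense of Black et al.), on a 2D image.
import numpy as np

def tukey_g(grad, sigma):
    """Tukey's biweight conductance: zero beyond sigma, smooth inside."""
    return np.where(np.abs(grad) <= sigma, (1.0 - (grad / sigma) ** 2) ** 2, 0.0)

def anisotropic_diffusion(image, sigma=20.0, dt=0.2, iterations=50):
    u = image.astype(float).copy()
    for _ in range(iterations):
        dN = np.roll(u, -1, 0) - u                 # differences toward 4 neighbours
        dS = np.roll(u, 1, 0) - u
        dE = np.roll(u, -1, 1) - u
        dW = np.roll(u, 1, 1) - u
        u += dt * (tukey_g(dN, sigma) * dN + tukey_g(dS, sigma) * dS +
                   tukey_g(dE, sigma) * dE + tukey_g(dW, sigma) * dW)
    return u
```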

  15. Image intensity standardization in 3D rotational angiography and its application to vascular segmentation

    NASA Astrophysics Data System (ADS)

    Bogunović, Hrvoje; Radaelli, Alessandro G.; De Craene, Mathieu; Delgado, David; Frangi, Alejandro F.

    2008-03-01

    Knowledge-based vascular segmentation methods typically rely on a pre-built training set of segmented images, which is used to estimate the probability of each voxel to belong to a particular tissue. In 3D Rotational Angiography (3DRA) the same tissue can correspond to different intensity ranges depending on the imaging device, settings and contrast injection protocol. As a result, pre-built training sets do not apply to all images and the best segmentation results are often obtained when the training set is built specifically for each individual image. We present an Image Intensity Standardization (IIS) method designed to ensure a correspondence between specific tissues and intensity ranges common to every image that undergoes the standardization process. The method applies a piecewise linear transformation to the image that aligns the intensity histogram to the histogram taken as reference. The reference histogram has been selected from a high quality image not containing artificial objects such as coils or stents. This is a pre-processing step that allows employing a training set built on a limited number of standardized images for the segmentation of standardized images which were not part of the training set. The effectiveness of the presented IIS technique in combination with a well-validated knowledge-based vasculature segmentation method is quantified on a variety of 3DRA images depicting cerebral arteries and intracranial aneurysms. The proposed IIS method offers a solution to the standardization of tissue classes in routine medical images and effectively improves automation and usability of knowledge-based vascular segmentation algorithms.
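
    The piecewise-linear mapping itself is compact; the sketch below aligns percentile landmarks of an input volume with those of a chosen reference histogram, in the spirit of Nyul-Udupa standardization; the landmark percentiles are illustrative, not the ones used in the paper.

```python
# Minimal sketch of piecewise-linear intensity standardization via histogram
# landmarks mapped onto a reference image's landmarks.
import numpy as np

LANDMARK_PCTS = (1, 10, 25, 50, 75, 90, 99)        # illustrative percentiles

def histogram_landmarks(image):
    return np.percentile(image, LANDMARK_PCTS)

def standardize(image, reference_landmarks):
    """Piecewise-linear remapping so the image landmarks align with the reference ones."""
    return np.interp(image, histogram_landmarks(image), reference_landmarks)

# usage: ref_lm = histogram_landmarks(reference_volume)
#        standardized = standardize(new_volume, ref_lm)
```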

  16. 2D/3D registration algorithm for lung brachytherapy

    SciTech Connect

    Zvonarev, P. S.; Farrell, T. J.; Hunter, R.; Wierzbicki, M.; Hayward, J. E.; Sur, R. K.

    2013-02-15

    Purpose: A 2D/3D registration algorithm is proposed for registering orthogonal x-ray images with a diagnostic CT volume for high dose rate (HDR) lung brachytherapy. Methods: The algorithm utilizes a rigid registration model based on a pixel/voxel intensity matching approach. To achieve accurate registration, a robust similarity measure combining normalized mutual information, image gradient, and intensity difference was developed. The algorithm was validated using a simple body and anthropomorphic phantoms. Transfer catheters were placed inside the phantoms to simulate the unique image features observed during treatment. The algorithm sensitivity to various degrees of initial misregistration and to the presence of foreign objects, such as ECG leads, was evaluated. Results: The mean registration error was 2.2 and 1.9 mm for the simple body and anthropomorphic phantoms, respectively. The error was comparable to the interoperator catheter digitization error of 1.6 mm. Preliminary analysis of data acquired from four patients indicated a mean registration error of 4.2 mm. Conclusions: Results obtained using the proposed algorithm are clinically acceptable especially considering the complications normally encountered when imaging during lung HDR brachytherapy.
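
    The normalized-mutual-information term of such a similarity measure can be sketched from the joint histogram of the two images; the gradient and intensity-difference terms of the combined measure are omitted here, and the bin count is an illustrative choice.

```python
# Minimal sketch of normalized mutual information (Studholme's NMI) between two
# images of equal size, e.g. a DRR and an x-ray projection.
import numpy as np

def normalized_mutual_information(fixed, moving, bins=64):
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))       # marginal entropies
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))  # joint entropy
    return (hx + hy) / hxy                              # maximal when images align
```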

  17. Improving segmentation of 3D touching cell nuclei using flow tracking on surface meshes.

    PubMed

    Li, Gang; Guo, Lei

    2012-01-01

    Automatic segmentation of touching cell nuclei in 3D microscopy images is of great importance in bioimage informatics and computational biology. This paper presents a novel method for improving 3D touching cell nuclei segmentation. Given binary touching nuclei by the method in Li et al. (2007), our method herein consists of several steps: surface mesh reconstruction and curvature information estimation; direction field diffusion on surface meshes; flow tracking on surface meshes; and projection of surface mesh segmentation to volumetric images. The method is validated on both synthesised and real 3D touching cell nuclei images, demonstrating its validity and effectiveness.

  18. Diffusive smoothing of 3D segmented medical data

    PubMed Central

    Patané, Giuseppe

    2014-01-01

    This paper proposes an accurate, computationally efficient, and spectrum-free formulation of the heat diffusion smoothing on 3D shapes, represented as triangle meshes. The idea behind our approach is to apply a (r,r)-degree Padé–Chebyshev rational approximation to the solution of the heat diffusion equation. The proposed formulation is equivalent to solving r sparse, symmetric linear systems, is free of user-defined parameters, and is robust to surface discretization. We also discuss a simple criterion to select the time parameter that provides the best compromise between approximation accuracy and smoothness of the solution. Finally, our experiments on anatomical data show that the spectrum-free approach greatly reduces the computational cost and guarantees a higher approximation accuracy than previous work. PMID:26257940
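
    The sketch below is not the paper's Padé–Chebyshev scheme; it is a simpler single backward-Euler step that only illustrates the underlying computation of diffusing a per-vertex signal by solving a sparse, symmetric linear system built from a mesh (graph) Laplacian. The uniform edge weights and the time parameter `t` are assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def graph_laplacian(n_vertices, edges):
    """Combinatorial Laplacian from an (m, 2) array of vertex index pairs."""
    i, j = edges[:, 0], edges[:, 1]
    w = np.ones(len(edges))
    adj = sp.coo_matrix((np.r_[w, w], (np.r_[i, j], np.r_[j, i])),
                        shape=(n_vertices, n_vertices)).tocsr()
    return sp.diags(np.asarray(adj.sum(axis=1)).ravel()) - adj

def diffuse(signal, edges, t=0.5):
    """One implicit diffusion step of a 1D per-vertex signal on the mesh graph."""
    L = graph_laplacian(len(signal), edges)
    # solve the sparse, symmetric positive definite system (I + t L) u = u0
    M = sp.eye(len(signal)) + t * L
    return spla.spsolve(M.tocsc(), np.asarray(signal, dtype=float))
```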

  19. 3D sensor algorithms for spacecraft pose determination

    NASA Astrophysics Data System (ADS)

    Trenkle, John M.; Tchoryk, Peter, Jr.; Ritter, Greg A.; Pavlich, Jane C.; Hickerson, Aaron S.

    2006-05-01

    Researchers at the Michigan Aerospace Corporation have developed accurate and robust 3-D algorithms for pose determination (position and orientation) of satellites as part of an on-going effort supporting autonomous rendezvous, docking and space situational awareness activities. 3-D range data from a LAser Detection And Ranging (LADAR) sensor is the expected input; however, the approach is unique in that the algorithms are designed to be sensor independent. Parameterized inputs allow the algorithms to be readily adapted to any sensor of opportunity. The cornerstone of our approach is the ability to simulate realistic range data that may be tailored to the specifications of any sensor. We were able to modify an open-source raytracing package to produce point cloud information from which high-fidelity simulated range images are generated. The assumptions made in our experimentation are as follows: 1) we have access to a CAD model of the target including information about the surface scattering and reflection characteristics of the components; 2) the satellite of interest may appear at any 3-D attitude; 3) the target is not necessarily rigid, but does have a limited number of configurations; and, 4) the target is not obscured in any way and is the only object in the field of view of the sensor. Our pose estimation approach then involves rendering a large number of exemplars (100k to 5M), extracting 2-D (silhouette- and projection-based) and 3-D (surface-based) features, and then training ensembles of decision trees to predict: a) the 4-D regions on a unit hypersphere into which the unit quaternion that represents the vehicle [Q_X, Q_Y, Q_Z, Q_W] is pointing, and, b) the components of that unit quaternion. Results have been quite promising and the tools and simulation environment developed for this application may also be applied to non-cooperative spacecraft operations, Autonomous Hazard Detection and Avoidance (AHDA) for landing craft, terrain mapping, vehicle

  20. 3D Segmentation with an application of level set-method using MRI volumes for image guided surgery.

    PubMed

    Bosnjak, A; Montilla, G; Villegas, R; Jara, I

    2007-01-01

    This paper proposes an innovation in image guided surgery based on a comparative study of three different segmentation methods. These methods are faster than manual segmentation of the images, with the advantage of using the same patient as the anatomical reference, which is more precise than a generic atlas. This new methodology for 3D information extraction is based on a processing chain structured into the following modules: 1) 3D filtering: the purpose is to preserve the contours of the structures and to smooth the homogeneous areas; several filters were tested and finally an anisotropic diffusion filter was used. 2) 3D segmentation: this module compares three different methods: a region growing algorithm, hand-assisted cubic splines, and a level set method. It then proposes a level set approach based on the front propagation method that allows the reconstruction of the internal walls of the anatomical structures of the brain. 3) 3D visualization: the new contribution of this work consists of the visualization of the segmented model and its use in pre-surgical planning.

  1. 3D MR ventricle segmentation in pre-term infants with post-hemorrhagic ventricle dilation

    NASA Astrophysics Data System (ADS)

    Qiu, Wu; Yuan, Jing; Kishimoto, Jessica; Chen, Yimin; de Ribaupierre, Sandrine; Chiu, Bernard; Fenster, Aaron

    2015-03-01

    Intraventricular hemorrhage (IVH), or bleeding within the brain, is a common condition among pre-term infants that occurs in very low birth weight preterm neonates. The prognosis is further worsened by the development of progressive ventricular dilatation, i.e., post-hemorrhagic ventricle dilation (PHVD), which occurs in 10-30% of IVH patients. In practice, accurately predicting PHVD and determining whether a specific patient with ventricular dilatation requires intervention depend on the ability to measure ventricular volume accurately. While monitoring of PHVD in infants is typically done by repeated US and not MRI, once the patient has been treated, follow-up over the lifetime of the patient is done by MRI. While manual segmentation is still seen as the gold standard, it is extremely time consuming, and therefore not feasible in a clinical context, and it also has large inter- and intra-observer variability. This paper proposes a segmentation algorithm to extract the cerebral ventricles from 3D T1-weighted MR images of pre-term infants with PHVD. The proposed segmentation algorithm makes use of a convex optimization technique combined with learned priors of image intensities and a label probabilistic map, which is built from a multi-atlas registration scheme. Leave-one-out cross validation using 7 PHVD patient T1-weighted MR images showed that the proposed method yielded a mean DSC of 89.7% +/- 4.2%, a MAD of 2.6 +/- 1.1 mm, a MAXD of 17.8 +/- 6.2 mm, and a VD of 11.6% +/- 5.9%, suggesting good agreement with manual segmentations.
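
    For reference, the two simplest of the reported metrics can be computed from binary masks as in the minimal sketch below; the voxel volume is an assumed input, and the surface-distance metrics (MAD, MAXD) are not shown.

```python
import numpy as np

def dice_coefficient(seg, ref):
    """DSC between two binary masks of the same shape."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    return 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())

def volume_difference(seg, ref, voxel_volume_mm3=1.0):
    """Relative volume difference (VD), as a percentage of the reference volume."""
    v_seg = seg.astype(bool).sum() * voxel_volume_mm3
    v_ref = ref.astype(bool).sum() * voxel_volume_mm3
    return 100.0 * abs(v_seg - v_ref) / v_ref
```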

  2. Automatic 3D segmentation of spinal cord MRI using propagated deformable models

    NASA Astrophysics Data System (ADS)

    De Leener, B.; Cohen-Adad, J.; Kadoury, S.

    2014-03-01

    Spinal cord diseases or injuries can cause dysfunction of the sensory and locomotor systems. Segmentation of the spinal cord provides measures of atrophy and allows group analysis of multi-parametric MRI via inter-subject registration to a template. All these measures were shown to improve diagnosis and surgical intervention. We developed a framework to automatically segment the spinal cord on T2-weighted MR images, based on the propagation of a deformable model. The algorithm is divided into three parts: first, an initialization step detects the spinal cord position and orientation by using the elliptical Hough transform on multiple adjacent axial slices to produce an initial tubular mesh. Second, a low-resolution deformable model is iteratively propagated along the spinal cord. To deal with highly variable contrast levels between the spinal cord and the cerebrospinal fluid, the deformation is coupled with a contrast adaptation at each iteration. Third, a refinement process and a global deformation are applied on the low-resolution mesh to provide an accurate segmentation of the spinal cord. Our method was evaluated against a semi-automatic edge-based snake method implemented in ITK-SNAP (with heavy manual adjustment) by computing the 3D Dice coefficient, mean and maximum distance errors. Accuracy and robustness were assessed from 8 healthy subjects. Each subject had two volumes: one at the cervical and one at the thoracolumbar region. Results show a precision of 0.30 +/- 0.05 mm (mean absolute distance error) in the cervical region and 0.27 +/- 0.06 mm in the thoracolumbar region. The 3D Dice coefficient was 0.93 for both regions.

  3. CellSegm - a MATLAB toolbox for high-throughput 3D cell segmentation.

    PubMed

    Hodneland, Erlend; Kögel, Tanja; Frei, Dominik Michael; Gerdes, Hans-Hermann; Lundervold, Arvid

    2013-08-09

    The application of fluorescence microscopy in cell biology often generates a huge amount of imaging data. Automated whole cell segmentation of such data enables the detection and analysis of individual cells, where a manual delineation is often time consuming, or practically not feasible. Furthermore, compared to manual analysis, automation normally has a higher degree of reproducibility. CellSegm, the software presented in this work, is a Matlab based command line software toolbox providing an automated whole cell segmentation of images showing surface stained cells, acquired by fluorescence microscopy. It has options for both fully automated and semi-automated cell segmentation. Major algorithmic steps are: (i) smoothing, (ii) Hessian-based ridge enhancement, (iii) marker-controlled watershed segmentation, and (iv) feature-based classification of cell candidates. Using a wide selection of image recordings and code snippets, we demonstrate that CellSegm has the ability to detect various types of surface stained cells in 3D. After detection and outlining of individual cells, the cell candidates can be subject to software based analysis, specified and programmed by the end-user, or they can be analyzed by other software tools. A segmentation of tissue samples with appropriate characteristics is also shown to be resolvable in CellSegm. The command-line interface of CellSegm facilitates scripting of the separate tools, all implemented in Matlab, offering a high degree of flexibility and tailored workflows for the end-user. The modularity and scripting capabilities of CellSegm enable automated workflows and quantitative analysis of microscopic data, suited for high-throughput image based screening.
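
    CellSegm itself is a MATLAB toolbox; purely to illustrate the same pipeline (smoothing, Hessian-based ridge enhancement, marker-controlled watershed, simple candidate filtering), an analogous scikit-image sketch might look like the following. The threshold quantile, minimum size and the use of Otsu thresholding for the foreground mask are assumed choices, not part of the published method.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, segmentation

def segment_surface_stained(volume, sigma=1.0, marker_quantile=0.99, min_size=500):
    smoothed = filters.gaussian(volume, sigma=sigma)                    # (i) smoothing
    ridges = filters.sato(smoothed, black_ridges=False)                 # (ii) Hessian-based ridge enhancement
    fg = smoothed > filters.threshold_otsu(smoothed)                    # rough foreground mask
    markers, _ = ndi.label(smoothed > np.quantile(smoothed, marker_quantile))
    labels = segmentation.watershed(ridges, markers, mask=fg)           # (iii) marker-controlled watershed
    # (iv) stand-in for feature-based classification: keep sufficiently large candidates
    sizes = np.bincount(labels.ravel())
    keep = np.nonzero(sizes >= min_size)[0]
    keep = keep[keep != 0]
    return np.where(np.isin(labels, keep), labels, 0)
```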

  4. Segmentation of complex objects with non-spherical topologies from volumetric medical images using 3D livewire

    NASA Astrophysics Data System (ADS)

    Poon, Kelvin; Hamarneh, Ghassan; Abugharbieh, Rafeef

    2007-03-01

    Segmentation of 3D data is one of the most challenging tasks in medical image analysis. While reliable automatic methods are typically preferred, their success is often hindered by poor image quality and significant variations in anatomy. Recent years have thus seen an increasing interest in the development of semi-automated segmentation methods that combine computational tools with intuitive, minimal user interaction. In an earlier work, we introduced a highly automated technique for medical image segmentation, where a 3D extension of the traditional 2D Livewire was proposed. In this paper, we present an enhanced and more powerful 3D Livewire-based segmentation approach with new features designed to primarily enable the handling of complex object topologies that are common in biological structures. The point ordering algorithm we proposed earlier, which automatically pairs up seedpoints in 3D, is improved in this work such that multiple sets of points are allowed to simultaneously exist. Point sets can now be automatically merged and split to accommodate the presence of concavities, protrusions, and non-spherical topologies. The robustness of the method is further improved by extending the 'turtle algorithm', presented earlier, with a turtle-path pruning step. Tests on both synthetic and real medical images demonstrate the efficiency, reproducibility, accuracy, and robustness of the proposed approach. Among the examples illustrated is the segmentation of the left and right ventricles from a T1-weighted MRI scan, where an average task time reduction of 84.7% was achieved when compared to a user performing 2D Livewire segmentation on every slice.

  5. a Review of Point Clouds Segmentation and Classification Algorithms

    NASA Astrophysics Data System (ADS)

    Grilli, E.; Menna, F.; Remondino, F.

    2017-02-01

    Today 3D models and point clouds are very popular, being currently used in several fields, shared through the internet and even accessed on mobile phones. Despite their broad availability, there is still a relevant need for methods, preferably automatic, to provide 3D data with meaningful attributes that characterize and give significance to the objects represented in 3D. Segmentation is the process of grouping point clouds into multiple homogeneous regions with similar properties, whereas classification is the step that labels these regions. The main goal of this paper is to analyse the most popular methodologies and algorithms to segment and classify 3D point clouds. Strong and weak points of the different solutions presented in the literature or implemented in commercial software will be listed and shortly explained. For some algorithms, the results of segmentation and classification are shown using real examples at different scales in the Cultural Heritage field. Finally, open issues and research topics will be discussed.

  6. Hybrid atlas-based and image-based approach for segmenting 3D brain MRIs

    NASA Astrophysics Data System (ADS)

    Bueno, Gloria; Musse, Olivier; Heitz, Fabrice; Armspach, Jean-Paul

    2001-07-01

    This work is a contribution to the problem of localizing key cerebral structures in 3D MRIs and its quantitative evaluation. In pursuing it, the cooperation between an image-based segmentation method and a hierarchical deformable registration approach has been considered. The segmentation relies on two main processes: homotopy modification and contour decision. The first one is achieved by a marker extraction stage where homogeneous 3D regions of an image, I(s), from the data set are identified. These regions, M(I), are obtained by combining information from a deformable atlas, achieved by warping eight previously labeled maps onto I(s). Then, the goal of the decision stage is to precisely locate the contours of the 3D regions set by the markers. This contour decision is performed by a 3D extension of the watershed transform. The anatomical structures taken into consideration and embedded into the atlas are brain, ventricles, corpus callosum, cerebellum, right and left hippocampus, medulla and midbrain. The hybrid method operates fully automatically and in 3D, successfully providing segmented brain structures. The quality of the segmentation has been studied in terms of the detected volume ratio by using kappa statistics and ROC analysis. Results of the method are shown and validated on a 3D MRI phantom. This study forms part of an on-going long-term research effort aiming at the creation of a 3D probabilistic multi-purpose anatomical brain atlas.

  7. 3D automatic anatomy segmentation based on iterative graph-cut-ASM

    SciTech Connect

    Chen, Xinjian; Bagci, Ulas

    2011-08-15

    Purpose: This paper studies the feasibility of developing an automatic anatomy segmentation (AAS) system in clinical radiology and demonstrates its operation on clinical 3D images. Methods: The AAS system the authors are developing consists of two main parts: object recognition and object delineation. As for recognition, a hierarchical 3D scale-based multiobject method is used for the multiobject recognition task, which incorporates intensity weighted ball-scale (b-scale) information into the active shape model (ASM). For object delineation, an iterative graph-cut-ASM (IGCASM) algorithm is proposed, which effectively combines the rich statistical shape information embodied in ASM with the globally optimal delineation capability of the GC method. The presented IGCASM algorithm is a 3D generalization of the 2D GC-ASM method that they proposed previously in Chen et al. [Proc. SPIE, 7259, 72590C1-72590C-8 (2009)]. The proposed methods are tested on two datasets comprising images obtained from 20 patients (10 male and 10 female) of clinical abdominal CT scans, and 11 foot magnetic resonance imaging (MRI) scans. The test covers segmentation of four organs (liver, left and right kidneys, and spleen) and five foot bones (calcaneus, tibia, cuboid, talus, and navicular). The recognition and delineation accuracies were evaluated separately. The recognition accuracy was evaluated in terms of translation, rotation, and scale (size) error. The delineation accuracy was evaluated in terms of true and false positive volume fractions (TPVF, FPVF). The efficiency of the delineation method was also evaluated on an Intel Pentium IV PC with a 3.4 GHz CPU. Results: The recognition accuracies in terms of translation, rotation, and scale error over all organs are about 8 mm, 10 deg. and 0.03, and over all foot bones are about 3.5709 mm, 0.35 deg. and 0.025, respectively. The accuracy of delineation over all organs for all subjects as expressed in TPVF and FPVF is 93.01% and 0.22%, and

  8. Combining 2D wavelet edge highlighting and 3D thresholding for lung segmentation in thin-slice CT.

    PubMed

    Korfiatis, P; Skiadopoulos, S; Sakellaropoulos, P; Kalogeropoulou, C; Costaridou, L

    2007-12-01

    The first step in lung analysis by CT is the identification of the lung border. To deal with the increased number of sections per scan in thin-slice multidetector CT, it has been crucial to develop accurate and automated lung segmentation algorithms. In this study, an automated method for lung segmentation of thin-slice CT data is presented. The method exploits the advantages of a two-dimensional wavelet edge-highlighting step in lung border delineation. Lung volume segmentation is achieved with three-dimensional (3D) grey level thresholding, using a minimum error technique. 3D thresholding, combined with the wavelet pre-processing step, successfully deals with lung border segmentation challenges, such as anterior or posterior junction lines and juxtapleural nodules. Finally, to deal with mediastinum border under-segmentation, 3D morphological closing with a spherical structural element is applied. The performance of the proposed method is quantitatively assessed on a dataset originating from the Lung Imaging Database Consortium (LIDC) by comparing automatically derived borders with the manually traced ones. Segmentation performance, averaged over left and right lung volumes, for lung volume overlap is 0.983+/-0.008, whereas for shape differentiation in terms of mean distance it is 0.770+/-0.251 mm (root mean square distance is 0.520+/-0.008 mm; maximum distance is 3.327+/-1.637 mm). The effect of the wavelet pre-processing step was assessed by comparing the proposed method with the 3D thresholding technique (applied on original volume data). This yielded statistically significant differences for all segmentation metrics (p<0.01). Results demonstrate an accurate method that could be used as a first step in computer lung analysis by CT.

  9. From Tls Point Clouds to 3d Models of Trees: a Comparison of Existing Algorithms for 3d Tree Reconstruction

    NASA Astrophysics Data System (ADS)

    Bournez, E.; Landes, T.; Saudreau, M.; Kastendeuch, P.; Najjar, G.

    2017-02-01

    3D models of tree geometry are important for numerous studies, such as urban planning or agricultural studies. In climatology, tree models can be necessary for simulating the cooling effect of trees by estimating their evapotranspiration. The literature shows that the more accurate the 3D structure of a tree is, the more accurate microclimate models are. This is the reason why, since 2013, we have been developing an algorithm for the reconstruction of trees from terrestrial laser scanner (TLS) data, which we call TreeArchitecture. Meanwhile, new promising algorithms dedicated to tree reconstruction have emerged in the literature. In this paper, we assess the capacity of our algorithm and of two others - PlantScan3D and SimpleTree - to reconstruct the 3D structure of trees. The aim of this reconstruction is to be able to characterize the geometric complexity of trees, with different heights, sizes and shapes of branches. Based on a specific surveying workflow with a TLS, we have acquired dense point clouds of six different urban trees, with specific architectures, before reconstructing them with each algorithm. Finally, qualitative and quantitative assessments of the models are performed using reference tree reconstructions and field measurements. Based on this assessment, the advantages and the limits of every reconstruction algorithm are highlighted. Overall, very satisfactory results can be achieved for 3D reconstructions of tree topology as well as of tree volume.

  10. Segmented images and 3D images for studying the anatomical structures in MRIs

    NASA Astrophysics Data System (ADS)

    Lee, Yong Sook; Chung, Min Suk; Cho, Jae Hyun

    2004-05-01

    For identifying the pathological findings in MRIs, the anatomical structures in MRIs should be identified in advance. For studying the anatomical structures in MRIs, an educational tool that includes the horizontal, coronal, sagittal MRIs of the entire body, corresponding segmented images, 3D images, and browsing software is necessary. Such an educational tool, however, is hard to obtain. Therefore, in this research, such an educational tool which helps medical students and doctors study the anatomical structures in MRIs was made as follows. A healthy, young Korean male adult with standard body shape was selected. Six hundred thirteen horizontal MRIs of the entire body were scanned and input into a personal computer. Sixty anatomical structures in the horizontal MRIs were segmented to make horizontal segmented images. Coronal, sagittal MRIs and coronal, sagittal segmented images were made. 3D images of anatomical structures in the segmented images were reconstructed by a surface rendering method. Browsing software for the MRIs, segmented images, and 3D images was developed. This educational tool that includes horizontal, coronal, sagittal MRIs of the entire body, corresponding segmented images, 3D images, and browsing software is expected to help medical students and doctors study anatomical structures in MRIs.

  11. User-guided segmentation of preterm neonate ventricular system from 3-D ultrasound images using convex optimization.

    PubMed

    Qiu, Wu; Yuan, Jing; Kishimoto, Jessica; McLeod, Jonathan; Chen, Yimin; de Ribaupierre, Sandrine; Fenster, Aaron

    2015-02-01

    A three-dimensional (3-D) ultrasound (US) system has been developed to monitor the intracranial ventricular system of preterm neonates with intraventricular hemorrhage (IVH) and the resultant dilation of the ventricles (ventriculomegaly). To measure ventricular volume from 3-D US images, a semi-automatic convex optimization-based approach is proposed for segmentation of the cerebral ventricular system in preterm neonates with IVH from 3-D US images. The proposed semi-automatic segmentation method makes use of the convex optimization technique supervised by user-initialized information. Experiments using 58 patient 3-D US images reveal that our proposed approach yielded a mean Dice similarity coefficient of 78.2% compared with the surfaces that were manually contoured, suggesting good agreement between these two segmentations. Additional metrics, the mean absolute distance of 0.65 mm and the maximum absolute distance of 3.2 mm, indicated small distance errors for a voxel spacing of 0.22 × 0.22 × 0.22 mm³. The Pearson correlation coefficient (r = 0.97, p < 0.001) indicated a significant correlation of algorithm-generated ventricular system volume (VSV) with the manually generated VSV. The calculated minimal detectable difference in ventricular volume change indicated that the proposed segmentation approach with 3-D US images is capable of detecting a VSV difference of 6.5 cm³ with 95% confidence, suggesting that this approach might be used for monitoring IVH patients' ventricular changes using 3-D US imaging. The mean segmentation times of the graphics processing unit (GPU)- and central processing unit-implemented algorithms were 50 ± 2 and 205 ± 5 s for one 3-D US image, respectively, in addition to 120 ± 10 s for initialization, less than the approximately 35 min required by manual segmentation. In addition, repeatability experiments indicated that the intra-observer variability ranges from 6.5% to 7.5%, and the inter-observer variability is 8.5% in terms

  12. Acquisition and automated 3-D segmentation of respiratory/cardiac-gated PET transmission images

    SciTech Connect

    Reutter, B.W.; Klein, G.J.; Brennan, K.M.; Huesman, R.H.

    1996-12-31

    To evaluate the impact of respiratory motion on attenuation correction of cardiac PET data, we acquired and automatically segmented gated transmission data for a dog breathing on its own under gas anesthesia. Data were acquired for 20 min on a CTI/Siemens ECAT EXACT HR (47-slice) scanner configured for 12 gates in a static study. Two respiratory gates were obtained using data from a pneumatic bellows placed around the dog's chest, in conjunction with 6 cardiac gates from standard EKG gating. Both signals were directed to a LabVIEW-controlled Macintosh, which translated them into one of 12 gate addresses. The respiratory gating threshold was placed near end-expiration to acquire 6 cardiac-gated datasets at end-expiration and 6 cardiac-gated datasets during breaths. Breaths occurred about once every 10 sec and lasted about 1-1.5 sec. For each respiratory gate, data were summed over cardiac gates and torso and lung surfaces were segmented automatically using a differential 3-D edge detection algorithm. Three-dimensional visualizations showed that lung surfaces adjacent to the heart translated 9 mm inferiorly during breaths. Our results suggest that respiration-compensated attenuation correction is feasible with a modest amount of gated transmission data and is necessary for accurate quantitation of high-resolution gated cardiac PET data.

  13. 3D watershed-based segmentation of internal structures within MR brain images

    NASA Astrophysics Data System (ADS)

    Bueno, Gloria; Musse, Olivier; Heitz, Fabrice; Armspach, Jean-Paul

    2000-06-01

    In this paper an image-based method founded on mathematical morphology is presented in order to facilitate the segmentation of cerebral structures on 3D magnetic resonance images (MRIs). The segmentation is described as an immersion simulation, applied to the modified gradient image, modeled by a generated 3D region adjacency graph (RAG). The segmentation relies on two main processes: homotopy modification and contour decision. The first one is achieved by a marker extraction stage where homogeneous 3D regions are identified in order to attribute an influence zone only to relevant minima of the image. This stage uses contrasted regions from morphological reconstruction and labeled flat regions constrained by the RAG. The goal of the decision stage is to precisely locate the contours of regions detected by the marker extraction. This decision is performed by a 3D extension of the watershed transform. Upon completion of the segmentation, the outcome of the preceding process is presented to the user for manual selection of the structures of interest (SOI). Results of this approach are described and illustrated with examples of segmented 3D MRIs of the human head.
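
    A schematic analogue of the two stages described above, not the authors' implementation, can be written with scikit-image: markers are extracted from sufficiently contrasted minima of the gradient image (homotopy modification), and the contour decision is made by a marker-controlled watershed. The contrast parameter `h` is an assumed value, and the RAG-based constraints of the published method are not reproduced.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, morphology, segmentation

def watershed_internal_structures(volume, h=0.05):
    """Marker-controlled watershed of a 3D MR volume (intensities assumed scaled to [0, 1])."""
    gradient = filters.sobel(volume)                      # gradient magnitude image
    # homotopy modification: keep only minima whose depth (contrast) exceeds h
    markers, _ = ndi.label(morphology.h_minima(gradient, h))
    # contour decision: watershed of the gradient image from the selected markers
    return segmentation.watershed(gradient, markers)
```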

  14. Supervised recursive segmentation of volumetric CT images for 3D reconstruction of lung and vessel tree.

    PubMed

    Li, Xuanping; Wang, Xue; Dai, Yixiang; Zhang, Pengbo

    2015-12-01

    Three-dimensional reconstruction of the lung and vessel tree has great significance for 3D observation and quantitative analysis of lung diseases. This paper presents non-sheltered 3D models of the lung and vessel tree based on a supervised semi-3D lung tissue segmentation method. A recursive strategy based on a geometric active contour is proposed, instead of the "coarse-to-fine" framework in the existing literature, to extract lung tissues from the volumetric CT slices. In this model, the segmentation of the current slice is supervised by the result of the previous slice, owing to the slight changes in lung tissue between adjacent slices. Through this mechanism, lung tissues in all the slices are segmented fast and accurately. The serious problems of left and right lung fusion caused by partial volume effects and of segmenting pleural nodules are also resolved during the semi-3D process. The proposed scheme is evaluated on fifteen scans, from eight healthy participants and seven participants suffering from early-stage lung tumors. The results validate the good performance of the proposed method compared with the "coarse-to-fine" framework. The segmented datasets are utilized to reconstruct the non-sheltered 3D models of the lung and vessel tree.

  15. 3D TEM reconstruction and segmentation process of laminar bio-nanocomposites

    SciTech Connect

    Iturrondobeitia, M.; Okariz, A.; Fernandez-Martinez, R.; Jimbert, P.; Guraya, T.; Ibarretxe, J.

    2015-03-30

    The microstructure of laminar bio-nanocomposites (poly(lactic acid) (PLA)/clay) depends on the amount of clay platelet opening after integration with the polymer matrix and determines the final properties of the material. The transmission electron microscopy (TEM) technique is the only one that can provide a direct observation of the layer dispersion and the degree of exfoliation. However, the orientation of the clay platelets, which affects the final properties, is practically immeasurable from a single 2D TEM image. This issue can be overcome using transmission electron tomography (ET), a technique that allows the complete 3D characterization of the structure, including the measurement of the orientation of clay platelets, their morphology and their 3D distribution. ET involves a 3D reconstruction of the study volume and a subsequent segmentation of the study object. Currently, accurate segmentation is performed manually, which is inefficient and tedious. The aim of this work is to propose an objective, automated segmentation methodology for 3D TEM tomography reconstructions. In this method the segmentation threshold is optimized by minimizing the variation of the dimensions of the segmented objects and matching the segmented V_clay (%) to the actual one. The method is first validated using a fictitious set of objects, and then applied to a nanocomposite.

  16. Segmentation of Blood Vessels and 3D Representation of CMR Image

    NASA Astrophysics Data System (ADS)

    Jiji, G. W.

    2013-06-01

    Current cardiac magnetic resonance imaging (CMR) technology allows the determination of patient-individual coronary tree structure, detection of infarctions, and assessment of myocardial perfusion. The purpose of this work is to segment the heart blood vessels and visualize them in 3D. In this work, 3D visualisation of the vessels was performed in four phases. The first step is to detect the tubular structures using a multiscale medialness function, which distinguishes tube-like structures from other structures. The second step is to extract the centrelines of the tubes; from the centreline radius the cylindrical tube model is constructed. The third step is segmentation of the tubular structures, in which the cylindrical tube model is used. The fourth step is the 3D representation of the tubular structure using volume rendering. The proposed approach was applied to 10 patient datasets from the clinical routine, and the results were reviewed with radiologists.

  17. Methods for 2-D and 3-D Endobronchial Ultrasound Image Segmentation.

    PubMed

    Zang, Xiaonan; Bascom, Rebecca; Gilbert, Christopher; Toth, Jennifer; Higgins, William

    2016-07-01

    Endobronchial ultrasound (EBUS) is now commonly used for cancer-staging bronchoscopy. Unfortunately, EBUS is challenging to use and interpreting EBUS video sequences is difficult. Other ultrasound imaging domains, hampered by related difficulties, have benefited from computer-based image-segmentation methods. Yet, so far, no such methods have been proposed for EBUS. We propose image-segmentation methods for 2-D EBUS frames and 3-D EBUS sequences. Our 2-D method adapts the fast-marching level-set process, anisotropic diffusion, and region growing to the problem of segmenting 2-D EBUS frames. Our 3-D method builds upon the 2-D method while also incorporating the geodesic level-set process for segmenting EBUS sequences. Tests with lung-cancer patient data showed that the methods ran fully automatically for nearly 80% of test cases. For the remaining cases, the only user-interaction required was the selection of a seed point. When compared to ground-truth segmentations, the 2-D method achieved an overall Dice index of 90.0% ± 4.9%, while the 3-D method achieved an overall Dice index of 83.9% ± 6.0%. In addition, the computation time (2-D, 0.070 s/frame; 3-D, 0.088 s/frame) was two orders of magnitude faster than interactive contour definition. Finally, we demonstrate the potential of the methods for EBUS localization in a multimodal image-guided bronchoscopy system.

  18. Split Bregman's algorithm for three-dimensional mesh segmentation

    NASA Astrophysics Data System (ADS)

    Habiba, Nabi; Ali, Douik

    2016-05-01

    Variational methods have attracted a lot of attention in the literature, especially for image and mesh segmentation. These methods aim at minimizing an energy to optimize both edge and region detection. We propose a spectral mesh decomposition algorithm to obtain disjoint but meaningful regions of an input mesh. The related optimization problem is nonconvex, and it is very difficult to find a good approximation or global optimum, which represents a challenge in computer vision. We propose an alternating split Bregman algorithm for mesh segmentation, in which we extend the image-dedicated model to a three-dimensional (3-D) mesh formulation. By applying our scheme to 3-D mesh segmentation, we obtain fast solvers that can outperform various conventional ones, such as graph-cut and primal dual methods. A consistent evaluation of the proposed method on various public domain 3-D databases for different metrics is presented, and a comparison with the state of the art is performed.

  19. Estimation of 3-D pore network coordination number of rocks from watershed segmentation of a single 2-D image

    NASA Astrophysics Data System (ADS)

    Rabbani, Arash; Ayatollahi, Shahab; Kharrat, Riyaz; Dashti, Nader

    2016-08-01

    In this study, we have utilized 3-D micro-tomography images of real and synthetic rocks to introduce two mathematical correlations which estimate the distribution parameters of the 3-D coordination number using a single 2-D cross-sectional image. By applying a watershed segmentation algorithm, it is found that the distribution of the 3-D coordination number is acceptably predictable by statistical analysis of the network extracted from 2-D images. In this study, we have utilized 25 volumetric images of rocks in order to propose two mathematical formulas. These formulas aim to approximate the average and standard deviation of the coordination number in 3-D pore networks. Then, the formulas are applied to five independent test samples to evaluate the reliability. Finally, pore network flow modeling is used to find the error of absolute permeability prediction using estimated and measured coordination numbers. Results show that the 2-D images are considerably informative about the 3-D network of the rocks and can be utilized to approximate the 3-D connectivity of the porous spaces with a coefficient of determination of about 0.85, which seems acceptable considering the variety of the studied samples.

  20. Automated segmentation of 3-D spectral OCT retinal blood vessels by neural canal opening false positive suppression.

    PubMed

    Hu, Zhihong; Niemeijer, Meindert; Abràmoft, Michael D; Lee, Kyungmoo; Garvin, Mona K

    2010-01-01

    We present a method for automatically segmenting the blood vessels in optic nerve head (ONH) centered spectral-domain optical coherence tomography (SD-OCT) volumes, with a focus on the ability to segment the vessels in the region near the neural canal opening (NCO). The algorithm first pre-segments the NCO using a graph-theoretic approach. Oriented Gabor wavelets rotated around the center of the NCO are applied to extract features in a 2-D vessel-aimed projection image. Corresponding oriented NCO-based templates are utilized to help suppress the false positive tendency near the NCO boundary. The vessels are identified in a vessel-aimed projection image using a pixel classification algorithm. Based on the 2-D vessel profiles, 3-D vessel segmentation is performed by a triangular-mesh-based graph search approach in the SD-OCT volume. The segmentation method is trained on 5 and tested on 10 randomly chosen independent ONH-centered SD-OCT volumes from 15 subjects with glaucoma. Using ROC analysis, for the 2-D vessel segmentation, we demonstrate an improvement over the closest previous work with an area under the curve (AUC) of 0.81 (0.72 for the previously reported approach) for the region around the NCO and 0.84 for the region outside the NCO (0.81 for the previously reported approach).
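
    As an illustration of oriented Gabor feature extraction on a 2-D projection image (the published method additionally rotates the wavelets around the NCO centre, which is not reproduced here), a minimal sketch follows; the frequency and the number of orientations are assumed values.

```python
import numpy as np
from skimage.filters import gabor

def gabor_vessel_features(projection, frequency=0.1, n_orientations=8):
    """Per-pixel Gabor magnitude responses of a 2-D projection image, one channel per orientation."""
    features = []
    for theta in np.linspace(0, np.pi, n_orientations, endpoint=False):
        real, imag = gabor(projection, frequency=frequency, theta=theta)
        features.append(np.hypot(real, imag))   # magnitude response for this orientation
    return np.stack(features, axis=-1)
```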

  1. Estimation of regeneration coverage in a temperate forest by 3D segmentation using airborne laser scanning data

    NASA Astrophysics Data System (ADS)

    Amiri, Nina; Yao, Wei; Heurich, Marco; Krzystek, Peter; Skidmore, Andrew K.

    2016-10-01

    Forest understory and regeneration are important factors in sustainable forest management. However, understanding their spatial distribution in multilayered forests requires accurate and continuously updated field data, which are difficult and time-consuming to obtain. Therefore, cost-efficient inventory methods are required, and airborne laser scanning (ALS) is a promising tool for obtaining such information. In this study, we examine a clustering-based 3D segmentation in combination with ALS data for regeneration coverage estimation in a multilayered temperate forest. The core of our method is a two-tiered segmentation of the 3D point clouds into segments associated with regeneration trees. First, small parts of trees (super-voxels) are constructed through mean shift clustering, a nonparametric procedure for finding the local maxima of a density function. In the second step, we form a graph based on the mean shift clusters and merge them into larger segments using the normalized cut algorithm. These segments are used to obtain regeneration coverage of the target plot. Results show that, based on validation data from field inventory and terrestrial laser scanning (TLS), our approach correctly estimates up to 70% of regeneration coverage across the plots with different properties, such as tree height and tree species. The proposed method is negatively impacted by the density of the overstory because of decreasing ground point density. In addition, the estimated coverage has a strong relationship with the overstory tree species composition.
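
    The first tier of the segmentation, grouping ALS points into small super-voxels by mean shift clustering, can be sketched as below; the bandwidth is an assumed value, and the subsequent normalized-cut merging step is not shown.

```python
import numpy as np
from sklearn.cluster import MeanShift

def supervoxels(points_xyz, bandwidth=1.0):
    """Cluster an (n, 3) array of ALS point coordinates (metres) into super-voxels."""
    ms = MeanShift(bandwidth=bandwidth, bin_seeding=True)
    labels = ms.fit_predict(points_xyz)          # one cluster label per point
    return labels, ms.cluster_centers_           # cluster centres seed the later graph merging
```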

  2. Probabilistic intra-retinal layer segmentation in 3-D OCT images using global shape regularization.

    PubMed

    Rathke, Fabian; Schmidt, Stefan; Schnörr, Christoph

    2014-07-01

    With the introduction of spectral-domain optical coherence tomography (OCT), resulting in a significant increase in acquisition speed, the fast and accurate segmentation of 3-D OCT scans has become ever more important. This paper presents a novel probabilistic approach that models the appearance of retinal layers as well as the global shape variations of layer boundaries. Given an OCT scan, the full posterior distribution over segmentations is approximately inferred using a variational method enabling efficient probabilistic inference in terms of computationally tractable model components: Segmenting a full 3-D volume takes around a minute. Accurate segmentations demonstrate the benefit of using global shape regularization: We segmented 35 fovea-centered 3-D volumes with an average unsigned error of 2.46 ± 0.22 μm as well as 80 normal and 66 glaucomatous 2-D circular scans with errors of 2.92 ± 0.5 μm and 4.09 ± 0.98 μm, respectively. Furthermore, we utilized the inferred posterior distribution to rate the quality of the segmentation, point out potentially erroneous regions and discriminate normal from pathological scans. No pre- or postprocessing was required and we used the same set of parameters for all data sets, underlining the robustness and out-of-the-box nature of our approach.

  3. Soft computing approach to 3D lung nodule segmentation in CT.

    PubMed

    Badura, P; Pietka, E

    2014-10-01

    This paper presents a novel, multilevel approach to the segmentation of various types of pulmonary nodules in computed tomography studies. It is based on two branches of computational intelligence: fuzzy connectedness (FC) and evolutionary computation. First, the image and auxiliary data are prepared for the 3D FC analysis during the first stage of the algorithm - mask generation. Its main goal is to process some specific types of nodules connected to the pleura or vessels. It consists of some basic image processing operations as well as dedicated routines for the specific cases of nodules. Evolutionary computation is performed on the image and seed points in order to shorten the FC analysis and improve its accuracy. After the FC application, the remaining vessels are removed during the postprocessing stage. The method has been validated using the first dataset of studies acquired and described by the Lung Image Database Consortium (LIDC) and by its latest release - the LIDC-IDRI (Image Database Resource Initiative) database.

  4. A Hierarchical Building Segmentation in Digital Surface Models for 3D Reconstruction

    PubMed Central

    Yan, Yiming; Gao, Fengjiao; Deng, Shupei; Su, Nan

    2017-01-01

    In this study, a hierarchical method for segmenting buildings in a digital surface model (DSM), which is used in a novel framework for 3D reconstruction, is proposed. Most 3D reconstructions of buildings are model-based. However, the limitation of these methods is an overreliance on the completeness of the offline-constructed building models, and this completeness is not easily guaranteed since buildings in modern cities can be of a variety of types. Therefore, a model-free framework using a high-precision DSM and texture images of buildings was introduced. There are two key problems with this framework. The first one is how to accurately extract the buildings from the DSM. Most segmentation methods are limited by either terrain factors or the difficult choice of parameter settings. A level-set method is employed to roughly find the building regions in the DSM, and then a recently proposed ‘occlusions of random textures model’ is used to enhance the local segmentation of the buildings. The second problem is how to generate the facades of buildings. Synergizing with the corresponding texture images, we propose a roof-contour guided interpolation of building facades. The 3D reconstruction results achieved by airborne-like images and satellites are compared. Experiments show that the segmentation method performs well, that 3D reconstruction is easily carried out by our framework, and that better visualization results are obtained with airborne-like images, which can be further replaced by UAV images. PMID:28125018

  5. Segmentation of the central-chest lymph nodes in 3D MDCT images.

    PubMed

    Lu, Kongkuo; Higgins, William E

    2011-09-01

    Central-chest lymph nodes play a vital role in lung-cancer staging. The definition of lymph nodes from three-dimensional (3D) multidetector computed-tomography (MDCT) images, however, remains an open problem. We propose two methods for computer-based segmentation of the central-chest lymph nodes from a 3D MDCT scan: the single-section live wire and the single-click live wire. For the single-section live wire, the user first applies the standard live wire to a single two-dimensional (2D) section after which automated analysis completes the segmentation process. The single-click live wire is similar but is almost completely automatic. Ground-truth studies involving human 3D MDCT scans demonstrate the robustness, efficiency, and intra-observer and inter-observer reproducibility of the methods.

  6. Automatic 3D liver segmentation based on deep learning and globally optimized surface evolution.

    PubMed

    Hu, Peijun; Wu, Fa; Peng, Jialin; Liang, Ping; Kong, Dexing

    2016-12-21

    The detection and delineation of the liver from abdominal 3D computed tomography (CT) images are fundamental tasks in computer-assisted liver surgery planning. However, automatic and accurate segmentation, especially liver detection, remains challenging due to complex backgrounds, ambiguous boundaries, heterogeneous appearances and highly varied shapes of the liver. To address these difficulties, we propose an automatic segmentation framework based on 3D convolutional neural network (CNN) and globally optimized surface evolution. First, a deep 3D CNN is trained to learn a subject-specific probability map of the liver, which gives the initial surface and acts as a shape prior in the following segmentation step. Then, both global and local appearance information from the prior segmentation are adaptively incorporated into a segmentation model, which is globally optimized in a surface evolution way. The proposed method has been validated on 42 CT images from the public Sliver07 database and local hospitals. On the Sliver07 online testing set, the proposed method can achieve an overall score of 80.3 ± 4.5, yielding a mean Dice similarity coefficient of 97.25 ± 0.65%, and an average symmetric surface distance of 0.84 ± 0.25 mm. The quantitative validations and comparisons show that the proposed method is accurate and effective for clinical application.

  7. Automatic 3D liver segmentation based on deep learning and globally optimized surface evolution

    NASA Astrophysics Data System (ADS)

    Hu, Peijun; Wu, Fa; Peng, Jialin; Liang, Ping; Kong, Dexing

    2016-12-01

    The detection and delineation of the liver from abdominal 3D computed tomography (CT) images are fundamental tasks in computer-assisted liver surgery planning. However, automatic and accurate segmentation, especially liver detection, remains challenging due to complex backgrounds, ambiguous boundaries, heterogeneous appearances and highly varied shapes of the liver. To address these difficulties, we propose an automatic segmentation framework based on 3D convolutional neural network (CNN) and globally optimized surface evolution. First, a deep 3D CNN is trained to learn a subject-specific probability map of the liver, which gives the initial surface and acts as a shape prior in the following segmentation step. Then, both global and local appearance information from the prior segmentation are adaptively incorporated into a segmentation model, which is globally optimized in a surface evolution way. The proposed method has been validated on 42 CT images from the public Sliver07 database and local hospitals. On the Sliver07 online testing set, the proposed method can achieve an overall score of 80.3 ± 4.5, yielding a mean Dice similarity coefficient of 97.25 ± 0.65%, and an average symmetric surface distance of 0.84 ± 0.25 mm. The quantitative validations and comparisons show that the proposed method is accurate and effective for clinical application.
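
    A hedged sketch of how the average symmetric surface distance reported above can be computed from two binary masks is shown below; surface voxels are taken as the eroded boundary of each mask, and `spacing` is an assumed isotropic voxel size in mm.

```python
import numpy as np
from scipy import ndimage as ndi

def surface(mask):
    """Boundary voxels of a binary mask."""
    mask = mask.astype(bool)
    return mask & ~ndi.binary_erosion(mask)

def assd(seg, ref, spacing=1.0):
    """Average symmetric surface distance (mm) between two binary masks."""
    s_seg, s_ref = surface(seg), surface(ref)
    # distance of every voxel to the nearest surface voxel of the other mask
    d_to_ref = ndi.distance_transform_edt(~s_ref, sampling=spacing)
    d_to_seg = ndi.distance_transform_edt(~s_seg, sampling=spacing)
    return (d_to_ref[s_seg].sum() + d_to_seg[s_ref].sum()) / (s_seg.sum() + s_ref.sum())
```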

  8. From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data

    PubMed Central

    Tsai, Wen-Ting; Hassan, Ahmed; Sarkar, Purbasha; Correa, Joaquin; Metlagel, Zoltan; Jorgens, Danielle M.; Auer, Manfred

    2014-01-01

    Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data

  9. A Quantification of the 3D Modeling Capabilities of the Kinectfusion Algorithm

    DTIC Science & Technology

    2014-03-27

    Thesis (AFIT-ENG-14-M-40), Air Force Institute of Technology, by Jeremy M. Higbee, Captain, USAF: a quantification of the 3D modeling capabilities of the KinectFusion algorithm (only title-page text is available in this record).

  10. SU-E-T-356: Efficient Segmentation of Flattening Filter Free Photon Beamsfor 3D-Conformal SBRT Treatment Planning

    SciTech Connect

    Barbiere, J; Beninati, G; Ndlovu, A

    2015-06-15

    Purpose: It has been argued that a 3D-conformal technique (3DCRT) is suitable for SBRT due to its simplicity for non-coplanar planning and delivery. It has also been hypothesized that a high dose delivered in a short time can enhance indirect cell death due to vascular damage as well as limit intrafraction motion. Flattening Filter Free (FFF) photon beams are ideal for high dose rate treatment but their conical profiles are not ideal for 3DCRT. The purpose of our work is to present a method to efficiently segment an FFF beam for standard 3DCRT planning. Methods: A 10×10 cm Varian True Beam 6X FFF beam profile was analyzed using segmentation theory to determine the optimum segmentation intensity required to create an 8 cm uniform dose profile. Two segments were automatically created in sequence with a Varian Eclipse treatment planning system by converting isodoses corresponding to the calculated segmentation intensity to contours and applying the “fit and shield” tool. All segments were then added to the FFF beam to create a single merged field. Field blocking can be incorporated but was not used for clarity. Results: Calculation of the segmentation intensity using an algorithm originally proposed by Xia and Verhey indicated that each segment should extend to the 92% isodose. The original FFF beam with 100% at the isocenter at a depth of 10 cm was reduced to 80% at 4 cm from the isocenter; the segmented beam had ±2.5% uniformity up to 4.4 cm from the isocenter. An additional benefit of our method is a 50% decrease in the 80%-20% penumbra, to 0.6 cm compared with 1.2 cm in the original FFF beam. Conclusion: Creation of two optimum segments can flatten an FFF beam and also reduce its penumbra for clinical 3DCRT SBRT treatment.

  11. Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods

    SciTech Connect

    Beichel, Reinhard; Bornik, Alexander; Bauer, Christian; Sorantin, Erich

    2012-03-15

    Purpose: Liver segmentation is an important prerequisite for the assessment of liver cancer treatment options like tumor resection, image-guided radiation therapy (IGRT), radiofrequency ablation, etc. The purpose of this work was to evaluate a new approach for liver segmentation. Methods: A graph cuts segmentation method was combined with a three-dimensional virtual reality based segmentation refinement approach. The developed interactive segmentation system allowed the user to manipulate volume chunks and/or surfaces instead of 2D contours in cross-sectional images (i.e., slice-by-slice). The method was evaluated on twenty routinely acquired portal-phase contrast enhanced multislice computed tomography (CT) data sets. An independent reference was generated by utilizing a currently clinically utilized slice-by-slice segmentation method. After 1 h of introduction to the developed segmentation system, three experts were asked to segment all twenty data sets with the proposed method. Results: Compared to the independent standard, the relative volumetric segmentation overlap error averaged over all three experts and all twenty data sets was 3.74%. Liver segmentation required on average 16 min of user interaction per case. The calculated relative volumetric overlap errors were not found to be significantly different [analysis of variance (ANOVA) test, p = 0.82] between experts who utilized the proposed 3D system. In contrast, the time required by each expert for segmentation was found to be significantly different (ANOVA test, p = 0.0009). Major differences between generated segmentations and independent references were observed in areas where vessels enter or leave the liver and where no accepted criteria for defining liver boundaries exist. In comparison, slice-by-slice based generation of the independent standard utilizing a live wire tool took 70.1 min on average. A standard 2D segmentation refinement approach applied to all twenty data sets required on average 38.2 min of

  12. Framework for quantitative evaluation of 3D vessel segmentation approaches using vascular phantoms in conjunction with 3D landmark localization and registration

    NASA Astrophysics Data System (ADS)

    Wörz, Stefan; Hoegen, Philipp; Liao, Wei; Müller-Eschner, Matthias; Kauczor, Hans-Ulrich; von Tengg-Kobligk, Hendrik; Rohr, Karl

    2016-03-01

    We introduce a framework for quantitative evaluation of 3D vessel segmentation approaches using vascular phantoms. Phantoms are designed using a CAD system and created with a 3D printer, and comprise realistic shapes including branches and pathologies such as abdominal aortic aneurysms (AAA). To transfer ground truth information to the 3D image coordinate system, we use a landmark-based registration scheme utilizing fiducial markers integrated in the phantom design. For accurate 3D localization of the markers we developed a novel 3D parametric intensity model that is directly fitted to the markers in the images. We also performed a quantitative evaluation of different vessel segmentation approaches for a phantom of an AAA.
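
    The landmark-based registration step can be illustrated with a standard least-squares rigid (Kabsch-style) fit between matched fiducial positions; this is a generic sketch under assumed inputs, not the authors' specific scheme, and the fiducial localization by parametric intensity model fitting is not shown.

```python
import numpy as np

def rigid_landmark_registration(src, dst):
    """Least-squares rigid transform mapping (n, 3) landmarks `src` onto corresponding `dst`."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)                 # cross-covariance of centred point sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t                                         # a src point p maps to R @ p + t
```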

  13. Interactive 3D segmentation of the prostate in magnetic resonance images using shape and local appearance similarity analysis

    NASA Astrophysics Data System (ADS)

    Shahedi, Maysam; Fenster, Aaron; Cool, Derek W.; Romagnoli, Cesare; Ward, Aaron D.

    2013-03-01

    3D segmentation of the prostate in medical images is useful for prostate cancer diagnosis and therapy guidance, but is time-consuming to perform manually. Clinical translation of computer-assisted segmentation algorithms for this purpose requires a comprehensive and complementary set of evaluation metrics that are informative to the clinical end user. We have developed an interactive 3D prostate segmentation method for 1.5T and 3.0T T2-weighted magnetic resonance imaging (T2W MRI) acquired using an endorectal coil. We evaluated our method against manual segmentations of 36 3D images using complementary boundary-based (mean absolute distance; MAD), regional overlap (Dice similarity coefficient; DSC) and volume difference (ΔV) metrics. Our technique is based on inter-subject prostate shape and local boundary appearance similarity. In the training phase, we calculated a point distribution model (PDM) and a set of local mean intensity patches centered on the prostate border to capture shape and appearance variability. To segment an unseen image, we defined a set of rays - one corresponding to each of the mean intensity patches computed in training - emanating from the prostate centre. We used a radial-based search strategy and translated each mean intensity patch along its corresponding ray, selecting as a candidate the boundary point with the highest normalized cross correlation along each ray. These boundary points were then regularized using the PDM. For the whole gland, we measured a mean ± std MAD of 2.5 ± 0.7 mm, DSC of 80 ± 4%, and ΔV of 1.1 ± 8.8 cc. We also provided an anatomic breakdown of these metrics within the prostatic base, mid-gland, and apex.
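
    As an illustration of the radial search step described above, the sketch below translates a mean intensity patch along one ray from the prostate centre and keeps the position with the highest normalized cross-correlation. It is a minimal re-implementation for clarity, not the authors' code; the patch size and ray sampling are assumptions, and the PDM regularization of the candidates is omitted.

```python
import numpy as np

def normalized_cross_correlation(a, b):
    """Zero-mean normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_boundary_point(image, centre, direction, mean_patch, radii, half=2):
    """Translate a mean intensity patch along one ray from the prostate centre and
    return the point with the highest NCC (the boundary candidate for this ray)."""
    centre = np.asarray(centre, dtype=float)
    direction = np.asarray(direction, dtype=float)
    direction = direction / np.linalg.norm(direction)
    best_r, best_score = radii[0], -np.inf
    for r in radii:
        p = np.round(centre + r * direction).astype(int)
        lo, hi = p - half, p + half + 1
        if (lo < 0).any() or (hi > np.array(image.shape)).any():
            continue  # patch would fall outside the volume
        patch = image[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
        score = normalized_cross_correlation(patch, mean_patch)
        if score > best_score:
            best_r, best_score = r, score
    return centre + best_r * direction
```

    Candidates gathered from all rays would then be jointly regularized with the point distribution model before the final surface is produced.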

  14. 3D Visualization of Machine Learning Algorithms with Astronomical Data

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2016-01-01

    We present innovative machine learning (ML) methods using unsupervised clustering with minimum spanning trees (MSTs) to study 3D astronomical catalogs. Utilizing Python code to build trees based on galaxy catalogs, we can render the results with the visualization suite Blender to produce interactive 360 degree panoramic videos. The catalogs and their ML results can be explored in a 3D space using mobile devices, tablets or desktop browsers. We compare the statistics of the MST results to a number of machine learning methods relating to optimization and efficiency.
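
    A minimal sketch of the MST-based clustering idea, assuming SciPy is available: a minimum spanning tree is built over the 3D catalog positions and clusters are obtained by cutting edges longer than a chosen length. The cut length and the toy catalog are illustrative, and the Blender rendering step is omitted.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform

def mst_clusters(points, cut_length):
    """Build a minimum spanning tree over 3D points and split it into clusters
    by removing edges longer than `cut_length` (same units as the catalog)."""
    dists = squareform(pdist(points))            # dense pairwise distance matrix
    mst = minimum_spanning_tree(dists).toarray()
    mst[mst > cut_length] = 0.0                  # cut the long edges
    adjacency = np.maximum(mst, mst.T)           # symmetrize the remaining tree edges
    n_clusters, labels = connected_components(adjacency > 0, directed=False)
    return n_clusters, labels

# toy catalog: two well-separated groups of "galaxies"
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(10, 1, (50, 3))])
n, labels = mst_clusters(pts, cut_length=3.0)
print(n)  # expected: 2
```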

  15. 3D geometric split-merge segmentation of brain MRI datasets.

    PubMed

    Marras, Ioannis; Nikolaidis, Nikolaos; Pitas, Ioannis

    2014-05-01

    In this paper, a novel method for MRI volume segmentation based on region adaptive splitting and merging is proposed. The method, called Adaptive Geometric Split Merge (AGSM) segmentation, aims at finding complex geometrical shapes that consist of homogeneous geometrical 3D regions. In each volume splitting step, several splitting strategies are examined and the most appropriate is activated. A way to find the maximal homogeneity axis of the volume is also introduced. Along this axis, the volume splitting technique divides the entire volume in a number of large homogeneous 3D regions, while at the same time, it defines more clearly small homogeneous regions within the volume in such a way that they have greater probabilities of survival at the subsequent merging step. Region merging criteria are proposed to this end. The presented segmentation method has been applied to brain MRI medical datasets to provide segmentation results when each voxel is composed of one tissue type (hard segmentation). The volume splitting procedure does not require training data, while it demonstrates improved segmentation performance in noisy brain MRI datasets, when compared to the state of the art methods.
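
    A toy sketch of the splitting stage only, under stated assumptions: the volume is split recursively along its longest axis (rather than the maximal-homogeneity axis used by AGSM) until each block passes an intensity standard-deviation homogeneity test; the merging criteria are not reproduced, and the thresholds are placeholders.

```python
import numpy as np

def split_volume(volume, max_std=10.0, min_size=8):
    """Recursively split a 3D volume into blocks that are intensity-homogeneous
    (std below max_std) or too small to split further.  Returns a list of
    (tuple-of-slices, mean intensity) pairs describing the blocks."""
    def recurse(sl):
        block = volume[sl]
        axis = int(np.argmax(block.shape))            # toy choice: longest axis
        if block.std() <= max_std or block.shape[axis] <= min_size:
            return [(sl, float(block.mean()))]
        mid = block.shape[axis] // 2
        start = sl[axis].start
        left, right = list(sl), list(sl)
        left[axis] = slice(start, start + mid)
        right[axis] = slice(start + mid, start + block.shape[axis])
        return recurse(tuple(left)) + recurse(tuple(right))

    full = tuple(slice(0, s) for s in volume.shape)
    return recurse(full)

# toy example: a noisy volume containing one brighter cube
vol = np.random.default_rng(0).normal(100.0, 2.0, (64, 64, 64))
vol[16:32, 16:32, 16:32] += 50.0
blocks = split_volume(vol, max_std=5.0, min_size=8)
print(len(blocks), "homogeneous blocks")
```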

  16. Automatic 2D and 3D segmentation of liver from Computerised Tomography

    NASA Astrophysics Data System (ADS)

    Evans, Alun

    As part of the diagnosis of liver disease, a Computerised Tomography (CT) scan is taken of the patient, which the clinician then uses for assistance in determining the presence and extent of the disease. This thesis presents the background, methodology, results and future work of a project that employs automated methods to segment liver tissue. The clinical motivation behind this work is the desire to facilitate the diagnosis of liver disease such as cirrhosis or cancer, assist in volume determination for liver transplantation, and possibly assist in measuring the effect of any treatment given to the liver. Previous attempts at automatic segmentation of liver tissue have relied on 2D, low-level segmentation techniques, such as thresholding and mathematical morphology, to obtain the basic liver structure. The derived boundary can then be smoothed or refined using more advanced methods. The 2D results presented in this thesis improve greatly on this previous work by using a topology adaptive active contour model to accurately segment liver tissue from CT images. The use of conventional snakes for liver segmentation is difficult due to the presence of other organs closely surrounding the liver; this new technique avoids this problem by adding an inflationary force to the basic snake equation, and initialising the snake inside the liver. The concepts underlying the 2D technique are extended to 3D, and results of full 3D segmentation of the liver are presented. The 3D technique makes use of an inflationary active surface model which is adaptively reparameterised, according to its size and local curvature, in order that it may more accurately segment the organ. Statistical analysis of the accuracy of the segmentation is presented for 18 healthy liver datasets, and results of the segmentation of unhealthy livers are also shown. The novel work developed during the course of this project has possibilities for use in other areas of medical imaging research, for example the
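
    The inflationary force mentioned above corresponds, in the widely used balloon-snake formulation due to Cohen, to a pressure term along the outward normal added to the classical snake force; the thesis' exact parameterization may differ, so the following is the generic textbook form rather than a quotation:

```latex
% Classical snake energy (tension, rigidity, and image term)
E(\mathbf{v}) = \int_0^1 \Big( \alpha\,|\mathbf{v}'(s)|^2 + \beta\,|\mathbf{v}''(s)|^2 \Big)\,ds
              + \int_0^1 E_{\mathrm{ext}}\big(\mathbf{v}(s)\big)\,ds

% Balloon model: an inflation force along the outward unit normal n(s) is added,
% pushing a contour initialised inside the organ towards its boundary
\mathbf{F}\big(\mathbf{v}(s)\big) = k_1\,\mathbf{n}(s)
      \;-\; k\,\frac{\nabla E_{\mathrm{ext}}}{\lVert \nabla E_{\mathrm{ext}} \rVert}
```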

  17. Quantum Lattice Algorithms for 2D and 3D Magnetohydrodynamics

    DTIC Science & Technology

    2007-11-01

    Vahala (William & Mary) on both quantum and entropic lattice algorithms for the solution of nonlinear physics problems. Because of the extreme scalability of the algorithms that we have been developing, we were chosen for CAP-Phase II on the 9000-core SGI Altix at ASC. Subject terms: Nonlinear Physics; Quantum Lattice Algorithms; Entropic Lattice ...

  18. Fully automatic segmentation of the mitral leaflets in 3D transesophageal echocardiographic images using multi-atlas joint label fusion and deformable medial modeling.

    PubMed

    Pouch, A M; Wang, H; Takabe, M; Jackson, B M; Gorman, J H; Gorman, R C; Yushkevich, P A; Sehgal, C M

    2014-01-01

    Comprehensive visual and quantitative analysis of in vivo human mitral valve morphology is central to the diagnosis and surgical treatment of mitral valve disease. Real-time 3D transesophageal echocardiography (3D TEE) is a practical, highly informative imaging modality for examining the mitral valve in a clinical setting. To facilitate visual and quantitative 3D TEE image analysis, we describe a fully automated method for segmenting the mitral leaflets in 3D TEE image data. The algorithm integrates complementary probabilistic segmentation and shape modeling techniques (multi-atlas joint label fusion and deformable modeling with continuous medial representation) to automatically generate 3D geometric models of the mitral leaflets from 3D TEE image data. These models are unique in that they establish a shape-based coordinate system on the valves of different subjects and represent the leaflets volumetrically, as structures with locally varying thickness. In this work, expert image analysis is the gold standard for evaluating automatic segmentation. Without any user interaction, we demonstrate that the automatic segmentation method accurately captures patient-specific leaflet geometry at both systole and diastole in 3D TEE data acquired from a mixed population of subjects with normal valve morphology and mitral valve disease.

  19. Depth map coding using residual segmentation for 3D video system

    NASA Astrophysics Data System (ADS)

    Lee, Cheon; Ho, Yo-Sung

    2013-06-01

    Advanced 3D video systems employ multi-view video-plus-depth data to support the free-viewpoint navigation and comfortable 3D viewing; thus efficient depth map coding becomes an important issue. Unlike the color image, the depth map has a property that depth values of the inner part of an object are monotonic, but those of object boundaries change abruptly. Therefore, residual data generated by prediction errors around object boundaries consume many bits in depth map coding. Representing them with segment data can be better than the use of the conventional transformation around the boundary regions. In this paper, we propose an efficient depth map coding method using a residual segmentation instead of using transformation. The proposed residual segmentation divides residual data into two regions with a segment map and two mean values. If the encoder selects the proposed method in terms of rates, two quantized mean values and an index of the segment map are transmitted. Simulation results show significant gains of up to 10 dB compared to the state-of-the-art coders, such as JPEG2000 and H.264/AVC.
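
    The two-region residual representation can be illustrated with a small sketch: a residual block is split by a binary segment map and each region is replaced by one quantized mean. This is only a toy version under stated assumptions (a block-mean threshold and a fixed quantization step); the actual coder signals an index into its own set of candidate segment maps.

```python
import numpy as np

def encode_residual_block(residual, quant_step=4):
    """Toy residual segmentation: split a residual block into two regions with a
    binary segment map and represent each region by one quantized mean
    (instead of transform coding)."""
    segment_map = residual >= residual.mean()                       # two regions
    mean_hi = residual[segment_map].mean() if segment_map.any() else 0.0
    mean_lo = residual[~segment_map].mean() if (~segment_map).any() else 0.0
    q_hi = int(np.round(mean_hi / quant_step))                      # values to transmit
    q_lo = int(np.round(mean_lo / quant_step))
    return segment_map, q_hi, q_lo

def decode_residual_block(segment_map, q_hi, q_lo, quant_step=4):
    """Reconstruct the residual block from the segment map and the two means."""
    return np.where(segment_map, q_hi * quant_step, q_lo * quant_step).astype(float)
```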

  20. A shape prior-based MRF model for 3D masseter muscle segmentation

    NASA Astrophysics Data System (ADS)

    Majeed, Tahir; Fundana, Ketut; Lüthi, Marcel; Beinemann, Jörg; Cattin, Philippe

    2012-02-01

    Medical image segmentation is generally an ill-posed problem that can only be solved by incorporating prior knowledge. The ambiguities arise due to the presence of noise, weak edges, imaging artifacts, inhomogeneous interior and adjacent anatomical structures having similar intensity profile as the target structure. In this paper we propose a novel approach to segment the masseter muscle using the graph-cut incorporating additional 3D shape priors in CT datasets, which is robust to noise, artifacts, and shape deformations. The main contribution of this paper is in translating the 3D shape knowledge into both unary and pairwise potentials of the Markov Random Field (MRF). The segmentation task is cast as a Maximum-A-Posteriori (MAP) estimation of the MRF. Graph-cut is then used to obtain the global minimum which results in the segmentation of the masseter muscle. The method is tested on 21 CT datasets of the masseter muscle, which are noisy, with almost all possessing mild to severe imaging artifacts such as the high-density artifacts caused by, e.g., the very common dental fillings and dental implants. We show that the proposed technique produces clinically acceptable results for the challenging problem of muscle segmentation, and further provide a quantitative and qualitative comparison with other methods. We statistically show that adding an additional shape prior into both unary and pairwise potentials can increase the robustness of the proposed method in noisy datasets.

  1. Automatic hip cartilage segmentation from 3D MR images using arc-weighted graph searching.

    PubMed

    Xia, Ying; Chandra, Shekhar S; Engstrom, Craig; Strudwick, Mark W; Crozier, Stuart; Fripp, Jurgen

    2014-12-07

    Accurate segmentation of hip joint cartilage from magnetic resonance (MR) images offers opportunities for quantitative investigations of pathoanatomical conditions such as osteoarthritis. In this paper, we present a fully automatic scheme for the segmentation of the individual femoral and acetabular cartilage plates in the human hip joint from high-resolution 3D MR images. The developed scheme uses an improved optimal multi-object multi-surface graph search framework with an arc-weighted graph representation that incorporates prior morphological knowledge as a basis for segmentation of the individual femoral and acetabular cartilage plates despite weak or incomplete boundary interfaces. This automated scheme was validated against manual segmentations from 3D true fast imaging with steady-state precession (TrueFISP) MR examinations of the right hip joints in 52 asymptomatic volunteers. Compared with expert manual segmentations of the combined, femoral and acetabular cartilage volumes, the automatic scheme obtained mean (± standard deviation) Dice's similarity coefficients of 0.81 (± 0.03), 0.79 (± 0.03) and 0.72 (± 0.05). The corresponding mean absolute volume difference errors were 8.44% (± 6.36), 9.44% (± 7.19) and 9.05% (± 8.02). The mean absolute differences between manual and automated measures of cartilage thickness for femoral and acetabular cartilage plates were 0.13 mm (± 0.12) and 0.11 mm (± 0.11), respectively.

  2. Biview Learning for Human Posture Segmentation from 3D Points Cloud

    PubMed Central

    Qiao, Maoying; Cheng, Jun; Bian, Wei; Tao, Dacheng

    2014-01-01

    Posture segmentation plays an essential role in human motion analysis. The state-of-the-art method extracts sufficiently high-dimensional features from 3D depth images for each 3D point and learns an efficient body part classifier. However, high-dimensional features are memory-consuming and difficult to handle on large-scale training dataset. In this paper, we propose an efficient two-stage dimension reduction scheme, termed biview learning, to encode two independent views which are depth-difference features (DDF) and relative position features (RPF). Biview learning explores the complementary property of DDF and RPF, and uses two stages to learn a compact yet comprehensive low-dimensional feature space for posture segmentation. In the first stage, discriminative locality alignment (DLA) is applied to the high-dimensional DDF to learn a discriminative low-dimensional representation. In the second stage, canonical correlation analysis (CCA) is used to explore the complementary property of RPF and the dimensionality reduced DDF. Finally, we train a support vector machine (SVM) over the output of CCA. We carefully validate the effectiveness of DLA and CCA utilized in the two-stage scheme on our 3D human points cloud dataset. Experimental results show that the proposed biview learning scheme significantly outperforms the state-of-the-art method for human posture segmentation. PMID:24465721
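
    A minimal sketch of the second fusion stage, assuming scikit-learn: CCA couples the (already reduced) depth-difference view with the relative-position view, and an SVM is trained on the concatenated canonical variates. DLA itself is not available in scikit-learn, so random placeholder features stand in for its output; dimensions and data are illustrative only.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 500
ddf_reduced = rng.normal(size=(n, 20))   # stand-in for DLA-reduced depth-difference features
rpf = rng.normal(size=(n, 15))           # relative position features
labels = rng.integers(0, 5, size=n)      # body-part label per 3D point

# Stage 2: CCA exploits the complementary property of the two views
cca = CCA(n_components=10)
ddf_c, rpf_c = cca.fit_transform(ddf_reduced, rpf)
fused = np.hstack([ddf_c, rpf_c])

# Final classifier over the fused low-dimensional representation
clf = SVC(kernel="rbf").fit(fused, labels)
print(clf.score(fused, labels))          # training accuracy, as a smoke test
```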

  3. Intuitive terrain reconstruction using height observation-based ground segmentation and 3D object boundary estimation.

    PubMed

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-12-12

    Mobile robot operators must make rapid decisions based on information about the robot's surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot's array of sensors, but some upper parts of objects are beyond the sensors' measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances.
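
    A minimal sketch of the height-histogram step, assuming the dominant height bin of an outdoor scan corresponds to the ground plane: points within a tolerance of that height are labeled as ground. The Gibbs-Markov random field refinement described above is omitted, and the bin size and tolerance are placeholder values.

```python
import numpy as np

def ground_mask_from_heights(points, bin_size=0.1, tolerance=0.3):
    """Estimate a ground-height range from a histogram of z values and return a
    boolean ground/non-ground mask.  `points` is an (N, 3) array; bin_size and
    tolerance are in the same units as the point cloud (assumed metres)."""
    z = points[:, 2]
    bins = np.arange(z.min(), z.max() + bin_size, bin_size)
    hist, edges = np.histogram(z, bins=bins)
    peak = np.argmax(hist)                        # dominant height bin taken as ground level
    ground_z = 0.5 * (edges[peak] + edges[peak + 1])
    return np.abs(z - ground_z) <= tolerance
```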

  4. Biview learning for human posture segmentation from 3D points cloud.

    PubMed

    Qiao, Maoying; Cheng, Jun; Bian, Wei; Tao, Dacheng

    2014-01-01

    Posture segmentation plays an essential role in human motion analysis. The state-of-the-art method extracts sufficiently high-dimensional features from 3D depth images for each 3D point and learns an efficient body part classifier. However, high-dimensional features are memory-consuming and difficult to handle on large-scale training dataset. In this paper, we propose an efficient two-stage dimension reduction scheme, termed biview learning, to encode two independent views which are depth-difference features (DDF) and relative position features (RPF). Biview learning explores the complementary property of DDF and RPF, and uses two stages to learn a compact yet comprehensive low-dimensional feature space for posture segmentation. In the first stage, discriminative locality alignment (DLA) is applied to the high-dimensional DDF to learn a discriminative low-dimensional representation. In the second stage, canonical correlation analysis (CCA) is used to explore the complementary property of RPF and the dimensionality reduced DDF. Finally, we train a support vector machine (SVM) over the output of CCA. We carefully validate the effectiveness of DLA and CCA utilized in the two-stage scheme on our 3D human points cloud dataset. Experimental results show that the proposed biview learning scheme significantly outperforms the state-of-the-art method for human posture segmentation.

  5. Intuitive Terrain Reconstruction Using Height Observation-Based Ground Segmentation and 3D Object Boundary Estimation

    PubMed Central

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-01-01

    Mobile robot operators must make rapid decisions based on information about the robot’s surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot’s array of sensors, but some upper parts of objects are beyond the sensors’ measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances. PMID:23235454

  6. A novel Hessian based algorithm for rat kidney glomerulus detection in 3D MRI

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Wu, Teresa; Bennett, Kevin M.

    2015-03-01

    The glomeruli of the kidney perform the key role of blood filtration and the number of glomeruli in a kidney is correlated with susceptibility to chronic kidney disease and chronic cardiovascular disease. This motivates the development of new technology using magnetic resonance imaging (MRI) to measure the number of glomeruli and nephrons in vivo. However, there is currently a lack of computationally efficient techniques to perform fast, reliable and accurate counts of glomeruli in MR images due to the issues inherent in MRI, such as acquisition noise, partial volume effects (the mixture of several tissue signals in a voxel) and bias field (spatial intensity inhomogeneity). Such challenges are particularly severe because the glomeruli are very small (in our case, an MRI image is ~16 million voxels and each glomerulus spans only 8-20 voxels) and the number of glomeruli is very large. To address this, we have developed an efficient Hessian based Difference of Gaussians (HDoG) detector to identify the glomeruli on 3D rat MR images. The image is first smoothed via DoG, followed by the Hessian process to pre-segment and delineate the boundary of the glomerulus candidates. This then provides a basis to extract regional features used in an unsupervised clustering algorithm, completing segmentation by removing the false identifications that occurred in the pre-segmentation. The experimental results show that the Hessian based DoG has the potential to automatically detect glomeruli from MRI in 3D, enabling new measurements of renal microstructure and pathology in preclinical and clinical studies.
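
    A compact sketch of the HDoG pre-segmentation idea using NumPy/SciPy: the volume is smoothed with a Difference of Gaussians at roughly the blob scale, and voxels whose Hessian is negative definite (bright blob-like) are kept as candidates for the later feature-based false-positive removal. The scales are assumptions, and the sign test would be flipped for dark blobs.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

def hdog_candidates(volume, sigma_small=1.0, sigma_large=2.0):
    """Pre-segment bright blob candidates (e.g. glomeruli) in a 3D volume."""
    volume = volume.astype(float)
    dog = gaussian_filter(volume, sigma_small) - gaussian_filter(volume, sigma_large)

    # Hessian of the smoothed volume via second-order finite differences
    grads = np.gradient(dog)
    hessian = np.empty(dog.shape + (3, 3))
    for i, gi in enumerate(grads):
        second = np.gradient(gi)
        for j in range(3):
            hessian[..., i, j] = second[j]

    # A bright blob has all three Hessian eigenvalues negative
    eigvals = np.linalg.eigvalsh(hessian)
    blob_mask = np.all(eigvals < 0, axis=-1)

    labeled, n_candidates = label(blob_mask)   # connected candidate regions
    return labeled, n_candidates
```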

  7. New algorithms to map asymmetries of 3D surfaces.

    PubMed

    Combès, Benoît; Prima, Sylvain

    2008-01-01

    In this paper, we propose a set of new generic automated processing tools to characterise the local asymmetries of anatomical structures (represented by surfaces) at an individual level, and within/between populations. The building bricks of this toolbox are: (1) a new algorithm for robust, accurate, and fast estimation of the symmetry plane of grossly symmetrical surfaces, and (2) a new algorithm for the fast, dense, nonlinear matching of surfaces. This last algorithm is used both to compute dense individual asymmetry maps on surfaces, and to register these maps to a common template for population studies. We show these two algorithms to be mathematically well-grounded, and provide some validation experiments. Then we propose a pipeline for the statistical evaluation of local asymmetries within and between populations. Finally we present some results on real data.

  8. 3D radiative transfer in colliding wind binaries: Application of the SimpleX algorithm to 3D SPH simulations

    NASA Astrophysics Data System (ADS)

    Madura, Thomas; Clementel, Nicola; Kruip, Chael; Icke, Vincent; Gull, Theodore

    2014-09-01

    We present the first results of full 3D radiative transfer simulations of the colliding stellar winds in a massive binary system. We accomplish this by applying the SIMPLEX algorithm for 3D radiative transfer on an unstructured Delaunay grid to recent 3D smoothed particle hydrodynamics (SPH) simulations of the colliding winds in the binary system η Carinae. We use SIMPLEX to obtain detailed ionization fractions of hydrogen and helium, in 3D, at the resolution of the original SPH simulations. We show how the SIMPLEX simulations can be used to generate synthetic spectral data cubes for comparison to data obtained with the Hubble Space Telescope (HST)/Space Telescope Imaging Spectrograph as part of a multi-cycle program to map changes in η Car's extended interacting wind structures across one binary cycle. Comparison of the HST observations to the SIMPLEX models can help lead to more accurate constraints on the orbital, stellar, and wind parameters of the η Car system, such as the primary's mass-loss rate and the companion's temperature and luminosity. While we initially focus specifically on the η Car binary, the numerical methods employed can be applied to numerous other colliding wind (WR140, WR137, WR19) and dusty 'pinwheel' (WR104, WR98a) binary systems. One of the biggest remaining mysteries is how dust can form and survive in such systems that contain a hot, luminous O star. Coupled with 3D hydrodynamical simulations, SIMPLEX simulations have the potential to help determine the regions where dust can form and survive in these unique objects.

  9. 3D/2D registration and segmentation of scoliotic vertebrae using statistical models.

    PubMed

    Benameur, Said; Mignotte, Max; Parent, Stefan; Labelle, Hubert; Skalli, Wafa; de Guise, Jacques

    2003-01-01

    We propose a new 3D/2D registration method for vertebrae of the scoliotic spine, using two conventional radiographic views (postero-anterior and lateral), and a priori global knowledge of the geometric structure of each vertebra. This geometric knowledge is efficiently captured by a statistical deformable template integrating a set of admissible deformations, expressed by the first modes of variation in Karhunen-Loeve expansion, of the pathological deformations observed on a representative scoliotic vertebra population. The proposed registration method consists of fitting the projections of this deformable template with the preliminary segmented contours of the corresponding vertebra on the two radiographic views. The 3D/2D registration problem is stated as the minimization of a cost function for each vertebra and solved with a gradient descent technique. Registration of the spine is then done vertebra by vertebra. The proposed method efficiently provides accurate 3D reconstruction of each scoliotic vertebra and, consequently, it also provides accurate knowledge of the 3D structure of the whole scoliotic spine. This registration method has been successfully tested on several biplanar radiographic images and validated on 57 scoliotic vertebrae. The validation results reported in this paper demonstrate that the proposed statistical scheme performs better than other conventional 3D reconstruction methods.

  10. 3D Kidney Segmentation from Abdominal Images Using Spatial-Appearance Models

    PubMed Central

    Khalifa, Fahmi; Soliman, Ahmed; Gimel'farb, Georgy

    2017-01-01

    Kidney segmentation is an essential step in developing any noninvasive computer-assisted diagnostic system for renal function assessment. This paper introduces an automated framework for 3D kidney segmentation from dynamic computed tomography (CT) images that integrates discriminative features from the current and prior CT appearances into a random forest classification approach. To account for CT images' inhomogeneities, we employ discriminative features that are extracted from a higher-order spatial model and an adaptive shape model in addition to the first-order CT appearance. To model the interactions between CT data voxels, we employed a higher-order spatial model, which adds the triple and quad clique families to the traditional pairwise clique family. The kidney shape prior model is built using a set of training CT data and is updated during segmentation using not only region labels but also voxels' appearances in neighboring spatial voxel locations. Our framework's performance has been evaluated on in vivo dynamic CT data collected from 20 subjects and comprises multiple 3D scans acquired before and after contrast medium administration. Quantitative evaluation between manually and automatically segmented kidney contours using Dice similarity, percentage volume differences, and 95th-percentile bidirectional Hausdorff distances confirms the high accuracy of our approach. PMID:28280519
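
    The classification backbone can be sketched with scikit-learn as below. The three per-voxel features (first-order appearance, shape-prior probability, spatial-model probability) are random placeholders, so this only illustrates how a per-voxel kidney likelihood would be produced, not the paper's feature extraction or higher-order spatial model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_voxels = 10_000
# placeholder per-voxel features: [CT intensity, shape-prior prob., spatial-model prob.]
X = rng.normal(size=(n_voxels, 3))
# synthetic kidney/background labels correlated with the prior features
y = (X[:, 1] + X[:, 2] + 0.2 * rng.normal(size=n_voxels)) > 0

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
kidney_probability = forest.predict_proba(X)[:, 1]   # per-voxel kidney likelihood
```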

  11. Segmentation of Brain MRI Using SOM-FCM-Based Method and 3D Statistical Descriptors

    PubMed Central

    Ortiz, Andrés; Palacio, Antonio A.; Górriz, Juan M.; Ramírez, Javier; Salas-González, Diego

    2013-01-01

    Current medical imaging systems provide excellent spatial resolution, high tissue contrast, and up to 65535 intensity levels. Thus, image processing techniques which aim to exploit the information contained in the images are necessary for using these images in computer-aided diagnosis (CAD) systems. Image segmentation may be defined as the process of parcelling the image to delimit different neuroanatomical tissues present in the brain. In this paper we propose a segmentation technique using 3D statistical features extracted from the volume image. In addition, the presented method is based on unsupervised vector quantization and fuzzy clustering techniques and does not use any a priori information. The resulting fuzzy segmentation method addresses the problem of partial volume effect (PVE) and has been assessed using real brain images from the Internet Brain Segmentation Repository (IBSR). PMID:23762192
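
    A plain fuzzy c-means routine as a minimal sketch of the fuzzy clustering stage; the SOM-based vector quantization and the 3D statistical descriptors described above are not included, and the fuzzifier, iteration count and convergence tolerance are assumptions.

```python
import numpy as np

def fuzzy_c_means(features, n_clusters=3, m=2.0, n_iter=100, eps=1e-5, seed=0):
    """Plain fuzzy c-means.  `features` is an (N, D) array of per-voxel features;
    returns the membership matrix U (N, C) and the cluster centres (C, D)."""
    rng = np.random.default_rng(seed)
    u = rng.random((features.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        um = u ** m
        centres = (um.T @ features) / um.sum(axis=0)[:, None]
        # distance of every sample to every centre
        d = np.linalg.norm(features[:, None, :] - centres[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)
        # standard FCM membership update: u_ik proportional to d_ik^(-2/(m-1))
        u_new = 1.0 / (d ** (2.0 / (m - 1.0)))
        u_new /= u_new.sum(axis=1, keepdims=True)
        if np.abs(u_new - u).max() < eps:
            u = u_new
            break
        u = u_new
    return u, centres
```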

  12. Robust 3-D airway tree segmentation for image-guided peripheral bronchoscopy.

    PubMed

    Graham, Michael W; Gibbs, Jason D; Cornish, Duane C; Higgins, William E

    2010-04-01

    A vital task in the planning of peripheral bronchoscopy is the segmentation of the airway tree from a 3-D multidetector computed tomography chest scan. Unfortunately, existing methods typically do not sufficiently extract the necessary peripheral airways needed to plan a procedure. We present a robust method that draws upon both local and global information. The method begins with a conservative segmentation of the major airways. Follow-on stages then exhaustively search for additional candidate airway locations. Finally, a graph-based optimization method counterbalances both the benefit and cost of retaining candidate airway locations for the final segmentation. Results demonstrate that the proposed method typically extracts 2-3 more generations of airways than several other methods, and that the extracted airway trees enable image-guided bronchoscopy deeper into the human lung periphery than past studies.

  13. Vector algorithms for geometrically nonlinear 3D finite element analysis

    NASA Technical Reports Server (NTRS)

    Whitcomb, John D.

    1989-01-01

    Algorithms for geometrically nonlinear finite element analysis are presented which exploit the vector processing capability of the VPS-32, which is closely related to the CYBER 205. By manipulating vectors (which are long lists of numbers) rather than individual numbers, very high processing speeds are obtained. Long vector lengths are obtained without extensive replication or reordering by storage of intermediate results in strategic patterns at all stages of the computations. Comparisons of execution times with those from programs using either scalar or other vector programming techniques indicate that the algorithms presented are quite efficient.

  14. The effect of pose variability and repeated reliability of segmental centres of mass acquisition when using 3D photonic scanning.

    PubMed

    Chiu, Chuang-Yuan; Pease, David L; Sanders, Ross H

    2016-12-01

    Three-dimensional (3D) photonic scanning is an emerging technique to acquire accurate body segment parameter data. This study established the repeated reliability of segmental centres of mass when using 3D photonic scanning (3DPS). Seventeen male participants were scanned twice by a 3D whole-body laser scanner. The same operators conducted the reconstruction and segmentation processes to obtain segmental meshes for calculating the segmental centres of mass. The segmental centres of mass obtained from repeated 3DPS were compared by relative technical error of measurement (TEM). Hypothesis tests were conducted to determine the size of change required for each segment to be determined a true variation. The relative TEMs for all segments were less than 5%. The relative changes in centres of mass at ±1.5% for most segments can be detected (p < 0.05). The arm segments which are difficult to keep in the same scanning pose generated more error than other segments. Practitioner Summary: Three-dimensional photonic scanning is an emerging technique to acquire body segment parameter data. This study established the repeated reliability of segmental centres of mass when using 3D photonic scanning and emphasised that the error for arm segments need to be considered while using this technique to acquire centres of mass.
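
    For two repeated measurements per participant, the technical error of measurement and its relative form are conventionally defined as below (standard definitions, assumed rather than quoted from the paper), where d_i is the difference between the two repeats for participant i, n is the number of participants (17 here) and \bar{x} is the grand mean of all measurements:

```latex
\mathrm{TEM} = \sqrt{\frac{\sum_{i=1}^{n} d_i^{2}}{2n}},
\qquad
\%\mathrm{TEM} = \frac{\mathrm{TEM}}{\bar{x}} \times 100
```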

  15. Spectral clustering algorithms for ultrasound image segmentation.

    PubMed

    Archip, Neculai; Rohling, Robert; Cooperberg, Peter; Tahmasebpour, Hamid; Warfield, Simon K

    2005-01-01

    Image segmentation algorithms derived from spectral clustering analysis rely on the eigenvectors of the Laplacian of a weighted graph obtained from the image. The NCut criterion was previously used for image segmentation in a supervised manner. We derive a new strategy for unsupervised image segmentation. This article describes an initial investigation to determine the suitability of such segmentation techniques for ultrasound images. The extension of the NCut technique to the unsupervised clustering is first described. The novel segmentation algorithm is then performed on simulated ultrasound images. Tests are also performed on abdominal and fetal images with the segmentation results compared to manual segmentation. Comparisons with the classical NCut algorithm are also presented. Finally, segmentation results on other types of medical images are shown.
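
    A minimal unsupervised sketch in the spirit of the NCut-based approach, assuming scikit-learn: an affinity graph over neighbouring pixels is built from intensity differences, and spectral clustering (Laplacian eigenvectors followed by k-means) partitions the image. The affinity scaling and the number of segments are placeholders, not values from the article.

```python
import numpy as np
from sklearn.feature_extraction.image import img_to_graph
from sklearn.cluster import spectral_clustering

def segment_ultrasound(image, n_segments=4, beta=5.0, eps=1e-6):
    """Unsupervised spectral segmentation of a 2D (e.g. ultrasound) image.
    Affinity between 4-connected pixels decays with their intensity difference."""
    image = image.astype(float)
    graph = img_to_graph(image)                               # sparse pixel-neighbour graph
    graph.data = np.exp(-beta * graph.data / graph.data.std()) + eps
    labels = spectral_clustering(graph, n_clusters=n_segments,
                                 assign_labels="kmeans", random_state=0)
    return labels.reshape(image.shape)
```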

  16. Left Ventricular Myocardial Segmentation in 3-D Ultrasound Recordings: Effect of Different Endocardial and Epicardial Coupling Strategies.

    PubMed

    Pedrosa, Joao; Barbosa, Daniel; Heyde, Brecht; Schnell, Frederic; Rosner, Assami; Claus, Piet; D'hooge, Jan

    2017-03-01

    Cardiac volume/function assessment remains a critical step in daily cardiology, and 3-D ultrasound plays an increasingly important role. Though development of automatic endocardial segmentation methods has received much attention, the same cannot be said about epicardial segmentation, in spite of the importance of full myocardial segmentation. In this paper, different ways of coupling the endocardial and epicardial segmentations are contrasted and compared with uncoupled segmentation. For this purpose, the B-spline explicit active surfaces framework was used; 27 3-D echocardiographic images were used to validate the different coupling strategies, which were compared with manual contouring of the endocardial and epicardial borders performed by an expert. It is shown that an independent segmentation of the endocardium followed by an epicardial segmentation coupled to the endocardium is the most advantageous. In this way, a framework for fully automatic 3-D myocardial segmentation is proposed using a novel coupling strategy.

  17. ACM-based automatic liver segmentation from 3-D CT images by combining multiple atlases and improved mean-shift techniques.

    PubMed

    Ji, Hongwei; He, Jiangping; Yang, Xin; Deklerck, Rudi; Cornelis, Jan

    2013-05-01

    In this paper, we present an autocontext model (ACM)-based automatic liver segmentation algorithm, which combines ACM, multiatlases, and mean-shift techniques to segment liver from 3-D CT images. Our algorithm is a learning-based method and can be divided into two stages. At the first stage, i.e., the training stage, ACM is performed to learn a sequence of classifiers in each atlas space (based on each atlas and other aligned atlases). With the use of multiple atlases, multiple sequences of ACM-based classifiers are obtained. At the second stage, i.e., the segmentation stage, the test image will be segmented in each atlas space by applying each sequence of ACM-based classifiers. The final segmentation result will be obtained by fusing segmentation results from all atlas spaces via a multiclassifier fusion technique. Specifically, in order to speed up segmentation, given a test image, we first use an improved mean-shift algorithm to perform over-segmentation and then implement region-based image labeling instead of the original inefficient pixel-based image labeling. The proposed method is evaluated on the datasets of the MICCAI 2007 liver segmentation challenge. The experimental results show that the average volume overlap error and the average surface distance achieved by our method are 8.3% and 1.5 mm, respectively, which are comparable to the results reported in the existing state-of-the-art work on liver segmentation.

  18. 3D segmentation and image annotation for quantitative diagnosis in lung CT images with pulmonary lesions

    NASA Astrophysics Data System (ADS)

    Li, Suo; Zhu, Yanjie; Sun, Jianyong; Zhang, Jianguo

    2013-03-01

    Pulmonary nodules and ground glass opacities are highly significant findings in high-resolution computed tomography (HRCT) of patients with pulmonary lesions. The appearances of pulmonary nodules and ground glass opacities show a relationship with different lung diseases. According to the corresponding characteristics of a lesion, pertinent segmentation methods and quantitative analysis are helpful for controlling and treating diseases at an earlier and potentially more curable stage. Currently, most studies have focused on two-dimensional quantitative analysis of these kinds of diseases. Compared to two-dimensional images, three-dimensional quantitative analysis can take full advantage of the isotropic image data acquired using thin-slice HRCT and has better quantitative precision for clinical diagnosis. This presentation designs a computer-aided diagnosis component to segment 3D disease areas of nodules and ground glass opacities in lung CT images, and uses AIML (Annotation and Image Markup Language) to annotate the segmented 3D pulmonary lesions with information from quantitative measurement, which may provide more features and information to radiologists in clinical diagnosis.

  19. Focused shape models for hip joint segmentation in 3D magnetic resonance images.

    PubMed

    Chandra, Shekhar S; Xia, Ying; Engstrom, Craig; Crozier, Stuart; Schwarz, Raphael; Fripp, Jurgen

    2014-04-01

    Deformable models incorporating shape priors have proved to be a successful approach in segmenting anatomical regions and specific structures in medical images. This paper introduces weighted shape priors for deformable models in the context of 3D magnetic resonance (MR) image segmentation of the bony elements of the human hip joint. The fully automated approach allows the focusing of the shape model energy to a priori selected anatomical structures or regions of clinical interest by preferentially ordering the shape representation (or eigen-modes) within this type of model to the highly weighted areas. This focused shape model improves accuracy of the shape constraints in those regions compared to standard approaches. The proposed method achieved femoral head and acetabular bone segmentation mean absolute surface distance errors of 0.55±0.18mm and 0.75±0.20mm respectively in 35 3D unilateral MR datasets from 25 subjects acquired at 3T with different limited field of views for individual bony components of the hip joint.

  20. Parallel graph search: application to intraretinal layer segmentation of 3D macular OCT scans

    NASA Astrophysics Data System (ADS)

    Lee, Kyungmoo; Abràmoff, Michael D.; Garvin, Mona K.; Sonka, Milan

    2012-02-01

    Image segmentation is of paramount importance for quantitative analysis of medical image data. Recently, a 3-D graph search method which can detect globally optimal interacting surfaces with respect to the cost function of volumetric images has been introduced, and its utility demonstrated in several application areas. Although the method provides excellent segmentation accuracy, its limitation is a slow processing speed when many surfaces are simultaneously segmented in large volumetric datasets. Here, we propose a novel method of parallel graph search, which overcomes the limitation and allows the quick detection of multiple surfaces. To demonstrate the obtained performance with respect to segmentation accuracy and processing speedup, the new approach was applied to retinal optical coherence tomography (OCT) image data and compared with the performance of the former non-parallel method. Our parallel graph search methods for single and double surface detection are approximately 267 and 181 times faster than the original graph search approach in 5 macular OCT volumes (200 x 5 x 1024 voxels) acquired from the right eyes of 5 normal subjects. The resulting segmentation differences were small as demonstrated by the mean unsigned differences between the non-parallel and parallel methods of 0.0 +/- 0.0 voxels (0.0 +/- 0.0 μm) and 0.27 +/- 0.34 voxels (0.53 +/- 0.66 μm) for the single- and dual-surface approaches, respectively.

  1. Fully automated prostate segmentation in 3D MR based on normalized gradient fields cross-correlation initialization and LOGISMOS refinement

    NASA Astrophysics Data System (ADS)

    Yin, Yin; Fotin, Sergei V.; Periaswamy, Senthil; Kunz, Justin; Haldankar, Hrishikesh; Muradyan, Naira; Cornud, François; Turkbey, Baris; Choyke, Peter

    2012-02-01

    Manual delineation of the prostate is a challenging task for a clinician due to its complex and irregular shape. Furthermore, the need for precisely targeting the prostate boundary continues to grow. Planning for radiation therapy, MR-ultrasound fusion for image-guided biopsy, multi-parametric MRI tissue characterization, and context-based organ retrieval are examples where accurate prostate delineation can play a critical role in a successful patient outcome. Therefore, a robust automated full prostate segmentation system is desired. In this paper, we present an automated prostate segmentation system for 3D MR images. In this system, the prostate is segmented in two steps: the prostate displacement and size are first detected, and then the boundary is refined by a shape model. The detection approach is based on normalized gradient fields cross-correlation. This approach is fast, robust to intensity variation and provides good accuracy to initialize a prostate mean shape model. The refinement model is based on a graph-search based framework, which contains both shape and topology information during deformation. We generated the graph cost using trained classifiers and used coarse-to-fine search and region-specific classifier training. The proposed algorithm was developed using 261 training images and tested on another 290 cases. The segmentation performance using mean DSC ranging from 0.89 to 0.91 depending on the evaluation subset demonstrates state of the art performance. Running time for the system is about 20 to 40 seconds depending on image size and resolution.
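
    The normalized gradient field used for detection can be written down compactly; the sketch below follows the standard NGF definition (Haber and Modersitzki) together with a squared-inner-product similarity, which is assumed here rather than taken from the paper. The detection system would scan this score over candidate prostate positions and sizes before the shape-model refinement.

```python
import numpy as np

def normalized_gradient_field(volume, eps=1e-3):
    """NGF n(x) = grad I / sqrt(|grad I|^2 + eps^2) for a 2D or 3D array."""
    grads = np.stack(np.gradient(volume.astype(float)), axis=-1)
    norm = np.sqrt((grads ** 2).sum(axis=-1, keepdims=True) + eps ** 2)
    return grads / norm

def ngf_similarity(ngf_image_patch, ngf_template):
    """Similarity of two equally sized NGF patches: mean squared inner product,
    close to 1 for aligned gradients and near 0 for unrelated ones."""
    inner = (ngf_image_patch * ngf_template).sum(axis=-1)
    return float((inner ** 2).mean())
```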

  2. A modular segmented-flow platform for 3D cell cultivation.

    PubMed

    Lemke, Karen; Förster, Tobias; Römer, Robert; Quade, Mandy; Wiedemeier, Stefan; Grodrian, Andreas; Gastrock, Gunter

    2015-07-10

    In vitro 3D cell cultivation promises to reproduce tissue in vivo more realistically than 2D cell cultivation with respect to cell-cell and cell-matrix interactions. Therefore, a scalable 3D cultivation platform was developed. This platform, called pipe-based bioreactors (pbb), is based on the segmented-flow technology: aqueous droplets are embedded in a water-immiscible carrier fluid. The droplet volumes range from 60 nL to 20 μL and are used as bioreactors lined up in a tubing like pearls on a string. The modular automated platform basically consists of several modules: a fluid management module for high-throughput droplet generation for self-assembly or scaffold-based 3D cell cultivation, a storage module for incubation and storage, and an analysis module for monitoring cell aggregation and proliferation based on microscopy or photometry. In this report, the self-assembly of murine embryonic stem cells (mESCs) into uniformly sized embryoid bodies (EBs), cell proliferation, cell viability, and the influence on differentiation into cardiomyocytes are described. The integration of a dosage module for medium exchange or agent addition will enable pbb to serve as a long-term 3D cell cultivation system for studying stem cell differentiation, e.g. cardiac myogenesis, or for diagnostic and therapeutic testing in personalized medicine.

  3. Algorithm of pulmonary emphysema extraction using thoracic 3D CT images

    NASA Astrophysics Data System (ADS)

    Saita, Shinsuke; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Ohmatsu, Hironobu; Tominaga, Keigo; Eguchi, Kenji; Moriyama, Noriyuki

    2007-03-01

    Recently, due to aging and smoking, the number of emphysema patients has been increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desired. We describe a quantitative algorithm for extracting emphysematous lesions and quantitatively evaluating their distribution patterns using low-dose thoracic 3-D CT images. The algorithm identifies lung anatomies and extracts low attenuation areas (LAA) as emphysematous lesion candidates. By applying the algorithm to thoracic 3-D CT images and to follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.
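
    A minimal sketch of the low-attenuation-area step: within a given lung mask, voxels below a threshold are extracted as emphysematous lesion candidates and summarized as an LAA percentage. The -950 HU cutoff is a commonly used value and an assumption here, not necessarily the threshold used in this work, and the lung anatomy identification is taken as given.

```python
import numpy as np

def low_attenuation_area(ct_hu, lung_mask, threshold_hu=-950):
    """Extract low attenuation area (LAA) candidates inside a boolean lung mask
    and return the candidate mask plus LAA% (fraction of lung volume below threshold)."""
    laa_mask = lung_mask & (ct_hu < threshold_hu)
    laa_percent = 100.0 * laa_mask.sum() / max(lung_mask.sum(), 1)
    return laa_mask, laa_percent
```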

  4. Semantic segmentation of 3D textured meshes for urban scene analysis

    NASA Astrophysics Data System (ADS)

    Rouhani, Mohammad; Lafarge, Florent; Alliez, Pierre

    2017-01-01

    Classifying 3D measurement data has become a core problem in photogrammetry and 3D computer vision, since the rise of modern multiview geometry techniques, combined with affordable range sensors. We introduce a Markov Random Field-based approach for segmenting textured meshes generated via multi-view stereo into urban classes of interest. The input mesh is first partitioned into small clusters, referred to as superfacets, from which geometric and photometric features are computed. A random forest is then trained to predict the class of each superfacet as well as its similarity with the neighboring superfacets. Similarity is used to assign the weights of the Markov Random Field pairwise-potential and to account for contextual information between the classes. The experimental results illustrate the efficacy and accuracy of the proposed framework.

  5. 3-D segmentation and quantitative analysis of inner and outer walls of thrombotic abdominal aortic aneurysms

    NASA Astrophysics Data System (ADS)

    Lee, Kyungmoo; Yin, Yin; Wahle, Andreas; Olszewski, Mark E.; Sonka, Milan

    2008-03-01

    An abdominal aortic aneurysm (AAA) is an area of a localized widening of the abdominal aorta, with a frequent presence of thrombus. A ruptured aneurysm can cause death due to severe internal bleeding. AAA thrombus segmentation and quantitative analysis are of paramount importance for diagnosis, risk assessment, and determination of treatment options. Until now, only a small number of methods for thrombus segmentation and analysis have been presented in the literature, either requiring substantial user interaction or exhibiting insufficient performance. We report a novel method offering minimal user interaction and high accuracy. Our thrombus segmentation method is composed of an initial automated luminal surface segmentation, followed by a cost function-based optimal segmentation of the inner and outer surfaces of the aortic wall. The approach utilizes the power and flexibility of the optimal triangle mesh-based 3-D graph search method, in which cost functions for thrombus inner and outer surfaces are based on gradient magnitudes. Sometimes local failures caused by image ambiguity occur, in which case several control points are used to guide the computer segmentation without the need to trace borders manually. Our method was tested in 9 MDCT image datasets (951 image slices). With the exception of a case in which the thrombus was highly eccentric, visually acceptable aortic lumen and thrombus segmentation results were achieved. No user interaction was used in 3 out of 8 datasets, and 7.80 +/- 2.71 mouse clicks per case / 0.083 +/- 0.035 mouse clicks per image slice were required in the remaining 5 datasets.

  6. Binary 3D image interpolation algorithm based global information and adaptive curves fitting

    NASA Astrophysics Data System (ADS)

    Zhang, Tian-yi; Zhang, Jin-hao; Guan, Xiang-chen; Li, Qiu-ping; He, Meng

    2013-08-01

    Interpolation is a necessary processing step in 3-D reconstruction because of non-uniform resolution. Conventional interpolation methods simply use two slices to obtain the missing slices between them. When a key slice is missing, those methods may fail to recover it using only local information. Moreover, the surface of a 3D object, especially for medical tissues, may be highly complicated, so a single interpolation can hardly produce a high-quality 3D image. We propose a novel binary 3D image interpolation algorithm. The proposed algorithm takes advantage of global information. It adaptively chooses the best curve from many candidate curves based on the complexity of the 3D object's surface. The results of this algorithm are compared with other interpolation methods on artificial objects and a real breast cancer tumor to demonstrate its excellent performance.

  7. A 3-D Computational Study of a Variable Camber Continuous Trailing Edge Flap (VCCTEF) Spanwise Segment

    NASA Technical Reports Server (NTRS)

    Kaul, Upender K.; Nguyen, Nhan T.

    2015-01-01

    Results of a computational study carried out to explore the effects of various elastomer configurations joining spanwise contiguous Variable Camber Continuous Trailing Edge Flap (VCCTEF) segments are reported here. This research is carried out as a proof-of-concept study that will seek to push the flight envelope in cruise with drag optimization as the objective. The cruise conditions can be well off design such as caused by environmental conditions, maneuvering, etc. To handle these off-design conditions, flap deflection is used so when the flap is deflected in a given direction, the aircraft angle of attack changes accordingly to maintain a given lift. The angle of attack is also a design parameter along with the flap deflection. In a previous 2D study,1 the effect of camber was investigated and the results revealed some insight into the relative merit of various camber settings of the VCCTEF. The present state of the art has not advanced sufficiently to do a full 3-D viscous analysis of the whole NASA Generic Transport Model (GTM) wing with VCCTEF deployed with elastomers. Therefore, this study seeks to explore the local effects of three contiguous flap segments on lift and drag of a model devised here to determine possible trades among various flap deflections to achieve desired lift and drag results. Although this approach is an approximation, it provides new insights into the "local" effects of the relative deflections of the contiguous spanwise flap systems and various elastomer segment configurations. The present study is a natural extension of the 2-D study to assess these local 3-D effects. Design cruise condition at 36,000 feet at free stream Mach number of 0.797 and a mean aerodynamic chord (MAC) based Reynolds number of 30.734x10(exp 6) is simulated for an angle of attack (AoA) range of 0 to 6 deg. In the previous 2-D study, the calculations revealed that the parabolic arc camber (1x2x3) and circular arc camber (VCCTEF222) offered the best L

  8. Layout consistent segmentation of 3-D meshes via conditional random fields and spatial ordering constraints.

    PubMed

    Zouhar, Alexander; Baloch, Sajjad; Tsin, Yanghai; Fang, Tong; Fuchs, Siegfried

    2010-01-01

    We address the problem of 3-D Mesh segmentation for categories of objects with known part structure. Part labels are derived from a semantic interpretation of non-overlapping subsurfaces. Our approach models the label distribution using a Conditional Random Field (CRF) that imposes constraints on the relative spatial arrangement of neighboring labels, thereby ensuring semantic consistency. To this end, each label variable is associated with a rich shape descriptor that is intrinsic to the surface. Randomized decision trees and cross validation are employed for learning the model, which is eventually applied using graph cuts. The method is flexible enough for segmenting even geometrically less structured regions and is robust to local and global shape variations.

  9. [3-D endocardial surface modelling based on the convex hull algorithm].

    PubMed

    Lu, Ying; Xi, Ri-hui; Shen, Hai-dong; Ye, You-li; Zhang, Yong

    2006-11-01

    In this paper, a method based on the convex hull algorithm is presented for extracting modelling data from the locations of catheter electrodes within a cardiac chamber, so as to create a 3-D model of the heart chamber during diastole and to obtain a good result in the 3-D reconstruction of the chamber based on VTK.
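
    A minimal sketch of the core step, assuming SciPy: the convex hull of the recorded electrode positions yields a closed triangulated surface that can serve as a first 3D model of the chamber (rendering with VTK, as in the paper, is omitted); the electrode coordinates below are placeholder data.

```python
import numpy as np
from scipy.spatial import ConvexHull

# electrode locations recorded inside the chamber (placeholder data, in mm)
rng = np.random.default_rng(1)
electrode_xyz = rng.normal(scale=20.0, size=(200, 3))

hull = ConvexHull(electrode_xyz)
surface_points = electrode_xyz[hull.vertices]   # points lying on the chamber surface
triangles = hull.simplices                      # triangle indices of the surface mesh
print(f"{len(triangles)} surface triangles, enclosed volume ~ {hull.volume:.1f} mm^3")
```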

  10. Algorithms for calculating detector efficiency normalization coefficients for true coincidences in 3D PET

    NASA Astrophysics Data System (ADS)

    Badawi, R. D.; Lodge, M. A.; Marsden, P. K.

    1998-01-01

    Accurate normalization of lines of response in 3D PET is a prerequisite for quantitative reconstruction. Most current methods are component based, calculating a series of geometric and intrinsic detector efficiency factors. We have reviewed the theory behind several existing algorithms for calculating detector efficiency factors in 2D and 3D PET, and have extended them to create a range of new algorithms. Three of the algorithms described are `fully 3D' in that they make use of data from all detector rings for the calculation of the efficiencies of any one line of response. We have assessed the performance of the new and existing methods using simulated and real data, and have demonstrated that the fully 3D algorithms allow the rapid acquisition of crystal efficiency normalization data using low-activity sources. Such methods enable the use of scatter-free scanning line sources or the use of very short acquisitions of cylindrical sources for routine normalization.

  11. A 3D Cloud-Construction Algorithm for the EarthCARE Satellite Mission

    NASA Technical Reports Server (NTRS)

    Barker, H. W.; Jerg, M. P.; Wehr, T.; Kato, S.; Donovan, D. P.; Hogan, R. J.

    2011-01-01

    This article presents and assesses an algorithm that constructs 3D distributions of cloud from passive satellite imagery and collocated 2D nadir profiles of cloud properties inferred synergistically from lidar, cloud radar and imager data.

  12. Visualising, segmenting and analysing heterogenous glacigenic sediments using 3D x-ray CT.

    NASA Astrophysics Data System (ADS)

    Carr, Simon; Diggens, Lucy; Groves, John; O'Sullivan, Catherine; Marsland, Rhona

    2015-04-01

    , especially with regard to using such data to improve understanding of mechanisms of particle motion and fabric development during subglacial strain. In this study, we present detailed investigation of subglacial tills from the UK, Iceland and Poland, to explore the challenges in segmenting these highly variable sediment bodies for 3D microfabric analysis. A calibration study is reported to compare various approaches to CT data segmentation to manually segmented datasets, from which an optimal workflow is developed, using a combination of the WEKA Trainable Segmentation tool within ImageJ to segment the data, followed by object-based analysis using Blob3D. We then demonstrate the value of this analysis through the analysis of true 3D microfabric data from a Last Glacial Maximum till deposit located at Morston, North Norfolk. Seven undisturbed sediment samples were scanned and analysed using high-resolution 3D X-ray computed tomography. Large (~5,000 to ~16,000) populations of individual particles are objectively and systematically segmented and identified. These large datasets are then subject to detailed interrogation using bespoke code for analysing particle fabric within Matlab, including the application of fabric-tensor analysis, by which fabrics can be weighted and scaled by key variables such as size and shape. We will present initial findings from these datasets, focusing particularly on overcoming the methodological challenges of obtaining robust datasets of sediments with highly complex, mixed compositional sediments.

  13. Surface modeling and segmentation of the 3D airway wall in MSCT

    NASA Astrophysics Data System (ADS)

    Ortner, Margarete; Fetita, Catalin; Brillet, Pierre-Yves; Prêteux, Françoise; Grenier, Philippe

    2011-03-01

    Airway wall remodeling in asthma and chronic obstructive pulmonary disease (COPD) is a well-known indicator of the pathology. In this context, current clinical studies aim for establishing the relationship between the airway morphological structure and its function. Multislice computed tomography (MSCT) allows morphometric assessment of airways, but requires dedicated segmentation tools for clinical exploitation. While most of the existing tools are limited to cross-section measurements, this paper develops a fully 3D approach for airway wall segmentation. Such approach relies on a deformable model which is built up as a patient-specific surface model at the level of the airway lumen and deformed to reach the outer surface of the airway wall. The deformation dynamics obey a force equilibrium in a Lagrangian framework constrained by a vector field which avoids model self-intersections. The segmentation result allows a dense quantitative investigation of the airway wall thickness with a deeper insight at bronchus subdivisions than classic cross-section methods. The developed approach has been assessed both by visual inspection of 2D cross-sections, performed by two experienced radiologists on clinical data obtained with various protocols, and by using a simulated ground truth (pulmonary CT image model). The results confirmed a robust segmentation in intra-pulmonary regions with an error in the range of the MSCT image resolution and underlined the interest of the volumetric approach versus purely 2D methods.

  14. 3D variational brain tumor segmentation on a clustered feature set

    NASA Astrophysics Data System (ADS)

    Popuri, Karteek; Cobzas, Dana; Jagersand, Martin; Shah, Sirish L.; Murtha, Albert

    2009-02-01

    Tumor segmentation from MRI data is a particularly challenging and time consuming task. Tumors have a large diversity in shape and appearance with intensities overlapping the normal brain tissues. In addition, an expanding tumor can also deflect and deform nearby tissue. Our work addresses these last two difficult problems. We use the available MRI modalities (T1, T1c, T2) and their texture characteristics to construct a multi-dimensional feature set. Further, we extract clusters which provide a compact representation of the essential information in these features. The main idea in this paper is to incorporate these clustered features into the 3D variational segmentation framework. In contrast to the previous variational approaches, we propose a segmentation method that evolves the contour in a supervised fashion. The segmentation boundary is driven by the learned inside and outside region voxel probabilities in the cluster space. We incorporate prior knowledge about the normal brain tissue appearance, during the estimation of these region statistics. In particular, we use a Dirichlet prior that discourages the clusters in the ventricles to be in the tumor and hence better disambiguate the tumor from brain tissue. We show the performance of our method on real MRI scans. The experimental dataset includes MRI scans, from patients with difficult instances, with tumors that are inhomogeneous in appearance, small in size and in proximity to the major structures in the brain. Our method shows good results on these test cases.

  15. Lung lobe segmentation by graph search with 3D shape constraints

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Hoffman, Eric A.; Reinhardt, Joseph M.

    2001-05-01

    The lung lobes are natural units for reporting image-based measurements of the respiratory system. Lobar segmentation can also be used in pulmonary image processing to guide registration and drive additional segmentation. We have developed a 3D shape-constrained lobar segmentation technique for volumetric pulmonary CT images. The method consists of a search engine and shape constraints that work together to detect lobar fissures using gray level information and anatomic shape characteristics in two steps: (1) a coarse localization step, (2) a fine tuning step. An error detecting mechanism using shape constraints is used in our method to correct erroneous search results. Our method has been tested in four subjects, and the results are compared to manually traced results. The average RMS difference between the manual results and shape-constrained segmentation results is 2.23 mm. We further validated our method by evaluating the repeatability of lobar volumes measured from repeat scans of the same subject. We compared lobar air and tissue volume variations to show that most of the lobar volume variations are due to difference in air volume scan to scan.

  16. TU-F-BRF-06: 3D Pancreas MRI Segmentation Using Dictionary Learning and Manifold Clustering

    SciTech Connect

    Gou, S; Rapacchi, S; Hu, P; Sheng, K

    2014-06-15

    Purpose: The recent advent of MRI guided radiotherapy machines has lent an exciting platform for soft tissue target localization during treatment. However, tools to efficiently utilize MRI images for this purpose have not been developed. Specifically, to efficiently quantify the organ motion, we develop an automated segmentation method using dictionary learning and manifold clustering (DLMC). Methods: Fast 3D HASTE and VIBE MR images of 2 healthy volunteers and 3 patients were acquired. A bounding box was defined to include the pancreas and surrounding normal organs including the liver, duodenum and stomach. The first slice of the MRI was used for dictionary learning based on mean-shift clustering and K-SVD sparse representation. Subsequent images were iteratively reconstructed until the error was less than a preset threshold. The preliminary segmentation was subject to the constraints of manifold clustering. The segmentation results were compared with the mean shift merging (MSM), level set (LS) and manual segmentation methods. Results: DLMC resulted in consistently higher accuracy and robustness than the comparison methods. Using manual contours as the ground truth, the mean Dice indices for all subjects are 0.54, 0.56 and 0.67 for MSM, LS and DLMC, respectively, based on the HASTE images. The mean Dice indices are 0.70, 0.77 and 0.79 for the three methods based on VIBE images. DLMC is clearly more robust on the patients with a diseased pancreas, while LS and MSM tend to over-segment the pancreas. DLMC also achieved higher sensitivity (0.80) and specificity (0.99) combining both imaging techniques. LS achieved equivalent sensitivity on VIBE images but was computationally less efficient. Conclusion: We showed that the pancreas and surrounding normal organs can be reliably segmented from fast MRI using DLMC. This method will facilitate both planning volume definition and imaging guidance during treatment.
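    The dictionary-learning step of DLMC is not publicly available; as a rough, non-authoritative illustration of patch-based sparse coding on a single slice, the sketch below uses scikit-learn's online dictionary learning (not K-SVD, which the authors used) with OMP sparse codes. Patch size, number of atoms and sparsity are assumptions.

```python
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import MiniBatchDictionaryLearning

def learn_patch_dictionary(slice2d, patch_size=(8, 8), n_atoms=64, sparsity=5):
    """Learn a sparse patch dictionary from one MR slice (illustration only;
    scikit-learn implements online dictionary learning rather than K-SVD)."""
    patches = extract_patches_2d(slice2d, patch_size, max_patches=5000,
                                 random_state=0)
    X = patches.reshape(len(patches), -1).astype(float)
    X -= X.mean(axis=1, keepdims=True)              # remove per-patch DC level
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm='omp',
                                       transform_n_nonzero_coefs=sparsity,
                                       random_state=0)
    codes = dico.fit(X).transform(X)                # sparse code per patch
    return dico, codes

# toy usage on a synthetic "slice"
slice2d = np.random.rand(128, 128)
dico, codes = learn_patch_dictionary(slice2d)
print(codes.shape)                                  # (n_patches, n_atoms)
```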

  17. Elastic model-based segmentation of 3-D neuroradiological data sets.

    PubMed

    Kelemen, A; Székely, G; Gerig, G

    1999-10-01

    This paper presents a new technique for the automatic model-based segmentation of three-dimensional (3-D) objects from volumetric image data. The development closely follows the seminal work of Taylor and Cootes on active shape models, but is based on a hierarchical parametric object description rather than a point distribution model. The segmentation system includes both the building of statistical models and the automatic segmentation of new image data sets via a restricted elastic deformation of shape models. Geometric models are derived from a sample set of image data which have been segmented by experts. The surfaces of these binary objects are converted into parametric surface representations, which are normalized to get an invariant object-centered coordinate system. Surface representations are expanded into series of spherical harmonics which provide parametric descriptions of object shapes. It is shown that invariant object surface parametrization provides a good approximation to automatically determine object homology in terms of sets of corresponding sets of surface points. Gray-level information near object boundaries is represented by 1-D intensity profiles normal to the surface. Considering automatic segmentation of brain structures as our driving application, our choice of coordinates for object alignment was the well-accepted stereotactic coordinate system. Major variation of object shapes around the mean shape, also referred to as shape eigenmodes, are calculated in shape parameter space rather than the feature space of point coordinates. Segmentation makes use of the object shape statistics by restricting possible elastic deformations into the range of the training shapes. The mean shapes are initialized in a new data set by specifying the landmarks of the stereotactic coordinate system. The model elastically deforms, driven by the displacement forces across the object's surface, which are generated by matching local intensity profiles. Elastic

  18. Spline-based deforming ellipsoids for interactive 3D bioimage segmentation.

    PubMed

    Delgado-Gonzalo, Ricard; Chenouard, Nicolas; Unser, Michael

    2013-10-01

    We present a new fast active-contour model (a.k.a. snake) for image segmentation in 3D microscopy. We introduce a parametric design that relies on exponential B-spline bases and allows us to build snakes that are able to reproduce ellipsoids. We design our bases to have the shortest-possible support, subject to some constraints. Thus, computational efficiency is maximized. The proposed 3D snake can approximate blob-like objects with good accuracy and can perfectly reproduce spheres and ellipsoids, irrespective of their position and orientation. The optimization process is remarkably fast due to the use of Gauss' theorem within our energy computation scheme. Our technique yields successful segmentation results, even for challenging data where object contours are not well defined. This is due to our parametric approach that allows one to favor prior shapes. In addition, this paper provides software that gives full control over the snakes via intuitive manipulation of a few control points.

  19. 3D liver segmentation using multiple region appearances and graph cuts

    SciTech Connect

    Peng, Jialin Zhang, Hongbo; Hu, Peijun; Lu, Fang; Kong, Dexing; Peng, Zhiyi

    2015-12-15

    Purpose: Efficient and accurate 3D liver segmentations from contrast-enhanced computed tomography (CT) images play an important role in therapeutic strategies for hepatic diseases. However, inhomogeneous appearances, ambiguous boundaries, and large variance in shape often make it a challenging task. The existence of liver abnormalities poses further difficulty. Despite the significant intensity difference, liver tumors should be segmented as part of the liver. This study aims to address these challenges, especially when the target livers contain subregions with distinct appearances. Methods: The authors propose a novel multiregion-appearance based approach with graph cuts to delineate the liver surface. For livers with multiple subregions, a geodesic distance based appearance selection scheme is introduced to utilize proper appearance constraint for each subregion. A special case of the proposed method, which uses only one appearance constraint to segment the liver, is also presented. The segmentation process is modeled with energy functions incorporating both boundary and region information. Rather than a simple fixed combination, an adaptive balancing weight is introduced and learned from training sets. The proposed method only requires an initialization inside the liver surface. No additional constraints from user interaction are utilized. Results: The proposed method was validated on 50 3D CT images from three datasets, i.e., Medical Image Computing and Computer Assisted Intervention (MICCAI) training and testing set, and local dataset. On the MICCAI testing set, the proposed method achieved a total score of 83.4 ± 3.1, outperforming nonexpert manual segmentation (average score of 75.0). When applying their method to the MICCAI training set and local dataset, it yielded a mean Dice similarity coefficient (DSC) of 97.7% ± 0.5% and 97.5% ± 0.4%, respectively. These results demonstrated the accuracy of the method when applied to different computed tomography (CT) datasets
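    As a minimal sketch of the graph-cut component only (not the authors' multi-region appearance model or adaptive weighting), the following Python fragment uses the PyMaxflow library to cut a 3D grid with Gaussian intensity likelihoods as terminal weights and a constant smoothness term; the object/background means and all weights are assumptions chosen for the toy example.

```python
import numpy as np
import maxflow   # PyMaxflow

def binary_graph_cut(volume, mu_obj, mu_bkg, sigma=30.0, pairwise=2.0):
    """Toy single-appearance 3D graph cut (illustrative, not the paper's model).

    Terminal capacities are Gaussian negative log-likelihoods for assumed
    object/background mean intensities; pairwise is a constant smoothness
    weight on the 6-connected voxel grid.
    """
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(volume.shape)
    g.add_grid_edges(nodes, weights=pairwise)             # n-links (smoothness)
    nll_obj = (volume - mu_obj) ** 2 / (2 * sigma ** 2)   # cost of labelling "object"
    nll_bkg = (volume - mu_bkg) ** 2 / (2 * sigma ** 2)   # cost of labelling "background"
    # with source = object, a voxel left in the source segment pays the sink
    # capacity, so sink capacities carry the object costs and vice versa
    g.add_grid_tedges(nodes, nll_bkg, nll_obj)
    g.maxflow()
    return ~g.get_grid_segments(nodes)                    # True where voxel = object

vol = np.random.normal(60, 10, (40, 64, 64))
vol[10:30, 20:50, 20:50] += 80                            # bright blob standing in for the liver
seg = binary_graph_cut(vol, mu_obj=140.0, mu_bkg=60.0)
print(seg.sum(), "voxels labelled as object")
```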

  20. Simultaneous segmentation of the bone and cartilage surfaces of a knee joint in 3D

    NASA Astrophysics Data System (ADS)

    Yin, Y.; Zhang, X.; Anderson, D. D.; Brown, T. D.; Hofwegen, C. Van; Sonka, M.

    2009-02-01

    We present a novel framework for the simultaneous segmentation of multiple interacting surfaces belonging to multiple mutually interacting objects. The method is a non-trivial extension of our previously reported optimal multi-surface segmentation. Considering an example application of knee-cartilage segmentation, the framework consists of the following main steps: 1) Shape model construction: Building a mean shape for each bone of the joint (femur, tibia, patella) from interactively segmented volumetric datasets. Using the resulting mean-shape model - identification of cartilage, non-cartilage, and transition areas on the mean-shape bone model surfaces. 2) Presegmentation: Employment of iterative optimal surface detection method to achieve approximate segmentation of individual bone surfaces. 3) Cross-object surface mapping: Detection of inter-bone equidistant separating sheets to help identify corresponding vertex pairs for all interacting surfaces. 4) Multi-object, multi-surface graph construction and final segmentation: Construction of a single multi-bone, multi-surface graph so that two surfaces (bone and cartilage) with zero and non-zero intervening distances can be detected for each bone of the joint, according to whether or not cartilage can be locally absent or present on the bone. To define inter-object relationships, corresponding vertex pairs identified using the separating sheets were interlinked in the graph. The graph optimization algorithm acted on the entire multiobject, multi-surface graph to yield a globally optimal solution. The segmentation framework was tested on 16 MR-DESS knee-joint datasets from the Osteoarthritis Initiative database. The average signed surface positioning error for the 6 detected surfaces ranged from 0.00 to 0.12 mm. When independently initialized, the signed reproducibility error of bone and cartilage segmentation ranged from 0.00 to 0.26 mm. The results showed that this framework provides robust, accurate, and

  1. Pancreas segmentation from 3D abdominal CT images using patient-specific weighted subspatial probabilistic atlases

    NASA Astrophysics Data System (ADS)

    Karasawa, Kenichi; Oda, Masahiro; Hayashi, Yuichiro; Nimura, Yukitaka; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Rueckert, Daniel; Mori, Kensaku

    2015-03-01

    Abdominal organ segmentations from CT volumes are now widely used in computer-aided diagnosis and surgery assistance systems. Among abdominal organs, the pancreas is especially difficult to segment because of its large individual differences in shape and position. In this paper, we propose a new pancreas segmentation method from 3D abdominal CT volumes using patient-specific weighted-subspatial probabilistic atlases. First of all, we perform normalization of organ shapes in the training volumes and an input volume. We extract the Volume Of Interest (VOI) of the pancreas from the training volumes and the input volume. We divide each training VOI and input VOI into some cubic regions. We use a nonrigid registration method to register these cubic regions of the training VOI to corresponding regions of the input VOI. Based on the registration results, we calculate similarities between each cubic region of the training VOI and the corresponding region of the input VOI. We select cubic regions of training volumes having the top N similarities in each cubic region. We subspatially construct probabilistic atlases weighted by the similarities in each cubic region. After integrating these probabilistic atlases in cubic regions into one, we perform a rough-to-precise segmentation of the pancreas using the atlas. The results of the experiments showed that utilization of the training volumes having the top N similarities in each cubic region led to good results of the pancreas segmentation. The Jaccard Index and the average surface distance of the result were 58.9% and 2.04 mm on average, respectively.
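    A minimal sketch of the atlas-weighting step is shown below (registration, VOI extraction and the cubic-region subdivision are omitted): given training masks already registered to one region of the input and their similarity scores, the top-N masks are fused into a similarity-weighted probabilistic atlas that serves as the prior for the rough segmentation stage. The choice of N and the similarity measure are assumptions.

```python
import numpy as np

def weighted_probabilistic_atlas(label_masks, similarities, top_n=5):
    """Fuse registered binary training masks into a probabilistic atlas,
    keeping the top-N most similar volumes and weighting by similarity.

    label_masks  : (K, Z, Y, X) binary pancreas masks, already registered to
                   the input region (registration step not shown).
    similarities : (K,) similarity scores between each training region and
                   the corresponding input region.
    """
    sims = np.asarray(similarities, float)
    idx = np.argsort(sims)[::-1][:top_n]           # top-N most similar volumes
    w = sims[idx] / sims[idx].sum()
    atlas = np.einsum('k,kzyx->zyx', w, label_masks[idx].astype(float))
    return atlas                                    # voxel-wise prior in [0, 1]

masks = (np.random.rand(10, 16, 32, 32) > 0.7).astype(np.uint8)
sims = np.random.rand(10)
prior = weighted_probabilistic_atlas(masks, sims, top_n=3)
rough_seg = prior > 0.5                             # "rough" stage of rough-to-precise
```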

  2. Enhanced hybrid search algorithm for protein structure prediction using the 3D-HP lattice model.

    PubMed

    Zhou, Changjun; Hou, Caixia; Zhang, Qiang; Wei, Xiaopeng

    2013-09-01

    The problem of protein structure prediction in the hydrophobic-polar (HP) lattice model is the prediction of protein tertiary structure. This problem is usually referred to as the protein folding problem. This paper presents a method for the application of an enhanced hybrid search algorithm to the problem of protein folding prediction, using the three-dimensional (3D) HP lattice model. The enhanced hybrid search algorithm is a combination of the particle swarm optimizer (PSO) and tabu search (TS) algorithms. Since the PSO algorithm is easily trapped in local minima during the later stages of the evolution, we combined PSO with the TS algorithm, which has global optimization properties. Because crossover and mutation operations are applied many times within the PSO and TS algorithms, the enhanced hybrid search algorithm is called the MCMPSO-TS (multiple crossover and mutation PSO-TS) algorithm. Experimental results show that the MCMPSO-TS algorithm can find the best solutions so far for the listed benchmarks, which will facilitate comparison with future approaches. Moreover, real protein sequences and Fibonacci sequences are verified in the 3D HP lattice model for the first time. Compared with previous evolutionary algorithms, the new hybrid search algorithm is novel, and can be used effectively to predict 3D protein folding structure. As amino acid sequences continue to develop and change, the new algorithm will also make a contribution to the study of new protein sequences.
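    Any search heuristic for the HP model, including the MCMPSO-TS hybrid, needs the same fitness function: the (negative) number of hydrophobic contacts between residues that are lattice neighbours but not chain neighbours. A small Python sketch of that evaluation for the 3D cubic lattice is given below; the conformation encoding (absolute integer coordinates) is an assumption, since papers often use relative move encodings instead.

```python
import numpy as np
from itertools import combinations

def hp_energy(sequence, coords):
    """Energy of a 3D HP-lattice conformation.

    sequence : string of 'H'/'P' residues.
    coords   : (N, 3) integer lattice coordinates of a self-avoiding walk.
    Scores -1 for every H-H pair that are unit-distance lattice neighbours
    but not consecutive along the chain (the standard HP fitness).
    """
    coords = np.asarray(coords, int)
    energy = 0
    for i, j in combinations(range(len(sequence)), 2):
        if sequence[i] == 'H' and sequence[j] == 'H' and j - i > 1:
            if np.abs(coords[i] - coords[j]).sum() == 1:   # lattice neighbours
                energy -= 1
    return energy

# toy 4-residue conformation folded into a square
seq = "HHPH"
conf = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
print(hp_energy(seq, conf))   # -1: residues 0 and 3 form one H-H contact
```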

  3. Segmentation and quantitative evaluation of brain MRI data with a multiphase 3D implicit deformable model

    NASA Astrophysics Data System (ADS)

    Angelini, Elsa D.; Song, Ting; Mensh, Brett D.; Laine, Andrew

    2004-05-01

    Segmentation of three-dimensional anatomical brain images into tissue classes has applications in both clinical and research settings. This paper presents the implementation and quantitative evaluation of a four-phase three-dimensional active contour implemented with a level set framework for automated segmentation of brain MRIs. The segmentation algorithm performs an optimal partitioning of three-dimensional data based on homogeneity measures that naturally evolves to the extraction of different tissue types in the brain. Random seed initialization was used to speed up numerical computation and avoid the need for a priori information. This random initialization ensures robustness of the method to variation of user expertise, biased a priori information and errors in input information that could be influenced by variations in image quality. Experimentation on three MRI brain data sets showed that an optimal partitioning successfully labeled regions that accurately identified white matter, gray matter and cerebrospinal fluid in the ventricles. Quantitative evaluation of the segmentation was performed with comparison to manually labeled data and computed false positive and false negative assignments of voxels for the three organs. We report high accuracy for the two comparison cases. These results demonstrate the efficiency and flexibility of this segmentation framework to perform the challenging task of automatically extracting brain tissue volume contours.

  4. Rule-based fuzzy vector median filters for 3D phase contrast MRI segmentation

    NASA Astrophysics Data System (ADS)

    Sundareswaran, Kartik S.; Frakes, David H.; Yoganathan, Ajit P.

    2008-02-01

    Recent technological advances have contributed to the advent of phase contrast magnetic resonance imaging (PCMRI) as standard practice in clinical environments. In particular, decreased scan times have made using the modality more feasible. PCMRI is now a common tool for flow quantification, and for more complex vector field analyses that target the early detection of problematic flow conditions. Segmentation is one component of this type of application that can impact the accuracy of the final product dramatically. Vascular segmentation, in general, is a long-standing problem that has received significant attention. Segmentation in the context of PCMRI data, however, has been explored less and can benefit from object-based image processing techniques that incorporate fluids specific information. Here we present a fuzzy rule-based adaptive vector median filtering (FAVMF) algorithm that in combination with active contour modeling facilitates high-quality PCMRI segmentation while mitigating the effects of noise. The FAVMF technique was tested on 111 synthetically generated PC MRI slices and on 15 patients with congenital heart disease. The results were compared to other multi-dimensional filters namely the adaptive vector median filter, the adaptive vector directional filter, and the scalar low pass filter commonly used in PC MRI applications. FAVMF significantly outperformed the standard filtering methods (p < 0.0001). Two conclusions can be drawn from these results: a) Filtering should be performed after vessel segmentation of PC MRI; b) Vector based filtering methods should be used instead of scalar techniques.
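    The core operation underlying FAVMF is the vector median: each velocity vector is replaced by the window sample that minimises the aggregate distance to all other samples in the window. The plain, non-adaptive version of that step is sketched below in Python; the fuzzy rules and adaptivity of the actual filter are not reproduced, and the window radius is an assumption.

```python
import numpy as np

def vector_median_filter(field, radius=1):
    """Plain (non-adaptive) vector median filter for a 2D velocity field.

    field : (H, W, 3) array of velocity vectors (e.g. one PC-MRI slice).
    Each output vector is the window sample minimising the sum of Euclidean
    distances to all other samples in its (2*radius+1)^2 neighbourhood.
    """
    H, W, _ = field.shape
    out = field.copy()
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            win = field[y0:y1, x0:x1].reshape(-1, 3)
            # pairwise distances between all vectors in the window
            d = np.linalg.norm(win[:, None, :] - win[None, :, :], axis=-1)
            out[y, x] = win[d.sum(axis=1).argmin()]
    return out

noisy = np.random.normal(0, 1, (32, 32, 3))
smoothed = vector_median_filter(noisy)
print(smoothed.shape)
```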

  5. Chest-wall segmentation in automated 3D breast ultrasound images using thoracic volume classification

    NASA Astrophysics Data System (ADS)

    Tan, Tao; van Zelst, Jan; Zhang, Wei; Mann, Ritse M.; Platel, Bram; Karssemeijer, Nico

    2014-03-01

    Computer-aided detection (CAD) systems are expected to improve the effectiveness and efficiency of radiologists in reading automated 3D breast ultrasound (ABUS) images. One challenging task in developing CAD is to reduce the large number of false positives. Many false positives originate from acoustic shadowing caused by ribs. Therefore, determining the location of the chest wall in ABUS is necessary in CAD systems to remove these false positives. Additionally, it can be used as an anatomical landmark for inter- and intra-modal image registration. In this work, we extended our previously developed chest-wall segmentation method, which fits a cylinder to automatically detected rib-surface points, by minimizing a cost function that adopts a region cost term computed from a thoracic volume classifier to improve segmentation accuracy. We examined the performance on a dataset of 52 images where our previously developed method fails. Using the region-based cost, the average mean distance of the annotated points to the segmented chest wall decreased from 7.57±2.76 mm to 6.22±2.86 mm.

  6. Automated multilayer segmentation and characterization in 3D spectral-domain optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Hu, Zhihong; Wu, Xiaodong; Hariri, Amirhossein; Sadda, SriniVas R.

    2013-03-01

    Spectral-domain optical coherence tomography (SD-OCT) is a 3-D imaging technique, allowing direct visualization of retinal morphology and architecture. The various layers of the retina may be affected differentially by various diseases. In this study, an automated graph-based multilayer approach was developed to sequentially segment eleven retinal surfaces including the inner retinal bands to the outer retinal bands in normal SD-OCT volume scans at three different stages. For stage 1, the four most detectable and/or distinct surfaces were identified in the four-times-downsampled images and were used as a priori positional information to limit the graph search for other surfaces at stage 2. Eleven surfaces were then detected in the two-times-downsampled images at stage 2, and refined in the original image space at stage 3 using the graph search integrating the estimated morphological shape models. Twenty macular SD-OCT (Heidelberg Spectralis) volume scans from 20 normal subjects (one eye per subject) were used in this study. The overall mean and absolute mean differences in border positions between the automated and manual segmentation for all 11 segmented surfaces were -0.20 +/- 0.53 voxels (-0.76 +/- 2.06 μm) and 0.82 +/- 0.64 voxels (3.19 +/- 2.46 μm). Intensity and thickness properties in the resultant retinal layers were investigated. This investigation in normal subjects may provide a comparative reference for subsequent investigations in eyes with disease.

  7. Segmentation of 3D cell membrane images by PDE methods and its applications.

    PubMed

    Mikula, K; Peyriéras, N; Remešíková, M; Stašová, O

    2011-06-01

    We present a set of techniques that enable us to segment objects from 3D cell membrane images. Particularly, we propose methods for detection of approximate cell nuclei centers, extraction of the inner cell boundaries, the surface of the organism and the intercellular borders--the so called intercellular skeleton. All methods are based on numerical solution of partial differential equations. The center detection problem is represented by a level set equation for advective motion in normal direction with curvature term. In case of the inner cell boundaries and the global surface, we use the generalized subjective surface model. The intercellular borders are segmented by the advective level set equation where the velocity field is given by the gradient of the signed distance function to the segmented inner cell boundaries. The distance function is computed by solving the time relaxed eikonal equation. We describe the mathematical models, explain their numerical approximation and finally we present various possible practical applications on the images of zebrafish embryogenesis--computation of important quantitative characteristics, evaluation of the cell shape, detection of cell divisions and others.

  8. Segmentation of Image Data from Complex Organotypic 3D Models of Cancer Tissues with Markov Random Fields

    PubMed Central

    Robinson, Sean; Guyon, Laurent; Nevalainen, Jaakko; Toriseva, Mervi

    2015-01-01

    Organotypic, three dimensional (3D) cell culture models of epithelial tumour types such as prostate cancer recapitulate key aspects of the architecture and histology of solid cancers. Morphometric analysis of multicellular 3D organoids is particularly important when additional components such as the extracellular matrix and tumour microenvironment are included in the model. The complexity of such models has so far limited their successful implementation. There is a great need for automatic, accurate and robust image segmentation tools to facilitate the analysis of such biologically relevant 3D cell culture models. We present a segmentation method based on Markov random fields (MRFs) and illustrate our method using 3D stack image data from an organotypic 3D model of prostate cancer cells co-cultured with cancer-associated fibroblasts (CAFs). The 3D segmentation output suggests that these cell types are in physical contact with each other within the model, which has important implications for tumour biology. Segmentation performance is quantified using ground truth labels and we show how each step of our method increases segmentation accuracy. We provide the ground truth labels along with the image data and code. Using independent image data we show that our segmentation method is also more generally applicable to other types of cellular microscopy and not only limited to fluorescence microscopy. PMID:26630674

  9. A segmentation algorithm for noisy images

    SciTech Connect

    Xu, Y.; Olman, V.; Uberbacher, E.C.

    1996-12-31

    This paper presents a 2-D image segmentation algorithm and addresses issues related to its performance on noisy images. The algorithm segments an image by first constructing a minimum spanning tree representation of the image and then partitioning the spanning tree into sub-trees representing different homogeneous regions. The spanning tree is partitioned in such a way that the sum of gray-level variations over all partitioned subtrees is minimized under the constraints that each subtree has at least a specified number of pixels and two adjacent subtrees have significantly different "average" gray-levels. Two types of noise, transmission errors and Gaussian additive noise, are considered and their effects on the segmentation algorithm are studied. Evaluation results have shown that the segmentation algorithm is robust in the presence of these two types of noise.
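    A crude illustration of the minimum-spanning-tree idea (without the paper's variance objective, minimum-size constraint or adjacency test) can be written with SciPy: build a 4-connected pixel graph weighted by gray-level differences, take its MST, cut edges above a threshold and label the resulting components. The threshold value is an assumption.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def mst_segment(image, cut_threshold=20.0):
    """Crude MST-based segmentation of a 2D gray-level image.

    Builds a 4-connected pixel graph weighted by absolute gray-level
    difference, takes its minimum spanning tree, removes MST edges whose
    weight exceeds cut_threshold and labels the remaining components.
    (The paper's size constraints and variance objective are not modelled.)
    """
    H, W = image.shape
    idx = np.arange(H * W).reshape(H, W)
    rows, cols, wts = [], [], []
    # horizontal and vertical 4-neighbour edges
    for (a, b) in [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]:
        rows.append(a.ravel()); cols.append(b.ravel())
        wts.append(np.abs(image.ravel()[a.ravel()] - image.ravel()[b.ravel()]))
    rows, cols = np.concatenate(rows), np.concatenate(cols)
    wts = np.concatenate(wts) + 1e-6            # avoid zero weights being dropped
    graph = coo_matrix((wts, (rows, cols)), shape=(H * W, H * W))
    mst = minimum_spanning_tree(graph).tocoo()
    keep = mst.data <= cut_threshold            # cut "strong" edges
    forest = coo_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])),
                        shape=(H * W, H * W))
    n, labels = connected_components(forest, directed=False)
    return labels.reshape(H, W), n

img = np.zeros((64, 64)); img[:, 32:] = 100
img += np.random.normal(0, 3, img.shape)
labels, n = mst_segment(img)
print(n, "regions")      # ideally two homogeneous regions plus a few specks
```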

  10. 3D printing optical watermark algorithms based on the combination of DWT and Fresnel transformation

    NASA Astrophysics Data System (ADS)

    Hu, Qi; Duan, Jin; Zhai, Di; Wang, LiNing

    2016-10-01

    As 3D printing technology gradually enters everyday life, the security issues it raises have become an urgent problem. This paper proposes 3D printing optical watermark algorithms based on the combination of the DWT and the Fresnel transformation, and uses an authorization key to restrict printing permissions for a 3D model. First, the algorithm applies an affine transform to the 3D model and computes the distances from the center of gravity to the vertices of the 3D object to generate a one-dimensional discrete signal; this signal is then wavelet transformed and the resulting coefficients are passed through the Fresnel transformation. A mathematical model is used to embed the watermark information, finally generating the watermarked 3D digital model. The algorithms were developed and tested with VC++.NET and the DIRECTX 9.0 SDK, and the results show robustness to translation, rotation and scaling of the 3D model within a fixed affine space, together with good watermark invisibility. The security and authorization of the 3D model are thus protected effectively.

  11. Sensitivity Analysis of the Scattering-Based SARBM3D Despeckling Algorithm

    PubMed Central

    Di Simone, Alessio

    2016-01-01

    Synthetic Aperture Radar (SAR) imagery greatly suffers from multiplicative speckle noise, typical of coherent image acquisition sensors, such as SAR systems. Therefore, a proper and accurate despeckling preprocessing step is almost mandatory to aid the interpretation and processing of SAR data by human users and computer algorithms, respectively. Very recently, a scattering-oriented version of the popular SAR Block-Matching 3D (SARBM3D) despeckling filter, named Scattering-Based (SB)-SARBM3D, was proposed. The new filter is based on the a priori knowledge of the local topography of the scene. In this paper, an experimental sensitivity analysis of the above-mentioned despeckling algorithm is carried out, and the main results are shown and discussed. In particular, the role of both electromagnetic and geometrical parameters of the surface and the impact of its scattering behavior are investigated. Furthermore, a comprehensive sensitivity analysis of the SB-SARBM3D filter against the Digital Elevation Model (DEM) resolution and the SAR image-DEM coregistration step is also provided. The sensitivity analysis shows a significant robustness of the algorithm against most of the surface parameters, while the DEM resolution plays a key role in the despeckling process. Furthermore, the SB-SARBM3D algorithm outperforms the original SARBM3D in the presence of the most realistic scattering behaviors of the surface. An actual scenario is also presented to assess the DEM role in real-life conditions. PMID:27347971

  12. Sensitivity Analysis of the Scattering-Based SARBM3D Despeckling Algorithm.

    PubMed

    Di Simone, Alessio

    2016-06-25

    Synthetic Aperture Radar (SAR) imagery greatly suffers from multiplicative speckle noise, typical of coherent image acquisition sensors, such as SAR systems. Therefore, a proper and accurate despeckling preprocessing step is almost mandatory to aid the interpretation and processing of SAR data by human users and computer algorithms, respectively. Very recently, a scattering-oriented version of the popular SAR Block-Matching 3D (SARBM3D) despeckling filter, named Scattering-Based (SB)-SARBM3D, was proposed. The new filter is based on the a priori knowledge of the local topography of the scene. In this paper, an experimental sensitivity analysis of the above-mentioned despeckling algorithm is carried out, and the main results are shown and discussed. In particular, the role of both electromagnetic and geometrical parameters of the surface and the impact of its scattering behavior are investigated. Furthermore, a comprehensive sensitivity analysis of the SB-SARBM3D filter against the Digital Elevation Model (DEM) resolution and the SAR image-DEM coregistration step is also provided. The sensitivity analysis shows a significant robustness of the algorithm against most of the surface parameters, while the DEM resolution plays a key role in the despeckling process. Furthermore, the SB-SARBM3D algorithm outperforms the original SARBM3D in the presence of the most realistic scattering behaviors of the surface. An actual scenario is also presented to assess the DEM role in real-life conditions.

  13. 3D cerebral MR image segmentation using multiple-classifier system.

    PubMed

    Amiri, Saba; Movahedi, Mohammad Mehdi; Kazemi, Kamran; Parsaei, Hossein

    2017-03-01

    The three soft brain tissues, white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF), identified in a magnetic resonance (MR) image via image segmentation techniques can aid in structural and functional brain analysis, measurement and visualization of the brain's anatomical structures, diagnosis of neurodegenerative disorders, and surgical planning and image-guided interventions, but only if the obtained segmentation results are correct. This paper presents a multiple-classifier-based system for automatic brain tissue segmentation from cerebral MR images. The developed system categorizes each voxel of a given MR image as GM, WM, or CSF. The algorithm consists of preprocessing, feature extraction, and supervised classification steps. In the first step, intensity non-uniformity in a given MR image is corrected and then non-brain tissues such as the skull, eyeballs, and skin are removed from the image. For each voxel, statistical and non-statistical features were computed and used as a feature vector representing the voxel. Three multilayer perceptron (MLP) neural networks trained using three different datasets were used as the base classifiers of the multiple-classifier system. The outputs of the base classifiers were fused using a majority voting scheme. Evaluation of the proposed system was performed using BrainWeb simulated MR images with different noise and intensity non-uniformity levels and internet brain segmentation repository (IBSR) real MR images. The quantitative assessment of the proposed method using Dice, Jaccard, and conformity coefficient metrics demonstrates improvement (around 5% for CSF) in terms of accuracy as compared to a single MLP classifier and existing methods and tools such as FSL-FAST and SPM. As accurately segmenting an MR image is of paramount importance for successfully promoting the clinical application of MR image segmentation techniques, the improvement obtained by using the multiple-classifier-based system is encouraging.
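    The fusion step of the multiple-classifier system reduces to a per-voxel majority vote over the base MLP decisions. The sketch below uses scikit-learn's MLPClassifier as a stand-in for the base networks and synthetic voxel features; the preprocessing, bias-field correction and actual feature definitions of the paper are omitted, and the hidden-layer size is an assumption.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_base_classifiers(datasets, hidden=(30,)):
    """Train one MLP per training dataset, given as (features, labels) pairs."""
    return [MLPClassifier(hidden_layer_sizes=hidden, max_iter=500,
                          random_state=0).fit(X, y) for X, y in datasets]

def majority_vote(models, X, n_classes=3):
    """Fuse base-classifier decisions by majority voting (0=CSF, 1=GM, 2=WM)."""
    votes = np.stack([m.predict(X) for m in models], axis=0)  # (n_models, n_voxels)
    return np.apply_along_axis(
        lambda v: np.bincount(v, minlength=n_classes).argmax(), 0, votes)

# toy voxel features: three synthetic training sets and one "test" image
rng = np.random.default_rng(1)
datasets = [(rng.normal(size=(300, 5)), rng.integers(0, 3, 300)) for _ in range(3)]
models = train_base_classifiers(datasets)
labels = majority_vote(models, rng.normal(size=(100, 5)))
print(labels[:10])
```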

  14. 3D segmentation of annulus fibrosus and nucleus pulposus from T2-weighted magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Castro-Mateos, Isaac; Pozo, Jose M.; Eltes, Peter E.; Del Rio, Luis; Lazary, Aron; Frangi, Alejandro F.

    2014-12-01

    Computational medicine aims at employing personalised computational models in diagnosis and treatment planning. The use of such models to help physicians in finding the best treatment for low back pain (LBP) is becoming popular. One of the challenges of creating such models is to derive patient-specific anatomical and tissue models of the lumbar intervertebral discs (IVDs), as a prior step. This article presents a segmentation scheme that obtains accurate results irrespective of the degree of IVD degeneration, including pathological discs with protrusion or herniation. The segmentation algorithm, employing a novel feature selector, iteratively deforms an initial shape, which is projected into a statistical shape model space at first and then, into a B-Spline space to improve accuracy. The method was tested on a MR dataset of 59 patients suffering from LBP. The images follow a standard T2-weighted protocol in coronal and sagittal acquisitions. These two image volumes were fused in order to overcome large inter-slice spacing. The agreement between expert-delineated structures, used here as gold-standard, and our automatic segmentation was evaluated using Dice Similarity Index and surface-to-surface distances, obtaining a mean error of 0.68 mm in the annulus segmentation and 1.88 mm in the nucleus, which are the best results with respect to the image resolution in the current literature.

  15. 3D segmentation of annulus fibrosus and nucleus pulposus from T2-weighted magnetic resonance images.

    PubMed

    Castro-Mateos, Isaac; Pozo, Jose M; Eltes, Peter E; Rio, Luis Del; Lazary, Aron; Frangi, Alejandro F

    2014-12-21

    Computational medicine aims at employing personalised computational models in diagnosis and treatment planning. The use of such models to help physicians in finding the best treatment for low back pain (LBP) is becoming popular. One of the challenges of creating such models is to derive patient-specific anatomical and tissue models of the lumbar intervertebral discs (IVDs), as a prior step. This article presents a segmentation scheme that obtains accurate results irrespective of the degree of IVD degeneration, including pathological discs with protrusion or herniation. The segmentation algorithm, employing a novel feature selector, iteratively deforms an initial shape, which is projected into a statistical shape model space at first and then, into a B-Spline space to improve accuracy. The method was tested on a MR dataset of 59 patients suffering from LBP. The images follow a standard T2-weighted protocol in coronal and sagittal acquisitions. These two image volumes were fused in order to overcome large inter-slice spacing. The agreement between expert-delineated structures, used here as gold-standard, and our automatic segmentation was evaluated using Dice Similarity Index and surface-to-surface distances, obtaining a mean error of 0.68 mm in the annulus segmentation and 1.88 mm in the nucleus, which are the best results with respect to the image resolution in the current literature.

  16. Deformable segmentation of 3D MR prostate images via distributed discriminative dictionary and ensemble learning

    SciTech Connect

    Guo, Yanrong; Shao, Yeqin; Gao, Yaozong; Price, True; Oto, Aytekin; Shen, Dinggang

    2014-07-15

    patches of the prostate surface and trained to adaptively capture the appearance in different prostate zones, thus achieving better local tissue differentiation. For each local region, multiple classifiers are trained based on the randomly selected samples and finally assembled by a specific fusion method. In addition to this nonparametric appearance model, a prostate shape model is learned from the shape statistics using a novel approach, sparse shape composition, which can model non-Gaussian distributions of shape variation and regularize the 3D mesh deformation by constraining it within the observed shape subspace. Results: The proposed method has been evaluated on two datasets consisting of T2-weighted MR prostate images. For the first (internal) dataset, the classification effectiveness of the authors' improved dictionary learning has been validated by comparing it with three other variants of traditional dictionary learning methods. The experimental results show that the authors' method yields a Dice Ratio of 89.1% compared to the manual segmentation, which is more accurate than the three state-of-the-art MR prostate segmentation methods under comparison. For the second dataset, the MICCAI 2012 challenge dataset, the authors' proposed method yields a Dice Ratio of 87.4%, which also achieves better segmentation accuracy than other methods under comparison. Conclusions: A new magnetic resonance image prostate segmentation method is proposed based on the combination of deformable model and dictionary learning methods, which achieves more accurate segmentation performance on prostate T2 MR images.

  17. A Novel Image Compression Algorithm for High Resolution 3D Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2014-06-01

    This research presents a novel algorithm to compress high-resolution images for accurate structured light 3D reconstruction. Structured light images contain a pattern of light and shadows projected on the surface of the object, which are captured by the sensor at very high resolutions. Our algorithm is concerned with compressing such images to a high degree with minimum loss without adversely affecting 3D reconstruction. The Compression Algorithm starts with a single level discrete wavelet transform (DWT) for decomposing an image into four sub-bands. The sub-band LL is transformed by DCT yielding a DC-matrix and an AC-matrix. The Minimize-Matrix-Size Algorithm is used to compress the AC-matrix while a DWT is applied again to the DC-matrix resulting in LL2, HL2, LH2 and HH2 sub-bands. The LL2 sub-band is transformed by DCT, while the Minimize-Matrix-Size Algorithm is applied to the other sub-bands. The proposed algorithm has been tested with images of different sizes within a 3D reconstruction scenario. The algorithm is demonstrated to be more effective than JPEG2000 and JPEG concerning higher compression rates with equivalent perceived quality and the ability to more accurately reconstruct the 3D models.
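    The front end of the pipeline (one-level DWT, then a DCT of the LL band split into DC and AC parts) can be sketched with PyWavelets and SciPy as below. The block-wise DCT, wavelet family and block size are assumptions made for illustration; the Minimize-Matrix-Size coding stage and the recursive treatment of the DC matrix are not reproduced.

```python
import numpy as np
import pywt
from scipy.fft import dctn

def first_stage(image, wavelet='db1', block=8):
    """First stage of a DWT+DCT compression front end (illustration only).

    1) one-level 2D DWT -> LL plus three detail sub-bands;
    2) block-wise DCT of the LL band, split into a DC matrix (one coefficient
       per block) and an AC matrix (the remaining coefficients per block).
    """
    LL, (cH, cV, cD) = pywt.dwt2(image.astype(float), wavelet)
    h = (LL.shape[0] // block) * block
    w = (LL.shape[1] // block) * block
    LL = LL[:h, :w]
    dc = np.zeros((h // block, w // block))
    ac = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            coeffs = dctn(LL[i:i + block, j:j + block], norm='ortho')
            dc[i // block, j // block] = coeffs[0, 0]
            ac.append(coeffs.ravel()[1:])            # AC coefficients of the block
    return dc, np.array(ac), (cH, cV, cD)

img = np.random.rand(256, 256) * 255
dc, ac, detail_bands = first_stage(img)
print(dc.shape, ac.shape)                            # (16, 16) (256, 63)
```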

  18. 3D Buried Utility Location Using A Marching-Cross-Section Algorithm for Multi-Sensor Data Fusion

    PubMed Central

    Dou, Qingxu; Wei, Lijun; Magee, Derek R.; Atkins, Phil R.; Chapman, David N.; Curioni, Giulio; Goddard, Kevin F.; Hayati, Farzad; Jenks, Hugo; Metje, Nicole; Muggleton, Jennifer; Pennock, Steve R.; Rustighi, Emiliano; Swingler, Steven G.; Rogers, Christopher D. F.; Cohn, Anthony G.

    2016-01-01

    We address the problem of accurately locating buried utility segments by fusing data from multiple sensors using a novel Marching-Cross-Section (MCS) algorithm. Five types of sensors are used in this work: Ground Penetrating Radar (GPR), Passive Magnetic Fields (PMF), Magnetic Gradiometer (MG), Low Frequency Electromagnetic Fields (LFEM) and Vibro-Acoustics (VA). As part of the MCS algorithm, a novel formulation of the extended Kalman Filter (EKF) is proposed for marching existing utility tracks from a scan cross-section (scs) to the next one; novel rules for initializing utilities based on hypothesized detections on the first scs and for associating predicted utility tracks with hypothesized detections in the following scss are introduced. Algorithms are proposed for generating virtual scan lines based on given hypothesized detections when different sensors do not share common scan lines, or when only the coordinates of the hypothesized detections are provided without any information of the actual survey scan lines. The performance of the proposed system is evaluated with both synthetic data and real data. The experimental results in this work demonstrate that the proposed MCS algorithm can locate multiple buried utility segments simultaneously, including both straight and curved utilities, and can separate intersecting segments. By using the probabilities of a hypothesized detection being a pipe or a cable together with its 3D coordinates, the MCS algorithm is able to discriminate a pipe and a cable close to each other. The MCS algorithm can be used for both post- and on-site processing. When it is used on site, the detected tracks on the current scs can help to determine the location and direction of the next scan line. The proposed “multi-utility multi-sensor” system has no limit to the number of buried utilities or the number of sensors, and the more sensor data used, the more buried utility segments can be detected with more accurate location and

  19. 3D Buried Utility Location Using A Marching-Cross-Section Algorithm for Multi-Sensor Data Fusion.

    PubMed

    Dou, Qingxu; Wei, Lijun; Magee, Derek R; Atkins, Phil R; Chapman, David N; Curioni, Giulio; Goddard, Kevin F; Hayati, Farzad; Jenks, Hugo; Metje, Nicole; Muggleton, Jennifer; Pennock, Steve R; Rustighi, Emiliano; Swingler, Steven G; Rogers, Christopher D F; Cohn, Anthony G

    2016-11-02

    We address the problem of accurately locating buried utility segments by fusing data from multiple sensors using a novel Marching-Cross-Section (MCS) algorithm. Five types of sensors are used in this work: Ground Penetrating Radar (GPR), Passive Magnetic Fields (PMF), Magnetic Gradiometer (MG), Low Frequency Electromagnetic Fields (LFEM) and Vibro-Acoustics (VA). As part of the MCS algorithm, a novel formulation of the extended Kalman Filter (EKF) is proposed for marching existing utility tracks from a scan cross-section (scs) to the next one; novel rules for initializing utilities based on hypothesized detections on the first scs and for associating predicted utility tracks with hypothesized detections in the following scss are introduced. Algorithms are proposed for generating virtual scan lines based on given hypothesized detections when different sensors do not share common scan lines, or when only the coordinates of the hypothesized detections are provided without any information of the actual survey scan lines. The performance of the proposed system is evaluated with both synthetic data and real data. The experimental results in this work demonstrate that the proposed MCS algorithm can locate multiple buried utility segments simultaneously, including both straight and curved utilities, and can separate intersecting segments. By using the probabilities of a hypothesized detection being a pipe or a cable together with its 3D coordinates, the MCS algorithm is able to discriminate a pipe and a cable close to each other. The MCS algorithm can be used for both post- and on-site processing. When it is used on site, the detected tracks on the current scs can help to determine the location and direction of the next scan line. The proposed "multi-utility multi-sensor" system has no limit to the number of buried utilities or the number of sensors, and the more sensor data used, the more buried utility segments can be detected with more accurate location and orientation.
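    At the heart of the MCS algorithm, each utility track is marched from one scan cross-section to the next with an extended Kalman filter predict/update cycle. A generic EKF step is sketched below; the toy state vector (depth and lateral offset, constant between sections) and the direct measurement model are hypothetical stand-ins, not the paper's formulation.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One extended-Kalman-filter predict/update cycle.

    x, P : state estimate and covariance from the previous cross-section.
    z    : hypothesized detection (measurement) on the current cross-section.
    f, F : process model and its Jacobian;  h, H : measurement model/Jacobian.
    Q, R : process and measurement noise covariances.
    """
    # predict: march the track to the next scan cross-section
    x_pred = f(x)
    F_k = F(x)
    P_pred = F_k @ P @ F_k.T + Q
    # update: associate with the detection and correct the track
    y = z - h(x_pred)                             # innovation
    H_k = H(x_pred)
    S = H_k @ P_pred @ H_k.T + R
    K = P_pred @ H_k.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new

# toy example: state = [depth, lateral offset], constant between sections,
# measurement observes both coordinates directly (hypothetical model)
f = lambda x: x;  F = lambda x: np.eye(2)
h = lambda x: x;  H = lambda x: np.eye(2)
Q, R = 0.01 * np.eye(2), 0.25 * np.eye(2)
x, P = np.array([1.2, 0.0]), np.eye(2)
x, P = ekf_step(x, P, np.array([1.25, 0.05]), f, F, h, H, Q, R)
print(x)
```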

  20. An image encryption algorithm based on 3D cellular automata and chaotic maps

    NASA Astrophysics Data System (ADS)

    Del Rey, A. Martín; Sánchez, G. Rodríguez

    2015-05-01

    A novel encryption algorithm to cipher digital images is presented in this work. The digital image is rendering into a three-dimensional (3D) lattice and the protocol consists of two phases: the confusion phase where 24 chaotic Cat maps are applied and the diffusion phase where a 3D cellular automata is evolved. The encryption method is shown to be secure against the most important cryptanalytic attacks.
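    The confusion phase relies on chaotic cat maps to permute the data before the cellular-automata diffusion phase. The classic 2D Arnold cat map permutation is sketched below as a minimal illustration; the paper's 24 three-dimensional cat maps and the 3D CA diffusion are not reproduced, and the map parameters are assumptions.

```python
import numpy as np

def arnold_cat_map(image, iterations=1, a=1, b=1):
    """Permute pixels of a square image with the 2D Arnold cat map.

    (x, y) -> ((x + a*y) mod N, (b*x + (a*b + 1)*y) mod N); the map matrix has
    determinant 1, so the permutation is invertible and can be undone during
    decryption.
    """
    N = image.shape[0]
    assert image.shape[0] == image.shape[1], "cat map needs a square image"
    x, y = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
    out = image.copy()
    for _ in range(iterations):
        nx = (x + a * y) % N
        ny = (b * x + (a * b + 1) * y) % N
        scrambled = np.empty_like(out)
        scrambled[nx, ny] = out[x, y]     # bijective scatter of all pixels
        out = scrambled
    return out

img = np.arange(64 * 64).reshape(64, 64)
confused = arnold_cat_map(img, iterations=5)
# the confusion phase only permutes values, so the multiset is unchanged
print(np.array_equal(np.sort(confused, axis=None), np.sort(img, axis=None)))  # True
```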

  1. Generalized recovery algorithm for 3D super-resolution microscopy using rotating point spread functions

    PubMed Central

    Shuang, Bo; Wang, Wenxiao; Shen, Hao; Tauzin, Lawrence J.; Flatebo, Charlotte; Chen, Jianbo; Moringo, Nicholas A.; Bishop, Logan D. C.; Kelly, Kevin F.; Landes, Christy F.

    2016-01-01

    Super-resolution microscopy with phase masks is a promising technique for 3D imaging and tracking. Due to the complexity of the resultant point spread functions, generalized recovery algorithms are still missing. We introduce a 3D super-resolution recovery algorithm that works for a variety of phase masks generating 3D point spread functions. A fast deconvolution process generates initial guesses, which are further refined by least squares fitting. Overfitting is suppressed using a machine learning determined threshold. Preliminary results on experimental data show that our algorithm can be used to super-localize 3D adsorption events within a porous polymer film and is useful for evaluating potential phase masks. Finally, we demonstrate that parallel computation on graphics processing units can reduce the processing time required for 3D recovery. Simulations reveal that, through desktop parallelization, the ultimate limit of real-time processing is possible. Our program is the first open source recovery program for generalized 3D recovery using rotating point spread functions. PMID:27488312

  2. Segmentation of Textures Defined on Flat vs. Layered Surfaces using Neural Networks: Comparison of 2D vs. 3D Representations.

    PubMed

    Oh, Sejong; Choe, Yoonsuck

    2007-08-01

    Texture boundary detection (or segmentation) is an important capability in human vision. Usually, texture segmentation is viewed as a 2D problem, as the definition of the problem itself assumes a 2D substrate. However, an interesting hypothesis emerges when we ask a question regarding the nature of textures: What are textures, and why did the ability to discriminate texture evolve or develop? A possible answer to this question is that textures naturally define physically distinct (i.e., occluded) surfaces. Hence, we can hypothesize that 2D texture segmentation may be an outgrowth of the ability to discriminate surfaces in 3D. In this paper, we conducted computational experiments with artificial neural networks to investigate the relative difficulty of learning to segment textures defined on flat 2D surfaces vs. those in 3D configurations where the boundaries are defined by occluding surfaces and their change over time due to the observer's motion. It turns out that learning is faster and more accurate in 3D, very much in line with our expectation. Furthermore, our results showed that the neural network's learned ability to segment texture in 3D transfers well into 2D texture segmentation, bolstering our initial hypothesis, and providing insights on the possible developmental origin of 2D texture segmentation function in human vision.

  3. Correlation based 3-D segmentation of the left ventricle in pediatric echocardiographic images using radio-frequency data.

    PubMed

    Nillesen, Maartje M; Lopata, Richard G P; Huisman, H J; Thijssen, Johan M; Kapusta, Livia; de Korte, Chris L

    2011-09-01

    Clinical diagnosis of heart disease might be substantially supported by automated segmentation of the endocardial surface in three-dimensional (3-D) echographic images. Because of the poor echogenicity contrast between blood and myocardial tissue in some regions and the inherent speckle noise, automated analysis of these images is challenging. A priori knowledge on the shape of the heart cannot always be relied on, e.g., in children with congenital heart disease, segmentation should be based on the echo features solely. The objective of this study was to investigate the merit of using temporal cross-correlation of radio-frequency (RF) data for automated segmentation of 3-D echocardiographic images. Maximum temporal cross-correlation (MCC) values were determined locally from the RF-data using an iterative 3-D technique. MCC values as well as a combination of MCC values and adaptive filtered, demodulated RF-data were used as an additional, external force in a deformable model approach to segment the endocardial surface and were tested against manually segmented surfaces. Results on 3-D full volume images (Philips, iE33) of 10 healthy children demonstrate that MCC values derived from the RF signal yield a useful parameter to distinguish between blood and myocardium in regions with low echogenicity contrast and incorporation of MCC improves the segmentation results significantly. Further investigation of the MCC over the whole cardiac cycle is required to exploit the full benefit of it for automated segmentation.
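    The discriminating feature here is the maximum temporal cross-correlation of RF windows between consecutive frames: myocardium stays highly correlated while blood decorrelates quickly. A 1-D, single-line sketch of that computation with a small axial search is given below; window length, search range and the iterative 3-D scheme of the paper are simplified away.

```python
import numpy as np

def max_temporal_correlation(rf_t0, rf_t1, win=64, search=8):
    """Maximum normalised cross-correlation between corresponding RF windows
    of two consecutive frames, per window position along one RF line.

    rf_t0, rf_t1 : 1-D RF lines from consecutive frames.
    win          : window length in samples.
    search       : maximum axial shift (samples) explored in the next frame.
    """
    n = (len(rf_t0) - win - search) // win
    mcc = np.zeros(n)
    for k in range(n):
        start = k * win + search
        a = rf_t0[start:start + win]
        a = (a - a.mean()) / (a.std() + 1e-12)
        best = -1.0
        for s in range(-search, search + 1):            # small axial search
            b = rf_t1[start + s:start + s + win]
            b = (b - b.mean()) / (b.std() + 1e-12)
            best = max(best, float(np.dot(a, b) / win))
        mcc[k] = best
    return mcc   # high values ~ myocardium, low values ~ decorrelated blood

rng = np.random.default_rng(0)
tissue = rng.normal(size=4096)
frame0 = tissue + 0.05 * rng.normal(size=4096)          # nearly identical frames
frame1 = tissue + 0.05 * rng.normal(size=4096)
print(max_temporal_correlation(frame0, frame1)[:5])     # values close to 1
```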

  4. Parallel OSEM Reconstruction Algorithm for Fully 3-D SPECT on a Beowulf Cluster.

    PubMed

    Rong, Zhou; Tianyu, Ma; Yongjie, Jin

    2005-01-01

    In order to improve the computation speed of ordered subset expectation maximization (OSEM) algorithm for fully 3-D single photon emission computed tomography (SPECT) reconstruction, an experimental beowulf-type cluster was built and several parallel reconstruction schemes were described. We implemented a single-program-multiple-data (SPMD) parallel 3-D OSEM reconstruction algorithm based on message passing interface (MPI) and tested it with combinations of different number of calculating processors and different size of voxel grid in reconstruction (64×64×64 and 128×128×128). Performance of parallelization was evaluated in terms of the speedup factor and parallel efficiency. This parallel implementation methodology is expected to be helpful to make fully 3-D OSEM algorithms more feasible in clinical SPECT studies.
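    One OSEM sub-iteration multiplies the current image by the back-projected ratio of measured to estimated data for a single ordered subset, normalised by the subset sensitivity. A serial NumPy sketch with a dense toy system matrix is shown below; the MPI/SPMD distribution of subsets across cluster nodes described in the paper is not reproduced.

```python
import numpy as np

def osem(A, y, subsets, n_iter=5):
    """Serial OSEM reconstruction sketch.

    A       : (n_bins, n_voxels) system matrix (dense here for clarity).
    y       : (n_bins,) measured projection data.
    subsets : list of index arrays partitioning the projection bins.
    """
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for s in subsets:
            As = A[s]                          # rows of this ordered subset
            fp = As @ x                        # forward projection
            ratio = y[s] / np.maximum(fp, 1e-12)
            x *= (As.T @ ratio) / np.maximum(As.T @ np.ones(len(s)), 1e-12)
    return x

# tiny synthetic problem
rng = np.random.default_rng(0)
A = rng.random((60, 16))
x_true = rng.random(16)
y = A @ x_true
subsets = np.array_split(np.arange(60), 4)    # 4 ordered subsets
x_hat = osem(A, y, subsets, n_iter=20)
print(np.round(np.corrcoef(x_hat, x_true)[0, 1], 3))
```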

  5. An efficient finite-element algorithm for 3D layered complex structure modelling.

    PubMed

    Sahalos, J N; Kyriacou, G A; Vafiadis, E

    1994-05-01

    In this paper an efficient finite-element method (FEM) algorithm for complicated three-dimensional (3D) layered type models has been developed. Its unique feature is that it can handle, with memory requirements within the abilities of a simple PC, arbitrarily shaped 3D elements. This task is achieved by storing only the non-zero coefficients of the sparse FEM system of equations. The algorithm is applied to the solution of the Laplace equation in models with up to 79 layers of trilinear general hexahedron elements. The system of equations is solved with the Gauss-Seidel iterative technique.

  6. 3-D segmentation of retinal blood vessels in spectral-domain OCT volumes of the optic nerve head

    NASA Astrophysics Data System (ADS)

    Lee, Kyungmoo; Abràmoff, Michael D.; Niemeijer, Meindert; Garvin, Mona K.; Sonka, Milan

    2010-03-01

    Segmentation of retinal blood vessels can provide important information for detecting and tracking retinal vascular diseases including diabetic retinopathy, arterial hypertension, arteriosclerosis and retinopathy of prematurity (ROP). Many studies on 2-D segmentation of retinal blood vessels from a variety of medical images have been performed. However, 3-D segmentation of retinal blood vessels from spectral-domain optical coherence tomography (OCT) volumes, which is capable of providing geometrically accurate vessel models, to the best of our knowledge, has not been previously studied. The purpose of this study is to develop and evaluate a method that can automatically detect 3-D retinal blood vessels from spectral-domain OCT scans centered on the optic nerve head (ONH). The proposed method utilized a fast multiscale 3-D graph search to segment retinal surfaces as well as a triangular mesh-based 3-D graph search to detect retinal blood vessels. An experiment on 30 ONH-centered OCT scans (15 right eye scans and 15 left eye scans) from 15 subjects was performed, and the mean unsigned error in 3-D of the computer segmentations compared with the independent standard obtained from a retinal specialist was 3.4 +/- 2.5 voxels (0.10 +/- 0.07 mm).

  7. Probabilistic Neighborhood-Based Data Collection Algorithms for 3D Underwater Acoustic Sensor Networks

    PubMed Central

    Han, Guangjie; Li, Shanshan; Zhu, Chunsheng; Jiang, Jinfang; Zhang, Wenbo

    2017-01-01

    Marine environmental monitoring provides crucial information and support for the exploitation, utilization, and protection of marine resources. With the rapid development of information technology, the development of three-dimensional underwater acoustic sensor networks (3D UASNs) provides a novel strategy to acquire marine environment information conveniently, efficiently and accurately. However, the specific propagation effects of acoustic communication channel lead to decreased successful information delivery probability with increased distance. Therefore, we investigate two probabilistic neighborhood-based data collection algorithms for 3D UASNs which are based on a probabilistic acoustic communication model instead of the traditional deterministic acoustic communication model. An autonomous underwater vehicle (AUV) is employed to traverse along the designed path to collect data from neighborhoods. For 3D UASNs without prior deployment knowledge, partitioning the network into grids can allow the AUV to visit the central location of each grid for data collection. For 3D UASNs in which the deployment knowledge is known in advance, the AUV only needs to visit several selected locations by constructing a minimum probabilistic neighborhood covering set to reduce data latency. Otherwise, by increasing the transmission rounds, our proposed algorithms can provide a tradeoff between data collection latency and information gain. These algorithms are compared with basic Nearest-neighbor Heuristic algorithm via simulations. Simulation analyses show that our proposed algorithms can efficiently reduce the average data collection completion time, corresponding to a decrease of data latency. PMID:28208735

  8. Generalized total least squares prediction algorithm for universal 3D similarity transformation

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Li, Jiancheng; Liu, Chao; Yu, Jie

    2017-02-01

    Three-dimensional (3D) similarity datum transformation is extensively applied to transform coordinates from GNSS-based datum to a local coordinate system. Recently, some total least squares (TLS) algorithms have been successfully developed to solve the universal 3D similarity transformation problem (probably with big rotation angles and an arbitrary scale ratio). However, their procedures of the parameter estimation and new point (non-common point) transformation were implemented separately, and the statistical correlation which often exists between the common and new points in the original coordinate system was not considered. In this contribution, a generalized total least squares prediction (GTLSP) algorithm, which implements the parameter estimation and new point transformation synthetically, is proposed. All of the random errors in the original and target coordinates, and their variance-covariance information will be considered. The 3D transformation model in this case is abstracted as a kind of generalized errors-in-variables (EIV) model and the equation for new point transformation is incorporated into the functional model as well. Then the iterative solution is derived based on the Gauss-Newton approach of nonlinear least squares. The performance of GTLSP algorithm is verified in terms of a simulated experiment, and the results show that GTLSP algorithm can improve the statistical accuracy of the transformed coordinates compared with the existing TLS algorithms for 3D similarity transformation.
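    For orientation, the non-EIV baseline that the TLS/GTLSP methods improve upon is the classical closed-form least-squares similarity transform (Umeyama/Procrustes), which assumes the source coordinates are error-free. A sketch of that baseline is below; it handles large rotation angles and an arbitrary scale ratio, but not the errors-in-variables weighting or the joint new-point prediction of the GTLSP algorithm.

```python
import numpy as np

def similarity_transform(src, dst):
    """Closed-form least-squares 3D similarity transform (Umeyama/Procrustes).

    Estimates scale s, rotation R and translation t such that
    dst ~ s * R @ src + t, assuming errors only in dst (unlike the EIV/TLS
    formulation, which treats both coordinate sets as noisy).
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    S, D = src - mu_s, dst - mu_d
    U, sig, Vt = np.linalg.svd(D.T @ S / len(src))
    C = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:              # guard against reflections
        C[2, 2] = -1.0
    R = U @ C @ Vt
    scale = np.trace(np.diag(sig) @ C) / S.var(axis=0).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

# simulated common points with a big rotation angle and an arbitrary scale ratio
rng = np.random.default_rng(2)
src = rng.random((20, 3)) * 100
angle = np.deg2rad(140)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
dst = 2.5 * src @ R_true.T + np.array([10., -5., 3.]) + rng.normal(0, 0.01, (20, 3))
s, R, t = similarity_transform(src, dst)
print(round(s, 3))   # ~2.5
```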

  9. Probabilistic Neighborhood-Based Data Collection Algorithms for 3D Underwater Acoustic Sensor Networks.

    PubMed

    Han, Guangjie; Li, Shanshan; Zhu, Chunsheng; Jiang, Jinfang; Zhang, Wenbo

    2017-02-08

    Marine environmental monitoring provides crucial information and support for the exploitation, utilization, and protection of marine resources. With the rapid development of information technology, three-dimensional underwater acoustic sensor networks (3D UASNs) provide a novel strategy to acquire marine environment information conveniently, efficiently, and accurately. However, the propagation characteristics of the acoustic communication channel cause the probability of successful information delivery to decrease with distance. Therefore, we investigate two probabilistic neighborhood-based data collection algorithms for 3D UASNs which are based on a probabilistic acoustic communication model instead of the traditional deterministic one. An autonomous underwater vehicle (AUV) traverses a designed path to collect data from neighborhoods. For 3D UASNs without prior deployment knowledge, partitioning the network into grids allows the AUV to visit the central location of each grid for data collection. For 3D UASNs in which the deployment knowledge is known in advance, the AUV only needs to visit several selected locations, chosen by constructing a minimum probabilistic neighborhood covering set, to reduce data latency. Moreover, by increasing the number of transmission rounds, the proposed algorithms provide a tradeoff between data collection latency and information gain. These algorithms are compared with the basic Nearest-neighbor Heuristic algorithm via simulations. Simulation analyses show that the proposed algorithms efficiently reduce the average data collection completion time, corresponding to a decrease in data latency.
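
    A minimal sketch of the covering-set idea follows: given candidate AUV stop locations (for example, grid centres), it greedily adds stops until every sensor is covered with at least a target delivery probability, and it models extra transmission rounds as independent retries. The exponential link model, the parameter names, and the thresholds are illustrative assumptions, not the model used in the paper.

```python
import numpy as np

def delivery_probability(d, d_ref=80.0):
    """Toy probabilistic acoustic link model: delivery probability decays
    exponentially with distance (illustrative only)."""
    return np.exp(-d / d_ref)

def greedy_covering_set(sensors, candidates, p_min=0.9, rounds=1):
    """Greedy approximation of a minimum probabilistic neighborhood covering
    set: pick candidate AUV stops until every sensor is covered with
    probability >= p_min over the given number of transmission rounds."""
    sensors = np.asarray(sensors, dtype=float)
    candidates = np.asarray(candidates, dtype=float)
    d = np.linalg.norm(sensors[:, None, :] - candidates[None, :, :], axis=2)
    p_single = delivery_probability(d)
    p_cover = 1.0 - (1.0 - p_single) ** rounds   # more rounds -> higher gain
    covered = np.zeros(len(sensors), dtype=bool)
    chosen = []
    while not covered.all():
        gains = ((p_cover >= p_min) & ~covered[:, None]).sum(axis=0)
        if gains.max() == 0:                     # remaining sensors unreachable
            break
        j = int(gains.argmax())
        chosen.append(j)
        covered |= p_cover[:, j] >= p_min
    return chosen, covered

# Example: 50 sensors and a 4x4x2 grid of candidate stops in a 400 m cube
rng = np.random.default_rng(0)
sensors = rng.uniform(0, 400, size=(50, 3))
gx, gy, gz = np.meshgrid([50, 150, 250, 350], [50, 150, 250, 350], [100, 300])
candidates = np.stack([gx.ravel(), gy.ravel(), gz.ravel()], axis=1)
stops, covered = greedy_covering_set(sensors, candidates, p_min=0.5, rounds=3)
print(len(stops), "stops cover", covered.sum(), "of", len(sensors), "sensors")
```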

  10. Intra-retinal layer segmentation of 3D optical coherence tomography using coarse grained diffusion map.

    PubMed

    Kafieh, Raheleh; Rabbani, Hossein; Abramoff, Michael D; Sonka, Milan

    2013-12-01

    Optical coherence tomography (OCT) is a powerful and noninvasive method for retinal imaging. In this paper, we introduce a fast segmentation method based on a new variant of spectral graph theory named diffusion maps. The research is performed on spectral domain (SD) OCT images depicting macular and optic nerve head appearance. The presented approach does not require edge-based image information in localizing most boundaries and relies on regional image texture. Consequently, the proposed method demonstrates robustness in situations of low image contrast or poor layer-to-layer image gradients. Diffusion mapping applied to 2D and 3D OCT datasets is composed of two steps, one for partitioning the data into important and less important sections, and the other for localization of internal layers. In the first step, the pixels/voxels are grouped in rectangular/cubic sets to form graph nodes. The weights of the graph are calculated based on geometric distances between pixels/voxels and differences of their mean intensity. The first diffusion map clusters the data into three parts, the second of which is the area of interest. The other two sections are eliminated from the remaining calculations. In the second step, the remaining area is subjected to another diffusion map assessment and the internal layers are localized based on their textural similarities. The proposed method was tested on 23 datasets from two patient groups (glaucoma and normal). The mean unsigned border positioning error (mean ± SD) was 8.52 ± 3.13 μm and 7.56 ± 2.95 μm for the 2D and 3D methods, respectively.
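
    The sketch below illustrates the graph construction and embedding step described above in a minimal form: node affinities combine mean-intensity differences and geometric distances, and the diffusion-map coordinates are taken from the leading eigenvectors of the normalised affinity matrix. The parameter names and the symmetric normalisation are assumptions for illustration; clustering the embedding (for example, into three groups) would follow as a separate step.

```python
import numpy as np

def diffusion_map(features, coords, sigma_f=0.1, sigma_x=5.0, n_components=3, t=1):
    """Minimal diffusion-map embedding of graph nodes (groups of pixels/voxels):
    affinities combine the difference of mean intensities and the geometric
    distance between node centres."""
    features = np.asarray(features, dtype=float)
    coords = np.asarray(coords, dtype=float)
    df = np.abs(features[:, None] - features[None, :])
    dx = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    W = np.exp(-(df / sigma_f) ** 2 - (dx / sigma_x) ** 2)
    D = W.sum(axis=1)
    Dm = np.diag(1.0 / np.sqrt(D))
    A = Dm @ W @ Dm                      # symmetrically normalised affinity
    vals, vecs = np.linalg.eigh(A)       # real spectrum, ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]
    # drop the trivial leading eigenvector; scale by eigenvalues^t
    return vecs[:, 1:n_components + 1] * vals[1:n_components + 1] ** t
```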

  11. Medical image segmentation using genetic algorithms.

    PubMed

    Maulik, Ujjwal

    2009-03-01

    Genetic algorithms (GAs) have been found to be effective in the domain of medical image segmentation, since the problem can often be mapped to one of search in a complex and multimodal landscape. The challenges in medical image segmentation arise due to poor image contrast and artifacts that result in missing or diffuse organ/tissue boundaries. The resulting search space is therefore often noisy with a multitude of local optima. Not only does the genetic algorithmic framework prove effective in escaping local optima, it also brings considerable flexibility into the segmentation procedure. In this paper, an attempt has been made to review the major applications of GAs to the domain of medical image segmentation.
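
    As a toy example of how a GA can drive a segmentation criterion across a multimodal search space, the sketch below evolves a single global threshold that maximises Otsu's between-class variance. The applications reviewed in the paper optimise far richer representations (contours, cluster centres, rule sets); the fitness function, operators, and parameters here are purely illustrative.

```python
import numpy as np

def otsu_fitness(threshold, image):
    """Between-class variance (Otsu's criterion), used here as a toy fitness."""
    fg, bg = image[image >= threshold], image[image < threshold]
    if fg.size == 0 or bg.size == 0:
        return 0.0
    w1, w2 = fg.size / image.size, bg.size / image.size
    return w1 * w2 * (fg.mean() - bg.mean()) ** 2

def ga_threshold(image, pop_size=20, generations=50, sigma=5.0, seed=0):
    """Toy GA: real-valued individuals are candidate thresholds, evolved with
    truncation selection, arithmetic crossover and Gaussian mutation."""
    rng = np.random.default_rng(seed)
    lo, hi = float(image.min()), float(image.max())
    pop = rng.uniform(lo, hi, pop_size)
    for _ in range(generations):
        scores = np.array([otsu_fitness(t, image) for t in pop])
        parents = pop[np.argsort(scores)[-(pop_size // 2):]]   # keep the best half
        p1 = rng.choice(parents, pop_size - parents.size)
        p2 = rng.choice(parents, pop_size - parents.size)
        children = 0.5 * (p1 + p2) + rng.normal(0, sigma, p1.size)
        pop = np.clip(np.concatenate([parents, children]), lo, hi)
    return pop[np.argmax([otsu_fitness(t, image) for t in pop])]
```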

  12. A Parallel Numerical Algorithm To Solve Linear Systems Of Equations Emerging From 3D Radiative Transfer

    NASA Astrophysics Data System (ADS)

    Wichert, Viktoria; Arkenberg, Mario; Hauschildt, Peter H.

    2016-10-01

    Highly resolved state-of-the-art 3D atmosphere simulations will remain computationally extremely expensive for years to come. In addition to the need for more computing power, rethinking coding practices is necessary. We take a dual approach by introducing especially adapted, parallel numerical methods and correspondingly parallelizing critical code passages. In the following, we present our respective work on PHOENIX/3D. With new parallel numerical algorithms, there is a big opportunity for improvement when iteratively solving the system of equations emerging from the operator splitting of the radiative transfer equation J = ΛS. The narrow-banded approximate Λ-operator Λ* , which is used in PHOENIX/3D, occurs in each iteration step. By implementing a numerical algorithm which takes advantage of its characteristic traits, the parallel code's efficiency is further increased and a speed-up in computational time can be achieved.

  13. An Algorithm to Identify and Localize Suitable Dock Locations from 3-D LiDAR Scans

    DTIC Science & Technology

    2013-05-10

    (3-D) LiDARs have proved themselves very useful on many autonomous ground vehicles, such as the Google Driverless Car Project, the DARPA, Defense...appear in a typical point cloud data set, relative to other clusters such as cars, trees, boulders, etc. In this algorithm, these values were

  14. A fast rebinning algorithm for 3D positron emission tomography using John's equation

    NASA Astrophysics Data System (ADS)

    Defrise, Michel; Liu, Xuan

    1999-08-01

    Volume imaging in positron emission tomography (PET) requires the inversion of the three-dimensional (3D) x-ray transform. The usual solution to this problem is based on 3D filtered-backprojection (FBP), but is slow. Alternative methods have been proposed which factor the 3D data into independent 2D data sets corresponding to the 2D Radon transforms of a stack of parallel slices. Each slice is then reconstructed using 2D FBP. These so-called rebinning methods are numerically efficient but are approximate. In this paper a new exact rebinning method is derived by exploiting the fact that the 3D x-ray transform of a function is the solution to the second-order partial differential equation first studied by John. The method is proposed for two sampling schemes, one corresponding to a pair of infinite plane detectors and another one corresponding to a cylindrical multi-ring PET scanner. The new FORE-J algorithm has been implemented for this latter geometry and was compared with the approximate Fourier rebinning algorithm FORE and with another exact rebinning algorithm, FOREX. Results with simulated data demonstrate a significant improvement in accuracy compared to FORE, while the reconstruction time is doubled. Compared to FOREX, the FORE-J algorithm is slightly less accurate but more than three times faster.

  15. 3D image reconstruction algorithms for cryo-electron-microscopy images of virus particles

    NASA Astrophysics Data System (ADS)

    Doerschuk, Peter C.; Johnson, John E.

    2000-11-01

    A statistical model for the object and the complete image formation process in cryo electron microscopy of viruses is presented. Using this model, maximum likelihood reconstructions of the 3D structure of viruses are computed using the expectation maximization algorithm and an example based on Cowpea mosaic virus is provided.

  16. Computer-aided diagnosis of pulmonary nodules on CT scans: segmentation and classification using 3D active contours.

    PubMed

    Way, Ted W; Hadjiiski, Lubomir M; Sahiner, Berkman; Chan, Heang-Ping; Cascade, Philip N; Kazerooni, Ella A; Bogot, Naama; Zhou, Chuan

    2006-07-01

    We are developing a computer-aided diagnosis (CAD) system to classify malignant and benign lung nodules found on CT scans. A fully automated system was designed to segment the nodule from its surrounding structured background in a local volume of interest (VOI) and to extract image features for classification. Image segmentation was performed with a three-dimensional (3D) active contour (AC) method. A data set of 96 lung nodules (44 malignant, 52 benign) from 58 patients was used in this study. The 3D AC model is based on two-dimensional AC with the addition of three new energy components to take advantage of 3D information: (1) 3D gradient, which guides the active contour to seek the object surface, (2) 3D curvature, which imposes a smoothness constraint in the z direction, and (3) mask energy, which penalizes contours that grow beyond the pleura or thoracic wall. The search for the best energy weights in the 3D AC model was guided by a simplex optimization method. Morphological and gray-level features were extracted from the segmented nodule. The rubber band straightening transform (RBST) was applied to the shell of voxels surrounding the nodule. Texture features based on run-length statistics were extracted from the RBST image. A linear discriminant analysis classifier with stepwise feature selection was designed using a second simplex optimization to select the most effective features. Leave-one-case-out resampling was used to train and test the CAD system. The system achieved a test area under the receiver operating characteristic curve (A(z)) of 0.83 +/- 0.04. Our preliminary results indicate that use of the 3D AC model and the 3D texture features surrounding the nodule is a promising approach to the segmentation and classification of lung nodules with CAD. The segmentation performance of the 3D AC model trained with our data set was evaluated with 23 nodules available in the Lung Image Database Consortium (LIDC). The lung nodule volumes segmented by the 3D

  17. Computer-aided diagnosis of pulmonary nodules on CT scans: Segmentation and classification using 3D active contours

    PubMed Central

    Way, Ted W.; Hadjiiski, Lubomir M.; Sahiner, Berkman; Chan, Heang-Ping; Cascade, Philip N.; Kazerooni, Ella A.; Bogot, Naama; Zhou, Chuan

    2009-01-01

    We are developing a computer-aided diagnosis (CAD) system to classify malignant and benign lung nodules found on CT scans. A fully automated system was designed to segment the nodule from its surrounding structured background in a local volume of interest (VOI) and to extract image features for classification. Image segmentation was performed with a three-dimensional (3D) active contour (AC) method. A data set of 96 lung nodules (44 malignant, 52 benign) from 58 patients was used in this study. The 3D AC model is based on two-dimensional AC with the addition of three new energy components to take advantage of 3D information: (1) 3D gradient, which guides the active contour to seek the object surface, (2) 3D curvature, which imposes a smoothness constraint in the z direction, and (3) mask energy, which penalizes contours that grow beyond the pleura or thoracic wall. The search for the best energy weights in the 3D AC model was guided by a simplex optimization method. Morphological and gray-level features were extracted from the segmented nodule. The rubber band straightening transform (RBST) was applied to the shell of voxels surrounding the nodule. Texture features based on run-length statistics were extracted from the RBST image. A linear discriminant analysis classifier with stepwise feature selection was designed using a second simplex optimization to select the most effective features. Leave-one-case-out resampling was used to train and test the CAD system. The system achieved a test area under the receiver operating characteristic curve (Az) of 0.83±0.04. Our preliminary results indicate that use of the 3D AC model and the 3D texture features surrounding the nodule is a promising approach to the segmentation and classification of lung nodules with CAD. The segmentation performance of the 3D AC model trained with our data set was evaluated with 23 nodules available in the Lung Image Database Consortium (LIDC). The lung nodule volumes segmented by the 3D AC

  18. Segmentation and tracking of adherens junctions in 3D for the analysis of epithelial tissue morphogenesis.

    PubMed

    Cilla, Rodrigo; Mechery, Vinodh; Hernandez de Madrid, Beatriz; Del Signore, Steven; Dotu, Ivan; Hatini, Victor

    2015-04-01

    Epithelial morphogenesis generates the shape of tissues, organs and embryos and is fundamental for their proper function. It is a dynamic process that occurs at multiple spatial scales from macromolecular dynamics, to cell deformations, mitosis and apoptosis, to coordinated cell rearrangements that lead to global changes of tissue shape. Using time lapse imaging, it is possible to observe these events at a system level. However, to investigate morphogenetic events it is necessary to develop computational tools to extract quantitative information from the time lapse data. Toward this goal, we developed an image-based computational pipeline to preprocess, segment and track epithelial cells in 4D confocal microscopy data. The computational pipeline we developed, for the first time, detects the adherens junctions of epithelial cells in 3D, without the need to first detect cell nuclei. We accentuate and detect cell outlines in a series of steps, symbolically describe the cells and their connectivity, and employ this information to track the cells. We validated the performance of the pipeline for its ability to detect vertices and cell-cell contacts, track cells, and identify mitosis and apoptosis in surface epithelia of Drosophila imaginal discs. We demonstrate the utility of the pipeline to extract key quantitative features of cell behavior with which to elucidate the dynamics and biomechanical control of epithelial tissue morphogenesis. We have made our methods and data available as an open-source multiplatform software tool called TTT (http://github.com/morganrcu/TTT).

  19. Segmentation and Tracking of Adherens Junctions in 3D for the Analysis of Epithelial Tissue Morphogenesis

    PubMed Central

    Cilla, Rodrigo; Mechery, Vinodh; Hernandez de Madrid, Beatriz; Del Signore, Steven; Dotu, Ivan; Hatini, Victor

    2015-01-01

    Epithelial morphogenesis generates the shape of tissues, organs and embryos and is fundamental for their proper function. It is a dynamic process that occurs at multiple spatial scales from macromolecular dynamics, to cell deformations, mitosis and apoptosis, to coordinated cell rearrangements that lead to global changes of tissue shape. Using time lapse imaging, it is possible to observe these events at a system level. However, to investigate morphogenetic events it is necessary to develop computational tools to extract quantitative information from the time lapse data. Toward this goal, we developed an image-based computational pipeline to preprocess, segment and track epithelial cells in 4D confocal microscopy data. The computational pipeline we developed, for the first time, detects the adherens junctions of epithelial cells in 3D, without the need to first detect cell nuclei. We accentuate and detect cell outlines in a series of steps, symbolically describe the cells and their connectivity, and employ this information to track the cells. We validated the performance of the pipeline for its ability to detect vertices and cell-cell contacts, track cells, and identify mitosis and apoptosis in surface epithelia of Drosophila imaginal discs. We demonstrate the utility of the pipeline to extract key quantitative features of cell behavior with which to elucidate the dynamics and biomechanical control of epithelial tissue morphogenesis. We have made our methods and data available as an open-source multiplatform software tool called TTT (http://github.com/morganrcu/TTT) PMID:25884654

  20. NCC-RANSAC: A Fast Plane Extraction Method for 3-D Range Data Segmentation

    PubMed Central

    Qian, Xiangfei; Ye, Cang

    2015-01-01

    This paper presents a new plane extraction (PE) method based on the random sample consensus (RANSAC) approach. The generic RANSAC-based PE algorithm may over-extract a plane, and it may fail in the case of a multistep scene where the RANSAC procedure results in multiple inlier patches that form a slant plane straddling the steps. The CC-RANSAC PE algorithm successfully overcomes the latter limitation if the inlier patches are separate. However, it fails if the inlier patches are connected. A typical scenario is a stairway with a stair wall, where the RANSAC plane-fitting procedure results in inlier patches in the tread, riser, and stair wall planes; these patches connect and form a single plane. The proposed method, called normal-coherence CC-RANSAC (NCC-RANSAC), performs a normal coherence check on all data points of the inlier patches and removes the data points whose normal directions contradict that of the fitted plane. This process results in separate inlier patches, each of which is treated as a candidate plane. A recursive plane clustering process is then executed to grow each of the candidate planes until all planes are extracted in their entireties. The RANSAC plane-fitting and the recursive plane clustering processes are repeated until no more planes are found. A probabilistic model is introduced to predict the success probability of the NCC-RANSAC algorithm and is validated with real data from a 3-D time-of-flight camera (SwissRanger SR4000). Experimental results demonstrate that the proposed method extracts more accurate planes with less computational time than the existing RANSAC-based methods. PMID:24771605
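
    A minimal sketch of the core idea, under assumed parameter names and tolerances: a RANSAC plane fit followed by the normal-coherence check that removes inlier points whose normals disagree with the fitted plane. The recursive plane clustering and the probabilistic success model of the full NCC-RANSAC method are not reproduced here.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through points; returns unit normal and centroid."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    return vt[-1], c

def ransac_plane(points, normals, dist_tol=0.02, angle_tol_deg=30.0,
                 iterations=500, seed=0):
    """RANSAC plane fit followed by a normal-coherence check: inliers whose
    point normals disagree with the fitted plane normal are removed, so that
    patches straddling, e.g., a stair tread and riser are split apart."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n, c = fit_plane(sample)
        inliers = np.abs((points - c) @ n) < dist_tol
        if inliers.sum() > best.sum():
            best = inliers
    n, c = fit_plane(points[best])
    cos_tol = np.cos(np.deg2rad(angle_tol_deg))
    coherent = np.abs(normals @ n) > cos_tol      # normal-coherence check
    return best & coherent, n, c
```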

  1. Automated torso organ segmentation from 3D CT images using conditional random field

    NASA Astrophysics Data System (ADS)

    Nimura, Yukitaka; Hayashi, Yuichiro; Kitasaka, Takayuki; Misawa, Kazunari; Mori, Kensaku

    2016-03-01

    This paper presents a segmentation method for torso organs from medical images using a conditional random field (CRF). Many methods have been proposed to enable automated extraction of organ regions from volumetric medical images; however, their empirical parameters must be tuned to obtain precise organ regions. In this paper, we propose an organ segmentation method using structured output learning based on a probabilistic graphical model. The proposed method utilizes a CRF on three-dimensional grids as the probabilistic graphical model and binary features that represent the relationship between voxel intensities and organ labels. We also optimize the weight parameters of the CRF using a stochastic gradient descent algorithm and estimate organ labels for a given image by maximum a posteriori (MAP) estimation. The experimental results revealed that the proposed method can extract organ regions automatically using structured output learning. The error of organ label estimation was 6.6%. The DICE coefficients of the right lung, left lung, heart, liver, spleen, right kidney, and left kidney were 0.94, 0.92, 0.65, 0.67, 0.36, 0.38, and 0.37, respectively.

  2. Effective 3D protein structure prediction with local adjustment genetic-annealing algorithm.

    PubMed

    Zhang, Xiao-Long; Lin, Xiao-Li

    2010-09-01

    The protein folding problem consists of predicting the protein tertiary structure from a given amino acid sequence by minimizing an energy function. Protein folding structure prediction is computationally challenging and has been shown to be an NP-hard problem when the 3D off-lattice AB model is employed. In this paper, the local adjustment genetic-annealing (LAGA) algorithm is used to search for the ground state of the 3D off-lattice AB model for protein folding structure. The algorithm includes an improved crossover strategy and an improved mutation strategy, and a local adjustment strategy is also used to enhance the search ability. The experiments were carried out with Fibonacci sequences. The experimental results demonstrate that the LAGA algorithm has better performance and accuracy compared to previous methods.
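
    For reference, a commonly used form of the 3D off-lattice AB model energy (unit bond lengths; a bending term plus a species-dependent Lennard-Jones-like term) is sketched below; it is the kind of objective such a genetic-annealing search minimises. The constants follow the standard formulation of the model and may differ from the exact variant used in the paper.

```python
import numpy as np

def ab_energy(coords, sequence):
    """Energy of the 3D off-lattice AB model in a commonly used form:
    bending term (1 - cos(theta_i))/4 over interior residues plus a
    Lennard-Jones-like term whose strength depends on residue species."""
    coords = np.asarray(coords, dtype=float)
    n = len(sequence)
    b = np.diff(coords, axis=0)                     # bond vectors (assumed unit length)
    b /= np.linalg.norm(b, axis=1, keepdims=True)
    bend = np.sum((1.0 - np.einsum("ij,ij->i", b[:-1], b[1:])) / 4.0)

    def C(a, c):
        # species-dependent coefficient: AA -> 1, BB -> 0.5, AB/BA -> -0.5
        if a == "A" and c == "A":
            return 1.0
        if a == "B" and c == "B":
            return 0.5
        return -0.5

    lj = 0.0
    for i in range(n - 2):
        for j in range(i + 2, n):
            r = np.linalg.norm(coords[i] - coords[j])
            lj += 4.0 * (r ** -12 - C(sequence[i], sequence[j]) * r ** -6)
    return bend + lj

# Example: energy of a short straight chain for the sequence "ABBAB"
chain = np.column_stack([np.arange(5.0), np.zeros(5), np.zeros(5)])
print(ab_energy(chain, "ABBAB"))
```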

  3. Parallel implementation of the FETI-DPEM algorithm for general 3D EM simulations

    NASA Astrophysics Data System (ADS)

    Li, Yu-Jia; Jin, Jian-Ming

    2009-05-01

    A parallel implementation of the electromagnetic dual-primal finite element tearing and interconnecting algorithm (FETI-DPEM) is designed for general three-dimensional (3D) electromagnetic large-scale simulations. As a domain decomposition implementation of the finite element method, the FETI-DPEM algorithm provides fully decoupled subdomain problems and an excellent numerical scalability, and thus is well suited for parallel computation. The parallel implementation of the FETI-DPEM algorithm on a distributed-memory system using the message passing interface (MPI) is discussed in detail along with a few practical guidelines obtained from numerical experiments. Numerical examples are provided to demonstrate the efficiency of the parallel implementation.

  4. Automatic Segmentation of Lung Carcinoma Using 3D Texture Features in 18-FDG PET/CT.

    PubMed

    Markel, Daniel; Caldwell, Curtis; Alasti, Hamideh; Soliman, Hany; Ung, Yee; Lee, Justin; Sun, Alexander

    2013-01-01

    Target definition is the largest source of geometric uncertainty in radiation therapy. This is partly due to a lack of contrast between tumor and healthy soft tissue for computed tomography (CT) and due to blurriness, lower spatial resolution, and lack of a truly quantitative unit for positron emission tomography (PET). First-, second-, and higher-order statistics, Tamura, and structural features were characterized for PET and CT images of lung carcinoma and organs of the thorax. A combined decision tree (DT) with K-nearest neighbours (KNN) classifiers as nodes containing combinations of 3 features was trained and used for segmentation of the gross tumor volume. This approach was validated for 31 patients from two separate institutions and scanners. The results were compared with thresholding approaches, the fuzzy clustering method, the 3-level fuzzy locally adaptive Bayesian algorithm, the multivalued level set algorithm, and a single KNN using Hounsfield units and standard uptake value. The results showed the DTKNN classifier had the highest sensitivity of 73.9%, the second-highest average Dice coefficient of 0.607, and a specificity of 99.2% for classifying voxels when using a probabilistic ground truth provided by simultaneous truth and performance level estimation using contours drawn by 3 trained physicians.

  5. Structural Stereo Matching Of Laplacian-Of-Gaussian Contour Segments For 3D Perception

    NASA Astrophysics Data System (ADS)

    Boyer, K. L.; Sotak, G. E.

    1989-03-01

    We solve the stereo correspondence problem using Laplacian of Gaussian (LoG) zero-crossing contours as a source of primitives for structural stereopsis, as opposed to traditional point-based algorithms. For each image in the stereo pair, we apply the LoG operator, extract and link zero-crossing points, filter and segment the contours into meaningful primitives, and compute a parametric structural description over the resulting primitive set. We then apply a variant of the inexact structural matching technique of Boyer and Kak [1] to recover the optimal interprimitive mapping (correspondence) function. Since an extended image feature conveys more information than a single point, its spatial and photometric behavior may be exploited to advantage; there are also fewer features to match, resulting in a smaller combinatorial problem. The structural approach allows greater use of spatial relational constraints, which allows us to eliminate (or reduce) the coarse-to-fine tracking of most point-based algorithms. Solving the correspondence problem at this level requires only an approximate probabilistic characterization of the image-to-image structural distortion, and does not require detailed knowledge of the epipolar geometry.
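
    The primitive extraction begins with LoG filtering and zero-crossing detection; a minimal 2D sketch of that first step is given below (linking, filtering, and the structural description are not shown). The parameter names are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_zero_crossings(image, sigma=2.0):
    """Laplacian-of-Gaussian filtering followed by zero-crossing detection:
    a pixel is marked where the LoG response changes sign against its
    right-hand or lower neighbour."""
    log = gaussian_laplace(image.astype(float), sigma=sigma)
    zc = np.zeros_like(log, dtype=bool)
    zc[:, :-1] |= np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])
    zc[:-1, :] |= np.signbit(log[:-1, :]) != np.signbit(log[1:, :])
    return zc
```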

  6. Automatic Segmentation of Lung Carcinoma Using 3D Texture Features in 18-FDG PET/CT

    PubMed Central

    Markel, Daniel; Caldwell, Curtis; Alasti, Hamideh; Soliman, Hany; Ung, Yee; Lee, Justin; Sun, Alexander

    2013-01-01

    Target definition is the largest source of geometric uncertainty in radiation therapy. This is partly due to a lack of contrast between tumor and healthy soft tissue for computed tomography (CT) and due to blurriness, lower spatial resolution, and lack of a truly quantitative unit for positron emission tomography (PET). First-, second-, and higher-order statistics, Tamura, and structural features were characterized for PET and CT images of lung carcinoma and organs of the thorax. A combined decision tree (DT) with K-nearest neighbours (KNN) classifiers as nodes containing combinations of 3 features was trained and used for segmentation of the gross tumor volume. This approach was validated for 31 patients from two separate institutions and scanners. The results were compared with thresholding approaches, the fuzzy clustering method, the 3-level fuzzy locally adaptive Bayesian algorithm, the multivalued level set algorithm, and a single KNN using Hounsfield units and standard uptake value. The results showed the DTKNN classifier had the highest sensitivity of 73.9%, the second-highest average Dice coefficient of 0.607, and a specificity of 99.2% for classifying voxels when using a probabilistic ground truth provided by simultaneous truth and performance level estimation using contours drawn by 3 trained physicians. PMID:23533750

  7. Left-Atrial Segmentation From 3-D Ultrasound Using B-Spline Explicit Active Surfaces With Scale Uncoupling.

    PubMed

    Almeida, Nuno; Friboulet, Denis; Sarvari, Sebastian Imre; Bernard, Olivier; Barbosa, Daniel; Samset, Eigil; Dhooge, Jan

    2016-02-01

    Segmentation of the left atrium (LA) of the heart allows quantification of LA volume dynamics which can give insight into cardiac function. However, very little attention has been given to LA segmentation from three-dimensional (3-D) ultrasound (US), most efforts being focused on the segmentation of the left ventricle (LV). The B-spline explicit active surfaces (BEAS) framework has been shown to be a very robust and efficient methodology to perform LV segmentation. In this study, we propose an extension of the BEAS framework, introducing B-splines with uncoupled scaling. This formulation improves the shape support for less regular and more variable structures, by giving independent control over smoothness and number of control points. Semiautomatic segmentation of the LA endocardium using this framework was tested in a setup requiring little user input, on 20 volumetric sequences of echocardiographic data from healthy subjects. The segmentation results were evaluated against manual reference delineations of the LA. Relevant LA morphological and functional parameters were derived from the segmented surfaces, in order to assess the performance of the proposed method on its clinical usage. The results showed that the modified BEAS framework is capable of accurate semiautomatic LA segmentation in 3-D transthoracic US, providing reliable quantification of the LA morphology and function.

  8. Effect of segmentation errors on 3D-to-2D registration of implant models in X-ray images.

    PubMed

    Mahfouz, Mohamed R; Hoff, William A; Komistek, Richard D; Dennis, Douglas A

    2005-02-01

    In many biomedical applications, it is desirable to estimate the three-dimensional (3D) position and orientation (pose) of a metallic rigid object (such as a knee or hip implant) from its projection in a two-dimensional (2D) X-ray image. If the geometry of the object is known, as well as the details of the image formation process, then the pose of the object with respect to the sensor can be determined. A common method for 3D-to-2D registration is to first segment the silhouette contour from the X-ray image; that is, identify all points in the image that belong to the 2D silhouette and not to the background. This segmentation step is then followed by a search for the 3D pose that will best match the observed contour with a predicted contour. Although the silhouette of a metallic object is often clearly visible in an X-ray image, adjacent tissue and occlusions can make the exact location of the silhouette contour difficult to determine in places. Occlusion can occur when another object (such as another implant component) partially blocks the view of the object of interest. In this paper, we argue that common methods for segmentation can produce errors in the location of the 2D contour, and hence errors in the resulting 3D estimate of the pose. We show, on a typical fluoroscopy image of a knee implant component, that interactive and automatic methods for segmentation result in segmented contours that vary significantly. We show how the variability in the 2D contours (quantified by two different metrics) corresponds to variability in the 3D poses. Finally, we illustrate how traditional segmentation methods can fail completely in the (not uncommon) cases of images with occlusion.

  9. A novel 3D graph cut based co-segmentation of lung tumor on PET-CT images with Gaussian mixture models

    NASA Astrophysics Data System (ADS)

    Yu, Kai; Chen, Xinjian; Shi, Fei; Zhu, Weifang; Zhang, Bin; Xiang, Dehui

    2016-03-01

    Positron Emission Tomography (PET) and Computed Tomography (CT) have been widely used in clinical practice for radiation therapy. Most existing methods use only one image modality, either PET or CT, and therefore suffer from the low spatial resolution of PET or the low contrast of CT. In this paper, a novel 3D graph cut method is proposed which integrates Gaussian Mixture Models (GMMs) into the graph cut framework. We also employ the random walk method as an initialization step to provide object seeds and improve the graph-cut-based segmentation of the PET and CT images. The constructed graph consists of two sub-graphs and a special link between the sub-graphs that penalizes differences between the segmentations of the two modalities. Finally, the segmentation problem is solved by the max-flow/min-cut method. The proposed method was tested on 20 patients' PET-CT images, and the experimental results demonstrated the accuracy and efficiency of the proposed algorithm.

  10. Applying 3D measurements and computer matching algorithms to two firearm examination proficiency tests.

    PubMed

    Ott, Daniel; Thompson, Robert; Song, Junfeng

    2017-02-01

    In order for a crime laboratory to assess a firearms examiner's training, skills, experience, and aptitude, it is necessary for the examiner to participate in proficiency testing. As computer algorithms for comparisons of pattern evidence become more prevalent, it is of interest to test algorithm performance as well, using these same proficiency examinations. This article demonstrates the use of the Congruent Matching Cell (CMC) algorithm to compare 3D topography measurements of breech face impressions and firing pin impressions from a previously distributed firearms proficiency test. In addition, the algorithm is used to analyze the distribution of many comparisons from a collection of cartridge cases used to construct another recent set of proficiency tests. These results are provided along with visualizations that help to relate the features used in optical comparisons by examiners to the features used by computer comparison algorithms.

  11. Kidney segmentation in CT sequences using SKFCM and improved GrowCut algorithm

    PubMed Central

    2015-01-01

    Background Organ segmentation is an important step in computer-aided diagnosis and pathology detection. Accurate kidney segmentation in abdominal computed tomography (CT) sequences is an essential and crucial task for surgical planning and navigation in kidney tumor ablation. However, kidney segmentation in CT is a substantially challenging task because the intensity values of kidney parenchyma are similar to those of adjacent structures. Results In this paper, a coarse-to-fine method was applied to segment the kidney from CT images, which consists of two stages: rough segmentation and refined segmentation. The rough segmentation is based on a kernel fuzzy C-means algorithm with spatial information (SKFCM) and the refined segmentation is implemented with an improved GrowCut (IGC) algorithm. The SKFCM algorithm introduces a kernel function and a spatial constraint into the fuzzy c-means clustering (FCM) algorithm. The IGC algorithm makes good use of the spatial continuity of CT sequences to automatically generate the seed labels and improve the efficiency of segmentation. Experiments performed on the whole dataset of abdominal CT images have shown that the proposed method is accurate and efficient. The method provides a sensitivity of 95.46% with specificity of 99.82% and performs better than other related methods. Conclusions Our method achieves high accuracy in kidney segmentation and considerably reduces the time and labor required for contour delineation. In addition, the method can be expanded to 3D segmentation directly without modification. PMID:26356850
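
    For orientation, the sketch below implements plain fuzzy c-means on voxel intensities; the SKFCM stage described above additionally replaces the Euclidean distance with a kernel-induced distance and adds a spatial constraint, neither of which is reproduced here.

```python
import numpy as np

def fcm(values, n_clusters=3, m=2.0, iterations=100, seed=0):
    """Plain fuzzy c-means on scalar intensities: alternate updates of the
    cluster centers and the fuzzy membership matrix u (clusters x voxels)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(values, dtype=float).ravel()
    u = rng.random((n_clusters, x.size))
    u /= u.sum(axis=0)
    for _ in range(iterations):
        um = u ** m
        centers = um @ x / um.sum(axis=1)                   # update cluster centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12   # distances to centers
        u = d ** (-2.0 / (m - 1))
        u /= u.sum(axis=0)                                  # update memberships
    return centers, u
```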

  12. In Situ 3D Segmentation of Individual Plant Leaves Using a RGB-D Camera for Agricultural Automation

    PubMed Central

    Xia, Chunlei; Wang, Longtan; Chung, Bu-Keun; Lee, Jang-Myung

    2015-01-01

    In this paper, we address the challenging task of 3D segmentation of individual plant leaves from occlusions in complicated natural scenes. Depth data of plant leaves is introduced to improve the robustness of plant leaf segmentation. A low-cost RGB-D camera is used to capture depth and color images in the field. Mean shift clustering is applied to segment plant leaves in the depth image. Plant leaves are extracted from the natural background by examining the vegetation of the candidate segments produced by mean shift. Subsequently, individual leaves are segmented from occlusions by active contour models. Automatic initialization of the active contour models is implemented by calculating the center of divergence from the gradient vector field of the depth image. The proposed segmentation scheme is tested through experiments under greenhouse conditions. The overall segmentation rate is 87.97%, while segmentation rates for single and occluded leaves are 92.10% and 86.67%, respectively. Approximately half of the experimental results show segmentation rates of individual leaves higher than 90%. Nevertheless, the proposed method is able to segment individual leaves from heavy occlusions. PMID:26295395
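
    A minimal sketch of the mean-shift step on depth data is shown below, using scikit-learn's MeanShift on per-pixel (row, column, depth) features; the vegetation check and the active-contour separation of occluded leaves described above are separate steps not shown. The spatial weighting and bandwidth estimation are illustrative choices.

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def segment_depth_image(depth, spatial_weight=0.5):
    """Mean-shift clustering of a depth image: each pixel is described by its
    (row, col, depth) feature vector so that spatially coherent surfaces at
    similar depth group into one cluster (candidate leaf segments)."""
    h, w = depth.shape
    rows, cols = np.mgrid[0:h, 0:w]
    feats = np.column_stack([spatial_weight * rows.ravel(),
                             spatial_weight * cols.ravel(),
                             depth.ravel().astype(float)])
    bw = estimate_bandwidth(feats, quantile=0.1, n_samples=500)
    labels = MeanShift(bandwidth=bw, bin_seeding=True).fit_predict(feats)
    return labels.reshape(h, w)
```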

  13. Fast and memory-efficient LOGISMOS graph search for intraretinal layer segmentation of 3D macular OCT scans

    NASA Astrophysics Data System (ADS)

    Lee, Kyungmoo; Zhang, Li; Abramoff, Michael D.; Sonka, Milan

    2015-03-01

    Image segmentation is important for quantitative analysis of medical image data. Recently, our research group has introduced a 3-D graph search method which can simultaneously segment optimal interacting surfaces with respect to the cost function in volumetric images. Although it provides excellent segmentation accuracy, it is computationally demanding (in both CPU time and memory) to simultaneously segment multiple surfaces from large volumetric images. Therefore, we propose a new, fast, and memory-efficient graph search method for intraretinal layer segmentation of 3-D macular optical coherence tomography (OCT) scans. The key idea is to reduce the size of the graph by combining nodes with high costs based on a multiscale approach. The new approach requires significantly less memory and achieves significantly faster processing speeds (p < 0.01) with only small segmentation differences compared to the original graph search method. This paper discusses the sub-optimality of this approach and assesses the trade-off between decreasing processing time and increasing segmentation differences from the original method as a function of the employed scale of the underlying graph construction.

  14. Joint inversions of two VTEM surveys using quasi-3D TDEM and 3D magnetic inversion algorithms

    NASA Astrophysics Data System (ADS)

    Kaminski, Vlad; Di Massa, Domenico; Viezzoli, Andrea

    2016-05-01

    In the current paper, we present results of a joint quasi-three-dimensional (quasi-3D) inversion of two versatile time domain electromagnetic (VTEM) datasets, as well as a joint 3D inversion of associated aeromagnetic datasets, from two surveys flown six years apart from one another (2007 and 2013) over a volcanogenic massive sulphide gold (VMS-Au) prospect in northern Ontario, Canada. The time domain electromagnetic (TDEM) data were inverted jointly using the spatially constrained inversion (SCI) approach. In order to increase the coherency in the model space, a calibration parameter was added. This was followed by a joint inversion of the total magnetic intensity (TMI) data extracted from the two surveys. The results of the inversions have been studied and matched with the known geology, adding some new valuable information to the ongoing mineral exploration initiative.

  15. Model-based segmentation and quantification of subcellular structures in 2D and 3D fluorescent microscopy images

    NASA Astrophysics Data System (ADS)

    Wörz, Stefan; Heinzer, Stephan; Weiss, Matthias; Rohr, Karl

    2008-03-01

    We introduce a model-based approach for segmenting and quantifying GFP-tagged subcellular structures of the Golgi apparatus in 2D and 3D microscopy images. The approach is based on 2D and 3D intensity models, which are directly fitted to an image within 2D circular or 3D spherical regions-of-interest (ROIs). We also propose automatic approaches for the detection of candidates, for the initialization of the model parameters, and for adapting the size of the ROI used for model fitting. Based on the fitting results, we determine statistical information about the spatial distribution and the total amount of intensity (fluorescence) of the subcellular structures. We demonstrate the applicability of our new approach on 2D and 3D microscopy images.
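
    As a simple stand-in for the model-fitting step, the sketch below fits an isotropic 2D Gaussian intensity model to a region of interest with scipy's curve_fit and reports an integrated-intensity estimate; the actual approach uses dedicated 2D/3D intensity models, automatic candidate detection, and adaptive ROI sizing, which are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian2d(xy, amplitude, x0, y0, sigma, offset):
    """Isotropic 2D Gaussian intensity model, evaluated on flattened coordinates."""
    x, y = xy
    return (offset + amplitude *
            np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))).ravel()

def fit_spot(roi):
    """Fit the model to an ROI around a detected candidate; the integrated
    intensity (amplitude * 2*pi*sigma^2) quantifies its fluorescence."""
    y, x = np.mgrid[0:roi.shape[0], 0:roi.shape[1]]
    p0 = (roi.max() - roi.min(), roi.shape[1] / 2, roi.shape[0] / 2, 2.0, roi.min())
    popt, _ = curve_fit(gaussian2d, (x, y), roi.ravel(), p0=p0)
    amplitude, x0, y0, sigma, offset = popt
    return popt, amplitude * 2 * np.pi * sigma ** 2
```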

  16. 3D Motion Planning Algorithms for Steerable Needles Using Inverse Kinematics

    PubMed Central

    Duindam, Vincent; Xu, Jijie; Alterovitz, Ron; Sastry, Shankar; Goldberg, Ken

    2010-01-01

    Steerable needles can be used in medical applications to reach targets behind sensitive or impenetrable areas. The kinematics of a steerable needle are nonholonomic and, in 2D, equivalent to a Dubins car with constant radius of curvature. In 3D, the needle can be interpreted as an airplane with constant speed and pitch rate, zero yaw, and controllable roll angle. We present a constant-time motion planning algorithm for steerable needles based on explicit geometric inverse kinematics similar to the classic Paden-Kahan subproblems. Reachability and path competitivity are analyzed using analytic comparisons with shortest path solutions for the Dubins car (for 2D) and numerical simulations (for 3D). We also present an algorithm for local path adaptation using null-space results from redundant manipulator theory. Finally, we discuss several ways to use and extend the inverse kinematics solution to generate needle paths that avoid obstacles. PMID:21359051

  17. Shape analysis of corpus callosum in phenylketonuria using a new 3D correspondence algorithm

    NASA Astrophysics Data System (ADS)

    He, Qing; Christ, Shawn E.; Karsch, Kevin; Peck, Dawn; Duan, Ye

    2010-03-01

    Statistical shape analysis of brain structures has gained increasing interest from neuroimaging community because it can precisely locate shape differences between healthy and pathological structures. The most difficult and crucial problem is establishing shape correspondence among individual 3D shapes. This paper proposes a new algorithm for 3D shape correspondence. A set of landmarks are sampled on a template shape, and initial correspondence is established between the template and the target shape based on the similarity of locations and normal directions. The landmarks on the target are then refined by iterative thin plate spline. The algorithm is simple and fast, and no spherical mapping is needed. We apply our method to the statistical shape analysis of the corpus callosum (CC) in phenylketonuria (PKU), and significant local shape differences between the patients and the controls are found in the most anterior and posterior aspects of the corpus callosum.

  18. 3D finite-difference modeling algorithm and anomaly features of ZTEM

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Tan, Han-Dong; Li, Zhi-Qiang; Wang, Kun-Peng; Hu, Zhi-Ming; Zhang, Xing-Dong

    2016-09-01

    The Z-Axis tipper electromagnetic (ZTEM) technique is based on a frequency-domain airborne electromagnetic system that measures the natural magnetic field. The survey area is divided into grid cells, and the magnetic components at the center of each cell edge are evaluated by applying the staggered-grid finite-difference method to Maxwell's equations. The tipper and its divergence are derived to complete the 3D ZTEM forward modeling algorithm. A synthetic model is then used to compare the responses with those of 2D finite-element forward modeling to verify the accuracy of the algorithm. ZTEM offers high horizontal resolution for both simple and complex distributions of conductivity. This work is the theoretical foundation for the interpretation of ZTEM data and the study of 3D ZTEM inversion.

  19. Meanie3D - a mean-shift based, multivariate, multi-scale clustering and tracking algorithm

    NASA Astrophysics Data System (ADS)

    Simon, Jürgen-Lorenz; Malte, Diederich; Silke, Troemel

    2014-05-01

    Project OASE is one of five work groups at HErZ (Hans Ertel Centre for Weather Research), an ongoing effort by the German weather service (DWD) to further weather-prediction research at universities. The goal of project OASE is to gain an object-based perspective on convective events by identifying them early in the onset of convective initiation and following them through the entire lifecycle. The ability to follow objects in this fashion requires new ways of object definition and tracking which incorporate all the available data sets of interest, such as satellite imagery, weather radar, or lightning counts. The Meanie3D algorithm provides the necessary tool for this purpose. Core features of this new approach to clustering (object identification) and tracking are the ability to identify objects using the mean-shift algorithm applied to a multitude of variables (multivariate), as well as the ability to detect objects on various scales (multi-scale) using elements of scale-space theory. The algorithm works in 2D as well as 3D without modifications. It is an extension of a method well known from the field of computer vision and image processing, which has been tailored to serve the needs of the meteorological community. Although the demonstration here concerns a specific application (convective initiation), the algorithm is easily tailored to provide clustering and tracking for a wide class of data sets and problems. In this talk, the demonstration is carried out on two of the OASE group's own composite data sets: one is a 2D nationwide composite of Germany including C-band radar (2D) and satellite information, the other is a 3D local composite of the Bonn/Jülich area containing a high-resolution 3D X-band radar composite.

  20. Algorithms for extraction of structural attitudes from 3D outcrop models

    NASA Astrophysics Data System (ADS)

    Duelis Viana, Camila; Endlein, Arthur; Ademar da Cruz Campanha, Ginaldo; Henrique Grohmann, Carlos

    2016-05-01

    The acquisition of geological attitudes on rock cuts using a traditional field compass survey can be a time-consuming, dangerous, or even impossible task depending on the conditions and location of outcrops. The importance of this type of data in rock-mass classifications and structural geology has led to the development of new techniques, in which photogrammetric 3D digital models are increasingly used. In this paper we present two algorithms for extraction of attitudes of geological discontinuities from virtual outcrop models: ply2atti and scanline, implemented in the Python programming language. The ply2atti algorithm allows for the virtual sampling of planar discontinuities appearing on the 3D model as individual exposed surfaces, while the scanline algorithm allows the sampling of discontinuities (surfaces and traces) along a virtual scanline. Application to digital models of a simplified test setup and a rock cut demonstrated good agreement between surveys undertaken with a traditional field compass and virtual sampling on the 3D digital models.
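
    The conversion from a sampled discontinuity surface to a structural attitude can be sketched as a least-squares plane fit whose normal is converted to dip direction and dip. The snippet below assumes x = east, y = north, z = up coordinates and is only illustrative; it is not the ply2atti or scanline implementation itself.

```python
import numpy as np

def plane_attitude(points):
    """Fit a plane to sampled surface points (N x 3, georeferenced) and convert
    its upward normal to geological dip direction / dip angle in degrees."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    n = vt[-1]                               # unit normal of best-fit plane
    if n[2] < 0:                             # make the normal point upward
        n = -n
    dip = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))
    dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360.0   # azimuth from north
    return dip_dir, dip
```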

  1. Respiratory motion correction in 3-D PET data with advanced optical flow algorithms.

    PubMed

    Dawood, Mohammad; Buther, Florian; Jiang, Xiaoyi; Schafers, Klaus P

    2008-08-01

    The problem of motion is well known in positron emission tomography (PET) studies. The PET images are formed over an extended period of time. As patients cannot hold their breath during the PET acquisition, spatial blurring and motion artifacts are the natural result. These may lead to incorrect quantification of the radioactive uptake. We present a solution to this problem by respiratory-gating the PET data and correcting the PET images for motion with optical flow algorithms. The algorithm is based on the combined local and global optical flow algorithm with modifications to allow for discontinuity preservation across organ boundaries and for application to 3-D volume sets. The superiority of the algorithm over previous work is demonstrated on a software phantom and real patient data.
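
    A minimal 2D sketch of gate-to-gate motion correction is given below, using OpenCV's Farneback dense optical flow as a stand-in for the combined local-global algorithm with discontinuity preservation described above; inputs are assumed to be 8-bit single-channel slices, and the parameters are illustrative.

```python
import cv2
import numpy as np

def correct_gate(reference, gate):
    """Warp one respiratory gate onto the reference gate with dense optical flow
    so the gated frames can be summed without motion blur. Both inputs are
    2-D single-channel uint8 slices of the same shape."""
    flow = cv2.calcOpticalFlowFarneback(reference, gate, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = reference.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    # sample the gated frame at the displaced positions -> aligned with reference
    return cv2.remap(gate, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```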

  2. Optic disc boundary segmentation from diffeomorphic demons registration of monocular fundus image sequences versus 3D visualization of stereo fundus image pairs for automated early stage glaucoma assessment

    NASA Astrophysics Data System (ADS)

    Gatti, Vijay; Hill, Jason; Mitra, Sunanda; Nutter, Brian

    2014-03-01

    Despite the availability of advanced scanning and 3-D imaging technologies in current ophthalmology practice in resource-rich regions, worldwide screening tests for the early detection and progression of glaucoma still consist of a variety of simple tools, including fundus image-based parameters such as the CDR (cup-to-disc diameter ratio) and CAR (cup-to-disc area ratio), especially in resource-poor regions. Reliable automated computation of the relevant parameters from fundus image sequences requires robust non-rigid registration and segmentation techniques. Recent research has demonstrated that proper non-rigid registration of multi-view monocular fundus image sequences can result in acceptable segmentation of cup boundaries for automated computation of CAR and CDR. This work introduces a composite diffeomorphic demons registration algorithm for segmentation of cup boundaries from a sequence of monocular images and compares the resulting CAR and CDR values with those computed manually by experts and from 3-D visualization of stereo pairs. Our preliminary results show that the automated computation of CDR and CAR from composite diffeomorphic segmentation of monocular image sequences yields values comparable with those from the other two techniques and thus may provide global healthcare with a cost-effective yet accurate tool for management of glaucoma in its early stage.
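
    Once the cup and disc boundaries are segmented, the screening parameters themselves are simple ratios; the sketch below computes the CAR and a vertical CDR from binary masks. This is only the final measurement step, not the diffeomorphic demons registration or the segmentation itself, and the vertical-extent surrogate for diameter is an illustrative choice.

```python
import numpy as np

def vertical_extent(mask):
    """Number of image rows spanned by a binary mask (simple diameter surrogate)."""
    rows = np.where(mask.any(axis=1))[0]
    return rows.max() - rows.min() + 1

def cup_disc_ratios(cup_mask, disc_mask):
    """Cup-to-disc area ratio (CAR) and vertical cup-to-disc diameter ratio (CDR)
    from binary masks of the segmented optic cup and optic disc."""
    car = cup_mask.sum() / disc_mask.sum()
    cdr = vertical_extent(cup_mask) / vertical_extent(disc_mask)
    return car, cdr
```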

  3. Automated 3D Segmentation of Intraretinal Surfaces in SD-OCT Volumes in Normal and Diabetic Mice

    PubMed Central

    Antony, Bhavna J.; Jeong, Woojin; Abràmoff, Michael D.; Vance, Joseph; Sohn, Elliott H.; Garvin, Mona K.

    2014-01-01

    Purpose To describe an adaptation of an existing graph-theoretic method (initially developed for human optical coherence tomography [OCT] images) for the three-dimensional (3D) automated segmentation of 10 intraretinal surfaces in mice scans, and assess the accuracy of the method and the reproducibility of thickness measurements. Methods Ten intraretinal surfaces were segmented in repeat spectral domain (SD)-OCT volumetric images acquired from normal (n = 8) and diabetic (n = 10) mice. The accuracy of the method was assessed by computing the border position errors of the automated segmentation with respect to manual tracings obtained from two experts. The reproducibility was statistically assessed for four retinal layers within eight predefined regions using the mean and SD of the differences in retinal thickness measured in the repeat scans, the coefficient of variation (CV) and the intraclass correlation coefficients (ICC; with 95% confidence intervals [CIs]). Results The overall mean unsigned border position error for the 10 surfaces computed over 97 B-scans (10 scans, 10 normal mice) was 3.16 ± 0.91 μm. The overall mean differences in retinal thicknesses computed from the normal and diabetic mice were 1.86 ± 0.95 and 2.15 ± 0.86 μm, respectively. The CV of the retinal thicknesses for all the measured layers ranged from 1.04% to 5%. The ICCs for the total retinal thickness in the normal and diabetic mice were 0.78 [0.10, 0.92] and 0.83 [0.31, 0.96], respectively. Conclusion The presented method (publicly available as part of the Iowa Reference Algorithms) has acceptable accuracy and reproducibility and is expected to be useful in the quantitative study of intraretinal layers in mice. Translational Relevance The presented method, initially developed for human OCT, has been adapted for mice, with the potential to be adapted for other animals as well. Quantitative in vivo assessment of the retina in mice allows changes to be measured longitudinally, decreasing

  4. A hierarchical 3D segmentation method and the definition of vertebral body coordinate systems for QCT of the lumbar spine.

    PubMed

    Mastmeyer, André; Engelke, Klaus; Fuchs, Christina; Kalender, Willi A

    2006-08-01

    We have developed a new hierarchical 3D technique to segment the vertebral bodies in order to measure bone mineral density (BMD) with high trueness and precision in volumetric CT datasets. The hierarchical approach starts with a coarse separation of the individual vertebrae, applies a variety of techniques to segment the vertebral bodies with increasing detail and ends with the definition of an anatomic coordinate system for each vertebral body, relative to which up to 41 trabecular and cortical volumes of interest are positioned. In a pre-segmentation step constraints consisting of Boolean combinations of simple geometric shapes are determined that enclose each individual vertebral body. Bound by these constraints viscous deformable models are used to segment the main shape of the vertebral bodies. Volume growing and morphological operations then capture the fine details of the bone-soft tissue interface. In the volumes of interest bone mineral density and content are determined. In addition, in the segmented vertebral bodies geometric parameters such as volume or the length of the main axes of inertia can be measured. Intra- and inter-operator precision errors of the segmentation procedure were analyzed using existing clinical patient datasets. Results for segmented volume, BMD, and coordinate system position were below 2.0%, 0.6%, and 0.7%, respectively. Trueness was analyzed using phantom scans. The bias of the segmented volume was below 4%; for BMD it was below 1.5%. The long-term goal of this work is improved fracture prediction and patient monitoring in the field of osteoporosis. A true 3D segmentation also enables an accurate measurement of geometrical parameters that may augment the clinical value of a pure BMD analysis.

  5. Dynamic 3D MR Visualization and Detection of Upper Airway Obstruction during Sleep using Region Growing Segmentation

    PubMed Central

    Kim, Yoon-Chul; Khoo, Michael C.K.; Davidson Ward, Sally L.; Nayak, Krishna S.

    2016-01-01

    Goal We demonstrate a novel and robust approach for visualization of upper airway dynamics and detection of obstructive events from dynamic 3D magnetic resonance imaging (MRI) scans of the pharyngeal airway. Methods This approach uses 3D region growing, where the operator selects a region of interest that includes the pharyngeal airway, places two seeds in the patent airway, and determines a threshold for the first frame. Results This approach required 5 sec/frame of CPU time compared to 10 min/frame of operator time for manual segmentation. It compared well with manual segmentation, resulting in Dice Coefficients of 0.84 to 0.94, whereas the Dice Coefficients for two manual segmentations by the same observer were 0.89 to 0.97. It was also able to automatically detect 83% of collapse events. Conclusion Use of this simple semi-automated segmentation approach improves the workflow of novel dynamic MRI studies of the pharyngeal airway and enables visualization and detection of obstructive events. Significance Obstructive sleep apnea is a significant public health issue affecting 4-9% of adults and 2% of children. Recently, 3D dynamic MRI of the upper airway has been demonstrated during natural sleep, with sufficient spatio-temporal resolution to non-invasively study patterns of airway obstruction in young adults with OSA. This work makes it practical to analyze these long scans and visualize important factors in an MRI sleep study, such as the time, site, and extent of airway collapse. PMID:26258929
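
    A minimal sketch of the seeded 3D region-growing step is given below: starting from operator-placed seeds, 6-connected voxels are added while their intensity stays below the operator-chosen threshold (assuming the airway lumen appears dark). The ROI restriction and the per-frame threshold handling described above are not shown.

```python
import numpy as np
from collections import deque

def region_grow_3d(volume, seeds, threshold):
    """Grow the patent-airway region from seed voxels given as (z, y, x) tuples:
    a voxel joins the region if it is 6-connected to it and its intensity is
    below the chosen threshold."""
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque(s for s in seeds if volume[s] < threshold)
    for s in queue:
        mask[s] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if (0 <= n[0] < volume.shape[0] and 0 <= n[1] < volume.shape[1]
                    and 0 <= n[2] < volume.shape[2]
                    and not mask[n] and volume[n] < threshold):
                mask[n] = True
                queue.append(n)
    return mask

# hypothetical usage: mask = region_grow_3d(frame0, [(30, 40, 50), (60, 40, 50)], 200)
```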

  6. Web-based Visualization and Query of semantically segmented multiresolution 3D Models in the Field of Cultural Heritage

    NASA Astrophysics Data System (ADS)

    Auer, M.; Agugiaro, G.; Billen, N.; Loos, L.; Zipf, A.

    2014-05-01

    Many important Cultural Heritage sites have been studied over long periods of time by different researchers, using different technical equipment, methods, and intentions. This has led to huge amounts of heterogeneous "traditional" datasets and formats. The rising popularity of 3D models in the field of Cultural Heritage in recent years has brought additional data formats and makes it even more necessary to find solutions to manage, publish and study these data in an integrated way. The MayaArch3D project aims to realize such an integrative approach by establishing a web-based research platform bringing spatial and non-spatial databases together and providing visualization and analysis tools. Especially the 3D components of the platform use hierarchical segmentation concepts to structure the data and to perform queries on semantic entities. This paper presents a database schema to organize not only segmented models but also different Levels-of-Detail and other representations of the same entity. It is further implemented in a spatial database which allows the storing of georeferenced 3D data. This enables organization and queries by semantic, geometric and spatial properties. As a service for the delivery of the segmented models, a standardization candidate of the Open Geospatial Consortium (OGC), the Web 3D Service (W3DS), has been extended to cope with the new database schema and to deliver a web-friendly format for WebGL rendering. Finally, a generic user interface is presented which uses the segments as a navigation metaphor to browse and query the semantic segmentation levels and retrieve information from an external database of the German Archaeological Institute (DAI).

  7. A Survey of Digital Image Segmentation Algorithms

    DTIC Science & Technology

    1995-01-01

    features. Thresholding techniques are also useful in segmenting such binary images as printed documents, line drawings, and multispectral and x-ray...algorithms, pixel labeling and run-length connectivity analysis, are discussed in the following sections. Therefore, in examining g(x, y), pixels that are...edge linking, graph searching, curve fitting, Hough transform, and others are applicable to image segmentation. Difficulties with boundary-based methods

  8. A Review of Algorithms for Segmentation of Optical Coherence Tomography from Retina

    PubMed Central

    Kafieh, Raheleh; Rabbani, Hossein; Kermani, Saeed

    2013-01-01

    Optical coherence tomography (OCT) is a recently established imaging technique to describe different information about the internal structures of an object and to image various aspects of biological tissues. OCT image segmentation is mostly introduced on retinal OCT to localize the intra-retinal boundaries. Here, we review some of the important image segmentation methods for processing retinal OCT images. We may classify the OCT segmentation approaches into five distinct groups according to the image domain subjected to the segmentation algorithm. Current researches in OCT segmentation are mostly based on improving the accuracy and precision, and on reducing the required processing time. There is no doubt that current 3-D imaging modalities are now moving the research projects toward volume segmentation along with 3-D rendering and visualization. It is also important to develop robust methods capable of dealing with pathologic cases in OCT imaging. PMID:24083137

  9. Semi-automated 3D segmentation of major tracts in the rat brain: comparing DTI with standard histological methods.

    PubMed

    Gyengesi, Erika; Calabrese, Evan; Sherrier, Matthew C; Johnson, G Allan; Paxinos, George; Watson, Charles

    2014-03-01

    Researchers working with rodent models of neurological disease often require an accurate map of the anatomical organization of the white matter of the rodent brain. With the increasing popularity of small animal MRI techniques, including diffusion tensor imaging (DTI), there is considerable interest in rapid segmentation methods for neurological structures for quantitative comparisons. DTI-derived tractography allows simple and rapid segmentation of major white matter tracts, but the anatomic accuracy of these computer-generated fibers is open to question and has not been rigorously evaluated in the rat brain. In this study, we examine the anatomic accuracy of tractography-based segmentation in the adult rat brain. We analysed 12 major white matter pathways using semi-automated tractography-based segmentation alongside manual segmentation of Gallyas silver-stained histology sections. We applied four fiber-tracking algorithms to the DTI data: two integration methods and two deflection methods. In many cases, tractography-based segmentation closely matched histology-based segmentation; however, different tractography algorithms produced dramatically different results. Results suggest that certain white matter pathways are more amenable to tractography-based segmentation than others. We believe that these data will help researchers decide whether it is appropriate to use tractography-based segmentation of white matter structures for quantitative DTI-based analysis of neurologic disease models.

  10. Detailed Evaluation of Five 3D Speckle Tracking Algorithms Using Synthetic Echocardiographic Recordings.

    PubMed

    Alessandrini, Martino; Heyde, Brecht; Queiros, Sandro; Cygan, Szymon; Zontak, Maria; Somphone, Oudom; Bernard, Olivier; Sermesant, Maxime; Delingette, Herve; Barbosa, Daniel; De Craene, Mathieu; ODonnell, Matthew; Dhooge, Jan

    2016-08-01

    A plethora of techniques for cardiac deformation imaging with 3D ultrasound, typically referred to as 3D speckle tracking techniques, are available from academia and industry. Although the benefits of single methods over alternative ones have been reported in separate publications, the intrinsic differences in the data and definitions used makes it hard to compare the relative performance of different solutions. To address this issue, we have recently proposed a framework to simulate realistic 3D echocardiographic recordings and used it to generate a common set of ground-truth data for 3D speckle tracking algorithms, which was made available online. The aim of this study was therefore to use the newly developed database to contrast non-commercial speckle tracking solutions from research groups with leading expertise in the field. The five techniques involved cover the most representative families of existing approaches, namely block-matching, radio-frequency tracking, optical flow and elastic image registration. The techniques were contrasted in terms of tracking and strain accuracy. The feasibility of the obtained strain measurements to diagnose pathology was also tested for ischemia and dyssynchrony.

  11. An Automatic 3D Facial Landmarking Algorithm Using 2D Gabor Wavelets.

    PubMed

    de Jong, Markus A; Wollstein, Andreas; Ruff, Clifford; Dunaway, David; Hysi, Pirro; Spector, Tim; Fan Liu; Niessen, Wiro; Koudstaal, Maarten J; Kayser, Manfred; Wolvius, Eppo B; Bohringer, Stefan

    2016-02-01

    In this paper, we present a novel approach to automatic 3D facial landmarking using 2D Gabor wavelets. Our algorithm considers the face to be a surface and uses map projections to derive 2D features from raw data. Extracted features include texture, relief map, and transformations thereof. We extend an established 2D landmarking method for simultaneous evaluation of these data. The method is validated by performing landmarking experiments on two data sets using 21 landmarks and compared with an active shape model implementation. On average, landmarking error for our method was 1.9 mm, whereas the active shape model resulted in an average landmarking error of 2.3 mm. A second study investigating facial shape heritability in related individuals concludes that automatic landmarking is on par with manual landmarking for some landmarks. Our algorithm can be trained in 30 min to automatically landmark 3D facial data sets of any size, and allows for fast and robust landmarking of 3D faces.

  12. Detectability limitations with 3-D point reconstruction algorithms using digital radiography

    SciTech Connect

    Lindgren, Erik

    2015-03-31

    The estimated impact of pores in clusters on component fatigue will be highly conservative when based on 2-D rather than 3-D pore positions. Positioning and sizing defects in 3-D using digital radiography and 3-D point reconstruction algorithms generally requires less inspection time than X-ray computed tomography and in some cases works better with planar geometries. However, the increase in prior assumptions about the object and the defects will increase the intrinsic uncertainty in the resulting nondestructive evaluation output. In this paper, the uncertainty arising when detecting pore defect clusters with point reconstruction algorithms is quantified using simulations. The simulation model is compared to and mapped to experimental data. The main issue with the uncertainty is the possible masking (zero detectability) of smaller defects around some other slightly larger defect. In addition, the uncertainty is explored in connection with the expected effects on component fatigue life and for different amounts of prior object-defect assumptions.

  13. Manifold learning for shape guided segmentation of cardiac boundaries: application to 3D+t cardiac MRI.

    PubMed

    Eslami, Abouzar; Yigitsoy, Mehmet; Navab, Nassir

    2011-01-01

    In this paper we propose a new method for shape guided segmentation of cardiac boundaries based on manifold learning of the shapes represented by the phase field approximation of the Mumford-Shah functional. A novel distance is defined to measure the similarity of shapes without requiring deformable registration. Cardiac motion is compensated and phases are mapped into one reference phase, that is the end of diastole, to avoid time warping and synchronization at all cardiac phases. Non-linear embedding of these 3D shapes extracts the manifold of the inter-subject variation of the heart shape to be used for guiding the segmentation for a new subject. For validation the method is applied to a comprehensive dataset of 3D+t cardiac Cine MRI from normal subjects and patients.

  14. Computer-aided segmentation and 3D analysis of in vivo MRI examinations of the human vocal tract during phonation

    NASA Astrophysics Data System (ADS)

    Wismüller, Axel; Behrends, Johannes; Hoole, Phil; Leinsinger, Gerda L.; Meyer-Baese, Anke; Reiser, Maximilian F.

    2008-03-01

    We developed, tested, and evaluated a 3D segmentation and analysis system for in vivo MRI examinations of the human vocal tract during phonation. For this purpose, six professionally trained speakers, age 22-34y, were examined using a standardized MRI protocol (1.5 T, T1w FLASH, ST 4mm, 23 slices, acq. time 21s). The volunteers performed a prolonged (>=21s) emission of sounds of the German phonemic inventory. Simultaneous audio tape recording was obtained to control correct utterance. Scans were made in axial, coronal, and sagittal planes each. Computer-aided quantitative 3D evaluation included (i) automated registration of the phoneme-specific data acquired in different slice orientations, (ii) semi-automated segmentation of oropharyngeal structures, (iii) computation of a curvilinear vocal tract midline in 3D by nonlinear PCA, (iv) computation of cross-sectional areas of the vocal tract perpendicular to this midline. For the vowels /a/,/e/,/i/,/o/,/ø/,/u/,/y/, the extracted area functions were used to synthesize phoneme sounds based on an articulatory-acoustic model. For quantitative analysis, recorded and synthesized phonemes were compared, where area functions extracted from 2D midsagittal slices were used as a reference. All vowels could be identified correctly based on the synthesized phoneme sounds. The comparison between synthesized and recorded vowel phonemes revealed that the quality of phoneme sound synthesis was improved for phonemes /a/ and /y/, if 3D instead of 2D data were used, as measured by the average relative frequency shift between recorded and synthesized vowel formants (p<0.05, one-sided Wilcoxon rank sum test). In summary, the combination of fast MRI followed by subsequent 3D segmentation and analysis is a novel approach to examine human phonation in vivo. It unveils functional anatomical findings that may be essential for realistic modelling of the human vocal tract during speech production.

  15. Systolic and diastolic assessment by 3D-ASM segmentation of gated-SPECT Studies: a comparison with MRI

    NASA Astrophysics Data System (ADS)

    Tobon-Gomez, C.; Bijnens, B. H.; Huguet, M.; Sukno, F.; Moragas, G.; Frangi, A. F.

    2009-02-01

    Gated single photon emission tomography (gSPECT) is a well-established technique used routinely in clinical practice. It can be employed to evaluate global left ventricular (LV) function of a patient. The purpose of this study is to assess LV systolic and diastolic function from gSPECT datasets in comparison with cardiac magnetic resonance imaging (CMR) measurements. This is achieved by applying our recently implemented 3D active shape model (3D-ASM) segmentation approach for gSPECT studies. This methodology allows for generation of 3D LV meshes for all cardiac phases, providing volume time curves and filling rate curves. Both systolic and diastolic functional parameters can be derived from these curves for an assessment of patient condition even at early stages of LV dysfunction. Agreement of functional parameters, with respect to CMR measurements, were analyzed by means of Bland-Altman plots. The analysis included subjects presenting either LV hypertrophy, dilation or myocardial infarction.

  16. 3D Drop Size Distribution Extrapolation Algorithm Using a Single Disdrometer

    NASA Technical Reports Server (NTRS)

    Lane, John

    2012-01-01

    Determining the Z-R relationship (where Z is the radar reflectivity factor and R is rainfall rate) from disdrometer data has been and is a common goal of cloud physicists and radar meteorology researchers. The usefulness of this quantity has traditionally been limited since radar represents a volume measurement, while a disdrometer corresponds to a point measurement. To solve that problem, a 3D-DSD (drop-size distribution) method of determining an equivalent 3D Z-R was developed at the University of Central Florida and tested at the Kennedy Space Center, FL. Unfortunately, that method required a minimum of three disdrometers clustered together within a microscale network (0.1-km separation). Since most commercial disdrometers used by the radar meteorology/cloud physics community are high-cost instruments, three disdrometers located within a microscale area is generally not a practical strategy due to the limitations of these kinds of research budgets. A relatively simple modification to the 3D-DSD algorithm provides an estimate of the 3D-DSD and therefore, a 3D Z-R measurement using a single disdrometer. The basis of the horizontal extrapolation is mass conservation of a drop size increment, employing the mass conservation equation. For vertical extrapolation, convolution of a drop size increment using raindrop terminal velocity is used. Together, these two independent extrapolation techniques provide a complete 3D-DSD estimate in a volume around and above a single disdrometer. The estimation error is lowest along a vertical plane intersecting the disdrometer position in the direction of wind advection. This work demonstrates that multiple sensors are not required for successful implementation of the 3D interpolation/extrapolation algorithm. This is a great benefit since it is seldom that multiple sensors in the required spatial arrangement are available for this type of analysis. The original software (developed at the University of Central Florida, 1998-2000) has
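
    For reference, the quantities Z and R referred to above are standard moments of the drop-size distribution; a small illustrative computation from binned disdrometer data (units and bin structure are assumptions, and this is not the 3D extrapolation itself) might look like:

```python
import numpy as np

def z_and_r_from_dsd(diam_mm, n_d, d_diam_mm, fall_speed_ms):
    """Radar reflectivity factor Z and rain rate R from a binned drop-size distribution.

    diam_mm       : bin-centre drop diameters D [mm]
    n_d           : number concentration N(D) per bin [m^-3 mm^-1]
    d_diam_mm     : bin widths dD [mm]
    fall_speed_ms : terminal fall speed v(D) [m s^-1]
    """
    # Z = sum N(D) * D^6 * dD, in mm^6 m^-3
    z = np.sum(n_d * diam_mm ** 6 * d_diam_mm)
    # R = 6e-4 * pi * sum N(D) * D^3 * v(D) * dD, in mm h^-1
    r = 6.0e-4 * np.pi * np.sum(n_d * diam_mm ** 3 * fall_speed_ms * d_diam_mm)
    return z, r
```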

  17. An improved scheduling algorithm for 3D cluster rendering with platform LSF

    NASA Astrophysics Data System (ADS)

    Xu, Wenli; Zhu, Yi; Zhang, Liping

    2013-10-01

    High-quality photorealistic rendering of 3D models needs powerful computing systems, and to meet this demand highly efficient management of cluster resources has developed rapidly. This paper addresses how to improve the efficiency of 3D rendering tasks in a cluster. It focuses on a dynamic feedback load balance (DFLB) algorithm, the working principle of the load sharing facility (LSF), and the optimization of an external scheduler plug-in. The algorithm is applied in the match and allocation phases of a scheduling cycle. Candidate hosts are prepared in sequence in the match phase, and the scheduler makes allocation decisions for each job in the allocation phase. With the dynamic mechanism, a new weight is assigned to each candidate host for rearrangement, and the most suitable one is dispatched for rendering. A new plug-in module implementing this algorithm has been designed and integrated into the internal scheduler. Simulation experiments demonstrate that the improved plug-in module is superior to the default one for rendering tasks. It can help avoid load imbalance among servers, increase system throughput and improve system utilization.
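
    A minimal sketch of the dynamic re-weighting idea in the allocation phase; the host fields and weights below are invented for illustration and are not the plug-in's actual metrics:

```python
def rank_candidate_hosts(hosts, w_cpu=0.6, w_mem=0.4):
    """Re-rank candidate render hosts by a dynamic load score (lower = less loaded).

    hosts : list of dicts with hypothetical keys 'name', 'cpu_load', 'mem_load' in [0, 1]
    """
    def score(h):
        return w_cpu * h["cpu_load"] + w_mem * h["mem_load"]
    return sorted(hosts, key=score)

# Example: the least-loaded host would be dispatched for the next render job.
candidates = [
    {"name": "render01", "cpu_load": 0.9, "mem_load": 0.4},
    {"name": "render02", "cpu_load": 0.2, "mem_load": 0.3},
]
best = rank_candidate_hosts(candidates)[0]
```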

  18. Evaluation of an improved algorithm for producing realistic 3D breast software phantoms: Application for mammography

    PubMed Central

    Bliznakova, K.; Suryanarayanan, S.; Karellas, A.; Pallikarakis, N.

    2010-01-01

    Purpose: This work presents an improved algorithm for the generation of 3D breast software phantoms and its evaluation for mammography. Methods: The improved methodology has evolved from a previously presented 3D noncompressed breast modeling method used for the creation of breast models of different size, shape, and composition. The breast phantom is composed of breast surface, duct system and terminal ductal lobular units, Cooper’s ligaments, lymphatic and blood vessel systems, pectoral muscle, skin, 3D mammographic background texture, and breast abnormalities. The key improvement is the development of a new algorithm for 3D mammographic texture generation. Simulated images of the enhanced 3D breast model without lesions were produced by simulating mammographic image acquisition and were evaluated subjectively and quantitatively. For evaluation purposes, a database with regions of interest taken from simulated and real mammograms was created. Four experienced radiologists participated in a visual subjective evaluation trial, in which they judged the quality of the simulated mammograms obtained using the new algorithm compared to mammograms obtained with the old modeling approach. In addition, extensive quantitative evaluation included power spectral analysis and calculation of fractal dimension, skewness, and kurtosis of simulated and real mammograms from the database. Results: The results from the subjective evaluation strongly suggest that the new methodology for mammographic breast texture creates improved breast models compared to the old approach. Calculated parameters on simulated images such as the β exponent deduced from the power law spectral analysis and fractal dimension are similar to those calculated on real mammograms. The results for the kurtosis and skewness are also in good agreement with those calculated from clinical images. Comparison with similar calculations published in the literature showed good agreement in the majority of cases. Conclusions: The
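
    As an illustration of the power spectral analysis mentioned above, the β exponent can be estimated from the radially averaged power spectrum of an image patch; the following is a generic sketch, not the authors' evaluation code:

```python
import numpy as np

def beta_exponent(roi):
    """Estimate the power-law exponent beta of a 2D image patch, assuming P(f) ~ 1/f^beta.

    roi : 2D numpy array (e.g. a mammographic region of interest)
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(roi - roi.mean()))) ** 2
    cy, cx = np.array(spectrum.shape) // 2
    y, x = np.indices(spectrum.shape)
    radius = np.hypot(y - cy, x - cx).astype(int)
    counts = np.bincount(radius.ravel())
    # Radially averaged power spectrum (bin 0 is the DC term and is skipped).
    radial = np.bincount(radius.ravel(), spectrum.ravel()) / np.maximum(counts, 1)
    freqs = np.arange(1, min(cy, cx))
    slope, _ = np.polyfit(np.log(freqs), np.log(radial[1:min(cy, cx)]), 1)
    return -slope
```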

  19. Evaluation of an improved algorithm for producing realistic 3D breast software phantoms: Application for mammography

    SciTech Connect

    Bliznakova, K.; Suryanarayanan, S.; Karellas, A.; Pallikarakis, N.

    2010-11-15

    Purpose: This work presents an improved algorithm for the generation of 3D breast software phantoms and its evaluation for mammography. Methods: The improved methodology has evolved from a previously presented 3D noncompressed breast modeling method used for the creation of breast models of different size, shape, and composition. The breast phantom is composed of breast surface, duct system and terminal ductal lobular units, Cooper's ligaments, lymphatic and blood vessel systems, pectoral muscle, skin, 3D mammographic background texture, and breast abnormalities. The key improvement is the development of a new algorithm for 3D mammographic texture generation. Simulated images of the enhanced 3D breast model without lesions were produced by simulating mammographic image acquisition and were evaluated subjectively and quantitatively. For evaluation purposes, a database with regions of interest taken from simulated and real mammograms was created. Four experienced radiologists participated in a visual subjective evaluation trial, in which they judged the quality of the simulated mammograms obtained using the new algorithm compared to mammograms obtained with the old modeling approach. In addition, extensive quantitative evaluation included power spectral analysis and calculation of fractal dimension, skewness, and kurtosis of simulated and real mammograms from the database. Results: The results from the subjective evaluation strongly suggest that the new methodology for mammographic breast texture creates improved breast models compared to the old approach. Calculated parameters on simulated images such as the β exponent deduced from the power law spectral analysis and fractal dimension are similar to those calculated on real mammograms. The results for the kurtosis and skewness are also in good agreement with those calculated from clinical images. Comparison with similar calculations published in the literature showed good agreement in the majority of cases. Conclusions: The

  20. Free segmentation in rendered 3D images through synthetic impulse response in integral imaging

    NASA Astrophysics Data System (ADS)

    Martínez-Corral, M.; Llavador, A.; Sánchez-Ortiga, E.; Saavedra, G.; Javidi, B.

    2016-06-01

    Integral Imaging is a technique that has the capability of providing not only the spatial but also the angular information of three-dimensional (3D) scenes. Some important applications are 3D display and digital post-processing, for example depth reconstruction from integral images. In this contribution we propose a new reconstruction method that takes into account the integral image and a simplified version of the impulse response function (IRF) of the integral imaging (InI) system to perform a two-dimensional (2D) deconvolution. The IRF of an InI system has a periodic structure that depends directly on the axial position of the object. By considering different periods of the IRF, we recover the depth information of the 3D scene by deconvolution. An advantage of our method is that it is possible to obtain nonconventional reconstructions by considering alternative synthetic impulse responses. Our experiments show the feasibility of the proposed method.
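
    A heavily simplified sketch of the 2D deconvolution step, here using a Wiener filter from scikit-image as a stand-in for whatever deconvolution the authors applied; the regularisation value is an assumption:

```python
import numpy as np
from skimage import restoration

def refocus_by_deconvolution(integral_image, irf, balance=0.05):
    """Depth-selective reconstruction sketch: deconvolve the integral image with a
    synthetic impulse response whose period encodes one axial depth.

    integral_image : 2D numpy array (the captured integral image)
    irf            : 2D numpy array, synthetic impulse response for the chosen depth
    balance        : Wiener regularisation weight (assumed value)
    """
    return restoration.wiener(integral_image.astype(float),
                              irf.astype(float), balance, clip=False)
```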

  1. Combining Population and Patient-Specific Characteristics for Prostate Segmentation on 3D CT Images.

    PubMed

    Ma, Ling; Guo, Rongrong; Tian, Zhiqiang; Venkataraman, Rajesh; Sarkar, Saradwata; Liu, Xiabi; Tade, Funmilayo; Schuster, David M; Fei, Baowei

    2016-02-27

    Prostate segmentation on CT images is a challenging task. In this paper, we explore the population and patient-specific characteristics for the segmentation of the prostate on CT images. Because population learning does not consider the inter-patient variations and because patient-specific learning may not perform well for different patients, we are combining the population and patient-specific information to improve segmentation performance. Specifically, we train a population model based on the population data and train a patient-specific model based on the manual segmentation on three slices of the new patient. We compute the similarity between the two models to explore the influence of applicable population knowledge on the specific patient. By combining the patient-specific knowledge with the influence, we can capture the population and patient-specific characteristics to calculate the probability of a pixel belonging to the prostate. Finally, we smooth the prostate surface according to the prostate-density value of the pixels in the distance transform image. We conducted leave-one-out validation experiments on a set of CT volumes from 15 patients. Manual segmentation results from a radiologist serve as the gold standard for the evaluation. Experimental results show that our method achieved an average DSC of 85.1% as compared to the manual segmentation gold standard. This method outperformed the population learning method and the patient-specific learning approach alone. The CT segmentation method can have various applications in prostate cancer diagnosis and therapy.
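
    The combination step described above could, in the simplest case, be a similarity-weighted blend of the two probability maps; the linear weighting below is an assumption for illustration, not the formula from the paper:

```python
import numpy as np

def combine_probability_maps(p_population, p_patient, similarity):
    """Blend population and patient-specific prostate probability maps.

    p_population : per-pixel probability from the population model
    p_patient    : per-pixel probability from the patient-specific model
    similarity   : scalar in [0, 1] describing how well the population model
                   matches this patient (weighting scheme is an assumption)
    """
    w = np.clip(similarity, 0.0, 1.0)
    return w * p_population + (1.0 - w) * p_patient
```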

  2. Combining population and patient-specific characteristics for prostate segmentation on 3D CT images

    NASA Astrophysics Data System (ADS)

    Ma, Ling; Guo, Rongrong; Tian, Zhiqiang; Venkataraman, Rajesh; Sarkar, Saradwata; Liu, Xiabi; Tade, Funmilayo; Schuster, David M.; Fei, Baowei

    2016-03-01

    Prostate segmentation on CT images is a challenging task. In this paper, we explore the population and patient-specific characteristics for the segmentation of the prostate on CT images. Because population learning does not consider the inter-patient variations and because patient-specific learning may not perform well for different patients, we are combining the population and patient-specific information to improve segmentation performance. Specifically, we train a population model based on the population data and train a patient-specific model based on the manual segmentation on three slices of the new patient. We compute the similarity between the two models to explore the influence of applicable population knowledge on the specific patient. By combining the patient-specific knowledge with the influence, we can capture the population and patient-specific characteristics to calculate the probability of a pixel belonging to the prostate. Finally, we smooth the prostate surface according to the prostate-density value of the pixels in the distance transform image. We conducted leave-one-out validation experiments on a set of CT volumes from 15 patients. Manual segmentation results from a radiologist serve as the gold standard for the evaluation. Experimental results show that our method achieved an average DSC of 85.1% as compared to the manual segmentation gold standard. This method outperformed the population learning method and the patient-specific learning approach alone. The CT segmentation method can have various applications in prostate cancer diagnosis and therapy.

  3. Combining Population and Patient-Specific Characteristics for Prostate Segmentation on 3D CT Images

    PubMed Central

    Ma, Ling; Guo, Rongrong; Tian, Zhiqiang; Venkataraman, Rajesh; Sarkar, Saradwata; Liu, Xiabi; Tade, Funmilayo; Schuster, David M.; Fei, Baowei

    2016-01-01

    Prostate segmentation on CT images is a challenging task. In this paper, we explore the population and patient-specific characteristics for the segmentation of the prostate on CT images. Because population learning does not consider the inter-patient variations and because patient-specific learning may not perform well for different patients, we are combining the population and patient-specific information to improve segmentation performance. Specifically, we train a population model based on the population data and train a patient-specific model based on the manual segmentation on three slices of the new patient. We compute the similarity between the two models to explore the influence of applicable population knowledge on the specific patient. By combining the patient-specific knowledge with the influence, we can capture the population and patient-specific characteristics to calculate the probability of a pixel belonging to the prostate. Finally, we smooth the prostate surface according to the prostate-density value of the pixels in the distance transform image. We conducted leave-one-out validation experiments on a set of CT volumes from 15 patients. Manual segmentation results from a radiologist serve as the gold standard for the evaluation. Experimental results show that our method achieved an average DSC of 85.1% as compared to the manual segmentation gold standard. This method outperformed the population learning method and the patient-specific learning approach alone. The CT segmentation method can have various applications in prostate cancer diagnosis and therapy. PMID:27660382

  4. Accurate and Fully Automatic Hippocampus Segmentation Using Subject-Specific 3D Optimal Local Maps Into a Hybrid Active Contour Model

    PubMed Central

    Gkontra, Polyxeni; Daras, Petros; Maglaveras, Nicos

    2014-01-01

    Assessing the structural integrity of the hippocampus (HC) is an essential step toward prevention, diagnosis, and follow-up of various brain disorders due to the implication of the structural changes of the HC in those disorders. In this respect, the development of automatic segmentation methods that can accurately, reliably, and reproducibly segment the HC has attracted considerable attention over the past decades. This paper presents an innovative 3-D fully automatic method to be used on top of the multiatlas concept for the HC segmentation. The method is based on a subject-specific set of 3-D optimal local maps (OLMs) that locally control the influence of each energy term of a hybrid active contour model (ACM). The complete set of the OLMs for a set of training images is defined simultaneously via an optimization scheme. At the same time, the optimal ACM parameters are also calculated. Therefore, heuristic parameter fine-tuning is not required. Training OLMs are subsequently combined, by applying an extended multiatlas concept, to produce the OLMs that are anatomically more suitable to the test image. The proposed algorithm was tested on three different and publicly available data sets. Its accuracy was compared with that of state-of-the-art methods demonstrating the efficacy and robustness of the proposed method. PMID:27170866

  5. An accurate multimodal 3-D vessel segmentation method based on brightness variations on OCT layers and curvelet domain fundus image analysis.

    PubMed

    Kafieh, Raheleh; Rabbani, Hossein; Hajizadeh, Fedra; Ommani, Mohammadreza

    2013-10-01

    This paper proposes a multimodal approach for vessel segmentation of macular optical coherence tomography (OCT) slices along with the fundus image. The method comprises two separate stages: the first step is 2-D segmentation of blood vessels in the curvelet domain, enhanced by taking advantage of vessel information in crossing OCT slices (named the feedback procedure), and improved by suppressing the false positives around the optic nerve head. The proposed method for vessel localization in OCT slices is also enhanced by utilizing the fact that the retinal nerve fiber layer becomes thicker in the presence of blood vessels. The second stage of this method is axial localization of the vessels in OCT slices and 3-D reconstruction of the blood vessels. Twenty-four macular spectral 3-D OCT scans of 16 normal subjects were acquired using a Heidelberg HRA OCT scanner. Each dataset consisted of a scanning laser ophthalmoscopy (SLO) image and a limited number of OCT scans with a size of 496 × 512 (namely, for a dataset with 19 selected OCT slices, the whole data size was 496 × 512 × 19). The method is built from relatively simple algorithms, and the results show considerable improvement in the accuracy of vessel segmentation over similar methods, producing a local accuracy of 0.9632 in the area of the SLO covered by OCT slices, and an overall accuracy of 0.9467 in the whole SLO image.

  6. A novel iterative computation algorithm for Kinoform of 3D object

    NASA Astrophysics Data System (ADS)

    Jiang, Xiao-yu; Chuang, Pei; Wang, Xi; Zong, Yantao

    2012-11-01

    A novel method for computing the kinoform of a 3D object based on the traditional iterative Fourier transform algorithm (IFTA) is proposed in this paper. A kinoform is a special kind of computer-generated hologram (CGH) which has very high diffraction efficiency, since it only modulates the phase of the illuminating light and is free of cross-interference from the conjugate image. The traditional IFTA assumes that the reconstruction image lies at infinity (Fraunhofer diffraction region) and ignores the depth of the 3D object, so it can only compute a two-dimensional kinoform. The algorithm proposed in this paper divides the three-dimensional object into several object planes along depth and treats every object plane as a target image; iterative computation is then carried out between one input plane (the kinoform) and multiple output planes (the reconstruction images). A spatial phase factor is added to the iterative process to represent the depth characteristics of the 3D object, so the reconstruction images lie in the Fresnel diffraction region. An optical reconstruction experiment with a kinoform computed by this method is realized on a Liquid Crystal on Silicon (LCoS) Spatial Light Modulator (SLM). The Mean Square Error (MSE) and Structural Similarity (SSIM) between the original and reconstructed images are used to evaluate the method. The experimental results show that the algorithm is fast and that the resulting kinoform can reconstruct the object at different planes with high precision under plane-wave illumination. The reconstructed images convey a three-dimensional visual sense. Finally, the influence of the spacing and occlusion between different object planes on the reconstructed image is also discussed in the experiments.
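
    For context, a single-plane version of the iterative Fourier transform algorithm (Gerchberg-Saxton style) is sketched below; the multi-plane Fresnel extension with a spatial phase factor described in the record is not reproduced here:

```python
import numpy as np

def kinoform_ifta(target_amplitude, n_iter=50, seed=None):
    """Single-plane iterative Fourier transform algorithm (Gerchberg-Saxton style).

    target_amplitude : 2D array, desired amplitude in the reconstruction plane
    Returns the phase-only hologram (kinoform) in the SLM plane.
    """
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amplitude.shape)
    field_out = target_amplitude * np.exp(1j * phase)
    for _ in range(n_iter):
        field_in = np.fft.ifft2(field_out)
        # Phase-only constraint in the hologram (SLM) plane.
        field_in = np.exp(1j * np.angle(field_in))
        field_out = np.fft.fft2(field_in)
        # Amplitude constraint in the reconstruction plane.
        field_out = target_amplitude * np.exp(1j * np.angle(field_out))
    return np.angle(field_in)
```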

  7. a Line-Based 3d Roof Model Reconstruction Algorithm: Tin-Merging and Reshaping (tmr)

    NASA Astrophysics Data System (ADS)

    Rau, J.-Y.

    2012-07-01

    Three-dimensional building models are one of the major components of a cyber-city and are vital for the realization of 3D GIS applications. In the last decade, airborne laser scanning (ALS) data have been widely used for 3D building model reconstruction and object extraction. Instead, based on 3D roof structural lines, this paper presents a novel algorithm for automatic roof model reconstruction. A line-based roof model reconstruction algorithm, called TIN-Merging and Reshaping (TMR), is proposed. Roof structural lines, such as edges, eaves and ridges, can be measured manually from an aerial stereo-pair, derived by feature line matching or inferred from ALS data. The originality of the TMR algorithm for 3D roof modelling is to perform geometric analysis and topology reconstruction among those unstructured lines and then reshape the roof type using elevation information from the 3D structural lines. For topology reconstruction, a line-constrained Delaunay triangulation algorithm is adopted, where the input structural lines act as constraints and their vertices act as input points. Thus, the constructed TINs do not cross the structural lines. Later, at the Merging stage, the shared edge between two TINs is checked to see whether an original structural line exists there. If not, those two TINs are merged into a polygon. Iterative checking and merging of any two neighbouring TINs/polygons results in roof polygons on the horizontal plane. Finally, at the Reshaping stage, any two structural lines with fixed height are used to adjust a planar function for the whole roof polygon. In case ALS data exist, the Reshaping stage can be simplified by adjusting to the point cloud within the roof polygon. The proposed scheme reduces the complexity of 3D roof modelling and makes the modelling process easier. Five test datasets provided by ISPRS WG III/4 located at downtown Toronto, Canada and Vaihingen, Germany are used for the experiments. The test sites cover high rise buildings and residential

  8. 3D Radiative Transfer in Eta Carinae: Application of the SimpleX Algorithm to 3D SPH Simulations of Binary Colliding Winds

    NASA Technical Reports Server (NTRS)

    Clementel, N.; Madura, T. I.; Kruip, C.J.H.; Icke, V.; Gull, T. R.

    2014-01-01

    Eta Carinae is an ideal astrophysical laboratory for studying massive binary interactions and evolution, and stellar wind-wind collisions. Recent three-dimensional (3D) simulations set the stage for understanding the highly complex 3D flows in eta Car. Observations of different broad high- and low-ionization forbidden emission lines provide an excellent tool to constrain the orientation of the system, the primary's mass-loss rate, and the ionizing flux of the hot secondary. In this work we present the first steps towards generating synthetic observations to compare with available and future HST/STIS data. We present initial results from full 3D radiative transfer simulations of the interacting winds in eta Car. We use the SimpleX algorithm to post-process the output from 3D SPH simulations and obtain the ionization fractions of hydrogen and helium assuming three different mass-loss rates for the primary star. The resultant ionization maps of both species constrain the regions where the observed forbidden emission lines can form. Including collisional ionization is necessary to achieve a better description of the ionization states, especially in the areas shielded from the secondary's radiation. We find that reducing the primary's mass-loss rate increases the volume of ionized gas, creating larger areas where the forbidden emission lines can form. We conclude that post processing 3D SPH data with SimpleX is a viable tool to create ionization maps for eta Car.

  9. 3D Radiative Transfer in Eta Carinae: Application of the SimpleX Algorithm to 3D SPH Simulations of Binary Colliding Winds

    NASA Technical Reports Server (NTRS)

    Clementel, N.; Madura, T. I.; Kruip, C. J. H.; Icke, V.; Gull, T. R.

    2014-01-01

    Eta Carinae is an ideal astrophysical laboratory for studying massive binary interactions and evolution, and stellar wind-wind collisions. Recent three-dimensional (3D) simulations set the stage for understanding the highly complex 3D flows in Eta Car. Observations of different broad high- and low-ionization forbidden emission lines provide an excellent tool to constrain the orientation of the system, the primary's mass-loss rate, and the ionizing flux of the hot secondary. In this work we present the first steps towards generating synthetic observations to compare with available and future HST/STIS data. We present initial results from full 3D radiative transfer simulations of the interacting winds in Eta Car. We use the SimpleX algorithm to post-process the output from 3D SPH simulations and obtain the ionization fractions of hydrogen and helium assuming three different mass-loss rates for the primary star. The resultant ionization maps of both species constrain the regions where the observed forbidden emission lines can form. Including collisional ionization is necessary to achieve a better description of the ionization states, especially in the areas shielded from the secondary's radiation. We find that reducing the primary's mass-loss rate increases the volume of ionized gas, creating larger areas where the forbidden emission lines can form. We conclude that post processing 3D SPH data with SimpleX is a viable tool to create ionization maps for Eta Car.

  10. Data-driven interactive 3D medical image segmentation based on structured patch model.

    PubMed

    Park, Sang Hyun; Yun, Il Dong; Lee, Sang Uk

    2013-01-01

    In this paper, we present a novel three-dimensional interactive medical image segmentation method based on high-level knowledge from a training set. Since an interactive system should provide intermediate results to a user quickly, most previous methods rely on insufficient low-level models. To exploit the high-level knowledge within a short time, we construct a structured patch model that consists of multiple corresponding patch sets. The structured patch model includes the spatial relationships between neighboring patch sets and the prior knowledge of the corresponding patch set on each local region. The spatial relationships accelerate the search for corresponding patches at test time, while the prior knowledge improves the segmentation accuracy. The proposed framework provides not only a fast editing tool but also an incremental learning system, in which the segmentation result is added to the training set. Experiments demonstrate that the proposed method is useful for fast and accurate segmentation of target objects from multiple medical images.

  11. Robust 3D object localization and pose estimation for random bin picking with the 3DMaMa algorithm

    NASA Astrophysics Data System (ADS)

    Skotheim, Øystein; Thielemann, Jens T.; Berge, Asbjørn; Sommerfelt, Arne

    2010-02-01

    Enabling robots to automatically locate and pick up randomly placed and oriented objects from a bin is an important challenge in factory automation, replacing tedious and heavy manual labor. A system should be able to recognize and locate objects with a predefined shape and estimate the position with the precision necessary for a gripping robot to pick it up. We describe a system that consists of a structured light instrument for capturing 3D data and a robust approach for object location and pose estimation. The method does not depend on segmentation of range images, but instead searches through pairs of 2D manifolds to localize candidates for object match. This leads to an algorithm that is not very sensitive to scene complexity or the number of objects in the scene. Furthermore, the strategy for candidate search is easily reconfigurable to arbitrary objects. Experiments reported in this paper show the utility of the method on a general random bin picking problem, in this paper exemplified by localization of car parts with random position and orientation. Full pose estimation is done in less than 380 ms per image. We believe that the method is applicable for a wide range of industrial automation problems where precise localization of 3D objects in a scene is needed.

  12. Fully automatic cardiac segmentation from 3D CTA data: a multi-atlas based approach

    NASA Astrophysics Data System (ADS)

    Kirisli, Hortense A.; Schaap, Michiel; Klein, Stefan; Neefjes, Lisan A.; Weustink, Annick C.; Van Walsum, Theo; Niessen, Wiro J.

    2010-03-01

    Computed tomography angiography (CTA), a non-invasive imaging technique, is becoming increasingly popular for cardiac examination, mainly due to its superior spatial resolution compared to MRI. This imaging modality is currently widely used for the diagnosis of coronary artery disease (CAD) but it is not commonly used for the diagnosis of ventricular and atrial function. In this paper, we present a fully automatic method for segmenting the whole heart (i.e. the outer surface of the myocardium) and cardiac chambers from CTA datasets. Cardiac chamber segmentation is particularly valuable for the extraction of ventricular and atrial functional information, such as stroke volume and ejection fraction. With our approach, we aim to improve the diagnosis of CAD by providing functional information extracted from the same CTA data, thus not requiring additional scanning. In addition, the whole heart segmentation method we propose can be used for visualization of the coronary arteries and for obtaining a region of interest for subsequent segmentation of the coronaries, ventricles and atria. Our approach is based on multi-atlas segmentation, and performed within a non-rigid registration framework. A leave-one-out quantitative validation was carried out on 8 images. The method showed a high accuracy, which is reflected in both a mean segmentation error of 1.05+/-1.30 mm and an average Dice coefficient of 0.93. The robustness of the method is demonstrated by successfully applying the method to 243 additional datasets, without any significant failure.
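
    Multi-atlas segmentation ends with a label-fusion step; a minimal majority-voting sketch is shown below (the paper's actual fusion rule within its non-rigid registration framework may differ):

```python
import numpy as np

def majority_vote_fusion(propagated_labels):
    """Fuse labels propagated from several registered atlases by majority voting.

    propagated_labels : integer array of shape (n_atlases, Z, Y, X), where 0 is
                        background and positive values are cardiac structures
                        (labels here are placeholders, not the paper's label set)
    """
    n_labels = int(propagated_labels.max()) + 1
    # Count, per voxel, how many atlases voted for each label.
    votes = np.stack([(propagated_labels == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)
```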

  13. Searching protein 3-D structures for optimal structure alignment using intelligent algorithms and data structures.

    PubMed

    Novosád, Tomáš; Snášel, Václav; Abraham, Ajith; Yang, Jack Y

    2010-11-01

    In this paper, we present a novel algorithm for measuring protein similarity based on their 3-D structure (protein tertiary structure). The algorithm uses a suffix tree for discovering common parts of the main chains of all proteins appearing in the current Research Collaboratory for Structural Bioinformatics Protein Data Bank (PDB). By identifying these common parts, we build a vector model and use classical information retrieval (IR) algorithms based on the vector model to measure the similarity between proteins (all-to-all protein similarity). For the calculation of protein similarity, we use the term frequency × inverse document frequency (tf × idf) term weighting scheme and the cosine similarity measure. The goal of this paper is to introduce a new protein similarity metric based on suffix trees and IR methods. The whole current PDB database was used to demonstrate the very good time complexity of the algorithm as well as its high precision. We have chosen the Structural Classification of Proteins (SCOP) database for verification of the precision of our algorithm because it is maintained primarily by humans. A further result of this paper is the ability to determine SCOP categories of proteins not included in the latest version of the SCOP database (v. 1.75) with nearly 100% precision.
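
    The vector-model part of the approach corresponds to standard tf-idf weighting with cosine similarity; a tiny sketch with hypothetical fragment identifiers (the suffix-tree extraction step is not shown) could be:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Each "document" is the set of common main-chain fragments found for one protein,
# written as space-separated tokens (fragment identifiers are hypothetical).
protein_docs = [
    "frag_12 frag_97 frag_301",   # protein A
    "frag_12 frag_44 frag_301",   # protein B
    "frag_88 frag_90",            # protein C
]
tfidf = TfidfVectorizer().fit_transform(protein_docs)
similarity = cosine_similarity(tfidf)   # all-to-all protein similarity matrix
```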

  14. Accurate 3D reconstruction by a new PDS-OSEM algorithm for HRRT

    NASA Astrophysics Data System (ADS)

    Chen, Tai-Been; Horng-Shing Lu, Henry; Kim, Hang-Keun; Son, Young-Don; Cho, Zang-Hee

    2014-03-01

    State-of-the-art high resolution research tomography (HRRT) provides high resolution PET images with full 3D human brain scanning. However, the short time frames in dynamic studies cause many problems related to the low counts in the acquired data. The PDS-OSEM algorithm was proposed to reconstruct HRRT images with a high signal-to-noise ratio, providing accurate information for dynamic data. The new algorithm was evaluated using simulated images, empirical phantoms, and real human brain data. In addition, time-activity curves were adopted to validate the reconstruction performance on dynamic data between the PDS-OSEM and OP-OSEM algorithms. According to the simulated and empirical studies, the PDS-OSEM algorithm reconstructs images with higher quality, higher accuracy, less noise, and a smaller average sum of squared errors than OP-OSEM. The presented algorithm is useful for providing quality images under low count rates in dynamic studies with short scan times.
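
    For orientation, a plain ordered-subsets EM (OSEM) update is sketched below; the PDS-specific modifications of the record's algorithm are not reproduced, and the explicit matrix formulation is only illustrative (real systems use projector operators instead):

```python
import numpy as np

def osem(sinogram, system_matrix, n_subsets=4, n_iter=5):
    """Plain OSEM reconstruction sketch (not the PDS-OSEM variant).

    sinogram      : measured counts, shape (n_bins,)
    system_matrix : projection matrix A, shape (n_bins, n_voxels)
    """
    n_bins, n_voxels = system_matrix.shape
    x = np.ones(n_voxels)
    subsets = np.array_split(np.arange(n_bins), n_subsets)
    for _ in range(n_iter):
        for idx in subsets:
            a_sub = system_matrix[idx]
            expected = a_sub @ x                              # forward projection
            ratio = sinogram[idx] / np.maximum(expected, 1e-12)
            # Multiplicative EM update normalised by the subset sensitivity.
            x *= (a_sub.T @ ratio) / np.maximum(a_sub.sum(axis=0), 1e-12)
    return x
```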

  15. 3D-FIESTA Magnetic Resonance Angiography Fusion Imaging of Distal Segment of Occluded Middle Cerebral Artery.

    PubMed

    Kuribara, Tomoyoshi; Haraguchi, Koichi; Ogane, Kazumi; Matsuura, Nobuki; Ito, Takeo

    2015-01-01

    Middle cerebral artery (MCA) occlusion was examined with basi-parallel anatomical scanning (BPAS) using three-dimensional fast imaging employing steady-state acquisition (3D-FIESTA), and 3D-FIESTA and magnetic resonance angiography (MRA) fusion images were created. We expected that the incidence of hemorrhagic complications due to vessel perforation would be decreased by obtaining vascular information beyond the occlusion, and that acute endovascular revascularization could thus be performed using such techniques. We performed revascularization for acute MCA occlusion in five patients admitted to our hospital from October 2012 to October 2014. Patients consisted of 1 man and 4 women with a mean age of 76.2 years (range: 59-86 years). Fusion images were created from three-dimensional time-of-flight (3D-TOF) MRA and 3D-FIESTA with phase cycling (3D-FIESTA-C). Thrombectomy was then performed in all 5 patients using these imaging techniques: the Merci retriever in 1 patient, the Penumbra system in 1, urokinase infusion in 2, and Solitaire in 1. In all cases, 3D-FIESTA-MRA fusion imaging could depict reasonably clear vascular information to at least the M3 segment beyond the occlusion, and each acute revascularization could be performed smoothly using these imaging techniques. In all cases, there was no symptomatic hemorrhagic complication. These results show that the 3D-FIESTA-MRA fusion imaging technique can obtain vascular information beyond the MCA occlusion. In this study, no symptomatic hemorrhagic complications were detected, which implies that such techniques may be useful not only to improve treatment efficiency but also to reduce the risk of hemorrhagic complications caused by vessel perforation in acute revascularization.

  16. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    NASA Astrophysics Data System (ADS)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique to image cancer in vivo. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define MTV. The proposed PET segmentation strategies were validated under ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV, feasible in clinical practice; 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ Phantom. Validation of the method was performed both in ideal (e.g. in spherical objects with uniform radioactivity concentration) and non-ideal (e.g. in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g. with irregular shape and non-homogeneous uptake) consisted of the combined use of commercially available anthropomorphic phantoms and irregular molds generated using 3D printing technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
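
    A minimal sketch combining a k-means background estimate with an adaptive threshold, in the spirit of the method described above; the 42% threshold fraction is an assumed placeholder, not the calibrated parameter from the NEMA IQ phantom:

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_mtv(pet_roi, threshold_fraction=0.42):
    """Background-corrected, threshold-based MTV segmentation sketch.

    pet_roi            : 3D numpy array of SUV values around the lesion
    threshold_fraction : fraction of (SUVmax - background) used as cut-off (assumed)
    """
    voxels = pet_roi.reshape(-1, 1)
    # Estimate background uptake as the lower of two k-means cluster centres.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(voxels)
    background = km.cluster_centers_.min()
    threshold = background + threshold_fraction * (pet_roi.max() - background)
    return pet_roi >= threshold
```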

  17. Four-chamber heart modeling and automatic segmentation for 3-D cardiac CT volumes using marginal space learning and steerable features.

    PubMed

    Zheng, Yefeng; Barbu, Adrian; Georgescu, Bogdan; Scheuering, Michael; Comaniciu, Dorin

    2008-11-01

    We propose an automatic four-chamber heart segmentation system for the quantitative functional analysis of the heart from cardiac computed tomography (CT) volumes. Two topics are discussed: heart modeling and automatic model fitting to an unseen volume. Heart modeling is a nontrivial task since the heart is a complex nonrigid organ. The model must be anatomically accurate, allow manual editing, and provide sufficient information to guide automatic detection and segmentation. Unlike previous work, we explicitly represent important landmarks (such as the valves and the ventricular septum cusps) among the control points of the model. The control points can be detected reliably to guide the automatic model fitting process. Using this model, we develop an efficient and robust approach for automatic heart chamber segmentation in 3-D CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. In both steps, we exploit the recent advances in learning discriminative models. A novel algorithm, marginal space learning (MSL), is introduced to solve the 9-D similarity transformation search problem for localizing the heart chambers. After determining the pose of the heart chambers, we estimate the 3-D shape through learning-based boundary delineation. The proposed method has been extensively tested on the largest dataset (with 323 volumes from 137 patients) ever reported in the literature. To the best of our knowledge, our system is the fastest with a speed of 4.0 s per volume (on a dual-core 3.2-GHz processor) for the automatic segmentation of all four chambers.

  18. Refinement-cut: user-guided segmentation algorithm for translational science.

    PubMed

    Egger, Jan

    2014-06-04

    In this contribution, a semi-automatic segmentation algorithm for (medical) image analysis is presented. More precisely, the approach belongs to the category of interactive contouring algorithms, which provide real-time feedback of the segmentation result. However, even with interactive real-time contouring approaches there are always cases where the user cannot find a satisfying segmentation, e.g. due to homogeneous appearances between the object and the background, or noise inside the object. For these difficult cases the algorithm still needs additional user support. However, this additional user support should be intuitive and rapidly integrated into the segmentation process, without breaking the interactive real-time segmentation feedback. I propose a solution where the user can support the algorithm by an easy and fast placement of one or more seed points to guide the algorithm to a satisfying segmentation result also in difficult cases. These additional seed(s) restrict(s) the calculation of the segmentation for the algorithm, but at the same time still allow the interactive real-time feedback segmentation to continue. For a practical and genuine application in translational science, the approach has been tested on medical data from the clinical routine in 2D and 3D.

  19. Refinement-Cut: User-Guided Segmentation Algorithm for Translational Science

    PubMed Central

    Egger, Jan

    2014-01-01

    In this contribution, a semi-automatic segmentation algorithm for (medical) image analysis is presented. More precisely, the approach belongs to the category of interactive contouring algorithms, which provide real-time feedback of the segmentation result. However, even with interactive real-time contouring approaches there are always cases where the user cannot find a satisfying segmentation, e.g. due to homogeneous appearances between the object and the background, or noise inside the object. For these difficult cases the algorithm still needs additional user support. However, this additional user support should be intuitive and rapidly integrated into the segmentation process, without breaking the interactive real-time segmentation feedback. I propose a solution where the user can support the algorithm by an easy and fast placement of one or more seed points to guide the algorithm to a satisfying segmentation result also in difficult cases. These additional seed(s) restrict(s) the calculation of the segmentation for the algorithm, but at the same time still allow the interactive real-time feedback segmentation to continue. For a practical and genuine application in translational science, the approach has been tested on medical data from the clinical routine in 2D and 3D. PMID:24893650

  20. Refinement-Cut: User-Guided Segmentation Algorithm for Translational Science

    NASA Astrophysics Data System (ADS)

    Egger, Jan

    2014-06-01

    In this contribution, a semi-automatic segmentation algorithm for (medical) image analysis is presented. More precisely, the approach belongs to the category of interactive contouring algorithms, which provide real-time feedback of the segmentation result. However, even with interactive real-time contouring approaches there are always cases where the user cannot find a satisfying segmentation, e.g. due to homogeneous appearances between the object and the background, or noise inside the object. For these difficult cases the algorithm still needs additional user support. However, this additional user support should be intuitive and rapidly integrated into the segmentation process, without breaking the interactive real-time segmentation feedback. I propose a solution where the user can support the algorithm by an easy and fast placement of one or more seed points to guide the algorithm to a satisfying segmentation result also in difficult cases. These additional seed(s) restrict(s) the calculation of the segmentation for the algorithm, but at the same time still allow the interactive real-time feedback segmentation to continue. For a practical and genuine application in translational science, the approach has been tested on medical data from the clinical routine in 2D and 3D.

  1. Assessment of next-best-view algorithms performance with various 3D scanners and manipulator

    NASA Astrophysics Data System (ADS)

    Karaszewski, M.; Adamczyk, M.; Sitnik, R.

    2016-09-01

    The problem of calculating three dimensional (3D) sensor position (and orientation) during the digitization of real-world objects (called next best view planning or NBV) has been an active topic of research for over 20 years. While many solutions have been developed, it is hard to compare their quality based only on the exemplary results presented in papers. We implemented 13 of the most popular NBV algorithms and evaluated their performance by digitizing five objects of various properties, using three measurement heads with different working volumes mounted on a 6-axis robot with a rotating table for placing objects. The results obtained for the 13 algorithms were then compared based on four criteria: the number of directional measurements, digitization time, total positioning distance, and surface coverage required to digitize test objects with available measurement heads.

  2. Mesenteric Vasculature-guided Small Bowel Segmentation on 3D CT

    PubMed Central

    Zhang, Weidong; Liu, Jiamin; Yao, Jianhua; Louie, Adeline; Nguyen, Tan B.; Wank, Stephen; Nowinski, Wieslaw L.; Summers, Ronald M.

    2014-01-01

    Due to its importance and possible applications in visualization, tumor detection and pre-operative planning, automatic small bowel segmentation is essential for computer-aided diagnosis of small bowel pathology. However, segmenting the small bowel directly on CT scans is very difficult because of the low image contrast on CT scans and the high tortuosity of the small bowel and its close proximity to other abdominal organs. Motivated by the intensity characteristics of abdominal CT images, the anatomic relationship between the mesenteric vasculature and the small bowel, and the potential usefulness of the mesenteric vasculature for establishing the path of the small bowel, we propose a novel mesenteric vasculature map-guided method for small bowel segmentation on high-resolution CT angiography scans. The major mesenteric arteries are first segmented using a vessel tracing method based on a multi-linear subspace vessel model and Bayesian inference. Second, multi-view, multi-scale vesselness enhancement filters are used to segment small vessels, and vessels directly or indirectly connecting to the superior mesenteric artery are classified as mesenteric vessels. Third, a mesenteric vasculature map is built by linking vessel bifurcation points, and the small bowel is segmented by employing the mesenteric vessel map and fuzzy connectedness. The method was evaluated on 11 abdominal CT scans of patients suspected of having carcinoid tumors, with a manually labeled reference standard. The result, 82.5% volume overlap accuracy compared with the reference standard, shows it is feasible to segment the small bowel on CT scans using the mesenteric vasculature as a roadmap. PMID:23807437
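
    As an example of a multi-scale vesselness enhancement step, a Frangi filter (one common choice, not necessarily the filter used in the paper) can be applied to the CTA volume; the scales and mask threshold below are assumptions:

```python
import numpy as np
from skimage.filters import frangi

def enhance_small_vessels(cta_volume, sigmas=(1, 2, 3), cutoff=0.05):
    """Multi-scale vesselness enhancement followed by a simple threshold.

    cta_volume : 3D numpy array of contrast-enhanced CT intensities (vessels bright)
    sigmas     : filter scales in voxels (assumed values)
    cutoff     : vesselness threshold for a rough vessel mask (assumed value)
    """
    vesselness = frangi(cta_volume.astype(float), sigmas=sigmas, black_ridges=False)
    return vesselness, vesselness > cutoff
```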

  3. Enhanced high dynamic range 3D shape measurement based on generalized phase-shifting algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Minmin; Du, Guangliang; Zhou, Canlin; Zhang, Chaorui; Si, Shuchun; Li, Hui; Lei, Zhenkun; Li, YanJie

    2017-02-01

    Measuring objects with large reflectivity variations across their surface is one of the open challenges in phase measurement profilometry (PMP). Saturated or dark pixels in the deformed fringe patterns captured by the camera will lead to phase fluctuations and errors. Jiang et al. proposed a high dynamic range real-time three-dimensional (3D) shape measurement method (Jiang et al., 2016) [17] that does not require changing camera exposures. Three inverted phase-shifted fringe patterns are used to complement three regular phase-shifted fringe patterns for phase retrieval whenever any of the regular fringe patterns are saturated. Nonetheless, Jiang's method has some drawbacks: (1) the phases of saturated pixels are estimated by different formulas on a case by case basis; in other words, the method lacks a universal formula; (2) it cannot be extended to the four-step phase-shifting algorithm, because inverted fringe patterns are the repetition of regular fringe patterns; (3) for every pixel in the fringe patterns, only three unsaturated intensity values can be chosen for phase demodulation, leaving the other unsaturated ones idle. We propose a method to enhance high dynamic range 3D shape measurement based on a generalized phase-shifting algorithm, which combines the complementary techniques of inverted and regular fringe patterns with a generalized phase-shifting algorithm. Firstly, two sets of complementary phase-shifted fringe patterns, namely the regular and the inverted fringe patterns, are projected and collected. Then, all unsaturated intensity values at the same camera pixel from two sets of fringe patterns are selected and employed to retrieve the phase using a generalized phase-shifting algorithm. Finally, simulations and experiments are conducted to prove the validity of the proposed method. The results are analyzed and compared with those of Jiang's method, demonstrating that our method not only expands the scope of Jiang's method, but also improves
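
    A hedged, per-pixel sketch of generalized phase-shifting retrieval restricted to unsaturated samples, in the spirit of the approach described above; the phase-shift values, 8-bit saturation threshold and sample intensities are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def retrieve_phase(intensities, deltas, sat_level=250):
    """intensities: (N,) samples of one pixel; deltas: (N,) phase shifts in rad."""
    intensities = np.asarray(intensities, dtype=float)
    deltas = np.asarray(deltas, dtype=float)
    ok = intensities < sat_level             # keep every unsaturated sample
    if ok.sum() < 3:
        return np.nan                        # not enough data to solve the model
    # Model: I_n = a0 + a1*cos(delta_n) - a2*sin(delta_n), with
    # a1 = B*cos(phi) and a2 = B*sin(phi).
    A = np.column_stack([np.ones(ok.sum()), np.cos(deltas[ok]), -np.sin(deltas[ok])])
    coeffs, *_ = np.linalg.lstsq(A, intensities[ok], rcond=None)
    return np.arctan2(coeffs[2], coeffs[1])  # wrapped phase

# Six samples per pixel: three regular and three inverted phase-shifted patterns.
deltas = np.array([0, 2*np.pi/3, 4*np.pi/3, np.pi, np.pi + 2*np.pi/3, np.pi + 4*np.pi/3])
phase = retrieve_phase([120, 255, 90, 180, 60, 200], deltas)
```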

  4. Segment-interaction in sprint start: Analysis of 3D angular velocity and kinetic energy in elite sprinters.

    PubMed

    Slawinski, J; Bonnefoy, A; Ontanon, G; Leveque, J M; Miller, C; Riquet, A; Chèze, L; Dumas, R

    2010-05-28

    The aim of the present study was to measure, during a sprint start, the joint angular velocity and the kinetic energy of the different segments in elite sprinters. This was performed using a 3D kinematic analysis of the whole body. Eight elite sprinters (10.30±0.14 s 100 m time), equipped with 63 passive reflective markers, performed four maximal 10 m sprint starts on an indoor track. An opto-electronic Motion Analysis system consisting of 12 digital cameras (250 Hz) was used to collect the 3D marker trajectories. During the pushing phase on the blocks, the 3D angular velocity vector and its norm were calculated for each joint. The kinetic energy of 16 segments of the lower and upper limbs and of the total body was calculated. The 3D kinematic analysis of the whole body demonstrated that joints such as the shoulders, thorax or hips did not reach their maximal angular velocity with a movement of flexion-extension, but with a combination of flexion-extension, abduction-adduction and internal-external rotation. The maximal kinetic energy of the total body was reached before block clearing (537±59.3 J vs. 514.9±66.0 J, respectively; p ≤ 0.01). These results suggest that a better synchronization between the upper and lower limbs could increase the efficiency of the pushing phase on the blocks. Moreover, to understand the low interindividual variance in sprint start performance among elite athletes, a 3D whole-body kinematic analysis should be used.
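
    An illustrative computation of one body segment's kinetic energy (translational plus rotational), assuming the segment mass, centre-of-mass velocity, inertia tensor and angular velocity have already been derived from the marker trajectories; the numbers are placeholders, not data from the study.

```python
import numpy as np

def segment_kinetic_energy(mass, v_com, inertia, omega):
    """mass [kg], v_com (3,) [m/s], inertia (3,3) [kg m^2], omega (3,) [rad/s]."""
    translational = 0.5 * mass * np.dot(v_com, v_com)
    rotational = 0.5 * omega @ inertia @ omega
    return translational + rotational

# Example values for a thigh-like segment (purely illustrative numbers).
E = segment_kinetic_energy(
    mass=7.5,
    v_com=np.array([2.1, 0.3, 0.4]),
    inertia=np.diag([0.10, 0.10, 0.02]),
    omega=np.array([1.5, 0.2, 6.0]),
)
```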

  5. Metastatic liver tumour segmentation with a neural network-guided 3D deformable model.

    PubMed

    Vorontsov, Eugene; Tang, An; Roy, David; Pal, Christopher J; Kadoury, Samuel

    2017-01-01

    The segmentation of liver tumours in CT images is useful for the diagnosis and treatment of liver cancer. Furthermore, an accurate assessment of tumour volume aids in the diagnosis and evaluation of treatment response. Currently, segmentation is performed manually by an expert, and because of the time required, a rough estimate of tumour volume is often done instead. We propose a semi-automatic segmentation method that makes use of machine learning within a deformable surface model. Specifically, we propose a deformable model that uses a voxel classifier based on a multilayer perceptron (MLP) to interpret the CT image. The new deformable model considers vertex displacement towards apparent tumour boundaries and regularization that promotes surface smoothness. During operation, a user identifies the target tumour and the mesh then automatically delineates the tumour from the MLP processed image. The method was tested on a dataset of 40 abdominal CT scans with a total of 95 colorectal metastases collected from a variety of scanners with variable spatial resolution. The segmentation results are encouraging with a Dice similarity metric of [Formula: see text] and demonstrates that the proposed method can deal with highly variable data. This work motivates further research into tumour segmentation using machine learning with more data and deeper neural networks.

  6. IMPROVEMENTS TO THE TIME STEPPING ALGORITHM OF RELAP5-3D

    SciTech Connect

    Cumberland, R.; Mesina, G.

    2009-01-01

    The RELAP5-3D time step method is used to perform thermo-hydraulic and neutronic simulations of nuclear reactors and other devices. It discretizes time and space by numerically solving several differential equations. Previously, time step size was controlled by halving or doubling the size of a previous time step. This process caused the code to run slower than it potentially could. In this research project, the RELAP5-3D time step method was modified to allow a new method of changing time steps to improve execution speed and to control error. The new RELAP5-3D time step method being studied involves making the time step proportional to the material Courant limit (MCL), while ensuring that the time step does not increase by more than a factor of two between advancements. As before, if a step fails or mass error is excessive, the time step is cut in half. To examine performance of the new method, a measure of run time and a measure of error were plotted against a changing MCL proportionality constant (m) in seven test cases. The removal of the upper time step limit produced a small increase in error, but a large decrease in execution time. The best value of m was found to be 0.9. The new algorithm is capable of producing a significant increase in execution speed, with a relatively small increase in mass error. The improvements made are now under consideration for inclusion as a special option in the RELAP5-3D production code.
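
    A small sketch of the time-step rule described above: the step follows the material Courant limit scaled by m, never more than doubles between advancements, and is halved on a failed step or excessive mass error. Variable names and values are illustrative and this is not RELAP5-3D code.

```python
def next_time_step(dt_prev, mcl, m=0.9, step_failed=False):
    if step_failed:
        return 0.5 * dt_prev           # cut the step in half and retry
    dt_new = m * mcl                   # proportional to the material Courant limit
    return min(dt_new, 2.0 * dt_prev)  # never grow by more than a factor of two

dt = 1.0e-3
for mcl in [1.5e-3, 4.0e-3, 8.0e-3]:
    dt = next_time_step(dt, mcl)
```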

  7. Processing of noised residual stress phase maps by using a 3D phase unwrapping algorithm

    NASA Astrophysics Data System (ADS)

    Viotti, Matias R.; Fantin, Analucia V.; Albertazzi, Armando; Willemann, Daniel P.

    2013-07-01

    The measurement of residual stress by using digital speckle pattern interferometry (DSPI) combined with the hole drilling technique is a valuable and fast tool for integrity evaluation of civil structures and mechanical parts. However, in some cases, measured phase maps are badly corrupted by noise, which makes phase unwrapping a difficult and unsuccessful task. Following recommendations given by the ASTM E837 standard, 20 consecutive hole steps should be performed for the measurement of non-uniform stresses. As a consequence, 20 difference phase maps along the hole depth will be available for the DSPI technique. An adaptive phase unwrapping algorithm can be used to unwrap images following paths localized along well-modulated pixels, performing either two-dimensional phase unwrapping (following paths inside the difference phase map corresponding to one hole step) or 3D phase unwrapping (similar to temporal phase unwrapping, following paths located at well-modulated pixels in a previous or a subsequent hole image). Non-corrupted and corrupted hole-drilling tests were processed with a traditional phase unwrapping algorithm as well as with the proposed 3D approach. Comparisons between unwrapped phase maps and simulated ones showed that the proposed method gave results in better accordance with the simulated maps than the 2D results.
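
    A minimal alternative sketch: stacking the per-depth wrapped phase maps into a 3D array and unwrapping them jointly with scikit-image's unwrapper, which is in the same spirit as (though not identical to) the adaptive 3D path-following algorithm described above; the stack is synthetic.

```python
import numpy as np
from skimage.restoration import unwrap_phase

n_steps, h, w = 20, 64, 64                                              # 20 hole-drilling increments
wrapped_stack = np.angle(np.exp(1j * np.random.randn(n_steps, h, w)))   # placeholder wrapped phase

# Treat the hole-step axis as a third dimension so information from adjacent
# steps can help across badly modulated pixels.
unwrapped = unwrap_phase(wrapped_stack)
```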

  8. A new algorithm for determining 3D biplane imaging geometry: theory and implementation

    NASA Astrophysics Data System (ADS)

    Singh, Vikas; Xu, Jinhui; Hoffmann, Kenneth R.; Xu, Guang; Chen, Zhenming; Gopal, Anant

    2005-04-01

    Biplane imaging is a primary method for visual and quantitative assessment of the vasculature. A key problem called Imaging Geometry Determination problem (IGD for short) in this method is to determine the rotation-matrix R and the translation-vector t which relate the two coordinate systems. In this paper, we propose a new approach, called IG-Sieving, to calculate R and t using corresponding points in the two images. Our technique first generates an initial estimate of R and t from the gantry angles of the imaging system, and then optimizes them by solving an optimal-cell-search problem in a 6-D parametric space (three variables defining R plus the three variables of t). To efficiently find the optimal imaging geometry (IG) in 6-D, our approach divides the high dimensional search domain into a set of lower-dimensional regions, thereby reducing the optimal-cell-search problem to a set of optimization problems in 3D sub-spaces. For each such sub-space, our approach first applies efficient computational geometry techniques to identify "possibly-feasible" IGs, and then uses a criterion we call fall-in-number to sieve out good IGs. We show that in a bounded number of optimization steps, a (possibly infinite) set of near-optimal IGs can be determined. Simulation results indicate that our method can reconstruct 3D points with average 3D center-of-mass errors of about 0.8 cm for input image-data errors as high as 0.1 cm. More importantly, our algorithm provides a novel insight into the geometric structure of the solution-space, which could be exploited to significantly improve the accuracy of other biplane algorithms.

  9. A GPU-accelerated 3D Coupled Sub-sample Estimation Algorithm for Volumetric Breast Strain Elastography.

    PubMed

    Peng, Bo; Wang, Yuqi; Hall, Timothy J; Jiang, Jingfeng

    2017-01-31

    Our primary objective of this work was to extend a previously published 2D coupled sub-sample tracking algorithm for 3D speckle tracking in the framework of ultrasound breast strain elastography. In order to overcome heavy computational cost, we investigated the use of a graphic processing unit (GPU) to accelerate the 3D coupled sub-sample speckle tracking method. The performance of the proposed GPU implementation was tested using a tissue-mimicking (TM) phantom and in vivo breast ultrasound data. The performance of this 3D sub-sample tracking algorithm was compared with the conventional 3D quadratic subsample estimation algorithm. On the basis of these evaluations, we concluded that the GPU implementation of this 3D sub-sample estimation algorithm can provide high-quality strain data (i.e. high correlation between the pre- and the motion-compensated post-deformation RF echo data and high contrast-to-noise ratio strain images), as compared to the conventional 3D quadratic sub-sample algorithm. Using the GPU implementation of the 3D speckle tracking algorithm, volumetric strain data can be achieved relatively fast (approximately 20 seconds per volume [2.5 cm × 2.5 cm × 2.5 cm]).

  10. Segmentation of densely populated cell nuclei from confocal image stacks using 3D non-parametric shape priors.

    PubMed

    Ong, Lee-Ling S; Wang, Mengmeng; Dauwels, Justin; Asada, H Harry

    2014-01-01

    An approach to jointly estimate 3D shapes and poses of stained nuclei from confocal microscopy images, using statistical prior information, is presented. Extracting nuclei boundaries from our experimental images of cell migration is challenging due to clustered nuclei and variations in their shapes. This issue is formulated as a maximum a posteriori estimation problem. By incorporating statistical prior models of 3D nuclei shapes into level set functions, the active contour evolution applied to the images is constrained. A 3D alignment algorithm is developed to build the training databases and to match contours obtained from the images to them. To address the issue of aligning the model over multiple clustered nuclei, a watershed-like technique is used to detect and separate clustered regions prior to active contour evolution. Our method is tested on confocal images of endothelial cells in microfluidic devices and compared with existing approaches.
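
    A hedged sketch of the watershed-like separation of clustered nuclei that is applied before the shape-prior active contours; it uses standard scikit-image/SciPy tools rather than the authors' exact implementation, and the synthetic mask and minimum peak distance are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def separate_clusters(binary_mask, min_distance=5):
    """Split touching nuclei in a 3D foreground mask with a distance-transform watershed."""
    distance = ndi.distance_transform_edt(binary_mask)
    peaks = peak_local_max(distance, min_distance=min_distance,
                           labels=binary_mask.astype(int))
    markers = np.zeros(binary_mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=binary_mask)

# Example on a synthetic mask with two overlapping blobs.
mask = np.zeros((40, 40, 40), dtype=bool)
zz, yy, xx = np.ogrid[:40, :40, :40]
mask |= (zz - 18) ** 2 + (yy - 18) ** 2 + (xx - 18) ** 2 < 100
mask |= (zz - 24) ** 2 + (yy - 24) ** 2 + (xx - 24) ** 2 < 100
labels = separate_clusters(mask)
```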

  11. Atlas-based segmentation of 3D cerebral structures with competitive level sets and fuzzy control.

    PubMed

    Ciofolo, Cybèle; Barillot, Christian

    2009-06-01

    We propose a novel approach for the simultaneous segmentation of multiple structures with competitive level sets driven by fuzzy control. To this end, several contours evolve simultaneously toward previously defined anatomical targets. A fuzzy decision system combines the a priori knowledge provided by an anatomical atlas with the intensity distribution of the image and the relative position of the contours. This combination automatically determines the directional term of the evolution equation of each level set. This leads to a local expansion or contraction of the contours, in order to match the boundaries of their respective targets. Two applications are presented: the segmentation of the brain hemispheres and the cerebellum, and the segmentation of deep internal structures. Experimental results on real magnetic resonance (MR) images are presented, quantitatively assessed and discussed.

  12. Registration of overlapping 3D point clouds using extracted line segments. (Polish Title: Rejestracja chmur punktów 3D w oparciu o wyodrębnione krawędzie)

    NASA Astrophysics Data System (ADS)

    Poręba, M.; Goulette, F.

    2014-12-01

    The registration of 3D point clouds collected from different scanner positions is necessary in order to avoid occlusions, ensure a full coverage of areas, and collect useful data for analyzing and documenting the surrounding environment. This procedure involves three main stages: 1) choosing appropriate features, which can be reliably extracted; 2) matching conjugate primitives; 3) estimating the transformation parameters. Currently, points and spheres are most frequently chosen as the registration features. However, due to limited point cloud resolution, proper identification and precise measurement of a common point within the overlapping laser data is almost impossible. One possible solution to this problem may be a registration process based on the Iterative Closest Point (ICP) algorithm or its variation. Alternatively, planar and linear feature-based registration techniques can also be applied. In this paper, we propose the use of line segments obtained from intersecting planes modelled within individual scans. Such primitives can be easily extracted even from low-density point clouds. Working with synthetic data, several existing line-based registration methods are evaluated according to their robustness to noise and the precision of the estimated transformation parameters. For the purpose of quantitative assessment, an accuracy criterion based on a modified Hausdorff distance is defined. Since an automated matching of segments is a challenging task that influences the correctness of the transformation parameters, a correspondence-finding algorithm is developed. The tests show that our matching algorithm provides a correct pairing with an accuracy of at least 99%, and about 8% of omitted line pairs.
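
    An illustrative sketch of a modified-Hausdorff-style accuracy criterion for comparing two sets of 3D line segments after registration; the point sampling density and the exact definition are assumptions, not the authors' formula.

```python
import numpy as np
from scipy.spatial import cKDTree

def sample_segments(segments, n=20):
    """segments: (m, 2, 3) array of endpoints; returns densely sampled points."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    pts = [(1 - t) * a + t * b for a, b in segments]
    return np.vstack(pts)

def modified_hausdorff(segs_a, segs_b):
    pa, pb = sample_segments(segs_a), sample_segments(segs_b)
    d_ab = cKDTree(pb).query(pa)[0].mean()   # mean nearest distance A -> B
    d_ba = cKDTree(pa).query(pb)[0].mean()   # mean nearest distance B -> A
    return max(d_ab, d_ba)

segs_a = np.array([[[0, 0, 0], [1, 0, 0]], [[0, 1, 0], [0, 1, 1]]], dtype=float)
segs_b = segs_a + 0.02                       # slightly misaligned copy
print(modified_hausdorff(segs_a, segs_b))
```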

  13. Model-based 3D segmentation of the bones of joints in medical images

    NASA Astrophysics Data System (ADS)

    Liu, Jiamin; Udupa, Jayaram K.; Saha, Punam K.; Odhner, Dewey; Hirsch, Bruce E.; Siegler, Sorin; Simon, Scott; Winkelstein, Beth A.

    2005-04-01

    There are several medical application areas that require the segmentation and separation of the component bones of joints in a sequence of acquired images of the joint under various loading conditions, our own target area being joint motion analysis. This is a challenging problem due to the proximity of bones at the joint, partial volume effects, and other imaging modality-specific factors that confound boundary contrast. A model-based strategy is proposed in this paper wherein a rigid model of the bone is generated from a segmentation of the bone in the image corresponding to one position of the joint by using the live wire method. In other images of the joint, this model is used to search for the same bone by minimizing an energy functional that utilizes both boundary- and region-based information. An evaluation of the method by utilizing a total of 60 data sets on MR and CT images of the ankle complex and cervical spine indicates that the segmentations agree very closely with the live wire segmentations yielding true positive and false positive volume fractions in the range 89-97% and 0.2-0.7%. The method requires 1-2 minutes of operator time and 6-7 minutes of computer time, which makes it significantly more efficient than live wire - the only method currently available for the task.

  14. 3D segmentation of abdominal aorta from CT-scan and MR images.

    PubMed

    Duquette, Anthony Adam; Jodoin, Pierre-Marc; Bouchot, Olivier; Lalande, Alain

    2012-06-01

    We designed a generic method for segmenting the aneurismal sac of an abdominal aortic aneurysm (AAA) both from multi-slice MR and CT-scan examinations. It is a semi-automatic method requiring little human intervention and based on graph cut theory to segment the lumen interface and the aortic wall of AAAs. Our segmentation method works independently on MRI and CT-scan volumes and has been tested on a 44 patient dataset and 10 synthetic images. Segmentation and maximum diameter estimation were compared to manual tracing from 4 experts. An inter-observer study was performed in order to measure the variability range of a human observer. Based on three metrics (the maximum aortic diameter, the volume overlap and the Hausdorff distance) the variability of the results obtained by our method is shown to be similar to that of a human operator, both for the lumen interface and the aortic wall. As will be shown, the average distance obtained with our method is less than one standard deviation away from each expert, both for healthy subjects and for patients with AAA. Our semi-automatic method provides reliable contours of the abdominal aorta from CT-scan or MRI, allowing rapid and reproducible evaluations of AAA.

  15. Crowdsourcing the creation of image segmentation algorithms for connectomics

    PubMed Central

    Arganda-Carreras, Ignacio; Turaga, Srinivas C.; Berger, Daniel R.; Cireşan, Dan; Giusti, Alessandro; Gambardella, Luca M.; Schmidhuber, Jürgen; Laptev, Dmitry; Dwivedi, Sarvesh; Buhmann, Joachim M.; Liu, Ting; Seyedhosseini, Mojtaba; Tasdizen, Tolga; Kamentsky, Lee; Burget, Radim; Uher, Vaclav; Tan, Xiao; Sun, Changming; Pham, Tuan D.; Bas, Erhan; Uzunbas, Mustafa G.; Cardona, Albert; Schindelin, Johannes; Seung, H. Sebastian

    2015-01-01

    To stimulate progress in automating the reconstruction of neural circuits, we organized the first international challenge on 2D segmentation of electron microscopic (EM) images of the brain. Participants submitted boundary maps predicted for a test set of images, and were scored based on their agreement with a consensus of human expert annotations. The winning team had no prior experience with EM images, and employed a convolutional network. This “deep learning” approach has since become accepted as a standard for segmentation of EM images. The challenge has continued to accept submissions, and the best so far has resulted from cooperation between two teams. The challenge has probably saturated, as algorithms cannot progress beyond limits set by ambiguities inherent in 2D scoring and the size of the test dataset. Retrospective evaluation of the challenge scoring system reveals that it was not sufficiently robust to variations in the widths of neurite borders. We propose a solution to this problem, which should be useful for a future 3D segmentation challenge. PMID:26594156

  16. Crowdsourcing the creation of image segmentation algorithms for connectomics.

    PubMed

    Arganda-Carreras, Ignacio; Turaga, Srinivas C; Berger, Daniel R; Cireşan, Dan; Giusti, Alessandro; Gambardella, Luca M; Schmidhuber, Jürgen; Laptev, Dmitry; Dwivedi, Sarvesh; Buhmann, Joachim M; Liu, Ting; Seyedhosseini, Mojtaba; Tasdizen, Tolga; Kamentsky, Lee; Burget, Radim; Uher, Vaclav; Tan, Xiao; Sun, Changming; Pham, Tuan D; Bas, Erhan; Uzunbas, Mustafa G; Cardona, Albert; Schindelin, Johannes; Seung, H Sebastian

    2015-01-01

    To stimulate progress in automating the reconstruction of neural circuits, we organized the first international challenge on 2D segmentation of electron microscopic (EM) images of the brain. Participants submitted boundary maps predicted for a test set of images, and were scored based on their agreement with a consensus of human expert annotations. The winning team had no prior experience with EM images, and employed a convolutional network. This "deep learning" approach has since become accepted as a standard for segmentation of EM images. The challenge has continued to accept submissions, and the best so far has resulted from cooperation between two teams. The challenge has probably saturated, as algorithms cannot progress beyond limits set by ambiguities inherent in 2D scoring and the size of the test dataset. Retrospective evaluation of the challenge scoring system reveals that it was not sufficiently robust to variations in the widths of neurite borders. We propose a solution to this problem, which should be useful for a future 3D segmentation challenge.

  17. Bone canalicular network segmentation in 3D nano-CT images through geodesic voting and image tessellation

    NASA Astrophysics Data System (ADS)

    Zuluaga, Maria A.; Orkisz, Maciej; Dong, Pei; Pacureanu, Alexandra; Gouttenoire, Pierre-Jean; Peyrin, Françoise

    2014-05-01

    Recent studies emphasized the role of the bone lacuno-canalicular network (LCN) in the understanding of bone diseases such as osteoporosis. However, suitable methods to investigate this structure are lacking. The aim of this paper is to introduce a methodology to segment the LCN from three-dimensional (3D) synchrotron radiation nano-CT images. Segmentation of such structures is challenging due to several factors such as limited contrast and signal-to-noise ratio, partial volume effects and huge number of data that needs to be processed, which restrains user interaction. We use an approach based on minimum-cost paths and geodesic voting, for which we propose a fully automatic initialization scheme based on a tessellation of the image domain. The centroids of pre-segmented lacunæ are used as Voronoi-tessellation seeds and as start-points of a fast-marching front propagation, whereas the end-points are distributed in the vicinity of each Voronoi-region boundary. This initialization scheme was devised to cope with complex biological structures involving cells interconnected by multiple thread-like, branching processes, while the seminal geodesic-voting method only copes with tree-like structures. Our method has been assessed quantitatively on phantom data and qualitatively on real datasets, demonstrating its feasibility. To the best of our knowledge, presented 3D renderings of lacunæ interconnected by their canaliculi were achieved for the first time.

  18. Nodule Detection in a Lung Region that's Segmented with Using Genetic Cellular Neural Networks and 3D Template Matching with Fuzzy Rule Based Thresholding

    PubMed Central

    Osman, Onur; Ucan, Osman N.

    2008-01-01

    Objective The purpose of this study was to develop a new method for automated lung nodule detection in serial section CT images with using the characteristics of the 3D appearance of the nodules that distinguish themselves from the vessels. Materials and Methods Lung nodules were detected in four steps. First, to reduce the number of region of interests (ROIs) and the computation time, the lung regions of the CTs were segmented using Genetic Cellular Neural Networks (G-CNN). Then, for each lung region, ROIs were specified with using the 8 directional search; +1 or -1 values were assigned to each voxel. The 3D ROI image was obtained by combining all the 2-Dimensional (2D) ROI images. A 3D template was created to find the nodule-like structures on the 3D ROI image. Convolution of the 3D ROI image with the proposed template strengthens the shapes that are similar to those of the template and it weakens the other ones. Finally, fuzzy rule based thresholding was applied and the ROI's were found. To test the system's efficiency, we used 16 cases with a total of 425 slices, which were taken from the Lung Image Database Consortium (LIDC) dataset. Results The computer aided diagnosis (CAD) system achieved 100% sensitivity with 13.375 FPs per case when the nodule thickness was greater than or equal to 5.625 mm. Conclusion Our results indicate that the detection performance of our algorithm is satisfactory, and this may well improve the performance of computer-aided detection of lung nodules. PMID:18253070
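
    A minimal sketch of the 3D template-matching step: convolving the ROI volume (voxels set to +1/-1) with a small spherical template strengthens nodule-like blobs and weakens elongated vessels. The template shape and size and the final crisp threshold are illustrative assumptions; the paper's fuzzy rule based thresholding is not reproduced.

```python
import numpy as np
from scipy import ndimage as ndi

def spherical_template(radius=3):
    r = radius
    zz, yy, xx = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
    sphere = (zz**2 + yy**2 + xx**2) <= r**2
    return np.where(sphere, 1.0, -1.0)        # +1 inside the nodule model, -1 outside

roi_volume = np.random.choice([-1.0, 1.0], size=(40, 40, 40))   # placeholder 3D ROI image
response = ndi.convolve(roi_volume, spherical_template(3), mode="constant", cval=-1.0)

# Simple crisp threshold as a stand-in for the fuzzy rule based thresholding.
candidates = response > 0.6 * response.max()
```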

  19. Graph-cut Based Interactive Segmentation of 3D Materials-Science Images

    DTIC Science & Technology

    2014-04-26

    ... while still quickly and conveniently allowing manual addition and removal of segments in real time, (2) multiple extensions to the interactive tools ... inside the region, and the mean intensity inside the region. These properties can be computed quickly, which fits well with the real-time ...

  20. Intracranial aneurysm segmentation in 3D CT angiography: method and quantitative validation

    NASA Astrophysics Data System (ADS)

    Firouzian, Azadeh; Manniesing, R.; Flach, Z. H.; Risselada, R.; van Kooten, F.; Sturkenboom, M. C. J. M.; van der Lugt, A.; Niessen, W. J.

    2010-03-01

    Accurately quantifying aneurysm shape parameters is of clinical importance, as it is an important factor in choosing the right treatment modality (i.e. coiling or clipping), in predicting rupture risk and operative risk and for pre-surgical planning. The first step in aneurysm quantification is to segment it from other structures that are present in the image. As manual segmentation is a tedious procedure and prone to inter- and intra-observer variability, there is a need for an automated method which is accurate and reproducible. In this paper a novel semi-automated method for segmenting aneurysms in Computed Tomography Angiography (CTA) data based on Geodesic Active Contours is presented and quantitatively evaluated. Three different image features are used to steer the level set to the boundary of the aneurysm, namely intensity, gradient magnitude and variance in intensity. The method requires minimum user interaction, i.e. clicking a single seed point inside the aneurysm which is used to estimate the vessel intensity distribution and to initialize the level set. The results show that the developed method is reproducible, and performs in the range of interobserver variability in terms of accuracy.
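
    A hedged sketch of how the three image features mentioned above (intensity, gradient magnitude and local intensity variance) could be combined into a single speed image for a geodesic-active-contour level set; the weights, the seed-based intensity estimate and the smoothing scales are assumptions, not the authors' formulation.

```python
import numpy as np
from scipy import ndimage as ndi

def aneurysm_speed_image(volume, seed, sigma=1.0, w=(1.0, 1.0, 1.0)):
    mu = volume[seed]                                   # intensity estimated at the user seed
    intensity_term = np.exp(-((volume - mu) ** 2) / (2 * volume.std() ** 2))
    grad_mag = ndi.gaussian_gradient_magnitude(volume, sigma)
    gradient_term = 1.0 / (1.0 + grad_mag)              # slow down on strong edges
    local_mean = ndi.uniform_filter(volume, size=5)
    local_sqmean = ndi.uniform_filter(volume ** 2, size=5)
    variance_term = 1.0 / (1.0 + np.maximum(local_sqmean - local_mean ** 2, 0.0))
    speed = (w[0] * intensity_term + w[1] * gradient_term + w[2] * variance_term) / sum(w)
    return speed                                        # high inside the aneurysm, low at its boundary

volume = np.random.rand(48, 48, 48).astype(np.float32)  # placeholder CTA sub-volume
speed = aneurysm_speed_image(volume, seed=(24, 24, 24))
```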

  1. Image segmentation and registration for the analysis of joint motion from 3D MRI

    NASA Astrophysics Data System (ADS)

    Hu, Yangqiu; Haynor, David R.; Fassbind, Michael; Rohr, Eric; Ledoux, William

    2006-03-01

    We report an image segmentation and registration method for studying joint morphology and kinematics from in vivo MRI scans and its application to the analysis of ankle joint motion. Using an MR-compatible loading device, a foot was scanned in a single neutral and seven dynamic positions including maximal flexion, rotation and inversion/eversion. A segmentation method combining graph cuts and level sets was developed which allows a user to interactively delineate 14 bones in the neutral position volume in less than 30 minutes total, including less than 10 minutes of user interaction. In the subsequent registration step, a separate rigid body transformation for each bone is obtained by registering the neutral position dataset to each of the dynamic ones, which produces an accurate description of the motion between them. We have processed six datasets, including 3 normal and 3 pathological feet. For validation our results were compared with those obtained from 3DViewnix, a semi-automatic segmentation program, and achieved good agreement in volume overlap ratios (mean: 91.57%, standard deviation: 3.58%) for all bones. Our tool requires only 1/50 and 1/150 of the user interaction time required by 3DViewnix and NIH Image Plus, respectively, an improvement that has the potential to make joint motion analysis from MRI practical in research and clinical applications.

  2. Indoor Localization Algorithms for an Ambulatory Human Operated 3D Mobile Mapping System

    SciTech Connect

    Corso, N; Zakhor, A

    2013-12-03

    Indoor localization and mapping is an important problem with many applications such as emergency response, architectural modeling, and historical preservation. In this paper, we develop an automatic, off-line pipeline for metrically accurate, GPS-denied, indoor 3D mobile mapping using a human-mounted backpack system consisting of a variety of sensors. There are three novel contributions in our proposed mapping approach. First, we present an algorithm which automatically detects loop closure constraints from an occupancy grid map. In doing so, we ensure that constraints are detected only in locations that are well conditioned for scan matching. Secondly, we address the problem of scan matching with poor initial condition by presenting an outlier-resistant, genetic scan matching algorithm that accurately matches scans despite a poor initial condition. Third, we present two metrics based on the amount and complexity of overlapping geometry in order to vet the estimated loop closure constraints. By doing so, we automatically prevent erroneous loop closures from degrading the accuracy of the reconstructed trajectory. The proposed algorithms are experimentally verified using both controlled and real-world data. The end-to-end system performance is evaluated using 100 surveyed control points in an office environment and obtains a mean accuracy of 10 cm. Experimental results are also shown on three additional datasets from real world environments including a 1500 meter trajectory in a warehouse sized retail shopping center.

  3. Integration of Libration Point Orbit Dynamics into a Universal 3-D Autonomous Formation Flying Algorithm

    NASA Technical Reports Server (NTRS)

    Folta, David; Bauer, Frank H. (Technical Monitor)

    2001-01-01

    The autonomous formation flying control algorithm developed by the Goddard Space Flight Center (GSFC) for the New Millennium Program (NMP) Earth Observing-1 (EO-1) mission is investigated for applicability to libration point orbit formations. In the EO-1 formation-flying algorithm, control is accomplished via linearization about a reference transfer orbit with a state transition matrix (STM) computed from state inputs. The effect of libration point orbit dynamics on this algorithm architecture is explored via computation of STMs using the flight proven code, a monodromy matrix developed from a N-body model of a libration orbit, and a standard STM developed from the gravitational and coriolis effects as measured at the libration point. A comparison of formation flying Delta-Vs calculated from these methods is made to a standard linear quadratic regulator (LQR) method. The universal 3-D approach is optimal in the sense that it can be accommodated as an open-loop or closed-loop control using only state information.

  4. Computer-assisted liver tumor surgery using a novel semiautomatic and a hybrid semiautomatic segmentation algorithm.

    PubMed

    Zygomalas, Apollon; Karavias, Dionissios; Koutsouris, Dimitrios; Maroulis, Ioannis; Karavias, Dimitrios D; Giokas, Konstantinos; Megalooikonomou, Vasileios

    2016-05-01

    We developed a medical image segmentation and preoperative planning application which implements a semiautomatic and a hybrid semiautomatic liver segmentation algorithm. The aim of this study was to evaluate the feasibility of computer-assisted liver tumor surgery using these algorithms which are based on thresholding by pixel intensity value from initial seed points. A random sample of 12 patients undergoing elective high-risk hepatectomies at our institution was prospectively selected to undergo computer-assisted surgery using our algorithms (June 2013-July 2014). Quantitative and qualitative evaluation was performed. The average computer analysis time (segmentation, resection planning, volumetry, visualization) was 45 min/dataset. The runtime for the semiautomatic algorithm was <0.2 s/slice. Liver volumetric segmentation using the hybrid method was achieved in 12.9 s/dataset (SD ± 6.14). Mean similarity index was 96.2 % (SD ± 1.6). The future liver remnant volume calculated by the application showed a correlation of 0.99 to that calculated using manual boundary tracing. The 3D liver models and the virtual liver resections had an acceptable coincidence with the real intraoperative findings. The patient-specific 3D models produced using our semiautomatic and hybrid semiautomatic segmentation algorithms proved to be accurate for the preoperative planning in liver tumor surgery and effectively enhanced the intraoperative medical image guidance.

  5. Segmentation, surface rendering, and surface simplification of 3-D skull images for the repair of a large skull defect

    NASA Astrophysics Data System (ADS)

    Wan, Weibing; Shi, Pengfei; Li, Shuguang

    2009-10-01

    Given the potential demonstrated by research into bone-tissue engineering, the use of medical image data for the rapid prototyping (RP) of scaffolds is a subject worthy of research. Computer-aided design and manufacture and medical imaging have created new possibilities for RP. Accurate and efficient design and fabrication of anatomic models is critical to these applications. We explore the application of RP computational methods to the repair of a pediatric skull defect. The focus of this study is the segmentation of the defect region seen in computerized tomography (CT) slice images of this patient's skull and the three-dimensional (3-D) surface rendering of the patient's CT-scan data. We see if our segmentation and surface rendering software can improve the generation of an implant model to fill a skull defect.
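
    A hedged sketch of the surface-rendering step: extracting a triangle mesh of the segmented skull from the CT volume with marching cubes. The iso-level and the synthetic volume are placeholders; the surface simplification stage mentioned in the title would follow on the resulting mesh.

```python
import numpy as np
from skimage import measure

ct_volume = np.random.rand(64, 64, 64).astype(np.float32)   # placeholder CT stack
bone_mask = (ct_volume > 0.7).astype(np.float32)            # stand-in for the segmentation

# Marching cubes on the binary segmentation; level=0.5 places the surface at the
# mask boundary. The vertices/faces can then be decimated for rapid prototyping.
verts, faces, normals, values = measure.marching_cubes(bone_mask, level=0.5)
```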

  6. Brightness-compensated 3-D optical flow algorithm for monitoring cochlear motion patterns

    NASA Astrophysics Data System (ADS)

    von Tiedemann, Miriam; Fridberger, Anders; Ulfendahl, Mats; de Monvel, Jacques Boutet

    2010-09-01

    A method for three-dimensional motion analysis designed for live cell imaging by fluorescence confocal microscopy is described. The approach is based on optical flow computation and takes into account brightness variations in the image scene that are not due to motion, such as photobleaching or fluorescence variations that may reflect changes in cellular physiology. The 3-D optical flow algorithm allowed almost perfect motion estimation on noise-free artificial sequences, and performed with a relative error of <10% on noisy images typical of real experiments. The method was applied to a series of 3-D confocal image stacks from an in vitro preparation of the guinea pig cochlea. The complex motions caused by slow pressure changes in the cochlear compartments were quantified. At the surface of the hearing organ, the largest motion component was the transverse one (normal to the surface), but significant radial and longitudinal displacements were also present. The outer hair cell displayed larger radial motion at their basolateral membrane than at their apical surface. These movements reflect mechanical interactions between different cellular structures, which may be important for communicating sound-evoked vibrations to the sensory cells. A better understanding of these interactions is important for testing realistic models of cochlear mechanics.

  7. Benchmarking of state-of-the-art needle detection algorithms in 3D ultrasound data volumes

    NASA Astrophysics Data System (ADS)

    Pourtaherian, Arash; Zinger, Svitlana; de With, Peter H. N.; Korsten, Hendrikus H. M.; Mihajlovic, Nenad

    2015-03-01

    Ultrasound-guided needle interventions are widely practiced in medical diagnostics and therapy, i.e. for biopsy guidance, regional anesthesia or for brachytherapy. Needle guidance using 2D ultrasound can be very challenging due to the poor needle visibility and the limited field of view. Since 3D ultrasound transducers are becoming more widely used, needle guidance can be improved and simplified with appropriate computer-aided analyses. In this paper, we compare two state-of-the-art 3D needle detection techniques: a technique based on line filtering from the literature and a system employing Gabor transformation. Both algorithms utilize supervised classification to pre-select candidate needle voxels in the volume and then fit a model of the needle on the selected voxels. The major differences between the two approaches are in extracting the feature vectors for classification and selecting the criterion for fitting. We evaluate the performance of the two techniques using manually-annotated ground truth in several ex-vivo situations of different complexities, containing three different needle types with various insertion angles. This extensive evaluation provides better understanding of the limitations and advantages of each technique under different acquisition conditions, leading to the development of improved techniques for more reliable and accurate localization. Benchmarking results show that the Gabor features are better capable of distinguishing the needle voxels in all datasets. Moreover, it is shown that the complete processing chain of the Gabor-based method outperforms the line filtering in accuracy and stability of the detection results.
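
    An illustrative RANSAC-style line fit to pre-classified candidate needle voxels, standing in for the model-fitting stage both compared methods share; the inlier tolerance, iteration count and synthetic points are arbitrary choices, not the benchmarked implementations.

```python
import numpy as np

def ransac_line_3d(points, n_iter=200, tol=1.0, rng=None):
    """points: (N, 3) candidate voxel coordinates; returns (point, direction, inliers)."""
    rng = np.random.default_rng(rng)
    best = (None, None, np.zeros(len(points), dtype=bool))
    for _ in range(n_iter):
        p1, p2 = points[rng.choice(len(points), size=2, replace=False)]
        d = p2 - p1
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        # Distance of every point to the candidate line through p1 along d.
        diff = points - p1
        dist = np.linalg.norm(diff - np.outer(diff @ d, d), axis=1)
        inliers = dist < tol
        if inliers.sum() > best[2].sum():
            best = (p1, d, inliers)
    return best

pts = np.column_stack([np.linspace(0, 50, 60)] * 3) + np.random.randn(60, 3) * 0.3
point, direction, inliers = ransac_line_3d(pts)
```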

  8. Brightness-compensated 3-D optical flow algorithm for monitoring cochlear motion patterns.

    PubMed

    von Tiedemann, Miriam; Fridberger, Anders; Ulfendahl, Mats; de Monvel, Jacques Boutet

    2010-01-01

    A method for three-dimensional motion analysis designed for live cell imaging by fluorescence confocal microscopy is described. The approach is based on optical flow computation and takes into account brightness variations in the image scene that are not due to motion, such as photobleaching or fluorescence variations that may reflect changes in cellular physiology. The 3-D optical flow algorithm allowed almost perfect motion estimation on noise-free artificial sequences, and performed with a relative error of <10% on noisy images typical of real experiments. The method was applied to a series of 3-D confocal image stacks from an in vitro preparation of the guinea pig cochlea. The complex motions caused by slow pressure changes in the cochlear compartments were quantified. At the surface of the hearing organ, the largest motion component was the transverse one (normal to the surface), but significant radial and longitudinal displacements were also present. The outer hair cell displayed larger radial motion at their basolateral membrane than at their apical surface. These movements reflect mechanical interactions between different cellular structures, which may be important for communicating sound-evoked vibrations to the sensory cells. A better understanding of these interactions is important for testing realistic models of cochlear mechanics.

  9. The development of a scalable parallel 3-D CFD algorithm for turbomachinery. M.S. Thesis Final Report

    NASA Technical Reports Server (NTRS)

    Luke, Edward Allen

    1993-01-01

    Two algorithms capable of computing a transonic 3-D inviscid flow field about rotating machines are considered for parallel implementation. During the study of these algorithms, a significant new method of measuring the performance of parallel algorithms is developed. The theory that supports this new method creates an empirical definition of scalable parallel algorithms that is used to produce quantifiable evidence that a scalable parallel application was developed. The implementation of the parallel application and an automated domain decomposition tool are also discussed.

  10. Multiscale Hessian fracture filtering for the enhancement and segmentation of narrow fractures in 3D image data

    NASA Astrophysics Data System (ADS)

    Voorn, Maarten; Exner, Ulrike; Rath, Alexander

    2013-08-01

    Narrow fractures—or more generally narrow planar features—can be difficult to extract from 3D image datasets, and available methods are often unsuitable or inapplicable. A proper extraction is however in many cases required for visualisation or future processing steps. We use the example of 3D X-ray micro-Computed Tomography (µCT) data of narrow fractures through core samples from a dolomitic hydrocarbon reservoir (Hauptdolomit below the Vienna Basin, Austria). The extraction and eventual binary segmentation of the fractures in these datasets is required for porosity determination and permeability modelling. In this paper, we present the multiscale Hessian fracture filtering technique for extracting narrow fractures from a 3D image dataset. The second-order information in the Hessian matrix is used to distinguish planar features from the dataset. Different results are obtained for different scales of analysis in the calculation of the Hessian matrix. By combining these various scales of analysis, the final output is multiscale; i.e. narrow fractures of different apertures are detected. The presented technique is implemented and made available as macro code for the multiplatform public domain image processing software ImageJ. Serial processing of blocks of data ensures that full 3D processing of relatively large datasets (example dataset: 1670×1670×1546 voxels) is possible on a desktop computer. Here, several hours of processing time are required, but interaction is only required in the beginning. Various post-processing steps (calibration, connectivity filtering, and binarisation) can be applied, depending on the goals of research. The multiscale Hessian fracture filtering technique provides very good results for extracting the narrow fractures in our example dataset, despite several drawbacks inherent to the use of the Hessian matrix. Although we apply the technique on a specific example, the general implementation makes the filter suitable for different
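
    A rough, single-scale sketch of Hessian-based planar-feature enhancement built from second Gaussian derivatives; the sheetness measure below is a simplified stand-in for the multiscale filter described in the paper, and the scale, threshold and synthetic volume are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage as ndi

def sheetness(volume, sigma=1.5):
    # Full 3x3 Hessian from Gaussian second derivatives at scale sigma.
    H = np.empty(volume.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            order = [0, 0, 0]
            order[i] += 1
            order[j] += 1
            H[..., i, j] = ndi.gaussian_filter(volume, sigma, order=order)
    eigvals = np.linalg.eigvalsh(H)                    # ascending eigenvalues per voxel
    mags = np.sort(np.abs(eigvals), axis=-1)           # |l1| <= |l2| <= |l3|
    l2, l3 = mags[..., 1], mags[..., 2]
    # Plane-like voxels have one dominant eigenvalue and two small ones.
    return np.where(l3 > 0, (l3 - l2) / (l3 + 1e-12), 0.0)

volume = np.random.rand(48, 48, 48).astype(np.float32)  # placeholder micro-CT block
planar_response = sheetness(volume, sigma=1.5)
fracture_mask = planar_response > 0.7                   # illustrative binarisation
```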

  11. Thrust fault segmentation and downward fault propagation in accretionary wedges: New Insights from 3D seismic reflection data

    NASA Astrophysics Data System (ADS)

    Orme, Haydn; Bell, Rebecca; Jackson, Christopher

    2016-04-01

    The shallow parts of subduction megathrust faults are typically thought to be aseismic and incapable of propagating seismic rupture. The 2011 Tohoku-Oki earthquake, however, ruptured all the way to the trench, proving that in some locations rupture can propagate through the accretionary wedge. An improved understanding of the structural character and physical properties of accretionary wedges is therefore crucial to begin to assess why such anomalously shallow seismic rupture occurs. Despite its importance, we know surprisingly little regarding the 3D geometry and kinematics of thrust network development in accretionary prisms, largely due to a lack of 3D seismic reflection data providing high-resolution, 3D images of entire networks. Thus our current understanding is largely underpinned by observations from analogue and numerical modelling, with limited observational data from natural examples. In this contribution we use PSDM, 3D seismic reflection data from the Nankai margin (3D Muroto dataset, available from the UTIG Academic Seismic Portal, Marine Geoscience Data System) to examine how imbricate thrust fault networks evolve during accretionary wedge growth. We unravel the evolution of faults within the protothrust and imbricate thrust zones by interpreting multiple horizons across faults and measuring fault displacement and fold amplitude along-strike; by doing this, we are able to investigate the three dimensional accrual of strain. We document a number of local displacement minima along-strike of faults, suggesting that, the protothrust and imbricate thrusts developed from the linkage of smaller, previously isolated fault segments. Although we often assume imbricate faults are likely to have propagated upwards from the décollement we show strong evidence for fault nucleation at shallow depths and downward propagation to intersect the décollement. The complex fault interactions documented here have implications for hydraulic compartmentalisation and pore

  12. 3D resistivity inversion using an improved Genetic Algorithm based on control method of mutation direction

    NASA Astrophysics Data System (ADS)

    Liu, B.; Li, S. C.; Nie, L. C.; Wang, J.; L, X.; Zhang, Q. S.

    2012-12-01

    The traditional inversion method is the most commonly used procedure for three-dimensional (3D) resistivity inversion; it linearizes the problem and solves it by iterations. However, its accuracy often depends on the initial model, which can trap the inversion in local optima or even cause a poor result. Non-linear methods are a feasible way to eliminate the dependence on the initial model. However, for large problems such as 3D resistivity inversion with more than a thousand inversion parameters, the main challenges of non-linear methods are premature convergence and quite low search efficiency. To deal with these problems, we present an improved Genetic Algorithm (GA) method. In the improved GA method, a smoothness constraint and an inequality constraint are both applied to the objective function, by which the degree of non-uniqueness and ill-conditioning is decreased. Several measures from the literature are adopted to maintain the diversity and stability of the GA, e.g. a real-coded representation and adaptive adjustment of the crossover and mutation probabilities. A method for generating an approximately uniform initial population is then proposed, which produces a uniformly distributed initial generation and eliminates the dependence on the initial model. Further, a mutation direction control method is presented based on a joint algorithm in which the linearization method is embedded in the GA. The update vector produced by the linearization method is used as the mutation increment to maintain a better search direction than the traditional GA with an uncontrolled mutation operation. By this method, the mutation direction is optimized and the search efficiency is greatly improved. The performance of the improved GA is evaluated by comparison with traditional inversion results in a synthetic example and with drilling columnar sections in a practical example. The synthetic and practical examples illustrate that with the improved GA method we can eliminate

  13. Segmentation of the Aortic Valve Apparatus in 3D Echocardiographic Images: Deformable Modeling of a Branching Medial Structure.

    PubMed

    Pouch, Alison M; Tian, Sijie; Takabe, Manabu; Wang, Hongzhi; Yuan, Jiefu; Cheung, Albert T; Jackson, Benjamin M; Gorman, Joseph H; Gorman, Robert C; Yushkevich, Paul A

    2015-01-01

    3D echocardiographic (3DE) imaging is a useful tool for assessing the complex geometry of the aortic valve apparatus. Segmentation of this structure in 3DE images is a challenging task that benefits from shape-guided deformable modeling methods, which enable inter-subject statistical shape comparison. Prior work demonstrates the efficacy of using continuous medial representation (cm-rep) as a shape descriptor for valve leaflets. However, its application to the entire aortic valve apparatus is limited since the structure has a branching medial geometry that cannot be explicitly parameterized in the original cm-rep framework. In this work, we show that the aortic valve apparatus can be accurately segmented using a new branching medial modeling paradigm. The segmentation method achieves a mean boundary displacement of 0.6 ± 0.1 mm (approximately one voxel) relative to manual segmentation on 11 3DE images of normal open aortic valves. This study demonstrates a promising approach for quantitative 3DE analysis of aortic valve morphology.

  14. Performance analysis of different surface reconstruction algorithms for 3D reconstruction of outdoor objects from their digital images.

    PubMed

    Maiti, Abhik; Chakravarty, Debashish

    2016-01-01

    3D reconstruction of geo-objects from their digital images is a time-efficient and convenient way of studying the structural features of the object being modelled. This paper presents a 3D reconstruction methodology which can be used to generate a photo-realistic 3D watertight surface of different irregularly shaped objects from digital image sequences of the objects. The 3D reconstruction approach described here is robust and simple, and can be readily used to reconstruct a watertight 3D surface of any object from its digital image sequence. Here, digital images of different objects are used to build sparse, and subsequently dense, 3D point clouds of the objects. These image-obtained point clouds are then used for the generation of photo-realistic 3D surfaces, using different surface reconstruction algorithms such as Poisson reconstruction and the Ball-pivoting algorithm. Different control parameters of these algorithms are identified which affect the quality and computation time of the reconstructed 3D surface. The effects of these control parameters on the generation of a 3D surface from point clouds of different density are studied. It is shown that the reconstructed surface quality of Poisson reconstruction depends significantly on the Samples per node (SN) value, with greater SN values resulting in better quality surfaces. Also, the quality of the 3D surface generated using the Ball-pivoting algorithm is found to depend strongly on the Clustering radius and Angle threshold values. The results obtained from this study give the reader a valuable insight into the effects of different control parameters on the reconstructed surface quality.
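
    A hedged illustration of running the two surface reconstruction algorithms discussed above with Open3D, assuming that library is acceptable; note that Open3D exposes the Poisson octree depth but not necessarily the samples-per-node parameter studied in the paper, so this is only an approximate analogue, and the point cloud, radii and depth are placeholders.

```python
import numpy as np
import open3d as o3d

pts = np.random.rand(2000, 3)                       # placeholder image-derived point cloud
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(pts)
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

# Poisson reconstruction: octree depth trades surface detail for computation time.
poisson_mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=8)

# Ball-pivoting: the ball radii play a role analogous to the clustering-radius parameter.
radii = o3d.utility.DoubleVector([0.02, 0.04, 0.08])
bpa_mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(pcd, radii)
```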

  15. Automated torso organ segmentation from 3D CT images using structured perceptron and dual decomposition

    NASA Astrophysics Data System (ADS)

    Nimura, Yukitaka; Hayashi, Yuichiro; Kitasaka, Takayuki; Mori, Kensaku

    2015-03-01

    This paper presents a method for torso organ segmentation from abdominal CT images using a structured perceptron and dual decomposition. Many methods have been proposed to enable automated extraction of organ regions from volumetric medical images. However, their empirical parameters must be adjusted to obtain precise organ regions. This paper proposes an organ segmentation method using structured output learning. Our method utilizes a graphical model and binary features which represent the relationship between voxel intensities and organ labels. We optimize the weights of the graphical model by structured perceptron training and estimate the best organ label for a given image by dynamic programming and dual decomposition. The experimental results revealed that the proposed method can extract organ regions automatically using structured output learning. The error of organ label estimation was 4.4%. The DICE coefficients of the left lung, right lung, heart, liver, spleen, pancreas, left kidney, right kidney, and gallbladder were 0.91, 0.95, 0.77, 0.81, 0.74, 0.08, 0.83, 0.84, and 0.03, respectively.

  16. New segmentation algorithm for detecting tiny objects

    NASA Astrophysics Data System (ADS)

    Sun, Han; Yang, Jingyu; Ren, Mingwu; Gao, Jian-zhen

    2001-09-01

    Road cracks in the highway surface are very dangerous to traffic and should be found and repaired as early as possible, so we designed a system for automatically detecting cracks in the highway surface. This system has several key steps. The first step, image recording, requires a high-quality camera because of the high vehicle speed. In addition, the original data volume is very large, so it requires ample storage media and effective compression. As the illumination is strongly affected by the environment, some preprocessing, such as image reconstruction and enhancement, is essential. Because the cracks are tiny, segmentation is rather difficult. This paper proposes a new segmentation method to detect such tiny cracks, even those only 2 mm wide. The algorithm first performs edge detection to obtain seeds for the subsequent line growing, then deletes the false candidates and extracts the crack information. It is sufficiently accurate and fast.

  17. Automatic Detection, Segmentation and Classification of Retinal Horizontal Neurons in Large-scale 3D Confocal Imagery

    SciTech Connect

    Karakaya, Mahmut; Kerekes, Ryan A; Gleason, Shaun Scott; Martins, Rodrigo; Dyer, Michael

    2011-01-01

    Automatic analysis of neuronal structure from wide-field-of-view 3D image stacks of retinal neurons is essential for statistically characterizing neuronal abnormalities that may be causally related to neural malfunctions or may be early indicators for a variety of neuropathies. In this paper, we study classification of neuron fields in large-scale 3D confocal image stacks, a challenging neurobiological problem because of the low spatial resolution imagery and presence of intertwined dendrites from different neurons. We present a fully automated, four-step processing approach for neuron classification with respect to the morphological structure of their dendrites. In our approach, we first localize each individual soma in the image by using morphological operators and active contours. By using each soma position as a seed point, we automatically determine an appropriate threshold to segment dendrites of each neuron. We then use skeletonization and network analysis to generate the morphological structures of segmented dendrites, and shape-based features are extracted from network representations of each neuron to characterize the neuron. Based on qualitative results and quantitative comparisons, we show that we are able to automatically compute relevant features that clearly distinguish between normal and abnormal cases for postnatal day 6 (P6) horizontal neurons.

  18. A region-appearance-based adaptive variational model for 3D liver segmentation

    SciTech Connect

    Peng, Jialin; Dong, Fangfang; Chen, Yunmei; Kong, Dexing

    2014-04-15

    Purpose: Liver segmentation from computed tomography images is a challenging task owing to pixel intensity overlapping, ambiguous edges, and complex backgrounds. The authors address this problem with a novel active surface scheme, which minimizes an energy functional combining both edge- and region-based information. Methods: In this semiautomatic method, the evolving surface is principally attracted to strong edges but is facilitated by the region-based information where edge information is missing. As avoiding oversegmentation is the primary challenge, the authors take into account multiple features and appearance context information. Discriminative cues, such as multilayer consecutiveness and local organ deformation are also implicitly incorporated. Case-specific intensity and appearance constraints are included to cope with the typically large appearance variations over multiple images. Spatially adaptive balancing weights are employed to handle the nonuniformity of image features. Results: Comparisons and validations on difficult cases showed that the authors’ model can effectively discriminate the liver from adhering background tissues. Boundaries weak in gradient or with no local evidence (e.g., small edge gaps or parts with similar intensity to the background) were delineated without additional user constraint. With an average surface distance of 0.9 mm and an average volume overlap of 93.9% on the MICCAI data set, the authors’ model outperformed most state-of-the-art methods. Validations on eight volumes with different initial conditions had segmentation score variances mostly less than unity. Conclusions: The proposed model can efficiently delineate ambiguous liver edges from complex tissue backgrounds with reproducibility. Quantitative validations and comparative results demonstrate the accuracy and efficacy of the model.

  19. Ellipsoid Segmentation Model for Analyzing Light-Attenuated 3D Confocal Image Stacks of Fluorescent Multi-Cellular Spheroids

    PubMed Central

    Barbier, Michaël; Jaensch, Steffen; Cornelissen, Frans; Vidic, Suzana; Gjerde, Kjersti; de Hoogt, Ronald; Graeser, Ralph; Gustin, Emmanuel; Chong, Yolanda T.

    2016-01-01

In oncology, two-dimensional in-vitro culture models are the standard test beds for the discovery and development of cancer treatments, but in the last decades evidence has emerged that such models have low predictive value for clinical efficacy. They are therefore increasingly complemented by more physiologically relevant 3D models, such as spheroid micro-tumor cultures. If suitable fluorescent labels are applied, confocal 3D image stacks can characterize the structure of such volumetric cultures and, for example, cell proliferation. However, several issues hamper accurate analysis. In particular, signal attenuation within the tissue of the spheroids prevents the acquisition of a complete image for spheroids over 100 micrometers in diameter, and quantitative analysis of large 3D image data sets is challenging, creating a need for methods that can be applied to large-scale experiments and that account for these impeding factors. We present a robust, computationally inexpensive 2.5D method for the segmentation of spheroid cultures and for counting proliferating cells within them. The spheroids are assumed to be approximately ellipsoid in shape. They are identified from information present in the Maximum Intensity Projection (MIP) and the corresponding height view, also known as the Z-buffer. The method includes a compensation for signal attenuation and alerts the user when potential bias-introducing factors cannot be compensated for. PMID:27303813
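
    A short sketch of the 2.5D starting point used above: the maximum intensity projection plus the corresponding Z-buffer, followed by a rough in-plane ellipse fit on the projection. The Otsu threshold and the region-property fit are illustrative assumptions, not the published ellipsoid model or attenuation compensation.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def mip_and_zbuffer(stack):
    """Maximum intensity projection and height view (Z-buffer) of a
    3D image stack shaped (z, y, x)."""
    mip = stack.max(axis=0)
    zbuf = stack.argmax(axis=0)      # slice index of the brightest voxel
    return mip, zbuf

def ellipse_from_mip(mip):
    """Rough in-plane ellipse parameters of the largest bright object."""
    mask = mip > threshold_otsu(mip)
    regions = regionprops(label(mask))
    r = max(regions, key=lambda p: p.area)
    return r.centroid, r.major_axis_length / 2, r.minor_axis_length / 2

# Example with synthetic data:
stack = np.random.rand(50, 256, 256)
mip, zbuf = mip_and_zbuffer(stack)
```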

  20. Fast Semantic Segmentation of 3d Point Clouds with Strongly Varying Density

    NASA Astrophysics Data System (ADS)

    Hackel, Timo; Wegner, Jan D.; Schindler, Konrad

    2016-06-01

We describe an effective and efficient method for point-wise semantic classification of 3D point clouds. The method can handle unstructured and inhomogeneous point clouds such as those derived from static terrestrial LiDAR or photogrammetric reconstruction, and it is computationally efficient, making it possible to process point clouds with many millions of points in a matter of minutes. The key issue, both to cope with strong variations in point density and to bring down computation time, turns out to be careful handling of neighborhood relations. By choosing appropriate definitions of a point's (multi-scale) neighborhood, we obtain a feature set that is both expressive and fast to compute. We evaluate our classification method both on benchmark data from a mobile mapping platform and on a variety of large, terrestrial laser scans with greatly varying point density. The proposed feature set outperforms the state of the art with respect to per-point classification accuracy, while at the same time being much faster to compute.
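
    A simplified sketch of per-point covariance (eigenvalue) features computed over several neighborhood sizes, the kind of multi-scale descriptor discussed above. It uses fixed query radii for clarity; the published method instead builds neighborhoods over a pyramid of down-sampled clouds, which is not reproduced here, and the radii are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def covariance_features(points, radii=(0.25, 0.5, 1.0, 2.0)):
    """Per-point eigenvalue features (linearity, planarity, scattering)
    computed over several neighborhood radii; 'points' is an (N, 3) array."""
    tree = cKDTree(points)
    feats = []
    for r in radii:
        neighbors = tree.query_ball_point(points, r)
        f = np.zeros((len(points), 3))
        for i, idx in enumerate(neighbors):
            if len(idx) < 3:
                continue
            p = points[idx] - points[idx].mean(axis=0)
            w = np.linalg.eigvalsh(p.T @ p / len(idx))[::-1]  # l1 >= l2 >= l3
            s = w.sum() + 1e-12
            f[i] = [(w[0] - w[1]) / s, (w[1] - w[2]) / s, w[2] / s]
        feats.append(f)
    return np.hstack(feats)
```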

  1. Research on conflict detection algorithm in 3D visualization environment of urban rail transit line

    NASA Astrophysics Data System (ADS)

    Wang, Li; Xiong, Jing; You, Kuokuo

    2017-03-01

In this paper, a collision detection method is introduced for rapidly extracting, in a 3D visualization environment, the buildings that conflict with the track area; three-dimensional models of underground buildings and urban rail lines are constructed for this purpose. According to the characteristics of the buildings, a combined CSG and B-rep representation is used for modeling. Building on these modeling characteristics, the paper proposes an AABB hierarchical bounding volume method as a fast first-pass conflict check to improve detection efficiency, followed by a rapid triangle-triangle intersection test to confirm the conflict and finally determine whether a building collides with the track area. With this algorithm, buildings colliding with the influence area of the track line can be extracted quickly, which helps in designing the line, choosing the best route, and calculating the cost of land acquisition in the three-dimensional visualization environment.
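
    A minimal sketch of the broad-phase AABB test mentioned above: two axis-aligned boxes intersect if and only if their extents overlap on every axis. The example geometry and function names are hypothetical; only pairs passing this cheap test would be handed to an exact triangle-triangle intersection routine (not shown).

```python
import numpy as np

def aabb(vertices):
    """Axis-aligned bounding box of a vertex array shaped (N, 3)."""
    return vertices.min(axis=0), vertices.max(axis=0)

def aabb_overlap(box_a, box_b):
    """Broad-phase test: two AABBs intersect iff they overlap on every axis."""
    (min_a, max_a), (min_b, max_b) = box_a, box_b
    return bool(np.all(min_a <= max_b) and np.all(min_b <= max_a))

building = np.array([[0.0, 0.0, 0.0], [2.0, 1.0, 3.0]])
corridor = np.array([[1.5, 0.5, 0.0], [4.0, 2.0, 3.0]])
print(aabb_overlap(aabb(building), aabb(corridor)))  # True
```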

  2. Structural stereo matching of Laplacian-of-Gaussian contour segments for 3D perception

    NASA Technical Reports Server (NTRS)

    Boyer, K. L.; Sotak, G. E., Jr.

    1989-01-01

The stereo correspondence problem is solved using Laplacian-of-Gaussian zero-crossing contours as a source of primitives for structural stereopsis, as opposed to traditional point-based algorithms. Up to 74 percent of candidate zero-crossing points are matched on 240 x 246 images at small scales and large ranges of disparity, without coarse-to-fine tracking and without precise knowledge of the epipolar geometry. This approach should prove particularly useful in recovering the epipolar geometry automatically for stereo pairs for which it is unavailable a priori. Such situations occur in the extraction of terrain models from stereo aerial photographs.
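
    A small sketch of how the Laplacian-of-Gaussian zero-crossing map, the primitive source used above, can be obtained; linking the crossings into contour segments and the structural matching itself are not shown. The 3x3 neighborhood sign test and the sigma value are assumptions.

```python
import numpy as np
from scipy import ndimage

def log_zero_crossings(image, sigma=2.0):
    """Binary map of Laplacian-of-Gaussian zero-crossing contours."""
    log = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
    # a pixel is marked if the LoG changes sign within its 3x3 neighborhood
    min_f = ndimage.minimum_filter(log, size=3)
    max_f = ndimage.maximum_filter(log, size=3)
    return (min_f < 0) & (max_f > 0)
```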

  3. Analysis of image thresholding segmentation algorithms based on swarm intelligence

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Lu, Kai; Gao, Yinghui; Yang, Bo

    2013-03-01

Swarm intelligence-based image thresholding segmentation algorithms play an important role in the research field of image segmentation. In this paper, we briefly introduce the theory behind four existing swarm-intelligence-based image segmentation algorithms: the fish swarm algorithm, artificial bee colony, bacteria foraging algorithm, and particle swarm optimization. Several benchmark images are then tested to show the differences among the four algorithms in segmentation accuracy, time consumption, convergence, and robustness to salt-and-pepper and Gaussian noise. Through these comparisons, this paper gives a qualitative analysis of the performance differences among the four algorithms. The conclusions provide practical guidance for real-world image segmentation.
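
    To make the idea concrete, here is a hedged sketch of one of the four families mentioned above: particle swarm optimization searching for a single grey-level threshold that maximizes the Otsu between-class variance. The inertia and acceleration constants are generic textbook values, not taken from the paper.

```python
import numpy as np

def between_class_variance(image, t):
    """Otsu-style criterion that the swarm tries to maximize."""
    fg, bg = image[image > t], image[image <= t]
    if fg.size == 0 or bg.size == 0:
        return 0.0
    w1, w2 = fg.size / image.size, bg.size / image.size
    return w1 * w2 * (fg.mean() - bg.mean()) ** 2

def pso_threshold(image, n_particles=20, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = float(image.min()), float(image.max())
    x = rng.uniform(lo, hi, n_particles)    # particle positions (thresholds)
    v = np.zeros(n_particles)               # particle velocities
    pbest = x.copy()
    pbest_val = np.array([between_class_variance(image, t) for t in x])
    gbest = pbest[pbest_val.argmax()]
    for _ in range(n_iter):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([between_class_variance(image, t) for t in x])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmax()]
    return gbest
```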

  4. Integrative multicellular biological modeling: a case study of 3D epidermal development using GPU algorithms

    PubMed Central

    2010-01-01

    Background Simulation of sophisticated biological models requires considerable computational power. These models typically integrate together numerous biological phenomena such as spatially-explicit heterogeneous cells, cell-cell interactions, cell-environment interactions and intracellular gene networks. The recent advent of programming for graphical processing units (GPU) opens up the possibility of developing more integrative, detailed and predictive biological models while at the same time decreasing the computational cost to simulate those models. Results We construct a 3D model of epidermal development and provide a set of GPU algorithms that executes significantly faster than sequential central processing unit (CPU) code. We provide a parallel implementation of the subcellular element method for individual cells residing in a lattice-free spatial environment. Each cell in our epidermal model includes an internal gene network, which integrates cellular interaction of Notch signaling together with environmental interaction of basement membrane adhesion, to specify cellular state and behaviors such as growth and division. We take a pedagogical approach to describing how modeling methods are efficiently implemented on the GPU including memory layout of data structures and functional decomposition. We discuss various programmatic issues and provide a set of design guidelines for GPU programming that are instructive to avoid common pitfalls as well as to extract performance from the GPU architecture. Conclusions We demonstrate that GPU algorithms represent a significant technological advance for the simulation of complex biological models. We further demonstrate with our epidermal model that the integration of multiple complex modeling methods for heterogeneous multicellular biological processes is both feasible and computationally tractable using this new technology. We hope that the provided algorithms and source code will be a starting point for modelers to

  5. Multiview and light-field reconstruction algorithms for 360° multiple-projector-type 3D display.

    PubMed

    Zhong, Qing; Peng, Yifan; Li, Haifeng; Su, Chen; Shen, Weidong; Liu, Xu

    2013-07-01

Both multiview and light-field reconstructions are proposed for a multiple-projector 3D display system. To compare the performance of the reconstruction algorithms in the same system, an optimized multiview reconstruction algorithm with sub-view-zones (SVZs) is proposed. The algorithm divides the conventional view zones of a multiview display into several SVZs and allocates more view images to them. The optimized reconstruction algorithm unifies the conventional multiview and light-field reconstruction algorithms, which makes it possible to quantify the change in performance as multiview reconstruction transitions to light-field reconstruction. A prototype consisting of 60 projectors with an arc diffuser as its screen is constructed to verify the algorithms. Comparison of different SVZ configurations shows that light-field reconstruction provides large-scale 3D images with the smoothest motion parallax; thus it may provide better overall performance for large-scale 360° display than multiview reconstruction.

  6. CT and MRI Assessment and Characterization Using Segmentation and 3D Modeling Techniques: Applications to Muscle, Bone and Brain.

    PubMed

Gargiulo, Paolo; Helgason, Thordur; Ramon, Ceon; Jónsson, Halldór Jr; Carraro, Ugo

    2014-03-31

This paper reviews the novel use of CT and MRI data and image processing tools to segment and reconstruct tissue images in 3D to determine characteristics of muscle, bone and brain. This allows the study and simulation of the structural changes occurring in healthy and pathological conditions as well as in response to clinical treatments. Here we report the application of this methodology to evaluate and quantify: 1. progression of atrophy in human muscle subsequent to permanent lower motor neuron (LMN) denervation, 2. muscle recovery as induced by functional electrical stimulation (FES), 3. bone quality in patients undergoing total hip replacement and 4. modeling of the electrical activity of the brain. Study 1: CT data and segmentation techniques were used to quantify changes in muscle density and composition by associating the Hounsfield unit values of muscle, adipose and fibrous connective tissue with different colors. This method was employed to monitor patients with permanent LMN denervation of the lower extremities under two different conditions: electrically stimulated and not stimulated. Study 2: CT data and segmentation techniques were again employed; in this work, however, we assessed bone and muscle conditions in the pre-operative CT scans of patients scheduled to undergo total hip replacement. The overall anatomical structure, the bone mineral density (BMD) and the compactness of the quadriceps muscles and the proximal femur were computed to provide a more complete view for surgeons when deciding which implant technology to use. Further, a finite element analysis provided a map of the strains around the proximal femur socket when subjected to the typical stresses caused by implant press fitting. Study 3 describes a method to model the electrical behavior of the human brain using segmented MR images. The aim of the work is to use these models to predict the electrical activity of the human brain under normal and pathological conditions by
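
    A minimal sketch of the Hounsfield-unit color-coding idea used in Study 1: each voxel is assigned a tissue class by windowing its HU value. The specific HU ranges below are hypothetical placeholders; the exact windows used in the study are not given in the abstract.

```python
import numpy as np

# Hypothetical Hounsfield-unit windows (lo, hi) per tissue class.
TISSUE_WINDOWS = {
    "adipose": (-190, -30),
    "fibrous": (-29, 40),
    "muscle":  (41, 100),
}

def classify_tissue(ct_slice):
    """Label each voxel of a CT slice (values in HU) by tissue class index:
    0 = other, 1 = adipose, 2 = fibrous connective, 3 = muscle."""
    labels = np.zeros(ct_slice.shape, dtype=np.uint8)
    for k, (_name, (lo, hi)) in enumerate(TISSUE_WINDOWS.items(), start=1):
        labels[(ct_slice >= lo) & (ct_slice <= hi)] = k
    return labels
```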

  7. Assessment of DICOM Viewers Capable of Loading Patient-specific 3D Models Obtained by Different Segmentation Platforms in the Operating Room.

    PubMed

    Lo Presti, Giuseppe; Carbone, Marina; Ciriaci, Damiano; Aramini, Daniele; Ferrari, Mauro; Ferrari, Vincenzo

    2015-10-01

Patient-specific 3D models obtained by the segmentation of volumetric diagnostic images play an increasingly important role in surgical planning. Surgeons use the virtual models reconstructed through segmentation to plan challenging surgeries. Many solutions exist for the different anatomical districts and surgical interventions. The possibility to bring the 3D virtual reconstructions together with the native radiological images into the operating room is essential for fostering the use of intraoperative planning. To the best of our knowledge, current DICOM viewers are not able to simultaneously connect to the picture archiving and communication system (PACS) and import 3D models generated by external platforms to allow straightforward integration in the operating room. A total of 26 DICOM viewers were evaluated: 22 open source and four commercial. Two DICOM viewers can connect to PACS and import segmentations achieved by other applications: Synapse 3D® by Fujifilm and OsiriX by the University of Geneva. We developed a software network that converts Visualization Toolkit (VTK) format 3D model segmentations, obtained by any software platform, to a DICOM format that can be displayed using OsiriX or Synapse 3D. Both OsiriX and Synapse 3D were suitable for our purposes and had comparable performance. Although Synapse 3D loads native images and segmentations faster, the main benefits of OsiriX are its user-friendly loading of elaborated images and it being both free of charge and open source.

  8. Skeletonization algorithm-based blood vessel quantification using in vivo 3D photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Meiburger, K. M.; Nam, S. Y.; Chung, E.; Suggs, L. J.; Emelianov, S. Y.; Molinari, F.

    2016-11-01

Blood vessels are the only system that provides nutrients and oxygen to every part of the body. Many diseases have significant effects on blood vessel formation, so the vascular network can serve as a cue for assessing malignant tumors and ischemic tissues. Various imaging techniques can visualize blood vessel structure, but their applications are often constrained by high cost, the need for contrast agents, ionizing radiation, or a combination of these. Photoacoustic imaging combines the high contrast and spectroscopy-based specificity of optical imaging with the high spatial resolution of ultrasound imaging, and image contrast depends on optical absorption. This enables the detection of light-absorbing chromophores such as hemoglobin with a greater penetration depth compared to purely optical techniques. We present here a skeletonization algorithm for vessel architectural analysis using non-invasive photoacoustic 3D images acquired without the administration of any exogenous contrast agents. 3D photoacoustic images were acquired in rats (n = 4) at two different time points: before and after a burn surgery. A skeletonization technique based on the application of a vesselness filter and medial axis extraction is proposed to extract the vessel structure from the image data, and six vascular parameters (number of vascular trees (NT), vascular density (VD), number of branches (NB), 2D distance metric (DM), inflection count metric (ICM), and sum of angles metric (SOAM)) were calculated from the skeleton. The parameters were compared (1) in locations with and without the burn wound on the same day and (2) in the same anatomic location before (control) and after the burn surgery. Four out of the six descriptors were statistically different (VD, NB, DM, ICM, p < 0.05) when comparing two anatomic locations on the same day and when considering the same anatomic location at two separate times (i.e. before and after burn surgery). The study demonstrates an
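
    A rough sketch of the two core steps named above, vesselness filtering followed by centerline extraction, using generic scikit-image filters rather than the authors' implementation; the threshold fraction and the simple density descriptor are assumptions. Older scikit-image versions expose a separate skeletonize_3d function instead.

```python
import numpy as np
from skimage.filters import frangi
from skimage.morphology import skeletonize

def vessel_skeleton(volume, threshold=0.5):
    """Enhance bright tubular structures with a Hessian-based vesselness
    filter, binarize, and reduce the vessels to a one-voxel-wide centerline."""
    vesselness = frangi(volume, black_ridges=False)   # bright vessels on dark background
    mask = vesselness > threshold * vesselness.max()
    return mask, skeletonize(mask)

def vascular_density(skeleton, mask):
    """A simple descriptor: centerline voxels per unit of segmented volume."""
    return skeleton.sum() / max(mask.sum(), 1)
```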

  9. 3D Kirchhoff depth migration algorithm: A new scalable approach for parallelization on multicore CPU based cluster

    NASA Astrophysics Data System (ADS)

    Rastogi, Richa; Londhe, Ashutosh; Srivastava, Abhishek; Sirasala, Kirannmayi M.; Khonde, Kiran

    2017-03-01

In this article, a new scalable 3D Kirchhoff depth migration algorithm is presented for a state-of-the-art multicore CPU-based cluster. Parallelization of 3D Kirchhoff depth migration is challenging due to its high demand for compute time, memory, storage and I/O, along with the need for their effective management. The most resource-intensive modules of the algorithm are the traveltime calculations and the migration summation, which exhibit an inherent trade-off between compute time and other resources. The parallelization strategy of the algorithm largely depends on the storage of calculated traveltimes and the mechanism for feeding them to the migration process. The presented work is an extension of our previous work, wherein a 3D Kirchhoff depth migration application for a multicore CPU-based parallel system had been developed. Recently, we have worked on improving the parallel performance of this application by re-designing the parallelization approach. The new algorithm is capable of efficiently migrating both prestack and poststack 3D data. It exhibits flexibility for migrating a large number of traces within the available node memory and with minimal requirements for storage, I/O and inter-node communication. The resulting application is tested using 3D Overthrust data on PARAM Yuva II, which is a Xeon E5-2670 based multicore CPU cluster with 16 cores/node and 64 GB shared memory. The parallel performance of the algorithm is studied using different numerical experiments, and the scalability results show a striking improvement over its previous version. An impressive 49.05X speedup with 76.64% efficiency is achieved for 3D prestack data and a 32.00X speedup with 50.00% efficiency for 3D poststack data, using 64 nodes. The results also demonstrate the effectiveness and robustness of the improved algorithm with high scalability and efficiency on a multicore CPU cluster.

  10. Axial 3D region of interest reconstruction using weighted cone beam BPF/DBPF algorithm cascaded with adequately oriented orthogonal butterfly filtering

    NASA Astrophysics Data System (ADS)

    Tang, Shaojie; Tang, Xiangyang

    2016-03-01

Axial cone beam (CB) computed tomography (CT) reconstruction is still the most desirable in clinical applications. As potential candidates with analytic form for the task, the back projection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, were originally derived for exact helical and axial reconstruction from CB and fan beam projection data, respectively. These two algorithms have been heuristically extended for axial CB reconstruction via the adoption of virtual PI-line segments. Unfortunately, however, streak artifacts are induced along the Hilbert filtering direction, since these algorithms are no longer accurate on the virtual PI-line segments. We have proposed to cascade the extended BPF/DBPF algorithm with orthogonal butterfly filtering for image reconstruction (namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering), in which the orientation-specific artifacts caused by the post-BP Hilbert transform can be eliminated, at the possible expense of losing the BPF/DBPF's capability of dealing with projection data truncation. Our preliminary results have shown that this is not the case in practice. Hence, in this work, we carry out an algorithmic analysis and experimental study to investigate the performance of the axial CB-BPF/DBPF cascaded with adequately oriented orthogonal butterfly filtering for three-dimensional (3D) reconstruction in a region of interest (ROI).

  11. Full Waveform 3D Synthetic Seismic Algorithm for 1D Layered Anelastic Models

    NASA Astrophysics Data System (ADS)

    Schwaiger, H. F.; Aldridge, D. F.; Haney, M. M.

    2007-12-01

Numerical calculation of synthetic seismograms for 1D layered earth models remains a significant aspect of amplitude-offset investigations, surface wave studies, microseismic event location approaches, and reflection interpretation or inversion processes. Compared to 3D finite-difference algorithms, memory demand and execution time are greatly reduced, enabling rapid generation of seismic data within workstation or laptop computational environments. We have developed a frequency-wavenumber forward modeling algorithm adapted to realistic 1D geologic media, for the purpose of calculating seismograms accurately and efficiently. The earth model consists of N layers bounded by two halfspaces. Each layer/halfspace is a homogeneous and isotropic anelastic (attenuative and dispersive) solid, characterized by a rectangular relaxation spectrum of absorption mechanisms. Compressional and shear phase speeds and quality factors are specified at a particular reference frequency. Solution methodology involves 3D Fourier transforming the three coupled, second-order, integro-differential equations for particle displacements to the frequency-horizontal wavenumber domain. An analytic solution of the resulting ordinary differential system is obtained. Imposition of welded interface conditions (continuity of displacement and stress) at all interfaces, as well as radiation conditions in the two halfspaces, yields a system of 6(N+1) linear algebraic equations for the coefficients in the ODE solution. An optimized inverse 2D Fourier transform to the space domain gives the seismic wavefield on a horizontal plane. Finally, three-component seismograms are obtained by accumulating frequency spectra at designated receiver positions on this plane, followed by a 1D inverse FFT from angular frequency ω to time. Stress-free conditions may be applied at the top or bottom interfaces, and seismic waves are initiated by force or moment density sources. Examples reveal that including attenuation

  12. Finite-Difference Algorithm for Simulating 3D Electromagnetic Wavefields in Conductive Media

    NASA Astrophysics Data System (ADS)

    Aldridge, D. F.; Bartel, L. C.; Knox, H. A.

    2013-12-01

    Electromagnetic (EM) wavefields are routinely used in geophysical exploration for detection and characterization of subsurface geological formations of economic interest. Recorded EM signals depend strongly on the current conductivity of geologic media. Hence, they are particularly useful for inferring fluid content of saturated porous bodies. In order to enhance understanding of field-recorded data, we are developing a numerical algorithm for simulating three-dimensional (3D) EM wave propagation and diffusion in heterogeneous conductive materials. Maxwell's equations are combined with isotropic constitutive relations to obtain a set of six, coupled, first-order partial differential equations governing the electric and magnetic vectors. An advantage of this system is that it does not contain spatial derivatives of the three medium parameters electric permittivity, magnetic permeability, and current conductivity. Numerical solution methodology consists of explicit, time-domain finite-differencing on a 3D staggered rectangular grid. Temporal and spatial FD operators have order 2 and N, where N is user-selectable. We use an artificially-large electric permittivity to maximize the FD timestep, and thus reduce execution time. For the low frequencies typically used in geophysical exploration, accuracy is not unduly compromised. Grid boundary reflections are mitigated via convolutional perfectly matched layers (C-PMLs) imposed at the six grid flanks. A shared-memory-parallel code implementation via OpenMP directives enables rapid algorithm execution on a multi-thread computational platform. Good agreement is obtained in comparisons of numerically-generated data with reference solutions. EM wavefields are sourced via point current density and magnetic dipole vectors. Spatially-extended inductive sources (current carrying wire loops) are under development. We are particularly interested in accurate representation of high-conductivity sub-grid-scale features that are common

  13. Improved document image segmentation algorithm using multiresolution morphology

    NASA Astrophysics Data System (ADS)

    Bukhari, Syed Saqib; Shafait, Faisal; Breuel, Thomas M.

    2011-01-01

Page segmentation into text and non-text elements is an essential preprocessing step before optical character recognition (OCR). In the case of poor segmentation, an OCR classification engine produces garbage characters due to the presence of non-text elements. This paper describes modifications to the text/non-text segmentation algorithm presented by Bloomberg, which is also available in his open-source Leptonica library. The modifications result in significant improvements and achieve better segmentation accuracy than the original algorithm on the UW-III, UNLV and ICDAR 2009 page segmentation competition test images and on circuit diagram datasets.

  14. Research of the multimodal brain-tumor segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Lu, Yisu; Chen, Wufan

    2015-12-01

It is well-known that the number of clusters is one of the most important parameters for automatic segmentation. However, it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet processes (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without initializing the number of clusters. A new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smoothness constraint is proposed in this study. Besides the segmentation of single-modality brain tumor images, we extended the algorithm to segment multimodal brain tumor images using magnetic resonance (MR) multimodal features, obtaining the active tumor and the edema at the same time. The proposed algorithm is evaluated and compared with other approaches. The accuracy and computation time of our algorithm demonstrate very impressive performance.

  15. Comparative testing of DNA segmentation algorithms using benchmark simulations.

    PubMed

    Elhaik, Eran; Graur, Dan; Josic, Kresimir

    2010-05-01

    Numerous segmentation methods for the detection of compositionally homogeneous domains within genomic sequences have been proposed. Unfortunately, these methods yield inconsistent results. Here, we present a benchmark consisting of two sets of simulated genomic sequences for testing the performances of segmentation algorithms. Sequences in the first set are composed of fixed-sized homogeneous domains, distinct in their between-domain guanine and cytosine (GC) content variability. The sequences in the second set are composed of a mosaic of many short domains and a few long ones, distinguished by sharp GC content boundaries between neighboring domains. We use these sets to test the performance of seven segmentation algorithms in the literature. Our results show that recursive segmentation algorithms based on the Jensen-Shannon divergence outperform all other algorithms. However, even these algorithms perform poorly in certain instances because of the arbitrary choice of a segmentation-stopping criterion.
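
    To illustrate the class of methods that performed best above, here is a minimal sketch of one recursion step of Jensen-Shannon-divergence segmentation: find the split point of a 0/1 (AT/GC) sequence that maximizes the entropy-based divergence between the two resulting segments. Recursive application with a stopping criterion, which the benchmark papers use, is not shown, and the O(n²) scan is kept simple for clarity.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a probability vector."""
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def js_divergence_split(seq):
    """Best split point of a 0/1 sequence under the Jensen-Shannon divergence
    D = H(whole) - (n1/n) H(left) - (n2/n) H(right)."""
    seq = np.asarray(seq, dtype=float)
    n = len(seq)
    h_total = entropy(np.array([1 - seq.mean(), seq.mean()]))
    best_i, best_d = None, -np.inf
    for i in range(1, n):
        left, right = seq[:i], seq[i:]
        pl = np.array([1 - left.mean(), left.mean()])
        pr = np.array([1 - right.mean(), right.mean()])
        d = h_total - (i / n) * entropy(pl) - ((n - i) / n) * entropy(pr)
        if d > best_d:
            best_i, best_d = i, d
    return best_i, best_d
```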

  16. Combined aerial and terrestrial images for complete 3D documentation of Singosari Temple based on Structure from Motion algorithm

    NASA Astrophysics Data System (ADS)

    Hidayat, Husnul; Cahyono, A. B.

    2016-11-01

Singosari temple is one of the cultural heritage buildings in East Java, Indonesia; it was built in the 1300s and restored in 1934-1937. Because of its history and importance, complete documentation of this temple is required. Nowadays, with the advent of low-cost UAVs, combining aerial photography with terrestrial photogrammetry gives more complete data for 3D documentation. This research aims to make a complete 3D model of this landmark from aerial and terrestrial photographs with the Structure from Motion algorithm. To establish correct scale, position, and orientation, the final 3D model was georeferenced with Ground Control Points in the UTM 49S coordinate system. The result shows that all facades, the floor, and the upper structures can be modeled completely in 3D. In terms of 3D coordinate accuracy, the Root Mean Square Errors (RMSEs) are RMSEx = 0.041 m, RMSEy = 0.031 m, and RMSEz = 0.049 m, which together represent a 0.071 m displacement in 3D space. In addition, the mean difference of length measurements of the object is 0.057 m. With this accuracy, the method can be used to map the site up to a 1:237 scale. Although the accuracy level is still at the centimeter level, the combined aerial and terrestrial photographs with the Structure from Motion algorithm can provide a complete and visually interesting 3D model.
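
    A quick check of the arithmetic above: the per-axis RMSE values combine in quadrature to the reported 3D displacement.

```python
import math

rmse_x, rmse_y, rmse_z = 0.041, 0.031, 0.049           # metres, per-axis RMSE
rmse_3d = math.sqrt(rmse_x**2 + rmse_y**2 + rmse_z**2)
print(round(rmse_3d, 3))                                # 0.071 m, as reported
```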

  17. Exact and approximate Fourier rebinning algorithms for the solution of the data truncation problem in 3-D PET.

    PubMed

    Bouallègue, Fayçal Ben; Crouzet, Jean-François; Comtat, Claude; Fourcade, Marjolaine; Mohammadi, Bijan; Mariano-Goulart, Denis

    2007-07-01

This paper presents an extended 3-D exact rebinning formula in the Fourier space that leads to an iterative reprojection algorithm (iterative FOREPROJ), which enables the estimation of unmeasured oblique projection data on the basis of the whole set of measured data. To a first approximation, this analytical formula also leads to an extended Fourier rebinning equation that is the basis for an approximate reprojection algorithm (extended FORE). These algorithms were evaluated on numerically simulated 3-D positron emission tomography (PET) data for the solution of the truncation problem, i.e., the estimation of the missing portions in the oblique projection data, before the application of algorithms that require complete projection data, such as some rebinning methods (FOREX) or 3-D reconstruction algorithms (3DRP or direct Fourier methods). By taking advantage of all the 3-D data statistics, the iterative FOREPROJ reprojection provides a reliable alternative to the classical FOREPROJ method, which only exploits the low-statistics nonoblique data. It significantly improves the quality of the external reconstructed slices without loss of spatial resolution. As for the approximate extended FORE algorithm, it clearly exhibits limitations due to axial interpolations, but will require clinical studies with more realistic measured data in order to decide on its pertinence.

  18. Formulation, stability and application of a semi-coupled 3-D four-field algorithm

    SciTech Connect

    Kunz, R.F.; Siebert, B.W.; Cope, W.K.; Foster, N.F.; Antal, S.P.; Ettorre, S.M.

    1996-06-01

    A new 3-D four-field algorithm has been developed to predict general two-phase flows. Ensemble averaged transport equations of mass, momentum, energy and turbulence transport are solved for each field (continuous liquid, continuous vapor, disperse liquid, disperse vapor). This four-field structure allows for analysis of adiabatic and boiling systems which contain flow regimes from bubbly through annular. Interfacial mass, momentum, turbulence and heat transfer models provide coupling between phases. A new semi-coupled implicit method is utilized to solve the set of 25 equations which arise in the formulation. In this paper, three important component numerical strategies employed in the method are summarized. These include: (1) incorporation of interfacial momentum force terms in the control volume face flux reconstruction, (2) phase coupling at the linear solver level, and in the pressure-velocity coupling itself and (3) a multi-step Jacobi block correction scheme for efficient solution of the pressure-Poisson equation. The necessity/effectiveness of these strategies is demonstrated in applications to realistic engineering flows. Though some heated flow test cases are considered, the particular numerics discussed here are germane to adiabatic flows with and without mass transfer.

  19. Embedding SAS approach into conjugate gradient algorithms for asymmetric 3D elasticity problems

    SciTech Connect

    Chen, Hsin-Chu; Warsi, N.A.; Sameh, A.

    1996-12-31

In this paper, we present two strategies to embed the SAS (symmetric-and-antisymmetric) scheme into conjugate gradient (CG) algorithms to make solving 3D elasticity problems, with or without global reflexive symmetry, more efficient. The SAS approach is physically a domain decomposition scheme that takes advantage of reflexive symmetry of discretized physical problems, and algebraically a matrix transformation method that exploits special reflexivity properties of the matrix resulting from discretization. In addition to offering large-grain parallelism, which is valuable in a multiprocessing environment, the SAS scheme also has the potential for reducing arithmetic operations in the numerical solution of a reasonably wide class of scientific and engineering problems. This approach can be applied directly to problems that have global reflexive symmetry, yielding smaller and independent subproblems to solve, or indirectly to problems with partial symmetry, resulting in loosely coupled subproblems. The decomposition is achieved by separating the reflexive subspace from the antireflexive one, possessed by a special class of matrices A ∈ C^(n×n) that satisfy the relation A = PAP, where P is a reflection matrix (a symmetric signed permutation matrix).
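
    A small numerical sketch of the algebraic idea above: if A = PAP with P a reflection matrix (P symmetric, P² = I), then A commutes with P and therefore preserves the reflexive (+1) and antireflexive (-1) eigenspaces of P, so an orthogonal basis grouped by eigenvalue block-diagonalizes A into two independent subproblems. The 2x2 example is purely illustrative.

```python
import numpy as np

def sas_block_diagonalize(A, P):
    """Change of basis built from the +1/-1 eigenspaces of the reflection P;
    for A = P A P the transformed matrix decouples into two diagonal blocks."""
    w, V = np.linalg.eigh(P)              # orthonormal eigenvectors of P
    X = V[:, np.argsort(w)]               # antireflexive (-1) block first
    return X.T @ A @ X, X                 # block-diagonal up to round-off

# Illustration: P swaps the two coordinates, and A = P A P holds.
P = np.array([[0.0, 1.0], [1.0, 0.0]])
A = np.array([[2.0, 1.0], [1.0, 2.0]])
B, X = sas_block_diagonalize(A, P)
print(np.round(B, 10))                    # off-diagonal coupling vanishes
```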

  20. 3-D image pre-processing algorithms for improved automated tracing of neuronal arbors.

    PubMed

    Narayanaswamy, Arunachalam; Wang, Yu; Roysam, Badrinath

    2011-09-01

    The accuracy and reliability of automated neurite tracing systems is ultimately limited by image quality as reflected in the signal-to-noise ratio, contrast, and image variability. This paper describes a novel combination of image processing methods that operate on images of neurites captured by confocal and widefield microscopy, and produce synthetic images that are better suited to automated tracing. The algorithms are based on the curvelet transform (for denoising curvilinear structures and local orientation estimation), perceptual grouping by scalar voting (for elimination of non-tubular structures and improvement of neurite continuity while preserving branch points), adaptive focus detection, and depth estimation (for handling widefield images without deconvolution). The proposed methods are fast, and capable of handling large images. Their ability to handle images of unlimited size derives from automated tiling of large images along the lateral dimension, and processing of 3-D images one optical slice at a time. Their speed derives in part from the fact that the core computations are formulated in terms of the Fast Fourier Transform (FFT), and in part from parallel computation on multi-core computers. The methods are simple to apply to new images since they require very few adjustable parameters, all of which are intuitive. Examples of pre-processing DIADEM Challenge images are used to illustrate improved automated tracing resulting from our pre-processing methods.

  1. Graph-based active learning of agglomeration (GALA): a Python library to segment 2D and 3D neuroimages.

    PubMed

    Nunez-Iglesias, Juan; Kennedy, Ryan; Plaza, Stephen M; Chakraborty, Anirban; Katz, William T

    2014-01-01

    The aim in high-resolution connectomics is to reconstruct complete neuronal connectivity in a tissue. Currently, the only technology capable of resolving the smallest neuronal processes is electron microscopy (EM). Thus, a common approach to network reconstruction is to perform (error-prone) automatic segmentation of EM images, followed by manual proofreading by experts to fix errors. We have developed an algorithm and software library to not only improve the accuracy of the initial automatic segmentation, but also point out the image coordinates where it is likely to have made errors. Our software, called gala (graph-based active learning of agglomeration), improves the state of the art in agglomerative image segmentation. It is implemented in Python and makes extensive use of the scientific Python stack (numpy, scipy, networkx, scikit-learn, scikit-image, and others). We present here the software architecture of the gala library, and discuss several designs that we consider would be generally useful for other segmentation packages. We also discuss the current limitations of the gala library and how we intend to address them.
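
    As a point of contrast with the learned agglomeration policy described above, the sketch below shows a much simpler, non-learned agglomeration of an oversegmentation using the scientific Python stack that gala builds on: SLIC superpixels, a region adjacency graph, and a fixed mean-color merge threshold. This is not gala's algorithm; the threshold and module path are assumptions (older scikit-image versions expose these functions under skimage.future.graph).

```python
import numpy as np
from skimage import data, segmentation
from skimage import graph  # skimage.future.graph in older scikit-image releases

def agglomerate(image, n_superpixels=400, thresh=30):
    """Oversegment, build a region adjacency graph, then merge adjacent
    regions whose mean colors differ by less than a fixed threshold."""
    labels = segmentation.slic(image, n_segments=n_superpixels, start_label=1)
    rag = graph.rag_mean_color(image, labels)
    return graph.cut_threshold(labels, rag, thresh)

merged = agglomerate(data.astronaut())
print(len(np.unique(merged)))   # number of regions after agglomeration
```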

  2. Automated 2D-3D registration of a radiograph and a cone beam CT using line-segment enhancement

    SciTech Connect

    Munbodh, Reshma; Jaffray, David A.; Moseley, Douglas J.; Chen Zhe; Knisely, Jonathan P.S.; Cathier, Pascal; Duncan, James S.

    2006-05-15

The objective of this study was to develop a fully automated two-dimensional (2D)-three-dimensional (3D) registration framework to quantify setup deviations in prostate radiation therapy from cone beam CT (CBCT) data and a single AP radiograph. A kilovoltage CBCT image and a kilovoltage AP radiograph of an anthropomorphic phantom of the pelvis were acquired at 14 accurately known positions. The shifts in the phantom position were subsequently estimated by registering digitally reconstructed radiographs (DRRs) from the 3D CBCT scan to the AP radiographs through the correlation of enhanced linear image features mainly representing bony ridges. Linear features were enhanced by filtering the images with "sticks," short line segments whose orientation is varied to achieve the maximum projection value at every pixel in the image. The means (and standard deviations) of the absolute errors in estimating translations along the three orthogonal axes, in millimeters, were 0.134 (0.096) AP (out-of-plane), 0.021 (0.023) ML and 0.020 (0.020) SI. The corresponding errors for rotations, in degrees, were 0.011 (0.009) AP, 0.029 (0.016) ML (out-of-plane), and 0.030 (0.028) SI (out-of-plane). Preliminary results with megavoltage patient data have also been reported. The results suggest that it may be possible to enhance anatomic features that are common to DRRs from a CBCT image and a single AP radiograph of the pelvis for use in a completely automated and accurate 2D-3D registration framework for setup verification in prostate radiotherapy. This technique is theoretically applicable to other rigid bony structures such as the cranial vault or skull base and to piecewise rigid structures such as the spine.

  3. 3D segmentation and quantification of magnetic resonance data: application to the osteonecrosis of the femoral head

    NASA Astrophysics Data System (ADS)

    Klifa, Catherine S.; Lynch, John A.; Zaim, Souhil; Genant, Harry K.

    1999-05-01

The general objective of our study is the development of a clinically robust three-dimensional segmentation and quantification technique for Magnetic Resonance (MR) data, for the objective and quantitative evaluation of osteonecrosis (ON) of the femoral head. This method will help evaluate the effects of joint-preserving treatments for femoral head osteonecrosis from MR data. The disease is characterized by tissue changes (death of bone and marrow cells) within the weight-bearing portion of the femoral head. Due to the fuzzy appearance of lesion tissues and their different intensity patterns in various MR sequences, we proposed a semi-automatic multispectral segmentation of MR data that introduces anatomical and geometrical data constraints and uses a classical K-means unsupervised clustering algorithm. The method was applied to ON patient data. Results of volumetric measurements and the configuration of various tissues obtained with the semi-automatic method were compared with quantitative results delineated by a trained radiologist.
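
    A minimal sketch of the multispectral K-means step described above: voxels from co-registered MR sequences are clustered on their joint intensities inside an anatomical mask. The anatomical/geometrical constraints of the paper are not reproduced; the mask-based feature construction and cluster count are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def multispectral_kmeans(volumes, mask, n_tissues=4, seed=0):
    """Cluster voxels of co-registered MR sequences (e.g. T1, T2, PD) inside
    an anatomical mask; 'volumes' is a list of 3D arrays of equal shape."""
    features = np.stack([v[mask] for v in volumes], axis=1)   # (n_voxels, n_sequences)
    km = KMeans(n_clusters=n_tissues, n_init=10, random_state=seed).fit(features)
    labels = np.zeros(mask.shape, dtype=np.int16)
    labels[mask] = km.labels_ + 1                             # 0 is kept for background
    return labels
```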

  4. Automatic segmentation of solitary pulmonary nodules based on local intensity structure analysis and 3D neighborhood features in 3D chest CT images

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Kitasaka, Takayuki; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku

    2012-03-01

This paper presents a solitary pulmonary nodule (SPN) segmentation method based on local intensity structure analysis and neighborhood feature analysis in chest CT images. Automated segmentation of SPNs is desirable for a chest computer-aided detection/diagnosis (CAD) system, since an SPN may indicate an early stage of lung cancer. Due to the similar intensities of SPNs and other chest structures such as blood vessels, many false positives (FPs) are generated by nodule detection methods. To reduce such FPs, we introduce two features that analyze the relation between each segmented nodule candidate and its neighborhood region. The proposed method utilizes a blob-like structure enhancement (BSE) filter based on Hessian analysis to enhance blob-like structures as initial nodule candidates. Then a fine segmentation is performed to obtain a more accurate region for each nodule candidate. FP reduction is mainly addressed by investigating two neighborhood features, based on the volume ratio and on the Hessian eigenvectors, that are calculated from the neighborhood region of each nodule candidate. We evaluated the proposed method using 40 chest CT images, including 20 standard-dose CT images randomly chosen from a local database and 20 low-dose CT images randomly chosen from a public database (LIDC). The experimental results revealed that the average TP rate of the proposed method was 93.6% with 12.3 FPs/case.
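
    A rough sketch of a Hessian-based blob-likeness measure in the spirit of the BSE filter named above: at a bright blob all Hessian eigenvalues are strongly negative, so those voxels are scored by the magnitude of the eigenvalue sum. The finite-difference Hessian, the Gaussian scale, and the scoring rule are simplifications, not the published filter.

```python
import numpy as np
from scipy import ndimage

def blobness(volume, sigma=2.0):
    """Score bright blob-like voxels from the eigenvalues of the Hessian of a
    Gaussian-smoothed volume (a crude stand-in for a BSE filter)."""
    v = ndimage.gaussian_filter(volume.astype(float), sigma)
    grads = np.gradient(v)                        # first derivatives per axis
    H = np.empty(v.shape + (v.ndim, v.ndim))
    for i in range(v.ndim):
        second = np.gradient(grads[i])            # second derivatives
        for j in range(v.ndim):
            H[..., i, j] = second[j]
    eigs = np.linalg.eigvalsh(H)                  # ascending along the last axis
    bloblike = np.all(eigs < 0, axis=-1)          # all eigenvalues negative
    return np.where(bloblike, -eigs.sum(axis=-1), 0.0)
```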

  5. The PCNN adaptive segmentation algorithm based on visual perception

    NASA Astrophysics Data System (ADS)

    Zhao, Yanming

To solve the problem of adaptive parameter determination in the pulse coupled neural network (PCNN) and to improve image segmentation results, a PCNN adaptive segmentation algorithm based on visual perception of information is proposed. Based on visually perceived image information and a Gabor mathematical model of the receptive field of optic nerve cells, the algorithm adaptively determines the receptive field of each pixel of the image, and it adaptively determines the PCNN network parameters W, M, and β from the Gabor model, which overcomes the parameter determination problem of the traditional PCNN in the field of image segmentation. Experimental results show that the proposed algorithm improves the region connectivity and edge regularity of the segmented image, demonstrating the advantage of incorporating visual perception information into PCNN-based segmentation.

  6. An improved FCM medical image segmentation algorithm based on MMTD.

    PubMed

    Zhou, Ningning; Yang, Tingting; Zhang, Shaobai

    2014-01-01

Image segmentation plays an important role in medical image processing. Fuzzy c-means (FCM) is one of the popular clustering algorithms for medical image segmentation, but FCM is highly vulnerable to noise because it does not consider spatial information during segmentation. This paper introduces the medium mathematics system, which is employed to process fuzzy information for image segmentation. It establishes a medium similarity measure based on the measure of medium truth degree (MMTD) and uses the correlation between a pixel and its neighbors to define the medium membership function. An improved FCM medical image segmentation algorithm based on MMTD, which takes spatial features into account, is proposed in this paper. The experimental results show that the proposed algorithm is more robust to noise than the standard FCM, with more certainty and less fuzziness. This makes it practical and effective for medical image segmentation.
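
    For context, here is a compact sketch of intensity-based fuzzy c-means with a crude spatial regularization: the membership maps are Gaussian-smoothed each iteration so that neighbors influence a pixel's class. The smoothing step is a simple stand-in for the MMTD-based neighborhood term of the paper, and the fuzzifier, iteration count, and sigma are assumptions.

```python
import numpy as np
from scipy import ndimage

def spatial_fcm(image, n_clusters=3, m=2.0, n_iter=50, smooth_sigma=1.0, seed=0):
    """Fuzzy c-means on pixel intensities with per-iteration spatial smoothing
    of the membership maps (not the published MMTD formulation)."""
    x = image.ravel().astype(float)
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, x.size))
    u /= u.sum(axis=0)
    for _ in range(n_iter):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)                     # cluster centroids
        dist = np.abs(x[None, :] - centers[:, None]) + 1e-9
        u = dist ** (-2.0 / (m - 1.0))                          # standard FCM update
        u /= u.sum(axis=0)
        u = np.stack([ndimage.gaussian_filter(c.reshape(image.shape),
                                              smooth_sigma).ravel() for c in u])
        u /= u.sum(axis=0)                                      # renormalize memberships
    return u.argmax(axis=0).reshape(image.shape)
```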

  7. A 2D to 3D ultrasound image registration algorithm for robotically assisted laparoscopic radical prostatectomy

    NASA Astrophysics Data System (ADS)

    Esteghamatian, Mehdi; Pautler, Stephen E.; McKenzie, Charles A.; Peters, Terry M.

    2011-03-01

Robotically assisted laparoscopic radical prostatectomy (RARP) is an effective approach to resect the diseased organ, with stereoscopic views of the targeted tissue improving the dexterity of the surgeons. However, since the laparoscopic view acquires only the surface image of the tissue, the underlying distribution of the cancer within the organ is not observed, making it difficult to make informed decisions on surgical margins and the sparing of neurovascular bundles. One option to address this problem is to exploit registration to integrate the laparoscopic view with images of pre-operatively acquired dynamic contrast enhanced (DCE) MRI that can demonstrate the regions of malignant tissue within the prostate. Such a view potentially allows the surgeon to visualize the location of the malignancy with respect to the surrounding neurovascular structures, permitting a tissue-sparing strategy to be formulated directly based on the observed tumour distribution. If the tumour is close to the capsule, it may be determined that the adjacent neurovascular bundle (NVB) needs to be sacrificed within the surgical margin to ensure that any erupted tumour is resected. On the other hand, if the cancer is sufficiently far from the capsule, one or both NVBs may be spared. However, in order to realize such image integration, the pre-operative image needs to be fused with the laparoscopic view of the prostate. During the initial stages of the operation, the prostate must be tracked in real time so that the pre-operative MR image remains aligned with the patient coordinate system. In this study, we propose and investigate a novel 2D to 3D ultrasound image registration algorithm to track the prostate motion with an accuracy of 2.68 ± 1.31 mm.

  8. Poroelastic Wave Propagation With a 3D Velocity-Stress-Pressure Finite-Difference Algorithm

    NASA Astrophysics Data System (ADS)

    Aldridge, D. F.; Symons, N. P.; Bartel, L. C.

    2004-12-01

Seismic wave propagation within a three-dimensional, heterogeneous, isotropic poroelastic medium is numerically simulated with an explicit, time-domain, finite-difference algorithm. A system of thirteen, coupled, first-order, partial differential equations is solved for the particle velocity vector components, the stress tensor components, and the pressure associated with solid and fluid constituents of the two-phase continuum. These thirteen dependent variables are stored on staggered temporal and spatial grids, analogous to the scheme utilized for solution of the conventional velocity-stress system of isotropic elastodynamics. Centered finite-difference operators possess 2nd-order accuracy in time and 4th-order accuracy in space. Seismological utility is enhanced by an optional stress-free boundary condition applied on a horizontal plane representing the earth's surface. Absorbing boundary conditions are imposed on the flanks of the 3D spatial grid via a simple wavefield amplitude taper approach. A massively parallel computational implementation, utilizing the spatial domain decomposition strategy, allows investigation of large-scale earth models and/or broadband wave propagation within reasonable execution times. Initial algorithm testing indicates that a point force density and/or moment density source activated within a poroelastic medium generates diverging fast and slow P waves (and possibly an S-wave) in accord with Biot theory. Solid and fluid particle velocities are in-phase for the fast P-wave, whereas they are out-of-phase for the slow P-wave. Conversions between all wave types occur during reflection and transmission at interfaces. Thus, although the slow P-wave is regarded as difficult to detect experimentally, its presence is strongly manifest within the complex of waves generated at a lithologic or fluid boundary. Very fine spatial and temporal gridding are required for high-fidelity representation of the slow P-wave, without inducing excessive

  9. Efficient Algorithms for Segmentation of Item-Set Time Series

    NASA Astrophysics Data System (ADS)

    Chundi, Parvathi; Rosenkrantz, Daniel J.

    We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
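
    A hedged sketch of the dynamic-programming step outlined above, assuming the segment-difference values have already been precomputed with one of the paper's measure functions into a matrix costs[a][b] (the difference value of the segment covering time points a..b inclusive); the cubic-time scan is kept deliberately simple.

```python
def optimal_segmentation(costs, k):
    """Partition a length-n time series into k contiguous segments minimizing
    the total segment difference; costs[a][b] covers points a..b inclusive."""
    n = len(costs)
    INF = float("inf")
    dp = [[INF] * (k + 1) for _ in range(n + 1)]    # dp[j][s]: best cost of first j points, s segments
    cut = [[0] * (k + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for j in range(1, n + 1):
        for s in range(1, min(j, k) + 1):
            for i in range(s - 1, j):               # last segment covers points i..j-1
                c = dp[i][s - 1] + costs[i][j - 1]
                if c < dp[j][s]:
                    dp[j][s], cut[j][s] = c, i
    # recover the segment boundaries by walking the cut table backwards
    bounds, j = [], n
    for s in range(k, 0, -1):
        bounds.append((cut[j][s], j))
        j = cut[j][s]
    return dp[n][k], bounds[::-1]
```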

  10. Theoretical and experimental study of DOA estimation using AML algorithm for an isotropic and non-isotropic 3D array

    NASA Astrophysics Data System (ADS)

    Asgari, Shadnaz; Ali, Andreas M.; Collier, Travis C.; Yao, Yuan; Hudson, Ralph E.; Yao, Kung; Taylor, Charles E.

    2007-09-01

    The focus of most direction-of-arrival (DOA) estimation problems has been based mainly on a two-dimensional (2D) scenario where we only need to estimate the azimuth angle. But in various practical situations we have to deal with a three-dimensional scenario. The importance of being able to estimate both azimuth and elevation angles with high accuracy and low complexity is of interest. We present the theoretical and the practical issues of DOA estimation using the Approximate-Maximum-Likelihood (AML) algorithm in a 3D scenario. We show that the performance of the proposed 3D AML algorithm converges to the Cramer-Rao Bound. We use the concept of an isotropic array to reduce the complexity of the proposed algorithm by advocating a decoupled 3D version. We also explore a modified version of the decoupled 3D AML algorithm which can be used for DOA estimation with non-isotropic arrays. Various numerical results are presented. We use two acoustic arrays each consisting of 8 microphones to do some field measurements. The processing of the measured data from the acoustic arrays for different azimuth and elevation angles confirms the effectiveness of the proposed methods.

  11. A rapid and efficient 2D/3D nuclear segmentation method for analysis of early mouse embryo and stem cell image data.

    PubMed

    Lou, Xinghua; Kang, Minjung; Xenopoulos, Panagiotis; Muñoz-Descalzo, Silvia; Hadjantonakis, Anna-Katerina

    2014-03-11

    Segmentation is a fundamental problem that dominates the success of microscopic image analysis. In almost 25 years of cell detection software development, there is still no single piece of commercial software that works well in practice when applied to early mouse embryo or stem cell image data. To address this need, we developed MINS (modular interactive nuclear segmentation) as a MATLAB/C++-based segmentation tool tailored for counting cells and fluorescent intensity measurements of 2D and 3D image data. Our aim was to develop a tool that is accurate and efficient yet straightforward and user friendly. The MINS pipeline comprises three major cascaded modules: detection, segmentation, and cell position classification. An extensive evaluation of MINS on both 2D and 3D images, and comparison to related tools, reveals improvements in segmentation accuracy and usability. Thus, its accuracy and ease of use will allow MINS to be implemented for routine single-cell-level image analyses.

  12. Development of a Quasi-3D Multiscale Modeling Framework: Motivation, Basic Algorithm and Preliminary results

    NASA Astrophysics Data System (ADS)

    Jung, Joon-Hee; Arakawa, Akio

    2010-04-01

    A new framework for modeling the atmosphere, which we call the quasi-3D (Q3D) multi-scale modeling framework (MMF), is developed with the objective of including cloud-scale three-dimensional effects in a GCM without necessarily using a global cloud-resolving model (CRM). It combines a GCM with a Q3D CRM that has the horizontal domain consisting of two perpendicular sets of channels, each of which contains a locally 3D grid-point array. For computing efficiency, the widths of the channels are chosen to be narrow. Thus, it is crucial to select a proper lateral boundary condition to realistically simulate the statistics of cloud and cloud-associated processes. Among the various possibilities, a periodic lateral boundary condition is chosen for the deviations from background fields that are obtained by interpolations from the GCM grid points. Since the deviations tend to vanish as the GCM grid size approaches that of the CRM, the whole system of the Q3D MMF can converge to a fully 3D global CRM. Consequently, the horizontal resolution of the GCM can be freely chosen depending on the objective of application, without changing the formulation of model physics. To evaluate the newly developed Q3D CRM in an efficient way, idealized experiments have been performed using a small horizontal domain. In these tests, the Q3D CRM uses only one pair of perpendicular channels with only two grid points across each channel. Comparing the simulation results with those of a fully 3D CRM, it is concluded that the Q3D CRM can reproduce most of the important statistics of the 3D solutions, including the vertical distributions of cloud water and precipitants, vertical transports of potential temperature and water vapor, and the variances and covariances of dynamical variables. The main improvement from a corresponding 2D simulation appears in the surface fluxes and the vorticity transports that cause the mean wind to change. A comparison with a simulation using a coarse-resolution 3D CRM

  13. Automatic segmentation of the prostate in 3D MR images by atlas matching using localized mutual information.

    PubMed

    Klein, Stefan; van der Heide, Uulke A; Lips, Irene M; van Vulpen, Marco; Staring, Marius; Pluim, Josien P W

    2008-04-01

An automatic method for delineating the prostate (including the seminal vesicles) in three-dimensional magnetic resonance scans is presented. The method is based on nonrigid registration of a set of prelabeled atlas images. Each atlas image is nonrigidly registered with the target patient image. Subsequently, the deformed atlas label images are fused to yield a single segmentation of the patient image. The proposed method is evaluated on 50 clinical scans, which were manually segmented by three experts. The Dice similarity coefficient (DSC) is used to quantify the overlap between the automatic and manual segmentations. We investigate the impact of several factors on the performance of the segmentation method. For the registration, two similarity measures are compared: Mutual information and a localized version of mutual information. The latter turns out to be superior (median ΔDSC ≈ 0.02, p < 0.01 with a paired two-sided Wilcoxon test) and comes at no added computational cost, thanks to the use of a novel stochastic optimization scheme. For the atlas fusion step we consider a majority voting rule and the "simultaneous truth and performance level estimation" algorithm, both with and without a preceding atlas selection stage. The differences between the various fusion methods appear to be small and mostly not statistically significant (p > 0.05). To assess the influence of the atlas composition, two atlas sets are compared. The first set consists of 38 scans of healthy volunteers. The second set is constructed by a leave-one-out approach using the 50 clinical scans that are used for evaluation. The second atlas set gives substantially better performance (ΔDSC = 0.04, p < 0.01), stressing the importance of a careful atlas definition. With the best settings, a median DSC of around 0.85 is achieved, which is close to the median interobserver DSC of 0.87. The segmentation quality is especially good at the prostate-rectum interface, where the
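
    Two small helpers corresponding to the evaluation metric and the simplest fusion rule discussed above: the Dice similarity coefficient between binary masks, and majority voting over propagated atlas labels. Both are generic sketches, not the paper's implementation.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def majority_vote(deformed_atlas_labels):
    """Fuse propagated atlas label maps: a voxel is labeled prostate if more
    than half of the registered atlases say so."""
    votes = np.mean([l.astype(bool) for l in deformed_atlas_labels], axis=0)
    return votes > 0.5
```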

  14. Feature measures for the segmentation of neuronal membrane using a machine learning algorithm

    NASA Astrophysics Data System (ADS)

    Iftikhar, Saadia; Godil, Afzal

    2013-12-01

    In this paper, we present a Support Vector Machine (SVM) based pixel classifier for a semi-automated segmentation algorithm to detect neuronal membrane structures in stacks of electron microscopy images of brain tissue samples. This algorithm uses high-dimensional feature spaces extracted from center-surround patches and distinct edge-sensitive features for each pixel in the image, together with a training dataset, to segment neuronal membrane structures and background. Threshold conditions are later applied to remove small regions that fall below a certain size criterion, and morphological operations, such as filling of the detected objects, are performed to obtain compact objects. The performance of the segmentation method is calculated on unseen data against the respective ground truth using three distinct error measures: pixel error, warping error, and Rand error, as well as a pixel-by-pixel accuracy measure. The trained SVM classifier achieves best values of 0.23, 0.016 and 0.15 for these three error measures, respectively, while the best pixel-by-pixel accuracy reaches 77% on the given dataset. The results presented here are a further step towards exploring possible ways to solve hard problems such as segmentation in medical image analysis. In the future, we plan to extend it to a 3D segmentation approach for 3D datasets, both to retain the topological structures in the dataset and to ease further analysis.
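    A minimal sketch of a patch-based SVM pixel classifier in scikit-learn is shown below; the patch size, the raw-intensity features and the toy training data are assumptions, not the paper's feature set.

```python
import numpy as np
from sklearn.svm import SVC

def extract_patch_features(image, coords, half=2):
    """Flatten a (2*half+1)^2 centre-surround patch around each pixel coordinate."""
    padded = np.pad(image, half, mode='reflect')
    feats = []
    for r, c in coords:
        patch = padded[r:r + 2 * half + 1, c:c + 2 * half + 1]
        feats.append(patch.ravel())
    return np.asarray(feats, dtype=np.float32)

# Toy training data: one image plus labelled membrane/background pixels
image = np.random.rand(64, 64).astype(np.float32)
coords = [(r, c) for r in range(4, 60, 8) for c in range(4, 60, 8)]
labels = np.random.randint(0, 2, len(coords))      # 1 = membrane, 0 = background

X = extract_patch_features(image, coords)
clf = SVC(kernel='rbf', C=1.0, gamma='scale').fit(X, labels)

# Classify two new pixels of the same image
pred = clf.predict(extract_patch_features(image, [(10, 10), (30, 30)]))
```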

  15. Brain tumor segmentation in MR slices using improved GrowCut algorithm

    NASA Astrophysics Data System (ADS)

    Ji, Chunhong; Yu, Jinhua; Wang, Yuanyuan; Chen, Liang; Shi, Zhifeng; Mao, Ying

    2015-12-01

    The detection of brain tumors from MR images is very significant for medical diagnosis and treatment. However, the existing methods are mostly based on manual or semiautomatic segmentation, which is awkward when dealing with a large number of MR slices. In this paper, a new fully automatic method for the segmentation of brain tumors in MR slices is presented. Based on the hypothesis of a symmetric brain structure, the method improves the interactive GrowCut algorithm by further using a bounding box algorithm in the pre-processing step. More importantly, local reflectional symmetry is used to make up for the deficiency of the bounding box method. After segmentation, a 3D tumor image is reconstructed. We evaluate the accuracy of the proposed method on MR slices with synthetic tumors and on actual clinical MR images. The result of the proposed method is compared qualitatively and quantitatively with the actual position of the simulated 3D tumor. In addition, our automatic method produces performance equivalent to manual segmentation and to interactive GrowCut with manual intervention, while providing fully automatic segmentation.
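    As a loose illustration of the symmetry idea (a tumour tends to break the left-right symmetry of the brain), the toy sketch below compares a slice with its mirror image and takes a bounding box of the most asymmetric pixels; assuming the mid-sagittal plane lies at the image centre is a simplification the actual method does not rely on.

```python
import numpy as np

def asymmetry_map(slice_2d):
    """Absolute difference between a brain slice and its left-right mirror.

    Large values flag regions that break the assumed mid-line symmetry and
    are candidates for the tumour bounding box."""
    mirrored = slice_2d[:, ::-1]
    return np.abs(slice_2d.astype(np.float32) - mirrored)

def bounding_box(mask):
    """Axis-aligned bounding box (rmin, rmax, cmin, cmax) of a binary mask."""
    rows, cols = np.nonzero(mask)
    return rows.min(), rows.max(), cols.min(), cols.max()

slice_2d = np.random.rand(128, 128)
diff = asymmetry_map(slice_2d)
box = bounding_box(diff > diff.mean() + diff.std())
```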

  16. 3D-segmentation of the 18F-choline PET signal for target volume definition in radiation therapy of the prostate.

    PubMed

    Ciernik, I Frank; Brown, Derek W; Schmid, Daniel; Hany, Thomas; Egli, Peter; Davis, J Bernard

    2007-02-01

    Volumetric assessment of PET signals is becoming increasingly relevant for radiotherapy (RT) planning. Here, we investigate the utility of 18F-choline PET signals to serve as a structure for semi-automatic segmentation for forward treatment planning of prostate cancer. 18F-choline PET and CT scans of ten patients with histologically proven prostate cancer without extracapsular growth were acquired using a combined PET/CT scanner. Target volumes were manually delineated on CT images using standard software. Volumes were also obtained from 18F-choline PET images using an asymmetrical segmentation algorithm. Planning target volumes (PTVs) were derived from CT- and 18F-choline PET-based clinical target volumes (CTVs) by automatic expansion, and comparative planning was performed. As a read-out for dose given to non-target structures, dose to the rectal wall was assessed. PTVs derived from CT and 18F-choline PET yielded comparable results. Optimal matching of CT- and 18F-choline PET-derived volumes in the lateral and cranial-caudal directions was obtained using a background-subtracted signal threshold of 23.0 ± 2.6%. In the antero-posterior direction, where adaptation compensating for rectal signal overflow was required, optimal matching was achieved with a threshold of 49.5 ± 4.6%. 3D-conformal planning with CT or 18F-choline PET resulted in comparable doses to the rectal wall. Choline PET signals of the prostate provide adequate spatial information amenable to standardized asymmetrical region-growing algorithms for PET-based target volume definition for external beam RT.
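    A minimal sketch of fixed-percentage threshold segmentation with background subtraction, in the spirit of the volume definition above; the 40% fraction and the use of the volume median as background are placeholders, not the direction-dependent thresholds reported in the paper.

```python
import numpy as np

def threshold_segment(pet_volume, background, fraction=0.40):
    """Segment voxels whose background-subtracted uptake exceeds a fixed
    fraction of the background-subtracted maximum uptake."""
    corrected = pet_volume.astype(np.float32) - background
    return corrected >= fraction * corrected.max()

pet = np.random.rand(32, 32, 32) * 10.0            # toy uptake volume
mask = threshold_segment(pet, background=np.median(pet))
```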

  17. LC-lens array with light field algorithm for 3D biomedical applications

    NASA Astrophysics Data System (ADS)

    Huang, Yi-Pai; Hsieh, Po-Yuan; Hassanfiroozi, Amir; Martinez, Manuel; Javidi, Bahram; Chu, Chao-Yu; Hsuan, Yun; Chu, Wen-Chun

    2016-03-01

    In this paper, a liquid crystal lens (LC-lens) array was utilized in 3D bio-medical applications, including a 3D endoscope and a light-field microscope. Compared with a conventional plastic lens array, which is usually placed in a 3D endoscope or light-field microscope system to record image disparity, our LC-lens array offers the flexibility of electrically changing its focal length. By using the LC-lens array, the working distance and image quality of the 3D endoscope and microscope can be enhanced. Furthermore, 2D/3D switching can be achieved by turning the electrical power on the LC-lens array off or on. In the 3D endoscope case, a hexagonal micro LC-lens array with a 350 um lens diameter was placed at the front end of a 1 mm diameter endoscope. With an electric field applied to the LC-lens array, the 3D specimen is recorded as if from seven micro-cameras with different disparities, from which the 3D structure of the specimen can be calculated. On the other hand, if the electric field is turned off, a conventional high-resolution 2D endoscope image is recorded. In the light-field microscope case, the LC-lens array was placed in front of the CMOS sensor. The main purpose of the LC-lens array is to extend the refocusing distance of the light-field microscope, which is usually very narrow in a focused light-field microscope system, by montaging many light-field images sequentially focused at different depths. By adjusting the focal length of the LC-lens array from 2.4 mm to 2.9 mm, the refocusing distance was extended from 1 mm to 11.3 mm. Moreover, an LC wedge can be used to electrically shift the optical axis and increase the resolution of the light field.

  18. An improved independent component analysis model for 3D chromatogram separation and its solution by multi-areas genetic algorithm

    PubMed Central

    2014-01-01

    Background The 3D chromatogram generated by High Performance Liquid Chromatography-Diode Array Detector (HPLC-DAD) has been researched widely in the fields of herbal medicine, grape wine, agriculture, petroleum and so on. Currently, most of the methods used for separating a 3D chromatogram need to know the number of compounds in advance, which can be impossible, especially when the compounds are complex or white noise exists. A new method that extracts compounds directly from the 3D chromatogram is needed. Methods In this paper, a new separation model named parallel Independent Component Analysis constrained by Reference Curve (pICARC) was proposed to transform the separation problem into a multi-parameter optimization issue. It was not necessary to know the number of compounds in the optimization. In order to find all the solutions, an algorithm named multi-areas Genetic Algorithm (mGA) was proposed, where multiple areas of candidate solutions were constructed according to the fitness and the distances among the chromosomes. Results Simulations and experiments on a real-life HPLC-DAD data set were used to demonstrate our method and its effectiveness. The simulations show that our method can separate a 3D chromatogram into chromatographic peaks and spectra successfully, even when they severely overlap. The experiments also show that our method is effective on a real HPLC-DAD data set. Conclusions Our method can separate a 3D chromatogram successfully, quickly and effectively, without knowing the number of compounds in advance. PMID:25474487

  19. A novel iris segmentation algorithm based on small eigenvalue analysis

    NASA Astrophysics Data System (ADS)

    Harish, B. S.; Aruna Kumar, S. V.; Guru, D. S.; Ngo, Minh Ngoc

    2015-12-01

    In this paper, a simple and robust algorithm is proposed for iris segmentation. The proposed method consists of two steps. In the first step, the iris and pupil are segmented using the Robust Spatial Kernel FCM (RSKFCM) algorithm. RSKFCM is based on the traditional Fuzzy c-Means (FCM) algorithm, incorporates spatial information, and uses a kernel metric as the distance measure. In the second step, the small eigenvalue transformation is applied to localize the iris boundary. The transformation is based on statistical and geometrical properties of the small eigenvalue of the covariance matrix of a set of edge pixels. Extensive experiments are carried out on standard benchmark iris datasets (viz., CASIA-IrisV4 and UBIRIS.v2), and the proposed method is compared with existing iris segmentation methods. The proposed method has the lowest time complexity, O(n(i+p)). The experimental results emphasize that the proposed algorithm outperforms the existing iris segmentation methods.
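    The small-eigenvalue idea in the second step can be illustrated with a short sketch: for a set of edge-pixel coordinates, the smallest eigenvalue of their 2x2 covariance matrix is near zero for nearly collinear points and grows for curved boundaries such as the circular iris. The edge detector and the grouping of pixels into candidate boundary segments are assumptions here.

```python
import numpy as np

def small_eigenvalue(edge_points):
    """Smallest eigenvalue of the 2x2 covariance matrix of edge-pixel coordinates.

    Near zero when the points are almost collinear; larger for curved
    structures such as the iris boundary."""
    pts = np.asarray(edge_points, dtype=np.float32)
    cov = np.cov(pts, rowvar=False)          # 2x2 covariance of coordinates
    return np.linalg.eigvalsh(cov)[0]        # eigenvalues come back in ascending order

line_points = [(i, 2 * i + 1) for i in range(20)]                        # collinear
arc_points = [(10 * np.cos(t), 10 * np.sin(t)) for t in np.linspace(0, np.pi, 20)]
print(small_eigenvalue(line_points), small_eigenvalue(arc_points))
```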

  20. A region growing vessel segmentation algorithm based on spectrum information.

    PubMed

    Jiang, Huiyan; He, Baochun; Fang, Di; Ma, Zhiyuan; Yang, Benqiang; Zhang, Libo

    2013-01-01

    We propose a region-growing vessel segmentation algorithm based on spectrum information. First, the algorithm applies a Fourier transform to the region of interest containing vascular structures to obtain its spectrum information, from which the primary feature direction is extracted. Then, edge information is combined with the primary feature direction to compute the vascular structure's center points, which serve as the seed points for region-growing segmentation. Finally, an improved region-growing method with a branch-based growth strategy is used to segment the vessels. To prove the effectiveness of the algorithm, experiments were performed on retinal images and abdominal liver vascular CT images. The results show that the proposed vessel segmentation algorithm not only extracts the high-quality target vessel region, but also effectively reduces manual intervention.
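    A generic intensity-based region-growing sketch from seed points is shown below; the spectrum-based primary-direction analysis and the branch-based growth strategy that the paper adds are not reproduced, and the intensity tolerance is an assumption.

```python
import numpy as np
from collections import deque

def region_grow(image, seeds, tol=0.1):
    """Grow a region from seed pixels, adding 4-connected neighbours whose
    intensity differs from the seed mean by less than `tol`."""
    mask = np.zeros(image.shape, dtype=bool)
    seed_mean = np.mean([image[s] for s in seeds])
    queue = deque(seeds)
    for s in seeds:
        mask[s] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                    and not mask[nr, nc]
                    and abs(image[nr, nc] - seed_mean) < tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

img = np.random.rand(64, 64)
vessel_mask = region_grow(img, seeds=[(32, 32)], tol=0.2)
```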

  1. A fast 3-D object recognition algorithm for the vision system of a special-purpose dexterous manipulator

    NASA Technical Reports Server (NTRS)

    Hung, Stephen H. Y.

    1989-01-01

    A fast 3-D object recognition algorithm that can be used as a quick-look subsystem to the vision system for the Special-Purpose Dexterous Manipulator (SPDM) is described. Global features that can be easily computed from range data are used to characterize the images of a viewer-centered model of an object. This algorithm will speed up the processing by eliminating the low level processing whenever possible. It may identify the object, reject a set of bad data in the early stage, or create a better environment for a more powerful algorithm to carry the work further.

  2. An implicit dispersive transport algorithm for the US Geological Survey MOC3D solute-transport model

    USGS Publications Warehouse

    Kipp, K.L.; Konikow, L.F.; Hornberger, G.Z.

    1998-01-01

    This report documents an extension to the U.S. Geological Survey MOC3D transport model that incorporates an implicit-in-time difference approximation for the dispersive transport equation, including source/sink terms. The original MOC3D transport model (Version 1) uses the method of characteristics to solve the transport equation on the basis of the velocity field. The original MOC3D solution algorithm incorporates particle tracking to represent advective processes and an explicit finite-difference formulation to calculate dispersive fluxes. The new implicit procedure eliminates several stability criteria required for the previous explicit formulation. This allows much larger transport time increments to be used in dispersion-dominated problems. The decoupling of advective and dispersive transport in MOC3D, however, is unchanged. With the implicit extension, the MOC3D model is upgraded to Version 2. A description of the numerical method of the implicit dispersion calculation, the data-input requirements and output options, and the results of simulator testing and evaluation are presented. Version 2 of MOC3D was evaluated for the same set of problems used for verification of Version 1. These test results indicate that the implicit calculation of Version 2 matches the accuracy of Version 1, yet is more efficient than the explicit calculation for transport problems that are characterized by a grid Peclet number less than about 1.0.

  3. Experimental validation of improved 3D SBP positioning algorithm in PET applications using UW Phase II Board

    NASA Astrophysics Data System (ADS)

    Jorge, L. S.; Bonifacio, D. A. B.; DeWitt, Don; Miyaoka, R. S.

    2016-12-01

    Continuous scintillator-based detectors have been considered as a competitive and cheaper approach than highly pixelated discrete crystal positron emission tomography (PET) detectors, despite the need for algorithms to estimate 3D gamma interaction position. In this work, we report on the implementation of a positioning algorithm to estimate the 3D interaction position in a continuous crystal PET detector using a Field Programmable Gate Array (FPGA). The evaluated method is the Statistics-Based Processing (SBP) technique that requires light response function and event position characterization. An algorithm has been implemented using the Verilog language and evaluated using a data acquisition board that contains an Altera Stratix III FPGA. The 3D SBP algorithm was previously successfully implemented on a Stratix II FPGA using simulated data and a different module design. In this work, improvements were made to the FPGA coding of the 3D positioning algorithm, reducing the total memory usage to around 34%. Further the algorithm was evaluated using experimental data from a continuous miniature crystal element (cMiCE) detector module. Using our new implementation, average FWHM (Full Width at Half Maximum) for the whole block is 1.71±0.01 mm, 1.70±0.01 mm and 1.632±0.005 mm for x, y and z directions, respectively. Using a pipelined architecture, the FPGA is able to process 245,000 events per second for interactions inside of the central area of the detector that represents 64% of the total block area. The weighted average of the event rate by regional area (corner, border and central regions) is about 198,000 events per second. This event rate is greater than the maximum expected coincidence rate for any given detector module in future PET systems using the cMiCE detector design.

  4. Automated segment matching algorithm-theory, test, and evaluation

    NASA Technical Reports Server (NTRS)

    Kalcic, M. T. (Principal Investigator)

    1982-01-01

    Results of automating the U.S. Department of Agriculture's process of segment shifting to within one-half-pixel accuracy are presented. Given an initial registration, the digitized segment is shifted until a more precise fit to the LANDSAT data is found. The algorithm automates the shifting process and performs certain tests for matching and accepting the computed shift numbers. Results indicate the algorithm can obtain results within one-half-pixel accuracy.

  5. Robust and accurate star segmentation algorithm based on morphology

    NASA Astrophysics Data System (ADS)

    Jiang, Jie; Lei, Liu; Guangjun, Zhang

    2016-06-01

    A star tracker is an important instrument for measuring a spacecraft's attitude; it measures the attitude by matching the stars captured by a camera against those stored in a star database, whose directions are known. The attitude accuracy of a star tracker is mainly determined by star centroiding accuracy, which in turn requires complete star segmentation. Current star segmentation algorithms cannot suppress the different interferences present in star images and therefore cannot segment stars completely. To solve this problem, a new star target segmentation algorithm is proposed on the basis of mathematical morphology. The proposed algorithm utilizes the margin structuring element to detect small targets and the opening operation to suppress noise, and a modified top-hat transform is defined to extract stars. A combination of three different structuring elements is utilized to define the new star segmentation algorithm, and the influence of the three structuring elements on the star segmentation results is analyzed. Experimental results show that the proposed algorithm can suppress different interferences and segment stars completely, thus providing high star centroiding accuracy.
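    As a rough sketch of the morphological core of such methods (not the paper's specific combination of structuring elements), a white top-hat transform followed by thresholding and connected-component labelling can extract point-like stars from a slowly varying background; the structuring-element size and the mean + 3*sigma threshold are assumptions.

```python
import numpy as np
from scipy import ndimage

def extract_stars(image, tophat_size=5, k=3.0):
    """Suppress slowly varying background with a white top-hat transform,
    then threshold at mean + k*std and label connected components."""
    tophat = ndimage.white_tophat(image.astype(np.float32), size=tophat_size)
    mask = tophat > tophat.mean() + k * tophat.std()
    labels, n_stars = ndimage.label(mask)      # each component is a star candidate
    return mask, labels, n_stars

# Toy star image: dim background gradient plus a few bright points
img = np.tile(np.linspace(0, 0.2, 128), (128, 1))
for r, c in [(20, 30), (64, 64), (100, 90)]:
    img[r, c] = 1.0
mask, labels, n = extract_stars(img)
```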

  6. [An automatic extraction algorithm for individual tree crown projection area and volume based on 3D point cloud data].

    PubMed

    Xu, Wei-Heng; Feng, Zhong-Ke; Su, Zhi-Fang; Xu, Hui; Jiao, You-Quan; Deng, Ou

    2014-02-01

    Tree crown projection area and crown volume are important parameters for the estimation of biomass, tridimensional green biomass and other forestry science applications. Conventional measurements of tree crown projection area and crown volume produce large errors in practical situations involving complicated tree crown structures or differing morphological characteristics, and their accuracy is difficult to measure and validate through conventional means. To address these practical problems and to allow tree crown projection area and crown volume to be extracted automatically by a computer program, this paper proposes an automatic, non-contact measurement based on a terrestrial three-dimensional laser scanner (FARO Photon120), using a convex hull algorithm on plane-scattered data points and a slice segmentation and accumulation algorithm to calculate the tree crown projection area. It is implemented in VC++ 6.0 and Matlab 7.0. The experiments were carried out on 22 common tree species of Beijing, China. The results show that the correlation coefficient of crown projection area between Av, calculated by the new method, and A4, obtained by the conventional method, reaches 0.964 (p<0.01), and that the correlation coefficient of tree crown volume between V(VC), derived from the new method, and V(C), from the formula of a regular body, is 0.960 (p<0.001). The results also show that the average of V(C) is smaller than that of V(VC) by 8.03%, and the average of A4 is larger than that of A(V) by 25.5%. Assuming Av and V(VC) as true values, the deviations of the new method could be attributed to the irregularity of the crowns' silhouettes. Different morphological characteristics of tree crowns led to measurement error in simple forest plot surveys. Based on the results, the paper proposes that: (1) the use of eight-point or sixteen-point projection with
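    A minimal sketch of the convex-hull part of the computation: project the crown point cloud onto the ground plane and take the area of the 2D convex hull of the projected points. The slice segmentation and accumulation used for the crown volume is not shown, and the synthetic point cloud is only for illustration.

```python
import numpy as np
from scipy.spatial import ConvexHull

def crown_projection_area(points_xyz):
    """Project a crown point cloud onto the ground plane and return the
    area of the 2D convex hull of the projected (x, y) points."""
    xy = np.asarray(points_xyz, dtype=np.float64)[:, :2]
    hull = ConvexHull(xy)
    return hull.volume          # for 2D input, ConvexHull.volume is the enclosed area

# Toy crown: random points in a rough ellipsoid (metres)
pts = np.random.randn(500, 3) * np.array([2.0, 1.5, 3.0])
area = crown_projection_area(pts)
```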

  7. Comparison of 3D-OP-OSEM and 3D-FBP reconstruction algorithms for High-Resolution Research Tomograph studies: effects of randoms estimation methods

    NASA Astrophysics Data System (ADS)

    van Velden, Floris H. P.; Kloet, Reina W.; van Berckel, Bart N. M.; Wolfensberger, Saskia P. A.; Lammertsma, Adriaan A.; Boellaard, Ronald

    2008-06-01

    The High-Resolution Research Tomograph (HRRT) is a dedicated human brain positron emission tomography (PET) scanner. Recently, a 3D filtered backprojection (3D-FBP) reconstruction method has been implemented to reduce bias in short duration frames, currently observed in 3D ordinary Poisson OSEM (3D-OP-OSEM) reconstructions. Further improvements might be expected using a new method of variance reduction on randoms (VRR) based on coincidence histograms instead of using the delayed window technique (DW) to estimate randoms. The goal of this study was to evaluate VRR in combination with 3D-OP-OSEM and 3D-FBP reconstruction techniques. To this end, several phantom studies and a human brain study were performed. For most phantom studies, 3D-OP-OSEM showed higher accuracy of observed activity concentrations with VRR than with DW. However, both positive and negative deviations in reconstructed activity concentrations and large biases of grey to white matter contrast ratio (up to 88%) were still observed as a function of scan statistics. Moreover 3D-OP-OSEM+VRR also showed bias up to 64% in clinical data, i.e. in some pharmacokinetic parameters as compared with those obtained with 3D-FBP+VRR. In the case of 3D-FBP, VRR showed similar results as DW for both phantom and clinical data, except that VRR showed a better standard deviation of 6-10%. Therefore, VRR should be used to correct for randoms in HRRT PET studies.

  8. Accuracy of volume measurement using 3D ultrasound and development of CT-3D US image fusion algorithm for prostate cancer radiotherapy

    SciTech Connect

    Baek, Jihye; Huh, Jangyoung; Hyun An, So; Oh, Yoonjin; Kim, Myungsoo; Kim, DongYoung; Chung, Kwangzoo; Cho, Sungho; Lee, Rena

    2013-02-15

    Purpose: To evaluate the accuracy of measuring volumes using three-dimensional ultrasound (3D US), and to verify the feasibility of replacing CT-MR fusion images with CT-3D US in radiotherapy treatment planning. Methods: Phantoms, consisting of water, contrast agent, and agarose, were manufactured. The volume was measured using 3D US, CT, and MR devices. CT-3D US and MR-3D US image fusion software was developed using the Insight Toolkit library in order to acquire three-dimensional fusion images. The quality of the image fusion was evaluated using metric values and fusion images. Results: Volume measurement using 3D US shows a 2.8 ± 1.5% error, compared with a 4.4 ± 3.0% error for CT and a 3.1 ± 2.0% error for MR. The results imply that volume measurement using the 3D US devices has a similar accuracy level to that of CT and MR. Three-dimensional image fusion of CT-3D US and MR-3D US was successfully performed using phantom images. Moreover, MR-3D US image fusion was performed using human bladder images. Conclusions: 3D US could be used in the volume measurement of human bladders and prostates. CT-3D US image fusion could be used in monitoring the target position in each fraction of external beam radiation therapy. Moreover, the feasibility of replacing CT-MR image fusion with CT-3D US in radiotherapy treatment planning was verified.

  9. Algorithms For Segmentation Of Complex-Amplitude SAR Data

    NASA Technical Reports Server (NTRS)

    Rignot, Eric J. M.; Chellappa, Ramalingam

    1993-01-01

    Several algorithms implement an improved method for segmenting highly speckled, high-resolution, complex-amplitude synthetic-aperture-radar (SAR) digitized images into regions within which the backscattering characteristics are similar or homogeneous from place to place. The method provides an approximate, deterministic solution through two alternative algorithms that almost always converge to local minima: the Iterative Conditional Modes (ICM) algorithm, which locally maximizes the posterior probability density of the region labels, and the Maximum Posterior Marginal (MPM) algorithm, which maximizes the posterior marginal density of the region labels at each pixel location. The ICM algorithm optimizes the reconstruction of the underlying scene, while the MPM algorithm minimizes the expected number of misclassified pixels, which may be preferable in remote sensing of natural scenes.
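    As a toy illustration of ICM (with a plain Gaussian data term and a Potts smoothness prior rather than the complex-amplitude SAR statistics used in the paper), each pixel's label is repeatedly reset to the value that minimizes the local data cost plus a penalty for disagreeing 4-neighbours; sigma, beta and the class means are assumptions.

```python
import numpy as np

def icm_segment(image, means, sigma=0.1, beta=1.5, n_iter=5):
    """Iterated Conditional Modes for a simple multi-class segmentation.

    At each pixel, pick the label minimising the Gaussian data term plus
    beta times the number of disagreeing 4-neighbours (Potts prior)."""
    labels = np.argmin([(image - m) ** 2 for m in means], axis=0)
    rows, cols = image.shape
    for _ in range(n_iter):
        for r in range(rows):
            for c in range(cols):
                costs = []
                for k, m in enumerate(means):
                    data = (image[r, c] - m) ** 2 / (2 * sigma ** 2)
                    neigh = [labels[rr, cc] for rr, cc in
                             ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                             if 0 <= rr < rows and 0 <= cc < cols]
                    prior = beta * sum(1 for lab in neigh if lab != k)
                    costs.append(data + prior)
                labels[r, c] = int(np.argmin(costs))
    return labels

# Two-region toy image with additive noise
img = np.concatenate([np.full((16, 16), 0.2), np.full((16, 16), 0.8)], axis=1)
img += 0.05 * np.random.randn(*img.shape)
seg = icm_segment(img, means=[0.2, 0.8])
```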

  10. Segmentation of thermographic images of hands using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Ghosh, Payel; Mitchell, Melanie; Gold, Judith

    2010-01-01

    This paper presents a new technique for segmenting thermographic images using a genetic algorithm (GA). The individuals of the GA also known as chromosomes consist of a sequence of parameters of a level set function. Each chromosome represents a unique segmenting contour. An initial population of segmenting contours is generated based on the learned variation of the level set parameters from training images. Each segmenting contour (an individual) is evaluated for its fitness based on the texture of the region it encloses. The fittest individuals are allowed to propagate to future generations of the GA run using selection, crossover and mutation. The dataset consists of thermographic images of hands of patients suffering from upper extremity musculo-skeletal disorders (UEMSD). Thermographic images are acquired to study the skin temperature as a surrogate for the amount of blood flow in the hands of these patients. Since entire hands are not visible on these images, segmentation of the outline of the hands on these images is typically performed by a human. In this paper several different methods have been tried for segmenting thermographic images: Gabor-wavelet-based texture segmentation method, the level set method of segmentation and our GA which we termed LSGA because it combines level sets with genetic algorithms. The results show a comparative evaluation of the segmentation performed by all the methods. We conclude that LSGA successfully segments entire hands on images in which hands are only partially visible.

  11. Computer-aided classification of liver tumors in 3D ultrasound images with combined deformable model segmentation and support vector machine

    NASA Astrophysics Data System (ADS)

    Lee, Myungeun; Kim, Jong Hyo; Park, Moon Ho; Kim, Ye-Hoon; Seong, Yeong Kyeong; Cho, Baek Hwan; Woo, Kyoung-Gu

    2014-03-01

    In this study, we propose a computer-aided classification scheme of liver tumor in 3D ultrasound by using a combination of deformable model segmentation and support vector machine. For segmentation of tumors in 3D ultrasound images, a novel segmentation model was used which combined edge, region, and contour smoothness energies. Then four features were extracted from the segmented tumor including tumor edge, roundness, contrast, and internal texture. We used a support vector machine for the classification of features. The performance of the developed method was evaluated with a dataset of 79 cases including 20 cysts, 20 hemangiomas, and 39 hepatocellular carcinomas, as determined by the radiologist's visual scoring. Evaluation of the results showed that our proposed method produced tumor boundaries that were equal to or better than acceptable in 89.8% of cases, and achieved 93.7% accuracy in classification of cyst and hemangioma.

  12. MDL constrained 3-D grayscale skeletonization algorithm for automated extraction of dendrites and spines from fluorescence confocal images.

    PubMed

    Yuan, Xiaosong; Trachtenberg, Joshua T; Potter, Steve M; Roysam, Badrinath

    2009-12-01

    This paper presents a method for improved automatic delineation of dendrites and spines from three-dimensional (3-D) images of neurons acquired by confocal or multi-photon fluorescence microscopy. The core advance presented here is a direct grayscale skeletonization algorithm that is constrained by a structural complexity penalty using the minimum description length (MDL) principle, and additional neuroanatomy-specific constraints. The 3-D skeleton is extracted directly from the grayscale image data, avoiding errors introduced by image binarization. The MDL method achieves a practical tradeoff between the complexity of the skeleton and its coverage of the fluorescence signal. Additional advances include the use of 3-D spline smoothing of dendrites to improve spine detection, and graph-theoretic algorithms to explore and extract the dendritic structure from the grayscale skeleton using an intensity-weighted minimum spanning tree (IW-MST) algorithm. This algorithm was evaluated on 30 datasets organized in 8 groups from multiple laboratories. Spines were detected with false negative rates less than 10% on most datasets (the average is 7.1%), and the average false positive rate was 11.8%. The software is available in open source form.

  13. Computerized lung nodule detection using 3D feature extraction and learning based algorithms.

    PubMed

    Ozekes, Serhat; Osman, Onur

    2010-04-01

    In this paper, a Computer Aided Detection (CAD) system based on three-dimensional (3D) feature extraction is introduced to detect lung nodules. First, an eight-directional search was applied in order to extract regions of interest (ROIs). Then, 3D feature extraction was performed, which includes 3D connected component labeling, straightness calculation, thickness calculation, determination of the middle slice, vertical and horizontal width calculation, regularity calculation, and calculation of vertical and horizontal black pixel ratios. To make a decision for each ROI, feed-forward neural networks (NN), support vector machines (SVM), naive Bayes (NB) and logistic regression (LR) methods were used. These methods were trained and tested via k-fold cross validation, and the results were compared. To test the performance of the proposed system, 11 cases taken from the Lung Image Database Consortium (LIDC) dataset were used. ROC curves are given for all methods, and 100% detection sensitivity was reached with all methods except naive Bayes.
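    A sketch of how the four classifier types could be compared with k-fold cross validation in scikit-learn; the synthetic feature matrix stands in for the 3D ROI features described above, and the feature count, fold count and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for per-ROI 3D feature vectors and nodule/non-nodule labels
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 9))                       # 9 features per ROI (assumed)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

classifiers = {
    "NN":  MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "SVM": SVC(kernel='rbf', gamma='scale'),
    "NB":  GaussianNB(),
    "LR":  LogisticRegression(max_iter=1000),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)       # 5-fold cross validation
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```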

  14. The optimizations of CGH generation algorithms based on multiple GPUs for 3D dynamic holographic display

    NASA Astrophysics Data System (ADS)

    Yang, Dan; Liu, Juan; Zhang, Yingxi; Li, Xin; Wang, Yongtian

    2016-10-01

    Holographic display has been considered a promising display technology. Currently, the low-speed generation of holograms with large holographic data is one of the crucial bottlenecks for three-dimensional (3D) dynamic holographic display. To solve this problem, an acceleration method on a multi-GPU computation platform is presented, based on the look-up-table point-source method. The generation of computer-generated holograms (CGHs) is sped up by offline file loading and inline calculation optimization, whereby a pure-phase CGH with gigabytes of data is encoded to record an object with 10 MB of sampling data. Both numerical simulation and optical experiment demonstrate that CGHs with 1920×1080 resolution generated by the proposed method can be successfully applied to high-quality 3D object reconstruction. It is believed that CGHs with huge data volumes can be generated at high speed by the proposed method for 3D dynamic holographic display in the near future.

  15. SU-E-T-793: Validation of COMPASS 3D Dosimetry as Pre Treatment Verification with Commercial TPS Algorithms

    SciTech Connect

    Vikraman, S; Ramu, M; Karrthick, Kp; Rajesh, T; Senniandavar, V; Sambasivaselli, R; Maragathaveni, S; Dhivya, N; Tejinder, K; Manigandan, D; Muthukumaran, M

    2015-06-15

    Purpose: The purpose of this study was to validate COMPASS 3D dosimetry as a routine pre-treatment verification tool against the commercially available CMS Monaco and Oncentra Masterplan planning systems. Methods: Twenty esophagus patients were selected for this study. All these patients underwent radical VMAT treatment on an Elekta linac, and plans were generated in Monaco v5.0 with the Monte Carlo (MC) dose calculation algorithm. COMPASS 3D dosimetry uses an advanced collapsed-cone convolution (CCC) dose calculation algorithm. To validate the CCC algorithm in COMPASS, the DICOM RT plans generated using the Monaco MC algorithm were transferred to the Oncentra Masterplan v4.3 TPS, where only final dose calculations, without optimization, were performed using its CCC algorithm. The MC algorithm is known to be accurate, and some difference between the MC and CCC algorithms is expected; hence, the CCC algorithm in COMPASS was validated against another commercially available CCC algorithm. To use CCC as a pre-treatment verification tool with reference to MC-generated treatment plans, CCC in OMP and CCC in COMPASS were compared using dose-volume indices such as D98 and D95 for target volumes and OAR doses. Results: The point doses for open beams were within 1% of the Monaco MC algorithm. Comparison of CCC (OMP) versus CCC (COMPASS) showed mean differences of 1.82%±1.12SD and 1.65%±0.67SD for D98 and D95, respectively, for target coverage. A maximum point-dose difference of −2.15%±0.60SD was observed in the target volume. A mean lung dose difference of −2.68%±1.67SD was noticed between OMP and COMPASS. The maximum point-dose difference for the spinal cord was −1.82%±0.287SD. Conclusion: In this study, the accuracy of the CCC algorithm in COMPASS 3D dosimetry was validated by comparison with the CCC algorithm in the OMP TPS. Dose calculation in COMPASS agrees to within 2% with commercially available TPS algorithms.

  16. An ant colony optimisation algorithm for the 2D and 3D hydrophobic polar protein folding problem

    PubMed Central

    Shmygelska, Alena; Hoos, Holger H

    2005-01-01

    Background The protein folding problem is a fundamental problem in computational molecular biology and biochemical physics. Various optimisation methods have been applied to formulations of the ab-initio folding problem that are based on reduced models of protein structure, including Monte Carlo methods, Evolutionary Algorithms, Tabu Search and hybrid approaches. In our work, we have introduced an ant colony optimisation (ACO) algorithm to address the non-deterministic polynomial-time hard (NP-hard) combinatorial problem of predicting a protein's conformation from its amino acid sequence under a widely studied, conceptually simple model – the 2-dimensional (2D) and 3-dimensional (3D) hydrophobic-polar (HP) model. Results We present an improvement of our previous ACO algorithm for the 2D HP model and its extension to the 3D HP model. We show that this new algorithm, dubbed ACO-HPPFP-3, performs better than previous state-of-the-art algorithms on sequences whose native conformations do not contain structural nuclei (parts of the native fold that predominantly consist of local interactions) at the ends, but rather in the middle of the sequence, and that it generally finds a more diverse set of native conformations. Conclusions The application of ACO to this bioinformatics problem compares favourably with specialised, state-of-the-art methods for the 2D and 3D HP protein folding problem; our empirical results indicate that our rather simple ACO algorithm scales worse with sequence length but usually finds a more diverse ensemble of native states. Therefore, the development of ACO algorithms for more complex and realistic models of protein structure holds significant promise. PMID:15710037
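    To make the optimisation target concrete, the sketch below evaluates the standard HP-model energy of a 2D lattice conformation: -1 for every pair of H residues that are lattice neighbours but not consecutive in the chain. The ACO construction of conformations and the local search are not shown; the example sequence and fold are made up.

```python
def hp_energy(sequence, coords):
    """Energy of a 2D HP-model conformation.

    sequence: string of 'H'/'P' residues.
    coords:   list of (x, y) lattice positions, one per residue.
    Returns -1 per pair of H residues that are lattice neighbours but not
    adjacent in the chain, or None if the conformation is not self-avoiding."""
    if len(set(coords)) != len(coords):
        return None                                   # overlapping residues: invalid fold
    pos = {c: i for i, c in enumerate(coords)}
    energy = 0
    for i, (x, y) in enumerate(coords):
        if sequence[i] != 'H':
            continue
        for nb in ((x + 1, y), (x, y + 1)):           # each neighbour pair counted once
            j = pos.get(nb)
            if j is not None and sequence[j] == 'H' and abs(i - j) > 1:
                energy -= 1
    return energy

seq = "HPHPPHHPHH"
conf = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (3, 0)]
print(hp_energy(seq, conf))
```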

  17. Imaging bacterial 3D motion using digital in-line holographic microscopy and correlation-based de-noising algorithm

    PubMed Central

    Molaei, Mehdi; Sheng, Jian

    2014-01-01

    Abstract: A better understanding of bacteria-environment interactions in the context of biofilm formation requires accurate 3-dimensional measurements of bacterial motility. Digital Holographic Microscopy (DHM) has demonstrated its capability in resolving the 3D distribution and mobility of particulates in a dense suspension. Due to their low scattering efficiency, bacteria are substantially more difficult to image with DHM. In this paper, we introduce a novel correlation-based de-noising algorithm to remove the background noise and enhance the quality of the hologram. Implemented in conjunction with DHM, we demonstrate that the method allows DHM to resolve 3-D E. coli bacteria locations in a dense suspension (>10^7 cells/ml) with submicron resolution (<0.5 µm) over a substantial depth, and to obtain thousands of 3D cell trajectories. PMID:25607177

  18. 3D reconstruction of the adjacent structures of the hip joint by multi-structure segmentation using triangular surface meshes

    NASA Astrophysics Data System (ADS)

    Meghoufel, Brahim

    A new 3D reconstruction technique for the two adjacent structures forming the hip joint from 3D CT-scan images has been developed. The femoral head and the acetabulum are reconstructed using a 3D multi-structure segmentation method for adjacent surfaces, based on 3D triangular surface meshes. The method begins with a preliminary hierarchical segmentation of the two structures, using one triangular mesh for each structure. The two resulting 3D meshes of the hierarchical segmentation are deployed onto two planar 2D surfaces: umbrella deployment is used for the femoral head mesh and 3D/2D parameterization for the acetabulum mesh. The two generated planar surfaces are used to deploy the CT-scan volume around each structure, so that the surface of each structure is nearly planar in the corresponding deployed volume. An iterative minimal-surface method then ensures the optimal identification of both sought surfaces from the deployed volumes. The last step of the 3D reconstruction method detects and corrects the overlap between the two structures. This 3D reconstruction method has been validated using a database of 10 3D CT-scan images, and the results appear satisfactory. The precision errors of the 3D reconstructions have been quantified by comparison with an available manual gold standard. The resulting errors are better than those reported in the literature; the mean error is 0.83 ± 0.25 mm for the acetabulum and 0.70 ± 0.17 mm for the femoral head. The mean execution time of the 3D reconstruction of the two structures forming the hip joint is approximately 3.0 ± 0.3 min. The proposed method shows the potential of image-processing solutions to assist surgeons in their routine tasks. Such a method can be applied to any imaging modality.

  19. Diagnostic algorithm: how to make use of new 2D, 3D and 4D ultrasound technologies in breast imaging.

    PubMed

    Weismann, C F; Datz, L

    2007-11-01

    The aim of this publication is to present a time saving diagnostic algorithm consisting of two-dimensional (2D), three-dimensional (3D) and four-dimensional (4D) ultrasound (US) technologies. This algorithm of eight steps combines different imaging modalities and render modes which allow a step by step analysis of 2D, 3D and 4D diagnostic criteria. Advanced breast US systems with broadband high frequency linear transducers, full digital data management and high resolution are the actual basis for two-dimensional breast US studies in order to detect early breast cancer (step 1). The continuous developments of 2D US technologies including contrast resolution imaging (CRI) and speckle reduction imaging (SRI) have a direct influence on the high quality of three-dimensional and four-dimensional presentation of anatomical breast structures and pathological details. The diagnostic options provided by static 3D volume datasets according to US BI-RADS analogue assessment, concerning lesion shape, orientation, margin, echogenic rim sign, lesion echogenicity, acoustic transmission, associated calcifications, 3D criteria of the coronal plane, surrounding tissue composition (step 2) and lesion vascularity (step 6) are discussed. Static 3D datasets offer the combination of long axes distance measurements and volume calculations, which are the basis for an accurate follow-up in BI-RADS II and BI-RADS III lesions (step 3). Real time 4D volume contrast imaging (VCI) is able to demonstrate tissue elasticity (step 5). Glass body rendering is a static 3D tool which presents greyscale and colour information to study the vascularity and the vascular architecture of a lesion (step 6). Tomographic ultrasound imaging (TUI) is used for a slice by slice documentation in different investigation planes (A-,B- or C-plane) (steps 4 and 7). The final step 8 uses the panoramic view technique (XTD-View) to document the localisation within the breast and to make the position of a lesion simply

  20. Segmentation algorithms for ear image data towards biomechanical studies.

    PubMed

    Ferreira, Ana; Gentil, Fernanda; Tavares, João Manuel R S

    2014-01-01

    In recent years, the segmentation, i.e. the identification, of ear structures in video-otoscopy, computerised tomography (CT) and magnetic resonance (MR) image data has gained significant importance in the medical imaging area, particularly for CT and MR imaging. Segmentation is the fundamental step of any automated technique for supporting medical diagnosis and, in particular, in biomechanics studies, for building realistic geometric models of ear structures. In this paper, a review of the algorithms used in ear segmentation is presented. The review includes an introduction to the usual biomechanical modelling approaches and to the common imaging modalities. Afterwards, several segmentation algorithms for ear image data are described, and their specificities and difficulties, as well as their advantages and disadvantages, are identified and analysed using experimental examples. Finally, conclusions are presented together with a discussion of possible trends for future research concerning ear segmentation.

  1. An enhanced fast scanning algorithm for image segmentation

    NASA Astrophysics Data System (ADS)

    Ismael, Ahmed Naser; Yusof, Yuhanis binti

    2015-12-01

    Segmentation is an essential and important process that separates an image into regions that have similar characteristics or features, transforming the image for better analysis and evaluation. An important benefit of segmentation is the identification of the region of interest in a particular image. Various algorithms have been proposed for image segmentation, including the Fast Scanning algorithm, which has been employed on food, sport and medical images. It scans all pixels in the image and clusters each pixel according to its upper and left neighbor pixels. The clustering process in the Fast Scanning algorithm is performed by merging pixels with similar neighbors based on an identified threshold. Such an approach leads to weak reliability and poor shape matching of the produced segments. This paper proposes an adaptive threshold function to be used in the clustering process of the Fast Scanning algorithm. This function uses the gray value and variance of the image's pixels; pixel levels above the threshold are converted into intensity values between 0 and 1, while the remaining values are set to zero. The proposed enhanced Fast Scanning algorithm is evaluated on images of public and private transportation in Iraq, and the produced images are compared with those of the standard Fast Scanning algorithm. The results show that the proposed algorithm is faster than standard Fast Scanning.

  2. Interactive 3-D graphics workstations in stereotaxy: clinical requirements, algorithms, and solutions

    NASA Astrophysics Data System (ADS)

    Ehricke, Hans-Heino; Daiber, Gerhard; Sonntag, Ralf; Strasser, Wolfgang; Lochner, Mathias; Rudi, Lothar S.; Lorenz, Walter J.

    1992-09-01

    In stereotactic treatment planning, the spatial relationships between a variety of objects have to be taken into account in order to avoid destruction of vital brain structures and rupture of vasculature. The visualization of these highly complex relations may be supported by 3-D computer graphics methods. In this context, the three-dimensional display of the intracranial vascular tree and additional objects, such as neuroanatomy, pathology, stereotactic devices, or isodose surfaces, is of high clinical value. We report an advanced rendering method for a depth-enhanced maximum intensity projection from magnetic resonance angiography (MRA) and a walk-through approach to the analysis of MRA volume data. Furthermore, various methods for a multiple-object 3-D rendering in stereotaxy are discussed. The development of advanced applications in medical imaging can hardly be successful if image acquisition problems are disregarded. We put particular emphasis on the use of conventional MRI and MRA for stereotactic guidance. The problem of MR distortion is discussed, and a novel three-dimensional approach to the quantification and correction of the distortion patterns is presented. Our results suggest that the sole use of MR for stereotactic guidance is highly practical. The true three-dimensionality of the acquired datasets opens up new perspectives for stereotactic treatment planning. For the first time, it is now possible to integrate all the necessary information into 3-D scenes, thus enabling interactive 3-D planning.

  3. A Small-Scale 3D Imaging Platform for Algorithm Performance Evaluation

    DTIC Science & Technology

    2007-06-01

    object tracking systems can imitate the 3D depth perception experienced in human vision by using the binocular disparity between the left and right...television, security monitoring, medical endoscopy, modern astronomy and video conferencing applications [4]. The newly discovered technology demonstrated

  4. PCNN document segmentation method based on bacterial foraging optimization algorithm

    NASA Astrophysics Data System (ADS)

    Liao, Yanping; Zhang, Peng; Guo, Qiang; Wan, Jian

    2014-04-01

    The Pulse Coupled Neural Network (PCNN) is widely used in the field of image processing, but properly defining its parameters is a difficult task in PCNN applications; so far, determining the model's parameters has required many experiments. To deal with this problem, a document segmentation method based on an improved PCNN is proposed. It uses the maximum entropy function as the fitness function of a bacterial foraging optimization algorithm, adopts this algorithm to search for the optimal parameters, and thus eliminates the need to set the experimental parameters manually. Experimental results show that the proposed algorithm can effectively complete document segmentation, and its segmentation results are better than those of the comparison algorithms.

  5. Performance evaluation of image segmentation algorithms on microscopic image data.

    PubMed

    Beneš, Miroslav; Zitová, Barbara

    2015-01-01

    In our paper, we present a performance evaluation of image segmentation algorithms on microscopic image data. In spite of the existence of many algorithms for image data partitioning, there is no universal 'best' method yet. Moreover, images of microscopic samples can be of varying character and quality, which can negatively influence the performance of image segmentation algorithms. Thus, the issue of selecting a suitable method for a given set of image data is of great interest. We carried out a large number of experiments with a variety of segmentation methods to evaluate the behaviour of individual approaches on a testing set of microscopic images (cross-section images taken in three different modalities from the field of art restoration). The segmentation results were assessed by several indices used for measuring the output quality of image segmentation algorithms. In the end, the benefit of a segmentation-combination approach is studied, and the applicability of the achieved results to another representative of the microscopic data category, biological samples, is shown.

  6. A 3D Polar Processing Algorithm for Scale Model UHF ISAR Imaging

    DTIC Science & Technology

    2006-05-01

    5 in order to allow visualization of the target’s main scattering features. The low level intensity in the imagery is represented by the color green ...imagery, one may observe higher level colors behind the low level green surfaces. Considering the relatively long wavelengths used in the 3D UHF ISAR...Lundberg, P. Follo, P. Frolind, and A. Gustavsson , “Performance of VHF-band SAR change detection for wide-area surveillance of concealed ground

  7. Incorporating Digisonde Traces into the Ionospheric Data Assimilation Three Dimensional (IDA3D) Algorithm

    DTIC Science & Technology

    2006-05-11

    digisonde data, two days of data from the Digital Ionogram DataBase (http://ulcar.uml.edu/DIDBase/), the largest digisonde database, were visually...examined. These data were processed by the Automatic Real Time Ionogram Scaler with True Height (ARTIST) [Reinisch and Huang, 1983] program into electron... ionograms are available for comparison. The first test run of the IDA3D used only O-mode autoscaled virtual height profiles from five different digisondes

  8. Performance analysis of 3-D shape measurement algorithm with a short baseline projector-camera system.

    PubMed

    Liu, Jianyang; Li, Youfu

    A number of works for 3-D shape measurement based on structured light have been well-studied in the last decades. A common way to model the system is to use the binocular stereovision-like model. In this model, the projector is treated as a camera, thus making a projector-camera-based system unified with a well-established traditional binocular stereovision system. After calibrating the projector and camera, a 3-D shape information is obtained by conventional triangulation. However, in such a stereovision-like system, the short baseline problem exists and limits the measurement accuracy. Hence, in this work, we present a new projecting-imaging model based on fringe projection profilometry (FPP). In this model, we first derive a rigorous mathematical relationship that exists between the height of an object's surface, the phase difference distribution map, and the parameters of the setup. Based on this model, we then study the problem of how the uncertainty of relevant parameters, particularly the baseline's length, affects the 3-D shape measurement accuracy using our proposed model. We provide an extensive uncertainty analysis on the proposed model through partial derivative analysis, relative error analysis, and sensitivity analysis. Moreover, the Monte Carlo simulation experiment is also conducted which shows that the measurement performance of the projector-camera system has a short baseline.

  9. Conservative multizonal interface algorithm for the 3-D Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Klopfer, G. H.; Molvik, G. A.

    1991-01-01

    A conservative zonal interface algorithm using features of both structured and unstructured mesh CFD technology is presented. The flow solver within each of the zones is based on structured mesh CFD technology. The interface algorithm was implemented into two three-dimensional Navier-Stokes finite volume codes and was found to yield good results.

  10. A genetic algorithm particle pairing technique for 3D velocity field extraction in holographic particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Sheng, J.; Meng, H.

    This research explores a novel technique, using Genetic Algorithm Particle Pairing (GAPP) to extract three-dimensional (3D) velocity fields of complex flows. It is motivated by Holographic Particle Image Velocimetry (HPIV), in which intrinsic speckle noise hinders the achievement of high particle density required for conventional correlation methods in extracting 3D velocity fields, especially in regions with large velocity gradients. The GA particle pairing method maps particles recorded at the first exposure to those at the second exposure in a 3D space, providing one velocity vector for each particle pair instead of seeking statistical averaging. Hence, particle pairing can work with sparse seeding and complex 3D velocity fields. When dealing with a large number of particles from two instants, however, the accuracy of pairing results and processing speed become major concerns. Using GA's capability to search a large solution space parallelly, our algorithm can efficiently find the best mapping scenarios among a large number of possible particle pairing schemes. During GA iterations, different pairing schemes or solutions are evaluated based on fluid dynamics. Two types of evaluation functions are proposed, tested, and embedded into the GA procedures. Hence, our Genetic Algorithm Particle Pairing (GAPP) technique is characterized by robustness in velocity calculation, high spatial resolution, good parallelism in handling large data sets, and high processing speed on parallel architectures. It has been successfully tested on a simple HPIV measurement of a real trapped vortex flow as well as a series of numerical experiments. In this paper, we introduce the principle of GAPP, analyze its performance under different parameters, and evaluate its processing speed on different computer architectures.

  11. Automatic brain tumor segmentation with a fast Mumford-Shah algorithm

    NASA Astrophysics Data System (ADS)

    Müller, Sabine; Weickert, Joachim; Graf, Norbert

    2016-03-01

    We propose a fully-automatic method for brain tumor segmentation that does not require any training phase. Our approach is based on a sequence of segmentations using the Mumford-Shah cartoon model with varying parameters. In order to come up with a very fast implementation, we extend the recent primal-dual algorithm of Strekalovskiy et al. (2014) from the 2D to the medically relevant 3D setting. Moreover, we suggest a new confidence refinement and show that it can increase the precision of our segmentations substantially. Our method is evaluated on 188 data sets with high-grade gliomas and 25 with low-grade gliomas from the BraTS14 database. Within a computation time of only three minutes, we achieve Dice scores that are comparable to state-of-the-art methods.

  12. Implementation of low communication frequency 3D FFT algorithm for ultra-large-scale micromagnetics simulation

    NASA Astrophysics Data System (ADS)

    Tsukahara, Hiroshi; Iwano, Kaoru; Mitsumata, Chiharu; Ishikawa, Tadashi; Ono, Kanta

    2016-10-01

    We implement low-communication-frequency three-dimensional fast Fourier transform algorithms in a micromagnetics simulator for the calculation of the magnetostatic field, which occupies a significant portion of large-scale micromagnetics simulations. This fast Fourier transform algorithm reduces the frequency of all-to-all communications from six to two times. Simulation times with our simulator show high scalability in parallelization, even when the micromagnetics simulation is performed using 32 768 physical computing cores. This low-communication-frequency fast Fourier transform algorithm enables micromagnetics simulations of the world's largest class, with over one billion calculation cells, to be carried out.

  13. A hybrid algorithm for the segmentation of books in libraries

    NASA Astrophysics Data System (ADS)

    Hu, Zilong; Tang, Jinshan; Lei, Liang

    2016-05-01

    This paper proposes an algorithm for book segmentation based on bookshelf images. The algorithm can be separated into three parts. The first part is pre-processing, aiming at eliminating or decreasing the effect of image noise and illumination conditions. The second part is near-horizontal line detection based on a Canny edge detector, which separates a bookshelf image into multiple sub-images so that each sub-image contains an individual shelf. The last part is book segmentation: in each shelf image, near-vertical lines are detected, and the obtained lines are used for book segmentation. The proposed algorithm was tested with bookshelf images taken from the OPIE library at MTU, and the experimental results demonstrate good performance.
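    A sketch of the edge-and-line part with OpenCV is shown below; the preprocessing and the grouping of detected lines into individual books are the paper's contribution and are not reproduced, and the thresholds, angle tolerance and file name are assumptions.

```python
import numpy as np
import cv2

def near_vertical_lines(shelf_gray, max_tilt_deg=10):
    """Detect near-vertical line segments (candidate book spine boundaries)
    in a grayscale shelf image using Canny edges and a probabilistic Hough transform."""
    edges = cv2.Canny(shelf_gray, 50, 150)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                               minLineLength=shelf_gray.shape[0] // 3, maxLineGap=10)
    vertical = []
    if segments is not None:
        for x1, y1, x2, y2 in segments[:, 0]:
            angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
            if abs(angle - 90) <= max_tilt_deg:     # keep only near-vertical segments
                vertical.append((x1, y1, x2, y2))
    return vertical

# Usage (hypothetical file name):
# img = cv2.imread("bookshelf.jpg", cv2.IMREAD_GRAYSCALE)
# lines = near_vertical_lines(img)
```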

  14. Integrated WiFi/PDR/Smartphone Using an Unscented Kalman Filter Algorithm for 3D Indoor Localization.

    PubMed

    Chen, Guoliang; Meng, Xiaolin; Wang, Yunjia; Zhang, Yanzhe; Tian, Peng; Yang, Huachao

    2015-09-23

    Because of the high calculation cost and poor performance of a traditional planar map when dealing with complicated indoor geographic information, a WiFi fingerprint indoor positioning system cannot be widely employed on a smartphone platform. By making full use of the hardware sensors embedded in the smartphone, this study proposes an integrated approach to a three-dimensional (3D) indoor positioning system. First, an improved K-means clustering method is adopted to reduce the fingerprint database retrieval time and enhance positioning efficiency. Next, with the mobile phone's acceleration sensor, a new step counting method based on auto-correlation analysis is proposed to achieve cell phone inertial navigation positioning. Furthermore, the integration of WiFi positioning with Pedestrian Dead Reckoning (PDR) obtains higher positional accuracy with the help of the Unscented Kalman Filter algorithm. Finally, a hybrid 3D positioning system based on Unity 3D, which can carry out real-time positioning for targets in 3D scenes, is designed for the fluent operation of mobile terminals.
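    The step-counting idea above can be sketched with autocorrelation of the accelerometer magnitude: the dominant peak at a plausible lag gives the step period. The sampling rate, the lag search range and the synthetic signal below are assumptions, not the paper's parameters.

```python
import numpy as np

def step_period(acc_magnitude, fs=50.0, min_period=0.3, max_period=1.0):
    """Estimate the walking step period (s) as the lag of the highest
    autocorrelation peak of the detrended acceleration magnitude."""
    x = np.asarray(acc_magnitude, dtype=np.float64)
    x = x - x.mean()
    ac = np.correlate(x, x, mode='full')[len(x) - 1:]      # non-negative lags only
    lo, hi = int(min_period * fs), int(max_period * fs)
    best_lag = lo + int(np.argmax(ac[lo:hi]))
    return best_lag / fs

# Synthetic 2 Hz "steps" sampled at 50 Hz
t = np.arange(0, 10, 1 / 50.0)
acc = 1.0 + 0.3 * np.sin(2 * np.pi * 2.0 * t) + 0.05 * np.random.randn(t.size)
print(step_period(acc))        # approximately 0.5 s per step
```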

  15. Integrated WiFi/PDR/Smartphone Using an Unscented Kalman Filter Algorithm for 3D Indoor Localization

    PubMed Central

    Chen, Guoliang; Meng, Xiaolin; Wang, Yunjia; Zhang, Yanzhe; Tian, Peng; Yang, Huachao

    2015-01-01

    Because of the high calculation cost and poor performance of a traditional planar map when dealing with complicated indoor geographic information, a WiFi fingerprint indoor positioning system cannot be widely employed on a smartphone platform. By making full use of the hardware sensors embedded in the smartphone, this study proposes an integrated approach to a three-dimensional (3D) indoor positioning system. First, an improved K-means clustering method is adopted to reduce the fingerprint database retrieval time and enhance positioning efficiency. Next, with the mobile phone’s acceleration sensor, a new step counting method based on auto-correlation analysis is proposed to achieve cell phone inertial navigation positioning. Furthermore, the integration of WiFi positioning with Pedestrian Dead Reckoning (PDR) obtains higher positional accuracy with the help of the Unscented Kalman Filter algorithm. Finally, a hybrid 3D positioning system based on Unity 3D, which can carry out real-time positioning for targets in 3D scenes, is designed for the fluent operation of mobile terminals. PMID:26404314

  16. PDB explorer -- a web based algorithm for protein annotation viewer and 3D visualization.

    PubMed

    Nayarisseri, Anuraj; Shardiwal, Rakesh Kumar; Yadav, Mukesh; Kanungo, Neha; Singh, Pooja; Shah, Pratik; Ahmed, Sheaza

    2014-12-01

    The PDB file format is a text format characterizing the three-dimensional structures of macromolecules available in the Protein Data Bank (PDB). Determined protein structures are often found in association with other molecules or ions, such as nucleic acids, water, ions and drug molecules, which can likewise be described in the PDB format and have been deposited in the PDB database. A PDB file is machine generated and is not in a human-readable format, so a computational tool is needed to interpret it. The objective of our present study is to develop free online software for the retrieval, visualization and reading of the annotation of a protein 3D structure available in the PDB database. The main aim is to present the PDB file in a human-readable format, i.e., the information in the PDB file is converted into readable sentences. The tool displays all possible information from a PDB file, including the 3D structure described by the file. Programming and scripting languages such as Perl, CSS, JavaScript, Ajax, and HTML have been used for the development of PDB Explorer. PDB Explorer directly parses the PDB file, calling methods for each parsed element: secondary structure elements, atoms, coordinates, etc. PDB Explorer is freely available at http://www.pdbexplorer.eminentbio.com/home with no requirement of log-in.
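
    A minimal Python sketch of turning a few fixed-column PDB records into readable sentences, illustrating the kind of conversion described above; PDB Explorer itself is implemented with Perl and JavaScript, and the file name used here is hypothetical.

        def describe_pdb(path):
            """Print human-readable sentences for a few standard PDB record types."""
            atoms = 0
            with open(path) as fh:
                for line in fh:
                    rec = line[:6].strip()
                    if rec == "HEADER":
                        print(f"The entry was deposited on {line[50:59].strip()} "
                              f"with ID {line[62:66].strip()}.")
                    elif rec == "TITLE":
                        print("Title:", line[10:80].strip())
                    elif rec == "HELIX":
                        print(f"Helix from residue {line[21:25].strip()} to {line[33:37].strip()} "
                              f"on chain {line[19].strip()}.")
                    elif rec == "ATOM":
                        atoms += 1
            print(f"The structure contains {atoms} ATOM records.")

        # describe_pdb("1abc.pdb")   # hypothetical file downloaded from the PDB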

  17. Comparison between upwind FEM and new algorithm based on the indirect BIEM for 3D moving conductor problems

    SciTech Connect

    Kim, D.H. (Living System Research Lab.); Jeon, D.Y.; Hahn, S.Y. (Dept. of Electrical Engineering)

    1999-05-01

    In general, an electromagnetic apparatus such as a linear induction motor, a MAGLEV vehicle or an electromagnetic launcher involves conducting parts in motion. This paper presents a new algorithm based on the indirect boundary integral equation method to analyze electromagnetic systems with a moving conductor. The proposed algorithm yields relatively stable and accurate solutions because a fundamental Green's function of diffusion type is used, which is valid for any value of the Peclet number. In addition, computer memory and computing time for 3D computation can be saved considerably by using boundary integral equations of minimum order and the singular property of the Green's function. To demonstrate these advantages, numerical results obtained with the proposed algorithm and with the upwind finite element method are compared with analytic solutions.

  18. Computer-aided diagnosis: a 3D segmentation method for lung nodules in CT images by use of a spiral-scanning technique

    NASA Astrophysics Data System (ADS)

    Wang, Jiahui; Engelmann, Roger; Li, Qiang

    2008-03-01

    Lung nodule segmentation in computed tomography (CT) plays an important role in computer-aided detection, diagnosis, and quantification systems for lung cancer. In this study, we developed a simple but accurate nodule segmentation method in three-dimensional (3D) CT. First, a volume of interest (VOI) was determined at the location of a nodule. We then transformed the VOI into a two-dimensional (2D) image by use of a "spiral-scanning" technique, in which a radial line originating from the center of the VOI spirally scanned the VOI. The voxels scanned by the radial line were arranged sequentially to form a transformed 2D image. Because the surface of a nodule in the 3D image became a curve in the transformed 2D image, the spiral-scanning technique considerably simplified our segmentation method and enabled us to obtain accurate segmentation results. We employed a dynamic programming technique to delineate the "optimal" outline of a nodule in the 2D image, which was transformed back into the 3D image space to provide the interior of the nodule. The proposed segmentation method was trained on the first and tested on the second of the Lung Image Database Consortium (LIDC) datasets. The overlap between nodule regions provided by the computer and by the radiologists was employed as a performance metric. The experimental results on the LIDC database demonstrated that our segmentation method provided relatively robust and accurate segmentation results, with mean overlap values of 66% and 64% for the nodules in the first and second LIDC datasets, respectively, and would be useful for the quantification, detection, and diagnosis of lung cancer.
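
    A minimal sketch of the spiral-scanning transform: the VOI is sampled along radial rays whose directions follow a spiral covering the unit sphere, and the samples are stacked into a 2D image; the number of rays, the sampling radius, and the golden-angle parameterization are assumptions, not necessarily those of the paper.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def spiral_transform(voi, n_rays=512, n_samples=64):
            """voi: 3D array with the nodule roughly centred; returns an (n_rays, n_samples) image."""
            cz, cy, cx = (np.asarray(voi.shape) - 1) / 2.0
            radius = min(voi.shape) / 2.0 - 1
            t = np.arange(n_rays)
            # spiral of ray directions covering the sphere (golden-angle parameterization)
            z = 1.0 - 2.0 * (t + 0.5) / n_rays
            phi = np.pi * (3.0 - np.sqrt(5.0)) * t
            dirs = np.stack([z,
                             np.sqrt(1 - z**2) * np.sin(phi),
                             np.sqrt(1 - z**2) * np.cos(phi)], axis=1)   # (n_rays, 3) as (dz, dy, dx)
            r = np.linspace(0, radius, n_samples)
            pts = dirs[:, None, :] * r[None, :, None] + np.array([cz, cy, cx])
            coords = pts.reshape(-1, 3).T                                # (3, n_rays * n_samples)
            img2d = map_coordinates(voi, coords, order=1).reshape(n_rays, n_samples)
            return img2d   # the nodule surface appears as a curve in this transformed image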

  19. Hyperspectral images lossless compression using the 3D binary EZW algorithm

    NASA Astrophysics Data System (ADS)

    Cheng, Kai-jen; Dill, Jeffrey

    2013-02-01

    This paper presents a transform-based lossless compression method for hyperspectral images, inspired by Shapiro's (1993) EZW algorithm. The proposed compression method uses a hybrid transform which includes an integer Karhunen-Loève transform (KLT) and an integer discrete wavelet transform (DWT). The integer KLT is employed to eliminate the correlations among the bands of the hyperspectral image. The integer 2D discrete wavelet transform (DWT) is applied to eliminate the correlations in the spatial dimensions and produce wavelet coefficients. These coefficients are then coded by the proposed binary EZW algorithm. The binary EZW eliminates the subordinate pass of the conventional EZW by coding residual values, and produces binary sequences. The binary EZW algorithm combines the merits of the well-known EZW and SPIHT algorithms, and it is computationally simpler for lossless compression. The proposed method was applied to AVIRIS images and compared to other state-of-the-art image compression techniques. The results show that the proposed lossless image compression is more efficient and achieves a higher compression ratio than the other algorithms.
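
    A minimal sketch of the decorrelation stage only: a spectral KLT (eigen-decomposition of the band covariance) followed by a per-band 2D DWT with PyWavelets; the integer-to-integer transforms and the binary EZW coder of the paper are not reproduced, and the wavelet choice is an assumption.

        import numpy as np
        import pywt

        def spectral_klt(cube):
            """cube: (bands, rows, cols) hyperspectral image; returns spectrally decorrelated bands."""
            b, r, c = cube.shape
            X = cube.reshape(b, -1).astype(np.float64)
            X -= X.mean(axis=1, keepdims=True)
            cov = X @ X.T / X.shape[1]
            _, vecs = np.linalg.eigh(cov)          # eigenvectors of the band covariance
            return (vecs.T @ X).reshape(b, r, c)   # KLT: project onto the eigenvector basis

        def spatial_dwt(bands, wavelet="db2", level=3):
            """Apply a 2D DWT to each KLT band; returns a list of coefficient pyramids."""
            return [pywt.wavedec2(band, wavelet, level=level) for band in bands]

        # coeffs = spatial_dwt(spectral_klt(aviris_cube))   # aviris_cube is a hypothetical input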

  20. Impact of Multiscale Retinex Computation on Performance of Segmentation Algorithms

    NASA Technical Reports Server (NTRS)

    Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.; Hines, Glenn D.

    2004-01-01

    Classical segmentation algorithms subdivide an image into its constituent components based upon some metric that defines commonality between pixels. Often, these metrics incorporate some measure of "activity" in the scene, e.g. the amount of detail that is in a region. The Multiscale Retinex with Color Restoration (MSRCR) is a general purpose, non-linear image enhancement algorithm that significantly affects the brightness, contrast and sharpness within an image. In this paper, we will analyze the impact the MSRCR has on segmentation results and performance.
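
    A minimal single-channel multiscale retinex sketch, illustrating the log-of-ratio computation that the MSRCR builds on; the scales and equal weights are conventional defaults rather than the parameters used in the paper, and the color restoration step is omitted.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def multiscale_retinex(img, sigmas=(15, 80, 250)):
            """img: 2D float array (single channel); returns the MSR output before color restoration."""
            img = img.astype(np.float64) + 1.0           # avoid log(0)
            msr = np.zeros_like(img)
            for sigma in sigmas:
                surround = gaussian_filter(img, sigma)   # low-pass estimate of the illumination
                msr += np.log(img) - np.log(surround)    # single-scale retinex at this scale
            return msr / len(sigmas)                     # equal-weight combination of the scales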

  1. 3D protein structure prediction using Imperialist Competitive algorithm and half sphere exposure prediction.

    PubMed

    Khaji, Erfan; Karami, Masoumeh; Garkani-Nejad, Zahra

    2016-02-21

    Predicting the native structure of proteins based on half-sphere exposure and contact numbers has been studied extensively in recent years. Online predictors of these vectors and of the secondary structures of amino acid sequences have made it possible to design a function for the folding process. By choosing variant structures and directions for each secondary structure, a random conformation can be generated, and a potential function can then be assigned. Minimizing the potential function using meta-heuristic algorithms is the final step in finding the native structure of a given amino acid sequence. In this work, the Imperialist Competitive algorithm was used to accelerate the minimization process. Moreover, we used an adaptive procedure to apply revolutionary changes. Finally, we considered a more accurate tool for prediction of secondary structure. The results of the computational experiments on a standard benchmark show the superiority of the new algorithm over previous methods with a similar potential function.

  2. A direct numerical reconstruction algorithm for the 3D Calderón problem

    NASA Astrophysics Data System (ADS)

    Delbary, Fabrice; Hansen, Per Christian; Knudsen, Kim

    2011-04-01

    In three dimensions Calderón's problem was addressed and solved in theory in the 1980s in a series of papers, but the numerical implementation of the algorithm has been initiated only recently. The main ingredients in the solution of the problem are complex geometrical optics solutions to the conductivity equation and a (non-physical) scattering transform. The resulting reconstruction algorithm is in principle direct and addresses the full non-linear problem immediately. In this paper we outline the theoretical reconstruction method and describe how the method can be implemented numerically. We give three different implementations and compare their performance on a numerical phantom.

  3. MO-G-17A-03: MR-Based Cortical Bone Segmentation for PET Attenuation Correction with a Non-UTE 3D Fast GRE Sequence

    SciTech Connect

    Ai, H; Pan, T; Hwang, K

    2014-06-15

    Purpose: To determine the feasibility of identifying cortical bone on MR images with a short-TE 3D fast-GRE sequence for attenuation correction of PET data in PET/MR. Methods: A water-fat-bone phantom was constructed with two pieces of beef shank. MR scans were performed on a 3T MR scanner (GE Discovery™ MR750). A 3D GRE sequence was first employed to measure the level of residual signal in cortical bone (TE1/TE2/TE3 = 2.2/4.4/6.6 ms, TR = 20 ms, flip angle = 25°). For cortical bone segmentation, a 3D fast-GRE sequence (TE/TR = 0.7/1.9 ms, acquisition voxel size = 2.5×2.5×3 mm³) was implemented along with a 3D Dixon sequence (TE1/TE2/TR = 1.2/2.3/4.0 ms, acquisition voxel size = 1.25×1.25×3 mm³) for water/fat imaging. Flip angle (10°), acquisition bandwidth (250 kHz), FOV (480×480×144 mm³) and reconstructed voxel size (0.94×0.94×1.5 mm³) were kept the same for both sequences. Soft tissue and fat tissue were first segmented on the reconstructed water/fat image. A tissue mask was created by combining the segmented water/fat masks, which was then applied on the fast-GRE image (MRFGRE). A second mask was created to remove the Gibbs artifacts present in regions in close vicinity to the phantom. The MRFGRE data was smoothed with a 3D anisotropic diffusion filter for noise reduction, after which cortical bone and air were separated using a threshold determined from the histogram. Results: There is signal in the cortical bone region in the 3D GRE images, indicating the possibility of separating cortical bone and air based on signal intensity in a short-TE MR image. The acquisition time for the 3D fast-GRE sequence was 17 s, which can be reduced to less than 10 s with parallel imaging. The attenuation image created from water-fat-bone segmentation is visually similar to the reference CT. Conclusion: Cortical bone and air can be separated based on intensity in an MR image with a short-TE 3D fast-GRE sequence. Further research is required

  4. Test of 3D CT reconstructions by EM + TV algorithm from undersampled data

    SciTech Connect

    Evseev, Ivan; Ahmann, Francielle; Silva, Hamilton P. da

    2013-05-06

    Computerized tomography (CT) plays an important role in medical imaging for diagnosis and therapy. However, CT imaging involves ionizing radiation exposure of patients. Therefore, dose reduction is an essential issue in CT. In 2011, the Expectation Maximization and Total Variation Based Model for CT Reconstruction (EM+TV) was proposed. This method can reconstruct a better image using fewer CT projections in comparison with the usual filtered back projection (FBP) technique. Thus, it could significantly reduce the overall radiation dose in CT. This work reports the results of an independent numerical simulation for cone beam CT geometry with alternative virtual phantoms. As in the original report, the 3D CT images of 128 × 128 × 128 virtual phantoms were reconstructed. It was not possible to implement phantoms with larger dimensions because of the slowness of code execution, even on a Core i7 CPU.
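
    A toy MLEM (expectation-maximization) iteration on a dense system matrix, illustrating the EM half of EM+TV; the cone-beam geometry, the TV step, and the matrix A are placeholders, so this is a sketch rather than the reconstruction code evaluated above.

        import numpy as np

        def mlem(A, b, n_iter=50):
            """A: (n_rays, n_voxels) system matrix, b: measured projections; returns the image x."""
            x = np.ones(A.shape[1])
            sens = A.T @ np.ones(A.shape[0]) + 1e-12     # sensitivity image A^T 1
            for _ in range(n_iter):
                proj = A @ x + 1e-12                     # forward projection
                x *= (A.T @ (b / proj)) / sens           # multiplicative EM update
                # EM+TV would follow each EM update with a total-variation denoising step on x
            return x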

  5. A fast 3D image simulation algorithm of moving target for scanning laser radar

    NASA Astrophysics Data System (ADS)

    Li, Jicheng; Shi, Zhiguang; Chen, Xiao; Chen, Dong

    2014-10-01

    Scanning laser radar has been widely used in many military and civil areas. Usually there is relative movement between the target and the radar, so moving-target image modeling and simulation is an important research topic in the signal processing and system design of scan-imaging laser radar. In order to improve the simulation speed while maintaining the accuracy of the image simulation, a novel fast simulation algorithm is proposed in this paper. First, for a moving target or a varying scene, an inequality that determines the intersection relations between a pixel and the target bins is obtained by deriving the projection of the target motion trajectories on the image plane. Then, by using time subdivision and approximate treatments, the potential intersection relations between pixels and target bins are determined. Finally, the number of intersection operations is reduced by testing all the potential relations and finding which of them are real intersections. To test the method's performance, we performed computer simulations of both the proposed algorithm and an algorithm from the literature for six targets. The simulation results show that the two algorithms yield the same imaging result, whereas the number of intersection operations of the former is only about 1% of that of the latter, and the calculation efficiency increases a hundredfold. This simulation acceleration idea can be applied to other, more complex application environments and provides an equal acceleration effect. It is well suited to cases where a large number of laser radar images must be produced.

  6. 4D BADA-based Trajectory Generator and 3D Guidance Algorithm

    NASA Technical Reports Server (NTRS)

    Palacios, Eduardo Sepulveda; Johnson, Marcus A.

    2013-01-01

    This paper presents a hybrid integration between aerodynamic, airline procedures and other BADA-based (Base of Aircraft Data) coefficients with a classical aircraft dynamic model. This paper also describes a three-dimensional guidance algorithm implemented in order to produce commands for the aircraft to follow a flight plan. The software chosen for this work is MATLAB.

  7. A 3D reconstruction algorithm for magneto-acoustic tomography with magnetic induction based on ultrasound transducer characteristics

    NASA Astrophysics Data System (ADS)

    Ma, Ren; Zhou, Xiaoqing; Zhang, Shunqi; Yin, Tao; Liu, Zhipeng

    2016-12-01

    In this study we present a three-dimensional (3D) reconstruction algorithm for magneto-acoustic tomography with magnetic induction (MAT-MI) based on the characteristics of the ultrasound transducer. The algorithm is investigated to solve the blur problem of the MAT-MI acoustic source image, which is caused by the ultrasound transducer and the scanning geometry. First, we established a transducer model matrix using measured data from the real transducer. With reference to the S-L model used in the computed tomography algorithm, a 3D phantom model of electrical conductivity is set up. Both sphere scanning and cylinder scanning geometries are adopted in the computer simulation. Then, using finite element analysis, the distribution of the eddy current and the acoustic source as well as the acoustic pressure can be obtained with the transducer model matrix. Next, using singular value decomposition, the inverse transducer model matrix together with the reconstruction algorithm are worked out. The acoustic source and the conductivity images are reconstructed using the proposed algorithm. Comparisons between an ideal point transducer and the realistic transducer are made to evaluate the algorithms. Finally, an experiment is performed using a graphite phantom. We found that images of the acoustic source reconstructed using the proposed algorithm match the source better than those obtained with the previous algorithm; the correlation coefficient is 98.49% for the sphere scanning geometry and 94.96% for the cylinder scanning geometry. Comparison between the ideal point transducer and the realistic transducer shows that the correlation coefficients are 90.2% in sphere scanning geometry and 86.35% in cylinder scanning geometry. The reconstruction of the graphite phantom experiment also shows a higher resolution using the proposed algorithm. We conclude that the proposed reconstruction algorithm, which considers the characteristics of the transducer, can obviously improve the resolution of the

  8. Automated real-time search and analysis algorithms for a non-contact 3D profiling system

    NASA Astrophysics Data System (ADS)

    Haynes, Mark; Wu, Chih-Hang John; Beck, B. Terry; Peterman, Robert J.

    2013-04-01

    The purpose of this research is to develop a new means of identifying and extracting geometrical feature statistics from a non-contact precision-measurement 3D profilometer. Autonomous algorithms have been developed to search through large-scale Cartesian point clouds to identify and extract geometrical features. These algorithms are developed with the intent of providing real-time production quality control of cold-rolled steel wires. The steel wires in question are prestressing steel reinforcement wires for concrete members. The geometry of the wire is critical in the performance of the overall concrete structure. For this research a custom 3D non-contact profilometry system has been developed that utilizes laser displacement sensors for submicron resolution surface profiling. Optimizations in the control and sensory system allow for data points to be collected at up to approximately 400,000 points per second. In order to achieve geometrical feature extraction and tolerancing with this large volume of data, the algorithms employed are optimized for parsing large data quantities. The methods used provide a unique means of maintaining high resolution data of the surface profiles while keeping algorithm running times within practical bounds for industrial application. By a combination of regional sampling, iterative search, spatial filtering, frequency filtering, spatial clustering, and template matching, a robust feature identification method has been developed. These algorithms provide an autonomous means of verifying tolerances in geometrical features. The key method of identifying the features is through a combination of downhill simplex and geometrical feature templates. By performing downhill simplex through several procedural programming layers of different search and filtering techniques, very specific geometrical features can be identified within the point cloud and analyzed for proper tolerancing. Being able to perform this quality control in real time

  9. A Marked Poisson Process Driven Latent Shape Model for 3D Segmentation of Reflectance Confocal Microscopy Image Stacks of Human Skin.

    PubMed

    Ghanta, Sindhu; Jordan, Michael I; Kose, Kivanc; Brooks, Dana H; Rajadhyaksha, Milind; Dy, Jennifer G

    2016-10-05

    Segmenting objects of interest from 3D datasets is a common problem encountered in biological data. Small field of view and intrinsic biological variability combined with optically subtle changes of intensity, resolution and low contrast in images make the task of segmentation difficult, especially for microscopy of unstained living or freshly excised thick tissues. Incorporating shape information in addition to the appearance of the object of interest can often help improve segmentation performance. However, shapes of objects in tissue can be highly variable and design of a flexible shape model that encompasses these variations is challenging. To address such complex segmentation problems, we propose a unified probabilistic framework that can incorporate the uncertainty associated with complex shapes, variable appearance and unknown locations. The driving application which inspired the development of this framework is a biologically important segmentation problem: the task of automatically detecting and segmenting the dermal-epidermal junction (DEJ) in 3D reflectance confocal microscopy (RCM) images of human skin. RCM imaging allows noninvasive observation of cellular, nuclear and morphological detail. The DEJ is an important morphological feature as it is where disorder, disease and cancer usually start. Detecting the DEJ is challenging because it is a 2D surface in a 3D volume which has strong but highly variable number of irregularly spaced and variably shaped "peaks and valleys". In addition, RCM imaging resolution, contrast and intensity vary with depth. Thus a prior model needs to incorporate the intrinsic structure while allowing variability in essentially all its parameters. We propose a model which can incorporate objects of interest with complex shapes and variable appearance in an unsupervised setting by utilizing domain knowledge to build appropriate priors of the model. Our novel strategy to model this structure combines a spatial Poisson process with

  10. A Marked Poisson Process Driven Latent Shape Model for 3D Segmentation of Reflectance Confocal Microscopy Image Stacks of Human Skin.

    PubMed

    Ghanta, Sindhu; Jordan, Michael I; Kose, Kivanc; Brooks, Dana H; Rajadhyaksha, Milind; Dy, Jennifer G

    2017-01-01

    Segmenting objects of interest from 3D data sets is a common problem encountered in biological data. Small field of view and intrinsic biological variability combined with optically subtle changes of intensity, resolution, and low contrast in images make the task of segmentation difficult, especially for microscopy of unstained living or freshly excised thick tissues. Incorporating shape information in addition to the appearance of the object of interest can often help improve segmentation performance. However, the shapes of objects in tissue can be highly variable and design of a flexible shape model that encompasses these variations is challenging. To address such complex segmentation problems, we propose a unified probabilistic framework that can incorporate the uncertainty associated with complex shapes, variable appearance, and unknown locations. The driving application that inspired the development of this framework is a biologically important segmentation problem: the task of automatically detecting and segmenting the dermal-epidermal junction (DEJ) in 3D reflectance confocal microscopy (RCM) images of human skin. RCM imaging allows noninvasive observation of cellular, nuclear, and morphological detail. The DEJ is an important morphological feature as it is where disorder, disease, and cancer usually start. Detecting the DEJ is challenging, because it is a 2D surface in a 3D volume which has strong but highly variable number of irregularly spaced and variably shaped "peaks and valleys." In addition, RCM imaging resolution, contrast, and intensity vary with depth. Thus, a prior model needs to incorporate the intrinsic structure while allowing variability in essentially all its parameters. We propose a model which can incorporate objects of interest with complex shapes and variable appearance in an unsupervised setting by utilizing domain knowledge to build appropriate priors of the model. Our novel strategy to model this structure combines a spatial Poisson

  11. An Efficient Algorithm for Mapping Imaging Data to 3D Unstructured Grids in Computational Biomechanics

    SciTech Connect

    Einstein, Daniel R.; Kuprat, Andrew P.; Jiao, Xiangmin; Carson, James P.; Einstein, David M.; Corley, Richard A.; Jacob, Rick E.

    2013-01-01

    Geometries for organ scale and multiscale simulations of organ function are now routinely derived from imaging data. However, medical images may also contain spatially heterogeneous information other than geometry that is relevant to such simulations, either as initial conditions or in the form of model parameters. In this manuscript, we present an algorithm for the efficient and robust mapping of such data to imaging-based unstructured polyhedral grids in parallel. We then illustrate the application of our mapping algorithm to three different mapping problems: 1) the mapping of MRI diffusion tensor data to an unstructured ventricular grid; 2) the mapping of serial cryo-section histology data to an unstructured mouse brain grid; and 3) the mapping of CT-derived volumetric strain data to an unstructured multiscale lung grid. Execution times and parallel performance are reported for each case.
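
    A minimal sketch of one ingredient of such a mapping: sampling a voxel image at the world coordinates of unstructured-grid nodes via trilinear interpolation; the 4x4 voxel-to-world affine, its index ordering, and the variable names are assumptions, and the paper's parallel, robust mapping is considerably more involved.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def sample_image_at_nodes(volume, affine, nodes_world):
            """volume: 3D image; affine: 4x4 voxel-index-to-world matrix; nodes_world: (N, 3) mesh nodes."""
            inv = np.linalg.inv(affine)
            homog = np.c_[nodes_world, np.ones(len(nodes_world))]
            ijk = (homog @ inv.T)[:, :3]                      # world -> voxel index coordinates
            return map_coordinates(volume, ijk.T, order=1)    # trilinear interpolation per node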

  12. Algorithm of pulmonary emphysema extraction using thoracic 3-D CT images

    NASA Astrophysics Data System (ADS)

    Saita, Shinsuke; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Ohmatsu, Hironobu; Tominaga, Keigo; Eguchi, Kenji; Moriyama, Noriyuki

    2008-03-01

    The number of emphysema patients tends to increase due to aging and smoking. Emphysematous disease destroys the alveoli, and repair is impossible, so early detection is essential. The CT value of lung tissue decreases with the destruction of the lung structure, becoming lower than that of normal lung; such low-density absorption regions are referred to as Low Attenuation Areas (LAA). So far, the conventional way of extracting LAA by simple thresholding has been used. However, the CT value of a CT image fluctuates with the measurement conditions, with various bias components such as inspiration, expiration and congestion. It is therefore necessary to consider these bias components in the extraction of LAA. We removed these bias components and propose an LAA extraction algorithm. The algorithm has been applied to a phantom image. Then, using low-dose CT (normal: 30 cases, obstructive lung disease: 26 cases), we extracted early-stage LAA and quantitatively analyzed the lung lobes using the lung structure.
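
    A minimal sketch of the conventional thresholding baseline mentioned above, computing the LAA percentage inside a given lung mask; the -950 HU cutoff is a common convention rather than the authors' value, and the bias-component removal proposed in the paper is not included.

        import numpy as np

        def laa_percentage(ct_hu, lung_mask, threshold=-950):
            """ct_hu: CT volume in Hounsfield units; lung_mask: boolean lung segmentation."""
            lung_voxels = ct_hu[lung_mask]
            laa = lung_voxels < threshold              # low attenuation area voxels
            return 100.0 * laa.sum() / lung_voxels.size

        # laa_pct = laa_percentage(ct_volume, lung_mask)   # percentage of LAA within the lung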

  13. Left-ventricle segmentation in real-time 3D echocardiography using a hybrid active shape model and optimal graph search approach

    NASA Astrophysics Data System (ADS)

    Zhang, Honghai; Abiose, Ademola K.; Campbell, Dwayne N.; Sonka, Milan; Martins, James B.; Wahle, Andreas

    2010-03-01

    Quantitative analysis of the left ventricular shape and motion patterns associated with left ventricular mechanical dyssynchrony (LVMD) is essential for diagnosis and treatment planning in congestive heart failure. Real-time 3D echocardiography (RT3DE) used for LVMD analysis is frequently limited by heavy speckle noise or partially incomplete data, thus a segmentation method utilizing learned global shape knowledge is beneficial. In this study, the endocardial surface of the left ventricle (LV) is segmented using a hybrid approach combining active shape model (ASM) with optimal graph search. The latter is used to achieve landmark refinement in the ASM framework. Optimal graph search translates the 3D segmentation into the detection of a minimum-cost closed set in a graph and can produce a globally optimal result. Various information terms (gradient, intensity distributions, and regional properties) are used to define the costs for the graph search. The developed method was tested on 44 RT3DE datasets acquired from 26 LVMD patients. The segmentation accuracy was assessed by surface positioning error and volume overlap measured for the whole LV as well as 16 standard LV regions. The segmentation produced very good results that were not achievable using ASM or graph search alone.

  14. A comparison study of atlas-based 3D cardiac MRI segmentation: global versus global and local transformations

    NASA Astrophysics Data System (ADS)

    Daryanani, Aditya; Dangi, Shusil; Ben-Zikri, Yehuda Kfir; Linte, Cristian A.

    2016-03-01

    Magnetic Resonance Imaging (MRI) is a standard-of-care imaging modality for cardiac function assessment and guidance of cardiac interventions thanks to its high image quality and lack of exposure to ionizing radiation. Cardiac health parameters such as left ventricular volume, ejection fraction, myocardial mass, thickness, and strain can be assessed by segmenting the heart from cardiac MRI images. Furthermore, the segmented pre-operative anatomical heart models can be used to precisely identify regions of interest to be treated during minimally invasive therapy. Hence, the use of accurate and computationally efficient segmentation techniques is critical, especially for intra-procedural guidance applications that rely on the peri-operative segmentation of subject-specific datasets without delaying the procedure workflow. Atlas-based segmentation incorporates prior knowledge of the anatomy of interest from expertly annotated image datasets. Typically, the ground truth atlas label is propagated to a test image using a combination of global and local registration. The high computational cost of non-rigid registration motivated us to obtain an initial segmentation using global transformations based on an atlas of the left ventricle from a population of patient MRI images and refine it using a well-developed technique based on graph cuts. Here we quantitatively compare the segmentations obtained from the global and global plus local atlases and refined using graph cut-based techniques with the expert segmentations according to several similarity metrics, including the Dice coefficient, Jaccard coefficient, Hausdorff distance, and mean absolute distance error.
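
    A minimal sketch of the similarity metrics listed above computed on binary masks; for brevity the Hausdorff and mean absolute distances are taken over all foreground voxels rather than extracted surfaces, which is an approximation.

        import numpy as np
        from scipy.spatial.distance import directed_hausdorff
        from scipy.ndimage import distance_transform_edt

        def dice(a, b):
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        def jaccard(a, b):
            return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

        def hausdorff(a, b):
            pa, pb = np.argwhere(a), np.argwhere(b)        # foreground voxel coordinates
            return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

        def mean_absolute_distance(a, b):
            d_to_b = distance_transform_edt(~b)            # distance to the nearest voxel of b
            return d_to_b[a].mean()                        # average distance from a's voxels to b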

  15. Grid-free 3D multiple spot generation with an efficient single-plane FFT-based algorithm.

    PubMed

    Engström, David; Frank, Anders; Backsten, Jan; Goksör, Mattias; Bengtsson, Jörgen

    2009-06-08

    Algorithms based on the fast Fourier transform (FFT) for the design of spot-generating computer generated holograms (CGHs) typically only make use of a few sample positions in the propagated field. We have developed a new design method that much better utilizes the information-carrying capacity of the sampled propagated field. In this way design tasks which are difficult to accomplish with conventional FFT-based design methods, such as spot positioning at non-sample positions and/or spot positioning in 3D, are solved as easily as any standard design task using a conventional method. The new design method is based on a projection optimization, similar to that in the commonly used Gerchberg-Saxton algorithm, and the vastly improved design freedom comes at virtually no extra computational cost compared to the conventional design. Several different design tasks were demonstrated experimentally with a liquid crystal spatial light modulator, showing highly accurate creation of the desired field distributions.
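
    A minimal Gerchberg-Saxton-style projection loop for a phase-only spot-generating CGH, the kind of conventional FFT-based design the paper improves upon; the grid-free 3D positioning of the paper is not reproduced, and the on-grid target spots in the usage comment are assumptions.

        import numpy as np

        def gerchberg_saxton(target_amp, n_iter=100):
            """target_amp: desired far-field amplitude (e.g. spots); returns the hologram phase."""
            field = np.exp(1j * 2 * np.pi * np.random.rand(*target_amp.shape))
            for _ in range(n_iter):
                far = np.fft.fft2(field)
                far = target_amp * np.exp(1j * np.angle(far))     # impose the target amplitude
                field = np.fft.ifft2(far)
                field = np.exp(1j * np.angle(field))              # phase-only hologram constraint
            return np.angle(field)

        # target = np.zeros((256, 256)); target[128, 100] = target[64, 200] = 1.0  # two spots
        # phase = gerchberg_saxton(target)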

  16. Comparing algorithms for automated vessel segmentation in computed tomography scans of the lung: the VESSEL12 study

    PubMed Central

    Rudyanto, Rina D.; Kerkstra, Sjoerd; van Rikxoort, Eva M.; Fetita, Catalin; Brillet, Pierre-Yves; Lefevre, Christophe; Xue, Wenzhe; Zhu, Xiangjun; Liang, Jianming; Öksüz, İlkay; Ünay, Devrim; Kadipaşaoğlu, Kamuran; Estépar, Raúl San José; Ross, James C.; Washko, George R.; Prieto, Juan-Carlos; Hoyos, Marcela Hernández; Orkisz, Maciej; Meine, Hans; Hüllebrand, Markus; Stöcker, Christina; Mir, Fernando Lopez; Naranjo, Valery; Villanueva, Eliseo; Staring, Marius; Xiao, Changyan; Stoel, Berend C.; Fabijanska, Anna; Smistad, Erik; Elster, Anne C.; Lindseth, Frank; Foruzan, Amir Hossein; Kiros, Ryan; Popuri, Karteek; Cobzas, Dana; Jimenez-Carretero, Daniel; Santos, Andres; Ledesma-Carbayo, Maria J.; Helmberger, Michael; Urschler, Martin; Pienn, Michael; Bosboom, Dennis G.H.; Campo, Arantza; Prokop, Mathias; de Jong, Pim A.; Ortiz-de-Solorzano, Carlos; Muñoz-Barrutia, Arrate; van Ginneken, Bram

    2016-01-01

    The VESSEL12 (VESsel SEgmentation in the Lung) challenge objectively compares the performance of different algorithms to identify vessels in thoracic computed tomography (CT) scans. Vessel segmentation is fundamental in computer aided processing of data generated by 3D imaging modalities. As manual vessel segmentation is prohibitively time consuming, any real world application requires some form of automation. Several approaches exist for automated vessel segmentation, but judging their relative merits is difficult due to a lack of standardized evaluation. We present an annotated reference dataset containing 20 CT scans and propose nine categories to perform a comprehensive evaluation of vessel segmentation algorithms from both academia and industry. Twenty algorithms participated in the VESSEL12 challenge, held at International Symposium on Biomedical Imaging (ISBI) 2012. All results have been published at the VESSEL12 website http://vessel12.grand-challenge.org. The challenge remains ongoing and open to new participants. Our three contributions are: (1) an annotated reference dataset available online for evaluation of new algorithms; (2) a quantitative scoring system for objective comparison of algorithms; and (3) performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases. PMID:25113321

  17. Comparing algorithms for automated vessel segmentation in computed tomography scans of the lung: the VESSEL12 study.

    PubMed

    Rudyanto, Rina D; Kerkstra, Sjoerd; van Rikxoort, Eva M; Fetita, Catalin; Brillet, Pierre-Yves; Lefevre, Christophe; Xue, Wenzhe; Zhu, Xiangjun; Liang, Jianming; Öksüz, Ilkay; Ünay, Devrim; Kadipaşaoğlu, Kamuran; Estépar, Raúl San José; Ross, James C; Washko, George R; Prieto, Juan-Carlos; Hoyos, Marcela Hernández; Orkisz, Maciej; Meine, Hans; Hüllebrand, Markus; Stöcker, Christina; Mir, Fernando Lopez; Naranjo, Valery; Villanueva, Eliseo; Staring, Marius; Xiao, Changyan; Stoel, Berend C; Fabijanska, Anna; Smistad, Erik; Elster, Anne C; Lindseth, Frank; Foruzan, Amir Hossein; Kiros, Ryan; Popuri, Karteek; Cobzas, Dana; Jimenez-Carretero, Daniel; Santos, Andres; Ledesma-Carbayo, Maria J; Helmberger, Michael; Urschler, Martin; Pienn, Michael; Bosboom, Dennis G H; Campo, Arantza; Prokop, Mathias; de Jong, Pim A; Ortiz-de-Solorzano, Carlos; Muñoz-Barrutia, Arrate; van Ginneken, Bram

    2014-10-01

    The VESSEL12 (VESsel SEgmentation in the Lung) challenge objectively compares the performance of different algorithms to identify vessels in thoracic computed tomography (CT) scans. Vessel segmentation is fundamental in computer aided processing of data generated by 3D imaging modalities. As manual vessel segmentation is prohibitively time consuming, any real world application requires some form of automation. Several approaches exist for automated vessel segmentation, but judging their relative merits is difficult due to a lack of standardized evaluation. We present an annotated reference dataset containing 20 CT scans and propose nine categories to perform a comprehensive evaluation of vessel segmentation algorithms from both academia and industry. Twenty algorithms participated in the VESSEL12 challenge, held at International Symposium on Biomedical Imaging (ISBI) 2012. All results have been published at the VESSEL12 website http://vessel12.grand-challenge.org. The challenge remains ongoing and open to new participants. Our three contributions are: (1) an annotated reference dataset available online for evaluation of new algorithms; (2) a quantitative scoring system for objective comparison of algorithms; and (3) performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases.

  18. Computer-aided mesenteric small vessel