Science.gov

Sample records for anatomy segmentation algorithm

  1. An artifact-robust, shape library-based algorithm for automatic segmentation of inner ear anatomy in post-cochlear-implantation CT.

    PubMed

    Reda, Fitsum A; Noble, Jack H; Labadie, Robert F; Dawant, Benoit M

    2014-03-21

    A cochlear implant (CI) is a device that restores hearing using an electrode array that is surgically placed in the cochlea. After implantation, the CI is programmed to attempt to optimize hearing outcome. Currently, we are testing an image-guided CI programming (IGCIP) technique we recently developed that relies on knowledge of the position of intra-cochlear anatomy relative to the implanted electrodes. IGCIP is enabled by a number of algorithms we developed that permit determining the positions of electrodes relative to intra-cochlear anatomy using a pre- and a post-implantation CT. One issue with this technique is that it cannot be used for many subjects for whom a pre-implantation CT was not acquired. Pre-implantation CT has been necessary because it is difficult to localize the intra-cochlear structures in post-implantation CTs alone due to the image artifacts that obscure the cochlea. In this work, we present an algorithm for automatically segmenting intra-cochlear anatomy in post-implantation CTs. Our approach is to first identify the labyrinth and then use its position as a landmark to localize the intra-cochlear anatomy. Specifically, we identify the labyrinth by first approximately estimating its position by mapping a labyrinth surface of another subject that is selected from a library of such surfaces and then refining this estimate by a standard shape model-based segmentation method. We tested our approach on 10 ears and achieved overall mean and maximum errors of 0.209 and 0.98 mm, respectively. This result suggests that our approach is accurate enough for developing IGCIP strategies based solely on post-implantation CTs.

  2. An artifact-robust, shape library-based algorithm for automatic segmentation of inner ear anatomy in post-cochlear-implantation CT

    NASA Astrophysics Data System (ADS)

    Reda, Fitsum A.; Noble, Jack H.; Labadie, Robert F.; Dawant, Benoit M.

    2014-03-01

    A cochlear implant (CI) is a device that restores hearing using an electrode array that is surgically placed in the cochlea. After implantation, the CI is programmed to attempt to optimize hearing outcome. Currently, we are testing an image-guided CI programming (IGCIP) technique we recently developed that relies on knowledge of the position of intra-cochlear anatomy relative to the implanted electrodes. IGCIP is enabled by a number of algorithms we developed that permit determining the positions of electrodes relative to intra-cochlear anatomy using a pre- and a post-implantation CT. One issue with this technique is that it cannot be used for many subjects for whom a pre-implantation CT was not acquired. Pre-implantation CT has been necessary because it is difficult to localize the intra-cochlear structures in post-implantation CTs alone due to the image artifacts that obscure the cochlea. In this work, we present an algorithm for automatically segmenting intra-cochlear anatomy in post-implantation CTs. Our approach is to first identify the labyrinth and then use its position as a landmark to localize the intra-cochlear anatomy. Specifically, we identify the labyrinth by first approximately estimating its position by mapping a labyrinth surface of another subject that is selected from a library of such surfaces and then refining this estimate by a standard shape model-based segmentation method. We tested our approach on 10 ears and achieved overall mean and maximum errors of 0.209 and 0.98 mm, respectively. This result suggests that our approach is accurate enough for developing IGCIP strategies based solely on post-implantation CTs.

  3. Anatomy-aware measurement of segmentation accuracy

    NASA Astrophysics Data System (ADS)

    Tizhoosh, H. R.; Othman, A. A.

    2016-03-01

    Quantifying the accuracy of segmentation and manual delineation of organs, tissue types and tumors in medical images is a necessary measurement that suffers from multiple problems. One major shortcoming of all accuracy measures is that they neglect the anatomical significance or relevance of different zones within a given segment. Hence, existing accuracy metrics measure the overlap of a given segment with a ground-truth without any anatomical discrimination inside the segment. For instance, if we understand the rectal wall or urethral sphincter as anatomical zones, then current accuracy measures ignore their significance when they are applied to assess the quality of the prostate gland segments. In this paper, we propose an anatomy-aware measurement scheme for segmentation accuracy of medical images. The idea is to create a "master gold" based on a consensus shape containing not just the outline of the segment but also the outlines of the internal zones if existent or relevant. To apply this new approach to accuracy measurement, we introduce the anatomy-aware extensions of both Dice coefficient and Jaccard index and investigate their effect using 500 synthetic prostate ultrasound images with 20 different segments for each image. We show that through anatomy-sensitive calculation of segmentation accuracy, namely by considering relevant anatomical zones, not only the measurement of individual users can change but also the ranking of users' segmentation skills may require reordering.
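
    The anatomy-aware extension of the Dice coefficient described above boils down to weighting pixels by their anatomical significance. A minimal sketch of such a zone-weighted Dice is given below; the specific weighting scheme and the example zone map are illustrative assumptions, not the authors' exact formulation.

    ```python
    import numpy as np

    def weighted_dice(seg: np.ndarray, gold: np.ndarray, weights: np.ndarray) -> float:
        """Dice overlap in which every pixel contributes proportionally to its weight."""
        seg, gold = seg.astype(bool), gold.astype(bool)
        inter = np.sum(weights * (seg & gold))
        denom = np.sum(weights * seg) + np.sum(weights * gold)
        return 2.0 * inter / denom if denom > 0 else 1.0

    # Example: pixels inside a critical zone (e.g. near the rectal wall) weigh 3x more.
    gold = np.zeros((64, 64), bool); gold[16:48, 16:48] = True
    seg = np.zeros((64, 64), bool);  seg[18:50, 16:48] = True
    weights = np.ones((64, 64));     weights[:, 40:] = 3.0
    print(weighted_dice(seg, gold, weights))                 # anatomy-aware score
    print(weighted_dice(seg, gold, np.ones_like(weights)))   # reduces to ordinary Dice
    ```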

  4. Performance evaluation of an automatic anatomy segmentation algorithm on repeat or four-dimensional CT images using a deformable image registration method

    PubMed Central

    Wang, He; Garden, Adam S.; Zhang, Lifei; Wei, Xiong; Ahamad, Anesa; Kuban, Deborah A.; Komaki, Ritsuko; O’Daniel, Jennifer; Zhang, Yongbin; Mohan, Radhe; Dong, Lei

    2008-01-01

    Purpose Auto-propagation of anatomical regions of interest (ROIs) from the planning CT to daily CT is an essential step in image-guided adaptive radiotherapy. The goal of this study was to quantitatively evaluate the performance of the algorithm in typical clinical applications. Method and Materials We previously adopted an image intensity-based deformable registration algorithm to find the correspondence between two images. In this study, the ROIs delineated on the planning CT image were mapped onto daily CT or four-dimensional (4D) CT images using the same transformation. Post-processing methods, such as boundary smoothing and modification, were used to enhance the robustness of the algorithm. Auto-propagated contours for eight head-and-neck patients with a total of 100 repeat CTs, one prostate patient with 24 repeat CTs, and nine lung cancer patients with a total of 90 4D-CT images were evaluated against physician-drawn contours and physician-modified deformed contours using the volume-overlap-index (VOI) and mean absolute surface-to-surface distance (ASSD). Results The deformed contours were reasonably well matched with daily anatomy on repeat CT images. The VOI and mean ASSD were 83% and 1.3 mm when compared to the independently drawn contours. A better agreement (greater than 97% and less than 0.4 mm) was achieved if the physician was only asked to correct the deformed contours. The algorithm was robust in the presence of random noise in the image. Conclusion The deformable algorithm may be an effective method to propagate the planning ROIs to subsequent CT images of changed anatomy, although a final review by physicians is highly recommended. PMID:18722272
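
    The two evaluation metrics used here, a volume overlap index and the mean absolute surface-to-surface distance between a propagated contour and a reference contour, can be sketched as below. The VOI is assumed here to be the intersection volume divided by the average of the two volumes (equivalent to the Dice coefficient); the paper's exact definition may differ, and the masks are assumed to be 3-D binary arrays.

    ```python
    import numpy as np
    from scipy import ndimage

    def volume_overlap_index(a, b):
        """Intersection over the mean of the two volumes (assumed VOI definition)."""
        a, b = a.astype(bool), b.astype(bool)
        return np.sum(a & b) / (0.5 * (a.sum() + b.sum()))

    def mean_surface_distance(a, b, spacing=(1.0, 1.0, 1.0)):
        """Symmetric mean distance (in mm) between the boundary voxels of two 3-D masks."""
        a, b = a.astype(bool), b.astype(bool)
        surf_a = a ^ ndimage.binary_erosion(a)      # boundary voxels of a
        surf_b = b ^ ndimage.binary_erosion(b)
        dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
        dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
        return 0.5 * (dist_to_b[surf_a].mean() + dist_to_a[surf_b].mean())
    ```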

  5. Performance Evaluation of Automatic Anatomy Segmentation Algorithm on Repeat or Four-Dimensional Computed Tomography Images Using Deformable Image Registration Method

    SciTech Connect

    Wang He; Garden, Adam S.; Zhang Lifei; Wei Xiong; Ahamad, Anesa; Kuban, Deborah A.; Komaki, Ritsuko; O'Daniel, Jennifer; Zhang Yongbin; Mohan, Radhe; Dong Lei

    2008-09-01

    Purpose: Auto-propagation of anatomic regions of interest from the planning computed tomography (CT) scan to the daily CT is an essential step in image-guided adaptive radiotherapy. The goal of this study was to quantitatively evaluate the performance of the algorithm in typical clinical applications. Methods and Materials: We had previously adopted an image intensity-based deformable registration algorithm to find the correspondence between two images. In the present study, the regions of interest delineated on the planning CT image were mapped onto daily CT or four-dimensional CT images using the same transformation. Postprocessing methods, such as boundary smoothing and modification, were used to enhance the robustness of the algorithm. Auto-propagated contours for 8 head-and-neck cancer patients with a total of 100 repeat CT scans, 1 prostate patient with 24 repeat CT scans, and 9 lung cancer patients with a total of 90 four-dimensional CT images were evaluated against physician-drawn contours and physician-modified deformed contours using the volume overlap index and mean absolute surface-to-surface distance. Results: The deformed contours were reasonably well matched with the daily anatomy on the repeat CT images. The volume overlap index and mean absolute surface-to-surface distance were 83% and 1.3 mm, respectively, compared with the independently drawn contours. Better agreement (>97% and <0.4 mm) was achieved if the physician was only asked to correct the deformed contours. The algorithm was also robust in the presence of random noise in the image. Conclusion: The deformable algorithm might be an effective method to propagate the planning regions of interest to subsequent CT images of changed anatomy, although a final review by physicians is highly recommended.

  6. Automatic segmentation of intra-cochlear anatomy in post-implantation CT

    NASA Astrophysics Data System (ADS)

    Reda, Fitsum A.; Dawant, Benoit M.; McRackan, Theodore R.; Labadie, Robert F.; Noble, Jack H.

    2013-03-01

    A cochlear implant (CI) is a neural prosthetic device that restores hearing by directly stimulating the auditory nerve with an electrode array. In CI surgery, the surgeon threads the electrode array into the cochlea, blind to internal structures. We have recently developed algorithms for determining the position of CI electrodes relative to intra-cochlear anatomy using pre- and post-implantation CT. We are currently using this approach to develop a CI programming assistance system that uses knowledge of electrode position to determine a patient-customized CI sound processing strategy. However, this approach cannot be used for the majority of CI users because the cochlea is obscured by image artifacts produced by CI electrodes and acquisition of pre-implantation CT is not universal. In this study we propose an approach that extends our techniques so that intra-cochlear anatomy can be segmented for CI users for whom pre-implantation CT was not acquired. The approach achieves automatic segmentation of intra-cochlear anatomy in post-implantation CT by exploiting intra-subject symmetry in cochlear anatomy across ears. We validated our approach on a dataset of 10 ears in which both pre- and post-implantation CTs were available. Our approach results in mean and maximum segmentation errors of 0.27 and 0.62 mm, respectively. This result suggests that our automatic segmentation approach is accurate enough for developing customized CI sound processing strategies for unilateral CI patients based solely on post-implantation CT scans.

  7. Multiatlas segmentation of thoracic and abdominal anatomy with level set-based local search.

    PubMed

    Schreibmann, Eduard; Marcus, David M; Fox, Tim

    2014-07-08

    Segmentation of organs at risk (OARs) remains one of the most time-consuming tasks in radiotherapy treatment planning. Atlas-based segmentation methods using single templates have emerged as a practical approach to automate the process for brain or head and neck anatomy, but pose significant challenges in regions where large interpatient variations are present. We show that significant changes are needed to autosegment thoracic and abdominal datasets by combining multi-atlas deformable registration with a level set-based local search. Segmentation is hierarchical, with a first stage detecting bulk organ location, and a second step adapting the segmentation to fine details present in the patient scan. The first stage is based on warping multiple presegmented templates to the new patient anatomy using a multimodality deformable registration algorithm able to cope with changes in scanning conditions and artifacts. These segmentations are compacted into a probabilistic map of organ shape using the STAPLE algorithm. Final segmentation is obtained by adjusting the probability map for each organ type, using customized combinations of delineation filters exploiting prior knowledge of organ characteristics. Validation is performed by comparing automated and manual segmentation using the Dice coefficient, measured at an average of 0.971 for the aorta, 0.869 for the trachea, 0.958 for the lungs, 0.788 for the heart, 0.912 for the liver, 0.884 for the kidneys, 0.888 for the vertebrae, 0.863 for the spleen, and 0.740 for the spinal cord. Accurate atlas segmentation for abdominal and thoracic regions can be achieved with the use of a multi-atlas and per-structure refinement strategy. To improve clinical workflow and efficiency, the algorithm was embedded in a software service, applying the algorithm automatically on acquired scans without any user interaction.
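
    The fusion step above combines several warped atlas segmentations into a probabilistic organ-shape map with STAPLE. A minimal stand-in for that step is per-voxel label averaging (majority voting), sketched below; STAPLE itself additionally estimates the reliability of each atlas, which this sketch omits.

    ```python
    import numpy as np

    def probability_map(warped_labels):
        """warped_labels: binary masks of one organ, each already warped to the patient grid."""
        stack = np.stack([m.astype(float) for m in warped_labels], axis=0)
        return stack.mean(axis=0)      # voxelwise probability of belonging to the organ

    def fuse(warped_labels, threshold=0.5):
        """Consensus segmentation obtained by thresholding the probability map."""
        return probability_map(warped_labels) >= threshold
    ```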

  8. An algorithm for segmenting range imagery

    SciTech Connect

    Roberts, R.S.

    1997-03-01

    This report describes the technical accomplishments of the FY96 Cross Cutting and Advanced Technology (CC&AT) project at Los Alamos National Laboratory. The project focused on developing algorithms for segmenting range images. The image segmentation algorithm developed during the project is described here. In addition to segmenting range images, the algorithm can fuse multiple range images thereby providing true 3D scene models. The algorithm has been incorporated into the Rapid World Modelling System at Sandia National Laboratory.

  9. Spectral clustering algorithms for ultrasound image segmentation.

    PubMed

    Archip, Neculai; Rohling, Robert; Cooperberg, Peter; Tahmasebpour, Hamid; Warfield, Simon K

    2005-01-01

    Image segmentation algorithms derived from spectral clustering analysis rely on the eigenvectors of the Laplacian of a weighted graph obtained from the image. The NCut criterion was previously used for image segmentation in a supervised manner. We derive a new strategy for unsupervised image segmentation. This article describes an initial investigation to determine the suitability of such segmentation techniques for ultrasound images. The extension of the NCut technique to unsupervised clustering is first described. The novel segmentation algorithm is then performed on simulated ultrasound images. Tests are also performed on abdominal and fetal images with the segmentation results compared to manual segmentation. Comparisons with the classical NCut algorithm are also presented. Finally, segmentation results on other types of medical images are shown.
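
    For context, the spectral-clustering pipeline the abstract refers to (pixel affinity graph, graph Laplacian eigenvectors, clustering) is available off the shelf. A small sketch on a synthetic image is shown below; the affinity weighting (beta) and the number of clusters are illustrative assumptions, not the authors' settings.

    ```python
    import numpy as np
    from sklearn.feature_extraction.image import img_to_graph
    from sklearn.cluster import spectral_clustering

    # Synthetic stand-in for an ultrasound slice: two bright blobs on a noisy background.
    rng = np.random.default_rng(0)
    x, y = np.indices((80, 80))
    img = ((x - 25) ** 2 + (y - 25) ** 2 < 150).astype(float) \
        + ((x - 55) ** 2 + (y - 55) ** 2 < 150).astype(float) \
        + 0.2 * rng.standard_normal((80, 80))

    graph = img_to_graph(img)                     # pixel graph weighted by intensity gradients
    beta = 5.0                                    # illustrative sharpness of the affinity
    graph.data = np.exp(-beta * graph.data / graph.data.std()) + 1e-6
    labels = spectral_clustering(graph, n_clusters=3, eigen_solver="arpack", random_state=0)
    segmentation = labels.reshape(img.shape)      # per-pixel cluster labels
    ```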

  10. Robust Atlas-Based Segmentation of Highly Variable Anatomy: Left Atrium Segmentation.

    PubMed

    Depa, Michal; Sabuncu, Mert R; Holmvang, Godtfred; Nezafat, Reza; Schmidt, Ehud J; Golland, Polina

    Automatic segmentation of the heart's left atrium offers great benefits for planning and outcome evaluation of atrial ablation procedures. However, the high anatomical variability of the left atrium presents significant challenges for atlas-guided segmentation. In this paper, we demonstrate an automatic method for left atrium segmentation using weighted voting label fusion and a variant of the demons registration algorithm adapted to handle images with different intensity distributions. We achieve accurate automatic segmentation that is robust to the high anatomical variations in the shape of the left atrium in a clinical dataset of MRA images.
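
    A minimal sketch of the general idea behind weighted voting label fusion follows: each atlas, already registered to the target, votes for the label at every voxel with a weight derived from local intensity similarity. The patch size and Gaussian weighting are assumptions for illustration, not the exact scheme of the paper.

    ```python
    import numpy as np
    from scipy import ndimage

    def weighted_vote(target, atlas_images, atlas_labels, sigma=30.0, patch=3):
        """target: patient image; atlas_images/labels: atlases already warped onto it."""
        votes = np.zeros(target.shape, dtype=float)
        total = np.zeros(target.shape, dtype=float)
        for img, lab in zip(atlas_images, atlas_labels):
            # Local mean squared intensity difference within a small patch.
            mse = ndimage.uniform_filter((target.astype(float) - img) ** 2, size=patch)
            w = np.exp(-mse / (2.0 * sigma ** 2))   # higher weight where the atlas matches
            votes += w * (lab > 0)
            total += w
        return votes / np.maximum(total, 1e-12) >= 0.5
    ```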

  11. A segmentation algorithm for noisy images

    SciTech Connect

    Xu, Y.; Olman, V.; Uberbacher, E.C.

    1996-12-31

    This paper presents a 2-D image segmentation algorithm and addresses issues related to its performance on noisy images. The algorithm segments an image by first constructing a minimum spanning tree representation of the image and then partitioning the spanning tree into sub-trees representing different homogeneous regions. The spanning tree is partitioned in such a way that the sum of gray-level variations over all partitioned subtrees is minimized under the constraints that each subtree has at least a specified number of pixels and two adjacent subtrees have significantly different "average" gray-levels. Two types of noise, transmission errors and Gaussian additive noise, are considered and their effects on the segmentation algorithm are studied. Evaluation results have shown that the segmentation algorithm is robust in the presence of these two types of noise.
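
    A simplified sketch of minimum-spanning-tree segmentation in the spirit of this algorithm is given below: build a 4-connected grid graph weighted by gray-level differences, take its MST, cut edges whose weight exceeds a threshold, and read off the remaining subtrees as regions. The constrained variance minimization of the original paper is replaced here by a simple edge-weight threshold.

    ```python
    import numpy as np
    from scipy.sparse import coo_matrix
    from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

    def mst_segment(img, cut_threshold):
        h, w = img.shape
        idx = np.arange(h * w).reshape(h, w)
        flat = img.ravel().astype(float)
        # Horizontal and vertical edges of the 4-connected pixel grid.
        rows = np.concatenate([idx[:, :-1].ravel(), idx[:-1, :].ravel()])
        cols = np.concatenate([idx[:, 1:].ravel(), idx[1:, :].ravel()])
        weights = np.abs(flat[rows] - flat[cols]) + 1e-6   # keep zero-difference edges
        graph = coo_matrix((weights, (rows, cols)), shape=(h * w, h * w))
        mst = minimum_spanning_tree(graph).tocoo()
        # Drop MST edges with large gray-level variation, then label the resulting subtrees.
        keep = mst.data <= cut_threshold
        pruned = coo_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])), shape=mst.shape)
        _, labels = connected_components(pruned, directed=False)
        return labels.reshape(h, w)
    ```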

  12. Heart region segmentation from low-dose CT scans: an anatomy based approach

    NASA Astrophysics Data System (ADS)

    Reeves, Anthony P.; Biancardi, Alberto M.; Yankelevitz, David F.; Cham, Matthew D.; Henschke, Claudia I.

    2012-02-01

    Cardiovascular disease is a leading cause of death in developed countries. The concurrent detection of heart diseases during low-dose whole-lung CT scans (LDCT), typically performed as part of a screening protocol, hinges on the accurate quantification of coronary calcification. The creation of fully automated methods is ideal as complete manual evaluation is imprecise, operator dependent, time consuming and thus costly. The technical challenges posed by LDCT scans in this context are mainly twofold. First, there is a high level image noise arising from the low radiation dose technique. Additionally, there is a variable amount of cardiac motion blurring due to the lack of electrocardiographic gating and the fact that heart rates differ between human subjects. As a consequence, the reliable segmentation of the heart, the first stage toward the implementation of morphologic heart abnormality detection, is also quite challenging. An automated computer method based on a sequential labeling of major organs and determination of anatomical landmarks has been evaluated on a public database of LDCT images. The novel algorithm builds from a robust segmentation of the bones and airways and embodies a stepwise refinement starting at the top of the lungs where image noise is at its lowest and where the carina provides a good calibration landmark. The segmentation is completed at the inferior wall of the heart where extensive image noise is accommodated. This method is based on the geometry of human anatomy and does not involve training through manual markings. Using visual inspection by an expert reader as a gold standard, the algorithm achieved successful heart and major vessel segmentation in 42 of 45 low-dose CT images. In the 3 remaining cases, the cardiac base was over segmented due to incorrect hemidiaphragm localization.

  13. Medical image segmentation using genetic algorithms.

    PubMed

    Maulik, Ujjwal

    2009-03-01

    Genetic algorithms (GAs) have been found to be effective in the domain of medical image segmentation, since the problem can often be mapped to one of search in a complex and multimodal landscape. The challenges in medical image segmentation arise due to poor image contrast and artifacts that result in missing or diffuse organ/tissue boundaries. The resulting search space is therefore often noisy with a multitude of local optima. Not only does the genetic algorithmic framework prove to be effective in coming out of local optima, it also brings considerable flexibility into the segmentation procedure. In this paper, an attempt has been made to review the major applications of GAs to the domain of medical image segmentation.

  14. A Survey of Digital Image Segmentation Algorithms

    DTIC Science & Technology

    1995-01-01

    features. Thresholding techniques are also useful in segmenting such binary images as printed documents, line drawings, and multispectral and x-ray...algorithms, pixel labeling and run-length connectivity analysis, are discussed in the following sections. Therefore, in examining g(x, y), pixels that are...edge linking, graph searching, curve fitting, Hough transform, and others are applicable to image segmentation. Difficulties with boundary-based methods

  15. Segmentation precision of abdominal anatomy for MRI-based radiotherapy.

    PubMed

    Noel, Camille E; Zhu, Fan; Lee, Andrew Y; Yanle, Hu; Parikh, Parag J

    2014-01-01

    The limited soft tissue visualization provided by computed tomography, the standard imaging modality for radiotherapy treatment planning and daily localization, has motivated studies on the use of magnetic resonance imaging (MRI) for better characterization of treatment sites, such as the prostate and head and neck. However, no studies have been conducted on MRI-based segmentation for the abdomen, a site that could greatly benefit from enhanced soft tissue targeting. We investigated the interobserver and intraobserver precision in segmentation of abdominal organs on MR images for treatment planning and localization. Manual segmentation of 8 abdominal organs was performed by 3 independent observers on MR images acquired from 14 healthy subjects. Observers repeated segmentation 4 separate times for each image set. Interobserver and intraobserver contouring precision was assessed by computing 3-dimensional overlap (Dice coefficient [DC]) and distance to agreement (Hausdorff distance [HD]) of segmented organs. The mean and standard deviation of intraobserver and interobserver DC and HD values were DC(intraobserver) = 0.89 ± 0.12, HD(intraobserver) = 3.6mm ± 1.5, DC(interobserver) = 0.89 ± 0.15, and HD(interobserver) = 3.2mm ± 1.4. Overall, metrics indicated good interobserver/intraobserver precision (mean DC > 0.7, mean HD < 4mm). Results suggest that MRI offers good segmentation precision for abdominal sites. These findings support the utility of MRI for abdominal planning and localization, as emerging MRI technologies, techniques, and onboard imaging devices are beginning to enable MRI-based radiotherapy.
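
    The two agreement metrics reported above, the Dice coefficient (DC) and the Hausdorff distance (HD) between binary organ masks, can be sketched as follows. The distance-transform formulation below assumes 3-D masks and takes an optional voxel spacing so the result is in millimetres.

    ```python
    import numpy as np
    from scipy import ndimage

    def dice(a, b):
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.sum(a & b) / (a.sum() + b.sum())

    def hausdorff(a, b, spacing=(1.0, 1.0, 1.0)):
        """Maximum of the two directed mask-to-mask distances, in mm."""
        a, b = a.astype(bool), b.astype(bool)
        d_to_b = ndimage.distance_transform_edt(~b, sampling=spacing)  # distance to nearest b voxel
        d_to_a = ndimage.distance_transform_edt(~a, sampling=spacing)
        return max(d_to_b[a].max(), d_to_a[b].max())
    ```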

  16. Segmentation precision of abdominal anatomy for MRI-based radiotherapy

    SciTech Connect

    Noel, Camille E.; Zhu, Fan; Lee, Andrew Y.; Yanle, Hu; Parikh, Parag J.

    2014-10-01

    The limited soft tissue visualization provided by computed tomography, the standard imaging modality for radiotherapy treatment planning and daily localization, has motivated studies on the use of magnetic resonance imaging (MRI) for better characterization of treatment sites, such as the prostate and head and neck. However, no studies have been conducted on MRI-based segmentation for the abdomen, a site that could greatly benefit from enhanced soft tissue targeting. We investigated the interobserver and intraobserver precision in segmentation of abdominal organs on MR images for treatment planning and localization. Manual segmentation of 8 abdominal organs was performed by 3 independent observers on MR images acquired from 14 healthy subjects. Observers repeated segmentation 4 separate times for each image set. Interobserver and intraobserver contouring precision was assessed by computing 3-dimensional overlap (Dice coefficient [DC]) and distance to agreement (Hausdorff distance [HD]) of segmented organs. The mean and standard deviation of intraobserver and interobserver DC and HD values were DC(intraobserver) = 0.89 ± 0.12, HD(intraobserver) = 3.6 mm ± 1.5, DC(interobserver) = 0.89 ± 0.15, and HD(interobserver) = 3.2 mm ± 1.4. Overall, metrics indicated good interobserver/intraobserver precision (mean DC > 0.7, mean HD < 4 mm). Results suggest that MRI offers good segmentation precision for abdominal sites. These findings support the utility of MRI for abdominal planning and localization, as emerging MRI technologies, techniques, and onboard imaging devices are beginning to enable MRI-based radiotherapy.

  17. Improving Brain Magnetic Resonance Image (MRI) Segmentation via a Novel Algorithm based on Genetic and Regional Growth

    PubMed Central

    A., Javadpour; A., Mohammadi

    2016-01-01

    Background Regarding the importance of correct diagnosis in medical applications, various methods have been exploited for processing medical images so far. The method of segmentation is used to analyze anatomical structures in medical imaging. Objective This study describes a new method for brain Magnetic Resonance Image (MRI) segmentation via a novel algorithm based on genetic and regional growth. Methods Among medical imaging methods, brain MRI segmentation is important due to the high contrast of non-intrusive soft tissue and high spatial resolution. Size variations of brain tissues are often accompanied by various diseases such as Alzheimer’s disease. As our knowledge about the relation between various brain diseases and deviations of brain anatomy increases, MRI segmentation is exploited as the first step in early diagnosis. In this paper, the regional growth method and automatic selection of initial points by a genetic algorithm are used to introduce a new method for MRI segmentation. Primary pixels and the similarity criterion are automatically selected by genetic algorithms to maximize the accuracy and validity of the image segmentation. Results By using genetic algorithms and defining a fitness function for image segmentation, the initial points for the algorithm were found. The proposed algorithm was applied to the images, and the results were compared with those of regional growth in which the initial points were manually selected. The results showed that the proposed algorithm could reduce segmentation error effectively. Conclusion The study concluded that the proposed algorithm could reduce segmentation error effectively and help us to diagnose brain diseases. PMID:27672629

  18. 3D automatic anatomy segmentation based on iterative graph-cut-ASM

    SciTech Connect

    Chen, Xinjian; Bagci, Ulas

    2011-08-15

    Purpose: This paper studies the feasibility of developing an automatic anatomy segmentation (AAS) system in clinical radiology and demonstrates its operation on clinical 3D images. Methods: The AAS system the authors are developing consists of two main parts: object recognition and object delineation. As for recognition, a hierarchical 3D scale-based multiobject method is used for the multiobject recognition task, which incorporates intensity weighted ball-scale (b-scale) information into the active shape model (ASM). For object delineation, an iterative graph-cut-ASM (IGCASM) algorithm is proposed, which effectively combines the rich statistical shape information embodied in ASM with the globally optimal delineation capability of the GC method. The presented IGCASM algorithm is a 3D generalization of the 2D GC-ASM method that they proposed previously in Chen et al. [Proc. SPIE, 7259, 72590C1-72590C-8 (2009)]. The proposed methods are tested on two datasets comprising clinical abdominal CT scans obtained from 20 patients (10 male and 10 female) and 11 foot magnetic resonance imaging (MRI) scans. The test covers the segmentation of four organs (liver, left and right kidneys, and spleen) and five foot bones (calcaneus, tibia, cuboid, talus, and navicular). The recognition and delineation accuracies were evaluated separately. The recognition accuracy was evaluated in terms of translation, rotation, and scale (size) error. The delineation accuracy was evaluated in terms of true and false positive volume fractions (TPVF, FPVF). The efficiency of the delineation method was also evaluated on an Intel Pentium IV PC with a 3.4 GHz CPU. Results: The recognition accuracies in terms of translation, rotation, and scale error over all organs are about 8 mm, 10 deg. and 0.03, and over all foot bones are about 3.5709 mm, 0.35 deg. and 0.025, respectively. The accuracy of delineation over all organs for all subjects as expressed in TPVF and FPVF is 93.01% and 0.22%, and

  19. New segmentation algorithm for detecting tiny objects

    NASA Astrophysics Data System (ADS)

    Sun, Han; Yang, Jingyu; Ren, Mingwu; Gao, Jian-zhen

    2001-09-01

    Road cracks in the highway surface are very dangerous to traffic. They should be found and repaired as early as possible, so we designed a system for automatically detecting cracks in the highway surface. This system involves several key steps. For instance, the first step, image recording, should use a high-quality photography device because of the high vehicle speed. In addition, the original data are very large, so the system needs huge storage media and some effective compression processing. As the illumination is greatly affected by the environment, it is essential to do some preprocessing first, such as image reconstruction and enhancement. Because the cracks are too tiny to detect, segmentation is rather difficult. This paper proposes a new segmentation method to detect such tiny cracks, even those 2 mm wide. In this algorithm, we first perform edge detection to obtain seeds for the subsequent line growing, then delete the false seeds and extract the crack information. The method is sufficiently accurate and fast.

  20. CT segmentation of dental shapes by anatomy-driven reformation imaging and B-spline modelling.

    PubMed

    Barone, S; Paoli, A; Razionale, A V

    2016-06-01

    Dedicated imaging methods are among the most important tools of modern computer-aided medical applications. In the last few years, cone beam computed tomography (CBCT) has gained popularity in digital dentistry for 3D imaging of jawbones and teeth. However, the anatomy of a maxillofacial region complicates the assessment of tooth geometry and anatomical location when using standard orthogonal views of the CT data set. In particular, a tooth is defined by a sub-region, which cannot be easily separated from surrounding tissues by only considering pixel grey-intensity values. For this reason, an image enhancement is usually necessary in order to properly segment tooth geometries. In this paper, an anatomy-driven methodology to reconstruct individual 3D tooth anatomies by processing CBCT data is presented. The main concept is to generate a small set of multi-planar reformation images along significant views for each target tooth, driven by the individual anatomical geometry of a specific patient. The reformation images greatly enhance the clearness of the target tooth contours. A set of meaningful 2D tooth contours is extracted and used to automatically model the overall 3D tooth shape through a B-spline representation. The effectiveness of the methodology has been verified by comparing some anatomy-driven reconstructions of anterior and premolar teeth with those obtained by using standard tooth segmentation tools. Copyright © 2015 John Wiley & Sons, Ltd.

  1. Automatic lobar segmentation for diseased lungs using an anatomy-based priority knowledge in low-dose CT images

    NASA Astrophysics Data System (ADS)

    Park, Sang Joon; Kim, Jung Im; Goo, Jin Mo; Lee, Doohee

    2014-03-01

    Lung lobar segmentation in CT images is a challenging task because of the limitations in image quality inherent to CT image acquisition, especially low-dose CT in the clinical routine environment. Besides, complex anatomy and abnormal lesions in the lung parenchyma make segmentation difficult, because contrast in CT images is determined by the differential absorption of X-rays by neighboring structures, such as tissue, vessels, or various pathological conditions. Thus, we attempted to develop a robust segmentation technique for normal and diseased lung parenchyma. The images were obtained with low-dose chest CT using a soft reconstruction kernel (Sensation 16, Siemens, Germany). Our PC-based in-house software segmented bronchial trees and lungs with an intensity-adaptive region-growing technique. Then the horizontal and oblique fissures were detected by using the eigenvalue ratio of the Hessian matrix in the lung regions, excluding airways and vessels. To enhance and recover the faithful 3-D fissure plane, our proposed fissure enhancing scheme was applied to the images. After the above steps, a 3-D rolling-ball algorithm was applied in the xyz planes to carefully smooth the fissure planes. Results show that the success rate of our proposed scheme reached 89.5% in the diseased lung parenchyma.
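
    A 2-D illustration of the Hessian eigenvalue-ratio idea used above to enhance fissures follows: at a sheet-like (fissure) pixel one Hessian eigenvalue has large magnitude while the other stays near zero, so an eigenvalue-magnitude ratio acts as a "plateness" score. The exact response function, scale, and 3-D extension of the paper are not reproduced; this is an assumption-laden sketch.

    ```python
    import numpy as np
    from skimage.feature import hessian_matrix, hessian_matrix_eigvals

    def fissure_response(slice_2d, sigma=1.5, eps=1e-6):
        """Eigenvalue-ratio response that is high on thin, bright, sheet-like structures."""
        H = hessian_matrix(slice_2d.astype(float), sigma=sigma, order="rc")
        eigs = hessian_matrix_eigvals(H)          # shape (2, rows, cols), sorted in decreasing order
        lam_hi = np.abs(eigs).max(axis=0)         # dominant curvature magnitude
        lam_lo = np.abs(eigs).min(axis=0)
        ratio = 1.0 - lam_lo / (lam_hi + eps)     # close to 1 for line/sheet-like structures
        bright_sheet = eigs.min(axis=0) < 0       # bright fissure: strong negative curvature
        return ratio * lam_hi * bright_sheet
    ```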

  2. Analysis of image thresholding segmentation algorithms based on swarm intelligence

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Lu, Kai; Gao, Yinghui; Yang, Bo

    2013-03-01

    Swarm intelligence-based image thresholding segmentation algorithms are playing an important role in the research field of image segmentation. In this paper, we briefly introduce the theories of four existing image segmentation algorithms based on swarm intelligence, including the fish swarm algorithm, artificial bee colony, the bacteria foraging algorithm, and particle swarm optimization. Then several benchmark images are tested in order to show the differences among these four algorithms in segmentation accuracy, time consumption, convergence, and robustness to salt-and-pepper noise and Gaussian noise. Through these comparisons, this paper gives a qualitative analysis of the performance differences among the four algorithms. The conclusions in this paper provide significant guidance for practical image segmentation.

  3. Improved document image segmentation algorithm using multiresolution morphology

    NASA Astrophysics Data System (ADS)

    Bukhari, Syed Saqib; Shafait, Faisal; Breuel, Thomas M.

    2011-01-01

    Page segmentation into text and non-text elements is an essential preprocessing step before optical character recognition (OCR) operation. In case of poor segmentation, an OCR classification engine produces garbage characters due to the presence of non-text elements. This paper describes modifications to the text/non-text segmentation algorithm presented by Bloomberg,1 which is also available in his open-source Leptonica library.2 The modifications result in significant improvements and achieved better segmentation accuracy than the original algorithm for UW-III, UNLV, ICDAR 2009 page segmentation competition test images and circuit diagram datasets.

  4. Research of the multimodal brain-tumor segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Lu, Yisu; Chen, Wufan

    2015-12-01

    It is well-known that the number of clusters is one of the most important parameters for automatic segmentation. However, it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without initializing the number of clusters. A new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smoothness constraint is proposed in this study. Besides the segmentation of single-modal brain tumor images, we developed the algorithm to segment multimodal brain tumor images using the magnetic resonance (MR) multimodal features and to obtain the active tumor and edema at the same time. The proposed algorithm is evaluated and compared with other approaches. The accuracy and computation time of our algorithm demonstrate very impressive performance.

  5. Comparative testing of DNA segmentation algorithms using benchmark simulations.

    PubMed

    Elhaik, Eran; Graur, Dan; Josic, Kresimir

    2010-05-01

    Numerous segmentation methods for the detection of compositionally homogeneous domains within genomic sequences have been proposed. Unfortunately, these methods yield inconsistent results. Here, we present a benchmark consisting of two sets of simulated genomic sequences for testing the performances of segmentation algorithms. Sequences in the first set are composed of fixed-sized homogeneous domains, distinct in their between-domain guanine and cytosine (GC) content variability. The sequences in the second set are composed of a mosaic of many short domains and a few long ones, distinguished by sharp GC content boundaries between neighboring domains. We use these sets to test the performance of seven segmentation algorithms in the literature. Our results show that recursive segmentation algorithms based on the Jensen-Shannon divergence outperform all other algorithms. However, even these algorithms perform poorly in certain instances because of the arbitrary choice of a segmentation-stopping criterion.
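
    As a concrete reference point, a compact sketch of recursive Jensen-Shannon segmentation of a DNA sequence into compositionally homogeneous domains is given below. The binary S/W (GC vs. AT) alphabet, the fixed divergence threshold, and the minimum segment length are simplifying assumptions; published methods typically use a statistical significance test as the stopping criterion.

    ```python
    import numpy as np

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def js_divergence(left_counts, right_counts):
        n_l, n_r = left_counts.sum(), right_counts.sum()
        total = left_counts + right_counts
        h_whole = entropy(total / total.sum())
        h_parts = (n_l * entropy(left_counts / n_l) + n_r * entropy(right_counts / n_r)) / (n_l + n_r)
        return h_whole - h_parts

    def segment(seq, lo, hi, threshold=0.02, min_len=50, cuts=None):
        """Recursively cut seq[lo:hi] at the position maximising the Jensen-Shannon divergence."""
        if cuts is None:
            cuts = []
        gc = np.array([c in "GC" for c in seq[lo:hi]], dtype=int)
        cum = np.concatenate([[0], np.cumsum(gc)])
        best_d, best_i = 0.0, None
        for i in range(min_len, len(gc) - min_len):
            left = np.array([cum[i], i - cum[i]], dtype=float)                        # (GC, AT) counts
            right = np.array([cum[-1] - cum[i], len(gc) - i - (cum[-1] - cum[i])], dtype=float)
            d = js_divergence(left, right)
            if d > best_d:
                best_d, best_i = d, i
        if best_i is not None and best_d > threshold:
            segment(seq, lo, lo + best_i, threshold, min_len, cuts)
            cuts.append(lo + best_i)
            segment(seq, lo + best_i, hi, threshold, min_len, cuts)
        return cuts

    print(segment("AT" * 300 + "GC" * 300, 0, 1200))   # -> [600]
    ```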

  6. Segmentation of kidney using C-V model and anatomy priors

    NASA Astrophysics Data System (ADS)

    Lu, Jinghua; Chen, Jie; Zhang, Juan; Yang, Wenjia

    2007-12-01

    This paper presents an approach for kidney segmentation on abdominal CT images as the first step of a virtual reality surgery system. Segmentation of medical images is often challenging because of the objects' complicated anatomical structures, various gray levels, and unclear edges. A coarse-to-fine approach has been applied in the kidney segmentation using the Chan-Vese model (C-V model) and anatomical prior knowledge. In the pre-processing stage, the candidate kidney regions are located. Then the C-V model, formulated by the level set method, is applied in these smaller ROIs, which reduces the computational complexity to a certain extent. Finally, after some mathematical morphology procedures, the specified kidney structures are extracted interactively with prior knowledge. The satisfying results on abdominal CT series show that the proposed approach keeps all the advantages of the C-V model and overcomes its disadvantages.
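
    The C-V (Chan-Vese) level-set model used above is available off the shelf, so a minimal 2-D sketch on a synthetic stand-in for a cropped kidney ROI might look like the following. The parameter values and the synthetic image are illustrative assumptions.

    ```python
    import numpy as np
    from skimage.segmentation import chan_vese

    # Synthetic stand-in for a cropped CT ROI containing a roughly elliptical organ.
    x, y = np.indices((96, 96))
    roi = ((x - 48) ** 2 / 900 + (y - 48) ** 2 / 400 < 1).astype(float)
    roi += 0.3 * np.random.default_rng(0).standard_normal(roi.shape)

    # Binary segmentation of the ROI by minimising the Chan-Vese energy.
    mask = chan_vese(roi, mu=0.25, lambda1=1.0, lambda2=1.0, tol=1e-3)
    ```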

  7. The PCNN adaptive segmentation algorithm based on visual perception

    NASA Astrophysics Data System (ADS)

    Zhao, Yanming

    To solve the adaptive parameter determination problem of the pulse coupled neural network (PCNN) and to improve image segmentation results, a PCNN adaptive segmentation algorithm based on visual perception of information is proposed. Based on the visually perceived image information and a Gabor mathematical model of the receptive field of optic nerve cells, the algorithm adaptively determines the receptive field of each pixel of the image, and adaptively determines the network parameters W, M, and β of the PCNN from the Gabor model, which overcomes the parameter determination problem of the traditional PCNN in the field of image segmentation. Experimental results show that the proposed algorithm can improve the region connectivity and edge regularity of the segmented image, and demonstrate the advantage of using visual perception information in the PCNN for image segmentation.

  8. An improved FCM medical image segmentation algorithm based on MMTD.

    PubMed

    Zhou, Ningning; Yang, Tingting; Zhang, Shaobai

    2014-01-01

    Image segmentation plays an important role in medical image processing. Fuzzy c-means (FCM) is one of the popular clustering algorithms for medical image segmentation. But FCM is highly vulnerable to noise because it does not consider spatial information in image segmentation. This paper introduces the medium mathematics system, which is employed to process fuzzy information for image segmentation. It establishes the medium similarity measure based on the measure of medium truth degree (MMTD) and uses the correlation of a pixel and its neighbors to define the medium membership function. An improved FCM medical image segmentation algorithm based on MMTD, which takes some spatial features into account, is proposed in this paper. The experimental results show that the proposed algorithm is more robust to noise than the standard FCM, with more certainty and less fuzziness. This makes it practical and effective for applications in medical image segmentation.
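
    For context, a compact NumPy sketch of the standard FCM iteration on image intensities is given below, with a simple neighbourhood smoothing of the memberships standing in for spatial regularisation. The MMTD-based membership of the paper is not reproduced; the smoothing, cluster count, and fuzzifier m are assumptions.

    ```python
    import numpy as np
    from scipy import ndimage

    def fcm_segment(img, n_clusters=3, m=2.0, n_iter=50, smooth=True):
        """Fuzzy c-means on pixel intensities; returns a per-pixel hard label map."""
        x = img.astype(float).ravel()
        rng = np.random.default_rng(0)
        centers = rng.choice(x, size=n_clusters, replace=False)
        for _ in range(n_iter):
            dist = np.abs(x[None, :] - centers[:, None]) + 1e-9      # (clusters, pixels)
            u = dist ** (-2.0 / (m - 1.0))
            u /= u.sum(axis=0, keepdims=True)                        # fuzzy memberships
            if smooth:                                               # crude spatial regularisation
                u = np.stack([ndimage.uniform_filter(ui.reshape(img.shape), size=3).ravel()
                              for ui in u])
                u /= u.sum(axis=0, keepdims=True)
            um = u ** m
            centers = (um @ x) / um.sum(axis=1)                      # update cluster centres
        return u.argmax(axis=0).reshape(img.shape)
    ```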

  9. Efficient Algorithms for Segmentation of Item-Set Time Series

    NASA Astrophysics Data System (ADS)

    Chundi, Parvathi; Rosenkrantz, Daniel J.

    We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
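
    The optimal segmentation itself is obtained with a standard dynamic program over segment boundaries; a compact sketch is shown below. Here segment_cost stands in for the item-set measure and segment-difference functions defined in the paper, and the toy cost in the example (items falling outside the segment's common intersection) is an assumption for illustration.

    ```python
    import numpy as np

    def optimal_segmentation(points, k, segment_cost):
        """Split `points` into k contiguous segments minimising the total segment cost."""
        n = len(points)
        cost = np.full((n + 1, k + 1), np.inf)
        back = np.zeros((n + 1, k + 1), dtype=int)
        cost[0, 0] = 0.0
        for j in range(1, k + 1):
            for end in range(j, n + 1):
                for start in range(j - 1, end):
                    c = cost[start, j - 1] + segment_cost(points[start:end])
                    if c < cost[end, j]:
                        cost[end, j], back[end, j] = c, start
        bounds, end = [], n                       # recover boundaries by backtracking
        for j in range(k, 0, -1):
            start = back[end, j]
            bounds.append((start, end))
            end = start
        return bounds[::-1]

    # Item-set example: cost of a segment = items falling outside its common intersection.
    series = [{"a", "b"}, {"a"}, {"a", "c"}, {"x"}, {"x", "y"}, {"y"}]
    cost_fn = lambda seg: sum(len(s - set.intersection(*seg)) for s in seg)
    print(optimal_segmentation(series, 2, cost_fn))   # -> [(0, 3), (3, 6)]
    ```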

  10. A novel iris segmentation algorithm based on small eigenvalue analysis

    NASA Astrophysics Data System (ADS)

    Harish, B. S.; Aruna Kumar, S. V.; Guru, D. S.; Ngo, Minh Ngoc

    2015-12-01

    In this paper, a simple and robust algorithm is proposed for iris segmentation. The proposed method consists of two steps. In the first step, the iris and pupil are segmented using the Robust Spatial Kernel FCM (RSKFCM) algorithm. RSKFCM is based on the traditional Fuzzy c-Means (FCM) algorithm, incorporates spatial information, and uses a kernel metric as the distance measure. In the second step, the small eigenvalue transformation is applied to localize the iris boundary. The transformation is based on statistical and geometrical properties of the small eigenvalue of the covariance matrix of a set of edge pixels. Extensive experiments are carried out on standard benchmark iris datasets (viz., CASIA-IrisV4 and UBIRIS.v2). We compared our proposed method with existing iris segmentation methods. Our proposed method has the least time complexity of O(n(i+p)). The results of the experiments emphasize that the proposed algorithm outperforms the existing iris segmentation methods.

  11. A region growing vessel segmentation algorithm based on spectrum information.

    PubMed

    Jiang, Huiyan; He, Baochun; Fang, Di; Ma, Zhiyuan; Yang, Benqiang; Zhang, Libo

    2013-01-01

    We propose a region growing vessel segmentation algorithm based on spectrum information. First, the algorithm applies a Fourier transform to the region of interest containing vascular structures to obtain its spectrum information, from which its primary feature direction is extracted. Then, combining edge information with the primary feature direction, it computes the vascular structure's center points as the seed points for region growing segmentation. Finally, an improved region growing method with a branch-based growth strategy is used to segment the vessels. To prove the effectiveness of our algorithm, we conduct experiments on retinal images and abdominal liver vascular CT images. The results show that the proposed vessel segmentation algorithm can not only extract a high-quality target vessel region but also effectively reduce manual intervention.

  12. Automated segment matching algorithm-theory, test, and evaluation

    NASA Technical Reports Server (NTRS)

    Kalcic, M. T. (Principal Investigator)

    1982-01-01

    Results to automate the U.S. Department of Agriculture's process of segment shifting and obtain results within one-half pixel accuracy are presented. Given an initial registration, the digitized segment is shifted until a more precise fit to the LANDSAT data is found. The algorithm automates the shifting process and performs certain tests for matching and accepting the computed shift numbers. Results indicate the algorithm can obtain results within one-half pixel accuracy.

  13. Robust and accurate star segmentation algorithm based on morphology

    NASA Astrophysics Data System (ADS)

    Jiang, Jie; Lei, Liu; Guangjun, Zhang

    2016-06-01

    Star tracker is an important instrument of measuring a spacecraft's attitude; it measures a spacecraft's attitude by matching the stars captured by a camera and those stored in a star database, the directions of which are known. Attitude accuracy of star tracker is mainly determined by star centroiding accuracy, which is guaranteed by complete star segmentation. Current algorithms of star segmentation cannot suppress different interferences in star images and cannot segment stars completely because of these interferences. To solve this problem, a new star target segmentation algorithm is proposed on the basis of mathematical morphology. The proposed algorithm utilizes the margin structuring element to detect small targets and the opening operation to suppress noises, and a modified top-hat transform is defined to extract stars. A combination of three different structuring elements is utilized to define a new star segmentation algorithm, and the influence of three different structural elements on the star segmentation results is analyzed. Experimental results show that the proposed algorithm can suppress different interferences and segment stars completely, thus providing high star centroiding accuracy.
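
    The modified top-hat transform described above is specific to the paper, but the standard white top-hat already captures the core idea: subtracting a morphological opening so that small bright star spots stand out against slowly varying background and noise. A small sketch on synthetic data follows; the footprint size and threshold are illustrative assumptions.

    ```python
    import numpy as np
    from skimage.morphology import white_tophat, disk

    rng = np.random.default_rng(1)
    frame = 0.1 * rng.random((128, 128))             # noisy background
    frame[30, 40] += 1.0                             # a few point-like "stars"
    frame[90, 100] += 0.8

    stars = white_tophat(frame, footprint=disk(3))   # keeps structures smaller than the footprint
    mask = stars > 0.3                               # segmented star pixels (illustrative threshold)
    ```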

  14. Algorithms For Segmentation Of Complex-Amplitude SAR Data

    NASA Technical Reports Server (NTRS)

    Rignot, Eric J. M.; Chellappa, Ramalingam

    1993-01-01

    Several algorithms implement an improved method of segmenting highly speckled, high-resolution, complex-amplitude synthetic-aperture-radar (SAR) digitized images into regions, within each of which the backscattering characteristics are similar or homogeneous from place to place. Method provides for approximate, deterministic solution by two alternative algorithms almost always converging to local minimums: one, Iterative Conditional Modes (ICM) algorithm, which locally maximizes posterior probability density of region labels; other, Maximum Posterior Marginal (MPM) algorithm, which maximizes posterior marginal density of region labels at each pixel location. ICM algorithm optimizes reconstruction of underlying scene. MPM algorithm minimizes expected number of misclassified pixels, possibly better in remote sensing of natural scenes.

  15. Automatic thoracic anatomy segmentation on CT images using hierarchical fuzzy models and registration

    NASA Astrophysics Data System (ADS)

    Sun, Kaioqiong; Udupa, Jayaram K.; Odhner, Dewey; Tong, Yubing; Torigian, Drew A.

    2014-03-01

    This paper proposes a thoracic anatomy segmentation method based on hierarchical recognition and delineation guided by a built fuzzy model. Labeled binary samples for each organ are registered and aligned into a 3D fuzzy set representing the fuzzy shape model for the organ. The gray intensity distributions of the corresponding regions of the organ in the original image are recorded in the model. The hierarchical relation and mean location relation between different organs are also captured in the model. Following the hierarchical structure and location relation, the fuzzy shape model of different organs is registered to the given target image to achieve object recognition. A fuzzy connected delineation method is then used to obtain the final segmentation result of organs with seed points provided by recognition. The hierarchical structure and location relation integrated in the model provide the initial parameters for registration and make the recognition efficient and robust. The 3D fuzzy model combined with hierarchical affine registration ensures that accurate recognition can be obtained for both non-sparse and sparse organs. The results on real images are presented and shown to be better than a recently reported fuzzy model-based anatomy recognition strategy.

  16. Segmentation of thermographic images of hands using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Ghosh, Payel; Mitchell, Melanie; Gold, Judith

    2010-01-01

    This paper presents a new technique for segmenting thermographic images using a genetic algorithm (GA). The individuals of the GA also known as chromosomes consist of a sequence of parameters of a level set function. Each chromosome represents a unique segmenting contour. An initial population of segmenting contours is generated based on the learned variation of the level set parameters from training images. Each segmenting contour (an individual) is evaluated for its fitness based on the texture of the region it encloses. The fittest individuals are allowed to propagate to future generations of the GA run using selection, crossover and mutation. The dataset consists of thermographic images of hands of patients suffering from upper extremity musculo-skeletal disorders (UEMSD). Thermographic images are acquired to study the skin temperature as a surrogate for the amount of blood flow in the hands of these patients. Since entire hands are not visible on these images, segmentation of the outline of the hands on these images is typically performed by a human. In this paper several different methods have been tried for segmenting thermographic images: Gabor-wavelet-based texture segmentation method, the level set method of segmentation and our GA which we termed LSGA because it combines level sets with genetic algorithms. The results show a comparative evaluation of the segmentation performed by all the methods. We conclude that LSGA successfully segments entire hands on images in which hands are only partially visible.

  17. Segmentation algorithms for ear image data towards biomechanical studies.

    PubMed

    Ferreira, Ana; Gentil, Fernanda; Tavares, João Manuel R S

    2014-01-01

    In recent years, the segmentation, i.e. the identification, of ear structures in video-otoscopy, computerised tomography (CT) and magnetic resonance (MR) image data, has gained significant importance in the medical imaging area, particularly in CT and MR imaging. Segmentation is the fundamental step of any automated technique for supporting medical diagnosis and, in particular, in biomechanics studies, for building realistic geometric models of ear structures. In this paper, a review of the algorithms used in ear segmentation is presented. The review includes an introduction to the usual biomechanical modelling approaches and also to the common imaging modalities. Afterwards, several segmentation algorithms for ear image data are described, and their specificities and difficulties as well as their advantages and disadvantages are identified and analysed using experimental examples. Finally, the conclusions are presented as well as a discussion about possible trends for future research concerning ear segmentation.

  18. An enhanced fast scanning algorithm for image segmentation

    NASA Astrophysics Data System (ADS)

    Ismael, Ahmed Naser; Yusof, Yuhanis binti

    2015-12-01

    Segmentation is an essential and important process that separates an image into regions that have similar characteristics or features. This transforms the image for better image analysis and evaluation. An important benefit of segmentation is the identification of the region of interest in a particular image. Various algorithms have been proposed for image segmentation, including the Fast Scanning algorithm, which has been employed on food, sport and medical images. It scans all pixels in the image and clusters each pixel according to the upper and left neighbor pixels. The clustering process in the Fast Scanning algorithm is performed by merging pixels with similar neighbors based on an identified threshold. Such an approach leads to weak reliability and shape matching of the produced segments. This paper proposes an adaptive threshold function to be used in the clustering process of the Fast Scanning algorithm. This function uses the gray value and variance of the image's pixels. Pixel levels above the threshold are converted into intensity values between 0 and 1, and other values are converted to zero. The proposed enhanced Fast Scanning algorithm is evaluated on images of public and private transportation in Iraq by comparing the images produced by the proposed algorithm and by the standard Fast Scanning algorithm. The results showed that the proposed algorithm is faster than standard Fast Scanning.

  19. PCNN document segmentation method based on bacterial foraging optimization algorithm

    NASA Astrophysics Data System (ADS)

    Liao, Yanping; Zhang, Peng; Guo, Qiang; Wan, Jian

    2014-04-01

    The Pulse Coupled Neural Network (PCNN) is widely used in the field of image processing, but it is difficult to define its parameters properly in research on PCNN applications. So far, determining the parameters of its model has required a lot of experiments. To deal with this problem, a document segmentation method based on an improved PCNN is proposed. It uses the maximum entropy function as the fitness function of the bacterial foraging optimization algorithm, adopts the bacterial foraging optimization algorithm to search for the optimal parameters, and eliminates the trouble of manually setting the experimental parameters. Experimental results show that the proposed algorithm can effectively complete document segmentation, and the segmentation results are better than those of the comparison algorithms.

  20. Performance evaluation of image segmentation algorithms on microscopic image data.

    PubMed

    Beneš, Miroslav; Zitová, Barbara

    2015-01-01

    In our paper, we present a performance evaluation of image segmentation algorithms on microscopic image data. In spite of the existence of many algorithms for image data partitioning, there is no universal and 'best' method yet. Moreover, images of microscopic samples can vary in character and quality, which can negatively influence the performance of image segmentation algorithms. Thus, the issue of selecting a suitable method for a given set of image data is of great interest. We carried out a large number of experiments with a variety of segmentation methods to evaluate the behaviour of individual approaches on the testing set of microscopic images (cross-section images taken in three different modalities from the field of art restoration). The segmentation results were assessed by several indices used for measuring the output quality of image segmentation algorithms. In the end, the benefit of the segmentation combination approach is studied and the applicability of the achieved results to another representative of the microscopic data category - biological samples - is shown.

  1. Modeling and segmentation of intra-cochlear anatomy in conventional CT

    NASA Astrophysics Data System (ADS)

    Noble, Jack H.; Rutherford, Robert B.; Labadie, Robert F.; Majdani, Omid; Dawant, Benoit M.

    2010-03-01

    Cochlear implant surgery is a procedure performed to treat profound hearing loss. Since the cochlea is not visible in surgery, the physician uses anatomical landmarks to estimate the pose of the cochlea. Research has indicated that implanting the electrode in a particular cavity of the cochlea, the scala tympani, results in better hearing restoration. The success of the scala tympani implantation is largely dependent on the point of entry and angle of electrode insertion. Errors can occur due to the imprecise nature of landmark-based, manual navigation as well as inter-patient variations between scala tympani and the anatomical landmarks. In this work, we use point distribution models of the intra-cochlear anatomy to study the inter-patient variations between the cochlea and the typical anatomic landmarks, and we implement an active shape model technique to automatically localize intra-cochlear anatomy in conventional CT images, where intra-cochlear structures are not visible. This fully automatic segmentation could aid the surgeon to choose the point of entry and angle of approach to maximize the likelihood of scala tympani insertion, resulting in more substantial hearing restoration.

  2. A hybrid algorithm for the segmentation of books in libraries

    NASA Astrophysics Data System (ADS)

    Hu, Zilong; Tang, Jinshan; Lei, Liang

    2016-05-01

    This paper proposes an algorithm for book segmentation based on bookshelf images. The algorithm can be separated into three parts. The first part is pre-processing, aimed at eliminating or decreasing the effect of image noise and illumination conditions. The second part is near-horizontal line detection based on a Canny edge detector, which separates a bookshelf image into multiple sub-images so that each sub-image contains an individual shelf. The last part is book segmentation: in each shelf image, near-vertical lines are detected, and the obtained lines are used for book segmentation. The proposed algorithm was tested with bookshelf images taken from the OPIE library at MTU, and the experimental results demonstrate good performance.
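
    A minimal sketch of the shelf-line detection step, assuming OpenCV is available (the parameter values below are illustrative, not the paper's): Canny edges followed by a probabilistic Hough transform, keeping only near-horizontal lines.

```python
import cv2
import numpy as np

def near_horizontal_lines(gray, angle_tol_deg=10):
    """Detect roughly horizontal shelf lines in an 8-bit grayscale bookshelf image."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=gray.shape[1] // 3, maxLineGap=10)
    keep = []
    for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        if angle < angle_tol_deg or angle > 180 - angle_tol_deg:
            keep.append((x1, y1, x2, y2))
    return keep
```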

  3. Impact of Multiscale Retinex Computation on Performance of Segmentation Algorithms

    NASA Technical Reports Server (NTRS)

    Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.; Hines, Glenn D.

    2004-01-01

    Classical segmentation algorithms subdivide an image into its constituent components based upon some metric that defines commonality between pixels. Often, these metrics incorporate some measure of "activity" in the scene, e.g. the amount of detail that is in a region. The Multiscale Retinex with Color Restoration (MSRCR) is a general purpose, non-linear image enhancement algorithm that significantly affects the brightness, contrast and sharpness within an image. In this paper, we will analyze the impact the MSRCR has on segmentation results and performance.
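
    For illustration, a bare-bones single-channel multiscale retinex (without the color restoration step of MSRCR) can be sketched as the average of log(image) minus log(Gaussian surround) over several scales; the scales below are illustrative assumptions, not the published parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(gray, sigmas=(15, 80, 250)):
    """Average of single-scale retinex outputs; gray is a non-negative float image."""
    img = np.asarray(gray, dtype=float) + 1.0          # avoid log(0)
    out = np.zeros_like(img)
    for sigma in sigmas:
        surround = gaussian_filter(img, sigma)
        out += np.log(img) - np.log(surround)
    return out / len(sigmas)
```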

  4. Sensitivity field distributions for segmental bioelectrical impedance analysis based on real human anatomy

    NASA Astrophysics Data System (ADS)

    Danilov, A. A.; Kramarenko, V. K.; Nikolaev, D. V.; Rudnev, S. G.; Salamatova, V. Yu; Smirnov, A. V.; Vassilevski, Yu V.

    2013-04-01

    In this work, an adaptive unstructured tetrahedral mesh generation technology is applied to the simulation of segmental bioimpedance measurements using a high-resolution whole-body model of the Visible Human Project man. Sensitivity field distributions for a conventional tetrapolar, as well as eight- and ten-electrode, measurement configurations are obtained. Based on the ten-electrode configuration, we suggest an algorithm for monitoring changes in the upper lung area.

  5. a Review of Point Clouds Segmentation and Classification Algorithms

    NASA Astrophysics Data System (ADS)

    Grilli, E.; Menna, F.; Remondino, F.

    2017-02-01

    Today, 3D models and point clouds are very popular, being currently used in several fields, shared through the internet and even accessed on mobile phones. Despite their broad availability, there is still a relevant need for methods, preferably automatic, to provide 3D data with meaningful attributes that characterize and give significance to the objects represented in 3D. Segmentation is the process of grouping point clouds into multiple homogeneous regions with similar properties, whereas classification is the step that labels these regions. The main goal of this paper is to analyse the most popular methodologies and algorithms to segment and classify 3D point clouds. Strong and weak points of the different solutions presented in the literature or implemented in commercial software will be listed and shortly explained. For some algorithms, the results of the segmentation and classification are shown using real examples at different scales in the Cultural Heritage field. Finally, open issues and research topics will be discussed.

  6. Split Bregman's algorithm for three-dimensional mesh segmentation

    NASA Astrophysics Data System (ADS)

    Habiba, Nabi; Ali, Douik

    2016-05-01

    Variational methods have attracted a lot of attention in the literature, especially for image and mesh segmentation. The methods aim at minimizing the energy to optimize both edge and region detections. We propose a spectral mesh decomposition algorithm to obtain disjoint but meaningful regions of an input mesh. The related optimization problem is nonconvex, and it is very difficult to find a good approximation or global optimum, which represents a challenge in computer vision. We propose an alternating split Bregman algorithm for mesh segmentation, where we extended the image-dedicated model to a three-dimensional (3-D) mesh one. By applying our scheme to 3-D mesh segmentation, we obtain fast solvers that can outperform various conventional ones, such as graph-cut and primal dual methods. A consistent evaluation of the proposed method on various public domain 3-D databases for different metrics is elaborated, and a comparison with the state-of-the-art is performed.

  7. Fully automatic algorithm for segmenting full human diaphragm in non-contrast CT Images

    NASA Astrophysics Data System (ADS)

    Karami, Elham; Gaede, Stewart; Lee, Ting-Yim; Samani, Abbas

    2015-03-01

    The diaphragm is a sheet of muscle which separates the thorax from the abdomen, and it acts as the most important muscle of the respiratory system. As such, an accurate segmentation of the diaphragm not only provides key information for functional analysis of the respiratory system, but can also be used for locating other abdominal organs such as the liver. However, diaphragm segmentation is extremely challenging in non-contrast CT images due to the diaphragm's similar appearance to other abdominal organs. In this paper, we present a fully automatic algorithm for diaphragm segmentation in non-contrast CT images. The method is mainly based on a priori knowledge about human diaphragm anatomy. The diaphragm domes are in contact with the lungs and the heart, while its circumference runs along the lumbar vertebrae of the spine as well as the inferior border of the ribs and sternum. As such, the diaphragm can be delineated by segmenting these organs and then properly connecting the relevant parts of their outlines. More specifically, the bottom surfaces of the lungs and heart, the spine borders and the ribs are delineated, leading to a set of scattered points which represent the diaphragm's geometry. Next, a B-spline filter is used to find the smoothest surface which passes through these points. This algorithm was tested on a non-contrast CT image of a lung cancer patient. The results indicate an average Hausdorff distance of 2.96 mm between the automatic and manually segmented diaphragms, which implies favourable accuracy.
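
    As a hedged sketch of the final surface-fitting step only (the organ delineation steps are anatomy-specific and not shown), scattered diaphragm points (x, y, z) could be fitted with a smoothing bivariate spline, e.g. using SciPy; the smoothing factor below is an arbitrary placeholder, not the paper's setting.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

def fit_diaphragm_surface(x, y, z, smoothing=1.0):
    """Fit a smooth surface z = f(x, y) through scattered boundary points."""
    return SmoothBivariateSpline(x, y, z, s=smoothing * len(x))

# Pointwise evaluation at query locations (arrays of equal length):
# spline = fit_diaphragm_surface(x, y, z)
# z_query = spline.ev(x_query, y_query)
```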

  8. Magnetic resonance segmentation with the bubble wave algorithm

    NASA Astrophysics Data System (ADS)

    Cline, Harvey E.; Ludke, Siegwalt

    2003-05-01

    A new bubble wave algorithm provides automatic segmentation of three-dimensional magnetic resonance images of both the peripheral vasculature and the brain. Simple connectivity algorithms are not reliable in these medical applications because there are unwanted connections through background noise. The bubble wave algorithm restricts connectivity using curvature by testing spherical regions on a propagating active contour to eliminate noise bridges. After the user places seeds in both the selected regions and in the regions that are not desired, the method provides the critical threshold for segmentation using binary search. Today, peripheral vascular disease is diagnosed using magnetic resonance imaging with a timed contrast bolus. A new blood pool contrast agent MS-325 (Epix Medical) binds to albumen in the blood and provides high-resolution three-dimensional images of both arteries and veins. The bubble wave algorithm provides a means to automatically suppress the veins that obscure the arteries in magnetic resonance angiography. Monitoring brain atrophy is needed for trials of drugs that retard the progression of dementia. The brain volume is measured by placing seeds in both the brain and scalp to find the critical threshold that prevents connections between the brain volume and the scalp. Examples from both three-dimensional magnetic resonance brain and contrast enhanced vascular images were segmented with minimal user intervention.
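
    The curvature-limited "bubble wave" propagation itself is not reproduced here; the sketch below only illustrates the binary search for a critical threshold at which two user seeds (e.g. brain and scalp) fall into separate connected components, using plain 26-connectivity as a stand-in for the propagating contour.

```python
import numpy as np
from scipy import ndimage

def critical_threshold(volume, seed_keep, seed_reject, lo, hi, iters=20):
    """Binary search for the lowest threshold that disconnects the two seed voxels."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        mask = volume >= mid
        labels, _ = ndimage.label(mask, structure=np.ones((3, 3, 3)))
        connected = (mask[seed_keep] and mask[seed_reject]
                     and labels[seed_keep] == labels[seed_reject])
        if connected:
            lo = mid          # still leaking through a noise bridge: raise the threshold
        else:
            hi = mid          # seeds separated: try a lower threshold
    return hi
```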

  9. Joint graph cut and relative fuzzy connectedness image segmentation algorithm.

    PubMed

    Ciesielski, Krzysztof Chris; Miranda, Paulo A V; Falcão, Alexandre X; Udupa, Jayaram K

    2013-12-01

    We introduce an image segmentation algorithm, called GC(sum)(max), which combines, in a novel manner, the strengths of two popular algorithms: Relative Fuzzy Connectedness (RFC) and (standard) Graph Cut (GC). We show, both theoretically and experimentally, that GC(sum)(max) preserves the robustness of RFC with respect to the seed choice (thus avoiding the "shrinking problem" of GC), while keeping GC's stronger control over the problem of "leaking through poorly defined boundary segments." The analysis of GC(sum)(max) is greatly facilitated by our recent theoretical results showing that RFC can be described within the framework of Generalized GC (GGC) segmentation algorithms. In our implementation of GC(sum)(max) we use, as a subroutine, a version of the RFC algorithm (based on the Image Foresting Transform) that runs (provably) in linear time with respect to the image size. This results in GC(sum)(max) running in time close to linear. Experimental comparison of GC(sum)(max) to GC, an iterative version of RFC (IRFC), and power watershed (PW), based on a variety of medical and non-medical images, indicates superior accuracy of GC(sum)(max) over these other methods, resulting in a rank ordering of GC(sum)(max)>PW∼IRFC>GC.

  10. Iris Segmentation and Normalization Algorithm Based on Zigzag Collarette

    NASA Astrophysics Data System (ADS)

    Rizky Faundra, M.; Ratna Sulistyaningrum, Dwi

    2017-01-01

    In this paper, we propose an iris segmentation and normalization algorithm based on the zigzag collarette. First, iris images are processed using Canny edge detection to detect the pupil edge, and the center and radius of the pupil are then found with the circular Hough transform. Next, the important part of the iris is isolated based on the zigzag collarette area. Finally, the Daugman rubber sheet model is applied to obtain a fixed-dimension (normalized) iris by transforming Cartesian coordinates into polar format, and a thresholding technique is used to remove the eyelid and eyelashes. The experiments were conducted with grayscale eye images taken from the iris database of the Chinese Academy of Sciences Institute of Automation (CASIA), which is reliable and widely used in iris biometrics research. The results show that a threshold level of 0.3 gives better accuracy than other levels, so the present algorithm can be used to segment and normalize the zigzag collarette with an accuracy of 98.88%.
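
    A minimal sketch of the rubber-sheet normalization step (nearest-neighbour sampling and a concentric-circle assumption rather than Daugman's full boundary model): each radial position between the pupil and iris boundaries is mapped to a fixed-size polar grid.

```python
import numpy as np

def rubber_sheet(gray, cx, cy, r_pupil, r_iris, n_radial=64, n_angular=256):
    """Unwrap the iris annulus into an n_radial x n_angular rectangular strip."""
    thetas = np.linspace(0, 2 * np.pi, n_angular, endpoint=False)
    radii = np.linspace(0, 1, n_radial)
    out = np.zeros((n_radial, n_angular), dtype=gray.dtype)
    for i, r in enumerate(radii):
        rr = r_pupil + r * (r_iris - r_pupil)        # interpolate between the two boundaries
        xs = np.clip(np.round(cx + rr * np.cos(thetas)).astype(int), 0, gray.shape[1] - 1)
        ys = np.clip(np.round(cy + rr * np.sin(thetas)).astype(int), 0, gray.shape[0] - 1)
        out[i] = gray[ys, xs]
    return out
```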

  11. Facial Skin Segmentation Using Bacterial Foraging Optimization Algorithm

    PubMed Central

    Bakhshali, Mohamad Amin; Shamsi, Mousa

    2012-01-01

    Nowadays, analyzing human facial images has gained ever-increasing importance due to its various applications. Image segmentation is required as a very important and fundamental operation for significant analysis and interpretation of images. Among the segmentation methods, image thresholding is one of the most well-known techniques due to its simplicity, robustness, and high precision. Thresholding based on optimization of an objective function is among the best such methods; numerous optimization methods exist, of which bacterial foraging optimization (BFO) is among the most efficient and novel. Using this method, the optimal threshold is extracted and segmentation of facial skin is then performed. In the proposed method, the color facial image is first converted from the RGB color space to the Improved Hue-Luminance-Saturation (IHLS) color space, because IHLS provides a good mapping of skin color. Thresholding is performed with an entropy-based method, and BFO is used to find the optimum threshold. To analyze the proposed algorithm, color images from the database of Sahand University of Technology of Tabriz, Iran were used, and thresholding was also performed with the Otsu and Kapur methods. To better assess the proposed algorithm, a genetic algorithm (GA) was also used for finding the optimum threshold. The proposed method shows better results than the other thresholding methods, including misclassification error accuracy (88%), non-uniformity accuracy (89%), and region's area error accuracy (89%). PMID:23724370

  12. Facial skin segmentation using bacterial foraging optimization algorithm.

    PubMed

    Bakhshali, Mohamad Amin; Shamsi, Mousa

    2012-10-01

    Nowadays, analyzing human facial images has gained ever-increasing importance due to its various applications. Image segmentation is required as a very important and fundamental operation for significant analysis and interpretation of images. Among the segmentation methods, image thresholding is one of the most well-known techniques due to its simplicity, robustness, and high precision. Thresholding based on optimization of an objective function is among the best such methods; numerous optimization methods exist, of which bacterial foraging optimization (BFO) is among the most efficient and novel. Using this method, the optimal threshold is extracted and segmentation of facial skin is then performed. In the proposed method, the color facial image is first converted from the RGB color space to the Improved Hue-Luminance-Saturation (IHLS) color space, because IHLS provides a good mapping of skin color. Thresholding is performed with an entropy-based method, and BFO is used to find the optimum threshold. To analyze the proposed algorithm, color images from the database of Sahand University of Technology of Tabriz, Iran were used, and thresholding was also performed with the Otsu and Kapur methods. To better assess the proposed algorithm, a genetic algorithm (GA) was also used for finding the optimum threshold. The proposed method shows better results than the other thresholding methods, including misclassification error accuracy (88%), non-uniformity accuracy (89%), and region's area error accuracy (89%).

  13. Accurate colon residue detection algorithm with partial volume segmentation

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Liang, Zhengrong; Zhang, PengPeng; Kutcher, Gerald J.

    2004-05-01

    Colon cancer is the second leading cause of cancer-related death in the United States. Earlier detection and removal of polyps can dramatically reduce the chance of developing a malignant tumor. Due to some limitations of optical colonoscopy used in the clinic, many researchers have developed virtual colonoscopy as an alternative technique, in which accurate colon segmentation is crucial. However, the partial volume effect and the existence of residue make this very challenging. The electronic colon cleaning technique proposed by Chen et al. is a very attractive method, but it is a kind of hard segmentation method and, as mentioned in their paper, produces some artifacts that might affect accurate colon reconstruction. In our paper, instead of labeling each voxel with a unique label or tissue type, the percentage of different tissues within each voxel, which we call a mixture, was considered in establishing a maximum a posteriori probability (MAP) image-segmentation framework. A Markov random field (MRF) model was developed to reflect the spatial information for the tissue mixtures. The spatial information based on hard segmentation was used to determine which tissue types are present in a specific voxel. Parameters of each tissue class were estimated by the expectation-maximization (EM) algorithm during the MAP tissue-mixture segmentation. Real CT experimental results demonstrated that the partial volume effects between four tissue types were precisely detected. Meanwhile, the residue was electronically removed and a very smooth and clean interface along the colon wall was obtained.
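
    The full MAP-MRF mixture segmentation is beyond a short snippet; as a hedged illustration of the EM step that estimates per-tissue parameters, a plain one-dimensional Gaussian mixture EM over voxel intensities (with no spatial prior) looks like this.

```python
import numpy as np

def em_gaussian_mixture(intensities, n_classes=4, iters=50):
    """Estimate means, variances and mixing weights of tissue classes by EM."""
    x = np.asarray(intensities, float).ravel()
    mu = np.quantile(x, np.linspace(0.1, 0.9, n_classes))    # crude initialisation
    var = np.full(n_classes, x.var() / n_classes)
    w = np.full(n_classes, 1.0 / n_classes)
    for _ in range(iters):
        # E-step: responsibility of each class for each voxel
        d = x[:, None] - mu[None, :]
        lik = w * np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M-step: update parameters from the soft assignments
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = np.maximum((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk, 1e-6)
        w = nk / len(x)
    return mu, var, w
```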

  14. Aberrant Lower Extremity Arterial Anatomy in Microvascular Free Fibula Flap Candidates: Management Algorithm and Case Presentations.

    PubMed

    Golas, Alyssa R; Levine, Jamie P; Ream, Justin; Rodriguez, Eduardo D

    2016-10-14

    An accurate and comprehensive understanding of lower extremity arterial anatomy is essential for the successful harvest and transfer of a free fibula osteoseptocutaneous flap (FFF). Minimum preoperative evaluation includes detailed history and physical including lower extremity pulse examination. Controversy exists regarding whether preoperative angiographic imaging should be performed for all patients. Elevation of an FFF necessitates division of the peroneal artery in the proximal lower leg and eradicates its downstream flow. For patients in whom the peroneal artery comprises the dominant arterial supply to the foot, FFF elevation is contraindicated. Detailed preoperative knowledge of patient-specific lower extremity arterial anatomy can help to avoid ischemia or limb loss resulting from FFF harvest. If preoperative angiographic imaging is omitted, careful attention must be paid to intraoperative anatomy. Should pedal perfusion rely on the peroneal artery, reconstructive options other than an FFF must be pursued. Given the complexity of surgical decision making, the authors propose an algorithm to guide the surgeon from the preoperative evaluation of the potential free fibula flap patient to the final execution of the surgical plan. The authors also provide 3 clinical patients in whom aberrant lower extremity anatomy was encountered and describe each patient's surgical course.

  15. Aberrant Lower Extremity Arterial Anatomy in Microvascular Free Fibula Flap Candidates: Management Algorithm and Case Presentations.

    PubMed

    Golas, Alyssa R; Levine, Jamie P; Ream, Justin; Rodriguez, Eduardo D

    2016-11-01

    An accurate and comprehensive understanding of lower extremity arterial anatomy is essential for the successful harvest and transfer of a free fibula osteoseptocutaneous flap (FFF). Minimum preoperative evaluation includes detailed history and physical including lower extremity pulse examination. Controversy exists regarding whether preoperative angiographic imaging should be performed for all patients. Elevation of an FFF necessitates division of the peroneal artery in the proximal lower leg and eradicates its downstream flow. For patients in whom the peroneal artery comprises the dominant arterial supply to the foot, FFF elevation is contraindicated. Detailed preoperative knowledge of patient-specific lower extremity arterial anatomy can help to avoid ischemia or limb loss resulting from FFF harvest. If preoperative angiographic imaging is omitted, careful attention must be paid to intraoperative anatomy. Should pedal perfusion rely on the peroneal artery, reconstructive options other than an FFF must be pursued. Given the complexity of surgical decision making, the authors propose an algorithm to guide the surgeon from the preoperative evaluation of the potential free fibula flap patient to the final execution of the surgical plan. The authors also provide 3 clinical patients in whom aberrant lower extremity anatomy was encountered and describe each patient's surgical course.

  16. Sampling protein conformations using segment libraries and a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Gunn, John R.

    1997-03-01

    We present a new simulation algorithm for minimizing empirical contact potentials for a simplified model of protein structure. The model consists of backbone atoms only (including Cβ) with the φ and ψ dihedral angles as the only degrees of freedom. In addition, φ and ψ are restricted to a finite set of 532 discrete pairs of values, and the secondary structural elements are held fixed in ideal geometries. The potential function consists of a look-up table based on discretized inter-residue atomic distances. The minimization consists of two principal elements: the use of preselected lists of trial moves and the use of a genetic algorithm. The trial moves consist of substitutions of one or two complete loop regions, and the lists are in turn built up using preselected lists of randomly-generated three-residue segments. The genetic algorithm consists of mutation steps (namely, the loop replacements), as well as a hybridization step in which new structures are created by combining parts of two "parents'' and a selection step in which hybrid structures are introduced into the population. These methods are combined into a Monte Carlo simulated annealing algorithm which has the overall structure of a random walk on a restricted set of preselected conformations. The algorithm is tested using two types of simple model potential. The first uses global information derived from the radius of gyration and the rms deviation to drive the folding, whereas the second is based exclusively on distance-geometry constraints. The hierarchical algorithm significantly outperforms conventional Monte Carlo simulation for a set of test proteins in both cases, with the greatest advantage being for the largest molecule having 193 residues. When tested on a realistic potential function, the method consistently generates structures ranked lower than the crystal structure. The results also show that the improved efficiency of the hierarchical algorithm exceeds that which would be anticipated
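
    As a toy, hypothetical sketch of the genetic-algorithm skeleton described above (segment mutation, hybridization of two parents, and selection into the population), the loop below evolves vectors of discrete (phi, psi) state indices under an arbitrary user-supplied score function; it does not reproduce the authors' segment libraries or contact potential.

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve(score, n_residues, n_states=532, pop_size=40, generations=200):
    """Minimise score(conformation) over vectors of discrete dihedral state indices."""
    pop = rng.integers(0, n_states, size=(pop_size, n_residues))
    for _ in range(generations):
        # Mutation: replace a short random segment of a random parent
        child = pop[rng.integers(pop_size)].copy()
        i = rng.integers(n_residues - 3)
        child[i:i + 3] = rng.integers(0, n_states, size=3)
        # Hybridization: splice the head of one parent onto the tail of another
        a, b = pop[rng.integers(pop_size)], pop[rng.integers(pop_size)]
        cut = rng.integers(1, n_residues)
        hybrid = np.concatenate([a[:cut], b[cut:]])
        # Selection: a candidate replaces the current worst member if it scores better
        for cand in (child, hybrid):
            worst = np.argmax([score(p) for p in pop])
            if score(cand) < score(pop[worst]):
                pop[worst] = cand
    return min(pop, key=score)
```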

  17. Guaranteeing Convergence of Iterative Skewed Voting Algorithms for Image Segmentation

    PubMed Central

    Balcan, Doru C.; Srinivasa, Gowri; Fickus, Matthew; Kovačević, Jelena

    2012-01-01

    In this paper we provide rigorous proof for the convergence of an iterative voting-based image segmentation algorithm called Active Masks. Active Masks (AM) was proposed to solve the challenging task of delineating punctate patterns of cells from fluorescence microscope images. Each iteration of AM consists of a linear convolution composed with a nonlinear thresholding; what makes this process special in our case is the presence of additive terms whose role is to “skew” the voting when prior information is available. In real-world implementation, the AM algorithm always converges to a fixed point. We study the behavior of AM rigorously and present a proof of this convergence. The key idea is to formulate AM as a generalized (parallel) majority cellular automaton, adapting proof techniques from discrete dynamical systems. PMID:22984338

  18. Crowdsourcing the creation of image segmentation algorithms for connectomics

    PubMed Central

    Arganda-Carreras, Ignacio; Turaga, Srinivas C.; Berger, Daniel R.; Cireşan, Dan; Giusti, Alessandro; Gambardella, Luca M.; Schmidhuber, Jürgen; Laptev, Dmitry; Dwivedi, Sarvesh; Buhmann, Joachim M.; Liu, Ting; Seyedhosseini, Mojtaba; Tasdizen, Tolga; Kamentsky, Lee; Burget, Radim; Uher, Vaclav; Tan, Xiao; Sun, Changming; Pham, Tuan D.; Bas, Erhan; Uzunbas, Mustafa G.; Cardona, Albert; Schindelin, Johannes; Seung, H. Sebastian

    2015-01-01

    To stimulate progress in automating the reconstruction of neural circuits, we organized the first international challenge on 2D segmentation of electron microscopic (EM) images of the brain. Participants submitted boundary maps predicted for a test set of images, and were scored based on their agreement with a consensus of human expert annotations. The winning team had no prior experience with EM images, and employed a convolutional network. This “deep learning” approach has since become accepted as a standard for segmentation of EM images. The challenge has continued to accept submissions, and the best so far has resulted from cooperation between two teams. The challenge has probably saturated, as algorithms cannot progress beyond limits set by ambiguities inherent in 2D scoring and the size of the test dataset. Retrospective evaluation of the challenge scoring system reveals that it was not sufficiently robust to variations in the widths of neurite borders. We propose a solution to this problem, which should be useful for a future 3D segmentation challenge. PMID:26594156

  19. Crowdsourcing the creation of image segmentation algorithms for connectomics.

    PubMed

    Arganda-Carreras, Ignacio; Turaga, Srinivas C; Berger, Daniel R; Cireşan, Dan; Giusti, Alessandro; Gambardella, Luca M; Schmidhuber, Jürgen; Laptev, Dmitry; Dwivedi, Sarvesh; Buhmann, Joachim M; Liu, Ting; Seyedhosseini, Mojtaba; Tasdizen, Tolga; Kamentsky, Lee; Burget, Radim; Uher, Vaclav; Tan, Xiao; Sun, Changming; Pham, Tuan D; Bas, Erhan; Uzunbas, Mustafa G; Cardona, Albert; Schindelin, Johannes; Seung, H Sebastian

    2015-01-01

    To stimulate progress in automating the reconstruction of neural circuits, we organized the first international challenge on 2D segmentation of electron microscopic (EM) images of the brain. Participants submitted boundary maps predicted for a test set of images, and were scored based on their agreement with a consensus of human expert annotations. The winning team had no prior experience with EM images, and employed a convolutional network. This "deep learning" approach has since become accepted as a standard for segmentation of EM images. The challenge has continued to accept submissions, and the best so far has resulted from cooperation between two teams. The challenge has probably saturated, as algorithms cannot progress beyond limits set by ambiguities inherent in 2D scoring and the size of the test dataset. Retrospective evaluation of the challenge scoring system reveals that it was not sufficiently robust to variations in the widths of neurite borders. We propose a solution to this problem, which should be useful for a future 3D segmentation challenge.

  20. Sinus Anatomy

    MedlinePlus


  1. Nasal Anatomy

    MedlinePlus


  2. Bladder segmentation in MR images with watershed segmentation and graph cut algorithm

    NASA Astrophysics Data System (ADS)

    Blaffert, Thomas; Renisch, Steffen; Schadewaldt, Nicole; Schulz, Heinrich; Wiemker, Rafael

    2014-03-01

    Prostate and cervix cancer diagnosis and treatment planning based on MR images benefit from the superior soft tissue contrast of MR compared to CT images. For these images, an automatic delineation of the prostate or cervix and of organs at risk such as the bladder is highly desirable. This paper describes a method for bladder segmentation that is based on a watershed transform on high image gradient values and gray value valleys, together with the classification of watershed regions into bladder contents and tissue by a graph cut algorithm. The obtained results are superior compared to a simple region-after-region classification.
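
    A compact sketch of the first stage, assuming scikit-image is available (the subsequent graph-cut classification of watershed regions is not shown): a watershed transform on the gradient magnitude, seeded from simple intensity-based markers whose quantiles are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import sobel
from skimage.segmentation import watershed

def watershed_regions(image, low_quantile=0.1, high_quantile=0.9):
    """Over-segment an image into watershed regions on its gradient magnitude."""
    gradient = sobel(image)
    markers = np.zeros_like(image, dtype=int)
    markers[image < np.quantile(image, low_quantile)] = 1    # dark seeds (e.g. background)
    markers[image > np.quantile(image, high_quantile)] = 2   # bright seeds (e.g. bladder)
    seed_labels, _ = ndimage.label(markers > 0)              # one label per seed blob
    return watershed(gradient, markers=seed_labels)
```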

  3. Evaluation of synthetic aperture radar image segmentation algorithms in the context of automatic target recognition

    NASA Astrophysics Data System (ADS)

    Xue, Kefu; Power, Gregory J.; Gregga, Jason B.

    2002-11-01

    Image segmentation is a process to extract and organize information in the image pixel space according to a prescribed feature set. It is often a key preprocessing step in automatic target recognition (ATR) algorithms, and in many cases the performance of image segmentation algorithms has a significant impact on the performance of ATR algorithms. Due to variations in feature set definitions and innovations in the segmentation processes, a large number of image segmentation algorithms exist in the ATR world. Recently, the authors have investigated a number of measures to evaluate the performance of segmentation algorithms, such as percentage pixels same (pps), partial directed Hausdorff (pdh) and complex inner product (cip). In that research, we found that the combination of the three measures is effective for evaluating segmentation algorithms against truth data (human master segmentation). However, we still do not know what impact those measures have on the performance of ATR algorithms, which is commonly measured by probability of detection (PDet), probability of false alarm (PFA), probability of identification (PID), etc. In all practical situations, ATR boxes are implemented without a human observer in the loop, so the performance of synthetic aperture radar (SAR) image segmentation should be evaluated in the context of ATR rather than human observers. This research establishes a segmentation algorithm evaluation suite involving segmentation algorithm performance measures as well as ATR algorithm performance measures. It provides a practical quantitative evaluation method to judge which SAR image segmentation algorithm is best for a particular ATR application. The results are tabulated based on some baseline ATR algorithms and a typical image segmentation algorithm used in ATR applications.
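
    Two of the abstract's measures can be sketched roughly as follows (the exact published definitions may differ, so treat these as assumptions): the fraction of pixels with the same label, and a partial directed Hausdorff distance that takes a quantile of boundary-point distances instead of the maximum.

```python
import numpy as np
from scipy.spatial import cKDTree

def percentage_pixels_same(seg, ref):
    """Fraction of pixels whose labels agree between two segmentations."""
    return float(np.mean(np.asarray(seg) == np.asarray(ref)))

def partial_directed_hausdorff(points_a, points_b, quantile=0.9):
    """Quantile of nearest-neighbour distances from boundary set A to boundary set B."""
    dists, _ = cKDTree(points_b).query(points_a)
    return float(np.quantile(dists, quantile))
```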

  4. A wavelet relational fuzzy C-means algorithm for 2D gel image segmentation.

    PubMed

    Rashwan, Shaheera; Faheem, Mohamed Talaat; Sarhan, Amany; Youssef, Bayumy A B

    2013-01-01

    One of the most famous algorithms in the area of image segmentation is the Fuzzy C-Means (FCM) algorithm. This algorithm has been used in many applications such as data analysis, pattern recognition, and image segmentation, and it has the advantage of producing high quality segmentations compared to the other available algorithms. Many modifications have been made to the algorithm to improve its segmentation quality. The segmentation algorithm proposed in this paper is based on the Fuzzy C-Means algorithm, adding the relational fuzzy notion and the wavelet transform to enhance its performance, especially for 2D gel images. Both proposed modifications aim to minimize the over-segmentation error incurred by previous algorithms. The experimental results of comparing both the Fuzzy C-Means (FCM) and the Wavelet Fuzzy C-Means (WFCM) algorithms to the proposed algorithm on real 2D gel images acquired from human leukemias, HL-60 cell lines, and fetal alcohol syndrome (FAS) demonstrate the improvement achieved by the proposed algorithm in overcoming the segmentation error. In addition, we investigate the effect of denoising on the three algorithms. This investigation shows that denoising the 2D gel image before segmentation can improve (in most cases) the quality of the segmentation.
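
    For context, the classic FCM iteration that both WFCM and the proposed method build on alternates the membership and centroid updates sketched below; this is the textbook algorithm, not the relational or wavelet-extended variant.

```python
import numpy as np

def fuzzy_c_means(data, n_clusters, m=2.0, iters=100, eps=1e-9):
    """Classic FCM on a (n_samples, n_features) array; returns memberships and centroids."""
    x = np.asarray(data, float)
    rng = np.random.default_rng(0)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)                       # random initial memberships
    for _ in range(iters):
        um = u ** m
        centroids = (um.T @ x) / um.sum(axis=0)[:, None]    # weighted cluster centers
        dist = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2) + eps
        inv = dist ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)            # standard membership update
    return u, centroids
```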

  5. Kidney segmentation in CT sequences using SKFCM and improved GrowCut algorithm

    PubMed Central

    2015-01-01

    Background Organ segmentation is an important step in computer-aided diagnosis and pathology detection. Accurate kidney segmentation in abdominal computed tomography (CT) sequences is an essential and crucial task for surgical planning and navigation in kidney tumor ablation. However, kidney segmentation in CT is substantially challenging because the intensity values of kidney parenchyma are similar to those of adjacent structures. Results In this paper, a coarse-to-fine method was applied to segment the kidney from CT images, consisting of two stages: rough segmentation and refined segmentation. The rough segmentation is based on a kernel fuzzy C-means algorithm with spatial information (SKFCM) and the refined segmentation is implemented with an improved GrowCut (IGC) algorithm. The SKFCM algorithm introduces a kernel function and a spatial constraint into the fuzzy c-means clustering (FCM) algorithm. The IGC algorithm makes good use of the continuity of CT sequences in space, which allows it to automatically generate the seed labels and improve the efficiency of segmentation. The experimental results on a whole dataset of abdominal CT images have shown that the proposed method is accurate and efficient. The method provides a sensitivity of 95.46% with a specificity of 99.82% and performs better than other related methods. Conclusions Our method achieves high accuracy in kidney segmentation and considerably reduces the time and labor required for contour delineation. In addition, the method can be extended to 3D segmentation directly without modification. PMID:26356850

  6. Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm

    PubMed Central

    Yang, Zhang; Li, Guo; Weifeng, Ding

    2016-01-01

    The harmony search (HS) algorithm is an optimization search algorithm currently applied to many practical problems. The HS algorithm iteratively revises the variables in the harmony database and the probabilities of candidate values until the iterations converge to the optimal result. Accordingly, this study proposed a modified algorithm to improve the efficiency of the original. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm. This optimal value was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than the original HS algorithm. In our study, the MRI image segmentation performance of the improved algorithm was superior to that of the original fuzzy clustering method. PMID:27403428

  7. Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm.

    PubMed

    Yang, Zhang; Shufan, Ye; Li, Guo; Weifeng, Ding

    2016-01-01

    The harmony search (HS) algorithm is an optimization search algorithm currently applied to many practical problems. The HS algorithm iteratively revises the variables in the harmony database and the probabilities of candidate values until the iterations converge to the optimal result. Accordingly, this study proposed a modified algorithm to improve the efficiency of the original. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm. This optimal value was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than the original HS algorithm. In our study, the MRI image segmentation performance of the improved algorithm was superior to that of the original fuzzy clustering method.

  8. An improved FSL-FIRST pipeline for subcortical gray matter segmentation to study abnormal brain anatomy using quantitative susceptibility mapping (QSM).

    PubMed

    Feng, Xiang; Deistung, Andreas; Dwyer, Michael G; Hagemeier, Jesper; Polak, Paul; Lebenberg, Jessica; Frouin, Frédérique; Zivadinov, Robert; Reichenbach, Jürgen R; Schweser, Ferdinand

    2017-02-07

    Accurate and robust segmentation of subcortical gray matter (SGM) nuclei is required in many neuroimaging applications. FMRIB's Integrated Registration and Segmentation Tool (FIRST) is one of the most popular software tools for automated subcortical segmentation based on T1-weighted (T1w) images. In this work, we demonstrate that FIRST tends to produce inaccurate SGM segmentation results in the case of abnormal brain anatomy, such as present in atrophied brains, due to a poor spatial match of the subcortical structures with the training data in the MNI space as well as due to insufficient contrast of SGM structures on T1w images. Consequently, such deviations from the average brain anatomy may introduce analysis bias in clinical studies, which may not always be obvious and potentially remain unidentified. To improve the segmentation of subcortical nuclei, we propose to use FIRST in combination with a special Hybrid image Contrast (HC) and Non-Linear (nl) registration module (HC-nlFIRST), where the hybrid image contrast is derived from T1w images and magnetic susceptibility maps to create subcortical contrast that is similar to that in the Montreal Neurological Institute (MNI) template. In our approach, a nonlinear registration replaces FIRST's default linear registration, yielding a more accurate alignment of the input data to the MNI template. We evaluated our method on 82 subjects with particularly abnormal brain anatomy, selected from a database of >2000 clinical cases. Qualitative and quantitative analyses revealed that HC-nlFIRST provides improved segmentation compared to the default FIRST method.

  9. Anatomy of the ostia venae hepaticae and the retrohepatic segment of the inferior vena cava.

    PubMed Central

    Camargo, A M; Teixeira, G G; Ortale, J R

    1996-01-01

    In 30 normal adult livers, the retrohepatic segment of the inferior vena cava had a length of 6.7 cm and was totally encircled by liver substance in 30% of cases. Altogether, 442 ostia venae hepaticae were found, averaging 14.7 per liver, and were classified as large, medium, small and minimum. The localisation of the openings was studied according to the division of the wall of the retrohepatic segment of the inferior vena cava into 16 areas. PMID:8655416

  10. Wound size measurement of lower extremity ulcers using segmentation algorithms

    NASA Astrophysics Data System (ADS)

    Dadkhah, Arash; Pang, Xing; Solis, Elizabeth; Fang, Ruogu; Godavarty, Anuradha

    2016-03-01

    Lower extremity ulcers are one of the most common complications that not only affect many people around the world but also have a huge economic impact, since a large amount of resources is spent on treatment and prevention of these diseases. Clinical studies have shown that a reduction in wound size of 40% within 4 weeks is an acceptable progress in the healing process. Quantification of the wound size plays a crucial role in assessing the extent of healing and determining the treatment process. To date, wound healing is visually inspected and the wound size is measured from surface images. The extent of wound healing internally may vary from the surface. A near-infrared (NIR) optical imaging approach has been developed for non-contact imaging of wounds internally and for differentiating healing from non-healing wounds. Herein, quantitative wound size measurements from NIR and white light images are estimated using graph cut and region growing image segmentation algorithms. The extent of wound healing from NIR imaging of lower extremity ulcers in diabetic subjects is quantified and compared across NIR and white light images. NIR imaging and wound size measurements can play a significant role in potentially predicting the extent of internal healing, thus allowing better treatment plans when implemented for periodic imaging in the future.
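
    A bare-bones sketch of a tolerance-based, 4-connected region-growing step (a generic illustration; the paper's graph-cut counterpart and NIR-specific processing are not shown).

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tolerance=10.0):
    """Grow a 4-connected region from `seed` while intensities stay near the seed value."""
    image = np.asarray(image, float)
    mask = np.zeros(image.shape, dtype=bool)
    seed_val = image[seed]
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]
                    and not mask[ny, nx] and abs(image[ny, nx] - seed_val) <= tolerance):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```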

  11. Application of an enhanced fuzzy algorithm for MR brain tumor image segmentation

    NASA Astrophysics Data System (ADS)

    Hemanth, D. Jude; Vijila, C. Kezi Selva; Anitha, J.

    2010-02-01

    Image segmentation is one of the significant digital image processing techniques commonly used in the medical field. One specific application is tumor detection in abnormal Magnetic Resonance (MR) brain images. Fuzzy approaches are widely preferred for tumor segmentation, as they generally yield superior results in terms of accuracy. But most fuzzy algorithms suffer from the drawback of a slow convergence rate, which makes the system practically non-feasible. In this work, the application of a modified Fuzzy C-means (FCM) algorithm to tackle the convergence problem is explored in the context of brain image segmentation. This modified FCM algorithm employs the concept of quantization to improve the convergence rate besides yielding excellent segmentation efficiency. The algorithm is tested on real-time abnormal MR brain images collected from radiologists. A comprehensive feature vector is extracted from these images and used for the segmentation technique. An extensive feature selection process is performed, which reduces the convergence time and improves the segmentation efficiency. After segmentation, the tumor portion is extracted from the segmented image. A comparative analysis in terms of segmentation efficiency and convergence rate is performed between the conventional FCM and the modified FCM. Experimental results show superior results for the modified FCM algorithm in terms of the performance measures. Thus, this work highlights the application of the modified algorithm for brain tumor detection in abnormal MR brain images.

  12. Automated segmentation of tumors on bone scans using anatomy-specific thresholding

    NASA Astrophysics Data System (ADS)

    Chu, Gregory H.; Lo, Pechin; Kim, Hyun J.; Lu, Peiyun; Ramakrishna, Bharath; Gjertson, David; Poon, Cheryce; Auerbach, Martin; Goldin, Jonathan; Brown, Matthew S.

    2012-03-01

    Quantification of overall tumor area on bone scans may be a potential biomarker for treatment response assessment and has, to date, not been investigated. Segmentation of bone metastases on bone scans is a fundamental step for this response marker. In this paper, we propose a fully automated computerized method for the segmentation of bone metastases on bone scans, taking into account characteristics of different anatomic regions. A scan is first segmented into anatomic regions via an atlas-based segmentation procedure, which involves non-rigidly registering a labeled atlas scan to the patient scan. Next, an intensity normalization method is applied to account for varying radiotracer dosing levels and scan timing. Lastly, lesions are segmented via anatomic region-specific intensity thresholding. Thresholds are chosen by receiver operating characteristic (ROC) curve analysis against manual contouring by board-certified nuclear medicine physicians. A leave-one-out cross validation of our method on a set of 39 bone scans with metastases marked by 2 board-certified nuclear medicine physicians yielded a median sensitivity of 95.5% and specificity of 93.9%. Our method was compared with a global intensity thresholding method. The results show comparable sensitivity and significantly improved overall specificity, with a p-value of 0.0069.
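
    The region-specific thresholding step can be sketched as below, assuming an atlas-derived label map is already available; the label values and thresholds in the usage comment are placeholders, not the published ROC-derived ones.

```python
import numpy as np

def segment_by_region(intensity, region_labels, thresholds):
    """Apply a different intensity threshold inside each anatomic region.

    thresholds: dict mapping region label -> threshold (hypothetical values).
    """
    lesions = np.zeros(intensity.shape, dtype=bool)
    for label, thr in thresholds.items():
        lesions |= (region_labels == label) & (intensity >= thr)
    return lesions

# e.g. segment_by_region(scan, atlas_labels, {1: 2.5, 2: 3.1, 3: 2.8})  # illustrative units
```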

  13. Learning Likelihoods for Labeling (L3): A General Multi-Classifier Segmentation Algorithm

    PubMed Central

    Weisenfeld, Neil I.; Warfield, Simon K.

    2013-01-01

    PURPOSE To develop an MRI segmentation method for brain tissues, regions, and substructures that yields improved classification accuracy. Current brain segmentation approaches include two complementary strategies, multi-spectral classification and multi-template label fusion, with individual strengths and weaknesses. METHODS We propose here a novel multi-classifier fusion algorithm with the advantages of both types of segmentation strategy. We illustrate and validate this algorithm using a group of 14 expertly hand-labeled images. RESULTS Our method generated segmentations of cortical and subcortical structures that were more similar to hand-drawn segmentations than majority vote label fusion or a recently published intensity/label fusion method. CONCLUSIONS We have presented a novel, general segmentation algorithm with the advantages of both statistical classifiers and label fusion techniques. PMID:22003715
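
    For reference, the majority-vote label fusion baseline mentioned in the results can be written in a few lines; the proposed multi-classifier fusion algorithm itself is more involved and is not reproduced here.

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse a list of integer label maps of equal shape by per-voxel majority vote."""
    stack = np.stack(label_maps, axis=0)                  # shape: (n_raters, ...)
    n_labels = int(stack.max()) + 1
    counts = np.stack([(stack == k).sum(axis=0) for k in range(n_labels)], axis=0)
    return counts.argmax(axis=0)
```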

  14. Interactive algorithms for the segmentation and quantitation of 3-D MRI brain scans.

    PubMed

    Freeborough, P A; Fox, N C; Kitney, R I

    1997-05-01

    Interactive algorithms are an attractive approach to the accurate segmentation of 3D brain scans as they potentially improve the reliability of fully automated segmentation while avoiding the labour intensiveness and inaccuracies of manual segmentation. We present a 3D image analysis package (MIDAS) with a novel architecture enabling highly interactive segmentation algorithms to be implemented as add on modules. Interactive methods based on intensity thresholding, region growing and the constrained application of morphological operators are also presented. The methods involve the application of constraints and freedoms on the algorithms coupled with real time visualisation of the effect. This methodology has been applied to the segmentation, visualisation and measurement of the whole brain and a small irregular neuroanatomical structure, the hippocampus. We demonstrate reproducible and anatomically accurate segmentations of these structures. The efficacy of one method in measuring volume loss (atrophy) of the hippocampus in Alzheimer's disease is shown and is compared to conventional methods.

  15. A Logarithmic Opinion Pool Based STAPLE Algorithm For The Fusion of Segmentations With Associated Reliability Weights

    PubMed Central

    Akhondi-Asl, Alireza; Hoyte, Lennox; Lockhart, Mark E.; Warfield, Simon K.

    2014-01-01

    Pelvic floor dysfunction is very common in women after childbirth and precise segmentation of magnetic resonance images (MRI) of the pelvic floor may facilitate diagnosis and treatment of patients. However, because of the complexity of the structures of pelvic floor, manual segmentation of the pelvic floor is challenging and suffers from high inter and intra-rater variability of expert raters. Multiple template fusion algorithms are promising techniques for segmentation of MRI in these types of applications, but these algorithms have been limited by imperfections in the alignment of each template to the target, and by template segmentation errors. In this class of segmentation techniques, a collection of templates is aligned to a target, and a new segmentation of the target is inferred. A number of algorithms sought to improve segmentation performance by combining image intensities and template labels as two independent sources of information, carrying out decision fusion through local intensity weighted voting schemes. This class of approach is a form of linear opinion pooling, and achieves unsatisfactory performance for this application. We hypothesized that better decision fusion could be achieved by assessing the contribution of each template in comparison to a reference standard segmentation of the target image and developed a novel segmentation algorithm to enable automatic segmentation of MRI of the female pelvic floor. The algorithm achieves high performance by estimating and compensating for both imperfect registration of the templates to the target image and template segmentation inaccuracies. The algorithm is a generalization of the STAPLE algorithm in which a reference segmentation is estimated and used to infer an optimal weighting for fusion of templates. A local image similarity measure is used to infer a local reliability weight, which contributes to the fusion through a novel logarithmic opinion pooling. We evaluated our new algorithm in comparison

  16. Improved dynamic-programming-based algorithms for segmentation of masses in mammograms

    SciTech Connect

    Dominguez, Alfonso Rojas; Nandi, Asoke K.

    2007-11-15

    In this paper, two new boundary tracing algorithms for segmentation of breast masses are presented. These new algorithms are based on the dynamic-programming-based boundary tracing (DPBT) algorithm proposed in Timp and Karssemeijer [S. Timp and N. Karssemeijer, Med. Phys. 31, 958-971 (2004)]. The DPBT algorithm contains two main steps: (1) construction of a local cost function, and (2) application of dynamic programming to the selection of the optimal boundary based on the local cost function. The validity of some assumptions used in the design of the DPBT algorithm is tested in this paper using a set of 349 mammographic images. Based on the results of the tests, modifications to the computation of the local cost function have been designed and have resulted in the Improved-DPBT (IDPBT) algorithm. A procedure for the dynamic selection of the strength of the components of the local cost function is presented that makes these parameters independent of the image dataset. Incorporation of this dynamic selection procedure has produced another new algorithm which we have called ID²PBT. Methods for the determination of some other parameters of the DPBT algorithm that were not covered in the original paper are presented as well. The merits of the new IDPBT and ID²PBT algorithms are demonstrated experimentally by comparison against the DPBT algorithm. The segmentation results are evaluated based on the area overlap measure and other segmentation metrics. Both of the new algorithms outperform the original DPBT; the improvements in the algorithms' performance are more noticeable around the values of the segmentation metrics corresponding to the highest segmentation accuracy, i.e., the new algorithms produce more optimally segmented regions, rather than a pronounced increase in the average quality of all the segmented regions.

  17. Skin cells segmentation algorithm based on spectral angle and distance score

    NASA Astrophysics Data System (ADS)

    Li, Qingli; Chang, Li; Liu, Hongying; Zhou, Mei; Wang, Yiting; Guo, Fangmin

    2015-11-01

    In the diagnosis of skin diseases by analyzing histopathological images of skin sections, the automated segmentation of cells in the epidermis area is an important step. Traditional light-microscopy-based methods usually cannot generate satisfying segmentation results due to complicated skin structures and the limited information in this kind of image. In this study, we use a molecular hyperspectral imaging system to observe skin sections and propose a spectrum-based algorithm to segment epithelial cells. Unlike pixel-wise segmentation methods, the proposed algorithm considers both the spectral angle and the distance score between the test and the reference spectrum for segmentation. The experimental results indicate that the proposed algorithm performs better than the K-means, fuzzy C-means, and spectral angle mapper algorithms because it can identify pixels with a similar spectral angle but a different spectral distance.
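
    A rough sketch of the two per-pixel scores (the paper's combination rule is not given in the abstract, so any weighting or thresholding of them is an assumption): the spectral angle and the Euclidean distance of each pixel spectrum to a reference spectrum.

```python
import numpy as np

def spectral_scores(cube, reference, eps=1e-12):
    """cube: (rows, cols, bands) hyperspectral image; reference: (bands,) spectrum."""
    pixels = cube.reshape(-1, cube.shape[-1]).astype(float)
    ref = np.asarray(reference, float)
    cos = (pixels @ ref) / (np.linalg.norm(pixels, axis=1) * np.linalg.norm(ref) + eps)
    angle = np.arccos(np.clip(cos, -1.0, 1.0))            # spectral angle per pixel
    dist = np.linalg.norm(pixels - ref, axis=1)           # spectral distance per pixel
    return angle.reshape(cube.shape[:2]), dist.reshape(cube.shape[:2])

# A pixel might be assigned to the cell class when both scores are small, e.g.
# (angle < a_thr) & (dist < d_thr), with thresholds chosen empirically (hypothetical rule).
```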

  18. Refinement-cut: user-guided segmentation algorithm for translational science.

    PubMed

    Egger, Jan

    2014-06-04

    In this contribution, a semi-automatic segmentation algorithm for (medical) image analysis is presented. More precisely, the approach belongs to the category of interactive contouring algorithms, which provide real-time feedback on the segmentation result. However, even with interactive real-time contouring approaches there are always cases where the user cannot find a satisfying segmentation, e.g. due to homogeneous appearance between the object and the background, or noise inside the object. For these difficult cases the algorithm still needs additional user support. However, this additional user support should be intuitive and rapidly integrated into the segmentation process, without breaking the interactive real-time segmentation feedback. I propose a solution where the user can support the algorithm by an easy and fast placement of one or more seed points to guide the algorithm to a satisfying segmentation result even in difficult cases. These additional seeds restrict the calculation of the segmentation for the algorithm, but at the same time still allow the interactive real-time feedback segmentation to continue. For a practical and genuine application in translational science, the approach has been tested on medical data from the clinical routine in 2D and 3D.

  19. Refinement-Cut: User-Guided Segmentation Algorithm for Translational Science

    PubMed Central

    Egger, Jan

    2014-01-01

    In this contribution, a semi-automatic segmentation algorithm for (medical) image analysis is presented. More precisely, the approach belongs to the category of interactive contouring algorithms, which provide real-time feedback on the segmentation result. However, even with interactive real-time contouring approaches there are always cases where the user cannot find a satisfying segmentation, e.g. due to homogeneous appearance between the object and the background, or noise inside the object. For these difficult cases the algorithm still needs additional user support. However, this additional user support should be intuitive and rapidly integrated into the segmentation process, without breaking the interactive real-time segmentation feedback. I propose a solution where the user can support the algorithm by an easy and fast placement of one or more seed points to guide the algorithm to a satisfying segmentation result even in difficult cases. These additional seeds restrict the calculation of the segmentation for the algorithm, but at the same time still allow the interactive real-time feedback segmentation to continue. For a practical and genuine application in translational science, the approach has been tested on medical data from the clinical routine in 2D and 3D. PMID:24893650

  20. Refinement-Cut: User-Guided Segmentation Algorithm for Translational Science

    NASA Astrophysics Data System (ADS)

    Egger, Jan

    2014-06-01

    In this contribution, a semi-automatic segmentation algorithm for (medical) image analysis is presented. More precisely, the approach belongs to the category of interactive contouring algorithms, which provide real-time feedback on the segmentation result. However, even with interactive real-time contouring approaches there are always cases where the user cannot find a satisfying segmentation, e.g. due to homogeneous appearance between the object and the background, or noise inside the object. For these difficult cases the algorithm still needs additional user support. However, this additional user support should be intuitive and rapidly integrated into the segmentation process, without breaking the interactive real-time segmentation feedback. I propose a solution where the user can support the algorithm by an easy and fast placement of one or more seed points to guide the algorithm to a satisfying segmentation result even in difficult cases. These additional seeds restrict the calculation of the segmentation for the algorithm, but at the same time still allow the interactive real-time feedback segmentation to continue. For a practical and genuine application in translational science, the approach has been tested on medical data from the clinical routine in 2D and 3D.

  1. Terminal Segment Surgical Anatomy of the Rat Facial Nerve: Implications for Facial Reanimation Study

    PubMed Central

    Henstrom, Doug; Hadlock, Tessa; Lindsay, Robin; Knox, Christopher J.; Malo, Juan; Vakharia, Kalpesh T.; Heaton, James T.

    2015-01-01

    Introduction Rodent whisking behavior is supported by the buccal and mandibular branches of the facial nerve, but a description of how these branches converge and contribute to whisker movement is lacking. Methods Eight rats underwent isolated transection of either the buccal or the mandibular branch and subsequent transection of the opposite branch. Whisking function was analyzed following both transections. Anatomical measurements and video recordings of stimulation of the individual branches were taken from both facial nerves in 10 rats. Results Normal to near-normal whisking was demonstrated after isolated branch transection. Following transection of both branches, whisking was eliminated. The buccal and mandibular branches form a convergence just proximal to the whisker pad, named the "distal pes." Distal to this convergence, we identified consistent anatomy that demonstrated cross-innervation. Conclusion The overlap of efferent supply to the whisker pad must be considered when studying facial nerve regeneration in the rat facial nerve model. PMID:22499096

  2. An efficient algorithm for retinal blood vessel segmentation using h-maxima transform and multilevel thresholding.

    PubMed

    Saleh, Marwan D; Eswaran, C

    2012-01-01

    Retinal blood vessel detection and analysis play vital roles in the early diagnosis and prevention of several diseases, such as hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. This paper presents an automated algorithm for retinal blood vessel segmentation. The proposed algorithm takes advantage of powerful image processing techniques such as contrast enhancement, filtration and thresholding for more efficient segmentation. To evaluate the performance of the proposed algorithm, experiments were conducted on 40 images collected from the DRIVE database. The results show that the proposed algorithm yields an accuracy rate of 96.5%, which is higher than the results achieved by other known algorithms.

  3. Fast and fully automatic phalanx segmentation using a grayscale-histogram morphology algorithm

    NASA Astrophysics Data System (ADS)

    Hsieh, Chi-Wen; Liu, Tzu-Chiang; Jong, Tai-Lang; Chen, Chih-Yen; Tiu, Chui-Mei; Chan, Din-Yuen

    2011-08-01

    Bone age assessment is a common radiological examination used in pediatrics to diagnose the discrepancy between the skeletal and chronological age of a child; it is therefore beneficial to develop a computer-based bone age assessment to help junior pediatricians estimate bone age easily. Unfortunately, the phalanx on radiograms is not easily separated from the background and soft tissue. We therefore propose a new method, called the grayscale-histogram morphology algorithm, to segment the phalanges quickly and precisely. The algorithm includes three parts: a tri-stage sieve algorithm used to eliminate the background of hand radiograms, a centroid-edge dual scanning algorithm to frame the phalanx region, and finally a segmentation algorithm based on a disk traverse-subtraction filter to segment the phalanx. Moreover, two further segmentation methods, adaptive two-mean and adaptive two-mean clustering, were performed, and their results were compared with the disk traverse-subtraction segmentation using five indices: misclassification error, relative foreground area error, modified Hausdorff distance, edge mismatch, and region nonuniformity. In addition, the CPU time of the three segmentation methods was discussed. The results showed that our method performed better than the other two methods, and satisfactory segmentation results were obtained with a low standard error.

  4. Magnetic resonance imaging segmentation techniques using batch-type learning vector quantization algorithms.

    PubMed

    Yang, Miin-Shen; Lin, Karen Chia-Ren; Liu, Hsiu-Chih; Lirng, Jiing-Feng

    2007-02-01

    In this article, we propose batch-type learning vector quantization (LVQ) segmentation techniques for magnetic resonance (MR) images. Magnetic resonance imaging (MRI) segmentation is an important technique for differentiating abnormal and normal tissues in MR image data. The proposed LVQ segmentation techniques are compared with the generalized Kohonen's competitive learning (GKCL) methods proposed by Lin et al. [Magn Reson Imaging 21 (2003) 863-870]. Three MRI data sets of real cases are used in this article. The first case is from a 2-year-old girl who was diagnosed with retinoblastoma in her left eye. The second case is from a 55-year-old woman who developed complete left-side oculomotor palsy immediately after a motor vehicle accident. The third case is from an 84-year-old man who was diagnosed with Alzheimer disease (AD). Our comparisons are based on the sensitivity to algorithm parameters, the quality of the MRI segmentation as measured by the contrast-to-noise ratio, and the accuracy over the region-of-interest tissue. Overall, the segmentation results from the batch-type LVQ algorithms show good accuracy and quality as well as flexibility with respect to algorithm parameters across all comparisons. The results support that the proposed batch-type LVQ algorithms are better than the previous GKCL algorithms. Specifically, the proposed fuzzy-soft LVQ algorithm works well in segmenting the AD MRI data set to accurately measure the hippocampus volume in AD MR images.
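
    As a companion to the record above, a minimal sketch of a batch-type competitive-learning (LVQ-style) update on voxel intensities; this is not the paper's specific batch or fuzzy-soft LVQ variants, and `batch_lvq_segment` and its defaults are illustrative assumptions:

```python
import numpy as np

def batch_lvq_segment(intensities, n_classes=3, n_iter=50, seed=0):
    """Cluster voxel intensities with a batch competitive-learning (LVQ-style)
    update: each prototype is moved to the mean of the voxels it wins."""
    rng = np.random.default_rng(seed)
    x = intensities.reshape(-1).astype(float)
    protos = rng.choice(x, size=n_classes, replace=False)
    for _ in range(n_iter):
        # winner-take-all assignment of every voxel to its nearest prototype
        labels = np.argmin(np.abs(x[:, None] - protos[None, :]), axis=1)
        for k in range(n_classes):
            if np.any(labels == k):
                protos[k] = x[labels == k].mean()
    return labels.reshape(intensities.shape), protos

# Usage on a synthetic "MR slice"
slice_ = np.random.rand(128, 128)
label_map, prototypes = batch_lvq_segment(slice_, n_classes=3)
```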

  5. The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Zhou, Liqing

    2015-12-01

    With the development of satellite remote sensing technology and the growth of remote sensing image data, traditional segmentation techniques cannot meet the processing and storage requirements of massive remote sensing images. This article applies cloud computing and parallel computing technology to the remote sensing image segmentation process and builds an inexpensive, efficient computer cluster that parallelizes the mean shift segmentation algorithm using the MapReduce model. The approach preserves segmentation quality, improves segmentation speed, and better meets real-time requirements, demonstrating the practical value of MapReduce-based parallel mean shift segmentation for remote sensing images.
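
    A minimal single-node sketch of the mean shift iteration on pixel intensities, without the MapReduce parallelization described above; in the distributed version each map task would process one image tile. The flat kernel, bandwidth and names are illustrative assumptions:

```python
import numpy as np

def mean_shift_1d(values, bandwidth=0.1, n_iter=20):
    """Shift every intensity toward the mean of its neighbours within
    `bandwidth` (flat kernel); converged modes define the segments."""
    modes = values.astype(float).copy()
    for _ in range(n_iter):
        for i, m in enumerate(modes):
            neighbours = values[np.abs(values - m) <= bandwidth]
            if neighbours.size:
                modes[i] = neighbours.mean()
    return modes

# Usage: pixels whose modes coincide (within the bandwidth) share a segment
pixels = np.concatenate([np.random.normal(0.2, 0.02, 500),
                         np.random.normal(0.7, 0.02, 500)])
modes = mean_shift_1d(pixels, bandwidth=0.1)
labels = np.digitize(modes, bins=[0.45])   # crude grouping into 2 clusters
```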

  6. Methodology for the Evaluation of the Algorithms for Text Line Segmentation Based on Extended Binary Classification

    NASA Astrophysics Data System (ADS)

    Brodic, D.

    2011-01-01

    Text line segmentation is a key element of the optical character recognition process, so testing of text line segmentation algorithms is of substantial relevance. Previously proposed testing methods deal mainly with text databases used as templates, which serve both for testing and for evaluating the text segmentation algorithm. In this manuscript, a methodology for evaluating text segmentation algorithms based on extended binary classification is proposed. It is established on various multiline text samples linked with text segmentation, whose results are distributed according to a binary classification; the final result is obtained by comparative analysis of the cross-linked data. Its suitability for different types of scripts is its main advantage.

  7. A multiple-kernel fuzzy C-means algorithm for image segmentation.

    PubMed

    Chen, Long; Chen, C L Philip; Lu, Mingzhu

    2011-10-01

    In this paper, a generalized multiple-kernel fuzzy C-means (FCM) (MKFCM) methodology is introduced as a framework for image-segmentation problems. Within the framework, in addition to the use of composite kernels in kernel FCM (KFCM), a linear combination of multiple kernels is proposed and the updating rules for the linear coefficients of the composite kernel are derived as well. The proposed MKFCM algorithm provides a new, flexible vehicle to fuse different pixel information in image-segmentation problems: different pixel information represented by different kernels is combined in the kernel space to produce a new kernel. It is shown that two successful enhanced KFCM-based image-segmentation algorithms are special cases of MKFCM. Several new segmentation algorithms are also derived from the proposed MKFCM framework. Simulations on the segmentation of synthetic and medical images demonstrate the flexibility and advantages of MKFCM-based approaches.

  8. Fast algorithm for optimal graph-Laplacian based 3D image segmentation

    NASA Astrophysics Data System (ADS)

    Harizanov, S.; Georgiev, I.

    2016-10-01

    In this paper we propose an iterative steepest-descent-type algorithm that is observed to converge towards the exact solution of the ℓ0 discrete optimization problem, related to graph-Laplacian based image segmentation. Such an algorithm allows for significant additional improvements on the segmentation quality once the minimizer of the associated relaxed ℓ1 continuous optimization problem is computed, unlike the standard strategy of simply hard-thresholding the latter. Convergence analysis of the algorithm is not a subject of this work. Instead, various numerical experiments, confirming the practical value of the algorithm, are documented.

  9. The new image segmentation algorithm using adaptive evolutionary programming and fuzzy c-means clustering

    NASA Astrophysics Data System (ADS)

    Liu, Fang

    2011-06-01

    Image segmentation remains one of the major challenges in image analysis and computer vision. Fuzzy clustering, as a soft segmentation method, has been widely studied and successfully applied to image clustering and segmentation. The fuzzy c-means (FCM) algorithm is the most popular method used in image segmentation. However, most clustering algorithms, such as k-means and FCM, search for the final cluster values based on predetermined initial centers, and the FCM algorithm does not consider the spatial information of pixels and is sensitive to noise. This paper presents a new fuzzy c-means (FCM) algorithm with adaptive evolutionary programming for image clustering. The features of this algorithm are: first, it does not require predetermined initial centers; evolutionary programming helps FCM search for better centers and escape bad centers at local minima. Second, both spatial distance and Euclidean distance are considered in the FCM clustering, so the algorithm is more robust to noise. Third, an adaptive evolutionary programming scheme is proposed in which the mutation rule is adaptively changed by learning useful knowledge during the evolutionary process. Experimental results show that the new image segmentation algorithm is effective and robust to noisy images.
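
    For reference, a minimal sketch of the standard FCM membership and center updates that the record builds on, without the evolutionary-programming initialization or the spatial term it adds; names and defaults are illustrative:

```python
import numpy as np

def fcm(pixels, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy c-means on a 1-D feature (e.g., grayscale intensity)."""
    rng = np.random.default_rng(seed)
    x = pixels.reshape(-1, 1).astype(float)
    centers = x[rng.choice(len(x), n_clusters, replace=False)]
    for _ in range(n_iter):
        d = np.abs(x - centers.T) + 1e-12             # (N, C) distances
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)             # membership update
        w = u ** m
        centers = (w.T @ x) / w.sum(axis=0)[:, None]  # center update
    return u, centers

# Usage: hard labels from the fuzzy memberships
img = np.random.rand(64, 64)
u, centers = fcm(img, n_clusters=3)
labels = u.argmax(axis=1).reshape(img.shape)
```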

  10. Detection and Segmentation of Erythrocytes in Blood Smear Images Using a Line Operator and Watershed Algorithm

    PubMed Central

    Khajehpour, Hassan; Dehnavi, Alireza Mehri; Taghizad, Hossein; Khajehpour, Esmat; Naeemabadi, Mohammadreza

    2013-01-01

    Most erythrocyte-related diseases are detectable by analysis of hematology images, and the first step of this analysis is the segmentation and detection of blood cells. In this study, a novel method using a line operator and the watershed algorithm is presented for erythrocyte detection and segmentation in blood smear images, which also reduces the over-segmentation of the watershed algorithm and is useful for segmenting different types of blood cells with partial overlap. The method uses the grayscale structure of blood cells obtained by applying the Euclidean distance transform to binary images; under this transform, the gray intensity of a cell image gradually decreases from the center of the cell to its margin. To detect this intensity-variation structure, a line operator measuring gray level variations along several directional line segments is applied. The line segments with maximum and minimum gray level variations follow a special pattern that allows detection of the central regions of cells. The intersection of these regions with the markers obtained by calculating local maxima in the watershed algorithm is used for detecting cell centers as well as for reducing the over-segmentation of the watershed algorithm. The method produced 1300 markers when segmenting the 1274 erythrocytes present in 25 blood smear images. The accuracy and sensitivity of the proposed method are 95.9% and 97.99%, respectively. The results show the proposed method's capability to detect erythrocytes in blood smear images. PMID:24672764
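
    A minimal sketch of the distance-transform and marker-controlled watershed steps described above, assuming scipy and scikit-image are available; the directional line operator is not reproduced, and the peak detection below is a simplified stand-in for the paper's cell-center markers:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_cells(binary_mask, min_distance=10):
    """Split touching cells: distance transform -> markers -> watershed."""
    distance = ndi.distance_transform_edt(binary_mask)
    # local maxima of the distance map act as one marker per cell
    coords = peak_local_max(distance, min_distance=min_distance,
                            labels=binary_mask)
    markers = np.zeros(distance.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    return watershed(-distance, markers, mask=binary_mask)

# Usage on a toy mask of two overlapping disks
yy, xx = np.mgrid[:100, :100]
mask = ((yy - 50) ** 2 + (xx - 40) ** 2 < 15 ** 2) | \
       ((yy - 50) ** 2 + (xx - 60) ** 2 < 15 ** 2)
labels = segment_cells(mask)
```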

  11. Automated segmentation and reconstruction of patient-specific cardiac anatomy and pathology from in vivo MRI*

    NASA Astrophysics Data System (ADS)

    Ringenberg, Jordan; Deo, Makarand; Devabhaktuni, Vijay; Filgueiras-Rama, David; Pizarro, Gonzalo; Ibañez, Borja; Berenfeld, Omer; Boyers, Pamela; Gold, Jeffrey

    2012-12-01

    This paper presents an automated method to segment left ventricle (LV) tissues from functional and delayed-enhancement (DE) cardiac magnetic resonance imaging (MRI) scans using a sequential multi-step approach. First, a region of interest (ROI) is computed to create a subvolume around the LV using morphological operations and image arithmetic. From the subvolume, the myocardial contours are automatically delineated using difference of Gaussians (DoG) filters and GSV snakes. These contours are used as a mask to identify pathological tissues, such as fibrosis or scar, within the DE-MRI. The presented automated technique is able to accurately delineate the myocardium and identify the pathological tissue in patient sets. The results were validated by two expert cardiologists, and in one set the automated results are quantitatively and qualitatively compared with expert manual delineation. Furthermore, the method is patient-specific, performed on an entire patient MRI series. Thus, in addition to providing a quick analysis of individual MRI scans, the fully automated segmentation method is used for effectively tagging regions in order to reconstruct computerized patient-specific 3D cardiac models. These models can then be used in electrophysiological studies and surgical strategy planning.

  12. PRESEE: an MDL/MML algorithm to time-series stream segmenting.

    PubMed

    Xu, Kaikuo; Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie

    2013-01-01

    Time-series streams are one of the most common data types in the data mining field, prevalent in areas such as the stock market, ecology, and medical care. Segmentation is a key step to accelerate the processing speed of time-series stream mining. Previous segmentation algorithms mainly focused on improving precision rather than efficiency, and their performance depends heavily on parameters that are hard for users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmenting. PRESEE is based on both MDL (minimum description length) and MML (minimum message length) methods, which allow the data to be segmented automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with a state-of-the-art algorithm. The empirical results show that PRESEE is very efficient for real-time stream datasets, improving segmenting speed by nearly ten times. The novelty of this algorithm is further demonstrated by applying PRESEE to segment real-time stream datasets from the ChinaFLUX sensor network data stream.

  13. Is STAPLE algorithm confident to assess segmentation methods in PET imaging?

    PubMed

    Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Baillet, Clio; Vermandel, Maximilien

    2015-12-21

    Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians' manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful to manage the multi-observer variability. In this paper, we evaluated how accurately this algorithm can estimate the ground truth in PET imaging. A complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results. A specific configuration of the implementation provided by the Computational Radiology Laboratory was used. The consensus obtained by the STAPLE algorithm from manual delineations appeared to be more accurate than the manual delineations themselves (80% of overlap). An improvement of the accuracy was also observed when applying the STAPLE algorithm to automatic segmentation results. The STAPLE algorithm, with the configuration used in this paper, is more appropriate than manual delineations alone or automatic segmentation results alone to estimate the ground truth in PET imaging. Therefore, it might be preferred to assess the accuracy of tumor segmentation methods in PET imaging.
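
    A minimal sketch of a binary STAPLE-style EM iteration, estimating a consensus probability map together with each rater's sensitivity and specificity; this is a simplified illustration, not the Computational Radiology Laboratory implementation or its specific configuration, and all names and initial values are assumptions:

```python
import numpy as np

def staple_binary(decisions, n_iter=30, prior=None):
    """Simplified binary STAPLE: EM estimation of a consensus probability
    map w and per-rater sensitivity p / specificity q.
    decisions: (n_voxels, n_raters) array of 0/1 segmentations."""
    d = decisions.astype(float)
    n_raters = d.shape[1]
    prior = d.mean() if prior is None else prior   # global foreground prior
    p = np.full(n_raters, 0.9)                     # initial sensitivities
    q = np.full(n_raters, 0.9)                     # initial specificities
    for _ in range(n_iter):
        # E-step: posterior probability that each voxel is foreground
        log_a = np.log(prior) + (d * np.log(p) + (1 - d) * np.log(1 - p)).sum(1)
        log_b = np.log(1 - prior) + ((1 - d) * np.log(q) + d * np.log(1 - q)).sum(1)
        w = 1.0 / (1.0 + np.exp(log_b - log_a))
        # M-step: re-estimate rater performance, clipped away from 0 and 1
        p = np.clip((w[:, None] * d).sum(0) / w.sum(), 1e-6, 1 - 1e-6)
        q = np.clip(((1 - w)[:, None] * (1 - d)).sum(0) / (1 - w).sum(), 1e-6, 1 - 1e-6)
    return w, p, q

# Usage: consensus from three noisy delineations of the same structure
truth = (np.random.rand(10000) < 0.3).astype(int)
raters = np.stack([np.where(np.random.rand(10000) < 0.9, truth, 1 - truth)
                   for _ in range(3)], axis=1)
consensus = staple_binary(raters)[0] > 0.5
```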

  14. Is STAPLE algorithm confident to assess segmentation methods in PET imaging?

    NASA Astrophysics Data System (ADS)

    Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Baillet, Clio; Vermandel, Maximilien

    2015-12-01

    Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians’ manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful to manage the multi-observer variability. In this paper, we evaluated how accurately this algorithm can estimate the ground truth in PET imaging. A complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results. A specific configuration of the implementation provided by the Computational Radiology Laboratory was used. The consensus obtained by the STAPLE algorithm from manual delineations appeared to be more accurate than the manual delineations themselves (80% of overlap). An improvement of the accuracy was also observed when applying the STAPLE algorithm to automatic segmentation results. The STAPLE algorithm, with the configuration used in this paper, is more appropriate than manual delineations alone or automatic segmentation results alone to estimate the ground truth in PET imaging. Therefore, it might be preferred to assess the accuracy of tumor segmentation methods in PET imaging.

  15. Improved fuzzy clustering algorithms in segmentation of DC-enhanced breast MRI.

    PubMed

    Kannan, S R; Ramathilagam, S; Devi, Pandiyarajan; Sathya, A

    2012-02-01

    Segmentation of medical images is a difficult and challenging problem due to poor image contrast and artifacts that result in missing or diffuse organ/tissue boundaries. Many researchers have applied various techniques; however, fuzzy c-means (FCM)-based algorithms are more effective than other methods. The objective of this work is to develop robust fuzzy clustering segmentation systems for the effective segmentation of DCE breast MRI. This paper obtains robust fuzzy clustering algorithms by incorporating kernel methods, penalty terms, tolerance of the neighborhood attraction, an additional entropy term and fuzzy parameters. The initial centers are obtained using an initialization algorithm to reduce the computational complexity and running time of the proposed algorithms. Experimental work on breast images shows that the proposed algorithms improve the similarity measurement, handle large amounts of noise, and produce better results when dealing with data corrupted by noise and other artifacts. The clustering results of the proposed methods are validated using the Silhouette method.

  16. A review of algorithms for medical image segmentation and their applications to the female pelvic cavity.

    PubMed

    Ma, Zhen; Tavares, João Manuel R S; Jorge, Renato Natal; Mascarenhas, T

    2010-01-01

    This paper aims to make a review on the current segmentation algorithms used for medical images. Algorithms are classified according to their principal methodologies, namely the ones based on thresholds, the ones based on clustering techniques and the ones based on deformable models. The last type is focused on due to the intensive investigations into the deformable models that have been done in the last few decades. Typical algorithms of each type are discussed and the main ideas, application fields, advantages and disadvantages of each type are summarised. Experiments that apply these algorithms to segment the organs and tissues of the female pelvic cavity are presented to further illustrate their distinct characteristics. In the end, the main guidelines that should be considered for designing the segmentation algorithms of the pelvic cavity are proposed.

  17. Segmentation of pomegranate MR images using spatial fuzzy c-means (SFCM) algorithm

    NASA Astrophysics Data System (ADS)

    Moradi, Ghobad; Shamsi, Mousa; Sedaaghi, M. H.; Alsharif, M. R.

    2011-10-01

    Segmentation is one of the fundamental issues of image processing and machine vision, and it plays a prominent role in a variety of image processing applications. In this paper, an important application of image processing, the segmentation of pomegranate MR images, is explored. Pomegranate is a fruit with pharmacological properties such as anti-viral and anti-cancer activity, and having a high quality product is a critical factor in its marketing. The internal quality of the product is of comprehensive importance in the sorting process, but its qualitative features cannot be determined manually. Therefore, the segmentation of the internal structures of the fruit needs to be performed as accurately as possible in the presence of noise. The fuzzy c-means (FCM) algorithm is noise-sensitive, and noisy pixels are misclassified. As a solution, this paper proposes the spatial FCM (SFCM) algorithm for the segmentation of pomegranate MR images. The algorithm incorporates spatial neighborhood information into FCM and modifies the fuzzy membership function for each class. Segmentation results on original pomegranate MR images and on images corrupted by Gaussian, salt-and-pepper and speckle noise show that the SFCM algorithm performs significantly better than the FCM algorithm. Moreover, after several steps of qualitative and quantitative analysis, we conclude that the SFCM algorithm with a 5×5 window performs better than the other window sizes.
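
    A minimal sketch of the spatial modification described above: memberships from a standard FCM pass (such as the FCM sketch included earlier in this listing) are combined with their neighborhood average inside a window before defuzzification. This follows the common spatial-FCM formulation and may differ in detail from the paper's; the exponents, the 5×5 window and the names are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_memberships(u, image_shape, window=5, p=1, q=1):
    """Combine FCM memberships u (N, C) with their local spatial average:
    u'_ik is proportional to u_ik**p * h_ik**q, where h_ik averages the
    memberships of cluster k inside a window around pixel i."""
    n_clusters = u.shape[1]
    u_img = u.reshape(*image_shape, n_clusters)
    h = np.stack([uniform_filter(u_img[..., k], size=window)
                  for k in range(n_clusters)], axis=-1)
    u_new = (u_img ** p) * (h ** q)
    u_new /= u_new.sum(axis=-1, keepdims=True)
    return u_new.reshape(-1, n_clusters)

# Usage (continuing from an FCM run on a 2-D image `img`):
# u, centers = fcm(img, n_clusters=3)
# u_s = spatial_memberships(u, img.shape, window=5)
# labels = u_s.argmax(axis=1).reshape(img.shape)
```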

  18. Tissue segmentation of computed tomography images using a Random Forest algorithm: a feasibility study

    NASA Astrophysics Data System (ADS)

    Polan, Daniel F.; Brady, Samuel L.; Kaufman, Robert A.

    2016-09-01

    There is a need for robust, fully automated whole body organ segmentation for diagnostic CT. This study investigates and optimizes a Random Forest algorithm for automated organ segmentation; explores the limitations of a Random Forest algorithm applied to the CT environment; and demonstrates segmentation accuracy in a feasibility study of pediatric and adult patients. To the best of our knowledge, this is the first study to investigate a trainable Weka segmentation (TWS) implementation using Random Forest machine-learning as a means to develop a fully automated tissue segmentation tool developed specifically for pediatric and adult examinations in a diagnostic CT environment. Current innovation in computed tomography (CT) is focused on radiomics, patient-specific radiation dose calculation, and image quality improvement using iterative reconstruction, all of which require specific knowledge of tissue and organ systems within a CT image. The purpose of this study was to develop a fully automated Random Forest classifier algorithm for segmentation of neck-chest-abdomen-pelvis CT examinations based on pediatric and adult CT protocols. Seven materials were classified: background, lung/internal air or gas, fat, muscle, solid organ parenchyma, blood/contrast enhanced fluid, and bone tissue using Matlab and the TWS plugin of FIJI. The following classifier feature filters of TWS were investigated: minimum, maximum, mean, and variance, evaluated over a voxel radius of 2^n (n from 0 to 4), along with noise reduction and edge preserving filters: Gaussian, bilateral, Kuwahara, and anisotropic diffusion. The Random Forest algorithm used 200 trees with 2 features randomly selected per node. The optimized auto-segmentation algorithm resulted in 16 image features including features derived from maximum, mean, variance Gaussian and Kuwahara filters. Dice similarity coefficient (DSC) calculations between manually segmented and Random Forest algorithm segmented images from 21
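
    The Dice similarity coefficient used for the evaluation above can be computed directly from two binary masks; a minimal sketch (names illustrative):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks:
    DSC = 2 * |A intersect B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Usage
manual = np.zeros((64, 64), dtype=bool); manual[20:40, 20:40] = True
auto = np.zeros((64, 64), dtype=bool);   auto[22:42, 22:42] = True
print(dice_coefficient(manual, auto))
```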

  19. Open-source algorithm for automatic choroid segmentation of OCT volume reconstructions

    NASA Astrophysics Data System (ADS)

    Mazzaferri, Javier; Beaton, Luke; Hounye, Gisèle; Sayah, Diane N.; Costantino, Santiago

    2017-02-01

    The use of optical coherence tomography (OCT) to study ocular diseases associated with choroidal physiology is sharply limited by the lack of available automated segmentation tools. Current research largely relies on hand-traced, single B-Scan segmentations because commercially available programs require high quality images, and the existing implementations are closed, scarce and not freely available. We developed and implemented a robust algorithm for segmenting and quantifying the choroidal layer from 3-dimensional OCT reconstructions. Here, we describe the algorithm, validate and benchmark the results, and provide an open-source implementation under the General Public License for any researcher to use (https://www.mathworks.com/matlabcentral/fileexchange/61275-choroidsegmentation).

  20. Implementation of a new segmentation algorithm using the Eye-RIS CMOS vision system

    NASA Astrophysics Data System (ADS)

    Karabiber, Fethullah; Arena, Paolo; De Fiore, Sebastiano; Vagliasindi, Guido; Fortuna, Luigi; Arik, Sabri

    2009-05-01

    Segmentation is the process of partitioning a digital image into multiple meaningful regions. Since such applications require considerable computational power in real time, we have implemented a new segmentation algorithm that uses the capabilities of the Eye-RIS Vision System to execute the algorithm in a very short time. The segmentation algorithm is implemented in three main steps. In the first, pre-processing step, the images are acquired and noise filtering through a Gaussian function is performed. In the second step, a Sobel-operator-based edge detection approach is implemented on the system. In the last step, morphological and logic operations are used to segment the images as post-processing. The experimental results obtained for different images show the accuracy of the proposed segmentation algorithm. Visual inspection and timing analysis (7.83 ms, 127 frames/sec) prove that the proposed segmentation algorithm can be executed for real-time video processing applications. These results also demonstrate the capability of the Eye-RIS Vision System for real-time image processing applications.
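
    A minimal single-image sketch of the three-step pipeline described above (Gaussian noise filtering, Sobel-based edge detection, then morphological post-processing), implemented with scipy.ndimage on a CPU rather than the Eye-RIS CMOS hardware; thresholds and structuring-element sizes are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage as ndi

def segment_eye_ris_style(image, sigma=1.0, edge_thresh=0.2):
    """Pre-process -> edge detect -> morphological post-process."""
    # 1) pre-processing: Gaussian noise filtering
    smooth = ndi.gaussian_filter(image.astype(float), sigma=sigma)
    # 2) Sobel-based edge magnitude
    gx = ndi.sobel(smooth, axis=1)
    gy = ndi.sobel(smooth, axis=0)
    edges = np.hypot(gx, gy)
    mask = edges > edge_thresh * edges.max()
    # 3) post-processing: close gaps, fill regions, remove small specks
    mask = ndi.binary_closing(mask, structure=np.ones((3, 3)))
    mask = ndi.binary_fill_holes(mask)
    mask = ndi.binary_opening(mask, structure=np.ones((3, 3)))
    return mask

# Usage on a synthetic bright square with noise
img = np.zeros((128, 128)); img[40:90, 40:90] = 1.0
img += 0.05 * np.random.randn(128, 128)
segmented = segment_eye_ris_style(img)
```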

  1. Parallel Implementation of the Recursive Approximation of an Unsupervised Hierarchical Segmentation Algorithm. Chapter 5

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Plaza, Antonio J. (Editor); Chang, Chein-I. (Editor)

    2008-01-01

    The hierarchical image segmentation algorithm (referred to as HSEG) is a hybrid of hierarchical step-wise optimization (HSWO) and constrained spectral clustering that produces a hierarchical set of image segmentations. HSWO is an iterative approach to region growing segmentation in which the optimal image segmentation is found at N(sub R) regions, given a segmentation at N(sub R+1) regions. HSEG's addition of constrained spectral clustering makes it a computationally intensive algorithm for all but the smallest of images. To counteract this, a computationally efficient recursive approximation of HSEG (called RHSEG) has been devised. Further improvements in processing speed are obtained through a parallel implementation of RHSEG. This chapter describes this parallel implementation and demonstrates its computational efficiency on a Landsat Thematic Mapper test scene.

  2. Coupling Regular Tessellation with Rjmcmc Algorithm to Segment SAR Image with Unknown Number of Classes

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Li, Y.; Zhao, Q. H.

    2016-06-01

    This paper presents a Synthetic Aperture Radar (SAR) image segmentation approach for an unknown number of classes, based on regular tessellation and the Reversible Jump Markov Chain Monte Carlo (RJMCMC) algorithm. First, the image domain is partitioned into a set of blocks by regular tessellation. The image is modeled on the assumption that the intensities of the pixels in each homogeneous region follow an identical and independent Gamma distribution. Using the Bayesian paradigm, the posterior distribution is obtained to build the region-based image segmentation model. Then, an RJMCMC algorithm is designed to simulate from the segmentation model in order to determine the number of homogeneous regions and segment the image. To further improve the segmentation accuracy, a refinement operation is performed. To illustrate the feasibility and effectiveness of the proposed approach, two real SAR images are tested.

  3. Colony image acquisition and genetic segmentation algorithm and colony analyses

    NASA Astrophysics Data System (ADS)

    Wang, W. X.

    2012-01-01

    Colony analysis is used in a large number of fields such as food, dairy, beverages, hygiene, environmental monitoring, water, toxicology and sterility testing. In order to reduce labor and increase analysis accuracy, many researchers and developers have worked on image analysis systems. The main problems in such systems are image acquisition, image segmentation and image analysis. In this paper, an illumination box was constructed to acquire colony images with good quality; in the box, the distances between the lights and the dish, the camera lens and the lights, and the camera lens and the dish are adjusted optimally. The image segmentation is based on a genetic approach that allows one to treat the segmentation problem as a global optimization. After image pre-processing and image segmentation, the colony analyses are performed. The colony image analysis consists of (1) basic colony parameter measurements; (2) colony size analysis; (3) colony shape analysis; and (4) colony surface measurements. All of the above visual colony parameters can be selected and combined to form new engineering parameters, and the colony analysis can be applied in different applications.

  4. LoAd: A locally adaptive cortical segmentation algorithm

    PubMed Central

    Cardoso, M. Jorge; Clarkson, Matthew J.; Ridgway, Gerard R.; Modat, Marc; Fox, Nick C.; Ourselin, Sebastien

    2012-01-01

    Thickness measurements of the cerebral cortex can aid diagnosis and provide valuable information about the temporal evolution of diseases such as Alzheimer's, Huntington's, and schizophrenia. Methods that measure the thickness of the cerebral cortex from in-vivo magnetic resonance (MR) images rely on an accurate segmentation of the MR data. However, segmenting the cortex in a robust and accurate way still poses a challenge due to the presence of noise, intensity non-uniformity, partial volume effects, the limited resolution of MRI and the highly convoluted shape of the cortical folds. Beginning with a well-established probabilistic segmentation model with anatomical tissue priors, we propose three post-processing refinements: a novel modification of the prior information to reduce segmentation bias; introduction of explicit partial volume classes; and a locally varying MRF-based model for enhancement of sulci and gyri. Experiments performed on a new digital phantom, on BrainWeb data and on data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) show statistically significant improvements in Dice scores and PV estimation (p < 10^-3) and also increased thickness estimation accuracy when compared to three well established techniques. PMID:21316470

  5. Color segmentation in the HSI color space using the K-means algorithm

    NASA Astrophysics Data System (ADS)

    Weeks, Arthur R.; Hague, G. Eric

    1997-04-01

    Segmentation of images is an important aspect of image recognition. While grayscale image segmentation has become quite a mature field, much less work has been done with regard to color image segmentation. Until recently, this was predominantly due to the lack of available computing power and color display hardware required to manipulate true color (24-bit) images. Today, it is not uncommon to find a standard desktop computer system with a true-color 24-bit display, at least 8 million bytes of memory, and 2 gigabytes of hard disk storage. Segmentation of color images is not as simple as segmenting each of the three RGB color components separately. The difficulty of using the RGB color space is that it does not closely model the psychological understanding of color. A better color model, which closely follows human visual perception, is the hue, saturation, intensity (HSI) model. This color model separates the color components in terms of chromatic and achromatic information. Strickland et al. were able to show the importance of color in the extraction of edge features from an image; their method enhances the edges that are detectable in the luminance image with information from the saturation image. Segmentation of both the saturation and intensity components is easily accomplished with any grayscale segmentation algorithm, since these spaces are linear. The modulo-2π nature of the hue component makes its segmentation difficult; for example, hues of 0 and 2π yield the same color tint. Instead of applying separate image segmentation to each of the hue, saturation, and intensity components, a better method is to segment the chromatic component separately from the intensity component because of the importance that the chromatic information plays in the segmentation of color images. This paper presents a method of using the gray scale K-means algorithm to segment 24-bit color images. Additionally, this paper will show the importance the hue
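
    A minimal sketch of one common way to handle the modulo-2π nature of hue when clustering: map each hue angle to the unit circle (cos h, sin h) so that 0 and 2π coincide, then run ordinary k-means on those features. The small pure-numpy k-means and all names are illustrative assumptions, not the paper's exact segmentation procedure:

```python
import numpy as np

def kmeans(features, k=3, n_iter=50, seed=0):
    """Plain k-means (Lloyd's algorithm) on an (N, D) feature array."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(features[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels, centers

def segment_hue(hue, k=3):
    """Cluster hue values (radians) after embedding them on the unit circle,
    so that hue = 0 and hue = 2*pi fall in the same cluster."""
    feats = np.column_stack([np.cos(hue.ravel()), np.sin(hue.ravel())])
    labels, _ = kmeans(feats, k=k)
    return labels.reshape(hue.shape)

# Usage: hues near 0 and near 2*pi end up in the same segment
hue_img = np.random.uniform(0, 2 * np.pi, size=(32, 32))
seg = segment_hue(hue_img, k=3)
```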

  6. Genetic algorithm based deliverable segments optimization for static intensity-modulated radiotherapy.

    PubMed

    Li, Yongjie; Yao, Jonathan; Yao, Dezhong

    2003-10-21

    The static delivery technique (also called the step-and-shoot technique) has been widely used in intensity-modulated radiotherapy (IMRT) because of its simple delivery and easy quality assurance. Conventional static IMRT consists of two steps: first, the intensity-modulated beam profiles are calculated using an inverse planning algorithm, and then these profiles are translated into a series of uniform segments using a leaf-sequencing tool. In order to simplify the procedure and shorten the treatment time of the static mode, an efficient technique, called genetic algorithm based deliverable segments optimization (GADSO), is developed in our work, which combines these two steps into one. Taking the pre-defined beams and the total number of segments per treatment as input, the number of segments for each beam, the segment shapes and the segment weights are determined automatically. A group of interim modulated beam profiles quickly calculated using a conjugate gradient (CG) method are used to determine the segment number for each beam and to initialize the segment shapes. A modified genetic algorithm based on a two-dimensional binary coding scheme is used to optimize the segment shapes, and a CG method is used to optimize the segment weights. The physical characteristics of a multileaf collimator, such as the leaf interdigitation limitation and the maximum leaf over-travel distance, are incorporated into the optimization. The algorithm is applied to several examples and the results demonstrate that GADSO is able to produce highly conformal dose distributions using 20-30 deliverable segments per treatment within a clinically acceptable computation time.

  7. Moving object segmentation algorithm based on cellular neural networks in the H.264 compressed domain

    NASA Astrophysics Data System (ADS)

    Feng, Jie; Chen, Yaowu; Tian, Xiang

    2009-07-01

    A cellular neural network (CNN)-based moving object segmentation algorithm in the H.264 compressed domain is proposed. This algorithm mainly utilizes motion vectors directly extracted from H.264 bitstreams. To improve the robustness of the motion vector information, the intramodes in I-frames are used for smooth and nonsmooth region classification, and the residual coefficient energy of P-frames is used to update the classification results first. Then, an adaptive motion vector filter is used according to interpartition modes. Finally, many CNN models are applied to implement moving object segmentation based on motion vector fields. Experiment results are presented to verify the efficiency and the robustness of this algorithm.

  8. Improvement of phase unwrapping algorithm based on image segmentation and merging

    NASA Astrophysics Data System (ADS)

    Wang, Huaying; Liu, Feifei; Zhu, Qiaofen

    2013-11-01

    A modified algorithm based on image segmentation and merging is proposed and demonstrated to improve the accuracy of phase unwrapping. Three aspects are improved. First, unequal region segmentation is adopted, which allows the regional information to be reproduced completely and accurately. Second, different phase unwrapping algorithms are used for regions affected by noise and by undersampling, respectively. Finally, to improve the accuracy of the phase unwrapping results, a weighted stacking method is applied to the overlapping regions that originate from block merging. The proposed algorithm has been verified by simulations and experiments. The results not only validate the accuracy and speed with which the improved algorithm recovers the phase information of the measured object, but also illustrate the importance of the improved algorithm in Traditional Chinese Medicine Decoction Pieces cell identification.

  9. A Novel Histogram Region Merging Based Multithreshold Segmentation Algorithm for MR Brain Images

    PubMed Central

    Shen, Xuanjing; Feng, Yuncong

    2017-01-01

    Multithreshold segmentation algorithms are time-consuming, and their time complexity increases exponentially with the number of thresholds. In order to reduce the time complexity, a novel multithreshold segmentation algorithm is proposed in this paper. First, all gray levels are used as thresholds, so the histogram of the original image is divided into 256 small regions, each corresponding to one gray level. Then, two adjacent regions are merged in each iteration by a newly designed scheme, and a threshold is removed each time. To improve the accuracy of the merging operation, variance and probability are used as the energy. Regardless of the number of thresholds, the time complexity of the algorithm remains O(L). Finally, experiments are conducted on many MR brain images to verify the performance of the proposed algorithm. The experimental results show that our method reduces the running time effectively and obtains segmentation results with high accuracy.

  10. Phasing the mirror segments of the Keck telescopes II: the narrow-band phasing algorithm.

    PubMed

    Chanan, G; Ohara, C; Troy, M

    2000-09-01

    In a previous paper, we described a successful technique, the broadband algorithm, for phasing the primary mirror segments of the Keck telescopes to an accuracy of 30 nm. Here we describe a complementary narrow-band algorithm. Although it has a limited dynamic range, it is much faster than the broadband algorithm and can achieve an unprecedented phasing accuracy of approximately 6 nm. Cross checks between these two independent techniques validate both methods to a high degree of confidence. Both algorithms converge to the edge-minimizing configuration of the segmented primary mirror, which is not the same as the overall wave-front-error-minimizing configuration, but we demonstrate that this distinction disappears as the segment aberrations are reduced to zero.

  11. An automated blood vessel segmentation algorithm using histogram equalization and automatic threshold selection.

    PubMed

    Saleh, Marwan D; Eswaran, C; Mueen, Ahmed

    2011-08-01

    This paper focuses on the detection of retinal blood vessels, which play a vital role in reducing proliferative diabetic retinopathy and in preventing the loss of visual capability. The proposed algorithm, which takes advantage of powerful preprocessing techniques such as contrast enhancement and thresholding, offers an automated segmentation procedure for retinal blood vessels. To evaluate the performance of the new algorithm, experiments are conducted on 40 images collected from the DRIVE database. The results show that the proposed algorithm performs better than other known algorithms in terms of accuracy. Furthermore, the proposed algorithm, being simple and easy to implement, is well suited for fast processing applications.
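
    A minimal sketch of the two preprocessing ingredients named in the title, histogram equalization and automatic (Otsu-style) threshold selection, in plain numpy; this is illustrative and not the paper's exact pipeline, and the names are assumptions:

```python
import numpy as np

def equalize_histogram(image):
    """Histogram equalization of an 8-bit grayscale image."""
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = hist.cumsum() / image.size
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[image]

def otsu_threshold(image):
    """Automatic threshold maximizing the between-class variance (Otsu)."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = prob.cumsum()                      # class-0 probability
    mu = (prob * np.arange(256)).cumsum()      # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b))

# Usage
img = (np.random.rand(128, 128) * 255).astype(np.uint8)
eq = equalize_histogram(img)
binary = eq > otsu_threshold(eq)
```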

  12. A Review of Algorithms for Segmentation of Optical Coherence Tomography from Retina

    PubMed Central

    Kafieh, Raheleh; Rabbani, Hossein; Kermani, Saeed

    2013-01-01

    Optical coherence tomography (OCT) is a recently established imaging technique used to describe different information about the internal structures of an object and to image various aspects of biological tissues. OCT image segmentation is mostly applied to retinal OCT to localize the intra-retinal boundaries. Here, we review some of the important image segmentation methods for processing retinal OCT images. We classify the OCT segmentation approaches into five distinct groups according to the image domain subjected to the segmentation algorithm. Current research in OCT segmentation is mostly focused on improving accuracy and precision and on reducing the required processing time. There is no doubt that current 3-D imaging modalities are now moving research projects toward volume segmentation along with 3-D rendering and visualization. It is also important to develop robust methods capable of dealing with pathologic cases in OCT imaging. PMID:24083137

  13. A hybrid algorithm for instant optimization of beam weights in anatomy-based intensity modulated radiotherapy: A performance evaluation study.

    PubMed

    Vaitheeswaran, Ranganathan; Sathiya, Narayanan V K; Bhangle, Janhavi R; Nirhali, Amit; Kumar, Namita; Basu, Sumit; Maiya, Vikram

    2011-04-01

    The study aims to introduce a hybrid optimization algorithm for anatomy-based intensity modulated radiotherapy (AB-IMRT). Our proposal is that by integrating an exact optimization algorithm with a heuristic optimization algorithm, the advantages of both algorithms can be combined, leading to an efficient global optimizer that solves the problem at a very fast rate. Our hybrid approach combines the Gaussian elimination algorithm (an exact optimizer) with the fast simulated annealing algorithm (a heuristic global optimizer) for the optimization of beam weights in AB-IMRT. The algorithm has been implemented using MATLAB software. The optimization efficiency of the hybrid algorithm is clarified by (i) analysis of the numerical characteristics of the algorithm and (ii) analysis of the clinical capabilities of the algorithm. The numerical and clinical characteristics of the hybrid algorithm are compared with the Gaussian elimination method (GEM) and fast simulated annealing (FSA). The numerical characteristics include convergence, consistency, number of iterations and overall optimization speed, which were analyzed for the respective cases of 8 patients. The clinical capabilities of the hybrid algorithm are demonstrated in cases of (a) prostate and (b) brain. The analyses reveal that (i) the convergence speed of the hybrid algorithm is approximately three times higher than that of the FSA algorithm; (ii) the convergence (percentage reduction in the cost function) in the hybrid algorithm is about 20% improved as compared to that in the GEM algorithm; (iii) the hybrid algorithm is capable of producing relatively better treatment plans in terms of Conformity Index (CI) [~ 2% - 5% improvement] and Homogeneity Index (HI) [~ 4% - 10% improvement] as compared to the GEM and FSA algorithms; (iv) the sparing of organs at risk in hybrid algorithm-based plans is better than that in GEM-based plans and comparable to that in FSA-based plans; and (v) the beam weights resulting from the hybrid algorithm are

  14. Algorithms for automatic segmentation of bovine embryos produced in vitro

    NASA Astrophysics Data System (ADS)

    Melo, D. H.; Nascimento, M. Z.; Oliveira, D. L.; Neves, L. A.; Annes, K.

    2014-03-01

    In vitro production has been employed for bovine embryos, and quantification of lipids is fundamental to understanding the metabolism of these embryos. This paper presents an unsupervised segmentation method for histological images of bovine embryos. In this method, an anisotropic filter was applied to the different RGB components. After the pre-processing step, a thresholding technique based on maximum entropy was applied to separate the lipid droplets in the histological slides at different stages: early cleavage, morula and blastocyst. In the post-processing step, false positives are removed using a connected-components technique that identifies regions with an excess of dye near the pellucid zone. The proposed segmentation method was applied to 30 histological images of bovine embryos. Experiments were performed on the images, and statistical measures of sensitivity, specificity and accuracy were calculated based on reference images (gold standard). The accuracy of the proposed method was 96%, with a standard deviation of 3%.
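
    A minimal sketch of the maximum-entropy (Kapur-style) threshold selection step described above, in plain numpy; the anisotropic pre-filtering and connected-component post-processing are omitted, and all names are illustrative:

```python
import numpy as np

def max_entropy_threshold(image):
    """Kapur's maximum-entropy threshold for an 8-bit grayscale image."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue
        q0 = p[:t][p[:t] > 0] / p0          # background class distribution
        q1 = p[t:][p[t:] > 0] / p1          # foreground class distribution
        h = -(q0 * np.log(q0)).sum() - (q1 * np.log(q1)).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# Usage: droplet-like bright blobs on a darker background
img = (np.random.rand(128, 128) * 120).astype(np.uint8)
img[40:60, 40:60] = 220
mask = img >= max_entropy_threshold(img)
```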

  15. An approach to a comprehensive test framework for analysis and evaluation of text line segmentation algorithms.

    PubMed

    Brodic, Darko; Milivojevic, Dragan R; Milivojevic, Zoran N

    2011-01-01

    The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation represents the key action for correct optical character recognition. Many of the tests for the evaluation of text line segmentation algorithms deal with text databases as reference templates; because of this mismatch, a reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multiline text samples as well as real handwritten text. Although the tests are mutually independent, their results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for the evaluation of algorithm efficiency, based on the obtained error-type classification, are proposed. The first is based on the segmentation line error description, while the second incorporates well-known signal detection theory. Each has different capabilities and conveniences, but they can be used as complements to make the evaluation process efficient. Overall, the proposed procedure based on the segmentation line error description has some advantages, characterized by five measures that describe the measurement procedures.

  16. Advanced Dispersed Fringe Sensing Algorithm for Coarse Phasing Segmented Mirror Telescopes

    NASA Technical Reports Server (NTRS)

    Spechler, Joshua A.; Hoppe, Daniel J.; Sigrist, Norbert; Shi, Fang; Seo, Byoung-Joon; Bikkannavar, Siddarayappa A.

    2013-01-01

    Segment mirror phasing, a critical step of segment mirror alignment, requires the ability to sense and correct the relative pistons between segments, from up to a few hundred microns down to a fraction of a wavelength, in order to bring the mirror system to its full diffraction capability. When sampling the aperture of a telescope, using auto-collimating flats (ACFs) is more economical. The performance of a telescope with a segmented primary mirror strongly depends on how well those primary mirror segments can be phased. One such process to phase primary mirror segments in the axial piston direction is dispersed fringe sensing (DFS). DFS technology can be used to co-phase the ACFs. DFS is essentially a signal fitting and processing operation and an elegant method of coarse phasing segmented mirrors. DFS performance accuracy depends upon careful calibration of the system as well as other factors such as internal optical alignment, system wavefront errors, and detector quality. Novel improvements to the algorithm have led to substantial enhancements in DFS performance. The Advanced Dispersed Fringe Sensing (ADFS) algorithm is designed to reduce the sensitivity to calibration errors by determining the optimal fringe extraction line. Applying an angular extraction-line dithering procedure and combining this dithering process with an error function, while minimizing the phase term of the fitted signal, defines in essence the ADFS algorithm.

  17. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation.

    PubMed

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make some strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions might not be valid anymore. In order to overcome this weakness, we propose a new clustering algorithm named the localized ambient solidity separation (LASS) algorithm, which uses a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, our proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. An experiment on a designed two-dimensional benchmark dataset shows that the proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method in separating naturally isolated clusters but also can identify clusters which are adjacent, overlapping, and under background noise. Finally, we compared our LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset with over two million records containing demographic and behavioral information. The results show that the LASS algorithm works extremely well on this computer user dataset and can gain more knowledge from it.

  18. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation

    PubMed Central

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make some strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions might not be valid anymore. In order to overcome this weakness, we propose a new clustering algorithm named the localized ambient solidity separation (LASS) algorithm, which uses a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, our proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. An experiment on a designed two-dimensional benchmark dataset shows that the proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method in separating naturally isolated clusters but also can identify clusters which are adjacent, overlapping, and under background noise. Finally, we compared our LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset with over two million records containing demographic and behavioral information. The results show that the LASS algorithm works extremely well on this computer user dataset and can gain more knowledge from it. PMID:26221133

  19. Computer-assisted liver tumor surgery using a novel semiautomatic and a hybrid semiautomatic segmentation algorithm.

    PubMed

    Zygomalas, Apollon; Karavias, Dionissios; Koutsouris, Dimitrios; Maroulis, Ioannis; Karavias, Dimitrios D; Giokas, Konstantinos; Megalooikonomou, Vasileios

    2016-05-01

    We developed a medical image segmentation and preoperative planning application which implements a semiautomatic and a hybrid semiautomatic liver segmentation algorithm. The aim of this study was to evaluate the feasibility of computer-assisted liver tumor surgery using these algorithms which are based on thresholding by pixel intensity value from initial seed points. A random sample of 12 patients undergoing elective high-risk hepatectomies at our institution was prospectively selected to undergo computer-assisted surgery using our algorithms (June 2013-July 2014). Quantitative and qualitative evaluation was performed. The average computer analysis time (segmentation, resection planning, volumetry, visualization) was 45 min/dataset. The runtime for the semiautomatic algorithm was <0.2 s/slice. Liver volumetric segmentation using the hybrid method was achieved in 12.9 s/dataset (SD ± 6.14). Mean similarity index was 96.2 % (SD ± 1.6). The future liver remnant volume calculated by the application showed a correlation of 0.99 to that calculated using manual boundary tracing. The 3D liver models and the virtual liver resections had an acceptable coincidence with the real intraoperative findings. The patient-specific 3D models produced using our semiautomatic and hybrid semiautomatic segmentation algorithms proved to be accurate for the preoperative planning in liver tumor surgery and effectively enhanced the intraoperative medical image guidance.

  20. A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.

    PubMed

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle

    2016-03-08

    On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using an MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and target registration error (TRE), measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (i.e., kidney), the thresholding method provided the best speed (< 1 ms) with a satisfying accuracy (Dice = 0.95). When the image contrast was low, the VR-TPDS method had the best automatic contour. Results suggest an image quality determination procedure before segmentation and a combination of

  1. Side scan sonar image segmentation based on neutrosophic set and quantum-behaved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Jianhu; Wang, Xiao; Zhang, Hongmei; Hu, Jun; Jian, Xiaomin

    2016-09-01

    To perform side scan sonar (SSS) image segmentation accurately and efficiently, a novel segmentation algorithm based on the neutrosophic set (NS) and quantum-behaved particle swarm optimization (QPSO) is proposed in this paper. First, the neutrosophic subset images are obtained by transforming the input image into the NS domain. Then, a co-occurrence matrix is accurately constructed based on these subset images, and the entropy of the gray level image is defined to serve as the fitness function of the QPSO algorithm. Moreover, the optimal two-dimensional segmentation threshold vector is quickly obtained by QPSO. Finally, the contours of the target of interest are segmented with the threshold vector and extracted by mathematical morphology operations. To further improve the segmentation efficiency, single-threshold segmentation, an alternative algorithm, is recommended for the shadow segmentation by considering the gray level characteristics of the shadow. The accuracy and efficiency of the proposed algorithm are assessed with experiments on SSS image segmentation.

  2. Comparing Bayesian neural network algorithms for classifying segmented outdoor images.

    PubMed

    Vivarelli, F; Williams, C K

    2001-05-01

    In this paper we investigate the Bayesian training of neural networks for region labelling of segmented outdoor scenes; the data are drawn from the Sowerby Image Database of British Aerospace. Neural networks are trained with two Bayesian methods, (i) the evidence framework of MacKay (1992a,b) and (ii) a Markov Chain Monte Carlo method due to Neal (1996). The performance of the two methods is compared by evaluating the empirical learning curves of neural networks trained with the two methods. We also investigate the use of the Automatic Relevance Determination method for input feature selection.

  3. A New SAR Image Segmentation Algorithm for the Detection of Target and Shadow Regions

    NASA Astrophysics Data System (ADS)

    Huang, Shiqi; Huang, Wenzhun; Zhang, Ting

    2016-12-01

    The most distinctive characteristic of synthetic aperture radar (SAR) is that it can acquire data under all weather conditions and at all times. However, its coherent imaging mechanism introduces a great deal of speckle noise into SAR images, which makes the segmentation of target and shadow regions in SAR images very difficult. This paper proposes a new SAR image segmentation method based on wavelet decomposition and a constant false alarm rate (WD-CFAR). The WD-CFAR algorithm not only is insensitive to the speckle noise in SAR images but also can segment target and shadow regions simultaneously, and it is also able to effectively segment SAR images with a low signal-to-clutter ratio (SCR). Experiments were performed to assess the performance of the new algorithm on various SAR images. The experimental results show that the proposed method is effective and feasible and possesses good characteristics for general application.
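
    The constant-false-alarm-rate idea behind WD-CFAR can be illustrated with a generic cell-averaging CFAR detector applied directly to a 2D image: each pixel is compared with a threshold scaled from the mean of a surrounding background window, with a guard band excluded. The sketch below is not the authors' wavelet-domain variant; the window sizes and scale factor are assumptions.

        # Generic cell-averaging CFAR sketch (not the paper's WD-CFAR): each pixel
        # is compared against a threshold scaled from the mean of surrounding
        # background cells, excluding a guard band. Parameters are illustrative.
        import numpy as np
        from scipy.ndimage import uniform_filter

        def ca_cfar(image, bg_size=21, guard_size=7, scale=3.0):
            image = image.astype(float)
            # Local sums over the background window and the guard window.
            bg_sum = uniform_filter(image, size=bg_size) * bg_size ** 2
            guard_sum = uniform_filter(image, size=guard_size) * guard_size ** 2
            n_bg = bg_size ** 2 - guard_size ** 2
            background_mean = (bg_sum - guard_sum) / n_bg
            return image > scale * background_mean  # boolean detection mask

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            img = rng.exponential(scale=1.0, size=(128, 128))  # speckle-like clutter
            img[60:68, 60:68] += 8.0                            # bright target patch
            mask = ca_cfar(img)
            print("detected pixels:", int(mask.sum()))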

  4. A New SAR Image Segmentation Algorithm for the Detection of Target and Shadow Regions

    PubMed Central

    Huang, Shiqi; Huang, Wenzhun; Zhang, Ting

    2016-01-01

    The most distinctive characteristic of synthetic aperture radar (SAR) is that it can acquire data under all weather conditions and at all times. However, its coherent imaging mechanism introduces a great deal of speckle noise into SAR images, which makes the segmentation of target and shadow regions in SAR images very difficult. This paper proposes a new SAR image segmentation method based on wavelet decomposition and a constant false alarm rate (WD-CFAR). The WD-CFAR algorithm not only is insensitive to the speckle noise in SAR images but also can segment target and shadow regions simultaneously, and it is also able to effectively segment SAR images with a low signal-to-clutter ratio (SCR). Experiments were performed to assess the performance of the new algorithm on various SAR images. The experimental results show that the proposed method is effective and feasible and possesses good characteristics for general application. PMID:27924935

  5. A New SAR Image Segmentation Algorithm for the Detection of Target and Shadow Regions.

    PubMed

    Huang, Shiqi; Huang, Wenzhun; Zhang, Ting

    2016-12-07

    The most distinctive characteristic of synthetic aperture radar (SAR) is that it can acquire data under all weather conditions and at all times. However, its coherent imaging mechanism introduces a great deal of speckle noise into SAR images, which makes the segmentation of target and shadow regions in SAR images very difficult. This paper proposes a new SAR image segmentation method based on wavelet decomposition and a constant false alarm rate (WD-CFAR). The WD-CFAR algorithm not only is insensitive to the speckle noise in SAR images but also can segment target and shadow regions simultaneously, and it is also able to effectively segment SAR images with a low signal-to-clutter ratio (SCR). Experiments were performed to assess the performance of the new algorithm on various SAR images. The experimental results show that the proposed method is effective and feasible and possesses good characteristics for general application.

  6. A novel breast ultrasound image segmentation algorithm based on neutrosophic similarity score and level set.

    PubMed

    Guo, Yanhui; Şengür, Abdulkadir; Tian, Jia-Wei

    2016-01-01

    Breast ultrasound (BUS) image segmentation is a challenging task due to the speckle noise, poor quality of the ultrasound images and size and location of the breast lesions. In this paper, we propose a new BUS image segmentation algorithm based on neutrosophic similarity score (NSS) and level set algorithm. At first, the input BUS image is transferred to the NS domain via three membership subsets T, I and F, and then, a similarity score NSS is defined and employed to measure the belonging degree to the true tumor region. Finally, the level set method is used to segment the tumor from the background tissue region in the NSS image. Experiments have been conducted on a variety of clinical BUS images. Several measurements are used to evaluate and compare the proposed method's performance. The experimental results demonstrate that the proposed method is able to segment the BUS images effectively and accurately.

  7. On the Automated Segmentation of Epicardial and Mediastinal Cardiac Adipose Tissues Using Classification Algorithms.

    PubMed

    Rodrigues, Érick Oliveira; Cordeiro de Morais, Felipe Fernandes; Conci, Aura

    2015-01-01

    The quantification of fat depots on the surroundings of the heart is an accurate procedure for evaluating health risk factors correlated with several diseases. However, this type of evaluation is not widely employed in clinical practice due to the required human workload. This work proposes a novel technique for the automatic segmentation of cardiac fat pads. The technique is based on applying classification algorithms to the segmentation of cardiac CT images. Furthermore, we extensively evaluate the performance of several algorithms on this task and discuss which provided better predictive models. Experimental results have shown that the mean accuracy for the classification of epicardial and mediastinal fats was 98.4% with a mean true positive rate of 96.2%. On average, the Dice similarity index, regarding the segmented patients and the ground truth, was equal to 96.8%. Therefore, our technique has achieved the most accurate results for the automatic segmentation of cardiac fats, to date.

  8. Liver Segmentation Based on Snakes Model and Improved GrowCut Algorithm in Abdominal CT Image

    PubMed Central

    He, Baochun; Ma, Zhiyuan; Zong, Mao; Zhou, Xiangrong; Fujita, Hiroshi

    2013-01-01

    A novel method based on the Snakes model and the GrowCut algorithm is proposed to segment the liver region in abdominal CT images. First, according to the traditional GrowCut method, a pretreatment process using the K-means algorithm is conducted to reduce the running time. Then, the segmentation result of our improved GrowCut approach is used as an initial contour for the subsequent precise segmentation based on the Snakes model. Finally, several experiments are carried out to demonstrate the performance of our proposed approach, and comparisons are conducted between the proposed approach and the traditional GrowCut algorithm. Experimental results show that the improved approach not only has better robustness and precision but also is more efficient than the traditional GrowCut method. PMID:24066017

  9. A graph-based segmentation algorithm for tree crown extraction using airborne LiDAR data

    NASA Astrophysics Data System (ADS)

    Strîmbu, Victor F.; Strîmbu, Bogdan M.

    2015-06-01

    This work proposes a segmentation method that isolates individual tree crowns using airborne LiDAR data. The proposed approach captures the topological structure of the forest in hierarchical data structures, quantifies topological relationships of tree crown components in a weighted graph, and finally partitions the graph to separate individual tree crowns. This novel bottom-up segmentation strategy is based on several quantifiable cohesion criteria that act as a measure of belief on whether two crown components belong to the same tree. An added flexibility is provided by a set of weights that balance the contribution of each criterion, thus effectively allowing the algorithm to adjust to different forest structures. The LiDAR data used for testing was acquired in Louisiana, inside the Clear Creek Wildlife Management Area with a RIEGL LMS-Q680i airborne laser scanner. Three 1 ha forest areas of different conditions and increasing complexity were segmented and assessed in terms of an accuracy index (AI) accounting for both omission and commission. The three areas were segmented under optimum parameterization with an AI of 98.98%, 92.25% and 74.75% respectively, revealing the excellent potential of the algorithm. When segmentation parameters are optimized locally using plot references the AI drops to 98.23%, 89.24%, and 68.04% on average with plot sizes of 1000 m2 and 97.68%, 87.78% and 61.1% on average with plot sizes of 500 m2. More than introducing a segmentation algorithm, this paper proposes a powerful framework featuring flexibility to support a series of segmentation methods including some of those recurring in the tree segmentation literature. The segmentation method may extend its applications to any data of topological nature or data that has a topological equivalent.

  10. Task-based evaluation of segmentation algorithms for diffusion-weighted MRI without using a gold standard.

    PubMed

    Jha, Abhinav K; Kupinski, Matthew A; Rodríguez, Jeffrey J; Stephen, Renu M; Stopeck, Alison T

    2012-07-07

    In many studies, the estimation of the apparent diffusion coefficient (ADC) of lesions in visceral organs in diffusion-weighted (DW) magnetic resonance images requires an accurate lesion-segmentation algorithm. To evaluate these lesion-segmentation algorithms, region-overlap measures are used currently. However, the end task from the DW images is accurate ADC estimation, and the region-overlap measures do not evaluate the segmentation algorithms on this task. Moreover, these measures rely on the existence of gold-standard segmentation of the lesion, which is typically unavailable. In this paper, we study the problem of task-based evaluation of segmentation algorithms in DW imaging in the absence of a gold standard. We first show that using manual segmentations instead of gold-standard segmentations for this task-based evaluation is unreliable. We then propose a method to compare the segmentation algorithms that does not require gold-standard or manual segmentation results. The no-gold-standard method estimates the bias and the variance of the error between the true ADC values and the ADC values estimated using the automated segmentation algorithm. The method can be used to rank the segmentation algorithms on the basis of both the ensemble mean square error and precision. We also propose consistency checks for this evaluation technique.

  11. Nonlinear physical segmentation algorithm for determining the layer boundary from lidar signal.

    PubMed

    Mao, Feiyue; Li, Jun; Li, Chen; Gong, Wei; Min, Qilong; Wang, Wei

    2015-11-30

    Layer boundary (base and top) detection is a basic problem in lidar data processing, the results of which are used as inputs of optical properties retrieval. However, traditional algorithms not only require manual intervention but also rely heavily on the signal-to-noise ratio. Therefore, we propose a robust and automatic algorithm for layer detection based on a novel algorithm for lidar signal segmentation and representation. Our algorithm is based on the lidar equation and avoids most of the limitations of the traditional algorithms. Testing of the simulated and real signals shows that the algorithm is able to position the base and top accurately even with a low signal to noise ratio. Furthermore, the results of the classification are accurate and satisfactory. The experimental results confirm that our algorithm can be used for automatic detection, retrieval, and analysis of lidar data sets.

  12. A Pulse Coupled Neural Network Segmentation Algorithm for Reflectance Confocal Images of Epithelial Tissue

    PubMed Central

    Malik, Bilal H.; Jabbour, Joey M.; Maitland, Kristen C.

    2015-01-01

    Automatic segmentation of nuclei in reflectance confocal microscopy images is critical for visualization and rapid quantification of nuclear-to-cytoplasmic ratio, a useful indicator of epithelial precancer. Reflectance confocal microscopy can provide three-dimensional imaging of epithelial tissue in vivo with sub-cellular resolution. Changes in nuclear density or nuclear-to-cytoplasmic ratio as a function of depth obtained from confocal images can be used to determine the presence or stage of epithelial cancers. However, low nuclear to background contrast, low resolution at greater imaging depths, and significant variation in reflectance signal of nuclei complicate segmentation required for quantification of nuclear-to-cytoplasmic ratio. Here, we present an automated segmentation method to segment nuclei in reflectance confocal images using a pulse coupled neural network algorithm, specifically a spiking cortical model, and an artificial neural network classifier. The segmentation algorithm was applied to an image model of nuclei with varying nuclear to background contrast. Greater than 90% of simulated nuclei were detected for contrast of 2.0 or greater. Confocal images of porcine and human oral mucosa were used to evaluate application to epithelial tissue. Segmentation accuracy was assessed using manual segmentation of nuclei as the gold standard. PMID:25816131

  13. A pulse coupled neural network segmentation algorithm for reflectance confocal images of epithelial tissue.

    PubMed

    Harris, Meagan A; Van, Andrew N; Malik, Bilal H; Jabbour, Joey M; Maitland, Kristen C

    2015-01-01

    Automatic segmentation of nuclei in reflectance confocal microscopy images is critical for visualization and rapid quantification of nuclear-to-cytoplasmic ratio, a useful indicator of epithelial precancer. Reflectance confocal microscopy can provide three-dimensional imaging of epithelial tissue in vivo with sub-cellular resolution. Changes in nuclear density or nuclear-to-cytoplasmic ratio as a function of depth obtained from confocal images can be used to determine the presence or stage of epithelial cancers. However, low nuclear to background contrast, low resolution at greater imaging depths, and significant variation in reflectance signal of nuclei complicate segmentation required for quantification of nuclear-to-cytoplasmic ratio. Here, we present an automated segmentation method to segment nuclei in reflectance confocal images using a pulse coupled neural network algorithm, specifically a spiking cortical model, and an artificial neural network classifier. The segmentation algorithm was applied to an image model of nuclei with varying nuclear to background contrast. Greater than 90% of simulated nuclei were detected for contrast of 2.0 or greater. Confocal images of porcine and human oral mucosa were used to evaluate application to epithelial tissue. Segmentation accuracy was assessed using manual segmentation of nuclei as the gold standard.

  14. Cell segmentation in histopathological images with deep learning algorithms by utilizing spatial relationships.

    PubMed

    Hatipoglu, Nuh; Bilgin, Gokhan

    2017-02-28

    In many computerized methods for cell detection, segmentation, and classification in digital histopathology that have recently emerged, the task of cell segmentation remains a chief problem for image processing in designing computer-aided diagnosis (CAD) systems. In research and diagnostic studies on cancer, pathologists can use CAD systems as second readers to analyze high-resolution histopathological images. Since cell detection and segmentation are critical for cancer grade assessments, cellular and extracellular structures should primarily be extracted from histopathological images. In response, we sought to identify a useful cell segmentation approach with histopathological images that uses not only prominent deep learning algorithms (i.e., convolutional neural networks, stacked autoencoders, and deep belief networks), but also spatial relationships, information of which is critical for achieving better cell segmentation results. To that end, we collected cellular and extracellular samples from histopathological images by windowing in small patches with various sizes. In experiments, the segmentation accuracies of the methods used improved as the window sizes increased due to the addition of local spatial and contextual information. Once we compared the effects of training sample size and influence of window size, results revealed that the deep learning algorithms, especially convolutional neural networks and partly stacked autoencoders, performed better than conventional methods in cell segmentation.
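
    A patch-wise classifier of the kind described above can be sketched with a small convolutional network in PyTorch. The architecture, patch size, and two-class output below are illustrative assumptions rather than the networks evaluated in the paper.

        # Minimal sketch of patch-wise cell/background classification with a small
        # CNN (PyTorch), echoing the windowing-in-small-patches idea; the
        # architecture and patch size are assumptions, not the paper's networks.
        import torch
        import torch.nn as nn

        class PatchClassifier(nn.Module):
            def __init__(self, patch_size=32):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                # Two output classes: cellular vs. extracellular patch.
                self.classifier = nn.Linear(32 * (patch_size // 4) ** 2, 2)

            def forward(self, x):
                x = self.features(x)
                return self.classifier(x.flatten(1))

        if __name__ == "__main__":
            model = PatchClassifier(patch_size=32)
            patches = torch.randn(8, 1, 32, 32)   # a batch of grayscale patches
            logits = model(patches)
            print(logits.shape)                    # (8, 2)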

  15. An Unsupervised Algorithm for Segmenting Categorical Timeseries into Episodes

    DTIC Science & Technology

    2002-01-01

    encoded in the standard GB scheme. Franz Kafka's The Castle in the original German comprised the final text. For comparison purposes we selected the ... Orwell corpus, and 10% of the Kafka corpus, so it is not surprising that the algorithm performs worst on the Chinese corpus and best on the Kafka corpus. Table 2: Results of running Voting-Experts on Franz Kafka's The Castle, Orwell's 1984, a subset of

  16. A modified fuzzy C-means algorithm for bias field estimation and segmentation of MRI data.

    PubMed

    Ahmed, Mohamed N; Yamany, Sameh M; Mohamed, Nevin; Farag, Aly A; Moriarty, Thomas

    2002-03-01

    In this paper, we present a novel algorithm for fuzzy segmentation of magnetic resonance imaging (MRI) data and estimation of intensity inhomogeneities using fuzzy logic. MRI intensity inhomogeneities can be attributed to imperfections in the radio-frequency coils or to problems associated with the acquisition sequences. The result is a slowly varying shading artifact over the image that can produce errors with conventional intensity-based classification. Our algorithm is formulated by modifying the objective function of the standard fuzzy c-means (FCM) algorithm to compensate for such inhomogeneities and to allow the labeling of a pixel (voxel) to be influenced by the labels in its immediate neighborhood. The neighborhood effect acts as a regularizer and biases the solution toward piecewise-homogeneous labelings. Such a regularization is useful in segmenting scans corrupted by salt and pepper noise. Experimental results on both synthetic images and MR data are given to demonstrate the effectiveness and efficiency of the proposed algorithm.
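
    For reference, the baseline that this paper modifies is the standard FCM update on pixel intensities, shown below as a short sketch. The neighborhood regularizer and bias-field compensation introduced in the paper are not reproduced; the cluster count and fuzzifier m are assumed.

        # Sketch of the standard FCM iteration on intensities, the baseline that
        # the paper extends with a neighborhood term and bias-field model (those
        # extensions are not reproduced here). Cluster count and m are assumed.
        import numpy as np

        def fcm_intensities(x, n_clusters=3, m=2.0, n_iter=50, seed=0):
            rng = np.random.default_rng(seed)
            x = x.reshape(-1, 1).astype(float)
            centers = rng.choice(x.ravel(), size=n_clusters).reshape(-1, 1)
            for _ in range(n_iter):
                d = np.abs(x - centers.T) + 1e-12               # (N, C) distances
                u = 1.0 / (d ** (2.0 / (m - 1.0)))
                u /= u.sum(axis=1, keepdims=True)               # membership update
                um = u ** m
                centers = (um.T @ x) / um.sum(axis=0)[:, None]  # center update
            return u, centers

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            img = np.concatenate([rng.normal(50, 5, 500), rng.normal(120, 5, 500)])
            u, c = fcm_intensities(img, n_clusters=2)
            print("centers:", np.sort(c.ravel()))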

  17. Evaluation of prostate segmentation algorithms for MRI: the PROMISE12 challenge.

    PubMed

    Litjens, Geert; Toth, Robert; van de Ven, Wendy; Hoeks, Caroline; Kerkstra, Sjoerd; van Ginneken, Bram; Vincent, Graham; Guillard, Gwenael; Birbeck, Neil; Zhang, Jindang; Strand, Robin; Malmberg, Filip; Ou, Yangming; Davatzikos, Christos; Kirschner, Matthias; Jung, Florian; Yuan, Jing; Qiu, Wu; Gao, Qinquan; Edwards, Philip Eddie; Maan, Bianca; van der Heijden, Ferdinand; Ghose, Soumya; Mitra, Jhimli; Dowling, Jason; Barratt, Dean; Huisman, Henkjan; Madabhushi, Anant

    2014-02-01

    Prostate MRI image segmentation has been an area of intense research due to the increased use of MRI as a modality for the clinical workup of prostate cancer. Segmentation is useful for various tasks, e.g. to accurately localize prostate boundaries for radiotherapy or to initialize multi-modal registration algorithms. In the past, it has been difficult for research groups to evaluate prostate segmentation algorithms on multi-center, multi-vendor and multi-protocol data. Especially because we are dealing with MR images, image appearance, resolution and the presence of artifacts are affected by differences in scanners and/or protocols, which in turn can have a large influence on algorithm accuracy. The Prostate MR Image Segmentation (PROMISE12) challenge was set up to allow a fair and meaningful comparison of segmentation methods on the basis of performance and robustness. In this work we will discuss the initial results of the online PROMISE12 challenge, and the results obtained in the live challenge workshop hosted by the MICCAI2012 conference. In the challenge, 100 prostate MR cases from 4 different centers were included, with differences in scanner manufacturer, field strength and protocol. A total of 11 teams from academic research groups and industry participated. Algorithms showed a wide variety in methods and implementation, including active appearance models, atlas registration and level sets. Evaluation was performed using boundary and volume based metrics which were combined into a single score relating the metrics to human expert performance. The winners of the challenge were the algorithms by teams Imorphics and ScrAutoProstate, with scores of 85.72 and 84.29 overall. Both algorithms were significantly better than all other algorithms in the challenge (p<0.05) and had efficient implementations with run times of 8 min and 3 s per case, respectively. Overall, active appearance model based approaches seemed to outperform other approaches like multi

  18. Novel algorithm by low complexity filter on retinal vessel segmentation

    NASA Astrophysics Data System (ADS)

    Rostampour, Samad

    2011-10-01

    This article shows a new method to detect blood vessels in the retina from digital images. Retinal vessel segmentation is important for detecting side effects of diabetes, because diabetes can form new capillaries which are very brittle. The research has been done in two phases: preprocessing and processing. The preprocessing phase consists of applying a new filter that produces a suitable output: it shows vessels in dark color on a white background and creates a good distinction between vessels and background. The complexity is very low and extra images are eliminated. The second phase, processing, uses a Bayesian method, a supervised classification method. This method uses the mean and variance of pixel intensities to calculate probabilities. Finally, the pixels of the image are divided into two classes: vessels and background. The images used are from the DRIVE database. After performing this operation, the calculation gives an average efficiency of 95 percent. The method was also applied to an external sample outside the DRIVE database which exhibits retinopathy, and a perfect result was obtained.
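
    The Bayesian step described above, classifying each pixel from class-wise intensity means and variances, amounts to a two-class Gaussian classifier. The sketch below uses synthetic training intensities and an arbitrary vessel prior; it illustrates the idea rather than the author's implementation.

        # Sketch of a two-class Gaussian (Bayesian) pixel classifier: each class is
        # modeled by the mean and variance of its training intensities and pixels
        # are assigned to the class with the higher posterior. Data are synthetic.
        import numpy as np

        def fit_gaussian(samples):
            return float(np.mean(samples)), float(np.var(samples) + 1e-9)

        def log_likelihood(x, mean, var):
            return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

        def classify_pixels(image, vessel_params, background_params, prior_vessel=0.1):
            ll_v = log_likelihood(image, *vessel_params) + np.log(prior_vessel)
            ll_b = log_likelihood(image, *background_params) + np.log(1 - prior_vessel)
            return ll_v > ll_b   # True where a pixel is labeled as vessel

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            vessels = rng.normal(60, 10, 2000)      # darker vessel intensities (training)
            background = rng.normal(150, 20, 8000)  # brighter background (training)
            image = rng.normal(150, 20, (64, 64))
            image[30:34, :] = rng.normal(60, 10, (4, 64))   # a dark "vessel" band
            mask = classify_pixels(image, fit_gaussian(vessels), fit_gaussian(background))
            print("vessel pixels detected:", int(mask.sum()))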

  19. Generalized rough fuzzy c-means algorithm for brain MR image segmentation.

    PubMed

    Ji, Zexuan; Sun, Quansen; Xia, Yong; Chen, Qiang; Xia, Deshen; Feng, Dagan

    2012-11-01

    Fuzzy sets and rough sets have been widely used in many clustering algorithms for medical image segmentation, and have recently been combined together to better deal with the uncertainty implied in observed image data. Despite their widespread applications, traditional hybrid approaches are sensitive to the empirical weighting parameters and random initialization, and hence may produce less accurate results. In this paper, a novel hybrid clustering approach, namely the generalized rough fuzzy c-means (GRFCM) algorithm, is proposed for brain MR image segmentation. In this algorithm, each cluster is characterized by three automatically determined rough-fuzzy regions, and accordingly the membership of each pixel is estimated with respect to the region in which it is located. The importance of each region is balanced by a weighting parameter, and the bias field in MR images is modeled by a linear combination of orthogonal polynomials. The weighting parameter estimation and bias field correction have been incorporated into the iterative clustering process. Our algorithm has been compared to the existing rough c-means and hybrid clustering algorithms on both synthetic and clinical brain MR images. Experimental results demonstrate that the proposed algorithm is more robust to the initialization, noise, and bias field, and can produce more accurate and reliable segmentations.

  20. Shack-Hartmann mask/pupil registration algorithm for wavefront sensing in segmented mirror telescopes.

    PubMed

    Piatrou, Piotr; Chanan, Gary

    2013-11-10

    Shack-Hartmann wavefront sensing in general requires careful registration of the reimaged telescope primary mirror to the Shack-Hartmann mask or lenslet array. The registration requirements are particularly demanding for applications in which segmented mirrors are phased using a physical optics generalization of the Shack-Hartmann test. In such cases the registration tolerances are less than 0.1% of the diameter of the primary mirror. We present a pupil registration algorithm suitable for such high accuracy applications that is based on the one used successfully for phasing the segments of the Keck telescopes. The pupil is aligned in four degrees of freedom (translations, rotation, and magnification) by balancing the intensities of subimages formed by small subapertures that straddle the periphery of the mirror. We describe the algorithm in general terms and then in the specific context of two very different geometries: the 492 segment Thirty Meter Telescope, and the seven "segment" Giant Magellan Telescope. Through detailed simulations we explore the accuracy of the algorithm and its sensitivity to such effects as cross talk, noise/counting statistics, atmospheric scintillation, and segment reflectivity variations.

  1. An evolutionary algorithm for the segmentation of muscles and bones of the lower limb.

    NASA Astrophysics Data System (ADS)

    López, Marco A.; Braidot, A.; Sattler, Aníbal; Schira, Claudia; Uriburu, E.

    2016-04-01

    In the field of medical image segmentation, muscle segmentation is a problem that has not been fully resolved yet. This is due to the fact that the basic assumption of image segmentation, which asserts that a visual distinction should exist between the different structures to be identified, is infringed. As the tissue composition of two different muscles is the same, it becomes extremely difficult to distinguish one from another when they are adjacent. We have developed an evolutionary algorithm which selects the set and the sequence of morphological operators that best segments muscles and bones from an MRI image. The achieved results show that the developed algorithm presents average sensitivity values close to 75% in the segmentation of the different processed muscles and bones. It also presents average specificity values close to 93% for the same structures. Furthermore, the algorithm can identify muscles that are closely located along the path from their origin point to their insertions, with very low error values (below 7%).
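
    The core idea, evolving a sequence of morphological operators and scoring candidates against a reference, can be sketched as a toy genetic search. The operator set, sequence length, mutation scheme, and Dice fitness below are illustrative assumptions, not the authors' algorithm.

        # Toy sketch of evolving a sequence of morphological operators and scoring
        # candidates against a reference mask (Dice as fitness). Operator set, GA
        # parameters, and the synthetic masks are illustrative assumptions.
        import numpy as np
        from scipy import ndimage as ndi

        OPS = {
            "erode":  lambda m: ndi.binary_erosion(m),
            "dilate": lambda m: ndi.binary_dilation(m),
            "open":   lambda m: ndi.binary_opening(m),
            "close":  lambda m: ndi.binary_closing(m),
        }
        NAMES = list(OPS)

        def apply_sequence(mask, seq):
            for name in seq:
                mask = OPS[name](mask)
            return mask

        def dice(a, b):
            a, b = a.astype(bool), b.astype(bool)
            return 2 * np.logical_and(a, b).sum() / max(a.sum() + b.sum(), 1)

        def evolve(initial_mask, reference, pop_size=20, seq_len=4, gens=30, seed=0):
            rng = np.random.default_rng(seed)
            pop = [list(rng.choice(NAMES, seq_len)) for _ in range(pop_size)]
            for _ in range(gens):
                scored = sorted(pop, key=lambda s: -dice(apply_sequence(initial_mask, s), reference))
                parents = scored[: pop_size // 2]
                children = []
                for p in parents:                       # mutate one operator per child
                    child = p.copy()
                    child[rng.integers(seq_len)] = rng.choice(NAMES)
                    children.append(child)
                pop = parents + children
            return scored[0]

        if __name__ == "__main__":
            ref = np.zeros((40, 40), bool); ref[10:30, 10:30] = True
            noisy = ref.copy()
            noisy[np.random.default_rng(1).random(ref.shape) < 0.05] ^= True
            print("best operator sequence:", evolve(noisy, ref))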

  2. Individual tooth region segmentation using modified watershed algorithm with morphological characteristic.

    PubMed

    Na, Sung Dae; Lee, Gihyoun; Lee, Jyung Hyun; Kim, Myoung Nam

    2014-01-01

    In this paper, a new method for individual tooth segmentation is proposed. The proposed method is composed of enhancement and of extraction of the boundary and seeds of the watershed algorithm using trisection areas derived from the morphological characteristics of teeth. The watershed algorithm is one of the conventional methods for tooth segmentation; however, the method has some problems. First, the molar region detection ratio is reduced because of oral structure features, namely the low intensities in the molar region. Second, inaccurate segmentation occurs in the incisor region owing to specular reflection. To solve these problems, a trisection method using morphological characteristics is proposed, in which three tooth areas are defined using the ratio of the entire tooth row to each tooth, and the enhancement step improves the intensity of the molar region. In addition, the boundary and seeds of the watershed are extracted using the trisection areas, with different parameters applied to each area. Finally, individual tooth segmentation is performed using the extracted boundary and seeds. Furthermore, the proposed method was compared with conventional methods to confirm its efficiency. As a result, the proposed method was demonstrated to have a higher detection ratio and less over-segmentation and overlapping segmentation than conventional methods.
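
    The boundary-and-seed step feeding the watershed can be illustrated with a generic marker-controlled watershed in scikit-image; the paper's trisection-based parameter selection is not reproduced, and the eroded-mask seeds and toy blobs below are assumptions.

        # Generic marker-controlled watershed sketch (scikit-image); the seeds are
        # obtained by a simple erosion heuristic and the "teeth" are toy blobs,
        # both assumptions rather than the paper's trisection-based extraction.
        import numpy as np
        from scipy import ndimage as ndi
        from skimage.segmentation import watershed

        def marker_watershed(binary_mask):
            distance = ndi.distance_transform_edt(binary_mask)
            # Seeds: interior regions obtained by eroding the mask (assumed heuristic).
            seeds, _ = ndi.label(ndi.binary_erosion(binary_mask, iterations=5))
            return watershed(-distance, markers=seeds, mask=binary_mask)

        if __name__ == "__main__":
            mask = np.zeros((80, 80), bool)
            mask[10:35, 10:35] = True            # two touching objects
            mask[30:60, 30:60] = True
            labels = marker_watershed(mask)
            print("segments found:", int(labels.max()))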

  3. [Study of color blood image segmentation based on two-stage-improved FCM algorithm].

    PubMed

    Wang, Bin; Chen, Huaiqing; Huang, Hua; Rao, Jie

    2006-04-01

    This paper introduces a new method for color blood cell image segmentation based on the FCM algorithm. By transforming the original blood microscope image into an indexed image and clustering on its colormap, a fuzzy approach that avoids direct clustering of the image pixel values, the quantity of data to be processed and analyzed is enormously compressed. In accordance with the inherent features of color blood cell images, the segmentation process is divided into two stages: (1) determining the number of clusters and the initial cluster centers; (2) altering the distance measure by means of a distance weighting matrix in order to improve clustering accuracy. In this way, the problem of difficult convergence of the FCM algorithm is solved, the time for iterative convergence is reduced, the execution time of the algorithm is decreased, and correct segmentation of the components of the color blood cell image is achieved.

  4. Surgical wound segmentation based on adaptive threshold edge detection and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Shih, Hsueh-Fu; Ho, Te-Wei; Hsu, Jui-Tse; Chang, Chun-Che; Lai, Feipei; Wu, Jin-Ming

    2017-02-01

    Postsurgical wound care has a great impact on patients' prognosis. It often takes a few days, or even a few weeks, for the wound to stabilize, which incurs a great cost in health care and nursing resources. To assess the wound condition and support diagnosis, it is important to segment out the wound region for further analysis. However, images in this scenario often contain a complicated background and noise. In this study, we propose a wound segmentation algorithm based on the Canny edge detector and a genetic algorithm with an unsupervised evaluation function. The results were evaluated on 112 clinical images, and 94.3% of the images were correctly segmented. The judgment was based on the evaluation of experienced medical doctors. This capability to extract complete wound regions makes it possible to conduct further image analysis such as intelligent recovery evaluation and automatic infection assessment.

  5. A martian case study of segmenting images automatically for granulometry and sedimentology, Part 1: Algorithm

    NASA Astrophysics Data System (ADS)

    Karunatillake, Suniti; McLennan, Scott M.; Herkenhoff, Kenneth E.; Husch, Jonathan M.; Hardgrove, Craig; Skok, J. R.

    2014-02-01

    In planetary exploration, delineating individual grains in images via segmentation is a key path to sedimentological comparisons with the extensive terrestrial literature. Samples that contain a substantial fine grain component, common at Meridiani and Gusev on Mars, would involve prohibitive effort if attempted manually. Unavailability of physical samples also precludes standard terrestrial methods such as sieving. Furthermore, planetary scientists have been thwarted by the dearth of segmentation algorithms customized for planetary applications, including Mars, and often rely on sub-optimal solutions adapted from medical software. We address this with an original algorithm optimized to segment whole images from the Microscopic Imager of the Mars Exploration Rovers. While our code operates with minimal human guidance, its default parameters can be modified easily for different geologic settings and imagers on Earth and other planets, such as the Curiosity Rover’s Mars Hand Lens Imager. We assess the algorithm’s robustness in a companion work.

  6. Open-source algorithm for automatic choroid segmentation of OCT volume reconstructions

    PubMed Central

    Mazzaferri, Javier; Beaton, Luke; Hounye, Gisèle; Sayah, Diane N.; Costantino, Santiago

    2017-01-01

    The use of optical coherence tomography (OCT) to study ocular diseases associated with choroidal physiology is sharply limited by the lack of available automated segmentation tools. Current research largely relies on hand-traced, single B-Scan segmentations because commercially available programs require high quality images, and the existing implementations are closed, scarce and not freely available. We developed and implemented a robust algorithm for segmenting and quantifying the choroidal layer from 3-dimensional OCT reconstructions. Here, we describe the algorithm, validate and benchmark the results, and provide an open-source implementation under the General Public License for any researcher to use (https://www.mathworks.com/matlabcentral/fileexchange/61275-choroidsegmentation). PMID:28181546

  7. An algorithm for automating the registration of USDA segment ground data to LANDSAT MSS data

    NASA Technical Reports Server (NTRS)

    Graham, M. H. (Principal Investigator)

    1981-01-01

    The algorithm is referred to as the Automatic Segment Matching Algorithm (ASMA). The ASMA uses control points or the annotation record of a P-format LANDSAT computer compatible tape as the initial registration to relate latitude and longitude to LANDSAT rows and columns. It searches a given area of LANDSAT data with a 2x2 sliding window and computes gradient values for bands 5 and 7 to match the segment boundaries. The gradient values are held in memory during the shifting (or matching) process. The reconstructed segment array, containing ones (1's) for boundaries and zeros elsewhere, is then compared by computer to the LANDSAT array and the best match is computed. Initial testing of the ASMA indicates that it has good potential for replacing the manual technique.

  8. On the importance of FIB-SEM specific segmentation algorithms for porous media

    SciTech Connect

    Salzer, Martin; Thiele, Simon; Zengerle, Roland; Schmidt, Volker

    2014-09-15

    A new algorithmic approach to the segmentation of highly porous three-dimensional image data gained by focused ion beam tomography is described which extends the key principle of local threshold backpropagation described in Salzer et al. (2012). The technique of focused ion beam tomography has been shown to be capable of imaging the microstructure of functional materials. In order to perform a quantitative analysis on the corresponding microstructure a segmentation task needs to be performed. However, algorithmic segmentation of images obtained with focused ion beam tomography is a challenging problem for highly porous materials if filling the pore phase, e.g. with epoxy resin, is difficult. The gray intensities of individual voxels are not sufficient to determine the phase represented by them and usual thresholding methods are not applicable. We thus propose a new approach to segmentation that takes into account the specifics of the imaging process of focused ion beam tomography. As an application of our approach, the segmentation of three-dimensional images for a cathode material used in polymer electrolyte membrane fuel cells is discussed. We show that our approach preserves significantly more of the original nanostructure than a thresholding approach. - Highlights: • We describe a new approach to the segmentation of FIB-SEM images of porous media. • The first and last occurrences of structures are detected by analysing the z-profiles. • The algorithm is validated by comparing it to a manual segmentation. • The new approach shows significantly fewer artifacts than a thresholding approach. • A structural analysis also shows improved results for the obtained microstructure.

  9. A joint shape evolution approach to medical image segmentation using expectation-maximization algorithm.

    PubMed

    Farzinfar, Mahshid; Teoh, Eam Khwang; Xue, Zhong

    2011-11-01

    This study proposes an expectation-maximization (EM)-based curve evolution algorithm for segmentation of magnetic resonance brain images. In the proposed algorithm, the evolution curve is constrained not only by a shape-based statistical model but also by a hidden variable model from image observation. The hidden variable model herein is defined by the local voxel labeling, which is unknown and estimated by the expected likelihood function derived from the image data and prior anatomical knowledge. In the M-step, the shapes of the structures are estimated jointly by encoding the hidden variable model and the statistical prior model obtained from the training stage. In the E-step, the expected observation likelihood and the prior distribution of the hidden variables are estimated. In experiments, the proposed automatic segmentation algorithm is applied to multiple gray nuclei structures such as caudate, putamens and thalamus of three-dimensional magnetic resonance imaging in volunteers and patients. As for the robustness and accuracy of the segmentation algorithm, the results of the proposed EM-joint shape-based algorithm outperformed those obtained using the statistical shape model-based techniques in the same framework and a current state-of-the-art region competition level set method.

  10. A unifying graph-cut image segmentation framework: algorithms it encompasses and equivalences among them

    NASA Astrophysics Data System (ADS)

    Ciesielski, Krzysztof Chris; Udupa, Jayaram K.; Falcão, A. X.; Miranda, P. A. V.

    2012-02-01

    We present a general graph-cut segmentation framework GGC, in which the delineated objects returned by the algorithms optimize the energy functions associated with the lp norm, 1 <= p <= ∞. Two classes of well known algorithms belong to GGC: the standard graph cut GC (such as the min-cut/max-flow algorithm) and the relative fuzzy connectedness algorithms RFC (including iterative RFC, IRFC). The norm-based description of GGC provides a more elegant and mathematically better recognized framework for our earlier results from [18, 19]. Moreover, it allows a precise theoretical comparison of GGC representable algorithms with the algorithms discussed in a recent paper [22] (min-cut/max-flow graph cut, random walker, shortest path/geodesic, Voronoi diagram, power watershed/shortest path forest), which optimize, via lp norms, the intermediate segmentation step, the labeling of scene voxels, but for which the final object need not optimize the used lp energy function. Actually, the comparison of the GGC representable algorithms with those encompassed in the framework described in [22] constitutes the main contribution of this work.
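
    The standard graph cut GC member of this framework can be illustrated on a tiny 1D signal: terminal capacities encode the data cost of each label, pairwise capacities penalize cutting between similar neighbors, and a min-cut/max-flow routine (here networkx) yields the binary labeling. The intensity models and weights below are illustrative assumptions, not taken from the paper.

        # Minimal min-cut/max-flow binary segmentation sketch on a 1D "image"
        # using networkx; intensity models and energy weights are assumptions.
        import networkx as nx
        import numpy as np

        def graph_cut_1d(intensities, mu_obj=200.0, mu_bkg=50.0, lam=0.05):
            G = nx.DiGraph()
            n = len(intensities)
            for i, v in enumerate(intensities):
                # Terminal links (Boykov-Jolly style): the S->i capacity carries the
                # background data cost (paid if pixel i ends on the background side),
                # while the i->T capacity carries the object data cost.
                G.add_edge("S", i, capacity=(float(v) - mu_bkg) ** 2 * 1e-3)
                G.add_edge(i, "T", capacity=(float(v) - mu_obj) ** 2 * 1e-3)
            for i in range(n - 1):
                # Pairwise links: strong between similar neighbors, weak across edges.
                w = lam * np.exp(-abs(float(intensities[i]) - float(intensities[i + 1])) / 10.0)
                G.add_edge(i, i + 1, capacity=w)
                G.add_edge(i + 1, i, capacity=w)
            _, (source_side, _) = nx.minimum_cut(G, "S", "T")
            return [1 if i in source_side else 0 for i in range(n)]

        if __name__ == "__main__":
            signal = np.array([52, 48, 55, 60, 190, 205, 198, 210, 45, 50])
            print(graph_cut_1d(signal))   # expected: background, object run, background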

  11. Algorithm for the identification of malfunctioning sensors in the control systems of segmented mirror telescopes.

    PubMed

    Chanan, Gary; Nelson, Jerry

    2009-11-10

    The active control systems of segmented mirror telescopes are vulnerable to a malfunction of a few (or even one) of their segment edge sensors, the effects of which can propagate through the entire system and seriously compromise the overall telescope image quality. Since there are thousands of such sensors in the extremely large telescopes now under development, it is essential to develop fast and efficient algorithms that can identify bad sensors so that they can be removed from the control loop. Such algorithms are nontrivial; for example, a simple residual-to-the-fit test will often fail to identify a bad sensor. We propose an algorithm that can reliably identify a single bad sensor and we extend it to the more difficult case of multiple bad sensors. Somewhat surprisingly, the identification of a fixed number of bad sensors does not necessarily become more difficult as the telescope becomes larger and the number of sensors in the control system increases.

  12. Automated segmentation algorithm for detection of changes in vaginal epithelial morphology using optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Chitchian, Shahab; Vincent, Kathleen L.; Vargas, Gracie; Motamedi, Massoud

    2012-11-01

    We have explored the use of optical coherence tomography (OCT) as a noninvasive tool for assessing the toxicity of topical microbicides, products used to prevent HIV, by monitoring the integrity of the vaginal epithelium. A novel feature-based segmentation algorithm using a nearest-neighbor classifier was developed to monitor changes in the morphology of vaginal epithelium. The two-step automated algorithm yielded OCT images with a clearly defined epithelial layer, enabling differentiation of normal and damaged tissue. The algorithm was robust in that it was able to discriminate the epithelial layer from underlying stroma as well as residual microbicide product on the surface. This segmentation technique for OCT images has the potential to be readily adaptable to the clinical setting for noninvasively defining the boundaries of the epithelium, enabling quantifiable assessment of microbicide-induced damage in vaginal tissue.

  13. Malleable Fuzzy Local Median C Means Algorithm for Effective Biomedical Image Segmentation

    NASA Astrophysics Data System (ADS)

    Rajendran, Arunkumar; Balakrishnan, Nagaraj; Varatharaj, Mithya

    2016-12-01

    Traditional clustering plays an effective role in the field of segmentation; it has been developed to be more effective, and recent developments allow contextual information to be extracted with ease. This paper presents a modified Fuzzy C-Means (FCM) algorithm that provides better segmentation in the contour grayscale regions of biomedical images where effective clustering is needed. Malleable Fuzzy Local Median C-Means (M-FLMCM) is the proposed algorithm, designed to overcome the disadvantages of the traditional FCM method: a long convergence time, a lack of ability to remove noise, and an inability to cluster contour regions of such images. M-FLMCM shows promising results in experiments with real-world biomedical images. The experimental results show 96% accuracy in comparison with the other algorithms.

  14. GPU-based acceleration of an automatic white matter segmentation algorithm using CUDA.

    PubMed

    Labra, Nicole; Figueroa, Miguel; Guevara, Pamela; Duclap, Delphine; Hoeunou, Josselin; Poupon, Cyril; Mangin, Jean-Francois

    2013-01-01

    This paper presents a parallel implementation of an algorithm for automatic segmentation of white matter fibers from tractography data. We execute the algorithm in parallel using a high-end video card with a Graphics Processing Unit (GPU) as a computation accelerator, using the CUDA language. By exploiting the parallelism and the properties of the memory hierarchy available on the GPU, we obtain a speedup in execution time of 33.6 with respect to an optimized sequential version of the algorithm written in C, and of 240 with respect to the original Python/C++ implementation. The execution time is reduced from more than two hours to only 35 seconds for a subject dataset of 800,000 fibers, thus enabling applications that use interactive segmentation and visualization of small to medium-sized tractography datasets.

  15. The EM/MPM algorithm for segmentation of textured images: analysis and further experimental results.

    PubMed

    Comer, M L; Delp, E J

    2000-01-01

    In this paper we present new results relative to the "expectation-maximization/maximization of the posterior marginals" (EM/MPM) algorithm for simultaneous parameter estimation and segmentation of textured images. The EM/MPM algorithm uses a Markov random field model for the pixel class labels and alternately approximates the MPM estimate of the pixel class labels and estimates parameters of the observed image model. The goal of the EM/MPM algorithm is to minimize the expected value of the number of misclassified pixels. We present new theoretical results in this paper which show that the algorithm can be expected to achieve this goal, to the extent that the EM estimates of the model parameters are close to the true values of the model parameters. We also present new experimental results demonstrating the performance of the EM/MPM algorithm.

  16. An improved segmentation algorithm to detect moving object in video sequences

    NASA Astrophysics Data System (ADS)

    Li, Jinkui; Sang, Xinzhu; Wang, Yongqiang; Yan, Binbin; Yu, Chongxiu

    2010-11-01

    The segmentation of moving objects in video sequences is attracting more and more attention because of its important role in various camera video applications, such as video surveillance, traffic monitoring, people tracking, and so on. Conventional segmentation algorithms can be divided into two classes. One class is based on spatial homogeneity, which gives promising output; however, the computation is too complex and heavy to be suitable for real-time applications. The other class utilizes change detection as the segmentation criterion to extract the moving object. Typical approaches include frame difference, background subtraction and optical flow. A novel algorithm based on adaptive symmetrical difference and background subtraction is proposed. Firstly, the moving object mask is detected through the adaptive symmetrical difference, and the contour of the mask is extracted. Then, adaptive background subtraction is carried out in the acquired region to extract the accurate moving object. Morphological operations and shadow cancellation are adopted to refine the result. Experimental results show that the algorithm is robust and effective in improving the segmentation accuracy.
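
    The two ingredients named above, symmetrical frame difference followed by background subtraction restricted to the detected region, can be sketched generically as follows; the thresholds and the static background model are assumptions, and the adaptive aspects of the proposed algorithm are not reproduced.

        # Sketch of symmetrical (double) frame differencing to get a rough moving-
        # object mask, then background subtraction restricted to that region.
        # Thresholds and the static background model are illustrative assumptions.
        import numpy as np

        def symmetric_difference_mask(prev_frame, curr_frame, next_frame, thresh=25):
            d1 = np.abs(curr_frame.astype(int) - prev_frame.astype(int)) > thresh
            d2 = np.abs(next_frame.astype(int) - curr_frame.astype(int)) > thresh
            return np.logical_and(d1, d2)           # changed in both directions

        def refine_with_background(curr_frame, background, rough_mask, thresh=25):
            diff = np.abs(curr_frame.astype(int) - background.astype(int)) > thresh
            return np.logical_and(diff, rough_mask)  # keep pixels inside the rough region

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            background = rng.integers(90, 110, (60, 60))
            frames = [background.copy() for _ in range(3)]
            for t, value in enumerate((150, 200, 250)):  # block moves right, appearance varies
                frames[t][20:30, 15 + 5 * t:25 + 5 * t] = value
            rough = symmetric_difference_mask(*frames)
            final = refine_with_background(frames[1], background, rough)
            print("moving-object pixels in the middle frame:", int(final.sum()))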

  17. Feature measures for the segmentation of neuronal membrane using a machine learning algorithm

    NASA Astrophysics Data System (ADS)

    Iftikhar, Saadia; Godil, Afzal

    2013-12-01

    In this paper, we present a Support Vector Machine (SVM) based pixel classifier for a semi-automated segmentation algorithm to detect neuronal membrane structures in stacks of electron microscopy images of brain tissue samples. This algorithm uses high-dimensional feature spaces extracted from center-surrounded patches, and some distinct edge-sensitive features for each pixel in the image, and a training dataset for the segmentation of neuronal membrane structures and background. Some threshold conditions are later applied to remove small regions which are below a certain threshold criterion, and morphological operations, such as the filling of the detected objects, are done to achieve compactness in the objects. The performance of the segmentation method is calculated on unseen data by using three distinct error measures: pixel error, warping error, and Rand error, and also a pixel-by-pixel accuracy measure with their respective ground truth. The trained SVM classifier achieves its best values on these three distinct errors at 0.23, 0.016 and 0.15, respectively, while the best accuracy using the pixel-by-pixel measure reaches 77% on the given dataset. The results presented here are one step further towards exploring possible ways to solve these hard problems, such as segmentation in medical image analysis. In the future, we plan to extend it as a 3D segmentation approach for 3D datasets to not only retain the topological structures in the dataset but also for ease of further analysis.
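
    A compact sketch of SVM-based pixel classification from small patches is given below using scikit-learn; raw patch intensities stand in for the paper's center-surround and edge-sensitive features, and the synthetic image and labels are assumptions.

        # Compact sketch of SVM-based patch classification (scikit-learn); raw
        # patch intensities replace the paper's hand-crafted features, and the
        # synthetic image and training labels are illustrative assumptions.
        import numpy as np
        from sklearn.svm import SVC

        def extract_patches(image, centers, half=4):
            return np.array([image[r - half:r + half + 1, c - half:c + half + 1].ravel()
                             for r, c in centers])

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            image = rng.normal(100, 10, (100, 100))
            image[40:60, 40:60] += 80                 # a bright "membrane-like" region
            pos = [(r, c) for r in range(45, 55) for c in range(45, 55, 2)]
            neg = [(r, c) for r in range(10, 20) for c in range(10, 20, 2)]
            X = extract_patches(image, pos + neg)
            y = np.array([1] * len(pos) + [0] * len(neg))
            clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
            test = extract_patches(image, [(50, 50), (15, 15)])
            print(clf.predict(test))                  # expected: [1 0]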

  18. Brain tumor segmentation in MR slices using improved GrowCut algorithm

    NASA Astrophysics Data System (ADS)

    Ji, Chunhong; Yu, Jinhua; Wang, Yuanyuan; Chen, Liang; Shi, Zhifeng; Mao, Ying

    2015-12-01

    The detection of brain tumor from MR images is very significant for medical diagnosis and treatment. However, the existing methods are mostly based on manual or semiautomatic segmentation which are awkward when dealing with a large amount of MR slices. In this paper, a new fully automatic method for the segmentation of brain tumors in MR slices is presented. Based on the hypothesis of the symmetric brain structure, the method improves the interactive GrowCut algorithm by further using the bounding box algorithm in the pre-processing step. More importantly, local reflectional symmetry is used to make up the deficiency of the bounding box method. After segmentation, 3D tumor image is reconstructed. We evaluate the accuracy of the proposed method on MR slices with synthetic tumors and actual clinical MR images. Result of the proposed method is compared with the actual position of simulated 3D tumor qualitatively and quantitatively. In addition, our automatic method produces equivalent performance as manual segmentation and the interactive GrowCut with manual interference while providing fully automatic segmentation.

  19. Numerical arc segmentation algorithm for a radio conference - A software tool for communication satellite systems planning

    NASA Technical Reports Server (NTRS)

    Whyte, W. A.; Heyward, A. O.; Ponchak, D. S.; Spence, R. L.; Zuzek, J. E.

    1988-01-01

    A detailed description of a Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software package for communication satellite systems planning is presented. This software provides a method of generating predetermined arc segments for use in the development of an allotment planning procedure to be carried out at the 1988 World Administrative Radio Conference (WARC - 88) on the use of the GEO and the planning of space services utilizing GEO. The features of the NASARC software package are described, and detailed information is given about the function of each of the four NASARC program modules. The results of a sample world scenario are presented and discussed.

  20. Enhancement dark channel algorithm of color fog image based on the local segmentation

    NASA Astrophysics Data System (ADS)

    Yun, Lijun; Gao, Yin; Shi, Jun-sheng; Xu, Ling-zhang

    2015-04-01

    The classical dark channel algorithm has yielded good results in the processing of single fog images, but in some regions of larger contrast it introduces hue, brightness and saturation distortion to a certain degree, and also produces a halo phenomenon. In view of this situation, through extensive experiments, this paper identifies some of the factors causing the halo phenomenon. An enhancement dark channel algorithm for color fog images based on local segmentation is proposed. On the basis of the dark channel theory, first the mathematical model of the classic dark channel theory is modified, mainly to correct the brightness and saturation of the image. Then, according to local adaptive segmentation theory, the image is processed in overlapping local blocks. On the basis of statistical rules, each pixel value is obtained from the segmentation processing, so as to obtain the local image. Finally, using the dark channel theory, the enhanced fog image is obtained. Subjective observation and objective evaluation show that the algorithm is better than the classic dark channel algorithm both overall and in the details.
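
    The dark channel computation on which the whole approach rests is short enough to sketch: a per-pixel minimum over the color channels followed by a local minimum filter, plus a simple atmospheric-light estimate. The patch size and the top-fraction heuristic are assumptions, and the paper's local-segmentation correction is not reproduced.

        # Sketch of the basic dark channel computation and a simple atmospheric-
        # light estimate; patch size and top-fraction heuristic are assumptions,
        # and the paper's local-segmentation correction is not reproduced.
        import numpy as np
        from scipy.ndimage import minimum_filter

        def dark_channel(rgb_image, patch_size=15):
            per_pixel_min = rgb_image.min(axis=2)                  # min over color channels
            return minimum_filter(per_pixel_min, size=patch_size)  # min over local patch

        def estimate_atmospheric_light(rgb_image, dark, top_fraction=0.001):
            flat = dark.ravel()
            k = max(1, int(top_fraction * flat.size))
            idx = np.argpartition(flat, -k)[-k:]                   # brightest dark-channel pixels
            return rgb_image.reshape(-1, 3)[idx].max(axis=0)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            hazy = rng.uniform(0.4, 0.9, (120, 120, 3))            # synthetic hazy image in [0, 1]
            dark = dark_channel(hazy)
            print("mean dark channel:", float(dark.mean()))
            print("atmospheric light estimate:", estimate_atmospheric_light(hazy, dark))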

  1. Novel real-time volumetric tool segmentation algorithm for intraoperative microscope integrated OCT (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Viehland, Christian; Keller, Brenton; Carrasco-Zevallos, Oscar; Cunefare, David; Shen, Liangbo; Toth, Cynthia; Farsiu, Sina; Izatt, Joseph A.

    2016-03-01

    Optical coherence tomography (OCT) allows for micron scale imaging of the human retina and cornea. Current generation research and commercial intrasurgical OCT prototypes are limited to live B-scan imaging. Our group has developed an intraoperative microscope integrated OCT system capable of live 4D imaging. With a heads-up display (HUD), 4D imaging allows for dynamic intrasurgical visualization of tool-tissue interaction and surgical maneuvers. Currently our system relies on operator-based manual tracking to correct for patient motion and motion caused by the surgeon, to track the surgical tool, and to select the correct B-scan to display on the HUD. Even when tracking only bulk motion, the operator sometimes lags behind and the surgical region of interest can drift out of the OCT field of view. To facilitate imaging we report on the development of a fast volume-based tool segmentation algorithm. The algorithm is based on a previously reported volume rendering algorithm and can identify both the tool and the retinal surface. The algorithm requires 45 ms per volume for segmentation and can be used to actively place the B-scan across the tool-tissue interface. Alternatively, real-time tool segmentation can be used to allow the surgeon to use the surgical tool as an interactive B-scan pointer.

  2. Comparison of different automatic threshold algorithms for image segmentation in microscope images

    NASA Astrophysics Data System (ADS)

    Boecker, Wilfried; Muller, W.-U.; Streffer, Christian

    1995-08-01

    Image segmentation is almost always a necessary step in image processing. The employed threshold algorithms are based on the detection of local minima in the gray level histograms of the entire image. In automatic cell recognition equipment, like chromosome analysis or micronuclei counting systems, flexible and adaptive thresholds are required to account for variation in gray level intensities of the background and of the specimen. We have studied three different methods of threshold determination: 1) a statistical procedure, which uses the interclass entropy maximization of the gray level histogram. The iterative algorithm can be used for multithreshold segmentation. The contribution of iteration step i is 2^(i-1) thresholds; 2) a numerical approach, which detects local minima in the gray level histogram. The algorithm must be tailored and optimized for specific applications like cell recognition with two different thresholds for cell nuclei and cell cytoplasm segmentation; 3) an artificial neural network, which is trained with learning sets of image histograms and the corresponding interactively determined thresholds. We have investigated feed-forward networks with one and two layers, respectively. The gray level frequencies are used as inputs for the net. The number of different thresholds per image determines the output channels. We have tested and compared these different threshold algorithms for practical use in fluorescence microscopy as well as in bright field microscopy. The implementation and the results are presented and discussed.
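
    Method (1), thresholding by entropy maximization over the gray-level histogram, can be sketched for the single-threshold case as follows; the iterative multithreshold extension and the neural-network variant are not reproduced, and the synthetic bimodal test data are an assumption.

        # Single-threshold sketch of entropy-maximization thresholding: the chosen
        # threshold maximizes the sum of the entropies of the two gray-level
        # classes. Synthetic bimodal data are used for the demonstration.
        import numpy as np

        def max_entropy_threshold(image, n_bins=256):
            hist, _ = np.histogram(image, bins=n_bins, range=(0, n_bins))
            p = hist.astype(float) / hist.sum()
            best_t, best_h = 0, -np.inf
            for t in range(1, n_bins):
                p0, p1 = p[:t].sum(), p[t:].sum()
                if p0 == 0 or p1 == 0:
                    continue
                q0, q1 = p[:t] / p0, p[t:] / p1
                h = -(q0[q0 > 0] * np.log(q0[q0 > 0])).sum() \
                    - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum()
                if h > best_h:
                    best_t, best_h = t, h
            return best_t

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            img = np.concatenate([rng.normal(60, 8, 4000), rng.normal(180, 12, 1000)]).clip(0, 255)
            print("entropy-maximizing threshold:", max_entropy_threshold(img))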

  3. Segmentation of retinal blood vessels using a novel clustering algorithm (RACAL) with a partial supervision strategy.

    PubMed

    Salem, Sameh A; Salem, Nancy M; Nandi, Asoke K

    2007-03-01

    In this paper, segmentation of blood vessels from colour retinal images using a novel clustering algorithm with a partial supervision strategy is proposed. The proposed clustering algorithm, which is a RAdius based Clustering ALgorithm (RACAL), uses a distance based principle to map the distributions of the data by utilising the premise that clusters are determined by a distance parameter, without having to specify the number of clusters. Additionally, the proposed clustering algorithm is enhanced with a partial supervision strategy and it is demonstrated that it is able to segment blood vessels of small diameters and low contrasts. Results are compared with those from the KNN classifier and show that the proposed RACAL performs better than the KNN in case of abnormal images as it succeeds in segmenting small and low contrast blood vessels, while it achieves comparable results for normal images. For automation process, RACAL can be used as a classifier and results show that it performs better than the KNN classifier in both normal and abnormal images.

  4. Hepatic Arterial Configuration in Relation to the Segmental Anatomy of the Liver; Observations on MDCT and DSA Relevant to Radioembolization Treatment

    SciTech Connect

    Hoven, Andor F. van den; Leeuwen, Maarten S. van; Lam, Marnix G. E. H.; Bosch, Maurice A. A. J. van den

    2015-02-15

    Purpose: Current anatomical classifications do not include all variants relevant for radioembolization (RE). The purpose of this study was to assess the individual hepatic arterial configuration and segmental vascularization pattern and to develop an individualized RE treatment strategy based on an extended classification. Methods: The hepatic vascular anatomy was assessed on MDCT and DSA in patients who received a workup for RE between February 2009 and November 2012. Reconstructed MDCT studies were assessed to determine the hepatic arterial configuration (origin of every hepatic arterial branch, branching pattern and anatomical course) and the hepatic segmental vascularization territory of all branches. Aberrant hepatic arteries were defined as hepatic arterial branches that did not originate from the celiac axis/CHA/PHA. Early branching patterns were defined as hepatic arterial branches originating from the celiac axis/CHA. Results: The hepatic arterial configuration and segmental vascularization pattern could be assessed in 110 of 133 patients. In 59 patients (54 %), no aberrant hepatic arteries or early branching was observed. Fourteen patients without aberrant hepatic arteries (13 %) had an early branching pattern. In the 37 patients (34 %) with aberrant hepatic arteries, five also had an early branching pattern. Sixteen different hepatic arterial segmental vascularization patterns were identified and described, differing by the presence of aberrant hepatic arteries, their respective vascular territory, and origin of the artery vascularizing segment four. Conclusions: The hepatic arterial configuration and segmental vascularization pattern show marked individual variability beyond well-known classifications of anatomical variants. We developed an individualized RE treatment strategy based on an extended anatomical classification.

  5. The cascaded moving k-means and fuzzy c-means clustering algorithms for unsupervised segmentation of malaria images

    NASA Astrophysics Data System (ADS)

    Abdul-Nasir, Aimi Salihah; Mashor, Mohd Yusoff; Halim, Nurul Hazwani Abd; Mohamed, Zeehaida

    2015-05-01

    Malaria is a life-threatening parasitic infectious disease that accounts for nearly one million deaths each year. Because prompt and accurate diagnosis of malaria is required, the current study proposes an unsupervised pixel segmentation approach based on clustering algorithms in order to obtain fully segmented red blood cells (RBCs) infected with malaria parasites from thin blood smear images of the P. vivax species. To obtain the segmented infected cells, the malaria images are first enhanced using a modified global contrast stretching technique. Then, an unsupervised segmentation technique based on clustering is applied to the intensity component of the malaria image in order to segment the infected cells from the blood cell background. In this study, cascaded moving k-means (MKM) and fuzzy c-means (FCM) clustering algorithms are proposed for malaria slide image segmentation. After that, a median filter is applied to smooth the image and to remove small unwanted background regions. Finally, a seeded region growing area extraction algorithm is applied to remove the large unwanted regions that still appear in the image and are too large to be removed by the median filter. The effectiveness of the proposed cascaded MKM and FCM clustering algorithms has been analyzed qualitatively and quantitatively by comparing the cascaded clustering algorithm with the MKM and FCM clustering algorithms alone. Overall, the results indicate that segmentation using the proposed cascaded clustering algorithm produces the best segmentation performance, achieving acceptable sensitivity as well as high specificity and accuracy compared to the results provided by the MKM and FCM algorithms.
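    A minimal sketch of the cascaded idea on pixel intensities, assuming scikit-learn and NumPy are available: standard k-means (as a stand-in for the paper's moving k-means variant) provides initial cluster centres, which a few hand-rolled fuzzy c-means iterations then refine. The pre- and post-processing steps (contrast stretching, median filtering, region growing) are omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def cascaded_kmeans_fcm(intensity, n_clusters=3, m=2.0, n_iter=20):
    """Cluster pixel intensities: k-means initializes the centres,
    fuzzy c-means refines them. Simplified illustration of the
    cascaded clustering idea, not the authors' exact algorithm."""
    x = intensity.reshape(-1, 1).astype(float)
    centers = KMeans(n_clusters=n_clusters, n_init=10).fit(x).cluster_centers_
    for _ in range(n_iter):
        d = np.abs(x - centers.T) + 1e-12                 # (N, c) distances
        u = 1.0 / (d ** (2.0 / (m - 1.0)))                # fuzzy memberships
        u /= u.sum(axis=1, keepdims=True)
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]    # centre update
    labels = np.argmax(u, axis=1).reshape(intensity.shape)
    return labels, centers
```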

  6. A hybrid flower pollination algorithm based modified randomized location for multi-threshold medical image segmentation.

    PubMed

    Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou

    2015-01-01

    Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching the optimal thresholds to optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself to be robust and effective through numerical experimental results including Otsu's objective values and standard deviations.
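    For context, the quantity such metaheuristics maximize is Otsu's between-class variance evaluated at a candidate threshold vector. The sketch below implements that objective and, purely for illustration, pairs it with a crude random search in place of the flower pollination algorithm described in the paper.

```python
import numpy as np

def otsu_objective(hist, thresholds):
    """Between-class variance of the histogram partitioned by the
    candidate thresholds; this is the objective to be maximized."""
    p = hist.astype(float) / hist.sum()
    levels = np.arange(len(hist))
    edges = [0] + sorted(int(t) for t in thresholds) + [len(hist)]
    total_mean = (p * levels).sum()
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - total_mean) ** 2
    return var

def random_search(hist, k=3, trials=5000, seed=0):
    """Crude random search standing in for the metaheuristic
    (illustration only)."""
    rng = np.random.default_rng(seed)
    best, best_val = None, -np.inf
    for _ in range(trials):
        cand = rng.choice(np.arange(1, len(hist) - 1), size=k, replace=False)
        val = otsu_objective(hist, cand)
        if val > best_val:
            best, best_val = np.sort(cand), val
    return best
```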

  7. An efficient technique for nuclei segmentation based on ellipse descriptor analysis and improved seed detection algorithm.

    PubMed

    Xu, Hongming; Lu, Cheng; Mandal, Mrinal

    2014-09-01

    In this paper, we propose an efficient method for segmenting cell nuclei in the skin histopathological images. The proposed technique consists of four modules. First, it separates the nuclei regions from the background with an adaptive threshold technique. Next, an elliptical descriptor is used to detect the isolated nuclei with elliptical shapes. This descriptor classifies the nuclei regions based on two ellipticity parameters. Nuclei clumps and nuclei with irregular shapes are then localized by an improved seed detection technique based on voting in the eroded nuclei regions. Finally, undivided nuclei regions are segmented by a marked watershed algorithm. Experimental results on 114 different image patches indicate that the proposed technique provides a superior performance in nuclei detection and segmentation.
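    The final stage of such a pipeline, a marker-controlled watershed on a thresholded nuclei mask, can be sketched with scikit-image as follows. Distance-transform peaks stand in for the paper's voting-based seed detection, the ellipse-descriptor step is omitted, and the block-size and distance parameters are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_local
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_nuclei(gray):
    """Adaptive threshold + distance-transform seeds + marker-controlled
    watershed on a grayscale patch with dark nuclei on a bright
    background. Generic sketch, not the authors' pipeline."""
    mask = gray < threshold_local(gray, block_size=51)
    dist = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(dist, min_distance=10, labels=mask)
    markers = np.zeros(gray.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-dist, markers, mask=mask)
```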

  8. Automatic brain tumor segmentation with a fast Mumford-Shah algorithm

    NASA Astrophysics Data System (ADS)

    Müller, Sabine; Weickert, Joachim; Graf, Norbert

    2016-03-01

    We propose a fully-automatic method for brain tumor segmentation that does not require any training phase. Our approach is based on a sequence of segmentations using the Mumford-Shah cartoon model with varying parameters. In order to come up with a very fast implementation, we extend the recent primal-dual algorithm of Strekalovskiy et al. (2014) from the 2D to the medically relevant 3D setting. Moreover, we suggest a new confidence refinement and show that it can increase the precision of our segmentations substantially. Our method is evaluated on 188 data sets with high-grade gliomas and 25 with low-grade gliomas from the BraTS14 database. Within a computation time of only three minutes, we achieve Dice scores that are comparable to state-of-the-art methods.
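    The evaluation metric mentioned above, the Dice score between a computed segmentation and a reference labelling, can be computed with a few lines of NumPy (this is only the metric, not the Mumford-Shah method itself):

```python
import numpy as np

def dice(seg, gt):
    """Dice similarity coefficient between two binary masks."""
    seg, gt = np.asarray(seg, bool), np.asarray(gt, bool)
    inter = np.logical_and(seg, gt).sum()
    return 2.0 * inter / (seg.sum() + gt.sum())
```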

  9. Application of Micro-segmentation Algorithms to the Healthcare Market:A Case Study

    SciTech Connect

    Sukumar, Sreenivas R; Aline, Frank

    2013-01-01

    We draw inspiration from the recent success of loyalty programs and targeted, personalized marketing campaigns of retail companies such as Kroger and Netflix to understand beneficiary behaviors in the healthcare system. We posit that we can emulate the financial success these companies have achieved, which rests on better understanding and predicting customer behaviors, and translate such success to healthcare operations. Towards that goal, we survey current practices in market micro-segmentation research and analyze health insurance claims data using those algorithms. We present results and insights from micro-segmentation of the beneficiaries using different techniques and discuss how the interpretation can assist with matching cost-effective insurance payment models to the beneficiary micro-segments.

  10. [Automatic segmentation of clustered breast cancer cells based on modified watershed algorithm and concavity points searching].

    PubMed

    Tong, Zhen; Pu, Lixin; Dong, Fangjie

    2013-08-01

    As a common malignant tumor, breast cancer has seriously affected women's physical and psychological health and has even threatened their lives, and its incidence is gradually rising in some parts of the world. As a common pathological aid to diagnosis, the immunohistochemical technique plays an important role in the diagnosis of breast cancer. Pathologists usually isolate positive cells from specimens stained by the immunohistochemical technique and calculate the ratio of positive cells, which is a core indicator in the diagnosis of breast cancer. In this paper, we present a new algorithm, based on a modified watershed algorithm and concavity point searching, that identifies the positive cells, segments the clustered cells automatically, and then realizes automatic counting. Comparison of our experimental results with those of other methods shows that our method can accurately segment the clustered cells without losing any geometrical cell features and gives the exact number of separated cells.

  11. Research on algorithm about content-based segmentation and spatial transformation for stereo panorama

    NASA Astrophysics Data System (ADS)

    Li, Zili; Xia, Xuezhi; Zhu, Guangxi; Zhu, Yaoting

    2004-03-01

    The principle of constructing a G&IBMR virtual scene from a stereo panorama with binocular stereovision is put forward. Closed cubic B-splines are used for content-based segmentation of virtual objects in the stereo panorama, and all objects in the current viewing frustum are ordered in a current object linked list (COLL) by their depth information. A formula is derived to calculate the depth of a point in the virtual scene from its parallax, based on a parallel binocular vision model. A bilinear interpolation algorithm is proposed to deform the segmentation template and splice images between three key positions. We also use the positional and directional transformation of the binocular virtual camera bound to the user avatar to drive the transformation of the stereo panorama, so as to achieve real-time consistency of perspective relationships and image masking. The experimental results show that the algorithm presented in this paper is effective and feasible.
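    For a parallel binocular model, the textbook relation between parallax (disparity) and depth is Z = f·B/d, with focal length f, baseline B and disparity d. The paper derives its own formula, so the sketch below shows only this standard form for illustration.

```python
def depth_from_disparity(disparity_px, focal_px, baseline):
    """Standard parallel binocular relation Z = f * B / d.
    Units: disparity and focal length in pixels, baseline in scene
    units; illustrative, not the paper's derived formula."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline / disparity_px
```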

  12. A Segmentation Algorithm for X-ray 3D Angiography and Vessel Catheterization

    SciTech Connect

    Franchi, Danilo; Rosa, Luigi; Placidi, Giuseppe

    2008-11-06

    Vessel catheterization is a clinical procedure usually performed by a specialist under X-ray fluoroscopic guidance with contrast media. In the present paper, we present a simple and efficient algorithm for vessel segmentation that separates and extracts vessels from the background (noise and signal coming from other organs). This would reduce the number of projections (X-ray scans) needed to reconstruct a complete and accurate 3D vascular model, and hence the radiological risk, in particular for the patient. In what follows, the algorithm is described and some preliminary experimental results illustrating the behaviour of the proposed method are reported.

  13. US-Cut: interactive algorithm for rapid detection and segmentation of liver tumors in ultrasound acquisitions

    NASA Astrophysics Data System (ADS)

    Egger, Jan; Voglreiter, Philip; Dokter, Mark; Hofmann, Michael; Chen, Xiaojun; Zoller, Wolfram G.; Schmalstieg, Dieter; Hann, Alexander

    2016-04-01

    Ultrasound (US) is the most commonly used liver imaging modality worldwide. It plays an important role in the follow-up of cancer patients with liver metastases. We present an interactive segmentation approach for liver tumors in US acquisitions. Due to the low image quality and the low contrast between the tumors and the surrounding tissue in US images, the segmentation is very challenging. Thus, clinical practice still relies on manual measurement and outlining of the tumors in the US images. We target this problem by applying an interactive segmentation algorithm to the US data, allowing the user to get real-time feedback of the segmentation results. The algorithm has been developed and tested hand-in-hand by physicians and computer scientists to make sure a future practical usage in a clinical setting is feasible. To cover typical acquisitions from the clinical routine, the approach has been evaluated with dozens of datasets where the tumors are hyperechoic (brighter), hypoechoic (darker) or isoechoic (similar) in comparison to the surrounding liver tissue. Due to the interactive real-time behavior of the approach, it was possible even in difficult cases to find satisfying segmentations of the tumors within seconds and without parameter settings, and the average tumor deviation was only 1.4 mm compared with manual measurements. Moreover, the long-term goal is to ease the volumetric acquisition of liver tumors in order to evaluate treatment response. An additional aim is the registration of intraoperative US images via the interactive segmentations to the patient's pre-interventional CT acquisitions.

  14. Alignment, segmentation and 3-D reconstruction of serial sections based on automated algorithm

    NASA Astrophysics Data System (ADS)

    Bian, Weiguo; Tang, Shaojie; Xu, Qiong; Lian, Qin; Wang, Jin; Li, Dichen

    2012-12-01

    A well-defined three-dimensional (3-D) reconstruction of bone-cartilage transitional structures is crucial for osteochondral restoration. This paper presents an accurate, computationally efficient and fully automated algorithm for the alignment and segmentation of two-dimensional (2-D) serial sections to construct a 3-D model of bone-cartilage transitional structures. The entire system includes the following five components: (1) image harvest, (2) image registration, (3) image segmentation, (4) 3-D reconstruction and visualization, and (5) evaluation. A computer program was developed in the Matlab environment for the automatic alignment and segmentation of serial sections. The automatic alignment algorithm is based on cross-correlation of the positions of anatomical feature points in two sequential sections. A method combining automatic segmentation and image thresholding was applied to capture the regions and structures of interest. SEM micrographs and a 3-D model reconstructed directly in a digital microscope were used to evaluate the reliability and accuracy of this strategy. The morphology of the 3-D model constructed from serial sections is consistent with the SEM micrographs and the digital microscope's 3-D model.
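    A translation-only alignment of two consecutive sections can be sketched with scikit-image's phase cross-correlation, as below. The paper's method correlates the positions of anatomical feature points rather than whole images, so this is an illustrative simplification.

```python
from scipy import ndimage as ndi
from skimage.registration import phase_cross_correlation

def align_pair(reference, moving):
    """Estimate the translation between two consecutive sections by
    cross-correlation and resample the moving section onto the
    reference grid. Translation-only sketch for illustration."""
    shift, error, _ = phase_cross_correlation(reference, moving)
    return ndi.shift(moving, shift), shift
```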

  15. Therapy Operating Characteristic (TOC) Curves and their Application to the Evaluation of Segmentation Algorithms.

    PubMed

    Barrett, Harrison H; Wilson, Donald W; Kupinski, Matthew A; Aguwa, Kasarachi; Ewell, Lars; Hunter, Robert; Müller, Stefan

    2010-01-01

    This paper presents a general framework for assessing imaging systems and image-analysis methods on the basis of therapeutic rather than diagnostic efficacy. By analogy to receiver operating characteristic (ROC) curves, it utilizes the Therapy Operating Characteristic or TOC curve, which is a plot of the probability of tumor control vs. the probability of normal-tissue complications as the overall level of a radiotherapy treatment beam is varied. The proposed figure of merit is the area under the TOC, denoted AUTOC. If the treatment planning algorithm is held constant, AUTOC is a metric for the imaging and image-analysis components, and in particular for segmentation algorithms that are used to delineate tumors and normal tissues. On the other hand, for a given set of segmented images, AUTOC can also be used as a metric for the treatment plan itself. A general mathematical theory of TOC and AUTOC is presented and then specialized to segmentation problems. Practical approaches to implementation of the theory in both simulation and clinical studies are presented. The method is illustrated with a brief study of segmentation methods for prostate cancer.

  16. Evolutionary algorithms with segment-based search for multiobjective optimization problems.

    PubMed

    Li, Miqing; Yang, Shengxiang; Li, Ke; Liu, Xiaohui

    2014-08-01

    This paper proposes a variation operator, called segment-based search (SBS), to improve the performance of evolutionary algorithms on continuous multiobjective optimization problems. SBS divides the search space into many small segments according to the evolutionary information feedback from the set of current optimal solutions. Two operations, micro-jumping and macro-jumping, are implemented upon these segments in order to guide an efficient information exchange among "good" individuals. Moreover, the running of SBS is adaptive according to the current evolutionary status. SBS is activated only when the population evolves slowly, depending on general genetic operators (e.g., mutation and crossover). A comprehensive set of 36 test problems is employed for experimental verification. The influence of two algorithm settings (i.e., the dimensionality and boundary relaxation strategy) and two probability parameters in SBS (i.e., the SBS rate and micro-jumping proportion) are investigated in detail. Moreover, an empirical comparative study with three representative variation operators is carried out. Experimental results show that the incorporation of SBS into the optimization process can improve the performance of evolutionary algorithms for multiobjective optimization problems.

  17. Statistical Learning Algorithm for In-situ and Invasive Breast Carcinoma Segmentation

    PubMed Central

    Jayender, Jagadeesan; Gombos, Eva; Chikarmane, Sona; Dabydeen, Donnette; Jolesz, Ferenc A.; Vosburgh, Kirby G.

    2013-01-01

    DCE-MRI has proven to be a highly sensitive imaging modality in diagnosing breast cancers. However, analyzing the DCE-MRI is time-consuming and prone to errors due to the large volume of data. Mathematical models to quantify contrast perfusion, such as the Black Box methods and Pharmacokinetic analysis, are inaccurate, sensitive to noise and depend on a large number of external factors such as imaging parameters, patient physiology, arterial input function, fitting algorithms, etc., leading to inaccurate diagnosis. In this paper, we have developed a novel Statistical Learning Algorithm for Tumor Segmentation (SLATS) based on Hidden Markov Models to auto-segment regions of angiogenesis, corresponding to tumor. The SLATS algorithm has been trained to identify voxels belonging to the tumor class using the time-intensity curve, the first and second derivatives of the intensity curves (“velocity” and “acceleration” respectively) and a composite vector consisting of a concatenation of the intensity, velocity and acceleration vectors. The results of SLATS trained for the four vectors have been shown for 22 Invasive Ductal Carcinoma (IDC) and 19 Ductal Carcinoma In Situ (DCIS) cases. The SLATS trained for the velocity tuple shows the best performance in delineating the tumors when compared with the segmentation performed by an expert radiologist and the output of a commercially available software package, CADstream. PMID:23693000
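    Constructing the four feature vectors described above from voxel time-intensity curves is straightforward with NumPy; the sketch below assumes the curves are sampled on a uniform time grid and leaves out the Hidden Markov Model classifier itself.

```python
import numpy as np

def slats_features(curves, dt=1.0):
    """Build the four feature sets from DCE-MRI time-intensity curves
    of shape (n_voxels, n_timepoints): intensity, velocity (1st
    derivative), acceleration (2nd derivative), and their
    concatenation. Feature construction only; illustrative sketch."""
    intensity = np.asarray(curves, dtype=float)
    velocity = np.gradient(intensity, dt, axis=1)
    acceleration = np.gradient(velocity, dt, axis=1)
    composite = np.concatenate([intensity, velocity, acceleration], axis=1)
    return intensity, velocity, acceleration, composite
```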

  18. A de-noising algorithm to improve SNR of segmented gamma scanner for spectrum analysis

    NASA Astrophysics Data System (ADS)

    Li, Huailiang; Tuo, Xianguo; Shi, Rui; Zhang, Jinzhao; Henderson, Mark Julian; Courtois, Jérémie; Yan, Minhao

    2016-05-01

    An improved threshold shift-invariant wavelet transform de-noising algorithm for high-resolution gamma-ray spectroscopy is proposed to optimize the threshold function of wavelet transforms and to suppress pseudo-Gibbs artificial fluctuations in the signal. The algorithm was applied to a segmented gamma scanning system with large samples, in which high continuum levels caused by Compton scattering are routinely encountered. De-noising of gamma-ray spectra measured by the segmented gamma scanning system was evaluated for the improved, shift-invariant and traditional wavelet transform algorithms. The improved wavelet transform method significantly enhanced the figure of merit, the root mean square error, the peak area, and the sample attenuation correction in the segmented gamma scanning system assays. Spectrum analysis also showed that the gamma energy spectrum can be viewed as a superposition of a low-frequency signal and high-frequency noise. Moreover, a smoothed spectrum can be appropriate for straightforward automated quantitative analysis.
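    A baseline wavelet de-noising of a one-dimensional spectrum, using PyWavelets with a universal soft threshold, is sketched below. The paper's contribution, an improved threshold function combined with a shift-invariant transform, is not reproduced; the wavelet choice and decomposition level are illustrative assumptions.

```python
import numpy as np
import pywt

def wavelet_denoise(spectrum, wavelet="db4", level=4):
    """Soft-threshold wavelet de-noising of a 1-D gamma-ray spectrum
    with a universal threshold. Baseline sketch only."""
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(spectrum)))      # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(spectrum)]
```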

  19. An image segmentation based on a genetic algorithm for determining soil coverage by crop residues.

    PubMed

    Ribeiro, Angela; Ranz, Juan; Burgos-Artizzu, Xavier P; Pajares, Gonzalo; del Arco, Maria J Sanchez; Navarrete, Luis

    2011-01-01

    Determination of the soil coverage by crop residues after ploughing is a fundamental element of Conservation Agriculture. This paper presents the application of genetic algorithms employed during the fine tuning of the segmentation process of a digital image with the aim of automatically quantifying the residue coverage. In other words, the objective is to achieve a segmentation that would permit the discrimination of the texture of the residue so that the output of the segmentation process is a binary image in which residue zones are isolated from the rest. The RGB images used come from a sample of images in which sections of terrain were photographed with a conventional camera positioned in zenith orientation atop a tripod. The images were taken outdoors under uncontrolled lighting conditions. Up to 92% similarity was achieved between the images obtained by the segmentation process proposed in this paper and the templates made by an elaborate manual tracing process. In addition to the proposed segmentation procedure and the fine tuning procedure that was developed, a global quantification of the soil coverage by residues for the sampled area was achieved that differed by only 0.85% from the quantification obtained using template images. Moreover, the proposed method does not depend on the type of residue present in the image. The study was conducted at the experimental farm "El Encín" in Alcalá de Henares (Madrid, Spain).

  20. Automatic segmentation of the liver using multi-planar anatomy and deformable surface model in abdominal contrast-enhanced CT images

    NASA Astrophysics Data System (ADS)

    Jang, Yujin; Hong, Helen; Chung, Jin Wook; Yoon, Young Ho

    2012-02-01

    We propose an effective technique for the extraction of the liver boundary based on multi-planar anatomy and a deformable surface model in abdominal contrast-enhanced CT images. Our method is composed of four main steps. First, to extract an optimal volume circumscribing the liver, the lower and side boundaries are defined by positional information of the pelvis and ribs, and an upper boundary is defined by separating the lungs and heart from the CT images. Second, to extract an initial liver volume, the optimal liver volume is smoothed by anisotropic diffusion filtering and segmented using an adaptively selected threshold value. Third, to remove neighboring organs from the initial liver volume, morphological opening and connected component labeling are applied to multiple planes. Finally, to refine the liver boundaries, a deformable surface model is applied to the posterior liver surface and to the left lobe missed in the previous step. Then, a probability summation map is generated by calculating regional information of the segmented liver in the coronal plane, which is used for restoring inaccurate liver boundaries. Experimental results show that our segmentation method can accurately extract liver boundaries without leakage into neighboring organs in spite of varied liver shapes and ambiguous boundaries.

  1. Automatic segmentation algorithm for the extraction of lumen region and boundary from endoscopic images.

    PubMed

    Tian, H; Srikanthan, T; Vijayan Asari, K

    2001-01-01

    A new segmentation algorithm for lumen region detection and boundary extraction from gastro-intestinal (GI) images is presented. The proposed algorithm consists of two steps. First, a preliminary region of interest (ROI) representing the GI lumen is segmented by an adaptive progressive thresholding (APT) technique. Then, an adaptive filter, the Iris filter, is applied to the ROI to determine the actual region. It has been observed that the combined APT-Iris filter technique can enhance and detect unclear boundaries in the lumen region of GI images and thus produces a more accurate lumen region than existing techniques. Experiments were carried out to determine the maximum error of the extracted boundary with respect to an expert-annotated boundary. Investigations show that, based on the experimental results obtained from 50 endoscopic images, the maximum error is reduced by up to 72 pixels for a 256 x 256 image representation compared with other existing techniques. In addition, a new boundary extraction algorithm, based on a heuristic search over neighbouring pixels, is employed to obtain a connected, single-pixel-width outer boundary using two preferential sequence windows. Experimental results are also presented to justify the effectiveness of the proposed algorithm.

  2. Fully Automated Complementary DNA Microarray Segmentation using a Novel Fuzzy-based Algorithm.

    PubMed

    Saberkari, Hamidreza; Bahrami, Sheyda; Shamsi, Mousa; Amoshahy, Mohammad Javad; Ghavifekr, Habib Badri; Sedaaghi, Mohammad Hossein

    2015-01-01

    DNA microarrays are a powerful approach to studying, simultaneously, the expression of thousands of genes in a single experiment. The average value of the fluorescent intensity can be calculated in a microarray experiment, and the calculated intensity values closely reflect the expression level of a particular gene. However, determining the appropriate position of every spot in microarray images is a main challenge, which leads to the accurate classification of normal and abnormal (cancer) cells. In this paper, first a preprocessing approach is performed to eliminate the noise and artifacts present in microarray cells using the nonlinear anisotropic diffusion filtering method. Then, the coordinate center of each spot is positioned utilizing mathematical morphology operations. Finally, the position of each spot is exactly determined through applying a novel hybrid model based on principal component analysis and the spatial fuzzy c-means clustering (SFCM) algorithm. Using a Gaussian kernel in the SFCM algorithm improves the quality of complementary DNA microarray segmentation. The performance of the proposed algorithm has been evaluated on real microarray images available in the Stanford Microarray Database. Results illustrate that the accuracy of microarray cell segmentation with the proposed algorithm reaches 100% and 98% for noiseless and noisy cells, respectively.

  3. Fully Automated Complementary DNA Microarray Segmentation using a Novel Fuzzy-based Algorithm

    PubMed Central

    Saberkari, Hamidreza; Bahrami, Sheyda; Shamsi, Mousa; Amoshahy, Mohammad Javad; Ghavifekr, Habib Badri; Sedaaghi, Mohammad Hossein

    2015-01-01

    DNA microarrays are a powerful approach to studying, simultaneously, the expression of thousands of genes in a single experiment. The average value of the fluorescent intensity can be calculated in a microarray experiment, and the calculated intensity values closely reflect the expression level of a particular gene. However, determining the appropriate position of every spot in microarray images is a main challenge, which leads to the accurate classification of normal and abnormal (cancer) cells. In this paper, first a preprocessing approach is performed to eliminate the noise and artifacts present in microarray cells using the nonlinear anisotropic diffusion filtering method. Then, the coordinate center of each spot is positioned utilizing mathematical morphology operations. Finally, the position of each spot is exactly determined through applying a novel hybrid model based on principal component analysis and the spatial fuzzy c-means clustering (SFCM) algorithm. Using a Gaussian kernel in the SFCM algorithm improves the quality of complementary DNA microarray segmentation. The performance of the proposed algorithm has been evaluated on real microarray images available in the Stanford Microarray Database. Results illustrate that the accuracy of microarray cell segmentation with the proposed algorithm reaches 100% and 98% for noiseless and noisy cells, respectively. PMID:26284175

  4. A novel supervised trajectory segmentation algorithm identifies distinct types of human adenovirus motion in host cells.

    PubMed

    Helmuth, Jo A; Burckhardt, Christoph J; Koumoutsakos, Petros; Greber, Urs F; Sbalzarini, Ivo F

    2007-09-01

    Biological trajectories can be characterized by transient patterns that may provide insight into the interactions of the moving object with its immediate environment. The accurate and automated identification of trajectory motifs is important for the understanding of the underlying mechanisms. In this work, we develop a novel trajectory segmentation algorithm based on supervised support vector classification. The algorithm is validated on synthetic data and applied to the identification of trajectory fingerprints of fluorescently tagged human adenovirus particles in live cells. In virus trajectories on the cell surface, periods of confined motion, slow drift, and fast drift are efficiently detected. Additionally, directed motion is found for viruses in the cytoplasm. The algorithm enables the linking of microscopic observations to molecular phenomena that are critical in many biological processes, including infectious pathogen entry and signal transduction.

  5. A contiguity-enhanced k-means clustering algorithm for unsupervised multispectral image segmentation

    SciTech Connect

    Theiler, J.; Gisler, G.

    1997-07-01

    The recent and continuing construction of multi- and hyperspectral imagers will provide detailed data cubes with information in both the spatial and spectral domains. These data show great promise for remote sensing applications ranging from environmental and agricultural to national security interests. The reduction of this voluminous data to useful intermediate forms is necessary both for downlinking all those bits and for interpreting them. Smart onboard hardware is required, as well as sophisticated earth-bound processing. A segmented image (in which the multispectral data in each pixel is classified into one of a small number of categories) is one kind of intermediate form which provides some measure of data compression. Traditional image segmentation algorithms treat pixels independently and cluster the pixels according only to their spectral information. This neglects the implicit spatial information that is available in the image. We suggest a simple approach: a variant of the standard k-means algorithm which uses both spatial and spectral properties of the image. The segmented image has the property that pixels which are spatially contiguous are more likely to be in the same class than are random pairs of pixels. This property naturally comes at some cost in terms of the compactness of the clusters in the spectral domain, but we have found that the spatial contiguity and spectral compactness properties are nearly orthogonal, which means that we can make considerable improvements in the one with minimal loss in the other.
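    One simple way to realize this idea, sketched below with scikit-learn, is to append scaled pixel coordinates to each normalized spectrum before running k-means, so spatially contiguous pixels tend to fall in the same cluster. The weight beta and the cluster count are illustrative assumptions, and the paper's exact variant may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def spatial_kmeans(cube, k=8, beta=0.1):
    """Cluster a (rows, cols, bands) image cube with k-means on
    spectra augmented by scaled pixel coordinates; beta trades
    spectral compactness against spatial contiguity. Generic
    illustration of the spatial-spectral clustering idea."""
    r, c, b = cube.shape
    yy, xx = np.mgrid[0:r, 0:c]
    coords = np.stack([yy, xx], axis=-1).reshape(-1, 2).astype(float)
    spectra = cube.reshape(-1, b).astype(float)
    spectra = (spectra - spectra.mean(0)) / (spectra.std(0) + 1e-12)
    coords = beta * coords / max(r, c)
    features = np.hstack([spectra, coords])
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(features)
    return labels.reshape(r, c)
```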

  6. Thoracic cavity segmentation algorithm using multiorgan extraction and surface fitting in volumetric CT

    SciTech Connect

    Bae, JangPyo; Kim, Namkug Lee, Sang Min; Seo, Joon Beom; Kim, Hee Chan

    2014-04-15

    Purpose: To develop and validate a semiautomatic segmentation method for thoracic cavity volumetry and mediastinum fat quantification of patients with chronic obstructive pulmonary disease. Methods: The thoracic cavity region was separated by segmenting multiorgans, namely, the rib, lung, heart, and diaphragm. To encompass various lung disease-induced variations, the inner thoracic wall and diaphragm were modeled by using a three-dimensional surface-fitting method. To improve the accuracy of the diaphragm surface model, the heart and its surrounding tissue were segmented by a two-stage level set method using a shape prior. To assess the accuracy of the proposed algorithm, the algorithm results of 50 patients were compared to the manual segmentation results of two experts with more than 5 years of experience (these manual results were confirmed by an expert thoracic radiologist). The proposed method was also compared to three state-of-the-art segmentation methods. The metrics used to evaluate segmentation accuracy were volumetric overlap ratio (VOR), false positive ratio on VOR (FPRV), false negative ratio on VOR (FNRV), average symmetric absolute surface distance (ASASD), average symmetric squared surface distance (ASSSD), and maximum symmetric surface distance (MSSD). Results: In terms of thoracic cavity volumetry, the mean ± SD VOR, FPRV, and FNRV of the proposed method were (98.17 ± 0.84)%, (0.49 ± 0.23)%, and (1.34 ± 0.83)%, respectively. The ASASD, ASSSD, and MSSD for the thoracic wall were 0.28 ± 0.12, 1.28 ± 0.53, and 23.91 ± 7.64 mm, respectively. The ASASD, ASSSD, and MSSD for the diaphragm surface were 1.73 ± 0.91, 3.92 ± 1.68, and 27.80 ± 10.63 mm, respectively. The proposed method performed significantly better than the other three methods in terms of VOR, ASASD, and ASSSD. Conclusions: The proposed semiautomatic thoracic cavity segmentation method, which extracts multiple organs (namely, the rib, thoracic wall, diaphragm, and heart

  7. Comparative Local Quality Assessment of 3D Medical Image Segmentations with Focus on Statistical Shape Model-Based Algorithms.

    PubMed

    Landesberger, Tatiana von; Basgier, Dennis; Becker, Meike

    2016-12-01

    The quality of automatic 3D medical segmentation algorithms needs to be assessed on test datasets comprising several 3D images (i.e., instances of an organ). The experts need to compare the segmentation quality across the dataset in order to detect systematic segmentation problems. However, such comparative evaluation is not well supported by current methods. We present a novel system for assessing and comparing segmentation quality in a dataset with multiple 3D images. The data is analyzed and visualized in several views. We detect and show regions with systematic segmentation quality characteristics. For this purpose, we extended a hierarchical clustering algorithm with a connectivity criterion. We combine quality values across the dataset for determining regions with characteristic segmentation quality across instances. Using our system, the experts can also identify 3D segmentations with extraordinary quality characteristics. While we focus on algorithms based on statistical shape models, our approach can also be applied to cases where landmark correspondences among instances can be established. We applied our approach to three real datasets: liver, cochlea and facial nerve. The segmentation experts were able to identify organ regions with systematic segmentation characteristics as well as to detect outlier instances.

  8. Evaluation of automatic neonatal brain segmentation algorithms: the NeoBrainS12 challenge.

    PubMed

    Išgum, Ivana; Benders, Manon J N L; Avants, Brian; Cardoso, M Jorge; Counsell, Serena J; Gomez, Elda Fischi; Gui, Laura; Hűppi, Petra S; Kersbergen, Karina J; Makropoulos, Antonios; Melbourne, Andrew; Moeskops, Pim; Mol, Christian P; Kuklisova-Murgasova, Maria; Rueckert, Daniel; Schnabel, Julia A; Srhoj-Egekher, Vedran; Wu, Jue; Wang, Siying; de Vries, Linda S; Viergever, Max A

    2015-02-01

    A number of algorithms for brain segmentation in preterm born infants have been published, but a reliable comparison of their performance is lacking. The NeoBrainS12 study (http://neobrains12.isi.uu.nl), providing three different image sets of preterm born infants, was set up to provide such a comparison. These sets are (i) axial scans acquired at 40 weeks corrected age, (ii) coronal scans acquired at 30 weeks corrected age and (iii) coronal scans acquired at 40 weeks corrected age. Each of these three sets consists of three T1- and T2-weighted MR images of the brain acquired with a 3T MRI scanner. The task was to segment cortical grey matter, non-myelinated and myelinated white matter, brainstem, basal ganglia and thalami, cerebellum, and cerebrospinal fluid in the ventricles and in the extracerebral space separately. Any team could upload the results and all segmentations were evaluated in the same way. This paper presents the results of eight participating teams. The results demonstrate that the participating methods were able to segment all tissue classes well, except myelinated white matter.

  9. Numerical arc segmentation algorithm for a radio conference: A software tool for communication satellite systems planning

    NASA Technical Reports Server (NTRS)

    Whyte, W. A.; Heyward, A. O.; Ponchak, D. S.; Spence, R. L.; Zuzek, J. E.

    1988-01-01

    The Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) provides a method of generating predetermined arc segments for use in the development of an allotment planning procedure to be carried out at the 1988 World Administrative Radio Conference (WARC) on the Use of the Geostationary Satellite Orbit and the Planning of Space Services Utilizing It. Through careful selection of the predetermined arc (PDA) for each administration, flexibility can be increased in terms of choice of system technical characteristics and specific orbit location while reducing the need for coordination among administrations. The NASARC software determines pairwise compatibility between all possible service areas at discrete arc locations. NASARC then exhaustively enumerates groups of administrations whose satellites can be closely located in orbit, and finds the arc segment over which each such compatible group exists. From the set of all possible compatible groupings, groups and their associated arc segments are selected using a heuristic procedure such that a PDA is identified for each administration. Various aspects of the NASARC concept and how the software accomplishes specific features of allotment planning are discussed.

  10. Numerical arc segmentation algorithm for a radio conference: A software tool for communication satellite systems planning

    NASA Astrophysics Data System (ADS)

    Whyte, W. A.; Heyward, A. O.; Ponchak, D. S.; Spence, R. L.; Zuzek, J. E.

    The Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) provides a method of generating predetermined arc segments for use in the development of an allotment planning procedure to be carried out at the 1988 World Administrative Radio Conference (WARC) on the Use of the Geostationary Satellite Orbit and the Planning of Space Services Utilizing It. Through careful selection of the predetermined arc (PDA) for each administration, flexibility can be increased in terms of choice of system technical characteristics and specific orbit location while reducing the need for coordination among administrations. The NASARC software determines pairwise compatibility between all possible service areas at discrete arc locations. NASARC then exhaustively enumerates groups of administrations whose satellites can be closely located in orbit, and finds the arc segment over which each such compatible group exists. From the set of all possible compatible groupings, groups and their associated arc segments are selected using a heuristic procedure such that a PDA is identified for each administration. Various aspects of the NASARC concept and how the software accomplishes specific features of allotment planning are discussed.

  11. Digital Terrain from a Two-Step Segmentation and Outlier-Based Algorithm

    NASA Astrophysics Data System (ADS)

    Hingee, Kassel; Caccetta, Peter; Caccetta, Louis; Wu, Xiaoliang; Devereaux, Drew

    2016-06-01

    We present a novel ground filter for remotely sensed height data. Our filter has two phases: the first phase segments the DSM with a slope threshold and uses gradient direction to identify candidate ground segments; the second phase fits surfaces to the candidate ground points and removes outliers. Digital terrain is obtained by a surface fit to the final set of ground points. We tested the new algorithm on digital surface models (DSMs) for a 9600 km² region around Perth, Australia. This region contains a large mix of land uses (urban, grassland, native forest and plantation forest) and includes both a sandy coastal plain and a hillier region (elevations up to 0.5 km). The DSMs are captured annually at 0.2 m resolution using aerial stereo photography, resulting in 1.2 TB of input data per annum. Overall accuracy of the filter was estimated to be 89.6%, and on a small semi-rural subset our algorithm was found to have 40% fewer errors compared to Inpho's Match-T algorithm.

  12. A state-of-the-art review on segmentation algorithms in intravascular ultrasound (IVUS) images.

    PubMed

    Katouzian, Amin; Angelini, Elsa D; Carlier, Stéphane G; Suri, Jasjit S; Navab, Nassir; Laine, Andrew F

    2012-09-01

    Over the past two decades, intravascular ultrasound (IVUS) image segmentation has remained a challenge for researchers while the use of this imaging modality is rapidly growing in catheterization procedures and in research studies. IVUS provides cross-sectional grayscale images of the arterial wall and the extent of atherosclerotic plaques with high spatial resolution in real time. In this paper, we review recently developed image processing methods for the detection of media-adventitia and luminal borders in IVUS images acquired with different transducers operating at frequencies ranging from 20 to 45 MHz. We discuss methodological challenges, lack of diversity in reported datasets, and weaknesses of quantification metrics that make IVUS segmentation still an open problem despite all efforts. In conclusion, we call for a common reference database, validation metrics, and ground-truth definition with which new and existing algorithms could be benchmarked.

  13. Quantitative segmentation of fluorescence microscopy images of heterogeneous tissue: Approach for tuning algorithm parameters

    NASA Astrophysics Data System (ADS)

    Mueller, Jenna L.; Harmany, Zachary T.; Mito, Jeffrey K.; Kennedy, Stephanie A.; Kim, Yongbaek; Dodd, Leslie; Geradts, Joseph; Kirsch, David G.; Willett, Rebecca M.; Brown, J. Quincy; Ramanujam, Nimmi

    2013-02-01

    The combination of fluorescent contrast agents with microscopy is a powerful technique to obtain real time images of tissue histology without the need for fixing, sectioning, and staining. The potential of this technology lies in the identification of robust methods for image segmentation and quantitation, particularly in heterogeneous tissues. Our solution is to apply sparse decomposition (SD) to monochrome images of fluorescently-stained microanatomy to segment and quantify distinct tissue types. The clinical utility of our approach is demonstrated by imaging excised margins in a cohort of mice after surgical resection of a sarcoma. Representative images of excised margins were used to optimize the formulation of SD and tune parameters associated with the algorithm. Our results demonstrate that SD is a robust solution that can advance vital fluorescence microscopy as a clinically significant technology.

  14. An improved K-means clustering algorithm in agricultural image segmentation

    NASA Astrophysics Data System (ADS)

    Cheng, Huifeng; Peng, Hui; Liu, Shanmei

    Image segmentation is the first important step in image analysis and image processing. In this paper, according to the characteristics of color crop images, we first transform the color space of the image from RGB to HSI, and then select a proper initial clustering center and cluster number by applying a mean-variance approach and rough set theory, followed by clustering calculation, in such a way as to automatically segment the color components rapidly and extract target objects from the background accurately, which provides a reliable basis for the identification, analysis, follow-up calculation and processing of crop images. Experimental results demonstrate that the improved k-means clustering algorithm is able to reduce the computation amount and enhance the precision and accuracy of clustering.
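    A minimal sketch of the colour-space-plus-clustering step, using HSV (readily available in scikit-image) as a stand-in for the HSI space and default k-means++ seeding instead of the mean-variance/rough-set initialization proposed by the authors:

```python
from skimage.color import rgb2hsv
from sklearn.cluster import KMeans

def segment_crop_image(rgb, k=3):
    """Convert an RGB crop image to a hue-based colour space and
    cluster its pixels with k-means. HSV and k-means++ seeding are
    stand-ins for the HSI space and initialization in the paper."""
    hsv = rgb2hsv(rgb)
    pixels = hsv.reshape(-1, 3)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(pixels)
    return labels.reshape(rgb.shape[:2])
```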

  15. Segments.

    ERIC Educational Resources Information Center

    Zemsky, Robert; Shaman, Susan; Shapiro, Daniel B.

    2001-01-01

    Presents a market taxonomy for higher education, including what it reveals about the structure of the market, the model's technical attributes, and its capacity to explain pricing behavior. Details the identification of the principle seams separating one market segment from another and how student aspirations help to organize the market, making…

  16. iCut: an Integrative Cut Algorithm Enables Accurate Segmentation of Touching Cells.

    PubMed

    He, Yong; Gong, Hui; Xiong, Benyi; Xu, Xiaofeng; Li, Anan; Jiang, Tao; Sun, Qingtao; Wang, Simin; Luo, Qingming; Chen, Shangbin

    2015-07-14

    Individual cells play essential roles in the biological processes of the brain. The number of neurons changes during both normal development and disease progression. High-resolution imaging has made it possible to directly count cells. However, the automatic and precise segmentation of touching cells continues to be a major challenge for massive and highly complex datasets. Thus, an integrative cut (iCut) algorithm, which combines information regarding spatial location and intervening and concave contours with the established normalized cut, has been developed. iCut involves two key steps: (1) a weighting matrix is first constructed with the abovementioned information regarding the touching cells and (2) a normalized cut algorithm that uses the weighting matrix is implemented to separate the touching cells into isolated cells. This novel algorithm was evaluated using two types of data: the open SIMCEP benchmark dataset and our micro-optical imaging dataset from a Nissl-stained mouse brain. It has achieved a promising recall/precision of 91.2 ± 2.1%/94.1 ± 1.8% and 86.8 ± 4.1%/87.5 ± 5.7%, respectively, for the two datasets. As quantified using the harmonic mean of recall and precision, the accuracy of iCut is higher than that of some state-of-the-art algorithms. The better performance of this fully automated algorithm can benefit studies of brain cytoarchitecture.

  17. iCut: an Integrative Cut Algorithm Enables Accurate Segmentation of Touching Cells

    PubMed Central

    He, Yong; Gong, Hui; Xiong, Benyi; Xu, Xiaofeng; Li, Anan; Jiang, Tao; Sun, Qingtao; Wang, Simin; Luo, Qingming; Chen, Shangbin

    2015-01-01

    Individual cells play essential roles in the biological processes of the brain. The number of neurons changes during both normal development and disease progression. High-resolution imaging has made it possible to directly count cells. However, the automatic and precise segmentation of touching cells continues to be a major challenge for massive and highly complex datasets. Thus, an integrative cut (iCut) algorithm, which combines information regarding spatial location and intervening and concave contours with the established normalized cut, has been developed. iCut involves two key steps: (1) a weighting matrix is first constructed with the abovementioned information regarding the touching cells and (2) a normalized cut algorithm that uses the weighting matrix is implemented to separate the touching cells into isolated cells. This novel algorithm was evaluated using two types of data: the open SIMCEP benchmark dataset and our micro-optical imaging dataset from a Nissl-stained mouse brain. It has achieved a promising recall/precision of 91.2 ± 2.1%/94.1 ± 1.8% and 86.8 ± 4.1%/87.5 ± 5.7%, respectively, for the two datasets. As quantified using the harmonic mean of recall and precision, the accuracy of iCut is higher than that of some state-of-the-art algorithms. The better performance of this fully automated algorithm can benefit studies of brain cytoarchitecture. PMID:26168908

  18. Optimized adaptation algorithm for HEVC/H.265 dynamic adaptive streaming over HTTP using variable segment duration

    NASA Astrophysics Data System (ADS)

    Irondi, Iheanyi; Wang, Qi; Grecos, Christos

    2016-04-01

    Adaptive video streaming using HTTP has become popular in recent years for commercial video delivery. The recent MPEG-DASH standard allows interoperability and adaptability between servers and clients from different vendors. The delivery of the MPD (Media Presentation Description) files in DASH and the DASH client behaviours are beyond the scope of the DASH standard. However, the different adaptation algorithms employed by the clients do affect the overall performance of the system and users' QoE (Quality of Experience), hence the need for research in this field. Moreover, standard DASH delivery is based on fixed segments of the video. However, there is no standard segment duration for DASH where various fixed segment durations have been employed by different commercial solutions and researchers with their own individual merits. Most recently, the use of variable segment duration in DASH has emerged but only a few preliminary studies without practical implementation exist. In addition, such a technique requires a DASH client to be aware of segment duration variations, and this requirement and the corresponding implications on the DASH system design have not been investigated. This paper proposes a segment-duration-aware bandwidth estimation and next-segment selection adaptation strategy for DASH. Firstly, an MPD file extension scheme to support variable segment duration is proposed and implemented in a realistic hardware testbed. The scheme is tested on a DASH client, and the tests and analysis have led to an insight on the time to download next segment and the buffer behaviour when fetching and switching between segments of different playback durations. Issues like sustained buffering when switching between segments of different durations and slow response to changing network conditions are highlighted and investigated. An enhanced adaptation algorithm is then proposed to accurately estimate the bandwidth and precisely determine the time to download the next

  19. 3-D Ultrasound Segmentation of the Placenta Using the Random Walker Algorithm: Reliability and Agreement.

    PubMed

    Stevenson, Gordon N; Collins, Sally L; Ding, Jane; Impey, Lawrence; Noble, J Alison

    2015-12-01

    Volumetric segmentation of the placenta using 3-D ultrasound is currently performed clinically to investigate correlation between organ volume and fetal outcome or pathology. Previously, interpolative or semi-automatic contour-based methodologies were used to provide volumetric results. We describe the validation of an original random walker (RW)-based algorithm against manual segmentation and an existing semi-automated method, virtual organ computer-aided analysis (VOCAL), using initialization time, inter- and intra-observer variability of volumetric measurements and quantification accuracy (with respect to manual segmentation) as metrics of success. Both semi-automatic methods require initialization. Therefore, the first experiment compared initialization times. Initialization was timed by one observer using 20 subjects. This revealed significant differences (p < 0.001) in time taken to initialize the VOCAL method compared with the RW method. In the second experiment, 10 subjects were used to analyze intra-/inter-observer variability between two observers. Bland-Altman plots were used to analyze variability combined with intra- and inter-observer variability measured by intra-class correlation coefficients, which were reported for all three methods. Intra-class correlation coefficient values for intra-observer variability were higher for the RW method than for VOCAL, and both were similar to manual segmentation. Inter-observer variability was 0.94 (0.88, 0.97), 0.91 (0.81, 0.95) and 0.80 (0.61, 0.90) for manual, RW and VOCAL, respectively. Finally, a third observer with no prior ultrasound experience was introduced and volumetric differences from manual segmentation were reported. Dice similarity coefficients for observers 1, 2 and 3 were respectively 0.84 ± 0.12, 0.94 ± 0.08 and 0.84 ± 0.11, and the mean was 0.87 ± 0.13. The RW algorithm was found to provide results concordant with those for manual segmentation and to outperform VOCAL in aspects of observer
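    For illustration, the generic random walker implementation in scikit-image can produce a placenta mask from sparse user seeds in a 3-D volume, as sketched below; the paper's own RW-based pipeline, initialization and parameter choices are not reproduced, and the beta value here is an assumption.

```python
from skimage.segmentation import random_walker

def segment_placenta(volume, seeds, beta=130):
    """Random walker segmentation of a 3-D ultrasound volume from
    sparse seeds (1 = placenta, 2 = background, 0 = unlabelled).
    Generic scikit-image call, not the authors' implementation."""
    labels = random_walker(volume, seeds, beta=beta, mode="cg")
    return labels == 1
```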

  20. An automatic multi-lead electrocardiogram segmentation algorithm based on abrupt change detection.

    PubMed

    Illanes-Manriquez, Alfredo

    2010-01-01

    Automatic detection of electrocardiogram (ECG) waves provides important information for cardiac disease diagnosis. In this paper a new algorithm is proposed for automatic ECG segmentation based on multi-lead ECG processing. Two auxiliary signals are computed from the first and second derivatives of several ECG lead signals. One auxiliary signal is used for R peak detection and the other for ECG wave delimitation. Statistical hypothesis testing is finally applied to one of the auxiliary signals in order to detect abrupt mean changes. Preliminary experimental results show that the detected mean-change instants coincide with the boundaries of the ECG waves.
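    A rough sketch of the two ingredients, in Python: an auxiliary signal built from the first and second derivatives of several leads, and a simple two-window mean-change test standing in for the paper's statistical hypothesis test. The exact derivative combination and test statistic used in the paper may differ.

```python
import numpy as np

def auxiliary_signal(ecg_leads):
    """Combine first and second derivatives of several leads (shape
    (n_leads, n_samples)) into one auxiliary signal; the combination
    is an illustrative choice."""
    ecg = np.asarray(ecg_leads, dtype=float)
    d1 = np.gradient(ecg, axis=1)
    d2 = np.gradient(d1, axis=1)
    return np.sum(np.abs(d1) + np.abs(d2), axis=0)

def mean_change_points(signal, window=40, z_thresh=4.0):
    """Flag samples where the mean of the next window differs from the
    previous window by more than z_thresh pooled standard errors --
    a simple stand-in for a formal hypothesis test."""
    idx = []
    for i in range(window, len(signal) - window):
        a, b = signal[i - window:i], signal[i:i + window]
        se = np.sqrt(a.var() / window + b.var() / window) + 1e-12
        if abs(b.mean() - a.mean()) / se > z_thresh:
            idx.append(i)
    return np.array(idx)
```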

  1. Infrared active polarimetric imaging system controlled by image segmentation algorithms: application to decamouflage

    NASA Astrophysics Data System (ADS)

    Vannier, Nicolas; Goudail, François; Plassart, Corentin; Boffety, Matthieu; Feneyrou, Patrick; Leviandier, Luc; Galland, Frédéric; Bertaux, Nicolas

    2016-05-01

    We describe an active polarimetric imager with laser illumination at 1.5 µm that can generate any illumination and analysis polarization state on the Poincaré sphere. Thanks to its full polarization agility and to image analysis of the scene with an ultrafast active-contour based segmentation algorithm, it can perform adaptive polarimetric contrast optimization. We demonstrate the capacity of this imager to detect manufactured objects in different types of environments for such applications as decamouflage and hazardous object detection. We compare two imaging modes having different numbers of polarimetric degrees of freedom and underline the characteristics that a polarimetric imager aimed at this type of application should possess.

  2. A comparison of supervised machine learning algorithms and feature vectors for MS lesion segmentation using multimodal structural MRI.

    PubMed

    Sweeney, Elizabeth M; Vogelstein, Joshua T; Cuzzocreo, Jennifer L; Calabresi, Peter A; Reich, Daniel S; Crainiceanu, Ciprian M; Shinohara, Russell T

    2014-01-01

    Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors, rather than the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance.
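    A minimal sketch of one such simple pipeline with scikit-learn: voxel intensities from the three modalities are augmented with Gaussian-smoothed versions (a crude neighbourhood feature) and fed to logistic regression. The feature functions and training protocol of the study are not reproduced; the smoothing scale and the variable names in the commented usage are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi
from sklearn.linear_model import LogisticRegression

def voxel_features(t1, t2, flair, sigma=2.0):
    """Stack raw intensities with Gaussian-smoothed versions so each
    voxel carries some neighbourhood information. Illustrative
    feature construction only."""
    chans = [t1, t2, flair]
    feats = chans + [ndi.gaussian_filter(c, sigma) for c in chans]
    return np.stack([f.ravel() for f in feats], axis=1)

# Hypothetical training call on one study with a manual lesion mask:
# X = voxel_features(t1, t2, flair)
# y = manual_mask.ravel().astype(int)
# clf = LogisticRegression(max_iter=1000).fit(X, y)
# prob = clf.predict_proba(X)[:, 1].reshape(t1.shape)
```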

  3. Comparing algorithms for automated vessel segmentation in computed tomography scans of the lung: the VESSEL12 study

    PubMed Central

    Rudyanto, Rina D.; Kerkstra, Sjoerd; van Rikxoort, Eva M.; Fetita, Catalin; Brillet, Pierre-Yves; Lefevre, Christophe; Xue, Wenzhe; Zhu, Xiangjun; Liang, Jianming; Öksüz, İlkay; Ünay, Devrim; Kadipaşaoğlu, Kamuran; Estépar, Raúl San José; Ross, James C.; Washko, George R.; Prieto, Juan-Carlos; Hoyos, Marcela Hernández; Orkisz, Maciej; Meine, Hans; Hüllebrand, Markus; Stöcker, Christina; Mir, Fernando Lopez; Naranjo, Valery; Villanueva, Eliseo; Staring, Marius; Xiao, Changyan; Stoel, Berend C.; Fabijanska, Anna; Smistad, Erik; Elster, Anne C.; Lindseth, Frank; Foruzan, Amir Hossein; Kiros, Ryan; Popuri, Karteek; Cobzas, Dana; Jimenez-Carretero, Daniel; Santos, Andres; Ledesma-Carbayo, Maria J.; Helmberger, Michael; Urschler, Martin; Pienn, Michael; Bosboom, Dennis G.H.; Campo, Arantza; Prokop, Mathias; de Jong, Pim A.; Ortiz-de-Solorzano, Carlos; Muñoz-Barrutia, Arrate; van Ginneken, Bram

    2016-01-01

    The VESSEL12 (VESsel SEgmentation in the Lung) challenge objectively compares the performance of different algorithms to identify vessels in thoracic computed tomography (CT) scans. Vessel segmentation is fundamental in computer aided processing of data generated by 3D imaging modalities. As manual vessel segmentation is prohibitively time consuming, any real world application requires some form of automation. Several approaches exist for automated vessel segmentation, but judging their relative merits is difficult due to a lack of standardized evaluation. We present an annotated reference dataset containing 20 CT scans and propose nine categories to perform a comprehensive evaluation of vessel segmentation algorithms from both academia and industry. Twenty algorithms participated in the VESSEL12 challenge, held at International Symposium on Biomedical Imaging (ISBI) 2012. All results have been published at the VESSEL12 website http://vessel12.grand-challenge.org. The challenge remains ongoing and open to new participants. Our three contributions are: (1) an annotated reference dataset available online for evaluation of new algorithms; (2) a quantitative scoring system for objective comparison of algorithms; and (3) performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases. PMID:25113321

  4. Comparing algorithms for automated vessel segmentation in computed tomography scans of the lung: the VESSEL12 study.

    PubMed

    Rudyanto, Rina D; Kerkstra, Sjoerd; van Rikxoort, Eva M; Fetita, Catalin; Brillet, Pierre-Yves; Lefevre, Christophe; Xue, Wenzhe; Zhu, Xiangjun; Liang, Jianming; Öksüz, Ilkay; Ünay, Devrim; Kadipaşaoğlu, Kamuran; Estépar, Raúl San José; Ross, James C; Washko, George R; Prieto, Juan-Carlos; Hoyos, Marcela Hernández; Orkisz, Maciej; Meine, Hans; Hüllebrand, Markus; Stöcker, Christina; Mir, Fernando Lopez; Naranjo, Valery; Villanueva, Eliseo; Staring, Marius; Xiao, Changyan; Stoel, Berend C; Fabijanska, Anna; Smistad, Erik; Elster, Anne C; Lindseth, Frank; Foruzan, Amir Hossein; Kiros, Ryan; Popuri, Karteek; Cobzas, Dana; Jimenez-Carretero, Daniel; Santos, Andres; Ledesma-Carbayo, Maria J; Helmberger, Michael; Urschler, Martin; Pienn, Michael; Bosboom, Dennis G H; Campo, Arantza; Prokop, Mathias; de Jong, Pim A; Ortiz-de-Solorzano, Carlos; Muñoz-Barrutia, Arrate; van Ginneken, Bram

    2014-10-01

    The VESSEL12 (VESsel SEgmentation in the Lung) challenge objectively compares the performance of different algorithms to identify vessels in thoracic computed tomography (CT) scans. Vessel segmentation is fundamental in computer aided processing of data generated by 3D imaging modalities. As manual vessel segmentation is prohibitively time consuming, any real world application requires some form of automation. Several approaches exist for automated vessel segmentation, but judging their relative merits is difficult due to a lack of standardized evaluation. We present an annotated reference dataset containing 20 CT scans and propose nine categories to perform a comprehensive evaluation of vessel segmentation algorithms from both academia and industry. Twenty algorithms participated in the VESSEL12 challenge, held at International Symposium on Biomedical Imaging (ISBI) 2012. All results have been published at the VESSEL12 website http://vessel12.grand-challenge.org. The challenge remains ongoing and open to new participants. Our three contributions are: (1) an annotated reference dataset available online for evaluation of new algorithms; (2) a quantitative scoring system for objective comparison of algorithms; and (3) performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases.

  5. Segmentation algorithm via Cellular Neural/Nonlinear Network: implementation on Bio-inspired hardware platform

    NASA Astrophysics Data System (ADS)

    Karabiber, Fethullah; Vecchio, Pietro; Grassi, Giuseppe

    2011-12-01

    The Bio-inspired (Bi-i) Cellular Vision System is a computing platform consisting of sensing, array sensing-processing, and digital signal processing. The platform is based on the Cellular Neural/Nonlinear Network (CNN) paradigm. This article presents the implementation of a novel CNN-based segmentation algorithm on the Bi-i system. Each part of the algorithm, along with the corresponding implementation on the hardware platform, is carefully described throughout the article. The experimental results, obtained for the Foreman and Car-phone video sequences, highlight the feasibility of the approach, which provides a frame rate of about 26 frames/s. Comparisons with existing CNN-based methods show that the proposed approach is more accurate, thus representing a good trade-off between real-time requirements and accuracy.

  6. A quantum mechanics-based algorithm for vessel segmentation in retinal images

    NASA Astrophysics Data System (ADS)

    Youssry, Akram; El-Rafei, Ahmed; Elramly, Salwa

    2016-06-01

    Blood vessel segmentation is an important step in retinal image analysis. It is one of the steps required for computer-aided detection of ophthalmic diseases. In this paper, a novel quantum mechanics-based algorithm for retinal vessel segmentation is presented. The algorithm consists of three major steps. The first step is the preprocessing of the images to prepare the images for further processing. The second step is feature extraction where a set of four features is generated at each image pixel. These features are then combined using a nonlinear transformation for dimensionality reduction. The final step is applying a recently proposed quantum mechanics-based framework for image processing. In this step, pixels are mapped to quantum systems that are allowed to evolve from an initial state to a final state governed by Schrödinger's equation. The evolution is controlled by the Hamiltonian operator which is a function of the extracted features at each pixel. A measurement step is consequently performed to determine whether the pixel belongs to vessel or non-vessel classes. Many functional forms of the Hamiltonian are proposed, and the best performing form was selected. The algorithm is tested on the publicly available DRIVE database. The average results for sensitivity, specificity, and accuracy are 80.29, 97.34, and 95.83 %, respectively. These results are compared to some recently published techniques showing the superior performance of the proposed method. Finally, the implementation of the algorithm on a quantum computer and the challenges facing this implementation are introduced.

  7. An Algorithm for Obtaining the Distribution of 1-Meter Lightning Channel Segment Altitudes for Application in Lightning NOx Production Estimation

    NASA Technical Reports Server (NTRS)

    Peterson, Harold; Koshak, William J.

    2009-01-01

    An algorithm has been developed to estimate the altitude distribution of one-meter lightning channel segments. The algorithm is required as part of a broader objective that involves improving the lightning NOx emission inventories of both regional air quality and global chemistry/climate models. The algorithm was tested and applied to VHF signals detected by the North Alabama Lightning Mapping Array (NALMA). The accuracy of the algorithm was characterized by comparing algorithm output to plots of individual discharges whose lengths were computed by hand; VHF source amplitude thresholding and smoothing were applied to optimize results. Several thousand lightning flashes within 120 km of the NALMA network centroid were gathered across all four seasons and analyzed by the algorithm. The mean, standard deviation, and median statistics were obtained for all the flashes, the ground flashes, and the cloud flashes. One-meter channel segment altitude distributions were also obtained for the different seasons.
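
    A minimal Python sketch of the bookkeeping this implies (not the NALMA processing chain itself): reconstructed channel segments are chopped into one-metre pieces whose altitudes are histogrammed. The input format of endpoint pairs in metres and the bin width are assumptions.

      import numpy as np

      def one_meter_altitudes(segment_endpoints):
          """segment_endpoints: array of shape (n, 2, 3) with (x, y, z) endpoints in metres."""
          altitudes = []
          for start, end in segment_endpoints:
              length = np.linalg.norm(end - start)
              n_pieces = max(int(np.floor(length)), 1)
              # midpoints of consecutive ~1 m pieces along the segment
              t = (np.arange(n_pieces) + 0.5) / n_pieces
              midpoints = start + t[:, None] * (end - start)
              altitudes.append(midpoints[:, 2])
          return np.concatenate(altitudes)

      def altitude_statistics(segment_endpoints, bin_width=500.0):
          z = one_meter_altitudes(segment_endpoints)
          counts, edges = np.histogram(z, bins=np.arange(0.0, z.max() + bin_width, bin_width))
          return counts, edges, z.mean(), np.median(z), z.std()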

  8. Bilayered anatomically constrained split-and-merge expectation maximisation algorithm (BiASM) for brain segmentation

    NASA Astrophysics Data System (ADS)

    Sudre, Carole H.; Cardoso, M. Jorge; Ourselin, Sébastien

    2014-03-01

    Dealing with pathological tissues is a very challenging task in medical brain segmentation. The presence of pathology can indeed bias the ultimate results when the model chosen is not appropriate and lead to missegmentations and errors in the model parameters. Model fit and segmentation accuracy are impaired by the lack of flexibility of the model used to represent the data. In this work, based on a finite Gaussian mixture model, we dynamically introduce extra degrees of freedom so that each anatomical tissue considered is modelled as a mixture of Gaussian components. The choice of the appropriate number of components per tissue class relies on a model selection criterion. Its purpose is to balance the complexity of the model with the quality of the model fit in order to avoid overfitting while allowing flexibility. The parameter optimisation, constrained with the additional knowledge brought by probabilistic anatomical atlases, follows the expectation maximisation (EM) framework. Split-and-merge operations bring the new flexibility to the model along with a data-driven adaptation. The proposed methodology appears to improve the segmentation when pathological tissues are present as well as the model fit when compared to an atlas-based expectation maximisation algorithm with a unique component per tissue class. These improvements in the modelling might bring new insight into the characterisation of pathological tissues as well as into the modelling of the partial volume effect.

  9. SAR Image Segmentation with Unknown Number of Classes Combined Voronoi Tessellation and Rjmcmc Algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Q. H.; Li, Y.; Wang, Y.

    2016-06-01

    This paper presents a novel segmentation method for automatically determining the number of classes in Synthetic Aperture Radar (SAR) images by combining Voronoi tessellation and Reversible Jump Markov Chain Monte Carlo (RJMCMC) strategy. Instead of giving the number of classes a priori, it is considered as a random variable and subject to a Poisson distribution. Based on Voronoi tessellation, the image is divided into homogeneous polygons. Under the Bayesian paradigm, a posterior distribution which characterizes the segmentation and model parameters conditional on a given SAR image can be obtained up to a normalizing constant. Then, a Reversible Jump Markov Chain Monte Carlo (RJMCMC) algorithm involving six move types is designed to simulate the posterior distribution; the move types include splitting or merging real classes, updating the parameter vector, updating the label field, moving positions of generating points, birth or death of generating points, and birth or death of an empty class. Experimental results with real and simulated SAR images demonstrate that the proposed method can determine the number of classes automatically and segment homogeneous regions well.

  10. CT liver volumetry using geodesic active contour segmentation with a level-set algorithm

    NASA Astrophysics Data System (ADS)

    Suzuki, Kenji; Epstein, Mark L.; Kohlbrenner, Ryan; Obajuluwa, Ademola; Xu, Jianwu; Hori, Masatoshi; Baron, Richard

    2010-03-01

    Automatic liver segmentation on CT images is challenging because the liver often abuts other organs of a similar density. Our purpose was to develop an accurate automated liver segmentation scheme for measuring liver volumes. We developed an automated volumetry scheme for the liver in CT based on a five-step scheme. First, an anisotropic smoothing filter was applied to portal-venous phase CT images to remove noise while preserving the liver structure, followed by an edge enhancer to enhance the liver boundary. By using the boundary-enhanced image as a speed function, a fast-marching algorithm generated an initial surface that roughly estimated the liver shape. A geodesic-active-contour segmentation algorithm coupled with level-set contour evolution refined the initial surface so as to more precisely fit the liver boundary. The liver volume was calculated based on the refined liver surface. Hepatic CT scans of eighteen prospective liver donors were obtained under a liver transplant protocol with a multi-detector CT system. Automated liver volumes obtained were compared with those manually traced by a radiologist, used as "gold standard." The mean liver volume obtained with our scheme was 1,520 cc, whereas the mean manual volume was 1,486 cc, with a mean absolute difference of 104 cc (7.0%). CT liver volumetrics based on an automated scheme agreed excellently with "gold-standard" manual volumetrics (intra-class correlation coefficient was 0.95) with no statistically significant difference (p(F<=f)=0.32), and required substantially less completion time. Our automated scheme provides an efficient and accurate way of measuring liver volumes.
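
    For illustration, a rough library-based analogue of this pipeline (not the authors' implementation) can be built with scikit-image: edge-preserving smoothing, an edge-based speed image, and morphological geodesic active contours. The total-variation denoiser here stands in for anisotropic diffusion, and the seed box and all parameters are placeholders.

      import numpy as np
      from skimage.restoration import denoise_tv_chambolle
      from skimage.segmentation import (inverse_gaussian_gradient,
                                        morphological_geodesic_active_contour)

      def liver_volume_cc(ct_volume, voxel_volume_cc, seed_box):
          # edge-preserving smoothing as a stand-in for anisotropic diffusion
          smoothed = denoise_tv_chambolle(ct_volume.astype(float), weight=0.1)
          # small values near strong edges slow the evolving contour down
          speed = inverse_gaussian_gradient(smoothed, alpha=100.0, sigma=2.0)
          init = np.zeros(ct_volume.shape, dtype=np.int8)
          init[seed_box] = 1   # seed_box: tuple of slices roughly inside the liver
          mask = morphological_geodesic_active_contour(speed, 200,
                                                       init_level_set=init,
                                                       smoothing=2, balloon=1)
          return mask, mask.sum() * voxel_volume_cc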

  11. The algorithm study for using the back propagation neural network in CT image segmentation

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Liu, Jie; Chen, Chen; Li, Ying Qi

    2017-01-01

    Back propagation neural network (BP neural network) is a type of multi-layer feed-forward network in which the signal propagates forward while the error propagates backward. Since a BP network can learn and store the mapping between a large number of input and output nodes without complex mathematical equations to describe the mapping relationship, it is very widely used. BP iteratively computes the weight coefficients and thresholds of the network based on training and the back propagation of samples, which minimizes the error sum of squares of the network. Since the boundary of computed tomography (CT) heart images is usually discontinuous, and there are large changes in the volume and boundary of the heart across images, conventional segmentation methods such as region growing and the watershed algorithm cannot achieve satisfactory results. Meanwhile, there are large differences between diastolic and systolic images, and conventional methods cannot accurately classify the two cases. In this paper, we introduce BP to handle the segmentation of heart images. We segmented a large number of CT images manually to obtain training samples, and the BP network was trained on these samples. To obtain an appropriate BP network for the segmentation of heart images, we normalized the heart images and extracted the gray-level information of the heart. The boundary of the images was then input into the network to compare the differences between the theoretical output and the actual output, and the errors were fed back into the BP network to modify the weight coefficients of the layers. After extensive training, the BP network becomes stable and the weight coefficients of the layers can be determined, establishing the relationship between the CT images and the heart boundary.
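
    A hedged sketch of this idea using scikit-learn's multilayer perceptron (trained by back-propagation) in place of a hand-rolled BP network. The choice of normalised grey-level patches as features, the network size, and the requirement that sample coordinates lie away from the image border are assumptions.

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      def extract_patches(image, coords, half=4):
          """Normalised grey-level patches around candidate boundary pixels.
          coords are assumed to lie at least `half` pixels from the border."""
          patches = []
          for r, c in coords:
              patch = image[r - half:r + half + 1, c - half:c + half + 1]
              patches.append(patch.ravel() / (image.max() + 1e-9))
          return np.array(patches)

      def train_boundary_classifier(images, boundary_coords, labels):
          X = np.vstack([extract_patches(img, xy)
                         for img, xy in zip(images, boundary_coords)])
          y = np.concatenate(labels)   # 1 = heart boundary pixel, 0 = background
          clf = make_pipeline(StandardScaler(),
                              MLPClassifier(hidden_layer_sizes=(64, 32),
                                            max_iter=500, random_state=0))
          return clf.fit(X, y)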

  12. Segmenting clouds from space : a hybrid multispectral classification algorithm for satellite imagery.

    SciTech Connect

    Post, Brian Nelson; Wilson, Mark P.; Smith, Jody Lynn; Wehlburg, Joseph Cornelius; Nandy, Prabal

    2005-07-01

    This paper reports on a novel approach to atmospheric cloud segmentation from a space-based multi-spectral pushbroom satellite system. The satellite collects 15 spectral bands ranging from the visible (0.45 µm) to the long-wave infrared (IR, 10.7 µm). The images are radiometrically calibrated and have ground sample distances (GSD) of 5 meters for the visible to very-near-IR bands and a GSD of 20 meters for the near-IR to long-wave IR bands. The algorithm consists of a hybrid classification system in the sense that supervised and unsupervised networks are used in conjunction. For performance evaluation, a series of numerical comparisons to human-derived cloud borders was performed. A set of 33 scenes was selected to represent various climate zones with different land cover from around the world. The algorithm consists of the following steps. Band separation was performed to find the band combinations which form significant separation between cloud and background classes. The potential bands are fed into a K-Means clustering algorithm in order to identify areas in the image which have similar centroids. Each cluster is then compared to the cloud and background prototypes using the Jeffries-Matusita distance. A minimum distance is found and each unknown cluster is assigned to its appropriate prototype. A classification rate of 88% was found when using one short-wave IR band and one mid-wave IR band. Past investigators have reported segmentation accuracies ranging from 67% to 80%, many of which require human intervention. A sensitivity of 75% and specificity of 90% were reported as well.
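
    A minimal Python sketch of the hybrid scheme described above: unsupervised K-means clustering of the selected bands, then assignment of each cluster to the cloud or background prototype with the smaller Jeffries-Matusita distance. The prototype statistics are assumed to come from labelled training pixels, and the cluster count is a placeholder.

      import numpy as np
      from sklearn.cluster import KMeans

      def jeffries_matusita(mean1, cov1, mean2, cov2):
          """JM distance between two Gaussian class models."""
          cov = 0.5 * (cov1 + cov2)
          diff = mean1 - mean2
          b = (0.125 * diff @ np.linalg.solve(cov, diff)
               + 0.5 * np.log(np.linalg.det(cov)
                              / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2))))
          return 2.0 * (1.0 - np.exp(-b))

      def classify_clusters(pixels, cloud_stats, background_stats, n_clusters=8):
          """pixels: (n, bands) array of the selected band combination;
          *_stats: (mean vector, covariance matrix) of each prototype."""
          km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
          cloud_mask = np.zeros(len(pixels), dtype=bool)
          for k in range(n_clusters):
              members = pixels[km.labels_ == k]
              m, c = members.mean(axis=0), np.cov(members, rowvar=False)
              d_cloud = jeffries_matusita(m, c, *cloud_stats)
              d_back = jeffries_matusita(m, c, *background_stats)
              cloud_mask[km.labels_ == k] = d_cloud < d_back
          return cloud_mask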

  13. Larynx Anatomy

    MedlinePlus

    Title: Larynx Anatomy. Description: Anatomy of the larynx; drawing shows ...

  14. Vulva Anatomy

    MedlinePlus

    Title: Vulva Anatomy. Description: Anatomy of the vulva; drawing shows the ...

  15. Hand Anatomy

    MedlinePlus

    The upper extremity is a term used to ... of the parts together. Learn more about the anatomy of the upper extremity using the links in ...

  16. Pharynx Anatomy

    MedlinePlus

    Title: Pharynx Anatomy. Description: Anatomy of the pharynx; drawing shows the ...

  17. Time series segmentation: a new approach based on Genetic Algorithm and Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Toreti, A.; Kuglitsch, F. G.; Xoplaki, E.; Luterbacher, J.

    2009-04-01

    The subdivision of a time series into homogeneous segments has been performed using various methods applied to different disciplines. In climatology, for example, it is accompanied by the well-known homogenization problem and the detection of artificial change points. In this context, we present a new method (GAMM) based on a Hidden Markov Model (HMM) and a Genetic Algorithm (GA), applicable to series of independent observations (and easily adaptable to autoregressive processes). A left-to-right hidden Markov model was applied, estimating the parameters and the best-state sequence with the Baum-Welch and Viterbi algorithms, respectively. In order to avoid the well-known dependence of the Baum-Welch algorithm on the initial condition, a Genetic Algorithm was developed. This algorithm is characterized by mutation, elitism and a crossover procedure implemented with some restrictive rules. Moreover, the function to be minimized was derived following the approach of Kehagias (2004), i.e. the so-called complete log-likelihood. The number of states was determined by applying a two-fold cross-validation procedure (Celeux and Durand, 2008). Since this last issue is complex and influences the entire analysis, a Multi-Response Permutation Procedure (MRPP; Mielke et al., 1981) was added. It tests the model with K+1 states (where K is the number of states of the best model) when its likelihood is close to that of the K-state model. Finally, an evaluation of GAMM's performance as a break-detection method for climate time series homogenization is shown. 1. G. Celeux and J.B. Durand, Comput Stat 2008. 2. A. Kehagias, Stoch Envir Res 2004. 3. P.W. Mielke, K.J. Berry, G.W. Brier, Monthly Wea Rev 1981.

  18. Development and validation of a segmentation-free polyenergetic algorithm for dynamic perfusion computed tomography.

    PubMed

    Lin, Yuan; Samei, Ehsan

    2016-07-01

    Dynamic perfusion imaging can provide the morphologic details of the scanned organs as well as the dynamic information of blood perfusion. However, due to the polyenergetic property of the x-ray spectra, beam hardening effect results in undesirable artifacts and inaccurate CT values. To address this problem, this study proposes a segmentation-free polyenergetic dynamic perfusion imaging algorithm (pDP) to provide superior perfusion imaging. Dynamic perfusion usually is composed of two phases, i.e., a precontrast phase and a postcontrast phase. In the precontrast phase, the attenuation properties of diverse base materials (e.g., in a thorax perfusion exam, base materials can include lung, fat, breast, soft tissue, bone, and metal implants) can be incorporated to reconstruct artifact-free precontrast images. If patient motions are negligible or can be corrected by registration, the precontrast images can then be employed as a priori information to derive linearized iodine projections from the postcontrast images. With the linearized iodine projections, iodine perfusion maps can be reconstructed directly without the influence of various influential factors, such as iodine location, patient size, x-ray spectrum, and background tissue type. A series of simulations were conducted on a dynamic iodine calibration phantom and a dynamic anthropomorphic thorax phantom to validate the proposed algorithm. The simulations with the dynamic iodine calibration phantom showed that the proposed algorithm could effectively eliminate the beam hardening effect and enable quantitative iodine map reconstruction across various influential factors. The error range of the iodine concentration factors ([Formula: see text]) was reduced from [Formula: see text] for filtered back-projection (FBP) to [Formula: see text] for pDP. The quantitative results of the simulations with the dynamic anthropomorphic thorax phantom indicated that the maximum error of iodine concentrations can be reduced from

  19. A Hybrid Method for Image Segmentation Based on Artificial Fish Swarm Algorithm and Fuzzy c-Means Clustering.

    PubMed

    Ma, Li; Li, Yang; Fan, Suohai; Fan, Runzhu

    2015-01-01

    Image segmentation plays an important role in medical image processing. Fuzzy c-means (FCM) clustering is one of the popular clustering algorithms for medical image segmentation. However, FCM has the problems of depending on initial clustering centers, falling easily into local optimal solutions, and sensitivity to noise disturbance. To solve these problems, this paper proposes a hybrid artificial fish swarm algorithm (HAFSA). The proposed algorithm combines the artificial fish swarm algorithm (AFSA) with FCM, exploiting AFSA's global optimization search and parallel computing ability to find a superior result. Meanwhile, the Metropolis criterion and a noise-reduction mechanism are introduced into AFSA to enhance the convergence rate and antinoise ability. Artificial grid graphs and Magnetic Resonance Imaging (MRI) data are used in the experiments, and the experimental results show that the proposed algorithm has stronger antinoise ability and higher precision. A number of evaluation indicators also demonstrate that HAFSA outperforms FCM and suppressed FCM (SFCM).
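
    For reference, the plain NumPy sketch below implements standard fuzzy c-means, the component that the hybrid AFSA scheme initialises and refines; the AFSA, Metropolis and noise-reduction parts are omitted, and all parameters are generic defaults rather than the authors' settings.

      import numpy as np

      def fuzzy_c_means(X, n_clusters=3, m=2.0, max_iter=100, tol=1e-5, seed=0):
          """X: (n_samples, n_features); returns (centers, membership matrix)."""
          rng = np.random.default_rng(seed)
          u = rng.random((n_clusters, len(X)))
          u /= u.sum(axis=0)                      # memberships sum to 1 per sample
          for _ in range(max_iter):
              um = u ** m
              centers = um @ X / um.sum(axis=1, keepdims=True)
              # squared distances between every centre and every sample
              d = np.maximum(((X[None] - centers[:, None]) ** 2).sum(-1), 1e-12)
              new_u = 1.0 / (d ** (1.0 / (m - 1)))
              new_u /= new_u.sum(axis=0)
              if np.abs(new_u - u).max() < tol:
                  u = new_u
                  break
              u = new_u
          return centers, u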

  20. An efficient algorithm for multiphase image segmentation with intensity bias correction.

    PubMed

    Zhang, Haili; Ye, Xiaojing; Chen, Yunmei

    2013-10-01

    This paper presents a variational model for simultaneous multiphase segmentation and intensity bias estimation for images corrupted by strong noise and intensity inhomogeneity. Since the pixel intensities are not reliable samples for region statistics due to the presence of noise and intensity bias, we use local information based on the joint density within image patches to perform image partition. Hence, the pixel intensity has a multiplicative distribution structure. Then, the maximum-a-posteriori (MAP) principle with those pixel density functions generates the model. To tackle the computational problem of the resultant nonsmooth nonconvex minimization, we relax the constraint on the characteristic functions of partition regions, and apply primal-dual alternating gradient projections to construct a very efficient numerical algorithm. We show that all the variables have closed-form solutions in each iteration, and the computational complexity is very low. In particular, the algorithm involves only regular convolutions and pointwise projections onto the unit ball and the canonical simplex. Numerical tests on a variety of images demonstrate that the proposed algorithm is robust, stable, and attains significant improvements in accuracy and efficiency over state-of-the-art methods.

  1. A two-dimensional Segmented Boundary Algorithm for complex moving solid boundaries in Smoothed Particle Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Khorasanizade, Sh.; Sousa, J. M. M.

    2016-03-01

    A Segmented Boundary Algorithm (SBA) is proposed to deal with complex boundaries and moving bodies in Smoothed Particle Hydrodynamics (SPH). Boundaries are formed in this algorithm with chains of lines obtained from the decomposition of two-dimensional objects, based on simple line geometry. Various two-dimensional, viscous fluid flow cases have been studied here using a truly incompressible SPH method with the aim of assessing the capabilities of the SBA. Firstly, the flow over a stationary circular cylinder in a plane channel was analyzed at steady and unsteady regimes, for a single value of blockage ratio. Subsequently, the flow produced by a moving circular cylinder with a prescribed acceleration inside a plane channel was investigated as well. Next, the simulation of the flow generated by the impulsive start of a flat plate, again inside a plane channel, has been carried out. This was followed by the study of confined sedimentation of an elliptic body subjected to gravity, for various density ratios. The set of test cases was completed with the simulation of periodic flow around a sunflower-shaped object. Extensive comparisons of the results obtained here with published data have demonstrated the accuracy and effectiveness of the proposed algorithms, namely in cases involving complex geometries and moving bodies.

  2. Fully-automated approach to hippocampus segmentation using a graph-cuts algorithm combined with atlas-based segmentation and morphological opening.

    PubMed

    Kwak, Kichang; Yoon, Uicheul; Lee, Dong-Kyun; Kim, Geon Ha; Seo, Sang Won; Na, Duk L; Shim, Hack-Joon; Lee, Jong-Min

    2013-09-01

    The hippocampus has been known to be an important structure as a biomarker for Alzheimer's disease (AD) and other neurological and psychiatric diseases. However, its use requires accurate, robust and reproducible delineation of hippocampal structures. In this study, an automated hippocampal segmentation method based on a graph-cuts algorithm combined with atlas-based segmentation and morphological opening was proposed. First, atlas-based segmentation was applied to define the initial hippocampal region as a priori information for graph-cuts. The definition of initial seeds was further elaborated by incorporating estimation of partial volume probabilities at each voxel. Finally, morphological opening was applied to reduce false positives in the graph-cuts result. In experiments with twenty-seven healthy normal subjects, the proposed method showed more reliable results (similarity index=0.81±0.03) than the conventional atlas-based segmentation method (0.72±0.04). Also, in terms of segmentation accuracy, measured by precision and recall (which reflect the false-positive and false-negative ratios), the proposed method (precision=0.76±0.04, recall=0.86±0.05) outperformed the conventional method (0.73±0.05, 0.72±0.06), demonstrating its plausibility for accurate, robust and reliable segmentation of the hippocampus.
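
    A heavily simplified sketch of this pipeline: an atlas-derived prior supplies the unary terms of a graph cut, and a morphological opening removes small false-positive islands. It assumes the third-party PyMaxflow package for the min-cut; the weighting constants are placeholders, not the authors' values.

      import numpy as np
      import maxflow
      from scipy.ndimage import binary_opening

      def hippocampus_graph_cut(image, atlas_prob, smoothness=5.0, opening_iters=1):
          """image: 3D MRI volume; atlas_prob: voxelwise prior probability in [0, 1]."""
          graph = maxflow.Graph[float]()
          nodes = graph.add_grid_nodes(image.shape)
          graph.add_grid_edges(nodes, smoothness)        # pairwise smoothness terms
          eps = 1e-6
          # unary terms: negative log of the atlas prior and its complement
          graph.add_grid_tedges(nodes,
                                -np.log(np.clip(1.0 - atlas_prob, eps, 1.0)),
                                -np.log(np.clip(atlas_prob, eps, 1.0)))
          graph.maxflow()
          mask = graph.get_grid_segments(nodes)
          # orient the labelling so foreground agrees with the atlas prior,
          # regardless of the library's source/sink segment convention
          if mask[atlas_prob > 0.5].mean() < 0.5:
              mask = ~mask
          return binary_opening(mask, iterations=opening_iters)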

  3. Automatic segmentation of ground-glass opacities in lung CT images by using Markov random field-based algorithms.

    PubMed

    Zhu, Yanjie; Tan, Yongqing; Hua, Yanqing; Zhang, Guozhen; Zhang, Jianguo

    2012-06-01

    Chest radiologists rely on the segmentation and quantitative analysis of ground-glass opacities (GGO) to perform imaging diagnoses that evaluate the disease severity or recovery stages of diffuse parenchymal lung diseases. However, GGO patterns are computationally more difficult to segment and analyze than those of other lung diseases, since GGO usually do not have clear boundaries. In this paper, we present a new approach which automatically segments GGO in lung computed tomography (CT) images using algorithms derived from Markov random field theory. Further, we systematically evaluate the performance of the algorithms in segmenting GGO in lung CT images under different situations. CT image studies from 41 patients with diffuse lung diseases were enrolled in this research. The local distributions were modeled with both simple and adaptive (AMAP) models of maximum a posteriori (MAP). For best segmentation, we used the simulated annealing algorithm with a Gibbs sampler to solve the combinatorial optimization problem of MAP estimators, and we applied a knowledge-guided strategy to reduce false positive regions. We achieved AMAP-based GGO segmentation results of 86.94%, 94.33%, and 94.06% in average sensitivity, specificity, and accuracy, respectively, and we evaluated the performance using radiologists' subjective evaluation and quantitative analysis and diagnosis. We also compared the results of AMAP-based GGO segmentation with those of support vector machine-based methods, and we discuss the reliability and other issues of AMAP-based GGO segmentation. Our research results demonstrate the acceptability and usefulness of AMAP-based GGO segmentation for assisting radiologists in detecting GGO in high-resolution CT diagnostic procedures.
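
    A compact, generic illustration (not the authors' AMAP model) of MAP estimation with a Gibbs sampler and simulated annealing for a Potts-style MRF with Gaussian class statistics; the class parameters, smoothness weight and temperature schedule are example values only, and the loops are deliberately kept simple rather than fast.

      import numpy as np

      def gibbs_map_segmentation(image, means, stds, beta=1.5,
                                 n_sweeps=30, t_start=4.0, t_end=0.1, seed=0):
          rng = np.random.default_rng(seed)
          labels = np.argmin([(image - m) ** 2 for m in means], axis=0)
          rows, cols = image.shape
          for t in np.geomspace(t_start, t_end, n_sweeps):   # annealing schedule
              for r in range(rows):
                  for c in range(cols):
                      energies = []
                      for k in range(len(means)):
                          data = ((image[r, c] - means[k]) ** 2 / (2 * stds[k] ** 2)
                                  + np.log(stds[k]))
                          neighbours = [labels[rr, cc]
                                        for rr, cc in ((r - 1, c), (r + 1, c),
                                                       (r, c - 1), (r, c + 1))
                                        if 0 <= rr < rows and 0 <= cc < cols]
                          smooth = beta * sum(n != k for n in neighbours)
                          energies.append(data + smooth)
                      p = np.exp(-np.array(energies) / t)
                      labels[r, c] = rng.choice(len(means), p=p / p.sum())
          return labels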

  4. A Fast Semiautomatic Algorithm for Centerline-Based Vocal Tract Segmentation

    PubMed Central

    Poznyakovskiy, Anton A.; Mainka, Alexander; Platzek, Ivan; Mürbe, Dirk

    2015-01-01

    Vocal tract morphology is an important factor in voice production. Its analysis has potential implications for educational matters as well as medical issues like voice therapy. The knowledge of the complex adjustments in the spatial geometry of the vocal tract during phonation is still limited. For a major part, this is due to difficulties in acquiring geometry data of the vocal tract in the process of voice production. In this study, a centerline-based segmentation method using active contours was introduced to extract the geometry data of the vocal tract obtained with MRI during sustained vowel phonation. The applied semiautomatic algorithm was found to be time- and interaction-efficient and allowed performing various three-dimensional measurements on the resulting model. The method is suitable for an improved detailed analysis of the vocal tract morphology during speech or singing which might give some insights into the underlying mechanical processes. PMID:26557710

  5. Modal characterization of the ASCIE segmented optics testbed: New algorithms and experimental results

    NASA Technical Reports Server (NTRS)

    Carrier, Alain C.; Aubrun, Jean-Noel

    1993-01-01

    New frequency response measurement procedures, on-line modal tuning techniques, and off-line modal identification algorithms are developed and applied to the modal identification of the Advanced Structures/Controls Integrated Experiment (ASCIE), a generic segmented optics telescope test-bed representative of future complex space structures. The frequency response measurement procedure uses all the actuators simultaneously to excite the structure and all the sensors to measure the structural response so that all the transfer functions are measured simultaneously. Structural responses to sinusoidal excitations are measured and analyzed to calculate spectral responses. The spectral responses are in turn analyzed as the spectral data become available and, as a new feature, the results are used to maintain high-quality measurements. Data acquisition, processing, and checking procedures are fully automated. As the acquisition of the frequency response progresses, an on-line algorithm keeps track of the actuator force distribution that maximizes the structural response to automatically tune to a structural mode when approaching a resonant frequency. This tuning is insensitive to delays, ill-conditioning, and nonproportional damping. Experimental results show that it is useful for modal surveys even in high modal density regions. For thorough modeling, a constructive procedure is proposed to identify the dynamics of a complex system from its frequency response with the minimization of a least-squares cost function as a desirable objective. This procedure relies on off-line modal separation algorithms to extract modal information and on least-squares parameter subset optimization to combine the modal results and globally fit the modal parameters to the measured data. The modal separation algorithms resolved a modal density of 5 modes/Hz in the ASCIE experiment. They promise to be useful in many challenging applications.

  6. Enhancing a diffusion algorithm for 4D image segmentation using local information

    NASA Astrophysics Data System (ADS)

    Lösel, Philipp; Heuveline, Vincent

    2016-03-01

    Inspired by the diffusion of a particle, we present a novel approach for performing a semiautomatic segmentation of tomographic images in 3D, 4D or higher dimensions to meet the requirements of high-throughput measurements in a synchrotron X-ray microtomograph. Given a small number of 2D slices with at least two manually labeled segments, one can either analytically determine the probability that an intelligently weighted random walk starting at one labeled pixel will be, at a certain time, at a specific position in the dataset, or determine the probability approximately by performing several random walks. While the weights of a random walk take into account local information at the starting point, the random walk itself can be in any dimension. Starting a great number of random walks in each labeled pixel, a voxel in the dataset will be hit by several random walks over time. Hence, the image can be segmented by assigning each voxel to the label from which the random walks most likely started. Due to the high scalability of random walks, this approach is suitable for high-throughput measurements. Additionally, we describe an interactively adjusted, slice-by-slice active contours method that considers local information, where we start with one manually labeled slice and move forward in any direction. This approach is more accurate than the diffusion algorithm but requires more tedious manual processing steps. The methods were applied to 3D and 4D datasets and evaluated by means of manually labeled images obtained in a realistic scenario with biologists.
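
    The approach above is closely related to scikit-image's random-walker segmenter; the hedged sketch below uses that function as an off-the-shelf stand-in for the diffusion scheme. Label conventions (0 = unlabelled, 1..K = manually drawn segments) follow scikit-image, and the beta value is a placeholder.

      import numpy as np
      from skimage.segmentation import random_walker

      def diffuse_labels(volume, labelled_slices):
          """volume: 3D array; labelled_slices: dict {slice_index: 2D label map}."""
          markers = np.zeros(volume.shape, dtype=np.int32)
          for idx, label_map in labelled_slices.items():
              markers[idx] = label_map
          # beta controls how strongly intensity differences impede the walkers
          return random_walker(volume, markers, beta=130)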

  7. Genetic algorithms as a useful tool for trabecular and cortical bone segmentation.

    PubMed

    Janc, K; Tarasiuk, J; Bonnet, A S; Lipinski, P

    2013-07-01

    The aim of this study was to find a semi-automatic method of bone segmentation on the basis of computed tomography (CT) scan series in order to recreate the corresponding 3D objects. It was therefore crucial for the segmentation to be smooth between adjacent scans. The concept of graphics pipeline computing was used, i.e. simple graphics filters such as thresholding or gradient were chained so that the output of one filter became the input of the next, resulting in a so-called pipeline. The input of the entire stream was the CT scan and the output corresponded to the binary mask showing where a given tissue is located in the input image. In this approach, the main task consists of finding a suitable sequence, types and parameters of the graphics filters building the pipeline. Because of the high number of desired parameters (in our case 96), it was decided to use a slightly modified genetic algorithm. To determine the fitness value, the mask obtained from the parameters found through the genetic algorithm (GA) was compared with manually prepared masks. The numerical value corresponding to such a comparison was defined by Dice's coefficient. Preparation of reference masks for a few scans among the several hundreds of them was the only action done manually by a human expert. Using this method, very good results for both trabecular and cortical bones were obtained. It has to be emphasized that, as no real border exists between these two bone types, the manually prepared reference masks were somewhat arbitrary and therefore subject to errors. As GA is a non-deterministic method, the present work also contains a statistical analysis of the relations existing between the various GA parameters and the fitness function. Finally, the best sets of GA parameters are proposed.
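
    A heavily simplified sketch of the idea: a small genetic algorithm searches filter-pipeline parameters (here just a threshold and an opening iteration count, not the 96 real parameters) and scores each candidate by Dice agreement with the reference masks. The 2D-slice assumption, population size and mutation scale are all placeholders.

      import numpy as np
      from scipy.ndimage import binary_opening, generate_binary_structure

      def dice(a, b):
          return 2.0 * np.logical_and(a, b).sum() / max(a.sum() + b.sum(), 1)

      def apply_pipeline(scan, params):
          """scan: 2D CT slice; params: (threshold, opening iterations)."""
          threshold, opening_iters = params
          mask = scan > threshold
          return binary_opening(mask, structure=generate_binary_structure(2, 1),
                                iterations=max(int(opening_iters), 1))

      def evolve(scans, reference_masks, pop_size=20, n_generations=30, seed=0):
          rng = np.random.default_rng(seed)
          lo = np.array([min(float(s.min()) for s in scans), 1.0])
          hi = np.array([max(float(s.max()) for s in scans), 6.0])
          population = rng.uniform(lo, hi, size=(pop_size, 2))
          best_params, best_fitness = None, -1.0
          for _ in range(n_generations):
              fitness = np.array([
                  np.mean([dice(apply_pipeline(s, p), m)
                           for s, m in zip(scans, reference_masks)])
                  for p in population])
              if fitness.max() > best_fitness:
                  best_fitness = fitness.max()
                  best_params = population[fitness.argmax()].copy()
              elite = population[np.argsort(fitness)[-pop_size // 2:]]       # elitism
              parents = elite[rng.integers(len(elite), size=(pop_size // 2, 2))]
              children = parents.mean(axis=1)                                # crossover
              children += rng.normal(0.0, 0.05 * (hi - lo), children.shape)  # mutation
              population = np.vstack([elite, np.clip(children, lo, hi)])
          return best_params, best_fitness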

  8. A novel Iterative algorithm to text segmentation for web born-digital images

    NASA Astrophysics Data System (ADS)

    Xu, Zhigang; Zhu, Yuesheng; Sun, Ziqiang; Liu, Zhen

    2015-07-01

    Since web born-digital images have low resolution and dense text atoms, text region over-merging and missed detection are still two open issues to be addressed. In this paper a novel iterative algorithm is proposed to locate and segment text regions. In each iteration, the candidate text regions are generated by detecting Maximally Stable Extremal Regions (MSER) with diminishing thresholds and categorized into different groups based on a new similarity graph, and the text region groups are identified by applying several features and rules. With our proposed overlap-checking method, the final well-segmented text regions are selected from these groups across all iterations. Experiments have been carried out on the web born-digital image datasets used for the robust reading competitions in ICDAR 2011 and 2013, and the results demonstrate that our proposed scheme can significantly reduce both the number of over-merged regions and the loss rate of target atoms, and that the overall performance outperforms the best methods from the two competitions in terms of recall rate and f-score, at the cost of slightly higher computational complexity.
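
    A hedged sketch of the candidate-generation step only: MSER regions are detected at a sequence of decreasing delta thresholds with OpenCV and kept as text-atom candidates. The grouping, rule-based filtering and overlap checking described in the paper are not reproduced, and the delta sequence and minimum area are placeholders.

      import cv2
      import numpy as np

      def candidate_text_regions(gray_image, deltas=(15, 10, 5, 2), min_area=10):
          """gray_image: uint8 greyscale born-digital image."""
          candidates = []
          for delta in deltas:                  # diminishing stability thresholds
              mser = cv2.MSER_create()
              mser.setDelta(int(delta))
              mser.setMinArea(int(min_area))
              regions, bboxes = mser.detectRegions(gray_image)
              for points, (x, y, w, h) in zip(regions, bboxes):
                  mask = np.zeros(gray_image.shape, dtype=np.uint8)
                  mask[points[:, 1], points[:, 0]] = 255   # points are (x, y)
                  candidates.append({"delta": delta, "bbox": (x, y, w, h), "mask": mask})
          return candidates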

  9. An Iris Segmentation Algorithm based on Edge Orientation for Off-angle Iris Recognition

    SciTech Connect

    Karakaya, Mahmut; Barstow, Del R; Santos-Villalobos, Hector J; Boehnen, Chris Bensing

    2013-01-01

    Iris recognition is known as one of the most accurate and reliable biometrics. However, the accuracy of iris recognition systems depends on the quality of data capture and is negatively affected by several factors such as angle, occlusion, and dilation. In this paper, we present a segmentation algorithm for off-angle iris images that uses edge detection, edge elimination, edge classification, and ellipse fitting techniques. In our approach, we first detect all candidate edges in the iris image by using the canny edge detector; this collection contains edges from the iris and pupil boundaries as well as eyelash, eyelids, iris texture etc. Edge orientation is used to eliminate the edges that cannot be part of the iris or pupil. Then, we classify the remaining edge points into two sets as pupil edges and iris edges. Finally, we randomly generate subsets of iris and pupil edge points, fit ellipses for each subset, select ellipses with similar parameters, and average to form the resultant ellipses. Based on the results from real experiments, the proposed method shows effectiveness in segmentation for off-angle iris images.

  10. An iris segmentation algorithm based on edge orientation for off-angle iris recognition

    NASA Astrophysics Data System (ADS)

    Karakaya, Mahmut; Barstow, Del; Santos-Villalobos, Hector; Boehnen, Christopher

    2013-03-01

    Iris recognition is known as one of the most accurate and reliable biometrics. However, the accuracy of iris recognition systems depends on the quality of data capture and is negatively affected by several factors such as angle, occlusion, and dilation. In this paper, we present a segmentation algorithm for off-angle iris images that uses edge detection, edge elimination, edge classification, and ellipse fitting techniques. In our approach, we first detect all candidate edges in the iris image by using the canny edge detector; this collection contains edges from the iris and pupil boundaries as well as eyelash, eyelids, iris texture etc. Edge orientation is used to eliminate the edges that cannot be part of the iris or pupil. Then, we classify the remaining edge points into two sets as pupil edges and iris edges. Finally, we randomly generate subsets of iris and pupil edge points, fit ellipses for each subset, select ellipses with similar parameters, and average to form the resultant ellipses. Based on the results from real experiments, the proposed method shows effectiveness in segmentation for off-angle iris images.
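
    A hedged sketch of the geometric core of this method: Canny edges, a crude orientation-based pruning step, and robust ellipse fitting, with scikit-image's RANSAC standing in for the authors' subset-averaging of fitted ellipses. The orientation threshold and RANSAC settings are placeholders, not values from the paper.

      import numpy as np
      from skimage.feature import canny
      from skimage.filters import sobel_h, sobel_v
      from skimage.measure import EllipseModel, ransac

      def fit_iris_ellipse(eye_image, sigma=2.0, max_abs_orientation=1.2):
          edges = canny(eye_image, sigma=sigma)
          # prune edge pixels whose gradient orientation is implausible for an
          # iris/pupil boundary (e.g. near-horizontal eyelid and eyelash edges)
          orientation = np.arctan2(sobel_h(eye_image), sobel_v(eye_image))
          keep = edges & (np.abs(orientation) < max_abs_orientation)
          ys, xs = np.nonzero(keep)
          points = np.column_stack([xs, ys]).astype(float)
          # random subsets of edge points are fitted and the consensus ellipse kept
          model, inliers = ransac(points, EllipseModel, min_samples=5,
                                  residual_threshold=2.0, max_trials=500)
          return model.params   # (xc, yc, a, b, theta); may fail on sparse edges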

  11. A multiple-feature and multiple-kernel scene segmentation algorithm for humanoid robot.

    PubMed

    Liu, Zhi; Xu, Shuqiong; Zhang, Yun; Chen, Chun Lung Philip

    2014-11-01

    This technical correspondence presents a multiple-feature and multiple-kernel support vector machine (MFMK-SVM) methodology to achieve a more reliable and robust segmentation performance for humanoid robot. The pixel wise intensity, gradient, and C1 SMF features are extracted via the local homogeneity model and Gabor filter, which would be used as inputs of MFMK-SVM model. It may provide multiple features of the samples for easier implementation and efficient computation of MFMK-SVM model. A new clustering method, which is called feature validity-interval type-2 fuzzy C-means (FV-IT2FCM) clustering algorithm, is proposed by integrating a type-2 fuzzy criterion in the clustering optimization process to improve the robustness and reliability of clustering results by the iterative optimization. Furthermore, the clustering validity is employed to select the training samples for the learning of the MFMK-SVM model. The MFMK-SVM scene segmentation method is able to fully take advantage of the multiple features of scene image and the ability of multiple kernels. Experiments on the BSDS dataset and real natural scene images demonstrate the superior performance of our proposed method.

  12. PHEW: a parallel segmentation algorithm for three-dimensional AMR datasets. Application to structure detection in self-gravitating flows

    NASA Astrophysics Data System (ADS)

    Bleuler, Andreas; Teyssier, Romain; Carassou, Sébastien; Martizzi, Davide

    2015-06-01

    We introduce phew (Parallel HiErarchical Watershed), a new segmentation algorithm to detect structures in astrophysical fluid simulations, and its implementation into the adaptive mesh refinement (AMR) code ramses. phew works on the density field defined on the adaptive mesh, and can thus be used on the gas density or the dark matter density after a projection of the particles onto the grid. The algorithm is based on a `watershed' segmentation of the computational volume into dense regions, followed by a merging of the segmented patches based on the saddle point topology of the density field. phew is capable of automatically detecting connected regions above the adopted density threshold, as well as the entire set of substructures within. Our algorithm is fully parallel and uses the MPI library. We describe in great detail the parallel algorithm and perform a scaling experiment which proves the capability of phew to run efficiently on massively parallel systems. Future work will add a particle unbinding procedure and the calculation of halo properties onto our segmentation algorithm, thus expanding the scope of phew to genuine halo finding.
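
    A hedged, single-node sketch of the watershed stage only, using scikit-image on a uniform density grid; the MPI parallelisation and saddle-point merging that distinguish phew are not reproduced, and the density threshold and peak spacing are placeholders.

      import numpy as np
      from skimage.feature import peak_local_max
      from skimage.segmentation import watershed

      def segment_density(density, threshold=80.0, min_distance=3):
          """density: 3D array (e.g. gas or projected dark-matter density)."""
          region = density > threshold
          # local density maxima seed one watershed basin each
          peaks = peak_local_max(density, min_distance=min_distance,
                                 labels=region.astype(int))
          markers = np.zeros(density.shape, dtype=np.int32)
          markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
          # flood the *negative* density so basins grow outwards from the peaks
          return watershed(-density, markers, mask=region)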

  13. ATLAAS: an automatic decision tree-based learning algorithm for advanced image segmentation in positron emission tomography

    NASA Astrophysics Data System (ADS)

    Berthon, Beatrice; Marshall, Christopher; Evans, Mererid; Spezi, Emiliano

    2016-07-01

    Accurate and reliable tumour delineation on positron emission tomography (PET) is crucial for radiotherapy treatment planning. PET automatic segmentation (PET-AS) eliminates intra- and interobserver variability, but there is currently no consensus on the optimal method to use, as different algorithms appear to perform better for different types of tumours. This work aimed to develop a predictive segmentation model, trained to automatically select and apply the best PET-AS method, according to the tumour characteristics. ATLAAS, the automatic decision tree-based learning algorithm for advanced segmentation, is based on supervised machine learning using decision trees. The model includes nine PET-AS methods and was trained on 100 PET scans with known true contours. A decision tree was built for each PET-AS algorithm to predict its accuracy, quantified using the Dice similarity coefficient (DSC), according to the tumour volume, tumour peak to background SUV ratio and a regional texture metric. The performance of ATLAAS was evaluated for 85 PET scans obtained from fillable and printed subresolution sandwich phantoms. ATLAAS showed excellent accuracy across a wide range of phantom data and predicted the best or near-best segmentation algorithm in 93% of cases. ATLAAS outperformed all single PET-AS methods on fillable phantom data with a DSC of 0.881, while the DSC for H&N phantom data was 0.819. DSCs higher than 0.650 were achieved in all cases. ATLAAS is an advanced automatic image segmentation algorithm based on decision tree predictive modelling, which can be trained on images with known true contour, to predict the best PET-AS method when the true contour is unknown. ATLAAS provides robust and accurate image segmentation with potential applications to radiation oncology.
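
    A hedged sketch of the predictive-model idea: one regression tree per candidate segmentation method learns to predict its Dice score from simple tumour descriptors, and the method with the highest predicted score is applied. The feature names, tree depth and the callable registry of candidate methods are assumptions, not the ATLAAS implementation.

      import numpy as np
      from sklearn.tree import DecisionTreeRegressor

      class SegmentationSelector:
          def __init__(self, methods):
              """methods: dict {name: callable(pet_image) -> binary mask}."""
              self.methods = methods
              self.trees = {name: DecisionTreeRegressor(max_depth=4, random_state=0)
                            for name in methods}

          def fit(self, features, dice_scores):
              """features: (n_scans, n_features), e.g. volume, peak-to-background
              SUV ratio, texture metric; dice_scores: dict {name: (n_scans,)}."""
              for name, tree in self.trees.items():
                  tree.fit(features, dice_scores[name])
              return self

          def segment(self, pet_image, feature_vector):
              predicted = {name: tree.predict(feature_vector.reshape(1, -1))[0]
                           for name, tree in self.trees.items()}
              best = max(predicted, key=predicted.get)
              return best, self.methods[best](pet_image)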

  14. Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC, Version 2.0: User's Manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1987-01-01

    The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and the NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through October 16, 1987. The technical manual describes the NASARC concept and the algorithms which are used to implement it. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions have been incorporated in the Version 2.0 software over prior versions. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit into the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time reducing computer run time.

  15. Streaming level set algorithm for 3D segmentation of confocal microscopy images.

    PubMed

    Gouaillard, Alexandre; Mosaliganti, Kishore; Gelas, Arnaud; Souhait, Lydie; Obholzer, Nikolaus; Megason, Sean

    2009-01-01

    We present a high performance variant of the popular geodesic active contours which are used for splitting cell clusters in microscopy images. Previously, we implemented a linear pipelined version that incorporates as many cues as possible into developing a suitable level-set speed function so that an evolving contour exactly segments a cell/nucleus blob. We use image gradients, distance maps, multiple channel information and a shape model to drive the evolution. We also developed a dedicated seeding strategy that uses the spatial coherency of the data to generate an overcomplete set of seeds, along with a quality metric which is further used to sort out which seed should be used for a given cell. However, the computational performance of any level-set methodology is quite poor when applied to thousands of 3D data-sets each containing thousands of cells. Such data-sets are common in confocal microscopy. In this work, we explore methods to stream the algorithm in shared-memory, multi-core environments. By partitioning the input and output using spatial data structures, we ensure the spatial coherency needed by our seeding algorithm and drastically improve the speed without memory overhead. Our results show speed-ups up to a factor of six.

  16. Numerical arc segmentation algorithm for a radio conference-NASARC (version 2.0) technical manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1987-01-01

    The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of NASARC software development through October 16, 1987. The Technical Manual describes the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operating instructions. Significant revisions have been incorporated in the Version 2.0 software. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit within the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time effecting an overall reduction in computer run time.

  17. Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC (version 4.0) technical manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1988-01-01

    The information contained in the NASARC (Version 4.0) Technical Manual and NASARC (Version 4.0) User's Manual relates to the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbits. Array dimensions within the software were structured to fit within the currently available 12 megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.

  18. Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC), version 4.0: User's manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1988-01-01

    The information in the NASARC (Version 4.0) Technical Manual (NASA-TM-101453) and NASARC (Version 4.0) User's Manual (NASA-TM-101454) relates to the state of Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbit. Array dimensions within the software were structured to fit within the currently available 12-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.

  19. SU-E-J-142: Performance Study of Automatic Image-Segmentation Algorithms in Motion Tracking Via MR-IGRT

    SciTech Connect

    Feng, Y; Olsen, J.; Parikh, P.; Noel, C; Wooten, H; Du, D; Mutic, S; Hu, Y; Kawrakow, I; Dempsey, J

    2014-06-01

    Purpose: Evaluate commonly used segmentation algorithms on a commercially available real-time MR image guided radiotherapy (MR-IGRT) system (ViewRay), compare the strengths and weaknesses of each method, with the purpose of improving motion tracking for more accurate radiotherapy. Methods: MR motion images of bladder, kidney, duodenum, and liver tumor were acquired for three patients using a commercial on-board MR imaging system and an imaging protocol used during MR-IGRT. A series of 40 frames was selected for each case to cover at least 3 respiratory cycles. Thresholding, Canny edge detection, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE), along with the ViewRay treatment planning and delivery system (TPDS), were included in the comparisons. To evaluate the segmentation results, an expert manual contouring of the organs or tumor from a physician was used as ground truth. Metric values of sensitivity, specificity, Jaccard similarity, and Dice coefficient were computed for comparison. Results: In the segmentation of a single image frame, all methods successfully segmented the bladder and kidney, but only FKM, KHM and TPDS were able to segment the liver tumor and the duodenum. For segmenting motion image series, the TPDS method had the highest sensitivity, Jaccard, and Dice coefficients in segmenting bladder and kidney, while FKM and KHM had a slightly higher specificity. A similar pattern was observed when segmenting the liver tumor and the duodenum. The Canny method is not suitable for consistently segmenting motion frames in an automated process, while thresholding and RD-LSE cannot consistently segment a liver tumor and the duodenum. Conclusion: The study compared six different segmentation methods and showed the effectiveness of the ViewRay TPDS algorithm in segmenting motion images during MR-IGRT. Future studies include a selection of conformal segmentation methods based on image/organ-specific information

  20. Spatial Fuzzy C Means and Expectation Maximization Algorithms with Bias Correction for Segmentation of MR Brain Images.

    PubMed

    Meena Prakash, R; Shantha Selva Kumari, R

    2017-01-01

    The Fuzzy C Means (FCM) and Expectation Maximization (EM) algorithms are the most prevalent methods for automatic segmentation of MR brain images into three classes: Gray Matter (GM), White Matter (WM) and Cerebrospinal Fluid (CSF). The major difficulties associated with these conventional methods for MR brain image segmentation are Intensity Non-uniformity (INU) and noise. In this paper, EM and FCM with spatial information and bias correction are proposed to overcome these effects. The spatial information is incorporated by convolving the posterior probability during the E-step of the EM algorithm with a mean filter. Also, a method of pixel re-labeling is included to improve the segmentation accuracy. The proposed method is validated by extensive experiments on both simulated and real brain images from standard databases. Quantitative and qualitative results show that the method is superior to the conventional methods by around 25% and to the state-of-the-art method by 8%.
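
    A hedged NumPy sketch of the spatially regularised EM idea: a three-class Gaussian mixture whose posterior probabilities are smoothed with a mean filter in the E-step. Bias-field correction and the pixel re-labeling step described above are omitted, and the initialisation and filter size are placeholders.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def spatial_em(image, n_classes=3, n_iter=30, filter_size=3):
          flat = image.ravel().astype(float)
          # crude initialisation from intensity quantiles
          means = np.quantile(flat, np.linspace(0.2, 0.8, n_classes))
          variances = np.full(n_classes, flat.var() / n_classes)
          weights = np.full(n_classes, 1.0 / n_classes)
          for _ in range(n_iter):
              # E-step: class posteriors, then spatial smoothing with a mean filter
              resp = np.stack([
                  w * np.exp(-(image - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)
                  for w, m, v in zip(weights, means, variances)])
              resp /= resp.sum(axis=0, keepdims=True) + 1e-12
              resp = np.stack([uniform_filter(r, size=filter_size) for r in resp])
              resp /= resp.sum(axis=0, keepdims=True) + 1e-12
              # M-step: update mixture parameters from the smoothed posteriors
              for k in range(n_classes):
                  rk = resp[k].ravel()
                  weights[k] = rk.mean()
                  means[k] = (rk * flat).sum() / rk.sum()
                  variances[k] = (rk * (flat - means[k]) ** 2).sum() / rk.sum() + 1e-6
          return resp.argmax(axis=0)   # e.g. GM / WM / CSF label map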

  1. A modified possibilistic fuzzy c-means clustering algorithm for bias field estimation and segmentation of brain MR image.

    PubMed

    Ji, Ze-Xuan; Sun, Quan-Sen; Xia, De-Shen

    2011-07-01

    A modified possibilistic fuzzy c-means clustering algorithm is presented for fuzzy segmentation of magnetic resonance (MR) images that have been corrupted by intensity inhomogeneities and noise. By introducing a novel adaptive method to compute the weights of the local spatial information in the objective function, the new adaptive fuzzy clustering algorithm is capable of utilizing local contextual information to impose local spatial continuity, thus allowing the suppression of noise and helping to resolve classification ambiguity. To estimate the intensity inhomogeneity, the global intensity is introduced into the coherent local intensity clustering algorithm, which takes both local and global intensity information into account. The segmentation target is therefore driven by two forces, smoothing the derived optimal bias field and improving the accuracy of the segmentation task. The proposed method has been successfully applied to 3 T, 7 T, synthetic and real MR images with desirable results. Comparisons with other approaches demonstrate the superior performance of the proposed algorithm. Moreover, the proposed algorithm is robust to initialization, thereby allowing fully automatic applications.

  2. Metal Artifact Reduction and Segmentation of Dental Computerized Tomography Images Using Least Square Support Vector Machine and Mean Shift Algorithm.

    PubMed

    Mortaheb, Parinaz; Rezaeian, Mehdi

    2016-01-01

    Segmentation and three-dimensional (3D) visualization of teeth in dental computerized tomography (CT) images are among dentists' requirements for both the diagnosis of abnormalities and treatments such as dental implant and orthodontic planning. At the same time, dental CT image segmentation is a difficult process because of the specific characteristics of the tooth's structure. This paper presents a method for automatic segmentation of dental CT images. We present a multi-step method, which starts with a preprocessing phase that reduces the metal artifact using a least-squares support vector machine. An integral intensity profile is then applied to detect candidate regions for each tooth. Finally, the mean shift algorithm is used to partition the region of each tooth, and all the segmented slices are then used for 3D visualization of the teeth. To examine the performance of the proposed approach, a set of reliable assessment metrics is utilized. We applied the segmentation method to 14 cone-beam CT datasets. Functionality analysis of the proposed method demonstrated precise segmentation results on different sample slices. Accuracy analysis indicates that we can increase the sensitivity, specificity, precision, and accuracy of the segmentation results by 83.24%, 98.35%, 72.77%, and 97.62% and decrease the error rate by 2.34%. The experimental results show that the proposed approach performs well on different types of CT images and has better performance than existing approaches. Moreover, segmentation results can be made more accurate by using the proposed metal artifact reduction algorithm in the preprocessing phase.
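
    The mean-shift partitioning step can be illustrated with scikit-learn's MeanShift on simple (x, y, intensity) feature vectors, as sketched below. This is a generic illustration of mean-shift clustering on a 2-D slice, not the paper's full pipeline (no metal-artifact reduction or integral intensity profile), and the spatial weight and bandwidth settings are assumptions.

    ```python
    import numpy as np
    from sklearn.cluster import MeanShift, estimate_bandwidth

    def mean_shift_slice(slice_2d, spatial_weight=0.2):
        """Cluster a 2-D slice with mean shift on (x, y, intensity) features.
        Intended for a small region of interest; full slices can be slow."""
        h, w = slice_2d.shape
        yy, xx = np.mgrid[0:h, 0:w]
        feats = np.column_stack([
            spatial_weight * xx.ravel(),
            spatial_weight * yy.ravel(),
            slice_2d.astype(float).ravel(),
        ])
        bandwidth = estimate_bandwidth(feats, quantile=0.1, n_samples=2000)
        labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(feats)
        return labels.reshape(h, w)
    ```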

  3. Phasing the segments of the Keck and Thirty Meter Telescopes via the narrowband phasing algorithm: chromatic effects

    NASA Astrophysics Data System (ADS)

    Chanan, Gary; Troy, Mitchell; Raouf, Nasrat

    2016-07-01

    The narrowband phasing algorithm that was originally developed at Keck has largely been replaced by a broadband algorithm that, although it is slower and less accurate than the former, has proved to be much more robust. A systematic investigation into the lack of robustness of the narrowband algorithm has shown that it results from systematic errors (of order 20 nm) that are wavelength-dependent. These errors are not well understood at present, but they do not appear to arise from instrumental effects in the Keck phasing cameras or from the segment coatings. This leaves high spatial frequency aberrations or scattering within 60 mm of the segment edges as the most likely origin of the effect.

  4. Local Area Signal-to-Noise Ratio (LASNR) algorithm for Image Segmentation

    SciTech Connect

    Kegelmeyer, L; Fong, P; Glenn, S; Liebman, J

    2007-07-03

    Many automated image-based applications need to find small spots in a variably noisy image. For humans, it is relatively easy to distinguish objects from their local surroundings no matter what else may be in the image. We attempt to capture this distinguishing capability computationally by calculating a measurement that estimates the strength of signal within an object versus the noise in its local neighborhood. First, we hypothesize various sizes for the object and corresponding background areas. Then, we compute the Local Area Signal to Noise Ratio (LASNR) at every pixel in the image, resulting in a new image with a LASNR value for each pixel. All pixels exceeding a pre-selected LASNR value become seed pixels, or initiation points, and are grown to include the full area extent of the object. Since growing the seed is a separate operation from finding the seed, each object can be any size and shape. Thus, the overall process is a 2-stage segmentation method that first finds object seeds and then grows them to find the full extent of the object. This algorithm was designed, optimized and is in daily use for the accurate and rapid inspection of optics from a large laser system (National Ignition Facility (NIF), Lawrence Livermore National Laboratory, Livermore, CA), which includes images with background noise, ghost reflections, different illumination and other sources of variation.
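
    A minimal sketch of the two-stage idea, assuming square windows and arbitrary threshold values: a local signal-to-noise measure is computed at every pixel by contrasting a small "object" window against a larger surrounding neighborhood, pixels above a threshold become seeds, and each seed is grown to the object's full extent. Window sizes, thresholds and the growth rule are illustrative, not those used at NIF.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter, label, binary_dilation

    def lasnr_seeds(image, obj_size=3, bkg_size=15, snr_thresh=3.0):
        """Local-area SNR: inner-window mean vs. surrounding-area noise estimate."""
        img = image.astype(float)
        obj_mean = uniform_filter(img, obj_size)
        bkg_mean = uniform_filter(img, bkg_size)
        bkg_sq = uniform_filter(img ** 2, bkg_size)
        bkg_std = np.sqrt(np.maximum(bkg_sq - bkg_mean ** 2, 1e-12))
        lasnr = (obj_mean - bkg_mean) / bkg_std
        return lasnr > snr_thresh  # seed mask

    def grow_seeds(image, seeds, grow_thresh):
        """Grow each seed over neighbouring bright pixels until it stops changing."""
        region = seeds.copy()
        candidate = image > grow_thresh
        while True:
            grown = binary_dilation(region) & candidate
            if (grown == region).all():
                return label(grown)[0]   # labeled objects
            region = grown
    ```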

  5. Segmentation of Coronary Angiograms Using Gabor Filters and Boltzmann Univariate Marginal Distribution Algorithm

    PubMed Central

    Cervantes-Sanchez, Fernando; Hernandez-Aguirre, Arturo; Solorio-Meza, Sergio; Ornelas-Rodriguez, Manuel; Torres-Cisneros, Miguel

    2016-01-01

    This paper presents a novel method for improving the training step of single-scale Gabor filters by using the Boltzmann univariate marginal distribution algorithm (BUMDA) in X-ray angiograms. Since the single-scale Gabor filters (SSG) are governed by three parameters, the optimal selection of the SSG parameters is highly desirable in order to maximize the detection performance of coronary arteries while reducing the computational time. To obtain the best set of parameters for the SSG, the area (Az) under the receiver operating characteristic curve is used as the fitness function. Moreover, to classify vessel and nonvessel pixels from the Gabor filter response, the interclass variance thresholding method has been adopted. The experimental results using the proposed method obtained the highest detection rate with Az = 0.9502 over a training set of 40 images and Az = 0.9583 with a test set of 40 images. In addition, the experimental results of vessel segmentation provided an accuracy of 0.944 with the test set of angiograms. PMID:27738422
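
    The two ingredients named in the abstract, an oriented single-scale Gabor filter bank and interclass-variance (Otsu) thresholding of its maximum response, can be sketched as follows. The frequency, sigma and orientation count are placeholder values, not the BUMDA-optimized parameters.

    ```python
    import numpy as np
    from scipy.ndimage import convolve
    from skimage.filters import gabor_kernel, threshold_otsu

    def gabor_vessel_mask(image, frequency=0.1, sigma=3.0, n_orientations=12):
        """Max response over oriented Gabor kernels, thresholded with Otsu's method."""
        img = image.astype(float)
        responses = []
        for theta in np.linspace(0, np.pi, n_orientations, endpoint=False):
            kernel = np.real(gabor_kernel(frequency, theta=theta,
                                          sigma_x=sigma, sigma_y=sigma))
            responses.append(convolve(img, kernel))
        response = np.max(responses, axis=0)       # strongest orientation per pixel
        return response > threshold_otsu(response) # interclass-variance threshold
    ```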

  6. Segmentation of Coronary Angiograms Using Gabor Filters and Boltzmann Univariate Marginal Distribution Algorithm.

    PubMed

    Cervantes-Sanchez, Fernando; Cruz-Aceves, Ivan; Hernandez-Aguirre, Arturo; Aviña-Cervantes, Juan Gabriel; Solorio-Meza, Sergio; Ornelas-Rodriguez, Manuel; Torres-Cisneros, Miguel

    2016-01-01

    This paper presents a novel method for improving the training step of single-scale Gabor filters by using the Boltzmann univariate marginal distribution algorithm (BUMDA) in X-ray angiograms. Since the single-scale Gabor filters (SSG) are governed by three parameters, the optimal selection of the SSG parameters is highly desirable in order to maximize the detection performance of coronary arteries while reducing the computational time. To obtain the best set of parameters for the SSG, the area (Az) under the receiver operating characteristic curve is used as the fitness function. Moreover, to classify vessel and nonvessel pixels from the Gabor filter response, the interclass variance thresholding method has been adopted. The experimental results using the proposed method obtained the highest detection rate with Az = 0.9502 over a training set of 40 images and Az = 0.9583 with a test set of 40 images. In addition, the experimental results of vessel segmentation provided an accuracy of 0.944 with the test set of angiograms.

  7. Eye Anatomy

    MedlinePlus


  8. Tooth anatomy

    MedlinePlus

    MedlinePlus encyclopedia entry on tooth anatomy: //medlineplus.gov/ency/article/002214.htm

  9. Correlative anatomy for thoracic inlet; glottis and subglottis; trachea, carina, and main bronchi; lobes, fissures, and segments; hilum and pulmonary vascular system; bronchial arteries and lymphatics.

    PubMed

    Ugalde, Paula; Miro, Santiago; Fréchette, Eric; Deslauriers, Jean

    2007-11-01

    Because it is relatively inexpensive and universally available, standard radiographs of the thorax should still be viewed as the primary screening technique to look at the anatomy of intrathoracic structures and to investigate airway or pulmonary disorders. Modern trained thoracic surgeons must be able to correlate surgical anatomy with what is seen on more advanced imaging techniques, however, such as CT or MRI. More importantly, they must be able to recognize the indications, capabilities, limitations, and pitfalls of these imaging methods.

  10. Evaluation of state-of-the-art segmentation algorithms for left ventricle infarct from late Gadolinium enhancement MR images.

    PubMed

    Karim, Rashed; Bhagirath, Pranav; Claus, Piet; Housden, R James; Chen, Zhong; Karimaghaloo, Zahra; Sohn, Hyon-Mok; Lara Rodríguez, Laura; Vera, Sergio; Albà, Xènia; Hennemuth, Anja; Peitgen, Heinz-Otto; Arbel, Tal; Gonzàlez Ballester, Miguel A; Frangi, Alejandro F; Götte, Marco; Razavi, Reza; Schaeffter, Tobias; Rhode, Kawal

    2016-05-01

    Studies have demonstrated the feasibility of late Gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) imaging for guiding the management of patients with sequelae to myocardial infarction, such as ventricular tachycardia and heart failure. Clinical implementation of these developments necessitates a reproducible and reliable segmentation of the infarcted regions. It is challenging to compare new algorithms for infarct segmentation in the left ventricle (LV) with existing algorithms. Benchmarking datasets with evaluation strategies are much needed to facilitate comparison. This manuscript presents a benchmarking evaluation framework for future algorithms that segment infarct from LGE CMR of the LV. The image database consists of 30 LGE CMR images of both humans and pigs that were acquired from two separate imaging centres. A consensus ground truth was obtained for all data using maximum likelihood estimation. Six widely-used fixed-thresholding methods and five recently developed algorithms are tested on the benchmarking framework. Results demonstrate that the algorithms have better overlap with the consensus ground truth than most of the n-SD fixed-thresholding methods, with the exception of the Full-Width-at-Half-Maximum (FWHM) fixed-thresholding method. Some of the pitfalls of fixed thresholding methods are demonstrated in this work. The benchmarking evaluation framework, which is a contribution of this work, can be used to test and benchmark future algorithms that detect and quantify infarct in LGE CMR images of the LV. The datasets, ground truth and evaluation code have been made publicly available through the website: https://www.cardiacatlas.org/web/guest/challenges.

  11. Improving performance of computer-aided detection of pulmonary embolisms by incorporating a new pulmonary vascular-tree segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Xingwei; Song, XiaoFei; Chapman, Brian E.; Zheng, Bin

    2012-03-01

    We developed a new pulmonary vascular tree segmentation/extraction algorithm. The purpose of this study was to assess whether adding this new algorithm to our previously developed computer-aided detection (CAD) scheme for pulmonary embolism (PE) could improve CAD performance, in particular by reducing the false positive detection rate. A dataset containing 12 CT examinations with 384 verified pulmonary embolism regions associated with 24 three-dimensional (3-D) PE lesions was selected for this study. Our new CAD scheme includes the following image processing and feature classification steps. (1) A 3-D region growing process followed by a rolling-ball algorithm was utilized to segment the lung areas. (2) The complete pulmonary vascular trees were extracted by combining two approaches: an intensity-based region growing to extract the larger vessels and a vessel enhancement filtering to extract the smaller vessel structures. (3) A toboggan algorithm was implemented to identify suspicious PE candidates in the segmented lung or vessel area. (4) A three-layer artificial neural network (ANN) with the topology 27-10-1 was developed to reduce false positive detections. (5) A k-nearest neighbor (KNN) classifier optimized by a genetic algorithm was used to compute detection scores for the PE candidates. (6) A grouping scoring method was designed to detect the final PE lesions in three dimensions. The study showed that integrating the pulmonary vascular tree extraction algorithm into the CAD scheme reduced the false positive rate by 16.2%. For case-based 3D PE lesion detection, the integrated CAD scheme achieved 62.5% detection sensitivity with 17.1 false-positive lesions per examination.
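
    A rough sketch of the vessel-tree step (2): large vessels are taken from an intensity criterion and small vessels from a vesselness-enhancement filter, with the union forming the candidate tree. The Frangi filter is used here only as a stand-in for the unspecified vessel-enhancement filter, and the thresholds are assumptions.

    ```python
    import numpy as np
    from skimage.filters import frangi

    def vessel_candidates(ct_slice, intensity_thresh, vesselness_frac=0.2):
        """Union of bright large-vessel pixels and filter-enhanced small vessels."""
        large = ct_slice > intensity_thresh            # intensity criterion for big vessels
        # Frangi vesselness as an example vessel-enhancement filter;
        # black_ridges=False enhances bright tubular structures (contrast-filled vessels)
        vesselness = frangi(ct_slice.astype(float), black_ridges=False)
        small = vesselness > vesselness_frac * vesselness.max()
        return large | small
    ```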

  12. Anatomy atlases.

    PubMed

    Rosse, C

    1999-01-01

    Anatomy atlases are unlike other knowledge sources in the health sciences in that they communicate knowledge through annotated images without the support of narrative text. An analysis of the knowledge component represented by images and the history of anatomy atlases suggest some distinctions that should be made between atlas and textbook illustrations. Textbook and atlas should synergistically promote the generation of a mental model of anatomy. The objective of such a model is to support anatomical reasoning and thereby replace memorization of anatomical facts. Criteria are suggested for selecting anatomy texts and atlases that complement one another, and the advantages and disadvantages of hard copy and computer-based anatomy atlases are considered.

  13. An Automatic Algorithm for Segmentation of the Boundaries of Corneal Layers in Optical Coherence Tomography Images using Gaussian Mixture Model

    PubMed Central

    Jahromi, Mahdi Kazemian; Kafieh, Raheleh; Rabbani, Hossein; Dehnavi, Alireza Mehri; Peyman, Alireza; Hajizadeh, Fedra; Ommani, Mohammadreza

    2014-01-01

    Diagnosis of corneal diseases is possible by measuring and evaluating corneal thickness in different layers. Thus, the need for precise segmentation of corneal layer boundaries is inevitable. Obviously, manual segmentation is time-consuming and imprecise. In this paper, the Gaussian mixture model (GMM) is used for automatic segmentation of three clinically important corneal boundaries on optical coherence tomography (OCT) images. For this purpose, we apply the GMM method in two consecutive steps. In the first step, the GMM is applied to the original image to localize the first and the last boundaries. In the next step, the gradient response of a contrast-enhanced version of the image is fed into another GMM algorithm to obtain a clearer result around the second boundary. Finally, the first boundary is traced downward to localize the exact location of the second boundary. We tested the performance of the algorithm on images taken from a Heidelberg OCT imaging system. To evaluate our approach, the automatic boundary results are compared with boundaries segmented manually by two corneal specialists. The quantitative results show that the proposed method segments the desired boundaries with great accuracy. Unsigned mean errors between the results of the proposed method and the manual segmentation are 0.332, 0.421, and 0.795 for detection of the epithelium, Bowman, and endothelium boundaries, respectively. Unsigned mean inter-observer errors between the two corneal specialists are comparable, at 0.330, 0.398, and 0.534, respectively. PMID:25298926
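
    The first GMM step, separating bright corneal tissue from the dark background by a mixture model on intensities, can be sketched with scikit-learn as below; the gradient-based second pass and boundary tracing are omitted, and the two-component assumption is illustrative.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def corneal_tissue_mask(oct_image, n_components=2):
        """Label each pixel as tissue or background with a GMM on intensity."""
        x = oct_image.reshape(-1, 1).astype(float)
        gmm = GaussianMixture(n_components=n_components, random_state=0).fit(x)
        labels = gmm.predict(x).reshape(oct_image.shape)
        tissue_label = np.argmax(gmm.means_.ravel())   # brighter component = tissue
        return labels == tissue_label
    ```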

  14. Electron Conformal Radiotherapy for Post-Mastectomy Irradiation: A Bolus-Free, Multi-Energy, Multi-Segmented Field Algorithm

    DTIC Science & Technology

    2005-08-01

    that compared to customized electron bolus radiotherapy for post-mastectomy irradiation, ECT with multi-energy, multi-segmented treatment fields has...PTV dose homogeneity was quite good. Use of the treatment plan modification techniques improved dose sparing for the non-target portion of the...phantom. For the patient treatment plans, the algorithm provided acceptable results for PTV conformality and dose homogeneity, in comparison to the bolus

  15. Memory based active contour algorithm using pixel-level classified images for colon crypt segmentation.

    PubMed

    Cohen, Assaf; Rivlin, Ehud; Shimshoni, Ilan; Sabo, Edmond

    2015-07-01

    In this paper, we introduce a novel method for detection and segmentation of crypts in colon biopsies. Most of the approaches proposed in the literature try to segment the crypts using only the biopsy image, without understanding the meaning of each pixel. The proposed method differs in that we segment the crypts using an automatically generated pixel-level classification image of the original biopsy image and handle the artifacts due to the sectioning process and the variance in color, shape and size of the crypts. The biopsy image pixels are classified into nuclei, immune system, lumen, cytoplasm, stroma and goblet cells. The crypts are then segmented using a novel active contour approach, where the external force is determined by the semantics of each pixel and the model of the crypt. The active contour is applied for every lumen candidate detected using the pixel-level classification. Finally, a false positive crypt elimination process is performed to remove segmentation errors. This is done by measuring their adherence to the crypt model using the pixel-level classification results. The method was tested on 54 biopsy images containing 4944 healthy and 2236 cancerous crypts, resulting in 87% detection of the crypts with 9% false positive segments (segments that do not represent a crypt). The segmentation accuracy of the true positive segments is 96%.

  16. Evaluation of an algorithm for semiautomated segmentation of thin tissue layers in high-frequency ultrasound images.

    PubMed

    Qiu, Qiang; Dunmore-Buyze, Joy; Boughner, Derek R; Lacefield, James C

    2006-02-01

    An algorithm consisting of speckle reduction by median filtering, contrast enhancement using top- and bottom-hat morphological filters, and segmentation with a discrete dynamic contour (DDC) model was implemented for nondestructive measurements of soft tissue layer thickness. Algorithm performance was evaluated by segmenting simulated images of three-layer phantoms and high-frequency (40 MHz) ultrasound images of porcine aortic valve cusps in vitro. The simulations demonstrated the necessity of the median and morphological filtering steps and enabled testing of user-specified parameters of the morphological filters and DDC model. In the experiments, six cusps were imaged in coronary perfusion solution (CPS) then in distilled water to test the algorithm's sensitivity to changes in the dimensions of thin tissue layers. Significant increases in the thickness of the fibrosa, spongiosa, and ventricularis layers, by 53.5% (p < 0.001), 88.5% (p < 0.001), and 35.1% (p = 0.033), respectively, were observed when the specimens were submerged in water. The intraobserver coefficient of variation of repeated thickness estimates ranged from 0.044 for the fibrosa in water to 0.164 for the spongiosa in CPS. Segmentation accuracy and variability depended on the thickness and contrast of the layers, but the modest variability provides confidence in the thickness measurements.
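
    The speckle-reduction and contrast-enhancement stages can be sketched as a median filter followed by adding the top-hat and subtracting the bottom-hat response, a common form of morphological contrast enhancement; the filter sizes below are assumptions, and the discrete dynamic contour stage is not shown.

    ```python
    import numpy as np
    from scipy.ndimage import median_filter
    from skimage.morphology import disk, white_tophat, black_tophat

    def preprocess_ultrasound(image, median_size=5, hat_radius=10):
        """Speckle reduction followed by morphological contrast enhancement."""
        img = median_filter(image.astype(float), size=median_size)  # speckle reduction
        se = disk(hat_radius)
        # brighten small bright detail, darken small dark detail
        return img + white_tophat(img, se) - black_tophat(img, se)
    ```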

  17. Validation of Point Clouds Segmentation Algorithms Through Their Application to Several Case Studies for Indoor Building Modelling

    NASA Astrophysics Data System (ADS)

    Macher, H.; Landes, T.; Grussenmeyer, P.

    2016-06-01

    Laser scanners are widely used for the modelling of existing buildings and particularly in the creation process of as-built BIM (Building Information Modelling). However, the generation of as-built BIM from point clouds involves mainly manual steps and is consequently time consuming and error-prone. Along the path to automation, a three-step segmentation approach has been developed. This approach is composed of two phases: a segmentation into sub-spaces, namely floors and rooms, and a plane segmentation combined with the identification of building elements. In order to assess and validate the developed approach, different case studies are considered. Indeed, it is essential to apply algorithms to several datasets and not to develop algorithms on a single dataset, whose particularities could influence the development. Indoor point clouds of different types of buildings will be used as input for the developed algorithms, ranging from an individual house of almost one hundred square meters to larger buildings of several thousand square meters. The datasets provide various space configurations and present numerous different occluding objects such as desks, computer equipment, home furnishings and even wine barrels. For each dataset, the results will be illustrated. The analysis of the results will provide an insight into the transferability of the developed approach for the indoor modelling of several types of buildings.

  18. A Fast Superpixel Segmentation Algorithm for PolSAR Images Based on Edge Refinement and Revised Wishart Distance

    PubMed Central

    Zhang, Yue; Zou, Huanxin; Luo, Tiancheng; Qin, Xianxiang; Zhou, Shilin; Ji, Kefeng

    2016-01-01

    The superpixel segmentation algorithm, as a preprocessing technique, should show good performance in fast segmentation speed, accurate boundary adherence and homogeneous regularity. A fast superpixel segmentation algorithm by iterative edge refinement (IER) works well on optical images. However, it may generate poor superpixels for Polarimetric synthetic aperture radar (PolSAR) images due to the influence of strong speckle noise and many small-sized or slim regions. To solve these problems, we utilized a fast revised Wishart distance instead of Euclidean distance in the local relabeling of unstable pixels, and initialized unstable pixels as all the pixels substituted for the initial grid edge pixels in the initialization step. Then, postprocessing with the dissimilarity measure is employed to remove the generated small isolated regions as well as to preserve strong point targets. Finally, the superiority of the proposed algorithm is validated with extensive experiments on four simulated and two real-world PolSAR images from Experimental Synthetic Aperture Radar (ESAR) and Airborne Synthetic Aperture Radar (AirSAR) data sets, which demonstrate that the proposed method shows better performance with respect to several commonly used evaluation measures, even with about nine times higher computational efficiency, as well as fine boundary adherence and strong point targets preservation, compared with three state-of-the-art methods. PMID:27754385

  19. Development, Implementation and Evaluation of Segmentation Algorithms for the Automatic Classification of Cervical Cells

    NASA Astrophysics Data System (ADS)

    Macaulay, Calum Eric

    Cancer of the uterine cervix is one of the most common cancers in women. An effective screening program for pre-cancerous and cancerous lesions can dramatically reduce the mortality rate for this disease. In British Columbia, where such a screening program has been in place for some time, 2500 to 3000 slides of cervical smears need to be examined daily. More than 35 years ago, it was recognized that an automated pre-screening system could greatly assist people in this task. Such a system would need to find and recognize stained cells, segment the images of these cells into nucleus and cytoplasm, numerically describe the characteristics of the cells, and use these features to discriminate between normal and abnormal cells. The thrust of this work was (1) to research and develop new segmentation methods and compare their performance to those in the literature, (2) to determine the dependence of the numerical cell descriptors on the segmentation method used, (3) to determine the dependence of cell classification accuracy on the segmentation used, and (4) to test the hypothesis that using numerical cell descriptors one can correctly classify the cells. The segmentation accuracies of 32 different segmentation procedures were examined. It was found that the best nuclear segmentation procedure was able to correctly segment 98% of the nuclei in a 1000-image and a 3680-image database. Similarly, the best cytoplasmic segmentation procedure was found to correctly segment 98.5% of the cytoplasm in the same 1000-image database. Sixty-seven different numerical cell descriptors (features) were calculated for every segmented cell. On a database of 800 classified cervical cells, these features, when used in a linear discriminant function analysis, could correctly classify 98.7% of the normal cells and 97.0% of the abnormal cells. While some features were found to vary a great deal between segmentation procedures, the classification accuracy of groups of features was found to be independent of the segmentation procedure used.

  20. Obtaining Thickness Maps of Corneal Layers Using the Optimal Algorithm for Intracorneal Layer Segmentation

    PubMed Central

    Rabbani, Hossein; Kazemian Jahromi, Mahdi; Jorjandi, Sahar; Mehri Dehnavi, Alireza; Hajizadeh, Fedra; Peyman, Alireza

    2016-01-01

    Optical Coherence Tomography (OCT) is one of the most informative methodologies in ophthalmology and provides cross sectional images from anterior and posterior segments of the eye. Corneal diseases can be diagnosed by these images and corneal thickness maps can also assist in the treatment and diagnosis. The need for automatic segmentation of cross sectional images is inevitable since manual segmentation is time consuming and imprecise. In this paper, segmentation methods such as Gaussian Mixture Model (GMM), Graph Cut, and Level Set are used for automatic segmentation of three clinically important corneal layer boundaries on OCT images. Using the segmentation of the boundaries in three-dimensional corneal data, we obtained thickness maps of the layers which are created by these borders. Mean and standard deviation of the thickness values for normal subjects in epithelial, stromal, and whole cornea are calculated in central, superior, inferior, nasal, and temporal zones (centered on the center of pupil). To evaluate our approach, the automatic boundary results are compared with the boundaries segmented manually by two corneal specialists. The quantitative results show that GMM method segments the desired boundaries with the best accuracy. PMID:27247559

  1. Obtaining Thickness Maps of Corneal Layers Using the Optimal Algorithm for Intracorneal Layer Segmentation.

    PubMed

    Rabbani, Hossein; Kafieh, Rahele; Kazemian Jahromi, Mahdi; Jorjandi, Sahar; Mehri Dehnavi, Alireza; Hajizadeh, Fedra; Peyman, Alireza

    2016-01-01

    Optical Coherence Tomography (OCT) is one of the most informative methodologies in ophthalmology and provides cross sectional images from anterior and posterior segments of the eye. Corneal diseases can be diagnosed by these images and corneal thickness maps can also assist in the treatment and diagnosis. The need for automatic segmentation of cross sectional images is inevitable since manual segmentation is time consuming and imprecise. In this paper, segmentation methods such as Gaussian Mixture Model (GMM), Graph Cut, and Level Set are used for automatic segmentation of three clinically important corneal layer boundaries on OCT images. Using the segmentation of the boundaries in three-dimensional corneal data, we obtained thickness maps of the layers which are created by these borders. Mean and standard deviation of the thickness values for normal subjects in epithelial, stromal, and whole cornea are calculated in central, superior, inferior, nasal, and temporal zones (centered on the center of pupil). To evaluate our approach, the automatic boundary results are compared with the boundaries segmented manually by two corneal specialists. The quantitative results show that GMM method segments the desired boundaries with the best accuracy.

  2. An Efficient Correction Algorithm for Eliminating Image Misalignment Effects on Co-Phasing Measurement Accuracy for Segmented Active Optics Systems.

    PubMed

    Yue, Dan; Xu, Shuyan; Nie, Haitao; Wang, Zongyang

    2016-01-01

    The misalignment between recorded in-focus and out-of-focus images using the Phase Diversity (PD) algorithm leads to a dramatic decline in wavefront detection accuracy and image recovery quality for segmented active optics systems. This paper demonstrates the theoretical relationship between the image misalignment and tip-tilt terms in Zernike polynomials of the wavefront phase for the first time, and an efficient two-step alignment correction algorithm is proposed to eliminate these misalignment effects. This algorithm processes a spatial 2-D cross-correlation of the misaligned images, revising the offset to 1 or 2 pixels and narrowing the search range for alignment. Then, it eliminates the need for subpixel fine alignment to achieve adaptive correction by adding additional tip-tilt terms to the Optical Transfer Function (OTF) of the out-of-focus channel. The experimental results demonstrate the feasibility and validity of the proposed correction algorithm to improve the measurement accuracy during the co-phasing of segmented mirrors. With this alignment correction, the reconstructed wavefront is more accurate, and the recovered image is of higher quality.
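
    The coarse alignment step, locating the integer-pixel offset between the in-focus and out-of-focus images from the peak of their 2-D cross-correlation, can be sketched with FFTs as below; this illustrates only the first step of the correction, and the sign convention is one reasonable choice.

    ```python
    import numpy as np

    def integer_pixel_offset(img_a, img_b):
        """Integer (row, col) shift of img_b relative to img_a via FFT cross-correlation."""
        a = img_a - img_a.mean()
        b = img_b - img_b.mean()
        corr = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # map peaks beyond the half-size of each axis to negative shifts
        shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
        return tuple(shifts)
    ```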

  3. An Efficient Correction Algorithm for Eliminating Image Misalignment Effects on Co-Phasing Measurement Accuracy for Segmented Active Optics Systems

    PubMed Central

    Yue, Dan; Xu, Shuyan; Nie, Haitao; Wang, Zongyang

    2016-01-01

    The misalignment between recorded in-focus and out-of-focus images using the Phase Diversity (PD) algorithm leads to a dramatic decline in wavefront detection accuracy and image recovery quality for segmented active optics systems. This paper demonstrates the theoretical relationship between the image misalignment and tip-tilt terms in Zernike polynomials of the wavefront phase for the first time, and an efficient two-step alignment correction algorithm is proposed to eliminate these misalignment effects. This algorithm processes a spatial 2-D cross-correlation of the misaligned images, revising the offset to 1 or 2 pixels and narrowing the search range for alignment. Then, it eliminates the need for subpixel fine alignment to achieve adaptive correction by adding additional tip-tilt terms to the Optical Transfer Function (OTF) of the out-of-focus channel. The experimental results demonstrate the feasibility and validity of the proposed correction algorithm to improve the measurement accuracy during the co-phasing of segmented mirrors. With this alignment correction, the reconstructed wavefront is more accurate, and the recovered image is of higher quality. PMID:26934045

  4. Segmentation of cervical cell nuclei in high-resolution microscopic images: A new algorithm and a web-based software framework.

    PubMed

    Bergmeir, Christoph; García Silvente, Miguel; Benítez, José Manuel

    2012-09-01

    In order to automate cervical cancer screening tests, one of the most important and longstanding challenges is the segmentation of cell nuclei in the stained specimens. Though nuclei of isolated cells in high-quality acquisitions are often easy to segment, the problem lies in the segmentation of large numbers of nuclei with various characteristics under differing acquisition conditions in high-resolution scans of complete microscope slides. We implemented a system that enables processing of full-resolution images and propose a new algorithm for segmenting the nuclei under adequate control of the expert user. The system can work automatically or interactively guided, to allow for segmentation within the whole range of slide and image characteristics. It facilitates data storage and interaction of technical and medical experts, especially with its web-based architecture. The proposed algorithm localizes cell nuclei using a voting scheme and prior knowledge, before determining the exact shape of the nuclei by means of an elastic segmentation algorithm. After noise removal with mean-shift and median filtering, edges are extracted with a Canny edge detection algorithm. Motivated by the observation that cell nuclei are surrounded by cytoplasm and their shape is roughly elliptical, edges adjacent to the background are removed. A randomized Hough transform for ellipses finds candidate nuclei, which are then processed by a level set algorithm. The algorithm is tested and compared to other algorithms on a database containing 207 images acquired from two different microscope slides, with promising results.
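
    The edge-extraction and ellipse-candidate stages can be sketched with scikit-image, using its (non-randomized) Hough ellipse transform as a stand-in for the randomized variant described above; the accuracy, threshold and size limits are placeholders.

    ```python
    from skimage.feature import canny
    from skimage.transform import hough_ellipse

    def nucleus_candidates(gray_image):
        """Canny edges followed by a Hough transform for ellipses (candidate nuclei)."""
        edges = canny(gray_image, sigma=2.0)
        # stand-in for the randomized Hough ellipse transform
        result = hough_ellipse(edges, accuracy=20, threshold=50,
                               min_size=15, max_size=60)
        result.sort(order='accumulator')
        # ten strongest candidates; each row holds (accumulator, yc, xc, a, b, orientation)
        return result[-10:]
    ```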

  5. Integer anatomy

    SciTech Connect

    Doolittle, R.

    1994-11-15

    The title, integer anatomy, is intended to convey the idea of a systematic method for displaying the prime decomposition of the integers. Just as the biological study of anatomy does not teach us everything about the behavior of species, neither would we expect to learn everything about number theory from a study of its anatomy. But some number-theoretic theorems are illustrated by inspection of integer anatomy, which tends to validate the underlying structure and the form as developed and displayed in this treatise. The first statement to be made in this development is: the way the structure of the natural numbers is displayed depends upon the allowed operations.

  6. A fully-automatic locally adaptive thresholding algorithm for blood vessel segmentation in 3D digital subtraction angiography.

    PubMed

    Boegel, Marco; Hoelter, Philip; Redel, Thomas; Maier, Andreas; Hornegger, Joachim; Doerfler, Arnd

    2015-01-01

    Subarachnoid hemorrhage due to a ruptured cerebral aneurysm is still a devastating disease. Planning of endovascular aneurysm therapy is increasingly based on hemodynamic simulations, necessitating reliable vessel segmentation and accurate assessment of vessel diameters. In this work, we propose a fully-automatic, locally adaptive, gradient-based thresholding algorithm. Our approach consists of two steps. First, we estimate the parameters of a global thresholding algorithm using an iterative process. Then, a locally adaptive version of the approach is applied using the estimated parameters. We evaluated both methods on 8 clinical 3D DSA cases. Additionally, we propose a way to select a reference segmentation based on 2D DSA measurements. For large vessels such as the internal carotid artery, our results show very high sensitivity (97.4%), precision (98.7%) and Dice coefficient (98.0%) against our reference segmentation. Similar results (sensitivity: 95.7%, precision: 88.9%, Dice coefficient: 90.7%) are achieved for smaller vessels of approximately 1 mm diameter.
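
    A minimal sketch of the two-step global-then-local idea: a global threshold is estimated first and then blended with block-wise local thresholds. Otsu's method is used here purely for illustration in place of the paper's iterative gradient-based parameter estimation, and the block size and blending weight are assumptions.

    ```python
    import numpy as np
    from skimage.filters import threshold_otsu

    def locally_adaptive_threshold(volume, block=32, blend=0.5):
        """Blend a global threshold with block-wise local thresholds (3-D volume)."""
        t_global = threshold_otsu(volume)
        mask = np.zeros(volume.shape, dtype=bool)
        for z in range(0, volume.shape[0], block):
            for y in range(0, volume.shape[1], block):
                for x in range(0, volume.shape[2], block):
                    sub = volume[z:z+block, y:y+block, x:x+block]
                    try:
                        t_local = threshold_otsu(sub)
                    except ValueError:          # flat block: fall back to global
                        t_local = t_global
                    t = blend * t_global + (1 - blend) * t_local
                    mask[z:z+block, y:y+block, x:x+block] = sub > t
        return mask
    ```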

  7. Syntactic Algorithms for Image Segmentation and a Special Computer Architecture for Image Processing

    DTIC Science & Technology

    1977-12-01

    Experimental results of image segmentation from FLIR (Forward Looking Infrared) images ... of a picture. Concerning the computer processing time involved in image segmentation, the grey level histogram thresholding approach is quite fast ... computer storage and the CPU time for each matching operation. The syntax-controlled method has the advantage of fast computer processing time for

  8. Sparse appearance model-based algorithm for automatic segmentation and identification of articulated hand bones

    NASA Astrophysics Data System (ADS)

    Reda, Fitsum A.; Peng, Zhigang; Liao, Shu; Shinagawa, Yoshihisa; Zhan, Yiqiang; Hermosillo, Gerardo; Zhou, Xiang Sean

    2014-03-01

    Automatic and precise segmentation of hand bones is important for many medical imaging applications. Although several previous studies address bone segmentation, automatically segmenting articulated hand bones remains a challenging task. The highly articulated nature of hand bones limits the effectiveness of atlas-based segmentation methods. The use of low-level information derived from the image-of-interest alone is insufficient for detecting bones and distinguishing boundaries of different bones that are in close proximity to each other. In this study, we propose a method that combines an articulated statistical shape model and a local exemplar-based appearance model for automatically segmenting hand bones in CT. Our approach is to perform a hierarchical articulated shape deformation that is driven by a set of local exemplar-based appearance models. Specifically, for each point in the shape model, the local appearance model is described by a set of profiles of low-level image features along the normal of the shape. During segmentation, each point in the shape model is deformed to a new point whose image features are closest to the appearance model. The shape model is also constrained by an articulation model described by a set of pre-determined landmarks on the finger joints. In this way, the deformation is robust to sporadic false bony edges and is able to fit fingers with large articulations. We validated our method on 23 CT scans and achieved a segmentation success rate of ~89.70%. This result indicates that our method is viable for automatic segmentation of articulated hand bones in conventional CT.

  9. MUSE: MUlti-atlas region Segmentation utilizing Ensembles of registration algorithms and parameters, and locally optimal atlas selection

    PubMed Central

    Ou, Yangming; Resnick, Susan M.; Gur, Ruben C.; Gur, Raquel E.; Satterthwaite, Theodore D.; Furth, Susan; Davatzikos, Christos

    2016-01-01

    Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web based platform for remote processing of medical images. PMID:26679328
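
    The simplest member of this family of fusion rules, a locally weighted vote in which each warped atlas is weighted by its local intensity similarity to the target, is sketched below; this illustrates consensus label fusion in general, not the MUSE ranking and boundary-modulation terms.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def locally_weighted_vote(target, warped_images, warped_labels, n_labels, patch=5):
        """Fuse candidate label maps using local intensity similarity as the weight."""
        votes = np.zeros((n_labels,) + target.shape)
        for img, lab in zip(warped_images, warped_labels):
            # local mean squared difference between warped atlas and target intensities
            mse = uniform_filter((img.astype(float) - target) ** 2, size=patch)
            weight = 1.0 / (mse + 1e-6)
            for l in range(n_labels):
                votes[l] += weight * (lab == l)
        return votes.argmax(axis=0)   # consensus label map
    ```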

  10. MUSE: MUlti-atlas region Segmentation utilizing Ensembles of registration algorithms and parameters, and locally optimal atlas selection.

    PubMed

    Doshi, Jimit; Erus, Guray; Ou, Yangming; Resnick, Susan M; Gur, Ruben C; Gur, Raquel E; Satterthwaite, Theodore D; Furth, Susan; Davatzikos, Christos

    2016-02-15

    Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web based platform for remote processing of medical images.

  11. Intracranial Arteries - Anatomy and Collaterals.

    PubMed

    Liebeskind, David S; Caplan, Louis R

    2016-01-01

    Anatomy, physiology, and pathophysiology are inextricably linked in patients with intracranial atherosclerosis. Knowledge of abnormal or pathological conditions such as intracranial atherosclerosis stems from detailed recognition of the normal pattern of vascular anatomy. The vascular anatomy of the intracranial arteries, both at the level of the vessel wall and as a larger structure or conduit, is a reflection of physiology over time, from in utero stages through adult life. The unique characteristics of arteries at the base of the brain may help our understanding of atherosclerotic lesions that tend to afflict specific arterial segments. Although much of the knowledge regarding intracranial arteries originates from pathology and angiography series over several centuries, evolving noninvasive techniques have rapidly expanded our perspective. As each imaging modality provides a depiction that combines anatomy and flow physiology, it is important to interpret each image with a solid understanding of typical arterial anatomy and corresponding collateral routes. Compensatory collateral perfusion and downstream flow status have recently emerged as pivotal variables in the clinical management of patients with atherosclerosis. Ongoing studies that illustrate the anatomy and pathophysiology of these proximal arterial segments across modalities will help refine our knowledge of the interplay between vascular anatomy and cerebral blood flow. Future studies may help elucidate pivotal arterial factors far beyond the degree of stenosis, examining downstream influences on cerebral perfusion, artery-to-artery thromboembolic potential, amenability to endovascular therapies and stent conformation, and the propensity for restenosis due to biophysical factors.

  12. Multi-color space threshold segmentation and self-learning k-NN algorithm for surge test EUT status identification

    NASA Astrophysics Data System (ADS)

    Huang, Jian; Liu, Gui-xiong

    2016-09-01

    The identification of targets varies in different surge tests. A multi-color-space threshold segmentation and self-learning k-nearest neighbor (k-NN) algorithm for equipment-under-test status identification was proposed, because the previous feature-matching approach to status identification had to be trained on new patterns every time before testing. First, the color space used for segmentation (L*a*b*, hue saturation lightness (HSL), or hue saturation value (HSV)) was selected according to the ratios of high-luminance and white-luminance points in the image. Second, an unknown-class sample Sr was classified by the k-NN algorithm with training set Tz, using a feature vector formed from the number of pixels, eccentricity ratio, compactness ratio, and Euler number. Last, when the classification confidence coefficient equaled k, Sr was added as a sample of the pre-training set Tz'; the training set Tz was enlarged to Tz+1 by Tz' once Tz' was saturated. On nine series of illuminant, indicator-light, screen, and disturbance samples (a total of 21600 frames), the algorithm achieved 98.65% identification accuracy and, using five groups of samples, enlarged the training set from T0 to T5 by itself.
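
    The self-learning rule described above can be sketched as follows: a sample is classified by k-NN, added to a pre-training pool whenever all k neighbours agree (confidence coefficient equal to k), and the pool is merged into the training set once it saturates. All names, the distance metric and the pool size are illustrative assumptions.

    ```python
    import numpy as np
    from collections import Counter

    def knn_self_learning(train_X, train_y, sample, k=5, pool_X=None, pool_y=None,
                          pool_limit=50):
        """Classify one feature vector and optionally self-label it for training."""
        pool_X = [] if pool_X is None else pool_X
        pool_y = [] if pool_y is None else pool_y
        d = np.linalg.norm(train_X - sample, axis=1)        # Euclidean distances
        neighbours = train_y[np.argsort(d)[:k]]
        label, count = Counter(neighbours).most_common(1)[0]
        if count == k:                       # confidence coefficient equals k
            pool_X.append(sample)
            pool_y.append(label)
        if len(pool_y) >= pool_limit:        # pre-training set saturated: merge it
            train_X = np.vstack([train_X, np.array(pool_X)])
            train_y = np.concatenate([train_y, np.array(pool_y)])
            pool_X, pool_y = [], []
        return label, train_X, train_y, pool_X, pool_y
    ```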

  13. Comparative evaluation of a novel 3D segmentation algorithm on in-treatment radiotherapy cone beam CT images

    NASA Astrophysics Data System (ADS)

    Price, Gareth; Moore, Chris

    2007-03-01

    Image segmentation and delineation is at the heart of modern radiotherapy, where the aim is to deliver as high a radiation dose as possible to a cancerous target whilst sparing the surrounding healthy tissues. This, of course, requires that a radiation oncologist dictates where both the tumour and any nearby critical organs are located. As well as in treatment planning, delineation is of vital importance in image guided radiotherapy (IGRT): organ motion studies demand that features across image databases are accurately segmented, whilst if on-line adaptive IGRT is to become a reality, speedy and correct target identification is a necessity. Recently, much work has been put into the development of automatic and semi-automatic segmentation tools, often using prior knowledge to constrain some grey level (or derivative thereof) interrogation algorithm. It is hoped that such techniques can be applied to organ-at-risk and tumour segmentation in radiotherapy. In this work, however, we make the assumption that grey levels do not necessarily determine a tumour's extent, especially in CT, where the attenuation coefficient can often vary little between cancerous and normal tissue. In this context we present an algorithm that generates a discontinuity-free delineation surface driven by user-placed, evidence-based support points. In regions of sparse user-supplied information, prior knowledge, in the form of a statistical shape model, provides guidance. A small case study is used to illustrate the method. Multiple observers (between 3 and 7) used both the presented tool and a commercial manual contouring package to delineate the bladder on a serially imaged (10 cone beam CT volumes) prostate patient. A previously presented shape analysis technique is used to quantitatively compare the observer variability.

  14. Phasing the mirror segments of the Keck telescopes: the broadband phasing algorithm.

    PubMed

    Chanan, G; Troy, M; Dekens, F; Michaels, S; Nelson, J; Mast, T; Kirkman, D

    1998-01-01

    To achieve its full diffraction limit in the infrared, the primary mirror of the Keck telescope (now telescopes) must be properly phased: the steps or piston errors between the individual mirror segments must be reduced to less than 100 nm. We accomplish this with a wave-optics variation of the Shack-Hartmann test, in which the signal is not the centroid but rather the degree of coherence of the individual subimages. Using filters with a variety of coherence lengths, we can capture segments with initial piston errors as large as ±30 µm and reduce these to 30 nm, a dynamic range of 3 orders of magnitude. Segment aberrations contribute substantially to the residual errors of approximately 75 nm.

  15. Spatial Patterns of Trees from Airborne LiDAR Using a Simple Tree Segmentation Algorithm

    NASA Astrophysics Data System (ADS)

    Jeronimo, S.; Kane, V. R.; McGaughey, R. J.; Franklin, J. F.

    2015-12-01

    Objectives for management of forest ecosystems on public land incorporate a focus on maintenance and restoration of ecological functions through silvicultural manipulation of forest structure. The spatial pattern of residual trees - the horizontal element of structure - is a key component of ecological restoration prescriptions. We tested the ability of a simple LiDAR individual tree segmentation method - the watershed transform - to generate spatial pattern metrics similar to those obtained by the traditional method - ground-based stem mapping - on forested plots representing the structural diversity of a large wilderness area (Yosemite NP) and a large managed area (Sierra NF) in the Sierra Nevada, Calif. Most understory and intermediate-canopy trees were not detected by the LiDAR segmentation; however, LiDAR- and field-based assessments of spatial pattern in terms of tree clump size distributions largely agreed. This suggests that (1) even when individual tree segmentation is not effective for tree density estimates, it can provide a good measurement of tree spatial pattern, and (2) a simple segmentation method is adequate to measure spatial pattern of large areas with a diversity of structural characteristics. These results lay the groundwork for a LiDAR tool to assess clumping patterns across forest landscapes in support of restoration silviculture. This tool could describe spatial patterns of functionally intact reference ecosystems, measure departure from reference targets in treatment areas, and, with successive acquisitions, monitor treatment efficacy.
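
    A minimal sketch of watershed-transform tree segmentation on a canopy height model (CHM): smooth the CHM, take local maxima as tree-top markers, and run a marker-based watershed on the inverted surface. The height cutoff, smoothing and minimum peak spacing are assumptions, and a raster CHM input is assumed rather than the raw point cloud.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def segment_trees(chm, min_height=2.0, smooth_sigma=1.0, min_distance=5):
        """Marker-based watershed segmentation of a canopy height model (CHM)."""
        smoothed = gaussian_filter(chm.astype(float), smooth_sigma)
        canopy = smoothed > min_height                      # drop ground / low vegetation
        peaks = peak_local_max(smoothed, min_distance=min_distance,
                               labels=canopy.astype(int))   # tree-top candidates
        markers = np.zeros(chm.shape, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        # watershed on the inverted CHM so each crown fills the basin around its tree top
        return watershed(-smoothed, markers, mask=canopy)
    ```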

  16. Evaluation of Image Segmentation and Object Recognition Algorithms for Image Parsing

    DTIC Science & Technology

    2013-09-01

    Results for precision, recall, and F-measure indicate that the best approach to use for image segmentation is Sobel edge detection, and to use Canny ... or Sobel for object recognition. The process for this report would not work for a warfighter or analyst. It has poor performance. Additionally ...

  17. A Benchmark Data Set to Evaluate the Illumination Robustness of Image Processing Algorithms for Object Segmentation and Classification

    PubMed Central

    Khan, Arif ul Maula; Mikut, Ralf; Reischl, Markus

    2015-01-01

    Developers of image processing routines rely on benchmark data sets to give qualitative comparisons of new image analysis algorithms and pipelines. Such data sets need to include artifacts in order to occlude and distort the required information to be extracted from an image. Robustness, the quality of an algorithm in relation to the amount of distortion, is often important. However, with available benchmark data sets an evaluation of illumination robustness is difficult or even impossible, due to missing ground-truth data about object margins and classes and missing information about the distortion. We present a new framework for robustness evaluation. The key aspect is an image benchmark containing 9 object classes and the required ground truth for segmentation and classification. Varying levels of shading and background noise are integrated to distort the data set. To quantify the illumination robustness, we provide measures for image quality, segmentation and classification success, and robustness. We set a high value on giving users easy access to the new benchmark; therefore, all routines are provided within a software package but can easily be replaced to emphasize other aspects. PMID:26191792
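
    The kind of controlled distortion described above, shading plus background noise at a chosen level, can be sketched as follows; the linear left-to-right shading model and noise parameters are assumptions, not the benchmark's actual distortion functions.

    ```python
    import numpy as np

    def distort(image, shading_strength=0.5, noise_sigma=10.0, rng=None):
        """Apply a linear shading gradient and additive Gaussian noise to an image."""
        rng = np.random.default_rng() if rng is None else rng
        h, w = image.shape
        shading = 1.0 - shading_strength * np.linspace(0, 1, w)[None, :]  # left-to-right
        noisy = image.astype(float) * shading + rng.normal(0, noise_sigma, (h, w))
        return np.clip(noisy, 0, 255)
    ```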

  18. Localization and segmentation of optic disc in retinal images using circular Hough transform and grow-cut algorithm

    PubMed Central

    Abdullah, Muhammad; Barman, Sarah A.

    2016-01-01

    Automated retinal image analysis has been emerging as an important diagnostic tool for early detection of eye-related diseases such as glaucoma and diabetic retinopathy. In this paper, we have presented a robust methodology for optic disc detection and boundary segmentation, which can be seen as the preliminary step in the development of a computer-assisted diagnostic system for glaucoma in retinal images. The proposed method is based on morphological operations, the circular Hough transform and the grow-cut algorithm. The morphological operators are used to enhance the optic disc and remove the retinal vasculature and other pathologies. The optic disc center is approximated using the circular Hough transform, and the grow-cut algorithm is employed to precisely segment the optic disc boundary. The method is quantitatively evaluated on five publicly available retinal image databases DRIVE, DIARETDB1, CHASE_DB1, DRIONS-DB, Messidor and one local Shifa Hospital Database. The method achieves an optic disc detection success rate of 100% for these databases with the exception of 99.09% and 99.25% for the DRIONS-DB, Messidor, and ONHSD databases, respectively. The optic disc boundary detection achieved an average spatial overlap of 78.6%, 85.12%, 83.23%, 85.1%, 87.93%, 80.1%, and 86.1%, respectively, for these databases. This unique method has shown significant improvement over existing methods in terms of detection and boundary extraction of the optic disc. PMID:27190713
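
    The optic-disc localization step can be sketched with scikit-image's circular Hough transform on an edge map of the green channel, as below; the grow-cut boundary refinement and vessel removal are not shown, and the radius range is an assumption that depends on image resolution.

    ```python
    import numpy as np
    from skimage.feature import canny
    from skimage.transform import hough_circle, hough_circle_peaks

    def locate_optic_disc(green_channel, radii=np.arange(40, 90, 5)):
        """Approximate the optic disc centre with the circular Hough transform."""
        edges = canny(green_channel, sigma=3.0)
        hspaces = hough_circle(edges, radii)
        _, cx, cy, r = hough_circle_peaks(hspaces, radii, total_num_peaks=1)
        return int(cx[0]), int(cy[0]), int(r[0])   # centre (x, y) and radius
    ```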

  19. A Benchmark Data Set to Evaluate the Illumination Robustness of Image Processing Algorithms for Object Segmentation and Classification.

    PubMed

    Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus

    2015-01-01

    Developers of image processing routines rely on benchmark data sets to give qualitative comparisons of new image analysis algorithms and pipelines. Such data sets need to include artifacts in order to occlude and distort the required information to be extracted from an image. Robustness, the quality of an algorithm in relation to the amount of distortion, is often important. However, with available benchmark data sets an evaluation of illumination robustness is difficult or even impossible, due to missing ground-truth data about object margins and classes and missing information about the distortion. We present a new framework for robustness evaluation. The key aspect is an image benchmark containing 9 object classes and the required ground truth for segmentation and classification. Varying levels of shading and background noise are integrated to distort the data set. To quantify the illumination robustness, we provide measures for image quality, segmentation and classification success, and robustness. We set a high value on giving users easy access to the new benchmark; therefore, all routines are provided within a software package but can easily be replaced to emphasize other aspects.

  20. Localization and segmentation of optic disc in retinal images using circular Hough transform and grow-cut algorithm.

    PubMed

    Abdullah, Muhammad; Fraz, Muhammad Moazam; Barman, Sarah A

    2016-01-01

    Automated retinal image analysis has been emerging as an important diagnostic tool for early detection of eye-related diseases such as glaucoma and diabetic retinopathy. In this paper, we have presented a robust methodology for optic disc detection and boundary segmentation, which can be seen as the preliminary step in the development of a computer-assisted diagnostic system for glaucoma in retinal images. The proposed method is based on morphological operations, the circular Hough transform and the grow-cut algorithm. The morphological operators are used to enhance the optic disc and remove the retinal vasculature and other pathologies. The optic disc center is approximated using the circular Hough transform, and the grow-cut algorithm is employed to precisely segment the optic disc boundary. The method is quantitatively evaluated on five publicly available retinal image databases DRIVE, DIARETDB1, CHASE_DB1, DRIONS-DB, Messidor and one local Shifa Hospital Database. The method achieves an optic disc detection success rate of 100% for these databases with the exception of 99.09% and 99.25% for the DRIONS-DB, Messidor, and ONHSD databases, respectively. The optic disc boundary detection achieved an average spatial overlap of 78.6%, 85.12%, 83.23%, 85.1%, 87.93%, 80.1%, and 86.1%, respectively, for these databases. This unique method has shown significant improvement over existing methods in terms of detection and boundary extraction of the optic disc.

  1. Robust approximation of image illumination direction in a segmentation-based crater detection algorithm for spacecraft navigation

    NASA Astrophysics Data System (ADS)

    Maass, Bolko

    2016-12-01

    This paper describes an efficient and easily implemented algorithmic approach to extracting an approximation to an image's dominant projected illumination direction, based on intermediary results from a segmentation-based crater detection algorithm (CDA), at a computational cost that is negligible in comparison to that of the prior stages of the CDA. Most contemporary CDAs built for spacecraft navigation use this illumination direction as a means of improving performance or even require it to function at all. Deducing the illumination vector from the image alone reduces the reliance on external information such as the accurate knowledge of the spacecraft inertial state, accurate time base and solar system ephemerides. Therefore, a method such as the one described in this paper is a prerequisite for true "Lost in Space" operation of a purely segmentation-based crater detecting and matching method for spacecraft navigation. The proposed method is verified using ray-traced lunar elevation model data, asteroid image data, and in a laboratory setting with a camera in the loop.
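
    One simple way to approximate a projected illumination direction from segmentation output, offered only as an illustration and not as the paper's algorithm, is to take the vector from the centroid of the lit (highlighted) crater regions toward the centroid of the shadowed regions; both masks are assumed to come from the CDA's intermediate segmentation.

    ```python
    import numpy as np

    def illumination_direction(shadow_mask, highlight_mask):
        """Unit vector from the lit-region centroid toward the shadowed-region centroid,
        i.e. the approximate projected direction in which light travels."""
        dark = np.argwhere(shadow_mask).mean(axis=0)     # (row, col) centroid of shadow
        lit = np.argwhere(highlight_mask).mean(axis=0)   # (row, col) centroid of highlight
        v = dark - lit
        return v / (np.linalg.norm(v) + 1e-12)
    ```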

  2. Applying the algorithm "assessing quality using image registration circuits" (AQUIRC) to multi-atlas segmentation

    NASA Astrophysics Data System (ADS)

    Datteri, Ryan; Asman, Andrew J.; Landman, Bennett A.; Dawant, Benoit M.

    2014-03-01

    Multi-atlas registration-based segmentation is a popular technique in the medical imaging community, used to transform anatomical and functional information from a set of atlases onto a new patient that lacks this information. The accuracy of the information projected onto the target image depends on the quality of the registrations between the atlas images and the target image. Recently, we developed a technique called AQUIRC that aims at estimating the error of a non-rigid registration at the local level and was shown to correlate with error in a simulated case. Herein, we extend this work by applying AQUIRC to atlas selection at the local level across multiple structures in cases in which non-rigid registration is difficult. AQUIRC is applied to six structures: the brainstem, optic chiasm, left and right optic nerves, and the left and right eyes. We compare the results of AQUIRC to those of popular techniques, including Majority Vote, STAPLE, Non-Local STAPLE, and Locally-Weighted Vote. We show that AQUIRC can be used as a method to combine multiple segmentations and increase the accuracy of the information projected onto a target image, and that it is comparable to cutting-edge methods in the multi-atlas segmentation field.
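
    For readers unfamiliar with the baselines mentioned above, a minimal majority-vote label fusion over registered atlas segmentations can be sketched as follows; AQUIRC's error estimation and weighting are not reproduced.

        # Baseline label fusion: per-voxel majority vote over propagated atlas labels.
        import numpy as np

        def majority_vote(label_maps):
            """label_maps: list of integer arrays (one per atlas) in target space."""
            stacked = np.stack(label_maps, axis=0)          # (n_atlases, *volume_shape)
            n_labels = int(stacked.max()) + 1
            votes = np.zeros((n_labels,) + stacked.shape[1:], dtype=np.int32)
            for lab in range(n_labels):
                votes[lab] = (stacked == lab).sum(axis=0)   # vote count per label
            return votes.argmax(axis=0)                     # modal label per voxel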

  3. Facial anatomy.

    PubMed

    Marur, Tania; Tuna, Yakup; Demirci, Selman

    2014-01-01

    Dermatologic problems of the face affect both function and aesthetics, which are based on complex anatomical features. Treating dermatologic problems while preserving the aesthetics and functions of the face requires knowledge of normal anatomy. To perform invasive procedures of the face successfully, it is essential to understand its underlying topographic anatomy. This chapter presents the anatomy of the facial musculature and neurovascular structures in a systematic way, together with some clinically important aspects. We describe the attachments of the mimetic and masticatory muscles and emphasize their functions and nerve supply. We highlight clinically relevant facial topographic anatomy by explaining the course and location of the sensory and motor nerves of the face and the facial vasculature with their relations. Additionally, this chapter reviews the recent nomenclature of the branching pattern of the facial artery.

  4. High-resolution CISS MR imaging with and without contrast for evaluation of the upper cranial nerves: segmental anatomy and selected pathologic conditions of the cisternal through extraforaminal segments.

    PubMed

    Blitz, Ari M; Macedo, Leonardo L; Chonka, Zachary D; Ilica, Ahmet T; Choudhri, Asim F; Gallia, Gary L; Aygun, Nafi

    2014-02-01

    The authors review the course and appearance of the major segments of the upper cranial nerves from their apparent origin at the brainstem through the proximal extraforaminal region, focusing on the imaging and anatomic features of particular relevance to high-resolution magnetic resonance imaging evaluation. Selected pathologic entities are included in the discussion of the corresponding cranial nerve segments for illustrative purposes.

  5. Segmentation of blood clot from CT pulmonary angiographic images using a modified seeded region growing algorithm method

    NASA Astrophysics Data System (ADS)

    Park, Bumwoo; Furlan, Alessandro; Patil, Amol; Bae, Kyongtae T.

    2010-03-01

    Pulmonary embolism (PE) is a medical condition defined as the obstruction of pulmonary arteries by a blood clot, usually originating in the deep veins of the lower limbs. PE is a common but elusive illness that can cause significant disability and death if not promptly diagnosed and effectively treated. CT pulmonary angiography (CTPA) is the first-line imaging study for the diagnosis of PE. While clinical prediction rules have recently been developed to assess short-term risk and stratify patients with acute PE, there is a dearth of objective biomarkers associated with the long-term prognosis of the disease. Clot (embolus) burden is a promising biomarker for the prognosis and recurrence of PE and can be quantified from CTPA images. However, to our knowledge, no study has reported a method for segmentation and measurement of clot from CTPA images. Thus, the purpose of this study was to develop a semi-automated method for segmentation and measurement of clot from CTPA images. Our method was based on a modified seeded region growing (MSRG) algorithm consisting of two steps: (1) the observer identifies a clot of interest on CTPA images and places a spherical seed over the clot; and (2) a region grows around the seed on the basis of a rolling-ball process that clusters the neighboring voxels whose CT attenuation values are within the range of the mean +/- two standard deviations of the initial seed voxels. The rolling ball propagates iteratively until the clot is completely clustered and segmented. Our experimental results revealed that the performance of the MSRG was superior to that of the conventional SRG for segmenting clots, as evidenced by reduced degrees of over- or under-segmentation from adjacent anatomical structures. To assess the clinical value of clot burden for the prognosis of PE, we are currently applying the MSRG for the segmentation and volume measurement of clots from CTPA images that are acquired in a large cohort of patients with PE in an on
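
    A plain seeded region growing step with the mean +/- two standard deviations criterion described above can be sketched as follows; this is an illustrative simplification that omits the rolling-ball propagation of the MSRG.

        # Simplified seeded region growing with the mean +/- 2*SD intensity criterion.
        import numpy as np
        from collections import deque

        def seeded_region_grow(volume, seed_mask):
            """volume: 3-D CT attenuation array; seed_mask: boolean spherical seed."""
            mu, sigma = volume[seed_mask].mean(), volume[seed_mask].std()
            lo, hi = mu - 2 * sigma, mu + 2 * sigma
            grown = seed_mask.copy()
            frontier = deque(map(tuple, np.argwhere(seed_mask)))
            offsets = [(dz, dy, dx) for dz in (-1, 0, 1) for dy in (-1, 0, 1)
                       for dx in (-1, 0, 1) if (dz, dy, dx) != (0, 0, 0)]
            while frontier:
                z, y, x = frontier.popleft()
                for dz, dy, dx in offsets:
                    nz, ny, nx = z + dz, y + dy, x + dx
                    if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                            and 0 <= nx < volume.shape[2] and not grown[nz, ny, nx]
                            and lo <= volume[nz, ny, nx] <= hi):
                        grown[nz, ny, nx] = True       # voxel joins the clot region
                        frontier.append((nz, ny, nx))
            return grown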

  6. An effective immune multi-objective algorithm for SAR imagery segmentation

    NASA Astrophysics Data System (ADS)

    Yang, Dongdong; Jiao, Licheng; Gong, Maoguo; Si, Xiaoyun; Li, Jinji; Feng, Jie

    2009-10-01

    A novel and effective immune multi-objective clustering algorithm (IMCA) is presented in this study. Two conflicting and complementary objectives, the compactness and connectedness of clusters, are employed as optimization targets. In addition, the algorithm features adaptive rank cloning, variable-length chromosome crossover, and a k-nearest-neighbor-list-based diversity-holding strategy. IMCA can automatically discover the right number of clusters with high probability. Seven complicated artificial data sets and two widely used synthetic aperture radar (SAR) images are used to test IMCA. Compared with FCM and VGA, IMCA obtained good and encouraging clustering results. We believe that IMCA is an effective algorithm for solving these nine problems and that it deserves further research.

  7. The backtracking search optimization algorithm for frequency band and time segment selection in motor imagery-based brain-computer interfaces.

    PubMed

    Wei, Zhonghai; Wei, Qingguo

    2016-09-01

    Common spatial pattern (CSP) is a powerful algorithm for extracting discriminative brain patterns in motor imagery-based brain-computer interfaces (BCIs). However, its performance depends largely on the subject-specific frequency band and time segment, so accurate selection of the most responsive frequency band and time segment remains a crucial problem. A novel evolutionary algorithm, the backtracking search optimization algorithm, is used to find the optimal frequency band and the optimal combination of frequency band and time segment. The former is searched by a frequency window of varying width whose starting and ending points are selected by the backtracking search optimization algorithm; the latter is searched by the same frequency window and an additional time window of fixed width. The three parameters, the starting and ending points of the frequency window and the starting point of the time window, are jointly optimized by the backtracking search optimization algorithm. Based on the chosen frequency band and the fixed or chosen time segment, feature extraction is carried out by CSP and subsequent classification by Fisher discriminant analysis. The classification error rate is used as the objective function of the backtracking search optimization algorithm. The two methods, named BSA-F CSP and BSA-FT CSP, were evaluated on a BCI competition data set and compared with traditional wideband (8-30 Hz) CSP. The classification results showed that the backtracking search optimization algorithm can find a much more effective frequency band for EEG preprocessing than the traditional broadband, substantially enhancing CSP performance in terms of classification accuracy. On the other hand, the backtracking search optimization algorithm for joint selection of frequency band and time segment can find their optimal combination and thus further improve classification rates.
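
    The CSP feature extraction that would run inside the objective function of the backtracking search can be sketched with a standard generalized eigendecomposition; the sketch below assumes two classes of band-pass-filtered, time-windowed trials and does not include the backtracking search itself.

        # Two-class CSP by generalized eigendecomposition plus log-variance features.
        import numpy as np
        from scipy.linalg import eigh

        def csp_filters(trials_a, trials_b, n_pairs=3):
            """trials_*: (n_trials, n_channels, n_samples), already filtered/windowed."""
            mean_cov = lambda trials: np.mean([np.cov(t) for t in trials], axis=0)
            ca, cb = mean_cov(trials_a), mean_cov(trials_b)
            vals, vecs = eigh(ca, ca + cb)          # solves ca w = lambda (ca + cb) w
            order = np.argsort(vals)
            picks = np.r_[order[:n_pairs], order[-n_pairs:]]
            return vecs[:, picks].T                 # (2*n_pairs, n_channels) filters

        def csp_features(trial, filters):
            z = filters @ trial                     # project one trial onto the filters
            var = z.var(axis=1)
            return np.log(var / var.sum())          # normalized log-variance features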

  8. Aerosol Plume Detection Algorithm Based on Image Segmentation of Scanning Atmospheric Lidar Data

    SciTech Connect

    Weekley, R. Andrew; Goodrich, R. Kent; Cornman, Larry B.

    2016-04-01

    An image-processing algorithm has been developed to identify aerosol plumes in scanning lidar backscatter data. The images in this case consist of lidar data in a polar coordinate system. Each full lidar scan is taken as a fixed image in time, and sequences of such scans are considered functions of time. The data are analyzed in both the original backscatter polar coordinate system and a lagged coordinate system. The lagged coordinate system is a scatterplot of two datasets, such as subregions taken from the same lidar scan (spatial delay), or two sequential scans in time (time delay). The lagged coordinate system processing allows for finding and classifying clusters of data. The classification step is important in determining which clusters are valid aerosol plumes and which are from artifacts such as noise, hard targets, or background fields. These cluster classification techniques have skill since both local and global properties are used. Furthermore, more information is available since both the original data and the lag data are used. Performance statistics are presented for a limited set of data processed by the algorithm, where results from the algorithm were compared to subjective truth data identified by a human.
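
    The lagged coordinate construction described above can be illustrated as a scatterplot of two sequential scans; in the sketch below a generic density-based clustering step (DBSCAN) stands in for the paper's own cluster-finding and classification logic, and the eps and min_samples values are arbitrary assumptions.

        # Time-delay "lagged coordinate" scatter of two sequential scans, clustered
        # with DBSCAN as a generic stand-in for the paper's cluster classification.
        import numpy as np
        from sklearn.cluster import DBSCAN

        def lag_clusters(scan_t, scan_t1, eps=0.05, min_samples=20):
            """scan_t, scan_t1: backscatter arrays of identical shape (range x azimuth)."""
            pts = np.column_stack([scan_t.ravel(), scan_t1.ravel()])
            labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
            return labels.reshape(scan_t.shape)     # cluster id per lidar gate (-1 = noise)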

  9. Aerosol Plume Detection Algorithm Based on Image Segmentation of Scanning Atmospheric Lidar Data

    DOE PAGES

    Weekley, R. Andrew; Goodrich, R. Kent; Cornman, Larry B.

    2016-04-06

    An image-processing algorithm has been developed to identify aerosol plumes in scanning lidar backscatter data. The images in this case consist of lidar data in a polar coordinate system. Each full lidar scan is taken as a fixed image in time, and sequences of such scans are considered functions of time. The data are analyzed in both the original backscatter polar coordinate system and a lagged coordinate system. The lagged coordinate system is a scatterplot of two datasets, such as subregions taken from the same lidar scan (spatial delay), or two sequential scans in time (time delay). The lagged coordinate system processing allows for finding and classifying clusters of data. The classification step is important in determining which clusters are valid aerosol plumes and which are from artifacts such as noise, hard targets, or background fields. These cluster classification techniques have skill since both local and global properties are used. Furthermore, more information is available since both the original data and the lag data are used. Performance statistics are presented for a limited set of data processed by the algorithm, where results from the algorithm were compared to subjective truth data identified by a human.

  10. Aerosol Plume Detection Algorithm Based on Image Segmentation of Scanning Atmospheric Lidar Data

    SciTech Connect

    Weekley, R. Andrew; Goodrich, R. Kent; Cornman, Larry B.

    2016-04-06

    An image-processing algorithm has been developed to identify aerosol plumes in scanning lidar backscatter data. The images in this case consist of lidar data in a polar coordinate system. Each full lidar scan is taken as a fixed image in time, and sequences of such scans are considered functions of time. The data are analyzed in both the original backscatter polar coordinate system and a lagged coordinate system. The lagged coordinate system is a scatterplot of two datasets, such as subregions taken from the same lidar scan (spatial delay), or two sequential scans in time (time delay). The lagged coordinate system processing allows for finding and classifying clusters of data. The classification step is important in determining which clusters are valid aerosol plumes and which are from artifacts such as noise, hard targets, or background fields. These cluster classification techniques have skill since both local and global properties are used. Furthermore, more information is available since both the original data and the lag data are used. Performance statistics are presented for a limited set of data processed by the algorithm, where results from the algorithm were compared to subjective truth data identified by a human.

  11. Implementation of a cellular neural network-based segmentation algorithm on the bio-inspired vision system

    NASA Astrophysics Data System (ADS)

    Karabiber, Fethullah; Grassi, Giuseppe; Vecchio, Pietro; Arik, Sabri; Yalcin, M. Erhan

    2011-01-01

    Based on the cellular neural network (CNN) paradigm, the bio-inspired (bi-i) cellular vision system is a computing platform consisting of state-of-the-art sensing, cellular sensing-processing and digital signal processing. This paper presents the implementation of a novel CNN-based segmentation algorithm on the bi-i system. The experimental results, obtained for different benchmark video sequences, highlight the feasibility of the approach, which provides a frame rate of about 26 frames per second. Comparisons with existing CNN-based methods show that, even though these methods are two to six times faster than the proposed one, the proposed approach is more accurate and consequently represents a satisfying trade-off between real-time requirements and accuracy.

  12. Reproducibility of SD-OCT–Based Ganglion Cell–Layer Thickness in Glaucoma Using Two Different Segmentation Algorithms

    PubMed Central

    Garvin, Mona K.; Lee, Kyungmoo; Burns, Trudy L.; Abràmoff, Michael D.; Sonka, Milan; Kwon, Young H.

    2013-01-01

    Purpose. To compare the reproducibility of spectral-domain optical coherence tomography (SD-OCT)–based ganglion cell–layer-plus-inner plexiform–layer (GCL+IPL) thickness measurements for glaucoma patients obtained using both a publicly available and a commercially available algorithm. Methods. Macula SD-OCT volumes (200 × 200 × 1024 voxels, 6 × 6 × 2 mm3) were obtained prospectively from both eyes of patients with open-angle glaucoma or with suspected glaucoma on two separate visits within 4 months. The combined GCL+IPL thickness was computed for each SD-OCT volume within an elliptical annulus centered at the fovea, based on two algorithms: (1) a previously published graph-theoretical layer segmentation approach developed at the University of Iowa, and (2) a ganglion cell analysis module of version 6 of Cirrus software. The mean overall thickness of the elliptical annulus was computed as was the thickness within six sectors. For statistical analyses, eyes with an SD-OCT volume with low signal strength (<6), image acquisition errors, or errors in performing the commercial GCL+IPL analysis in at least one of the repeated acquisitions were excluded. Results. Using 104 eyes (from 56 patients) with repeated measurements, we found the intraclass correlation coefficient for the overall elliptical annular GCL+IPL thickness to be 0.98 (95% confidence interval [CI]: 0.97–0.99) with the Iowa algorithm and 0.95 (95% CI: 0.93–0.97) with the Cirrus algorithm; the intervisit SDs were 1.55 μm (Iowa) and 2.45 μm (Cirrus); and the coefficients of variation were 2.2% (Iowa) and 3.5% (Cirrus), P < 0.0001. Conclusions. SD-OCT–based GCL+IPL thickness measurements in patients with early glaucoma are highly reproducible. PMID:24045993
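
    For reference, a two-way random-effects intraclass correlation coefficient, ICC(2,1), and a simple coefficient of variation can be computed from a subjects-by-visits matrix as sketched below; the exact ICC variant and CV definition used in the study are not stated here, so treat this as illustrative.

        # ICC(2,1) (two-way random effects, absolute agreement) and a simple CV.
        import numpy as np

        def icc_2_1(y):
            """y: (n_eyes, n_visits) matrix of repeated thickness measurements."""
            n, k = y.shape
            grand = y.mean()
            ss_total = ((y - grand) ** 2).sum()
            ss_rows = k * ((y.mean(axis=1) - grand) ** 2).sum()   # between subjects
            ss_cols = n * ((y.mean(axis=0) - grand) ** 2).sum()   # between visits
            ms_rows = ss_rows / (n - 1)
            ms_cols = ss_cols / (k - 1)
            ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
            return (ms_rows - ms_err) / (
                ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

        def coefficient_of_variation(y):
            """Mean within-subject SD relative to the grand mean, in percent."""
            return 100.0 * y.std(axis=1, ddof=1).mean() / y.mean()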

  13. An infrared polarization image fusion method based on NSCT and fuzzy C-means clustering segmentation algorithms

    NASA Astrophysics Data System (ADS)

    Yu, Xuelian; Chen, Qian; Gu, Guohua; Qian, Weixian; Xu, Mengxi

    2014-11-01

    The integration of polarization and intensity images, which carry complementary and discriminative information, has emerged as a new and important research area. Considering that the fused image has different clarity and layering requirements for the target and the background, we propose a novel fusion method based on the non-subsampled contourlet transform (NSCT) and fuzzy C-means (FCM) segmentation for IR polarization and light intensity images. First, the polarization characteristic image is derived from fusion of the degree of polarization (DOP) and the angle of polarization (AOP) images using a criterion that combines local standard variation and abrupt change degree (ACD). Then, the polarization characteristic image is segmented with the FCM algorithm. Meanwhile, the two source images are decomposed by the NSCT. Regional energy weighting and a similarity measure are adopted to combine the low-frequency sub-band coefficients of the object. The high-frequency sub-band coefficients of the object boundaries are integrated through the maximum selection rule, while the high-frequency sub-band coefficients of internal objects are integrated using local variation, a matching measure, and region feature weighting. The weighted average and maximum rules are employed independently to fuse the low-frequency and high-frequency components of the background. Finally, an inverse NSCT operation is performed and the final fused image is obtained. The experimental results illustrate that the proposed IR polarization image fusion algorithm yields improved contrast between an artificial target and a cluttered background and a more detailed representation of the depicted scene.

  14. Segmentation and image navigation in digitized spine x rays

    NASA Astrophysics Data System (ADS)

    Long, L. Rodney; Thoma, George R.

    2000-06-01

    The National Library of Medicine has archived a collection of 17,000 digitized x-rays of the cervical and lumbar spines. Extensive health information has been collected on the subjects of these x-rays, but no information has been derived from the image contents themselves. We are researching algorithms to segment anatomy in these images and to derive from the segmented data measurements useful for indexing this image set for characteristics important to researchers in rheumatology, bone morphometry, and related areas. Active Shape Modeling is currently being investigated for use in location and boundary definition for the vertebrae in these images.

  15. Liver Tumor Segmentation from MR Images Using 3D Fast Marching Algorithm and Single Hidden Layer Feedforward Neural Network.

    PubMed

    Le, Trong-Ngoc; Bao, Pham The; Huynh, Hieu Trung

    2016-01-01

    Objective. Our objective is to develop a computerized scheme for liver tumor segmentation in MR images. Materials and Methods. Our proposed scheme consists of four main stages. Firstly, the region of interest (ROI) image which contains the liver tumor region in the T1-weighted MR image series was extracted by using seed points. The noise in this ROI image was reduced and the boundaries were enhanced. A 3D fast marching algorithm was applied to generate the initial labeled regions which are considered as teacher regions. A single hidden layer feedforward neural network (SLFN), which was trained by a noniterative algorithm, was employed to classify the unlabeled voxels. Finally, the postprocessing stage was applied to extract and refine the liver tumor boundaries. The liver tumors determined by our scheme were compared with those manually traced by a radiologist, used as the "ground truth." Results. The study was evaluated on two datasets of 25 tumors from 16 patients. The proposed scheme obtained the mean volumetric overlap error of 27.43% and the mean percentage volume error of 15.73%. The mean of the average surface distance, the root mean square surface distance, and the maximal surface distance were 0.58 mm, 1.20 mm, and 6.29 mm, respectively.

  16. Liver Tumor Segmentation from MR Images Using 3D Fast Marching Algorithm and Single Hidden Layer Feedforward Neural Network

    PubMed Central

    2016-01-01

    Objective. Our objective is to develop a computerized scheme for liver tumor segmentation in MR images. Materials and Methods. Our proposed scheme consists of four main stages. Firstly, the region of interest (ROI) image which contains the liver tumor region in the T1-weighted MR image series was extracted by using seed points. The noise in this ROI image was reduced and the boundaries were enhanced. A 3D fast marching algorithm was applied to generate the initial labeled regions which are considered as teacher regions. A single hidden layer feedforward neural network (SLFN), which was trained by a noniterative algorithm, was employed to classify the unlabeled voxels. Finally, the postprocessing stage was applied to extract and refine the liver tumor boundaries. The liver tumors determined by our scheme were compared with those manually traced by a radiologist, used as the “ground truth.” Results. The study was evaluated on two datasets of 25 tumors from 16 patients. The proposed scheme obtained the mean volumetric overlap error of 27.43% and the mean percentage volume error of 15.73%. The mean of the average surface distance, the root mean square surface distance, and the maximal surface distance were 0.58 mm, 1.20 mm, and 6.29 mm, respectively. PMID:27597960

  17. Paraganglioma Anatomy

    MedlinePlus

    ... carotid artery. It may also form along nerve pathways in the head and neck and in other parts of the body. Topics/Categories: Anatomy -- Nervous System Type: Color, Illustration Source: National Cancer Institute Creator: Terese Winslow (Illustrator) AV Number: CDR739011 ...

  18. Heart Anatomy

    MedlinePlus

    Resource listing for heart anatomy (An Incredible Machine, bonus poster PDF): The Human Heart, Anatomy, Blood, The Conduction System, The Coronary Arteries, The Heart Valves, The Heartbeat, Vasculature of the Arm, Vasculature of the Head, Vasculature of the Leg, Vasculature of the Torso.

  19. Development and evaluation of an algorithm for the computer-assisted segmentation of the human hypothalamus on 7-Tesla magnetic resonance images.

    PubMed

    Schindler, Stephanie; Schönknecht, Peter; Schmidt, Laura; Anwander, Alfred; Strauß, Maria; Trampel, Robert; Bazin, Pierre-Louis; Möller, Harald E; Hegerl, Ulrich; Turner, Robert; Geyer, Stefan

    2013-01-01

    Post mortem studies have shown volume changes of the hypothalamus in psychiatric patients. With 7T magnetic resonance imaging this effect can now be investigated in vivo in detail. To benefit from the sub-millimeter resolution requires an improved segmentation procedure. The traditional anatomical landmarks of the hypothalamus were refined using 7T T1-weighted magnetic resonance images. A detailed segmentation algorithm (unilateral hypothalamus) was developed for colour-coded, histogram-matched images, and evaluated in a sample of 10 subjects. Test-retest and inter-rater reliabilities were estimated in terms of intraclass-correlation coefficients (ICC) and Dice's coefficient (DC). The computer-assisted segmentation algorithm ensured test-retest reliabilities of ICC≥.97 (DC≥96.8) and inter-rater reliabilities of ICC≥.94 (DC = 95.2). There were no significant volume differences between the segmentation runs, raters, and hemispheres. The estimated volumes of the hypothalamus lie within the range of previous histological and neuroimaging results. We present a computer-assisted algorithm for the manual segmentation of the human hypothalamus using T1-weighted 7T magnetic resonance imaging. Providing very high test-retest and inter-rater reliabilities, it outperforms former procedures established at 1.5T and 3T magnetic resonance images and thus can serve as a gold standard for future automated procedures.

  20. The Anatomy of Learning Anatomy

    ERIC Educational Resources Information Center

    Wilhelmsson, Niklas; Dahlgren, Lars Owe; Hult, Hakan; Scheja, Max; Lonka, Kirsti; Josephson, Anna

    2010-01-01

    The experience of clinical teachers, as well as research results about senior medical students' understanding of basic science concepts, has been much debated. To gain a better understanding of how this knowledge transformation is managed by medical students, this work aims at investigating their ways of setting about learning anatomy.…

  1. Identification of linear features at geothermal field based on Segment Tracing Algorithm (STA) of the ALOS PALSAR data

    NASA Astrophysics Data System (ADS)

    Haeruddin; Saepuloh, A.; Heriawan, M. N.; Kubo, T.

    2016-09-01

    Indonesia has about 40% of the world's geothermal energy resources. One area with geothermal energy potential in Indonesia is Wayang Windu, located in West Java Province. A comprehensive understanding of the geothermal system in this area is indispensable for continuing its development. A geothermal system is generally associated with joints or fractures, which serve as paths for the geothermal fluid migrating to the surface. The fluid paths are identified by the existence of surface manifestations such as fumaroles and solfatara and by the presence of alteration minerals. Therefore, analyses relating linear features to geological structures are crucial for identifying geothermal potential. Fractures or joints, in the form of geological structures, are associated with linear features in satellite images. The Segment Tracing Algorithm (STA) was used as the basis for determining the linear features. In this study, we used ALOS PALSAR satellite images in ascending and descending orbit modes. The linear features obtained from the satellite images could be validated by field observations. Based on the application of STA to the ALOS PALSAR data, the general directions of the extracted linear features were WNW-ESE, NNE-SSW and NNW-SSE. These directions are consistent with the general direction of the fault system in the field. The linear features extracted from ALOS PALSAR data based on STA were very useful for identifying fractured zones in the geothermal field.

  2. Feasibility of a semi-automated contrast-oriented algorithm for tumor segmentation in retrospectively gated PET images: phantom and clinical validation.

    PubMed

    Carles, Montserrat; Fechter, Tobias; Nemer, Ursula; Nanko, Norbert; Mix, Michael; Nestle, Ursula; Schaefer, Andrea

    2015-12-21

    PET/CT plays an important role in radiotherapy planning for lung tumors. Several segmentation algorithms have been proposed for PET tumor segmentation. However, most of them do not take into account respiratory motion and are not well validated. The aim of this work was to evaluate a semi-automated contrast-oriented algorithm (COA) for PET tumor segmentation adapted to retrospectively gated (4D) images. The evaluation involved a wide set of 4D-PET/CT acquisitions of dynamic experimental phantoms and lung cancer patients. In addition, segmentation accuracy of 4D-COA was compared with four other state-of-the-art algorithms. In phantom evaluation, the physical properties of the objects defined the gold standard. In clinical evaluation, the ground truth was estimated by the STAPLE (Simultaneous Truth and Performance Level Estimation) consensus of three manual PET contours by experts. Algorithm evaluation with phantoms resulted in: (i) no statistically significant diameter differences for different targets and movements (Δφ = 0.3 ± 1.6 mm); (ii) reproducibility for heterogeneous and irregular targets independent of user initial interaction and (iii) good segmentation agreement for irregular targets compared to manual CT delineation in terms of Dice Similarity Coefficient (DSC = 0.66 ± 0.04), Positive Predictive Value (PPV  = 0.81 ± 0.06) and Sensitivity (Sen. = 0.49 ± 0.05). In clinical evaluation, the segmented volume was in reasonable agreement with the consensus volume (difference in volume (%Vol) = 40 ± 30, DSC = 0.71 ± 0.07 and PPV = 0.90 ± 0.13). High accuracy in target tracking position (ΔME) was obtained for experimental and clinical data (ΔME(exp) = 0 ± 3 mm; ΔME(clin) 0.3 ± 1.4 mm). In the comparison with other lung segmentation methods, 4D-COA has shown the highest volume accuracy in both experimental and clinical data. In conclusion, the accuracy in volume delineation, position tracking and its robustness on highly irregular target movements
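
    The overlap and volume metrics quoted above (DSC, PPV, sensitivity, and percentage volume difference) can be computed from binary masks as in the sketch below; the precise definition of %Vol used by the authors may differ.

        # Overlap metrics from binary masks: Dice, PPV, sensitivity, and a volume
        # difference expressed as a percentage of the reference volume.
        import numpy as np

        def overlap_metrics(seg, ref):
            tp = np.logical_and(seg, ref).sum()
            dice = 2.0 * tp / (seg.sum() + ref.sum())
            ppv = tp / seg.sum()
            sensitivity = tp / ref.sum()
            vol_diff_pct = 100.0 * abs(seg.sum() - ref.sum()) / ref.sum()
            return dice, ppv, sensitivity, vol_diff_pct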

  3. Feasibility of a semi-automated contrast-oriented algorithm for tumor segmentation in retrospectively gated PET images: phantom and clinical validation

    NASA Astrophysics Data System (ADS)

    Carles, Montserrat; Fechter, Tobias; Nemer, Ursula; Nanko, Norbert; Mix, Michael; Nestle, Ursula; Schaefer, Andrea

    2015-12-01

    PET/CT plays an important role in radiotherapy planning for lung tumors. Several segmentation algorithms have been proposed for PET tumor segmentation. However, most of them do not take into account respiratory motion and are not well validated. The aim of this work was to evaluate a semi-automated contrast-oriented algorithm (COA) for PET tumor segmentation adapted to retrospectively gated (4D) images. The evaluation involved a wide set of 4D-PET/CT acquisitions of dynamic experimental phantoms and lung cancer patients. In addition, segmentation accuracy of 4D-COA was compared with four other state-of-the-art algorithms. In phantom evaluation, the physical properties of the objects defined the gold standard. In clinical evaluation, the ground truth was estimated by the STAPLE (Simultaneous Truth and Performance Level Estimation) consensus of three manual PET contours by experts. Algorithm evaluation with phantoms resulted in: (i) no statistically significant diameter differences for different targets and movements (Δφ = 0.3 ± 1.6 mm); (ii) reproducibility for heterogeneous and irregular targets independent of user initial interaction and (iii) good segmentation agreement for irregular targets compared to manual CT delineation in terms of Dice Similarity Coefficient (DSC = 0.66 ± 0.04), Positive Predictive Value (PPV = 0.81 ± 0.06) and Sensitivity (Sen. = 0.49 ± 0.05). In clinical evaluation, the segmented volume was in reasonable agreement with the consensus volume (difference in volume (%Vol) = 40 ± 30, DSC = 0.71 ± 0.07 and PPV = 0.90 ± 0.13). High accuracy in target tracking position (ΔME) was obtained for experimental and clinical data (ΔME_exp = 0 ± 3 mm; ΔME_clin = 0.3 ± 1.4 mm). In the comparison with other lung segmentation methods, 4D-COA has shown the highest volume accuracy in both experimental and clinical data. In conclusion, the accuracy in volume

  4. Effect of different segmentation algorithms on metabolic tumor volume measured on 18F-FDG PET/CT of cervical primary squamous cell carcinoma

    PubMed Central

    Xu, Weina; Yu, Shupeng; Ma, Ying; Liu, Changping

    2017-01-01

    Background and purpose: It is known that fluorine-18 fluorodeoxyglucose PET/computed tomography (CT) segmentation algorithms have an impact on the metabolic tumor volume (MTV). This leads to some uncertainty in PET/CT guidance of tumor radiotherapy. The aim of this study was to investigate the effect of segmentation algorithms on the PET/CT-based MTV and their correlations with the gross tumor volumes (GTVs) of cervical primary squamous cell carcinoma. Materials and methods: Fifty-five patients with International Federation of Gynecology and Obstetrics stage Ia∼IIb and histologically proven cervical squamous cell carcinoma were enrolled. A fluorine-18 fluorodeoxyglucose PET/CT scan was performed before definitive surgery. GTV was measured on surgical specimens. MTVs were estimated on PET/CT scans using different segmentation algorithms, including thresholds at a fixed percentage of the maximum standardized uptake value (20∼60% SUVmax) and an iterative adaptive algorithm. We divided all patients into four groups according to the SUVmax within the target volume. The comparisons of absolute values and percentage differences between the MTVs obtained by segmentation and the GTV were performed in the different SUVmax subgroups. The optimal threshold percentage was determined from MTV20%∼MTV60%, and was correlated with SUVmax. The correlation of MTViterative adaptive with GTV was also investigated. Results: MTV50% and MTV60% were similar to GTV (P>0.05) in the SUVmax of up to 5 group. MTV30%∼MTV60% were similar to GTV (P>0.05) in the 5<SUVmax≤10 group, and MTVs over a range of fixed thresholds were likewise similar to GTV (P>0.05) in the 10<SUVmax≤15 and SUVmax of at least 15 groups. MTViterative adaptive was similar to GTV both overall and in the different SUVmax groups (P>0.05). Significant differences were observed among the fixed percentage methods, and the optimal threshold percentage was inversely correlated with SUVmax. The iterative adaptive segmentation algorithm led
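
    The fixed-percentage thresholding used for MTV20%∼MTV60% can be sketched as below; the iterative adaptive algorithm is vendor-specific and not reproduced, and the function and parameter names are illustrative only.

        # MTV from a fixed percentage of SUVmax (e.g. percent=0.4 for MTV40%).
        import numpy as np

        def mtv_fixed_threshold(suv, roi_mask, percent, voxel_volume_ml):
            """suv: SUV array; roi_mask: boolean box bounding the tumour region."""
            suv_max = suv[roi_mask].max()
            tumour = roi_mask & (suv >= percent * suv_max)
            return tumour.sum() * voxel_volume_ml   # metabolic tumour volume in ml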

  5. Thymus Gland Anatomy

    MedlinePlus

    Title: Thymus Gland, Adult, Anatomy. Description: Anatomy of the thymus gland; drawing shows ...

  6. Normal Pancreas Anatomy

    MedlinePlus

    Title: Pancreas Anatomy. Description: Anatomy of the pancreas; drawing shows the ...

  7. Normal Female Reproductive Anatomy

    MedlinePlus

    Title: Reproductive System, Female, Anatomy. Description: Anatomy of the female reproductive system; drawing shows the uterus, myometrium (muscular outer layer ...

  8. Repeatability and Reproducibility of Eight Macular Intra-Retinal Layer Thicknesses Determined by an Automated Segmentation Algorithm Using Two SD-OCT Instruments

    PubMed Central

    Huang, Shenghai; Leng, Lin; Zhu, Dexi; Lu, Fan

    2014-01-01

    Purpose To evaluate the repeatability, reproducibility, and agreement of thickness profile measurements of eight intra-retinal layers determined by an automated algorithm applied to optical coherence tomography (OCT) images from two different instruments. Methods Twenty normal subjects (12 males, 8 females; 24 to 32 years old) were enrolled. Imaging was performed with a custom built ultra-high resolution OCT instrument (UHR-OCT, ∼3 µm resolution) and a commercial RTVue100 OCT (∼5 µm resolution) instrument. An automated algorithm was developed to segment the macular retina into eight layers and quantitate the thickness of each layer. The right eye of each subject was imaged two times by the first examiner using each instrument to assess intra-observer repeatability and once by the second examiner to assess inter-observer reproducibility. The intraclass correlation coefficient (ICC) and coefficients of repeatability and reproducibility (COR) were analyzed to evaluate the reliability. Results The ICCs for the intra-observer repeatability and inter-observer reproducibility of both SD-OCT instruments were greater than 0.945 for the total retina and all intra-retinal layers, except the photoreceptor inner segments, which ranged from 0.051 to 0.643, and the outer segments, which ranged from 0.709 to 0.959. The CORs were less than 6.73% for the total retina and all intra-retinal layers. The total retinal thickness measured by the UHR-OCT was significantly thinner than that measured by the RTVue100. However, the ICC for agreement of the thickness profiles between UHR-OCT and RTVue OCT were greater than 0.80 except for the inner segment and outer segment layers. Conclusions Thickness measurements of the intra-retinal layers determined by the automated algorithm are reliable when applied to images acquired by the UHR-OCT and RTVue100 instruments. PMID:24505345

  9. Evaluation of current algorithms for segmentation of scar tissue from late Gadolinium enhancement cardiovascular magnetic resonance of the left atrium: an open-access grand challenge

    PubMed Central

    2013-01-01

    Background Late Gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) imaging can be used to visualise regions of fibrosis and scarring in the left atrium (LA) myocardium. This can be important for treatment stratification of patients with atrial fibrillation (AF) and for assessment of treatment after radio frequency catheter ablation (RFCA). In this paper we present a standardised evaluation benchmarking framework for algorithms segmenting fibrosis and scar from LGE CMR images. The algorithms reported are the response to an open challenge that was put to the medical imaging community through an ISBI (IEEE International Symposium on Biomedical Imaging) workshop. Methods The image database consisted of 60 multicenter, multivendor LGE CMR image datasets from patients with AF, with 30 images taken before and 30 after RFCA for the treatment of AF. A reference standard for scar and fibrosis was established by merging manual segmentations from three observers. Furthermore, scar was also quantified using 2, 3 and 4 standard deviations (SD) and full-width-at-half-maximum (FWHM) methods. Seven institutions responded to the challenge: Imperial College (IC), Mevis Fraunhofer (MV), Sunnybrook Health Sciences (SY), Harvard/Boston University (HB), Yale School of Medicine (YL), King’s College London (KCL) and Utah CARMA (UTA, UTB). There were 8 different algorithms evaluated in this study. Results Some algorithms were able to perform significantly better than SD and FWHM methods in both pre- and post-ablation imaging. Segmentation in pre-ablation images was challenging and good correlation with the reference standard was found in post-ablation images. Overlap scores (out of 100) with the reference standard were as follows: Pre: IC = 37, MV = 22, SY = 17, YL = 48, KCL = 30, UTA = 42, UTB = 45; Post: IC = 76, MV = 85, SY = 73, HB = 76, YL = 84, KCL = 78, UTA = 78, UTB = 72. Conclusions The study concludes that currently no algorithm is deemed clearly better than
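
    The n-SD and FWHM reference methods mentioned above are simple intensity thresholds; a sketch under the usual definitions (scar brighter than remote myocardium by n standard deviations, or brighter than half the maximal myocardial intensity) follows. The participating challenge algorithms themselves are not reproduced.

        # Reference scar thresholds: n-SD above remote myocardium and FWHM.
        import numpy as np

        def scar_nsd(lge, myo_mask, remote_mask, n=3):
            """Myocardial voxels brighter than mean + n*SD of remote (healthy) tissue."""
            mu, sd = lge[remote_mask].mean(), lge[remote_mask].std()
            return myo_mask & (lge > mu + n * sd)

        def scar_fwhm(lge, myo_mask):
            """Myocardial voxels brighter than half the maximal myocardial intensity."""
            return myo_mask & (lge > 0.5 * lge[myo_mask].max())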

  10. Discuss on the two algorithms of line-segments and dot-array for region judgement of the sub-satellite purview

    NASA Astrophysics Data System (ADS)

    Nie, Hao; Yang, Mingming; Zhu, Yajie; Zhang, Peng

    2015-04-01

    When a satellite is flying its orbit for a special task such as solar flare observation, it needs to know whether the sub-satellite purview lies in an ocean area. The relative position between the sub-satellite point and the coastline varies, so the observation condition must be judged in real time according to the current orbital elements. The problem is to determine the relative position of the rectangular purview with respect to the multiply connected regions formed by the coastline base data. Usually the Cohen-Sutherland algorithm is adopted: it divides the earth map into 9 sections by the four lines extending the rectangle sides, and then determines in which section the boundary points of the connected regions lie. That method traverses all the boundary points for each judgement. In this paper, two algorithms are presented: one based on line segments and one based on a dot array. The data preprocessing and judging procedures of the two methods are described, and their peculiarities are analyzed. The line-segment method treats the connected regions as a set of line segments; to solve the problem, the terminal coordinates of the rectangular purview are compared with the line segments at the same latitude. The dot-array method translates the whole map into a binary image, which is equivalent to a dot array; the values of the pixels inside the rectangular purview are then examined to solve the problem (see the sketch below). Both algorithms consume fewer software resources and require far fewer comparisons because neither needs to traverse all the boundary points. The analysis indicates that the real-time performance and resource consumption of the two algorithms are similar for a simple coastline, but the dot-array method is the better choice when the coastline is complicated.
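
    The dot-array idea can be illustrated by rasterising the coastline once into a boolean land/sea grid and then indexing that grid with the purview bounds, as in the hypothetical sketch below; the grid resolution and the exact judgement rule used on board are assumptions.

        # Dot-array test: index a precomputed boolean sea mask with the purview bounds.
        import numpy as np

        def footprint_all_ocean(sea_mask, lat_axis, lon_axis,
                                lat_min, lat_max, lon_min, lon_max):
            """sea_mask: (n_lat, n_lon) boolean grid, True where the pixel is ocean;
            lat_axis, lon_axis: 1-D coordinate axes of the grid."""
            rows = (lat_axis >= lat_min) & (lat_axis <= lat_max)
            cols = (lon_axis >= lon_min) & (lon_axis <= lon_max)
            return bool(sea_mask[np.ix_(rows, cols)].all())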

  11. The validation index: a new metric for validation of segmentation algorithms using two or more expert outlines with application to radiotherapy planning.

    PubMed

    Juneja, Prabhjot; Evans, Philp M; Harris, Emma J

    2013-08-01

    Validation is required to ensure that automated segmentation algorithms are suitable for radiotherapy target definition. In the absence of true segmentation, algorithmic segmentation is validated against expert outlining of the region of interest. Multiple experts are used to overcome inter-expert variability. Several approaches have been studied in the literature, but the most appropriate way to combine the information from multiple expert outlines into a single metric for validation is unclear, and none of the existing approaches considers a metric that can be tailored to case-specific requirements in radiotherapy planning. The validation index (VI), a new validation metric that uses the experts' level of agreement, was developed. A control parameter was introduced for the validation of segmentations required for different radiotherapy scenarios: for targets close to organs-at-risk and for difficult-to-discern targets, where large variation between experts is expected. VI was evaluated using two simulated idealized cases and data from two clinical studies. VI was compared with the commonly used Dice similarity coefficient (DSCpair-wise) and found to be more sensitive than DSCpair-wise to changes in agreement between experts. VI was shown to be adaptable to specific radiotherapy planning scenarios.

  12. Computerized Segmentation and Characterization of Breast Lesions in Dynamic Contrast-Enhanced MR Images Using Fuzzy c-Means Clustering and Snake Algorithm

    PubMed Central

    Pang, Yachun; Li, Li; Hu, Wenyong; Peng, Yanxia; Liu, Lizhi; Shao, Yuanzhi

    2012-01-01

    This paper presents a novel two-step approach that incorporates fuzzy c-means (FCM) clustering and a gradient vector flow (GVF) snake algorithm for lesion contour segmentation on breast magnetic resonance imaging (BMRI). Manual delineation of the lesions by expert MR radiologists was taken as the reference standard in evaluating the computerized segmentation approach. The proposed algorithm was also compared with an FCM-clustering-based method. With a database of 60 mass-like lesions (22 benign and 38 malignant cases), the proposed method demonstrated sufficiently good segmentation performance. Morphological and texture features were extracted and used to classify the benign and malignant lesions based on the proposed computerized segmentation contours and the radiologists' delineations, respectively. Features extracted by the computerized characterization method differentiated the lesions with an area under the receiver-operating characteristic curve (AUC) of 0.968, compared with an AUC of 0.914 based on the features extracted from the radiologists' delineations. The proposed method can assist radiologists in delineating and characterizing BMRI lesions, for example by quantifying morphological and texture features, and can improve the objectivity and efficiency of BMRI interpretation, which has clinical value. PMID:22952558
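
    The first step of the approach, fuzzy c-means clustering, can be sketched in a few lines; the GVF snake refinement is not reproduced, and the initialization and stopping criteria below are generic choices rather than the authors'.

        # Bare-bones fuzzy c-means on feature vectors (e.g. voxel intensities).
        import numpy as np

        def fuzzy_c_means(x, n_clusters=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
            """x: (n_samples, n_features). Returns (centers, memberships)."""
            rng = np.random.default_rng(seed)
            u = rng.random((x.shape[0], n_clusters))
            u /= u.sum(axis=1, keepdims=True)              # fuzzy memberships sum to 1
            for _ in range(n_iter):
                um = u ** m
                centers = (um.T @ x) / um.sum(axis=0)[:, None]
                d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                inv = d ** (-2.0 / (m - 1))
                u_new = inv / inv.sum(axis=1, keepdims=True)
                if np.abs(u_new - u).max() < tol:          # converged
                    return centers, u_new
                u = u_new
            return centers, u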

  13. Improving cerebellar segmentation with statistical fusion

    NASA Astrophysics Data System (ADS)

    Plassard, Andrew J.; Yang, Zhen; Prince, Jerry L.; Claassen, Daniel O.; Landman, Bennett A.

    2016-03-01

    The cerebellum is a somatotopically organized central component of the central nervous system well known to be involved with motor coordination and increasingly recognized roles in cognition and planning. Recent work in multiatlas labeling has created methods that offer the potential for fully automated 3-D parcellation of the cerebellar lobules and vermis (which are organizationally equivalent to cortical gray matter areas). This work explores the trade offs of using different statistical fusion techniques and post hoc optimizations in two datasets with distinct imaging protocols. We offer a novel fusion technique by extending the ideas of the Selective and Iterative Method for Performance Level Estimation (SIMPLE) to a patch-based performance model. We demonstrate the effectiveness of our algorithm, Non- Local SIMPLE, for segmentation of a mixed population of healthy subjects and patients with severe cerebellar anatomy. Under the first imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard segmentation techniques. In the second imaging protocol, we show that Non-Local SIMPLE outperforms previous gold standard techniques but is outperformed by a non-locally weighted vote with the deeper population of atlases available. This work advances the state of the art in open source cerebellar segmentation algorithms and offers the opportunity for routinely including cerebellar segmentation in magnetic resonance imaging studies that acquire whole brain T1-weighted volumes with approximately 1 mm isotropic resolution.

  14. Improving Cerebellar Segmentation with Statistical Fusion

    PubMed Central

    Plassard, Andrew J.; Yang, Zhen; Prince, Jerry L.; Claassen, Daniel O.; Landman, Bennett A.

    2016-01-01

    The cerebellum is a somatotopically organized central component of the central nervous system well known to be involved with motor coordination and increasingly recognized roles in cognition and planning. Recent work in multi-atlas labeling has created methods that offer the potential for fully automated 3-D parcellation of the cerebellar lobules and vermis (which are organizationally equivalent to cortical gray matter areas). This work explores the trade offs of using different statistical fusion techniques and post hoc optimizations in two datasets with distinct imaging protocols. We offer a novel fusion technique by extending the ideas of the Selective and Iterative Method for Performance Level Estimation (SIMPLE) to a patch-based performance model. We demonstrate the effectiveness of our algorithm, Non-Local SIMPLE, for segmentation of a mixed population of healthy subjects and patients with severe cerebellar anatomy. Under the first imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard segmentation techniques. In the second imaging protocol, we show that Non-Local SIMPLE outperforms previous gold standard techniques but is outperformed by a non-locally weighted vote with the deeper population of atlases available. This work advances the state of the art in open source cerebellar segmentation algorithms and offers the opportunity for routinely including cerebellar segmentation in magnetic resonance imaging studies that acquire whole brain T1-weighted volumes with approximately 1 mm isotropic resolution. PMID:27127334

  15. Improving Cerebellar Segmentation with Statistical Fusion.

    PubMed

    Plassard, Andrew J; Yang, Zhen; Prince, Jerry L; Claassen, Daniel O; Landman, Bennett A

    2016-02-27

    The cerebellum is a somatotopically organized central component of the central nervous system well known to be involved with motor coordination and increasingly recognized roles in cognition and planning. Recent work in multi-atlas labeling has created methods that offer the potential for fully automated 3-D parcellation of the cerebellar lobules and vermis (which are organizationally equivalent to cortical gray matter areas). This work explores the trade offs of using different statistical fusion techniques and post hoc optimizations in two datasets with distinct imaging protocols. We offer a novel fusion technique by extending the ideas of the Selective and Iterative Method for Performance Level Estimation (SIMPLE) to a patch-based performance model. We demonstrate the effectiveness of our algorithm, Non-Local SIMPLE, for segmentation of a mixed population of healthy subjects and patients with severe cerebellar anatomy. Under the first imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard segmentation techniques. In the second imaging protocol, we show that Non-Local SIMPLE outperforms previous gold standard techniques but is outperformed by a non-locally weighted vote with the deeper population of atlases available. This work advances the state of the art in open source cerebellar segmentation algorithms and offers the opportunity for routinely including cerebellar segmentation in magnetic resonance imaging studies that acquire whole brain T1-weighted volumes with approximately 1 mm isotropic resolution.

  16. Regulatory Anatomy

    PubMed Central

    2015-01-01

    This article proposes the term “safety logics” to understand attempts within the European Union (EU) to harmonize member state legislation to ensure a safe and stable supply of human biological material for transplants and transfusions. With safety logics, I refer to assemblages of discourses, legal documents, technological devices, organizational structures, and work practices aimed at minimizing risk. I use this term to reorient the analytical attention with respect to safety regulation. Instead of evaluating whether safety is achieved, the point is to explore the types of “safety” produced through these logics as well as to consider the sometimes unintended consequences of such safety work. In fact, the EU rules have been giving rise to complaints from practitioners finding the directives problematic and inadequate. In this article, I explore the problems practitioners face and why they arise. In short, I expose the regulatory anatomy of the policy landscape. PMID:26139952

  17. [Forbidden anatomy].

    PubMed

    Holck, Per

    2004-12-16

    For centuries, anatomists have used any available course of action to obtain material for dissection while avoiding prosecution for grave robbery, at times the only way to get hold of cadavers. Stealing newly dead people from churchyards and offering them for sale to anatomical institutions was not uncommon in the 19th century. "Resurrectionists"--as these thieves were called, since they made the dead "alive"--were seen as necessary for the teaching of anatomy in Victorian Britain. In the 1820s a scandal was revealed in Scotland, when it was discovered that some people even committed murder to make money from supplying anatomists with human cadavers. Two men, William Burke and William Hare, became particularly notorious because of their "business" with the celebrated anatomist Robert Knox in Edinburgh.

  18. A novel algorithm based on visual saliency attention for localization and segmentation in rapidly-stained leukocyte images.

    PubMed

    Zheng, Xin; Wang, Yong; Wang, Guoyou; Chen, Zhong

    2014-01-01

    In this paper, we propose a fast hierarchical framework of leukocyte localization and segmentation in rapidly-stained leukocyte images (RSLI) with complex backgrounds and varying illumination. The proposed framework contains two main steps. First, a nucleus saliency model based on average absolute difference is built, which locates each leukocyte precisely while effectively removes dyeing impurities and erythrocyte fragments. Secondly, two different schemes are presented for segmenting the nuclei and cytoplasm respectively. As for nuclei segmentation, to solve the overlap problem between leukocytes, we extract the nucleus lobes first and further group them. The lobes extraction is realized by the histogram-based contrast map and watershed segmentation, taking into account the saliency and similarity of nucleus color. Meanwhile, as for cytoplasm segmentation, to extract the blurry contour of the cytoplasm under instable illumination, we propose a cytoplasm enhancement based on tri-modal histogram specification, which specifically improves the contrast of cytoplasm while maintaining others. Then, the contour of cytoplasm is quickly obtained by extraction based on parameter-controlled adaptive attention window. Furthermore, the contour is corrected by concave points matching in order to solve the overlap between leukocytes and impurities. The experiments show the effectiveness of the proposed nucleus saliency model, which achieves average localization accuracy with F1-measure greater than 95%. In addition, the comparison of single leukocyte segmentation accuracy and running time has demonstrated that the proposed segmentation scheme outperforms the former approaches in RSLI.

  19. Left atrium segmentation for atrial fibrillation ablation

    NASA Astrophysics Data System (ADS)

    Karim, R.; Mohiaddin, R.; Rueckert, D.

    2008-03-01

    Segmentation of the left atrium is vital for pre-operative assessment of its anatomy in radio-frequency catheter ablation (RFCA) surgery. RFCA is commonly used for treating atrial fibrillation. In this paper we present a semi-automatic approach for segmenting the left atrium and the pulmonary veins from MR angiography (MRA) data sets, as well as an automatic approach for further subdividing the segmented atrium into the atrium body and the pulmonary veins. The segmentation algorithm is based on the notion that in MRA the atrium becomes connected to surrounding structures via partial-volume-affected voxels and narrow vessels; the atrium can be separated if these regions are characterized and identified. The blood pool, obtained by subtracting the pre- and post-contrast scans, is first segmented using a region-growing approach. The segmented blood pool is then subdivided into disjoint subdivisions based on its Euclidean distance transform. These subdivisions are then merged automatically, starting from a seed point and stopping at points where the atrium leaks into a neighbouring structure. The resulting merged subdivisions produce the segmented atrium. Measuring the size of the pulmonary vein ostium is vital for selecting the optimal Lasso catheter diameter. We present a second technique for automatically identifying the atrium body from segmented left atrium images. The separating surface between the atrium body and the pulmonary veins gives the ostia locations and can play an important role in measuring their diameters. The technique relies on evolving interfaces modelled using level sets. Results are presented on 20 patient MRA datasets.
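
    The distance-transform-based subdivision of the blood pool can be illustrated with a marker-based watershed on the negated Euclidean distance map, as sketched below; the seeded merging and leak detection that produce the final atrium are not shown, and watershed is used here only as a convenient way to generate disjoint subdivisions.

        # Subdivide a binary blood-pool mask via its Euclidean distance transform:
        # distance maxima become markers for a watershed on the negated distance map.
        import numpy as np
        from scipy import ndimage
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        def subdivide_blood_pool(mask, min_marker_distance=10):
            """mask: 3-D boolean array of the segmented blood pool."""
            dist = ndimage.distance_transform_edt(mask)
            peaks = peak_local_max(dist, min_distance=min_marker_distance, labels=mask)
            markers = np.zeros(mask.shape, dtype=np.int32)
            markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
            return watershed(-dist, markers, mask=mask)    # disjoint labelled subdivisions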

  20. Algorithme d'optimisation du profil vertical pour un segment de vol en croisiere avec une contrainte d'heure d'arrivee requise

    NASA Astrophysics Data System (ADS)

    Dancila, Radu Ioan

    This thesis presents the development of an algorithm that determines the optimal vertical navigation (VNAV) profile for an aircraft flying a cruise segment, along a given lateral navigation (LNAV) profile, with a required time of arrival (RTA) constraint. The algorithm is intended for implementation into a Flight Management System (FMS) as a new feature that gives advisory information regarding the optimal VNAV profile. The optimization objective is to minimize the total cost associated with flying the cruise segment while arriving at the end of the segment within an imposed time window. For the vertical navigation profiles yielding a time of arrival within the imposed limits, the degree of fulfillment of the RTA constraint is quantified by a cost proportional with the absolute value of the difference between the actual time of arrival and the RTA. The VNAV profiles evaluated in this thesis are characterized by identical altitudes at the beginning and at the end of the profile, they have no more than one step altitude and are flown at constant speed. The acceleration and deceleration segments are not taken into account. The altitude and speed ranges to be used for the VNAV profiles are specified as input parameters for the algorithm. The algorithm described in this thesis is developed in MATLAB. At each altitude, in the range of altitudes considered for the VNAV profiles, a binary search is performed in order to identify the speed interval that yields a time of arrival compatible with the RTA constraint and the profile that produces a minimum total cost is retained. The performance parameters that determine the total cost for flying a particular VNAV profile, the fuel burn and the flight time, are calculated based on the aircraft's specific performance data and configuration, climb/descent profile, the altitude at the beginning of the VNAV profile, the VNAV and LNAV profiles and the atmospheric conditions. These calculations were validated using data generated by a
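
    The per-altitude binary search described above can be sketched as follows, assuming flight time decreases monotonically with cruise speed; flight_time() is a hypothetical stand-in for the FMS performance model, and the cost weighting of RTA deviation is omitted.

        # Binary search for a cruise speed whose predicted segment time meets the RTA,
        # assuming flight time decreases monotonically with speed.
        def search_speed(flight_time, v_min, v_max, rta_seconds, tolerance_s, max_iter=40):
            """flight_time(v): predicted segment duration (s) at constant speed v."""
            lo, hi = v_min, v_max
            for _ in range(max_iter):
                mid = 0.5 * (lo + hi)
                t = flight_time(mid)
                if abs(t - rta_seconds) <= tolerance_s:
                    return mid                  # speed meeting the arrival-time window
                if t > rta_seconds:             # arriving late: fly faster
                    lo = mid
                else:                           # arriving early: fly slower
                    hi = mid
            return None                         # no admissible speed found in range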

  1. Nail anatomy.

    PubMed

    de Berker, David

    2013-01-01

    The nail unit comprises the nail plate, the surrounding soft tissues, and their vasculature and innervation based upon the distal phalanx. The nail plate is a laminated keratinized structure lying on the nail matrix (15-25%), the nail bed with its distal onychodermal band (75-85%), and the hyponychium at its free edge. The distal part of the matrix, the lunula, characterized by its half-moon shape, can be observed in some digits. The nail plate is bordered by the proximal and lateral nail folds. From the proximal nail fold, the cuticle (also known as the eponychium) adheres to the superficial surface of the proximal nail plate. The nail unit possesses a complex and abundant vascular network to ensure adequate blood supply. Finally, both the periungual soft tissues and the nail folds are innervated. The shapes, structure, and inter-relationships of these tissues are factors in the way nails present with disease and how we understand and manage those diseases. In particular, an understanding of the surgical anatomy is important for those undertaking diagnostic or curative operations on the nail. With this knowledge, the most appropriate surgery can be planned and the patient can be provided with accurate and clear guidance to enable informed consent.

  2. Quick Dissection of the Segmental Bronchi

    ERIC Educational Resources Information Center

    Nakajima, Yuji

    2010-01-01

    Knowledge of the three-dimensional anatomy of the bronchopulmonary segments is essential for respiratory medicine. This report describes a quick guide for dissecting the segmental bronchi in formaldehyde-fixed human material. All segmental bronchi are easy to dissect, and thus, this exercise will help medical students to better understand the…

  3. Automated lung segmentation of low resolution CT scans of rats

    NASA Astrophysics Data System (ADS)

    Rizzo, Benjamin M.; Haworth, Steven T.; Clough, Anne V.

    2014-03-01

    Dual modality micro-CT and SPECT imaging can play an important role in preclinical studies designed to investigate mechanisms, progression, and therapies for acute lung injury in rats. SPECT imaging involves examining the uptake of radiopharmaceuticals within the lung, with the hypothesis that uptake is sensitive to the health or disease status of the lung tissue. Methods of quantifying lung uptake and comparison of right and left lung uptake generally begin with identifying and segmenting the lung region within the 3D reconstructed SPECT volume. However, identification of the lung boundaries and the fissure between the left and right lung is not always possible from the SPECT images directly since the radiopharmaceutical may be taken up by other surrounding tissues. Thus, our SPECT protocol begins with a fast CT scan, the lung boundaries are identified from the CT volume, and the CT region is coregistered with the SPECT volume to obtain the SPECT lung region. Segmenting rat lungs within the CT volume is particularly challenging due to the relatively low resolution of the images and the rat's unique anatomy. Thus, we have developed an automated segmentation algorithm for low resolution micro-CT scans that utilizes depth maps to detect fissures on the surface of the lung volume. The fissure's surface location is in turn used to interpolate the fissure throughout the lung volume. Results indicate that the segmentation method results in left and right lung regions consistent with rat lung anatomy.

  4. Algorithm for localized adaptive diffuse optical tomography and its application in bioluminescence tomography

    NASA Astrophysics Data System (ADS)

    Naser, Mohamed A.; Patterson, Michael S.; Wong, John W.

    2014-04-01

    A reconstruction algorithm for diffuse optical tomography based on diffusion theory and finite element method is described. The algorithm reconstructs the optical properties in a permissible domain or region-of-interest to reduce the number of unknowns. The algorithm can be used to reconstruct optical properties for a segmented object (where a CT-scan or MRI is available) or a non-segmented object. For the latter, an adaptive segmentation algorithm merges contiguous regions with similar optical properties thereby reducing the number of unknowns. In calculating the Jacobian matrix the algorithm uses an efficient direct method so the required time is comparable to that needed for a single forward calculation. The reconstructed optical properties using segmented, non-segmented, and adaptively segmented 3D mouse anatomy (MOBY) are used to perform bioluminescence tomography (BLT) for two simulated internal sources. The BLT results suggest that the accuracy of reconstruction of total source power obtained without the segmentation provided by an auxiliary imaging method such as x-ray CT is comparable to that obtained when using perfect segmentation.

  5. Adaptive thresholding algorithm based on SAR images and wind data to segment oil spills along the northwest coast of the Iberian Peninsula.

    PubMed

    Mera, David; Cotos, José M; Varela-Pet, José; Garcia-Pineda, Oscar

    2012-10-01

    Satellite Synthetic Aperture Radar (SAR) has been established as a useful tool for detecting hydrocarbon spillage on the ocean's surface. Several surveillance applications have been developed based on this technology. Environmental variables such as wind speed should be taken into account for better SAR image segmentation. This paper presents an adaptive thresholding algorithm for detecting oil spills based on SAR data and a wind field estimation as well as its implementation as a part of a functional prototype. The algorithm was adapted to an important shipping route off the Galician coast (northwest Iberian Peninsula) and was developed on the basis of confirmed oil spills. Image testing revealed 99.93% pixel labelling accuracy. By taking advantage of multi-core processor architecture, the prototype was optimized to get a nearly 30% improvement in processing time.
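
    A heavily simplified sketch of a wind-adaptive threshold (illustrative only: the paper's thresholds were derived from confirmed spills along the Galician route, whereas the linear form, the base_db/slope_db values, and the wind clipping below are hypothetical placeholders showing how a per-pixel threshold could be driven by a wind field):

        import numpy as np

        def wind_adaptive_threshold(sigma0_db, wind_speed_ms, base_db=-18.0, slope_db=-0.5):
            """Flag dark pixels as candidate oil slick. The threshold is lowered (made
            stricter) where the wind is weak and the whole sea surface returns little
            backscatter, to limit false positives in low-wind areas."""
            threshold = base_db + slope_db * (10.0 - np.clip(wind_speed_ms, 2.0, 10.0))
            return sigma0_db < threshold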

  6. Anatomy of the Brain

    MedlinePlus

  7. GPU-based relative fuzzy connectedness image segmentation

    SciTech Connect

    Zhuge Ying; Ciesielski, Krzysztof C.; Udupa, Jayaram K.; Miller, Robert W.

    2013-01-15

    Purpose: Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run times on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. Methods: The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented using NVIDIA's Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above-mentioned CPU-based IRFC algorithm. Results: Experiments based on four data sets of small, medium, large, and super data size achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, respectively, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match the IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. Conclusions: A parallel version of a top-of-the-line algorithm in the family of FC has been developed on NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology.

  8. Partial volume effect modeling for segmentation and tissue classification of brain magnetic resonance images: A review.

    PubMed

    Tohka, Jussi

    2014-11-28

    Quantitative analysis of magnetic resonance (MR) brain images is facilitated by the development of automated segmentation algorithms. A single image voxel may contain several types of tissue due to the finite spatial resolution of the imaging device. This phenomenon, termed the partial volume effect (PVE), complicates the segmentation process, and, due to the complexity of human brain anatomy, the PVE is an important factor for accurate brain structure quantification. Partial volume estimation refers to a generalized segmentation task in which the amount of each tissue type within each voxel is solved for. This review aims to provide a systematic, tutorial-like overview and categorization of methods for partial volume estimation in brain MRI. The review concentrates on statistically based approaches for partial volume estimation and also explains the differences from other, similar image segmentation approaches.
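
    As a minimal illustration of the estimation task (a simple two-tissue "mixel" model under the assumption that the two pure-tissue means are known and distinct, not one of the reviewed methods):

        import numpy as np

        def partial_volume_fraction(intensity, mean_a, mean_b):
            """Estimate the fraction of tissue A in each voxel by linear mixing between
            the two pure-tissue mean intensities, clipped to the physical range [0, 1]."""
            frac_a = (intensity - mean_b) / (mean_a - mean_b)   # assumes mean_a != mean_b
            return np.clip(frac_a, 0.0, 1.0)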

  9. Medical image segmentation by MDP model

    NASA Astrophysics Data System (ADS)

    Lu, Yisu; Chen, Wufan

    2011-11-01

    The MDP (Dirichlet process mixture) model is applied to segment medical images in this paper. Segmentation can be performed automatically without initializing the number of segmentation classes. The MDP model segmentation algorithm is used to segment natural images and MR (magnetic resonance) images in the paper. To demonstrate its accuracy, comparative experiments with the EM (expectation maximization), K-means, and MRF (Markov random field) image segmentation algorithms were carried out on medical MR images. All methods are also analyzed quantitatively using the Dice similarity coefficient (DSC). The experimental results show that the DSC of the MDP model segmentation algorithm exceeds 90% for all slices, indicating that the proposed method is robust and accurate.
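
    The idea of inferring the number of classes rather than fixing it can be sketched with scikit-learn's truncated Dirichlet-process Gaussian mixture (a stand-in for the paper's MDP model; the truncation level, subsampling, and intensity-only features are illustrative assumptions):

        import numpy as np
        from sklearn.mixture import BayesianGaussianMixture

        def dpm_segment(image, max_components=10):
            """Cluster voxel intensities with a Dirichlet-process mixture; unused
            components get near-zero weights, so the effective class count is inferred."""
            x = image.reshape(-1, 1).astype(float)
            rng = np.random.default_rng(0)
            fit_sample = x[rng.choice(len(x), size=min(len(x), 20000), replace=False)]
            dpgmm = BayesianGaussianMixture(
                n_components=max_components,
                weight_concentration_prior_type="dirichlet_process",
                covariance_type="full",
                max_iter=200,
            ).fit(fit_sample)
            return dpgmm.predict(x).reshape(image.shape)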

  10. Ensemble segmentation using efficient integer linear programming.

    PubMed

    Alush, Amir; Goldberger, Jacob

    2012-10-01

    We present a method for combining several segmentations of an image into a single one that in some sense is the average segmentation in order to achieve a more reliable and accurate segmentation result. The goal is to find a point in the "space of segmentations" which is close to all the individual segmentations. We present an algorithm for segmentation averaging. The image is first oversegmented into superpixels. Next, each segmentation is projected onto the superpixel map. An instance of the EM algorithm combined with integer linear programming is applied on the set of binary merging decisions of neighboring superpixels to obtain the average segmentation. Apart from segmentation averaging, the algorithm also reports the reliability of each segmentation. The performance of the proposed algorithm is demonstrated on manually annotated images from the Berkeley segmentation data set and on the results of automatic segmentation algorithms.

  11. Anatomic verification of automatic segmentation algorithms for precise intrascalar localization of cochlear implant electrodes in adult temporal bones using clinically-available computed tomography

    PubMed Central

    Schuman, Theodore A.; Noble, Jack H.; Wright, Charles G.; Wanna, George; Dawant, Benoit; Labadie, Robert F.

    2015-01-01

    Objectives/Hypothesis We have previously described a novel, automated, non-rigid, model-based method for determining the intrascalar position of cochlear implant (CI) electrode arrays within human temporal bones using clinically available, flat-panel volume computed tomography (fpVCT). We sought to validate this method by correlating results with anatomic microdissection of CI arrays in cadaveric bones. Study Design Basic science. Methods Seven adult cadaveric temporal bones were imaged using fpVCT before and after electrode insertion. Using a statistical model of intra-cochlear anatomy, an active shape model optimization approach was then used to identify the scala tympani and vestibuli on the pre-intervention fpVCT. The array position was estimated by identifying its midline on the post-intervention scan and superimposing it onto the pre-intervention images using rigid registration. Specimens were then microdissected to demonstrate the actual array position. Results Using microdissection as the standard for ascertaining electrode position, the automatic identification of the basilar membrane, coupled with post-intervention fpVCT for electrode position identification, accurately depicted the array location in all seven bones. In four specimens, the array remained within the scala tympani; in three the basilar membrane was breached. Conclusions We have anatomically validated the automated method for predicting the intrascalar location of CI arrays using CT. Using this algorithm and pre- and post-intervention CT, rapid feedback regarding implant location and expected audiological outcomes could be obtained in clinical settings. PMID:20939074

  12. LSM: perceptually accurate line segment merging

    NASA Astrophysics Data System (ADS)

    Hamid, Naila; Khan, Nazar

    2016-11-01

    Existing line segment detectors tend to break up perceptually distinct line segments into multiple segments. We propose an algorithm for merging such broken segments to recover the original perceptually accurate line segments. The algorithm proceeds by grouping line segments on the basis of angular and spatial proximity. Then those line segment pairs within each group that satisfy unique, adaptive mergeability criteria are successively merged to form a single line segment. This process is repeated until no more line segments can be merged. We also propose a method for quantitative comparison of line segment detection algorithms. Results on the York Urban dataset show that our merged line segments are closer to human-marked ground-truth line segments compared to state-of-the-art line segment detection algorithms.
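
    A much-simplified sketch of the grouping-and-merging idea (fixed angular and gap thresholds here, whereas the paper uses unique, adaptive mergeability criteria; segments are assumed to be pairs of (x, y) endpoints):

        import numpy as np
        from itertools import combinations

        def orientation(seg):
            (x1, y1), (x2, y2) = seg
            return np.arctan2(y2 - y1, x2 - x1) % np.pi        # direction-independent angle

        def try_merge(a, b, max_angle=np.deg2rad(5), max_gap=10.0):
            """Merge two segments when their orientations agree and their nearest
            endpoints are close; the merged segment spans the two farthest endpoints.
            Returns the merged segment or None if the pair is not mergeable."""
            diff = abs(orientation(a) - orientation(b))
            if diff > max_angle and abs(diff - np.pi) > max_angle:
                return None
            pa, pb = np.asarray(a, float), np.asarray(b, float)
            gap = min(np.linalg.norm(p - q) for p in pa for q in pb)
            if gap > max_gap:
                return None
            pts = np.vstack([pa, pb])
            i, j = max(combinations(range(4), 2),
                       key=lambda ij: np.linalg.norm(pts[ij[0]] - pts[ij[1]]))
            return tuple(pts[i]), tuple(pts[j])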

  13. Interior segment regrowth configurational-bias algorithm for the efficient sampling and fast relaxation of coarse-grained polyethylene and polyoxyethylene melts on a high coordination lattice

    NASA Astrophysics Data System (ADS)

    Rane, Sagar S.; Mattice, Wayne L.

    2005-06-01

    We demonstrate the application of a modified form of the configurational-bias algorithm for the simulation of chain molecules on the second-nearest-neighbor-diamond lattice. Using polyethylene and poly(ethylene-oxide) as model systems we show that the present configurational-bias algorithm can increase the speed of the equilibration by at least a factor of 2-3 or more as compared to the previous method of using a combination of single-bead and pivot moves along with the Metropolis sampling scheme [N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, J. Chem. Phys. 21, 1087 (1953)]. The increase in the speed of the equilibration is found to be dependent on the interactions (i.e., the polymer being simulated) and the molecular weight of the chains. In addition, other factors not considered, such as the density, would also have a significant effect. The algorithm is an extension of the conventional configurational-bias method adapted to the regrowth of interior segments of chain molecules. Appropriate biasing probabilities for the trial moves as outlined by Jain and de Pablo for the configurational-bias scheme of chain ends, suitably modified for the interior segments, are utilized [T. S. Jain and J. J. de Pablo, in Simulation Methods for Polymers, edited by M. Kotelyanskii and D. N. Theodorou (Marcel Dekker, New York, 2004), pp. 223-255]. The biasing scheme satisfies the condition of detailed balance and produces efficient sampling with the correct equilibrium probability distribution of states. The method of interior regrowth overcomes the limitations of the original configurational-bias scheme and allows for the simulation of polymers of higher molecular weight linear chains and ring polymers which lack chain ends.

  14. Influence of reconstruction settings on the performance of adaptive thresholding algorithms for FDG-PET image segmentation in radiotherapy planning.

    PubMed

    Matheoud, Roberta; Della Monica, Patrizia; Loi, Gianfranco; Vigna, Luca; Krengli, Marco; Inglese, Eugenio; Brambilla, Marco

    2011-01-30

    The purpose of this study was to analyze the behavior of a contouring algorithm for PET images based on adaptive thresholding depending on lesion size and target-to-background (TB) ratio under different conditions of image reconstruction parameters. Based on this analysis, the image reconstruction scheme able to maximize the goodness of fit of the thresholding algorithm was selected. A phantom study employing spherical targets was designed to determine slice-specific threshold (TS) levels which produce accurate cross-sectional areas. A wide range of TB ratios was investigated. Multiple regression methods were used to fit the data and to construct algorithms depending both on target cross-sectional area and TB ratio, using various reconstruction schemes employing a wide range of iteration numbers and amounts of post-filtering Gaussian smoothing. Analysis of covariance was used to test the influence of iteration number and smoothing on threshold determination. The degree of convergence of ordered-subset expectation maximization (OSEM) algorithms does not influence TS determination. Among these approaches, the OSEM at two iterations and eight subsets with a 6-8 mm post-reconstruction Gaussian three-dimensional filter provided the best fit with a coefficient of determination R² = 0.90 for cross-sectional areas ≤ 133 mm² and R² = 0.95 for cross-sectional areas > 133 mm². The amount of post-reconstruction smoothing was directly incorporated into the adaptive thresholding algorithms. The feasibility of the method was tested in two patients with lymph node FDG accumulation and in five patients using the bladder to mimic an anatomical structure of large size and uniform uptake, with satisfactory results. Slice-specific adaptive thresholding algorithms look promising as a reproducible method for delineating PET target volumes with good accuracy.

  15. Segmentation of diesel spray images with log-likelihood ratio test algorithm for non-Gaussian distributions.

    PubMed

    Pastor, José V; Arrègle, Jean; García, José M; Zapata, L Daniel

    2007-02-20

    A methodology for processing images of diesel sprays under different experimental situations is presented. The new approach has been developed for cases where the background does not follow a Gaussian distribution but a positive bias appears. In such cases, the lognormal and the gamma probability density functions have been considered for the background digital level distributions. Two different algorithms have been compared with the standard log-likelihood ratio test (LRT): a threshold defined from the cumulative probability density function of the background shows a sensitive improvement, but the best results are obtained with modified versions of the LRT algorithm adapted to non-Gaussian cases.

  16. Segmentation of the whole breast from low-dose chest CT images

    NASA Astrophysics Data System (ADS)

    Liu, Shuang; Salvatore, Mary; Yankelevitz, David F.; Henschke, Claudia I.; Reeves, Anthony P.

    2015-03-01

    The segmentation of the whole breast serves as the first step towards automated breast lesion detection. It is also necessary for automatically assessing the breast density, which is considered to be an important risk factor for breast cancer. In this paper we present a fully automated algorithm to segment the whole breast in low-dose chest CT images (LDCT), which has been recommended as an annual lung cancer screening test. The automated whole breast segmentation and potential breast density readings, as well as lesion detection in LDCT, will provide useful information for women who have received LDCT screening, especially those who have not undergone mammographic screening, by providing them with additional risk indicators for breast cancer with no additional radiation exposure. The two main challenges to be addressed are the significant range of variation in the shape and location of the breast in LDCT and the separation of the pectoral muscles from the glandular tissues. The presented algorithm achieves robust whole breast segmentation using an anatomy-directed rule-based method. The evaluation is performed on 20 LDCT scans by comparing the segmentation with ground truth manually annotated by a radiologist on one axial slice and two sagittal slices for each scan. The resulting average Dice coefficient is 0.880 with a standard deviation of 0.058, demonstrating that the automated segmentation algorithm achieves results consistent with the manual annotations of a radiologist.
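
    The Dice coefficient used for evaluation above is the standard overlap measure for binary masks, computable as follows (a generic sketch, not code from the paper):

        import numpy as np

        def dice(seg, truth):
            """2 * |A ∩ B| / (|A| + |B|) for two binary masks; 1.0 if both are empty."""
            seg, truth = seg.astype(bool), truth.astype(bool)
            denom = seg.sum() + truth.sum()
            return 2.0 * np.logical_and(seg, truth).sum() / denom if denom else 1.0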

  17. Morphology-driven automatic segmentation of MR images of the neonatal brain.

    PubMed

    Gui, Laura; Lisowski, Radoslaw; Faundez, Tamara; Hüppi, Petra S; Lazeyras, François; Kocher, Michel

    2012-12-01

    The segmentation of MR images of the neonatal brain is an essential step in the study and evaluation of infant brain development. State-of-the-art methods for adult brain MRI segmentation are not applicable to the neonatal brain, due to large differences in structure and tissue properties between newborn and adult brains. Existing newborn brain MRI segmentation methods either rely on manual interaction or require the use of atlases or templates, which unavoidably introduces a bias of the results towards the population that was used to derive the atlases. We propose a different approach for the segmentation of neonatal brain MRI, based on the infusion of high-level brain morphology knowledge, regarding relative tissue location, connectivity and structure. Our method does not require manual interaction, or the use of an atlas, and the generality of its priors makes it applicable to different neonatal populations, while avoiding atlas-related bias. The proposed algorithm segments the brain both globally (intracranial cavity, cerebellum, brainstem and the two hemispheres) and at tissue level (cortical and subcortical gray matter, myelinated and unmyelinated white matter, and cerebrospinal fluid). We validate our algorithm through visual inspection by medical experts, as well as by quantitative comparisons that demonstrate good agreement with expert manual segmentations. The algorithm's robustness is verified by testing on variable quality images acquired on different machines, and on subjects with variable anatomy (enlarged ventricles, preterm- vs. term-born).

  18. Intensity-Based Skeletonization of CryoEM Gray-Scale Images Using a True Segmentation-Free Algorithm

    PubMed Central

    Nasr, Kamal Al; Liu, Chunmei; Rwebangira, Mugizi; Burge, Legand; He, Jing

    2014-01-01

    Cryo-electron microscopy is an experimental technique that is able to produce 3D gray-scale images of protein molecules. In contrast to other experimental techniques, cryo-electron microscopy is capable of visualizing large molecular complexes such as viruses and ribosomes. At medium resolution, the positions of the atoms are not visible and the atomic structure cannot be determined directly. The medium-resolution images produced by cryo-electron microscopy are used to derive the atomic structure of the proteins in de novo modeling. The skeletons of the 3D gray-scale images are used to interpret important information that is helpful in de novo modeling. Unfortunately, not all features of the image can be captured using a single segmentation. In this paper, we present a segmentation-free approach to extract the gray-scale curve-like skeletons. The approach relies on a novel representation of the 3D image, where the image is modeled as a graph and a set of volume trees. A test set containing 36 synthesized maps and one authentic map shows that our approach can improve the performance of the two tested tools used in de novo modeling. The improvements were 62 and 13 percent for Gorgon and DP-TOSS, respectively. PMID:24384713

  19. Multi-atlas segmentation for abdominal organs with Gaussian mixture models

    NASA Astrophysics Data System (ADS)

    Burke, Ryan P.; Xu, Zhoubing; Lee, Christopher P.; Baucom, Rebeccah B.; Poulose, Benjamin K.; Abramson, Richard G.; Landman, Bennett A.

    2015-03-01

    Abdominal organ segmentation with clinically acquired computed tomography (CT) is drawing increasing interest in the medical imaging community. Gaussian mixture models (GMM) have been extensively used throughout medical image segmentation, most notably in the brain for cerebrospinal fluid / gray matter / white matter differentiation. Because abdominal CT exhibits strong localized intensity characteristics, GMM have recently been incorporated into multi-stage abdominal segmentation algorithms. In the context of variable abdominal anatomy and rich algorithms, it is difficult to assess the marginal contribution of GMM. Herein, we characterize the efficacy of an a posteriori framework that integrates GMM of organ-wise intensity likelihood with spatial priors from multiple target-specific registered labels. In our study, we first manually labeled 100 CT images. Then, we assigned 40 images as training data for constructing target-specific spatial priors and intensity likelihoods. The remaining 60 images were evaluated as test targets for segmenting 12 abdominal organs. The overlap between the true and the automatic segmentations was measured by the Dice similarity coefficient (DSC). A median improvement of 145% was achieved by integrating the GMM intensity likelihood against the specific spatial prior. The proposed framework opens opportunities for abdominal organ segmentation by efficiently using both the spatial and appearance information from the atlases, and creates a benchmark for large-scale automatic abdominal segmentation.

  20. Multi-Atlas Segmentation for Abdominal Organs with Gaussian Mixture Models.

    PubMed

    Burke, Ryan P; Xu, Zhoubing; Lee, Christopher P; Baucom, Rebeccah B; Poulose, Benjamin K; Abramson, Richard G; Landman, Bennett A

    2015-03-17

    Abdominal organ segmentation with clinically acquired computed tomography (CT) is drawing increasing interest in the medical imaging community. Gaussian mixture models (GMM) have been extensively used throughout medical image segmentation, most notably in the brain for cerebrospinal fluid/gray matter/white matter differentiation. Because abdominal CT exhibits strong localized intensity characteristics, GMM have recently been incorporated into multi-stage abdominal segmentation algorithms. In the context of variable abdominal anatomy and rich algorithms, it is difficult to assess the marginal contribution of GMM. Herein, we characterize the efficacy of an a posteriori framework that integrates GMM of organ-wise intensity likelihood with spatial priors from multiple target-specific registered labels. In our study, we first manually labeled 100 CT images. Then, we assigned 40 images as training data for constructing target-specific spatial priors and intensity likelihoods. The remaining 60 images were evaluated as test targets for segmenting 12 abdominal organs. The overlap between the true and the automatic segmentations was measured by the Dice similarity coefficient (DSC). A median improvement of 145% was achieved by integrating the GMM intensity likelihood against the specific spatial prior. The proposed framework opens opportunities for abdominal organ segmentation by efficiently using both the spatial and appearance information from the atlases, and creates a benchmark for large-scale automatic abdominal segmentation.
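
    The a posteriori combination described above can be sketched as a per-voxel product of a Gaussian intensity likelihood and a registered spatial prior, with the arg-max taken as the label (an illustrative sketch only; the per-class means/variances and the prior maps are assumed inputs, e.g. estimated from the training labels and registered atlases respectively):

        import numpy as np
        from scipy.stats import norm

        def map_label(intensity, spatial_prior, means, stds):
            """intensity: image array; spatial_prior: (K, ...) prior probability maps;
            means, stds: per-class intensity statistics of length K. Returns the
            voxel-wise maximum a posteriori class index."""
            log_post = np.stack([
                norm.logpdf(intensity, loc=m, scale=s) + np.log(p + 1e-12)
                for m, s, p in zip(means, stds, spatial_prior)
            ])
            return np.argmax(log_post, axis=0)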

  1. Improving the robustness of interventional 4D ultrasound segmentation through the use of personalized prior shape models

    NASA Astrophysics Data System (ADS)

    Barbosa, Daniel; Queirós, Sandro; Morais, Pedro; Baptista, Maria J.; Monaghan, Mark; Rodrigues, Nuno F.; D'hooge, Jan; Vilaça, João. L.

    2015-03-01

    While fluoroscopy is still the most widely used imaging modality to guide cardiac interventions, the fusion of pre-operative Magnetic Resonance Imaging (MRI) with real-time intra-operative ultrasound (US) is rapidly gaining clinical acceptance as a viable, radiation-free alternative. In order to improve the detection of the left ventricular (LV) surface in 4D ultrasound, we propose to take advantage of the pre-operative MRI scans to extract a realistic geometrical model representing the patient's cardiac anatomy. This could serve as prior information in the interventional setting, making it possible to increase the accuracy of the anatomy extraction step in US data. We have made use of a real-time 3D segmentation framework used in the recent past to solve the LV segmentation problem in MR and US data independently, and we take advantage of this common link to introduce the prior information as a soft penalty term in the ultrasound segmentation algorithm. We tested the proposed algorithm in a clinical dataset of 38 patients undergoing both MR and US scans. The introduction of the personalized shape prior improves the accuracy and robustness of the LV segmentation, as supported by the error reduction when compared to core lab manual segmentation of the same US sequences.

  2. Computerized segmentation algorithm with personalized atlases of murine MRIs in a SV40 large T-antigen mouse mammary cancer model

    NASA Astrophysics Data System (ADS)

    Sibley, Adam R.; Markiewicz, Erica; Mustafi, Devkumar; Fan, Xiaobing; Conzen, Suzanne; Karczmar, Greg; Giger, Maryellen L.

    2016-03-01

    Quantities of MRI data, much larger than can be objectively and efficiently analyzed manually, are routinely generated in preclinical research. We aim to develop an automated image segmentation and registration pipeline to aid in analysis of image data from our high-throughput 9.4 Tesla small animal MRI imaging center. T2-weighted, fat-suppressed MRIs were acquired over 4 life-cycle time-points [up to 12 to 18 weeks] of twelve C3(1) SV40 Large T-antigen mice for a total of 46 T2-weighted MRI volumes, each with a matrix size of 192 x 256, 62 slices, an in-plane resolution of 0.1 mm, and a slice thickness of 0.5 mm. These image sets were acquired with the goal of tracking and quantifying progression of mammary intraepithelial neoplasia (MIN) to invasive cancer in mice, believed to be similar to ductal carcinoma in situ (DCIS) in humans. Our segmentation algorithm takes 2D seed-points drawn by the user at the center of the 4 co-registered volumes associated with each mouse. The level set then evolves in 3D from these 2D seeds. The contour evolution incorporates texture information, edge information, and a statistical shape model in a two-step process. Volumetric Dice coefficients comparing the automatic with manual segmentations were computed and ranged between 0.75 and 0.58 for averages over the 4 life-cycle time points of the mice. Incorporation of these personalized atlases with intra- and inter-mouse registration is expected to enable local and global tracking of the morphological and textural changes in the mammary tissue and associated lesions of these mice.

  3. Optimization of automated segmentation of monkeypox virus-induced lung lesions from normal lung CT images using hard C-means algorithm

    NASA Astrophysics Data System (ADS)

    Castro, Marcelo A.; Thomasson, David; Avila, Nilo A.; Hufton, Jennifer; Senseney, Justin; Johnson, Reed F.; Dyall, Julie

    2013-03-01

    Monkeypox virus is an emerging zoonotic pathogen that results in up to 10% mortality in humans. Knowledge of clinical manifestations and temporal progression of monkeypox disease is limited to data collected from rare outbreaks in remote regions of Central and West Africa. Clinical observations show that monkeypox infection resembles variola infection. Given the limited capability to study monkeypox disease in humans, characterization of the disease in animal models is required. A previous work focused on the identification of inflammatory patterns using PET/CT image modality in two non-human primates previously inoculated with the virus. In this work we extended techniques used in computer-aided detection of lung tumors to identify inflammatory lesions from monkeypox virus infection and their progression using CT images. Accurate estimation of partial volumes of lung lesions via segmentation is difficult because of poor discrimination between blood vessels, diseased regions, and outer structures. We used hard C-means algorithm in conjunction with landmark based registration to estimate the extent of monkeypox virus induced disease before inoculation and after disease progression. Automated estimation is in close agreement with manual segmentation.
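
    The clustering step can be sketched as hard C-means (k-means) on CT intensities inside a lung mask (illustrative only: the class count and the "densest cluster = lesion" rule are assumptions, and the paper combines this step with landmark-based registration, which is not shown):

        import numpy as np
        from sklearn.cluster import KMeans

        def cluster_lung_ct(ct_hu, lung_mask, n_classes=3):
            """Cluster Hounsfield values inside the lung mask; returns a label volume
            (0 = background) and the id of the densest (candidate lesion) cluster."""
            values = ct_hu[lung_mask].reshape(-1, 1).astype(float)
            km = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit(values)
            labels = np.zeros(ct_hu.shape, dtype=int)
            labels[lung_mask] = km.labels_ + 1
            lesion_class = 1 + int(np.argmax(km.cluster_centers_.ravel()))
            return labels, lesion_class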

  4. Patellofemoral anatomy and biomechanics.

    PubMed

    Sherman, Seth L; Plackis, Andreas C; Nuelle, Clayton W

    2014-07-01

    Patellofemoral disorders are common. There is a broad spectrum of disease, ranging from patellofemoral pain and instability to focal cartilage disease and arthritis. Regardless of the specific condition, abnormal anatomy and biomechanics are often the root cause of patellofemoral dysfunction. A thorough understanding of normal patellofemoral anatomy and biomechanics is critical for the treating physician. Recognizing and addressing abnormal anatomy will optimize patellofemoral biomechanics and may ultimately translate into clinical success.

  5. Robust Optic Nerve Segmentation on Clinically Acquired CT.

    PubMed

    Panda, Swetasudha; Asman, Andrew J; Delisi, Michael P; Mawn, Louise A; Galloway, Robert L; Landman, Bennett A

    2014-03-21

    The optic nerve is a sensitive central nervous system structure, which plays a critical role in many devastating pathological conditions. Several methods have been proposed in recent years to segment the optic nerve automatically, but progress toward full automation has been limited. Multi-atlas methods have been successful for brain segmentation, but their application to smaller anatomies remains relatively unexplored. Herein we evaluate a framework for robust and fully automated segmentation of the optic nerves, eye globes and muscles. We employ a robust registration procedure for accurate registrations, variable voxel resolution and image field-of-view. We demonstrate the efficacy of an optimal combination of SyN registration and a recently proposed label fusion algorithm (Non-local Spatial STAPLE) that accounts for small-scale errors in registration correspondence. On a dataset containing 30 highly varying computed tomography (CT) images of the human brain, the optimal registration and label fusion pipeline resulted in a median Dice similarity coefficient of 0.77, symmetric mean surface distance error of 0.55 mm, symmetric Hausdorff distance error of 3.33 mm for the optic nerves. Simultaneously, we demonstrate the robustness of the optimal algorithm by segmenting the optic nerve structure in 316 CT scans obtained from 182 subjects from a thyroid eye disease (TED) patient population.

  6. Robust optic nerve segmentation on clinically acquired CT

    NASA Astrophysics Data System (ADS)

    Panda, Swetasudha; Asman, Andrew J.; DeLisi, Michael P.; Mawn, Louise A.; Galloway, Robert L.; Landman, Bennett A.

    2014-03-01

    The optic nerve is a sensitive central nervous system structure, which plays a critical role in many devastating pathological conditions. Several methods have been proposed in recent years to segment the optic nerve automatically, but progress toward full automation has been limited. Multi-atlas methods have been successful for brain segmentation, but their application to smaller anatomies remains relatively unexplored. Herein we evaluate a framework for robust and fully automated segmentation of the optic nerves, eye globes and muscles. We employ a robust registration procedure for accurate registrations, variable voxel resolution and image field-of-view. We demonstrate the efficacy of an optimal combination of SyN registration and a recently proposed label fusion algorithm (Non-local Spatial STAPLE) that accounts for small-scale errors in registration correspondence. On a dataset containing 30 highly varying computed tomography (CT) images of the human brain, the optimal registration and label fusion pipeline resulted in a median Dice similarity coefficient of 0.77, symmetric mean surface distance error of 0.55 mm, symmetric Hausdorff distance error of 3.33 mm for the optic nerves. Simultaneously, we demonstrate the robustness of the optimal algorithm by segmenting the optic nerve structure in 316 CT scans obtained from 182 subjects from a thyroid eye disease (TED) patient population.

  7. Designing an Algorithm for Cancerous Tissue Segmentation Using Adaptive K-means Clustering and Discrete Wavelet Transform

    PubMed Central

    Rezaee, Kh.; Haddadnia, J.

    2013-01-01

    Background: Breast cancer is currently one of the leading causes of death among women worldwide. The diagnosis and separation of cancerous tumors in mammographic images require accuracy, experience and time, and have always posed a major challenge to radiologists and physicians. Objective: This paper proposes a new algorithm that draws on discrete wavelet transform and adaptive K-means techniques to transform the medical images, estimate the tumor, and detect breast cancer tumors in mammograms at early stages. It also allows the rapid processing of the input data. Method: In the first step, after designing a filter, the discrete wavelet transform is applied to the input images and the approximate coefficients of the scaling components are constructed. Then, the different parts of the image are classified in a continuous spectrum. In the next step, the appropriate threshold is selected by using the adaptive K-means algorithm for initialization and a smart choice of the number of clusters. Finally, the suspicious cancerous mass is separated by applying image processing techniques. Results: We received 120 mammographic images in LJPEG format, which had been scanned in gray-scale at 50 micron size with 3% noise and 20% INU, taken from clinical data in two medical databases (mini-MIAS and DDSM). The proposed algorithm detected tumors at an acceptable level with an average accuracy of 92.32% and sensitivity of 90.24%. Also, the Kappa coefficient was approximately 0.85, indicating suitable reliability of the system performance. Conclusion: The exact positioning of the cancerous tumors allows the radiologist to determine the stage of disease progression and suggest an appropriate treatment in accordance with the tumor growth. The low PPV and high NPV of the system are a warranty of its performance, and both clinical specialists and patients can trust its output. PMID:25505753
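
    The first two steps of the pipeline can be roughly sketched as a wavelet decomposition followed by clustering of the approximation coefficients (illustrative only; the wavelet choice, cluster count, and the "brightest cluster = suspicious region" rule below are assumptions, not the paper's adaptive criteria):

        import numpy as np
        import pywt
        from sklearn.cluster import KMeans

        def candidate_mass_mask(mammogram, wavelet="haar", n_clusters=4):
            """Return a coarse (half-resolution) mask of the brightest intensity cluster
            in the wavelet approximation band of the mammogram."""
            approx, _ = pywt.dwt2(mammogram.astype(float), wavelet)   # scaling coefficients
            km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
            labels = km.fit_predict(approx.reshape(-1, 1)).reshape(approx.shape)
            brightest = int(np.argmax(km.cluster_centers_.ravel()))
            return labels == brightest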

  8. Image Information Mining Utilizing Hierarchical Segmentation

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Marchisio, Giovanni; Koperski, Krzysztof; Datcu, Mihai

    2002-01-01

    The Hierarchical Segmentation (HSEG) algorithm is an approach for producing high quality, hierarchically related image segmentations. The VisiMine image information mining system utilizes clustering and segmentation algorithms for reducing visual information in multispectral images to a manageable size. The project discussed herein seeks to enhance the VisiMine system by incorporating hierarchical segmentations from HSEG.

  9. Clinical anatomy of the subserous layer: An amalgamation of gross and clinical anatomy.

    PubMed

    Yabuki, Yoshihiko

    2016-05-01

    The 1998 edition of Terminologia Anatomica introduced some currently used clinical anatomical terms for the pelvic connective tissue or subserous layer. These innovations persuaded the present author to consider a format in which the clinical anatomical terms could be reconciled with those of gross anatomy and incorporated into a single anatomical glossary without contradiction or ambiguity. Specific studies on the subserous layer were undertaken on 79 Japanese women who had undergone surgery for uterine cervical cancer, and on 26 female cadavers that were dissected, 17 being formalin-fixed and 9 fresh. The results were as follows: (a) the subserous layer could be segmentalized by surgical dissection in the perpendicular, horizontal and sagittal planes; (b) the segmentalized subserous layer corresponded to 12 cubes, or ligaments, of minimal dimension that enabled the pelvic organs to be extirpated; (c) each ligament had a three-dimensional (3D) structure comprising craniocaudal, mediolateral, and dorsoventral directions vis-à-vis the pelvic axis; (d) these 3D-structured ligaments were encoded morphologically in order of decreasing length; and (e) using these codes, all the surgical procedures for 19th century to present-day radical hysterectomy could be expressed symbolically. The establishment of clinical anatomical terms, represented symbolically through coding as demonstrated in this article, could provide common ground for amalgamating clinical anatomy with gross anatomy. Consequently, terms in clinical anatomy and gross anatomy could be reconciled and compiled into a single anatomical glossary.

  10. Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms

    SciTech Connect

    Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi

    2010-05-15

    Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as the "gold standard." Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient was 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less completion time
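
    The contour-refinement stage can be sketched with scikit-image's morphological geodesic active contour (a stand-in for the authors' level-set pipeline, shown in 2D; the Gaussian pre-filter, the inverse-gradient speed image, and the smoothing/balloon parameters are assumptions, not the paper's anisotropic-diffusion and grayscale-conversion steps):

        import numpy as np
        from skimage.filters import gaussian
        from skimage.segmentation import (inverse_gaussian_gradient,
                                          morphological_geodesic_active_contour)

        def refine_liver_contour(ct_slice, seed_mask, iterations=200):
            """Evolve an initial seed mask toward liver boundaries on one CT slice."""
            smoothed = gaussian(ct_slice.astype(float), sigma=2)   # simple noise reduction
            speed = inverse_gaussian_gradient(smoothed)            # low values at strong edges
            return morphological_geodesic_active_contour(
                speed, iterations, init_level_set=seed_mask,
                smoothing=2, balloon=1)                            # balloon>0 inflates from the seed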

  11. Illustration of the obstacles in computerized lung segmentation using examples

    PubMed Central

    Meng, Xin; Qiang, Yongqian; Zhu, Shaocheng; Fuhrman, Carl; Siegfried, Jill M.; Pu, Jiantao

    2012-01-01

    Purpose: Automated lung volume segmentation is often a preprocessing step in quantitative lung computed tomography (CT) image analysis. The objective of this study is to identify the obstacles in computerized lung volume segmentation and illustrate those explicitly using real examples. Awareness of these “difficult” cases may be helpful for the development of a robust and consistent lung segmentation algorithm. Methods: We collected a large diverse dataset consisting of 2768 chest CT examinations acquired on 2292 subjects from various sources. These examinations cover a wide range of diseases, including lung cancer, chronic obstructive pulmonary disease, human immunodeficiency virus, pulmonary embolism, pneumonia, asthma, and interstitial lung disease (ILD). The CT acquisition protocols, including dose, scanners, and reconstruction kernels, vary significantly. After the application of a “neutral” thresholding-based approach to the collected CT examinations in a batch manner, the failed cases were subjectively identified and classified into different subgroups. Results: In total, 121 failed examinations were identified, corresponding to a failure rate of 4.4%. These failed cases are summarized as 11 different subgroups, which are further classified into 3 broad categories: (1) failure caused by diseases, (2) failure caused by anatomy variability, and (3) failure caused by external factors. The failure percentages in these categories are 62.0%, 32.2%, and 5.8%, respectively. Conclusions: The presence of specific lung diseases (e.g., pulmonary nodules, ILD, and pneumonia) is the primary issue in computerized lung segmentation. The segmentation failures caused by external factors and anatomical variability are relatively infrequent but unavoidable in practice. It is desirable to develop robust schemes to handle these issues in a single pass when a large number of CT examinations need to be analyzed. PMID:22894423
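
    A “neutral” thresholding-based lung mask of the kind used as the batch baseline above could look like the following sketch (the -400 HU cut-off, the body-mask input, and the two-largest-components heuristic are common assumptions, not the study's exact scheme):

        import numpy as np
        from scipy import ndimage as ndi

        def threshold_lung_mask(ct_hu, body_mask, hu_threshold=-400):
            """Keep the two largest low-attenuation components inside the body as lungs."""
            air_like = (ct_hu < hu_threshold) & body_mask
            labels, n = ndi.label(air_like)
            if n == 0:
                return np.zeros_like(air_like)
            sizes = ndi.sum(air_like, labels, index=np.arange(1, n + 1))
            keep = 1 + np.argsort(sizes)[::-1][:2]          # label ids of the two largest
            return np.isin(labels, keep)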

  12. Fast inter-mode decision algorithm for high-efficiency video coding based on similarity of coding unit segmentation and partition mode between two temporally adjacent frames

    NASA Astrophysics Data System (ADS)

    Zhong, Guo-Yun; He, Xiao-Hai; Qing, Lin-Bo; Li, Yuan

    2013-04-01

    High-efficiency video coding (HEVC) introduces a flexible hierarchy of three block structures: coding unit (CU), prediction unit (PU), and transform unit (TU), which has brought about higher coding efficiency than the previous video coding standard, H.264/advanced video coding (AVC). HEVC, however, simultaneously requires higher computational complexity than H.264/AVC, although several fast inter-mode decision methods were proposed during its development. To further reduce this complexity, a fast inter-mode decision algorithm is proposed based on temporal correlation. Because the inter-prediction block structures of HEVC and H.264/AVC differ markedly, the correlation of inter prediction between two adjacent frames needs to be analyzed according to the CU and PU structures in HEVC before temporal correlation can be used to speed up inter prediction. The probabilities of all the partition modes in all sizes of CU and the similarity of CU segmentation and partition modes between two adjacent frames are tested. The correlation of partition modes between two CUs with different sizes in two adjacent frames is tested and analyzed. Based on the characteristics tested and analyzed, at most two prior partition modes are evaluated for each CU level, which reduces the number of rate-distortion cost calculations. The simulation results show that the proposed algorithm further reduces coding time by 33.0% to 43.3%, with negligible loss in bitrate and peak signal-to-noise ratio, on top of the fast inter-mode decision algorithms in the current HEVC reference software HM7.0.

  13. Anatomy Comic Strips

    ERIC Educational Resources Information Center

    Park, Jin Seo; Kim, Dae Hyun; Chung, Min Suk

    2011-01-01

    Comics are powerful visual messages that convey immediate visceral meaning in ways that conventional texts often cannot. This article's authors created comic strips to teach anatomy more interestingly and effectively. Four-frame comic strips were conceptualized from a set of anatomy-related humorous stories gathered from the authors' collective…

  14. Anatomy: Spotlight on Africa

    ERIC Educational Resources Information Center

    Kramer, Beverley; Pather, Nalini; Ihunwo, Amadi O.

    2008-01-01

    Anatomy departments across Africa were surveyed regarding the type of curriculum and method of delivery of their medical courses. While the response rate was low, African anatomy departments appear to be in line with the rest of the world in that many have introduced problem based learning, have hours that are within the range of western medical…

  15. Anatomy comic strips.

    PubMed

    Park, Jin Seo; Kim, Dae Hyun; Chung, Min Suk

    2011-01-01

    Comics are powerful visual messages that convey immediate visceral meaning in ways that conventional texts often cannot. This article's authors created comic strips to teach anatomy more interestingly and effectively. Four-frame comic strips were conceptualized from a set of anatomy-related humorous stories gathered from the authors' collective imagination. The comics were drawn on paper and then recreated with digital graphics software. More than 500 comic strips have been drawn and labeled in Korean language, and some of them have been translated into English. All comic strips can be viewed on the Department of Anatomy homepage at the Ajou University School of Medicine, Suwon, Republic of Korea. The comic strips were written and drawn by experienced anatomists, and responses from viewers have generally been favorable. These anatomy comic strips, designed to help students learn the complexities of anatomy in a straightforward and humorous way, are expected to be improved further by the authors and other interested anatomists.

  16. Spatially adapted augmentation of age-specific atlas-based segmentation using patch-based priors

    NASA Astrophysics Data System (ADS)

    Liu, Mengyuan; Seshamani, Sharmishtaa; Harrylock, Lisa; Kitsch, Averi; Miller, Steven; Chau, Van; Poskitt, Kenneth; Rousseau, Francois; Studholme, Colin

    2014-03-01

    One of the most common approaches to MRI brain tissue segmentation is to employ an atlas prior to initialize an Expectation-Maximization (EM) image labeling scheme using a statistical model of MRI intensities. This prior is commonly derived from a set of manually segmented training data from the population of interest. However, in cases where subject anatomy varies significantly from the prior anatomical average model (for example in the case where extreme developmental abnormalities or brain injuries occur), the prior tissue map does not provide adequate information about the observed MRI intensities to ensure the EM algorithm converges to an anatomically accurate labeling of the MRI. In this paper, we present a novel approach for automatic segmentation of such cases. This approach augments the atlas-based EM segmentation by exploring methods to build a hybrid tissue segmentation scheme that seeks to learn where an atlas prior fails (due to inadequate representation of anatomical variation in the statistical atlas) and utilize an alternative prior derived from a patch-driven search of the atlas data. We describe a framework for incorporating this patch-based augmentation of EM (PBAEM) into a 4D age-specific atlas-based segmentation of developing brain anatomy. The proposed approach was evaluated on a set of MRI brain scans of premature neonates with ages ranging from 27.29 to 46.43 gestational weeks (GWs). Results indicated superior performance compared to the conventional atlas-based segmentation method, providing improved segmentation accuracy for gray matter, white matter, ventricles and sulcal CSF regions.

  17. Head segmentation in vertebrates

    PubMed Central

    Kuratani, Shigeru; Schilling, Thomas

    2008-01-01

    Classic theories of vertebrate head segmentation clearly exemplify the idealistic nature of comparative embryology prior to the 20th century. Comparative embryology aimed at recognizing the basic, primary structure that is shared by all vertebrates, either as an archetype or an ancestral developmental pattern. Modern evolutionary developmental (Evo-Devo) studies are also based on comparison, and therefore have a tendency to reduce complex embryonic anatomy into overly simplified patterns. Here again, a basic segmental plan for the head has been sought among chordates. We convened a symposium that brought together leading researchers dealing with this problem, in a number of different evolutionary and developmental contexts. Here we give an overview of the outcome and the status of the field in this modern era of Evo-Devo. We emphasize the fact that the head segmentation problem is not fully resolved, and we discuss new directions in the search for hints for a way out of this maze. PMID:20607135

  18. Auxiliary anatomical labels for joint segmentation and atlas registration

    NASA Astrophysics Data System (ADS)

    Gass, Tobias; Szekely, Gabor; Goksel, Orcun

    2014-03-01

    This paper studies improving joint segmentation and registration by introducing auxiliary labels for anatomy that has similar appearance to the target anatomy while not being part of that target. Such auxiliary labels help avoid false positive labelling of non-target anatomy by resolving ambiguity. A known registration of a segmented atlas can help identify where a target segmentation should lie. Conversely, segmentations of anatomy in two images can help them be better registered. Joint segmentation and registration is then a method that can leverage information from both registration and segmentation to help one another. It has received increasing attention recently in the literature. Often, merely a single organ of interest is labelled in the atlas. In the presence of other anatomical structures with similar appearance, this leads to ambiguity in intensity-based segmentation; for example, when segmenting individual bones in CT images where other bones share the same intensity profile. To alleviate this problem, we introduce automatic generation of additional labels in atlas segmentations, by marking similar-appearance non-target anatomy with an auxiliary label. Information from the auxiliary-labeled atlas segmentation is then incorporated by using a novel coherence potential, which penalizes differences between the deformed atlas segmentation and the target segmentation estimate. We validated this on a joint segmentation-registration approach that iteratively alternates between registering an atlas and segmenting the target image to find a final anatomical segmentation. The results show that automatic auxiliary labelling outperforms the same approach using single-label atlases, for both mandibular bone segmentation in 3D-CT and corpus callosum segmentation in 2D-MRI.

  19. Skull Base Anatomy.

    PubMed

    Patel, Chirag R; Fernandez-Miranda, Juan C; Wang, Wei-Hsin; Wang, Eric W

    2016-02-01

    The anatomy of the skull base is complex with multiple neurovascular structures in a small space. Understanding all of the intricate relationships begins with understanding the anatomy of the sphenoid bone. The cavernous sinus contains the carotid artery and some of its branches; cranial nerves III, IV, VI, and V1; and transmits venous blood from multiple sources. The anterior skull base extends to the frontal sinus and is important to understand for sinus surgery and sinonasal malignancies. The clivus protects the brainstem and posterior cranial fossa. A thorough appreciation of the anatomy of these various areas allows for endoscopic endonasal approaches to the skull base.

  20. Segmentation and Image Analysis of Abnormal Lungs at CT: Current Approaches, Challenges, and Future Trends.

    PubMed

    Mansoor, Awais; Bagci, Ulas; Foster, Brent; Xu, Ziyue; Papadakis, Georgios Z; Folio, Les R; Udupa, Jayaram K; Mollura, Daniel J

    2015-01-01

    The computer-based process of identifying the boundaries of lung from surrounding thoracic tissue on computed tomographic (CT) images, which is called segmentation, is a vital first step in radiologic pulmonary image analysis. Many algorithms and software platforms provide image segmentation routines for quantification of lung abnormalities; however, nearly all of the current image segmentation approaches apply well only if the lungs exhibit minimal or no pathologic conditions. When moderate to high amounts of disease or abnormalities with a challenging shape or appearance exist in the lungs, computer-aided detection systems may be highly likely to fail to depict those abnormal regions because of inaccurate segmentation methods. In particular, abnormalities such as pleural effusions, consolidations, and masses often cause inaccurate lung segmentation, which greatly limits the use of image processing methods in clinical and research contexts. In this review, a critical summary of the current methods for lung segmentation on CT images is provided, with special emphasis on the accuracy and performance of the methods in cases with abnormalities and cases with exemplary pathologic findings. The currently available segmentation methods can be divided into five major classes: (a) thresholding-based, (b) region-based, (c) shape-based, (d) neighboring anatomy-guided, and (e) machine learning-based methods. The feasibility of each class and its shortcomings are explained and illustrated with the most common lung abnormalities observed on CT images. In an overview, practical applications and evolving technologies combining the presented approaches for the practicing radiologist are detailed.

  1. Image segmentation and registration algorithm to collect thoracic skeleton semilandmarks for characterization of age and sex-based thoracic morphology variation.

    PubMed

    Weaver, Ashley A; Nguyen, Callistus M; Schoell, Samantha L; Maldjian, Joseph A; Stitzel, Joel D

    2015-12-01

    Thoracic anthropometry variations with age and sex have been reported and likely relate to thoracic injury risk and outcome. The objective of this study was to collect a large volume of homologous semilandmark data from the thoracic skeleton for the purpose of quantifying thoracic morphology variations for males and females of ages 0-100 years. A semi-automated image segmentation and registration algorithm was applied to collect homologous thoracic skeleton semilandmarks from 343 normal computed tomography (CT) scans. Rigid, affine, and symmetric diffeomorphic transformations were used to register semilandmarks from an atlas to homologous locations in the subject-specific coordinate system. Homologous semilandmarks were successfully collected from 92% (7077) of the ribs and 100% (187) of the sternums included in the study. Between 2700 and 11,000 semilandmarks were collected from each rib and sternum, and over 55 million total semilandmarks were collected from all subjects. The extensive landmark data collected more fully characterize thoracic skeleton morphology across ages and sexes. Characterization of thoracic morphology with age and sex may help explain variations in thoracic injury risk and has important implications for vulnerable populations such as pediatric and elderly populations.

  2. Detection and measurement of fetal anatomies from ultrasound images using a constrained probabilistic boosting tree.

    PubMed

    Carneiro, Gustavo; Georgescu, Bogdan; Good, Sara; Comaniciu, Dorin

    2008-09-01

    We propose a novel method for the automatic detection and measurement of fetal anatomical structures in ultrasound images. This problem offers a myriad of challenges, including: difficulty of modeling the appearance variations of the visual object of interest, robustness to speckle noise and signal dropout, and the large search space of the detection procedure. Previous solutions typically rely on the explicit encoding of prior knowledge and formulation of the problem as a perceptual grouping task solved through clustering or variational approaches. These methods are constrained by the validity of the underlying assumptions and are usually not sufficient to capture the complex appearances of fetal anatomies. We propose a novel system for fast automatic detection and measurement of fetal anatomies that directly exploits a large database of expert-annotated fetal anatomical structures in ultrasound images. Our method learns automatically to distinguish between the appearance of the object of interest and the background by training a constrained probabilistic boosting tree classifier. This system is able to produce the automatic segmentation of several fetal anatomies using the same basic detection algorithm. We show results on fully automatic measurement of biparietal diameter (BPD), head circumference (HC), abdominal circumference (AC), femur length (FL), humerus length (HL), and crown rump length (CRL). Note that our approach is the first in the literature to deal with the HL and CRL measurements. Extensive experiments (with clinical validation) show that our system is, on average, close to the accuracy of experts in terms of segmentation and obstetric measurements. Finally, this system runs in under half a second on a standard dual-core PC.

  3. Comparison of a Gross Anatomy Laboratory to Online Anatomy Software for Teaching Anatomy

    ERIC Educational Resources Information Center

    Mathiowetz, Virgil; Yu, Chih-Huang; Quake-Rapp, Cindee

    2016-01-01

    This study was designed to assess the grades, self-perceived learning, and satisfaction between occupational therapy students who used a gross anatomy laboratory versus online anatomy software (AnatomyTV) as tools to learn anatomy at a large public university and a satellite campus in the mid-western United States. The goal was to determine if…

  4. Anatomy and art.

    PubMed

    Laios, Konstantinos; Tsoukalas, Gregory; Karamanou, Marianna; Androutsos, George

    2013-01-01

    Leonardo da Vinci, Jean Falcon, Andreas Vesalius, Henry Gray, Henry Vandyke Carter and Frank Netter created some of the best atlases of anatomy. Their works constitute not only scientific medical projects but also masterpieces of art.

  5. Pancreas and cyst segmentation

    NASA Astrophysics Data System (ADS)

    Dmitriev, Konstantin; Gutenko, Ievgeniia; Nadeem, Saad; Kaufman, Arie

    2016-03-01

    Accurate segmentation of abdominal organs from medical images is an essential part of surgical planning and computer-aided disease diagnosis. Many existing algorithms are specialized for the segmentation of healthy organs. Cystic pancreas segmentation is especially challenging due to its low-contrast boundaries and its variability in shape, location, and stage of the pancreatic cancer. We present a semi-automatic segmentation algorithm for pancreata with cysts. In contrast to healthy pancreas segmentation, which is amenable to atlas/statistical shape approaches, a pancreas with cysts can have even higher shape variability due to the size and shape of the cyst(s). Hence, accurate results are better attained with semi-automatic, steerable approaches. We use a novel combination of random walker and region growing approaches to delineate the boundaries of the pancreas and cysts with respective best Dice coefficients of 85.1% and 86.7%, and respective best volumetric overlap errors of 26.0% and 23.5%. Results show that the proposed algorithm for pancreas and pancreatic cyst segmentation is accurate and stable.
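
    A rough sketch of combining a random walker with intensity-based region growing from user-placed seeds is given below; scikit-image's random_walker stands in for whatever solver the authors used, and the seed handling, beta value, and tolerance are purely illustrative.

    ```python
    import numpy as np
    from scipy import ndimage
    from skimage.segmentation import random_walker

    def segment_pancreas_and_cyst(volume, pancreas_seeds, cyst_seeds, background_seeds, beta=130):
        """Semi-automatic sketch: random walker labelling from user-placed seeds.

        volume: 3D float array (e.g. normalized CT intensities).
        *_seeds: boolean arrays of the same shape marking user clicks.
        Returns two boolean masks (pancreas, cyst).
        """
        markers = np.zeros(volume.shape, dtype=np.int32)
        markers[background_seeds] = 1
        markers[pancreas_seeds] = 2
        markers[cyst_seeds] = 3
        labels = random_walker(volume, markers, beta=beta, mode='bf')
        return labels == 2, labels == 3

    def grow_region(volume, seed_mask, tolerance=40.0, iterations=25):
        """Simple intensity-tolerance region growing used to refine a mask."""
        mean_val = volume[seed_mask].mean()
        grown = seed_mask.copy()
        for _ in range(iterations):
            candidates = ndimage.binary_dilation(grown) & ~grown
            accepted = candidates & (np.abs(volume - mean_val) < tolerance)
            if not accepted.any():
                break
            grown |= accepted
        return grown
    ```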

  6. Keypoint Transfer Segmentation.

    PubMed

    Wachinger, C; Toews, M; Langs, G; Wells, W; Golland, P

    2015-01-01

    We present an image segmentation method that transfers label maps of entire organs from the training images to the novel image to be segmented. The transfer is based on sparse correspondences between keypoints that represent automatically identified distinctive image locations. Our segmentation algorithm consists of three steps: (i) keypoint matching, (ii) voting-based keypoint labeling, and (iii) keypoint-based probabilistic transfer of organ label maps. We introduce generative models for the inference of keypoint labels and for image segmentation, where keypoint matches are treated as a latent random variable and are marginalized out as part of the algorithm. We report segmentation results for abdominal organs in whole-body CT and in contrast-enhanced CT images. The accuracy of our method compares favorably to common multi-atlas segmentation while offering a speed-up of about three orders of magnitude. Furthermore, keypoint transfer requires no training phase or registration to an atlas. The algorithm's robustness enables the segmentation of scans with highly variable field-of-view.
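
    Steps (i) and (ii) can be approximated with plain k-nearest-neighbour descriptor matching and majority voting, as in the sketch below; the paper's generative model and the probabilistic label-map transfer of step (iii) are not reproduced here.

    ```python
    import numpy as np

    def match_keypoints(test_desc, train_desc, k=5):
        """k-nearest-neighbour descriptor matching (step i of the pipeline)."""
        # Pairwise squared Euclidean distances between descriptor sets.
        d2 = ((test_desc[:, None, :] - train_desc[None, :, :]) ** 2).sum(-1)
        return np.argsort(d2, axis=1)[:, :k]          # indices of the k best matches

    def vote_keypoint_labels(knn_matches, train_labels, n_organs):
        """Voting-based keypoint labelling (step ii): each of the k matched
        training keypoints votes for its organ label."""
        votes = np.zeros((knn_matches.shape[0], n_organs))
        for i, row in enumerate(knn_matches):
            for m in row:
                votes[i, train_labels[m]] += 1.0
        return votes.argmax(axis=1)
    ```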

  7. Comparison of AdaBoost and support vector machines for detecting Alzheimer's disease through automated hippocampal segmentation.

    PubMed

    Morra, Jonathan H; Tu, Zhuowen; Apostolova, Liana G; Green, Amity E; Toga, Arthur W; Thompson, Paul M

    2010-01-01

    We compared four automated methods for hippocampal segmentation using different machine learning algorithms: 1) hierarchical AdaBoost, 2) support vector machines (SVM) with manual feature selection, 3) hierarchical SVM with automated feature selection (Ada-SVM), and 4) a publicly available brain segmentation package (FreeSurfer). We trained our approaches using T1-weighted brain MRIs from 30 subjects [10 normal elderly, 10 mild cognitive impairment (MCI), and 10 Alzheimer's disease (AD)], and tested on an independent set of 40 subjects (20 normal, 20 AD). Manually segmented gold standard hippocampal tracings were available for all subjects (training and testing). We assessed each approach's accuracy relative to manual segmentations, and its power to map AD effects. We then converted the segmentations into parametric surfaces to map disease effects on anatomy. After surface reconstruction, we computed significance maps, and overall corrected p-values, for the 3-D profile of shape differences between AD and normal subjects. Our AdaBoost and Ada-SVM segmentations compared favorably with the manual segmentations and detected disease effects as well as FreeSurfer on the data tested. Cumulative p-value plots, in conjunction with the false discovery rate method, were used to examine the power of each method to detect correlations with diagnosis and cognitive scores. We also evaluated how segmentation accuracy depended on the size of the training set, providing practical information for future users of this technique.
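
    A stripped-down version of such a comparison, using off-the-shelf scikit-learn classifiers on per-voxel feature vectors, might look as follows; the features here are random placeholders and the hierarchical cascades of the paper are not reproduced.

    ```python
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.svm import SVC
    from sklearn.metrics import f1_score

    # Toy stand-in for per-voxel feature vectors (intensity, gradients, position, ...).
    rng = np.random.default_rng(0)
    X_train, y_train = rng.normal(size=(5000, 12)), rng.integers(0, 2, 5000)
    X_test, y_test = rng.normal(size=(2000, 12)), rng.integers(0, 2, 2000)

    for name, clf in [("AdaBoost", AdaBoostClassifier(n_estimators=200)),
                      ("SVM", SVC(kernel="rbf", C=1.0, gamma="scale"))]:
        clf.fit(X_train, y_train)               # train per-voxel classifier
        pred = clf.predict(X_test)              # predict hippocampus vs background
        print(name, "F1:", f1_score(y_test, pred))
    ```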

  8. Automatic segmentation of choroidal thickness in optical coherence tomography

    PubMed Central

    Alonso-Caneiro, David; Read, Scott A.; Collins, Michael J.

    2013-01-01

    The assessment of choroidal thickness from optical coherence tomography (OCT) images of the human choroid is an important clinical and research task, since it provides valuable information regarding the eye’s normal anatomy and physiology, and changes associated with various eye diseases and the development of refractive error. Due to the time consuming and subjective nature of manual image analysis, there is a need for the development of reliable objective automated methods of image segmentation to derive choroidal thickness measures. However, the detection of the two boundaries which delineate the choroid is a complicated and challenging task, in particular the detection of the outer choroidal boundary, due to a number of issues including: (i) the vascular ocular tissue is non-uniform and rich in non-homogeneous features, and (ii) the boundary can have a low contrast. In this paper, an automatic segmentation technique based on graph-search theory is presented to segment the inner choroidal boundary (ICB) and the outer choroidal boundary (OCB) to obtain the choroid thickness profile from OCT images. Before the segmentation, the B-scan is pre-processed to enhance the two boundaries of interest and to minimize the artifacts produced by surrounding features. The algorithm to detect the ICB is based on a simple edge filter and a directional weighted map penalty, while the algorithm to detect the OCB is based on OCT image enhancement and a dual brightness probability gradient. The method was tested on a large data set of images from a pediatric (1083 B-scans) and an adult (90 B-scans) population, which were previously manually segmented by an experienced observer. The results demonstrate the proposed method provides robust detection of the boundaries of interest and is a useful tool to extract clinical data. PMID:24409381
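
    Graph-search boundary detection of this kind is often reduced to a shortest path through a per-pixel cost image; the sketch below uses a simple column-wise dynamic program, with the paper's edge filters, directional weighting, and brightness-probability terms assumed to be folded into the cost array.

    ```python
    import numpy as np

    def trace_boundary(cost, max_jump=2):
        """Dynamic-programming boundary search through a 2D cost image.

        cost: (rows, cols) array, low where the boundary is likely
              (e.g. an inverted gradient or dual-brightness map).
        max_jump: maximum vertical step between neighbouring columns.
        Returns the row index of the boundary in every column.
        """
        rows, cols = cost.shape
        acc = cost.copy().astype(float)
        back = np.zeros((rows, cols), dtype=int)
        for c in range(1, cols):
            for r in range(rows):
                lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
                prev = acc[lo:hi, c - 1]
                k = prev.argmin()
                acc[r, c] += prev[k]
                back[r, c] = lo + k
        # Backtrack from the cheapest endpoint in the last column.
        boundary = np.zeros(cols, dtype=int)
        boundary[-1] = acc[:, -1].argmin()
        for c in range(cols - 1, 0, -1):
            boundary[c - 1] = back[boundary[c], c]
        return boundary
    ```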

  9. Automatic segmentation of choroidal thickness in optical coherence tomography.

    PubMed

    Alonso-Caneiro, David; Read, Scott A; Collins, Michael J

    2013-01-01

    The assessment of choroidal thickness from optical coherence tomography (OCT) images of the human choroid is an important clinical and research task, since it provides valuable information regarding the eye's normal anatomy and physiology, and changes associated with various eye diseases and the development of refractive error. Due to the time consuming and subjective nature of manual image analysis, there is a need for the development of reliable objective automated methods of image segmentation to derive choroidal thickness measures. However, the detection of the two boundaries which delineate the choroid is a complicated and challenging task, in particular the detection of the outer choroidal boundary, due to a number of issues including: (i) the vascular ocular tissue is non-uniform and rich in non-homogeneous features, and (ii) the boundary can have a low contrast. In this paper, an automatic segmentation technique based on graph-search theory is presented to segment the inner choroidal boundary (ICB) and the outer choroidal boundary (OCB) to obtain the choroid thickness profile from OCT images. Before the segmentation, the B-scan is pre-processed to enhance the two boundaries of interest and to minimize the artifacts produced by surrounding features. The algorithm to detect the ICB is based on a simple edge filter and a directional weighted map penalty, while the algorithm to detect the OCB is based on OCT image enhancement and a dual brightness probability gradient. The method was tested on a large data set of images from a pediatric (1083 B-scans) and an adult (90 B-scans) population, which were previously manually segmented by an experienced observer. The results demonstrate the proposed method provides robust detection of the boundaries of interest and is a useful tool to extract clinical data.

  10. Automatic segmentation of the caudate nucleus from human brain MR images.

    PubMed

    Xia, Yan; Bettinger, Keith; Shen, Lin; Reiss, Allan L

    2007-04-01

    We describe a knowledge-driven algorithm to automatically delineate the caudate nucleus (CN) region of the human brain from a magnetic resonance (MR) image. Since the lateral ventricles (LVs) are good landmarks for positioning the CN, the algorithm first extracts the LVs, and automatically localizes the CN from this information guided by anatomic knowledge of the structure. The face validity of the algorithm was tested with 55 high-resolution T1-weighted magnetic resonance imaging (MRI) datasets, and segmentation results were overlaid onto the original image data for visual inspection. We further evaluated the algorithm by comparing automated segmentation results to a "gold standard" established by human experts for these 55 MR datasets. Quantitative comparison showed a high intraclass correlation between the algorithm and expert as well as high spatial overlap between the regions-of-interest (ROIs) generated from the two methods. The mean spatial overlap +/- standard deviation (defined by the intersection of the 2 ROIs divided by the union of the 2 ROIs) was equal to 0.873 +/- 0.0234. The algorithm has been incorporated into a public domain software program written in Java and, thus, has the potential to be of broad benefit to neuroimaging investigators interested in basal ganglia anatomy and function.
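
    The spatial overlap used here, defined as the intersection of the two ROIs divided by their union, can be computed directly; a minimal sketch:

    ```python
    import numpy as np

    def spatial_overlap(auto_mask, expert_mask):
        """Intersection over union of two binary ROIs, as used in the comparison
        against the expert 'gold standard' (the reported mean of 0.873)."""
        auto_mask = auto_mask.astype(bool)
        expert_mask = expert_mask.astype(bool)
        inter = np.logical_and(auto_mask, expert_mask).sum()
        union = np.logical_or(auto_mask, expert_mask).sum()
        return inter / union if union else 1.0
    ```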

  11. Phasing a segmented telescope

    NASA Astrophysics Data System (ADS)

    Paykin, Irina; Yacobi, Lee; Adler, Joan; Ribak, Erez N.

    2015-02-01

    A crucial part of segmented or multiple-aperture systems is control of the optical path difference between the segments or subapertures. In order to achieve optimal performance we have to phase subapertures to within a fraction of the wavelength, and this requires high accuracy of positioning for each subaperture. We present simulations and hardware realization of a simulated annealing algorithm in an active optical system with sparse segments. In order to align the optical system we applied the optimization algorithm to the image itself. The main advantage of this method over traditional correction methods is that wave-front-sensing hardware and software are no longer required, making the optical and mechanical system much simpler. The results of simulations and laboratory experiments demonstrate the ability of this optimization algorithm to correct both piston and tip-tilt errors.
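
    A generic simulated-annealing loop over per-segment piston/tip-tilt values, driven by an image-quality merit function, could be sketched as follows; the toy merit function and cooling schedule are assumptions, not the authors' setup.

    ```python
    import numpy as np

    def simulated_annealing(merit, x0, step=0.1, t0=1.0, cooling=0.97, iters=2000, seed=0):
        """Generic simulated annealing: maximize merit(x) (e.g. image sharpness)
        by randomly perturbing the piston/tip/tilt values of each segment."""
        rng = np.random.default_rng(seed)
        x, best = x0.copy(), x0.copy()
        f, f_best = merit(x), merit(x0)
        t = t0
        for _ in range(iters):
            cand = x + rng.normal(scale=step, size=x.shape)  # perturb actuators
            f_cand = merit(cand)
            # Accept improvements always, worse moves with Boltzmann probability.
            if f_cand > f or rng.random() < np.exp((f_cand - f) / t):
                x, f = cand, f_cand
                if f > f_best:
                    best, f_best = x.copy(), f
            t *= cooling                                     # cool the temperature
        return best, f_best

    # Toy merit: sharpness is maximal when all pistons and tilts are zero (phased).
    merit = lambda x: -np.sum(x ** 2)
    x0 = np.random.default_rng(1).uniform(-1, 1, size=(6, 3))  # 6 segments x (piston, tip, tilt)
    print(simulated_annealing(merit, x0)[1])
    ```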

  12. [Laurentius on anatomy].

    PubMed

    Sawai, Tadashi; Sakai, Tatsuo

    2005-03-01

    Andreas Laurentius wrote Opera anatomica (1593) and Historia anatomica (1600). These books were composed of two types of chapters: 'historia' and 'quaestio'. His descriptions are not original, but taken from other anatomists. 'Historia' describes the structure, action and usefulness of the body parts clarified after dissection. 'Quaestio' treats those questions which could not be solved by dissection alone. Laurentius cited many previous, contradicting interpretations of these questions and chose the best interpretation for each individual question. In most cases, Laurentius preferred Galen's view. Historia anatomica retained almost all the 'historia' and 'quaestio' from Opera anatomica, and added some new 'historia' and 'quaestio', especially in regard to the components of the body, such as ligaments, membranes, vessels, nerves and glands. Other new 'historia' and 'quaestio' in Historia anatomica concerned several topics on anatomy in general, comprehensively analyzing the history of anatomy, the methods of anatomy, and the usefulness of anatomy. Historia anatomica reviewed what anatomy was by describing in 'historia' what was known and in 'quaestio' what was unresolved. Until now, Laurentius's anatomical works have attracted little attention because his descriptions contained few original findings and depended on previous books. However, the important fact that Historia anatomica was very popular in the 17th century tells us that people needed a non-original, handbook-style textbook of this kind. Historia anatomica is important for further research on the propagation of anatomical knowledge from professional anatomists to non-professionals in the 17th century.

  13. Geometry Guided Segmentation

    NASA Astrophysics Data System (ADS)

    Dunn, Stanley M.; Liang, Tajen

    1989-03-01

    Our overall goal is to develop an image understanding system for automatically interpreting dental radiographs. This paper describes the module that integrates the intrinsic image data to form the region adjacency graph that represents the image. The specific problem is to develop a robust method for segmenting the image into small regions that do not overlap anatomical boundaries. Classical algorithms for finding homogeneous regions (i.e., 2-class segmentation or connected components) will not always yield correct results, since blurred edges can cause adjacent anatomical regions to be labeled as one region. This defect is a problem in this and other applications where an object count is necessary. Our solution to the problem is to guide the segmentation by intrinsic properties of the constituent objects. The module takes a set of intrinsic images as arguments. A connected-components-like algorithm is performed, but the connectivity relation is not 4- or 8-neighbor connectivity in binary images; the connectivity is defined in terms of the intrinsic image data. We describe both the classical method and the modified segmentation procedures, and present experiments using both algorithms. Our experiments show that for dental radiographs, a segmentation using gray-level data in conjunction with the edges of the surfaces of teeth gives a robust and reliable segmentation.

  14. Keypoint Transfer Segmentation

    PubMed Central

    Toews, M.; Langs, G.; Wells, W.; Golland, P.

    2015-01-01

    We present an image segmentation method that transfers label maps of entire organs from the training images to the novel image to be segmented. The transfer is based on sparse correspondences between keypoints that represent automatically identified distinctive image locations. Our segmentation algorithm consists of three steps: (i) keypoint matching, (ii) voting-based keypoint labeling, and (iii) keypoint-based probabilistic transfer of organ label maps. We introduce generative models for the inference of keypoint labels and for image segmentation, where keypoint matches are treated as a latent random variable and are marginalized out as part of the algorithm. We report segmentation results for abdominal organs in whole-body CT and in contrast-enhanced CT images. The accuracy of our method compares favorably to common multi-atlas segmentation while offering a speed-up of about three orders of magnitude. Furthermore, keypoint transfer requires no training phase or registration to an atlas. The algorithm’s robustness enables the segmentation of scans with highly variable field-of-view. PMID:26221677

  15. [Viennese school of anatomy].

    PubMed

    Angetter, D C

    1999-10-01

    Anatomical science played a minor role in Vienna for centuries until Gerard van Swieten, in the 18th century, recognized the importance of anatomy for medical education. In the 19th century the anatomical school at the University of Vienna developed to its height. A new building and a collection of preparations attracted a large number of students. Finally, a second department of anatomy was established. Political ideologies started to affect this institution at the beginning of the 20th century. Anti-Semitism emerged and caused uproars and fights among the students of the two departments. In 1938 both were united under Eduard Pernkopf, a dedicated Nazi and chairman of the department of anatomy, Dean of the medical faculty (1938-1943) and later President of the University of Vienna (1943-1945). He was suspected of using cadavers of executed persons for the purpose of research and education.

  16. Exercises in anatomy: cardiac isomerism.

    PubMed

    Anderson, Robert H; Sarwark, Anne E; Spicer, Diane E; Backer, Carl L

    2014-01-01

    It is well recognized that the patients with the most complex cardiac malformations are those with so-called visceral heterotaxy. At present, it remains a fact that most investigators segregate these patients on the basis of their splenic anatomy, describing syndromes of so-called asplenia and polysplenia. It has also been known for quite some time, nonetheless, that the morphology of the tracheobronchial tree is usually isomeric in the setting of heterotaxy. And it has been shown that the isomerism found in terms of bronchial arrangement correlates in a better fashion with the cardiac anatomy than does the presence of multiple spleens, or the absence of any splenic tissue. In this exercise in anatomy, we use hearts from the Idriss archive of Lurie Children's Hospital in Chicago to demonstrate the isomeric features found in the hearts obtained from patients known to have had heterotaxy. We first demonstrate the normal arrangements, showing how it is the extent of the pectinate muscles in the atrial appendages relative to the atrioventricular junctions that distinguishes between morphologically right and left atrial chambers. We also show the asymmetry of the normal bronchial tree, and the relationships of the first bronchial branches to the pulmonary arteries supplying the lower lobes of the lungs. We then demonstrate that diagnosis of multiple spleens requires the finding of splenic tissue on either side of the dorsal mesogastrium. Turning to hearts obtained from patients with heterotaxy, we illustrate isomeric right and left atrial appendages. We emphasize that it is only the appendages that are universally isomeric, but point out that other features support the notion of cardiac isomerism. We then show that description also requires a full account of veno-atrial connections, since these can seemingly be mirror-imaged when the arrangement within the heart is one of isomerism of the atrial appendages. We show how failure to recognize the presence of such isomeric

  17. Comparison of a gross anatomy laboratory to online anatomy software for teaching anatomy.

    PubMed

    Mathiowetz, Virgil; Yu, Chih-Huang; Quake-Rapp, Cindee

    2016-01-01

    This study was designed to assess the grades, self-perceived learning, and satisfaction between occupational therapy students who used a gross anatomy laboratory versus online anatomy software (AnatomyTV) as tools to learn anatomy at a large public university and a satellite campus in the mid-western United States. The goal was to determine if equivalent learning outcomes could be achieved regardless of the learning tool used. In addition, it was important to determine why students chose the gross anatomy laboratory over online AnatomyTV. A two-group, post-test-only design was used, with data gathered at the end of the course. Primary outcomes were students' grades, self-perceived learning, and satisfaction. In addition, a survey was used to collect descriptive data. One cadaver prosection was available for every four students in the gross anatomy laboratory. AnatomyTV was available online through the university library. At the conclusion of the course, the gross anatomy laboratory group had a significantly higher grade percentage, self-perceived learning, and satisfaction than the AnatomyTV group. However, the practical significance of the difference is debatable. The significantly greater time spent in the gross anatomy laboratory during the laboratory portion of the course may have affected the study outcomes. In addition, some students may regard the difference between a B+ and an A- grade as not practically significant. Further research needs to be conducted to identify what specific anatomy teaching resources are most effective beyond prosection for students without access to a gross anatomy laboratory.

  18. Chromosomes and clinical anatomy.

    PubMed

    Gardner, Robert James McKinlay

    2016-07-01

    Chromosome abnormalities may cast light on the nature of mechanisms whereby normal anatomy evolves, and abnormal anatomy arises. Correlating genotype to phenotype is an exercise in which the geneticist and the anatomist can collaborate. The increasing power of the new genetic methodologies is enabling an increasing precision in the delineation of chromosome imbalances, even to the nucleotide level; but the classical skills of careful observation and recording remain as crucial as they always have been. Clin. Anat. 29:540-546, 2016. © 2016 Wiley Periodicals, Inc.

  19. Anatomy for biomedical engineers.

    PubMed

    Carmichael, Stephen W; Robb, Richard A

    2008-01-01

    There is a perceived need for anatomy instruction for graduate students enrolled in a biomedical engineering program. This appeared especially important for students interested in and using medical images. These students typically did not have a strong background in biology. The authors arranged for students to dissect regions of the body that were of particular interest to them. Following completion of all the dissections, the students presented what they had learned to the entire class in the anatomy laboratory. This course has fulfilled an important need for our students.

  20. Learning Anatomy Enhances Spatial Ability

    ERIC Educational Resources Information Center

    Vorstenbosch, Marc A. T. M.; Klaassen, Tim P. F. M.; Donders, A. R. T.; Kooloos, Jan G. M.; Bolhuis, Sanneke M.; Laan, Roland F. J. M.

    2013-01-01

    Spatial ability is an important factor in learning anatomy. Students with high scores on a mental rotation test (MRT) systematically score higher on anatomy examinations. This study aims to investigate if learning anatomy also oppositely improves the MRT-score. Five hundred first year students of medicine ("n" = 242, intervention) and…

  1. Illustrated Speech Anatomy.

    ERIC Educational Resources Information Center

    Shearer, William M.

    Written for students in the fields of speech correction and audiology, the text deals with the following: structures involved in respiration; the skeleton and the processes of inhalation and exhalation; phonation and pitch, the larynx, and esophageal speech; muscles involved in articulation; muscles involved in resonance; and the anatomy of the…

  2. Anatomy of the Honeybee

    ERIC Educational Resources Information Center

    Postiglione, Ralph

    1977-01-01

    In this insect morphology exercise, students study the external anatomy of the worker honeybee. The structures listed and illustrated are discussed in relation to their functions. A goal of the exercise is to establish the bee as a well-adapted, social insect. (MA)

  3. The Anatomy Puzzle Book.

    ERIC Educational Resources Information Center

    Jacob, Willis H.; Carter, Robert, III

    This document features review questions, crossword puzzles, and word search puzzles on human anatomy. Topics include: (1) Anatomical Terminology; (2) The Skeletal System and Joints; (3) The Muscular System; (4) The Nervous System; (5) The Eye and Ear; (6) The Circulatory System and Blood; (7) The Respiratory System; (8) The Urinary System; (9) The…

  4. Anatomy for Biomedical Engineers

    ERIC Educational Resources Information Center

    Carmichael, Stephen W.; Robb, Richard A.

    2008-01-01

    There is a perceived need for anatomy instruction for graduate students enrolled in a biomedical engineering program. This appeared especially important for students interested in and using medical images. These students typically did not have a strong background in biology. The authors arranged for students to dissect regions of the body that…

  5. Detailed Vascular Anatomy of the Human Retina by Projection-Resolved Optical Coherence Tomography Angiography

    PubMed Central

    Campbell, J. P.; Zhang, M.; Hwang, T. S.; Bailey, S. T.; Wilson, D. J.; Jia, Y.; Huang, D.

    2017-01-01

    Optical coherence tomography angiography (OCTA) is a noninvasive method of 3D imaging of the retinal and choroidal circulations. However, vascular depth discrimination is limited by superficial vessels projecting flow signal artifact onto deeper layers. The projection-resolved (PR) OCTA algorithm improves depth resolution by removing projection artifact while retaining in-situ flow signal from real blood vessels in deeper layers. This novel technology allowed us to study the normal retinal vasculature in vivo with better depth resolution than previously possible. Our investigation in normal human volunteers revealed the presence of 2 to 4 distinct vascular plexuses in the retina, depending on location relative to the optic disc and fovea. The vascular pattern in these retinal plexuses and interconnecting layers are consistent with previous histologic studies. Based on these data, we propose an improved system of nomenclature and segmentation boundaries for detailed 3-dimensional retinal vascular anatomy by OCTA. This could serve as a basis for future investigation of both normal retinal anatomy, as well as vascular malformations, nonperfusion, and neovascularization. PMID:28186181

  6. Detailed Vascular Anatomy of the Human Retina by Projection-Resolved Optical Coherence Tomography Angiography

    NASA Astrophysics Data System (ADS)

    Campbell, J. P.; Zhang, M.; Hwang, T. S.; Bailey, S. T.; Wilson, D. J.; Jia, Y.; Huang, D.

    2017-02-01

    Optical coherence tomography angiography (OCTA) is a noninvasive method of 3D imaging of the retinal and choroidal circulations. However, vascular depth discrimination is limited by superficial vessels projecting flow signal artifact onto deeper layers. The projection-resolved (PR) OCTA algorithm improves depth resolution by removing projection artifact while retaining in-situ flow signal from real blood vessels in deeper layers. This novel technology allowed us to study the normal retinal vasculature in vivo with better depth resolution than previously possible. Our investigation in normal human volunteers revealed the presence of 2 to 4 distinct vascular plexuses in the retina, depending on location relative to the optic disc and fovea. The vascular pattern in these retinal plexuses and interconnecting layers are consistent with previous histologic studies. Based on these data, we propose an improved system of nomenclature and segmentation boundaries for detailed 3-dimensional retinal vascular anatomy by OCTA. This could serve as a basis for future investigation of both normal retinal anatomy, as well as vascular malformations, nonperfusion, and neovascularization.

  7. Bayesian Parameter Estimation and Segmentation in the Multi-Atlas Random Orbit Model.

    PubMed

    Tang, Xiaoying; Oishi, Kenichi; Faria, Andreia V; Hillis, Argye E; Albert, Marilyn S; Mori, Susumu; Miller, Michael I

    2013-01-01

    This paper examines the multiple atlas random diffeomorphic orbit model in Computational Anatomy (CA) for parameter estimation and segmentation of subcortical and ventricular neuroanatomy in magnetic resonance imagery. We assume that there exist multiple magnetic resonance image (MRI) atlases, each atlas containing a collection of locally-defined charts in the brain generated via manual delineation of the structures of interest. We focus on maximum a posteriori estimation of high dimensional segmentations of MR within the class of generative models representing the observed MRI as a conditionally Gaussian random field, conditioned on the atlas charts and the diffeomorphic change of coordinates of each chart that generates it. The charts and their diffeomorphic correspondences are unknown and viewed as latent or hidden variables. We demonstrate that the expectation-maximization (EM) algorithm arises naturally, yielding the likelihood-fusion equation which the a posteriori estimator of the segmentation labels maximizes. The likelihoods being fused are modeled as conditionally Gaussian random fields with mean fields a function of each atlas chart under its diffeomorphic change of coordinates onto the target. The conditional-mean in the EM algorithm specifies the convex weights with which the chart-specific likelihoods are fused. The multiple atlases with the associated convex weights imply that the posterior distribution is a multi-modal representation of the measured MRI. Segmentation results for subcortical and ventricular structures of subjects, within populations of demented subjects, are demonstrated, including the use of multiple atlases across multiple diseased groups.
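
    The convex, per-voxel fusion weights can be illustrated with a crude locally weighted label-fusion sketch in which intensity agreement under a Gaussian likelihood stands in for the EM-derived conditional means; this is not the paper's estimator, only a rough analogue.

    ```python
    import numpy as np

    def fuse_atlas_labels(target_img, warped_imgs, warped_labels, n_labels, sigma=25.0):
        """Locally weighted multi-atlas label fusion (a crude stand-in for the
        EM likelihood fusion described in the paper).

        target_img:    (X, Y, Z) target intensities.
        warped_imgs:   list of atlas intensity images deformed onto the target.
        warped_labels: list of matching deformed atlas label maps (ints in [0, n_labels)).
        Weights are convex (sum to 1 per voxel) and favour atlases whose deformed
        intensities agree with the target under a Gaussian likelihood.
        """
        weights = np.stack([np.exp(-((target_img - w) ** 2) / (2 * sigma ** 2))
                            for w in warped_imgs])             # (A, X, Y, Z)
        weights /= weights.sum(axis=0, keepdims=True) + 1e-12  # convex weights per voxel

        votes = np.zeros((n_labels,) + target_img.shape)
        for w, lab in zip(weights, warped_labels):
            for l in range(n_labels):
                votes[l] += w * (lab == l)                     # weighted vote per label
        return votes.argmax(axis=0)                            # fused label map
    ```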

  8. Segmentation of anatomical branching structures based on texture features and conditional random field

    NASA Astrophysics Data System (ADS)

    Nuzhnaya, Tatyana; Bakic, Predrag; Kontos, Despina; Megalooikonomou, Vasileios; Ling, Haibin

    2012-02-01

    This work is part of our ongoing study aimed at understanding the relation between the topology of anatomical branching structures and the underlying image texture. Morphological variability of the breast ductal network is associated with subsequent development of abnormalities in patients with nipple discharge, such as papilloma, breast cancer and atypia. In this work, we investigate complex dependence among ductal components to perform segmentation, the first step for analyzing the topology of ductal lobes. Our automated framework is based on incorporating a conditional random field with texture descriptors of skewness, coarseness, contrast, energy and fractal dimension. These features are selected to capture the architectural variability of the enhanced ducts by encoding spatial variations between pixel patches in the galactographic image. The segmentation algorithm was applied to a dataset of 20 x-ray galactograms obtained at the Hospital of the University of Pennsylvania. We compared the performance of the proposed approach with fully and semi-automated segmentation algorithms based on neural network classification, fuzzy-connectedness, vesselness filter and graph cuts. Global consistency error and confusion matrix analysis were used as accuracy measurements. For the proposed approach, the true positive rate was higher and the false negative rate was significantly lower compared to the other fully automated methods. This indicates that segmentation based on a CRF incorporating texture descriptors has the potential to efficiently support the analysis of the complex topology of the ducts and aid in the development of realistic breast anatomy phantoms.

  9. Parallel Fuzzy Segmentation of Multiple Objects*

    PubMed Central

    Garduño, Edgar; Herman, Gabor T.

    2009-01-01

    The usefulness of fuzzy segmentation algorithms based on fuzzy connectedness principles has been established in numerous publications. New technologies are capable of producing larger and larger datasets, and this causes the sequential implementations of fuzzy segmentation algorithms to be time-consuming. We have adapted a sequential fuzzy segmentation algorithm to multi-processor machines. We demonstrate the efficacy of such a distributed fuzzy segmentation algorithm by testing it with large datasets (of the order of 50 million points/voxels/items): a speed-up factor of approximately five over the sequential implementation seems to be the norm. PMID:19444333

  10. Robust segmentation using non-parametric snakes with multiple cues for applications in radiation oncology

    NASA Astrophysics Data System (ADS)

    Kalpathy-Cramer, Jayashree; Ozertem, Umut; Hersh, William; Fuss, Martin; Erdogmus, Deniz

    2009-02-01

    Radiation therapy is one of the most effective cancer treatments and is used for about half of all people with cancer. A critical goal in radiation therapy is to deliver optimal radiation doses to the perceived tumor while sparing the surrounding healthy tissues. Radiation oncologists often manually delineate normal and diseased structures on 3D-CT scans, a time consuming task. We present a segmentation algorithm using non-parametric snakes and principal curves that can be used in an automatic or semi-supervised fashion. It provides fast segmentation that is robust with respect to noisy edges and does not require the user to optimize a variety of parameters, unlike many segmentation algorithms. It allows multiple cues to be incorporated easily for the purposes of estimating the edge probability density. These cues, including texture, intensity and shape priors, can be used simultaneously to delineate tumors and normal anatomy, thereby increasing the robustness of the algorithm. The notion of principal curves is used to interpolate between data points in sparse areas. We compare the results using a non-parametric snake technique with a gold standard consisting of manually delineated structures for tumors as well as normal organs.

  11. Automated segmentation of cardiac visceral fat in low-dose non-contrast chest CT images

    NASA Astrophysics Data System (ADS)

    Xie, Yiting; Liang, Mingzhu; Yankelevitz, David F.; Henschke, Claudia I.; Reeves, Anthony P.

    2015-03-01

    Cardiac visceral fat was segmented from low-dose non-contrast chest CT images using a fully automated method. Cardiac visceral fat is defined as the fatty tissue surrounding the heart region, enclosed by the lungs and posterior to the sternum. It is measured by constraining the heart region with an Anatomy Label Map that contains robust segmentations of the lungs and other major organs and estimating the fatty tissue within this region. The algorithm was evaluated on 124 low-dose and 223 standard-dose non-contrast chest CT scans from two public datasets. Based on visual inspection, 343 cases had good cardiac visceral fat segmentation. For quantitative evaluation, manual markings of cardiac visceral fat regions were made in 3 image slices for 45 low-dose scans and the Dice similarity coefficient (DSC) was computed. The automated algorithm achieved an average DSC of 0.93. Cardiac visceral fat volume (CVFV), heart region volume (HRV) and their ratio were computed for each case. The correlation between cardiac visceral fat measurement and coronary artery and aortic calcification was also evaluated. Results indicated that the automated algorithm for measuring cardiac visceral fat volume may be an alternative to traditional manual assessment of thoracic fat content when evaluating cardiovascular disease risk.
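
    Once a heart-region mask is available, the fat measurement itself reduces to thresholding in a fat Hounsfield-unit window; the sketch below uses a commonly quoted range of about -190 to -30 HU, which is an assumption rather than the paper's value.

    ```python
    import numpy as np

    # Fat HU window: a commonly quoted literature range, not the paper's setting.
    FAT_HU_MIN, FAT_HU_MAX = -190, -30

    def cardiac_visceral_fat(ct_hu, heart_region_mask, voxel_volume_mm3):
        """Estimate cardiac visceral fat volume inside a heart-region mask
        (in the paper this mask would come from the Anatomy Label Map)."""
        fat = heart_region_mask & (ct_hu >= FAT_HU_MIN) & (ct_hu <= FAT_HU_MAX)
        cvfv = fat.sum() * voxel_volume_mm3                  # cardiac visceral fat volume
        hrv = heart_region_mask.sum() * voxel_volume_mm3     # heart region volume
        return cvfv, hrv, (cvfv / hrv if hrv else 0.0)       # CVFV, HRV, and their ratio
    ```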

  12. Clinical Anatomy of the Liver: Review of the 19th Meeting of the Japanese Research Society of Clinical Anatomy

    PubMed Central

    Sakamoto, Yoshihiro; Kokudo, Norihiro; Kawaguchi, Yoshikuni; Akita, Keiichi

    2017-01-01

    Precise clinical knowledge of liver anatomy is required to safely perform a hepatectomy, for both open and laparoscopic surgery. At the 19th meeting of the Japanese Research Society of Clinical Anatomy (JRSCA), we conducted special symposia on essential issues of liver surgery, such as the history of hepatic segmentation, the glissonean pedicle approach, application of 3-D imaging simulation and fluorescent imaging using indocyanine green solution, a variety of segmentectomies including caudate lobectomy, the associating liver partition and portal vein embolization for staged hepatectomy, and harvesting liver grafts for living donor liver transplantation. The present review article provides useful information for liver surgeons and anatomic researchers. PMID:28275581

  13. Effect of blood vessel segmentation on the outcome of electroporation-based treatments of liver tumors.

    PubMed

    Marčan, Marija; Kos, Bor; Miklavčič, Damijan

    2015-01-01

    Electroporation-based treatments rely on increasing the permeability of the cell membrane by high voltage electric pulses applied to tissue via electrodes. To ensure that the whole tumor is covered with sufficiently high electric field, accurate numerical models are built based on individual patient anatomy. Extraction of patient's anatomy through segmentation of medical images inevitably produces some errors. In order to ensure the robustness of treatment planning, it is necessary to evaluate the potential effect of such errors on the electric field distribution. In this work we focus on determining the effect of errors in automatic segmentation of hepatic vessels on the electric field distribution in electroporation-based treatments in the liver. First, a numerical analysis was performed on a simple 'sphere and cylinder' model for tumors and vessels of different sizes and relative positions. Second, an analysis of two models extracted from medical images of real patients in which we introduced variations of an error of the automatic vessel segmentation method was performed. The results obtained from a simple model indicate that ignoring the vessels when calculating the electric field distribution can cause insufficient coverage of the tumor with electric fields. Results of this study indicate that this effect happens for small (10 mm) and medium-sized (30 mm) tumors, especially in the absence of a central electrode inserted in the tumor. The results obtained from the real-case models also show higher negative impact of automatic vessel segmentation errors on the electric field distribution when the central electrode is absent. However, the average error of the automatic vessel segmentation did not have an impact on the electric field distribution if the central electrode was present. This suggests the algorithm is robust enough to be used in creating a model for treatment parameter optimization, but with a central electrode.

  14. Three-dimensional segmentation of the tumor in computed tomographic images of neuroblastoma.

    PubMed

    Deglint, Hanford J; Rangayyan, Rangaraj M; Ayres, Fábio J; Boag, Graham S; Zuffo, Marcelo K

    2007-09-01

    Segmentation of the tumor in neuroblastoma is complicated by the fact that the mass is almost always heterogeneous in nature; furthermore, viable tumor, necrosis, and normal tissue are often intermixed. Tumor definition and diagnosis require the analysis of the spatial distribution and Hounsfield unit (HU) values of voxels in computed tomography (CT) images, coupled with a knowledge of normal anatomy. Segmentation and analysis of the tissue composition of the tumor can assist in quantitative assessment of the response to therapy and in the planning of the delayed surgery for resection of the tumor. We propose methods to achieve 3-dimensional segmentation of the neuroblastic tumor. In our scheme, some of the normal structures expected in abdominal CT images are delineated and removed from further consideration; the remaining parts of the image volume are then examined for tumor mass. Mathematical morphology, fuzzy connectivity, and other image processing tools are deployed for this purpose. Expert knowledge provided by a radiologist in the form of the expected structures and their shapes, HU values, and radiological characteristics are incorporated into the segmentation algorithm. In this preliminary study, the methods were tested with 10 CT exams of four cases from the Alberta Children's Hospital. False-negative error rates of less than 12% were obtained in eight of 10 exams; however, seven of the exams had false-positive error rates of more than 20% with respect to manual segmentation of the tumor by a radiologist.

  15. Optimizing boundary detection via Simulated Search with applications to multi-modal heart segmentation.

    PubMed

    Peters, J; Ecabert, O; Meyer, C; Kneser, R; Weese, J

    2010-02-01

    Segmentation of medical images can be achieved with the help of model-based algorithms. Reliable boundary detection is a crucial component for obtaining robust and accurate segmentation results and for enabling full automation. This is especially important if the anatomy being segmented is too variable to initialize a mean shape model such that all surface regions are close to the desired contours. Several boundary detection algorithms are widely used in the literature. Most use some trained image appearance model to characterize and detect the desired boundaries. Although parameters of the boundary detection can vary over the model surface and are trained on images, their performance (i.e., accuracy and reliability of boundary detection) can only be assessed as an integral part of the entire segmentation algorithm. In particular, assessment of boundary detection cannot be done locally and independently of model parameterization and the internal energies controlling geometric model properties. In this paper, we propose a new method for the local assessment of boundary detection called Simulated Search. This method takes any boundary detection function and evaluates its performance for a single model landmark in terms of an estimated geometric boundary detection error. In consequence, boundary detection can be optimized per landmark during model training. We demonstrate the success of the method for cardiac image segmentation. In particular we show that the Simulated Search improves the capture range and the accuracy of the boundary detection compared to a traditional training scheme. We also illustrate how the Simulated Search can be used to identify suitable classes of features when addressing a new segmentation task. Finally, we show that the Simulated Search enables multi-modal heart segmentation using a single algorithmic framework. On computed tomography and magnetic resonance images, average segmentation errors (surface-to-surface distances) for the four chambers and

  16. Anatomy of female continence.

    PubMed

    Sampselle, C M; DeLancey, J O

    1998-03-01

    Various muscle, connective tissue, and neurologic structures within the pelvic floor play critical roles in the maintenance of both urinary and fecal continence. Recent advances in technology, combined with greater precision during anatomic study, have expanded our understanding of the role played by the pelvic floor in maintaining continence. The goal of this article is to summarize recent research on female pelvic anatomy, with a particular emphasis on the evidence base related to urinary incontinence. The content is organized to accomplish three aims: (1) identify, within the context of pelvic floor anatomy, the structures that comprise the urinary continence system, (2) describe the functional dynamics of urinary continence, including factors in resting urethral pressure and pressure transmission, and (3) present the rationale, technique, and interpretation of various methods of measuring pelvic floor function.

  17. Authenticity in Anatomy Art.

    PubMed

    Adkins, Jessica

    2017-01-12

    The aim of this paper is to observe the evolution and evaluate the 'realness' and authenticity in Anatomy Art, an art form I define as one which incorporates accurate anatomical representations of the human body with artistic expression. I examine the art of 17th century wax anatomical models, the preservations of Frederik Ruysch, and Gunther von Hagens' Body Worlds plastinates, giving consideration to authenticity of both body and art. I give extra consideration to the works of Body Worlds since the exhibit creator believes he has created anatomical specimens with more educational value and bodily authenticity than ever before. Ultimately, I argue that von Hagens fails to offer Anatomy Art 'real human bodies,' and that the lack of bodily authenticity of his plastinates results in his creations being less pedagogic than he claims.

  18. Human ocular anatomy.

    PubMed

    Kels, Barry D; Grzybowski, Andrzej; Grant-Kels, Jane M

    2015-01-01

    We review the normal anatomy of the human globe, eyelids, and lacrimal system. This contribution explores both the form and function of numerous anatomic features of the human ocular system, which are vital to a comprehensive understanding of the pathophysiology of many oculocutaneous diseases. The review concludes with a reference glossary of selective ophthalmologic terms that are relevant to a thorough understanding of many oculocutaneous disease processes.

  19. Automatic segmentation of cartilage in high-field magnetic resonance images of the knee joint with an improved voxel-classification-driven region-growing algorithm using vicinity-correlated subsampling.

    PubMed

    Öztürk, Ceyda Nur; Albayrak, Songül

    2016-05-01

    Anatomical structures that can deteriorate over time, such as cartilage, can be successfully delineated with voxel-classification approaches in magnetic resonance (MR) images. However, segmentation via voxel classification is a computationally demanding process for high-field MR images with high spatial resolutions. In this study, the whole femoral, tibial, and patellar cartilage compartments in the knee joint were automatically segmented in high-field MR images obtained from the Osteoarthritis Initiative using a voxel-classification-driven region-growing algorithm with a sample-expand method. The computational complexity of the classification was alleviated via subsampling of the background voxels in the training MR images and selecting a small subset of significant features, taking into consideration systems with limited memory and processing power. Although subsampling of the voxels may lead to a loss of generality of the training models and a decrease in segmentation accuracies, effective subsampling strategies can overcome these problems. Therefore, different subsampling techniques, which involve uniform, Gaussian, vicinity-correlated (VC) sparse, and VC dense subsampling, were used to generate four training models. The segmentation system was tested using 10 training and 23 testing MR images, and the effects of different training models on segmentation accuracies were investigated. Experimental results showed that the highest mean Dice similarity coefficient (DSC) values for all compartments were obtained when the training models of the VC sparse subsampling technique were used. Mean DSC values optimized with this technique were 82.6%, 83.1%, and 72.6% for the femoral, tibial, and patellar cartilage compartments, respectively, while mean sensitivities were 79.9%, 84.0%, and 71.5%, and mean specificities were 99.8%, 99.9%, and 99.9%.
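
    Background subsampling of the training voxels might be sketched as below, where the 'vicinity-correlated' variant is read loosely as keeping only background voxels near the cartilage; the exact definition and parameters used in the paper may differ.

    ```python
    import numpy as np
    from scipy import ndimage

    def subsample_training_voxels(feature_vol, label_vol, keep_fraction=0.05,
                                  vicinity=5, mode="vc", seed=0):
        """Subsample background voxels of one training image before classification.

        feature_vol: (X, Y, Z, F) per-voxel features; label_vol: (X, Y, Z), 0 = background.
        mode="uniform": keep a random fraction of all background voxels.
        mode="vc":      keep a random fraction of background voxels lying within
                        `vicinity` voxels of cartilage (a rough reading of
                        'vicinity-correlated' subsampling; parameters are guesses).
        """
        rng = np.random.default_rng(seed)
        fg = label_vol > 0
        if mode == "vc":
            near = ndimage.binary_dilation(fg, iterations=vicinity) & ~fg
            bg_idx = np.flatnonzero(near)
        else:
            bg_idx = np.flatnonzero(~fg)
        keep_bg = rng.choice(bg_idx, size=max(1, int(keep_fraction * bg_idx.size)),
                             replace=False)
        keep = np.concatenate([np.flatnonzero(fg), keep_bg])
        X = feature_vol.reshape(-1, feature_vol.shape[-1])[keep]
        y = label_vol.ravel()[keep]
        return X, y
    ```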

  20. [The French lessons of anatomy].

    PubMed

    Bouchet, Alain

    2003-01-01

    The "Lessons of Anatomy" can be considered as a step of Medicine to Art. For several centuries the exhibition of a corpse's dissection was printed on the title-page of published works. Since the seventeenth century, the "Lessons of Anatomy" became a picture on the title-page in order to highlight the well-known names of the european anatomists. The study is limited to the French Lessons of Anatomy found in books or pictures after the invention of printing.

  1. Optimal segmentation and packaging process

    DOEpatents

    Kostelnik, Kevin M.; Meservey, Richard H.; Landon, Mark D.

    1999-01-01

    A process for improving packaging efficiency uses three-dimensional, computer-simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D&D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer-simulated facility model of the contaminated items is created. The contaminated items are differentiated. The optimal locations, orientations, and sequence of the segmentation and packaging of the contaminated items are determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are then actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded.

  2. Neuro-Fuzzy Phasing of Segmented Mirrors

    NASA Technical Reports Server (NTRS)

    Olivier, Philip D.

    1999-01-01

    A new phasing algorithm for segmented mirrors based on neuro-fuzzy techniques is described. A unique feature of this algorithm is the introduction of an observer bank. Its effectiveness is tested in a very simple model with remarkable success. The new algorithm requires much less computational effort than existing algorithms and therefore promises to be quite useful when implemented on more complex models.

  3. Who Is Repeating Anatomy? Trends in an Undergraduate Anatomy Course

    ERIC Educational Resources Information Center

    Schutte, Audra F.

    2016-01-01

    Anatomy courses frequently serve as prerequisites or requirements for health sciences programs. Due to the challenging nature of anatomy, each semester there are students remediating the course (enrolled in the course for a second time), attempting to earn a grade competitive for admissions into a program of study. In this retrospective study,…

  4. [Pandora's box of anatomy].

    PubMed

    Weinberg, Uri; Reis, Shmuel

    2008-05-01

    Physicians in Nazi Germany were among the first to join the Nazi party and the SS, and were considered passionate and active supporters of the regime. Their actions included the development and implementation of the racial theory, thus legitimizing the development of the Nazi genocide plan, leadership and execution of the sterilization and euthanasia programs, as well as atrocious human experimentation. Nazi law allowed the use of humans and their remains in research institutions. One of the physicians whose involvement in the Nazi regime was particularly significant was Eduard Pernkopf. He was the head of the Anatomy Institute at the University of Vienna, and later became the president of the university. Pernkopf was a member of the Nazi party, promoted the idea of "racial hygiene", and in 1938 "purified" the university of all Jews. In Pernkopf's atlas of anatomy, the illustrators expressed their sympathy for Nazism by adding Nazi symbols to their illustrations. In light of the demand stated by the "Yad Vashem" Institute, the sources of the atlas were investigated. The report, which was published in 1998, determined that Pernkopf's Anatomy Institute received almost 1400 corpses from the Gestapo's execution chambers. Copies of Pernkopf's atlas, accidentally discovered at the Rappaport School of Medicine in the Technion, led to dilemmas concerning similar works with a common background. The books initiated a wide debate in Israel and abroad regarding ethical aspects of using information originating in Nazi crimes. Moreover, these findings are evidence of the evil to which science and medicine can give rise when they are treated as an unshakable authority.

  5. EVENT SEGMENTATION

    PubMed Central

    Zacks, Jeffrey M.; Swallow, Khena M.

    2012-01-01

    One way to understand something is to break it up into parts. New research indicates that segmenting ongoing activity into meaningful events is a core component of ongoing perception, with consequences for memory and learning. Behavioral and neuroimaging data suggest that event segmentation is automatic and that people spontaneously segment activity into hierarchically organized parts and sub-parts. This segmentation depends on the bottom-up processing of sensory features such as movement, and on the top-down processing of conceptual features such as actors’ goals. How people segment activity affects what they remember later; as a result, those who identify appropriate event boundaries during perception tend to remember more and learn more proficiently. PMID:22468032

  6. Combining prior day contours to improve automated prostate segmentation

    SciTech Connect

    Godley, Andrew; Sheplan Olsen, Lawrence J.; Stephans, Kevin; Zhao Anzi

    2013-02-15

    Purpose: To improve the accuracy of automatically segmented prostate, rectum, and bladder contours required for online adaptive therapy. The contouring accuracy on the current image guidance (image-guided radiation therapy, IGRT) scan is improved by combining contours from earlier IGRT scans via the simultaneous truth and performance level estimation (STAPLE) algorithm. Methods: Six IGRT prostate patients treated with daily kilovoltage (kV) cone-beam CT (CBCT) had their original plan CT and nine CBCTs contoured by the same physician. Three types of automated contours were produced for analysis. (1) Plan: the plan CT is deformably registered to each CBCT and the resulting deformation field is used to morph the plan contours to match the CBCT anatomy. (2) Previous: the contour set drawn by the physician on the previous day's CBCT is similarly deformed to match the current CBCT anatomy. (3) STAPLE: the contours drawn by the physician on each prior CBCT and the plan CT are deformed to match the CBCT anatomy, producing multiple contour sets; these sets are combined using the STAPLE algorithm into one optimal set. Results: The average Dice coefficient (DC) with the original physician-drawn CBCT contours for the plan, previous, and STAPLE contours, respectively, was: Bladder: 0.81 ± 0.13, 0.91 ± 0.06, and 0.92 ± 0.06; Prostate: 0.75 ± 0.08, 0.82 ± 0.05, and 0.84 ± 0.05; Rectum: 0.79 ± 0.06, 0.81 ± 0.06, and 0.85 ± 0.04. The STAPLE results are within intraobserver consistency, determined by the physician blindly recontouring a subset of CBCTs. Comparing plans recalculated using the physician and STAPLE contours showed an average disagreement of less than 1% for prostate D98 and mean dose, and 5% and 3% for bladder and rectum mean dose, respectively. One scan takes an average of 19 s to contour; using five scans plus STAPLE takes less than 110 s on a 288-core graphics processing unit. Conclusions: Combining the plan and
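
    The STAPLE estimation itself is beyond the scope of this listing; as a point of reference, a minimal sketch of the simpler majority-vote fusion of several deformed prior-day masks, together with the Dice coefficient used for evaluation, assuming binary masks stored as NumPy arrays (all names and the toy data are illustrative):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def majority_vote_fusion(masks):
    """Fuse several deformed prior-day masks into one consensus mask.

    `masks` is an iterable of binary arrays of identical shape; a voxel is
    labeled foreground when more than half of the inputs agree. STAPLE
    additionally weights each input by an estimated rater performance,
    which is omitted here.
    """
    stack = np.stack([m.astype(np.uint8) for m in masks])
    return (stack.sum(axis=0) * 2 > stack.shape[0]).astype(np.uint8)

# Toy example: three noisy versions of the same contour
rng = np.random.default_rng(0)
truth = np.zeros((64, 64, 32), np.uint8)
truth[20:44, 20:44, 10:22] = 1
noisy = [np.clip(truth + (rng.random(truth.shape) < 0.02), 0, 1) for _ in range(3)]
fused = majority_vote_fusion(noisy)
print("Dice vs truth:", round(dice_coefficient(fused, truth), 3))
```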

  7. An adaptive 3D region growing algorithm to automatically segment and identify thoracic aorta and its centerline using computed tomography angiography scans

    NASA Astrophysics Data System (ADS)

    Ferreira, F.; Dehmeshki, J.; Amin, H.; Dehkordi, M. E.; Belli, A.; Jouannic, A.; Qanadli, S.

    2010-03-01

    Thoracic aortic aneurysm (TAA) is a localized swelling of the thoracic aorta. The progressive growth of an aneurysm may eventually cause a rupture if not diagnosed or treated. This necessitates accurate measurement, which in turn calls for accurate segmentation of the aneurysm regions. Computer-aided detection (CAD) is a tool to automatically detect and segment the TAA in computed tomography angiography (CTA) images. A fundamental step in developing such a system is a robust method for detecting the main vessel and measuring its diameters. In this paper we propose a novel adaptive method to simultaneously segment the thoracic aorta and identify its centerline. For this purpose, an adaptive parametric 3D region growing is proposed in which the seed is automatically selected through detection of the celiac artery and the parameters of the method are re-estimated as the region grows through the aorta. At each phase of region growing the initial centerline of the aorta is also identified and refined. Thus the proposed method simultaneously detects the aorta and identifies its centerline. The method has been applied to CT images from 20 patients with good agreement with the visual assessment of two radiologists.
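
    A minimal sketch of adaptive seeded 3D region growing in the spirit described above, where the intensity statistics are re-estimated as the region grows; the acceptance rule, neighborhood, and parameter values are illustrative assumptions, not the published method:

```python
from collections import deque
import numpy as np

def adaptive_region_grow(volume, seed, k=2.5, update_every=500):
    """Grow a region from a (z, y, x) seed in a 3D volume.

    A voxel joins the region when its intensity lies within `k` standard
    deviations of the current region statistics; the mean/std are
    re-estimated periodically as the region grows, loosely mirroring the
    parameter re-estimation described in the abstract.
    """
    grown = np.zeros(volume.shape, bool)
    grown[seed] = True
    vals = [float(volume[seed])]
    mean, std = vals[0], max(1.0, 0.05 * abs(vals[0]))
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) and not grown[n]:
                v = float(volume[n])
                if abs(v - mean) <= k * std:
                    grown[n] = True
                    vals.append(v)
                    queue.append(n)
                    if len(vals) % update_every == 0:   # re-estimate parameters
                        mean, std = np.mean(vals), max(np.std(vals), 1e-3)
    return grown
```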

  8. Fully automated whole-head segmentation with improved smoothness and continuity, with theory reviewed.

    PubMed

    Huang, Yu; Parra, Lucas C

    2015-01-01

    Individualized current-flow models are needed for precise targeting of brain structures using transcranial electrical or magnetic stimulation (TES/TMS). The same is true for current-source reconstruction in electroencephalography and magnetoencephalography (EEG/MEG). The first step in generating such models is to obtain an accurate segmentation of individual head anatomy, including not only brain but also cerebrospinal fluid (CSF), skull and soft tissues, with a field of view (FOV) that covers the whole head. Currently available automated segmentation tools only provide results for brain tissues, have a limited FOV, and do not guarantee continuity and smoothness of tissues, which is crucially important for accurate current-flow estimates. Here we present a tool that addresses these needs. It is based on a rigorous Bayesian inference framework that combines an image intensity model, an anatomical prior (atlas) and morphological constraints using Markov random fields (MRF). The method is evaluated on 20 simulated and 8 real head volumes acquired with magnetic resonance imaging (MRI) at 1 mm³ resolution. We find improved surface smoothness and continuity as compared to the segmentation algorithms currently implemented in Statistical Parametric Mapping (SPM). With this tool, accurate and morphologically correct modeling of the whole-head anatomy for individual subjects may now be feasible on a routine basis. Code and data are fully integrated into the SPM software tool and are made publicly available. In addition, a review of MRI segmentation using atlases and MRFs over the last 20 years is also provided, with the general mathematical framework clearly derived.
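
    The full Bayesian framework is not given in the abstract; as a hedged illustration of the MRF ingredient only, a toy iterated-conditional-modes (ICM) segmenter with a Gaussian intensity model per class and a Potts smoothness prior (class means, sigmas, and the weight beta are assumed inputs, not the published model):

```python
import numpy as np

def icm_segment(image, means, sigmas, beta=1.0, n_iter=5):
    """Toy MRF segmentation of a 2D slice by iterated conditional modes.

    Each class k has a Gaussian intensity model (means[k], sigmas[k]); the
    Potts prior with weight `beta` rewards label agreement with the four
    neighbors. This only sketches the MRF idea, not the published method.
    """
    K = len(means)
    # negative log-likelihood per class, shape (K, H, W)
    data = np.stack([
        0.5 * ((image - m) / s) ** 2 + np.log(s) for m, s in zip(means, sigmas)
    ])
    labels = data.argmin(axis=0)
    for _ in range(n_iter):
        energy = []
        for k in range(K):
            same = np.zeros(image.shape)
            for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
                same += (np.roll(labels, shift, axis=axis) == k)
            energy.append(data[k] - beta * same)   # Potts prior lowers the energy
        labels = np.stack(energy).argmin(axis=0)
    return labels
```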

  9. Three-dimensional segmentation of the tumor mass in computed tomographic images of neuroblastoma

    NASA Astrophysics Data System (ADS)

    Deglint, Hanford J.; Rangayyan, Rangaraj M.; Boag, Graham S.

    2004-05-01

    Tumor definition and diagnosis require the analysis of the spatial distribution and Hounsfield unit (HU) values of voxels in computed tomography (CT) images, coupled with a knowledge of normal anatomy. Segmentation of the tumor in neuroblastoma is complicated by the fact that the mass is almost always heterogeneous in nature; furthermore, viable tumor, necrosis, fibrosis, and normal tissue are often intermixed. Rather than attempt to separate these tissue types into distinct regions, we propose to explore methods to delineate the normal structures expected in abdominal CT images, remove them from further consideration, and examine the remaining parts of the images for the tumor mass. We explore the use of fuzzy connectivity for this purpose. Expert knowledge provided by the radiologist, in the form of the expected structures and their shapes, HU values, and radiological characteristics, is also incorporated in the segmentation algorithm. Segmentation and analysis of the tissue composition of the tumor can assist in quantitative assessment of the response to chemotherapy and in the planning of delayed surgery for resection of the tumor. The performance of the algorithm is evaluated using cases acquired from the Alberta Children's Hospital.
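
    A minimal sketch of the fuzzy-connectedness idea mentioned above: each pixel's connectivity to a seed is the strength of the best path, where a path is only as strong as its weakest affinity; the intensity-difference affinity used here is an illustrative assumption:

```python
import heapq
import numpy as np

def fuzzy_connectivity(image, seed, sigma=30.0):
    """Fuzzy-connectedness map of a 2D image from one (y, x) seed pixel.

    Affinity between neighbors decays with their intensity difference; the
    connectivity of a pixel is the strongest path from the seed, where a
    path's strength is its weakest affinity. The affinity model is an
    illustrative assumption, not the one used in the paper.
    """
    H, W = image.shape
    conn = np.zeros((H, W))
    conn[seed] = 1.0
    heap = [(-1.0, seed)]                      # max-heap via negated strength
    while heap:
        strength, (y, x) = heapq.heappop(heap)
        strength = -strength
        if strength < conn[y, x]:
            continue
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W:
                aff = np.exp(-((float(image[y, x]) - float(image[ny, nx])) ** 2)
                             / (2 * sigma ** 2))
                cand = min(strength, aff)
                if cand > conn[ny, nx]:
                    conn[ny, nx] = cand
                    heapq.heappush(heap, (-cand, (ny, nx)))
    return conn
```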

  10. Deformable templates guided discriminative models for robust 3D brain MRI segmentation.

    PubMed

    Liu, Cheng-Yi; Iglesias, Juan Eugenio; Tu, Zhuowen

    2013-10-01

    Automatically segmenting anatomical structures from 3D brain MRI images is an important task in neuroimaging. One major challenge is to design and learn effective image models accounting for the large variability in anatomy and data acquisition protocols. A deformable template is a type of generative model that attempts to explicitly match an input image with a template (atlas), and thus, they are robust against global intensity changes. On the other hand, discriminative models combine local image features to capture complex image patterns. In this paper, we propose a robust brain image segmentation algorithm that fuses together deformable templates and informative features. It takes advantage of the adaptation capability of the generative model and the classification power of the discriminative models. The proposed algorithm achieves both robustness and efficiency, and can be used to segment brain MRI images with large anatomical variations. We perform an extensive experimental study on four datasets of T1-weighted brain MRI data from different sources (1,082 MRI scans in total) and observe consistent improvement over the state-of-the-art systems.

  11. Evaluation of multiatlas label fusion for in vivo magnetic resonance imaging orbital segmentation

    PubMed Central

    Panda, Swetasudha; Asman, Andrew J.; Khare, Shweta P.; Thompson, Lindsey; Mawn, Louise A.; Smith, Seth A.; Landman, Bennett A.

    2014-01-01

    Abstract. Multiatlas methods have been successful for brain segmentation, but their application to smaller anatomies remains relatively unexplored. We evaluate seven statistical and voting-based label fusion algorithms (and six additional variants) to segment the optic nerves, eye globes, and chiasm. For nonlocal simultaneous truth and performance level estimation (STAPLE), we evaluate different intensity similarity measures (including mean square difference, locally normalized cross-correlation, and a hybrid approach). Each algorithm is evaluated in terms of the Dice overlap and symmetric surface distance metrics. Finally, we evaluate refinement of label fusion results using a learning-based correction method for consistent bias correction and Markov random field regularization. The multiatlas labeling pipelines were evaluated on a cohort of 35 subjects including both healthy controls and patients. Across all three structures, nonlocal spatial STAPLE (NLSS) with a mixed weighting type provided the most consistent results; for the optic nerve NLSS resulted in a median Dice similarity coefficient of 0.81, mean surface distance of 0.41 mm, and Hausdorff distance 2.18 mm for the optic nerves. Joint label fusion resulted in slightly superior median performance for the optic nerves (0.82, 0.39 mm, and 2.15 mm), but slightly worse on the globes. The fully automated multiatlas labeling approach provides robust segmentations of orbital structures on magnetic resonance imaging even in patients for whom significant atrophy (optic nerve head drusen) or inflammation (multiple sclerosis) is present. PMID:25558466

  12. Evaluation of Multi-Atlas Label Fusion for In Vivo MRI Orbital Segmentation.

    PubMed

    Panda, Swetasudha; Asman, Andrew J; Khare, Shweta P; Thompson, Lindsey; Mawn, Louise A; Smith, Seth A; Landman, Bennett A

    2014-07-18

    Multi-atlas methods have been successful for brain segmentation, but their application to smaller anatomies remains relatively unexplored. We evaluate 7 statistical and voting-based label fusion algorithms (and 6 additional variants) to segment the optic nerves, eye globes and chiasm. For non-local STAPLE, we evaluate different intensity similarity measures (including mean square difference, locally normalized cross correlation, and a hybrid approach). Each algorithm is evaluated in terms of the Dice overlap and symmetric surface distance metrics. Finally, we evaluate refinement of label fusion results using a learning based correction method for consistent bias correction and Markov random field regularization. The multi-atlas labeling pipelines were evaluated on a cohort of 35 subjects including both healthy controls and patients. Across all three structures, NLSS with a mixed weighting type provided the most consistent results; for the optic nerve NLSS resulted in a median Dice similarity coefficient of 0.81, mean surface distance of 0.41 mm and Hausdorff distance 2.18 mm for the optic nerves. Joint label fusion resulted in slightly superior median performance for the optic nerves (0.82, 0.39 mm and 2.15 mm), but slightly worse on the globes. The fully automated multi-atlas labeling approach provides robust segmentations of orbital structures on MRI even in patients for whom significant atrophy (optic nerve head drusen) or inflammation (multiple sclerosis) is present.

  13. Health Instruction Packages: Cardiac Anatomy.

    ERIC Educational Resources Information Center

    Phillips, Gwen; And Others

    Text, illustrations, and exercises are utilized in these five learning modules to instruct nurses, students, and other health care professionals in cardiac anatomy and functions and in fundamental electrocardiographic techniques. The first module, "Cardiac Anatomy and Physiology: A Review" by Gwen Phillips, teaches the learner to draw…

  14. Radiological sinonasal anatomy

    PubMed Central

    Alrumaih, Redha A.; Ashoor, Mona M.; Obidan, Ahmed A.; Al-Khater, Khulood M.; Al-Jubran, Saeed A.

    2016-01-01

    Objectives: To assess the prevalence of common radiological variants of sinonasal anatomy among the Saudi population and compare it with the reported prevalence of these variants in other ethnic and population groups. Methods: This is a retrospective cross-sectional study of 121 computerized tomography scans of the nose and paranasal sinuses of patients who presented with sinonasal symptoms to the Department of Otorhinolaryngology, King Fahad Hospital of the University, Khobar, Saudi Arabia, between January 2014 and May 2014. Results: Scans of 121 patients fulfilling the inclusion criteria were reviewed. Concha bullosa was found in 55.4%, Haller cell in 39.7%, and Onodi cell in 28.9%. Dehiscence of the internal carotid artery was found in 1.65%. Type-1 and type-2 optic nerves were the prevalent types. Type-II Keros classification of the depth of the olfactory fossa was the most common among the sample (52.9%). Frontal cells were found in 79.3%; type I was the most common. Conclusions: There is a difference in the prevalence of some radiological variants of the sinonasal anatomy between the Saudi population and other study groups. Surgeons must pay special attention in the preoperative assessment of patients with sinonasal pathology to avoid undesirable complications. PMID:27146614

  15. The quail anatomy portal

    PubMed Central

    Ruparelia, Avnika A.; Simkin, Johanna E.; Salgado, David; Newgreen, Donald F.; Martins, Gabriel G.; Bryson-Richardson, Robert J.

    2014-01-01

    The Japanese quail is a widely used model organism for the study of embryonic development; however, anatomical resources are lacking. The Quail Anatomy Portal (QAP) provides 22 detailed three-dimensional (3D) models of quail embryos during development from embryonic day (E)1 to E15 generated using optical projection tomography. The 3D models provided can be virtually sectioned to investigate anatomy. Furthermore, using the 3D nature of the models, we have generated a tool to assist in the staging of quail samples. Volume renderings of each stage are provided and can be rotated to allow visualization from multiple angles allowing easy comparison of features both between stages in the database and between images or samples in the laboratory. The use of JavaScript, PHP and HTML ensures the database is accessible to users across different operating systems, including mobile devices, facilitating its use in the laboratory. The QAP provides a unique resource for researchers using the quail model. The ability to virtually section anatomical models throughout development provides the opportunity for researchers to virtually dissect the quail and also provides a valuable tool for the education of students and researchers new to the field. Database URL: http://quail.anatomyportal.org (For review username: demo, password: quail123) PMID:24715219

  16. The quail anatomy portal.

    PubMed

    Ruparelia, Avnika A; Simkin, Johanna E; Salgado, David; Newgreen, Donald F; Martins, Gabriel G; Bryson-Richardson, Robert J

    2014-01-01

    The Japanese quail is a widely used model organism for the study of embryonic development; however, anatomical resources are lacking. The Quail Anatomy Portal (QAP) provides 22 detailed three-dimensional (3D) models of quail embryos during development from embryonic day (E)1 to E15 generated using optical projection tomography. The 3D models provided can be virtually sectioned to investigate anatomy. Furthermore, using the 3D nature of the models, we have generated a tool to assist in the staging of quail samples. Volume renderings of each stage are provided and can be rotated to allow visualization from multiple angles allowing easy comparison of features both between stages in the database and between images or samples in the laboratory. The use of JavaScript, PHP and HTML ensures the database is accessible to users across different operating systems, including mobile devices, facilitating its use in the laboratory. The QAP provides a unique resource for researchers using the quail model. The ability to virtually section anatomical models throughout development provides the opportunity for researchers to virtually dissect the quail and also provides a valuable tool for the education of students and researchers new to the field. DATABASE URL: http://quail.anatomyportal.org (For review username: demo, password: quail123).

  17. Hyperspectral image segmentation of the common bile duct

    NASA Astrophysics Data System (ADS)

    Samarov, Daniel; Wehner, Eleanor; Schwarz, Roderich; Zuzak, Karel; Livingston, Edward

    2013-03-01

    Over the course of the last several years hyperspectral imaging (HSI) has seen increased usage in biomedicine. Within the medical field in particular, HSI has been recognized as having the potential to make an immediate impact by reducing the risks and complications associated with laparotomies (surgical procedures involving large incisions into the abdominal wall) and related procedures. There are several ongoing studies focused on such applications. Hyperspectral images were acquired during pancreatoduodenectomies (commonly referred to as Whipple procedures), surgical procedures done to remove cancerous tumors involving the pancreas and gallbladder. As a result of the complexity of the local anatomy, identifying where the common bile duct (CBD) is can be difficult, resulting in a comparatively high incidence of injury to the CBD and associated complications. It is here that HSI has the potential to help reduce the risk of such events happening. Because the bile contained within the CBD exhibits a unique spectral signature, we are able to utilize HSI segmentation algorithms to help in identifying where the CBD is. In the work presented here we discuss approaches to this segmentation problem and present the results.
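
    The specific HSI segmentation algorithm is not described in the abstract; one common, simple approach to exploiting a distinctive spectral signature is the spectral angle map sketched below (the reference bile spectrum and the threshold are placeholders, not values from the paper):

```python
import numpy as np

def spectral_angle_map(cube, reference):
    """Per-pixel spectral angle (radians) between a hyperspectral cube of
    shape (H, W, bands) and a reference spectrum of shape (bands,).
    Smaller angles indicate spectra more similar to the reference."""
    flat = cube.reshape(-1, cube.shape[-1]).astype(float)
    ref = reference.astype(float)
    cos = flat @ ref / (np.linalg.norm(flat, axis=1) * np.linalg.norm(ref) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0)).reshape(cube.shape[:2])

# A pixel would tentatively be labeled bile duct when its angle to an
# assumed reference bile spectrum falls below a chosen threshold (both the
# reference spectrum and the 0.1 rad threshold are placeholders):
# mask = spectral_angle_map(hsi_cube, bile_spectrum) < 0.1
```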

  18. Interacting with image hierarchies for fast and accurate object segmentation

    NASA Astrophysics Data System (ADS)

    Beard, David V.; Eberly, David H.; Hemminger, Bradley M.; Pizer, Stephen M.; Faith, R. E.; Kurak, Charles; Livingston, Mark

    1994-05-01

    Object definition is an increasingly important area of medical image research. Accurate and fairly rapid object definition is essential for measuring the size and, perhaps more importantly, the change in size of anatomical objects such as kidneys and tumors. Rapid and fairly accurate object definition is essential for 3D real-time visualization including both surgery planning and Radiation oncology treatment planning. One approach to object definition involves the use of 3D image hierarchies, such as Eberly's Ridge Flow. However, the image hierarchy segmentation approach requires user interaction in selecting regions and subtrees. Further, visualizing and comprehending the anatomy and the selected portions of the hierarchy can be problematic. In this paper we will describe the Magic Crayon tool which allows a user to define rapidly and accurately various anatomical objects by interacting with image hierarchies such as those generated with Eberly's Ridge Flow algorithm as well as other 3D image hierarchies. Preliminary results suggest that fairly complex anatomical objects can be segmented in under a minute with sufficient accuracy for 3D surgery planning, 3D radiation oncology treatment planning, and similar applications. Potential modifications to the approach for improved accuracy are summarized.

  19. Automated segmentation of pulmonary nodule depicted on CT images

    NASA Astrophysics Data System (ADS)

    Pu, Jiantao; Tan, Jun

    2011-03-01

    In this study, an efficient computational geometry approach is introduced to segment pulmonary nodules. The basic idea is to estimate the three-dimensional surface of a nodule in question by analyzing the shape characteristics of its surrounding tissues in geometric space. Given a seed point or a specific location where a suspicious nodule may be, three steps are involved in this approach. First, a sub-volume centered at this seed point is extracted and the contained anatomy structures are modeled in the form of a triangle mesh surface. Second, a "visibility" test combined with a shape classification algorithm based on principal curvature analysis removes surfaces determined not to belong to nodule boundaries by specific rules. This step results in a partial surface of a nodule boundary. Third, an interpolation / extrapolation based shape reconstruction procedure is used to estimate a complete nodule surface by representing the partial surface as an implicit function. The preliminary experiments on 158 annotated CT examinations demonstrated that this scheme could achieve a reasonable performance in nodule segmentation.

  20. Deep Learning Segmentation of Optical Microscopy Images Improves 3D Neuron Reconstruction.

    PubMed

    Li, Rongjian; Zeng, Tao; Peng, Hanchuan; Ji, Shuiwang

    2017-03-08

    Digital reconstruction, or tracing, of 3-dimensional (3D) neuron structure from microscopy images is a critical step toward reverse engineering the wiring and anatomy of a brain. Despite a number of prior attempts, this task remains very challenging, especially when images are contaminated by noise or have discontinuous segments of neurite patterns. An approach for addressing such problems is to identify the locations of neuronal voxels using image segmentation methods prior to applying tracing or reconstruction techniques. This preprocessing step is expected to remove noise in the data, thereby leading to improved reconstruction results. In this work, we proposed to use 3D convolutional neural networks (CNNs) for segmenting the neuronal microscopy images. Specifically, we designed a novel CNN architecture that takes volumetric images as inputs and produces their voxel-wise segmentation maps as outputs. The developed architecture allows us to train and predict using large microscopy images in an end-to-end manner. We evaluated the performance of our model on a variety of challenging 3D microscopy images from different organisms. Results showed that the proposed method improved the tracing performance significantly when combined with different reconstruction algorithms.
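
    The published architecture is not specified in the abstract; a minimal PyTorch sketch of the general idea, a fully convolutional 3D network that maps a volume to a voxel-wise probability map, is shown below (layer count, channel sizes, and kernel sizes are placeholders):

```python
import torch
import torch.nn as nn

class TinyVoxelNet(nn.Module):
    """Minimal fully convolutional 3D network producing a voxel-wise
    foreground probability map the same size as the input volume. The
    depth and channel counts are placeholders; the published architecture
    is different and larger."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, 1, kernel_size=1),
        )

    def forward(self, x):                 # x: (batch, 1, D, H, W)
        return torch.sigmoid(self.net(x))

# Training would minimize a voxel-wise loss against manually traced masks,
# e.g. (hypothetical tensors):
# loss = nn.functional.binary_cross_entropy(model(volume), target_mask)
```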

  1. Automatic segmentation of psoriasis lesions

    NASA Astrophysics Data System (ADS)

    Ning, Yang; Shi, Chenbo; Wang, Li; Shu, Chang

    2014-10-01

    The automatic segmentation of psoriatic lesions has been widely researched in recent years. It is an important step in computer-aided methods of calculating PASI for estimating lesions. Current algorithms can only handle single erythema or only deal with scaling segmentation; in practice, scaling and erythema are often mixed together. In order to segment the lesion area, this paper proposes an algorithm based on random forests with color and texture features. The algorithm has three steps. In the first step, polarized light is applied during imaging, exploiting the skin's Tyndall effect, to eliminate reflections, and the Lab color space is used to better fit human perception. In the second step, a sliding window and its sub-windows are used to extract texture and color features; here, an image roughness feature has been defined so that scaling can be easily separated from normal skin. Finally, random forests are used to ensure the generalization ability of the algorithm. The algorithm gives reliable segmentation results even when images have different lighting conditions and skin types. In the dataset provided by Union Hospital, more than 90% of images can be segmented accurately.
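
    A hedged sketch of the per-pixel random-forest idea using scikit-learn, with Lab color values plus a crude local roughness measure standing in for the paper's richer feature set (all feature choices and names here are assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

def pixel_features(lab_image, window=5):
    """Rough per-pixel features: the Lab color values plus a local
    'roughness' proxy (standard deviation of L in a sliding window)."""
    L = lab_image[..., 0].astype(float)
    mean = uniform_filter(L, window)
    sq_mean = uniform_filter(L ** 2, window)
    roughness = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    feats = np.dstack([lab_image.astype(float), roughness[..., None]])
    return feats.reshape(-1, feats.shape[-1])

# Hypothetical usage with labeled training images (lesion / scaling / skin):
# clf = RandomForestClassifier(n_estimators=100, random_state=0)
# clf.fit(pixel_features(train_lab), train_labels.ravel())
# prediction = clf.predict(pixel_features(test_lab)).reshape(test_lab.shape[:2])
```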

  2. Scorpion image segmentation system

    NASA Astrophysics Data System (ADS)

    Joseph, E.; Aibinu, A. M.; Sadiq, B. A.; Bello Salau, H.; Salami, M. J. E.

    2013-12-01

    Death as a result of scorpion sting has been a major public health problem in developing countries. Despite the high death rate from scorpion stings, few reports exist in the literature on intelligent devices and systems for automatic detection of scorpions. This paper proposes a digital image processing approach, based on the fluorescing characteristics of scorpions under ultraviolet (UV) light, for automatic detection and identification of scorpions. The acquired UV-based images undergo pre-processing to equalize uneven illumination, followed by colour space channel separation. The extracted channels are then segmented into two non-overlapping classes. It has been observed that simple thresholding of the green channel of the acquired RGB UV-based image is sufficient for segmenting the scorpion from other background components in the image. Two approaches to image segmentation are also proposed in this work, namely a simple average segmentation technique and K-means image segmentation. The proposed algorithm has been tested on over 40 UV scorpion images obtained from different parts of the world, and the results show an average accuracy of 97.7% in correctly classifying pixels into two non-overlapping clusters. The proposed system will eliminate the problems associated with some of the existing manual approaches presently in use for scorpion detection.
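
    A minimal sketch of the two segmentation strategies mentioned above, green-channel thresholding and 2-cluster K-means, using NumPy and scikit-learn (the specific threshold rule is an assumption; the paper does not state one):

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_scorpion(rgb_uv_image):
    """Two simple strategies sketched with assumed parameters: global
    thresholding of the green channel at its mean plus one standard
    deviation, and 2-cluster K-means on the green channel."""
    green = rgb_uv_image[..., 1].astype(float)

    # (a) simple global threshold on the fluorescing green channel
    threshold_mask = green > green.mean() + green.std()

    # (b) K-means into two non-overlapping clusters; brighter cluster = scorpion
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(green.reshape(-1, 1))
    labels = km.labels_.reshape(green.shape)
    bright = np.argmax(km.cluster_centers_.ravel())
    kmeans_mask = labels == bright

    return threshold_mask, kmeans_mask
```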

  3. Automated detection, 3D segmentation and analysis of high resolution spine MR images using statistical shape models

    NASA Astrophysics Data System (ADS)

    Neubert, A.; Fripp, J.; Engstrom, C.; Schwarz, R.; Lauer, L.; Salvado, O.; Crozier, S.

    2012-12-01

    Recent advances in high resolution magnetic resonance (MR) imaging of the spine provide a basis for the automated assessment of intervertebral disc (IVD) and vertebral body (VB) anatomy. High resolution three-dimensional (3D) morphological information contained in these images may be useful for early detection and monitoring of common spine disorders, such as disc degeneration. This work proposes an automated approach to extract the 3D segmentations of lumbar and thoracic IVDs and VBs from MR images using statistical shape analysis and registration of grey level intensity profiles. The algorithm was validated on a dataset of volumetric scans of the thoracolumbar spine of asymptomatic volunteers obtained on a 3T scanner using the relatively new 3D T2-weighted SPACE pulse sequence. Manual segmentations and expert radiological findings of early signs of disc degeneration were used in the validation. There was good agreement between manual and automated segmentation of the IVD and VB volumes with the mean Dice scores of 0.89 ± 0.04 and 0.91 ± 0.02 and mean absolute surface distances of 0.55 ± 0.18 mm and 0.67 ± 0.17 mm respectively. The method compares favourably to existing 3D MR segmentation techniques for VBs. This is the first time IVDs have been automatically segmented from 3D volumetric scans and shape parameters obtained were used in preliminary analyses to accurately classify (100% sensitivity, 98.3% specificity) disc abnormalities associated with early degenerative changes.
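
    The statistical shape analysis referred to above typically rests on a point-distribution model; a minimal sketch of that generic ingredient (mean shape plus principal modes of variation) is given below, with the registration of grey-level intensity profiles omitted and all names illustrative:

```python
import numpy as np

def build_shape_model(aligned_shapes, variance_kept=0.95):
    """Point-distribution shape model from aligned training shapes.

    `aligned_shapes` has shape (n_shapes, n_points * 3): corresponding
    surface points already aligned (e.g., by Procrustes analysis). Returns
    the mean shape, the principal modes explaining `variance_kept` of the
    variance, and their variances.
    """
    mean = aligned_shapes.mean(axis=0)
    centered = aligned_shapes - mean
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    var = s ** 2 / (len(aligned_shapes) - 1)
    keep = np.searchsorted(np.cumsum(var) / var.sum(), variance_kept) + 1
    return mean, vt[:keep], var[:keep]

def synthesize(mean, modes, coeffs):
    """New plausible shape: the mean plus a weighted sum of the modes."""
    return mean + coeffs @ modes
```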

  4. Developmental anatomy of lampreys.

    PubMed

    Richardson, Michael K; Admiraal, Jeroen; Wright, Glenda M

    2010-02-01

    Lampreys are a group of aquatic chordates whose relationships to hagfishes and jawed vertebrates are still debated. Lamprey embryology is of interest to evolutionary biologists because it may shed light on vertebrate origins. For this and other reasons, lamprey embryology has been extensively researched by biologists from a range of disciplines. However, many of the key studies of lamprey comparative embryology are relatively inaccessible to the modern scientist. Therefore, in view of the current resurgence of interest in lamprey evolution and development, we present here a review of lamprey developmental anatomy. We identify several features of early organogenesis, including the origin of the nephric duct, that need to be re-examined with modern techniques. The homologies of several structures are also unclear, including the intriguing subendothelial pads in the heart. We hope that this review will form the basis for future studies into the phylogenetic embryology of this interesting group of animals.

  5. The Anatomy of Galaxies

    NASA Astrophysics Data System (ADS)

    D'Onofrio, Mauro; Rampazzo, Roberto; Zaggia, Simone; Longair, Malcolm S.; Ferrarese, Laura; Marziani, Paola; Sulentic, Jack W.; van der Kruit, Pieter C.; Laurikainen, Eija; Elmegreen, Debra M.; Combes, Françoise; Bertin, Giuseppe; Fabbiano, Giuseppina; Giovanelli, Riccardo; Calzetti, Daniela; Moss, David L.; Matteucci, Francesca; Djorgovski, Stanislav George; Fraix-Burnet, Didier; Graham, Alister W. McK.; Tully, Brent R.

    Just after WWII, astronomy entered its "Golden Age", as did many other sciences and human activities, especially in Western countries. The improved resolution of telescopes and the appearance of new, efficient light detectors (e.g., CCDs in the mid-eighties) greatly impacted extragalactic research. The first morphological analyses of galaxies were rapidly superseded by "anatomic" studies of their structural components, star and gas content, and, in general, by detailed investigations of their properties. As in human anatomy, where the final goal is to understand the functionality of the organs essential to the life of the body, galaxies were dissected to discover their basic structural components and, ultimately, the mystery of their existence.

  6. Texture segmentation by genetic programming.

    PubMed

    Song, Andy; Ciesielski, Vic

    2008-01-01

    This paper describes a texture segmentation method using genetic programming (GP), which is one of the most powerful evolutionary computation algorithms. By choosing an appropriate representation, texture classifiers can be evolved without computing texture features. Due to the absence of time-consuming feature extraction, the evolved classifiers enable the development of the proposed texture segmentation algorithm. This GP-based method can achieve a segmentation speed that is significantly higher than that of conventional methods. This method does not require a human expert to manually construct models for texture feature extraction. In an analysis of the evolved classifiers, it can be seen that these GP classifiers are not arbitrary. Certain textural regularities are captured by these classifiers to discriminate different textures. GP has been shown in this study to be a feasible and powerful approach for texture classification and segmentation, which are generally considered complex vision tasks.

  7. Hierarchical image segmentation for learning object priors

    SciTech Connect

    Prasad, Lakshman; Yang, Xingwei; Latecki, Longin J; Li, Nan

    2010-11-10

    The proposed segmentation approach naturally combines experience-based and image-based information. The experience-based information is obtained by training a classifier for each object class. For a given test image, the result of each classifier is represented as a probability map. The final segmentation is obtained with a hierarchical image segmentation algorithm that considers both the probability maps and image features such as color and edge strength. We also utilize the image region hierarchy to obtain not only local but also semi-global features as input to the classifiers. Moreover, to get robust probability maps, we take into account region context information by averaging the probability maps over different levels of the hierarchical segmentation algorithm. The obtained segmentation results are superior to those of state-of-the-art supervised image segmentation algorithms.

  8. Livewire based single still image segmentation

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Yang, Rong; Liu, Xiaomao; Yue, Hao; Zhu, Hao; Tian, Dandan; Chen, Shu; Li, Yiquan; Tian, Jinwen

    2011-11-01

    In video-based contactless measurement applications, the quality of images taken underwater is not very good. It is well known that automatic image segmentation methods cannot provide acceptable results on a low-quality single still image. The Snake algorithm can provide better results in this case with human assistance; however, the Snake result may end up far from the initial contour drawn by the user. The Livewire algorithm keeps the locations of the user-selected seed points fixed from beginning to end, but the contour may have burrs when the image noise is high and the contrast is low. In this paper, we modify the cost function of the Livewire algorithm and propose a new segmentation method that can be used for single still images with high noise and low contrast.
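
    At its core, Livewire computes a minimum-cost path between user seed points over an edge-based cost image; a minimal Dijkstra sketch of that core is shown below (the cost design is exactly the part the paper modifies, so the one hinted at in the comment is only a placeholder):

```python
import heapq
import numpy as np

def livewire_path(cost, start, end):
    """Minimum-cost path between two (y, x) seed pixels on a 2D cost image
    (low cost along strong edges), via Dijkstra's algorithm. A typical
    illustrative cost is `cost = 1.0 / (1.0 + gradient_magnitude)`; any
    non-negative cost image will do here."""
    H, W = cost.shape
    dist = np.full((H, W), np.inf)
    prev = {}
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if (y, x) == end:
            break
        if d > dist[y, x]:
            continue
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (dy or dx) and 0 <= ny < H and 0 <= nx < W:
                    nd = d + float(cost[ny, nx])
                    if nd < dist[ny, nx]:
                        dist[ny, nx] = nd
                        prev[(ny, nx)] = (y, x)
                        heapq.heappush(heap, (nd, (ny, nx)))
    path, node = [], end
    while node != start:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]
```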

  9. A new osteophyte segmentation algorithm using partial shape model and its applications to rabbit femur anterior cruciate ligament transection via micro-CT imaging.

    PubMed

    Saha, P K; Liang, G; Elkins, J M; Coimbra, A; Duong, L T; Williams, D S; Sonka, M

    2011-08-01

    An osteophyte is an additional bony growth on a normal bone surface limiting or stopping motion at a deteriorating joint. Detection and quantification of osteophytes from CT images is helpful in assessing disease status as well as in treatment and surgery planning. However, it is difficult to distinguish between osteophytes and healthy bones using simple thresholding or edge/texture features due to the similarity of their material composition. In this paper, we present a new method primarily based on the active shape model (ASM) to solve this problem and evaluate its application to an anterior cruciate ligament transection (ACLT) rabbit femur model via CT imaging. The common idea behind most ASM-based segmentation methods is to first build a parametric shape model from a training dataset and apply the model to find a shape instance in a target image. A common challenge with such approaches is that a diseased bone shape is significantly altered at regions with osteophyte deposition, misguiding an ASM method and eventually leading to suboptimal segmentations. This difficulty is overcome using a new partial ASM method that uses the bone shape over healthy regions and extrapolates it over the diseased region according to the underlying shape model. Finally, osteophytes are segmented by subtracting the partial-ASM-derived shape from the overall diseased shape. Also, a new semi-automatic method is presented in this paper for efficiently building a 3D shape model for an anatomic region using manual reference of a few anatomically defined fiducial landmarks that are highly reproducible on individuals. Accuracy of the method has been examined on simulated phantoms while reproducibility and sensitivity have been evaluated on CT images of 2-, 4- and 8-week post-ACLT and sham-treated rabbit femurs. Experimental results have shown that the method is highly accurate (R2 = 0.99), reproducible (ICC = 0.97), and sensitive in detecting disease progression (p-values: 0.065,0.001 and < 0.001 for 2- vs. 4, 4

  10. Modified Recursive Hierarchical Segmentation of Data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2006-01-01

    An algorithm that performs recursive hierarchical segmentation (RHSEG) of data, and a computer program that implements it, have been developed. While the current implementation is for two-dimensional data having spatial characteristics (e.g., image, spectral, or spectral-image data), the generalized algorithm also applies to three-dimensional or higher-dimensional data and to data with no spatial characteristics. The algorithm and software are modified versions of a prior RHSEG algorithm and software, the outputs of which often contain processing-window artifacts including, for example, spurious segmentation-image regions along the boundaries of processing-window edges.

  11. Clinical anatomy of the hand.

    PubMed

    Vargas, Angélica; Chiapas-Gasca, Karla; Hernández-Díaz, Cristina; Canoso, Juan J; Saavedra, Miguel Ángel; Navarro-Zarza, José Eduardo; Villaseñor-Ovies, Pablo; Kalish, Robert A

    This article reviews the underlying anatomy of trigger finger and thumb (fibrous digital pulleys, sesamoid bones), flexor tenosynovitis, de Quervain's syndrome, Dupuytren's contracture, some hand deformities in rheumatoid arthritis, carpal tunnel syndrome and ulnar nerve compression at Guyon's canal. Some important syndromes and structures have not been included, but such is the nature of these seminars. Rather than being complete, we aim to create a system in which clinical cases are used to highlight the pertinent anatomy and, in the most important part of the seminar, these pertinent items are demonstrated by cross-examination of participants and teachers. Self-learning is critical for generating interest and expanding knowledge of clinical anatomy. Just look at your own hand in various positions, move it, feel it, and feel your forearms while you move the fingers; do this repeatedly and inquisitively, and after a few tries you will have developed not only a taste for, but also a lifelong interest in, clinical anatomy.

  12. Document segmentation via oblique cuts

    NASA Astrophysics Data System (ADS)

    Svendsen, Jeremy; Branzan-Albu, Alexandra

    2013-01-01

    This paper presents a novel solution for the layout segmentation of graphical elements in Business Intelligence documents. We propose a generalization of the recursive X-Y cut algorithm, which allows for cutting along arbitrary oblique directions. An intermediate processing step consisting of line and solid region removal is also necessary due to the presence of decorative elements. The output of the proposed segmentation is a hierarchical structure that allows for the identification of primitives in pie and bar charts. The algorithm was tested on a database composed of charts from business documents. Results are very promising.
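
    For reference, a minimal sketch of the classic axis-aligned recursive X-Y cut that the paper generalizes; the oblique cutting directions and the line/solid-region removal step are not reproduced, and the minimum gap width is an assumed parameter:

```python
import numpy as np

def xy_cut(binary, min_gap=10):
    """Classic axis-aligned recursive X-Y cut on a binary page image
    (foreground pixels = 1). A block is split at the widest interior run
    of blank rows or columns; leaves are returned as (top, bottom, left,
    right) boxes."""
    def widest_gap(profile):
        blank = np.where(profile == 0)[0]
        if blank.size == 0:
            return None
        runs = np.split(blank, np.where(np.diff(blank) != 1)[0] + 1)
        run = max(runs, key=len)
        interior = run[0] > 0 and run[-1] < len(profile) - 1
        return int(run[len(run) // 2]) if interior and len(run) >= min_gap else None

    def recurse(top, bottom, left, right, out):
        block = binary[top:bottom, left:right]
        row_cut = widest_gap(block.sum(axis=1))   # blank rows
        col_cut = widest_gap(block.sum(axis=0))   # blank columns
        if row_cut is not None:
            recurse(top, top + row_cut, left, right, out)
            recurse(top + row_cut, bottom, left, right, out)
        elif col_cut is not None:
            recurse(top, bottom, left, left + col_cut, out)
            recurse(top, bottom, left + col_cut, right, out)
        else:
            out.append((top, bottom, left, right))
        return out

    return recurse(0, binary.shape[0], 0, binary.shape[1], [])
```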

  13. Multiatlas segmentation as nonparametric regression.

    PubMed

    Awate, Suyash P; Whitaker, Ross T

    2014-09-01

    This paper proposes a novel theoretical framework to model and analyze the statistical characteristics of a wide range of segmentation methods that incorporate a database of label maps or atlases; such methods are termed as label fusion or multiatlas segmentation. We model these multiatlas segmentation problems as nonparametric regression problems in the high-dimensional space of image patches. We analyze the nonparametric estimator's convergence behavior that characterizes expected segmentation error as a function of the size of the multiatlas database. We show that this error has an analytic form involving several parameters that are fundamental to the specific segmentation problem (determined by the chosen anatomical structure, imaging modality, registration algorithm, and label-fusion algorithm). We describe how to estimate these parameters and show that several human anatomical structures exhibit the trends modeled analytically. We use these parameter estimates to optimize the regression estimator. We show that the expected error for large database sizes is well predicted by models learned on small databases. Thus, a few expert segmentations can help predict the database sizes required to keep the expected error below a specified tolerance level. Such cost-benefit analysis is crucial for deploying clinical multiatlas segmentation systems.

  14. Multiatlas Segmentation as Nonparametric Regression

    PubMed Central

    Awate, Suyash P.; Whitaker, Ross T.

    2015-01-01

    This paper proposes a novel theoretical framework to model and analyze the statistical characteristics of a wide range of segmentation methods that incorporate a database of label maps or atlases; such methods are termed as label fusion or multiatlas segmentation. We model these multiatlas segmentation problems as nonparametric regression problems in the high-dimensional space of image patches. We analyze the nonparametric estimator’s convergence behavior that characterizes expected segmentation error as a function of the size of the multiatlas database. We show that this error has an analytic form involving several parameters that are fundamental to the specific segmentation problem (determined by the chosen anatomical structure, imaging modality, registration algorithm, and label-fusion algorithm). We describe how to estimate these parameters and show that several human anatomical structures exhibit the trends modeled analytically. We use these parameter estimates to optimize the regression estimator. We show that the expected error for large database sizes is well predicted by models learned on small databases. Thus, a few expert segmentations can help predict the database sizes required to keep the expected error below a specified tolerance level. Such cost-benefit analysis is crucial for deploying clinical multiatlas segmentation systems. PMID:24802528

  15. The anatomy of the mermaid.

    PubMed

    Heppell, D

    Investigation of the anatomy of the mermaid and of mermaid lore has revealed a tangled web of stories, sightings and specimens of the most diverse nature, extending worldwide into the realms of folklore and legend, zoology and cryptozoology, anatomy, physiology, radiography and folk medicine, ethnography, social history and the history of science. The stereotype we know as the mermaid is surely a fit subject for further serious study.

  16. A cross validation study of deep brain stimulation targeting: from experts to atlas-based, segmentation-based and automatic registration algorithms.

    PubMed

    Castro, F Javier Sanchez; Pollo, Claudio; Meuli, Reto; Maeder, Philippe; Cuisenaire, Olivier; Cuadra, Meritxell Bach; Villemure, Jean-Guy; Thiran, Jean-Philippe

    2006-11-01

    Validation of image registration algorithms is a difficult task and an open-ended problem, usually application-dependent. In this paper, we focus on deep brain stimulation (DBS) targeting for the treatment of movement disorders like Parkinson's disease and essential tremor. DBS involves implantation of an electrode deep inside the brain to electrically stimulate specific areas, shutting down the disease's symptoms. The subthalamic nucleus (STN) has turned out to be the optimal target for this kind of surgery. Unfortunately, the STN is in general not clearly distinguishable in common medical imaging modalities. Usual techniques to infer its location are the use of anatomical atlases and visible surrounding landmarks. Surgeons have to adjust the electrode intraoperatively using electrophysiological recordings and macrostimulation tests. We constructed a ground truth derived from specific patients whose STNs are clearly visible on magnetic resonance (MR) T2-weighted images. A patient is chosen as atlas both for the right and left sides. Then, by registering each patient with the atlas using different methods, several estimations of the STN location are obtained. Two studies are driven using our proposed validation scheme. First, a comparison between different atlas-based and nonrigid registration algorithms with an evaluation of their performance and usability to locate the STN automatically. Second, a study of which visible surrounding structures influence the STN location. The two studies are cross-validated against each other and against the experts' variability. Using this scheme, we evaluated the experts' ability against the estimation error provided by the tested algorithms, and we demonstrated that automatic STN targeting is possible and as accurate as the expert-driven techniques currently used. We also show which structures have to be taken into account to accurately estimate the STN location.

  17. Template characterization and correlation algorithm created from segmentation for the iris biometric authentication based on analysis of textures implemented on a FPGA

    NASA Astrophysics Data System (ADS)

    Giacometto, F. J.; Vilardy, J. M.; Torres, C. O.; Mattos, L.

    2011-01-01

    Among the biometric signals most used to set personal security permissions, iris recognition based on texture and blood-vessel images has taken on increasing importance, because these two characteristics are rich and unique to each individual. This paper presents an FPGA (Field Programmable Gate Array) implementation of a template characterization and correlation algorithm for biometric authentication based on iris texture analysis; authentication relies on characterization methods based on frequency analysis of the sample, and on frequency correlation to obtain the expected authentication results.

  18. Segmentation of complex objects with non-spherical topologies from volumetric medical images using 3D livewire

    NASA Astrophysics Data System (ADS)

    Poon, Kelvin; Hamarneh, Ghassan; Abugharbieh, Rafeef

    2007-03-01

    Segmentation of 3D data is one of the most challenging tasks in medical image analysis. While reliable automatic methods are typically preferred, their success is often hindered by poor image quality and significant variations in anatomy. Recent years have thus seen an increasing interest in the development of semi-automated segmentation methods that combine computational tools with intuitive, minimal user interaction. In an earlier work, we introduced a highly-automated technique for medical image segmentation, where a 3D extension of the traditional 2D Livewire was proposed. In this paper, we present an enhanced and more powerful 3D Livewire-based segmentation approach with new features designed to primarily enable the handling of complex object topologies that are common in biological structures. The point ordering algorithm we proposed earlier, which automatically pairs up seedpoints in 3D, is improved in this work such that multiple sets of points are allowed to simultaneously exist. Point sets can now be automatically merged and split to accommodate for the presence of concavities, protrusions, and non-spherical topologies. The robustness of the method is further improved by extending the 'turtle algorithm', presented earlier, by using a turtle-path pruning step. Tests on both synthetic and real medical images demonstrate the efficiency, reproducibility, accuracy, and robustness of the proposed approach. Among the examples illustrated is the segmentation of the left and right ventricles from a T1-weighted MRI scan, where an average task time reduction of 84.7% was achieved when compared to a user performing 2D Livewire segmentation on every slice.

  19. Spinal Cord Segmentation by One Dimensional Normalized Template Matching: A Novel, Quantitative Technique to Analyze Advanced Magnetic Resonance Imaging Data.

    PubMed

    Cadotte, Adam; Cadotte, David W; Livne, Micha; Cohen-Adad, Julien; Fleet, David; Mikulis, David; Fehlings, Michael G

    2015-01-01

    Spinal cord segmentation is a developing area of research intended to aid the processing and interpretation of advanced magnetic resonance imaging (MRI). For example, high resolution three-dimensional volumes can be segmented to provide a measurement of spinal cord atrophy. Spinal cord segmentation is difficult due to the variety of MRI contrasts and the variation in human anatomy. In this study we propose a new method of spinal cord segmentation based on one-dimensional template matching and provide several metrics that can be used to compare with other segmentation methods. A set of ground-truth data from 10 subjects was manually-segmented by two different raters. These ground truth data formed the basis of the segmentation algorithm. A user was required to manually initialize the spinal cord center-line on new images, taking less than one minute. Template matching was used to segment the new cord and a refined center line was calculated based on multiple centroids within the segmentation. Arc distances down the spinal cord and cross-sectional areas were calculated. Inter-rater validation was performed by comparing two manual raters (n = 10). Semi-automatic validation was performed by comparing the two manual raters to the semi-automatic method (n = 10). Comparing the semi-automatic method to one of the raters yielded a Dice coefficient of 0.91 +/- 0.02 for ten subjects, a mean distance between spinal cord center lines of 0.32 +/- 0.08 mm, and a Hausdorff distance of 1.82 +/- 0.33 mm. The absolute variation in cross-sectional area was comparable for the semi-automatic method versus manual segmentation when compared to inter-rater manual segmentation. The results demonstrate that this novel segmentation method performs as well as a manual rater for most segmentation metrics. It offers a new approach to study spinal cord disease and to quantitatively track changes within the spinal cord in an individual case and across cohorts of subjects.
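
    A minimal sketch of one-dimensional normalized template matching, the core operation described above, applied to a single intensity profile (sampling profiles perpendicular to the user-initialized centerline is not shown; names are illustrative):

```python
import numpy as np

def normalized_template_match(signal, template):
    """Normalized cross-correlation of a 1D intensity profile against a 1D
    template; returns the correlation at every valid offset and the
    best-matching offset."""
    signal = np.asarray(signal, float)
    template = np.asarray(template, float)
    t = (template - template.mean()) / (template.std() + 1e-12)
    n = len(template)
    scores = np.empty(len(signal) - n + 1)
    for i in range(len(scores)):
        window = signal[i:i + n]
        w = (window - window.mean()) / (window.std() + 1e-12)
        scores[i] = float(np.dot(w, t)) / n
    return scores, int(np.argmax(scores))
```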

  20. Automated area segmentation for ocean bottom surveys

    NASA Astrophysics Data System (ADS)

    Hyland, John C.; Smith, Cheryl M.

    2015-05-01

    In practice, environmental information about an ocean bottom area to be searched using SONAR is often known a priori to some coarse level of resolution. The SONAR search sensor then typically has a different performance characterization function for each environmental classification. Large ocean bottom surveys using search SONAR can pose some difficulties when the environmental conditions vary significantly over the search area because search planning tools cannot adequately segment the area into sub-regions of homogeneous search sensor performance. Such segmentation is critically important to unmanned search vehicles; homogenous bottom segmentation will result in more accurate predictions of search performance and area coverage rate. The Naval Surface Warfare Center, Panama City Division (NSWC PCD) has developed an automated area segmentation algorithm that subdivides the mission area under the constraint that the variation of the search sensor's performance within each sub-mission area cannot exceed a specified threshold, thereby creating sub-regions of homogeneous sensor performance. The algorithm also calculates a new, composite sensor performance function for each sub-mission area. The technique accounts for practical constraints such as enforcing a minimum sub-mission area size and requiring sub-mission areas to be rectangular. Segmentation occurs both across the rows and down the columns of the mission area. Ideally, mission planning should consider both segmentation directions and choose the one with the more favorable result. The Automated Area Segmentation Algorithm was tested using two a priori bottom segmentations: rectangular and triangular; and two search sensor configurations: a set of three bi-modal curves and a set of three uni-modal curves. For each of these four scenarios, the Automated Area Segmentation Algorithm automatically partitioned the mission area across rows and down columns to create regions with homogeneous sensor performance. The
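
    A hedged one-dimensional sketch of the homogeneity constraint described above: contiguous rows are grouped while the spread of their sensor-performance values stays below a threshold. The 2D row-then-column application, the rectangularity constraint, and the composite performance functions are not reproduced, and the parameters are placeholders:

```python
import numpy as np

def segment_rows(performance, max_variation, min_length=2):
    """Greedy split of per-row sensor-performance values into contiguous
    sub-regions whose internal variation (max - min) stays below
    `max_variation`, subject to a minimum sub-region length. Returns a
    list of (start, end) index pairs."""
    performance = np.asarray(performance, float)
    boundaries = [0]
    lo = hi = performance[0]
    for i, value in enumerate(performance[1:], start=1):
        lo, hi = min(lo, value), max(hi, value)
        long_enough = i - boundaries[-1] >= min_length
        if hi - lo > max_variation and long_enough:
            boundaries.append(i)          # start a new homogeneous sub-region
            lo = hi = value
    boundaries.append(len(performance))
    return [(boundaries[k], boundaries[k + 1]) for k in range(len(boundaries) - 1)]
```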

  1. Adaptation of a support vector machine algorithm for segmentation and visualization of retinal structures in volumetric optical coherence tomography data sets

    PubMed Central

    Zawadzki, Robert J.; Fuller, Alfred R.; Wiley, David F.; Hamann, Bernd; Choi, Stacey S.; Werner, John S.

    2008-01-01

    Recent developments in Fourier-domain optical coherence tomography (Fd-OCT) have increased the acquisition speed of current ophthalmic Fd-OCT instruments sufficiently to allow the acquisition of volumetric data sets of human retinas in a clinical setting. The large size and three-dimensional (3D) nature of these data sets require that intelligent data processing, visualization, and analysis tools are used to take full advantage of the available information. Therefore, we have combined methods from volume visualization and data analysis in support of better visualization and diagnosis of Fd-OCT retinal volumes. Custom-designed 3D visualization and analysis software is used to view retinal volumes reconstructed from registered B-scans. We use a support vector machine (SVM) to perform semiautomatic segmentation of retinal layers and structures for subsequent analysis including a comparison of measured layer thicknesses. We have modified the SVM to gracefully handle OCT speckle noise by treating it as a characteristic of the volumetric data. Our software has been tested successfully in clinical settings for its efficacy in assessing 3D retinal structures in healthy as well as diseased cases. Our tool facilitates diagnosis and treatment monitoring of retinal diseases. PMID:17867795
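
    A minimal scikit-learn sketch of the semiautomatic SVM idea: train on voxel feature vectors from user-marked regions, then classify the remaining voxels. The feature set and parameters here are assumptions, and the speckle handling described above is not shown:

```python
import numpy as np
from sklearn import svm

def train_layer_classifier(features, labels):
    """Fit an SVM on labeled voxel feature vectors.

    `features` has shape (n_voxels, n_features), e.g. intensity plus depth
    within the A-scan (placeholder features); `labels` gives the retinal
    layer marked by the user for each training voxel."""
    clf = svm.SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(features, labels)
    return clf

# Hypothetical usage:
# voxel_features = np.column_stack([intensities, depths])   # placeholders
# clf = train_layer_classifier(voxel_features[train_idx], layer_labels[train_idx])
# predicted_layers = clf.predict(voxel_features)
```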

  2. Anatomy of trisomy 18.

    PubMed

    Roberts, Wallisa; Zurada, Anna; Zurada-Zielińska, Agnieszka; Gielecki, Jerzy; Loukas, Marios

    2016-07-01

    Trisomy 18 is the second most common aneuploidy after trisomy 21. Due to its multi-systemic defects, it has a poor prognosis with a 50% chance of survival beyond one week and a <10% chance of survival beyond one year of life. However, this prognosis has been challenged by the introduction of aggressive interventional therapies for patients born with trisomy 18. As a result, a review of the anatomy associated with this defect is imperative. While any of the systems can be affected by trisomy 18, the following areas are the most likely to be affected: craniofacial, musculoskeletal system, cardiac system, abdominal, and nervous system. More specifically, the following features are considered characteristic of trisomy 18: low-set ears, rocker bottom feet, clenched fists, and ventricular septal defect. Of particular interest is the associated cardiac defect, as surgical repairs of these defects have shown an improved survivability. In this article, the anatomical defects associated with each system are reviewed. Clin. Anat. 29:628-632, 2016. © 2016 Wiley Periodicals, Inc.

  3. Penile embryology and anatomy.

    PubMed

    Yiee, Jenny H; Baskin, Laurence S

    2010-06-29

    Knowledge of penile embryology and anatomy is essential to any pediatric urologist in order to fully understand and treat congenital anomalies. Sex differentiation of the external genitalia occurs between the 7th and 17th weeks of gestation. The Y chromosome initiates male differentiation through the SRY gene, which triggers testicular development. Under the influence of androgens produced by the testes, external genitalia then develop into the penis and scrotum. Dorsal nerves supply penile skin sensation and lie within Buck's fascia. These nerves are notably absent at the 12 o'clock position. Perineal nerves supply skin sensation to the ventral shaft skin and frenulum. Cavernosal nerves lie within the corpora cavernosa and are responsible for sexual function. Paired cavernosal, dorsal, and bulbourethral arteries have extensive anastomotic connections. During erection, the cavernosal artery causes engorgement of the cavernosa, while the deep dorsal artery leads to glans enlargement. The majority of venous drainage occurs through a single, deep dorsal vein into which multiple emissary veins from the corpora and circumflex veins from the spongiosum drain. The corpora cavernosa and spongiosum are all made of spongy erectile tissue. Buck's fascia circumferentially envelops all three structures, splitting into two leaves ventrally at the spongiosum. The male urethra is composed of six parts: bladder neck, prostatic, membranous, bulbous, penile, and fossa navicularis. The urethra receives its blood supply from both proximal and distal directions.

  4. Anatomy of an incident

    SciTech Connect

    Cournoyer, Michael E.; Trujillo, Stanley; Lawton, Cindy M.; Land, Whitney M.; Schreiber, Stephen B.

    2016-03-23

    A traditional view of incidents is that they are caused by shortcomings in human competence, attention, or attitude. This view may come under the label of “loss of situational awareness,” procedure “violation,” or “poor” management. A different view is that human error is not the cause of failure, but a symptom of failure – trouble deeper inside the system. In this perspective, human error is not the conclusion, but rather the starting point of investigations. During an investigation, three types of information are gathered: physical, documentary, and human (recall/experience). Through the causal analysis process, an apparent cause or causes are identified as the most probable cause or causes of an incident or condition that management has the control to fix and for which effective recommendations for corrective actions can be generated. A causal analysis identifies relevant human performance factors. In the following presentation, the anatomy of a radiological incident is discussed, and one case study is presented. We analyzed the contributing factors that caused a radiological incident, identifying the underlying conditions, decisions, actions, and inactions that contributed to it. This includes weaknesses that may warrant improvements that tolerate error. Measures that reduce consequences or the likelihood of recurrence are discussed.

  6. Adaptive textural segmentation of medical images

    NASA Astrophysics Data System (ADS)

    Kuklinski, Walter S.; Frost, Gordon S.; MacLaughlin, Thomas

    1992-06-01

    A number of important problems in medical imaging can be described as segmentation problems. Previous fractal-based image segmentation algorithms have used either the local fractal dimension alone or the local fractal dimension together with the corresponding image intensity as features for subsequent pattern recognition algorithms. An image segmentation algorithm has also been reported that uses the local fractal dimension, the image intensity, and the correlation coefficient of the local fractal dimension regression computation to produce a three-dimensional feature space, which is partitioned to classify pixels of dental radiographs as bone, teeth, or a boundary between bone and teeth. In this work we formulate the segmentation process as a configurational optimization problem and discuss the application of simulated annealing optimization methods to this specific problem. The configurational optimization method allows both the degree of correspondence between a candidate segment and an assumed textural model and morphological information about the candidate segment to be used in the segmentation process. Applying this configurational optimization technique with a fractal textural model, however, requires estimating the fractal dimension of an irregularly shaped candidate segment. The potential utility of a discrete Gerchberg-Papoulis bandlimited extrapolation algorithm for estimating the fractal dimension of an irregularly shaped candidate segment is also discussed.
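
    For illustration, the local fractal dimension used as a feature above can be estimated with a simple box-counting regression; the short Python sketch below returns both the dimension estimate and the regression correlation coefficient, the third feature the abstract mentions. The box scales and the synthetic binary patch are assumptions for demonstration, not details taken from the paper.

        import numpy as np

        def box_counting_dimension(patch, scales=(1, 2, 4, 8)):
            """Estimate the box-counting (fractal) dimension of a binary patch.

            The dimension is the negative slope of log(box count) versus log(box size);
            the regression correlation coefficient can serve as an additional feature.
            """
            counts = []
            for s in scales:
                h, w = patch.shape
                hs, ws = h // s * s, w // s * s
                # Partition into s-by-s boxes and count boxes containing any foreground.
                boxes = patch[:hs, :ws].reshape(hs // s, s, ws // s, s)
                counts.append(max(boxes.any(axis=(1, 3)).sum(), 1))
            log_s, log_n = np.log(scales), np.log(counts)
            slope, _ = np.polyfit(log_s, log_n, 1)
            corr = np.corrcoef(log_s, log_n)[0, 1]
            return -slope, corr

        # Example on a noisy binary patch standing in for a candidate segment.
        rng = np.random.default_rng(0)
        patch = rng.random((64, 64)) > 0.7
        dim, corr = box_counting_dimension(patch)
        print(f"estimated dimension {dim:.2f}, regression correlation {corr:.2f}")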

  7. Obscuring surface anatomy in volumetric imaging data.

    PubMed

    Milchenko, Mikhail; Marcus, Daniel

    2013-01-01

    Identifying or sensitive anatomical features in MR and CT images used in research raise patient privacy concerns when such data are shared. In order to protect human subject privacy, we developed a method of anatomical surface modification and investigated the effects of such modification on image statistics and common neuroimaging processing tools. Common approaches to obscuring facial features typically remove large portions of the voxels. The approach described here focuses on blurring the anatomical surface instead, to avoid impinging on areas of interest and hard edges that can confuse processing tools. The algorithm proceeds by extracting a thin boundary layer containing surface anatomy from a region of interest. This layer is then "stretched" and "flattened" to fit into a thin "box" volume. After smoothing along a plane roughly parallel to the anatomy surface, this volume is transformed back onto the boundary layer of the original data. The above method, named normalized anterior filtering, was coded in MATLAB and applied to a number of high-resolution MR and CT scans. To test its effect on automated tools, we compared the output of selected common skull stripping and MR gain field correction methods used on unmodified and obscured data. With this paper, we hope to improve the understanding of the effect of surface deformation approaches on the quality of de-identified data and to provide a useful de-identification tool for MR and CT acquisitions.
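
    As a rough illustration of surface-only obscuring, the sketch below Gaussian-blurs just a thin band of voxels at the anatomy/air boundary of a volume; the air threshold, band thickness, and smoothing strength are assumed values, and the stretch-and-flatten step of normalized anterior filtering is deliberately omitted.

        import numpy as np
        from scipy import ndimage

        def blur_surface_band(volume, air_threshold=300.0, band_voxels=4, sigma=3.0):
            """Blur only a thin band of voxels at the anatomy/air boundary.

            volume        : 3D intensity array (e.g., an MR magnitude image)
            air_threshold : intensities below this are treated as air (assumed value)
            band_voxels   : thickness of the surface band to obscure
            sigma         : strength of the Gaussian smoothing
            """
            head = volume > air_threshold
            depth = ndimage.distance_transform_edt(head)   # voxel distance to nearest air
            band = head & (depth <= band_voxels)
            blurred = ndimage.gaussian_filter(volume.astype(float), sigma=sigma)
            out = volume.astype(float).copy()
            out[band] = blurred[band]                      # interior anatomy is untouched
            return out

        # Toy example: a bright sphere in air stands in for a head scan.
        zz, yy, xx = np.mgrid[:64, :64, :64]
        vol = 1000.0 * (((zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2) < 20 ** 2)
        obscured = blur_surface_band(vol)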

  8. Rigid shape matching by segmentation averaging.

    PubMed

    Wang, Hongzhi; Oliensis, John

    2010-04-01

    We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking.

  9. A Literature Review of Renal Surgical Anatomy and Surgical Strategies for Partial Nephrectomy

    PubMed Central

    Klatte, Tobias; Ficarra, Vincenzo; Gratzke, Christian; Kaouk, Jihad; Kutikov, Alexander; Macchi, Veronica; Mottrie, Alexandre; Porpiglia, Francesco; Porter, James; Rogers, Craig G.; Russo, Paul; Thompson, R. Houston; Uzzo, Robert G.; Wood, Christopher G.; Gill, Inderbir S.

    2016-01-01

    Context: A detailed understanding of renal surgical anatomy is necessary to optimize preoperative planning and operative technique and provide a basis for improved outcomes. Objective: To evaluate the literature regarding pertinent surgical anatomy of the kidney and related structures, nephrometry scoring systems, and current surgical strategies for partial nephrectomy (PN). Evidence acquisition: A literature review was conducted. Evidence synthesis: Surgical renal anatomy fundamentally impacts PN surgery. The renal artery divides into anterior and posterior divisions, from which approximately five segmental terminal arteries originate. The renal veins are not terminal. Variations in the vascular and lymphatic channels are common; thus, concurrent lymphadenectomy is not routinely indicated during PN for cT1 renal masses in the setting of clinically negative lymph nodes. Renal-protocol contrast-enhanced computed tomography or magnetic resonance imaging is used for standard imaging. Anatomy-based nephrometry scoring systems allow standardized academic reporting of tumor characteristics and predict PN outcomes (complications, remnant function, possibly histology). Anatomy-based novel surgical approaches may reduce ischemic time during PN; these include early unclamping, segmental clamping, tumor-specific clamping (zero ischemia), and unclamped PN. Cancer cure after PN relies on complete resection, which can be achieved by thin margins. Post-PN renal function is impacted by kidney quality, remnant quantity, and ischemia type and duration. Conclusions: Surgical renal anatomy underpins imaging, nephrometry scoring systems, and vascular control techniques that reduce global renal ischemia and may impact post-PN function. A contemporary ideal PN excises the tumor with a thin negative margin, delicately secures the tumor bed to maximize vascularized remnant parenchyma, and minimizes global ischemia to the renal remnant with minimal complications. Patient summary: In this report

  10. Contour detection and hierarchical image segmentation.

    PubMed

    Arbeláez, Pablo; Maire, Michael; Fowlkes, Charless; Malik, Jitendra

    2011-05-01

    This paper investigates two fundamental problems in computer vision: contour detection and image segmentation. We present state-of-the-art algorithms for both of these tasks. Our contour detector combines multiple local cues into a globalization framework based on spectral clustering. Our segmentation algorithm consists of generic machinery for transforming the output of any contour detector into a hierarchical region tree. In this manner, we reduce the problem of image segmentation to that of contour detection. Extensive experimental evaluation demonstrates that both our contour detection and segmentation methods significantly outperform competing algorithms. The automatically generated hierarchical segmentations can be interactively refined by user-specified annotations. Computation at multiple image resolutions provides a means of coupling our system to recognition applications.

  11. Automatic Segmentation of Drosophila Neural Compartments Using GAL4 Expression Data Reveals Novel Visual Pathways.

    PubMed

    Panser, Karin; Tirian, Laszlo; Schulze, Florian; Villalba, Santiago; Jefferis, Gregory S X E; Bühler, Katja; Straw, Andrew D

    2016-08-08

    Identifying distinct anatomical structures within the brain and developing genetic tools to target them are fundamental steps for understanding brain function. We hypothesize that enhancer expression patterns can be used to automatically identify functional units such as neuropils and fiber tracts. We used two recent, genome-scale Drosophila GAL4 libraries and associated confocal image datasets to segment large brain regions into smaller subvolumes. Our results (available at https://strawlab.org/braincode) support this hypothesis because regions with well-known anatomy, namely the antennal lobes and central complex, were automatically segmented into familiar compartments. The basis for the structural assignment is clustering of voxels based on patterns of enhancer expression. These initial clusters are agglomerated to make hierarchical predictions of structure. We applied the algorithm to central brain regions receiving input from the optic lobes. Based on the automated segmentation and manual validation, we can identify and provide promising driver lines for 11 previously identified and 14 novel types of visual projection neurons and their associated optic glomeruli. The same strategy can be used in other brain regions and likely other species, including vertebrates.
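
    A minimal sketch of the clustering idea, assuming registered per-driver-line expression volumes flattened into a lines-by-voxels matrix: k-means groups voxels by their expression profile and an agglomerative step merges the initial clusters into larger compartments. The cluster counts, linkage, and placeholder data are illustrative choices, not the braincode pipeline itself.

        import numpy as np
        from sklearn.cluster import AgglomerativeClustering, KMeans

        # Placeholder for registered, intensity-normalized expression patterns:
        # one row per GAL4 driver line, one column per brain voxel.
        rng = np.random.default_rng(1)
        expression = rng.random((50, 4000))
        voxel_profiles = expression.T                  # one expression vector per voxel

        # Step 1: cluster voxels by their pattern of enhancer expression.
        kmeans = KMeans(n_clusters=60, n_init=10, random_state=0).fit(voxel_profiles)
        initial_labels = kmeans.labels_

        # Step 2: agglomerate the initial clusters into a hierarchy of compartments.
        agglo = AgglomerativeClustering(n_clusters=12).fit(kmeans.cluster_centers_)
        compartment_labels = agglo.labels_[initial_labels]   # per-voxel compartment ids
        print(np.bincount(compartment_labels))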

  12. An anatomy precourse enhances student learning in veterinary anatomy.

    PubMed

    McNulty, Margaret A; Stevens-Sparks, Cathryn; Taboada, Joseph; Daniel, Annie; Lazarus, Michelle D

    2016-07-08

    Veterinary anatomy is often a source of trepidation for many students. Currently professional veterinary programs, similar to medical curricula, within the United States have no admission requirements for anatomy as a prerequisite course. The purpose of the current study was to evaluate the impact of a week-long precourse in veterinary anatomy on both objective student performance and subjective student perceptions of the precourse educational methods. Incoming first year veterinary students in the Louisiana State University School of Veterinary Medicine professional curriculum were asked to participate in a free precourse before the start of the semester, covering the musculoskeletal structures of the canine thoracic limb. Students learned the material either via dissection only, instructor-led demonstrations only, or a combination of both techniques. Outcome measures included student performance on examinations throughout the first anatomy course of the professional curriculum as compared with those who did not participate in the precourse. This study found that those who participated in the precourse did significantly better on examinations within the professional anatomy course compared with those who did not participate. Notably, this significant improvement was also identified on the examination where both groups were exposed to the material for the first time together, indicating that exposure to a small portion of veterinary anatomy can impact learning of anatomical structures beyond the immediate scope of the material previously learned. Subjective data evaluation indicated that the precourse was well received and students preferred guided learning via demonstrations in addition to dissection as opposed to either method alone. Anat Sci Educ 9: 344-356. © 2015 American Association of Anatomists.

  13. Discriminative parameter estimation for random walks segmentation.

    PubMed

    Baudin, Pierre-Yves; Goodman, Danny; Kumar, Puneet; Azzabou, Noura; Carlier, Pierre G; Paragios, Nikos; Kumar, M Pawan

    2013-01-01

    The Random Walks (RW) algorithm is one of the most efficient and easy-to-use probabilistic segmentation methods. By combining contrast terms with prior terms, it provides accurate segmentations of medical images in a fully automated manner. However, one of the main drawbacks of using the RW algorithm is that its parameters have to be hand-tuned. We propose a novel discriminative learning framework that estimates the parameters using a training dataset. The main challenge we face is that the training samples are not fully supervised. Specifically, they provide a hard segmentation of the images, instead of a probabilistic segmentation. We overcome this challenge by treating the optimal probabilistic segmentation that is compatible with the given hard segmentation as a latent variable. This allows us to employ the latent support vector machine formulation for parameter estimation. We show that our approach significantly outperforms the baseline methods on a challenging dataset consisting of real clinical 3D MRI volumes of skeletal muscles.
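
    For context, the hand-tuned parameters referred to above also appear in off-the-shelf implementations; the sketch below runs a generic random-walker segmentation with scikit-image, where `beta` (the contrast weighting) is the kind of parameter the paper proposes to learn from training data. The synthetic image and seed placement are made up.

        import numpy as np
        from skimage.segmentation import random_walker

        # Synthetic two-class image: a bright disk on a dark, noisy background.
        rng = np.random.default_rng(2)
        yy, xx = np.mgrid[:128, :128]
        image = (((yy - 64) ** 2 + (xx - 64) ** 2) < 30 ** 2).astype(float)
        image += 0.35 * rng.standard_normal(image.shape)

        # Sparse seeds: 1 = object, 2 = background, 0 = unlabeled.
        seeds = np.zeros(image.shape, dtype=np.uint8)
        seeds[60:68, 60:68] = 1
        seeds[0:8, 0:8] = 2

        # `beta` controls how strongly intensity contrast penalizes the walk;
        # it is typically hand-tuned, which is the drawback the paper addresses.
        labels = random_walker(image, seeds, beta=130, mode='bf')
        print("object pixels:", int((labels == 1).sum()))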

  14. Papilian's anatomy - celebrating six decades.

    PubMed

    Dumitraşcu, Dinu Iuliu; Crivii, Carmen Bianca; Opincariu, Iulian

    2017-01-01

    Victor Papilian was born an artist: during high school he studied music in order to become a violinist in two professional orchestras in Bucharest. Later on he enrolled in the school of medicine, being immediately attracted by anatomy. After graduating with a brilliant dissertation, he became a member of the faculty and continued to teach in his preferred field. His masters, Gh. Marinescu and Victor Babes, proposed him for the position of professor at the newly established Faculty of Medicine of Cluj. Here he reorganized the department radically, created an anatomy museum, and edited the first dissection handbook and the first Romanian anatomy (descriptive and topographic) treatise, both books received with great appreciation. He received the Romanian Academy Prize. His knowledge and skills gained him a well-deserved reputation and he created a prestigious school of anatomy. He published over 250 scientific papers in national and international journals, ranging from morphology to functional, pathological and anthropological topics. He founded the Society of Anthropology, with its own newsletter; he was elected as a member of the French Society of Anatomy. In parallel he had a rich artistic and cultural activity as a writer and playwright: he was president of the Transylvanian Writers' Society, editor of a literary review, director of the Cluj theater and opera, leader of a book club and founder of a symphony orchestra.

  15. [History of anatomy in Lyon].

    PubMed

    Bouchet, A

    1978-06-01

    1. We know very little about the teaching of anatomy during the Middle Ages. Only two authors, Lanfranc and Guy de Chauliac, both of whom came to live in Lyon, wrote on the subject. On the other hand, the important development of printing in Lyon from the sixteenth century onwards made it possible to spread translations of classic works and most of the Renaissance books on anatomy. 2. However, Lyonese anatomy developed very slowly because hospital training was often badly organized. The only true supporter of anatomy was Marc Antoine Petit, chief surgeon of the Hôtel-Dieu before the French Revolution. 3. Apart from the parallel but only transient teaching at the Royal College of Surgery, official teaching had to await its establishment, first by "schools" (a secondary school and a preparatory school) and finally by the Faculty of Medicine created in 1877. Testut and Latarjet contributed to the renown of the Faculty of Medicine through anatomical studies of great value for several generations of students. 4. Recently the Faculty of Medicine has been divided into four "universities". The new buildings are larger. The "gift of corpses" has remedied the shortage of the last twenty years. Anatomical research can be pursued thanks to micro-anatomy and biomechanics, while conventional teaching is complemented by dissection.

  16. Segmentation of Unstructured Datasets

    NASA Technical Reports Server (NTRS)

    Bhat, Smitha

    1996-01-01

    Datasets generated by computer simulations and experiments in Computational Fluid Dynamics tend to be extremely large and complex. It is difficult to visualize these datasets using standard techniques like Volume Rendering and Ray Casting. Object Segmentation provides a technique to extract and quantify regions of interest within these massive datasets. This thesis explores basic algorithms to extract coherent amorphous regions from two-dimensional and three-dimensional scalar unstructured grids. The techniques are applied to datasets from Computational Fluid Dynamics and from Finite Element Analysis.

  17. Modeling segmentation performance in NV-IPM

    NASA Astrophysics Data System (ADS)

    Lies, Micah J.; Jacobs, Eddie L.; Brown, Jeremy B.

    2014-05-01

    Imaging sensors produce images whose primary use is to convey information to human operators. However, their proliferation has resulted in an overload of information. As a result, computational algorithms are being increasingly implemented to simplify an operator's task or to eliminate the human operator altogether. Predicting the effect of algorithms on task performance is currently cumbersome, requiring estimates of an algorithm's effects on blurring and noise and "shoe-horning" of these effects into existing models. With the increasing use of automated algorithms with imaging sensors, a fully integrated approach is desired. While specific implementation algorithms differ, general tasks can be identified that form building blocks of a wide range of possible algorithms. Those tasks are segmentation of objects from the spatio-temporal background, object tracking over time, feature extraction, and transformation of features into human-usable information. In this paper, research is conducted with the purpose of developing a general performance model for segmentation algorithms based on image quality. A database of pristine imagery has been developed in which there is a wide variety of clearly defined regions with respect to shape, size, and inherent contrast. Both synthetic and "natural" images make up the database. Each image is subjected to various amounts of blur and noise. Metrics for the accuracy of segmentation have been developed and measured for each image and segmentation algorithm. Using the computed metric values and the known values of blur and noise, a model of performance for segmentation is being developed. Preliminary results are reported.
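
    A minimal sketch of the measurement loop described, under assumed degradation levels: a synthetic target is blurred, corrupted with noise, segmented by a stand-in Otsu threshold, and scored against the pristine ground truth with a Dice metric. This illustrates the idea only; it is not the NV-IPM model.

        import numpy as np
        from scipy import ndimage
        from skimage.filters import threshold_otsu

        def dice(a, b):
            """Dice overlap between two binary masks."""
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        # Pristine target region of known shape, size, and contrast.
        truth = np.zeros((128, 128), dtype=bool)
        truth[40:90, 40:90] = True
        pristine = truth.astype(float)

        rng = np.random.default_rng(3)
        for blur_sigma in (0.5, 2.0, 4.0):
            for noise_sigma in (0.05, 0.2):
                degraded = ndimage.gaussian_filter(pristine, blur_sigma)
                degraded += noise_sigma * rng.standard_normal(degraded.shape)
                seg = degraded > threshold_otsu(degraded)   # stand-in segmentation algorithm
                print(f"blur={blur_sigma}, noise={noise_sigma}: Dice={dice(seg, truth):.3f}")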

  18. TH-E-BRE-04: An Online Replanning Algorithm for VMAT

    SciTech Connect

    Ahunbay, E; Li, X; Moreau, M

    2014-06-15

    Purpose: To develop a fast replanning algorithm based on segment aperture morphing (SAM) for online replanning of volumetric modulated arc therapy (VMAT) with flattening filtered (FF) and flattening filter free (FFF) beams. Methods: A software tool was developed to interface with a VMAT planning system (Monaco, Elekta), enabling the output of detailed beam/machine parameters of original VMAT plans generated based on planning CTs for FF or FFF beams. A SAM algorithm, previously developed for fixed-beam IMRT, was modified to correct for interfractional variations (e.g., setup error, organ motion and deformation) by morphing apertures based on the geometric relationship between the beam's eye view of the anatomy from the planning CT and that from the daily CT for each control point. The algorithm was tested using daily CTs acquired with an in-room CT scanner during daily IGRT for representative prostate cancer cases, along with their planning CTs. The algorithm allows for restricted MLC leaf travel distance between control points of the VMAT delivery to prevent SAM from increasing leaf travel, and therefore treatment delivery time. Results: The VMAT plans adapted to the daily CT by SAM were found to improve the dosimetry relative to the IGRT repositioning plans for both FF and FFF beams. For the adaptive plans, the changes in leaf travel distance between control points were < 1 cm for 80% of the control points with no restriction. When restricted to the original plans' maximum travel distance, the dosimetric effect was minimal. The adaptive plans were delivered successfully with similar delivery times as the original plans. The execution of the SAM algorithm was < 10 seconds. Conclusion: The SAM algorithm can quickly generate deliverable online-adaptive VMAT plans based on the anatomy of the day for both FF and FFF beams.

  19. Anal anatomy and normal histology.

    PubMed

    Pandey, Priti

    2012-12-01

    The focus of this article is the anatomy and histology of the anal canal, and its clinical relevance to anal cancers. The article also highlights the recent histological and anatomical changes to the traditional terminology of the anal canal. The terminology has been adopted by the American Joint Committee on Cancer, separating the anal region into the anal canal, the perianal region and the skin. This paper describes the gross anatomy of the anal canal, along with its associated blood supply, venous and lymphatic drainage, and nerve supply. The new terminology referred to in this article may assist clinicians and health care providers to identify lesions more precisely through naked eye observation and without the need for instrumentation. Knowledge of the regional anatomy of the anus will also assist in management decisions.

  20. MO-C-17A-11: A Segmentation and Point Matching Enhanced Deformable Image Registration Method for Dose Accumulation Between HDR CT Images

    SciTech Connect

    Zhen, X; Chen, H; Zhou, L; Yan, H; Jiang, S; Jia, X; Gu, X; Mell, L; Yashar, C; Cervino, L

    2014-06-15

    Purpose: To propose and validate a novel and accurate deformable image registration (DIR) scheme to facilitate dose accumulation among treatment fractions of high-dose-rate (HDR) gynecological brachytherapy. Method: We have developed a method to adapt DIR algorithms to gynecologic anatomies with HDR applicators by incorporating a segmentation step and a point-matching step into an existing DIR framework. In the segmentation step, random walks algorithm is used to accurately segment and remove the applicator region (AR) in the HDR CT image. A semi-automatic seed point generation approach is developed to obtain the incremented foreground and background point sets to feed the random walks algorithm. In the subsequent point-matching step, a feature-based thin-plate spline-robust point matching (TPS-RPM) algorithm is employed for AR surface point matching. With the resulting mapping, a DVF characteristic of the deformation between the two AR surfaces is generated by B-spline approximation, which serves as the initial DVF for the following Demons DIR between the two AR-free HDR CT images. Finally, the calculated DVF via Demons combined with the initial one serve as the final DVF to map doses between HDR fractions. Results: The segmentation and registration accuracy are quantitatively assessed by nine clinical HDR cases from three gynecological cancer patients. The quantitative results as well as the visual inspection of the DIR indicate that our proposed method can suppress the interference of the applicator with the DIR algorithm, and accurately register HDR CT images as well as deform and add interfractional HDR doses. Conclusions: We have developed a novel and robust DIR scheme that can perform registration between HDR gynecological CT images and yield accurate registration results. This new DIR scheme has potential for accurate interfractional HDR dose accumulation. This work is supported in part by the National Natural ScienceFoundation of China (no 30970866 and no
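
    A much-simplified sketch of the final registration stage, assuming SimpleITK is available: the applicator region is crudely masked out of both volumes and Demons registration is run between them, following the structure of the standard SimpleITK Demons example. The TPS-RPM-derived initial displacement field described in the abstract is omitted, and the toy volumes and parameters are made up.

        import numpy as np
        import SimpleITK as sitk

        def demons_after_applicator_removal(fixed_arr, moving_arr, fixed_ar, moving_ar):
            """Demons DIR between two volumes with the applicator region (AR) masked out."""
            fixed = fixed_arr.astype(np.float32).copy()
            moving = moving_arr.astype(np.float32).copy()
            fixed[fixed_ar] = fixed[~fixed_ar].mean()      # crude AR removal
            moving[moving_ar] = moving[~moving_ar].mean()

            fixed_img = sitk.GetImageFromArray(fixed)
            moving_img = sitk.GetImageFromArray(moving)

            demons = sitk.DemonsRegistrationFilter()
            demons.SetNumberOfIterations(50)
            demons.SetStandardDeviations(1.5)              # smoothing of the update field
            displacement = demons.Execute(fixed_img, moving_img)

            transform = sitk.DisplacementFieldTransform(displacement)
            warped = sitk.Resample(moving_img, fixed_img, transform, sitk.sitkLinear, 0.0)
            return sitk.GetArrayFromImage(warped), transform

        # Toy volumes: a slightly shifted blob, with a small "applicator" block in each.
        zz, yy, xx = np.mgrid[:48, :48, :48]
        fixed_arr = np.exp(-((zz - 24) ** 2 + (yy - 24) ** 2 + (xx - 24) ** 2) / 60.0)
        moving_arr = np.exp(-((zz - 26) ** 2 + (yy - 24) ** 2 + (xx - 22) ** 2) / 60.0)
        ar = (np.abs(zz - 24) < 3) & (np.abs(yy - 24) < 3) & (np.abs(xx - 24) < 3)
        warped, tfm = demons_after_applicator_removal(fixed_arr, moving_arr, ar, ar)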

  1. A normative spatiotemporal MRI atlas of the fetal brain for automatic segmentation and analysis of early brain growth.

    PubMed

    Gholipour, Ali; Rollins, Caitlin K; Velasco-Annis, Clemente; Ouaalam, Abdelhakim; Akhondi-Asl, Alireza; Afacan, Onur; Ortinau, Cynthia M; Clancy, Sean; Limperopoulos, Catherine; Yang, Edward; Estroff, Judy A; Warfield, Simon K

    2017-03-28

    Longitudinal characterization of early brain growth in-utero has been limited by a number of challenges in fetal imaging, the rapid change in size, shape and volume of the developing brain, and the consequent lack of suitable algorithms for fetal brain image analysis. There is a need for an improved digital brain atlas of the spatiotemporal maturation of the fetal brain extending over the key developmental periods. We have developed an algorithm for construction of an unbiased four-dimensional atlas of the developing fetal brain by integrating symmetric diffeomorphic deformable registration in space with kernel regression in age. We applied this new algorithm to construct a spatiotemporal atlas from MRI of 81 normal fetuses scanned between 19 and 39 weeks of gestation and labeled the structures of the developing brain. We evaluated the use of this atlas and additional individual fetal brain MRI atlases for completely automatic multi-atlas segmentation of fetal brain MRI. The atlas is available online as a reference for anatomy and for registration and segmentation, to aid in connectivity analysis, and for groupwise and longitudinal analysis of early brain growth.
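
    The kernel regression in age amounts to a weighted average of spatially aligned volumes, with weights from a kernel centered on the target gestational age; the sketch below shows only that weighting step, with an assumed Gaussian kernel and bandwidth and with spatial alignment presumed already done.

        import numpy as np

        def age_kernel_average(images, ages_weeks, target_week, bandwidth=1.0):
            """Kernel-regression template at `target_week` from already-aligned images.

            images     : array of shape (n_subjects, *volume_shape), spatially normalized
            ages_weeks : gestational age of each subject in weeks
            bandwidth  : Gaussian kernel width in weeks (assumed value)
            """
            ages = np.asarray(ages_weeks, dtype=float)
            weights = np.exp(-0.5 * ((ages - target_week) / bandwidth) ** 2)
            weights /= weights.sum()
            return np.tensordot(weights, images, axes=(0, 0))

        # Toy example: 81 "aligned volumes" spanning 19-39 weeks of gestation.
        rng = np.random.default_rng(4)
        ages = np.linspace(19, 39, 81)
        images = rng.random((81, 32, 32, 32)) + ages[:, None, None, None] / 40.0
        template_30w = age_kernel_average(images, ages, target_week=30.0)
        print(template_30w.shape)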

  2. [Imaging anatomy of cranial nerves].

    PubMed

    Hermier, M; Leal, P R L; Salaris, S F; Froment, J-C; Sindou, M

    2009-04-01

    Knowledge of the anatomy of the cranial nerves is mandatory for optimal radiological exploration and interpretation of the images in normal and pathological conditions. CT is the method of choice for the study of the skull base and its foramina. MRI explores the cranial nerves and their vascular relationships precisely. Because of their small size, it is essential to obtain images with high spatial resolution. The MRI sequences optimize contrast between nerves and surrounding structures (cerebrospinal fluid, fat, bone structures and vessels). This chapter discusses the radiological anatomy of the cranial nerves.

  3. Anatomy Adventure: A Board Game for Enhancing Understanding of Anatomy

    ERIC Educational Resources Information Center

    Anyanwu, Emeka G.

    2014-01-01

    Certain negative factors such as fear, loss of concentration and interest in the course, lack of confidence, and undue stress have been associated with the study of anatomy. These are factors most often provoked by the unusually large curriculum, nature of the course, and the psychosocial impact of dissection. As a palliative measure, Anatomy…

  4. The Anatomy of Anatomy: A Review for Its Modernization

    ERIC Educational Resources Information Center

    Sugand, Kapil; Abrahams, Peter; Khurana, Ashish

    2010-01-01

    Anatomy has historically been a cornerstone in medical education regardless of nation or specialty. Until recently, dissection and didactic lectures were its sole pedagogy. Teaching methodology has been revolutionized with more reliance on models, imaging, simulation, and the Internet to further consolidate and enhance the learning experience.…

  5. Image Segmentation Using Hierarchical Merge Tree.

    PubMed

    Liu, Ting; Seyedhosseini, Mojtaba; Tasdizen, Tolga

    2016-07-18

    This paper investigates one of the most fundamental computer vision problems: image segmentation. We propose a supervised hierarchical approach to object-independent image segmentation. Starting with over-segmenting superpixels, we use a tree structure to represent the hierarchy of region merging, by which we reduce the problem of segmenting image regions to finding a set of label assignments to tree nodes. We formulate the tree structure as a constrained conditional model to associate region merging with likelihoods predicted using an ensemble boundary classifier. Final segmentations can then be inferred by finding globally optimal solutions to the model efficiently. We also present an iterative training and testing algorithm that generates various tree structures and combines them to emphasize accurate boundaries by segmentation accumulation. Experimental results and comparisons with other recent methods on six public data sets demonstrate that our approach achieves state-of-the-art region accuracy and is competitive in image segmentation without semantic priors.
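
    A minimal sketch of building a merge hierarchy from superpixels: repeatedly merge the pair of adjacent regions with the weakest boundary and record each merge as a new tree node. The SLIC superpixels and the mean-intensity boundary weight below are stand-ins for the paper's trained ensemble boundary classifier.

        import heapq
        import numpy as np
        from skimage.data import camera
        from skimage.segmentation import slic
        from skimage.util import img_as_float

        image = img_as_float(camera())
        labels = slic(image, n_segments=200, compactness=0.1, channel_axis=None)

        def adjacent_pairs(lbl):
            """Pairs of superpixel labels that touch horizontally or vertically."""
            pairs = set()
            for a, b in [(lbl[:, :-1], lbl[:, 1:]), (lbl[:-1, :], lbl[1:, :])]:
                diff = a != b
                pairs.update(zip(a[diff], b[diff]))
            return {(min(p), max(p)) for p in pairs}

        means = {r: image[labels == r].mean() for r in np.unique(labels)}
        parent = {r: r for r in means}                 # union-find over merged regions

        def find(r):
            while parent[r] != r:
                parent[r] = parent[parent[r]]
                r = parent[r]
            return r

        # Boundary weights (here: mean-intensity difference) drive the merge order.
        heap = [(abs(means[a] - means[b]), a, b) for a, b in adjacent_pairs(labels)]
        heapq.heapify(heap)

        merge_tree = []                                # (new_node, child_a, child_b, weight)
        next_node = max(means) + 1
        while heap:
            w, a, b = heapq.heappop(heap)
            ra, rb = find(a), find(b)
            if ra == rb:
                continue
            merge_tree.append((next_node, ra, rb, w))
            parent[next_node] = next_node
            parent[ra] = parent[rb] = next_node
            next_node += 1

        print("merges recorded:", len(merge_tree))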

  6. Microscopic anatomy of brachial plexus branches in Wistar rats.

    PubMed

    Santos, Ana Paula; Suaid, Carla Adelino; Fazan, Valéria Paula Sassoli; Barreira, Amilton Antunes

    2007-05-01

    In the present study, we analyze the morphology and morphometry of the lateral proper digital nerve of the third finger, and of the proximal and distal segments of the ulnar, median, and radial nerves, in Wistar rats 4 or 7 weeks old. The fascicular area and diameter were generally significantly greater in the proximal compared to distal segments and tended to be larger in 7-week-old compared to 4-week-old rats (e.g., median nerve area of 0.13 mm² for the proximal and 0.07 mm² for distal segments in 4-week-old rats, and 0.17 and 0.10 mm², respectively, for the proximal and distal segments of 7-week-old rats). The number of fascicles was significantly greater while the number of myelinated fibers was significantly less in the distal segments (e.g., 1,359 and 509 myelinated fibers, respectively, in the proximal and distal segments of the radial nerve of 4-week-old rats). There was no significant difference in these parameters between the two age groups. The diameter of the myelinated fibers and their respective axons increased from 4 to 7 weeks of age (e.g., myelinated fiber diameter of 4.10 µm in 4-week-old animals and 4.7 µm in the ulnar nerve proximal segment of 7-week-old rats). The g-ratio regression line (axon diameter vs. fiber diameter quotient) was outlined for all the nerves studied here. Differences in myelinated fiber density were detected between the segments of the radial nerve, accompanying the number of myelinated fibers. Detailed knowledge of the microscopic anatomy of rat forelimb nerves provides control data for comparison with studies of experimentally induced neuropathies, which can shed more light on human neuropathies.

  7. Anatomy of the thymus gland.

    PubMed

    Safieddine, Najib; Keshavjee, Shaf

    2011-05-01

    In the case of the thymus gland, the most common indications for resection are myasthenia gravis or thymoma. The consistency and appearance of the thymus gland make it difficult at times to discern from mediastinal fatty tissues. Having a clear understanding of the anatomy and the relationship of the gland to adjacent structures is important.

  8. On the Anatomy of Understanding

    ERIC Educational Resources Information Center

    Wilhelmsson, Niklas; Dahlgren, Lars Owe; Hult, Hakan; Josephson, Anna

    2011-01-01

    In search for the nature of understanding of basic science in a clinical context, eight medical students were interviewed, with a focus on their view of the discipline of anatomy, in their fourth year of study. Interviews were semi-structured and took place just after the students had finished their surgery rotations. Phenomenographic analysis was…

  9. DAGAL: Detailed Anatomy of Galaxies

    NASA Astrophysics Data System (ADS)

    Knapen, Johan H.

    2017-03-01

    The current IAU Symposium is closely connected to the EU-funded network DAGAL (Detailed Anatomy of Galaxies), with the final annual network meeting of DAGAL being at the core of this international symposium. In this short paper, we give an overview of DAGAL, its training activities, and some of the scientific advances that have been made under its umbrella.

  10. Curriculum Guidelines for Microscopic Anatomy.

    ERIC Educational Resources Information Center

    Journal of Dental Education, 1993

    1993-01-01

    The American Association of Dental Schools' guidelines for curricula in microscopic anatomy offer an overview of the histology curriculum, note primary educational goals, outline specific content for general and oral histology, suggest prerequisites, and make recommendations for sequencing. Appropriate faculty and facilities are also suggested.…

  11. Functional Anatomy of the Shoulder

    PubMed Central

    Terry, Glenn C.; Chopp, Thomas M.

    2000-01-01

    Objective: Movements of the human shoulder represent the result of a complex dynamic interplay of structural bony anatomy and biomechanics, static ligamentous and tendinous restraints, and dynamic muscle forces. Injury to 1 or more of these components through overuse or acute trauma disrupts this complex interrelationship and places the shoulder at increased risk. A thorough understanding of the functional anatomy of the shoulder provides the clinician with a foundation for caring for athletes with shoulder injuries. Data Sources: We searched MEDLINE for the years 1980 to 1999, using the key words “shoulder,” “anatomy,” “glenohumeral joint,” “acromioclavicular joint,” “sternoclavicular joint,” “scapulothoracic joint,” and “rotator cuff.” Data Synthesis: We examine human shoulder movement by breaking it down into its structural static and dynamic components. Bony anatomy, including the humerus, scapula, and clavicle, is described, along with the associated articulations, providing the clinician with the structural foundation for understanding how the static ligamentous and dynamic muscle forces exert their effects. Commonly encountered athletic injuries are discussed from an anatomical standpoint. Conclusions/Recommendations: Shoulder injuries represent a significant proportion of athletic injuries seen by the medical provider. A functional understanding of the dynamic interplay of biomechanical forces around the shoulder girdle is necessary and allows for a more structured approach to the treatment of an athlete with a shoulder injury. PMID:16558636

  12. Orbital anatomy for the surgeon.

    PubMed

    Turvey, Timothy A; Golden, Brent A

    2012-11-01

    An anatomic description of the orbit and its contents and the eyelids directed toward surgeons is the focus of this article. The bone and soft tissue anatomic nuances for surgery are highlighted, including a section on osteology, muscles, and the orbital suspensory system. Innervation and vascular anatomy are also addressed.

  13. Anatomy of trisomy 12.

    PubMed

    Roberts, Wallisa; Zurada, Anna; Zurada-ZieliŃSka, Agnieszka; Gielecki, Jerzy; Loukas, Marios

    2016-07-01

    Trisomy 12 is a rare aneuploidy and fetuses with this defect tend to spontaneously abort. However, mosaicism allows this anomaly to manifest itself in live births. Because mosaicism is a common genetic abnormality, trisomy 12 is encountered more frequently than expected, at a rate of 1 in 500 live births. Thus, it is imperative that medical practitioners are aware of this aneuploidy. Moreover, this genetic disorder may result from a complete or partial duplication of chromosome 12. A partial duplication may refer to a specific segment on the chromosome, or one of the arms. On the other hand, a complete duplication refers to duplication of both arms of chromosome 12. The combination of mosaicism and the variable duplication sites has led to variable phenotypes, ranging from a normal phenotype to Potter sequence to gross physical defects of the various organ systems. This article provides a review of the common anatomical variations of the different types of trisomy 12. This review revealed that further documentation is needed for trisomy 12q and complete trisomy 12 to clearly delineate the constellation of anomalies that characterize each genetic defect. Clin. Anat. 29:633-637, 2016. © 2016 Wiley Periodicals, Inc.

  14. Optimal segmentation and packaging process

    DOEpatents

    Kostelnik, K.M.; Meservey, R.H.; Landon, M.D.

    1999-08-10

    A process for improving packaging efficiency uses three-dimensional, computer-simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D&D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer-simulated facility model of the contaminated items is created. The contaminated items are differentiated. The optimal location, orientation, and sequence of the segmentation and packaging of the contaminated items are determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded.
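
    As a toy illustration of constraint-driven packaging optimization (not the patented process), the sketch below packs segmented items into containers with a first-fit-decreasing heuristic under assumed volume and weight limits; the item list is made up.

        from dataclasses import dataclass, field

        @dataclass
        class Container:
            max_volume: float = 2.0      # m^3, assumed container limit
            max_weight: float = 900.0    # kg, assumed container limit
            items: list = field(default_factory=list)

            def fits(self, volume, weight):
                used_v = sum(v for v, _ in self.items)
                used_w = sum(w for _, w in self.items)
                return used_v + volume <= self.max_volume and used_w + weight <= self.max_weight

        def pack(items):
            """First-fit-decreasing packing of (volume, weight) items into containers."""
            containers = []
            for volume, weight in sorted(items, reverse=True):
                for c in containers:
                    if c.fits(volume, weight):
                        c.items.append((volume, weight))
                        break
                else:
                    c = Container()
                    c.items.append((volume, weight))
                    containers.append(c)
            return containers

        # Hypothetical segmented items: (volume in m^3, weight in kg).
        segments = [(0.8, 310.0), (0.6, 240.0), (1.1, 420.0), (0.3, 90.0), (0.9, 350.0), (0.5, 150.0)]
        print(len(pack(segments)), "containers used")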

  15. Accounting for segment correlations in segmented gamma-ray scans

    SciTech Connect

    Sheppard, G.A.; Prettyman, T.H.; Piquette, E.C.

    1994-08-01

    In a typical segmented gamma-ray scanner (SGS), the detector's field of view is collimated so that a complete horizontal slice or segment of the desired thickness is visible. Ordinarily, the collimator is not deep enough to exclude gamma rays emitted from sample volumes above and below the segment aligned with the collimator. This can lead to assay biases, particularly for certain radioactive-material distributions. Another consequence of the collimator's low aspect ratio is that segment assays at the top and bottom of the sample are biased low because the detector's field of view is not filled. This effect is ordinarily countered by placing the sample on a low-Z pedestal and scanning one or more segment thicknesses below and above the sample. This takes extra time, however. We have investigated a number of techniques that both account for correlated segments and correct for end effects in SGS assays. Also, we have developed an algorithm that facilitates estimates of assay precision. Six calculation methods have been compared by evaluating the results of thousands of simulated assays for three types of gamma-ray source distribution and ten masses. We will report on these computational studies and their experimental verification.
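
    One simple way to account for counts spilling between segments is to model the measured segment counts as a response matrix applied to the true per-segment activities and unfold the result; the sketch below uses a made-up tridiagonal response and non-negative least squares, as an illustration rather than any of the six methods compared here.

        import numpy as np
        from scipy.optimize import nnls

        n_seg = 10
        # Assumed response: each segment's view also picks up a fraction of its
        # neighbors because the collimator is not deep enough to exclude them.
        spill = 0.15
        R = np.eye(n_seg) + spill * (np.eye(n_seg, k=1) + np.eye(n_seg, k=-1))

        true_activity = np.zeros(n_seg)
        true_activity[3], true_activity[7] = 5.0, 2.0      # localized sources

        rng = np.random.default_rng(5)
        measured = R @ true_activity + 0.05 * rng.standard_normal(n_seg)

        estimate, _ = nnls(R, measured)    # non-negative unfolding of segment activities
        print(np.round(estimate, 2))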

  16. The future of gross anatomy teaching.

    PubMed

    Malamed, S; Seiden, D

    1995-01-01

    A survey of U.S. departments of anatomy, physiology, and biochemistry shows that 39% of the respondent anatomy departments reported declines in the numbers of graduate students taking the human gross anatomy course. Similarly, 42% of the departments reported decreases in the numbers of graduate students teaching human gross anatomy. These decreases were greater in anatomy than in physiology and in biochemistry. The percentages of departments reporting increases in students taking or teaching their courses were 6% for human gross anatomy and 0% to 19% for physiology and biochemistry courses. To reverse this trend, the establishment of specific programs for the training of gross anatomy teachers is advocated. These new teachers will be available as the need for them is increasingly recognized in the future.

  17. Temporally consistent segmentation of point clouds

    NASA Astrophysics Data System (ADS)

    Owens, Jason L.; Osteen, Philip R.; Daniilidis, Kostas

    2014-06-01

    We consider the problem of generating temporally consistent point cloud segmentations from streaming RGB-D data, where every incoming frame extends existing labels to new points or contributes new labels while maintaining the labels for pre-existing segments. Our approach generates an over-segmentation based on voxel cloud connectivity, where a modified k-means algorithm selects supervoxel seeds and associates similar neighboring voxels to form segments. Given the data stream from a potentially mobile sensor, we solve for the camera transformation between consecutive frames using a joint optimization over point correspondences and image appearance. The aligned point cloud may then be integrated into a consistent model coordinate frame. Previously labeled points are used to mask incoming points from the new frame, while new and previous boundary points extend the existing segmentation. We evaluate the algorithm on newly-generated RGB-D datasets.
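
    A minimal sketch of the supervoxel seeding step: place seed candidates on a coarse grid and assign every point to the nearest seed in a joint spatial/color feature space. The grid spacing and color weighting are assumptions, and the camera alignment and temporal label propagation are not shown.

        import numpy as np
        from scipy.spatial import cKDTree

        def supervoxel_seeds(points, colors, seed_size=0.5, color_weight=0.2):
            """Assign each point to a supervoxel seed in a joint space/color feature.

            points : (N, 3) xyz coordinates in meters
            colors : (N, 3) RGB values in [0, 1]
            """
            # Seed candidates: centers of occupied cells on a coarse grid.
            seed_cells = np.unique(np.floor(points / seed_size).astype(int), axis=0)
            seed_xyz = (seed_cells + 0.5) * seed_size

            # Give each seed the color of its nearest point.
            _, nearest = cKDTree(points).query(seed_xyz)
            seed_feat = np.hstack([seed_xyz, color_weight * colors[nearest]])

            point_feat = np.hstack([points, color_weight * colors])
            _, labels = cKDTree(seed_feat).query(point_feat)
            return labels

        # Toy cloud: two colored blobs.
        rng = np.random.default_rng(6)
        pts = np.vstack([rng.normal(0.0, 0.2, (500, 3)), rng.normal(1.5, 0.2, (500, 3))])
        cols = np.vstack([np.tile([1.0, 0.0, 0.0], (500, 1)), np.tile([0.0, 0.0, 1.0], (500, 1))])
        labels = supervoxel_seeds(pts, cols)
        print("supervoxels used:", len(np.unique(labels)))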

  18. Segmenting images analytically in shape space

    NASA Astrophysics Data System (ADS)

    Rathi, Yogesh; Dambreville, Samuel; Niethammer, Marc; Malcolm, James; Levitt, James; Shenton, Martha E.; Tannenbaum, Allen

    2008-03-01

    This paper presents a novel analytic technique to perform shape-driven segmentation. In our approach, shapes are represented using binary maps, and linear PCA is utilized to provide shape priors for segmentation. Intensity based probability distributions are then employed to convert a given test volume into a binary map representation, and a novel energy functional is proposed whose minimum can be analytically computed to obtain the desired segmentation in the shape space. We compare the proposed method with the log-likelihood based energy to elucidate some key differences. Our algorithm is applied to the segmentation of brain caudate nucleus and hippocampus from MRI data, which is of interest in the study of schizophrenia and Alzheimer's disease. Our validation (we compute the Hausdorff distance and the DICE coefficient between the automatic segmentation and ground-truth) shows that the proposed algorithm is very fast, requires no initialization and outperforms the log-likelihood based energy.
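
    The linear PCA shape prior can be illustrated by training PCA on vectorized binary shape maps and projecting a new, noisy map onto the learned subspace; the synthetic ellipse shapes and the 0.5 reconstruction threshold below are assumptions, and the paper's analytic energy minimization is not reproduced.

        import numpy as np
        from sklearn.decomposition import PCA

        def ellipse_mask(shape, a, b, angle=0.0):
            """Binary ellipse used here as a stand-in training shape."""
            yy, xx = np.mgrid[:shape[0], :shape[1]]
            y, x = yy - shape[0] / 2, xx - shape[1] / 2
            xr = x * np.cos(angle) + y * np.sin(angle)
            yr = -x * np.sin(angle) + y * np.cos(angle)
            return ((xr / a) ** 2 + (yr / b) ** 2 <= 1.0).astype(float)

        # Training set of binary shape maps (e.g., manual segmentations of a structure).
        rng = np.random.default_rng(7)
        shapes = np.array([ellipse_mask((64, 64), 20 + rng.normal(0, 2),
                                        12 + rng.normal(0, 2), rng.normal(0, 0.2))
                           for _ in range(40)])
        pca = PCA(n_components=5).fit(shapes.reshape(len(shapes), -1))   # linear shape space

        # Project a noisy binary map into the shape space and reconstruct it.
        test = ellipse_mask((64, 64), 21, 13, 0.1)
        noisy = np.clip(test + 0.3 * rng.standard_normal(test.shape), 0, 1)
        coeffs = pca.transform(noisy.reshape(1, -1))
        recon = pca.inverse_transform(coeffs).reshape(64, 64) > 0.5
        print("agreement with the clean shape:", (recon == (test > 0.5)).mean())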

  19. A hybrid technique for medical image segmentation.

    PubMed

    Nyma, Alamgir; Kang, Myeongsu; Kwon, Yung-Keun; Kim, Cheol-Hong; Kim, Jong-Myon

    2012-01-01

    Medical image segmentation is an essential and challenging aspect in computer-aided diagnosis and also in pattern recognition research. This paper proposes a hybrid method for magnetic resonance (MR) image segmentation. We first remove impulsive noise inherent in MR images by utilizing a vector median filter. Subsequently, Otsu thresholding is used as an initial coarse segmentation method that finds the homogeneous regions of the input image. Finally, an enhanced suppressed fuzzy c-means is used to partition brain MR images into multiple segments, which employs an optimal suppression factor for the perfect clustering in the given data set. To evaluate the robustness of the proposed approach in noisy environment, we add different types of noise and different amount of noise to T1-weighted brain MR images. Experimental results show that the proposed algorithm outperforms other FCM based algorithms in terms of segmentation accuracy for both noise-free and noise-inserted MR images.
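
    A minimal sketch of the three-stage idea, with a plain median filter, Otsu initialization, and an unsuppressed fuzzy c-means loop standing in for the enhanced suppressed FCM described above; the synthetic image substitutes for a brain MR slice.

        import numpy as np
        from scipy import ndimage
        from skimage.filters import threshold_otsu

        def fcm(intensities, n_clusters=3, m=2.0, n_iter=50, seed=0):
            """Basic fuzzy c-means on a 1D array of pixel intensities."""
            rng = np.random.default_rng(seed)
            u = rng.random((n_clusters, intensities.size))
            u /= u.sum(axis=0)
            for _ in range(n_iter):
                um = u ** m
                centers = (um @ intensities) / um.sum(axis=1)
                dist = np.abs(intensities[None, :] - centers[:, None]) + 1e-9
                u = 1.0 / dist ** (2.0 / (m - 1))
                u /= u.sum(axis=0)
            return u.argmax(axis=0), centers

        # Synthetic noisy "MR slice" with three tissue-like classes.
        rng = np.random.default_rng(8)
        image = np.full((96, 96), 0.2)
        image[20:70, 20:70] = 0.5
        image[35:55, 35:55] = 0.9
        image += 0.08 * rng.standard_normal(image.shape)

        denoised = ndimage.median_filter(image, size=3)       # impulsive-noise removal
        coarse = denoised > threshold_otsu(denoised)          # initial coarse segmentation
        labels, centers = fcm(denoised.ravel())               # final multi-class partition
        segmentation = labels.reshape(image.shape)
        print("class centers:", np.round(np.sort(centers), 2))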

  20. Anatomy of Teaching Anatomy: Do Prosected Cross Sections Improve Students Understanding of Spatial and Radiological Anatomy?

    PubMed Central

    Vithoosan, S.; Kokulan, S.; Dissanayake, M. M.; Dissanayake, Vajira; Jayasekara, Rohan

    2016-01-01

    Introduction. Cadaveric dissections and prosections have traditionally been part of undergraduate medical teaching. Materials and Methods. Hundred and fifty-nine first-year students in the Faculty of Medicine, University of Colombo, were invited to participate in the above study. Students were randomly allocated to two age and gender matched groups. Both groups were exposed to identical series of lectures regarding anatomy of the abdomen and conventional cadaveric prosections of the abdomen. The test group (n = 77, 48.4%) was also exposed to cadaveric cross-sectional slices of the abdomen to which the control group (n = 82, 51.6%) was blinded. At the end of the teaching session both groups were assessed by using their performance in a timed multiple choice question paper as well as ability to identify structures in abdominal CT films. Results. Scores for spatial and radiological anatomy were significantly higher among the test group when compared with the control group (P < 0.05, CI 95%). Majority of the students in both control and test groups agreed that cadaveric cross section may be useful for them to understand spatial and radiological anatomy. Conclusion. Introduction of cadaveric cross-sectional prosections may help students to understand spatial and radiological anatomy better. PMID:27579181

  1. Classic versus millennial medical lab anatomy.

    PubMed

    Benninger, Brion; Matsler, Nik; Delamarter, Taylor

    2014-10-01

    This study investigated the integration, implementation, and use of cadaver dissection, hospital radiology modalities, surgical tools, and AV technology during a 12-week contemporary anatomy course, suggesting a millennial laboratory. The teaching of anatomy has undergone the greatest fluctuation of any of the basic sciences during the past 100 years in order to make room for the meteoric rise in molecular sciences. Classically, anatomy consisted of a 2-year methodical, horizontal anatomy course; anatomy has now morphed into a 12-week accelerated course in a vertical curriculum at most institutions. Surface and radiological anatomy is the language for all clinicians regardless of specialty. The objective of this study was to investigate whether integration of full-body dissection anatomy and modern hospital technology, during the anatomy laboratory, could be accomplished in a 12-week anatomy course. A literature search was conducted on anatomy texts, journals, and websites regarding contemporary hospital technology, integrating multiple imaging media of 37 embalmed cadavers, surgical suite tools and technology, and audio/visual technology. Surgical and radiology professionals were contracted to teach during the anatomy laboratory. The literature search revealed no contemporary studies integrating full-body dissection with hospital technology and behavior. About 37 cadavers were successfully imaged with roentgenograms, CT, and MRI scans. Students were in favor of the dynamic laboratory consisting of multiple activity sessions occurring simultaneously. Objectively, examination scores proved to be a positive outcome and, subjectively, feedback from students was overwhelmingly positive. Despite the surging molecular-based sciences consuming much of the curricula, full-body dissection anatomy is irreplaceable regarding both surface and architectural, radiological anatomy. Radiology should not be a small adjunct to understand full-body dissection, but rather, full-body dissection

  2. Anatomy of a Bird

    NASA Astrophysics Data System (ADS)

    2007-12-01

    Using ESO's Very Large Telescope, an international team of astronomers [1] has discovered a stunning rare case of a triple merger of galaxies. This system, which astronomers have dubbed 'The Bird' - albeit it also bears resemblance with a cosmic Tinker Bell - is composed of two massive spiral galaxies and a third irregular galaxy. [ESO PR Photo 55a/07: The Tinker Bell Triplet] The galaxy ESO 593-IG 008, or IRAS 19115-2124, was previously merely known as an interacting pair of galaxies at a distance of 650 million light-years. But surprises were revealed by observations made with the NACO instrument attached to ESO's VLT, which peered through the all-pervasive dust clouds, using adaptive optics to resolve the finest details [2]. Underneath the chaotic appearance of the optical Hubble images - retrieved from the Hubble Space Telescope archive - the NACO images show two unmistakable galaxies, one a barred spiral while the other is more irregular. The surprise lay in the clear identification of a third, clearly separate component, an irregular, yet fairly massive galaxy that seems to be forming stars at a frantic rate. "Examples of mergers of three galaxies of roughly similar sizes are rare," says Petri Väisänen, lead author of the paper reporting the results. "Only the near-infrared VLT observations made it possible to identify the triple merger nature of the system in this case." Because of the resemblance of the system to a bird, the object was dubbed as such, with the 'head' being the third component, and the 'heart' and 'body' making the two major galaxy nuclei in-between of tidal tails, the 'wings'. The latter extend more than 100,000 light-years, or the size of our own Milky Way. [ESO PR Photo 55b/07: Anatomy of a Bird] Subsequent optical spectroscopy with the new Southern African Large Telescope, and archive mid-infrared data from the NASA Spitzer space observatory, confirmed the separate nature of the 'head', but also added

  3. Computer Aided Segmentation Analysis: New Software for College Admissions Marketing.

    ERIC Educational Resources Information Center

    Lay, Robert S.; Maguire, John J.

    1983-01-01

    Compares segmentation solutions obtained using a binary segmentation algorithm (THAID) and a new chi-square-based procedure (CHAID) that segments the prospective pool of college applicants using application and matriculation as criteria. Results showed a higher number of estimated qualified inquiries and more accurate estimates with CHAID. (JAC)

  4. Iterative Vessel Segmentation of Fundus Images.

    PubMed

    Roychowdhury, Sohini; Koozekanani, Dara D; Parhi, Keshab K

    2015-07-01

    This paper presents a novel unsupervised iterative blood vessel segmentation algorithm using fundus images. First, a vessel enhanced image is generated by tophat reconstruction of the negative green plane image. An initial estimate of the segmented vasculature is extracted by global thresholding the vessel enhanced image. Next, new vessel pixels are identified iteratively by adaptive thresholding of the residual image generated by masking out the existing segmented vessel estimate from the vessel enhanced image. The new vessel pixels are, then, region grown into the existing vessel, thereby resulting in an iterative enhancement of the segmented vessel structure. As the iterations progress, the number of false edge pixels identified as new vessel pixels increases compared to the number of actual vessel pixels. A key contribution of this paper is a novel stopping criterion that terminates the iterative process leading to higher vessel segmentation accuracy. This iterative algorithm is robust to the rate of new vessel pixel addition since it achieves 93.2-95.35% vessel segmentation accuracy with 0.9577-0.9638 area under ROC curve (AUC) on abnormal retinal images from the STARE dataset. The proposed algorithm is computationally efficient and consistent in vessel segmentation performance for retinal images with variations due to pathology, uneven illumination, pigmentation, and fields of view since it achieves a vessel segmentation accuracy of about 95% in an average time of 2.45, 3.95, and 8 s on images from three public datasets DRIVE, STARE, and CHASE_DB1, respectively. Additionally, the proposed algorithm has more than 90% segmentation accuracy for segmenting peripapillary blood vessels in the images from the DRIVE and CHASE_DB1 datasets.
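
    A simplified sketch of the enhancement and thresholding stages described above (top-hat of the inverted green plane, a global threshold, then a few adaptive-threshold, region-growing style refinements); the structuring-element size, thresholds, stopping rule, and the stand-in test image are assumptions, not the paper's values.

        import numpy as np
        from skimage import data, img_as_float, morphology
        from skimage.filters import threshold_otsu

        rgb = img_as_float(data.astronaut())       # stand-in for a fundus photograph
        green_neg = 1.0 - rgb[..., 1]              # vessels are bright in the negative green plane

        # Vessel enhancement: white top-hat suppresses large background structures.
        enhanced = morphology.white_tophat(green_neg, morphology.disk(8))

        # Initial vessel estimate from a global threshold.
        vessels = enhanced > threshold_otsu(enhanced)

        # Refinement: adaptively threshold the residual (enhanced image with the
        # current estimate masked out) and grow the vessel map from its borders.
        for _ in range(3):
            residual = enhanced.copy()
            residual[vessels] = 0.0
            remaining = residual[residual > 0]
            if remaining.size == 0:
                break
            new_pixels = residual > (remaining.mean() + remaining.std())
            grown = morphology.binary_dilation(vessels, morphology.disk(1)) & new_pixels
            if not grown.any():
                break                              # crude stand-in for the stopping criterion
            vessels |= grown

        print("vessel pixel fraction:", vessels.mean())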

  5. Neural network for image segmentation

    NASA Astrophysics Data System (ADS)

    Skourikhine, Alexei N.; Prasad, Lakshman; Schlei, Bernd R.

    2000-10-01

    Image analysis is an important requirement of many artificial intelligence systems. Though great effort has been devoted to inventing efficient algorithms for image analysis, there is still much work to be done. It is natural to turn to mammalian vision systems for guidance because they are the best-known performers of visual tasks. The pulse-coupled neural network (PCNN) model of the cat visual cortex has proven to have interesting properties for image processing. This article describes the PCNN application to the processing of images of heterogeneous materials; specifically, PCNN is applied to image denoising and image segmentation. Our results show that PCNNs do well at segmentation if we perform image smoothing prior to segmentation. We use PCNN for both smoothing and segmentation. Combining smoothing and segmentation enables us to eliminate PCNN sensitivity to the setting of the various PCNN parameters, whose optimal selection can be difficult and can vary even for the same problem. This approach makes image processing based on PCNN more automatic in our application and also results in better segmentation.
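
    A minimal pulse-coupled neural network iteration, following the commonly used simplified PCNN update (feeding, linking, internal activity, and a decaying dynamic threshold); the constants are typical illustrative values, not parameters from the article.

        import numpy as np
        from scipy import ndimage

        def pcnn_segment(image, n_iter=20, beta=0.2, alpha_theta=0.2, v_theta=20.0):
            """Simplified PCNN: a neuron pulses when its internal activity exceeds a
            dynamic threshold that jumps after firing and then decays."""
            F = image.astype(float)                          # feeding input = pixel intensity
            Y = np.zeros_like(F)                             # pulse output
            theta = np.ones_like(F)                          # dynamic threshold
            kernel = np.array([[0.5, 1.0, 0.5],
                               [1.0, 0.0, 1.0],
                               [0.5, 1.0, 0.5]])
            fire_time = np.full(F.shape, n_iter, dtype=int)  # first firing iteration per pixel
            for t in range(n_iter):
                L = ndimage.convolve(Y, kernel, mode='constant')    # linking from neighbor pulses
                U = F * (1.0 + beta * L)                            # internal activity
                Y = (U > theta).astype(float)
                fire_time = np.where((Y > 0) & (fire_time == n_iter), t, fire_time)
                theta = theta * np.exp(-alpha_theta) + v_theta * Y  # decay plus refractory jump
            return fire_time                                        # similar firing times ~ one segment

        rng = np.random.default_rng(9)
        img = np.full((64, 64), 0.3)
        img[16:48, 16:48] = 0.8
        img += 0.05 * rng.standard_normal(img.shape)
        firing = pcnn_segment(img)
        print("distinct firing epochs:", len(np.unique(firing)))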

  6. Example-based segmentation for breast mass images

    NASA Astrophysics Data System (ADS)

    Huang, Qingying; Xu, Songhua; Luo, Xiaonan

    2013-03-01

    A new example-based mass segmentation algorithm is proposed for breast mass images. The training examples used in the new algorithm are prepared by three medical imaging professionals who manually outlined mass contours of 45 sample breast mass images. These manually segmented mass images are then partitioned into small regular grid cells, which are used as reference samples by the algorit